hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
9c2281126ab28b4576140d64757e463483538ea5 | 205 | py | Python | moto/polly/__init__.py | alexsult/moto | ed861ecae1039a048a6350a4ff832ef094cdf2c2 | [
"Apache-2.0"
] | 2 | 2019-07-10T14:44:12.000Z | 2020-06-08T17:26:29.000Z | moto/polly/__init__.py | alexsult/moto | ed861ecae1039a048a6350a4ff832ef094cdf2c2 | [
"Apache-2.0"
] | 5 | 2018-04-25T21:04:20.000Z | 2018-11-02T19:59:27.000Z | moto/polly/__init__.py | alexsult/moto | ed861ecae1039a048a6350a4ff832ef094cdf2c2 | [
"Apache-2.0"
] | 12 | 2017-09-06T22:11:15.000Z | 2021-05-28T17:22:31.000Z | from __future__ import unicode_literals
from .models import polly_backends
from ..core.models import base_decorator
polly_backend = polly_backends['us-east-1']
mock_polly = base_decorator(polly_backends)
| 29.285714 | 43 | 0.839024 | 29 | 205 | 5.517241 | 0.551724 | 0.24375 | 0.225 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005376 | 0.092683 | 205 | 6 | 44 | 34.166667 | 0.854839 | 0 | 0 | 0 | 0 | 0 | 0.043902 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.6 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9c233ed83f435b5e3aa3ae2f24e3b8d975f35762 | 27 | py | Python | src/mask_the_face/__init__.py | sveatlo/MaskTheFace | c98b8eb340181a41441c72bb7f1a9de88f968dbe | [
"MIT"
] | null | null | null | src/mask_the_face/__init__.py | sveatlo/MaskTheFace | c98b8eb340181a41441c72bb7f1a9de88f968dbe | [
"MIT"
] | null | null | null | src/mask_the_face/__init__.py | sveatlo/MaskTheFace | c98b8eb340181a41441c72bb7f1a9de88f968dbe | [
"MIT"
] | null | null | null | from .masker import Masker
| 13.5 | 26 | 0.814815 | 4 | 27 | 5.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 27 | 1 | 27 | 27 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9c47da95e3c5a88d9f6413aa01d7f4095e56fd37 | 44 | py | Python | lowball/builtins/response_class/__init__.py | EmersonElectricCo/lowball | 7cd2e33a2495d83bbcf1ae45cd40493f9576da9c | [
"Apache-2.0"
] | 3 | 2021-05-05T23:47:38.000Z | 2021-05-06T14:44:00.000Z | lowball/builtins/response_class/__init__.py | EmersonElectricCo/lowball | 7cd2e33a2495d83bbcf1ae45cd40493f9576da9c | [
"Apache-2.0"
] | 5 | 2021-06-18T18:28:08.000Z | 2022-01-14T15:47:02.000Z | lowball/builtins/response_class/__init__.py | EmersonElectricCo/lowball | 7cd2e33a2495d83bbcf1ae45cd40493f9576da9c | [
"Apache-2.0"
] | null | null | null | from .response_class import LowballResponse
| 22 | 43 | 0.886364 | 5 | 44 | 7.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 44 | 1 | 44 | 44 | 0.95 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9c5b782fe1371209b77c7584d5f0f6156fdc077d | 337 | py | Python | foxtrot/models/missions/missions/__init__.py | narfman0/foxtrot | ffcf9c4c0e01cda5ca65c4a3dd978a18cf762860 | [
"MIT"
] | null | null | null | foxtrot/models/missions/missions/__init__.py | narfman0/foxtrot | ffcf9c4c0e01cda5ca65c4a3dd978a18cf762860 | [
"MIT"
] | 14 | 2018-08-16T20:37:13.000Z | 2018-09-13T17:07:40.000Z | foxtrot/models/missions/missions/__init__.py | narfman0/foxtrot | ffcf9c4c0e01cda5ca65c4a3dd978a18cf762860 | [
"MIT"
] | null | null | null | from foxtrot.models.missions.missions.awake import AwakeMission
from foxtrot.models.missions.missions.board_craft import BoardCraftMission
from foxtrot.models.missions.missions.buildout import BuildoutMission
from foxtrot.models.missions.missions.debrief import DebriefMission
from foxtrot.models.missions.missions.win import WinMission
| 56.166667 | 74 | 0.881306 | 41 | 337 | 7.219512 | 0.390244 | 0.185811 | 0.287162 | 0.422297 | 0.557432 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.059347 | 337 | 5 | 75 | 67.4 | 0.933754 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9c6fa4ec1583d7dcb077f0a4e995c6a73ab69781 | 3,880 | py | Python | ssht00ls/.legacy/3.14.0/classes/installation/__init__.py | vandenberghinc/ssht00ls | e08081773c8da7dfac0764170bfeacb4bf421ec1 | [
"CNRI-Python"
] | 5 | 2021-02-18T17:46:39.000Z | 2021-12-29T15:48:07.000Z | ssht00ls/.legacy/3.14.0/classes/installation/__init__.py | vandenberghinc/ssht00ls | e08081773c8da7dfac0764170bfeacb4bf421ec1 | [
"CNRI-Python"
] | null | null | null | ssht00ls/.legacy/3.14.0/classes/installation/__init__.py | vandenberghinc/ssht00ls | e08081773c8da7dfac0764170bfeacb4bf421ec1 | [
"CNRI-Python"
] | 2 | 2021-03-19T14:06:20.000Z | 2021-09-26T14:08:34.000Z | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
# imports.
from ssht00ls.classes.config import *
from ssht00ls.classes import utils
# the installation object class.
class Installation(object):
def __init__(self):
a=1
def install(self,
# optional define the user (leave None for current user).
username=None,
):
# initialize.
if username == None: username = syst3m.defaults.vars.user
home = f"{syst3m.defaults.vars.homes}/{username}/"
sudo = True
# users ssh directory.
fp = FilePath(f"{home}.ssh/")
if not fp.exists(sudo=sudo):
fp.create(
directory=True,
permission=700,
owner=username,
group=None,
sudo=sudo,)
else:
fp.permission.set(permission=700, sudo=sudo)
fp.ownership.set(owner=username, group=None, sudo=sudo)
# the ssh config.
fp = FilePath(f"{home}.ssh/config")
if not fp.exists(sudo=sudo):
fp.create(
directory=False,
data="",
permission=644,
owner=username,
group=None,
sudo=sudo,)
else:
fp.permission.set(permission=644, sudo=sudo)
fp.ownership.set(owner=username, group=None, sudo=sudo)
# the ssh known hosts.
fp = FilePath(f"{home}.ssh/known_hosts")
if not fp.exists(sudo=sudo):
fp.create(
directory=False,
data="",
permission=644,
owner=username,
group=None,
sudo=sudo,)
else:
fp.permission.set(permission=644, sudo=sudo)
fp.ownership.set(owner=username, group=None, sudo=sudo)
# authorized keys.
fp = FilePath(f"{home}.ssh/authorized_keys")
if not fp.exists(sudo=sudo):
fp.create(
directory=False,
data="",
permission=600,
owner=username,
group=None,
sudo=sudo,)
else:
fp.permission.set(permission=600, sudo=sudo)
fp.ownership.set(owner=username, group=None, sudo=sudo)
# success.
return r3sponse.success(f"Successfully installed ssh for user [{username}].")
#
def check_installed(self,
# optional define the user (leave None for current user).
username=None,
):
# initialize.
if username == None: username = syst3m.defaults.vars.user
home = f"{syst3m.defaults.vars.homes}/{username}/"
sudo = True
# users ssh directory.
fp = FilePath(f"{home}.ssh/")
if not fp.exists():
return r3sponse.error(f"Required ssh configuration file [{fp.path}] for user [{username}] is not installed.")
else:
fp.permission.set(permission=700, sudo=sudo)
fp.ownership.set(owner=username, group=None, sudo=sudo)
# the ssh config.
fp = FilePath(f"{home}.ssh/config")
if not fp.exists():
return r3sponse.error(f"Required ssh configuration file [{fp.path}] for user [{username}] is not installed.")
else:
fp.permission.set(permission=644, sudo=sudo)
fp.ownership.set(owner=username, group=None, sudo=sudo)
# the ssh known hosts.
fp = FilePath(f"{home}.ssh/known_hosts")
if not fp.exists():
return r3sponse.error(f"Required ssh configuration file [{fp.path}] for user [{username}] is not installed.")
else:
fp.permission.set(permission=644, sudo=sudo)
fp.ownership.set(owner=username, group=None, sudo=sudo)
# authorized keys.
fp = FilePath(f"{home}.ssh/authorized_keys")
if not fp.exists():
return r3sponse.error(f"Required ssh configuration file [{fp.path}] for user [{username}] is not installed.")
else:
fp.permission.set(permission=600, sudo=sudo)
fp.ownership.set(owner=username, group=None, sudo=sudo)
# success.
return r3sponse.success(f"SSH is successfully installed for user [{username}].")
# Initialized objects.
installation = Installation()
"""
# --------------------
# SSH Installation.
# check if ssh is correctly installed.
# (leave the username None to use the current user.)
response = installation.check_installed(username=None)
# install the ssh correctly for the specified user.
if response["error"] != None:
response = installation.install(username=None)
"""
| 26.216216 | 112 | 0.683247 | 527 | 3,880 | 5.011385 | 0.159393 | 0.0727 | 0.045437 | 0.099962 | 0.790231 | 0.790231 | 0.790231 | 0.790231 | 0.790231 | 0.780765 | 0 | 0.016501 | 0.172165 | 3,880 | 147 | 113 | 26.394558 | 0.805729 | 0.105155 | 0 | 0.857143 | 0 | 0 | 0.213483 | 0.056501 | 0 | 0 | 0 | 0 | 0 | 1 | 0.032967 | false | 0 | 0.021978 | 0 | 0.131868 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9c70326d529d086a6b38743760529453f64c547d | 135 | py | Python | examples/automl_freiburg/winner_cv/skeleton/__init__.py | zichuan-scott-xu/automl-workflow | d108e55da943775953b9f1801311a86ac07e58a0 | [
"Apache-2.0"
] | 3 | 2020-04-28T08:00:23.000Z | 2020-12-06T22:10:50.000Z | examples/automl_freiburg/winner_cv/skeleton/__init__.py | zichuan-scott-xu/automl-workflow | d108e55da943775953b9f1801311a86ac07e58a0 | [
"Apache-2.0"
] | 5 | 2021-09-08T02:36:47.000Z | 2022-03-12T01:01:36.000Z | examples/automl_freiburg/winner_cv/skeleton/__init__.py | zichuan-scott-xu/automl-workflow | d108e55da943775953b9f1801311a86ac07e58a0 | [
"Apache-2.0"
] | 4 | 2020-04-17T17:27:09.000Z | 2021-04-26T09:33:15.000Z | # -*- coding: utf-8 -*-
# pylint: disable=wildcard-import
from __future__ import absolute_import
from . import data, nn, optim, utils
| 22.5 | 38 | 0.725926 | 18 | 135 | 5.166667 | 0.777778 | 0.215054 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008696 | 0.148148 | 135 | 5 | 39 | 27 | 0.8 | 0.392593 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
92c3d958a5a9bdd5135d83742154ba45b7633a7d | 41 | py | Python | cvfpscalc/__init__.py | Kazuhito00/cvfpscalc | e5b76681a5b1e807ef4dba2ad15dac6cb22e8036 | [
"MIT"
] | 1 | 2020-02-18T00:54:18.000Z | 2020-02-18T00:54:18.000Z | cvfpscalc/__init__.py | Kazuhito00/cvfpscalc | e5b76681a5b1e807ef4dba2ad15dac6cb22e8036 | [
"MIT"
] | null | null | null | cvfpscalc/__init__.py | Kazuhito00/cvfpscalc | e5b76681a5b1e807ef4dba2ad15dac6cb22e8036 | [
"MIT"
] | null | null | null | from cvfpscalc.cvfpscalc import CvFpsCalc | 41 | 41 | 0.902439 | 5 | 41 | 7.4 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.073171 | 41 | 1 | 41 | 41 | 0.973684 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
92c51752221e277719cf3b1f3a94dd66bba3b128 | 89 | py | Python | banes/_format.py | ramomar/banes | ccd4e83a294d4e6abffbb9c4adc30f05cb986d23 | [
"MIT"
] | null | null | null | banes/_format.py | ramomar/banes | ccd4e83a294d4e6abffbb9c4adc30f05cb986d23 | [
"MIT"
] | null | null | null | banes/_format.py | ramomar/banes | ccd4e83a294d4e6abffbb9c4adc30f05cb986d23 | [
"MIT"
] | null | null | null | import re
def remove_extra_whitespaces(string):
return re.sub(r'\s+', ' ', string)
| 14.833333 | 38 | 0.674157 | 13 | 89 | 4.461538 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.168539 | 89 | 5 | 39 | 17.8 | 0.783784 | 0 | 0 | 0 | 0 | 0 | 0.044944 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
92d5716becdbee67dd1fe8fd587916b08a50074b | 520 | py | Python | astroNN/gaia/__init__.py | igomezv/astroNN | 50af116f9cbfc684b63e7ddcf8829343a455722b | [
"MIT"
] | 156 | 2017-10-22T01:29:10.000Z | 2022-03-14T10:28:09.000Z | astroNN/gaia/__init__.py | AbdulfattahBaalawi/astroNN | 0b970dd1a8d4d5e6d611ffa52cfd3c2ffdcb4643 | [
"MIT"
] | 16 | 2017-11-02T21:29:28.000Z | 2022-03-14T08:40:41.000Z | astroNN/gaia/__init__.py | AbdulfattahBaalawi/astroNN | 0b970dd1a8d4d5e6d611ffa52cfd3c2ffdcb4643 | [
"MIT"
] | 46 | 2017-11-01T18:56:03.000Z | 2022-03-07T06:44:22.000Z | from astroNN.gaia.downloader import anderson_2017_parallax, gaiadr2_parallax
from astroNN.gaia.downloader import tgas, gaia_source
from astroNN.gaia.downloader import tgas_load
from astroNN.gaia.gaia_shared import gaia_default_dr, gaia_env
from astroNN.gaia.gaia_shared import mag_to_absmag, mag_to_fakemag, absmag_to_pc, fakemag_to_absmag, absmag_to_fakemag, \
fakemag_to_pc, fakemag_to_logsol, absmag_to_logsol, logsol_to_fakemag, logsol_to_absmag, extinction_correction, \
fakemag_to_parallax, fakemag_to_mag
| 65 | 121 | 0.861538 | 80 | 520 | 5.1625 | 0.3 | 0.133172 | 0.181598 | 0.181598 | 0.394673 | 0.319613 | 0 | 0 | 0 | 0 | 0 | 0.010526 | 0.086538 | 520 | 7 | 122 | 74.285714 | 0.858947 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.714286 | 0 | 0.714286 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
92da546c6b801629daae557b87b864fe1241484e | 15,523 | py | Python | pcdet/datasets/augmentor/augmentor_utils.py | twn29004/OpenPCDet | 3457cc30b21d882a1376ef272fbaa49755c72a2e | [
"Apache-2.0"
] | null | null | null | pcdet/datasets/augmentor/augmentor_utils.py | twn29004/OpenPCDet | 3457cc30b21d882a1376ef272fbaa49755c72a2e | [
"Apache-2.0"
] | null | null | null | pcdet/datasets/augmentor/augmentor_utils.py | twn29004/OpenPCDet | 3457cc30b21d882a1376ef272fbaa49755c72a2e | [
"Apache-2.0"
] | null | null | null | import numpy as np
import math
import copy
from ...utils import common_utils
def random_flip_along_x(gt_boxes, points):
"""
Args:
gt_boxes: (N, 7 + C), [x, y, z, dx, dy, dz, heading, [vx], [vy]]
points: (M, 3 + C)
Returns:
"""
enable = np.random.choice([False, True], replace=False, p=[0.5, 0.5])
if enable:
gt_boxes[:, 1] = -gt_boxes[:, 1]
gt_boxes[:, 6] = -gt_boxes[:, 6]
points[:, 1] = -points[:, 1]
if gt_boxes.shape[1] > 7:
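            # when the boxes carry a velocity, (vx, vy) live in columns 7:9;
            # flipping about the x axis negates the y component (column 8)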
gt_boxes[:, 8] = -gt_boxes[:, 8]
return gt_boxes, points
def random_flip_along_y(gt_boxes, points):
"""
Args:
gt_boxes: (N, 7 + C), [x, y, z, dx, dy, dz, heading, [vx], [vy]]
points: (M, 3 + C)
Returns:
"""
enable = np.random.choice([False, True], replace=False, p=[0.5, 0.5])
if enable:
gt_boxes[:, 0] = -gt_boxes[:, 0]
gt_boxes[:, 6] = -(gt_boxes[:, 6] + np.pi)
points[:, 0] = -points[:, 0]
if gt_boxes.shape[1] > 7:
gt_boxes[:, 7] = -gt_boxes[:, 7]
return gt_boxes, points
def global_rotation(gt_boxes, points, rot_range):
"""
Args:
gt_boxes: (N, 7 + C), [x, y, z, dx, dy, dz, heading, [vx], [vy]]
points: (M, 3 + C),
rot_range: [min, max]
Returns:
"""
noise_rotation = np.random.uniform(rot_range[0], rot_range[1])
points = common_utils.rotate_points_along_z(points[np.newaxis, :, :], np.array([noise_rotation]))[0]
gt_boxes[:, 0:3] = common_utils.rotate_points_along_z(gt_boxes[np.newaxis, :, 0:3], np.array([noise_rotation]))[0]
gt_boxes[:, 6] += noise_rotation
if gt_boxes.shape[1] > 7:
gt_boxes[:, 7:9] = common_utils.rotate_points_along_z(
np.hstack((gt_boxes[:, 7:9], np.zeros((gt_boxes.shape[0], 1))))[np.newaxis, :, :],
np.array([noise_rotation])
)[0][:, 0:2]
return gt_boxes, points
def global_scaling(gt_boxes, points, scale_range):
"""
Args:
gt_boxes: (N, 7), [x, y, z, dx, dy, dz, heading]
points: (M, 3 + C),
scale_range: [min, max]
Returns:
"""
if scale_range[1] - scale_range[0] < 1e-3:
return gt_boxes, points
noise_scale = np.random.uniform(scale_range[0], scale_range[1])
points[:, :3] *= noise_scale
gt_boxes[:, :6] *= noise_scale
return gt_boxes, points
def random_image_flip_horizontal(image, depth_map, gt_boxes, calib):
"""
Performs random horizontal flip augmentation
Args:
image: (H_image, W_image, 3), Image
depth_map: (H_depth, W_depth), Depth map
gt_boxes: (N, 7), 3D box labels in LiDAR coordinates [x, y, z, w, l, h, ry]
calib: calibration.Calibration, Calibration object
Returns:
aug_image: (H_image, W_image, 3), Augmented image
aug_depth_map: (H_depth, W_depth), Augmented depth map
aug_gt_boxes: (N, 7), Augmented 3D box labels in LiDAR coordinates [x, y, z, w, l, h, ry]
"""
# Randomly augment with 50% chance
enable = np.random.choice([False, True], replace=False, p=[0.5, 0.5])
if enable:
# Flip images
aug_image = np.fliplr(image)
aug_depth_map = np.fliplr(depth_map)
# Flip 3D gt_boxes by flipping the centroids in image space
aug_gt_boxes = copy.copy(gt_boxes)
locations = aug_gt_boxes[:, :3]
img_pts, img_depth = calib.lidar_to_img(locations)
W = image.shape[1]
img_pts[:, 0] = W - img_pts[:, 0]
pts_rect = calib.img_to_rect(u=img_pts[:, 0], v=img_pts[:, 1], depth_rect=img_depth)
pts_lidar = calib.rect_to_lidar(pts_rect)
aug_gt_boxes[:, :3] = pts_lidar
aug_gt_boxes[:, 6] = -1 * aug_gt_boxes[:, 6]
else:
aug_image = image
aug_depth_map = depth_map
aug_gt_boxes = gt_boxes
return aug_image, aug_depth_map, aug_gt_boxes
def random_translation_along_x(gt_boxes, points, offset_range):
"""
Args:
gt_boxes: (N, 7), [x, y, z, dx, dy, dz, heading, [vx], [vy]]
points: (M, 3 + C),
offset_range: [min max]]
Returns:
"""
offset = np.random.uniform(offset_range[0], offset_range[1])
points[:, 0] += offset
gt_boxes[:, 0] += offset
# if gt_boxes.shape[1] > 7:
# gt_boxes[:, 7] += offset
return gt_boxes, points
def random_translation_along_y(gt_boxes, points, offset_range):
"""
Args:
gt_boxes: (N, 7), [x, y, z, dx, dy, dz, heading, [vx], [vy]]
points: (M, 3 + C),
offset_range: [min max]]
Returns:
"""
offset = np.random.uniform(offset_range[0], offset_range[1])
points[:, 1] += offset
gt_boxes[:, 1] += offset
# if gt_boxes.shape[1] > 8:
# gt_boxes[:, 8] += offset
return gt_boxes, points
def random_translation_along_z(gt_boxes, points, offset_range):
"""
Args:
gt_boxes: (N, 7), [x, y, z, dx, dy, dz, heading, [vx], [vy]]
points: (M, 3 + C),
offset_range: [min max]]
Returns:
"""
offset = np.random.uniform(offset_range[0], offset_range[1])
points[:, 2] += offset
gt_boxes[:, 2] += offset
return gt_boxes, points
def random_local_translation_along_x(gt_boxes, points, offset_range):
"""
Args:
gt_boxes: (N, 7), [x, y, z, dx, dy, dz, heading, [vx], [vy]]
points: (M, 3 + C),
offset_range: [min max]]
Returns:
"""
# augs = {}
for idx, box in enumerate(gt_boxes):
offset = np.random.uniform(offset_range[0], offset_range[1])
# augs[f'object_{idx}'] = offset
points_in_box, mask = get_points_in_box(points, box)
points[mask, 0] += offset
gt_boxes[idx, 0] += offset
# if gt_boxes.shape[1] > 7:
# gt_boxes[idx, 7] += offset
return gt_boxes, points
def random_local_translation_along_y(gt_boxes, points, offset_range):
"""
Args:
gt_boxes: (N, 7), [x, y, z, dx, dy, dz, heading, [vx], [vy]]
points: (M, 3 + C),
offset_range: [min max]]
Returns:
"""
# augs = {}
for idx, box in enumerate(gt_boxes):
offset = np.random.uniform(offset_range[0], offset_range[1])
# augs[f'object_{idx}'] = offset
points_in_box, mask = get_points_in_box(points, box)
points[mask, 1] += offset
gt_boxes[idx, 1] += offset
# if gt_boxes.shape[1] > 8:
# gt_boxes[idx, 8] += offset
return gt_boxes, points
def random_local_translation_along_z(gt_boxes, points, offset_range):
"""
Args:
gt_boxes: (N, 7), [x, y, z, dx, dy, dz, heading, [vx], [vy]]
points: (M, 3 + C),
offset_range: [min max]]
Returns:
"""
# augs = {}
for idx, box in enumerate(gt_boxes):
offset = np.random.uniform(offset_range[0], offset_range[1])
# augs[f'object_{idx}'] = offset
points_in_box, mask = get_points_in_box(points, box)
points[mask, 2] += offset
gt_boxes[idx, 2] += offset
return gt_boxes, points
def global_frustum_dropout_top(gt_boxes, points, intensity_range):
"""
Args:
gt_boxes: (N, 7), [x, y, z, dx, dy, dz, heading, [vx], [vy]],
points: (M, 3 + C),
intensity: [min, max]
Returns:
"""
intensity = np.random.uniform(intensity_range[0], intensity_range[1])
threshold = np.max(points[:, 2]) - intensity * (np.max(points[:, 2]) - np.min(points[:, 2]))
points = points[points[:,2] < threshold]
gt_boxes = gt_boxes[gt_boxes[:,2] < threshold]
return gt_boxes, points
def global_frustum_dropout_bottom(gt_boxes, points, intensity_range):
"""
Args:
gt_boxes: (N, 7), [x, y, z, dx, dy, dz, heading, [vx], [vy]],
points: (M, 3 + C),
intensity: [min, max]
Returns:
"""
intensity = np.random.uniform(intensity_range[0], intensity_range[1])
threshold = np.min(points[:, 2]) + intensity * (np.max(points[:, 2]) - np.min(points[:, 2]))
points = points[points[:,2] > threshold]
gt_boxes = gt_boxes[gt_boxes[:,2] > threshold]
return gt_boxes, points
def global_frustum_dropout_left(gt_boxes, points, intensity_range):
"""
Args:
gt_boxes: (N, 7), [x, y, z, dx, dy, dz, heading, [vx], [vy]],
points: (M, 3 + C),
intensity: [min, max]
Returns:
"""
intensity = np.random.uniform(intensity_range[0], intensity_range[1])
threshold = np.max(points[:, 1]) - intensity * (np.max(points[:, 1]) - np.min(points[:, 1]))
points = points[points[:,1] < threshold]
gt_boxes = gt_boxes[gt_boxes[:,1] < threshold]
return gt_boxes, points
def global_frustum_dropout_right(gt_boxes, points, intensity_range):
"""
Args:
gt_boxes: (N, 7), [x, y, z, dx, dy, dz, heading, [vx], [vy]],
points: (M, 3 + C),
intensity: [min, max]
Returns:
"""
intensity = np.random.uniform(intensity_range[0], intensity_range[1])
threshold = np.min(points[:, 1]) + intensity * (np.max(points[:, 1]) - np.min(points[:, 1]))
points = points[points[:,1] > threshold]
gt_boxes = gt_boxes[gt_boxes[:,1] > threshold]
return gt_boxes, points
def local_scaling(gt_boxes, points, scale_range):
"""
Args:
gt_boxes: (N, 7), [x, y, z, dx, dy, dz, heading]
points: (M, 3 + C),
scale_range: [min, max]
Returns:
"""
if scale_range[1] - scale_range[0] < 1e-3:
return gt_boxes, points
# augs = {}
for idx, box in enumerate(gt_boxes):
noise_scale = np.random.uniform(scale_range[0], scale_range[1])
# augs[f'object_{idx}'] = noise_scale
points_in_box, mask = get_points_in_box(points, box)
# tranlation to axis center
points[mask, 0] -= box[0]
points[mask, 1] -= box[1]
points[mask, 2] -= box[2]
# apply scaling
points[mask, :3] *= noise_scale
# tranlation back to original position
points[mask, 0] += box[0]
points[mask, 1] += box[1]
points[mask, 2] += box[2]
gt_boxes[idx, 3:6] *= noise_scale
return gt_boxes, points
def local_rotation(gt_boxes, points, rot_range):
"""
Args:
gt_boxes: (N, 7), [x, y, z, dx, dy, dz, heading, [vx], [vy]]
points: (M, 3 + C),
rot_range: [min, max]
Returns:
"""
# augs = {}
for idx, box in enumerate(gt_boxes):
noise_rotation = np.random.uniform(rot_range[0], rot_range[1])
# augs[f'object_{idx}'] = noise_rotation
points_in_box, mask = get_points_in_box(points, box)
centroid_x = box[0]
centroid_y = box[1]
centroid_z = box[2]
# tranlation to axis center
points[mask, 0] -= centroid_x
points[mask, 1] -= centroid_y
points[mask, 2] -= centroid_z
box[0] -= centroid_x
box[1] -= centroid_y
box[2] -= centroid_z
# apply rotation
points[mask, :] = common_utils.rotate_points_along_z(points[np.newaxis, mask, :], np.array([noise_rotation]))[0]
box[0:3] = common_utils.rotate_points_along_z(box[np.newaxis, np.newaxis, 0:3], np.array([noise_rotation]))[0][0]
# tranlation back to original position
points[mask, 0] += centroid_x
points[mask, 1] += centroid_y
points[mask, 2] += centroid_z
box[0] += centroid_x
box[1] += centroid_y
box[2] += centroid_z
gt_boxes[idx, 6] += noise_rotation
        if gt_boxes.shape[1] > 8:
            # rotate the per-box (vx, vy) velocity as well; pad a zero
            # z-component so the single 2D vector can reuse the 3D rotation
            # (the original hstack of a (2,) vector with an (N, 1) array
            # raised a ValueError)
            gt_boxes[idx, 7:9] = common_utils.rotate_points_along_z(
                np.hstack((gt_boxes[idx, 7:9], np.zeros(1)))[np.newaxis, np.newaxis, :],
                np.array([noise_rotation])
            )[0][0, 0:2]
return gt_boxes, points
def local_frustum_dropout_top(gt_boxes, points, intensity_range):
"""
Args:
gt_boxes: (N, 7), [x, y, z, dx, dy, dz, heading, [vx], [vy]],
points: (M, 3 + C),
intensity: [min, max]
Returns:
"""
for idx, box in enumerate(gt_boxes):
x, y, z, dx, dy, dz = box[0], box[1], box[2], box[3], box[4], box[5]
intensity = np.random.uniform(intensity_range[0], intensity_range[1])
points_in_box, mask = get_points_in_box(points, box)
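        # the threshold sits `intensity` of the way down from the box top, so
        # the top fraction of the in-box points is dropped below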
threshold = (z + dz/2) - intensity * dz
points = points[np.logical_not(np.logical_and(mask, points[:,2] >= threshold))]
return gt_boxes, points
def local_frustum_dropout_bottom(gt_boxes, points, intensity_range):
"""
Args:
gt_boxes: (N, 7), [x, y, z, dx, dy, dz, heading, [vx], [vy]],
points: (M, 3 + C),
intensity: [min, max]
Returns:
"""
for idx, box in enumerate(gt_boxes):
x, y, z, dx, dy, dz = box[0], box[1], box[2], box[3], box[4], box[5]
intensity = np.random.uniform(intensity_range[0], intensity_range[1])
points_in_box, mask = get_points_in_box(points, box)
threshold = (z - dz/2) + intensity * dz
points = points[np.logical_not(np.logical_and(mask, points[:,2] <= threshold))]
return gt_boxes, points
def local_frustum_dropout_left(gt_boxes, points, intensity_range):
"""
Args:
gt_boxes: (N, 7), [x, y, z, dx, dy, dz, heading, [vx], [vy]],
points: (M, 3 + C),
intensity: [min, max]
Returns:
"""
for idx, box in enumerate(gt_boxes):
x, y, z, dx, dy, dz = box[0], box[1], box[2], box[3], box[4], box[5]
intensity = np.random.uniform(intensity_range[0], intensity_range[1])
points_in_box, mask = get_points_in_box(points, box)
threshold = (y + dy/2) - intensity * dy
points = points[np.logical_not(np.logical_and(mask, points[:,1] >= threshold))]
return gt_boxes, points
def local_frustum_dropout_right(gt_boxes, points, intensity_range):
"""
Args:
gt_boxes: (N, 7), [x, y, z, dx, dy, dz, heading, [vx], [vy]],
points: (M, 3 + C),
intensity: [min, max]
Returns:
"""
for idx, box in enumerate(gt_boxes):
x, y, z, dx, dy, dz = box[0], box[1], box[2], box[3], box[4], box[5]
intensity = np.random.uniform(intensity_range[0], intensity_range[1])
points_in_box, mask = get_points_in_box(points, box)
threshold = (y - dy/2) + intensity * dy
points = points[np.logical_not(np.logical_and(mask, points[:,1] <= threshold))]
return gt_boxes, points
def get_points_in_box(points, gt_box):
x, y, z = points[:,0], points[:,1], points[:,2]
cx, cy, cz = gt_box[0], gt_box[1], gt_box[2]
dx, dy, dz, rz = gt_box[3], gt_box[4], gt_box[5], gt_box[6]
shift_x, shift_y, shift_z = x - cx, y - cy, z - cz
MARGIN = 1e-1
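    # rotate the shifted points by -rz into the box frame, then test the
    # local coordinates against the box half-extents (with a small margin)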
cosa, sina = math.cos(-rz), math.sin(-rz)
local_x = shift_x * cosa + shift_y * (-sina)
local_y = shift_x * sina + shift_y * cosa
    mask = np.logical_and(abs(shift_z) <= dz / 2.0,
                          np.logical_and(abs(local_x) <= dx / 2.0 + MARGIN,
                                         abs(local_y) <= dy / 2.0 + MARGIN))
points = points[mask]
return points, mask
| 32.957537 | 122 | 0.557753 | 2,206 | 15,523 | 3.718495 | 0.066636 | 0.117762 | 0.066561 | 0.014629 | 0.835548 | 0.819944 | 0.797757 | 0.782519 | 0.747897 | 0.706571 | 0 | 0.02691 | 0.286607 | 15,523 | 470 | 123 | 33.02766 | 0.713834 | 0.241641 | 0 | 0.370192 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.105769 | false | 0 | 0.019231 | 0 | 0.240385 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1304e45316ee90f61e0d1786521147b14684f2fd | 16,613 | py | Python | tests/test_transformers.py | pydgrid/pydgrid | c56073c385f42883c79333533f7cfb8383a173aa | [
"MIT"
] | 15 | 2019-01-29T08:22:39.000Z | 2022-01-13T20:41:32.000Z | tests/test_transformers.py | pydgrid/pydgrid | c56073c385f42883c79333533f7cfb8383a173aa | [
"MIT"
] | 1 | 2017-11-28T21:34:52.000Z | 2017-11-28T21:34:52.000Z | tests/test_transformers.py | pydgrid/pydgrid | c56073c385f42883c79333533f7cfb8383a173aa | [
"MIT"
] | 4 | 2018-02-15T02:12:47.000Z | 2020-02-16T17:52:15.000Z | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Sat Sep 9 23:22:02 2017
@author: jmmauricio
"""
from pydgrid import grid
from pydgrid.pf import pf_eval,time_serie
from pydgrid.electric import bess_vsc, bess_vsc_eval
from pydgrid.simu import simu, f_eval, ini_eval, run_eval
import matplotlib.pyplot as plt
import numpy as np
import time
def test_trafos_OC():
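    # each tuple: (vector group, which winding is grounded / 3-wire,
    # expected primary-minus-secondary phase shift in degrees)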
trafos_list = [('Dyn11','gnd_k', -30),
('Dyn1','gnd_k', 30),
('Dyg11_3w','3wires',-30),
('Ynd11','gnd_j',-30),
#('Ygd11_3w','3wires',-30),
#('Ygd5_3w','3wires',-60),
('Yy_3wires','3wires',0)]
U_j_n = 20.0 # kV
U_k_n = 0.42 # kV
S_n = 630.0 # kVA
V_j_n = U_j_n/np.sqrt(3)
for connection, gnd, phase in trafos_list:
if gnd == 'gnd_k':
data = {
"buses":[{"bus": "Bus_1", "pos_x": 10.0, "pos_y": 0.0, "units": "m", "U_kV":U_j_n},
{"bus": "Bus_2", "pos_x": 15.0, "pos_y": 0.0, "units": "m", "U_kV":U_k_n}],
"transformers":[{"bus_j": "Bus_1", "bus_k": "Bus_2", "S_n_kVA": S_n, "U_j_kV":U_j_n, "U_k_kV":U_k_n,
"R_cc_pu": 0.01, "X_cc_pu":0.04, "connection": connection, "conductors_j": 3, "conductors_k": 4}],
"grid_formers":[{"bus": "Bus_1","bus_nodes": [1, 2, 3],"kV": [V_j_n,V_j_n, V_j_n], "deg": [0, 120, 240.0]}],
"shunts":[{"bus": "Bus_2" , "R": 0.00001, "X": 0.0, "bus_nodes": [4,0]}]}
if gnd == '3wires':
data = {
"buses":[{"bus": "Bus_1", "pos_x": 10.0, "pos_y": 0.0, "units": "m", "U_kV":U_j_n},
{"bus": "Bus_2", "pos_x": 15.0, "pos_y": 0.0, "units": "m", "U_kV":U_k_n}],
"transformers":[{"bus_j": "Bus_1", "bus_k": "Bus_2", "S_n_kVA": S_n, "U_j_kV":U_j_n, "U_k_kV":U_k_n,
"R_cc_pu": 0.01, "X_cc_pu":0.04, "connection": connection, "conductors_j": 3, "conductors_k": 3}],
"grid_formers":[{"bus": "Bus_1","bus_nodes": [1, 2, 3],"kV": [V_j_n,V_j_n, V_j_n], "deg": [0, 120, 240.0]}],
"shunts":[{"bus": "Bus_2" , "R": 100.0e6, "X": 0.0, "bus_nodes": [1,0]}]
}
# pydgrid calculation
sys1 = grid()
sys1.read(data) # Load data
sys1.pf() # solve power flow
# positive sequence calculation
r_t_theoretic = U_j_n/U_k_n
r_t_model = data["buses"][0]["v_ab"]/data["buses"][1]["v_ab"]
error_rt = r_t_theoretic - r_t_model
assert abs(error_rt)<0.001
phase_shift_theoretic = phase
phase_shift_model = data["buses"][0]["deg_an"] - data["buses"][1]["deg_an"]
error_angle = phase_shift_theoretic - phase_shift_model
assert abs(error_angle)<0.001
def test_trafos_SC():
trafos_list = [('Dyn11','gnd_k', -30),
('Dyn1','gnd_k', 30),
('Dyg11_3w','3wires',-30),
('Ynd11','gnd_j',-30),
('Ygd11_3w','3wires',-30),
#('Ygd5_3w','3wires',-60),
('Yy_3wires','3wires',0)
]
U_j_n = 20.0 # kV
U_k_n = 0.42 # kV
S_n = 630.0 # kVA
R_cc_pu = 0.01
X_cc_pu = 0.04
I_nom = S_n*1000/(np.sqrt(3)*U_j_n*1000.0)
Z_base = (1000*U_j_n)**2/(S_n*1000)
Z_cc = (R_cc_pu + 1j*X_cc_pu)*Z_base
V_j_n = np.abs(I_nom*Z_cc)/1000.0
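    # driving the primary at |Z_cc|*I_nom (phase-neutral) with the secondary
    # short-circuited should pull exactly rated current through each winding;
    # the assertions below check this per phase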
for connection, gnd, phase in trafos_list:
if gnd == 'gnd_j':
data = {
"buses":[{"bus": "Bus_1", "pos_x": 10.0, "pos_y": 0.0, "units": "m", "U_kV":U_j_n},
{"bus": "Bus_2", "pos_x": 15.0, "pos_y": 0.0, "units": "m", "U_kV":U_k_n}],
"transformers":[{"bus_j": "Bus_1", "bus_k": "Bus_2", "S_n_kVA": S_n, "U_j_kV":U_j_n, "U_k_kV":U_k_n,
"R_cc_pu": 0.01, "X_cc_pu":0.04, "connection": connection, "conductors_j": 4, "conductors_k": 3}],
"grid_formers":[{"bus": "Bus_1","bus_nodes": [1, 2, 3],"kV": [V_j_n,V_j_n, V_j_n], "deg": [0, 120, 240.0]}],
"shunts":[{"bus": "Bus_1" , "R": 0.00001, "X": 0.0, "bus_nodes": [4,0]},
{"bus": "Bus_2" , "R": 0.00001, "X": 0.0, "bus_nodes": [1,2]},
{"bus": "Bus_2" , "R": 0.00001, "X": 0.0, "bus_nodes": [2,3]}]}
if gnd == 'gnd_k':
data = {
"buses":[{"bus": "Bus_1", "pos_x": 10.0, "pos_y": 0.0, "units": "m", "U_kV":U_j_n},
{"bus": "Bus_2", "pos_x": 15.0, "pos_y": 0.0, "units": "m", "U_kV":U_k_n}],
"transformers":[{"bus_j": "Bus_1", "bus_k": "Bus_2", "S_n_kVA": S_n, "U_j_kV":U_j_n, "U_k_kV":U_k_n,
"R_cc_pu": 0.01, "X_cc_pu":0.04, "connection": connection, "conductors_j": 3, "conductors_k": 4}],
"grid_formers":[{"bus": "Bus_1","bus_nodes": [1, 2, 3],"kV": [V_j_n,V_j_n, V_j_n], "deg": [0, 120, 240.0]}],
"shunts":[{"bus": "Bus_2" , "R": 0.00001, "X": 0.0, "bus_nodes": [4,0]},
{"bus": "Bus_2" , "R": 0.00001, "X": 0.0, "bus_nodes": [1,2]},
{"bus": "Bus_2" , "R": 0.00001, "X": 0.0, "bus_nodes": [2,3]}]}
if gnd == '3wires':
data = {
"buses":[{"bus": "Bus_1", "pos_x": 10.0, "pos_y": 0.0, "units": "m", "U_kV":U_j_n},
{"bus": "Bus_2", "pos_x": 15.0, "pos_y": 0.0, "units": "m", "U_kV":U_k_n}],
"transformers":[{"bus_j": "Bus_1", "bus_k": "Bus_2", "S_n_kVA": S_n, "U_j_kV":U_j_n, "U_k_kV":U_k_n,
"R_cc_pu": 0.01, "X_cc_pu":0.04, "connection": connection, "conductors_j": 3, "conductors_k": 3}],
"grid_formers":[{"bus": "Bus_1","bus_nodes": [1, 2, 3],"kV": [V_j_n,V_j_n, V_j_n], "deg": [0, 120, 240.0]}],
"shunts":[{"bus": "Bus_2" , "R": 0.00001, "X": 0.0, "bus_nodes": [1,2]},
{"bus": "Bus_2" , "R": 0.00001, "X": 0.0, "bus_nodes": [2,3]}]
}
# pydgrid calculation
sys1 = grid()
sys1.read(data) # Load data
sys1.pf() # solve power flow
# positive sequence calculation
r_t_theoretic = U_j_n/U_k_n
i_1a_m = sys1.transformers[0]['i_1a_m']
i_2a_m = sys1.transformers[0]['i_2a_m']
r_t_model = i_2a_m/i_1a_m
error_rt = r_t_theoretic - r_t_model
assert abs(error_rt)<100.0
i_1a_m = sys1.transformers[0]['i_1a_m']
print('I_nom',I_nom)
print(connection,'i_1a_m',i_1a_m)
error_icc = (I_nom - i_1a_m)/I_nom
assert abs(error_icc)<0.001
i_1b_m = sys1.transformers[0]['i_1b_m']
print('I_nom',I_nom)
print(connection,'i_1b_m',i_1b_m)
error_icc = (I_nom - i_1b_m)/I_nom
assert abs(error_icc)<0.001
i_1c_m = sys1.transformers[0]['i_1c_m']
print('I_nom',I_nom)
print(connection,'i_1c_m',i_1c_m)
error_icc = (I_nom - i_1c_m)/I_nom
assert abs(error_icc)<0.001
# phase_shift_theoretic = phase
# phase_shift_model = data["buses"][0]["deg_an"] - data["buses"][1]["deg_an"]
# error_angle = phase_shift_theoretic - phase_shift_model
# assert abs(error_angle)<0.001
# def test_Dyn11_OC():
# '''
# Open circuit like test
# '''
# data = {
# "buses":[{"bus": "Bus_1", "pos_x": 10.0, "pos_y": 0.0, "units": "m", "U_kV":20.0},
# {"bus": "Bus_2", "pos_x": 15.0, "pos_y": 0.0, "units": "m", "U_kV":0.4}],
# "transformers":[{"bus_j": "Bus_1", "bus_k": "Bus_2", "S_n_kVA": 1000.0, "U_j_kV":20.0, "U_k_kV":0.4,
# "R_cc_pu": 0.01, "X_cc_pu":0.04, "connection": "Dyn11", "conductors_j": 3, "conductors_k": 4}],
# "grid_formers":[{"bus": "Bus_1","bus_nodes": [1, 2, 3],"kV": [11.547, 11.547, 11.547], "deg": [0, 120, 240.0]}],
# "grid_feeders":[{"bus": "Bus_2","bus_nodes": [1, 2, 3],"kW": [0,0,0],
# "kvar": [0,0,0],"kA": [0,0,0], "phi_deg":[0, 0, 0]}],
# "shunts":[{"bus": "Bus_2" , "R": 0.001, "X": 0.0, "bus_nodes": [4,0]}]}
# # pydgrid calculation
# sys1 = grid()
# sys1.read(data) # Load data
# sys1.pf_solver = 1
# sys1.pf() # solve power flow
# sys1.get_v() # post process voltages
# # positive sequence calculation
# U_1_n = data["transformers"][0]["U_j_kV"]*1000
# U_2_n = data["transformers"][0]["U_k_kV"]*1000
# r_t = U_1_n/U_2_n
# V_2_manual = U_1_n/np.sqrt(3)/r_t*np.exp(1j*np.deg2rad(30))
# V_2_pydgrid = sys1.buses[1]['v_an']*np.exp(1j*np.deg2rad(sys1.buses[1]['deg_an']))
# error = V_2_manual - V_2_pydgrid
# assert abs(error)<0.001
# def test_Dyn11_SC():
# '''
# Short circuit like test
# '''
# data = {
# "buses":[{"bus": "Bus_1", "pos_x": 10.0, "pos_y": 0.0, "units": "m", "U_kV":0.4},
# {"bus": "Bus_2", "pos_x": 15.0, "pos_y": 0.0, "units": "m", "U_kV":0.4}],
# "transformers":[{"bus_j": "Bus_1", "bus_k": "Bus_2", "S_n_kVA": 1000.0, "U_j_kV":20.0, "U_k_kV":0.4,
# "R_cc_pu": 0.01, "X_cc_pu":0.04, "connection": "Dyn11", "conductors_j": 3, "conductors_k": 4}],
# "grid_formers":[{"bus": "Bus_1","bus_nodes": [1, 2, 3],"kV": [11.547, 11.547, 11.547], "deg": [0, 120, 240.0]}],
# "grid_feeders":[{"bus": "Bus_2","bus_nodes": [1, 2, 3],"kW": [0,0,0],
# "kvar": [0,0,0],"kA": [0,0,0], "phi_deg":[0, 0, 0]}],
# "shunts":[{"bus": "Bus_2" , "R": 0.001, "X": 0.0, "bus_nodes": [4,0]},
# {"bus": "Bus_2" , "R": 1.0e-8, "X": 0.0, "bus_nodes": [1,0]},
# {"bus": "Bus_2" , "R": 1.0e-8, "X": 0.0, "bus_nodes": [2,0]},
# {"bus": "Bus_2" , "R": 1.0e-8, "X": 0.0, "bus_nodes": [3,0]}]}
# U_1_n = data["transformers"][0]["U_j_kV"]*1000
# U_2_n = data["transformers"][0]["U_k_kV"]*1000
# R_cc_pu = data["transformers"][0]["R_cc_pu"]
# X_cc_pu = data["transformers"][0]["X_cc_pu"]
# Z_cc_pu = R_cc_pu + 1j*X_cc_pu
# V_cc = np.abs(Z_cc_pu)*U_1_n/np.sqrt(3)
# V_cc_kV = V_cc/1000
# data["grid_formers"][0]["kV"] = [V_cc_kV, V_cc_kV, V_cc_kV]
# # pydgrid calculation
# sys1 = grid()
# sys1.read(data) # Load data
# sys1.pf_solver = 1
# sys1.pf() # solve power flow
# sys1.get_v() # post process voltages
# sys1.get_i()
# p_a,p_b,p_c = sys1.buses[0]['p_a'],sys1.buses[0]['p_b'],sys1.buses[0]['p_c']
# p_cc = p_a + p_b + p_c
# i_1a_m = sys1.transformers[0]['i_1a_m']
# R_cc_pydgrid = p_cc/(3*i_1a_m**2)
# Z_cc_pydgrid = V_cc/i_1a_m
# X_cc_pydgrid = np.sqrt(Z_cc_pydgrid**2 - R_cc_pydgrid**2)
# Z_b = U_1_n**2/1000.0e3
# R_cc_pu_pydgrid = R_cc_pydgrid/Z_b
# X_cc_pu_pydgrid = X_cc_pydgrid/Z_b
# Z_cc_pu_pydgrid = R_cc_pu_pydgrid + 1j*X_cc_pu_pydgrid
# print('Z_cc_pu',Z_cc_pu)
# print('Z_cc_pu_pydgrid',Z_cc_pu_pydgrid)
# error = Z_cc_pu - Z_cc_pu_pydgrid
# assert abs(error)<0.001
# def test_Ygd11_3w_OC():
# '''
# Open circuit like test
# '''
# data = {
# "buses":[{"bus": "Bus_1", "pos_x": 10.0, "pos_y": 0.0, "units": "m", "U_kV":20.0},
# {"bus": "Bus_2", "pos_x": 15.0, "pos_y": 0.0, "units": "m", "U_kV":0.4}],
# "transformers":[{"bus_j": "Bus_1", "bus_k": "Bus_2", "S_n_kVA": 1000.0, "U_j_kV":20.0, "U_k_kV":0.4,
# "R_cc_pu": 0.01, "X_cc_pu":0.04, "connection": "Ygd11_3w", "conductors_j": 3, "conductors_k": 4}],
# "grid_formers":[{"bus": "Bus_1","bus_nodes": [1, 2, 3],"kV": [11.547, 11.547, 11.547], "deg": [0, 120, 240.0]}],
# "grid_feeders":[{"bus": "Bus_2","bus_nodes": [1, 2, 3],"kW": [0,0,0],
# "kvar": [0,0,0],"kA": [0,0,0], "phi_deg":[0, 0, 0]}],
# "shunts":[{"bus": "Bus_2" , "R": 0.001, "X": 0.0, "bus_nodes": [4,0]}]}
# # pydgrid calculation
# sys1 = grid()
# sys1.read(data) # Load data
# sys1.pf_solver = 1
# sys1.pf() # solve power flow
# sys1.get_v() # post process voltages
# # positive sequence calculation
# U_1_n = data["transformers"][0]["U_j_kV"]*1000
# U_2_n = data["transformers"][0]["U_k_kV"]*1000
# r_t = U_1_n/U_2_n
# V_2_manual = U_1_n/np.sqrt(3)/r_t*np.exp(1j*np.deg2rad(30))
# V_2_pydgrid = sys1.buses[1]['v_an']*np.exp(1j*np.deg2rad(sys1.buses[1]['deg_an']))
# error = V_2_manual - V_2_pydgrid
# assert abs(error)<0.001
# def test_Ygd11_3w_SC():
# '''
# Short circuit like test
# '''
# data = {
# "buses":[{"bus": "Bus_1", "pos_x": 10.0, "pos_y": 0.0, "units": "m", "U_kV":0.4},
# {"bus": "Bus_2", "pos_x": 15.0, "pos_y": 0.0, "units": "m", "U_kV":0.4}],
# "transformers":[{"bus_j": "Bus_1", "bus_k": "Bus_2", "S_n_kVA": 1000.0, "U_j_kV":20.0, "U_k_kV":0.4,
# "R_cc_pu": 0.01, "X_cc_pu":0.04, "connection": "Ygd11_3w", "conductors_j": 3, "conductors_k": 3}],
# "grid_formers":[{"bus": "Bus_1","bus_nodes": [1, 2, 3],"kV": [11.547, 11.547, 11.547], "deg": [0, 120, 240.0]}],
# "grid_feeders":[{"bus": "Bus_2","bus_nodes": [1, 2, 3],"kW": [0,0,0],
# "kvar": [0,0,0],"kA": [0,0,0], "phi_deg":[0, 0, 0]}],
# "shunts":[{"bus": "Bus_2" , "R": 1.0e-8, "X": 0.0, "bus_nodes": [1,0]},
# {"bus": "Bus_2" , "R": 1.0e-8, "X": 0.0, "bus_nodes": [2,0]},
# {"bus": "Bus_2" , "R": 1.0e-8, "X": 0.0, "bus_nodes": [3,0]}]}
# U_1_n = data["transformers"][0]["U_j_kV"]*1000
# U_2_n = data["transformers"][0]["U_k_kV"]*1000
# R_cc_pu = data["transformers"][0]["R_cc_pu"]
# X_cc_pu = data["transformers"][0]["X_cc_pu"]
# Z_cc_pu = R_cc_pu + 1j*X_cc_pu
# V_cc = np.abs(Z_cc_pu)*U_1_n/np.sqrt(3)
# V_cc_kV = V_cc/1000
# data["grid_formers"][0]["kV"] = [V_cc_kV, V_cc_kV, V_cc_kV]
# # pydgrid calculation
# sys1 = grid()
# sys1.read(data) # Load data
# sys1.pf_solver = 1
# sys1.pf() # solve power flow
# sys1.get_v() # post process voltages
# sys1.get_i()
# p_a,p_b,p_c = sys1.buses[0]['p_a'],sys1.buses[0]['p_b'],sys1.buses[0]['p_c']
# p_cc = p_a + p_b + p_c
# i_1a_m = sys1.transformers[0]['i_1a_m']
# R_cc_pydgrid = p_cc/(3*i_1a_m**2)
# Z_cc_pydgrid = V_cc/i_1a_m
# X_cc_pydgrid = np.sqrt(Z_cc_pydgrid**2 - R_cc_pydgrid**2)
# Z_b = U_1_n**2/1000.0e3
# R_cc_pu_pydgrid = R_cc_pydgrid/Z_b
# X_cc_pu_pydgrid = X_cc_pydgrid/Z_b
# Z_cc_pu_pydgrid = R_cc_pu_pydgrid + 1j*X_cc_pu_pydgrid
# print('Z_cc_pu',Z_cc_pu)
# print('Z_cc_pu_pydgrid',Z_cc_pu_pydgrid)
# error = Z_cc_pu - Z_cc_pu_pydgrid
# assert abs(error)<0.001
# def test_Ygd11_3w_SC():
# data = {"buses":[{"bus": "Bus_1", "pos_x": 10.0, "pos_y": 0.0, "units": "m", "U_kV":20.0},
# {"bus": "Bus_2", "pos_x": 15.0, "pos_y": 0.0, "units": "m", "U_kV":0.4}],
# "transformers":[{"bus_j": "Bus_1", "bus_k": "Bus_2", "S_n_kVA": 1000.0, "U_j_kV":20.0, "U_k_kV":0.4,
# "R_cc_pu": 0.01, "X_cc_pu":0.04, "connection": "Dyg11_3w", "conductors_j": 3, "conductors_k": 3}],
# "grid_formers":[{"bus": "Bus_1","bus_nodes": [1, 2, 3],"kV": [11.547, 11.547, 11.547], "deg": [0, 120, 240.0]}],
# "grid_feeders":[{"bus": "Bus_2","bus_nodes": [1, 2, 3],"kW": [0,0,0],"kvar": [0,0,0],"kA": [0,0,0], "phi_deg":[0, 0, 0]}]}
# # pydgrid calculation
# sys1 = grid()
# sys1.read(data) # Load data
# sys1.pf_solver = 1
# sys1.pf() # solve power flow
# sys1.get_v() # post process voltages
# # positive sequence calculation
# U_1_n = data["transformers"][0]["U_j_kV"]*1000
# U_2_n = data["transformers"][0]["U_k_kV"]*1000
# r_t = U_1_n/U_2_n
# V_2_manual = U_1_n/np.sqrt(3)/r_t*np.exp(1j*np.deg2rad(30))
# V_2_pydgrid = sys1.buses[1]['v_an']*np.exp(1j*np.deg2rad(sys1.buses[1]['deg_an']))
# print('V_2_manual',V_2_manual)
# print('V_2_pydgrid',V_2_pydgrid)
# error = V_2_manual - V_2_pydgrid
# assert abs(error)<0.001
if __name__ == "__main__":
# test_Dyg11_3w()
# test_Ygd11_3w_OC()
# test_Ygd11_3w_SC()
test_trafos_OC()
test_trafos_SC()
pass
# test_Dyn11()
# test_Dyg11_3w()
| 44.183511 | 136 | 0.506952 | 2,862 | 16,613 | 2.616003 | 0.058001 | 0.021103 | 0.030853 | 0.016028 | 0.920662 | 0.909176 | 0.903566 | 0.900361 | 0.900361 | 0.878857 | 0 | 0.101266 | 0.267682 | 16,613 | 375 | 137 | 44.301333 | 0.514138 | 0.555589 | 0 | 0.581967 | 0 | 0 | 0.20366 | 0 | 0 | 0 | 0 | 0 | 0.04918 | 1 | 0.016393 | false | 0.008197 | 0.057377 | 0 | 0.07377 | 0.04918 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
132b4af24b586f38e6574953645b31261aee3c6d | 181 | py | Python | environments/__init__.py | floriandonhauser/TeBaG-RL | 0110087c97e4d67f739961e7320945da4b3d9592 | [
"MIT"
] | null | null | null | environments/__init__.py | floriandonhauser/TeBaG-RL | 0110087c97e4d67f739961e7320945da4b3d9592 | [
"MIT"
] | null | null | null | environments/__init__.py | floriandonhauser/TeBaG-RL | 0110087c97e4d67f739961e7320945da4b3d9592 | [
"MIT"
] | null | null | null | from environments.tf_game_env import TWGameEnv
from environments.tf_create_environment import create_environments
from environments.tf_vocab_collection_simple import run_auto_vocab
| 45.25 | 66 | 0.917127 | 25 | 181 | 6.24 | 0.56 | 0.307692 | 0.346154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066298 | 181 | 3 | 67 | 60.333333 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
134b85d656cdcb46fa456f0ad2a83e096af9af47 | 23,880 | py | Python | sequitr/networks/unet.py | quantumjot/sequitr | 6a62100a9d4dcecc429bd063fb6931acb711c740 | [
"MIT"
] | 3 | 2019-04-16T14:51:04.000Z | 2019-12-16T22:28:17.000Z | sequitr/networks/unet.py | quantumjot/sequitr | 6a62100a9d4dcecc429bd063fb6931acb711c740 | [
"MIT"
] | 1 | 2021-04-07T16:07:07.000Z | 2021-04-26T09:35:36.000Z | sequitr/networks/unet.py | quantumjot/sequitr | 6a62100a9d4dcecc429bd063fb6931acb711c740 | [
"MIT"
] | 1 | 2019-04-16T14:51:07.000Z | 2019-04-16T14:51:07.000Z | #!/usr/bin/env python
#-------------------------------------------------------------------------------
# Name: Sequitr
# Purpose: Sequitr is a small, lightweight Python library for common image
# processing tasks in optical microscopy, in particular, single-
# molecule imaging, super-resolution or time-lapse imaging of cells.
# Sequitr implements fully convolutional neural networks for image
# segmentation and classification. Modelling of the PSF is also
# supported, and the library is designed to integrate with
# BayesianTracker.
#
# Authors: Alan R. Lowe (arl) a.lowe@ucl.ac.uk
#
# License: See LICENSE.md
#
# Created: 23/03/2018
#-------------------------------------------------------------------------------
__author__ = 'Alan R. Lowe'
__email__ = 'code@arlowe.co.uk'
import os
import core
import utils
import logging
import numpy as np
import tensorflow as tf
# set verbose logging
tf.logging.set_verbosity(tf.logging.INFO)
LOGDIR = core.TensorflowConfiguration.LOGDIR
MODELDIR = core.TensorflowConfiguration.MODELDIR
UNET_MODEL_FOLDER = MODELDIR
DEFAULT_FILTERS = (16, 32, 64, 128, 256)
DEFAULT_DROPOUT = 0.4
BRIDGE_TYPES = ('eltwise_add', 'eltwise_mul', 'eltwise_sub', 'concat', None)
# get the logger instance
logger = logging.getLogger('worker_process')
class UNet(object):
""" UNet
    ** This is the Base Class, use the subclasses UNet2D or UNet3D **
A UNet class for image segmentation, implemented using TensorFlow.
    Basic architecture nomenclature used here: L0u (Layer 0, up)
        - Bridge
        - Layers are labeled from the top (0) to the bottom, e.g. 4
- Layers are labeled as up or down
This implementation differs in that we pad each convolution such
that the output following convolution is the same size as the input.
Also, bridges are elementwise operations of the filters to approach a
residual-net architecture (resnet), although this can be changed by
the user. The bridge_type property allows different bridge types to be
specified:
- elementwise_add
- elementwise_multiply
- elementwise_subtract
- concatenate
- None (no bridge information, resembles an autoencoder)
Image autoencoders can also be subclassed from this structure, by
removing the bridge information.
    Note that the UNet class should not be used on its own. Generally
there are subclassed versions which inherit the main features but
specify loss functions and bridge details that are specific to the
particular architecture.
TODO(arl): implement filter doubling
Args:
params: a network configuration object dict (usually from
utils.NetConfiguration)
mode: the tensorflow training mode flag
Properties:
_activation: the activation function to use, e.g. tf.nn.relu
_initializer: default initializer for kernels
name: a name for the network, this should be on the white-listed network
names list in the core module
bridge: name of bridge type ('eltwise_add', 'eltwise_mul', 'concat')
dropout: dropout rate (e.g. 0.5 during training)
use_filter_doubling: (bool) doubles filters within a layer before the
maxpool/conv_transpose layers to prevent bottlenecks
Methods:
build(): build the network
Notes:
Based on the original publications:
U-Net: Convolutional Networks for Biomedical Image Segmentation
Olaf Ronneberger, Philipp Fischer and Thomas Brox
http://arxiv.org/abs/1505.04597
3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation
Ozgun Cicek, Ahmed Abdulkadir, Soeren S. Lienkamp, Thomas Brox
and Olaf Ronneberger
https://arxiv.org/abs/1606.06650
Filter doubling from:
Rethinking the Inception Architecture for Computer Vision.
Szegedy C., Vanhoucke V., Ioffe S., Shlens J., Wojn, Z.
https://arxiv.org/abs/1512.00567
"""
def __init__(self, params, mode):
# training mode: TRAIN, TEST or PREDICT
self._mode = mode
# TODO(arl): proper error checking on these parameters
self.name = params.get('name','UNet2d_test')
self.filters = params.get('filters', DEFAULT_FILTERS)
self.dropout = params.get('dropout', DEFAULT_DROPOUT)
self.n_inputs = params.get('num_inputs', 1)
self.n_outputs = params.get('num_outputs', 2)
self.shape = params.get('shape', (1024, 1024))
self.bridge_type = params.get('bridge', 'eltwise_mul')
self.kernel = params.get('kernel', (3,3))
# activation functions and initializers
self._activation = tf.nn.relu
self._initializer = tf.initializers.variance_scaling
# empty net to start with
self._net = None
@property
def width(self):
""" width of the image volume """
return self.shape[0]
@property
def height(self):
""" height of the image volume """
return self.shape[1]
@property
def slices(self):
""" depth (number of slices) of the image volume """
if self.ndim < 3: return 0
return self.shape[2]
@property
def ndim(self):
""" number of dimensions of image volume """
return len(self.shape)
@property
def training(self):
""" training mode flag """
return self._mode==tf.estimator.ModeKeys.TRAIN
@property
def btype(self):
""" DEPRECATED: bridge type """
raise DeprecationWarning("Use @bridge_type")
@property
def bridge_type(self):
return self._bridge_type
@bridge_type.setter
def bridge_type(self, bridge):
""" Set the bridge type """
if bridge not in BRIDGE_TYPES:
raise ValueError('Bridge type not recognized')
# set the bridge function
if bridge == 'eltwise_add':
self.bridge = lambda x,y: tf.add(x,y)
elif bridge == 'eltwise_mul':
self.bridge = lambda x,y: tf.multiply(x,y)
elif bridge =='eltwise_sub':
self.bridge = lambda x,y: tf.subtract(x,y)
elif bridge == 'concat':
self.bridge = lambda x,y: tf.concat([x,y],-1)
else:
logger.warning('Bridge function in UNet not recognized')
self.bridge = lambda x,y: x
self._bridge_type = bridge
def reshape_input(self, features):
""" Reshape the input layer from the dataset features:
(batch, depth (aka slices), height, width, channels)
"""
# reshape the data to the correct size
full_shape = [-1, self.slices, self.width, self.height, self.n_inputs]
input_shape = [d for d in full_shape if d != 0]
        logger.info('Input shape: {0:s} -> {1:s}'
                    .format(str(full_shape), str(input_shape)))
input_layer = tf.reshape(features,
input_shape,
name='input_layer')
return input_layer
def logits(self):
""" return the un-normalized logits (i.e. last) layer of the network """
return self._net[-1]
def build(self, features):
""" build
Build the network using the given parameters and the features. Returns
the final output layers. Input are the features as a tensor, typically
from a tensorflow Dataset object.
"""
# output some details of the net
logger.info('Building UNet ({0:s})...'.format(self.__class__.__name__))
with tf.variable_scope('UNet'):
input_layer = self.reshape_input(features)
# BUILD THE NET!
self._net = [self.down_layer(input_layer, self.filters[0], name=0)]
# do the down layers
for i, f in enumerate(self.filters[1:]):
prev_layer = self.max_pool_layer(self._net[-1])
self._net.append( self.down_layer(prev_layer, f, name=i+1) )
# now add the up layers
for i, f in reversed(list(enumerate(self.filters[:-1]))):
prev_layer = self._net[-1] # layer below
bridge = self._net[i] # bridge information
self._net.append( self.up_layer(prev_layer, f, bridge, name=i) )
# make an output layer with a 1x1 convolution
with tf.variable_scope('to_image'):
logits = self.conv_layer_1x1(self._net[-1], self.n_outputs)
logger.info('Output layer -> shape {0:s}'.format(str(logits.shape)))
# append this layer for completeness
self._net.append(logits)
logger.info('...Done')
return logits
def conv_block(self, x, filters):
""" convolutional block """
with tf.variable_scope('conv1'):
conv1 = self.conv_layer(x, filters)
with tf.variable_scope('conv2'):
conv2 = self.conv_layer(conv1, filters)
# Dropout
drop = tf.layers.dropout(inputs=conv2,
rate=self.dropout,
training=self.training)
return drop
def down_layer(self, x, filters, name=None):
""" down_layer
A down layer of the UNet. These are characterised by a series of
convolution and ReLu operations, followed by a max pool to down
sample to the next layer. A layer here is defined as 2x
[3x3 convolution, ReLu]
Tensor shape is often of the format: NHWC
"""
logger.info('Down layer -> shape {0:s}'.format(str(x.shape)))
with tf.variable_scope('down{0:d}'.format(name)):
out = self.conv_block(x, filters)
return out
def up_layer(self, x, filters, bridge, name=None):
""" up_layer
These are characterised by a series of convolution and ReLu
operations, followed by a transpose deconvolution to up sample to the
next layer.
Tensor shape is often of the format: NHWC
"""
logger.info('Up layer -> shape {0:s} (bridge: {1:s})'
.format(str(x.shape), self.bridge_type))
with tf.variable_scope('up{0:d}'.format(name)):
# scale up the image
with tf.variable_scope('upscale'):
upscale = self.conv_transpose_layer(x, filters)
# now we need to incorporate the filters using the bridge
with tf.variable_scope('bridge'):
bridge = self.bridge(upscale, bridge)
out = self.conv_block(bridge, filters)
return out
def conv_layer(self, x, filters):
""" Convolution layer, conv-relu with padding """
raise NotImplementedError
def conv_layer_1x1(self, x, filters):
""" Return a 1x1 convolution layer """
raise NotImplementedError
def conv_transpose_layer(self, x, filters):
""" Transpose convolution (aka deconvolution) layer """
raise NotImplementedError
    def max_pool_layer(self, x):
""" Max pool operation """
raise NotImplementedError
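# Illustrative sketch only (an assumption, not part of the original file): a
# 2D subclass would typically fill in the abstract builders above with the
# TF1 tf.layers API, along these lines:
#
#     def conv_layer(self, x, filters):
#         # 3x3 same-padded convolution followed by the activation
#         return tf.layers.conv2d(x, filters, (3, 3), padding='same',
#                                 activation=self._activation,
#                                 kernel_initializer=self._initializer())
#
#     def conv_layer_1x1(self, x, filters):
#         # 1x1 convolution with no activation: raw logits
#         return tf.layers.conv2d(x, filters, (1, 1), padding='same')
#
#     def conv_transpose_layer(self, x, filters):
#         # stride-2 transpose convolution doubles the spatial resolution
#         return tf.layers.conv2d_transpose(x, filters, (2, 2), strides=(2, 2),
#                                           padding='same')
#
#     def max_pool_layer(self, x):
#         # 2x2 max pool halves the spatial resolution
#         return tf.layers.max_pooling2d(x, (2, 2), strides=(2, 2))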
def tr_augment(features, params):
""" Augment the dataset by random cropping, flipping and rotations
Identical augmentations need to be applied to the labels and any weight
map. For the weight map, we make a mask where the regions outside of the
actual image have weights of one - this ensures that we don't set incorrect
labels outside of the actual image data while augmenting...
"""
img = features['image']
label = features['label']
weights = features['weights']
image_shape = features['shape']
height = image_shape[1]
width = image_shape[2]
outputs = params.get('num_outputs', 2)
ch, cw = params.get('shape', (512,512))[0:2]
# random rotation, crop and flips
theta = 2.*tf.random_uniform([], dtype=tf.float32)*np.pi
# rotate the images
im_rot = tf.contrib.image.rotate(img, theta, interpolation='BILINEAR')
lbl_rot = tf.contrib.image.rotate(label, theta, interpolation='NEAREST')
wgt_rot = tf.contrib.image.rotate(weights, theta, interpolation='BILINEAR')
# make a mask where the regions outside of the actual image have weights
# of one - this ensures that we don't set incorrect labels outside of
# the actual image data while augmenting...
mask_im = tf.ones(image_shape, dtype=tf.float32)
wgt_mask = 1.0 - tf.contrib.image.rotate(mask_im, theta,
interpolation='NEAREST')
wgt_rot = tf.add(wgt_rot, wgt_mask)
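    # crop only when the requested (ch, cw) differs from the full image size;
    # the crop corner (rh, rw) is drawn uniformly at random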
    if (ch, cw) != (height, width):
        rh = tf.random_uniform([], maxval=height-ch, dtype='int32')
        rw = tf.random_uniform([], maxval=width-cw, dtype='int32')
        # crop the image, labels and weights
        img = tf.image.crop_to_bounding_box(im_rot, rh, rw, ch, cw)
        label = tf.image.crop_to_bounding_box(lbl_rot, rh, rw, ch, cw)
        weights = tf.image.crop_to_bounding_box(wgt_rot, rh, rw, ch, cw)
    else:
        # no crop requested: keep the rotated tensors
        img, label, weights = im_rot, lbl_rot, wgt_rot
# now expand the label
channels = range(5) #TODO(arl): this is UGLY!
labels = [tf.cast(tf.equal(label, chnl), tf.uint8) for chnl in channels]
label = tf.concat(labels, axis=-1)[...,:outputs]
    # return only the first `outputs` channels of the one-hot label...
return img, {'label':label, 'weights':weights}
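# Sketch of typical wiring (an assumption -- the Dataset construction is not
# shown in this file): the augmentation runs per-element inside the input
# pipeline, e.g. dataset = dataset.map(lambda feats: tr_augment(feats, params))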
def preprocess_norm(features):
""" normalise images or volumes to mean 0. and std 1.0
if the image shape is: (Z, H, W, C), rank=4, axes = [0,1,2]
(H, W, C), rank=3, axes = [0,1]
"""
try:
img = features['image']
    except (TypeError, KeyError):
img = features
axes = tf.range(tf.rank(img)-1)
# need to normalise these now
mean, var = tf.nn.moments(img, axes=axes, keep_dims=True)
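    # with offset=None and scale=None this reduces to (img - mean) / sqrt(var + eps)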
img = tf.nn.batch_normalization(img,
mean,
var,
None,
None,
1e-38,
name='image_normalisation')
    # return the normalized image (back into the dict if one was passed in)
    if not isinstance(features, dict): return img
    features['image'] = img
    return features
class UNet_LEGACY(object):
""" UNet
    ** This is the Base Class, use the subclasses UNet2D or UNet3D **
A UNet class for image segmentation, implemented using TensorFlow.
    Basic architecture nomenclature used here: L0u (Layer 0, up)
- Bridge
    - Layers are labeled from the top (0) to the bottom, e.g. 4
- Layers are labeled as up or down
This implementation differs in that we pad each convolution such
that the output following convolution is the same size as the input.
Also, bridges are elementwise operations of the filters to approach a
residual-net architecture (resnet), although this can be changed by
the user. The bridge_type property allows different bridge types to be
specified:
        - eltwise_add (elementwise addition)
        - eltwise_mul (elementwise multiplication)
        - eltwise_sub (elementwise subtraction)
        - concat (concatenation)
        - None (no bridge information, resembles an autoencoder)
Image autoencoders can also be subclassed from this structure, by
removing the bridge information.
    Note that the UNet class should not be used on its own. Generally
there are subclassed versions which inherit the main features but
specify loss functions and bridge details that are specific to the
particular architecture.
TODO(arl): implement filter doubling
Args:
params: a network configuration object dict (usually from
utils.NetConfiguration)
mode: the tensorflow training mode flag
Properties:
_activation: the activation function to use, e.g. tf.nn.relu
_initializer: default initializer for kernels
name: a name for the network, this should be on the white-listed network
names list in the core module
        bridge: name of bridge type ('eltwise_add', 'eltwise_mul',
            'eltwise_sub', 'concat')
dropout: dropout rate (e.g. 0.5 during training)
use_filter_doubling: (bool) doubles filters within a layer before the
maxpool/conv_transpose layers to prevent bottlenecks
Methods:
build(): build the network
Notes:
Based on the original publications:
U-Net: Convolutional Networks for Biomedical Image Segmentation
Olaf Ronneberger, Philipp Fischer and Thomas Brox
http://arxiv.org/abs/1505.04597
3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation
Ozgun Cicek, Ahmed Abdulkadir, Soeren S. Lienkamp, Thomas Brox
and Olaf Ronneberger
https://arxiv.org/abs/1606.06650
Filter doubling from:
Rethinking the Inception Architecture for Computer Vision.
        Szegedy C., Vanhoucke V., Ioffe S., Shlens J., Wojna, Z.
https://arxiv.org/abs/1512.00567
"""
def __init__(self, params, mode):
# training mode: TRAIN, TEST or PREDICT
self._mode = mode
# TODO(arl): proper error checking on these parameters
self.name = params.get('name','UNet2d_test')
self.filters = params.get('filters', DEFAULT_FILTERS)
self.dropout = params.get('dropout', DEFAULT_DROPOUT)
self.n_inputs = params.get('num_inputs', 1)
self.n_outputs = params.get('num_outputs', 2)
self.shape = params.get('shape', (1024, 1024))
self.use_filter_doubling = params.get('use_filter_doubling', False)
self.bridge_type = params.get('bridge', 'eltwise_mul')
self.kernel = params.get('kernel', (3,3))
# activation functions and initializers
self._activation = tf.nn.relu
self._initializer = tf.initializers.variance_scaling
# empty net to start with
self._net = None
@property
def width(self):
""" width of the image volume """
return self.shape[0]
@property
def height(self):
""" height of the image volume """
return self.shape[1]
@property
def slices(self):
""" depth (number of slices) of the image volume """
if self.ndim < 3: return 0
return self.shape[2]
@property
def ndim(self):
""" number of dimensions of image volume """
return len(self.shape)
@property
def training(self):
""" training mode flag """
return self._mode==tf.estimator.ModeKeys.TRAIN
@property
def btype(self):
""" DEPRECATED: bridge type """
raise DeprecationWarning("Use @bridge_type")
@property
def bridge_type(self):
return self._bridge_type
@bridge_type.setter
def bridge_type(self, bridge):
""" Set the bridge type """
if bridge not in BRIDGE_TYPES:
raise ValueError('Bridge type not recognized')
# set the bridge function
if bridge == 'eltwise_add':
self.bridge = lambda x,y: tf.add(x,y)
elif bridge == 'eltwise_mul':
self.bridge = lambda x,y: tf.multiply(x,y)
        elif bridge == 'eltwise_sub':
self.bridge = lambda x,y: tf.subtract(x,y)
elif bridge == 'concat':
self.bridge = lambda x,y: tf.concat([x,y],-1)
else:
logger.warning('Bridge function in UNet not recognized')
self.bridge = lambda x,y: x
self._bridge_type = bridge
@property
def use_filter_doubling(self):
return self._use_filter_doubling
@use_filter_doubling.setter
def use_filter_doubling(self, flag):
""" use filter doubling within a layer """
if not isinstance(flag, bool):
raise TypeError('use_filter_doubling should be a boolean flag')
self._use_filter_doubling = flag
def logits(self):
""" return the un-normalized logits (i.e. last) layer of the network """
return self._net[-1]
def build(self, features):
""" build
Build the network using the given parameters and the features. Returns
the final output layers. Input are the features as a tensor, typically
from a tensorflow Dataset object.
"""
# output some details of the net
logger.info('Building UNet ({0:s})...'.format(self.__class__.__name__))
input_layer = self.reshape_input(features)
# BUILD THE NET!
self._net = [self.down_layer(input_layer, self.filters[0], name='L0d')]
# do the down layers
for i, f in enumerate(self.filters[1:]):
name = "L{0:d}d".format(i+1)
prev_layer = self.max_pool_layer(self._net[-1])
self._net.append( self.down_layer(prev_layer, f, name=name) )
# now add the up layers
for i, f in reversed(list(enumerate(self.filters[:-1]))):
name = "L{0:d}u".format(i)
prev_layer = self._net[-1] # layer below
bridge = self._net[i] # bridge information
self._net.append( self.up_layer(prev_layer, f, bridge, name=name) )
# make an output layer with a 1x1 convolution
logits = self.conv_layer_1x1(self._net[-1], self.n_outputs)
logger.info('Output layer -> shape {0:s}'.format(str(logits.shape)))
# append this layer for completeness
self._net.append(logits)
logger.info('...Done')
return logits
def down_layer(self, input_layer, filters, name=None):
""" down_layer
A down layer of the UNet. These are characterised by a series of
convolution and ReLu operations, followed by a max pool to down
sample to the next layer. A layer here is defined as 2x
[3x3 convolution, ReLu]
Tensor shape is often of the format: NHWC
"""
conv1 = self.conv_layer(input_layer, filters)
conv2 = self.conv_layer(conv1, filters)
logger.info('Down layer -> shape {0:s}'.format(str(conv2.shape)))
# Dropout
drop = tf.layers.dropout(inputs=conv2,
rate=self.dropout,
training=self.training)
return drop
def up_layer(self, input_layer, filters, bridge, name=None):
""" up_layer
These are characterised by a series of convolution and ReLu
operations, followed by a transpose deconvolution to up sample to the
next layer.
Tensor shape is often of the format: NHWC
"""
logger.info('Up layer -> shape {0:s} (bridge: {1:s})'
.format(str(input_layer.shape), self.bridge_type))
# scale up the image
upscale = self.conv_transpose_layer(input_layer, filters)
# now we need to incorporate the filters using the bridge
bridge = self.bridge(upscale, bridge)
# do the convolutions
conv1 = self.conv_layer(bridge, filters)
conv2 = self.conv_layer(conv1, filters)
# dropout
drop = tf.layers.dropout(inputs=conv2,
rate=self.dropout,
training=self.training)
return drop
def reshape_input(self, features):
""" Reshape the input layer from the dataset features """
raise NotImplementedError
def conv_layer(self, input_layer, filters):
""" Convolution layer, conv-relu with padding """
raise NotImplementedError
def conv_layer_1x1(self, input_layer, filters):
""" Return a 1x1 convolution layer """
raise NotImplementedError
def conv_transpose_layer(self, input_layer, filters):
""" Transpose convolution (aka deconvolution) layer """
raise NotImplementedError
def max_pool_layer(self, input_layer):
""" Max pool operation """
raise NotImplementedError
if __name__ == "__main__":
pass
| 32.53406 | 80 | 0.619305 | 3,036 | 23,880 | 4.774704 | 0.166337 | 0.017936 | 0.0129 | 0.011727 | 0.776145 | 0.749034 | 0.733237 | 0.730201 | 0.724269 | 0.709782 | 0 | 0.014139 | 0.28325 | 23,880 | 733 | 81 | 32.578445 | 0.832788 | 0.092839 | 0 | 0.599291 | 0 | 0 | 0.076725 | 0 | 0 | 0 | 0 | 0.006821 | 0 | 0 | null | null | 0.003546 | 0.021277 | null | null | 0.003546 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
13503d6193197e08b3507c9eff5c699cba17b5fe | 38 | py | Python | testsuite/modulegraph-dir/package/renamed_attr.py | xoviat/modulegraph2 | 766d00bdb40e5b2fe206b53a87b1bce3f9dc9c2a | [
"MIT"
] | 9 | 2020-03-22T14:48:01.000Z | 2021-05-30T12:18:12.000Z | testsuite/modulegraph-dir/package/renamed_attr.py | xoviat/modulegraph2 | 766d00bdb40e5b2fe206b53a87b1bce3f9dc9c2a | [
"MIT"
] | 15 | 2020-01-06T10:02:32.000Z | 2021-05-28T12:22:44.000Z | testsuite/modulegraph-dir/package/renamed_attr.py | ronaldoussoren/modulegraph2 | b6ab1766b0098651b51083235ff8a18a5639128b | [
"MIT"
] | 4 | 2020-05-10T18:51:41.000Z | 2021-04-07T14:03:12.000Z | from .renamed_package import the_path
| 19 | 37 | 0.868421 | 6 | 38 | 5.166667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 38 | 1 | 38 | 38 | 0.911765 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
13d730380a4f8ecc3dfa8459b73260548747035f | 2,944 | py | Python | vsphere/datadog_checks/vsphere/config_models/defaults.py | kjmadscience/integrations-core | 663bdf44730dd6c9f3565c121318b320bfcb4988 | [
"BSD-3-Clause"
] | null | null | null | vsphere/datadog_checks/vsphere/config_models/defaults.py | kjmadscience/integrations-core | 663bdf44730dd6c9f3565c121318b320bfcb4988 | [
"BSD-3-Clause"
] | null | null | null | vsphere/datadog_checks/vsphere/config_models/defaults.py | kjmadscience/integrations-core | 663bdf44730dd6c9f3565c121318b320bfcb4988 | [
"BSD-3-Clause"
] | null | null | null | # (C) Datadog, Inc. 2021-present
# All rights reserved
# Licensed under a 3-clause BSD style license (see LICENSE)
# This file is autogenerated.
# To change this file you should edit assets/configuration/spec.yaml and then run the following commands:
# ddev -x validate config -s <INTEGRATION_NAME>
# ddev -x validate models -s <INTEGRATION_NAME>
from datadog_checks.base.utils.models.fields import get_default_field_value
def shared_rest_api_options(field, value):
return get_default_field_value(field, value)
def shared_service(field, value):
return get_default_field_value(field, value)
def instance_attributes_prefix(field, value):
return ''
def instance_batch_property_collector_size(field, value):
return 500
def instance_batch_tags_collector_size(field, value):
return 200
def instance_collect_attributes(field, value):
return False
def instance_collect_events(field, value):
return get_default_field_value(field, value)
def instance_collect_events_only(field, value):
return False
def instance_collect_per_instance_filters(field, value):
return get_default_field_value(field, value)
def instance_collect_tags(field, value):
return False
def instance_collection_level(field, value):
return 1
def instance_collection_type(field, value):
return 'realtime'
def instance_disable_generic_tags(field, value):
return False
def instance_excluded_host_tags(field, value):
return []
def instance_include_datastore_cluster_folder_tag(field, value):
return True
def instance_max_historical_metrics(field, value):
return 256
def instance_metric_filters(field, value):
return get_default_field_value(field, value)
def instance_metric_patterns(field, value):
return get_default_field_value(field, value)
def instance_metrics_per_query(field, value):
return 500
def instance_min_collection_interval(field, value):
return 15
def instance_refresh_infrastructure_cache_interval(field, value):
return 300
def instance_refresh_metrics_metadata_cache_interval(field, value):
return 1800
def instance_resource_filters(field, value):
return get_default_field_value(field, value)
def instance_rest_api_options(field, value):
return get_default_field_value(field, value)
def instance_service(field, value):
return get_default_field_value(field, value)
def instance_ssl_capath(field, value):
return get_default_field_value(field, value)
def instance_ssl_verify(field, value):
return True
def instance_tags(field, value):
return get_default_field_value(field, value)
def instance_tags_prefix(field, value):
return ''
def instance_threads_count(field, value):
return 4
def instance_tls_ignore_warning(field, value):
return False
def instance_use_collect_events_fallback(field, value):
return False
def instance_use_guest_hostname(field, value):
return False
| 20.587413 | 105 | 0.780571 | 404 | 2,944 | 5.371287 | 0.30198 | 0.258065 | 0.243318 | 0.110599 | 0.570046 | 0.511521 | 0.424885 | 0.323502 | 0.323502 | 0.323502 | 0 | 0.011187 | 0.149796 | 2,944 | 142 | 106 | 20.732394 | 0.855773 | 0.115489 | 0 | 0.358209 | 1 | 0 | 0.003082 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.492537 | false | 0 | 0.014925 | 0.492537 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
b981c276e15cb0c41cd3b8b2117df6570ec6b71d | 24,197 | py | Python | zstackwoodpecker/zstackwoodpecker/zstack_test/kvm_checker/kvm_checker_factory.py | bgerxx/woodpecker | fdc51245945cc9be4d1f028988079213eb99b2ad | [
"Apache-2.0"
] | null | null | null | zstackwoodpecker/zstackwoodpecker/zstack_test/kvm_checker/kvm_checker_factory.py | bgerxx/woodpecker | fdc51245945cc9be4d1f028988079213eb99b2ad | [
"Apache-2.0"
] | null | null | null | zstackwoodpecker/zstackwoodpecker/zstack_test/kvm_checker/kvm_checker_factory.py | bgerxx/woodpecker | fdc51245945cc9be4d1f028988079213eb99b2ad | [
"Apache-2.0"
] | null | null | null | '''
Zstack KVM Checker Factory.
@author: YYK
'''
import zstackwoodpecker.test_lib as test_lib
import zstackwoodpecker.header.vm as vm_header
import zstackwoodpecker.header.volume as volume_header
import zstackwoodpecker.header.image as image_header
import zstackwoodpecker.header.security_group as sg_header
import zstackwoodpecker.header.port_forwarding as pf_header
import zstackwoodpecker.header.vip as vip_header
import zstackwoodpecker.header.load_balancer as lb_header
import zstackwoodpecker.header.checker as checker_header
import zstackwoodpecker.zstack_test.zstack_checker.zstack_db_checker as db_checker
import zstackwoodpecker.zstack_test.kvm_checker.zstack_kvm_vm_checker as vm_checker
import zstackwoodpecker.zstack_test.kvm_checker.zstack_kvm_volume_checker as volume_checker
import zstackwoodpecker.zstack_test.kvm_checker.zstack_kvm_share_volume_checker as share_volume_checker
import zstackwoodpecker.zstack_test.kvm_checker.zstack_kvm_image_checker as image_checker
import zstackwoodpecker.zstack_test.kvm_checker.zstack_kvm_security_group_checker as sg_checker
import zstackwoodpecker.zstack_test.kvm_checker.zstack_kvm_port_forwarding_checker as pf_checker
import zstackwoodpecker.zstack_test.kvm_checker.zstack_kvm_host_checker as host_checker
import zstackwoodpecker.zstack_test.kvm_checker.zstack_kvm_eip_checker as eip_checker
import zstackwoodpecker.zstack_test.kvm_checker.zstack_kvm_vip_checker as vip_checker
import zstackwoodpecker.zstack_test.kvm_checker.zstack_kvm_snapshot_checker as sp_checker
import zstackwoodpecker.zstack_test.kvm_checker.zstack_kvm_load_balancer_checker as lb_checker
import zstackwoodpecker.test_util as test_util
import apibinding.inventory as inventory
class KvmVmCheckerFactory(checker_header.CheckerFactory):
def create_checker(self, test_obj):
kvm_vm_checker_chain = checker_header.CheckerChain()
checker_dict = {}
if test_obj.state == vm_header.RUNNING:
checker_dict[vm_checker.zstack_kvm_vm_set_host_vlan_ip] = True
checker_dict[db_checker.zstack_vm_db_checker] = True
checker_dict[vm_checker.zstack_kvm_vm_running_checker] = True
            #if the VM is behind a VR (virtual router)
vrs = test_lib.lib_find_vr_by_vm(test_obj.vm)
if vrs:
svr_types = test_lib.lib_get_l3s_service_type(test_obj.vm)
#The first DHCP checker will wait for VM start up.
if 'DHCP' in svr_types and not test_lib.lib_get_flat_dhcp_by_vm(test_obj.vm):
checker_dict[vm_checker.zstack_kvm_vm_dhcp_checker] = True
checker_dict[vm_checker.zstack_kvm_vm_network_checker] = True
                    #if the guest can't get an IP address from DHCP, the
                    # automated case cannot test the DNS feature.
if 'DNS' in svr_types:
checker_dict[vm_checker.zstack_kvm_vm_dns_checker] \
= True
else:
checker_dict[vm_checker.zstack_kvm_vm_dns_checker] \
= False
elif 'DHCP' in svr_types and test_lib.lib_get_flat_dhcp_by_vm(test_obj.vm) and test_lib.lib_find_vr_by_vm(test_obj.vm):
checker_dict[vm_checker.zstack_kvm_vm_dhcp_checker] = False
checker_dict[vm_checker.zstack_kvm_vm_network_checker] = True
else:
checker_dict[vm_checker.zstack_kvm_vm_dhcp_checker] = False
checker_dict[vm_checker.zstack_kvm_vm_network_checker] \
= False
if 'SNAT' in svr_types:
checker_dict[vm_checker.zstack_kvm_vm_snat_checker] = True
else:
checker_dict[vm_checker.zstack_kvm_vm_snat_checker] = False
#if 'PortForwarding' in svr_types:
# checker_dict[vm_checker.zstack_kvm_vm_dnat_checker] = True
#else:
# checker_dict[vm_checker.zstack_kvm_vm_dnat_checker] = False
else:
sp_types = test_lib.lib_get_vm_l3_service_provider_types(test_obj.vm)
if 'Flat' in sp_types:
checker_dict[vm_checker.zstack_kvm_vm_ssh_no_vr_checker] = True
if test_obj.get_creation_option().get_default_l3_uuid():
checker_dict[vm_checker.zstack_kvm_vm_default_l3_checker] = True
elif test_obj.state == vm_header.STOPPED:
checker_dict[db_checker.zstack_vm_db_checker] = True
#stopped_checker is deprecated, since the stopped vm will be removed
#from host.
#checker_dict[vm_checker.zstack_kvm_vm_stopped_checker] = True
elif test_obj.state == vm_header.PAUSED:
checker_dict[db_checker.zstack_vm_db_checker] = True
checker_dict[vm_checker.zstack_kvm_vm_suspended_checker] = True
elif test_obj.state == vm_header.DESTROYED:
            #Destroying a VM removes its record from the DB once VmExpungeInterval
            # (e.g. set to 1) elapses, so in most cases the destroyed state does
            # not need a separate DB sync check.
checker_dict[db_checker.zstack_vm_db_checker] = True
checker_dict[vm_checker.zstack_kvm_vm_destroyed_checker] = True
elif test_obj.state == vm_header.EXPUNGED:
checker_dict[db_checker.zstack_vm_db_checker] = True
kvm_vm_checker_chain.add_checker_dict(checker_dict, test_obj)
return kvm_vm_checker_chain
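# Typical usage (a sketch; the CheckerChain method name is an assumption): a
# test case builds the chain for the current object state and then runs it,
# e.g. KvmVmCheckerFactory().create_checker(test_vm).check()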
class KvmVolumeCheckerFactory(checker_header.CheckerFactory):
def create_checker(self, test_obj):
kvm_volume_checker_chain = checker_header.CheckerChain()
checker_dict = {}
if test_obj.state == volume_header.CREATED:
checker_dict[db_checker.zstack_volume_db_checker] = True
checker_dict[volume_checker.zstack_kvm_volume_file_checker] = False
elif test_obj.state == volume_header.ATTACHED:
checker_dict[db_checker.zstack_volume_db_checker] = True
checker_dict[volume_checker.zstack_kvm_volume_file_checker] = True
if not test_obj.target_vm.state == vm_header.DESTROYED:
checker_dict[db_checker.zstack_volume_attach_db_checker] = True
if test_obj.target_vm.state == vm_header.RUNNING:
checker_dict[volume_checker.zstack_kvm_volume_attach_checker] = True
else:
checker_dict[db_checker.zstack_volume_attach_db_checker] = False
elif test_obj.state == volume_header.DETACHED:
checker_dict[db_checker.zstack_volume_db_checker] = True
checker_dict[db_checker.zstack_volume_attach_db_checker] = False
checker_dict[volume_checker.zstack_kvm_volume_attach_checker] = False
checker_dict[volume_checker.zstack_kvm_volume_file_checker] = True
elif test_obj.state == volume_header.DELETED:
checker_dict[db_checker.zstack_volume_db_checker] = True
checker_dict[volume_checker.zstack_kvm_volume_file_checker] = True
elif test_obj.state == volume_header.EXPUNGED:
checker_dict[db_checker.zstack_volume_db_checker] = False
checker_dict[volume_checker.zstack_kvm_volume_file_checker] = False
kvm_volume_checker_chain.add_checker_dict(checker_dict, test_obj)
return kvm_volume_checker_chain
class KvmSharableVolumeCheckerFactory(checker_header.CheckerFactory):
def create_checker(self, test_obj):
kvm_volume_checker_chain = checker_header.CheckerChain()
checker_dict = {}
if test_obj.state == volume_header.CREATED:
checker_dict[db_checker.zstack_volume_db_checker] = True
checker_dict[share_volume_checker.zstack_kvm_share_volume_file_checker] = False
elif test_obj.state == volume_header.ATTACHED:
checker_dict[db_checker.zstack_volume_db_checker] = True
checker_dict[share_volume_checker.zstack_kvm_share_volume_file_checker] = True
if not test_obj.target_vm.state == vm_header.DESTROYED:
checker_dict[db_checker.zstack_share_volume_attach_db_checker] = True
if test_obj.target_vm.state == vm_header.RUNNING:
checker_dict[share_volume_checker.zstack_kvm_share_volume_attach_checker] = True
checker_dict[share_volume_checker.zstack_kvm_virtioscsi_shareable_checker] = True
else:
checker_dict[db_checker.zstack_share_volume_attach_db_checker] = False
elif test_obj.state == volume_header.DETACHED:
checker_dict[db_checker.zstack_volume_db_checker] = True
checker_dict[db_checker.zstack_share_volume_attach_db_checker] = False
checker_dict[share_volume_checker.zstack_kvm_share_volume_attach_checker] = False
checker_dict[share_volume_checker.zstack_kvm_share_volume_file_checker] = True
elif test_obj.state == volume_header.DELETED:
checker_dict[db_checker.zstack_volume_db_checker] = True
checker_dict[share_volume_checker.zstack_kvm_share_volume_file_checker] = True
elif test_obj.state == volume_header.EXPUNGED:
checker_dict[db_checker.zstack_volume_db_checker] = False
checker_dict[share_volume_checker.zstack_kvm_share_volume_file_checker] = False
kvm_volume_checker_chain.add_checker_dict(checker_dict, test_obj)
return kvm_volume_checker_chain
class KvmImageCheckerFactory(checker_header.CheckerFactory):
def create_checker(self, test_obj):
kvm_image_checker_chain = checker_header.CheckerChain()
checker_dict = {}
if test_obj.state == image_header.CREATED:
checker_dict[db_checker.zstack_image_db_checker] = True
checker_dict[image_checker.zstack_kvm_image_file_checker] = True
if test_obj.state == image_header.DELETED:
checker_dict[db_checker.zstack_image_db_checker] = True
checker_dict[image_checker.zstack_kvm_image_file_checker] = True
if test_obj.state == image_header.EXPUNGED:
checker_dict[db_checker.zstack_image_db_checker] = False
checker_dict[image_checker.zstack_kvm_image_file_checker] = False
kvm_image_checker_chain.add_checker_dict(checker_dict, test_obj)
return kvm_image_checker_chain
class KvmSecurityGroupCheckerFactory(checker_header.CheckerFactory):
def create_checker(self, test_obj):
kvm_sg_checker_chain = checker_header.CheckerChain()
checker_dict = {}
for nic_uuid in test_obj.get_all_nics():
target_vm = test_obj.get_vm_by_nic(nic_uuid)
if target_vm.state == vm_header.RUNNING:
if test_lib.lib_is_vm_sim(target_vm.vm):
kvm_sg_checker_chain.add_checker(db_checker.zstack_sg_db_checker(True), test_obj)
continue
if not test_lib.lib_is_vm_kvm(target_vm.vm):
continue
if test_obj.get_nic_tcp_ingress_rules(nic_uuid):
checker = sg_checker.zstack_kvm_sg_tcp_ingress_exist_checker()
checker.set_nic_uuid(nic_uuid)
kvm_sg_checker_chain.add_checker(checker, True, test_obj)
checker = sg_checker.zstack_kvm_sg_tcp_ingress_checker()
checker.set_nic_uuid(nic_uuid)
kvm_sg_checker_chain.add_checker(checker, True, test_obj)
checker = sg_checker.zstack_kvm_sg_tcp_internal_vms_checker()
checker.set_nic_uuid(nic_uuid)
kvm_sg_checker_chain.add_checker(checker, True, test_obj)
else:
checker = sg_checker.zstack_kvm_sg_tcp_ingress_exist_checker()
checker.set_nic_uuid(nic_uuid)
kvm_sg_checker_chain.add_checker(checker, False, test_obj)
if test_obj.get_nic_tcp_egress_rules(nic_uuid):
checker = sg_checker.zstack_kvm_sg_tcp_egress_exist_checker()
checker.set_nic_uuid(nic_uuid)
kvm_sg_checker_chain.add_checker(checker, True, test_obj)
checker = sg_checker.zstack_kvm_sg_tcp_egress_checker()
checker.set_nic_uuid(nic_uuid)
kvm_sg_checker_chain.add_checker(checker, True, test_obj)
if not test_obj.get_nic_tcp_ingress_rules(nic_uuid):
checker = sg_checker.zstack_kvm_sg_tcp_internal_vms_checker()
checker.set_nic_uuid(nic_uuid)
kvm_sg_checker_chain.add_checker(checker, True, test_obj)
else:
checker = sg_checker.zstack_kvm_sg_tcp_egress_exist_checker()
checker.set_nic_uuid(nic_uuid)
kvm_sg_checker_chain.add_checker(checker, False, test_obj)
if test_obj.get_nic_udp_ingress_rules(nic_uuid):
checker = sg_checker.zstack_kvm_sg_udp_ingress_checker()
checker.set_nic_uuid(nic_uuid)
kvm_sg_checker_chain.add_checker(checker, True, test_obj)
else:
checker = sg_checker.zstack_kvm_sg_udp_ingress_checker()
checker.set_nic_uuid(nic_uuid)
kvm_sg_checker_chain.add_checker(checker, False, test_obj)
if test_obj.get_nic_udp_egress_rules(nic_uuid):
checker = sg_checker.zstack_kvm_sg_udp_egress_checker()
checker.set_nic_uuid(nic_uuid)
kvm_sg_checker_chain.add_checker(checker, True, test_obj)
else:
checker = sg_checker.zstack_kvm_sg_udp_egress_checker()
checker.set_nic_uuid(nic_uuid)
kvm_sg_checker_chain.add_checker(checker, False, test_obj)
if test_obj.get_nic_icmp_ingress_rules(nic_uuid):
checker = sg_checker.zstack_kvm_sg_icmp_ingress_exist_checker()
checker.set_nic_uuid(nic_uuid)
kvm_sg_checker_chain.add_checker(checker, True, test_obj)
checker = sg_checker.zstack_kvm_sg_icmp_ingress_checker()
checker.set_nic_uuid(nic_uuid)
kvm_sg_checker_chain.add_checker(checker, True, test_obj)
checker = sg_checker.zstack_kvm_sg_icmp_internal_vms_checker()
checker.set_nic_uuid(nic_uuid)
kvm_sg_checker_chain.add_checker(checker, True, test_obj)
else:
checker = sg_checker.zstack_kvm_sg_icmp_ingress_exist_checker()
checker.set_nic_uuid(nic_uuid)
kvm_sg_checker_chain.add_checker(checker, False, test_obj)
if test_obj.get_nic_icmp_egress_rules(nic_uuid):
checker = sg_checker.zstack_kvm_sg_icmp_egress_exist_checker()
checker.set_nic_uuid(nic_uuid)
kvm_sg_checker_chain.add_checker(checker, True, test_obj)
checker = sg_checker.zstack_kvm_sg_icmp_egress_checker()
checker.set_nic_uuid(nic_uuid)
kvm_sg_checker_chain.add_checker(checker, True, test_obj)
#if not test_obj.get_nic_icmp_ingress_rules(nic_uuid):
# checker = sg_checker.zstack_kvm_sg_icmp_internal_vms_checker()
# checker.set_nic_uuid(nic_uuid)
# kvm_sg_checker_chain.add_checker(checker, True, test_obj)
else:
checker = sg_checker.zstack_kvm_sg_icmp_egress_exist_checker()
checker.set_nic_uuid(nic_uuid)
kvm_sg_checker_chain.add_checker(checker, False, test_obj)
else:
#TODO: only do iptables rules check
checker = sg_checker.zstack_kvm_sg_tcp_ingress_exist_checker()
checker.set_nic_uuid(nic_uuid)
kvm_sg_checker_chain.add_checker(checker, False, test_obj)
checker = sg_checker.zstack_kvm_sg_tcp_egress_exist_checker()
checker.set_nic_uuid(nic_uuid)
kvm_sg_checker_chain.add_checker(checker, False, test_obj)
checker = sg_checker.zstack_kvm_sg_icmp_egress_exist_checker()
checker.set_nic_uuid(nic_uuid)
kvm_sg_checker_chain.add_checker(checker, False, test_obj)
checker = sg_checker.zstack_kvm_sg_icmp_ingress_exist_checker()
checker.set_nic_uuid(nic_uuid)
kvm_sg_checker_chain.add_checker(checker, False, test_obj)
checker = sg_checker.zstack_kvm_sg_udp_ingress_checker()
checker.set_nic_uuid(nic_uuid)
kvm_sg_checker_chain.add_checker(checker, False, test_obj)
checker = sg_checker.zstack_kvm_sg_udp_egress_checker()
checker.set_nic_uuid(nic_uuid)
kvm_sg_checker_chain.add_checker(checker, False, test_obj)
for test_vm in test_obj.get_detached_vm():
vm = test_vm.vm
if not test_lib.lib_is_vm_kvm(vm):
continue
checker = sg_checker.zstack_kvm_sg_tcp_ingress_exist_checker()
checker.set_vm(vm)
kvm_sg_checker_chain.add_checker(checker, False, test_obj)
checker = sg_checker.zstack_kvm_sg_tcp_egress_exist_checker()
checker.set_vm(vm)
kvm_sg_checker_chain.add_checker(checker, False, test_obj)
checker = sg_checker.zstack_kvm_sg_icmp_egress_exist_checker()
checker.set_vm(vm)
kvm_sg_checker_chain.add_checker(checker, False, test_obj)
checker = sg_checker.zstack_kvm_sg_icmp_ingress_exist_checker()
checker.set_vm(vm)
kvm_sg_checker_chain.add_checker(checker, False, test_obj)
checker = sg_checker.zstack_kvm_sg_udp_ingress_checker()
checker.set_vm(vm)
kvm_sg_checker_chain.add_checker(checker, False, test_obj)
checker = sg_checker.zstack_kvm_sg_udp_egress_checker()
checker.set_vm(vm)
kvm_sg_checker_chain.add_checker(checker, False, test_obj)
return kvm_sg_checker_chain
class KvmPortForwardingCheckerFactory(checker_header.CheckerFactory):
def create_checker(self, test_obj):
kvm_pf_checker_chain = checker_header.CheckerChain()
checker_dict = {}
pf_rule = test_obj.get_port_forwarding()
if test_obj.get_state() == pf_header.ATTACHED and \
test_obj.get_target_vm().get_state() == vm_header.RUNNING:
if pf_rule.protocolType == inventory.TCP:
checker_dict[pf_checker.zstack_kvm_pf_tcp_checker] = True
if pf_rule.protocolType == inventory.UDP:
checker_dict[pf_checker.zstack_kvm_pf_rule_exist_checker] = True
elif test_obj.get_state() == pf_header.ATTACHED and test_obj.get_target_vm().get_state() == vm_header.STOPPED:
checker_dict[pf_checker.zstack_kvm_pf_vip_icmp_checker] = False
if pf_rule.protocolType == inventory.TCP:
checker_dict[pf_checker.zstack_kvm_pf_tcp_checker] = False
elif test_obj.get_state() == pf_header.DETACHED:
checker_dict[pf_checker.zstack_kvm_pf_vip_icmp_checker] = False
kvm_pf_checker_chain.add_checker_dict(checker_dict, test_obj)
return kvm_pf_checker_chain
class HostCheckerFactory(checker_header.CheckerFactory):
def create_checker(self, test_obj):
host_checker_chain = checker_header.CheckerChain()
checker = host_checker.zstack_kvm_host_checker()
host_checker_chain.add_checker(checker, True, test_obj)
return host_checker_chain
class EipCheckerFactory(checker_header.CheckerFactory):
def create_checker(self, test_obj):
eip_checker_chain = checker_header.CheckerChain()
checker = eip_checker.eip_checker()
eip_checker_chain.add_checker(checker, True, test_obj)
return eip_checker_chain
class VipCheckerFactory(checker_header.CheckerFactory):
def create_checker(self, test_obj):
vip_checker_chain = checker_header.CheckerChain()
if test_obj.get_state() == vip_header.ATTACHED:
if test_obj.get_use_for() == vip_header.PortForwarding:
checker = vip_checker.vip_used_for_checker()
checker.set_target_use_for(vip_header.PortForwarding)
vip_checker_chain.add_checker(checker, True, test_obj)
vip_checker_chain.add_checker(vip_checker.pf_checker(), True, test_obj)
for pf in test_obj.get_pf_list_for_running_vm():
vip_checker_chain.add_checker(pf_checker.zstack_kvm_pf_rule_exist_checker(), True, pf)
for pf in test_obj.get_pf_list_for_stopped_vm():
#vip_checker_chain.add_checker(pf_checker.zstack_kvm_pf_rule_exist_checker(), True, pf)
pass
elif test_obj.get_use_for() == vip_header.Eip:
checker = vip_checker.vip_used_for_checker()
checker.set_target_use_for(vip_header.Eip)
vip_checker_chain.add_checker(checker, True, test_obj)
vip_checker_chain.add_checker(vip_checker.eip_checker(), True, test_obj)
elif test_obj.get_state() == vip_header.DETACHED:
vip_checker_chain.add_checker(vip_checker.vip_icmp_checker(), False, test_obj)
elif test_obj.get_state() == vip_header.CREATED:
vip_checker_chain.add_checker(vip_checker.vip_icmp_checker(), False, test_obj)
elif test_obj.get_state() == vip_header.DELETED:
vip_checker_chain.add_checker(vip_checker.vip_icmp_checker(), False, test_obj)
return vip_checker_chain
class SnapshotCheckerFactory(checker_header.CheckerFactory):
def create_checker(self, test_obj):
sp_checker_chain = checker_header.CheckerChain()
if test_obj.get_target_volume().get_volume():
#target volume is not deleted.
sp_checker_chain.add_checker(\
sp_checker.zstack_kvm_snapshot_checker(), True, test_obj)
ps_uuid = test_obj.get_target_volume().get_volume().primaryStorageUuid
if test_lib.lib_is_ps_iscsi_backend(ps_uuid):
sp_checker_chain.add_checker(\
sp_checker.zstack_kvm_snapshot_tree_checker(), True, \
test_obj)
if test_obj.get_backuped_snapshots():
sp_checker_chain.add_checker(\
sp_checker.zstack_kvm_backuped_snapshot_checker(), \
True, test_obj)
return sp_checker_chain
class LoadBalancerCheckerFactory(checker_header.CheckerFactory):
def create_checker(self, test_obj):
lb_checker_chain = checker_header.CheckerChain()
if test_obj.get_state() != lb_header.DELETED:
lb_checker_chain.add_checker(db_checker.zstack_lb_db_checker(), \
True, test_obj)
for lbl in test_obj.get_load_balancer_listeners().values():
if lbl.get_state() != lb_header.DELETED:
checker = lb_checker.zstack_kvm_lbl_checker()
checker.set_lbl(lbl)
lb_checker_chain.add_checker(checker, True, test_obj)
if test_obj.get_load_balancer_listeners():
if test_obj.is_separated_vr():
lb_checker_chain.add_checker(\
db_checker.zstack_alone_lb_vr_db_checker(),\
True, test_obj)
else:
lb_checker_chain.add_checker(\
db_checker.zstack_alone_lb_vr_db_checker(),\
False, test_obj)
else:
lb_checker_chain.add_checker(db_checker.zstack_lb_db_checker(), \
False, test_obj)
return lb_checker_chain
| 52.716776 | 169 | 0.678059 | 3,089 | 24,197 | 4.814503 | 0.057948 | 0.059777 | 0.098978 | 0.08432 | 0.844137 | 0.812063 | 0.79727 | 0.77145 | 0.747647 | 0.652838 | 0 | 0.000279 | 0.259495 | 24,197 | 458 | 170 | 52.831878 | 0.829724 | 0.041451 | 0 | 0.571038 | 0 | 0 | 0.00082 | 0 | 0 | 0 | 0 | 0.002183 | 0 | 1 | 0.030055 | false | 0.002732 | 0.062842 | 0 | 0.153005 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b983aa2e62c855edd7228f7201aa25a8409f3655 | 28 | py | Python | metasub_utils/pangea/metasub_utils/pangea/__init__.py | MetaSUB/metasub_utils | c52c5dde816d710db5ac8dc6f8804bb795a992e4 | [
"MIT"
] | 8 | 2018-12-30T23:35:03.000Z | 2022-02-22T09:43:48.000Z | metasub_utils/pangea/metasub_utils/pangea/__init__.py | MetaSUB/metasub_utils | c52c5dde816d710db5ac8dc6f8804bb795a992e4 | [
"MIT"
] | 5 | 2019-01-05T04:54:46.000Z | 2021-03-10T08:59:16.000Z | metasub_utils/pangea/metasub_utils/pangea/__init__.py | MetaSUB/metasub_utils | c52c5dde816d710db5ac8dc6f8804bb795a992e4 | [
"MIT"
] | 2 | 2019-08-26T22:08:18.000Z | 2020-02-24T19:57:17.000Z |
from .sample import Sample
| 9.333333 | 26 | 0.785714 | 4 | 28 | 5.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.178571 | 28 | 2 | 27 | 14 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b9956fc1c60e4bb0f5e282dc30e1fc02be3c7c8a | 228 | py | Python | haystack/nodes/query_classifier/__init__.py | mapapa/haystack | 79fdda8a7cf393d774803608a4874f2a6e63cf6f | [
"Apache-2.0"
] | 7 | 2022-01-22T18:58:54.000Z | 2022-03-18T17:06:35.000Z | haystack/nodes/query_classifier/__init__.py | mapapa/haystack | 79fdda8a7cf393d774803608a4874f2a6e63cf6f | [
"Apache-2.0"
] | 17 | 2021-12-08T18:00:58.000Z | 2021-12-28T14:03:27.000Z | haystack/nodes/query_classifier/__init__.py | mapapa/haystack | 79fdda8a7cf393d774803608a4874f2a6e63cf6f | [
"Apache-2.0"
] | 1 | 2022-01-05T15:24:36.000Z | 2022-01-05T15:24:36.000Z | from haystack.nodes.query_classifier.base import BaseQueryClassifier
from haystack.nodes.query_classifier.sklearn import SklearnQueryClassifier
from haystack.nodes.query_classifier.transformers import TransformersQueryClassifier | 76 | 84 | 0.912281 | 24 | 228 | 8.541667 | 0.5 | 0.17561 | 0.24878 | 0.321951 | 0.468293 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.048246 | 228 | 3 | 84 | 76 | 0.9447 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
b9ad5777aae124b55efa984d2eb115ef9278d926 | 111 | py | Python | _main/home/pi/kmori-Pla-Rail-Scripts/clear.py | Kumapapa2012/Raspberry-Pi-Zero-on-PLA-RAIL | 2dcbd2f11bd92ef1b9be569c2a97c9117d984cef | [
"MIT"
] | null | null | null | _main/home/pi/kmori-Pla-Rail-Scripts/clear.py | Kumapapa2012/Raspberry-Pi-Zero-on-PLA-RAIL | 2dcbd2f11bd92ef1b9be569c2a97c9117d984cef | [
"MIT"
] | null | null | null | _main/home/pi/kmori-Pla-Rail-Scripts/clear.py | Kumapapa2012/Raspberry-Pi-Zero-on-PLA-RAIL | 2dcbd2f11bd92ef1b9be569c2a97c9117d984cef | [
"MIT"
] | null | null | null | #!/usr/bin/env python
import utils
print(utils.Clear8830Status_Fault())
print(utils.Set8830Status(0, "Standby"))
| 18.5 | 38 | 0.792793 | 15 | 111 | 5.8 | 0.8 | 0.229885 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088235 | 0.081081 | 111 | 5 | 39 | 22.2 | 0.764706 | 0.18018 | 0 | 0 | 0 | 0 | 0.077778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.333333 | null | null | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
b9cf4ab71ee9cd43c716ddeea1fe3443d077b4fe | 104 | py | Python | pyfbx/exceptions/__init__.py | zhangxinlei-cn/pyfbx | 8b732efdc47057b7b1cb0127a6ee570c7d8984c7 | [
"MIT"
] | null | null | null | pyfbx/exceptions/__init__.py | zhangxinlei-cn/pyfbx | 8b732efdc47057b7b1cb0127a6ee570c7d8984c7 | [
"MIT"
] | null | null | null | pyfbx/exceptions/__init__.py | zhangxinlei-cn/pyfbx | 8b732efdc47057b7b1cb0127a6ee570c7d8984c7 | [
"MIT"
] | null | null | null | from .fbx_exception import FBXException
from .invalid_fbx_file_exception import InvalidFBXFileException
| 34.666667 | 63 | 0.903846 | 12 | 104 | 7.5 | 0.666667 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076923 | 104 | 2 | 64 | 52 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6a125d867f06f4d9dc26466d8dfb1f14ac610515 | 110 | py | Python | peter_py/__main__.py | peter201943/peter-py | 4bc8227cf41ac04d439899ea6497cedde154782c | [
"MIT"
] | null | null | null | peter_py/__main__.py | peter201943/peter-py | 4bc8227cf41ac04d439899ea6497cedde154782c | [
"MIT"
] | null | null | null | peter_py/__main__.py | peter201943/peter-py | 4bc8227cf41ac04d439899ea6497cedde154782c | [
"MIT"
] | null | null | null | from repl import *
from amrename import *
from bcrename import (bc_pjm_rename, rename_song, rename_all_songs)
| 27.5 | 67 | 0.818182 | 17 | 110 | 5 | 0.647059 | 0.235294 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.127273 | 110 | 3 | 68 | 36.666667 | 0.885417 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6a2815a5de0dd45e10fb9f967a00564c138b8692 | 167 | py | Python | 973-K_Closest_Points_to_Origin.py | dingwenzheng730/Leet | c08bd48e8dcc6bca41134d218d39f66bfc112eaf | [
"MIT"
] | 1 | 2021-06-15T21:01:53.000Z | 2021-06-15T21:01:53.000Z | 973-K_Closest_Points_to_Origin.py | dingwenzheng730/Leet | c08bd48e8dcc6bca41134d218d39f66bfc112eaf | [
"MIT"
] | null | null | null | 973-K_Closest_Points_to_Origin.py | dingwenzheng730/Leet | c08bd48e8dcc6bca41134d218d39f66bfc112eaf | [
"MIT"
] | null | null | null | class Solution:
def kClosest(self, points: List[List[int]], k: int) -> List[List[int]]:
return sorted(points, key=lambda x: x[0]**2 + x[1]**2)[:k]
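    # Note: sorting is O(n log n); when k << n a heap keeps it at O(n log k),
    # e.g. (sketch): heapq.nsmallest(k, points, key=lambda p: p[0]**2 + p[1]**2)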
| 41.75 | 75 | 0.568862 | 27 | 167 | 3.518519 | 0.62963 | 0.168421 | 0.231579 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030769 | 0.221557 | 167 | 4 | 76 | 41.75 | 0.7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
6a2ff0f7723b2f8f4ed24fbf202e7755440f4a8e | 74 | py | Python | Python/Tests/TestData/Grammar/TryStmt.py | techkey/PTVS | 8355e67eedd8e915ca49bd38a2f36172696fd903 | [
"Apache-2.0"
] | 695 | 2019-05-06T23:49:37.000Z | 2022-03-30T01:56:00.000Z | Python/Tests/TestData/Grammar/TryStmt.py | techkey/PTVS | 8355e67eedd8e915ca49bd38a2f36172696fd903 | [
"Apache-2.0"
] | 1,672 | 2019-05-06T21:09:38.000Z | 2022-03-31T23:16:04.000Z | Python/Tests/TestData/Grammar/TryStmt.py | techkey/PTVS | 8355e67eedd8e915ca49bd38a2f36172696fd903 | [
"Apache-2.0"
] | 186 | 2019-05-13T03:17:37.000Z | 2022-03-31T16:24:05.000Z | try:
pass
except:
pass
try:
pass
except Exception:
pass
| 6.727273 | 17 | 0.581081 | 9 | 74 | 4.777778 | 0.444444 | 0.325581 | 0.604651 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.364865 | 74 | 10 | 18 | 7.4 | 0.914894 | 0 | 0 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
6a4e1b4e2f15d22d18536c648aa797314b696586 | 142 | py | Python | ilastik_feature_selection/__init__.py | ilastik/ilastik-feature-selection | ffb174890c92a2be80f4335898a628f9fb344a18 | [
"MIT"
] | 10 | 2017-01-12T14:17:18.000Z | 2021-06-10T07:49:34.000Z | ilastik_feature_selection/__init__.py | ilastik/ilastik-feature-selection | ffb174890c92a2be80f4335898a628f9fb344a18 | [
"MIT"
] | 4 | 2016-07-08T01:57:57.000Z | 2021-03-09T14:52:11.000Z | ilastik_feature_selection/__init__.py | ilastik/ilastik-feature-selection | ffb174890c92a2be80f4335898a628f9fb344a18 | [
"MIT"
] | 6 | 2016-10-31T10:07:55.000Z | 2020-01-09T02:41:15.000Z | __author__ = 'fabian'
__all__ = ['filter_feature_selection']
from . import filter_feature_selection
from . import wrapper_feature_selection
| 20.285714 | 39 | 0.816901 | 16 | 142 | 6.375 | 0.5625 | 0.470588 | 0.431373 | 0.509804 | 0.627451 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.112676 | 142 | 6 | 40 | 23.666667 | 0.809524 | 0 | 0 | 0 | 0 | 0 | 0.211268 | 0.169014 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
dbfe75926a344b41c80e188f6e92ed3c5081d5ae | 22 | py | Python | xunitmerge/__init__.py | yoeo/xunitmerge | c5445714e40ac6e8e2a259598c62129e8e17f76a | [
"MIT"
] | 14 | 2016-04-04T15:18:46.000Z | 2021-12-15T11:07:50.000Z | xunitmerge/__init__.py | yoeo/xunitmerge | c5445714e40ac6e8e2a259598c62129e8e17f76a | [
"MIT"
] | 7 | 2016-03-14T18:00:10.000Z | 2021-05-03T02:09:10.000Z | xunitmerge/__init__.py | yoeo/xunitmerge | c5445714e40ac6e8e2a259598c62129e8e17f76a | [
"MIT"
] | 15 | 2015-09-29T08:24:23.000Z | 2021-06-09T11:00:08.000Z | from .xmerge import *
| 11 | 21 | 0.727273 | 3 | 22 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 22 | 1 | 22 | 22 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e06ef0b9ce5021f16fc654f5195182abd2715b6b | 1,057 | py | Python | Advance_Python/Graph/Delete_Edge_Using_Matrix.py | siddharth-143/Python | 293f4643a3a13e3b82d23fd8922db54dbb0f12bc | [
"MIT"
] | null | null | null | Advance_Python/Graph/Delete_Edge_Using_Matrix.py | siddharth-143/Python | 293f4643a3a13e3b82d23fd8922db54dbb0f12bc | [
"MIT"
] | null | null | null | Advance_Python/Graph/Delete_Edge_Using_Matrix.py | siddharth-143/Python | 293f4643a3a13e3b82d23fd8922db54dbb0f12bc | [
"MIT"
] | null | null | null | # Python program to implement deletion operation | delete edge | using adjacency matrix
nodes = []
graph = []
count_node = 0
# delete an edge in an undirected graph; works for both weighted and unweighted graphs
def delete_edge_undirected(v1, v2):
if v1 not in nodes:
print(v1, "is not present in the graph")
elif v2 not in nodes:
print(v2, "is not present in the graph")
else:
index1 = nodes.index(v1) # get the index of v1
index2 = nodes.index(v2) # get the index of v2
graph[index1][index2] = 0
graph[index2][index1] = 0
# delete an edge in a directed graph; works for both weighted and unweighted graphs
def delete_edge_directed(v1, v2):
if v1 not in nodes:
print(v1, "is not present in the graph")
elif v2 not in nodes:
print(v2, "is not present in the graph")
else:
index1 = nodes.index(v1) # get the index of v1
index2 = nodes.index(v2) # get the index of v2
graph[index1][index2] = 0
# graph[index2][index1] = 0
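# example (hypothetical setup -- this module defines no helpers to add nodes
# or edges): with nodes = ['A', 'B'] and graph = [[0, 1], [1, 0]], the call
# below zeroes graph[0][1], removing only the directed edge A -> B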
delete_edge("A", "B")
| 28.567568 | 87 | 0.615894 | 159 | 1,057 | 4.069182 | 0.27673 | 0.092736 | 0.061824 | 0.092736 | 0.763524 | 0.763524 | 0.763524 | 0.763524 | 0.763524 | 0.763524 | 0 | 0.049664 | 0.295175 | 1,057 | 36 | 88 | 29.361111 | 0.818792 | 0.317881 | 0 | 0.782609 | 0 | 0 | 0.154494 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086957 | false | 0 | 0 | 0 | 0.086957 | 0.173913 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0edb1d02b2df9db25d35cd98cfb5a819eb0f147c | 48 | py | Python | Florence/FiniteElements/__init__.py | jdlaubrie/florence | 830dca4a34be00d6e53cbec3007c10d438b27f57 | [
"MIT"
] | 65 | 2017-08-04T10:21:13.000Z | 2022-02-21T21:45:09.000Z | Florence/FiniteElements/__init__.py | jdlaubrie/florence | 830dca4a34be00d6e53cbec3007c10d438b27f57 | [
"MIT"
] | 6 | 2018-06-03T02:29:20.000Z | 2022-01-18T02:30:22.000Z | Florence/FiniteElements/__init__.py | jdlaubrie/florence | 830dca4a34be00d6e53cbec3007c10d438b27f57 | [
"MIT"
] | 10 | 2018-05-30T09:44:10.000Z | 2021-05-18T08:06:51.000Z | from .Assembly import AssembleMass, AssembleForm | 48 | 48 | 0.875 | 5 | 48 | 8.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 48 | 1 | 48 | 48 | 0.954545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
163fb781f7ab99c52379201be16f3d3c8dbd96a4 | 30 | py | Python | src/app/webmin/dvbstreamer/views.py | ivanmurashko/kalinka | 58a3f774c414dfc408aa06f560dde455c2271c6b | [
"MIT"
] | null | null | null | src/app/webmin/dvbstreamer/views.py | ivanmurashko/kalinka | 58a3f774c414dfc408aa06f560dde455c2271c6b | [
"MIT"
] | null | null | null | src/app/webmin/dvbstreamer/views.py | ivanmurashko/kalinka | 58a3f774c414dfc408aa06f560dde455c2271c6b | [
"MIT"
] | null | null | null | from models import *
| 7.5 | 21 | 0.533333 | 3 | 30 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.433333 | 30 | 4 | 22 | 7.5 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
16b70210bc36433501943336d485ff6283a9bd98 | 2,990 | py | Python | aoc12_data.py | zidarsk8/aoc2020 | 21239a8bfd3cba31f16c91c28a176e1163ba4cf9 | [
"Apache-2.0"
] | 1 | 2020-12-02T08:29:50.000Z | 2020-12-02T08:29:50.000Z | aoc12_data.py | zidarsk8/aoc2020 | 21239a8bfd3cba31f16c91c28a176e1163ba4cf9 | [
"Apache-2.0"
] | null | null | null | aoc12_data.py | zidarsk8/aoc2020 | 21239a8bfd3cba31f16c91c28a176e1163ba4cf9 | [
"Apache-2.0"
] | null | null | null | test_data = """
F10
N3
F7
R90
F11
""".strip()
test_data2 = """
F2
R90
F2
R90
F2
R90
""".strip()
test_data3 = """
F2
L90
F2
R180
F2
L90
""".strip()
data = """
N1
R90
S5
R180
N3
W1
L180
F92
R270
E4
F4
W4
W4
L180
S2
W2
F90
E1
S5
W3
F78
S5
R180
F100
N1
W3
L90
L90
N1
F94
W2
R90
F49
W2
F26
R180
W1
S5
R180
W4
S3
R90
W3
S4
E5
S1
F13
N5
R270
E2
R270
S5
F3
E3
F4
S3
R270
S1
W4
R90
S4
L180
N4
F81
W2
R90
F61
R90
F13
N3
R180
W1
F98
S5
F50
W5
S3
W5
R90
F17
S5
F70
F7
E2
F87
E1
L270
F59
E2
R180
N5
F59
L90
N5
W5
F10
N3
E1
R90
W1
S2
R90
N5
F25
R90
E2
F57
R180
E1
N3
W2
F85
L90
F50
W2
R90
S3
R90
F27
E1
S1
L90
F32
L90
W3
R90
E1
F39
S5
E4
F50
W4
L90
F63
N2
F67
W3
R90
F4
N2
R90
F90
N5
L180
F24
E5
N3
L180
F67
E3
L90
S3
F49
R90
E5
F89
W5
F62
F39
F33
W1
R90
F18
S3
R90
N4
F47
N5
N3
W2
S5
L90
E4
L90
W2
R90
W5
L90
W5
N4
F64
R90
S2
W4
R90
N3
F18
L90
S4
L90
F31
S4
L90
F79
R90
F69
N3
E4
F64
N2
E4
R90
F20
R180
E1
F85
W1
S5
S2
F21
R90
F43
N1
F18
S5
R180
F52
L180
W4
F5
L90
F70
S4
N3
R180
F64
R90
F17
R90
E5
F85
N1
F74
E5
F21
N1
F35
N1
F65
W2
F67
N1
E5
F79
S4
R90
F20
R180
W5
L180
S4
F56
S4
L90
E5
F13
S5
F38
W1
S2
L90
N4
E3
R180
W3
N1
R90
F52
N5
F23
E5
F82
E5
S2
E3
N3
S2
L90
N1
R90
S5
F60
W1
N2
W1
N3
E4
F2
E2
L90
S1
L90
E4
N1
R180
E2
R180
F93
F94
L90
S4
E5
R90
F5
S2
E2
S3
E4
R180
F56
E2
N2
F3
R90
W2
F94
W5
F47
L180
F68
E5
F63
S3
E4
F93
L90
S5
L180
W5
S5
W3
L180
F34
R90
F87
W4
S1
W3
R270
S1
E1
F78
E4
R90
F91
W4
S3
W1
F41
N4
E1
F66
S1
W5
F62
N2
W2
L90
W1
F23
L270
N2
W2
S3
F9
R90
F2
E4
F61
L90
W5
N4
F97
L90
F93
N5
L270
R90
W1
R90
R90
N4
E1
F72
N4
R270
F24
W1
F79
S1
E3
N4
E3
L90
W2
S1
R270
W5
F24
E5
S4
F22
L180
F57
S5
R90
N4
W3
F18
N2
R90
E3
F55
N2
R90
S5
F4
W3
L90
N2
W3
L270
E4
R90
F46
S5
N1
F16
N1
R90
F8
L180
N2
W3
N4
E1
S3
L90
F4
E5
N5
E3
R90
F35
N2
F68
F33
E5
F38
E4
F27
R180
S5
F47
R90
F43
R90
S1
F84
L180
F47
R90
N4
E4
F77
R180
N1
E2
S4
F45
S1
L90
E5
F40
L90
W5
F25
W4
R90
F80
N5
E2
F74
W3
N3
E4
F48
N3
R90
N2
W1
L90
S2
F35
L90
E5
R180
W5
N2
E1
L90
N2
F78
S5
R270
S5
R90
N5
E3
L90
S5
F13
S5
F52
L90
N2
R180
E1
F41
S1
F20
N4
F34
N2
F45
E5
L90
W3
L270
N5
F52
R90
N5
E5
N2
W2
W5
R270
W5
F10
N3
F63
N4
F53
L90
E5
L270
F17
N1
L90
F26
F93
R90
S5
R270
S5
R180
N4
F58
L180
F40
S2
F54
N5
F70
W1
N4
W1
L90
W5
R90
N2
R90
S5
F95
W4
L180
E3
F68
S1
F56
R90
W1
L180
F66
R90
S2
F57
L90
E1
F42
S4
F44
L90
F42
E4
R90
S4
W5
R90
E4
S4
E5
F27
R90
N1
R90
E5
R90
W4
S1
F81
N5
R180
S4
E4
F68
S3
L90
E4
E4
L180
E3
F8
W2
L90
S4
L180
N2
L180
E1
R90
W5
N4
W4
R90
F1
S5
E2
L90
F49
N4
W3
R90
E5
F33
R180
S4
E5
S2
F79
W4
F38
R90
F1
L90
F56
L270
N2
L90
E2
L90
F25
W1
S4
L270
W3
R90
N2
F68
E1
R180
W3
R90
W3
R90
S3
F4
W3
N3
R90
W3
N1
F54
W2
S5
E4
F76
F47
N1
F32
L180
L90
F19
N2
E5
L90
E1
L90
E3
R90
F48
R270
S3
R180
S4
F53
R90
F90
E4
F100
L90
F49
N1
W1
F56
E2
N5
L90
F39
R90
W2
F26
E4
N4
L90
F9
L90
F41
W5
N4
S1
W4
N3
R90
N5
L270
F82
L90
F75
S5
F25
S4
F67
N4
F57
E4
N4
F73
W5
L90
E2
R180
N5
L270
W3
F95
W2
S4
E1
R180
N3
W2
N1
F28
N2
R90
E3
S1
F41
E4
N1
R90
F12
L90
N2
S2
E3
F31
W1
L90
E5
S1
F12
R180
W5
R90
F26
""".strip()
| 3.6419 | 16 | 0.709699 | 816 | 2,990 | 2.596814 | 0.143382 | 0.014158 | 0.006607 | 0.009438 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.585569 | 0.276923 | 2,990 | 820 | 17 | 3.646341 | 0.394542 | 0 | 0 | 0.96556 | 0 | 0 | 0.962542 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
bc644958b7c674070aedfaced466003bfc32bd3e | 2,606 | py | Python | courses/python/ssti/tests/test_functional.py | tank1st99/securitygym | 2e4fbdf8002afbe51648706906f0db2c294362a6 | [
"MIT"
] | 49 | 2021-05-20T12:49:28.000Z | 2022-03-13T11:35:03.000Z | courses/python/ssti/tests/test_functional.py | tank1st99/securitygym | 2e4fbdf8002afbe51648706906f0db2c294362a6 | [
"MIT"
] | null | null | null | courses/python/ssti/tests/test_functional.py | tank1st99/securitygym | 2e4fbdf8002afbe51648706906f0db2c294362a6 | [
"MIT"
] | 5 | 2021-05-20T12:58:34.000Z | 2021-12-05T19:08:13.000Z | import uuid
class TestBadgeGeneratorFunctional:
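    # Functional tests: each payload embeds a random username inside template
    # braces or HTML brackets and only asserts that the username substring
    # survives, so both the vulnerable and the patched app must pass them.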
def badge_generation(self, client):
username = str(uuid.uuid4())
response = client.post("/",
data={'username': username})
if ('<p><h3>%s</h3>' % username) not in response.data.decode('utf-8'):
return False, "Badge generation is broken."
return True, "Badge generation - OK"
def badge_generation_with_brace(self, client):
username = str(uuid.uuid4())
payload = str(uuid.uuid4())+"{{ '"+username+"' }}"+str(uuid.uuid4())
response = client.post("/",
data={'username': payload})
if username not in response.data.decode('utf-8'):
return False, "Badge generation with brace is broken."
return True, "Badge generation with brace - OK"
def badge_generation_with_bracket(self, client):
username = str(uuid.uuid4())
payload = "<" + str(uuid.uuid4()) + "/>" + username
response = client.post("/",
data={'username': payload})
if username not in response.data.decode('utf-8'):
return False, "Badge generation with bracket is broken."
return True, "Badge generation with bracket - OK"
def test_vulnerable_badge_generation(self, vulnerable_client):
        (success, _) = self.badge_generation(vulnerable_client)
assert success, 'Badge generation is broken in vulnerable app.'
def test_patched_badge_generation(self, patched_client):
(success, _) = self.badge_generation(patched_client)
assert success, 'Badge generation is broken in patched app.'
def test_vulnerable_badge_generation_with_brace(self, vulnerable_client):
(success,_) = self.badge_generation_with_brace(vulnerable_client)
assert success, 'Badge generation with brace is broken in vulnerable app.'
def test_patched_badge_generation_with_brace(self, patched_client):
(success, _) = self.badge_generation_with_brace(patched_client)
assert success, 'Badge generation with brace is broken in patched app.'
def test_vulnerable_badge_generation_with_bracket(self, vulnerable_client):
(success,_) = self.badge_generation_with_bracket(vulnerable_client)
assert success, 'Badge generation with bracket is broken in vulnerable app.'
def test_patched_badge_generation_with_bracket(self, patched_client):
(success, _) = self.badge_generation_with_bracket(patched_client)
assert success, 'Badge generation with bracket is broken in patched app.'
| 47.381818 | 84 | 0.669225 | 300 | 2,606 | 5.596667 | 0.133333 | 0.241215 | 0.203693 | 0.128648 | 0.945801 | 0.870756 | 0.827874 | 0.696248 | 0.540203 | 0.503276 | 0 | 0.005475 | 0.229087 | 2,606 | 54 | 85 | 48.259259 | 0.830264 | 0 | 0 | 0.232558 | 0 | 0 | 0.217959 | 0 | 0 | 0 | 0 | 0 | 0.139535 | 1 | 0.209302 | false | 0 | 0.023256 | 0 | 0.395349 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
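The brace test above is the SSTI probe: the payload wraps the expected username in a Jinja2-style expression, so the check passes whether the app echoes it literally (patched) or evaluates it as a template (vulnerable). The application under test is not part of this row, so the following is only a hypothetical sketch of the two rendering styles the tests imply (the render_badge_* names are invented):

from jinja2 import Template

def render_badge_vulnerable(username):
    # User input is spliced into the template *source*, so Jinja2 evaluates
    # any {{ ... }} expression the user supplies (server-side template injection).
    return Template("<p><h3>%s</h3>" % username).render()

def render_badge_patched(username):
    # User input is passed as template *data*; braces in it stay literal text.
    return Template("<p><h3>{{ username }}</h3>").render(username=username)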
bc6b8ad16906e0fbec4e67e008eb5f9d2666c90e | 27 | py | Python | qt_tree/__init__.py | ktrk115/qt-tree | fcffc271e0e465faecbb17370e22efe74bcb1d35 | [
"MIT"
] | null | null | null | qt_tree/__init__.py | ktrk115/qt-tree | fcffc271e0e465faecbb17370e22efe74bcb1d35 | [
"MIT"
] | null | null | null | qt_tree/__init__.py | ktrk115/qt-tree | fcffc271e0e465faecbb17370e22efe74bcb1d35 | [
"MIT"
] | null | null | null | from .view import NodeView
| 13.5 | 26 | 0.814815 | 4 | 27 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 27 | 1 | 27 | 27 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bc9fac5632f68e2368db157b901d331dfcaa9978 | 22 | py | Python | pdbtest/__init__.py | zack112358/pdbtest | ccb1ecba2c789417da4b63812dee8cd0fe9e60b7 | [
"Apache-2.0"
] | null | null | null | pdbtest/__init__.py | zack112358/pdbtest | ccb1ecba2c789417da4b63812dee8cd0fe9e60b7 | [
"Apache-2.0"
] | null | null | null | pdbtest/__init__.py | zack112358/pdbtest | ccb1ecba2c789417da4b63812dee8cd0fe9e60b7 | [
"Apache-2.0"
] | null | null | null | from pdbtest import *
| 11 | 21 | 0.772727 | 3 | 22 | 5.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 22 | 1 | 22 | 22 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bcd1f908f41d5b748d8ffb098b56233d577df8c6 | 25 | py | Python | mdx_wavedrom/__init__.py | chiggs/mdx_wavedrom | c8f27da68f873fe15bc6d6616e3e454507a6b2a0 | [
"MIT"
] | 3 | 2015-07-29T04:23:39.000Z | 2021-02-06T19:33:39.000Z | mdx_qrcode/__init__.py | airtonix/python-markdown-qrcode | c61efee77c9d5b5dc8179a89cbe4d870388fe02b | [
"MIT"
] | null | null | null | mdx_qrcode/__init__.py | airtonix/python-markdown-qrcode | c61efee77c9d5b5dc8179a89cbe4d870388fe02b | [
"MIT"
] | 2 | 2018-05-26T14:46:40.000Z | 2020-09-25T16:06:59.000Z | from extension import *
| 8.333333 | 23 | 0.76 | 3 | 25 | 6.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 25 | 2 | 24 | 12.5 | 0.95 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bcf2f729a52ceb737e740f78a178899e36002ea2 | 236 | py | Python | json_sensor/__init__.py | deniz195/json-sensor | c0e55d39bab3be6eca444273fb5436e1eafe8860 | [
"MIT"
] | 1 | 2018-10-30T11:22:33.000Z | 2018-10-30T11:22:33.000Z | json_sensor/__init__.py | deniz195/json-sensor | c0e55d39bab3be6eca444273fb5436e1eafe8860 | [
"MIT"
] | null | null | null | json_sensor/__init__.py | deniz195/json-sensor | c0e55d39bab3be6eca444273fb5436e1eafe8860 | [
"MIT"
] | null | null | null | import asyncio
import mode
from mode import Service
from json_sensor.robust_serial_service import *
from json_sensor.serial_server import *
from json_sensor.json_sensor import *
from json_sensor.json_sensor_server import *
| 21.454545 | 48 | 0.809322 | 34 | 236 | 5.323529 | 0.294118 | 0.331492 | 0.309392 | 0.331492 | 0.331492 | 0.331492 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15678 | 236 | 10 | 49 | 23.6 | 0.909548 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
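The one-line __init__.py rows in this stretch (qt_tree, pdbtest, mdx_wavedrom/mdx_qrcode, json_sensor) all use the same re-export pattern: hoist names to the package level so callers need not know the submodule layout. A minimal sketch of the idea with an explicit __all__ (the originals omit it; "package" and WidgetView are invented names):

# package/__init__.py
__all__ = ["WidgetView"]        # limits what "from package import *" exposes
from .view import WidgetView    # re-export; callers write "from package import WidgetView"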
4c187799cea17286eb1e06d69d1ed68425b96f5c | 15,085 | py | Python | test/test_md005.py | scop/pymarkdown | 562ba8f7857d99ba09e86e42de5a37ec6d9b2c30 | [
"MIT"
] | null | null | null | test/test_md005.py | scop/pymarkdown | 562ba8f7857d99ba09e86e42de5a37ec6d9b2c30 | [
"MIT"
] | null | null | null | test/test_md005.py | scop/pymarkdown | 562ba8f7857d99ba09e86e42de5a37ec6d9b2c30 | [
"MIT"
] | null | null | null | """
Module to provide tests related to the MD005 rule.
"""
from test.markdown_scanner import MarkdownScanner
import pytest
@pytest.mark.rules
def test_md005_good_unordered_list_single_level():
"""
Test to make sure we get the expected behavior after scanning a good file from the
test/resources/rules/md005 directory that has...
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"scan",
"test/resources/rules/md005/good_unordered_list_single_level.md",
]
expected_return_code = 0
expected_output = ""
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
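# Every test below repeats this Arrange/Act/Assert shape: build the CLI
# argument list, run the scanner in-process with invoke_main, then check
# stdout, stderr, and the exit code in a single assert_results call.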
@pytest.mark.rules
def test_md005_bad_unordered_list_single_level():
"""
    Test to make sure we get the expected behavior after scanning a bad file from the
test/resources/rules/md005 directory that has...
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"--disable-rules",
"md007",
"scan",
"test/resources/rules/md005/bad_unordered_list_single_level.md",
]
expected_return_code = 1
expected_output = (
"test/resources/rules/md005/bad_unordered_list_single_level.md:2:2: "
+ "MD005: Inconsistent indentation for list items at the same level "
+ "[Expected: 0; Actual: 1] (list-indent)"
)
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
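# Given the reported location (2:2) and "[Expected: 0; Actual: 1]", the
# fixture presumably looks something like this (the resource file itself
# is not shown in this row):
#   * Item 1
#    * Item 2   <- same list level, but indented one extra column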
@pytest.mark.rules
def test_md005_good_unordered_list_double_level():
"""
Test to make sure we get the expected behavior after scanning a good file from the
test/resources/rules/md005 directory that has...
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"--disable-rules",
"md032",
"scan",
"test/resources/rules/md005/good_unordered_list_double_level.md",
]
expected_return_code = 0
expected_output = ""
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
@pytest.mark.rules
def test_md005_bad_unordered_list_double_level_bad_first():
"""
    Test to make sure we get the expected behavior after scanning a bad file from the
test/resources/rules/md005 directory that has...
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"--disable-rules",
"md032,md007",
"scan",
"test/resources/rules/md005/bad_unordered_list_double_level_bad_first.md",
]
expected_return_code = 1
expected_output = (
"test/resources/rules/md005/bad_unordered_list_double_level_bad_first.md:4:2: "
+ "MD005: Inconsistent indentation for list items at the same level "
+ "[Expected: 0; Actual: 1] (list-indent)"
)
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
@pytest.mark.rules
def test_md005_bad_unordered_list_double_level_bad_second():
"""
    Test to make sure we get the expected behavior after scanning a bad file from the
test/resources/rules/md005 directory that has...
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"--disable-rules",
"md032,md007",
"scan",
"test/resources/rules/md005/bad_unordered_list_double_level_bad_second.md",
]
expected_return_code = 1
expected_output = (
"test/resources/rules/md005/bad_unordered_list_double_level_bad_second.md:6:4: "
+ "MD005: Inconsistent indentation for list items at the same level "
+ "[Expected: 2; Actual: 3] (list-indent)"
)
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
@pytest.mark.rules
def test_md005_good_unordered_list_separate_lists():
"""
Test to make sure we get the expected behavior after scanning a good file from the
test/resources/rules/md005 directory that has...
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"--disable-rules",
"md007",
"scan",
"test/resources/rules/md005/good_unordered_list_separate_lists.md",
]
expected_return_code = 0
expected_output = ""
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
@pytest.mark.rules
def test_md005_good_ordered_list_single_level():
"""
Test to make sure we get the expected behavior after scanning a good file from the
test/resources/rules/md005 directory that has...
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"scan",
"test/resources/rules/md005/good_ordered_list_single_level.md",
]
expected_return_code = 0
expected_output = ""
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
@pytest.mark.rules
def test_md005_bad_ordered_list_single_level():
"""
    Test to make sure we get the expected behavior after scanning a bad file from the
test/resources/rules/md005 directory that has...
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"scan",
"test/resources/rules/md005/bad_ordered_list_single_level.md",
]
expected_return_code = 1
expected_output = (
"test/resources/rules/md005/bad_ordered_list_single_level.md:2:2: "
+ "MD005: Inconsistent indentation for list items at the same level "
+ "[Expected: 0; Actual: 1] (list-indent)"
)
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
@pytest.mark.rules
def test_md005_good_ordered_list_single_level_widths():
"""
Test to make sure we get the expected behavior after scanning a good file from the
test/resources/rules/md005 directory that has...
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"scan",
"test/resources/rules/md005/good_ordered_list_single_level_widths.md",
]
expected_return_code = 0
expected_output = ""
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
@pytest.mark.rules
def test_md005_bad_ordered_list_single_level_widths():
"""
    Test to make sure we get the expected behavior after scanning a bad file from the
test/resources/rules/md005 directory that has...
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"scan",
"test/resources/rules/md005/bad_ordered_list_single_level_widths.md",
]
expected_return_code = 1
expected_output = (
"test/resources/rules/md005/bad_ordered_list_single_level_widths.md:2:2: "
+ "MD005: Inconsistent indentation for list items at the same level "
+ "[Expected: 0; Actual: 1] (list-indent)"
)
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
@pytest.mark.rules
def test_md005_good_ordered_list_single_level_widths_right():
"""
Test to make sure we get the expected behavior after scanning a good file from the
test/resources/rules/md005 directory that has...
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"scan",
"test/resources/rules/md005/good_ordered_list_single_level_widths_right.md",
]
expected_return_code = 0
expected_output = ""
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
@pytest.mark.rules
def test_md005_bad_ordered_list_single_level_widths_right():
"""
    Test to make sure we get the expected behavior after scanning a bad file from the
test/resources/rules/md005 directory that has...
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"scan",
"test/resources/rules/md005/bad_ordered_list_single_level_widths_right.md",
]
expected_return_code = 1
expected_output = (
"test/resources/rules/md005/bad_ordered_list_single_level_widths_right.md:2:1: "
+ "MD005: Inconsistent indentation for list items at the same level "
+ "[Expected: 3; Actual: 0] (list-indent)"
)
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
@pytest.mark.rules
def test_md005_good_ordered_list_single_level_short_widths_right():
"""
Test to make sure we get the expected behavior after scanning a good file from the
test/resources/rules/md005 directory that has...
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"scan",
"test/resources/rules/md005/good_ordered_list_single_level_short_widths_right.md",
]
expected_return_code = 0
expected_output = ""
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
@pytest.mark.rules
def test_md005_good_ordered_list_seperate_single_level_short_widths_right():
"""
Test to make sure we get the expected behavior after scanning a good file from the
test/resources/rules/md005 directory that has...
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"scan",
"test/resources/rules/md005/good_ordered_list_seperate_single_level_short_widths_right.md",
]
expected_return_code = 0
expected_output = ""
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
@pytest.mark.rules
def test_md005_good_ordered_list_seperate_single_level_short_widths():
"""
Test to make sure we get the expected behavior after scanning a good file from the
test/resources/rules/md005 directory that has...
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"scan",
"test/resources/rules/md005/good_ordered_list_seperate_single_level_short_widths.md",
]
expected_return_code = 0
expected_output = ""
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
@pytest.mark.rules
def test_md005_good_ordered_list_double_level():
"""
Test to make sure we get the expected behavior after scanning a good file from the
test/resources/rules/md005 directory that has...
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"--disable-rules",
"md032",
"scan",
"test/resources/rules/md005/good_ordered_list_double_level.md",
]
expected_return_code = 0
expected_output = ""
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
@pytest.mark.rules
@pytest.mark.skip
def test_md005_good_ordered_list_double_level_right():
"""
Test to make sure we get the expected behavior after scanning a good file from the
test/resources/rules/md005 directory that has...
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"scan",
"test/resources/rules/md005/good_ordered_list_double_level_right.md",
]
expected_return_code = 0
expected_output = ""
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
@pytest.mark.rules
@pytest.mark.skip
def test_md005_bad_ordered_list_double_level_weird():
"""
    Test to make sure we get the expected behavior after scanning a bad file from the
test/resources/rules/md005 directory that has...
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"scan",
"test/resources/rules/md005/bad_ordered_list_double_level_weird.md",
]
expected_return_code = 0
expected_output = ""
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
@pytest.mark.skip
@pytest.mark.rules
def test_md005_bad_ordered_list_double_level_weirder():
"""
    Test to make sure we get the expected behavior after scanning a bad file from the
test/resources/rules/md005 directory that has...
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"scan",
"test/resources/rules/md005/bad_ordered_list_double_level_weirder.md",
]
expected_return_code = 1
expected_output = (
"test/resources/rules/md005/bad_ordered_list_double_level_weirder.md:3:3: "
+ "MD005: Inconsistent indentation for list items at the same level [Expected: 3; Actual: 7] (list-indent)"
)
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
| 26.841637 | 115 | 0.693669 | 1,797 | 15,085 | 5.523094 | 0.047301 | 0.058942 | 0.081612 | 0.104282 | 0.987003 | 0.987003 | 0.984383 | 0.983678 | 0.972796 | 0.965542 | 0 | 0.024417 | 0.220815 | 15,085 | 561 | 116 | 26.889483 | 0.819976 | 0.19357 | 0 | 0.693548 | 0 | 0.003226 | 0.232964 | 0.153237 | 0 | 0 | 0 | 0 | 0.06129 | 1 | 0.06129 | false | 0 | 0.006452 | 0 | 0.067742 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
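Every expected_output above follows one reporting format: path, line, and column, the rule id, a human-readable message, and the rule's short name in parentheses. A small sketch that parses a finding of that shape (the regex is inferred from the strings in this file, not taken from pymarkdown itself):

import re

FINDING = re.compile(
    r"^(?P<path>.+?):(?P<line>\d+):(?P<col>\d+): "
    r"(?P<rule>MD\d+): (?P<message>.+) \((?P<name>[a-z0-9-]+)\)$"
)

def parse_finding(text):
    # Return a dict of the captured fields, or None if the line does not
    # look like a scanner finding.
    match = FINDING.match(text)
    return match.groupdict() if match else None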
912f3f6b179d3822b6a7667c1f693f2bcf05e50a | 31 | py | Python | ddblapi/__init__.py | Zenrac/ddblAPI.py | cdad4d6a65356e0711a676f21f92b86ed82e2b45 | [
"Apache-2.0"
] | null | null | null | ddblapi/__init__.py | Zenrac/ddblAPI.py | cdad4d6a65356e0711a676f21f92b86ed82e2b45 | [
"Apache-2.0"
] | null | null | null | ddblapi/__init__.py | Zenrac/ddblAPI.py | cdad4d6a65356e0711a676f21f92b86ed82e2b45 | [
"Apache-2.0"
] | 1 | 2020-12-06T23:32:35.000Z | 2020-12-06T23:32:35.000Z | from .ddblapi import DivineAPI
| 15.5 | 30 | 0.83871 | 4 | 31 | 6.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 31 | 1 | 31 | 31 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e6712e4a6bc34e2b31885d2305a83e3c6de7d784 | 56,902 | py | Python | aries_cloudagent/protocols/connections/v1_0/tests/test_manager.py | adamsc64/aries-cloudagent-python | d09f6085b248a68c95822ae6b2aa06bb0053675b | [
"Apache-2.0"
] | 1 | 2021-01-15T01:04:43.000Z | 2021-01-15T01:04:43.000Z | aries_cloudagent/protocols/connections/v1_0/tests/test_manager.py | adamsc64/aries-cloudagent-python | d09f6085b248a68c95822ae6b2aa06bb0053675b | [
"Apache-2.0"
] | null | null | null | aries_cloudagent/protocols/connections/v1_0/tests/test_manager.py | adamsc64/aries-cloudagent-python | d09f6085b248a68c95822ae6b2aa06bb0053675b | [
"Apache-2.0"
] | 1 | 2021-01-15T01:04:31.000Z | 2021-01-15T01:04:31.000Z | from asynctest import TestCase as AsyncTestCase
from asynctest import mock as async_mock
from aries_cloudagent.cache.base import BaseCache
from aries_cloudagent.cache.basic import BasicCache
from aries_cloudagent.config.base import InjectorError
from aries_cloudagent.config.injection_context import InjectionContext
from aries_cloudagent.connections.models.conn_record import ConnRecord
from aries_cloudagent.connections.models.connection_target import ConnectionTarget
from aries_cloudagent.connections.models.diddoc import (
DIDDoc,
PublicKey,
PublicKeyType,
Service,
)
from aries_cloudagent.ledger.base import BaseLedger
from aries_cloudagent.messaging.responder import BaseResponder, MockResponder
from aries_cloudagent.storage.base import BaseStorage
from aries_cloudagent.storage.basic import BasicStorage
from aries_cloudagent.storage.error import StorageNotFoundError
from aries_cloudagent.transport.inbound.receipt import MessageReceipt
from aries_cloudagent.wallet.base import BaseWallet, DIDInfo
from aries_cloudagent.wallet.basic import BasicWallet
from aries_cloudagent.wallet.error import WalletNotFoundError
from aries_cloudagent.protocols.routing.v1_0.manager import RoutingManager
from ..manager import ConnectionManager, ConnectionManagerError
from ..messages.connection_invitation import ConnectionInvitation
from ..messages.connection_request import ConnectionRequest
from ..messages.connection_response import ConnectionResponse
from ..models.connection_detail import ConnectionDetail
class TestConnectionManager(AsyncTestCase):
def make_did_doc(self, did, verkey):
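        # Test helper: build the smallest useful DIDDoc -- one ED25519
        # public key and one "indy"/"IndyAgent" service that lists that
        # key as a recipient key and points at self.test_endpoint.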
doc = DIDDoc(did=did)
controller = did
ident = "1"
pk_value = verkey
pk = PublicKey(
did, ident, pk_value, PublicKeyType.ED25519_SIG_2018, controller, False
)
doc.set(pk)
recip_keys = [pk]
router_keys = []
service = Service(
did, "indy", "IndyAgent", recip_keys, router_keys, self.test_endpoint
)
doc.set(service)
return doc
def setUp(self):
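        # Run the manager against purely in-memory dependencies: basic
        # storage/wallet/cache plus a MockResponder that records outbound
        # messages, with auto-accept enabled for invites and requests.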
self.test_seed = "testseed000000000000000000000001"
self.test_did = "55GkHamhTU1ZbTbV2ab9DE"
self.test_verkey = "3Dn1SJNPaCXcvvJvSbsFWP2xaCjMom3can8CQNhWrTRx"
self.test_endpoint = "http://localhost"
self.test_target_did = "GbuDUYXaUZRfHD2jeDuQuP"
self.test_target_verkey = "9WCgWKUaAJj3VWxxtzvvMQN3AoFxoBtBDo9ntwJnVVCC"
self.storage = BasicStorage()
self.cache = BasicCache()
self.wallet = BasicWallet()
self.responder = MockResponder()
self.responder.send = async_mock.CoroutineMock()
self.context = InjectionContext(enforce_typing=False)
self.context.injector.bind_instance(BaseStorage, self.storage)
self.context.injector.bind_instance(BaseWallet, self.wallet)
self.context.injector.bind_instance(BaseResponder, self.responder)
self.context.injector.bind_instance(BaseCache, self.cache)
self.context.update_settings(
{
"default_endpoint": "http://aries.ca/endpoint",
"default_label": "This guy",
"additional_endpoints": ["http://aries.ca/another-endpoint"],
"debug.auto_accept_invites": True,
"debug.auto_accept_requests": True,
}
)
self.manager = ConnectionManager(self.context)
async def test_create_invitation_public_and_multi_use_fails(self):
self.manager.context.update_settings({"public_invites": True})
with async_mock.patch.object(
BaseWallet, "get_public_did", autospec=True
) as mock_wallet_get_public_did:
mock_wallet_get_public_did.return_value = DIDInfo(
self.test_did, self.test_verkey, None
)
with self.assertRaises(ConnectionManagerError):
await self.manager.create_invitation(public=True, multi_use=True)
async def test_create_invitation_non_multi_use_invitation_fails_on_reuse(self):
connect_record, connect_invite = await self.manager.create_invitation()
receipt = MessageReceipt(recipient_verkey=connect_record.invitation_key)
requestA = ConnectionRequest(
connection=ConnectionDetail(
did=self.test_target_did,
did_doc=self.make_did_doc(
self.test_target_did, self.test_target_verkey
),
),
label="SameInviteRequestA",
)
await self.manager.receive_request(requestA, receipt)
requestB = ConnectionRequest(
connection=ConnectionDetail(
did=self.test_did,
did_doc=self.make_did_doc(self.test_did, self.test_verkey),
),
label="SameInviteRequestB",
)
# requestB fails because the invitation was not set to multi-use
rr_awaitable = self.manager.receive_request(requestB, receipt)
await self.assertAsyncRaises(ConnectionManagerError, rr_awaitable)
async def test_create_invitation_public(self):
self.manager.context.update_settings({"public_invites": True})
with async_mock.patch.object(
BaseWallet, "get_public_did", autospec=True
) as mock_wallet_get_public_did:
mock_wallet_get_public_did.return_value = DIDInfo(
self.test_did, self.test_verkey, None
)
connect_record, connect_invite = await self.manager.create_invitation(
public=True, my_endpoint="testendpoint"
)
        assert connect_record is None
assert connect_invite.did.endswith(self.test_did)
async def test_create_invitation_public_no_public_invites(self):
self.manager.context.update_settings({"public_invites": False})
with self.assertRaises(ConnectionManagerError):
await self.manager.create_invitation(
public=True, my_endpoint="testendpoint"
)
async def test_create_invitation_public_no_public_did(self):
self.manager.context.update_settings({"public_invites": True})
with async_mock.patch.object(
BaseWallet, "get_public_did", autospec=True
) as mock_wallet_get_public_did:
mock_wallet_get_public_did.return_value = None
with self.assertRaises(ConnectionManagerError):
await self.manager.create_invitation(
public=True, my_endpoint="testendpoint"
)
async def test_create_invitation_multi_use(self):
connect_record, connect_invite = await self.manager.create_invitation(
my_endpoint="testendpoint", multi_use=True
)
receipt = MessageReceipt(recipient_verkey=connect_record.invitation_key)
requestA = ConnectionRequest(
connection=ConnectionDetail(
did=self.test_target_did,
did_doc=self.make_did_doc(
self.test_target_did, self.test_target_verkey
),
),
label="SameInviteRequestA",
)
await self.manager.receive_request(requestA, receipt)
requestB = ConnectionRequest(
connection=ConnectionDetail(
did=self.test_did,
did_doc=self.make_did_doc(self.test_did, self.test_verkey),
),
label="SameInviteRequestB",
)
await self.manager.receive_request(requestB, receipt)
async def test_create_invitation_recipient_routing_endpoint(self):
wallet: BaseWallet = await self.context.inject(BaseWallet)
await wallet.create_local_did(
seed=self.test_seed, did=self.test_did, metadata=None
)
connect_record, connect_invite = await self.manager.create_invitation(
my_endpoint=self.test_endpoint,
recipient_keys=[self.test_verkey],
routing_keys=[self.test_verkey],
)
receipt = MessageReceipt(recipient_verkey=connect_record.invitation_key)
requestA = ConnectionRequest(
connection=ConnectionDetail(
did=self.test_target_did,
did_doc=self.make_did_doc(
self.test_target_did, self.test_target_verkey
),
),
label="InviteRequestA",
)
await self.manager.receive_request(requestA, receipt)
async def test_receive_invitation(self):
(_, connect_invite) = await self.manager.create_invitation(
my_endpoint="testendpoint"
)
invitee_record = await self.manager.receive_invitation(connect_invite)
assert ConnRecord.State.get(invitee_record.state) is ConnRecord.State.REQUEST
async def test_receive_invitation_no_auto_accept(self):
(_, connect_invite) = await self.manager.create_invitation(
my_endpoint="testendpoint"
)
invitee_record = await self.manager.receive_invitation(
connect_invite, auto_accept=False
)
assert ConnRecord.State.get(invitee_record.state) is ConnRecord.State.INVITATION
async def test_receive_invitation_bad_invitation(self):
x_invites = [
ConnectionInvitation(),
ConnectionInvitation(
recipient_keys=["3Dn1SJNPaCXcvvJvSbsFWP2xaCjMom3can8CQNhWrTRx"]
),
]
for x_invite in x_invites:
with self.assertRaises(ConnectionManagerError):
await self.manager.receive_invitation(x_invite)
async def test_create_request(self):
conn_req = await self.manager.create_request(
ConnRecord(
invitation_key=self.test_verkey,
their_label="Hello",
their_role=ConnRecord.Role.RESPONDER.rfc160,
alias="Bob",
)
)
assert conn_req
async def test_create_request_my_endpoint(self):
conn_req = await self.manager.create_request(
ConnRecord(
invitation_key=self.test_verkey,
their_label="Hello",
their_role=ConnRecord.Role.RESPONDER.rfc160,
alias="Bob",
),
my_endpoint="http://testendpoint.com/endpoint",
)
assert conn_req
async def test_create_request_my_did(self):
wallet = await self.context.inject(BaseWallet)
await wallet.create_local_did(seed=None, did=self.test_did)
conn_req = await self.manager.create_request(
ConnRecord(
invitation_key=self.test_verkey,
my_did=self.test_did,
their_label="Hello",
their_role=ConnRecord.Role.RESPONDER.rfc160,
alias="Bob",
)
)
assert conn_req
async def test_receive_request_public_did(self):
mock_request = async_mock.MagicMock()
mock_request.connection = async_mock.MagicMock()
mock_request.connection.did = self.test_did
mock_request.connection.did_doc = async_mock.MagicMock()
mock_request.connection.did_doc.did = self.test_did
receipt = MessageReceipt(recipient_did=self.test_did, recipient_did_public=True)
wallet = await self.context.inject(BaseWallet)
await wallet.create_local_did(seed=None, did=self.test_did)
self.manager.context.update_settings({"public_invites": True})
with async_mock.patch.object(
ConnRecord, "save", autospec=True
) as mock_conn_rec_save, async_mock.patch.object(
ConnRecord, "attach_request", autospec=True
) as mock_conn_attach_request, async_mock.patch.object(
ConnRecord, "retrieve_by_id", autospec=True
) as mock_conn_retrieve_by_id, async_mock.patch.object(
ConnRecord, "retrieve_request", autospec=True
):
conn_rec = await self.manager.receive_request(mock_request, receipt)
assert conn_rec
messages = self.responder.messages
assert len(messages) == 1
(result, target) = messages[0]
assert type(result) == ConnectionResponse
assert "connection_id" in target
async def test_receive_request_public_did_no_did_doc(self):
mock_request = async_mock.MagicMock()
mock_request.connection = async_mock.MagicMock()
mock_request.connection.did = self.test_did
mock_request.connection.did_doc = None
receipt = MessageReceipt(recipient_did=self.test_did, recipient_did_public=True)
wallet = await self.context.inject(BaseWallet)
await wallet.create_local_did(seed=None, did=self.test_did)
self.manager.context.update_settings({"public_invites": True})
with async_mock.patch.object(
ConnRecord, "save", autospec=True
) as mock_conn_rec_save, async_mock.patch.object(
ConnRecord, "attach_request", autospec=True
) as mock_conn_attach_request, async_mock.patch.object(
ConnRecord, "retrieve_by_id", autospec=True
) as mock_conn_retrieve_by_id, async_mock.patch.object(
ConnRecord, "retrieve_request", autospec=True
):
with self.assertRaises(ConnectionManagerError):
await self.manager.receive_request(mock_request, receipt)
async def test_receive_request_public_did_wrong_did(self):
mock_request = async_mock.MagicMock()
mock_request.connection = async_mock.MagicMock()
mock_request.connection.did = self.test_did
mock_request.connection.did_doc = async_mock.MagicMock()
mock_request.connection.did_doc.did = "dummy"
receipt = MessageReceipt(recipient_did=self.test_did, recipient_did_public=True)
wallet = await self.context.inject(BaseWallet)
await wallet.create_local_did(seed=None, did=self.test_did)
self.manager.context.update_settings({"public_invites": True})
with async_mock.patch.object(
ConnRecord, "save", autospec=True
) as mock_conn_rec_save, async_mock.patch.object(
ConnRecord, "attach_request", autospec=True
) as mock_conn_attach_request, async_mock.patch.object(
ConnRecord, "retrieve_by_id", autospec=True
) as mock_conn_retrieve_by_id, async_mock.patch.object(
ConnRecord, "retrieve_request", autospec=True
):
with self.assertRaises(ConnectionManagerError):
await self.manager.receive_request(mock_request, receipt)
async def test_receive_request_public_did_no_public_invites(self):
mock_request = async_mock.MagicMock()
mock_request.connection = async_mock.MagicMock()
mock_request.connection.did = self.test_did
mock_request.connection.did_doc = async_mock.MagicMock()
mock_request.connection.did_doc.did = self.test_did
receipt = MessageReceipt(recipient_did=self.test_did, recipient_did_public=True)
wallet = await self.context.inject(BaseWallet)
await wallet.create_local_did(seed=None, did=self.test_did)
self.manager.context.update_settings({"public_invites": False})
with async_mock.patch.object(
ConnRecord, "save", autospec=True
) as mock_conn_rec_save, async_mock.patch.object(
ConnRecord, "attach_request", autospec=True
) as mock_conn_attach_request, async_mock.patch.object(
ConnRecord, "retrieve_by_id", autospec=True
) as mock_conn_retrieve_by_id, async_mock.patch.object(
ConnRecord, "retrieve_request", autospec=True
):
with self.assertRaises(ConnectionManagerError):
await self.manager.receive_request(mock_request, receipt)
async def test_receive_request_public_did_no_auto_accept(self):
mock_request = async_mock.MagicMock()
mock_request.connection = async_mock.MagicMock()
mock_request.connection.did = self.test_did
mock_request.connection.did_doc = async_mock.MagicMock()
mock_request.connection.did_doc.did = self.test_did
receipt = MessageReceipt(recipient_did=self.test_did, recipient_did_public=True)
wallet = await self.context.inject(BaseWallet)
await wallet.create_local_did(seed=None, did=self.test_did)
self.manager.context.update_settings(
{"public_invites": True, "debug.auto_accept_requests": False}
)
with async_mock.patch.object(
ConnRecord, "save", autospec=True
) as mock_conn_rec_save, async_mock.patch.object(
ConnRecord, "attach_request", autospec=True
) as mock_conn_attach_request, async_mock.patch.object(
ConnRecord, "retrieve_by_id", autospec=True
) as mock_conn_retrieve_by_id, async_mock.patch.object(
ConnRecord, "retrieve_request", autospec=True
):
conn_rec = await self.manager.receive_request(mock_request, receipt)
assert conn_rec
messages = self.responder.messages
assert not messages
async def test_create_response(self):
conn_rec = ConnRecord(state=ConnRecord.State.REQUEST.rfc160)
with async_mock.patch.object(
ConnRecord, "log_state", autospec=True
) as mock_conn_log_state, async_mock.patch.object(
ConnRecord, "retrieve_request", autospec=True
) as mock_conn_retrieve_request, async_mock.patch.object(
ConnRecord, "save", autospec=True
) as mock_conn_save, async_mock.patch.object(
ConnectionResponse, "sign_field", autospec=True
) as mock_sign:
await self.manager.create_response(conn_rec, "http://10.20.30.40:5060/")
async def test_create_response_bad_state(self):
with self.assertRaises(ConnectionManagerError):
await self.manager.create_response(
ConnRecord(
invitation_key=self.test_verkey,
their_label="Hello",
their_role=ConnRecord.Role.RESPONDER.rfc160,
alias="Bob",
state=ConnRecord.State.ABANDONED.rfc160,
)
)
async def test_accept_response_find_by_thread_id(self):
mock_response = async_mock.MagicMock()
mock_response._thread = async_mock.MagicMock()
mock_response.connection = async_mock.MagicMock()
mock_response.connection.did = self.test_target_did
mock_response.connection.did_doc = async_mock.MagicMock()
mock_response.connection.did_doc.did = self.test_target_did
receipt = MessageReceipt(recipient_did=self.test_did, recipient_did_public=True)
with async_mock.patch.object(
ConnRecord, "save", autospec=True
) as mock_conn_rec_save, async_mock.patch.object(
ConnRecord, "retrieve_by_request_id", async_mock.CoroutineMock()
) as mock_conn_retrieve_by_req_id:
mock_conn_retrieve_by_req_id.return_value = async_mock.MagicMock(
did=self.test_target_did,
did_doc=async_mock.MagicMock(did=self.test_target_did),
state=ConnRecord.State.RESPONSE.rfc23,
save=async_mock.CoroutineMock(),
)
conn_rec = await self.manager.accept_response(mock_response, receipt)
assert conn_rec.their_did == self.test_target_did
assert ConnRecord.State.get(conn_rec.state) is ConnRecord.State.RESPONSE
async def test_accept_response_not_found_by_thread_id_receipt_has_sender_did(self):
mock_response = async_mock.MagicMock()
mock_response._thread = async_mock.MagicMock()
mock_response.connection = async_mock.MagicMock()
mock_response.connection.did = self.test_target_did
mock_response.connection.did_doc = async_mock.MagicMock()
mock_response.connection.did_doc.did = self.test_target_did
receipt = MessageReceipt(sender_did=self.test_target_did)
with async_mock.patch.object(
ConnRecord, "save", autospec=True
) as mock_conn_rec_save, async_mock.patch.object(
ConnRecord, "retrieve_by_request_id", async_mock.CoroutineMock()
) as mock_conn_retrieve_by_req_id, async_mock.patch.object(
ConnRecord, "retrieve_by_did", async_mock.CoroutineMock()
) as mock_conn_retrieve_by_did:
mock_conn_retrieve_by_req_id.side_effect = StorageNotFoundError()
mock_conn_retrieve_by_did.return_value = async_mock.MagicMock(
did=self.test_target_did,
did_doc=async_mock.MagicMock(did=self.test_target_did),
state=ConnRecord.State.RESPONSE.rfc23,
save=async_mock.CoroutineMock(),
)
conn_rec = await self.manager.accept_response(mock_response, receipt)
assert conn_rec.their_did == self.test_target_did
assert ConnRecord.State.get(conn_rec.state) is ConnRecord.State.RESPONSE
async def test_accept_response_not_found_by_thread_id_nor_receipt_sender_did(self):
mock_response = async_mock.MagicMock()
mock_response._thread = async_mock.MagicMock()
mock_response.connection = async_mock.MagicMock()
mock_response.connection.did = self.test_target_did
mock_response.connection.did_doc = async_mock.MagicMock()
mock_response.connection.did_doc.did = self.test_target_did
receipt = MessageReceipt(sender_did=self.test_target_did)
with async_mock.patch.object(
ConnRecord, "save", autospec=True
) as mock_conn_rec_save, async_mock.patch.object(
ConnRecord, "retrieve_by_request_id", async_mock.CoroutineMock()
) as mock_conn_retrieve_by_req_id, async_mock.patch.object(
ConnRecord, "retrieve_by_did", async_mock.CoroutineMock()
) as mock_conn_retrieve_by_did:
mock_conn_retrieve_by_req_id.side_effect = StorageNotFoundError()
mock_conn_retrieve_by_did.side_effect = StorageNotFoundError()
with self.assertRaises(ConnectionManagerError):
await self.manager.accept_response(mock_response, receipt)
async def test_accept_response_find_by_thread_id_bad_state(self):
mock_response = async_mock.MagicMock()
mock_response._thread = async_mock.MagicMock()
mock_response.connection = async_mock.MagicMock()
mock_response.connection.did = self.test_target_did
mock_response.connection.did_doc = async_mock.MagicMock()
mock_response.connection.did_doc.did = self.test_target_did
receipt = MessageReceipt(sender_did=self.test_target_did)
with async_mock.patch.object(
ConnRecord, "save", autospec=True
) as mock_conn_rec_save, async_mock.patch.object(
ConnRecord, "retrieve_by_request_id", async_mock.CoroutineMock()
) as mock_conn_retrieve_by_req_id:
mock_conn_retrieve_by_req_id.return_value = async_mock.MagicMock(
state=ConnRecord.State.ABANDONED.rfc23
)
with self.assertRaises(ConnectionManagerError):
await self.manager.accept_response(mock_response, receipt)
async def test_accept_response_find_by_thread_id_no_connection_did_doc(self):
mock_response = async_mock.MagicMock()
mock_response._thread = async_mock.MagicMock()
mock_response.connection = async_mock.MagicMock()
mock_response.connection.did = self.test_target_did
mock_response.connection.did_doc = None
receipt = MessageReceipt(sender_did=self.test_target_did)
with async_mock.patch.object(
ConnRecord, "save", autospec=True
) as mock_conn_rec_save, async_mock.patch.object(
ConnRecord, "retrieve_by_request_id", async_mock.CoroutineMock()
) as mock_conn_retrieve_by_req_id:
mock_conn_retrieve_by_req_id.return_value = async_mock.MagicMock(
did=self.test_target_did,
did_doc=async_mock.MagicMock(did=self.test_target_did),
state=ConnRecord.State.RESPONSE.rfc23,
)
with self.assertRaises(ConnectionManagerError):
await self.manager.accept_response(mock_response, receipt)
async def test_accept_response_find_by_thread_id_did_mismatch(self):
mock_response = async_mock.MagicMock()
mock_response._thread = async_mock.MagicMock()
mock_response.connection = async_mock.MagicMock()
mock_response.connection.did = self.test_target_did
mock_response.connection.did_doc = async_mock.MagicMock()
mock_response.connection.did_doc.did = self.test_did
receipt = MessageReceipt(sender_did=self.test_target_did)
with async_mock.patch.object(
ConnRecord, "save", autospec=True
) as mock_conn_rec_save, async_mock.patch.object(
ConnRecord, "retrieve_by_request_id", async_mock.CoroutineMock()
) as mock_conn_retrieve_by_req_id:
mock_conn_retrieve_by_req_id.return_value = async_mock.MagicMock(
did=self.test_target_did,
did_doc=async_mock.MagicMock(did=self.test_target_did),
state=ConnRecord.State.RESPONSE.rfc23,
)
with self.assertRaises(ConnectionManagerError):
await self.manager.accept_response(mock_response, receipt)
async def test_create_static_connection(self):
with async_mock.patch.object(
ConnRecord, "save", autospec=True
) as mock_conn_rec_save:
_my, _their, conn_rec = await self.manager.create_static_connection(
my_did=self.test_did,
their_did=self.test_target_did,
their_verkey=self.test_target_verkey,
their_endpoint=self.test_endpoint,
)
assert ConnRecord.State.get(conn_rec.state) is ConnRecord.State.COMPLETED
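            # Static connections skip the invitation/request/response
            # exchange: the record is created directly in the COMPLETED
            # state from the supplied DIDs, verkey, and endpoint.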
async def test_create_static_connection_no_their(self):
with async_mock.patch.object(
ConnRecord, "save", autospec=True
) as mock_conn_rec_save:
with self.assertRaises(ConnectionManagerError):
await self.manager.create_static_connection(
my_did=self.test_did,
their_did=None,
their_verkey=self.test_target_verkey,
their_endpoint=self.test_endpoint,
)
async def test_create_static_connection_their_seed_only(self):
with async_mock.patch.object(
ConnRecord, "save", autospec=True
) as mock_conn_rec_save:
_my, _their, conn_rec = await self.manager.create_static_connection(
my_did=self.test_did,
their_seed=self.test_seed,
their_endpoint=self.test_endpoint,
)
assert ConnRecord.State.get(conn_rec.state) is ConnRecord.State.COMPLETED
async def test_find_connection_retrieve_by_did(self):
with async_mock.patch.object(
ConnRecord, "retrieve_by_did", async_mock.CoroutineMock()
) as mock_conn_retrieve_by_did:
mock_conn_retrieve_by_did.return_value = async_mock.MagicMock(
state=ConnRecord.State.RESPONSE.rfc23,
save=async_mock.CoroutineMock(),
)
conn_rec = await self.manager.find_connection(
their_did=self.test_target_did,
my_did=self.test_did,
my_verkey=self.test_verkey,
auto_complete=True,
)
assert ConnRecord.State.get(conn_rec.state) is ConnRecord.State.COMPLETED
async def test_find_connection_retrieve_by_invitation_key(self):
with async_mock.patch.object(
ConnRecord, "retrieve_by_did", async_mock.CoroutineMock()
) as mock_conn_retrieve_by_did, async_mock.patch.object(
ConnRecord, "retrieve_by_invitation_key", async_mock.CoroutineMock()
) as mock_conn_retrieve_by_invitation_key:
mock_conn_retrieve_by_did.side_effect = StorageNotFoundError()
mock_conn_retrieve_by_invitation_key.return_value = async_mock.MagicMock(
state=ConnRecord.State.RESPONSE,
save=async_mock.CoroutineMock(),
)
conn_rec = await self.manager.find_connection(
their_did=self.test_target_did,
my_did=self.test_did,
my_verkey=self.test_verkey,
)
assert conn_rec
async def test_find_connection_retrieve_none_by_invitation_key(self):
with async_mock.patch.object(
ConnRecord, "retrieve_by_did", async_mock.CoroutineMock()
) as mock_conn_retrieve_by_did, async_mock.patch.object(
ConnRecord, "retrieve_by_invitation_key", async_mock.CoroutineMock()
) as mock_conn_retrieve_by_invitation_key:
mock_conn_retrieve_by_did.side_effect = StorageNotFoundError()
mock_conn_retrieve_by_invitation_key.side_effect = StorageNotFoundError()
conn_rec = await self.manager.find_connection(
their_did=self.test_target_did,
my_did=self.test_did,
my_verkey=self.test_verkey,
)
assert conn_rec is None
async def test_find_inbound_connection(self):
receipt = MessageReceipt(
sender_verkey=self.test_verkey,
recipient_verkey=self.test_target_verkey,
recipient_did_public=False,
)
mock_conn = async_mock.MagicMock()
mock_conn.connection_id = "dummy"
# First pass: not yet in cache
with async_mock.patch.object(
ConnectionManager, "resolve_inbound_connection", async_mock.CoroutineMock()
) as mock_conn_mgr_resolve_conn:
mock_conn_mgr_resolve_conn.return_value = mock_conn
conn_rec = await self.manager.find_inbound_connection(receipt)
assert conn_rec
# Second pass: in cache
with async_mock.patch.object(
ConnRecord, "retrieve_by_id", async_mock.CoroutineMock()
) as mock_conn_rec_retrieve_by_id:
mock_conn_rec_retrieve_by_id.return_value = mock_conn
conn_rec = await self.manager.find_inbound_connection(receipt)
assert conn_rec.id == mock_conn.id
async def test_find_inbound_connection_no_cache(self):
receipt = MessageReceipt(
sender_verkey=self.test_verkey,
recipient_verkey=self.test_target_verkey,
recipient_did_public=False,
)
mock_conn = async_mock.MagicMock()
mock_conn.connection_id = "dummy"
with async_mock.patch.object(
self.manager.context, "inject", async_mock.CoroutineMock()
) as mock_ctx_inject, async_mock.patch.object(
ConnectionManager, "resolve_inbound_connection", async_mock.CoroutineMock()
) as mock_conn_mgr_resolve_conn:
mock_ctx_inject.return_value = None
mock_conn_mgr_resolve_conn.return_value = mock_conn
conn_rec = await self.manager.find_inbound_connection(receipt)
assert conn_rec
async def test_resolve_inbound_connection(self):
receipt = MessageReceipt(
sender_verkey=self.test_verkey,
recipient_verkey=self.test_target_verkey,
recipient_did_public=True,
)
mock_conn = async_mock.MagicMock()
mock_conn.connection_id = "dummy"
with async_mock.patch.object(
BasicWallet, "get_local_did_for_verkey", async_mock.CoroutineMock()
) as mock_wallet_get_local_did_for_verkey, async_mock.patch.object(
self.manager, "find_connection", async_mock.CoroutineMock()
) as mock_mgr_find_conn:
mock_wallet_get_local_did_for_verkey.return_value = DIDInfo(
self.test_did, self.test_verkey, {"public": True}
)
mock_mgr_find_conn.return_value = mock_conn
assert await self.manager.resolve_inbound_connection(receipt)
async def test_resolve_inbound_connection_injector_error(self):
receipt = MessageReceipt(
sender_verkey=self.test_verkey,
recipient_verkey=self.test_target_verkey,
recipient_did_public=True,
)
mock_conn = async_mock.MagicMock()
mock_conn.connection_id = "dummy"
with async_mock.patch.object(
BasicWallet, "get_local_did_for_verkey", async_mock.CoroutineMock()
) as mock_wallet_get_local_did_for_verkey, async_mock.patch.object(
self.manager, "find_connection", async_mock.CoroutineMock()
) as mock_mgr_find_conn:
mock_wallet_get_local_did_for_verkey.side_effect = InjectorError()
mock_mgr_find_conn.return_value = mock_conn
assert await self.manager.resolve_inbound_connection(receipt)
async def test_resolve_inbound_connection_wallet_not_found_error(self):
receipt = MessageReceipt(
sender_verkey=self.test_verkey,
recipient_verkey=self.test_target_verkey,
recipient_did_public=True,
)
mock_conn = async_mock.MagicMock()
mock_conn.connection_id = "dummy"
with async_mock.patch.object(
BasicWallet, "get_local_did_for_verkey", async_mock.CoroutineMock()
) as mock_wallet_get_local_did_for_verkey, async_mock.patch.object(
self.manager, "find_connection", async_mock.CoroutineMock()
) as mock_mgr_find_conn:
mock_wallet_get_local_did_for_verkey.side_effect = WalletNotFoundError()
mock_mgr_find_conn.return_value = mock_conn
assert await self.manager.resolve_inbound_connection(receipt)
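            # Like the InjectorError case above, a WalletNotFoundError from
            # the local-DID lookup is tolerated; resolution still falls
            # through to find_connection (mocked here) and succeeds.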
async def test_create_did_document(self):
did_info = DIDInfo(
self.test_did,
self.test_verkey,
None,
)
mock_conn = async_mock.MagicMock(
connection_id="dummy",
inbound_connection_id=None,
their_did=self.test_target_did,
state=ConnRecord.State.COMPLETED.rfc23,
)
did_doc = self.make_did_doc(
did=self.test_target_did, verkey=self.test_target_verkey
)
for i in range(2): # first cover store-record, then update-value
await self.manager.store_did_document(did_doc)
with async_mock.patch.object(
ConnRecord, "retrieve_by_id", async_mock.CoroutineMock()
) as mock_conn_rec_retrieve_by_id:
mock_conn_rec_retrieve_by_id.return_value = mock_conn
did_doc = await self.manager.create_did_document(
did_info=did_info,
inbound_connection_id="dummy",
svc_endpoints=[self.test_endpoint],
)
async def test_create_did_document_not_active(self):
did_info = DIDInfo(
self.test_did,
self.test_verkey,
None,
)
mock_conn = async_mock.MagicMock(
connection_id="dummy",
inbound_connection_id=None,
their_did=self.test_target_did,
state=ConnRecord.State.ABANDONED.rfc23,
)
with async_mock.patch.object(
ConnRecord, "retrieve_by_id", async_mock.CoroutineMock()
) as mock_conn_rec_retrieve_by_id:
mock_conn_rec_retrieve_by_id.return_value = mock_conn
with self.assertRaises(ConnectionManagerError):
await self.manager.create_did_document(
did_info=did_info,
inbound_connection_id="dummy",
svc_endpoints=[self.test_endpoint],
)
async def test_create_did_document_no_services(self):
did_info = DIDInfo(
self.test_did,
self.test_verkey,
None,
)
mock_conn = async_mock.MagicMock(
connection_id="dummy",
inbound_connection_id=None,
their_did=self.test_target_did,
state=ConnRecord.State.COMPLETED.rfc23,
)
x_did_doc = self.make_did_doc(
did=self.test_target_did, verkey=self.test_target_verkey
)
x_did_doc._service = {}
for i in range(2): # first cover store-record, then update-value
await self.manager.store_did_document(x_did_doc)
with async_mock.patch.object(
ConnRecord, "retrieve_by_id", async_mock.CoroutineMock()
) as mock_conn_rec_retrieve_by_id:
mock_conn_rec_retrieve_by_id.return_value = mock_conn
with self.assertRaises(ConnectionManagerError):
await self.manager.create_did_document(
did_info=did_info,
inbound_connection_id="dummy",
svc_endpoints=[self.test_endpoint],
)
async def test_create_did_document_no_service_endpoint(self):
did_info = DIDInfo(
self.test_did,
self.test_verkey,
None,
)
mock_conn = async_mock.MagicMock(
connection_id="dummy",
inbound_connection_id=None,
their_did=self.test_target_did,
state=ConnRecord.State.COMPLETED.rfc23,
)
x_did_doc = self.make_did_doc(
did=self.test_target_did, verkey=self.test_target_verkey
)
x_did_doc._service = {}
x_did_doc.set(
Service(self.test_target_did, "dummy", "IndyAgent", [], [], "", 0)
)
for i in range(2): # first cover store-record, then update-value
await self.manager.store_did_document(x_did_doc)
with async_mock.patch.object(
ConnRecord, "retrieve_by_id", async_mock.CoroutineMock()
) as mock_conn_rec_retrieve_by_id:
mock_conn_rec_retrieve_by_id.return_value = mock_conn
with self.assertRaises(ConnectionManagerError):
await self.manager.create_did_document(
did_info=did_info,
inbound_connection_id="dummy",
svc_endpoints=[self.test_endpoint],
)
async def test_create_did_document_no_service_recip_keys(self):
did_info = DIDInfo(
self.test_did,
self.test_verkey,
None,
)
mock_conn = async_mock.MagicMock(
connection_id="dummy",
inbound_connection_id=None,
their_did=self.test_target_did,
state=ConnRecord.State.COMPLETED.rfc23,
)
x_did_doc = self.make_did_doc(
did=self.test_target_did, verkey=self.test_target_verkey
)
x_did_doc._service = {}
x_did_doc.set(
Service(
self.test_target_did,
"dummy",
"IndyAgent",
[],
[],
self.test_endpoint,
0,
)
)
for i in range(2): # first cover store-record, then update-value
await self.manager.store_did_document(x_did_doc)
with async_mock.patch.object(
ConnRecord, "retrieve_by_id", async_mock.CoroutineMock()
) as mock_conn_rec_retrieve_by_id:
mock_conn_rec_retrieve_by_id.return_value = mock_conn
with self.assertRaises(ConnectionManagerError):
await self.manager.create_did_document(
did_info=did_info,
inbound_connection_id="dummy",
svc_endpoints=[self.test_endpoint],
)
async def test_did_key_storage(self):
did_info = DIDInfo(
self.test_did,
self.test_verkey,
None,
)
did_doc = self.make_did_doc(
did=self.test_target_did, verkey=self.test_target_verkey
)
await self.manager.add_key_for_did(
did=self.test_target_did, key=self.test_target_verkey
)
did = await self.manager.find_did_for_key(key=self.test_target_verkey)
assert did == self.test_target_did
await self.manager.remove_keys_for_did(self.test_target_did)
async def test_get_connection_targets_invitation_no_did(self):
wallet: BaseWallet = await self.context.inject(BaseWallet)
await wallet.create_local_did(
seed=self.test_seed, did=self.test_did, metadata=None
)
did_doc = self.make_did_doc(
did=self.test_target_did, verkey=self.test_target_verkey
)
await self.manager.store_did_document(did_doc)
# First pass: not yet in cache
mock_invite = async_mock.MagicMock(
did=None,
endpoint=self.test_endpoint,
recipient_keys=[self.test_target_verkey],
routing_keys=[self.test_verkey],
label="label",
)
mock_conn = async_mock.MagicMock(
my_did=self.test_did,
their_did=self.test_target_did,
connection_id="dummy",
their_role=ConnRecord.Role.RESPONDER.rfc23,
state=ConnRecord.State.INVITATION.rfc23,
retrieve_invitation=async_mock.CoroutineMock(return_value=mock_invite),
)
targets = await self.manager.get_connection_targets(
connection_id=None,
connection=mock_conn,
)
assert len(targets) == 1
target = targets[0]
assert target.did == mock_conn.their_did
assert target.endpoint == mock_invite.endpoint
assert target.label == mock_invite.label
assert target.recipient_keys == mock_invite.recipient_keys
assert target.routing_keys == mock_invite.routing_keys
assert target.sender_key == (await wallet.get_local_did(self.test_did)).verkey
# Next pass: exercise cache
targets = await self.manager.get_connection_targets(
connection_id=None,
connection=mock_conn,
)
assert len(targets) == 1
target = targets[0]
assert target.did == mock_conn.their_did
assert target.endpoint == mock_invite.endpoint
assert target.label == mock_invite.label
assert target.recipient_keys == mock_invite.recipient_keys
assert target.routing_keys == mock_invite.routing_keys
assert target.sender_key == (await wallet.get_local_did(self.test_did)).verkey
async def test_get_connection_targets_retrieve_connection(self):
wallet: BaseWallet = await self.context.inject(BaseWallet)
await wallet.create_local_did(
seed=self.test_seed, did=self.test_did, metadata=None
)
did_doc = self.make_did_doc(
did=self.test_target_did, verkey=self.test_target_verkey
)
await self.manager.store_did_document(did_doc)
# Connection target not in cache
mock_invite = async_mock.MagicMock(
did=None,
endpoint=self.test_endpoint,
recipient_keys=[self.test_target_verkey],
routing_keys=[self.test_verkey],
label="label",
)
mock_conn = async_mock.MagicMock(
my_did=self.test_did,
their_did=self.test_target_did,
connection_id="dummy",
their_role=ConnRecord.Role.RESPONDER.rfc23,
state=ConnRecord.State.INVITATION.rfc23,
retrieve_invitation=async_mock.CoroutineMock(return_value=mock_invite),
)
with async_mock.patch.object(
ConnectionTarget, "serialize", autospec=True
) as mock_conn_target_ser, async_mock.patch.object(
ConnRecord, "retrieve_by_id", async_mock.CoroutineMock()
) as mock_conn_rec_retrieve_by_id:
mock_conn_rec_retrieve_by_id.return_value = mock_conn
mock_conn_target_ser.return_value = {"serialized": "value"}
targets = await self.manager.get_connection_targets(
connection_id="dummy",
connection=None,
)
assert len(targets) == 1
target = targets[0]
assert target.did == mock_conn.their_did
assert target.endpoint == mock_invite.endpoint
assert target.label == mock_invite.label
assert target.recipient_keys == mock_invite.recipient_keys
assert target.routing_keys == mock_invite.routing_keys
assert (
target.sender_key == (await wallet.get_local_did(self.test_did)).verkey
)
async def test_get_connection_targets_no_cache(self):
self.context.injector.clear_binding(BaseCache)
wallet: BaseWallet = await self.context.inject(BaseWallet)
await wallet.create_local_did(
seed=self.test_seed, did=self.test_did, metadata=None
)
did_doc = self.make_did_doc(
did=self.test_target_did, verkey=self.test_target_verkey
)
await self.manager.store_did_document(did_doc)
mock_invite = async_mock.MagicMock(
did=None,
endpoint=self.test_endpoint,
recipient_keys=[self.test_target_verkey],
routing_keys=[self.test_verkey],
label="label",
)
mock_conn = async_mock.MagicMock(
my_did=self.test_did,
their_did=self.test_target_did,
connection_id="dummy",
their_role=ConnRecord.Role.RESPONDER.rfc23,
state=ConnRecord.State.INVITATION.rfc23,
retrieve_invitation=async_mock.CoroutineMock(return_value=mock_invite),
)
targets = await self.manager.get_connection_targets(
connection_id=None,
connection=mock_conn,
)
assert len(targets) == 1
target = targets[0]
assert target.did == mock_conn.their_did
assert target.endpoint == mock_invite.endpoint
assert target.label == mock_invite.label
assert target.recipient_keys == mock_invite.recipient_keys
assert target.routing_keys == mock_invite.routing_keys
assert target.sender_key == (await wallet.get_local_did(self.test_did)).verkey
async def test_fetch_connection_targets_no_my_did(self):
mock_conn = async_mock.MagicMock()
mock_conn.my_did = None
assert await self.manager.fetch_connection_targets(mock_conn) is None
async def test_fetch_connection_targets_invitation_did_no_ledger(self):
wallet: BaseWallet = await self.context.inject(BaseWallet)
await wallet.create_local_did(
seed=self.test_seed, did=self.test_did, metadata=None
)
mock_invite = async_mock.MagicMock(
did=self.test_target_did,
endpoint=self.test_endpoint,
recipient_keys=[self.test_target_verkey],
routing_keys=[self.test_verkey],
label="label",
)
mock_conn = async_mock.MagicMock(
my_did=self.test_did,
their_did=self.test_target_did,
connection_id="dummy",
their_role=ConnRecord.Role.RESPONDER.rfc23,
state=ConnRecord.State.INVITATION.rfc23,
retrieve_invitation=async_mock.CoroutineMock(return_value=mock_invite),
)
with self.assertRaises(ConnectionManagerError):
await self.manager.fetch_connection_targets(mock_conn)
async def test_fetch_connection_targets_invitation_did_ledger(self):
self.ledger = async_mock.MagicMock()
self.ledger.get_endpoint_for_did = async_mock.CoroutineMock(
return_value=self.test_endpoint
)
self.ledger.get_key_for_did = async_mock.CoroutineMock(
return_value=self.test_target_verkey
)
self.context.injector.bind_instance(BaseLedger, self.ledger)
wallet: BaseWallet = await self.context.inject(BaseWallet)
await wallet.create_local_did(
seed=self.test_seed, did=self.test_did, metadata=None
)
mock_invite = async_mock.MagicMock(
did=self.test_target_did,
endpoint=self.test_endpoint,
recipient_keys=[self.test_target_verkey],
routing_keys=[self.test_verkey],
label="label",
)
mock_conn = async_mock.MagicMock(
my_did=self.test_did,
their_did=self.test_target_did,
connection_id="dummy",
their_role=ConnRecord.Role.RESPONDER.rfc23,
state=ConnRecord.State.INVITATION.rfc23,
retrieve_invitation=async_mock.CoroutineMock(return_value=mock_invite),
)
targets = await self.manager.fetch_connection_targets(mock_conn)
assert len(targets) == 1
target = targets[0]
assert target.did == mock_conn.their_did
assert target.endpoint == mock_invite.endpoint
assert target.label == mock_invite.label
assert target.recipient_keys == mock_invite.recipient_keys
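        # Note: a ledger-resolved target carries no routing keys, so the
        # invitation's routing_keys are expected to be dropped here.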
assert target.routing_keys == []
assert target.sender_key == (await wallet.get_local_did(self.test_did)).verkey
async def test_fetch_connection_targets_conn_initiator_completed_no_their_did(self):
wallet: BaseWallet = await self.context.inject(BaseWallet)
await wallet.create_local_did(
seed=self.test_seed, did=self.test_did, metadata=None
)
mock_conn = async_mock.MagicMock(
my_did=self.test_did,
their_did=None,
state=ConnRecord.State.COMPLETED.rfc23,
)
assert await self.manager.fetch_connection_targets(mock_conn) is None
async def test_fetch_connection_targets_conn_completed_their_did(self):
wallet: BaseWallet = await self.context.inject(BaseWallet)
await wallet.create_local_did(
seed=self.test_seed, did=self.test_did, metadata=None
)
did_doc = self.make_did_doc(did=self.test_did, verkey=self.test_verkey)
await self.manager.store_did_document(did_doc)
mock_conn = async_mock.MagicMock(
my_did=self.test_did,
their_did=self.test_did,
their_label="label",
their_role=ConnRecord.Role.REQUESTER.rfc160,
state=ConnRecord.State.COMPLETED.rfc23,
)
targets = await self.manager.fetch_connection_targets(mock_conn)
assert len(targets) == 1
target = targets[0]
assert target.did == mock_conn.their_did
assert target.endpoint == self.test_endpoint
assert target.label == mock_conn.their_label
assert target.recipient_keys == [self.test_verkey]
assert target.routing_keys == []
assert target.sender_key == (await wallet.get_local_did(self.test_did)).verkey
async def test_diddoc_connection_targets_diddoc_underspecified(self):
with self.assertRaises(ConnectionManagerError):
self.manager.diddoc_connection_targets(None, self.test_verkey)
x_did_doc = DIDDoc(did=None)
with self.assertRaises(ConnectionManagerError):
self.manager.diddoc_connection_targets(x_did_doc, self.test_verkey)
x_did_doc = self.make_did_doc(
did=self.test_target_did, verkey=self.test_target_verkey
)
x_did_doc._service = {}
with self.assertRaises(ConnectionManagerError):
self.manager.diddoc_connection_targets(x_did_doc, self.test_verkey)
async def test_establish_inbound(self):
wallet: BaseWallet = await self.context.inject(BaseWallet)
await wallet.create_local_did(
seed=self.test_seed, did=self.test_did, metadata=None
)
mock_conn = async_mock.MagicMock(
my_did=self.test_did,
is_ready=True,
save=async_mock.CoroutineMock(),
)
inbound_conn_id = "dummy"
with async_mock.patch.object(
ConnRecord, "retrieve_by_id", async_mock.CoroutineMock()
) as mock_conn_rec_retrieve_by_id, async_mock.patch.object(
RoutingManager, "send_create_route", async_mock.CoroutineMock()
) as mock_routing_mgr_send_create_route:
mock_conn_rec_retrieve_by_id.return_value = mock_conn
routing_state = await self.manager.establish_inbound(
mock_conn, inbound_conn_id, None
)
assert routing_state == ConnRecord.ROUTING_STATE_REQUEST
async def test_establish_inbound_conn_rec_no_my_did(self):
wallet: BaseWallet = await self.context.inject(BaseWallet)
await wallet.create_local_did(
seed=self.test_seed, did=self.test_did, metadata=None
)
mock_conn = async_mock.MagicMock()
mock_conn.my_did = None
mock_conn.is_ready = True
mock_conn.save = async_mock.CoroutineMock()
inbound_conn_id = "dummy"
with async_mock.patch.object(
ConnRecord, "retrieve_by_id", async_mock.CoroutineMock()
) as mock_conn_rec_retrieve_by_id, async_mock.patch.object(
RoutingManager, "send_create_route", async_mock.CoroutineMock()
) as mock_routing_mgr_send_create_route:
mock_conn_rec_retrieve_by_id.return_value = mock_conn
routing_state = await self.manager.establish_inbound(
mock_conn, inbound_conn_id, None
)
assert routing_state == ConnRecord.ROUTING_STATE_REQUEST
async def test_establish_inbound_no_conn_record(self):
wallet: BaseWallet = await self.context.inject(BaseWallet)
await wallet.create_local_did(
seed=self.test_seed, did=self.test_did, metadata=None
)
mock_conn = async_mock.MagicMock()
mock_conn.my_did = self.test_did
mock_conn.is_ready = True
mock_conn.save = async_mock.CoroutineMock()
inbound_conn_id = "dummy"
with async_mock.patch.object(
ConnRecord, "retrieve_by_id", async_mock.CoroutineMock()
) as mock_conn_rec_retrieve_by_id, async_mock.patch.object(
RoutingManager, "send_create_route", async_mock.CoroutineMock()
) as mock_routing_mgr_send_create_route:
mock_conn_rec_retrieve_by_id.side_effect = StorageNotFoundError()
with self.assertRaises(ConnectionManagerError):
await self.manager.establish_inbound(mock_conn, inbound_conn_id, None)
async def test_establish_inbound_router_not_ready(self):
wallet: BaseWallet = await self.context.inject(BaseWallet)
await wallet.create_local_did(
seed=self.test_seed, did=self.test_did, metadata=None
)
mock_conn = async_mock.MagicMock()
mock_conn.my_did = self.test_did
mock_conn.is_ready = False
mock_conn.save = async_mock.CoroutineMock()
inbound_conn_id = "dummy"
with async_mock.patch.object(
ConnRecord, "retrieve_by_id", async_mock.CoroutineMock()
) as mock_conn_rec_retrieve_by_id, async_mock.patch.object(
RoutingManager, "send_create_route", async_mock.CoroutineMock()
) as mock_routing_mgr_send_create_route:
mock_conn_rec_retrieve_by_id.return_value = mock_conn
with self.assertRaises(ConnectionManagerError):
await self.manager.establish_inbound(mock_conn, inbound_conn_id, None)
async def test_update_inbound(self):
with async_mock.patch.object(
ConnRecord, "query", async_mock.CoroutineMock()
) as mock_conn_rec_query, async_mock.patch.object(
self.wallet, "get_local_did", autospec=True
) as mock_wallet_get_local_did:
mock_conn_rec_query.return_value = [
async_mock.MagicMock(
my_did=None,
their_did=self.test_target_did,
their_role=None,
save=None,
),
async_mock.MagicMock(
my_did=self.test_did,
their_did=self.test_target_did,
their_role=None,
save=async_mock.CoroutineMock(),
),
]
mock_wallet_get_local_did.return_value = async_mock.CoroutineMock(
verkey=self.test_verkey
)
await self.manager.update_inbound(
"dummy", self.test_verkey, ConnRecord.ROUTING_STATE_ACTIVE
)
mock_conn_rec_query.return_value[1].save.assert_called_once_with(
self.context
)
| 41.263234 | 88 | 0.660486 | 6,530 | 56,902 | 5.396631 | 0.03951 | 0.053803 | 0.041515 | 0.043133 | 0.879569 | 0.849376 | 0.829711 | 0.821084 | 0.808627 | 0.793672 | 0 | 0.003721 | 0.263172 | 56,902 | 1,378 | 89 | 41.293179 | 0.836788 | 0.00659 | 0 | 0.696398 | 0 | 0 | 0.036574 | 0.010493 | 0 | 0 | 0 | 0 | 0.085763 | 1 | 0.001715 | false | 0 | 0.020583 | 0 | 0.024014 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e6930a7665367b96a4e31cdcfd898c77fe4e4d6d | 73 | py | Python | pommerman/envs/__init__.py | lucasb-eyer/playground | 718bba4d1db70a0f835afc83752f3ab7687c5e3f | [
"Apache-2.0"
] | null | null | null | pommerman/envs/__init__.py | lucasb-eyer/playground | 718bba4d1db70a0f835afc83752f3ab7687c5e3f | [
"Apache-2.0"
] | null | null | null | pommerman/envs/__init__.py | lucasb-eyer/playground | 718bba4d1db70a0f835afc83752f3ab7687c5e3f | [
"Apache-2.0"
] | null | null | null | from . import v0
from . import v1
from . import v2
from . import utility
| 14.6 | 21 | 0.726027 | 12 | 73 | 4.416667 | 0.5 | 0.754717 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.052632 | 0.219178 | 73 | 4 | 22 | 18.25 | 0.877193 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e6fda1eccaa108e8e15a9c322a756b9d2d8e3a57 | 106 | py | Python | hypernet/apps/__init__.py | christian-jacobsen/hypernet | 9f62e1531eb152cc08af0b0c6b09d6fde8d42400 | [
"Apache-2.0"
] | null | null | null | hypernet/apps/__init__.py | christian-jacobsen/hypernet | 9f62e1531eb152cc08af0b0c6b09d6fde8d42400 | [
"Apache-2.0"
] | null | null | null | hypernet/apps/__init__.py | christian-jacobsen/hypernet | 9f62e1531eb152cc08af0b0c6b09d6fde8d42400 | [
"Apache-2.0"
] | null | null | null | from hypernet.apps import box
from hypernet.apps import fitRates
__all__ = [
"box",
"fitRates"
]
| 13.25 | 34 | 0.688679 | 13 | 106 | 5.307692 | 0.538462 | 0.347826 | 0.463768 | 0.637681 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.216981 | 106 | 7 | 35 | 15.142857 | 0.831325 | 0 | 0 | 0 | 0 | 0 | 0.103774 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
fc117289e9240d7107721bfc95af342a5c5f3fd4 | 174 | py | Python | chiamon/src/plugins/__init__.py | danielringch/chiatools | 2825f71acc68d613de3c8b3b2f784ccd75610b71 | [
"MIT"
] | null | null | null | chiamon/src/plugins/__init__.py | danielringch/chiatools | 2825f71acc68d613de3c8b3b2f784ccd75610b71 | [
"MIT"
] | null | null | null | chiamon/src/plugins/__init__.py | danielringch/chiatools | 2825f71acc68d613de3c8b3b2f784ccd75610b71 | [
"MIT"
] | null | null | null | from .chiafarmer import *
from .chianode import *
from .chiawallet import *
from .flexfarmer import *
from .flexpool import *
from .pingdrive import *
from .smartctl import * | 24.857143 | 25 | 0.764368 | 21 | 174 | 6.333333 | 0.428571 | 0.451128 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.155172 | 174 | 7 | 26 | 24.857143 | 0.904762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fc3050148347c8100e11634db252884d1fd5cec9 | 151 | py | Python | hhcms/services/__init__.py | youngershen/hhcms | 748bfcaaf250584b2b7233f271644ca33f8ff80b | [
"MIT"
] | null | null | null | hhcms/services/__init__.py | youngershen/hhcms | 748bfcaaf250584b2b7233f271644ca33f8ff80b | [
"MIT"
] | null | null | null | hhcms/services/__init__.py | youngershen/hhcms | 748bfcaaf250584b2b7233f271644ca33f8ff80b | [
"MIT"
] | 1 | 2018-07-15T05:33:34.000Z | 2018-07-15T05:33:34.000Z | # PROJECT : hhcms
# TIME : 18-4-21 下午3:24
# AUTHOR : 申延刚 <Younger Shen>
# EMAIL : younger.shen@hotmail.com
# CELL : 13811754531
# WECHAT : 13811754531
| 21.571429 | 34 | 0.688742 | 21 | 151 | 4.952381 | 0.857143 | 0.211538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.241935 | 0.178808 | 151 | 6 | 35 | 25.166667 | 0.596774 | 0.913907 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5d76eb1fa6dbc011bd7d03f9097098c80f19721f | 77,431 | py | Python | tests/conftest.py | th2-net/th2-data-services | b2177aa903705fb248151b3ca4d0c53056b87cff | [
"Apache-2.0"
] | 3 | 2021-08-03T07:50:55.000Z | 2022-03-23T15:42:07.000Z | tests/conftest.py | th2-net/th2-data-services | b2177aa903705fb248151b3ca4d0c53056b87cff | [
"Apache-2.0"
] | 7 | 2021-11-12T16:22:42.000Z | 2022-03-24T08:56:30.000Z | tests/conftest.py | th2-net/th2-data-services | b2177aa903705fb248151b3ca4d0c53056b87cff | [
"Apache-2.0"
] | null | null | null | from collections import namedtuple
from datetime import datetime
from typing import List, NamedTuple
import pytest
from th2_data_services.data import Data
from th2_data_services.data_source import DataSource
from th2_data_services.filter import Filter
@pytest.fixture
def demo_data_source():
DEMO_HOST = "10.64.66.66" # th2-kube-demo
DEMO_PORT = "30999" # Data-provider Node port
data_source = DataSource(f"http://{DEMO_HOST}:{DEMO_PORT}")
return data_source
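# NOTE: the demo fixtures below issue live requests through this data source,
# so they only work when the th2-kube-demo provider above is reachable; they
# are integration helpers rather than self-contained test data.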
START_TIME = datetime(year=2021, month=6, day=15, hour=9, minute=44, second=41, microsecond=692724)
END_TIME = datetime(year=2021, month=6, day=15, hour=12, minute=45, second=49, microsecond=28579)
@pytest.fixture
def demo_get_events_with_one_filter(demo_data_source: DataSource) -> Data:
case = demo_data_source.get_events_from_data_provider(
startTimestamp=START_TIME,
endTimestamp=END_TIME,
metadataOnly=False,
filters=[Filter("name", "ExecutionReport")],
)
return case
@pytest.fixture
def demo_get_events_with_filters(demo_data_source: DataSource) -> Data:
case = demo_data_source.get_events_from_data_provider(
startTimestamp=START_TIME,
endTimestamp=END_TIME,
metadataOnly=False,
filters=[Filter("name", "ExecutionReport"), Filter("type", "Send message")],
)
return case
@pytest.fixture
def demo_get_messages_with_one_filter(demo_data_source: DataSource) -> Data:
case = demo_data_source.get_messages_from_data_provider(
startTimestamp=datetime(
year=2021,
month=1,
day=26,
hour=12,
minute=44,
second=41,
microsecond=692724,
),
endTimestamp=datetime(year=2021, month=1, day=26, hour=13, minute=45, second=49, microsecond=28579),
stream=["demo-conn2"],
filters=Filter("body", "195"),
)
return case
@pytest.fixture
def demo_get_messages_with_filters(demo_data_source: DataSource) -> Data:
case = demo_data_source.get_messages_from_data_provider(
startTimestamp=datetime(
year=2021,
month=1,
day=26,
hour=12,
minute=44,
second=41,
microsecond=692724,
),
endTimestamp=datetime(year=2021, month=1, day=26, hour=13, minute=45, second=49, microsecond=28579),
stream=["demo-conn2"],
filters=[Filter("type", ""), Filter("body", "195")],
)
return case
@pytest.fixture
def demo_events_from_data_source(demo_data_source: DataSource) -> Data:
events = demo_data_source.get_events_from_data_provider(
startTimestamp=START_TIME,
endTimestamp=END_TIME,
metadataOnly=False,
)
# Returns 49 events
# Failed = 6
return events
@pytest.fixture
def demo_events_with_metadataOnly_true(demo_data_source: DataSource) -> Data:
events = demo_data_source.get_events_from_data_provider(
startTimestamp=START_TIME,
endTimestamp=END_TIME,
metadataOnly=True,
)
return events
@pytest.fixture
def demo_events_with_metadataOnly_metadata_not_set(
demo_data_source: DataSource,
) -> Data:
events = demo_data_source.get_events_from_data_provider(
startTimestamp=START_TIME,
endTimestamp=END_TIME,
)
return events
@pytest.fixture
def demo_messages_with_metadataOnly_true(demo_data_source: DataSource) -> Data:
messages = demo_data_source.get_messages_from_data_provider(
startTimestamp=START_TIME,
endTimestamp=END_TIME,
stream=["th2-hand-demo"],
metadataOnly=True,
)
return messages
@pytest.fixture
def demo_messages_with_metadataOnly_false(demo_data_source: DataSource) -> Data:
messages = demo_data_source.get_messages_from_data_provider(
startTimestamp=START_TIME,
endTimestamp=END_TIME,
stream=["th2-hand-demo"],
metadataOnly=False,
)
return messages
@pytest.fixture
def demo_messages_from_data_source(demo_data_source: DataSource) -> Data:
messages = demo_data_source.get_messages_from_data_provider(
startTimestamp=START_TIME, endTimestamp=END_TIME, stream=["th2-hand-demo"]
)
# Returns 36 messages
return messages
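# NOTE: `cache=True` below asks the returned Data object to keep the fetched
# events in its local cache, so iterating the result a second time is expected
# to be served from cache instead of re-querying the provider.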
@pytest.fixture
def demo_events_from_data_source_with_cache_status(
demo_data_source: DataSource,
) -> Data:
events = demo_data_source.get_events_from_data_provider(
startTimestamp=START_TIME, endTimestamp=END_TIME, metadataOnly=False, cache=True
)
# Returns 49 events
# Failed = 6
return events
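# The long list of bogus "Test..." streams below presumably exercises request
# building for many (and very long) stream names; only the last four aliases
# ("demo-dc1", "demo-dc2", "demo-log", "th2-hand-demo") are real demo streams.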
@pytest.fixture
def demo_messages_from_data_source_with_test_streams(
demo_data_source: DataSource,
) -> Data:
messages = demo_data_source.get_messages_from_data_provider(
startTimestamp=START_TIME,
endTimestamp=END_TIME,
stream=[
"Test-123",
"Test-1234",
"Test-12345",
"Test-123456",
"Test-1234567",
"Test-12345678",
"Test-123456789",
"Test-1234567810",
"TestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTest",
"TestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTest1",
"TestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTest2",
"TestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTest3",
"TestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTest4",
"TestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTest5",
"TestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTest6",
"TestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTest7",
"TestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTest8",
"TestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTest9",
"TestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTest10",
"TestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTest11",
"TestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTest12",
"TestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTest13",
"TestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTest14",
"TestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTest15",
"TestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTest16",
"TestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTest17",
"TestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTest18",
"TestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTest19",
"TestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTest20",
"TestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTest21",
"TestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTestTest22",
"demo-dc1",
"demo-dc2",
"demo-log",
"th2-hand-demo",
],
)
return messages
@pytest.fixture
def general_data() -> List[dict]:
data = [
{
"batchId": None,
"eventId": "84db48fc-d1b4-11eb-b0fb-199708acc7bc",
"eventName": "[TS_1]Aggressive IOC vs two orders: second order's price is " "lower than first",
"eventType": "",
"isBatched": False,
"parentEventId": None,
},
{
"batchId": None,
"eventId": "88a3ee80-d1b4-11eb-b0fb-199708acc7bc",
"eventName": "Case[TC_1.1]: Trader DEMO-CONN1 vs trader DEMO-CONN2 for " "instrument INSTR1",
"eventType": "",
"isBatched": False,
"parentEventId": "84db48fc-d1b4-11eb-b0fb-199708acc7bc",
},
{
"batchId": None,
"eventId": "8bc787fe-d1b4-11eb-bae5-57b0c4472880",
"eventName": 'placeOrderFIX demo-conn1 - STEP1: Trader "DEMO-CONN1" sends '
"request to create passive Order.",
"eventType": "placeOrderFIX",
"isBatched": False,
"parentEventId": "88a3ee80-d1b4-11eb-b0fb-199708acc7bc",
},
{
"batchId": "6e3be13f-cab7-4653-8cb9-6e74fd95ade4",
"eventId": "6e3be13f-cab7-4653-8cb9-6e74fd95ade4:8c035903-d1b4-11eb-9278-591e568ad66e",
"eventName": "Checkpoint",
"eventType": "Checkpoint",
"isBatched": True,
"parentEventId": "8bc787fe-d1b4-11eb-bae5-57b0c4472880",
},
{
"batchId": "6e3be13f-cab7-4653-8cb9-6e74fd95ade4",
"eventId": "6e3be13f-cab7-4653-8cb9-6e74fd95ade4:8c1114a4-d1b4-11eb-9278-591e568ad66e",
"eventName": "Checkpoint for session alias 'th2-hand-demo' direction 'FIRST' "
"sequence '1623852603564709030'",
"eventType": "Checkpoint for session",
"isBatched": True,
"parentEventId": "6e3be13f-cab7-4653-8cb9-6e74fd95ade4:8c035903-d1b4-11eb-9278-591e568ad66e",
},
{
"batchId": "6e3be13f-cab7-4653-8cb9-6e74fd95ade4",
"eventId": "6e3be13f-cab7-4653-8cb9-6e74fd95ade4:8c1114a5-d1b4-11eb-9278-591e568ad66e",
"eventName": "Checkpoint for session alias 'demo-conn1' direction 'SECOND' "
"sequence '1624005455622140289'",
"eventType": "Checkpoint for session",
"parentEventId": "6e3be13f-cab7-4653-8cb9-6e74fd95ade4:8c035903-d1b4-11eb-9278-591e568ad66e",
},
{
"batchId": "6e3be13f-cab7-4653-8cb9-6e74fd95ade4",
"eventId": "6e3be13f-cab7-4653-8cb9-6e74fd95ade4:8c1114a6-d1b4-11eb-9278-591e568ad66e",
"eventName": "Checkpoint for session alias 'demo-dc1' direction 'SECOND' " "sequence '1624005475721015014'",
"eventType": "Checkpoint for session",
"isBatched": True,
"parentEventId": "6e3be13f-cab7-4653-8cb9-6e74fd95ade4:8c035903-d1b4-11eb-9278-591e568ad66e",
},
{
"batchId": "6e3be13f-cab7-4653-8cb9-6e74fd95ade4",
"eventId": "6e3be13f-cab7-4653-8cb9-6e74fd95ade4:8c1114a7-d1b4-11eb-9278-591e568ad66e",
"eventName": "Checkpoint for session alias 'demo-dc1' direction 'FIRST' " "sequence '1624005475720919499'",
"eventType": "Checkpoint for session",
"isBatched": True,
"parentEventId": "6e3be13f-cab7-4653-8cb9-6e74fd95ade4:8c035903-d1b4-11eb-9278-591e568ad66e",
},
{
"batchId": "6e3be13f-cab7-4653-8cb9-6e74fd95ade4",
"eventId": "6e3be13f-cab7-4653-8cb9-6e74fd95ade4:8c1114a8-d1b4-11eb-9278-591e568ad66e",
"eventName": "Checkpoint for session alias 'demo-conn2' direction 'FIRST' "
"sequence '1624005448022245399'",
"eventType": "Checkpoint for session",
"isBatched": True,
"parentEventId": "6e3be13f-cab7-4653-8cb9-6e74fd95ade4:8c035903-d1b4-11eb-9278-591e568ad66e",
},
{
"batchId": "6e3be13f-cab7-4653-8cb9-6e74fd95ade4",
"eventId": "6e3be13f-cab7-4653-8cb9-6e74fd95ade4:8c1114a9-d1b4-11eb-9278-591e568ad66e",
"eventName": "Checkpoint for session alias 'demo-conn2' direction 'SECOND' "
"sequence '1624005448022426113'",
"eventType": "Checkpoint for session",
"isBatched": True,
"parentEventId": "6e3be13f-cab7-4653-8cb9-6e74fd95ade4:8c035903-d1b4-11eb-9278-591e568ad66e",
},
{
"batchId": "6e3be13f-cab7-4653-8cb9-6e74fd95ade4",
"eventId": "6e3be13f-cab7-4653-8cb9-6e74fd95ade4:8c1114aa-d1b4-11eb-9278-591e568ad66e",
"eventName": "Checkpoint for session alias 'demo-dc2' direction 'SECOND' " "sequence '1624005466840347015'",
"eventType": "Checkpoint for session",
"isBatched": True,
"parentEventId": "6e3be13f-cab7-4653-8cb9-6e74fd95ade4:8c035903-d1b4-11eb-9278-591e568ad66e",
},
{
"batchId": "6e3be13f-cab7-4653-8cb9-6e74fd95ade4",
"eventId": "6e3be13f-cab7-4653-8cb9-6e74fd95ade4:8c1114ab-d1b4-11eb-9278-591e568ad66e",
"eventName": "Checkpoint for session alias 'demo-dc2' direction 'FIRST' " "sequence '1624005466840263372'",
"eventType": "Checkpoint for session",
"isBatched": True,
"parentEventId": "6e3be13f-cab7-4653-8cb9-6e74fd95ade4:8c035903-d1b4-11eb-9278-591e568ad66e",
},
{
"batchId": "6e3be13f-cab7-4653-8cb9-6e74fd95ade4",
"eventId": "6e3be13f-cab7-4653-8cb9-6e74fd95ade4:8c1114ac-d1b4-11eb-9278-591e568ad66e",
"eventName": "Checkpoint for session alias 'demo-conn1' direction 'FIRST' "
"sequence '1624005455622011522'",
"eventType": "Checkpoint for session",
"isBatched": True,
"parentEventId": "6e3be13f-cab7-4653-8cb9-6e74fd95ade4:8c035903-d1b4-11eb-9278-591e568ad66e",
},
{
"batchId": "6e3be13f-cab7-4653-8cb9-6e74fd95ade4",
"eventId": "6e3be13f-cab7-4653-8cb9-6e74fd95ade4:8c1114ad-d1b4-11eb-9278-591e568ad66e",
"eventName": "Checkpoint for session alias 'demo-log' direction 'FIRST' " "sequence '1624029363623063053'",
"eventType": "Checkpoint for session",
"isBatched": True,
"parentEventId": "6e3be13f-cab7-4653-8cb9-6e74fd95ade4:8c035903-d1b4-11eb-9278-591e568ad66e",
},
{
"batchId": None,
"eventId": "8c3fec4f-d1b4-11eb-bae5-57b0c4472880",
"eventName": "Send 'NewOrderSingle' message to connectivity",
"eventType": "Outgoing message",
"isBatched": False,
"parentEventId": "8bc787fe-d1b4-11eb-bae5-57b0c4472880",
},
{
"batchId": None,
"eventId": "8c44806c-d1b4-11eb-8e55-d3a76285d588",
"eventName": "Send 'NewOrderSingle' message",
"eventType": "Outgoing message",
"isBatched": False,
"parentEventId": "8bc787fe-d1b4-11eb-bae5-57b0c4472880",
},
{
"batchId": "654c2724-5202-460b-8e6c-a7ee9fb02ddf",
"eventId": "654c2724-5202-460b-8e6c-a7ee9fb02ddf:8ca20288-d1b4-11eb-986f-1e8d42132387",
"eventName": "Remove 'NewOrderSingle' "
"id='demo-conn1:SECOND:1624005455622135205' "
"Hash='7009491514226292581' Group='NOS_CONN' "
"Hash['SecondaryClOrdID': 11111, 'SecurityID': INSTR1]",
"isBatched": True,
"eventType": "",
"parentEventId": "a3779b94-d051-11eb-986f-1e8d42132387",
},
{
"batchId": None,
"eventId": "8ceb47f6-d1b4-11eb-a9ed-ffb57363e013",
"eventName": "Send 'ExecutionReport' message",
"isBatched": False,
"eventType": "Send message",
"parentEventId": "845d70d2-9c68-11eb-8598-691ebd7f413d",
},
{
"batchId": None,
"eventId": "8ced1c93-d1b4-11eb-a9f4-b12655548efc",
"eventName": "Send 'ExecutionReport' message",
"isBatched": False,
"eventType": "Send message",
"parentEventId": "845d70d2-9c68-11eb-8598-691ebd7f413d",
},
{
"batchId": None,
"eventId": "8d44d930-d1b4-11eb-bae5-57b0c4472880",
"eventName": "Received 'ExecutionReport' response message",
"isBatched": False,
"eventType": "message",
"parentEventId": "8bc787fe-d1b4-11eb-bae5-57b0c4472880",
},
{
"batchId": None,
"eventId": "8d6e0c9e-d1b4-11eb-9278-591e568ad66e",
"eventName": "Check sequence rule SessionKey{sessionAlias='demo-conn1', "
'direction=FIRST} - STEP2: Trader "DEMO-CONN1" receives '
"Execution Report. The order stands on book in status NEW",
"isBatched": False,
"eventType": "Checkpoint for session",
"parentEventId": "88a3ee80-d1b4-11eb-b0fb-199708acc7bc",
},
]
return data
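# The two tree fixtures below are derived from `general_data`:
# `test_events_tree.events` lists every eventId present in the data,
# `test_parent_events_tree.events` keeps only the ids that also occur as a
# parentEventId, and both share the same `unknown_events`: parent ids that
# are referenced in the data but never defined as events themselves.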
@pytest.fixture
def test_events_tree() -> NamedTuple:
TestEventTree = namedtuple("TestEventTree", ["events", "unknown_events"])
test_events_tree = TestEventTree(
events=[
"84db48fc-d1b4-11eb-b0fb-199708acc7bc",
"88a3ee80-d1b4-11eb-b0fb-199708acc7bc",
"8bc787fe-d1b4-11eb-bae5-57b0c4472880",
"6e3be13f-cab7-4653-8cb9-6e74fd95ade4:8c035903-d1b4-11eb-9278-591e568ad66e",
"6e3be13f-cab7-4653-8cb9-6e74fd95ade4:8c1114a4-d1b4-11eb-9278-591e568ad66e",
"6e3be13f-cab7-4653-8cb9-6e74fd95ade4:8c1114a5-d1b4-11eb-9278-591e568ad66e",
"6e3be13f-cab7-4653-8cb9-6e74fd95ade4:8c1114a6-d1b4-11eb-9278-591e568ad66e",
"6e3be13f-cab7-4653-8cb9-6e74fd95ade4:8c1114a7-d1b4-11eb-9278-591e568ad66e",
"6e3be13f-cab7-4653-8cb9-6e74fd95ade4:8c1114a8-d1b4-11eb-9278-591e568ad66e",
"6e3be13f-cab7-4653-8cb9-6e74fd95ade4:8c1114a9-d1b4-11eb-9278-591e568ad66e",
"6e3be13f-cab7-4653-8cb9-6e74fd95ade4:8c1114aa-d1b4-11eb-9278-591e568ad66e",
"6e3be13f-cab7-4653-8cb9-6e74fd95ade4:8c1114ab-d1b4-11eb-9278-591e568ad66e",
"6e3be13f-cab7-4653-8cb9-6e74fd95ade4:8c1114ac-d1b4-11eb-9278-591e568ad66e",
"6e3be13f-cab7-4653-8cb9-6e74fd95ade4:8c1114ad-d1b4-11eb-9278-591e568ad66e",
"8c3fec4f-d1b4-11eb-bae5-57b0c4472880",
"8c44806c-d1b4-11eb-8e55-d3a76285d588",
"654c2724-5202-460b-8e6c-a7ee9fb02ddf:8ca20288-d1b4-11eb-986f-1e8d42132387",
"8ceb47f6-d1b4-11eb-a9ed-ffb57363e013",
"8ced1c93-d1b4-11eb-a9f4-b12655548efc",
"8d44d930-d1b4-11eb-bae5-57b0c4472880",
"8d6e0c9e-d1b4-11eb-9278-591e568ad66e",
],
unknown_events=[
"a3779b94-d051-11eb-986f-1e8d42132387",
"845d70d2-9c68-11eb-8598-691ebd7f413d",
],
)
return test_events_tree
@pytest.fixture
def test_parent_events_tree() -> NamedTuple:
TestEventTree = namedtuple("TestEventTree", ["events", "unknown_events"])
test_parent_events_tree = TestEventTree(
events=[
"84db48fc-d1b4-11eb-b0fb-199708acc7bc",
"88a3ee80-d1b4-11eb-b0fb-199708acc7bc",
"8bc787fe-d1b4-11eb-bae5-57b0c4472880",
"6e3be13f-cab7-4653-8cb9-6e74fd95ade4:8c035903-d1b4-11eb-9278-591e568ad66e",
],
unknown_events=[
"a3779b94-d051-11eb-986f-1e8d42132387",
"845d70d2-9c68-11eb-8598-691ebd7f413d",
],
)
return test_parent_events_tree
def get_super_type(record: dict, *args):
    event_type = record.get("eventType")
    if not event_type:
        # Only events without an explicit type are classified by tree position:
        # a root event is a test run, a child event is a test case.
        if not record.get("parentEventId"):
            event_type = "Test Run"
        else:
            event_type = "Test Case"
    return event_type
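# Examples of the mapping above (each follows directly from the branches):
#
#     get_super_type({"eventType": "", "parentEventId": None})   # -> "Test Run"
#     get_super_type({"eventType": "", "parentEventId": "id1"})  # -> "Test Case"
#     get_super_type({"eventType": "Verification", "parentEventId": "id1"})  # -> "Verification"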
@pytest.fixture
def data_for_analyzing() -> List[dict]:
data = [
{
"time": datetime(year=2021, month=1, day=1, hour=1, minute=1, second=1),
"type": "Test Run",
"eventName": "test run 1",
"successful": True,
"attachedMessageIds": False,
},
{
"time": datetime(year=2021, month=1, day=1, hour=1, minute=10, second=2),
"type": "Heartbeat",
"eventName": "heartbeat",
"successful": True,
"attachedMessageIds": True,
},
{
"time": datetime(year=2021, month=1, day=1, hour=1, minute=2, second=12),
"type": "Test Run",
"eventName": "test run 2",
"successful": False,
"attachedMessageIds": False,
},
{
"time": datetime(year=2021, month=1, day=1, hour=1, minute=4, second=30),
"type": "Test Case",
"eventName": "test case 1",
"successful": True,
"attachedMessageIds": False,
},
{
"time": datetime(year=2021, month=1, day=1, hour=1, minute=13, second=40),
"type": "Receive message",
"eventName": "message123",
"successful": True,
"attachedMessageIds": True,
},
{
"time": datetime(year=2021, month=1, day=1, hour=2, minute=12, second=11),
"type": "Heartbeat",
"eventName": "heartbeat",
"successful": False,
"attachedMessageIds": False,
},
{
"time": datetime(year=2021, month=1, day=1, hour=2, minute=10, second=1),
"type": "Test Case",
"eventName": "test case 2",
"successful": True,
"attachedMessageIds": False,
},
{
"time": datetime(year=2021, month=1, day=1, hour=1, minute=32, second=42),
"type": "Test Case",
"eventName": "test run 3",
"successful": True,
"attachedMessageIds": False,
},
{
"time": datetime(year=2021, month=1, day=1, hour=1, minute=41, second=19),
"type": "Receive message",
"eventName": "message122",
"successful": True,
"attachedMessageIds": True,
},
{
"time": datetime(year=2021, month=1, day=1, hour=1, minute=45, second=22),
"type": "Verification",
"eventName": "verification32",
"successful": True,
"attachedMessageIds": True,
},
{
"time": datetime(year=2021, month=1, day=1, hour=1, minute=33, second=12),
"type": "Heartbeat",
"eventName": "heartbeat",
"successful": False,
"attachedMessageIds": False,
},
{
"time": datetime(year=2021, month=1, day=1, hour=1, minute=1, second=59),
"type": "Test Case",
"eventName": "test case 3",
"successful": False,
"attachedMessageIds": False,
},
{
"time": datetime(year=2021, month=1, day=1, hour=1, minute=3, second=54),
"type": "Send message",
"eventName": "message",
"successful": False,
"attachedMessageIds": True,
},
{
"time": datetime(year=2021, month=1, day=1, hour=1, minute=54, second=52),
"type": "Verification",
"eventName": "verification33",
"successful": False,
"attachedMessageIds": True,
},
{
"time": datetime(year=2021, month=1, day=1, hour=2, minute=12, second=32),
"type": "Send message",
"eventName": "message123",
"successful": True,
"attachedMessageIds": True,
},
{
"time": datetime(year=2021, month=1, day=1, hour=2, minute=33, second=1),
"type": "Verification",
"eventName": "verification",
"successful": True,
"attachedMessageIds": True,
},
{
"time": datetime(year=2021, month=1, day=1, hour=1, minute=33, second=33),
"type": "Test Run",
"eventName": "test run 4",
"successful": False,
"attachedMessageIds": False,
},
{
"time": datetime(year=2021, month=1, day=1, hour=1, minute=44, second=44),
"type": "Send message",
"eventName": "message122",
"successful": True,
"attachedMessageIds": True,
},
{
"time": datetime(year=2021, month=1, day=1, hour=1, minute=23, second=23),
"type": "Receive message",
"eventName": "message 333",
"successful": False,
"attachedMessageIds": True,
},
{
"time": datetime(year=2021, month=1, day=1, hour=1, minute=55, second=55),
"type": "Send message",
"eventName": "message 333",
"successful": True,
"attachedMessageIds": True,
},
{
"time": datetime(year=2021, month=1, day=1, hour=1, minute=11, second=11),
"type": "Receive message",
"eventName": "message 444",
"successful": False,
"attachedMessageIds": True,
},
{
"time": datetime(year=2021, month=1, day=1, hour=1, minute=43, second=43),
"type": "Send message",
"eventName": "message 444",
"successful": True,
"attachedMessageIds": True,
},
{
"time": datetime(year=2021, month=1, day=1, hour=1, minute=56, second=32),
"type": "Receive message",
"eventName": "message 444",
"successful": True,
"attachedMessageIds": True,
},
{
"time": datetime(year=2021, month=1, day=1, hour=1, minute=40, second=10),
"type": "Test Case",
"eventName": "test case 4",
"successful": True,
"attachedMessageIds": False,
},
]
return data
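# A quick sanity check for the fixture above (standard library only; the
# counts below were tallied from the 24 entries by hand):
#
#     from collections import Counter
#     Counter(event["type"] for event in data)
#     # Counter({'Test Case': 5, 'Receive message': 5, 'Send message': 5,
#     #          'Test Run': 3, 'Heartbeat': 3, 'Verification': 3})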
@pytest.fixture
def general_body():
data = {
"rows": {
"AccountType": {"columns": {"fieldValue": "1"}, "type": "row"},
"ClOrdID": {"columns": {"fieldValue": "9601585"}, "type": "row"},
"OrdType": {"columns": {"fieldValue": "2"}, "type": "row"},
"OrderCapacity": {"columns": {"fieldValue": "A"}, "type": "row"},
"OrderQty": {"columns": {"fieldValue": "30"}, "type": "row"},
"Price": {"columns": {"fieldValue": "55"}, "type": "row"},
"SecondaryClOrdID": {"columns": {"fieldValue": "11111"}, "type": "row"},
"SecurityID": {"columns": {"fieldValue": "INSTR1"}, "type": "row"},
"SecurityIDSource": {"columns": {"fieldValue": "8"}, "type": "row"},
"Side": {"columns": {"fieldValue": "1"}, "type": "row"},
"TradingParty": {
"rows": {
"NoPartyIDs": {
"rows": {
"0": {
"rows": {
"PartyID": {
"columns": {"fieldValue": "DEMO-CONN1"},
"type": "row",
},
"PartyIDSource": {
"columns": {"fieldValue": "D"},
"type": "row",
},
"PartyRole": {
"columns": {"fieldValue": "76"},
"type": "row",
},
},
"type": "collection",
},
"1": {
"rows": {
"PartyID": {
"columns": {"fieldValue": "0"},
"type": "row",
},
"PartyIDSource": {
"columns": {"fieldValue": "P"},
"type": "row",
},
"PartyRole": {
"columns": {"fieldValue": "3"},
"type": "row",
},
},
"type": "collection",
},
"2": {
"rows": {
"PartyID": {
"columns": {"fieldValue": "0"},
"type": "row",
},
"PartyIDSource": {
"columns": {"fieldValue": "P"},
"type": "row",
},
"PartyRole": {
"columns": {"fieldValue": "122"},
"type": "row",
},
},
"type": "collection",
},
"3": {
"rows": {
"PartyID": {
"columns": {"fieldValue": "3"},
"type": "row",
},
"PartyIDSource": {
"columns": {"fieldValue": "P"},
"type": "row",
},
"PartyRole": {
"columns": {"fieldValue": "12"},
"type": "row",
},
},
"type": "collection",
},
},
"type": "collection",
}
},
"type": "collection",
},
"TransactTime": {
"columns": {"fieldValue": "2021-06-20T13:44:48.170589"},
"type": "row",
},
},
"type": "treeTable",
}
return data
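# `general_body` models a "treeTable" event body: leaves are "type": "row"
# with a "columns" dict, nested groups are "type": "collection" with "rows".
# The `complex_body` fixture below models the richer "verification" body,
# where every leaf also carries actual/expected values and a status.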
@pytest.fixture
def complex_body():
data = [
{
"fields": {
"AccountType": {
"actual": "1",
"expected": "1",
"key": False,
"operation": "EQUAL",
"status": "PASSED",
"type": "field",
},
"ClOrdID": {
"actual": "9601585",
"expected": "9601585",
"key": True,
"operation": "EQUAL",
"status": "PASSED",
"type": "field",
},
"CumQty": {
"actual": "0",
"expected": "0",
"key": False,
"operation": "EQUAL",
"status": "PASSED",
"type": "field",
},
"ExecID": {
"actual": "2346",
"expected": "*",
"key": False,
"operation": "NOT_EMPTY",
"status": "PASSED",
"type": "field",
},
"ExecType": {
"actual": "0",
"expected": "0",
"key": False,
"operation": "EQUAL",
"status": "PASSED",
"type": "field",
},
"LeavesQty": {
"actual": "30",
"expected": "30",
"key": False,
"operation": "EQUAL",
"status": "PASSED",
"type": "field",
},
"OrdStatus": {
"actual": "0",
"expected": "0",
"key": True,
"operation": "EQUAL",
"status": "PASSED",
"type": "field",
},
"OrdType": {
"actual": "2",
"expected": "2",
"key": False,
"operation": "EQUAL",
"status": "PASSED",
"type": "field",
},
"OrderCapacity": {
"actual": "A",
"expected": "A",
"key": False,
"operation": "EQUAL",
"status": "PASSED",
"type": "field",
},
"OrderID": {
"actual": "867",
"expected": "*",
"key": False,
"operation": "NOT_EMPTY",
"status": "PASSED",
"type": "field",
},
"OrderQty": {
"actual": "30",
"expected": "30",
"key": False,
"operation": "EQUAL",
"status": "PASSED",
"type": "field",
},
"Price": {
"actual": "55",
"expected": "55",
"key": False,
"operation": "EQUAL",
"status": "PASSED",
"type": "field",
},
"SecurityID": {
"actual": "INSTR1",
"expected": "INSTR1",
"key": False,
"operation": "EQUAL",
"status": "PASSED",
"type": "field",
},
"SecurityIDSource": {
"actual": "8",
"expected": "8",
"key": False,
"operation": "EQUAL",
"status": "PASSED",
"type": "field",
},
"Side": {
"actual": "1",
"expected": "1",
"key": False,
"operation": "EQUAL",
"status": "PASSED",
"type": "field",
},
"Text": {
"actual": "Simulated New Order Buy is placed",
"expected": "*",
"key": False,
"operation": "NOT_EMPTY",
"status": "PASSED",
"type": "field",
},
"TradingParty": {
"actual": "1",
"expected": "1",
"fields": {
"NoPartyIDs": {
"actual": "4",
"expected": "4",
"fields": {
"0": {
"actual": "3",
"expected": "3",
"fields": {
"PartyID": {
"actual": "DEMO-CONN1",
"expected": "DEMO-CONN1",
"key": False,
"operation": "EQUAL",
"status": "PASSED",
"type": "field",
},
"PartyIDSource": {
"actual": "D",
"expected": "D",
"key": False,
"operation": "EQUAL",
"status": "PASSED",
"type": "field",
},
"PartyRole": {
"actual": "76",
"expected": "76",
"key": False,
"operation": "EQUAL",
"status": "PASSED",
"type": "field",
},
},
"key": False,
"operation": "EQUAL",
"type": "collection",
},
"1": {
"actual": "3",
"expected": "3",
"fields": {
"PartyID": {
"actual": "0",
"expected": "0",
"key": False,
"operation": "EQUAL",
"status": "PASSED",
"type": "field",
},
"PartyIDSource": {
"actual": "P",
"expected": "P",
"key": False,
"operation": "EQUAL",
"status": "PASSED",
"type": "field",
},
"PartyRole": {
"actual": "3",
"expected": "3",
"key": False,
"operation": "EQUAL",
"status": "PASSED",
"type": "field",
},
},
"key": False,
"operation": "EQUAL",
"type": "collection",
},
"2": {
"actual": "3",
"expected": "3",
"fields": {
"PartyID": {
"actual": "0",
"expected": "0",
"key": False,
"operation": "EQUAL",
"status": "PASSED",
"type": "field",
},
"PartyIDSource": {
"actual": "P",
"expected": "P",
"key": False,
"operation": "EQUAL",
"status": "PASSED",
"type": "field",
},
"PartyRole": {
"actual": "122",
"expected": "122",
"key": False,
"operation": "EQUAL",
"status": "PASSED",
"type": "field",
},
},
"key": False,
"operation": "EQUAL",
"type": "collection",
},
"3": {
"actual": "3",
"expected": "3",
"fields": {
"PartyID": {
"actual": "3",
"expected": "3",
"key": False,
"operation": "EQUAL",
"status": "PASSED",
"type": "field",
},
"PartyIDSource": {
"actual": "P",
"expected": "P",
"key": False,
"operation": "EQUAL",
"status": "PASSED",
"type": "field",
},
"PartyRole": {
"actual": "12",
"expected": "12",
"key": False,
"operation": "EQUAL",
"status": "PASSED",
"type": "field",
},
},
"key": False,
"operation": "EQUAL",
"type": "collection",
},
},
"key": False,
"operation": "EQUAL",
"type": "collection",
}
},
"key": False,
"operation": "EQUAL",
"type": "collection",
},
"TransactTime": {
"actual": "2021-06-20T10:44:55",
"expected": "null",
"key": False,
"operation": "EQUAL",
"status": "NA",
"type": "field",
},
"header": {
"actual": "7",
"expected": "7",
"fields": {
"BeginString": {
"actual": "FIXT.1.1",
"expected": "FIXT.1.1",
"key": False,
"operation": "EQUAL",
"status": "PASSED",
"type": "field",
},
"BodyLength": {
"actual": "310",
"expected": "*",
"key": False,
"operation": "NOT_EMPTY",
"status": "PASSED",
"type": "field",
},
"MsgSeqNum": {
"actual": "1291",
"expected": "*",
"key": False,
"operation": "NOT_EMPTY",
"status": "PASSED",
"type": "field",
},
"MsgType": {
"actual": "8",
"expected": "8",
"key": False,
"operation": "EQUAL",
"status": "PASSED",
"type": "field",
},
"SenderCompID": {
"actual": "FGW",
"expected": "*",
"key": False,
"operation": "NOT_EMPTY",
"status": "PASSED",
"type": "field",
},
"SendingTime": {
"actual": "2021-06-20T10:44:55.346",
"expected": "*",
"key": False,
"operation": "NOT_EMPTY",
"status": "PASSED",
"type": "field",
},
"TargetCompID": {
"actual": "DEMO-CONN1",
"expected": "DEMO-CONN1",
"key": False,
"operation": "EQUAL",
"status": "PASSED",
"type": "field",
},
},
"key": False,
"operation": "EQUAL",
"type": "collection",
},
"trailer": {
"actual": "1",
"expected": "null",
"fields": {
"CheckSum": {
"actual": "056",
"expected": "null",
"key": False,
"operation": "EQUAL",
"status": "NA",
"type": "field",
}
},
"key": False,
"operation": "EQUAL",
"status": "NA",
"type": "collection",
},
},
"type": "verification",
}
]
return data
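# A minimal helper sketch (not used by any fixture in this file) showing how a
# "verification" body like the one above can be flattened into field statuses.
# It assumes the structure shown: containers carry "type": "collection" with
# nested "fields", leaves carry "type": "field" plus a "status".
def _collect_field_statuses(node: dict, prefix: str = "") -> dict:
    statuses = {}
    for name, sub in node.get("fields", {}).items():
        path = f"{prefix}{name}"
        if sub.get("type") == "field":
            statuses[path] = sub.get("status")
        else:
            # Nested collection: recurse, building a dotted path.
            statuses.update(_collect_field_statuses(sub, prefix=f"{path}."))
    return statuses
# With the body above, _collect_field_statuses(data[0])["header.MsgType"]
# evaluates to "PASSED" and ["trailer.CheckSum"] to "NA".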
@pytest.fixture
def messages_before_pipeline_adapter():
messages = [
{
"attachedEventIds": ["09960e51-1c6b-11ec-9d85-cd5454918fce"],
"body": {
"fields": {
"PHCount": {"simpleValue": "0"},
"PHSequence": {"simpleValue": "15499"},
"PHSession": {"simpleValue": "M127205328"},
},
"metadata": {
"id": {
"connectionId": {"sessionAlias": "test-42"},
"sequence": "1632216515838617064",
"subsequence": [1],
},
"messageType": "PacketHeader",
"protocol": "SOUP",
"timestamp": "2021-09-23T12:37:37.928Z",
},
},
"bodyBase64": "TTEyNzIwNTMyOAAAAAAAADyLAAA=",
"direction": "IN",
"messageId": "test-42:first:1632216515838617064",
"messageType": "PacketHeader",
"sessionId": "test-42",
"type": "message",
},
{
"attachedEventIds": [
"09960e51-1c6b-11ec-9d85-cd5454918fce",
"09963563-1c6b-11ec-9d85-cd5454918fce",
],
"body": {
"fields": {
"TestMessageHeader": {"messageValue": {"fields": {"Length": {"simpleValue": "4"}}}},
"PacketHeader": {
"messageValue": {
"fields": {
"PHCount": {"simpleValue": "3"},
"PHSequence": {"simpleValue": "15487"},
"PHSession": {"simpleValue": "M127204538"},
}
}
},
"SecondsMessage": {
"messageValue": {
"fields": {
"MessageSequenceNumber": {"simpleValue": "15487"},
"MessageType": {"simpleValue": "T"},
"PHCount": {"simpleValue": "3"},
"PHSequence": {"simpleValue": "15487"},
"PHSession": {"simpleValue": "M127204538"},
"Second": {"simpleValue": "1632375458"},
}
}
},
},
"metadata": {
"id": {
"connectionId": {"sessionAlias": "test-42"},
"sequence": "1632216519834417326",
"subsequence": [1, 2, 3],
},
"messageType": "PacketHeader/TestMessageHeader/SecondsMessage",
"protocol": "SOUP",
"timestamp": "2021-09-23T12:37:38.004Z",
},
},
"direction": "IN",
"messageId": "test-42:first:1632216519834417326",
"messageType": "PacketHeader/TestMessageHeader/SecondsMessage/TestMessageHeader/AddOrder",
"sessionId": "test-42",
"timestamp": {"epochSecond": 1632400658, "nano": 4000000},
"type": "message",
},
{
"attachedEventIds": [
"09960e51-1c6b-11ec-9d85-cd5454918fce",
"09963563-1c6b-11ec-9d85-cd5454918fce",
],
"body": {
"fields": {
"AddOrder-5": {
"messageValue": {
"fields": {
"ExchangeOrderType": {"simpleValue": "0"},
"LotType": {"simpleValue": "2"},
"MessageSequenceNumber": {"simpleValue": "15500"},
"MessageType": {"simpleValue": "A"},
"OrderBookID": {"simpleValue": "119549"},
"OrderBookPosition": {"simpleValue": "1"},
"OrderID": {"simpleValue": "7478143635544868134"},
"PHCount": {"simpleValue": "2"},
"PHSequence": {"simpleValue": "15499"},
"PHSession": {"simpleValue": "M127205328"},
"Price": {"simpleValue": "1000"},
"Quantity": {"simpleValue": "2000"},
"Side": {"simpleValue": "B"},
"TimestampNanoseconds": {"simpleValue": "2724576"},
}
}
},
"TestMessageHeader-2": {"messageValue": {"fields": {"Length": {"simpleValue": "5"}}}},
"TestMessageHeader-4": {"messageValue": {"fields": {"Length": {"simpleValue": "37"}}}},
"PacketHeader-1": {
"messageValue": {
"fields": {
"PHCount": {"simpleValue": "2"},
"PHSequence": {"simpleValue": "15499"},
"PHSession": {"simpleValue": "M127205328"},
}
}
},
"SecondsMessage-3": {
"messageValue": {
"fields": {
"MessageSequenceNumber": {"simpleValue": "15499"},
"MessageType": {"simpleValue": "T"},
"PHCount": {"simpleValue": "2"},
"PHSequence": {"simpleValue": "15499"},
"PHSession": {"simpleValue": "M127205328"},
"Second": {"simpleValue": "1632375458"},
}
}
},
},
"metadata": {
"id": {
"connectionId": {"sessionAlias": "test-42"},
"sequence": "1632216515838617066",
"subsequence": [1, 2, 3, 4, 5],
},
"messageType": "PacketHeader/TestMessageHeader/SecondsMessage/TestMessageHeader/AddOrder",
"protocol": "SOUP",
"timestamp": "2021-09-23T12:37:38.004Z",
},
},
"direction": "IN",
"messageId": "test-42:first:1632216515838617066",
"messageType": "PacketHeader/TestMessageHeader/SecondsMessage/TestMessageHeader/AddOrder",
"sessionId": "test-42",
"timestamp": {"epochSecond": 1632400658, "nano": 4000000},
"type": "message",
},
{
"attachedEventIds": ["09960e51-1c6b-11ec-9d85-cd5454918fce"],
"body": {
"fields": {
"MessageSequenceNumber": {"simpleValue": "15239"},
"MessageType": {"simpleValue": "T"},
"PHCount": {"simpleValue": "2"},
"PHSequence": {"simpleValue": "154319"},
"PHSession": {"simpleValue": "M1212305328"},
"Second": {"simpleValue": "163231325458"},
},
"metadata": {
"id": {
"connectionId": {"sessionAlias": "test-42"},
"sequence": "1632216515838617064",
"subsequence": [1],
},
"messageType": "SecondsMessage",
"protocol": "SOUP",
"timestamp": "2021-09-23T12:37:37.928Z",
},
},
"bodyBase64": "TTEyNLOeedaNTMyOAPPPPPFyLuAA=",
"direction": "IN",
"messageId": "test-42:first:1632216515838617064",
"messageType": "SecondsMessage",
"sessionId": "test-42",
"type": "message",
},
{
"attachedEventIds": ["09960e51-1c6b-11ec-9d85-cd5454918fce"],
"body": {
"fields": {
"ExchangeOrderType": {"simpleValue": "0"},
"LotType": {"simpleValue": "2"},
"MessageSequenceNumber": {"simpleValue": "15330"},
"MessageType": {"simpleValue": "A"},
"OrderBookID": {"simpleValue": "133549"},
"OrderBookPosition": {"simpleValue": "1"},
"OrderID": {"simpleValue": "7478143635544868134"},
"PHCount": {"simpleValue": "2"},
"PHSequence": {"simpleValue": "13399"},
"PHSession": {"simpleValue": "M127205328"},
"Price": {"simpleValue": "1330"},
"Quantity": {"simpleValue": "2200"},
"Side": {"simpleValue": "B"},
"TimestampNanoseconds": {"simpleValue": "2724576"},
},
"metadata": {
"id": {
"connectionId": {"sessionAlias": "test-42"},
"sequence": "1632216515838617064",
"subsequence": [1],
},
"messageType": "AddOrder",
"protocol": "SOUP",
"timestamp": "2021-09-23T12:37:37.928Z",
},
},
"bodyBase64": "ppEDEyNzIwPPPEDAOAAAAAAAADyLAAA=",
"direction": "IN",
"messageId": "test-42:first:1632216515838617064",
"messageType": "AddOrder",
"sessionId": "test-42",
"type": "message",
},
]
return messages
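# The fixture above holds raw provider output in which several sub-messages
# are bundled into a single record (composite messageType values like
# "PacketHeader/TestMessageHeader/..." and multi-element "subsequence" lists).
# The fixture below is the expected result after the pipeline adapter splits
# such bundles: one message per subsequence element, with the parent messageId
# suffixed by ".<subsequence>", while already-flat messages pass through
# unchanged.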
@pytest.fixture
def messages_after_pipeline_adapter():
messages = [
{
"attachedEventIds": ["09960e51-1c6b-11ec-9d85-cd5454918fce"],
"body": {
"fields": {
"PHCount": {"simpleValue": "0"},
"PHSequence": {"simpleValue": "15499"},
"PHSession": {"simpleValue": "M127205328"},
},
"metadata": {
"id": {
"connectionId": {"sessionAlias": "test-42"},
"sequence": "1632216515838617064",
"subsequence": [1],
},
"messageType": "PacketHeader",
"protocol": "SOUP",
"timestamp": "2021-09-23T12:37:37.928Z",
},
},
"bodyBase64": "TTEyNzIwNTMyOAAAAAAAADyLAAA=",
"direction": "IN",
"messageId": "test-42:first:1632216515838617064",
"messageType": "PacketHeader",
"sessionId": "test-42",
"type": "message",
},
{
"attachedEventIds": [
"09960e51-1c6b-11ec-9d85-cd5454918fce",
"09963563-1c6b-11ec-9d85-cd5454918fce",
],
"body": {
"fields": {"Length": {"simpleValue": "4"}},
"metadata": {
"id": {
"connectionId": {"sessionAlias": "test-42"},
"sequence": "1632216519834417326",
"subsequence": [2],
},
"messageType": "TestMessageHeader",
"protocol": "SOUP",
"timestamp": "2021-09-23T12:37:38.004Z",
},
},
"direction": "IN",
"messageId": "test-42:first:1632216519834417326.2",
"messageType": "TestMessageHeader",
"sessionId": "test-42",
"timestamp": {"epochSecond": 1632400658, "nano": 4000000},
"type": "message",
},
{
"attachedEventIds": [
"09960e51-1c6b-11ec-9d85-cd5454918fce",
"09963563-1c6b-11ec-9d85-cd5454918fce",
],
"body": {
"fields": {
"PHCount": {"simpleValue": "3"},
"PHSequence": {"simpleValue": "15487"},
"PHSession": {"simpleValue": "M127204538"},
},
"metadata": {
"id": {
"connectionId": {"sessionAlias": "test-42"},
"sequence": "1632216519834417326",
"subsequence": [1],
},
"messageType": "PacketHeader",
"protocol": "SOUP",
"timestamp": "2021-09-23T12:37:38.004Z",
},
},
"direction": "IN",
"messageId": "test-42:first:1632216519834417326.1",
"messageType": "PacketHeader",
"sessionId": "test-42",
"timestamp": {"epochSecond": 1632400658, "nano": 4000000},
"type": "message",
},
{
"attachedEventIds": [
"09960e51-1c6b-11ec-9d85-cd5454918fce",
"09963563-1c6b-11ec-9d85-cd5454918fce",
],
"body": {
"fields": {
"MessageSequenceNumber": {"simpleValue": "15487"},
"MessageType": {"simpleValue": "T"},
"PHCount": {"simpleValue": "3"},
"PHSequence": {"simpleValue": "15487"},
"PHSession": {"simpleValue": "M127204538"},
"Second": {"simpleValue": "1632375458"},
},
"metadata": {
"id": {
"connectionId": {"sessionAlias": "test-42"},
"sequence": "1632216519834417326",
"subsequence": [3],
},
"messageType": "SecondsMessage",
"protocol": "SOUP",
"timestamp": "2021-09-23T12:37:38.004Z",
},
},
"direction": "IN",
"messageId": "test-42:first:1632216519834417326.3",
"messageType": "SecondsMessage",
"sessionId": "test-42",
"timestamp": {"epochSecond": 1632400658, "nano": 4000000},
"type": "message",
},
{
"attachedEventIds": [
"09960e51-1c6b-11ec-9d85-cd5454918fce",
"09963563-1c6b-11ec-9d85-cd5454918fce",
],
"body": {
"fields": {
"ExchangeOrderType": {"simpleValue": "0"},
"LotType": {"simpleValue": "2"},
"MessageSequenceNumber": {"simpleValue": "15500"},
"MessageType": {"simpleValue": "A"},
"OrderBookID": {"simpleValue": "119549"},
"OrderBookPosition": {"simpleValue": "1"},
"OrderID": {"simpleValue": "7478143635544868134"},
"PHCount": {"simpleValue": "2"},
"PHSequence": {"simpleValue": "15499"},
"PHSession": {"simpleValue": "M127205328"},
"Price": {"simpleValue": "1000"},
"Quantity": {"simpleValue": "2000"},
"Side": {"simpleValue": "B"},
"TimestampNanoseconds": {"simpleValue": "2724576"},
},
"metadata": {
"id": {
"connectionId": {"sessionAlias": "test-42"},
"sequence": "1632216515838617066",
"subsequence": [5],
},
"messageType": "AddOrder",
"protocol": "SOUP",
"timestamp": "2021-09-23T12:37:38.004Z",
},
},
"direction": "IN",
"messageId": "test-42:first:1632216515838617066.5",
"messageType": "AddOrder",
"sessionId": "test-42",
"timestamp": {"epochSecond": 1632400658, "nano": 4000000},
"type": "message",
},
{
"attachedEventIds": [
"09960e51-1c6b-11ec-9d85-cd5454918fce",
"09963563-1c6b-11ec-9d85-cd5454918fce",
],
"body": {
"fields": {"Length": {"simpleValue": "5"}},
"metadata": {
"id": {
"connectionId": {"sessionAlias": "test-42"},
"sequence": "1632216515838617066",
"subsequence": [2],
},
"messageType": "TestMessageHeader",
"protocol": "SOUP",
"timestamp": "2021-09-23T12:37:38.004Z",
},
},
"direction": "IN",
"messageId": "test-42:first:1632216515838617066.2",
"messageType": "TestMessageHeader",
"sessionId": "test-42",
"timestamp": {"epochSecond": 1632400658, "nano": 4000000},
"type": "message",
},
{
"attachedEventIds": [
"09960e51-1c6b-11ec-9d85-cd5454918fce",
"09963563-1c6b-11ec-9d85-cd5454918fce",
],
"body": {
"fields": {"Length": {"simpleValue": "37"}},
"metadata": {
"id": {
"connectionId": {"sessionAlias": "test-42"},
"sequence": "1632216515838617066",
"subsequence": [4],
},
"messageType": "TestMessageHeader",
"protocol": "SOUP",
"timestamp": "2021-09-23T12:37:38.004Z",
},
},
"direction": "IN",
"messageId": "test-42:first:1632216515838617066.4",
"messageType": "TestMessageHeader",
"sessionId": "test-42",
"timestamp": {"epochSecond": 1632400658, "nano": 4000000},
"type": "message",
},
{
"attachedEventIds": [
"09960e51-1c6b-11ec-9d85-cd5454918fce",
"09963563-1c6b-11ec-9d85-cd5454918fce",
],
"body": {
"fields": {
"PHCount": {"simpleValue": "2"},
"PHSequence": {"simpleValue": "15499"},
"PHSession": {"simpleValue": "M127205328"},
},
"metadata": {
"id": {
"connectionId": {"sessionAlias": "test-42"},
"sequence": "1632216515838617066",
"subsequence": [1],
},
"messageType": "PacketHeader",
"protocol": "SOUP",
"timestamp": "2021-09-23T12:37:38.004Z",
},
},
"direction": "IN",
"messageId": "test-42:first:1632216515838617066.1",
"messageType": "PacketHeader",
"sessionId": "test-42",
"timestamp": {"epochSecond": 1632400658, "nano": 4000000},
"type": "message",
},
{
"attachedEventIds": [
"09960e51-1c6b-11ec-9d85-cd5454918fce",
"09963563-1c6b-11ec-9d85-cd5454918fce",
],
"body": {
"fields": {
"MessageSequenceNumber": {"simpleValue": "15499"},
"MessageType": {"simpleValue": "T"},
"PHCount": {"simpleValue": "2"},
"PHSequence": {"simpleValue": "15499"},
"PHSession": {"simpleValue": "M127205328"},
"Second": {"simpleValue": "1632375458"},
},
"metadata": {
"id": {
"connectionId": {"sessionAlias": "test-42"},
"sequence": "1632216515838617066",
"subsequence": [3],
},
"messageType": "SecondsMessage",
"protocol": "SOUP",
"timestamp": "2021-09-23T12:37:38.004Z",
},
},
"direction": "IN",
"messageId": "test-42:first:1632216515838617066.3",
"messageType": "SecondsMessage",
"sessionId": "test-42",
"timestamp": {"epochSecond": 1632400658, "nano": 4000000},
"type": "message",
},
{
"attachedEventIds": ["09960e51-1c6b-11ec-9d85-cd5454918fce"],
"body": {
"fields": {
"MessageSequenceNumber": {"simpleValue": "15239"},
"MessageType": {"simpleValue": "T"},
"PHCount": {"simpleValue": "2"},
"PHSequence": {"simpleValue": "154319"},
"PHSession": {"simpleValue": "M1212305328"},
"Second": {"simpleValue": "163231325458"},
},
"metadata": {
"id": {
"connectionId": {"sessionAlias": "test-42"},
"sequence": "1632216515838617064",
"subsequence": [1],
},
"messageType": "SecondsMessage",
"protocol": "SOUP",
"timestamp": "2021-09-23T12:37:37.928Z",
},
},
"bodyBase64": "TTEyNLOeedaNTMyOAPPPPPFyLuAA=",
"direction": "IN",
"messageId": "test-42:first:1632216515838617064",
"messageType": "SecondsMessage",
"sessionId": "test-42",
"type": "message",
},
{
"attachedEventIds": ["09960e51-1c6b-11ec-9d85-cd5454918fce"],
"body": {
"fields": {
"ExchangeOrderType": {"simpleValue": "0"},
"LotType": {"simpleValue": "2"},
"MessageSequenceNumber": {"simpleValue": "15330"},
"MessageType": {"simpleValue": "A"},
"OrderBookID": {"simpleValue": "133549"},
"OrderBookPosition": {"simpleValue": "1"},
"OrderID": {"simpleValue": "7478143635544868134"},
"PHCount": {"simpleValue": "2"},
"PHSequence": {"simpleValue": "13399"},
"PHSession": {"simpleValue": "M127205328"},
"Price": {"simpleValue": "1330"},
"Quantity": {"simpleValue": "2200"},
"Side": {"simpleValue": "B"},
"TimestampNanoseconds": {"simpleValue": "2724576"},
},
"metadata": {
"id": {
"connectionId": {"sessionAlias": "test-42"},
"sequence": "1632216515838617064",
"subsequence": [1],
},
"messageType": "AddOrder",
"protocol": "SOUP",
"timestamp": "2021-09-23T12:37:37.928Z",
},
},
"bodyBase64": "ppEDEyNzIwPPPEDAOAAAAAAAADyLAAA=",
"direction": "IN",
"messageId": "test-42:first:1632216515838617064",
"messageType": "AddOrder",
"sessionId": "test-42",
"type": "message",
},
]
return messages
@pytest.fixture
def message_from_pipeline():
message = {
"attachedEventIds": [
"09960e51-1c6b-11ec-9d85-cd5454918fce",
"09963563-1c6b-11ec-9d85-cd5454918fce",
],
"body": {
"fields": {
"AddOrder-5": {
"messageValue": {
"fields": {
"ExchangeOrderType": {"simpleValue": "0"},
"LotType": {"simpleValue": "2"},
"MessageSequenceNumber": {"simpleValue": "15500"},
"MessageType": {"simpleValue": "A"},
"OrderBookID": {"simpleValue": "119549"},
"OrderBookPosition": {"simpleValue": "1"},
"OrderID": {"simpleValue": "7478143635544868134"},
"PHCount": {"simpleValue": "2"},
"PHSequence": {"simpleValue": "15499"},
"PHSession": {"simpleValue": "M127205328"},
"Price": {"simpleValue": "1000"},
"Quantity": {"simpleValue": "2000"},
"Side": {"simpleValue": "B"},
"TimestampNanoseconds": {"simpleValue": "2724576"},
}
}
},
"TestMessageHeader-2": {"messageValue": {"fields": {"Length": {"simpleValue": "5"}}}},
"TestMessageHeader-4": {"messageValue": {"fields": {"Length": {"simpleValue": "37"}}}},
"PacketHeader-1": {
"messageValue": {
"fields": {
"PHCount": {"simpleValue": "2"},
"PHSequence": {"simpleValue": "15499"},
"PHSession": {"simpleValue": "M127205328"},
}
}
},
"SecondsMessage-3": {
"messageValue": {
"fields": {
"MessageSequenceNumber": {"simpleValue": "15499"},
"MessageType": {"simpleValue": "T"},
"PHCount": {"simpleValue": "2"},
"PHSequence": {"simpleValue": "15499"},
"PHSession": {"simpleValue": "M127205328"},
"Second": {"simpleValue": "1632375458"},
}
}
},
},
"metadata": {
"id": {
"connectionId": {"sessionAlias": "test-42"},
"sequence": "1632216515838617066",
"subsequence": [1, 2, 3, 4, 5],
},
"messageType": "PacketHeader/TestMessageHeader/SecondsMessage/TestMessageHeader/AddOrder",
"protocol": "SOUP",
"timestamp": "2021-09-23T12:37:38.004Z",
},
},
"direction": "IN",
"messageId": "test-42:first:1632216515838617066",
"messageType": "PacketHeader/TestMessageHeader/SecondsMessage/TestMessageHeader/AddOrder",
"sessionId": "test-42",
"timestamp": {"epochSecond": 1632400658, "nano": 4000000},
"type": "message",
}
return message
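# The next two fixtures cover the empty-body case: first the bundled message
# as it leaves the pipeline, then the two messages it is expected to split into.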
@pytest.fixture
def message_from_pipeline_empty_body():
    message = {
"attachedEventIds": [],
"body": {
"fields": {
"Csv_Header": {"fields": {}, "metadata": {}},
"Csv_Message": {
"fields": {},
"metadata": {
"TestField": "test",
"timestamp": "2021-10-12T12:13:59.766600545Z",
},
},
},
"metadata": {
"id": {
"connectionId": {"sessionAlias": "satscomments2"},
"sequence": "1634314921633704398",
"subsequence": [1],
},
"messageType": "Csv_Header",
"properties": {"logTimestamp": "2021-10-12 " "12:13:59.733300545"},
"timestamp": "2021-10-12T12:13:59.733300545Z",
},
},
"bodyBase64": "Ik1lc3NhZ2UiLCJNc2dUeXBlIgoiQU8xMjExMDEyMTIwOTA3MTE0MDAxIC0gZGVwdGggeWllbGRzIG5vIGJhdGNoIiwiU0FUU0NvbW1lbnRzIg==",
"direction": "IN",
"messageId": "satscomments2:first:1634314921633704398",
"messageType": "Csv_Header/Csv_Message",
"sessionId": "satscomments2",
"timestamp": {"epochSecond": 1634040839, "nano": 733300545},
"type": "message",
}
    return message
@pytest.fixture
def messages_from_after_pipeline_empty_body():
messages = [
{
"attachedEventIds": [],
"body": {
"fields": {},
"metadata": {
"id": {
"connectionId": {"sessionAlias": "satscomments2"},
"sequence": "1634314921633704398",
"subsequence": [1],
},
"messageType": "Csv_Header",
"properties": {"logTimestamp": "2021-10-12 " "12:13:59.733300545"},
"timestamp": "2021-10-12T12:13:59.733300545Z",
},
},
"bodyBase64": "Ik1lc3NhZ2UiLCJNc2dUeXBlIgoiQU8xMjExMDEyMTIwOTA3MTE0MDAxIC0gZGVwdGggeWllbGRzIG5vIGJhdGNoIiwiU0FUU0NvbW1lbnRzIg==",
"direction": "IN",
"messageId": "satscomments2:first:1634314921633704398.1",
"messageType": "Csv_Header",
"sessionId": "satscomments2",
"timestamp": {"epochSecond": 1634040839, "nano": 733300545},
"type": "message",
},
{
"attachedEventIds": [],
"body": {
"fields": {},
"metadata": {
"id": {
"connectionId": {"sessionAlias": "satscomments2"},
"sequence": "1634314921633704398",
"subsequence": [2],
},
"messageType": "Csv_Message",
"properties": {"logTimestamp": "2021-10-12 " "12:13:59.733300545"},
"timestamp": "2021-10-12T12:13:59.766600545Z",
"TestField": "test",
},
},
"bodyBase64": "Ik1lc3NhZ2UiLCJNc2dUeXBlIgoiQU8xMjExMDEyMTIwOTA3MTE0MDAxIC0gZGVwdGggeWllbGRzIG5vIGJhdGNoIiwiU0FUU0NvbW1lbnRzIg==",
"direction": "IN",
"messageId": "satscomments2:first:1634314921633704398.2",
"messageType": "Csv_Message",
"sessionId": "satscomments2",
"timestamp": {"epochSecond": 1634040839, "nano": 733300545},
"type": "message",
},
]
return messages
| 42.19673 | 141 | 0.418657 | 4,687 | 77,431 | 6.861319 | 0.096864 | 0.015672 | 0.021891 | 0.027364 | 0.815262 | 0.778258 | 0.76355 | 0.748469 | 0.729407 | 0.705806 | 0 | 0.144229 | 0.448412 | 77,431 | 1,834 | 142 | 42.219738 | 0.608733 | 0.001485 | 0 | 0.638214 | 0 | 0 | 0.356315 | 0.128538 | 0 | 0 | 0 | 0 | 0 | 1 | 0.014132 | false | 0.02035 | 0.003957 | 0 | 0.032222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5d9c36423dd807f5a5bb97a59287967a264bbb57 | 110 | py | Python | python-for-beginners/35 - Demo Virtual environments pakages/demos.py | poligonosapp/c9-python-getting-started | d4cc4cf4979abe203e5f4b022fcaa7cce80afa7b | [
"MIT"
] | 4 | 2021-08-03T14:25:31.000Z | 2021-08-18T13:21:23.000Z | python-for-beginners/35 - Demo Virtual environments pakages/demos.py | poligonosapp/c9-python-getting-started | d4cc4cf4979abe203e5f4b022fcaa7cce80afa7b | [
"MIT"
] | null | null | null | python-for-beginners/35 - Demo Virtual environments pakages/demos.py | poligonosapp/c9-python-getting-started | d4cc4cf4979abe203e5f4b022fcaa7cce80afa7b | [
"MIT"
] | 3 | 2021-08-15T00:09:13.000Z | 2021-08-18T13:22:45.000Z | import helpers
helpers.display('Sample message', True)
from helpers import display
display('Sample message')
| 18.333333 | 39 | 0.8 | 14 | 110 | 6.285714 | 0.5 | 0.295455 | 0.454545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.109091 | 110 | 5 | 40 | 22 | 0.897959 | 0 | 0 | 0 | 0 | 0 | 0.254545 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
5da1423f219e84edaccef8e9bb982fc84b2851b9 | 99 | py | Python | generate/julia/__init__.py | Luthaf/Chemharp-bindgen | 7d25556773fb5fe22dd1dbb0bd0d34fb2e6dccb8 | [
"MIT"
] | null | null | null | generate/julia/__init__.py | Luthaf/Chemharp-bindgen | 7d25556773fb5fe22dd1dbb0bd0d34fb2e6dccb8 | [
"MIT"
] | 2 | 2018-02-25T21:46:45.000Z | 2018-11-19T22:39:54.000Z | generate/julia/__init__.py | chemfiles/bindgen | 7d25556773fb5fe22dd1dbb0bd0d34fb2e6dccb8 | [
"MIT"
] | null | null | null | # -* coding: utf-8 -*
"""Generate FFI for Julia"""
from .ffi import write_types, write_functions
| 16.5 | 45 | 0.686869 | 14 | 99 | 4.714286 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012048 | 0.161616 | 99 | 5 | 46 | 19.8 | 0.783133 | 0.434343 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f8d5b8fd782e7a7ad4443e35b9d0f01c575e3700 | 163 | py | Python | plantuml2freemind/parsers/yaml.py | ave2me/plantuml2freemind | 021c4d514213d702d504945068db83d46acbed10 | [
"MIT"
] | null | null | null | plantuml2freemind/parsers/yaml.py | ave2me/plantuml2freemind | 021c4d514213d702d504945068db83d46acbed10 | [
"MIT"
] | null | null | null | plantuml2freemind/parsers/yaml.py | ave2me/plantuml2freemind | 021c4d514213d702d504945068db83d46acbed10 | [
"MIT"
] | null | null | null | import yaml
from plantuml2freemind.custom_types import MindmapTreeType
def entry(file_content: str) -> MindmapTreeType:
return yaml.safe_load(file_content)
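# Illustrative call with a hypothetical document (not from this repo):
# entry("root:\n  child: {}") returns {"root": {"child": {}}}.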
| 20.375 | 58 | 0.815951 | 20 | 163 | 6.45 | 0.75 | 0.170543 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006993 | 0.122699 | 163 | 7 | 59 | 23.285714 | 0.895105 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.5 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
f8ee669ab1e4a211885230d6347840f320b5a7dd | 34,956 | py | Python | tests/api/v1/endpoints/test_privacy_request_endpoints.py | eastandwestwind/fidesops | 93e2881c0fdc30075b7cc22024965d18cec0bdea | [
"Apache-2.0"
] | null | null | null | tests/api/v1/endpoints/test_privacy_request_endpoints.py | eastandwestwind/fidesops | 93e2881c0fdc30075b7cc22024965d18cec0bdea | [
"Apache-2.0"
] | null | null | null | tests/api/v1/endpoints/test_privacy_request_endpoints.py | eastandwestwind/fidesops | 93e2881c0fdc30075b7cc22024965d18cec0bdea | [
"Apache-2.0"
] | null | null | null | import json
from datetime import datetime
from typing import List, Dict
from unittest import mock
from fastapi_pagination import Params
import pytest
from starlette.testclient import TestClient
from fidesops.api.v1.endpoints.privacy_request_endpoints import (
EMBEDDED_EXECUTION_LOG_LIMIT,
)
from fidesops.api.v1.urn_registry import (
PRIVACY_REQUESTS,
V1_URL_PREFIX,
REQUEST_PREVIEW,
PRIVACY_REQUEST_RESUME,
)
from fidesops.api.v1.scope_registry import (
PRIVACY_REQUEST_CREATE,
STORAGE_CREATE_OR_UPDATE,
PRIVACY_REQUEST_READ,
PRIVACY_REQUEST_CALLBACK_RESUME,
)
from fidesops.models.client import ClientDetail
from fidesops.models.privacy_request import (
PrivacyRequest,
ExecutionLog,
ExecutionLogStatus,
PrivacyRequestStatus,
)
from fidesops.models.policy import ActionType
from fidesops.schemas.dataset import DryRunDatasetResponse
from fidesops.schemas.masking.masking_secrets import SecretType
from fidesops.util.cache import (
get_identity_cache_key,
get_encryption_cache_key,
get_masking_secret_cache_key,
)
from fidesops.util.oauth_util import generate_jwe
page_size = Params().size
def stringify_date(log_date: datetime) -> str:
return log_date.strftime("%Y-%m-%dT%H:%M:%S.%f+00:00")
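# For example (illustrative value, not taken from the fixtures below):
# stringify_date(datetime(2021, 8, 30, 16, 9, 37, 359000)) returns
# "2021-08-30T16:09:37.359000+00:00"; the "+00:00" suffix is hardcoded,
# matching the UTC timestamps these tests compare against.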
class TestCreatePrivacyRequest:
@pytest.fixture(scope="function")
def url(self, oauth_client: ClientDetail, policy) -> str:
return V1_URL_PREFIX + PRIVACY_REQUESTS
def test_privacy_request_unauthenticated(self, api_client: TestClient, url):
resp = api_client.post(url)
assert resp.status_code == 401
def test_privacy_request_wrong_scopes(
self, api_client: TestClient, url, generate_auth_header
):
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_READ])
resp = api_client.post(url, json={}, headers=auth_header)
assert resp.status_code == 403
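    # In the tests below the runner's submit() (or run_access_request) is
    # patched, so a request is created and its identity/encryption data is
    # cached without the privacy request actually executing.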
@mock.patch(
"fidesops.service.privacy_request.request_runner_service.PrivacyRequestRunner.submit"
)
def test_create_privacy_request(
self,
run_access_request_mock,
url,
db,
api_client: TestClient,
generate_auth_header,
policy,
):
data = [
{
"requested_at": "2021-08-30T16:09:37.359Z",
"policy_key": policy.key,
"identity": {"email": "test@example.com"},
}
]
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_CREATE])
resp = api_client.post(url, json=data, headers=auth_header)
assert resp.status_code == 200
response_data = resp.json()["succeeded"]
assert len(response_data) == 1
pr = PrivacyRequest.get(db=db, id=response_data[0]["id"])
pr.delete(db=db)
assert run_access_request_mock.called
@mock.patch(
"fidesops.service.privacy_request.request_runner_service.run_access_request"
)
def test_create_privacy_request_limit_exceeded(
self,
_,
url,
db,
api_client: TestClient,
generate_auth_header,
policy,
):
payload = []
for i in range(0, 51):
payload.append(
{
"requested_at": "2021-08-30T16:09:37.359Z",
"policy_key": policy.key,
"identity": {"email": "ftest{i}@example.com"},
},
)
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_CREATE])
response = api_client.post(url, headers=auth_header, json=payload)
assert 422 == response.status_code
assert (
json.loads(response.text)["detail"][0]["msg"]
== "ensure this value has at most 50 items"
)
@mock.patch(
"fidesops.service.privacy_request.request_runner_service.PrivacyRequestRunner.submit"
)
def test_create_privacy_request_starts_processing(
self,
start_processing_mock,
url,
api_client: TestClient,
db,
generate_auth_header,
policy,
):
data = [
{
"requested_at": "2021-08-30T16:09:37.359Z",
"policy_key": policy.key,
"identity": {"email": "test@example.com"},
}
]
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_CREATE])
resp = api_client.post(url, json=data, headers=auth_header)
assert resp.status_code == 200
assert start_processing_mock.called
response_data = resp.json()["succeeded"]
pr = PrivacyRequest.get(db=db, id=response_data[0]["id"])
pr.delete(db=db)
@mock.patch(
"fidesops.service.privacy_request.request_runner_service.PrivacyRequestRunner.submit"
)
def test_create_privacy_request_with_external_id(
self,
run_access_request_mock,
url,
db,
api_client: TestClient,
generate_auth_header,
policy,
):
external_id = "ext_some-uuid-here-1234"
data = [
{
"external_id": external_id,
"requested_at": "2021-08-30T16:09:37.359Z",
"policy_key": policy.key,
"identity": {"email": "test@example.com"},
}
]
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_CREATE])
resp = api_client.post(
V1_URL_PREFIX + PRIVACY_REQUESTS, json=data, headers=auth_header
)
assert resp.status_code == 200
response_data = resp.json()["succeeded"]
assert len(response_data) == 1
assert response_data[0]["external_id"] == external_id
pr = PrivacyRequest.get(db=db, id=response_data[0]["id"])
assert pr.external_id == external_id
pr.delete(db=db)
assert run_access_request_mock.called
@mock.patch(
"fidesops.service.privacy_request.request_runner_service.PrivacyRequestRunner.submit"
)
def test_create_privacy_request_caches_identity(
self,
run_access_request_mock,
url,
db,
api_client: TestClient,
generate_auth_header,
policy,
cache,
):
identity = {"email": "test@example.com"}
data = [
{
"requested_at": "2021-08-30T16:09:37.359Z",
"policy_key": policy.key,
"identity": identity,
}
]
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_CREATE])
resp = api_client.post(url, json=data, headers=auth_header)
assert resp.status_code == 200
response_data = resp.json()["succeeded"]
assert len(response_data) == 1
pr = PrivacyRequest.get(db=db, id=response_data[0]["id"])
key = get_identity_cache_key(
privacy_request_id=pr.id,
identity_attribute=list(identity.keys())[0],
)
assert cache.get(key) == list(identity.values())[0]
pr.delete(db=db)
assert run_access_request_mock.called
@mock.patch(
"fidesops.service.privacy_request.request_runner_service.PrivacyRequestRunner.submit"
)
def test_create_privacy_request_caches_masking_secrets(
self,
run_erasure_request_mock,
url,
db,
api_client: TestClient,
generate_auth_header,
erasure_policy_aes,
cache,
):
identity = {"email": "test@example.com"}
data = [
{
"requested_at": "2021-08-30T16:09:37.359Z",
"policy_key": erasure_policy_aes.key,
"identity": identity,
}
]
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_CREATE])
resp = api_client.post(url, json=data, headers=auth_header)
assert resp.status_code == 200
response_data = resp.json()["succeeded"]
assert len(response_data) == 1
pr = PrivacyRequest.get(db=db, id=response_data[0]["id"])
secret_key = get_masking_secret_cache_key(
privacy_request_id=pr.id,
masking_strategy="aes_encrypt",
secret_type=SecretType.key,
)
assert cache.get_encoded_by_key(secret_key) is not None
pr.delete(db=db)
assert run_erasure_request_mock.called
def test_create_privacy_request_invalid_encryption_values(
self, url, db, api_client: TestClient, generate_auth_header, policy, cache
):
data = [
{
"requested_at": "2021-08-30T16:09:37.359Z",
"policy_key": policy.key,
"identity": {"email": "test@example.com"},
"encryption_key": "test",
}
]
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_CREATE])
resp = api_client.post(url, json=data, headers=auth_header)
assert resp.status_code == 422
assert resp.json()["detail"][0]["msg"] == "Encryption key must be 16 bytes long"
@mock.patch(
"fidesops.service.privacy_request.request_runner_service.PrivacyRequestRunner.submit"
)
def test_create_privacy_request_caches_encryption_keys(
self,
run_access_request_mock,
url,
db,
api_client: TestClient,
generate_auth_header,
policy,
cache,
):
identity = {"email": "test@example.com"}
data = [
{
"requested_at": "2021-08-30T16:09:37.359Z",
"policy_key": policy.key,
"identity": identity,
"encryption_key": "test--encryption",
}
]
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_CREATE])
resp = api_client.post(url, json=data, headers=auth_header)
assert resp.status_code == 200
response_data = resp.json()["succeeded"]
assert len(response_data) == 1
pr = PrivacyRequest.get(db=db, id=response_data[0]["id"])
encryption_key = get_encryption_cache_key(
privacy_request_id=pr.id,
encryption_attr="key",
)
assert cache.get(encryption_key) == "test--encryption"
pr.delete(db=db)
assert run_access_request_mock.called
def test_create_privacy_request_no_identities(
self,
url,
api_client: TestClient,
generate_auth_header,
policy,
):
data = [
{
"requested_at": "2021-08-30T16:09:37.359Z",
"policy_key": policy.key,
"identity": {},
}
]
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_CREATE])
resp = api_client.post(url, json=data, headers=auth_header)
assert resp.status_code == 200
response_data = resp.json()["succeeded"]
assert len(response_data) == 0
response_data = resp.json()["failed"]
assert len(response_data) == 1
class TestGetPrivacyRequests:
@pytest.fixture(scope="function")
def url(self, oauth_client: ClientDetail) -> str:
return V1_URL_PREFIX + PRIVACY_REQUESTS
def test_get_privacy_requests_unauthenticated(self, api_client: TestClient, url):
response = api_client.get(url, headers={})
assert 401 == response.status_code
def test_get_privacy_requests_wrong_scope(
self, api_client: TestClient, generate_auth_header, url
):
auth_header = generate_auth_header(scopes=[STORAGE_CREATE_OR_UPDATE])
response = api_client.get(url, headers=auth_header)
assert 403 == response.status_code
def test_conflicting_query_params(
self, api_client: TestClient, generate_auth_header, url
):
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_READ])
response = api_client.get(
url + f"?completed_lt=2021-01-01&errored_gt=2021-01-02",
headers=auth_header,
)
assert 400 == response.status_code
def test_get_privacy_requests_by_id(
self,
api_client: TestClient,
url,
generate_auth_header,
privacy_request,
postgres_execution_log,
mongo_execution_log,
):
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_READ])
response = api_client.get(
url + f"?id={privacy_request.id}", headers=auth_header
)
assert 200 == response.status_code
expected_resp = {
"items": [
{
"id": privacy_request.id,
"created_at": stringify_date(privacy_request.created_at),
"started_processing_at": stringify_date(
privacy_request.started_processing_at
),
"finished_processing_at": None,
"status": privacy_request.status.value,
"external_id": privacy_request.external_id,
}
],
"total": 1,
"page": 1,
"size": page_size,
}
resp = response.json()
assert resp == expected_resp
def test_filter_privacy_requests_by_status(
self,
api_client: TestClient,
url,
generate_auth_header,
privacy_request,
succeeded_privacy_request,
failed_privacy_request,
):
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_READ])
response = api_client.get(url + f"?status=complete", headers=auth_header)
assert 200 == response.status_code
resp = response.json()
assert len(resp["items"]) == 1
assert resp["items"][0]["id"] == succeeded_privacy_request.id
response = api_client.get(url + f"?status=error", headers=auth_header)
assert 200 == response.status_code
resp = response.json()
assert len(resp["items"]) == 1
assert resp["items"][0]["id"] == failed_privacy_request.id
def test_filter_privacy_requests_by_external_id(
self,
db,
api_client,
url,
generate_auth_header,
privacy_request,
succeeded_privacy_request,
failed_privacy_request,
):
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_READ])
response = api_client.get(
url + f"?external_id={succeeded_privacy_request.id}", headers=auth_header
)
assert 200 == response.status_code
resp = response.json()
assert len(resp["items"]) == 0
privacy_request.external_id = "test_external_id_1"
privacy_request.save(db)
response = api_client.get(
url + f"?external_id=test_external_id_1", headers=auth_header
)
assert 200 == response.status_code
resp = response.json()
assert len(resp["items"]) == 1
assert resp["items"][0]["id"] == privacy_request.id
def test_filter_privacy_requests_by_created(
self,
api_client: TestClient,
generate_auth_header,
privacy_request,
succeeded_privacy_request,
failed_privacy_request,
url,
):
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_READ])
response = api_client.get(url + f"?created_lt=2019-01-01", headers=auth_header)
assert 200 == response.status_code
resp = response.json()
assert len(resp["items"]) == 0
response = api_client.get(url + f"?created_gt=2019-01-01", headers=auth_header)
assert 200 == response.status_code
resp = response.json()
assert len(resp["items"]) == 3
assert resp["items"][0]["id"] == privacy_request.id
assert resp["items"][1]["id"] == succeeded_privacy_request.id
assert resp["items"][2]["id"] == failed_privacy_request.id
def test_filter_privacy_requests_by_started(
self,
api_client: TestClient,
generate_auth_header,
privacy_request,
succeeded_privacy_request,
failed_privacy_request,
url,
):
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_READ])
response = api_client.get(url + f"?started_lt=2021-05-01", headers=auth_header)
assert 200 == response.status_code
resp = response.json()
assert len(resp["items"]) == 2
assert resp["items"][0]["id"] == privacy_request.id
assert resp["items"][1]["id"] == failed_privacy_request.id
response = api_client.get(url + f"?started_gt=2021-05-01", headers=auth_header)
assert 200 == response.status_code
resp = response.json()
assert len(resp["items"]) == 1
assert resp["items"][0]["id"] == succeeded_privacy_request.id
def test_filter_privacy_requests_by_completed(
self,
api_client: TestClient,
generate_auth_header,
privacy_request,
succeeded_privacy_request,
failed_privacy_request,
url,
):
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_READ])
response = api_client.get(
url + f"?completed_lt=2021-10-01", headers=auth_header
)
assert 200 == response.status_code
resp = response.json()
assert len(resp["items"]) == 0
response = api_client.get(
url + f"?completed_gt=2021-10-01", headers=auth_header
)
assert 200 == response.status_code
resp = response.json()
assert len(resp["items"]) == 1
assert resp["items"][0]["id"] == succeeded_privacy_request.id
def test_filter_privacy_requests_by_errored(
self,
api_client: TestClient,
generate_auth_header,
privacy_request,
succeeded_privacy_request,
failed_privacy_request,
url,
):
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_READ])
response = api_client.get(url + f"?errored_lt=2021-01-01", headers=auth_header)
assert 200 == response.status_code
resp = response.json()
assert len(resp["items"]) == 0
response = api_client.get(url + f"?errored_gt=2021-01-01", headers=auth_header)
assert 200 == response.status_code
resp = response.json()
assert len(resp["items"]) == 1
assert resp["items"][0]["id"] == failed_privacy_request.id
def test_verbose_privacy_requests(
self,
api_client: TestClient,
generate_auth_header,
privacy_request: PrivacyRequest,
postgres_execution_log,
second_postgres_execution_log,
mongo_execution_log,
url,
db,
):
"""Test privacy requests endpoint with verbose query param to show execution logs"""
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_READ])
response = api_client.get(url + f"?verbose=True", headers=auth_header)
assert 200 == response.status_code
resp = response.json()
assert (
postgres_execution_log.updated_at < second_postgres_execution_log.updated_at
)
expected_resp = {
"items": [
{
"id": privacy_request.id,
"created_at": stringify_date(privacy_request.created_at),
"started_processing_at": stringify_date(
privacy_request.started_processing_at
),
"finished_processing_at": None,
"status": privacy_request.status.value,
"external_id": privacy_request.external_id,
"results": {
"my-mongo-db": [
{
"collection_name": "orders",
"fields_affected": [
{
"path": "my-mongo-db:orders:name",
"field_name": "name",
"data_categories": [
"user.provided.identifiable.contact.name"
],
}
],
"message": None,
"action_type": "access",
"status": "in_processing",
"updated_at": stringify_date(
mongo_execution_log.updated_at
),
}
],
"my-postgres-db": [
{
"collection_name": "user",
"fields_affected": [
{
"path": "my-postgres-db:user:email",
"field_name": "email",
"data_categories": [
"user.provided.identifiable.contact.email"
],
}
],
"message": None,
"action_type": "access",
"status": "pending",
"updated_at": stringify_date(
postgres_execution_log.updated_at
),
},
{
"collection_name": "address",
"fields_affected": [
{
"path": "my-postgres-db:address:street",
"field_name": "street",
"data_categories": [
"user.provided.identifiable.contact.street"
],
},
{
"path": "my-postgres-db:address:city",
"field_name": "city",
"data_categories": [
"user.provided.identifiable.contact.city"
],
},
],
"message": "Database timed out.",
"action_type": "access",
"status": "error",
"updated_at": stringify_date(
second_postgres_execution_log.updated_at
),
},
],
},
},
],
"total": 1,
"page": 1,
"size": page_size,
}
assert resp == expected_resp
def test_verbose_privacy_request_embed_limit(
self,
db,
api_client: TestClient,
generate_auth_header,
privacy_request: PrivacyRequest,
url,
):
for i in range(0, EMBEDDED_EXECUTION_LOG_LIMIT + 10):
ExecutionLog.create(
db=db,
data={
"dataset_name": "my-postgres-db",
"collection_name": f"test_collection_{i}",
"fields_affected": [],
"action_type": ActionType.access,
"status": ExecutionLogStatus.pending,
"privacy_request_id": privacy_request.id,
},
)
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_READ])
response = api_client.get(url + f"?verbose=True", headers=auth_header)
assert 200 == response.status_code
resp = response.json()
assert (
len(resp["items"][0]["results"]["my-postgres-db"])
== EMBEDDED_EXECUTION_LOG_LIMIT
)
db.query(ExecutionLog).filter(
ExecutionLog.privacy_request_id == privacy_request.id
).delete()
class TestGetExecutionLogs:
@pytest.fixture(scope="function")
def url(self, db, privacy_request):
return V1_URL_PREFIX + PRIVACY_REQUESTS + f"/{privacy_request.id}/log"
def test_get_execution_logs_unauthenticated(
self, api_client: TestClient, privacy_request, url
):
response = api_client.get(url + "/", headers={})
assert 401 == response.status_code
def test_get_execution_logs_wrong_scope(
self, api_client: TestClient, generate_auth_header, url
):
auth_header = generate_auth_header(scopes=[STORAGE_CREATE_OR_UPDATE])
response = api_client.get(url, headers=auth_header)
assert 403 == response.status_code
def test_get_execution_logs_invalid_privacy_request_id(
self, api_client: TestClient, generate_auth_header
):
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_READ])
response = api_client.get(
V1_URL_PREFIX + PRIVACY_REQUESTS + f"/invalid_privacy_request_id/log",
headers=auth_header,
)
assert 404 == response.status_code
def test_get_execution_logs(
self,
api_client: TestClient,
generate_auth_header,
url,
postgres_execution_log,
mongo_execution_log,
second_postgres_execution_log,
):
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_READ])
response = api_client.get(
url,
headers=auth_header,
)
assert 200 == response.status_code
resp = response.json()
expected_resp = {
"items": [
{
"collection_name": "user",
"fields_affected": [
{
"path": "my-postgres-db:user:email",
"field_name": "email",
"data_categories": [
"user.provided.identifiable.contact.email"
],
}
],
"message": None,
"action_type": "access",
"status": "pending",
"updated_at": stringify_date(postgres_execution_log.updated_at),
"dataset_name": "my-postgres-db",
},
{
"collection_name": "orders",
"fields_affected": [
{
"path": "my-mongo-db:orders:name",
"field_name": "name",
"data_categories": [
"user.provided.identifiable.contact.name"
],
}
],
"message": None,
"action_type": "access",
"status": "in_processing",
"updated_at": stringify_date(mongo_execution_log.updated_at),
"dataset_name": "my-mongo-db",
},
{
"collection_name": "address",
"fields_affected": [
{
"path": "my-postgres-db:address:street",
"field_name": "street",
"data_categories": [
"user.provided.identifiable.contact.street"
],
},
{
"path": "my-postgres-db:address:city",
"field_name": "city",
"data_categories": [
"user.provided.identifiable.contact.city"
],
},
],
"message": "Database timed out.",
"action_type": "access",
"status": "error",
"updated_at": stringify_date(
second_postgres_execution_log.updated_at
),
"dataset_name": "my-postgres-db",
},
],
"total": 3,
"page": 1,
"size": page_size,
}
assert resp == expected_resp
class TestRequestPreview:
@pytest.fixture(scope="function")
def url(self, db, privacy_request):
return V1_URL_PREFIX + REQUEST_PREVIEW
def test_request_preview(
self,
dataset_config_preview,
api_client: TestClient,
url,
generate_auth_header,
) -> None:
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_READ])
data = [dataset_config_preview.fides_key]
response = api_client.put(url, headers=auth_header, json=data)
assert response.status_code == 200
response_body: List[DryRunDatasetResponse] = json.loads(response.text)
assert (
next(
response["query"]
for response in response_body
if response["collectionAddress"]["dataset"] == "postgres"
if response["collectionAddress"]["collection"] == "subscriptions"
)
== "SELECT email,id FROM subscriptions WHERE email = ?"
)
def test_request_preview_all(
self,
dataset_config_preview,
api_client: TestClient,
url,
generate_auth_header,
) -> None:
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_READ])
response = api_client.put(url, headers=auth_header)
assert response.status_code == 200
response_body: List[DryRunDatasetResponse] = json.loads(response.text)
assert (
next(
response["query"]
for response in response_body
if response["collectionAddress"]["dataset"] == "postgres"
if response["collectionAddress"]["collection"] == "subscriptions"
)
== "SELECT email,id FROM subscriptions WHERE email = ?"
)
class TestResumePrivacyRequest:
@pytest.fixture(scope="function")
def url(self, db, privacy_request):
return V1_URL_PREFIX + PRIVACY_REQUEST_RESUME.format(
privacy_request_id=privacy_request.id
)
def test_resume_privacy_request_not_authenticated(
self,
url,
api_client,
generate_webhook_auth_header,
policy_pre_execution_webhooks,
):
response = api_client.post(url)
assert response.status_code == 401
def test_resume_privacy_request_invalid_jwe_format(
self,
url,
api_client,
generate_webhook_auth_header,
policy_pre_execution_webhooks,
):
auth_header = {
"Authorization": "Bearer "
+ generate_jwe(json.dumps({"unexpected": "format"}))
}
response = api_client.post(url, headers=auth_header, json={})
assert response.status_code == 403
def test_resume_privacy_request_invalid_scopes(
self,
url,
api_client,
generate_webhook_auth_header,
policy_pre_execution_webhooks,
):
"""
Test scopes are correct, although we just gave a user this token with the
correct scopes, the check doesn't mean much
"""
auth_header = {
"Authorization": "Bearer "
+ generate_jwe(
json.dumps(
{
"webhook_id": policy_pre_execution_webhooks[0].id,
"scopes": [PRIVACY_REQUEST_READ],
"iat": datetime.now().isoformat(),
}
)
)
}
response = api_client.post(url, headers=auth_header, json={})
assert response.status_code == 403
def test_resume_privacy_request_invalid_webhook(
self,
url,
api_client,
generate_webhook_auth_header,
policy_post_execution_webhooks,
):
"""Only can resume execution after Pre-Execution webhooks"""
auth_header = {
"Authorization": "Bearer "
+ generate_jwe(
json.dumps(
{
"webhook_id": policy_post_execution_webhooks[0].id,
"scopes": [PRIVACY_REQUEST_CALLBACK_RESUME],
"iat": datetime.now().isoformat(),
}
)
)
}
response = api_client.post(url, headers=auth_header, json={})
assert response.status_code == 404
def test_resume_privacy_request_not_paused(
self,
url,
api_client,
generate_webhook_auth_header,
policy_pre_execution_webhooks,
privacy_request,
db,
):
privacy_request.status = PrivacyRequestStatus.complete
privacy_request.save(db=db)
auth_header = generate_webhook_auth_header(
webhook=policy_pre_execution_webhooks[0]
)
response = api_client.post(url, headers=auth_header, json={})
assert response.status_code == 400
@mock.patch(
"fidesops.service.privacy_request.request_runner_service.PrivacyRequestRunner.submit"
)
def test_resume_privacy_request(
self,
submit_mock,
url,
api_client,
generate_webhook_auth_header,
policy_pre_execution_webhooks,
privacy_request,
db,
):
privacy_request.status = PrivacyRequestStatus.paused
privacy_request.save(db=db)
auth_header = generate_webhook_auth_header(
webhook=policy_pre_execution_webhooks[0]
)
response = api_client.post(
url, headers=auth_header, json={"derived_identity": {}}
)
assert response.status_code == 200
response_body = json.loads(response.text)
assert submit_mock.called
assert response_body == {
"id": privacy_request.id,
"created_at": stringify_date(privacy_request.created_at),
"started_processing_at": stringify_date(
privacy_request.started_processing_at
),
"finished_processing_at": None,
"status": "in_processing",
"external_id": privacy_request.external_id,
}
| 35.852308 | 93 | 0.549577 | 3,392 | 34,956 | 5.350531 | 0.084906 | 0.101052 | 0.051573 | 0.038019 | 0.830128 | 0.799989 | 0.759987 | 0.737561 | 0.716844 | 0.689404 | 0 | 0.019507 | 0.356191 | 34,956 | 974 | 94 | 35.889117 | 0.786936 | 0.00718 | 0 | 0.652222 | 0 | 0 | 0.130225 | 0.0564 | 0 | 0 | 0 | 0 | 0.103333 | 1 | 0.045556 | false | 0 | 0.018889 | 0.006667 | 0.076667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5d2063fcf17e91064862076552af55df1a9a4087 | 25,474 | py | Python | local/lib/python3.6/site-packages/pgadmin4/pgadmin/tools/backup/tests/test_backup_create_job_unit_test.py | sahilsdei/django_ecommerce | edc2513e41aca178d1ccae14ebaa6c7b1d709e73 | [
"MIT"
] | null | null | null | local/lib/python3.6/site-packages/pgadmin4/pgadmin/tools/backup/tests/test_backup_create_job_unit_test.py | sahilsdei/django_ecommerce | edc2513e41aca178d1ccae14ebaa6c7b1d709e73 | [
"MIT"
] | null | null | null | local/lib/python3.6/site-packages/pgadmin4/pgadmin/tools/backup/tests/test_backup_create_job_unit_test.py | sahilsdei/django_ecommerce | edc2513e41aca178d1ccae14ebaa6c7b1d709e73 | [
"MIT"
] | null | null | null | ##########################################################################
#
# pgAdmin 4 - PostgreSQL Tools
#
# Copyright (C) 2013 - 2018, The pgAdmin Development Team
# This software is released under the PostgreSQL Licence
#
##########################################################################
import sys
import simplejson as json
from pgadmin.utils.route import BaseTestGenerator
from regression import parent_node_dict
from pgadmin.utils import server_utils as server_utils
from pgadmin.browser.server_groups.servers.databases.tests import utils as \
database_utils
if sys.version_info < (3, 3):
from mock import patch, MagicMock
else:
from unittest.mock import patch, MagicMock
class BackupCreateJobTest(BaseTestGenerator):
"""Test the BackupCreateJob class"""
scenarios = [
('When backup object with default options',
dict(
class_params=dict(
sid=1,
name='test_backup_server',
port=5444,
host='localhost',
database='postgres',
bfile='test_backup',
username='postgres'
),
params=dict(
file='test_backup_file',
format='custom',
verbose=True,
blobs=True,
schemas=[],
tables=[],
database='postgres'
),
url='/backup/job/{0}/object',
expected_cmd_opts=['--verbose', '--format=c', '--blobs'],
not_expected_cmd_opts=[],
expected_exit_code=[0, None]
)),
('When backup object with format directory',
dict(
class_params=dict(
sid=1,
name='test_backup_server',
port=5444,
host='localhost',
database='postgres',
bfile='test_backup',
username='postgres'
),
params=dict(
file='test_backup_folder',
format='directory',
verbose=True,
blobs=False,
schemas=[],
tables=[],
database='postgres'
),
url='/backup/job/{0}/object',
expected_cmd_opts=['--verbose', '--format=d'],
not_expected_cmd_opts=[],
expected_exit_code=[0, None]
)),
('When backup the object with option sections to all data',
dict(
class_params=dict(
sid=1,
name='test_backup_server',
port=5444,
host='localhost',
database='postgres',
bfile='test_backup',
username='postgres'
),
params=dict(
file='test_backup_file',
format='custom',
verbose=True,
schemas=[],
tables=[],
database='postgres',
data=True,
pre_data=True,
post_data=True
),
url='/backup/job/{0}/object',
expected_cmd_opts=['--verbose', '--format=c',
'--section=pre-data', '--section=data',
'--section=post-data'],
not_expected_cmd_opts=[],
expected_exit_code=[0, None]
)),
('When backup the object with option only_data',
dict(
class_params=dict(
sid=1,
name='test_backup_server',
port=5444,
host='localhost',
database='postgres',
bfile='test_backup',
username='postgres'
),
params=dict(
file='test_backup_file',
format='plain',
verbose=True,
schemas=[],
tables=[],
database='postgres',
only_data=True,
only_schema=False
),
url='/backup/job/{0}/object',
expected_cmd_opts=['--verbose', '--format=p', '--data-only'],
not_expected_cmd_opts=[],
expected_exit_code=[0, None]
)),
        ('When backup the object with options only_data and only_schema',
dict(
class_params=dict(
sid=1,
name='test_backup_server',
port=5444,
host='localhost',
database='postgres',
bfile='test_backup',
username='postgres'
),
params=dict(
file='test_backup_file',
format='plain',
verbose=True,
schemas=[],
tables=[],
database='postgres',
only_data=True,
only_schema=True,
dns_owner=True
),
url='/backup/job/{0}/object',
expected_cmd_opts=['--verbose', '--format=p', '--data-only'],
not_expected_cmd_opts=['--schema-only'],
expected_exit_code=[0, None]
)),
('When backup the object with option only_schema',
dict(
class_params=dict(
sid=1,
name='test_backup_server',
port=5444,
host='localhost',
database='postgres',
bfile='test_backup',
username='postgres'
),
params=dict(
file='test_backup_file',
format='plain',
verbose=True,
schemas=[],
tables=[],
database='postgres',
only_data=False,
only_schema=True
),
url='/backup/job/{0}/object',
expected_cmd_opts=['--verbose', '--format=p', '--schema-only'],
not_expected_cmd_opts=[],
expected_exit_code=[0, None]
)),
('When backup the object with option - format plain and dns_owner',
dict(
class_params=dict(
sid=1,
name='test_backup_server',
port=5444,
host='localhost',
database='postgres',
bfile='test_backup',
username='postgres'
),
params=dict(
file='test_backup_file',
format='plain',
verbose=True,
schemas=[],
tables=[],
database='postgres',
dns_owner=True
),
url='/backup/job/{0}/object',
expected_cmd_opts=['--verbose', '--format=p', '--no-owner'],
not_expected_cmd_opts=[],
expected_exit_code=[0, None]
)),
('When backup the object with option - Do not save privilege,'
' tablespace, unlogged table data',
dict(
class_params=dict(
sid=1,
name='test_backup_server',
port=5444,
host='localhost',
database='postgres',
bfile='test_backup',
username='postgres'
),
params=dict(
file='test_backup_file',
format='custom',
verbose=True,
schemas=[],
tables=[],
database='postgres',
dns_privilege=True,
dns_unlogged_tbl_data=True,
dns_tablespace=True
),
url='/backup/job/{0}/object',
expected_cmd_opts=['--no-privileges',
'--no-tablespaces',
'--no-unlogged-table-data'],
not_expected_cmd_opts=[],
expected_exit_code=[0, None]
)),
('When backup the object with option - Do not save comments,',
dict(
class_params=dict(
sid=1,
name='test_backup_server',
port=5444,
host='localhost',
database='postgres',
bfile='test_backup',
username='postgres'
),
params=dict(
file='test_backup_file',
format='custom',
verbose=True,
schemas=[],
tables=[],
database='postgres',
no_comments=True,
),
url='/backup/job/{0}/object',
expected_cmd_opts=['--no-comments'],
not_expected_cmd_opts=[],
expected_exit_code=[0, None],
server_min_version=110000,
message='Backup object with --no-comments are not supported '
'by EPAS/PG server less than 11.0'
)),
('When backup the object with option - all queries',
dict(
class_params=dict(
sid=1,
name='test_backup_server',
port=5444,
host='localhost',
database='postgres',
bfile='test_backup',
username='postgres'
),
params=dict(
file='test_backup_file',
format='plain',
verbose=True,
schemas=[],
tables=[],
database='postgres',
use_column_inserts=True,
include_create_database=True,
use_insert_commands=True,
include_drop_database=True
),
url='/backup/job/{0}/object',
expected_cmd_opts=['--create', '--clean', '--inserts',
'--column-inserts'],
not_expected_cmd_opts=[],
expected_exit_code=[0, None]
)),
('When backup the object with option - load via partition root',
dict(
class_params=dict(
sid=1,
name='test_backup_server',
port=5444,
host='localhost',
database='postgres',
bfile='test_backup',
username='postgres'
),
params=dict(
file='test_backup_file',
format='plain',
verbose=True,
schemas=[],
tables=[],
database='postgres',
load_via_partition_root=True,
),
url='/backup/job/{0}/object',
expected_cmd_opts=['--load-via-partition-root'],
not_expected_cmd_opts=[],
expected_exit_code=[0, None],
server_min_version=110000,
message='Backup object with --load-via-partition-root are not '
'supported by EPAS/PG server less than 11.0'
)),
('When backup the object with option - all queries and format custom',
dict(
class_params=dict(
sid=1,
name='test_backup_server',
port=5444,
host='localhost',
database='postgres',
bfile='test_backup',
username='postgres'
),
params=dict(
file='test_backup_file',
format='custom',
verbose=True,
schemas=[],
tables=[],
database='postgres',
use_column_inserts=True,
include_create_database=True,
use_insert_commands=True,
include_drop_database=True
),
url='/backup/job/{0}/object',
expected_cmd_opts=['--inserts', '--clean',
'--column-inserts', '--create'],
not_expected_cmd_opts=[],
expected_exit_code=[0, None]
)),
('When backup the object with option - miscellaneous',
dict(
class_params=dict(
sid=1,
name='test_backup_server',
port=5444,
host='localhost',
database='postgres',
bfile='test_backup',
username='postgres'
),
params=dict(
file='test_backup_file',
format='custom',
verbose=True,
schemas=[],
tables=[],
database='postgres',
disable_quoting=True,
use_set_session_auth=True,
with_oids=True,
dqoute=True
),
url='/backup/job/{0}/object',
expected_cmd_opts=['--verbose', '--quote-all-identifiers',
'--disable-dollar-quoting', '--oids',
'--use-set-session-authorization'],
not_expected_cmd_opts=[],
expected_exit_code=[0, None]
)),
('When backup the object with format tar',
dict(
class_params=dict(
sid=1,
name='test_backup_server',
port=5444,
host='localhost',
database='postgres',
bfile='test_backup',
username='postgres'
),
params=dict(
file='test_backup_file',
format='tar',
verbose=True,
schemas=[],
tables=[],
database='postgres',
blobs=True,
),
url='/backup/job/{0}/object',
expected_cmd_opts=['--verbose',
'--blobs',
'--format=t'],
not_expected_cmd_opts=[],
expected_exit_code=[0, None]
)),
('When backup the server',
dict(
class_params=dict(
sid=1,
name='test_backup_server',
port=5444,
host='localhost',
database='postgres',
bfile='test_backup',
username='postgres'
),
params=dict(
file='test_backup_server_file',
dqoute=False,
verbose=True,
type='server'
),
url='/backup/job/{0}',
expected_cmd_opts=['--verbose'],
not_expected_cmd_opts=[],
expected_exit_code=[0, None]
)),
('When backup the server with option only_data',
dict(
class_params=dict(
sid=1,
name='test_backup_server',
port=5444,
host='localhost',
database='postgres',
bfile='test_backup',
username='postgres'
),
params=dict(
file='test_backup_server_file',
type='server',
verbose=True,
only_data=True,
only_schema=False
),
url='/backup/job/{0}',
expected_cmd_opts=['--verbose', '--data-only'],
not_expected_cmd_opts=[],
expected_exit_code=[0, None]
)),
('When backup the server with option only_schema',
dict(
class_params=dict(
sid=1,
name='test_backup_server',
port=5444,
host='localhost',
database='postgres',
bfile='test_backup',
username='postgres'
),
params=dict(
file='test_backup_server_file',
type='server',
format='plain',
verbose=True,
only_data=False,
only_schema=True
),
url='/backup/job/{0}',
expected_cmd_opts=['--verbose', '--schema-only'],
not_expected_cmd_opts=[],
expected_exit_code=[0, None]
)),
('When backup the server with option - Do not save privilege,'
' tablespace, unlogged table data',
dict(
class_params=dict(
sid=1,
name='test_backup_server',
port=5444,
host='localhost',
database='postgres',
bfile='test_backup',
username='postgres'
),
params=dict(
file='test_backup_server_file',
type='server',
format='plain',
verbose=True,
dns_privilege=True,
dns_unlogged_tbl_data=True,
dns_tablespace=True
),
url='/backup/job/{0}',
expected_cmd_opts=['--no-privileges',
'--no-tablespaces',
'--no-unlogged-table-data'],
not_expected_cmd_opts=[],
expected_exit_code=[0, None]
)),
('When backup the server with option - Do not save comments,',
dict(
class_params=dict(
sid=1,
name='test_backup_server',
port=5444,
host='localhost',
database='postgres',
bfile='test_backup',
username='postgres'
),
params=dict(
file='test_backup_server_file',
type='server',
format='plain',
verbose=True,
no_comments=True,
),
url='/backup/job/{0}',
expected_cmd_opts=['--no-comments'],
not_expected_cmd_opts=[],
expected_exit_code=[0, None],
server_min_version=110000,
message='Backup server with --no-comments are not supported '
'by EPAS/PG server less than 11.0'
)),
('When backup the server with option - all queries',
dict(
class_params=dict(
sid=1,
name='test_backup_server',
port=5444,
host='localhost',
database='postgres',
bfile='test_backup',
username='postgres'
),
params=dict(
file='test_backup_server_file',
type='server',
format='plain',
verbose=True,
use_column_inserts=True,
use_insert_commands=True,
include_drop_database=True
),
url='/backup/job/{0}',
expected_cmd_opts=['--clean', '--inserts',
'--column-inserts'],
not_expected_cmd_opts=[],
expected_exit_code=[0, None]
)),
('When backup the server with option - miscellaneous',
dict(
class_params=dict(
sid=1,
name='test_backup_server',
port=5444,
host='localhost',
database='postgres',
bfile='test_backup',
username='postgres'
),
params=dict(
file='test_backup_server_file',
type='server',
verbose=True,
disable_quoting=True,
use_set_session_auth=True,
with_oids=True,
dqoute=True
),
url='/backup/job/{0}',
expected_cmd_opts=['--verbose', '--quote-all-identifiers',
'--disable-dollar-quoting', '--oids',
'--use-set-session-authorization'],
not_expected_cmd_opts=[],
expected_exit_code=[0, None]
)),
('When backup the server with encoding',
dict(
class_params=dict(
sid=1,
name='test_backup_server',
port=5444,
host='localhost',
database='postgres',
bfile='test_backup',
username='postgres'
),
params=dict(
file='test_backup_server_file',
dqoute=False,
verbose=True,
type='server',
encoding='UTF-8'
),
url='/backup/job/{0}',
expected_cmd_opts=['--encoding'],
not_expected_cmd_opts=[],
expected_exit_code=[0, None],
server_min_version=110000,
message='Backup server with encoding are not supported '
'by EPAS/PG server less than 11.0'
)),
('When backup globals',
dict(
class_params=dict(
sid=1,
name='test_backup_server',
port=5444,
host='localhost',
database='postgres',
bfile='test_backup',
username='postgres'
),
params=dict(
file='test_backup_global_file',
dqoute=False,
verbose=True,
type='globals'
),
url='/backup/job/{0}',
expected_cmd_opts=['--globals-only'],
not_expected_cmd_opts=[],
expected_exit_code=[0, None]
))
]
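    # Each scenario above is executed once through runTest below; BatchProcess
    # and the Server model are mocked, so the assertions check only the
    # command-line options assembled for the backup utility
    # (pg_dump/pg_dumpall), not a real backup run.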
def setUp(self):
if self.server['default_binary_paths'] is None:
self.skipTest(
"default_binary_paths is not set for the server {0}".format(
self.server['name']
)
)
@patch('pgadmin.tools.backup.Server')
@patch('pgadmin.tools.backup.BackupMessage')
@patch('pgadmin.tools.backup.filename_with_file_manager_path')
@patch('pgadmin.tools.backup.BatchProcess')
@patch('pgadmin.utils.driver.psycopg2.server_manager.ServerManager.'
'export_password_env')
def runTest(self, export_password_env_mock, batch_process_mock,
filename_mock, backup_message_mock, server_mock):
class TestMockServer():
def __init__(self, name, host, port, id, username,
maintenance_db):
self.name = name
self.host = host
self.port = port
self.id = id
self.username = username
self.maintenance_db = maintenance_db
self.server_id = parent_node_dict["server"][-1]["server_id"]
mock_obj = TestMockServer(self.class_params['name'],
self.class_params['host'],
self.class_params['port'],
self.server_id,
self.class_params['username'],
self.class_params['database']
)
mock_result = server_mock.query.filter_by.return_value
mock_result.first.return_value = mock_obj
filename_mock.return_value = self.params['file']
batch_process_mock.set_env_variables = MagicMock(
return_value=True
)
batch_process_mock.start = MagicMock(
return_value=True
)
export_password_env_mock.return_value = True
server_response = server_utils.connect_server(self, self.server_id)
if server_response["info"] == "Server connected.":
db_owner = server_response['data']['user']['name']
self.data = database_utils.get_db_data(db_owner)
if hasattr(self, 'server_min_version') and \
server_response["data"]["version"] < \
self.server_min_version:
self.skipTest(self.message)
url = self.url.format(self.server_id)
# Create the backup job
response = self.tester.post(url,
data=json.dumps(self.params),
content_type='html/json')
self.assertEqual(response.status_code, 200)
self.assertTrue(backup_message_mock.called)
self.assertTrue(batch_process_mock.called)
if self.expected_cmd_opts:
for opt in self.expected_cmd_opts:
self.assertIn(
opt,
batch_process_mock.call_args_list[0][1]['args']
)
if self.not_expected_cmd_opts:
for opt in self.not_expected_cmd_opts:
self.assertNotIn(
opt,
batch_process_mock.call_args_list[0][1]['args']
)
| 35.282548 | 78 | 0.44359 | 2,148 | 25,474 | 5.032588 | 0.10149 | 0.06383 | 0.06938 | 0.041628 | 0.76605 | 0.763367 | 0.756337 | 0.744311 | 0.73765 | 0.730527 | 0 | 0.015535 | 0.446612 | 25,474 | 721 | 79 | 35.331484 | 0.751295 | 0.007576 | 0 | 0.800289 | 0 | 0 | 0.203654 | 0.038772 | 0 | 0 | 0 | 0 | 0.007236 | 1 | 0.004342 | false | 0.004342 | 0.011577 | 0 | 0.02026 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5d21868bb82c54910fbb435bab5457307cf9e605 | 97 | py | Python | SilverstackAccess/__init__.py | andostini/DailiesPipe | 06dedfa30b7d12ff795a9267d13b2f5c6106c986 | [
"MIT"
] | 1 | 2021-12-08T09:16:27.000Z | 2021-12-08T09:16:27.000Z | SilverstackAccess/__init__.py | andostini/SilverstackAccess | 06dedfa30b7d12ff795a9267d13b2f5c6106c986 | [
"MIT"
] | 1 | 2021-08-10T13:24:41.000Z | 2021-08-10T13:24:41.000Z | SilverstackAccess/__init__.py | andostini/DailiesPipe | 06dedfa30b7d12ff795a9267d13b2f5c6106c986 | [
"MIT"
] | 1 | 2021-01-29T15:23:27.000Z | 2021-01-29T15:23:27.000Z | from SilverstackAccess.SilverstackAccess import findSilverstackInstances, getProjectList, Project | 97 | 97 | 0.917526 | 7 | 97 | 12.714286 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.051546 | 97 | 1 | 97 | 97 | 0.967391 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5d358cbc51decc8a57603fc561c1a95f1c8993cc | 30 | py | Python | TestGithub_PZB_20190210.py | Megabazus/Test_ZZ-Group | de0307c62bc43c1e0305f1d75a34b5391e9b0eeb | [
"Apache-2.0"
] | null | null | null | TestGithub_PZB_20190210.py | Megabazus/Test_ZZ-Group | de0307c62bc43c1e0305f1d75a34b5391e9b0eeb | [
"Apache-2.0"
] | 1 | 2018-10-20T15:54:43.000Z | 2018-10-20T15:54:43.000Z | TestGithub_PZB_20190210.py | Megabazus/Test_ZZ-Group | de0307c62bc43c1e0305f1d75a34b5391e9b0eeb | [
"Apache-2.0"
] | null | null | null | #### test
import pandas as pd
| 10 | 19 | 0.666667 | 5 | 30 | 4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 30 | 2 | 20 | 15 | 0.833333 | 0.133333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5d3ffc9072b38f03a5cc32c756b53f16c27e1f49 | 294 | py | Python | sentry/views/__init__.py | davedash/sentry | 8c11b2db7f09844aa860bfe7f1c3ff23c0d30f94 | [
"BSD-3-Clause"
] | 1 | 2018-07-15T00:12:53.000Z | 2018-07-15T00:12:53.000Z | sentry/views/__init__.py | davedash/sentry | 8c11b2db7f09844aa860bfe7f1c3ff23c0d30f94 | [
"BSD-3-Clause"
] | null | null | null | sentry/views/__init__.py | davedash/sentry | 8c11b2db7f09844aa860bfe7f1c3ff23c0d30f94 | [
"BSD-3-Clause"
] | null | null | null | """
sentry.views
~~~~~~~~~~~~
:copyright: (c) 2010-2012 by the Sentry Team, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from sentry.views.base import *
from sentry.views.exception import *
from sentry.views.message import *
from sentry.views.query import *
| 22.615385 | 75 | 0.717687 | 42 | 294 | 5.02381 | 0.52381 | 0.260664 | 0.28436 | 0.298578 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.031621 | 0.139456 | 294 | 12 | 76 | 24.5 | 0.802372 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5d43632577e7f973181eddeb85dc3cef81146d59 | 190 | py | Python | datasetcode.py | Hope-2020/web-scraping-OECD- | ed30f87805bbda585fc7df8131a8c23b95dbb835 | [
"Apache-2.0"
] | null | null | null | datasetcode.py | Hope-2020/web-scraping-OECD- | ed30f87805bbda585fc7df8131a8c23b95dbb835 | [
"Apache-2.0"
] | null | null | null | datasetcode.py | Hope-2020/web-scraping-OECD- | ed30f87805bbda585fc7df8131a8c23b95dbb835 | [
"Apache-2.0"
] | null | null | null | from get_url_from_firstpage import Get_Url_From_FirstPage
from selenium import webdriver  # imported here but not yet used in this snippet
class datacode:
    # Runs once at class-definition time: fetch the first-page URL list and print it.
    urls = Get_Url_From_FirstPage.getUrl()
    print(urls)
d = datacode()
| 19 | 57 | 0.757895 | 26 | 190 | 5.192308 | 0.5 | 0.133333 | 0.222222 | 0.422222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.189474 | 190 | 9 | 58 | 21.111111 | 0.876623 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.666667 | 0.166667 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5d4a69a127d401e476559d85754553ebadc96e7a | 9,268 | py | Python | sdk/python/pulumi_aws/kinesis/stream.py | johnktims/pulumi-aws | c838bc79043f5376c66fc66275a1e012edd3ab7d | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_aws/kinesis/stream.py | johnktims/pulumi-aws | c838bc79043f5376c66fc66275a1e012edd3ab7d | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_aws/kinesis/stream.py | johnktims/pulumi-aws | c838bc79043f5376c66fc66275a1e012edd3ab7d | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import json
import warnings
import pulumi
import pulumi.runtime
from typing import Union
from .. import utilities, tables
class Stream(pulumi.CustomResource):
arn: pulumi.Output[str]
"""
The Amazon Resource Name (ARN) specifying the Stream (same as `id`)
"""
encryption_type: pulumi.Output[str]
"""
The encryption type to use. The only acceptable values are `NONE` or `KMS`. The default value is `NONE`.
"""
enforce_consumer_deletion: pulumi.Output[bool]
"""
A boolean that indicates all registered consumers should be deregistered from the stream so that the stream can be destroyed without error. The default value is `false`.
"""
kms_key_id: pulumi.Output[str]
"""
The GUID for the customer-managed KMS key to use for encryption. You can also use a Kinesis-owned master key by specifying the alias `alias/aws/kinesis`.
"""
name: pulumi.Output[str]
"""
A name to identify the stream. This is unique to the AWS account and region the Stream is created in.
"""
retention_period: pulumi.Output[float]
"""
Length of time data records are accessible after they are added to the stream. The maximum value of a stream's retention period is 168 hours. Minimum value is 24. Default is 24.
"""
shard_count: pulumi.Output[float]
"""
The number of shards that the stream will use.
Amazon has guidelines for specifying the Stream size that should be referenced when creating a Kinesis stream. See [Amazon Kinesis Streams][2] for more.
"""
shard_level_metrics: pulumi.Output[list]
"""
A list of shard-level CloudWatch metrics which can be enabled for the stream. See [Monitoring with CloudWatch][3] for more. Note that the value ALL should not be used; instead you should provide an explicit list of metrics you wish to enable.
"""
tags: pulumi.Output[dict]
"""
A mapping of tags to assign to the resource.
"""
def __init__(__self__, resource_name, opts=None, arn=None, encryption_type=None, enforce_consumer_deletion=None, kms_key_id=None, name=None, retention_period=None, shard_count=None, shard_level_metrics=None, tags=None, __props__=None, __name__=None, __opts__=None):
"""
Provides a Kinesis Stream resource. Amazon Kinesis is a managed service that
scales elastically for real-time processing of streaming big data.
For more details, see the [Amazon Kinesis Documentation][1].
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] arn: The Amazon Resource Name (ARN) specifying the Stream (same as `id`)
:param pulumi.Input[str] encryption_type: The encryption type to use. The only acceptable values are `NONE` or `KMS`. The default value is `NONE`.
:param pulumi.Input[bool] enforce_consumer_deletion: A boolean that indicates all registered consumers should be deregistered from the stream so that the stream can be destroyed without error. The default value is `false`.
:param pulumi.Input[str] kms_key_id: The GUID for the customer-managed KMS key to use for encryption. You can also use a Kinesis-owned master key by specifying the alias `alias/aws/kinesis`.
:param pulumi.Input[str] name: A name to identify the stream. This is unique to the AWS account and region the Stream is created in.
:param pulumi.Input[float] retention_period: Length of time data records are accessible after they are added to the stream. The maximum value of a stream's retention period is 168 hours. Minimum value is 24. Default is 24.
:param pulumi.Input[float] shard_count: The number of shards that the stream will use.
Amazon has guidelines for specifying the Stream size that should be referenced when creating a Kinesis stream. See [Amazon Kinesis Streams][2] for more.
:param pulumi.Input[list] shard_level_metrics: A list of shard-level CloudWatch metrics which can be enabled for the stream. See [Monitoring with CloudWatch][3] for more. Note that the value ALL should not be used; instead you should provide an explicit list of metrics you wish to enable.
:param pulumi.Input[dict] tags: A mapping of tags to assign to the resource.
"""
if __name__ is not None:
warnings.warn("explicit use of __name__ is deprecated", DeprecationWarning)
resource_name = __name__
if __opts__ is not None:
warnings.warn("explicit use of __opts__ is deprecated, use 'opts' instead", DeprecationWarning)
opts = __opts__
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = dict()
__props__['arn'] = arn
__props__['encryption_type'] = encryption_type
__props__['enforce_consumer_deletion'] = enforce_consumer_deletion
__props__['kms_key_id'] = kms_key_id
__props__['name'] = name
__props__['retention_period'] = retention_period
if shard_count is None:
raise TypeError("Missing required property 'shard_count'")
__props__['shard_count'] = shard_count
__props__['shard_level_metrics'] = shard_level_metrics
__props__['tags'] = tags
super(Stream, __self__).__init__(
'aws:kinesis/stream:Stream',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name, id, opts=None, arn=None, encryption_type=None, enforce_consumer_deletion=None, kms_key_id=None, name=None, retention_period=None, shard_count=None, shard_level_metrics=None, tags=None):
"""
Get an existing Stream resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
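        Example (a minimal sketch; the resource name and provider ID are
        illustrative assumptions):

            existing = Stream.get("existing-stream", id="my-kinesis-stream")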
:param str resource_name: The unique name of the resulting resource.
:param str id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] arn: The Amazon Resource Name (ARN) specifying the Stream (same as `id`)
:param pulumi.Input[str] encryption_type: The encryption type to use. The only acceptable values are `NONE` or `KMS`. The default value is `NONE`.
:param pulumi.Input[bool] enforce_consumer_deletion: A boolean that indicates all registered consumers should be deregistered from the stream so that the stream can be destroyed without error. The default value is `false`.
:param pulumi.Input[str] kms_key_id: The GUID for the customer-managed KMS key to use for encryption. You can also use a Kinesis-owned master key by specifying the alias `alias/aws/kinesis`.
:param pulumi.Input[str] name: A name to identify the stream. This is unique to the AWS account and region the Stream is created in.
:param pulumi.Input[float] retention_period: Length of time data records are accessible after they are added to the stream. The maximum value of a stream's retention period is 168 hours. Minimum value is 24. Default is 24.
:param pulumi.Input[float] shard_count: The number of shards that the stream will use.
Amazon has guidelines for specifying the Stream size that should be referenced when creating a Kinesis stream. See [Amazon Kinesis Streams][2] for more.
:param pulumi.Input[list] shard_level_metrics: A list of shard-level CloudWatch metrics which can be enabled for the stream. See [Monitoring with CloudWatch][3] for more. Note that the value ALL should not be used; instead you should provide an explicit list of metrics you wish to enable.
:param pulumi.Input[dict] tags: A mapping of tags to assign to the resource.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = dict()
__props__["arn"] = arn
__props__["encryption_type"] = encryption_type
__props__["enforce_consumer_deletion"] = enforce_consumer_deletion
__props__["kms_key_id"] = kms_key_id
__props__["name"] = name
__props__["retention_period"] = retention_period
__props__["shard_count"] = shard_count
__props__["shard_level_metrics"] = shard_level_metrics
__props__["tags"] = tags
return Stream(resource_name, opts=opts, __props__=__props__)
def translate_output_property(self, prop):
return tables._CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
def translate_input_property(self, prop):
return tables._SNAKE_TO_CAMEL_CASE_TABLE.get(prop) or prop
| 63.479452 | 297 | 0.708999 | 1,313 | 9,268 | 4.808835 | 0.160701 | 0.038486 | 0.045613 | 0.024073 | 0.736775 | 0.720621 | 0.713652 | 0.713652 | 0.702882 | 0.697339 | 0 | 0.004012 | 0.220004 | 9,268 | 145 | 298 | 63.917241 | 0.869415 | 0.440117 | 0 | 0.029851 | 1 | 0 | 0.151239 | 0.021361 | 0 | 0 | 0 | 0 | 0 | 1 | 0.059701 | false | 0.014925 | 0.089552 | 0.029851 | 0.343284 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
538792691654d77c0703ddb0c0988cf3d4031e96 | 11,203 | py | Python | DrivenRange2Ddensity.py | MargauxVrech/neuro | 32cc73969c6f6d832025a1ffd6fe094eb6cf2c37 | [
"BSD-2-Clause"
] | null | null | null | DrivenRange2Ddensity.py | MargauxVrech/neuro | 32cc73969c6f6d832025a1ffd6fe094eb6cf2c37 | [
"BSD-2-Clause"
] | null | null | null | DrivenRange2Ddensity.py | MargauxVrech/neuro | 32cc73969c6f6d832025a1ffd6fe094eb6cf2c37 | [
"BSD-2-Clause"
] | null | null | null | {
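    # NOTE: this parameter dict is assumed to be evaluated in a context that
    # provides `sim` (a PyNN backend module) and `Grid2D` (from pyNN.space);
    # neither is imported here.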
'run_time': 6000, # ms
'dt': 0.1, # ms
'Populations' : {
'drive' : {
# 'n' : 25*25,
'n' : 100*100,
'type': sim.SpikeSourcePoisson,
'cellparams' : {
'start':0.0,
'rate':4.,
'duration': 6000.0
}
},
'py' : {
'n': 100*100, # units
'type': sim.EIF_cond_alpha_isfa_ista,
'structure' : Grid2D(aspect_ratio=1, dx=1.0, dy=1.0, fill_order='sequential', rng=sim.NumpyRNG(seed=2**32-1)),
'cellparams': {
'e_rev_I' : -80, # mV, reversal potential of excitatory synapses
'e_rev_E' : 0, # mV, reversal potential of inhibitory synapses
'tau_syn_E' : 5.0, # ms, time constant of excitatory synaptic short-term plasticity, YgerBoustaniDestexheFregnac2011
'tau_syn_I' : 5.0, # ms, time constant of inhibitory synaptic short-term plasticity, YgerBoustaniDestexheFregnac2011
'tau_refrac' : 5.0, # ms, refractory period
'v_reset' : -65.0, # mV, reset after spike
'v_thresh' : -50.0, # mV, spike threshold (modified by adaptation)
'delta_T' : 2., # mV, steepness of exponential approach to threshold
'cm' : 0.150, # nF, tot membrane capacitance
'a' : 4., # nS, conductance of adaptation variable
'tau_m' : 15.0, # ms, time constant of leak conductance (cm/gl)
'v_rest' : -65.0, # mV, resting potential E_leak
'tau_w' : 500.0, # ms, time constant of adaptation variable
'b' : .02, # nA, increment to adaptation variable
},
},
'inh' : {
'n': 50*50, #{'ref':'py','ratio':0.25},
'type': sim.EIF_cond_alpha_isfa_ista,
'structure' : Grid2D(aspect_ratio=1, dx=2.0, dy=2.0, fill_order='sequential', rng=sim.NumpyRNG(seed=2**32-1)),
'cellparams': {
'e_rev_I' : -80, # mV, reversal potential of excitatory synapses
'e_rev_E' : 0, # mV, reversal potential of inhibitory synapses
'tau_syn_E' : 5.0, # ms, time constant of excitatory synaptic short-term plasticity, YgerBoustaniDestexheFregnac2011
'tau_syn_I' : 5.0, # ms, time constant of inhibitory synaptic short-term plasticity, YgerBoustaniDestexheFregnac2011
'tau_refrac' : 5.0, # ms, refractory period
'v_reset' : -65.0, # mV, reset after spike
'v_thresh' : -50.0, # mV, spike threshold (modified by adaptation)
'delta_T' : 0.5, # mV, steepness of exponential approach to threshold
'cm' : 0.150, # nF, tot membrane capacitance
'a' : 0.0, # nS, conductance of adaptation variable
'tau_m' : 15.0, # ms, time constant of leak conductance (cm/gl)
'v_rest' : -65.0, # mV, resting potential E_leak
'tau_w' : 500.0, # ms, time constant of adaptation variable
'b' : 0.0, # nA, increment to adaptation variable
},
},
},
'Projections' : {
# 'drive_py' : {
# 'source' : 'drive',
# 'target' : 'py',
# 'space' : sim.Space(periodic_boundaries=((0,100), (0,100), None)), # torus
# 'connector' : sim.FixedProbabilityConnector(.01, rng=sim.random.NumpyRNG(2**32-1)),
# 'synapse_type' : sim.StaticSynapse(),
# # 'weight' : .003, # uS # 25*25 *1000 *.008 = 5000
# # 'weight' : .0005, # uS # 100*100 *1000 *.0005 = 5000
# 'receptor_type' : 'excitatory',
# 'save_connections':False,
# 'print_statistics':False,
# },
# 'drive_inh' : {
# 'source' : 'drive',
# 'target' : 'inh',
# 'space' : sim.Space(periodic_boundaries=((0,100), (0,100), None)), # torus
# 'connector' : sim.FixedProbabilityConnector(.01, rng=sim.random.NumpyRNG(2**32-1)),
# 'synapse_type' : sim.StaticSynapse(),
# # 'weight' : .003, # uS # 25*25 *1000 *.008 = 5000
# # 'weight' : {'ref':'drive_py'}, # uS # 100*100 *1000 *.0005 = 5000
# 'receptor_type' : 'excitatory',
# 'save_connections':False,
# 'print_statistics':False,
# },
'py_py' : {
'source' : 'py',
'target' : 'py',
'space' : sim.Space(periodic_boundaries=((0,100), (0,100), None)), # torus
'connector' : sim.DistanceDependentProbabilityConnector("14*exp(-1.2*d)", allow_self_connections=False, rng=sim.NumpyRNG(2**32-1)), # radius 300um
'weight' : .001, # uS
'synapse_type' : sim.StaticSynapse(),
'delay' : .5, # ms
'receptor_type' : 'excitatory',
'save_connections':False,
'print_statistics':False,
},
'py_inh' : {
'source' : 'py',
'target' : 'inh',
'space' : sim.Space(periodic_boundaries=((0,100), (0,100), None)), # torus
'connector' : sim.DistanceDependentProbabilityConnector("24*exp(-1.5*d)", allow_self_connections=False, rng=sim.NumpyRNG(2**32-1)), # radius 100um
'weight' : .001, # uS
'synapse_type' : sim.StaticSynapse(),
'delay' : .5, # ms,
'receptor_type' : 'excitatory',
'save_connections':False,
'print_statistics':False,
},
'inh_inh' : {
'source' : 'inh',
'target' : 'inh',
'space' : sim.Space(periodic_boundaries=((0,100), (0,100), None)), # torus
'connector' : sim.DistanceDependentProbabilityConnector("14*exp(-1.2*d)", allow_self_connections=False, rng=sim.NumpyRNG(2**32-1)), # radius 300um
'weight' : .005, # uS
'synapse_type' : sim.StaticSynapse(),
'delay' : .5, # ms,
'receptor_type' : 'inhibitory',
'save_connections':False,
'print_statistics':False,
},
'inh_py' : {
'source' : 'inh',
'target' : 'py',
'space' : sim.Space(periodic_boundaries=((0,100), (0,100), None)), # torus
'connector' : sim.DistanceDependentProbabilityConnector("24*exp(-1.5*d)", allow_self_connections=False, rng=sim.NumpyRNG(2**32-1)), # radius 200um
'weight' : .005, # uS
'synapse_type' : sim.StaticSynapse(),
'delay' : .5, # ms,
'receptor_type' : 'inhibitory',
'save_connections':False,
'print_statistics':False,
},
},
'Recorders' : {
'py' : {
'spikes' : 'all',
'v' : {
'MUA': True,
'x': 35, # Corresponds to the lower-left corner of the 10x10 square centered at 32, hence 32-5=27
'y': 35,
'size': 30,
},
'gsyn_exc' : {
'start' : 0,
'end' : 10,
},
'gsyn_inh' : {
'start' : 0,
'end' : 10,
},
},
'inh' : {
'spikes' : 'all',
# 'v' : {
# 'start' : 0,
# 'end' : 100,
# },
'gsyn_exc' : {
'start' : 0,
'end' : 10,
},
'gsyn_inh' : {
'start' : 0,
'end' : 10,
},
},
},
'Modifiers' :{
},
'Injections' : {
# 'py' : { # 'modKey':{modVal}
# 'source' : sim.StepCurrentSource,
# 'amplitude' : [.4, .0], # default
# 'start' : [1000., 1100.], # long duration
# 'stop' : 0.0,
# #'cellidx' : 50+(100*50), # Take the cell in the middle of the square: go up 50 rows from the bottom (on a 100-cell-wide grid), then shift to the 50th column
# #'cellidx' : 32+(64*32), # Take the cell in the middle of the square: go up 32 rows from the bottom (on a 64-cell-wide grid), then shift to the 32nd column
# #'cellidx' : [2015,2016,2017,2079,2080,2081,2143,2144,2145], #3x3 cellinjected
# #[2015, 48], [2016, 49], [2017, 50], [2079, 59], [2080, 60], [2081, 61], , [2143, 70], [2144, 71], [2145, 72],
# #'cellidx' : [5149,5150,5151,5049,5050,5051,4949,4950,4951], #3x3 cellinjected
#
# ##### 7x7 cell injected
# #'cellidx' : [5347,5348,5349,5350,5351,5352,5353,5247,5248,5249,5250,5251,5252,5253,5147,5148,5149,5150,5151,5152,5153,5047,5048,5049,5050,5051,5052,5053,4947,4948,4949,4950,4951,4952,4953,4847,4848,4849,4850,4851,4852,4853,4747,4748,4749,4750,4751,4752,4753]
# ##### 5x5 cell injected
# 'cellidx' : [5248,5249,5250,5251,5252,5148,5149,5150,5151,5152,5048,5049,5050,5051,5052,4948,4949,4950,4951,4952,4848,4849,4850,4851,4852],
#
#
#
# # 'cellidx' : 32+(64*32), # Take the cell in the middle of the square: go up 32 rows from the bottom (on a 64-cell-wide grid), then shift to the 32nd column
# # 'cellidx' : [32*32, 7543, 6536], # Take the cell in the middle of the square
# },
},
'Analysis' : {
# 'subsampling': 1000, # number of randomly selected unit spiketrains for analysis (10% of the total, as for genetic or rabies labelling)
'scores' : ['py'],
# 'scores' : ['py','inh'],
'transient' : 2, # ms
'Vm' : {
'py'
},
# 'ConductanceBalance' : {
# 'py':{
# 'trials': ['default'], # for which trials the analysis has to be computed
# },
# 'inh':{
# 'trials': ['default'], # for which trials the analysis has to be computed
# },
# },
'FiringRate' : {
'bin': 10, # ms
'py':{
'firing': [0,5],
},
'inh':{
'firing': [0,5],
},
},
# 'Rasterplot' : {
# 'py':{
# 'limits': [(0,63),(0,63)], # coords: [(from x, to x), (from y, to y)]
# 'color': 'red',
# },
# 'inh':{
# 'limits': [(0,63),(0,63)], # coords: [(from x, to x), (from y, to y)]
# 'color': 'blue',
# },
# 'type': '.png',
# 'type': '.svg',
# 'interval': False, # all
# # 'interval': [2000.,3000.], # ms # from 2s to 3s
# 'dpi':800,
# },
},
}
| 44.633466 | 275 | 0.466929 | 1,167 | 11,203 | 4.401885 | 0.277635 | 0.009344 | 0.006229 | 0.02336 | 0.771657 | 0.720654 | 0.720654 | 0.714619 | 0.708585 | 0.708585 | 0 | 0.130095 | 0.382487 | 11,203 | 250 | 276 | 44.812 | 0.61246 | 0.434169 | 0 | 0.579618 | 0 | 0 | 0.170781 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0.025478 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
53b635ce91fcd4d4ef6740f25d69ca9ca8612a3c | 8,979 | py | Python | mpf/tests/test_Flippers.py | enteryourinitials/mpf | 8fa529aacc1b163c71557adb61b591077d66c77e | [
"MIT"
] | null | null | null | mpf/tests/test_Flippers.py | enteryourinitials/mpf | 8fa529aacc1b163c71557adb61b591077d66c77e | [
"MIT"
] | null | null | null | mpf/tests/test_Flippers.py | enteryourinitials/mpf | 8fa529aacc1b163c71557adb61b591077d66c77e | [
"MIT"
] | null | null | null | from mpf.platforms.interfaces.driver_platform_interface import PulseSettings, HoldSettings
from mpf.core.platform import SwitchSettings, DriverSettings, RepulseSettings
from mpf.tests.MpfTestCase import MpfTestCase
from unittest.mock import MagicMock, call
class TestFlippers(MpfTestCase):
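    """Tests for flipper rules: single coil, hold coil with EOS switch, settings, and software flips."""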
def get_config_file(self):
return 'config.yaml'
def get_machine_path(self):
return 'tests/machine_files/flippers/'
def get_platform(self):
return 'virtual'
def test_single(self):
self.machine.default_platform.set_pulse_on_hit_and_enable_and_release_rule = MagicMock()
self.machine.flippers["f_test_single"].enable()
self.assertEqual(1, len(self.machine.default_platform.set_pulse_on_hit_and_enable_and_release_rule.
_mock_call_args_list))
self.machine.default_platform.set_pulse_on_hit_and_enable_and_release_rule.assert_called_once_with(
SwitchSettings(hw_switch=self.machine.switches["s_flipper"].hw_switch, invert=False, debounce=False),
DriverSettings(hw_driver=self.machine.coils["c_flipper_main"].hw_driver,
pulse_settings=PulseSettings(power=1.0, duration=10),
hold_settings=HoldSettings(power=0.125), recycle=False)
)
self.machine.default_platform.clear_hw_rule = MagicMock()
self.machine.flippers["f_test_single"].disable()
self.assertEqual(1, self.machine.default_platform.clear_hw_rule.called)
self.machine.default_platform.clear_hw_rule.assert_called_once_with(
SwitchSettings(hw_switch=self.machine.switches["s_flipper"].hw_switch, invert=False, debounce=False),
DriverSettings(hw_driver=self.machine.coils["c_flipper_main"].hw_driver,
pulse_settings=PulseSettings(power=1.0, duration=10),
hold_settings=HoldSettings(power=0.125), recycle=False)
)
def test_hold_with_eos(self):
self.machine.default_platform.set_pulse_on_hit_and_release_and_disable_rule = MagicMock()
self.machine.default_platform.set_pulse_on_hit_and_enable_and_release_rule = MagicMock()
self.machine.flippers["f_test_hold_eos"].enable()
self.machine.default_platform.set_pulse_on_hit_and_enable_and_release_rule.assert_called_once_with(
SwitchSettings(hw_switch=self.machine.switches["s_flipper"].hw_switch, invert=False, debounce=False),
DriverSettings(hw_driver=self.machine.coils["c_flipper_hold"].hw_driver,
pulse_settings=PulseSettings(power=1.0, duration=10),
hold_settings=HoldSettings(power=1.0), recycle=False)
)
self.machine.default_platform.set_pulse_on_hit_and_release_and_disable_rule.assert_called_with(
SwitchSettings(hw_switch=self.machine.switches["s_flipper"].hw_switch, invert=False, debounce=False),
SwitchSettings(hw_switch=self.machine.switches["s_flipper_eos"].hw_switch, invert=False, debounce=False),
DriverSettings(hw_driver=self.machine.coils["c_flipper_main"].hw_driver,
pulse_settings=PulseSettings(power=1.0, duration=10),
hold_settings=None, recycle=False),
RepulseSettings(enable_repulse=False)
)
self.machine.default_platform.clear_hw_rule = MagicMock()
self.machine.flippers["f_test_hold_eos"].disable()
self.machine.default_platform.clear_hw_rule.assert_has_calls([
call(
SwitchSettings(hw_switch=self.machine.switches["s_flipper"].hw_switch, invert=False, debounce=False),
DriverSettings(hw_driver=self.machine.coils["c_flipper_main"].hw_driver,
pulse_settings=PulseSettings(power=1.0, duration=10),
hold_settings=None, recycle=False)
),
call(
SwitchSettings(hw_switch=self.machine.switches["s_flipper_eos"].hw_switch, invert=False, debounce=False),
DriverSettings(hw_driver=self.machine.coils["c_flipper_main"].hw_driver,
pulse_settings=PulseSettings(power=1.0, duration=10),
hold_settings=None, recycle=False)
),
call(
SwitchSettings(hw_switch=self.machine.switches["s_flipper"].hw_switch, invert=False, debounce=False),
DriverSettings(hw_driver=self.machine.coils["c_flipper_hold"].hw_driver,
pulse_settings=PulseSettings(power=1.0, duration=10),
hold_settings=HoldSettings(power=1.0), recycle=False)
),
], any_order=True)
def test_flipper_with_settings(self):
flipper = self.machine.flippers["f_test_flippers_with_settings"]
self.machine.default_platform.set_pulse_on_hit_and_enable_and_release_rule = MagicMock()
flipper.enable()
self.assertEqual(1, len(self.machine.default_platform.set_pulse_on_hit_and_enable_and_release_rule.
_mock_call_args_list))
self.machine.default_platform.set_pulse_on_hit_and_enable_and_release_rule.assert_called_once_with(
SwitchSettings(hw_switch=self.machine.switches["s_flipper"].hw_switch, invert=False, debounce=False),
DriverSettings(hw_driver=self.machine.coils["c_flipper_main"].hw_driver,
pulse_settings=PulseSettings(power=1.0, duration=10),
hold_settings=HoldSettings(power=0.125), recycle=False)
)
self.machine.default_platform.clear_hw_rule = MagicMock()
flipper.disable()
self.assertEqual(1, self.machine.default_platform.clear_hw_rule.called)
self.machine.default_platform.clear_hw_rule.assert_called_once_with(
SwitchSettings(hw_switch=self.machine.switches["s_flipper"].hw_switch, invert=False, debounce=False),
DriverSettings(hw_driver=self.machine.coils["c_flipper_main"].hw_driver,
pulse_settings=PulseSettings(power=1.0, duration=10),
hold_settings=HoldSettings(power=0.125), recycle=False))
self.machine.settings.set_setting_value("flipper_power", 0.8)
self.advance_time_and_run()
self.machine.default_platform.set_pulse_on_hit_and_enable_and_release_rule = MagicMock()
flipper.enable()
self.assertEqual(1, len(self.machine.default_platform.set_pulse_on_hit_and_enable_and_release_rule.
_mock_call_args_list))
self.machine.default_platform.set_pulse_on_hit_and_enable_and_release_rule.assert_called_once_with(
SwitchSettings(hw_switch=self.machine.switches["s_flipper"].hw_switch, invert=False, debounce=False),
DriverSettings(hw_driver=self.machine.coils["c_flipper_main"].hw_driver,
pulse_settings=PulseSettings(power=1.0, duration=8),
hold_settings=HoldSettings(power=0.125), recycle=False)
)
self.assertEqual(8, flipper._get_pulse_ms())
def test_sw_flip_and_release(self):
self.machine.coils["c_flipper_main"].enable = MagicMock()
self.machine.coils["c_flipper_main"].disable = MagicMock()
self.post_event("flip_single")
assert not self.machine.coils["c_flipper_main"].enable.called
self.machine.flippers["f_test_single"].enable()
self.post_event("flip_single")
self.machine.coils["c_flipper_main"].enable.assert_called_once_with()
self.machine.coils["c_flipper_main"].enable = MagicMock()
self.post_event("release_single")
self.machine.coils["c_flipper_main"].disable.assert_called_once_with()
# flip again
self.post_event("flip_single")
self.machine.coils["c_flipper_main"].enable.assert_called_once_with()
self.machine.coils["c_flipper_main"].pulse = MagicMock()
self.machine.coils["c_flipper_main"].disable = MagicMock()
self.machine.coils["c_flipper_hold"].enable = MagicMock()
self.machine.coils["c_flipper_hold"].disable = MagicMock()
self.machine.flippers["f_test_single"].disable()
# The switch is not active, so disabling should release the flipper
self.machine.coils["c_flipper_main"].disable.assert_called_once_with()
self.machine.coils["c_flipper_main"].disable = MagicMock()
self.machine.flippers["f_test_hold_eos"].enable()
self.post_event("flip_hold")
self.machine.coils["c_flipper_main"].pulse.assert_called_once_with()
self.machine.coils["c_flipper_hold"].enable.assert_called_once_with()
self.post_event("release_hold")
self.machine.coils["c_flipper_main"].disable.assert_called_once_with()
self.machine.coils["c_flipper_hold"].disable.assert_called_once_with()
| 53.446429 | 121 | 0.686379 | 1,094 | 8,979 | 5.276051 | 0.093236 | 0.129591 | 0.074844 | 0.079522 | 0.869369 | 0.856202 | 0.852044 | 0.823285 | 0.79955 | 0.780665 | 0 | 0.010028 | 0.211493 | 8,979 | 167 | 122 | 53.766467 | 0.805226 | 0.006905 | 0 | 0.656489 | 0 | 0 | 0.082903 | 0.006507 | 0 | 0 | 0 | 0 | 0.175573 | 1 | 0.053435 | false | 0 | 0.030534 | 0.022901 | 0.114504 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
53d947b5174be80d5937277207ad836242a223b3 | 88 | py | Python | examples/module_example/baz.py | d3rp/fissle | 770a140e42e6d8f7d55b3211a6ba691d2a915a2d | [
"Apache-2.0"
] | 1 | 2021-05-21T12:54:32.000Z | 2021-05-21T12:54:32.000Z | examples/module_example/baz.py | d3rp/fissle | 770a140e42e6d8f7d55b3211a6ba691d2a915a2d | [
"Apache-2.0"
] | 4 | 2020-03-24T17:37:35.000Z | 2020-12-03T13:22:35.000Z | examples/module_example/baz.py | d3rp/fissle | 770a140e42e6d8f7d55b3211a6ba691d2a915a2d | [
"Apache-2.0"
] | null | null | null | from module_example import c
def print_configuration():
print(c.a)
print(c.x)
| 12.571429 | 28 | 0.693182 | 14 | 88 | 4.214286 | 0.714286 | 0.20339 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.204545 | 88 | 6 | 29 | 14.666667 | 0.842857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.25 | 0 | 0.5 | 0.75 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
9904e66d2f8f2fab12639f53f5f560f695aa849e | 42 | py | Python | CAFFR/envs/__init__.py | Bobobert/RolloutFF | 23f71df2ee7e66ae0976196222aedb607b18e2a5 | [
"MIT"
] | null | null | null | CAFFR/envs/__init__.py | Bobobert/RolloutFF | 23f71df2ee7e66ae0976196222aedb607b18e2a5 | [
"MIT"
] | null | null | null | CAFFR/envs/__init__.py | Bobobert/RolloutFF | 23f71df2ee7e66ae0976196222aedb607b18e2a5 | [
"MIT"
] | null | null | null | from .helicopter import EnvMakerForestFire | 42 | 42 | 0.904762 | 4 | 42 | 9.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 42 | 1 | 42 | 42 | 0.974359 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
54e4c98bc3c687056d50a0d4ad5cc9589ca3d062 | 2,236 | py | Python | mayan/apps/documents/migrations/0076_applicant_metric_review_reviewer_reviewerassignment.py | CMU-313/fall-2021-hw2-connect5 | 2faece2eef28eabf122b99fd5699636a8c4ad20a | [
"Apache-2.0"
] | null | null | null | mayan/apps/documents/migrations/0076_applicant_metric_review_reviewer_reviewerassignment.py | CMU-313/fall-2021-hw2-connect5 | 2faece2eef28eabf122b99fd5699636a8c4ad20a | [
"Apache-2.0"
] | 29 | 2021-09-14T22:17:48.000Z | 2021-10-01T06:01:52.000Z | mayan/apps/documents/migrations/0076_applicant_metric_review_reviewer_reviewerassignment.py | CMU-313/fall-2021-hw2-connect5 | 2faece2eef28eabf122b99fd5699636a8c4ad20a | [
"Apache-2.0"
] | 1 | 2021-11-02T21:14:42.000Z | 2021-11-02T21:14:42.000Z | # Generated by Django 2.2.23 on 2021-09-27 19:58
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('documents', '0075_delete_duplicateddocumentold'),
]
operations = [
migrations.CreateModel(
name='Applicant',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=30)),
('document', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='documents.Document')),
],
),
migrations.CreateModel(
name='Metric',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('metric_name', models.CharField(max_length=30)),
],
),
migrations.CreateModel(
name='Reviewer',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=30)),
],
),
migrations.CreateModel(
name='ReviewerAssignment',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('applicant', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='documents.Applicant')),
('reviewer', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='documents.Reviewer')),
],
),
migrations.CreateModel(
name='Review',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('evaluation', models.TextField()),
('applicant', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='documents.Applicant')),
('reviewer', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='documents.Reviewer')),
],
),
]
| 41.407407 | 120 | 0.585868 | 215 | 2,236 | 5.972093 | 0.246512 | 0.043614 | 0.065421 | 0.102804 | 0.718847 | 0.718847 | 0.718847 | 0.718847 | 0.718847 | 0.656542 | 0 | 0.01602 | 0.27415 | 2,236 | 53 | 121 | 42.188679 | 0.775108 | 0.020572 | 0 | 0.659574 | 1 | 0 | 0.124314 | 0.015082 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.042553 | 0 | 0.106383 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
54fb1366a46e763cfb56b0576be7975b4da25083 | 428 | py | Python | orchestration/moosefs/commands.py | monkey-H/nap-core | 50d23b0431682f276990db04527deae3b6d84661 | [
"Apache-2.0"
] | null | null | null | orchestration/moosefs/commands.py | monkey-H/nap-core | 50d23b0431682f276990db04527deae3b6d84661 | [
"Apache-2.0"
] | null | null | null | orchestration/moosefs/commands.py | monkey-H/nap-core | 50d23b0431682f276990db04527deae3b6d84661 | [
"Apache-2.0"
] | null | null | null | from orchestration import config
from orchestration.database import database_update
class Moosefs(object):
"""
moosefs for a project
"""
def __init__(self, username, password):
self.volume = self.get_volume(username, password)
def set_volume(self, username, password):
database_update.set_volume(username, password)
def get_volume(self, username, password):
return database_update.get_volume(username, password)
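# Example usage (an illustrative sketch; the credentials below are assumptions,
# not real values):
#   fs = Moosefs("alice", "s3cret")
#   print(fs.volume)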
| 25.176471 | 55 | 0.785047 | 54 | 428 | 6 | 0.388889 | 0.296296 | 0.185185 | 0.154321 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.123832 | 428 | 16 | 56 | 26.75 | 0.864 | 0.049065 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0.666667 | 0.222222 | 0.111111 | 0.777778 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 6 |
075b6cd689c45b91b57805d9ebf1cb38635fbd5a | 22,545 | py | Python | tests/python/unittest/test_tir_schedule_state_cached_flags.py | mozga-intel/tvm | 544724439efb9a795c92bd7ec9f7929e41c843c6 | [
"Zlib",
"Unlicense",
"Apache-2.0",
"BSD-2-Clause",
"MIT",
"ECL-2.0"
] | 3 | 2021-05-08T17:04:39.000Z | 2021-07-11T17:40:54.000Z | tests/python/unittest/test_tir_schedule_state_cached_flags.py | mozga-intel/tvm | 544724439efb9a795c92bd7ec9f7929e41c843c6 | [
"Zlib",
"Unlicense",
"Apache-2.0",
"BSD-2-Clause",
"MIT",
"ECL-2.0"
] | null | null | null | tests/python/unittest/test_tir_schedule_state_cached_flags.py | mozga-intel/tvm | 544724439efb9a795c92bd7ec9f7929e41c843c6 | [
"Zlib",
"Unlicense",
"Apache-2.0",
"BSD-2-Clause",
"MIT",
"ECL-2.0"
] | 5 | 2020-11-13T19:26:25.000Z | 2022-01-25T07:55:16.000Z | # Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# pylint: disable=missing-function-docstring,missing-module-docstring
import sys
import pytest
import tvm
from tvm import tir
from tvm.script import ty
from tvm.tir.schedule.state import CachedFlags
from tvm.tir.stmt_functor import post_order_visit
# pylint: disable=no-member,invalid-name,unused-variable,unexpected-keyword-arg
@tvm.script.tir
def elementwise(a: ty.handle, c: ty.handle) -> None:
A = tir.match_buffer(a, (128, 128), "float32")
C = tir.match_buffer(c, (128, 128), "float32")
B = tir.alloc_buffer((128, 128), "float32")
with tir.block([128, 128], "B") as [vi, vj]:
B[vi, vj] = A[vi, vj] * 2.0
with tir.block([128, 128], "C") as [vi, vj]:
C[vi, vj] = B[vi, vj] + 1.0
@tvm.script.tir
def matmul(a: ty.handle, b: ty.handle, c: ty.handle) -> None:
A = tir.match_buffer(a, [128, 128])
B = tir.match_buffer(b, [128, 128])
C = tir.match_buffer(c, [128, 128])
for i, j in tir.grid(128, 128):
with tir.block([128, 128], "init") as [vi, vj]:
C[vi, vj] = 0.0
for k in range(0, 128):
with tir.block([128, 128, tir.reduce_axis(0, 128)], "update") as [vi, vj, vk]:
C[vi, vj] = C[vi, vj] + A[vi, vk] * B[vj, vk]
@tvm.script.tir
def block_in_opaque_block(a: ty.handle, b: ty.handle) -> None:
A = tir.match_buffer(a, (128, 128), "float32")
B = tir.match_buffer(b, (128, 128), "float32")
with tir.block([128], "B") as vi:
tir.reads([A[0:128, 0:128]])
tir.writes([B[0:128, 0:128]])
B[vi, 0] = A[vi, 0]
if A[vi, 0] == 0.0:
with tir.block([], "C"):
tir.reads([A[0:128, 0:128]])
tir.writes([B[0:128, 0:128]])
with tir.block([128], "D") as vj:
B[vi, vj] = A[vi, vj] * 3.0
else:
with tir.block([], "E"):
tir.reads([A[0:128, 0:128]])
tir.writes([B[0:128, 0:128]])
with tir.block([128], "F") as vj:
B[vi, vj] = A[vi, vj] * 2.0
@tvm.script.tir
def write_after_read(a: ty.handle, b: ty.handle, c: ty.handle) -> None:
A = tir.match_buffer(a, (128, 128))
B = tir.match_buffer(b, (128, 128))
C = tir.match_buffer(c, (128, 128))
with tir.block([128, 128], "C") as [vi, vj]:
C[vi, vj] = B[vi, vj] + 1.0
with tir.block([128, 128], "B") as [vi, vj]:
B[vi, vj] = A[vi, vj] * 2.0
@tvm.script.tir
def loop_carried_dependency(a: ty.handle, b: ty.handle, c: ty.handle) -> None:
A = tir.match_buffer(a, (128,))
B = tir.match_buffer(b, (128,))
C = tir.match_buffer(c, (128,))
for i in range(0, 128):
with tir.block([128], "B") as vi:
B[vi] = A[vi] * 2.0
with tir.block([128], "C") as vi:
C[vi] = tir.if_then_else(vi >= 1, B[vi - 1] + 1.0, 0.0, dtype="float32")
@tvm.script.tir
def concatenate_multi_producer(a: ty.handle, b: ty.handle) -> None:
A = tir.match_buffer(a, (128,))
B = tir.match_buffer(b, (128,))
for i in range(0, 64):
with tir.block([64], "A_0") as vi:
A[vi] = vi + 1
for i in range(0, 64):
with tir.block([64], "A_1") as vi:
tir.bind(vi, i + 64)
A[vi] = vi + 2
with tir.block([128], "B") as vi:
B[vi] = A[vi] * 2.0
@tvm.script.tir
def concatenate_multi_producer_uncovered(a: ty.handle, b: ty.handle) -> None:
A = tir.match_buffer(a, (128,))
B = tir.match_buffer(b, (128,))
for i in range(0, 63):
with tir.block([63], "A_0") as vi:
A[vi] = vi + 1
for i in range(0, 64):
with tir.block([64], "A_1") as vi:
tir.bind(vi, i + 64)
A[vi] = vi + 2
with tir.block([128], "B") as vi:
B[vi] = A[vi] * 2.0
@tvm.script.tir
def lca_at_loop(a: ty.handle, b: ty.handle, c: ty.handle) -> None:
A = tir.match_buffer(a, (128,))
B = tir.match_buffer(b, (128,))
C = tir.match_buffer(c, (128,))
for i in range(0, 128):
with tir.block([128], "B") as vi:
B[vi] = A[vi] * 2.0
with tir.block([128], "C") as vi:
C[vi] = B[vi] + 1.0
@tvm.script.tir
def multi_producer_consumer(a: ty.handle, b: ty.handle) -> None:
A = tir.match_buffer(a, (128,))
B = tir.match_buffer(b, (128,))
for i in range(0, 64):
with tir.block([64], "A_0") as vi:
A[vi] = vi + 1
for i in range(0, 64):
with tir.block([64], "A_1") as vi:
tir.bind(vi, i + 64)
A[vi] = vi + 2
for i in range(0, 64):
with tir.block([64], "B_0") as vi:
B[vi] = A[vi] + 2.0
for i in range(0, 64):
with tir.block([64], "B_1") as vi:
tir.bind(vi, i + 64)
B[vi] = A[vi] + 3.0
@tvm.script.tir
def elementwise_affine_producer(a: ty.handle, c: ty.handle) -> None:
A = tir.match_buffer(a, (128, 128), "float32")
C = tir.match_buffer(c, (128, 128), "float32")
B = tir.alloc_buffer((128, 128), "float32")
for i, j, k, l in tir.grid(16, 2, 32, 16):
with tir.block([128, 128], "B") as [vi, vj]:
tir.bind(vi, i * 8 + j * 4 + k // 8)
tir.bind(vj, k % 8 * 16 + l)
B[vi, vj] = A[vi, vj] * 2.0
with tir.block([128, 128], "C") as [vi, vj]:
C[vi, vj] = B[vi, vj] + 1.0
@tvm.script.tir
def elementwise_subblock(a: ty.handle, c: ty.handle) -> None:
A = tir.match_buffer(a, (128, 128), "float32")
C = tir.match_buffer(c, (128, 128), "float32")
B = tir.alloc_buffer((128, 128), "float32")
with tir.block([32, 32], "B") as [vi, vj]:
tir.reads([A[vi * 4 : vi * 4 + 4, vj * 4 : vj * 4 + 4]])
tir.writes([B[vi * 4 : vi * 4 + 4, vj * 4 : vj * 4 + 4]])
with tir.block([4, 4], "B_sub") as [vi_i, vj_i]:
B[vi * 4 + vi_i, vj * 4 + vj_i] = A[vi * 4 + vi_i, vj * 4 + vj_i] * 2.0
with tir.block([128, 128], "C") as [vi, vj]:
C[vi, vj] = B[vi, vj] + 1.0
@tvm.script.tir
def elementwise_subblock_uncovered(a: ty.handle, c: ty.handle) -> None:
A = tir.match_buffer(a, (128, 128), "float32")
C = tir.match_buffer(c, (128, 128), "float32")
B = tir.alloc_buffer((128, 128), "float32")
with tir.block([32, 32], "B") as [vi, vj]:
tir.reads([A[vi * 4 : vi * 4 + 2, vj * 4 : vj * 4 + 2]])
tir.writes([B[vi * 4 : vi * 4 + 2, vj * 4 : vj * 4 + 2]])
with tir.block([2, 2], "B_sub") as [vi_i, vj_i]:
B[vi * 4 + vi_i, vj * 4 + vj_i] = A[vi * 4 + vi_i, vj * 4 + vj_i] * 2.0
with tir.block([128, 128], "C") as [vi, vj]:
C[vi, vj] = B[vi, vj] + 1.0
@tvm.script.tir
def bound_to_thread(a: ty.handle, c: ty.handle) -> None:
A = tir.match_buffer(a, [128, 128])
C = tir.match_buffer(c, [128, 128])
B = tir.alloc_buffer([128, 128], scope="shared")
for i in tir.thread_binding(0, 128, thread="threadIdx.x"):
for j in tir.serial(0, 128):
with tir.block([128, 128], "B") as [vi, vj]:
B[vi, vj] = A[vi, vj] * 2.0
for j in tir.serial(0, 128):
with tir.block([128, 128], "C") as [vi, vj]:
C[vj, vi] = B[vj, vi] + 1.0
@tvm.script.tir
def equal_ranked_threads(a: ty.handle, c: ty.handle) -> None:
A = tir.match_buffer(a, [128, 128])
C = tir.match_buffer(c, [128, 128])
B = tir.alloc_buffer([128, 128], scope="shared")
for i_o in tir.thread_binding(0, 16, thread="threadIdx.x"):
for i_i in tir.thread_binding(0, 8, thread="threadIdx.y"):
for j in tir.serial(0, 128):
with tir.block([128, 128], "B") as [vi, vj]:
tir.bind(vi, i_o * 8 + i_i)
tir.bind(vj, j)
B[vi, vj] = A[vi, vj] * 2.0
for j in tir.serial(0, 128):
with tir.block([128, 128], "C") as [vi, vj]:
tir.bind(vi, i_o * 8 + i_i)
tir.bind(vj, j)
C[vj, vi] = B[vj, vi] + 1.0
@tvm.script.tir
def warp_memory(a: ty.handle, c: ty.handle) -> None:
A = tir.match_buffer(a, [128, 128])
C = tir.match_buffer(c, [128, 128])
B = tir.alloc_buffer([128, 4, 32], scope="warp")
for i_o in tir.thread_binding(0, 4, thread="threadIdx.y"):
for i_i in tir.thread_binding(0, 32, thread="threadIdx.x"):
for j in tir.serial(0, 128):
with tir.block([4, 32, 128], "B") as [warp_id, lane_id, vj]:
B[vj, warp_id, lane_id] = A[warp_id * 32 + lane_id, vj] * 2.0
for j in tir.serial(0, 128):
with tir.block([4, 32, 128], "C") as [warp_id, lane_id, vj]:
C[warp_id * 32 + lane_id, vj] = B[vj, warp_id, lane_id] + 1.0
@tvm.script.tir
def warp_memory_negative(a: ty.handle, c: ty.handle) -> None:
A = tir.match_buffer(a, [128, 128])
C = tir.match_buffer(c, [128, 128])
B = tir.alloc_buffer([128, 4, 32], scope="warp")
for i_o in tir.thread_binding(0, 4, thread="threadIdx.y"):
for i_i in tir.thread_binding(0, 32, thread="threadIdx.x"):
for j in tir.serial(0, 128):
with tir.block([4, 32, 128], "B") as [warp_id, lane_id, vj]:
B[vj, warp_id, lane_id] = A[warp_id * 32 + lane_id, vj] * 2.0
for i_o_prime in tir.thread_binding(0, 4, thread="threadIdx.y"):
for j in tir.serial(0, 128):
with tir.block([4, 32, 4, 128], "C") as [_warp_id, lane_id, warp_id, vj]:
C[warp_id * 32 + lane_id, vj] = B[vj, warp_id, lane_id] + 1.0
# pylint: enable=no-member,invalid-name,unused-variable,unexpected-keyword-arg
def _get_block(s: tir.ScheduleState, name_hint: str) -> tir.StmtSRef:
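    """Find the block named `name_hint` in the module's "main" function and return its StmtSRef."""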
result = None
def f_visit(node):
nonlocal result
if isinstance(node, tvm.tir.Block) and node.name_hint == name_hint:
result = node
func = s.mod["main"]
post_order_visit(func.body, f_visit)
assert result is not None and isinstance(result, tvm.tir.Block)
return s.get_sref(result)
def test_elementwise():
s = tir.ScheduleState(elementwise, debug_mask="all")
# pylint: disable=protected-access
assert s._get_cached_flags(_get_block(s, "B")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
assert s._get_cached_flags(_get_block(s, "C")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
assert s._get_cached_flags(_get_block(s, "root")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
# pylint: enable=protected-access
def test_matmul():
s = tir.ScheduleState(matmul, debug_mask="all")
# pylint: disable=protected-access
assert s._get_cached_flags(_get_block(s, "init")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
assert s._get_cached_flags(_get_block(s, "update")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
assert s._get_cached_flags(_get_block(s, "root")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
# pylint: enable=protected-access
def test_block_in_opaque_block():
s = tir.ScheduleState(block_in_opaque_block, debug_mask="all")
# pylint: disable=protected-access
assert s._get_cached_flags(_get_block(s, "B")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
assert s._get_cached_flags(_get_block(s, "C")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
assert s._get_cached_flags(_get_block(s, "E")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
assert s._get_cached_flags(_get_block(s, "F")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
assert s._get_cached_flags(_get_block(s, "root")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
# pylint: enable=protected-access
def test_write_after_read():
s = tir.ScheduleState(write_after_read, debug_mask="all")
# pylint: disable=protected-access
assert s._get_cached_flags(_get_block(s, "B")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
assert s._get_cached_flags(_get_block(s, "C")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
assert s._get_cached_flags(_get_block(s, "root")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=False,
)
# pylint: enable=protected-access
def test_loop_carried_dependency():
s = tir.ScheduleState(loop_carried_dependency, debug_mask="all")
# pylint: disable=protected-access
assert s._get_cached_flags(_get_block(s, "B")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
assert s._get_cached_flags(_get_block(s, "C")) == CachedFlags(
affine_binding=True,
region_cover=False,
stage_pipeline=True,
)
assert s._get_cached_flags(_get_block(s, "root")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=False,
)
# pylint: enable=protected-access
def test_concatenate_multi_producer_covered(): # pylint: disable=invalid-name
s = tir.ScheduleState(concatenate_multi_producer, debug_mask="all")
# pylint: disable=protected-access
assert s._get_cached_flags(_get_block(s, "A_0")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
assert s._get_cached_flags(_get_block(s, "A_1")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
assert s._get_cached_flags(_get_block(s, "B")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
assert s._get_cached_flags(_get_block(s, "root")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
# pylint: enable=protected-access
def test_concatenate_multi_producer_uncovered(): # pylint: disable=invalid-name
s = tir.ScheduleState(concatenate_multi_producer_uncovered, debug_mask="all")
# pylint: disable=protected-access
assert s._get_cached_flags(_get_block(s, "A_0")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
assert s._get_cached_flags(_get_block(s, "A_1")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
assert s._get_cached_flags(_get_block(s, "B")) == CachedFlags(
affine_binding=True,
region_cover=False,
stage_pipeline=True,
)
assert s._get_cached_flags(_get_block(s, "root")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=False,
)
# pylint: enable=protected-access
def test_lca_at_loop():
s = tir.ScheduleState(lca_at_loop, debug_mask="all")
# pylint: disable=protected-access
assert s._get_cached_flags(_get_block(s, "B")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
assert s._get_cached_flags(_get_block(s, "C")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
assert s._get_cached_flags(_get_block(s, "root")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
# pylint: enable=protected-access
def test_multi_producer_consumer():
s = tir.ScheduleState(multi_producer_consumer, debug_mask="all")
# pylint: disable=protected-access
assert s._get_cached_flags(_get_block(s, "A_0")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
assert s._get_cached_flags(_get_block(s, "A_1")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
assert s._get_cached_flags(_get_block(s, "B_0")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
assert s._get_cached_flags(_get_block(s, "B_1")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
# pylint: enable=protected-access
def test_elementwise_affine_producer():
s = tir.ScheduleState(elementwise_affine_producer, debug_mask="all")
# pylint: disable=protected-access
assert s._get_cached_flags(_get_block(s, "root")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
assert s._get_cached_flags(_get_block(s, "B")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
assert s._get_cached_flags(_get_block(s, "C")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
# pylint: enable=protected-access
def test_subblock():
s = tir.ScheduleState(elementwise_subblock, debug_mask="all")
# pylint: disable=protected-access
assert s._get_cached_flags(_get_block(s, "root")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
assert s._get_cached_flags(_get_block(s, "B")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
assert s._get_cached_flags(_get_block(s, "B_sub")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
assert s._get_cached_flags(_get_block(s, "C")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
# pylint: enable=protected-access
def test_subblock_uncovered():
s = tir.ScheduleState(elementwise_subblock_uncovered, debug_mask="all")
# pylint: disable=protected-access
assert s._get_cached_flags(_get_block(s, "root")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=False,
)
assert s._get_cached_flags(_get_block(s, "B")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
assert s._get_cached_flags(_get_block(s, "B_sub")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
assert s._get_cached_flags(_get_block(s, "C")) == CachedFlags(
affine_binding=True,
region_cover=False,
stage_pipeline=True,
)
# pylint: enable=protected-access
def test_thread_binding():
s = tir.ScheduleState(bound_to_thread, debug_mask="all")
# pylint: disable=protected-access
assert s._get_cached_flags(_get_block(s, "root")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
assert s._get_cached_flags(_get_block(s, "B")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
assert s._get_cached_flags(_get_block(s, "C")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
# pylint: enable=protected-access
def test_equal_ranked_threads():
s = tir.ScheduleState(equal_ranked_threads, debug_mask="all")
# pylint: disable=protected-access
assert s._get_cached_flags(_get_block(s, "root")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
assert s._get_cached_flags(_get_block(s, "B")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
assert s._get_cached_flags(_get_block(s, "C")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
# pylint: enable=protected-access
def test_warp_memory():
s = tir.ScheduleState(warp_memory, debug_mask="all")
# pylint: disable=protected-access
assert s._get_cached_flags(_get_block(s, "root")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
assert s._get_cached_flags(_get_block(s, "B")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
assert s._get_cached_flags(_get_block(s, "C")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
# pylint: enable=protected-access
def test_warp_memory_negative():
s = tir.ScheduleState(warp_memory_negative, debug_mask="all")
# pylint: disable=protected-access
assert s._get_cached_flags(_get_block(s, "root")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=False,
)
assert s._get_cached_flags(_get_block(s, "B")) == CachedFlags(
affine_binding=True,
region_cover=True,
stage_pipeline=True,
)
assert s._get_cached_flags(_get_block(s, "C")) == CachedFlags(
affine_binding=True,
region_cover=False,
stage_pipeline=True,
)
# pylint: enable=protected-access
if __name__ == "__main__":
sys.exit(pytest.main([__file__] + sys.argv[1:]))
| 34.315068 | 93 | 0.603637 | 3,282 | 22,545 | 3.932054 | 0.066423 | 0.026501 | 0.039055 | 0.068191 | 0.835258 | 0.822782 | 0.818365 | 0.80217 | 0.790546 | 0.778071 | 0 | 0.043099 | 0.252828 | 22,545 | 656 | 94 | 34.367378 | 0.723004 | 0.091949 | 0 | 0.679849 | 0 | 0 | 0.023411 | 0 | 0 | 0 | 0 | 0 | 0.105461 | 1 | 0.06403 | false | 0 | 0.013183 | 0 | 0.079096 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4afddd6391eb87377011fe76283fba93d48c1486 | 141 | py | Python | iot/mqtt/service.py | RockyLiys/access_ssh | 0a167a34951bc2812fefc16674a00c7cb9bb7a9a | [
"MIT"
] | null | null | null | iot/mqtt/service.py | RockyLiys/access_ssh | 0a167a34951bc2812fefc16674a00c7cb9bb7a9a | [
"MIT"
] | null | null | null | iot/mqtt/service.py | RockyLiys/access_ssh | 0a167a34951bc2812fefc16674a00c7cb9bb7a9a | [
"MIT"
] | null | null | null | #! coding:utf-8
import os
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
IN_DATA = os.path.join(BASE_DIR, 'data')
| 17.625 | 70 | 0.723404 | 25 | 141 | 3.8 | 0.56 | 0.252632 | 0.273684 | 0.315789 | 0.336842 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007874 | 0.099291 | 141 | 7 | 71 | 20.142857 | 0.740157 | 0.099291 | 0 | 0 | 0 | 0 | 0.031746 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
ab027a26dffdf6a5fc928b3f0801da8fc939b376 | 152 | py | Python | tests/basics/bytes_add.py | sebastien-riou/micropython | 116c15842fd48ddb77b0bc016341d936a0756573 | [
"MIT"
] | 13,648 | 2015-01-01T01:34:51.000Z | 2022-03-31T16:19:53.000Z | tests/basics/bytes_add.py | sebastien-riou/micropython | 116c15842fd48ddb77b0bc016341d936a0756573 | [
"MIT"
] | 7,092 | 2015-01-01T07:59:11.000Z | 2022-03-31T23:52:18.000Z | tests/basics/bytes_add.py | sebastien-riou/micropython | 116c15842fd48ddb77b0bc016341d936a0756573 | [
"MIT"
] | 4,942 | 2015-01-02T11:48:50.000Z | 2022-03-31T19:57:10.000Z | # test bytes + other
print(b"123" + b"456")
print(b"123" + b"") # RHS is empty, can be optimised
print(b"" + b"123") # LHS is empty, can be optimised
| 21.714286 | 52 | 0.625 | 28 | 152 | 3.392857 | 0.5 | 0.189474 | 0.189474 | 0.210526 | 0.442105 | 0 | 0 | 0 | 0 | 0 | 0 | 0.098361 | 0.197368 | 152 | 6 | 53 | 25.333333 | 0.680328 | 0.526316 | 0 | 0 | 0 | 0 | 0.176471 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
ab1c421b2103e3b799cc4146ae8f50c2771e7191 | 49 | py | Python | cdh/catkin_ws/devel/lib/python2.7/dist-packages/hd_map/msg/__init__.py | DeeCamp-Demo/HDMapProject | 68e549661f6e583d09448bd0a0b122a6dc2e9fc9 | [
"MIT"
] | 5 | 2021-01-19T13:32:06.000Z | 2022-03-03T13:09:51.000Z | cdh/catkin_ws/devel/lib/python2.7/dist-packages/hd_map/msg/__init__.py | DeeCamp-Demo/HDMapProject | 68e549661f6e583d09448bd0a0b122a6dc2e9fc9 | [
"MIT"
] | null | null | null | cdh/catkin_ws/devel/lib/python2.7/dist-packages/hd_map/msg/__init__.py | DeeCamp-Demo/HDMapProject | 68e549661f6e583d09448bd0a0b122a6dc2e9fc9 | [
"MIT"
] | null | null | null | from ._element import *
from ._elements import *
| 16.333333 | 24 | 0.755102 | 6 | 49 | 5.833333 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.163265 | 49 | 2 | 25 | 24.5 | 0.853659 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ab1ea32df2ac2de6971cc6bfbdb70a72b440ae48 | 11,340 | py | Python | tests/test_signing.py | Dynatrace-James-Kitson/dt-cli | b0532d4b91e4b86978b6baafffd07d73d9dc43e0 | [
"Apache-2.0"
] | null | null | null | tests/test_signing.py | Dynatrace-James-Kitson/dt-cli | b0532d4b91e4b86978b6baafffd07d73d9dc43e0 | [
"Apache-2.0"
] | null | null | null | tests/test_signing.py | Dynatrace-James-Kitson/dt-cli | b0532d4b91e4b86978b6baafffd07d73d9dc43e0 | [
"Apache-2.0"
] | null | null | null | # Copyright 2021 Dynatrace LLC
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import datetime
import os
import pytest
from cryptography import x509 as crypto_x509
from cryptography.hazmat.primitives import serialization
from cryptography.x509.oid import NameOID
from dtcli import signing
from dtcli import utils
def test_generate_ca():
_test_generate_ca()
def test_generate_ca_with_rsa():
_test_generate_ca(True)
def test_generate_ca_empty_attributes():
_test_generate_ca_empty_attributes()
def test_generate_ca_empty_attributes_with_rsa():
_test_generate_ca_empty_attributes(True)
def test_generate_cert():
_test_generate_cert()
def test_generate_cert_with_rsa():
_test_generate_cert(True)
def test_generate_cert_issuer_eq_subject():
_test_generate_cert_issuer_eq_subject()
def test_generate_cert_issuer_eq_subject_with_rsa():
_test_generate_cert_issuer_eq_subject(True)
def _test_generate_ca(is_rsa=False):
cert_path = "test_ca_certificate.crt"
key_path = "test_ca_key.key"
not_valid_after = datetime.datetime.today().replace(microsecond=0) + datetime.timedelta(days=123)
passphrase = "secretpassphrase"
signing.generate_ca(
cert_path,
key_path,
{
"CN": "Some Common Name",
"O": "Some Org Name",
"OU": "Some OU",
"L": "Some Locality",
"S": "Some State",
"C": "PL"
},
not_valid_after,
passphrase,
is_rsa
)
assert os.path.exists(cert_path)
assert os.path.exists(key_path)
with open(cert_path, "rb") as fp:
ca_cert = crypto_x509.load_pem_x509_certificate(fp.read())
assert ca_cert.issuer.get_attributes_for_oid(NameOID.COMMON_NAME)[0].value == "Some Common Name"
assert ca_cert.issuer.get_attributes_for_oid(NameOID.ORGANIZATION_NAME)[0].value == "Some Org Name"
assert ca_cert.issuer.get_attributes_for_oid(NameOID.ORGANIZATIONAL_UNIT_NAME)[0].value == "Some OU"
assert ca_cert.issuer.get_attributes_for_oid(NameOID.LOCALITY_NAME)[0].value == "Some Locality"
assert ca_cert.issuer.get_attributes_for_oid(NameOID.STATE_OR_PROVINCE_NAME)[0].value == "Some State"
assert ca_cert.issuer.get_attributes_for_oid(NameOID.COUNTRY_NAME)[0].value == "PL"
assert ca_cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME)[0].value == "Some Common Name"
assert ca_cert.subject.get_attributes_for_oid(NameOID.ORGANIZATION_NAME)[0].value == "Some Org Name"
assert ca_cert.subject.get_attributes_for_oid(NameOID.ORGANIZATIONAL_UNIT_NAME)[0].value == "Some OU"
assert ca_cert.subject.get_attributes_for_oid(NameOID.LOCALITY_NAME)[0].value == "Some Locality"
assert ca_cert.subject.get_attributes_for_oid(NameOID.STATE_OR_PROVINCE_NAME)[0].value == "Some State"
assert ca_cert.subject.get_attributes_for_oid(NameOID.COUNTRY_NAME)[0].value == "PL"
assert ca_cert.not_valid_after == not_valid_after
with open(key_path, "rb") as fp:
ca_private_key = serialization.load_pem_private_key(fp.read(), password=passphrase.encode())
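    # The public key embedded in the certificate must match the public half of
    # the generated private key, whatever the serialization format.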
if is_rsa:
assert (
ca_cert.public_key().public_bytes(serialization.Encoding.PEM, serialization.PublicFormat.PKCS1) ==
ca_private_key.public_key().public_bytes(serialization.Encoding.PEM, serialization.PublicFormat.PKCS1)
)
else:
assert (
ca_cert.public_key().public_bytes(serialization.Encoding.PEM, serialization.PublicFormat.SubjectPublicKeyInfo) ==
ca_private_key.public_key().public_bytes(serialization.Encoding.PEM, serialization.PublicFormat.SubjectPublicKeyInfo)
)
os.remove(cert_path)
os.remove(key_path)
def _test_generate_ca_empty_attributes(is_rsa=False):
cert_path = "test_ca_certificate.crt"
key_path = "test_ca_key.key"
signing.generate_ca(
cert_path,
key_path,
{},
datetime.datetime.today() + datetime.timedelta(days=1),
        is_rsa=is_rsa
)
assert os.path.exists(cert_path)
assert os.path.exists(key_path)
with open(cert_path, "rb") as fp:
ca_cert = crypto_x509.load_pem_x509_certificate(fp.read())
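    # With no name attributes supplied, issuer and subject must both be empty.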
assert not ca_cert.issuer.get_attributes_for_oid(NameOID.COMMON_NAME)
assert not ca_cert.issuer.get_attributes_for_oid(NameOID.ORGANIZATION_NAME)
assert not ca_cert.issuer.get_attributes_for_oid(NameOID.ORGANIZATIONAL_UNIT_NAME)
assert not ca_cert.issuer.get_attributes_for_oid(NameOID.LOCALITY_NAME)
assert not ca_cert.issuer.get_attributes_for_oid(NameOID.STATE_OR_PROVINCE_NAME)
assert not ca_cert.issuer.get_attributes_for_oid(NameOID.COUNTRY_NAME)
assert not ca_cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME)
assert not ca_cert.subject.get_attributes_for_oid(NameOID.ORGANIZATION_NAME)
assert not ca_cert.subject.get_attributes_for_oid(NameOID.ORGANIZATIONAL_UNIT_NAME)
assert not ca_cert.subject.get_attributes_for_oid(NameOID.LOCALITY_NAME)
assert not ca_cert.subject.get_attributes_for_oid(NameOID.STATE_OR_PROVINCE_NAME)
assert not ca_cert.subject.get_attributes_for_oid(NameOID.COUNTRY_NAME)
with open(key_path, "rb") as fp:
ca_private_key = serialization.load_pem_private_key(fp.read(), password=None)
if is_rsa:
assert (
ca_cert.public_key().public_bytes(serialization.Encoding.PEM, serialization.PublicFormat.PKCS1) ==
ca_private_key.public_key().public_bytes(serialization.Encoding.PEM, serialization.PublicFormat.PKCS1)
)
else:
assert (
ca_cert.public_key().public_bytes(serialization.Encoding.PEM, serialization.PublicFormat.SubjectPublicKeyInfo) ==
ca_private_key.public_key().public_bytes(serialization.Encoding.PEM, serialization.PublicFormat.SubjectPublicKeyInfo)
)
os.remove(cert_path)
os.remove(key_path)
def _test_generate_cert(is_rsa=False):
ca_cert_path = "test_ca_certificate.crt"
ca_key_path = "test_ca_key.key"
ca_passphrase = "secretcapassphrase"
signing.generate_ca(
ca_cert_path,
ca_key_path,
{
"CN": "Some Common Name",
"O": "Some Org Name",
"OU": "Some OU",
"L": "Some Locality",
"S": "Some State",
"C": "PL"
},
datetime.datetime.today() + datetime.timedelta(days=1),
ca_passphrase,
is_rsa
)
assert os.path.exists(ca_cert_path)
assert os.path.exists(ca_key_path)
dev_cert_path = "test_dev_certificate.crt"
dev_key_path = "test_dev_key.key"
not_valid_after = datetime.datetime.today().replace(microsecond=0) + datetime.timedelta(days=123)
dev_passphrase = "secretdevpassphrase"
signing.generate_cert(
ca_cert_path,
ca_key_path,
dev_cert_path,
dev_key_path,
{
"CN": "Some Other Common Name",
"O": "Some Other Org Name",
"OU": "Some Other OU",
"L": "Some Locality",
"S": "Some State",
"C": "PL"
},
not_valid_after,
ca_passphrase,
dev_passphrase,
is_rsa
)
assert os.path.exists(dev_cert_path)
assert os.path.exists(dev_key_path)
with open(dev_cert_path, "rb") as fp:
dev_cert = crypto_x509.load_pem_x509_certificate(fp.read())
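    # The issuer reflects the CA's name; the subject is the developer certificate's own.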
assert dev_cert.issuer.get_attributes_for_oid(NameOID.COMMON_NAME)[0].value == "Some Common Name"
assert dev_cert.issuer.get_attributes_for_oid(NameOID.ORGANIZATION_NAME)[0].value == "Some Org Name"
assert dev_cert.issuer.get_attributes_for_oid(NameOID.ORGANIZATIONAL_UNIT_NAME)[0].value == "Some OU"
assert dev_cert.issuer.get_attributes_for_oid(NameOID.LOCALITY_NAME)[0].value == "Some Locality"
assert dev_cert.issuer.get_attributes_for_oid(NameOID.STATE_OR_PROVINCE_NAME)[0].value == "Some State"
assert dev_cert.issuer.get_attributes_for_oid(NameOID.COUNTRY_NAME)[0].value == "PL"
assert dev_cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME)[0].value == "Some Other Common Name"
assert dev_cert.subject.get_attributes_for_oid(NameOID.ORGANIZATION_NAME)[0].value == "Some Other Org Name"
assert dev_cert.subject.get_attributes_for_oid(NameOID.ORGANIZATIONAL_UNIT_NAME)[0].value == "Some Other OU"
assert dev_cert.subject.get_attributes_for_oid(NameOID.LOCALITY_NAME)[0].value == "Some Locality"
assert dev_cert.subject.get_attributes_for_oid(NameOID.STATE_OR_PROVINCE_NAME)[0].value == "Some State"
assert dev_cert.subject.get_attributes_for_oid(NameOID.COUNTRY_NAME)[0].value == "PL"
assert dev_cert.not_valid_after == not_valid_after
with open(dev_key_path, "rb") as fp:
dev_private_key = serialization.load_pem_private_key(fp.read(), password=dev_passphrase.encode())
if is_rsa:
assert (
dev_cert.public_key().public_bytes(serialization.Encoding.PEM, serialization.PublicFormat.PKCS1) ==
dev_private_key.public_key().public_bytes(serialization.Encoding.PEM, serialization.PublicFormat.PKCS1)
)
else:
assert (
dev_cert.public_key().public_bytes(serialization.Encoding.PEM, serialization.PublicFormat.SubjectPublicKeyInfo) ==
dev_private_key.public_key().public_bytes(serialization.Encoding.PEM, serialization.PublicFormat.SubjectPublicKeyInfo)
)
os.remove(ca_cert_path)
os.remove(ca_key_path)
os.remove(dev_cert_path)
os.remove(dev_key_path)
def _test_generate_cert_issuer_eq_subject(is_rsa=False):
ca_cert_path = "test_ca_certificate.crt"
ca_key_path = "test_ca_key.key"
signing.generate_ca(
ca_cert_path,
ca_key_path,
{
"CN": "Some Common Name",
"O": "Some Org Name",
"OU": "Some OU",
"L": "Some Locality",
"S": "Some State",
"C": "PL"
},
datetime.datetime.today() + datetime.timedelta(days=1),
        is_rsa=is_rsa
)
assert os.path.exists(ca_cert_path)
assert os.path.exists(ca_key_path)
dev_cert_path = "test_dev_certificate.crt"
dev_key_path = "test_dev_key.key"
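    # Issuing a certificate whose subject equals the CA's own subject must fail.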
with pytest.raises(utils.KeyGenerationError):
signing.generate_cert(
ca_cert_path,
ca_key_path,
dev_cert_path,
dev_key_path,
{
"CN": "Some Common Name",
"O": "Some Org Name",
"OU": "Some OU",
"L": "Some Locality",
"S": "Some State",
"C": "PL"
},
datetime.datetime.today() + datetime.timedelta(days=1),
            is_rsa=is_rsa
)
assert not os.path.exists(dev_cert_path)
assert not os.path.exists(dev_key_path)
os.remove(ca_cert_path)
os.remove(ca_key_path) | 39.789474 | 130 | 0.702293 | 1,538 | 11,340 | 4.841352 | 0.09948 | 0.034649 | 0.077357 | 0.091861 | 0.8704 | 0.840183 | 0.807548 | 0.775181 | 0.775181 | 0.762826 | 0 | 0.008454 | 0.196825 | 11,340 | 285 | 131 | 39.789474 | 0.809069 | 0.048501 | 0 | 0.51073 | 0 | 0 | 0.085529 | 0.012987 | 0 | 0 | 0 | 0 | 0.240343 | 1 | 0.051502 | false | 0.042918 | 0.034335 | 0 | 0.085837 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
916b66f38198cf9aa2c1d03f69964aa64aa4fda6 | 49 | py | Python | web/__init__.py | ExtensiveAutomation/automateactions-plugin-web | c95e488badb2daa3c2678c5b0debee3ea748cc6c | [
"MIT"
] | null | null | null | web/__init__.py | ExtensiveAutomation/automateactions-plugin-web | c95e488badb2daa3c2678c5b0debee3ea748cc6c | [
"MIT"
] | null | null | null | web/__init__.py | ExtensiveAutomation/automateactions-plugin-web | c95e488badb2daa3c2678c5b0debee3ea748cc6c | [
"MIT"
] | null | null | null | from ea.automateactions.plugins.web.curl import * | 49 | 49 | 0.836735 | 7 | 49 | 5.857143 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.061224 | 49 | 1 | 49 | 49 | 0.891304 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
917a8ac2fff4b7043af8abf25e0e3032073b80a5 | 3,349 | py | Python | tests/test_ir_metrics.py | bhaskargautam/record-linkage | 01eb29f8b7fb4dd1625187232f2dafe47f24cddf | [
"MIT"
] | 1 | 2019-06-07T08:33:40.000Z | 2019-06-07T08:33:40.000Z | tests/test_ir_metrics.py | bhaskargautam/record-linkage | 01eb29f8b7fb4dd1625187232f2dafe47f24cddf | [
"MIT"
] | 6 | 2019-09-19T23:30:53.000Z | 2022-02-10T00:07:09.000Z | tests/test_ir_metrics.py | bhaskargautam/record-linkage | 01eb29f8b7fb4dd1625187232f2dafe47f24cddf | [
"MIT"
] | null | null | null | import unittest
from common import InformationRetrievalMetrics
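# Each result_prob entry is a (record_a, record_b, match_probability) triple;
# true_pairs lists the ground-truth record links.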
class TestMetrics(unittest.TestCase):
def test_mean_precision_at_k(self):
result_prob = [(0, 1, 0.1), (0, 2, 0.3), (1, 2, 0.5), (1, 4, 0.2), (2, 4, 0.9), (2, 3, 1)]
        true_pairs = [(0, 1), (2, 4)]
ir_metrics = InformationRetrievalMetrics(result_prob, true_pairs)
self.assertEqual(ir_metrics.get_mean_precisison_at_k(k=1), 1)
self.assertEqual(ir_metrics.get_mean_precisison_at_k(k=2), 0.5)
result_prob = [(0 , 1, 0.9), (1, 2, 0.4), (2, 3, 0.5), (0, 2, 0.2), (0, 3, 0.5)]
true_pairs = [(0, 1)]
ir_metrics = InformationRetrievalMetrics(result_prob, true_pairs)
self.assertEqual(ir_metrics.get_mean_precisison_at_k(k=1), 0)
self.assertEqual(ir_metrics.get_mean_precisison_at_k(k=2), 0)
self.assertEqual(round(ir_metrics.get_mean_precisison_at_k(k=3), 2), 0.33)
def test_mean_reciprocal_rank(self):
result_prob = [(0, 1, 0.1), (0, 2, 0.3), (1, 2, 0.5), (2, 3, 0.2), (2, 4, 0.9)]
        true_pairs = [(0, 1), (2, 4)]
ir_metrics = InformationRetrievalMetrics(result_prob, true_pairs)
self.assertEqual(ir_metrics.get_mean_reciprocal_rank(), 0.75)
ir_metrics = InformationRetrievalMetrics(result_prob[:4], true_pairs[:1])
self.assertEqual(ir_metrics.get_mean_reciprocal_rank(), 1)
result_prob = [(0, 2, 0.1), (0, 1, 0.2), (2, 3, 0.1), (2, 4, 0.5), (3, 1, 0.2), (3, 2, 0.4), (3, 4, 0.8)]
true_pairs = [(0, 1), (2, 3), (3, 4)]
ir_metrics = InformationRetrievalMetrics(result_prob, true_pairs)
self.assertEqual(round(ir_metrics.get_mean_reciprocal_rank(), 2), 0.61)
result_prob = [(0, 1, 0.1), (0, 2, 0.2), (0, 3, 0.3), (0, 4, 0.4), (0, 5, 0.5), (0, 6, 0.6),
                       (1, 0, 0.1), (1, 2, 0.2), (1, 3, 0.3), (1, 4, 0.4), (1, 5, 0.5), (1, 6, 0.6)]
true_pairs = [(0, 1), (0, 4), (0, 5), (0, 6), (1, 4), (1, 5), (1, 6)]
ir_metrics = InformationRetrievalMetrics(result_prob, true_pairs)
self.assertEqual(round(ir_metrics.get_mean_reciprocal_rank(), 2), 0.63)
def test_mean_average_precision(self):
result_prob = [(0, 1, 0.1), (0, 2, 0.3), (1, 2, 0.5), (2, 3, 0.2), (2, 4, 0.9)]
        true_pairs = [(0, 1), (2, 4)]
ir_metrics = InformationRetrievalMetrics(result_prob, true_pairs)
self.assertEqual(ir_metrics.get_mean_average_precision(), 0.75)
ir_metrics = InformationRetrievalMetrics(result_prob[:4], true_pairs[:1])
self.assertEqual(ir_metrics.get_mean_average_precision(), 1)
result_prob = [(0, 1, 0.1), (0, 2, 0.2), (0, 3, 0.3), (0, 4, 0.4), (0, 5, 0.5), (0, 6, 0.6),
                       (1, 0, 0.1), (1, 2, 0.2), (1, 3, 0.3), (1, 4, 0.4), (1, 5, 0.5), (1, 6, 0.6)]
true_pairs = [(0, 1), (0, 4), (0, 5), (0, 6), (1, 4), (1, 5), (1, 6)]
ir_metrics = InformationRetrievalMetrics(result_prob, true_pairs)
self.assertEqual(round(ir_metrics.get_mean_average_precision(), 2), 0.54)
result_prob = [(0, 2, 0.1), (0, 1, 0.2), (2, 3, 0.1), (2, 4, 0.5), (3, 1, 0.2), (3, 2, 0.4), (3, 4, 0.8)]
true_pairs = [(0, 1), (2, 3), (3, 4)]
ir_metrics = InformationRetrievalMetrics(result_prob, true_pairs)
self.assertEqual(round(ir_metrics.get_mean_average_precision(), 2), 0.61) | 54.016129 | 113 | 0.579277 | 574 | 3,349 | 3.182927 | 0.073171 | 0.029557 | 0.027915 | 0.113848 | 0.8867 | 0.879584 | 0.8763 | 0.860974 | 0.811713 | 0.811713 | 0 | 0.126384 | 0.217976 | 3,349 | 62 | 114 | 54.016129 | 0.57121 | 0 | 0 | 0.531915 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.276596 | 1 | 0.06383 | false | 0 | 0.042553 | 0 | 0.12766 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
91961abe0a85b5c791da90d99f63ec77b3213fb3 | 37 | py | Python | src/proofpoint_itm/__init__.py | drizzo-tech/proofpoint_itm | 89754c314f559018cbaa80d4b4c7a6ce65b1781b | [
"Apache-2.0"
] | null | null | null | src/proofpoint_itm/__init__.py | drizzo-tech/proofpoint_itm | 89754c314f559018cbaa80d4b4c7a6ce65b1781b | [
"Apache-2.0"
] | null | null | null | src/proofpoint_itm/__init__.py | drizzo-tech/proofpoint_itm | 89754c314f559018cbaa80d4b4c7a6ce65b1781b | [
"Apache-2.0"
] | null | null | null | from .proofpoint_itm import ITMClient | 37 | 37 | 0.891892 | 5 | 37 | 6.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081081 | 37 | 1 | 37 | 37 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
531281669c932c2f00b93c3351c804c1bd71de0d | 216 | py | Python | swaps/model/etf/__init__.py | DunnCreativeSS/cash_carry_leveraged_futures_arbitrageur | 1120ebfb487ce4987fe70e6645b36e0d7ce041ec | [
"Apache-2.0"
] | 1 | 2021-09-06T00:09:11.000Z | 2021-09-06T00:09:11.000Z | swaps/model/etf/__init__.py | DunnCreativeSS/cash_carry_leveraged_futures_arbitrageur | 1120ebfb487ce4987fe70e6645b36e0d7ce041ec | [
"Apache-2.0"
] | null | null | null | swaps/model/etf/__init__.py | DunnCreativeSS/cash_carry_leveraged_futures_arbitrageur | 1120ebfb487ce4987fe70e6645b36e0d7ce041ec | [
"Apache-2.0"
] | null | null | null | from swaps.model.etf.etf_swap_config import EtfSwapConfig
from swaps.model.etf.etf_swap_list import EtfSwapList
from swaps.model.etf.etf_swap_in_out import EtfSwapInOut
from swaps.model.etf.unitprice import UnitPrice | 54 | 57 | 0.875 | 35 | 216 | 5.2 | 0.4 | 0.197802 | 0.307692 | 0.373626 | 0.395604 | 0.395604 | 0 | 0 | 0 | 0 | 0 | 0 | 0.069444 | 216 | 4 | 58 | 54 | 0.905473 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5338f3191eaebdf350aa635857fa1c242e64f6b6 | 108 | py | Python | test/files/datasources/foo.py | tnelson-doghouse/docker-jinja | 6946d6f14e9c53cd3b6bba0ae6a3fa03e57d5d59 | [
"MIT"
] | 4 | 2016-11-27T10:33:37.000Z | 2019-09-12T02:43:10.000Z | test/files/datasources/foo.py | tnelson-doghouse/docker-jinja | 6946d6f14e9c53cd3b6bba0ae6a3fa03e57d5d59 | [
"MIT"
] | 5 | 2016-11-04T09:31:53.000Z | 2016-11-05T11:29:26.000Z | test/files/datasources/foo.py | tnelson-doghouse/docker-jinja | 6946d6f14e9c53cd3b6bba0ae6a3fa03e57d5d59 | [
"MIT"
] | 8 | 2015-02-27T17:45:11.000Z | 2020-05-04T01:29:28.000Z | from djinja.env import global_function
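# Registers bar() as a template-global function via docker-jinja's global_function decorator.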
@global_function
def bar(name):
return " - {} - ".format(name)
| 15.428571 | 38 | 0.694444 | 14 | 108 | 5.214286 | 0.785714 | 0.383562 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.175926 | 108 | 6 | 39 | 18 | 0.820225 | 0 | 0 | 0 | 0 | 0 | 0.074074 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
533b7464cc197242eeec26c01cb83859e71ec5ef | 88,240 | py | Python | TVMfuzz/elements.py | anonymousWork000/TVMfuzz | 0ccbb33af89758b8ead59a8c686645246ccd0545 | [
"Apache-2.0"
] | 16 | 2021-05-22T07:39:53.000Z | 2022-02-23T14:50:38.000Z | TVMfuzz/elements.py | anonymousWork000/TVMfuzz | 0ccbb33af89758b8ead59a8c686645246ccd0545 | [
"Apache-2.0"
] | null | null | null | TVMfuzz/elements.py | anonymousWork000/TVMfuzz | 0ccbb33af89758b8ead59a8c686645246ccd0545 | [
"Apache-2.0"
] | 3 | 2021-05-28T07:12:14.000Z | 2021-11-28T02:10:48.000Z |
'''ASTutils.py'''
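# The bare string literals below label each group of shared globals with the
# TVMfuzz module that uses it.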
varnamesRead = set()
mutable = True
'''generation.py'''
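# Pools of harvested functions, variables, and with-statements that the
# generator draws from when composing new test programs.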
funcPool = {}
varPool = set()
withPool = set()
clsPool = {}
subsPool = {}
lazy = []
restAdjuncts = []
'''analyzeSyntax'''
importSet = set()
funcNameTopFunc = {}
constants = set()
records = {}
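# NOTE: shadows the built-in id() within this module.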
id = 0
varTofuncst = {}
varTowith = {}
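# Exclusion tokens: harvested code mentioning any of these (e.g. 'gpu') is skipped.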
taboo = ['gpu']
ingredient = []
clsInstanceToParam = {}
fullstrTopsubs = {}
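# Relay binary operators (arithmetic and comparison) available to the generator.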
functionDefNames = {'relay.multiply',
                    'relay.divide',
                    'relay.add',
                    'relay.subtract',
                    'relay.less',
                    'relay.greater',
                    'relay.less_equal',
                    'relay.greater_equal',
                    'relay.equal',
                    'relay.not_equal'}
funcTolambda = {}
'''getAST'''
helperFuncDef = {}
helperStatDef_global = []
helperStatDef_local = {}
funcDefParents = {}
forbiddenFuncDef = ['random_bsr_matrix']
funcDefs = []
'''autorun'''
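# Verbatim tracebacks of known TVM failures, keyed by TVM version string.
# They are kept byte-for-byte (including TVM's own message typos such as
# "should not has tab or newline") so captured output can be matched exactly.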
message = {}
message['0.7'] = [
'''
Traceback (most recent call last):
File "/home/lisa/TVMfuzz/buggyFile/1.py", line 7, in <module>
xqvpP=tvm.IRModule.from_expr(pDL7X)
File "/home/lisa/tvm-0.7/python/tvm/ir/module.py", line 237, in from_expr
return _ffi_api.Module_FromExpr(expr, funcs, defs)
File "/home/lisa/tvm-0.7/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
[bt] (8) /home/lisa/tvm-0.7/build/libtvm.so(tvm::RelayExpr tvm::relay::TypeInferencer::Resolver::AttachCheckedType<tvm::relay::FunctionNode>(tvm::relay::FunctionNode const*)+0x1be) [0x7f52df55d8ee]
[bt] (7) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::ExprMutator::VisitExpr_(tvm::relay::FunctionNode const*)+0x382) [0x7f52df63a8f2]
[bt] (6) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::ExprMutator::VisitExpr(tvm::RelayExpr const&)+0x96) [0x7f52df63ce06]
[bt] (5) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::ExprFunctor<tvm::RelayExpr (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)+0x76) [0x7f52df642ce6]
[bt] (4) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::ExprFunctor<tvm::RelayExpr (tvm::RelayExpr const&)>::InitVTable()::{lambda(tvm::runtime::ObjectRef const&, tvm::relay::ExprFunctor<tvm::RelayExpr (tvm::RelayExpr const&)>*)#3}::_FUN(tvm::runtime::ObjectRef const&, tvm::relay::ExprFunctor<tvm::RelayExpr (tvm::RelayExpr const&)>*)+0x2c) [0x7f52df4e139c]
[bt] (3) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::TypeInferencer::Resolver::VisitExpr_(tvm::relay::VarNode const*)+0x87) [0x7f52df5604f7]
[bt] (2) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::TypeInferencer::Resolver::VisitVar(tvm::relay::Var const&)+0xe2) [0x7f52df5602c2]
[bt] (1) /home/lisa/tvm-0.7/build/libtvm.so(tvm::RelayExpr tvm::relay::TypeInferencer::Resolver::AttachCheckedType<tvm::relay::VarNode>(tvm::relay::VarNode const*)+0x1ab) [0x7f52df55ad2b]
[bt] (0) /home/lisa/tvm-0.7/build/libtvm.so(+0x27a2988) [0x7f52df552988]
File "/home/lisa/tvm-0.7/src/relay/transforms/type_infer.cc", line 617
TVMError: Check failed: checked_type.as<IncompleteTypeNode>() == nullptr: Cannot resolve type of Var(x) at (nullptr)
''',
'''
[14:44:23] /home/lisa/tvm-0.7/src/printer/doc.cc:55: text node: ' an internal invariant was violated while typechecking your program [14:44:23] /home/lisa/tvm-0.7/src/relay/op/tensor/transform.cc:2367: Check failed: data->shape.size() != 0 (0 vs. 0) : Input shape cannot be empty
Stack trace:
[bt] (0) /home/lisa/tvm-0.7/build/libtvm.so(+0x25245e8) [0x7f8062d885e8]
[bt] (1) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::SplitRel(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)+0x458) [0x7f8062da0a38]
[bt] (2) /home/lisa/tvm-0.7/build/libtvm.so(tvm::runtime::TypedPackedFunc<bool (tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)>::AssignTypedLambda<bool (*)(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)>(bool (*)(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&))::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}::operator()(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*) const+0x518) [0x7f806276eeb8]
[bt] (3) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::TypeSolver::Solve()+0x4cd) [0x7f8062e8978d]
[bt] (4) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::TypeInferencer::Infer(tvm::RelayExpr)+0x5c) [0x7f8063007bec]
[bt] (5) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::InferType(tvm::relay::Function const&, tvm::IRModule const&, tvm::GlobalVar const&)+0x3a2) [0x7f8063008972]
[bt] (6) /home/lisa/tvm-0.7/build/libtvm.so(tvm::RunTypeCheck(tvm::IRModule const&, tvm::GlobalVar const&, tvm::relay::Function)+0x5f6) [0x7f80626ee7e6]
[bt] (7) /home/lisa/tvm-0.7/build/libtvm.so(tvm::IRModuleNode::Add(tvm::GlobalVar const&, tvm::BaseFunc const&, bool)+0xd7) [0x7f80626eee37]
[bt] (8) /home/lisa/tvm-0.7/build/libtvm.so(+0x1e8f091) [0x7f80626f3091]
; ' should not has tab or newline.
Traceback (most recent call last):
File "/home/lisa/TVMfuzz/buggyFile/5.py", line 18, in <module>
EeNPI['main']=QLiMS
File "/home/lisa/tvm-0.7/python/tvm/ir/module.py", line 75, in __setitem__
return self._add(var, val)
File "/home/lisa/tvm-0.7/python/tvm/ir/module.py", line 84, in _add
_ffi_api.Module_Add(self, var, val, update)
File "/home/lisa/tvm-0.7/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
[bt] (8) /home/lisa/tvm-0.7/build/libtvm.so(TVMFuncCall+0x63) [0x7f80631dcd63]
[bt] (7) /home/lisa/tvm-0.7/build/libtvm.so(+0x1e8fed8) [0x7f80626f3ed8]
[bt] (6) /home/lisa/tvm-0.7/build/libtvm.so(+0x1e8f091) [0x7f80626f3091]
[bt] (5) /home/lisa/tvm-0.7/build/libtvm.so(tvm::IRModuleNode::Add(tvm::GlobalVar const&, tvm::BaseFunc const&, bool)+0xd7) [0x7f80626eee37]
[bt] (4) /home/lisa/tvm-0.7/build/libtvm.so(tvm::RunTypeCheck(tvm::IRModule const&, tvm::GlobalVar const&, tvm::relay::Function)+0x5f6) [0x7f80626ee7e6]
[bt] (3) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::InferType(tvm::relay::Function const&, tvm::IRModule const&, tvm::GlobalVar const&)+0x3a2) [0x7f8063008972]
[bt] (2) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::TypeInferencer::Infer(tvm::RelayExpr)+0x78) [0x7f8063007c08]
[bt] (1) /home/lisa/tvm-0.7/build/libtvm.so(tvm::ErrorReporter::RenderErrors(tvm::IRModule const&, bool)+0x2098) [0x7f80626da288]
[bt] (0) /home/lisa/tvm-0.7/build/libtvm.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x80) [0x7f80625ddb60]
[bt] (8) /home/lisa/tvm-0.7/build/libtvm.so(+0x1e8f091) [0x7f80626f3091]
[bt] (7) /home/lisa/tvm-0.7/build/libtvm.so(tvm::IRModuleNode::Add(tvm::GlobalVar const&, tvm::BaseFunc const&, bool)+0xd7) [0x7f80626eee37]
[bt] (6) /home/lisa/tvm-0.7/build/libtvm.so(tvm::RunTypeCheck(tvm::IRModule const&, tvm::GlobalVar const&, tvm::relay::Function)+0x5f6) [0x7f80626ee7e6]
[bt] (5) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::InferType(tvm::relay::Function const&, tvm::IRModule const&, tvm::GlobalVar const&)+0x3a2) [0x7f8063008972]
[bt] (4) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::TypeInferencer::Infer(tvm::RelayExpr)+0x5c) [0x7f8063007bec]
[bt] (3) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::TypeSolver::Solve()+0x4cd) [0x7f8062e8978d]
[bt] (2) /home/lisa/tvm-0.7/build/libtvm.so(tvm::runtime::TypedPackedFunc<bool (tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)>::AssignTypedLambda<bool (*)(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)>(bool (*)(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&))::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}::operator()(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*) const+0x518) [0x7f806276eeb8]
[bt] (1) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::SplitRel(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)+0x458) [0x7f8062da0a38]
[bt] (0) /home/lisa/tvm-0.7/build/libtvm.so(+0x25245e8) [0x7f8062d885e8]
File "/home/lisa/tvm-0.7/src/ir/error.cc", line 132
TVMError:
Error(s) have occurred. The program has been annotated with them:
In `main`:
#[version = "0.0.5"]
fn (%y2: uint64) {
split(%y2, indices_or_sections=3) an internal invariant was violated while typechecking your program [14:44:23] /home/lisa/tvm-0.7/src/relay/op/tensor/transform.cc:2367: Check failed: data->shape.size() != 0 (0 vs. 0) : Input shape cannot be empty
;
}
''',
'''
Traceback (most recent call last):
File "/home/lisa/TVMfuzz/buggyFile/6.py", line 155, in <module>
rdg9G[fAhiQ]=su5yc
File "/home/lisa/tvm-0.7/python/tvm/ir/module.py", line 75, in __setitem__
return self._add(var, val)
File "/home/lisa/tvm-0.7/python/tvm/ir/module.py", line 89, in _add
_ffi_api.Module_AddDef(self, var, val, update)
File "/home/lisa/tvm-0.7/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
[bt] (4) /home/lisa/tvm-0.7/build/libtvm.so(TVMFuncCall+0x63) [0x7fb9d92b6d63]
[bt] (3) /home/lisa/tvm-0.7/build/libtvm.so(tvm::runtime::TypedPackedFunc<void (tvm::IRModule, tvm::GlobalTypeVar const&, tvm::TypeData const&, bool)>::AssignTypedLambda<tvm::runtime::Registry::set_body_method<tvm::IRModule, tvm::IRModuleNode, void, tvm::GlobalTypeVar const&, tvm::TypeData const&, bool, void>(void (tvm::IRModuleNode::*)(tvm::GlobalTypeVar const&, tvm::TypeData const&, bool))::{lambda(tvm::IRModule, tvm::GlobalTypeVar const&, tvm::TypeData const&, bool)#1}>(tvm::runtime::Registry::set_body_method<tvm::IRModule, tvm::IRModuleNode, void, tvm::GlobalTypeVar const&, tvm::TypeData const&, bool, void>(void (tvm::IRModuleNode::*)(tvm::GlobalTypeVar const&, tvm::TypeData const&, bool))::{lambda(tvm::IRModule, tvm::GlobalTypeVar const&, tvm::TypeData const&, bool)#1})::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}::operator()(tvm::runtime::TVMArgs const, tvm::runtime::TVMRetValue) const+0x2a4) [0x7fb9d87d9c34]
[bt] (2) /home/lisa/tvm-0.7/build/libtvm.so(tvm::IRModuleNode::AddTypeDef(tvm::GlobalTypeVar const&, tvm::TypeData const&, bool)+0x2b) [0x7fb9d87cc6cb]
[bt] (1) /home/lisa/tvm-0.7/build/libtvm.so(tvm::IRModuleNode::AddTypeDefUnchecked(tvm::GlobalTypeVar const&, tvm::TypeData const&, bool)+0x1f5) [0x7fb9d87cc445]
[bt] (0) /home/lisa/tvm-0.7/build/libtvm.so(+0x1e885b8) [0x7fb9d87c65b8]
File "/home/lisa/tvm-0.7/src/ir/module.cc", line 260
TVMError: Check failed: global_type_var_map_.count(var->name_hint) == 0: Duplicate global type definition name gtv
''',
'''
[09:03:12] /home/lisa/tvm-0.7/src/printer/doc.cc:55: text node: ' an internal invariant was violated while typechecking your program [09:03:12] /home/lisa/tvm-0.7/src/relay/op/tensor/transform.cc:1623: Check failed: reporter->Assert(seq_lengths->shape[0] == data->shape[batch_axis]): For reverse_sequnece seq_lengths size should match with dimension of batch axis, but got dimension of batch_axis = 4, and seq_length size = 5
Stack trace:
[bt] (0) /home/lisa/tvm-0.7/build/libtvm.so(+0x25245e8) [0x7f428f6b95e8]
[bt] (1) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::ReverseSequenceRel(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)+0x59d) [0x7f428f6c46cd]
[bt] (2) /home/lisa/tvm-0.7/build/libtvm.so(tvm::runtime::TypedPackedFunc<bool (tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)>::AssignTypedLambda<bool (*)(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)>(bool (*)(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&))::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}::operator()(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*) const+0x518) [0x7f428f09feb8]
[bt] (3) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::TypeSolver::Solve()+0x4cd) [0x7f428f7ba78d]
[bt] (4) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::TypeInferencer::Infer(tvm::RelayExpr)+0x5c) [0x7f428f938bec]
[bt] (5) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::InferType(tvm::relay::Function const&, tvm::IRModule const&, tvm::GlobalVar const&)+0x3a2) [0x7f428f939972]
[bt] (6) /home/lisa/tvm-0.7/build/libtvm.so(tvm::RunTypeCheck(tvm::IRModule const&, tvm::GlobalVar const&, tvm::relay::Function)+0x5f6) [0x7f428f01f7e6]
[bt] (7) /home/lisa/tvm-0.7/build/libtvm.so(tvm::IRModuleNode::Add(tvm::GlobalVar const&, tvm::BaseFunc const&, bool)+0xd7) [0x7f428f01fe37]
[bt] (8) /home/lisa/tvm-0.7/build/libtvm.so(tvm::IRModule::FromExpr(tvm::RelayExpr const&, tvm::Map<tvm::GlobalVar, tvm::BaseFunc, void, void> const&, tvm::Map<tvm::GlobalTypeVar, tvm::TypeData, void, void> const&)+0x4ac) [0x7f428f02312c]
; ' should not has tab or newline.
''',
'''
[09:03:24] /home/lisa/tvm-0.7/src/printer/doc.cc:55: text node: ' an internal invariant was violated while typechecking your program [09:03:24] /home/lisa/tvm-0.7/src/relay/op/type_relations.cc:107: Check failed: t0->dtype == t1->dtype (int16 vs. float32) :
Stack trace:
[bt] (0) /home/lisa/tvm-0.7/build/libtvm.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x80) [0x7f94c1cf4b60]
[bt] (1) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::BroadcastRel(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)+0x30f) [0x7f94c251f8ef]
[bt] (2) /home/lisa/tvm-0.7/build/libtvm.so(tvm::runtime::TypedPackedFunc<bool (tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)>::AssignTypedLambda<bool (*)(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)>(bool (*)(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&))::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}::operator()(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*) const+0x518) [0x7f94c1e85eb8]
[bt] (3) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::TypeSolver::Solve()+0x4cd) [0x7f94c25a078d]
[bt] (4) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::TypeInferencer::Infer(tvm::RelayExpr)+0x5c) [0x7f94c271ebec]
[bt] (5) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::InferType(tvm::relay::Function const&, tvm::IRModule const&, tvm::GlobalVar const&)+0x3a2) [0x7f94c271f972]
[bt] (6) /home/lisa/tvm-0.7/build/libtvm.so(tvm::RunTypeCheck(tvm::IRModule const&, tvm::GlobalVar const&, tvm::relay::Function)+0x5f6) [0x7f94c1e057e6]
[bt] (7) /home/lisa/tvm-0.7/build/libtvm.so(tvm::IRModuleNode::Add(tvm::GlobalVar const&, tvm::BaseFunc const&, bool)+0xd7) [0x7f94c1e05e37]
[bt] (8) /home/lisa/tvm-0.7/build/libtvm.so(tvm::IRModule::FromExpr(tvm::RelayExpr const&, tvm::Map<tvm::GlobalVar, tvm::BaseFunc, void, void> const&, tvm::Map<tvm::GlobalTypeVar, tvm::TypeData, void, void> const&)+0x4ac) [0x7f94c1e0912c]
; ' should not has tab or newline.
[09:03:24] /home/lisa/tvm-0.7/src/printer/doc.cc:55: text node: ' an internal invariant was violated while typechecking your program [09:03:24] /home/lisa/tvm-0.7/src/relay/op/type_relations.cc:123: Check failed: t0->dtype == t1->dtype (int16 vs. float32) :
Stack trace:
[bt] (0) /home/lisa/tvm-0.7/build/libtvm.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x80) [0x7f94c1cf4b60]
[bt] (1) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::BroadcastCompRel(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)+0x30f) [0x7f94c251e53f]
[bt] (2) /home/lisa/tvm-0.7/build/libtvm.so(tvm::runtime::TypedPackedFunc<bool (tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)>::AssignTypedLambda<bool (*)(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)>(bool (*)(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&))::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}::operator()(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*) const+0x518) [0x7f94c1e85eb8]
[bt] (3) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::TypeSolver::Solve()+0x4cd) [0x7f94c25a078d]
[bt] (4) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::TypeInferencer::Infer(tvm::RelayExpr)+0x5c) [0x7f94c271ebec]
[bt] (5) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::InferType(tvm::relay::Function const&, tvm::IRModule const&, tvm::GlobalVar const&)+0x3a2) [0x7f94c271f972]
[bt] (6) /home/lisa/tvm-0.7/build/libtvm.so(tvm::RunTypeCheck(tvm::IRModule const&, tvm::GlobalVar const&, tvm::relay::Function)+0x5f6) [0x7f94c1e057e6]
[bt] (7) /home/lisa/tvm-0.7/build/libtvm.so(tvm::IRModuleNode::Add(tvm::GlobalVar const&, tvm::BaseFunc const&, bool)+0xd7) [0x7f94c1e05e37]
[bt] (8) /home/lisa/tvm-0.7/build/libtvm.so(tvm::IRModule::FromExpr(tvm::RelayExpr const&, tvm::Map<tvm::GlobalVar, tvm::BaseFunc, void, void> const&, tvm::Map<tvm::GlobalTypeVar, tvm::TypeData, void, void> const&)+0x4ac) [0x7f94c1e0912c]
''',
'''
Traceback (most recent call last):
File "/home/lisa/TVMfuzz/buggyFile2/../buggyFile/11.py", line 158, in <module>
nJek4=run_opt_pass(aPzKb,[FMA8X,])
File "/home/lisa/TVMfuzz/buggyFile2/../buggyFile/11.py", line 153, in run_opt_pass
mod = tvm.IRModule.from_expr(expr)
File "/home/lisa/tvm-0.7/python/tvm/ir/module.py", line 237, in from_expr
return _ffi_api.Module_FromExpr(expr, funcs, defs)
File "/home/lisa/tvm-0.7/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
[bt] (8) /home/lisa/tvm-0.7/build/libtvm.so(TVMFuncCall+0x63) [0x7f22caadcd63]
[bt] (7) /home/lisa/tvm-0.7/build/libtvm.so(+0x1e932e8) [0x7f22c9ff72e8]
[bt] (6) /home/lisa/tvm-0.7/build/libtvm.so(tvm::IRModule::FromExpr(tvm::RelayExpr const&, tvm::Map<tvm::GlobalVar, tvm::BaseFunc, void, void> const&, tvm::Map<tvm::GlobalTypeVar, tvm::TypeData, void, void> const&)+0x4ac) [0x7f22c9ff212c]
[bt] (5) /home/lisa/tvm-0.7/build/libtvm.so(tvm::IRModuleNode::Add(tvm::GlobalVar const&, tvm::BaseFunc const&, bool)+0xd7) [0x7f22c9feee37]
[bt] (4) /home/lisa/tvm-0.7/build/libtvm.so(tvm::RunTypeCheck(tvm::IRModule const&, tvm::GlobalVar const&, tvm::relay::Function)+0x5f6) [0x7f22c9fee7e6]
[bt] (3) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::InferType(tvm::relay::Function const&, tvm::IRModule const&, tvm::GlobalVar const&)+0x3a2) [0x7f22ca908972]
[bt] (2) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::TypeInferencer::Infer(tvm::RelayExpr)+0x78) [0x7f22ca907c08]
[bt] (1) /home/lisa/tvm-0.7/build/libtvm.so(tvm::ErrorReporter::RenderErrors(tvm::IRModule const&, bool)+0x2098) [0x7f22c9fda288]
[bt] (0) /home/lisa/tvm-0.7/build/libtvm.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x80) [0x7f22c9eddb60]
File "/home/lisa/tvm-0.7/src/ir/error.cc", line 132
TVMError:
Error(s) have occurred. The program has been annotated with them:
In `main`:
#[version = "0.0.5"]
fn (%x0: uint64, %x) {
if (%x0) {
add(3, %x)
} else {
3
}
}
''',
'''
Traceback (most recent call last):
File "/home/lisa/TVMfuzz/buggyFile2/../buggyFile/12.py", line 169, in <module>
MgtyN=run_infer_type(DKq4w)
File "/home/lisa/TVMfuzz/buggyFile2/../buggyFile/12.py", line 152, in run_infer_type
mod = tvm.IRModule.from_expr(expr)
File "/home/lisa/tvm-0.7/python/tvm/ir/module.py", line 237, in from_expr
return _ffi_api.Module_FromExpr(expr, funcs, defs)
File "/home/lisa/tvm-0.7/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
[bt] (8) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::TypeInferencer::Resolver::VisitExpr_(tvm::relay::FunctionNode const*)+0x22) [0x7f8a6ce29cc2]
[bt] (7) /home/lisa/tvm-0.7/build/libtvm.so(tvm::RelayExpr tvm::relay::TypeInferencer::Resolver::AttachCheckedType<tvm::relay::FunctionNode>(tvm::relay::FunctionNode const*)+0x1be) [0x7f8a6ce298ee]
[bt] (6) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::ExprMutator::VisitExpr_(tvm::relay::FunctionNode const*)+0x55f) [0x7f8a6cf06acf]
[bt] (5) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::ExprMutator::VisitExpr(tvm::RelayExpr const&)+0x96) [0x7f8a6cf08e06]
[bt] (4) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::ExprFunctor<tvm::RelayExpr (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)+0x76) [0x7f8a6cf0ece6]
[bt] (3) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::ExprFunctor<tvm::RelayExpr (tvm::RelayExpr const&)>::InitVTable()::{lambda(tvm::runtime::ObjectRef const&, tvm::relay::ExprFunctor<tvm::RelayExpr (tvm::RelayExpr const&)>*)#6}::_FUN(tvm::runtime::ObjectRef const&, tvm::relay::ExprFunctor<tvm::RelayExpr (tvm::RelayExpr const&)>*)+0x2c) [0x7f8a6cdad48c]
[bt] (2) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::TypeInferencer::Resolver::VisitExpr_(tvm::relay::CallNode const*)+0x22) [0x7f8a6ce29702]
[bt] (1) /home/lisa/tvm-0.7/build/libtvm.so(tvm::RelayExpr tvm::relay::TypeInferencer::Resolver::AttachCheckedType<tvm::relay::CallNode>(tvm::relay::CallNode const*)+0x1ad) [0x7f8a6ce291cd]
[bt] (0) /home/lisa/tvm-0.7/build/libtvm.so(+0x27a2988) [0x7f8a6ce1e988]
File "/home/lisa/tvm-0.7/src/relay/transforms/type_infer.cc", line 617
TVMError: Check failed: checked_type.as<IncompleteTypeNode>() == nullptr: Cannot resolve type of CallNode(Op(image.resize3d), [Var(x0, ty=TupleTypeNode([]))], relay.attrs.Resize3dAttrs(0x55f633b94398), []) at (nullptr)
''',
'''
Traceback (most recent call last):
File "/home/lisa/TVMfuzz/buggyFile2/6.py", line 173, in <module>
F8ExZ=make_nat_expr(EHXnr,3)
File "/home/lisa/tvm-0.7/python/tvm/relay/testing/nat.py", line 182, in make_nat_expr
ret = prelude.z()
AttributeError: 'Prelude' object has no attribute 'z'
''',
'''
Traceback (most recent call last):
File "/home/lisa/TVMfuzz/buggyFile2/9.py", line 37, in <module>
hz8Ce[WY1iX]=xmkLE
File "/home/lisa/tvm-0.7/python/tvm/ir/module.py", line 75, in __setitem__
return self._add(var, val)
File "/home/lisa/tvm-0.7/python/tvm/ir/module.py", line 84, in _add
_ffi_api.Module_Add(self, var, val, update)
File "/home/lisa/tvm-0.7/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
[bt] (8) /home/lisa/tvm-0.7/build/libtvm.so(TVMFuncCall+0x63) [0x7f09eda38d63]
[bt] (7) /home/lisa/tvm-0.7/build/libtvm.so(+0x1e8fed8) [0x7f09ecf4fed8]
[bt] (6) /home/lisa/tvm-0.7/build/libtvm.so(+0x1e8f091) [0x7f09ecf4f091]
[bt] (5) /home/lisa/tvm-0.7/build/libtvm.so(tvm::IRModuleNode::Add(tvm::GlobalVar const&, tvm::BaseFunc const&, bool)+0xd7) [0x7f09ecf4ae37]
[bt] (4) /home/lisa/tvm-0.7/build/libtvm.so(tvm::RunTypeCheck(tvm::IRModule const&, tvm::GlobalVar const&, tvm::relay::Function)+0x5f6) [0x7f09ecf4a7e6]
[bt] (3) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::InferType(tvm::relay::Function const&, tvm::IRModule const&, tvm::GlobalVar const&)+0x3a2) [0x7f09ed864972]
[bt] (2) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::TypeInferencer::Infer(tvm::RelayExpr)+0x78) [0x7f09ed863c08]
[bt] (1) /home/lisa/tvm-0.7/build/libtvm.so(tvm::ErrorReporter::RenderErrors(tvm::IRModule const&, bool)+0x2098) [0x7f09ecf36288]
[bt] (0) /home/lisa/tvm-0.7/build/libtvm.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x80) [0x7f09ece39b60]
File "/home/lisa/tvm-0.7/src/ir/error.cc", line 132
TVMError:
Error(s) have occurred. The program has been annotated with them:
In `f`:
#[version = "0.0.5"]
fn [t](%a: t) -> t {
@f(1)
} unable to unify: `int32` and `t`;
''',
'''
Traceback (most recent call last):
File "/home/lisa/TVMfuzz/buggyFile2/39.py", line 128, in <module>
check_kind(EK3h8)
File "/home/lisa/tvm-0.7/python/tvm/relay/analysis/analysis.py", line 106, in check_kind
return _ffi_api.check_kind(t)
File "/home/lisa/tvm-0.7/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
raise get_last_ffi_error()
tvm._ffi.base.TVMError: TVMError: Incorrect kind for a type call function. Type GlobalTypeVar(v1, 0) inside TypeCallNode(GlobalTypeVar(v1, 0), []) is of kind 0 but was expected to be 5
''',
'''
[10:03:33] /home/lisa/tvm-0.7/src/printer/doc.cc:55: text node: ' an internal invariant was violated while typechecking your program [10:03:33] /home/lisa/tvm-0.7/src/relay/op/nn/convolution.h:204: Check failed: reporter->AssertEQ(param->kernel_size[0], wshape[2]) && reporter->AssertEQ(param->kernel_size[1], wshape[3]): Conv2D: shape of weight is inconsistent with kernel_size, kernel_size=[3, 3] wshape=[32, 1, 3, 0]
Stack trace:
[bt] (0) /home/lisa/tvm-0.7/build/libtvm.so(+0x2419d28) [0x7f2de0c65d28]
[bt] (1) /home/lisa/tvm-0.7/build/libtvm.so(bool tvm::relay::Conv2DRel<tvm::relay::Conv2DAttrs>(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)+0x13a7) [0x7f2de0c909f7]
[bt] (2) /home/lisa/tvm-0.7/build/libtvm.so(tvm::runtime::TypedPackedFunc<bool (tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)>::AssignTypedLambda<bool (*)(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)>(bool (*)(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&))::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}::operator()(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*) const+0x518) [0x7f2de0756eb8]
[bt] (3) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::TypeSolver::Solve()+0x4cd) [0x7f2de0e7178d]
[bt] (4) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::TypeInferencer::Infer(tvm::RelayExpr)+0x5c) [0x7f2de0fefbec]
[bt] (5) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::InferType(tvm::relay::Function const&, tvm::IRModule const&, tvm::GlobalVar const&)+0x3a2) [0x7f2de0ff0972]
[bt] (6) /home/lisa/tvm-0.7/build/libtvm.so(tvm::RunTypeCheck(tvm::IRModule const&, tvm::GlobalVar const&, tvm::relay::Function)+0x5f6) [0x7f2de06d67e6]
[bt] (7) /home/lisa/tvm-0.7/build/libtvm.so(tvm::IRModuleNode::Add(tvm::GlobalVar const&, tvm::BaseFunc const&, bool)+0xd7) [0x7f2de06d6e37]
[bt] (8) /home/lisa/tvm-0.7/build/libtvm.so(tvm::IRModule::FromExpr(tvm::RelayExpr const&, tvm::Map<tvm::GlobalVar, tvm::BaseFunc, void, void> const&, tvm::Map<tvm::GlobalTypeVar, tvm::TypeData, void, void> const&)+0x4ac) [0x7f2de06da12c]
; ' should not has tab or newline.
Traceback (most recent call last):
File "/home/lisa/TVMfuzz/buggyFile2/37.py", line 192, in <module>
RjZTd=annotated(\'\'\'int32\'\'\',(1,32,0,10,),(32,1,3,0,))
File "/home/lisa/TVMfuzz/buggyFile2/37.py", line 190, in annotated
mod = tvm.IRModule.from_expr(f)
File "/home/lisa/tvm-0.7/python/tvm/ir/module.py", line 237, in from_expr
return _ffi_api.Module_FromExpr(expr, funcs, defs)
File "/home/lisa/tvm-0.7/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
[bt] (8) /home/lisa/tvm-0.7/build/libtvm.so(TVMFuncCall+0x63) [0x7f2de11c4d63]
[bt] (7) /home/lisa/tvm-0.7/build/libtvm.so(+0x1e932e8) [0x7f2de06df2e8]
[bt] (6) /home/lisa/tvm-0.7/build/libtvm.so(tvm::IRModule::FromExpr(tvm::RelayExpr const&, tvm::Map<tvm::GlobalVar, tvm::BaseFunc, void, void> const&, tvm::Map<tvm::GlobalTypeVar, tvm::TypeData, void, void> const&)+0x4ac) [0x7f2de06da12c]
[bt] (5) /home/lisa/tvm-0.7/build/libtvm.so(tvm::IRModuleNode::Add(tvm::GlobalVar const&, tvm::BaseFunc const&, bool)+0xd7) [0x7f2de06d6e37]
[bt] (4) /home/lisa/tvm-0.7/build/libtvm.so(tvm::RunTypeCheck(tvm::IRModule const&, tvm::GlobalVar const&, tvm::relay::Function)+0x5f6) [0x7f2de06d67e6]
[bt] (3) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::InferType(tvm::relay::Function const&, tvm::IRModule const&, tvm::GlobalVar const&)+0x3a2) [0x7f2de0ff0972]
[bt] (2) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::TypeInferencer::Infer(tvm::RelayExpr)+0x78) [0x7f2de0fefc08]
[bt] (1) /home/lisa/tvm-0.7/build/libtvm.so(tvm::ErrorReporter::RenderErrors(tvm::IRModule const&, bool)+0x2098) [0x7f2de06c2288]
[bt] (0) /home/lisa/tvm-0.7/build/libtvm.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x80) [0x7f2de05c5b60]
[bt] (8) /home/lisa/tvm-0.7/build/libtvm.so(tvm::IRModule::FromExpr(tvm::RelayExpr const&, tvm::Map<tvm::GlobalVar, tvm::BaseFunc, void, void> const&, tvm::Map<tvm::GlobalTypeVar, tvm::TypeData, void, void> const&)+0x4ac) [0x7f2de06da12c]
[bt] (7) /home/lisa/tvm-0.7/build/libtvm.so(tvm::IRModuleNode::Add(tvm::GlobalVar const&, tvm::BaseFunc const&, bool)+0xd7) [0x7f2de06d6e37]
[bt] (6) /home/lisa/tvm-0.7/build/libtvm.so(tvm::RunTypeCheck(tvm::IRModule const&, tvm::GlobalVar const&, tvm::relay::Function)+0x5f6) [0x7f2de06d67e6]
[bt] (5) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::InferType(tvm::relay::Function const&, tvm::IRModule const&, tvm::GlobalVar const&)+0x3a2) [0x7f2de0ff0972]
[bt] (4) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::TypeInferencer::Infer(tvm::RelayExpr)+0x5c) [0x7f2de0fefbec]
[bt] (3) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::TypeSolver::Solve()+0x4cd) [0x7f2de0e7178d]
[bt] (2) /home/lisa/tvm-0.7/build/libtvm.so(tvm::runtime::TypedPackedFunc<bool (tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)>::AssignTypedLambda<bool (*)(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)>(bool (*)(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&))::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}::operator()(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*) const+0x518) [0x7f2de0756eb8]
[bt] (1) /home/lisa/tvm-0.7/build/libtvm.so(bool tvm::relay::Conv2DRel<tvm::relay::Conv2DAttrs>(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)+0x13a7) [0x7f2de0c909f7]
[bt] (0) /home/lisa/tvm-0.7/build/libtvm.so(+0x2419d28) [0x7f2de0c65d28]
File "/home/lisa/tvm-0.7/src/ir/error.cc", line 132
TVMError:
Error(s) have occurred. The program has been annotated with them:
In `main`:
#[version = "0.0.5"]
fn (%data: Tensor[(1, 32, 0, 10), int32], %weight1: Tensor[(32, 1, 3, 0), int32]) {
%0 = nn.conv2d(%data, %weight1, padding=[1, 1, 1, 1], groups=32, kernel_size=[3, 3]) an internal invariant was violated while typechecking your program [10:03:33] /home/lisa/tvm-0.7/src/relay/op/nn/convolution.h:204: Check failed: reporter->AssertEQ(param->kernel_size[0], wshape[2]) && reporter->AssertEQ(param->kernel_size[1], wshape[3]): Conv2D: shape of weight is inconsistent with kernel_size, kernel_size=[3, 3] wshape=[32, 1, 3, 0]
; ;
%1 = nn.conv2d(%0, %weight1, padding=[1, 1, 1, 1], groups=32, kernel_size=[3, 3]);
add(%0, %1)
}
''',
'''
[10:13:58] /home/lisa/tvm-0.7/src/printer/doc.cc:55: text node: ' an internal invariant was violated while typechecking your program [10:13:58] /home/lisa/tvm-0.7/src/relay/op/tensor/transform.cc:1452: Check failed: val->value > 0 (0 vs. 0) : Tile reps value should always be larger than 0, but get: 0
Stack trace:
[bt] (0) /home/lisa/tvm-0.7/build/libtvm.so(+0x25245e8) [0x7f68cce7c5e8]
[bt] (1) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::TileRel(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)+0x6a1) [0x7f68cce86881]
[bt] (2) /home/lisa/tvm-0.7/build/libtvm.so(tvm::runtime::TypedPackedFunc<bool (tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)>::AssignTypedLambda<bool (*)(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)>(bool (*)(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&))::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}::operator()(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*) const+0x518) [0x7f68cc862eb8]
[bt] (3) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::TypeSolver::Solve()+0x4cd) [0x7f68ccf7d78d]
[bt] (4) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::TypeInferencer::Infer(tvm::RelayExpr)+0x5c) [0x7f68cd0fbbec]
[bt] (5) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::InferType(tvm::relay::Function const&, tvm::IRModule const&, tvm::GlobalVar const&)+0x3a2) [0x7f68cd0fc972]
[bt] (6) /home/lisa/tvm-0.7/build/libtvm.so(tvm::RunTypeCheck(tvm::IRModule const&, tvm::GlobalVar const&, tvm::relay::Function)+0x5f6) [0x7f68cc7e27e6]
[bt] (7) /home/lisa/tvm-0.7/build/libtvm.so(tvm::IRModuleNode::Add(tvm::GlobalVar const&, tvm::BaseFunc const&, bool)+0xd7) [0x7f68cc7e2e37]
[bt] (8) /home/lisa/tvm-0.7/build/libtvm.so(+0x1e8f091) [0x7f68cc7e7091]
; ' should not has tab or newline.
Traceback (most recent call last):
File "/home/lisa/TVMfuzz/buggyFile2/32.py", line 705, in <module>
verify_tile((2,3,4,),(3,0,1,))
File "/home/lisa/TVMfuzz/buggyFile2/32.py", line 703, in verify_tile
op_res = intrp.evaluate(func)(x_data)
File "/home/lisa/tvm-0.7/python/tvm/relay/backend/interpreter.py", line 178, in evaluate
return self._make_executor(expr)
File "/home/lisa/tvm-0.7/python/tvm/relay/build_module.py", line 365, in _make_executor
self.mod["main"] = expr
File "/home/lisa/tvm-0.7/python/tvm/ir/module.py", line 75, in __setitem__
return self._add(var, val)
File "/home/lisa/tvm-0.7/python/tvm/ir/module.py", line 84, in _add
_ffi_api.Module_Add(self, var, val, update)
File "/home/lisa/tvm-0.7/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
[bt] (8) /home/lisa/tvm-0.7/build/libtvm.so(TVMFuncCall+0x63) [0x7f68cd2d0d63]
[bt] (7) /home/lisa/tvm-0.7/build/libtvm.so(+0x1e8fed8) [0x7f68cc7e7ed8]
[bt] (6) /home/lisa/tvm-0.7/build/libtvm.so(+0x1e8f091) [0x7f68cc7e7091]
[bt] (5) /home/lisa/tvm-0.7/build/libtvm.so(tvm::IRModuleNode::Add(tvm::GlobalVar const&, tvm::BaseFunc const&, bool)+0xd7) [0x7f68cc7e2e37]
[bt] (4) /home/lisa/tvm-0.7/build/libtvm.so(tvm::RunTypeCheck(tvm::IRModule const&, tvm::GlobalVar const&, tvm::relay::Function)+0x5f6) [0x7f68cc7e27e6]
[bt] (3) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::InferType(tvm::relay::Function const&, tvm::IRModule const&, tvm::GlobalVar const&)+0x3a2) [0x7f68cd0fc972]
[bt] (2) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::TypeInferencer::Infer(tvm::RelayExpr)+0x78) [0x7f68cd0fbc08]
[bt] (1) /home/lisa/tvm-0.7/build/libtvm.so(tvm::ErrorReporter::RenderErrors(tvm::IRModule const&, bool)+0x2098) [0x7f68cc7ce288]
[bt] (0) /home/lisa/tvm-0.7/build/libtvm.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x80) [0x7f68cc6d1b60]
[bt] (8) /home/lisa/tvm-0.7/build/libtvm.so(+0x1e8f091) [0x7f68cc7e7091]
[bt] (7) /home/lisa/tvm-0.7/build/libtvm.so(tvm::IRModuleNode::Add(tvm::GlobalVar const&, tvm::BaseFunc const&, bool)+0xd7) [0x7f68cc7e2e37]
[bt] (6) /home/lisa/tvm-0.7/build/libtvm.so(tvm::RunTypeCheck(tvm::IRModule const&, tvm::GlobalVar const&, tvm::relay::Function)+0x5f6) [0x7f68cc7e27e6]
[bt] (5) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::InferType(tvm::relay::Function const&, tvm::IRModule const&, tvm::GlobalVar const&)+0x3a2) [0x7f68cd0fc972]
[bt] (4) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::TypeInferencer::Infer(tvm::RelayExpr)+0x5c) [0x7f68cd0fbbec]
[bt] (3) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::TypeSolver::Solve()+0x4cd) [0x7f68ccf7d78d]
[bt] (2) /home/lisa/tvm-0.7/build/libtvm.so(tvm::runtime::TypedPackedFunc<bool (tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)>::AssignTypedLambda<bool (*)(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)>(bool (*)(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&))::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}::operator()(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*) const+0x518) [0x7f68cc862eb8]
[bt] (1) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::TileRel(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)+0x6a1) [0x7f68cce86881]
[bt] (0) /home/lisa/tvm-0.7/build/libtvm.so(+0x25245e8) [0x7f68cce7c5e8]
File "/home/lisa/tvm-0.7/src/ir/error.cc", line 132
TVMError:
Error(s) have occurred. The program has been annotated with them:
In `main`:
#[version = "0.0.5"]
fn (%x: Tensor[(2, 3, 4), float32]) {
tile(%x, reps=[3, 0, 1]) an internal invariant was violated while typechecking your program [10:13:58] /home/lisa/tvm-0.7/src/relay/op/tensor/transform.cc:1452: Check failed: val->value > 0 (0 vs. 0) : Tile reps value should always be larger than 0, but get: 0
;
}
''',
'''
[10:48:52] /home/lisa/tvm-0.7/src/printer/doc.cc:55: text node: ' an internal invariant was violated while typechecking your program [10:48:52] /home/lisa/tvm-0.7/src/relay/qnn/op/quantize.cc:49: Check failed: input_dtype == DataType::Float(32): Input type should be one of float32 but was int32
Stack trace:
[bt] (0) /home/lisa/tvm-0.7/build/libtvm.so(+0x28ccf68) [0x7f65bc0d8f68]
[bt] (1) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::qnn::QuantizeRel(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)+0x211) [0x7f65bc0d9b41]
[bt] (2) /home/lisa/tvm-0.7/build/libtvm.so(tvm::runtime::TypedPackedFunc<bool (tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)>::AssignTypedLambda<bool (*)(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)>(bool (*)(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&))::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}::operator()(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*) const+0x518) [0x7f65bb716eb8]
[bt] (3) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::TypeSolver::Solve()+0x4cd) [0x7f65bbe3178d]
[bt] (4) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::TypeInferencer::Infer(tvm::RelayExpr)+0x5c) [0x7f65bbfafbec]
[bt] (5) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::InferType(tvm::relay::Function const&, tvm::IRModule const&, tvm::GlobalVar const&)+0x3a2) [0x7f65bbfb0972]
[bt] (6) /home/lisa/tvm-0.7/build/libtvm.so(tvm::RunTypeCheck(tvm::IRModule const&, tvm::GlobalVar const&, tvm::relay::Function)+0x5f6) [0x7f65bb6967e6]
[bt] (7) /home/lisa/tvm-0.7/build/libtvm.so(tvm::IRModuleNode::Add(tvm::GlobalVar const&, tvm::BaseFunc const&, bool)+0xd7) [0x7f65bb696e37]
[bt] (8) /home/lisa/tvm-0.7/build/libtvm.so(tvm::IRModule::FromExpr(tvm::RelayExpr const&, tvm::Map<tvm::GlobalVar, tvm::BaseFunc, void, void> const&, tvm::Map<tvm::GlobalTypeVar, tvm::TypeData, void, void> const&)+0x4ac) [0x7f65bb69a12c]
; ' should not has tab or newline.
Traceback (most recent call last):
File "/home/lisa/TVMfuzz/buggyFile2/26.py", line 153, in <module>
quantize_test_driver(in_dtype=\'\'\'int32\'\'\',quant_args={\'\'\'out_zero_point\'\'\':UBKqL,\'\'\'out_scale\'\'\':wunQz},axis=0,out_dtype=\'\'\'uint8\'\'\',in_data=vIWjt,verify_output_data=vIWjt)
File "/home/lisa/TVMfuzz/buggyFile2/26.py", line 141, in quantize_test_driver
mod = tvm.IRModule.from_expr(mod)
File "/home/lisa/tvm-0.7/python/tvm/ir/module.py", line 237, in from_expr
return _ffi_api.Module_FromExpr(expr, funcs, defs)
File "/home/lisa/tvm-0.7/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
[bt] (8) /home/lisa/tvm-0.7/build/libtvm.so(TVMFuncCall+0x63) [0x7f65bc184d63]
[bt] (7) /home/lisa/tvm-0.7/build/libtvm.so(+0x1e932e8) [0x7f65bb69f2e8]
[bt] (6) /home/lisa/tvm-0.7/build/libtvm.so(tvm::IRModule::FromExpr(tvm::RelayExpr const&, tvm::Map<tvm::GlobalVar, tvm::BaseFunc, void, void> const&, tvm::Map<tvm::GlobalTypeVar, tvm::TypeData, void, void> const&)+0x4ac) [0x7f65bb69a12c]
[bt] (5) /home/lisa/tvm-0.7/build/libtvm.so(tvm::IRModuleNode::Add(tvm::GlobalVar const&, tvm::BaseFunc const&, bool)+0xd7) [0x7f65bb696e37]
[bt] (4) /home/lisa/tvm-0.7/build/libtvm.so(tvm::RunTypeCheck(tvm::IRModule const&, tvm::GlobalVar const&, tvm::relay::Function)+0x5f6) [0x7f65bb6967e6]
[bt] (3) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::InferType(tvm::relay::Function const&, tvm::IRModule const&, tvm::GlobalVar const&)+0x3a2) [0x7f65bbfb0972]
[bt] (2) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::TypeInferencer::Infer(tvm::RelayExpr)+0x78) [0x7f65bbfafc08]
[bt] (1) /home/lisa/tvm-0.7/build/libtvm.so(tvm::ErrorReporter::RenderErrors(tvm::IRModule const&, bool)+0x2098) [0x7f65bb682288]
[bt] (0) /home/lisa/tvm-0.7/build/libtvm.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x80) [0x7f65bb585b60]
[bt] (8) /home/lisa/tvm-0.7/build/libtvm.so(tvm::IRModule::FromExpr(tvm::RelayExpr const&, tvm::Map<tvm::GlobalVar, tvm::BaseFunc, void, void> const&, tvm::Map<tvm::GlobalTypeVar, tvm::TypeData, void, void> const&)+0x4ac) [0x7f65bb69a12c]
[bt] (7) /home/lisa/tvm-0.7/build/libtvm.so(tvm::IRModuleNode::Add(tvm::GlobalVar const&, tvm::BaseFunc const&, bool)+0xd7) [0x7f65bb696e37]
[bt] (6) /home/lisa/tvm-0.7/build/libtvm.so(tvm::RunTypeCheck(tvm::IRModule const&, tvm::GlobalVar const&, tvm::relay::Function)+0x5f6) [0x7f65bb6967e6]
[bt] (5) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::InferType(tvm::relay::Function const&, tvm::IRModule const&, tvm::GlobalVar const&)+0x3a2) [0x7f65bbfb0972]
[bt] (4) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::TypeInferencer::Infer(tvm::RelayExpr)+0x5c) [0x7f65bbfafbec]
[bt] (3) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::TypeSolver::Solve()+0x4cd) [0x7f65bbe3178d]
[bt] (2) /home/lisa/tvm-0.7/build/libtvm.so(tvm::runtime::TypedPackedFunc<bool (tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)>::AssignTypedLambda<bool (*)(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)>(bool (*)(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&))::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}::operator()(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*) const+0x518) [0x7f65bb716eb8]
[bt] (1) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::qnn::QuantizeRel(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)+0x211) [0x7f65bc0d9b41]
[bt] (0) /home/lisa/tvm-0.7/build/libtvm.so(+0x28ccf68) [0x7f65bc0d8f68]
File "/home/lisa/tvm-0.7/src/ir/error.cc", line 132
TVMError:
Error(s) have occurred. The program has been annotated with them:
In `main`:
#[version = "0.0.5"]
fn (%input_data: Tensor[(5, 2), int32]) {
qnn.quantize(%input_data, meta[relay.Constant][0], meta[relay.Constant][1], out_dtype="uint8", axis=0) an internal invariant was violated while typechecking your program [10:48:52] /home/lisa/tvm-0.7/src/relay/qnn/op/quantize.cc:49: Check failed: input_dtype == DataType::Float(32): Input type should be one of float32 but was int32
;
}
/* For debugging purposes the metadata section has been omitted.
* If you would like to see the full metadata section you can set the
* option to `True` when invoking `astext`.
*/
''',
'''
file:10:18: parse error: a type definition with the name `List` was previously defined
type List[A] {
~~~~~~~~~~~~~~~~~^~~~~~~~~
file:11:13: parse error: a constructor with the name `Cons` was previously defined
Cons(A, List[A]),
~~~~~~~~~~~~^~~~~~~~~~~~~~~~~
''',
'''
Traceback (most recent call last):
File "/home/lisa/TVMfuzz/buggyFile2/7.py", line 3728, in <module>
LZ2cL=KEkqH.partition(mcksQ)
File "/home/lisa/tvm-0.7/python/tvm/relay/dataflow_pattern/__init__.py", line 171, in partition
return partition(self, expr, attrs, check)
File "/home/lisa/tvm-0.7/python/tvm/relay/dataflow_pattern/__init__.py", line 802, in partition
return ffi.partition(pattern, expr, attrs, check)
File "/home/lisa/tvm-0.7/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
[bt] (8) /home/lisa/tvm-0.7/build/libtvm.so(tvm::RelayExpr tvm::relay::TypeInferencer::Resolver::AttachCheckedType<tvm::relay::FunctionNode>(tvm::relay::FunctionNode const*)+0x1be) [0x7f6e77fce8ee]
[bt] (7) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::ExprMutator::VisitExpr_(tvm::relay::FunctionNode const*)+0x382) [0x7f6e780ab8f2]
[bt] (6) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::ExprMutator::VisitExpr(tvm::RelayExpr const&)+0x96) [0x7f6e780ade06]
[bt] (5) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::ExprFunctor<tvm::RelayExpr (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)+0x76) [0x7f6e780b3ce6]
[bt] (4) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::ExprFunctor<tvm::RelayExpr (tvm::RelayExpr const&)>::InitVTable()::{lambda(tvm::runtime::ObjectRef const&, tvm::relay::ExprFunctor<tvm::RelayExpr (tvm::RelayExpr const&)>*)#3}::_FUN(tvm::runtime::ObjectRef const&, tvm::relay::ExprFunctor<tvm::RelayExpr (tvm::RelayExpr const&)>*)+0x2c) [0x7f6e77f5239c]
[bt] (3) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::TypeInferencer::Resolver::VisitExpr_(tvm::relay::VarNode const*)+0x87) [0x7f6e77fd14f7]
[bt] (2) /home/lisa/tvm-0.7/build/libtvm.so(tvm::relay::TypeInferencer::Resolver::VisitVar(tvm::relay::Var const&)+0xe2) [0x7f6e77fd12c2]
[bt] (1) /home/lisa/tvm-0.7/build/libtvm.so(tvm::RelayExpr tvm::relay::TypeInferencer::Resolver::AttachCheckedType<tvm::relay::VarNode>(tvm::relay::VarNode const*)+0x1ab) [0x7f6e77fcbd2b]
[bt] (0) /home/lisa/tvm-0.7/build/libtvm.so(+0x27a2988) [0x7f6e77fc3988]
File "/home/lisa/tvm-0.7/src/relay/transforms/type_infer.cc", line 617
TVMError: Check failed: checked_type.as<IncompleteTypeNode>() == nullptr: Cannot resolve type of Var(weight) at (nullptr)
''',
]
message['0.8'] = [
'''
Traceback (most recent call last):
File "/home/lisa/TVMfuzz/byproduct/../buggyFile/7.py", line 2236, in <module>
m7tRI=run_opt_pass(IVUfN,rolTI)
File "/home/lisa/TVMfuzz/byproduct/../buggyFile/7.py", line 2230, in run_opt_pass
mod = opt_pass(mod)
File "/home/lisa/tvm/python/tvm/ir/transform.py", line 127, in __call__
return _ffi_transform_api.RunPass(self, mod)
File "/home/lisa/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
[bt] (8) /home/lisa/tvm/build/libtvm.so(tvm::relay::ExprMutator::VisitExpr(tvm::RelayExpr const&)+0x96) [0x7f7255c78b96]
[bt] (7) /home/lisa/tvm/build/libtvm.so(tvm::relay::ExprFunctor<tvm::RelayExpr (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)+0x76) [0x7f7255c7ea46]
[bt] (6) /home/lisa/tvm/build/libtvm.so(tvm::relay::ExprFunctor<tvm::RelayExpr (tvm::RelayExpr const&)>::InitVTable()::{lambda(tvm::runtime::ObjectRef const&, tvm::relay::ExprFunctor<tvm::RelayExpr (tvm::RelayExpr const&)>*)#6}::_FUN(tvm::runtime::ObjectRef const&, tvm::relay::ExprFunctor<tvm::RelayExpr (tvm::RelayExpr const&)>*)+0x2c) [0x7f7255b0b3ec]
[bt] (5) /home/lisa/tvm/build/libtvm.so(tvm::relay::MixedModeMutator::VisitExpr_(tvm::relay::CallNode const*)+0x9e) [0x7f7255a80f6e]
[bt] (4) /home/lisa/tvm/build/libtvm.so(tvm::relay::ForwardRewriter::Rewrite_(tvm::relay::CallNode const*, tvm::RelayExpr const&)+0xf97) [0x7f7255ade7e7]
[bt] (3) /home/lisa/tvm/build/libtvm.so(tvm::runtime::TypedPackedFunc<tvm::RelayExpr (tvm::relay::Call const&, tvm::runtime::Array<tvm::RelayExpr, void> const&, tvm::runtime::ObjectRef const&)>::AssignTypedLambda<tvm::RelayExpr (*)(tvm::relay::Call const&, tvm::runtime::Array<tvm::RelayExpr, void> const&, tvm::runtime::ObjectRef const&)>(tvm::RelayExpr (*)(tvm::relay::Call const&, tvm::runtime::Array<tvm::RelayExpr, void> const&, tvm::runtime::ObjectRef const&))::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}::operator()(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*) const+0x349) [0x7f7255a2c389]
[bt] (2) /home/lisa/tvm/build/libtvm.so(tvm::RelayExpr tvm::relay::LayoutRewriter<tvm::relay::alter_op_layout::AlterTransformMemorizer>(tvm::relay::Call const&, tvm::runtime::Array<tvm::RelayExpr, void> const&, tvm::runtime::ObjectRef const&)+0xcd5) [0x7f7255a344e5]
[bt] (1) /home/lisa/tvm/build/libtvm.so(tvm::RelayExprNode::checked_type() const+0x157) [0x7f72559aa5e7]
[bt] (0) /home/lisa/tvm/build/libtvm.so(+0x2702338) [0x7f72559a8338]
File "/home/lisa/tvm/include/tvm/ir/expr.h", line 476
TVMError:
---------------------------------------------------------------
An internal invariant was violated during the execution of TVM.
Please read TVM's error reporting guidelines.
More details can be found here: https://discuss.tvm.ai/t/error-reporting/7793.
---------------------------------------------------------------
Check failed: checked_type_.defined() == false: internal error: the type checker has not populated the checked_type field for Var(x, ty=TensorType([1, 56, 56, 64], float32))
''',
'''
Traceback (most recent call last):
File "/home/lisa/TVMfuzz/buggyFile2/1.py", line 152, in <module>
AG624=to_cps(C9trW[\'\'\'main\'\'\'],mod=C9trW)
File "/home/lisa/tvm/python/tvm/relay/transform/transform.py", line 840, in to_cps
return _ffi_api.to_cps(func, use_mod)
File "/home/lisa/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
[bt] (8) /home/lisa/tvm/build/libtvm.so(tvm::relay::ToCPS(tvm::relay::Function const&, tvm::IRModule const&, std::unordered_map<tvm::GlobalVar, tvm::GlobalVar, tvm::runtime::ObjectPtrHash, tvm::runtime::ObjectPtrEqual, std::allocator<std::pair<tvm::GlobalVar const, tvm::GlobalVar> > >*)+0x17e) [0x7f583782855e]
[bt] (7) /home/lisa/tvm/build/libtvm.so(tvm::relay::ExprVisitor::VisitExpr(tvm::RelayExpr const&)+0x8b) [0x7f583792a5db]
[bt] (6) /home/lisa/tvm/build/libtvm.so(tvm::relay::ExprFunctor<void (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)+0x6f) [0x7f58378d705f]
[bt] (5) /home/lisa/tvm/build/libtvm.so(tvm::relay::ExprVisitor::VisitExpr_(tvm::relay::FunctionNode const*)+0xba) [0x7f583792693a]
[bt] (4) /home/lisa/tvm/build/libtvm.so(tvm::relay::ExprVisitor::VisitExpr(tvm::RelayExpr const&)+0x8b) [0x7f583792a5db]
[bt] (3) /home/lisa/tvm/build/libtvm.so(tvm::relay::ExprFunctor<void (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)+0x6f) [0x7f58378d705f]
[bt] (2) /home/lisa/tvm/build/libtvm.so(+0x28d00de) [0x7f58378290de]
[bt] (1) /home/lisa/tvm/build/libtvm.so(tvm::RelayExprNode::checked_type() const+0x157) [0x7f583765d5e7]
[bt] (0) /home/lisa/tvm/build/libtvm.so(+0x2702338) [0x7f583765b338]
File "/home/lisa/tvm/include/tvm/ir/expr.h", line 476
TVMError:
---------------------------------------------------------------
An internal invariant was violated during the execution of TVM.
Please read TVM's error reporting guidelines.
More details can be found here: https://discuss.tvm.ai/t/error-reporting/7793.
---------------------------------------------------------------
Check failed: checked_type_.defined() == false: internal error: the type checker has not populated the checked_type field for Var(data, ty=TensorType([1, 17, 56, 38], float32))
''',
'''
Traceback (most recent call last):
File "/home/lisa/TVMfuzz/buggyFile2/../buggyFile/11.py", line 158, in <module>
nJek4=run_opt_pass(aPzKb,[FMA8X,])
File "/home/lisa/TVMfuzz/buggyFile2/../buggyFile/11.py", line 154, in run_opt_pass
mod = opt_pass(mod)
TypeError: 'list' object is not callable
''',
'''
The type inference pass was unable to infer a type for this expression.
This usually occurs when an operator call is under constrained in some way, check other reported errors for hints of what may of happened.
Traceback (most recent call last):
File "/home/lisa/TVMfuzz/buggyFile/12.py", line 169, in <module>
MgtyN=run_infer_type(DKq4w)
File "/home/lisa/TVMfuzz/buggyFile/12.py", line 153, in run_infer_type
mod = transform.InferType()(mod)
File "/home/lisa/tvm/python/tvm/ir/transform.py", line 127, in __call__
return _ffi_transform_api.RunPass(self, mod)
File "/home/lisa/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
raise get_last_ffi_error()
tvm.error.DiagnosticError: Traceback (most recent call last):
[bt] (6) /home/lisa/tvm/build/libtvm.so(TVMFuncCall+0x63) [0x7f01235242a3]
[bt] (5) /home/lisa/tvm/build/libtvm.so(+0x1f39ed4) [0x7f012298aed4]
[bt] (4) /home/lisa/tvm/build/libtvm.so(tvm::transform::ModulePassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x1d4) [0x7f012298a534]
[bt] (3) /home/lisa/tvm/build/libtvm.so(+0x28dd442) [0x7f012332e442]
[bt] (2) /home/lisa/tvm/build/libtvm.so(+0x28dc3c7) [0x7f012332d3c7]
[bt] (1) /home/lisa/tvm/build/libtvm.so(tvm::DiagnosticContext::Render()+0x231) [0x7f012293b4f1]
[bt] (0) /home/lisa/tvm/build/libtvm.so(+0x1eea0e8) [0x7f012293b0e8]
File "/home/lisa/tvm/src/ir/diagnostic.cc", line 105
DiagnosticError: one or more error diagnostics were emitted, please check diagnostic render for output.
''',
'''
Traceback (most recent call last):
File "/home/lisa/TVMfuzz/buggyFile2/4.py", line 1968, in <module>
ayUbk=run_opt_pass(cqCZg,g8VBv)
File "/home/lisa/TVMfuzz/buggyFile2/4.py", line 1962, in run_opt_pass
mod = opt_pass(mod)
File "/home/lisa/tvm/python/tvm/ir/transform.py", line 127, in __call__
return _ffi_transform_api.RunPass(self, mod)
File "/home/lisa/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
[bt] (8) /home/lisa/tvm/build/libtvm.so(tvm::relay::ExprMutator::VisitExpr(tvm::RelayExpr const&)+0x96) [0x7feba3fadb96]
[bt] (7) /home/lisa/tvm/build/libtvm.so(tvm::relay::ExprFunctor<tvm::RelayExpr (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)+0x76) [0x7feba3fb3a46]
[bt] (6) /home/lisa/tvm/build/libtvm.so(tvm::relay::ExprFunctor<tvm::RelayExpr (tvm::RelayExpr const&)>::InitVTable()::{lambda(tvm::runtime::ObjectRef const&, tvm::relay::ExprFunctor<tvm::RelayExpr (tvm::RelayExpr const&)>*)#6}::_FUN(tvm::runtime::ObjectRef const&, tvm::relay::ExprFunctor<tvm::RelayExpr (tvm::RelayExpr const&)>*)+0x2c) [0x7feba3e403ec]
[bt] (5) /home/lisa/tvm/build/libtvm.so(tvm::relay::MixedModeMutator::VisitExpr_(tvm::relay::CallNode const*)+0x9e) [0x7feba3db5f6e]
[bt] (4) /home/lisa/tvm/build/libtvm.so(tvm::relay::ForwardRewriter::Rewrite_(tvm::relay::CallNode const*, tvm::RelayExpr const&)+0xf97) [0x7feba3e137e7]
[bt] (3) /home/lisa/tvm/build/libtvm.so(tvm::runtime::TypedPackedFunc<tvm::RelayExpr (tvm::relay::Call const&, tvm::runtime::Array<tvm::RelayExpr, void> const&, tvm::runtime::ObjectRef const&)>::AssignTypedLambda<tvm::RelayExpr (*)(tvm::relay::Call const&, tvm::runtime::Array<tvm::RelayExpr, void> const&, tvm::runtime::ObjectRef const&)>(tvm::RelayExpr (*)(tvm::relay::Call const&, tvm::runtime::Array<tvm::RelayExpr, void> const&, tvm::runtime::ObjectRef const&))::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}::operator()(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*) const+0x349) [0x7feba3d61389]
[bt] (2) /home/lisa/tvm/build/libtvm.so(tvm::RelayExpr tvm::relay::LayoutRewriter<tvm::relay::alter_op_layout::AlterTransformMemorizer>(tvm::relay::Call const&, tvm::runtime::Array<tvm::RelayExpr, void> const&, tvm::runtime::ObjectRef const&)+0xcd5) [0x7feba3d694e5]
[bt] (1) /home/lisa/tvm/build/libtvm.so(tvm::RelayExprNode::checked_type() const+0x157) [0x7feba3cdf5e7]
[bt] (0) /home/lisa/tvm/build/libtvm.so(+0x2702338) [0x7feba3cdd338]
File "/home/lisa/tvm/include/tvm/ir/expr.h", line 476
TVMError:
---------------------------------------------------------------
An internal invariant was violated during the execution of TVM.
Please read TVM's error reporting guidelines.
More details can be found here: https://discuss.tvm.ai/t/error-reporting/7793.
---------------------------------------------------------------
Check failed: checked_type_.defined() == false: internal error: the type checker has not populated the checked_type field for Var(x, ty=TensorType([1, 56, 56, 64], float32))
''',
'''
Traceback (most recent call last):
File "/home/lisa/TVMfuzz/buggyFile2/6.py", line 173, in <module>
F8ExZ=make_nat_expr(EHXnr,3)
File "/home/lisa/tvm/python/tvm/relay/testing/nat.py", line 60, in make_nat_expr
_, z, s = prelude.mod.get_type("nat")
File "/home/lisa/tvm/python/tvm/ir/module.py", line 215, in get_type
ty_var = self.get_global_type_var(name)
File "/home/lisa/tvm/python/tvm/ir/module.py", line 193, in get_global_type_var
return _ffi_api.Module_GetGlobalTypeVar(self, name)
File "/home/lisa/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
[bt] (3) /home/lisa/tvm/build/libtvm.so(TVMFuncCall+0x63) [0x7f64c6c912a3]
[bt] (2) /home/lisa/tvm/build/libtvm.so(std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::TypedPackedFunc<tvm::GlobalTypeVar (tvm::IRModule, tvm::runtime::String const&)>::AssignTypedLambda<tvm::runtime::Registry::set_body_method<tvm::IRModule, tvm::IRModuleNode, tvm::GlobalTypeVar, tvm::runtime::String const&, void>(tvm::GlobalTypeVar (tvm::IRModuleNode::*)(tvm::runtime::String const&) const)::{lambda(tvm::IRModule, tvm::runtime::String const&)#1}>(tvm::runtime::Registry::set_body_method<tvm::IRModule, tvm::IRModuleNode, tvm::GlobalTypeVar, tvm::runtime::String const&, void>(tvm::GlobalTypeVar (tvm::IRModuleNode::*)(tvm::runtime::String const&) const)::{lambda(tvm::IRModule, tvm::runtime::String const&)#1})::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)+0x1f0) [0x7f64c60df730]
[bt] (1) /home/lisa/tvm/build/libtvm.so(tvm::IRModuleNode::GetGlobalTypeVar(tvm::runtime::String const&) const+0x139) [0x7f64c60ccd99]
[bt] (0) /home/lisa/tvm/build/libtvm.so(+0x1f0e068) [0x7f64c60cc068]
File "/home/lisa/tvm/src/ir/module.cc", line 156
TVMError:
---------------------------------------------------------------
An internal invariant was violated during the execution of TVM.
Please read TVM's error reporting guidelines.
More details can be found here: https://discuss.tvm.ai/t/error-reporting/7793.
---------------------------------------------------------------
Check failed: it != global_type_var_map_.end() == false: Cannot find global type var nat in the Module
''',
'''
The Relay type checker is unable to show the following types match.
In particular `int32` does not match `t`
note: run with `TVM_BACKTRACE=1` environment variable to display a backtrace.
''',
'''
The Relay type checker is unable to show the following types match.
In particular `ref(meta[IncompleteType][0])
` does not match `fn [c, c](c) -> c`
The Relay type checker is unable to show the following types match.
In particular `ref(meta[IncompleteType][0])
` does not match `fn [c, c](c) -> c`
The Relay type checker is unable to show the following types match.
In particular `ref(meta[IncompleteType][0])
` does not match `fn [c, c](c) -> c`
The type inference pass was unable to infer a type for this expression.
This usually occurs when an operator call is under constrained in some way, check other reported errors for hints of what may of happened.
The type inference pass was unable to infer a type for this expression.
This usually occurs when an operator call is under constrained in some way, check other reported errors for hints of what may of happened.
The type inference pass was unable to infer a type for this expression.
This usually occurs when an operator call is under constrained in some way, check other reported errors for hints of what may of happened.
The type inference pass was unable to infer a type for this expression.
This usually occurs when an operator call is under constrained in some way, check other reported errors for hints of what may of happened.
Traceback (most recent call last):
File "/home/lisa/TVMfuzz/buggyFile2/2.py", line 171, in <module>
tkIeS=run_opt_pass(AsPBS,Ac5bE)
File "/home/lisa/TVMfuzz/buggyFile2/2.py", line 158, in run_opt_pass
mod = seq(mod)
File "/home/lisa/tvm/python/tvm/ir/transform.py", line 127, in __call__
return _ffi_transform_api.RunPass(self, mod)
File "/home/lisa/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
raise get_last_ffi_error()
tvm.error.DiagnosticError: Traceback (most recent call last):
[bt] (7) /home/lisa/tvm/build/libtvm.so(TVMFuncCall+0x63) [0x7fef562362a3]
[bt] (6) /home/lisa/tvm/build/libtvm.so(+0x1f39ed4) [0x7fef5569ced4]
[bt] (5) /home/lisa/tvm/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x32f) [0x7fef556993af]
[bt] (4) /home/lisa/tvm/build/libtvm.so(tvm::transform::ModulePassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x1d4) [0x7fef5569c534]
[bt] (3) /home/lisa/tvm/build/libtvm.so(+0x28dd442) [0x7fef56040442]
[bt] (2) /home/lisa/tvm/build/libtvm.so(+0x28dc3c7) [0x7fef5603f3c7]
[bt] (1) /home/lisa/tvm/build/libtvm.so(tvm::DiagnosticContext::Render()+0x231) [0x7fef5564d4f1]
[bt] (0) /home/lisa/tvm/build/libtvm.so(+0x1eea0e8) [0x7fef5564d0e8]
File "/home/lisa/tvm/src/ir/diagnostic.cc", line 105
DiagnosticError: one or more error diagnostics were emitted, please check diagnostic render for output.
''',
'''
Traceback (most recent call last):
File "/home/lisa/TVMfuzz/buggyFile2/39.py", line 128, in <module>
check_kind(EK3h8)
File "/home/lisa/tvm/python/tvm/relay/analysis/analysis.py", line 106, in check_kind
return _ffi_api.check_kind(t)
File "/home/lisa/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
[bt] (6) /home/lisa/tvm/build/libtvm.so(TVMFuncCall+0x63) [0x7f40a7b022a3]
[bt] (5) /home/lisa/tvm/build/libtvm.so(+0x27292b8) [0x7f40a77582b8]
[bt] (4) /home/lisa/tvm/build/libtvm.so(+0x272905e) [0x7f40a775805e]
[bt] (3) /home/lisa/tvm/build/libtvm.so(tvm::relay::KindCheck(tvm::Type const&, tvm::IRModule const&, tvm::runtime::Optional<tvm::DiagnosticContext>)+0xb9) [0x7f40a7757989]
[bt] (2) /home/lisa/tvm/build/libtvm.so(tvm::relay::KindChecker::VisitType_(tvm::TypeCallNode const*)+0x10a) [0x7f40a775ef9a]
[bt] (1) /home/lisa/tvm/build/libtvm.so(tvm::relay::KindChecker::CheckKindMatches(tvm::Type const&, tvm::Type const&, tvm::TypeKind, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x61c) [0x7f40a775c3ac]
[bt] (0) /home/lisa/tvm/build/libtvm.so(+0x2728878) [0x7f40a7757878]
File "/home/lisa/tvm/src/relay/analysis/kind_check.cc", line 54
TVMError: Incorrect kind for a type call function. Type GlobalTypeVar(v1, 0) inside TypeCallNode(GlobalTypeVar(v1, 0), []) is of kind 0 but was expected to be 5
''',
'''
Traceback (most recent call last):
File "/home/lisa/TVMfuzz/buggyFile2/38.py", line 1366, in <module>
IsoSU=run_opt_pass(cfBFv,vOeTP)
File "/home/lisa/TVMfuzz/buggyFile2/38.py", line 1361, in run_opt_pass
mod = opt_pass(mod)
File "/home/lisa/tvm/python/tvm/ir/transform.py", line 127, in __call__
return _ffi_transform_api.RunPass(self, mod)
File "/home/lisa/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
[bt] (8) /home/lisa/tvm/build/libtvm.so(tvm::relay::ExprFunctor<tvm::RelayExpr (tvm::RelayExpr const&)>::InitVTable()::{lambda(tvm::runtime::ObjectRef const&, tvm::relay::ExprFunctor<tvm::RelayExpr (tvm::RelayExpr const&)>*)#5}::_FUN(tvm::runtime::ObjectRef const&, tvm::relay::ExprFunctor<tvm::RelayExpr (tvm::RelayExpr const&)>*)+0x2c) [0x7f3fe87e739c]
[bt] (7) /home/lisa/tvm/build/libtvm.so(tvm::relay::ExprMutator::VisitExpr_(tvm::relay::FunctionNode const*)+0x55f) [0x7f3fe89527ff]
[bt] (6) /home/lisa/tvm/build/libtvm.so(tvm::relay::MixedModeMutator::VisitExpr(tvm::RelayExpr const&)+0x1b1) [0x7f3fe8954211]
[bt] (5) /home/lisa/tvm/build/libtvm.so(tvm::relay::MixedModeMutator::VisitLeaf(tvm::RelayExpr const&)+0x47) [0x7f3fe8953447]
[bt] (4) /home/lisa/tvm/build/libtvm.so(tvm::relay::PostOrderRewriter::DispatchVisitExpr(tvm::RelayExpr const&)+0xff) [0x7f3fe895ad2f]
[bt] (3) /home/lisa/tvm/build/libtvm.so(tvm::relay::ExprRewriter::InitVTable()::{lambda(tvm::runtime::ObjectRef const&, tvm::relay::ExprRewriter*, tvm::RelayExpr const&)#6}::_FUN(tvm::runtime::ObjectRef const&, tvm::relay::ExprRewriter*, tvm::RelayExpr const&)+0x2c) [0x7f3fe86a6bec]
[bt] (2) /home/lisa/tvm/build/libtvm.so(tvm::relay::legalize::Legalizer::Rewrite_(tvm::relay::CallNode const*, tvm::RelayExpr const&)+0x7a0) [0x7f3fe87f2e60]
[bt] (1) /home/lisa/tvm/build/libtvm.so(tvm::RelayExprNode::checked_type() const+0x157) [0x7f3fe86865e7]
[bt] (0) /home/lisa/tvm/build/libtvm.so(+0x2702338) [0x7f3fe8684338]
File "/home/lisa/tvm/include/tvm/ir/expr.h", line 476
TVMError:
---------------------------------------------------------------
An internal invariant was violated during the execution of TVM.
Please read TVM's error reporting guidelines.
More details can be found here: https://discuss.tvm.ai/t/error-reporting/7793.
---------------------------------------------------------------
Check failed: checked_type_.defined() == false: internal error: the type checker has not populated the checked_type field for Var(x, ty=TensorType([1, 500, 500, 64], float32))
''',
'''
Traceback (most recent call last):
File "/home/lisa/TVMfuzz/buggyFile2/32.py", line 705, in <module>
verify_tile((2,3,4,),(3,0,1,))
File "/home/lisa/TVMfuzz/buggyFile2/32.py", line 703, in verify_tile
op_res = intrp.evaluate(func)(x_data)
File "/home/lisa/tvm/python/tvm/relay/backend/interpreter.py", line 178, in evaluate
return self._make_executor(expr)
File "/home/lisa/tvm/python/tvm/relay/build_module.py", line 382, in _make_executor
self.mod = InferType()(self.mod)
File "/home/lisa/tvm/python/tvm/ir/transform.py", line 127, in __call__
return _ffi_transform_api.RunPass(self, mod)
File "/home/lisa/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
[bt] (7) /home/lisa/tvm/build/libtvm.so(TVMFuncCall+0x63) [0x7f7e810992a3]
[bt] (6) /home/lisa/tvm/build/libtvm.so(+0x1f39ed4) [0x7f7e804ffed4]
[bt] (5) /home/lisa/tvm/build/libtvm.so(tvm::transform::ModulePassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x1d4) [0x7f7e804ff534]
[bt] (4) /home/lisa/tvm/build/libtvm.so(+0x28dd442) [0x7f7e80ea3442]
[bt] (3) /home/lisa/tvm/build/libtvm.so(+0x28dc358) [0x7f7e80ea2358]
[bt] (2) /home/lisa/tvm/build/libtvm.so(tvm::relay::TypeInferencer::Infer(tvm::GlobalVar, tvm::relay::Function)+0x75) [0x7f7e80ea1a45]
[bt] (1) /home/lisa/tvm/build/libtvm.so(+0x1bdc8b1) [0x7f7e801a28b1]
[bt] (0) /home/lisa/tvm/build/libtvm.so(+0x27418f8) [0x7f7e80d078f8]
[bt] (8) /home/lisa/tvm/build/libtvm.so(+0x1f39ed4) [0x7f7e804ffed4]
[bt] (7) /home/lisa/tvm/build/libtvm.so(tvm::transform::ModulePassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x1d4) [0x7f7e804ff534]
[bt] (6) /home/lisa/tvm/build/libtvm.so(+0x28dd442) [0x7f7e80ea3442]
[bt] (5) /home/lisa/tvm/build/libtvm.so(+0x28dc358) [0x7f7e80ea2358]
[bt] (4) /home/lisa/tvm/build/libtvm.so(tvm::relay::TypeInferencer::Infer(tvm::GlobalVar, tvm::relay::Function)+0x75) [0x7f7e80ea1a45]
[bt] (3) /home/lisa/tvm/build/libtvm.so(tvm::relay::TypeSolver::Solve()+0x45c) [0x7f7e80d0a2ec]
[bt] (2) /home/lisa/tvm/build/libtvm.so(tvm::runtime::TypedPackedFunc<bool (tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)>::AssignTypedLambda<bool (*)(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)>(bool (*)(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&))::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}::operator()(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*) const+0x557) [0x7f7e8055a747]
[bt] (1) /home/lisa/tvm/build/libtvm.so(tvm::relay::TileRel(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)+0x72c) [0x7f7e80bfe0dc]
[bt] (0) /home/lisa/tvm/build/libtvm.so(+0x262b918) [0x7f7e80bf1918]
File "/home/lisa/tvm/src/relay/analysis/type_solver.cc", line 622
TVMError:
---------------------------------------------------------------
An internal invariant was violated during the execution of TVM.
Please read TVM's error reporting guidelines.
More details can be found here: https://discuss.tvm.ai/t/error-reporting/7793.
---------------------------------------------------------------
Check failed: false == false: [10:14:03] /home/lisa/tvm/src/relay/op/tensor/transform.cc:1622:
---------------------------------------------------------------
An internal invariant was violated during the execution of TVM.
Please read TVM's error reporting guidelines.
More details can be found here: https://discuss.tvm.ai/t/error-reporting/7793.
---------------------------------------------------------------
Check failed: val->value > 0 (0 vs. 0) : Tile reps value should always be larger than 0, but get: 0
''',
'''
Traceback (most recent call last):
File "/home/lisa/TVMfuzz/buggyFile2/26.py", line 153, in <module>
quantize_test_driver(in_dtype=\'\'\'int32\'\'\',quant_args={\'\'\'out_zero_point\'\'\':UBKqL,\'\'\'out_scale\'\'\':wunQz},axis=0,out_dtype=\'\'\'uint8\'\'\',in_data=vIWjt,verify_output_data=vIWjt)
File "/home/lisa/TVMfuzz/buggyFile2/26.py", line 143, in quantize_test_driver
(graph, lib, params) = relay.build(mod, 'llvm', params=None)
File "/home/lisa/tvm/python/tvm/relay/build_module.py", line 275, in build
graph_json, mod, params = bld_mod.build(mod, target, target_host, params)
File "/home/lisa/tvm/python/tvm/relay/build_module.py", line 138, in build
self._build(mod, target, target_host)
File "/home/lisa/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
[bt] (8) /home/lisa/tvm/build/libtvm.so(tvm::transform::Pass::operator()(tvm::IRModule) const+0x67) [0x7f9f6abf6ed7]
[bt] (7) /home/lisa/tvm/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x32f) [0x7f9f6ad063af]
[bt] (6) /home/lisa/tvm/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x27e) [0x7f9f6ad062fe]
[bt] (5) /home/lisa/tvm/build/libtvm.so(tvm::transform::ModulePassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x1d4) [0x7f9f6ad09534]
[bt] (4) /home/lisa/tvm/build/libtvm.so(+0x28dd442) [0x7f9f6b6ad442]
[bt] (3) /home/lisa/tvm/build/libtvm.so(+0x28dc358) [0x7f9f6b6ac358]
[bt] (2) /home/lisa/tvm/build/libtvm.so(tvm::relay::TypeInferencer::Infer(tvm::GlobalVar, tvm::relay::Function)+0x75) [0x7f9f6b6aba45]
[bt] (1) /home/lisa/tvm/build/libtvm.so(+0x1bdc8b1) [0x7f9f6a9ac8b1]
[bt] (0) /home/lisa/tvm/build/libtvm.so(+0x27418f8) [0x7f9f6b5118f8]
[bt] (8) /home/lisa/tvm/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x27e) [0x7f9f6ad062fe]
[bt] (7) /home/lisa/tvm/build/libtvm.so(tvm::transform::ModulePassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x1d4) [0x7f9f6ad09534]
[bt] (6) /home/lisa/tvm/build/libtvm.so(+0x28dd442) [0x7f9f6b6ad442]
[bt] (5) /home/lisa/tvm/build/libtvm.so(+0x28dc358) [0x7f9f6b6ac358]
[bt] (4) /home/lisa/tvm/build/libtvm.so(tvm::relay::TypeInferencer::Infer(tvm::GlobalVar, tvm::relay::Function)+0x75) [0x7f9f6b6aba45]
[bt] (3) /home/lisa/tvm/build/libtvm.so(tvm::relay::TypeSolver::Solve()+0x45c) [0x7f9f6b5142ec]
[bt] (2) /home/lisa/tvm/build/libtvm.so(tvm::runtime::TypedPackedFunc<bool (tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)>::AssignTypedLambda<bool (*)(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)>(bool (*)(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&))::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}::operator()(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*) const+0x557) [0x7f9f6ad64747]
[bt] (1) /home/lisa/tvm/build/libtvm.so(tvm::relay::qnn::QuantizeRel(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)+0x26d) [0x7f9f6b7ec58d]
[bt] (0) /home/lisa/tvm/build/libtvm.so(+0x2a1b0d8) [0x7f9f6b7eb0d8]
File "/home/lisa/tvm/src/relay/analysis/type_solver.cc", line 622
TVMError:
---------------------------------------------------------------
An internal invariant was violated during the execution of TVM.
Please read TVM's error reporting guidelines.
More details can be found here: https://discuss.tvm.ai/t/error-reporting/7793.
---------------------------------------------------------------
Check failed: false == false: [10:48:58] /home/lisa/tvm/src/relay/qnn/op/quantize.cc:49:
---------------------------------------------------------------
An internal invariant was violated during the execution of TVM.
Please read TVM's error reporting guidelines.
More details can be found here: https://discuss.tvm.ai/t/error-reporting/7793.
---------------------------------------------------------------
Check failed: input_dtype == DataType::Float(32) == false: Input type should be one of float32 but was int32
''',
'''
Traceback (most recent call last):
File "/home/lisa/TVMfuzz/buggyFile2/19.py", line 151, in <module>
Qi0Mu=random_bsr_matrix(239,128,32,1,0.1)
NameError: name 'random_bsr_matrix' is not defined
''',
'''
error: a constructor with the name `Cons` was previously defined
--> from_string:11:13
|
11 | Cons(A, List[A]),
| ^^^^
''',
'''
The type inference pass was unable to infer a type for this expression.
This usually occurs when an operator call is under constrained in some way, check other reported errors for hints of what may of happened.
The type inference pass was unable to infer a type for this expression.
This usually occurs when an operator call is under constrained in some way, check other reported errors for hints of what may of happened.
The type inference pass was unable to infer a type for this expression.
This usually occurs when an operator call is under constrained in some way, check other reported errors for hints of what may of happened.
The type inference pass was unable to infer a type for this expression.
This usually occurs when an operator call is under constrained in some way, check other reported errors for hints of what may of happened.
The type inference pass was unable to infer a type for this expression.
This usually occurs when an operator call is under constrained in some way, check other reported errors for hints of what may of happened.
The type inference pass was unable to infer a type for this expression.
This usually occurs when an operator call is under constrained in some way, check other reported errors for hints of what may of happened.
The type inference pass was unable to infer a type for this expression.
This usually occurs when an operator call is under constrained in some way, check other reported errors for hints of what may of happened.
The type inference pass was unable to infer a type for this expression.
This usually occurs when an operator call is under constrained in some way, check other reported errors for hints of what may of happened.
The type inference pass was unable to infer a type for this expression.
This usually occurs when an operator call is under constrained in some way, check other reported errors for hints of what may of happened.
The type inference pass was unable to infer a type for this expression.
This usually occurs when an operator call is under constrained in some way, check other reported errors for hints of what may of happened.
The type inference pass was unable to infer a type for this expression.
This usually occurs when an operator call is under constrained in some way, check other reported errors for hints of what may of happened.
The type inference pass was unable to infer a type for this expression.
This usually occurs when an operator call is under constrained in some way, check other reported errors for hints of what may of happened.
The type inference pass was unable to infer a type for this expression.
This usually occurs when an operator call is under constrained in some way, check other reported errors for hints of what may of happened.
The type inference pass was unable to infer a type for this expression.
This usually occurs when an operator call is under constrained in some way, check other reported errors for hints of what may of happened.
The type inference pass was unable to infer a type for this expression.
This usually occurs when an operator call is under constrained in some way, check other reported errors for hints of what may of happened.
The type inference pass was unable to infer a type for this expression.
This usually occurs when an operator call is under constrained in some way, check other reported errors for hints of what may of happened.
The type inference pass was unable to infer a type for this expression.
This usually occurs when an operator call is under constrained in some way, check other reported errors for hints of what may of happened.
The type inference pass was unable to infer a type for this expression.
This usually occurs when an operator call is under constrained in some way, check other reported errors for hints of what may of happened.
The type inference pass was unable to infer a type for this expression.
This usually occurs when an operator call is under constrained in some way, check other reported errors for hints of what may of happened.
The type inference pass was unable to infer a type for this expression.
This usually occurs when an operator call is under constrained in some way, check other reported errors for hints of what may of happened.
The type inference pass was unable to infer a type for this expression.
This usually occurs when an operator call is under constrained in some way, check other reported errors for hints of what may of happened.
The type inference pass was unable to infer a type for this expression.
This usually occurs when an operator call is under constrained in some way, check other reported errors for hints of what may of happened.
The type inference pass was unable to infer a type for this expression.
This usually occurs when an operator call is under constrained in some way, check other reported errors for hints of what may of happened.
Traceback (most recent call last):
File "/home/lisa/TVMfuzz/buggyFile2/7.py", line 3728, in <module>
LZ2cL=KEkqH.partition(mcksQ)
File "/home/lisa/tvm/python/tvm/relay/dataflow_pattern/__init__.py", line 171, in partition
return partition(self, expr, attrs, check)
File "/home/lisa/tvm/python/tvm/relay/dataflow_pattern/__init__.py", line 814, in partition
return ffi.partition(pattern, expr, attrs, check)
File "/home/lisa/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
raise get_last_ffi_error()
tvm.error.DiagnosticError: Traceback (most recent call last):
[bt] (8) /home/lisa/tvm/build/libtvm.so(tvm::relay::DFPatternMatcher::VisitDFPattern_(tvm::relay::DominatorPatternNode const*, tvm::RelayExpr const&)+0x1c) [0x7fa5bd868fec]
[bt] (7) /home/lisa/tvm/build/libtvm.so(tvm::relay::DFPatternMatcher::VisitDFPattern(tvm::relay::DFPattern const&, tvm::RelayExpr const&)+0x216) [0x7fa5bd8678b6]
[bt] (6) /home/lisa/tvm/build/libtvm.so(tvm::relay::DFPatternMatcher::VisitDFPattern_(tvm::relay::ShapePatternNode const*, tvm::RelayExpr const&)+0x3b) [0x7fa5bd86348b]
[bt] (5) /home/lisa/tvm/build/libtvm.so(tvm::relay::InferType(tvm::RelayExpr const&)+0x17b) [0x7fa5bd86302b]
[bt] (4) /home/lisa/tvm/build/libtvm.so(tvm::transform::ModulePassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x1d4) [0x7fa5bce01534]
[bt] (3) /home/lisa/tvm/build/libtvm.so(+0x28dd442) [0x7fa5bd7a5442]
[bt] (2) /home/lisa/tvm/build/libtvm.so(+0x28dc3c7) [0x7fa5bd7a43c7]
[bt] (1) /home/lisa/tvm/build/libtvm.so(tvm::DiagnosticContext::Render()+0x231) [0x7fa5bcdb24f1]
[bt] (0) /home/lisa/tvm/build/libtvm.so(+0x1eea0e8) [0x7fa5bcdb20e8]
File "/home/lisa/tvm/src/ir/diagnostic.cc", line 105
DiagnosticError: one or more error diagnostics were emitted, please check diagnostic render for output.
''',
'''
Traceback (most recent call last):
File "/home/lisa/TVMfuzz/buggyFile2/1.py", line 275, in <module>
q3ir3=pD1WT(nWW5C)
File "/home/lisa/tvm/python/tvm/ir/transform.py", line 127, in __call__
return _ffi_transform_api.RunPass(self, mod)
File "/home/lisa/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
[bt] (8) /home/lisa/tvm/build/libtvm.so(tvm::relay::ExprFunctor<tvm::RelayExpr (tvm::RelayExpr const&)>::InitVTable()::{lambda(tvm::runtime::ObjectRef const&, tvm::relay::ExprFunctor<tvm::RelayExpr (tvm::RelayExpr const&)>*)#5}::_FUN(tvm::runtime::ObjectRef const&, tvm::relay::ExprFunctor<tvm::RelayExpr (tvm::RelayExpr const&)>*)+0x2c) [0x7f11114b739c]
[bt] (7) /home/lisa/tvm/build/libtvm.so(tvm::relay::ExprMutator::VisitExpr_(tvm::relay::FunctionNode const*)+0x55f) [0x7f11116227ff]
[bt] (6) /home/lisa/tvm/build/libtvm.so(tvm::relay::MixedModeMutator::VisitExpr(tvm::RelayExpr const&)+0x1b1) [0x7f1111624211]
[bt] (5) /home/lisa/tvm/build/libtvm.so(tvm::relay::MixedModeMutator::VisitLeaf(tvm::RelayExpr const&)+0x47) [0x7f1111623447]
[bt] (4) /home/lisa/tvm/build/libtvm.so(tvm::relay::PostOrderRewriter::DispatchVisitExpr(tvm::RelayExpr const&)+0xff) [0x7f111162ad2f]
[bt] (3) /home/lisa/tvm/build/libtvm.so(tvm::relay::ExprRewriter::InitVTable()::{lambda(tvm::runtime::ObjectRef const&, tvm::relay::ExprRewriter*, tvm::RelayExpr const&)#6}::_FUN(tvm::runtime::ObjectRef const&, tvm::relay::ExprRewriter*, tvm::RelayExpr const&)+0x2c) [0x7f1111376bec]
[bt] (2) /home/lisa/tvm/build/libtvm.so(tvm::relay::legalize::Legalizer::Rewrite_(tvm::relay::CallNode const*, tvm::RelayExpr const&)+0x7a0) [0x7f11114c2e60]
[bt] (1) /home/lisa/tvm/build/libtvm.so(tvm::RelayExprNode::checked_type() const+0x157) [0x7f11113565e7]
[bt] (0) /home/lisa/tvm/build/libtvm.so(+0x2702338) [0x7f1111354338]
File "/home/lisa/tvm/include/tvm/ir/expr.h", line 476
TVMError:
---------------------------------------------------------------
An internal invariant was violated during the execution of TVM.
Please read TVM's error reporting guidelines.
More details can be found here: https://discuss.tvm.ai/t/error-reporting/7793.
---------------------------------------------------------------
Check failed: checked_type_.defined() == false: internal error: the type checker has not populated the checked_type field for Var(data, ty=TensorType([10, 3], uint8))
''',
]
bugid = 1 | 90.595483 | 957 | 0.682525 | 12,529 | 88,240 | 4.756485 | 0.073509 | 0.059201 | 0.074202 | 0.048931 | 0.900292 | 0.893748 | 0.891147 | 0.88576 | 0.8769 | 0.851965 | 0 | 0.070372 | 0.132956 | 88,240 | 974 | 958 | 90.595483 | 0.708553 | 0.000125 | 0 | 0 | 0 | 0 | 0.13108 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.013514 | 0 | 0.013514 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
72a3e479de3af99250ee0e730fe83b7c42e49a71 | 6,966 | py | Python | owapi/prestige.py | gameleap/OWAPI | 6ff46ffcea9b4adc9c00a1730459c44fabbe45fe | [
"MIT"
] | 2 | 2019-01-21T13:46:25.000Z | 2019-08-20T14:10:31.000Z | owapi/prestige.py | gameleap/OWAPI | 6ff46ffcea9b4adc9c00a1730459c44fabbe45fe | [
"MIT"
] | null | null | null | owapi/prestige.py | gameleap/OWAPI | 6ff46ffcea9b4adc9c00a1730459c44fabbe45fe | [
"MIT"
] | null | null | null | """
This contains the constants for the prestige ranks.
Each key is the hash of an image used as the background for a rank.
PRESTIGE_BORDERS maps a border image hash to the base prestige level,
and PRESTIGE_STARS maps a star image hash to the number of prestige stars.
"""
PRESTIGE_BORDERS = {
# Base level Bronze = 0
"1055f5ae3a84b7bd8afa9fcbd2baaf9a412c63e8fe5411025b3264db12927771": 0, # Bronze Lv 1
"69c2c1aff0db8429a980bad7db76a3388003e43f0034097dc4cfa7f13c5de7d7": 0, # Bronze Lv 11
"4d63c2aadf536e87c84bdb7157c7b688cffb286e17a5362d2fa5c5281f4fc2a2": 0, # Bronze Lv 21
"78ebb45dd26b0050404305fdc1cb9ddc311d2c7e62400fd6348a3a488c69eee7": 0, # Bronze Lv 31
"888c84f2dfd211cde0c595036574040ca96b1698578daab90ce6822d89f7fe0e": 0, # Bronze Lv 41
"3fdfdd16c34ab7cdc9b7be3c04197e900928b368285ce639c1d3e1c0619eea6d": 0, # Bronze Lv 51
"e8b7df4b88998380658d49d00e7bc483c740432ac417218e94fab4137bec4ae0": 0, # Bronze Lv 61
"45cc69ca29f3981fa085b5337d2303a4eb555853daae1c29351b7ba46b27bbcd": 0, # Bronze Lv 71
"8b4be1017beff0bcd1f7a48d8cdf7faf9f22c1ffd2bdeaaff2684da5cddeaa76": 0, # Bronze Lv 81
"1b00b8cab530e98c378de2f3e8834d92ee41b4cd7b118179a8ecbccee83c8104": 0, # Bronze Lv 91
# Base level Silver = 6
"f5d80c8b7370cda9a491bdf89e02bcd8c6ba1708189d907c7e4f55a719030264": 6, # Silver Lv 1
"ddb6f3f79241b8af2fa77b52910f60a2332db5d8347b3039d1328ae6d1272a59": 6, # Silver Lv 11
"c59072a340e6187116f5ae7456674dd6e1cba4b15781922d63fb94f56d9539c0": 6, # Silver Lv 21
"624461e537900ce98e3178d1a298cba4830c14f6a81a8b36319da6273bed255a": 6, # Silver Lv 31
"ba68d2c0f1b55e1991161cb1f88f369b97311452564b200ea1da226eb493e2e8": 6, # Silver Lv 41
"3c078f588353feeb3f52b0198fade12a78573a01c53050aca890969a395ff66a": 6, # Silver Lv 51
"f9bc9c6bb95f07f4e882b9e003ba7fa5ca6552fb8e0c27473a8b031714670116": 6, # Silver Lv 61
"8aa9f56cdd250579dd8b0ce6bd835934fffe8c27b9ce609f046c19a4a81591f8": 6, # Silver Lv 71
"32f84a58719318fa0aeee530ed3240952ba9945b998cd9e8150ebb583db0d4f6": 6, # Silver Lv 81
"c95fa44c02a1eae89a7c8d503026f181f1cc565da93d47c6254fab2c3d8793ef": 6, # Silver Lv 91
# Base level Gold = 12
"5ab5c29e0e1e33f338ae9afc37f51917b151016aef42d10d361baac3e0965df1": 12, # Gold Lv 1
"7fd73e680007054dbb8ac5ea8757a565858b9d7dba19f389228101bda18f36b0": 12, # Gold Lv 11
"0ada1b8721830853d3fbcfabf616e1841f2100279cff15b386093f69cc6c09ad": 12, # Gold Lv 21
"7095ee84fc0a3aaac172120ffe0daa0d9abca33112e878cd863cd925cd8404b6": 12, # Gold Lv 31
"fa410247dd3f5b7bf2eb1a65583f3b0a3c8800bcd6b512ab1c1c4d9dd81675ae": 12, # Gold Lv 41
"a938ef37b673a240c4ade00d5a95f330b1e1ba93da9f0d3754bdb8a77bbbd7a1": 12, # Gold Lv 51
"49afee29dc05547ceebe6c1f61a54f7105a0e1b7f2c8509ff2b4aeaf4d384c8e": 12, # Gold Lv 61
"2c1464fb96d38839281c0bdb6e1a0cd06769782a5130609c13f6ca76fa358bcf": 12, # Gold Lv 71
"98f6eea1a2a10576251d6c690c13d52aaac19b06811ed2b684b43e7a9318f622": 12, # Gold Lv 81
"6e1036eab98de41694d785e076c32dbabe66962d38325117436b31210b003ad4": 12, # Gold Lv 91
# Base level Platinum = 18
"69fde7abebb0bb5aa870e62362e84984cae13e441aec931a5e2c9dc5d22a56dc": 18, # Platinum Lv 1
"9c84055f9d91a297ccd1bac163c144e52bcce981dc385ff9e2957c5bd4433452": 18, # Platinum Lv 11
"97c803711cddc691bc458ec83dec73c570b0cc07219632c274bb5c5534786984": 18, # Platinum Lv 21
"c562ec882ababf2030e40ad3ce27e38176899f732166a1b335fd8f83735261f3": 18, # Platinum Lv 31
"da2cb4ab3281329c367cea51f9438c3d20d29ee07f55fa65762481777663f7f9": 18, # Platinum Lv 41
"460670e4d61b9bf0bcde6d93a52e50f01541177a20aaf69bbda91fe4353ed2b0": 18, # Platinum Lv 51
"5a019024b384de73f4348ed981ae58ec458a7ae6db68e0c44cda4d7062521b04": 18, # Platinum Lv 61
"1d5a458ecaf00fe0ef494b4159412d30a4b58ee76b9f0ff44b1db14ed211273c": 18, # Platinum Lv 71
"f1d43d87bbe5868cb99062ac02099001dd9f8215831347d8978e895468e81ef6": 18, # Platinum Lv 81
"27b2d05f97179aae72c8f72b69978777e1c5022f77e84f28e5943be8e9cd1d49": 18, # Platinum Lv 91
# Base level Diamond = 24
"5c83959aa079f9ed9fd633411289920568e616c5117b2a7bb280dd8c857f8406": 24, # Diamond Lv 1
"ac14208753baf77110880020450fa4aa0121df0c344c32a2d20f77c18ba75db5": 24, # Diamond Lv 11
"a42bcb3339e1b3c999fc2799b0787fd862e163ec504d7541fa3ea8893b83957a": 24, # Diamond Lv 21
"7f1cc30ed6981974b6950666bb8236a6aa7b5a8579b14969394212dd7fa2951d": 24, # Diamond Lv 31
"efe3ab1c85c6266199ac7539566d4c811b0ee17bc5fb3e3e7a48e9bc2473cf50": 24, # Diamond Lv 41
"c7b9df20c91b10dc25bfdc847d069318ed9e8e69c5cad760803470caa9576e48": 24, # Diamond Lv 51
"413bdc1e11f9b190ed2c6257a9f7ea021fd9fcef577d50efcf30a5ea8df989a4": 24, # Diamond Lv 61
"625645c3c9af49eb315b504dba32137bb4081d348ec5b9750196b0ec0c9bb6a6": 24, # Diamond Lv 71
"f9813603e19350bb6d458bbee3c8c2a177b6503e6ff54868e8d176fa424a0191": 24, # Diamond Lv 81
"9e8600f97ea4a84d822d8b336f2b1dbfe7372fb9f2b6bf1d0336193567f6f943": 24, # Diamond Lv 91 / Max
}
PRESTIGE_STARS = {
# Prestige modifiers
"8de2fe5d938256a5725abe4b3655ee5e9067b7a1f4d5ff637d974eb9c2e4a1ea": 1, # 1 Bronze star
"755825d4a6768a22de17b48cfbe66ad85a54310ba5a8f8ab1e9c9a606b389354": 2, # 2 Bronze stars
"4a2c852a16043f613b7bfac33c8536dd9f9621a3d567174cb4ad9a80e3b13102": 3, # 3 Bronze stars
"bc80149bbd78d2f940984712485bce23ddaa6f2bd0edd1c0494464ef55251eef": 4, # 4 Bronze stars
"d35d380b7594b8f6af2d01040d80a5bfb6621553406c0905d4764bdc92a4ede8": 5, # 5 Bronze stars
"426c754c76cd12e6aacd30293a67363571341eea37880df549d3e02015a588fe": 1, # 1 Silver star
"c137dd97008328ed94efc5a9ec446e024c9ac92fce89fa5b825c5b1d7ff8d807": 2, # 2 Silver stars
"9a7c57aee22733a47c2b562000861d687d0423a74eb5e609c425f10db5528ed9": 3, # 3 Silver stars
"b944cf1de6653b629c951fd14583069bc91b1f1b7efdb171203448b2dbc39917": 4, # 4 Silver stars
"9b838b75065248ec14360723e4caf523239128ff8c13bda36cfd0b59ef501c1e": 5, # 5 Silver stars
"1858704e180db3578839aefdb83b89054f380fbb3d4c46b3ee12d34ed8af8712": 1, # 1 Gold/Platinum star
"e8568b9f9f5cac7016955f57c7b192ccd70f7b38504c7849efa8b1e3f7a1b077": 2, # 2 Gold/Platinum stars
"a25388825a0e00c946a23f5dd74c5b63f77f564231e0fd01e42ff2d1c9f10d38": 3, # 3 Gold/Platinum stars
"cff520765f143c521b25ad19e560abde9a90eeae79890b14146a60753d7baff8": 4, # 4 Gold/Platinum stars
"35fd7b9b98f57389c43e5a8e7ca989ca593c9f530985adf4670845bb598e1a9d": 5, # 5 Gold/Platinum stars
"8033fa55e3de5e7655cd694340870da851cdef348d7dcb76411f3a9c2c93002c": 1, # 1 Diamond star
"605c201cf3f0d24b318f643acb812084ff284e660f2bb5d62b487847d33fad29": 2, # 2 Diamond stars
"1c8c752d0f2757dc0bcc9e3db76f81c3802c874164a3b661475e1c7bd67c571f": 3, # 3 Diamond stars
"58b1323ab2eb1298fa6be649a8d4d7f0e623523bd01964ed8fefd5175d9073c0": 4, # 4 Diamond stars
"cd877430ccc400c10e24507dba972e24a4543edc05628045300f1349cf003f3a": 5, # 5 Diamond stars
} | 73.326316 | 101 | 0.806345 | 411 | 6,966 | 13.6618 | 0.304136 | 0.012467 | 0.016029 | 0.009261 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.507251 | 0.138817 | 6,966 | 95 | 102 | 73.326316 | 0.428738 | 0.185185 | 0 | 0 | 0 | 0 | 0.800858 | 0.800858 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f41312a09adf8e5c1e534131420443fc92936b97 | 3,090 | py | Python | genonets/test/test_dna.py | fkhalid/genonets | 0dcd2e35ebf6957b8d0934e6033e2c962938c18a | [
"MIT"
] | 4 | 2016-03-01T10:43:40.000Z | 2021-07-17T14:53:04.000Z | genonets/test/test_dna.py | fkhalid/genonets | 0dcd2e35ebf6957b8d0934e6033e2c962938c18a | [
"MIT"
] | 15 | 2016-04-13T10:54:49.000Z | 2020-11-07T16:17:34.000Z | genonets/test/test_dna.py | fkhalid/genonets | 0dcd2e35ebf6957b8d0934e6033e2c962938c18a | [
"MIT"
] | 1 | 2016-03-01T10:46:44.000Z | 2016-03-01T10:46:44.000Z |
import tempfile
import genonets.test.utils as utils
import genonets.test.compare_result_files as comparator
from genonets.cmdl_handler import CmdParser
from genonets.interface import Genonets
class TestDNA:
@staticmethod
def run_test(cmd_args, ground_truth_dir, data_dir):
args = CmdParser(arguments=cmd_args).get_args()
gn = Genonets(args)
gn.create()
gn.analyze()
gn.save_network_results()
gn.save_genotype_results()
assert utils.num_files_matches(ground_truth_dir, data_dir)
assert comparator.compare_genotype_set_measures(
ground_truth_dir, data_dir
)
assert comparator.compare_genotype_measures(
ground_truth_dir, data_dir
)
assert comparator.compare_overlap_results(
ground_truth_dir, data_dir
)
@staticmethod
def test_no_indels_no_rc():
ground_truth_dir = 'genonets/test/data/ground_truth/dna/mus/no_indels_no_rc'
with tempfile.TemporaryDirectory(prefix='test_dna_') as data_dir:
cmd_args = [
'--alphabet=DNA',
'--tau=0.35',
'--input-file=genonets/test/data/inputs/dna/input_sample_dna-mus.tsv',
f'--output-path={data_dir}'
]
TestDNA.run_test(cmd_args, ground_truth_dir, data_dir)
@staticmethod
def test_no_indels_with_rc():
ground_truth_dir = 'genonets/test/data/ground_truth/dna/mus/no_indels_with_rc'
with tempfile.TemporaryDirectory(prefix='test_dna_') as data_dir:
cmd_args = [
'--alphabet=DNA',
'--tau=0.35',
'--input-file=genonets/test/data/inputs/dna/input_sample_dna-mus.tsv',
'--use-reverse-complements',
f'--output-path={data_dir}'
]
TestDNA.run_test(cmd_args, ground_truth_dir, data_dir)
@staticmethod
def test_with_indels_no_rc():
ground_truth_dir = 'genonets/test/data/ground_truth/dna/mus/with_indels_no_rc'
with tempfile.TemporaryDirectory(prefix='test_dna_') as data_dir:
cmd_args = [
'--alphabet=DNA',
'--tau=0.35',
'--input-file=genonets/test/data/inputs/dna/input_sample_dna-mus.tsv',
'--include-indels',
f'--output-path={data_dir}'
]
TestDNA.run_test(cmd_args, ground_truth_dir, data_dir)
@staticmethod
def test_with_indels_with_rc():
ground_truth_dir = 'genonets/test/data/ground_truth/dna/mus/with_indels_with_rc'
with tempfile.TemporaryDirectory(prefix='test_dna_') as data_dir:
cmd_args = [
'--alphabet=DNA',
'--tau=0.35',
'--input-file=genonets/test/data/inputs/dna/input_sample_dna-mus.tsv',
'--include-indels',
'--use-reverse-complements',
f'--output-path={data_dir}'
]
TestDNA.run_test(cmd_args, ground_truth_dir, data_dir)
| 32.526316 | 88 | 0.61165 | 369 | 3,090 | 4.788618 | 0.176152 | 0.105829 | 0.102999 | 0.091681 | 0.805886 | 0.805886 | 0.805886 | 0.805886 | 0.805886 | 0.693265 | 0 | 0.005408 | 0.281877 | 3,090 | 94 | 89 | 32.87234 | 0.790897 | 0 | 0 | 0.555556 | 0 | 0 | 0.260926 | 0.207834 | 0 | 0 | 0 | 0 | 0.055556 | 1 | 0.069444 | false | 0 | 0.069444 | 0 | 0.152778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f46037393f1766f046850e4a59eb4e9d5a3149be | 231 | py | Python | transequilibrium/escaping.py | barisione/transequilibrium | a930fd08ac4806c23c1bdcba558224241a2701cc | [
"MIT"
] | null | null | null | transequilibrium/escaping.py | barisione/transequilibrium | a930fd08ac4806c23c1bdcba558224241a2701cc | [
"MIT"
] | null | null | null | transequilibrium/escaping.py | barisione/transequilibrium | a930fd08ac4806c23c1bdcba558224241a2701cc | [
"MIT"
] | null | null | null | try:
import html
#pylint: disable=invalid-name
html_unescape = html.unescape
except (ImportError, NameError):
import HTMLParser
#pylint: disable=invalid-name
html_unescape = HTMLParser.HTMLParser().unescape
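# Illustrative usage, not part of the original module: `html_unescape`
# resolves HTML entities the same way whether it was bound to Python 3's
# html.unescape or Python 2's HTMLParser.HTMLParser().unescape.
if __name__ == '__main__':
    assert html_unescape('&lt;b&gt; &amp; &quot;') == '<b> & "'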
| 25.666667 | 52 | 0.731602 | 25 | 231 | 6.68 | 0.48 | 0.215569 | 0.239521 | 0.287425 | 0.431138 | 0.431138 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 231 | 8 | 53 | 28.875 | 0.883598 | 0.242424 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
f474322c47c7644bf34dfd2855dc448bf4505612 | 2,869 | py | Python | tests/test_filters.py | HazyResearch/pytorch_radon | bca1626c036e119896acd8c7eef0d08e8541c2c7 | [
"MIT"
] | 1 | 2021-03-26T00:16:14.000Z | 2021-03-26T00:16:14.000Z | tests/test_filters.py | HazyResearch/pytorch_radon | bca1626c036e119896acd8c7eef0d08e8541c2c7 | [
"MIT"
] | null | null | null | tests/test_filters.py | HazyResearch/pytorch_radon | bca1626c036e119896acd8c7eef0d08e8541c2c7 | [
"MIT"
] | 1 | 2021-11-15T17:09:03.000Z | 2021-11-15T17:09:03.000Z | import unittest
from pytorch_radon import Radon, IRadon
from pytorch_radon.filters import RampFilter, HannFilter, LearnableFilter
from pytorch_radon.filters import RampButterflyFilter, HannButterflyFilter
import torch
class TestStackgram(unittest.TestCase):
def test_ramp_filter(self):
img = torch.zeros(1,1,256,256)
img[:, :, 120:130, 120:130] = 1
circle = True
theta = torch.arange(180)
r = Radon(img.shape[2], theta, circle)
ir = IRadon(img.shape[2], theta, circle, use_filter=RampFilter())
reco = ir(r(img))
self.assertAlmostEqual(torch.nn.MSELoss()(img, reco).item(), 0, places=4)
def test_hann_filter(self):
img = torch.zeros(1,1,256,256)
img[:, :, 120:130, 120:130] = 1
circle = True
theta = torch.arange(180)
r = Radon(img.shape[2], theta, circle)
ir = IRadon(img.shape[2], theta, circle, use_filter=HannFilter())
reco = ir(r(img))
self.assertAlmostEqual(torch.nn.MSELoss()(img, reco).item(), 0, places=3)
def test_learnable_filter(self):
img = torch.zeros(1,1,256,256)
img[:, :, 120:130, 120:130] = 1
circle = True
theta = torch.arange(180)
r = Radon(img.shape[2], theta, circle)
ir = IRadon(img.shape[2], theta, circle, use_filter=LearnableFilter(img.shape[2]))
reco = ir(r(img))
self.assertAlmostEqual(torch.nn.MSELoss()(img, reco).item(), 0, places=4)
def test_ramp_butterfly_filter(self):
img = torch.zeros(1,1,256,256)
img[:, :, 120:130, 120:130] = 1
circle = True
theta = torch.arange(180)
r = Radon(img.shape[2], theta, circle)
ir = IRadon(img.shape[2], theta, circle, use_filter=RampButterflyFilter(img.shape[2]))
reco = ir(r(img))
self.assertAlmostEqual(torch.nn.MSELoss()(img, reco).item(), 0, places=4)
# Check that it's close to using RampFilter
ir_og = IRadon(img.shape[2], theta, circle, use_filter=RampFilter())
reco_og = ir_og(r(img))
self.assertAlmostEqual(torch.nn.MSELoss()(reco, reco_og).item(), 0, places=4)
def test_hann_butterfly_filter(self):
img = torch.zeros(1,1,256,256)
img[:, :, 120:130, 120:130] = 1
circle = True
theta = torch.arange(180)
r = Radon(img.shape[2], theta, circle)
ir = IRadon(img.shape[2], theta, circle, use_filter=HannButterflyFilter(img.shape[2]))
reco = ir(r(img))
self.assertAlmostEqual(torch.nn.MSELoss()(img, reco).item(), 0, places=3)
# Check that it's close to using HannFilter
ir_og = IRadon(img.shape[2], theta, circle, use_filter=HannFilter())
reco_og = ir_og(r(img))
self.assertAlmostEqual(torch.nn.MSELoss()(reco, reco_og).item(), 0, places=4)
if __name__ == '__main__':
unittest.main()
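# The suite above can also be run without executing the file directly, e.g.
# from the repository root (invocation shown for illustration):
#   python -m unittest tests.test_filters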
| 42.191176 | 94 | 0.621819 | 399 | 2,869 | 4.37594 | 0.150376 | 0.068729 | 0.07732 | 0.09622 | 0.833333 | 0.800115 | 0.800115 | 0.764032 | 0.764032 | 0.764032 | 0 | 0.06739 | 0.229348 | 2,869 | 67 | 95 | 42.820896 | 0.722298 | 0.02893 | 0 | 0.661017 | 0 | 0 | 0.002875 | 0 | 0 | 0 | 0 | 0 | 0.118644 | 1 | 0.084746 | false | 0 | 0.084746 | 0 | 0.186441 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
be9221f3b123103ce5774297f0ac1318df08c14a | 4,457 | py | Python | tests/components/sleepiq/test_number.py | MrDelik/core | 93a66cc357b226389967668441000498a10453bb | [
"Apache-2.0"
] | 3 | 2021-11-22T22:37:43.000Z | 2022-03-17T00:55:28.000Z | tests/components/sleepiq/test_number.py | MrDelik/core | 93a66cc357b226389967668441000498a10453bb | [
"Apache-2.0"
] | 14 | 2022-01-26T06:25:32.000Z | 2022-03-31T06:27:51.000Z | tests/components/sleepiq/test_number.py | MrDelik/core | 93a66cc357b226389967668441000498a10453bb | [
"Apache-2.0"
] | 3 | 2022-01-02T18:49:54.000Z | 2022-01-25T02:03:54.000Z | """The tests for SleepIQ number platform."""
from homeassistant.components.number import DOMAIN
from homeassistant.components.number.const import ATTR_VALUE, SERVICE_SET_VALUE
from homeassistant.const import ATTR_ENTITY_ID, ATTR_FRIENDLY_NAME, ATTR_ICON
from homeassistant.helpers import entity_registry as er
from tests.components.sleepiq.conftest import (
BED_ID,
BED_NAME,
BED_NAME_LOWER,
SLEEPER_L_ID,
SLEEPER_L_NAME,
SLEEPER_L_NAME_LOWER,
SLEEPER_R_ID,
SLEEPER_R_NAME,
SLEEPER_R_NAME_LOWER,
setup_platform,
)
async def test_firmness(hass, mock_asyncsleepiq):
"""Test the SleepIQ firmness number values for a bed with two sides."""
entry = await setup_platform(hass, DOMAIN)
entity_registry = er.async_get(hass)
state = hass.states.get(
f"number.sleepnumber_{BED_NAME_LOWER}_{SLEEPER_L_NAME_LOWER}_firmness"
)
assert state.state == "40.0"
assert state.attributes.get(ATTR_ICON) == "mdi:bed"
assert (
state.attributes.get(ATTR_FRIENDLY_NAME)
== f"SleepNumber {BED_NAME} {SLEEPER_L_NAME} Firmness"
)
entry = entity_registry.async_get(
f"number.sleepnumber_{BED_NAME_LOWER}_{SLEEPER_L_NAME_LOWER}_firmness"
)
assert entry
assert entry.unique_id == f"{SLEEPER_L_ID}_firmness"
state = hass.states.get(
f"number.sleepnumber_{BED_NAME_LOWER}_{SLEEPER_R_NAME_LOWER}_firmness"
)
assert state.state == "80.0"
assert state.attributes.get(ATTR_ICON) == "mdi:bed"
assert (
state.attributes.get(ATTR_FRIENDLY_NAME)
== f"SleepNumber {BED_NAME} {SLEEPER_R_NAME} Firmness"
)
entry = entity_registry.async_get(
f"number.sleepnumber_{BED_NAME_LOWER}_{SLEEPER_R_NAME_LOWER}_firmness"
)
assert entry
assert entry.unique_id == f"{SLEEPER_R_ID}_firmness"
await hass.services.async_call(
DOMAIN,
SERVICE_SET_VALUE,
{
ATTR_ENTITY_ID: f"number.sleepnumber_{BED_NAME_LOWER}_{SLEEPER_L_NAME_LOWER}_firmness",
ATTR_VALUE: 42,
},
blocking=True,
)
await hass.async_block_till_done()
mock_asyncsleepiq.beds[BED_ID].sleepers[0].set_sleepnumber.assert_called_once()
mock_asyncsleepiq.beds[BED_ID].sleepers[0].set_sleepnumber.assert_called_with(42)
async def test_actuators(hass, mock_asyncsleepiq):
"""Test the SleepIQ actuator position values for a bed with adjustable head and foot."""
entry = await setup_platform(hass, DOMAIN)
entity_registry = er.async_get(hass)
state = hass.states.get(f"number.sleepnumber_{BED_NAME_LOWER}_right_head_position")
assert state.state == "60.0"
assert state.attributes.get(ATTR_ICON) == "mdi:bed"
assert (
state.attributes.get(ATTR_FRIENDLY_NAME)
== f"SleepNumber {BED_NAME} Right Head Position"
)
entry = entity_registry.async_get(
f"number.sleepnumber_{BED_NAME_LOWER}_right_head_position"
)
assert entry
assert entry.unique_id == f"{BED_ID}_R_H"
state = hass.states.get(f"number.sleepnumber_{BED_NAME_LOWER}_left_head_position")
assert state.state == "50.0"
assert state.attributes.get(ATTR_ICON) == "mdi:bed"
assert (
state.attributes.get(ATTR_FRIENDLY_NAME)
== f"SleepNumber {BED_NAME} Left Head Position"
)
entry = entity_registry.async_get(
f"number.sleepnumber_{BED_NAME_LOWER}_left_head_position"
)
assert entry
assert entry.unique_id == f"{BED_ID}_L_H"
state = hass.states.get(f"number.sleepnumber_{BED_NAME_LOWER}_foot_position")
assert state.state == "10.0"
assert state.attributes.get(ATTR_ICON) == "mdi:bed"
assert (
state.attributes.get(ATTR_FRIENDLY_NAME)
== f"SleepNumber {BED_NAME} Foot Position"
)
entry = entity_registry.async_get(
f"number.sleepnumber_{BED_NAME_LOWER}_foot_position"
)
assert entry
assert entry.unique_id == f"{BED_ID}_F"
await hass.services.async_call(
DOMAIN,
SERVICE_SET_VALUE,
{
ATTR_ENTITY_ID: f"number.sleepnumber_{BED_NAME_LOWER}_right_head_position",
ATTR_VALUE: 42,
},
blocking=True,
)
await hass.async_block_till_done()
mock_asyncsleepiq.beds[BED_ID].foundation.actuators[
0
].set_position.assert_called_once()
mock_asyncsleepiq.beds[BED_ID].foundation.actuators[
0
].set_position.assert_called_with(42)
| 32.532847 | 99 | 0.702715 | 591 | 4,457 | 4.945854 | 0.142132 | 0.061581 | 0.104687 | 0.086213 | 0.807048 | 0.781731 | 0.751625 | 0.751625 | 0.748888 | 0.743072 | 0 | 0.007565 | 0.199237 | 4,457 | 136 | 100 | 32.772059 | 0.811432 | 0.008526 | 0 | 0.429825 | 0 | 0 | 0.248237 | 0.176775 | 0 | 0 | 0 | 0 | 0.254386 | 1 | 0 | false | 0 | 0.04386 | 0 | 0.04386 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
be937f2d97d9d6379e94b2bedd4be77b2173a071 | 83 | py | Python | experiment/__init__.py | XiaoLiSean/Cognitive-Map | 6b2019e5b3a46902b06c8d5d1e86b39425042de9 | [
"MIT"
] | null | null | null | experiment/__init__.py | XiaoLiSean/Cognitive-Map | 6b2019e5b3a46902b06c8d5d1e86b39425042de9 | [
"MIT"
] | null | null | null | experiment/__init__.py | XiaoLiSean/Cognitive-Map | 6b2019e5b3a46902b06c8d5d1e86b39425042de9 | [
"MIT"
] | 1 | 2021-11-04T06:25:31.000Z | 2021-11-04T06:25:31.000Z | from .node_generation import *
from .data_collection import *
from .robot import *
| 20.75 | 30 | 0.783133 | 11 | 83 | 5.727273 | 0.636364 | 0.31746 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.144578 | 83 | 3 | 31 | 27.666667 | 0.887324 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bea99288a733f1bfeca2a171af4fd5827f61e997 | 62 | py | Python | dpd/oracles/__init__.py | AkshatSh/DPD | 5ec8b2105c841b78c33c78815381f45e1196e159 | [
"MIT"
] | null | null | null | dpd/oracles/__init__.py | AkshatSh/DPD | 5ec8b2105c841b78c33c78815381f45e1196e159 | [
"MIT"
] | 5 | 2019-05-06T00:56:37.000Z | 2019-05-06T09:29:36.000Z | dpd/oracles/__init__.py | AkshatSh/DPD | 5ec8b2105c841b78c33c78815381f45e1196e159 | [
"MIT"
] | null | null | null | from .oracle import Oracle
from .gold_oracle import GoldOracle | 31 | 35 | 0.854839 | 9 | 62 | 5.777778 | 0.555556 | 0.461538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.112903 | 62 | 2 | 35 | 31 | 0.945455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fe3f4461b979304d64512419c464a0f19d778d68 | 25 | py | Python | kalliope/neurons/debug/__init__.py | joshuaboniface/kalliope | 0e040be3165e838485d1e5addc4d2c5df12bfd84 | [
"MIT"
] | 1 | 2020-03-30T15:03:19.000Z | 2020-03-30T15:03:19.000Z | kalliope/neurons/debug/__init__.py | joshuaboniface/kalliope | 0e040be3165e838485d1e5addc4d2c5df12bfd84 | [
"MIT"
] | 6 | 2021-03-18T21:25:05.000Z | 2022-03-11T23:34:07.000Z | src/tools/__init__.py | kilfu0701/wp-import-export | f212c0c7434967fc51973b2c23c41a2929b8db68 | [
"MIT"
] | null | null | null | from .debug import Debug
| 12.5 | 24 | 0.8 | 4 | 25 | 5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 25 | 1 | 25 | 25 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fe449abda27ae32040f5df912787d654b95c6118 | 5,571 | py | Python | test/test_random.py | silky/fatiando | 5041c6b29758a5e73e9d7b2b906fa5e493fd9aba | [
"BSD-3-Clause"
] | 1 | 2019-06-27T11:32:56.000Z | 2019-06-27T11:32:56.000Z | test/test_random.py | silky/fatiando | 5041c6b29758a5e73e9d7b2b906fa5e493fd9aba | [
"BSD-3-Clause"
] | null | null | null | test/test_random.py | silky/fatiando | 5041c6b29758a5e73e9d7b2b906fa5e493fd9aba | [
"BSD-3-Clause"
] | null | null | null | import numpy
from fatiando import utils, gridder
def test_utils_circular_points():
"utils.circular_points return diff sequence"
area = [-1000, 1200, -40, 200]
size = 1300
for i in xrange(20):
x1, y1 = utils.circular_points(area, size, random=True).T
x2, y2 = utils.circular_points(area, size, random=True).T
assert numpy.all(x1 != x2) and numpy.all(y1 != y2)
def test_utils_circular_points_seed():
"utils.circular_points returns same sequence using same random seed"
area = [0, 1000, 0, 1000]
size = 1000
for seed in numpy.random.randint(low=0, high=10000, size=20):
x1, y1 = utils.circular_points(area, size, random=True, seed=seed).T
x2, y2 = utils.circular_points(area, size, random=True, seed=seed).T
assert numpy.all(x1 == x2) and numpy.all(y1 == y2)
def test_utils_circular_points_seed_noseed():
"utils.circular_points returns diff sequence after using random seed"
area = [0, 1000, 0, 1000]
size = 1000
seed = 1242
x1, y1 = utils.circular_points(area, size, random=True, seed=seed).T
x2, y2 = utils.circular_points(area, size, random=True, seed=seed).T
assert numpy.all(x1 == x2) and numpy.all(y1 == y2)
x3, y3 = utils.circular_points(area, size, random=True).T
assert numpy.all(x1 != x3) and numpy.all(y1 != y3)
def test_utils_random_points():
"utils.random_points return diff sequence"
area = [-1000, 1200, -40, 200]
size = 1300
for i in xrange(20):
x1, y1 = utils.random_points(area, size).T
x2, y2 = utils.random_points(area, size).T
assert numpy.all(x1 != x2) and numpy.all(y1 != y2)
def test_utils_random_points_seed():
"utils.random_points returns same sequence using same random seed"
area = [0, 1000, 0, 1000]
size = 1000
for seed in numpy.random.randint(low=0, high=10000, size=20):
x1, y1 = utils.random_points(area, size, seed=seed).T
x2, y2 = utils.random_points(area, size, seed=seed).T
assert numpy.all(x1 == x2) and numpy.all(y1 == y2)
def test_utils_random_points_seed_noseed():
"utils.random_points returns diff sequence after using random seed"
area = [0, 1000, 0, 1000]
size = 1000
seed = 1242
x1, y1 = utils.random_points(area, size, seed=seed).T
x2, y2 = utils.random_points(area, size, seed=seed).T
assert numpy.all(x1 == x2) and numpy.all(y1 == y2)
x3, y3 = utils.random_points(area, size).T
assert numpy.all(x1 != x3) and numpy.all(y1 != y3)
def test_gridder_scatter():
"gridder.scatter returns diff sequence"
area = [-1000, 1200, -40, 200]
size = 1300
for i in xrange(20):
x1, y1 = gridder.scatter(area, size)
x2, y2 = gridder.scatter(area, size)
assert numpy.all(x1 != x2) and numpy.all(y1 != y2)
def test_gridder_scatter_seed():
"gridder.scatter returns same sequence using same random seed"
area = [0, 1000, 0, 1000]
size = 1000
for seed in numpy.random.randint(low=0, high=10000, size=20):
x1, y1 = gridder.scatter(area, size, seed=seed)
x2, y2 = gridder.scatter(area, size, seed=seed)
assert numpy.all(x1 == x2) and numpy.all(y1 == y2)
def test_gridder_scatter_seed_noseed():
"gridder.scatter returns diff sequence after using random seed"
area = [0, 1000, 0, 1000]
size = 1000
seed = 1242
x1, y1 = gridder.scatter(area, size, seed=seed)
x2, y2 = gridder.scatter(area, size, seed=seed)
assert numpy.all(x1 == x2) and numpy.all(y1 == y2)
x3, y3 = gridder.scatter(area, size)
assert numpy.all(x1 != x3) and numpy.all(y1 != y3)
def test_utils_contaminate():
"utils.contaminate generates noise with 0 mean and right stddev"
size = 10 ** 6
data = numpy.zeros(size)
std = 4.213
for i in xrange(20):
noise = utils.contaminate(data, std)
assert abs(noise.mean()) < 10 ** -10, 'mean:%g' % (noise.mean())
assert abs(noise.std() - std) / std < 0.01, 'std:%g' % (noise.std())
def test_utils_contaminate_seed():
"utils.contaminate noise with 0 mean and right stddev using random seed"
size = 10 ** 6
data = numpy.zeros(size)
std = 4400.213
for i in xrange(20):
noise = utils.contaminate(data, std, seed=i)
assert abs(noise.mean()) < 10 ** - \
10, 's:%d mean:%g' % (i, noise.mean())
assert abs(noise.std() - std) / std < 0.01, \
's:%d std:%g' % (i, noise.std())
def test_utils_contaminate_diff():
"utils.contaminate uses diff noise"
size = 1235
data = numpy.linspace(-100., 12255., size)
noise = 244.4
for i in xrange(20):
d1 = utils.contaminate(data, noise)
d2 = utils.contaminate(data, noise)
assert numpy.all(d1 != d2)
def test_utils_contaminate_same_seed():
"utils.contaminate uses same noise using same random seed"
size = 1000
data = numpy.linspace(-1000, 1000, size)
noise = 10
for seed in numpy.random.randint(low=0, high=10000, size=20):
d1 = utils.contaminate(data, noise, seed=seed)
d2 = utils.contaminate(data, noise, seed=seed)
assert numpy.all(d1 == d2)
def test_utils_contaminate_seed_noseed():
"utils.contaminate uses diff noise after using random seed"
size = 1000
data = numpy.linspace(-1000, 1000, size)
noise = 10
seed = 45212
d1 = utils.contaminate(data, noise, seed=seed)
d2 = utils.contaminate(data, noise, seed=seed)
assert numpy.all(d1 == d2)
d3 = utils.contaminate(data, noise)
assert numpy.all(d1 != d3)
| 35.259494 | 76 | 0.641716 | 841 | 5,571 | 4.168847 | 0.095125 | 0.06389 | 0.06389 | 0.054763 | 0.890188 | 0.846549 | 0.806902 | 0.784655 | 0.730177 | 0.711922 | 0 | 0.087907 | 0.228146 | 5,571 | 157 | 77 | 35.484076 | 0.727442 | 0.142344 | 0 | 0.550388 | 0 | 0 | 0.146473 | 0.011309 | 0 | 0 | 0 | 0 | 0.155039 | 1 | 0.108527 | false | 0 | 0.015504 | 0 | 0.124031 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fe5e2bf9870213e5b14f3a4352d03197a6ba153a | 40 | py | Python | Chapter 04/ch4_3_11.py | bpbpublications/TEST-YOUR-SKILLS-IN-PYTHON-LANGUAGE | f6a4194684515495d00aa38347a725dd08f39a0c | [
"MIT"
] | null | null | null | Chapter 04/ch4_3_11.py | bpbpublications/TEST-YOUR-SKILLS-IN-PYTHON-LANGUAGE | f6a4194684515495d00aa38347a725dd08f39a0c | [
"MIT"
] | null | null | null | Chapter 04/ch4_3_11.py | bpbpublications/TEST-YOUR-SKILLS-IN-PYTHON-LANGUAGE | f6a4194684515495d00aa38347a725dd08f39a0c | [
"MIT"
] | null | null | null | print(bool(0), bool(0.0))
#False False | 20 | 27 | 0.65 | 8 | 40 | 3.25 | 0.5 | 0.384615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085714 | 0.125 | 40 | 2 | 28 | 20 | 0.657143 | 0.275 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
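# Other "empty" values are falsy in the same way (a quick illustrative
# extension of the snippet above):
# >>> print(bool(''), bool([]), bool(None))
# False False False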
fe73a5b28836c6588a891af619efcac739b81325 | 3,339 | py | Python | Internet-Worm/1.py | Qiaozhi94/Python-Projects | aefc6cf49c1f4f2cc9beba8dbe80cfa826ba75c4 | [
"MIT"
] | null | null | null | Internet-Worm/1.py | Qiaozhi94/Python-Projects | aefc6cf49c1f4f2cc9beba8dbe80cfa826ba75c4 | [
"MIT"
] | null | null | null | Internet-Worm/1.py | Qiaozhi94/Python-Projects | aefc6cf49c1f4f2cc9beba8dbe80cfa826ba75c4 | [
"MIT"
] | null | null | null | import requests
hd={
'cookie': 'lianjia_uuid=01765a98-5297-41f9-b19d-89a7449cf57a; UM_distinctid=170e7a4b5ffb7b-0fe20ca1f284a2-396d7406-13c680-170e7a4b600ab1; _smt_uid=5e708c78.114db2b5; _ga=GA1.2.1647190516.1584434300; Hm_lvt_efa595b768cc9dc7d7f9823368e795f1=1590328589; select_city=310000; _jzqc=1; _jzqx=1.1584434297.1592370338.8.jzqsr=google%2Ecom|jzqct=/.jzqsr=google%2Ecom|jzqct=/; _jzqckmp=1; _qzjc=1; _gid=GA1.2.427499494.1592370341; Hm_lvt_9152f8221cb6243a53c83b956842be8a=1590290914,1590586455,1590671171,1592370350; sensorsdata2015jssdkcross=%7B%22distinct_id%22%3A%22170e7a4bcdc247-0c752ca4b1a414-396d7406-1296000-170e7a4bcddb75%22%2C%22%24device_id%22%3A%22170e7a4bcdc247-0c752ca4b1a414-396d7406-1296000-170e7a4bcddb75%22%2C%22props%22%3A%7B%22%24latest_traffic_source_type%22%3A%22%E7%9B%B4%E6%8E%A5%E6%B5%81%E9%87%8F%22%2C%22%24latest_referrer%22%3A%22%22%2C%22%24latest_referrer_host%22%3A%22%22%2C%22%24latest_search_keyword%22%3A%22%E6%9C%AA%E5%8F%96%E5%88%B0%E5%80%BC_%E7%9B%B4%E6%8E%A5%E6%89%93%E5%BC%80%22%2C%22%24latest_utm_source%22%3A%22office%22%7D%7D; lianjia_ssid=76e8d6d4-eac9-1a36-0a4b-ef77659d303d; CNZZDATA1255604082=1745799172-1588039033-https%253A%252F%252Fwww.google.com%252F%7C1592380486; _jzqa=1.3759888401518846500.1584434297.1592376456.1592381003.29; CNZZDATA1253492439=646215105-1588040561-https%253A%252F%252Fwww.google.com%252F%7C1592381184; CNZZDATA1255633284=1262782669-1588041365-https%253A%252F%252Fwww.google.com%252F%7C1592381302; CNZZDATA1254525948=1479450439-1588037718-https%253A%252F%252Fwww.google.com%252F%7C1592378211; login_ucid=2000000104475969; lianjia_token=2.002814d4ab54c5940639b9fd9a9836d79a; lianjia_token_secure=2.002814d4ab54c5940639b9fd9a9836d79a; security_ticket=euBaN/4SZJwcyvKnJNwHXyEnzh/pmS7jY+fk/AVezXLMuQff2tH7S1H9yliVqRosZRzaHBYRR7x7sqQgCU2oqidJ4fPMtlsePOlcYTIbbyRhLzq1xkb2r59hQHJkl4hjDuQkh9nJQo4uCA9fum2cZaHIb1ZqRr+bVFVuTe2wB+8=; Hm_lpvt_9152f8221cb6243a53c83b956842be8a=1592381602; srcid=eyJ0Ijoie1wiZGF0YVwiOlwiOGI3YzExNGQ0NmE0MTdmOTUxYmRhNzFjMDE5OTdjZDIxZWQ3YjUyOGM1MGFiN2MyNmYxODcyYjA2YTYxNmRkNjNlZTYzODE4MTZlYzU3ODU4NWNjYjAyNWEyMjZlNjc3MzM4MDRkZWRiZjhlZmY3Yzg2ZmZjMDBhYmIwNWVhYWRlOTE1YTMwMGQxY2QyZTRiMGI2ZjA2OWE3ZDY2NGNhMTFhNTBjYWMxZWVlOGZhNjdhYzcwYWZmZTgxMDRkMDVjNWI2ZmQ3MWRkZTljNTczNDRiOTYzZDcxZThmODVlYzFhYWY2ZGVmMTUxZTRlN2FlMzQxNTAyNjkwY2EwZTczY2ZmOTE5NTViMWI5NjU5YzUwNjIyMWMyMWQ2OWE3YTU0Y2U0MGZmMzk2M2QzZDEzZGU5YzQxYWRjMmQ5MTIxZmZcIixcImtleV9pZFwiOlwiMVwiLFwic2lnblwiOlwiMmM2ODM1N2FcIn0iLCJyIjoiaHR0cHM6Ly9zaC5saWFuamlhLmNvbS94aWFvcXUvNTAxMTEwMjIwODE5MS8iLCJvcyI6IndlYiIsInYiOiIwLjEifQ==; _qzja=1.1285165120.1588042706441.1592376455845.1592381003051.1592381603349.1592381604691.0.0.0.137.26; _qzjb=1.1592381003051.9.0.0.0; _qzjto=26.3.0; _jzqb=1.9.10.1592381003.1; _gat=1; _gat_past=1; _gat_global=1; _gat_new_global=1; _gat_dianpu_agent=1',
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36'
}
hd1 = {
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36'
}
url = 'https://sh.lianjia.com/xiaoqu/5011102208191/'
r = requests.get(url,headers=hd1,timeout=30)
r.raise_for_status()
r.encoding = r.apparent_encoding
print(r.status_code)
# session = requests.Session()
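# A minimal sketch of the Session-based variant hinted at by the commented
# line above; a Session reuses the connection and carries headers and cookies
# across requests (same url and hd1 as defined earlier):
# session = requests.Session()
# session.headers.update(hd1)
# r = session.get(url, timeout=30)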
| 159 | 2,807 | 0.858341 | 393 | 3,339 | 7.139949 | 0.531807 | 0.011404 | 0.010691 | 0.019957 | 0.202423 | 0.191732 | 0.191732 | 0.120456 | 0.120456 | 0.120456 | 0 | 0.342928 | 0.032345 | 3,339 | 20 | 2,808 | 166.95 | 0.525534 | 0.008386 | 0 | 0.153846 | 0 | 0.230769 | 0.937443 | 0.793895 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.076923 | 0 | 0.076923 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fe7514302744ca801ed383512905f4692c7d32b6 | 3,598 | py | Python | test/test_lib_picture.py | TE-ToshiakiTanaka/stve | 30b1a0c9b8b20f7059999b0b25b16d6b43aa935c | [
"MIT"
] | null | null | null | test/test_lib_picture.py | TE-ToshiakiTanaka/stve | 30b1a0c9b8b20f7059999b0b25b16d6b43aa935c | [
"MIT"
] | null | null | null | test/test_lib_picture.py | TE-ToshiakiTanaka/stve | 30b1a0c9b8b20f7059999b0b25b16d6b43aa935c | [
"MIT"
] | null | null | null | import os
import sys
from stve.script import StveTestCase
from nose.tools import with_setup, raises, ok_, eq_
LIB_PATH = os.path.dirname(os.path.abspath(__file__))
if not LIB_PATH in sys.path:
sys.path.insert(0, LIB_PATH)
from runner import TestStveTestRunner as TSTR
class TestPictureTestRuner(TSTR):
@with_setup(TSTR.setup, TSTR.teardown)
def test_library_execute_picture_success_01(self):
self.script_path = os.path.join(self.script_path, "picture")
self.base_library_execute_success("picture_01.py")
@with_setup(TSTR.setup, TSTR.teardown)
def test_library_execute_picture_success_02(self):
self.script_path = os.path.join(self.script_path, "picture")
self.base_library_execute_success("picture_02.py")
@with_setup(TSTR.setup, TSTR.teardown)
def test_library_execute_picture_success_03(self):
self.script_path = os.path.join(self.script_path, "picture")
self.base_library_execute_success("picture_03.py")
@with_setup(TSTR.setup, TSTR.teardown)
def test_library_execute_picture_success_04(self):
self.script_path = os.path.join(self.script_path, "picture")
StveTestCase.set("system.tmp", self.data_path)
self.base_library_execute_success("picture_04.py")
@with_setup(TSTR.setup, TSTR.teardown)
def test_library_execute_picture_success_05(self):
self.script_path = os.path.join(self.script_path, "picture")
StveTestCase.set("system.tmp", self.data_path)
self.base_library_execute_success("picture_05.py")
@with_setup(TSTR.setup, TSTR.teardown)
def test_library_execute_picture_success_06(self):
self.script_path = os.path.join(self.script_path, "picture")
StveTestCase.set("system.tmp", self.data_path)
self.base_library_execute_success("picture_06.py")
@with_setup(TSTR.setup, TSTR.teardown)
def test_library_execute_picture_success_07(self):
self.script_path = os.path.join(self.script_path, "picture")
StveTestCase.set("system.tmp", self.data_path)
self.base_library_execute_success("picture_07.py")
@with_setup(TSTR.setup, TSTR.teardown)
def test_library_execute_picture_success_08(self):
self.script_path = os.path.join(self.script_path, "picture")
StveTestCase.set("system.tmp", self.data_path)
self.base_library_execute_success("picture_08.py")
self.workspace.rm(os.path.join(self.data_path, "test02.png"))
@with_setup(TSTR.setup, TSTR.teardown)
def test_library_execute_picture_success_09(self):
self.script_path = os.path.join(self.script_path, "picture")
StveTestCase.set("system.tmp", self.data_path)
self.base_library_execute_success("picture_09.py")
@with_setup(TSTR.setup, TSTR.teardown)
def test_library_execute_picture_success_10(self):
self.script_path = os.path.join(self.script_path, "picture")
StveTestCase.set("system.tmp", self.data_path)
self.base_library_execute_success("picture_10.py")
@with_setup(TSTR.setup, TSTR.teardown)
def test_library_execute_picture_success_11(self):
self.script_path = os.path.join(self.script_path, "picture")
StveTestCase.set("system.tmp", self.data_path)
self.base_library_execute_success("picture_11.py")
@with_setup(TSTR.setup, TSTR.teardown)
def test_library_execute_picture_success_12(self):
self.script_path = os.path.join(self.script_path, "picture")
StveTestCase.set("system.tmp", self.data_path)
self.base_library_execute_success("picture_12.py")
| 43.349398 | 69 | 0.734019 | 507 | 3,598 | 4.885602 | 0.120316 | 0.087202 | 0.135648 | 0.073476 | 0.865563 | 0.865563 | 0.865563 | 0.865563 | 0.865563 | 0.865563 | 0 | 0.016721 | 0.152307 | 3,598 | 82 | 70 | 43.878049 | 0.79541 | 0 | 0 | 0.492537 | 0 | 0 | 0.094497 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.179104 | false | 0 | 0.074627 | 0 | 0.268657 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fe888313531e7ed51f2c432ea96274cb3943fd04 | 299,500 | py | Python | openmc/mgxs/mgxs.py | RyotaroOKabe/openmc | 9926294324cb80dd7ff0e4f1a9b361addfcfa8fc | [
"MIT"
] | null | null | null | openmc/mgxs/mgxs.py | RyotaroOKabe/openmc | 9926294324cb80dd7ff0e4f1a9b361addfcfa8fc | [
"MIT"
] | null | null | null | openmc/mgxs/mgxs.py | RyotaroOKabe/openmc | 9926294324cb80dd7ff0e4f1a9b361addfcfa8fc | [
"MIT"
] | null | null | null | from collections import OrderedDict
import copy
from numbers import Integral
import os
import warnings
import h5py
import numpy as np
import openmc
from openmc.data import REACTION_MT, REACTION_NAME, FISSION_MTS
import openmc.checkvalue as cv
from ..tallies import ESTIMATOR_TYPES
from . import EnergyGroups
# Supported cross section types
MGXS_TYPES = (
'total',
'transport',
'nu-transport',
'absorption',
'capture',
'fission',
'nu-fission',
'kappa-fission',
'scatter',
'nu-scatter',
'scatter matrix',
'nu-scatter matrix',
'multiplicity matrix',
'nu-fission matrix',
'scatter probability matrix',
'consistent scatter matrix',
'consistent nu-scatter matrix',
'chi',
'chi-prompt',
'inverse-velocity',
'prompt-nu-fission',
'prompt-nu-fission matrix',
'current',
'diffusion-coefficient',
'nu-diffusion-coefficient'
)
# Some scores from REACTION_MT are not supported, or are simply overkill to
# support and test (like inelastic levels); remove those from consideration
_BAD_SCORES = ["(n,misc)", "(n,absorption)", "(n,total)", "fission"]
_BAD_SCORES += [REACTION_NAME[mt] for mt in FISSION_MTS]
ARBITRARY_VECTOR_TYPES = tuple(k for k in REACTION_MT.keys()
if k not in _BAD_SCORES)
ARBITRARY_MATRIX_TYPES = []
for rxn in ARBITRARY_VECTOR_TYPES:
# Preclude the fission channels from being treated as a matrix
if rxn not in [REACTION_NAME[mt] for mt in FISSION_MTS]:
split_rxn = rxn.strip("()").split(",")
if len(split_rxn) > 1 and "n" in split_rxn[1]:
# Then there is a neutron product, so it can also be a matrix
ARBITRARY_MATRIX_TYPES.append(rxn + " matrix")
ARBITRARY_MATRIX_TYPES = tuple(ARBITRARY_MATRIX_TYPES)
# Supported domain types
DOMAIN_TYPES = (
'cell',
'distribcell',
'universe',
'material',
'mesh'
)
# Filter types corresponding to each domain
_DOMAIN_TO_FILTER = {
'cell': openmc.CellFilter,
'distribcell': openmc.DistribcellFilter,
'universe': openmc.UniverseFilter,
'material': openmc.MaterialFilter,
'mesh': openmc.MeshFilter
}
# Supported domain classes
_DOMAINS = (
openmc.Cell,
openmc.Universe,
openmc.Material,
openmc.RegularMesh
)
# Supported ScatterMatrixXS angular distribution types. Note that 'histogram'
# is defined here and used in mgxs_library.py, but it is not used in this
# module
SCATTER_TABULAR = 'tabular'
SCATTER_LEGENDRE = 'legendre'
SCATTER_HISTOGRAM = 'histogram'
MU_TREATMENTS = (
SCATTER_LEGENDRE,
SCATTER_HISTOGRAM
)
# Maximum Legendre order supported by OpenMC
_MAX_LEGENDRE = 10
def _df_column_convert_to_bin(df, current_name, new_name, values_to_bin,
reverse_order=False):
"""Convert a Pandas DataFrame column from the bin edges to an index for
each bin. This method operates on the DataFrame, df, in-place.
Parameters
----------
df : pandas.DataFrame
A Pandas DataFrame containing the cross section data.
current_name : str
Name of the column to replace with bins
new_name : str
New name for column after the data is replaced with bins
values_to_bin : Iterable of Real
Values of the bin edges to be used for identifying the bins
reverse_order : bool
Whether the bin indices should be reversed
"""
# Get the current values
df_bins = np.asarray(df[current_name])
new_vals = np.zeros_like(df_bins, dtype=int)
# Replace the values with the index of the closest entry in values_to_bin
# The closest is used because it is expected that the values in df could
# have lost precision along the way
for i, df_val in enumerate(df_bins):
idx = np.searchsorted(values_to_bin, df_val)
        # Check whether the value is just above the search result
if idx > 0 and np.isclose(values_to_bin[idx - 1], df_val):
idx -= 1
# If it is just below the search result then we are done
new_vals[i] = idx
    # Switch to one-based indexing
new_vals += 1
# Reverse the ordering if requested (this is for energy group ordering)
if reverse_order:
new_vals = (len(values_to_bin) - 1) - new_vals + 1
# Assign the values
df[current_name] = new_vals[:]
# And rename the column
df.rename(columns={current_name: new_name}, inplace=True)
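# Worked example: with values_to_bin = [0.0, 0.625, 2.0e7] (two energy groups),
# an 'energy low [eV]' column of [0.0, 0.625] maps to one-based bin indices
# [1, 2]; with reverse_order=True this becomes [2, 1], matching the convention
# that group 1 is the highest-energy group.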
class MGXS:
"""An abstract multi-group cross section for some energy group structure
within some spatial domain.
This class can be used for both OpenMC input generation and tally data
post-processing to compute spatially-homogenized and energy-integrated
multi-group cross sections for multi-group neutronics calculations.
.. note:: Users should instantiate the subclasses of this abstract class.
Parameters
----------
domain : openmc.Material or openmc.Cell or openmc.Universe or openmc.RegularMesh
The domain for spatial homogenization
domain_type : {'material', 'cell', 'distribcell', 'universe', 'mesh'}
The domain type for spatial homogenization
energy_groups : openmc.mgxs.EnergyGroups
The energy group structure for energy condensation
by_nuclide : bool
If true, computes cross sections for each nuclide in domain
name : str, optional
Name of the multi-group cross section. Used as a label to identify
tallies in OpenMC 'tallies.xml' file.
num_polar : Integral, optional
Number of equi-width polar angle bins for angle discretization;
defaults to one bin
num_azimuthal : Integral, optional
Number of equi-width azimuthal angle bins for angle discretization;
defaults to one bin
Attributes
----------
name : str, optional
Name of the multi-group cross section
rxn_type : str
Reaction type (e.g., 'total', 'nu-fission', etc.)
by_nuclide : bool
If true, computes cross sections for each nuclide in domain
domain : openmc.Material or openmc.Cell or openmc.Universe or openmc.RegularMesh
Domain for spatial homogenization
domain_type : {'material', 'cell', 'distribcell', 'universe', 'mesh'}
Domain type for spatial homogenization
energy_groups : openmc.mgxs.EnergyGroups
Energy group structure for energy condensation
num_polar : Integral
Number of equi-width polar angle bins for angle discretization
num_azimuthal : Integral
Number of equi-width azimuthal angle bins for angle discretization
tally_trigger : openmc.Trigger
An (optional) tally precision trigger given to each tally used to
compute the cross section
scores : list of str
The scores in each tally used to compute the multi-group cross section
filters : list of openmc.Filter
The filters in each tally used to compute the multi-group cross section
tally_keys : list of str
The keys into the tallies dictionary for each tally used to compute
the multi-group cross section
estimator : {'tracklength', 'collision', 'analog'}
The tally estimator used to compute the multi-group cross section
tallies : collections.OrderedDict
OpenMC tallies needed to compute the multi-group cross section
rxn_rate_tally : openmc.Tally
Derived tally for the reaction rate tally used in the numerator to
compute the multi-group cross section. This attribute is None
unless the multi-group cross section has been computed.
xs_tally : openmc.Tally
Derived tally for the multi-group cross section. This attribute
is None unless the multi-group cross section has been computed.
num_subdomains : int
The number of subdomains is unity for 'material', 'cell' and 'universe'
domain types. This is equal to the number of cell instances
for 'distribcell' domain types (it is equal to unity prior to loading
tally data from a statepoint file) and the number of mesh cells for
'mesh' domain types.
num_nuclides : int
The number of nuclides for which the multi-group cross section is
being tracked. This is unity if the by_nuclide attribute is False.
nuclides : Iterable of str or 'sum'
The optional user-specified nuclides for which to compute cross
sections (e.g., 'U238', 'O16'). If by_nuclide is True but nuclides
are not specified by the user, all nuclides in the spatial domain
are included. This attribute is 'sum' if by_nuclide is false.
sparse : bool
Whether or not the MGXS' tallies use SciPy's LIL sparse matrix format
for compressed data storage
loaded_sp : bool
Whether or not a statepoint file has been loaded with tally data
derived : bool
Whether or not the MGXS is merged from one or more other MGXS
hdf5_key : str
The key used to index multi-group cross sections in an HDF5 data store
"""
    # Store whether or not the number density should be divided out to obtain
    # microscopic values of this data
_divide_by_density = True
def __init__(self, domain=None, domain_type=None,
energy_groups=None, by_nuclide=False, name='', num_polar=1,
num_azimuthal=1):
self._name = ''
self._rxn_type = None
self._by_nuclide = None
self._nuclides = None
self._estimator = 'tracklength'
self._domain = None
self._domain_type = None
self._energy_groups = None
self._num_polar = 1
self._num_azimuthal = 1
self._tally_trigger = None
self._tallies = None
self._rxn_rate_tally = None
self._xs_tally = None
self._sparse = False
self._loaded_sp = False
self._derived = False
self._hdf5_key = None
self._valid_estimators = ESTIMATOR_TYPES
self.name = name
self.by_nuclide = by_nuclide
if domain_type is not None:
self.domain_type = domain_type
if domain is not None:
self.domain = domain
if energy_groups is not None:
self.energy_groups = energy_groups
self.num_polar = num_polar
self.num_azimuthal = num_azimuthal
def __deepcopy__(self, memo):
existing = memo.get(id(self))
# If this object has been copied before, return the first copy made
if existing is not None:
return existing
# If this is the first time we have tried to copy this object, copy it
clone = type(self).__new__(type(self))
clone._name = self.name
clone._rxn_type = self.rxn_type
clone._by_nuclide = self.by_nuclide
clone._nuclides = copy.deepcopy(self._nuclides, memo)
clone._domain = self.domain
clone._domain_type = self.domain_type
clone._energy_groups = copy.deepcopy(self.energy_groups, memo)
clone._num_polar = self._num_polar
clone._num_azimuthal = self._num_azimuthal
clone._tally_trigger = copy.deepcopy(self.tally_trigger, memo)
clone._rxn_rate_tally = copy.deepcopy(self._rxn_rate_tally, memo)
clone._xs_tally = copy.deepcopy(self._xs_tally, memo)
clone._sparse = self.sparse
clone._loaded_sp = self._loaded_sp
clone._derived = self.derived
clone._hdf5_key = self._hdf5_key
clone._tallies = OrderedDict()
for tally_type, tally in self.tallies.items():
clone.tallies[tally_type] = copy.deepcopy(tally, memo)
memo[id(self)] = clone
return clone
def _add_angle_filters(self, filters):
"""Add the azimuthal and polar bins to the MGXS filters if needed.
Filters will be provided as a ragged 2D list of openmc.Filter objects.
Parameters
----------
filters : Iterable of Iterable of openmc.Filter
Ragged 2D list of openmc.Filter objects for the energy and spatial
domains. The angle filters will be added to the list.
Returns
-------
Iterable of Iterable of openmc.Filter
Ragged 2D list of openmc.Filter objects for the energy and spatial
domains with the angle filters added to the list.
"""
if self.num_polar > 1 or self.num_azimuthal > 1:
# Then the user has requested angular data, so create the bins
pol_bins = np.linspace(0., np.pi, num=self.num_polar + 1,
endpoint=True)
azi_bins = np.linspace(-np.pi, np.pi, num=self.num_azimuthal + 1,
endpoint=True)
for filt in filters:
filt.insert(0, openmc.PolarFilter(pol_bins))
filt.insert(1, openmc.AzimuthalFilter(azi_bins))
return filters
def _squeeze_xs(self, xs):
"""Remove dimensions which are not needed from a cross section array
due to user options. This is used by the openmc.Mgxs.get_xs(...) method
Parameters
----------
xs : np.ndarray
Cross sections array with dimensions to be squeezed
Returns
-------
np.ndarray
Squeezed array of cross sections
"""
# numpy.squeeze will return a ValueError if the axis has a size
# greater than 1, to avoid this we will try each axis one at a
# time to preclude the ValueError.
initial_shape = len(xs.shape)
for axis in range(initial_shape - 1, -1, -1):
if axis not in self._dont_squeeze and xs.shape[axis] == 1:
xs = np.squeeze(xs, axis=axis)
return xs
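    # For example, a non-angular MGXS has _dont_squeeze == (1,), so an array of
    # shape (1, num_groups, 1) is squeezed to (num_groups,); the energy axis is
    # preserved even when num_groups == 1.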
def _df_convert_columns_to_bins(self, df):
"""This method converts all relevant and present DataFrame columns from
their bin boundaries to the index for each bin. This method operates on
        the DataFrame, df, in place. The method returns a list of the columns
        it has operated on.
Parameters
----------
df : pandas.DataFrame
A Pandas DataFrame containing the cross section data.
Returns
-------
columns : Iterable of str
Names of the re-named and re-valued columns
"""
# Override polar and azimuthal bounds with indices
if self.num_polar > 1 or self.num_azimuthal > 1:
# First for polar
bins = np.linspace(0., np.pi, self.num_polar + 1, True)
_df_column_convert_to_bin(df, 'polar low', 'polar bin', bins)
del df['polar high']
# Second for azimuthal
bins = np.linspace(-np.pi, np.pi, self.num_azimuthal + 1, True)
_df_column_convert_to_bin(df, 'azimuthal low', 'azimuthal bin',
bins)
del df['azimuthal high']
columns = ['polar bin', 'azimuthal bin']
else:
columns = []
# Override energy groups bounds with indices
if 'energy low [eV]' in df:
_df_column_convert_to_bin(df, 'energy low [eV]', 'group in',
self.energy_groups.group_edges,
reverse_order=True)
del df['energy high [eV]']
columns += ['group in']
if 'energyout low [eV]' in df:
_df_column_convert_to_bin(df, 'energyout low [eV]', 'group out',
self.energy_groups.group_edges,
reverse_order=True)
del df['energyout high [eV]']
columns += ['group out']
if 'mu low' in df and hasattr(self, 'histogram_bins'):
# Only the ScatterMatrix class has the histogram_bins attribute
bins = np.linspace(-1., 1., self.histogram_bins + 1, True)
_df_column_convert_to_bin(df, 'mu low', 'mu bin', bins)
del df['mu high']
columns += ['mu bin']
return columns
@property
def _dont_squeeze(self):
"""Create a tuple of axes which should not be removed during the get_xs
process
"""
if self.num_polar > 1 or self.num_azimuthal > 1:
return (0, 1, 3)
else:
return (1, )
@property
def name(self):
return self._name
@property
def rxn_type(self):
return self._rxn_type
@property
def by_nuclide(self):
return self._by_nuclide
@property
def domain(self):
return self._domain
@property
def domain_type(self):
return self._domain_type
@property
def energy_groups(self):
return self._energy_groups
@property
def num_polar(self):
return self._num_polar
@property
def num_azimuthal(self):
return self._num_azimuthal
@property
def tally_trigger(self):
return self._tally_trigger
@property
def num_groups(self):
return self.energy_groups.num_groups
@property
def scores(self):
return ['flux', self.rxn_type]
@property
def filters(self):
group_edges = self.energy_groups.group_edges
energy_filter = openmc.EnergyFilter(group_edges)
filters = []
for i in range(len(self.scores)):
filters.append([energy_filter])
return self._add_angle_filters(filters)
@property
def tally_keys(self):
return self.scores
@property
def estimator(self):
return self._estimator
@property
def tallies(self):
# Instantiate tallies if they do not exist
if self._tallies is None:
# Initialize a collection of Tallies
self._tallies = OrderedDict()
# Create a domain Filter object
filter_type = _DOMAIN_TO_FILTER[self.domain_type]
if self.domain_type == 'mesh':
domain_filter = filter_type(self.domain)
else:
domain_filter = filter_type(self.domain.id)
if isinstance(self.estimator, str):
estimators = [self.estimator] * len(self.scores)
else:
estimators = self.estimator
# Create each Tally needed to compute the multi group cross section
tally_metadata = \
zip(self.scores, self.tally_keys, self.filters, estimators)
for score, key, filters, estimator in tally_metadata:
self._tallies[key] = openmc.Tally(name=self.name)
self._tallies[key].scores = [score]
self._tallies[key].estimator = estimator
if score != 'current':
self._tallies[key].filters = [domain_filter]
# If a tally trigger was specified, add it to each tally
if self.tally_trigger:
trigger_clone = copy.deepcopy(self.tally_trigger)
trigger_clone.scores = [score]
self._tallies[key].triggers.append(trigger_clone)
# Add non-domain specific Filters (e.g., 'energy') to the Tally
for add_filter in filters:
self._tallies[key].filters.append(add_filter)
# If this is a by-nuclide cross-section, add nuclides to Tally
if self.by_nuclide and score != 'flux':
self._tallies[key].nuclides += self.get_nuclides()
else:
self._tallies[key].nuclides.append('total')
return self._tallies
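    # These tallies are what a user adds to the model before running OpenMC,
    # e.g. (a sketch; 'model_tallies' is an openmc.Tallies instance):
    #   model_tallies += list(mgxs.tallies.values())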
@property
def rxn_rate_tally(self):
if self._rxn_rate_tally is None:
self._rxn_rate_tally = self.tallies[self.rxn_type]
self._rxn_rate_tally.sparse = self.sparse
return self._rxn_rate_tally
@property
def xs_tally(self):
if self._xs_tally is None:
if self.tallies is None:
msg = 'Unable to get xs_tally since tallies have ' \
'not been loaded from a statepoint'
raise ValueError(msg)
self._xs_tally = self.rxn_rate_tally / self.tallies['flux']
self._compute_xs()
return self._xs_tally
@property
def sparse(self):
return self._sparse
@property
def num_subdomains(self):
if self.domain_type.startswith('sum('):
domain_type = self.domain_type[4:-1]
else:
domain_type = self.domain_type
if self._rxn_type == 'current':
filter_type = openmc.MeshSurfaceFilter
else:
filter_type = _DOMAIN_TO_FILTER[domain_type]
domain_filter = self.xs_tally.find_filter(filter_type)
return domain_filter.num_bins
@property
def num_nuclides(self):
if self.by_nuclide:
return len(self.get_nuclides())
else:
return 1
@property
def nuclides(self):
if self.by_nuclide:
return self.get_nuclides()
else:
return ['sum']
@property
def loaded_sp(self):
return self._loaded_sp
@property
def derived(self):
return self._derived
@property
def hdf5_key(self):
if self._hdf5_key is not None:
return self._hdf5_key
else:
return self._rxn_type
@name.setter
def name(self, name):
cv.check_type('name', name, str)
self._name = name
@by_nuclide.setter
def by_nuclide(self, by_nuclide):
cv.check_type('by_nuclide', by_nuclide, bool)
self._by_nuclide = by_nuclide
@nuclides.setter
def nuclides(self, nuclides):
cv.check_iterable_type('nuclides', nuclides, str)
self._nuclides = nuclides
@estimator.setter
def estimator(self, estimator):
cv.check_value('estimator', estimator, self._valid_estimators)
self._estimator = estimator
@domain.setter
def domain(self, domain):
cv.check_type('domain', domain, _DOMAINS)
self._domain = domain
# Assign a domain type
if self.domain_type is None:
if isinstance(domain, openmc.Material):
self._domain_type = 'material'
elif isinstance(domain, openmc.Cell):
self._domain_type = 'cell'
elif isinstance(domain, openmc.Universe):
self._domain_type = 'universe'
elif isinstance(domain, openmc.RegularMesh):
self._domain_type = 'mesh'
@domain_type.setter
def domain_type(self, domain_type):
cv.check_value('domain type', domain_type, DOMAIN_TYPES)
self._domain_type = domain_type
@energy_groups.setter
def energy_groups(self, energy_groups):
cv.check_type('energy groups', energy_groups, openmc.mgxs.EnergyGroups)
self._energy_groups = energy_groups
@num_polar.setter
def num_polar(self, num_polar):
cv.check_type('num_polar', num_polar, Integral)
cv.check_greater_than('num_polar', num_polar, 0)
self._num_polar = num_polar
@num_azimuthal.setter
def num_azimuthal(self, num_azimuthal):
cv.check_type('num_azimuthal', num_azimuthal, Integral)
cv.check_greater_than('num_azimuthal', num_azimuthal, 0)
self._num_azimuthal = num_azimuthal
@tally_trigger.setter
def tally_trigger(self, tally_trigger):
cv.check_type('tally trigger', tally_trigger, openmc.Trigger)
self._tally_trigger = tally_trigger
@sparse.setter
def sparse(self, sparse):
"""Convert tally data from NumPy arrays to SciPy list of lists (LIL)
sparse matrices, and vice versa.
This property may be used to reduce the amount of data in memory during
tally data processing. The tally data will be stored as SciPy LIL
matrices internally within the Tally object. All tally data access
properties and methods will return data as a dense NumPy array.
"""
cv.check_type('sparse', sparse, bool)
# Sparsify or densify the derived MGXS tallies and the base tallies
if self._xs_tally:
self.xs_tally.sparse = sparse
if self._rxn_rate_tally:
self.rxn_rate_tally.sparse = sparse
for tally_name in self.tallies:
self.tallies[tally_name].sparse = sparse
self._sparse = sparse
@staticmethod
def get_mgxs(mgxs_type, domain=None, domain_type=None,
energy_groups=None, by_nuclide=False, name='', num_polar=1,
num_azimuthal=1):
"""Return a MGXS subclass object for some energy group structure within
some spatial domain for some reaction type.
This is a factory method which can be used to quickly create MGXS
subclass objects for various reaction types.
Parameters
----------
mgxs_type : str or Integral
The type of multi-group cross section object to return; valid
values are members of MGXS_TYPES, or the reaction types that are
the keys of REACTION_MT. Note that if a reaction type from
REACTION_MT is used, it can be appended with ' matrix' to obtain
a multigroup matrix (from incoming to outgoing energy groups) for
reactions with a neutron in an outgoing channel.
domain : openmc.Material or openmc.Cell or openmc.Universe or openmc.RegularMesh
The domain for spatial homogenization
domain_type : {'material', 'cell', 'distribcell', 'universe', 'mesh'}
The domain type for spatial homogenization
energy_groups : openmc.mgxs.EnergyGroups
The energy group structure for energy condensation
by_nuclide : bool
If true, computes cross sections for each nuclide in domain.
Defaults to False
name : str, optional
Name of the multi-group cross section. Used as a label to identify
tallies in OpenMC 'tallies.xml' file. Defaults to the empty string.
num_polar : Integral, optional
Number of equi-width polar angles for angle discretization;
defaults to no discretization
num_azimuthal : Integral, optional
Number of equi-width azimuthal angles for angle discretization;
defaults to no discretization
Returns
-------
openmc.mgxs.MGXS
A subclass of the abstract MGXS class for the multi-group cross
section type requested by the user
"""
cv.check_value(
"mgxs_type", mgxs_type,
MGXS_TYPES + ARBITRARY_VECTOR_TYPES + ARBITRARY_MATRIX_TYPES)
if mgxs_type == 'total':
mgxs = TotalXS(domain, domain_type, energy_groups)
elif mgxs_type == 'transport':
mgxs = TransportXS(domain, domain_type, energy_groups)
elif mgxs_type == 'nu-transport':
mgxs = TransportXS(domain, domain_type, energy_groups, nu=True)
elif mgxs_type == 'absorption':
mgxs = AbsorptionXS(domain, domain_type, energy_groups)
elif mgxs_type == 'capture':
mgxs = CaptureXS(domain, domain_type, energy_groups)
elif mgxs_type == 'fission':
mgxs = FissionXS(domain, domain_type, energy_groups)
elif mgxs_type == 'nu-fission':
mgxs = FissionXS(domain, domain_type, energy_groups, nu=True)
elif mgxs_type == 'kappa-fission':
mgxs = KappaFissionXS(domain, domain_type, energy_groups)
elif mgxs_type == 'scatter':
mgxs = ScatterXS(domain, domain_type, energy_groups)
elif mgxs_type == 'nu-scatter':
mgxs = ScatterXS(domain, domain_type, energy_groups, nu=True)
elif mgxs_type == 'scatter matrix':
mgxs = ScatterMatrixXS(domain, domain_type, energy_groups)
elif mgxs_type == 'nu-scatter matrix':
mgxs = ScatterMatrixXS(domain, domain_type, energy_groups, nu=True)
elif mgxs_type == 'multiplicity matrix':
mgxs = MultiplicityMatrixXS(domain, domain_type, energy_groups)
elif mgxs_type == 'scatter probability matrix':
mgxs = ScatterProbabilityMatrix(domain, domain_type, energy_groups)
elif mgxs_type == 'consistent scatter matrix':
mgxs = ScatterMatrixXS(domain, domain_type, energy_groups)
mgxs.formulation = 'consistent'
elif mgxs_type == 'consistent nu-scatter matrix':
mgxs = ScatterMatrixXS(domain, domain_type, energy_groups, nu=True)
mgxs.formulation = 'consistent'
elif mgxs_type == 'nu-fission matrix':
mgxs = NuFissionMatrixXS(domain, domain_type, energy_groups)
elif mgxs_type == 'chi':
mgxs = Chi(domain, domain_type, energy_groups)
elif mgxs_type == 'chi-prompt':
mgxs = Chi(domain, domain_type, energy_groups, prompt=True)
elif mgxs_type == 'inverse-velocity':
mgxs = InverseVelocity(domain, domain_type, energy_groups)
elif mgxs_type == 'prompt-nu-fission':
mgxs = FissionXS(domain, domain_type, energy_groups, prompt=True)
elif mgxs_type == 'prompt-nu-fission matrix':
mgxs = NuFissionMatrixXS(domain, domain_type, energy_groups,
prompt=True)
elif mgxs_type == 'current':
mgxs = Current(domain, domain_type, energy_groups)
elif mgxs_type == 'diffusion-coefficient':
mgxs = DiffusionCoefficient(domain, domain_type, energy_groups)
elif mgxs_type == 'nu-diffusion-coefficient':
mgxs = DiffusionCoefficient(domain, domain_type, energy_groups, nu=True)
elif mgxs_type in ARBITRARY_VECTOR_TYPES:
            # Then it is a reaction not covered by the above that is
            # supported by the ArbitraryXS class
mgxs = ArbitraryXS(mgxs_type, domain, domain_type, energy_groups)
elif mgxs_type in ARBITRARY_MATRIX_TYPES:
mgxs = ArbitraryMatrixXS(mgxs_type, domain, domain_type,
energy_groups)
mgxs.by_nuclide = by_nuclide
mgxs.name = name
mgxs.num_polar = num_polar
mgxs.num_azimuthal = num_azimuthal
return mgxs
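    # Example factory usage (a sketch; 'fuel_cell' and 'two_groups' are
    # hypothetical Cell and EnergyGroups objects):
    #   xs = openmc.mgxs.MGXS.get_mgxs('nu-fission', domain=fuel_cell,
    #                                  domain_type='cell',
    #                                  energy_groups=two_groups)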
def get_nuclides(self):
"""Get all nuclides in the cross section's spatial domain.
Returns
-------
list of str
A list of the string names for each nuclide in the spatial domain
(e.g., ['U235', 'U238', 'O16'])
Raises
------
ValueError
When this method is called before the spatial domain has been set.
"""
if self.domain is None:
raise ValueError('Unable to get all nuclides without a domain')
# If the user defined nuclides, return them
if self._nuclides:
return self._nuclides
# Otherwise, return all nuclides in the spatial domain
else:
return self.domain.get_nuclides()
def get_nuclide_density(self, nuclide):
"""Get the atomic number density in units of atoms/b-cm for a nuclide
in the cross section's spatial domain.
Parameters
----------
nuclide : str
A nuclide name string (e.g., 'U235')
Returns
-------
float
The atomic number density (atom/b-cm) for the nuclide of interest
"""
cv.check_type('nuclide', nuclide, str)
# Get list of all nuclides in the spatial domain
nuclides = self.domain.get_nuclide_densities()
return nuclides[nuclide][1] if nuclide in nuclides else 0.0
def get_nuclide_densities(self, nuclides='all'):
"""Get an array of atomic number densities in units of atom/b-cm for all
nuclides in the cross section's spatial domain.
Parameters
----------
nuclides : Iterable of str or 'all' or 'sum'
A list of nuclide name strings (e.g., ['U235', 'U238']). The
special string 'all' will return the atom densities for all nuclides
in the spatial domain. The special string 'sum' will return the atom
density summed across all nuclides in the spatial domain. Defaults
to 'all'.
Returns
-------
numpy.ndarray of float
An array of the atomic number densities (atom/b-cm) for each of the
nuclides in the spatial domain
Raises
------
ValueError
When this method is called before the spatial domain has been set.
"""
if self.domain is None:
raise ValueError('Unable to get nuclide densities without a domain')
# Sum the atomic number densities for all nuclides
if nuclides == 'sum':
nuclides = self.get_nuclides()
densities = np.zeros(1, dtype=np.float)
for nuclide in nuclides:
densities[0] += self.get_nuclide_density(nuclide)
# Tabulate the atomic number densities for all nuclides
elif nuclides == 'all':
nuclides = self.get_nuclides()
densities = np.zeros(self.num_nuclides, dtype=np.float)
for i, nuclide in enumerate(nuclides):
densities[i] += self.get_nuclide_density(nuclide)
# Tabulate the atomic number densities for each specified nuclide
else:
densities = np.zeros(len(nuclides), dtype=np.float)
for i, nuclide in enumerate(nuclides):
densities[i] = self.get_nuclide_density(nuclide)
return densities
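    # For instance, get_nuclide_densities(['U235', 'U238']) returns a length-2
    # array of atom/b-cm densities in the order the nuclides were requested.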
def _compute_xs(self):
"""Performs generic cleanup after a subclass' uses tally arithmetic to
compute a multi-group cross section as a derived tally.
This method replaces CrossNuclides generated by tally arithmetic with
the original Nuclide objects in the xs_tally instance attribute. The
simple Nuclides allow for cleaner output through Pandas DataFrames as
well as simpler data access through the get_xs(...) class method.
        In addition, this routine resets NaNs in the multi-group cross section
        array to 0.0. This may occur if no events were scored in certain tally
        bins, which would otherwise lead to a divide-by-zero situation.
"""
# If computing xs for each nuclide, replace CrossNuclides with originals
if self.by_nuclide:
self.xs_tally._nuclides = []
nuclides = self.get_nuclides()
for nuclide in nuclides:
self.xs_tally.nuclides.append(openmc.Nuclide(nuclide))
# Remove NaNs which may have resulted from divide-by-zero operations
self.xs_tally._mean = np.nan_to_num(self.xs_tally.mean)
self.xs_tally._std_dev = np.nan_to_num(self.xs_tally.std_dev)
self.xs_tally.sparse = self.sparse
def load_from_statepoint(self, statepoint):
"""Extracts tallies in an OpenMC StatePoint with the data needed to
compute multi-group cross sections.
This method is needed to compute cross section data from tallies
in an OpenMC StatePoint object.
.. note:: The statepoint must be linked with an OpenMC Summary object.
Parameters
----------
statepoint : openmc.StatePoint
An OpenMC StatePoint object with tally data
Raises
------
ValueError
When this method is called with a statepoint that has not been
linked with a summary object.
"""
cv.check_type('statepoint', statepoint, openmc.StatePoint)
if statepoint.summary is None:
msg = 'Unable to load data from a statepoint which has not been ' \
'linked with a summary file'
raise ValueError(msg)
# Override the domain object that loaded from an OpenMC summary file
# NOTE: This is necessary for micro cross-sections which require
# the isotopic number densities as computed by OpenMC
su = statepoint.summary
if self.domain_type in ('cell', 'distribcell'):
self.domain = su._fast_cells[self.domain.id]
elif self.domain_type == 'universe':
self.domain = su._fast_universes[self.domain.id]
elif self.domain_type == 'material':
self.domain = su._fast_materials[self.domain.id]
elif self.domain_type == 'mesh':
self.domain = statepoint.meshes[self.domain.id]
else:
msg = 'Unable to load data from a statepoint for domain type {0} ' \
'which is not yet supported'.format(self.domain_type)
raise ValueError(msg)
# Use tally "slicing" to ensure that tallies correspond to our domain
# NOTE: This is important if tally merging was used
if self.domain_type == 'mesh':
filters = [_DOMAIN_TO_FILTER[self.domain_type]]
filter_bins = [tuple(self.domain.indices)]
elif self.domain_type != 'distribcell':
filters = [_DOMAIN_TO_FILTER[self.domain_type]]
filter_bins = [(self.domain.id,)]
        # Distribcell filters only accept a single cell; neglect it when slicing
else:
filters = []
filter_bins = []
# Clear any tallies previously loaded from a statepoint
if self.loaded_sp:
self._tallies = None
self._xs_tally = None
self._rxn_rate_tally = None
self._loaded_sp = False
# Find, slice and store Tallies from StatePoint
# The tally slicing is needed if tally merging was used
for tally_type, tally in self.tallies.items():
sp_tally = statepoint.get_tally(
tally.scores, tally.filters, tally.nuclides,
estimator=tally.estimator, exact_filters=True)
sp_tally = sp_tally.get_slice(
tally.scores, filters, filter_bins, tally.nuclides)
sp_tally.sparse = self.sparse
self.tallies[tally_type] = sp_tally
self._loaded_sp = True
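    # Typical post-processing flow (statepoint filename is illustrative):
    #   sp = openmc.StatePoint('statepoint.100.h5')
    #   mgxs.load_from_statepoint(sp)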
def get_xs(self, groups='all', subdomains='all', nuclides='all',
xs_type='macro', order_groups='increasing',
value='mean', squeeze=True, **kwargs):
r"""Returns an array of multi-group cross sections.
This method constructs a 3D NumPy array for the requested
multi-group cross section data for one or more subdomains
(1st dimension), energy groups (2nd dimension), and nuclides
(3rd dimension).
Parameters
----------
groups : Iterable of Integral or 'all'
Energy groups of interest. Defaults to 'all'.
subdomains : Iterable of Integral or 'all'
Subdomain IDs of interest. Defaults to 'all'.
nuclides : Iterable of str or 'all' or 'sum'
A list of nuclide name strings (e.g., ['U235', 'U238']). The
special string 'all' will return the cross sections for all nuclides
in the spatial domain. The special string 'sum' will return the
cross section summed over all nuclides. Defaults to 'all'.
xs_type: {'macro', 'micro'}
Return the macro or micro cross section in units of cm^-1 or barns.
Defaults to 'macro'.
order_groups: {'increasing', 'decreasing'}
Return the cross section indexed according to increasing or
decreasing energy groups (decreasing or increasing energies).
Defaults to 'increasing'.
value : {'mean', 'std_dev', 'rel_err'}
A string for the type of value to return. Defaults to 'mean'.
squeeze : bool
A boolean representing whether to eliminate the extra dimensions
of the multi-dimensional array to be returned. Defaults to True.
Returns
-------
numpy.ndarray
A NumPy array of the multi-group cross section indexed in the order
each group, subdomain and nuclide is listed in the parameters.
Raises
------
ValueError
When this method is called before the multi-group cross section is
computed from tally data.
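Examples
--------
A sketch of typical queries, assuming ``mgxs`` has already been
loaded from a statepoint; the group indices are illustrative:
>>> xs_mean = mgxs.get_xs(groups=[1, 2], xs_type='macro')
>>> xs_err = mgxs.get_xs(groups=[1, 2], value='rel_err')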
"""
cv.check_value('value', value, ['mean', 'std_dev', 'rel_err'])
cv.check_value('xs_type', xs_type, ['macro', 'micro'])
# FIXME: Unable to get microscopic xs for mesh domain because the mesh
# cells do not know the nuclide densities in each mesh cell.
if self.domain_type == 'mesh' and xs_type == 'micro':
msg = 'Unable to get micro xs for mesh domain since the mesh ' \
'cells do not know the nuclide densities in each mesh cell.'
raise ValueError(msg)
filters = []
filter_bins = []
# Construct a collection of the domain filter bins
if not isinstance(subdomains, str):
cv.check_iterable_type('subdomains', subdomains, Integral,
max_depth=3)
filters.append(_DOMAIN_TO_FILTER[self.domain_type])
subdomain_bins = []
for subdomain in subdomains:
subdomain_bins.append(subdomain)
filter_bins.append(tuple(subdomain_bins))
# Construct list of energy group bounds tuples for all requested groups
if not isinstance(groups, str):
cv.check_iterable_type('groups', groups, Integral)
filters.append(openmc.EnergyFilter)
energy_bins = []
for group in groups:
energy_bins.append(
(self.energy_groups.get_group_bounds(group),))
filter_bins.append(tuple(energy_bins))
# Construct a collection of the nuclides to retrieve from the xs tally
if self.by_nuclide:
if nuclides == 'all' or nuclides == 'sum' or nuclides == ['sum']:
query_nuclides = self.get_nuclides()
else:
query_nuclides = nuclides
else:
query_nuclides = ['total']
# If user requested the sum for all nuclides, use tally summation
if nuclides == 'sum' or nuclides == ['sum']:
xs_tally = self.xs_tally.summation(nuclides=query_nuclides)
xs = xs_tally.get_values(filters=filters,
filter_bins=filter_bins, value=value)
else:
xs = self.xs_tally.get_values(filters=filters,
filter_bins=filter_bins,
nuclides=query_nuclides, value=value)
# Divide by atom number densities for microscopic cross sections
if xs_type == 'micro' and self._divide_by_density:
if self.by_nuclide:
densities = self.get_nuclide_densities(nuclides)
else:
densities = self.get_nuclide_densities('sum')
if value == 'mean' or value == 'std_dev':
xs /= densities[np.newaxis, :, np.newaxis]
# Eliminate the trivial score dimension
xs = np.squeeze(xs, axis=len(xs.shape) - 1)
xs = np.nan_to_num(xs)
if groups == 'all':
num_groups = self.num_groups
else:
num_groups = len(groups)
# Reshape tally data array with separate axes for domain and energy
# Accommodate the polar and azimuthal bins if needed
num_subdomains = int(xs.shape[0] / (num_groups * self.num_polar *
self.num_azimuthal))
if self.num_polar > 1 or self.num_azimuthal > 1:
new_shape = (self.num_polar, self.num_azimuthal, num_subdomains,
num_groups)
else:
new_shape = (num_subdomains, num_groups)
new_shape += xs.shape[1:]
xs = np.reshape(xs, new_shape)
# Reverse data if user requested increasing energy groups since
# tally data is stored in order of increasing energies
if order_groups == 'increasing':
xs = xs[..., ::-1, :]
if squeeze:
# We want to squeeze out everything but the polar, azimuthal,
# and energy group data.
xs = self._squeeze_xs(xs)
return xs
def get_flux(self, groups='all', subdomains='all',
order_groups='increasing', value='mean',
squeeze=True, **kwargs):
r"""Returns an array of the fluxes used to weight the MGXS.
This method constructs a 2D NumPy array for the requested
weighting flux for one or more subdomains (1st dimension), and
energy groups (2nd dimension).
Parameters
----------
groups : Iterable of Integral or 'all'
Energy groups of interest. Defaults to 'all'.
subdomains : Iterable of Integral or 'all'
Subdomain IDs of interest. Defaults to 'all'.
order_groups: {'increasing', 'decreasing'}
Return the cross section indexed according to increasing or
decreasing energy groups (decreasing or increasing energies).
Defaults to 'increasing'.
value : {'mean', 'std_dev', 'rel_err'}
A string for the type of value to return. Defaults to 'mean'.
squeeze : bool
A boolean representing whether to eliminate the extra dimensions
of the multi-dimensional array to be returned. Defaults to True.
Returns
-------
numpy.ndarray
A NumPy array of the flux indexed in the order
each group and subdomain is listed in the parameters.
Raises
------
ValueError
When this method is called before the flux is computed from tally
data, or when this method is used on an MGXS type without a flux
score.
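Examples
--------
A sketch, assuming ``mgxs`` has been loaded from a statepoint and
carries a flux tally:
>>> flux = mgxs.get_flux(value='mean')
>>> flux_sd = mgxs.get_flux(value='std_dev')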
"""
cv.check_value('value', value, ['mean', 'std_dev', 'rel_err'])
filters = []
filter_bins = []
# Construct a collection of the domain filter bins
if not isinstance(subdomains, str):
cv.check_iterable_type('subdomains', subdomains, Integral,
max_depth=3)
filters.append(_DOMAIN_TO_FILTER[self.domain_type])
subdomain_bins = []
for subdomain in subdomains:
subdomain_bins.append(subdomain)
filter_bins.append(tuple(subdomain_bins))
# Construct list of energy group bounds tuples for all requested groups
if not isinstance(groups, str):
cv.check_iterable_type('groups', groups, Integral)
filters.append(openmc.EnergyFilter)
energy_bins = []
for group in groups:
energy_bins.append(
(self.energy_groups.get_group_bounds(group),))
filter_bins.append(tuple(energy_bins))
# Determine which flux to obtain
# Step through in order of usefulness
for key in ['flux', 'flux (tracklength)', 'flux (analog)']:
if key in self.tally_keys:
tally = self.tallies[key]
break
else:
msg = "MGXS of Type {} do not have an explicit weighting flux!"
raise ValueError(msg.format(self.__name__))
flux = tally.get_values(filters=filters, filter_bins=filter_bins,
nuclides=['total'], value=value)
# Eliminate the trivial score dimension
flux = np.squeeze(flux, axis=len(flux.shape) - 1)
# Eliminate the trivial nuclide dimension
flux = np.squeeze(flux, axis=len(flux.shape) - 1)
flux = np.nan_to_num(flux)
if groups == 'all':
num_groups = self.num_groups
else:
num_groups = len(groups)
# Reshape tally data array with separate axes for domain and energy
# Accommodate the polar and azimuthal bins if needed
num_subdomains = int(flux.shape[0] / (num_groups * self.num_polar *
self.num_azimuthal))
if self.num_polar > 1 or self.num_azimuthal > 1:
new_shape = (self.num_polar, self.num_azimuthal, num_subdomains,
num_groups)
else:
new_shape = (num_subdomains, num_groups)
new_shape += flux.shape[1:]
flux = np.reshape(flux, new_shape)
# Reverse data if user requested increasing energy groups since
# tally data is stored in order of increasing energies
if order_groups == 'increasing':
flux = flux[..., ::-1]
if squeeze:
# We want to squeeze out everything but the polar, azimuthal,
# and energy group data.
flux = self._squeeze_xs(flux)
return flux
def get_condensed_xs(self, coarse_groups):
"""Construct an energy-condensed version of this cross section.
Parameters
----------
coarse_groups : openmc.mgxs.EnergyGroups
The coarse energy group structure of interest
Returns
-------
MGXS
A new MGXS condensed to the group structure of interest
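Examples
--------
A sketch condensing to a hypothetical two-group structure whose
outer edges (in eV) match this MGXS' fine group structure:
>>> coarse_groups = openmc.mgxs.EnergyGroups([0.0, 0.625, 20.0e6])
>>> condensed = mgxs.get_condensed_xs(coarse_groups)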
"""
cv.check_type('coarse_groups', coarse_groups, EnergyGroups)
cv.check_less_than('coarse groups', coarse_groups.num_groups,
self.num_groups, equality=True)
cv.check_value('upper coarse energy', coarse_groups.group_edges[-1],
[self.energy_groups.group_edges[-1]])
cv.check_value('lower coarse energy', coarse_groups.group_edges[0],
[self.energy_groups.group_edges[0]])
# Clone this MGXS to initialize the condensed version
condensed_xs = copy.deepcopy(self)
condensed_xs._rxn_rate_tally = None
condensed_xs._xs_tally = None
condensed_xs._sparse = False
condensed_xs._energy_groups = coarse_groups
# Build energy indices to sum across
energy_indices = []
for group in range(coarse_groups.num_groups, 0, -1):
low, high = coarse_groups.get_group_bounds(group)
low_index = np.where(self.energy_groups.group_edges == low)[0][0]
energy_indices.append(low_index)
fine_edges = self.energy_groups.group_edges
# Condense each of the tallies to the coarse group structure
for tally in condensed_xs.tallies.values():
# Make condensed tally derived and null out sum, sum_sq
tally._derived = True
tally._sum = None
tally._sum_sq = None
# Get tally data arrays reshaped with one dimension per filter
mean = tally.get_reshaped_data(value='mean')
std_dev = tally.get_reshaped_data(value='std_dev')
# Sum across all applicable fine energy group filters
for i, tally_filter in enumerate(tally.filters):
if not isinstance(tally_filter, (openmc.EnergyFilter,
openmc.EnergyoutFilter)):
continue
elif len(tally_filter.bins) != len(fine_edges) - 1:
continue
elif not np.allclose(tally_filter.bins[:, 0], fine_edges[:-1]):
continue
else:
cedge = coarse_groups.group_edges
tally_filter.values = cedge
tally_filter.bins = np.vstack((cedge[:-1], cedge[1:])).T
mean = np.add.reduceat(mean, energy_indices, axis=i)
std_dev = np.add.reduceat(std_dev**2, energy_indices,
axis=i)
std_dev = np.sqrt(std_dev)
# Reshape condensed data arrays with one dimension for all filters
mean = np.reshape(mean, tally.shape)
std_dev = np.reshape(std_dev, tally.shape)
# Override tally's data with the new condensed data
tally._mean = mean
tally._std_dev = std_dev
# Compute the energy condensed multi-group cross section
condensed_xs.sparse = self.sparse
return condensed_xs
def get_subdomain_avg_xs(self, subdomains='all'):
"""Construct a subdomain-averaged version of this cross section.
This method is useful for averaging cross sections across distribcell
instances. The method performs spatial homogenization to compute the
scalar flux-weighted average cross section across the subdomains.
Parameters
----------
subdomains : Iterable of Integral or 'all'
The subdomain IDs to average across. Defaults to 'all'.
Returns
-------
openmc.mgxs.MGXS
A new MGXS averaged across the subdomains of interest
Raises
------
ValueError
When this method is called before the multi-group cross section is
computed from tally data.
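Examples
--------
A sketch averaging a distribcell MGXS across all of its cell
instances:
>>> avg_xs = mgxs.get_subdomain_avg_xs(subdomains='all')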
"""
# Construct a collection of the subdomain filter bins to average across
if not isinstance(subdomains, str):
cv.check_iterable_type('subdomains', subdomains, Integral)
subdomains = [(subdomain,) for subdomain in subdomains]
subdomains = [tuple(subdomains)]
elif self.domain_type == 'distribcell':
subdomains = [i for i in range(self.num_subdomains)]
subdomains = [tuple(subdomains)]
else:
subdomains = None
# Clone this MGXS to initialize the subdomain-averaged version
avg_xs = copy.deepcopy(self)
avg_xs._rxn_rate_tally = None
avg_xs._xs_tally = None
# Average each of the tallies across subdomains
for tally_type, tally in avg_xs.tallies.items():
filt_type = _DOMAIN_TO_FILTER[self.domain_type]
tally_avg = tally.summation(filter_type=filt_type,
filter_bins=subdomains)
avg_xs.tallies[tally_type] = tally_avg
avg_xs._domain_type = 'sum({0})'.format(self.domain_type)
avg_xs.sparse = self.sparse
return avg_xs
def _get_homogenized_mgxs(self, other_mgxs, denom_score='flux'):
"""Construct a homogenized MGXS with other MGXS objects.
This method constructs a new MGXS object that is the flux-weighted
combination of two MGXS objects. It is equivalent to what one would
obtain if the tally spatial domain were designed to encompass the
individual domains for both MGXS objects. This is accomplished by
summing the rxn rate (numerator) tally and the denominator tally
(often a tally of the flux over the spatial domain) that are used to
compute a multi-group cross-section.
Parameters
----------
other_mgxs : openmc.mgxs.MGXS or Iterable of openmc.mgxs.MGXS
The MGXS to homogenize with this one.
denom_score : str
The score used in the denominator when computing the MGXS.
Returns
-------
openmc.mgxs.MGXS
A new homogenized MGXS
Raises
------
ValueError
If the other_mgxs is of a different type.
"""
# Check type of denom score
cv.check_type('denom_score', denom_score, str)
# Construct a collection of the subdomain filter bins to homogenize
# across
if isinstance(other_mgxs, openmc.mgxs.MGXS):
other_mgxs = [other_mgxs]
cv.check_iterable_type('other_mgxs', other_mgxs, openmc.mgxs.MGXS)
for mgxs in other_mgxs:
if mgxs.rxn_type != self.rxn_type:
msg = 'Not able to homogenize two MGXS with different rxn types'
raise ValueError(msg)
# Clone this MGXS to initialize the homogenized version
homogenized_mgxs = copy.deepcopy(self)
homogenized_mgxs._derived = True
name = 'hom({}, '.format(self.domain.name)
# Get the domain filter
filter_type = _DOMAIN_TO_FILTER[self.domain_type]
self_filter = self.rxn_rate_tally.find_filter(filter_type)
# Get the rxn rate and denom tallies
rxn_rate_tally = self.rxn_rate_tally
denom_tally = self.tallies[denom_score]
for mgxs in other_mgxs:
# Swap the domain filter bins for the other mgxs rxn rate tally
other_rxn_rate_tally = copy.deepcopy(mgxs.rxn_rate_tally)
other_filter = other_rxn_rate_tally.find_filter(filter_type)
other_filter._bins = self_filter._bins
# Swap the domain filter bins for the denom tally
other_denom_tally = copy.deepcopy(mgxs.tallies[denom_score])
other_filter = other_denom_tally.find_filter(filter_type)
other_filter._bins = self_filter._bins
# Add the rxn rate and denom tallies
rxn_rate_tally += other_rxn_rate_tally
denom_tally += other_denom_tally
# Update the name for the homogenized MGXS
name += '{}, '.format(mgxs.domain.name)
# Set the properties of the homogenized MGXS
homogenized_mgxs._rxn_rate_tally = rxn_rate_tally
homogenized_mgxs.tallies[denom_score] = denom_tally
homogenized_mgxs._domain.name = name[:-2] + ')'
return homogenized_mgxs
def get_homogenized_mgxs(self, other_mgxs):
"""Construct a homogenized mgxs with other MGXS objects.
Parameters
----------
other_mgxs : openmc.mgxs.MGXS or Iterable of openmc.mgxs.MGXS
The MGXS to homogenize with this one.
Returns
-------
openmc.mgxs.MGXS
A new homogenized MGXS
Raises
------
ValueError
If the other_mgxs is of a different type.
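Examples
--------
A sketch combining two MGXS of the same reaction type computed over
different cells (``fuel_xs`` and ``clad_xs`` are illustrative):
>>> homogenized = fuel_xs.get_homogenized_mgxs(clad_xs)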
"""
return self._get_homogenized_mgxs(other_mgxs, 'flux')
def get_slice(self, nuclides=[], groups=[]):
"""Build a sliced MGXS for the specified nuclides and energy groups.
This method constructs a new MGXS to encapsulate a subset of the data
represented by this MGXS. The subset of data to include in the tally
slice is determined by the nuclides and energy groups specified in
the input parameters.
Parameters
----------
nuclides : list of str
A list of nuclide name strings
(e.g., ['U235', 'U238']; default is [])
groups : list of int
A list of energy group indices starting at 1 for the high energies
(e.g., [1, 2, 3]; default is [])
Returns
-------
openmc.mgxs.MGXS
A new MGXS object which encapsulates the subset of data requested
for the nuclide(s) and/or energy group(s) requested in the
parameters.
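Examples
--------
A sketch slicing out a single nuclide in the two highest energy
groups; the nuclide name is illustrative:
>>> sliced = mgxs.get_slice(nuclides=['U235'], groups=[1, 2])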
"""
cv.check_iterable_type('nuclides', nuclides, str)
cv.check_iterable_type('energy_groups', groups, Integral)
# Build lists of filters and filter bins to slice
filters = []
filter_bins = []
if len(groups) != 0:
energy_bins = []
for group in groups:
group_bounds = self.energy_groups.get_group_bounds(group)
energy_bins.append(group_bounds)
filter_bins.append(tuple(energy_bins))
filters.append(openmc.EnergyFilter)
# Clone this MGXS to initialize the sliced version
slice_xs = copy.deepcopy(self)
slice_xs._rxn_rate_tally = None
slice_xs._xs_tally = None
# Slice each of the tallies across nuclides and energy groups
for tally_type, tally in slice_xs.tallies.items():
slice_nuclides = [nuc for nuc in nuclides if nuc in tally.nuclides]
if len(groups) != 0 and tally.contains_filter(openmc.EnergyFilter):
tally_slice = tally.get_slice(filters=filters,
filter_bins=filter_bins,
nuclides=slice_nuclides)
else:
tally_slice = tally.get_slice(nuclides=slice_nuclides)
slice_xs.tallies[tally_type] = tally_slice
# Assign sliced energy group structure to sliced MGXS
if groups:
new_group_edges = []
for group in groups:
group_edges = self.energy_groups.get_group_bounds(group)
new_group_edges.extend(group_edges)
new_group_edges = np.unique(new_group_edges)
slice_xs.energy_groups.group_edges = sorted(new_group_edges)
# Assign sliced nuclides to sliced MGXS
if nuclides:
slice_xs.nuclides = nuclides
slice_xs.sparse = self.sparse
return slice_xs
def can_merge(self, other):
"""Determine if another MGXS can be merged with this one
If results have been loaded from a statepoint, then MGXS are only
mergeable along one and only one of energy groups or nuclides.
Parameters
----------
other : openmc.mgxs.MGXS
MGXS to check for merging
"""
if not isinstance(other, type(self)):
return False
# Compare reaction type, energy groups, nuclides, domain type
if self.rxn_type != other.rxn_type:
return False
elif not self.energy_groups.can_merge(other.energy_groups):
return False
elif self.by_nuclide != other.by_nuclide:
return False
elif self.domain_type != other.domain_type:
return False
elif 'distribcell' not in self.domain_type and self.domain != other.domain:
return False
elif not self.xs_tally.can_merge(other.xs_tally):
return False
elif not self.rxn_rate_tally.can_merge(other.rxn_rate_tally):
return False
# If all conditionals pass then MGXS are mergeable
return True
def merge(self, other):
"""Merge another MGXS with this one
MGXS are only mergeable if their energy groups and nuclides are either
identical or mutually exclusive. If results have been loaded from a
statepoint, then MGXS are only mergeable along one and only one of
energy groups or nuclides.
Parameters
----------
other : openmc.mgxs.MGXS
MGXS to merge with this one
Returns
-------
merged_mgxs : openmc.mgxs.MGXS
Merged MGXS
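Examples
--------
A sketch merging two MGXS with mutually exclusive nuclides
(``u235_xs`` and ``u238_xs`` are illustrative):
>>> if u235_xs.can_merge(u238_xs):
...     merged = u235_xs.merge(u238_xs)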
"""
if not self.can_merge(other):
raise ValueError('Unable to merge MGXS')
# Create deep copy of tally to return as merged tally
merged_mgxs = copy.deepcopy(self)
merged_mgxs._derived = True
# Merge energy groups
if self.energy_groups != other.energy_groups:
merged_groups = self.energy_groups.merge(other.energy_groups)
merged_mgxs.energy_groups = merged_groups
# Merge nuclides
if self.nuclides != other.nuclides:
# The nuclides must be mutually exclusive
for nuclide in self.nuclides:
if nuclide in other.nuclides:
msg = 'Unable to merge MGXS with shared nuclides'
raise ValueError(msg)
# Concatenate lists of nuclides for the merged MGXS
merged_mgxs.nuclides = self.nuclides + other.nuclides
# Null base tallies but merge reaction rate and cross section tallies
merged_mgxs._tallies = OrderedDict()
merged_mgxs._rxn_rate_tally = self.rxn_rate_tally.merge(other.rxn_rate_tally)
merged_mgxs._xs_tally = self.xs_tally.merge(other.xs_tally)
return merged_mgxs
def print_xs(self, subdomains='all', nuclides='all', xs_type='macro'):
"""Print a string representation for the multi-group cross section.
Parameters
----------
subdomains : Iterable of Integral or 'all'
The subdomain IDs of the cross sections to include in the report.
Defaults to 'all'.
nuclides : Iterable of str or 'all' or 'sum'
The nuclides of the cross-sections to include in the report. This
may be a list of nuclide name strings (e.g., ['U235', 'U238']).
The special string 'all' will report the cross sections for all
nuclides in the spatial domain. The special string 'sum' will report
the cross sections summed over all nuclides. Defaults to 'all'.
xs_type: {'macro', 'micro'}
Return the macro or micro cross section in units of cm^-1 or barns.
Defaults to 'macro'.
"""
# Construct a collection of the subdomains to report
if not isinstance(subdomains, str):
cv.check_iterable_type('subdomains', subdomains, Integral)
elif self.domain_type == 'distribcell':
subdomains = np.arange(self.num_subdomains, dtype=int)
elif self.domain_type == 'mesh':
subdomains = list(self.domain.indices)
else:
subdomains = [self.domain.id]
# Construct a collection of the nuclides to report
if self.by_nuclide:
if nuclides == 'all':
nuclides = self.get_nuclides()
elif nuclides == 'sum':
nuclides = ['sum']
else:
cv.check_iterable_type('nuclides', nuclides, str)
else:
nuclides = ['sum']
cv.check_value('xs_type', xs_type, ['macro', 'micro'])
# Build header for string with type and domain info
string = 'Multi-Group XS\n'
string += '{0: <16}=\t{1}\n'.format('\tReaction Type', self.rxn_type)
string += '{0: <16}=\t{1}\n'.format('\tDomain Type', self.domain_type)
string += '{0: <16}=\t{1}\n'.format('\tDomain ID', self.domain.id)
# Generate the header for an individual XS
xs_header = '\tCross Sections [{0}]:'.format(self.get_units(xs_type))
# If cross section data has not been computed, only print string header
if self.tallies is None:
print(string)
return
# Set polar/azimuthal bins
if self.num_polar > 1 or self.num_azimuthal > 1:
pol_bins = np.linspace(0., np.pi, num=self.num_polar + 1,
endpoint=True)
azi_bins = np.linspace(-np.pi, np.pi, num=self.num_azimuthal + 1,
endpoint=True)
# Loop over all subdomains
for subdomain in subdomains:
if self.domain_type == 'distribcell' or self.domain_type == 'mesh':
string += '{0: <16}=\t{1}\n'.format('\tSubdomain', subdomain)
# Loop over all Nuclides
for nuclide in nuclides:
# Build header for nuclide type
if nuclide != 'sum':
string += '{0: <16}=\t{1}\n'.format('\tNuclide', nuclide)
# Build header for cross section type
string += '{0: <16}\n'.format(xs_header)
template = '{0: <12}Group {1} [{2: <10} - {3: <10}eV]:\t'
average_xs = self.get_xs(nuclides=[nuclide],
subdomains=[subdomain],
xs_type=xs_type, value='mean')
rel_err_xs = self.get_xs(nuclides=[nuclide],
subdomains=[subdomain],
xs_type=xs_type, value='rel_err')
rel_err_xs = rel_err_xs * 100.
if self.num_polar > 1 or self.num_azimuthal > 1:
# Loop over polar, azimuthal, and energy group ranges
for pol in range(len(pol_bins) - 1):
pol_low, pol_high = pol_bins[pol: pol + 2]
for azi in range(len(azi_bins) - 1):
azi_low, azi_high = azi_bins[azi: azi + 2]
string += '\t\tPolar Angle: [{0:5f} - {1:5f}]'.format(
pol_low, pol_high) + \
'\tAzimuthal Angle: [{0:5f} - {1:5f}]'.format(
azi_low, azi_high) + '\n'
for group in range(1, self.num_groups + 1):
bounds = \
self.energy_groups.get_group_bounds(group)
string += '\t' + template.format('', group,
bounds[0],
bounds[1])
string += '{0:.2e} +/- {1:.2e}%'.format(
average_xs[pol, azi, group - 1],
rel_err_xs[pol, azi, group - 1])
string += '\n'
string += '\n'
else:
# Loop over energy groups
for group in range(1, self.num_groups + 1):
bounds = self.energy_groups.get_group_bounds(group)
string += template.format('', group, bounds[0],
bounds[1])
string += '{0:.2e} +/- {1:.2e}%'.format(
average_xs[group - 1], rel_err_xs[group - 1])
string += '\n'
string += '\n'
string += '\n'
print(string)
def build_hdf5_store(self, filename='mgxs.h5', directory='mgxs',
subdomains='all', nuclides='all',
xs_type='macro', row_column='inout', append=True,
libver='earliest'):
"""Export the multi-group cross section data to an HDF5 binary file.
This method constructs an HDF5 file which stores the multi-group
cross section data. The data is stored in a hierarchy of HDF5 groups
from the domain type, domain id, subdomain id (for distribcell domains),
nuclides and cross section type. Two datasets for the mean and standard
deviation are stored for each subdomain entry in the HDF5 file.
.. note:: This requires the h5py Python package.
Parameters
----------
filename : str
Filename for the HDF5 file. Defaults to 'mgxs.h5'.
directory : str
Directory for the HDF5 file. Defaults to 'mgxs'.
subdomains : Iterable of Integral or 'all'
The subdomain IDs of the cross sections to include in the report.
Defaults to 'all'.
nuclides : Iterable of str or 'all' or 'sum'
The nuclides of the cross-sections to include in the report. This
may be a list of nuclide name strings (e.g., ['U235', 'U238']).
The special string 'all' will report the cross sections for all
nuclides in the spatial domain. The special string 'sum' will report
the cross sections summed over all nuclides. Defaults to 'all'.
xs_type: {'macro', 'micro'}
Store the macro or micro cross section in units of cm^-1 or barns.
Defaults to 'macro'.
row_column: {'inout', 'outin'}
Store scattering matrices indexed first by incoming group and
second by outgoing group ('inout'), or vice versa ('outin').
Defaults to 'inout'.
append : bool
If True, appends to an existing HDF5 file with the same filename
and directory (if one exists). Defaults to True.
libver : {'earliest', 'latest'}
Compatibility mode for the HDF5 file. 'latest' will produce files
that are less backwards compatible but have performance benefits.
Raises
------
ValueError
When this method is called before the multi-group cross section is
computed from tally data.
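Examples
--------
A sketch writing macroscopic cross sections to an HDF5 store; the
filename and directory are illustrative:
>>> mgxs.build_hdf5_store(filename='mgxs.h5', directory='mgxs',
...                       xs_type='macro')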
"""
# Make directory if it does not exist
if not os.path.exists(directory):
os.makedirs(directory)
filename = os.path.join(directory, filename)
filename = filename.replace(' ', '-')
if append and os.path.isfile(filename):
xs_results = h5py.File(filename, 'a')
else:
xs_results = h5py.File(filename, 'w', libver=libver)
# Construct a collection of the subdomains to report
if not isinstance(subdomains, str):
cv.check_iterable_type('subdomains', subdomains, Integral)
elif self.domain_type == 'distribcell':
subdomains = np.arange(self.num_subdomains, dtype=int)
elif self.domain_type == 'sum(distribcell)':
domain_filter = self.xs_tally.find_filter('sum(distribcell)')
subdomains = domain_filter.bins
elif self.domain_type == 'mesh':
subdomains = list(self.domain.indices)
else:
subdomains = [self.domain.id]
# Construct a collection of the nuclides to report
if self.by_nuclide:
if nuclides == 'all':
nuclides = self.get_nuclides()
densities = np.zeros(len(nuclides), dtype=float)
elif nuclides == 'sum':
nuclides = ['sum']
else:
cv.check_iterable_type('nuclides', nuclides, str)
else:
nuclides = ['sum']
cv.check_value('xs_type', xs_type, ['macro', 'micro'])
# Create an HDF5 group within the file for the domain
domain_type_group = xs_results.require_group(self.domain_type)
domain_group = domain_type_group.require_group(str(self.domain.id))
# Determine number of digits to pad subdomain group keys
num_digits = len(str(self.num_subdomains))
# Create a separate HDF5 group for each subdomain
for subdomain in subdomains:
# Create an HDF5 group for the subdomain
if self.domain_type == 'distribcell':
group_name = str(subdomain).zfill(num_digits)
subdomain_group = domain_group.require_group(group_name)
else:
subdomain_group = domain_group
# Create a separate HDF5 group for this cross section
rxn_group = subdomain_group.require_group(self.hdf5_key)
# Create a separate HDF5 group for each nuclide
for j, nuclide in enumerate(nuclides):
if nuclide != 'sum':
density = densities[j]
nuclide_group = rxn_group.require_group(nuclide)
nuclide_group.require_dataset('density', dtype=np.float64,
data=[density], shape=(1,))
else:
nuclide_group = rxn_group
# Extract the cross section for this subdomain and nuclide
average = self.get_xs(subdomains=[subdomain], nuclides=[nuclide],
xs_type=xs_type, value='mean',
row_column=row_column)
std_dev = self.get_xs(subdomains=[subdomain], nuclides=[nuclide],
xs_type=xs_type, value='std_dev',
row_column=row_column)
# Add MGXS results data to the HDF5 group
nuclide_group.require_dataset('average', dtype=np.float64,
shape=average.shape, data=average)
nuclide_group.require_dataset('std. dev.', dtype=np.float64,
shape=std_dev.shape, data=std_dev)
# Close the results HDF5 file
xs_results.close()
def export_xs_data(self, filename='mgxs', directory='mgxs',
format='csv', groups='all', xs_type='macro'):
"""Export the multi-group cross section data to a file.
This method leverages the functionality in the Pandas library to export
the multi-group cross section data in a variety of output file formats
for storage and/or post-processing.
Parameters
----------
filename : str
Filename for the exported file. Defaults to 'mgxs'.
directory : str
Directory for the exported file. Defaults to 'mgxs'.
format : {'csv', 'excel', 'pickle', 'latex'}
The format for the exported data file. Defaults to 'csv'.
groups : Iterable of Integral or 'all'
Energy groups of interest. Defaults to 'all'.
xs_type: {'macro', 'micro'}
Store the macro or micro cross section in units of cm^-1 or barns.
Defaults to 'macro'.
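Examples
--------
A sketch exporting the data to CSV; the filename is illustrative:
>>> mgxs.export_xs_data(filename='total-xs', format='csv')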
"""
cv.check_type('filename', filename, str)
cv.check_type('directory', directory, str)
cv.check_value('format', format, ['csv', 'excel', 'pickle', 'latex'])
cv.check_value('xs_type', xs_type, ['macro', 'micro'])
# Make directory if it does not exist
if not os.path.exists(directory):
os.makedirs(directory)
filename = os.path.join(directory, filename)
filename = filename.replace(' ', '-')
# Get a Pandas DataFrame for the data
df = self.get_pandas_dataframe(groups=groups, xs_type=xs_type)
# Export the data using Pandas IO API
if format == 'csv':
df.to_csv(filename + '.csv', index=False)
elif format == 'excel':
if self.domain_type == 'mesh':
df.to_excel(filename + '.xls')
else:
df.to_excel(filename + '.xls', index=False)
elif format == 'pickle':
df.to_pickle(filename + '.pkl')
elif format == 'latex':
if self.domain_type == 'distribcell':
msg = 'Unable to export distribcell multi-group cross section ' \
'data to a LaTeX table'
raise NotImplementedError(msg)
df.to_latex(filename + '.tex', bold_rows=True,
longtable=True, index=False)
# Surround LaTeX table with code needed to run pdflatex
with open(filename + '.tex', 'r') as original:
data = original.read()
with open(filename + '.tex', 'w') as modified:
modified.write(
'\\documentclass[preview, 12pt, border=1mm]{standalone}\n')
modified.write('\\usepackage{caption}\n')
modified.write('\\usepackage{longtable}\n')
modified.write('\\usepackage{booktabs}\n')
modified.write('\\begin{document}\n\n')
modified.write(data)
modified.write('\n\\end{document}')
def get_pandas_dataframe(self, groups='all', nuclides='all',
xs_type='macro', paths=True):
"""Build a Pandas DataFrame for the MGXS data.
This method leverages :meth:`openmc.Tally.get_pandas_dataframe`, but
renames the columns with terminology appropriate for cross section data.
Parameters
----------
groups : Iterable of Integral or 'all'
Energy groups of interest. Defaults to 'all'.
nuclides : Iterable of str or 'all' or 'sum'
The nuclides of the cross-sections to include in the dataframe. This
may be a list of nuclide name strings (e.g., ['U235', 'U238']).
The special string 'all' will include the cross sections for all
nuclides in the spatial domain. The special string 'sum' will
include the cross sections summed over all nuclides. Defaults
to 'all'.
xs_type: {'macro', 'micro'}
Return macro or micro cross section in units of cm^-1 or barns.
Defaults to 'macro'.
paths : bool, optional
Construct columns for distribcell tally filters (default is True).
The geometric information in the Summary object is embedded into
a Multi-index column with a geometric "path" to each distribcell
instance.
Returns
-------
pandas.DataFrame
A Pandas DataFrame for the cross section data.
Raises
------
ValueError
When this method is called before the multi-group cross section is
computed from tally data.
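Examples
--------
A sketch building a dataframe and inspecting the first rows:
>>> df = mgxs.get_pandas_dataframe(xs_type='macro')
>>> df.head(10)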
"""
if not isinstance(groups, str):
cv.check_iterable_type('groups', groups, Integral)
if nuclides != 'all' and nuclides != 'sum':
cv.check_iterable_type('nuclides', nuclides, str)
cv.check_value('xs_type', xs_type, ['macro', 'micro'])
# Get a Pandas DataFrame from the derived xs tally
if self.by_nuclide and nuclides == 'sum':
# Use tally summation to sum across all nuclides
xs_tally = self.xs_tally.summation(nuclides=self.get_nuclides())
df = xs_tally.get_pandas_dataframe(paths=paths)
# Remove nuclide column since it is homogeneous and redundant
if self.domain_type == 'mesh':
df.drop('sum(nuclide)', axis=1, level=0, inplace=True)
else:
df.drop('sum(nuclide)', axis=1, inplace=True)
# If the user requested a specific set of nuclides
elif self.by_nuclide and nuclides != 'all':
xs_tally = self.xs_tally.get_slice(nuclides=nuclides)
df = xs_tally.get_pandas_dataframe(paths=paths)
# If the user requested all nuclides, keep nuclide column in dataframe
else:
df = self.xs_tally.get_pandas_dataframe(paths=paths)
# Remove the score column since it is homogeneous and redundant
if self.domain_type == 'mesh':
df = df.drop('score', axis=1, level=0)
else:
df = df.drop('score', axis=1)
# Convert azimuthal, polar, energy in and energy out bin values in to
# bin indices
columns = self._df_convert_columns_to_bins(df)
# Select out those groups the user requested
if not isinstance(groups, str):
if 'group in' in df:
df = df[df['group in'].isin(groups)]
if 'group out' in df:
df = df[df['group out'].isin(groups)]
# If user requested micro cross sections, divide out the atom densities
if xs_type == 'micro' and self._divide_by_density:
if self.by_nuclide:
densities = self.get_nuclide_densities(nuclides)
else:
densities = self.get_nuclide_densities('sum')
densities = np.repeat(densities, len(self.rxn_rate_tally.scores))
tile_factor = int(df.shape[0] / len(densities))
df['mean'] /= np.tile(densities, tile_factor)
df['std. dev.'] /= np.tile(densities, tile_factor)
# Replace NaNs by zeros (happens if nuclide density is zero)
df['mean'].replace(np.nan, 0.0, inplace=True)
df['std. dev.'].replace(np.nan, 0.0, inplace=True)
# Sort the dataframe by domain type id (e.g., distribcell id) and
# energy groups such that data is from fast to thermal
if self.domain_type == 'mesh':
mesh_str = 'mesh {0}'.format(self.domain.id)
df.sort_values(by=[(mesh_str, 'x'), (mesh_str, 'y'),
(mesh_str, 'z')] + columns, inplace=True)
else:
df.sort_values(by=[self.domain_type] + columns, inplace=True)
return df
def get_units(self, xs_type='macro'):
"""This method returns the units of a MGXS based on a desired xs_type.
Parameters
----------
xs_type: {'macro', 'micro'}
Return the macro or micro cross section units.
Defaults to 'macro'.
Returns
-------
str
A string representing the units of the MGXS.
"""
cv.check_value('xs_type', xs_type, ['macro', 'micro'])
return 'cm^-1' if xs_type == 'macro' else 'barns'
class MatrixMGXS(MGXS):
"""An abstract multi-group cross section for some energy group structure
within some spatial domain. This class is specifically intended for
cross sections which depend on both the incoming and outgoing energy groups
and are therefore represented by matrices. Examples of this include the
scattering and nu-fission matrices.
This class can be used for both OpenMC input generation and tally data
post-processing to compute spatially-homogenized and energy-integrated
multi-group cross sections for multi-group neutronics calculations.
.. note:: Users should instantiate the subclasses of this abstract class.
Parameters
----------
domain : openmc.Material or openmc.Cell or openmc.Universe or openmc.RegularMesh
The domain for spatial homogenization
domain_type : {'material', 'cell', 'distribcell', 'universe', 'mesh'}
The domain type for spatial homogenization
energy_groups : openmc.mgxs.EnergyGroups
The energy group structure for energy condensation
by_nuclide : bool
If true, computes cross sections for each nuclide in domain
name : str, optional
Name of the multi-group cross section. Used as a label to identify
tallies in OpenMC 'tallies.xml' file.
num_polar : Integral, optional
Number of equi-width polar angle bins for angle discretization;
defaults to one bin
num_azimuthal : Integral, optional
Number of equi-width azimuthal angle bins for angle discretization;
defaults to one bin
Attributes
----------
name : str, optional
Name of the multi-group cross section
rxn_type : str
Reaction type (e.g., 'total', 'nu-fission', etc.)
by_nuclide : bool
If true, computes cross sections for each nuclide in domain
domain : openmc.Material or openmc.Cell or openmc.Universe or openmc.RegularMesh
Domain for spatial homogenization
domain_type : {'material', 'cell', 'distribcell', 'universe', 'mesh'}
Domain type for spatial homogenization
energy_groups : openmc.mgxs.EnergyGroups
Energy group structure for energy condensation
num_polar : Integral
Number of equi-width polar angle bins for angle discretization
num_azimuthal : Integral
Number of equi-width azimuthal angle bins for angle discretization
tally_trigger : openmc.Trigger
An (optional) tally precision trigger given to each tally used to
compute the cross section
scores : list of str
The scores in each tally used to compute the multi-group cross section
filters : list of openmc.Filter
The filters in each tally used to compute the multi-group cross section
tally_keys : list of str
The keys into the tallies dictionary for each tally used to compute
the multi-group cross section
estimator : {'tracklength', 'collision', 'analog'}
The tally estimator used to compute the multi-group cross section
tallies : collections.OrderedDict
OpenMC tallies needed to compute the multi-group cross section
rxn_rate_tally : openmc.Tally
Derived tally for the reaction rate tally used in the numerator to
compute the multi-group cross section. This attribute is None
unless the multi-group cross section has been computed.
xs_tally : openmc.Tally
Derived tally for the multi-group cross section. This attribute
is None unless the multi-group cross section has been computed.
num_subdomains : int
The number of subdomains is unity for 'material', 'cell' and 'universe'
domain types. This is equal to the number of cell instances
for 'distribcell' domain types (it is equal to unity prior to loading
tally data from a statepoint file) and the number of mesh cells for
'mesh' domain types.
num_nuclides : int
The number of nuclides for which the multi-group cross section is
being tracked. This is unity if the by_nuclide attribute is False.
nuclides : Iterable of str or 'sum'
The optional user-specified nuclides for which to compute cross
sections (e.g., 'U238', 'O16'). If by_nuclide is True but nuclides
are not specified by the user, all nuclides in the spatial domain
are included. This attribute is 'sum' if by_nuclide is false.
sparse : bool
Whether or not the MGXS' tallies use SciPy's LIL sparse matrix format
for compressed data storage
loaded_sp : bool
Whether or not a statepoint file has been loaded with tally data
derived : bool
Whether or not the MGXS is merged from one or more other MGXS
hdf5_key : str
The key used to index multi-group cross sections in an HDF5 data store
"""
@property
def _dont_squeeze(self):
"""Create a tuple of axes which should not be removed during the get_xs
process
"""
if self.num_polar > 1 or self.num_azimuthal > 1:
return (0, 1, 3, 4)
else:
return (1, 2)
@property
def filters(self):
# Create the non-domain specific Filters for the Tallies
group_edges = self.energy_groups.group_edges
energy = openmc.EnergyFilter(group_edges)
energyout = openmc.EnergyoutFilter(group_edges)
filters = [[energy], [energy, energyout]]
return self._add_angle_filters(filters)
def get_xs(self, in_groups='all', out_groups='all', subdomains='all',
nuclides='all', xs_type='macro', order_groups='increasing',
row_column='inout', value='mean', squeeze=True, **kwargs):
"""Returns an array of multi-group cross sections.
This method constructs a 4D NumPy array for the requested
multi-group cross section data for one or more subdomains
(1st dimension), energy groups in (2nd dimension), energy groups out
(3rd dimension), and nuclides (4th dimension).
Parameters
----------
in_groups : Iterable of Integral or 'all'
Incoming energy groups of interest. Defaults to 'all'.
out_groups : Iterable of Integral or 'all'
Outgoing energy groups of interest. Defaults to 'all'.
subdomains : Iterable of Integral or 'all'
Subdomain IDs of interest. Defaults to 'all'.
nuclides : Iterable of str or 'all' or 'sum'
A list of nuclide name strings (e.g., ['U235', 'U238']). The
special string 'all' will return the cross sections for all
nuclides in the spatial domain. The special string 'sum' will
return the cross section summed over all nuclides. Defaults to
'all'.
xs_type: {'macro', 'micro'}
Return the macro or micro cross section in units of cm^-1 or barns.
Defaults to 'macro'.
order_groups: {'increasing', 'decreasing'}
Return the cross section indexed according to increasing or
decreasing energy groups (decreasing or increasing energies).
Defaults to 'increasing'.
row_column: {'inout', 'outin'}
Return the cross section indexed first by incoming group and
second by outgoing group ('inout'), or vice versa ('outin').
Defaults to 'inout'.
value : {'mean', 'std_dev', 'rel_err'}
A string for the type of value to return. Defaults to 'mean'.
squeeze : bool
A boolean representing whether to eliminate the extra dimensions
of the multi-dimensional array to be returned. Defaults to True.
Returns
-------
numpy.ndarray
A NumPy array of the multi-group cross section indexed in the order
each group, subdomain and nuclide is listed in the parameters.
Raises
------
ValueError
When this method is called before the multi-group cross section is
computed from tally data.
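Examples
--------
A sketch retrieving a matrix indexed by outgoing group first
(``matrix_mgxs`` is an illustrative MatrixMGXS instance):
>>> xs = matrix_mgxs.get_xs(row_column='outin')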
"""
cv.check_value('value', value, ['mean', 'std_dev', 'rel_err'])
cv.check_value('xs_type', xs_type, ['macro', 'micro'])
# FIXME: Unable to get microscopic xs for mesh domain because the mesh
# cells do not know the nuclide densities in each mesh cell.
if self.domain_type == 'mesh' and xs_type == 'micro':
msg = 'Unable to get micro xs for mesh domain since the mesh ' \
'cells do not know the nuclide densities in each mesh cell.'
raise ValueError(msg)
filters = []
filter_bins = []
# Construct a collection of the domain filter bins
if not isinstance(subdomains, str):
cv.check_iterable_type('subdomains', subdomains, Integral,
max_depth=3)
filters.append(_DOMAIN_TO_FILTER[self.domain_type])
subdomain_bins = []
for subdomain in subdomains:
subdomain_bins.append(subdomain)
filter_bins.append(tuple(subdomain_bins))
# Construct list of energy group bounds tuples for all requested groups
if not isinstance(in_groups, str):
cv.check_iterable_type('groups', in_groups, Integral)
filters.append(openmc.EnergyFilter)
energy_bins = []
for group in in_groups:
energy_bins.append((self.energy_groups.get_group_bounds(group),))
filter_bins.append(tuple(energy_bins))
# Construct list of energy group bounds tuples for all requested groups
if not isinstance(out_groups, str):
cv.check_iterable_type('groups', out_groups, Integral)
for group in out_groups:
filters.append(openmc.EnergyoutFilter)
filter_bins.append((
self.energy_groups.get_group_bounds(group),))
# Construct a collection of the nuclides to retrieve from the xs tally
if self.by_nuclide:
if nuclides == 'all' or nuclides == 'sum' or nuclides == ['sum']:
query_nuclides = self.get_nuclides()
else:
query_nuclides = nuclides
else:
query_nuclides = ['total']
# Use tally summation if user requested the sum for all nuclides
if nuclides == 'sum' or nuclides == ['sum']:
xs_tally = self.xs_tally.summation(nuclides=query_nuclides)
xs = xs_tally.get_values(filters=filters, filter_bins=filter_bins,
value=value)
else:
xs = self.xs_tally.get_values(filters=filters,
filter_bins=filter_bins,
nuclides=query_nuclides, value=value)
# Divide by atom number densities for microscopic cross sections
if xs_type == 'micro' and self._divide_by_density:
if self.by_nuclide:
densities = self.get_nuclide_densities(nuclides)
else:
densities = self.get_nuclide_densities('sum')
if value == 'mean' or value == 'std_dev':
xs /= densities[np.newaxis, :, np.newaxis]
# Eliminate the trivial score dimension
xs = np.squeeze(xs, axis=len(xs.shape) - 1)
xs = np.nan_to_num(xs)
if in_groups == 'all':
num_in_groups = self.num_groups
else:
num_in_groups = len(in_groups)
if out_groups == 'all':
num_out_groups = self.num_groups
else:
num_out_groups = len(out_groups)
# Reshape tally data array with separate axes for domain and energy
# Accommodate the polar and azimuthal bins if needed
num_subdomains = int(xs.shape[0] / (num_in_groups * num_out_groups *
self.num_polar *
self.num_azimuthal))
if self.num_polar > 1 or self.num_azimuthal > 1:
new_shape = (self.num_polar, self.num_azimuthal, num_subdomains,
num_in_groups, num_out_groups)
new_shape += xs.shape[1:]
xs = np.reshape(xs, new_shape)
# Transpose the matrix if requested by user
if row_column == 'outin':
xs = np.swapaxes(xs, 3, 4)
else:
new_shape = (num_subdomains, num_in_groups, num_out_groups)
new_shape += xs.shape[1:]
xs = np.reshape(xs, new_shape)
# Transpose the matrix if requested by user
if row_column == 'outin':
xs = np.swapaxes(xs, 1, 2)
# Reverse data if user requested increasing energy groups since
# tally data is stored in order of increasing energies
if order_groups == 'increasing':
xs = xs[..., ::-1, ::-1, :]
if squeeze:
# We want to squeeze out everything but the polar, azimuthal,
# and in/out energy group data.
xs = self._squeeze_xs(xs)
return xs
def get_slice(self, nuclides=[], in_groups=[], out_groups=[]):
"""Build a sliced MatrixMGXS object for the specified nuclides and
energy groups.
This method constructs a new MGXS to encapsulate a subset of the data
represented by this MGXS. The subset of data to include in the tally
slice is determined by the nuclides and energy groups specified in
the input parameters.
Parameters
----------
nuclides : list of str
A list of nuclide name strings
(e.g., ['U235', 'U238']; default is [])
in_groups : list of int
A list of incoming energy group indices starting at 1 for the high
energies (e.g., [1, 2, 3]; default is [])
out_groups : list of int
A list of outgoing energy group indices starting at 1 for the high
energies (e.g., [1, 2, 3]; default is [])
Returns
-------
openmc.mgxs.MatrixMGXS
A new MatrixMGXS object which encapsulates the subset of data
requested for the nuclide(s) and/or energy group(s) requested in
the parameters.
"""
# Call super class method and null out derived tallies
slice_xs = super().get_slice(nuclides, in_groups)
slice_xs._rxn_rate_tally = None
slice_xs._xs_tally = None
# Slice outgoing energy groups if needed
if len(out_groups) != 0:
filter_bins = []
for group in out_groups:
group_bounds = self.energy_groups.get_group_bounds(group)
filter_bins.append(group_bounds)
filter_bins = [tuple(filter_bins)]
# Slice each of the tallies across energyout groups
for tally_type, tally in slice_xs.tallies.items():
if tally.contains_filter(openmc.EnergyoutFilter):
tally_slice = tally.get_slice(
filters=[openmc.EnergyoutFilter],
filter_bins=filter_bins)
slice_xs.tallies[tally_type] = tally_slice
slice_xs.sparse = self.sparse
return slice_xs
def print_xs(self, subdomains='all', nuclides='all', xs_type='macro'):
"""Prints a string representation for the multi-group cross section.
Parameters
----------
subdomains : Iterable of Integral or 'all'
The subdomain IDs of the cross sections to include in the report.
Defaults to 'all'.
nuclides : Iterable of str or 'all' or 'sum'
The nuclides of the cross-sections to include in the report. This
may be a list of nuclide name strings (e.g., ['U235', 'U238']).
The special string 'all' will report the cross sections for all
nuclides in the spatial domain. The special string 'sum' will
report the cross sections summed over all nuclides. Defaults to
'all'.
xs_type: {'macro', 'micro'}
Return the macro or micro cross section in units of cm^-1 or barns.
Defaults to 'macro'.
"""
# Construct a collection of the subdomains to report
if not isinstance(subdomains, str):
cv.check_iterable_type('subdomains', subdomains, Integral)
elif self.domain_type == 'distribcell':
subdomains = np.arange(self.num_subdomains, dtype=int)
elif self.domain_type == 'mesh':
subdomains = list(self.domain.indices)
else:
subdomains = [self.domain.id]
# Construct a collection of the nuclides to report
if self.by_nuclide:
if nuclides == 'all':
nuclides = self.get_nuclides()
elif nuclides == 'sum':
nuclides = ['sum']
else:
cv.check_iterable_type('nuclides', nuclides, str)
else:
nuclides = ['sum']
cv.check_value('xs_type', xs_type, ['macro', 'micro'])
# Build header for string with type and domain info
string = 'Multi-Group XS\n'
string += '{0: <16}=\t{1}\n'.format('\tReaction Type', self.rxn_type)
string += '{0: <16}=\t{1}\n'.format('\tDomain Type', self.domain_type)
string += '{0: <16}=\t{1}\n'.format('\tDomain ID', self.domain.id)
# Generate the header for an individual XS
xs_header = '\tCross Sections [{0}]:'.format(self.get_units(xs_type))
# If cross section data has not been computed, only print string header
if self.tallies is None:
print(string)
return
string += '{0: <16}\n'.format('\tEnergy Groups:')
template = '{0: <12}Group {1} [{2: <10} - {3: <10}eV]\n'
# Loop over energy groups ranges
for group in range(1, self.num_groups + 1):
bounds = self.energy_groups.get_group_bounds(group)
string += template.format('', group, bounds[0], bounds[1])
# Set polar and azimuthal bins if necessary
if self.num_polar > 1 or self.num_azimuthal > 1:
pol_bins = np.linspace(0., np.pi, num=self.num_polar + 1,
endpoint=True)
azi_bins = np.linspace(-np.pi, np.pi, num=self.num_azimuthal + 1,
endpoint=True)
# Loop over all subdomains
for subdomain in subdomains:
if self.domain_type == 'distribcell' or self.domain_type == 'mesh':
string += '{0: <16}=\t{1}\n'.format('\tSubdomain', subdomain)
# Loop over all Nuclides
for nuclide in nuclides:
# Build header for nuclide type
if nuclide != 'sum':
string += '{0: <16}=\t{1}\n'.format('\tNuclide', nuclide)
# Build header for cross section type
string += '{0: <16}\n'.format(xs_header)
template = '{0: <12}Group {1} -> Group {2}:\t\t'
average_xs = self.get_xs(nuclides=[nuclide],
subdomains=[subdomain],
xs_type=xs_type, value='mean')
rel_err_xs = self.get_xs(nuclides=[nuclide],
subdomains=[subdomain],
xs_type=xs_type, value='rel_err')
rel_err_xs = rel_err_xs * 100.
if self.num_polar > 1 or self.num_azimuthal > 1:
# Loop over polar, azi, and in/out energy group ranges
for pol in range(len(pol_bins) - 1):
pol_low, pol_high = pol_bins[pol: pol + 2]
for azi in range(len(azi_bins) - 1):
azi_low, azi_high = azi_bins[azi: azi + 2]
string += '\t\tPolar Angle: [{0:5f} - {1:5f}]'.format(
pol_low, pol_high) + \
'\tAzimuthal Angle: [{0:5f} - {1:5f}]'.format(
azi_low, azi_high) + '\n'
for in_group in range(1, self.num_groups + 1):
for out_group in range(1, self.num_groups + 1):
string += '\t' + template.format('',
in_group,
out_group)
string += '{0:.2e} +/- {1:.2e}%'.format(
average_xs[pol, azi, in_group - 1,
out_group - 1],
rel_err_xs[pol, azi, in_group - 1,
out_group - 1])
string += '\n'
string += '\n'
string += '\n'
else:
# Loop over incoming/outgoing energy groups ranges
for in_group in range(1, self.num_groups + 1):
for out_group in range(1, self.num_groups + 1):
string += template.format('', in_group, out_group)
string += '{0:.2e} +/- {1:.2e}%'.format(
average_xs[in_group - 1, out_group - 1],
rel_err_xs[in_group - 1, out_group - 1])
string += '\n'
string += '\n'
string += '\n'
string += '\n'
print(string)
class TotalXS(MGXS):
r"""A total multi-group cross section.
This class can be used for both OpenMC input generation and tally data
post-processing to compute spatially-homogenized and energy-integrated
multi-group total cross sections for multi-group neutronics calculations. At
a minimum, one needs to set the :attr:`TotalXS.energy_groups` and
:attr:`TotalXS.domain` properties. Tallies for the flux and appropriate
reaction rates over the specified domain are generated automatically via the
:attr:`TotalXS.tallies` property, which can then be appended to a
:class:`openmc.Tallies` instance.
For post-processing, the :meth:`MGXS.load_from_statepoint` method will pull in the
necessary data to compute multi-group cross sections from a
:class:`openmc.StatePoint` instance. The derived multi-group cross section
can then be obtained from the :attr:`TotalXS.xs_tally` property.
For a spatial domain :math:`V` and energy group :math:`[E_g,E_{g-1}]`, the
total cross section is calculated as:
.. math::
\frac{\int_{r \in V} dr \int_{4\pi} d\Omega \int_{E_g}^{E_{g-1}} dE \;
\sigma_t (r, E) \psi (r, E, \Omega)}{\int_{r \in V} dr \int_{4\pi}
d\Omega \int_{E_g}^{E_{g-1}} dE \; \psi (r, E, \Omega)}.
Parameters
----------
domain : openmc.Material or openmc.Cell or openmc.Universe or openmc.RegularMesh
The domain for spatial homogenization
domain_type : {'material', 'cell', 'distribcell', 'universe', 'mesh'}
The domain type for spatial homogenization
groups : openmc.mgxs.EnergyGroups
The energy group structure for energy condensation
by_nuclide : bool
If true, computes cross sections for each nuclide in domain
name : str, optional
Name of the multi-group cross section. Used as a label to identify
tallies in OpenMC 'tallies.xml' file.
num_polar : Integral, optional
Number of equi-width polar angle bins for angle discretization;
defaults to one bin
num_azimuthal : Integral, optional
Number of equi-width azimuthal angle bins for angle discretization;
defaults to one bin
Attributes
----------
name : str, optional
Name of the multi-group cross section
rxn_type : str
Reaction type (e.g., 'total', 'nu-fission', etc.)
by_nuclide : bool
If true, computes cross sections for each nuclide in domain
domain : openmc.Material or openmc.Cell or openmc.Universe or openmc.RegularMesh
Domain for spatial homogenization
domain_type : {'material', 'cell', 'distribcell', 'universe', 'mesh'}
Domain type for spatial homogenization
energy_groups : openmc.mgxs.EnergyGroups
Energy group structure for energy condensation
num_polar : Integral
Number of equi-width polar angle bins for angle discretization
num_azimuthal : Integral
Number of equi-width azimuthal angle bins for angle discretization
tally_trigger : openmc.Trigger
An (optional) tally precision trigger given to each tally used to
compute the cross section
scores : list of str
The scores in each tally used to compute the multi-group cross section
filters : list of openmc.Filter
The filters in each tally used to compute the multi-group cross section
tally_keys : list of str
The keys into the tallies dictionary for each tally used to compute
the multi-group cross section
estimator : {'tracklength', 'collision', 'analog'}
The tally estimator used to compute the multi-group cross section
tallies : collections.OrderedDict
OpenMC tallies needed to compute the multi-group cross section. The keys
are strings listed in the :attr:`TotalXS.tally_keys` property and values
are instances of :class:`openmc.Tally`.
rxn_rate_tally : openmc.Tally
Derived tally for the reaction rate tally used in the numerator to
compute the multi-group cross section. This attribute is None
unless the multi-group cross section has been computed.
xs_tally : openmc.Tally
Derived tally for the multi-group cross section. This attribute
is None unless the multi-group cross section has been computed.
num_subdomains : int
The number of subdomains is unity for 'material', 'cell' and 'universe'
domain types. This is equal to the number of cell instances
for 'distribcell' domain types (it is equal to unity prior to loading
tally data from a statepoint file).
num_nuclides : int
The number of nuclides for which the multi-group cross section is
being tracked. This is unity if the by_nuclide attribute is False.
nuclides : Iterable of str or 'sum'
The optional user-specified nuclides for which to compute cross
sections (e.g., 'U238', 'O16'). If by_nuclide is True but nuclides
are not specified by the user, all nuclides in the spatial domain
are included. This attribute is 'sum' if by_nuclide is false.
sparse : bool
Whether or not the MGXS' tallies use SciPy's LIL sparse matrix format
for compressed data storage
loaded_sp : bool
Whether or not a statepoint file has been loaded with tally data
derived : bool
Whether or not the MGXS is merged from one or more other MGXS
hdf5_key : str
The key used to index multi-group cross sections in an HDF5 data store
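Examples
--------
A sketch of input generation, assuming ``fuel_cell`` is an
openmc.Cell and ``two_groups`` is an openmc.mgxs.EnergyGroups
(both names are illustrative):
>>> total = openmc.mgxs.TotalXS(domain=fuel_cell, domain_type='cell',
...                             groups=two_groups)
>>> tallies = openmc.Tallies()
>>> tallies += total.tallies.values()
>>> tallies.export_to_xml()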
"""
def __init__(self, domain=None, domain_type=None, groups=None,
by_nuclide=False, name='', num_polar=1, num_azimuthal=1):
super().__init__(domain, domain_type, groups, by_nuclide, name,
num_polar, num_azimuthal)
self._rxn_type = 'total'
class TransportXS(MGXS):
r"""A transport-corrected total multi-group cross section.
This class can be used for both OpenMC input generation and tally data
post-processing to compute spatially-homogenized and energy-integrated
multi-group cross sections for multi-group neutronics calculations. At a
minimum, one needs to set the :attr:`TransportXS.energy_groups` and
:attr:`TransportXS.domain` properties. Tallies for the flux and appropriate
reaction rates over the specified domain are generated automatically via the
:attr:`TransportXS.tallies` property, which can then be appended to a
:class:`openmc.Tallies` instance.
For post-processing, the :meth:`MGXS.load_from_statepoint` method will pull in the
necessary data to compute multi-group cross sections from a
:class:`openmc.StatePoint` instance. The derived multi-group cross section
can then be obtained from the :attr:`TransportXS.xs_tally` property.
For a spatial domain :math:`V` and energy group :math:`[E_g,E_{g-1}]`, the
transport-corrected total cross section is calculated as:
.. math::
\begin{aligned}
\langle \sigma_t \phi \rangle &= \int_{r \in V} dr \int_{4\pi}
d\Omega \int_{E_g}^{E_{g-1}} dE \sigma_t (r, E) \psi
(r, E, \Omega) \\
\langle \sigma_{s1} \phi \rangle &= \int_{r \in V} dr
\int_{4\pi} d\Omega \int_{E_g}^{E_{g-1}} dE \int_{4\pi}
d\Omega' \int_0^\infty dE' \int_{-1}^1 d\mu \; \mu \sigma_s
(r, E' \rightarrow E, \Omega' \cdot \Omega)
\psi (r, E', \Omega') \\
\langle \phi \rangle &= \int_{r \in V} dr \int_{4\pi} d\Omega
\int_{E_g}^{E_{g-1}} dE \; \psi (r, E, \Omega) \\
\sigma_{tr} &= \frac{\langle \sigma_t \phi \rangle - \langle \sigma_{s1}
\phi \rangle}{\langle \phi \rangle}
\end{aligned}
To incorporate the effect of scattering multiplication in the above
relation, the `nu` parameter can be set to `True`.
Parameters
----------
domain : openmc.Material or openmc.Cell or openmc.Universe or openmc.RegularMesh
The domain for spatial homogenization
domain_type : {'material', 'cell', 'distribcell', 'universe', 'mesh'}
The domain type for spatial homogenization
groups : openmc.mgxs.EnergyGroups
The energy group structure for energy condensation
nu : bool
If True, the cross section data will include neutron multiplication;
defaults to False.
by_nuclide : bool
If true, computes cross sections for each nuclide in domain
name : str, optional
Name of the multi-group cross section. Used as a label to identify
tallies in OpenMC 'tallies.xml' file.
num_polar : Integral, optional
Number of equi-width polar angle bins for angle discretization;
defaults to one bin
num_azimuthal : Integral, optional
Number of equi-width azimuthal angle bins for angle discretization;
defaults to one bin
Attributes
----------
name : str, optional
Name of the multi-group cross section
rxn_type : str
Reaction type (e.g., 'total', 'nu-fission', etc.)
nu : bool
If True, the cross section data will include neutron multiplication
by_nuclide : bool
If true, computes cross sections for each nuclide in domain
domain : openmc.Material or openmc.Cell or openmc.Universe or openmc.RegularMesh
Domain for spatial homogenization
domain_type : {'material', 'cell', 'distribcell', 'universe', 'mesh'}
Domain type for spatial homogenization
energy_groups : openmc.mgxs.EnergyGroups
Energy group structure for energy condensation
num_polar : Integral
Number of equi-width polar angle bins for angle discretization
num_azimuthal : Integral
Number of equi-width azimuthal angle bins for angle discretization
tally_trigger : openmc.Trigger
An (optional) tally precision trigger given to each tally used to
compute the cross section
scores : list of str
The scores in each tally used to compute the multi-group cross section
filters : list of openmc.Filter
The filters in each tally used to compute the multi-group cross section
tally_keys : list of str
The keys into the tallies dictionary for each tally used to compute
the multi-group cross section
estimator : 'analog'
The tally estimator used to compute the multi-group cross section
tallies : collections.OrderedDict
OpenMC tallies needed to compute the multi-group cross section. The keys
are strings listed in the :attr:`TransportXS.tally_keys` property and
values are instances of :class:`openmc.Tally`.
rxn_rate_tally : openmc.Tally
Derived tally for the reaction rate tally used in the numerator to
compute the multi-group cross section. This attribute is None
unless the multi-group cross section has been computed.
xs_tally : openmc.Tally
Derived tally for the multi-group cross section. This attribute
is None unless the multi-group cross section has been computed.
num_subdomains : int
The number of subdomains is unity for 'material', 'cell' and 'universe'
domain types. This is equal to the number of cell instances
for 'distribcell' domain types (it is equal to unity prior to loading
tally data from a statepoint file).
num_nuclides : int
The number of nuclides for which the multi-group cross section is
being tracked. This is unity if the by_nuclide attribute is False.
nuclides : Iterable of str or 'sum'
The optional user-specified nuclides for which to compute cross
sections (e.g., 'U238', 'O16'). If by_nuclide is True but nuclides
are not specified by the user, all nuclides in the spatial domain
are included. This attribute is 'sum' if by_nuclide is False.
sparse : bool
Whether or not the MGXS' tallies use SciPy's LIL sparse matrix format
for compressed data storage
loaded_sp : bool
Whether or not a statepoint file has been loaded with tally data
derived : bool
Whether or not the MGXS is merged from one or more other MGXS
hdf5_key : str
The key used to index multi-group cross sections in an HDF5 data store
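Examples
--------
A brief sketch of the post-processing workflow; the statepoint filename
and ``fuel_cell`` are illustrative placeholders, and ``groups`` is an
:class:`openmc.mgxs.EnergyGroups` instance:

>>> transport = openmc.mgxs.TransportXS(domain=fuel_cell,
...                                     domain_type='cell', groups=groups)
>>> # ... run OpenMC with transport.tallies added to the model ...
>>> sp = openmc.StatePoint('statepoint.100.h5')
>>> transport.load_from_statepoint(sp)
>>> xs = transport.get_xs()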
"""
def __init__(self, domain=None, domain_type=None, groups=None, nu=False,
by_nuclide=False, name='', num_polar=1, num_azimuthal=1):
super().__init__(domain, domain_type, groups, by_nuclide, name,
num_polar, num_azimuthal)
# Use tracklength estimators for the total MGXS term, and
# analog estimators for the transport correction term
self._estimator = ['tracklength', 'tracklength', 'analog', 'analog']
self._valid_estimators = ['analog']
self.nu = nu
def __deepcopy__(self, memo):
clone = super().__deepcopy__(memo)
clone._nu = self.nu
return clone
@property
def scores(self):
if not self.nu:
return ['flux', 'total', 'flux', 'scatter']
else:
return ['flux', 'total', 'flux', 'nu-scatter']
@property
def tally_keys(self):
return ['flux (tracklength)', 'total', 'flux (analog)', 'scatter-1']
@property
def filters(self):
group_edges = self.energy_groups.group_edges
energy_filter = openmc.EnergyFilter(group_edges)
energyout_filter = openmc.EnergyoutFilter(group_edges)
p1_filter = openmc.LegendreFilter(1)
filters = [[energy_filter], [energy_filter],
[energy_filter], [energyout_filter, p1_filter]]
return self._add_angle_filters(filters)
@property
def rxn_rate_tally(self):
if self._rxn_rate_tally is None:
# Switch EnergyoutFilter to EnergyFilter.
p1_tally = self.tallies['scatter-1']
old_filt = p1_tally.filters[-2]
new_filt = openmc.EnergyFilter(old_filt.values)
p1_tally.filters[-2] = new_filt
# Slice Legendre expansion filter and change name of score
p1_tally = p1_tally.get_slice(filters=[openmc.LegendreFilter],
filter_bins=[('P1',)],
squeeze=True)
p1_tally._scores = ['scatter-1']
self._rxn_rate_tally = self.tallies['total'] - p1_tally
self._rxn_rate_tally.sparse = self.sparse
return self._rxn_rate_tally
@property
def xs_tally(self):
if self._xs_tally is None:
if self.tallies is None:
msg = 'Unable to get xs_tally since tallies have ' \
'not been loaded from a statepoint'
raise ValueError(msg)
# Switch EnergyoutFilter to EnergyFilter.
p1_tally = self.tallies['scatter-1']
old_filt = p1_tally.filters[-2]
new_filt = openmc.EnergyFilter(old_filt.values)
p1_tally.filters[-2] = new_filt
# Slice Legendre expansion filter and change name of score
p1_tally = p1_tally.get_slice(filters=[openmc.LegendreFilter],
filter_bins=[('P1',)],
squeeze=True)
p1_tally._scores = ['scatter-1']
# Compute total cross section
total_xs = self.tallies['total'] / self.tallies['flux (tracklength)']
# Compute transport correction term
trans_corr = p1_tally / self.tallies['flux (analog)']
# Compute the transport-corrected total cross section
self._xs_tally = total_xs - trans_corr
self._compute_xs()
return self._xs_tally
@property
def nu(self):
return self._nu
@nu.setter
def nu(self, nu):
cv.check_type('nu', nu, bool)
self._nu = nu
if not nu:
self._rxn_type = 'transport'
else:
self._rxn_type = 'nu-transport'
class DiffusionCoefficient(TransportXS):
r"""A diffusion coefficient multi-group cross section.
This class can be used for both OpenMC input generation and tally data
post-processing to compute spatially-homogenized and energy-integrated
multi-group cross sections for multi-group neutronics calculations. At a
minimum, one needs to set the :attr:`DiffusionCoefficient.energy_groups` and
:attr:`DiffusionCoefficient.domain` properties. Tallies for the flux and appropriate
reaction rates over the specified domain are generated automatically via the
:attr:`DiffusionCoefficient.tallies` property, which can then be appended to a
:class:`openmc.Tallies` instance.
For post-processing, the :meth:`MGXS.load_from_statepoint` will pull in the
necessary data to compute multi-group cross sections from a
:class:`openmc.StatePoint` instance. The derived multi-group cross section
can then be obtained from the :attr:`DiffusionCoefficient.xs_tally` property.
For a spatial domain :math:`V` and energy group :math:`[E_g,E_{g-1}]`, the
diffusion coefficient is calculated as:
.. math::
\begin{aligned}
\langle \sigma_t \phi \rangle &= \int_{r \in V} dr \int_{4\pi}
d\Omega \int_{E_g}^{E_{g-1}} dE \sigma_t (r, E) \psi
(r, E, \Omega) \\
\langle \sigma_{s1} \phi \rangle &= \int_{r \in V} dr
\int_{4\pi} d\Omega \int_{E_g}^{E_{g-1}} dE \int_{4\pi}
d\Omega' \int_0^\infty dE' \int_{-1}^1 d\mu \; \mu \sigma_s
(r, E' \rightarrow E, \Omega' \cdot \Omega)
\psi (r, E', \Omega') \\
\langle \phi \rangle &= \int_{r \in V} dr \int_{4\pi} d\Omega
\int_{E_g}^{E_{g-1}} dE \; \psi (r, E, \Omega) \\
\sigma_{tr} &= \frac{\langle \sigma_t \phi \rangle - \langle \sigma_{s1}
\phi \rangle}{\langle \phi \rangle} \\
D &= \frac{1}{3 \sigma_{tr}}
\end{aligned}
To incorporate the effect of scattering multiplication in the above
relation, the `nu` parameter can be set to `True`.
.. versionadded:: 0.12.1
Parameters
----------
domain : openmc.Material or openmc.Cell or openmc.Universe or openmc.RegularMesh
The domain for spatial homogenization
domain_type : {'material', 'cell', 'distribcell', 'universe', 'mesh'}
The domain type for spatial homogenization
groups : openmc.mgxs.EnergyGroups
The energy group structure for energy condensation
nu : bool
If True, the cross section data will include neutron multiplication;
defaults to False.
by_nuclide : bool
If true, computes cross sections for each nuclide in domain
name : str, optional
Name of the multi-group cross section. Used as a label to identify
tallies in OpenMC 'tallies.xml' file.
num_polar : Integral, optional
Number of equi-width polar angle bins for angle discretization;
defaults to one bin
num_azimuthal : Integral, optional
Number of equi-width azimuthal angle bins for angle discretization;
defaults to one bin
Attributes
----------
name : str, optional
Name of the multi-group cross section
rxn_type : str
Reaction type (e.g., 'total', 'nu-fission', etc.)
nu : bool
If True, the cross section data will include neutron multiplication
by_nuclide : bool
If true, computes cross sections for each nuclide in domain
domain : openmc.Material or openmc.Cell or openmc.Universe or openmc.RegularMesh
Domain for spatial homogenization
domain_type : {'material', 'cell', 'distribcell', 'universe', 'mesh'}
Domain type for spatial homogenization
energy_groups : openmc.mgxs.EnergyGroups
Energy group structure for energy condensation
num_polar : Integral
Number of equi-width polar angle bins for angle discretization
num_azimuthal : Integral
Number of equi-width azimuthal angle bins for angle discretization
tally_trigger : openmc.Trigger
An (optional) tally precision trigger given to each tally used to
compute the cross section
scores : list of str
The scores in each tally used to compute the multi-group cross section
filters : list of openmc.Filter
The filters in each tally used to compute the multi-group cross section
tally_keys : list of str
The keys into the tallies dictionary for each tally used to compute
the multi-group cross section
estimator : 'analog'
The tally estimator used to compute the multi-group cross section
tallies : collections.OrderedDict
OpenMC tallies needed to compute the multi-group cross section. The keys
are strings listed in the :attr:`DiffusionCoefficient.tally_keys` property and
values are instances of :class:`openmc.Tally`.
rxn_rate_tally : openmc.Tally
Derived tally for the reaction rate tally used in the numerator to
compute the multi-group cross section. This attribute is None
unless the multi-group cross section has been computed.
xs_tally : openmc.Tally
Derived tally for the multi-group cross section. This attribute
is None unless the multi-group cross section has been computed.
num_subdomains : int
The number of subdomains is unity for 'material', 'cell' and 'universe'
domain types. This is equal to the number of cell instances
for 'distribcell' domain types (it is equal to unity prior to loading
tally data from a statepoint file).
num_nuclides : int
The number of nuclides for which the multi-group cross section is
being tracked. This is unity if the by_nuclide attribute is False.
nuclides : Iterable of str or 'sum'
The optional user-specified nuclides for which to compute cross
sections (e.g., 'U238', 'O16'). If by_nuclide is True but nuclides
are not specified by the user, all nuclides in the spatial domain
are included. This attribute is 'sum' if by_nuclide is False.
sparse : bool
Whether or not the MGXS' tallies use SciPy's LIL sparse matrix format
for compressed data storage
loaded_sp : bool
Whether or not a statepoint file has been loaded with tally data
derived : bool
Whether or not the MGXS is merged from one or more other MGXS
hdf5_key : str
The key used to index multi-group cross sections in an HDF5 data store
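Examples
--------
A brief sketch; ``core_universe`` is a placeholder for an existing
:class:`openmc.Universe` and ``groups`` an
:class:`openmc.mgxs.EnergyGroups` instance:

>>> diff = openmc.mgxs.DiffusionCoefficient(domain=core_universe,
...                                         domain_type='universe',
...                                         groups=groups, nu=True)
>>> # after loading a statepoint, tabulate the group-wise coefficients
>>> df = diff.get_pandas_dataframe()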
"""
def __init__(self, domain=None, domain_type=None, groups=None, nu=False,
by_nuclide=False, name='', num_polar=1, num_azimuthal=1):
super().__init__(domain, domain_type, groups, nu, by_nuclide, name,
                 num_polar, num_azimuthal)
if not nu:
self._rxn_type = 'diffusion-coefficient'
else:
self._rxn_type = 'nu-diffusion-coefficient'
@property
def rxn_rate_tally(self):
if self._rxn_rate_tally is None:
# Switch EnergyoutFilter to EnergyFilter.
p1_tally = self.tallies['scatter-1']
old_filt = p1_tally.filters[-2]
new_filt = openmc.EnergyFilter(old_filt.values)
p1_tally.filters[-2] = new_filt
# Slice Legendre expansion filter and change name of score
p1_tally = p1_tally.get_slice(filters=[openmc.LegendreFilter],
filter_bins=[('P1',)],
squeeze=True)
p1_tally._scores = ['scatter-1']
transport = self.tallies['total'] - p1_tally
# Multiply by the squared tracklength flux so that this tally divided
# by the flux yields D = <phi> / (3 * (<sigma_t phi> - <sigma_s1 phi>))
dif_coef = transport**(-1) / 3.0
dif_coef *= self.tallies['flux (tracklength)']**2
self._rxn_rate_tally = dif_coef
self._rxn_rate_tally.sparse = self.sparse
return self._rxn_rate_tally
@property
def xs_tally(self):
if self._xs_tally is None:
if self.tallies is None:
msg = 'Unable to get xs_tally since tallies have ' \
'not been loaded from a statepoint'
raise ValueError(msg)
# Switch EnergyoutFilter to EnergyFilter
p1_tally = self.tallies['scatter-1']
old_filt = p1_tally.filters[-2]
new_filt = openmc.EnergyFilter(old_filt.values)
p1_tally.filters[-2] = new_filt
# Slice Legendre expansion filter and change name of score
p1_tally = p1_tally.get_slice(filters=[openmc.LegendreFilter],
filter_bins=[('P1',)],
squeeze=True)
p1_tally._scores = ['scatter-1']
# Compute total cross section
total_xs = self.tallies['total'] / self.tallies['flux (tracklength)']
# Compute transport correction term
trans_corr = p1_tally / self.tallies['flux (analog)']
# Compute the diffusion coefficient
transport = total_xs - trans_corr
diff_coef = transport**(-1) / 3.0
self._xs_tally = diff_coef
self._compute_xs()
return self._xs_tally
class AbsorptionXS(MGXS):
r"""An absorption multi-group cross section.
Absorption is defined as all reactions that do not produce secondary
neutrons (disappearance) plus fission reactions.
This class can be used for both OpenMC input generation and tally data
post-processing to compute spatially-homogenized and energy-integrated
multi-group absorption cross sections for multi-group neutronics
calculations. At a minimum, one needs to set the
:attr:`AbsorptionXS.energy_groups` and :attr:`AbsorptionXS.domain`
properties. Tallies for the flux and appropriate reaction rates over the
specified domain are generated automatically via the
:attr:`AbsorptionXS.tallies` property, which can then be appended to a
:class:`openmc.Tallies` instance.
For post-processing, the :meth:`MGXS.load_from_statepoint` will pull in the
necessary data to compute multi-group cross sections from a
:class:`openmc.StatePoint` instance. The derived multi-group cross section
can then be obtained from the :attr:`AbsorptionXS.xs_tally` property.
For a spatial domain :math:`V` and energy group :math:`[E_g,E_{g-1}]`, the
absorption cross section is calculated as:
.. math::
\frac{\int_{r \in V} dr \int_{4\pi} d\Omega \int_{E_g}^{E_{g-1}} dE \;
\sigma_a (r, E) \psi (r, E, \Omega)}{\int_{r \in V} dr \int_{4\pi}
d\Omega \int_{E_g}^{E_{g-1}} dE \; \psi (r, E, \Omega)}.
Parameters
----------
domain : openmc.Material or openmc.Cell or openmc.Universe or openmc.RegularMesh
The domain for spatial homogenization
domain_type : {'material', 'cell', 'distribcell', 'universe', 'mesh'}
The domain type for spatial homogenization
groups : openmc.mgxs.EnergyGroups
The energy group structure for energy condensation
by_nuclide : bool
If true, computes cross sections for each nuclide in domain
name : str, optional
Name of the multi-group cross section. Used as a label to identify
tallies in OpenMC 'tallies.xml' file.
num_polar : Integral, optional
Number of equi-width polar angle bins for angle discretization;
defaults to one bin
num_azimuthal : Integral, optional
Number of equi-width azimuthal angle bins for angle discretization;
defaults to one bin
Attributes
----------
name : str, optional
Name of the multi-group cross section
rxn_type : str
Reaction type (e.g., 'total', 'nu-fission', etc.)
by_nuclide : bool
If true, computes cross sections for each nuclide in domain
domain : openmc.Material or openmc.Cell or openmc.Universe or openmc.RegularMesh
Domain for spatial homogenization
domain_type : {'material', 'cell', 'distribcell', 'universe', 'mesh'}
Domain type for spatial homogenization
energy_groups : openmc.mgxs.EnergyGroups
Energy group structure for energy condensation
num_polar : Integral
Number of equi-width polar angle bins for angle discretization
num_azimuthal : Integral
Number of equi-width azimuthal angle bins for angle discretization
tally_trigger : openmc.Trigger
An (optional) tally precision trigger given to each tally used to
compute the cross section
scores : list of str
The scores in each tally used to compute the multi-group cross section
filters : list of openmc.Filter
The filters in each tally used to compute the multi-group cross section
tally_keys : list of str
The keys into the tallies dictionary for each tally used to compute
the multi-group cross section
estimator : {'tracklength', 'collision', 'analog'}
The tally estimator used to compute the multi-group cross section
tallies : collections.OrderedDict
OpenMC tallies needed to compute the multi-group cross section. The keys
are strings listed in the :attr:`AbsorptionXS.tally_keys` property and
values are instances of :class:`openmc.Tally`.
rxn_rate_tally : openmc.Tally
Derived tally for the reaction rate tally used in the numerator to
compute the multi-group cross section. This attribute is None
unless the multi-group cross section has been computed.
xs_tally : openmc.Tally
Derived tally for the multi-group cross section. This attribute
is None unless the multi-group cross section has been computed.
num_subdomains : int
The number of subdomains is unity for 'material', 'cell' and 'universe'
domain types. This is equal to the number of cell instances
for 'distribcell' domain types (it is equal to unity prior to loading
tally data from a statepoint file) and the number of mesh cells for
'mesh' domain types.
num_nuclides : int
The number of nuclides for which the multi-group cross section is
being tracked. This is unity if the by_nuclide attribute is False.
nuclides : Iterable of str or 'sum'
The optional user-specified nuclides for which to compute cross
sections (e.g., 'U238', 'O16'). If by_nuclide is True but nuclides
are not specified by the user, all nuclides in the spatial domain
are included. This attribute is 'sum' if by_nuclide is False.
sparse : bool
Whether or not the MGXS' tallies use SciPy's LIL sparse matrix format
for compressed data storage
loaded_sp : bool
Whether or not a statepoint file has been loaded with tally data
derived : bool
Whether or not the MGXS is merged from one or more other MGXS
hdf5_key : str
The key used to index multi-group cross sections in an HDF5 data store
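Examples
--------
A sketch of a mesh-based setup, since this class supports 'mesh' domain
types; the mesh extents are illustrative and ``groups`` is a placeholder
:class:`openmc.mgxs.EnergyGroups` instance:

>>> mesh = openmc.RegularMesh()
>>> mesh.dimension = (17, 17)
>>> mesh.lower_left = (-10.71, -10.71)
>>> mesh.upper_right = (10.71, 10.71)
>>> absorption = openmc.mgxs.AbsorptionXS(domain=mesh,
...                                       domain_type='mesh',
...                                       groups=groups)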
"""
def __init__(self, domain=None, domain_type=None, groups=None,
by_nuclide=False, name='', num_polar=1, num_azimuthal=1):
super().__init__(domain, domain_type, groups, by_nuclide, name,
num_polar, num_azimuthal)
self._rxn_type = 'absorption'
class CaptureXS(MGXS):
r"""A capture multi-group cross section.
The neutron capture reaction rate is defined as the difference between
OpenMC's 'absorption' and 'fission' reaction rate score types. This includes
not only radiative capture, but all forms of neutron disappearance aside
from fission (i.e., MT > 100).
This class can be used for both OpenMC input generation and tally data
post-processing to compute spatially-homogenized and energy-integrated
multi-group capture cross sections for multi-group neutronics
calculations. At a minimum, one needs to set the
:attr:`CaptureXS.energy_groups` and :attr:`CaptureXS.domain`
properties. Tallies for the flux and appropriate reaction rates over the
specified domain are generated automatically via the
:attr:`CaptureXS.tallies` property, which can then be appended to a
:class:`openmc.Tallies` instance.
For post-processing, the :meth:`MGXS.load_from_statepoint` will pull in the
necessary data to compute multi-group cross sections from a
:class:`openmc.StatePoint` instance. The derived multi-group cross section
can then be obtained from the :attr:`CaptureXS.xs_tally` property.
For a spatial domain :math:`V` and energy group :math:`[E_g,E_{g-1}]`, the
capture cross section is calculated as:
.. math::
\frac{\int_{r \in V} dr \int_{4\pi} d\Omega \int_{E_g}^{E_{g-1}} dE \;
\left [ \sigma_a (r, E) \psi (r, E, \Omega) - \sigma_f (r, E) \psi (r, E,
\Omega) \right ]}{\int_{r \in V} dr \int_{4\pi} d\Omega
\int_{E_g}^{E_{g-1}} dE \; \psi (r, E, \Omega)}.
Parameters
----------
domain : openmc.Material or openmc.Cell or openmc.Universe or openmc.RegularMesh
The domain for spatial homogenization
domain_type : {'material', 'cell', 'distribcell', 'universe', 'mesh'}
The domain type for spatial homogenization
groups : openmc.mgxs.EnergyGroups
The energy group structure for energy condensation
by_nuclide : bool
If true, computes cross sections for each nuclide in domain
name : str, optional
Name of the multi-group cross section. Used as a label to identify
tallies in OpenMC 'tallies.xml' file.
num_polar : Integral, optional
Number of equi-width polar angle bins for angle discretization;
defaults to one bin
num_azimuthal : Integral, optional
Number of equi-width azimuthal angle bins for angle discretization;
defaults to one bin
Attributes
----------
name : str, optional
Name of the multi-group cross section
rxn_type : str
Reaction type (e.g., 'total', 'nu-fission', etc.)
by_nuclide : bool
If true, computes cross sections for each nuclide in domain
domain : openmc.Material or openmc.Cell or openmc.Universe or openmc.RegularMesh
Domain for spatial homogenization
domain_type : {'material', 'cell', 'distribcell', 'universe', 'mesh'}
Domain type for spatial homogenization
energy_groups : openmc.mgxs.EnergyGroups
Energy group structure for energy condensation
num_polar : Integral
Number of equi-width polar angle bins for angle discretization
num_azimuthal : Integral
Number of equi-width azimuthal angle bins for angle discretization
tally_trigger : openmc.Trigger
An (optional) tally precision trigger given to each tally used to
compute the cross section
scores : list of str
The scores in each tally used to compute the multi-group cross section
filters : list of openmc.Filter
The filters in each tally used to compute the multi-group cross section
tally_keys : list of str
The keys into the tallies dictionary for each tally used to compute
the multi-group cross section
estimator : {'tracklength', 'collision', 'analog'}
The tally estimator used to compute the multi-group cross section
tallies : collections.OrderedDict
OpenMC tallies needed to compute the multi-group cross section. The keys
are strings listed in the :attr:`CaptureXS.tally_keys` property and
values are instances of :class:`openmc.Tally`.
rxn_rate_tally : openmc.Tally
Derived tally for the reaction rate tally used in the numerator to
compute the multi-group cross section. This attribute is None
unless the multi-group cross section has been computed.
xs_tally : openmc.Tally
Derived tally for the multi-group cross section. This attribute
is None unless the multi-group cross section has been computed.
num_subdomains : int
The number of subdomains is unity for 'material', 'cell' and 'universe'
domain types. This is equal to the number of cell instances
for 'distribcell' domain types (it is equal to unity prior to loading
tally data from a statepoint file).
num_nuclides : int
The number of nuclides for which the multi-group cross section is
being tracked. This is unity if the by_nuclide attribute is False.
nuclides : Iterable of str or 'sum'
The optional user-specified nuclides for which to compute cross
sections (e.g., 'U238', 'O16'). If by_nuclide is True but nuclides
are not specified by the user, all nuclides in the spatial domain
are included. This attribute is 'sum' if by_nuclide is False.
sparse : bool
Whether or not the MGXS' tallies use SciPy's LIL sparse matrix format
for compressed data storage
loaded_sp : bool
Whether or not a statepoint file has been loaded with tally data
derived : bool
Whether or not the MGXS is merged from one or more other MGXS
hdf5_key : str
The key used to index multi-group cross sections in an HDF5 data store
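Examples
--------
A sketch of nuclide-wise usage; ``fuel`` is a placeholder
:class:`openmc.Material` and ``groups`` an energy group structure:

>>> capture = openmc.mgxs.CaptureXS(domain=fuel, domain_type='material',
...                                 groups=groups, by_nuclide=True)
>>> # after loading a statepoint:
>>> xs = capture.get_xs(nuclides=['U238', 'O16'])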
"""
def __init__(self, domain=None, domain_type=None, groups=None,
by_nuclide=False, name='', num_polar=1, num_azimuthal=1):
super().__init__(domain, domain_type, groups, by_nuclide, name,
num_polar, num_azimuthal)
self._rxn_type = 'capture'
@property
def scores(self):
return ['flux', 'absorption', 'fission']
@property
def rxn_rate_tally(self):
if self._rxn_rate_tally is None:
self._rxn_rate_tally = \
self.tallies['absorption'] - self.tallies['fission']
self._rxn_rate_tally.sparse = self.sparse
return self._rxn_rate_tally
class FissionXS(MGXS):
r"""A fission multi-group cross section.
This class can be used for both OpenMC input generation and tally data
post-processing to compute spatially-homogenized and energy-integrated
multi-group fission cross sections for multi-group neutronics
calculations. At a minimum, one needs to set the
:attr:`FissionXS.energy_groups` and :attr:`FissionXS.domain`
properties. Tallies for the flux and appropriate reaction rates over the
specified domain are generated automatically via the
:attr:`FissionXS.tallies` property, which can then be appended to a
:class:`openmc.Tallies` instance.
For post-processing, the :meth:`MGXS.load_from_statepoint` will pull in the
necessary data to compute multi-group cross sections from a
:class:`openmc.StatePoint` instance. The derived multi-group cross section
can then be obtained from the :attr:`FissionXS.xs_tally` property.
For a spatial domain :math:`V` and energy group :math:`[E_g,E_{g-1}]`, the
fission cross section is calculated as:
.. math::
\frac{\int_{r \in V} dr \int_{4\pi} d\Omega \int_{E_g}^{E_{g-1}} dE \;
\sigma_f (r, E) \psi (r, E, \Omega)}{\int_{r \in V} dr \int_{4\pi}
d\Omega \int_{E_g}^{E_{g-1}} dE \; \psi (r, E, \Omega)}.
To incorporate the effect of neutron multiplication in the above
relation, the `nu` parameter can be set to `True`.
This class can also be used to gather a prompt-nu-fission cross section
(which only includes the contributions from prompt neutrons). This is
accomplished by setting the :attr:`FissionXS.prompt` attribute to `True`.
Since the prompt-nu-fission cross section requires neutron multiplication,
multiplication is always included when `prompt` is `True`, regardless of
the value of the `nu` parameter.
Parameters
----------
domain : openmc.Material or openmc.Cell or openmc.Universe or openmc.RegularMesh
The domain for spatial homogenization
domain_type : {'material', 'cell', 'distribcell', 'universe', 'mesh'}
The domain type for spatial homogenization
groups : openmc.mgxs.EnergyGroups
The energy group structure for energy condensation
nu : bool
If True, the cross section data will include neutron multiplication;
defaults to False
prompt : bool
If true, computes cross sections that include only prompt neutrons;
defaults to False, which includes both prompt and delayed neutrons.
Setting this to True yields a prompt-nu-fission cross section, which
always includes neutron multiplication
by_nuclide : bool
If true, computes cross sections for each nuclide in domain
name : str, optional
Name of the multi-group cross section. Used as a label to identify
tallies in OpenMC 'tallies.xml' file.
num_polar : Integral, optional
Number of equi-width polar angle bins for angle discretization;
defaults to one bin
num_azimuthal : Integral, optional
Number of equi-width azimuthal angle bins for angle discretization;
defaults to one bin
Attributes
----------
name : str, optional
Name of the multi-group cross section
rxn_type : str
Reaction type (e.g., 'total', 'nu-fission', etc.)
nu : bool
If True, the cross section data will include neutron multiplication
prompt : bool
If true, computes cross sections that include only prompt neutrons
by_nuclide : bool
If true, computes cross sections for each nuclide in domain
domain : openmc.Material or openmc.Cell or openmc.Universe or openmc.RegularMesh
Domain for spatial homogenization
domain_type : {'material', 'cell', 'distribcell', 'universe', 'mesh'}
Domain type for spatial homogenization
energy_groups : openmc.mgxs.EnergyGroups
Energy group structure for energy condensation
num_polar : Integral
Number of equi-width polar angle bins for angle discretization
num_azimuthal : Integral
Number of equi-width azimuthal angle bins for angle discretization
tally_trigger : openmc.Trigger
An (optional) tally precision trigger given to each tally used to
compute the cross section
scores : list of str
The scores in each tally used to compute the multi-group cross section
filters : list of openmc.Filter
The filters in each tally used to compute the multi-group cross section
tally_keys : list of str
The keys into the tallies dictionary for each tally used to compute
the multi-group cross section
estimator : {'tracklength', 'collision', 'analog'}
The tally estimator used to compute the multi-group cross section
tallies : collections.OrderedDict
OpenMC tallies needed to compute the multi-group cross section. The keys
are strings listed in the :attr:`FissionXS.tally_keys` property and
values are instances of :class:`openmc.Tally`.
rxn_rate_tally : openmc.Tally
Derived tally for the reaction rate tally used in the numerator to
compute the multi-group cross section. This attribute is None
unless the multi-group cross section has been computed.
xs_tally : openmc.Tally
Derived tally for the multi-group cross section. This attribute
is None unless the multi-group cross section has been computed.
num_subdomains : int
The number of subdomains is unity for 'material', 'cell' and 'universe'
domain types. This is equal to the number of cell instances
for 'distribcell' domain types (it is equal to unity prior to loading
tally data from a statepoint file).
num_nuclides : int
The number of nuclides for which the multi-group cross section is
being tracked. This is unity if the by_nuclide attribute is False.
nuclides : Iterable of str or 'sum'
The optional user-specified nuclides for which to compute cross
sections (e.g., 'U238', 'O16'). If by_nuclide is True but nuclides
are not specified by the user, all nuclides in the spatial domain
are included. This attribute is 'sum' if by_nuclide is False.
sparse : bool
Whether or not the MGXS' tallies use SciPy's LIL sparse matrix format
for compressed data storage
loaded_sp : bool
Whether or not a statepoint file has been loaded with tally data
derived : bool
Whether or not the MGXS is merged from one or more other MGXS
hdf5_key : str
The key used to index multi-group cross sections in an HDF5 data store
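Examples
--------
A sketch contrasting the ``nu`` and ``prompt`` options; ``fuel_cell``
and ``groups`` are placeholders:

>>> fission = openmc.mgxs.FissionXS(domain=fuel_cell,
...                                 domain_type='cell', groups=groups)
>>> nu_fission = openmc.mgxs.FissionXS(domain=fuel_cell,
...                                    domain_type='cell', groups=groups,
...                                    nu=True)
>>> prompt_nu_fission = openmc.mgxs.FissionXS(domain=fuel_cell,
...                                           domain_type='cell',
...                                           groups=groups, prompt=True)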
"""
def __init__(self, domain=None, domain_type=None, groups=None, nu=False,
prompt=False, by_nuclide=False, name='', num_polar=1,
num_azimuthal=1):
super().__init__(domain, domain_type, groups, by_nuclide, name,
num_polar, num_azimuthal)
self._nu = False
self._prompt = False
self.nu = nu
self.prompt = prompt
def __deepcopy__(self, memo):
clone = super().__deepcopy__(memo)
clone._nu = self.nu
clone._prompt = self.prompt
return clone
@property
def nu(self):
return self._nu
@property
def prompt(self):
return self._prompt
@nu.setter
def nu(self, nu):
cv.check_type('nu', nu, bool)
self._nu = nu
if not self.prompt:
if not self.nu:
self._rxn_type = 'fission'
else:
self._rxn_type = 'nu-fission'
else:
self._rxn_type = 'prompt-nu-fission'
@prompt.setter
def prompt(self, prompt):
cv.check_type('prompt', prompt, bool)
self._prompt = prompt
if not self.prompt:
if not self.nu:
self._rxn_type = 'fission'
else:
self._rxn_type = 'nu-fission'
else:
self._rxn_type = 'prompt-nu-fission'
class KappaFissionXS(MGXS):
r"""A recoverable fission energy production rate multi-group cross section.
The recoverable energy per fission, :math:`\kappa`, is defined as the
fission product kinetic energy, prompt and delayed neutron kinetic energies,
prompt and delayed :math:`\gamma`-ray total energies, and the total energy
released by the delayed :math:`\beta` particles. The neutrino energy does
not contribute to this response. The prompt and delayed :math:`\gamma`-rays
are assumed to deposit their energy locally.
This class can be used for both OpenMC input generation and tally data
post-processing to compute spatially-homogenized and energy-integrated
multi-group cross sections for multi-group neutronics calculations. At a
minimum, one needs to set the :attr:`KappaFissionXS.energy_groups` and
:attr:`KappaFissionXS.domain` properties. Tallies for the flux and appropriate
reaction rates over the specified domain are generated automatically via the
:attr:`KappaFissionXS.tallies` property, which can then be appended to a
:class:`openmc.Tallies` instance.
For post-processing, the :meth:`MGXS.load_from_statepoint` will pull in the
necessary data to compute multi-group cross sections from a
:class:`openmc.StatePoint` instance. The derived multi-group cross section
can then be obtained from the :attr:`KappaFissionXS.xs_tally` property.
For a spatial domain :math:`V` and energy group :math:`[E_g,E_{g-1}]`, the
recoverable fission energy production rate cross section is calculated as:
.. math::
\frac{\int_{r \in V} dr \int_{4\pi} d\Omega \int_{E_g}^{E_{g-1}} dE \;
\kappa\sigma_f (r, E) \psi (r, E, \Omega)}{\int_{r \in V} dr \int_{4\pi}
d\Omega \int_{E_g}^{E_{g-1}} dE \; \psi (r, E, \Omega)}.
Parameters
----------
domain : openmc.Material or openmc.Cell or openmc.Universe or openmc.RegularMesh
The domain for spatial homogenization
domain_type : {'material', 'cell', 'distribcell', 'universe', 'mesh'}
The domain type for spatial homogenization
groups : openmc.mgxs.EnergyGroups
The energy group structure for energy condensation
by_nuclide : bool
If true, computes cross sections for each nuclide in domain
name : str, optional
Name of the multi-group cross section. Used as a label to identify
tallies in OpenMC 'tallies.xml' file.
num_polar : Integral, optional
Number of equi-width polar angle bins for angle discretization;
defaults to one bin
num_azimuthal : Integral, optional
Number of equi-width azimuthal angle bins for angle discretization;
defaults to one bin
Attributes
----------
name : str, optional
Name of the multi-group cross section
rxn_type : str
Reaction type (e.g., 'total', 'nu-fission', etc.)
by_nuclide : bool
If true, computes cross sections for each nuclide in domain
domain : openmc.Material or openmc.Cell or openmc.Universe or openmc.RegularMesh
Domain for spatial homogenization
domain_type : {'material', 'cell', 'distribcell', 'universe', 'mesh'}
Domain type for spatial homogenization
energy_groups : openmc.mgxs.EnergyGroups
Energy group structure for energy condensation
num_polar : Integral
Number of equi-width polar angle bins for angle discretization
num_azimuthal : Integral
Number of equi-width azimuthal angle bins for angle discretization
tally_trigger : openmc.Trigger
An (optional) tally precision trigger given to each tally used to
compute the cross section
scores : list of str
The scores in each tally used to compute the multi-group cross section
filters : list of openmc.Filter
The filters in each tally used to compute the multi-group cross section
tally_keys : list of str
The keys into the tallies dictionary for each tally used to compute
the multi-group cross section
estimator : {'tracklength', 'collision', 'analog'}
The tally estimator used to compute the multi-group cross section
tallies : collections.OrderedDict
OpenMC tallies needed to compute the multi-group cross section. The keys
are strings listed in the :attr:`KappaFissionXS.tally_keys` property and
values are instances of :class:`openmc.Tally`.
rxn_rate_tally : openmc.Tally
Derived tally for the reaction rate tally used in the numerator to
compute the multi-group cross section. This attribute is None
unless the multi-group cross section has been computed.
xs_tally : openmc.Tally
Derived tally for the multi-group cross section. This attribute
is None unless the multi-group cross section has been computed.
num_subdomains : int
The number of subdomains is unity for 'material', 'cell' and 'universe'
domain types. This is equal to the number of cell instances
for 'distribcell' domain types (it is equal to unity prior to loading
tally data from a statepoint file).
num_nuclides : int
The number of nuclides for which the multi-group cross section is
being tracked. This is unity if the by_nuclide attribute is False.
nuclides : Iterable of str or 'sum'
The optional user-specified nuclides for which to compute cross
sections (e.g., 'U238', 'O16'). If by_nuclide is True but nuclides
are not specified by the user, all nuclides in the spatial domain
are included. This attribute is 'sum' if by_nuclide is False.
sparse : bool
Whether or not the MGXS' tallies use SciPy's LIL sparse matrix format
for compressed data storage
loaded_sp : bool
Whether or not a statepoint file has been loaded with tally data
derived : bool
Whether or not the MGXS is merged from one or more other MGXS
hdf5_key : str
The key used to index multi-group cross sections in an HDF5 data store
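Examples
--------
A sketch for tallying the recoverable fission energy production rate;
``fuel_cell`` and ``groups`` are placeholders:

>>> kappa = openmc.mgxs.KappaFissionXS(domain=fuel_cell,
...                                    domain_type='cell', groups=groups)
>>> tallies = openmc.Tallies()
>>> tallies += kappa.tallies.values()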
"""
def __init__(self, domain=None, domain_type=None, groups=None,
by_nuclide=False, name='', num_polar=1, num_azimuthal=1):
super().__init__(domain, domain_type, groups, by_nuclide, name,
num_polar, num_azimuthal)
self._rxn_type = 'kappa-fission'
class ScatterXS(MGXS):
r"""A scattering multi-group cross section.
The scattering cross section is defined as the difference between the total
and absorption cross sections.
This class can be used for both OpenMC input generation and tally data
post-processing to compute spatially-homogenized and energy-integrated
multi-group cross sections for multi-group neutronics calculations. At a
minimum, one needs to set the :attr:`ScatterXS.energy_groups` and
:attr:`ScatterXS.domain` properties. Tallies for the flux and
appropriate reaction rates over the specified domain are generated
automatically via the :attr:`ScatterXS.tallies` property, which can
then be appended to a :class:`openmc.Tallies` instance.
For post-processing, the :meth:`MGXS.load_from_statepoint` will pull in the
necessary data to compute multi-group cross sections from a
:class:`openmc.StatePoint` instance. The derived multi-group cross section
can then be obtained from the :attr:`ScatterXS.xs_tally` property.
For a spatial domain :math:`V` and energy group :math:`[E_g,E_{g-1}]`, the
scattering cross section is calculated as:
.. math::
\frac{\int_{r \in V} dr \int_{4\pi} d\Omega \int_{E_g}^{E_{g-1}} dE \;
\left [ \sigma_t (r, E) \psi (r, E, \Omega) - \sigma_a (r, E) \psi (r, E,
\Omega) \right ]}{\int_{r \in V} dr \int_{4\pi} d\Omega
\int_{E_g}^{E_{g-1}} dE \; \psi (r, E, \Omega)}.
To incorporate the effect of scattering multiplication from (n,xn)
reactions in the above relation, the `nu` parameter can be set to `True`.
Parameters
----------
domain : openmc.Material or openmc.Cell or openmc.Universe or openmc.RegularMesh
The domain for spatial homogenization
domain_type : {'material', 'cell', 'distribcell', 'universe', 'mesh'}
The domain type for spatial homogenization
groups : openmc.mgxs.EnergyGroups
The energy group structure for energy condensation
by_nuclide : bool
If true, computes cross sections for each nuclide in domain
name : str, optional
Name of the multi-group cross section. Used as a label to identify
tallies in OpenMC 'tallies.xml' file.
num_polar : Integral, optional
Number of equi-width polar angle bins for angle discretization;
defaults to one bin
num_azimuthal : Integral, optional
Number of equi-width azimuthal angle bins for angle discretization;
defaults to one bin
nu : bool
If True, the cross section data will include neutron multiplication;
defaults to False
Attributes
----------
name : str, optional
Name of the multi-group cross section
rxn_type : str
Reaction type (e.g., 'total', 'nu-fission', etc.)
nu : bool
If True, the cross section data will include neutron multiplication
by_nuclide : bool
If true, computes cross sections for each nuclide in domain
domain : openmc.Material or openmc.Cell or openmc.Universe or openmc.RegularMesh
Domain for spatial homogenization
domain_type : {'material', 'cell', 'distribcell', 'universe', 'mesh'}
Domain type for spatial homogenization
energy_groups : openmc.mgxs.EnergyGroups
Energy group structure for energy condensation
num_polar : Integral
Number of equi-width polar angle bins for angle discretization
num_azimuthal : Integral
Number of equi-width azimuthal angle bins for angle discretization
tally_trigger : openmc.Trigger
An (optional) tally precision trigger given to each tally used to
compute the cross section
scores : list of str
The scores in each tally used to compute the multi-group cross section
filters : list of openmc.Filter
The filters in each tally used to compute the multi-group cross section
tally_keys : list of str
The keys into the tallies dictionary for each tally used to compute
the multi-group cross section
estimator : {'tracklength', 'collision', 'analog'}
The tally estimator used to compute the multi-group cross section
tallies : collections.OrderedDict
OpenMC tallies needed to compute the multi-group cross section. The keys
are strings listed in the :attr:`ScatterXS.tally_keys` property and
values are instances of :class:`openmc.Tally`.
rxn_rate_tally : openmc.Tally
Derived tally for the reaction rate tally used in the numerator to
compute the multi-group cross section. This attribute is None
unless the multi-group cross section has been computed.
xs_tally : openmc.Tally
Derived tally for the multi-group cross section. This attribute
is None unless the multi-group cross section has been computed.
num_subdomains : int
The number of subdomains is unity for 'material', 'cell' and 'universe'
domain types. This is equal to the number of cell instances
for 'distribcell' domain types (it is equal to unity prior to loading
tally data from a statepoint file).
num_nuclides : int
The number of nuclides for which the multi-group cross section is
being tracked. This is unity if the by_nuclide attribute is False.
nuclides : Iterable of str or 'sum'
The optional user-specified nuclides for which to compute cross
sections (e.g., 'U238', 'O16'). If by_nuclide is True but nuclides
are not specified by the user, all nuclides in the spatial domain
are included. This attribute is 'sum' if by_nuclide is False.
sparse : bool
Whether or not the MGXS' tallies use SciPy's LIL sparse matrix format
for compressed data storage
loaded_sp : bool
Whether or not a statepoint file has been loaded with tally data
derived : bool
Whether or not the MGXS is merged from one or more other MGXS
hdf5_key : str
The key used to index multi-group cross sections in an HDF5 data store
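Examples
--------
A sketch of the ``nu`` option, which includes (n,xn) multiplication and
restricts the tallies to analog estimators; ``fuel_cell`` and ``groups``
are placeholders:

>>> scatter = openmc.mgxs.ScatterXS(domain=fuel_cell,
...                                 domain_type='cell', groups=groups,
...                                 nu=True)
>>> scatter.estimator
'analog'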
"""
def __init__(self, domain=None, domain_type=None, groups=None,
by_nuclide=False, name='', num_polar=1,
num_azimuthal=1, nu=False):
super().__init__(domain, domain_type, groups, by_nuclide, name,
num_polar, num_azimuthal)
self.nu = nu
def __deepcopy__(self, memo):
clone = super().__deepcopy__(memo)
clone._nu = self.nu
return clone
@property
def nu(self):
return self._nu
@nu.setter
def nu(self, nu):
cv.check_type('nu', nu, bool)
self._nu = nu
if not nu:
self._rxn_type = 'scatter'
else:
self._rxn_type = 'nu-scatter'
self._estimator = 'analog'
self._valid_estimators = ['analog']
class ArbitraryXS(MGXS):
r"""A multi-group cross section for an arbitrary reaction type.
This class can be used for both OpenMC input generation and tally data
post-processing to compute spatially-homogenized and energy-integrated
multi-group total cross sections for multi-group neutronics calculations.
At a minimum, one needs to set the :attr:`ArbitraryXS.energy_groups` and
:attr:`ArbitraryXS.domain` properties. Tallies for the flux and appropriate
reaction rates over the specified domain are generated automatically via the
:attr:`ArbitraryXS.tallies` property, which can then be appended to a
:class:`openmc.Tallies` instance.
For post-processing, the :meth:`MGXS.load_from_statepoint` will pull in the
necessary data to compute multi-group cross sections from a
:class:`openmc.StatePoint` instance. The derived multi-group cross section
can then be obtained from the :attr:`ArbitraryXS.xs_tally` property.
For a spatial domain :math:`V` and energy group :math:`[E_g,E_{g-1}]`, the
requested cross section is calculated as:
.. math::
\frac{\int_{r \in V} dr \int_{4\pi} d\Omega \int_{E_g}^{E_{g-1}} dE \;
\sigma_X (r, E) \psi (r, E, \Omega)}{\int_{r \in V} dr \int_{4\pi}
d\Omega \int_{E_g}^{E_{g-1}} dE \; \psi (r, E, \Omega)}
where :math:`\sigma_X` is the requested reaction type of interest.
Parameters
----------
rxn_type : str
Reaction type (e.g., '(n,2n)', '(n,Xt)', etc.)
domain : openmc.Material or openmc.Cell or openmc.Universe or openmc.RegularMesh
The domain for spatial homogenization
domain_type : {'material', 'cell', 'distribcell', 'universe', 'mesh'}
The domain type for spatial homogenization
groups : openmc.mgxs.EnergyGroups
The energy group structure for energy condensation
by_nuclide : bool
If true, computes cross sections for each nuclide in domain
name : str, optional
Name of the multi-group cross section. Used as a label to identify
tallies in OpenMC 'tallies.xml' file.
num_polar : Integral, optional
Number of equi-width polar angle bins for angle discretization;
defaults to one bin
num_azimuthal : Integral, optional
Number of equi-width azimuthal angle bins for angle discretization;
defaults to one bin
Attributes
----------
name : str, optional
Name of the multi-group cross section
rxn_type : str
Reaction type (e.g., '(n,2n)', '(n,Xt)', etc.)
by_nuclide : bool
If true, computes cross sections for each nuclide in domain
domain : openmc.Material or openmc.Cell or openmc.Universe or openmc.RegularMesh
Domain for spatial homogenization
domain_type : {'material', 'cell', 'distribcell', 'universe', 'mesh'}
Domain type for spatial homogenization
energy_groups : openmc.mgxs.EnergyGroups
Energy group structure for energy condensation
num_polar : Integral
Number of equi-width polar angle bins for angle discretization
num_azimuthal : Integral
Number of equi-width azimuthal angle bins for angle discretization
tally_trigger : openmc.Trigger
An (optional) tally precision trigger given to each tally used to
compute the cross section
scores : list of str
The scores in each tally used to compute the multi-group cross section
filters : list of openmc.Filter
The filters in each tally used to compute the multi-group cross section
tally_keys : list of str
The keys into the tallies dictionary for each tally used to compute
the multi-group cross section
estimator : {'tracklength', 'collision', 'analog'}
The tally estimator used to compute the multi-group cross section
tallies : collections.OrderedDict
OpenMC tallies needed to compute the multi-group cross section. The keys
are strings listed in the :attr:`ArbitraryXS.tally_keys` property and values
are instances of :class:`openmc.Tally`.
rxn_rate_tally : openmc.Tally
Derived tally for the reaction rate tally used in the numerator to
compute the multi-group cross section. This attribute is None
unless the multi-group cross section has been computed.
xs_tally : openmc.Tally
Derived tally for the multi-group cross section. This attribute
is None unless the multi-group cross section has been computed.
num_subdomains : int
The number of subdomains is unity for 'material', 'cell' and 'universe'
domain types. This is equal to the number of cell instances
for 'distribcell' domain types (it is equal to unity prior to loading
tally data from a statepoint file).
num_nuclides : int
The number of nuclides for which the multi-group cross section is
being tracked. This is unity if the by_nuclide attribute is False.
nuclides : Iterable of str or 'sum'
The optional user-specified nuclides for which to compute cross
sections (e.g., 'U238', 'O16'). If by_nuclide is True but nuclides
are not specified by the user, all nuclides in the spatial domain
are included. This attribute is 'sum' if by_nuclide is False.
sparse : bool
Whether or not the MGXS' tallies use SciPy's LIL sparse matrix format
for compressed data storage
loaded_sp : bool
Whether or not a statepoint file has been loaded with tally data
derived : bool
Whether or not the MGXS is merged from one or more other MGXS
hdf5_key : str
The key used to index multi-group cross sections in an HDF5 data store
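Examples
--------
A sketch for an (n,2n) production cross section; the reaction name must
be one of ``ARBITRARY_VECTOR_TYPES``, and ``fuel_cell`` and ``groups``
are placeholders:

>>> n2n = openmc.mgxs.ArbitraryXS('(n,2n)', domain=fuel_cell,
...                               domain_type='cell', groups=groups)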
"""
def __init__(self, rxn_type, domain=None, domain_type=None, groups=None,
by_nuclide=False, name='', num_polar=1, num_azimuthal=1):
cv.check_value("rxn_type", rxn_type, ARBITRARY_VECTOR_TYPES)
super().__init__(domain, domain_type, groups, by_nuclide, name,
num_polar, num_azimuthal)
self._rxn_type = rxn_type
class ArbitraryMatrixXS(MatrixMGXS):
r"""A multi-group matrix cross section for an arbitrary reaction type.
This class can be used for both OpenMC input generation and tally data
post-processing to compute spatially-homogenized and energy-integrated
multi-group cross sections for multi-group neutronics calculations. At a
minimum, one needs to set the :attr:`ArbitraryMatrixXS.energy_groups` and
:attr:`ArbitraryMatrixXS.domain` properties. Tallies for the flux and
appropriate reaction rates over the specified domain are generated
automatically via the :attr:`ArbitraryMatrixXS.tallies` property, which can
then be appended to a :class:`openmc.Tallies` instance.
For post-processing, the :meth:`MGXS.load_from_statepoint` will pull in the
necessary data to compute multi-group cross sections from a
:class:`openmc.StatePoint` instance. The derived multi-group cross section
can then be obtained from the :attr:`ArbitraryMatrixXS.xs_tally` property.
For a spatial domain :math:`V`, incoming energy group
:math:`[E_{g'},E_{g'-1}]`, and outgoing energy group :math:`[E_g,E_{g-1}]`,
the production rate for the requested reaction is calculated as:
.. math::
\begin{aligned}
\langle \sigma_{X,g'\rightarrow g} \phi \rangle &= \int_{r \in V} dr
\int_{4\pi} d\Omega' \int_{E_{g'}}^{E_{g'-1}} dE' \int_{E_g}^{E_{g-1}} dE
\; \chi(E) \sigma_X (r, E') \psi(r, E', \Omega')\\
\langle \phi \rangle &= \int_{r \in V} dr \int_{4\pi} d\Omega
\int_{E_g}^{E_{g-1}} dE \; \psi (r, E, \Omega) \\
\sigma_{X,g'\rightarrow g} &= \frac{\langle \sigma_{X,g'\rightarrow
g} \phi \rangle}{\langle \phi \rangle}
\end{aligned}
where :math:`\sigma_X` is the requested reaction type of interest.
Parameters
----------
rxn_type : str
Reaction type (e.g., '(n,2n)', '(n,nta)', etc.). Valid names have
neutrons as a product.
domain : openmc.Material or openmc.Cell or openmc.Universe or openmc.RegularMesh
The domain for spatial homogenization
domain_type : {'material', 'cell', 'distribcell', 'universe', 'mesh'}
The domain type for spatial homogenization
groups : openmc.mgxs.EnergyGroups
The energy group structure for energy condensation
by_nuclide : bool
If true, computes cross sections for each nuclide in domain
name : str, optional
Name of the multi-group cross section. Used as a label to identify
tallies in OpenMC 'tallies.xml' file.
num_polar : Integral, optional
Number of equi-width polar angle bins for angle discretization;
defaults to one bin
num_azimuthal : Integral, optional
Number of equi-width azimuthal angle bins for angle discretization;
defaults to one bin
Attributes
----------
name : str, optional
Name of the multi-group cross section
rxn_type : str
Reaction type (e.g., '(n,2n)', '(n,nta)', etc.)
by_nuclide : bool
If true, computes cross sections for each nuclide in domain
domain : openmc.Material or openmc.Cell or openmc.Universe or openmc.RegularMesh
Domain for spatial homogenization
domain_type : {'material', 'cell', 'distribcell', 'universe', 'mesh'}
Domain type for spatial homogenization
energy_groups : openmc.mgxs.EnergyGroups
Energy group structure for energy condensation
num_polar : Integral
Number of equi-width polar angle bins for angle discretization
num_azimuthal : Integral
Number of equi-width azimuthal angle bins for angle discretization
tally_trigger : openmc.Trigger
An (optional) tally precision trigger given to each tally used to
compute the cross section
scores : list of str
The scores in each tally used to compute the multi-group cross section
filters : list of openmc.Filter
The filters in each tally used to compute the multi-group cross section
tally_keys : list of str
The keys into the tallies dictionary for each tally used to compute
the multi-group cross section
estimator : 'analog'
The tally estimator used to compute the multi-group cross section
tallies : collections.OrderedDict
OpenMC tallies needed to compute the multi-group cross section. The keys
are strings listed in the :attr:`ArbitraryMatrixXS.tally_keys`
property and values are instances of :class:`openmc.Tally`.
rxn_rate_tally : openmc.Tally
Derived tally for the reaction rate tally used in the numerator to
compute the multi-group cross section. This attribute is None
unless the multi-group cross section has been computed.
xs_tally : openmc.Tally
Derived tally for the multi-group cross section. This attribute
is None unless the multi-group cross section has been computed.
num_subdomains : int
The number of subdomains is unity for 'material', 'cell' and 'universe'
domain types. This is equal to the number of cell instances
for 'distribcell' domain types (it is equal to unity prior to loading
tally data from a statepoint file).
num_nuclides : int
The number of nuclides for which the multi-group cross section is
being tracked. This is unity if the by_nuclide attribute is False.
nuclides : Iterable of str or 'sum'
The optional user-specified nuclides for which to compute cross
sections (e.g., 'U238', 'O16'). If by_nuclide is True but nuclides
are not specified by the user, all nuclides in the spatial domain
are included. This attribute is 'sum' if by_nuclide is False.
sparse : bool
Whether or not the MGXS' tallies use SciPy's LIL sparse matrix format
for compressed data storage
loaded_sp : bool
Whether or not a statepoint file has been loaded with tally data
derived : bool
Whether or not the MGXS is merged from one or more other MGXS
hdf5_key : str
The key used to index multi-group cross sections in an HDF5 data store
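Examples
--------
A sketch for a group-to-group (n,2n) production matrix. The reaction
name must be one of ``ARBITRARY_MATRIX_TYPES``; given the name handling
in ``__init__``, entries are assumed to take the form '(n,2n) matrix'.
``fuel_cell`` and ``groups`` are placeholders:

>>> n2n_matrix = openmc.mgxs.ArbitraryMatrixXS('(n,2n) matrix',
...                                            domain=fuel_cell,
...                                            domain_type='cell',
...                                            groups=groups)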
"""
def __init__(self, rxn_type, domain=None, domain_type=None, groups=None,
by_nuclide=False, name='', num_polar=1,
num_azimuthal=1):
cv.check_value("rxn_type", rxn_type, ARBITRARY_MATRIX_TYPES)
super().__init__(domain, domain_type, groups, by_nuclide, name,
num_polar, num_azimuthal)
self._rxn_type = rxn_type.split(" ")[0]
self._estimator = 'analog'
self._valid_estimators = ['analog']
class ScatterMatrixXS(MatrixMGXS):
r"""A scattering matrix multi-group cross section with the cosine of the
change-in-angle represented as one or more Legendre moments or a histogram.
This class can be used for both OpenMC input generation and tally data
post-processing to compute spatially-homogenized and energy-integrated
multi-group cross sections for multi-group neutronics calculations. At a
minimum, one needs to set the :attr:`ScatterMatrixXS.energy_groups` and
:attr:`ScatterMatrixXS.domain` properties. Tallies for the flux and
appropriate reaction rates over the specified domain are generated
automatically via the :attr:`ScatterMatrixXS.tallies` property, which can
then be appended to a :class:`openmc.Tallies` instance.
For post-processing, the :meth:`MGXS.load_from_statepoint` will pull in the
necessary data to compute multi-group cross sections from a
:class:`openmc.StatePoint` instance. The derived multi-group cross section
can then be obtained from the :attr:`ScatterMatrixXS.xs_tally` property.
For a spatial domain :math:`V`, incoming energy group
:math:`[E_{g'},E_{g'-1}]`, and outgoing energy group :math:`[E_g,E_{g-1}]`,
the Legendre scattering moments are calculated as:
.. math::
\begin{aligned}
\langle \sigma_{s,\ell,g'\rightarrow g} \phi \rangle &= \int_{r \in V} dr
\int_{4\pi} d\Omega' \int_{E_{g'}}^{E_{g'-1}} dE' \int_{4\pi} d\Omega
\int_{E_g}^{E_{g-1}} dE \; P_\ell (\Omega \cdot \Omega') \sigma_s (r, E'
\rightarrow E, \Omega' \cdot \Omega) \psi(r, E', \Omega')\\
\langle \phi \rangle &= \int_{r \in V} dr \int_{4\pi} d\Omega
\int_{E_g}^{E_{g-1}} dE \; \psi (r, E, \Omega) \\
\sigma_{s,\ell,g'\rightarrow g} &= \frac{\langle
\sigma_{s,\ell,g'\rightarrow g} \phi \rangle}{\langle \phi \rangle}
\end{aligned}
If the order is zero and a :math:`P_0` transport-correction is applied
(default), the scattering matrix elements are:
.. math::
\sigma_{s,g'\rightarrow g} = \frac{\langle \sigma_{s,0,g'\rightarrow g}
\phi \rangle - \delta_{gg'} \sum_{g''} \langle \sigma_{s,1,g''\rightarrow
g} \phi \rangle}{\langle \phi \rangle}
To incorporate the effect of neutron multiplication from (n,xn) reactions
in the above relation, the `nu` parameter can be set to `True`.
An alternative form of the scattering matrix is computed when the
`formulation` property is set to 'consistent' rather than the default
of 'simple'. This formulation computes the scattering matrix multi-group
cross section as the product of the scatter cross section and
group-to-group scattering probabilities.
Unlike the default 'simple' formulation, the 'consistent' formulation
is computed from the groupwise scattering cross section which uses a
tracklength estimator. This ensures that reaction rate balance is exactly
preserved with a :class:`TotalXS` computed using a tracklength estimator.
For a scattering probability matrix :math:`P_{s,\ell,g'\rightarrow g}` and
scattering cross section :math:`\sigma_s (r, E)` for incoming energy group
:math:`[E_{g'},E_{g'-1}]` and outgoing energy group :math:`[E_g,E_{g-1}]`,
the Legendre scattering moments are calculated as:
.. math::
\sigma_{s,\ell,g'\rightarrow g} = \sigma_s (r, E) \times
P_{s,\ell,g'\rightarrow g}
To incorporate the effect of neutron multiplication from (n,xn) reactions
in the 'consistent' scattering matrix, the `nu` parameter can be set to `True`
such that the Legendre scattering moments are calculated as:
.. math::
\sigma_{s,\ell,g'\rightarrow g} = \upsilon_{g'\rightarrow g} \times
\sigma_s (r, E) \times P_{s,\ell,g'\rightarrow g}
Parameters
----------
domain : openmc.Material or openmc.Cell or openmc.Universe or openmc.RegularMesh
The domain for spatial homogenization
domain_type : {'material', 'cell', 'distribcell', 'universe', 'mesh'}
The domain type for spatial homogenization
groups : openmc.mgxs.EnergyGroups
The energy group structure for energy condensation
by_nuclide : bool
If true, computes cross sections for each nuclide in domain
name : str, optional
Name of the multi-group cross section. Used as a label to identify
tallies in OpenMC 'tallies.xml' file.
num_polar : int, optional
Number of equi-width polar angle bins for angle discretization;
defaults to one bin
num_azimuthal : int, optional
Number of equi-width azimuthal angle bins for angle discretization;
defaults to one bin
nu : bool
If True, the cross section data will include neutron multiplication;
defaults to False
Attributes
----------
formulation : 'simple' or 'consistent'
The calculation approach to use ('simple' by default). The 'simple'
formulation simply divides the group-to-group scattering rates by
the groupwise flux, each computed from analog tally estimators. The
'consistent' formulation multiplies the groupwise scattering rates
by the group-to-group scatter probability matrix, the former computed
from tracklength tallies and the latter computed from analog tallies.
The 'consistent' formulation is designed to better conserve reaction
rate balance with the total and absorption cross sections computed
using tracklength tally estimators.
correction : 'P0' or None
Apply the P0 correction to scattering matrices if set to 'P0'; this is
used only if :attr:`ScatterMatrixXS.scatter_format` is 'legendre'
scatter_format : {'legendre', 'histogram'}
Representation of the angular scattering distribution (default is
'legendre')
legendre_order : int
The highest Legendre moment in the scattering matrix; this is used if
:attr:`ScatterMatrixXS.scatter_format` is 'legendre'. (default is 0)
histogram_bins : int
The number of equally-spaced bins for the histogram representation of
the angular scattering distribution; this is used if
:attr:`ScatterMatrixXS.scatter_format` is 'histogram'. (default is 16)
name : str, optional
Name of the multi-group cross section
rxn_type : str
Reaction type (e.g., 'total', 'nu-fission', etc.)
nu : bool
If True, the cross section data will include neutron multiplication
by_nuclide : bool
If true, computes cross sections for each nuclide in domain
domain : openmc.Material or openmc.Cell or openmc.Universe or openmc.RegularMesh
Domain for spatial homogenization
domain_type : {'material', 'cell', 'distribcell', 'universe', 'mesh'}
Domain type for spatial homogenization
energy_groups : openmc.mgxs.EnergyGroups
Energy group structure for energy condensation
num_polar : int
Number of equi-width polar angle bins for angle discretization
num_azimuthal : int
Number of equi-width azimuthal angle bins for angle discretization
tally_trigger : openmc.Trigger
An (optional) tally precision trigger given to each tally used to
compute the cross section
scores : list of str
The scores in each tally used to compute the multi-group cross section
filters : list of openmc.Filter
The filters in each tally used to compute the multi-group cross section
tally_keys : list of str
The keys into the tallies dictionary for each tally used to compute
the multi-group cross section
estimator : str or list of str
The tally estimator(s) used to compute the multi-group cross section:
'analog' for the 'simple' formulation, or a list of per-tally estimators
('tracklength' and 'analog') for the 'consistent' formulation
tallies : collections.OrderedDict
OpenMC tallies needed to compute the multi-group cross section. The keys
are strings listed in the :attr:`ScatterMatrixXS.tally_keys` property
and values are instances of :class:`openmc.Tally`.
rxn_rate_tally : openmc.Tally
Derived tally for the reaction rate tally used in the numerator to
compute the multi-group cross section. This attribute is None
unless the multi-group cross section has been computed.
xs_tally : openmc.Tally
Derived tally for the multi-group cross section. This attribute
is None unless the multi-group cross section has been computed.
num_subdomains : int
The number of subdomains is unity for 'material', 'cell' and 'universe'
domain types. This is equal to the number of cell instances
for 'distribcell' domain types (it is equal to unity prior to loading
tally data from a statepoint file).
num_nuclides : int
The number of nuclides for which the multi-group cross section is
being tracked. This is unity if the by_nuclide attribute is False.
nuclides : Iterable of str or 'sum'
The optional user-specified nuclides for which to compute cross
sections (e.g., 'U238', 'O16'). If by_nuclide is True but nuclides
are not specified by the user, all nuclides in the spatial domain
are included. This attribute is 'sum' if by_nuclide is false.
sparse : bool
Whether or not the MGXS' tallies use SciPy's LIL sparse matrix format
for compressed data storage
loaded_sp : bool
Whether or not a statepoint file has been loaded with tally data
derived : bool
Whether or not the MGXS is merged from one or more other MGXS
hdf5_key : str
The key used to index multi-group cross sections in an HDF5 data store
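Examples
--------
A minimal usage sketch; ``fuel_cell`` is a placeholder for an
:class:`openmc.Cell` defined elsewhere in the user's model, and the
statepoint filename is illustrative::

    import openmc
    import openmc.mgxs

    # Two-group structure with a 0.625 eV thermal cutoff
    groups = openmc.mgxs.EnergyGroups([0.0, 0.625, 20.0e6])

    # Instantiate the MGXS and add its tallies to the model
    scatter_matrix = openmc.mgxs.ScatterMatrixXS(
        domain=fuel_cell, domain_type='cell', groups=groups)
    scatter_matrix.legendre_order = 3

    tallies = openmc.Tallies()
    tallies += scatter_matrix.tallies.values()
    tallies.export_to_xml()

    # After the OpenMC run, load the results and extract the data
    sp = openmc.StatePoint('statepoint.100.h5')
    scatter_matrix.load_from_statepoint(sp)
    df = scatter_matrix.get_pandas_dataframe()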
"""
def __init__(self, domain=None, domain_type=None, groups=None,
by_nuclide=False, name='', num_polar=1,
num_azimuthal=1, nu=False):
super().__init__(domain, domain_type, groups, by_nuclide, name,
num_polar, num_azimuthal)
self._formulation = 'simple'
self._correction = 'P0'
self._scatter_format = SCATTER_LEGENDRE
self._legendre_order = 0
self._histogram_bins = 16
self._estimator = 'analog'
self._valid_estimators = ['analog']
self.nu = nu
def __deepcopy__(self, memo):
clone = super().__deepcopy__(memo)
clone._formulation = self.formulation
clone._correction = self.correction
clone._scatter_format = self.scatter_format
clone._legendre_order = self.legendre_order
clone._histogram_bins = self.histogram_bins
clone._nu = self.nu
return clone
@property
def _dont_squeeze(self):
"""Create a tuple of axes which should not be removed during the get_xs
process
"""
if self.num_polar > 1 or self.num_azimuthal > 1:
if self.scatter_format == SCATTER_HISTOGRAM:
return (0, 1, 3, 4, 5)
else:
return (0, 1, 3, 4)
else:
if self.scatter_format == SCATTER_HISTOGRAM:
return (1, 2, 3)
else:
return (1, 2)
@property
def formulation(self):
return self._formulation
@property
def correction(self):
return self._correction
@property
def scatter_format(self):
return self._scatter_format
@property
def legendre_order(self):
return self._legendre_order
@property
def histogram_bins(self):
return self._histogram_bins
@property
def nu(self):
return self._nu
@property
def scores(self):
if self.formulation == 'simple':
scores = ['flux', self.rxn_type]
else:
# Add scores for groupwise scattering cross section
scores = ['flux', 'scatter']
# Add scores for group-to-group scattering probability matrix
# these scores also contain the angular information, whether it be
# Legendre expansion or histogram bins
scores.append('scatter')
# Add scores for multiplicity matrix; scatter info for the
# denominator will come from the previous score
if self.nu:
scores.append('nu-scatter')
# Add scores for transport correction
if self.correction == 'P0' and self.legendre_order == 0:
scores.extend([self.rxn_type, 'flux'])
return scores
@property
def tally_keys(self):
if self.formulation == 'simple':
return super().tally_keys
else:
# Add keys for groupwise scattering cross section
tally_keys = ['flux (tracklength)', 'scatter']
# Add keys for group-to-group scattering probability matrix
tally_keys.append('scatter matrix')
# Add keys for multiplicity matrix
if self.nu:
tally_keys.extend(['nu-scatter'])
# Add keys for transport correction
if self.correction == 'P0' and self.legendre_order == 0:
tally_keys.extend(['correction', 'flux (analog)'])
return tally_keys
@property
def estimator(self):
if self.formulation == 'simple':
return self._estimator
else:
# Add estimators for groupwise scattering cross section
estimators = ['tracklength', 'tracklength']
# Add estimators for group-to-group scattering probabilities
estimators.append('analog')
# Add estimators for multiplicity matrix
if self.nu:
estimators.extend(['analog'])
# Add estimators for transport correction
if self.correction == 'P0' and self.legendre_order == 0:
estimators.extend(['analog', 'analog'])
return estimators
@property
def filters(self):
if self.formulation == 'simple':
group_edges = self.energy_groups.group_edges
energy = openmc.EnergyFilter(group_edges)
energyout = openmc.EnergyoutFilter(group_edges)
if self.scatter_format == SCATTER_LEGENDRE:
if self.correction == 'P0' and self.legendre_order == 0:
angle_filter = openmc.LegendreFilter(order=1)
else:
angle_filter = \
openmc.LegendreFilter(order=self.legendre_order)
elif self.scatter_format == SCATTER_HISTOGRAM:
bins = np.linspace(-1., 1., num=self.histogram_bins + 1,
endpoint=True)
angle_filter = openmc.MuFilter(bins)
filters = [[energy], [energy, energyout, angle_filter]]
else:
group_edges = self.energy_groups.group_edges
energy = openmc.EnergyFilter(group_edges)
energyout = openmc.EnergyoutFilter(group_edges)
# Groupwise scattering cross section
filters = [[energy], [energy]]
# Group-to-group scattering probability matrix
if self.scatter_format == SCATTER_LEGENDRE:
angle_filter = openmc.LegendreFilter(order=self.legendre_order)
elif self.scatter_format == SCATTER_HISTOGRAM:
bins = np.linspace(-1., 1., num=self.histogram_bins + 1,
endpoint=True)
angle_filter = openmc.MuFilter(bins)
filters.append([energy, energyout, angle_filter])
# Multiplicity matrix
if self.nu:
filters.extend([[energy, energyout]])
# Add filters for transport correction
if self.correction == 'P0' and self.legendre_order == 0:
filters.extend([[energyout, openmc.LegendreFilter(1)],
[energy]])
return self._add_angle_filters(filters)
@property
def rxn_rate_tally(self):
if self._rxn_rate_tally is None:
if self.formulation == 'simple':
if self.scatter_format == SCATTER_LEGENDRE:
# If using P0 correction subtract P1 scatter from the diag.
if self.correction == 'P0' and self.legendre_order == 0:
scatter_p0 = self.tallies[self.rxn_type].get_slice(
filters=[openmc.LegendreFilter],
filter_bins=[('P0',)])
scatter_p1 = self.tallies[self.rxn_type].get_slice(
filters=[openmc.LegendreFilter],
filter_bins=[('P1',)])
# Set the Legendre order of these tallies to be 0
# so they can be subtracted
legendre = openmc.LegendreFilter(order=0)
scatter_p0.filters[-1] = legendre
scatter_p1.filters[-1] = legendre
scatter_p1 = scatter_p1.summation(
filter_type=openmc.EnergyFilter,
remove_filter=True)
energy_filter = \
scatter_p0.find_filter(openmc.EnergyFilter)
# Transform scatter-p1 into an energyin/out matrix
# to match scattering matrix shape for tally arithmetic
energy_filter = copy.deepcopy(energy_filter)
scatter_p1 = \
scatter_p1.diagonalize_filter(energy_filter, 1)
self._rxn_rate_tally = scatter_p0 - scatter_p1
# Otherwise, extract scattering moment reaction rate Tally
else:
self._rxn_rate_tally = self.tallies[self.rxn_type]
elif self.scatter_format == SCATTER_HISTOGRAM:
# Extract scattering rate distribution tally
self._rxn_rate_tally = self.tallies[self.rxn_type]
self._rxn_rate_tally.sparse = self.sparse
else:
msg = 'The reaction rate tally is poorly defined' \
' for the consistent formulation'
raise NotImplementedError(msg)
return self._rxn_rate_tally
@property
def xs_tally(self):
if self._xs_tally is None:
if self.tallies is None:
msg = 'Unable to get xs_tally since tallies have ' \
'not been loaded from a statepoint'
raise ValueError(msg)
# Use super class method
if self.formulation == 'simple':
self._xs_tally = MGXS.xs_tally.fget(self)
else:
# Compute scattering probability matrix
tally_key = 'scatter matrix'
# Compute normalization factor summed across outgoing energies
if self.scatter_format == SCATTER_LEGENDRE:
norm = self.tallies[tally_key].get_slice(
scores=['scatter'],
filters=[openmc.LegendreFilter],
filter_bins=[('P0',)], squeeze=True)
# Compute normalization factor summed across outgoing mu bins
elif self.scatter_format == SCATTER_HISTOGRAM:
norm = self.tallies[tally_key].get_slice(
scores=['scatter'])
norm = norm.summation(
filter_type=openmc.MuFilter, remove_filter=True)
norm = norm.summation(filter_type=openmc.EnergyoutFilter,
remove_filter=True)
# Compute groupwise scattering cross section
self._xs_tally = self.tallies['scatter'] * \
self.tallies[tally_key] / norm / \
self.tallies['flux (tracklength)']
# Override the nuclides for tally arithmetic
self._xs_tally.nuclides = self.tallies['scatter'].nuclides
# Multiply by the multiplicity matrix
if self.nu:
numer = self.tallies['nu-scatter']
# Get the denominator
if self.scatter_format == SCATTER_LEGENDRE:
denom = self.tallies[tally_key].get_slice(
scores=['scatter'],
filters=[openmc.LegendreFilter],
filter_bins=[('P0',)], squeeze=True)
# Compute normalization factor summed across mu bins
elif self.scatter_format == SCATTER_HISTOGRAM:
denom = self.tallies[tally_key].get_slice(
scores=['scatter'])
# Sum across all mu bins
denom = denom.summation(
filter_type=openmc.MuFilter, remove_filter=True)
self._xs_tally *= (numer / denom)
# If using P0 correction subtract scatter-1 from the diagonal
if self.correction == 'P0' and self.legendre_order == 0:
scatter_p1 = self.tallies['correction'].get_slice(
filters=[openmc.LegendreFilter], filter_bins=[('P1',)])
flux = self.tallies['flux (analog)']
# Set the Legendre order of the P1 tally to be P0
# so it can be subtracted
legendre = openmc.LegendreFilter(order=0)
scatter_p1.filters[-1] = legendre
# Transform scatter-p1 tally into an energyin/out matrix
# to match scattering matrix shape for tally arithmetic
energy_filter = flux.find_filter(openmc.EnergyFilter)
energy_filter = copy.deepcopy(energy_filter)
scatter_p1 = scatter_p1.diagonalize_filter(energy_filter, 1)
# Compute the transport correction term
correction = scatter_p1 / flux
# Override the nuclides for tally arithmetic
correction.nuclides = scatter_p1.nuclides
# Set xs_tally to be itself with only P0 data
self._xs_tally = self._xs_tally.get_slice(
filters=[openmc.LegendreFilter], filter_bins=[('P0',)])
# Tell xs_tally that it is P0
legendre_xs_tally = \
self._xs_tally.find_filter(openmc.LegendreFilter)
legendre_xs_tally.order = 0
# And subtract the P1 correction from the P0 matrix
self._xs_tally -= correction
self._compute_xs()
# Force the angle filter to be the last filter
if self.scatter_format == SCATTER_HISTOGRAM:
angle_filter = self._xs_tally.find_filter(openmc.MuFilter)
else:
angle_filter = \
self._xs_tally.find_filter(openmc.LegendreFilter)
angle_filter_index = self._xs_tally.filters.index(angle_filter)
# If the angle filter index is not last, then make it last
if angle_filter_index != len(self._xs_tally.filters) - 1:
energyout_filter = \
self._xs_tally.find_filter(openmc.EnergyoutFilter)
self._xs_tally._swap_filters(energyout_filter,
angle_filter)
return self._xs_tally
@nu.setter
def nu(self, nu):
cv.check_type('nu', nu, bool)
self._nu = nu
if self.formulation == 'simple':
if not nu:
self._rxn_type = 'scatter'
self._hdf5_key = 'scatter matrix'
else:
self._rxn_type = 'nu-scatter'
self._hdf5_key = 'nu-scatter matrix'
else:
if not nu:
self._rxn_type = 'scatter'
self._hdf5_key = 'consistent scatter matrix'
else:
self._rxn_type = 'nu-scatter'
self._hdf5_key = 'consistent nu-scatter matrix'
@formulation.setter
def formulation(self, formulation):
cv.check_value('formulation', formulation, ('simple', 'consistent'))
self._formulation = formulation
if self.formulation == 'simple':
self._valid_estimators = ['analog']
if not self.nu:
self._hdf5_key = 'scatter matrix'
else:
self._hdf5_key = 'nu-scatter matrix'
else:
self._valid_estimators = ['tracklength']
if not self.nu:
self._hdf5_key = 'consistent scatter matrix'
else:
self._hdf5_key = 'consistent nu-scatter matrix'
@correction.setter
def correction(self, correction):
cv.check_value('correction', correction, ('P0', None))
if self.scatter_format == SCATTER_LEGENDRE:
if correction == 'P0' and self.legendre_order > 0:
msg = 'The P0 correction will be ignored since the ' \
'scattering order {} is greater than '\
'zero'.format(self.legendre_order)
warnings.warn(msg)
elif self.scatter_format == SCATTER_HISTOGRAM:
msg = 'The P0 correction will be ignored since the ' \
'scatter format is set to histogram'
warnings.warn(msg)
self._correction = correction
@scatter_format.setter
def scatter_format(self, scatter_format):
cv.check_value('scatter_format', scatter_format, MU_TREATMENTS)
self._scatter_format = scatter_format
@legendre_order.setter
def legendre_order(self, legendre_order):
cv.check_type('legendre_order', legendre_order, Integral)
cv.check_greater_than('legendre_order', legendre_order, 0,
equality=True)
cv.check_less_than('legendre_order', legendre_order, _MAX_LEGENDRE,
equality=True)
if self.scatter_format == SCATTER_LEGENDRE:
if self.correction == 'P0' and legendre_order > 0:
msg = 'The P0 correction will be ignored since the ' \
'scattering order {} is greater than '\
'zero'.format(legendre_order)
warnings.warn(msg, RuntimeWarning)
self.correction = None
elif self.scatter_format == SCATTER_HISTOGRAM:
msg = 'The Legendre order will be ignored since the ' \
'scatter format is set to histogram'
warnings.warn(msg)
self._legendre_order = legendre_order
@histogram_bins.setter
def histogram_bins(self, histogram_bins):
cv.check_type('histogram_bins', histogram_bins, Integral)
cv.check_greater_than('histogram_bins', histogram_bins, 0)
self._histogram_bins = histogram_bins
def load_from_statepoint(self, statepoint):
"""Extracts tallies in an OpenMC StatePoint with the data needed to
compute multi-group cross sections.
This method is needed to compute cross section data from tallies
in an OpenMC StatePoint object.
.. note:: The statepoint must be linked with an OpenMC Summary object.
Parameters
----------
statepoint : openmc.StatePoint
An OpenMC StatePoint object with tally data
Raises
------
ValueError
When this method is called with a statepoint that has not been
linked with a summary object.
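Examples
--------
A sketch of typical use; the statepoint filename is illustrative::

    sp = openmc.StatePoint('statepoint.100.h5')
    scatter_matrix.load_from_statepoint(sp)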
"""
# Clear any tallies previously loaded from a statepoint
if self.loaded_sp:
self._tallies = None
self._xs_tally = None
self._rxn_rate_tally = None
self._loaded_sp = False
super().load_from_statepoint(statepoint)
def get_slice(self, nuclides=[], in_groups=[], out_groups=[],
legendre_order='same'):
"""Build a sliced ScatterMatrix for the specified nuclides and
energy groups.
This method constructs a new MGXS to encapsulate a subset of the data
represented by this MGXS. The subset of data to include in the tally
slice is determined by the nuclides and energy groups specified in
the input parameters.
Parameters
----------
nuclides : list of str
A list of nuclide name strings
(e.g., ['U235', 'U238']; default is [])
in_groups : list of int
A list of incoming energy group indices starting at 1 for the high
energies (e.g., [1, 2, 3]; default is [])
out_groups : list of int
A list of outgoing energy group indices starting at 1 for the high
energies (e.g., [1, 2, 3]; default is [])
legendre_order : int or 'same'
The highest Legendre moment in the sliced MGXS. If order is 'same'
then the sliced MGXS will have the same Legendre moments as the
original MGXS (default). If order is an integer less than the
original MGXS' order, then only those Legendre moments up to that
order will be included in the sliced MGXS.
Returns
-------
openmc.mgxs.MatrixMGXS
A new MatrixMGXS which encapsulates the subset of data requested
for the nuclide(s) and/or energy group(s) requested in the
parameters.
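Examples
--------
For instance, a sliced matrix restricted to U235 and the two highest
incoming energy groups (a sketch; assumes tally data has been loaded)::

    sliced = scatter_matrix.get_slice(nuclides=['U235'],
                                      in_groups=[1, 2])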
"""
# Call super class method and null out derived tallies
slice_xs = super().get_slice(nuclides, in_groups)
slice_xs._rxn_rate_tally = None
slice_xs._xs_tally = None
# Slice the Legendre order if needed
if legendre_order != 'same' and self.scatter_format == SCATTER_LEGENDRE:
cv.check_type('legendre_order', legendre_order, Integral)
cv.check_less_than('legendre_order', legendre_order,
self.legendre_order, equality=True)
slice_xs.legendre_order = legendre_order
# Slice the scattering tally
filter_bins = [tuple(['P{}'.format(i)
for i in range(self.legendre_order + 1)])]
slice_xs.tallies[self.rxn_type] = \
slice_xs.tallies[self.rxn_type].get_slice(
filters=[openmc.LegendreFilter], filter_bins=filter_bins)
# Slice outgoing energy groups if needed
if len(out_groups) != 0:
filter_bins = []
for group in out_groups:
group_bounds = self.energy_groups.get_group_bounds(group)
filter_bins.append(group_bounds)
filter_bins = [tuple(filter_bins)]
# Slice each of the tallies across energyout groups
for tally_type, tally in slice_xs.tallies.items():
if tally.contains_filter(openmc.EnergyoutFilter):
tally_slice = tally.get_slice(
filters=[openmc.EnergyoutFilter],
filter_bins=filter_bins)
slice_xs.tallies[tally_type] = tally_slice
slice_xs.sparse = self.sparse
return slice_xs
def get_xs(self, in_groups='all', out_groups='all',
subdomains='all', nuclides='all', moment='all',
xs_type='macro', order_groups='increasing',
row_column='inout', value='mean', squeeze=True):
r"""Returns an array of multi-group cross sections.
This method constructs a 5D NumPy array for the requested
multi-group cross section data for one or more subdomains
(1st dimension), energy groups in (2nd dimension), energy groups out
(3rd dimension), nuclides (4th dimension), and moments/histograms
(5th dimension).
.. note:: The scattering moments are not multiplied by the
:math:`(2\ell+1)/2` prefactor in the expansion of the
scattering source into Legendre moments in the neutron
transport equation.
Parameters
----------
in_groups : Iterable of Integral or 'all'
Incoming energy groups of interest. Defaults to 'all'.
out_groups : Iterable of Integral or 'all'
Outgoing energy groups of interest. Defaults to 'all'.
subdomains : Iterable of Integral or 'all'
Subdomain IDs of interest. Defaults to 'all'.
nuclides : Iterable of str or 'all' or 'sum'
A list of nuclide name strings (e.g., ['U235', 'U238']). The
special string 'all' will return the cross sections for all nuclides
in the spatial domain. The special string 'sum' will return the
cross section summed over all nuclides. Defaults to 'all'.
moment : int or 'all'
The scattering matrix moment to return. All moments will be
returned if the moment is 'all' (default); otherwise, a specific
moment will be returned.
xs_type : {'macro', 'micro'}
Return the macro or micro cross section in units of cm^-1 or barns.
Defaults to 'macro'.
order_groups : {'increasing', 'decreasing'}
Return the cross section indexed according to increasing or
decreasing energy groups (decreasing or increasing energies).
Defaults to 'increasing'.
row_column : {'inout', 'outin'}
Return the cross section indexed first by incoming group and
second by outgoing group ('inout'), or vice versa ('outin').
Defaults to 'inout'.
value : {'mean', 'std_dev', 'rel_err'}
A string for the type of value to return. Defaults to 'mean'.
squeeze : bool
A boolean representing whether to eliminate the extra dimensions
of the multi-dimensional array to be returned. Defaults to True.
Returns
-------
numpy.ndarray
A NumPy array of the multi-group cross section indexed in the order
each group and subdomain is listed in the parameters.
Raises
------
ValueError
When this method is called before the multi-group cross section is
computed from tally data.
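Examples
--------
For example, the P1 moment of the group 1 -> group 2 matrix element
(a sketch; assumes tally data has been loaded from a statepoint)::

    xs = scatter_matrix.get_xs(in_groups=[1], out_groups=[2], moment=1)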
"""
cv.check_value('value', value, ['mean', 'std_dev', 'rel_err'])
cv.check_value('xs_type', xs_type, ['macro', 'micro'])
# FIXME: Unable to get microscopic xs for mesh domain because the mesh
# cells do not know the nuclide densities in each mesh cell.
if self.domain_type == 'mesh' and xs_type == 'micro':
msg = 'Unable to get micro xs for mesh domain since the mesh ' \
'cells do not know the nuclide densities in each mesh cell.'
raise ValueError(msg)
filters = []
filter_bins = []
# Construct a collection of the domain filter bins
if not isinstance(subdomains, str):
cv.check_iterable_type('subdomains', subdomains, Integral, max_depth=3)
filters.append(_DOMAIN_TO_FILTER[self.domain_type])
subdomain_bins = []
for subdomain in subdomains:
subdomain_bins.append(subdomain)
filter_bins.append(tuple(subdomain_bins))
# Construct list of energy group bounds tuples for all requested groups
if not isinstance(in_groups, str):
cv.check_iterable_type('groups', in_groups, Integral)
filters.append(openmc.EnergyFilter)
energy_bins = []
for group in in_groups:
energy_bins.append(
(self.energy_groups.get_group_bounds(group),))
filter_bins.append(tuple(energy_bins))
# Construct list of energy group bounds tuples for all requested groups
if not isinstance(out_groups, str):
cv.check_iterable_type('groups', out_groups, Integral)
for group in out_groups:
filters.append(openmc.EnergyoutFilter)
filter_bins.append((self.energy_groups.get_group_bounds(group),))
# Construct CrossScore for requested scattering moment
if self.scatter_format == SCATTER_LEGENDRE:
if moment != 'all':
cv.check_type('moment', moment, Integral)
cv.check_greater_than('moment', moment, 0, equality=True)
cv.check_less_than(
'moment', moment, self.legendre_order, equality=True)
filters.append(openmc.LegendreFilter)
filter_bins.append(('P{}'.format(moment),))
num_angle_bins = 1
else:
num_angle_bins = self.legendre_order + 1
else:
num_angle_bins = self.histogram_bins
# Construct a collection of the nuclides to retrieve from the xs tally
if self.by_nuclide:
if nuclides == 'all' or nuclides == 'sum' or nuclides == ['sum']:
query_nuclides = self.get_nuclides()
else:
query_nuclides = nuclides
else:
query_nuclides = ['total']
# Use tally summation if user requested the sum for all nuclides
scores = self.xs_tally.scores
if nuclides == 'sum' or nuclides == ['sum']:
xs_tally = self.xs_tally.summation(nuclides=query_nuclides)
xs = xs_tally.get_values(scores=scores, filters=filters,
filter_bins=filter_bins, value=value)
else:
xs = self.xs_tally.get_values(scores=scores, filters=filters,
filter_bins=filter_bins,
nuclides=query_nuclides, value=value)
# Divide by atom number densities for microscopic cross sections
if xs_type == 'micro' and self._divide_by_density:
if self.by_nuclide:
densities = self.get_nuclide_densities(nuclides)
else:
densities = self.get_nuclide_densities('sum')
if value == 'mean' or value == 'std_dev':
xs /= densities[np.newaxis, :, np.newaxis]
# Convert any NaNs to zero
xs = np.nan_to_num(xs)
if in_groups == 'all':
num_in_groups = self.num_groups
else:
num_in_groups = len(in_groups)
if out_groups == 'all':
num_out_groups = self.num_groups
else:
num_out_groups = len(out_groups)
# Reshape tally data array with separate axes for domain and energy
# Accommodate the polar and azimuthal bins if needed
num_subdomains = int(xs.shape[0] / (num_angle_bins * num_in_groups *
num_out_groups * self.num_polar *
self.num_azimuthal))
if self.num_polar > 1 or self.num_azimuthal > 1:
new_shape = (self.num_polar, self.num_azimuthal,
num_subdomains, num_in_groups, num_out_groups,
num_angle_bins)
new_shape += xs.shape[1:]
xs = np.reshape(xs, new_shape)
# Transpose the scattering matrix if requested by user
if row_column == 'outin':
xs = np.swapaxes(xs, 3, 4)
# Reverse data if user requested increasing energy groups since
# tally data is stored in order of increasing energies
if order_groups == 'increasing':
xs = xs[:, :, :, ::-1, ::-1, ...]
else:
new_shape = (num_subdomains, num_in_groups, num_out_groups,
num_angle_bins)
new_shape += xs.shape[1:]
xs = np.reshape(xs, new_shape)
# Transpose the scattering matrix if requested by user
if row_column == 'outin':
xs = np.swapaxes(xs, 1, 2)
# Reverse data if user requested increasing energy groups since
# tally data is stored in order of increasing energies
if order_groups == 'increasing':
xs = xs[:, ::-1, ::-1, ...]
if squeeze:
# We want to squeeze out everything but the polar/azimuthal angles,
# in_groups, out_groups and, if needed, the angular moment/histogram
# dimension. These must not be squeezed so that 1-group, 1-angle
# problems have the correct shape.
xs = self._squeeze_xs(xs)
return xs
def get_pandas_dataframe(self, groups='all', nuclides='all',
xs_type='macro', paths=False):
"""Build a Pandas DataFrame for the MGXS data.
This method leverages :meth:`openmc.Tally.get_pandas_dataframe`, but
renames the columns with terminology appropriate for cross section data.
Parameters
----------
groups : Iterable of Integral or 'all'
Energy groups of interest. Defaults to 'all'.
nuclides : Iterable of str or 'all' or 'sum'
The nuclides of the cross-sections to include in the dataframe. This
may be a list of nuclide name strings (e.g., ['U235', 'U238']).
The special string 'all' will include the cross sections for all
nuclides in the spatial domain. The special string 'sum' will
include the cross sections summed over all nuclides. Defaults to
'all'.
xs_type : {'macro', 'micro'}
Return macro or micro cross section in units of cm^-1 or barns.
Defaults to 'macro'.
paths : bool, optional
Construct columns for distribcell tally filters (default is False).
The geometric information in the Summary object is embedded into
a Multi-index column with a geometric "path" to each distribcell
instance.
Returns
-------
pandas.DataFrame
A Pandas DataFrame for the cross section data.
Raises
------
ValueError
When this method is called before the multi-group cross section is
computed from tally data.
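Examples
--------
A sketch of building a microscopic dataframe for two nuclides (assumes
tally data has been loaded from a statepoint)::

    df = scatter_matrix.get_pandas_dataframe(nuclides=['U235', 'U238'],
                                             xs_type='micro')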
"""
# Build the dataframe using the parent class method
df = super().get_pandas_dataframe(groups, nuclides, xs_type,
paths=paths)
# If the matrix is P0, remove the legendre column
if self.scatter_format == SCATTER_LEGENDRE and self.legendre_order == 0:
df = df.drop(axis=1, labels=['legendre'])
return df
def print_xs(self, subdomains='all', nuclides='all',
xs_type='macro', moment=0):
"""Prints a string representation for the multi-group cross section.
Parameters
----------
subdomains : Iterable of Integral or 'all'
The subdomain IDs of the cross sections to include in the report.
Defaults to 'all'.
nuclides : Iterable of str or 'all' or 'sum'
The nuclides of the cross-sections to include in the report. This
may be a list of nuclide name strings (e.g., ['U235', 'U238']).
The special string 'all' will report the cross sections for all
nuclides in the spatial domain. The special string 'sum' will
report the cross sections summed over all nuclides. Defaults to
'all'.
xs_type : {'macro', 'micro'}
Return the macro or micro cross section in units of cm^-1 or barns.
Defaults to 'macro'.
moment : int
The scattering moment to print (default is 0)
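Examples
--------
For instance, to report the macroscopic P1 moments summed over all
nuclides (a sketch; assumes tally data has been loaded)::

    scatter_matrix.print_xs(nuclides='sum', moment=1)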
"""
# Construct a collection of the subdomains to report
if not isinstance(subdomains, str):
cv.check_iterable_type('subdomains', subdomains, Integral)
elif self.domain_type == 'distribcell':
subdomains = np.arange(self.num_subdomains, dtype=int)
elif self.domain_type == 'mesh':
subdomains = list(self.domain.indices)
else:
subdomains = [self.domain.id]
# Construct a collection of the nuclides to report
if self.by_nuclide:
if nuclides == 'all':
nuclides = self.get_nuclides()
if nuclides == 'sum':
nuclides = ['sum']
else:
cv.check_iterable_type('nuclides', nuclides, str)
else:
nuclides = ['sum']
cv.check_value('xs_type', xs_type, ['macro', 'micro'])
if self.correction != 'P0' and self.scatter_format == SCATTER_LEGENDRE:
rxn_type = '{0} (P{1})'.format(self.rxn_type, moment)
else:
rxn_type = self.rxn_type
# Build header for string with type and domain info
string = 'Multi-Group XS\n'
string += '{0: <16}=\t{1}\n'.format('\tReaction Type', rxn_type)
string += '{0: <16}=\t{1}\n'.format('\tDomain Type', self.domain_type)
string += '{0: <16}=\t{1}\n'.format('\tDomain ID', self.domain.id)
# Generate the header for an individual XS
xs_header = '\tCross Sections [{0}]:'.format(self.get_units(xs_type))
# If cross section data has not been computed, only print string header
if self.tallies is None:
print(string)
return
string += '{0: <16}\n'.format('\tEnergy Groups:')
template = '{0: <12}Group {1} [{2: <10} - {3: <10}eV]\n'
# Loop over energy group ranges
for group in range(1, self.num_groups + 1):
bounds = self.energy_groups.get_group_bounds(group)
string += template.format('', group, bounds[0], bounds[1])
# Set polar and azimuthal bins if necessary
if self.num_polar > 1 or self.num_azimuthal > 1:
pol_bins = np.linspace(0., np.pi, num=self.num_polar + 1,
endpoint=True)
azi_bins = np.linspace(-np.pi, np.pi, num=self.num_azimuthal + 1,
endpoint=True)
# Loop over all subdomains
for subdomain in subdomains:
if self.domain_type == 'distribcell' or self.domain_type == 'mesh':
string += '{0: <16}=\t{1}\n'.format('\tSubdomain', subdomain)
# Loop over all Nuclides
for nuclide in nuclides:
# Build header for nuclide type
if nuclide != 'sum':
string += '{0: <16}=\t{1}\n'.format('\tNuclide', nuclide)
# Build header for cross section type
string += '{0: <16}\n'.format(xs_header)
average_xs = self.get_xs(nuclides=[nuclide],
subdomains=[subdomain],
xs_type=xs_type, value='mean',
moment=moment)
rel_err_xs = self.get_xs(nuclides=[nuclide],
subdomains=[subdomain],
xs_type=xs_type, value='rel_err',
moment=moment)
rel_err_xs = rel_err_xs * 100.
# Create a function for printing group and histogram data
def print_groups_and_histogram(avg_xs, err_xs, num_groups,
num_histogram_bins):
template = '{0: <12}Group {1} -> Group {2}:\t\t'
to_print = ""
# Loop over incoming/outgoing energy group ranges
for in_group in range(1, num_groups + 1):
for out_group in range(1, num_groups + 1):
to_print += template.format('', in_group,
out_group)
if num_histogram_bins > 0:
for i in range(num_histogram_bins):
to_print += \
'\n{0: <16}Histogram Bin {1}:{2: <6}'.format(
'', i + 1, '')
to_print += '{0:.2e} +/- {1:.2e}%'.format(
avg_xs[in_group - 1, out_group - 1, i],
err_xs[in_group - 1, out_group - 1, i])
to_print += '\n'
else:
to_print += '{0:.2e} +/- {1:.2e}%'.format(
avg_xs[in_group - 1, out_group - 1],
err_xs[in_group - 1, out_group - 1])
to_print += '\n'
to_print += '\n'
return to_print
# Set the number of histogram bins
if self.scatter_format == SCATTER_HISTOGRAM:
num_mu_bins = self.histogram_bins
else:
num_mu_bins = 0
if self.num_polar > 1 or self.num_azimuthal > 1:
# Loop over polar, azi, and in/out energy group ranges
for pol in range(len(pol_bins) - 1):
pol_low, pol_high = pol_bins[pol: pol + 2]
for azi in range(len(azi_bins) - 1):
azi_low, azi_high = azi_bins[azi: azi + 2]
string += \
'\t\tPolar Angle: [{0:5f} - {1:5f}]'.format(
pol_low, pol_high) + \
'\tAzimuthal Angle: [{0:5f} - {1:5f}]'.format(
azi_low, azi_high) + '\n'
string += print_groups_and_histogram(
average_xs[pol, azi, ...],
rel_err_xs[pol, azi, ...], self.num_groups,
num_mu_bins)
string += '\n'
else:
string += print_groups_and_histogram(
average_xs, rel_err_xs, self.num_groups, num_mu_bins)
string += '\n'
string += '\n'
string += '\n'
print(string)
class MultiplicityMatrixXS(MatrixMGXS):
r"""The scattering multiplicity matrix.
This class can be used for both OpenMC input generation and tally data
post-processing to compute spatially-homogenized and energy-integrated
multi-group cross sections for multi-group neutronics calculations. At a
minimum, one needs to set the :attr:`MultiplicityMatrixXS.energy_groups` and
:attr:`MultiplicityMatrixXS.domain` properties. Tallies for the flux and
appropriate reaction rates over the specified domain are generated
automatically via the :attr:`MultiplicityMatrixXS.tallies` property, which
can then be appended to a :class:`openmc.Tallies` instance.
For post-processing, the :meth:`MGXS.load_from_statepoint` will pull in the
necessary data to compute multi-group cross sections from a
:class:`openmc.StatePoint` instance. The derived multi-group cross section
can then be obtained from the :attr:`MultiplicityMatrixXS.xs_tally`
property.
For a spatial domain :math:`V`, incoming energy group
:math:`[E_{g'},E_{g'-1}]`, and outgoing energy group :math:`[E_g,E_{g-1}]`,
the multiplicity is calculated as:
.. math::
\begin{aligned}
\langle \upsilon \sigma_{s,g'\rightarrow g} \phi \rangle &= \int_{r \in
V} dr \int_{4\pi} d\Omega' \int_{E_{g'}}^{E_{g'-1}} dE' \int_{4\pi}
d\Omega \int_{E_g}^{E_{g-1}} dE \; \sum_i \upsilon_i \sigma_i (r, E' \rightarrow
E, \Omega' \cdot \Omega) \psi(r, E', \Omega') \\
\langle \sigma_{s,g'\rightarrow g} \phi \rangle &= \int_{r \in
V} dr \int_{4\pi} d\Omega' \int_{E_{g'}}^{E_{g'-1}} dE' \int_{4\pi}
d\Omega \int_{E_g}^{E_{g-1}} dE \; \sum_i \sigma_i (r, E' \rightarrow
E, \Omega' \cdot \Omega) \psi(r, E', \Omega') \\
\upsilon_{g'\rightarrow g} &= \frac{\langle \upsilon
\sigma_{s,g'\rightarrow g} \phi \rangle}{\langle \sigma_{s,g'\rightarrow g}
\phi \rangle}
\end{aligned}
where :math:`\upsilon_i` is the multiplicity for the :math:`i`-th reaction.
Parameters
----------
domain : openmc.Material or openmc.Cell or openmc.Universe or openmc.RegularMesh
The domain for spatial homogenization
domain_type : {'material', 'cell', 'distribcell', 'universe', 'mesh'}
The domain type for spatial homogenization
groups : openmc.mgxs.EnergyGroups
The energy group structure for energy condensation
by_nuclide : bool
If true, computes cross sections for each nuclide in domain
name : str, optional
Name of the multi-group cross section. Used as a label to identify
tallies in OpenMC 'tallies.xml' file.
num_polar : Integral, optional
Number of equi-width polar angle bins for angle discretization;
defaults to one bin
num_azimuthal : Integral, optional
Number of equi-width azimuthal angle bins for angle discretization;
defaults to one bin
Attributes
----------
name : str, optional
Name of the multi-group cross section
rxn_type : str
Reaction type (e.g., 'total', 'nu-fission', etc.)
by_nuclide : bool
If true, computes cross sections for each nuclide in domain
domain : openmc.Material or openmc.Cell or openmc.Universe or openmc.RegularMesh
Domain for spatial homogenization
domain_type : {'material', 'cell', 'distribcell', 'universe', 'mesh'}
Domain type for spatial homogenization
energy_groups : openmc.mgxs.EnergyGroups
Energy group structure for energy condensation
num_polar : Integral
Number of equi-width polar angle bins for angle discretization
num_azimuthal : Integral
Number of equi-width azimuthal angle bins for angle discretization
tally_trigger : openmc.Trigger
An (optional) tally precision trigger given to each tally used to
compute the cross section
scores : list of str
The scores in each tally used to compute the multi-group cross section
filters : list of openmc.Filter
The filters in each tally used to compute the multi-group cross section
tally_keys : list of str
The keys into the tallies dictionary for each tally used to compute
the multi-group cross section
estimator : 'analog'
The tally estimator used to compute the multi-group cross section
tallies : collections.OrderedDict
OpenMC tallies needed to compute the multi-group cross section. The keys
are strings listed in the :attr:`MultiplicityMatrixXS.tally_keys`
property and values are instances of :class:`openmc.Tally`.
rxn_rate_tally : openmc.Tally
Derived tally for the reaction rate tally used in the numerator to
compute the multi-group cross section. This attribute is None
unless the multi-group cross section has been computed.
xs_tally : openmc.Tally
Derived tally for the multi-group cross section. This attribute
is None unless the multi-group cross section has been computed.
num_subdomains : int
The number of subdomains is unity for 'material', 'cell' and 'universe'
domain types. This is equal to the number of cell instances
for 'distribcell' domain types (it is equal to unity prior to loading
tally data from a statepoint file).
num_nuclides : int
The number of nuclides for which the multi-group cross section is
being tracked. This is unity if the by_nuclide attribute is False.
nuclides : Iterable of str or 'sum'
The optional user-specified nuclides for which to compute cross
sections (e.g., 'U238', 'O16'). If by_nuclide is True but nuclides
are not specified by the user, all nuclides in the spatial domain
are included. This attribute is 'sum' if by_nuclide is false.
sparse : bool
Whether or not the MGXS' tallies use SciPy's LIL sparse matrix format
for compressed data storage
loaded_sp : bool
Whether or not a statepoint file has been loaded with tally data
derived : bool
Whether or not the MGXS is merged from one or more other MGXS
hdf5_key : str
The key used to index multi-group cross sections in an HDF5 data store
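Examples
--------
A brief usage sketch; ``fuel_cell`` is a placeholder for an
:class:`openmc.Cell` defined elsewhere::

    groups = openmc.mgxs.EnergyGroups([0.0, 0.625, 20.0e6])
    multiplicity = openmc.mgxs.MultiplicityMatrixXS(
        domain=fuel_cell, domain_type='cell', groups=groups)
    tallies = openmc.Tallies()
    tallies += multiplicity.tallies.values()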
"""
# Store whether or not the number density should be removed for microscopic
# values of this data; since a multiplicity matrix should reflect the
# multiplication relative to 1, this class will not divide by density
# for microscopic data
_divide_by_density = False
def __init__(self, domain=None, domain_type=None, groups=None,
by_nuclide=False, name='', num_polar=1, num_azimuthal=1):
super().__init__(domain, domain_type, groups, by_nuclide, name,
num_polar, num_azimuthal)
self._rxn_type = 'multiplicity matrix'
self._estimator = 'analog'
self._valid_estimators = ['analog']
@property
def scores(self):
scores = ['nu-scatter', 'scatter']
return scores
@property
def filters(self):
# Create the non-domain specific Filters for the Tallies
group_edges = self.energy_groups.group_edges
energy = openmc.EnergyFilter(group_edges)
energyout = openmc.EnergyoutFilter(group_edges)
filters = [[energy, energyout], [energy, energyout]]
return self._add_angle_filters(filters)
@property
def rxn_rate_tally(self):
if self._rxn_rate_tally is None:
self._rxn_rate_tally = self.tallies['nu-scatter']
self._rxn_rate_tally.sparse = self.sparse
return self._rxn_rate_tally
@property
def xs_tally(self):
if self._xs_tally is None:
scatter = self.tallies['scatter']
# Compute the multiplicity
self._xs_tally = self.rxn_rate_tally / scatter
super()._compute_xs()
return self._xs_tally
class ScatterProbabilityMatrix(MatrixMGXS):
r"""The group-to-group scattering probability matrix.
This class can be used for both OpenMC input generation and tally data
post-processing to compute spatially-homogenized and energy-integrated
multi-group cross sections for multi-group neutronics calculations. At a
minimum, one needs to set the :attr:`ScatterProbabilityMatrix.energy_groups`
and :attr:`ScatterProbabilityMatrix.domain` properties. Tallies for the
appropriate reaction rates over the specified domain are generated
automatically via the :attr:`ScatterProbabilityMatrix.tallies` property,
which can then be appended to a :class:`openmc.Tallies` instance.
For post-processing, the :meth:`MGXS.load_from_statepoint` will pull in the
necessary data to compute multi-group cross sections from a
:class:`openmc.StatePoint` instance. The derived multi-group cross section
can then be obtained from the :attr:`ScatterProbabilityMatrix.xs_tally`
property.
For a spatial domain :math:`V`, incoming energy group
:math:`[E_{g'},E_{g'-1}]`, and outgoing energy group :math:`[E_g,E_{g-1}]`,
the group-to-group scattering probabilities are calculated as:
.. math::
\begin{aligned}
\langle \sigma_{s,g'\rightarrow g} \phi \rangle &= \int_{r \in V} dr
\int_{4\pi} d\Omega' \int_{E_{g'}}^{E_{g'-1}} dE' \int_{4\pi} d\Omega
\int_{E_g}^{E_{g-1}} dE \; \sigma_{s} (r, E' \rightarrow E, \Omega'
\cdot \Omega) \psi(r, E', \Omega')\\
\langle \sigma_{s,g'} \phi \rangle &= \int_{r \in V} dr
\int_{4\pi} d\Omega' \int_{E_{g'}}^{E_{g'-1}} dE' \int_{4\pi} d\Omega
\int_{0}^{\infty} dE \; \sigma_s (r, E'
\rightarrow E, \Omega' \cdot \Omega) \psi(r, E', \Omega')\\
P_{s,g'\rightarrow g} &= \frac{\langle
\sigma_{s,g'\rightarrow g} \phi \rangle}{\langle
\sigma_{s,g'} \phi \rangle}
\end{aligned}
Parameters
----------
domain : openmc.Material or openmc.Cell or openmc.Universe or openmc.RegularMesh
The domain for spatial homogenization
domain_type : {'material', 'cell', 'distribcell', 'universe', 'mesh'}
The domain type for spatial homogenization
groups : openmc.mgxs.EnergyGroups
The energy group structure for energy condensation
by_nuclide : bool
If true, computes cross sections for each nuclide in domain
name : str, optional
Name of the multi-group cross section. Used as a label to identify
tallies in OpenMC 'tallies.xml' file.
num_polar : Integral, optional
Number of equi-width polar angle bins for angle discretization;
defaults to one bin
num_azimuthal : Integral, optional
Number of equi-width azimuthal angle bins for angle discretization;
defaults to one bin
Attributes
----------
name : str, optional
Name of the multi-group cross section
rxn_type : str
Reaction type (e.g., 'total', 'nu-fission', etc.)
by_nuclide : bool
If true, computes cross sections for each nuclide in domain
domain : openmc.Material or openmc.Cell or openmc.Universe or openmc.RegularMesh
Domain for spatial homogenization
domain_type : {'material', 'cell', 'distribcell', 'universe', 'mesh'}
Domain type for spatial homogenization
energy_groups : openmc.mgxs.EnergyGroups
Energy group structure for energy condensation
num_polar : Integral
Number of equi-width polar angle bins for angle discretization
num_azimuthal : Integral
Number of equi-width azimuthal angle bins for angle discretization
tally_trigger : openmc.Trigger
An (optional) tally precision trigger given to each tally used to
compute the cross section
scores : list of str
The scores in each tally used to compute the multi-group cross section
filters : list of openmc.Filter
The filters in each tally used to compute the multi-group cross section
tally_keys : list of str
The keys into the tallies dictionary for each tally used to compute
the multi-group cross section
estimator : 'analog'
The tally estimator used to compute the multi-group cross section
tallies : collections.OrderedDict
OpenMC tallies needed to compute the multi-group cross section. The keys
are strings listed in the :attr:`ScatterProbabilityMatrix.tally_keys`
property and values are instances of :class:`openmc.Tally`.
rxn_rate_tally : openmc.Tally
Derived tally for the reaction rate tally used in the numerator to
compute the multi-group cross section. This attribute is None
unless the multi-group cross section has been computed.
xs_tally : openmc.Tally
Derived tally for the multi-group cross section. This attribute
is None unless the multi-group cross section has been computed.
num_subdomains : int
The number of subdomains is unity for 'material', 'cell' and 'universe'
domain types. This is equal to the number of cell instances
for 'distribcell' domain types (it is equal to unity prior to loading
tally data from a statepoint file).
num_nuclides : int
The number of nuclides for which the multi-group cross section is
being tracked. This is unity if the by_nuclide attribute is False.
nuclides : Iterable of str or 'sum'
The optional user-specified nuclides for which to compute cross
sections (e.g., 'U238', 'O16'). If by_nuclide is True but nuclides
are not specified by the user, all nuclides in the spatial domain
are included. This attribute is 'sum' if by_nuclide is false.
sparse : bool
Whether or not the MGXS' tallies use SciPy's LIL sparse matrix format
for compressed data storage
loaded_sp : bool
Whether or not a statepoint file has been loaded with tally data
derived : bool
Whether or not the MGXS is merged from one or more other MGXS
hdf5_key : str
The key used to index multi-group cross sections in an HDF5 data store
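Examples
--------
A brief usage sketch; ``fuel_cell`` is a placeholder for an
:class:`openmc.Cell` defined elsewhere::

    groups = openmc.mgxs.EnergyGroups([0.0, 0.625, 20.0e6])
    prob_matrix = openmc.mgxs.ScatterProbabilityMatrix(
        domain=fuel_cell, domain_type='cell', groups=groups)
    tallies = openmc.Tallies()
    tallies += prob_matrix.tallies.values()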
"""
# Store whether or not the number density should be removed for microscopic
# values of this data; since this probability matrix is always normalized
# to 1.0, this density division is not necessary
_divide_by_density = False
def __init__(self, domain=None, domain_type=None, groups=None,
by_nuclide=False, name='', num_polar=1, num_azimuthal=1):
super().__init__(domain, domain_type, groups, by_nuclide,
name, num_polar, num_azimuthal)
self._rxn_type = 'scatter'
self._hdf5_key = 'scatter probability matrix'
self._estimator = 'analog'
self._valid_estimators = ['analog']
@property
def scores(self):
return [self.rxn_type]
@property
def filters(self):
# Create the non-domain specific Filters for the Tallies
group_edges = self.energy_groups.group_edges
energy = openmc.EnergyFilter(group_edges)
energyout = openmc.EnergyoutFilter(group_edges)
filters = [[energy, energyout]]
return self._add_angle_filters(filters)
@property
def rxn_rate_tally(self):
if self._rxn_rate_tally is None:
self._rxn_rate_tally = self.tallies[self.rxn_type]
self._rxn_rate_tally.sparse = self.sparse
return self._rxn_rate_tally
@property
def xs_tally(self):
if self._xs_tally is None:
norm = self.rxn_rate_tally.get_slice(scores=[self.rxn_type])
norm = norm.summation(
filter_type=openmc.EnergyoutFilter, remove_filter=True)
# Compute the group-to-group probabilities
self._xs_tally = self.tallies[self.rxn_type] / norm
super()._compute_xs()
return self._xs_tally
class NuFissionMatrixXS(MatrixMGXS):
r"""A fission production matrix multi-group cross section.
This class can be used for both OpenMC input generation and tally data
post-processing to compute spatially-homogenized and energy-integrated
multi-group cross sections for multi-group neutronics calculations. At a
minimum, one needs to set the :attr:`NuFissionMatrixXS.energy_groups` and
:attr:`NuFissionMatrixXS.domain` properties. Tallies for the flux and
appropriate reaction rates over the specified domain are generated
automatically via the :attr:`NuFissionMatrixXS.tallies` property, which can
then be appended to a :class:`openmc.Tallies` instance.
For post-processing, the :meth:`MGXS.load_from_statepoint` will pull in the
necessary data to compute multi-group cross sections from a
:class:`openmc.StatePoint` instance. The derived multi-group cross section
can then be obtained from the :attr:`NuFissionMatrixXS.xs_tally` property.
For a spatial domain :math:`V`, incoming energy group
:math:`[E_{g'},E_{g'-1}]`, and outgoing energy group :math:`[E_g,E_{g-1}]`,
the fission production is calculated as:
.. math::
\begin{aligned}
\langle \nu\sigma_{f,g'\rightarrow g} \phi \rangle &= \int_{r \in V} dr
\int_{4\pi} d\Omega' \int_{E_{g'}}^{E_{g'-1}} dE' \int_{E_g}^{E_{g-1}} dE
\; \chi(E) \nu\sigma_f (r, E') \psi(r, E', \Omega')\\
\langle \phi \rangle &= \int_{r \in V} dr \int_{4\pi} d\Omega
\int_{E_g}^{E_{g-1}} dE \; \psi (r, E, \Omega) \\
\nu\sigma_{f,g'\rightarrow g} &= \frac{\langle \nu\sigma_{f,g'\rightarrow
g} \phi \rangle}{\langle \phi \rangle}
\end{aligned}
Parameters
----------
domain : openmc.Material or openmc.Cell or openmc.Universe or openmc.RegularMesh
The domain for spatial homogenization
domain_type : {'material', 'cell', 'distribcell', 'universe', 'mesh'}
The domain type for spatial homogenization
groups : openmc.mgxs.EnergyGroups
The energy group structure for energy condensation
by_nuclide : bool
If true, computes cross sections for each nuclide in domain
name : str, optional
Name of the multi-group cross section. Used as a label to identify
tallies in OpenMC 'tallies.xml' file.
num_polar : Integral, optional
Number of equi-width polar angle bins for angle discretization;
defaults to one bin
num_azimuthal : Integral, optional
Number of equi-width azimuthal angle bins for angle discretization;
defaults to one bin
prompt : bool
If true, computes cross sections that include only prompt neutrons;
defaults to False, which includes both prompt and delayed neutrons
Attributes
----------
name : str, optional
Name of the multi-group cross section
rxn_type : str
Reaction type (e.g., 'total', 'nu-fission', etc.)
prompt : bool
If true, computes cross sections that include only prompt neutrons
by_nuclide : bool
If true, computes cross sections for each nuclide in domain
domain : openmc.Material or openmc.Cell or openmc.Universe or openmc.RegularMesh
Domain for spatial homogenization
domain_type : {'material', 'cell', 'distribcell', 'universe', 'mesh'}
Domain type for spatial homogenization
energy_groups : openmc.mgxs.EnergyGroups
Energy group structure for energy condensation
num_polar : Integral
Number of equi-width polar angle bins for angle discretization
num_azimuthal : Integral
Number of equi-width azimuthal angle bins for angle discretization
tally_trigger : openmc.Trigger
An (optional) tally precision trigger given to each tally used to
compute the cross section
scores : list of str
The scores in each tally used to compute the multi-group cross section
filters : list of openmc.Filter
The filters in each tally used to compute the multi-group cross section
tally_keys : list of str
The keys into the tallies dictionary for each tally used to compute
the multi-group cross section
estimator : 'analog'
The tally estimator used to compute the multi-group cross section
tallies : collections.OrderedDict
OpenMC tallies needed to compute the multi-group cross section. The keys
are strings listed in the :attr:`NuFissionMatrixXS.tally_keys`
property and values are instances of :class:`openmc.Tally`.
rxn_rate_tally : openmc.Tally
Derived tally for the reaction rate tally used in the numerator to
compute the multi-group cross section. This attribute is None
unless the multi-group cross section has been computed.
xs_tally : openmc.Tally
Derived tally for the multi-group cross section. This attribute
is None unless the multi-group cross section has been computed.
num_subdomains : int
The number of subdomains is unity for 'material', 'cell' and 'universe'
domain types. This is equal to the number of cell instances
for 'distribcell' domain types (it is equal to unity prior to loading
tally data from a statepoint file).
num_nuclides : int
The number of nuclides for which the multi-group cross section is
being tracked. This is unity if the by_nuclide attribute is False.
nuclides : Iterable of str or 'sum'
The optional user-specified nuclides for which to compute cross
sections (e.g., 'U238', 'O16'). If by_nuclide is True but nuclides
are not specified by the user, all nuclides in the spatial domain
are included. This attribute is 'sum' if by_nuclide is false.
sparse : bool
Whether or not the MGXS' tallies use SciPy's LIL sparse matrix format
for compressed data storage
loaded_sp : bool
Whether or not a statepoint file has been loaded with tally data
derived : bool
Whether or not the MGXS is merged from one or more other MGXS
hdf5_key : str
The key used to index multi-group cross sections in an HDF5 data store
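Examples
--------
A brief usage sketch; ``fuel_cell`` is a placeholder for an
:class:`openmc.Cell` defined elsewhere. Setting ``prompt=True``
restricts the matrix to prompt neutron production::

    groups = openmc.mgxs.EnergyGroups([0.0, 0.625, 20.0e6])
    nu_fission_matrix = openmc.mgxs.NuFissionMatrixXS(
        domain=fuel_cell, domain_type='cell', groups=groups, prompt=True)
    tallies = openmc.Tallies()
    tallies += nu_fission_matrix.tallies.values()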
"""
def __init__(self, domain=None, domain_type=None, groups=None,
by_nuclide=False, name='', num_polar=1,
num_azimuthal=1, prompt=False):
super().__init__(domain, domain_type, groups, by_nuclide, name,
num_polar, num_azimuthal)
if not prompt:
self._rxn_type = 'nu-fission'
self._hdf5_key = 'nu-fission matrix'
else:
self._rxn_type = 'prompt-nu-fission'
self._hdf5_key = 'prompt-nu-fission matrix'
self._estimator = 'analog'
self._valid_estimators = ['analog']
self.prompt = prompt
@property
def prompt(self):
return self._prompt
@prompt.setter
def prompt(self, prompt):
cv.check_type('prompt', prompt, bool)
self._prompt = prompt
def __deepcopy__(self, memo):
clone = super().__deepcopy__(memo)
clone._prompt = self.prompt
return clone
class Chi(MGXS):
r"""The fission spectrum.
This class can be used for both OpenMC input generation and tally data
post-processing to compute spatially-homogenized and energy-integrated
multi-group cross sections for multi-group neutronics calculations. At a
minimum, one needs to set the :attr:`Chi.energy_groups` and
:attr:`Chi.domain` properties. Tallies for the flux and appropriate reaction
rates over the specified domain are generated automatically via the
:attr:`Chi.tallies` property, which can then be appended to a
:class:`openmc.Tallies` instance.
For post-processing, the :meth:`MGXS.load_from_statepoint` will pull in the
necessary data to compute multi-group cross sections from a
:class:`openmc.StatePoint` instance. The derived multi-group cross section
can then be obtained from the :attr:`Chi.xs_tally` property.
For a spatial domain :math:`V` and energy group :math:`[E_g,E_{g-1}]`, the
fission spectrum is calculated as:
.. math::
\begin{aligned}
\langle \nu\sigma_{f,g' \rightarrow g} \phi \rangle &= \int_{r \in V} dr
\int_{4\pi} d\Omega' \int_0^\infty dE' \int_{E_g}^{E_{g-1}} dE \; \chi(E)
\nu\sigma_f (r, E') \psi(r, E', \Omega')\\
\langle \nu\sigma_f \phi \rangle &= \int_{r \in V} dr \int_{4\pi}
d\Omega' \int_0^\infty dE' \int_0^\infty dE \; \chi(E) \nu\sigma_f (r,
E') \psi(r, E', \Omega') \\
\chi_g &= \frac{\langle \nu\sigma_{f,g' \rightarrow g} \phi \rangle}
{\langle \nu\sigma_f \phi \rangle}
\end{aligned}
This class can also be used to gather a prompt-chi (which only includes the
outgoing energy spectrum of prompt neutrons). This is accomplished by
setting the :attr:`Chi.prompt` attribute to `True`.
Parameters
----------
domain : openmc.Material or openmc.Cell or openmc.Universe or openmc.RegularMesh
The domain for spatial homogenization
domain_type : {'material', 'cell', 'distribcell', 'universe', 'mesh'}
The domain type for spatial homogenization
groups : openmc.mgxs.EnergyGroups
The energy group structure for energy condensation
prompt : bool
        If true, computes cross sections that include only prompt neutrons;
        defaults to False, which includes both prompt and delayed neutrons
by_nuclide : bool
If true, computes cross sections for each nuclide in domain
name : str, optional
Name of the multi-group cross section. Used as a label to identify
tallies in OpenMC 'tallies.xml' file.
num_polar : Integral, optional
Number of equi-width polar angle bins for angle discretization;
defaults to one bin
num_azimuthal : Integral, optional
Number of equi-width azimuthal angle bins for angle discretization;
defaults to one bin
Attributes
----------
name : str, optional
Name of the multi-group cross section
rxn_type : str
Reaction type (e.g., 'total', 'nu-fission', etc.)
prompt : bool
        If true, computes cross sections that include only prompt neutrons
by_nuclide : bool
If true, computes cross sections for each nuclide in domain
domain : openmc.Material or openmc.Cell or openmc.Universe or openmc.RegularMesh
Domain for spatial homogenization
domain_type : {'material', 'cell', 'distribcell', 'universe', 'mesh'}
Domain type for spatial homogenization
energy_groups : openmc.mgxs.EnergyGroups
Energy group structure for energy condensation
num_polar : Integral
Number of equi-width polar angle bins for angle discretization
num_azimuthal : Integral
Number of equi-width azimuthal angle bins for angle discretization
tally_trigger : openmc.Trigger
An (optional) tally precision trigger given to each tally used to
compute the cross section
scores : list of str
The scores in each tally used to compute the multi-group cross section
filters : list of openmc.Filter
The filters in each tally used to compute the multi-group cross section
tally_keys : list of str
The keys into the tallies dictionary for each tally used to compute
the multi-group cross section
estimator : 'analog'
The tally estimator used to compute the multi-group cross section
tallies : collections.OrderedDict
OpenMC tallies needed to compute the multi-group cross section. The keys
are strings listed in the :attr:`Chi.tally_keys` property and values are
instances of :class:`openmc.Tally`.
rxn_rate_tally : openmc.Tally
Derived tally for the reaction rate tally used in the numerator to
compute the multi-group cross section. This attribute is None
unless the multi-group cross section has been computed.
xs_tally : openmc.Tally
Derived tally for the multi-group cross section. This attribute
is None unless the multi-group cross section has been computed.
num_subdomains : int
The number of subdomains is unity for 'material', 'cell' and 'universe'
domain types. This is equal to the number of cell instances
for 'distribcell' domain types (it is equal to unity prior to loading
tally data from a statepoint file).
num_nuclides : int
The number of nuclides for which the multi-group cross section is
being tracked. This is unity if the by_nuclide attribute is False.
nuclides : Iterable of str or 'sum'
The optional user-specified nuclides for which to compute cross
sections (e.g., 'U238', 'O16'). If by_nuclide is True but nuclides
are not specified by the user, all nuclides in the spatial domain
are included. This attribute is 'sum' if by_nuclide is false.
sparse : bool
Whether or not the MGXS' tallies use SciPy's LIL sparse matrix format
for compressed data storage
loaded_sp : bool
Whether or not a statepoint file has been loaded with tally data
derived : bool
Whether or not the MGXS is merged from one or more other MGXS
hdf5_key : str
The key used to index multi-group cross sections in an HDF5 data store
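    Examples
    --------
    A minimal, illustrative sketch of the intended workflow; ``fuel`` and the
    statepoint filename below are placeholders, not objects defined by this
    module:
    >>> groups = openmc.mgxs.EnergyGroups(group_edges=[0., 0.625, 20.0e6])
    >>> chi = Chi(domain=fuel, domain_type='material', groups=groups)
    >>> tallies = openmc.Tallies()
    >>> tallies += chi.tallies.values()
    >>> # ... run OpenMC, then post-process the results:
    >>> sp = openmc.StatePoint('statepoint.100.h5')
    >>> chi.load_from_statepoint(sp)
    >>> spectrum = chi.get_xs()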
"""
# Store whether or not the number density should be removed for microscopic
# values of this data; since this chi data is normalized to 1.0, the
# data should not be divided by the number density
_divide_by_density = False
def __init__(self, domain=None, domain_type=None, groups=None,
prompt=False, by_nuclide=False, name='', num_polar=1,
num_azimuthal=1):
super().__init__(domain, domain_type, groups, by_nuclide, name,
num_polar, num_azimuthal)
if not prompt:
self._rxn_type = 'chi'
else:
self._rxn_type = 'chi-prompt'
self._estimator = 'analog'
self._valid_estimators = ['analog']
self.prompt = prompt
def __deepcopy__(self, memo):
clone = super().__deepcopy__(memo)
clone._prompt = self.prompt
return clone
@property
def prompt(self):
return self._prompt
@property
def _dont_squeeze(self):
"""Create a tuple of axes which should not be removed during the get_xs
process
"""
if self.num_polar > 1 or self.num_azimuthal > 1:
return (0, 1, 3)
else:
return (1,)
@property
def scores(self):
if not self.prompt:
return ['nu-fission', 'nu-fission']
else:
return ['prompt-nu-fission', 'prompt-nu-fission']
@property
def filters(self):
# Create the non-domain specific Filters for the Tallies
group_edges = self.energy_groups.group_edges
energyout = openmc.EnergyoutFilter(group_edges)
energyin = openmc.EnergyFilter([group_edges[0], group_edges[-1]])
filters = [[energyin], [energyout]]
return self._add_angle_filters(filters)
@property
def tally_keys(self):
return ['nu-fission-in', 'nu-fission-out']
@property
def rxn_rate_tally(self):
if self._rxn_rate_tally is None:
self._rxn_rate_tally = self.tallies['nu-fission-out']
self._rxn_rate_tally.sparse = self.sparse
return self._rxn_rate_tally
@property
def xs_tally(self):
if self._xs_tally is None:
nu_fission_in = self.tallies['nu-fission-in']
# Remove coarse energy filter to keep it out of tally arithmetic
energy_filter = nu_fission_in.find_filter(openmc.EnergyFilter)
nu_fission_in.remove_filter(energy_filter)
# Compute chi
self._xs_tally = self.rxn_rate_tally / nu_fission_in
# Add the coarse energy filter back to the nu-fission tally
nu_fission_in.filters.append(energy_filter)
return self._xs_tally
@prompt.setter
def prompt(self, prompt):
cv.check_type('prompt', prompt, bool)
self._prompt = prompt
if not self.prompt:
self._rxn_type = 'nu-fission'
self._hdf5_key = 'chi'
else:
self._rxn_type = 'prompt-nu-fission'
self._hdf5_key = 'chi-prompt'
def get_homogenized_mgxs(self, other_mgxs):
"""Construct a homogenized mgxs with other MGXS objects.
Parameters
----------
other_mgxs : openmc.mgxs.MGXS or Iterable of openmc.mgxs.MGXS
The MGXS to homogenize with this one.
Returns
-------
openmc.mgxs.MGXS
A new homogenized MGXS
Raises
------
ValueError
If the other_mgxs is of a different type.
"""
return self._get_homogenized_mgxs(other_mgxs, 'nu-fission-in')
def get_slice(self, nuclides=[], groups=[]):
"""Build a sliced Chi for the specified nuclides and energy groups.
This method constructs a new MGXS to encapsulate a subset of the data
represented by this MGXS. The subset of data to include in the tally
slice is determined by the nuclides and energy groups specified in
the input parameters.
Parameters
----------
nuclides : list of str
A list of nuclide name strings
(e.g., ['U235', 'U238']; default is [])
groups : list of Integral
A list of energy group indices starting at 1 for the high energies
(e.g., [1, 2, 3]; default is [])
Returns
-------
openmc.mgxs.MGXS
A new MGXS which encapsulates the subset of data requested
for the nuclide(s) and/or energy group(s) requested in the
parameters.
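        Examples
        --------
        An illustrative call, slicing out U235 for the two highest energy
        groups:
        >>> sliced_chi = chi.get_slice(nuclides=['U235'], groups=[1, 2])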
"""
        # Temporarily remove the energy filter from nu-fission-in since its
        # coarse group structure will not work in the super MGXS.get_slice(...)
        # method
nu_fission_in = self.tallies['nu-fission-in']
energy_filter = nu_fission_in.find_filter(openmc.EnergyFilter)
nu_fission_in.remove_filter(energy_filter)
# Call super class method and null out derived tallies
slice_xs = super().get_slice(nuclides, groups)
slice_xs._rxn_rate_tally = None
slice_xs._xs_tally = None
# Slice energy groups if needed
if len(groups) != 0:
filter_bins = []
for group in groups:
group_bounds = self.energy_groups.get_group_bounds(group)
filter_bins.append(group_bounds)
filter_bins = [tuple(filter_bins)]
# Slice nu-fission-out tally along energyout filter
nu_fission_out = slice_xs.tallies['nu-fission-out']
tally_slice = nu_fission_out.get_slice(
filters=[openmc.EnergyoutFilter], filter_bins=filter_bins)
slice_xs._tallies['nu-fission-out'] = tally_slice
# Add energy filter back to nu-fission-in tallies
self.tallies['nu-fission-in'].add_filter(energy_filter)
slice_xs._tallies['nu-fission-in'].add_filter(energy_filter)
slice_xs.sparse = self.sparse
return slice_xs
def merge(self, other):
"""Merge another Chi with this one
        If results have been loaded from a statepoint, then Chi objects are
        only mergeable along either energy groups or nuclides, not both.
Parameters
----------
other : openmc.mgxs.MGXS
MGXS to merge with this one
Returns
-------
merged_mgxs : openmc.mgxs.MGXS
Merged MGXS
"""
if not self.can_merge(other):
raise ValueError('Unable to merge a Chi MGXS')
# Create deep copy of tally to return as merged tally
merged_mgxs = copy.deepcopy(self)
merged_mgxs._derived = True
merged_mgxs._rxn_rate_tally = None
merged_mgxs._xs_tally = None
# Merge energy groups
if self.energy_groups != other.energy_groups:
merged_groups = self.energy_groups.merge(other.energy_groups)
merged_mgxs.energy_groups = merged_groups
# Merge nuclides
if self.nuclides != other.nuclides:
# The nuclides must be mutually exclusive
for nuclide in self.nuclides:
if nuclide in other.nuclides:
msg = 'Unable to merge a Chi MGXS with shared nuclides'
raise ValueError(msg)
# Concatenate lists of nuclides for the merged MGXS
merged_mgxs.nuclides = self.nuclides + other.nuclides
# Merge tallies
for tally_key in self.tallies:
merged_tally = self.tallies[tally_key].merge(other.tallies[tally_key])
merged_mgxs.tallies[tally_key] = merged_tally
return merged_mgxs
def get_xs(self, groups='all', subdomains='all', nuclides='all',
xs_type='macro', order_groups='increasing',
value='mean', squeeze=True, **kwargs):
"""Returns an array of the fission spectrum.
This method constructs a 3D NumPy array for the requested
multi-group cross section data for one or more subdomains
(1st dimension), energy groups (2nd dimension), and nuclides
(3rd dimension).
Parameters
----------
groups : Iterable of Integral or 'all'
Energy groups of interest. Defaults to 'all'.
subdomains : Iterable of Integral or 'all'
Subdomain IDs of interest. Defaults to 'all'.
nuclides : Iterable of str or 'all' or 'sum'
A list of nuclide name strings (e.g., ['U235', 'U238']). The
special string 'all' will return the cross sections for all nuclides
in the spatial domain. The special string 'sum' will return the
cross section summed over all nuclides. Defaults to 'all'.
xs_type: {'macro', 'micro'}
            This parameter is not relevant for chi but is included here to
            mirror the signature of the parent MGXS.get_xs(...) method
order_groups: {'increasing', 'decreasing'}
Return the cross section indexed according to increasing or
decreasing energy groups (decreasing or increasing energies).
Defaults to 'increasing'.
value : {'mean', 'std_dev', 'rel_err'}
A string for the type of value to return. Defaults to 'mean'.
squeeze : bool
A boolean representing whether to eliminate the extra dimensions
of the multi-dimensional array to be returned. Defaults to True.
Returns
-------
numpy.ndarray
A NumPy array of the multi-group cross section indexed in the order
each group, subdomain and nuclide is listed in the parameters.
Raises
------
ValueError
When this method is called before the multi-group cross section is
computed from tally data.
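        Examples
        --------
        An illustrative call, assuming tally data has already been loaded
        from a statepoint:
        >>> spectrum = chi.get_xs(groups=[1, 2], subdomains=[1])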
"""
cv.check_value('value', value, ['mean', 'std_dev', 'rel_err'])
cv.check_value('xs_type', xs_type, ['macro', 'micro'])
# FIXME: Unable to get microscopic xs for mesh domain because the mesh
# cells do not know the nuclide densities in each mesh cell.
if self.domain_type == 'mesh' and xs_type == 'micro':
msg = 'Unable to get micro xs for mesh domain since the mesh ' \
'cells do not know the nuclide densities in each mesh cell.'
raise ValueError(msg)
filters = []
filter_bins = []
# Construct a collection of the domain filter bins
if not isinstance(subdomains, str):
cv.check_iterable_type('subdomains', subdomains, Integral,
max_depth=3)
filters.append(_DOMAIN_TO_FILTER[self.domain_type])
subdomain_bins = []
for subdomain in subdomains:
subdomain_bins.append(subdomain)
filter_bins.append(tuple(subdomain_bins))
# Construct list of energy group bounds tuples for all requested groups
if not isinstance(groups, str):
cv.check_iterable_type('groups', groups, Integral)
filters.append(openmc.EnergyoutFilter)
energy_bins = []
for group in groups:
energy_bins.append(
(self.energy_groups.get_group_bounds(group),))
filter_bins.append(tuple(energy_bins))
# If chi was computed for each nuclide in the domain
if self.by_nuclide:
# Get the sum as the fission source weighted average chi for all
# nuclides in the domain
if nuclides == 'sum' or nuclides == ['sum']:
# Retrieve the fission production tallies
nu_fission_in = self.tallies['nu-fission-in']
nu_fission_out = self.tallies['nu-fission-out']
# Sum out all nuclides
nuclides = self.get_nuclides()
nu_fission_in = nu_fission_in.summation(nuclides=nuclides)
nu_fission_out = nu_fission_out.summation(nuclides=nuclides)
# Remove coarse energy filter to keep it out of tally arithmetic
energy_filter = nu_fission_in.find_filter(openmc.EnergyFilter)
nu_fission_in.remove_filter(energy_filter)
# Compute chi and store it as the xs_tally attribute so we can
# use the generic get_xs(...) method
xs_tally = nu_fission_out / nu_fission_in
# Add the coarse energy filter back to the nu-fission tally
nu_fission_in.filters.append(energy_filter)
xs = xs_tally.get_values(filters=filters,
filter_bins=filter_bins, value=value)
# Get chi for all nuclides in the domain
elif nuclides == 'all':
nuclides = self.get_nuclides()
xs = self.xs_tally.get_values(filters=filters,
filter_bins=filter_bins,
nuclides=nuclides, value=value)
# Get chi for user-specified nuclides in the domain
else:
cv.check_iterable_type('nuclides', nuclides, str)
xs = self.xs_tally.get_values(filters=filters,
filter_bins=filter_bins,
nuclides=nuclides, value=value)
# If chi was computed as an average of nuclides in the domain
else:
xs = self.xs_tally.get_values(filters=filters,
filter_bins=filter_bins, value=value)
# Eliminate the trivial score dimension
xs = np.squeeze(xs, axis=len(xs.shape) - 1)
xs = np.nan_to_num(xs)
if groups == 'all':
num_groups = self.num_groups
else:
num_groups = len(groups)
# Reshape tally data array with separate axes for domain and energy
        # Accommodate the polar and azimuthal bins if needed
num_subdomains = int(xs.shape[0] / (num_groups * self.num_polar *
self.num_azimuthal))
if self.num_polar > 1 or self.num_azimuthal > 1:
new_shape = (self.num_polar, self.num_azimuthal, num_subdomains,
num_groups) + xs.shape[1:]
else:
new_shape = (num_subdomains, num_groups) + xs.shape[1:]
xs = np.reshape(xs, new_shape)
# Reverse data if user requested increasing energy groups since
# tally data is stored in order of increasing energies
if order_groups == 'increasing':
xs = xs[..., ::-1, :]
if squeeze:
# We want to squeeze out everything but the polar, azimuthal,
# and energy group data.
xs = self._squeeze_xs(xs)
return xs
def get_units(self, xs_type='macro'):
"""Returns the units of Chi.
        This method returns the units of Chi, which are "%" for both macro
        and micro xs types.
Parameters
----------
xs_type: {'macro', 'micro'}
Return the macro or micro cross section units.
Defaults to 'macro'.
Returns
-------
str
A string representing the units of Chi.
"""
cv.check_value('xs_type', xs_type, ['macro', 'micro'])
# Chi has the same units (%) for both macro and micro
return '%'
class InverseVelocity(MGXS):
r"""An inverse velocity multi-group cross section.
This class can be used for both OpenMC input generation and tally data
post-processing to compute spatially-homogenized and energy-integrated
multi-group neutron inverse velocities for multi-group neutronics
calculations. The units of inverse velocity are seconds per centimeter. At a
minimum, one needs to set the :attr:`InverseVelocity.energy_groups` and
:attr:`InverseVelocity.domain` properties. Tallies for the flux and
appropriate reaction rates over the specified domain are generated
automatically via the :attr:`InverseVelocity.tallies` property, which can
then be appended to a :class:`openmc.Tallies` instance.
For post-processing, the :meth:`MGXS.load_from_statepoint` will pull in the
necessary data to compute multi-group cross sections from a
:class:`openmc.StatePoint` instance. The derived multi-group cross section
can then be obtained from the :attr:`InverseVelocity.xs_tally` property.
For a spatial domain :math:`V` and energy group :math:`[E_g,E_{g-1}]`, the
neutron inverse velocities are calculated by tallying the flux-weighted
inverse velocity and the flux. The inverse velocity is then the
flux-weighted inverse velocity divided by the flux:
.. math::
\frac{\int_{r \in V} dr \int_{4\pi} d\Omega \int_{E_g}^{E_{g-1}} dE \;
\frac{\psi (r, E, \Omega)}{v (r, E)}}{\int_{r \in V} dr \int_{4\pi}
d\Omega \int_{E_g}^{E_{g-1}} dE \; \psi (r, E, \Omega)}
Parameters
----------
domain : openmc.Material or openmc.Cell or openmc.Universe or openmc.RegularMesh
The domain for spatial homogenization
domain_type : {'material', 'cell', 'distribcell', 'universe', 'mesh'}
The domain type for spatial homogenization
groups : openmc.mgxs.EnergyGroups
The energy group structure for energy condensation
by_nuclide : bool
If true, computes cross sections for each nuclide in domain
name : str, optional
Name of the multi-group cross section. Used as a label to identify
tallies in OpenMC 'tallies.xml' file.
num_polar : Integral, optional
Number of equi-width polar angle bins for angle discretization;
defaults to one bin
num_azimuthal : Integral, optional
Number of equi-width azimuthal angle bins for angle discretization;
defaults to one bin
Attributes
----------
name : str, optional
Name of the multi-group cross section
rxn_type : str
Reaction type (e.g., 'total', 'nu-fission', etc.)
by_nuclide : bool
If true, computes cross sections for each nuclide in domain
domain : openmc.Material or openmc.Cell or openmc.Universe or openmc.RegularMesh
Domain for spatial homogenization
domain_type : {'material', 'cell', 'distribcell', 'universe', 'mesh'}
Domain type for spatial homogenization
energy_groups : openmc.mgxs.EnergyGroups
Energy group structure for energy condensation
num_polar : Integral
Number of equi-width polar angle bins for angle discretization
num_azimuthal : Integral
Number of equi-width azimuthal angle bins for angle discretization
tally_trigger : openmc.Trigger
An (optional) tally precision trigger given to each tally used to
compute the cross section
scores : list of str
The scores in each tally used to compute the multi-group cross section
filters : list of openmc.Filter
The filters in each tally used to compute the multi-group cross section
tally_keys : list of str
The keys into the tallies dictionary for each tally used to compute
the multi-group cross section
estimator : {'tracklength', 'collision', 'analog'}
The tally estimator used to compute the multi-group cross section
tallies : collections.OrderedDict
OpenMC tallies needed to compute the multi-group cross section. The keys
are strings listed in the :attr:`InverseVelocity.tally_keys` property
and values are instances of :class:`openmc.Tally`.
rxn_rate_tally : openmc.Tally
Derived tally for the reaction rate tally used in the numerator to
compute the multi-group cross section. This attribute is None
unless the multi-group cross section has been computed.
xs_tally : openmc.Tally
Derived tally for the multi-group cross section. This attribute
is None unless the multi-group cross section has been computed.
num_subdomains : int
The number of subdomains is unity for 'material', 'cell' and 'universe'
domain types. This is equal to the number of cell instances
for 'distribcell' domain types (it is equal to unity prior to loading
tally data from a statepoint file) and the number of mesh cells for
'mesh' domain types.
num_nuclides : int
The number of nuclides for which the multi-group cross section is
being tracked. This is unity if the by_nuclide attribute is False.
nuclides : Iterable of str or 'sum'
The optional user-specified nuclides for which to compute cross
sections (e.g., 'U238', 'O16'). If by_nuclide is True but nuclides
are not specified by the user, all nuclides in the spatial domain
are included. This attribute is 'sum' if by_nuclide is false.
sparse : bool
Whether or not the MGXS' tallies use SciPy's LIL sparse matrix format
for compressed data storage
loaded_sp : bool
Whether or not a statepoint file has been loaded with tally data
derived : bool
Whether or not the MGXS is merged from one or more other MGXS
hdf5_key : str
The key used to index multi-group cross sections in an HDF5 data store
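    Examples
    --------
    Illustrative only; ``water`` is a placeholder material, not an object
    defined by this module:
    >>> groups = openmc.mgxs.EnergyGroups(group_edges=[0., 0.625, 20.0e6])
    >>> inv_v = InverseVelocity(domain=water, domain_type='material',
    ...                         groups=groups)
    >>> tallies = openmc.Tallies()
    >>> tallies += inv_v.tallies.values()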
"""
# Store whether or not the number density should be removed for microscopic
# values of this data; since the inverse velocity does not contain number
# density scaling, we should not remove the number density from microscopic
# values
_divide_by_density = False
def __init__(self, domain=None, domain_type=None, groups=None,
by_nuclide=False, name='', num_polar=1, num_azimuthal=1):
super().__init__(domain, domain_type, groups, by_nuclide, name,
num_polar, num_azimuthal)
self._rxn_type = 'inverse-velocity'
def get_units(self, xs_type='macro'):
"""Returns the units of InverseVelocity.
This method returns the units of an InverseVelocity based on a desired
xs_type.
Parameters
----------
xs_type: {'macro', 'micro'}
Return the macro or micro cross section units.
Defaults to 'macro'.
Returns
-------
str
A string representing the units of the InverseVelocity.
"""
if xs_type == 'macro':
return 'second/cm'
else:
raise ValueError('Unable to return the units of InverseVelocity'
' for xs_type other than "macro"')
class MeshSurfaceMGXS(MGXS):
"""An abstract multi-group cross section for some energy group structure
on the surfaces of a mesh domain.
This class can be used for both OpenMC input generation and tally data
post-processing to compute surface- and energy-integrated multi-group cross
sections for multi-group neutronics calculations.
.. note:: Users should instantiate the subclasses of this abstract class.
.. versionadded:: 0.12.1
Parameters
----------
domain : openmc.RegularMesh
The domain for spatial homogenization
domain_type : {'mesh'}
The domain type for spatial homogenization
energy_groups : openmc.mgxs.EnergyGroups
The energy group structure for energy condensation
by_nuclide : bool
Unused in MeshSurfaceMGXS
name : str, optional
Name of the multi-group cross section. Used as a label to identify
tallies in OpenMC 'tallies.xml' file.
Attributes
----------
name : str, optional
Name of the multi-group cross section
rxn_type : str
Reaction type (e.g., 'total', 'nu-fission', etc.)
by_nuclide : bool
Unused in MeshSurfaceMGXS
domain : Mesh
Domain for spatial homogenization
domain_type : {'mesh'}
Domain type for spatial homogenization
energy_groups : openmc.mgxs.EnergyGroups
Energy group structure for energy condensation
tally_trigger : openmc.Trigger
An (optional) tally precision trigger given to each tally used to
compute the cross section
scores : list of str
The scores in each tally used to compute the multi-group cross section
filters : list of openmc.Filter
The filters in each tally used to compute the multi-group cross section
tally_keys : list of str
The keys into the tallies dictionary for each tally used to compute
the multi-group cross section
estimator : {'analog'}
The tally estimator used to compute the multi-group cross section
tallies : collections.OrderedDict
OpenMC tallies needed to compute the multi-group cross section
rxn_rate_tally : openmc.Tally
Derived tally for the reaction rate tally used in the numerator to
compute the multi-group cross section. This attribute is None
unless the multi-group cross section has been computed.
xs_tally : openmc.Tally
Derived tally for the multi-group cross section. This attribute
is None unless the multi-group cross section has been computed.
num_subdomains : int
The number of subdomains is equal to the number of mesh surfaces times
two to account for both the incoming and outgoing current from the
mesh cell surfaces.
num_nuclides : int
Unused in MeshSurfaceMGXS
nuclides : Iterable of str or 'sum'
Unused in MeshSurfaceMGXS
sparse : bool
Whether or not the MGXS' tallies use SciPy's LIL sparse matrix format
for compressed data storage
loaded_sp : bool
Whether or not a statepoint file has been loaded with tally data
derived : bool
Whether or not the MGXS is merged from one or more other MGXS
hdf5_key : str
The key used to index multi-group cross sections in an HDF5 data store
"""
def __init__(self, domain=None, domain_type=None, energy_groups=None,
by_nuclide=False, name=''):
        super().__init__(domain, domain_type, energy_groups,
                         by_nuclide, name)
        self._estimator = 'analog'
self._valid_estimators = ['analog']
@property
def scores(self):
return [self.rxn_type]
@property
def domain(self):
return self._domain
@property
def domain_type(self):
return self._domain_type
@domain.setter
def domain(self, domain):
cv.check_type('domain', domain, openmc.RegularMesh)
self._domain = domain
# Assign a domain type
if self.domain_type is None:
self._domain_type = 'mesh'
@domain_type.setter
def domain_type(self, domain_type):
        cv.check_value('domain type', domain_type, ('mesh',))
self._domain_type = domain_type
@property
def filters(self):
group_edges = self.energy_groups.group_edges
energy_filter = openmc.EnergyFilter(group_edges)
mesh = _DOMAIN_TO_FILTER[self.domain_type](self.domain).mesh
meshsurface_filter = openmc.MeshSurfaceFilter(mesh)
filters = [[meshsurface_filter, energy_filter]]
return self._add_angle_filters(filters)
@property
def xs_tally(self):
if self._xs_tally is None:
if self.tallies is None:
msg = 'Unable to get xs_tally since tallies have ' \
'not been loaded from a statepoint'
raise ValueError(msg)
self._xs_tally = self.rxn_rate_tally
self._compute_xs()
return self._xs_tally
def load_from_statepoint(self, statepoint):
"""Extracts tallies in an OpenMC StatePoint with the data needed to
compute multi-group cross sections.
This method is needed to compute cross section data from tallies
in an OpenMC StatePoint object.
.. note:: The statepoint must first be linked with a :class:`openmc.Summary`
object.
Parameters
----------
statepoint : openmc.StatePoint
An OpenMC StatePoint object with tally data
Raises
------
ValueError
When this method is called with a statepoint that has not been
linked with a summary object.
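        Examples
        --------
        Illustrative only; the statepoint filename is a placeholder and
        ``mgxs`` denotes an instance of a concrete subclass:
        >>> sp = openmc.StatePoint('statepoint.100.h5')
        >>> mgxs.load_from_statepoint(sp)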
"""
cv.check_type('statepoint', statepoint, openmc.statepoint.StatePoint)
if statepoint.summary is None:
msg = 'Unable to load data from a statepoint which has not been ' \
'linked with a summary file'
raise ValueError(msg)
        filters = []
filter_bins = []
# Clear any tallies previously loaded from a statepoint
if self.loaded_sp:
self._tallies = None
self._xs_tally = None
self._rxn_rate_tally = None
self._loaded_sp = False
# Find, slice and store Tallies from StatePoint
# The tally slicing is needed if tally merging was used
for tally_type, tally in self.tallies.items():
sp_tally = statepoint.get_tally(
tally.scores, tally.filters, tally.nuclides,
estimator=tally.estimator, exact_filters=True)
sp_tally = sp_tally.get_slice(
tally.scores, filters, filter_bins, tally.nuclides)
sp_tally.sparse = self.sparse
self.tallies[tally_type] = sp_tally
self._loaded_sp = True
def get_xs(self, groups='all', subdomains='all', nuclides='all',
xs_type='macro', order_groups='increasing',
value='mean', squeeze=True, **kwargs):
r"""Returns an array of multi-group cross sections.
This method constructs a 3D NumPy array for the requested
multi-group cross section data for one or more subdomains
(1st dimension), energy groups (2nd dimension), and nuclides
(3rd dimension).
Parameters
----------
groups : Iterable of Integral or 'all'
Energy groups of interest. Defaults to 'all'.
subdomains : Iterable of Integral or 'all'
Subdomain IDs of interest. Defaults to 'all'.
nuclides : Iterable of str or 'all' or 'sum'
            Unused in MeshSurfaceMGXS; its value will be ignored. The nuclides
            dimension of the resultant array will always have a length of 1.
        xs_type: {'macro'}
            The 'macro'/'micro' distinction does not apply to MeshSurfaceMGXS.
            The calculation of a 'micro' xs_type is omitted in this class.
order_groups: {'increasing', 'decreasing'}
Return the cross section indexed according to increasing or
decreasing energy groups (decreasing or increasing energies).
Defaults to 'increasing'.
value : {'mean', 'std_dev', 'rel_err'}
A string for the type of value to return. Defaults to 'mean'.
squeeze : bool
A boolean representing whether to eliminate the extra dimensions
of the multi-dimensional array to be returned. Defaults to True.
Returns
-------
numpy.ndarray
A NumPy array of the multi-group cross section indexed in the order
each group, subdomain and nuclide is listed in the parameters.
Raises
------
ValueError
When this method is called before the multi-group cross section is
computed from tally data.
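        Examples
        --------
        Illustrative only; ``current`` denotes an instance of a concrete
        subclass with tally data already loaded:
        >>> current.get_xs(groups=[1], subdomains=[1])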
"""
cv.check_value('value', value, ['mean', 'std_dev', 'rel_err'])
cv.check_value('xs_type', xs_type, ['macro'])
filters = []
filter_bins = []
# Construct a collection of the domain filter bins
if not isinstance(subdomains, str):
cv.check_iterable_type('subdomains', subdomains, Integral,
max_depth=3)
filters.append(_DOMAIN_TO_FILTER[self.domain_type])
subdomain_bins = []
for subdomain in subdomains:
subdomain_bins.append(subdomain)
filter_bins.append(tuple(subdomain_bins))
        # Construct list of energy group bounds tuples for all requested groups
        if not isinstance(groups, str):
            cv.check_iterable_type('groups', groups, Integral)
            filters.append(openmc.EnergyFilter)
            energy_bins = []
            for group in groups:
                energy_bins.append(
                    (self.energy_groups.get_group_bounds(group),))
            filter_bins.append(tuple(energy_bins))
        # Retrieve the tally values only after all filter bins are in place
        xs = self.xs_tally.get_values(filters=filters,
                                      filter_bins=filter_bins, value=value)
# Eliminate the trivial score dimension
xs = np.squeeze(xs, axis=len(xs.shape) - 1)
xs = np.nan_to_num(xs)
if groups == 'all':
num_groups = self.num_groups
else:
num_groups = len(groups)
# Reshape tally data array with separate axes for domain and energy
        # Accommodate the polar and azimuthal bins if needed
num_surfaces = 4 * self.domain.n_dimension
num_subdomains = int(xs.shape[0] / (num_groups * self.num_polar *
self.num_azimuthal * num_surfaces))
if self.num_polar > 1 or self.num_azimuthal > 1:
new_shape = (self.num_polar, self.num_azimuthal, num_subdomains,
num_groups, num_surfaces)
else:
new_shape = (num_subdomains, num_groups, num_surfaces)
new_shape += xs.shape[1:]
new_xs = np.zeros(new_shape)
for cell in range(num_subdomains):
for g in range(num_groups):
for s in range(num_surfaces):
new_xs[cell,g,s] = \
xs[cell*num_surfaces*num_groups+s*num_groups+g]
xs = new_xs
# Reverse data if user requested increasing energy groups since
# tally data is stored in order of increasing energies
if order_groups == 'increasing':
xs = xs[..., ::-1, :, :]
if squeeze:
# We want to squeeze out everything but the polar, azimuthal,
# and energy group data.
xs = self._squeeze_xs(xs)
return xs
def get_pandas_dataframe(self, groups='all', nuclides='all',
xs_type='macro', paths=True):
"""Build a Pandas DataFrame for the MGXS data.
This method leverages :meth:`openmc.Tally.get_pandas_dataframe`, but
renames the columns with terminology appropriate for cross section data.
Parameters
----------
groups : Iterable of Integral or 'all'
Energy groups of interest. Defaults to 'all'.
nuclides : Iterable of str or 'all' or 'sum'
            Unused in MeshSurfaceMGXS; its value will be ignored. The nuclides
            dimension of the resultant array will always have a length of 1.
        xs_type: {'macro'}
            The 'micro' option is unused in MeshSurfaceMGXS.
paths : bool, optional
Construct columns for distribcell tally filters (default is True).
The geometric information in the Summary object is embedded into
a Multi-index column with a geometric "path" to each distribcell
instance.
Returns
-------
pandas.DataFrame
A Pandas DataFrame for the cross section data.
Raises
------
ValueError
When this method is called before the multi-group cross section is
computed from tally data.
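        Examples
        --------
        Illustrative only, after tally data has been loaded from a statepoint;
        ``current`` denotes an instance of a concrete subclass:
        >>> df = current.get_pandas_dataframe(groups=[1, 2])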
"""
if not isinstance(groups, str):
cv.check_iterable_type('groups', groups, Integral)
cv.check_value('xs_type', xs_type, ['macro'])
df = self.xs_tally.get_pandas_dataframe(paths=paths)
# Remove the score column since it is homogeneous and redundant
df = df.drop('score', axis=1, level=0)
# Convert azimuthal, polar, energy in and energy out bin values in to
# bin indices
columns = self._df_convert_columns_to_bins(df)
# Select out those groups the user requested
if not isinstance(groups, str):
if 'group in' in df:
df = df[df['group in'].isin(groups)]
if 'group out' in df:
df = df[df['group out'].isin(groups)]
mesh_str = 'mesh {0}'.format(self.domain.id)
col_key = (mesh_str, 'surf')
surfaces = df.pop(col_key)
df.insert(len(self.domain.dimension), col_key, surfaces)
if len(self.domain.dimension) == 1:
df.sort_values(by=[(mesh_str, 'x'), (mesh_str, 'surf')]
+ columns, inplace=True)
elif len(self.domain.dimension) == 2:
df.sort_values(by=[(mesh_str, 'x'), (mesh_str, 'y'),
(mesh_str, 'surf')] + columns, inplace=True)
elif len(self.domain.dimension) == 3:
df.sort_values(by=[(mesh_str, 'x'), (mesh_str, 'y'),
(mesh_str, 'z'), (mesh_str, 'surf')] + columns, inplace=True)
return df
class Current(MeshSurfaceMGXS):
r"""A current multi-group cross section.
This class can be used for both OpenMC input generation and tally data
post-processing to compute surface- and energy-integrated
multi-group current cross sections for multi-group neutronics calculations. At
a minimum, one needs to set the :attr:`Current.energy_groups` and
:attr:`Current.domain` properties. Tallies for the appropriate
reaction rates over the specified domain are generated automatically via the
:attr:`Current.tallies` property, which can then be appended to a
:class:`openmc.Tallies` instance.
For post-processing, the :meth:`MGXS.load_from_statepoint` will pull in the
necessary data to compute multi-group cross sections from a
:class:`openmc.StatePoint` instance. The derived multi-group cross section
can then be obtained from the :attr:`Current.xs_tally` property.
    For a spatial domain :math:`S` and energy group :math:`[E_g,E_{g-1}]`, the
    surface current is calculated as:
.. math::
\frac{\int_{r \in S} dS \int_{E_g}^{E_{g-1}} dE \;
J(r, E)}{\int_{r \in S} dS \int_{E_g}^{E_{g-1}} dE}.
.. versionadded:: 0.12.1
Parameters
----------
domain : openmc.RegularMesh
The domain for spatial homogenization
    domain_type : {'mesh'}
The domain type for spatial homogenization
groups : openmc.mgxs.EnergyGroups
The energy group structure for energy condensation
by_nuclide : bool
Unused in MeshSurfaceMGXS
name : str, optional
Name of the multi-group cross section. Used as a label to identify
tallies in OpenMC 'tallies.xml' file.
Attributes
----------
name : str, optional
Name of the multi-group cross section
rxn_type : str
Reaction type (e.g., 'total', 'nu-fission', etc.)
by_nuclide : bool
Unused in MeshSurfaceMGXS
domain : openmc.RegularMesh
Domain for spatial homogenization
domain_type : {'mesh'}
Domain type for spatial homogenization
energy_groups : openmc.mgxs.EnergyGroups
Energy group structure for energy condensation
tally_trigger : openmc.Trigger
An (optional) tally precision trigger given to each tally used to
compute the cross section
scores : list of str
The scores in each tally used to compute the multi-group cross section
filters : list of openmc.Filter
The filters in each tally used to compute the multi-group cross section
tally_keys : list of str
The keys into the tallies dictionary for each tally used to compute
the multi-group cross section
estimator : {'analog'}
The tally estimator used to compute the multi-group cross section
tallies : collections.OrderedDict
OpenMC tallies needed to compute the multi-group cross section. The keys
        are strings listed in the :attr:`Current.tally_keys` property and values
are instances of :class:`openmc.Tally`.
rxn_rate_tally : openmc.Tally
Derived tally for the reaction rate tally used in the numerator to
compute the multi-group cross section. This attribute is None
unless the multi-group cross section has been computed.
xs_tally : openmc.Tally
Derived tally for the multi-group cross section. This attribute
is None unless the multi-group cross section has been computed.
num_subdomains : int
The number of subdomains is equal to the number of mesh surfaces times
two to account for both the incoming and outgoing current from the
mesh cell surfaces.
num_nuclides : int
Unused in MeshSurfaceMGXS
nuclides : Iterable of str or 'sum'
Unused in MeshSurfaceMGXS
sparse : bool
Whether or not the MGXS' tallies use SciPy's LIL sparse matrix format
for compressed data storage
loaded_sp : bool
Whether or not a statepoint file has been loaded with tally data
derived : bool
Whether or not the MGXS is merged from one or more other MGXS
hdf5_key : str
The key used to index multi-group cross sections in an HDF5 data store
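    Examples
    --------
    Illustrative only; ``mesh`` is a placeholder :class:`openmc.RegularMesh`,
    not an object defined by this module:
    >>> groups = openmc.mgxs.EnergyGroups(group_edges=[0., 0.625, 20.0e6])
    >>> current = Current(domain=mesh, domain_type='mesh', groups=groups)
    >>> tallies = openmc.Tallies()
    >>> tallies += current.tallies.values()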
"""
def __init__(self, domain=None, domain_type=None,
groups=None, by_nuclide=False, name=''):
        super().__init__(domain, domain_type,
                         groups, by_nuclide, name)
self._rxn_type = 'current'
| 42.896018 | 88 | 0.625122 | 38,093 | 299,500 | 4.799596 | 0.027433 | 0.02737 | 0.029946 | 0.037062 | 0.824237 | 0.797119 | 0.780272 | 0.762365 | 0.747472 | 0.732616 | 0 | 0.005433 | 0.298174 | 299,500 | 6,981 | 89 | 42.902163 | 0.864375 | 0.538514 | 0 | 0.606312 | 0 | 0.001183 | 0.071347 | 0.002272 | 0 | 0 | 0 | 0.000573 | 0 | 1 | 0.065878 | false | 0 | 0.004734 | 0.013807 | 0.135306 | 0.008284 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
22993a68d46f50dd9cea3f877e2119dfc80bd26a | 201 | py | Python | python/smqtk/algorithms/classifier/__init__.py | joshanderson-kw/SMQTK | 594e7c733fe7f4e514a1a08a7343293a883a41fc | [
"BSD-3-Clause"
] | 82 | 2015-01-07T15:33:29.000Z | 2021-08-11T18:34:05.000Z | python/smqtk/algorithms/classifier/__init__.py | joshanderson-kw/SMQTK | 594e7c733fe7f4e514a1a08a7343293a883a41fc | [
"BSD-3-Clause"
] | 230 | 2015-04-08T14:36:51.000Z | 2022-03-14T17:55:30.000Z | python/smqtk/algorithms/classifier/__init__.py | joshanderson-kw/SMQTK | 594e7c733fe7f4e514a1a08a7343293a883a41fc | [
"BSD-3-Clause"
] | 65 | 2015-01-04T15:00:16.000Z | 2021-11-19T18:09:11.000Z | from ._classifier_collection import ClassifierCollection # noqa: F401
from ._interface_classifier import Classifier # noqa: F401
from ._interface_supervised import SupervisedClassifier # noqa: F401
| 50.25 | 70 | 0.835821 | 21 | 201 | 7.714286 | 0.47619 | 0.148148 | 0.148148 | 0.259259 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.050847 | 0.119403 | 201 | 3 | 71 | 67 | 0.864407 | 0.159204 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
22aed204e67e26bba278743887f2f924f9cd1e18 | 2,707 | py | Python | Day17/Day17_pt2.py | EllAchE/aoc2020LastPlace | ad3650b3909b1231c9c931b7d85ce842d30e3d15 | [
"MIT"
] | 1 | 2021-12-04T01:06:18.000Z | 2021-12-04T01:06:18.000Z | Day17/Day17_pt2.py | logan-credera/aoc2020LastPlace | ad3650b3909b1231c9c931b7d85ce842d30e3d15 | [
"MIT"
] | null | null | null | Day17/Day17_pt2.py | logan-credera/aoc2020LastPlace | ad3650b3909b1231c9c931b7d85ce842d30e3d15 | [
"MIT"
] | null | null | null | import itertools
data = open("input.txt").readlines()
# data = ['.#.\n', '..#\n', '###\n']
# the top-left character maps to (0, 0, 0, 0); x increases rightward, y downward
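# Part 2 generalizes day 17 to four dimensions: a Conway-style cellular
# automaton whose active cells are stored sparsely in a dict keyed by
# (x, y, z, w) tuples.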
x = 0
y = 0
z = 0
w = 0
current = {}
for line in data:
# cut trailing \n
line = line.strip('\n')
for cube in line:
if cube == '#':
current[x, y, z, w] = cube
x += 1
x = 0
y += 1
cycles = 6
# All 80 offsets to a cell's 4-D neighbors: every combination of -1/0/1
# in (x, y, z, w) except the zero offset itself.
neighbor = [list(offset) for offset in itertools.product((-1, 0, 1), repeat=4)
            if offset != (0, 0, 0, 0)]
for i in range(0, cycles):
    # dict.keys() returns a view with no append(); use a set, which also
    # gives O(1) membership tests while gathering candidate cells.
    nxt = set(current.keys())
    for key in current.keys():
        for n in neighbor:
            neighbor_key = (key[0] + n[0], key[1] + n[1], key[2] + n[2], key[3] + n[3])
            nxt.add(neighbor_key)
active = []
for key in nxt:
# print(key)
active_neighbors = 0
for n in neighbor:
neighbor_key = (key[0] + n[0], key[1] + n[1], key[2] + n[2], key[3] + n[3])
if neighbor_key in current.keys():
active_neighbors += 1
# active
if key in current.keys() and current[key] == '#':
if active_neighbors == 3 or active_neighbors == 2:
active.append(key)
# inactive
else:
if active_neighbors == 3:
active.append(key)
current = {}
for key in active:
current[key] = '#'
print(i)
print(len(current.keys()))
| 41.646154 | 152 | 0.355375 | 511 | 2,707 | 1.863014 | 0.09002 | 0.30042 | 0.305672 | 0.289916 | 0.466387 | 0.465336 | 0.463235 | 0.463235 | 0.460084 | 0.460084 | 0 | 0.202503 | 0.350573 | 2,707 | 64 | 153 | 42.296875 | 0.339022 | 0.042113 | 0 | 0.185185 | 0 | 0 | 0.005424 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.037037 | 0 | 0 | 1 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
22eea1dd59cb1851d541a441d7d374118b2a6f9d | 9,670 | py | Python | tests/unit/test_timeout.py | LastRemote/sagemaker-python-sdk | fddf29d9e4383cd3f939253eef47ee79a464dd37 | [
"Apache-2.0"
] | 1 | 2021-08-31T09:39:37.000Z | 2021-08-31T09:39:37.000Z | tests/unit/test_timeout.py | LastRemote/sagemaker-python-sdk | fddf29d9e4383cd3f939253eef47ee79a464dd37 | [
"Apache-2.0"
] | null | null | null | tests/unit/test_timeout.py | LastRemote/sagemaker-python-sdk | fddf29d9e4383cd3f939253eef47ee79a464dd37 | [
"Apache-2.0"
] | null | null | null | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
"""
This module tests the timeout.py helpers used by the integration tests.
This is to prevent regressions that cause the timeout function to hide failed tests.
"""
from __future__ import absolute_import
import time
import pytest
from mock import Mock, patch
import stopit
from botocore.exceptions import ClientError
from tests.integ.timeout import (
timeout,
timeout_and_delete_endpoint_by_name,
timeout_and_delete_model_with_transformer,
)
BOTO_SESSION_NAME = "boto_session_name"
SAGEMAKER_SESSION_NAME = "sagemaker_session_name"
DEFAULT_BUCKET_NAME = "default_bucket_name"
TRANSFORMER_NAME = "transformer.name"
REGION = "us-west-2"
BUCKET_NAME = "bucket-name"
ENDPOINT_NAME = "endpoint_name"
EXCEPTION_MESSAGE = "This Exception is expected and should not be swallowed by the timeout."
SHORT_TIMEOUT_TO_FORCE_TIMEOUT_TO_OCCUR = 0.001
LONG_DURATION_TO_EXCEED_TIMEOUT = 0.002
LONG_TIMEOUT_THAT_WILL_NEVER_BE_EXCEEDED = 10
DURATION_TO_SLEEP_TO_ALLOW_BACKGROUND_THREAD_TO_COMPLETE = 0.2
@pytest.fixture()
def session():
boto_mock = Mock(name=BOTO_SESSION_NAME, region_name=REGION)
sms = Mock(
name=SAGEMAKER_SESSION_NAME,
boto_session=boto_mock,
boto_region_name=REGION,
config=None,
local_mode=True,
)
sms.default_bucket = Mock(name=DEFAULT_BUCKET_NAME, return_value=BUCKET_NAME)
return sms
@pytest.fixture()
def transformer():
return Mock(name=TRANSFORMER_NAME, region_name=REGION)
def test_timeout_fails_correctly_when_method_throws_exception():
with pytest.raises(ValueError) as exception:
with timeout(hours=0, minutes=0, seconds=LONG_TIMEOUT_THAT_WILL_NEVER_BE_EXCEEDED):
raise ValueError(EXCEPTION_MESSAGE)
assert EXCEPTION_MESSAGE in str(exception.value)
def test_timeout_does_not_throw_exception_when_method_ends_gracefully():
with timeout(hours=0, minutes=0, seconds=LONG_TIMEOUT_THAT_WILL_NEVER_BE_EXCEEDED):
pass
@patch("tests.integ.timeout._show_logs", return_value=None, autospec=True)
@patch("tests.integ.timeout._cleanup_logs", return_value=None, autospec=True)
@patch(
"tests.integ.timeout._delete_schedules_associated_with_endpoint",
return_value=None,
autospec=True,
)
def test_timeout_and_delete_endpoint_by_name_fails_when_method_throws_exception(
_show_logs, _cleanup_logs, _delete_schedules_associated_with_endpoint, session
):
with pytest.raises(ValueError) as exception:
with timeout_and_delete_endpoint_by_name(
endpoint_name=ENDPOINT_NAME,
sagemaker_session=session,
hours=0,
minutes=0,
seconds=LONG_TIMEOUT_THAT_WILL_NEVER_BE_EXCEEDED,
sleep_between_cleanup_attempts=0,
):
raise ValueError(EXCEPTION_MESSAGE)
assert EXCEPTION_MESSAGE in str(exception.value)
assert session.delete_endpoint.call_count == 1
@patch("tests.integ.timeout._show_logs", return_value=None, autospec=True)
@patch("tests.integ.timeout._cleanup_logs", return_value=None, autospec=True)
@patch(
"tests.integ.timeout._delete_schedules_associated_with_endpoint",
return_value=None,
autospec=True,
)
def test_timeout_and_delete_endpoint_by_name_throws_timeout_exception_when_method_times_out(
_show_logs, _cleanup_logs, _delete_schedules_associated_with_endpoint, session
):
with pytest.raises(stopit.utils.TimeoutException):
with timeout_and_delete_endpoint_by_name(
endpoint_name=ENDPOINT_NAME,
sagemaker_session=session,
hours=0,
minutes=0,
seconds=SHORT_TIMEOUT_TO_FORCE_TIMEOUT_TO_OCCUR,
sleep_between_cleanup_attempts=0,
):
time.sleep(LONG_DURATION_TO_EXCEED_TIMEOUT)
@patch("tests.integ.timeout._show_logs", return_value=None, autospec=True)
@patch("tests.integ.timeout._cleanup_logs", return_value=None, autospec=True)
@patch(
"tests.integ.timeout._delete_schedules_associated_with_endpoint",
return_value=None,
autospec=True,
)
def test_timeout_and_delete_endpoint_by_name_does_not_throw_exception_when_method_ends_gracefully(
_show_logs, _cleanup_logs, _delete_schedules_associated_with_endpoint, session
):
with timeout_and_delete_endpoint_by_name(
endpoint_name=ENDPOINT_NAME,
sagemaker_session=session,
hours=0,
minutes=0,
seconds=LONG_TIMEOUT_THAT_WILL_NEVER_BE_EXCEEDED,
sleep_between_cleanup_attempts=0,
):
pass
assert session.delete_endpoint.call_count == 1
@patch("tests.integ.timeout._show_logs", return_value=None, autospec=True)
@patch("tests.integ.timeout._cleanup_logs", return_value=None, autospec=True)
@patch(
"tests.integ.timeout._delete_schedules_associated_with_endpoint",
return_value=None,
autospec=True,
)
def test_timeout_and_delete_endpoint_by_name_retries_resource_deletion_on_failure(
_show_logs, _cleanup_logs, _delete_schedules_associated_with_endpoint, session
):
session.delete_endpoint = Mock(
side_effect=ClientError(
error_response={"Error": {"Code": 403, "Message": "ValidationException"}},
operation_name="Unit Test",
)
)
with timeout_and_delete_endpoint_by_name(
endpoint_name=ENDPOINT_NAME,
sagemaker_session=session,
hours=0,
minutes=0,
seconds=LONG_TIMEOUT_THAT_WILL_NEVER_BE_EXCEEDED,
sleep_between_cleanup_attempts=0,
):
pass
assert session.delete_endpoint.call_count == 3
@patch("tests.integ.timeout._show_logs", return_value=None, autospec=True)
@patch("tests.integ.timeout._cleanup_logs", return_value=None, autospec=True)
@patch(
"tests.integ.timeout._delete_schedules_associated_with_endpoint",
return_value=None,
autospec=True,
)
def test_timeout_and_delete_model_with_transformer_fails_when_method_throws_exception(
_show_logs, _cleanup_logs, _delete_schedules_associated_with_endpoint, session, transformer
):
with pytest.raises(ValueError) as exception:
with timeout_and_delete_model_with_transformer(
sagemaker_session=session,
transformer=transformer,
hours=0,
minutes=1,
sleep_between_cleanup_attempts=0,
):
raise ValueError(EXCEPTION_MESSAGE)
assert EXCEPTION_MESSAGE in str(exception.value)
assert transformer.delete_model.call_count == 1
@patch("tests.integ.timeout._show_logs", return_value=None, autospec=True)
@patch("tests.integ.timeout._cleanup_logs", return_value=None, autospec=True)
@patch(
"tests.integ.timeout._delete_schedules_associated_with_endpoint",
return_value=None,
autospec=True,
)
def test_timeout_and_delete_model_with_transformer_throws_timeout_exception_when_method_times_out(
_show_logs, _cleanup_logs, _delete_schedules_associated_with_endpoint, session, transformer
):
with pytest.raises(stopit.utils.TimeoutException):
with timeout_and_delete_model_with_transformer(
sagemaker_session=session,
transformer=transformer,
hours=0,
minutes=0,
seconds=SHORT_TIMEOUT_TO_FORCE_TIMEOUT_TO_OCCUR,
sleep_between_cleanup_attempts=0,
):
time.sleep(LONG_DURATION_TO_EXCEED_TIMEOUT)
@patch("tests.integ.timeout._show_logs", return_value=None, autospec=True)
@patch("tests.integ.timeout._cleanup_logs", return_value=None, autospec=True)
@patch(
"tests.integ.timeout._delete_schedules_associated_with_endpoint",
return_value=None,
autospec=True,
)
def test_timeout_and_delete_model_with_transformer_does_not_throw_when_method_ends_gracefully(
_show_logs, _cleanup_logs, _delete_schedules_associated_with_endpoint, session, transformer
):
with timeout_and_delete_model_with_transformer(
sagemaker_session=session,
transformer=transformer,
hours=0,
minutes=0,
seconds=LONG_TIMEOUT_THAT_WILL_NEVER_BE_EXCEEDED,
sleep_between_cleanup_attempts=0,
):
pass
assert transformer.delete_model.call_count == 1
@patch("tests.integ.timeout._show_logs", return_value=None, autospec=True)
@patch("tests.integ.timeout._cleanup_logs", return_value=None, autospec=True)
@patch(
"tests.integ.timeout._delete_schedules_associated_with_endpoint",
return_value=None,
autospec=True,
)
def test_timeout_and_delete_model_with_transformer_retries_resource_deletion_on_failure(
_show_logs, _cleanup_logs, _delete_schedules_associated_with_endpoint, session, transformer
):
transformer.delete_model = Mock(
side_effect=ClientError(
error_response={"Error": {"Code": 403, "Message": "ValidationException"}},
operation_name="Unit Test",
)
)
with timeout_and_delete_model_with_transformer(
sagemaker_session=session,
transformer=transformer,
hours=0,
minutes=0,
seconds=LONG_TIMEOUT_THAT_WILL_NEVER_BE_EXCEEDED,
sleep_between_cleanup_attempts=0,
):
pass
assert transformer.delete_model.call_count == 3
| 35.682657 | 98 | 0.753878 | 1,219 | 9,670 | 5.551272 | 0.147662 | 0.036944 | 0.062805 | 0.078026 | 0.795478 | 0.779814 | 0.770061 | 0.76016 | 0.744939 | 0.744939 | 0 | 0.007084 | 0.167942 | 9,670 | 270 | 99 | 35.814815 | 0.833955 | 0.070838 | 0 | 0.714932 | 0 | 0 | 0.141089 | 0.113986 | 0 | 0 | 0 | 0 | 0.040724 | 1 | 0.054299 | false | 0.022624 | 0.031674 | 0.004525 | 0.095023 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a3cb7e74115ed6335b278c00e85ac2dcf779e55e | 26 | py | Python | abc.py | Harsh8668/Harsh_form | 4a4b6ca002df41d7ab0523918a1730585a57a3b6 | [
"Apache-2.0"
] | null | null | null | abc.py | Harsh8668/Harsh_form | 4a4b6ca002df41d7ab0523918a1730585a57a3b6 | [
"Apache-2.0"
] | null | null | null | abc.py | Harsh8668/Harsh_form | 4a4b6ca002df41d7ab0523918a1730585a57a3b6 | [
"Apache-2.0"
] | null | null | null | print("I am Harshvardhan") | 26 | 26 | 0.769231 | 4 | 26 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076923 | 26 | 1 | 26 | 26 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0.62963 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
a3d37026a2cbcf7a0c02eaa4b1a6f59277f40677 | 347 | py | Python | checklist/views/__init__.py | cagandhi/Checklist-Django | c8edf1d8f821900a71f36abd34a76663d8d8f7da | [
"Apache-2.0"
] | 3 | 2021-07-02T07:35:19.000Z | 2022-01-14T11:14:14.000Z | checklist/views/__init__.py | cagandhi/Checklist-Django | c8edf1d8f821900a71f36abd34a76663d8d8f7da | [
"Apache-2.0"
] | 57 | 2021-01-31T23:39:57.000Z | 2022-03-12T00:47:23.000Z | checklist/views/__init__.py | cagandhi/Checklist-Django | c8edf1d8f821900a71f36abd34a76663d8d8f7da | [
"Apache-2.0"
] | 3 | 2021-08-29T21:46:54.000Z | 2022-03-24T13:10:00.000Z | # refer https://stackoverflow.com/a/1921911/6543250 and https://stackoverflow.com/a/46108146/6543250 to break up code into diff files
from .views_bookupsearcateg import * # noqa: F401, F403
from .views_checklist import * # noqa: F401, F403
from .views_itemcomm_crud import * # noqa: F401, F403
from .views_viewfunc import * # noqa: F401, F403
| 57.833333 | 133 | 0.760807 | 50 | 347 | 5.18 | 0.54 | 0.138996 | 0.216216 | 0.277992 | 0.312741 | 0.312741 | 0 | 0 | 0 | 0 | 0 | 0.177258 | 0.138329 | 347 | 5 | 134 | 69.4 | 0.688963 | 0.573487 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
43380b7f9c1160d4fd104b6fefb650230e690ca3 | 2,105 | py | Python | tests/test_chapter7.py | GoodMonsters/Building-Data-Science-Applications-with-FastAPI | d2218d225c5b93723ecf46c19619ed5d3f2473e6 | [
"MIT"
] | 107 | 2021-03-26T20:18:51.000Z | 2022-03-26T03:38:08.000Z | tests/test_chapter7.py | GoodMonsters/Building-Data-Science-Applications-with-FastAPI | d2218d225c5b93723ecf46c19619ed5d3f2473e6 | [
"MIT"
] | 4 | 2021-06-09T08:48:21.000Z | 2021-12-27T09:04:43.000Z | tests/test_chapter7.py | GoodMonsters/Building-Data-Science-Applications-with-FastAPI | d2218d225c5b93723ecf46c19619ed5d3f2473e6 | [
"MIT"
] | 58 | 2021-03-12T20:51:19.000Z | 2022-03-27T15:49:49.000Z | import httpx
import pytest
from fastapi import status
from chapter7.chapter7_api_key_header import (
app as chapter7_api_key_header_app,
API_TOKEN as CHAPTER7_API_KEY_HEADER_API_TOKEN,
)
from chapter7.chapter7_api_key_header_dependency import (
app as chapter7_api_key_header_app_dependency,
API_TOKEN as CHAPTER7_API_KEY_HEADER_DEPENDENCY_API_TOKEN,
)
@pytest.mark.fastapi(app=chapter7_api_key_header_app)
@pytest.mark.asyncio
class TestChapter7APIKeyHeader:
async def test_missing_header(self, client: httpx.AsyncClient):
response = await client.get("/protected-route")
assert response.status_code == status.HTTP_403_FORBIDDEN
async def test_invalid_token(self, client: httpx.AsyncClient):
response = await client.get("/protected-route", headers={"Token": "Foo"})
assert response.status_code == status.HTTP_403_FORBIDDEN
async def test_valid_token(self, client: httpx.AsyncClient):
response = await client.get(
"/protected-route", headers={"Token": CHAPTER7_API_KEY_HEADER_API_TOKEN}
)
assert response.status_code == status.HTTP_200_OK
json = response.json()
assert json == {"hello": "world"}
@pytest.mark.fastapi(app=chapter7_api_key_header_app_dependency)
@pytest.mark.asyncio
class TestChapter7APIKeyHeaderDependency:
async def test_missing_header(self, client: httpx.AsyncClient):
response = await client.get("/protected-route")
assert response.status_code == status.HTTP_403_FORBIDDEN
async def test_invalid_token(self, client: httpx.AsyncClient):
response = await client.get("/protected-route", headers={"Token": "Foo"})
assert response.status_code == status.HTTP_403_FORBIDDEN
async def test_valid_token(self, client: httpx.AsyncClient):
response = await client.get(
"/protected-route",
headers={"Token": CHAPTER7_API_KEY_HEADER_DEPENDENCY_API_TOKEN},
)
assert response.status_code == status.HTTP_200_OK
json = response.json()
assert json == {"hello": "world"}
| 35.083333 | 84 | 0.726366 | 261 | 2,105 | 5.555556 | 0.172414 | 0.075862 | 0.096552 | 0.137931 | 0.897931 | 0.891034 | 0.827586 | 0.766897 | 0.72 | 0.66069 | 0 | 0.01854 | 0.180048 | 2,105 | 59 | 85 | 35.677966 | 0.821553 | 0 | 0 | 0.545455 | 0 | 0 | 0.067458 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 1 | 0 | false | 0 | 0.113636 | 0 | 0.159091 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4347d1d965bca8b88d1b5f0c0cae8f155108609e | 3,576 | py | Python | notifications/migrations/0001_initial.py | luterien/django-action-notifications | 0843baba73a7c92681a68e32bae550ec9af87555 | [
"MIT"
] | 1 | 2017-04-22T11:16:13.000Z | 2017-04-22T11:16:13.000Z | notifications/migrations/0001_initial.py | luterien/django-action-notifications | 0843baba73a7c92681a68e32bae550ec9af87555 | [
"MIT"
] | null | null | null | notifications/migrations/0001_initial.py | luterien/django-action-notifications | 0843baba73a7c92681a68e32bae550ec9af87555 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.11 on 2017-04-16 15:24
from __future__ import unicode_literals
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
import django.utils.timezone
class Migration(migrations.Migration):
initial = True
dependencies = [
('contenttypes', '0002_remove_content_type_name'),
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='Action',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('actor_object_id', models.TextField(verbose_name='object id')),
('action_object_id', models.TextField(blank=True, null=True, verbose_name='object id')),
('target_object_id', models.TextField(blank=True, null=True, verbose_name='object id')),
('date_created', models.DateTimeField(default=django.utils.timezone.now)),
('verb', models.CharField(max_length=100)),
('is_active', models.BooleanField(default=True)),
('action_object_content_type', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='notifications_action_action_type', to='contenttypes.ContentType')),
('actor_content_type', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='notifications_action_actor_type', to='contenttypes.ContentType')),
('target_content_type', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='notifications_action_target_type', to='contenttypes.ContentType')),
],
options={
'ordering': ('-date_created',),
'abstract': False,
},
),
migrations.CreateModel(
name='Notification',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('actor_object_id', models.TextField(verbose_name='object id')),
('action_object_id', models.TextField(blank=True, null=True, verbose_name='object id')),
('target_object_id', models.TextField(blank=True, null=True, verbose_name='object id')),
('date_created', models.DateTimeField(default=django.utils.timezone.now)),
('verb', models.CharField(max_length=100)),
('is_seen', models.BooleanField(default=False)),
('action_object_content_type', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='notifications_notification_action_type', to='contenttypes.ContentType')),
('actor_content_type', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='notifications_notification_actor_type', to='contenttypes.ContentType')),
('recipient', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='notifications', to=settings.AUTH_USER_MODEL)),
('target_content_type', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='notifications_notification_target_type', to='contenttypes.ContentType')),
],
options={
'ordering': ('-date_created',),
'abstract': False,
},
),
]
| 58.622951 | 220 | 0.659676 | 384 | 3,576 | 5.903646 | 0.226563 | 0.042347 | 0.049405 | 0.077636 | 0.774151 | 0.744155 | 0.744155 | 0.744155 | 0.744155 | 0.744155 | 0 | 0.009187 | 0.208613 | 3,576 | 60 | 221 | 59.6 | 0.791873 | 0.018456 | 0 | 0.5 | 1 | 0 | 0.234103 | 0.123467 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.096154 | 0 | 0.173077 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4a82cc2b1b3569b9f3d1866fca2ccd9e948f3edc | 16,239 | py | Python | tests/test_cookies.py | GlitchCorp/fastapi-another-jwt-auth | 2f916093e804d910bdc40b96483d6724c312cac6 | [
"MIT"
] | null | null | null | tests/test_cookies.py | GlitchCorp/fastapi-another-jwt-auth | 2f916093e804d910bdc40b96483d6724c312cac6 | [
"MIT"
] | null | null | null | tests/test_cookies.py | GlitchCorp/fastapi-another-jwt-auth | 2f916093e804d910bdc40b96483d6724c312cac6 | [
"MIT"
] | null | null | null | import pytest
from fastapi_another_jwt_auth import AuthJWT
from fastapi_another_jwt_auth.exceptions import AuthJWTException
from fastapi import FastAPI, Request, Depends
from fastapi.responses import JSONResponse
from fastapi.testclient import TestClient
@pytest.fixture(scope='function')
def client():
app = FastAPI()
@app.exception_handler(AuthJWTException)
def authjwt_exception_handler(request: Request, exc: AuthJWTException):
return JSONResponse(
status_code=exc.status_code,
content={"detail": exc.message}
)
@app.get('/all-token')
def all_token(Authorize: AuthJWT = Depends()):
access_token = Authorize.create_access_token(subject=1,fresh=True)
refresh_token = Authorize.create_refresh_token(subject=1)
Authorize.set_access_cookies(access_token)
Authorize.set_refresh_cookies(refresh_token)
return {"msg":"all token"}
@app.get('/all-token-response')
def all_token_response(Authorize: AuthJWT = Depends()):
access_token = Authorize.create_access_token(subject=1,fresh=True)
refresh_token = Authorize.create_refresh_token(subject=1)
response = JSONResponse(content={"msg":"all token"})
Authorize.set_access_cookies(access_token,response)
Authorize.set_refresh_cookies(refresh_token,response)
return response
@app.get('/access-token')
def access_token(Authorize: AuthJWT = Depends()):
access_token = Authorize.create_access_token(subject=1)
Authorize.set_access_cookies(access_token)
return {"msg":"access token"}
@app.get('/access-token-response')
def access_token_response(Authorize: AuthJWT = Depends()):
access_token = Authorize.create_access_token(subject=1)
response = JSONResponse(content={"msg":"access token"})
Authorize.set_access_cookies(access_token,response)
return response
@app.get('/refresh-token')
def refresh_token(Authorize: AuthJWT = Depends()):
refresh_token = Authorize.create_refresh_token(subject=1)
Authorize.set_refresh_cookies(refresh_token)
return {"msg":"refresh token"}
@app.get('/refresh-token-response')
def refresh_token_response(Authorize: AuthJWT = Depends()):
refresh_token = Authorize.create_refresh_token(subject=1)
response = JSONResponse(content={"msg":"refresh token"})
Authorize.set_refresh_cookies(refresh_token,response)
return response
@app.get('/unset-all-token')
def unset_all_token(Authorize: AuthJWT = Depends()):
Authorize.unset_jwt_cookies()
return {"msg":"unset all token"}
@app.get('/unset-all-token-response')
def unset_all_token_response(Authorize: AuthJWT = Depends()):
response = JSONResponse(content={"msg":"unset all token"})
Authorize.unset_jwt_cookies(response)
return response
@app.get('/unset-access-token')
def unset_access_token(Authorize: AuthJWT = Depends()):
Authorize.unset_access_cookies()
@app.get('/unset-refresh-token')
def unset_refresh_token(Authorize: AuthJWT = Depends()):
Authorize.unset_refresh_cookies()
@app.post('/jwt-optional')
def jwt_optional(Authorize: AuthJWT = Depends()):
Authorize.jwt_optional()
return {"hello": Authorize.get_jwt_subject()}
@app.post('/jwt-required')
def jwt_required(Authorize: AuthJWT = Depends()):
Authorize.jwt_required()
return {"hello": Authorize.get_jwt_subject()}
@app.post('/jwt-refresh')
def jwt_refresh(Authorize: AuthJWT = Depends()):
Authorize.jwt_refresh_token_required()
return {"hello": Authorize.get_jwt_subject()}
@app.post('/jwt-fresh')
def jwt_fresh(Authorize: AuthJWT = Depends()):
Authorize.fresh_jwt_required()
return {"hello": Authorize.get_jwt_subject()}
client = TestClient(app)
return client
@pytest.mark.parametrize(
"url",["/access-token","/refresh-token","/unset-access-token","/unset-refresh-token"]
)
def test_warning_if_cookies_not_in_token_location(url,client):
@AuthJWT.load_config
def get_secret_key():
return [("authjwt_secret_key","secret")]
with pytest.raises(RuntimeWarning,match=r"authjwt_token_location"):
client.get(url)
def test_set_cookie_not_valid_type_max_age(Authorize):
@AuthJWT.load_config
def get_cookie_location():
return [("authjwt_token_location",{'cookies'}),("authjwt_secret_key","secret")]
token = Authorize.create_access_token(subject=1)
with pytest.raises(TypeError,match=r"max_age"):
Authorize.set_access_cookies(token,max_age="string")
with pytest.raises(TypeError,match=r"max_age"):
Authorize.set_refresh_cookies(token,max_age="string")
def test_set_unset_cookies_not_valid_type_response(Authorize):
@AuthJWT.load_config
def get_cookie_location():
return [("authjwt_token_location",{'cookies'}),("authjwt_secret_key","secret")]
token = Authorize.create_access_token(subject=1)
with pytest.raises(TypeError,match=r"response"):
Authorize.set_access_cookies(token,response={"msg":"hello"})
with pytest.raises(TypeError,match=r"response"):
Authorize.set_refresh_cookies(token,response={"msg":"hello"})
with pytest.raises(TypeError,match=r"response"):
Authorize.unset_jwt_cookies({"msg":"hello"})
with pytest.raises(TypeError,match=r"response"):
Authorize.unset_access_cookies({"msg":"hello"})
with pytest.raises(TypeError,match=r"response"):
Authorize.unset_refresh_cookies({"msg":"hello"})
@pytest.mark.parametrize("url",["/access-token","/refresh-token","/access-token-response","/refresh-token-response"])
def test_set_cookie_csrf_protect_false(url,client):
@AuthJWT.load_config
def get_cookie_location():
return [
("authjwt_token_location",{'cookies'}),
("authjwt_secret_key","secret"),
("authjwt_cookie_csrf_protect",False)
]
cookie_key = url.split("-")[0][1:]
response = client.get(url)
assert response.cookies.get("csrf_{}_token".format(cookie_key)) is None
@pytest.mark.parametrize("url",["/access-token","/refresh-token","/access-token-response","/refresh-token-response"])
def test_set_cookie_csrf_protect_true(url,client):
@AuthJWT.load_config
def get_cookie_location():
return [("authjwt_token_location",{'cookies'}),("authjwt_secret_key","secret")]
cookie_key = url.split("-")[0][1:]
response = client.get(url)
assert response.cookies.get("csrf_{}_token".format(cookie_key)) is not None
def test_unset_all_cookie(client):
@AuthJWT.load_config
def get_cookie_location():
return [("authjwt_token_location",{'cookies'}),("authjwt_secret_key","secret")]
response = client.get('/all-token')
assert response.cookies.get("access_token_cookie") is not None
assert response.cookies.get("csrf_access_token") is not None
assert response.cookies.get("refresh_token_cookie") is not None
assert response.cookies.get("csrf_refresh_token") is not None
response = client.get('/unset-all-token')
assert response.cookies.get("access_token_cookie") is None
assert response.cookies.get("csrf_access_token") is None
assert response.cookies.get("refresh_token_cookie") is None
assert response.cookies.get("csrf_refresh_token") is None
def test_unset_all_cookie_response(client):
@AuthJWT.load_config
def get_cookie_location():
return [("authjwt_token_location",{'cookies'}),("authjwt_secret_key","secret")]
response = client.get('/all-token-response')
assert response.cookies.get("access_token_cookie") is not None
assert response.cookies.get("csrf_access_token") is not None
assert response.cookies.get("refresh_token_cookie") is not None
assert response.cookies.get("csrf_refresh_token") is not None
response = client.get('/unset-all-token-response')
assert response.cookies.get("access_token_cookie") is None
assert response.cookies.get("csrf_access_token") is None
assert response.cookies.get("refresh_token_cookie") is None
assert response.cookies.get("csrf_refresh_token") is None
def test_custom_cookie_key(client):
@AuthJWT.load_config
def get_cookie_location():
return [
("authjwt_token_location",{'cookies'}),
("authjwt_secret_key","secret"),
("authjwt_access_cookie_key","access_cookie"),
("authjwt_refresh_cookie_key","refresh_cookie"),
("authjwt_access_csrf_cookie_key","csrf_access"),
("authjwt_refresh_csrf_cookie_key","csrf_refresh")
]
response = client.get('/all-token')
assert response.cookies.get("access_cookie") is not None
assert response.cookies.get("csrf_access") is not None
assert response.cookies.get("refresh_cookie") is not None
assert response.cookies.get("csrf_refresh") is not None
response = client.get('/unset-all-token')
assert response.cookies.get("access_cookie") is None
assert response.cookies.get("csrf_access") is None
assert response.cookies.get("refresh_cookie") is None
assert response.cookies.get("csrf_refresh") is None
def test_cookie_optional_protected(client):
@AuthJWT.load_config
def get_cookie_location():
return [("authjwt_token_location",{'cookies'}),("authjwt_secret_key","secret")]
url = '/jwt-optional'
# without token
response = client.post(url)
assert response.status_code == 200
assert response.json() == {'hello': None}
# change the csrf-checked request methods so POST requests skip the csrf check
@AuthJWT.load_config
def change_request_methods():
return [
("authjwt_csrf_methods",{"GET"}),
("authjwt_token_location",{'cookies'}),
("authjwt_secret_key","secret")
]
client.get('/access-token')
response = client.post(url)
assert response.status_code == 200
assert response.json() == {'hello': 1}
# with csrf protect set to False, the csrf token is not checked
@AuthJWT.load_config
def change_request_csrf_protect_to_false():
return [
("authjwt_csrf_methods",{'POST','PUT','PATCH','DELETE'}),
("authjwt_token_location",{'cookies'}),
("authjwt_secret_key","secret"),
("authjwt_cookie_csrf_protect",False)
]
client.get('/access-token')
response = client.post(url)
assert response.status_code == 200
assert response.json() == {'hello': 1}
# missing csrf token
@AuthJWT.load_config
def change_csrf_protect_to_true():
return [
("authjwt_token_location",{'cookies'}),
("authjwt_secret_key","secret"),
("authjwt_cookie_csrf_protect",True)
]
res = client.get('/access-token')
csrf_token = res.cookies.get("csrf_access_token")
response = client.post(url)
assert response.status_code == 401
assert response.json() == {'detail': 'Missing CSRF Token'}
# csrf tokens do not match
response = client.post(url,headers={"X-CSRF-Token":"invalid"})
assert response.status_code == 401
assert response.json() == {'detail': 'CSRF double submit tokens do not match'}
response = client.post(url,headers={"X-CSRF-Token": csrf_token})
assert response.status_code == 200
assert response.json() == {'hello': 1}
# missing csrf claim in the token
@AuthJWT.load_config
def change_request_csrf_protect_to_false_again():
return [
("authjwt_token_location",{'cookies'}),
("authjwt_secret_key","secret"),
("authjwt_cookie_csrf_protect",False)
]
client.get('/access-token')
@AuthJWT.load_config
def change_request_csrf_protect_to_true_again():
return [("authjwt_token_location",{'cookies'}),("authjwt_secret_key","secret")]
response = client.post(url,headers={"X-CSRF-Token":"invalid"})
assert response.status_code == 422
assert response.json() == {'detail': 'Missing claim: csrf'}
# custom csrf header name and cookie key
@AuthJWT.load_config
def custom_header_name_cookie_key():
return [
("authjwt_token_location",{'cookies'}),
("authjwt_secret_key","secret"),
("authjwt_access_cookie_key","access_cookie"),
("authjwt_access_csrf_header_name","X-CSRF")
]
res = client.get('/access-token')
csrf_token = res.cookies.get("csrf_access_token")
# valid request
response = client.post(url,headers={"X-CSRF": csrf_token})
assert response.status_code == 200
assert response.json() == {'hello': 1}
@pytest.mark.parametrize("url",["/jwt-required","/jwt-refresh","/jwt-fresh"])
def test_cookie_protected(url,client):
# custom csrf header name and cookie key
@AuthJWT.load_config
def custom_header_name_cookie_key():
return [
("authjwt_token_location",{'cookies'}),
("authjwt_secret_key","secret"),
("authjwt_access_cookie_key","access_cookie"),
("authjwt_access_csrf_header_name","X-CSRF-Access"),
("authjwt_refresh_cookie_key","refresh_cookie"),
("authjwt_refresh_csrf_header_name","X-CSRF-Refresh")
]
res = client.get('/all-token')
csrf_access = res.cookies.get("csrf_access_token")
csrf_refresh = res.cookies.get("csrf_refresh_token")
if url != "/jwt-refresh":
response = client.post(url,headers={"X-CSRF-Access": csrf_access})
else:
response = client.post(url,headers={"X-CSRF-Refresh": csrf_refresh})
assert response.status_code == 200
assert response.json() == {'hello': 1}
# missing csrf token
response = client.post(url)
assert response.status_code == 401
assert response.json() == {'detail': 'Missing CSRF Token'}
# missing cookie
client.get('/unset-all-token')
response = client.post(url)
assert response.status_code == 401
if url != "/jwt-refresh":
assert response.json() == {'detail': 'Missing cookie access_cookie'}
else:
assert response.json() == {'detail': 'Missing cookie refresh_cookie'}
# with csrf protect set to False, the csrf token is not checked
@AuthJWT.load_config
def change_request_csrf_protect_to_false():
return [
("authjwt_token_location",{'cookies'}),
("authjwt_secret_key","secret"),
("authjwt_cookie_csrf_protect",False)
]
client.get('/all-token')
response = client.post(url)
assert response.status_code == 200
assert response.json() == {'hello': 1}
# change the csrf-checked request methods so POST requests skip the csrf check
@AuthJWT.load_config
def change_request_methods():
return [
("authjwt_csrf_methods",{"GET"}),
("authjwt_token_location",{'cookies'}),
("authjwt_secret_key","secret"),
("authjwt_cookie_csrf_protect",True)
]
response = client.post(url)
assert response.status_code == 200
assert response.json() == {'hello': 1}
# missing csrf claim in the token
@AuthJWT.load_config
def change_request_methods_to_default():
return [
("authjwt_csrf_methods",{'POST','PUT','PATCH','DELETE'}),
("authjwt_token_location",{'cookies'}),
("authjwt_secret_key","secret"),
]
response = client.post(url,headers={"X-CSRF-Token":"invalid"})
assert response.status_code == 422
assert response.json() == {'detail': 'Missing claim: csrf'}
# csrf tokens do not match
res = client.get('/all-token')
csrf_access = res.cookies.get("csrf_access_token")
csrf_refresh = res.cookies.get("csrf_refresh_token")
response = client.post(url,headers={"X-CSRF-Token":"invalid"})
assert response.status_code == 401
assert response.json() == {'detail': 'CSRF double submit tokens do not match'}
# valid request
if url != "/jwt-refresh":
response = client.post(url,headers={"X-CSRF-Token": csrf_access})
else:
response = client.post(url,headers={"X-CSRF-Token": csrf_refresh})
assert response.status_code == 200
assert response.json() == {'hello': 1}
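# Background note (the general double-submit CSRF pattern these tests
# exercise, not library-specific behavior): the server sets a csrf token
# in a cookie and requires the same value back in a request header; since
# only same-origin code can read that cookie, a matching header indicates
# the request was not forged from another site.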
| 36.906818 | 117 | 0.676396 | 1,954 | 16,239 | 5.374104 | 0.058854 | 0.078659 | 0.051995 | 0.059423 | 0.845158 | 0.811066 | 0.77745 | 0.757928 | 0.723645 | 0.701838 | 0 | 0.005312 | 0.188497 | 16,239 | 439 | 118 | 36.990888 | 0.791547 | 0.029066 | 0 | 0.612426 | 0 | 0 | 0.232762 | 0.066476 | 0 | 0 | 0 | 0 | 0.174556 | 1 | 0.133136 | false | 0 | 0.017751 | 0.059172 | 0.248521 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4aaf6a0aaeb1634e9f2e8a7e669a1b23d1a65dd3 | 30 | py | Python | lectures/week_1/test.py | ziga-solar/cog-sci-python | 1c8e3d829f039e1994be7c82844eff79b8cd74d3 | [
"MIT"
] | null | null | null | lectures/week_1/test.py | ziga-solar/cog-sci-python | 1c8e3d829f039e1994be7c82844eff79b8cd74d3 | [
"MIT"
] | null | null | null | lectures/week_1/test.py | ziga-solar/cog-sci-python | 1c8e3d829f039e1994be7c82844eff79b8cd74d3 | [
"MIT"
] | null | null | null | from math import pi
print(pi) | 10 | 19 | 0.766667 | 6 | 30 | 3.833333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 30 | 3 | 20 | 10 | 0.92 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
435c100f885d6f9c79e7af4bd9c66fda16e7925f | 9,076 | py | Python | tests/test_drivetrain.py | TechnoJays/robot2022 | a7afb67c8941d98f3ef794dbb824989835fc6548 | [
"MIT"
] | null | null | null | tests/test_drivetrain.py | TechnoJays/robot2022 | a7afb67c8941d98f3ef794dbb824989835fc6548 | [
"MIT"
] | null | null | null | tests/test_drivetrain.py | TechnoJays/robot2022 | a7afb67c8941d98f3ef794dbb824989835fc6548 | [
"MIT"
] | null | null | null | import pytest
from wpilib import IterativeRobotBase
from subsystems.drivetrain import Drivetrain
from wpilib.simulation import PWMSim
@pytest.fixture(scope="function")
def drivetrain_default(robot: IterativeRobotBase):
return Drivetrain(
robot, "TestDriveTrain", "../tests/test_configs/drivetrain_default.ini"
)
def test_drivetrain_default(drivetrain_default: Drivetrain):
assert drivetrain_default is not None
assert drivetrain_default._left_motor is not None
assert drivetrain_default._right_motor is not None
assert drivetrain_default._robot_drive is not None
# No gyro for 2022
assert drivetrain_default.is_gyro_enabled() is False
assert drivetrain_default.get_arcade_rotation_modifier() == -1
def test_drivetrain_channels_0_1(robot: IterativeRobotBase):
# given: a drivetrain
dt = Drivetrain(
robot, "TestDriveTrain", "../tests/test_configs/drivetrain_channels_0_1.ini"
)
# then: the drivetrain should be valid, and there should be motors
assert dt is not None
assert dt._left_motor is not None
assert dt._right_motor is not None
assert dt._robot_drive is not None
# and: the robot drive motors are real
left_m = PWMSim(dt._left_motor.getChannel())
right_m = PWMSim(dt._right_motor.getChannel())
# then: left motor is initialized and zero latched
assert left_m.getInitialized() is True
assert left_m.getRawValue() == 0.0
# TODO: determine how to check this accurately. Check safety enabled? What is zero latch?
assert left_m.getZeroLatch() is False
# and: right motor is initialized and zero latched
assert right_m.getInitialized() is True
assert right_m.getRawValue() == 0.0
# TODO: determine how to check this accurately. Check safety enabled? What is zero latch?
assert right_m.getZeroLatch() is False
@pytest.mark.parametrize(
"left_speed,right_speed,left_ex_speed,right_ex_speed",
[
(0.0, 0.0, 0.0, 0.0),
(0.5, 0.5, 0.0, 0.0),
(1.0, 1.0, 0.0, 0.0),
(-0.5, -0.5, 0.0, 0.0),
(-1.0, -1.0, 0.0, 0.0),
],
)
def test_drivetrain_zero_speed(
robot: IterativeRobotBase,
left_speed: float,
right_speed: float,
left_ex_speed: float,
right_ex_speed: float,
):
# given: a drivetrain
dt = Drivetrain(
robot, "TestDrivetrain", "../tests/test_configs/drivetrain_zero_speed.ini"
)
# then: the drivetrain should be valid, and there should be motors
assert dt is not None
assert dt._left_motor is not None
assert dt._right_motor is not None
assert dt._robot_drive is not None
assert dt._max_speed == 0.0
# and: the robot drive motors are real
left_m = PWMSim(dt._left_motor.getChannel())
right_m = PWMSim(dt._right_motor.getChannel())
# and: the drivetrain is "tank drived" at the left and right speed
dt.tank_drive(left_speed, right_speed)
# then: the speeds of the left and right motors should be approximately as set
assert left_m.getSpeed() == pytest.approx(left_ex_speed)
assert right_m.getSpeed() == pytest.approx(right_ex_speed)
@pytest.mark.parametrize(
"left_speed,right_speed,left_ex_speed,right_ex_speed",
[
(0.0, 0.0, 0.0, 0.0),
(0.5, 0.5, 0.25, -0.25),
(1.0, 1.0, 0.5, -0.5),
(-0.5, -0.5, -0.25, 0.25),
(-1.0, -1.0, -0.5, 0.5),
],
)
def test_drivetrain_half_speed(
robot: IterativeRobotBase,
left_speed: float,
right_speed: float,
left_ex_speed: float,
right_ex_speed: float,
):
# given: a drivetrain
dt = Drivetrain(
robot, "TestDrivetrain", "../tests/test_configs/drivetrain_half_speed.ini"
)
# then: the drivetrain should have a left and right motor with a max speed of 0.5
assert dt is not None
assert dt._left_motor is not None
assert dt._right_motor is not None
assert dt._robot_drive is not None
assert dt._max_speed == 0.5
# and: the robot drive motors are real
left_m = PWMSim(dt._left_motor.getChannel())
right_m = PWMSim(dt._right_motor.getChannel())
# and: the drivetrain is "tank driven" at the left and right speeds
dt.tank_drive(left_speed, right_speed)
# then: each motor's speed should be within 0.05 of the expected half-scaled speed
assert abs(left_m.getSpeed()) - abs(left_ex_speed) < 0.05
assert abs(right_m.getSpeed()) - abs(right_ex_speed) < 0.05
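# Note on the parametrize tables above and below: the expected right-motor
# speed is the negation of the left's (e.g. 0.25 vs -0.25), consistent
# with a differential drive that inverts one side so both wheels move the
# robot in the same direction.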
@pytest.mark.parametrize(
"left_speed,right_speed,left_ex_speed,right_ex_speed",
[
(0.0, 0.0, 0.0, 0.0),
(0.5, 0.5, 0.375, -0.375),
(1.0, 1.0, 0.75, -0.75),
(-0.5, -0.5, -0.375, 0.375),
(-1.0, -1.0, -0.75, 0.75),
],
)
def test_drivetrain_3_4_speed(
robot: IterativeRobotBase,
left_speed: float,
right_speed: float,
left_ex_speed: float,
right_ex_speed: float,
):
# given: a drivetrain
dt = Drivetrain(
robot, "TestDrivetrain", "../tests/test_configs/drivetrain_3_4_speed.ini"
)
# then: the drivetrain should have a left and right motor and 3/4 max speed
assert dt is not None
assert dt._left_motor is not None
assert dt._right_motor is not None
assert dt._robot_drive is not None
assert dt._max_speed == 0.75
# and: the robot drive motors are real
left_m = PWMSim(dt._left_motor.getChannel())
right_m = PWMSim(dt._right_motor.getChannel())
# and: the drivetrain is "tank driven" at the left and right speeds
dt.tank_drive(left_speed, right_speed)
# then: each motor's speed should be within 0.05 of the expected 3/4-scaled speed
assert abs(left_m.getSpeed()) - abs(left_ex_speed) < 0.05
assert abs(right_m.getSpeed()) - abs(right_ex_speed) < 0.05
@pytest.mark.parametrize(
"left_speed,right_speed,left_ex_speed,right_ex_speed",
[
(0.0, 0.0, 0.0, 0.0),
(0.5, 0.5, 0.5306122448979592, -0.5306122448979592),
(1.0, 1.0, 1.0, -1.0),
(-0.5, -0.5, -0.5306122448979592, 0.5306122448979592),
(-1.0, -1.0, -1.0, 1.0),
],
)
def test_drivetrain_full_speed(
robot: IterativeRobotBase,
left_speed: float,
right_speed: float,
left_ex_speed: float,
right_ex_speed: float,
):
# given: a drivetrain
dt = Drivetrain(
robot, "TestDriveTrain", "../tests/test_configs/drivetrain_full_speed.ini"
)
# then: the drivetrain should have a left and right motor at full speed
assert dt is not None
assert dt._left_motor is not None
assert dt._right_motor is not None
assert dt._robot_drive is not None
assert dt._max_speed == 1.0
# and: the robot drive motors are real
left_m = PWMSim(dt._left_motor.getChannel())
right_m = PWMSim(dt._right_motor.getChannel())
# and: the drivetrain is "tank driven" at the left and right speeds
dt.tank_drive(left_speed, right_speed)
# then: the speeds of the left and right motors should match the expected speeds
assert left_m.getSpeed() == pytest.approx(left_ex_speed)
assert right_m.getSpeed() == pytest.approx(right_ex_speed)
def test_drivetrain_left_inverted(robot: IterativeRobotBase):
dt = Drivetrain(
robot, "TestDriveTrain", "../tests/test_configs/drivetrain_left_inverted.ini"
)
assert dt is not None
assert dt._left_motor is not None
assert dt._right_motor is not None
assert dt._robot_drive is not None
left_m = PWMSim(dt._left_motor.getChannel())
right_m = PWMSim(dt._right_motor.getChannel())
assert left_m.getInitialized() is True
assert left_m.getSpeed() == 0.0
assert left_m.getZeroLatch() is False
assert right_m.getInitialized() is True
assert right_m.getSpeed() == 0.0
assert right_m.getZeroLatch() is False
assert dt._left_motor.getInverted() is True
assert dt._right_motor.getInverted() is False
def test_drivetrain_right_inverted(robot: IterativeRobotBase):
dt = Drivetrain(
robot, "TestDrivetrain", "../tests/test_configs/drivetrain_right_inverted.ini"
)
assert dt is not None
assert dt._left_motor is not None
assert dt._right_motor is not None
assert dt._robot_drive is not None
left_m = PWMSim(dt._left_motor.getChannel())
right_m = PWMSim(dt._right_motor.getChannel())
assert left_m.getInitialized() is True
assert left_m.getSpeed() == 0.0
assert left_m.getZeroLatch() is False
assert right_m.getInitialized() is True
assert right_m.getSpeed() == 0.0
assert right_m.getZeroLatch() is False
assert dt._left_motor.getInverted() is False
assert dt._right_motor.getInverted() is True
def test_drivetrain_left_disabled(robot: IterativeRobotBase):
dt = Drivetrain(
robot, "TestDrivetrain", "../tests/test_configs/drivetrain_left_disabled.ini"
)
assert dt is not None
assert dt._left_motor is None
assert dt._right_motor is not None
assert dt._robot_drive is None
def test_drivetrain_right_disabled(robot: IterativeRobotBase):
dt = Drivetrain(
robot, "TestDrivetrain", "../tests/test_configs/drivetrain_right_disabled.ini"
)
assert dt is not None
assert dt._left_motor is not None
assert dt._right_motor is None
assert dt._robot_drive is None
| 32.298932 | 87 | 0.688519 | 1,372 | 9,076 | 4.33965 | 0.078717 | 0.019819 | 0.019651 | 0.080618 | 0.873194 | 0.870003 | 0.843635 | 0.805845 | 0.805845 | 0.791737 | 0 | 0.040212 | 0.210886 | 9,076 | 280 | 88 | 32.414286 | 0.79112 | 0.153482 | 0 | 0.612745 | 0 | 0 | 0.108963 | 0.089626 | 0 | 0 | 0 | 0 | 0.352941 | 1 | 0.053922 | false | 0 | 0.019608 | 0.004902 | 0.078431 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4384f974a145a3887e8a267fcfe1bb60201e613a | 38 | py | Python | src/pe2loaddata/content/item/__init__.py | dlogan/pe2loaddata | 4121fbab354278f4504fa8b0caabc7d3da48e91c | [
"BSD-2-Clause"
] | 1 | 2018-02-17T00:29:55.000Z | 2018-02-17T00:29:55.000Z | src/pe2loaddata/content/item/__init__.py | dlogan/pe2loaddata | 4121fbab354278f4504fa8b0caabc7d3da48e91c | [
"BSD-2-Clause"
] | 18 | 2018-05-22T16:37:23.000Z | 2022-03-16T19:24:52.000Z | src/pe2loaddata/content/item/__init__.py | dlogan/pe2loaddata | 4121fbab354278f4504fa8b0caabc7d3da48e91c | [
"BSD-2-Clause"
] | 3 | 2021-06-11T18:25:16.000Z | 2022-03-21T15:36:26.000Z | from .root_handler import RootHandler
| 19 | 37 | 0.868421 | 5 | 38 | 6.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 38 | 1 | 38 | 38 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
438ec275aefe9377faed7a811df3d6bac9445d5f | 42 | py | Python | xsimma/remd/__init__.py | XipingGong/xsimma | 72ce5eee0161a0831feaac86c709480fa86821b9 | [
"BSD-3-Clause"
] | 1 | 2021-02-01T22:33:02.000Z | 2021-02-01T22:33:02.000Z | xsimma/remd/__init__.py | XipingGong/xsimma | 72ce5eee0161a0831feaac86c709480fa86821b9 | [
"BSD-3-Clause"
] | null | null | null | xsimma/remd/__init__.py | XipingGong/xsimma | 72ce5eee0161a0831feaac86c709480fa86821b9 | [
"BSD-3-Clause"
] | 1 | 2021-02-05T04:49:45.000Z | 2021-02-05T04:49:45.000Z |
from .remd import *
from .plot import *
| 8.4 | 19 | 0.666667 | 6 | 42 | 4.666667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.238095 | 42 | 4 | 20 | 10.5 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
438f51bf4e0647a1cf251c979da1066fdc5efd48 | 31 | py | Python | edflow/util/__init__.py | edflow/edflow | 317cb1b61bf810a68004788d08418a5352653264 | [
"MIT"
] | 23 | 2019-04-04T07:52:57.000Z | 2022-02-02T03:11:07.000Z | edflow/util/__init__.py | edflow/edflow | 317cb1b61bf810a68004788d08418a5352653264 | [
"MIT"
] | 149 | 2019-04-04T09:53:01.000Z | 2020-07-21T16:55:32.000Z | edflow/util/__init__.py | edflow/edflow | 317cb1b61bf810a68004788d08418a5352653264 | [
"MIT"
] | 12 | 2019-04-04T07:52:58.000Z | 2020-08-28T12:30:03.000Z | from edflow.util.util import *
| 15.5 | 30 | 0.774194 | 5 | 31 | 4.8 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 31 | 1 | 31 | 31 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
438fe102f4747f75efa33ffe24d85c118ce39850 | 63,036 | py | Python | scripts/summary_spreadsheet.py | GMLC-TDC/helics_benchmark_results | 8a133a89645b98c94858c4343a79d4f4a95e3e04 | [
"BSD-3-Clause"
] | null | null | null | scripts/summary_spreadsheet.py | GMLC-TDC/helics_benchmark_results | 8a133a89645b98c94858c4343a79d4f4a95e3e04 | [
"BSD-3-Clause"
] | 17 | 2019-12-05T18:21:03.000Z | 2020-06-17T20:20:52.000Z | scripts/summary_spreadsheet.py | GMLC-TDC/helics_benchmark_results | 8a133a89645b98c94858c4343a79d4f4a95e3e04 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Thu Mar 12 07:40:00 2020
Creates metrics and calculates ratios for analysis
of HELICS performance for a given benchmark. For each
benchmark, a spreadsheet and a csv file summarizing the
calculated metrics and ratios are generated. A summary
covering all the benchmarks is also created and saved as
a single Excel spreadsheet and csv file.
This script can be run standalone to generate the summary
spreadsheet for all the benchmark results.
The command line arguments for the script can be found in the code
following the "if __name__ == '__main__':" line
at the end of this file.
@author: barn553
"""
import argparse
import pandas as pd
import numpy as np
from scipy.stats import linregress as lr
import logging
import pprint
import os
from make_dataframe import make_dataframe1, make_dataframe2
import sys
# Setting up logger
logger = logging.getLogger(__name__)
# Setting up pretty printing, mostly for debugging.
pp = pprint.PrettyPrinter(indent=4)
def get_ratio1(dataframe, groupby_columns, index_columns, filter_columns,
value_columns, metric_columns, time):
"""This function gets all the metrics' ratios for the entire dataframe.
Args:
dataframe (pandas dataframe) - Dataframe that contains the
desired information for calculating metrics' ratios.
groupby_columns (list) - List of columns to group the dataframe
by.
index_columns (list) - List of columns to set as the index
after calculating the ratios.
filter_columns (list) - List of columns to filter the grouped
dataframe by for calculating the ratios.
value_columns (list) - List of specific values to locate in the
dataframe to be used for the denominator of the ratios.
metric_columns (list) - List of metrics to get ratios for.
time (str) - Used to assert there is a one-to-one relationship
between a metric value and the time value; should be 'real_time'
or 'elapsed_time'.
Returns:
ratio_df (pandas dataframe) - Contains the original
information plus the metrics' ratios' results.
"""
lst = []
for fs, vs, ms in zip(filter_columns, value_columns, metric_columns):
for g, df in dataframe.groupby(groupby_columns):
a_df = df
for f in a_df['{}'.format(fs)].unique():
a_df = a_df[a_df['{}'.format(fs)] == f]
a_df = a_df.set_index('core_type')
try:
value1 = float(
a_df.loc['{}'.format(vs), '{}'.format(ms)])
value2 = float(
a_df.loc['{}'.format(vs), '{}'.format(time)])
a_df['{}_ratio'.format(ms)] = np.ma.array(
a_df['{}'.format(ms)],
mask=np.isnan(a_df['{}'.format(ms)])) / value1
a_df['{}_ratio'.format(time)] = np.ma.array(
a_df['{}'.format(time)],
mask=np.isnan(a_df['{}'.format(time)])) / value2
except Exception as e:
logging.warning('core type {} is not in the index'.format(
e))
a_df['{}_ratio'.format(ms)] = np.nan
a_df['{}_ratio'.format(time)] = np.nan
a_df = a_df.reset_index()
cols = index_columns+['{}'.format(ms),
'{}_ratio'.format(ms),
'{}_ratio'.format(time)]
a_df = a_df[cols]
lst.append(a_df)
ratio_df = pd.concat(lst).set_index(index_columns).reset_index()
return ratio_df
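# A minimal usage sketch for get_ratio1, modeled on the calls in
# create_spreadsheet1 below (a simplified, hypothetical column subset; the
# real calls pass the full index/metric column lists). It normalizes each
# core_type's 'spf' and 'real_time' against the 'inproc' core's values:
#
# ratios = get_ratio1(
#     metrics_df,
#     groupby_columns=['benchmark', 'run_id', 'federate_count'],
#     index_columns=['benchmark', 'run_id', 'core_type',
#                    'federate_count', 'real_time'],
#     filter_columns=['federate_count'],
#     value_columns=['inproc'],
#     metric_columns=['spf'],
#     time='real_time')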
def get_ratio2(dataframe, groupby_columns, index_columns, filter_columns,
value_columns, metric_columns, time):
"""This function gets all the metrics' ratios for the entire dataframe.
Args:
dataframe (pandas dataframe) - Dataframe that contains the
desired information for calculating metrics' ratios.
groupby_columns (list) - List of columns to group the dataframe
by.
index_columns (list) - List of columns to set as the index
after calculating the ratios.
filter_columns (list) - List of columns to filter the grouped
dataframe by for calculating the ratios.
value_columns (list) - List of specific values to locate in the
dataframe to be used for the denominator of the ratios.
metric_columns (list) - List of metrics to get ratios for.
time (str) - Used to assert there is a one-to-one relationship
between a metric value and the time value; should be 'real_time'
or 'elapsed_time'.
Returns:
ratio_df (pandas dataframe) - Contains the original
information plus the metrics' ratios' results.
"""
lst = []
for fs, vs, ms in zip(filter_columns, value_columns, metric_columns):
for g, df in dataframe.groupby(groupby_columns):
a_df = df
for f in a_df['{}'.format(fs)].unique():
a_df = a_df[a_df['{}'.format(fs)] == f]
a_df = a_df.set_index('core_type')
try:
value1 = float(
a_df.loc['{}'.format(vs), '{}'.format(ms)][0])
value2 = float(
a_df.loc['{}'.format(vs), '{}'.format(time)][0])
a_df['{}_ratio'.format(ms)] = np.ma.array(
a_df['{}'.format(ms)],
mask=np.isnan(a_df['{}'.format(ms)])) / value1
a_df['{}_ratio'.format(time)] = np.ma.array(
a_df['{}'.format(time)],
mask=np.isnan(a_df['{}'.format(time)])) / value2
except Exception as e:
logging.warning('core type {} is not in the index'.format(
e))
a_df['{}_ratio'.format(ms)] = np.nan
a_df['{}_ratio'.format(time)] = np.nan
a_df = a_df.reset_index()
cols = index_columns+['{}'.format(ms),
'{}_ratio'.format(ms),
'{}_ratio'.format(time)]
a_df = a_df[cols]
lst.append(a_df)
ratio_df = pd.concat(lst).set_index(index_columns).reset_index()
return ratio_df
def get_slopes(dataframe, benchmark, xdatas, ydatas):
"""This function gets all the slopes for the benchmarks
and the core_types.
Args:
dataframe (pandas dataframe) - Contains all the
desired information along with the results of the
metrics' ratios' calculations.
benchmark (str) - Specific benchmark to get slopes for.
xdatas (list) - List of values to be considered as x-values
in a linear regression approach to get the slope.
ydatas (list) - List of values to be considered as y-values
in a linear regression approach to get the slope.
Returns:
slope_df (pandas dataframe) - Contains the original
desired information, the metrics' ratios' results, and the
calculated slopes for the metrics' ratios.
"""
df_list = []
for xs, ys in zip(xdatas, ydatas):
benchmarks = []
run_ids = []
core_types = []
slopes = []
for run_id in dataframe.run_id.unique():
for core_type in dataframe.core_type.unique():
df = dataframe[(dataframe.run_id == run_id) &
(dataframe.core_type == core_type)]
x = np.nan_to_num(np.asarray(df['{}'.format(xs)]))
y = np.nan_to_num(np.asarray(df['{}'.format(ys)]))
if len(x) == 0 or len(y) == 0:
continue
m, intercept, r_value, p_value, std_err = lr(x, y)
slopes.append(m)
benchmarks.append(benchmark)
run_ids.append(run_id)
core_types.append(core_type)
data = {'benchmark': benchmarks,
'run_id': run_ids,
'core_type': core_types,
'{}_vs_{}_slope'.format(xs, ys): slopes}
df = pd.DataFrame(data, index=[s for s in range(len(slopes))])
df_list.append(df)
slope_df = pd.concat(df_list, axis=0, ignore_index=True)
return slope_df
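# A minimal sketch of what get_slopes computes (hypothetical numbers): for
# a single run_id/core_type pair with federate_count x = [2, 4, 8] and
# spf_ratio y = [1.0, 2.0, 4.0], scipy's linregress fits y = 0.5 * x, so
# the 'federate_count_vs_spf_ratio_slope' column holds 0.5 for that pair:
#
# slope_df = get_slopes(ratio_df, 'echoBenchmark',
#                       xdatas=['federate_count'], ydatas=['spf_ratio'])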
def create_metrics1(dataframe, filter_columns, groupby_columns, metric_names,
columns, operations, time):
"""This function creates/calculates the desired metrics for analysis.
Args:
dataframe (pandas dataframe) - Contains all the
desired information for analysis.
filter_columns (list) - List of columns to use to create a
subset of the original dataframe.
groupby_columns (list) - List of columns to use to group the
dataframe subset.
metric_names (list) - List of names for the metrics that are
to be created/calculated.
columns (list) - List of tuples of columns to use for calculating
the metrics.
operations (list) - List of mathematical operations to perform
when calculating the metrics; should be either '*' or '/'.
time (str) - Used for getting a ratio of the times; should be
'real_time' or 'elapsed_time'.
Returns:
main_df (pandas dataframe) - Contains the original
desired information and the newly created/calculated metrics to
be used for analysis.
"""
# Making sure there is a one-to-one relationship between real_time
# and federate_count, etc.
df = dataframe[filter_columns].groupby(
groupby_columns)['{}'.format(time)].min()
df.name = '{}'.format(time)
df = df.reset_index()
for m, c, o in zip(metric_names, columns, operations):
if o == '/':
df['{}'.format(m)] = np.array(df['{}'.format(c[0])])\
/ np.array(df['{}'.format(c[1])]).astype(float)
elif o == '*':
df['{}'.format(m)] = np.array(df['{}'.format(c[0])])\
* np.array(df['{}'.format(c[1])]).astype(float)
else:
logging.error('Invalid operation; should be "/" or "*".')
main_df = df
return main_df
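# A minimal sketch of the metric construction above (hypothetical numbers):
# with metric_names=['spf'], columns=[('real_time', 'federate_count')] and
# operations=['/'], a row with real_time=10.0 and federate_count=4 gets
# spf = 10.0 / 4 = 2.5, i.e. (presumably) seconds of wall time per federate.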
def create_metrics2(dataframe, filter_columns, groupby_columns, metric_names,
columns, operations, time):
"""This function creates/calculates the desired metrics for analysis.
Args:
dataframe (pandas dataframe) - Contains all the
desired information for analysis.
filter_columns (list) - List of columns to use to create a
subset of the original dataframe.
groupby_columns (list) - List of columns to use to group the
dataframe subset.
metric_names (list) - List of names for the metrics that are
to be created/calculated.
columns (list) - List of tuples of columns to use for calculating
the metrics.
operations (list) - List of mathematical operations to perform
when calculating the metrics; should be either '*' or '/'.
time (str) - Used for getting a ratio of the times; should be
'real_time' or 'elapsed_time'.
Returns:
main_df (pandas dataframe) - Contains the original
desired information and the newly created/calculated metrics to
be used for analysis.
"""
# Unlike create_metrics1, no aggregation is performed here; the
# dataframe is used as-is.
df = dataframe
for m, c, o in zip(metric_names, columns, operations):
if o == '/':
df['{}'.format(m)] = np.array(df['{}'.format(c[0])])\
/ np.array(df['{}'.format(c[1])]).astype(float)
elif o == '*':
df['{}'.format(m)] = np.array(df['{}'.format(c[0])])\
* np.array(df['{}'.format(c[1])]).astype(float)
else:
logging.error('Invalid operation; should be "/" or "*".')
main_df = df
return main_df
def cpu_score(dataframe, bm_type):
"""This function calculates the CPU Benchmark Score
for a given dataframe and benchmark type.
Args:
dataframe (pandas dataframe) - Contains all the information
for calculating the CPU Benchmark Score.
bm_type (str) - The type of benchmark; 'full', 'key', or
'multinode'.
Returns:
score_df (pandas dataframe) - A dataframe that contains the
original information along with the calculated CPU
benchmark score(s).
"""
if bm_type == 'full':
dataframe = dataframe[dataframe.benchmark != 'cEchoBenchmark']
df_list = []
for g, df in dataframe.groupby('helics_version_string'):
score_df = df
score_df = score_df.set_index('benchmark')
try:
if ('messageSendBenchmark' in score_df.index and
'messageLookupBenchmark' in score_df.index):
fed_scores = [
score_df.loc[i, 'spf'].mean()
for i in score_df.index.unique()
if i != 'messageSendBenchmark']
cycles = [
score_df.loc[i, 'cpf'].mean()
for i in score_df.index.unique()
if i != 'messageSendBenchmark']
spi = score_df.loc[
'messageLookupBenchmark', 'spi'].mean()
cpi = score_df.loc[
'messageLookupBenchmark', 'cpi'].mean()
spms = score_df.loc[
'messageSendBenchmark', 'spms'].mean()
cpms = score_df.loc[
'messageSendBenchmark', 'cpms'].mean()
spmc = score_df.loc[
'messageSendBenchmark', 'spmc'].mean()
cpmc = score_df.loc[
'messageSendBenchmark', 'cpmc'].mean()
some_scores = [spi, cpi, spms, cpms, spmc, cpmc]
all_scores = np.array(
fed_scores + cycles + some_scores)
score_df['cpu_score'] = np.round(
np.mean((all_scores / (np.median(all_scores)))),
decimals=0)
elif ('messageSendBenchmark' in score_df.index and
'messageLookupBenchmark' not in score_df.index):
fed_scores = [
score_df.loc[i, 'spf'].mean()
for i in score_df.index.unique()
if i != 'messageSendBenchmark']
cycles = [
score_df.loc[i, 'cpf'].mean()
for i in score_df.index.unique()
if i != 'messageSendBenchmark']
spms = score_df.loc[
'messageSendBenchmark', 'spms'].mean()
cpms = score_df.loc[
'messageSendBenchmark', 'cpms'].mean()
spmc = score_df.loc[
'messageSendBenchmark', 'spmc'].mean()
cpmc = score_df.loc[
'messageSendBenchmark', 'cpmc'].mean()
all_scores = np.array(
fed_scores + cycles + [spms, cpms, spmc, cpmc])
score_df['cpu_score'] = np.round(
np.mean((all_scores / (np.median(all_scores)))),
decimals=0)
elif ('messageSendBenchmark' not in score_df.index and
'messageLookupBenchmark' in score_df.index):
fed_scores = [
score_df.loc[i, 'spf'].mean()
for i in score_df.index.unique()
if i != 'messageSendBenchmark']
cycles = [
score_df.loc[i, 'cpf'].mean()
for i in score_df.index.unique()
if i != 'messageSendBenchmark']
spi = score_df.loc[
'messageLookupBenchmark', 'spi'].mean()
cpi = score_df.loc[
'messageLookupBenchmark', 'cpi'].mean()
all_scores = np.array(
fed_scores + cycles + [spi, cpi])
score_df['cpu_score'] = np.round(
np.mean((all_scores / (np.median(all_scores)))),
decimals=0)
elif ('messageSendBenchmark' not in score_df.index and
'messageLookupBenchmark' not in score_df.index):
fed_scores = [
score_df.loc[i, 'spf'].mean()
for i in score_df.index.unique()
if i != 'messageSendBenchmark']
cycles = [
score_df.loc[i, 'cpf'].mean()
for i in score_df.index.unique()
if i != 'messageSendBenchmark']
all_scores = np.array(fed_scores+cycles)
score_df['cpu_score'] = np.round(
np.mean((all_scores / (np.median(all_scores)))),
decimals=0)
else:
logging.error('Failed to calculate score for {}'.format(g))
except Exception as e:
logging.warning('benchmark {} does not exist'.format(e))
score_df['cpu_score'] = np.nan
score_df = score_df.reset_index()
score_df = score_df[
['helics_version_string', 'date', 'benchmark',
'cpu_score', 'cpf', 'cpi',
'cpms', 'cpmc', 'spf',
'spi', 'spmc', 'spms']]
df_list.append(score_df)
score_df = pd.concat(df_list).set_index(
'helics_version_string').reset_index()
elif bm_type == 'key':
df_list = []
for g, df in dataframe.groupby('helics_version_string'):
score_df = df
score_df = score_df.set_index('benchmark')
try:
if 'messageLookupBenchmark' in score_df.index:
fed_scores = [
score_df.loc[i, 'spf'].mean()
for i in score_df.index.unique()]
cycles = [
score_df.loc[i, 'cpf'].mean()
for i in score_df.index.unique()]
spi = score_df.loc['messageLookupBenchmark', 'spi'].mean()
cpi = score_df.loc['messageLookupBenchmark', 'cpi'].mean()
all_scores = np.array(
fed_scores+cycles+[spi, cpi])
score_df['cpu_score'] = np.round(
np.mean((all_scores / (np.median(all_scores)))),
decimals=0)
elif 'messageLookupBenchmark' not in score_df.index:
fed_scores = [
score_df.loc[i, 'spf'].mean()
for i in score_df.index.unique()]
cycles = [
score_df.loc[i, 'cpf'].mean()
for i in score_df.index.unique()]
all_scores = np.array(fed_scores+cycles)
score_df['cpu_score'] = np.round(
np.mean((all_scores / (np.median(all_scores)))),
decimals=0)
else:
logging.error('Failed to calculate score for {}'.format(g))
except Exception as e:
logging.warning('benchmark {} does not exist'.format(e))
score_df['cpu_score'] = np.nan
score_df = score_df.reset_index()
score_df = score_df[[
'helics_version_string', 'date', 'benchmark', 'cpu_score',
'cpf', 'cpi', 'spf', 'spi'
]]
df_list.append(score_df)
score_df = pd.concat(df_list).set_index(
'helics_version_string').reset_index()
elif bm_type == 'multinode':
df_list = []
for g, df in dataframe.groupby('helics_version_string'):
score_df = df
score_df = score_df.set_index('benchmark')
try:
if ('MessageExchangeFederate' in score_df.index and
'PholdFederate' in score_df.index):
fed_scores = [
score_df.loc[i, 'spf'].mean()
for i in score_df.index.unique()
if i != 'MessageExchangeFederate']
cycles = [
score_df.loc[i, 'cpf'].mean()
for i in score_df.index.unique()
if i != 'MessageExchangeFederate']
spe = score_df.loc['PholdFederate', 'spe'].mean()
cpe = score_df.loc['PholdFederate', 'cpe'].mean()
cpmc = score_df.loc[
'MessageExchangeFederate', 'cpmc'].mean()
cpms = score_df.loc[
'MessageExchangeFederate', 'cpms'].mean()
spms = score_df.loc[
'MessageExchangeFederate', 'spms'].mean()
spmc = score_df.loc[
'MessageExchangeFederate', 'spmc'].mean()
some_scores = [spe, cpe, cpmc, cpms, spms, spmc]
all_scores = np.array(
fed_scores + cycles + some_scores)
score_df['cpu_score'] = np.round(
np.mean((all_scores/(np.median(all_scores)))),
decimals=0)
elif ('MessageExchangeFederate' in score_df.index and
'PholdFederate' not in score_df.index):
fed_scores = [
score_df.loc[i, 'spf'].mean()
for i in score_df.index.unique()
if i != 'MessageExchangeFederate']
cycles = [
score_df.loc[i, 'cpf'].mean()
for i in score_df.index.unique()
if i != 'MessageExchangeFederate']
cpmc = score_df.loc[
'MessageExchangeFederate', 'cpmc'].mean()
cpms = score_df.loc[
'MessageExchangeFederate', 'cpms'].mean()
spms = score_df.loc[
'MessageExchangeFederate', 'spms'].mean()
spmc = score_df.loc[
'MessageExchangeFederate', 'spmc'].mean()
all_scores = np.array(
fed_scores + cycles + [spms, spmc, cpmc, cpms])
score_df['cpu_score'] = np.round(
np.mean((all_scores / (np.median(all_scores)))),
decimals=0)
elif ('MessageExchangeFederate' not in score_df.index and
'PholdFederate' in score_df.index):
fed_scores = [
score_df.loc[i, 'spf'].mean()
for i in score_df.index.unique()
if i != 'MessageExchangeFederate']
cycles = [
score_df.loc[i, 'cpf'].mean()
for i in score_df.index.unique()
if i != 'MessageExchangeFederate']
spe = score_df.loc['PholdFederate', 'spe'].mean()
cpe = score_df.loc['PholdFederate', 'cpe'].mean()
all_scores = np.array(
fed_scores+cycles+[spe, cpe])
score_df['cpu_score'] = np.round(
np.mean((all_scores / (np.median(all_scores)))),
decimals=0)
elif ('MessageExchangeFederate' not in score_df.index and
'PholdFederate' not in score_df.index):
fed_scores = [
score_df.loc[i, 'spf'].mean()
for i in score_df.index.unique()
if i != 'MessageExchangeFederate']
cycles = [
score_df.loc[i, 'cpf'].mean()
for i in score_df.index.unique()
if i != 'MessageExchangeFederate']
all_scores = np.array(
fed_scores + cycles)
score_df['cpu_score'] = np.round(
np.mean((all_scores / (np.median(all_scores)))),
decimals=0)
else:
logging.error('Failed to calculate score for {}'.format(g))
except Exception as e:
logging.warning('benchmark {} does not exist'.format(e))
score_df['cpu_score'] = np.nan
score_df = score_df.reset_index()
score_df = score_df[[
'helics_version_string', 'date', 'benchmark', 'cpu_score',
'cpf', 'cpe', 'spf', 'spe',
'spmc', 'spms', 'cpms', 'cpmc'
]]
df_list.append(score_df)
score_df = pd.concat(df_list).set_index(
'helics_version_string').reset_index()
else:
logging.error(
'Invalid value; should be "full", "key" or "multinode".')
return score_df
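# A worked sketch of the scoring formula used throughout cpu_score above
# (hypothetical numbers): for all_scores = [1.0, 2.0, 4.0], the median is
# 2.0, the normalized scores are [0.5, 1.0, 2.0], their mean is ~1.17, and
# np.round(..., decimals=0) gives a cpu_score of 1.0.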
def relative_standard_deviation(x):
"""This function calculates the relative standard deviation;
simply put, it is a statistical calculation that compares the
standard deviation in relation to the mean.
Args:
x (array) - Array of float values for calculating the
relative standard deviation
Returns:
np.std(x) / np.mean(x) (float) - The relative standard
deviation.
"""
return np.std(x) / np.mean(x)
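# A worked example (hypothetical values): for x = [2.0, 4.0, 6.0],
# np.mean(x) is 4.0 and np.std(x) is ~1.633, so the relative standard
# deviation is ~0.41, i.e. the spread is roughly 41% of the mean.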
def create_pivot_tables(dataframe, index_columns, value_columns):
"""This function creates all the pivot tables to send to an Excel
spreadsheet.
Args:
dataframe (pandas dataframe) - Final formatted dataframe that
contains all the information, results/calculations for analysis.
index_columns (list) - List of columns to be the indices
for the pivot table.
value_columns (list) - List of metric columns for the pivot
table to aggregate; the default aggregation is the mean.
Returns:
p (pandas pivot table) - Pivot table of the final dataframe.
"""
# Creating pivot_tables:
p = pd.pivot_table(
dataframe, index=index_columns, values=value_columns, fill_value='')
return p
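# A minimal usage sketch (hypothetical column choices; any columns present
# in the final dataframe work). Rows are grouped by the index columns and
# each value column is averaged, mean being pandas' default aggregation:
#
# pivot = create_pivot_tables(
#     final_df,
#     index_columns=['benchmark', 'core_type'],
#     value_columns=['spf', 'cpf'])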
def create_spreadsheet1(dataframe, filename, output_path):
"""This function combines all the above functions and
creates a spreadsheet and csv for 'benchmark_type' = 'full'.
Args:
dataframe (pandas dataframe) - Contains all the information
for analysis.
filename (str) - Name of the file for saving the results as
an excel spreadsheet and csv file.
output_path (str) - Location to send the analysis files.
Returns:
(null)
"""
logging.info('Filtering to benchmark_type = "full"...')
dataframe = dataframe[
(dataframe.benchmark_type == 'full') &
(dataframe.benchmark != 'actionMessageBenchmark') &
(dataframe.benchmark != 'conversionBenchmark')]
c_echo_df = dataframe[dataframe.benchmark == 'cEchoBenchmark']
echo_res_df = dataframe[dataframe.benchmark == 'echoBenchmark']
echo_msg_df = dataframe[dataframe.benchmark == 'echoMessageBenchmark']
msg_lkp_df = dataframe[dataframe.benchmark == 'messageLookupBenchmark']
msg_send_df = dataframe[dataframe.benchmark == 'messageSendBenchmark']
ring_df = dataframe[dataframe.benchmark == 'ringBenchmark']
ring_msg_df = dataframe[dataframe.benchmark == 'ringMessageBenchmark']
phold_df = dataframe[dataframe.benchmark == 'pholdBenchmark']
filter_df = dataframe[dataframe.benchmark == 'filterBenchmark']
timing_df = dataframe[dataframe.benchmark == 'timingBenchmark']
# Getting all necessary info for the functions
logging.info('Saving the necessary information to memory...')
met_fed_cols = [
'benchmark', 'helics_version_string', 'date', 'run_id', 'core_type',
'num_cpus', 'mhz_per_cpu', 'federate_count', 'real_time'
]
met_fed_groupby_cols = [
'benchmark', 'helics_version_string', 'date', 'run_id',
'core_type', 'num_cpus', 'mhz_per_cpu', 'federate_count'
]
met_fed_metrics = ['spf', 'new_mhz_per_cpu', 'cpf']
met_fed_cols_tuples = [
('real_time', 'federate_count'), ('real_time', 'mhz_per_cpu'),
('spf', 'new_mhz_per_cpu')
]
met_fed_ops = ['/', '*', '*']
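# Reading the metric definitions above (inferred from the column tuples
# and operations): 'spf' = real_time / federate_count, presumably seconds
# per federate. Metrics are computed in list order, so later entries such
# as 'cpf' can reference metrics created earlier in the same call.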
r_fed_groupby_columns = [
'benchmark', 'helics_version_string', 'date', 'run_id',
'num_cpus', 'mhz_per_cpu', 'federate_count'
]
r_fed_index_columns = met_fed_cols
r_fed_filter_columns = ['federate_count']*2
r_fed_value_columns = ['inproc']*2
r_fed_metric_columns = ['spf', 'cpf']
met_filt_cols = [
'benchmark', 'helics_version_string', 'date', 'run_id',
'core_type', 'num_cpus', 'mhz_per_cpu', 'federate_count',
'filter_location', 'real_time'
]
met_filt_groupby_cols = [
'benchmark', 'helics_version_string', 'date', 'run_id', 'core_type',
'num_cpus', 'mhz_per_cpu', 'federate_count', 'filter_location'
]
met_filt_metrics = met_fed_metrics
met_filt_cols_tuples = met_fed_cols_tuples
met_filt_ops = met_fed_ops
r_filt_groupby_columns = [
'benchmark', 'helics_version_string', 'date', 'run_id',
'num_cpus', 'mhz_per_cpu', 'filter_location', 'federate_count'
]
r_filt_index_columns = met_filt_cols
r_filt_filter_columns = r_fed_filter_columns
r_filt_value_columns = r_fed_value_columns
r_filt_metric_columns = r_fed_metric_columns
met_int_cols = [
'benchmark', 'helics_version_string', 'date', 'run_id',
'core_type', 'num_cpus', 'mhz_per_cpu', 'federate_count',
'interface_count', 'real_time'
]
met_int_groupby_cols = [
'benchmark', 'helics_version_string', 'date', 'run_id', 'core_type',
'num_cpus', 'mhz_per_cpu', 'federate_count', 'interface_count'
]
met_int_metrics = ['spf', 'spi', 'new_mhz_per_cpu', 'cpf', 'cpi']
met_int_cols_tuples = [
('real_time', 'federate_count'), ('real_time', 'interface_count'),
('real_time', 'mhz_per_cpu'), ('spf', 'new_mhz_per_cpu'),
('spi', 'new_mhz_per_cpu')
]
met_int_ops = ['/', '/', '*', '*', '*']
r_int_groupby_columns = [
'benchmark', 'helics_version_string', 'date', 'run_id',
'num_cpus', 'mhz_per_cpu', 'federate_count', 'interface_count'
]
r_int_index_columns = met_int_cols
r_int_filter_columns = ['interface_count']*4
r_int_value_columns = ['inproc']*4
r_int_metric_columns = ['spf', 'spi', 'cpf', 'cpi']
met_msg_cols = [
'benchmark', 'helics_version_string', 'date', 'run_id',
'core_type', 'num_cpus', 'mhz_per_cpu', 'message_count',
'message_size', 'real_time'
]
met_msg_groupby_cols = [
'benchmark', 'helics_version_string', 'date', 'run_id', 'core_type',
'num_cpus', 'mhz_per_cpu', 'message_count', 'message_size'
]
met_msg_metrics = ['spms', 'spmc', 'new_mhz_per_cpu', 'cpms', 'cpmc']
met_msg_cols_tuples = [
('real_time', 'message_size'), ('real_time', 'message_count'),
('real_time', 'mhz_per_cpu'), ('spms', 'new_mhz_per_cpu'),
('spmc', 'new_mhz_per_cpu')
]
met_msg_ops = ['/', '/', '*', '*', '*']
r_msg_groupby_columns = [
'benchmark', 'helics_version_string', 'date', 'run_id',
'num_cpus', 'mhz_per_cpu', 'message_size', 'message_count'
]
r_msg_index_columns = met_msg_cols
r_msg_filter_columns = ['message_count']*4
r_msg_value_columns = ['inproc']*4
r_msg_metric_columns = ['spms', 'spmc', 'cpms', 'cpmc']
# Applying the functions
logging.info('Creating the desired metrics and getting the ratios...')
c_echo_ratio = get_ratio1(
create_metrics1(
c_echo_df, met_fed_cols, met_fed_groupby_cols,
met_fed_metrics, met_fed_cols_tuples, met_fed_ops,
'real_time'),
r_fed_groupby_columns, r_fed_index_columns, r_fed_filter_columns,
r_fed_value_columns, r_fed_metric_columns, 'real_time')
echo_ratio = get_ratio1(
create_metrics1(
echo_res_df, met_fed_cols, met_fed_groupby_cols,
met_fed_metrics, met_fed_cols_tuples, met_fed_ops,
'real_time'),
r_fed_groupby_columns, r_fed_index_columns, r_fed_filter_columns,
r_fed_value_columns, r_fed_metric_columns, 'real_time')
echo_msg_ratio = get_ratio1(
create_metrics1(
echo_msg_df, met_fed_cols, met_fed_groupby_cols,
met_fed_metrics, met_fed_cols_tuples, met_fed_ops,
'real_time'),
r_fed_groupby_columns, r_fed_index_columns, r_fed_filter_columns,
r_fed_value_columns, r_fed_metric_columns, 'real_time')
ring_ratio = get_ratio1(
create_metrics1(
ring_df, met_fed_cols, met_fed_groupby_cols,
met_fed_metrics, met_fed_cols_tuples, met_fed_ops,
'real_time'),
r_fed_groupby_columns, r_fed_index_columns, r_fed_filter_columns,
r_fed_value_columns, r_fed_metric_columns, 'real_time')
ring_msg_ratio = get_ratio1(
create_metrics1(
ring_msg_df, met_fed_cols, met_fed_groupby_cols,
met_fed_metrics, met_fed_cols_tuples, met_fed_ops,
'real_time'),
r_fed_groupby_columns, r_fed_index_columns, r_fed_filter_columns,
r_fed_value_columns, r_fed_metric_columns, 'real_time')
phold_ratio = get_ratio1(
create_metrics1(
phold_df, met_fed_cols, met_fed_groupby_cols,
met_fed_metrics, met_fed_cols_tuples, met_fed_ops,
'real_time'),
r_fed_groupby_columns, r_fed_index_columns, r_fed_filter_columns,
r_fed_value_columns, r_fed_metric_columns, 'real_time')
filter_ratio = get_ratio1(
create_metrics1(
filter_df, met_filt_cols, met_filt_groupby_cols,
met_filt_metrics, met_filt_cols_tuples, met_filt_ops,
'real_time'),
r_filt_groupby_columns, r_filt_index_columns, r_filt_filter_columns,
r_filt_value_columns, r_filt_metric_columns, 'real_time')
timing_ratio = get_ratio1(
create_metrics1(
timing_df, met_fed_cols, met_fed_groupby_cols,
met_fed_metrics, met_fed_cols_tuples, met_fed_ops,
'real_time'),
r_fed_groupby_columns, r_fed_index_columns, r_fed_filter_columns,
r_fed_value_columns, r_fed_metric_columns, 'real_time')
msg_lkp_ratio = get_ratio1(
create_metrics1(
msg_lkp_df, met_int_cols, met_int_groupby_cols,
met_int_metrics, met_int_cols_tuples, met_int_ops,
'real_time'),
r_int_groupby_columns, r_int_index_columns, r_int_filter_columns,
r_int_value_columns, r_int_metric_columns, 'real_time')
msg_send_ratio = get_ratio1(
create_metrics1(
msg_send_df, met_msg_cols, met_msg_groupby_cols,
met_msg_metrics, met_msg_cols_tuples, met_msg_ops,
'real_time'),
r_msg_groupby_columns, r_msg_index_columns, r_msg_filter_columns,
r_msg_value_columns, r_msg_metric_columns, 'real_time')
logging.info('Calculating CPU benchmark score...')
ratio_df = pd.concat(
[c_echo_ratio, echo_msg_ratio, echo_ratio, filter_ratio,
msg_lkp_ratio, msg_send_ratio, phold_ratio, ring_msg_ratio,
ring_ratio, timing_ratio], axis=0, ignore_index=True)
score_df = cpu_score(ratio_df, 'full')
score_p = create_pivot_tables(
score_df,
['helics_version_string', 'cpu_score', 'benchmark', 'date'],
['cpf', 'cpi', 'cpmc', 'cpms', 'spf', 'spi', 'spmc', 'spms'])
logging.info('Creating the pivot table and saving to excel...')
c_echo_p = create_pivot_tables(
c_echo_ratio,
['benchmark', 'run_id', 'num_cpus', 'mhz_per_cpu', 'federate_count',
'core_type'],
['spf_ratio', 'cpf_ratio', 'real_time_ratio'])
echo_p = create_pivot_tables(
echo_ratio,
['benchmark', 'run_id', 'num_cpus', 'mhz_per_cpu', 'federate_count',
'core_type'],
['spf_ratio', 'cpf_ratio', 'real_time_ratio'])
echo_msg_p = create_pivot_tables(
echo_msg_ratio,
['benchmark', 'run_id', 'num_cpus', 'mhz_per_cpu', 'federate_count',
'core_type'],
['spf_ratio', 'cpf_ratio', 'real_time_ratio'])
ring_p = create_pivot_tables(
ring_ratio,
['benchmark', 'run_id', 'num_cpus', 'mhz_per_cpu', 'federate_count',
'core_type'],
['spf_ratio', 'cpf_ratio', 'real_time_ratio'])
ring_msg_p = create_pivot_tables(
ring_msg_ratio,
['benchmark', 'run_id', 'num_cpus', 'mhz_per_cpu', 'federate_count',
'core_type'],
['spf_ratio', 'cpf_ratio', 'real_time_ratio'])
phold_p = create_pivot_tables(
phold_ratio,
['benchmark', 'run_id', 'num_cpus', 'mhz_per_cpu', 'federate_count',
'core_type'],
['spf_ratio', 'cpf_ratio', 'real_time_ratio'])
timing_p = create_pivot_tables(
timing_ratio,
['benchmark', 'run_id', 'num_cpus', 'mhz_per_cpu', 'federate_count',
'core_type'],
['spf_ratio', 'cpf_ratio', 'real_time_ratio'])
filter_p = create_pivot_tables(
filter_ratio,
['benchmark', 'run_id', 'num_cpus', 'mhz_per_cpu', 'federate_count',
'core_type'],
['spf_ratio', 'cpf_ratio', 'real_time_ratio'])
msg_lkp_p = create_pivot_tables(
msg_lkp_ratio,
['benchmark', 'run_id', 'num_cpus', 'mhz_per_cpu', 'federate_count',
'core_type'],
['spf_ratio', 'spi_ratio', 'cpi_ratio', 'cpf_ratio',
'real_time_ratio'])
msg_send_p = create_pivot_tables(
msg_send_ratio,
['benchmark', 'run_id', 'num_cpus', 'mhz_per_cpu', 'message_size',
'message_count', 'core_type'],
['spms_ratio', 'spmc_ratio', 'cpms_ratio', 'cpmc_ratio',
'real_time_ratio'])
file_path = os.path.join(output_path, '{}.xlsx'.format(filename))
with pd.ExcelWriter(file_path) as writer:
score_p.to_excel(writer, sheet_name='CPU Benchmark Score')
c_echo_p.to_excel(writer, sheet_name='cEchoBenchmark')
echo_p.to_excel(writer, sheet_name='echoBenchmark')
echo_msg_p.to_excel(writer, sheet_name='echoMessageBenchmark')
ring_p.to_excel(writer, sheet_name='ringBenchmark')
ring_msg_p.to_excel(writer, sheet_name='ringMessageBenchmark')
phold_p.to_excel(writer, sheet_name='pholdBenchmark')
timing_p.to_excel(writer, sheet_name='timingBenchmark')
filter_p.to_excel(writer, sheet_name='filterBenchmark')
msg_lkp_p.to_excel(writer, sheet_name='messageLookupBenchmark')
msg_send_p.to_excel(writer, sheet_name='messageSendBenchmark')
logging.info('Successfully saved the data to excel.')
logging.info('Saving data as .csv file...')
main_df = pd.merge(
ratio_df, score_df, how='outer', on=[
'benchmark', 'helics_version_string', 'date', 'cpi',
'cpf', 'cpmc', 'cpms', 'spf',
'spi', 'spmc', 'spms'
])
main_df.to_csv(os.path.join(output_path, '{}.csv'.format(filename)))
logging.info('Successfully saved data as .csv file.')
def create_spreadsheet2(dataframe, filename, output_path):
"""This function combines all the above functions and
creates a spreadsheet and csv for 'benchmark_type' = 'key'.
Args:
dataframe (pandas dataframe) - Contains all the information
for analysis.
filename (str) - Name of the file for saving the results as
an excel spreadsheet and csv file.
output_path (str) - Location to send the analysis files.
Returns:
(null)
"""
logging.info('Filtering it to just bmk_type = "key"...')
dataframe = dataframe[(dataframe.benchmark_type == 'key') &
(dataframe.benchmark != 'conversionBenchmark')]
echo_res_df = dataframe[dataframe.benchmark == 'echoBenchmark']
echo_msg_df = dataframe[dataframe.benchmark == 'echoMessageBenchmark']
msg_lkp_df = dataframe[dataframe.benchmark == 'messageLookupBenchmark']
timing_df = dataframe[dataframe.benchmark == 'timingBenchmark']
# Getting all necessary info for the functions
logging.info('Saving the necessary information to memory...')
met_fed_cols = [
'benchmark', 'helics_version_string', 'date',
'run_id', 'core_type', 'num_cpus',
'mhz_per_cpu', 'federate_count', 'real_time'
]
met_fed_groupby_cols = [
'benchmark', 'helics_version_string', 'date', 'run_id',
'core_type', 'num_cpus', 'mhz_per_cpu', 'federate_count'
]
met_fed_metrics = ['spf', 'new_mhz_per_cpu', 'cpf']
met_fed_cols_tuples = [('real_time', 'federate_count'),
('real_time', 'mhz_per_cpu'),
('spf', 'new_mhz_per_cpu')]
met_fed_ops = ['/', '*', '*']
r_fed_groupby_columns = [
'benchmark', 'helics_version_string', 'date', 'run_id',
'num_cpus', 'mhz_per_cpu', 'federate_count']
r_fed_index_columns = met_fed_cols
r_fed_filter_columns = ['federate_count']*2
r_fed_value_columns = ['inproc']*2
r_fed_metric_columns = ['spf', 'cpf']
met_int_cols = [
'benchmark', 'helics_version_string', 'date', 'run_id',
'core_type', 'num_cpus', 'mhz_per_cpu', 'federate_count',
'interface_count', 'real_time'
]
met_int_groupby_cols = [
'benchmark', 'helics_version_string', 'date',
'run_id', 'core_type', 'num_cpus',
'mhz_per_cpu', 'federate_count', 'interface_count'
]
met_int_metrics = ['spf', 'spi', 'new_mhz_per_cpu', 'cpf', 'cpi']
met_int_cols_tuples = [('real_time', 'federate_count'),
('real_time', 'interface_count'),
('real_time', 'mhz_per_cpu'),
('spf', 'new_mhz_per_cpu'),
('spi', 'new_mhz_per_cpu')]
met_int_ops = ['/', '/', '*', '*', '*']
r_int_groupby_columns = [
'benchmark', 'helics_version_string', 'date', 'run_id',
'num_cpus', 'mhz_per_cpu', 'federate_count', 'interface_count'
]
r_int_index_columns = met_int_cols
r_int_filter_columns = ['interface_count']*4
r_int_value_columns = ['inproc']*4
r_int_metric_columns = ['spf', 'spi', 'cpf', 'cpi']
# Applying the functions
logging.info('Creating the desired metrics and getting the ratios...')
echo_ratio = get_ratio1(
create_metrics1(
echo_res_df, met_fed_cols, met_fed_groupby_cols,
met_fed_metrics, met_fed_cols_tuples, met_fed_ops,
'real_time'),
r_fed_groupby_columns, r_fed_index_columns, r_fed_filter_columns,
r_fed_value_columns, r_fed_metric_columns, 'real_time')
echo_msg_ratio = get_ratio1(
create_metrics1(
echo_msg_df, met_fed_cols, met_fed_groupby_cols,
met_fed_metrics, met_fed_cols_tuples, met_fed_ops,
'real_time'),
r_fed_groupby_columns, r_fed_index_columns, r_fed_filter_columns,
r_fed_value_columns, r_fed_metric_columns, 'real_time')
timing_ratio = get_ratio1(
create_metrics1(
timing_df, met_fed_cols, met_fed_groupby_cols,
met_fed_metrics, met_fed_cols_tuples, met_fed_ops,
'real_time'),
r_fed_groupby_columns, r_fed_index_columns, r_fed_filter_columns,
r_fed_value_columns, r_fed_metric_columns, 'real_time')
msg_lkp_ratio = get_ratio1(
create_metrics1(
msg_lkp_df, met_int_cols, met_int_groupby_cols,
met_int_metrics, met_int_cols_tuples, met_int_ops,
'real_time'),
r_int_groupby_columns, r_int_index_columns, r_int_filter_columns,
r_int_value_columns, r_int_metric_columns, 'real_time')
logging.info('Calculating CPU benchmark score...')
ratio_df = pd.concat(
[echo_msg_ratio, echo_ratio, msg_lkp_ratio, timing_ratio],
axis=0,
ignore_index=True)
score_df = cpu_score(ratio_df, 'key')
score_p = create_pivot_tables(
score_df,
['helics_version_string', 'cpu_score', 'benchmark', 'date'],
['cpf', 'cpi', 'spf', 'spi'])
logging.info('Creating the pivot table and saving to excel...')
echo_p = create_pivot_tables(
echo_ratio,
['benchmark', 'run_id', 'num_cpus', 'mhz_per_cpu', 'federate_count',
'core_type'],
['spf_ratio', 'cpf_ratio', 'real_time_ratio'])
echo_msg_p = create_pivot_tables(
echo_msg_ratio,
['benchmark', 'run_id', 'num_cpus', 'mhz_per_cpu', 'federate_count',
'core_type'],
['spf_ratio', 'cpf_ratio', 'real_time_ratio'])
timing_p = create_pivot_tables(
timing_ratio,
['benchmark', 'run_id', 'num_cpus', 'mhz_per_cpu', 'federate_count',
'core_type'],
['spf_ratio', 'cpf_ratio', 'real_time_ratio'])
msg_lkp_p = create_pivot_tables(
msg_lkp_ratio,
['benchmark', 'run_id', 'num_cpus', 'mhz_per_cpu', 'federate_count',
'core_type'],
['spf_ratio', 'spi_ratio', 'cpi_ratio', 'cpf_ratio',
'real_time_ratio'])
file_path = os.path.join(output_path, '{}.xlsx'.format(filename))
with pd.ExcelWriter(file_path) as writer:
score_p.to_excel(writer, sheet_name='CPU Benchmark Score')
echo_p.to_excel(writer, sheet_name='echoBenchmark')
echo_msg_p.to_excel(writer, sheet_name='echoMessageBenchmark')
timing_p.to_excel(writer, sheet_name='timingBenchmark')
msg_lkp_p.to_excel(writer, sheet_name='messageLookupBenchmark')
logging.info('Successfully saved the data to excel.')
logging.info('Saving data as .csv file...')
main_df = pd.merge(
ratio_df, score_df, how='outer', on=[
'benchmark', 'helics_version_string', 'date', 'cpi',
'cpf', 'spf', 'spi'
])
main_df.to_csv(os.path.join(output_path, '{}.csv'.format(filename)))
logging.info('Successfully saved data as .csv file.')
def create_spreadsheet3(dataframe, filename, output_path):
"""This function combines all the above functions and
creates a spreadsheet and a gzipped csv for multinode benchmark results.
Args:
dataframe (pandas dataframe) - Contains all the information
for analysis.
filename (str) - Name of the file for saving the results as
an excel spreadsheet and csv file.
output_path (str) - Location to send the analysis files.
Returns:
(null)
"""
logging.info('Processing data for multinode benchmark results...')
echo_df = dataframe[dataframe.benchmark == 'EchoLeafFederate']
echo_msg_df = dataframe[dataframe.benchmark == 'EchoMessageLeafFederate']
msg_df = dataframe[dataframe.benchmark == 'MessageExchangeFederate']
phold_df = dataframe[dataframe.benchmark == 'PholdFederate']
ring_df = dataframe[dataframe.benchmark == 'RingTransmitFederate']
ring_msg_df = dataframe[
dataframe.benchmark == 'RingTransmitMessageFederate']
timing_df = dataframe[dataframe.benchmark == 'TimingLeafFederate']
# Getting all necessary info for the functions
logging.info('Saving the necessary information to memory...')
met_fed_cols = [
'benchmark', 'helics_version_string', 'date', 'mhz_per_cpu',
'core_type', 'federate_count', 'elapsed_time'
]
met_fed_groupby_cols = [
'benchmark', 'helics_version_string', 'date',
'mhz_per_cpu', 'core_type', 'federate_count'
]
met_fed_metrics = ['spf', 'new_mhz_per_cpu', 'cpf']
met_fed_cols_tuples = [('elapsed_time', 'federate_count'),
('elapsed_time', 'mhz_per_cpu'),
('spf', 'new_mhz_per_cpu')]
met_fed_ops = ['/', '*', '*']
r_fed_groupby_columns = [
'benchmark', 'helics_version_string', 'date',
'mhz_per_cpu', 'federate_count'
]
r_fed_index_columns = met_fed_cols
r_fed_filter_columns = ['federate_count']*2
r_fed_value_columns = ['tcp']*2
r_fed_metric_columns = ['spf', 'cpf']
met_p_cols = [
'benchmark', 'helics_version_string', 'date', 'core_type',
'mhz_per_cpu', 'federate_count', 'EvCount', 'elapsed_time'
]
met_p_groupby_cols = [
'benchmark', 'helics_version_string', 'date', 'core_type',
'mhz_per_cpu', 'federate_count', 'EvCount'
]
met_p_metrics = ['spf', 'spe', 'new_mhz_per_cpu', 'cpf', 'cpe']
met_p_cols_tuples = [('elapsed_time', 'federate_count'),
('elapsed_time', 'EvCount'),
('elapsed_time', 'mhz_per_cpu'),
('spf', 'new_mhz_per_cpu'),
('spe', 'new_mhz_per_cpu')]
met_p_ops = ['/', '/', '*', '*', '*']
r_p_groupby_columns = [
'benchmark', 'helics_version_string', 'date',
'mhz_per_cpu', 'federate_count', 'EvCount'
]
r_p_index_columns = met_fed_cols
r_p_filter_columns = ['federate_count']*4
r_p_value_columns = ['tcp']*4
r_p_metric_columns = ['spf', 'spe', 'cpf', 'cpe']
met_msg_cols = [
'benchmark', 'helics_version_string', 'date', 'core_type',
'mhz_per_cpu', 'message_count', 'message_size', 'elapsed_time'
]
met_msg_groupby_cols = [
'benchmark', 'helics_version_string', 'date', 'core_type',
'mhz_per_cpu', 'message_count', 'message_size'
]
met_msg_metrics = ['spms', 'spmc', 'new_mhz_per_cpu', 'cpms', 'cpmc']
met_msg_cols_tuples = [('elapsed_time', 'message_size'),
('elapsed_time', 'message_count'),
('elapsed_time', 'mhz_per_cpu'),
('spms', 'new_mhz_per_cpu'),
('spmc', 'new_mhz_per_cpu')]
met_msg_ops = ['/', '/', '*', '*', '*']
r_msg_groupby_columns = [
'benchmark', 'helics_version_string', 'date',
'mhz_per_cpu', 'message_size', 'message_count']
r_msg_index_columns = met_msg_cols
r_msg_filter_columns = ['message_count']*4
r_msg_value_columns = ['tcp']*4
r_msg_metric_columns = ['spms', 'spmc', 'cpms', 'cpmc']
# Applying the functions
logging.info('Creating the desired metrics and getting the ratios...')
echo_ratio = get_ratio2(
create_metrics2(
echo_df, met_fed_cols, met_fed_groupby_cols,
met_fed_metrics, met_fed_cols_tuples, met_fed_ops,
'elapsed_time'),
r_fed_groupby_columns, r_fed_index_columns, r_fed_filter_columns,
r_fed_value_columns, r_fed_metric_columns, 'elapsed_time')
echo_msg_ratio = get_ratio2(
create_metrics2(
echo_msg_df, met_fed_cols, met_fed_groupby_cols,
met_fed_metrics, met_fed_cols_tuples, met_fed_ops,
'elapsed_time'),
r_fed_groupby_columns, r_fed_index_columns, r_fed_filter_columns,
r_fed_value_columns, r_fed_metric_columns, 'elapsed_time')
timing_ratio = get_ratio2(
create_metrics2(
timing_df, met_fed_cols, met_fed_groupby_cols,
met_fed_metrics, met_fed_cols_tuples, met_fed_ops,
'elapsed_time'),
r_fed_groupby_columns, r_fed_index_columns, r_fed_filter_columns,
r_fed_value_columns, r_fed_metric_columns, 'elapsed_time')
ring_ratio = get_ratio2(
create_metrics2(
ring_df, met_fed_cols, met_fed_groupby_cols,
met_fed_metrics, met_fed_cols_tuples, met_fed_ops,
'elapsed_time'),
r_fed_groupby_columns, r_fed_index_columns, r_fed_filter_columns,
r_fed_value_columns, r_fed_metric_columns, 'elapsed_time')
ring_msg_ratio = get_ratio2(
create_metrics2(
ring_msg_df, met_fed_cols, met_fed_groupby_cols,
met_fed_metrics, met_fed_cols_tuples, met_fed_ops,
'elapsed_time'),
r_fed_groupby_columns, r_fed_index_columns, r_fed_filter_columns,
r_fed_value_columns, r_fed_metric_columns, 'elapsed_time')
msg_ratio = get_ratio2(
create_metrics2(
msg_df, met_msg_cols, met_msg_groupby_cols,
met_msg_metrics, met_msg_cols_tuples, met_msg_ops,
'elapsed_time'),
r_msg_groupby_columns, r_msg_index_columns, r_msg_filter_columns,
r_msg_value_columns, r_msg_metric_columns, 'elapsed_time')
phold_ratio = get_ratio2(
create_metrics2(
phold_df, met_p_cols, met_p_groupby_cols,
met_p_metrics, met_p_cols_tuples, met_p_ops,
'elapsed_time'),
r_p_groupby_columns, r_p_index_columns, r_p_filter_columns,
r_p_value_columns, r_p_metric_columns, 'elapsed_time')
logging.info('Calculating CPU benchmark score...')
ratio_df = pd.concat(
[echo_msg_ratio, echo_ratio, msg_ratio,
phold_ratio, ring_msg_ratio, ring_ratio,
timing_ratio], axis=0, ignore_index=True)
score_df = cpu_score(ratio_df, 'multinode')
score_p = create_pivot_tables(
score_df,
['helics_version_string', 'cpu_score', 'date'],
['cpf', 'cpe', 'cpmc', 'cpms', 'spf', 'spe', 'spmc', 'spms'])
logging.info('Creating the pivot table and saving to excel...')
echo_p = create_pivot_tables(
echo_ratio,
['benchmark', 'federate_count', 'core_type'],
['spf_ratio', 'elapsed_time_ratio'])
echo_msg_p = create_pivot_tables(
echo_msg_ratio,
['benchmark', 'federate_count', 'core_type'],
['spf_ratio', 'elapsed_time_ratio'])
timing_p = create_pivot_tables(
timing_ratio,
['benchmark', 'federate_count', 'core_type'],
['spf_ratio', 'elapsed_time_ratio'])
ring_p = create_pivot_tables(
ring_ratio,
['benchmark', 'federate_count', 'core_type'],
['spf_ratio', 'elapsed_time_ratio'])
ring_msg_p = create_pivot_tables(
ring_msg_ratio,
['benchmark', 'federate_count', 'core_type'],
['spf_ratio', 'elapsed_time_ratio'])
phold_p = create_pivot_tables(
phold_ratio,
['benchmark', 'federate_count', 'core_type'],
['spf_ratio', 'spe_ratio', 'cpf_ratio', 'cpe_ratio',
'elapsed_time_ratio'])
msg_p = create_pivot_tables(
msg_ratio,
['benchmark', 'message_size', 'message_count', 'core_type'],
['spms_ratio', 'spmc_ratio', 'elapsed_time_ratio'])
file_path = os.path.join(output_path, '{}.xlsx'.format(filename))
with pd.ExcelWriter(file_path) as writer:
score_p.to_excel(writer, sheet_name='CPU Benchmark Score')
echo_p.to_excel(writer, sheet_name='EchoLeafFederate')
echo_msg_p.to_excel(writer, sheet_name='EchoMessageLeafFederate')
ring_p.to_excel(writer, sheet_name='RingTransmitFederate')
ring_msg_p.to_excel(writer, sheet_name='RingTransmitMessageFederate')
timing_p.to_excel(writer, sheet_name='TimingLeafFederate')
phold_p.to_excel(writer, sheet_name='PholdFederate')
msg_p.to_excel(writer, sheet_name='MessageExchangeFederate')
logging.info('Successfully saved the data to excel.')
logging.info('Saving data as .gz file.')
main_df = pd.merge(ratio_df, score_df, how='outer', on=[
'benchmark', 'helics_version_string', 'date', 'cpe',
'cpf', 'spf', 'cpmc', 'cpms',
'spe', 'spmc', 'spms'])
main_df.to_csv(
os.path.join(output_path, '{}.gz'.format(filename)),
compression='gzip')
logging.info('Successfully saved data as .gz file.')
def create_table(
dataframe, drop_columns, subset_columns, output_path, filename):
"""This function creates a metadata reference table for
each run-id in the (multinode) benchmark results files.
Args:
dataframe (pandas dataframe) - Contains all the information
for each run-id.
drop_columns (list) - List of columns to ignore in the reference
table; the summary spreadsheet/csv contains counts and metrics,
which we don't need for the reference table.
subset_columns (list) - List of columns for creating a subset
for getting rid of duplicates.
output_path (path) - Path to send the reference table.
filename (str) - Name of the reference table.
Returns:
(null)
"""
dataframe = dataframe.drop(columns=drop_columns)
dataframe = dataframe.sort_values(subset_columns).set_index(
subset_columns).reset_index()
dataframe = dataframe.drop_duplicates(
subset=subset_columns, keep='last')
dataframe.index = list(range(len(dataframe.run_id)))
# dataframe = dataframe.set_index('index').reset_index()
dataframe.to_csv(os.path.join(output_path, '{}.csv'.format(filename)))
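# Usage sketch (arguments are illustrative, not taken from the callers below):
# create_table(df, drop_columns=['real_time', 'cpu_time'],
#              subset_columns=['run_id'], output_path='.',
#              filename='run_metadata')
# Because the frame is sorted on subset_columns first, keep='last' in
# drop_duplicates retains one representative row per run_id.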
def _auto_run(args):
"""This function executes when the script is called as a stand-alone
executable. It is used both for development/testing as well as the
primary executable for generating the results summary files.
A more complete description of this code can be found in the
docstring at the beginning of this file.
Args:
'-b' or '--bmk_type' - Identifier for the type of summary results
to produce; must be "full", "key", or "multinode", and
"json_file" should be set accordingly.
'-j' or '--json_file' - JSON file of all the benchmark results
data.
'-o' or '--output_path' - Path to send the spreadsheet.
Returns:
(null)
"""
logging.info('starting the execution of this script')
if args.bmk_type == 'full':
dataframe = make_dataframe1(args.json_file)
create_spreadsheet1(dataframe,
'full_benchmark_results_summary',
args.output_path)
dataframe = dataframe[
(dataframe.benchmark_type == 'full') &
(dataframe.benchmark != 'actionMessageBenchmark') &
(dataframe.benchmark != 'conversionBenchmark')
]
create_table(
dataframe,
['federate_count', 'EvCount', 'interface_count',
'message_size', 'message_count', 'real_time', 'cpu_time',
'info_id', 'benchmark_id', 'cache_id', 'executable',
'name', 'identifier_id', 'benchmark', 'core_type',
'filename', 'run_name'],
['run_id'],
args.output_path,
'bmk_type_full_metadata')
elif args.bmk_type == 'key':
dataframe = make_dataframe1(args.json_file)
create_spreadsheet2(dataframe,
'key_benchmark_results_summary',
args.output_path)
dataframe = dataframe[
(dataframe.benchmark_type == 'key') &
(dataframe.benchmark != 'conversionBenchmark')
]
create_table(
dataframe,
['federate_count', 'EvCount', 'interface_count',
'message_size', 'message_count', 'real_time', 'cpu_time',
'info_id', 'benchmark_id', 'cache_id', 'executable',
'name', 'identifier_id', 'benchmark', 'core_type',
'filename', 'run_name'],
['run_id'],
args.output_path,
'bmk_type_key_metadata')
elif args.bmk_type == 'multinode':
dataframe = make_dataframe2(args.json_file)
create_spreadsheet3(dataframe,
'multinode_benchmark_results_summary',
args.output_path)
create_table(
dataframe,
['index', 'federate_count', 'EvCount',
'message_size', 'message_count', 'elapsed_time',
'benchmark_type', 'identifier_id', 'benchmark',
'core_type', 'filename'],
['run_id'],
args.output_path,
'multinode_metadata')
else:
logging.error(
'Invalid; bmk_type should be "full", "key", or "multinode".')
logging.info('successfully finished creating the summary spreadsheets.')
if __name__ == '__main__':
fileHandle = logging.FileHandler(
"benchmark_results_summary.log", mode='w')
fileHandle.setLevel(logging.DEBUG)
streamHandle = logging.StreamHandler(sys.stdout)
streamHandle.setLevel(logging.ERROR)
logging.basicConfig(level=logging.INFO,
handlers=[fileHandle, streamHandle])
parser = argparse.ArgumentParser(description='Produce results summary.')
# TDH: Have to do a little bit of work to generate a good default
# path for the results folder. Default only works if being run
# from the "scripts" directory in the repository structure.
script_path = os.path.dirname(os.path.realpath(__file__))
head, tail = os.path.split(script_path)
parser.add_argument('-j',
'--json_file',
nargs='?',
default='multinode_bm_results.json')
parser.add_argument('-b',
'--bmk_type',
nargs='?',
default='multinode')
parser.add_argument('-o',
'--output_path',
nargs='?',
default=os.path.join(
head, 'summary_spreadsheets'))
args = parser.parse_args()
_auto_run(args)
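# Example invocations (script name and paths are hypothetical):
#   python benchmark_results_summary.py -b full -j bm_results.json -o ./summaries
#   python benchmark_results_summary.py  # falls back to the multinode defaults above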
| 43.683992 | 79 | 0.58354 | 7,364 | 63,036 | 4.667029 | 0.061515 | 0.027497 | 0.018331 | 0.01548 | 0.809299 | 0.781454 | 0.756576 | 0.74034 | 0.726693 | 0.711883 | 0 | 0.002821 | 0.302605 | 63,036 | 1,442 | 80 | 43.714286 | 0.778963 | 0.151485 | 0 | 0.686703 | 0 | 0 | 0.219234 | 0.039407 | 0 | 0 | 0 | 0 | 0 | 1 | 0.01184 | false | 0 | 0.008197 | 0 | 0.027322 | 0.001821 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
600e939c835488e6e4d0c0e1bb2ab1ce9e93659c | 5,219 | py | Python | update_licenses.py | urbanopt/urbanopt-ditto-reader | d805536df2451b401cc0662b4a087ac65e4eab2c | ["BSD-3-Clause"] | null | null | null | update_licenses.py | urbanopt/urbanopt-ditto-reader | d805536df2451b401cc0662b4a087ac65e4eab2c | ["BSD-3-Clause"] | 17 | 2020-08-13T02:34:33.000Z | 2022-03-25T16:39:07.000Z | update_licenses.py | urbanopt/urbanopt-ditto-reader | d805536df2451b401cc0662b4a087ac65e4eab2c | ["BSD-3-Clause"] | 1 | 2021-02-05T23:03:39.000Z | 2021-02-05T23:03:39.000Z | """
****************************************************************************************************
:copyright (c) 2019-2021 URBANopt, Alliance for Sustainable Energy, LLC, and other contributors.
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted
provided that the following conditions are met:
Redistributions of source code must retain the above copyright notice, this list of conditions
and the following disclaimer.
Redistributions in binary form must reproduce the above copyright notice, this list of conditions
and the following disclaimer in the documentation and/or other materials provided with the
distribution.
Neither the name of the copyright holder nor the names of its contributors may be used to endorse
or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR
CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
****************************************************************************************************
"""
import glob
import os
import re
import click
PYTHON_REGEX = re.compile(r'^""".\*{100}.*:copyright.*\*{100}."""$', re.MULTILINE | re.DOTALL)
PYTHON_LICENSE = '''"""
****************************************************************************************************
:copyright (c) 2019-2021 URBANopt, Alliance for Sustainable Energy, LLC, and other contributors.
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted
provided that the following conditions are met:
Redistributions of source code must retain the above copyright notice, this list of conditions
and the following disclaimer.
Redistributions in binary form must reproduce the above copyright notice, this list of conditions
and the following disclaimer in the documentation and/or other materials provided with the
distribution.
Neither the name of the copyright holder nor the names of its contributors may be used to endorse
or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR
CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
****************************************************************************************************
"""'''
EXCLUDE_FILES = ["__init__.py"]
PATHS = [
{"glob": "urbanopt_ditto_reader/**/*.py", "license": PYTHON_LICENSE, "REGEX": PYTHON_REGEX, },
{"glob": "tests/**/*.py", "license": PYTHON_LICENSE, "REGEX": PYTHON_REGEX},
# single files
{ "glob": 'setup.py', "license": PYTHON_LICENSE, "REGEX": PYTHON_REGEX },
{ "glob": 'update_licenses.py', "license": PYTHON_LICENSE, "REGEX": PYTHON_REGEX }
]
def check_and_update_license(filename):
"""
Check whether the license header exists in the file; if it does, update it to match
the license defined in this file, and if it does not, prepend the header.
:param filename: str, path of the file to update
:return: None
"""
s = open(filename, "r").read()
if PYTHON_REGEX.search(s):
print("License already exists, updating")
content = re.sub(PYTHON_REGEX, PYTHON_LICENSE, s)
with open(filename, "w") as f:
f.write(content)
f.close()
else:
print("Adding license")
with open(filename, "r+") as f:
content = f.read()
f.seek(0, 0)
f.write(PYTHON_LICENSE.rstrip("\r\n") + "\n\n\n" + content)
f.close()
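# Behavior sketch (illustrative): PYTHON_REGEX matches a triple-quoted block
# whose banner lines are runs of 100 asterisks around a ':copyright' notice,
# so files with an existing header are rewritten in place via re.sub, while
# all other files get the header plus two blank lines prepended, e.g.
# check_and_update_license('urbanopt_ditto_reader/reader.py')  # hypothetical path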
@click.command()
@click.argument('license', required=False)
def update_licenses(license):
for p in PATHS:
gl = glob.glob(p["glob"], recursive=True)
for g in gl:
if os.path.basename(g) in EXCLUDE_FILES:
print(f"Skipping file {g}")
else:
print(f"Checking license in file {g}")
check_and_update_license(g)
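# Example run (hypothetical): `python update_licenses.py` from the repo root
# walks the PATHS globs above. Note that the optional LICENSE click argument
# is accepted but currently unused by update_licenses().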
| 52.19 | 100 | 0.680207 | 688 | 5,219 | 5.116279 | 0.292151 | 0.009943 | 0.019318 | 0.026136 | 0.761932 | 0.761932 | 0.761932 | 0.740341 | 0.716477 | 0.716477 | 0 | 0.005556 | 0.172255 | 5,219 | 99 | 101 | 52.717172 | 0.809259 | 0.371527 | 0 | 0.098361 | 0 | 0.016393 | 0.628351 | 0.08228 | 0 | 0 | 0 | 0 | 0 | 1 | 0.032787 | false | 0 | 0.065574 | 0 | 0.098361 | 0.065574 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
601ccbc025b1a325f9c544d0e1e6238b4d398f1b | 78 | py | Python | faster_particles/metrics/__init__.py | Temigo/faster-particles | ba4655cf48525de1f326f037b1e54b6f28551cdf | ["MIT"] | 2 | 2018-08-02T10:48:44.000Z | 2018-11-11T01:16:57.000Z | faster_particles/metrics/__init__.py | Temigo/faster-particles | ba4655cf48525de1f326f037b1e54b6f28551cdf | ["MIT"] | null | null | null | faster_particles/metrics/__init__.py | Temigo/faster-particles | ba4655cf48525de1f326f037b1e54b6f28551cdf | ["MIT"] | 2 | 2018-08-02T10:49:06.000Z | 2020-06-10T02:20:30.000Z | from metrics_ppn import PPNMetrics
from metrics_uresnet import UResNetMetrics
| 26 | 42 | 0.897436 | 10 | 78 | 6.8 | 0.7 | 0.323529 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102564 | 78 | 2 | 43 | 39 | 0.971429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e083b6cb33b387468c9f20fa464868a15022eef1 | 12,970 | py | Python | cottonformation/res/ssmcontacts.py | MacHu-GWU/cottonformation-project | 23e28c08cfb5a7cc0db6dbfdb1d7e1585c773f3b | ["BSD-2-Clause"] | 5 | 2021-07-22T03:45:59.000Z | 2021-12-17T21:07:14.000Z | cottonformation/res/ssmcontacts.py | MacHu-GWU/cottonformation-project | 23e28c08cfb5a7cc0db6dbfdb1d7e1585c773f3b | ["BSD-2-Clause"] | 1 | 2021-06-25T18:01:31.000Z | 2021-06-25T18:01:31.000Z | cottonformation/res/ssmcontacts.py | MacHu-GWU/cottonformation-project | 23e28c08cfb5a7cc0db6dbfdb1d7e1585c773f3b | ["BSD-2-Clause"] | 2 | 2021-06-27T03:08:21.000Z | 2021-06-28T22:15:51.000Z | # -*- coding: utf-8 -*-
"""
This module
"""
import attr
import typing
from ..core.model import (
Property, Resource, Tag, GetAtt, TypeHint, TypeCheck,
)
from ..core.constant import AttrMeta
#--- Property declaration ---
@attr.s
class PropContactContactTargetInfo(Property):
"""
AWS Object Type = "AWS::SSMContacts::Contact.ContactTargetInfo"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ssmcontacts-contact-contacttargetinfo.html
Property Document:
- ``rp_ContactId``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ssmcontacts-contact-contacttargetinfo.html#cfn-ssmcontacts-contact-contacttargetinfo-contactid
- ``rp_IsEssential``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ssmcontacts-contact-contacttargetinfo.html#cfn-ssmcontacts-contact-contacttargetinfo-isessential
"""
AWS_OBJECT_TYPE = "AWS::SSMContacts::Contact.ContactTargetInfo"
rp_ContactId: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "ContactId"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ssmcontacts-contact-contacttargetinfo.html#cfn-ssmcontacts-contact-contacttargetinfo-contactid"""
rp_IsEssential: bool = attr.ib(
default=None,
validator=attr.validators.instance_of(bool),
metadata={AttrMeta.PROPERTY_NAME: "IsEssential"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ssmcontacts-contact-contacttargetinfo.html#cfn-ssmcontacts-contact-contacttargetinfo-isessential"""
@attr.s
class PropContactChannelTargetInfo(Property):
"""
AWS Object Type = "AWS::SSMContacts::Contact.ChannelTargetInfo"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ssmcontacts-contact-channeltargetinfo.html
Property Document:
- ``rp_ChannelId``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ssmcontacts-contact-channeltargetinfo.html#cfn-ssmcontacts-contact-channeltargetinfo-channelid
- ``rp_RetryIntervalInMinutes``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ssmcontacts-contact-channeltargetinfo.html#cfn-ssmcontacts-contact-channeltargetinfo-retryintervalinminutes
"""
AWS_OBJECT_TYPE = "AWS::SSMContacts::Contact.ChannelTargetInfo"
rp_ChannelId: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "ChannelId"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ssmcontacts-contact-channeltargetinfo.html#cfn-ssmcontacts-contact-channeltargetinfo-channelid"""
rp_RetryIntervalInMinutes: int = attr.ib(
default=None,
validator=attr.validators.instance_of(int),
metadata={AttrMeta.PROPERTY_NAME: "RetryIntervalInMinutes"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ssmcontacts-contact-channeltargetinfo.html#cfn-ssmcontacts-contact-channeltargetinfo-retryintervalinminutes"""
@attr.s
class PropContactTargets(Property):
"""
AWS Object Type = "AWS::SSMContacts::Contact.Targets"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ssmcontacts-contact-targets.html
Property Document:
- ``p_ChannelTargetInfo``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ssmcontacts-contact-targets.html#cfn-ssmcontacts-contact-targets-channeltargetinfo
- ``p_ContactTargetInfo``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ssmcontacts-contact-targets.html#cfn-ssmcontacts-contact-targets-contacttargetinfo
"""
AWS_OBJECT_TYPE = "AWS::SSMContacts::Contact.Targets"
p_ChannelTargetInfo: typing.Union['PropContactChannelTargetInfo', dict] = attr.ib(
default=None,
converter=PropContactChannelTargetInfo.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(PropContactChannelTargetInfo)),
metadata={AttrMeta.PROPERTY_NAME: "ChannelTargetInfo"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ssmcontacts-contact-targets.html#cfn-ssmcontacts-contact-targets-channeltargetinfo"""
p_ContactTargetInfo: typing.Union['PropContactContactTargetInfo', dict] = attr.ib(
default=None,
converter=PropContactContactTargetInfo.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(PropContactContactTargetInfo)),
metadata={AttrMeta.PROPERTY_NAME: "ContactTargetInfo"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ssmcontacts-contact-targets.html#cfn-ssmcontacts-contact-targets-contacttargetinfo"""
@attr.s
class PropContactStage(Property):
"""
AWS Object Type = "AWS::SSMContacts::Contact.Stage"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ssmcontacts-contact-stage.html
Property Document:
- ``rp_DurationInMinutes``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ssmcontacts-contact-stage.html#cfn-ssmcontacts-contact-stage-durationinminutes
- ``p_Targets``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ssmcontacts-contact-stage.html#cfn-ssmcontacts-contact-stage-targets
"""
AWS_OBJECT_TYPE = "AWS::SSMContacts::Contact.Stage"
rp_DurationInMinutes: int = attr.ib(
default=None,
validator=attr.validators.instance_of(int),
metadata={AttrMeta.PROPERTY_NAME: "DurationInMinutes"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ssmcontacts-contact-stage.html#cfn-ssmcontacts-contact-stage-durationinminutes"""
p_Targets: typing.List[typing.Union['PropContactTargets', dict]] = attr.ib(
default=None,
converter=PropContactTargets.from_list,
validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(PropContactTargets), iterable_validator=attr.validators.instance_of(list))),
metadata={AttrMeta.PROPERTY_NAME: "Targets"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ssmcontacts-contact-stage.html#cfn-ssmcontacts-contact-stage-targets"""
#--- Resource declaration ---
@attr.s
class Contact(Resource):
"""
AWS Object Type = "AWS::SSMContacts::Contact"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ssmcontacts-contact.html
Property Document:
- ``rp_Alias``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ssmcontacts-contact.html#cfn-ssmcontacts-contact-alias
- ``rp_DisplayName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ssmcontacts-contact.html#cfn-ssmcontacts-contact-displayname
- ``rp_Plan``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ssmcontacts-contact.html#cfn-ssmcontacts-contact-plan
- ``rp_Type``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ssmcontacts-contact.html#cfn-ssmcontacts-contact-type
"""
AWS_OBJECT_TYPE = "AWS::SSMContacts::Contact"
rp_Alias: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "Alias"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ssmcontacts-contact.html#cfn-ssmcontacts-contact-alias"""
rp_DisplayName: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "DisplayName"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ssmcontacts-contact.html#cfn-ssmcontacts-contact-displayname"""
rp_Plan: typing.List[typing.Union['PropContactStage', dict]] = attr.ib(
default=None,
converter=PropContactStage.from_list,
validator=attr.validators.deep_iterable(member_validator=attr.validators.instance_of(PropContactStage), iterable_validator=attr.validators.instance_of(list)),
metadata={AttrMeta.PROPERTY_NAME: "Plan"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ssmcontacts-contact.html#cfn-ssmcontacts-contact-plan"""
rp_Type: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "Type"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ssmcontacts-contact.html#cfn-ssmcontacts-contact-type"""
@property
def rv_Arn(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ssmcontacts-contact.html#aws-resource-ssmcontacts-contact-return-values"""
return GetAtt(resource=self, attr_name="Arn")
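# Usage sketch (hedged; assumes cottonformation's usual pattern of adding
# declared resources to a Template object, which is not shown in this file).
# Logical id, alias, and stage values below are hypothetical:
# import cottonformation as cf
# contact = Contact(
#     "OnCallContact",
#     rp_Alias="on-call",
#     rp_DisplayName="On Call",
#     rp_Type="PERSONAL",
#     rp_Plan=[PropContactStage(rp_DurationInMinutes=15)],
# )
# tpl = cf.Template()
# tpl.add(contact)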
@attr.s
class ContactChannel(Resource):
"""
AWS Object Type = "AWS::SSMContacts::ContactChannel"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ssmcontacts-contactchannel.html
Property Document:
- ``rp_ChannelAddress``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ssmcontacts-contactchannel.html#cfn-ssmcontacts-contactchannel-channeladdress
- ``rp_ChannelName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ssmcontacts-contactchannel.html#cfn-ssmcontacts-contactchannel-channelname
- ``rp_ChannelType``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ssmcontacts-contactchannel.html#cfn-ssmcontacts-contactchannel-channeltype
- ``rp_ContactId``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ssmcontacts-contactchannel.html#cfn-ssmcontacts-contactchannel-contactid
- ``p_DeferActivation``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ssmcontacts-contactchannel.html#cfn-ssmcontacts-contactchannel-deferactivation
"""
AWS_OBJECT_TYPE = "AWS::SSMContacts::ContactChannel"
rp_ChannelAddress: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "ChannelAddress"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ssmcontacts-contactchannel.html#cfn-ssmcontacts-contactchannel-channeladdress"""
rp_ChannelName: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "ChannelName"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ssmcontacts-contactchannel.html#cfn-ssmcontacts-contactchannel-channelname"""
rp_ChannelType: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "ChannelType"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ssmcontacts-contactchannel.html#cfn-ssmcontacts-contactchannel-channeltype"""
rp_ContactId: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "ContactId"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ssmcontacts-contactchannel.html#cfn-ssmcontacts-contactchannel-contactid"""
p_DeferActivation: bool = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(bool)),
metadata={AttrMeta.PROPERTY_NAME: "DeferActivation"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ssmcontacts-contactchannel.html#cfn-ssmcontacts-contactchannel-deferactivation"""
@property
def rv_Arn(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ssmcontacts-contactchannel.html#aws-resource-ssmcontacts-contactchannel-return-values"""
return GetAtt(resource=self, attr_name="Arn")
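# Companion sketch (hedged, hypothetical values): a channel attached to the
# contact declared above; rv_Arn exposes the channel's ARN to other resources
# via Fn::GetAtt.
# channel = ContactChannel(
#     "OnCallSms",
#     rp_ChannelAddress="+15555550100",
#     rp_ChannelName="on-call-sms",
#     rp_ChannelType="SMS",
#     rp_ContactId=contact.rv_Arn,
# )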
| 54.957627 | 221 | 0.758751 | 1,392 | 12,970 | 6.981322 | 0.063937 | 0.120395 | 0.047541 | 0.073472 | 0.875489 | 0.872916 | 0.840296 | 0.799547 | 0.793785 | 0.769912 | 0 | 0.000087 | 0.1101 | 12,970 | 235 | 222 | 55.191489 | 0.841882 | 0.357132 | 0 | 0.369748 | 0 | 0 | 0.093338 | 0.053632 | 0 | 0 | 0 | 0 | 0 | 1 | 0.016807 | false | 0 | 0.033613 | 0 | 0.310924 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e094210b76d6f52b4f65454fe3b9ecf5fba834a4 | 37 | py | Python | Py0729/modules/search.py | tbor8080/pyprog | 3642b9af2a92f7369d9b6fa138e47ba22df3271c | ["MIT"] | null | null | null | Py0729/modules/search.py | tbor8080/pyprog | 3642b9af2a92f7369d9b6fa138e47ba22df3271c | ["MIT"] | null | null | null | Py0729/modules/search.py | tbor8080/pyprog | 3642b9af2a92f7369d9b6fa138e47ba22df3271c | ["MIT"] | null | null | null | def sanitize(forms):
return forms
| 18.5 | 20 | 0.72973 | 5 | 37 | 5.4 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.189189 | 37 | 2 | 21 | 18.5 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6