hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1052be7243d9d92006427132097908b44ba62ca1 | 232 | py | Python | src/modules/activations.py | abheesht17/super-pixels | d650ecf5e227bf94d237c9d8eff57c8bbf5b8983 | [
"MIT"
] | 10 | 2021-05-20T02:18:30.000Z | 2021-12-24T05:21:39.000Z | src/modules/activations.py | abheesht17/super-pixels | d650ecf5e227bf94d237c9d8eff57c8bbf5b8983 | [
"MIT"
] | 3 | 2021-05-20T01:19:47.000Z | 2022-02-12T16:38:47.000Z | src/modules/activations.py | abheesht17/super-pixels | d650ecf5e227bf94d237c9d8eff57c8bbf5b8983 | [
"MIT"
] | 4 | 2021-05-20T01:30:14.000Z | 2021-10-02T16:53:06.000Z | import torch.nn as nn
from src.utils.mapper import configmapper
configmapper.map("activations", "relu")(nn.ReLU)
configmapper.map("activations", "logsoftmax")(nn.LogSoftmax)
configmapper.map("activations", "softmax")(nn.Softmax)
| 25.777778 | 60 | 0.771552 | 29 | 232 | 6.172414 | 0.482759 | 0.251397 | 0.435754 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.073276 | 232 | 8 | 61 | 29 | 0.832558 | 0 | 0 | 0 | 0 | 0 | 0.232759 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
109f9d3b6d011c3e0fa01661b64139ed3ed3cd7e | 214 | py | Python | license_plate_processing/__init__.py | arekmula/license_plate_recognition | 62e5374fc56a0709d6d951629449aed347e101d2 | [
"MIT"
] | 1 | 2021-02-11T07:16:06.000Z | 2021-02-11T07:16:06.000Z | license_plate_processing/__init__.py | arekmula/license_plate_recognition | 62e5374fc56a0709d6d951629449aed347e101d2 | [
"MIT"
] | null | null | null | license_plate_processing/__init__.py | arekmula/license_plate_recognition | 62e5374fc56a0709d6d951629449aed347e101d2 | [
"MIT"
] | 1 | 2021-04-06T20:12:15.000Z | 2021-04-06T20:12:15.000Z | from .license_plate_recognizer import recognize_license_plate
from .license_plate_recognizer import train_KNN
from .character_classifier import get_chars_contour
from .character_classifier import train_classifier
| 35.666667 | 61 | 0.901869 | 28 | 214 | 6.464286 | 0.464286 | 0.198895 | 0.176796 | 0.287293 | 0.353591 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.079439 | 214 | 5 | 62 | 42.8 | 0.918782 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5e13148a6395a94a38b9bc27ab3ced83e32cb807 | 171 | py | Python | examples/print_function.py | dhruvmanila/pyinspect | ce90df243e5e5ee100f13de4329c111454b8c891 | [
"MIT"
] | 87 | 2020-09-30T10:18:26.000Z | 2022-03-10T08:56:04.000Z | examples/print_function.py | dhruvmanila/pyinspect | ce90df243e5e5ee100f13de4329c111454b8c891 | [
"MIT"
] | 16 | 2020-09-30T10:57:17.000Z | 2022-01-16T02:10:45.000Z | examples/print_function.py | dhruvmanila/pyinspect | ce90df243e5e5ee100f13de4329c111454b8c891 | [
"MIT"
] | 5 | 2020-11-20T07:39:26.000Z | 2022-01-13T04:54:51.000Z | """
This tutorial shows how to print
a function's source code
"""
# import pyinspect
import pyinspect as pi
# Print a function's source code
pi.showme(pi.search)
| 17.1 | 36 | 0.71345 | 27 | 171 | 4.518519 | 0.62963 | 0.098361 | 0.229508 | 0.245902 | 0.409836 | 0.409836 | 0 | 0 | 0 | 0 | 0 | 0 | 0.204678 | 171 | 9 | 37 | 19 | 0.897059 | 0.619883 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
eab067e3356e1c018f23ae9576402fe6bf71b065 | 317 | py | Python | src/common/common.py | glamod/glamod-cdm-lite | 026d87d499feaf7ee3611cf1c112384f3819e653 | [
"BSD-2-Clause"
] | 1 | 2020-06-16T14:29:26.000Z | 2020-06-16T14:29:26.000Z | src/common/common.py | glamod/glamod-cdm-lite | 026d87d499feaf7ee3611cf1c112384f3819e653 | [
"BSD-2-Clause"
] | 75 | 2020-01-17T12:25:58.000Z | 2021-04-29T14:48:52.000Z | src/common/common.py | glamod/glamod-cdm-lite | 026d87d499feaf7ee3611cf1c112384f3819e653 | [
"BSD-2-Clause"
] | 2 | 2020-07-03T11:11:04.000Z | 2020-08-03T14:19:54.000Z | BASE_INPUT_DIR = '/gws/nopw/j04/c3s311a_lot2/data/marine/r092019/ICOADS_R3.0.0T/level1a'
BASE_OUTPUT_DIR = '/gws/nopw/j04/c3s311a_lot2/data/marine/r092019_cdm_lite/ICOADS_R3.0.0T/level1a'
BASE_LOG_DIR = '/gws/smf/j04/c3s311a_lot2/cdmlite/log/prep/marine'
BASE_SQL_DIR = '/gws/smf/j04/c3s311a_lot2/cdmlite/marine/sql'
| 63.4 | 98 | 0.807571 | 57 | 317 | 4.210526 | 0.421053 | 0.1 | 0.233333 | 0.108333 | 0.775 | 0.775 | 0.591667 | 0.341667 | 0.341667 | 0 | 0 | 0.157377 | 0.037855 | 317 | 4 | 99 | 79.25 | 0.629508 | 0 | 0 | 0 | 0 | 0.5 | 0.757098 | 0.757098 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d81f8b15fcd933e10762cc5ad987f34540611fa2 | 22 | py | Python | paco/__init__.py | fabianlischka/paco | 82a3d14df8ccc7628bdafde9dd2a7293be9bf94e | [
"MIT"
] | 1 | 2018-01-18T10:11:31.000Z | 2018-01-18T10:11:31.000Z | paco/__init__.py | fabianlischka/paco | 82a3d14df8ccc7628bdafde9dd2a7293be9bf94e | [
"MIT"
] | null | null | null | paco/__init__.py | fabianlischka/paco | 82a3d14df8ccc7628bdafde9dd2a7293be9bf94e | [
"MIT"
] | null | null | null | from pacoRun import *
| 11 | 21 | 0.772727 | 3 | 22 | 5.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 22 | 1 | 22 | 22 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d83a264e7589ae042d41f3619803a27354de2102 | 133 | py | Python | lib/glob.py | jplevyak/pyc | 9f4bc49be78ba29427841460945ce63826fcd857 | [
"BSD-3-Clause"
] | 3 | 2019-08-21T22:01:35.000Z | 2021-07-25T00:21:28.000Z | lib/glob.py | jplevyak/pyc | 9f4bc49be78ba29427841460945ce63826fcd857 | [
"BSD-3-Clause"
] | null | null | null | lib/glob.py | jplevyak/pyc | 9f4bc49be78ba29427841460945ce63826fcd857 | [
"BSD-3-Clause"
] | null | null | null |
import os, os.path, fnmatch, re
def iglob(s):
return __iter('')
def glob(s):
return ['']
def init():
pass
| 11.083333 | 32 | 0.518797 | 18 | 133 | 3.722222 | 0.722222 | 0.208955 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.323308 | 133 | 11 | 33 | 12.090909 | 0.744444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.428571 | false | 0.142857 | 0.142857 | 0.285714 | 0.857143 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 6 |
dc1fd8b5321721f8a223ba65bc79a1349804f6ea | 204 | py | Python | pylight/__init__.py | en0/pylight | ff91de63a40216484888548d83dcbc2de9d7ddb5 | [
"MIT"
] | null | null | null | pylight/__init__.py | en0/pylight | ff91de63a40216484888548d83dcbc2de9d7ddb5 | [
"MIT"
] | null | null | null | pylight/__init__.py | en0/pylight | ff91de63a40216484888548d83dcbc2de9d7ddb5 | [
"MIT"
] | null | null | null | from pylight.interface import IBacklight, IBacklightManager, IActionBroker
from pylight.backlight import Backlight
from pylight.backlight_manager import BacklightManager
from pylight.action import Action
| 40.8 | 74 | 0.882353 | 23 | 204 | 7.782609 | 0.478261 | 0.24581 | 0.223464 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088235 | 204 | 4 | 75 | 51 | 0.962366 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
dc5eb2b80f86a9521d26ee3f67d7a4f539ca0e64 | 45 | py | Python | file_catalog/schema/__init__.py | WIPACrepo/file_catalog | 01c0947d32d621d28516ecce3604a0124c673925 | [
"MIT"
] | null | null | null | file_catalog/schema/__init__.py | WIPACrepo/file_catalog | 01c0947d32d621d28516ecce3604a0124c673925 | [
"MIT"
] | 61 | 2017-02-23T17:58:43.000Z | 2022-03-24T22:13:24.000Z | file_catalog/schema/__init__.py | WIPACrepo/file_catalog | 01c0947d32d621d28516ecce3604a0124c673925 | [
"MIT"
] | 2 | 2017-12-20T18:30:35.000Z | 2018-01-08T15:15:03.000Z | """Init."""
from . import types, validation
| 11.25 | 31 | 0.644444 | 5 | 45 | 5.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.155556 | 45 | 3 | 32 | 15 | 0.763158 | 0.111111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f49934d73c1a4813d11d90412962419c71420029 | 154 | py | Python | credential.py | Reeju2019/Projects | 8def4f7c954946a12ba7a063792621038861fcfb | [
"MIT"
] | null | null | null | credential.py | Reeju2019/Projects | 8def4f7c954946a12ba7a063792621038861fcfb | [
"MIT"
] | null | null | null | credential.py | Reeju2019/Projects | 8def4f7c954946a12ba7a063792621038861fcfb | [
"MIT"
] | null | null | null | account_sid = 'ACb764593ed8f98268235ba1c2768712ed'
auth_token = '0e7b1a9daac9f36e8b855a25585b59f2'
my_cell = '+918820931166'
my_twilio = '+12624563373' | 38.5 | 51 | 0.818182 | 12 | 154 | 10.166667 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.464286 | 0.090909 | 154 | 4 | 52 | 38.5 | 0.407143 | 0 | 0 | 0 | 0 | 0 | 0.598684 | 0.434211 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f4da6c51345b48463d19edef85bb5472f036fbb5 | 200 | py | Python | pages/views.py | olubiyiontheweb/travelworld | ca9d2206108bd59fd222e384bcaab7efd6832e24 | [
"MIT"
] | null | null | null | pages/views.py | olubiyiontheweb/travelworld | ca9d2206108bd59fd222e384bcaab7efd6832e24 | [
"MIT"
] | null | null | null | pages/views.py | olubiyiontheweb/travelworld | ca9d2206108bd59fd222e384bcaab7efd6832e24 | [
"MIT"
] | null | null | null | from django.shortcuts import render
# Create your views here.
def index(request):
return render(request, "pages/index.html")
def base(request):
return render(request, "layout/base.html")
| 16.666667 | 46 | 0.725 | 27 | 200 | 5.37037 | 0.62963 | 0.17931 | 0.262069 | 0.358621 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 200 | 11 | 47 | 18.181818 | 0.863095 | 0.115 | 0 | 0 | 0 | 0 | 0.182857 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0.2 | 0.4 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
f4ea8579a8421fbe2f3a224e6a25b3efafe0a274 | 133 | py | Python | LookProjeto/checkout/admin.py | jeffreyquirino/backup_looksistem | 46f3d372bb474b32c2fd0b249e1df029380a7925 | [
"Apache-2.0"
] | null | null | null | LookProjeto/checkout/admin.py | jeffreyquirino/backup_looksistem | 46f3d372bb474b32c2fd0b249e1df029380a7925 | [
"Apache-2.0"
] | 2 | 2020-06-06T01:14:59.000Z | 2021-06-10T22:52:57.000Z | LookProjeto/checkout/admin.py | jeffreyquirino/backup_looksistem | 46f3d372bb474b32c2fd0b249e1df029380a7925 | [
"Apache-2.0"
] | null | null | null | from django.contrib import admin
from .models import CartItem, Order, OrderItem
admin.site.register([CartItem, Order, OrderItem])
| 19 | 49 | 0.789474 | 17 | 133 | 6.176471 | 0.647059 | 0.247619 | 0.419048 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.120301 | 133 | 6 | 50 | 22.166667 | 0.897436 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
76091a90147fe417e29516ac5aca6d19f3b6a51a | 25,218 | py | Python | pyy1/.pycharm_helpers/python_stubs/-1550516950/_datetime.py | pyy1988/pyy_test1 | 6bea878409e658aa87441384419be51aaab061e7 | [
"Apache-2.0"
] | null | null | null | pyy1/.pycharm_helpers/python_stubs/-1550516950/_datetime.py | pyy1988/pyy_test1 | 6bea878409e658aa87441384419be51aaab061e7 | [
"Apache-2.0"
] | null | null | null | pyy1/.pycharm_helpers/python_stubs/-1550516950/_datetime.py | pyy1988/pyy_test1 | 6bea878409e658aa87441384419be51aaab061e7 | [
"Apache-2.0"
] | null | null | null | # encoding: utf-8
# module _datetime
# from (built-in)
# by generator 1.145
""" Fast implementation of the datetime type. """
# imports
import datetime as __datetime
# Variables with simple values
MAXYEAR = 9999
MINYEAR = 1
# no functions
# classes
class date(object):
""" date(year, month, day) --> date object """
def ctime(self): # real signature unknown; restored from __doc__
""" Return ctime() style string. """
pass
@classmethod
def fromordinal(cls, ordinal): # known case of _datetime.date.fromordinal
""" int -> date corresponding to a proleptic Gregorian ordinal. """
return date(1,1,1)
@classmethod
def fromtimestamp(cls, timestamp): # known case of _datetime.date.fromtimestamp
""" timestamp -> local date from a POSIX timestamp (like time.time()). """
return date(1,1,1)
def isocalendar(self): # known case of _datetime.date.isocalendar
""" Return a 3-tuple containing ISO year, week number, and weekday. """
return (1, 1, 1)
def isoformat(self): # known case of _datetime.date.isoformat
""" Return string in ISO 8601 format, YYYY-MM-DD. """
return ""
def isoweekday(self): # known case of _datetime.date.isoweekday
"""
Return the day of the week represented by the date.
Monday == 1 ... Sunday == 7
"""
return 0
def replace(self, year=None, month=None, day=None): # known case of _datetime.date.replace
""" Return date with new specified fields. """
return date(1,1,1)
def strftime(self, format): # known case of _datetime.date.strftime
""" format -> strftime() style string. """
return ""
def timetuple(self): # known case of _datetime.date.timetuple
""" Return time tuple, compatible with time.localtime(). """
return (0, 0, 0, 0, 0, 0, 0, 0, 0)
@classmethod
def today(self): # known case of _datetime.date.today
""" Current date or datetime: same as self.__class__.fromtimestamp(time.time()). """
return date(1, 1, 1)
def toordinal(self): # known case of _datetime.date.toordinal
""" Return proleptic Gregorian ordinal. January 1 of year 1 is day 1. """
return 0
def weekday(self): # known case of _datetime.date.weekday
"""
Return the day of the week represented by the date.
Monday == 0 ... Sunday == 6
"""
return 0
def __add__(self, *args, **kwargs): # real signature unknown
""" Return self+value. """
pass
def __eq__(self, *args, **kwargs): # real signature unknown
""" Return self==value. """
pass
def __format__(self, *args, **kwargs): # real signature unknown
""" Formats self with strftime. """
pass
def __getattribute__(self, *args, **kwargs): # real signature unknown
""" Return getattr(self, name). """
pass
def __ge__(self, *args, **kwargs): # real signature unknown
""" Return self>=value. """
pass
def __gt__(self, *args, **kwargs): # real signature unknown
""" Return self>value. """
pass
def __hash__(self, *args, **kwargs): # real signature unknown
""" Return hash(self). """
pass
def __init__(self, year, month, day): # real signature unknown; restored from __doc__
pass
def __le__(self, *args, **kwargs): # real signature unknown
""" Return self<=value. """
pass
def __lt__(self, *args, **kwargs): # real signature unknown
""" Return self<value. """
pass
@staticmethod # known case of __new__
def __new__(cls, year=None, month=None, day=None): # known case of _datetime.date.__new__
""" Create and return a new object. See help(type) for accurate signature. """
pass
def __ne__(self, *args, **kwargs): # real signature unknown
""" Return self!=value. """
pass
def __radd__(self, *args, **kwargs): # real signature unknown
""" Return value+self. """
pass
def __reduce__(self): # real signature unknown; restored from __doc__
""" __reduce__() -> (cls, state) """
pass
def __repr__(self, *args, **kwargs): # real signature unknown
""" Return repr(self). """
pass
def __rsub__(self, *args, **kwargs): # real signature unknown
""" Return value-self. """
pass
def __str__(self, *args, **kwargs): # real signature unknown
""" Return str(self). """
pass
def __sub__(self, *args, **kwargs): # real signature unknown
""" Return self-value. """
pass
day = property(lambda self: 0)
""":type: int"""
month = property(lambda self: 0)
""":type: int"""
year = property(lambda self: 0)
""":type: int"""
max = None # (!) real value is ''
min = None # (!) real value is ''
resolution = None # (!) real value is ''
class datetime(__datetime.date):
"""
datetime(year, month, day[, hour[, minute[, second[, microsecond[,tzinfo]]]]])
The year, month and day arguments are required. tzinfo may be None, or an
instance of a tzinfo subclass. The remaining arguments may be ints.
"""
def astimezone(self, tz): # known case of _datetime.datetime.astimezone
""" tz -> convert to local time in new timezone tz """
return datetime(1, 1, 1)
@classmethod
def combine(cls, date, time): # known case of _datetime.datetime.combine
""" date, time -> datetime with same date and time fields """
return datetime(1, 1, 1)
def ctime(self): # real signature unknown; restored from __doc__
""" Return ctime() style string. """
pass
def date(self): # known case of _datetime.datetime.date
""" Return date object with same year, month and day. """
return datetime(1, 1, 1)
def dst(self): # real signature unknown; restored from __doc__
""" Return self.tzinfo.dst(self). """
pass
@classmethod
def fromtimestamp(cls, timestamp, tz=None): # known case of _datetime.datetime.fromtimestamp
""" timestamp[, tz] -> tz's local time from POSIX timestamp. """
return datetime(1, 1, 1)
def isoformat(self, sep='T'): # known case of _datetime.datetime.isoformat
"""
[sep] -> string in ISO 8601 format, YYYY-MM-DDTHH:MM:SS[.mmmmmm][+HH:MM].
sep is used to separate the year from the time, and defaults to 'T'.
"""
return ""
@classmethod
def now(cls, tz=None): # known case of _datetime.datetime.now
"""
Returns new datetime object representing current time local to tz.
tz
Timezone object.
If no tz is specified, uses local timezone.
"""
return datetime(1, 1, 1)
def replace(self, year=None, month=None, day=None, hour=None, minute=None, second=None, microsecond=None, tzinfo=None): # known case of _datetime.datetime.replace
""" Return datetime with new specified fields. """
return datetime(1, 1, 1)
@classmethod
def strptime(cls, date_string, format): # known case of _datetime.datetime.strptime
""" string, format -> new datetime parsed from a string (like time.strptime()). """
return ""
def time(self): # known case of _datetime.datetime.time
""" Return time object with same time but with tzinfo=None. """
return time(0, 0)
def timestamp(self, *args, **kwargs): # real signature unknown
""" Return POSIX timestamp as float. """
pass
def timetuple(self): # known case of _datetime.datetime.timetuple
""" Return time tuple, compatible with time.localtime(). """
return (0, 0, 0, 0, 0, 0, 0, 0, 0)
def timetz(self): # known case of _datetime.datetime.timetz
""" Return time object with same time and tzinfo. """
return time(0, 0)
def tzname(self): # real signature unknown; restored from __doc__
""" Return self.tzinfo.tzname(self). """
pass
@classmethod
def utcfromtimestamp(self, timestamp): # known case of _datetime.datetime.utcfromtimestamp
""" Construct a naive UTC datetime from a POSIX timestamp. """
return datetime(1, 1, 1)
@classmethod
def utcnow(cls): # known case of _datetime.datetime.utcnow
""" Return a new datetime representing UTC day and time. """
return datetime(1, 1, 1)
def utcoffset(self): # real signature unknown; restored from __doc__
""" Return self.tzinfo.utcoffset(self). """
pass
def utctimetuple(self): # known case of _datetime.datetime.utctimetuple
""" Return UTC time tuple, compatible with time.localtime(). """
return (0, 0, 0, 0, 0, 0, 0, 0, 0)
def __add__(self, *args, **kwargs): # real signature unknown
""" Return self+value. """
pass
def __eq__(self, *args, **kwargs): # real signature unknown
""" Return self==value. """
pass
def __getattribute__(self, *args, **kwargs): # real signature unknown
""" Return getattr(self, name). """
pass
def __ge__(self, *args, **kwargs): # real signature unknown
""" Return self>=value. """
pass
def __gt__(self, *args, **kwargs): # real signature unknown
""" Return self>value. """
pass
def __hash__(self, *args, **kwargs): # real signature unknown
""" Return hash(self). """
pass
def __init__(self, year, month, day, hour=None, minute=None, second=None, microsecond=None, tzinfo=None): # real signature unknown; restored from __doc__
pass
def __le__(self, *args, **kwargs): # real signature unknown
""" Return self<=value. """
pass
def __lt__(self, *args, **kwargs): # real signature unknown
""" Return self<value. """
pass
@staticmethod # known case of __new__
def __new__(cls, year=None, month=None, day=None, hour=None, minute=None, second=None, microsecond=None, tzinfo=None): # known case of _datetime.datetime.__new__
""" Create and return a new object. See help(type) for accurate signature. """
pass
def __ne__(self, *args, **kwargs): # real signature unknown
""" Return self!=value. """
pass
def __radd__(self, *args, **kwargs): # real signature unknown
""" Return value+self. """
pass
def __reduce__(self): # real signature unknown; restored from __doc__
""" __reduce__() -> (cls, state) """
pass
def __repr__(self, *args, **kwargs): # real signature unknown
""" Return repr(self). """
pass
def __rsub__(self, *args, **kwargs): # real signature unknown
""" Return value-self. """
pass
def __str__(self, *args, **kwargs): # real signature unknown
""" Return str(self). """
pass
def __sub__(self, *args, **kwargs): # real signature unknown
""" Return self-value. """
pass
hour = property(lambda self: 0)
""":type: int"""
microsecond = property(lambda self: 0)
""":type: int"""
minute = property(lambda self: 0)
""":type: int"""
second = property(lambda self: 0)
""":type: int"""
tzinfo = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
max = None # (!) real value is ''
min = None # (!) real value is ''
resolution = None # (!) real value is ''
class time(object):
"""
time([hour[, minute[, second[, microsecond[, tzinfo]]]]]) --> a time object
All arguments are optional. tzinfo may be None, or an instance of
a tzinfo subclass. The remaining arguments may be ints.
"""
def dst(self): # real signature unknown; restored from __doc__
""" Return self.tzinfo.dst(self). """
pass
def isoformat(self): # known case of _datetime.time.isoformat
""" Return string in ISO 8601 format, HH:MM:SS[.mmmmmm][+HH:MM]. """
return ""
def replace(self, hour=None, minute=None, second=None, microsecond=None, tzinfo=None): # known case of _datetime.time.replace
""" Return time with new specified fields. """
return time(0, 0)
def strftime(self, format): # known case of _datetime.time.strftime
""" format -> strftime() style string. """
return ""
def tzname(self): # real signature unknown; restored from __doc__
""" Return self.tzinfo.tzname(self). """
pass
def utcoffset(self): # real signature unknown; restored from __doc__
""" Return self.tzinfo.utcoffset(self). """
pass
def __eq__(self, *args, **kwargs): # real signature unknown
""" Return self==value. """
pass
def __format__(self, *args, **kwargs): # real signature unknown
""" Formats self with strftime. """
pass
def __getattribute__(self, *args, **kwargs): # real signature unknown
""" Return getattr(self, name). """
pass
def __ge__(self, *args, **kwargs): # real signature unknown
""" Return self>=value. """
pass
def __gt__(self, *args, **kwargs): # real signature unknown
""" Return self>value. """
pass
def __hash__(self, *args, **kwargs): # real signature unknown
""" Return hash(self). """
pass
def __init__(self, hour=None, minute=None, second=None, microsecond=None, tzinfo=None): # real signature unknown; restored from __doc__
pass
def __le__(self, *args, **kwargs): # real signature unknown
""" Return self<=value. """
pass
def __lt__(self, *args, **kwargs): # real signature unknown
""" Return self<value. """
pass
@staticmethod # known case of __new__
def __new__(cls, hour=None, minute=None, second=None, microsecond=None, tzinfo=None): # known case of _datetime.time.__new__
""" Create and return a new object. See help(type) for accurate signature. """
pass
def __ne__(self, *args, **kwargs): # real signature unknown
""" Return self!=value. """
pass
def __reduce__(self): # real signature unknown; restored from __doc__
""" __reduce__() -> (cls, state) """
pass
def __repr__(self, *args, **kwargs): # real signature unknown
""" Return repr(self). """
pass
def __str__(self, *args, **kwargs): # real signature unknown
""" Return str(self). """
pass
hour = property(lambda self: 0)
""":type: int"""
microsecond = property(lambda self: 0)
""":type: int"""
minute = property(lambda self: 0)
""":type: int"""
second = property(lambda self: 0)
""":type: int"""
tzinfo = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
max = None # (!) real value is ''
min = None # (!) real value is ''
resolution = None # (!) real value is ''
class timedelta(object):
""" Difference between two datetime values. """
def total_seconds(self, *args, **kwargs): # real signature unknown
""" Total seconds in the duration. """
pass
def __abs__(self, *args, **kwargs): # real signature unknown
""" abs(self) """
pass
def __add__(self, *args, **kwargs): # real signature unknown
""" Return self+value. """
pass
def __bool__(self, *args, **kwargs): # real signature unknown
""" self != 0 """
pass
def __divmod__(self, *args, **kwargs): # real signature unknown
""" Return divmod(self, value). """
pass
def __eq__(self, *args, **kwargs): # real signature unknown
""" Return self==value. """
pass
def __floordiv__(self, *args, **kwargs): # real signature unknown
""" Return self//value. """
pass
def __getattribute__(self, *args, **kwargs): # real signature unknown
""" Return getattr(self, name). """
pass
def __ge__(self, *args, **kwargs): # real signature unknown
""" Return self>=value. """
pass
def __gt__(self, *args, **kwargs): # real signature unknown
""" Return self>value. """
pass
def __hash__(self, *args, **kwargs): # real signature unknown
""" Return hash(self). """
pass
def __init__(self, *args, **kwargs): # real signature unknown
pass
def __le__(self, *args, **kwargs): # real signature unknown
""" Return self<=value. """
pass
def __lt__(self, *args, **kwargs): # real signature unknown
""" Return self<value. """
pass
def __mod__(self, *args, **kwargs): # real signature unknown
""" Return self%value. """
pass
def __mul__(self, *args, **kwargs): # real signature unknown
""" Return self*value. """
pass
def __neg__(self, *args, **kwargs): # real signature unknown
""" -self """
pass
@staticmethod # known case of __new__
def __new__(cls, days=None, seconds=None, microseconds=None, milliseconds=None, minutes=None, hours=None, weeks=None): # known case of _datetime.timedelta.__new__
""" Create and return a new object. See help(type) for accurate signature. """
pass
def __ne__(self, *args, **kwargs): # real signature unknown
""" Return self!=value. """
pass
def __pos__(self, *args, **kwargs): # real signature unknown
""" +self """
pass
def __radd__(self, *args, **kwargs): # real signature unknown
""" Return value+self. """
pass
def __rdivmod__(self, *args, **kwargs): # real signature unknown
""" Return divmod(value, self). """
pass
def __reduce__(self): # real signature unknown; restored from __doc__
""" __reduce__() -> (cls, state) """
pass
def __repr__(self, *args, **kwargs): # real signature unknown
""" Return repr(self). """
pass
def __rfloordiv__(self, *args, **kwargs): # real signature unknown
""" Return value//self. """
pass
def __rmod__(self, *args, **kwargs): # real signature unknown
""" Return value%self. """
pass
def __rmul__(self, *args, **kwargs): # real signature unknown
""" Return value*self. """
pass
def __rsub__(self, *args, **kwargs): # real signature unknown
""" Return value-self. """
pass
def __rtruediv__(self, *args, **kwargs): # real signature unknown
""" Return value/self. """
pass
def __str__(self, *args, **kwargs): # real signature unknown
""" Return str(self). """
pass
def __sub__(self, *args, **kwargs): # real signature unknown
""" Return self-value. """
pass
def __truediv__(self, *args, **kwargs): # real signature unknown
""" Return self/value. """
pass
days = property(lambda self: 0)
"""Number of days.
:type: int
"""
microseconds = property(lambda self: 0)
"""Number of microseconds (>= 0 and less than 1 second).
:type: int
"""
seconds = property(lambda self: 0)
"""Number of seconds (>= 0 and less than 1 day).
:type: int
"""
max = None # (!) real value is ''
min = None # (!) real value is ''
resolution = None # (!) real value is ''
class tzinfo(object):
""" Abstract base class for time zone info objects. """
def dst(self, date_time): # known case of _datetime.tzinfo.dst
""" datetime -> DST offset in minutes east of UTC. """
return 0
def fromutc(self, date_time): # known case of _datetime.tzinfo.fromutc
""" datetime in UTC -> datetime in local time. """
return datetime(1, 1, 1)
def tzname(self, date_time): # known case of _datetime.tzinfo.tzname
""" datetime -> string name of time zone. """
return ""
def utcoffset(self, date_time): # known case of _datetime.tzinfo.utcoffset
""" datetime -> timedelta showing offset from UTC, negative values indicating West of UTC """
return 0
def __getattribute__(self, *args, **kwargs): # real signature unknown
""" Return getattr(self, name). """
pass
def __init__(self, *args, **kwargs): # real signature unknown
pass
@staticmethod # known case of __new__
def __new__(*args, **kwargs): # real signature unknown
""" Create and return a new object. See help(type) for accurate signature. """
pass
def __reduce__(self, *args, **kwargs): # real signature unknown
""" -> (cls, state) """
pass
class timezone(__datetime.tzinfo):
""" Fixed offset from UTC implementation of tzinfo. """
def dst(self, *args, **kwargs): # real signature unknown
""" Return None. """
pass
def fromutc(self, *args, **kwargs): # real signature unknown
""" datetime in UTC -> datetime in local time. """
pass
def tzname(self, *args, **kwargs): # real signature unknown
""" If name is specified when timezone is created, returns the name. Otherwise returns offset as 'UTC(+|-)HH:MM'. """
pass
def utcoffset(self, *args, **kwargs): # real signature unknown
""" Return fixed offset. """
pass
def __eq__(self, *args, **kwargs): # real signature unknown
""" Return self==value. """
pass
def __getinitargs__(self, *args, **kwargs): # real signature unknown
""" pickle support """
pass
def __ge__(self, *args, **kwargs): # real signature unknown
""" Return self>=value. """
pass
def __gt__(self, *args, **kwargs): # real signature unknown
""" Return self>value. """
pass
def __hash__(self, *args, **kwargs): # real signature unknown
""" Return hash(self). """
pass
def __init__(self, *args, **kwargs): # real signature unknown
pass
def __le__(self, *args, **kwargs): # real signature unknown
""" Return self<=value. """
pass
def __lt__(self, *args, **kwargs): # real signature unknown
""" Return self<value. """
pass
@staticmethod # known case of __new__
def __new__(*args, **kwargs): # real signature unknown
""" Create and return a new object. See help(type) for accurate signature. """
pass
def __ne__(self, *args, **kwargs): # real signature unknown
""" Return self!=value. """
pass
def __repr__(self, *args, **kwargs): # real signature unknown
""" Return repr(self). """
pass
def __str__(self, *args, **kwargs): # real signature unknown
""" Return str(self). """
pass
max = None # (!) real value is ''
min = None # (!) real value is ''
utc = None # (!) real value is ''
class __loader__(object):
"""
Meta path import for built-in modules.
All methods are either class or static methods to avoid the need to
instantiate the class.
"""
@classmethod
def create_module(cls, *args, **kwargs): # real signature unknown
""" Create a built-in module """
pass
@classmethod
def exec_module(cls, *args, **kwargs): # real signature unknown
""" Exec a built-in module """
pass
@classmethod
def find_module(cls, *args, **kwargs): # real signature unknown
"""
Find the built-in module.
If 'path' is ever specified then the search is considered a failure.
This method is deprecated. Use find_spec() instead.
"""
pass
@classmethod
def find_spec(cls, *args, **kwargs): # real signature unknown
pass
@classmethod
def get_code(cls, *args, **kwargs): # real signature unknown
""" Return None as built-in modules do not have code objects. """
pass
@classmethod
def get_source(cls, *args, **kwargs): # real signature unknown
""" Return None as built-in modules do not have source code. """
pass
@classmethod
def is_package(cls, *args, **kwargs): # real signature unknown
""" Return False as built-in modules are never packages. """
pass
@classmethod
def load_module(cls, *args, **kwargs): # real signature unknown
"""
Load the specified module into sys.modules and return it.
This method is deprecated. Use loader.exec_module instead.
"""
pass
def module_repr(module): # reliably restored by inspect
"""
Return repr for the module.
The method is deprecated. The import machinery does the job itself.
"""
pass
def __init__(self, *args, **kwargs): # real signature unknown
pass
__weakref__ = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""list of weak references to the object (if defined)"""
__dict__ = None # (!) real value is ''
# variables with complex values
datetime_CAPI = None # (!) real value is ''
__spec__ = None # (!) real value is ''
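The `tzinfo` class above is only an auto-generated stub of the abstract interface (`utcoffset`, `tzname`, `dst`). As an illustration of how that interface is meant to be filled in, here is a minimal concrete fixed-offset subclass built on the real standard-library `datetime` module (an editorial sketch, not part of the stub file):

```python
from datetime import datetime, timedelta, tzinfo


class FixedOffset(tzinfo):
    """Concrete tzinfo with a constant offset from UTC and no DST."""

    def __init__(self, minutes_east, name):
        self._offset = timedelta(minutes=minutes_east)
        self._name = name

    def utcoffset(self, dt):
        # timedelta offset from UTC; negative values mean west of UTC
        return self._offset

    def tzname(self, dt):
        # string name of the time zone
        return self._name

    def dst(self, dt):
        # a fixed-offset zone has no daylight saving adjustment
        return timedelta(0)


ist = FixedOffset(330, "IST")
dt = datetime(2019, 1, 1, 12, 0, tzinfo=ist)
print(dt.isoformat())  # 2019-01-01T12:00:00+05:30
```

Since Python 3.2 the built-in `datetime.timezone` class (also stubbed above) provides exactly this fixed-offset behavior out of the box.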
# File: mc2pbrt/liquid/__init__.py (repo: PbrtCraft/mc2pbrt, license: MIT)
from liquid.lava import LavaSolver
from liquid.water import WaterSolver
# File: indicators/tests/test_iptt_excel_endpoints.py (repo: Falliatcom-sa/falliatcom, license: Apache-2.0)
# -*- coding: utf-8 -*-
"""Excel endpoints (Timeperiods, TvA, and TvA full report) tested to ensure they download parsable content"""
from django import test
from django.urls import reverse
from factories import (
workflow_models as w_factories,
indicators_models as i_factories
)
from indicators.models import Indicator
class IPTTDownloadTestCases:
def test_timeperiods_monthly_download(self):
params = {
'reportType': '2',
'programId': self.program.pk,
'frequency': '7'
}
response = self.client.get(reverse('iptt_excel'), params)
self.assertEqual(response.status_code, 200)
def test_timeperiods_annual_download(self):
params = {
'reportType': '2',
'programId': self.program.pk,
'frequency': '3'
}
response = self.client.get(reverse('iptt_excel'), params)
self.assertEqual(response.status_code, 200)
def test_timeperiods_group_by_levels_download(self):
params = {
'reportType': '2',
'programId': self.program.pk,
'groupby': '2',
'frequency': '7'
}
response = self.client.get(reverse('iptt_excel'), params)
self.assertEqual(response.status_code, 200)
def test_timeperiods_limited_download(self):
params = {
'reportType': '2',
'programId': self.program.pk,
'frequency': '6',
'start': '1',
'end': '2'
}
response = self.client.get(reverse('iptt_excel'), params)
self.assertEqual(response.status_code, 200)
def test_tva_download(self):
params = {
'reportType': '1',
'programId': self.program.pk,
'frequency': '7'
}
response = self.client.get(reverse('iptt_excel'), params)
self.assertEqual(response.status_code, 200)
def test_tva_lop_download(self):
params = {
'reportType': '1',
'programId': self.program.pk,
'frequency': '1'
}
response = self.client.get(reverse('iptt_excel'), params)
self.assertEqual(response.status_code, 200)
def test_tva_limited_download(self):
params = {
'reportType': '1',
'programId': self.program.pk,
'frequency': '4',
'start': '1',
'end': '2'
}
response = self.client.get(reverse('iptt_excel'), params)
self.assertEqual(response.status_code, 200)
def test_tva_full_report_download(self):
params = {
'programId': self.program.pk,
'fullTVA': 'true'
}
response = self.client.get(reverse('iptt_excel'), params)
self.assertEqual(response.status_code, 200)
class TestRFExcelDownload(test.TestCase, IPTTDownloadTestCases):
def setUp(self):
country = w_factories.CountryFactory()
self.program = w_factories.RFProgramFactory(tiers=True, levels=2)
self.program.country.add(country)
for level in self.program.levels.all():
i_factories.RFIndicatorFactory(program=self.program, level=level, targets=500, results=400)
tola_user = w_factories.TolaUserFactory(country=country)
w_factories.grant_program_access(tola_user, self.program, country, 'high')
self.client.force_login(tola_user.user)
class TestNonRFExcelDownload(test.TestCase, IPTTDownloadTestCases):
def setUp(self):
country = w_factories.CountryFactory()
self.program = w_factories.RFProgramFactory(migrated=False)
self.program.country.add(country)
for _, level in Indicator.OLD_LEVELS:
i_factories.RFIndicatorFactory(program=self.program, old_level=level, targets=500, results=400)
tola_user = w_factories.TolaUserFactory(country=country)
w_factories.grant_program_access(tola_user, self.program, country, 'high')
self.client.force_login(tola_user.user)
class TestRFSpecialcharsExcelDownload(test.TestCase, IPTTDownloadTestCases):
def setUp(self):
country = w_factories.CountryFactory()
self.program = w_factories.RFProgramFactory(tiers=['Spécîål Chars', '##1!@!#$', 'asdf'], levels=2)
self.program.country.add(country)
for level in self.program.levels.all():
i_factories.RFIndicatorFactory(program=self.program, level=level, targets=500, results=400)
tola_user = w_factories.TolaUserFactory(country=country)
w_factories.grant_program_access(tola_user, self.program, country, 'high')
        self.client.force_login(tola_user.user)
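Django's test client urlencodes the `params` dict passed to `client.get()` into the request's query string, so each call above is equivalent to requesting the resolved URL with the parameters appended. A self-contained sketch of that encoding (the `/indicators/iptt_excel/` path is a hypothetical resolution of the `iptt_excel` URL name, and `42` stands in for `self.program.pk`):

```python
from urllib.parse import urlencode

# programId is self.program.pk at runtime; 42 is a placeholder here
params = {'reportType': '2', 'programId': 42, 'frequency': '7'}
query_string = urlencode(params)
url = '/indicators/iptt_excel/?' + query_string
print(url)  # /indicators/iptt_excel/?reportType=2&programId=42&frequency=7
```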
# File: devel/lib/python2.7/dist-packages/waterplus_map_tools/srv/__init__.py (repo: THUwelcomerobot/robot, license: MIT)
from ._AddNewWaypoint import *
from ._GetNumOfWaypoints import *
from ._GetWaypointByIndex import *
from ._GetWaypointByName import *
from ._SaveWaypoints import *
# File: micromath/pow/test_pow.py (repo: hedrox/micromath, license: MIT)
import logging
import json
import math
from server import app
app.testing = True
logging.disable(logging.ERROR)
class TestPow:
def test_correct_pow(self):
with app.test_client() as client:
body = {'base': 32, 'power': 2}
result = client.post('/api/v1/pow', json=body)
assert result.status_code == 200
data = json.loads(result.data)
assert math.isclose(float(data['result']), 1024.0)
assert data['error'] is None
def test_missing_attribute(self):
with app.test_client() as client:
body = {'power': 2}
result = client.post('/api/v1/pow', json=body)
assert result.status_code == 500
data = json.loads(result.data)
assert data['result'] is None
assert 'name' in data['error']
assert data['error']['name'] == 'ValidationError'
def test_invalid_attribute_type(self):
with app.test_client() as client:
body = {'base': None, 'power': 2}
result = client.post('/api/v1/pow', json=body)
assert result.status_code == 500
data = json.loads(result.data)
assert data['result'] is None
assert 'name' in data['error']
assert data['error']['name'] == 'ValidationError'
body = {'base': '2', 'power': 2}
result = client.post('/api/v1/pow', json=body)
assert result.status_code == 500
data = json.loads(result.data)
assert data['result'] is None
assert 'name' in data['error']
assert data['error']['name'] == 'ValidationError'
def test_empty_body(self):
with app.test_client() as client:
body = {}
result = client.post('/api/v1/pow', json=body)
assert result.status_code == 500
data = json.loads(result.data)
assert data['result'] is None
assert 'name' in data['error']
assert data['error']['name'] == 'ValidationError'
def test_extra_attribute(self):
with app.test_client() as client:
body = {'base': 32, 'power': 2, 'extra_key': 32}
result = client.post('/api/v1/pow', json=body)
assert result.status_code == 500
data = json.loads(result.data)
assert data['result'] is None
assert 'name' in data['error']
assert data['error']['name'] == 'ValidationError'
def test_no_body(self):
with app.test_client() as client:
result = client.post('/api/v1/pow')
assert result.status_code == 400
data = json.loads(result.data)
assert data['result'] is None
assert data['error'] == 'Data not provided'
def test_invalid_api_version(self):
with app.test_client() as client:
body = {'base': 32, 'power': 2}
result = client.post('/api/v2/pow', json=body)
assert result.status_code == 404
data = json.loads(result.data)
assert data['result'] is None
assert data['error'] == 'API version v2 not found'
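Taken together, the assertions above specify the endpoint's contract: a JSON body with exactly the numeric keys `base` and `power` returns 200 with a `result`, a missing body returns 400, and any schema violation returns 500 with a `ValidationError`. A pure-Python sketch of that validation logic (hypothetical; the real `server.py` handler is not shown in this file):

```python
def handle_pow(body):
    """Return (status_code, payload) matching the contract the tests assert."""
    def validation_error(message):
        return 500, {'result': None,
                     'error': {'name': 'ValidationError', 'message': message}}

    if body is None:
        return 400, {'result': None, 'error': 'Data not provided'}
    if set(body) != {'base', 'power'}:
        # catches both missing and extra attributes
        return validation_error('body must contain exactly base and power')
    for key in ('base', 'power'):
        value = body[key]
        if isinstance(value, bool) or not isinstance(value, (int, float)):
            return validation_error('%s must be a number' % key)
    return 200, {'result': float(body['base'] ** body['power']), 'error': None}


print(handle_pow({'base': 32, 'power': 2}))  # (200, {'result': 1024.0, 'error': None})
```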
#!/usr/bin/env python
# File: data_generator/config.py (repo: lshupingxl/crnn.pytorch, license: MIT)
__author__ = 'solivr'
class CONST:
DIMENSION_REDUCTION_W_POOLING = 2 * 2 # 2x2 pooling in dimension W on layer 1 and 2
class Alphabet:
    # The alphabet used at training time should include the blank character;
    # the alphabet used when building the dataset should not.
LettersLowercase = 'abcdefghijklmnopqrstuvwxyz' # 26
LettersCapitals = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' # 26
Digits = '0123456789' # 10
Symbols = '~!@#%^&*()+$_=,.。<>/?\`;:\'"[]{}《》' # 33
    ChineseChar = ('臣击居移鼻炉栀魏展多酚蘑数泥槽野蓝散售占服检含府葵咨德蔓退扫撕岩锂浅福穹蜡吸茯碍旭迪泛般材远猪柏肌止热活搔鑫芷湿耗锆荫稚盏赢梭抑听朋把取癃丝耀丸比厚利冻及衰晋弓怀育支華宽氦健蔡差袖科息济谊钢呼苷窿萤凝染凌结就扩扶分后凇联十础煎贵反巷糪时榆翻资桐起麸常言变大制隙能羟冠蒡气赖蔘张找肺百便恺冷锌衍专援景广三珊祥磨簧馆岛赫娃突薢贝熟值稣眠佳嫩仙晒酵淋甘拔袭于延霜埃响黑疣榈麒蚀茵轮齦翁水乳鳞转盆勃劈搏键泵毕枇斜寝葛职芃告各雪械长哈珂微庤教柯备癖每领聚因残维醯薯七趾绕总豪草园唐陈绷澳阀给液诱禾谱祛芬拜良壮愈诃医缬约涵短铬国戟叁心绿琼路际肯沃玛振孚销奈续民叫方癜拕产翠奠天龙应湘邮镑轩贰咀皂策痰涎链令沉痣喉冀造掩身杭最劲坤儿疚篮烧伟黏激汀境筝彤盒性怪復菲何辐婷鸡氏低癥芐瘤烯他楂葫桉潘易沥尼狂声糖兰萨轴稳尘锦过屋璃呋嘧妮塞瞳板檗泸森滩肾鲤檬斛庄颧廊捷语畸杏氯损膀陇浜超债爆榭系克塑逆扁室蒽环余飞忍谢桥党溪运作腰禅义翼托率胚缀唯杉可植粪庭泊手毒呕鼓碘孟鱼探险速几睡肥标紫烟醇袪渴丰引夏属浓脒饮鹏疡抢惨上庆穿熏崇蜕双妆哮另依向压嗦子海盖准瓣户蛇俭妈规希珈外酰滴力列疾业痹胖里殊弯交巨柿钉癣吲豉母醚闽晕诊萘喳捌曼铜舒恩浮论梢雯农期赛附汁镫迷股脉窥啶次蒿唇但状煨浦匀霰杜寻源柄贴泉商流搽原辰脏扬蛸导虫默疫兽模毛酞脾称珠与麝赝枝耵万茱博厄茂额瘰置风辛京杷文务久整央飘郭布重合伐茄块图关虾请朱任唥润坳都髋蜂奶蔷受厦众颔踝窝亥颐记恶月膏醋泳果贤庙叉层深配膝盈树馥抜锡瘘滋袋士异付吹幽泰急花赵核妇甜只询理南舌胀藻镍高噻锐辽鲜郁脲沐侬究马苓翳团紧芳清彩格哚圳硝犬查消栗氟耐孔促焙群咣黄鳄害戊栾嫬蛋呢并房被蒜尚硫出照木牛摘嗉蛰亚癸钻量容美聍弎添凡营遵观铵固物点枸垂船江猴已萸椒咗腮龄屈疼想葡湖刷酸旁奉嚏仲友芡叔呱喜秋丹枙帕杰螵洁幼敌颌试填泮刺欣阳灭肽藿光梗如带连敏蝶汞络拗莪妙苏雾释谜元补荞浆絮充租酯啉摄坑所秀莱迫胯磺荡佛杂夷苠淫拨钾还箴秘牌势渡东炔网留阁盘燥汗参桔素介虎首茛进闵菟药葆沛姓苍饱蕊种籓练待枳预满睛钆调苄员病淳蓣经委石盐埔氛晨眉早视四榴橡故墴凯难撞瑞褶沿基痘棘褐六亿羔之尾税腻芎密藤麦癓脯才伦授裸咖例疹全铁稹普抛疏杨师油融杀好泡灰冡纳未态疳赤峰垫霍玫酱甙测使瘠需淤枯社干纶索正寒惹栓鸟苡暂竣伍映卷断罕琳滤鲁烙缘疮泪按帽粒肆丁五复肪狻假睦颗乱墨蹬宙洽莨闸票望法则型晴垸菜乙逍左典护鲨吁皿王至杞停站米拓体罩诺夹喘的生季示事近担舟邱柳军随芩尤捈狗改卫载着娠湾西老腔胎摩浸琅穗窦壁崔秦皋火第提磷州糜芥市然特区条吨枣华缓黎祁钞羊乃哆暗谷茴稀觉意瓶界蒙混政刻走坦鞣输宫地槐径治冶角邦艽遥佰菌娜囡溶罐纸穴埋街鸿沫莎权择遂镜瓷噫甫趋膦头局腺粉度痛歆爽棉孜疗倩灸技二孢田卜馈云薄顺门替炼髓悉堵施厅它件橼扭积漕恒贞单死漱独钡简色套洗铸质蝎藏直胃谈宣达蔗纱计傲麟年爬录品定者葱抒话必类洞亭汇蟅刚监工橘名端勒汉靶矾边腹肝空芪姜籍镁详烷初裹圣梨渍萝唑蔻笛枫椽烁肮巴精解肟肃射完为劳撩乌升宇份显操棱椎样胶胺阴伸盼蒲氢疱疲涌蟹代归幻迈豆釉纺住楷旱泻枪硅史排甾隔县弃菪炒鹅虹喹硕仰酶帅竟嵩自曹和妥欧酒酐裕顶静齐塘菱脐鉍味传葶蓼眝冒背律感苋负换溴腾耳蛭半菇恬肢挫宏荧钛吊缝情阈忻犀昆启征俞伤龈问翘刮梅宁估有跌弱灌浣切店陆胸钯鉴岁尔念富限再钩下软成蔽眼荆疑去选彬研瘫防暖拆通呤做昔片根效刀嘌不芦等院岑公筛极床酣麻粘式松桩墩缴滑游柴嗪咬萎痤土历部来莘周或纈间膨骨芯针学具接甁芽末触注屏污萍绒锁镉臂尪官集捡蜗破焦玖醌波养形弹逐榕氮楼城山透姆挂九颈丙苹智化财封鹿肩以薇红徐茁承冬侧香家震创咳巧肛训衬乡铍泗败认茅迅离瓦罗薏珺曝喷芙泌盯倍滨除企吝平膜安白旦奇见凤潍赌允腱筋抗淮悦莲威个氧康咪插珀纷粗醛主潜梦综弧乐缺善陀睾序瘟痧翅括烫津卤杵步码村武紊嗜笋会协费温硒益思建放古吖柱伏脊莫锉位喃共萦灯强舜金竹错杆钴葩签蠲毫疬净臭折扑同春球非评铂呲价柒棒青致搐壳追仁买功也钼右包熨腐沟泽用表顾太棋恳电蓟炎两报畄夫矫内肼航辨血卓实场吗闪砂这先俄少申割钟祺忧锰螨柠加歧千钱河逸旋笨司聖淀沙贺免蕙送肤脱匮颠俊临夭浙嗽惠刘而乔毗蒎蟒肠管露循信羌恢级别兆脂供什囊妊痔辅立台误阶新铅碱抽炙摸剔牙卡佐食桑昌奥象拿桃迎癌辩豫曙尖朵仑助煮蝉阻渊北畅朗硬杯讯钵中号洒真毋港厘症脓舍晶窍须休醒竭批怡证庚洛锋打线机得兵前裂入苁坊樟睑英殷胰厂芍戴樱炭决玻芝凉节小脑保蒸拾推严陪寺牵丽鹤冲蜜仔螺避辣蟾宝洋处从咛在隆尿萆腋口胍帖要淞其啡莵皮痒救糊仪己顿料颉岳匹旨芒泾曲影巯对当志印危慧宾落举擦君扎萄绪灵斯目座冢催据悠银阿绣渗珍蔬堂烤颛羧患装知孕析衡降频剂呀硼快汾统匪襞肉笔叮吐瑟泼塔设构班缩适册术锯嚼伊卵瑚涤佩哌敷铋没世黛桂瓜苗川脈磁奎慈圆玉荷隐胞岐童弥伪伽腿茜熊冰副确底纤棕嵌浴卢瘀浊於修餐嘉相永糠胆滞蓉娘籽蛎无嘴寓段牡祝命雅氨眩雌献划面林苯收婴翔羚蚓衣减咏灶漠雍跟'
                   '唾坎车癀拉纟醉梓八菊抚回韦盛是行幸琪陷锑肿祭荣开雄碧吡彭席祖纯酪郊挑聆腕艾酊汤纹钙指啊发呐酮拍嗓苦织吉淡星道茶丫沸派琥督丛本斑足傅叶像礼慢扰宜虑程绵亮增诚旧氩诗算剑佑抓夜控考沪窄藓伴明涂描采吴项塌寿器女坏氖甲碳汽障厌字攻齿看蚕內璐兴雁溃那招失人闭验咽音颅霞细悬动霉腥寄胱厉察钳鹰更锻培神荟日渣钠爱抹雷玄壹围洲硷写仟昼簇一铺组胡优镇铝账零宗')
BLANK_SYMBOL = '-'
DIGITS_ONLY = BLANK_SYMBOL + Digits
SYMBOLS_ONLY = BLANK_SYMBOL + Symbols
CHINESECHAR_ONLY = BLANK_SYMBOL + ChineseChar
LETTERS_DIGITS = BLANK_SYMBOL + Digits + LettersCapitals + LettersLowercase
LETTERS_DIGITS_LOWERCASE = BLANK_SYMBOL + LettersLowercase
LETTERS_ONLY = BLANK_SYMBOL + LettersCapitals + LettersLowercase
    LETTERS_ONLY_LOWERCASE = BLANK_SYMBOL + LettersLowercase
LETTERS_SYMBOLS = BLANK_SYMBOL + LettersCapitals + LettersLowercase + Symbols
LETTERS_LOWERCASE_SYMBOLS = BLANK_SYMBOL + LettersLowercase + Symbols
LETTERS_DIGITS_SYMBOLS = BLANK_SYMBOL + Digits + LettersCapitals + LettersLowercase + Symbols
    LETTERS__LOWERCASE_DIGITS_SYMBOLS = BLANK_SYMBOL + Digits + LettersLowercase + Symbols
# CHINESECHAR_LETTERS_DIGIT_SYMBOLS = Digits + LettersCapitals + LettersLowercase + Symbols + ChineseChar
    CHINESECHAR_LETTERS_DIGIT_SYMBOLS = BLANK_SYMBOL + Digits + LettersCapitals + LettersLowercase + Symbols + ChineseChar
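In a CRNN/CTC pipeline these alphabet strings are typically turned into character/index maps, with the blank symbol at index 0 as the training-time comment above requires. A self-contained sketch using the same layout as `Alphabet.DIGITS_ONLY` (an editorial example, not part of the original config):

```python
BLANK_SYMBOL = '-'
Digits = '0123456789'
DIGITS_ONLY = BLANK_SYMBOL + Digits  # blank at index 0, as in Alphabet.DIGITS_ONLY

char_to_idx = {c: i for i, c in enumerate(DIGITS_ONLY)}
idx_to_char = {i: c for c, i in char_to_idx.items()}


def encode(text):
    # map a label string to class indices for CTC training
    return [char_to_idx[c] for c in text]


def decode(indices):
    # collapse repeats and drop blanks (index 0), as in CTC greedy decoding
    out = []
    prev = None
    for i in indices:
        if i != prev and i != 0:
            out.append(idx_to_char[i])
        prev = i
    return ''.join(out)


print(encode('042'))               # [1, 5, 3]
print(decode([1, 1, 0, 5, 5, 3]))  # 042
```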
# File: vmware_nsx/tests/unit/services/lbaas/test_nsxp_driver.py (repo: salv-orlando/vmware-nsx, license: Apache-2.0)
# Copyright (c) 2019 VMware, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
from unittest import mock
from neutron.tests import base
from neutron_lib import context
from neutron_lib import exceptions as n_exc
from vmware_nsx.services.lbaas import base_mgr
from vmware_nsx.services.lbaas import lb_const
from vmware_nsx.services.lbaas.nsx_p.implementation import healthmonitor_mgr
from vmware_nsx.services.lbaas.nsx_p.implementation import l7policy_mgr
from vmware_nsx.services.lbaas.nsx_p.implementation import l7rule_mgr
from vmware_nsx.services.lbaas.nsx_p.implementation import lb_utils as p_utils
from vmware_nsx.services.lbaas.nsx_p.implementation import listener_mgr
from vmware_nsx.services.lbaas.nsx_p.implementation import loadbalancer_mgr
from vmware_nsx.services.lbaas.nsx_p.implementation import member_mgr
from vmware_nsx.services.lbaas.nsx_p.implementation import pool_mgr
from vmware_nsx.services.lbaas.nsx_v3.implementation import lb_utils
from vmware_nsx.services.lbaas.octavia import octavia_listener
from vmware_nsx.tests.unit.services.lbaas import lb_data_models as lb_models
from vmware_nsx.tests.unit.services.lbaas import lb_translators
from vmware_nsxlib.v3 import exceptions as nsxlib_exc
from vmware_nsxlib.v3.policy import constants as policy_constants
# TODO(asarfaty): Use octavia models for those tests
LB_VIP = '10.0.0.10'
LB_ROUTER_ID = 'router-x'
ROUTER_ID = 'neutron-router-x'
LB_ID = 'xxx-xxx'
LB_TENANT_ID = 'yyy-yyy'
LB_SERVICE_ID = LB_ID
LB_NETWORK = {'router:external': False,
'id': 'xxxxx',
'name': 'network-1'}
EXT_LB_NETWORK = {'router:external': True,
'id': 'public',
'name': 'network-2'}
LISTENER_ID = 'listener-x'
HTTP_LISTENER_ID = 'listener-http'
HTTPS_LISTENER_ID = 'listener-https'
UDP_LISTENER_ID = 'listener-udp'
APP_PROFILE_ID = 'appp-x'
LB_VS_ID = LISTENER_ID
LB_APP_PROFILE = {
"resource_type": "LbHttpProfile",
"description": "my http profile",
"id": APP_PROFILE_ID,
"display_name": "httpprofile1",
"ntlm": False,
"request_header_size": 1024,
"http_redirect_to_https": False,
"idle_timeout": 1800,
"x_forwarded_for": "INSERT",
}
POOL_ID = 'ppp-qqq'
LB_POOL_ID = POOL_ID
LB_POOL = {
"display_name": "httppool1",
"description": "my http pool",
"id": LB_POOL_ID,
"algorithm": "ROUND_ROBIN",
}
MEMBER_ID = 'mmm-mmm'
MEMBER_ADDRESS = '10.0.0.200'
LB_MEMBER = {'display_name': 'member1_' + MEMBER_ID,
'weight': 1, 'ip_address': MEMBER_ADDRESS, 'port': 80,
'backup_member': False, 'admin_state_up': True}
LB_POOL_WITH_MEMBER = {
"display_name": "httppool1",
"description": "my http pool",
"id": LB_POOL_ID,
"algorithm": "ROUND_ROBIN",
"members": [
{
"display_name": "http-member1",
"ip_address": MEMBER_ADDRESS,
"port": "80",
"weight": "1",
"admin_state": "ENABLED"
}
]
}
HM_ID = 'hhh-mmm'
LB_MONITOR_ID = HM_ID
L7POLICY_ID = 'l7policy-xxx'
LB_RULE_ID = 'lb-rule-xx'
L7RULE_ID = 'l7rule-111'
LB_PP_ID = POOL_ID
FAKE_CERT = {'id': 'cert-xyz'}
SERVICE_STATUSES = {
"virtual_servers": [{
"virtual_server_id": LB_VS_ID,
"status": "UP"
}],
"service_id": LB_SERVICE_ID,
"service_status": "UP",
"pools": [{
"members": [{
"port": "80",
"ip_address": MEMBER_ADDRESS,
"status": "DOWN"
}],
"pool_id": LB_POOL_ID,
"status": "DOWN"
}]
}
VS_STATUSES = {
"results": [{
"virtual_server_id": LB_VS_ID,
"status": "UP"
}]
}
class BaseTestEdgeLbaasV2(base.BaseTestCase):
def _tested_entity(self):
return None
def completor(self, success=True):
self.last_completor_succees = success
self.last_completor_called = True
def reset_completor(self):
self.last_completor_succees = False
self.last_completor_called = False
def setUp(self):
super(BaseTestEdgeLbaasV2, self).setUp()
self.reset_completor()
self.context = context.get_admin_context()
octavia_objects = {
'loadbalancer': loadbalancer_mgr.EdgeLoadBalancerManagerFromDict(),
'listener': listener_mgr.EdgeListenerManagerFromDict(),
'pool': pool_mgr.EdgePoolManagerFromDict(),
'member': member_mgr.EdgeMemberManagerFromDict(),
'healthmonitor':
healthmonitor_mgr.EdgeHealthMonitorManagerFromDict(),
'l7policy': l7policy_mgr.EdgeL7PolicyManagerFromDict(),
'l7rule': l7rule_mgr.EdgeL7RuleManagerFromDict()}
self.edge_driver = octavia_listener.NSXOctaviaListenerEndpoint(
**octavia_objects)
self.lbv2_driver = mock.Mock()
self.core_plugin = mock.Mock()
self.core_plugin._nsx_version = '2.5.0'
base_mgr.LoadbalancerBaseManager._lbv2_driver = self.lbv2_driver
base_mgr.LoadbalancerBaseManager._core_plugin = self.core_plugin
self._patch_lb_plugin(self.lbv2_driver, self._tested_entity)
self._patch_policy_lb_clients(self.core_plugin)
self.lb = lb_models.LoadBalancer(LB_ID, LB_TENANT_ID, 'lb1', '',
'some-subnet', 'port-id', LB_VIP)
self.listener = lb_models.Listener(LISTENER_ID, LB_TENANT_ID,
'listener1', 'Dummy', None, LB_ID,
'HTTP', protocol_port=80,
loadbalancer=self.lb)
self.https_listener = lb_models.Listener(
HTTP_LISTENER_ID, LB_TENANT_ID, 'listener2', '', None, LB_ID,
'HTTPS', protocol_port=443, loadbalancer=self.lb)
self.terminated_https_listener = lb_models.Listener(
HTTPS_LISTENER_ID, LB_TENANT_ID, 'listener3', '', None, LB_ID,
'TERMINATED_HTTPS', protocol_port=443, loadbalancer=self.lb)
self.udp_listener = lb_models.Listener(
UDP_LISTENER_ID, LB_TENANT_ID, 'listener4', '', None, LB_ID,
'UDP', protocol_port=90, loadbalancer=self.lb)
self.allowed_cidr_listener = lb_models.Listener(
LISTENER_ID, LB_TENANT_ID, 'listener4', '', None, LB_ID,
'HTTP', protocol_port=80, allowed_cidrs=['1.1.1.0/24'],
loadbalancer=self.lb)
self.pool = lb_models.Pool(POOL_ID, LB_TENANT_ID, 'pool1', '',
None, 'HTTP', 'ROUND_ROBIN',
loadbalancer_id=LB_ID,
listener=self.listener,
listeners=[self.listener],
loadbalancer=self.lb)
self.sess_persistence = lb_models.SessionPersistence(
POOL_ID, 'HTTP_COOKIE', 'meh_cookie')
self.pool_persistency = lb_models.Pool(POOL_ID, LB_TENANT_ID,
'pool1', '', None, 'HTTP',
'ROUND_ROBIN', loadbalancer_id=LB_ID,
listener=self.listener,
listeners=[self.listener],
loadbalancer=self.lb,
session_persistence=self.sess_persistence)
self.member = lb_models.Member(MEMBER_ID, LB_TENANT_ID, POOL_ID,
MEMBER_ADDRESS, 80, 1, pool=self.pool,
name='member1', admin_state_up=True)
self.hm = lb_models.HealthMonitor(HM_ID, LB_TENANT_ID, 'PING', 3, 3,
1, pool=self.pool, name='hm1')
self.hm_http = lb_models.HealthMonitor(HM_ID, LB_TENANT_ID, 'HTTP',
3, 3, 1, pool=self.pool,
http_method='GET',
url_path="/meh", name='hm2')
self.l7policy = lb_models.L7Policy(L7POLICY_ID, LB_TENANT_ID,
name='policy-test',
description='policy-desc',
listener_id=LISTENER_ID,
action='REDIRECT_TO_POOL',
redirect_pool_id=POOL_ID,
listener=self.listener,
position=1)
self.l7rule = lb_models.L7Rule(L7RULE_ID, LB_TENANT_ID,
l7policy_id=L7POLICY_ID,
compare_type='EQUAL_TO',
invert=False,
type='HEADER',
key='key1',
value='val1',
policy=self.l7policy)
# Translate LBaaS objects to dictionaries
self.lb_dict = lb_translators.lb_loadbalancer_obj_to_dict(
self.lb)
self.listener_dict = lb_translators.lb_listener_obj_to_dict(
self.listener)
self.cidr_list_dict = lb_translators.lb_listener_obj_to_dict(
self.allowed_cidr_listener)
self.https_listener_dict = lb_translators.lb_listener_obj_to_dict(
self.https_listener)
self.terminated_https_listener_dict = lb_translators.\
lb_listener_obj_to_dict(self.terminated_https_listener)
self.udp_listener_dict = lb_translators.lb_listener_obj_to_dict(
self.udp_listener)
self.pool_dict = lb_translators.lb_pool_obj_to_dict(
self.pool)
self.pool_persistency_dict = lb_translators.lb_pool_obj_to_dict(
self.pool_persistency)
self.member_dict = lb_translators.lb_member_obj_to_dict(
self.member)
self.hm_dict = lb_translators.lb_hm_obj_to_dict(
self.hm)
self.hm_http_dict = lb_translators.lb_hm_obj_to_dict(
self.hm_http)
self.l7policy_dict = lb_translators.lb_l7policy_obj_to_dict(
self.l7policy)
self.l7rule_dict = lb_translators.lb_l7rule_obj_to_dict(
self.l7rule)
def tearDown(self):
self._unpatch_lb_plugin(self.lbv2_driver, self._tested_entity)
super(BaseTestEdgeLbaasV2, self).tearDown()
def _patch_lb_plugin(self, lb_plugin, manager):
self.real_manager = getattr(lb_plugin, manager)
lb_manager = mock.patch.object(lb_plugin, manager).start()
mock.patch.object(lb_manager, 'create').start()
mock.patch.object(lb_manager, 'update').start()
mock.patch.object(lb_manager, 'delete').start()
mock.patch.object(lb_manager, 'successful_completion').start()

    def _patch_policy_lb_clients(self, core_plugin):
        """Mock the NSX Policy load-balancer client APIs used by the tests."""
        nsxpolicy = mock.patch.object(core_plugin, 'nsxpolicy').start()
load_balancer = mock.patch.object(nsxpolicy, 'load_balancer').start()
self.service_client = mock.patch.object(load_balancer,
'lb_service').start()
self.app_client = mock.patch.object(load_balancer,
'lb_http_profile').start()
self.vs_client = mock.patch.object(load_balancer,
'virtual_server').start()
self.pool_client = mock.patch.object(load_balancer,
'lb_pool').start()
self.monitor_client = mock.patch.object(
load_balancer, 'lb_monitor_profile_icmp').start()
self.http_monitor_client = mock.patch.object(
load_balancer, 'lb_monitor_profile_http').start()
self.rule_client = mock.patch.object(load_balancer,
'rule').start()
self.pp_client = mock.patch.object(
load_balancer, 'lb_source_ip_persistence_profile').start()
self.pp_cookie_client = mock.patch.object(
load_balancer, 'lb_cookie_persistence_profile').start()
self.pp_generic_client = mock.patch.object(
load_balancer, 'lb_persistence_profile').start()
self.nsxpolicy = nsxpolicy

    def _unpatch_lb_plugin(self, lb_plugin, manager):
setattr(lb_plugin, manager, self.real_manager)


class TestEdgeLbaasV2Loadbalancer(BaseTestEdgeLbaasV2):
def setUp(self):
super(TestEdgeLbaasV2Loadbalancer, self).setUp()
@property
def _tested_entity(self):
return 'load_balancer'
def test_create(self):
self.reset_completor()
neutron_router = {'id': ROUTER_ID, 'name': 'dummy',
'external_gateway_info': {'external_fixed_ips': []}}
with mock.patch.object(lb_utils, 'get_network_from_subnet',
return_value=LB_NETWORK), \
mock.patch.object(lb_utils, 'get_router_from_network',
return_value=ROUTER_ID),\
mock.patch.object(lb_utils, 'get_tags', return_value=[]),\
mock.patch.object(self.core_plugin, 'get_router',
return_value=neutron_router), \
mock.patch.object(self.core_plugin, '_find_router_gw_subnets',
return_value=[]),\
mock.patch.object(self.core_plugin,
'service_router_has_services',
return_value=False) as plugin_has_sr,\
mock.patch.object(self.service_client, 'get_router_lb_service',
return_value=None),\
mock.patch.object(self.core_plugin.nsxpolicy, 'search_by_tags',
return_value={'results': []}),\
mock.patch.object(self.service_client, 'create_or_overwrite'
) as create_service:
self.edge_driver.loadbalancer.create(
self.context, self.lb_dict, self.completor)
self.assertTrue(self.last_completor_called)
self.assertTrue(self.last_completor_succees)
# Service should be created with connectivity path
create_service.assert_called_once_with(
mock.ANY, lb_service_id=LB_ID,
description=self.lb_dict['description'],
tags=mock.ANY, size='SMALL',
connectivity_path=mock.ANY)
# Verify that the tags contain the loadbalancer id
actual_tags = create_service.mock_calls[0][-1]['tags']
found_tag = False
for tag in actual_tags:
if (tag['scope'] == p_utils.SERVICE_LB_TAG_SCOPE and
tag['tag'] == LB_ID):
found_tag = True
self.assertTrue(found_tag)
plugin_has_sr.assert_called_once_with(mock.ANY, ROUTER_ID)
def test_create_same_router(self):
self.reset_completor()
neutron_router = {'id': ROUTER_ID, 'name': 'dummy',
'external_gateway_info': {'external_fixed_ips': []}}
old_lb_id = 'aaa'
lb_service = {'id': old_lb_id,
'tags': [{'scope': p_utils.SERVICE_LB_TAG_SCOPE,
'tag': old_lb_id}]}
with mock.patch.object(lb_utils, 'get_network_from_subnet',
return_value=LB_NETWORK), \
mock.patch.object(lb_utils, 'get_router_from_network',
return_value=ROUTER_ID),\
mock.patch.object(self.core_plugin, 'get_router',
return_value=neutron_router), \
mock.patch.object(self.core_plugin, '_find_router_gw_subnets',
return_value=[]),\
mock.patch.object(self.core_plugin.nsxpolicy, 'search_by_tags',
return_value={'results': [lb_service]}),\
mock.patch.object(self.core_plugin,
'service_router_has_services',
return_value=True) as plugin_has_sr,\
mock.patch.object(self.service_client,
'update_customized') as service_update:
self.edge_driver.loadbalancer.create(
self.context, self.lb_dict, self.completor)
self.assertTrue(self.last_completor_called)
self.assertTrue(self.last_completor_succees)
plugin_has_sr.assert_not_called()
service_update.assert_called_once()
def test_create_same_router_many_fail(self):
lb_service = {'id': 'first_lb', 'tags': []}
self.reset_completor()
neutron_router = {'id': ROUTER_ID, 'name': 'dummy',
'external_gateway_info': {'external_fixed_ips': []}}
with mock.patch.object(lb_utils, 'get_network_from_subnet',
return_value=LB_NETWORK), \
mock.patch.object(lb_utils, 'get_router_from_network',
return_value=ROUTER_ID),\
mock.patch.object(self.core_plugin, 'get_router',
return_value=neutron_router), \
mock.patch.object(self.core_plugin, '_find_router_gw_subnets',
return_value=[]),\
mock.patch.object(self.core_plugin.nsxpolicy, 'search_by_tags',
return_value={'results': [lb_service]}),\
mock.patch.object(self.service_client, 'update_customized',
side_effect=n_exc.BadRequest(resource='', msg='')
) as service_update,\
mock.patch.object(self.service_client, 'get_router_lb_service',
return_value=None):
self.assertRaises(
n_exc.BadRequest,
self.edge_driver.loadbalancer.create,
self.context, self.lb_dict, self.completor)
self.assertTrue(self.last_completor_called)
self.assertFalse(self.last_completor_succees)
service_update.assert_called_once()
def test_create_external_vip(self):
self.reset_completor()
with mock.patch.object(lb_utils, 'get_router_from_network',
return_value=None),\
mock.patch.object(lb_utils, 'get_network_from_subnet',
return_value=EXT_LB_NETWORK), \
mock.patch.object(self.service_client, 'get_router_lb_service',
return_value=None),\
mock.patch.object(self.service_client, 'create_or_overwrite',
return_value={'id': LB_SERVICE_ID}
) as create_service:
self.edge_driver.loadbalancer.create(self.context, self.lb_dict,
self.completor)
self.assertTrue(self.last_completor_called)
self.assertTrue(self.last_completor_succees)
# Service should be created with no connectivity path
create_service.assert_called_once_with(
mock.ANY, lb_service_id=LB_ID,
description=self.lb_dict['description'],
tags=mock.ANY, size='SMALL',
connectivity_path=None)
def test_create_no_services(self):
self.reset_completor()
neutron_router = {'id': ROUTER_ID, 'name': 'dummy',
'external_gateway_info': {'external_fixed_ips': []}}
with mock.patch.object(lb_utils, 'get_network_from_subnet',
return_value=LB_NETWORK), \
mock.patch.object(lb_utils, 'get_router_from_network',
return_value=ROUTER_ID),\
mock.patch.object(self.core_plugin, 'get_router',
return_value=neutron_router), \
mock.patch.object(self.core_plugin, '_find_router_gw_subnets',
return_value=[]),\
mock.patch.object(self.core_plugin, 'service_router_has_services',
return_value=False) as plugin_has_sr, \
mock.patch.object(self.service_client, 'get_router_lb_service',
return_value=None),\
mock.patch.object(self.service_client, 'create_or_overwrite'
) as create_service,\
mock.patch.object(self.core_plugin.nsxpolicy, 'search_by_tags',
return_value={'results': []}),\
mock.patch.object(self.core_plugin,
"create_service_router") as create_sr:
self.edge_driver.loadbalancer.create(
self.context, self.lb_dict, self.completor)
self.assertTrue(self.last_completor_called)
self.assertTrue(self.last_completor_succees)
# Service should be created with connectivity path
create_service.assert_called_once_with(
mock.ANY, lb_service_id=LB_ID,
description=self.lb_dict['description'],
tags=mock.ANY, size='SMALL',
connectivity_path=mock.ANY)
plugin_has_sr.assert_called_once_with(mock.ANY, ROUTER_ID)
create_sr.assert_called_once()
def test_create_with_port(self):
self.reset_completor()
neutron_router = {'id': ROUTER_ID, 'name': 'dummy',
'external_gateway_info': {'external_fixed_ips': []}}
neutron_port = {'id': 'port-id', 'name': 'dummy', 'device_owner': ''}
with mock.patch.object(lb_utils, 'get_network_from_subnet',
return_value=LB_NETWORK), \
mock.patch.object(lb_utils, 'get_router_from_network',
return_value=ROUTER_ID),\
mock.patch.object(self.core_plugin, 'get_router',
return_value=neutron_router), \
mock.patch.object(self.core_plugin, 'get_port',
return_value=neutron_port), \
mock.patch.object(self.core_plugin, 'update_port'
) as update_port, \
mock.patch.object(self.core_plugin, '_find_router_gw_subnets',
return_value=[]),\
mock.patch.object(self.core_plugin,
'service_router_has_services',
return_value=False) as plugin_has_sr,\
mock.patch.object(self.service_client, 'get_router_lb_service',
return_value=None),\
mock.patch.object(self.service_client, 'create_or_overwrite'
) as create_service:
self.edge_driver.loadbalancer.create(
self.context, self.lb_dict, self.completor)
self.assertTrue(self.last_completor_called)
self.assertTrue(self.last_completor_succees)
# Service should be created with connectivity path
create_service.assert_called_once_with(
mock.ANY, lb_service_id=LB_ID,
description=self.lb_dict['description'],
tags=mock.ANY, size='SMALL',
connectivity_path=mock.ANY)
plugin_has_sr.assert_called_once_with(mock.ANY, ROUTER_ID)
update_port.assert_called_once()
def test_update(self):
self.reset_completor()
new_lb = lb_models.LoadBalancer(LB_ID, 'yyy-yyy', 'lb1-new',
'new-description', 'some-subnet',
'port-id', LB_VIP)
new_lb_dict = lb_translators.lb_loadbalancer_obj_to_dict(new_lb)
self.edge_driver.loadbalancer.update(self.context, self.lb_dict,
new_lb_dict, self.completor)
self.assertTrue(self.last_completor_called)
self.assertTrue(self.last_completor_succees)
def test_delete(self):
self.reset_completor()
lb_service = {'id': LB_SERVICE_ID}
with mock.patch.object(lb_utils, 'get_router_from_network',
return_value=ROUTER_ID),\
mock.patch.object(self.service_client, 'update_customized',
side_effect=n_exc.BadRequest(resource='', msg='')
) as service_update,\
mock.patch.object(self.core_plugin.nsxpolicy, 'search_by_tags',
return_value={'results': [lb_service]}),\
mock.patch.object(self.service_client, 'delete'
) as mock_delete_lb_service:
self.edge_driver.loadbalancer.delete(
self.context, self.lb_dict, self.completor)
mock_delete_lb_service.assert_called_with(LB_SERVICE_ID)
service_update.assert_called_once()
self.assertTrue(self.last_completor_called)
self.assertTrue(self.last_completor_succees)
def test_delete_cascade(self):
self.reset_completor()
lb_service = {'id': LB_SERVICE_ID}
with mock.patch.object(lb_utils, 'get_router_from_network',
return_value=ROUTER_ID),\
mock.patch.object(self.core_plugin, 'get_floatingips'
) as mock_get_floatingips, \
mock.patch.object(self.service_client, 'update_customized',
side_effect=n_exc.BadRequest(resource='', msg='')
) as service_update,\
mock.patch.object(self.core_plugin.nsxpolicy, 'search_by_tags',
return_value={'results': [lb_service]}),\
mock.patch.object(self.service_client, 'delete'
) as mock_delete_lb_service:
mock_get_floatingips.return_value = []
self.edge_driver.loadbalancer.delete_cascade(
self.context, self.lb_dict, self.completor)
mock_delete_lb_service.assert_called_with(LB_SERVICE_ID)
service_update.assert_called_once()
self.assertTrue(self.last_completor_called)
self.assertTrue(self.last_completor_succees)
def test_delete_with_router_id(self):
self.reset_completor()
lb_service = {'id': LB_SERVICE_ID,
'connectivity_path': 'infra/%s' % ROUTER_ID}
with mock.patch.object(lb_utils, 'get_router_from_network',
return_value=None),\
mock.patch.object(self.service_client, 'update_customized',
side_effect=n_exc.BadRequest(resource='', msg='')
) as service_update,\
mock.patch.object(self.core_plugin.nsxpolicy, 'search_by_tags',
return_value={'results': [lb_service]}),\
mock.patch.object(self.service_client, 'delete'
) as mock_delete_lb_service:
self.edge_driver.loadbalancer.delete(self.context, self.lb_dict,
self.completor)
mock_delete_lb_service.assert_called_with(LB_SERVICE_ID)
service_update.assert_called_once()
self.assertTrue(self.last_completor_called)
self.assertTrue(self.last_completor_succees)
def test_delete_no_services(self):
self.reset_completor()
lb_service = {'id': LB_SERVICE_ID,
'connectivity_path': 'infra/%s' % ROUTER_ID}
with mock.patch.object(lb_utils, 'get_router_from_network',
return_value=None),\
mock.patch.object(self.service_client, 'update_customized',
side_effect=n_exc.BadRequest(resource='', msg='')
) as service_update,\
mock.patch.object(self.core_plugin.nsxpolicy, 'search_by_tags',
return_value={'results': [lb_service]}),\
mock.patch.object(self.core_plugin, 'service_router_has_services',
return_value=False), \
mock.patch.object(self.core_plugin,
'delete_service_router') as delete_sr, \
mock.patch.object(self.service_client, 'delete'
) as mock_delete_lb_service:
self.edge_driver.loadbalancer.delete(self.context, self.lb_dict,
self.completor)
mock_delete_lb_service.assert_called_with(LB_SERVICE_ID)
delete_sr.assert_called_once_with(ROUTER_ID)
service_update.assert_called_once()
self.assertTrue(self.last_completor_called)
self.assertTrue(self.last_completor_succees)
def test_delete_with_port(self):
self.reset_completor()
lb_service = {'id': LB_SERVICE_ID}
neutron_port = {'id': 'port-id', 'name': 'dummy',
'device_owner': lb_const.VMWARE_LB_VIP_OWNER}
with mock.patch.object(lb_utils, 'get_router_from_network',
return_value=ROUTER_ID),\
mock.patch.object(self.service_client, 'update_customized',
side_effect=n_exc.BadRequest(resource='', msg='')
) as service_update,\
mock.patch.object(self.core_plugin.nsxpolicy, 'search_by_tags',
return_value={'results': [lb_service]}),\
mock.patch.object(self.core_plugin, 'get_port',
return_value=neutron_port), \
mock.patch.object(self.core_plugin, 'update_port'
) as update_port, \
mock.patch.object(self.service_client, 'delete'
) as mock_delete_lb_service:
self.edge_driver.loadbalancer.delete(self.context, self.lb_dict,
self.completor)
mock_delete_lb_service.assert_called_with(LB_SERVICE_ID)
service_update.assert_called_once()
update_port.assert_called_once()
self.assertTrue(self.last_completor_called)
self.assertTrue(self.last_completor_succees)
def test_add_tags_callback(self):
callback = p_utils.add_service_tag_callback(LB_ID)
# Add a tag
body = {'tags': [{'scope': p_utils.SERVICE_LB_TAG_SCOPE,
'tag': 'dummy_id'}]}
callback(body)
self.assertEqual(2, len(body['tags']))
# Tag already there
callback(body)
self.assertEqual(2, len(body['tags']))
# Too many tags
body['tags'] = []
for x in range(p_utils.SERVICE_LB_TAG_MAX):
body['tags'].append({
'scope': p_utils.SERVICE_LB_TAG_SCOPE,
'tag': 'dummy_id_%s' % x})
self.assertRaises(n_exc.BadRequest, callback, body)
# No tags
body['tags'] = []
callback(body)
self.assertEqual(1, len(body['tags']))
def test_add_tags_callback_only_first(self):
callback = p_utils.add_service_tag_callback(LB_ID, only_first=True)
# No tags
body = {'tags': []}
callback(body)
self.assertEqual(1, len(body['tags']))
# Tag already there
self.assertRaises(n_exc.BadRequest, callback, body)
# Another tag exists
body['tags'] = [{'scope': p_utils.SERVICE_LB_TAG_SCOPE,
'tag': 'dummy'}]
self.assertRaises(n_exc.BadRequest, callback, body)
def test_del_tags_callback(self):
callback = p_utils.remove_service_tag_callback(LB_ID)
# remove a tag
body = {'tags': [{'scope': p_utils.SERVICE_LB_TAG_SCOPE,
'tag': 'dummy_id'},
{'scope': p_utils.SERVICE_LB_TAG_SCOPE,
'tag': LB_ID}]}
callback(body)
self.assertEqual(1, len(body['tags']))
        # Tag not there
callback(body)
self.assertEqual(1, len(body['tags']))
# Last one
body['tags'] = [{'scope': p_utils.SERVICE_LB_TAG_SCOPE,
'tag': LB_ID}]
self.assertRaises(n_exc.BadRequest, callback, body)


class TestEdgeLbaasV2Listener(BaseTestEdgeLbaasV2):
def setUp(self):
super(TestEdgeLbaasV2Listener, self).setUp()
@property
def _tested_entity(self):
return 'listener'

    def _create_listener(self, protocol='HTTP', allowed_cidr=False):
        """Create a listener of the given protocol and verify the NSX calls.

        When allowed_cidr is True, the listener with an allowed-CIDR list
        is created instead, and a gateway policy is expected as well.
        """
        self.reset_completor()
with mock.patch.object(self.core_plugin, 'get_floatingips'
) as mock_get_floatingips, \
mock.patch.object(self.core_plugin,
'get_waf_profile_path_and_mode',
return_value=(None, None)), \
mock.patch.object(self.core_plugin.nsxpolicy, 'search_by_tags',
return_value={'results': [
{'id': LB_SERVICE_ID}]}),\
mock.patch.object(self.core_plugin.nsxpolicy.gateway_policy,
'get',
side_effect=nsxlib_exc.ResourceNotFound), \
mock.patch.object(self.core_plugin.nsxpolicy.gateway_policy,
'create_with_entries') as create_gw_pol, \
mock.patch.object(self.vs_client, 'create_or_overwrite'
) as mock_add_virtual_server:
mock_get_floatingips.return_value = []
listener = self.listener_dict
listener_id = LISTENER_ID
if protocol == 'HTTPS':
listener = self.https_listener_dict
listener_id = HTTP_LISTENER_ID
elif protocol == 'UDP':
listener = self.udp_listener_dict
listener_id = UDP_LISTENER_ID
if allowed_cidr:
listener = self.cidr_list_dict
self.edge_driver.listener.create(self.context, listener,
self.completor)
mock_add_virtual_server.assert_called_with(
application_profile_id=listener_id,
description=listener['description'],
lb_service_id=LB_ID,
ip_address=LB_VIP,
tags=mock.ANY,
name=mock.ANY,
ports=[listener['protocol_port']],
virtual_server_id=listener_id,
pool_id='',
lb_persistence_profile_id='')
self.assertTrue(self.last_completor_called)
self.assertTrue(self.last_completor_succees)
if not allowed_cidr:
create_gw_pol.assert_not_called()
else:
create_gw_pol.assert_called_once_with(
'LB %s allowed cidrs' % LB_ID,
policy_constants.DEFAULT_DOMAIN,
map_id=LB_ID,
category=policy_constants.CATEGORY_LOCAL_GW,
description=mock.ANY,
entries=[mock.ANY],
tags=mock.ANY)
def test_create_http_listener(self):
self._create_listener()
def test_create_allowed_cidr_listener(self):
orig_nsx_ver = self.core_plugin._nsx_version
self.core_plugin._nsx_version = '3.1.0'
with mock.patch.object(lb_utils, 'get_router_from_network',
return_value=ROUTER_ID):
self._create_listener(allowed_cidr=True)
self.core_plugin._nsx_version = orig_nsx_ver
def test_create_https_listener(self):
self._create_listener(protocol='HTTPS')
def test_create_udp_listener(self):
self._create_listener(protocol='UDP')
def test_create_terminated_https(self):
        # TODO(asarfaty): Add a test with a certificate
self.reset_completor()
with mock.patch.object(self.core_plugin, 'get_floatingips'
) as mock_get_floatingips, \
mock.patch.object(self.core_plugin,
'get_waf_profile_path_and_mode',
return_value=(None, None)), \
mock.patch.object(self.core_plugin.nsxpolicy, 'search_by_tags',
return_value={'results': [
{'id': LB_SERVICE_ID}]}),\
mock.patch.object(self.vs_client, 'create_or_overwrite'
) as mock_add_virtual_server:
mock_get_floatingips.return_value = []
self.edge_driver.listener.create(
self.context,
self.terminated_https_listener_dict,
self.completor)
mock_add_virtual_server.assert_called_with(
application_profile_id=HTTPS_LISTENER_ID,
description=self.terminated_https_listener_dict['description'],
lb_service_id=LB_ID,
ip_address=LB_VIP,
tags=mock.ANY,
name=mock.ANY,
ports=[self.terminated_https_listener_dict['protocol_port']],
virtual_server_id=HTTPS_LISTENER_ID,
pool_id='',
lb_persistence_profile_id='')
self.assertTrue(self.last_completor_called)
self.assertTrue(self.last_completor_succees)
def test_create_listener_with_default_pool(self):
self.reset_completor()
listener = lb_models.Listener(LISTENER_ID, LB_TENANT_ID,
'listener1', 'Dummy', self.pool.id,
LB_ID, 'HTTP', protocol_port=80,
loadbalancer=self.lb,
default_pool=self.pool)
listener_dict = lb_translators.lb_listener_obj_to_dict(listener)
with mock.patch.object(self.core_plugin, 'get_floatingips'
) as mock_get_floatingips, \
mock.patch.object(self.core_plugin,
'get_waf_profile_path_and_mode',
return_value=(None, None)), \
mock.patch.object(self.core_plugin.nsxpolicy, 'search_by_tags',
return_value={'results': [
{'id': LB_SERVICE_ID}]}),\
mock.patch.object(self.vs_client, 'create_or_overwrite'
) as mock_add_virtual_server:
mock_get_floatingips.return_value = []
listener_dict['connection_limit'] = 7
self.edge_driver.listener.create(self.context, listener_dict,
self.completor)
mock_add_virtual_server.assert_called_with(
application_profile_id=LISTENER_ID,
description=listener_dict['description'],
lb_service_id=LB_ID,
ip_address=LB_VIP,
tags=mock.ANY,
name=mock.ANY,
ports=[listener_dict['protocol_port']],
max_concurrent_connections=7,
virtual_server_id=LISTENER_ID,
pool_id=POOL_ID)
self.assertTrue(self.last_completor_called)
self.assertTrue(self.last_completor_succees)
def test_create_listener_with_used_default_pool(self):
self.reset_completor()
listener = lb_models.Listener(LISTENER_ID, LB_TENANT_ID,
'listener1', 'Dummy', self.pool.id,
LB_ID, 'HTTP', protocol_port=80,
loadbalancer=self.lb,
default_pool=self.pool)
listener_dict = lb_translators.lb_listener_obj_to_dict(listener)
with mock.patch.object(self.core_plugin, 'get_floatingips'
) as mock_get_floatingips,\
mock.patch.object(self.core_plugin,
'get_waf_profile_path_and_mode',
return_value=(None, None)),\
mock.patch.object(self.vs_client, 'list',
return_value=[{'pool_path': POOL_ID}]):
mock_get_floatingips.return_value = []
self.assertRaises(n_exc.BadRequest,
self.edge_driver.listener.create,
self.context, listener_dict,
self.completor)
self.assertTrue(self.last_completor_called)
self.assertFalse(self.last_completor_succees)
def test_create_listener_with_session_persistence(self):
self.reset_completor()
listener = lb_models.Listener(LISTENER_ID, LB_TENANT_ID,
'listener1', 'Dummy',
self.pool_persistency.id,
LB_ID, 'HTTP', protocol_port=80,
loadbalancer=self.lb,
default_pool=self.pool_persistency)
listener_dict = lb_translators.lb_listener_obj_to_dict(listener)
with mock.patch.object(self.core_plugin, 'get_floatingips'
) as mock_get_floatingips, \
mock.patch.object(self.core_plugin,
'get_waf_profile_path_and_mode',
return_value=(None, None)), \
mock.patch.object(self.vs_client, 'create_or_overwrite'
) as mock_add_virtual_server,\
mock.patch.object(self.core_plugin.nsxpolicy, 'search_by_tags',
return_value={'results': [
{'id': LB_SERVICE_ID}]}),\
mock.patch.object(self.vs_client, 'get', return_value={}),\
mock.patch.object(self.edge_driver.listener, '_get_pool_tags'),\
mock.patch.object(self.pp_cookie_client, 'create_or_overwrite'
) as mock_create_pp:
mock_get_floatingips.return_value = []
listener_dict['connection_limit'] = -1 # Should be ignored
self.edge_driver.listener.create(self.context, listener_dict,
self.completor)
mock_add_virtual_server.assert_called_with(
application_profile_id=LISTENER_ID,
description=listener_dict['description'],
lb_service_id=LB_ID,
ip_address=LB_VIP,
tags=mock.ANY,
name=mock.ANY,
ports=[listener_dict['protocol_port']],
virtual_server_id=LISTENER_ID,
pool_id=listener_dict['default_pool_id'])
mock_create_pp.assert_called_once()
self.assertTrue(self.last_completor_called)
self.assertTrue(self.last_completor_succees)
def test_create_listener_with_session_persistence_fail(self):
self.reset_completor()
listener = lb_models.Listener(LISTENER_ID, LB_TENANT_ID,
'listener1', 'Dummy',
self.pool_persistency.id,
LB_ID, 'TCP', protocol_port=80,
loadbalancer=self.lb,
default_pool=self.pool_persistency)
listener_dict = lb_translators.lb_listener_obj_to_dict(listener)
with mock.patch.object(self.core_plugin,
'get_waf_profile_path_and_mode',
return_value=(None, None)), \
mock.patch.object(self.core_plugin, 'get_floatingips'
) as mock_get_floatingips:
mock_get_floatingips.return_value = []
self.assertRaises(n_exc.BadRequest,
self.edge_driver.listener.create,
self.context, listener_dict,
self.completor)
self.assertTrue(self.last_completor_called)
self.assertFalse(self.last_completor_succees)

    def test_create_listener_lb_no_name(self):
self.reset_completor()
with mock.patch.object(self.core_plugin, 'get_floatingips'
) as mock_get_floatingips, \
mock.patch.object(self.core_plugin,
'get_waf_profile_path_and_mode',
return_value=(None, None)), \
mock.patch.object(self.core_plugin.nsxpolicy, 'search_by_tags',
return_value={'results': [
{'id': LB_SERVICE_ID}]}),\
mock.patch.object(self.vs_client, 'create_or_overwrite'
) as mock_add_virtual_server:
mock_get_floatingips.return_value = []
listener = copy.deepcopy(self.listener_dict)
listener['loadbalancer']['name'] = None
listener_id = LISTENER_ID
self.edge_driver.listener.create(self.context, listener,
self.completor)
mock_add_virtual_server.assert_called_with(
application_profile_id=listener_id,
description=listener['description'],
lb_service_id=LB_ID,
ip_address=LB_VIP,
tags=mock.ANY,
name=mock.ANY,
ports=[listener['protocol_port']],
virtual_server_id=listener_id,
pool_id='',
lb_persistence_profile_id='')
self.assertTrue(self.last_completor_called)
self.assertTrue(self.last_completor_succees)
def test_update(self):
self.reset_completor()
new_listener = lb_models.Listener(LISTENER_ID, LB_TENANT_ID,
'listener1-new', 'new-description',
None, LB_ID, protocol_port=80,
loadbalancer=self.lb)
new_listener_dict = lb_translators.lb_listener_obj_to_dict(
new_listener)
with mock.patch.object(self.core_plugin,
'get_waf_profile_path_and_mode',
return_value=(None, None)), \
mock.patch.object(self.core_plugin.nsxpolicy, 'search_by_tags',
return_value={'results': [
{'id': LB_SERVICE_ID}]}),\
mock.patch.object(self.core_plugin, 'get_floatingips'
) as mock_get_floatingips:
mock_get_floatingips.return_value = []
self.edge_driver.listener.update(self.context, self.listener_dict,
new_listener_dict,
self.completor)
self.assertTrue(self.last_completor_called)
self.assertTrue(self.last_completor_succees)
def test_update_with_default_pool(self):
self.reset_completor()
new_listener = lb_models.Listener(LISTENER_ID, LB_TENANT_ID,
'listener1-new', 'new-description',
self.pool, LB_ID, protocol_port=80,
loadbalancer=self.lb,
default_pool=self.pool)
new_listener_dict = lb_translators.lb_listener_obj_to_dict(
new_listener)
with mock.patch.object(self.core_plugin,
'get_waf_profile_path_and_mode',
return_value=(None, None)), \
mock.patch.object(self.core_plugin.nsxpolicy, 'search_by_tags',
return_value={'results': [
{'id': LB_SERVICE_ID}]}),\
mock.patch.object(self.core_plugin, 'get_floatingips'
) as mock_get_floatingips:
mock_get_floatingips.return_value = []
self.edge_driver.listener.update(self.context, self.listener_dict,
new_listener_dict, self.completor)
self.assertTrue(self.last_completor_called)
self.assertTrue(self.last_completor_succees)
def test_update_with_session_persistence(self):
self.reset_completor()
new_listener = lb_models.Listener(LISTENER_ID, LB_TENANT_ID,
'listener1-new', 'new-description',
self.pool_persistency.id,
LB_ID, protocol='HTTP',
protocol_port=80,
loadbalancer=self.lb,
default_pool=self.pool_persistency)
new_listener_dict = lb_translators.lb_listener_obj_to_dict(
new_listener)
with mock.patch.object(self.core_plugin, 'get_floatingips'
) as mock_get_floatingips, \
mock.patch.object(self.core_plugin,
'get_waf_profile_path_and_mode',
return_value=(None, None)), \
mock.patch.object(self.core_plugin.nsxpolicy, 'search_by_tags',
return_value={'results': [
{'id': LB_SERVICE_ID}]}),\
mock.patch.object(self.edge_driver.listener, '_get_pool_tags'),\
mock.patch.object(self.vs_client, 'get', return_value={}),\
mock.patch.object(self.vs_client, 'update',
return_value={'id': LB_VS_ID}), \
mock.patch.object(self.pp_cookie_client, 'create_or_overwrite'
) as mock_create_pp:
mock_get_floatingips.return_value = []
self.edge_driver.listener.update(self.context, self.listener_dict,
new_listener_dict, self.completor)
mock_create_pp.assert_called_once()
self.assertTrue(self.last_completor_called)
self.assertTrue(self.last_completor_succees)
def test_update_with_session_persistence_change(self):
self.reset_completor()
old_listener = lb_models.Listener(LISTENER_ID, LB_TENANT_ID,
'listener1', 'description',
self.pool_persistency.id,
LB_ID, protocol='HTTP',
protocol_port=80,
loadbalancer=self.lb,
default_pool=self.pool_persistency)
old_listener_dict = lb_translators.lb_listener_obj_to_dict(
old_listener)
sess_persistence = lb_models.SessionPersistence(
POOL_ID, 'SOURCE_IP')
pool_persistency = lb_models.Pool('new_pool_id', LB_TENANT_ID,
'pool1', '', None, 'HTTP',
'ROUND_ROBIN', loadbalancer_id=LB_ID,
listener=self.listener,
listeners=[self.listener],
loadbalancer=self.lb,
session_persistence=sess_persistence)
new_listener = lb_models.Listener(LISTENER_ID, LB_TENANT_ID,
'listener1-new', 'new-description',
pool_persistency.id,
LB_ID, protocol='HTTP',
protocol_port=80,
loadbalancer=self.lb,
default_pool=pool_persistency)
new_listener_dict = lb_translators.lb_listener_obj_to_dict(
new_listener)
with mock.patch.object(self.core_plugin,
'get_waf_profile_path_and_mode',
return_value=(None, None)), \
mock.patch.object(self.pp_client, 'create_or_overwrite'
) as mock_create_pp, \
mock.patch.object(self.pp_generic_client, 'delete'
) as mock_delete_pp, \
mock.patch.object(self.core_plugin.nsxpolicy, 'search_by_tags',
return_value={'results': [
{'id': LB_SERVICE_ID}]}),\
mock.patch.object(self.core_plugin, 'get_floatingips'
) as mock_get_floatingips, \
mock.patch.object(self.edge_driver.listener,
'_get_pool_tags'
) as mock_get_pool_tags:
mock_get_pool_tags.return_value = []
mock_get_floatingips.return_value = []
self.edge_driver.listener.update(
self.context, old_listener_dict,
new_listener_dict, self.completor)
mock_create_pp.assert_called_once_with(
name='persistence_pool1_new_p...ol_id',
persistence_profile_id='new_pool_id_sourceip',
tags=mock.ANY)
# No reason to check parameters here, it's
# all mocked out
mock_delete_pp.assert_called_once()
            self.assertTrue(self.last_completor_called)
            self.assertTrue(self.last_completor_succees)
def test_delete(self):
self.reset_completor()
with mock.patch.object(self.service_client, 'get'
) as mock_get_lb_service, \
mock.patch.object(self.core_plugin, 'get_floatingips',
return_value=[]), \
mock.patch.object(self.app_client, 'delete'
) as mock_delete_app_profile, \
mock.patch.object(self.vs_client, 'delete'
) as mock_delete_virtual_server:
mock_get_lb_service.return_value = {
'id': LB_SERVICE_ID,
'virtual_server_ids': [LB_VS_ID]}
self.edge_driver.listener.delete(self.context, self.listener_dict,
self.completor)
mock_delete_virtual_server.assert_called_with(LB_VS_ID)
mock_delete_app_profile.assert_called_with(LISTENER_ID)
self.assertTrue(self.last_completor_called)
self.assertTrue(self.last_completor_succees)
def test_delete_cascade(self):
self.reset_completor()
with mock.patch.object(self.service_client, 'get'
) as mock_get_lb_service, \
mock.patch.object(self.core_plugin, 'get_floatingips',
return_value=[]), \
mock.patch.object(self.app_client, 'delete'
) as mock_delete_app_profile, \
mock.patch.object(self.vs_client, 'delete'
) as mock_delete_virtual_server:
mock_get_lb_service.return_value = {
'id': LB_SERVICE_ID,
'virtual_server_ids': [LB_VS_ID]}
self.edge_driver.listener.delete_cascade(
self.context, self.listener_dict,
self.completor)
mock_delete_virtual_server.assert_called_with(LB_VS_ID)
mock_delete_app_profile.assert_called_with(LISTENER_ID)
self.assertTrue(self.last_completor_called)
self.assertTrue(self.last_completor_succees)


class TestEdgeLbaasV2Pool(BaseTestEdgeLbaasV2):
def setUp(self):
super(TestEdgeLbaasV2Pool, self).setUp()
@property
def _tested_entity(self):
return 'pool'
def test_create(self):
self.reset_completor()
with mock.patch.object(self.pp_client, 'create_or_overwrite'
) as mock_create_pp, \
mock.patch.object(self.vs_client, 'update', return_value=None
) as mock_vs_update:
self.edge_driver.pool.create(self.context, self.pool_dict,
self.completor)
mock_create_pp.assert_not_called()
mock_vs_update.assert_called_once_with(
LB_VS_ID, pool_id=LB_POOL_ID, lb_persistence_profile_id=None)
self.assertTrue(self.last_completor_called)
self.assertTrue(self.last_completor_succees)

    def _test_create_with_persistency(self, vs_data, verify_func):
        """Create a pool with session persistency against the given
        virtual-server data, then run verify_func on the mocked clients.
        """
        self.reset_completor()
with mock.patch.object(self.edge_driver.pool, '_get_pool_tags'),\
mock.patch.object(self.pp_cookie_client, 'create_or_overwrite'
) as mock_create_pp, \
mock.patch.object(self.pp_cookie_client, 'update',
return_value=None) as mock_update_pp, \
mock.patch.object(self.vs_client, 'get'
) as mock_vs_get, \
mock.patch.object(self.vs_client, 'update', return_value=None
) as mock_vs_update:
mock_vs_get.return_value = vs_data
self.edge_driver.pool.create(
self.context, self.pool_persistency_dict, self.completor)
verify_func(mock_create_pp, mock_update_pp,
mock_vs_update)
self.assertTrue(self.last_completor_called)
self.assertTrue(self.last_completor_succees)
def test_create_with_persistency(self):
def verify_func(mock_create_pp, mock_update_pp,
mock_vs_update):
mock_create_pp.assert_called_once_with(
cookie_mode='INSERT',
cookie_name='meh_cookie',
name=mock.ANY,
tags=mock.ANY,
persistence_profile_id="%s_cookie" % LB_PP_ID)
mock_update_pp.assert_not_called()
mock_vs_update.assert_called_once_with(
LB_VS_ID, pool_id=LB_POOL_ID,
lb_persistence_profile_id="%s_cookie" % LB_PP_ID)
vs_data = {'id': LB_VS_ID}
self._test_create_with_persistency(vs_data, verify_func)
def test_create_with_persistency_existing_profile(self):
def verify_func(mock_create_pp, mock_update_pp,
mock_vs_update):
mock_create_pp.assert_not_called()
mock_update_pp.assert_called_once_with(
LB_PP_ID,
cookie_mode='INSERT',
cookie_name='meh_cookie',
name=mock.ANY,
tags=mock.ANY)
mock_vs_update.assert_called_once_with(
LB_VS_ID, pool_id=LB_POOL_ID,
lb_persistence_profile_id=LB_PP_ID)
vs_data = {'id': LB_VS_ID,
'lb_persistence_profile_path': LB_PP_ID}
self._test_create_with_persistency(vs_data, verify_func)
def test_create_with_persistency_no_listener(self):
def verify_func(mock_create_pp, mock_update_pp,
mock_vs_update):
mock_create_pp.assert_not_called()
mock_update_pp.assert_not_called()
mock_vs_update.assert_not_called()
vs_data = {'id': LB_VS_ID,
'lb_persistence_profile_path': LB_PP_ID}
self.pool_persistency_dict['listener'] = None
self.pool_persistency_dict['listeners'] = []
self._test_create_with_persistency(vs_data, verify_func)
def test_create_multiple_listeners(self):
"""Verify creation will fail if multiple listeners are set"""
pool = lb_models.Pool(POOL_ID, LB_TENANT_ID, 'pool1', '',
None, 'HTTP', 'ROUND_ROBIN',
loadbalancer_id=LB_ID,
listeners=[self.listener,
self.https_listener],
loadbalancer=self.lb)
pool_dict = lb_translators.lb_pool_obj_to_dict(pool)
self.assertRaises(n_exc.BadRequest,
self.edge_driver.pool.create,
self.context, pool_dict, self.completor)
def test_update(self):
self.reset_completor()
new_pool = lb_models.Pool(POOL_ID, LB_TENANT_ID, 'pool-name', '',
None, 'HTTP', 'LEAST_CONNECTIONS',
listener=self.listener)
new_pool_dict = lb_translators.lb_pool_obj_to_dict(new_pool)
self.edge_driver.pool.update(self.context, self.pool_dict,
new_pool_dict,
self.completor)
self.assertTrue(self.last_completor_called)
self.assertTrue(self.last_completor_succees)
def test_update_multiple_listeners(self):
"""Verify update action will fail if multiple listeners are set"""
self.reset_completor()
new_pool = lb_models.Pool(POOL_ID, LB_TENANT_ID, 'pool1', '',
None, 'HTTP', 'ROUND_ROBIN',
loadbalancer_id=LB_ID,
listeners=[self.listener,
self.https_listener],
loadbalancer=self.lb)
new_pool_dict = lb_translators.lb_pool_obj_to_dict(new_pool)
self.assertRaises(n_exc.BadRequest,
self.edge_driver.pool.update,
self.context, self.pool_dict, new_pool_dict,
self.completor)
self.assertTrue(self.last_completor_called)
self.assertFalse(self.last_completor_succees)
def _test_update_with_persistency(self, vs_data, old_pool, new_pool,
verify_func, cookie=False):
self.reset_completor()
old_pool_dict = lb_translators.lb_pool_obj_to_dict(old_pool)
new_pool_dict = lb_translators.lb_pool_obj_to_dict(new_pool)
with mock.patch.object(self.edge_driver.pool, '_get_pool_tags'),\
mock.patch.object(self.pp_client, 'create_or_overwrite'
) as mock_create_pp, \
mock.patch.object(self.pp_cookie_client, 'create_or_overwrite'
) as mock_create_cookie_pp, \
mock.patch.object(self.pp_client, 'update', return_value=None
) as mock_update_pp, \
mock.patch.object(self.pp_cookie_client, 'update',
return_value=None) as mock_update_cookie_pp, \
mock.patch.object(self.pp_generic_client, 'delete',
return_value=None) as mock_delete_pp, \
mock.patch.object(self.vs_client, 'get'
) as mock_vs_get, \
mock.patch.object(self.vs_client, 'update', return_value=None
) as mock_vs_update:
mock_vs_get.return_value = vs_data
self.edge_driver.pool.update(self.context, old_pool_dict,
new_pool_dict, self.completor)
verify_func(
mock_create_cookie_pp if cookie else mock_create_pp,
mock_update_cookie_pp if cookie else mock_update_pp,
mock_delete_pp,
mock_vs_update)
self.assertTrue(self.last_completor_called)
self.assertTrue(self.last_completor_succees)
def test_update_with_persistency(self):
def verify_func(mock_create_pp, mock_update_pp,
mock_delete_pp, mock_vs_update):
mock_create_pp.assert_called_once_with(
cookie_mode='INSERT',
cookie_name='meh_cookie',
name=mock.ANY,
tags=mock.ANY,
persistence_profile_id="%s_cookie" % LB_PP_ID)
mock_update_pp.assert_not_called()
mock_delete_pp.assert_not_called()
mock_vs_update.assert_called_once_with(
LB_VS_ID, pool_id=LB_POOL_ID,
lb_persistence_profile_id="%s_cookie" % LB_PP_ID)
vs_data = {'id': LB_VS_ID}
self._test_update_with_persistency(vs_data, self.pool,
self.pool_persistency, verify_func,
cookie=True)
def test_update_switch_persistency_type(self):
def verify_func(mock_create_pp, mock_update_pp,
mock_delete_pp, mock_vs_update):
mock_create_pp.assert_called_once_with(
name=mock.ANY,
tags=mock.ANY,
persistence_profile_id="%s_sourceip" % LB_PP_ID)
mock_update_pp.assert_not_called()
mock_delete_pp.assert_called_once()
mock_vs_update.assert_called_once_with(
LB_VS_ID, pool_id=LB_POOL_ID,
lb_persistence_profile_id="%s_sourceip" % LB_PP_ID)
ip_sess_persistence = lb_models.SessionPersistence(
POOL_ID, 'SOURCE_IP')
pool_ip_persistency = lb_models.Pool(
POOL_ID, LB_TENANT_ID,
'pool1', '', None, 'HTTP',
'ROUND_ROBIN', loadbalancer_id=LB_ID,
listener=self.listener,
listeners=[self.listener],
loadbalancer=self.lb,
session_persistence=ip_sess_persistence)
vs_data = {'id': LB_VS_ID,
'lb_persistence_profile_path': 'meh'}
self._test_update_with_persistency(vs_data,
self.pool_persistency,
pool_ip_persistency,
verify_func)
def test_update_remove_persistency(self):
def verify_func(mock_create_pp, mock_update_pp,
mock_delete_pp, mock_vs_update):
mock_create_pp.assert_not_called()
mock_update_pp.assert_not_called()
mock_delete_pp.assert_called_with(LB_PP_ID)
mock_vs_update.assert_called_once_with(
LB_VS_ID, pool_id=LB_POOL_ID, lb_persistence_profile_id=None)
vs_data = {'id': LB_VS_ID,
'lb_persistence_profile_path': LB_PP_ID}
self._test_update_with_persistency(vs_data, self.pool_persistency,
self.pool, verify_func)
def test_delete(self):
self.reset_completor()
with mock.patch.object(self.vs_client, 'update', return_value=None
) as mock_update_virtual_server, \
mock.patch.object(self.pool_client, 'delete'
) as mock_delete_pool:
self.edge_driver.pool.delete(self.context, self.pool_dict,
self.completor)
mock_update_virtual_server.assert_called_with(
LB_VS_ID, lb_persistence_profile_id=None, pool_id=None)
mock_delete_pool.assert_called_with(LB_POOL_ID)
self.assertTrue(self.last_completor_called)
self.assertTrue(self.last_completor_succees)
def test_delete_cascade(self):
self.reset_completor()
with mock.patch.object(self.pool_client, 'delete'
) as mock_delete_pool:
self.edge_driver.pool.delete_cascade(
self.context, self.pool_dict,
self.completor)
mock_delete_pool.assert_called_with(LB_POOL_ID)
self.assertTrue(self.last_completor_called)
self.assertTrue(self.last_completor_succees)
def test_delete_with_persistency(self):
self.reset_completor()
with mock.patch.object(self.vs_client, 'get'
) as mock_vs_get, \
mock.patch.object(self.vs_client, 'update', return_value=None
) as mock_update_virtual_server, \
mock.patch.object(self.pool_client, 'delete'
) as mock_delete_pool, \
mock.patch.object(self.pp_generic_client, 'delete',
return_value=None) as mock_delete_pp:
mock_vs_get.return_value = {
'id': LB_VS_ID,
'lb_persistence_profile_path': LB_PP_ID}
self.edge_driver.pool.delete(
self.context, self.pool_persistency_dict, self.completor)
mock_delete_pp.assert_called_once_with(LB_PP_ID)
mock_update_virtual_server.assert_called_once_with(
LB_VS_ID, lb_persistence_profile_id=None, pool_id=None)
mock_delete_pool.assert_called_with(LB_POOL_ID)
self.assertTrue(self.last_completor_called)
self.assertTrue(self.last_completor_succees)
def _verify_create(self, cookie_name, cookie_mode,
mock_create_pp, mock_update_pp):
if cookie_name:
mock_create_pp.assert_called_once_with(
persistence_profile_id="%s_cookie" % LB_PP_ID,
cookie_name=cookie_name,
cookie_mode=cookie_mode,
name=mock.ANY,
tags=mock.ANY)
else:
mock_create_pp.assert_called_once_with(
persistence_profile_id="%s_sourceip" % LB_PP_ID,
name=mock.ANY,
tags=mock.ANY)
# Compare tags - kw args are the last item of a mock call tuple
self.assertCountEqual(mock_create_pp.mock_calls[0][-1]['tags'],
[{'scope': 'os-lbaas-lb-id', 'tag': 'xxx-xxx'},
{'scope': 'os-lbaas-lb-name', 'tag': 'lb1'},
{'scope': 'os-lbaas-listener-id', 'tag': 'listener-x'}])
mock_update_pp.assert_not_called()
def _verify_update(self, cookie_name, cookie_mode,
mock_create_pp, mock_update_pp):
if cookie_name:
mock_update_pp.assert_called_once_with(
"%s_cookie" % LB_PP_ID,
cookie_name=cookie_name,
cookie_mode=cookie_mode,
name=mock.ANY,
tags=mock.ANY)
else:
mock_update_pp.assert_called_once_with(
"%s_sourceip" % LB_PP_ID,
name=mock.ANY,
tags=mock.ANY)
# Compare tags - kw args are the last item of a mock call tuple
self.assertCountEqual(mock_update_pp.mock_calls[0][-1]['tags'],
[{'scope': 'os-lbaas-lb-id', 'tag': 'xxx-xxx'},
{'scope': 'os-lbaas-lb-name', 'tag': 'lb1'},
{'scope': 'os-lbaas-listener-id', 'tag': 'listener-x'}])
mock_create_pp.assert_not_called()
def _verify_delete(self, cookie_name, cookie_mode,
mock_create_pp, mock_update_pp):
mock_create_pp.assert_not_called()
mock_update_pp.assert_not_called()
def _test_setup_session_persistence(self, session_persistence,
vs_data, verify_func,
cookie_name=None,
cookie_mode=None,
switch_type=False):
with mock.patch.object(self.pp_client, 'create_or_overwrite'
) as mock_create_pp, \
mock.patch.object(self.pp_cookie_client, 'create_or_overwrite'
) as mock_create_cookie_pp, \
mock.patch.object(self.pp_client, 'update', return_value=None,
) as mock_update_pp,\
mock.patch.object(self.pp_cookie_client, 'update',
return_value=None) as mock_update_cookie_pp:
self.pool.session_persistence = session_persistence
pool_dict = lb_translators.lb_pool_obj_to_dict(self.pool)
pp_id, post_func = p_utils.setup_session_persistence(
self.nsxpolicy, pool_dict, [], switch_type,
self.listener_dict, vs_data)
pp_id_suffix = ""
if session_persistence:
if session_persistence.type == "SOURCE_IP":
pp_id_suffix = "sourceip"
elif session_persistence.type in ["HTTP_COOKIE", "APP_COOKIE"]:
pp_id_suffix = "cookie"
self.assertEqual("%s_%s" % (LB_PP_ID, pp_id_suffix), pp_id)
else:
self.assertIsNone(pp_id)
self.assertEqual(
(self.nsxpolicy, vs_data['lb_persistence_profile_path'],),
post_func.args)
verify_func(cookie_name, cookie_mode,
mock_create_cookie_pp if cookie_name
else mock_create_pp,
mock_update_cookie_pp if cookie_name
else mock_update_pp)
def test_setup_session_persistence_sourceip_new_profile(self):
sess_persistence = lb_models.SessionPersistence(
"%s_sourceip" % LB_PP_ID, 'SOURCE_IP')
self._test_setup_session_persistence(
sess_persistence, {'id': LB_VS_ID}, self._verify_create)
def test_setup_session_persistence_httpcookie_new_profile(self):
sess_persistence = lb_models.SessionPersistence(
"%s_cookie" % LB_PP_ID, 'HTTP_COOKIE')
self._test_setup_session_persistence(
sess_persistence, {'id': LB_VS_ID},
self._verify_create, 'default_cookie_name', 'INSERT')
def test_setup_session_persistence_appcookie_new_profile(self):
sess_persistence = lb_models.SessionPersistence(
"%s_cookie" % LB_PP_ID, 'APP_COOKIE', 'whatever')
self._test_setup_session_persistence(
sess_persistence, {'id': LB_VS_ID},
self._verify_create, 'whatever', 'REWRITE')
def test_setup_session_persistence_none_from_existing(self):
sess_persistence = None
self._test_setup_session_persistence(
sess_persistence,
{'id': LB_VS_ID,
'lb_persistence_profile_path': "%s_sourceip" % LB_PP_ID},
self._verify_delete)
def test_setup_session_persistence_sourceip_from_existing(self):
sess_persistence = lb_models.SessionPersistence(
"%s_sourceip" % LB_PP_ID, 'SOURCE_IP')
self._test_setup_session_persistence(
sess_persistence,
{'id': LB_VS_ID,
'lb_persistence_profile_path': "%s_sourceip" % LB_PP_ID},
self._verify_update)
def test_setup_session_persistence_httpcookie_from_existing(self):
sess_persistence = lb_models.SessionPersistence(
"%s_cookie" % LB_PP_ID, 'HTTP_COOKIE')
self._test_setup_session_persistence(
sess_persistence,
{'id': LB_VS_ID,
'lb_persistence_profile_path': '%s_cookie' % LB_PP_ID},
self._verify_update,
'default_cookie_name', 'INSERT')
def test_setup_session_persistence_appcookie_from_existing(self):
sess_persistence = lb_models.SessionPersistence(
"%s_cookie" % LB_PP_ID, 'APP_COOKIE', 'whatever')
self._test_setup_session_persistence(
sess_persistence,
{'id': LB_VS_ID,
'lb_persistence_profile_path': '%s_cookie' % LB_PP_ID},
self._verify_update,
'whatever', 'REWRITE')
class TestEdgeLbaasV2Member(BaseTestEdgeLbaasV2):
def setUp(self):
super(TestEdgeLbaasV2Member, self).setUp()
@property
def _tested_entity(self):
return 'member'
def test_create(self):
self.reset_completor()
with mock.patch.object(self.lbv2_driver.plugin, 'get_pool_members'
) as mock_get_pool_members, \
mock.patch.object(lb_utils, 'get_network_from_subnet'
) as mock_get_network, \
mock.patch.object(lb_utils, 'get_router_from_network'
) as mock_get_router, \
mock.patch.object(self.service_client, 'get_router_lb_service'
) as mock_get_lb_service, \
mock.patch.object(self.pool_client, 'get'
) as mock_get_pool, \
mock.patch.object(self.pool_client,
'create_pool_member_and_add_to_pool'
) as mock_update_pool_with_members:
mock_get_pool_members.return_value = [self.member]
mock_get_network.return_value = LB_NETWORK
mock_get_router.return_value = LB_ROUTER_ID
mock_get_lb_service.return_value = {'id': LB_SERVICE_ID}
mock_get_pool.return_value = LB_POOL
self.edge_driver.member.create(
self.context, self.member_dict, self.completor)
mock_update_pool_with_members.assert_called_with(
LB_POOL_ID, MEMBER_ADDRESS,
port=self.member_dict['protocol_port'],
display_name=mock.ANY,
weight=self.member_dict['weight'],
backup_member=self.member_dict.get('backup', False),
admin_state='ENABLED')
self.assertTrue(self.last_completor_called)
self.assertTrue(self.last_completor_succees)
def test_create_external_vip(self):
self.reset_completor()
lb_service = {'id': LB_SERVICE_ID}
with mock.patch.object(self.lbv2_driver.plugin, 'get_pool_members'
) as mock_get_pool_members, \
mock.patch.object(lb_utils, 'get_network_from_subnet'
) as mock_get_network, \
mock.patch.object(lb_utils, 'get_router_from_network'
) as mock_get_router, \
mock.patch.object(self.service_client, 'get_router_lb_service'
) as mock_get_lb_service, \
mock.patch.object(self.core_plugin.nsxpolicy, 'search_by_tags',
return_value={'results': [lb_service]}),\
mock.patch.object(self.core_plugin,
'service_router_has_services',
return_value=False) as plugin_has_sr,\
mock.patch.object(self.core_plugin,
'service_router_has_loadbalancers',
return_value=False),\
mock.patch.object(self.pool_client, 'get'
) as mock_get_pool, \
mock.patch.object(self.core_plugin, '_find_router_gw_subnets',
return_value=[]),\
mock.patch.object(self.core_plugin, 'get_floatingips',
return_value=[{
'fixed_ip_address': MEMBER_ADDRESS,
'floating_ip_address': '1.1.1.1',
'router_id': LB_ROUTER_ID}]),\
mock.patch.object(self.pool_client,
'create_pool_member_and_add_to_pool'
) as mock_update_pool_with_members:
mock_get_pool_members.return_value = [self.member]
mock_get_network.return_value = EXT_LB_NETWORK
mock_get_router.return_value = LB_ROUTER_ID
mock_get_lb_service.return_value = {'id': LB_SERVICE_ID}
mock_get_pool.return_value = LB_POOL
self.edge_driver.member.create(
self.context, self.member_dict, self.completor)
mock_update_pool_with_members.assert_called_with(
LB_POOL_ID, MEMBER_ADDRESS,
port=self.member_dict['protocol_port'],
display_name=mock.ANY,
weight=self.member_dict['weight'],
backup_member=self.member_dict.get('backup', False),
admin_state='ENABLED')
self.assertTrue(self.last_completor_called)
self.assertTrue(self.last_completor_succees)
plugin_has_sr.assert_called_once_with(mock.ANY, LB_ROUTER_ID)
def test_create_external_vip_router_used(self):
self.reset_completor()
lb_service = {'id': LB_SERVICE_ID}
with mock.patch.object(self.lbv2_driver.plugin, 'get_pool_members'
) as mock_get_pool_members, \
mock.patch.object(lb_utils, 'get_network_from_subnet'
) as mock_get_network, \
mock.patch.object(lb_utils, 'get_router_from_network'
) as mock_get_router, \
mock.patch.object(self.service_client, 'get_router_lb_service'
) as mock_get_lb_service, \
mock.patch.object(self.core_plugin.nsxpolicy, 'search_by_tags',
return_value={'results': [lb_service]}),\
mock.patch.object(self.core_plugin,
'service_router_has_loadbalancers',
return_value=True),\
mock.patch.object(self.pool_client, 'get'
) as mock_get_pool, \
mock.patch.object(self.core_plugin, '_find_router_gw_subnets',
return_value=[]),\
mock.patch.object(self.core_plugin, 'get_floatingips',
return_value=[{
'fixed_ip_address': MEMBER_ADDRESS,
'router_id': LB_ROUTER_ID}]):
mock_get_pool_members.return_value = [self.member]
mock_get_network.return_value = EXT_LB_NETWORK
mock_get_router.return_value = LB_ROUTER_ID
mock_get_lb_service.return_value = {'id': LB_SERVICE_ID}
mock_get_pool.return_value = LB_POOL
self.assertRaises(
n_exc.BadRequest, self.edge_driver.member.create,
self.context, self.member_dict, self.completor)
self.assertTrue(self.last_completor_called)
self.assertFalse(self.last_completor_succees)
def test_create_external_vip_no_fip(self):
self.reset_completor()
lb_service = {'id': LB_SERVICE_ID}
with mock.patch.object(self.lbv2_driver.plugin, 'get_pool_members'
) as mock_get_pool_members, \
mock.patch.object(lb_utils, 'get_network_from_subnet'
) as mock_get_network, \
mock.patch.object(lb_utils, 'get_router_from_network'
) as mock_get_router, \
mock.patch.object(self.service_client, 'get_router_lb_service'
) as mock_get_lb_service, \
mock.patch.object(self.core_plugin.nsxpolicy, 'search_by_tags',
return_value={'results': [lb_service]}),\
mock.patch.object(self.core_plugin,
'service_router_has_loadbalancers',
return_value=True),\
mock.patch.object(self.pool_client, 'get'
) as mock_get_pool, \
mock.patch.object(self.core_plugin, '_find_router_gw_subnets',
return_value=[]),\
mock.patch.object(self.core_plugin, 'get_floatingips',
return_value=[]):
mock_get_pool_members.return_value = [self.member]
mock_get_network.return_value = EXT_LB_NETWORK
mock_get_router.return_value = LB_ROUTER_ID
mock_get_lb_service.return_value = {'id': LB_SERVICE_ID}
mock_get_pool.return_value = LB_POOL
self.assertRaises(
n_exc.BadRequest, self.edge_driver.member.create,
self.context, self.member_dict, self.completor)
self.assertTrue(self.last_completor_called)
self.assertFalse(self.last_completor_succees)
def test_update(self):
self.reset_completor()
new_member = lb_models.Member(MEMBER_ID, LB_TENANT_ID, POOL_ID,
MEMBER_ADDRESS, 80, 2, pool=self.pool,
name='member-nnn-nnn')
new_member_dict = lb_translators.lb_member_obj_to_dict(new_member)
with mock.patch.object(self.pool_client, 'get'
) as mock_get_pool, \
mock.patch.object(lb_utils, 'get_network_from_subnet'
) as mock_get_network_from_subnet:
mock_get_pool.return_value = LB_POOL_WITH_MEMBER
mock_get_network_from_subnet.return_value = LB_NETWORK
self.edge_driver.member.update(self.context, self.member_dict,
new_member_dict, self.completor)
self.assertTrue(self.last_completor_called)
self.assertTrue(self.last_completor_succees)
def test_delete(self):
self.reset_completor()
with mock.patch.object(self.pool_client, 'get'
) as mock_get_pool, \
mock.patch.object(lb_utils, 'get_network_from_subnet'
) as mock_get_network_from_subnet, \
mock.patch.object(self.pool_client, 'remove_pool_member'
) as mock_update_pool_with_members:
mock_get_pool.return_value = LB_POOL_WITH_MEMBER
mock_get_network_from_subnet.return_value = LB_NETWORK
self.edge_driver.member.delete(self.context, self.member_dict,
self.completor)
mock_update_pool_with_members.assert_called_with(
LB_POOL_ID, MEMBER_ADDRESS,
port=self.member_dict['protocol_port'])
self.assertTrue(self.last_completor_called)
self.assertTrue(self.last_completor_succees)
def test_delete_cascade(self):
self.reset_completor()
with mock.patch.object(self.pool_client, 'get'
) as mock_get_pool, \
mock.patch.object(lb_utils, 'get_network_from_subnet'
) as mock_get_network_from_subnet, \
mock.patch.object(self.pool_client, 'remove_pool_member'
) as mock_update_pool_with_members:
mock_get_pool.return_value = LB_POOL_WITH_MEMBER
mock_get_network_from_subnet.return_value = LB_NETWORK
self.edge_driver.member.delete_cascade(
self.context, self.member_dict,
self.completor)
mock_update_pool_with_members.assert_not_called()
self.assertFalse(self.last_completor_called)
class TestEdgeLbaasV2HealthMonitor(BaseTestEdgeLbaasV2):
def setUp(self):
super(TestEdgeLbaasV2HealthMonitor, self).setUp()
@property
def _tested_entity(self):
return 'health_monitor'
def test_create(self):
self.reset_completor()
with mock.patch.object(self.monitor_client, 'create_or_overwrite'
) as mock_create_monitor, \
mock.patch.object(self.pool_client, 'add_monitor_to_pool'
) as mock_add_monitor_to_pool:
self.edge_driver.healthmonitor.create(
self.context, self.hm_dict, self.completor)
mock_create_monitor.assert_called_once()
mock_add_monitor_to_pool.assert_called_with(
LB_POOL_ID, mock.ANY)
self.assertTrue(self.last_completor_called)
self.assertTrue(self.last_completor_succees)
def test_create_http(self):
self.reset_completor()
with mock.patch.object(self.http_monitor_client, 'create_or_overwrite'
) as mock_create_monitor, \
mock.patch.object(self.pool_client, 'add_monitor_to_pool'
) as mock_add_monitor_to_pool:
# Verify HTTP-specific monitor parameters are added
self.edge_driver.healthmonitor.create(
self.context, self.hm_http_dict, self.completor)
kw_args = mock_create_monitor.mock_calls[0][2]
self.assertEqual(self.hm_http.http_method,
kw_args.get('request_method'))
self.assertEqual(self.hm_http.url_path,
kw_args.get('request_url'))
mock_add_monitor_to_pool.assert_called_with(
LB_POOL_ID, mock.ANY)
self.assertTrue(self.last_completor_called)
self.assertTrue(self.last_completor_succees)
def test_create_without_pool(self):
self.reset_completor()
hm = lb_models.HealthMonitor(HM_ID, LB_TENANT_ID, 'PING', 3, 3,
1, pool=None, name='hm1')
hm_dict = lb_translators.lb_hm_obj_to_dict(hm)
with mock.patch.object(self.monitor_client, 'create_or_overwrite'
) as mock_create_monitor, \
mock.patch.object(self.pool_client, 'add_monitor_to_pool'
) as mock_add_monitor_to_pool:
self.assertRaises(
n_exc.BadRequest,
self.edge_driver.healthmonitor.create,
self.context, hm_dict, self.completor)
mock_create_monitor.assert_called_once()
mock_add_monitor_to_pool.assert_not_called()
self.assertTrue(self.last_completor_called)
self.assertFalse(self.last_completor_succees)
def test_update(self):
self.reset_completor()
with mock.patch.object(self.monitor_client, 'update'
) as mock_update_monitor:
new_hm = lb_models.HealthMonitor(
HM_ID, LB_TENANT_ID, 'PING', 5, 5,
5, pool=self.pool, name='new_name')
new_hm_dict = lb_translators.lb_hm_obj_to_dict(new_hm)
self.edge_driver.healthmonitor.update(
self.context, self.hm_dict, new_hm_dict, self.completor)
mock_update_monitor.assert_called_with(
LB_MONITOR_ID, name=mock.ANY,
fall_count=5, interval=5, timeout=5)
self.assertTrue(self.last_completor_called)
self.assertTrue(self.last_completor_succees)
def test_delete(self):
self.reset_completor()
with mock.patch.object(self.pool_client, 'remove_monitor_from_pool'
) as mock_remove_monitor_from_pool, \
mock.patch.object(self.monitor_client, 'delete'
) as mock_delete_monitor:
self.edge_driver.healthmonitor.delete(
self.context, self.hm_dict, self.completor)
mock_remove_monitor_from_pool.assert_called_with(
LB_POOL_ID, mock.ANY)
mock_delete_monitor.assert_called_with(LB_MONITOR_ID)
self.assertTrue(self.last_completor_called)
self.assertTrue(self.last_completor_succees)
def test_delete_cascade(self):
self.reset_completor()
with mock.patch.object(self.pool_client, 'remove_monitor_from_pool'
) as mock_remove_monitor_from_pool, \
mock.patch.object(self.monitor_client, 'delete'
) as mock_delete_monitor:
self.edge_driver.healthmonitor.delete_cascade(
self.context, self.hm_dict, self.completor)
mock_remove_monitor_from_pool.assert_called_with(
LB_POOL_ID, mock.ANY)
mock_delete_monitor.assert_called_with(LB_MONITOR_ID)
self.assertTrue(self.last_completor_called)
self.assertTrue(self.last_completor_succees)
class TestEdgeLbaasV2L7Policy(BaseTestEdgeLbaasV2):
def setUp(self):
super(TestEdgeLbaasV2L7Policy, self).setUp()
@property
def _tested_entity(self):
return 'l7policy'
def test_create(self):
self.reset_completor()
with mock.patch.object(self.vs_client, 'get'
) as mock_get_virtual_server, \
mock.patch.object(self.vs_client, 'add_lb_rule'
) as mock_update_virtual_server:
mock_get_virtual_server.return_value = {'id': LB_VS_ID}
self.edge_driver.l7policy.create(
self.context, self.l7policy_dict, self.completor)
mock_update_virtual_server.assert_called_once()
self.assertTrue(self.last_completor_called)
self.assertTrue(self.last_completor_succees)
def test_update(self):
self.reset_completor()
new_l7policy = lb_models.L7Policy(L7POLICY_ID, LB_TENANT_ID,
name='new-policy',
listener_id=LISTENER_ID,
action='REJECT',
listener=self.listener,
position=2)
new_policy_dict = lb_translators.lb_l7policy_obj_to_dict(new_l7policy)
vs_with_rules = {
'id': LB_VS_ID,
'rule_ids': [LB_RULE_ID, 'abc', 'xyz']
}
with mock.patch.object(self.vs_client, 'get'
) as mock_get_virtual_server, \
mock.patch.object(self.vs_client, 'update_lb_rule'
) as mock_update_virtual_server:
mock_get_virtual_server.return_value = vs_with_rules
self.edge_driver.l7policy.update(self.context, self.l7policy_dict,
new_policy_dict, self.completor)
mock_update_virtual_server.assert_called_once()
self.assertTrue(self.last_completor_called)
self.assertTrue(self.last_completor_succees)
def test_delete(self):
self.reset_completor()
with mock.patch.object(self.vs_client, 'remove_lb_rule'
) as mock_vs_remove_rule:
self.edge_driver.l7policy.delete(
self.context, self.l7policy_dict, self.completor)
mock_vs_remove_rule.assert_called_with(LB_VS_ID, mock.ANY,
check_name_suffix=True)
self.assertTrue(self.last_completor_called)
self.assertTrue(self.last_completor_succees)
def test_delete_cascade(self):
self.reset_completor()
with mock.patch.object(self.vs_client, 'remove_lb_rule'
) as mock_vs_remove_rule:
self.edge_driver.l7policy.delete_cascade(
self.context, self.l7policy_dict, self.completor)
mock_vs_remove_rule.assert_called_with(LB_VS_ID, mock.ANY,
check_name_suffix=True)
self.assertTrue(self.last_completor_called)
self.assertTrue(self.last_completor_succees)
class TestEdgeLbaasV2L7Rule(BaseTestEdgeLbaasV2):
def setUp(self):
super(TestEdgeLbaasV2L7Rule, self).setUp()
@property
def _tested_entity(self):
return 'l7rule'
def test_create(self):
self.reset_completor()
self.l7policy.rules = [self.l7rule]
with mock.patch.object(self.vs_client, 'update_lb_rule'
) as mock_update_virtual_server:
self.edge_driver.l7rule.create(
self.context, self.l7rule_dict, self.completor)
mock_update_virtual_server.assert_called_once()
self.assertTrue(self.last_completor_called)
self.assertTrue(self.last_completor_succees)
def test_update(self):
self.reset_completor()
new_l7rule = lb_models.L7Rule(L7RULE_ID, LB_TENANT_ID,
l7policy_id=L7POLICY_ID,
compare_type='STARTS_WITH',
invert=True,
type='COOKIE',
key='cookie1',
value='xxxxx',
policy=self.l7policy)
new_rule_dict = lb_translators.lb_l7rule_obj_to_dict(new_l7rule)
self.l7policy.rules = [new_l7rule]
with mock.patch.object(self.vs_client, 'update_lb_rule'
) as mock_update_virtual_server:
self.edge_driver.l7rule.update(self.context, self.l7rule_dict,
new_rule_dict, self.completor)
mock_update_virtual_server.assert_called_once()
self.assertTrue(self.last_completor_called)
self.assertTrue(self.last_completor_succees)
def test_delete(self):
self.reset_completor()
self.l7policy.rules = [self.l7rule]
with mock.patch.object(self.vs_client, 'update_lb_rule'
) as mock_update_virtual_server:
self.edge_driver.l7rule.delete(
self.context, self.l7rule_dict, self.completor)
mock_update_virtual_server.assert_called_once()
self.assertTrue(self.last_completor_called)
self.assertTrue(self.last_completor_succees)
def test_delete_cascade(self):
self.reset_completor()
self.l7policy.rules = [self.l7rule]
with mock.patch.object(self.vs_client, 'update_lb_rule'
) as mock_update_virtual_server:
self.edge_driver.l7rule.delete_cascade(
self.context, self.l7rule_dict, self.completor)
mock_update_virtual_server.assert_not_called()
self.assertFalse(self.last_completor_called)
# ============================================================================
# File boundary (dataset concatenation):
# SCODE-G/text_to_code/error_analysis.py from rizwan09/REDCODER (MIT license)
# ============================================================================
import os
import ipdb  # debugging helper
import csv
import json
import sys
import weighted_ngram_match
import syntax_match
import dataflow_match
import bleu_code
from bleu import _bleu
base_path = '/home/rizwan/DPR_models/'
# Note: successive assignments below override earlier ones; only the last
# value of output_path is actually used.
output_path = base_path + 'prediction_ori_raw_top_1_bleu_70.30.txt'
output_path = '/local/rizwan/workspace/projects/RaProLanG/plbart_ori_top_1_masked/-java-ms100000-wu5000-bsz64/output.hyp'
output_path = '/local/rizwan/workspace/projects/RaProLanG/maksed_with_top_2-java-ms100000-wu5000-bsz64/output.hyp'
input_path = base_path + 'source_ori_raw_top_1.txt'
target_path = base_path + 'test.target'
annotation_path = base_path + 'input_target_output_coder.csv'
plbart_pred_path = base_path + 'plbart_ori_pred.txt'
ensamble_preds_path = base_path + 'ensamble_pred.txt'  # sic: 'ensamble' kept as in the original file names
retrieved_code_path = base_path + 'retrieved.txt'
def check_retrived_acc(retrieved_code_path, max_k=10, lang='python'):
    with open(retrieved_code_path) as f:
        retrieved_code = json.load(f)
    scores = [[0 for i in range(max_k)] for j in range(len(retrieved_code))]
    refss = []
    hypss = []
    dpr_scores = []
    for idx, ex in enumerate(retrieved_code):
        try:
            target = ex['answers'].strip()
        except:
            ex = retrieved_code[ex]
            target = ex['answers'].strip()
        refss.append(target)
        for rank, ctx in enumerate(ex['ctxs']):
            try:
                dpr_scores.append(ctx['score'])
            except:
                dpr_scores.append(ctx['_score'])
            cand = ctx["text"].strip()
            if rank == 0:
                hypss.append(cand)
            if cand == target:
                for j in range(rank, max_k):
                    scores[idx][j] = 1
    for i in range(max_k):
        EM = sum([score[i] for score in scores]) / len(retrieved_code)
        print("At top ", i, " EM/Recall: ", EM * 100, 'dpr score: ', dpr_scores[i])
if lang == 'js':
lang = 'javascript'
alpha, beta, gamma, theta = 0.25, 0.25, 0.25, 0.25
# preprocess inputs
pre_references = [[ref.strip() for ref in refss]]
hypothesis = [hyp.strip() for hyp in hypss]
for i in range(len(pre_references)):
assert len(hypothesis) == len(pre_references[i])
references = []
for i in range(len(hypothesis)):
ref_for_instance = []
for j in range(len(pre_references)):
ref_for_instance.append(pre_references[j][i])
references.append(ref_for_instance)
assert len(references) == len(pre_references) * len(hypothesis)
# calculate ngram match (BLEU)
tokenized_hyps = [x.split() for x in hypothesis]
tokenized_refs = [[x.split() for x in reference] for reference in references]
ngram_match_score = bleu_code.corpus_bleu(tokenized_refs, tokenized_hyps)
# calculate weighted ngram match
keywords = [x.strip() for x in open('keywords/' + lang + '.txt', 'r', encoding='utf-8').readlines()]
def make_weights(reference_tokens, key_word_list):
return {token: 1 if token in key_word_list else 0.2 \
for token in reference_tokens}
tokenized_refs_with_weights = [[[reference_tokens, make_weights(reference_tokens, keywords)] \
for reference_tokens in reference] for reference in tokenized_refs]
weighted_ngram_match_score = weighted_ngram_match.corpus_bleu(tokenized_refs_with_weights, tokenized_hyps)
# calculate syntax match
syntax_match_score = syntax_match.corpus_syntax_match(references, hypothesis, lang)
# calculate dataflow match
dataflow_match_score = dataflow_match.corpus_dataflow_match(references, hypothesis, lang)
# print('ngram match: {0}, weighted ngram match: {1}, syntax_match: {2}, dataflow_match: {3}'. \
# format(ngram_match_score, weighted_ngram_match_score, syntax_match_score, dataflow_match_score))
print('Ngram match:\t%.2f\nWeighted ngram:\t%.2f\nSyntax match:\t%.2f\nDataflow match:\t%.2f' % ( \
ngram_match_score * 100, weighted_ngram_match_score * 100, syntax_match_score * 100,
dataflow_match_score * 100))
code_bleu_score = alpha * ngram_match_score \
+ beta * weighted_ngram_match_score \
+ gamma * syntax_match_score \
+ theta * dataflow_match_score
print('CodeBLEU score: %.2f' % (code_bleu_score * 100.0))
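# The recall@k bookkeeping in check_retrived_acc above can be distilled into a
# small standalone helper. This is an illustrative sketch only: the
# 'answers'/'ctxs' layout mirrors the retrieval JSON used here, and the toy
# examples below are made up, not real DPR output.

```python
def recall_at_k(examples, max_k):
    # hits[i][j] == 1 iff the target appears within the top-(j+1) candidates of example i
    hits = [[0] * max_k for _ in examples]
    for idx, ex in enumerate(examples):
        target = ex['answers'].strip()
        for rank, ctx in enumerate(ex['ctxs'][:max_k]):
            if ctx['text'].strip() == target:
                for j in range(rank, max_k):
                    hits[idx][j] = 1
                break
    # average the cumulative hit indicators over all examples
    return [sum(row[j] for row in hits) / len(examples) for j in range(max_k)]

toy = [
    {'answers': 'a', 'ctxs': [{'text': 'b'}, {'text': 'a'}]},
    {'answers': 'x', 'ctxs': [{'text': 'x'}, {'text': 'y'}]},
]
# recall_at_k(toy, 2) -> [0.5, 1.0]
```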
def check_retrived_acc_other_than_without_ref(retrieved_code_path, max_k=10, lang="python"):
    with open(retrieved_code_path) as f:
        retrieved_code = json.load(f)
    scores = [[] for j in range(len(retrieved_code))]
    refss = []
    hypss = []
    dpr_scores = []
    for idx, ex in enumerate(retrieved_code):
        try:
            target = ex['answers'].strip()
        except (TypeError, KeyError):
            # the JSON may be a dict keyed by example id rather than a list
            ex = retrieved_code[ex]
            target = ex['answers'].strip()
        refss.append(target)
        inserted = False
        for rank, ctx in enumerate(ex['ctxs']):
            cand = ctx["text"].strip()
            if cand != target:
                if not inserted:
                    hypss.append(cand)
                    inserted = True
                scores[idx].append(_bleu(target, cand))
                try:
                    dpr_scores.append(ctx['score'])
                except KeyError:
                    dpr_scores.append(ctx['_score'])
        if not inserted:
            hypss.append("")
        if len(scores[idx]) != max_k:
            for i in range(len(scores[idx]), max_k):
                scores[idx].append(0)
    min_len = min([len(score) for score in scores[:-1]])
    print(min_len)
    for i in range(min_len):
        bleu = sum([score[i] for score in scores]) / len(retrieved_code)
        print("At top ", i, " Bleu: ", bleu, 'dpr score: ', dpr_scores[i])
    if lang == 'js':
        lang = 'javascript'
    alpha, beta, gamma, theta = 0.25, 0.25, 0.25, 0.25
    # preprocess inputs
    pre_references = [[ref.strip() for ref in refss]]
    hypothesis = [hyp.strip() for hyp in hypss]
    for i in range(len(pre_references)):
        if len(hypothesis) != len(pre_references[i]):
            import ipdb
            ipdb.set_trace()
    references = []
    for i in range(len(hypothesis)):
        ref_for_instance = []
        for j in range(len(pre_references)):
            ref_for_instance.append(pre_references[j][i])
        references.append(ref_for_instance)
    assert len(references) == len(pre_references) * len(hypothesis)
    # calculate ngram match (BLEU)
    tokenized_hyps = [x.split() for x in hypothesis]
    tokenized_refs = [[x.split() for x in reference] for reference in references]
    ngram_match_score = bleu_code.corpus_bleu(tokenized_refs, tokenized_hyps)
    # calculate weighted ngram match
    keywords = [x.strip() for x in open('keywords/' + lang + '.txt', 'r', encoding='utf-8').readlines()]

    def make_weights(reference_tokens, key_word_list):
        return {token: 1 if token in key_word_list else 0.2
                for token in reference_tokens}
    tokenized_refs_with_weights = [[[reference_tokens, make_weights(reference_tokens, keywords)]
                                    for reference_tokens in reference] for reference in tokenized_refs]
    weighted_ngram_match_score = weighted_ngram_match.corpus_bleu(tokenized_refs_with_weights, tokenized_hyps)
    # calculate syntax match
    syntax_match_score = syntax_match.corpus_syntax_match(references, hypothesis, lang)
    # calculate dataflow match
    dataflow_match_score = dataflow_match.corpus_dataflow_match(references, hypothesis, lang)
    # print('ngram match: {0}, weighted ngram match: {1}, syntax_match: {2}, dataflow_match: {3}'. \
    #       format(ngram_match_score, weighted_ngram_match_score, syntax_match_score, dataflow_match_score))
    print('Ngram match:\t%.2f\nWeighted ngram:\t%.2f\nSyntax match:\t%.2f\nDataflow match:\t%.2f' % (
        ngram_match_score * 100, weighted_ngram_match_score * 100, syntax_match_score * 100,
        dataflow_match_score * 100))
    code_bleu_score = alpha * ngram_match_score \
        + beta * weighted_ngram_match_score \
        + gamma * syntax_match_score \
        + theta * dataflow_match_score
    print('CodeBLEU score: %.2f' % (code_bleu_score * 100.0))
def get_num_word_tokens(code):
    count = 0
    for w in code.split():
        if len(w) > 1:
            count += 1
    return count
def error_analysis():
    with open(annotation_path, 'w') as csvfile, \
            open(ensamble_preds_path, 'w') as ensemble_pred_f, \
            open(retrieved_code_path, 'w') as retrieved_f:
        csvwriter = csv.writer(csvfile)
        tars = [x.strip() for x in open(target_path, 'r', encoding='utf-8').readlines()]
        pres = [x.strip() for x in open(output_path, 'r', encoding='utf-8').readlines()]
        plbart_preds = [x.strip() for x in open(plbart_pred_path, 'r', encoding='utf-8').readlines()]
        retrieved_codes = [x.split('_CODE_SEP_')[-1].strip() for x in open(input_path, 'r', encoding='utf-8').readlines()]
        NLs = [x.split('concode')[0].strip() for x in open(input_path, 'r', encoding='utf-8').readlines()]
        correct_pred_count = 0
        correct_retrieval_count = 0
        copy_count = 0
        not_copied_but_correct = 0
        copied_and_correct = 0
        for id, (nl, retrieved_code, output, target) in enumerate(zip(NLs, retrieved_codes, pres, tars)):
            retrieved_f.write(retrieved_code + "\n")
            bleu_score = _bleu(target, output)
            # print(target, output, bleu_score)
            csvwriter.writerow([nl, retrieved_code, target, output, output == target,
                                retrieved_code == target, retrieved_code == output, bleu_score])
            if output == target:
                correct_pred_count += 1
            if retrieved_code == target:
                correct_retrieval_count += 1
            if retrieved_code == output:
                copy_count += 1
                ensemble_pred_f.write(output + "\n")
                if output == target:
                    copied_and_correct += 1
            else:
                # print("wrong: ", output, "->", plbart_preds[id])
                if get_num_word_tokens(retrieved_code) > 15:
                    ensemble_pred_f.write(output + "\n")
                else:
                    ensemble_pred_f.write(plbart_preds[id] + "\n")
                if output == target:
                    not_copied_but_correct += 1
        print("Acc/Em: ", correct_pred_count, "in percentage: ", correct_pred_count / len(tars) * 100)
        print("Retrieved: ", correct_retrieval_count, "in percentage: ", correct_retrieval_count / len(tars) * 100)
        print("Copied: ", copy_count, "in percentage: ", copy_count / len(tars) * 100)
        print("Not Copied: ", len(tars) - copy_count, "in percentage: ", (len(tars) - copy_count) / len(tars) * 100)
        print("Not Copied but correct: ", not_copied_but_correct, "in percentage of not copied: ",
              not_copied_but_correct / (len(tars) - copy_count) * 100)
        print("Copied and correct: ", copied_and_correct, "in percentage of copied: ",
              copied_and_correct / copy_count * 100)
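# The ensembling rule inside error_analysis above reduces to a small decision
# function: keep the retrieval-augmented output when it copied the retrieved
# snippet, or when the retrieved snippet is long enough to trust; otherwise
# fall back to the plain PLBART prediction. Hedged sketch only; the helper
# name and threshold default are illustrative, with token counting matching
# get_num_word_tokens (words longer than one character).

```python
def ensemble_choice(output, retrieved, plbart_pred, min_tokens=15):
    copied = (output == retrieved)
    # count "real" word tokens in the retrieved snippet, as get_num_word_tokens does
    long_enough = sum(1 for w in retrieved.split() if len(w) > 1) > min_tokens
    if copied or long_enough:
        return output
    return plbart_pred

# ensemble_choice('int a ;', 'int a ;', 'return 0 ;') -> 'int a ;'  (copied)
```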
# error_analysis()
def call_ret(lang, k=3):
    retrievd_test_file = '/local/rizwan/DPR_models/csnet/' + lang + '_csnet_pos_only_retrieval_dedup_test_30.json'
    print(retrievd_test_file)
    print("---" * 50)
    print("With Ref:: ")
    print("---" * 50)
    check_retrived_acc(retrievd_test_file, max_k=k, lang=lang)
    print("---" * 50)
    print("Without Ref:: ")
    print("---" * 50)
    check_retrived_acc_other_than_without_ref(retrievd_test_file, max_k=k, lang=lang)
# lang='java'
# call_ret(lang, k=1)
# lang='python'
# call_ret(lang, k=1)
#
#
# retrievd_test_file='/local/rizwan/DPR_models/biencoder_models_concode_without_code_tokens/java/python_csnet_pos_only_retrieval_dedup_test_20.json'
# print(retrievd_test_file)
# check_retrived_acc(retrievd_test_file, lang="python")
# check_retrived_acc_other_than_without_ref(retrievd_test_file, lang="python")
# #
# # #
# retrievd_test_file='/local/rizwan/DPR_models/biencoder_models_concode_without_code_tokens/java/java_csnet_pos_only_retrieval_dedup_test_20.json'
# print(retrievd_test_file)
# check_retrived_acc(retrievd_test_file, lang="java")
# check_retrived_acc_other_than_without_ref(retrievd_test_file)
retrievd_test_file='/local/rizwan/DPR_models/biencoder_models_concode_without_code_tokens/java/test_100.json'
check_retrived_acc(retrievd_test_file, lang="java")
check_retrived_acc_other_than_without_ref(retrievd_test_file)
exit(0)
# RETDIR='/local/rizwan/DPR_models/csnet/'
# langs=[ 'python', 'java']
# test_ret_paths = { lang: RETDIR+lang+'_csnet_pos_only_retrieval_dedup_test_30.json' for lang in langs}
# for lang in langs:
# print('lang: ', lang, 'with')
# check_retrived_acc(test_ret_paths[lang], lang=lang)
# print('lang: ', lang, 'without')
# check_retrived_acc_other_than_without_ref(test_ret_paths[lang], lang=lang)
# RETDIR='/local/rizwan/workspace/projects/RaProLang/retrieval/bm25/'
# langs=[ 'python', 'java']
# test_ret_paths = { lang: RETDIR+'codexglue-csnet-'+lang+'.test_bm25.json' for lang in langs}
# for lang in langs:
# print('lang: ', lang, 'with')
# check_retrived_acc(test_ret_paths[lang], lang=lang)
# print('lang: ', lang, 'without')
# check_retrived_acc_other_than_without_ref(test_ret_paths[lang], lang=lang)
#
# print('Concode lang: ', lang, 'with')
# filepth='/local/rizwan/workspace/projects/RaProLang/retrieval/bm25/concode.test_bm25.json'
# check_retrived_acc(filepth, lang='java')
# print('Concode lang: ', lang, 'without')
# check_retrived_acc_other_than_without_ref(filepth, lang='java')
# RETDIR='/local/rizwan/workspace/projects/RaProLang/retrieval/bm25/code_to_text/'
# langs=[ 'java', 'python',]
# test_ret_paths = { lang: RETDIR+'codexglue-csnet-'+lang+'.test_code_text_bm25.json' for lang in langs}
# for lang in langs:
# print('lang: ', lang, 'with')
# check_retrived_acc(test_ret_paths[lang], lang=lang)
# print('lang: ', lang, 'without')
# check_retrived_acc_other_than_without_ref(test_ret_paths[lang], lang=lang)
redcoder_ext_f='/local/rizwan/workspace/projects/RaProLanG/plbart-codexglue-csnet-python-comments-top-5-without-python-ms100000-wu5000-bsz64/output.hyp'
redcoder_f='/local/rizwan/workspace/projects/RaProLanG/plbart-codexglue-csnet-python-with-top-5-retrived-from-no-ref-no-mask-python-ms100000-wu5000-bsz72/output.hyp'
plbart_f='/local/rizwan/workspace/projects/RaProLanG/plbart-codexglue-csnet-python-with-top-0-retrived-from-with-ref-no-mask-python-ms100000-wu5000-bsz72/output.hyp'
# retrived_f='/local/rizwan/workspace/projects/RaProLanG/data/plbart/csnet/python_retrievd_from_no_ref_top_5_mask_rate_0/test.source'  # superseded by the assignment below
retrived_f='/local/rizwan/workspace/projects/RaProLanG/data/plbart/csnet_with_comments/python_without_ref_top_5/test.source'
target_f='/local/rizwan/workspace/projects/RaProLanG/data/plbart/csnet/python_retrievd_from_no_ref_top_5_mask_rate_0/test.target'
# redcoder_f='/local/rizwan/workspace/projects/RaProLanG/plbart-codexglue-csnet-java-with-top-5-retrived-from-no-ref-no-mask-java-ms100000-wu5000-bsz72/output.hyp'
# plbart_f='/local/rizwan/workspace/projects/RaProLanG/plbart-codexglue-csnet-java-with-top-0-retrived-from-with-ref-no-mask-java-ms100000-wu5000-bsz72/output.hyp'
# redcoder_ext_f='/local/rizwan/workspace/projects/RaProLanG/plbart-codexglue-csnet-java-comments-top-5-without-java-ms100000-wu5000-bsz72/output.hyp'
# target_f='/local/rizwan/workspace/projects/RaProLanG/data/plbart/csnet/java_retrievd_from_no_ref_top_5_mask_rate_0/test.target'
# retrived_f='/local/rizwan/workspace/projects/RaProLanG/data/plbart/csnet_with_comments/java_without_ref_top_5/test.source'
quants=[20, 40, 60, 80, 100, 50]
num_examples = {x:0 for x in quants}
plbarts={x:[] for x in quants}
retriveds={x:[] for x in quants}
redcoders={x:[] for x in quants}
redcoders_exts={x:[] for x in quants}
# redcoder_f='/local/rizwan/workspace/projects/RaProLanG/plbart-codexglue-csnet-java-with-top-5-retrived-from-no-ref-no-mask-java-ms100000-wu5000-bsz72/output.hyp'
# plbart_f='/local/rizwan/workspace/projects/RaProLanG/plbart-codexglue-csnet-java-with-top-0-retrived-from-with-ref-no-mask-java-ms100000-wu5000-bsz72/output.hyp'
# redcoder_ext_f='/local/rizwan/workspace/projects/RaProLanG/plbart-codexglue-csnet-java-comments-top-5-without-java-ms100000-wu5000-bsz72/output.hyp'
# target_f='/local/rizwan/workspace/projects/RaProLanG/data/plbart/csnet/java_retrievd_from_no_ref_top_5_mask_rate_0/test.target'
# retrived_f='/local/rizwan/workspace/projects/RaProLanG/data/plbart/csnet_with_comments/java_without_ref_top_5/test.source'
quants=[20, 40, 60, 80, 100, 150, 500]
num_examples = {x:0 for x in quants}
plbarts={x:[] for x in quants}
retriveds={x:[] for x in quants}
redcoders={x:[] for x in quants}
redcoders_exts={x:[] for x in quants}
# print(num_examples)
# print("plbarts:", )
# for x, y in plbarts.items():
# print("len: ", x, " number ", num_examples[x], "avg blue: ", sum(y)/len(y))
# print("retrievd:", )
# for x, y in retriveds.items():
# print("len: ", x, " number ", num_examples[x], "avg blue: ", sum(y)/len(y))
# print("redcoder:", )
# for x, y in redcoders.items():
# print("len: ", x, " number ", num_examples[x], "avg blue: ", sum(y)/len(y))
# print("redcoder-ext:", )
# for x, y in redcoders_exts.items():
# print("len: ", x, " number ", num_examples[x], "avg blue: ", sum(y)/len(y))
with open(redcoder_ext_f) as rd_ext, open(redcoder_f) as rd, open(plbart_f) as plbrt, \
        open(retrived_f) as rtvd, open(target_f) as target:
    for rdext, rd, rtr, plbt, tgt in zip(rd_ext, rd, rtvd, plbrt, target):
        tgt = tgt.strip()
        # calculate ngram match (BLEU)
        tokenized_hyps = [plbt.split()]
        tokenized_refs = [[tgt.split()]]
        plbart_bleu_score = bleu_code.corpus_bleu(tokenized_refs, tokenized_hyps)

        tokenized_hyps = [rtr.split('_CODE_SEP_')[1].split('_NL_')[0].strip().split()]
        rtr_bleu_score = bleu_code.corpus_bleu(tokenized_refs, tokenized_hyps)

        tokenized_hyps = [rd.strip().split()]
        rd_bleu_score = bleu_code.corpus_bleu(tokenized_refs, tokenized_hyps)

        tokenized_hyps = [rdext.strip().split()]
        rdext_bleu_score = bleu_code.corpus_bleu(tokenized_refs, tokenized_hyps)

        # if rdext_bleu_score > rd_bleu_score and rd_bleu_score > rtr_bleu_score and rtr_bleu_score > plbart_bleu_score and \
        #         len(tgt.split()) >= len(rdext.strip().split()) and len(rdext.strip().split()) >= len(rtr.split('_CODE_SEP_')[1].strip().split()) \
        #         and len(rtr.split('_CODE_SEP_')[1].strip().split()) >= len(plbt.split()) and rdext_bleu_score > 0.3 and rd_bleu_score > 0.2 \
        #         and tgt not in rtr:
        #
        #     print('input: ', rtr.split('_CODE_SEP_')[0])
        #     print("target:", tgt)
        #
        #     print('=' * 10)
        #     print("plbart:", plbt)
        #     print("bleu:", plbart_bleu_score)
        #     print('=' * 10)
        #     print("rtrvd:", rtr)
        #     print("bleu:", rtr_bleu_score)
        #
        #     print('=' * 10)
        #     print("rd:", rd)
        #     print("bleu:", rd_bleu_score)
        #     print('=' * 10)
        #     print('rdext: ', rdext)
        #     print("bleu:", rdext_bleu_score)

        l = len(tgt.split())
        for q in quants:
            if l < q:
                num_examples[q] += 1
                plbarts[q].append(plbart_bleu_score)
                retriveds[q].append(rtr_bleu_score)
                redcoders[q].append(rd_bleu_score)
                redcoders_exts[q].append(rdext_bleu_score)
                break
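# The loop above buckets per-example BLEU by target length: a score goes into
# the first quantile bound that exceeds the target's token count, then each
# bucket is averaged. A minimal standalone sketch of that aggregation (the
# helper name and toy scores are illustrative, not real results):

```python
def bucket_by_length(pairs, bounds):
    # pairs: iterable of (target_length, score); bounds: ascending length cutoffs
    buckets = {b: [] for b in bounds}
    for target_len, score in pairs:
        for b in bounds:
            if target_len < b:  # first bound that fits wins, mirroring the break above
                buckets[b].append(score)
                break
    # average each bucket; None marks an empty bucket
    return {b: (sum(v) / len(v) if v else None) for b, v in buckets.items()}

# bucket_by_length([(10, 0.5), (30, 0.7), (90, 0.9)], [20, 40, 60, 80, 100])
# -> {20: 0.5, 40: 0.7, 60: None, 80: None, 100: 0.9}
```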
print(num_examples.keys(), num_examples.values())
print("plbarts:")
xxxx = []
for x, y in plbarts.items():
    if len(y) == 0:
        print(x, 'num examples ', num_examples[x])
    else:
        print("len: ", x, " number ", num_examples[x], "avg bleu: ", sum(y) / len(y))
        xxxx.append(sum(y) / len(y))
print(xxxx)
xxxx = []
print("retrieved:")
for x, y in retriveds.items():
    if len(y) == 0:
        print(x, 'num examples ', num_examples[x])
    else:
        print("len: ", x, " number ", num_examples[x], "avg bleu: ", sum(y) / len(y))
        xxxx.append(sum(y) / len(y))
print(xxxx)
xxxx = []
print("redcoder:")
for x, y in redcoders.items():
    if len(y) == 0:
        print(x, 'num examples ', num_examples[x])
    else:
        print("len: ", x, " number ", num_examples[x], "avg bleu: ", sum(y) / len(y))
        xxxx.append(sum(y) / len(y))
print(xxxx)
xxxx = []
print("redcoder-ext:")
for x, y in redcoders_exts.items():
    if len(y) == 0:
        print(x, 'num examples ', num_examples[x])
    else:
        print("len: ", x, " number ", num_examples[x], "avg bleu: ", sum(y) / len(y))
        xxxx.append(sum(y) / len(y))
print(xxxx)
xxxx = []
# --- Lib/test/libregrtest/__init__.py (shawwn/cpython, 0BSD) ---
from test.libregrtest.cmdline import _parse_args, RESOURCE_NAMES, ALL_RESOURCES
from test.libregrtest.main import main
# --- webdriver/tests/new_window/user_prompts.py (ziransun/wpt, BSD-3-Clause) ---
# META: timeout=long
import pytest

from tests.support.asserts import assert_dialog_handled, assert_error, assert_success


def new_window(session, type_hint=None):
    return session.transport.send(
        "POST", "session/{session_id}/window/new".format(**vars(session)),
        {"type": type_hint})


@pytest.fixture
def check_user_prompt_closed_without_exception(session, create_dialog):
    def check_user_prompt_closed_without_exception(dialog_type, retval):
        original_handles = session.handles

        create_dialog(dialog_type, text=dialog_type)

        response = new_window(session)
        value = assert_success(response)

        handles = session.handles
        assert len(handles) == len(original_handles) + 1
        assert value["handle"] in handles
        assert value["handle"] not in original_handles

        assert_dialog_handled(session, expected_text=dialog_type, expected_retval=retval)

    return check_user_prompt_closed_without_exception


@pytest.fixture
def check_user_prompt_closed_with_exception(session, create_dialog):
    def check_user_prompt_closed_with_exception(dialog_type, retval):
        original_handles = session.handles

        create_dialog(dialog_type, text=dialog_type)

        response = new_window(session)
        assert_error(response, "unexpected alert open")

        assert_dialog_handled(session, expected_text=dialog_type, expected_retval=retval)

        assert len(session.handles) == len(original_handles)

    return check_user_prompt_closed_with_exception


@pytest.fixture
def check_user_prompt_not_closed_but_exception(session, create_dialog):
    def check_user_prompt_not_closed_but_exception(dialog_type):
        original_handles = session.handles

        create_dialog(dialog_type, text=dialog_type)

        response = new_window(session)
        assert_error(response, "unexpected alert open")

        assert session.alert.text == dialog_type
        session.alert.dismiss()

        assert len(session.handles) == len(original_handles)

    return check_user_prompt_not_closed_but_exception


@pytest.mark.capabilities({"unhandledPromptBehavior": "accept"})
@pytest.mark.parametrize("dialog_type, retval", [
    ("alert", None),
    ("confirm", True),
    ("prompt", ""),
])
def test_accept(check_user_prompt_closed_without_exception, dialog_type, retval):
    check_user_prompt_closed_without_exception(dialog_type, retval)


@pytest.mark.capabilities({"unhandledPromptBehavior": "accept and notify"})
@pytest.mark.parametrize("dialog_type, retval", [
    ("alert", None),
    ("confirm", True),
    ("prompt", ""),
])
def test_accept_and_notify(check_user_prompt_closed_with_exception, dialog_type, retval):
    check_user_prompt_closed_with_exception(dialog_type, retval)


@pytest.mark.capabilities({"unhandledPromptBehavior": "dismiss"})
@pytest.mark.parametrize("dialog_type, retval", [
    ("alert", None),
    ("confirm", False),
    ("prompt", None),
])
def test_dismiss(check_user_prompt_closed_without_exception, dialog_type, retval):
    check_user_prompt_closed_without_exception(dialog_type, retval)


@pytest.mark.capabilities({"unhandledPromptBehavior": "dismiss and notify"})
@pytest.mark.parametrize("dialog_type, retval", [
    ("alert", None),
    ("confirm", False),
    ("prompt", None),
])
def test_dismiss_and_notify(check_user_prompt_closed_with_exception, dialog_type, retval):
    check_user_prompt_closed_with_exception(dialog_type, retval)


@pytest.mark.capabilities({"unhandledPromptBehavior": "ignore"})
@pytest.mark.parametrize("dialog_type", ["alert", "confirm", "prompt"])
def test_ignore(check_user_prompt_not_closed_but_exception, dialog_type):
    check_user_prompt_not_closed_but_exception(dialog_type)


@pytest.mark.parametrize("dialog_type, retval", [
    ("alert", None),
    ("confirm", False),
    ("prompt", None),
])
def test_default(check_user_prompt_closed_with_exception, dialog_type, retval):
    check_user_prompt_closed_with_exception(dialog_type, retval)
# --- app/core/models/__init__.py (rdurica/example, MIT) ---
from .profile import Profile
# --- tests/unit/test_expander.py (cicorias/putput, MIT) ---
import unittest
from pathlib import Path

from putput.expander import expand
from putput.pipeline import _load_pattern_def
from tests.unit.helper_functions import compare_all_pairs


class TestExpander(unittest.TestCase):
    # pylint: disable=too-many-public-methods
    def setUp(self) -> None:
        self.maxDiff = None
        self._base_dir = Path(__file__).parent / 'pattern_definitions' / 'valid'

    def test_dynamic_token_patterns_only(self) -> None:
        dynamic_token_patterns_map = {'ARTIST': ('the beatles', 'kanye')}
        pattern_def = _load_pattern_def(self._base_dir / 'dynamic_token_patterns_only.yml')
        expected_utterance_combo = ((('the beatles', 'kanye'),),)
        expected_tokens = (('ARTIST',),)
        expected_groups = (((('None', 1)),),)
        _, generator = expand(pattern_def, dynamic_token_patterns_map=dynamic_token_patterns_map)
        actual_utterance_combo, actual_tokens, actual_groups = zip(*generator)
        pairs = [(actual_utterance_combo, expected_utterance_combo),
                 (actual_tokens, expected_tokens),
                 (actual_groups, expected_groups)]
        compare_all_pairs(self, pairs)

    def test_static_token_patterns_only(self) -> None:
        pattern_def = _load_pattern_def(self._base_dir / 'static_token_patterns_only.yml')
        _, generator = expand(pattern_def)
        actual_utterance_combo, actual_tokens, actual_groups = zip(*generator)
        expected_utterance_combo = ((('he will want', 'she will want'), ('to play', 'to listen')),)
        expected_tokens = (('START', 'PLAY'),)
        expected_groups = (((('None', 1)), ('None', 1)),)
        pairs = [(actual_utterance_combo, expected_utterance_combo),
                 (actual_tokens, expected_tokens),
                 (actual_groups, expected_groups)]
        compare_all_pairs(self, pairs)

    def test_dynamic_and_static_token_patterns(self) -> None:
        dynamic_token_patterns_map = {'ARTIST': ('the beatles', 'kanye')}
        pattern_def = _load_pattern_def(self._base_dir / 'dynamic_and_static_token_patterns.yml')
        _, generator = expand(pattern_def, dynamic_token_patterns_map=dynamic_token_patterns_map)
        actual_utterance_combo, actual_tokens, actual_groups = zip(*generator)
        expected_utterance_combo = ((('he will want', 'she will want'),
                                     ('to play', 'to listen'),
                                     ('the beatles', 'kanye')),)
        expected_tokens = (('START', 'PLAY', 'ARTIST'),)
        expected_groups = ((('None', 1), ('None', 1), ('None', 1)),)
        pairs = [(actual_utterance_combo, expected_utterance_combo),
                 (actual_tokens, expected_tokens),
                 (actual_groups, expected_groups)]
        compare_all_pairs(self, pairs)

    def test_static_and_base_tokens(self) -> None:
        pattern_def = _load_pattern_def(self._base_dir / 'static_and_base_tokens.yml')
        _, generator = expand(pattern_def)
        actual_utterance_combo, actual_tokens, actual_groups = zip(*generator)
        expected_utterance_combo = ((('he will want', 'she will want'), ('to play', 'to listen')),)
        expected_tokens = (('START', 'PLAY'),)
        expected_groups = ((('None', 1), ('None', 1)),)
        pairs = [(actual_utterance_combo, expected_utterance_combo),
                 (actual_tokens, expected_tokens),
                 (actual_groups, expected_groups)]
        compare_all_pairs(self, pairs)

    def test_static_and_base_tokens_and_group_tokens(self) -> None:
        pattern_def = _load_pattern_def(self._base_dir / 'static_and_base_tokens_and_group_tokens.yml')
        _, generator = expand(pattern_def)
        actual_utterance_combo, actual_tokens, actual_groups = zip(*generator)
        expected_utterance_combo = ((('hi',), ('he will want', 'she will want'), ('to play', 'to listen')),)
        expected_tokens = (('WAKE', 'START', 'PLAY'),)
        expected_groups = ((('None', 1), ('PLAY_PHRASE', 2),),)
        pairs = [(actual_utterance_combo, expected_utterance_combo),
                 (actual_tokens, expected_tokens),
                 (actual_groups, expected_groups)]
        compare_all_pairs(self, pairs)

    def test_keys_in_addition_to_utterance_patterns_token_patterns(self) -> None:
        dynamic_token_patterns_map = {'ARTIST': ('the beatles', 'kanye')}
        pattern_def = _load_pattern_def(self._base_dir / 'keys_in_addition_to_utterance_patterns_tokens_patterns.yml')
        _, generator = expand(pattern_def, dynamic_token_patterns_map=dynamic_token_patterns_map)
        actual_utterance_combo, actual_tokens, actual_groups = zip(*generator)
        expected_utterance_combo = ((('he will want', 'she will want'),
                                     ('to play', 'to listen'),
                                     ('the beatles', 'kanye')),)
        expected_tokens = (('START', 'PLAY', 'ARTIST'),)
        expected_groups = ((('None', 1), ('None', 1), ('None', 1)),)
        pairs = [(actual_utterance_combo, expected_utterance_combo),
                 (actual_tokens, expected_tokens),
                 (actual_groups, expected_groups)]
        compare_all_pairs(self, pairs)

    def test_groups_with_range(self) -> None:
        artists = ('the beatles', 'kanye', 'nico', 'tom waits')
        dynamic_token_patterns_map = {'ARTIST': artists}
        pattern_def = _load_pattern_def(self._base_dir / 'groups_with_range.yml')
        _, generator = expand(pattern_def, dynamic_token_patterns_map=dynamic_token_patterns_map)
        actual_utterance_combo, actual_tokens, actual_groups = zip(*generator)
        expected_utterance_combo = ((('she wants',), ('to play',), artists),
                                    (('she wants',), ('to play',), artists, artists),
                                    (('she wants',), ('to play',), artists, artists, artists),)
        expected_tokens = (('START', 'PLAY', 'ARTIST'),
                           ('START', 'PLAY', 'ARTIST', 'ARTIST'),
                           ('START', 'PLAY', 'ARTIST', 'ARTIST', 'ARTIST'),)
        expected_groups = ((('None', 1), ('PLAY_ARTISTS', 2)),
                           (('None', 1), ('PLAY_ARTISTS', 3)),
                           (('None', 1), ('PLAY_ARTISTS', 4)),)
        pairs = [(actual_utterance_combo, expected_utterance_combo),
                 (actual_tokens, expected_tokens),
                 (actual_groups, expected_groups)]
        compare_all_pairs(self, pairs)

    def test_groups_with_single_range(self) -> None:
        artists = ('the beatles', 'kanye', 'nico', 'tom waits')
        dynamic_token_patterns_map = {'ARTIST': artists}
        pattern_def = _load_pattern_def(self._base_dir / 'groups_with_single_range.yml')
        _, generator = expand(pattern_def, dynamic_token_patterns_map=dynamic_token_patterns_map)
        actual_utterance_combo, actual_tokens, actual_groups = zip(*generator)
        expected_utterance_combo = ((('she wants',), ('to play',), artists, artists, artists),)
        expected_tokens = (('START', 'PLAY', 'ARTIST', 'ARTIST', 'ARTIST'),)
        expected_groups = ((('None', 1), ('PLAY_ARTISTS', 4)),)
        pairs = [(actual_utterance_combo, expected_utterance_combo),
                 (actual_tokens, expected_tokens),
                 (actual_groups, expected_groups)]
        compare_all_pairs(self, pairs)

    def test_utterance_patterns_with_range(self) -> None:
        artists = ('the beatles', 'kanye', 'nico', 'tom waits')
        dynamic_token_patterns_map = {'ARTIST': artists}
        pattern_def = _load_pattern_def(self._base_dir / 'utterance_patterns_with_range.yml')
        _, generator = expand(pattern_def, dynamic_token_patterns_map=dynamic_token_patterns_map)
        actual_utterance_combo, actual_tokens, actual_groups = zip(*generator)
        expected_utterance_combo = ((('she wants',), ('to play',), artists),
                                    (('she wants',), ('to play',), artists, ('to play',), artists),
                                    (('she wants',), ('to play',), artists,
                                     ('to play',), artists, ('to play',), artists),)
        expected_tokens = (('START', 'PLAY', 'ARTIST'),
                           ('START', 'PLAY', 'ARTIST', 'PLAY', 'ARTIST'),
                           ('START', 'PLAY', 'ARTIST', 'PLAY', 'ARTIST', 'PLAY', 'ARTIST'),)
        expected_groups = ((('None', 1), ('PLAY_ARTIST', 2)),
                           (('None', 1), ('PLAY_ARTIST', 2), ('PLAY_ARTIST', 2)),
                           (('None', 1), ('PLAY_ARTIST', 2), ('PLAY_ARTIST', 2), ('PLAY_ARTIST', 2)),)
        pairs = [(actual_utterance_combo, expected_utterance_combo),
                 (actual_tokens, expected_tokens),
                 (actual_groups, expected_groups)]
        compare_all_pairs(self, pairs)

    def test_utterance_patterns_with_range_and_non_range(self) -> None:
        artists = ('the beatles', 'kanye', 'nico', 'tom waits')
        dynamic_token_patterns_map = {'ARTIST': artists}
        pattern_def = _load_pattern_def(self._base_dir / 'utterance_patterns_with_range_and_non_range.yml')
        _, generator = expand(pattern_def, dynamic_token_patterns_map=dynamic_token_patterns_map)
        actual_utterance_combo, actual_tokens, actual_groups = zip(*generator)
        expected_utterance_combo = ((('she wants',),),
                                    (('she wants',), ('to play',), artists),
                                    (('she wants',), ('to play',), artists, ('to play',), artists),
                                    (('she wants',), ('to play',), artists,
                                     ('to play',), artists, ('to play',), artists),)
        expected_tokens = (('START',),
                           ('START', 'PLAY', 'ARTIST'),
                           ('START', 'PLAY', 'ARTIST', 'PLAY', 'ARTIST'),
                           ('START', 'PLAY', 'ARTIST', 'PLAY', 'ARTIST', 'PLAY', 'ARTIST'),)
expected_groups = ((('None', 1),),
(('None', 1), ('PLAY_ARTIST', 2)),
(('None', 1), ('PLAY_ARTIST', 2), ('PLAY_ARTIST', 2)),
(('None', 1), ('PLAY_ARTIST', 2), ('PLAY_ARTIST', 2), ('PLAY_ARTIST', 2)),)
pairs = [(actual_utterance_combo, expected_utterance_combo),
(actual_tokens, expected_tokens),
(actual_groups, expected_groups)]
compare_all_pairs(self, pairs)
def test_groups_with_range_and_non_range(self) -> None:
artists = ('the beatles', 'kanye', 'nico', 'tom waits')
dynamic_token_patterns_map = {'ARTIST': artists}
pattern_def = _load_pattern_def(self._base_dir / 'groups_with_range_and_non_range.yml')
_, generator = expand(pattern_def, dynamic_token_patterns_map=dynamic_token_patterns_map)
actual_utterance_combo, actual_tokens, actual_groups = zip(*generator)
expected_utterance_combo = ((('she wants',), ('to play',), artists, artists, artists),
(('she wants',), ('to play',), artists),)
expected_tokens = (('START', 'PLAY', 'ARTIST', 'ARTIST', 'ARTIST'),
('START', 'PLAY', 'ARTIST'),)
expected_groups = ((('None', 1), ('PLAY_ARTISTS', 4)),
(('START_SONG', 3),),)
pairs = [(actual_utterance_combo, expected_utterance_combo),
(actual_tokens, expected_tokens),
(actual_groups, expected_groups)]
compare_all_pairs(self, pairs)
def test_nested_group_tokens(self) -> None:
pattern_def = _load_pattern_def(self._base_dir / 'nested_group_tokens.yml')
_, generator = expand(pattern_def)
actual_utterance_combo, actual_tokens, actual_groups = zip(*generator)
expected_utterance_combo = ((('hi',), ('she wants',), ('to play', 'to listen'), ('to play', 'to listen')),)
expected_tokens = (('WAKE', 'START', 'PLAY', 'PLAY'),)
expected_groups = ((('None', 1), ('PLAY_PHRASE', 3),),)
pairs = [(actual_utterance_combo, expected_utterance_combo),
(actual_tokens, expected_tokens),
(actual_groups, expected_groups)]
compare_all_pairs(self, pairs)
def test_nested_group_tokens_and_ranges(self) -> None:
pattern_def = _load_pattern_def(self._base_dir / 'nested_group_tokens_and_ranges.yml')
_, generator = expand(pattern_def)
actual_utterance_combo, actual_tokens, actual_groups = zip(*generator)
expected_utterance_combo = ((('hi',), ('she wants',),
('to play', 'to listen')),
(('hi',), ('she wants',),
('to play', 'to listen'), ('to play', 'to listen')),
(('hi',), ('she wants',),
('to play', 'to listen'), ('to play', 'to listen'),
('to play', 'to listen')),
(('hi',), ('she wants',),
('to play', 'to listen'), ('to play', 'to listen'),
('to play', 'to listen'), ('to play', 'to listen')),)
expected_tokens = (('WAKE', 'START', 'PLAY'),
('WAKE', 'START', 'PLAY', 'PLAY'),
('WAKE', 'START', 'PLAY', 'PLAY', 'PLAY'),
('WAKE', 'START', 'PLAY', 'PLAY', 'PLAY', 'PLAY'),)
expected_groups = ((('None', 1), ('PLAY_PHRASE', 2),),
(('None', 1), ('PLAY_PHRASE', 3),),
(('None', 1), ('PLAY_PHRASE', 4),),
(('None', 1), ('PLAY_PHRASE', 5),))
pairs = [(actual_utterance_combo, expected_utterance_combo),
(actual_tokens, expected_tokens),
(actual_groups, expected_groups)]
compare_all_pairs(self, pairs)
def test_single_optional_group(self) -> None:
pattern_def = _load_pattern_def(self._base_dir / 'single_optional_group.yml')
_, generator = expand(pattern_def)
actual_utterance_combo, actual_tokens, actual_groups = zip(*generator)
expected_utterance_combo = ((('hi',), ('she will want',), ('to play',)),
(('hi',), ('she will want',), ('to play',), ('nico',)))
expected_tokens = (('WAKE', 'START', 'PLAY'), ('WAKE', 'START', 'PLAY', 'ARTIST_1'))
expected_groups = ((('None', 1), ('PLAY_PHRASE', 2)), (('None', 1), ('PLAY_PHRASE', 3)))
pairs = [(actual_utterance_combo, expected_utterance_combo),
(actual_tokens, expected_tokens),
(actual_groups, expected_groups)]
compare_all_pairs(self, pairs)
def test_single_optional_utterance_pattern(self) -> None:
pattern_def = _load_pattern_def(self._base_dir / 'single_optional_utterance_pattern.yml')
_, generator = expand(pattern_def)
actual_utterance_combo, actual_tokens, actual_groups = zip(*generator)
expected_utterance_combo = ((('hi',), ('she will want',), ('to play',)),
(('hi',), ('she will want',), ('to play',), ('nico',)))
expected_tokens = (('WAKE', 'START', 'PLAY'), ('WAKE', 'START', 'PLAY', 'ARTIST_1'))
expected_groups = ((('None', 1), ('PLAY_PHRASE', 2)), (('None', 1), ('PLAY_PHRASE', 2), ('None', 1)))
pairs = [(actual_utterance_combo, expected_utterance_combo),
(actual_tokens, expected_tokens),
(actual_groups, expected_groups)]
compare_all_pairs(self, pairs)
def test_multiple_optional_group(self) -> None:
pattern_def = _load_pattern_def(self._base_dir / 'multiple_optional_group.yml')
_, generator = expand(pattern_def)
actual_utterance_combo, actual_tokens, actual_groups = zip(*generator)
expected_utterance_combo = ((('hi',), ('she will want',), ('to play',), ('nico',)),
(('hi',), ('she will want',), ('to play',), ('tom waits',)))
expected_tokens = (('WAKE', 'START', 'PLAY', 'ARTIST_1'), ('WAKE', 'START', 'PLAY', 'ARTIST_2'))
expected_groups = ((('None', 1), ('PLAY_PHRASE', 3)), (('None', 1), ('PLAY_PHRASE', 3)))
pairs = [(actual_utterance_combo, expected_utterance_combo),
(actual_tokens, expected_tokens),
(actual_groups, expected_groups)]
compare_all_pairs(self, pairs)
    def test_multiple_optional_utterance_pattern(self) -> None:
        # the fixture filename keeps the repository's original "utternace" misspelling
        pattern_def = _load_pattern_def(self._base_dir / 'multiple_optional_utternace_pattern.yml')
_, generator = expand(pattern_def)
actual_utterance_combo, actual_tokens, actual_groups = zip(*generator)
expected_utterance_combo = ((('hi',), ('she will want',), ('to play',), ('nico',)),
(('hi',), ('she will want',), ('to play',), ('tom waits',)))
expected_tokens = (('WAKE', 'START', 'PLAY', 'ARTIST_1'), ('WAKE', 'START', 'PLAY', 'ARTIST_2'))
expected_groups = ((('None', 1), ('PLAY_PHRASE', 2), ('None', 1)),
(('None', 1), ('PLAY_PHRASE', 2), ('None', 1)))
pairs = [(actual_utterance_combo, expected_utterance_combo),
(actual_tokens, expected_tokens),
(actual_groups, expected_groups)]
compare_all_pairs(self, pairs)
def test_multiple_with_none_optional_group(self) -> None:
pattern_def = _load_pattern_def(self._base_dir / 'multiple_with_none_optional_group.yml')
_, generator = expand(pattern_def)
actual_utterance_combo, actual_tokens, actual_groups = zip(*generator)
expected_utterance_combo = ((('hi',), ('she will want',), ('to play',)),
(('hi',), ('she will want',), ('to play',), ('nico',)),
(('hi',), ('she will want',), ('to play',), ('tom waits',)))
expected_tokens = (('WAKE', 'START', 'PLAY'), ('WAKE', 'START', 'PLAY', 'ARTIST_1'),
('WAKE', 'START', 'PLAY', 'ARTIST_2'))
expected_groups = ((('None', 1), ('PLAY_PHRASE', 2)), (('None', 1), ('PLAY_PHRASE', 3)),
(('None', 1), ('PLAY_PHRASE', 3)))
pairs = [(actual_utterance_combo, expected_utterance_combo),
(actual_tokens, expected_tokens),
(actual_groups, expected_groups)]
compare_all_pairs(self, pairs)
def test_multiple_with_none_optional_utterance_pattern(self) -> None:
pattern_def = _load_pattern_def(self._base_dir / 'multiple_with_none_optional_utterance_pattern.yml')
_, generator = expand(pattern_def)
actual_utterance_combo, actual_tokens, actual_groups = zip(*generator)
expected_utterance_combo = ((('hi',), ('she will want',), ('to play',)),
(('hi',), ('she will want',), ('to play',), ('nico',)),
(('hi',), ('she will want',), ('to play',), ('tom waits',)))
expected_tokens = (('WAKE', 'START', 'PLAY'), ('WAKE', 'START', 'PLAY', 'ARTIST_1'),
('WAKE', 'START', 'PLAY', 'ARTIST_2'))
expected_groups = ((('None', 1), ('PLAY_PHRASE', 2)), (('None', 1), ('PLAY_PHRASE', 2), ('None', 1)),
(('None', 1), ('PLAY_PHRASE', 2), ('None', 1)))
pairs = [(actual_utterance_combo, expected_utterance_combo),
(actual_tokens, expected_tokens),
(actual_groups, expected_groups)]
compare_all_pairs(self, pairs)
def test_intents_and_entities(self) -> None:
pattern_def = _load_pattern_def(self._base_dir / 'intents_and_entities.yml')
_, generator = expand(pattern_def)
actual_utterance_combo, actual_tokens, actual_groups = zip(*generator)
expected_utterance_combo = ((('he will want', 'she will want'),
('to play', 'to listen'),
('nico', 'kanye', 'tom waits')),
(('he will want', 'she will want'),
('to play', 'to listen'),
('sunday morning', 'all falls down', 'table top joe')))
expected_tokens = (('START', 'PLAY', 'ARTIST'), ('START', 'PLAY', 'SONG'))
expected_groups = ((('None', 1), ('None', 1), ('None', 1)), (('None', 1), ('None', 1), ('None', 1)))
pairs = [(actual_utterance_combo, expected_utterance_combo),
(actual_tokens, expected_tokens),
(actual_groups, expected_groups)]
compare_all_pairs(self, pairs)
if __name__ == '__main__':
unittest.main()
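The assertions above all lean on one transposition idiom: `expand` returns a generator of `(utterance_combo, tokens, groups)` triples, and `zip(*generator)` flips that stream into three parallel tuples that can then be compared pairwise. A minimal self-contained sketch of that idiom (`fake_expand` is a stand-in with assumed shapes, not putput's API):

```python
def fake_expand():
    # each yielded item mirrors the (utterance_combo, tokens, groups) triples above
    yield (('she wants',), ('to play',)), ('START', 'PLAY'), (('None', 1),)
    yield (('she wants',), ('to listen',)), ('START', 'PLAY'), (('None', 1),)

# zip(*...) transposes N triples into 3 tuples of length N
actual_utterance_combo, actual_tokens, actual_groups = zip(*fake_expand())
print(actual_tokens)  # (('START', 'PLAY'), ('START', 'PLAY'))
```

Because the generator is consumed exactly once, every test unpacks it in a single `zip(*generator)` call before asserting.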
# --- dimensigon/web/api_1_0/resources/step_children.py (dimensigon/dimensigon, Apache-2.0) ---
from flask import request
from flask_jwt_extended import jwt_required
from flask_restful import Resource
from dimensigon.domain.entities import Step
from dimensigon.web import db
from dimensigon.web.decorators import forward_or_dispatch, securizer, validate_schema, lock_catalog
from dimensigon.web.json_schemas import step_children
class StepRelationshipChildren(Resource):
@forward_or_dispatch()
@jwt_required()
@securizer
def get(self, step_id):
s: Step = Step.query.get_or_raise(step_id)
return dict(child_step_ids=[str(cs.id) for cs in s.children]), 200
@forward_or_dispatch()
@jwt_required()
@securizer
@validate_schema(step_children)
@lock_catalog
def patch(self, step_id):
s: Step = Step.query.get_or_raise(step_id)
child_step_ids = request.get_json()['child_step_ids']
child_steps = []
for child_step_id in child_step_ids:
child_steps.append(Step.query.get_or_raise(child_step_id))
s.orchestration.set_children(s, child_steps)
db.session.commit()
return dict(child_step_ids=[str(cs.id) for cs in s.children]), 200
@forward_or_dispatch()
@jwt_required()
@securizer
@validate_schema(step_children)
@lock_catalog
def post(self, step_id):
s = Step.query.get_or_raise(step_id)
child_step_ids = request.get_json()['child_step_ids']
child_steps = []
for child_step_id in child_step_ids:
child_steps.append(Step.query.get_or_raise(child_step_id))
s.orchestration.add_children(s, child_steps)
db.session.commit()
return dict(child_step_ids=[str(cs.id) for cs in s.children]), 200
@forward_or_dispatch()
@jwt_required()
@securizer
@validate_schema(step_children)
@lock_catalog
def delete(self, step_id):
s = Step.query.get_or_raise(step_id)
child_step_ids = request.get_json()['child_step_ids']
child_steps = []
for child_step_id in child_step_ids:
child_steps.append(Step.query.get_or_raise(child_step_id))
s.orchestration.delete_children(s, child_steps)
db.session.commit()
return dict(child_step_ids=[str(cs.id) for cs in s.children]), 200
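The resource above delegates the actual relationship changes to `s.orchestration`. A hedged, in-memory sketch of the set/add/delete semantics those three verbs imply (behaviour assumed from the handler names, not taken from dimensigon internals):

```python
class Orchestration:
    """Toy model of step -> children bookkeeping."""

    def __init__(self):
        self.children = {}

    def set_children(self, step, steps):
        self.children[step] = list(steps)                 # PATCH: replace the set

    def add_children(self, step, steps):
        self.children.setdefault(step, []).extend(steps)  # POST: append to the set

    def delete_children(self, step, steps):
        # DELETE: drop only the listed children, keep the rest
        self.children[step] = [c for c in self.children.get(step, []) if c not in steps]

o = Orchestration()
o.set_children('s1', ['a', 'b'])
o.add_children('s1', ['c'])
o.delete_children('s1', ['a'])
print(o.children['s1'])  # ['b', 'c']
```

All three endpoints then re-serialize `s.children`, which is why each returns the same `child_step_ids` payload.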
# --- mercurial_reviewboard/tests/test_ui.py (cicd-team/mercurial-reviewboard, MIT) ---
from mock import Mock, patch_object
from nose.tools import eq_, raises
import mercurial_reviewboard
from mercurial_reviewboard import postreview, util
from mercurial_reviewboard.tests import get_initial_opts, get_repo, mock_ui
class TestChangesetsOutput:
expected_status = \
'changesets:\n\t1:669e757d4a24 "1"\n\t0:a8ea53640b24 "0"\n\n'
@patch_object(mercurial_reviewboard, 'new_review')
def test_changeset_shown(self, mock_create_method):
ui = mock_ui()
repo = get_repo(ui, 'two_revs')
opts = get_initial_opts()
opts['parent'] = '000000'
opts['outgoingchanges'] = False
opts['outgoing'] = False
postreview(ui, repo, **opts)
eq_(self.expected_status, ui.status.call_args_list[1][0][0])
@patch_object(mercurial_reviewboard, 'update_review')
def test_changeset_shown_on_existing(self, mock_create_method):
ui = mock_ui()
repo = get_repo(ui, 'two_revs')
opts = get_initial_opts()
opts['parent'] = '000000'
opts['update'] = False
opts['existing'] = '1'
opts['outgoingchanges'] = False
opts['outgoing'] = False
postreview(ui, repo, **opts)
eq_(self.expected_status, ui.status.call_args_list[1][0][0])
class TestMerge:
@patch_object(mercurial_reviewboard, 'new_review')
def test_changeset_shown(self, mock_create_method):
"""status should show all revisions on all included branches"""
expected_status = \
'changesets:\n\t5:1de20dbad49b "5"'\
'\n\t4:d955e65420c8 "4"\n\t3:13a89135f389 "3"'\
'\n\t2:e97ab41d91c8 "2"'\
'\n\t1:7051d9f99104 "1"\n\t0:1d4da73b2570 "0"\n\n'
ui = mock_ui()
repo = get_repo(ui, 'merge')
opts = get_initial_opts()
opts['parent'] = '000000'
opts['outgoingchanges'] = False
opts['outgoing'] = False
postreview(ui, repo, **opts)
eq_(expected_status, ui.status.call_args_list[1][0][0])
@patch_object(mercurial_reviewboard, 'new_review')
def test_changeset_on_branch(self, mock_create_method):
"""in branch mode only show revisions on branch"""
expected_status = \
'review of branch: default\n\n'\
'changesets:\n\t5:1de20dbad49b "5"'\
'\n\t2:e97ab41d91c8 "2"'\
'\n\t1:7051d9f99104 "1"\n\t0:1d4da73b2570 "0"\n\n'
ui = mock_ui()
repo = get_repo(ui, 'merge')
opts = get_initial_opts()
opts['branch'] = True
opts['outgoingchanges'] = False
opts['outgoing'] = False
postreview(ui, repo, **opts)
eq_(expected_status, ui.status.call_args_list[1][0][0])
class TestLaunchBrowser:
@patch_object(mercurial_reviewboard, 'new_review')
@patch_object(mercurial_reviewboard, 'launch_webbrowser')
def test_browser_launch_default(self, mock_launch, mock_create_method):
ui = mock_ui()
repo = get_repo(ui, 'two_revs')
opts = get_initial_opts()
opts['outgoingchanges'] = False
opts['outgoing'] = False
postreview(ui, repo, **opts)
assert mock_launch.called == False
@patch_object(mercurial_reviewboard, 'new_review')
@patch_object(mercurial_reviewboard, 'launch_webbrowser')
def test_browser_launch_false(self, mock_launch, mock_create_method):
ui = mock_ui()
ui.setconfig('reviewboard', 'launch_webbrowser', 'false')
repo = get_repo(ui, 'two_revs')
opts = get_initial_opts()
opts['outgoingchanges'] = False
opts['outgoing'] = False
postreview(ui, repo, **opts)
assert mock_launch.called == False
@patch_object(mercurial_reviewboard, 'new_review')
@patch_object(mercurial_reviewboard, 'launch_webbrowser')
def test_browser_launch_true(self, mock_launch, mock_create_method):
mock_create_method.return_value = '1'
ui = mock_ui()
ui.setconfig('reviewboard', 'launch_webbrowser', 'true')
repo = get_repo(ui, 'two_revs')
opts = get_initial_opts()
opts['outgoingchanges'] = False
opts['outgoing'] = False
postreview(ui, repo, **opts)
eq_('http://example.com/r/1/', mock_launch.call_args[0][1])
@patch_object(mercurial_reviewboard, 'new_review')
@patch_object(mercurial_reviewboard, 'launch_webbrowser')
def test_browser_launch_server_arg(self, mock_launch, mock_create_method):
mock_create_method.return_value = '1'
ui = mock_ui()
ui.setconfig('reviewboard', 'launch_webbrowser', 'true')
repo = get_repo(ui, 'two_revs')
opts = get_initial_opts()
opts['server'] = 'example.org'
opts['outgoingchanges'] = False
opts['outgoing'] = False
postreview(ui, repo, **opts)
eq_('http://example.org/r/1/', mock_launch.call_args[0][1])
class TestServerConfiguration:
@raises(util.Abort)
@patch_object(mercurial_reviewboard, 'new_review')
def test_no_reviewboard_configured(self, mock_create_review):
ui = mock_ui()
ui.setconfig('reviewboard', 'server', None)
repo = get_repo(ui, 'two_revs')
opts = get_initial_opts()
postreview(ui, repo, **opts)
@patch_object(mercurial_reviewboard, 'new_review')
def test_reviewboard_option(self, mock_create_review):
ui = mock_ui()
ui.setconfig('reviewboard', 'server', None)
repo = get_repo(ui, 'two_revs')
opts = get_initial_opts()
opts['server'] = 'example.com'
opts['outgoingchanges'] = False
opts['outgoing'] = False
postreview(ui, repo, **opts)
assert mock_create_review.called
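These tests use the old standalone `mock` package's `patch_object` decorator. The same attribute-patching pattern with today's stdlib looks like this (sketch using a stand-in class, not the real mercurial_reviewboard module):

```python
from unittest import mock

class FakeModule:
    # stand-in for the patched module; the attribute is swapped only inside the context
    @staticmethod
    def new_review():
        return 'real'

with mock.patch.object(FakeModule, 'new_review', return_value='stubbed'):
    result_inside = FakeModule.new_review()
result_outside = FakeModule.new_review()  # original restored on exit
print(result_inside, result_outside)  # stubbed real
```

`mock.patch.object` restores the original attribute automatically, which is what lets each test above patch `new_review`/`update_review` without leaking into its neighbours.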
# --- tests/services/test_sample_endpoints.py (sartography/uva-covid19-testing-communicator, MIT) ---
from datetime import datetime
from time import sleep
from tests.base_test import BaseTest
import json
from communicator.models import Sample
from communicator import db, app
from communicator.api import admin
from communicator.models.notification import Notification
class TestSampleEndpoint(BaseTest):
sample_json = {"barcode": "000000111-202009091449-4321",
"location": "0102",
"date": "2020-09-09T14:49:00+0000",
"student_id": "000000111",
"computing_id": "abc12d"}
def test_create_sample(self):
# Test add sample
samples = db.session.query(Sample).all()
self.assertEqual(0, len(samples))
rv = self.app.post('/v1.0/sample',
content_type="application/json",
data=json.dumps(self.sample_json))
samples = db.session.query(Sample).all()
self.assertEqual(1, len(samples))
def test_create_sample_gets_correct_location_and_station(self):
# Test add sample
samples = db.session.query(Sample).all()
self.assertEqual(0, len(samples))
rv = self.app.post('/v1.0/sample',
content_type="application/json",
data=json.dumps(self.sample_json))
samples = db.session.query(Sample).all()
self.assertEqual(1, len(samples))
self.assertEqual(1, samples[0].location)
self.assertEqual(2, samples[0].station)
def test_create_sample_has_last_updated(self):
rv = self.app.post('/v1.0/sample',
content_type="application/json",
data=json.dumps(self.sample_json))
samples = db.session.query(Sample).all()
self.assertEqual(1, len(samples))
self.assertIsNotNone(samples[0].last_modified)
def test_create_duplicate_sample_does_not_raise_error(self):
# Test add sample
samples = db.session.query(Sample).all()
self.assertEqual(0, len(samples))
rv = self.app.post('/v1.0/sample', content_type="application/json", data=json.dumps(self.sample_json))
rv = self.app.post('/v1.0/sample', content_type="application/json", data=json.dumps(self.sample_json))
rv = self.app.post('/v1.0/sample', content_type="application/json", data=json.dumps(self.sample_json))
rv = self.app.post('/v1.0/sample', content_type="application/json", data=json.dumps(self.sample_json))
samples = db.session.query(Sample).all()
self.assertEqual(1, len(samples))
def test_notify_by_email_by_file_name(self):
db.session.add(Sample(barcode="000000111-202009091449-4321",
location="4321",
date="2020-09-09T14:49:00+0000",
student_id="000000111",
email="daniel.h.funk@gmail.com",
result_code="12345",
ivy_file="xxx"))
db.session.add(Sample(barcode="000000112-202009091449-4321",
location="4321",
date="2020-09-09T14:49:00+0000",
student_id="000000112",
email="dan@gmail.com",
result_code="12345",
ivy_file="yyy"))
db.session.commit()
admin._notify_by_email('xxx')
samples = db.session.query(Sample).filter(Sample.email_notified == True).all()
self.assertEqual(1, len(samples))
samples = db.session.query(Sample).filter(Sample.email_notified != True).all()
self.assertEqual(1, len(samples))
admin._notify_by_email()
samples = db.session.query(Sample).filter(Sample.email_notified == True).all()
self.assertEqual(2, len(samples))
def test_get_all_samples(self):
s1 = Sample(barcode="000000111-202009091449-4321",
location="4321",
date= datetime.now(),
student_id="000000111",
email="daniel.h.funk@gmail.com",
result_code="12345",
ivy_file="xxx",
email_notified=True,
text_notified=True)
s2 = Sample(barcode="000000112-202009091449-4321",
location="4321",
date= datetime.now(),
student_id="000000112",
email="dan@gmail.com",
result_code="12345",
ivy_file="yyy",
email_notified=False,
text_notified=False)
db.session.add(s1)
db.session.add(Notification(sample=s1, date="2020-12-09T14:49:00+0000", type="email", successful=True))
db.session.add(Notification(sample=s1, date="2020-12-09T14:49:00+0000", type="text", successful=True))
db.session.add(s2)
rv = self.app.get('/v1.0/sample', content_type="application/json",
headers={'X-CR-API-KEY': app.config.get('API_TOKEN')})
data = json.loads(rv.get_data(as_text=True))
self.assertEqual(2, len(data))
self.assertEqual("000000111-202009091449-4321", data[0]["barcode"])
self.assertEqual(2, len(data[0]["notifications"]))
self.assertEqual(True, data[0]['email_notified'])
self.assertEqual(True, data[0]["notifications"][0]["successful"])
self.assertEqual(0, len(data[1]["notifications"]))
self.assertEqual(False, data[1]['email_notified'])
print(data)
def test_get_all_samples_by_last_modified(self):
d1_str = '202012300101' # dec 30th 2020
d1 = datetime.strptime(d1_str, '%Y%m%d%H%M')
s1_bar_code = '000000111-'+ d1_str +'-4321'
d2_str = '202101010101' # Jan 1st 2021
d2 = datetime.strptime(d2_str, '%Y%m%d%H%M')
s2_bar_code = '000000111-'+ d2_str +'-4321'
s1 = Sample(barcode=s1_bar_code,
location="4321",
date= datetime.now(),
last_modified=d1,
student_id="000000111",
email="daniel.h.funk@gmail.com",
result_code="12345",
ivy_file="xxx",
email_notified=True,
text_notified=True)
s2 = Sample(barcode=s2_bar_code,
location="4321",
date= datetime.now(),
last_modified=d2,
student_id="000000112",
email="dan@gmail.com",
result_code="12345",
ivy_file="yyy",
email_notified=False,
text_notified=False)
db.session.add(s1)
db.session.add(s2)
rv = self.app.get(f'/v1.0/sample', content_type="application/json",
headers={'X-CR-API-KEY': app.config.get('API_TOKEN')})
data = json.loads(rv.get_data(as_text=True))
self.assertEqual(2, len(data))
last_modified_arg = d1.isoformat()
rv = self.app.get(f'/v1.0/sample?last_modified={last_modified_arg}', content_type="application/json",
headers={'X-CR-API-KEY': app.config.get('API_TOKEN')})
data = json.loads(rv.get_data(as_text=True))
self.assertEqual(1, len(data))
self.assertEqual(s2.barcode, data[0]['barcode'])
last_modified_arg = d2.isoformat()
rv = self.app.get(f'/v1.0/sample?last_modified={last_modified_arg}', content_type="application/json",
headers={'X-CR-API-KEY': app.config.get('API_TOKEN')})
data = json.loads(rv.get_data(as_text=True))
self.assertEqual(0, len(data))
def test_get_all_samples_by_created_on(self):
d1_str = '202012300101' # dec 30th 2020
d1 = datetime.strptime(d1_str, '%Y%m%d%H%M')
s1_bar_code = '000000111-' + d1_str + '-4321'
d2_str = '202101010101' # Jan 1st 2021
d2 = datetime.strptime(d2_str, '%Y%m%d%H%M')
s2_bar_code = '000000111-' + d2_str + '-4321'
        d3_str = '202101050101'  # Jan 5th 2021
        d3 = datetime.strptime(d3_str, '%Y%m%d%H%M')
s1 = Sample(barcode=s1_bar_code,
location="4321",
date=datetime.now(),
created_on=d1,
last_modified=d3, # Note Modified date is later than created date.
student_id="000000111",
email="daniel.h.funk@gmail.com",
result_code="12345",
ivy_file="xxx",
email_notified=True,
text_notified=True)
s2 = Sample(barcode=s2_bar_code,
location="4321",
date=datetime.now(),
created_on=d2,
last_modified=d2,
student_id="000000112",
email="dan@gmail.com",
result_code="12345",
ivy_file="yyy",
email_notified=False,
text_notified=False)
db.session.add(s1)
db.session.add(s2)
created_on_arg = d1.isoformat()
rv = self.app.get(f'/v1.0/sample?created_on={created_on_arg}', content_type="application/json",
headers={'X-CR-API-KEY': app.config.get('API_TOKEN')})
data = json.loads(rv.get_data(as_text=True))
        self.assertEqual(1, len(data))  # Even though s1 was modified later, it isn't returned.
self.assertEqual(s2.barcode, data[0]['barcode'])
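The fixtures above build barcodes as `<student_id>-<YYYYMMDDHHMM>-<location code>`. A minimal parsing sketch of that layout (field names assumed from the test data, not taken from the communicator codebase):

```python
from datetime import datetime

def parse_barcode(barcode):
    # split "<student_id>-<YYYYMMDDHHMM>-<code>" into its three fields
    student_id, stamp, code = barcode.split('-')
    return student_id, datetime.strptime(stamp, '%Y%m%d%H%M'), code

sid, collected_at, code = parse_barcode('000000111-202009091449-4321')
print(sid, collected_at.isoformat(), code)  # 000000111 2020-09-09T14:49:00 4321
```

This is also why the tests can derive `s1_bar_code`/`s2_bar_code` by string concatenation from the same `%Y%m%d%H%M` strings they feed to `strptime`.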
# --- downtime/main/models.py (GSCrawley/downtime, MIT) ---
from django.db import models
from django.urls import reverse
from django.utils.text import slugify
class Movie(models.Model):
title = models.CharField(max_length=200)
slug = models.SlugField(editable=False, max_length=200)
def get_absolute_url(self):
kwargs = {
"slug": self.slug
}
return reverse("main:movie-detail", kwargs=kwargs)
def save(self, *args, **kwargs):
value = self.title
self.slug = slugify(value, allow_unicode=True)
super().save(*args, **kwargs)
class Music(models.Model):
title = models.CharField(max_length=200)
slug = models.SlugField(editable=False, max_length=200)
def get_absolute_url(self):
kwargs = {
"slug": self.slug
}
return reverse("main:music-detail", kwargs=kwargs)
def save(self, *args, **kwargs):
value = self.title
self.slug = slugify(value, allow_unicode=True)
super().save(*args, **kwargs)
class Book(models.Model):
title = models.CharField(max_length=200)
slug = models.SlugField(editable=False, max_length=200)
def get_absolute_url(self):
kwargs = {
"slug": self.slug
}
return reverse("main:book-detail", kwargs=kwargs)
def save(self, *args, **kwargs):
value = self.title
self.slug = slugify(value, allow_unicode=True)
super().save(*args, **kwargs)
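Each `save()` above derives `slug` from `title` via `django.utils.text.slugify`. A rough stdlib-only approximation of what that produces for ASCII titles (illustrative, not Django's exact implementation):

```python
import re
import unicodedata

def simple_slugify(value: str) -> str:
    # normalize to ASCII, drop non-word characters, collapse runs of
    # whitespace/hyphens into single hyphens -- roughly what slugify does
    value = unicodedata.normalize('NFKD', value).encode('ascii', 'ignore').decode('ascii')
    value = re.sub(r'[^\w\s-]', '', value).strip().lower()
    return re.sub(r'[-\s]+', '-', value)

print(simple_slugify('The Road to Wigan Pier!'))  # the-road-to-wigan-pier
```

Because `slug` is regenerated on every `save()` and marked `editable=False`, the URL in `get_absolute_url` always tracks the current `title`.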
# --- examples/math.log/ex2.py (mcorne/python-by-example, MIT) ---
import math
print(math.log(10, 10))
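The two-argument form computes the logarithm of `x` in the given base as `log(x) / log(base)`, so equal arguments always print `1.0`. A couple of extra calls to make that concrete (sketch extending the example above):

```python
import math

# math.log(x, base) is evaluated as log(x) / log(base)
print(math.log(10, 10))   # 1.0
print(math.log(100, 10))  # 2.0, up to float rounding
print(math.log2(8))       # 3.0 -- prefer math.log2/math.log10 for those bases
```

The dedicated `math.log2` and `math.log10` functions avoid the rounding error the division form can introduce.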
# --- backend/shopping-cart-service/tests/utils/mock_shared.py (qingshui-hui/aws-serverless-shopping-cart, MIT-0) ---
def get_user_sub(jwt_token):
return jwt_token
| 12.75 | 28 | 0.764706 | 9 | 51 | 3.888889 | 0.777778 | 0.457143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.176471 | 51 | 3 | 29 | 17 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
df1232cc57cd4e7f243b95d0470fceb5adfeab65 | 46 | py | Python | paquete/main.py | MT2321/cid-ProjectZero | 65c2d1ed32557e6a1db6a3fcc3bc5a8707ba71fe | [
"MIT"
] | 1 | 2020-01-26T17:58:23.000Z | 2020-01-26T17:58:23.000Z | paquete/main.py | CID-ITBA/similarity-lab | cc1d30ef3a2659cdf22c20f91a299ca1e884457b | [
"MIT"
] | null | null | null | paquete/main.py | CID-ITBA/similarity-lab | cc1d30ef3a2659cdf22c20f91a299ca1e884457b | [
"MIT"
] | null | null | null | from classPackg import SimiLab
print("Hello") | 15.333333 | 30 | 0.804348 | 6 | 46 | 6.166667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108696 | 46 | 3 | 31 | 15.333333 | 0.902439 | 0 | 0 | 0 | 0 | 0 | 0.106383 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
df7643f5aff51199a41a7ee00171184eccd77666 | 32 | py | Python | app/__init__.py | DineshDevaraj/evalsr | 4155881441fcc3aad6dd0dd18da791506e435102 | [
"MIT"
] | null | null | null | app/__init__.py | DineshDevaraj/evalsr | 4155881441fcc3aad6dd0dd18da791506e435102 | [
"MIT"
] | null | null | null | app/__init__.py | DineshDevaraj/evalsr | 4155881441fcc3aad6dd0dd18da791506e435102 | [
"MIT"
] | 1 | 2022-02-18T00:41:29.000Z | 2022-02-18T00:41:29.000Z |
from app import routes_handler
| 10.666667 | 30 | 0.84375 | 5 | 32 | 5.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15625 | 32 | 2 | 31 | 16 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
10dedec216b0fc5598759113129f05a6347349ac | 92 | py | Python | Practice/Python/Math/Polar_Coordinates.py | alexanderbauer89/HackerRank | 0fb6face083b0183692c9251ffe4bb635591393f | [
"MIT"
] | 1 | 2021-11-17T02:47:11.000Z | 2021-11-17T02:47:11.000Z | Practice/Python/Math/Polar_Coordinates.py | alexanderbauer89/HackerRank | 0fb6face083b0183692c9251ffe4bb635591393f | [
"MIT"
] | null | null | null | Practice/Python/Math/Polar_Coordinates.py | alexanderbauer89/HackerRank | 0fb6face083b0183692c9251ffe4bb635591393f | [
"MIT"
] | null | null | null | import cmath
r = complex(input().strip())
print(cmath.polar(r)[0])
print(cmath.polar(r)[1])
| 18.4 | 28 | 0.684783 | 16 | 92 | 3.9375 | 0.625 | 0.31746 | 0.47619 | 0.507937 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.023529 | 0.076087 | 92 | 4 | 29 | 23 | 0.717647 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0.5 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
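Polar_Coordinates.py above calls `cmath.polar(r)` twice and indexes the result; `cmath.polar` returns an `(r, phi)` tuple, and `cmath.rect` inverts it. A small round-trip sketch:

```python
import cmath

z = complex(1, 1)
r, phi = cmath.polar(z)    # magnitude and phase (radians), unpacked once
print(r)                   # sqrt(2) ~= 1.41421
print(phi)                 # pi/4  ~= 0.78540
w = cmath.rect(r, phi)     # back to cartesian form
print(abs(w - z) < 1e-12)  # True: round-trip agrees to float precision
```

Unpacking once avoids the snippet's repeated `cmath.polar(r)` calls.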
10ed72c404b901bb49cc03dbc6664335e9f0ce41 | 29 | py | Python | crfasrnn/__init__.py | prashantraina/PlaneNet | 51fa261f7e958191fce6077cf923539acadb707b | [
"MIT"
] | null | null | null | crfasrnn/__init__.py | prashantraina/PlaneNet | 51fa261f7e958191fce6077cf923539acadb707b | [
"MIT"
] | null | null | null | crfasrnn/__init__.py | prashantraina/PlaneNet | 51fa261f7e958191fce6077cf923539acadb707b | [
"MIT"
] | null | null | null | from . import crfasrnn_layer
| 14.5 | 28 | 0.827586 | 4 | 29 | 5.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137931 | 29 | 1 | 29 | 29 | 0.92 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
10eedde2a016e974a9df496050e6b00b34bd879b | 2,661 | py | Python | tests/test_batch.py | umd-lib/aws-archiver | 9de3bb8a48c2e18960cd70c7fb892093c48174d6 | [
"MIT"
] | null | null | null | tests/test_batch.py | umd-lib/aws-archiver | 9de3bb8a48c2e18960cd70c7fb892093c48174d6 | [
"MIT"
] | 5 | 2020-02-26T22:15:09.000Z | 2021-07-10T14:35:35.000Z | tests/test_batch.py | umd-lib/aws-archiver | 9de3bb8a48c2e18960cd70c7fb892093c48174d6 | [
"MIT"
] | 1 | 2020-02-25T16:10:41.000Z | 2020-02-25T16:10:41.000Z | import os
import unittest
from archiver.batch import Batch
from archiver.manifests.manifest_factory import ManifestFactory
class TestBatch(unittest.TestCase):
def setUp(self):
pass
def test_load_inventory_manifest(self):
manifest = ManifestFactory.create('tests/data/manifests/sample_inventory_manifest.csv')
batch = Batch(manifest, bucket='test_bucket', asset_root='/', log_dir='/tmp')
manifest.load_manifest('sample_inventory_manifest.csv', batch)
self.assertEqual(11, batch.stats['total_assets'])
self.assertEqual(11, batch.stats['assets_missing'])
def test_load_md5sum_manifest(self):
manifest = ManifestFactory.create('tests/data/manifests/sample_md5sum_manifest.txt')
batch = Batch(manifest, bucket='test_bucket', asset_root='/', log_dir='/tmp')
manifest.load_manifest('sample_md5sum_manifest.txt', batch)
self.assertEqual(5, batch.stats['total_assets'])
self.assertEqual(5, batch.stats['assets_missing'])
def test_load_patsy_manifest(self):
manifest = ManifestFactory.create('tests/data/manifests/sample_patsy_manifest.csv')
batch = Batch(manifest, bucket='test_bucket', asset_root='/', log_dir='/tmp')
manifest.load_manifest('sample_patsy_manifest.csv', batch)
self.assertEqual(5, batch.stats['total_assets'])
self.assertEqual(5, batch.stats['assets_missing'])
def test_add_asset_without_specified_relpath(self):
sample_file_1_path = os.path.abspath('tests/data/files/sample_file_1.txt')
asset_root = os.path.abspath('.')
manifest = ManifestFactory.create(None)
batch = Batch(manifest, bucket='test_bucket', asset_root=asset_root, log_dir='/tmp')
batch.add_asset(sample_file_1_path)
self.assertEqual(1, batch.stats['total_assets'])
self.assertEqual(1, batch.stats['assets_found'])
asset = batch.contents[0]
self.assertEqual('tests/data/files/sample_file_1.txt', asset.relpath)
def test_add_asset_with_specified_relpath(self):
sample_file_1_path = os.path.abspath('tests/data/files/sample_file_1.txt')
asset_root = os.path.abspath('.')
manifest = ManifestFactory.create(None)
batch = Batch(manifest, bucket='test_bucket', asset_root=asset_root, log_dir='/tmp')
batch.add_asset(sample_file_1_path, relpath='test/specific/relpath/sample_file_1.txt')
self.assertEqual(1, batch.stats['total_assets'])
self.assertEqual(1, batch.stats['assets_found'])
asset = batch.contents[0]
self.assertEqual('test/specific/relpath/sample_file_1.txt', asset.relpath)
| 46.684211 | 95 | 0.712514 | 339 | 2,661 | 5.333333 | 0.165192 | 0.099558 | 0.054757 | 0.066372 | 0.875553 | 0.815819 | 0.785398 | 0.727876 | 0.709624 | 0.60177 | 0 | 0.011654 | 0.161593 | 2,661 | 56 | 96 | 47.517857 | 0.798745 | 0 | 0 | 0.466667 | 0 | 0 | 0.228861 | 0.151447 | 0 | 0 | 0 | 0 | 0.266667 | 1 | 0.133333 | false | 0.022222 | 0.088889 | 0 | 0.244444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
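The `ManifestFactory.create(...)` calls in test_batch.py dispatch on the manifest file's format; the actual aws-archiver implementation is not shown here, so the following is only a hypothetical sketch of an extension-based factory like the one these tests exercise (all class names and the `None` default are illustrative assumptions):

```python
import os

class CsvManifest:
    """Hypothetical parser for .csv manifests."""

class Md5sumManifest:
    """Hypothetical parser for md5sum-style .txt manifests."""

class ManifestFactory:
    # Map file extension to parser class; purely illustrative registry.
    _registry = {'.csv': CsvManifest, '.txt': Md5sumManifest}

    @classmethod
    def create(cls, path):
        if path is None:
            # The tests above pass None when assets are added directly
            return CsvManifest()
        ext = os.path.splitext(path)[1]
        return cls._registry[ext]()

print(type(ManifestFactory.create('sample.csv')).__name__)     # CsvManifest
print(type(ManifestFactory.create('checksums.txt')).__name__)  # Md5sumManifest
```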
10f382ad6b39b23fcdf1affd0e41f9a7c5cc4a37 | 93 | py | Python | aerosandbox/aerodynamics/aero_2D/singularities/__init__.py | scivm/AeroSandbox | 616c579e49bc13c3023364773705eaac7df10da7 | [
"MIT"
] | 1 | 2021-04-07T08:59:31.000Z | 2021-04-07T08:59:31.000Z | aerosandbox/aerodynamics/aero_2D/singularities/__init__.py | scivm/AeroSandbox | 616c579e49bc13c3023364773705eaac7df10da7 | [
"MIT"
] | null | null | null | aerosandbox/aerodynamics/aero_2D/singularities/__init__.py | scivm/AeroSandbox | 616c579e49bc13c3023364773705eaac7df10da7 | [
"MIT"
] | 1 | 2021-09-11T03:28:45.000Z | 2021-09-11T03:28:45.000Z | from .linear_strength_line_singularities import calculate_induced_velocity_line_singularities | 93 | 93 | 0.956989 | 11 | 93 | 7.454545 | 0.818182 | 0.414634 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.032258 | 93 | 1 | 93 | 93 | 0.911111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
10f5bd20deb0f1e493b5a3cbeb77201f20bb2955 | 3,705 | py | Python | tests/upbit_ws_test.py | gyunt/aio-upbit | 16f06d88622aee00c2f8edccb41b982e9bcbb6d0 | [
"MIT"
] | 4 | 2018-05-26T12:12:48.000Z | 2020-02-13T16:12:43.000Z | tests/upbit_ws_test.py | gyunt/aio-upbit | 16f06d88622aee00c2f8edccb41b982e9bcbb6d0 | [
"MIT"
] | null | null | null | tests/upbit_ws_test.py | gyunt/aio-upbit | 16f06d88622aee00c2f8edccb41b982e9bcbb6d0 | [
"MIT"
] | 1 | 2019-06-07T19:24:53.000Z | 2019-06-07T19:24:53.000Z | import unittest
import asyncio
from aioupbit import UpbitWs
MARKET = 'KRW-BTC'
MARKETS = [MARKET]
def async_loop(*tasks):
loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.gather(*tasks))
class TestUpbitWsClient(unittest.TestCase):
"""
Integration tests for the Upbit WebSocket
"""
def setUp(self):
pass
def test_get_ticker(self):
async def test():
async with UpbitWs() as u:
actual = await u.get_ticker(MARKETS)
self.assertGreater(len(actual), 0, 'get_ticker is 0-length.')
self.assertEqual(actual['code'], MARKET, 'get_ticker has wrong data.')
async_loop(test())
def test_get_trading_history(self):
async def test():
async with UpbitWs() as u:
actual = await u.get_trading_history(MARKETS)
self.assertGreater(len(actual), 0, 'get_trading_history is 0-length.')
self.assertEqual(actual['code'], MARKET, 'get_trading_history has wrong data.')
async_loop(test())
def test_get_orderbook(self):
async def test():
async with UpbitWs() as u:
actual = await u.get_orderbook(MARKETS)
self.assertGreater(len(actual), 0, 'get_orderbook is 0-length.')
self.assertEqual(actual['code'], MARKET, 'get_orderbook has wrong data.')
async_loop(test())
def test_get_ticker_streaming(self):
async def test1(q):
async with UpbitWs(queue=q) as u:
await u.get_ticker_streaming(MARKETS)
async def test2(q):
count = 3
while count:
actual = await q.get()
self.assertGreater(len(actual), 0, 'get_ticker_streaming is 0-length.')
self.assertEqual(actual['code'], MARKET, 'get_ticker_streaming has wrong data.')
count -= 1
q = asyncio.Queue()
tasks = [
test1(q),
test2(q)
]
loop = asyncio.get_event_loop()
loop.run_until_complete(
asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED))
def test_get_trading_history_streaming(self):
async def test1(q):
async with UpbitWs(queue=q) as u:
await u.get_trading_history_streaming(MARKETS)
async def test2(q):
count = 3
while count:
actual = await q.get()
self.assertGreater(len(actual), 0, 'get_trading_history_streaming is 0-length.')
self.assertEqual(actual['code'], MARKET, 'get_trading_history_streaming has wrong data.')
count -= 1
q = asyncio.Queue()
tasks = [
test1(q),
test2(q)
]
loop = asyncio.get_event_loop()
loop.run_until_complete(
asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED))
def test_get_orderbook_streaming(self):
async def test1(q):
async with UpbitWs(queue=q) as u:
await u.get_orderbook_streaming(MARKETS)
async def test2(q):
count = 3
while count:
actual = await q.get()
self.assertGreater(len(actual), 0, 'get_orderbook_streaming is 0-length.')
self.assertEqual(actual['code'], MARKET, 'get_orderbook_streaming has wrong data.')
count -= 1
q = asyncio.Queue()
tasks = [
test1(q),
test2(q)
]
loop = asyncio.get_event_loop()
loop.run_until_complete(
asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED))
| 34.95283 | 105 | 0.575169 | 429 | 3,705 | 4.79021 | 0.153846 | 0.030657 | 0.06618 | 0.075912 | 0.871046 | 0.843796 | 0.843796 | 0.789781 | 0.761557 | 0.710462 | 0 | 0.01199 | 0.324696 | 3,705 | 105 | 106 | 35.285714 | 0.809353 | 0.011066 | 0 | 0.58427 | 0 | 0 | 0.118728 | 0.028517 | 0 | 0 | 0 | 0 | 0.134831 | 1 | 0.089888 | false | 0.011236 | 0.033708 | 0 | 0.134831 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
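The streaming tests in upbit_ws_test.py run a producer coroutine against a consumer on a shared `asyncio.Queue` and stop at `FIRST_COMPLETED`; the same pattern with only the stdlib (no Upbit dependency — the endless counter stands in for the websocket feed):

```python
import asyncio

async def producer(q):
    # Endless feed standing in for the websocket stream
    n = 0
    while True:
        await q.put(n)
        n += 1
        await asyncio.sleep(0)  # yield so the consumer can run

async def consumer(q, wanted=3):
    got = []
    while len(got) < wanted:
        got.append(await q.get())
    return got

async def main():
    q = asyncio.Queue()
    prod = asyncio.ensure_future(producer(q))
    cons = asyncio.ensure_future(consumer(q))
    # Return as soon as the consumer is done, like the tests above
    await asyncio.wait({prod, cons}, return_when=asyncio.FIRST_COMPLETED)
    prod.cancel()
    try:
        await prod  # let the cancellation propagate cleanly
    except asyncio.CancelledError:
        pass
    return cons.result()

print(asyncio.get_event_loop().run_until_complete(main()))  # [0, 1, 2]
```

Cancelling the producer after the wait avoids the "task was destroyed but it is pending" warning the tests would otherwise risk at loop shutdown.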
8020eea83d6ef91ec5ade027ae2b4e1e3f4123e9 | 65 | py | Python | tests/functions/ignores/main.py | timmartin-airsupply/lambda-tools | 2c0b52ce889631c63c6dc343439fd5be822eefec | [
"MIT"
] | 3 | 2017-12-08T12:37:46.000Z | 2018-04-13T01:08:35.000Z | tests/functions/ignores/main.py | timmartin-airsupply/lambda-tools | 2c0b52ce889631c63c6dc343439fd5be822eefec | [
"MIT"
] | 29 | 2017-10-11T06:15:07.000Z | 2018-02-19T23:04:51.000Z | tests/functions/ignores/main.py | timmartin-airsupply/lambda-tools | 2c0b52ce889631c63c6dc343439fd5be822eefec | [
"MIT"
] | 3 | 2017-10-10T12:40:41.000Z | 2021-11-24T10:24:46.000Z | import another
def run(event, context):
return 'Hello world' | 16.25 | 24 | 0.723077 | 9 | 65 | 5.222222 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.184615 | 65 | 4 | 25 | 16.25 | 0.886792 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
803cc3a1402383b4fa7c166f448962f547eeaa71 | 38 | py | Python | force_feeder/__init__.py | willdickson/force_feeder | cb90fbdef69244c720823e11a9880dddbaf5e7af | [
"MIT"
] | null | null | null | force_feeder/__init__.py | willdickson/force_feeder | cb90fbdef69244c720823e11a9880dddbaf5e7af | [
"MIT"
] | null | null | null | force_feeder/__init__.py | willdickson/force_feeder | cb90fbdef69244c720823e11a9880dddbaf5e7af | [
"MIT"
] | null | null | null | from .force_feeder import ForceFeeder
| 19 | 37 | 0.868421 | 5 | 38 | 6.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 38 | 1 | 38 | 38 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
804e66ea740e677674d063f9d162a6aa6accd649 | 211 | py | Python | translator/ast/token.py | akvarats/habr_antlr4_internal | 0146af29b277c772fd342f905fe356a2cc789824 | [
"Apache-2.0"
] | null | null | null | translator/ast/token.py | akvarats/habr_antlr4_internal | 0146af29b277c772fd342f905fe356a2cc789824 | [
"Apache-2.0"
] | null | null | null | translator/ast/token.py | akvarats/habr_antlr4_internal | 0146af29b277c772fd342f905fe356a2cc789824 | [
"Apache-2.0"
] | null | null | null | from .base import BaseNode
class TokenNode(BaseNode):
""" """
def __init__(self, text: str):
super().__init__()
self.text = text
def get_text(self):
return self.text or ""
| 17.583333 | 34 | 0.578199 | 25 | 211 | 4.52 | 0.6 | 0.212389 | 0.212389 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2891 | 211 | 11 | 35 | 19.181818 | 0.753333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0.142857 | 0.142857 | 0.714286 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
337fbc1ea75ddc9feb9b0a08929124f4626b090b | 30 | py | Python | tests/test_sinkhorn.py | nbgl/pytorch-ot | cd416befd3fa89bc3e4a858e0b6f8d28dfa9f18b | [
"MIT"
] | 2 | 2020-11-17T15:29:14.000Z | 2021-04-24T15:12:35.000Z | tests/test_sinkhorn.py | nbgl/torch-wasserstein | cd416befd3fa89bc3e4a858e0b6f8d28dfa9f18b | [
"MIT"
] | null | null | null | tests/test_sinkhorn.py | nbgl/torch-wasserstein | cd416befd3fa89bc3e4a858e0b6f8d28dfa9f18b | [
"MIT"
] | 2 | 2020-10-09T02:43:01.000Z | 2021-06-03T11:48:47.000Z | def test_sinkhorn():
pass
| 10 | 20 | 0.666667 | 4 | 30 | 4.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.233333 | 30 | 2 | 21 | 15 | 0.826087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
33aad0d6a88a43d2cd62be2e70b8dee313eb8201 | 42 | py | Python | builder/bot/tasks/__init__.py | Krossom/Loki | 5d821fa8901340d51579bb260ef7efd4da6afc24 | [
"MIT"
] | 3 | 2019-11-24T05:35:54.000Z | 2021-12-25T11:48:16.000Z | builder/bot/tasks/__init__.py | juan157/Loki | 28fbcebfe25631c539f656e98efe789d526432e2 | [
"MIT"
] | null | null | null | builder/bot/tasks/__init__.py | juan157/Loki | 28fbcebfe25631c539f656e98efe789d526432e2 | [
"MIT"
] | 4 | 2019-06-18T21:56:29.000Z | 2020-04-29T16:03:14.000Z | # Date: 09/27/2018
# Author: Pure-L0G1C
| 14 | 21 | 0.642857 | 7 | 42 | 3.857143 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.294118 | 0.190476 | 42 | 2 | 22 | 21 | 0.5 | 0.833333 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1d1567a38243ae196a5936dace44084b58b189f4 | 302 | py | Python | bot/Utils/systemUtils.py | LukasForst/toggl-wire-bot | 1a242ef281b3cb501f30a1acee9cda7fd2cb2a84 | [
"MIT"
] | null | null | null | bot/Utils/systemUtils.py | LukasForst/toggl-wire-bot | 1a242ef281b3cb501f30a1acee9cda7fd2cb2a84 | [
"MIT"
] | null | null | null | bot/Utils/systemUtils.py | LukasForst/toggl-wire-bot | 1a242ef281b3cb501f30a1acee9cda7fd2cb2a84 | [
"MIT"
] | null | null | null | import os
def getTogglToken() -> str:
return os.getenv("TOGGL_TOKEN")
def getRomanToken() -> str:
return os.getenv("ROMAN_TOKEN")
def getRomanURL() -> str:
# TODO
return "http://proxy.services.zinfra.io"
def getWorkspaceId() -> int:
# TODO fetch from env
return 4039412
| 15.1 | 44 | 0.652318 | 37 | 302 | 5.27027 | 0.648649 | 0.092308 | 0.112821 | 0.174359 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.029536 | 0.215232 | 302 | 19 | 45 | 15.894737 | 0.793249 | 0.07947 | 0 | 0 | 0 | 0 | 0.192727 | 0 | 0 | 0 | 0 | 0.052632 | 0 | 1 | 0.444444 | true | 0 | 0.111111 | 0.444444 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
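The `getRomanURL` and `getWorkspaceId` functions above hard-code values behind TODOs; `os.getenv` accepts a default, which is one way to make those configurable. Names below mirror the snippet but the variable names `ROMAN_URL`/`WORKSPACE_ID` are illustrative assumptions:

```python
import os

def get_roman_url() -> str:
    # Fall back to the hard-coded endpoint when the variable is unset
    return os.getenv("ROMAN_URL", "http://proxy.services.zinfra.io")

def get_workspace_id() -> int:
    # Environment variables are strings, so convert explicitly
    return int(os.getenv("WORKSPACE_ID", "4039412"))

os.environ.pop("ROMAN_URL", None)
print(get_roman_url())       # http://proxy.services.zinfra.io
os.environ["WORKSPACE_ID"] = "123"
print(get_workspace_id())    # 123
```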
1d2f047a8bacd45cb093c13b939cafc7ac729fed | 148 | py | Python | Language/Parsing/alphabet_symbol.py | anoppa/Proyecto-IA-Sim-Comp | 71132bd0c6cb5aeff812fd96e0017be71178a5f3 | [
"MIT"
] | 1 | 2022-03-11T14:24:10.000Z | 2022-03-11T14:24:10.000Z | Language/Parsing/alphabet_symbol.py | anoppa/Proyecto-IA-Sim-Comp | 71132bd0c6cb5aeff812fd96e0017be71178a5f3 | [
"MIT"
] | null | null | null | Language/Parsing/alphabet_symbol.py | anoppa/Proyecto-IA-Sim-Comp | 71132bd0c6cb5aeff812fd96e0017be71178a5f3 | [
"MIT"
] | 1 | 2022-01-19T04:29:19.000Z | 2022-01-19T04:29:19.000Z | class AlphabetSymbol:
def __init__(self, symbol) -> None:
self.symbol = symbol
def __str__(self) -> str:
return self.symbol | 24.666667 | 39 | 0.628378 | 17 | 148 | 5 | 0.529412 | 0.352941 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.27027 | 148 | 6 | 40 | 24.666667 | 0.787037 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
1d4c158fa161b7a8f9ba8869ece2175233ec5956 | 151 | py | Python | pysindy/differentiation/__init__.py | billtubbs/pysindy | f9ac15b5d073c71bb210b77ff6a5579beb8ed94b | [
"MIT"
] | 7 | 2020-04-02T00:19:29.000Z | 2021-11-02T07:22:28.000Z | pysindy/differentiation/__init__.py | billtubbs/pysindy | f9ac15b5d073c71bb210b77ff6a5579beb8ed94b | [
"MIT"
] | null | null | null | pysindy/differentiation/__init__.py | billtubbs/pysindy | f9ac15b5d073c71bb210b77ff6a5579beb8ed94b | [
"MIT"
] | null | null | null | from .base import BaseDifferentiation
from .finite_difference import FiniteDifference
from .smoothed_finite_difference import SmoothedFiniteDifference
| 37.75 | 64 | 0.900662 | 15 | 151 | 8.866667 | 0.6 | 0.240602 | 0.330827 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.07947 | 151 | 3 | 65 | 50.333333 | 0.956835 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
d517ed88248c7c2d6eead0e498ccb5c9a3d20ffc | 40 | py | Python | featuretools/core/api.py | zhxt95/featuretools | 4fd90031e2b525ddaacddf46ff06a58103dbca35 | [
"BSD-3-Clause"
] | null | null | null | featuretools/core/api.py | zhxt95/featuretools | 4fd90031e2b525ddaacddf46ff06a58103dbca35 | [
"BSD-3-Clause"
] | null | null | null | featuretools/core/api.py | zhxt95/featuretools | 4fd90031e2b525ddaacddf46ff06a58103dbca35 | [
"BSD-3-Clause"
] | null | null | null | # flake8: noqa
from .base import FTBase
| 13.333333 | 24 | 0.75 | 6 | 40 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030303 | 0.175 | 40 | 2 | 25 | 20 | 0.878788 | 0.3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d51d7e95d96506f7328ab467907216ad99442dbb | 215 | py | Python | Python practice/Mit opencourceware(2.7)/tuple.py | chiranjeevbitp/Python27new | d366efee57857402bae16cabf1df94c657490750 | [
"bzip2-1.0.6"
] | null | null | null | Python practice/Mit opencourceware(2.7)/tuple.py | chiranjeevbitp/Python27new | d366efee57857402bae16cabf1df94c657490750 | [
"bzip2-1.0.6"
] | null | null | null | Python practice/Mit opencourceware(2.7)/tuple.py | chiranjeevbitp/Python27new | d366efee57857402bae16cabf1df94c657490750 | [
"bzip2-1.0.6"
] | null | null | null | ##tuples and hashing ....
if __name__ == '__main__':
n = int(raw_input())
integer_list = map(int, raw_input().split())
t=tuple(integer_list)
print integer_list
print t
print hash(t)
| 23.888889 | 49 | 0.604651 | 29 | 215 | 4.034483 | 0.62069 | 0.282051 | 0.188034 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.255814 | 215 | 8 | 50 | 26.875 | 0.73125 | 0.106977 | 0 | 0 | 0 | 0 | 0.043956 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.428571 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
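tuple.py above is Python 2 (print statement, `raw_input`); the point it demonstrates — tuples are hashable, lists are not — looks like this in Python 3:

```python
# Tuples are immutable, so they can be hashed and used as dict/set keys
t = tuple(map(int, "1 2 3".split()))
print(t)                             # (1, 2, 3)
print(hash(t) == hash((1, 2, 3)))    # True: equal tuples hash equally

try:
    hash([1, 2, 3])                  # lists are mutable, hence unhashable
except TypeError:
    print("unhashable")
```

Note that in Python 3 `map` returns an iterator, so the `tuple(...)` call is what materializes the values.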
d5314bb76821aec2ec61c7c2ecff4b4244d48ddb | 13,694 | py | Python | tests/test_kmeans.py | rlespinet/pomegranate | 98c87d0ac6dc8494b8b0110d4913ece652aaa1f6 | [
"MIT"
] | null | null | null | tests/test_kmeans.py | rlespinet/pomegranate | 98c87d0ac6dc8494b8b0110d4913ece652aaa1f6 | [
"MIT"
] | null | null | null | tests/test_kmeans.py | rlespinet/pomegranate | 98c87d0ac6dc8494b8b0110d4913ece652aaa1f6 | [
"MIT"
] | null | null | null | from pomegranate import *
from nose.tools import with_setup
from nose.tools import assert_true
from nose.tools import assert_equal
from nose.tools import assert_greater_equal
from nose.tools import assert_greater
from nose.tools import assert_raises
from nose.tools import assert_not_equal
from numpy.testing import assert_almost_equal
from numpy.testing import assert_array_almost_equal
from numpy.testing import assert_array_equal
import random
import pickle
import numpy as np
numpy.random.seed(0)
def setup_three_dimensions():
global X
X = numpy.array([[-0.13174492, 0.51895916, -1.13141796],
[ 7.92260379, 7.86325294, 7.9884075 ],
[-0.63378039, -0.96394236, -1.34125012],
[ 8.16216236, 8.04655182, 6.68825619],
[-0.69595565, -0.19004012, 0.40768949],
[ 7.76271281, 8.94969945, 7.03687617],
[-1.92481462, -1.03905815, -0.44048926],
[ 7.90926091, 7.21944418, 7.15989354],
[-0.97493454, -0.04714556, -0.38607725],
[ 9.65781658, 7.04832845, 6.47613347]])
idxs = numpy.array([29, 19, 26, 11, 8, 27, 21, 7, 14, 13])
i, j = idxs // 3, idxs % 3
global X_nan
X_nan = X.copy()
X_nan[i, j] = numpy.nan
global centroids
centroids = numpy.array([[0, 0, 0],
[8, 8, 8]])
global model
model = Kmeans(2, centroids)
def setup_five_dimensions():
global X
X = numpy.array([[-0.04320239, 2.25402395, -0.3075753 , 0.01710706, 2.88816037],
[ 3.6483074 , 5.03958367, 3.14457941, 4.94180558, 4.32880698],
[ 7.48485345, 8.54100011, 7.90936486, 8.12260819, 6.6466098 ],
[ 12.15394848, 10.52091121, 13.55495735, 10.48190106, 10.94417476],
[ 1.21068778, 0.77311369, -0.31479566, -0.51865649, 0.4408653 ],
[-0.62796182, -0.34947675, -1.09050772, -0.34591408, 0.78866514],
[ 0.5661847 , 0.30785453, 0.38823634, 1.99717206, -0.99415221],
[ 0.10871016, 2.06244903, -0.19580087, -0.22100353, -0.43777027],
[ 3.06987578, 4.8633418 , 4.23645519, 4.20563589, 3.40046883],
[ 3.0471144 , 3.43070459, 3.88690894, 3.61962816, 3.52399965],
[ 3.3020318 , 5.16491752, 3.85249134, 2.7075964 , 4.03831846],
[ 3.55266908, 2.69803949, 4.13340743, 5.72527752, 4.9840009 ],
[ 7.27689336, 8.99614296, 7.10109146, 7.81354687, 7.27320546],
[ 9.55443921, 7.70358635, 8.9762396 , 7.8054752 , 7.95933534],
[ 7.55150108, 9.09523173, 8.38379803, 8.18932292, 7.70853 ],
[ 9.59329137, 8.26811547, 9.82226673, 8.35257773, 8.21768809],
[ 11.77294852, 12.33135372, 13.02160394, 12.05536766, 11.96375761],
[ 11.08768408, 13.15689157, 12.59002102, 11.16137415, 9.84335332],
[ 11.41978669, 11.45646564, 11.77622614, 11.96590564, 12.33083825],
[ 12.13323296, 11.89683824, 12.18373541, 13.21432431, 11.79987739]])
idxs = numpy.array([77, 26, 61, 46, 18, 30, 94, 96, 45, 67, 4, 20, 23, 73, 37, 21, 58,
99, 51, 7, 69, 53, 81, 85, 95, 9, 98, 24, 28, 38])
i, j = idxs // 5, idxs % 5
global X_nan
X_nan = X.copy()
X_nan[i, j] = numpy.nan
global centroids
centroids = numpy.array([[0, 0, 0, 0, 0],
[4, 4, 4, 4, 4],
[8, 8, 8, 8, 8],
[12, 12, 12, 12, 12]])
global model
model = Kmeans(4, centroids)
def test_kmeans_init():
centroids = [[2, 3], [5, 7]]
model = Kmeans(2, centroids)
assert_equal(model.d, 2)
assert_equal(model.k, 2)
assert_array_equal(model.centroids, centroids)
@with_setup(setup_three_dimensions)
def test_kmeans_from_samples():
model = Kmeans.from_samples(2, X, init='first-k')
centroids = [[-0.872246, -0.344245, -0.578309],
[ 8.282911, 7.825455, 7.069913]]
assert_array_almost_equal(model.centroids, centroids)
@with_setup(setup_three_dimensions)
def test_kmeans_from_samples_parallel():
model = Kmeans.from_samples(2, X, init='first-k', n_jobs=2)
centroids = [[-0.872246, -0.344245, -0.578309],
[ 8.282911, 7.825455, 7.069913]]
assert_array_almost_equal(model.centroids, centroids)
@with_setup(setup_three_dimensions)
def test_kmeans_predict():
y = numpy.array([0, 1, 0, 1, 0, 1, 0, 1, 0, 1])
y_hat = model.predict(X)
assert_array_equal(y, y_hat)
@with_setup(setup_three_dimensions)
def test_kmeans_predict_parallel():
y = numpy.array([0, 1, 0, 1, 0, 1, 0, 1, 0, 1])
y_hat = model.predict(X, n_jobs=2)
assert_array_equal(y, y_hat)
y_hat = model.predict(X, n_jobs=4)
assert_array_equal(y, y_hat)
@with_setup(setup_five_dimensions)
def test_kmeans_predict_large():
y = [0, 1, 2, 3, 0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3]
y_hat = model.predict(X)
assert_array_equal(y, y_hat)
@with_setup(setup_five_dimensions)
def test_kmeans_predict_large_parallel():
y = [0, 1, 2, 3, 0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3]
y_hat = model.predict(X, n_jobs=2)
assert_array_equal(y, y_hat)
y_hat = model.predict(X, n_jobs=4)
assert_array_equal(y, y_hat)
@with_setup(setup_three_dimensions)
def test_kmeans_fit():
model.fit(X)
centroids = [[-0.872246, -0.344245, -0.578309],
[ 8.282911, 7.825455, 7.069913]]
assert_array_almost_equal(model.centroids, centroids)
@with_setup(setup_three_dimensions)
def test_kmeans_fit_parallel():
model.fit(X, n_jobs=2)
centroids = [[-0.872246, -0.344245, -0.578309],
[ 8.282911, 7.825455, 7.069913]]
assert_array_almost_equal(model.centroids, centroids)
model.fit(X, n_jobs=4)
centroids = [[-0.872246, -0.344245, -0.578309],
[ 8.282911, 7.825455, 7.069913]]
assert_array_almost_equal(model.centroids, centroids)
@with_setup(setup_five_dimensions)
def test_kmeans_multiple_init():
model1 = Kmeans.from_samples(4, X, init='kmeans++', n_init=1)
model2 = Kmeans.from_samples(4, X, init='kmeans++', n_init=25)
dist1 = model1.distance(X).min(axis=1).sum()
dist2 = model2.distance(X).min(axis=1).sum()
assert_greater_equal(dist1, dist2)
model1 = Kmeans.from_samples(4, X, init='first-k', n_init=1)
model2 = Kmeans.from_samples(4, X, init='first-k', n_init=5)
dist1 = model1.distance(X).min(axis=1).sum()
dist2 = model2.distance(X).min(axis=1).sum()
assert_equal(dist1, dist2)
@with_setup(setup_five_dimensions)
def test_kmeans_ooc_from_samples():
numpy.random.seed(0)
model1 = Kmeans.from_samples(5, X, init='first-k', batch_size=20)
model2 = Kmeans.from_samples(5, X, init='first-k', batch_size=None)
assert_array_equal(model1.centroids, model2.centroids)
@with_setup(setup_three_dimensions)
def test_kmeans_ooc_fit():
centroids_copy = numpy.copy(centroids)
model1 = Kmeans(2, centroids_copy, n_init=1)
model1.fit(X)
centroids_copy = numpy.copy(centroids)
model2 = Kmeans(2, centroids_copy, n_init=1)
model2.fit(X, batch_size=10)
centroids_copy = numpy.copy(centroids)
model3 = Kmeans(2, centroids_copy, n_init=1)
model3.fit(X, batch_size=1)
assert_array_almost_equal(model1.centroids, model2.centroids)
assert_array_almost_equal(model1.centroids, model3.centroids)
@with_setup(setup_five_dimensions)
def test_kmeans_minibatch_from_samples():
model1 = Kmeans.from_samples(4, X, init='first-k', batch_size=10)
model2 = Kmeans.from_samples(4, X, init='first-k', batch_size=None)
model3 = Kmeans.from_samples(4, X, init='first-k', batch_size=10, batches_per_epoch=1)
assert_array_almost_equal(model1.centroids, model2.centroids)
assert_raises(AssertionError, assert_array_equal, model1.centroids, model3.centroids)
@with_setup(setup_five_dimensions)
def test_kmeans_minibatch_fit():
centroids_copy = numpy.copy(centroids)
model1 = Kmeans(4, centroids_copy)
model1.fit(X, batch_size=10)
centroids_copy = numpy.copy(centroids)
model2 = Kmeans(4, centroids_copy)
model2.fit(X, batch_size=None)
centroids_copy = numpy.copy(centroids)
model3 = Kmeans(4, centroids_copy)
model3.fit(X, batch_size=5, batches_per_epoch=1)
assert_array_almost_equal(model1.centroids, model2.centroids)
assert_raises(AssertionError, assert_array_equal, model1.centroids, model3.centroids)
@with_setup(setup_three_dimensions)
def test_kmeans_nan_from_samples():
model = Kmeans.from_samples(2, X_nan, init='first-k')
centroids = [[-0.872246, 0.235907, -0.785954],
[ 7.94916 , 7.825455, 7.395059]]
assert_array_almost_equal(model.centroids, centroids)
@with_setup(setup_three_dimensions)
def test_kmeans_nan_from_samples_parallel():
model = Kmeans.from_samples(2, X_nan, init='first-k', n_jobs=2)
centroids = [[-0.872246, 0.235907, -0.785954],
[ 7.94916 , 7.825455, 7.395059]]
assert_array_almost_equal(model.centroids, centroids)
@with_setup(setup_three_dimensions)
def test_kmeans_nan_fit():
model.fit(X_nan)
centroids = [[-0.872246, 0.235907, -0.785954],
[ 7.94916 , 7.825455, 7.395059]]
assert_array_almost_equal(model.centroids, centroids)
@with_setup(setup_three_dimensions)
def test_kmeans_nan_fit_parallel():
model.fit(X_nan, n_jobs=2)
centroids = [[-0.872246, 0.235907, -0.785954],
[ 7.94916 , 7.825455, 7.395059]]
assert_array_almost_equal(model.centroids, centroids)
@with_setup(setup_five_dimensions)
def test_kmeans_nan_fit_large():
model.fit(X_nan)
centroids = [[ -0.187485, 1.541443, -0.331161, 1.00714 , -0.214419],
[ 3.393221, 4.200322, 4.027316, 4.25569 , 3.986697],
[ 8.292196, 8.401983, 7.798085, 8.023552, 7.461508],
[ 11.782228, 11.711423, 12.625309, 11.727549, 10.917095]]
assert_array_almost_equal(model.centroids, centroids)


@with_setup(setup_five_dimensions)
def test_kmeans_nan_fit_large_parallel():
model.fit(X_nan, n_jobs=2)
centroids = [[ -0.187485, 1.541443, -0.331161, 1.00714 , -0.214419],
[ 3.393221, 4.200322, 4.027316, 4.25569 , 3.986697],
[ 8.292196, 8.401983, 7.798085, 8.023552, 7.461508],
[ 11.782228, 11.711423, 12.625309, 11.727549, 10.917095]]
assert_array_almost_equal(model.centroids, centroids)


@with_setup(setup_three_dimensions)
def test_kmeans_nan_predict():
y = numpy.array([0, 1, 0, 1, 0, 1, 0, 1, 0, 1])
y_hat = model.predict(X_nan)
assert_array_almost_equal(y, y_hat)


@with_setup(setup_three_dimensions)
def test_kmeans_nan_predict_parallel():
y = numpy.array([0, 1, 0, 1, 0, 1, 0, 1, 0, 1])
y_hat = model.predict(X_nan, n_jobs=2)
assert_array_almost_equal(y, y_hat)


@with_setup(setup_five_dimensions)
def test_kmeans_nan_large_predict():
y = numpy.array([0, 1, 2, 3, 0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3])
y_hat = model.predict(X_nan)
assert_array_almost_equal(y, y_hat)


@with_setup(setup_five_dimensions)
def test_kmeans_nan_large_predict_parallel():
y = numpy.array([0, 1, 2, 3, 0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3])
y_hat = model.predict(X_nan, n_jobs=2)
assert_array_almost_equal(y, y_hat)


@with_setup(setup_five_dimensions)
def test_kmeans_nan_multiple_init():
model1 = Kmeans.from_samples(4, X_nan, init='kmeans++', n_init=1)
model2 = Kmeans.from_samples(4, X_nan, init='kmeans++', n_init=25)
dist1 = model1.distance(X).min(axis=1).sum()
dist2 = model2.distance(X).min(axis=1).sum()
assert_greater(dist1, dist2)
model1 = Kmeans.from_samples(4, X_nan, init='first-k', n_init=1)
model2 = Kmeans.from_samples(4, X_nan, init='first-k', n_init=5)
dist1 = model1.distance(X).min(axis=1).sum()
dist2 = model2.distance(X).min(axis=1).sum()
assert_equal(dist1, dist2)


@with_setup(setup_five_dimensions)
def test_kmeans_ooc_nan_from_samples():
model1 = Kmeans.from_samples(4, X_nan, init='first-k', batch_size=20)
model2 = Kmeans.from_samples(4, X_nan, init='first-k', batch_size=None)
assert_array_almost_equal(model1.centroids, model2.centroids)


@with_setup(setup_five_dimensions)
def test_kmeans_ooc_nan_fit():
centroids_copy = numpy.copy(centroids)
model1 = Kmeans(4, centroids_copy, n_init=1)
model1.fit(X_nan)
centroids_copy = numpy.copy(centroids)
model2 = Kmeans(4, centroids_copy, n_init=1)
model2.fit(X_nan, batch_size=10)
centroids_copy = numpy.copy(centroids)
model3 = Kmeans(4, centroids_copy, n_init=1)
model3.fit(X_nan, batch_size=1)
assert_array_almost_equal(model1.centroids, model2.centroids, 4)
assert_array_almost_equal(model1.centroids, model3.centroids, 4)


@with_setup(setup_five_dimensions)
def test_kmeans_minibatch_nan_from_samples():
model1 = Kmeans.from_samples(4, X_nan, init='first-k', batch_size=10)
model2 = Kmeans.from_samples(4, X_nan, init='first-k', batch_size=None)
model3 = Kmeans.from_samples(4, X_nan, init='first-k', batch_size=10, batches_per_epoch=1)
model4 = Kmeans.from_samples(4, X_nan, init='first-k', batch_size=10, batches_per_epoch=2)
assert_array_almost_equal(model1.centroids, model2.centroids)
assert_array_almost_equal(model1.centroids, model4.centroids)
assert_raises(AssertionError, assert_array_equal, model1.centroids, model3.centroids)


@with_setup(setup_five_dimensions)
def test_kmeans_minibatch_nan_fit():
centroids_copy = numpy.copy(centroids)
model1 = Kmeans(4, centroids_copy, n_init=1)
model1.fit(X, batch_size=10)
centroids_copy = numpy.copy(centroids)
model2 = Kmeans(4, centroids_copy, n_init=1)
model2.fit(X, batch_size=None)
centroids_copy = numpy.copy(centroids)
model3 = Kmeans(4, centroids_copy, n_init=1)
model3.fit(X, batch_size=10, batches_per_epoch=1)
centroids_copy = numpy.copy(centroids)
model4 = Kmeans(4, centroids_copy, n_init=1)
model4.fit(X, batch_size=10, batches_per_epoch=2)
assert_array_almost_equal(model1.centroids, model2.centroids)
assert_array_almost_equal(model1.centroids, model4.centroids)
assert_raises(AssertionError, assert_array_equal, model1.centroids, model3.centroids)
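The equivalence these tests assert — one batch covering the whole dataset matches `batch_size=None`, while fewer batches per epoch diverge — can be sketched with a toy Lloyd update. `kmeans_epoch` below is an illustration under that assumption, not pomegranate's actual implementation:

```python
import numpy as np

def kmeans_epoch(X, centroids, batch=None):
    """One Lloyd update using `batch` row indices of X (all rows when None)."""
    data = X if batch is None else X[batch]
    # Assign each point to its nearest centroid by squared distance.
    dists = ((data[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    labels = dists.argmin(axis=1)
    new = centroids.copy()
    for k in range(len(centroids)):
        members = data[labels == k]
        if len(members):
            new[k] = members.mean(axis=0)
    return new

rng = np.random.RandomState(0)
X = rng.randn(20, 5)
centroids = X[:4].copy()  # 'first-k' style initialization
full = kmeans_epoch(X, centroids)
one_batch = kmeans_epoch(X, centroids, batch=np.arange(len(X)))
```

With a single batch spanning every row, the update is identical to the full-data update, which is the invariant the tests above encode.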
# ---- pandaslookup/__init__.py (StefRe/pandas-lookup, MIT) ----
from .lookup import lookup, from_lookup # noqa: F401
# ---- week3/MedianString.py (hot9cups/uhh-stuff, MIT) ----
"""import sys
import os
from itertools import product
sys.path.append(os.path.abspath('../week2'))
# from GetNeighbors import get_neighbors
from HammingDist import hamming_distance
from time import time
def median_string(dna, k):
neighbors = set()
# get_neighbors("A"*k, 0, k, neighbors)
best_score = k * len(dna)
best_motif = ""
for pattern in product(('A', 'T', 'C', 'G'), repeat = k):
pattern = "".join(pattern)
score = 0
for dna_seq in dna:
least_dist = k
for i in range(len(dna_seq) - k + 1):
dist = hamming_distance(pattern, dna_seq[i:i+k])
if dist < least_dist:
least_dist = dist
score += least_dist
if score <= best_score:
best_score = score
best_motif = pattern
return best_motif
if __name__ == '__main__':
with open("dataset_158_9.txt") as f:
start = time()
data = f.readlines()
k = int(data[0].strip())
dna = []
for i in range(1, len(data)):
dna.append(data[i].strip())
print(median_string(dna, k))
print(time() - start)
"""
import sys
import os
sys.path.append(os.path.abspath('../week2'))
from GetNeighbors import get_neighbors
from HammingDist import hamming_distance
from time import time


def median_string(dna, k):
neighbors = set()
get_neighbors("A"*k, 0, k, neighbors)
best_score = k * len(dna)
best_motif = ""
for pattern in neighbors:
score = 0
for dna_seq in dna:
least_dist = k
for i in range(len(dna_seq) - k + 1):
dist = hamming_distance(pattern, dna_seq[i:i+k])
if dist < least_dist:
least_dist = dist
score += least_dist
if score <= best_score:
best_score = score
best_motif = pattern
return best_motif


if __name__ == '__main__':
with open("dataset_158_9.txt") as f:
start = time()
data = f.readlines()
k = int(data[0].strip())
dna = []
for i in range(1, len(data)):
dna.append(data[i].strip())
print(median_string(dna, k))
print(time() - start)
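A self-contained miniature of the same search — inlining a Hamming-distance helper instead of importing it from week2 — shows the selection logic on a toy input:

```python
from itertools import product

def hamming(p, q):
    """Number of mismatched positions between equal-length strings."""
    return sum(a != b for a, b in zip(p, q))

def median_string_small(dna, k):
    """Brute-force median string: the k-mer minimizing total distance to dna."""
    best_score, best_motif = k * len(dna), ""
    for pattern in ("".join(p) for p in product("ACGT", repeat=k)):
        score = sum(
            min(hamming(pattern, seq[i:i + k]) for i in range(len(seq) - k + 1))
            for seq in dna
        )
        if score < best_score:
            best_score, best_motif = score, pattern
    return best_motif

motif = median_string_small(["AAA", "AAA", "AAT"], 3)  # unique minimizer
```

The toy input has a unique optimum, so the tie-breaking difference between `<` here and `<=` in the file above does not matter for this example.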
# ---- core_admin/skybot/models/__init__.py (linea-it/tno, MIT) ----
from .position import Position
# ---- tests/src/trivia/assignRightShift.py (lindlind/python-interpreter, MIT) ----
a = 14 >>2
# ---- tests/test_examples.py (cyrilbois/PFNET.py, BSD-2-Clause) ----
import os
import unittest
from subprocess import call, STDOUT


class TestExamples(unittest.TestCase):

    def test_constraints(self):
FNULL = open(os.devnull, 'w')
retcode = call(["python", "./examples/constraints.py", "./data/ieee14.m"],
stdout=FNULL,
stderr=STDOUT)
self.assertEqual(retcode, 0)

    def test_contingencies(self):
FNULL = open(os.devnull, 'w')
retcode = call(["python", "./examples/contingencies.py", "./data/ieee14.m"],
stdout=FNULL,
stderr=STDOUT)
self.assertEqual(retcode, 0)

    def test_functions(self):
FNULL = open(os.devnull, 'w')
retcode = call(["python", "./examples/functions.py", "./data/ieee14.m"],
stdout=FNULL,
stderr=STDOUT)
self.assertEqual(retcode, 0)

    def test_multi_period(self):
FNULL = open(os.devnull, 'w')
retcode = call(["python", "./examples/multi_period.py", "./data/ieee14.m"],
stdout=FNULL,
stderr=STDOUT)
self.assertEqual(retcode, 0)

    def test_networks(self):
FNULL = open(os.devnull, 'w')
retcode = call(["python", "./examples/networks.py", "./data/ieee14.m"],
stdout=FNULL,
stderr=STDOUT)
self.assertEqual(retcode, 0)

    def test_parsers(self):
FNULL = open(os.devnull, 'w')
retcode = call(["python", "./examples/parsers.py", "./data/ieee14.m"],
stdout=FNULL,
stderr=STDOUT)
self.assertEqual(retcode, 0)

    def test_problems(self):
FNULL = open(os.devnull, 'w')
retcode = call(["python", "./examples/problems.py", "./data/ieee14.m"],
stdout=FNULL,
stderr=STDOUT)
self.assertEqual(retcode, 0)

    def test_projections(self):
FNULL = open(os.devnull, 'w')
retcode = call(["python", "./examples/projections.py", "./data/ieee14.m"],
stdout=FNULL,
stderr=STDOUT)
self.assertEqual(retcode, 0)

    def test_start(self):
FNULL = open(os.devnull, 'w')
retcode = call(["python", "./examples/start.py", "./data/ieee14.m"],
stdout=FNULL,
stderr=STDOUT)
self.assertEqual(retcode, 0)

    def test_variables(self):
FNULL = open(os.devnull, 'w')
retcode = call(["python", "./examples/variables.py", "./data/ieee14.m"],
stdout=FNULL,
stderr=STDOUT)
self.assertEqual(retcode, 0)
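Every test above repeats the same call-and-discard-output pattern; a hedged refactor (not part of PFNET itself) collapses it into one helper. The demo call at the bottom uses a trivial `python -c "pass"` child so the snippet is runnable without the PFNET example scripts:

```python
import os
import subprocess
import sys

def run_script(argv):
    """Run a child process, discarding all of its output; return the exit code."""
    with open(os.devnull, "w") as fnull:
        return subprocess.call(argv, stdout=fnull, stderr=subprocess.STDOUT)

# Hypothetical usage mirroring the tests above:
# run_script(["python", "./examples/start.py", "./data/ieee14.m"])
ok = run_script([sys.executable, "-c", "pass"])  # trivial child process
```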
# ---- Python2/Day00-PrintHelloWorld.py (MatthewMerrill/HackerRank-30-Days-of-Code, Unlicense) ----
# https://www.hackerrank.com/contests/30-days-of-code/challenges/day-0-print-hello-world/submissions/code/4682129
print("Hello World.");
print("Welcome to 30 Days of Code.");
# ---- example/data/__init__.py (lemoner20/tensorlayer, Apache-2.0) ----
from __future__ import absolute_import
from . import imagenet_classes
# from . import
# ---- graphlayer/graphql/naming.py (mwilliamson/python-graphlayer, BSD-2-Clause) ----
import re

def snake_case_to_camel_case(value):
return value[0].lower() + re.sub(r"_(.)", lambda match: match.group(1).upper(), value[1:]).rstrip("_")
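The regex turns each `_x` pair into an uppercase `X` and the `rstrip` drops a dangling trailing underscore. Restating the function so the example is self-contained:

```python
import re

def snake_case_to_camel_case(value):
    # Uppercase the character following each underscore, then drop any
    # trailing underscore that had no character after it.
    return value[0].lower() + re.sub(r"_(.)", lambda match: match.group(1).upper(), value[1:]).rstrip("_")

camel = snake_case_to_camel_case("graph_ql_field_name")
trailing = snake_case_to_camel_case("a_b_")
```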
# ---- tests/test_active_learning_ts.py (bela127/active_learning_ts, MIT) ----
from active_learning_ts import __version__

def test_version():
assert __version__ == "0.2.3.0"
# ---- src/memegenerator.py (akliamrous/AkliBot, MIT) ----
import random
import discord
from discord.ext import commands


class MemeGenerator(commands.Cog):
    def __init__(self, client):
        self.client = client

    def generateMeme(self):
        pass
# ---- app/routers/basic_router/__init__.py (mabittar/fast_api_starter, MIT) ----
from .basic_router import router
# ---- Python/libraries/recognizers-sequence/recognizers_sequence/resources/__init__.py (acblacktea/Recognizers-Text, MIT) ----
from .base_phone_numbers import BasePhoneNumbers
from .base_email import BaseEmail
# ---- BurstCube/NoahSim/burstutils.py (nkasmanoff/Simulation, MIT) ----
import numpy as np
import math as mth
import random as rand
import healpy as hp


def length(v):
"""
Finds the length of a vector
Parameters
----------
v : array
numpy array representative of the vector you want to find the magnitude of.
Returns
-------
magv : float
magnitude of v.
"""
magv = mth.sqrt(np.dot(v, v))
return magv


def angle(v1, v2):
""""
Finds the angle between 2 vectors
Parameters
----------
v1 : array
v2 : array
The arrays representing the vectors who's angle is to be calculated.
Returns
-------
ang : float
Angle between the 2 vectors.
"""
ang = np.arccos(np.dot(v1, v2) / (length(v1) * length(v2)))
return ang
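A quick sanity check of the same arccos(dot / (|v1| |v2|)) formula on orthogonal unit vectors, re-derived standalone so it runs without the module:

```python
import math

import numpy as np

def angle_between(v1, v2):
    """Same formula as `angle` above: arccos of the normalized dot product."""
    norm = lambda v: math.sqrt(np.dot(v, v))
    return np.arccos(np.dot(v1, v2) / (norm(v1) * norm(v2)))

# Orthogonal unit vectors are separated by pi/2 radians.
right = angle_between(np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
```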


# TODO: experiment with this one.
def look_up_A(detnorm,source,array=False):
"""The look up table for detector A.
Currently for all these functions the coordinates are relative to the top of the spacecraft,
not indivudial detectors. To tranform just rotate by this specific detnorm.
Parameters
----------
detnorm : array
The vector normal to detector A.
source : array
The vector pointing to where in the sky the GRB came from.
Returns
-------
x : float
The exponent of dependence for the detector's response.
"""
if array:
ang = findAngles(detnorm,source)
if not array:
ang = angle(detnorm,source)
sourceang = hp.vec2ang(source)
sourcetheta = sourceang[0]
sourcephi = sourceang[1] #convert to degrees for now, not a big dealio or anything yet.
sourcetheta = np.around(np.rad2deg(sourcetheta)) #This needs to be able to take in an array and produce corresponding R's.
sourcephi = np.around(np.rad2deg(sourcephi))
X = np.arange(0, 180, 1) #full sky now.
Y = np.arange(0, 360, 1)
X, Y = np.meshgrid(X, Y)
R = 0.76*np.ones(shape=np.shape(X))
if not array:
if ang> np.pi/2:
x = 0
else:
mask1 = X == sourcetheta
mask2 = Y == sourcephi
x = R[mask1 & mask2]
else:
x = []
for i in range(len(source)):
sourceang = hp.vec2ang(source[i])
mask1 = X == np.around(np.rad2deg(sourceang[0])) #theta mask
mask2 = Y == np.around(np.rad2deg(sourceang[1])) #phi mask
x.append(R[mask1 & mask2])
return x
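Stripped of the healpy angle conversion, the table lookup in `look_up_A` reduces to one boolean mask over a (theta, phi) meshgrid. The source direction below is an illustrative value, not taken from the simulation:

```python
import numpy as np

theta_deg, phi_deg = 45, 120  # hypothetical source direction, degrees

# Same grid construction as the look_up_* functions above.
X, Y = np.meshgrid(np.arange(0, 180, 1), np.arange(0, 360, 1))
R = 0.76 * np.ones(np.shape(X))  # flat response-exponent table

# The pair of equality masks selects exactly one grid cell.
x = R[(X == theta_deg) & (Y == phi_deg)]
```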


def look_up_B(detnorm,source,array=False):
"""The look up table for detector B.
Currently for all these functions the coordinates are relative to the top of the spacecraft,
not indivudial detectors. To tranform just rotate by this specific detnorm.
Parameters
----------
detnorm : array
The vector normal to detector B.
source : array
The vector pointing to where in the sky the GRB came from.
Returns
-------
x : float
The exponent of dependence for the detector's response.
"""
if array:
        # For fitting purposes, creates the entire lookup table all at once. Unfortunately I only know how to do this by putting them in a loop as done below, which is time costly.
ang = findAngles(detnorm,source)
if not array:
ang = angle(detnorm,source)
sourceang = hp.vec2ang(source)
sourcetheta = sourceang[0]
sourcephi = sourceang[1] #convert to degrees for now, not a big dealio or anything yet.
sourcetheta = np.around(np.rad2deg(sourcetheta)) #This needs to be able to take in an array and produce corresponding R's.
sourcephi = np.round(np.rad2deg(sourcephi))
X = np.arange(0, 180, 1) #full sky now.
Y = np.arange(0, 360, 1)
X, Y = np.meshgrid(X, Y)
#creates meshgrid for theta phi, and masks the source's position to get response exponent.
R = 0.76*np.ones(shape=np.shape(X))
if not array:
if ang> np.pi/2:
x = 0
else:
mask1 = X == sourcetheta
mask2 = Y == sourcephi
x = R[mask1 & mask2]
else:
x = []
for i in range(len(source)):
sourceang = hp.vec2ang(source[i])
mask1 = X == np.around(np.rad2deg(sourceang[0])) #theta mask
mask2 = Y == np.around(np.rad2deg(sourceang[1])) #phi mask
x.append(R[mask1 & mask2])
return x


def look_up_C(detnorm,source,array=False):
"""The look up table for detector C.
Parameters
----------
detnorm : array
The vector normal to detector C.
source : array
The vector pointing to where in the sky the GRB came from.
Returns
-------
x : float
The exponent of dependence for the detector's response.
Example:
Let's say for this detector, past 30 degrees and for azimuths of
60 - 180, it's blocked. This is what it would look like:
R = 0.76*np.ones(shape=np.shape(X))
R[30:,60:180] = 0
"""
if array:
ang = findAngles(detnorm,source)
if not array:
ang = angle(detnorm,source)
sourceang = hp.vec2ang(source)
sourcetheta = sourceang[0]
sourcephi = sourceang[1]
#convert to degrees for now, not a big dealio or anything yet.
sourcetheta = np.around(np.rad2deg(sourcetheta)) #This needs to be able to take in an array and produce corresponding R's.
sourcephi = np.around(np.rad2deg(sourcephi))
X = np.arange(0, 180, 1) #full sky now.
Y = np.arange(0, 360, 1)
X, Y = np.meshgrid(X, Y)
R = 0.76*np.ones(shape=np.shape(X)) #response function
if not array:
if ang> np.pi/2:
x = 0
else:
mask1 = X == sourcetheta
mask2 = Y == sourcephi
x = R[mask1 & mask2]
else:
x = []
for i in range(len(source)):
sourceang = hp.vec2ang(source[i])
mask1 = X == np.around(np.rad2deg(sourceang[0])) #theta mask
mask2 = Y == np.around(np.rad2deg(sourceang[1])) #phi mask
x.append(R[mask1 & mask2])
return x


def look_up_D(detnorm,source,array=False):
"""The look up table for detector D.
Parameters
----------
detnorm : array
The vector normal to detector D.
source : array
The vector pointing to where in the sky the GRB came from.
Returns
-------
x : float
The exponent of dependence for the detector's response.
"""
if array:
ang = findAngles(detnorm,source)
if not array:
ang = angle(detnorm,source)
sourceang = hp.vec2ang(source)
sourcetheta = sourceang[0]
sourcephi = sourceang[1]
#convert to degrees for now, not a big dealio or anything yet.
sourcetheta = np.around(np.rad2deg(sourcetheta)) #This needs to be able to take in an array and produce corresponding R's.
sourcephi = np.around(np.rad2deg(sourcephi))
X = np.arange(0, 180, 1) #full sky now.
Y = np.arange(0, 360, 1)
X, Y = np.meshgrid(X, Y)
R = 0.76*np.ones(shape=np.shape(X))
if not array:
if ang> np.pi/2:
x = 0
else:
mask1 = X == sourcetheta
mask2 = Y == sourcephi
x = R[mask1 & mask2]
else:
x = []
for i in range(len(source)):
sourceang = hp.vec2ang(source[i])
mask1 = X == np.around(np.rad2deg(sourceang[0])) #theta mask
mask2 = Y == np.around(np.rad2deg(sourceang[1])) #phi mask
x.append(R[mask1 & mask2])
return x


def response(A,x):
"""Meant to imitate the actual response of a scintillator.
Inputs 2 vectors, and responds with a cos^x dependence.
Parameters
-----------
A : float
The angular separation in radians between the normal vector of the
detector, and the position in the sky of the simulated GRB.
x : float
The dependence
Returns
-------
R : float
The response function of how the scintillator will respond to a source
at angle A.
"""
#meant to imitate the response of the detectors for effective area vs. angle, found to be around .77
# print(length(A),length(B))
#if cosine is negative,
#Maybe include the pi/2 thing here.
R = pow(abs(np.cos(A)),x)
#How I fix the angle stuff now.
if A > np.pi/2:
R = 0
return R
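The cos^x falloff and the hard cutoff past pi/2 can be checked directly; `response` is re-declared here so the snippet runs standalone:

```python
import numpy as np

def response(A, x):
    """cos(A)**x effective-area falloff, zeroed behind the detector plane."""
    R = pow(abs(np.cos(A)), x)
    if A > np.pi / 2:
        R = 0
    return R

on_axis = response(0.0, 0.76)   # normal incidence: full response
behind = response(np.pi, 0.76)  # source behind the detector: clipped to zero
```

Without the `A > pi/2` clip, `abs(cos(A))` would report a spurious response for sources behind the detector, which is exactly what the clip guards against.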
# ---- setcount.py (billylo1/hit-counter, MIT) ----
import config
import db
import utils
db_connection = db.DbAccess(config.DATABASE_FILE_PATH)
connection = db_connection.get_connection()
db_connection.set_count(connection, 'pass.vaccine-ontario.ca', 21)
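The `db.DbAccess` internals are not shown in this repository excerpt; a plausible sqlite3 sketch of what `set_count` might do follows. The table and column names are guesses, not the project's real schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE counts (url TEXT PRIMARY KEY, count INTEGER)")

def set_count(connection, url, count):
    """Upsert the hit count for a URL (hypothetical schema)."""
    connection.execute(
        "INSERT INTO counts (url, count) VALUES (?, ?) "
        "ON CONFLICT(url) DO UPDATE SET count = excluded.count",
        (url, count),
    )
    connection.commit()

set_count(con, "pass.vaccine-ontario.ca", 21)
row = con.execute(
    "SELECT count FROM counts WHERE url = ?", ("pass.vaccine-ontario.ca",)
).fetchone()
```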
# ---- integration/dagster/tests/test_sensor_cursor.py (denimalpaca/OpenLineage, Apache-2.0) ----
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import json
import uuid
from unittest.mock import patch
from dagster import SensorDefinition, build_sensor_context, DagsterEventType
from dagster.core.test_utils import instance_for_test
from openlineage.dagster.cursor import OpenLineageCursor, RunningPipeline, RunningStep
from .conftest import make_test_event_log_record


@patch.dict(os.environ, {"OPENLINEAGE_URL": "http://mock-url:5000"})
def test_basic_sensor_def():
from openlineage.dagster.sensor import openlineage_sensor # noqa: E402
sensor_def = openlineage_sensor()
assert isinstance(sensor_def, SensorDefinition)
assert not sensor_def.targets


@patch.dict(os.environ, {"OPENLINEAGE_URL": "http://mock-url:5000"})
@patch("openlineage.dagster.sensor.get_event_log_records")
def test_cursor_update_with_after_storage_id(mock_event_log_records):
from openlineage.dagster.sensor import openlineage_sensor # noqa: E402
with instance_for_test() as instance:
context = build_sensor_context(instance=instance, repository_name="hello")
openlineage_sensor(after_storage_id=100).evaluate_tick(context)
assert context.cursor == json.dumps({
"last_storage_id": 100,
"running_pipelines": {}
})


@patch("openlineage.dagster.sensor._ADAPTER")
@patch("openlineage.dagster.sensor.make_step_run_id")
@patch("openlineage.dagster.sensor.get_event_log_records")
def test_cursor_update_with_successful_run(mock_event_log_records, mock_step_run_id, mock_adapter):  # noqa: E501
    from openlineage.dagster.sensor import openlineage_sensor  # noqa: E402

    with instance_for_test() as instance:
        ol_sensor_def = openlineage_sensor(record_filter_limit=1)

        # 1. pipeline start
        pipeline_run_id = str(uuid.uuid4())
        mock_event_log_records.return_value = [
            make_test_event_log_record(
                DagsterEventType.RUN_START, pipeline_run_id=pipeline_run_id
            )
        ]
        context = build_sensor_context(instance=instance)
        ol_sensor_def.evaluate_tick(context)
        assert OpenLineageCursor.from_json(context.cursor) == OpenLineageCursor(
            last_storage_id=1,
            running_pipelines={
                pipeline_run_id: RunningPipeline(
                    running_steps={},
                    repository_name=None
                )
            }
        )

        # 2. step start
        step_run_id = str(uuid.uuid4())
        step_key = "an_op"
        mock_step_run_id.return_value = step_run_id
        mock_event_log_records.return_value = [
            make_test_event_log_record(
                DagsterEventType.STEP_START, pipeline_run_id=pipeline_run_id, step_key=step_key, storage_id=2  # noqa: E501
            )
        ]
        ol_sensor_def.evaluate_tick(context)
        assert OpenLineageCursor.from_json(context.cursor) == OpenLineageCursor(
            last_storage_id=2,
            running_pipelines={
                pipeline_run_id: RunningPipeline(
                    running_steps={
                        step_key: RunningStep(
                            step_run_id=step_run_id,
                            input_datasets=[],
                            output_datasets=[])
                    },
                    repository_name=None
                )
            }
        )

        # 3. step success
        mock_event_log_records.return_value = [
            make_test_event_log_record(
                DagsterEventType.STEP_SUCCESS, pipeline_run_id=pipeline_run_id, step_key=step_key, storage_id=3  # noqa: E501
            )
        ]
        ol_sensor_def.evaluate_tick(context)
        assert OpenLineageCursor.from_json(context.cursor) == OpenLineageCursor(
            last_storage_id=3,
            running_pipelines={
                pipeline_run_id: RunningPipeline(
                    running_steps={},
                    repository_name=None
                )
            }
        )

        # 4. pipeline success
        mock_event_log_records.return_value = [
            make_test_event_log_record(
                DagsterEventType.RUN_SUCCESS, pipeline_run_id=pipeline_run_id, storage_id=4
            )
        ]
        ol_sensor_def.evaluate_tick(context)
        assert OpenLineageCursor.from_json(context.cursor) == OpenLineageCursor(
            last_storage_id=4,
            running_pipelines={}
        )


@patch("openlineage.dagster.sensor._ADAPTER")
@patch("openlineage.dagster.sensor.make_step_run_id")
@patch("openlineage.dagster.sensor.get_event_log_records")
def test_cursor_update_with_failing_run(mock_event_log_records, mock_step_run_id, mock_adapter):
    from openlineage.dagster.sensor import openlineage_sensor  # noqa: E402

    with instance_for_test() as instance:
        ol_sensor_def = openlineage_sensor(record_filter_limit=1)

        # 1. pipeline start
        pipeline_run_id = str(uuid.uuid4())
        mock_event_log_records.return_value = [
            make_test_event_log_record(
                DagsterEventType.RUN_START, pipeline_run_id=pipeline_run_id
            )
        ]
        context = build_sensor_context(instance=instance, cursor=None)
        ol_sensor_def.evaluate_tick(context)
        assert OpenLineageCursor.from_json(context.cursor) == OpenLineageCursor(
            last_storage_id=1,
            running_pipelines={
                pipeline_run_id: RunningPipeline(
                    running_steps={},
                    repository_name=None
                )
            },
        )

        # 2. step start
        step_run_id = str(uuid.uuid4())
        step_key = "an_op"
        mock_step_run_id.return_value = step_run_id
        mock_event_log_records.return_value = [
            make_test_event_log_record(
                DagsterEventType.STEP_START, pipeline_run_id=pipeline_run_id, step_key=step_key, storage_id=2  # noqa: E501
            )
        ]
        ol_sensor_def.evaluate_tick(context)
        assert OpenLineageCursor.from_json(context.cursor) == OpenLineageCursor(
            last_storage_id=2,
            running_pipelines={
                pipeline_run_id: RunningPipeline(
                    running_steps={
                        step_key: RunningStep(
                            step_run_id=step_run_id,
                            input_datasets=[],
                            output_datasets=[])
                    },
                    repository_name=None
                )
            }
        )

        # 3. step fail
        mock_event_log_records.return_value = [
            make_test_event_log_record(
                DagsterEventType.STEP_FAILURE, pipeline_run_id=pipeline_run_id, step_key=step_key, storage_id=3  # noqa: E501
            )
        ]
        ol_sensor_def.evaluate_tick(context)
        assert OpenLineageCursor.from_json(context.cursor) == OpenLineageCursor(
            last_storage_id=3,
            running_pipelines={
                pipeline_run_id: RunningPipeline(
                    running_steps={},
                    repository_name=None
                )
            }
        )

        # 4. pipeline fail
        mock_event_log_records.return_value = [
            make_test_event_log_record(
                DagsterEventType.RUN_FAILURE, pipeline_run_id=pipeline_run_id, storage_id=4
            )
        ]
        ol_sensor_def.evaluate_tick(context)
        assert OpenLineageCursor.from_json(context.cursor) == OpenLineageCursor(
            last_storage_id=4,
            running_pipelines={}
        )


@patch("openlineage.dagster.sensor._ADAPTER")
@patch("openlineage.dagster.sensor.make_step_run_id")
@patch("openlineage.dagster.sensor.get_event_log_records")
def test_cursor_update_with_exception_raised(mock_event_log_records, mock_step_run_id, mock_adapter):  # noqa: E501
    from openlineage.dagster.sensor import openlineage_sensor  # noqa: E402

    with instance_for_test() as instance:
        pipeline_run_id = str(uuid.uuid4())
        step_key = "an_op"
        step_run_id = str(uuid.uuid4())
        mock_step_run_id.return_value = step_run_id
        mock_event_log_records.return_value = [
            make_test_event_log_record(
                DagsterEventType.STEP_START, pipeline_run_id=pipeline_run_id, step_key=step_key
            ),
            make_test_event_log_record(
                DagsterEventType.STEP_SUCCESS, pipeline_run_id=pipeline_run_id, step_key=step_key
            ),
        ]
        mock_adapter.complete_step.side_effect = Exception("test!")
        context = build_sensor_context(instance=instance)
        openlineage_sensor(record_filter_limit=2).evaluate_tick(context)
        assert OpenLineageCursor.from_json(context.cursor) == OpenLineageCursor(
            last_storage_id=1,
            running_pipelines={
                pipeline_run_id: RunningPipeline(
                    running_steps={
                        step_key: RunningStep(
                            step_run_id=step_run_id,
                            input_datasets=[],
                            output_datasets=[])
                    },
                    repository_name=None
                )
            }
        )
| 39.555102 | 125 | 0.634919 | 1,067 | 9,691 | 5.372071 | 0.137769 | 0.044487 | 0.068039 | 0.043091 | 0.821179 | 0.807746 | 0.788381 | 0.788381 | 0.788381 | 0.773203 | 0 | 0.01209 | 0.291611 | 9,691 | 244 | 126 | 39.717213 | 0.82287 | 0.079662 | 0 | 0.64532 | 0 | 0 | 0.062219 | 0.04793 | 0 | 0 | 0 | 0 | 0.059113 | 1 | 0.024631 | false | 0 | 0.064039 | 0 | 0.08867 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f6c2aba46843d1ca7ff65364fe22763fa2ce9646 | 100 | py | Python | lib/regex.py | itsmewulf/airdrop | b599e669a6777b78cbf68c6f808d1c3744352d73 | [
"MIT"
] | null | null | null | lib/regex.py | itsmewulf/airdrop | b599e669a6777b78cbf68c6f808d1c3744352d73 | [
"MIT"
] | null | null | null | lib/regex.py | itsmewulf/airdrop | b599e669a6777b78cbf68c6f808d1c3744352d73 | [
"MIT"
] | null | null | null | import re
ETH_ADDRESS_RE: re.Pattern = re.compile(r'^(?P<prefix>0x)?(?P<actual>[0-9a-fA-F]{40})$')
| 25 | 88 | 0.65 | 20 | 100 | 3.15 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.053763 | 0.07 | 100 | 3 | 89 | 33.333333 | 0.623656 | 0 | 0 | 0 | 0 | 0.5 | 0.44 | 0.44 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
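A quick sanity check of the address pattern above. The pattern is defined locally here so the snippet is self-contained, with the 40-repeat quantifier inside the named group so that `actual` captures the full hex payload rather than only the last repeated character:

```python
import re

# Quantifier inside the named group: `actual` captures all 40 hex digits.
ETH_ADDRESS_RE = re.compile(r'^(?P<prefix>0x)?(?P<actual>[0-9a-fA-F]{40})$')

m = ETH_ADDRESS_RE.match('0x' + 'ab' * 20)
assert m is not None
assert m.group('prefix') == '0x'
assert m.group('actual') == 'ab' * 20   # full 40-character payload
assert ETH_ADDRESS_RE.match('0x1234') is None  # too short to be an address
```

If the quantifier is placed outside the group, as in `(?P<actual>[0-9a-fA-F]){40}`, the string still matches but `m.group('actual')` returns only the final digit.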
120121ba961095e2c1470ce96d8c2eec73c18619 | 365 | py | Python | chainer/optimizer_hooks/__init__.py | disktnk/chainer | 133798db470f6fd95973b882b9ccbd0c9726ac13 | [
"MIT"
] | 1 | 2021-05-31T08:59:28.000Z | 2021-05-31T08:59:28.000Z | chainer/optimizer_hooks/__init__.py | disktnk/chainer | 133798db470f6fd95973b882b9ccbd0c9726ac13 | [
"MIT"
] | null | null | null | chainer/optimizer_hooks/__init__.py | disktnk/chainer | 133798db470f6fd95973b882b9ccbd0c9726ac13 | [
"MIT"
] | null | null | null | from chainer.optimizer_hooks.gradient_clipping import GradientClipping # NOQA
from chainer.optimizer_hooks.gradient_hard_clipping import GradientHardClipping # NOQA
from chainer.optimizer_hooks.gradient_noise import GradientNoise # NOQA
from chainer.optimizer_hooks.lasso import Lasso # NOQA
from chainer.optimizer_hooks.weight_decay import WeightDecay # NOQA
| 60.833333 | 87 | 0.863014 | 45 | 365 | 6.777778 | 0.377778 | 0.180328 | 0.327869 | 0.409836 | 0.540984 | 0.242623 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09589 | 365 | 5 | 88 | 73 | 0.924242 | 0.065753 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
120bd5ce0a8d9cc9197f6981ed7b561c28f07b80 | 45 | py | Python | python/nlutag/core/bp/__init__.py | jiportilla/ontology | 8a66bb7f76f805c64fc76cfc40ab7dfbc1146f40 | [
"MIT"
] | null | null | null | python/nlutag/core/bp/__init__.py | jiportilla/ontology | 8a66bb7f76f805c64fc76cfc40ab7dfbc1146f40 | [
"MIT"
] | null | null | null | python/nlutag/core/bp/__init__.py | jiportilla/ontology | 8a66bb7f76f805c64fc76cfc40ab7dfbc1146f40 | [
"MIT"
] | null | null | null | from .perform_deep_nlu import PerformDeepNLU
| 22.5 | 44 | 0.888889 | 6 | 45 | 6.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088889 | 45 | 1 | 45 | 45 | 0.926829 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1226a85961d0be3901bc304489aebaee65a7d8a1 | 47 | py | Python | aiger_coins/__init__.py | mvcisback/py-aiger-coins | 3e7f5a84e56debe7001d63f2f271e29163781e68 | [
"MIT"
] | null | null | null | aiger_coins/__init__.py | mvcisback/py-aiger-coins | 3e7f5a84e56debe7001d63f2f271e29163781e68 | [
"MIT"
] | 7 | 2019-04-01T17:19:13.000Z | 2019-11-01T17:33:15.000Z | aiger_coins/__init__.py | mvcisback/py-aiger-coins | 3e7f5a84e56debe7001d63f2f271e29163781e68 | [
"MIT"
] | 2 | 2019-03-28T03:05:53.000Z | 2021-01-05T23:03:53.000Z | # flake8: noqa
from aiger_coins.pcirc import *
| 15.666667 | 31 | 0.765957 | 7 | 47 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025 | 0.148936 | 47 | 2 | 32 | 23.5 | 0.85 | 0.255319 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
12417191af2f14ad976def3007844a00c8d34dee | 257 | py | Python | ui/components/navigation/templatetags/breadcrumbs.py | adelsonllima/djangoplus | a4ce50bf8231a0d9a4a40751f0d076c2e9931f44 | [
"BSD-3-Clause"
] | 21 | 2017-10-08T23:19:47.000Z | 2020-01-16T20:02:08.000Z | ui/components/navigation/templatetags/breadcrumbs.py | adelsonllima/djangoplus | a4ce50bf8231a0d9a4a40751f0d076c2e9931f44 | [
"BSD-3-Clause"
] | 6 | 2020-06-03T05:30:52.000Z | 2022-01-13T00:44:26.000Z | ui/components/navigation/templatetags/breadcrumbs.py | adelsonllima/djangoplus | a4ce50bf8231a0d9a4a40751f0d076c2e9931f44 | [
"BSD-3-Clause"
] | 9 | 2017-10-09T22:58:31.000Z | 2021-11-20T15:20:18.000Z | # -*- coding: utf-8 -*-
from djangoplus.ui.components.navigation.breadcrumbs import Breadcrumbs
from django_jinja import library as register


@register.global_function
def breadcrumbs(request, view_title):
    return str(Breadcrumbs(request, view_title))
| 25.7 | 71 | 0.793774 | 32 | 257 | 6.25 | 0.71875 | 0.18 | 0.22 | 0.27 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004386 | 0.11284 | 257 | 9 | 72 | 28.555556 | 0.872807 | 0.081712 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.4 | 0.2 | 0.8 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
89d529f6eb8cd3e0fb070f23989235e1cc5aeb88 | 91 | py | Python | test.py | harizMunawar/kbbi-scraper | ece8daa7f3d53248ae9d6dd96ff724d092ee3283 | [
"MIT"
] | 1 | 2020-11-08T14:42:09.000Z | 2020-11-08T14:42:09.000Z | test.py | harizMunawar/kbbi-scraper | ece8daa7f3d53248ae9d6dd96ff724d092ee3283 | [
"MIT"
] | null | null | null | test.py | harizMunawar/kbbi-scraper | ece8daa7f3d53248ae9d6dd96ff724d092ee3283 | [
"MIT"
] | null | null | null | import kbbi_scraper as kbbi
from kbbi_scraper import utils
print(utils.definition('baju')) | 22.75 | 31 | 0.824176 | 14 | 91 | 5.214286 | 0.642857 | 0.30137 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.098901 | 91 | 4 | 31 | 22.75 | 0.890244 | 0 | 0 | 0 | 0 | 0 | 0.043478 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0.333333 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
89ef494262ee647789a44a61ab9f4153639f1934 | 46 | py | Python | akData/parse/__init__.py | adamkerz/akData | 673884671da54b2b96480616a3f2633ba4b4710d | [
"BSD-3-Clause"
] | null | null | null | akData/parse/__init__.py | adamkerz/akData | 673884671da54b2b96480616a3f2633ba4b4710d | [
"BSD-3-Clause"
] | null | null | null | akData/parse/__init__.py | adamkerz/akData | 673884671da54b2b96480616a3f2633ba4b4710d | [
"BSD-3-Clause"
] | null | null | null | from . import phoneNumber
from . import abn
| 15.333333 | 26 | 0.73913 | 6 | 46 | 5.666667 | 0.666667 | 0.588235 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.217391 | 46 | 2 | 27 | 23 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
c3ad7dfbbdcad727fa9a7a78d6719d6ce4a6a926 | 121 | py | Python | app/home/views.py | cyq7on/MicroFilm | d889d2ee035f69b699c57bd4a3353f89c11adb93 | [
"Apache-2.0"
] | 1 | 2019-07-06T00:46:47.000Z | 2019-07-06T00:46:47.000Z | app/home/views.py | cyq7on/MicroFilm | d889d2ee035f69b699c57bd4a3353f89c11adb93 | [
"Apache-2.0"
] | null | null | null | app/home/views.py | cyq7on/MicroFilm | d889d2ee035f69b699c57bd4a3353f89c11adb93 | [
"Apache-2.0"
] | null | null | null | # coding:utf8
from . import home


@home.route("/")
def index():
    return "<h1 style='color:green'>this is home</h1>"
| 13.444444 | 54 | 0.628099 | 18 | 121 | 4.222222 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.03 | 0.173554 | 121 | 8 | 55 | 15.125 | 0.73 | 0.090909 | 0 | 0 | 0 | 0 | 0.388889 | 0.222222 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.25 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
c3e21526c1c0fde2c6fe7ebf01c1bb4637d91ff4 | 85 | py | Python | tsa/src/main/python/thalesians/tsa/finance.py | mikimaus78/ml_monorepo | b2c2627ff0e86e27f6829170d0dac168d8e5783b | [
"BSD-3-Clause"
] | 117 | 2017-06-30T14:29:32.000Z | 2022-02-10T00:54:35.000Z | tsa/src/main/python/thalesians/tsa/finance.py | mikimaus78/ml_monorepo | b2c2627ff0e86e27f6829170d0dac168d8e5783b | [
"BSD-3-Clause"
] | 2 | 2019-02-23T18:54:22.000Z | 2019-11-09T01:30:32.000Z | tsa/src/main/python/thalesians/tsa/finance.py | mikimaus78/ml_monorepo | b2c2627ff0e86e27f6829170d0dac168d8e5783b | [
"BSD-3-Clause"
] | 37 | 2017-07-05T19:51:10.000Z | 2021-04-27T00:11:18.000Z | def usd_trade_size_scaling(x, power=.5, factor=.1):
return (x ** power) * factor
| 28.333333 | 51 | 0.670588 | 14 | 85 | 3.857143 | 0.785714 | 0.222222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028169 | 0.164706 | 85 | 2 | 52 | 42.5 | 0.732394 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
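With the default parameters the scaling above is concave in `x`: it maps a trade size to `sqrt(x) / 10`, so a 100x larger trade only scales 10x. A small self-contained check (the function is copied here so the snippet runs on its own):

```python
def usd_trade_size_scaling(x, power=.5, factor=.1):
    return (x ** power) * factor

# With the defaults: sqrt(100) * 0.1 = 1.0, sqrt(10000) * 0.1 = 10.0
assert abs(usd_trade_size_scaling(100) - 1.0) < 1e-9
assert abs(usd_trade_size_scaling(10000) - 10.0) < 1e-9
```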
7f076af938ee4554e8ce7384ec09db857bafbb32 | 193 | py | Python | common/settings.py | timmartin19/pycon-ripozo-tutorial | d6f68d0b7c8c8aacb090014c5ff1f34b21ded017 | [
"MIT"
] | null | null | null | common/settings.py | timmartin19/pycon-ripozo-tutorial | d6f68d0b7c8c8aacb090014c5ff1f34b21ded017 | [
"MIT"
] | null | null | null | common/settings.py | timmartin19/pycon-ripozo-tutorial | d6f68d0b7c8c8aacb090014c5ff1f34b21ded017 | [
"MIT"
] | null | null | null | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
SQLALCHEMY_URI = 'sqlite:///models.sqlite'
| 27.571429 | 42 | 0.854922 | 24 | 193 | 6.041667 | 0.541667 | 0.275862 | 0.441379 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.103627 | 193 | 6 | 43 | 32.166667 | 0.83815 | 0 | 0 | 0 | 0 | 0 | 0.119171 | 0.119171 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.8 | 0 | 0.8 | 0.2 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6154fce9b9d782a0b2321862925ac96b8e9fb22a | 12,574 | py | Python | test/flowchart/generate/Flowchart_Subchart.py | nanofrog/asyncflow | 0412ba0d16219994b2baade76fe4223e402285ec | [
"MIT"
] | 24 | 2021-06-29T05:41:46.000Z | 2021-07-22T02:03:27.000Z | test/flowchart/generate/Flowchart_Subchart.py | nanofrog/asyncflow | 0412ba0d16219994b2baade76fe4223e402285ec | [
"MIT"
] | 1 | 2021-07-29T12:32:03.000Z | 2021-07-29T12:32:03.000Z | test/flowchart/generate/Flowchart_Subchart.py | nanofrog/asyncflow | 0412ba0d16219994b2baade76fe4223e402285ec | [
"MIT"
] | 11 | 2021-06-29T02:39:39.000Z | 2021-07-05T09:37:32.000Z | import asyncflow

## Say("hello")
@asyncflow.func
def Subchart_SubchartTest_01_id_42a43a6e54514e45afcfd15f46cbbe03(self):
    ret = self.Say("hello")
    return True
## SubchartTest_01_sub()
@asyncflow.func
def Subchart_SubchartTest_01_id_dca03dece2a04233a5618dc68c9cf5bd(self):
    ret = asyncflow.call_sub("Subchart.SubchartTest_01_sub", self)
    return True
## Say("end")
@asyncflow.func
def Subchart_SubchartTest_01_id_fc24d0b59b914be2a62cb8e93bb04aa9(self):
    ret = self.Say("end")
    return True
## Say("hellosub")
@asyncflow.func
def Subchart_SubchartTest_01_sub_id_e73bdb60cc12406198c22d25bdaf9383(self):
    ret = self.Say("hellosub")
    return True
## wait(1)
@asyncflow.func
def Subchart_SubchartTest_01_sub_id_266276762f0c4ae59b8baa1b498f05b7(self):
    ret = asyncflow.wait(1)
    return True
## return(1)
@asyncflow.func
def Subchart_SubchartTest_01_sub_id_ea2ddbba34484e36a827e2f73d8ac897(self):
    ret = asyncflow.ret(1)
    return True
## $s1=CreateCharacter()
@asyncflow.func
def Subchart_SubchartTest_02_id_9c62aa07bc894002943af2b9760f9bf3(self):
    ret = asyncflow.set_var(0, self.CreateCharacter())
    return True
## Say("hello")
@asyncflow.func
def Subchart_SubchartTest_02_id_fb738de1dcfd4041a45624584f8a6596(self):
    ret = self.Say("hello")
    return True
## $s1.SubchartTest_01_sub()
@asyncflow.func
def Subchart_SubchartTest_02_id_f67352ec4c7644ddb0cf903835f11f01(self):
    ret = asyncflow.call_sub("Subchart.SubchartTest_01_sub", asyncflow.get_var(0))
    return True
## Say("end")
@asyncflow.func
def Subchart_SubchartTest_02_id_a31dbf21ae574b9eaed2d72273f22091(self):
    ret = self.Say("end")
    return True
## $s1=CreateCharacter()
@asyncflow.func
def Subchart_SubchartTest_03_id_53d285961cbb403482b040d60b7aa6ff(self):
    ret = asyncflow.set_var(0, self.CreateCharacter())
    return True
## $s2=CreateCharacter()
@asyncflow.func
def Subchart_SubchartTest_03_id_d02f8ce037bf4cb2be0440f504764ad0(self):
    ret = asyncflow.set_var(1, self.CreateCharacter())
    return True
## $s={$s1,$s2}
@asyncflow.func
def Subchart_SubchartTest_03_id_d802d10897ae44419a31852331f1e491(self):
    ret = asyncflow.set_var(2, [asyncflow.get_var(0), asyncflow.get_var(1)])
    return True
## $index=1
@asyncflow.func
def Subchart_SubchartTest_03_id_ca65b52b0dbb436887af98e9b59eba68(self):
    ret = asyncflow.set_var(3, 1)
    return True
## Say("hello")
@asyncflow.func
def Subchart_SubchartTest_03_id_370b0b893381472e862ec67b05ed37f5(self):
    ret = self.Say("hello")
    return True
## $s[$index].SubchartTest_01_sub()
@asyncflow.func
def Subchart_SubchartTest_03_id_b49315c6f6d44c8da2a24371dc5e59aa(self):
    ret = asyncflow.call_sub("Subchart.SubchartTest_01_sub", asyncflow.get_var(2)[asyncflow.get_var(3)])
    return True
## $index=1+($index+1)%2
@asyncflow.func
def Subchart_SubchartTest_03_id_133c1db850f448bbb58ab69a64c08ee0(self):
    ret = asyncflow.set_var(3, 1 + (asyncflow.get_var(3) + 1) % 2)
    return True
## Say("end")
@asyncflow.func
def Subchart_SubchartTest_03_id_77c1994817624bf1be2a13a3dd72eea4(self):
    ret = self.Say("end")
    return True
## Say("hello")
@asyncflow.func
def Subchart_SubchartTest_04_id_94165897646e4b1ca4f34c560a37a222(self):
    ret = self.Say("hello")
    return True
## SubchartTest_04_sub()
@asyncflow.func
def Subchart_SubchartTest_04_id_b1b6ff9be3104b0cab7f7af889e89d53(self):
    ret = asyncflow.call_sub("Subchart.SubchartTest_04_sub", self)
    return ret
## Say("end")
@asyncflow.func
def Subchart_SubchartTest_04_id_3d865c0443344fc79e675f19126bba6f(self):
    ret = self.Say("end")
    return True
## Say("hellosub")
@asyncflow.func
def Subchart_SubchartTest_04_sub_id_98a94c83e0424336a633ff94baa3a3e9(self):
    ret = self.Say("hellosub")
    return True
## wait(1)
@asyncflow.func
def Subchart_SubchartTest_04_sub_id_22db35f73af84406b441e216f7dc4852(self):
    ret = asyncflow.wait(1)
    return True
## $s1=CreateCharacter()
@asyncflow.func
def Subchart_SubchartTest_05_id_d6cd6a2b73e345fcb276e4e16ec9422e(self):
    ret = asyncflow.set_var(0, self.CreateCharacter())
    return True
## Say("hello")
@asyncflow.func
def Subchart_SubchartTest_05_id_4d0de3891cf24f6ba32e8c2ce3b2efd9(self):
    ret = self.Say("hello")
    return True
## $s1.SubchartTest_04_sub()
@asyncflow.func
def Subchart_SubchartTest_05_id_081c769291b44bd2bc2c49ab14ed6520(self):
    ret = asyncflow.call_sub("Subchart.SubchartTest_04_sub", asyncflow.get_var(0))
    return ret
## Say("end")
@asyncflow.func
def Subchart_SubchartTest_05_id_eecc6db7972445c9b24d38d3b5d6e02e(self):
    ret = self.Say("end")
    return True
## $s1=CreateCharacter()
@asyncflow.func
def Subchart_SubchartTest_06_id_90ee7f98deb04f14b95c689f558eacfa(self):
    ret = asyncflow.set_var(0, self.CreateCharacter())
    return True
## $s2=CreateCharacter()
@asyncflow.func
def Subchart_SubchartTest_06_id_57b18856751b4df28098e7e0257bc109(self):
    ret = asyncflow.set_var(1, self.CreateCharacter())
    return True
## $s={$s1,$s2}
@asyncflow.func
def Subchart_SubchartTest_06_id_0ded790ca5d94adca08635c4f0550e38(self):
    ret = asyncflow.set_var(2, [asyncflow.get_var(0), asyncflow.get_var(1)])
    return True
## $index=1
@asyncflow.func
def Subchart_SubchartTest_06_id_07130982db514acc8381dc1ac12ce561(self):
    ret = asyncflow.set_var(3, 1)
    return True
## Say("hello")
@asyncflow.func
def Subchart_SubchartTest_06_id_119bffe10d054c479b41c94e26f3df06(self):
    ret = self.Say("hello")
    return True
## $s[$index].SubchartTest_04_sub()
@asyncflow.func
def Subchart_SubchartTest_06_id_767f30ee29ce4085bc9c0ee2f511f49d(self):
    ret = asyncflow.call_sub("Subchart.SubchartTest_04_sub", asyncflow.get_var(2)[asyncflow.get_var(3)])
    return ret
## $index=1+($index+1)%2
@asyncflow.func
def Subchart_SubchartTest_06_id_1f7e7ecd8cd647238ee277018e7744f6(self):
    ret = asyncflow.set_var(3, 1 + (asyncflow.get_var(3) + 1) % 2)
    return True
## Say("end")
@asyncflow.func
def Subchart_SubchartTest_06_id_4e2f23bb60814ac19ad2c0de288116d4(self):
    ret = self.Say("end")
    return True
## Say("hello")
@asyncflow.func
def Subchart_SubchartTest_07_id_cc5b038230ff498988d5e3c97486f8d0(self):
    ret = self.Say("hello")
    return True
## SubchartTest_07_sub()
@asyncflow.func
def Subchart_SubchartTest_07_id_9d7479fd0ed64117ac868fa914e6bf23(self):
    ret = asyncflow.call_sub("Subchart.SubchartTest_07_sub", self)
    return True
## Say("end")
@asyncflow.func
def Subchart_SubchartTest_07_id_69f4517afe49446d81244fac4a1f69cc(self):
    ret = self.Say("end")
    return True
## Say("hellosub")
@asyncflow.func
def Subchart_SubchartTest_07_sub_id_dc32fef005424047a0cd29588cf9304e(self):
    ret = self.Say("hellosub")
    return True
## return(1)
@asyncflow.func
def Subchart_SubchartTest_07_sub_id_3881148c21a44fc29280ff8b3330fa4f(self):
    ret = asyncflow.ret(1)
    return True
## $s1=CreateCharacter()
@asyncflow.func
def Subchart_SubchartTest_08_id_ecb7cf4b2d254ae38663b6ca76241c22(self):
    ret = asyncflow.set_var(0, self.CreateCharacter())
    return True
## Say("hello")
@asyncflow.func
def Subchart_SubchartTest_08_id_0ce4ec77462e491eb630ae99e9650937(self):
    ret = self.Say("hello")
    return True
## $s1.SubchartTest_07_sub()
@asyncflow.func
def Subchart_SubchartTest_08_id_c2b07004eedd4cd5aaecc630dde26f8f(self):
    ret = asyncflow.call_sub("Subchart.SubchartTest_07_sub", asyncflow.get_var(0))
    return True
## Say("end")
@asyncflow.func
def Subchart_SubchartTest_08_id_441812d2ea634ca1891a2343159a0e9a(self):
    ret = self.Say("end")
    return True
## $s1=CreateCharacter()
@asyncflow.func
def Subchart_SubchartTest_09_id_c633b30637554a21a23d91a310e20ac6(self):
    ret = asyncflow.set_var(0, self.CreateCharacter())
    return True
## $s2=CreateCharacter()
@asyncflow.func
def Subchart_SubchartTest_09_id_5a84181fef9644aaaa5433b337941ce2(self):
    ret = asyncflow.set_var(1, self.CreateCharacter())
    return True
## $s={$s1,$s2}
@asyncflow.func
def Subchart_SubchartTest_09_id_e10130034cd14bc39fea3047237c975c(self):
    ret = asyncflow.set_var(2, [asyncflow.get_var(0), asyncflow.get_var(1)])
    return True
## $index=1
@asyncflow.func
def Subchart_SubchartTest_09_id_53dde06bda054e46a38bf7f721c5e538(self):
    ret = asyncflow.set_var(3, 1)
    return True
## Say("hello")
@asyncflow.func
def Subchart_SubchartTest_09_id_7147e19370734e80abd721248e055e1e(self):
    ret = self.Say("hello")
    return True
## $s[$index].SubchartTest_07_sub()
@asyncflow.func
def Subchart_SubchartTest_09_id_161afe7f9987433aa42e71d691785b95(self):
    ret = asyncflow.call_sub("Subchart.SubchartTest_07_sub", asyncflow.get_var(2)[asyncflow.get_var(3)])
    return True
## $index=1+($index+1)%2
@asyncflow.func
def Subchart_SubchartTest_09_id_b13723e7fa6944b0941fbb4de31b0ff2(self):
    ret = asyncflow.set_var(3, 1 + (asyncflow.get_var(3) + 1) % 2)
    return True
## Say("end")
@asyncflow.func
def Subchart_SubchartTest_09_id_c0d87c34083e42bba62f6803b1da01ad(self):
    ret = self.Say("end")
    return True
## $s1=CreateCharacter()
@asyncflow.func
def Subchart_SubchartTest_10_id_e1389f5d6a5b41559a579782cb791f31(self):
    ret = asyncflow.set_var(0, self.CreateCharacter())
    return True
## $s1.SubchartTest_10_sub()
@asyncflow.func
def Subchart_SubchartTest_10_id_de1254a491ea42f0aa1f6935ab06102c(self):
    ret = asyncflow.call_sub("Subchart.SubchartTest_10_sub", asyncflow.get_var(0))
    return True
## deregister($s1)
@asyncflow.func
def Subchart_SubchartTest_10_id_19ef73dcd7e946e7b5c2cbc34627d280(self):
    ret = asyncflow.deregister(asyncflow.get_var(0))
    return True
## Say("end")
@asyncflow.func
def Subchart_SubchartTest_10_id_a896783bee8d414199a15c202d7b3736(self):
    ret = self.Say("end")
    return True
## Say("joinsub")
@asyncflow.func
def Subchart_SubchartTest_10_sub_id_37fb63219c8249df91a47fa358bfeeae(self):
    ret = self.Say("joinsub")
    return True
## wait(1)
@asyncflow.func
def Subchart_SubchartTest_10_sub_id_bec46862dfd441ff8121b81dcc28df46(self):
    ret = asyncflow.wait(1)
    return True
## return(1)
@asyncflow.func
def Subchart_SubchartTest_10_sub_id_36f67553b0044196ba245152c0987544(self):
    ret = asyncflow.ret(1)
    return True
## $s1=0
@asyncflow.func
def Subchart_SubchartTest_11_id_4c2b870e43ba459494314448ce047cfd(self):
    ret = asyncflow.set_var(0, 0)
    return True
## $s1<3
@asyncflow.func
def Subchart_SubchartTest_11_id_8134b765856745239df8c754ff132868(self):
    ret = asyncflow.get_var(0) < 3
    return ret
## SubchartTest_01_sub()
@asyncflow.func
def Subchart_SubchartTest_11_id_b0909c46d2c9464a9cef61c790964e18(self):
    ret = asyncflow.call_sub("Subchart.SubchartTest_01_sub", self)
    return True
## $s1=$s1+1
@asyncflow.func
def Subchart_SubchartTest_11_id_d8e5d41748cd4f79b0a8b879baec801f(self):
    ret = asyncflow.set_var(0, asyncflow.get_var(0) + 1)
    return True
## Say("end")
@asyncflow.func
def Subchart_SubchartTest_11_id_2ea6055927634502a84584bbd098f83b(self):
    ret = self.Say("end")
    return True
## $s1=0
@asyncflow.func
def Subchart_SubchartTest_12_id_f91d56f7249d437698870d204a8c819f(self):
    ret = asyncflow.set_var(0, 0)
    return True
## $s1<3
@asyncflow.func
def Subchart_SubchartTest_12_id_295a1f172ea24251802f1d8a9ace7a9d(self):
    ret = asyncflow.get_var(0) < 3
    return ret
## SubchartTest_04_sub()
@asyncflow.func
def Subchart_SubchartTest_12_id_22db2edad8c04a7381589fc1c4f0050a(self):
    ret = asyncflow.call_sub("Subchart.SubchartTest_04_sub", self)
    return ret
## $s1=$s1+1
@asyncflow.func
def Subchart_SubchartTest_12_id_c0bbb9a37a464ef78138014769ec09b7(self):
    ret = asyncflow.set_var(0, asyncflow.get_var(0) + 1)
    return True
## Say("end")
@asyncflow.func
def Subchart_SubchartTest_12_id_9c436c7877f445eeb9a446bb1401824c(self):
    ret = self.Say("end")
    return True
## $s1=0
@asyncflow.func
def Subchart_SubchartTest_13_id_d02710f383004ded9eed2e1125cb75e7(self):
    ret = asyncflow.set_var(0, 0)
    return True
## $s1<3
@asyncflow.func
def Subchart_SubchartTest_13_id_4d9b024b013d4e32920bb87454570c1e(self):
    ret = asyncflow.get_var(0) < 3
    return ret
## SubchartTest_07_sub()
@asyncflow.func
def Subchart_SubchartTest_13_id_7bbd70650b024598aeca6d34d5ad70c2(self):
    ret = asyncflow.call_sub("Subchart.SubchartTest_07_sub", self)
    return True
## $s1=$s1+1
@asyncflow.func
def Subchart_SubchartTest_13_id_8151dfbbf8e54164a7ea6cd70125a107(self):
    ret = asyncflow.set_var(0, asyncflow.get_var(0) + 1)
    return True
## Say("end")
@asyncflow.func
def Subchart_SubchartTest_13_id_76b0b09faf18488fb468531138d5696d(self):
    ret = self.Say("end")
    return True
| 32.241026 | 104 | 0.778591 | 1,526 | 12,574 | 6.138925 | 0.077982 | 0.185739 | 0.126388 | 0.189582 | 0.736336 | 0.736336 | 0.735909 | 0.683711 | 0.597459 | 0.554441 | 0 | 0.166622 | 0.11317 | 12,574 | 389 | 105 | 32.323907 | 0.673482 | 0.091061 | 0 | 0.717172 | 0 | 0 | 0.042468 | 0.032272 | 0 | 0 | 0 | 0 | 0 | 1 | 0.249158 | false | 0 | 0.003367 | 0 | 0.501684 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
619467554ac8d464db10f377d6433a9beae4feb3 | 177 | py | Python | cgxsh_encrypt_config.py | ebob9/cgxsh | 0682922bae4354d2e306147e314dd309da968059 | [
"MIT"
] | null | null | null | cgxsh_encrypt_config.py | ebob9/cgxsh | 0682922bae4354d2e306147e314dd309da968059 | [
"MIT"
] | 3 | 2020-02-10T00:01:18.000Z | 2022-03-28T00:26:45.000Z | cgxsh_encrypt_config.py | ebob9/cgxsh | 0682922bae4354d2e306147e314dd309da968059 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
import sys
from cgxsh_lib.file_crypto import encrypt_config_file

if __name__ == '__main__':
    sys.exit(encrypt_config_file())
| 19.666667 | 53 | 0.723164 | 26 | 177 | 4.384615 | 0.769231 | 0.22807 | 0.298246 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013072 | 0.135593 | 177 | 8 | 54 | 22.125 | 0.732026 | 0.242938 | 0 | 0 | 0 | 0 | 0.060606 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
4eebcf72a1c7fb13b3a3b39a1346f5851f34e22a | 10,904 | py | Python | src.py | edbezci/Hasher | 9752175c3087d99b6bb20ddc688fe5450a7e7aca | [
"MIT"
] | null | null | null | src.py | edbezci/Hasher | 9752175c3087d99b6bb20ddc688fe5450a7e7aca | [
"MIT"
] | null | null | null | src.py | edbezci/Hasher | 9752175c3087d99b6bb20ddc688fe5450a7e7aca | [
"MIT"
] | null | null | null | import pygame
import os
import random
import math
import sys
import timeit
import functools
import hlp
import intro
import md5
import sha1
import sha256
# initialise Pygame library, it is necessary in Programs using Pygame
pygame.init()
line_colour = pygame.Color(50, 50, 120)
# initialise window size at 800 * 550 with a caption
display = pygame.display.set_mode((1280, 550), pygame.FULLSCREEN | pygame.DOUBLEBUF | pygame.HWSURFACE)
pygame.display.set_caption("Hashing Algorithm Comparison Tool")
# the clock limits how many frames are rendered per second
clock = pygame.time.Clock()
#Setting up the background image
background_image = os.path.abspath("resources/desc_bgrnd.jpg")
bg_image = pygame.image.load(background_image)
bg = pygame.transform.scale(bg_image, (1280,550))
# Font
bitterfont = os.path.abspath("resources/bitterfont.otf")
def dspMd5():
pygame.display.set_caption("MD5 Algorithm") # adding a caption
display = pygame.display.set_mode((1280, 550),pygame.FULLSCREEN)
effc = timeit.Timer(functools.partial(md5.md5,intro.secret))
efcTime = effc.timeit(15)
    effcsha1 = timeit.Timer(functools.partial(sha1.sha1, intro.secret))
    efcTimesha1 = effcsha1.timeit(15)
    effcSHA2 = timeit.Timer(functools.partial(sha256.sha256, intro.secret))
    efcTimeSHA2 = effcSHA2.timeit(15)
m = max(efcTime,efcTimesha1,efcTimeSHA2)
efcMD5 = (efcTime / m) * 250
efcSHA1 = (efcTimesha1 / m) * 250
efcSHA2 = (efcTimeSHA2 / m) * 250
while True: # starting the game loop
display.fill((0,0,0)) # pygame method to fill the screen, takes colours and a display object
#display.blit(bg,(0,0))
# pygame method, iterates over the events in pygame to determine what we are doing with every event
for event in pygame.event.get():
if event.type == pygame.QUIT: # this one quits
pygame.quit() # putting the quit pygame method
exit() # takes the user from GUI to the script for exiting
            if event.type == pygame.KEYUP: # recognise when a keyboard key is released
if event.key == pygame.K_ESCAPE: # if that keyboard key is ESC
exit() # call for the exit function.
surface = pygame.display.get_surface()
w,h = surface.get_size()
loc1, loc2 = surface.get_size()
w = (w/2) - 60
h = (h/2) - 200
hlp.AddText("Encrypted Text: "+ intro.secret, (w-150, h))
hlp.AddText("Padded Text: " + str(md5.padding(intro.secret))[0:15]+"...", (w-150,h+40))
hlp.AddText("MD5 Hashing: "+ md5.md5(intro.secret), (w -150, loc2 - 175))
hlp.AddText(str(round(float(efcTime),4)) +" seconds passed to execute this algorithm", (w -150, 5*loc2/6))
hlp.Button("Exit", 350, 5, 100,
30, sys.exit)
back = hlp.ButtonWithReturn("Back", 900, 5, 100, 30, 1)
if back > 0: # if back has a value, which means it has been clicked, stop the bigger loop that we started, i.e. the game loop, and break the game loop
break
pygame.draw.line(display, line_colour, (400, 250), (400, 500), 2)
# pygame method, takes display, colour, and positions of where the lines start and end
pygame.draw.line(display, line_colour, (350, 500), (950, 500), 2)
hlp.AddText("MD5", (465, 510), hlp.white)
hlp.AddText("SHA1", (615, 510), hlp.white)
hlp.AddText("SHA256", (765, 510), hlp.white)
hlp.AddText("0s",(395,505), hlp.white)
hlp.AddText(str(round(float(efcTime),5)), (460, 230), hlp.white)
hlp.AddText(str(round(float(efcTimesha1),5)), (610, 230), hlp.white)
hlp.AddText(str(round(float(efcTimeSHA2),5)), (760, 230), hlp.white)
bPos = 500
pygame.draw.rect(display, hlp.button_colour, (465, bPos-efcMD5, 50, efcMD5))
pygame.draw.rect(display, hlp.button_colour, (615, bPos-efcSHA1, 50, efcSHA1))
pygame.draw.rect(display, hlp.button_colour, (765, bPos-efcSHA2, 50, efcSHA2))
pygame.display.flip()
# will not run more than 30 frames per second
clock.tick(30)
intro.Introduction2()
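The timing pattern used in `dspMd5` (a `timeit.Timer` over `functools.partial`, run 15 times) can be sketched in isolation; `toy_hash` is a hypothetical stand-in for the project's `md5.md5` / `sha1.sha1` / `sha256.sha256` implementations:

```python
import functools
import hashlib
import timeit

def toy_hash(message):
    # Hypothetical stand-in for the hash implementations timed above.
    return hashlib.md5(message.encode()).hexdigest()

# functools.partial binds the argument so the Timer can call the target
# with no arguments; timeit(15) returns the TOTAL time for 15 runs.
timer = timeit.Timer(functools.partial(toy_hash, "secret"))
total_seconds = timer.timeit(15)
per_call = total_seconds / 15
```

Note that `timeit(15)` returns the cumulative time, not a per-run average, so comparisons are only fair when every timer uses the same run count.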
def dspSHA1():
    pygame.display.set_caption("SHA1 Algorithm") # adding a caption
display = pygame.display.set_mode((1280, 550),pygame.FULLSCREEN)
effc = timeit.Timer(functools.partial(sha1.sha1,intro.secret))
efcTime = effc.timeit(15)
    effcMDA = timeit.Timer(functools.partial(md5.md5, intro.secret))
    efcTimeMDA = effcMDA.timeit(15)
    effcSHA2 = timeit.Timer(functools.partial(sha256.sha256, intro.secret))
    efcTimeSHA2 = effcSHA2.timeit(15)
m = max(efcTime,efcTimeMDA,efcTimeSHA2)
efcMD5 = (efcTimeMDA / m) * 250
efcSHA1 = (efcTime / m) * 250
efcSHA2 = (efcTimeSHA2 / m) * 250
while True: # starting the game loop
display.fill((0,0,0)) # pygame method to fill the screen, takes colours and a display object
#display.blit(bg,(0,0))
# pygame method, iterates over the events in pygame to determine what we are doing with every event
for event in pygame.event.get():
if event.type == pygame.QUIT: # this one quits
pygame.quit() # putting the quit pygame method
exit() # takes the user from GUI to the script for exiting
            if event.type == pygame.KEYUP: # recognise when a keyboard key is released
if event.key == pygame.K_ESCAPE: # if that keyboard key is ESC
exit() # call for the exit function.
surface = pygame.display.get_surface()
loc1, loc2 = surface.get_size()
w,h = surface.get_size()
w = (w/2) - 60
h = (h/2) - 200
hlp.AddText("Encrypted Text: "+ intro.secret, (w-150, h))
hlp.AddText("Padded Text: " + str(md5.padding(intro.secret))[0:15]+"...", (w-150,h+30))
hlp.AddText("SHA1 Hashing: "+ sha1.sha1(intro.secret), (w -150, loc2 - 175))
hlp.AddText(str(round(float(efcTime),4)) +" seconds passed to execute this algorithm", (w-150, 5*loc2/6))
hlp.Button("Exit", 350, 5, 100,
30, sys.exit)
back = hlp.ButtonWithReturn("Back", 900, 5, 100, 30, 1)
if back > 0: # if back has a value, which means it has been clicked, stop the bigger loop that we started, i.e. the game loop, and break the game loop
break
pygame.draw.line(display, line_colour, (400, 250), (400, 500), 2)
# pygame method, takes display, colour, and positions of where the lines start and end
pygame.draw.line(display, line_colour, (350, 500), (950, 500), 2)
hlp.AddText("0s",(395,505), hlp.white)
hlp.AddText("SHA1", (465, 510), hlp.white)
hlp.AddText("MD5", (615, 510), hlp.white)
hlp.AddText("SHA256", (765, 510), hlp.white)
hlp.AddText(str(round(float(efcTime),5)), (460, 230), hlp.white)
hlp.AddText(str(round(float(efcTimeMDA),5)), (610, 230), hlp.white)
hlp.AddText(str(round(float(efcTimeSHA2),5)), (760, 230), hlp.white)
bPos = 500
pygame.draw.rect(display, hlp.button_colour, (465, bPos-efcSHA1, 50, efcSHA1))
pygame.draw.rect(display, hlp.button_colour, (615, bPos-efcMD5, 50, efcMD5))
pygame.draw.rect(display, hlp.button_colour, (765, bPos-efcSHA2, 50, efcSHA2))
pygame.display.flip()
# will not run more than 30 frames per second
clock.tick(30)
intro.Introduction2()
def dspSHA256():
    pygame.display.set_caption("SHA256 Algorithm") # adding a caption
display = pygame.display.set_mode((1280, 550),pygame.FULLSCREEN)
effc = timeit.Timer(functools.partial(sha256.sha256,intro.secret))
efcTime = effc.timeit(15)
    effcSHA1 = timeit.Timer(functools.partial(sha1.sha1, intro.secret))
    efcTimeSHA1 = effcSHA1.timeit(15)
    effcMDA = timeit.Timer(functools.partial(md5.md5, intro.secret))
    efcTimeMDA = effcMDA.timeit(15)
m = max(efcTime,efcTimeMDA,efcTimeSHA1)
efcMD5 = (efcTimeMDA / m) * 250
efcSHA2 = (efcTime / m) * 250
efcSHA1 = (efcTimeSHA1 / m) * 250
while True: # starting the game loop
display.fill((0,0,0)) # pygame method to fill the screen, takes colours and a display object
#display.blit(bg,(0,0))
# pygame method, iterates over the events in pygame to determine what we are doing with every event
for event in pygame.event.get():
if event.type == pygame.QUIT: # this one quits
pygame.quit() # putting the quit pygame method
exit() # takes the user from GUI to the script for exiting
            if event.type == pygame.KEYUP: # recognise when a keyboard key is released
if event.key == pygame.K_ESCAPE: # if that keyboard key is ESC
exit() # call for the exit function.
surface = pygame.display.get_surface()
loc1, loc2 = surface.get_size()
w,h = surface.get_size()
w = (w/2) - 60
h = (h/2) - 200
hlp.AddText("Encrypted Text: "+ intro.secret, (w-150, h))
hlp.AddText("Padded Text: " + str(md5.padding(intro.secret))[0:15]+"...", (w-150,h+30))
hlp.AddText("SHA256 Hashing: "+ sha256.sha256(intro.secret), (w -150, loc2 - 175))
hlp.AddText(str(round(float(efcTime),4)) +" seconds passed to execute this algorithm", (w-150, 5*loc2/6))
hlp.Button("Exit", 350, 5, 100,
30, sys.exit)
back = hlp.ButtonWithReturn("Back", 900, 5, 100, 30, 1)
if back > 0: # if back has a value, which means it has been clicked, stop the bigger loop that we started, i.e. the game loop, and break the game loop
break
pygame.draw.line(display, line_colour, (400, 250), (400, 500), 2)
# pygame method, takes display, colour, and positions of where the lines start and end
pygame.draw.line(display, line_colour, (350, 500), (950, 500), 2)
hlp.AddText("0s",(395,505), hlp.white)
hlp.AddText("SHA256", (465, 510), hlp.white)
hlp.AddText("MD5", (615, 510), hlp.white)
hlp.AddText("SHA1", (765, 510), hlp.white)
hlp.AddText(str(round(float(efcTime),5)), (460, 230), hlp.white)
hlp.AddText(str(round(float(efcTimeMDA),5)), (610, 230), hlp.white)
hlp.AddText(str(round(float(efcTimeSHA1),5)), (760, 230), hlp.white)
bPos = 500
pygame.draw.rect(display, hlp.button_colour, (465, bPos-efcSHA2, 50, efcSHA2))
pygame.draw.rect(display, hlp.button_colour, (615, bPos-efcMD5, 50, efcMD5))
pygame.draw.rect(display, hlp.button_colour, (765, bPos-efcSHA1, 50, efcSHA1))
pygame.display.flip()
# will not run more than 30 frames per second
clock.tick(30)
intro.Introduction2()
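All three display functions normalise the timings to bar heights with `(t / m) * 250`, so the slowest algorithm always fills the 250 px chart. The normalisation on its own:

```python
def bar_heights(times, max_px=250):
    # Scale each timing to a bar height in pixels, relative to the
    # slowest algorithm, mirroring the (t / m) * 250 normalisation
    # used before drawing the three pygame rectangles.
    m = max(times)
    return [(t / m) * max_px for t in times]

heights = bar_heights([0.002, 0.004, 0.001])
```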
# --- tests/commands/test_autofill.py (gnutix/taxi, WTFPL) ---
from freezegun import freeze_time
@freeze_time('2012-02-20')
def test_autofill_bottom(cli, config, entries_file):
config.set_dict({
'taxi': {
'auto_fill_days': '1',
'auto_add': 'bottom'
}
})
cli('autofill')
entries_file_contents = entries_file.readlines()
assert entries_file_contents == [
"07/02/2012\n", "\n", "14/02/2012\n", "\n", "21/02/2012\n", "\n",
"28/02/2012\n"
]
@freeze_time('2012-02-20')
def test_autofill_top(cli, config, entries_file):
config.set_dict({
'taxi': {
'auto_fill_days': '1',
'auto_add': 'top'
}
})
cli('autofill')
entries_file_contents = entries_file.readlines()
assert entries_file_contents == [
"28/02/2012\n", "\n", "21/02/2012\n", "\n", "14/02/2012\n", "\n",
"07/02/2012\n"
]
@freeze_time('2012-02-20')
def test_autofill_existing_entries(cli, config, entries_file):
config.set_dict({
'taxi': {
'auto_fill_days': '1',
'auto_add': 'top'
}
})
entries_file.write("15/02/2012\n\n07/02/2012")
cli('autofill')
entries_file_contents = entries_file.readlines()
assert entries_file_contents == [
"28/02/2012\n", "\n", "21/02/2012\n", "\n", "15/02/2012\n", "\n",
"07/02/2012\n"
]
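With `freeze_time('2012-02-20')` and `auto_fill_days: '1'` (Tuesday), the tests expect every Tuesday of the frozen month: 07/02, 14/02, 21/02 and 28/02. The date arithmetic behind that expectation can be sketched with the standard library (this is an illustrative sketch, not taxi's actual implementation):

```python
import datetime

def month_weekdays(frozen_today, weekday):
    # All dates in frozen_today's month falling on `weekday` (0 = Monday).
    # With the tests' frozen date 2012-02-20 and weekday 1 (Tuesday,
    # matching auto_fill_days '1'), this yields the four entries
    # asserted above.
    first = frozen_today.replace(day=1)
    offset = (weekday - first.weekday()) % 7
    day = first + datetime.timedelta(days=offset)
    dates = []
    while day.month == frozen_today.month:
        dates.append(day)
        day += datetime.timedelta(weeks=1)
    return dates

tuesdays = month_weekdays(datetime.date(2012, 2, 20), 1)
```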
# --- cprint.py (hansalemaos/kivy_widget_attribute_printer, MIT) ---
class cprint:
'''
Use like:
cprint.red('Hallo')
cprint.green('green')
print(cprint.red('Hallo', False) + cprint.green('green', False))
'''
Red = '\033[91m'
Green = '\033[92m'
Blue = '\033[94m'
Cyan = '\033[96m'
White = '\033[97m'
Yellow = '\033[93m'
Magenta = '\033[95m'
Grey = '\033[90m'
    Black = '\033[30m'
    Default = '\033[39m'
ENDC = '\033[0m'
BOLD = '\033[1m'
UNDERLINE = '\033[4m'
@staticmethod
def red(text, printtext=True):
if printtext:
print(cprint.Red + str(text) + cprint.ENDC + cprint.Default)
return ''
elif not printtext:
return cprint.Red + str(text) + cprint.ENDC + cprint.Default
def green(text, printtext=True):
if printtext:
print(cprint.Green + str(text) + cprint.ENDC + cprint.Default)
return ''
elif not printtext:
return cprint.Green + str(text) + cprint.ENDC + cprint.Default
def blue(text, printtext=True):
if printtext:
print(cprint.Blue + str(text) + cprint.ENDC + cprint.Default)
return ''
elif not printtext:
return cprint.Blue + str(text) + cprint.ENDC + cprint.Default
def cyan(text, printtext=True):
if printtext:
print(cprint.Cyan + str(text) + cprint.ENDC + cprint.Default)
return ''
elif not printtext:
return cprint.Cyan + str(text) + cprint.ENDC + cprint.Default
def white(text, printtext=True):
if printtext:
print(cprint.White + str(text) + cprint.ENDC + cprint.Default)
return ''
elif not printtext:
return cprint.White + str(text) + cprint.ENDC + cprint.Default
def yellow(text, printtext=True):
if printtext:
print(cprint.Yellow + str(text) + cprint.ENDC + cprint.Default)
return ''
elif not printtext:
return cprint.Yellow + str(text) + cprint.ENDC + cprint.Default
def magenta(text, printtext=True):
if printtext:
print(cprint.Magenta + str(text) + cprint.ENDC + cprint.Default)
return ''
elif not printtext:
return cprint.Magenta + str(text) + cprint.ENDC + cprint.Default
def grey(text, printtext=True):
if printtext:
print(cprint.Grey + str(text) + cprint.ENDC + cprint.Default)
return ''
elif not printtext:
return cprint.Grey + str(text) + cprint.ENDC + cprint.Default
def black(text, printtext=True):
if printtext:
print(cprint.Black + str(text) + cprint.ENDC + cprint.Default)
return ''
elif not printtext:
return cprint.Black + str(text) + cprint.ENDC + cprint.Default
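The class wraps ANSI SGR escape sequences: each method prefixes the text with a colour code and appends a reset so the colour does not bleed into later output. The composition pattern on its own (a standalone sketch, not tied to the class):

```python
RED = '\033[91m'
GREEN = '\033[92m'
RESET = '\033[0m'

def colour(text, code):
    # Wrap the text in an SGR colour sequence and reset afterwards so
    # the colour does not leak into subsequent terminal output.
    return code + str(text) + RESET

line = colour('Hallo', RED) + ' ' + colour('green', GREEN)
```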
# --- src/Model/EatMove.py (anthonyf996/ChessApp, MIT) ---
from SimpleMove import SimpleMove
from MoveType import MoveType
class EatMove(SimpleMove):
def __init__(self, board, startPos, endPos):
super().__init__(board, startPos, endPos)
def getMoveType(self):
return MoveType.EAT
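The pattern here (a subclass that reuses the parent constructor via `super()` and overrides only the classifier method) can be sketched without the chess dependencies; `Move`/`CaptureMove`/`MoveKind` below are hypothetical stand-ins for `SimpleMove`/`EatMove`/`MoveType`:

```python
import enum

class MoveKind(enum.Enum):
    SIMPLE = 'simple'
    EAT = 'eat'

class Move:
    def __init__(self, board, start_pos, end_pos):
        self.board = board
        self.start_pos = start_pos
        self.end_pos = end_pos

    def get_move_type(self):
        return MoveKind.SIMPLE

class CaptureMove(Move):
    def __init__(self, board, start_pos, end_pos):
        # Defer all state handling to the parent constructor,
        # exactly as EatMove does via super().__init__.
        super().__init__(board, start_pos, end_pos)

    def get_move_type(self):
        # Only the classification differs from the base move.
        return MoveKind.EAT

move = CaptureMove(board=None, start_pos=(0, 0), end_pos=(1, 1))
```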
# --- tests/test_config.py (skoenig/website-monitor, MIT) ---
import pytest
from monitor.config import config
from monitor.config import configure
def test_config_type():
assert type(config) == dict
def test_nonexistent_file():
with pytest.raises(FileNotFoundError):
configure("/tmp/foobar.yaml")
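`pytest.raises` asserts that the block raises the named exception; the standard-library equivalent is `unittest`'s `assertRaises` context manager, sketched here against `open` on a path that should not exist:

```python
import unittest

class MissingFileTest(unittest.TestCase):
    def test_nonexistent_file(self):
        # stdlib equivalent of pytest.raises(FileNotFoundError): the
        # test passes only if the with-block raises that exception.
        with self.assertRaises(FileNotFoundError):
            open('/tmp/this-directory-should-not-exist-0000/foobar.yaml')

suite = unittest.TestLoader().loadTestsFromTestCase(MissingFileTest)
outcome = unittest.TextTestRunner(verbosity=0).run(suite)
```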
# --- models/conv_ae.py (liujiyuan13/MvDOCC-code, MIT) ---
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.init import xavier_normal_
slope = 0  # negative slope shared by every LeakyReLU below; 0 makes them plain ReLU
def mv_outer_product(X):
batch_size = X[0].size(0)
X = [torch.cat([X[i], torch.ones(batch_size, 1).cuda()], dim=-1) for i in range(len(X))]
for i in range(len(X) - 1):
if i == 0:
cur_fused_tensor = torch.bmm(X[i].unsqueeze(2), X[i+1].unsqueeze(1))
else:
cur_fused_tensor = cur_fused_tensor.view(batch_size, -1, 1)
cur_fused_tensor = torch.bmm(cur_fused_tensor, X[i+1].unsqueeze(1))
return cur_fused_tensor.view(batch_size, -1)
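`mv_outer_product` appends a constant 1 (a bias term) to each view and chains batched outer products, so the fused vector has prod(d_i + 1) entries per sample. A minimal CPU-only sketch of the two-view case, without torch:

```python
def outer_fuse(x, y):
    # Append the constant 1 (bias term) to each view, then take the
    # outer product; flattened, this is one sample's row of
    # mv_outer_product's output for the two-view case.
    x = list(x) + [1.0]
    y = list(y) + [1.0]
    return [xi * yj for xi in x for yj in y]

fused = outer_fuse([2.0, 3.0], [4.0])
```

Keeping the bias term means the fused tensor retains each view's original features (via products with 1) alongside all cross-view interaction terms.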
class CAE_pytorch(nn.Module):
def __init__(self, in_channels=3, rep_dim=32):
super(CAE_pytorch, self).__init__()
nf = 32
self.nf = nf
        # Encoder: 32x32 x in_channels -> 16x16x32 -> 8x8x64 -> 4x4x128 -> rep_dim
self.enc_conv1 = nn.Conv2d(in_channels=in_channels, out_channels=nf, kernel_size=3, stride=2, padding=1)
self.enc_bn1 = nn.BatchNorm2d(num_features=nf)
self.enc_act1 = nn.LeakyReLU(slope, inplace=True)
self.enc_conv2 = nn.Conv2d(in_channels=nf, out_channels=nf * 2, kernel_size=3, stride=2, padding=1)
self.enc_bn2 = nn.BatchNorm2d(num_features=nf * 2)
self.enc_act2 = nn.LeakyReLU(slope, inplace=True)
self.enc_conv3 = nn.Conv2d(in_channels=nf * 2, out_channels=nf * 4, kernel_size=3, stride=2, padding=1)
self.enc_bn3 = nn.BatchNorm2d(num_features=nf * 4)
self.enc_act3 = nn.LeakyReLU(slope, inplace=True)
self.enc_fc = nn.Linear(nf * 2 * 2 * 16, rep_dim)
# Decoder
self.dec_fc = nn.Linear(rep_dim, nf * 2 * 2 * 16)
self.dec_bn0 = nn.BatchNorm1d(num_features=nf * 2 * 2 * 16)
self.dec_act0 = nn.LeakyReLU(slope, inplace=True)
self.dec_conv1 = nn.ConvTranspose2d(in_channels=nf * 4, out_channels=nf * 2, kernel_size=3, stride=2, padding=1, output_padding=1)
self.dec_bn1 = nn.BatchNorm2d(num_features=nf * 2)
self.dec_act1 = nn.LeakyReLU(slope, inplace=True)
self.dec_conv2 = nn.ConvTranspose2d(in_channels=nf * 2, out_channels=nf, kernel_size=3, stride=2, padding=1, output_padding=1)
self.dec_bn2 = nn.BatchNorm2d(num_features=nf)
self.dec_act2 = nn.LeakyReLU(slope, inplace=True)
self.dec_conv3 = nn.ConvTranspose2d(in_channels=nf, out_channels=in_channels, kernel_size=3, stride=2, padding=1, output_padding=1)
self.output_act = nn.Tanh()
def encode(self, x):
x = self.enc_act1(self.enc_bn1(self.enc_conv1(x)))
x = self.enc_act2(self.enc_bn2(self.enc_conv2(x)))
x = self.enc_act3(self.enc_bn3(self.enc_conv3(x)))
rep = self.enc_fc(x.view(x.size(0), -1))
return rep
def decode(self, rep):
x = self.dec_act0(self.dec_bn0(self.dec_fc(rep)))
x = x.view(-1, self.nf * 4, 4, 4)
x = self.dec_act1(self.dec_bn1(self.dec_conv1(x)))
x = self.dec_act2(self.dec_bn2(self.dec_conv2(x)))
x = self.output_act(self.dec_conv3(x))
return x
def forward(self, x):
output = self.decode(self.encode(x))
return output
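Each stride-2 convolution halves the spatial size via the standard formula o = floor((i + 2p - k) / s) + 1; with k=3, s=2, p=1 the encoder maps 32 -> 16 -> 8 -> 4, which is why the bottleneck Linear sees `nf * 2 * 2 * 16` = 128 * 4 * 4 = 2048 features. A quick check:

```python
def conv_out(size, kernel=3, stride=2, padding=1):
    # Standard convolution output-size formula:
    # floor((i + 2p - k) / s) + 1.
    return (size + 2 * padding - kernel) // stride + 1

sizes = [32]
for _ in range(3):          # three stride-2 convolutions
    sizes.append(conv_out(sizes[-1]))

nf = 32
bottleneck_in = (nf * 4) * sizes[-1] * sizes[-1]  # 128 channels at 4x4
```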
class CENC(nn.Module):
def __init__(self, in_channels=3, rep_dim=32):
super(CENC, self).__init__()
nf = 32
self.nf = nf
        # Encoder: 32x32 x in_channels -> 16x16x32 -> 8x8x64 -> 4x4x128 -> rep_dim
self.enc_conv1 = nn.Conv2d(in_channels=in_channels, out_channels=nf, kernel_size=3, stride=2, padding=1)
self.enc_bn1 = nn.BatchNorm2d(num_features=nf)
self.enc_act1 = nn.LeakyReLU(slope, inplace=True)
self.enc_conv2 = nn.Conv2d(in_channels=nf, out_channels=nf * 2, kernel_size=3, stride=2, padding=1)
self.enc_bn2 = nn.BatchNorm2d(num_features=nf * 2)
self.enc_act2 = nn.LeakyReLU(slope, inplace=True)
self.enc_conv3 = nn.Conv2d(in_channels=nf * 2, out_channels=nf * 4, kernel_size=3, stride=2, padding=1)
self.enc_bn3 = nn.BatchNorm2d(num_features=nf * 4)
self.enc_act3 = nn.LeakyReLU(slope, inplace=True)
self.enc_fc = nn.Linear(nf * 2 * 2 * 16, rep_dim)
def forward(self, x):
x = self.enc_act1(self.enc_bn1(self.enc_conv1(x)))
x = self.enc_act2(self.enc_bn2(self.enc_conv2(x)))
x = self.enc_act3(self.enc_bn3(self.enc_conv3(x)))
rep = self.enc_fc(x.view(x.size(0), -1))
return rep
class CDEC(nn.Module):
def __init__(self, in_channels=3, rep_dim=32):
super(CDEC, self).__init__()
nf = 32
self.nf = nf
# Decoder
self.dec_fc = nn.Linear(rep_dim, nf * 2 * 2 * 16)
self.dec_bn0 = nn.BatchNorm1d(num_features=nf * 2 * 2 * 16)
self.dec_act0 = nn.LeakyReLU(slope, inplace=True)
self.dec_conv1 = nn.ConvTranspose2d(in_channels=nf * 4, out_channels=nf * 2, kernel_size=3, stride=2, padding=1, output_padding=1)
self.dec_bn1 = nn.BatchNorm2d(num_features=nf * 2)
self.dec_act1 = nn.LeakyReLU(slope, inplace=True)
self.dec_conv2 = nn.ConvTranspose2d(in_channels=nf * 2, out_channels=nf, kernel_size=3, stride=2, padding=1, output_padding=1)
self.dec_bn2 = nn.BatchNorm2d(num_features=nf)
self.dec_act2 = nn.LeakyReLU(slope, inplace=True)
self.dec_conv3 = nn.ConvTranspose2d(in_channels=nf, out_channels=in_channels, kernel_size=3, stride=2, padding=1, output_padding=1)
self.output_act = nn.Tanh()
def forward(self, rep):
x = self.dec_act0(self.dec_bn0(self.dec_fc(rep)))
x = x.view(-1, self.nf * 4, 4, 4)
x = self.dec_act1(self.dec_bn1(self.dec_conv1(x)))
x = self.dec_act2(self.dec_bn2(self.dec_conv2(x)))
x = self.output_act(self.dec_conv3(x))
return x
class mv_cae(nn.Module):
def __init__(self, input_channel_set, rep_dim):
super(mv_cae, self).__init__()
self.cae_set = nn.ModuleList([CAE_pytorch(in_channels=input_channel_set[x], rep_dim=rep_dim) for x in range(len(input_channel_set))])
self.score_fuc = torch.nn.MSELoss(reduction='none')
def get_latent(self, X):
return [self.cae_set[i].encode((X[i])) for i in range(len(X))]
def forward(self, X):
return [self.cae_set[i](X[i]) for i in range(len(X))]
def get_ad_scores(self, X):
'''
:param X: A list with multi-view inputs
:return: A list of (N, 1) tensor anomaly score (larger means more normal)
'''
outputs_set = self.forward(X)
outputs_set = [self.score_fuc(outputs_set[i], X[i]) for i in range(len(X))]
scores_set = [-1. * torch.mean(torch.mean(torch.mean(outputs_set[i], dim=-1), dim=-1), dim=-1, keepdim=True) for i in range(len(X))]
return scores_set
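`get_ad_scores` negates the per-sample mean reconstruction error so that larger scores mean more normal. The same scoring in plain Python:

```python
def ad_score(original, reconstruction):
    # Negative mean squared reconstruction error: a perfect
    # reconstruction scores 0; worse reconstructions score lower
    # ("larger means more normal", as the docstring puts it).
    errors = [(o - r) ** 2 for o, r in zip(original, reconstruction)]
    return -sum(errors) / len(errors)

good = ad_score([1.0, 2.0], [1.0, 2.0])
bad = ad_score([1.0, 2.0], [3.0, 0.0])
```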
class mvcae_fused(nn.Module):
def __init__(self, input_channel_set, rep_dim, fuse_dim):
super(mvcae_fused, self).__init__()
self.cae_set = nn.ModuleList([CAE_pytorch(in_channels=input_channel_set[x], rep_dim=rep_dim) for x in range(len(input_channel_set))])
self.fuse = torch.nn.Sequential(
nn.Linear(len(input_channel_set) * rep_dim, fuse_dim, bias=False),
nn.BatchNorm1d(fuse_dim),
nn.LeakyReLU(slope, inplace=True),
nn.Linear(fuse_dim, len(input_channel_set) * rep_dim, bias=False),
nn.BatchNorm1d(len(input_channel_set) * rep_dim),
nn.LeakyReLU(slope, inplace=True))
self.score_fuc = torch.nn.MSELoss(reduction='none')
def get_latent(self, X):
return [self.cae_set[i].encode((X[i])) for i in range(len(X))]
def forward(self, X):
X_latent = self.get_latent(X)
X_latent = torch.cat(X_latent, dim=1)
X_fused = self.fuse(X_latent)
X_fused = X_fused.view(X_fused.size(0), -1, len(self.cae_set))
return [self.cae_set[i].decode(X_fused[:, :, i]) for i in range(len(X))]
def get_ad_scores(self, X):
'''
:param X: A list with multi-view inputs
:return: A list of (N, 1) tensor anomaly score (larger means more normal)
'''
outputs_set = self.forward(X)
outputs_set = [self.score_fuc(outputs_set[i], X[i]) for i in range(len(X))]
scores_set = [-1. * torch.mean(torch.mean(torch.mean(outputs_set[i], dim=-1), dim=-1), dim=-1, keepdim=True) for i in range(len(X))]
return scores_set
class mvenc(nn.Module):
def __init__(self, input_channel_set, rep_dim):
super(mvenc, self).__init__()
self.cae_set = nn.ModuleList([CENC(in_channels=input_channel_set[x], rep_dim=rep_dim) for x in range(len(input_channel_set))])
def forward(self, X):
return [self.cae_set[i](X[i]) for i in range(len(X))]
class mvenc_fused(nn.Module):
def __init__(self, input_channel_set, rep_dim):
super(mvenc_fused, self).__init__()
self.cae_set = nn.ModuleList([CENC(in_channels=input_channel_set[x], rep_dim=rep_dim) for x in range(len(input_channel_set))])
self.fuse_layer = torch.nn.Sequential(nn.Linear(len(input_channel_set) * rep_dim, rep_dim, bias=False),
nn.BatchNorm1d(num_features=rep_dim),
nn.LeakyReLU(slope, inplace=True)
)
def forward(self, X):
cae_outputs = [self.cae_set[i](X[i]) for i in range(len(X))]
cae_outputs = torch.cat(cae_outputs, dim=1)
output = self.fuse_layer(cae_outputs)
return output
class mv_corrCAE(nn.Module):
def __init__(self, input_channel_set, rep_dim):
super(mv_corrCAE, self).__init__()
self.cae_set = nn.ModuleList([CAE_pytorch(in_channels=input_channel_set[x], rep_dim=rep_dim) for x in range(len(input_channel_set))])
self.score_fuc = torch.nn.MSELoss(reduction='none')
def get_latent(self, X):
return [self.cae_set[i].encode((X[i])) for i in range(len(X))]
def forward(self, X):
latent_set = [self.cae_set[i].encode(X[i]) for i in range(len(X))]
return latent_set, [self.cae_set[i].decode(latent_set[i]) for i in range(len(X))]
def get_ad_scores(self, X):
'''
:param X: A list with multi-view inputs
:return: A list of (N, 1) tensor anomaly score (larger means more normal)
'''
_, outputs_set = self.forward(X)
outputs_set = [self.score_fuc(outputs_set[i], X[i]) for i in range(len(X))]
scores_set = [-1. * torch.mean(torch.mean(torch.mean(outputs_set[i], dim=-1), dim=-1), dim=-1, keepdim=True) for i in range(len(X))]
return scores_set
class splitCAE(nn.Module):
def __init__(self, input_channel_set, rep_dim, dec_mode='fixed'):
super(splitCAE, self).__init__()
self.enc_set = nn.ModuleList([CENC(in_channels=input_channel_set[x], rep_dim=rep_dim) for x in range(len(input_channel_set))])
self.dec_mode = dec_mode
        if self.dec_mode == 'fixed':
self.dec_set = nn.ModuleList([CDEC(in_channels=input_channel_set[x], rep_dim=rep_dim) for x in range(len(input_channel_set))])
else:
self.dec_set = nn.ModuleList([nn.ModuleList([CDEC(in_channels=input_channel_set[x], rep_dim=rep_dim) for x in range(len(input_channel_set))]) for y in range(len(input_channel_set))])
self.score_fuc = torch.nn.MSELoss(reduction='none')
def forward(self, X):
latent_set = [self.enc_set[i](X[i]) for i in range(len(X))]
        if self.dec_mode == 'fixed':
output_set = [[self.dec_set[j](latent_set[i]) for j in range(len(X))] for i in range(len(X))]
else:
output_set = [[self.dec_set[i][j](latent_set[i]) for j in range(len(X))] for i in range(len(X))]
return output_set
def get_ad_scores(self, X):
outputs_set = self.forward(X)
scores_set = [[] for i in range(len(X))]
for j in range(len(X)): # for each enc.
cur_res_set = [self.score_fuc(outputs_set[j][i], X[i]) for i in range(len(X))]
cur_scores_set = [-1. * torch.mean(torch.mean(torch.mean(cur_res_set[i], dim=-1), dim=-1), dim=-1, keepdim=True) for i in range(len(X))]
for i in range(len(X)): # for each view
scores_set[i].append(cur_scores_set[i])
for i in range(len(X)): # for each view
scores_set[i] = torch.cat(scores_set[i], dim=-1)
scores_set[i] = torch.mean(scores_set[i], dim=-1, keepdim=True)
return scores_set
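`splitCAE.get_ad_scores` produces one score per (encoder, view) pair, then averages over encoders for each view. The bookkeeping in plain Python:

```python
def average_over_encoders(score_matrix):
    # score_matrix[j][i] is the score for view i produced from
    # encoder j's embedding; return per-view scores averaged over
    # encoders, mirroring the torch.cat + torch.mean bookkeeping.
    n_enc = len(score_matrix)
    n_views = len(score_matrix[0])
    return [sum(score_matrix[j][i] for j in range(n_enc)) / n_enc
            for i in range(n_views)]

view_scores = average_over_encoders([[-1.0, -2.0], [-3.0, -4.0]])
```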
class mvcae_ss(nn.Module):
def __init__(self, input_channel_set, rep_dim, fuse_dim, mode='fuse', param_mode='fixed'):
super(mvcae_ss, self).__init__()
self.cae_set = nn.ModuleList([CAE_pytorch(in_channels=input_channel_set[x], rep_dim=rep_dim) for x in range(len(input_channel_set))])
self.mode = mode
self.param_mode = param_mode
# fuse all views and reconstruct all views
        if mode == 'fuse':
            if param_mode == 'nn':
self.fuse_all = torch.nn.Sequential(nn.Linear(len(input_channel_set) * rep_dim, fuse_dim, bias=False),
nn.BatchNorm1d(fuse_dim),
nn.LeakyReLU(slope, inplace=True))
self.split_all = torch.nn.Sequential(nn.Linear(fuse_dim, len(input_channel_set) * rep_dim, bias=False),
nn.BatchNorm1d(len(input_channel_set) * rep_dim),
nn.LeakyReLU(slope, inplace=True))
            elif param_mode == 'sum' or param_mode == 'max':
pass
else:
raise NotImplementedError
# fuse all views other than the target view and pred. the target view
        if mode == 'pred':
            if param_mode == 'fixed':
self.fuse_pred = torch.nn.Sequential(nn.Linear((len(input_channel_set) - 1) * rep_dim, fuse_dim, bias=False),
nn.BatchNorm1d(fuse_dim),
nn.LeakyReLU(slope, inplace=True)
)
else:
self.fuse_pred = nn.ModuleList([torch.nn.Sequential(nn.Linear((len(input_channel_set) - 1) * rep_dim, fuse_dim, bias=False),
nn.BatchNorm1d(fuse_dim),
nn.LeakyReLU(slope, inplace=True)) for i in range(len(input_channel_set))])
# reconstruct all views from the embedding of one view
        if mode == 'split':
            if param_mode == 'fixed':
self.split_pred = torch.nn.Sequential(nn.Linear(rep_dim, len(input_channel_set) * rep_dim, bias=False),
nn.BatchNorm1d(len(input_channel_set) * rep_dim),
nn.LeakyReLU(slope, inplace=True)
)
else:
self.split_pred = nn.ModuleList([torch.nn.Sequential(nn.Linear(rep_dim, len(input_channel_set) * rep_dim, bias=False),
nn.BatchNorm1d(len(input_channel_set) * rep_dim),
nn.LeakyReLU(slope, inplace=True)) for i in range(len(input_channel_set))])
self.score_fuc = torch.nn.MSELoss(reduction='none')
def get_latent(self, X):
return [self.cae_set[i].encode((X[i])) for i in range(len(X))]
def forward(self, X):
        if self.mode == 'fuse':
            X_latent = self.get_latent(X)
            if self.param_mode == 'nn':
X_latent = torch.cat(X_latent, dim=1)
X_fused = self.fuse_all(X_latent)
X_fused = self.split_all(X_fused)
X_fused = X_fused.view(X_fused.size(0), -1, len(self.cae_set))
output_set = [self.cae_set[i].decode(X_fused[:, :, i]) for i in range(len(X))]
            elif self.param_mode == 'sum':
X_fused = X_latent[0]
for i in range(len(X_latent) - 1):
X_fused += X_latent[i+1]
X_fused /= len(X_latent)
output_set = [self.cae_set[i].decode(X_fused) for i in range(len(X))]
            elif self.param_mode == 'max':
X_latent = [X_latent[_].unsqueeze(2) for _ in range(len(X_latent))]
X_fused = torch.max(torch.cat(X_latent, dim=-1), dim=-1)[0]
output_set = [self.cae_set[i].decode(X_fused) for i in range(len(X))]
else:
raise NotImplementedError
return output_set
        elif self.mode == 'pred':
X_latent = self.get_latent(X)
X_fused = []
for i in range(len(X)):
cur_fused = []
for j in range(len(X)):
if i != j:
cur_fused.append(X_latent[j])
cur_fused = torch.cat(cur_fused, dim=1)
                if self.param_mode == 'fixed':
X_fused.append(self.fuse_pred(cur_fused))
else:
X_fused.append(self.fuse_pred[i](cur_fused))
return [self.cae_set[i].decode(X_fused[i]) for i in range(len(X))]
        elif self.mode == 'split':
X_latent = self.get_latent(X)
            if self.param_mode == 'fixed':
X_latent = [self.split_pred(X_latent[i]) for i in range(len(X_latent))]
else:
X_latent = [self.split_pred[i](X_latent[i]) for i in range(len(X_latent))]
output_set = []
for i in range(len(X_latent)):
cur_latent = X_latent[i]
cur_latent = cur_latent.view(cur_latent.size(0), -1, len(self.cae_set))
cur_outputs = [self.cae_set[j].decode(cur_latent[:, :, j]) for j in range(len(X))]
output_set.append(cur_outputs)
return output_set
else:
raise NotImplementedError
def get_ad_scores(self, X):
'''
:param X: A list with multi-view inputs
:return: A list of (N, 1) tensor anomaly score (larger means more normal)
'''
        if self.mode == 'fuse' or self.mode == 'pred':
outputs_set = self.forward(X)
outputs_set = [self.score_fuc(outputs_set[i], X[i]) for i in range(len(X))]
scores_set = [-1. * torch.mean(torch.mean(torch.mean(outputs_set[i], dim=-1), dim=-1), dim=-1, keepdim=True) for i in range(len(X))]
        elif self.mode == 'split':
outputs_set = self.forward(X)
scores_set = [[] for i in range(len(X))]
for j in range(len(X)): # for each enc.
cur_res_set = [self.score_fuc(outputs_set[j][i], X[i]) for i in range(len(X))]
cur_scores_set = [-1. * torch.mean(torch.mean(torch.mean(cur_res_set[i], dim=-1), dim=-1), dim=-1, keepdim=True) for i in range(len(X))]
for i in range(len(X)): # for each view
scores_set[i].append(cur_scores_set[i])
for i in range(len(X)): # for each view
scores_set[i] = torch.cat(scores_set[i], dim=-1)
scores_set[i] = torch.mean(scores_set[i], dim=-1, keepdim=True)
else:
raise NotImplementedError
return scores_set
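
# Illustrative NumPy sketch (hypothetical helper, not part of the model above):
# the get_ad_scores convention is "score = negative mean squared reconstruction
# error", so a perfect reconstruction scores 0 and larger scores mean "more normal".
import numpy as np

def ad_score_sketch(reconstruction, target):
    residual = (reconstruction - target) ** 2        # like MSELoss(reduction='none')
    axes = tuple(range(1, residual.ndim))            # average over all non-batch dims
    return -residual.mean(axis=axes).reshape(-1, 1)  # (N, 1); larger = more normal
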
class tensor_fusion_layer(nn.Module):
def __init__(self, in_dim1, in_dim2, output_dim):
super(tensor_fusion_layer, self).__init__()
self.in_dim = (in_dim1+1) * (in_dim2+1)
self.in_dim1 = in_dim1
self.in_dim2 = in_dim2
self.output_dim = output_dim
self.fuse_layer = torch.nn.Sequential(
nn.Linear(self.in_dim, self.output_dim, bias=False),
nn.BatchNorm1d(output_dim),
nn.LeakyReLU(slope, inplace=True),
)
def forward(self, x, y):
        # both x and y must be 2-D matrices
assert x.size(0) == y.size(0)
batch_size = x.size(0)
assert x.size(1) == self.in_dim1
assert y.size(1) == self.in_dim2
assert len(x.size()) == 2
assert len(y.size()) == 2
x = torch.cat([x, torch.ones(batch_size, 1).cuda()], dim=-1)
y = torch.cat([y, torch.ones(batch_size, 1).cuda()], dim=-1)
fused_tensor = torch.bmm(x.unsqueeze(2), y.unsqueeze(1)).view(batch_size, -1)
fused_tensor = self.fuse_layer(fused_tensor)
return fused_tensor
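
# Illustrative NumPy sketch of the fusion above (hypothetical helper; same math
# on CPU): appending a constant 1 to each modality makes the flattened outer
# product contain the unimodal terms as well as every pairwise interaction.
import numpy as np

def outer_fusion_sketch(x, y):
    x1 = np.concatenate([x, np.ones((x.shape[0], 1))], axis=-1)     # (B, d1+1)
    y1 = np.concatenate([y, np.ones((y.shape[0], 1))], axis=-1)     # (B, d2+1)
    return np.einsum('bi,bj->bij', x1, y1).reshape(x.shape[0], -1)  # (B, (d1+1)*(d2+1))
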
class mv_CAE_tf(nn.Module):
def __init__(self, input_channel_set, rep_dim, rank=3):
super(mv_CAE_tf, self).__init__()
self.enc_set = nn.ModuleList([CENC(in_channels=input_channel_set[x], rep_dim=rep_dim) for x in range(len(input_channel_set))])
self.dec_set = nn.ModuleList([CDEC(in_channels=input_channel_set[x], rep_dim=rep_dim) for x in range(len(input_channel_set))])
self.score_fuc = torch.nn.MSELoss(reduction='none')
output_dim = rep_dim
self.factor_set = nn.Parameter(torch.Tensor(len(input_channel_set), rank, rep_dim + 1, output_dim))
self.factor_set = xavier_normal_(self.factor_set)
self.fusion_weights = nn.Parameter(torch.Tensor(1, rank))
self.fusion_bias = nn.Parameter(torch.Tensor(1, output_dim))
xavier_normal_(self.fusion_weights)
self.fusion_bias.data.fill_(0)
# self.fuse_layer = torch.nn.Sequential(
# nn.Linear((rep_dim+1) ** len(input_channel_set), rep_dim, bias=False),
# nn.BatchNorm1d(rep_dim),
# nn.LeakyReLU(slope, inplace=True),
# )
# self.fuse_layer = []
# for i in range(len(input_channel_set) -1):
# self.fuse_layer.append(tensor_fusion_layer(in_dim1=rep_dim, in_dim2=rep_dim, output_dim=rep_dim))
# self.fuse_layer = nn.ModuleList(self.fuse_layer)
def get_latent(self, X):
return [self.enc_set[i]((X[i])) for i in range(len(X))]
def forward(self, X):
latent_set = [self.enc_set[i](X[i]) for i in range(len(X))]
latent_set = [torch.cat([torch.ones(latent_set[i].size(0), 1).cuda(), latent_set[i]], dim=-1) for i in range(len(X))]
for i in range(len(X)):
if i == 0:
latent_fused = torch.matmul(latent_set[i], self.factor_set[i])
else:
cur_latent_fused = torch.matmul(latent_set[i], self.factor_set[i])
latent_fused = latent_fused * cur_latent_fused
fused_rep = torch.matmul(self.fusion_weights, latent_fused.permute(1, 0, 2)).squeeze() + self.fusion_bias
# fused_rep = latent_set[0]
# for i in range(len(X) - 1):
# fused_rep = self.fuse_layer[i](fused_rep, latent_set[i+1])
# latent_fused = mv_outer_product(latent_set)
# latent_set = self.fuse_layer(latent_fused)
return [self.dec_set[i](fused_rep) for i in range(len(X))]
def get_ad_scores(self, X):
'''
:param X: A list with multi-view inputs
:return: A list of (N, 1) tensor anomaly score (larger means more normal)
'''
outputs_set = self.forward(X)
outputs_set = [self.score_fuc(outputs_set[i], X[i]) for i in range(len(X))]
scores_set = [-1. * torch.mean(torch.mean(torch.mean(outputs_set[i], dim=-1), dim=-1), dim=-1, keepdim=True) for i in range(len(X))]
return scores_set
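
# Illustrative NumPy sketch (hypothetical helper) of the low-rank fusion in
# mv_CAE_tf.forward above; shapes mirror the torch code: each factors[v] is
# (rank, d+1, out), weights is (1, rank), bias is (1, out).
import numpy as np

def low_rank_fuse_sketch(latents, factors, weights, bias):
    fused = None
    for z, f in zip(latents, factors):
        z1 = np.concatenate([np.ones((z.shape[0], 1)), z], axis=-1)  # prepend the constant 1
        proj = np.einsum('bd,rdo->rbo', z1, f)                       # per-rank projection
        fused = proj if fused is None else fused * proj              # elementwise product across views
    return np.einsum('r,rbo->bo', weights.ravel(), fused) + bias     # weighted sum over rank factors
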
# tests/test_app.py (rafaelsouzak2b/lista_favoritos, MIT)
import base64
import string
import random
import json
random_str = string.ascii_letters
from src.app.models.usuarios import UsuarioModel
username = ''.join(random.choice(random_str) for i in range(10))
password = 'teste'
nome_cliente = ''.join(random.choice(random_str) for i in range(10))
email_cliente = ''.join(random.choice(random_str) for i in range(20))
token = ''
def test_inserir_usuario_201(client):
'''
    Test inserting a user
'''
payload = {
'username': username,
'password': password
}
response = client.post('/api/usuario', json=payload)
retorno = json.loads(response.data)
assert response.status_code == 201
assert len(retorno) > 0
def test_inserir_usuario_ja_cadastrado_400(client):
'''
    Test inserting an already existing user
'''
payload = {
'username': username,
'password': password
}
response = client.post('/api/usuario', json=payload)
assert response.status_code == 400
def test_inserir_usuario_400(client):
'''
    Test inserting a user without a payload
'''
response = client.post('/api/usuario')
assert response.status_code == 400
def test_autenticar_usuario_200(client):
'''
    Test token generation
'''
userpass = f'{username}:{password}'
base64_val = base64.b64encode(userpass.encode()).decode()
headers = {
'Authorization': f'Basic {base64_val}'
}
response = client.post('/api/usuario/auth', headers=headers)
global token
retorno = json.loads(response.data)
token = retorno['token']
usuario_data = UsuarioModel.find_by_username(username)
if usuario_data:
usuario_data.delete_from_db()
assert response.status_code == 200
assert len(retorno) > 0
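
# Illustrative standalone helper (hypothetical; the tests build this header by
# hand): HTTP Basic auth sends base64("username:password") in the
# Authorization header, exactly as in test_autenticar_usuario_200 above.
def basic_auth_header(user, pwd):
    import base64 as _b64
    value = _b64.b64encode(f'{user}:{pwd}'.encode()).decode()
    return {'Authorization': f'Basic {value}'}
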
def test_inserir_clientes_201(client):
'''
    Test inserting a client
'''
headers = {
'Authorization': f'Bearer {token}'
}
payload = {
'nome': nome_cliente,
'email': email_cliente
}
response = client.post('/api/clientes', headers=headers, json=payload)
retorno = json.loads(response.data)
assert response.status_code == 201
assert len(retorno) > 0
def test_inserir_clientes_ja_cadastrado_400(client):
'''
    Test inserting an already registered client
'''
headers = {
'Authorization': f'Bearer {token}'
}
payload = {
'nome': nome_cliente,
'email': email_cliente
}
response = client.post('/api/clientes', headers=headers, json=payload)
assert response.status_code == 400
def test_inserir_clientes_400(client):
'''
    Test inserting a client without a payload
'''
headers = {
'Authorization': f'Bearer {token}'
}
response = client.post('/api/clientes', headers=headers)
assert response.status_code == 400
def test_visualizar_todos_clientes_200(client):
'''
    Test viewing all clients
'''
headers = {
'Authorization': f'Bearer {token}'
}
response = client.get('/api/clientes', headers=headers)
retorno = json.loads(response.data)
assert response.status_code == 200
assert len(retorno) > 0
def test_visualizar_clientes_200(client):
'''
    Test viewing a client
'''
headers = {
'Authorization': f'Bearer {token}'
}
response = client.get(f'/api/clientes/{email_cliente}', headers=headers)
retorno = json.loads(response.data)
assert response.status_code == 200
assert len(retorno) > 0
def test_visualizar_clientes_404(client):
'''
    Test viewing a client that is not registered
'''
headers = {
'Authorization': f'Bearer {token}'
}
response = client.get(f'/api/clientes/{email_cliente}k', headers=headers)
assert response.status_code == 404
def test_alterar_clientes_200(client):
'''
    Test updating a client
'''
headers = {
'Authorization': f'Bearer {token}'
}
payload = {
'nome': nome_cliente,
'email': email_cliente
}
response = client.put(f'/api/clientes/{email_cliente}', headers=headers, json=payload)
retorno = json.loads(response.data)
assert response.status_code == 200
assert len(retorno) > 0
def test_alterar_clientes_404(client):
'''
    Test updating a client that is not registered
'''
headers = {
'Authorization': f'Bearer {token}'
}
payload = {
'nome': nome_cliente,
'email': email_cliente
}
response = client.put(f'/api/clientes/{email_cliente}k', headers=headers, json=payload)
assert response.status_code == 404
def test_alterar_clientes_400(client):
'''
    Test updating a client without a payload
'''
headers = {
'Authorization': f'Bearer {token}'
}
response = client.put(f'/api/clientes/{email_cliente}', headers=headers)
assert response.status_code == 400
def test_deletar_clientes_404(client):
'''
    Test deleting a client that is not registered
'''
headers = {
'Authorization': f'Bearer {token}'
}
response = client.delete(f'/api/clientes/{email_cliente}k', headers=headers)
assert response.status_code == 404
def test_visualizar_lista_favoritos_200(client):
'''
    Test viewing the favorites list
'''
headers = {
'Authorization': f'Bearer {token}'
}
response = client.get(f'/api/clientes/{email_cliente}/favoritos', headers=headers)
retorno = json.loads(response.data)
assert response.status_code == 200
assert len(retorno) > 0
def test_visualizar_lista_favoritos_404(client):
'''
    Test viewing the favorites list of a client that is not registered
'''
headers = {
'Authorization': f'Bearer {token}'
}
response = client.get(f'/api/clientes/{email_cliente}k/favoritos', headers=headers)
assert response.status_code == 404
def test_inserir_lista_favoritos_201(client):
'''
    Test adding a product to the favorites list
'''
headers = {
'Authorization': f'Bearer {token}'
}
response = client.post(f'/api/clientes/{email_cliente}/favoritos/1bf0f365-fbdd-4e21-9786-da459d78dd1f', headers=headers)
retorno = json.loads(response.data)
assert response.status_code == 201
assert len(retorno) > 0
def test_inserir_lista_favoritos_400(client):
'''
    Test adding a product that is already in the favorites list
'''
headers = {
'Authorization': f'Bearer {token}'
}
response = client.post(f'/api/clientes/{email_cliente}/favoritos/1bf0f365-fbdd-4e21-9786-da459d78dd1f', headers=headers)
assert response.status_code == 400
def test_inserir_lista_favoritos_produto_nao_existe_400(client):
'''
    Test adding a product that does not exist
'''
headers = {
'Authorization': f'Bearer {token}'
}
response = client.post(f'/api/clientes/{email_cliente}/favoritos/teste', headers=headers)
assert response.status_code == 400
def test_inserir_lista_favoritos_404(client):
'''
    Test adding a product to the favorites list of a client that does not exist
'''
headers = {
'Authorization': f'Bearer {token}'
}
response = client.post(f'/api/clientes/{email_cliente}k/favoritos/1bf0f365-fbdd-4e21-9786-da459d78dd1f', headers=headers)
assert response.status_code == 404
def test_deletar_lista_favoritos_204(client):
'''
    Test removing a product from the favorites list
'''
headers = {
'Authorization': f'Bearer {token}'
}
response = client.delete(f'/api/clientes/{email_cliente}/favoritos/1bf0f365-fbdd-4e21-9786-da459d78dd1f', headers=headers)
assert response.status_code == 204
def test_deletar_lista_favoritos_404(client):
'''
    Test removing a product that is not in the client's favorites list
'''
headers = {
'Authorization': f'Bearer {token}'
}
response = client.delete(f'/api/clientes/{email_cliente}/favoritos/1bf0f365-fbdd-4e21-9786-da459d78dd1f', headers=headers)
assert response.status_code == 404
def test_deletar_lista_favoritos_cliente_nao_cadastrado_404(client):
'''
    Test removing a product from the favorites list of a client that is not registered
'''
headers = {
'Authorization': f'Bearer {token}'
}
response = client.delete(f'/api/clientes/{email_cliente}k/favoritos/1bf0f365-fbdd-4e21-9786-da459d78dd1f', headers=headers)
assert response.status_code == 404
def test_deletar_toda_lista_favoritos_204(client):
'''
    Test deleting the client's entire favorites list
'''
headers = {
'Authorization': f'Bearer {token}'
}
client.post(f'/api/clientes/{email_cliente}/favoritos/1bf0f365-fbdd-4e21-9786-da459d78dd1f', headers=headers)
response = client.delete(f'/api/clientes/{email_cliente}/favoritos', headers=headers)
assert response.status_code == 204
def test_deletar_toda_lista_favoritos_404(client):
'''
    Test deleting the entire favorites list of a client that is not registered
'''
headers = {
'Authorization': f'Bearer {token}'
}
response = client.delete(f'/api/clientes/{email_cliente}f/favoritos', headers=headers)
assert response.status_code == 404
def test_deletar_clientes_204(client):
'''
    Test deleting a client
'''
headers = {
'Authorization': f'Bearer {token}'
}
response = client.delete(f'/api/clientes/{email_cliente}', headers=headers)
assert response.status_code == 204
# python_modules/libraries/dagster-postgres/dagster_postgres/event_log/__init__.py (dbatten5/dagster, Apache-2.0)
from .event_log import PostgresEventLogStorage
# ec2_compare/internal/usage_classes/__init__.py (weldpua2008/aws.ec2.compare, Apache-2.0)
# Automatically generated at December 05, 2020
# torchwisdom/vision/transforms/__init__.py (nunenuh/modelzoo.pytorch, MIT)
from .pair import *
from .transforms import *
# models/partseg/__init__.py (luost26/Equivariant-OrientedMP, MIT)
from ._registry import get_model
from .dgcnn import DGCNN
from .oriented_dgcnn import OrientedDGCNN
# app/models/core/store/__init__.py (polowis/virtComp, MIT)
from .global_store import * # noqa
from .global_store import __all__ as glob_store
__all__ = glob_store
# nntools/experiment/__init__.py (ClementPla/NNTools, MIT)
from .experiment import Experiment
from .supervised_experiment import SupervisedExperiment
# conpaas-services/src/conpaas/core/https/__init__.py (bopopescu/conpaas-1, BSD-3-Clause)
from . import client
from . import server
from . import x509
# python/src/test/resources/pyfunc/numpy_random18_test.py (maropu/lljvm-translator, Apache-2.0)
import numpy as np
def numpy_random18_test(a):
return np.random.choice(a)
# buzzard/_tools/__init__.py (ashnair1/buzzard, Apache-2.0)
"""
Collection of private tools
"""
from .parameters import *
from .helper_classes import *
from .rect import *
from .multi_ordered_dict import *
from .slices_of_matrix import *
# pagi/utils/env.py (khuongnd/pagi, MIT)
import torch
def check_gpu_available():
return torch.cuda.is_available()
if __name__ == "__main__":
    print(check_gpu_available())
# MTL_startup.py (mtlong/FGVC, MIT)
print("Run to startup the python session")
# Stepik1.py (devilnotcry77/devil_not_cry, Apache-2.0)
a = int(input())
b = int(input())
print(a+b)
print(a-b)
print(a*b)
print(a/b)
print(a//b)
print(a%b)
print((a**10 + b**10)**0.5)
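
# Worked example (hypothetical fixed values, not read from input): for a=7, b=2
# the integer operators above give floor division 3 and remainder 1.
demo_a, demo_b = 7, 2
assert demo_a // demo_b == 3 and demo_a % demo_b == 1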
# sis-web/rest/__init__.py (maxbilbow/7054CEM-sis, MIT)
def init():
import rest.auth_controller
import rest.index
import rest.membership_controller
import rest.quote_controller
import rest.user_profile_controller
# temboo/core/Library/Flickr/Photos/__init__.py (jordanemedlock/psychtruths, Apache-2.0)
from temboo.Library.Flickr.Photos.AddTags import AddTags, AddTagsInputSet, AddTagsResultSet, AddTagsChoreographyExecution
from temboo.Library.Flickr.Photos.Delete import Delete, DeleteInputSet, DeleteResultSet, DeleteChoreographyExecution
from temboo.Library.Flickr.Photos.Download import Download, DownloadInputSet, DownloadResultSet, DownloadChoreographyExecution
from temboo.Library.Flickr.Photos.GetInfo import GetInfo, GetInfoInputSet, GetInfoResultSet, GetInfoChoreographyExecution
from temboo.Library.Flickr.Photos.ListGeoTaggedPhotos import ListGeoTaggedPhotos, ListGeoTaggedPhotosInputSet, ListGeoTaggedPhotosResultSet, ListGeoTaggedPhotosChoreographyExecution
from temboo.Library.Flickr.Photos.ListPeople import ListPeople, ListPeopleInputSet, ListPeopleResultSet, ListPeopleChoreographyExecution
from temboo.Library.Flickr.Photos.ListPhotosWithoutGeoTags import ListPhotosWithoutGeoTags, ListPhotosWithoutGeoTagsInputSet, ListPhotosWithoutGeoTagsResultSet, ListPhotosWithoutGeoTagsChoreographyExecution
from temboo.Library.Flickr.Photos.ListPublicPhotos import ListPublicPhotos, ListPublicPhotosInputSet, ListPublicPhotosResultSet, ListPublicPhotosChoreographyExecution
from temboo.Library.Flickr.Photos.ListRecentPhotos import ListRecentPhotos, ListRecentPhotosInputSet, ListRecentPhotosResultSet, ListRecentPhotosChoreographyExecution
from temboo.Library.Flickr.Photos.Replace import Replace, ReplaceInputSet, ReplaceResultSet, ReplaceChoreographyExecution
from temboo.Library.Flickr.Photos.SearchPhotos import SearchPhotos, SearchPhotosInputSet, SearchPhotosResultSet, SearchPhotosChoreographyExecution
from temboo.Library.Flickr.Photos.Upload import Upload, UploadInputSet, UploadResultSet, UploadChoreographyExecution
# lib/data_loaders.py (SwagJ/FCGF, MIT)
# -*- coding: future_fstrings -*-
#
# Written by Chris Choy <chrischoy@ai.stanford.edu>
# Distributed under MIT License
import logging
import random
import torch
import torch.utils.data
import numpy as np
import glob
import os
from scipy.linalg import expm, norm
import pathlib
from util.pointcloud import get_matching_indices, make_open3d_point_cloud, get_neighbor_indices, get_hardest_neigative_indices
import lib.transforms as t
import MinkowskiEngine as ME
import open3d as o3d
from scipy.interpolate import interp1d
from scipy.spatial.transform import Rotation as R
from scipy.spatial.transform import Slerp
from lib.timer import Timer
from lib.metrics import pdist
#from lib.feature_extraction import prepare_feature
import ntpath
from sklearn.preprocessing import normalize
kitti_cache = {}
kitti_icp_cache = {}
kaist_cache = {}
kaist_icp_cache = {}
def collate_pair_fn(list_data):
xyz0, xyz1, coords0, coords1, feats0, feats1, matching_inds, trans, sample_num, neighbor0, neighbor1,num_neighbor0,num_neighbor1 = list(
zip(*list_data))
xyz_batch0, xyz_batch1 = [], []
matching_inds_batch, trans_batch, len_batch = [], [], []
coords0_in_batch,coords1_in_batch = [],[]
unique_match = []
centroid_in_batch = []
max_in_batch = []
neighbor0_in_batch = []
neighbor1_in_batch = []
num_neighbor0_in_batch = []
num_neighbor1_in_batch = []
correspondance_in_batch = []
#neg_ind_in_batch=[]
pos1_coord_in_batch = []
pos0_coord_in_batch = []
batch_id = 0
curr_start_inds = np.zeros((1, 2))
#print(sample_num)
sample_num = min(sample_num)
def to_tensor(x):
if isinstance(x, torch.Tensor):
return x
elif isinstance(x, np.ndarray):
return torch.from_numpy(x)
else:
raise ValueError(f'Cannot convert to torch tensor: {x}')
for batch_id, _ in enumerate(coords0):
N0 = coords0[batch_id].shape[0]
N1 = coords1[batch_id].shape[0]
#print("num of points:",N0,N1)
#print("neighborhood0 size:",neighbor0[batch_id].shape)
#print("neighborhood1 size:",neighbor1[batch_id].shape)
neighbor0_in_batch.append(to_tensor(neighbor0[batch_id]))
neighbor1_in_batch.append(to_tensor(neighbor1[batch_id]))
num_neighbor0_in_batch.append(to_tensor(num_neighbor0[batch_id]))
num_neighbor1_in_batch.append(to_tensor(num_neighbor1[batch_id]))
xyz_batch0.append(to_tensor(xyz0[batch_id]))
xyz_batch1.append(to_tensor(xyz1[batch_id]))
#points normalization
coord0_i = to_tensor(coords0[batch_id]).float()
coord1_i = to_tensor(coords1[batch_id]).float()
#print("coord0_i shape:",coord0_i.shape)
#print("coord1_i shape:",coord1_i.shape)
batch_corr = np.array(matching_inds[batch_id])
# normalization
centroid0 = torch.mean(coord0_i, axis=0)
centroid1 = torch.mean(coord1_i, axis=0)
centroid_in_batch.append([centroid0,centroid1])
#print(centroid0,centroid1)
centered0 = coord0_i - centroid0
centered1 = coord1_i - centroid1
max0 = torch.norm(centered0, dim=-1).max()
max1 = torch.norm(centered1, dim=-1).max()
max_in_batch.append([max0,max1])
normed_coords0 = centered0 / max0
normed_coords1 = centered1 / max1
#sample points in batch
sel0 = np.random.choice(N0, sample_num, replace=False)
sel1 = np.random.choice(N1, sample_num, replace=False)
coords0_in_batch.append(normed_coords0[sel0,:].float())
coords1_in_batch.append(normed_coords1[sel1,:].float())
# sample positive correspondences
_,unique_idx = np.unique(batch_corr[:,0],return_index=True)
#unique_idx = torch.from_numpy(unique_idx)
corr_match_idx = batch_corr[unique_idx]
pos0_coord = normed_coords0[corr_match_idx[:,0],:]
pos1_coord = normed_coords1[corr_match_idx[:,1],:]
pos0_coord_in_batch.append(to_tensor(pos0_coord))
pos1_coord_in_batch.append(to_tensor(pos1_coord))
trans_batch.append(to_tensor(trans[batch_id]))
#neg_ind_in_batch.append(to_tensor(neg_inds[batch_id]).int())
matching_inds_batch.append(
torch.from_numpy(np.array(matching_inds[batch_id])).int())
correspondance_in_batch.append(
torch.from_numpy(np.array(matching_inds[batch_id]) + curr_start_inds))
len_batch.append([N0, N1])
# Move the head
curr_start_inds[0, 0] += N0
curr_start_inds[0, 1] += N1
#print("before sparse_collate:",coords0[0].shape)
coords_batch0, feats_batch0 = ME.utils.sparse_collate(coords0, feats0)
coords_batch1, feats_batch1 = ME.utils.sparse_collate(coords1, feats1)
#print("after sparse_collate:",coords_batch0.shape)
# Concatenate all lists
xyz_batch0 = torch.cat(xyz_batch0, 0).float()
xyz_batch1 = torch.cat(xyz_batch1, 0).float()
trans_batch = torch.cat(trans_batch, 0).float()
#matching_inds_batch = torch.cat(matching_inds_batch, 0).int()
correspondance_in_batch = torch.cat(correspondance_in_batch, 0).int()
coords0_downsampled = torch.stack(coords0_in_batch,0).float()
coords1_downsampled = torch.stack(coords1_in_batch,0).float()
#coords0 = to_tensor(np.array(coords0).astype(np.float32))
#coords1 = to_tensor(np.array(coords1).astype(np.float32))
#print(matching_inds_batch[0:100,:])
return {
'pcd0': xyz_batch0,
'pcd1': xyz_batch1,
'sinput0_C': coords_batch0,
'sinput0_F': feats_batch0.float(),
'sinput1_C': coords_batch1,
'sinput1_F': feats_batch1.float(),
'correspondences': correspondance_in_batch,
# 'matching_inds': matching_inds_batch,
# 'pos0' : pos0_coord_in_batch,
# 'pos1' : pos1_coord_in_batch,
'T_gt': trans_batch,
'len_batch': len_batch,
'coords0': coords0_downsampled,
'coords1': coords1_downsampled,
'centroid': centroid_in_batch,
'max': max_in_batch,
'neighbor0': neighbor0_in_batch,
'neighbor1': neighbor1_in_batch,
'num_neighbor0': num_neighbor0_in_batch,
'num_neighbor1': num_neighbor1_in_batch,
#'neg_inds': neg_ind_in_batch
}
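As a standalone sketch (toy data, not part of the pipeline) of the per-cloud normalization performed inside `collate_pair_fn`: each cloud is centered at its centroid and divided by the largest point norm, so every point lands inside the unit ball.

```python
import torch

coords = torch.tensor([[0.0, 0.0, 0.0],
                       [2.0, 0.0, 0.0],
                       [0.0, 4.0, 0.0]])
centroid = coords.mean(dim=0)            # per-cloud centroid
centered = coords - centroid
max_norm = centered.norm(dim=-1).max()   # radius of the centered cloud
normed = centered / max_norm             # all points now inside the unit ball
assert float(normed.norm(dim=-1).max()) <= 1.0 + 1e-6
```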
# Rotation matrix along axis with angle theta
def M(axis, theta):
return expm(np.cross(np.eye(3), axis / norm(axis) * theta))
def sample_random_trans(pcd, randg, rotation_range=360):
T = np.eye(4)
R = M(randg.rand(3) - 0.5, rotation_range * np.pi / 180.0 * (randg.rand(1) - 0.5))
T[:3, :3] = R
T[:3, 3] = R.dot(-np.mean(pcd, axis=0))
return T
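A quick sanity sketch of what `M` produces: `np.cross(np.eye(3), v)` builds the skew-symmetric matrix of `v`, so `expm` of it is the exponential-map rotation about `axis` by `theta`. The result should be orthonormal with determinant 1.

```python
import numpy as np
from scipy.linalg import expm, norm

def M(axis, theta):
    # matrix exponential of the skew-symmetric matrix of (axis/|axis|)*theta
    return expm(np.cross(np.eye(3), axis / norm(axis) * theta))

R = M(np.array([0.0, 0.0, 1.0]), np.pi / 2)  # 90 degrees about z
assert np.allclose(R @ R.T, np.eye(3), atol=1e-8)  # orthonormal
assert np.isclose(np.linalg.det(R), 1.0)           # proper rotation
assert np.allclose(R @ np.array([1.0, 0.0, 0.0]), [0.0, 1.0, 0.0], atol=1e-8)
```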
class PairDataset(torch.utils.data.Dataset):
AUGMENT = None
def __init__(self,
phase,
transform=None,
random_rotation=True,
random_scale=True,
manual_seed=False,
config=None):
self.phase = phase
self.files = []
self.data_objects = []
self.transform = transform
self.voxel_size = config.voxel_size
self.matching_search_voxel_size = \
config.voxel_size * config.positive_pair_search_voxel_size_multiplier
self.random_scale = random_scale
self.min_scale = config.min_scale
self.max_scale = config.max_scale
self.random_rotation = random_rotation
self.rotation_range = config.rotation_range
self.randg = np.random.RandomState()
#self.get_feature = config.get_feature
if manual_seed:
self.reset_seed()
def reset_seed(self, seed=0):
logging.info(f"Resetting the data loader seed to {seed}")
self.randg.seed(seed)
def apply_transform(self, pts, trans):
R = trans[:3, :3]
T = trans[:3, 3]
pts = pts @ R.T + T
return pts
def __len__(self):
return len(self.files)
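`apply_transform` uses the row-vector form `pts @ R.T + T`; a minimal check on synthetic data that this matches the homogeneous-coordinates form of the same 4x4 transform:

```python
import numpy as np

def apply_transform(pts, trans):
    R, T = trans[:3, :3], trans[:3, 3]
    return pts @ R.T + T

rng = np.random.RandomState(0)
pts = rng.rand(5, 3)
trans = np.eye(4)
trans[:3, :3] = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])  # 90 deg about z
trans[:3, 3] = [1.0, 2.0, 3.0]

# homogeneous form: append 1, multiply by trans, drop the last coordinate
homo = np.hstack([pts, np.ones((5, 1))]) @ trans.T
assert np.allclose(apply_transform(pts, trans), homo[:, :3])
```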
class IndoorPairDataset(PairDataset):
OVERLAP_RATIO = None
AUGMENT = None
def __init__(self,
phase,
transform=None,
random_rotation=True,
random_scale=True,
manual_seed=False,
config=None):
PairDataset.__init__(self, phase, transform, random_rotation, random_scale,
manual_seed, config)
self.root = root = config.threed_match_dir
self.get_feature = config.get_feature
self.sample_num = config.sample_num
logging.info(f"Loading the subset {phase} from {root}")
subset_names = open(self.DATA_FILES[phase]).read().split()
for name in subset_names:
fname = name + "*%.2f.txt" % self.OVERLAP_RATIO
fnames_txt = glob.glob(root + "/" + fname)
assert len(fnames_txt) > 0, f"Make sure that the path {root} has data {fname}"
for fname_txt in fnames_txt:
with open(fname_txt) as f:
content = f.readlines()
fnames = [x.strip().split() for x in content]
for fname in fnames:
self.files.append([fname[0], fname[1]])
def __getitem__(self, idx):
file0 = os.path.join(self.root, self.files[idx][0])
file1 = os.path.join(self.root, self.files[idx][1])
abs_dir0 = os.path.splitext(file0)[0]
abs_dir1 = os.path.splitext(file1)[0]
featname0 = os.path.join(self.root, 'features', ntpath.split(abs_dir0)[1] + '.npy')
featname1 = os.path.join(self.root, 'features', ntpath.split(abs_dir1)[1] + '.npy')
data0 = np.load(file0,allow_pickle=True)
data1 = np.load(file1,allow_pickle=True)
xyz0 = data0["pcd"]
xyz1 = data1["pcd"]
color0 = data0["color"]
color1 = data1["color"]
matching_search_voxel_size = self.matching_search_voxel_size
if self.random_scale and random.random() < 0.95:
scale = self.min_scale + \
(self.max_scale - self.min_scale) * random.random()
matching_search_voxel_size *= scale
xyz0 = scale * xyz0
xyz1 = scale * xyz1
if self.random_rotation:
T0 = sample_random_trans(xyz0, self.randg, self.rotation_range)
T1 = sample_random_trans(xyz1, self.randg, self.rotation_range)
trans = T1 @ np.linalg.inv(T0)
xyz0 = self.apply_transform(xyz0, T0)
xyz1 = self.apply_transform(xyz1, T1)
else:
trans = np.identity(4)
# Voxelization
sel0 = ME.utils.sparse_quantize(xyz0 / self.voxel_size, return_index=True)
sel1 = ME.utils.sparse_quantize(xyz1 / self.voxel_size, return_index=True)
# Make point clouds using voxelized points
#print("before:", xyz0.shape)
pcd0 = make_open3d_point_cloud(xyz0)
pcd1 = make_open3d_point_cloud(xyz1)
# Select features and points using the returned voxelized indices
pcd0.colors = o3d.utility.Vector3dVector(color0[sel0])
pcd1.colors = o3d.utility.Vector3dVector(color1[sel1])
pcd0.points = o3d.utility.Vector3dVector(np.array(pcd0.points)[sel0])
pcd1.points = o3d.utility.Vector3dVector(np.array(pcd1.points)[sel1])
# Get matches
matches = get_matching_indices(pcd0, pcd1, trans, matching_search_voxel_size)
# Get neighborhoods of pcd0 and pcd1
neighbor0_mask,num_neighbor0 = get_neighbor_indices(pcd0, 0.025*2.5)
neighbor1_mask,num_neighbor1 = get_neighbor_indices(pcd1, 0.025*2.5)
#print("pcd1 points shape:",np.shape(pcd1.points))
#neg_inds = get_hardest_neigative_indices(pcd1,0.125)
#print("neg_inds shape:",neg_inds.shape)
#num_neighbor0 = np.count_nonzero(neighbor0_mask != len(pcd0.points),axis=1)
#num_neighbor1 = np.count_nonzero(neighbor1_mask != len(pcd1.points),axis=1)
#print(num_neighbor0.shape)
#print(matches[0:100,:])
# Get features
npts0 = len(pcd0.colors)
npts1 = len(pcd1.colors)
#print("pcd0 color dim", np.shape(pcd0.colors))
#print("pcd1 color dim", np.shape(pcd1.colors))
#print("pcd0 color len:",npts0)
#print("pcd1 color len:",npts1)
feats_train0, feats_train1 = [], []
feats_train0.append(np.ones((npts0, 1)))
feats_train1.append(np.ones((npts1, 1)))
feats0 = np.hstack(feats_train0)
feats1 = np.hstack(feats_train1)
# Get coords
xyz0 = np.array(pcd0.points)
xyz1 = np.array(pcd1.points)
#print("after:",xyz0.shape)
coords0 = np.floor(xyz0 / self.voxel_size)
coords1 = np.floor(xyz1 / self.voxel_size)
#print("Max in pcd0 coord:",np.max(coords0,axis=0))
#print("Min in pcd0 coord:",np.min(coords0,axis=0))
#print("Max in pcd1 coord:",np.max(coords1,axis=0))
#print("Min in pcd1 coord:",np.min(coords1,axis=0))
if self.transform:
coords0, feats0 = self.transform(coords0, feats0)
coords1, feats1 = self.transform(coords1, feats1)
if self.get_feature:
feats0 = np.load(featname0, allow_pickle=True)
feats1 = np.load(featname1, allow_pickle=True)
feats0 = feats0[sel0]
feats1 = feats1[sel1]
feats0 = normalize(feats0)
feats1 = normalize(feats1)
#print("coords shape:",coords0.shape)
#print("feature shape:", feats0.shape)
sample_num = min(self.sample_num, npts0, npts1)
return (xyz0, xyz1, coords0, coords1, feats0, feats1, matches, trans, sample_num, neighbor0_mask, neighbor1_mask,num_neighbor0,num_neighbor1)
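`IndoorPairDataset` voxelizes with `ME.utils.sparse_quantize`, which (in the MinkowskiEngine version this code targets) returns the index of one representative point per occupied voxel. A rough NumPy equivalent, for illustration only:

```python
import numpy as np

def quantize_indices(xyz, voxel_size):
    # floor to the voxel grid, keep the first point seen in each voxel
    grid = np.floor(xyz / voxel_size).astype(np.int64)
    _, sel = np.unique(grid, axis=0, return_index=True)
    return np.sort(sel)

xyz = np.array([[0.01, 0.01, 0.01],
                [0.02, 0.02, 0.02],   # same voxel as the first point
                [0.30, 0.01, 0.01]])  # a different voxel
sel = quantize_indices(xyz, voxel_size=0.05)
assert len(sel) == 2          # two occupied voxels
assert 0 in sel and 2 in sel  # one representative per voxel
```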
class KITTIPairDataset(PairDataset):
AUGMENT = None
DATA_FILES = {
'train': './config/train_kitti.txt',
'val': './config/val_kitti.txt',
'test': './config/test_kitti.txt'
}
TEST_RANDOM_ROTATION = False
IS_ODOMETRY = True
def __init__(self,
phase,
transform=None,
random_rotation=True,
random_scale=True,
manual_seed=False,
config=None):
# For evaluation, use the odometry training split, following the 3DFeatNet evaluation protocol
if self.IS_ODOMETRY:
self.root = root = config.kitti_root + '/dataset'
random_rotation = self.TEST_RANDOM_ROTATION
else:
self.date = config.kitti_date
self.root = root = os.path.join(config.kitti_root, self.date)
self.icp_path = os.path.join(config.kitti_root, 'icp')
self.get_feature = config.get_feature
pathlib.Path(self.icp_path).mkdir(parents=True, exist_ok=True)
PairDataset.__init__(self, phase, transform, random_rotation, random_scale,
manual_seed, config)
logging.info(f"Loading the subset {phase} from {root}")
# Use the kitti root
self.max_time_diff = max_time_diff = config.kitti_max_time_diff
subset_names = open(self.DATA_FILES[phase]).read().split()
for dirname in subset_names:
drive_id = int(dirname)
inames = self.get_all_scan_ids(drive_id)
for start_time in inames:
for time_diff in range(2, max_time_diff):
pair_time = time_diff + start_time
if pair_time in inames:
self.files.append((drive_id, start_time, pair_time))
def get_all_scan_ids(self, drive_id):
if self.IS_ODOMETRY:
fnames = glob.glob(self.root + '/sequences/%02d/velodyne/*.bin' % drive_id)
else:
fnames = glob.glob(self.root + '/' + self.date +
'_drive_%04d_sync/velodyne_points/data/*.bin' % drive_id)
assert len(
fnames) > 0, f"Make sure that the path {self.root} has drive id: {drive_id}"
inames = [int(os.path.split(fname)[-1][:-4]) for fname in fnames]
return inames
@property
def velo2cam(self):
try:
velo2cam = self._velo2cam
except AttributeError:
R = np.array([
7.533745e-03, -9.999714e-01, -6.166020e-04, 1.480249e-02, 7.280733e-04,
-9.998902e-01, 9.998621e-01, 7.523790e-03, 1.480755e-02
]).reshape(3, 3)
T = np.array([-4.069766e-03, -7.631618e-02, -2.717806e-01]).reshape(3, 1)
velo2cam = np.hstack([R, T])
self._velo2cam = np.vstack((velo2cam, [0, 0, 0, 1])).T
return self._velo2cam
def get_video_odometry(self, drive, indices=None, ext='.txt', return_all=False):
if self.IS_ODOMETRY:
data_path = self.root + '/poses/%02d.txt' % drive
#print(data_path)
if data_path not in kitti_cache:
kitti_cache[data_path] = np.genfromtxt(data_path)
if return_all:
return kitti_cache[data_path]
else:
return kitti_cache[data_path][indices]
else:
data_path = self.root + '/' + self.date + '_drive_%04d_sync/oxts/data' % drive
odometry = []
if indices is None:
fnames = glob.glob(self.root + '/' + self.date +
'_drive_%04d_sync/velodyne_points/data/*.bin' % drive)
indices = sorted([int(os.path.split(fname)[-1][:-4]) for fname in fnames])
for index in indices:
filename = os.path.join(data_path, '%010d%s' % (index, ext))
if filename not in kitti_cache:
kitti_cache[filename] = np.genfromtxt(filename)
odometry.append(kitti_cache[filename])
odometry = np.array(odometry)
return odometry
def odometry_to_positions(self, odometry):
if self.IS_ODOMETRY:
T_w_cam0 = odometry.reshape(3, 4)
T_w_cam0 = np.vstack((T_w_cam0, [0, 0, 0, 1]))
return T_w_cam0
else:
lat, lon, alt, roll, pitch, yaw = odometry.T[:6]
R = 6378137 # Earth's radius in metres
# convert to metres
lat, lon = np.deg2rad(lat), np.deg2rad(lon)
mx = R * lon * np.cos(lat)
my = R * lat
times = odometry.T[-1]
return np.vstack([mx, my, alt, roll, pitch, yaw, times]).T
def rot3d(self, axis, angle):
ei = np.ones(3, dtype='bool')
ei[axis] = 0
i = np.nonzero(ei)[0]
m = np.eye(3)
c, s = np.cos(angle), np.sin(angle)
m[i[0], i[0]] = c
m[i[0], i[1]] = -s
m[i[1], i[0]] = s
m[i[1], i[1]] = c
return m
def pos_transform(self, pos):
x, y, z, rx, ry, rz, _ = pos[0]
RT = np.eye(4)
RT[:3, :3] = np.dot(np.dot(self.rot3d(0, rx), self.rot3d(1, ry)), self.rot3d(2, rz))
RT[:3, 3] = [x, y, z]
return RT
def get_position_transform(self, pos0, pos1, invert=False):
T0 = self.pos_transform(pos0)
T1 = self.pos_transform(pos1)
return (np.dot(T1, np.linalg.inv(T0)).T if not invert else np.dot(
np.linalg.inv(T1), T0).T)
def _get_velodyne_fn(self, drive, t):
if self.IS_ODOMETRY:
fname = self.root + '/sequences/%02d/velodyne/%06d.bin' % (drive, t)
else:
fname = self.root + \
'/' + self.date + '_drive_%04d_sync/velodyne_points/data/%010d.bin' % (
drive, t)
return fname
def _get_feature_fn(self, drive, t):
if self.IS_ODOMETRY:
fname = self.root + '/sequences/%02d/features/%06d.npy' % (drive, t)
else:
fname = self.root + \
'/' + self.date + '_drive_%04d_sync/velodyne_points/data/%010d.bin' % (
drive, t)
return fname
def __getitem__(self, idx):
drive = self.files[idx][0]
t0, t1 = self.files[idx][1], self.files[idx][2]
all_odometry = self.get_video_odometry(drive, [t0, t1])
positions = [self.odometry_to_positions(odometry) for odometry in all_odometry]
fname0 = self._get_velodyne_fn(drive, t0)
fname1 = self._get_velodyne_fn(drive, t1)
if self.get_feature:
featname0 = self._get_feature_fn(drive,t0)
featname1 = self._get_feature_fn(drive,t1)
# XYZ and reflectance
xyzr0 = np.fromfile(fname0, dtype=np.float32).reshape(-1, 4)
xyzr1 = np.fromfile(fname1, dtype=np.float32).reshape(-1, 4)
xyz0 = xyzr0[:, :3]
xyz1 = xyzr1[:, :3]
key = '%d_%d_%d' % (drive, t0, t1)
filename = self.icp_path + '/' + key + '.npy'
if key not in kitti_icp_cache:
if not os.path.exists(filename):
# work on the downsampled xyzs, 0.05m == 5cm
sel0 = ME.utils.sparse_quantize(xyz0 / 0.05, return_index=True)
sel1 = ME.utils.sparse_quantize(xyz1 / 0.05, return_index=True)
M = (self.velo2cam @ positions[0].T @ np.linalg.inv(positions[1].T)
@ np.linalg.inv(self.velo2cam)).T
xyz0_t = self.apply_transform(xyz0[sel0], M)
pcd0 = make_open3d_point_cloud(xyz0_t)
pcd1 = make_open3d_point_cloud(xyz1[sel1])
reg = o3d.registration.registration_icp(
pcd0, pcd1, 0.2, np.eye(4),
o3d.registration.TransformationEstimationPointToPoint(),
o3d.registration.ICPConvergenceCriteria(max_iteration=200))
pcd0.transform(reg.transformation)
# pcd0.transform(M2) or self.apply_transform(xyz0, M2)
M2 = M @ reg.transformation
# o3d.draw_geometries([pcd0, pcd1])
# write to a file
np.save(filename, M2)
else:
M2 = np.load(filename,allow_pickle=True)
kitti_icp_cache[key] = M2
else:
M2 = kitti_icp_cache[key]
if self.random_rotation:
T0 = sample_random_trans(xyz0, self.randg, np.pi / 4)
T1 = sample_random_trans(xyz1, self.randg, np.pi / 4)
trans = T1 @ M2 @ np.linalg.inv(T0)
xyz0 = self.apply_transform(xyz0, T0)
xyz1 = self.apply_transform(xyz1, T1)
else:
trans = M2
matching_search_voxel_size = self.matching_search_voxel_size
if self.random_scale and random.random() < 0.95:
scale = self.min_scale + \
(self.max_scale - self.min_scale) * random.random()
matching_search_voxel_size *= scale
xyz0 = scale * xyz0
xyz1 = scale * xyz1
# Voxelization
xyz0_th = torch.from_numpy(xyz0)
xyz1_th = torch.from_numpy(xyz1)
sel0 = ME.utils.sparse_quantize(xyz0_th / self.voxel_size, return_index=True)
sel1 = ME.utils.sparse_quantize(xyz1_th / self.voxel_size, return_index=True)
# Make point clouds using voxelized points
pcd0 = make_open3d_point_cloud(xyz0[sel0])
pcd1 = make_open3d_point_cloud(xyz1[sel1])
# Get matches
matches = get_matching_indices(pcd0, pcd1, trans, matching_search_voxel_size)
if len(matches) < 1000:
raise ValueError(f"Insufficient correspondences (<1000) for pair {drive}, {t0}, {t1}")
# Get features
npts0 = len(sel0)
npts1 = len(sel1)
feats_train0, feats_train1 = [], []
unique_xyz0_th = xyz0_th[sel0]
unique_xyz1_th = xyz1_th[sel1]
feats_train0.append(torch.ones((npts0, 1)))
feats_train1.append(torch.ones((npts1, 1)))
feats0 = torch.cat(feats_train0, 1)
feats1 = torch.cat(feats_train1, 1)
coords0 = torch.floor(unique_xyz0_th / self.voxel_size)
coords1 = torch.floor(unique_xyz1_th / self.voxel_size)
if self.transform:
coords0, feats0 = self.transform(coords0, feats0)
coords1, feats1 = self.transform(coords1, feats1)
if self.get_feature:
feats0 = np.load(featname0, allow_pickle=True)
feats1 = np.load(featname1, allow_pickle=True)
feats0 = feats0[sel0]
feats1 = feats1[sel1]
feats0 = normalize(feats0)
feats1 = normalize(feats1)
return (unique_xyz0_th.float(), unique_xyz1_th.float(), coords0.int(),
coords1.int(), feats0.float(), feats1.float(), matches, trans)
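The ground-truth transform in `__getitem__` is built from the transposed, cached `velo2cam` as `(velo2cam @ P0.T @ inv(P1.T) @ inv(velo2cam)).T`. Since `(ABCD).T = D.T C.T B.T A.T`, this is algebraically the velodyne-frame relative pose `inv(V) @ inv(P1) @ P0 @ V`. A small numeric check with random-ish rigid transforms (toy data, not the dataset's calibration):

```python
import numpy as np

rng = np.random.RandomState(0)

def random_se3(rng):
    # a simple rigid transform (z-rotation plus translation) for the demonstration
    A = np.eye(4)
    theta = rng.rand() * np.pi
    A[:3, :3] = np.array([[np.cos(theta), -np.sin(theta), 0],
                          [np.sin(theta),  np.cos(theta), 0],
                          [0, 0, 1]])
    A[:3, 3] = rng.rand(3)
    return A

V, P0, P1 = random_se3(rng), random_se3(rng), random_se3(rng)
Vt = V.T  # the class caches velo2cam already transposed

# the expression used when building the ICP initialization ...
M = (Vt @ P0.T @ np.linalg.inv(P1.T) @ np.linalg.inv(Vt)).T
# ... equals the relative pose expressed in the velodyne frame
expected = np.linalg.inv(V) @ np.linalg.inv(P1) @ P0 @ V
assert np.allclose(M, expected)
```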
class KITTINMPairDataset(KITTIPairDataset):
r"""
Generate KITTI pairs within N meter distance
"""
MIN_DIST = 10
def __init__(self,
phase,
transform=None,
random_rotation=True,
random_scale=True,
manual_seed=False,
config=None):
if self.IS_ODOMETRY:
self.root = root = os.path.join(config.kitti_root, 'dataset')
random_rotation = self.TEST_RANDOM_ROTATION
else:
self.date = config.kitti_date
self.root = root = os.path.join(config.kitti_root, self.date)
self.icp_path = os.path.join(config.kitti_root, 'icp')
self.get_feature = config.get_feature
pathlib.Path(self.icp_path).mkdir(parents=True, exist_ok=True)
PairDataset.__init__(self, phase, transform, random_rotation, random_scale,
manual_seed, config)
logging.info(f"Loading the subset {phase} from {root}")
subset_names = open(self.DATA_FILES[phase]).read().split()
if self.IS_ODOMETRY:
for dirname in subset_names:
drive_id = int(dirname)
fnames = glob.glob(root + '/sequences/%02d/velodyne/*.bin' % drive_id)
#print(fnames)
assert len(fnames) > 0, f"Make sure that the path {root} has data {dirname}"
inames = sorted([int(os.path.split(fname)[-1][:-4]) for fname in fnames])
#print(inames)
all_odo = self.get_video_odometry(drive_id, return_all=True)
all_pos = np.array([self.odometry_to_positions(odo) for odo in all_odo])
Ts = all_pos[:, :3, 3]
pdist = (Ts.reshape(1, -1, 3) - Ts.reshape(-1, 1, 3))**2
pdist = np.sqrt(pdist.sum(-1))
valid_pairs = pdist > self.MIN_DIST
#print(np.shape(valid_pairs))
curr_time = inames[0]
while curr_time in inames:
# Find the min index
#print(curr_time)
next_time = np.where(valid_pairs[curr_time][curr_time:curr_time + 100])[0]
if len(next_time) == 0:
curr_time += 1
else:
# Follow https://github.com/yewzijian/3DFeatNet/blob/master/scripts_data_processing/kitti/process_kitti_data.m#L44
next_time = next_time[0] + curr_time - 1
if next_time in inames:
#print("Appending Files")
self.files.append((drive_id, curr_time, next_time))
curr_time = next_time + 1
else:
for dirname in subset_names:
drive_id = int(dirname)
fnames = glob.glob(root + '/' + self.date +
'_drive_%04d_sync/velodyne_points/data/*.bin' % drive_id)
assert len(fnames) > 0, f"Make sure that the path {root} has data {dirname}"
inames = sorted([int(os.path.split(fname)[-1][:-4]) for fname in fnames])
all_odo = self.get_video_odometry(drive_id, return_all=True)
all_pos = np.array([self.odometry_to_positions(odo) for odo in all_odo])
Ts = all_pos[:, 0, :3]
pdist = (Ts.reshape(1, -1, 3) - Ts.reshape(-1, 1, 3))**2
pdist = np.sqrt(pdist.sum(-1))
for start_time in inames:
pair_time = np.where(
pdist[start_time][start_time:start_time + 100] > self.MIN_DIST)[0]
if len(pair_time) == 0:
continue
else:
pair_time = pair_time[0] + start_time
if pair_time in inames:
self.files.append((drive_id, start_time, pair_time))
if self.IS_ODOMETRY:
# Remove problematic sequence
for item in [
(8, 15, 58),
]:
if item in self.files:
logging.info(f"Removing problematic pair {item}")
self.files.pop(self.files.index(item))
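`KITTINMPairDataset` selects frame pairs by thresholding an all-pairs distance matrix built with NumPy broadcasting; a self-contained version of that computation on made-up translations:

```python
import numpy as np

Ts = np.array([[0.0, 0.0, 0.0],
               [3.0, 4.0, 0.0],
               [6.0, 8.0, 0.0]])                 # per-frame translations
diff = Ts.reshape(1, -1, 3) - Ts.reshape(-1, 1, 3)
pdist = np.sqrt((diff ** 2).sum(-1))             # (N, N) Euclidean distances
assert np.isclose(pdist[0, 1], 5.0)
assert np.isclose(pdist[0, 2], 10.0)

MIN_DIST = 6.0
valid_pairs = pdist > MIN_DIST                   # boolean mask of usable pairs
assert valid_pairs[0, 2] and not valid_pairs[0, 1]
```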
#########################################################
#
# KAIST_DATASET_Left
#
# added config: kaist_root,kaist_date,kaist_max_time_diff
# train_kaist.txt,val_kaist.txt,test_kaist.txt
#
#########################################################
class KAISTLPairDataset(PairDataset):
AUGMENT = None
DATA_FILES = {
'train': './config/train_kaist.txt',
'val': './config/val_kaist.txt',
'test': './config/test_kaist.txt'
}
TEST_RANDOM_ROTATION = False
IS_ODOMETRY = True
def __init__(self,
phase,
transform=None,
random_rotation=True,
random_scale=True,
manual_seed=False,
config=None):
# For evaluation, use the odometry training split, following the 3DFeatNet evaluation protocol
if self.IS_ODOMETRY:
self.root = root = config.kaist_root
random_rotation = self.TEST_RANDOM_ROTATION
else:
self.date = config.kaist_date
self.root = root = os.path.join(config.kaist_root, self.date)
self.icp_path = os.path.join(config.kaist_root, 'icp')
pathlib.Path(self.icp_path).mkdir(parents=True, exist_ok=True)
PairDataset.__init__(self, phase, transform, random_rotation, random_scale,
manual_seed, config)
logging.info(f"Loading the subset {phase} from {root}")
# Use the kaist root
self.max_time_diff = max_time_diff = config.kaist_max_time_diff
subset_names = open(self.DATA_FILES[phase]).read().split()
for dirname in subset_names:
drive_id = int(dirname)
inames = self.get_all_scan_ids(drive_id)
for start_time in inames:
for time_diff in range(2, max_time_diff):
pair_time = time_diff + start_time
if pair_time in inames:
self.files.append((drive_id, start_time, pair_time))
def get_all_scan_ids(self, drive_id):
load_path = self.root + '/%02d/' % drive_id + 'left_valid.csv'
valid_time = np.genfromtxt(load_path,delimiter=',')
fnames = []
for time in valid_time:
if self.IS_ODOMETRY:
fnames.extend(glob.glob(self.root + '/%02d/VLP_left/' % drive_id + '%06d.bin' % time))
else:
fnames.extend(glob.glob(self.root + '/' + self.date +
'_drive_%04d_sync/velodyne_points/data/*.bin' % drive_id))
assert len(
fnames) > 0, f"Make sure that the path {self.root} has drive id: {drive_id}"
inames = [int(os.path.split(fname)[-1][:-4]) for fname in fnames]
return inames
@property
def velo2cam(self):
try:
velo2cam = self._velo2cam
except AttributeError:
R = np.array([
7.533745e-03, -9.999714e-01, -6.166020e-04, 1.480249e-02, 7.280733e-04,
-9.998902e-01, 9.998621e-01, 7.523790e-03, 1.480755e-02
]).reshape(3, 3)
T = np.array([-4.069766e-03, -7.631618e-02, -2.717806e-01]).reshape(3, 1)
velo2cam = np.hstack([R, T])
self._velo2cam = np.vstack((velo2cam, [0, 0, 0, 1])).T
return self._velo2cam
def get_video_odometry(self, drive, indices=None, ext='.txt', return_all=False):
if self.IS_ODOMETRY:
#logging.info(f"Drive id is {drive}")
data_path = self.root + '/%02d/VLP_left_pose.csv' % drive
#print(data_path)
if data_path not in kaist_cache:
kaist_cache[data_path] = np.genfromtxt(data_path,delimiter=',')
#print(np.shape(kaist_cache[data_path]))
if return_all:
return kaist_cache[data_path]
else:
return kaist_cache[data_path][indices]
else:
data_path = self.root + '/' + self.date + '_drive_%04d_sync/oxts/data' % drive
odometry = []
if indices is None:
fnames = glob.glob(self.root + '/' + self.date +
'_drive_%04d_sync/velodyne_points/data/*.bin' % drive)
indices = sorted([int(os.path.split(fname)[-1][:-4]) for fname in fnames])
for index in indices:
filename = os.path.join(data_path, '%010d%s' % (index, ext))
if filename not in kaist_cache:
kaist_cache[filename] = np.genfromtxt(filename)
odometry.append(kaist_cache[filename])
odometry = np.array(odometry)
return odometry
def odometry_to_positions(self, odometry):
if self.IS_ODOMETRY:
T_w_cam0 = odometry.reshape(3, 4)
T_w_cam0 = np.vstack((T_w_cam0, [0, 0, 0, 1]))
return T_w_cam0
else:
lat, lon, alt, roll, pitch, yaw = odometry.T[:6]
R = 6378137 # Earth's radius in metres
# convert to metres
lat, lon = np.deg2rad(lat), np.deg2rad(lon)
mx = R * lon * np.cos(lat)
my = R * lat
times = odometry.T[-1]
return np.vstack([mx, my, alt, roll, pitch, yaw, times]).T
def rot3d(self, axis, angle):
ei = np.ones(3, dtype='bool')
ei[axis] = 0
i = np.nonzero(ei)[0]
m = np.eye(3)
c, s = np.cos(angle), np.sin(angle)
m[i[0], i[0]] = c
m[i[0], i[1]] = -s
m[i[1], i[0]] = s
m[i[1], i[1]] = c
return m
def pos_transform(self, pos):
x, y, z, rx, ry, rz, _ = pos[0]
RT = np.eye(4)
RT[:3, :3] = np.dot(np.dot(self.rot3d(0, rx), self.rot3d(1, ry)), self.rot3d(2, rz))
RT[:3, 3] = [x, y, z]
return RT
def get_position_transform(self, pos0, pos1, invert=False):
T0 = self.pos_transform(pos0)
T1 = self.pos_transform(pos1)
return (np.dot(T1, np.linalg.inv(T0)).T if not invert else np.dot(
np.linalg.inv(T1), T0).T)
def _get_velodyne_fn(self, drive, t):
if self.IS_ODOMETRY:
fname = self.root + '/%02d/VLP_left/%06d.bin' % (drive, t)
else:
fname = self.root + \
'/' + self.date + '_drive_%04d_sync/velodyne_points/data/%010d.bin' % (
drive, t)
return fname
def __getitem__(self, idx):
drive = self.files[idx][0]
inames = self.get_all_scan_ids(drive)
t0, t1 = self.files[idx][1], self.files[idx][2]
t0_odo,t1_odo = self.files[idx][1] - inames[0],self.files[idx][2] - inames[0]
all_odometry = self.get_video_odometry(drive, [t0_odo, t1_odo])
positions = [self.odometry_to_positions(odometry) for odometry in all_odometry]
fname0 = self._get_velodyne_fn(drive, t0)
fname1 = self._get_velodyne_fn(drive, t1)
# XYZ and reflectance
xyzr0 = np.fromfile(fname0, dtype=np.float32).reshape(-1, 4)
xyzr1 = np.fromfile(fname1, dtype=np.float32).reshape(-1, 4)
xyz0 = xyzr0[:, :3]
xyz1 = xyzr1[:, :3]
key = '%d_%d_%d' % (drive, t0, t1)
filename = self.icp_path + '/' + key + '.npy'
if key not in kaist_icp_cache:
if not os.path.exists(filename):
# work on the downsampled xyzs, 0.05m == 5cm
sel0 = ME.utils.sparse_quantize(xyz0 / 0.05, return_index=True)
sel1 = ME.utils.sparse_quantize(xyz1 / 0.05, return_index=True)
M = (self.velo2cam @ positions[0].T @ np.linalg.inv(positions[1].T)
@ np.linalg.inv(self.velo2cam)).T
xyz0_t = self.apply_transform(xyz0[sel0], M)
pcd0 = make_open3d_point_cloud(xyz0_t)
pcd1 = make_open3d_point_cloud(xyz1[sel1])
reg = o3d.registration.registration_icp(
pcd0, pcd1, 0.2, np.eye(4),
o3d.registration.TransformationEstimationPointToPoint(),
o3d.registration.ICPConvergenceCriteria(max_iteration=200))
pcd0.transform(reg.transformation)
# pcd0.transform(M2) or self.apply_transform(xyz0, M2)
M2 = M @ reg.transformation
# o3d.draw_geometries([pcd0, pcd1])
# write to a file
np.save(filename, M2)
else:
M2 = np.load(filename,allow_pickle=True)
kaist_icp_cache[key] = M2
else:
M2 = kaist_icp_cache[key]
if self.random_rotation:
T0 = sample_random_trans(xyz0, self.randg, np.pi / 4)
T1 = sample_random_trans(xyz1, self.randg, np.pi / 4)
trans = T1 @ M2 @ np.linalg.inv(T0)
xyz0 = self.apply_transform(xyz0, T0)
xyz1 = self.apply_transform(xyz1, T1)
else:
trans = M2
matching_search_voxel_size = self.matching_search_voxel_size
if self.random_scale and random.random() < 0.95:
scale = self.min_scale + \
(self.max_scale - self.min_scale) * random.random()
matching_search_voxel_size *= scale
xyz0 = scale * xyz0
xyz1 = scale * xyz1
# Voxelization
xyz0_th = torch.from_numpy(xyz0)
xyz1_th = torch.from_numpy(xyz1)
sel0 = ME.utils.sparse_quantize(xyz0_th / self.voxel_size, return_index=True)
sel1 = ME.utils.sparse_quantize(xyz1_th / self.voxel_size, return_index=True)
# Make point clouds using voxelized points
pcd0 = make_open3d_point_cloud(xyz0[sel0])
pcd1 = make_open3d_point_cloud(xyz1[sel1])
# Get matches
matches = get_matching_indices(pcd0, pcd1, trans, matching_search_voxel_size)
#if len(matches) < 1000:
#raise ValueError(f"{drive}, {t0}, {t1}")
#return
# Get features
npts0 = len(sel0)
npts1 = len(sel1)
feats_train0, feats_train1 = [], []
unique_xyz0_th = xyz0_th[sel0]
unique_xyz1_th = xyz1_th[sel1]
feats_train0.append(torch.ones((npts0, 1)))
feats_train1.append(torch.ones((npts1, 1)))
feats0 = torch.cat(feats_train0, 1)
feats1 = torch.cat(feats_train1, 1)
coords0 = torch.floor(unique_xyz0_th / self.voxel_size)
coords1 = torch.floor(unique_xyz1_th / self.voxel_size)
if self.transform:
coords0, feats0 = self.transform(coords0, feats0)
coords1, feats1 = self.transform(coords1, feats1)
return (unique_xyz0_th.float(), unique_xyz1_th.float(), coords0.int(),
coords1.int(), feats0.float(), feats1.float(), matches, trans)
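The `Slerp` and `interp1d` imports at the top of the file suggest `kaist_interpolate` resamples the global poses to the LiDAR scan timestamps. A minimal sketch of that pattern with hypothetical timestamps (rotations interpolated spherically, translations linearly):

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp
from scipy.interpolate import interp1d

pose_times = np.array([0.0, 1.0])                    # hypothetical pose timestamps
rots = Rotation.from_euler('z', [0.0, 90.0], degrees=True)
trans = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])

slerp = Slerp(pose_times, rots)                      # rotation interpolator
lerp = interp1d(pose_times, trans, axis=0)           # translation interpolator

t = 0.5                                              # a LiDAR scan timestamp to query
rot_t = slerp([t])[0]
trans_t = lerp(t)
assert np.isclose(rot_t.as_euler('zyx', degrees=True)[0], 45.0)
assert np.allclose(trans_t, [0.5, 0.0, 0.0])
```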
class KAISTLNMPairDataset(KAISTLPairDataset):
r"""
Generate KAIST pairs within N meter distance
"""
MIN_DIST = 10
def __init__(self,
phase,
transform=None,
random_rotation=True,
random_scale=True,
manual_seed=False,
config=None):
if self.IS_ODOMETRY:
self.root = root = config.kaist_root
random_rotation = self.TEST_RANDOM_ROTATION
else:
self.date = config.kaist_date
self.root = root = os.path.join(config.kaist_root, self.date)
self.icp_path = os.path.join(config.kaist_root, 'icp')
pathlib.Path(self.icp_path).mkdir(parents=True, exist_ok=True)
PairDataset.__init__(self, phase, transform, random_rotation, random_scale,
manual_seed, config)
logging.info(f"Loading the subset {phase} from {root}")
subset_names = open(self.DATA_FILES[phase]).read().split()
if self.IS_ODOMETRY:
for dirname in subset_names:
drive_id = int(dirname)
logging.info(f"Processing Sequence {drive_id}")
self.kaist_interpolate(left=True,data_root=root,drive_id=drive_id)
load_path = root + '/%02d/' % drive_id + 'left_valid.csv'
valid_time = np.genfromtxt(load_path,delimiter=',')
fnames = []
for time in valid_time:
fnames.extend(glob.glob(self.root + '/%02d/VLP_left/%06d.bin' % (drive_id, time)))
#print(fnames)
assert len(fnames) > 0, f"Make sure that the path {root} has data {dirname}"
inames = sorted([int(os.path.split(fname)[-1][:-4]) for fname in fnames])
#print(inames)
all_odo = self.get_video_odometry(drive_id, return_all=True)
#print(all_odo.shape)
all_pos = np.array([self.odometry_to_positions(odo) for odo in all_odo])
logging.info(f"Position Acquisition Done")
Ts = all_pos[:, :3, 3]
pdist = (Ts.reshape(1, -1, 3) - Ts.reshape(-1, 1, 3))**2
pdist = np.sqrt(pdist.sum(-1))
valid_pairs = pdist > self.MIN_DIST
# Work in indices shifted so that the first valid scan is index 0,
# matching how curr_time indexes valid_pairs below.
curr_time = 0
loop_names = np.asarray(inames) - inames[0]
while curr_time in loop_names:
# Find the min index
next_time = np.where(valid_pairs[curr_time][curr_time:curr_time + 100])[0]
if len(next_time) == 0:
curr_time += 1
else:
# Follow https://github.com/yewzijian/3DFeatNet/blob/master/scripts_data_processing/kitti/process_kitti_data.m#L44
# (the -1 ports that MATLAB script's 1-based indexing)
next_time = next_time[0] + curr_time - 1
if next_time in inames:
if self.valid_pair(drive_id,curr_time + inames[0],next_time + inames[0], inames[0]):
self.files.append((drive_id, curr_time + inames[0], next_time + inames[0]))
curr_time = next_time + 1
else:
for dirname in subset_names:
drive_id = int(dirname)
fnames = glob.glob(root + '/' + self.date +
'_drive_%04d_sync/velodyne_points/data/*.bin' % drive_id)
assert len(fnames) > 0, f"Make sure that the path {root} has data {dirname}"
inames = sorted([int(os.path.split(fname)[-1][:-4]) for fname in fnames])
all_odo = self.get_video_odometry(drive_id, return_all=True)
all_pos = np.array([self.odometry_to_positions(odo) for odo in all_odo])
Ts = all_pos[:, 0, :3]
pdist = (Ts.reshape(1, -1, 3) - Ts.reshape(-1, 1, 3))**2
pdist = np.sqrt(pdist.sum(-1))
for start_time in inames:
pair_time = np.where(
pdist[start_time][start_time:start_time + 100] > self.MIN_DIST)[0]
if len(pair_time) == 0:
continue
else:
pair_time = pair_time[0] + start_time
if pair_time in inames:
self.files.append((drive_id, start_time, pair_time))
logging.info(f"length of files found {len(self.files)}")
def kaist_interpolate(self,left=None,data_root=None,drive_id=None):
pose_path = data_root + '/' + '%02d/' % drive_id + 'global_pose.csv'
pose = np.genfromtxt(pose_path,delimiter=',')
if left:
stamp_path = data_root + '/' + '%02d/' % drive_id + 'VLP_left_stamp.csv'
write_VLP_path = data_root + '/' + '%02d/' % drive_id + 'VLP_left_pose.csv'
write_valid_path = data_root + '/' + '%02d/' % drive_id + 'left_valid.csv'
time_stamp = np.genfromtxt(stamp_path,delimiter=',')
else:
stamp_path = data_root + '/' + '%02d/' % drive_id + 'VLP_right_stamp.csv'
write_VLP_path = data_root + '/' + '%02d/' % drive_id + 'VLP_right_pose.csv'
write_valid_path = data_root + '/' + '%02d/' % drive_id + 'right_valid.csv'
time_stamp = np.genfromtxt(stamp_path,delimiter=',')
time = pose[:,0]
pose = pose[:,1:13]
pose = pose.reshape(-1,3,4)
rotate = pose[:,:,0:3]
translation = pose[:,:,3]
valid_start = (time_stamp > np.min(time)).astype(int)
valid_end = (time_stamp < np.max(time)).astype(int)
valid_idx = np.where(valid_start*valid_end == 1)
valid_time = time_stamp[valid_idx]
# interpolate translation matrix
trans_interpolate = interp1d(time,translation,kind='linear',axis=0)
trans = trans_interpolate(valid_time).reshape(valid_time.shape[0],3,1)
# interpolate rotation matrix
slerp = Slerp(time,R.from_matrix(rotate))
rotate = slerp(valid_time).as_matrix()
VLP_pose = np.concatenate((rotate,trans),axis=2).reshape(valid_time.shape[0],12)
valid_idx = np.asarray(valid_idx).T
np.savetxt(write_VLP_path,VLP_pose,delimiter=',')
np.savetxt(write_valid_path,valid_idx,delimiter=',')
return
def valid_pair(self,drive=None,t0=None,t1=None,offset=None):
all_odometry = self.get_video_odometry(drive, [t0-offset, t1-offset])
positions = [self.odometry_to_positions(odometry) for odometry in all_odometry]
fname0 = self.root + '/%02d/VLP_left/%06d.bin' % (drive, t0)
fname1 = self.root + '/%02d/VLP_left/%06d.bin' % (drive, t1)
xyzr0 = np.fromfile(fname0, dtype=np.float32).reshape(-1, 4)
xyzr1 = np.fromfile(fname1, dtype=np.float32).reshape(-1, 4)
xyz0 = xyzr0[:, :3]
xyz1 = xyzr1[:, :3]
key = '%d_%d_%d' % (drive, t0, t1)
filename = self.icp_path + '/' + key + '.npy'
if key not in kaist_icp_cache:
if not os.path.exists(filename):
# work on the downsampled xyzs, 0.05m == 5cm
sel0 = ME.utils.sparse_quantize(xyz0 / 0.05, return_index=True)
sel1 = ME.utils.sparse_quantize(xyz1 / 0.05, return_index=True)
M = (self.velo2cam @ positions[0].T @ np.linalg.inv(positions[1].T)
@ np.linalg.inv(self.velo2cam)).T
xyz0_t = self.apply_transform(xyz0[sel0], M)
pcd0 = make_open3d_point_cloud(xyz0_t)
pcd1 = make_open3d_point_cloud(xyz1[sel1])
# Legacy Open3D API; newer releases expose this under o3d.pipelines.registration
reg = o3d.registration.registration_icp(
pcd0, pcd1, 0.2, np.eye(4),
o3d.registration.TransformationEstimationPointToPoint(),
o3d.registration.ICPConvergenceCriteria(max_iteration=200))
pcd0.transform(reg.transformation)
# pcd0.transform(M2) or self.apply_transform(xyz0, M2)
M2 = M @ reg.transformation
# o3d.draw_geometries([pcd0, pcd1])
# write to a file
np.save(filename, M2)
else:
M2 = np.load(filename,allow_pickle=True)
kaist_icp_cache[key] = M2
else:
M2 = kaist_icp_cache[key]
if self.random_rotation:
T0 = sample_random_trans(xyz0, self.randg, np.pi / 4)
T1 = sample_random_trans(xyz1, self.randg, np.pi / 4)
trans = T1 @ M2 @ np.linalg.inv(T0)
xyz0 = self.apply_transform(xyz0, T0)
xyz1 = self.apply_transform(xyz1, T1)
else:
trans = M2
matching_search_voxel_size = self.matching_search_voxel_size
if self.random_scale and random.random() < 0.95:
scale = self.min_scale + \
(self.max_scale - self.min_scale) * random.random()
matching_search_voxel_size *= scale
xyz0 = scale * xyz0
xyz1 = scale * xyz1
xyz0_th = torch.from_numpy(xyz0)
xyz1_th = torch.from_numpy(xyz1)
sel0 = ME.utils.sparse_quantize(xyz0_th / self.voxel_size, return_index=True)
sel1 = ME.utils.sparse_quantize(xyz1_th / self.voxel_size, return_index=True)
# Make point clouds using voxelized points
pcd0 = make_open3d_point_cloud(xyz0[sel0])
pcd1 = make_open3d_point_cloud(xyz1[sel1])
# Get matches
matches = get_matching_indices(pcd0, pcd1, trans, matching_search_voxel_size)
if len(matches) < 1000:
return False
else:
return True
#if self.IS_ODOMETRY:
# Remove problematic sequence
# for item in [
# (8, 15, 58),
# ]:
# print("items are ",item)
# if item in self.files:
# self.files.pop(self.files.index(item))
#########################################################
#
# KAIST_DATASET_Right
#
# added config: kaist_root,kaist_date,kaist_max_time_diff
# train_kaist.txt,val_kaist.txt,test_kaist.txt
#
#########################################################
class KAISTRPairDataset(PairDataset):
AUGMENT = None
DATA_FILES = {
'train': './config/train_kaist.txt',
'val': './config/val_kaist.txt',
'test': './config/test_kaist.txt'
}
TEST_RANDOM_ROTATION = False
IS_ODOMETRY = True
def __init__(self,
phase,
transform=None,
random_rotation=True,
random_scale=True,
manual_seed=False,
config=None):
# For evaluation, use the odometry dataset training following the 3DFeat eval method
if self.IS_ODOMETRY:
self.root = root = config.kaist_root
random_rotation = self.TEST_RANDOM_ROTATION
else:
self.date = config.kaist_date
self.root = root = os.path.join(config.kaist_root, self.date)
self.icp_path = os.path.join(config.kaist_root, 'icp')
pathlib.Path(self.icp_path).mkdir(parents=True, exist_ok=True)
PairDataset.__init__(self, phase, transform, random_rotation, random_scale,
manual_seed, config)
logging.info(f"Loading the subset {phase} from {root}")
# Use the kitti root
self.max_time_diff = max_time_diff = config.kaist_max_time_diff
subset_names = open(self.DATA_FILES[phase]).read().split()
for dirname in subset_names:
drive_id = int(dirname)
inames = self.get_all_scan_ids(drive_id)
for start_time in inames:
for time_diff in range(2, max_time_diff):
pair_time = time_diff + start_time
if pair_time in inames:
self.files.append((drive_id, start_time, pair_time))
def get_all_scan_ids(self, drive_id):
load_path = self.root + '/%02d/' % drive_id + 'right_valid.csv'
valid_time = np.genfromtxt(load_path,delimiter=',')
fnames = []
for time in valid_time:
if self.IS_ODOMETRY:
fnames.extend(glob.glob(self.root + '/%02d/VLP_right/' % drive_id + '%06d.bin' % time))
else:
fnames.extend(glob.glob(self.root + '/' + self.date +
'_drive_%04d_sync/velodyne_points/data/*.bin' % drive_id))
assert len(
fnames) > 0, f"Make sure that the path {self.root} has drive id: {drive_id}"
inames = [int(os.path.split(fname)[-1][:-4]) for fname in fnames]
return inames
@property
def velo2cam(self):
try:
velo2cam = self._velo2cam
except AttributeError:
R = np.array([
7.533745e-03, -9.999714e-01, -6.166020e-04, 1.480249e-02, 7.280733e-04,
-9.998902e-01, 9.998621e-01, 7.523790e-03, 1.480755e-02
]).reshape(3, 3)
T = np.array([-4.069766e-03, -7.631618e-02, -2.717806e-01]).reshape(3, 1)
velo2cam = np.hstack([R, T])
self._velo2cam = np.vstack((velo2cam, [0, 0, 0, 1])).T
return self._velo2cam
def get_video_odometry(self, drive, indices=None, ext='.txt', return_all=False):
if self.IS_ODOMETRY:
data_path = self.root + '/%02d/VLP_right_pose.csv' % drive
if data_path not in kaist_cache:
kaist_cache[data_path] = np.genfromtxt(data_path,delimiter=',')
if return_all:
return kaist_cache[data_path]
else:
return kaist_cache[data_path][indices]
else:
data_path = self.root + '/' + self.date + '_drive_%04d_sync/oxts/data' % drive
odometry = []
if indices is None:
fnames = glob.glob(self.root + '/' + self.date +
'_drive_%04d_sync/velodyne_points/data/*.bin' % drive)
indices = sorted([int(os.path.split(fname)[-1][:-4]) for fname in fnames])
for index in indices:
filename = os.path.join(data_path, '%010d%s' % (index, ext))
if filename not in kaist_cache:
kaist_cache[filename] = np.genfromtxt(filename)
odometry.append(kaist_cache[filename])
odometry = np.array(odometry)
return odometry
def odometry_to_positions(self, odometry):
if self.IS_ODOMETRY:
T_w_cam0 = odometry.reshape(3, 4)
T_w_cam0 = np.vstack((T_w_cam0, [0, 0, 0, 1]))
return T_w_cam0
else:
lat, lon, alt, roll, pitch, yaw = odometry.T[:6]
R = 6378137 # Earth's radius in metres
# convert to metres
lat, lon = np.deg2rad(lat), np.deg2rad(lon)
mx = R * lon * np.cos(lat)
my = R * lat
times = odometry.T[-1]
return np.vstack([mx, my, alt, roll, pitch, yaw, times]).T
def rot3d(self, axis, angle):
ei = np.ones(3, dtype='bool')
ei[axis] = 0
i = np.nonzero(ei)[0]
m = np.eye(3)
c, s = np.cos(angle), np.sin(angle)
m[i[0], i[0]] = c
m[i[0], i[1]] = -s
m[i[1], i[0]] = s
m[i[1], i[1]] = c
return m
def pos_transform(self, pos):
x, y, z, rx, ry, rz, _ = pos[0]
RT = np.eye(4)
RT[:3, :3] = np.dot(np.dot(self.rot3d(0, rx), self.rot3d(1, ry)), self.rot3d(2, rz))
RT[:3, 3] = [x, y, z]
return RT
def get_position_transform(self, pos0, pos1, invert=False):
T0 = self.pos_transform(pos0)
T1 = self.pos_transform(pos1)
return (np.dot(T1, np.linalg.inv(T0)).T if not invert else np.dot(
np.linalg.inv(T1), T0).T)
def _get_velodyne_fn(self, drive, t):
if self.IS_ODOMETRY:
fname = self.root + '/%02d/VLP_right/%06d.bin' % (drive, t)
else:
fname = self.root + \
'/' + self.date + '_drive_%04d_sync/velodyne_points/data/%010d.bin' % (
drive, t)
return fname
def __getitem__(self, idx):
drive = self.files[idx][0]
t0, t1 = self.files[idx][1], self.files[idx][2]
all_odometry = self.get_video_odometry(drive, [t0, t1])
positions = [self.odometry_to_positions(odometry) for odometry in all_odometry]
fname0 = self._get_velodyne_fn(drive, t0)
fname1 = self._get_velodyne_fn(drive, t1)
# XYZ and reflectance
xyzr0 = np.fromfile(fname0, dtype=np.float32).reshape(-1, 4)
xyzr1 = np.fromfile(fname1, dtype=np.float32).reshape(-1, 4)
xyz0 = xyzr0[:, :3]
xyz1 = xyzr1[:, :3]
key = '%d_%d_%d' % (drive, t0, t1)
filename = self.icp_path + '/' + key + '.npy'
if key not in kaist_icp_cache:
if not os.path.exists(filename):
# work on the downsampled xyzs, 0.05m == 5cm
sel0 = ME.utils.sparse_quantize(xyz0 / 0.05, return_index=True)
sel1 = ME.utils.sparse_quantize(xyz1 / 0.05, return_index=True)
M = (self.velo2cam @ positions[0].T @ np.linalg.inv(positions[1].T)
@ np.linalg.inv(self.velo2cam)).T
xyz0_t = self.apply_transform(xyz0[sel0], M)
pcd0 = make_open3d_point_cloud(xyz0_t)
pcd1 = make_open3d_point_cloud(xyz1[sel1])
reg = o3d.registration.registration_icp(
pcd0, pcd1, 0.2, np.eye(4),
o3d.registration.TransformationEstimationPointToPoint(),
o3d.registration.ICPConvergenceCriteria(max_iteration=200))
pcd0.transform(reg.transformation)
# pcd0.transform(M2) or self.apply_transform(xyz0, M2)
M2 = M @ reg.transformation
# o3d.draw_geometries([pcd0, pcd1])
# write to a file
np.save(filename, M2)
else:
M2 = np.load(filename,allow_pickle=True)
kaist_icp_cache[key] = M2
else:
M2 = kaist_icp_cache[key]
if self.random_rotation:
T0 = sample_random_trans(xyz0, self.randg, np.pi / 4)
T1 = sample_random_trans(xyz1, self.randg, np.pi / 4)
trans = T1 @ M2 @ np.linalg.inv(T0)
xyz0 = self.apply_transform(xyz0, T0)
xyz1 = self.apply_transform(xyz1, T1)
else:
trans = M2
matching_search_voxel_size = self.matching_search_voxel_size
if self.random_scale and random.random() < 0.95:
scale = self.min_scale + \
(self.max_scale - self.min_scale) * random.random()
matching_search_voxel_size *= scale
xyz0 = scale * xyz0
xyz1 = scale * xyz1
# Voxelization
xyz0_th = torch.from_numpy(xyz0)
xyz1_th = torch.from_numpy(xyz1)
sel0 = ME.utils.sparse_quantize(xyz0_th / self.voxel_size, return_index=True)
sel1 = ME.utils.sparse_quantize(xyz1_th / self.voxel_size, return_index=True)
# Make point clouds using voxelized points
pcd0 = make_open3d_point_cloud(xyz0[sel0])
pcd1 = make_open3d_point_cloud(xyz1[sel1])
# Get matches
matches = get_matching_indices(pcd0, pcd1, trans, matching_search_voxel_size)
if len(matches) < 1000:
raise ValueError(f"Too few matches ({len(matches)} < 1000) for pair {drive}, {t0}, {t1}")
# Get features
npts0 = len(sel0)
npts1 = len(sel1)
feats_train0, feats_train1 = [], []
unique_xyz0_th = xyz0_th[sel0]
unique_xyz1_th = xyz1_th[sel1]
feats_train0.append(torch.ones((npts0, 1)))
feats_train1.append(torch.ones((npts1, 1)))
feats0 = torch.cat(feats_train0, 1)
feats1 = torch.cat(feats_train1, 1)
coords0 = torch.floor(unique_xyz0_th / self.voxel_size)
coords1 = torch.floor(unique_xyz1_th / self.voxel_size)
if self.transform:
coords0, feats0 = self.transform(coords0, feats0)
coords1, feats1 = self.transform(coords1, feats1)
return (unique_xyz0_th.float(), unique_xyz1_th.float(), coords0.int(),
coords1.int(), feats0.float(), feats1.float(), matches, trans)
class KAISTRNMPairDataset(KAISTRPairDataset):
r"""
Generate KAIST (right sensor) pairs at least MIN_DIST metres apart.
"""
MIN_DIST = 10
def __init__(self,
phase,
transform=None,
random_rotation=True,
random_scale=True,
manual_seed=False,
config=None):
if self.IS_ODOMETRY:
self.root = root = config.kaist_root
random_rotation = self.TEST_RANDOM_ROTATION
else:
self.date = config.kaist_date
self.root = root = os.path.join(config.kaist_root, self.date)
self.icp_path = os.path.join(config.kaist_root, 'icp')
pathlib.Path(self.icp_path).mkdir(parents=True, exist_ok=True)
PairDataset.__init__(self, phase, transform, random_rotation, random_scale,
manual_seed, config)
logging.info(f"Loading the subset {phase} from {root}")
subset_names = open(self.DATA_FILES[phase]).read().split()
if self.IS_ODOMETRY:
for dirname in subset_names:
drive_id = int(dirname)
logging.info(f"Processing Sequence {drive_id}")
self.kaist_interpolate(left=False,data_root=root,drive_id=drive_id)
load_path = root + '/%02d/' % drive_id + 'right_valid.csv'
valid_time = np.genfromtxt(load_path,delimiter=',')
fnames = []
for time in valid_time:
fnames.extend(glob.glob(self.root + '/%02d/VLP_right/' % drive_id + '%06d.bin' % time))
assert len(fnames) > 0, f"Make sure that the path {root} has data {dirname}"
inames = sorted([int(os.path.split(fname)[-1][:-4]) for fname in fnames])
all_odo = self.get_video_odometry(drive_id, return_all=True)
all_pos = np.array([self.odometry_to_positions(odo) for odo in all_odo])
Ts = all_pos[:, :3, 3]
pdist = (Ts.reshape(1, -1, 3) - Ts.reshape(-1, 1, 3))**2
pdist = np.sqrt(pdist.sum(-1))
valid_pairs = pdist > self.MIN_DIST
# Work in indices shifted so that the first valid scan is index 0,
# matching how curr_time indexes valid_pairs below.
curr_time = 0
loop_names = np.asarray(inames) - inames[0]
while curr_time in loop_names:
# Find the min index
next_time = np.where(valid_pairs[curr_time][curr_time:curr_time + 100])[0]
if len(next_time) == 0:
curr_time += 1
else:
# Follow https://github.com/yewzijian/3DFeatNet/blob/master/scripts_data_processing/kitti/process_kitti_data.m#L44
# (the -1 ports that MATLAB script's 1-based indexing)
next_time = next_time[0] + curr_time - 1
if next_time in inames:
self.files.append((drive_id, curr_time + inames[0], next_time + inames[0]))
curr_time = next_time + 1
else:
for dirname in subset_names:
drive_id = int(dirname)
fnames = glob.glob(root + '/' + self.date +
'_drive_%04d_sync/velodyne_points/data/*.bin' % drive_id)
assert len(fnames) > 0, f"Make sure that the path {root} has data {dirname}"
inames = sorted([int(os.path.split(fname)[-1][:-4]) for fname in fnames])
all_odo = self.get_video_odometry(drive_id, return_all=True)
all_pos = np.array([self.odometry_to_positions(odo) for odo in all_odo])
Ts = all_pos[:, 0, :3]
pdist = (Ts.reshape(1, -1, 3) - Ts.reshape(-1, 1, 3))**2
pdist = np.sqrt(pdist.sum(-1))
for start_time in inames:
pair_time = np.where(
pdist[start_time][start_time:start_time + 100] > self.MIN_DIST)[0]
if len(pair_time) == 0:
continue
else:
pair_time = pair_time[0] + start_time
if pair_time in inames:
self.files.append((drive_id, start_time, pair_time))
def kaist_interpolate(self,left=True,data_root=None,drive_id=None):
pose_path = data_root + '/' + '%02d/' % drive_id + 'global_pose.csv'
pose = np.genfromtxt(pose_path,delimiter=',')
if left:
stamp_path = data_root + '/' + '%02d/' % drive_id + 'VLP_left_stamp.csv'
write_VLP_path = data_root + '/' + '%02d/' % drive_id + 'VLP_left_pose.csv'
write_valid_path = data_root + '/' + '%02d/' % drive_id + 'left_valid.csv'
time_stamp = np.genfromtxt(stamp_path,delimiter=',')
else:
stamp_path = data_root + '/' + '%02d/' % drive_id + 'VLP_right_stamp.csv'
write_VLP_path = data_root + '/' + '%02d/' % drive_id + 'VLP_right_pose.csv'
write_valid_path = data_root + '/' + '%02d/' % drive_id + 'right_valid.csv'
time_stamp = np.genfromtxt(stamp_path,delimiter=',')
time = pose[:,0]
pose = pose[:,1:13]
pose = pose.reshape(-1,3,4)
rotate = pose[:,:,0:3]
translation = pose[:,:,3]
valid_start = (time_stamp > np.min(time)).astype(int)
valid_end = (time_stamp < np.max(time)).astype(int)
valid_idx = np.where(valid_start*valid_end == 1)
valid_time = time_stamp[valid_idx]
# interpolate translation matrix
trans_interpolate = interp1d(time,translation,kind='linear',axis=0)
trans = trans_interpolate(valid_time).reshape(valid_time.shape[0],3,1)
# interpolate rotation matrix
slerp = Slerp(time,R.from_matrix(rotate))
rotate = slerp(valid_time).as_matrix()
VLP_pose = np.concatenate((rotate,trans),axis=2).reshape(valid_time.shape[0],12)
valid_idx = np.asarray(valid_idx).T
np.savetxt(write_VLP_path,VLP_pose,delimiter=',')
np.savetxt(write_valid_path,valid_idx,delimiter=',')
return
#if self.IS_ODOMETRY:
# Remove problematic sequence
# for item in [
# (8, 15, 58),
# ]:
# print("items are ",item)
# if item in self.files:
# self.files.pop(self.files.index(item))
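# `kaist_interpolate` above interpolates LiDAR poses between global-pose rows:
# translations linearly (interp1d) and rotations spherically (Slerp), then packs
# each result back into a 3x4 pose row. A minimal, self-contained sketch of that
# scheme using synthetic timestamps and poses (not KAIST data):
def _demo_pose_interpolation():
    import numpy as np
    from scipy.interpolate import interp1d
    from scipy.spatial.transform import Rotation, Slerp
    time = np.array([0.0, 1.0])
    # identity pose at t=0; 90-degree yaw plus a unit x-translation at t=1
    rotations = Rotation.from_euler('z', [0.0, 90.0], degrees=True)
    translations = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
    valid_time = np.array([0.5])
    trans = interp1d(time, translations, kind='linear', axis=0)(valid_time)
    rot = Slerp(time, rotations)(valid_time).as_matrix()
    pose = np.concatenate((rot, trans.reshape(-1, 3, 1)), axis=2)
    return pose.reshape(valid_time.shape[0], 12)  # one 3x4 row per query time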
class ThreeDMatchPairDataset(IndoorPairDataset):
OVERLAP_RATIO = 0.3
DATA_FILES = {
'train': './config/train_3dmatch.txt',
'val': './config/val_3dmatch.txt',
'test': './config/test_3dmatch.txt'
}
ALL_DATASETS = [
ThreeDMatchPairDataset, KITTIPairDataset, KITTINMPairDataset,
KAISTLNMPairDataset, KAISTRNMPairDataset, KAISTLPairDataset, KAISTRPairDataset
]
dataset_str_mapping = {d.__name__: d for d in ALL_DATASETS}
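# The *NMPairDataset classes above select training pairs by thresholding
# pairwise position distances (pdist > MIN_DIST). A minimal numpy sketch of
# that selection with synthetic positions along the x-axis (not real odometry):
def _demo_pair_selection(min_dist=10.0):
    import numpy as np
    Ts = np.array([[0.0, 0, 0], [3.0, 0, 0], [12.0, 0, 0]])
    pdist = np.sqrt(((Ts.reshape(1, -1, 3) - Ts.reshape(-1, 1, 3)) ** 2).sum(-1))
    valid_pairs = pdist > min_dist
    # indices of scans far enough from scan 0 to form a pair with it
    return pdist, np.where(valid_pairs[0])[0]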
def make_data_loader(config, phase, batch_size, num_threads=0, shuffle=None):
assert phase in ['train', 'trainval', 'val', 'test']
if shuffle is None:
shuffle = phase != 'test'
if config.dataset not in dataset_str_mapping:
logging.error(f"Dataset {config.dataset} does not exist in " +
", ".join(dataset_str_mapping.keys()))
raise ValueError(f"Unknown dataset: {config.dataset}")
Dataset = dataset_str_mapping[config.dataset]
use_random_scale = False
use_random_rotation = False
transforms = []
if phase in ['train', 'trainval']:
use_random_rotation = config.use_random_rotation
use_random_scale = config.use_random_scale
transforms += [t.Jitter()]
dset = Dataset(
phase,
transform=t.Compose(transforms),
random_scale=use_random_scale,
random_rotation=use_random_rotation,
config=config)
logging.info(f"Loaded {config.dataset} {phase} set with {len(dset)} pairs")
loader = torch.utils.data.DataLoader(
dset,
batch_size=batch_size,
shuffle=shuffle,
num_workers=num_threads,
collate_fn=collate_pair_fn,
pin_memory=False,
drop_last=True)
return loader
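# __getitem__ labels each augmented pair with trans = T1 @ M2 @ inv(T0): if M2
# registers the raw clouds, trans registers the randomly transformed ones. A
# self-contained numpy check of that composition with random rigid transforms
# (no dataset required):
def _demo_transform_composition():
    import numpy as np
    rng = np.random.RandomState(0)

    def random_rigid():
        q, _ = np.linalg.qr(rng.randn(3, 3))
        q *= np.sign(np.linalg.det(q))  # force a proper rotation (det = +1)
        T = np.eye(4)
        T[:3, :3] = q
        T[:3, 3] = rng.randn(3)
        return T

    def apply(xyz, T):
        return xyz @ T[:3, :3].T + T[:3, 3]

    M2, T0, T1 = random_rigid(), random_rigid(), random_rigid()
    xyz0 = rng.randn(100, 3)
    xyz1 = apply(xyz0, M2)                 # M2 aligns the raw clouds
    trans = T1 @ M2 @ np.linalg.inv(T0)    # pair label after augmentation
    return np.abs(apply(apply(xyz0, T0), trans) - apply(xyz1, T1)).max()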