hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
bb55e0398866729adf47fdc4b89975f652a45756 | 182 | py | Python | lesson-01/02/escape_character.py | minimum-hsu/tutorial-python | 667692e7cd13a8a4d061a4da530dc2dfe25ac1de | [
"MIT"
] | null | null | null | lesson-01/02/escape_character.py | minimum-hsu/tutorial-python | 667692e7cd13a8a4d061a4da530dc2dfe25ac1de | [
"MIT"
] | null | null | null | lesson-01/02/escape_character.py | minimum-hsu/tutorial-python | 667692e7cd13a8a4d061a4da530dc2dfe25ac1de | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
## see ASCII (https://www.asciitable.com)
## new line
print('hello\nworld')
## tab
print('hello\tpython')
## vertical tab
print('hello world', end = '\v')
| 14 | 41 | 0.648352 | 26 | 182 | 4.538462 | 0.807692 | 0.254237 | 0.220339 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006369 | 0.137363 | 182 | 12 | 42 | 15.166667 | 0.745223 | 0.472527 | 0 | 0 | 0 | 0 | 0.44186 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
247a2c0069f697bd608be9343c9444f8d6242c79 | 59 | py | Python | novel_stego_protocol/genkey.py | tkShir/EC521-Group6-Novel-Steganographic-Scheme | d01ff5b625d5ef85790451fa62e5c33f15f06f0d | [
"0BSD"
] | null | null | null | novel_stego_protocol/genkey.py | tkShir/EC521-Group6-Novel-Steganographic-Scheme | d01ff5b625d5ef85790451fa62e5c33f15f06f0d | [
"0BSD"
] | null | null | null | novel_stego_protocol/genkey.py | tkShir/EC521-Group6-Novel-Steganographic-Scheme | d01ff5b625d5ef85790451fa62e5c33f15f06f0d | [
"0BSD"
] | null | null | null | import os
print(os.urandom(32)) # change to secrets.token
| 14.75 | 47 | 0.745763 | 10 | 59 | 4.4 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.039216 | 0.135593 | 59 | 3 | 48 | 19.666667 | 0.823529 | 0.389831 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 5 |
24afe727999bede39f2d66304d17ea9ead35e0df | 40 | py | Python | dataset/__init__.py | Fyy10/UESTC-Thesis-DA | 6cb16efd1f80aa569c90874a806a62dec8afaec4 | [
"MIT"
] | null | null | null | dataset/__init__.py | Fyy10/UESTC-Thesis-DA | 6cb16efd1f80aa569c90874a806a62dec8afaec4 | [
"MIT"
] | null | null | null | dataset/__init__.py | Fyy10/UESTC-Thesis-DA | 6cb16efd1f80aa569c90874a806a62dec8afaec4 | [
"MIT"
] | null | null | null | from .DigitFive import DigitFiveDataset
| 20 | 39 | 0.875 | 4 | 40 | 8.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 40 | 1 | 40 | 40 | 0.972222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
24c7bbb62b8639a7681ec6934a9c73a0eea2fec1 | 92 | py | Python | service_2/create.py | JasonSinclair95/SFIA2 | e2657d266226dc3cd67ceeebd8148e72900ad66e | [
"MIT"
] | null | null | null | service_2/create.py | JasonSinclair95/SFIA2 | e2657d266226dc3cd67ceeebd8148e72900ad66e | [
"MIT"
] | null | null | null | service_2/create.py | JasonSinclair95/SFIA2 | e2657d266226dc3cd67ceeebd8148e72900ad66e | [
"MIT"
] | null | null | null | from application import db
from application.models import Car
db.drop_all()
db.create_all() | 18.4 | 34 | 0.815217 | 15 | 92 | 4.866667 | 0.6 | 0.410959 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108696 | 92 | 5 | 35 | 18.4 | 0.890244 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
24f3799412bd6466c1510123b982e26f88b2e88f | 184 | py | Python | algorithms/validAnagram/validAnagram.py | zhyu/leetcode | 3c2d85b4b7a497ceffac3e562ac1a468f1f6a4b0 | [
"MIT"
] | 5 | 2015-02-18T10:17:12.000Z | 2016-11-14T19:12:21.000Z | algorithms/validAnagram/validAnagram.py | zhyu/leetcode | 3c2d85b4b7a497ceffac3e562ac1a468f1f6a4b0 | [
"MIT"
] | null | null | null | algorithms/validAnagram/validAnagram.py | zhyu/leetcode | 3c2d85b4b7a497ceffac3e562ac1a468f1f6a4b0 | [
"MIT"
] | null | null | null | class Solution:
# @param {string} s
# @param {string} t
# @return {boolean}
def isAnagram(self, s, t):
return collections.Counter(s) == collections.Counter(t)
| 23 | 63 | 0.608696 | 22 | 184 | 5.090909 | 0.590909 | 0.196429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25 | 184 | 7 | 64 | 26.285714 | 0.811594 | 0.288043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
700d185a06c9e36ca9405e0c11ff185b70049072 | 256 | py | Python | lib/extensions/navigation_buttons_menu_only/startup.py | neurodiverseEsoteric/nimbus | a79a37d6001d84116831c9f3f99f5d9acfa7fd19 | [
"CC-BY-3.0"
] | 3 | 2015-11-04T10:48:12.000Z | 2020-07-12T06:46:27.000Z | lib/extensions/navigation_buttons_menu_only/startup.py | esotericDisciple/nimbus | a79a37d6001d84116831c9f3f99f5d9acfa7fd19 | [
"CC-BY-3.0"
] | null | null | null | lib/extensions/navigation_buttons_menu_only/startup.py | esotericDisciple/nimbus | a79a37d6001d84116831c9f3f99f5d9acfa7fd19 | [
"CC-BY-3.0"
] | 1 | 2021-05-05T13:56:49.000Z | 2021-05-05T13:56:49.000Z | self.toolBar.widgetForAction(self.backAction).setPopupMode(QToolButton.InstantPopup)
self.toolBar.widgetForAction(self.forwardAction).setPopupMode(QToolButton.InstantPopup)
self.toolBar.widgetForAction(self.upAction).setPopupMode(QToolButton.InstantPopup)
| 64 | 87 | 0.882813 | 24 | 256 | 9.416667 | 0.375 | 0.146018 | 0.345133 | 0.39823 | 0.575221 | 0.575221 | 0.575221 | 0 | 0 | 0 | 0 | 0 | 0.011719 | 256 | 3 | 88 | 85.333333 | 0.893281 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
703404180bc249fc8d7a7788495a86053c4cd7d7 | 234 | py | Python | what-is-the-total-amount/python/main.py | bayupuspanugraha/lets-solve-it | 851d962a12ff4a4a9eba758d1288811cb22817b8 | [
"MIT"
] | null | null | null | what-is-the-total-amount/python/main.py | bayupuspanugraha/lets-solve-it | 851d962a12ff4a4a9eba758d1288811cb22817b8 | [
"MIT"
] | null | null | null | what-is-the-total-amount/python/main.py | bayupuspanugraha/lets-solve-it | 851d962a12ff4a4a9eba758d1288811cb22817b8 | [
"MIT"
] | null | null | null | import simple_request as sr
import async_request as ar
if __name__ == "__main__":
print(f"(normal way) Total Amount: {sr.get_transactions(2, 8, 5, 50)}")
print(f"(async way) Total Amount: {ar.get_transactions(2, 8, 5, 50)}")
| 33.428571 | 75 | 0.692308 | 39 | 234 | 3.846154 | 0.564103 | 0.12 | 0.186667 | 0.226667 | 0.266667 | 0.266667 | 0 | 0 | 0 | 0 | 0 | 0.050761 | 0.15812 | 234 | 6 | 76 | 39 | 0.71066 | 0 | 0 | 0 | 0 | 0 | 0.551282 | 0.196581 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.4 | 0 | 0.4 | 0.4 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
704c130de62127b98cd7d08ef602cc89ba22c340 | 37 | py | Python | tests/__init__.py | ltiao/pynance | 1f170f9d32262eacf566a8d7647be04715c47dc1 | [
"MIT"
] | 1 | 2021-04-24T09:23:35.000Z | 2021-04-24T09:23:35.000Z | tests/__init__.py | ltiao/pynance | 1f170f9d32262eacf566a8d7647be04715c47dc1 | [
"MIT"
] | null | null | null | tests/__init__.py | ltiao/pynance | 1f170f9d32262eacf566a8d7647be04715c47dc1 | [
"MIT"
] | 1 | 2021-07-14T08:55:39.000Z | 2021-07-14T08:55:39.000Z | """Unit test package for pynance."""
| 18.5 | 36 | 0.675676 | 5 | 37 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.135135 | 37 | 1 | 37 | 37 | 0.78125 | 0.810811 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
7061bb0b9cb9298d64547af566471de47f613fd6 | 84 | py | Python | enthought/enable/wx/cairo.py | enthought/etsproxy | 4aafd628611ebf7fe8311c9d1a0abcf7f7bb5347 | [
"BSD-3-Clause"
] | 3 | 2016-12-09T06:05:18.000Z | 2018-03-01T13:00:29.000Z | enthought/enable/wx/cairo.py | enthought/etsproxy | 4aafd628611ebf7fe8311c9d1a0abcf7f7bb5347 | [
"BSD-3-Clause"
] | 1 | 2020-12-02T00:51:32.000Z | 2020-12-02T08:48:55.000Z | enthought/enable/wx/cairo.py | enthought/etsproxy | 4aafd628611ebf7fe8311c9d1a0abcf7f7bb5347 | [
"BSD-3-Clause"
] | null | null | null | # proxy module
from __future__ import absolute_import
from enable.wx.cairo import *
| 21 | 38 | 0.821429 | 12 | 84 | 5.333333 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130952 | 84 | 3 | 39 | 28 | 0.876712 | 0.142857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
706a8d20890fbf0ea75b224b3fa7dd3907eafd53 | 210 | py | Python | sigma/core/__init__.py | suzuki-shunsuke/pysigma.core | 89fe0d99e8cba015aad245dfda8465af99d1ab9d | [
"MIT"
] | 1 | 2022-02-09T06:38:52.000Z | 2022-02-09T06:38:52.000Z | sigma/core/__init__.py | suzuki-shunsuke/pysigma.core | 89fe0d99e8cba015aad245dfda8465af99d1ab9d | [
"MIT"
] | null | null | null | sigma/core/__init__.py | suzuki-shunsuke/pysigma.core | 89fe0d99e8cba015aad245dfda8465af99d1ab9d | [
"MIT"
] | null | null | null | from .field import option, Field, FieldMeta
from .error import SigmaError, ErrorContainer, UnitError
from .model import Model, ModelMeta
from .util import validate, asdict
from .validator import FieldValidator
| 35 | 56 | 0.82381 | 26 | 210 | 6.653846 | 0.615385 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12381 | 210 | 5 | 57 | 42 | 0.940217 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
708ab42984ee965f86b7e3e52ee46beb714dbf51 | 26 | py | Python | kaolin/graphics/nmr/__init__.py | Bob-Yeah/kaolin | 7ad34f8158000499a30b8dfa14fb3ed86d2e57a6 | [
"ECL-2.0",
"Apache-2.0"
] | 90 | 2020-08-15T16:14:45.000Z | 2022-01-22T10:24:13.000Z | kaolin/graphics/nmr/__init__.py | Bob-Yeah/kaolin | 7ad34f8158000499a30b8dfa14fb3ed86d2e57a6 | [
"ECL-2.0",
"Apache-2.0"
] | 11 | 2020-09-07T17:31:18.000Z | 2021-11-25T12:07:30.000Z | kaolin/graphics/nmr/__init__.py | Bob-Yeah/kaolin | 7ad34f8158000499a30b8dfa14fb3ed86d2e57a6 | [
"ECL-2.0",
"Apache-2.0"
] | 13 | 2020-09-03T04:25:50.000Z | 2021-12-23T08:23:33.000Z | from .rasterizer import *
| 13 | 25 | 0.769231 | 3 | 26 | 6.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 26 | 1 | 26 | 26 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
70a9d22ba17f6b7d3c808473c136fbb824ffd15b | 594 | py | Python | package/ml/base.py | xenron/coco | e318d534127b769612716c05d40e3d5b090eb5a3 | [
"MIT"
] | null | null | null | package/ml/base.py | xenron/coco | e318d534127b769612716c05d40e3d5b090eb5a3 | [
"MIT"
] | null | null | null | package/ml/base.py | xenron/coco | e318d534127b769612716c05d40e3d5b090eb5a3 | [
"MIT"
] | null | null | null | import uuid
class BaseModel(object):
def __init__(self):
# Generate an random UUID to reference this model
self._uuid = str(uuid.uuid4())
self._name = None
# train the model with given data set
def train(self, data):
pass
# train the model with given data set
def getParameterDef(self):
pass
def setParameter(self, parameter):
pass
# predict the model with given dataset
def predict(self, data):
pass
def getId(self):
return self._uuid
def getName(self):
return self._name
| 19.16129 | 57 | 0.612795 | 75 | 594 | 4.746667 | 0.453333 | 0.067416 | 0.101124 | 0.143258 | 0.179775 | 0.179775 | 0.179775 | 0.179775 | 0 | 0 | 0 | 0.002463 | 0.316498 | 594 | 30 | 58 | 19.8 | 0.874384 | 0.262626 | 0 | 0.235294 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.411765 | false | 0.235294 | 0.058824 | 0.117647 | 0.647059 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 5 |
560e418b243f53db86c23de6da8fdff2b027e4f8 | 1,034 | py | Python | tests/recurrent_test.py | maxpumperla/neuralforecast | 61d730a28454f89843defb823430daee2ff64912 | [
"MIT"
] | 35 | 2016-05-10T04:55:01.000Z | 2021-12-24T14:11:18.000Z | tests/recurrent_test.py | Julian0C/neuralforecast | 61d730a28454f89843defb823430daee2ff64912 | [
"MIT"
] | null | null | null | tests/recurrent_test.py | Julian0C/neuralforecast | 61d730a28454f89843defb823430daee2ff64912 | [
"MIT"
] | 8 | 2016-06-22T11:32:30.000Z | 2018-06-24T19:34:25.000Z | from __future__ import print_function
from keras.models import Sequential
from keras.utils.test_utils import get_test_data
from neuralforecast.layers.recurrent import ARMA, GARCH
def test_arma_layer():
(X_train, y_train), (X_test, y_test) = get_test_data(nb_train=100, nb_test=50, input_shape=(100, 1),
output_shape=(1,), classification=False)
model = Sequential()
model.add(ARMA(inner_input_dim=10, input_shape=(100, 1), output_dim=1))
model.compile(loss='mean_squared_error', optimizer='sgd')
model.fit(X_train, y_train)
def test_garch_layer():
(X_train, y_train), (X_test, y_test) = get_test_data(nb_train=100, nb_test=50, input_shape=(100, 1),
output_shape=(1,), classification=False)
model = Sequential()
model.add(GARCH(inner_input_dim=10, input_shape=(100, 1), output_dim=1))
model.compile(loss='mean_squared_error', optimizer='sgd')
model.fit(X_train, y_train)
| 34.466667 | 104 | 0.659574 | 146 | 1,034 | 4.342466 | 0.30137 | 0.037855 | 0.044164 | 0.07571 | 0.716088 | 0.716088 | 0.716088 | 0.716088 | 0.716088 | 0.716088 | 0 | 0.042289 | 0.222437 | 1,034 | 29 | 105 | 35.655172 | 0.746269 | 0 | 0 | 0.555556 | 0 | 0 | 0.040619 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.222222 | 0 | 0.333333 | 0.055556 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
562d9ae01e2a80c2af1dfdb4da3cfa1569b4ef24 | 249 | py | Python | lldb/packages/Python/lldbsuite/test/lang/objc/variadic_methods/TestVariadicMethods.py | medismailben/llvm-project | e334a839032fe500c3bba22bf976ab7af13ce1c1 | [
"Apache-2.0"
] | 765 | 2015-12-03T16:44:59.000Z | 2022-03-07T12:41:10.000Z | lldb/packages/Python/lldbsuite/test/lang/objc/variadic_methods/TestVariadicMethods.py | medismailben/llvm-project | e334a839032fe500c3bba22bf976ab7af13ce1c1 | [
"Apache-2.0"
] | 1,815 | 2015-12-11T23:56:05.000Z | 2020-01-10T19:28:43.000Z | lldb/packages/Python/lldbsuite/test/lang/objc/variadic_methods/TestVariadicMethods.py | medismailben/llvm-project | e334a839032fe500c3bba22bf976ab7af13ce1c1 | [
"Apache-2.0"
] | 284 | 2015-12-03T16:47:25.000Z | 2022-03-12T05:39:48.000Z | from lldbsuite.test import lldbinline
from lldbsuite.test import decorators
lldbinline.MakeInlineTest(
__file__, globals(), [
decorators.skipIfFreeBSD, decorators.skipIfLinux,
decorators.skipIfWindows, decorators.skipIfNetBSD])
| 31.125 | 59 | 0.779116 | 22 | 249 | 8.636364 | 0.590909 | 0.136842 | 0.178947 | 0.242105 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148594 | 249 | 7 | 60 | 35.571429 | 0.896226 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
5662407128e616efc453ea1f81717c68bacea38b | 25 | py | Python | package/__init__.py | AyoubBelhadji/random_matrix_factorization | 44ee6cd01b1d3f5d70d8392b3b7c1ccdb93e2d89 | [
"MIT"
] | null | null | null | package/__init__.py | AyoubBelhadji/random_matrix_factorization | 44ee6cd01b1d3f5d70d8392b3b7c1ccdb93e2d89 | [
"MIT"
] | null | null | null | package/__init__.py | AyoubBelhadji/random_matrix_factorization | 44ee6cd01b1d3f5d70d8392b3b7c1ccdb93e2d89 | [
"MIT"
] | null | null | null | # from . import fast_svd
| 12.5 | 24 | 0.72 | 4 | 25 | 4.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 25 | 1 | 25 | 25 | 0.85 | 0.88 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
5682f26d183b647ddf2ddf67df24e4b25fba6a88 | 130 | py | Python | tests/test_versions_routes.py | aryan040501/wikilabels | ea110da2b969cc978a0f288c4da6250dc9d67e72 | [
"MIT"
] | 15 | 2015-07-16T17:56:43.000Z | 2018-08-20T14:59:16.000Z | tests/test_versions_routes.py | aryan040501/wikilabels | ea110da2b969cc978a0f288c4da6250dc9d67e72 | [
"MIT"
] | 122 | 2015-06-10T15:58:11.000Z | 2018-08-16T14:56:23.000Z | tests/test_versions_routes.py | aryan040501/wikilabels | ea110da2b969cc978a0f288c4da6250dc9d67e72 | [
"MIT"
] | 27 | 2015-07-15T22:12:35.000Z | 2018-08-06T23:10:28.000Z | from .routes_test_fixture import app # noqa
def test_versions(client):
assert client.get("/versions/")._status_code == 200
| 21.666667 | 55 | 0.738462 | 18 | 130 | 5.055556 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027027 | 0.146154 | 130 | 5 | 56 | 26 | 0.792793 | 0.030769 | 0 | 0 | 0 | 0 | 0.080645 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
568c68119a854c154a5931ff8637be4afe812a5c | 85 | py | Python | quizes/admin.py | regina404/school | ec7d44c158694293c381c31d778b97496ccf154b | [
"MIT"
] | null | null | null | quizes/admin.py | regina404/school | ec7d44c158694293c381c31d778b97496ccf154b | [
"MIT"
] | null | null | null | quizes/admin.py | regina404/school | ec7d44c158694293c381c31d778b97496ccf154b | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import Quiz
admin.site.register(Quiz) | 17 | 32 | 0.811765 | 13 | 85 | 5.307692 | 0.692308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 85 | 5 | 33 | 17 | 0.92 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
5699c197940353adc43960820cef87ea30e6bbfe | 149 | py | Python | FileAction.py | fanonwue/scannertool | d16f5085b98fba877cc85dda08f26de9df8dc5ba | [
"MIT"
] | null | null | null | FileAction.py | fanonwue/scannertool | d16f5085b98fba877cc85dda08f26de9df8dc5ba | [
"MIT"
] | null | null | null | FileAction.py | fanonwue/scannertool | d16f5085b98fba877cc85dda08f26de9df8dc5ba | [
"MIT"
] | null | null | null | from pathlib import Path
class FileAction:
def execute(self, file: Path):
"""Execute an action on the given file path"""
pass
| 16.555556 | 54 | 0.637584 | 20 | 149 | 4.75 | 0.8 | 0.168421 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.281879 | 149 | 8 | 55 | 18.625 | 0.88785 | 0.268456 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0.25 | 0.25 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 5 |
3b0a9d69e29225a8e4a1e974da0d130f79fb5611 | 150 | py | Python | corehq/ex-submodules/pillowtop/processors/__init__.py | dimagilg/commcare-hq | ea1786238eae556bb7f1cbd8d2460171af1b619c | [
"BSD-3-Clause"
] | 471 | 2015-01-10T02:55:01.000Z | 2022-03-29T18:07:18.000Z | corehq/ex-submodules/pillowtop/processors/__init__.py | dimagilg/commcare-hq | ea1786238eae556bb7f1cbd8d2460171af1b619c | [
"BSD-3-Clause"
] | 14,354 | 2015-01-01T07:38:23.000Z | 2022-03-31T20:55:14.000Z | corehq/ex-submodules/pillowtop/processors/__init__.py | dimagilg/commcare-hq | ea1786238eae556bb7f1cbd8d2460171af1b619c | [
"BSD-3-Clause"
] | 175 | 2015-01-06T07:16:47.000Z | 2022-03-29T13:27:01.000Z | from .interface import PillowProcessor, BulkPillowProcessor
from .sample import NoopProcessor, LoggingProcessor
from .elastic import ElasticProcessor
| 37.5 | 59 | 0.873333 | 14 | 150 | 9.357143 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093333 | 150 | 3 | 60 | 50 | 0.963235 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
3b217bf14d55b7ef419c170175ca69fa339247d6 | 483 | py | Python | sited_py/lib/noear_snacks_OObject.py | wistn/sited_py | fbecf09f410bd2494a952383073956946d9df813 | [
"Apache-2.0"
] | null | null | null | sited_py/lib/noear_snacks_OObject.py | wistn/sited_py | fbecf09f410bd2494a952383073956946d9df813 | [
"Apache-2.0"
] | null | null | null | sited_py/lib/noear_snacks_OObject.py | wistn/sited_py | fbecf09f410bd2494a952383073956946d9df813 | [
"Apache-2.0"
] | null | null | null | # -*- coding: UTF-8 -*-
"""
Author:wistn
since:2021-03-02
LastEditors:Do not edit
LastEditTime:2021-03-04
Description:
"""
class OObject:
def __init__(self):
self.members = dict()
def set(self, key, value):
self.members[key] = value
def get(self, key):
return self.members.get(key)
def contains(self, key):
return key in self.members
def count(self):
return self.members.__len__()
def clear(self):
pass
| 16.1 | 37 | 0.604555 | 64 | 483 | 4.4375 | 0.53125 | 0.193662 | 0.091549 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.047753 | 0.26294 | 483 | 29 | 38 | 16.655172 | 0.75 | 0.233954 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.461538 | false | 0.076923 | 0 | 0.230769 | 0.769231 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 5 |
3b59d594a7999ee3da3ca9022f26403ad6a8eb5c | 141 | py | Python | jptel/exception.py | zenwerk/jptel-py | 4c26b94db5b8ba3832a7295ec92542737005146b | [
"MIT"
] | null | null | null | jptel/exception.py | zenwerk/jptel-py | 4c26b94db5b8ba3832a7295ec92542737005146b | [
"MIT"
] | null | null | null | jptel/exception.py | zenwerk/jptel-py | 4c26b94db5b8ba3832a7295ec92542737005146b | [
"MIT"
] | null | null | null | # -*- coding=utf-8 -*-
class InvalidCharacterException(ValueError):
pass
class InvalidTelephoneNumberException(ValueError):
pass
| 14.1 | 50 | 0.737589 | 11 | 141 | 9.454545 | 0.727273 | 0.269231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008403 | 0.156028 | 141 | 9 | 51 | 15.666667 | 0.865546 | 0.141844 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
3b77d035ca1ca415c87132f85a47aeeb2aae9abd | 104 | py | Python | kbastroutils/mag2flux.py | bkornpob/kbastroutils | 89bdf8f395234a22a8b1386f028892b07559022e | [
"MIT"
] | null | null | null | kbastroutils/mag2flux.py | bkornpob/kbastroutils | 89bdf8f395234a22a8b1386f028892b07559022e | [
"MIT"
] | null | null | null | kbastroutils/mag2flux.py | bkornpob/kbastroutils | 89bdf8f395234a22a8b1386f028892b07559022e | [
"MIT"
] | null | null | null | def mag2flux(mag,wave):
import numpy as np
return 10**(-0.4 * (mag + 2.406 + 5.*np.log10(wave))) | 34.666667 | 57 | 0.596154 | 19 | 104 | 3.263158 | 0.842105 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.144578 | 0.201923 | 104 | 3 | 57 | 34.666667 | 0.60241 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
8e65fdf367d604ecdeab0e5e13ab7f7365bc5554 | 1,465 | py | Python | 00.FullSpeedPython/06.Classes/09.ExerciseInheritance.py | hadibakalim/python_related_courses | 79773e8186a0a109d8afe981440e36f4eeacbcb9 | [
"MIT"
] | 1 | 2019-11-08T10:54:38.000Z | 2019-11-08T10:54:38.000Z | 00.FullSpeedPython/06.Classes/09.ExerciseInheritance.py | hadibakalim/python_related_courses | 79773e8186a0a109d8afe981440e36f4eeacbcb9 | [
"MIT"
] | null | null | null | 00.FullSpeedPython/06.Classes/09.ExerciseInheritance.py | hadibakalim/python_related_courses | 79773e8186a0a109d8afe981440e36f4eeacbcb9 | [
"MIT"
] | null | null | null | class Rectangle:
def __init__(self, x1, y1, x2, y2): # class constructor
self.x1 = x1 # class variable
self.y1 = y1 # class variable
self.x2 = x2 # class variable
self.y2 = y2 # class variable
def width(self):
return self.x2 - self.x1
def height(self):
return self.y2 - self.y1
def area(self):
return self.width() * self.height()
class Square(Rectangle):
#write your code here
def __init__(self, x1, y1, length):
self.x1 = x1 # class variable
self.y1 = y1 # class variable
self.length = length
super().__init__(x1,y1,x1+length,y1+length)
def area(self):
return super().area()
# Alternative Solution
class Rectangle:
def __init__(self, x1, y1, x2, y2): # class constructor
self.x1 = x1 # class variable
self.y1 = y1 # class variable
self.x2 = x2 # class variable
self.y2 = y2 # class variable
def width(self):
return self.x2 - self.x1
def height(self):
return self.y2 - self.y1
def area(self):
return self.width() * self.height()
#write your code here
class Square(Rectangle):
def __init__(self, x1, y1, length):
x2 = x1 + length
y2 = y1 + length
super().__init__(x1, y1, x2, y2)
# test your code here
square = Square (2, 7, 7)
print("Length: " + str(square.width()) + ", Area: " + str(square.area()))
square2 = Square (1, 3, 5)
print("Length: " + str(square2.width()) + ", Area: " + str(square2.area())) | 26.636364 | 75 | 0.6157 | 209 | 1,465 | 4.200957 | 0.157895 | 0.148064 | 0.154897 | 0.059226 | 0.677677 | 0.634396 | 0.57631 | 0.57631 | 0.57631 | 0.57631 | 0 | 0.055354 | 0.247782 | 1,465 | 55 | 75 | 26.636364 | 0.741379 | 0.18157 | 0 | 0.756098 | 0 | 0 | 0.02705 | 0 | 0 | 0 | 0 | 0.018182 | 0 | 1 | 0.268293 | false | 0 | 0 | 0.170732 | 0.536585 | 0.04878 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
8eafd8a105bee63925541c19548cbd4d80d6e255 | 65 | py | Python | project/polypy/__init__.py | apjansing/MAT560_project | e420f1462e91258ace47a43e6525d540e3ee841e | [
"MIT"
] | null | null | null | project/polypy/__init__.py | apjansing/MAT560_project | e420f1462e91258ace47a43e6525d540e3ee841e | [
"MIT"
] | null | null | null | project/polypy/__init__.py | apjansing/MAT560_project | e420f1462e91258ace47a43e6525d540e3ee841e | [
"MIT"
] | null | null | null | from .plotter import plotter  # explicit relative imports (implicit form is Python 2 only)
from .interpolator import interpolator | 32.5 | 37 | 0.892308 | 8 | 65 | 7.25 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107692 | 65 | 2 | 37 | 32.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
d9606dfda150e6ceef80b392ba44edd582c147b1 | 22,331 | py | Python | artifact_analyzer/parsers/lv1_os_win_reg_os_info.py | sk-yaho/carpe | 077ef7ba1582b3de9f5c08d63431e744b77a9e09 | [
"Apache-2.0"
] | 2 | 2020-07-09T02:01:50.000Z | 2020-11-21T15:19:32.000Z | artifact_analyzer/parsers/lv1_os_win_reg_os_info.py | sk-yaho/carpe | 077ef7ba1582b3de9f5c08d63431e744b77a9e09 | [
"Apache-2.0"
] | null | null | null | artifact_analyzer/parsers/lv1_os_win_reg_os_info.py | sk-yaho/carpe | 077ef7ba1582b3de9f5c08d63431e744b77a9e09 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
import os, sys, re
from datetime import datetime, timedelta
sys.path.append(os.path.join(os.path.abspath(os.path.dirname(__file__)), "parsers"))
import parser_interface
from utility import carpe_db
class OS_Information:
def __init__(self):
self.par_id = ''
self.case_id = ''
self.evd_id = ''
self.operating_system = ''
self.version_number = ''
self.install_time = ''
self.product_key = ''
self.owner = ''
self.display_computer_name = ''
self.computer_name = ''
self.dhcp_dns_server = ''
self.operating_system_version = ''
self.build_number = ''
self.product_id = ''
self.last_service_pack = ''
self.organization = ''
self.last_shutdown_time = ''
self.system_root = ''
self.path = ''
self.last_access_time_flag = ''
self.timezone_utc = ''
self.display_timezone_name = ''
class OSINFO(parser_interface.ParserInterface):
"""
OS Information Parser.
"""
def __init__(self):
# Initialize Formatter Interface
super().__init__()
def Parse(self, case_id, evd_id, par_list):
"""
Analyzes records related to usb all device
in Psort result.
"""
# Check Table
if not self.CheckTable():
self.CreateTable()
db = carpe_db.Mariadb()
db.open()
for par in par_list:
for art in self.ARTIFACTS:
name = art['Name']
desc = art['Desc']
values = art['Values']
if name == "os_info":
# TODO : Fix a Chorme Cache plugin -> lack information
OS_list = []
OS_count = 0
#Select Default OS
query = r"SELECT description, filename, datetime FROM log2timeline WHERE description like '%#[HKEY_LOCAL_MACHINE#\\Software#\\Microsoft#\\Windows NT#\\CurrentVersion#]%' escape '#' or description like '%#[HKEY_LOCAL_MACHINE#\\System#\\ControlSet001#\\Control#\\ComputerName#\\ComputerName#]%' escape '#' or description like '%#[HKEY_LOCAL_MACHINE#\\System#\\ControlSet001#\\Control#\\FileSystem#]%' escape '#' or description like '%#[HKEY_LOCAL_MACHINE#\\System#\\ControlSet001#\\Control#\\TimeZoneInformation#]%' escape '#' or description like '%#[HKEY_LOCAL_MACHINE#\\System#\\ControlSet001#\\Services#\\Tcpip#\\Parameters#]%' escape '#' or description like '%#[HKEY_LOCAL_MACHINE#\\System#\\ControlSet001#\\Control#\\Windows#]%' escape '#'"
result_query = db.execute_query_mul(query)
os_information = OS_Information()
try:
OS_list.append(os_information)
for result_data in result_query:
if 'SafeOS' in result_data[1] or 'Windows.old' in result_data[1]:
if 'CurrentVersion]' in result_data[0].decode('utf-8'):
if 'ProductName' in result_data[0].decode('utf-8'):
if 'RegisteredOwner' in result_data[0].decode('utf-8'):
dataInside = r"ProductName: \[REG_SZ\] (.*) RegisteredOrganization: (.*) ReleaseId: \[REG_SZ\] (.*) SoftwareType:"
m = re.search(dataInside, result_data[0].decode('utf-8'))
OS_list[OS_count].operating_system = m.group(1)+'('+m.group(3)+')'
else:
dataInside = r"ProductName: \[REG_SZ\] (.*) ReleaseId: \[REG_SZ\] (.*) SoftwareType:"
m = re.search(dataInside, result_data[0].decode('utf-8'))
OS_list[OS_count].operating_system = m.group(1)+'('+m.group(2)+')'
if 'CurrentVersion' in result_data[0].decode('utf-8'):
dataInside = r" CurrentVersion: \[REG_SZ\] ([\d\W]*)"
m = re.search(dataInside, result_data[0].decode('utf-8'))
OS_list[OS_count].version_number = m.group(1)
if 'InstallTime' in result_data[0].decode('utf-8'):
dataInside = r"InstallTime: \[REG_QWORD\] ([\d]*)"
m = re.search(dataInside, result_data[0].decode('utf-8'))
OS_list[OS_count].install_time = datetime(1601, 1, 1) + timedelta(microseconds=int(m.group(1))/10)
if 'RegisteredOwner' in result_data[0].decode('utf-8'):
dataInside = r"RegisteredOwner: \[REG_SZ\] (.*) ReleaseId:"
m = re.search(dataInside, result_data[0].decode('utf-8'))
OS_list[OS_count].owner = m.group(1)
if 'EditionID' in result_data[0].decode('utf-8'):
if 'InstallTime' in result_data[0].decode('utf-8'):
dataInside = r"EditionID: \[REG_SZ\] (.*) InstallTime:"
m = re.search(dataInside, result_data[0].decode('utf-8'))
OS_list[OS_count].operating_system_version = m.group(1)
else:
dataInside = r"EditionID: \[REG_SZ\] (.*) InstallationType:"
m = re.search(dataInside, result_data[0].decode('utf-8'))
OS_list[OS_count].operating_system_version = m.group(1)
if 'CurrentBuildNumber' in result_data[0].decode('utf-8'):
dataInside = r"CurrentBuildNumber: \[REG_SZ\] ([\d]*)"
m = re.search(dataInside, result_data[0].decode('utf-8'))
OS_list[OS_count].build_number = m.group(1)
if 'ProductId' in result_data[0].decode('utf-8'):
dataInside = r"ProductId: \[REG_SZ\] (.*) ProductName:"
m = re.search(dataInside, result_data[0].decode('utf-8'))
OS_list[OS_count].product_id = m.group(1)
if 'RegisteredOrganizations' in result_data[0].decode('utf-8'):
dataInside = r"RegisteredOrganizations: \[REG_SZ\] (.*) RegisteredOwner:"
m = re.search(dataInside, result_data[0].decode('utf-8'))
OS_list[OS_count].organization = m.group(1)
if 'SystemRoot' in result_data[0].decode('utf-8'):
dataInside = r"SystemRoot: \[REG_SZ\] (.*) UBR:"
m = re.search(dataInside, result_data[0].decode('utf-8'))
OS_list[OS_count].system_root = m.group(1)
if 'PathName' in result_data[0].decode('utf-8'):
if 'ProductId' in result_data[0].decode('utf-8'):
dataInside = r"PathName: \[REG_SZ\] (.*) ProductId:"
m = re.search(dataInside, result_data[0].decode('utf-8'))
OS_list[OS_count].path = m.group(1)
else:
dataInside = r"PathName: \[REG_SZ\] (.*) ProductName:"
m = re.search(dataInside, result_data[0].decode('utf-8'))
OS_list[OS_count].path = m.group(1)
if 'ComputerName\ComputerName]' in result_data[0].decode('utf-8'):
dataInside = r"ComputerName: \[REG_SZ\] (.*)"
m = re.search(dataInside, result_data[0].decode('utf-8'))
OS_list[OS_count].computer_name = m.group(1)
if 'FileSystem]' in result_data[0].decode('utf-8'):
dataInside = r"NtfsDisableLastAccessUpdate: \[REG_DWORD_LE\] ([\d])"
m = re.search(dataInside, result_data[0].decode('utf-8'))
if m.group(1) == '1':
OS_list[OS_count].last_access_time_flag = 'No'
else:
OS_list[OS_count].last_access_time_flag = 'Yes'
if 'TimeZoneInformation]' in result_data[0].decode('utf-8'):
dataInside = r"TimeZoneKeyName: (.*)"
m = re.search(dataInside, result_data[0].decode('utf-8'))
OS_list[OS_count].display_timezone_name = m.group(1)
if 'Tcpip\Parameters]' in result_data[0].decode('utf-8'):
if 'Hostname' in result_data[0].decode('utf-8'):
dataInside = r"Hostname: \[REG_SZ\] (.*) ICSDomain:"
m = re.search(dataInside, result_data[0].decode('utf-8'))
OS_list[OS_count].display_computer_name = m.group(1)
dataInside = r"DhcpNameServer: \[REG_SZ\] (.*) Domain:"
m = re.search(dataInside, result_data[0].decode('utf-8'))
OS_list[OS_count].dhcp_dns_server = m.group(1)
elif 'HostName' in result_data[0].decode('utf-8'):
dataInside = r"HostName: \[REG_SZ\] (.*) ICSDomain:"
m = re.search(dataInside, result_data[0].decode('utf-8'))
OS_list[OS_count].display_computer_name = m.group(1)
dataInside = r"DhcpNameServer: \[REG_SZ\] (.*) Domain:"
m = re.search(dataInside, result_data[0].decode('utf-8'))
OS_list[OS_count].dhcp_dns_server = m.group(1)
if 'Control\Windows]' in result_data[0].decode('utf-8'):
OS_list[OS_count].last_shutdown_time = str(result_data[2])
OS_count = OS_count + 1
os_information = OS_Information()
OS_list.append(os_information)
for result_data in result_query:
if 'SafeOS' not in result_data[1] and 'Windows.old' not in result_data[1]:
if 'CurrentVersion]' in result_data[0].decode('utf-8'):
if 'ProductName' in result_data[0].decode('utf-8'):
if 'RegisteredOwner' in result_data[0].decode('utf-8'):
dataInside = r"ProductName: \[REG_SZ\] (.*) RegisteredOrganization: (.*) ReleaseId: \[REG_SZ\] (.*) SoftwareType:"
m = re.search(dataInside, result_data[0].decode('utf-8'))
OS_list[OS_count].operating_system = m.group(1)+'('+m.group(3)+')'
else:
dataInside = r"ProductName: \[REG_SZ\] (.*) ReleaseId: \[REG_SZ\] (.*) SoftwareType:"
m = re.search(dataInside, result_data[0].decode('utf-8'))
OS_list[OS_count].operating_system = m.group(1)+'('+m.group(2)+')'
if 'CurrentVersion' in result_data[0].decode('utf-8'):
dataInside = r" CurrentVersion: \[REG_SZ\] ([\d\W]*)"
m = re.search(dataInside, result_data[0].decode('utf-8'))
OS_list[OS_count].version_number = m.group(1)
if 'InstallTime' in result_data[0].decode('utf-8'):
dataInside = r"InstallTime: \[REG_QWORD\] ([\d]*)"
m = re.search(dataInside, result_data[0].decode('utf-8'))
OS_list[OS_count].install_time = datetime(1601, 1, 1) + timedelta(microseconds=int(m.group(1))/10)
if 'RegisteredOwner' in result_data[0].decode('utf-8'):
dataInside = r"RegisteredOwner: \[REG_SZ\] (.*) ReleaseId:"
m = re.search(dataInside, result_data[0].decode('utf-8'))
OS_list[OS_count].owner = m.group(1)
if 'EditionID' in result_data[0].decode('utf-8'):
if 'InstallTime' in result_data[0].decode('utf-8'):
if 'EditionSubManufacturer' in result_data[0].decode('utf-8'):
dataInside = r" EditionID: \[REG_SZ\] (.*) EditionSubManufacturer:"
m = re.search(dataInside, result_data[0].decode('utf-8'))
OS_list[OS_count].operating_system_version = m.group(1)
else:
dataInside = r" EditionID: \[REG_SZ\] (.*) InstallTime:"
m = re.search(dataInside, result_data[0].decode('utf-8'))
OS_list[OS_count].operating_system_version = m.group(1)
else:
dataInside = r" EditionID: \[REG_SZ\] (.*) InstallationType:"
m = re.search(dataInside, result_data[0].decode('utf-8'))
OS_list[OS_count].operating_system_version = m.group(1)
if 'CurrentBuildNumber' in result_data[0].decode('utf-8'):
dataInside = r"CurrentBuildNumber: \[REG_SZ\] ([\d]*)"
m = re.search(dataInside, result_data[0].decode('utf-8'))
OS_list[OS_count].build_number = m.group(1)
if 'ProductId' in result_data[0].decode('utf-8'):
dataInside = r"ProductId: \[REG_SZ\] (.*) ProductName:"
m = re.search(dataInside, result_data[0].decode('utf-8'))
OS_list[OS_count].product_id = m.group(1)
if 'RegisteredOrganizations' in result_data[0].decode('utf-8'):
dataInside = r"RegisteredOrganizations: \[REG_SZ\] (.*) RegisteredOwner:"
m = re.search(dataInside, result_data[0].decode('utf-8'))
OS_list[OS_count].organization = m.group(1)
if 'SystemRoot' in result_data[0].decode('utf-8'):
dataInside = r"SystemRoot: \[REG_SZ\] (.*) UBR:"
m = re.search(dataInside, result_data[0].decode('utf-8'))
OS_list[OS_count].system_root = m.group(1)
if 'PathName' in result_data[0].decode('utf-8'):
dataInside = r"PathName: \[REG_SZ\] (.*) ProductId:"
m = re.search(dataInside, result_data[0].decode('utf-8'))
OS_list[OS_count].path = m.group(1)
if 'ComputerName\ComputerName]' in result_data[0].decode('utf-8'):
dataInside = r"ComputerName: \[REG_SZ\] (.*)"
m = re.search(dataInside, result_data[0].decode('utf-8'))
OS_list[OS_count].computer_name = m.group(1)
if 'FileSystem]' in result_data[0].decode('utf-8'):
dataInside = r"NtfsDisableLastAccessUpdate: \[REG_DWORD_LE\] (.*) NtfsDisableLfsDowngrade:"
m = re.search(dataInside, result_data[0].decode('utf-8'))
if m.group(1) == '1':
OS_list[OS_count].last_access_time_flag = 'No'
elif m.group(1) == '0':
OS_list[OS_count].last_access_time_flag = 'Yes'
else:
OS_list[OS_count].last_access_time_flag = m.group(1) + '(not parsed)'
if 'TimeZoneInformation]' in result_data[0].decode('utf-8'):
dataInside = r"TimeZoneKeyName: (.*)"
m = re.search(dataInside, result_data[0].decode('utf-8'))
OS_list[OS_count].display_timezone_name = m.group(1)
if 'Tcpip\Parameters]' in result_data[0].decode('utf-8'):
if 'HostName' in result_data[0].decode('utf-8'):
dataInside = r"HostName: \[REG_SZ\] (.*) ICSDomain:"
m = re.search(dataInside, result_data[0].decode('utf-8'))
OS_list[OS_count].display_computer_name = m.group(1)
dataInside = r"DhcpNameServer: \[REG_SZ\] (.*) Domain:"
m = re.search(dataInside, result_data[0].decode('utf-8'))
OS_list[OS_count].dhcp_dns_server = m.group(1)
elif 'Hostname' in result_data[0].decode('utf-8'):
dataInside = r"Hostname: \[REG_SZ\] (.*) ICSDomain:"
m = re.search(dataInside, result_data[0].decode('utf-8'))
OS_list[OS_count].display_computer_name = m.group(1)
dataInside = r"DhcpNameServer: \[REG_SZ\] (.*) Domain:"
m = re.search(dataInside, result_data[0].decode('utf-8'))
OS_list[OS_count].dhcp_dns_server = m.group(1)
if 'Control\Windows]' in result_data[0].decode('utf-8'):
OS_list[OS_count].last_shutdown_time = str(result_data[2])
except Exception as e:
print("MAX-OS_INFO parse error: " + str(e))
try:
query1 = r"SELECT description, filename FROM log2timeline WHERE description like '%#Windows NT#\\CurrentVersion#\\Time Zones#\\" + \
OS_list[0].display_timezone_name + "%' escape '#' and (filename like '%SafeOS%' or filename like '%Windows.old%')"
query2 = r"SELECT description, filename FROM log2timeline WHERE description like '%#Windows NT#\\CurrentVersion#\\Time Zones#\\" + \
OS_list[1].display_timezone_name + "%' escape '#' and (filename not like '%SafeOS%' and filename not like '%Windows.old%')"
result_data = db.execute_query_mul(query1)
result_data2 = db.execute_query_mul(query2)
dataInside = r" Display: \[REG_SZ\] (.*) Dlt:"
m = re.search(dataInside, result_data[0][0].decode('utf-8'))
OS_list[0].timezone_utc = m.group(1)
m = re.search(dataInside, result_data2[0][0].decode('utf-8'))
OS_list[1].timezone_utc = m.group(1)
except Exception as e:
print("MAX-OSINFO timezone parse error: " + str(e))
for os_info in OS_list:  # avoid shadowing the imported os module
insert_values = (par[0], case_id, evd_id,
str(os_info.operating_system), str(os_info.version_number), str(os_info.install_time),
str(os_info.product_key), str(os_info.owner), str(os_info.display_computer_name),
str(os_info.computer_name), str(os_info.dhcp_dns_server), str(os_info.operating_system_version),
str(os_info.build_number), str(os_info.product_id), str(os_info.last_service_pack),
str(os_info.organization), str(os_info.last_shutdown_time), str(os_info.system_root),
str(os_info.path), str(os_info.last_access_time_flag), str(os_info.timezone_utc),
str(os_info.display_timezone_name))
self.InsertQuery(db, insert_values)
db.close()
now = datetime.now()
print('[%s-%s-%s %s:%s:%s] OS INFO DONE' % (now.year, now.month, now.day, now.hour, now.minute, now.second))
def InsertQuery(self, _db, _insert_values_tuple):
query = self.GetQuery('I', _insert_values_tuple)
_db.execute_query(query)
| 74.685619 | 763 | 0.448704 | 2,143 | 22,331 | 4.472235 | 0.09566 | 0.097037 | 0.088689 | 0.097558 | 0.768886 | 0.753652 | 0.742383 | 0.735601 | 0.735392 | 0.718176 | 0 | 0.021801 | 0.431015 | 22,331 | 298 | 764 | 74.936242 | 0.732489 | 0.009762 | 0 | 0.66791 | 0 | 0.003731 | 0.185091 | 0.03963 | 0 | 0 | 0 | 0.003356 | 0 | 1 | 0.014925 | false | 0 | 0.018657 | 0 | 0.041045 | 0.011194 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
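The parser above repeats the `decode('utf-8')` / `re.search` / `m.group(1)` pattern for every registry value, and an unmatched pattern raises `AttributeError` on `m.group`. A hedged sketch of how that pattern could be factored out — the helper name `extract_value` is mine, not part of the CARPE codebase:

```python
import re

def extract_value(pattern, raw, default=''):
    """Decode a registry description blob and return the first regex group,
    or `default` when the pattern does not match (instead of raising
    AttributeError the way the inline `m.group(1)` calls would)."""
    text = raw.decode('utf-8') if isinstance(raw, bytes) else raw
    m = re.search(pattern, text)
    return m.group(1) if m else default

# Usage against a description shaped like the log2timeline rows above
# (the ProductId value here is illustrative, not real data):
desc = b"ProductId: [REG_SZ] 00000-00000-00000-AAAAA ProductName: Windows 10 Pro"
product_id = extract_value(r"ProductId: \[REG_SZ\] (.*) ProductName:", desc)
print(product_id)  # 00000-00000-00000-AAAAA
```

Each `if 'SomeValue' in ...` branch then collapses to a single assignment, and a missing value degrades to an empty string rather than aborting the whole record.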
d9992bdd5e6dc247b839511c13055b295aee1b17 | 24,573 | py | Python | notes/tests/test_parser.py | juancoquet/personalwebsite | 3d9979ed6aa42ed031971a5e3c6e880f5642c33f | [
"MIT"
] | null | null | null | notes/tests/test_parser.py | juancoquet/personalwebsite | 3d9979ed6aa42ed031971a5e3c6e880f5642c33f | [
"MIT"
] | null | null | null | notes/tests/test_parser.py | juancoquet/personalwebsite | 3d9979ed6aa42ed031971a5e3c6e880f5642c33f | [
"MIT"
] | null | null | null | from django.urls import reverse
from unittest import TestCase, skip
from .. import parse_md
class MarkdownParseTest(TestCase):
def test_parse_h1(self):
text = '# Title being tested'
result = parse_md.parse_h1(text)
expected = '<h1>Title being tested</h1>'
self.assertEqual(result, expected)
def test_parse_h1_on_multiple_lines(self):
text = 'preceding text\n# Title being tested\nsome more text'
result = parse_md.parse_h1(text)
expected = 'preceding text\n<h1>Title being tested</h1>\nsome more text'
self.assertEqual(result, expected)
def test_multiple_h1_parses(self):
text = '# Title being tested\nsome random text\n# Title being tested2'
result = parse_md.parse_h1(text)
expected = '<h1>Title being tested</h1>\nsome random text\n<h1>Title being tested2</h1>'
self.assertEqual(result, expected)
def test_unintended_hash_symbol_does_not_parse_to_h1(self):
text = 'the following # should not match'
result = parse_md.parse_h1(text)
expected = 'the following # should not match'
self.assertEqual(result, expected)
def test_h1_does_not_parse_inside_code_block(self):
text = '```\n# python comment\nmy_var = "hello world"\n```'
result = parse_md.parse_h1(text)
expected = '```\n# python comment\nmy_var = "hello world"\n```'
self.assertEqual(result, expected)
def test_h1_does_not_parse_inside_code_block_with_multiple_lines(self):
text = '```\n# python comment\nmy_var = "hello world"\n# python comment2\n```'
result = parse_md.parse_h1(text)
expected = '```\n# python comment\nmy_var = "hello world"\n# python comment2\n```'
self.assertEqual(result, expected)
def test_parse_h2(self):
text = '## Title being tested'
result = parse_md.parse_h2(text)
expected = '<h2>Title being tested</h2>'
self.assertEqual(result, expected)
def test_parse_h2_on_multiple_lines(self):
text = 'preceding text\n## Title being tested\nsome more text'
result = parse_md.parse_h2(text)
expected = 'preceding text\n<h2>Title being tested</h2>\nsome more text'
self.assertEqual(result, expected)
def test_multiple_h2_parses(self):
text = '## Title being tested\nsome random text\n## Title being tested2'
result = parse_md.parse_h2(text)
expected = '<h2>Title being tested</h2>\nsome random text\n<h2>Title being tested2</h2>'
self.assertEqual(result, expected)
def test_unintended_hash_symbol_does_not_parse_to_h2(self):
text = 'the following ## should not match'
result = parse_md.parse_h2(text)
expected = 'the following ## should not match'
self.assertEqual(result, expected)
def test_h2_does_not_parse_inside_code_block(self):
text = '```\n## python comment\nmy_var = "hello world"\n```'
result = parse_md.parse_h2(text)
expected = '```\n## python comment\nmy_var = "hello world"\n```'
self.assertEqual(result, expected)
def test_h2_does_not_parse_inside_code_block_with_multiple_lines(self):
text = '```\n## python comment\nmy_var = "hello world"\n## python comment2\n```'
result = parse_md.parse_h2(text)
expected = '```\n## python comment\nmy_var = "hello world"\n## python comment2\n```'
self.assertEqual(result, expected)
def test_parse_h3(self):
text = '### Title being tested'
result = parse_md.parse_h3(text)
expected = '<h3>Title being tested</h3>'
self.assertEqual(result, expected)
def test_parse_h3_on_multiple_lines(self):
text = 'preceding text\n### Title being tested\nsome more text'
result = parse_md.parse_h3(text)
expected = 'preceding text\n<h3>Title being tested</h3>\nsome more text'
self.assertEqual(result, expected)
def test_multiple_h3_parses(self):
text = '### Title being tested\nsome random text\n### Title being tested2'
result = parse_md.parse_h3(text)
expected = '<h3>Title being tested</h3>\nsome random text\n<h3>Title being tested2</h3>'
self.assertEqual(result, expected)
def test_unintended_hash_symbol_does_not_parse_to_h3(self):
text = 'the following ### should not match'
result = parse_md.parse_h3(text)
expected = 'the following ### should not match'
self.assertEqual(result, expected)
def test_h3_does_not_parse_inside_code_block(self):
text = '```\n### python comment\nmy_var = "hello world"\n```'
result = parse_md.parse_h3(text)
expected = '```\n### python comment\nmy_var = "hello world"\n```'
self.assertEqual(result, expected)
def test_h3_does_not_parse_inside_code_block_with_multiple_lines(self):
text = '```\n### python comment\nmy_var = "hello world"\n### python comment2\n```'
result = parse_md.parse_h3(text)
expected = '```\n### python comment\nmy_var = "hello world"\n### python comment2\n```'
self.assertEqual(result, expected)
def test_parse_h4(self):
text = '#### Title being tested'
result = parse_md.parse_h4(text)
expected = '<h4>Title being tested</h4>'
self.assertEqual(result, expected)
def test_parse_h4_on_multiple_lines(self):
text = 'preceding text\n#### Title being tested\nsome more text'
result = parse_md.parse_h4(text)
expected = 'preceding text\n<h4>Title being tested</h4>\nsome more text'
self.assertEqual(result, expected)
def test_multiple_h4_parses(self):
text = '#### Title being tested\nsome random text\n#### Title being tested2'
result = parse_md.parse_h4(text)
expected = '<h4>Title being tested</h4>\nsome random text\n<h4>Title being tested2</h4>'
self.assertEqual(result, expected)
def test_unintended_hash_symbol_does_not_parse_to_h4(self):
text = 'the following #### should not match'
result = parse_md.parse_h4(text)
expected = 'the following #### should not match'
self.assertEqual(result, expected)
def test_h4_does_not_parse_inside_code_block(self):
text = '```\n#### python comment\nmy_var = "hello world"\n```'
result = parse_md.parse_h4(text)
expected = '```\n#### python comment\nmy_var = "hello world"\n```'
self.assertEqual(result, expected)
def test_h4_does_not_parse_inside_code_block_with_multiple_lines(self):
text = '```\n#### python comment\nmy_var = "hello world"\n#### python comment2\n```'
result = parse_md.parse_h4(text)
expected = '```\n#### python comment\nmy_var = "hello world"\n#### python comment2\n```'
self.assertEqual(result, expected)
def test_parse_h5(self):
text = '##### Title being tested'
result = parse_md.parse_h5(text)
expected = '<h5>Title being tested</h5>'
self.assertEqual(result, expected)
def test_parse_h5_on_multiple_lines(self):
text = 'preceding text\n##### Title being tested\nsome more text'
result = parse_md.parse_h5(text)
expected = 'preceding text\n<h5>Title being tested</h5>\nsome more text'
self.assertEqual(result, expected)
def test_multiple_h5_parses(self):
text = '##### Title being tested\nsome random text\n##### Title being tested2'
result = parse_md.parse_h5(text)
expected = '<h5>Title being tested</h5>\nsome random text\n<h5>Title being tested2</h5>'
self.assertEqual(result, expected)
def test_unintended_hash_symbol_does_not_parse_to_h5(self):
text = 'the following ##### should not match'
result = parse_md.parse_h5(text)
expected = 'the following ##### should not match'
self.assertEqual(result, expected)
def test_h5_does_not_parse_inside_code_block(self):
text = '```\n###### python comment\nmy_var = "hello world"\n```'
result = parse_md.parse_h5(text)
expected = '```\n###### python comment\nmy_var = "hello world"\n```'
self.assertEqual(result, expected)
def test_h5_does_not_parse_inside_code_block_with_multiple_lines(self):
text = '```\n###### python comment\nmy_var = "hello world"\n###### python comment2\n```'
result = parse_md.parse_h5(text)
expected = '```\n###### python comment\nmy_var = "hello world"\n###### python comment2\n```'
self.assertEqual(result, expected)
def test_parse_h6(self):
text = '###### Title being tested'
result = parse_md.parse_h6(text)
expected = '<h6>Title being tested</h6>'
self.assertEqual(result, expected)
def test_parse_h6_on_multiple_lines(self):
text = 'preceding text\n###### Title being tested\nsome more text'
result = parse_md.parse_h6(text)
expected = 'preceding text\n<h6>Title being tested</h6>\nsome more text'
self.assertEqual(result, expected)
def test_multiple_h6_parses(self):
text = '###### Title being tested\nsome random text\n###### Title being tested2'
result = parse_md.parse_h6(text)
expected = '<h6>Title being tested</h6>\nsome random text\n<h6>Title being tested2</h6>'
self.assertEqual(result, expected)
def test_unintended_hash_symbol_does_not_parse_to_h6(self):
text = 'the following ###### should not match'
result = parse_md.parse_h6(text)
expected = 'the following ###### should not match'
self.assertEqual(result, expected)
def test_h6_does_not_parse_inside_code_block(self):
text = '```\n####### python comment\nmy_var = "hello world"\n```'
result = parse_md.parse_h6(text)
expected = '```\n####### python comment\nmy_var = "hello world"\n```'
self.assertEqual(result, expected)
def test_h6_does_not_parse_inside_code_block_with_multiple_lines(self):
text = '```\n####### python comment\nmy_var = "hello world"\n####### python comment2\n```'
result = parse_md.parse_h6(text)
expected = '```\n####### python comment\nmy_var = "hello world"\n####### python comment2\n```'
self.assertEqual(result, expected)
def test_parse_code_block(self):
text = '```python\nprint("hello world")\n```'
result = parse_md.parse_code_block(text)
expected = '<pre><code>\nprint("hello world")\n</code></pre>'
self.assertEqual(result, expected)
def test_parse_code_block_with_multiple_lines(self):
text = 'preceding text\n```python\nprint("hello world")\nmy_var = 123\n```\nmore text'
result = parse_md.parse_code_block(text)
expected = 'preceding text\n<pre><code>\nprint("hello world")\nmy_var = 123\n</code></pre>\nmore text'
self.assertEqual(result, expected)
def test_parse_multiple_code_blocks(self):
text = '```python\nprint("hello world")\n```\nrandom text\n```python\nprint("hello world2")\n```'
result = parse_md.parse_code_block(text)
expected = '<pre><code>\nprint("hello world")\n</code></pre>\nrandom text\n<pre><code>\nprint("hello world2")\n</code></pre>'
self.assertEqual(result, expected)
def test_parse_p(self):
text = 'this is a paragraph'
result = parse_md.parse_p(text)
expected = '<p>this is a paragraph</p>'
self.assertEqual(result, expected)
def test_parse_p_on_multiple_lines(self):
text = 'this is a paragraph\nthis is another paragraph'
result = parse_md.parse_p(text)
expected = '<p>this is a paragraph</p>\n<p>this is another paragraph</p>'
self.assertEqual(result, expected)
def test_p_does_not_parse_inside_code_block(self):
text = 'this should parse\n```\nthis should not parse\n```\nthis should also parse'
result = parse_md.parse_p(text)
expected = '<p>this should parse</p>\n```\nthis should not parse\n```\n<p>this should also parse</p>'
self.assertEqual(result, expected)
def test_p_does_not_parse_heading(self):
text = 'this should parse\n# this should not parse\nthis should also parse'
result = parse_md.parse_p(text)
expected = '<p>this should parse</p>\n# this should not parse\n<p>this should also parse</p>'
self.assertEqual(result, expected)
@skip
def test_p_does_not_parse_quote(self):
text = 'this should parse\n> this should not parse\nthis should also parse'
result = parse_md.parse_p(text)
expected = '<p>this should parse</p>\n> this should not parse\n<p>this should also parse</p>'
self.assertEqual(result, expected)
self.fail('fix expected result')
def test_p_does_not_parse_unordered_list(self):
text = 'this should parse\n- this should not parse\nthis should also parse\n* this should not parse either'
result = parse_md.parse_p(text)
expected = '<p>this should parse</p>\n- this should not parse\n<p>this should also parse</p>\n* this should not parse either'
self.assertEqual(result, expected)
def test_p_does_not_parse_ordered_list(self):
text = 'this should parse\n1. this should not parse\nthis should also parse\n12. this should not parse either'
result = parse_md.parse_p(text)
expected = '<p>this should parse</p>\n1. this should not parse\n<p>this should also parse</p>\n12. this should not parse either'
self.assertEqual(result, expected)
def test_clean_blank_lines(self):
text = '<p>some text</p><p></p><p>some more text</p>'
result = parse_md.clean_blank_lines(text)
expected = '<p>some text</p><p>some more text</p>'
self.assertEqual(result, expected)
def test_clean_multiple_blank_lines(self):
text = '<p></p><h1>some text</h1><p></p><p></p><p>some more text</p><p></p>'
result = parse_md.clean_blank_lines(text)
expected = '<h1>some text</h1><p>some more text</p>'
self.assertEqual(result, expected)
def test_parse_inline_code(self):
text = 'this is a paragraph with `inline code`'
result = parse_md.parse_inline_code(text)
expected = 'this is a paragraph with <code>inline code</code>'
self.assertEqual(result, expected)
def test_parse_inline_code_does_not_parse_inside_code_block(self):
text = '```\n`this should not parse`\n```'
result = parse_md.parse_inline_code(text)
expected = '```\n`this should not parse`\n```'
self.assertEqual(result, expected)
def test_parse_bold(self):
text = 'this is a paragraph with **bold text**'
result = parse_md.parse_bold(text)
expected = 'this is a paragraph with <strong>bold text</strong>'
self.assertEqual(result, expected)
def test_parse_bold_does_not_parse_inside_code_block(self):
text = '```\n**this should not parse**\n```'
result = parse_md.parse_bold(text)
expected = '```\n**this should not parse**\n```'
self.assertEqual(result, expected)
def test_parse_italic(self):
text = 'this is a paragraph with *italic text*'
result = parse_md.parse_italic(text)
expected = 'this is a paragraph with <em>italic text</em>'
self.assertEqual(result, expected)
def test_parse_italic_does_not_parse_inside_code_block(self):
text = '```\n*this should not parse*\n```'
result = parse_md.parse_italic(text)
expected = '```\n*this should not parse*\n```'
self.assertEqual(result, expected)
    def test_parse_italic_does_not_parse_bold(self):
        text = 'this is a paragraph with **bold text**'
        result = parse_md.parse_italic(text)
        expected = 'this is a paragraph with **bold text**'
        self.assertEqual(result, expected)

    def test_parse_hr(self):
        text = 'this is a paragraph\n\n---\nmore text'
        result = parse_md.parse_hr(text)
        expected = 'this is a paragraph\n\n<hr>\nmore text'
        self.assertEqual(result, expected)

    def test_parse_hr_does_not_parse_inside_code_block(self):
        text = '```\n---\n```'
        result = parse_md.parse_hr(text)
        expected = '```\n---\n```'
        self.assertEqual(result, expected)

    def test_hr_does_not_parse_mid_paragraph(self):
        text = 'this --- should not parse'
        result = parse_md.parse_hr(text)
        expected = 'this --- should not parse'
        self.assertEqual(result, expected)

    def test_remove_hr_from_end(self):
        text = '<p>some text</p>\n---\n<p>some more text</p>\n---'
        result = parse_md.parse_hr(text)
        expected = '<p>some text</p>\n<hr>\n<p>some more text</p>\n'
        self.assertEqual(result, expected)

    def test_parse_ul(self):
        text = 'this is a paragraph\n- item 1\n- item 2\n- item 3\nmore text'
        result = parse_md.parse_ul(text)
        expected = 'this is a paragraph\n<ul>\n<li>item 1</li>\n<li>item 2</li>\n<li>item 3</li>\n'\
                   '</ul>\nmore text'
        self.assertEqual(result, expected)

    def test_parse_multiple_ul(self):
        text = 'this is a paragraph\n- item 1\n- item 2\nmore text before another list\n- item 3\n- item 4\n'\
               'more text after another list'
        result = parse_md.parse_ul(text)
        expected = 'this is a paragraph\n<ul>\n<li>item 1</li>\n<li>item 2</li>\n</ul>\n'\
                   'more text before another list\n<ul>\n<li>item 3</li>\n<li>item 4</li>\n</ul>\n'\
                   'more text after another list'
        self.assertEqual(result, expected)

    def test_parse_sub_ul(self):
        text = '- item 1\n\t- sub item 1\n\t- sub item 2\n- item 2\n\t- sub item 3'
        result = parse_md.parse_ul(text)
        expected = '<ul>\n<li>item 1</li>\n<ul>\n<li>sub item 1</li>\n<li>sub item 2</li>\n</ul>\n'\
                   '<li>item 2</li>\n<ul>\n<li>sub item 3</li>\n</ul>\n</ul>'
        self.assertEqual(result, expected)

    def test_parse_ol(self):
        text = 'this is a paragraph\n1. item 1\n2. item 2\n3. item 3\nmore text'
        result = parse_md.parse_ol(text)
        expected = 'this is a paragraph\n<ol>\n<li>item 1</li>\n<li>item 2</li>\n<li>item 3</li>\n'\
                   '</ol>\nmore text'
        self.assertEqual(result, expected)

    def test_parse_multiple_ol(self):
        text = 'this is a paragraph\n1. item 1\n2. item 2\nmore text before another list\n1. item 3\n'\
               '2. item 4\nmore text after another list'
        result = parse_md.parse_ol(text)
        expected = 'this is a paragraph\n<ol>\n<li>item 1</li>\n<li>item 2</li>\n</ol>\n'\
                   'more text before another list\n<ol>\n<li>item 3</li>\n<li>item 4</li>\n</ol>\n'\
                   'more text after another list'
        self.assertEqual(result, expected)

    def test_replace_html_characters(self):
        text = 'this <code>should change to encoded symbols</code>'
        result = parse_md.replace_special_characters(text)
        # NOTE: the expected entities appear to have been unescaped when this
        # file was extracted; restored here so the test checks the encoding.
        expected = 'this &lt;code&gt;should change to encoded symbols&lt;/code&gt;'
        self.assertEqual(result, expected)

    def test_parse_wiki_link(self):
        text = 'this links to the [[Functional tests]] note'
        result = parse_md.parse_wiki_links(text).replace('%20', ' ')
        expected = 'this links to the <a href="/notes/Test Driven Development with Python/'\
                   'Functional tests/">Functional tests</a> note'
        self.assertEqual(result, expected)

    def test_parse_multiple_wiki_links(self):
        text = '[[Functional tests]] [[TDD workflow]]'
        result = parse_md.parse_wiki_links(text).replace('%20', ' ')
        expected = '<a href="/notes/Test Driven Development with Python/Functional tests/">Functional tests</a> '\
                   '<a href="/notes/Test Driven Development with Python/TDD workflow/">TDD workflow</a>'
        self.assertEqual(result, expected)

    def test_link_to_note_not_found_links_to_note_not_found_page(self):
        text = "this note [[doesn't exist]]"
        result = parse_md.parse_wiki_links(text)
        url = reverse('note_not_found', kwargs={'note_title': 'doesn\'t exist'})
        expected = f"""this note <a href="{url}">doesn't exist</a>"""
        self.assertEqual(result, expected)

    def test_parse_wiki_link_alias(self):
        text = 'this note has an alias [[TDD workflow|alias name]]'
        result = parse_md.parse_wiki_links(text)
        url = reverse('note', kwargs={'note_title': 'TDD workflow',
                                      'source_title': 'Test Driven Development with Python'})
        expected = f'this note has an alias <a href="{url}">alias name</a>'
        self.assertEqual(result, expected)

    def test_parse_wiki_links_case_insensitive(self):
        text = "[[tdd workflow]]"
        result = parse_md.parse_wiki_links(text)
        expected_url = reverse('note', kwargs={'note_title': 'TDD workflow',
                                               'source_title': 'Test Driven Development with Python'})
        expected = f'<a href="{expected_url}">tdd workflow</a>'
        self.assertEqual(result, expected)

    def test_parse_wiki_links_does_not_parse_image_embeds(self):
        text = '![[an_image.png]]'
        result = parse_md.parse_wiki_links(text)
        expected = text
        self.assertEqual(result, expected)

    def test_parse_image_embed(self):
        # NOTE: the markdown image in `text` was lost in extraction;
        # reconstructed from the expected output below.
        text = 'some text\n![alt text](http://www.example.com/path/to/img.jpg)\nmore text'
        result = parse_md.parse_image(text)
        expected = 'some text\n<img src="http://www.example.com/path/to/img.jpg" alt="alt text">\nmore text'
        self.assertEqual(result, expected)

    def test_parse_multiple_image_embed(self):
        # NOTE: both markdown images reconstructed from the expected output.
        text = 'some text\n![alt text](http://www.example.com/path/to/img.jpg)\nmore text\n'\
               '![alt text 2](http://www.example2.com/path/to/img2.jpg)'
        result = parse_md.parse_image(text)
        expected = 'some text\n<img src="http://www.example.com/path/to/img.jpg" alt="alt text">\nmore text\n'\
                   '<img src="http://www.example2.com/path/to/img2.jpg" alt="alt text 2">'
        self.assertEqual(result, expected)

    def test_parse_image_from_file_path(self):
        # NOTE: the input markdown was lost in extraction; the source path used
        # here is an assumption inferred from the expected output.
        text = 'some text\n![tdd workflow](tdd-workflow.png)\nmore text'
        result = parse_md.parse_image(text)
        expected = 'some text\n<img src="/static/img/notes/tdd-workflow.png" '\
                   'alt="tdd workflow">\nmore text'
        self.assertEqual(result, expected)

    @skip
    def test_parse_image_from_static_folder(self):
        self.fail()

    def test_parse_image_file_embed(self):
        text = "some text\n![[tdd-workflow.png]]"
        result = parse_md.parse_image(text)
        expected = 'some text\n<img src="/static/img/notes/tdd-workflow.png">'
        self.assertEqual(result, expected)

    def test_remove_note_type(self):
        text = 'Note type: #litnote\n\n---\nnote body'
        result = parse_md.remove_note_type(text)
        expected = '\n\n---\nnote body'
        self.assertEqual(result, expected)

    def test_remove_permanotes(self):
        text = 'some text\n### See also\n- [[TDD workflow]]\n\n'\
               '### permanotes\n- [[another note]]'
        result = parse_md.remove_permanotes(text)
        expected = 'some text\n### See also\n- [[TDD workflow]]'
        self.assertEqual(result, expected)

    def test_remove_frontmatter(self):
        text = '---\naliases: []\nDate: 2022-02-09\n---\n# note title\ntext'
        result = parse_md.remove_frontmatter(text)
        expected = '# note title\ntext'
        self.assertEqual(result, expected)

    def test_remove_frontmatter_does_not_remove_note_body(self):
        text = 'note type: #litnote\n---\n# heading\ntext\n\n---\n# see also\n- [[another note]]'
        result = parse_md.remove_frontmatter(text)
        expected = text
        self.assertEqual(result, expected)

    def test_parse_inline_tex(self):
        text = 'some text that has inline tex $a+b=2^c$'
        result = parse_md.parse_inline_tex(text)
        expected = 'some text that has inline tex \\(a+b=2^c\\)'
        self.assertEqual(result, expected)

    def test_parse_inline_tex_does_not_parse_tex_block(self):
        text = 'should not parse tex block\n\n$$a+b=2^c$$'
        result = parse_md.parse_inline_tex(text)
        expected = text
        self.assertEqual(result, expected)

    def test_remove_br(self):
        text = 'some text\n\n<br>\nmore text'
        result = parse_md.remove_br(text)
        expected = 'some text\n\n\nmore text'
        self.assertEqual(result, expected)
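The `parse_md` module under test is not part of this file. As an illustration only, here is one way the `<hr>` behaviour exercised by the four `parse_hr` tests above could be implemented; this is a hypothetical sketch, not the project's actual code:

```python
import re


def parse_hr(text):
    # Split out fenced code blocks (capturing group keeps them in the result)
    # so a '---' inside a fence is left untouched.
    parts = re.split(r'(```.*?```)', text, flags=re.DOTALL)
    out = []
    for part in parts:
        if part.startswith('```'):
            out.append(part)
            continue
        # A rule that is the very last line of the text is dropped entirely,
        # then any remaining whole-line '---' becomes an <hr>.
        part = re.sub(r'(?m)^---$\Z', '', part)
        part = re.sub(r'(?m)^---$', '<hr>', part)
        out.append(part)
    return ''.join(out)
```

A mid-paragraph `---` never matches because the pattern is anchored to a whole line with `re.MULTILINE`.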
# File: tests/utils.py (File5/lcdui, MIT license)
from lcdui.utils import ensure_line_count


def test_ensure_line_count():
    assert ensure_line_count([''], 4) == [''] * 4
    assert ensure_line_count(['a', 'b', 'c'], 4) == ['a', 'b', 'c', '']
    assert ensure_line_count([''] * 10, 4) == [''] * 4
    assert ensure_line_count(
        [i for i in 'abcdef'] * 10, 4
    ) == ['a', 'b', 'c', 'd']
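The library's `ensure_line_count` is not shown here, but the assertions above pin down its contract: pad with empty strings or truncate so exactly `count` lines remain. A minimal sketch consistent with those assertions (assumed, not lcdui's actual code):

```python
def ensure_line_count(lines, count):
    # Pad with empty strings, then truncate, so exactly `count` lines remain.
    return (list(lines) + [''] * count)[:count]
```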
# File: preprocessors/preprocessor.py (TheLocust3/FRC-Field-Vision-System, MIT license)
from abc import ABCMeta, abstractmethod


class Preprocessor(object):
    # Python 2 style abstract base class; on Python 3 the equivalent is
    # `class Preprocessor(metaclass=ABCMeta)`.
    __metaclass__ = ABCMeta

    @abstractmethod
    def preprocess_image(self, image):
        pass
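To illustrate how the abstract interface above is meant to be used, a hypothetical concrete subclass (the RGB-tuple pixel format is an assumption, not part of the repo; the base class is re-declared so the sketch runs standalone):

```python
from abc import ABCMeta, abstractmethod


class Preprocessor(object):
    # Mirrors the abstract interface defined above.
    __metaclass__ = ABCMeta

    @abstractmethod
    def preprocess_image(self, image):
        pass


class GrayscalePreprocessor(Preprocessor):
    """Hypothetical preprocessor: averages (r, g, b) tuples to grayscale."""

    def preprocess_image(self, image):
        # `image` is assumed to be a flat list of (r, g, b) tuples.
        return [sum(px) // 3 for px in image]
```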
# File: safenet/authenticator.py (DuncanKushnir/pySafe, MIT license)
import safenet.safeUtils as safeUtils
import queue

# NOTE: the `ffi` object used by the @ffi.callback decorators below is not
# imported in this file; it must be in scope from the package's cffi bindings.

appQueue = queue.Queue()


class lib:
    def __init__(self, authlib, applib, fromBytes=None):
        self.safe_authenticator = authlib
        self.safe_app = applib


class authenticator:
    def __init__(self, authlib, applib):
        self.lib = lib(authlib, applib)

    @safeUtils.safeThread(timeout=5, queue=appQueue)
    def _auth_init_logging(self, output_file_name_override, user_data, o_cb=None):
        """
        bytes, [any], [function], [custom ffi lib]
        char* output_file_name_override, void* user_data
        > callback functions:
        (*o_cb)(void* user_data, FfiResult* result)
        """
        @ffi.callback("void(void* ,FfiResult*)")
        def _auth_init_logging_o_cb(user_data, result):
            safeUtils.checkResult(result)
            appQueue.put('gotResult')
            if o_cb:
                o_cb(user_data, result)

        self.lib.safe_app.auth_init_logging(output_file_name_override, user_data, _auth_init_logging_o_cb)

    @safeUtils.safeThread(timeout=5, queue=appQueue)
    def _auth_output_log_path(self, output_file_name, user_data, o_cb=None):
        """
        bytes, [any], [function], [custom ffi lib]
        char* output_file_name, void* user_data
        > callback functions:
        (*o_cb)(void* user_data, FfiResult* result, char* log_path)
        """
        @ffi.callback("void(void* ,FfiResult* ,char*)")
        def _auth_output_log_path_o_cb(user_data, result, log_path):
            safeUtils.checkResult(result)
            appQueue.put('gotResult')
            if o_cb:
                o_cb(user_data, result, log_path)

        self.lib.safe_app.auth_output_log_path(output_file_name, user_data, _auth_output_log_path_o_cb)

    @safeUtils.safeThread(timeout=5, queue=appQueue)
    def _create_acc(self, account_locator, account_password, invitation, user_data, o_disconnect_notifier_cb=None, o_cb=None):
        """
        bytes, bytes, bytes, [any], [function], [function], [custom ffi lib]
        char* account_locator, char* account_password, char* invitation, void* user_data
        > callback functions:
        (*o_disconnect_notifier_cb)(void* user_data)
        (*o_cb)(void* user_data, FfiResult* result, Authenticator* authenticator)
        """
        @ffi.callback("void(void*)")
        def _create_acc_o_disconnect_notifier_cb(user_data):
            appQueue.put('gotResult')
            if o_disconnect_notifier_cb:
                o_disconnect_notifier_cb(user_data)

        @ffi.callback("void(void* ,FfiResult* ,Authenticator*)")
        def _create_acc_o_cb(user_data, result, authenticator):
            safeUtils.checkResult(result)
            appQueue.put('gotResult')
            if o_cb:
                o_cb(user_data, result, authenticator)

        self.lib.safe_app.create_acc(account_locator, account_password, invitation, user_data, _create_acc_o_disconnect_notifier_cb, _create_acc_o_cb)

    @safeUtils.safeThread(timeout=5, queue=appQueue)
    def _login(self, account_locator, account_password, user_data, o_disconnect_notifier_cb=None, o_cb=None):
        """
        bytes, bytes, [any], [function], [function], [custom ffi lib]
        char* account_locator, char* account_password, void* user_data
        > callback functions:
        (*o_disconnect_notifier_cb)(void* user_data)
        (*o_cb)(void* user_data, FfiResult* result, Authenticator* authenticator)
        """
        @ffi.callback("void(void*)")
        def _login_o_disconnect_notifier_cb(user_data):
            appQueue.put('gotResult')
            if o_disconnect_notifier_cb:
                o_disconnect_notifier_cb(user_data)

        @ffi.callback("void(void* ,FfiResult* ,Authenticator*)")
        def _login_o_cb(user_data, result, authenticator):
            safeUtils.checkResult(result)
            appQueue.put('gotResult')
            if o_cb:
                o_cb(user_data, result, authenticator)

        self.lib.safe_app.login(account_locator, account_password, user_data, _login_o_disconnect_notifier_cb, _login_o_cb)

    @safeUtils.safeThread(timeout=5, queue=appQueue)
    def _auth_reconnect(self, auth, user_data, o_cb=None):
        """
        Authenticator*, [any], [function], [custom ffi lib]
        Authenticator* auth, void* user_data
        > callback functions:
        (*o_cb)(void* user_data, FfiResult* result)
        """
        @ffi.callback("void(void* ,FfiResult*)")
        def _auth_reconnect_o_cb(user_data, result):
            safeUtils.checkResult(result)
            appQueue.put('gotResult')
            if o_cb:
                o_cb(user_data, result)

        self.lib.safe_app.auth_reconnect(auth, user_data, _auth_reconnect_o_cb)

    @safeUtils.safeThread(timeout=5, queue=appQueue)
    def _auth_account_info(self, auth, user_data, o_cb=None):
        """
        Authenticator*, [any], [function], [custom ffi lib]
        Authenticator* auth, void* user_data
        > callback functions:
        (*o_cb)(void* user_data, FfiResult* result, AccountInfo* account_info)
        """
        @ffi.callback("void(void* ,FfiResult* ,AccountInfo*)")
        def _auth_account_info_o_cb(user_data, result, account_info):
            safeUtils.checkResult(result)
            appQueue.put('gotResult')
            if o_cb:
                o_cb(user_data, result, account_info)

        self.lib.safe_app.auth_account_info(auth, user_data, _auth_account_info_o_cb)

    @safeUtils.safeThread(timeout=5, queue=appQueue)
    def _auth_exe_file_stem(self, user_data, o_cb=None):
        """
        [any], [function], [custom ffi lib]
        void* user_data
        > callback functions:
        (*o_cb)(void* user_data, FfiResult* result, char* filename)
        """
        @ffi.callback("void(void* ,FfiResult* ,char*)")
        def _auth_exe_file_stem_o_cb(user_data, result, filename):
            safeUtils.checkResult(result)
            appQueue.put('gotResult')
            if o_cb:
                o_cb(user_data, result, filename)

        self.lib.safe_app.auth_exe_file_stem(user_data, _auth_exe_file_stem_o_cb)

    @safeUtils.safeThread(timeout=5, queue=appQueue)
    def _auth_set_additional_search_path(self, new_path, user_data, o_cb=None):
        """
        bytes, [any], [function], [custom ffi lib]
        char* new_path, void* user_data
        > callback functions:
        (*o_cb)(void* user_data, FfiResult* result)
        """
        @ffi.callback("void(void* ,FfiResult*)")
        def _auth_set_additional_search_path_o_cb(user_data, result):
            safeUtils.checkResult(result)
            appQueue.put('gotResult')
            if o_cb:
                o_cb(user_data, result)

        self.lib.safe_app.auth_set_additional_search_path(new_path, user_data, _auth_set_additional_search_path_o_cb)

    @safeUtils.safeThread(timeout=5, queue=appQueue)
    def _auth_free(self, auth):
        """
        Authenticator*, [custom ffi lib]
        Authenticator* auth
        > callback functions:
        """
        self.lib.safe_app.auth_free(auth)

    @safeUtils.safeThread(timeout=5, queue=appQueue)
    def _auth_unregistered_decode_ipc_msg(self, msg, user_data, o_unregistered=None, o_err=None):
        """
        bytes, [any], [function], [function], [custom ffi lib]
        char* msg, void* user_data
        > callback functions:
        (*o_unregistered)(void* user_data, uint32_t req_id, uint8_t* extra_data, uintptr_t extra_data_len)
        (*o_err)(void* user_data, FfiResult* result, char* response)
        """
        @ffi.callback("void(void* ,uint32_t ,uint8_t* ,uintptr_t)")
        def _auth_unregistered_decode_ipc_msg_o_unregistered(user_data, req_id, extra_data, extra_data_len):
            appQueue.put('gotResult')
            if o_unregistered:
                o_unregistered(user_data, req_id, extra_data, extra_data_len)

        @ffi.callback("void(void* ,FfiResult* ,char*)")
        def _auth_unregistered_decode_ipc_msg_o_err(user_data, result, response):
            safeUtils.checkResult(result)
            appQueue.put('gotResult')
            if o_err:
                o_err(user_data, result, response)

        self.lib.safe_app.auth_unregistered_decode_ipc_msg(msg, user_data, _auth_unregistered_decode_ipc_msg_o_unregistered, _auth_unregistered_decode_ipc_msg_o_err)

    @safeUtils.safeThread(timeout=5, queue=appQueue)
    def _auth_decode_ipc_msg(self, auth, msg, user_data, o_auth=None, o_containers=None, o_unregistered=None, o_share_mdata=None, o_err=None):
        """
        Authenticator*, bytes, [any], [function], [function], [function], [function], [function], [custom ffi lib]
        Authenticator* auth, char* msg, void* user_data
        > callback functions:
        (*o_auth)(void* user_data, uint32_t req_id, AuthReq* req)
        (*o_containers)(void* user_data, uint32_t req_id, ContainersReq* req)
        (*o_unregistered)(void* user_data, uint32_t req_id, uint8_t* extra_data, uintptr_t extra_data_len)
        (*o_share_mdata)(void* user_data, uint32_t req_id, ShareMDataReq* req, MetadataResponse* metadata, uintptr_t metadata_len)
        (*o_err)(void* user_data, FfiResult* result, char* response)
        """
        @ffi.callback("void(void* ,uint32_t ,AuthReq*)")
        def _auth_decode_ipc_msg_o_auth(user_data, req_id, req):
            appQueue.put('gotResult')
            if o_auth:
                o_auth(user_data, req_id, req)

        @ffi.callback("void(void* ,uint32_t ,ContainersReq*)")
        def _auth_decode_ipc_msg_o_containers(user_data, req_id, req):
            appQueue.put('gotResult')
            if o_containers:
                o_containers(user_data, req_id, req)

        @ffi.callback("void(void* ,uint32_t ,uint8_t* ,uintptr_t)")
        def _auth_decode_ipc_msg_o_unregistered(user_data, req_id, extra_data, extra_data_len):
            appQueue.put('gotResult')
            if o_unregistered:
                o_unregistered(user_data, req_id, extra_data, extra_data_len)

        @ffi.callback("void(void* ,uint32_t ,ShareMDataReq* ,MetadataResponse* ,uintptr_t)")
        def _auth_decode_ipc_msg_o_share_mdata(user_data, req_id, req, metadata, metadata_len):
            appQueue.put('gotResult')
            if o_share_mdata:
                o_share_mdata(user_data, req_id, req, metadata, metadata_len)

        @ffi.callback("void(void* ,FfiResult* ,char*)")
        def _auth_decode_ipc_msg_o_err(user_data, result, response):
            safeUtils.checkResult(result)
            appQueue.put('gotResult')
            if o_err:
                o_err(user_data, result, response)

        self.lib.safe_app.auth_decode_ipc_msg(auth, msg, user_data, _auth_decode_ipc_msg_o_auth, _auth_decode_ipc_msg_o_containers, _auth_decode_ipc_msg_o_unregistered, _auth_decode_ipc_msg_o_share_mdata, _auth_decode_ipc_msg_o_err)

    @safeUtils.safeThread(timeout=5, queue=appQueue)
    def _encode_share_mdata_resp(self, auth, req, req_id, is_granted, user_data, o_cb=None):
        """
        Authenticator*, ShareMDataReq*, uint32_t, _Bool, [any], [function], [custom ffi lib]
        Authenticator* auth, ShareMDataReq* req, uint32_t req_id, _Bool is_granted, void* user_data
        > callback functions:
        (*o_cb)(void* user_data, FfiResult* result, char* response)
        """
        @ffi.callback("void(void* ,FfiResult* ,char*)")
        def _encode_share_mdata_resp_o_cb(user_data, result, response):
            safeUtils.checkResult(result)
            appQueue.put('gotResult')
            if o_cb:
                o_cb(user_data, result, response)

        self.lib.safe_app.encode_share_mdata_resp(auth, req, req_id, is_granted, user_data, _encode_share_mdata_resp_o_cb)

    @safeUtils.safeThread(timeout=5, queue=appQueue)
    def _auth_revoke_app(self, auth, app_id, user_data, o_cb=None):
        """
        Authenticator*, bytes, [any], [function], [custom ffi lib]
        Authenticator* auth, char* app_id, void* user_data
        > callback functions:
        (*o_cb)(void* user_data, FfiResult* result, char* response)
        """
        @ffi.callback("void(void* ,FfiResult* ,char*)")
        def _auth_revoke_app_o_cb(user_data, result, response):
            safeUtils.checkResult(result)
            appQueue.put('gotResult')
            if o_cb:
                o_cb(user_data, result, response)

        self.lib.safe_app.auth_revoke_app(auth, app_id, user_data, _auth_revoke_app_o_cb)

    @safeUtils.safeThread(timeout=5, queue=appQueue)
    def _auth_flush_app_revocation_queue(self, auth, user_data, o_cb=None):
        """
        Authenticator*, [any], [function], [custom ffi lib]
        Authenticator* auth, void* user_data
        > callback functions:
        (*o_cb)(void* user_data, FfiResult* result)
        """
        @ffi.callback("void(void* ,FfiResult*)")
        def _auth_flush_app_revocation_queue_o_cb(user_data, result):
            safeUtils.checkResult(result)
            appQueue.put('gotResult')
            if o_cb:
                o_cb(user_data, result)

        self.lib.safe_app.auth_flush_app_revocation_queue(auth, user_data, _auth_flush_app_revocation_queue_o_cb)

    @safeUtils.safeThread(timeout=5, queue=appQueue)
    def _encode_unregistered_resp(self, req_id, is_granted, user_data, o_cb=None):
        """
        uint32_t, _Bool, [any], [function], [custom ffi lib]
        uint32_t req_id, _Bool is_granted, void* user_data
        > callback functions:
        (*o_cb)(void* user_data, FfiResult* result, char* response)
        """
        @ffi.callback("void(void* ,FfiResult* ,char*)")
        def _encode_unregistered_resp_o_cb(user_data, result, response):
            safeUtils.checkResult(result)
            appQueue.put('gotResult')
            if o_cb:
                o_cb(user_data, result, response)

        self.lib.safe_app.encode_unregistered_resp(req_id, is_granted, user_data, _encode_unregistered_resp_o_cb)

    @safeUtils.safeThread(timeout=5, queue=appQueue)
    def _encode_auth_resp(self, auth, req, req_id, is_granted, user_data, o_cb=None):
        """
        Authenticator*, AuthReq*, uint32_t, _Bool, [any], [function], [custom ffi lib]
        Authenticator* auth, AuthReq* req, uint32_t req_id, _Bool is_granted, void* user_data
        > callback functions:
        (*o_cb)(void* user_data, FfiResult* result, char* response)
        """
        @ffi.callback("void(void* ,FfiResult* ,char*)")
        def _encode_auth_resp_o_cb(user_data, result, response):
            safeUtils.checkResult(result)
            appQueue.put('gotResult')
            if o_cb:
                o_cb(user_data, result, response)

        self.lib.safe_app.encode_auth_resp(auth, req, req_id, is_granted, user_data, _encode_auth_resp_o_cb)

    @safeUtils.safeThread(timeout=5, queue=appQueue)
    def _encode_containers_resp(self, auth, req, req_id, is_granted, user_data, o_cb=None):
        """
        Authenticator*, ContainersReq*, uint32_t, _Bool, [any], [function], [custom ffi lib]
        Authenticator* auth, ContainersReq* req, uint32_t req_id, _Bool is_granted, void* user_data
        > callback functions:
        (*o_cb)(void* user_data, FfiResult* result, char* response)
        """
        @ffi.callback("void(void* ,FfiResult* ,char*)")
        def _encode_containers_resp_o_cb(user_data, result, response):
            safeUtils.checkResult(result)
            appQueue.put('gotResult')
            if o_cb:
                o_cb(user_data, result, response)

        self.lib.safe_app.encode_containers_resp(auth, req, req_id, is_granted, user_data, _encode_containers_resp_o_cb)

    @safeUtils.safeThread(timeout=5, queue=appQueue)
    def _auth_rm_revoked_app(self, auth, app_id, user_data, o_cb=None):
        """
        Authenticator*, bytes, [any], [function], [custom ffi lib]
        Authenticator* auth, char* app_id, void* user_data
        > callback functions:
        (*o_cb)(void* user_data, FfiResult* result)
        """
        @ffi.callback("void(void* ,FfiResult*)")
        def _auth_rm_revoked_app_o_cb(user_data, result):
            safeUtils.checkResult(result)
            appQueue.put('gotResult')
            if o_cb:
                o_cb(user_data, result)

        self.lib.safe_app.auth_rm_revoked_app(auth, app_id, user_data, _auth_rm_revoked_app_o_cb)

    @safeUtils.safeThread(timeout=5, queue=appQueue)
    def _auth_revoked_apps(self, auth, user_data, o_cb=None):
        """
        Authenticator*, [any], [function], [custom ffi lib]
        Authenticator* auth, void* user_data
        > callback functions:
        (*o_cb)(void* user_data, FfiResult* result, AppExchangeInfo* app_exchange_info, uintptr_t app_exchange_info_len)
        """
        @ffi.callback("void(void* ,FfiResult* ,AppExchangeInfo* ,uintptr_t)")
        def _auth_revoked_apps_o_cb(user_data, result, app_exchange_info, app_exchange_info_len):
            safeUtils.checkResult(result)
            appQueue.put('gotResult')
            if o_cb:
                o_cb(user_data, result, app_exchange_info, app_exchange_info_len)

        self.lib.safe_app.auth_revoked_apps(auth, user_data, _auth_revoked_apps_o_cb)

    @safeUtils.safeThread(timeout=5, queue=appQueue)
    def _auth_registered_apps(self, auth, user_data, o_cb=None):
        """
        Authenticator*, [any], [function], [custom ffi lib]
        Authenticator* auth, void* user_data
        > callback functions:
        (*o_cb)(void* user_data, FfiResult* result, RegisteredApp* registered_app, uintptr_t registered_app_len)
        """
        @ffi.callback("void(void* ,FfiResult* ,RegisteredApp* ,uintptr_t)")
        def _auth_registered_apps_o_cb(user_data, result, registered_app, registered_app_len):
            safeUtils.checkResult(result)
            appQueue.put('gotResult')
            if o_cb:
                o_cb(user_data, result, registered_app, registered_app_len)

        self.lib.safe_app.auth_registered_apps(auth, user_data, _auth_registered_apps_o_cb)

    @safeUtils.safeThread(timeout=5, queue=appQueue)
    def _auth_apps_accessing_mutable_data(self, auth, md_name, md_type_tag, user_data, o_cb=None):
        """
        Authenticator*, XorNameArray*, uint64_t, [any], [function], [custom ffi lib]
        Authenticator* auth, XorNameArray* md_name, uint64_t md_type_tag, void* user_data
        > callback functions:
        (*o_cb)(void* user_data, FfiResult* result, AppAccess* app_access, uintptr_t app_access_len)
        """
        @ffi.callback("void(void* ,FfiResult* ,AppAccess* ,uintptr_t)")
        def _auth_apps_accessing_mutable_data_o_cb(user_data, result, app_access, app_access_len):
            safeUtils.checkResult(result)
            appQueue.put('gotResult')
            if o_cb:
                o_cb(user_data, result, app_access, app_access_len)

        self.lib.safe_app.auth_apps_accessing_mutable_data(auth, md_name, md_type_tag, user_data, _auth_apps_accessing_mutable_data_o_cb)
# File: github/content/__init__.py (NCPlayz/github, Apache-2.0 license)
from github.content.codeofconduct import *
from github.content.codeofconduct import __all__ as _codeofconduct__all__

__all__ = [
    *_codeofconduct__all__,
]
# File: config.py (SJSUSpring21/Team14, MIT license)

# create directory path to store face images
faceImagesPath = 'faceImages'
# create directory path to store face metadata and embeddings
faceData = 'faceData'
# create directory path to store trained weights
weights = 'weights'
# File: runmain.py (emillynge/pyangular, Apache-2.0 license)
import backend

backend.main('-e', 'test')
# File: bdateutil/easter.py (coder6464/python-bdateutil, MIT license)
from dateutil.easter import *
# File: sysopt/backends/casadi/__init__.py (csp-at-unimelb/sysopt, Apache-2.0 license)
"""Casadi Backend Implementation."""
import casadi as _casadi
from sysopt.backends.casadi.expression_graph import lambdify
from sysopt.backends.casadi.solvers import Integrator, InterpolatedPath

epsilon = 1e-9


def sparse_matrix(shape):
    return _casadi.SX(*shape)


def list_symbols(expr) -> set:
    return set(_casadi.symvar(expr))
# File: flexinfer/postprocess/obj_det/converters/__init__.py (rriyaldhi/flexinfer, Apache-2.0 license)
from .bbox_anchor_converter import BBoxAnchorConverter
from .builder import build_converter
from .point_anchor_converter import PointAnchorConverter
from .iou_bbox_anchor_converter import IoUBBoxAnchorConverter
__all__ = ['BBoxAnchorConverter', 'PointAnchorConverter', 'build_converter', 'IoUBBoxAnchorConverter']
30876d806a6a87e12d968cfa1cb7e800359fda0d | 196 | py | Python | projects/MEDIUM/ATS-parse-emit/Python/prelude/CATS/bool_cats.py | steinwaywhw/ATS-Postiats-contrib | 6e5268649a3b91158445cb665d9102f1ee1cdfbf | [
"MIT"
] | null | null | null | projects/MEDIUM/ATS-parse-emit/Python/prelude/CATS/bool_cats.py | steinwaywhw/ATS-Postiats-contrib | 6e5268649a3b91158445cb665d9102f1ee1cdfbf | [
"MIT"
] | null | null | null | projects/MEDIUM/ATS-parse-emit/Python/prelude/CATS/bool_cats.py | steinwaywhw/ATS-Postiats-contrib | 6e5268649a3b91158445cb665d9102f1ee1cdfbf | [
"MIT"
] | null | null | null | #
# for Python code translated from ATS
#
###### beg of [bool_cats.py] ######
######################################
######################################
###### end of [bool_cats.py] ######
# === paco/constant.py (lmicra/paco, MIT) ===
# -*- coding: utf-8 -*-
import asyncio
def constant(value, delay=None):
"""
Returns a coroutine function that when called, always returns
the provided value.
This function has an alias: `paco.identity`.
Arguments:
value (mixed): value to constantly return when coroutine is called.
delay (int/float): optional return value delay in seconds.
Returns:
coroutinefunction
Usage::
coro = paco.constant('foo')
await coro()
# => 'foo'
await coro()
# => 'foo'
"""
@asyncio.coroutine
def coro():
if delay:
yield from asyncio.sleep(delay)
return value
return coro
def identity(value, delay=None):
"""
Returns a coroutine function that when called, always returns
the provided value.
This function is an alias to `paco.constant`.
Arguments:
value (mixed): value to constantly return when coroutine is called.
delay (int/float): optional return value delay in seconds.
Returns:
coroutinefunction
Usage::
coro = paco.identity('foo')
await coro()
# => 'foo'
await coro()
# => 'foo'
"""
return constant(value, delay=delay)
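`@asyncio.coroutine` with `yield from`, as used above, was removed in Python 3.11. A minimal, self-contained sketch of the same behavior with a native coroutine (mirroring `constant` above, not paco's actual code):

```python
import asyncio

def constant(value, delay=None):
    """Return a coroutine function that always resolves to ``value``."""
    async def coro():
        if delay:
            await asyncio.sleep(delay)
        return value
    return coro

# The returned coroutine function can be awaited any number of times.
coro = constant('foo')
result = asyncio.run(coro())
```
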
# === examples/absolute-import-rewrite/hello/hello/__init__.py (olsonpm/python-vendorize, BSD-2-Clause) ===
from hello.messages import message
def hello():
print(message)
# === mmdet/datasets/pipelines/dct_channel_index.py (vinnibuh/mmdetection, Apache-2.0) ===
dct_channel_index = {
24:
[
[0, 1, 2, 3,
8, 9, 10, 11,
16, 17, 18, 19,
24, 25, 26, 27],
[0, 1,
8, 9],
[0, 1,
8, 9]
],
32:
[
[0, 1, 2, 3, 4,
8, 9, 10, 11, 12,
16, 17, 18, 19, 20,
24, 25, 26, 27,
32, 33, 34],
[0, 1, 2,
8, 9],
[0, 1, 2,
8, 9]
],
48:
[
[0, 1, 2, 3, 4, 5,
8, 9, 10, 11, 12, 13,
16, 17, 18, 19, 20, 21,
24, 25, 26, 27, 28, 29,
32, 33, 34, 35,
40, 41, 42, 43],
[0, 1, 2,
8, 9, 10,
16, 17],
[0, 1, 2,
8, 9, 10,
16, 17]
],
64:
[
[0, 1, 2, 3, 4, 5, 6,
8, 9, 10, 11, 12, 13, 14,
16, 17, 18, 19, 20, 21,
24, 25, 26, 27, 28, 29,
32, 33, 34, 35, 36, 37,
40, 41, 42, 43, 44, 45,
48, 49, 50, 51, 52, 53],
[0, 1, 2,
8, 9, 10,
16, 17,
24, 25],
[0, 1, 2,
8, 9, 10,
16, 17,
24, 25],
],
128:
[
[0, 1, 2, 3, 4, 5, 6, 7,
8, 9, 10, 11, 12, 13, 14, 15,
16, 17, 18, 19, 20, 21, 22, 23,
24, 25, 26, 27, 28, 29, 30, 31,
32, 33, 34, 35, 36, 37, 38, 39,
40, 41, 42, 43, 44, 45, 46, 47,
48, 49, 50, 51, 52, 53, 54, 55,
56, 57, 58, 59, 60, 61, 62, 63],
[0, 1, 2, 3, 4, 5,
8, 9, 10, 11, 12, 13,
16, 17, 18, 19, 20, 21,
24, 25, 26, 27, 28, 29,
32, 33, 34, 35,
40, 41, 42, 43],
[0, 1, 2, 3, 4, 5,
8, 9, 10, 11, 12, 13,
16, 17, 18, 19, 20, 21,
24, 25, 26, 27, 28, 29,
32, 33, 34, 35,
40, 41, 42, 43]
]
}
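The dictionary above appears to map a channel budget to three per-branch lists of retained DCT coefficient indices in an 8x8 frequency grid (the Y/Cb/Cr branch reading is an assumption). A quick structural check on an inline copy of the 24-channel entry:

```python
# Inline copy of the 24-channel entry of dct_channel_index above.
dct_channel_index_24 = [
    [0, 1, 2, 3, 8, 9, 10, 11, 16, 17, 18, 19, 24, 25, 26, 27],  # first branch
    [0, 1, 8, 9],  # second branch
    [0, 1, 8, 9],  # third branch
]

# The branch lengths sum to the channel budget that keys the entry.
total = sum(len(branch) for branch in dct_channel_index_24)

# Each index encodes a position in an 8x8 DCT grid: row = i // 8, col = i % 8,
# so the kept coefficients form a low-frequency block in the top-left corner.
positions = [(i // 8, i % 8) for i in dct_channel_index_24[0]]
```
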
# === PyVersion/PeriodicVersion/dataAnalysisTest.py (levondov/FTR_Adjoint, MIT) ===
from gd_adj_match_1var_functions import RunMoments,getLattice,Initial_conditions
from analysis_util import *
if 0:
dt = LoadData('test')
period = 5
lat,_ = getLattice(dt[0][0,:],period)
z,y,m,k = RunMoments(lat,Initial_conditions())
fig,ax=plt.subplots()
#PlotMoments(z,y,k,fig,ax,linestyles=['--','--'])
lat,_ = getLattice(dt[0][-1,:],period)
z,y,m,k = RunMoments(lat,Initial_conditions())
PlotMoments(z,y,k,fig,ax)
plt.figure()
plt.plot(np.log10(dt[1]/dt[1][1,0]),marker='.')
plt.ylabel('$log( FoM/FoM_0 )$')
plt.xlabel('Iterations (kind of)')
plt.title('FoM vs Iterations')
plt.grid(True)
else:
dt = LoadData('testx2')
period = 5
lat,_ = getLattice(dt[0],period)
z,y,m,k = RunMoments(lat,dt[1][0,:])
fig,ax=plt.subplots()
PlotMoments(z,y,k,fig,ax,linestyles=['--','--'])
lat,_ = getLattice(dt[0],period)
z,y,m,k = RunMoments(lat,dt[1][-1,:])
PlotMoments(z,y,k,fig,ax)
plt.figure()
plt.plot(np.log10(dt[2]/dt[2][1,0]),marker='.')
plt.ylabel('$log( FoM/FoM_0 )$')
plt.xlabel('Iterations (kind of)')
plt.title('FoM vs Iterations')
plt.grid(True)
plt.show()
plt.show()
# === dffml/skel/service/setup.py (Patil2099/dffml, MIT) ===
from setuptools import setup
from dffml_setup_common import SETUP_KWARGS, IMPORT_NAME
SETUP_KWARGS["entry_points"] = {
"dffml.service.cli": [f"misc = {IMPORT_NAME}.misc:MiscService"]
}
setup(**SETUP_KWARGS)
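Setuptools entry point specs have the shape `name = module:attr`. A small sketch of how the f-string above expands and can be split back apart; the `IMPORT_NAME` value here is hypothetical, the real one comes from `dffml_setup_common`:

```python
# Hypothetical stand-in for the value imported from dffml_setup_common.
IMPORT_NAME = "dffml_service_misc"

# The spec string registered under the "dffml.service.cli" group.
spec = f"misc = {IMPORT_NAME}.misc:MiscService"

# An entry point spec is "name = module:attr"; split it back apart.
name, target = (part.strip() for part in spec.split("="))
module, attr = target.split(":")
```
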
# === clapbot/search/model/__init__.py (alexrudy/ClapBot, MIT) ===
from .search import HousingSearch, Status
from .location import BoundingBox
__all__ = ['HousingSearch', 'Status', 'BoundingBox']
# === venv/lib/python3.8/site-packages/virtualenv/create/via_global_ref/_virtualenv.py (GiulianaPola/select_repeats, MIT) ===
# File content is a pip cache pool path (likely a symlink target), reproduced as-is:
# /home/runner/.cache/pip/pool/68/4b/8c/07932ba4a84ac2e5ae9afe49ef04e92bac3d8d4191d455cf09d093e5fc
# === similarities/literalsim.py (shibing624/similarities, Apache-2.0) ===
# -*- coding: utf-8 -*-
"""
@author:XuMing(xuming624@qq.com), Vit Novotny <witiko@mail.muni.cz>, lhy<lhy_in_blcu@126.com>
@description:
Licensed under the GNU LGPL v2.1 - http://www.gnu.org/licenses/lgpl.html
This module provides classes that deal with sentence similarities from mean term vector.
Adjust the gensim similarities Index to compute sentence similarities.
"""
import os
from typing import List, Union, Dict
import jieba
import jieba.analyse
import jieba.posseg
import numpy as np
from loguru import logger
from tqdm import tqdm
from text2vec import Word2Vec
from similarities.similarity import SimilarityABC
from similarities.utils.distance import string_hash, hamming_distance
from similarities.utils.rank_bm25 import BM25Okapi
from similarities.utils.tfidf import TFIDF, default_stopwords
from similarities.utils.util import cos_sim, semantic_search
pwd_path = os.path.abspath(os.path.dirname(__file__))
class SimHashSimilarity(SimilarityABC):
"""
Compute SimHash similarity between two sentences and retrieves most
similar sentence for a given corpus.
"""
def __init__(self, corpus: Union[List[str], Dict[str, str]] = None):
self.corpus = {}
self.corpus_ids_map = {}
self.corpus_embeddings = []
if corpus is not None:
self.add_corpus(corpus)
def __len__(self):
"""Get length of corpus."""
return len(self.corpus)
def __str__(self):
base = f"Similarity: {self.__class__.__name__}, matching_model: SimHash"
if self.corpus:
base += f", corpus size: {len(self.corpus)}"
return base
def add_corpus(self, corpus: Union[List[str], Dict[str, str]]):
"""
Extend the corpus with new documents.
Parameters
----------
corpus : list of str or dict of str
"""
corpus_new = {}
start_id = len(self.corpus) if self.corpus else 0
if isinstance(corpus, list):
corpus = list(set(corpus))
for id, doc in enumerate(corpus):
if doc not in list(self.corpus.values()):
corpus_new[start_id + id] = doc
else:
for id, doc in corpus.items():
if doc not in list(self.corpus.values()):
corpus_new[id] = doc
self.corpus.update(corpus_new)
self.corpus_ids_map = {i: id for i, id in enumerate(list(self.corpus.keys()))}
logger.info(f"Start computing corpus embeddings, new docs: {len(corpus_new)}")
corpus_texts = list(corpus_new.values())
corpus_embeddings = []
for sentence in tqdm(corpus_texts, desc="Computing corpus SimHash"):
corpus_embeddings.append(self.simhash(sentence))
if self.corpus_embeddings:
self.corpus_embeddings += corpus_embeddings
else:
self.corpus_embeddings = corpus_embeddings
logger.info(f"Add {len(corpus)} docs, total: {len(self.corpus)}, emb size: {len(self.corpus_embeddings)}")
def simhash(self, sentence: str):
"""
Compute SimHash for a given text.
:param sentence: str
:return: hash code
"""
seg = jieba.cut(sentence)
key_word = jieba.analyse.extract_tags('|'.join(seg), topK=None, withWeight=True, allowPOS=())
        # sort by weight first, then by word
key_list = []
for feature, weight in key_word:
weight = int(weight * 20)
temp = []
for f in string_hash(feature):
if f == '1':
temp.append(weight)
else:
temp.append(-weight)
key_list.append(temp)
content_list = np.sum(np.array(key_list), axis=0)
        # no hash code could be derived from the text
if len(key_list) == 0:
return '00'
hash_code = ''
for c in content_list:
if c > 0:
hash_code = hash_code + '1'
else:
hash_code = hash_code + '0'
return hash_code
def _sim_score(self, seq1, seq2):
"""Convert hamming distance to similarity score."""
        # convert the hamming distance into a similarity score
score = 0.0
if len(seq1) > 2 and len(seq2) > 2:
score = 1 - hamming_distance(seq1, seq2) / len(seq1)
return score
def similarity(self, a: Union[str, List[str]], b: Union[str, List[str]]):
"""
Compute hamming similarity between two sentences.
Parameters
----------
a : str or list of str
b : str or list of str
Returns
-------
list of float
"""
if isinstance(a, str):
a = [a]
if isinstance(b, str):
b = [b]
if len(a) != len(b):
raise ValueError("expected two inputs of the same length")
seqs1 = [self.simhash(text) for text in a]
seqs2 = [self.simhash(text) for text in b]
scores = [self._sim_score(seq1, seq2) for seq1, seq2 in zip(seqs1, seqs2)]
return scores
def distance(self, a: Union[str, List[str]], b: Union[str, List[str]]):
"""
Compute hamming distance between two sentences.
Parameters
----------
a : str or list of str
b : str or list of str
Returns
-------
list of float
"""
sim_scores = self.similarity(a, b)
return [1 - score for score in sim_scores]
def most_similar(self, queries: Union[str, List[str], Dict[str, str]], topn: int = 10):
"""
Find the topn most similar texts to the query against the corpus.
:param queries: list of str or str
:param topn: int
:return: list of list tuples (corpus_id, corpus_text, similarity_score)
"""
if isinstance(queries, str) or not hasattr(queries, '__len__'):
queries = [queries]
if isinstance(queries, list):
queries = {id: query for id, query in enumerate(queries)}
result = {qid: {} for qid, query in queries.items()}
for qid, query in queries.items():
q_res = []
query_emb = self.simhash(query)
for (corpus_id, doc), doc_emb in zip(self.corpus.items(), self.corpus_embeddings):
score = self._sim_score(query_emb, doc_emb)
q_res.append((corpus_id, score))
q_res.sort(key=lambda x: x[1], reverse=True)
q_res = q_res[:topn]
for corpus_id, score in q_res:
result[qid][corpus_id] = score
return result
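The `simhash` and `_sim_score` methods above depend on jieba segmentation, TF-IDF keyword weights, and `string_hash`/`hamming_distance` from `similarities.utils.distance`. A dependency-free sketch of the same idea, with whitespace tokenization, uniform weights, and an md5-based token fingerprint standing in for those pieces:

```python
import hashlib

def simhash64(text: str) -> str:
    """64-bit SimHash over whitespace tokens with uniform weights."""
    acc = [0] * 64
    for token in text.split():
        # Stable 64-bit token fingerprint via md5 (stands in for string_hash).
        h = int.from_bytes(hashlib.md5(token.encode("utf-8")).digest()[:8], "big")
        for bit in range(64):
            # Vote +1/-1 per bit; the sign of the total gives the hash bit.
            acc[bit] += 1 if (h >> bit) & 1 else -1
    return "".join("1" if v > 0 else "0" for v in acc)

def hamming_similarity(seq1: str, seq2: str) -> float:
    """1 minus the normalized hamming distance, as _sim_score computes it."""
    distance = sum(c1 != c2 for c1, c2 in zip(seq1, seq2))
    return 1 - distance / len(seq1)

a = simhash64("the quick brown fox jumps")
b = simhash64("the quick brown fox sleeps")
score = hamming_similarity(a, b)
```

Similar token multisets flip few votes, so near-duplicate texts land at small hamming distances.
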
class TfidfSimilarity(SimilarityABC):
"""
Compute TFIDF similarity between two sentences and retrieves most
similar sentence for a given corpus.
"""
def __init__(self, corpus: Union[List[str], Dict[str, str]] = None):
super().__init__()
self.corpus = {}
self.corpus_ids_map = {}
self.corpus_embeddings = []
self.tfidf = TFIDF()
if corpus is not None:
self.add_corpus(corpus)
def __len__(self):
"""Get length of corpus."""
return len(self.corpus)
def __str__(self):
base = f"Similarity: {self.__class__.__name__}, matching_model: Tfidf"
if self.corpus:
base += f", corpus size: {len(self.corpus)}"
return base
def add_corpus(self, corpus: Union[List[str], Dict[str, str]]):
"""
Extend the corpus with new documents.
Parameters
----------
corpus : list of str
"""
corpus_new = {}
start_id = len(self.corpus) if self.corpus else 0
if isinstance(corpus, list):
corpus = list(set(corpus))
for id, doc in enumerate(corpus):
if doc not in list(self.corpus.values()):
corpus_new[start_id + id] = doc
else:
for id, doc in corpus.items():
if doc not in list(self.corpus.values()):
corpus_new[id] = doc
self.corpus.update(corpus_new)
self.corpus_ids_map = {i: id for i, id in enumerate(list(self.corpus.keys()))}
logger.info(f"Start computing corpus embeddings, new docs: {len(corpus_new)}")
corpus_texts = list(corpus_new.values())
corpus_embeddings = []
for sentence in tqdm(corpus_texts, desc="Computing corpus TFIDF"):
corpus_embeddings.append(self.tfidf.get_tfidf(sentence))
if self.corpus_embeddings:
self.corpus_embeddings += corpus_embeddings
else:
self.corpus_embeddings = corpus_embeddings
logger.info(f"Add {len(corpus)} docs, total: {len(self.corpus)}, emb size: {len(self.corpus_embeddings)}")
def similarity(self, a: Union[str, List[str]], b: Union[str, List[str]]):
"""
Compute cosine similarity score between two sentences.
:param a:
:param b:
:return:
"""
if isinstance(a, str):
a = [a]
if isinstance(b, str):
b = [b]
features1 = [self.tfidf.get_tfidf(text) for text in a]
features2 = [self.tfidf.get_tfidf(text) for text in b]
return cos_sim(np.array(features1), np.array(features2))
def distance(self, a: Union[str, List[str]], b: Union[str, List[str]]):
"""Compute cosine distance between two keys."""
return 1 - self.similarity(a, b)
def most_similar(self, queries: Union[str, List[str], Dict[str, str]], topn: int = 10):
"""Find the topn most similar texts to the query against the corpus."""
if isinstance(queries, str) or not hasattr(queries, '__len__'):
queries = [queries]
if isinstance(queries, list):
queries = {id: query for id, query in enumerate(queries)}
result = {qid: {} for qid, query in queries.items()}
queries_ids_map = {i: id for i, id in enumerate(list(queries.keys()))}
queries_texts = list(queries.values())
queries_embeddings = np.array([self.tfidf.get_tfidf(query) for query in queries_texts], dtype=np.float32)
corpus_embeddings = np.array(self.corpus_embeddings, dtype=np.float32)
all_hits = semantic_search(queries_embeddings, corpus_embeddings, top_k=topn)
for idx, hits in enumerate(all_hits):
for hit in hits[0:topn]:
result[queries_ids_map[idx]][self.corpus_ids_map[hit['corpus_id']]] = hit['score']
return result
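`semantic_search` here comes from `similarities.utils.util`; its core behavior, brute-force ranking of corpus vectors by cosine similarity and keeping the top k as `{'corpus_id': ..., 'score': ...}` hits, can be sketched without numpy as:

```python
import math

def cosine(u, v):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def top_k_hits(query_vec, corpus_vecs, top_k=2):
    """Brute-force semantic search: rank corpus vectors by cosine similarity."""
    hits = [{"corpus_id": i, "score": cosine(query_vec, v)}
            for i, v in enumerate(corpus_vecs)]
    return sorted(hits, key=lambda h: h["score"], reverse=True)[:top_k]

hits = top_k_hits([1.0, 0.0], [[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
```
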
class BM25Similarity(SimilarityABC):
"""
Compute BM25OKapi similarity between two sentences and retrieves most
similar sentence for a given corpus.
"""
def __init__(self, corpus: Union[List[str], Dict[str, str]] = None):
super().__init__()
self.corpus = {}
self.corpus_ids_map = {}
self.bm25 = None
if corpus is not None:
self.add_corpus(corpus)
def __len__(self):
"""Get length of corpus."""
return len(self.corpus)
def __str__(self):
base = f"Similarity: {self.__class__.__name__}, matching_model: BM25"
if self.corpus:
base += f", corpus size: {len(self.corpus)}"
return base
def add_corpus(self, corpus: Union[List[str], Dict[str, str]]):
"""
Extend the corpus with new documents.
Parameters
----------
corpus : list of str or dict
"""
corpus_new = {}
start_id = len(self.corpus) if self.corpus else 0
if isinstance(corpus, list):
corpus = list(set(corpus))
for id, doc in enumerate(corpus):
if doc not in list(self.corpus.values()):
corpus_new[start_id + id] = doc
else:
for id, doc in corpus.items():
if doc not in list(self.corpus.values()):
corpus_new[id] = doc
self.corpus.update(corpus_new)
self.corpus_ids_map = {i: id for i, id in enumerate(list(self.corpus.keys()))}
logger.info(f"Start computing corpus embeddings, new docs: {len(corpus_new)}")
corpus_texts = list(corpus_new.values())
corpus_seg = [jieba.lcut(d) for d in corpus_texts]
corpus_seg = [[w for w in doc if (w.strip().lower() not in default_stopwords) and len(w.strip()) > 0] for doc in
corpus_seg]
self.bm25 = BM25Okapi(corpus_seg)
logger.info(f"Add {len(corpus)} docs, total: {len(self.corpus)}")
def most_similar(self, queries: Union[str, List[str], Dict[str, str]], topn=10):
"""
Find the topn most similar texts to the query against the corpus.
:param queries: input query
:param topn: int
:return: Dict[str, Dict[str, float]], {query_id: {corpus_id: similarity_score}}
"""
if not self.bm25:
raise ValueError("BM25 model is not initialized. Please add_corpus first, eg. `add_corpus(corpus)`")
if isinstance(queries, str):
queries = [queries]
if isinstance(queries, list):
queries = {id: query for id, query in enumerate(queries)}
result = {qid: {} for qid, query in queries.items()}
for qid, query in queries.items():
tokens = jieba.lcut(query)
scores = self.bm25.get_scores(tokens)
q_res = [{'corpus_id': corpus_id, 'score': score} for corpus_id, score in enumerate(scores)]
q_res = sorted(q_res, key=lambda x: x['score'], reverse=True)[:topn]
for res in q_res:
corpus_id = self.corpus_ids_map[res['corpus_id']]
result[qid][corpus_id] = res['score']
return result
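`BM25Okapi` above is imported from `similarities.utils.rank_bm25`. A minimal sketch of classic Okapi BM25 scoring over tokenized documents (the `k1=1.5`, `b=0.75` defaults are conventional choices, not necessarily the library's):

```python
import math

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Okapi BM25 score of `query` (a token list) against each tokenized doc."""
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n
    # Document frequency of each term.
    df = {}
    for d in docs:
        for t in set(d):
            df[t] = df.get(t, 0) + 1
    scores = []
    for d in docs:
        s = 0.0
        for t in query:
            if t not in df:
                continue
            idf = math.log(1 + (n - df[t] + 0.5) / (df[t] + 0.5))
            tf = d.count(t)
            # Term frequency saturated by k1, normalized by doc length via b.
            s += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

docs = [["cat", "sat", "mat"], ["dog", "sat", "log"], ["cat", "cat", "nap"]]
scores = bm25_scores(["cat"], docs)
```

A document repeating the query term scores highest; documents without it score zero.
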
class WordEmbeddingSimilarity(SimilarityABC):
"""
Compute Word2Vec similarity between two sentences and retrieves most
similar sentence for a given corpus.
"""
def __init__(self, corpus: Union[List[str], Dict[str, str]] = None, model_name_or_path="w2v-light-tencent-chinese"):
"""
Init WordEmbeddingSimilarity.
:param model_name_or_path: ~text2vec.Word2Vec model name or path to model file.
:param corpus: list of str
"""
if isinstance(model_name_or_path, str):
self.keyedvectors = Word2Vec(model_name_or_path)
elif hasattr(model_name_or_path, "encode"):
self.keyedvectors = model_name_or_path
else:
raise ValueError("model_name_or_path must be ~text2vec.Word2Vec or Word2Vec model name")
self.corpus = {}
self.corpus_ids_map = {}
self.corpus_embeddings = []
if corpus is not None:
self.add_corpus(corpus)
def __len__(self):
"""Get length of corpus."""
return len(self.corpus)
def __str__(self):
base = f"Similarity: {self.__class__.__name__}, matching_model: Word2Vec"
if self.corpus:
base += f", corpus size: {len(self.corpus)}"
return base
def add_corpus(self, corpus: Union[List[str], Dict[str, str]]):
"""
Extend the corpus with new documents.
Parameters
----------
corpus : list of str
"""
corpus_new = {}
start_id = len(self.corpus) if self.corpus else 0
if isinstance(corpus, list):
corpus = list(set(corpus))
for id, doc in enumerate(corpus):
if doc not in list(self.corpus.values()):
corpus_new[start_id + id] = doc
else:
for id, doc in corpus.items():
if doc not in list(self.corpus.values()):
corpus_new[id] = doc
self.corpus.update(corpus_new)
self.corpus_ids_map = {i: id for i, id in enumerate(list(self.corpus.keys()))}
logger.info(f"Start computing corpus embeddings, new docs: {len(corpus_new)}")
corpus_texts = list(corpus_new.values())
corpus_embeddings = self._get_vector(corpus_texts, show_progress_bar=True).tolist()
if self.corpus_embeddings:
self.corpus_embeddings += corpus_embeddings
else:
self.corpus_embeddings = corpus_embeddings
logger.info(f"Add {len(corpus)} docs, total: {len(self.corpus)}, emb size: {len(self.corpus_embeddings)}")
def _get_vector(self, text, show_progress_bar: bool = False) -> np.ndarray:
return self.keyedvectors.encode(text, show_progress_bar=show_progress_bar)
def similarity(self, a: Union[str, List[str]], b: Union[str, List[str]]):
"""Compute cosine similarity between two texts."""
v1 = self._get_vector(a)
v2 = self._get_vector(b)
return cos_sim(v1, v2)
def distance(self, a: Union[str, List[str]], b: Union[str, List[str]]):
"""Compute cosine distance between two texts."""
return 1 - self.similarity(a, b)
def most_similar(self, queries: Union[str, List[str], Dict[str, str]], topn: int = 10):
"""
Find the topn most similar texts to the query against the corpus.
:param queries: list of str or str
:param topn: int
:return: list of list of tuples (corpus_id, corpus_text, similarity_score)
"""
if isinstance(queries, str) or not hasattr(queries, '__len__'):
queries = [queries]
if isinstance(queries, list):
queries = {id: query for id, query in enumerate(queries)}
result = {qid: {} for qid, query in queries.items()}
queries_ids_map = {i: id for i, id in enumerate(list(queries.keys()))}
queries_texts = list(queries.values())
queries_embeddings = np.array([self._get_vector(query) for query in queries_texts], dtype=np.float32)
corpus_embeddings = np.array(self.corpus_embeddings, dtype=np.float32)
all_hits = semantic_search(queries_embeddings, corpus_embeddings, top_k=topn)
for idx, hits in enumerate(all_hits):
for hit in hits[0:topn]:
result[queries_ids_map[idx]][self.corpus_ids_map[hit['corpus_id']]] = hit['score']
return result
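`cos_sim` used throughout comes from `similarities.utils.util`. For 1-D inputs it reduces to the standard cosine formula, sketched here in pure Python:

```python
import math

def cos_sim(v1, v2):
    """Cosine similarity of two equal-length vectors: dot(v1, v2) / (|v1| * |v2|)."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    return dot / (n1 * n2)

# Two unit-length-ish vectors sharing one of two non-zero axes.
score = cos_sim([1.0, 0.0, 1.0], [1.0, 1.0, 0.0])
```
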
class CilinSimilarity(SimilarityABC):
"""
Compute Cilin similarity between two sentences and retrieves most
similar sentence for a given corpus.
"""
default_cilin_path = os.path.join(pwd_path, 'data/cilin.txt')
def __init__(self, corpus: Union[List[str], Dict[str, str]] = None, cilin_path: str = default_cilin_path):
super().__init__()
self.cilin_dict = self.load_cilin_dict(cilin_path) # Cilin(词林) semantic dictionary
self.corpus = {}
self.corpus_ids_map = {}
if corpus is not None:
self.add_corpus(corpus)
def __len__(self):
"""Get length of corpus."""
return len(self.corpus)
def __str__(self):
base = f"Similarity: {self.__class__.__name__}, matching_model: Cilin"
if self.corpus:
base += f", corpus size: {len(self.corpus)}"
return base
def add_corpus(self, corpus: Union[List[str], Dict[str, str]]):
"""
Extend the corpus with new documents.
Parameters
----------
corpus : list of str
"""
corpus_new = {}
start_id = len(self.corpus) if self.corpus else 0
if isinstance(corpus, list):
corpus = list(set(corpus))
for id, doc in enumerate(corpus):
if doc not in list(self.corpus.values()):
corpus_new[start_id + id] = doc
else:
for id, doc in corpus.items():
if doc not in list(self.corpus.values()):
corpus_new[id] = doc
self.corpus.update(corpus_new)
self.corpus_ids_map = {i: id for i, id in enumerate(list(self.corpus.keys()))}
logger.info(f"Start add new docs: {len(corpus_new)}")
logger.info(f"Add {len(corpus)} docs, total: {len(self.corpus)}")
@staticmethod
def load_cilin_dict(path):
"""加载词林语义词典"""
sem_dict = {}
for line in open(path, 'r', encoding='utf-8'):
line = line.strip()
terms = line.split(' ')
sem_type = terms[0]
words = terms[1:]
for word in words:
if word not in sem_dict:
sem_dict[word] = sem_type
else:
sem_dict[word] += ';' + sem_type
for word, sem_type in sem_dict.items():
sem_dict[word] = sem_type.split(';')
return sem_dict
def _word_sim(self, word1, word2):
"""
        Compute the similarity between two words, taking the max over all sense-code pairs.
:param word1:
:param word2:
:return:
"""
sems_word1 = self.cilin_dict.get(word1, [])
sems_word2 = self.cilin_dict.get(word2, [])
score_list = [self._semantic_sim(sem_word1, sem_word2) for sem_word1 in sems_word1 for sem_word2 in sems_word2]
if score_list:
return max(score_list)
else:
return 0
def _semantic_sim(self, sem1, sem2):
"""
        Compute word similarity from Cilin semantic codes.
:param sem1:
:param sem2:
:return:
"""
sem1 = [sem1[0], sem1[1], sem1[2:4], sem1[4], sem1[5:7], sem1[-1]]
sem2 = [sem2[0], sem2[1], sem2[2:4], sem2[4], sem2[5:7], sem2[-1]]
score = 0
for index in range(len(sem1)):
if sem1[index] == sem2[index]:
if index in [0, 1]:
score += 3
elif index == 2:
score += 2
elif index in [3, 4]:
score += 1
return score / 10
def similarity(self, a: Union[str, List[str]], b: Union[str, List[str]]):
"""
Compute Cilin similarity between two texts.
:param a:
:param b:
:return:
"""
if isinstance(a, str):
a = [a]
if isinstance(b, str):
b = [b]
if len(a) != len(b):
raise ValueError("expected two inputs of the same length")
def calc_pair_sim(sentence1, sentence2):
words1 = [word.word for word in jieba.posseg.cut(sentence1) if word.flag[0] not in ['u', 'x', 'w']]
words2 = [word.word for word in jieba.posseg.cut(sentence2) if word.flag[0] not in ['u', 'x', 'w']]
score_words1 = []
score_words2 = []
for word1 in words1:
score = max(self._word_sim(word1, word2) for word2 in words2)
score_words1.append(score)
for word2 in words2:
score = max(self._word_sim(word2, word1) for word1 in words1)
score_words2.append(score)
similarity_score = max(sum(score_words1) / len(words1), sum(score_words2) / len(words2))
return similarity_score
return [calc_pair_sim(sentence1, sentence2) for sentence1, sentence2 in zip(a, b)]
def distance(self, a: Union[str, List[str]], b: Union[str, List[str]]):
"""Compute cosine distance between two texts."""
return [1 - s for s in self.similarity(a, b)]
def most_similar(self, queries: Union[str, List[str], Dict[str, str]], topn: int = 10):
"""Find the topn most similar texts to the query against the corpus."""
if isinstance(queries, str) or not hasattr(queries, '__len__'):
queries = [queries]
if isinstance(queries, list):
queries = {id: query for id, query in enumerate(queries)}
result = {qid: {} for qid, query in queries.items()}
for qid, query in queries.items():
q_res = []
for corpus_id, doc in self.corpus.items():
score = self.similarity(query, doc)[0]
q_res.append((corpus_id, score))
q_res.sort(key=lambda x: x[1], reverse=True)
q_res = q_res[:topn]
for corpus_id, score in q_res:
result[qid][corpus_id] = score
return result

class HownetSimilarity(SimilarityABC):
    """
    Compute Hownet similarity between two sentences and retrieve the most
    similar sentences from a given corpus.
    """
    default_hownet_path = os.path.join(pwd_path, 'data/hownet.txt')

    def __init__(self, corpus: Union[List[str], Dict[str, str]] = None, hownet_path: str = default_hownet_path):
        self.hownet_dict = self.load_hownet_dict(hownet_path)  # semantic dictionary
        self.corpus = {}
        self.corpus_ids_map = {}
        if corpus is not None:
            self.add_corpus(corpus)

    def __len__(self):
        """Get length of corpus."""
        return len(self.corpus)

    def __str__(self):
        base = f"Similarity: {self.__class__.__name__}, matching_model: Hownet"
        if self.corpus:
            base += f", corpus size: {len(self.corpus)}"
        return base

    def add_corpus(self, corpus: Union[List[str], Dict[str, str]]):
        """
        Extend the corpus with new documents.

        Parameters
        ----------
        corpus : list of str, or dict mapping document id to str
        """
        corpus_new = {}
        start_id = len(self.corpus) if self.corpus else 0
        if isinstance(corpus, list):
            corpus = list(set(corpus))
            for doc_id, doc in enumerate(corpus):
                if doc not in list(self.corpus.values()):
                    corpus_new[start_id + doc_id] = doc
        else:
            for doc_id, doc in corpus.items():
                if doc not in list(self.corpus.values()):
                    corpus_new[doc_id] = doc
        self.corpus.update(corpus_new)
        self.corpus_ids_map = {i: doc_id for i, doc_id in enumerate(list(self.corpus.keys()))}
        logger.info(f"Start add new docs: {len(corpus_new)}")
        logger.info(f"Add {len(corpus)} docs, total: {len(self.corpus)}")

    @staticmethod
    def load_hownet_dict(path):
        """Load the HowNet semantic dictionary (word -> list of sememes)."""
        hownet_dict = {}
        with open(path, 'r', encoding='utf-8') as f:
            for line in f:
                words = [word for word in line.strip().replace(' ', '>').replace('\t', '>').split('>') if word != '']
                word = words[0]
                word_def = words[2]
                hownet_dict[word] = word_def.split(',')
        return hownet_dict

    def _semantic_sim(self, sem1, sem2):
        """Jaccard similarity between two sememe sets."""
        sem_inter = set(sem1).intersection(set(sem2))
        sem_union = set(sem1).union(set(sem2))
        return float(len(sem_inter)) / float(len(sem_union))

    def _word_sim(self, word1, word2):
        """Similarity of two words: the best score over all pairs of their sememe sets."""
        sems_word1 = self.hownet_dict.get(word1, [])
        sems_word2 = self.hownet_dict.get(word2, [])
        scores = [self._semantic_sim(sem1, sem2) for sem1 in sems_word1 for sem2 in sems_word2]
        return max(scores) if scores else 0
    def similarity(self, a: Union[str, List[str]], b: Union[str, List[str]]):
        """
        Compute Hownet similarity between two texts.

        :param a: first text or list of texts
        :param b: second text or list of texts, same length as ``a``
        :return: list of similarity scores, one per pair
        """
        if isinstance(a, str):
            a = [a]
        if isinstance(b, str):
            b = [b]
        if len(a) != len(b):
            raise ValueError("expected two inputs of the same length")

        def calc_pair_sim(sentence1, sentence2):
            # Keep content words only; drop auxiliaries (u), non-words (x) and punctuation (w).
            words1 = [word.word for word in jieba.posseg.cut(sentence1) if word.flag[0] not in ['u', 'x', 'w']]
            words2 = [word.word for word in jieba.posseg.cut(sentence2) if word.flag[0] not in ['u', 'x', 'w']]
            if not words1 or not words2:
                # Guard against max() on an empty sequence and division by zero.
                return 0.0
            score_words1 = []
            score_words2 = []
            for word1 in words1:
                score = max(self._word_sim(word1, word2) for word2 in words2)
                score_words1.append(score)
            for word2 in words2:
                score = max(self._word_sim(word2, word1) for word1 in words1)
                score_words2.append(score)
            similarity_score = max(sum(score_words1) / len(words1), sum(score_words2) / len(words2))
            return similarity_score

        return [calc_pair_sim(sentence1, sentence2) for sentence1, sentence2 in zip(a, b)]

    def distance(self, a: Union[str, List[str]], b: Union[str, List[str]]):
        """Compute Hownet distance between two texts."""
        return [1 - s for s in self.similarity(a, b)]

    def most_similar(self, queries: Union[str, List[str], Dict[str, str]], topn: int = 10):
        """Find the topn most similar texts to the query against the corpus."""
        if isinstance(queries, str) or not hasattr(queries, '__len__'):
            queries = [queries]
        if isinstance(queries, list):
            queries = {qid: query for qid, query in enumerate(queries)}
        result = {qid: {} for qid, query in queries.items()}
        for qid, query in queries.items():
            q_res = []
            for corpus_id, doc in self.corpus.items():
                score = self.similarity(query, doc)[0]
                q_res.append((corpus_id, score))
            q_res.sort(key=lambda x: x[1], reverse=True)
            q_res = q_res[:topn]
            for corpus_id, score in q_res:
                result[qid][corpus_id] = score
        return result
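The `_semantic_sim` step above is a plain Jaccard index over two sememe collections. A standalone sketch of that step (independent of the HowNet dictionary and of jieba; the function name `jaccard_sim` is illustrative, not part of the original class):

```python
def jaccard_sim(sem1, sem2):
    """Jaccard overlap of two sememe collections, mirroring _semantic_sim."""
    set1, set2 = set(sem1), set(sem2)
    if not set1 and not set2:
        return 0.0  # extra guard for the fully-empty case
    return len(set1 & set2) / len(set1 | set2)

# One shared sememe ('pet') out of three distinct sememes overall.
print(jaccard_sim(['animal', 'pet'], ['pet', 'plant']))
```

Because `_word_sim` takes the maximum of this score over all sememe-set pairs, a single strongly shared sememe set is enough to make two words similar.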

# File: Lectures/UAV_control/mavsim_python_chap5_model_coef.py (repo: donnel2-cooper/drone_control, license: MIT)
import numpy as np
x_trim = np.array([[-0.000000, -0.000000, -100.000000, 24.971443, 0.000000, 1.194576, 0.993827, 0.000000, 0.110938, 0.000000, 0.000000, 0.000000, 0.000000]]).T
u_trim = np.array([[-0.118662, 0.009775, -0.001611, 0.857721]]).T
Va_trim = 25.000000
alpha_trim = 0.047801
theta_trim = 0.222334
a_phi1 = 22.628851
a_phi2 = 130.883678
a_theta1 = 5.294738
a_theta2 = 99.947422
a_theta3 = -36.112390
a_V1 = 0.052559
a_V2 = 10.698657
a_V3 = 9.651117
A_lon = np.array([
[-0.220972, 0.483662, -1.165980, -9.547814, -0.000000],
[-0.552148, -4.463072, 24.373686, -2.208720, -0.000000],
[0.191108, -3.993407, -5.294738, 0.000000, -0.000000],
[0.000000, 0.000000, 0.999878, 0.000000, -0.000000],
[0.220506, -0.975386, -0.000000, 24.598080, 0.000000]])
B_lon = np.array([
[-0.144115, 10.698657],
[-2.585871, 0.000000],
[-36.112390, 0.000000],
[0.000000, 0.000000],
[-0.000000, -0.000000]])
A_lat = np.array([
[-0.776773, 1.194576, -24.971443, 9.558619, 0.000000],
[-3.866705, -22.628851, 10.905041, 0.000000, 0.000000],
[0.783078, -0.115092, -1.227655, 0.000000, 0.000000],
[0.000000, 1.000000, 0.226071, 0.000000, 0.000000],
[0.000000, -0.000000, 1.025235, 0.000000, 0.000000]])
B_lat = np.array([
[1.486172, 3.764969],
[130.883678, -1.796374],
[5.011735, -24.881341],
[0.000000, 0.000000],
[0.000000, 0.000000]])
Ts = 0.020000
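A quick way to sanity-check a linearization like the one above is to inspect the eigenvalues of the state matrix; a minimal sketch (the matrix is copied from `A_lon` above; note the all-zero final column means one eigenvalue is exactly zero, corresponding to the altitude state):

```python
import numpy as np

# Longitudinal state matrix, copied from the trim linearization above.
A_lon = np.array([
    [-0.220972, 0.483662, -1.165980, -9.547814, -0.000000],
    [-0.552148, -4.463072, 24.373686, -2.208720, -0.000000],
    [0.191108, -3.993407, -5.294738, 0.000000, -0.000000],
    [0.000000, 0.000000, 0.999878, 0.000000, -0.000000],
    [0.220506, -0.975386, -0.000000, 24.598080, 0.000000]])

eigvals = np.linalg.eigvals(A_lon)
# The eigenvalue sum equals trace(A_lon), a basic consistency check;
# complex-conjugate pairs correspond to oscillatory longitudinal modes.
print(sorted(eigvals, key=lambda ev: ev.real))
```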

# File: conditional_mutual_information/data/func_def.py (repo: martinmamql/conditional_infonce, license: MIT)
import numpy as np
def same(x):
    return x

def square(x):
    return np.square(x)

def cube(x):
    return np.power(x, 3)

def tanh(x):
    return np.tanh(x)

def negexp(x):
    return np.exp(-np.abs(x))

def cosine(x):
    return np.cos(x)
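These are element-wise transforms, presumably used to impose different nonlinear relationships on generated data. Assuming the definitions above, a quick check of two of them on a sample array (the functions are re-declared here so the snippet is self-contained):

```python
import numpy as np

def cube(x):
    return np.power(x, 3)

def negexp(x):
    return np.exp(-np.abs(x))

x = np.array([-2.0, 0.0, 2.0])
print(cube(x))    # element-wise cube: [-8. 0. 8.]
print(negexp(x))  # symmetric decay in |x|, peaking at 1.0 for x == 0
```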

# File: locale/pot/api/examples/_autosummary/pyvista-demos-demos-plot_glyphs-1.py (repo: tkoyama010/pyvista-doc-translations, license: MIT)
from pyvista import demos
demos.plot_glyphs()

# File: exercises/leap/leap.py (repo: wonhyeongseo/python, license: MIT)
def is_leap_year(year):
    pass
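The stub above is an exercise skeleton; a common reference solution (kept separate here so the stub stays unchanged) applies the Gregorian rule, divisible by 4, except centuries, unless divisible by 400:

```python
def is_leap_year(year):
    """Gregorian leap-year rule."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print([y for y in (1900, 1996, 2000, 2019, 2020) if is_leap_year(y)])  # [1996, 2000, 2020]
```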

# File: tests/utils_test.py (repo: leonhard-s/Py2DM, license: MIT)
"""Test cases for the py2dm.utils sub module."""
import os
import unittest
import shutil
import tempfile
from typing import Tuple
import py2dm # pylint: disable=import-error

class TestTriangleConverter(unittest.TestCase):
    """Tests for the py2dm.utils.triangle_to_2dm method."""

    _PATH = os.path.join('tests', 'data', 'triangle')

    def get_triangle_output(self, filename: str) -> Tuple[str, str]:
        """Return the pregenerated output for a given test file."""
        filename, _ = os.path.splitext(filename)
        output = os.path.join(self._PATH, 'output', f'{filename}.1')
        return f'{output}.node', f'{output}.ele'

    def test_spiral(self) -> None:
        node, ele = self.get_triangle_output('spiral.node')
        with tempfile.TemporaryDirectory() as temp_dir:
            output = os.path.join(temp_dir, 'spiral.2dm')
            py2dm.utils.triangle_to_2dm(node, ele, output)
            with py2dm.Reader(output) as mesh:
                self.assertEqual(
                    mesh.materials_per_element, 0,
                    'bad material count')
                self.assertEqual(
                    mesh.num_elements, 33,
                    'bad element count')
                self.assertEqual(
                    mesh.num_nodes, 26,
                    'bad node count')
                self.assertEqual(
                    mesh.num_node_strings, 0,
                    'bad node string count')
                self.assertEqual(
                    mesh.node(2),
                    py2dm.Node(2, -0.416, 0.909, 0.0),
                    'bad node returned')
                self.assertEqual(
                    mesh.element(3),
                    py2dm.Element3T(3, 16, 21, 23),
                    'bad element returned')

    def test_spiral_e6t(self) -> None:
        node, ele = self.get_triangle_output('spiral_e6t.node')
        with tempfile.TemporaryDirectory() as temp_dir:
            output = os.path.join(temp_dir, 'spiral_e6t.2dm')
            py2dm.utils.triangle_to_2dm(node, ele, output)
            with py2dm.Reader(output) as mesh:
                self.assertEqual(
                    mesh.materials_per_element, 0,
                    'bad material count')
                self.assertEqual(
                    mesh.num_elements, 33,
                    'bad element count')
                self.assertEqual(
                    mesh.num_nodes, 84,
                    'bad node count')
                self.assertEqual(
                    mesh.num_node_strings, 0,
                    'bad node string count')
                self.assertEqual(
                    mesh.node(73),
                    py2dm.Node(73, 1.76, 3.19, 0.0),
                    'bad node returned')
                self.assertEqual(
                    mesh.element(20),
                    # NOTE: Element6T node order adjusted to account for the
                    # node ordering differences between 2DM and Triangle,
                    # namely 1-6-2-4-3-5
                    py2dm.Element6T(20, 15, 70, 16, 35, 23, 71),
                    'bad element returned')

class UnsortedIdConverter(unittest.TestCase):
    """Test cases for the `convert_unsorted_nodes` parser."""

    _DATA_DIR = os.path.join('tests', 'data', 'external', 'mdal')
    _temp_dir: tempfile.TemporaryDirectory  # type: ignore

    def setUp(self) -> None:
        super().setUp()
        self._temp_dir = tempfile.TemporaryDirectory()

    def tearDown(self) -> None:
        super().tearDown()
        self._temp_dir.cleanup()  # type: ignore

    @classmethod
    def data(cls, filename: str) -> str:
        """Return an absolute path to a synthetic test file."""
        return os.path.abspath(
            os.path.join(cls._DATA_DIR, filename))

    def convert(self, filename: str) -> str:
        """Convert an input file and open the converted copy."""
        # Copy input to temporary directory
        in_path = os.path.join(self._temp_dir.name, filename)  # type: ignore
        shutil.copy(self.data(filename), in_path)
        # Convert
        py2dm.utils.convert_unsorted_nodes(in_path)
        # Return converted file's path
        basename, ext = os.path.splitext(filename)
        return os.path.join(self._temp_dir.name,  # type: ignore
                            f'{basename}_converted{ext}')

    def test_triangle_e6t(self) -> None:
        path = self.convert('triangleE6T.2dm')
        with self.assertRaises(py2dm.errors.FormatError):
            py2dm.Reader(path).open()

    def test_unordered_ids(self) -> None:
        path = self.convert('unordered_ids.2dm')
        py2dm.Reader(path).open()

class RandomIdConverter(unittest.TestCase):
    """Test cases for the `convert_random_nodes` parser."""

    _DATA_DIR = os.path.join('tests', 'data', 'external', 'mdal')
    _temp_dir: tempfile.TemporaryDirectory  # type: ignore

    def setUp(self) -> None:
        super().setUp()
        self._temp_dir = tempfile.TemporaryDirectory()

    def tearDown(self) -> None:
        super().tearDown()
        self._temp_dir.cleanup()  # type: ignore

    @classmethod
    def data(cls, filename: str) -> str:
        """Return an absolute path to a synthetic test file."""
        return os.path.abspath(
            os.path.join(cls._DATA_DIR, filename))

    def convert(self, filename: str) -> str:
        """Convert an input file and open the converted copy."""
        # Copy input to temporary directory
        in_path = os.path.join(self._temp_dir.name, filename)  # type: ignore
        shutil.copy(self.data(filename), in_path)
        # Convert
        py2dm.utils.convert_random_nodes(in_path)
        # Return converted file's path
        basename, ext = os.path.splitext(filename)
        return os.path.join(self._temp_dir.name,  # type: ignore
                            f'{basename}_converted{ext}')

    def test_triangle_e6t(self) -> None:
        path = self.convert('triangleE6T.2dm')
        with py2dm.Reader(path) as mesh:
            self.assertEqual(mesh.num_elements, 6)
            self.assertEqual(mesh.num_nodes, 22)
            self.assertEqual(mesh.num_node_strings, 0)
            self.assertEqual(mesh.element(1).card, 'E6T')

    def test_unordered_ids(self) -> None:
        path = self.convert('unordered_ids.2dm')
        with py2dm.Reader(path) as mesh:
            self.assertEqual(mesh.num_elements, 2)
            self.assertEqual(mesh.num_nodes, 5)
            self.assertEqual(mesh.num_node_strings, 0)
            self.assertEqual(mesh.element(1).card, 'E4Q')
            self.assertEqual(mesh.element(2).card, 'E3T')

# File: datasets.py (repo: mafanhe/pbt, license: MIT)
from tensorflow.keras.datasets import mnist
def data():
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    # Scale 8-bit pixel values from [0, 255] to [0.0, 1.0].
    x_train, x_test = x_train / 255.0, x_test / 255.0
    return (x_train, y_train), (x_test, y_test)
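The `/ 255.0` step rescales 8-bit pixel intensities into [0, 1] and promotes the array to float; the same scaling can be checked on a dummy array without downloading MNIST (the array below is a stand-in, not real data):

```python
import numpy as np

fake_images = np.array([[0, 127, 255]], dtype=np.uint8)  # stand-in for MNIST pixels
scaled = fake_images / 255.0  # true division promotes uint8 to float64

print(scaled.min(), scaled.max())  # 0.0 1.0
```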

# File: python/testData/resolve/multiFile/dunderAllDynamicallyBuiltInHelperFunction/DunderAllDynamicallyBuiltInHelperFunction.py (repo: Sajaki/intellij-community, license: Apache-2.0)
from pkg import *
print(bar)
<ref>

# File: tests/test_secret.py (repo: futurice/vault, license: Apache-2.0)
# encoding=utf8
import unittest
import os, sys
import trollius as asyncio
from trollius import From, Return
try:  # PY2
    from cStringIO import StringIO
except ImportError:
    from io import StringIO
from contextlib import contextmanager
from secret.secret import runner, prepare, main
from secret.project import get_project

@contextmanager
def capture(command, *args, **kwargs):
    out, sys.stdout = sys.stdout, StringIO()
    command(*args, **kwargs)
    sys.stdout.seek(0)
    yield sys.stdout.read()
    sys.stdout = out

def run_runner(*args, **kwargs):
    # to avoid event loop open/closed -issues while testing
    args = prepare()
    future = main(args, **kwargs)
    loop = asyncio.get_event_loop()
    loop.run_until_complete(future)

def datadir():
    return os.path.dirname(os.path.realpath(__file__))

class TestSecret(unittest.TestCase):
    def setUp(self):
        del sys.argv[1:]
        self.project = get_project('.secret').load()

    def project_name(self):
        return self.project['project']

    def test_cli_without_arguments(self):
        with self.assertRaises(SystemExit):
            run_runner()

    def test_put(self):
        sys.argv.append('put')
        with self.assertRaises(SystemExit):
            with capture(run_runner) as output:
                self.assertEquals(output, 'Error! No value provided.\n')

    def test_put_key(self):
        sys.argv.append('put')
        sys.argv.append('key')
        with self.assertRaises(SystemExit):
            with capture(run_runner) as output:
                self.assertEquals(output, 'Error! No value provided.\n')

    def test_put_key_value(self):
        sys.argv.append('put')
        sys.argv.append('key')
        sys.argv.append('value')
        with capture(run_runner) as output:
            self.assertEquals(output, 'Success! Wrote: {}/default/key\n'.format(self.project_name()))
        del sys.argv[1:]
        sys.argv.append('get')
        sys.argv.append('key')
        with capture(run_runner) as output:
            self.assertEquals(output, 'value\n')
        del sys.argv[1:]
        sys.argv.append('config')
        sys.argv.append('-F')
        sys.argv.append('docker')
        with capture(run_runner) as output:
            self.assertEquals(output, '-e key="value"\n')
        del sys.argv[1:]
        sys.argv.append('config')
        sys.argv.append('-F')
        sys.argv.append('json')
        with capture(run_runner) as output:
            self.assertEquals(output, '{"key": "value"}\n')
        del sys.argv[1:]
        sys.argv.append('delete')
        sys.argv.append('key')
        with capture(run_runner) as output:
            self.assertEquals(output, 'Success! Deleted: {}/default/key\n'.format(self.project_name()))
    def test_put_key_value_nonascii(self):
        sys.argv.append('put')
        sys.argv.append('key')
        sys.argv.append('val%€')
        with capture(run_runner) as output:
            self.assertEquals(output, 'Success! Wrote: {}/default/key\n'.format(self.project_name()))
        del sys.argv[1:]
        sys.argv.append('get')
        sys.argv.append('key')
        with capture(run_runner) as output:
            self.assertEquals(output, 'val%€\n')
        del sys.argv[1:]
        sys.argv.append('delete')
        sys.argv.append('key')
        with capture(run_runner) as output:
            self.assertEquals(output, 'Success! Deleted: {}/default/key\n'.format(self.project_name()))

    def test_put_key_file_nonascii(self):
        sys.argv.append('put')
        sys.argv.append('keyfile')
        sys.argv.append(os.path.join(datadir(), 'upload_nonascii.txt'))
        with capture(run_runner) as output:
            self.assertEquals(output, 'Success! Wrote: {}/default/keyfile\n'.format(self.project_name()))
        del sys.argv[1:]
        sys.argv.append('get')
        sys.argv.append('keyfile')
        with capture(run_runner) as output:
            self.assertEquals(output, 'h€llo Ä ö Å\n')
        del sys.argv[1:]
        sys.argv.append('delete')
        sys.argv.append('keyfile')
        with capture(run_runner) as output:
            self.assertEquals(output, 'Success! Deleted: {}/default/keyfile\n'.format(self.project_name()))

    def test_put_key_file(self):
        sys.argv.append('put')
        sys.argv.append('keyfile')
        sys.argv.append(os.path.join(datadir(), 'upload.txt'))
        with capture(run_runner) as output:
            self.assertEquals(output, 'Success! Wrote: {}/default/keyfile\n'.format(self.project_name()))
        del sys.argv[1:]
        sys.argv.append('get')
        sys.argv.append('keyfile')
        with capture(run_runner) as output:
            self.assertEquals(output, 'upload test\n')
        del sys.argv[1:]
        sys.argv.append('delete')
        sys.argv.append('keyfile')
        with capture(run_runner) as output:
            self.assertEquals(output, 'Success! Deleted: {}/default/keyfile\n'.format(self.project_name()))

    def test_file_listing(self):
        sys.argv.append('put')
        sys.argv.append('key')
        sys.argv.append('value')
        with capture(run_runner) as output:
            self.assertEquals(output, 'Success! Wrote: {}/default/key\n'.format(self.project_name()))
        del sys.argv[1:]
        sys.argv.append('put')
        sys.argv.append('keyfile')
        sys.argv.append(os.path.join(datadir(), 'upload.txt'))
        with capture(run_runner) as output:
            self.assertEquals(output, 'Success! Wrote: {}/default/keyfile\n'.format(self.project_name()))
        del sys.argv[1:]
        sys.argv.append('config')
        with capture(run_runner) as output:
            self.assertEquals(output, 'Key Value\nkey value\nkeyfile <FILE>\n')
        del sys.argv[1:]
        sys.argv.append('config')
        sys.argv.append('--skip-files')
        with capture(run_runner) as output:
            self.assertEquals(output, 'Key Value\nkey value\n')
        del sys.argv[1:]
        sys.argv.append('delete')
        sys.argv.append('key')
        with capture(run_runner) as output:
            self.assertEquals(output, 'Success! Deleted: {}/default/key\n'.format(self.project_name()))
        del sys.argv[1:]
        sys.argv.append('delete')
        sys.argv.append('keyfile')
        with capture(run_runner) as output:
            self.assertEquals(output, 'Success! Deleted: {}/default/keyfile\n'.format(self.project_name()))

# File: analytics/admin.py (repo: Alexmhack/django_url_shorter, license: MIT)
from django.contrib import admin
from .models import ClickAnalytic
admin.site.register(ClickAnalytic)

# File: basics/dice.py (repo: ashwinkonireddy/pythonAutomation, license: MIT)
import random
dice1 = random.randint(1,6)
dice2 = random.randint(1,6)
print("Rolling Dice...!!")
print(dice1, dice2)

# File: topnav_capo2/scripts/constants/tty_ports.py (repo: kasptom/topnav_ros_kasptom, license: Apache-2.0)
TTY_PORT_MAESTRO_DEFAULT = '/dev/ttyACM0'
TTY_PORT_MAESTRO_FALLBACK_A = '/dev/ttyACM1'
TTY_PORT_HOKUYO = '/dev/ttyACM2'
TTY_PORT_MAESTRO_FALLBACK_B = '/dev/ttyACM3'
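With a default plus fallback ports like these, a caller typically probes the candidates in order and uses the first one that exists; a small sketch (the helper name `first_available_port` is illustrative, not part of the original module):

```python
import os

TTY_PORT_MAESTRO_DEFAULT = '/dev/ttyACM0'
TTY_PORT_MAESTRO_FALLBACK_A = '/dev/ttyACM1'
TTY_PORT_MAESTRO_FALLBACK_B = '/dev/ttyACM3'

def first_available_port(candidates):
    """Return the first candidate device path that exists, else None."""
    for port in candidates:
        if os.path.exists(port):
            return port
    return None

port = first_available_port([TTY_PORT_MAESTRO_DEFAULT,
                             TTY_PORT_MAESTRO_FALLBACK_A,
                             TTY_PORT_MAESTRO_FALLBACK_B])
```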

# File: book_selling/admin.py (repo: AminovE99/BookSelling, license: MIT)
from django.contrib import admin
# Register your models here.
from book_selling.models import Book, Author, PurchaseRequest
admin.site.register(Author)
admin.site.register(Book)
admin.site.register(PurchaseRequest)

# File: pottermore/__init__.py (repo: Ricotjhe/kennnyshiwa-cogs, license: MIT)
from .pottermore import Pottermore
def setup(bot):
    bot.add_cog(Pottermore(bot))

# File: deformetrica/core/model_tools/attachments/__init__.py (repo: coolteemf/coolteemf-deformetrica, license: MIT)
from .multi_object_attachment import MultiObjectAttachment

# File: test/test_batch.py (repo: aspose-words-cloud/aspose-words-cloud-python, license: MIT)
# coding: utf-8
# -----------------------------------------------------------------------------------
# <copyright company="Aspose" file="test_batch.py">
# Copyright (c) 2021 Aspose.Words for Cloud
# </copyright>
# <summary>
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# </summary>
# -----------------------------------------------------------------------------------
import asposewordscloud.models
import os
import asposewordscloud.models.requests
from test.base_test_context import BaseTestContext
class TestBatch(BaseTestContext):
test_folder = 'DocumentElements/Bookmarks'
#
# Test for batch request
#
def test_batch(self):
filename = 'test_multi_pages.docx'
remote_name = 'TestBatchParagraphs.docx'
file = open(os.path.join(self.local_test_folder, self.local_common_folder, filename), 'rb')
remote_path = os.path.join(self.remote_test_folder, self.test_folder, remote_name)
remote_folder = os.path.join(self.remote_test_folder, self.test_folder)
paragraph = asposewordscloud.models.ParagraphInsert(text="This is a new paragraph for your document")
request0 = asposewordscloud.models.requests.UploadFileRequest(file, remote_path)
request1 = asposewordscloud.models.requests.GetParagraphsRequest(name=remote_name, node_path="sections/0", folder=remote_folder)
request2 = asposewordscloud.models.requests.GetParagraphRequest(name=remote_name, index=0, node_path="sections/0", folder=remote_folder)
request3 = asposewordscloud.models.requests.InsertParagraphRequest(name=remote_name, paragraph=paragraph, node_path="sections/0", folder=remote_folder)
request4 = asposewordscloud.models.requests.DeleteParagraphRequest(name=remote_name, index=0, node_path="", folder=remote_folder)
reportingFolder = 'DocumentActions/Reporting'
localDocumentFile = 'ReportTemplate.docx'
localDataFile = open(os.path.join(self.local_test_folder, reportingFolder + '/ReportData.json')).read()
requestReportEngineSettings = asposewordscloud.ReportEngineSettings(data_source_type='Json', data_source_name='persons')
request5 = asposewordscloud.models.requests.BuildReportOnlineRequest(
template=open(os.path.join(self.local_test_folder, reportingFolder + '/' + localDocumentFile), 'rb'),
data=localDataFile, report_engine_settings=requestReportEngineSettings)
self.words_api.upload_file(request0)
batch_request1 = asposewordscloud.models.requests.BatchRequest(request = request1)
batch_request2 = asposewordscloud.models.requests.BatchRequest(request = request2)
batch_request3 = asposewordscloud.models.requests.BatchRequest(request = request3)
batch_request4 = asposewordscloud.models.requests.BatchRequest(request = request4)
batch_request5 = asposewordscloud.models.requests.BatchRequest(request = request5)
result = self.words_api.batch(batch_request1, batch_request2, batch_request3, batch_request4, batch_request5)
self.assertEqual(len(result), 5)
self.assertIsInstance(result[0], asposewordscloud.ParagraphLinkCollectionResponse)
self.assertIsInstance(result[1], asposewordscloud.ParagraphResponse)
self.assertIsInstance(result[2], asposewordscloud.ParagraphResponse)
self.assertIsNone(result[3])
self.assertIsInstance(result[4], str)
#
# Test for a batch request without returning intermediate results
#
def test_batch_without_intermediate_results(self):
filename = 'test_multi_pages.docx'
remote_name = 'TestBatchParagraphs.docx'
file = open(os.path.join(self.local_test_folder, self.local_common_folder, filename), 'rb')
remote_path = os.path.join(self.remote_test_folder, self.test_folder, remote_name)
remote_folder = os.path.join(self.remote_test_folder, self.test_folder)
paragraph = asposewordscloud.models.ParagraphInsert(text="This is a new paragraph for your document")
request0 = asposewordscloud.models.requests.UploadFileRequest(file, remote_path)
request1 = asposewordscloud.models.requests.GetParagraphsRequest(name=remote_name, node_path="sections/0", folder=remote_folder)
request2 = asposewordscloud.models.requests.GetParagraphRequest(name=remote_name, index=0, node_path="sections/0", folder=remote_folder)
request3 = asposewordscloud.models.requests.InsertParagraphRequest(name=remote_name, paragraph=paragraph, node_path="sections/0", folder=remote_folder)
request4 = asposewordscloud.models.requests.DeleteParagraphRequest(name=remote_name, index=0, node_path="", folder=remote_folder)
reportingFolder = 'DocumentActions/Reporting'
localDocumentFile = 'ReportTemplate.docx'
localDataFile = open(os.path.join(self.local_test_folder, reportingFolder + '/ReportData.json')).read()
requestReportEngineSettings = asposewordscloud.ReportEngineSettings(data_source_type='Json', data_source_name='persons')
request5 = asposewordscloud.models.requests.BuildReportOnlineRequest(
template=open(os.path.join(self.local_test_folder, reportingFolder + '/' + localDocumentFile), 'rb'),
data=localDataFile, report_engine_settings=requestReportEngineSettings)
self.words_api.upload_file(request0)
batch_request1 = asposewordscloud.models.requests.BatchRequest(request = request1)
batch_request2 = asposewordscloud.models.requests.BatchRequest(request = request2)
batch_request3 = asposewordscloud.models.requests.BatchRequest(request = request3)
batch_request4 = asposewordscloud.models.requests.BatchRequest(request = request4)
batch_request5 = asposewordscloud.models.requests.BatchRequest(request = request5)
result = self.words_api.batch_with_options(True, batch_request1, batch_request2, batch_request3, batch_request4, batch_request5)
self.assertEqual(len(result), 1)
self.assertIsInstance(result[0], str)
#
# Test for a batch with the dependsOn feature
#
def test_batch_with_reversed_order(self):
filename = 'test_multi_pages.docx'
remote_name = 'test_batch_with_reversed_order.docx'
file = open(os.path.join(self.local_test_folder, self.local_common_folder, filename), 'rb')
remote_path = os.path.join(self.remote_test_folder, self.test_folder, remote_name)
remote_folder = os.path.join(self.remote_test_folder, self.test_folder)
self.upload_file(remote_folder + '/' + remote_name, file)
request1 = asposewordscloud.models.requests.GetParagraphsRequest(name=remote_name, node_path="sections/0", folder=remote_folder)
request2 = asposewordscloud.models.requests.GetParagraphRequest(name=remote_name, index=0, node_path="sections/0", folder=remote_folder)
batch_request1 = asposewordscloud.models.requests.BatchRequest(request=request1)
batch_request2 = asposewordscloud.models.requests.BatchRequest(request=request2)
batch_request1.depends_on(batch_request2)
result = self.words_api.batch(batch_request1, batch_request2)
self.assertEqual(len(result), 2)
self.assertIsInstance(result[0], asposewordscloud.models.ParagraphResponse)
self.assertIsInstance(result[1], asposewordscloud.models.ParagraphLinkCollectionResponse)
#
# Test for a batch with the resultOf feature
#
def test_batch_with_result_of(self):
filename = 'test_multi_pages.docx'
remote_name = 'test_batch_with_result_of.docx'
file = open(os.path.join(self.local_test_folder, self.local_common_folder, filename), 'rb')
remote_path = os.path.join(self.remote_test_folder, self.test_folder, remote_name)
remote_folder = os.path.join(self.remote_test_folder, self.test_folder)
self.upload_file(remote_folder + '/' + remote_name, file)
request1 = asposewordscloud.models.requests.GetDocumentWithFormatRequest(name=remote_name, format='docx', folder=remote_folder)
batch_request1 = asposewordscloud.models.requests.BatchRequest(request=request1)
request2 = asposewordscloud.models.requests.DeleteCommentsOnlineRequest(document=batch_request1.use_as_source())
batch_request2 = asposewordscloud.models.requests.BatchRequest(request=request2)
batch_request2.depends_on(batch_request1)
result = self.words_api.batch(batch_request1, batch_request2)
self.assertEqual(len(result), 2)
self.assertIsInstance(result[1], str) | 59.807453 | 159 | 0.743795 | 1,077 | 9,629 | 6.467967 | 0.192201 | 0.113695 | 0.133506 | 0.032156 | 0.748062 | 0.709589 | 0.709589 | 0.709589 | 0.709589 | 0.698823 | 0 | 0.011865 | 0.151002 | 9,629 | 161 | 160 | 59.807453 | 0.840245 | 0.15713 | 0 | 0.693878 | 0 | 0 | 0.067706 | 0.033791 | 0 | 0 | 0 | 0 | 0.132653 | 1 | 0.040816 | false | 0 | 0.05102 | 0 | 0.112245 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
ba353270b6125152dc98244558debcc4dfc147e5 | 124 | py | Python | reinvent_chemistry/library_design/aizynth/synthetic_pathway_dto.py | MolecularAI/reinvent-chemistry | bf0235bc2b1168b1db54c1e04bdba04b166ab7bf | [
"MIT"
] | null | null | null | reinvent_chemistry/library_design/aizynth/synthetic_pathway_dto.py | MolecularAI/reinvent-chemistry | bf0235bc2b1168b1db54c1e04bdba04b166ab7bf | [
"MIT"
] | null | null | null | reinvent_chemistry/library_design/aizynth/synthetic_pathway_dto.py | MolecularAI/reinvent-chemistry | bf0235bc2b1168b1db54c1e04bdba04b166ab7bf | [
"MIT"
] | 1 | 2022-03-22T15:24:13.000Z | 2022-03-22T15:24:13.000Z | from typing import List
from dataclasses import dataclass
@dataclass
class SyntheticPathwayDTO:
precursors: List[str] | 15.5 | 33 | 0.806452 | 14 | 124 | 7.142857 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153226 | 124 | 8 | 34 | 15.5 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.4 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
ba6435a43db76926253b5754706fe35d94aaa8e9 | 400 | py | Python | logical/collection.py | MichaelKim0407/logical-func | 5a6925edc4f0ca0d2379fa8558f8f61367f0259c | [
"MIT"
] | null | null | null | logical/collection.py | MichaelKim0407/logical-func | 5a6925edc4f0ca0d2379fa8558f8f61367f0259c | [
"MIT"
] | null | null | null | logical/collection.py | MichaelKim0407/logical-func | 5a6925edc4f0ca0d2379fa8558f8f61367f0259c | [
"MIT"
] | null | null | null | from ._base import (
BaseFunction as _BaseFunction,
)
class In(_BaseFunction):
def __init__(self, collection):
self.__collection = collection
def __call__(self, item):
return item in self.__collection
class Contains(_BaseFunction):
def __init__(self, item):
self.__item = item
def __call__(self, collection):
return self.__item in collection
| 20 | 40 | 0.68 | 44 | 400 | 5.545455 | 0.340909 | 0.229508 | 0.155738 | 0.188525 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.24 | 400 | 19 | 41 | 21.052632 | 0.802632 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.307692 | false | 0 | 0.076923 | 0.153846 | 0.692308 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
ba9067952231e5c9b5ea55dbf7c651e00454d8c4 | 65 | py | Python | python/python_backup/Python_Progs/PYTHON_LEGACY_PROJECTS/Path_Of_Current_Prog.py | SayanGhoshBDA/code-backup | 8b6135facc0e598e9686b2e8eb2d69dd68198b80 | [
"MIT"
] | 16 | 2018-11-26T08:39:42.000Z | 2019-05-08T10:09:52.000Z | python/python_backup/Python_Progs/PYTHON_LEGACY_PROJECTS/Path_Of_Current_Prog.py | SayanGhoshBDA/code-backup | 8b6135facc0e598e9686b2e8eb2d69dd68198b80 | [
"MIT"
] | 8 | 2020-05-04T06:29:26.000Z | 2022-02-12T05:33:16.000Z | python/python_backup/Python_Progs/PYTHON_LEGACY_PROJECTS/Path_Of_Current_Prog.py | SayanGhoshBDA/code-backup | 8b6135facc0e598e9686b2e8eb2d69dd68198b80 | [
"MIT"
] | 5 | 2020-02-11T16:02:21.000Z | 2021-02-05T07:48:30.000Z | import os
print("Current File Name :",os.path.realpath(__file__)) | 32.5 | 55 | 0.769231 | 10 | 65 | 4.6 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076923 | 65 | 2 | 55 | 32.5 | 0.766667 | 0 | 0 | 0 | 0 | 0 | 0.287879 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 5 |
ba9a19b29f60230597199d63113670afa8c06e73 | 17 | py | Python | tools/libari/libari/__init__.py | seemoo-lab/aristoteles | 092c746d2a211bb4edc1b60072487a94e8a97c99 | [
"MIT"
] | 21 | 2021-08-30T13:25:08.000Z | 2021-12-09T16:48:25.000Z | tools/libari/libari/__init__.py | seemoo-lab/aristoteles | 092c746d2a211bb4edc1b60072487a94e8a97c99 | [
"MIT"
] | 5 | 2021-09-03T20:33:37.000Z | 2021-11-28T21:01:00.000Z | tools/libari/libari/__init__.py | seemoo-lab/aristoteles | 092c746d2a211bb4edc1b60072487a94e8a97c99 | [
"MIT"
] | 3 | 2021-09-03T20:08:47.000Z | 2021-11-05T21:10:02.000Z | # libari package
| 8.5 | 16 | 0.764706 | 2 | 17 | 6.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.176471 | 17 | 1 | 17 | 17 | 0.928571 | 0.823529 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
246132e72d210d6a52d4ff4f3edb07d17953b785 | 14 | py | Python | exercises/01.python-for-everybody/chapter02/exercise01.py | Fabricio-Lopees/computer-science-learning | e8cfcd468f9fdbaa1cacf803d0dade04a99eb19a | [
"MIT"
] | null | null | null | exercises/01.python-for-everybody/chapter02/exercise01.py | Fabricio-Lopees/computer-science-learning | e8cfcd468f9fdbaa1cacf803d0dade04a99eb19a | [
"MIT"
] | null | null | null | exercises/01.python-for-everybody/chapter02/exercise01.py | Fabricio-Lopees/computer-science-learning | e8cfcd468f9fdbaa1cacf803d0dade04a99eb19a | [
"MIT"
] | null | null | null | x=5
print(x+1) | 7 | 10 | 0.642857 | 5 | 14 | 1.8 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 0.071429 | 14 | 2 | 10 | 7 | 0.538462 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
246973f6e51807e9dd7ebb6a90e8ba903b77330d | 287 | py | Python | yawhois/parser/whois_1und1_info.py | huyphan/pyyawhois | 77fb2f73a9c67989f1d41d98f37037406a69d136 | [
"MIT"
] | null | null | null | yawhois/parser/whois_1und1_info.py | huyphan/pyyawhois | 77fb2f73a9c67989f1d41d98f37037406a69d136 | [
"MIT"
] | null | null | null | yawhois/parser/whois_1und1_info.py | huyphan/pyyawhois | 77fb2f73a9c67989f1d41d98f37037406a69d136 | [
"MIT"
] | null | null | null | from ..scanner.base_icann_compliant import BaseIcannCompliantScanner
from .base_icann_compliant import BaseIcannCompliantParser
class Whois1und1InfoParser(BaseIcannCompliantParser):
_scanner = BaseIcannCompliantScanner, { "pattern_available": "Domain (.+) is not registered here."}
| 47.833333 | 103 | 0.836237 | 25 | 287 | 9.36 | 0.68 | 0.076923 | 0.153846 | 0.205128 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007663 | 0.090592 | 287 | 5 | 104 | 57.4 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0.181185 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
79fc6ec847e2745bae28a37dc14b458ac3efd6ec | 94 | py | Python | gromos2amber/__init__.py | ATB-UQ/gromos2amber | 4d1686e6443de151c94a14d32c5bc338677d97d9 | [
"MIT"
] | null | null | null | gromos2amber/__init__.py | ATB-UQ/gromos2amber | 4d1686e6443de151c94a14d32c5bc338677d97d9 | [
"MIT"
] | null | null | null | gromos2amber/__init__.py | ATB-UQ/gromos2amber | 4d1686e6443de151c94a14d32c5bc338677d97d9 | [
"MIT"
] | null | null | null |
from .Converter import convert
from .Errors import GromosFormatError, IllegalArgumentError
| 18.8 | 59 | 0.840426 | 9 | 94 | 8.777778 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12766 | 94 | 4 | 60 | 23.5 | 0.963415 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
0373fd146162b67695cb706a4555bab894724196 | 110 | py | Python | vigir_flexbe_states/src/vigir_flexbe_states/proxy/__init__.py | team-vigir/vigir_behaviors | 6696e7b7aadb24bb5495475065cc7b10d80b7db4 | [
"BSD-3-Clause"
] | 5 | 2015-08-25T18:47:52.000Z | 2019-12-04T21:40:28.000Z | vigir_flexbe_states/src/vigir_flexbe_states/proxy/__init__.py | team-vigir/vigir_behaviors | 6696e7b7aadb24bb5495475065cc7b10d80b7db4 | [
"BSD-3-Clause"
] | 2 | 2017-08-16T16:09:47.000Z | 2020-08-18T17:25:22.000Z | vigir_flexbe_states/src/vigir_flexbe_states/proxy/__init__.py | team-vigir/vigir_behaviors | 6696e7b7aadb24bb5495475065cc7b10d80b7db4 | [
"BSD-3-Clause"
] | 5 | 2015-11-06T21:57:37.000Z | 2022-03-30T10:15:57.000Z | import roslib; roslib.load_manifest('vigir_flexbe_states')
from proxy_moveit_client import ProxyMoveitClient
| 27.5 | 58 | 0.872727 | 14 | 110 | 6.5 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.072727 | 110 | 3 | 59 | 36.666667 | 0.892157 | 0 | 0 | 0 | 0 | 0 | 0.172727 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
cee63db52fbd34b74083ced6ba2e1f2b683bf73d | 54 | py | Python | casbin/persist/__init__.py | dwang7/pycasbin | c0d537dc8613146555fc03fa23b503b39260385e | [
"Apache-2.0"
] | null | null | null | casbin/persist/__init__.py | dwang7/pycasbin | c0d537dc8613146555fc03fa23b503b39260385e | [
"Apache-2.0"
] | null | null | null | casbin/persist/__init__.py | dwang7/pycasbin | c0d537dc8613146555fc03fa23b503b39260385e | [
"Apache-2.0"
] | null | null | null | from .adapter import *
from .adapter_filtered import * | 27 | 31 | 0.796296 | 7 | 54 | 6 | 0.571429 | 0.52381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12963 | 54 | 2 | 31 | 27 | 0.893617 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
cef385a6b7a2905ecfadf00112c9d9e406d3b98b | 66 | py | Python | pyexlatex/figure/models/__init__.py | whoopnip/py-ex-latex | 66f5fadc35a0bfdce5f1ccb3c80dce8885b061b6 | [
"MIT"
] | 4 | 2020-06-08T07:17:12.000Z | 2021-11-04T21:39:52.000Z | pyexlatex/figure/models/__init__.py | nickderobertis/py-ex-latex | 66f5fadc35a0bfdce5f1ccb3c80dce8885b061b6 | [
"MIT"
] | 24 | 2020-02-17T17:20:44.000Z | 2021-12-20T00:10:19.000Z | pyexlatex/figure/models/__init__.py | nickderobertis/py-ex-latex | 66f5fadc35a0bfdce5f1ccb3c80dce8885b061b6 | [
"MIT"
] | null | null | null | from .figure import Figure, Subfigure
from .graphic import Graphic | 33 | 37 | 0.833333 | 9 | 66 | 6.111111 | 0.555556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 66 | 2 | 38 | 33 | 0.948276 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
cef5e2877c1fc83a5baa59e95ebcc2ed9a805d87 | 45 | py | Python | pylss/__main__.py | saptarshibasu15/py-ls | a0809d78edb6356c07d1db12d34a81e6f39427e4 | [
"MIT"
] | 1 | 2020-11-26T05:17:44.000Z | 2020-11-26T05:17:44.000Z | pylss/__main__.py | saptarshibasu15/pylss | a0809d78edb6356c07d1db12d34a81e6f39427e4 | [
"MIT"
] | null | null | null | pylss/__main__.py | saptarshibasu15/pylss | a0809d78edb6356c07d1db12d34a81e6f39427e4 | [
"MIT"
] | null | null | null | from __init__ import getDirList
getDirList() | 15 | 31 | 0.844444 | 5 | 45 | 6.8 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 45 | 3 | 32 | 15 | 0.85 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
300499f62b244105a35af859f85204fad3cba2d3 | 35 | py | Python | smpy/__init__.py | canbatuhan/smpy | 36bc0a3bad3cfa77b6d15316ae8cdd39eee3721e | [
"MIT"
] | 1 | 2022-02-22T17:21:17.000Z | 2022-02-22T17:21:17.000Z | smpy/__init__.py | canbatuhan/smpy | 36bc0a3bad3cfa77b6d15316ae8cdd39eee3721e | [
"MIT"
] | null | null | null | smpy/__init__.py | canbatuhan/smpy | 36bc0a3bad3cfa77b6d15316ae8cdd39eee3721e | [
"MIT"
] | null | null | null | from .fsm import FiniteStateMachine | 35 | 35 | 0.885714 | 4 | 35 | 7.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085714 | 35 | 1 | 35 | 35 | 0.96875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
304d1b3dc88e9dbbea6e0b76f05a0448ffa7e912 | 138 | py | Python | async_dns/request/__init__.py | gera2ld/async_dns | e82066415f334f0a6518056643b58e475abb117f | [
"MIT"
] | 59 | 2017-01-22T04:46:04.000Z | 2022-03-18T08:31:38.000Z | async_dns/request/__init__.py | gera2ld/async_dns | e82066415f334f0a6518056643b58e475abb117f | [
"MIT"
] | 26 | 2017-05-09T05:20:53.000Z | 2021-10-02T10:42:55.000Z | async_dns/request/__init__.py | gera2ld/async_dns | e82066415f334f0a6518056643b58e475abb117f | [
"MIT"
] | 21 | 2017-05-08T20:50:19.000Z | 2022-02-04T19:19:43.000Z | from .udp import Dispatcher
from .util import ConnectionPool
def clean():
ConnectionPool.destroy_all()
Dispatcher.destroy_all()
| 17.25 | 32 | 0.76087 | 16 | 138 | 6.4375 | 0.625 | 0.194175 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15942 | 138 | 7 | 33 | 19.714286 | 0.887931 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
3054af9326dd3a642556813afbd57137003acdcc | 95 | py | Python | src/rosbag_pandas/__init__.py | simontegelid/rosbag_pandas | 65383173164ab1cc4f169e6f4be077ca4f344b19 | [
"Apache-2.0"
] | null | null | null | src/rosbag_pandas/__init__.py | simontegelid/rosbag_pandas | 65383173164ab1cc4f169e6f4be077ca4f344b19 | [
"Apache-2.0"
] | null | null | null | src/rosbag_pandas/__init__.py | simontegelid/rosbag_pandas | 65383173164ab1cc4f169e6f4be077ca4f344b19 | [
"Apache-2.0"
] | 1 | 2020-01-27T14:46:30.000Z | 2020-01-27T14:46:30.000Z | from .rosbag_pandas import RosbagPandaException, bag_to_dataframe, topics_from_keys, POSTPROCS
| 47.5 | 94 | 0.884211 | 12 | 95 | 6.583333 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.073684 | 95 | 1 | 95 | 95 | 0.897727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
06280eb58b9dd22cc2b0987d166deeb0f33a550d | 33 | py | Python | PCAtomography/__init__.py | muryelgp/PCAtomography | ed385f7180b984d465095364a49259e85057eff9 | [
"MIT"
] | null | null | null | PCAtomography/__init__.py | muryelgp/PCAtomography | ed385f7180b984d465095364a49259e85057eff9 | [
"MIT"
] | null | null | null | PCAtomography/__init__.py | muryelgp/PCAtomography | ed385f7180b984d465095364a49259e85057eff9 | [
"MIT"
] | null | null | null | from .datacube import IFUcube
| 11 | 30 | 0.757576 | 4 | 33 | 6.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.212121 | 33 | 2 | 31 | 16.5 | 0.961538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
062f0f3c67530607a7689d235602a1203a8e788d | 214 | py | Python | ui/vendor/qt.py | edart76/tree | 743d14a819c057c50bd4f84ed0712768cb4ba582 | [
"Apache-2.0"
] | null | null | null | ui/vendor/qt.py | edart76/tree | 743d14a819c057c50bd4f84ed0712768cb4ba582 | [
"Apache-2.0"
] | null | null | null | ui/vendor/qt.py | edart76/tree | 743d14a819c057c50bd4f84ed0712768cb4ba582 | [
"Apache-2.0"
] | null | null | null |
"""Try to import PySide6 - fall back to PySide2"""
try:
import PySide6 as qt
from PySide6 import QtCore, QtWidgets, QtGui
except ImportError:
import PySide2 as qt
from PySide2 import QtCore, QtWidgets, QtGui
| 21.4 | 50 | 0.71028 | 30 | 214 | 5.066667 | 0.466667 | 0.171053 | 0.105263 | 0.342105 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.036364 | 0.228972 | 214 | 9 | 51 | 23.777778 | 0.884848 | 0.205607 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
0654e986b6be1b4a7a2384408a914e4805e53b0a | 82 | py | Python | {{ cookiecutter.model_name.lower()}}/src/{{cookiecutter.model_name.lower()}}/predict.py | KasperJuunge/mlcutter | 6e0126786ebe776180169f9abf0ca7ef9a6a1287 | [
"MIT"
] | 4 | 2021-03-25T09:49:10.000Z | 2021-05-25T09:05:07.000Z | {{ cookiecutter.model_name.lower()}}/src/{{cookiecutter.model_name.lower()}}/predict.py | KasperJuunge/mlcutter | 6e0126786ebe776180169f9abf0ca7ef9a6a1287 | [
"MIT"
] | 7 | 2021-03-08T09:30:44.000Z | 2021-07-05T11:51:43.000Z | {{ cookiecutter.model_name.lower()}}/src/{{cookiecutter.model_name.lower()}}/predict.py | KasperJuunge/mlcutter | 6e0126786ebe776180169f9abf0ca7ef9a6a1287 | [
"MIT"
] | 3 | 2021-03-08T06:42:17.000Z | 2021-11-30T11:11:14.000Z |
def predict():
# make prediction with model
print('Predict!')
return
| 13.666667 | 32 | 0.621951 | 9 | 82 | 5.666667 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.268293 | 82 | 5 | 33 | 16.4 | 0.85 | 0.317073 | 0 | 0 | 0 | 0 | 0.150943 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0 | 0 | 0.666667 | 0.333333 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 5 |
06ac87e3c954432c818f8007686b4f3a77d8c0a0 | 46 | py | Python | CodeUp/1527.py | chae-heechan/Algorithm_Study | 183a77e2cfe352cd82fb5e988b493082529a73dd | [
"MIT"
] | null | null | null | CodeUp/1527.py | chae-heechan/Algorithm_Study | 183a77e2cfe352cd82fb5e988b493082529a73dd | [
"MIT"
] | null | null | null | CodeUp/1527.py | chae-heechan/Algorithm_Study | 183a77e2cfe352cd82fb5e988b493082529a73dd | [
"MIT"
] | null | null | null | # 함수로 123 값 출력하기
def f():
print("123")
f() | 11.5 | 16 | 0.521739 | 9 | 46 | 2.666667 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.176471 | 0.26087 | 46 | 4 | 17 | 11.5 | 0.529412 | 0.304348 | 0 | 0 | 0 | 0 | 0.096774 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0 | 0 | 0.333333 | 0.333333 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
ebf2277e5c9f3378d3af78ce913746bb1df7d1ab | 104 | py | Python | geostore/import_dataset_keys.py | adisbladis/geostore | 79439c06b33414e1e26b3aa4b93a72fd7cbbae83 | [
"MIT"
] | 25 | 2021-05-19T08:05:07.000Z | 2022-03-14T02:48:58.000Z | geostore/import_dataset_keys.py | adisbladis/geostore | 79439c06b33414e1e26b3aa4b93a72fd7cbbae83 | [
"MIT"
] | 311 | 2021-05-17T23:04:56.000Z | 2022-03-31T10:41:44.000Z | geostore/import_dataset_keys.py | adisbladis/geostore | 79439c06b33414e1e26b3aa4b93a72fd7cbbae83 | [
"MIT"
] | 1 | 2022-01-03T05:38:32.000Z | 2022-01-03T05:38:32.000Z | ORIGINAL_KEY_KEY = "original_key"
NEW_KEY_KEY = "new_key"
TARGET_BUCKET_NAME_KEY = "target_bucket_name"
| 26 | 45 | 0.826923 | 17 | 104 | 4.411765 | 0.352941 | 0.293333 | 0.24 | 0.506667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086538 | 104 | 3 | 46 | 34.666667 | 0.789474 | 0 | 0 | 0 | 0 | 0 | 0.355769 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
ebf8df5baa7707b0e8890dfadb1d30f0370c1cc2 | 283 | py | Python | src/pyf/aggregator/plugins/__init__.py | collective/pyf.aggregator | f64b06e5f10dc7aa097317c63c510148b0c0705c | [
"Apache-2.0"
] | null | null | null | src/pyf/aggregator/plugins/__init__.py | collective/pyf.aggregator | f64b06e5f10dc7aa097317c63c510148b0c0705c | [
"Apache-2.0"
] | null | null | null | src/pyf/aggregator/plugins/__init__.py | collective/pyf.aggregator | f64b06e5f10dc7aa097317c63c510148b0c0705c | [
"Apache-2.0"
] | null | null | null | from .curated import load_curated
from .github import load_github_stats
from .version import load_version
def register_plugins(PLUGINS, settings):
PLUGINS.append(load_curated(settings))
PLUGINS.append(load_github_stats(settings))
PLUGINS.append(load_version(settings))
| 28.3 | 47 | 0.809187 | 37 | 283 | 5.945946 | 0.324324 | 0.136364 | 0.286364 | 0.340909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113074 | 283 | 9 | 48 | 31.444444 | 0.876494 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.428571 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
23539fdf992e94860fb379c263e487c816a41cd7 | 166 | py | Python | chapter3/drawable.py | yuishihara/probabilistic_robotics_implementations | 91115260cb95697f89b1413d49dd45ebe3014a53 | [
"MIT"
] | null | null | null | chapter3/drawable.py | yuishihara/probabilistic_robotics_implementations | 91115260cb95697f89b1413d49dd45ebe3014a53 | [
"MIT"
] | null | null | null | chapter3/drawable.py | yuishihara/probabilistic_robotics_implementations | 91115260cb95697f89b1413d49dd45ebe3014a53 | [
"MIT"
] | null | null | null | class Drawable(object):
def draw(self, ax):
raise NotImplementedError(
"Drawable object must implement draw(self, ax) but not implemented!!")
| 33.2 | 82 | 0.662651 | 19 | 166 | 5.789474 | 0.736842 | 0.254545 | 0.181818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.240964 | 166 | 4 | 83 | 41.5 | 0.873016 | 0 | 0 | 0 | 0 | 0 | 0.403614 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
237317924a20ded58a554cbf0db6f2fc7a08d5d8 | 53 | py | Python | data/__init__.py | aiainui/crnn-audio-classification | 1e00177c692867bb8bed8bf5ae6196c612081c57 | [
"MIT"
] | 307 | 2019-03-17T04:43:13.000Z | 2022-03-26T15:28:15.000Z | data/__init__.py | aiainui/crnn-audio-classification | 1e00177c692867bb8bed8bf5ae6196c612081c57 | [
"MIT"
] | 20 | 2019-05-16T00:38:35.000Z | 2022-03-18T03:22:45.000Z | data/__init__.py | aiainui/crnn-audio-classification | 1e00177c692867bb8bed8bf5ae6196c612081c57 | [
"MIT"
] | 76 | 2019-05-15T18:07:09.000Z | 2022-03-26T15:28:14.000Z | from .data_manager import *
from .transforms import * | 26.5 | 27 | 0.792453 | 7 | 53 | 5.857143 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.132075 | 53 | 2 | 28 | 26.5 | 0.891304 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
88d0d2ade3733e13f2b0e351bd17c903ac3deb75 | 22 | py | Python | pyomegle/__init__.py | goddessrachel/pyomegle | ac7a876f0892035473a75c20b8bfe0252882e4c6 | [
"MIT"
] | 1 | 2016-11-19T05:30:48.000Z | 2016-11-19T05:30:48.000Z | pyomegle/__init__.py | michaelmdresser/alexaomegle | 42f8011fa20ecea5040feaa96f2667dc2fb16068 | [
"MIT"
] | null | null | null | pyomegle/__init__.py | michaelmdresser/alexaomegle | 42f8011fa20ecea5040feaa96f2667dc2fb16068 | [
"MIT"
] | null | null | null | from pyomegle import * | 22 | 22 | 0.818182 | 3 | 22 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136364 | 22 | 1 | 22 | 22 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
88f8cf634988a7578e9f769c4d169fc37f42f7d2 | 268 | py | Python | CPython/bubblesort/bubbletest.py | aaronwinter/algostructure | 08a56466f6fd925040b538c331c3d39edb81d6c5 | [
"BSD-4-Clause"
] | 1 | 2015-01-04T22:40:59.000Z | 2015-01-04T22:40:59.000Z | CPython/bubblesort/bubbletest.py | aaronwinter/algostructure | 08a56466f6fd925040b538c331c3d39edb81d6c5 | [
"BSD-4-Clause"
] | null | null | null | CPython/bubblesort/bubbletest.py | aaronwinter/algostructure | 08a56466f6fd925040b538c331c3d39edb81d6c5 | [
"BSD-4-Clause"
] | null | null | null | import bubblesort
print bubblesort.sort([10,9,8,7,6,5,4,3,2,1])
print bubblesort.sort([1,2,3,4,5,6,7,8,9,10])
print bubblesort.sort([6,7,8,9,10,1,2,3,4,5])
print bubblesort.sort(12323123)
print bubblesort.sort("thisshouldn'twork") # need to implement error checking
| 29.777778 | 77 | 0.731343 | 55 | 268 | 3.563636 | 0.4 | 0.382653 | 0.484694 | 0.040816 | 0.112245 | 0 | 0 | 0 | 0 | 0 | 0 | 0.165323 | 0.074627 | 268 | 8 | 78 | 33.5 | 0.625 | 0.119403 | 0 | 0 | 0 | 0 | 0.07265 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.166667 | null | null | 0.833333 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
000cdc513fca5a187655ec764195b891e5a35f52 | 205 | py | Python | pynads/abc/__init__.py | justanr/gonads | 4cd40eae04d3e738443d1232063c762b34784462 | [
"MIT"
] | 19 | 2015-03-21T07:47:15.000Z | 2019-10-05T21:20:36.000Z | pynads/abc/__init__.py | justanr/gonads | 4cd40eae04d3e738443d1232063c762b34784462 | [
"MIT"
] | 7 | 2015-03-31T14:30:26.000Z | 2017-07-29T15:29:59.000Z | pynads/abc/__init__.py | justanr/gonads | 4cd40eae04d3e738443d1232063c762b34784462 | [
"MIT"
] | 4 | 2017-10-18T08:07:46.000Z | 2019-10-05T21:20:38.000Z | from .container import Container, ContainerABC
from .functor import Functor
from .applicative import Applicative
from .monad import Monad
from .monoid import Monoid
from .option import Option, Full, Empty
| 29.285714 | 46 | 0.82439 | 27 | 205 | 6.259259 | 0.407407 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131707 | 205 | 6 | 47 | 34.166667 | 0.949438 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
002a00e41bfd695598a3e8bc5d989361a35708b7 | 73 | py | Python | dipy/viz/__init__.py | Garyfallidis/dipy | 4341b734995d6f51ac9c16df26a7de00c46f57ef | [
"BSD-3-Clause"
] | 3 | 2015-07-31T20:43:18.000Z | 2019-07-26T13:58:07.000Z | dipy/viz/__init__.py | Garyfallidis/dipy | 4341b734995d6f51ac9c16df26a7de00c46f57ef | [
"BSD-3-Clause"
] | 9 | 2015-05-13T17:44:42.000Z | 2018-05-27T20:09:55.000Z | dipy/viz/__init__.py | Garyfallidis/dipy | 4341b734995d6f51ac9c16df26a7de00c46f57ef | [
"BSD-3-Clause"
] | 3 | 2016-08-05T22:43:16.000Z | 2017-06-23T18:35:13.000Z | # Init file for visualization package
from ._show_odfs import show_odfs
| 18.25 | 37 | 0.821918 | 11 | 73 | 5.181818 | 0.818182 | 0.280702 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.150685 | 73 | 3 | 38 | 24.333333 | 0.919355 | 0.479452 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
cc53c75ab952ff72872e012fe17f37e26f03f750 | 70 | py | Python | exp/guitest.py | zippybenjiman/PythonCraft | daeb9b4eac1530c98843bf8fc412345f58e37c97 | [
"MIT"
] | 51 | 2015-08-08T22:36:47.000Z | 2022-03-02T20:20:54.000Z | exp/guitest.py | zippybenjiman/PythonCraft | daeb9b4eac1530c98843bf8fc412345f58e37c97 | [
"MIT"
] | 3 | 2016-04-12T16:36:58.000Z | 2019-03-02T21:46:10.000Z | exp/guitest.py | zippybenjiman/PythonCraft | daeb9b4eac1530c98843bf8fc412345f58e37c97 | [
"MIT"
] | 13 | 2016-03-25T07:05:40.000Z | 2021-08-15T12:19:04.000Z | import pygame
from pygame.locals import *
import zgui2
print("hello") | 14 | 27 | 0.785714 | 10 | 70 | 5.5 | 0.7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016393 | 0.128571 | 70 | 5 | 28 | 14 | 0.885246 | 0 | 0 | 0 | 0 | 0 | 0.070423 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.75 | 0 | 0.75 | 0.25 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
cc5f19e91bc5f16eae42c45953b082b8a57f1dab | 175 | py | Python | mvlearn/construct/__init__.py | agramfort/mvlearn | 7b3b8fc09c9ecafc91cd83672576ff6ed6057fed | [
"Apache-2.0"
] | 1 | 2020-12-29T15:41:29.000Z | 2020-12-29T15:41:29.000Z | mvlearn/construct/__init__.py | agramfort/mvlearn | 7b3b8fc09c9ecafc91cd83672576ff6ed6057fed | [
"Apache-2.0"
] | null | null | null | mvlearn/construct/__init__.py | agramfort/mvlearn | 7b3b8fc09c9ecafc91cd83672576ff6ed6057fed | [
"Apache-2.0"
] | null | null | null | from .random_gaussian_projection import random_gaussian_projection
from .rsm import random_subspace_method
__all__ = ["random_gaussian_projection", "random_subspace_method"]
| 35 | 66 | 0.868571 | 21 | 175 | 6.571429 | 0.428571 | 0.304348 | 0.521739 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.074286 | 175 | 4 | 67 | 43.75 | 0.851852 | 0 | 0 | 0 | 0 | 0 | 0.274286 | 0.274286 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
cc89fa99bbd748ec11cbb4939600932beb76d07d | 119 | py | Python | app.py | 9hax/simpleticket | a9fd9103f7bef1a5dc1968154b9e7f86ef7eb70e | [
"Apache-2.0"
] | 1 | 2021-04-08T13:17:52.000Z | 2021-04-08T13:17:52.000Z | app.py | 9hax/simpleticket | a9fd9103f7bef1a5dc1968154b9e7f86ef7eb70e | [
"Apache-2.0"
] | 1 | 2021-04-10T19:52:45.000Z | 2021-04-10T19:56:30.000Z | app.py | 9hax/simpleticket | a9fd9103f7bef1a5dc1968154b9e7f86ef7eb70e | [
"Apache-2.0"
] | 1 | 2021-04-10T19:49:42.000Z | 2021-04-10T19:49:42.000Z | # app.py is only used for the Flask Dev environment and should not be used in production.
from simpleticket import app
| 39.666667 | 89 | 0.798319 | 21 | 119 | 4.52381 | 0.904762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.176471 | 119 | 2 | 90 | 59.5 | 0.969388 | 0.731092 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
ccd2e27a7803bb2024a3678d6755f771755cf784 | 131 | py | Python | tools/tests/helperBuildInfo.py | spockthegray/mantaflow | df72cf235e14ef4f3f8fac9141b5e0a8707406b3 | [
"Apache-2.0"
] | 158 | 2018-06-24T17:42:13.000Z | 2022-03-12T13:29:43.000Z | tools/tests/helperBuildInfo.py | spockthegray/mantaflow | df72cf235e14ef4f3f8fac9141b5e0a8707406b3 | [
"Apache-2.0"
] | 5 | 2018-09-05T07:30:48.000Z | 2020-07-01T08:56:28.000Z | tools/tests/helperBuildInfo.py | spockthegray/mantaflow | df72cf235e14ef4f3f8fac9141b5e0a8707406b3 | [
"Apache-2.0"
] | 35 | 2018-06-13T04:05:42.000Z | 2022-03-29T16:55:24.000Z | # Print build info
from manta import *
# note - executable should print by default, for safety, print again...
printBuildInfo()
| 16.375 | 71 | 0.732824 | 17 | 131 | 5.647059 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.183206 | 131 | 7 | 72 | 18.714286 | 0.897196 | 0.656489 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 5 |
aeb05bd791efe3887736963ef846e9fb440883b5 | 25 | py | Python | Hello_world.py | Vishal2602/profiles-rest-api | d869ce3f5bd2ad7b87655cc2fa691ccba5b9f4fa | [
"MIT"
] | null | null | null | Hello_world.py | Vishal2602/profiles-rest-api | d869ce3f5bd2ad7b87655cc2fa691ccba5b9f4fa | [
"MIT"
] | null | null | null | Hello_world.py | Vishal2602/profiles-rest-api | d869ce3f5bd2ad7b87655cc2fa691ccba5b9f4fa | [
"MIT"
] | null | null | null | print('Hello World!!1');
| 12.5 | 24 | 0.64 | 4 | 25 | 4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.043478 | 0.08 | 25 | 1 | 25 | 25 | 0.652174 | 0 | 0 | 0 | 0 | 0 | 0.56 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
aed980d1da896044428ef37dff7f2e29e5701f21 | 170 | py | Python | allauth/socialaccount/providers/xing_provider/urls.py | Fuzzwah/django-allauth | 071cbef1388bb61a563d3e41197bd5b7c26664d2 | [
"MIT"
] | null | null | null | allauth/socialaccount/providers/xing_provider/urls.py | Fuzzwah/django-allauth | 071cbef1388bb61a563d3e41197bd5b7c26664d2 | [
"MIT"
] | null | null | null | allauth/socialaccount/providers/xing_provider/urls.py | Fuzzwah/django-allauth | 071cbef1388bb61a563d3e41197bd5b7c26664d2 | [
"MIT"
] | null | null | null | from allauth.socialaccount.providers.oauth_provider.urls import default_urlpatterns
from .provider import XingProvider
urlpatterns = default_urlpatterns(XingProvider)
| 24.285714 | 83 | 0.870588 | 18 | 170 | 8.055556 | 0.611111 | 0.248276 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.082353 | 170 | 6 | 84 | 28.333333 | 0.929487 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
4e12e852230fbd1498b5a6403a610eb181b3da47 | 98 | py | Python | EEG_Lightning/dassl/engine/base/__init__.py | mcd4874/NeurIPS_competition | 4df1f222929e9824a55c9c4ae6634743391b0fe9 | [
"MIT"
] | 23 | 2021-10-14T02:31:06.000Z | 2022-01-25T16:26:44.000Z | EEG_Lightning/dassl/engine/base/__init__.py | mcd4874/NeurIPS_competition | 4df1f222929e9824a55c9c4ae6634743391b0fe9 | [
"MIT"
] | null | null | null | EEG_Lightning/dassl/engine/base/__init__.py | mcd4874/NeurIPS_competition | 4df1f222929e9824a55c9c4ae6634743391b0fe9 | [
"MIT"
] | 1 | 2022-03-05T06:54:11.000Z | 2022-03-05T06:54:11.000Z | from .base_model import BaseModel
from .FBCNet import FBCNet
from .deepconvnet import DeepConvNet
| 24.5 | 36 | 0.846939 | 13 | 98 | 6.307692 | 0.538462 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.122449 | 98 | 3 | 37 | 32.666667 | 0.953488 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
4e1305e72b64e7f8602f8ac328aca2ba3064fca2 | 103 | py | Python | _helpers.py | imallett/apparent-brightness-of-monochromatic-sources | 4ac6dae23b4edb79f319e0bd1322db4c4f7d8098 | [
"MIT"
] | null | null | null | _helpers.py | imallett/apparent-brightness-of-monochromatic-sources | 4ac6dae23b4edb79f319e0bd1322db4c4f7d8098 | [
"MIT"
] | null | null | null | _helpers.py | imallett/apparent-brightness-of-monochromatic-sources | 4ac6dae23b4edb79f319e0bd1322db4c4f7d8098 | [
"MIT"
] | null | null | null | def rndint(x):
return int(round(x))
def lerp(val0,val1, t):
return val0*(1.0-t) + val1*t
| 17.166667 | 33 | 0.572816 | 19 | 103 | 3.105263 | 0.631579 | 0.169492 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076923 | 0.242718 | 103 | 5 | 34 | 20.6 | 0.679487 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
9d91394f502cd96c457ccdc2d7af10863660a672 | 159 | py | Python | addresses/__init__.py | obezpalko/ip-map | e3befecb8a6c733daa78d8ae550857d9ff2c91c7 | [
"MIT"
] | null | null | null | addresses/__init__.py | obezpalko/ip-map | e3befecb8a6c733daa78d8ae550857d9ff2c91c7 | [
"MIT"
] | null | null | null | addresses/__init__.py | obezpalko/ip-map | e3befecb8a6c733daa78d8ae550857d9ff2c91c7 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
if __name__ == '__main__':
import amazon, IANA
print('Ok')
print(IANA.Networks().get())
print(amazon.Networks().get())
| 17.666667 | 34 | 0.616352 | 20 | 159 | 4.5 | 0.7 | 0.244444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.188679 | 159 | 8 | 35 | 19.875 | 0.697674 | 0.125786 | 0 | 0 | 0 | 0 | 0.072464 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.2 | 0 | 0.2 | 0.6 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
9da67a1f4b954a83d4ecc43622c5d74f9e8c6784 | 304 | py | Python | rubin_sim/photUtils/__init__.py | RileyWClarke/flarubin | eb7b1ee21c828523f8a5374fe4510fe6e5ec2a2a | [
"MIT"
] | null | null | null | rubin_sim/photUtils/__init__.py | RileyWClarke/flarubin | eb7b1ee21c828523f8a5374fe4510fe6e5ec2a2a | [
"MIT"
] | null | null | null | rubin_sim/photUtils/__init__.py | RileyWClarke/flarubin | eb7b1ee21c828523f8a5374fe4510fe6e5ec2a2a | [
"MIT"
] | null | null | null | from .LSSTdefaults import *
from .PhysicalParameters import *
from .Sed import *
from .Bandpass import *
from .SedUtils import *
from .BandpassDict import *
from .SedList import *
from .PhotometricParameters import *
from .SignalToNoise import *
from .CosmologyObject import *
from .BandpassSet import *
| 25.333333 | 36 | 0.782895 | 33 | 304 | 7.212121 | 0.393939 | 0.420168 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.144737 | 304 | 11 | 37 | 27.636364 | 0.915385 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.272727 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 5 |
d1791c60b815526d192e7feae56a0953bdd4bfb7 | 50 | py | Python | pynache/data/__init__.py | MrinalJain17/neural-style-transfer | c4f1c83fb7c5cb16dbd1d5da5eed584e3733f618 | [
"MIT"
] | 3 | 2020-12-30T23:52:45.000Z | 2021-08-24T21:46:24.000Z | pynache/data/__init__.py | MrinalJain17/neural-style-transfer | c4f1c83fb7c5cb16dbd1d5da5eed584e3733f618 | [
"MIT"
] | null | null | null | pynache/data/__init__.py | MrinalJain17/neural-style-transfer | c4f1c83fb7c5cb16dbd1d5da5eed584e3733f618 | [
"MIT"
] | null | null | null | from .load_inputs import load_content, load_style
| 25 | 49 | 0.86 | 8 | 50 | 5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 50 | 1 | 50 | 50 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
d19445906517d64fde6e1e1f12e2719763b51f21 | 2,310 | py | Python | grouped_by_commas/test_grouped_by_commas.py | philipwerner/code-katas | 3bdce2b5d12df612e7c8f2e2b8b5ebe16a653712 | [
"MIT"
] | null | null | null | grouped_by_commas/test_grouped_by_commas.py | philipwerner/code-katas | 3bdce2b5d12df612e7c8f2e2b8b5ebe16a653712 | [
"MIT"
] | null | null | null | grouped_by_commas/test_grouped_by_commas.py | philipwerner/code-katas | 3bdce2b5d12df612e7c8f2e2b8b5ebe16a653712 | [
"MIT"
] | null | null | null | import pytest
def test_group_by_commas():
"""Tests that commas are inserted properly."""
from grouped_by_commas import group_by_commas
n = 1
assert group_by_commas(n) == '1'
def test_group_by_commas2():
"""Tests that commas are inserted properly."""
from grouped_by_commas import group_by_commas
n = 10
assert group_by_commas(n) == '10'
def test_group_by_commas3():
"""Tests that commas are inserted properly."""
from grouped_by_commas import group_by_commas
n = 100
assert group_by_commas(n) == '100'
def test_group_by_commas4():
"""Tests that commas are inserted properly."""
from grouped_by_commas import group_by_commas
n = 1000
assert group_by_commas(n) == '1,000'
def test_group_by_commas5():
"""Tests that commas are inserted properly."""
from grouped_by_commas import group_by_commas
n = 10000
assert group_by_commas(n) == '10,000'
def test_group_by_commas6():
"""Tests that commas are inserted properly."""
from grouped_by_commas import group_by_commas
n = 100000
assert group_by_commas(n) == '100,000'
def test_group_by_commas7():
"""Tests that commas are inserted properly."""
from grouped_by_commas import group_by_commas
n = 1000000
assert group_by_commas(n) == '1,000,000'
def test_group_by_commas8():
"""Tests that commas are inserted properly."""
from grouped_by_commas import group_by_commas
n = 35235235
assert group_by_commas(n) == '35,235,235'
def test_group_by_commas9():
"""Tests that commas are inserted properly."""
from grouped_by_commas import group_by_commas
n = 789067
assert group_by_commas(n) == '789,067'
def test_group_by_commas10():
"""Tests that commas are inserted properly."""
from grouped_by_commas import group_by_commas
n = 1234567890
assert group_by_commas(n) == '1,234,567,890'
def test_group_by_commas11():
"""Tests that commas are inserted properly."""
from grouped_by_commas import group_by_commas
n = 9876543219876
assert group_by_commas(n) == '9,876,543,219,876'
def test_group_by_commas12():
"""Tests that commas are inserted properly."""
from grouped_by_commas import group_by_commas
n = 1357924680
assert group_by_commas(n) == '1,357,924,680'
| 26.25 | 52 | 0.709957 | 344 | 2,310 | 4.453488 | 0.156977 | 0.193211 | 0.212141 | 0.219321 | 0.788512 | 0.703003 | 0.603133 | 0.571802 | 0.571802 | 0.571802 | 0 | 0.088077 | 0.193939 | 2,310 | 87 | 53 | 26.551724 | 0.734694 | 0.212554 | 0 | 0.244898 | 0 | 0 | 0.052931 | 0 | 0 | 0 | 0 | 0 | 0.244898 | 1 | 0.244898 | false | 0 | 0.265306 | 0 | 0.510204 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 5 |
d1b14ac60fc30763c4a8c9165ee856ae538ff8b4 | 75 | py | Python | tornadopy/__init__.py | janwillembuist/tornadopy | f440d6885c7773acec38af814609a6e793adb61b | [
"MIT"
] | null | null | null | tornadopy/__init__.py | janwillembuist/tornadopy | f440d6885c7773acec38af814609a6e793adb61b | [
"MIT"
] | null | null | null | tornadopy/__init__.py | janwillembuist/tornadopy | f440d6885c7773acec38af814609a6e793adb61b | [
"MIT"
] | null | null | null | # Import the main function
from tornadopy.makeplot import plot, set_labels
| 25 | 47 | 0.826667 | 11 | 75 | 5.545455 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 75 | 2 | 48 | 37.5 | 0.938462 | 0.32 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
d1c4209785c20472c7d196edc64308496fe419c7 | 240 | py | Python | pelary/usecase/predictor.py | neverbeenthisweeb/predictsalary | 68cf8b424583fe0477d4dd01e36342dcfebc1f4d | [
"MIT"
] | null | null | null | pelary/usecase/predictor.py | neverbeenthisweeb/predictsalary | 68cf8b424583fe0477d4dd01e36342dcfebc1f4d | [
"MIT"
] | null | null | null | pelary/usecase/predictor.py | neverbeenthisweeb/predictsalary | 68cf8b424583fe0477d4dd01e36342dcfebc1f4d | [
"MIT"
] | null | null | null | from pelary.service.predictor import PredictorInterface
class Predictor():
def __init__(self, predictor: PredictorInterface):
self.predictor = predictor
def predict(self) -> float:
return self.predictor.predict()
| 24 | 55 | 0.720833 | 24 | 240 | 7.041667 | 0.541667 | 0.230769 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.191667 | 240 | 9 | 56 | 26.666667 | 0.871134 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.166667 | 0.166667 | 0.833333 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 5 |
d1cef7fcc9c35545c43593bda385d83b73e365b8 | 300 | py | Python | sknetwork/linkpred/__init__.py | altana-tech/scikit-network | dedc9d3e694c7106e4709aae22dffb5142c15859 | [
"BSD-3-Clause"
] | 457 | 2018-07-24T12:42:14.000Z | 2022-03-31T08:30:39.000Z | sknetwork/linkpred/__init__.py | altana-tech/scikit-network | dedc9d3e694c7106e4709aae22dffb5142c15859 | [
"BSD-3-Clause"
] | 281 | 2018-07-13T05:01:19.000Z | 2022-03-31T14:13:43.000Z | sknetwork/linkpred/__init__.py | altana-tech/scikit-network | dedc9d3e694c7106e4709aae22dffb5142c15859 | [
"BSD-3-Clause"
] | 58 | 2019-04-22T09:04:32.000Z | 2022-03-30T12:43:08.000Z | """link prediction module"""
from sknetwork.linkpred.first_order import CommonNeighbors, JaccardIndex, SaltonIndex, SorensenIndex, HubPromotedIndex,\
HubDepressedIndex, AdamicAdar, ResourceAllocation, PreferentialAttachment
from sknetwork.linkpred.postprocessing import is_edge, whitened_sigmoid
| 60 | 120 | 0.853333 | 27 | 300 | 9.37037 | 0.851852 | 0.102767 | 0.166008 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08 | 300 | 4 | 121 | 75 | 0.916667 | 0.073333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |