hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
c083353bf0265a8bcdde74e6a9ffbe91b66c25cc | 553 | py | Python | pgdrive/utils/__init__.py | decisionforce/pgdrive | 19af5d09a40a68a2a5f8b3ac8b40f109e71c26ee | [
"Apache-2.0"
] | 97 | 2020-12-25T06:02:17.000Z | 2022-01-16T06:58:39.000Z | pgdrive/utils/__init__.py | decisionforce/pgdrive | 19af5d09a40a68a2a5f8b3ac8b40f109e71c26ee | [
"Apache-2.0"
] | 192 | 2020-12-25T07:58:17.000Z | 2021-08-28T10:13:59.000Z | pgdrive/utils/__init__.py | decisionforce/pgdrive | 19af5d09a40a68a2a5f8b3ac8b40f109e71c26ee | [
"Apache-2.0"
] | 11 | 2020-12-29T11:23:44.000Z | 2021-12-06T23:25:49.000Z | from pgdrive.utils.config import Config, merge_config_with_unknown_keys, merge_config
from pgdrive.utils.coordinates_shift import panda_heading, panda_position, pgdrive_heading, pgdrive_position
from pgdrive.utils.cutils import import_cutils
from pgdrive.utils.math_utils import safe_clip, clip, norm, distance_greater, safe_clip_for_small_array, Vector
from pgdrive.utils.random_utils import get_np_random, random_string
from pgdrive.utils.utils import is_mac, import_pygame, recursive_equal, setup_logger, merge_dicts, \
concat_step_infos, is_win
| 69.125 | 111 | 0.862568 | 83 | 553 | 5.385542 | 0.493976 | 0.147651 | 0.214765 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083183 | 553 | 7 | 112 | 79 | 0.881657 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.857143 | 0 | 0.857143 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
c0884d813c49f4a3f61d2dbcd88f49881dcf8e50 | 98 | py | Python | Project/Python/project/public/__init__.py | renwei-release/dave | 773301edd3bee6e7526e0d5587ff8af9f01e288f | [
"MIT"
] | null | null | null | Project/Python/project/public/__init__.py | renwei-release/dave | 773301edd3bee6e7526e0d5587ff8af9f01e288f | [
"MIT"
] | null | null | null | Project/Python/project/public/__init__.py | renwei-release/dave | 773301edd3bee6e7526e0d5587ff8af9f01e288f | [
"MIT"
] | null | null | null | import ctypes
import struct
import os
from .auto import *
from .base import *
from .tools import * | 16.333333 | 20 | 0.765306 | 15 | 98 | 5 | 0.533333 | 0.266667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173469 | 98 | 6 | 20 | 16.333333 | 0.925926 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
c0bf403a44a580f2e583da1f68f06408454bb542 | 185 | py | Python | api/utils/images/__init__.py | BytesToBits/BytesToBits-API | bfa305e4f6ded995da95bbedc79c91ef8ce498fb | [
"MIT"
] | null | null | null | api/utils/images/__init__.py | BytesToBits/BytesToBits-API | bfa305e4f6ded995da95bbedc79c91ef8ce498fb | [
"MIT"
] | null | null | null | api/utils/images/__init__.py | BytesToBits/BytesToBits-API | bfa305e4f6ded995da95bbedc79c91ef8ce498fb | [
"MIT"
] | null | null | null | from .message_faker import make_message as DiscordMessage
from .btb_convert import btbify
from .transparent import clear_background as make_transparent
from .hue_shift import change_hue | 46.25 | 61 | 0.875676 | 27 | 185 | 5.740741 | 0.592593 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102703 | 185 | 4 | 62 | 46.25 | 0.933735 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
238bd2848e42414413b60c512f8ceeed06f77a78 | 161 | py | Python | DigitalSecurityHub/orders/admin.py | vineethsai/DigitalSecurityHub | fb3380e983d71bbd67dde19346fad274f6ed2ba8 | [
"MIT"
] | null | null | null | DigitalSecurityHub/orders/admin.py | vineethsai/DigitalSecurityHub | fb3380e983d71bbd67dde19346fad274f6ed2ba8 | [
"MIT"
] | null | null | null | DigitalSecurityHub/orders/admin.py | vineethsai/DigitalSecurityHub | fb3380e983d71bbd67dde19346fad274f6ed2ba8 | [
"MIT"
] | null | null | null | from django.contrib import admin
from orders.models import Order, LineItem
# Register your models here.
admin.site.register(Order)
admin.site.register(LineItem) | 26.833333 | 41 | 0.819876 | 23 | 161 | 5.73913 | 0.565217 | 0.136364 | 0.257576 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.099379 | 161 | 6 | 42 | 26.833333 | 0.910345 | 0.161491 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
23e16a756e40261fb9da766729672d1a54852a50 | 151 | py | Python | Curso_Python_3_UDEMY/pacotes/pacote_v4.py | DanilooSilva/Cursos_de_Python | 8f167a4c6e16f01601e23b6f107578aa1454472d | [
"MIT"
] | null | null | null | Curso_Python_3_UDEMY/pacotes/pacote_v4.py | DanilooSilva/Cursos_de_Python | 8f167a4c6e16f01601e23b6f107578aa1454472d | [
"MIT"
] | null | null | null | Curso_Python_3_UDEMY/pacotes/pacote_v4.py | DanilooSilva/Cursos_de_Python | 8f167a4c6e16f01601e23b6f107578aa1454472d | [
"MIT"
] | null | null | null | from pacotes.pacote1.modulo1 import soma
from pacotes.pacote2.modulo1 import subtracao
print('Soma ', soma(3, 2))
print('Subtração ', subtracao(3, 2)) | 30.2 | 45 | 0.761589 | 22 | 151 | 5.227273 | 0.545455 | 0.191304 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.059259 | 0.10596 | 151 | 5 | 46 | 30.2 | 0.792593 | 0 | 0 | 0 | 0 | 0 | 0.098684 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 5 |
23ec23a754736bccd13dc932c3cea50ec364d0e2 | 88 | py | Python | perception/encoders/base.py | ramaneswaran/perception | 045b85634412355d66b2db6a102a97796c9aa11f | [
"Apache-2.0"
] | 1 | 2021-04-14T10:58:13.000Z | 2021-04-14T10:58:13.000Z | perception/encoders/base.py | shivamsaraswat8/perception | 045b85634412355d66b2db6a102a97796c9aa11f | [
"Apache-2.0"
] | null | null | null | perception/encoders/base.py | shivamsaraswat8/perception | 045b85634412355d66b2db6a102a97796c9aa11f | [
"Apache-2.0"
] | 1 | 2021-04-10T18:02:45.000Z | 2021-04-10T18:02:45.000Z | from abc import ABC
class BaseEncoder(ABC):
def encode(self, image):
pass | 14.666667 | 28 | 0.647727 | 12 | 88 | 4.75 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.272727 | 88 | 6 | 29 | 14.666667 | 0.890625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0.25 | 0.25 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 5 |
f1a0f5b307a6777215d944687feda8d4f1e1e408 | 55 | py | Python | src/ssp/ml/dataset/__init__.py | gyan42/spark-streaming-playground | 147ef9cbc31b7aed242663dee36143ebf0e8043f | [
"Apache-2.0"
] | 10 | 2020-03-12T11:51:46.000Z | 2022-03-24T04:56:05.000Z | src/ssp/ml/dataset/__init__.py | gyan42/spark-streaming-playground | 147ef9cbc31b7aed242663dee36143ebf0e8043f | [
"Apache-2.0"
] | 12 | 2020-04-23T07:28:14.000Z | 2022-03-12T00:20:24.000Z | src/ssp/ml/dataset/__init__.py | gyan42/spark-streaming-playground | 147ef9cbc31b7aed242663dee36143ebf0e8043f | [
"Apache-2.0"
] | 1 | 2020-04-20T14:48:38.000Z | 2020-04-20T14:48:38.000Z | from ssp.ml.dataset.prepare_dataset import SSPMLDataset | 55 | 55 | 0.890909 | 8 | 55 | 6 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054545 | 55 | 1 | 55 | 55 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
f1bacff637bbe8e15fe212a8f1a82ec675d4428c | 353 | py | Python | test_accounts/admin.py | AllFactors/django-organizations | df079e97f8c88214bcdc0b87d2717a8b5323bc4f | [
"BSD-2-Clause"
] | 855 | 2015-01-06T21:08:34.000Z | 2022-03-31T04:24:49.000Z | test_accounts/admin.py | AllFactors/django-organizations | df079e97f8c88214bcdc0b87d2717a8b5323bc4f | [
"BSD-2-Clause"
] | 156 | 2015-02-09T01:51:40.000Z | 2022-03-29T22:23:01.000Z | test_accounts/admin.py | AllFactors/django-organizations | df079e97f8c88214bcdc0b87d2717a8b5323bc4f | [
"BSD-2-Clause"
] | 186 | 2015-01-21T06:21:59.000Z | 2022-03-29T12:44:24.000Z | from django.contrib import admin
from test_accounts.models import Account
from test_accounts.models import AccountInvitation
from test_accounts.models import AccountOwner
from test_accounts.models import AccountUser
admin.site.register(Account)
admin.site.register(AccountInvitation)
admin.site.register(AccountUser)
admin.site.register(AccountOwner)
| 29.416667 | 50 | 0.866856 | 45 | 353 | 6.711111 | 0.311111 | 0.10596 | 0.211921 | 0.291391 | 0.370861 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.073654 | 353 | 11 | 51 | 32.090909 | 0.923547 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.555556 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
9e3a37d04e9189dac290eaea4e61eb9154f0d4d8 | 156 | py | Python | opennsfw2/_typing.py | bhky/opennsfw2 | 9da0836d2d59ee85f7dde4332712d361474d799e | [
"MIT"
] | 66 | 2021-11-08T06:42:32.000Z | 2022-03-29T16:51:35.000Z | opennsfw2/_typing.py | bhky/opennsfw2 | 9da0836d2d59ee85f7dde4332712d361474d799e | [
"MIT"
] | 2 | 2021-11-10T09:37:37.000Z | 2022-01-26T00:11:37.000Z | opennsfw2/_typing.py | bhky/opennsfw2 | 9da0836d2d59ee85f7dde4332712d361474d799e | [
"MIT"
] | 10 | 2021-11-08T12:36:20.000Z | 2021-12-30T15:33:07.000Z | """
Typing utilities.
"""
import numpy as np
import numpy.typing
NDFloat32Array = np.typing.NDArray[np.float32]
NDUInt8Array = np.typing.NDArray[np.uint8]
| 17.333333 | 46 | 0.762821 | 21 | 156 | 5.666667 | 0.52381 | 0.184874 | 0.252101 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.043165 | 0.108974 | 156 | 8 | 47 | 19.5 | 0.81295 | 0.108974 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
9e5077f94bfb14abbaa805fcba9ac206e84a7226 | 107 | py | Python | imouto/__init__.py | bakalab/imouto | 01944746d4f7530a741bcb082866e18c48d07f3a | [
"BSD-3-Clause"
] | 9 | 2017-06-18T06:03:00.000Z | 2019-05-07T10:06:22.000Z | imouto/__init__.py | bakalab/imouto | 01944746d4f7530a741bcb082866e18c48d07f3a | [
"BSD-3-Clause"
] | 3 | 2017-08-05T08:01:42.000Z | 2017-12-08T01:58:33.000Z | imouto/__init__.py | bakalab/imouto | 01944746d4f7530a741bcb082866e18c48d07f3a | [
"BSD-3-Clause"
] | null | null | null | from imouto.request import Request
from imouto.response import Response
__all__ = ['Request', 'Response']
| 21.4 | 36 | 0.785047 | 13 | 107 | 6.153846 | 0.461538 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121495 | 107 | 4 | 37 | 26.75 | 0.851064 | 0 | 0 | 0 | 0 | 0 | 0.140187 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
9e69e83265ce9c985c18d2c67f7fe86732ac968d | 80 | py | Python | staticPageServer/serverCode/__init__.py | hydrogen602/simpleServer | d5cb39fd8b196fbc77899038e5fe392d433d2888 | [
"MIT"
] | null | null | null | staticPageServer/serverCode/__init__.py | hydrogen602/simpleServer | d5cb39fd8b196fbc77899038e5fe392d433d2888 | [
"MIT"
] | null | null | null | staticPageServer/serverCode/__init__.py | hydrogen602/simpleServer | d5cb39fd8b196fbc77899038e5fe392d433d2888 | [
"MIT"
] | null | null | null |
from .fileLoader import fetch # NOQA
from .functionTools import enforce # NOQA
| 20 | 41 | 0.7875 | 10 | 80 | 6.3 | 0.7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1625 | 80 | 3 | 42 | 26.666667 | 0.940299 | 0.1125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
7b3af7830688de69c4b58ce38f190fa11b1036e4 | 41 | py | Python | tests/__init__.py | sbliven/gem-painting | 368b128723240b095bf9af5274dd26a147504f11 | [
"BSD-3-Clause"
] | null | null | null | tests/__init__.py | sbliven/gem-painting | 368b128723240b095bf9af5274dd26a147504f11 | [
"BSD-3-Clause"
] | 1 | 2021-11-22T15:23:40.000Z | 2021-11-22T15:23:40.000Z | tests/__init__.py | sbliven/diamond-art | 368b128723240b095bf9af5274dd26a147504f11 | [
"BSD-3-Clause"
] | null | null | null | """Unit test package for diamond_art."""
| 20.5 | 40 | 0.707317 | 6 | 41 | 4.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121951 | 41 | 1 | 41 | 41 | 0.777778 | 0.829268 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
7b63c9dab171ee5fdeea6d357f5936c74a3691fc | 114 | py | Python | wlcsim/special/__init__.py | SpakowitzLab/BasicWLC | 13edbbc8e8cd36a3586571ff4d80880fc89d30e6 | [
"MIT"
] | 1 | 2021-03-16T01:39:18.000Z | 2021-03-16T01:39:18.000Z | wlcsim/special/__init__.py | riscalab/wlcsim | e34877ef6c5dc83c6444380dbe624b371d70faf2 | [
"MIT"
] | 17 | 2016-07-08T21:17:40.000Z | 2017-01-24T09:05:25.000Z | wlcsim/special/__init__.py | riscalab/wlcsim | e34877ef6c5dc83c6444380dbe624b371d70faf2 | [
"MIT"
] | 9 | 2017-02-19T06:28:38.000Z | 2021-11-05T22:28:08.000Z | """Hold modules that act as helpers for generating more complicated
simulations."""
from . import homolog_process
| 28.5 | 67 | 0.798246 | 15 | 114 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131579 | 114 | 3 | 68 | 38 | 0.909091 | 0.675439 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
7b63f8238a1a8fdcb4df729dd17b9bc9bd9f6719 | 125 | py | Python | verybasic/hello_world.py | nightmarebadger/tutorials-python-basic | a4c49e01bf9c9c5006239c013c81d85603dd96fd | [
"MIT"
] | null | null | null | verybasic/hello_world.py | nightmarebadger/tutorials-python-basic | a4c49e01bf9c9c5006239c013c81d85603dd96fd | [
"MIT"
] | null | null | null | verybasic/hello_world.py | nightmarebadger/tutorials-python-basic | a4c49e01bf9c9c5006239c013c81d85603dd96fd | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
A program that prints "Hello World!".
"""
if __name__ == '__main__':
print("Hello World!")
| 13.888889 | 37 | 0.568 | 15 | 125 | 4.2 | 0.866667 | 0.31746 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01 | 0.2 | 125 | 8 | 38 | 15.625 | 0.62 | 0.48 | 0 | 0 | 0 | 0 | 0.350877 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
7b6c4a833a0a09c292e4df972e01eba6a1797a13 | 117 | py | Python | soft/disable_wifi.py | Mirmik/zippo | 50097d9b33c165d8f6a8ec65b22db4b1c4e1f61c | [
"MIT"
] | null | null | null | soft/disable_wifi.py | Mirmik/zippo | 50097d9b33c165d8f6a8ec65b22db4b1c4e1f61c | [
"MIT"
] | null | null | null | soft/disable_wifi.py | Mirmik/zippo | 50097d9b33c165d8f6a8ec65b22db4b1c4e1f61c | [
"MIT"
] | null | null | null | sudo systemctl stop hostapd
sudo systemctl stop dnsmasq
sudo systemctl stop wpa_supplicant
sudo ifconfig wlan0 down
| 19.5 | 34 | 0.854701 | 17 | 117 | 5.823529 | 0.588235 | 0.393939 | 0.515152 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01 | 0.145299 | 117 | 5 | 35 | 23.4 | 0.98 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
7b7266eec594120271ed819d6beaea12f493db4a | 46 | py | Python | src/crowdai/predict.py | kristijanbartol/DeepMusicClassifier | f47295c3171e77733be5b80ddcec9790dfc3165b | [
"MIT"
] | 64 | 2017-11-23T09:43:30.000Z | 2021-12-22T12:41:53.000Z | src/crowdai/predict.py | kristijanbartol/DeepMusicClassifier | f47295c3171e77733be5b80ddcec9790dfc3165b | [
"MIT"
] | null | null | null | src/crowdai/predict.py | kristijanbartol/DeepMusicClassifier | f47295c3171e77733be5b80ddcec9790dfc3165b | [
"MIT"
] | 7 | 2018-04-11T07:29:47.000Z | 2020-04-11T21:14:13.000Z | import model
model = model.load_best_model()
| 11.5 | 31 | 0.782609 | 7 | 46 | 4.857143 | 0.571429 | 0.588235 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130435 | 46 | 3 | 32 | 15.333333 | 0.85 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
7ba25e6c18082a60f0bed540d473f37be244e248 | 48 | py | Python | tests/components/flick_electric/__init__.py | domwillcode/home-assistant | f170c80bea70c939c098b5c88320a1c789858958 | [
"Apache-2.0"
] | 30,023 | 2016-04-13T10:17:53.000Z | 2020-03-02T12:56:31.000Z | tests/components/flick_electric/__init__.py | jagadeeshvenkatesh/core | 1bd982668449815fee2105478569f8e4b5670add | [
"Apache-2.0"
] | 31,101 | 2020-03-02T13:00:16.000Z | 2022-03-31T23:57:36.000Z | tests/components/flick_electric/__init__.py | jagadeeshvenkatesh/core | 1bd982668449815fee2105478569f8e4b5670add | [
"Apache-2.0"
] | 11,956 | 2016-04-13T18:42:31.000Z | 2020-03-02T09:32:12.000Z | """Tests for the Flick Electric integration."""
| 24 | 47 | 0.729167 | 6 | 48 | 5.833333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 48 | 1 | 48 | 48 | 0.833333 | 0.854167 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
7bc0b7e85e7d1ab860f1d51242fda3ed8fb427cc | 57 | py | Python | starbucks/__init__.py | movermeyer/starbucks-py | 9eee89e2837d9950c27a6350ff891c891f8a07b6 | [
"BSD-2-Clause"
] | 23 | 2015-03-02T16:13:27.000Z | 2021-09-06T22:09:09.000Z | starbucks/__init__.py | movermeyer/starbucks-py | 9eee89e2837d9950c27a6350ff891c891f8a07b6 | [
"BSD-2-Clause"
] | 2 | 2015-03-07T05:13:52.000Z | 2015-03-07T05:18:38.000Z | starbucks/__init__.py | movermeyer/starbucks-py | 9eee89e2837d9950c27a6350ff891c891f8a07b6 | [
"BSD-2-Clause"
] | 6 | 2015-03-02T16:14:41.000Z | 2020-10-22T17:20:52.000Z | from .starbucks import Starbucks, Card, Beverage, Coupon
| 28.5 | 56 | 0.807018 | 7 | 57 | 6.571429 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.122807 | 57 | 1 | 57 | 57 | 0.92 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
c8a29f1d132776013f4907eca0466c583d9edcb3 | 135 | py | Python | featuretools/primitives/premium/__init__.py | oslab-ewha/featuretools | c1d86433f9050bf383e55520a0d42cc63fa16839 | [
"BSD-3-Clause"
] | 2 | 2021-07-13T07:40:20.000Z | 2021-08-19T04:57:24.000Z | featuretools/primitives/premium/__init__.py | oslab-ewha/featuretools | c1d86433f9050bf383e55520a0d42cc63fa16839 | [
"BSD-3-Clause"
] | 6 | 2021-07-19T05:15:38.000Z | 2021-08-24T11:34:58.000Z | featuretools/primitives/premium/__init__.py | oslab-ewha/featuretools | c1d86433f9050bf383e55520a0d42cc63fa16839 | [
"BSD-3-Clause"
] | 2 | 2021-07-02T00:48:07.000Z | 2021-07-02T09:35:49.000Z | # flake8: noqa
from .api import *
import nltk
# nltk.download('stopwords', quiet=True)
# nltk.download('vader_lexicon', quiet=True)
| 15 | 44 | 0.718519 | 18 | 135 | 5.333333 | 0.666667 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008547 | 0.133333 | 135 | 8 | 45 | 16.875 | 0.811966 | 0.696296 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
c8a9c3fd32f3eb12e576a233c543e706a39f439e | 2,329 | py | Python | src/apps/core/views/PublicationViews.py | crivet/HydroLearn | f4caa868fffa08f1bf4163de94d44a234c8879b2 | [
"BSD-3-Clause"
] | null | null | null | src/apps/core/views/PublicationViews.py | crivet/HydroLearn | f4caa868fffa08f1bf4163de94d44a234c8879b2 | [
"BSD-3-Clause"
] | 23 | 2018-08-09T18:46:20.000Z | 2021-06-10T20:21:26.000Z | src/apps/core/views/PublicationViews.py | crivet/HydroLearn | f4caa868fffa08f1bf4163de94d44a234c8879b2 | [
"BSD-3-Clause"
] | 1 | 2019-01-28T15:42:39.000Z | 2019-01-28T15:42:39.000Z | from django.http import Http404
from src.apps.core.models.PublicationModels import Publication
class PublicationViewMixin(object):
'''
mixin to restrict access to publication objects based on module
defined 'user_has_access' result
if a user does not have access to an object, a 404 error will be raised
'''
def get_object(self, queryset=None):
object = super(PublicationViewMixin, self).get_object(queryset)
# check if the requesting user has access, if not return None
if not object.user_has_access(self.request.user):
raise Http404
return object
class PublicationChildViewMixin(object):
'''
mixin to restrict access to publication objects based on module
defined 'user_has_access' result
if a user does not have access to an object, a 404 error will be raised
'''
def get_object(self, queryset=None):
object = super(PublicationChildViewMixin, self).get_object(queryset)
# check if the requesting user has access, if not return None
if not object.get_Publishable_parent().user_has_access(self.request.user):
raise Http404
return object
class DraftOnlyViewMixin(object):
'''
mixin to restrict access of a particular view to Draft versions of publications
'''
def get_object(self, queryset=None):
object = super(DraftOnlyViewMixin, self).get_object(queryset)
accepted_statuses = [Publication.DRAFT_ONLY, Publication.PUBLISHED]
# check if this object's publishable parent is the Current Publication
if not object.get_Publishable_parent().get_publish_status() in accepted_statuses:
raise Http404
return object
class PublicOnlyViewMixin(object):
'''
mixin to restrict access of a particular view to Published versions of publications
'''
def get_object(self, queryset=None):
object = super(PublicOnlyViewMixin, self).get_object(queryset)
accepted_statuses = [Publication.PUBLICATION_OBJECT, Publication.PAST_PUBLICATION]
# check if this object's publishable parent is the Current Publication
if not object.get_Publishable_parent().get_publish_status() in accepted_statuses:
raise Http404
return object | 34.761194 | 91 | 0.701159 | 285 | 2,329 | 5.614035 | 0.252632 | 0.045 | 0.04875 | 0.0525 | 0.789375 | 0.78625 | 0.77375 | 0.71375 | 0.71375 | 0.71375 | 0 | 0.011818 | 0.237012 | 2,329 | 67 | 92 | 34.761194 | 0.888576 | 0.330614 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.071429 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
c8b5d13ececfd49165f5fe6cbb058db81a662eda | 88 | py | Python | ontology/logistic_regression/sherlock/listify_circuits_k16_reverse.py | ehbeam/neuro-knowledge-engine | 9dc56ade0bbbd8d14f0660774f787c3f46d7e632 | [
"MIT"
] | 15 | 2020-07-17T07:10:26.000Z | 2022-02-18T05:51:45.000Z | ontology/neural_network/sherlock/listify_circuits_k16_reverse.py | YifeiCAO/neuro-knowledge-engine | 9dc56ade0bbbd8d14f0660774f787c3f46d7e632 | [
"MIT"
] | 2 | 2022-01-14T09:10:12.000Z | 2022-01-28T17:32:42.000Z | ontology/neural_network/sherlock/listify_circuits_k16_reverse.py | YifeiCAO/neuro-knowledge-engine | 9dc56ade0bbbd8d14f0660774f787c3f46d7e632 | [
"MIT"
] | 4 | 2021-12-22T13:27:32.000Z | 2022-02-18T05:51:47.000Z | #!/bin/python
import listify_circuits
listify_circuits.optimize_circuits(16, 'reverse') | 22 | 49 | 0.829545 | 11 | 88 | 6.363636 | 0.727273 | 0.428571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.024096 | 0.056818 | 88 | 4 | 49 | 22 | 0.819277 | 0.136364 | 0 | 0 | 0 | 0 | 0.092105 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
c8c062eb3c8379c179aec8d4a8501d88269ab5f8 | 50 | py | Python | 13-protocol-abc/double/double_object.py | SeirousLee/example-code-2e | 81ec1669a4b8fd098db44a78a3d551287eec7bc9 | [
"MIT"
] | 990 | 2019-03-21T21:17:34.000Z | 2022-03-31T00:55:07.000Z | 13-protocol-abc/double/double_object.py | Turall/example-code-2e | 1702717182cff9a48beb55b2a9f5618e9bd1da18 | [
"MIT"
] | 17 | 2019-12-18T18:00:05.000Z | 2022-01-12T14:23:47.000Z | 13-protocol-abc/double/double_object.py | Turall/example-code-2e | 1702717182cff9a48beb55b2a9f5618e9bd1da18 | [
"MIT"
] | 276 | 2019-04-06T12:32:00.000Z | 2022-03-29T11:50:47.000Z | def double(x: object) -> object:
return x * 2
| 16.666667 | 32 | 0.6 | 8 | 50 | 3.75 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027027 | 0.26 | 50 | 2 | 33 | 25 | 0.783784 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
c8c0794a2f9a2f2e3e30f549b0c91a91964bcf15 | 151 | py | Python | spaced_repetition/gateways/django_gateway/django_project/apps/problem/apps.py | MBlistein/spaced-repetition | c10281d43e928f8d1799076190f962f8e49a405b | [
"MIT"
] | null | null | null | spaced_repetition/gateways/django_gateway/django_project/apps/problem/apps.py | MBlistein/spaced-repetition | c10281d43e928f8d1799076190f962f8e49a405b | [
"MIT"
] | null | null | null | spaced_repetition/gateways/django_gateway/django_project/apps/problem/apps.py | MBlistein/spaced-repetition | c10281d43e928f8d1799076190f962f8e49a405b | [
"MIT"
] | null | null | null | from django.apps import AppConfig
class ProblemConfig(AppConfig):
name = 'spaced_repetition.gateways.django_gateway.django_project.apps.problem'
| 25.166667 | 82 | 0.821192 | 18 | 151 | 6.722222 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.099338 | 151 | 5 | 83 | 30.2 | 0.889706 | 0 | 0 | 0 | 0 | 0 | 0.456954 | 0.456954 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
c8ddf6c863862394d55fb5d3c211fbaba720645c | 83 | py | Python | src/sage/crypto/public_key/all.py | defeo/sage | d8822036a9843bd4d75845024072515ede56bcb9 | [
"BSL-1.0"
] | 2 | 2018-06-30T01:37:35.000Z | 2018-06-30T01:37:39.000Z | src/sage/crypto/public_key/all.py | boothby/sage | 1b1e6f608d1ef8ee664bb19e991efbbc68cbd51f | [
"BSL-1.0"
] | null | null | null | src/sage/crypto/public_key/all.py | boothby/sage | 1b1e6f608d1ef8ee664bb19e991efbbc68cbd51f | [
"BSL-1.0"
] | null | null | null | from __future__ import absolute_import
from .blum_goldwasser import BlumGoldwasser
| 27.666667 | 43 | 0.891566 | 10 | 83 | 6.8 | 0.7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096386 | 83 | 2 | 44 | 41.5 | 0.906667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
c8fa394a588b95fa7b1fbb38d060f1d00788c0cc | 64 | py | Python | sources/simulators/serial_execution_simulator/__init__.py | M4rukku/impact_of_non_iid_data_in_federated_learning | c818db03699c82e42217d56f8ddd4cc2081c8bb1 | [
"MIT"
] | null | null | null | sources/simulators/serial_execution_simulator/__init__.py | M4rukku/impact_of_non_iid_data_in_federated_learning | c818db03699c82e42217d56f8ddd4cc2081c8bb1 | [
"MIT"
] | null | null | null | sources/simulators/serial_execution_simulator/__init__.py | M4rukku/impact_of_non_iid_data_in_federated_learning | c818db03699c82e42217d56f8ddd4cc2081c8bb1 | [
"MIT"
] | null | null | null | from .serial_execution_simulator import SerialExecutionSimulator | 64 | 64 | 0.9375 | 6 | 64 | 9.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.046875 | 64 | 1 | 64 | 64 | 0.95082 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
cdb528de4b73f90990fa79e3dd1091aaa743293b | 82 | py | Python | dj_scaffold/conf/app/views.py | vicalloy/dj-scaffold | 92e9a991e0699f8b88c16d8a95b23bd5b3cf29e1 | [
"BSD-3-Clause"
] | 10 | 2015-04-29T08:24:06.000Z | 2021-09-06T14:58:01.000Z | dj_scaffold/conf/app/views.py | vicalloy/dj-scaffold | 92e9a991e0699f8b88c16d8a95b23bd5b3cf29e1 | [
"BSD-3-Clause"
] | null | null | null | dj_scaffold/conf/app/views.py | vicalloy/dj-scaffold | 92e9a991e0699f8b88c16d8a95b23bd5b3cf29e1 | [
"BSD-3-Clause"
] | 3 | 2015-10-12T04:36:13.000Z | 2016-03-24T11:33:11.000Z | #!/usr/bin/env python
# -*- coding: UTF-8 -*-
from django.shortcuts import render
| 20.5 | 35 | 0.682927 | 12 | 82 | 4.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014085 | 0.134146 | 82 | 3 | 36 | 27.333333 | 0.774648 | 0.512195 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
cdbdabdd3293ab394848c537c3b52781c81be447 | 29 | py | Python | turbosensei/modB.py | FORCaist/turbosensei | d6d800ef9c9e73fd4cd1e130d9480334b27e7d3e | [
"MIT"
] | null | null | null | turbosensei/modB.py | FORCaist/turbosensei | d6d800ef9c9e73fd4cd1e130d9480334b27e7d3e | [
"MIT"
] | null | null | null | turbosensei/modB.py | FORCaist/turbosensei | d6d800ef9c9e73fd4cd1e130d9480334b27e7d3e | [
"MIT"
] | null | null | null | def funcB(x):
return x+2 | 9.666667 | 14 | 0.586207 | 6 | 29 | 2.833333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.047619 | 0.275862 | 29 | 3 | 14 | 9.666667 | 0.761905 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 5 |
cdd76e6ae8fe667ba2cf5fb3e8fc1654f15a1044 | 22 | py | Python | my_script.py | Ramguru94/watchtower | efb45e93850b1c7c0eee7edcfc0f602f9f9979ce | [
"MIT"
] | null | null | null | my_script.py | Ramguru94/watchtower | efb45e93850b1c7c0eee7edcfc0f602f9f9979ce | [
"MIT"
] | null | null | null | my_script.py | Ramguru94/watchtower | efb45e93850b1c7c0eee7edcfc0f602f9f9979ce | [
"MIT"
] | null | null | null | print("y")
print("z")
| 7.333333 | 10 | 0.545455 | 4 | 22 | 3 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 22 | 2 | 11 | 11 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
b541c081b66be3e6eff6a14b2027d41398fddb67 | 923 | py | Python | homes_for_sale/querysets.py | Xtuden-com/django-property | 6656d469a5d06c103a34c2e68b9f1754413fb3ba | [
"MIT"
] | null | null | null | homes_for_sale/querysets.py | Xtuden-com/django-property | 6656d469a5d06c103a34c2e68b9f1754413fb3ba | [
"MIT"
] | null | null | null | homes_for_sale/querysets.py | Xtuden-com/django-property | 6656d469a5d06c103a34c2e68b9f1754413fb3ba | [
"MIT"
] | null | null | null | from datetime import datetime
from django.contrib.gis.db import models
from homes.behaviours import Publishable
import pytz
class SaleQuerySet(models.query.QuerySet):
def published(self):
return self.filter(status=Publishable.STATUS_CHOICE_ACTIVE)
def unpublished(self):
return self.filter(status=Publishable.STATUS_CHOICE_INACTIVE)
def unexpired(self):
return self.filter(expires_at__isnull=True) | self.filter(expires_at__gt=datetime.utcnow().replace(tzinfo=pytz.utc))
def expired(self):
return self.filter(expires_at__lte=datetime.utcnow().replace(tzinfo=pytz.utc))
def new_home(self):
return self.filter(new_home=True)
def shared_ownership(self):
return self.filter(shared_ownership=True)
def auction(self):
return self.filter(auction=True)
def tenure(self, slug):
return self.filter(property_tenure__slug=slug) | 27.969697 | 124 | 0.732394 | 120 | 923 | 5.466667 | 0.383333 | 0.137195 | 0.195122 | 0.213415 | 0.35061 | 0.35061 | 0.262195 | 0.14939 | 0 | 0 | 0 | 0 | 0.171181 | 923 | 33 | 125 | 27.969697 | 0.857516 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.380952 | false | 0 | 0.190476 | 0.380952 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
b574ab83dfdd91d5f2f3d89787e2611685ca4722 | 163 | py | Python | src/api/domain/operation/GetDataOperationJobExecutionLogList/GetDataOperationJobExecutionLogListRequest.py | PythonDataIntegrator/pythondataintegrator | 6167778c36c2295e36199ac0d4d256a4a0c28d7a | [
"MIT"
] | 14 | 2020-12-19T15:06:13.000Z | 2022-01-12T19:52:17.000Z | src/api/domain/operation/GetDataOperationJobExecutionLogList/GetDataOperationJobExecutionLogListRequest.py | PythonDataIntegrator/pythondataintegrator | 6167778c36c2295e36199ac0d4d256a4a0c28d7a | [
"MIT"
] | 43 | 2021-01-06T22:05:22.000Z | 2022-03-10T10:30:30.000Z | src/api/domain/operation/GetDataOperationJobExecutionLogList/GetDataOperationJobExecutionLogListRequest.py | PythonDataIntegrator/pythondataintegrator | 6167778c36c2295e36199ac0d4d256a4a0c28d7a | [
"MIT"
] | 4 | 2020-12-18T23:10:09.000Z | 2021-04-02T13:03:12.000Z | from infrastructure.cqrs.decorators.requestclass import requestclass
@requestclass
class GetDataOperationJobExecutionLogListRequest:
ExecutionId: int = None
| 23.285714 | 68 | 0.852761 | 13 | 163 | 10.692308 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.104294 | 163 | 6 | 69 | 27.166667 | 0.952055 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.25 | 0 | 0.75 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 5 |
b5a4e0f879c15315e646c8f4869ec5e8ec64fb44 | 132 | py | Python | tests/test_schema.py | Ssvenkerud/population_generator | 7ba62b3222eee3935ee1ed689810d15dcda9c1b0 | [
"MIT"
] | null | null | null | tests/test_schema.py | Ssvenkerud/population_generator | 7ba62b3222eee3935ee1ed689810d15dcda9c1b0 | [
"MIT"
] | null | null | null | tests/test_schema.py | Ssvenkerud/population_generator | 7ba62b3222eee3935ee1ed689810d15dcda9c1b0 | [
"MIT"
] | null | null | null | import pytest
from src.Schema import *
def test_schema_blank_init():
schema = Schema()
assert isinstance(schema, Schema)
| 14.666667 | 37 | 0.727273 | 17 | 132 | 5.470588 | 0.647059 | 0.258065 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.189394 | 132 | 8 | 38 | 16.5 | 0.869159 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 1 | 0.2 | false | 0 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
a9236d67efc8d5b685346d8f0ee798ba0e7ee216 | 4,465 | py | Python | apps/sso/access_requests/tests.py | g10f/sso | ba6eb712add388c69d4880f5620a2e4ce42d3fee | [
"BSD-3-Clause"
] | 3 | 2021-05-16T17:06:57.000Z | 2021-05-28T17:14:05.000Z | apps/sso/access_requests/tests.py | g10f/sso | ba6eb712add388c69d4880f5620a2e4ce42d3fee | [
"BSD-3-Clause"
] | null | null | null | apps/sso/access_requests/tests.py | g10f/sso | ba6eb712add388c69d4880f5620a2e4ce42d3fee | [
"BSD-3-Clause"
] | null | null | null | import os
from selenium.webdriver.common.by import By
from selenium.webdriver.support.select import Select
from django.conf import settings
from django.contrib.auth import get_user_model
from django.urls import reverse
from sso.organisations.models import Organisation
from sso.tests import SSOSeleniumTests
class AccessRequestsSeleniumTests(SSOSeleniumTests):
fixtures = ['roles.json', 'test_l10n_data.json', 'app_roles.json', 'test_organisation_data.json',
'test_app_roles.json', 'test_user_data.json']
def test_new_access_request(self):
self.login(username='GunnarScherf', password='gsf')
# add new access request
self.selenium.get('%s%s' % (self.live_server_url, reverse('access_requests:extend_access')))
self.selenium.find_element_by_name("message").send_keys('Hello world.')
picture = os.path.abspath(os.path.join(settings.BASE_DIR, 'sso/static/img/face-cool.png'))
self.add_picture(picture)
self.selenium.find_element_by_tag_name("form").submit()
self.wait_page_loaded()
url = reverse('access_requests:extend_access_thanks')
full_url = self.live_server_url + url
self.assertEqual(self.selenium.current_url, full_url)
self.logout()
# login as organisation admin and accept the request
self.login(username='CenterAdmin', password='gsf')
list_url = reverse('access_requests:extend_access_list')
self.selenium.get('%s%s' % (self.live_server_url, list_url))
elems = self.selenium.find_elements(by=By.XPATH, value="//a[starts-with(@href, '%s')]" % list_url)
# should be one element in the list
elems[0].click()
self.wait_page_loaded()
self.selenium.find_element_by_tag_name("form").submit()
# check success message
self.wait_page_loaded()
self.selenium.find_element_by_class_name("alert-success")
self.logout()
# check if the user got the member profile
user = get_user_model().objects.get(username='GunnarScherf')
self.assertIn(get_user_model().get_default_role_profile(), user.role_profiles.all())
self.assertNotIn(get_user_model().get_default_guest_profile(), user.role_profiles.all())
def test_new_access_request_for_user_without_organisation(self):
# remove all organisations from user
user = get_user_model().objects.get(username='GunnarScherf')
user.organisations.clear()
self.login(username='GunnarScherf', password='gsf')
# add new access request
self.selenium.get('%s%s?app_id=%s' % (self.live_server_url, reverse('access_requests:extend_access'),
'bc0ee635a536491eb8e7fbe5749e8111'))
self.selenium.find_element_by_name("message").send_keys('Hello world.')
picture = os.path.abspath(os.path.join(settings.BASE_DIR, 'sso/static/img/face-cool.png'))
self.add_picture(picture)
Select(self.selenium.find_element_by_name("organisation")).select_by_index(1)
self.selenium.find_element_by_tag_name("form").submit()
self.wait_page_loaded()
url = reverse('access_requests:extend_access_thanks')
full_url = self.live_server_url + url
self.assertEqual(self.selenium.current_url, full_url)
self.logout()
# login as organisation admin and accept the request
self.login(username='CenterAdmin', password='gsf')
list_url = reverse('access_requests:extend_access_list')
self.selenium.get('%s%s' % (self.live_server_url, list_url))
elems = self.selenium.find_elements(by=By.XPATH, value="//a[starts-with(@href, '%s')]" % list_url)
# should be one element in the list
elems[0].click()
self.wait_page_loaded()
self.selenium.find_element_by_tag_name("form").submit()
# check success message
self.wait_page_loaded()
self.selenium.find_element_by_class_name("alert-success")
self.logout()
user.refresh_from_db()
organisation = Organisation.objects.get(uuid='31664dd38ca4454e916e55fe8b1f0745')
self.assertIn(organisation, user.organisations.all())
self.assertEqual(len(user.organisations.all()), 1)
self.assertIn(get_user_model().get_default_role_profile(), user.role_profiles.all())
self.assertNotIn(get_user_model().get_default_guest_profile(), user.role_profiles.all())
| 46.030928 | 109 | 0.694513 | 573 | 4,465 | 5.155323 | 0.226876 | 0.069059 | 0.05958 | 0.070074 | 0.739336 | 0.723764 | 0.713947 | 0.713947 | 0.682803 | 0.682803 | 0 | 0.013227 | 0.187234 | 4,465 | 96 | 110 | 46.510417 | 0.800772 | 0.075028 | 0 | 0.676471 | 0 | 0 | 0.16606 | 0.09444 | 0 | 0 | 0 | 0 | 0.117647 | 1 | 0.029412 | false | 0.058824 | 0.117647 | 0 | 0.176471 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
a925d4613fa40116b9e05b914aa9e031f1eb8bc1 | 20 | py | Python | ImageFetcher/__init__.py | finleyexp/georef_imageregistration | c896ddea1055b9c8b919560643a3cb5f87dcc0f1 | [
"Apache-2.0"
] | 11 | 2018-01-26T09:06:28.000Z | 2022-01-02T07:32:26.000Z | ImageFetcher/__init__.py | finleyexp/georef_imageregistration | c896ddea1055b9c8b919560643a3cb5f87dcc0f1 | [
"Apache-2.0"
] | null | null | null | ImageFetcher/__init__.py | finleyexp/georef_imageregistration | c896ddea1055b9c8b919560643a3cb5f87dcc0f1 | [
"Apache-2.0"
] | 9 | 2017-07-16T03:14:11.000Z | 2021-08-29T01:06:45.000Z | """
ImageFetcher
""" | 6.666667 | 12 | 0.6 | 1 | 20 | 12 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 20 | 3 | 13 | 6.666667 | 0.666667 | 0.6 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
a94437504b6c0ec20447bfe79a350a2b289965e8 | 22,589 | py | Python | research/PromptKGC/utils.py | zjunlp/PromptKG | 791bf82390eeadc30876d9f95e8dd26cd05de3dc | [
"MIT"
] | 11 | 2022-02-04T12:32:37.000Z | 2022-03-25T11:49:48.000Z | research/PromptKGC/utils.py | zjunlp/PromptKG | 791bf82390eeadc30876d9f95e8dd26cd05de3dc | [
"MIT"
] | null | null | null | research/PromptKGC/utils.py | zjunlp/PromptKG | 791bf82390eeadc30876d9f95e8dd26cd05de3dc | [
"MIT"
] | 4 | 2022-02-04T05:08:23.000Z | 2022-03-16T02:07:52.000Z | import argparse
import csv
import logging
import os
import random
import sys
import numpy as np
import torch
import pickle
from torch.utils.data import DataLoader, RandomSampler, SequentialSampler, TensorDataset, Dataset
from torch.utils.data.distributed import DistributedSampler
from tqdm import tqdm, trange
from data import convert_examples_to_features
from data import KGProcessor
# from torch.nn import CrossEntropyLoss, MSELoss
# from scipy.stats import pearsonr, spearmanr
# from sklearn.metrics import matthews_corrcoef, f1_score
logger = logging.getLogger(__name__)
class InputExample(object):
"""A single training/test example for simple sequence classification."""
def __init__(self, guid, text_a, text_b=None, text_c=None, label=None):
"""Constructs a InputExample.
Args:
guid: Unique id for the example.
text_a: string. The untokenized text of the first sequence. For single
sequence tasks, only this sequence must be specified.
text_b: (Optional) string. The untokenized text of the second sequence.
Only must be specified for sequence pair tasks.
text_c: (Optional) string. The untokenized text of the third sequence.
Only must be specified for sequence triple tasks.
label: (Optional) string. The label of the example. This should be
specified for train and dev examples, but not for test examples.
"""
self.guid = guid
self.text_a = text_a
self.text_b = text_b
self.text_c = text_c
self.label = label
class InputFeatures(object):
"""A single set of features of data."""
def __init__(self, input_ids, input_mask, segment_ids, label_id):
self.input_ids = input_ids
self.input_mask = input_mask
self.segment_ids = segment_ids
self.label_id = label_id
class DataProcessor(object):
"""Base class for data converters for sequence classification data sets."""
def get_train_examples(self, data_dir):
"""Gets a collection of `InputExample`s for the train set."""
raise NotImplementedError()
def get_dev_examples(self, data_dir):
"""Gets a collection of `InputExample`s for the dev set."""
raise NotImplementedError()
def get_labels(self, data_dir):
"""Gets the list of labels for this data set."""
raise NotImplementedError()
@classmethod
def _read_tsv(cls, input_file, quotechar=None):
"""Reads a tab separated value file."""
with open(input_file, "r", encoding="utf-8") as f:
reader = csv.reader(f, delimiter="\t", quotechar=quotechar)
lines = []
for line in reader:
if sys.version_info[0] == 2:
line = list(unicode(cell, "utf-8") for cell in line)
lines.append(line)
return lines
def _truncate_seq_pair(tokens_a, tokens_b, max_length):
"""Truncates a sequence pair in place to the maximum length."""
# This is a simple heuristic which will always truncate the longer sequence
# one token at a time. This makes more sense than truncating an equal percent
# of tokens from each, since if one sequence is very short then each token
# that's truncated likely contains more information than a longer sequence.
while True:
total_length = len(tokens_a) + len(tokens_b)
if total_length <= max_length:
break
if len(tokens_a) > len(tokens_b):
tokens_a.pop()
else:
tokens_b.pop()
def _truncate_seq_triple(tokens_a, tokens_b, tokens_c, max_length):
"""Truncates a sequence triple in place to the maximum length."""
# This is a simple heuristic which will always truncate the longer sequence
# one token at a time. This makes more sense than truncating an equal percent
# of tokens from each, since if one sequence is very short then each token
# that's truncated likely contains more information than a longer sequence.
while True:
total_length = len(tokens_a) + len(tokens_b) + len(tokens_c)
if total_length <= max_length:
break
if len(tokens_a) > len(tokens_b) and len(tokens_a) > len(tokens_c):
tokens_a.pop()
elif len(tokens_b) > len(tokens_a) and len(tokens_b) > len(tokens_c):
tokens_b.pop()
elif len(tokens_c) > len(tokens_a) and len(tokens_c) > len(tokens_b):
tokens_c.pop()
else:
tokens_c.pop()
logger = logging.getLogger()
# TODO write a dataset for fast test processing
class TestDataset(Dataset):
def __init__(self, args, test_triples, tokenizer, processor):
self.test_triples = test_triples
self.tokenizer = tokenizer
self.processor = processor
self.args = args
self.label_list = processor.get_labels(args.data_dir)
self.entity_list = processor.get_entities(args.data_dir)
def __len__(self):
return len(self.test_triples)
def __getitem__(self, index):
entity_list = self.entity_list
all_triples_str_set = set()
processor = self.processor
args = self.args
tokenizer = self.tokenizer
label_list = self.label_list
test_triple = self.test_triples[index]
head = test_triple[0]
relation = test_triple[1]
tail = test_triple[2]
# print(test_triple, head, relation, tail)
head_corrupt_list = [test_triple]
for corrupt_ent in entity_list:
if corrupt_ent != head:
tmp_triple = [corrupt_ent, relation, tail]
tmp_triple_str = "\t".join(tmp_triple)
if tmp_triple_str not in all_triples_str_set:
# may be slow
head_corrupt_list.append(tmp_triple)
tmp_examples = processor._create_examples(
head_corrupt_list, "test", args.data_dir
)
# print(len(tmp_examples))
tmp_features = convert_examples_to_features(
tmp_examples, label_list, args.max_seq_length, tokenizer, args
)
all_input_ids = torch.tensor(
[f.input_ids for f in tmp_features], dtype=torch.long
)
all_input_mask = torch.tensor(
[f.input_mask for f in tmp_features], dtype=torch.long
)
all_segment_ids = torch.tensor(
[f.segment_ids for f in tmp_features], dtype=torch.long
)
all_label_ids = torch.tensor(
[f.label_id for f in tmp_features], dtype=torch.long
)
eval_data = TensorDataset(
all_input_ids, all_input_mask, all_segment_ids, all_label_ids
)
# Run prediction for temp data
eval_sampler = SequentialSampler(eval_data)
left_eval_dataloader = DataLoader(
eval_data, sampler=eval_sampler, batch_size=args.eval_batch_size, num_workers=16
)
tail_corrupt_list = [test_triple]
for corrupt_ent in entity_list:
if corrupt_ent != tail:
tmp_triple = [head, relation, corrupt_ent]
tmp_triple_str = "\t".join(tmp_triple)
if tmp_triple_str not in all_triples_str_set:
# may be slow
tail_corrupt_list.append(tmp_triple)
tmp_examples = processor._create_examples(
tail_corrupt_list, "test", args.data_dir
)
# print(len(tmp_examples))
tmp_features = convert_examples_to_features(
tmp_examples, label_list, args.max_seq_length, tokenizer, args
)
all_input_ids = torch.tensor(
[f.input_ids for f in tmp_features], dtype=torch.long
)
all_input_mask = torch.tensor(
[f.input_mask for f in tmp_features], dtype=torch.long
)
all_segment_ids = torch.tensor(
[f.segment_ids for f in tmp_features], dtype=torch.long
)
all_label_ids = torch.tensor(
[f.label_id for f in tmp_features], dtype=torch.long
)
eval_data = TensorDataset(
all_input_ids, all_input_mask, all_segment_ids, all_label_ids
)
# Run prediction for temp data
eval_sampler = SequentialSampler(eval_data)
right_eval_dataloader = DataLoader(
eval_data, sampler=eval_sampler, batch_size=args.eval_batch_size, num_workers=16
)
return dict(left=left_eval_dataloader, right=right_eval_dataloader)
def test_model(args, model, tokenizer, wandb_logger):
model.eval()
processor = KGProcessor()
# get the chunk entities
test_triples = processor.get_test_triples(args.data_dir, args.chunk)
dataset = TestDataset(args, test_triples,tokenizer=tokenizer, processor=processor)
dataloader = DataLoader(dataset, batch_size=1,shuffle=False, num_workers=4, collate_fn=lambda x:x)
all_triples_str_set = set()
# get all the entities
entity_list = processor.get_entities(args.data_dir)
label_list = processor.get_labels(args.data_dir)
device = torch.device("cuda:0")
model = model.to(device)
ranks = []
ranks_left = []
ranks_right = []
hits_left = []
hits_right = []
hits = []
top_ten_hit_count = 0
for i in range(10):
hits_left.append([])
hits_right.append([])
hits.append([])
pbar = tqdm(total=len(test_triples), desc="Testing...")
for batch in dataloader:
left_dataloader = batch[0]['left']
right_dataloader = batch[0]['right']
preds = []
for input_ids, input_mask, segment_ids, label_ids in left_dataloader:
input_ids = input_ids.to(device)
input_mask = input_mask.to(device)
segment_ids = segment_ids.to(device)
label_ids = label_ids.to(device)
with torch.no_grad():
logits = model(
input_ids, token_type_ids=segment_ids, attention_mask=input_mask
)
if len(preds) == 0:
batch_logits = logits.detach().cpu().numpy()
preds.append(batch_logits)
else:
batch_logits = logits.detach().cpu().numpy()
preds[0] = np.append(preds[0], batch_logits, axis=0)
preds = preds[0]
# get the dimension corresponding to current label 1
# print(preds, preds.shape)
rel_values = preds[:, 1]
rel_values = torch.tensor(rel_values)
# print(rel_values, rel_values.shape)
_, argsort1 = torch.sort(rel_values, descending=True)
# print(max_values)
# print(argsort1)
argsort1 = argsort1.cpu().numpy()
rank1 = np.where(argsort1 == 0)[0][0]
# print("left: ", rank1)
ranks.append(rank1 + 1)
ranks_left.append(rank1 + 1)
if rank1 < 10:
top_ten_hit_count += 1
preds = []
for input_ids, input_mask, segment_ids, label_ids in right_dataloader:
input_ids = input_ids.to(device)
input_mask = input_mask.to(device)
segment_ids = segment_ids.to(device)
label_ids = label_ids.to(device)
with torch.no_grad():
logits = model(
input_ids, token_type_ids=segment_ids, attention_mask=input_mask
)
if len(preds) == 0:
batch_logits = logits.detach().cpu().numpy()
preds.append(batch_logits)
else:
batch_logits = logits.detach().cpu().numpy()
preds[0] = np.append(preds[0], batch_logits, axis=0)
preds = preds[0]
# get the dimension corresponding to current label 1
rel_values = preds[:, 1]
rel_values = torch.tensor(rel_values)
_, argsort1 = torch.sort(rel_values, descending=True)
argsort1 = argsort1.cpu().numpy()
rank2 = np.where(argsort1 == 0)[0][0]
ranks.append(rank2 + 1)
ranks_right.append(rank2 + 1)
# print("right: ", rank2)
# print("mean rank until now: ", np.mean(ranks))
if rank2 < 10:
top_ten_hit_count += 1
for hits_level in range(10):
if rank1 <= hits_level:
hits[hits_level].append(1.0)
hits_left[hits_level].append(1.0)
else:
hits[hits_level].append(0.0)
hits_left[hits_level].append(0.0)
if rank2 <= hits_level:
hits[hits_level].append(1.0)
hits_right[hits_level].append(1.0)
else:
hits[hits_level].append(0.0)
hits_right[hits_level].append(0.0)
pbar.update(1)
pbar.set_postfix({"mean rank": np.mean(ranks), "hit@10": top_ten_hit_count * 1.0 / len(ranks) })
if args.chunk:
with open(f"chuck{args.chunk}_result_rank.pkl", "wb") as file:
pickle.dump(ranks, file)
print(f"mean rank: {np.mean(ranks)} \nhits@10: {top_ten_hit_count * 1.0 / len(ranks)}")
def _test_model(args, model, tokenizer, wandb_logger):
# run link prediction
# only use one gpu
processor = KGProcessor()
# get the chunk entities
test_triples = processor.get_test_triples(args.data_dir, args.chunk)
dataset = TestDataset(args, test_triples,tokenizer=tokenizer, processor=processor)
dataloader = DataLoader(dataset, batch_size=1,shuffle=False, num_workers=2)
all_triples_str_set = set()
# get all the entities
entity_list = processor.get_entities(args.data_dir)
label_list = processor.get_labels(args.data_dir)
device = torch.device("cuda:0")
model = model.to(device)
ranks = []
ranks_left = []
ranks_right = []
hits_left = []
hits_right = []
hits = []
top_ten_hit_count = 0
for i in range(10):
hits_left.append([])
hits_right.append([])
hits.append([])
pbar = tqdm(total=len(test_triples), desc="Testing...")
for test_triple in test_triples:
head = test_triple[0]
relation = test_triple[1]
tail = test_triple[2]
# print(test_triple, head, relation, tail)
head_corrupt_list = [test_triple]
for corrupt_ent in entity_list:
if corrupt_ent != head:
tmp_triple = [corrupt_ent, relation, tail]
tmp_triple_str = "\t".join(tmp_triple)
if tmp_triple_str not in all_triples_str_set:
# may be slow
head_corrupt_list.append(tmp_triple)
tmp_examples = processor._create_examples(
head_corrupt_list, "test", args.data_dir
)
# print(len(tmp_examples))
tmp_features = convert_examples_to_features(
tmp_examples, label_list, args.max_seq_length, tokenizer, args
)
all_input_ids = torch.tensor(
[f.input_ids for f in tmp_features], dtype=torch.long
)
all_input_mask = torch.tensor(
[f.input_mask for f in tmp_features], dtype=torch.long
)
all_segment_ids = torch.tensor(
[f.segment_ids for f in tmp_features], dtype=torch.long
)
all_label_ids = torch.tensor(
[f.label_id for f in tmp_features], dtype=torch.long
)
eval_data = TensorDataset(
all_input_ids, all_input_mask, all_segment_ids, all_label_ids
)
# Run prediction for temp data
eval_sampler = SequentialSampler(eval_data)
eval_dataloader = DataLoader(
eval_data, sampler=eval_sampler, batch_size=args.eval_batch_size, num_workers=16
)
model.eval()
preds = []
for input_ids, input_mask, segment_ids, label_ids in eval_dataloader:
input_ids = input_ids.to(device)
input_mask = input_mask.to(device)
segment_ids = segment_ids.to(device)
label_ids = label_ids.to(device)
with torch.no_grad():
logits = model(
input_ids, token_type_ids=segment_ids, attention_mask=input_mask
)
if len(preds) == 0:
batch_logits = logits.detach().cpu().numpy()
preds.append(batch_logits)
else:
batch_logits = logits.detach().cpu().numpy()
preds[0] = np.append(preds[0], batch_logits, axis=0)
preds = preds[0]
# get the dimension corresponding to current label 1
# print(preds, preds.shape)
rel_values = preds[:, all_label_ids[0]]
rel_values = torch.tensor(rel_values)
# print(rel_values, rel_values.shape)
_, argsort1 = torch.sort(rel_values, descending=True)
# print(max_values)
# print(argsort1)
argsort1 = argsort1.cpu().numpy()
rank1 = np.where(argsort1 == 0)[0][0]
# print("left: ", rank1)
ranks.append(rank1 + 1)
ranks_left.append(rank1 + 1)
if rank1 < 10:
top_ten_hit_count += 1
tail_corrupt_list = [test_triple]
for corrupt_ent in entity_list:
if corrupt_ent != tail:
tmp_triple = [head, relation, corrupt_ent]
tmp_triple_str = "\t".join(tmp_triple)
if tmp_triple_str not in all_triples_str_set:
# may be slow
tail_corrupt_list.append(tmp_triple)
tmp_examples = processor._create_examples(
tail_corrupt_list, "test", args.data_dir
)
# print(len(tmp_examples))
tmp_features = convert_examples_to_features(
tmp_examples, label_list, args.max_seq_length, tokenizer, args
)
all_input_ids = torch.tensor(
[f.input_ids for f in tmp_features], dtype=torch.long
)
all_input_mask = torch.tensor(
[f.input_mask for f in tmp_features], dtype=torch.long
)
all_segment_ids = torch.tensor(
[f.segment_ids for f in tmp_features], dtype=torch.long
)
all_label_ids = torch.tensor(
[f.label_id for f in tmp_features], dtype=torch.long
)
eval_data = TensorDataset(
all_input_ids, all_input_mask, all_segment_ids, all_label_ids
)
# Run prediction for temp data
eval_sampler = SequentialSampler(eval_data)
eval_dataloader = DataLoader(
eval_data, sampler=eval_sampler, batch_size=args.eval_batch_size, num_workers=16
)
model.eval()
preds = []
for input_ids, input_mask, segment_ids, label_ids in eval_dataloader:
input_ids = input_ids.to(device)
input_mask = input_mask.to(device)
segment_ids = segment_ids.to(device)
label_ids = label_ids.to(device)
with torch.no_grad():
logits = model(
input_ids, token_type_ids=segment_ids, attention_mask=input_mask
)
if len(preds) == 0:
batch_logits = logits.detach().cpu().numpy()
preds.append(batch_logits)
else:
batch_logits = logits.detach().cpu().numpy()
preds[0] = np.append(preds[0], batch_logits, axis=0)
preds = preds[0]
# get the dimension corresponding to current label 1
rel_values = preds[:, all_label_ids[0]]
rel_values = torch.tensor(rel_values)
_, argsort1 = torch.sort(rel_values, descending=True)
argsort1 = argsort1.cpu().numpy()
rank2 = np.where(argsort1 == 0)[0][0]
ranks.append(rank2 + 1)
ranks_right.append(rank2 + 1)
# print("right: ", rank2)
# print("mean rank until now: ", np.mean(ranks))
if rank2 < 10:
top_ten_hit_count += 1
# print("hit@10 until now: ", top_ten_hit_count * 1.0 / len(ranks))
pbar.update(1)
pbar.set_postfix({"mean rank": np.mean(ranks), "hit@10": top_ten_hit_count * 1.0 / len(ranks) })
# file_prefix = (
# str(args.data_dir[7:])
# + "_"
# + str(args.batch_size)
# + "_"
# + str(args.lr)
# + "_"
# + str(args.max_seq_length)
# + "_"
# + str(args.max_epochs)
# )
# # file_prefix = str(args.data_dir[7:])
# f = open(file_prefix + "_ranks.txt", "a")
# f.write(str(rank1) + "\t" + str(rank2) + "\n")
# f.close()
# this could be done more elegantly, but here you go
for hits_level in range(10):
    if rank1 <= hits_level:
        hits[hits_level].append(1.0)
        hits_left[hits_level].append(1.0)
    else:
        hits[hits_level].append(0.0)
        hits_left[hits_level].append(0.0)

    if rank2 <= hits_level:
        hits[hits_level].append(1.0)
        hits_right[hits_level].append(1.0)
    else:
        hits[hits_level].append(0.0)
        hits_right[hits_level].append(0.0)
for i in [0, 2, 9]:
    logger.info("Hits left @{0}: {1}".format(i + 1, np.mean(hits_left[i])))
    logger.info("Hits right @{0}: {1}".format(i + 1, np.mean(hits_right[i])))
    logger.info("Hits @{0}: {1}".format(i + 1, np.mean(hits[i])))
    wandb_logger.log_metrics({f'hits{i+1}': np.mean(hits[i])})
logger.info("Mean rank left: {0}".format(np.mean(ranks_left)))
logger.info("Mean rank right: {0}".format(np.mean(ranks_right)))
logger.info("Mean rank: {0}".format(np.mean(ranks)))
logger.info(
    "Mean reciprocal rank left: {0}".format(np.mean(1.0 / np.array(ranks_left)))
)
logger.info(
    "Mean reciprocal rank right: {0}".format(np.mean(1.0 / np.array(ranks_right)))
)
logger.info("Mean reciprocal rank: {0}".format(np.mean(1.0 / np.array(ranks))))
wandb_logger.log_metrics({'mrr': np.mean(1.0 / np.array(ranks))})
wandb_logger.log_metrics({'mr': np.mean(ranks)})
if args.chunk:
    with open(f"chuck{args.chunk}_result_rank.pkl", "wb") as file:
        pickle.dump(ranks, file)
def gather_all_ranks():
    ranks = np.array([])
    for i in range(10):
        with open(f"chuck{i}_result_rank.pkl", "rb") as file:
            ranks = np.concatenate([ranks, pickle.load(file)], axis=0)
    return ranks.mean(), (ranks <= 10).mean()
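The evaluation fragment above accumulates 1-based ranks and then reports mean rank, mean reciprocal rank, and hits@k. That bookkeeping can be sketched as a small standalone helper (the name `summarize_ranks` is illustrative, not part of the original script):

```python
def summarize_ranks(ranks):
    """Mean rank, mean reciprocal rank, and hits@k for 1-based ranks,
    mirroring the metrics logged above."""
    n = float(len(ranks))
    mean_rank = sum(ranks) / n
    mrr = sum(1.0 / r for r in ranks) / n
    # hits@k: fraction of queries whose gold entity ranked within the top k
    hits = {k: sum(1 for r in ranks if r <= k) / n for k in (1, 3, 10)}
    return mean_rank, mrr, hits


mr, mrr, hits = summarize_ranks([1, 2, 10, 50])
print(mr, round(mrr, 3), hits[10])  # 15.75 0.405 0.75
```

The `hits[10]` value matches the `(ranks <= 10).mean()` computation in `gather_all_ranks` above.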
from game.table import Table
from mahjong.tile import Tile
from utils.test_helpers import enemy_called_riichi_helper, string_to_136_array, string_to_136_tile
def test_dont_call_riichi_with_yaku_and_central_tanki_wait():
    table = _make_table()
    tiles = string_to_136_array(sou="234567", pin="234567", man="4")
    table.player.init_hand(tiles)
    table.player.draw_tile(string_to_136_tile(man="5"))
    _, with_riichi = table.player.discard_tile()
    assert with_riichi is False

def test_dont_call_riichi_expensive_damaten_with_yaku():
    table = _make_table(
        dora_indicators=[
            string_to_136_tile(man="7"),
            string_to_136_tile(man="5"),
            string_to_136_tile(sou="1"),
        ]
    )

    # tanyao pinfu sanshoku dora 4 - this is damaten baiman, let's not riichi it
    tiles = string_to_136_array(man="67888", sou="678", pin="34678")
    table.player.init_hand(tiles)
    table.player.draw_tile(string_to_136_tile(honors="3"))
    _, with_riichi = table.player.discard_tile()
    assert with_riichi is False

    # let's test lots of doras hand, tanyao dora 8, also damaten baiman
    tiles = string_to_136_array(man="666888", sou="22", pin="34678")
    table.player.init_hand(tiles)
    table.player.draw_tile(string_to_136_tile(honors="3"))
    _, with_riichi = table.player.discard_tile()
    assert with_riichi is False

    # chuuren
    tiles = string_to_136_array(man="1112345678999")
    table.player.init_hand(tiles)
    table.player.draw_tile(string_to_136_tile(honors="3"))
    _, with_riichi = table.player.discard_tile()
    assert with_riichi is False

def test_riichi_expensive_hand_without_yaku_2():
    table = _make_table(
        dora_indicators=[
            string_to_136_tile(man="1"),
            string_to_136_tile(sou="1"),
            string_to_136_tile(pin="1"),
        ]
    )
    tiles = string_to_136_array(man="222", sou="22278", pin="22789")
    table.player.init_hand(tiles)
    table.player.draw_tile(string_to_136_tile(honors="3"))
    _, with_riichi = table.player.discard_tile()
    assert with_riichi is True

def test_riichi_tanki_honor_without_yaku():
    table = _make_table(dora_indicators=[string_to_136_tile(man="2"), string_to_136_tile(sou="6")])
    tiles = string_to_136_array(man="345678", sou="789", pin="123", honors="2")
    table.player.init_hand(tiles)
    table.player.draw_tile(string_to_136_tile(honors="3"))
    _, with_riichi = table.player.discard_tile()
    assert with_riichi is True

def test_riichi_tanki_honor_chiitoitsu():
    table = _make_table()
    tiles = string_to_136_array(man="22336688", sou="99", pin="99", honors="2")
    table.player.init_hand(tiles)
    table.player.draw_tile(string_to_136_tile(honors="3"))
    _, with_riichi = table.player.discard_tile()
    assert with_riichi is True

def test_always_call_daburi():
    table = _make_table()
    table.player.round_step = 0
    tiles = string_to_136_array(sou="234567", pin="234567", man="4")
    table.player.init_hand(tiles)
    table.player.draw_tile(string_to_136_tile(man="5"))
    _, with_riichi = table.player.discard_tile()
    assert with_riichi is True

def test_dont_call_karaten_tanki_riichi():
    table = _make_table()
    tiles = string_to_136_array(man="22336688", sou="99", pin="99", honors="2")
    table.player.init_hand(tiles)
    for _ in range(0, 3):
        table.add_discarded_tile(1, string_to_136_tile(honors="2"), False)
        table.add_discarded_tile(1, string_to_136_tile(honors="3"), False)
    table.player.draw_tile(string_to_136_tile(honors="3"))
    _, with_riichi = table.player.discard_tile()
    assert with_riichi is False

def test_dont_call_karaten_ryanmen_riichi():
    table = _make_table(
        dora_indicators=[
            string_to_136_tile(man="1"),
            string_to_136_tile(sou="1"),
            string_to_136_tile(pin="1"),
        ]
    )
    tiles = string_to_136_array(man="222", sou="22278", pin="22789")
    table.player.init_hand(tiles)
    for _ in range(0, 4):
        table.add_discarded_tile(1, string_to_136_tile(sou="6"), False)
        table.add_discarded_tile(1, string_to_136_tile(sou="9"), False)
    table.player.draw_tile(string_to_136_tile(honors="3"))
    _, with_riichi = table.player.discard_tile()
    assert with_riichi is False

def test_call_riichi_penchan_with_suji():
    table = _make_table(
        dora_indicators=[
            string_to_136_tile(pin="1"),
        ]
    )
    tiles = string_to_136_array(sou="11223", pin="234567", man="66")
    table.player.init_hand(tiles)
    table.player.draw_tile(string_to_136_tile(sou="6"))
    _, with_riichi = table.player.discard_tile()
    assert with_riichi is True

def test_call_riichi_tanki_with_kabe():
    table = _make_table(
        dora_indicators=[
            string_to_136_tile(pin="1"),
        ]
    )
    for _ in range(0, 3):
        table.add_discarded_tile(1, string_to_136_tile(honors="1"), False)
    for _ in range(0, 4):
        table.add_discarded_tile(1, string_to_136_tile(sou="8"), False)
    tiles = string_to_136_array(sou="1119", pin="234567", man="666")
    table.player.init_hand(tiles)
    table.player.draw_tile(string_to_136_tile(honors="1"))
    _, with_riichi = table.player.discard_tile()
    assert with_riichi is True

def test_call_riichi_chiitoitsu_with_suji():
    table = _make_table(
        dora_indicators=[
            string_to_136_tile(man="1"),
        ]
    )
    for _ in range(0, 3):
        table.add_discarded_tile(1, string_to_136_tile(honors="3"), False)
    tiles = string_to_136_array(man="22336688", sou="9", pin="99", honors="22")
    table.player.init_hand(tiles)
    table.player.add_discarded_tile(Tile(string_to_136_tile(sou="6"), True))
    table.player.draw_tile(string_to_136_tile(honors="3"))
    _, with_riichi = table.player.discard_tile()
    assert with_riichi is True

def test_dont_call_riichi_chiitoitsu_bad_wait():
    table = _make_table(
        dora_indicators=[
            string_to_136_tile(man="1"),
        ]
    )
    for _ in range(0, 3):
        table.add_discarded_tile(1, string_to_136_tile(honors="3"), False)
    tiles = string_to_136_array(man="22336688", sou="4", pin="99", honors="22")
    table.player.init_hand(tiles)
    table.player.draw_tile(string_to_136_tile(honors="3"))
    _, with_riichi = table.player.discard_tile()
    assert with_riichi is False

def test_dont_call_pinfu_nomi_chasing_riichi():
    table = _make_table()
    enemy_called_riichi_helper(table, 3)
    tiles = string_to_136_array(man="123", sou="234567", pin="2278")
    table.player.init_hand(tiles)
    table.player.draw_tile(string_to_136_tile(honors="3"))
    # on early stages it is fine to call chasing riichi here
    _, with_riichi = table.player.discard_tile()
    assert with_riichi is True

    table.player.round_step = 9
    table.player.draw_tile(string_to_136_tile(honors="3"))
    # on late stage let's save riichi stick
    _, with_riichi = table.player.discard_tile()
    assert with_riichi is False

def _make_table(dora_indicators=None) -> Table:
    table = Table()
    table.count_of_remaining_tiles = 60
    table.player.scores = 25000
    # with that we don't have daburi anymore
    table.player.round_step = 1
    # with that we are not dealer anymore
    table.player.seat = 1
    if dora_indicators:
        for x in dora_indicators:
            table.add_dora_indicator(x)
    return table
# -*- coding: utf-8 -*-
"""
longboxed.api.decorators
~~~~~~~~~~~~~~~~~~~~~~~~
longboxed api decorators
"""
import day20.src as src
def test_part1():
    assert src.part1(src.TEST_INPUT_FILE) == 8


def test_part1_full():
    assert src.part1(src.FULL_INPUT_FILE) == 665280


def test_part2():
    assert src.part2(src.TEST_INPUT_FILE) == 8


def test_part2_full():
    assert src.part2(src.FULL_INPUT_FILE) == 705600
from .tarsier_output_slack import TarsierOutputSlack
# Placeholder for back compatibility.
from bcbio.utils import *
import os
import sys
sys.path.append(os.path.dirname(__file__) + "/../modules/nai/src")
from abc import ABC, abstractmethod
class TransferActionHandler(ABC):

    def __init__(self):
        """Creates an instance of TransferActionHandler"""
        pass
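`TransferActionHandler` here only tags the class as abstract (it declares no abstract methods), so concrete handlers are expected to subclass it. A minimal self-contained sketch of that pattern — the `LoggingTransferActionHandler` subclass and its `handle` method are hypothetical, not part of the SDK:

```python
from abc import ABC


class TransferActionHandler(ABC):
    def __init__(self):
        """Creates an instance of TransferActionHandler"""
        pass


class LoggingTransferActionHandler(TransferActionHandler):
    """Hypothetical concrete handler that records what it was asked to do."""

    def handle(self, record):
        # A real handler would act on the transfer action here.
        return f"handled: {record}"


handler = LoggingTransferActionHandler()
print(handler.handle("PIPELINE-1"))  # prints: handled: PIPELINE-1
```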
from .api_connector import ApiConnector
93d053af39d4602ba4666d54a090368856a949ce | 199 | py | Python | harvester/sharekit/admin.py | surfedushare/search-portal | 708a0d05eee13c696ca9abd7e84ab620d3900fbe | [
"MIT"
] | 2 | 2021-08-19T09:40:59.000Z | 2021-12-14T11:08:20.000Z | harvester/sharekit/admin.py | surfedushare/search-portal | 708a0d05eee13c696ca9abd7e84ab620d3900fbe | [
"MIT"
] | 159 | 2020-05-14T14:17:34.000Z | 2022-03-23T10:28:13.000Z | harvester/sharekit/admin.py | nppo/search-portal | aedf21e334f178c049f9d6cf37cafd6efc07bc0d | [
"MIT"
] | 1 | 2021-11-11T13:37:22.000Z | 2021-11-11T13:37:22.000Z | from django.contrib import admin
from datagrowth.admin import HttpResourceAdmin
from sharekit.models import SharekitMetadataHarvest
admin.site.register(SharekitMetadataHarvest, HttpResourceAdmin)
import os
import pyemu
pyemu.utils.start_workers('template', "pestpp", "freyberg.pst", 15, worker_root='.', port=4004)
from clpy.logic.comparison import *  # NOQA
default_app_config = "organisations.permissions.apps.OrganisationPermissionsConfig"
"""Treat Paxos directory as a package."""
from django.contrib import admin
from .models import Calculation
# Register your models here.
admin.site.register(Calculation)
print("Hello World !")
print("I am <YourName>")
print("I am From <Place>")
print("I am <Your College Name and Year of Study>")
print("Email ID : <Your Email>")
print("Github : <Your Github>")
| 27.428571 | 51 | 0.661458 | 31 | 192 | 4.096774 | 0.580645 | 0.141732 | 0.188976 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.145833 | 192 | 6 | 52 | 32 | 0.77439 | 0 | 0 | 0 | 0 | 0 | 0.6875 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
f57323fef7d9721fdaebe1d7d2f042c430a065b0 | 41 | py | Python | jklib/django/utils/__init__.py | Jordan-Kowal/jklib | 84dc8ad64b9216926ba9af0ec11f1dbd5d8a53f4 | [
"MIT"
] | 1 | 2020-02-28T21:53:51.000Z | 2020-02-28T21:53:51.000Z | jklib/django/utils/__init__.py | Jordan-Kowal/jklib | 84dc8ad64b9216926ba9af0ec11f1dbd5d8a53f4 | [
"MIT"
] | null | null | null | jklib/django/utils/__init__.py | Jordan-Kowal/jklib | 84dc8ad64b9216926ba9af0ec11f1dbd5d8a53f4 | [
"MIT"
] | null | null | null | """Contains utility and QOL functions"""
| 20.5 | 40 | 0.731707 | 5 | 41 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121951 | 41 | 1 | 41 | 41 | 0.833333 | 0.829268 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
from __future__ import absolute_import
# flake8: noqa
# import apis into api package
from swagger_client.api.accounts_api import AccountsApi
from swagger_client.api.authorization_api import AuthorizationApi
from swagger_client.api.bank_connections_api import BankConnectionsApi
from swagger_client.api.banks_api import BanksApi
from swagger_client.api.categories_api import CategoriesApi
from swagger_client.api.client_configuration_api import ClientConfigurationApi
from swagger_client.api.labels_api import LabelsApi
from swagger_client.api.mandator_administration_api import MandatorAdministrationApi
from swagger_client.api.mocks_and_tests_api import MocksAndTestsApi
from swagger_client.api.notification_rules_api import NotificationRulesApi
from swagger_client.api.securities_api import SecuritiesApi
from swagger_client.api.transactions_api import TransactionsApi
from swagger_client.api.users_api import UsersApi
#!/usr/bin/python
print "Not allowed"
"""Unit test package for ml_model_evaluation."""
import torch
from torch import nn
from torch.nn import functional as F
from transformers import BertPreTrainedModel, BertModel
from modules import layers
class BertSelfSupPretain(BertPreTrainedModel):
    """
    Pre-training BERT backbone or together with LinearSelfAttn
    """

    def __init__(self, config):
        super().__init__(config)
        print(f'The model {self.__class__.__name__} is loading...')
        layers.set_seq_dropout(True)
        layers.set_my_dropout_prob(config.hidden_dropout_prob)
        self.bert = BertModel(config)
        self.sent_self_attn = layers.LinearSelfAttnAllennlp(config.hidden_size)
        self.project1 = nn.Linear(config.hidden_size, config.hidden_size)
        self.project2 = nn.Linear(config.hidden_size, config.hidden_size)
        self.init_weights()

    def forward(self, input_ids, token_type_ids=None, attention_mask=None, answers=None,
                p_sentence_spans=None, q_sentence_spans=None):
        sequence_output = self.bert(input_ids=input_ids,
                                    token_type_ids=token_type_ids,
                                    attention_mask=attention_mask)[0]
        # mask: 1 for masked value and 0 for true value
        hidden, mask, sent_mask = split_doc_sen_que(sequence_output, q_sentence_spans, p_sentence_spans)
        batch, sent_num, seq_len = mask.size()
        hidden = hidden.view(batch * sent_num, seq_len, -1)
        mask = mask.view(batch * sent_num, seq_len)
        alpha = self.sent_self_attn(hidden, mask)
        hidden = alpha.unsqueeze(1).bmm(hidden).squeeze().reshape(batch, sent_num, -1)
        assert hidden.size(-1) == sequence_output.size(-1)

        query = self.project1(hidden)
        key = self.project2(hidden)
        scores = query.bmm(key.transpose(1, 2))

        output_dict = {"logits": scores}
        if answers is not None:
            if sent_num > answers.size(1):
                scores = scores[:, :answers.size(1)]
            elif answers.size(1) > sent_num:
                answers = answers[:, :sent_num]
            assert answers.size(1) == scores.size(1)
            scores = scores + sent_mask[:, None, :] * -10000.0
            loss = F.cross_entropy(scores.reshape(-1, sent_num), answers.reshape(-1), ignore_index=-1,
                                   reduction='sum') / (batch * 1.0)
            output_dict["loss"] = loss
            _, pred = scores.max(dim=-1)
            valid_num = torch.sum(answers != -1)
            acc = torch.sum(pred == answers).to(dtype=scores.dtype) / (valid_num * 1.0)
            output_dict["acc"] = acc
            output_dict["valid_num"] = valid_num
        return output_dict

class BertSelfSupPretainClsQuery(BertPreTrainedModel):
    """
    Pre-training BERT backbone or together with LinearSelfAttn.
    Use representation of [CLS] as query to make it trained for downstream task.
    """
    model_prefix = 'self_sup_pretrain_cls_query'

    def __init__(self, config):
        super().__init__(config)
        print(f'The model {self.__class__.__name__} is loading...')
        layers.set_seq_dropout(True)
        layers.set_my_dropout_prob(config.hidden_dropout_prob)
        self.bert = BertModel(config)
        self.cls_w = nn.Linear(config.hidden_size, config.hidden_size)
        self.project1 = nn.Linear(config.hidden_size, config.hidden_size)
        self.project2 = nn.Linear(config.hidden_size, config.hidden_size)
        self.init_weights()

    def forward(self, input_ids, token_type_ids=None, attention_mask=None, answers=None,
                p_sentence_spans=None, q_sentence_spans=None):
        sequence_output = self.bert(input_ids=input_ids,
                                    token_type_ids=token_type_ids,
                                    attention_mask=attention_mask)[0]
        # mask: 1 for masked value and 0 for true value
        hidden, mask, sent_mask, cls_h = split_doc_sen_que(sequence_output, q_sentence_spans, p_sentence_spans,
                                                           sep_cls=True)
        batch, sent_num, seq_len = mask.size()
        # hidden = hidden.view(batch * sent_num, seq_len, -1)
        # hidden = hidden.view(batch, sent_num * seq_len, -1)
        # mask = mask.view(batch * sent_num, seq_len)
        hidden = layers.dropout(hidden, p=layers.my_dropout_p, training=self.training)
        cls_h = self.cls_w(cls_h)  # [batch, h]
        alpha = torch.einsum('bh,bsth->bst', cls_h, hidden)
        alpha = (alpha + mask * -10000.0).softmax(dim=-1)
        hidden = torch.einsum('bst,bsth->bsh', alpha, hidden)
        query = self.project1(hidden)
        key = self.project2(hidden)
        scores = query.bmm(key.transpose(1, 2))
        output_dict = {"logits": scores}
        if answers is not None:
            if sent_num > answers.size(1):
                scores = scores[:, :answers.size(1)]
            elif answers.size(1) > sent_num:
                answers = answers[:, :sent_num]
            assert answers.size(1) == scores.size(1)
            scores = scores + sent_mask[:, None, :] * -10000.0
            loss = F.cross_entropy(scores.reshape(-1, sent_num), answers.reshape(-1), ignore_index=-1,
                                   reduction='sum') / (batch * 1.0)
            output_dict["loss"] = loss
            _, pred = scores.max(dim=-1)
            valid_num = torch.sum(answers != -1)
            acc = torch.sum(pred == answers).to(dtype=scores.dtype) / (valid_num * 1.0)
            output_dict["acc"] = acc
            output_dict["valid_num"] = valid_num
        return output_dict
def split_sentence(hidden_state, sentence_spans):
    batch, seq_len, h = hidden_state.size()
    max_sent_len = 0
    for b in range(batch):
        max_sent_len = max(max_sent_len, max(map(lambda x: x[1] - x[0] + 1, sentence_spans[b])))
    max_sent_num = max(map(lambda x: len(x), sentence_spans))
    output = hidden_state.new_zeros((batch, max_sent_num, max_sent_len, h))
    mask = hidden_state.new_ones(batch, max_sent_num, max_sent_len)
    for b in range(batch):
        for sent_id, (start, end) in enumerate(sentence_spans[b]):
            lens = end - start + 1
            output[b][sent_id][:lens] = hidden_state[b][start:(end + 1)]
            mask[b][sent_id][:lens] = hidden_state.new_zeros(lens)
    return output, mask
def split_doc_sen_que(hidden_state, q_sentence_spans, p_sentence_spans, sep_cls=False):
    # q_hidden, q_mask = split_sentence(hidden_state, q_sentence_spans)
    # p_hidden, p_mask = split_sentence(hidden_state, p_sentence_spans)
    # return q_hidden, q_mask, p_hidden, p_mask
    cls_h = hidden_state[:, 0]
    batch = hidden_state.size(0)
    h = hidden_state.size(-1)
    max_sent_len = 0
    for b in range(batch):
        max_sent_len = max(max_sent_len, max(map(lambda x: x[1] - x[0] + 1, q_sentence_spans[b] + p_sentence_spans[b])))
    max_sent_num = max(map(lambda x: len(x[0]) + len(x[1]), zip(q_sentence_spans, p_sentence_spans)))
    output = hidden_state.new_zeros((batch, max_sent_num, max_sent_len, h))
    mask = hidden_state.new_ones(batch, max_sent_num, max_sent_len)
    sent_mask = hidden_state.new_ones(batch, max_sent_num)
    for b in range(batch):
        for sent_id, (start, end) in enumerate(q_sentence_spans[b] + p_sentence_spans[b]):
            if sep_cls and start == 0:
                assert end >= 1
                start += 1
            lens = end - start + 1
            output[b][sent_id][:lens] = hidden_state[b][start:(end + 1)]
            mask[b][sent_id][:lens] = hidden_state.new_zeros(lens)
            sent_mask[b][sent_id] = 0
    if sep_cls:
        return output, mask, sent_mask, cls_h
    return output, mask, sent_mask
| 40.363184 | 121 | 0.606311 | 1,079 | 8,113 | 4.280816 | 0.137164 | 0.036372 | 0.038103 | 0.022732 | 0.772462 | 0.750595 | 0.736523 | 0.736523 | 0.717904 | 0.674388 | 0 | 0.015952 | 0.2814 | 8,113 | 200 | 122 | 40.565 | 0.776329 | 0.092814 | 0 | 0.661538 | 0 | 0 | 0.028121 | 0.010827 | 0 | 0 | 0 | 0 | 0.030769 | 1 | 0.046154 | false | 0 | 0.038462 | 0 | 0.146154 | 0.015385 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
19703e863617688eac91cae73c0d98cc775d3f20 | 169 | py | Python | linky/tools/__init__.py | apizzimenti/LinkY | 47f493fd4ed8d61177e25f26e8f9d2f3b2a67607 | [
"MIT"
] | 1 | 2017-05-17T17:38:38.000Z | 2017-05-17T17:38:38.000Z | linky/tools/__init__.py | apizzimenti/LinkY | 47f493fd4ed8d61177e25f26e8f9d2f3b2a67607 | [
"MIT"
] | null | null | null | linky/tools/__init__.py | apizzimenti/LinkY | 47f493fd4ed8d61177e25f26e8f9d2f3b2a67607 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
from .paths import load_package
from .paths import load_command
from .help import helping
__all__ = ["load_package", "load_command", "helping"]
| 21.125 | 53 | 0.763314 | 24 | 169 | 5.041667 | 0.541667 | 0.14876 | 0.247934 | 0.31405 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006757 | 0.12426 | 169 | 7 | 54 | 24.142857 | 0.810811 | 0.12426 | 0 | 0 | 0 | 0 | 0.210884 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
1997468adb2be470585667dae4fc8e1eb32343bf | 84 | py | Python | athena/utils/gluonts/__init__.py | NREL/ATHENA-forecast | ddf51ff5dd3bdbab55076ea335668bc569672a02 | [
"BSD-3-Clause"
] | 1 | 2021-07-02T09:20:51.000Z | 2021-07-02T09:20:51.000Z | athena/utils/gluonts/__init__.py | NREL/ATHENA-forecast | ddf51ff5dd3bdbab55076ea335668bc569672a02 | [
"BSD-3-Clause"
] | null | null | null | athena/utils/gluonts/__init__.py | NREL/ATHENA-forecast | ddf51ff5dd3bdbab55076ea335668bc569672a02 | [
"BSD-3-Clause"
] | 1 | 2021-09-02T11:34:01.000Z | 2021-09-02T11:34:01.000Z | from . evaluation import evaluate_gluonts
from . transform import DataTransformGluon | 42 | 42 | 0.869048 | 9 | 84 | 8 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107143 | 84 | 2 | 42 | 42 | 0.96 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
5fd68527ce40ebc582c7ae4e6d718b54a11875c9 | 63 | py | Python | CovidSonglinePawit/__init__.py | PawitKrai/CovidSonglinePawit | 497b2421641ebf942774ad2c82e427feda5caf45 | [
"MIT"
] | null | null | null | CovidSonglinePawit/__init__.py | PawitKrai/CovidSonglinePawit | 497b2421641ebf942774ad2c82e427feda5caf45 | [
"MIT"
] | null | null | null | CovidSonglinePawit/__init__.py | PawitKrai/CovidSonglinePawit | 497b2421641ebf942774ad2c82e427feda5caf45 | [
"MIT"
] | null | null | null | #__init__.py
from CovidSonglinePawit.covidreport import report | 31.5 | 49 | 0.873016 | 7 | 63 | 7.285714 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.079365 | 63 | 2 | 49 | 31.5 | 0.87931 | 0.174603 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
5feba2c104e82f836a366fbe03ce20b98e4a9284 | 191 | py | Python | FAS/forms.py | codeLAlit/FAS | 99369198fc85b24fc55f77d33afb43834b6f6e7f | [
"MIT"
] | null | null | null | FAS/forms.py | codeLAlit/FAS | 99369198fc85b24fc55f77d33afb43834b6f6e7f | [
"MIT"
] | null | null | null | FAS/forms.py | codeLAlit/FAS | 99369198fc85b24fc55f77d33afb43834b6f6e7f | [
"MIT"
] | null | null | null | from django import forms
class emp_reg(forms.Form):
    emp_code = forms.CharField(label="Employee Code", max_length=8)
    emp_name = forms.CharField(label="Employee Name", max_length=100)
| 31.833333 | 67 | 0.753927 | 29 | 191 | 4.793103 | 0.586207 | 0.201439 | 0.273381 | 0.388489 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.024096 | 0.13089 | 191 | 6 | 68 | 31.833333 | 0.813253 | 0 | 0 | 0 | 0 | 0 | 0.135417 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
27285689b4d5d21b4f671763486bacadc0af09c0 | 60 | py | Python | axesresearch/settings/__init__.py | kevinmcguinness/axes-research | 2d7bffcec128d30ae538b5979148aa90b91df393 | [
"Apache-2.0"
] | 1 | 2015-03-31T11:58:35.000Z | 2015-03-31T11:58:35.000Z | axesresearch/settings/__init__.py | kevinmcguinness/axes-research | 2d7bffcec128d30ae538b5979148aa90b91df393 | [
"Apache-2.0"
] | null | null | null | axesresearch/settings/__init__.py | kevinmcguinness/axes-research | 2d7bffcec128d30ae538b5979148aa90b91df393 | [
"Apache-2.0"
] | null | null | null | # By default, use the development settings
from dev import * | 30 | 42 | 0.783333 | 9 | 60 | 5.222222 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 60 | 2 | 43 | 30 | 0.94 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
27919f24862ce116c3efa55f0282d2d7b7809d29 | 111 | py | Python | pyinstaller/hooks/hook-ms_deisotope.py | mstim/glycresoft | 1d305c42c7e6cba60326d8246e4a485596a53513 | [
"Apache-2.0"
] | 18 | 2017-09-01T12:26:12.000Z | 2022-02-23T02:31:29.000Z | pyinstaller/hooks/hook-ms_deisotope.py | mstim/glycresoft | 1d305c42c7e6cba60326d8246e4a485596a53513 | [
"Apache-2.0"
] | 19 | 2017-03-12T20:40:36.000Z | 2022-03-31T22:50:47.000Z | pyinstaller/hooks/hook-ms_deisotope.py | mstim/glycresoft | 1d305c42c7e6cba60326d8246e4a485596a53513 | [
"Apache-2.0"
] | 14 | 2016-05-06T02:25:30.000Z | 2022-03-31T14:40:06.000Z |
from PyInstaller.utils.hooks import collect_submodules
hiddenimports = collect_submodules("ms_deisotope._c")
| 22.2 | 54 | 0.846847 | 13 | 111 | 6.923077 | 0.846154 | 0.377778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081081 | 111 | 4 | 55 | 27.75 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0.136364 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
27e2b1fdee13ad4af42c1ce4038ba5acc06d2c90 | 9,244 | py | Python | pfbayes/common/metric.py | xinshi-chen/MetaParticleFlow | 2fb400f7a9ffe03cd654fe78f1d51c405cf6b7df | [
"MIT"
] | 12 | 2019-05-29T02:29:09.000Z | 2021-06-15T14:24:35.000Z | pfbayes/common/metric.py | xinshi-chen/MetaParticleFlow | 2fb400f7a9ffe03cd654fe78f1d51c405cf6b7df | [
"MIT"
] | null | null | null | pfbayes/common/metric.py | xinshi-chen/MetaParticleFlow | 2fb400f7a9ffe03cd654fe78f1d51c405cf6b7df | [
"MIT"
] | 5 | 2019-08-11T23:29:26.000Z | 2022-03-12T15:58:43.000Z | from __future__ import print_function
from __future__ import absolute_import
from __future__ import division
import numpy as np
import sklearn.metrics.pairwise as sk_metric
from pfbayes.common.distributions import KDE
import torch
from pfbayes.common.cmd_args import cmd_args
from numpy import linalg as LA
import pickle
import os
def square_mmd_fine(p_samples, q_samples, n_p, n_q, kernel_type):
    """
    n_p: number of samples from true distribution p
    assume n_p >> n_q
    """
    kernel_dict = {
        'gaussian': sk_metric.rbf_kernel,
        'laplacian': sk_metric.laplacian_kernel,
        'sigmoid': sk_metric.sigmoid_kernel,
        'polynomial': sk_metric.polynomial_kernel,
        'cosine': sk_metric.cosine_similarity,
    }
    kernel = kernel_dict[kernel_type]
    p_samples = np.array(p_samples)
    q_samples = np.array(q_samples)
    k_xi_xj = kernel(p_samples, p_samples)
    k_yi_yj = kernel(q_samples, q_samples)
    k_xi_yj = kernel(p_samples, q_samples)
    off_diag_k_xi_xj = (np.sum(k_xi_xj) - np.sum(np.diag(k_xi_xj))) / n_p / (n_p - 1)
    off_diag_k_yi_yj = (np.sum(k_yi_yj) - np.sum(np.diag(k_yi_yj))) / n_q / (n_q - 1)
    sum_k_xi_yj = np.sum(k_xi_yj) * 2 / n_p / n_q
    return off_diag_k_xi_xj + off_diag_k_yi_yj - sum_k_xi_yj
def e_p_log_q(p_samples, q_samples):
    q_samples = torch.Tensor(q_samples)
    p_samples = torch.Tensor(p_samples)
    kde = KDE(q_samples)
    log_score = kde.log_pdf(p_samples)
    return torch.mean(log_score)
class EvalMetric(object):
    """
    using numpy
    """
    def __init__(self, particles, true_mean, true_cov, dim, num_true_samples=None):
        self.dim = dim
        self.particles = np.array(particles).reshape(-1, dim)
        self.n_particles = self.particles.shape[0]
        self.true_mean = np.array(true_mean).reshape(dim)
        self.true_cov = np.array(true_cov).reshape(dim, dim)
        if num_true_samples is None:
            self.n_samples = max(5000, 10 * cmd_args.num_particles)
        else:
            self.n_samples = num_true_samples

    def square_mmd(self, kernel_type='gaussian'):
        p_particles = np.random.multivariate_normal(self.true_mean.astype(np.float64), self.true_cov.astype(np.float64), self.n_samples)
        return square_mmd_fine(p_particles, self.particles, self.n_samples, self.n_particles, kernel_type)

    def cross_entropy(self):
        p_particles = np.random.multivariate_normal(self.true_mean.astype(np.float64), self.true_cov.astype(np.float64), self.n_samples)
        return -np.array(e_p_log_q(p_particles, self.particles).cpu())

    def integral_eval(self, test_function):
        full_path = os.path.realpath(__file__)
        path = os.path.dirname(full_path)
        filename = path + '/test_function/test_function_' + str(cmd_args.gauss_dim) + '.pkl'
        with open(filename, 'rb') as f:
            matrix_aa, matrix_a, matrix_b, a, b = pickle.load(f)
        if test_function == 'x':
            return self.dist_of_mean()
        elif test_function == 'xAx':
            return self.distance_of_xax(matrix_aa)
        elif test_function == 'quadratic':
            return self.distance_of_quadratic(matrix_a, a, matrix_b, b)
        else:
            print('test function not supported')

    def dist_of_mean(self, q_samples=None):
        if q_samples is None:
            q_samples = self.particles
        else:
            q_samples = np.array(q_samples).reshape(-1, self.dim)
        q_mean = np.mean(q_samples, 0)
        return LA.norm(q_mean - self.true_mean)

    def distance_of_xax(self, matrix_a, q_samples=None):
        """||E_q[x'Ax] - E_p[x'Ax]||"""
        if q_samples is None:
            q_samples = self.particles
        else:
            q_samples = np.array(q_samples).reshape(-1, self.dim)
        mean = self.true_mean.reshape(1, self.dim)
        true_val = np.trace(np.matmul(matrix_a, self.true_cov))
        true_val += np.sum(np.dot(mean, np.matmul(matrix_a, mean.T)))
        est_ax = np.matmul(matrix_a, q_samples.T)
        est_xax = np.diag(np.matmul(q_samples, est_ax))
        est_xax = np.mean(est_xax)
        return np.abs(true_val - est_xax)

    def distance_of_quadratic(self, matrix_a, a, matrix_b, b, q_samples=None):
        """||E_q[(Ax+a)'(Bx+b)] - E_p[~~~]||"""
        if q_samples is None:
            q_samples = self.particles
        else:
            q_samples = np.array(q_samples).reshape(-1, self.dim)
        # format
        matrix_a = np.array(matrix_a)
        matrix_b = np.array(matrix_b)
        true_val = np.trace(np.matmul(np.matmul(matrix_a, self.true_cov), matrix_b.T))
        true_val += np.dot(matrix_a.dot(self.true_mean) + a, matrix_b.dot(self.true_mean) + b)
        est_ax_a = np.matmul(q_samples, matrix_a.T) + a
        est_bx_b = np.matmul(q_samples, matrix_b.T) + b
        est_val = np.diag(np.matmul(est_ax_a, est_bx_b.T))
        est_val = np.mean(est_val)
        return np.abs(true_val - est_val)
def create_metric_dict(num_epoch, len_sequence):
    metric = dict()
    metric['mmd'] = dict()
    metric['mmd']['gaussian'] = np.zeros([num_epoch, len_sequence], dtype=np.float32)
    metric['mmd']['laplacian'] = np.zeros([num_epoch, len_sequence], dtype=np.float32)
    metric['mmd']['sigmoid'] = np.zeros([num_epoch, len_sequence], dtype=np.float32)
    metric['mmd']['polynomial'] = np.zeros([num_epoch, len_sequence], dtype=np.float32)
    metric['mmd']['cosine'] = np.zeros([num_epoch, len_sequence], dtype=np.float32)
    metric['cross-entropy'] = np.zeros([num_epoch, len_sequence], dtype=np.float32)
    metric['integral-eval'] = dict()
    metric['integral-eval']['x'] = np.zeros([num_epoch, len_sequence], dtype=np.float32)
    metric['integral-eval']['xAx'] = np.zeros([num_epoch, len_sequence], dtype=np.float32)
    metric['integral-eval']['quadratic'] = np.zeros([num_epoch, len_sequence], dtype=np.float32)
    print('evaluate MMD, cross-entropy and discrepancy of integral evaluations')
    return metric
class EvalMetricKbr(object):
    """
    using numpy
    """
    def __init__(self, weights, particles, equal_particles, true_mean, true_cov, dim, num_true_samples=None):
        self.dim = dim
        self.particles = np.array(particles).reshape(-1, dim)
        self.n_particles = self.particles.shape[0]
        self.true_mean = np.array(true_mean).reshape(dim)
        self.true_cov = np.array(true_cov).reshape(dim, dim)
        self.weights = np.array(weights).reshape(-1)
        self.equal_particles = np.array(equal_particles).reshape(-1, dim)
        if num_true_samples is None:
            self.n_samples = max(5000, 10 * cmd_args.num_particles)
        else:
            self.n_samples = num_true_samples

    def square_mmd(self, kernel_type='gaussian'):
        p_particles = np.random.multivariate_normal(self.true_mean.astype(np.float64), self.true_cov.astype(np.float64), self.n_samples)
        return square_mmd_fine(p_particles, self.equal_particles, self.n_samples, self.n_particles, kernel_type)

    def cross_entropy(self):
        p_particles = np.random.multivariate_normal(self.true_mean.astype(np.float64), self.true_cov.astype(np.float64), self.n_samples)
        return -np.array(e_p_log_q(p_particles, self.equal_particles).cpu())

    def integral_eval(self, test_function):
        full_path = os.path.realpath(__file__)
        path = os.path.dirname(full_path)
        filename = path + '/test_function/test_function.pkl'
        with open(filename, 'rb') as f:
            matrix_aa, matrix_a, matrix_b, a, b = pickle.load(f)
        if test_function == 'x':
            return self.dist_of_mean()
        elif test_function == 'xAx':
            return self.distance_of_xax(matrix_aa)
        elif test_function == 'quadratic':
            return self.distance_of_quadratic(matrix_a, a, matrix_b, b)
        else:
            print('test function not supported')

    def dist_of_mean(self):
        q_mean = np.sum(self.weights.reshape(-1, 1) * self.particles, 0)
        return LA.norm(q_mean - self.true_mean)

    def distance_of_xax(self, matrix_a):
        """||E_q[x'Ax] - E_p[x'Ax]||"""
        q_samples = self.particles
        mean = self.true_mean.reshape(1, self.dim)
        true_val = np.trace(np.matmul(matrix_a, self.true_cov))
        true_val += np.sum(np.dot(mean, np.matmul(matrix_a, mean.T)))
        est_ax = np.matmul(matrix_a, q_samples.T)
        est_xax = np.diag(np.matmul(q_samples, est_ax))
        est_xax = np.sum(self.weights.reshape(-1) * est_xax)
        return np.abs(true_val - est_xax)

    def distance_of_quadratic(self, matrix_a, a, matrix_b, b):
        """||E_q[(Ax+a)'(Bx+b)] - E_p[~~~]||"""
        q_samples = self.particles
        # format
        matrix_a = np.array(matrix_a)
        matrix_b = np.array(matrix_b)
        true_val = np.trace(np.matmul(np.matmul(matrix_a, self.true_cov), matrix_b.T))
        true_val += np.dot(matrix_a.dot(self.true_mean) + a, matrix_b.dot(self.true_mean) + b)
        est_ax_a = np.matmul(q_samples, matrix_a.T) + a
        est_bx_b = np.matmul(q_samples, matrix_b.T) + b
        est_val = np.diag(np.matmul(est_ax_a, est_bx_b.T))
        est_val = np.sum(self.weights.reshape(-1) * est_val)
        return np.abs(true_val - est_val)
| 40.191304 | 136 | 0.657508 | 1,424 | 9,244 | 3.974719 | 0.104635 | 0.050883 | 0.029682 | 0.033569 | 0.769965 | 0.742933 | 0.725088 | 0.717314 | 0.710247 | 0.69258 | 0 | 0.009068 | 0.212678 | 9,244 | 229 | 137 | 40.366812 | 0.768618 | 0.02434 | 0 | 0.564706 | 0 | 0 | 0.045597 | 0.006817 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.064706 | 0 | 0.288235 | 0.023529 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
fd8e293c314a1ac55d0177b42161abb290baa0d0 | 248 | py | Python | app/config.py | MattiasZurkovic/CSGO_News | d74555e57037ad9b71cceb2449bf7f81c99aa78f | [
"Apache-2.0"
] | 1 | 2015-07-24T19:30:40.000Z | 2015-07-24T19:30:40.000Z | app/config.py | MattiasZurkovic/CSGO_News | d74555e57037ad9b71cceb2449bf7f81c99aa78f | [
"Apache-2.0"
] | null | null | null | app/config.py | MattiasZurkovic/CSGO_News | d74555e57037ad9b71cceb2449bf7f81c99aa78f | [
"Apache-2.0"
] | null | null | null | consumer_key='aRahVNAuCVdWy5PGFjMoAIWui'
consumer_secret='fABUmGW1uV4pnlgTpwSx8KAxQdbVH6fz2le4dEW4e9wlnxmP2b'
access_token_key='2834176217-coE5CGfxIdniddoou1HOBcG3r4KVdVG2UzJQStS'
access_token_secret='3tfg6G4clDY42ie6wYekxf77xHGKCZjmWtUzIEqRTHqoW'
| 49.6 | 69 | 0.931452 | 15 | 248 | 15 | 0.666667 | 0.097778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.135246 | 0.016129 | 248 | 4 | 70 | 62 | 0.786885 | 0 | 0 | 0 | 0 | 0 | 0.685484 | 0.685484 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
fda5e1007ee96c32e432e9a0f453fbc39aba1e7f | 519 | py | Python | codeforces/api/json_objects/__init__.py | ericbrandwein/CodeforcesAPI | 12ae641910a3308033584dc518bb2fc0173e56f3 | [
"MIT"
] | 26 | 2015-06-21T16:19:44.000Z | 2021-11-15T12:32:25.000Z | codeforces/api/json_objects/__init__.py | ericbrandwein/CodeforcesAPI | 12ae641910a3308033584dc518bb2fc0173e56f3 | [
"MIT"
] | 5 | 2015-03-10T06:00:52.000Z | 2020-01-18T12:59:25.000Z | codeforces/api/json_objects/__init__.py | ericbrandwein/CodeforcesAPI | 12ae641910a3308033584dc518bb2fc0173e56f3 | [
"MIT"
] | 12 | 2015-04-24T17:16:50.000Z | 2022-01-04T14:21:25.000Z | from ..json_objects.base_json_object import *
from ..json_objects.problem import *
from ..json_objects.problem_statistics import *
from ..json_objects.contest import *
from ..json_objects.user import *
from ..json_objects.member import *
from ..json_objects.party import *
from ..json_objects.submission import *
from ..json_objects.rating_change import *
from ..json_objects.judge_protocol import *
from ..json_objects.hack import *
from ..json_objects.problem_result import *
from ..json_objects.ranklist_row import * | 39.923077 | 47 | 0.801541 | 72 | 519 | 5.5 | 0.291667 | 0.262626 | 0.492424 | 0.636364 | 0.212121 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.098266 | 519 | 13 | 48 | 39.923077 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
fdbda646d2a0f91568026b465be36f6ae4150dbe | 440 | py | Python | goblin/driver/__init__.py | goblin-ogm/goblin | c5801affb30be690e6cc260010a8414fdea31291 | [
"Apache-2.0"
] | 11 | 2020-01-26T14:35:23.000Z | 2021-12-01T17:04:19.000Z | goblin/driver/__init__.py | goblin-ogm/goblin | c5801affb30be690e6cc260010a8414fdea31291 | [
"Apache-2.0"
] | 3 | 2020-04-21T20:34:23.000Z | 2021-05-10T15:31:47.000Z | goblin/driver/__init__.py | goblin-ogm/goblin | c5801affb30be690e6cc260010a8414fdea31291 | [
"Apache-2.0"
] | 4 | 2020-04-21T09:50:35.000Z | 2022-01-12T22:16:22.000Z | from aiogremlin import Cluster, DriverRemoteConnection, Graph # type: ignore
from aiogremlin.driver.client import Client # type: ignore
from aiogremlin.driver.connection import Connection # type: ignore
from aiogremlin.driver.pool import ConnectionPool # type: ignore
from aiogremlin.driver.server import GremlinServer # type: ignore
from gremlin_python.driver.serializer import GraphSONMessageSerializer # type: ignore
AsyncGraph = Graph
| 48.888889 | 85 | 0.834091 | 51 | 440 | 7.176471 | 0.392157 | 0.163934 | 0.191257 | 0.262295 | 0.327869 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.109091 | 440 | 8 | 86 | 55 | 0.933673 | 0.175 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.857143 | 0 | 0.857143 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
fde575cefa723a9ef79f6aaff09174faf2e17660 | 241 | py | Python | website/doctype/blog_post/templates/pages/blog.py | hafeez3000/wnframework | 1160c108fef8f4956f5e14a072ea43e75230b9eb | [
"MIT"
] | 6 | 2015-08-24T23:10:57.000Z | 2019-11-10T06:57:23.000Z | website/doctype/blog_post/templates/pages/blog.py | hafeez3000/wnframework | 1160c108fef8f4956f5e14a072ea43e75230b9eb | [
"MIT"
] | null | null | null | website/doctype/blog_post/templates/pages/blog.py | hafeez3000/wnframework | 1160c108fef8f4956f5e14a072ea43e75230b9eb | [
"MIT"
] | 5 | 2015-01-05T06:59:45.000Z | 2020-11-07T15:15:07.000Z | # Copyright (c) 2013, Web Notes Technologies Pvt. Ltd. and Contributors
# MIT License. See license.txt
from __future__ import unicode_literals
import webnotes
def get_context():
return webnotes.doc("Blog Settings", "Blog Settings").fields | 30.125 | 71 | 0.784232 | 33 | 241 | 5.545455 | 0.848485 | 0.131148 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019048 | 0.128631 | 241 | 8 | 72 | 30.125 | 0.852381 | 0.406639 | 0 | 0 | 0 | 0 | 0.184397 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.5 | 0.25 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 5 |
fdef65358f811d396fef06191094726e506165b0 | 96 | py | Python | tools/archive/archive.py | AlexP11223/WebChangeNotifier | 96317685e260af6920cf9cc65f0ceb4822db1154 | [
"MIT"
] | 3 | 2018-03-23T22:04:59.000Z | 2021-09-07T22:06:22.000Z | tools/archive/archive.py | AlexP11223/WebChangeNotifier | 96317685e260af6920cf9cc65f0ceb4822db1154 | [
"MIT"
] | 2 | 2018-06-01T14:58:53.000Z | 2021-06-01T22:01:03.000Z | tools/archive/archive.py | AlexP11223/WebChangeNotifier | 96317685e260af6920cf9cc65f0ceb4822db1154 | [
"MIT"
] | 1 | 2018-11-08T18:40:54.000Z | 2018-11-08T18:40:54.000Z | import shutil
import sys
shutil.make_archive(sys.argv[1], "zip", sys.argv[2], sys.argv[3])
| 19.2 | 66 | 0.6875 | 17 | 96 | 3.823529 | 0.588235 | 0.323077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.036145 | 0.135417 | 96 | 4 | 67 | 24 | 0.746988 | 0 | 0 | 0 | 0 | 0 | 0.032609 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
a906fc77230a446170944a03a6e61f71ecef168c | 75 | py | Python | Day Pogress - 18~100/Day 03/Tresure-Island Project/tempCodeRunnerFile.py | Abbhiishek/Python | 3ad5310ca29469f353f9afa99531f01273eec6bd | [
"MIT"
] | 1 | 2022-02-04T07:04:34.000Z | 2022-02-04T07:04:34.000Z | Day Pogress - 18~100/Day 03/Tresure-Island Project/tempCodeRunnerFile.py | Abbhiishek/Python | 3ad5310ca29469f353f9afa99531f01273eec6bd | [
"MIT"
] | 12 | 2022-02-13T12:10:32.000Z | 2022-02-17T09:36:49.000Z | Day Pogress - 18~100/Day 03/Tresure-Island Project/tempCodeRunnerFile.py | Abbhiishek/Python | 3ad5310ca29469f353f9afa99531f01273eec6bd | [
"MIT"
] | null | null | null | elif choice_number == 1:
# elif choice_number == 2:
# else:
# print | 9.375 | 26 | 0.6 | 10 | 75 | 4.3 | 0.7 | 0.465116 | 0.744186 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.036364 | 0.266667 | 75 | 8 | 27 | 9.375 | 0.745455 | 0.533333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
e301d9b06d7843c17d09041bc21393b5ed964a82 | 59 | py | Python | bumpv/client/vcs/__init__.py | kylie-a/bumpversion | 13a150daa02f29e7dd74b5240c54c7929ec176b8 | [
"MIT"
] | null | null | null | bumpv/client/vcs/__init__.py | kylie-a/bumpversion | 13a150daa02f29e7dd74b5240c54c7929ec176b8 | [
"MIT"
] | null | null | null | bumpv/client/vcs/__init__.py | kylie-a/bumpversion | 13a150daa02f29e7dd74b5240c54c7929ec176b8 | [
"MIT"
] | 1 | 2019-11-24T15:36:19.000Z | 2019-11-24T15:36:19.000Z | from .vcs import get_vcs, WorkingDirectoryIsDirtyException
| 29.5 | 58 | 0.881356 | 6 | 59 | 8.5 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.084746 | 59 | 1 | 59 | 59 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
e33290af77bf94757395650bd581889f234f0eab | 97 | py | Python | Python_for_Everybody-Coursera/CHAPTER_9/chap_09_02.py | andresdelarosa1887/Public-Projects | db8d8e0c0f5f0f7326346462fcdfe21ce8142a12 | [
"Unlicense"
] | 1 | 2020-09-29T17:29:34.000Z | 2020-09-29T17:29:34.000Z | Python_for_Everybody-Coursera/CHAPTER_9/chap_09_02.py | andresdelarosa1887/Public-Projects | db8d8e0c0f5f0f7326346462fcdfe21ce8142a12 | [
"Unlicense"
] | null | null | null | Python_for_Everybody-Coursera/CHAPTER_9/chap_09_02.py | andresdelarosa1887/Public-Projects | db8d8e0c0f5f0f7326346462fcdfe21ce8142a12 | [
"Unlicense"
] | null | null | null |
ccc= dict()
ccc['csev'] = 1
ccc['cwen'] = 1
#print(ccc)
ccc['cwen']= ccc['cwen'] + 1
print(ccc)
| 12.125 | 28 | 0.556701 | 17 | 97 | 3.176471 | 0.352941 | 0.388889 | 0.296296 | 0.481481 | 0.592593 | 0 | 0 | 0 | 0 | 0 | 0 | 0.036585 | 0.154639 | 97 | 7 | 29 | 13.857143 | 0.621951 | 0.103093 | 0 | 0 | 0 | 0 | 0.188235 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.2 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
e34949e322b3087aec4604c60454e9400e0217cf | 34 | py | Python | server/blueprints/upload/__init__.py | mmaltsev/onti40 | 4b00e7130e2dece80afd9680b38ebc311c1d60f5 | [
"MIT"
] | null | null | null | server/blueprints/upload/__init__.py | mmaltsev/onti40 | 4b00e7130e2dece80afd9680b38ebc311c1d60f5 | [
"MIT"
] | null | null | null | server/blueprints/upload/__init__.py | mmaltsev/onti40 | 4b00e7130e2dece80afd9680b38ebc311c1d60f5 | [
"MIT"
] | null | null | null | from .upload import upload_handler | 34 | 34 | 0.882353 | 5 | 34 | 5.8 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088235 | 34 | 1 | 34 | 34 | 0.935484 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
e37419148c5e5670209ba558fd2ad4ae6a214439 | 34 | py | Python | imageserver/src/lib/statistics.py | nicolageorge/ownimageserver | 630576ff944215a97476f4e10d88bbae1a97c543 | [
"MIT"
] | null | null | null | imageserver/src/lib/statistics.py | nicolageorge/ownimageserver | 630576ff944215a97476f4e10d88bbae1a97c543 | [
"MIT"
] | 3 | 2021-09-08T00:48:49.000Z | 2022-03-11T23:41:23.000Z | imageserver/src/lib/statistics.py | nicolageorge/ownimageserver | 630576ff944215a97476f4e10d88bbae1a97c543 | [
"MIT"
] | null | null | null | class Statistics(object):
pass | 17 | 25 | 0.735294 | 4 | 34 | 6.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.176471 | 34 | 2 | 26 | 17 | 0.892857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
e374c9e1e50c6126451bff22f1e930b3da41f19f | 155 | py | Python | pubclouds/skydrive/__init__.py | mk-fg/tahoe-lafs-public-clouds | 84e61e1742db08f1868e09a6ec77b762d41f85c2 | [
"WTFPL"
] | 21 | 2015-01-23T04:39:54.000Z | 2020-04-07T17:39:55.000Z | pubclouds/skydrive/__init__.py | mk-fg/tahoe-lafs-public-clouds | 84e61e1742db08f1868e09a6ec77b762d41f85c2 | [
"WTFPL"
] | null | null | null | pubclouds/skydrive/__init__.py | mk-fg/tahoe-lafs-public-clouds | 84e61e1742db08f1868e09a6ec77b762d41f85c2 | [
"WTFPL"
] | 2 | 2020-06-29T15:56:51.000Z | 2021-08-21T07:28:37.000Z |
from allmydata.storage.backends.cloud.skydrive.skydrive_container import configure_skydrive_container
configure_container = configure_skydrive_container
| 31 | 101 | 0.903226 | 17 | 155 | 7.882353 | 0.529412 | 0.380597 | 0.38806 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.058065 | 155 | 4 | 102 | 38.75 | 0.917808 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
8b82826ad67019a1ab8a8d94a4ad5f0fd638fb0f | 24 | py | Python | flask_app/app/models.py | ruteckimikolaj/demo-gatsby-flask-scraper | 09490bac49147760a1301012ffa2619e1c690c78 | [
"MIT"
] | null | null | null | flask_app/app/models.py | ruteckimikolaj/demo-gatsby-flask-scraper | 09490bac49147760a1301012ffa2619e1c690c78 | [
"MIT"
] | 1 | 2021-03-31T19:32:20.000Z | 2021-03-31T19:32:20.000Z | flask_app/app/models.py | ruteckimikolaj/demo-gatsby-flask-scraper | 09490bac49147760a1301012ffa2619e1c690c78 | [
"MIT"
] | null | null | null | from app import db
| 6 | 19 | 0.625 | 4 | 24 | 3.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.375 | 24 | 3 | 20 | 8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
8bb5d5ec27c6744767b1ddcd4771827f1ef9fff5 | 270 | py | Python | opaware/ingests/__init__.py | cyclogenesis-au/opaware | 871f2af83a1958c9171f39dc73357d0c0859c9ca | [
"BSD-3-Clause"
] | null | null | null | opaware/ingests/__init__.py | cyclogenesis-au/opaware | 871f2af83a1958c9171f39dc73357d0c0859c9ca | [
"BSD-3-Clause"
] | null | null | null | opaware/ingests/__init__.py | cyclogenesis-au/opaware | 871f2af83a1958c9171f39dc73357d0c0859c9ca | [
"BSD-3-Clause"
] | 1 | 2021-02-26T14:49:19.000Z | 2021-02-26T14:49:19.000Z | from .ambient_json import ingest_ambient
"""
===============
opaware.ingests (opaware.ingests)
===============
.. currentmodule:: opaware.ingests
This module contains procedures for reading and writing.
.. autosummary::
:toctree: generated/
ingest_ambient
"""
| 20.769231 | 57 | 0.666667 | 26 | 270 | 6.807692 | 0.730769 | 0.237288 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 270 | 12 | 58 | 22.5 | 0.75641 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
8be421f474cc68333a103cf034943550b886b021 | 20 | py | Python | code/sample_2-2-4.py | KoyanagiHitoshi/AtCoder-Python-Introduction | 6d014e333a873f545b4d32d438e57cf428b10b96 | [
"MIT"
] | 1 | 2022-03-29T13:50:12.000Z | 2022-03-29T13:50:12.000Z | code/sample_2-2-4.py | KoyanagiHitoshi/AtCoder-Python-Introduction | 6d014e333a873f545b4d32d438e57cf428b10b96 | [
"MIT"
] | null | null | null | code/sample_2-2-4.py | KoyanagiHitoshi/AtCoder-Python-Introduction | 6d014e333a873f545b4d32d438e57cf428b10b96 | [
"MIT"
] | null | null | null | print(True or True)
| 10 | 19 | 0.75 | 4 | 20 | 3.75 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15 | 20 | 1 | 20 | 20 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
9a5717f062dd11cb61743b3dce5c1fd756394c10 | 22 | py | Python | request.py | bHodges97/timetable | bca2df1e54a288c27d71f0fa27cced7af45e9b0e | [
"MIT"
] | null | null | null | request.py | bHodges97/timetable | bca2df1e54a288c27d71f0fa27cced7af45e9b0e | [
"MIT"
] | null | null | null | request.py | bHodges97/timetable | bca2df1e54a288c27d71f0fa27cced7af45e9b0e | [
"MIT"
] | null | null | null | import configparser
| 5.5 | 19 | 0.818182 | 2 | 22 | 9 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 22 | 3 | 20 | 7.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
9a580ca64dd0ea28427d0938dedcab21841f5233 | 178 | py | Python | python2.7libs/hammer_tools/widgets/__init__.py | anvdev/Hammer-Tools | 0211ec837da6754e537c98624ecd07c23abab28e | [
"Apache-2.0"
] | 19 | 2019-10-09T13:48:11.000Z | 2021-06-14T01:25:23.000Z | python2.7libs/hammer_tools/widgets/__init__.py | anvdev/Hammer-Tools | 0211ec837da6754e537c98624ecd07c23abab28e | [
"Apache-2.0"
] | 219 | 2019-10-08T14:44:48.000Z | 2021-06-19T06:27:46.000Z | python2.7libs/hammer_tools/widgets/__init__.py | anvdev/Hammer-Tools | 0211ec837da6754e537c98624ecd07c23abab28e | [
"Apache-2.0"
] | 3 | 2020-02-14T06:18:06.000Z | 2020-11-25T20:47:06.000Z | from .input_field import InputField
from .location_field import LocationField
from .file_path_field import FilePathField
from .slider import Slider
from .combobx import ComboBox
| 29.666667 | 42 | 0.859551 | 24 | 178 | 6.208333 | 0.541667 | 0.221477 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.11236 | 178 | 5 | 43 | 35.6 | 0.943038 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
9a67619df30b4390db893b936e710bbe6260154a | 75 | py | Python | web_scripts/parts.py | celskeggs/hwops | 55b847a19ec0d2afa75c613de2ffd6deef5c227f | [
"MIT"
] | 1 | 2021-10-12T04:03:56.000Z | 2021-10-12T04:03:56.000Z | web_scripts/parts.py | celskeggs/hwops | 55b847a19ec0d2afa75c613de2ffd6deef5c227f | [
"MIT"
] | 1 | 2019-05-06T21:33:47.000Z | 2019-05-06T21:34:52.000Z | web_scripts/parts.py | sipb/hwops | 55b847a19ec0d2afa75c613de2ffd6deef5c227f | [
"MIT"
] | null | null | null | #!/usr/bin/python2
# -*- coding: utf-8 -*-
import main
main.print_parts()
| 12.5 | 23 | 0.64 | 11 | 75 | 4.272727 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030769 | 0.133333 | 75 | 5 | 24 | 15 | 0.692308 | 0.52 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 5 |
7bd1b5637b46cbf260b4cfc0430e95bf6ca2a588 | 53 | py | Python | script_runner/__init__.py | sajjadmaneshi/dws_dev_007_python_q2 | b95617041f13de43fbdce398adb0cbbcc6276a1e | [
"Apache-2.0"
] | null | null | null | script_runner/__init__.py | sajjadmaneshi/dws_dev_007_python_q2 | b95617041f13de43fbdce398adb0cbbcc6276a1e | [
"Apache-2.0"
] | null | null | null | script_runner/__init__.py | sajjadmaneshi/dws_dev_007_python_q2 | b95617041f13de43fbdce398adb0cbbcc6276a1e | [
"Apache-2.0"
] | null | null | null | from script_runner.script_runner import script_runner | 53 | 53 | 0.924528 | 8 | 53 | 5.75 | 0.5 | 0.782609 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.056604 | 53 | 1 | 53 | 53 | 0.92 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
d025446e48f593c45251511938177f50c1f875ba | 938 | py | Python | tests/test_number_to_word.py | gillespied/ideal-octo-spoon | fbc4d16fcce235c5e99e9f592c24f9c11d66dde3 | [
"MIT"
] | null | null | null | tests/test_number_to_word.py | gillespied/ideal-octo-spoon | fbc4d16fcce235c5e99e9f592c24f9c11d66dde3 | [
"MIT"
] | null | null | null | tests/test_number_to_word.py | gillespied/ideal-octo-spoon | fbc4d16fcce235c5e99e9f592c24f9c11d66dde3 | [
"MIT"
] | null | null | null | import pytest
from number_to_word import number_to_word
def test_valid_string():
assert number_to_word.number_to_word("111") == 'one hundred eleven'
def test_invalid_string():
with pytest.raises(ValueError):
number_to_word.number_to_word("not a number")
def test_int():
assert number_to_word.number_to_word(111) == 'one hundred eleven'
def test_float():
assert number_to_word.number_to_word(111.0) == 'one hundred eleven'
def test_decimal_places():
assert number_to_word.number_to_word(.999) == 'zero point nine nine nine'
def test_minus_int():
assert number_to_word.number_to_word(-1) == 'minus one'
def test_minus_float():
assert number_to_word.number_to_word(-1.01) == 'minus one point zero one'
def test_zero():
assert number_to_word.number_to_word(0) == 'zero'
def test_bigger_than_max():
with pytest.raises(AssertionError):
number_to_word.number_to_word(10**30)
| 22.878049 | 77 | 0.73774 | 149 | 938 | 4.268456 | 0.261745 | 0.251572 | 0.377358 | 0.254717 | 0.556604 | 0.52044 | 0.444969 | 0.350629 | 0.176101 | 0.176101 | 0 | 0.027743 | 0.154584 | 938 | 40 | 78 | 23.45 | 0.774275 | 0 | 0 | 0 | 0 | 0 | 0.139659 | 0 | 0 | 0 | 0 | 0 | 0.363636 | 1 | 0.409091 | true | 0 | 0.090909 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
d06aa1ade7088e045d5def36afa74a53ac3c96d3 | 1,467 | py | Python | terrascript/resource/e_breuninger/netbox.py | mjuenema/python-terrascript | 6d8bb0273a14bfeb8ff8e950fe36f97f7c6e7b1d | [
"BSD-2-Clause"
] | 507 | 2017-07-26T02:58:38.000Z | 2022-01-21T12:35:13.000Z | terrascript/resource/e_breuninger/netbox.py | mjuenema/python-terrascript | 6d8bb0273a14bfeb8ff8e950fe36f97f7c6e7b1d | [
"BSD-2-Clause"
] | 135 | 2017-07-20T12:01:59.000Z | 2021-10-04T22:25:40.000Z | terrascript/resource/e_breuninger/netbox.py | mjuenema/python-terrascript | 6d8bb0273a14bfeb8ff8e950fe36f97f7c6e7b1d | [
"BSD-2-Clause"
] | 81 | 2018-02-20T17:55:28.000Z | 2022-01-31T07:08:40.000Z | # terrascript/resource/e-breuninger/netbox.py
# Automatically generated by tools/makecode.py (24-Sep-2021 15:22:23 UTC)
import terrascript
class netbox_available_ip_address(terrascript.Resource):
pass
class netbox_cluster(terrascript.Resource):
pass
class netbox_cluster_group(terrascript.Resource):
pass
class netbox_cluster_type(terrascript.Resource):
pass
class netbox_device_role(terrascript.Resource):
pass
class netbox_interface(terrascript.Resource):
pass
class netbox_ip_address(terrascript.Resource):
pass
class netbox_platform(terrascript.Resource):
pass
class netbox_prefix(terrascript.Resource):
pass
class netbox_primary_ip(terrascript.Resource):
pass
class netbox_service(terrascript.Resource):
pass
class netbox_tag(terrascript.Resource):
pass
class netbox_tenant(terrascript.Resource):
pass
class netbox_tenant_group(terrascript.Resource):
pass
class netbox_virtual_machine(terrascript.Resource):
pass
class netbox_vrf(terrascript.Resource):
pass
__all__ = [
"netbox_available_ip_address",
"netbox_cluster",
"netbox_cluster_group",
"netbox_cluster_type",
"netbox_device_role",
"netbox_interface",
"netbox_ip_address",
"netbox_platform",
"netbox_prefix",
"netbox_primary_ip",
"netbox_service",
"netbox_tag",
"netbox_tenant",
"netbox_tenant_group",
"netbox_virtual_machine",
"netbox_vrf",
]
| 16.670455 | 73 | 0.746421 | 168 | 1,467 | 6.196429 | 0.238095 | 0.310279 | 0.353506 | 0.403458 | 0.548511 | 0.287224 | 0.082613 | 0 | 0 | 0 | 0 | 0.009796 | 0.164963 | 1,467 | 87 | 74 | 16.862069 | 0.84 | 0.078391 | 0 | 0.313725 | 1 | 0 | 0.195701 | 0.036323 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.313725 | 0.019608 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
d0800a8682f4d47c2a25ca68191b5ec341b5824e | 139 | py | Python | pkg/pkg/flow/__init__.py | Restok/networks-course | c1c959b1a73b6bb301a4273bd9c1bb4c0a2fa4ff | [
"MIT"
] | 8 | 2022-01-03T23:54:30.000Z | 2022-03-18T11:04:18.000Z | pkg/pkg/flow/__init__.py | Restok/networks-course | c1c959b1a73b6bb301a4273bd9c1bb4c0a2fa4ff | [
"MIT"
] | 17 | 2021-03-03T14:48:54.000Z | 2021-09-08T15:52:50.000Z | pkg/pkg/flow/__init__.py | Restok/networks-course | c1c959b1a73b6bb301a4273bd9c1bb4c0a2fa4ff | [
"MIT"
] | 16 | 2022-01-04T17:54:57.000Z | 2022-03-29T00:34:14.000Z | from .flow import (
estimate_spring_rank_P,
signal_flow,
rank_signal_flow,
rank_graph_match_flow,
calculate_p_upper,
)
| 17.375 | 27 | 0.726619 | 19 | 139 | 4.736842 | 0.631579 | 0.222222 | 0.311111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.215827 | 139 | 7 | 28 | 19.857143 | 0.825688 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.142857 | 0 | 0.142857 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
d082de69dd1616f8bd7b344e3861a74a5dc47696 | 1,418 | py | Python | wheel5/metrics/pipeline.py | xdralex/pytorch-wheel5 | 336529e354a45908cf3f8f12cd401a95fb2a5351 | [
"MIT"
] | 2 | 2020-06-08T13:10:06.000Z | 2020-07-07T05:34:18.000Z | wheel5/metrics/pipeline.py | xdralex/pytorch-wheel5 | 336529e354a45908cf3f8f12cd401a95fb2a5351 | [
"MIT"
] | 1 | 2020-04-29T08:46:14.000Z | 2020-04-29T08:46:14.000Z | wheel5/metrics/pipeline.py | xdralex/pytorch-wheel5 | 336529e354a45908cf3f8f12cd401a95fb2a5351 | [
"MIT"
] | null | null | null | import logging
from typing import Tuple
from torch import Tensor
from torch import nn
from .functional import exact_match_accuracy, jaccard_accuracy, dice_accuracy
class ExactMatchAccuracy(nn.Module):
def __init__(self):
super(ExactMatchAccuracy, self).__init__()
self.logger = logging.getLogger(f'{__name__}')
self.debug = self.logger.isEnabledFor(logging.DEBUG)
def forward(self, input: Tensor, target: Tensor, name: str = '') -> Tuple[Tensor, Tensor]:
return exact_match_accuracy(input, target, name=name, logger=self.logger, debug=self.debug)
class JaccardAccuracy(nn.Module):
def __init__(self):
super(JaccardAccuracy, self).__init__()
self.logger = logging.getLogger(f'{__name__}')
self.debug = self.logger.isEnabledFor(logging.DEBUG)
def forward(self, input: Tensor, target: Tensor, name: str = '') -> Tuple[Tensor, Tensor]:
return jaccard_accuracy(input, target, name=name, logger=self.logger, debug=self.debug)
class DiceAccuracy(nn.Module):
def __init__(self):
super(DiceAccuracy, self).__init__()
self.logger = logging.getLogger(f'{__name__}')
self.debug = self.logger.isEnabledFor(logging.DEBUG)
def forward(self, input: Tensor, target: Tensor, name: str = '') -> Tuple[Tensor, Tensor]:
return dice_accuracy(input, target, name=name, logger=self.logger, debug=self.debug)
| 34.585366 | 99 | 0.703103 | 172 | 1,418 | 5.540698 | 0.197674 | 0.094439 | 0.034627 | 0.047219 | 0.734523 | 0.734523 | 0.658972 | 0.658972 | 0.658972 | 0.658972 | 0 | 0 | 0.174189 | 1,418 | 40 | 100 | 35.45 | 0.813834 | 0 | 0 | 0.461538 | 0 | 0 | 0.021157 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.230769 | false | 0 | 0.192308 | 0.115385 | 0.653846 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
d0b5cb080d8002fd69df8c9dee28cd1dbc316af1 | 115 | py | Python | mysite/learn/views.py | hjjia/python | 665b89614f6d12fdbbe2250a4920f568e6cc0181 | [
"MIT"
] | null | null | null | mysite/learn/views.py | hjjia/python | 665b89614f6d12fdbbe2250a4920f568e6cc0181 | [
"MIT"
] | null | null | null | mysite/learn/views.py | hjjia/python | 665b89614f6d12fdbbe2250a4920f568e6cc0181 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from django.http import HttpResponse
def index(req):
return HttpResponse(u'Welcome to Django')
| 19.166667 | 36 | 0.713043 | 16 | 115 | 5.125 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010101 | 0.13913 | 115 | 5 | 37 | 23 | 0.818182 | 0.182609 | 0 | 0 | 0 | 0 | 0.119565 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 5 |
d0df5652bb6b3e265635f4610887551434c8b380 | 56 | py | Python | Online-Judges/CodingBat/Python/String-01/01-hello_name.py | shihab4t/Competitive-Programming | e8eec7d4f7d86bfa1c00b7fbbedfd6a1518f19be | [
"Unlicense"
] | 3 | 2021-06-15T01:19:23.000Z | 2022-03-16T18:23:53.000Z | Online-Judges/CodingBat/Python/String-01/01-hello_name.py | shihab4t/Competitive-Programming | e8eec7d4f7d86bfa1c00b7fbbedfd6a1518f19be | [
"Unlicense"
] | null | null | null | Online-Judges/CodingBat/Python/String-01/01-hello_name.py | shihab4t/Competitive-Programming | e8eec7d4f7d86bfa1c00b7fbbedfd6a1518f19be | [
"Unlicense"
] | null | null | null | def hello_name(name):
return ("Hello"+" "+name+"!")
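The same function written with an f-string, the more idiomatic Python 3 form of the concatenation above — a sketch:

```python
def hello_name(name):
    # f-strings interpolate the argument directly, no "+" chaining
    return f"Hello {name}!"

print(hello_name("Bob"))  # Hello Bob!
```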
| 18.666667 | 33 | 0.571429 | 7 | 56 | 4.428571 | 0.571429 | 0.580645 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.160714 | 56 | 2 | 34 | 28 | 0.659574 | 0 | 0 | 0 | 0 | 0 | 0.125 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 5 |
ef9949a9c29303d2cf0543d9c03378cf43a6ccab | 60 | py | Python | python/3.module/base/3.use_name.py | dunitian/BaseCode | 4855ef4c6dd7c95d7239d2048832d8acfe26e084 | [
"Apache-2.0"
] | 25 | 2018-06-13T08:13:44.000Z | 2020-11-19T14:02:11.000Z | python/3.module/base/3.use_name.py | dunitian/BaseCode | 4855ef4c6dd7c95d7239d2048832d8acfe26e084 | [
"Apache-2.0"
] | null | null | null | python/3.module/base/3.use_name.py | dunitian/BaseCode | 4855ef4c6dd7c95d7239d2048832d8acfe26e084 | [
"Apache-2.0"
] | 13 | 2018-06-13T08:13:38.000Z | 2022-01-06T06:45:07.000Z | import get_user_infos as user_infos
user_infos.get_infos()
| 15 | 35 | 0.85 | 11 | 60 | 4.181818 | 0.454545 | 0.586957 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 60 | 3 | 36 | 20 | 0.851852 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
efe9b09df68e42492b21fa980740e106d0f3b652 | 48 | py | Python | tests/__init__.py | ciresnave/eetools | 0752a2dec8f19c647e3b3e4dfd33982101cabc34 | [
"MIT"
] | null | null | null | tests/__init__.py | ciresnave/eetools | 0752a2dec8f19c647e3b3e4dfd33982101cabc34 | [
"MIT"
] | null | null | null | tests/__init__.py | ciresnave/eetools | 0752a2dec8f19c647e3b3e4dfd33982101cabc34 | [
"MIT"
] | null | null | null | """Test suite for the eepythontools package."""
| 24 | 47 | 0.729167 | 6 | 48 | 5.833333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 48 | 1 | 48 | 48 | 0.833333 | 0.854167 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
4bd74bb31bac22aa8ce2ab224526db6abcd3e1ca | 24 | py | Python | src/ynab_bank_import/__init__.py | zagy/ynab-bank-imports | 9c26ef8f124a25af2ed734e6af44190fcfe5c90c | [
"BSD-2-Clause"
] | 1 | 2021-07-07T05:25:49.000Z | 2021-07-07T05:25:49.000Z | src/ynab_bank_import/__init__.py | zagy/ynab-bank-imports | 9c26ef8f124a25af2ed734e6af44190fcfe5c90c | [
"BSD-2-Clause"
] | null | null | null | src/ynab_bank_import/__init__.py | zagy/ynab-bank-imports | 9c26ef8f124a25af2ed734e6af44190fcfe5c90c | [
"BSD-2-Clause"
] | 1 | 2021-03-20T09:42:55.000Z | 2021-03-20T09:42:55.000Z | # Make a Python package
| 12 | 23 | 0.75 | 4 | 24 | 4.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.208333 | 24 | 1 | 24 | 24 | 0.947368 | 0.875 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
ef2c436b481d5f43745b2e9973345923241bf2ae | 34,423 | py | Python | sig-contributor-experience/surveys/k8s_survey_analysis/plot_utils.py | shubham14bajpai/community | f4c7914af3132a765472090b3a03b024e7aa951e | [
"Apache-2.0"
] | 9,963 | 2016-05-03T08:34:46.000Z | 2022-03-31T16:28:24.000Z | sig-contributor-experience/surveys/k8s_survey_analysis/plot_utils.py | shubham14bajpai/community | f4c7914af3132a765472090b3a03b024e7aa951e | [
"Apache-2.0"
] | 6,434 | 2016-05-11T16:11:43.000Z | 2022-03-31T23:40:33.000Z | sig-contributor-experience/surveys/k8s_survey_analysis/plot_utils.py | shubham14bajpai/community | f4c7914af3132a765472090b3a03b024e7aa951e | [
"Apache-2.0"
] | 5,067 | 2016-05-04T21:46:00.000Z | 2022-03-31T15:00:56.000Z | from textwrap import wrap
import math
import plotnine as p9
import pandas as pd
import textwrap
from textwrap import shorten
from matplotlib import pyplot as plt
from copy import copy
from mizani.palettes import brewer_pal
from plotnine.scales.scale import scale_discrete
# Custom scales for plotnine that reverse the direction of the colors
class reverse_scale_color_brewer(p9.scale_color_brewer):
def __init__(self, type="seq", palette=1, direction=-1, **kwargs):
self.palette = brewer_pal(type, palette, direction)
scale_discrete.__init__(self, **kwargs)
class reverse_scale_fill_brewer(p9.scale_fill_brewer):
def __init__(self, type="seq", palette=1, direction=-1, **kwargs):
self.palette = brewer_pal(type, palette, direction)
scale_discrete.__init__(self, **kwargs)
def split_for_likert(topic_data_long, mid_point):
"""
Returns the aggregated counts for ratings in the top and bottom halves
of each category, necessary for making offset bar charts
Args:
topic_data_long (pandas.DataFrame): A pandas DataFrame storing each respondent's
ratings for a given topic, in long format
mid_point (int): The rating value used to split the data into two halves
Returns:
(tuple): Tuple containing:
(pandas.DataFrame): Aggregated counts for ratings greater than or equal to the midpoint
(pandas.DataFrame): Aggregated counts for ratings less than or equal to the midpoint
"""
x = topic_data_long.columns.tolist()
x.remove("level_1")
top_cutoff = topic_data_long["rating"] >= mid_point
bottom_cutoff = topic_data_long["rating"] <= mid_point
top_scores = (
topic_data_long[top_cutoff]
.groupby(x)
.count()
.reindex(
pd.MultiIndex.from_product(
[topic_data_long[y].unique().tolist() for y in x], names=x
),
fill_value=0,
)
.reset_index()
.sort_index(ascending=False)
)
# The mid point is in both the top and bottom halves, so divide by two
top_scores.loc[top_scores["rating"] == mid_point, "level_1"] = (
top_scores[top_scores["rating"] == mid_point]["level_1"] / 2.0
)
bottom_scores = (
topic_data_long[bottom_cutoff]
.groupby(x)
.count()
.reindex(
pd.MultiIndex.from_product(
[topic_data_long[y].unique().tolist() for y in x], names=x
),
fill_value=0,
)
.reset_index()
)
# The mid point is in both the top and bottom halves, so divide by two
bottom_scores.loc[bottom_scores["rating"] == mid_point, "level_1"] = (
bottom_scores[bottom_scores["rating"] == mid_point]["level_1"] / 2.0
)
return top_scores, bottom_scores
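The midpoint handling in `split_for_likert` is the standard trick for diverging (Likert) bar charts: the neutral rating contributes half its count to each side, so the two halves still account for every response. A minimal pandas sketch of the idea, using hypothetical ratings on a 1-5 scale:

```python
import pandas as pd

ratings = pd.Series([1, 2, 3, 3, 4, 5, 5])  # hypothetical responses
mid_point = 3

top = ratings[ratings >= mid_point].value_counts().sort_index().astype(float)
bottom = ratings[ratings <= mid_point].value_counts().sort_index().astype(float)

# The midpoint appears in both halves, so each half keeps only half its count
top[mid_point] = top[mid_point] / 2.0
bottom[mid_point] = bottom[mid_point] / 2.0

# The halves still account for every response exactly once
assert top.sum() + bottom.sum() == len(ratings)
```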
def make_long(data, facets, multi_year=False):
"""Converts a wide dataframe with columns for each topic's rating into a long dataframe
Args:
data (pandas.DataFrame): A wide dataframe
facets (list): List of columns to keep as their own column
multi_year (bool, optional): Defaults to False. If True, add the "year" column to the list of facets
Returns:
(pandas.DataFrame): Long dataframe
"""
facets = copy(facets)
if multi_year:
facets.append("year")
long_data = data.set_index(facets, append=True).stack().reset_index()
# Rename so level_0 always has the values of the topic we are interested in
long_data = long_data.rename(
columns={
"level_0": "level_1",
"level_4": "level_0",
"level_3": "level_0",
"level_2": "level_0",
0: "rating",
}
)
long_data = long_data.assign(
level_0=pd.Categorical(long_data.level_0, ordered=True)
)
return long_data
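`make_long` relies on the `set_index(..., append=True).stack()` idiom to melt one rating column per topic into rows. A self-contained sketch with hypothetical topic and facet columns:

```python
import pandas as pd

# Hypothetical wide survey data: one column per topic, plus a facet column
wide = pd.DataFrame({
    "Region": ["EU", "US"],
    "Docs": [4, 5],
    "CI": [2, 3],
})

# Keep the facet in the index, then stack the remaining topic columns;
# unnamed index levels come back as "level_0", "level_2", etc.
long = wide.set_index(["Region"], append=True).stack().reset_index()
long = long.rename(columns={"level_0": "respondent", "level_2": "topic", 0: "rating"})
# long now has one row per (respondent, topic) pair with its rating
```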
def get_data_subset(
survey_data, topic, facets=[], exclude_new_contributors=False, include_year=False
):
"""Get only the relevant columns from the data
Args:
survey_data (pandas.DataFrame): Raw data read in from Kubernetes Survey
topic (str): String that all questions of interest start with
facets (list, optional): List of columns use for grouping
exclude_new_contributors (bool, optional): Defaults to False. If True, remove
all responses from contributors who have been involved a year or less.
include_year (bool, optional): Defaults to False. If True, include the year column
in the output
Returns:
(pandas.DataFrame): Survey dataframe with only columns relevant to the topics
and facets remaining.
"""
og_cols = [x for x in survey_data.columns if x.startswith(topic)]
facets = copy(facets)
if include_year:
facets.append("year")
if facets:
if "." in facets:
facets.remove(".")
cols = og_cols + facets
facets.append(".")
else:
cols = og_cols + facets
else:
cols = og_cols
if exclude_new_contributors:
topic_data = survey_data[
survey_data["Contributing_Length"] != "less than one year"
][cols]
else:
topic_data = survey_data[cols]
return topic_data
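The prefix convention behind `get_data_subset` — every question in a topic shares a column-name prefix — can be exercised on a small hypothetical frame (the column names here are illustrative, not from the survey):

```python
import pandas as pd

survey = pd.DataFrame({
    "Blockers_Docs": [1, 0, 1],       # hypothetical topic columns
    "Blockers_Reviews": [0, 1, 1],
    "Contributing_Length": ["1-2 years", "less than one year", "3+ years"],
})

topic = "Blockers"
cols = [c for c in survey.columns if c.startswith(topic)]

# Mirror exclude_new_contributors=True: drop first-year respondents first
subset = survey[survey["Contributing_Length"] != "less than one year"][cols]
```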
def get_multi_year_data_subset(
survey_data, topic, facet_by=[], exclude_new_contributors=False
):
"""Get appropriate data for multi-year plots and convert it to long form
Args:
survey_data (pandas.DataFrame): Raw data read in from Kubernetes Survey
topic (str): String that all questions of interest start with
facet_by (list, optional): List of columns use for grouping
exclude_new_contributors (bool, optional) Defaults to False. If True, remove
all responses from contributors who have been involved a year or less.
Returns:
(pandas.DataFrame): Long dataframe
"""
topic_data = get_data_subset(
survey_data, topic, facet_by, exclude_new_contributors, include_year=True
)
if facet_by:
if "." in facet_by:
facet_by.remove(".")
topic_data_long = make_long(topic_data, facet_by, multi_year=True)
facet_by.append(".")
else:
topic_data_long = make_long(topic_data, facet_by, multi_year=True)
else:
topic_data_long = make_long(topic_data, [], multi_year=True)
return topic_data_long
def get_single_year_data_subset(survey_data, topic, facet_by=[]):
"""Get appropriate data for single-year plots and convert it to long form
Args:
survey_data (pandas.DataFrame): Raw data read in from Kubernetes Survey
topic (str): String that all questions of interest start with
facet_by (list, optional): List of columns use for grouping
Returns:
(pandas.DataFrame): Long dataframe
"""
topic_data = get_data_subset(survey_data, topic, facet_by)
if facet_by:
if "." in facet_by:
facet_by.remove(".")
topic_data_long = make_long(topic_data, facet_by)
facet_by.append(".")
else:
topic_data_long = make_long(topic_data, facet_by)
else:
topic_data_long = (
topic_data.unstack().reset_index().rename(columns={0: "rating"})
)
topic_data_long = topic_data_long.assign(
level_0=pd.Categorical(topic_data_long.level_0, ordered=True)
)
return topic_data_long
def make_bar_chart_multi_year(
survey_data, topic, facet_by=[], exclude_new_contributors=False
):
"""Make a barchart showing proportions of respondents listing each
column that starts with topic. Bars are colored by which year of
the survey they correspond to. If facet_by is not empty, the resulting
plot will be faceted into subplots by the variables given.
Args:
survey_data (pandas.DataFrame): Raw data read in from Kubernetes Survey
topic (str): String that all questions of interest start with
facet_by (list,optional): List of columns use for grouping
exclude_new_contributors (bool, optional): Defaults to False. If True,
do not include any responses from contributors with less than
one year of experience
Returns:
(plotnine.ggplot): Plot object which can be displayed in a notebook or saved out to a file
"""
topic_data = get_data_subset(
survey_data, topic, facet_by, exclude_new_contributors, include_year=True
)
if facet_by:
fix = False
if "." in facet_by:
facet_by.remove(".")
fix = True
agg = (
topic_data.groupby(facet_by + ["year"])
.sum()
.reset_index()
.melt(id_vars=facet_by + ["year"])
)
totals = (
topic_data.groupby(facet_by + ["year"])
.count()
.reset_index()
.melt(id_vars=facet_by + ["year"])
)
percent = agg.merge(totals, on=facet_by + ["year", "variable"])
if fix:
facet_by.append(".")
else:
agg = topic_data.groupby(["year"]).sum().reset_index().melt(id_vars=["year"])
totals = (
topic_data.groupby(["year"]).count().reset_index().melt(id_vars=["year"])
)
percent = agg.merge(totals, on=["year", "variable"])
# This plot is always done proportionally
percent = percent.assign(value=percent["value_x"] / percent["value_y"])
percent = percent.assign(variable=pd.Categorical(percent.variable, ordered=True))
br = (
p9.ggplot(percent, p9.aes(x="variable", fill="factor(year)", y="value"))
+ p9.geom_bar(show_legend=True, position="dodge", stat="identity")
+ p9.theme(
axis_text_x=p9.element_text(angle=45, ha="right"),
strip_text_y=p9.element_text(angle=0, ha="left"),
)
+ p9.scale_x_discrete(
limits=sorted(percent["variable"].unique().tolist()),
labels=[
shorten(
x.replace(topic, "").replace("_", " "), placeholder="...", width=30
)
for x in sorted(percent["variable"].unique().tolist())
],
)
)
# Uncomment to return dataframe instead of plot
# return percent
if facet_by:
br = (
br
+ p9.facet_grid(
facet_by,
shrink=False,
labeller=lambda x: "\n".join(wrap(x.replace("/", "/ "), 15)),
)
+ p9.theme(
strip_text_x=p9.element_text(wrap=True, va="bottom", margin={"b": -0.5})
)
)
return br
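The proportion computation inside `make_bar_chart_multi_year` — the sum of 0/1 answers merged with response counts, then divided — can be reproduced on a toy frame (hypothetical data and column name):

```python
import pandas as pd

# Hypothetical one-hot answers with a survey-year column
df = pd.DataFrame({
    "year": [2019, 2019, 2020, 2020],
    "Blockers_Docs": [1, 0, 1, 1],
})

agg = df.groupby(["year"]).sum().reset_index().melt(id_vars=["year"])
totals = df.groupby(["year"]).count().reset_index().melt(id_vars=["year"])

# Merge "yes" counts with respondent totals (suffixed value_x / value_y),
# then divide to get the proportion answering yes per year
percent = agg.merge(totals, on=["year", "variable"])
percent = percent.assign(value=percent["value_x"] / percent["value_y"])
```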
def make_single_bar_chart_multi_year(survey_data, column, facet, proportionally=False):
"""Make a barchart showing the number of respondents responding to a single column.
Bars are colored by which year of the survey they correspond to. If facet
is not empty, the resulting plot will be faceted into subplots by the variables
given.
Args:
survey_data (pandas.DataFrame): Raw data read in from Kubernetes Survey
column (str): Column to plot responses to
facet (list,optional): List of columns use for grouping
proportionally (bool, optional): Defaults to False. If True,
the bar heights are determined proportionally to the
total number of responses in that facet.
Returns:
(plotnine.ggplot): Plot object which can be displayed in a notebook or saved out to a file
"""
cols = [column, facet]
show_legend = False
topic_data = survey_data[cols + ["year"]]
topic_data_long = make_long(topic_data, facet, multi_year=True)
if proportionally:
proportions = (
topic_data_long[topic_data_long.rating == 1].groupby([facet, "year"]).sum()
/ topic_data_long.groupby([facet, "year"]).sum()
).reset_index()
else:
proportions = (
topic_data_long[topic_data_long.rating == 1]
.groupby([facet, "year"])
.count()
.reset_index()
)
x = topic_data_long.columns.tolist()
x.remove("level_1")
# Uncomment to return dataframe instead of plot
# return proportions
return (
p9.ggplot(proportions, p9.aes(x=facet, fill="year", y="level_1"))
+ p9.geom_bar(show_legend=show_legend, stat="identity")
+ p9.theme(
axis_text_x=p9.element_text(angle=45, ha="right"),
strip_text_y=p9.element_text(angle=0, ha="left"),
)
+ p9.scale_x_discrete(
limits=topic_data_long[facet].unique().tolist(),
labels=[
x.replace("_", " ") for x in topic_data_long[facet].unique().tolist()
],
)
)
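The proportional branch above divides the count of `rating == 1` responses by the group totals. A pandas-free sketch of the same arithmetic, with invented data:

```python
from collections import Counter

def proportions_by_group(rows):
    """rows: iterable of (facet_value, year, rating) tuples.
    Returns {(facet, year): fraction of responses with rating == 1}."""
    hits = Counter()
    totals = Counter()
    for facet, year, rating in rows:
        totals[(facet, year)] += 1
        if rating == 1:
            hits[(facet, year)] += 1
    return {key: hits[key] / total for key, total in totals.items()}

rows = [("new", 2019, 1), ("new", 2019, 0), ("vet", 2019, 1), ("vet", 2019, 1)]
print(proportions_by_group(rows))  # {('new', 2019): 0.5, ('vet', 2019): 1.0}
```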
def make_likert_chart_multi_year(
survey_data,
topic,
labels,
facet_by=[],
five_is_high=False,
exclude_new_contributors=False,
):
"""Make an offset stacked barchart showing the number of respondents at each rank or value for
all columns in the topic. Each column in the topic is a facet, with the years displayed
along the x-axis.
Args:
survey_data (pandas.DataFrame): Raw data read in from Kubernetes Survey
topic (str): String that all questions of interest start with
labels (list): List of strings to use as labels, corresponding
to the numerical values given by the respondents.
facet_by (list, optional): List of columns to use for grouping
five_is_high (bool, optional): Defaults to False. If True,
five is considered the highest value in a ranking; otherwise
it is taken as the lowest value.
exclude_new_contributors (bool, optional): Defaults to False. If True,
do not include any responses from contributors with less than
one year of experience
Returns:
(plotnine.ggplot): Offset stacked barchart plot object which
can be displayed in a notebook or saved out to a file
"""
facet_by = copy(facet_by)
og_cols = [x for x in survey_data.columns if x.startswith(topic)]
show_legend = True
topic_data_long = get_multi_year_data_subset(
survey_data, topic, facet_by, exclude_new_contributors
)
if not five_is_high:
topic_data_long = topic_data_long.assign(rating=topic_data_long.rating * -1.0)
mid_point = 3 if five_is_high else -3
top_scores, bottom_scores = split_for_likert(topic_data_long, mid_point)
if facet_by:
fix = False
if "." in facet_by:
facet_by.remove(".")
fix = True
# Calculate proportion for each rank
top_scores = top_scores.merge(
topic_data_long.groupby(facet_by + ["year"]).count().reset_index(),
on=facet_by + ["year"],
).rename(columns={"rating_x": "rating", "level_0_x": "level_0"})
top_scores = top_scores.assign(
level_1=top_scores.level_1_x / (top_scores.level_1_y / len(og_cols))
)
bottom_scores = bottom_scores.merge(
topic_data_long.groupby(facet_by + ["year"]).count().reset_index(),
on=facet_by + ["year"],
).rename(columns={"rating_x": "rating", "level_0_x": "level_0"})
bottom_scores = bottom_scores.assign(
level_1=bottom_scores.level_1_x
* -1
/ (bottom_scores.level_1_y / len(og_cols))
)
if fix:
facet_by.append(".")
else:
# Calculate proportion for each rank
top_scores = top_scores.merge(
topic_data_long.groupby(["year"]).count().reset_index(), on=["year"]
).rename(columns={"rating_x": "rating", "level_0_x": "level_0"})
top_scores = top_scores.assign(
level_1=top_scores.level_1_x / (top_scores.level_1_y / len(og_cols))
)
bottom_scores = bottom_scores.merge(
topic_data_long.groupby(["year"]).count().reset_index(), on=["year"]
).rename(columns={"rating_x": "rating", "level_0_x": "level_0"})
bottom_scores = bottom_scores.assign(
level_1=bottom_scores.level_1_x
* -1
/ (bottom_scores.level_1_y / len(og_cols))
)
vp = (
p9.ggplot(
topic_data_long,
p9.aes(x="factor(year)", fill="factor(rating)", color="factor(rating)"),
)
+ p9.geom_col(
data=top_scores,
mapping=p9.aes(y="level_1"),
show_legend=show_legend,
size=0.25,
position=p9.position_stack(reverse=True),
)
+ p9.geom_col(
data=bottom_scores,
mapping=p9.aes(y="level_1"),
show_legend=show_legend,
size=0.25,
position=p9.position_stack(),
)
+ p9.geom_hline(yintercept=0, color="white")
)
if five_is_high:
vp = (
vp
+ p9.scale_color_brewer(
"div", "RdBu", limits=[1, 2, 3, 4, 5], labels=labels
)
+ p9.scale_fill_brewer("div", "RdBu", limits=[1, 2, 3, 4, 5], labels=labels)
+ p9.theme(
axis_text_x=p9.element_text(angle=45, ha="right"),
strip_text_y=p9.element_text(angle=0, ha="left"),
)
)
else:
vp = (
vp
+ p9.scale_color_brewer(
"div", "RdBu", limits=[-5, -4, -3, -2, -1], labels=labels
)
+ p9.scale_fill_brewer(
"div", "RdBu", limits=[-5, -4, -3, -2, -1], labels=labels
)
+ p9.theme(strip_text_y=p9.element_text(angle=0, ha="left"))
)
if "." in facet_by:
facet_by.remove(".")
if not facet_by:
facet_by.append(".")
vp = (
vp
+ p9.facet_grid(
facet_by + ["level_0"],
labeller=lambda x: "\n".join(
wrap(
x.replace(topic, "").replace("_", " ").replace("/", "/ ").strip(),
15,
)
),
)
+ p9.theme(
strip_text_x=p9.element_text(wrap=True, ma="left"), panel_spacing_x=0.1
)
)
return vp
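The normalization in the merges above divides each per-rating count (`level_1_x`) by the group total over the number of topic columns (`level_1_y / len(og_cols)`), i.e. the fraction of respondents per column at that rating. The same arithmetic in isolation, with invented numbers:

```python
def normalize_counts(count_at_rating, group_total, n_topic_cols):
    """Mirror of level_1_x / (level_1_y / len(og_cols)): express a
    per-rating count as a fraction of the responses per topic column."""
    return count_at_rating / (group_total / n_topic_cols)

# 30 answers at this rating out of 300 total cells spread over 3
# topic columns -> 100 responses per column -> fraction 0.3.
print(normalize_counts(30, 300, 3))
```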
def make_bar_chart(survey_data, topic, facet_by=[], proportional=False):
"""Make a barchart showing the number of respondents listing each
column that starts with topic for a single year. If facet_by is
not empty, the resulting plot will be faceted into subplots
by the variables given.
Args:
survey_data (pandas.DataFrame): Raw data read in from Kubernetes Survey
topic (str): String that all questions of interest start with
facet_by (list, optional): List of columns to use for grouping
proportional (bool, optional): Defaults to False. If True,
the bar heights are determined proportionally to the
total number of responses in that facet.
Returns:
(plotnine.ggplot): Plot object which can be displayed in a notebook or saved out to a file
"""
show_legend = False
if facet_by:
show_legend = True
topic_data_long = get_single_year_data_subset(survey_data, topic, facet_by)
x = topic_data_long.columns.tolist()
x.remove("level_1")
if facet_by:
period = False
if "." in facet_by:
facet_by.remove(".")
period = True
aggregate_data = (
topic_data_long[topic_data_long.rating == 1]
.dropna()
.groupby(["level_0"] + facet_by)
.count()
.reset_index()
)
if period:
facet_by.append(".")
else:
aggregate_data = (
topic_data_long[topic_data_long.rating == 1]
.dropna()
.groupby("level_0")
.count()
.reset_index()
)
if proportional and facet_by:
period = False
if "." in facet_by:
facet_by.remove(".")
period = True
facet_sums = (
topic_data_long[topic_data_long.rating == 1]
.dropna()
.groupby(facet_by)
.count()
.reset_index()
)
aggregate_data = aggregate_data.merge(facet_sums, on=facet_by).rename(
columns={"level_0_x": "level_0"}
)
aggregate_data = aggregate_data.assign(
rating=aggregate_data.rating_x / aggregate_data.rating_y
)
if period:
facet_by.append(".")
br = (
p9.ggplot(aggregate_data, p9.aes(x="level_0", fill="level_0", y="rating"))
+ p9.geom_bar(show_legend=show_legend, stat="identity")
+ p9.theme(
axis_text_x=p9.element_text(angle=45, ha="right"),
strip_text_y=p9.element_text(angle=0, ha="left"),
)
+ p9.scale_x_discrete(
limits=topic_data_long["level_0"].unique().tolist(),
labels=[
"\n".join(
textwrap.wrap(x.replace(topic, "").replace("_", " "), width=35)[0:2]
)
for x in topic_data_long["level_0"].unique().tolist()
],
)
)
if facet_by:
br = (
br
+ p9.facet_grid(
facet_by, shrink=False, labeller=lambda x: "\n".join(wrap(x, 15))
)
+ p9.theme(
axis_text_x=p9.element_blank(),
strip_text_x=p9.element_text(
wrap=True, va="bottom", margin={"b": -0.5}
),
)
+ p9.scale_fill_discrete(
limits=topic_data_long["level_0"].unique().tolist(),
labels=[
"\n".join(
wrap(
x.replace(topic, "")
.replace("_", " ")
.replace("/", "/ ")
.strip(),
30,
)
)
for x in topic_data_long["level_0"].unique().tolist()
],
)
)
return br
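The x-axis labels above wrap each cleaned column name and keep only the first two wrapped lines. The same helper on its own:

```python
import textwrap

def two_line_label(text, width=35):
    """Wrap to `width` characters and keep at most the first two lines."""
    return "\n".join(textwrap.wrap(text, width=width)[0:2])

print(two_line_label("one two three four", width=7))  # 'one two\nthree'
```

Anything past the second line is silently discarded, which keeps long question names from dominating the axis.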
def make_likert_chart(
survey_data,
topic,
labels,
facet_by=[],
max_value=5,
max_is_high=False,
wrap_facets=True,
sort_x=False,
):
"""Make an offset stacked barchart showing the number of respondents at each rank or value for
all columns in the topic. Each column in the original data is a tick on the x-axis
Args:
survey_data (pandas.DataFrame): Raw data read in from Kubernetes Survey
topic (str): String that all questions of interest start with
labels (list): List of strings to use as labels, corresponding
to the numerical values given by the respondents.
facet_by (list, optional): List of columns to use for grouping
max_value (int, optional): Defaults to 5. The maximum value a respondent can assign.
max_is_high (bool, optional): Defaults to False. If True,
the max_value is considered the highest value in a ranking; otherwise
it is taken as the lowest value.
wrap_facets (bool, optional): Defaults to True. If True, the facet labels are
wrapped
sort_x (bool, optional): Defaults to False. If True, the x-axis is sorted by the
mean value for each column in the original data
Returns:
(plotnine.ggplot): Offset stacked barchart plot object which
can be displayed in a notebook or saved out to a file
"""
mid_point = math.ceil(max_value / 2)
og_cols = [x for x in survey_data.columns if x.startswith(topic)]
show_legend = True
topic_data_long = get_single_year_data_subset(survey_data, topic, facet_by)
if not max_is_high:
topic_data_long = topic_data_long.assign(rating=topic_data_long.rating * -1.0)
mid_point = -1 * mid_point
top_scores, bottom_scores = split_for_likert(topic_data_long, mid_point)
if facet_by:
fix = False
if "." in facet_by:
facet_by.remove(".")
fix = True
top_scores = top_scores.merge(
topic_data_long.groupby(facet_by).count().reset_index(), on=facet_by
).rename(columns={"rating_x": "rating", "level_0_x": "level_0"})
top_scores = top_scores.assign(
level_1=top_scores.level_1_x / (top_scores.level_1_y / len(og_cols))
)
bottom_scores = bottom_scores.merge(
topic_data_long.groupby(facet_by).count().reset_index(), on=facet_by
).rename(columns={"rating_x": "rating", "level_0_x": "level_0"})
bottom_scores = bottom_scores.assign(
level_1=bottom_scores.level_1_x
* -1
/ (bottom_scores.level_1_y / len(og_cols))
)
if fix:
facet_by.append(".")
else:
bottom_scores = bottom_scores.assign(level_1=bottom_scores.level_1 * -1)
if sort_x:
x_sort_order = (
topic_data_long.groupby("level_0")
.mean()
.sort_values("rating")
.reset_index()["level_0"]
.values.tolist()
)
x_sort_order.reverse()
else:
x_sort_order = topic_data_long["level_0"].unique().tolist()
vp = (
p9.ggplot(
topic_data_long,
p9.aes(x="level_0", fill="factor(rating)", color="factor(rating)"),
)
+ p9.geom_col(
data=top_scores,
mapping=p9.aes(y="level_1"),
show_legend=show_legend,
size=0.25,
position=p9.position_stack(reverse=True),
)
+ p9.geom_col(
data=bottom_scores,
mapping=p9.aes(y="level_1"),
show_legend=show_legend,
size=0.25,
position=p9.position_stack(),
)
+ p9.geom_hline(yintercept=0, color="white")
+ p9.theme(
axis_text_x=p9.element_text(angle=45, ha="right"),
strip_text_y=p9.element_text(angle=0, ha="left"),
)
+ p9.scale_x_discrete(
limits=x_sort_order,
labels=[
"\n".join(
textwrap.wrap(x.replace(topic, "").replace("_", " "), width=35)[0:2]
)
for x in x_sort_order
],
)
)
if max_is_high:
vp = (
vp
+ p9.scale_color_brewer(
"div", "RdBu", limits=list(range(1, max_value + 1)), labels=labels
)
+ p9.scale_fill_brewer(
"div", "RdBu", limits=list(range(1, max_value + 1)), labels=labels
)
)
else:
vp = (
vp
+ reverse_scale_fill_brewer(
"div",
"RdBu",
limits=list(reversed(range(-max_value, 0))),
labels=labels,
)
+ reverse_scale_color_brewer(
"div",
"RdBu",
limits=list(reversed(range(-max_value, 0))),
labels=labels,
)
)
if facet_by:
if wrap_facets:
vp = (
vp
+ p9.facet_grid(facet_by, labeller=lambda x: "\n".join(wrap(x, 15)))
+ p9.theme(
strip_text_x=p9.element_text(
wrap=True, va="bottom", margin={"b": -0.5}
)
)
)
else:
vp = vp + p9.facet_grid(facet_by, space="free", labeller=lambda x: x)
return vp
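The mid-point logic above (ceiling of half the scale, negated when `max_is_high` is False) can be sketched on its own:

```python
import math

def likert_midpoint(max_value, max_is_high):
    """Mid-point of a 1..max_value scale, mirrored below zero when the
    ratings are negated (max_is_high is False)."""
    mid = math.ceil(max_value / 2)
    return mid if max_is_high else -mid

print(likert_midpoint(5, True), likert_midpoint(5, False), likert_midpoint(7, True))
```

The ceiling matters for odd scales: a 1..5 scale splits at 3, a 1..7 scale at 4.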
def make_single_likert_chart(survey_data, column, facet, labels, five_is_high=False):
"""Make an offset stacked barchart showing the number of respondents at each rank
or value for a single column in the original data. Each facet is shown as
a tick on the x-axis
Args:
survey_data (pandas.DataFrame): Raw data read in from Kubernetes Survey
column (str): Column to plot responses to
facet (str): Column used for grouping
labels (list): List of strings to use as labels, corresponding
to the numerical values given by the respondents.
five_is_high (bool, optional): Defaults to False. If True,
5 is considered the highest value in a ranking; otherwise
it is taken as the lowest value.
Returns:
(plotnine.ggplot): Offset stacked barchart plot object which
can be displayed in a notebook or saved out to a file
"""
mid_point = 3
cols = [column, facet]
show_legend = True
topic_data = survey_data[cols]
topic_data_long = make_long(topic_data, facet)
if not five_is_high:
topic_data_long = topic_data_long.assign(rating=topic_data_long.rating * -1.0)
x = topic_data_long.columns.tolist()
x.remove("level_1")
x.remove("level_0")
if not five_is_high:
mid_point *= -1
top_cutoff = topic_data_long["rating"] >= mid_point
bottom_cutoff = topic_data_long["rating"] <= mid_point
top_scores = (
topic_data_long[top_cutoff]
.groupby(x)
.count()
.reset_index()
.sort_index(ascending=False)
)
top_scores.loc[top_scores["rating"] == mid_point, "level_1"] = (
top_scores[top_scores["rating"] == mid_point]["level_1"] / 2.0
)
top_scores = top_scores.merge(
topic_data_long.groupby(facet).count().reset_index(), on=facet
)
top_scores = top_scores.assign(level_1=top_scores.level_1_x / top_scores.level_1_y)
bottom_scores = topic_data_long[bottom_cutoff].groupby(x).count().reset_index()
bottom_scores.loc[bottom_scores["rating"] == mid_point, "level_1"] = (
bottom_scores[bottom_scores["rating"] == mid_point]["level_1"] / 2.0
)
bottom_scores = bottom_scores.merge(
topic_data_long.groupby(facet).count().reset_index(), on=facet
)
bottom_scores = bottom_scores.assign(
level_1=bottom_scores.level_1_x * -1 / bottom_scores.level_1_y
)
vp = (
p9.ggplot(
topic_data_long,
p9.aes(x=facet, fill="factor(rating_x)", color="factor(rating_x)"),
)
+ p9.geom_col(
data=top_scores,
mapping=p9.aes(y="level_1"),
show_legend=show_legend,
size=0.25,
position=p9.position_stack(reverse=True),
)
+ p9.geom_col(
data=bottom_scores,
mapping=p9.aes(y="level_1"),
show_legend=show_legend,
size=0.25,
)
+ p9.geom_hline(yintercept=0, color="white")
+ p9.theme(
axis_text_x=p9.element_text(angle=45, ha="right"),
strip_text_y=p9.element_text(angle=0, ha="left"),
)
+ p9.scale_x_discrete(
limits=topic_data_long[facet].unique().tolist(),
labels=[
x.replace("_", " ") for x in topic_data_long[facet].unique().tolist()
],
)
)
if five_is_high:
vp = (
vp
+ p9.scale_color_brewer(
"div",
"RdBu",
limits=[1, 2, 3, 4, 5],
labels=["\n".join(wrap(x, 15)) for x in labels],
)
+ p9.scale_fill_brewer(
"div",
"RdBu",
limits=[1, 2, 3, 4, 5],
labels=["\n".join(wrap(x, 15)) for x in labels],
)
)
else:
vp = (
vp
+ reverse_scale_fill_brewer(
"div",
"RdBu",
limits=[-1, -2, -3, -4, -5],
labels=["\n".join(wrap(x, 15)) for x in labels],
)
+ reverse_scale_color_brewer(
"div",
"RdBu",
limits=[-1, -2, -3, -4, -5],
labels=["\n".join(wrap(x, 15)) for x in labels],
)
)
return vp
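Note how the function halves the count at the neutral mid-point so it contributes equally to the positive and negative stacks. A minimal sketch of that split, with invented counts:

```python
def split_counts(counts_by_rating, mid_point):
    """counts_by_rating: {rating: count}. Returns (top, bottom) where the
    mid-point count is halved into both sides, mirroring the pandas logic."""
    top = {r: c for r, c in counts_by_rating.items() if r >= mid_point}
    bottom = {r: c for r, c in counts_by_rating.items() if r <= mid_point}
    if mid_point in top:
        top[mid_point] /= 2.0
    if mid_point in bottom:
        bottom[mid_point] /= 2.0
    return top, bottom

top, bottom = split_counts({1: 4, 2: 6, 3: 10, 4: 6, 5: 4}, 3)
print(top)     # {3: 5.0, 4: 6, 5: 4}
print(bottom)  # {1: 4, 2: 6, 3: 5.0}
```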
def make_single_bar_chart(
survey_data, column, facet, proportionally=False, facet2=None
):
"""Make a barchart showing the number of respondents marking
a certain column in the original dataset as True. The facet
variable values are used as ticks on the x-axis
Args:
survey_data (pandas.DataFrame): Raw data read in from Kubernetes Survey
column (str): Column to plot responses to
facet (str): Column to use for grouping
proportionally (bool, optional): Defaults to False. If True,
the bar heights are determined proportionally to the
total number of responses in that facet.
facet2 (str, optional): If provided, a second variable to facet against.
Returns:
(plotnine.ggplot): Plot object which can be displayed in a notebook or saved out to a file
"""
cols = [column, facet]
if facet2:
cols.append(facet2)
show_legend = False
topic_data = survey_data[cols]
grouper = [facet, facet2] if facet2 else facet
topic_data_long = make_long(topic_data, grouper)
if proportionally:
proportions = (
topic_data_long[topic_data_long.rating == 1].groupby(grouper).sum()
/ topic_data_long.groupby(grouper).sum()
).reset_index()
else:
proportions = (
topic_data_long[topic_data_long.rating == 1]
.groupby(grouper)
.count()
.reset_index()
)
x = topic_data_long.columns.tolist()
x.remove("level_1")
br = (
p9.ggplot(proportions, p9.aes(x=facet, fill=facet, y="level_1"))
+ p9.geom_bar(show_legend=show_legend, stat="identity")
+ p9.theme(
axis_text_x=p9.element_text(angle=45, ha="right"),
strip_text_y=p9.element_text(angle=0, ha="left"),
)
+ p9.scale_x_discrete(
limits=topic_data_long[facet].unique().tolist(),
labels=[
x.replace("_", " ") for x in topic_data_long[facet].unique().tolist()
],
)
)
if facet2:
br = br + p9.facet_grid([facet2, "."])
return br
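The function builds its groupby key with `grouper = [facet, facet2] if facet2 else facet`. The same conditional-grouper pattern, pandas-free and with made-up rows:

```python
from collections import Counter

def count_true(rows, facet, facet2=None):
    """rows: list of dicts. Count rows whose 'value' is truthy,
    grouped by facet (and facet2 when given)."""
    grouper = [facet, facet2] if facet2 else [facet]
    counts = Counter()
    for row in rows:
        if row["value"]:
            counts[tuple(row[g] for g in grouper)] += 1
    return counts

rows = [
    {"role": "dev", "region": "eu", "value": True},
    {"role": "dev", "region": "us", "value": True},
    {"role": "ops", "region": "eu", "value": False},
]
print(count_true(rows, "role"))            # Counter({('dev',): 2})
print(count_true(rows, "role", "region"))
```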
# ==== tests/client/mappings/test_resetting.py (symonk/pytest-wiremock, MIT license) ====
def test_resetting_removes_created_stubs(connected_client, random_stub) -> None:
assert connected_client.stubs.create_stub(random_stub).status_code == 201
assert connected_client.stubs.get_all_stubs().json()["meta"]["total"] == 1
assert connected_client.stubs.reset_stub_mappings().status_code == 200
assert connected_client.stubs.get_all_stubs().json()["meta"]["total"] == 0
# ==== main/lib/flaskext/wtf/recaptcha/__init__.py (topless/gae-init-docs, MIT license) ====
from . import fields
from . import validators
from . import widgets
__all__ = fields.__all__ + validators.__all__ + widgets.__all__
# ==== sparkplug/helpers/__init__.py (Quva/sparkplug, Apache-2.0 license) ====
from .tag_info import TagInfo
from .helpers import *
# ==== pod/__init__.py (TaitoUnited/pod, MIT license) ====
from .fetcher import fetcher  # noqa
from .application import application # noqa
# ==== iridium/test/glance_tests/glance_test.py (Toure/Rhea, Apache-2.0 license) ====
__author__ = "Toure Dunnon"
__license__ = "Apache License 2.0"
__version__ = "0.1"
__email__ = "toure@redhat.com"
__status__ = "Alpha"
def test_create_image():
pass
def test_list_image():
pass
def test_add_location():
pass
def test_delete_image():
pass
def test_delete_location():
pass
def test_update_location():
pass
def test_image_upload():
pass
def test_get_info():
# data and get calls here.
pass
# ==== readpower.py (motivatedsloth/powerpi, MIT license) ====
#! /usr/bin/env python3
from subprocess import call
call('/usr/bin/python3 /home/pi/powerpi/reader/reader.py 2>/dev/null', shell=True)
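With `shell=True` the command is normally passed as a single string; a one-element list happens to work on POSIX but is easy to misread. A portable sketch of the same "run a child interpreter, discard stderr" idea, using `sys.executable` instead of the hard-coded path:

```python
import subprocess
import sys

# Run a throwaway child interpreter and discard its stderr, like the
# `2>/dev/null` shell redirection; rc is the child's exit code.
rc = subprocess.call(
    [sys.executable, "-c", "print('reader ran')"],
    stderr=subprocess.DEVNULL,
)
print(rc)  # 0 on success
```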
# ==== spatialstats/polyspectra/__init__.py (mjo22/mobstats, MIT license) ====
"""
Calculating spectral correlation functions for vector and scalar fields.
.. moduleauthor:: Michael O'Brien <michaelobrien@g.harvard.edu>
"""
import spatialstats
if spatialstats.config.gpu is False:
from .powerspectrum import powerspectrum
from .bispectrum import bispectrum
else:
from .cuda_powerspectrum import powerspectrum
from .cuda_bispectrum import bispectrum
del spatialstats
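The package above selects a CPU or GPU implementation at import time from a config flag. A generic sketch of the related try/except fallback pattern (the accelerated module name below is a placeholder that intentionally does not exist):

```python
try:
    # Prefer an accelerated backend when it is installed.
    import this_gpu_backend_does_not_exist as backend  # placeholder name
except ImportError:
    import math as backend  # stdlib stand-in for the CPU fallback

print(backend.__name__)  # math
```

Gating on a config flag, as spatialstats does, makes the choice explicit and testable; the try/except variant instead degrades silently when the fast backend is missing.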
# ==== cropper/migrations/0001_initial.py (ro5k0/django-image-cropper, MIT license) ====
# encoding: utf-8
import datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Adding model 'Original'
db.create_table('cropper_original', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('name', self.gf('django.db.models.fields.CharField')(max_length=255)),
('image', self.gf('django.db.models.fields.files.ImageField')(max_length=100)),
('image_width', self.gf('django.db.models.fields.PositiveIntegerField')(default=0)),
('image_height', self.gf('django.db.models.fields.PositiveIntegerField')(default=0)),
))
db.send_create_signal('cropper', ['Original'])
# Adding model 'Cropped'
db.create_table('cropper_cropped', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('name', self.gf('django.db.models.fields.CharField')(max_length=255)),
('original', self.gf('django.db.models.fields.related.ForeignKey')(related_name='cropped', to=orm['cropper.Original'])),
('image', self.gf('django.db.models.fields.files.ImageField')(max_length=100)),
('x', self.gf('django.db.models.fields.PositiveIntegerField')(default=0)),
('y', self.gf('django.db.models.fields.PositiveIntegerField')(default=0)),
('w', self.gf('django.db.models.fields.PositiveIntegerField')(null=True, blank=True)),
('h', self.gf('django.db.models.fields.PositiveIntegerField')(null=True, blank=True)),
('w_display', self.gf('django.db.models.fields.PositiveIntegerField')(null=True, blank=True)),
('h_display', self.gf('django.db.models.fields.PositiveIntegerField')(null=True, blank=True)),
))
db.send_create_signal('cropper', ['Cropped'])
def backwards(self, orm):
# Deleting model 'Original'
db.delete_table('cropper_original')
# Deleting model 'Cropped'
db.delete_table('cropper_cropped')
models = {
'cropper.cropped': {
'Meta': {'object_name': 'Cropped'},
'h': ('django.db.models.fields.PositiveIntegerField', [], {'null': 'True', 'blank': 'True'}),
'h_display': ('django.db.models.fields.PositiveIntegerField', [], {'null': 'True', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'image': ('django.db.models.fields.files.ImageField', [], {'max_length': '100'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'original': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'cropped'", 'to': "orm['cropper.Original']"}),
'w': ('django.db.models.fields.PositiveIntegerField', [], {'null': 'True', 'blank': 'True'}),
'w_display': ('django.db.models.fields.PositiveIntegerField', [], {'null': 'True', 'blank': 'True'}),
'x': ('django.db.models.fields.PositiveIntegerField', [], {'default': '0'}),
'y': ('django.db.models.fields.PositiveIntegerField', [], {'default': '0'})
},
'cropper.original': {
'Meta': {'object_name': 'Original'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'image': ('django.db.models.fields.files.ImageField', [], {'max_length': '100'}),
'image_height': ('django.db.models.fields.PositiveIntegerField', [], {'default': '0'}),
'image_width': ('django.db.models.fields.PositiveIntegerField', [], {'default': '0'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255'})
}
}
complete_apps = ['cropper']
# ==== DPGAnalysis/SiStripTools/python/filtertest/raw_102169_debug_cff.py (ckamtsikis/cmssw, Apache-2.0 license) ====
import FWCore.ParameterSet.Config as cms
fileNames = cms.untracked.vstring(
'/store/data/Commissioning09/Cosmics/RAW/v2/000/102/169/F6566668-4267-DE11-8354-001D09F2983F.root',
)
# ==== teen/static/js/sandboxOA-master/apps/system/forms.py (manga89/Teens, bzip2-1.0.6 license) ====
# @Time : 2018/7/18 18:47
# @Author : RobbieHan
# @File : forms.py
from django import forms
class LoginForm(forms.Form):
username = forms.CharField(required=True, error_messages={"required": "Please enter a username"})
password = forms.CharField(required=True, error_messages={"required": "Please enter a password"})
# ==== modules/workflow_lambdas/tests/suite-db-integration/exec.py (groboclown/whimbrel, Apache-2.0 license) ====
def setup(config):
pass
def teardown(config):
pass
def run_test(config):
pass
def execute(config):
setup(config)
try:
run_test(config)
finally:
teardown(config)
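The execute() wrapper above guarantees teardown runs even when the test raises. The same pattern as a tiny self-contained harness:

```python
def run_with_teardown(setup, test, teardown):
    """Run setup then test; always run teardown, even if test fails."""
    events = []
    setup(events)
    try:
        test(events)
    finally:
        teardown(events)
    return events

log = run_with_teardown(
    lambda ev: ev.append("setup"),
    lambda ev: ev.append("test"),
    lambda ev: ev.append("teardown"),
)
print(log)  # ['setup', 'test', 'teardown']
```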