hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
e951acf619bd81394427d8e42721c0b0db717204 | 1,160 | py | Python | Python Part 2 Exercises/Lesson 5 - Lists of Lists.py | ryanzhao2/grade11cs | 173b9f50db49368ea2042f6803d6674dd9f185cd | [
"Apache-2.0"
] | null | null | null | Python Part 2 Exercises/Lesson 5 - Lists of Lists.py | ryanzhao2/grade11cs | 173b9f50db49368ea2042f6803d6674dd9f185cd | [
"Apache-2.0"
] | null | null | null | Python Part 2 Exercises/Lesson 5 - Lists of Lists.py | ryanzhao2/grade11cs | 173b9f50db49368ea2042f6803d6674dd9f185cd | [
"Apache-2.0"
] | null | null | null | """
def generate_list_books(filename):
    book_list = []
    file_in = open(filename, encoding='utf-8', errors='replace')
    file_in.readline()
    for line in file_in:
        line = line.strip().split(",")
        line[2] = float(line[2])
        line[3] = int(line[3])
        line[4] = int(line[4])
        line[5] = int(line[5])
        book_list.append(line)
    return book_list

# QUESTION #1
def print_books(list_of_books):
    for book in list_of_books:
        print(f' {book[0][0:30]:<30} by {book[1][0:20]:<20} {str(book[5]):<4} rated {str(book[2])[0:3]}')

# QUESTION #2
def print_detailed_book(list_of_books):
    format = '-'
    for book in list_of_books:
        print(f' {book[0]}\n'
              f' by: {book[1]}\n'
              f' {book[5]}\n'
              f' {format * len(book[0])}\n'
              f' ${book[4]:.2f}\n\n'
              f' {book[6]}\n'
              f' rated {book[2]} for {book[3]} reviews\n\n\n')

def main():
    main_book_list = generate_list_books("amazon_bestseller_books.csv")
    # print(main_book_list[:10])
    print_books(main_book_list)
    print_detailed_book(main_book_list)

main()
""" | 21.090909 | 105 | 0.552586 | 178 | 1,160 | 3.410112 | 0.280899 | 0.105437 | 0.072488 | 0.042834 | 0.102142 | 0.102142 | 0.102142 | 0.102142 | 0.102142 | 0.102142 | 0 | 0.045828 | 0.266379 | 1,160 | 55 | 106 | 21.090909 | 0.66745 | 0.984483 | 0 | null | 1 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
e957393d4b094e24e1cea3eeb2c0b3f95ffdaa26 | 427 | py | Python | arc852/file_image_source.py | athenian-robotics/common-robotics-python | a2ede8fb3072cf1baa53672f76081aa6bfde397f | [
"MIT"
] | 1 | 2019-02-20T22:59:59.000Z | 2019-02-20T22:59:59.000Z | arc852/file_image_source.py | athenian-robotics/common-robotics | a2ede8fb3072cf1baa53672f76081aa6bfde397f | [
"MIT"
] | null | null | null | arc852/file_image_source.py | athenian-robotics/common-robotics | a2ede8fb3072cf1baa53672f76081aa6bfde397f | [
"MIT"
] | 1 | 2020-05-23T09:08:42.000Z | 2020-05-23T09:08:42.000Z | import cv2
import arc852.cli_args as cli
from arc852.generic_image_source import GenericImageSource
class FileImageSource(GenericImageSource):
    args = [cli.filename]

    def __init__(self, filename):
        super(FileImageSource, self).__init__()
        self.__cv2_img = cv2.imread(filename)

    def start(self):
        pass

    def stop(self):
        pass

    def get_image(self):
        return self.__cv2_img
| 19.409091 | 58 | 0.683841 | 51 | 427 | 5.372549 | 0.490196 | 0.080292 | 0.072993 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030675 | 0.236534 | 427 | 21 | 59 | 20.333333 | 0.809816 | 0 | 0 | 0.142857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0.142857 | 0.214286 | 0.071429 | 0.714286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 3 |
e985b081be6c314c34518c301b0ad505a26b7801 | 3,136 | py | Python | sources/utils/helpers.py | pablintino/Altium-DBlib-source | 65e85572f84048a7e7c5a116b429e09ac9a33e82 | [
"MIT"
] | 1 | 2021-06-23T20:19:45.000Z | 2021-06-23T20:19:45.000Z | sources/utils/helpers.py | pablintino/Altium-DBlib-source | 65e85572f84048a7e7c5a116b429e09ac9a33e82 | [
"MIT"
] | null | null | null | sources/utils/helpers.py | pablintino/Altium-DBlib-source | 65e85572f84048a7e7c5a116b429e09ac9a33e82 | [
"MIT"
] | null | null | null | #
# MIT License
#
# Copyright (c) 2020 Pablo Rodriguez Nava, @pablintino
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
#
class BraceMessage(object):
    def __init__(self, fmt, *args, **kwargs):
        self.fmt = fmt
        self.args = args
        self.kwargs = kwargs

    def __str__(self):
        return self.fmt.format(*self.args, **self.kwargs)


class CaseInsensitiveDict(dict):
    @classmethod
    def _k(cls, key):
        return key.lower() if isinstance(key, str) else key

    def __init__(self, *args, **kwargs):
        super(CaseInsensitiveDict, self).__init__(*args, **kwargs)
        self._convert_keys()

    def __getitem__(self, key):
        return super(CaseInsensitiveDict, self).__getitem__(self.__class__._k(key))

    def __setitem__(self, key, value):
        super(CaseInsensitiveDict, self).__setitem__(self.__class__._k(key), value)

    def __delitem__(self, key):
        return super(CaseInsensitiveDict, self).__delitem__(self.__class__._k(key))

    def __contains__(self, key):
        return super(CaseInsensitiveDict, self).__contains__(self.__class__._k(key))

    def pop(self, key, *args, **kwargs):
        return super(CaseInsensitiveDict, self).pop(self.__class__._k(key), *args, **kwargs)

    def get(self, key, *args, **kwargs):
        return super(CaseInsensitiveDict, self).get(self.__class__._k(key), *args, **kwargs)

    def setdefault(self, key, *args, **kwargs):
        return super(CaseInsensitiveDict, self).setdefault(self.__class__._k(key), *args, **kwargs)

    def update(self, E={}, **F):
        super(CaseInsensitiveDict, self).update(self.__class__(E))
        super(CaseInsensitiveDict, self).update(self.__class__(**F))

    def _convert_keys(self):
        for k in list(self.keys()):
            v = super(CaseInsensitiveDict, self).pop(k)
            self.__setitem__(k, v)


def is_float(value):
    try:
        float(value)
        return True
    except ValueError:
        return False


def is_int(x):
    try:
        a = float(x)
        b = int(a)
    except ValueError:
        return False
    else:
        return a == b
| 34.086957 | 99 | 0.683673 | 407 | 3,136 | 5.017199 | 0.358722 | 0.129285 | 0.150833 | 0.044564 | 0.238981 | 0.215475 | 0.113124 | 0.074927 | 0 | 0 | 0 | 0.001619 | 0.212054 | 3,136 | 91 | 100 | 34.461538 | 0.824767 | 0.350765 | 0 | 0.122449 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.306122 | false | 0 | 0 | 0.163265 | 0.591837 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
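The `is_float` / `is_int` helpers embedded in the row above are self-contained, so their behaviour is easy to check in isolation. This sketch reproduces them verbatim; note that `is_int` also accepts numeric strings like `"4.0"` whose float value has no fractional part:

```python
def is_float(value):
    # True when float() can parse the value.
    try:
        float(value)
        return True
    except ValueError:
        return False


def is_int(x):
    # True when the value parses as a float with no fractional part.
    try:
        a = float(x)
        b = int(a)
    except ValueError:
        return False
    else:
        return a == b


print(is_float("3.14"), is_int("3.14"), is_int("4.0"))  # True False True
```

One caveat of this implementation: a non-string, non-numeric argument (e.g. `None`) raises `TypeError`, which neither helper catches.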
e98a999d029d498d01b9f015cfa099eed54b6542 | 169 | py | Python | office365/sharepoint/changes/change_web.py | wreiner/Office365-REST-Python-Client | 476bbce4f5928a140b4f5d33475d0ac9b0783530 | [
"MIT"
] | 544 | 2016-08-04T17:10:16.000Z | 2022-03-31T07:17:20.000Z | office365/sharepoint/changes/change_web.py | wreiner/Office365-REST-Python-Client | 476bbce4f5928a140b4f5d33475d0ac9b0783530 | [
"MIT"
] | 438 | 2016-10-11T12:24:22.000Z | 2022-03-31T19:30:35.000Z | office365/sharepoint/changes/change_web.py | wreiner/Office365-REST-Python-Client | 476bbce4f5928a140b4f5d33475d0ac9b0783530 | [
"MIT"
] | 202 | 2016-08-22T19:29:40.000Z | 2022-03-30T20:26:15.000Z | from office365.sharepoint.changes.change import Change
class ChangeWeb(Change):

    @property
    def web_id(self):
        return self.properties.get("WebId", None)
| 18.777778 | 54 | 0.715976 | 21 | 169 | 5.714286 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021739 | 0.183432 | 169 | 8 | 55 | 21.125 | 0.847826 | 0 | 0 | 0 | 0 | 0 | 0.029586 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 3 |
e997cf29f1f010ccb03101018c493d5f81b230d9 | 162 | py | Python | main.py | JanAndrosiuk/Movie-Scraper-App | 1702da1a42c723c1c058ae3455a729a9215ed96c | [
"MIT"
] | null | null | null | main.py | JanAndrosiuk/Movie-Scraper-App | 1702da1a42c723c1c058ae3455a729a9215ed96c | [
"MIT"
] | null | null | null | main.py | JanAndrosiuk/Movie-Scraper-App | 1702da1a42c723c1c058ae3455a729a9215ed96c | [
"MIT"
] | null | null | null | from routes import *
from bottle import run
def main():
    run(host='localhost', port=8001, debug=True)
    return 1


if __name__ == "__main__":
    main()
| 11.571429 | 48 | 0.641975 | 22 | 162 | 4.363636 | 0.772727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.040323 | 0.234568 | 162 | 13 | 49 | 12.461538 | 0.733871 | 0 | 0 | 0 | 0 | 0 | 0.104938 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | true | 0 | 0.285714 | 0 | 0.571429 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
e99f8d8cd97252231551ad2f188e95f0782f6786 | 770 | py | Python | insights/parsers/tests/test_od_cpu_dma_latency.py | skateman/insights-core | e7cd3001ffc2558757b9e7759dbe27b8b29f4bac | [
"Apache-2.0"
] | 1 | 2021-11-08T16:25:01.000Z | 2021-11-08T16:25:01.000Z | insights/parsers/tests/test_od_cpu_dma_latency.py | ahitacat/insights-core | 0ba58dbe5edceef0bd4a74c1caf6b826381ccda5 | [
"Apache-2.0"
] | null | null | null | insights/parsers/tests/test_od_cpu_dma_latency.py | ahitacat/insights-core | 0ba58dbe5edceef0bd4a74c1caf6b826381ccda5 | [
"Apache-2.0"
] | null | null | null | import doctest
import pytest
from insights.parsers import od_cpu_dma_latency
from insights.parsers.od_cpu_dma_latency import OdCpuDmaLatency
from insights.tests import context_wrap
from insights.parsers import SkipException
CONTENT_OD_CPU_DMA_LATENCY = """
2000000000
"""
CONTENT_OD_CPU_DMA_LATENCY_EMPTY = ""
def test_doc_examples():
    env = {'cpu_dma_latency': OdCpuDmaLatency(context_wrap(CONTENT_OD_CPU_DMA_LATENCY))}
    failed, total = doctest.testmod(od_cpu_dma_latency, globs=env)
    assert failed == 0


def test_OdCpuDmaLatency():
    d = OdCpuDmaLatency(context_wrap(CONTENT_OD_CPU_DMA_LATENCY))
    assert d.force_latency == 2000000000
    with pytest.raises(SkipException):
        OdCpuDmaLatency(context_wrap(CONTENT_OD_CPU_DMA_LATENCY_EMPTY))
| 27.5 | 88 | 0.801299 | 103 | 770 | 5.592233 | 0.320388 | 0.09375 | 0.203125 | 0.208333 | 0.34375 | 0.305556 | 0.25 | 0.25 | 0 | 0 | 0 | 0.031204 | 0.125974 | 770 | 27 | 89 | 28.518519 | 0.824666 | 0 | 0 | 0 | 0 | 0 | 0.037662 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 1 | 0.105263 | false | 0 | 0.315789 | 0 | 0.421053 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
e9bcd30273c11fb35ffa8754f17e17af5c1e3785 | 14 | py | Python | login.py | wihop/test27 | a4191ba59d8a2024e1fe440062e9b1cf8b30c2dc | [
"MIT"
] | null | null | null | login.py | wihop/test27 | a4191ba59d8a2024e1fe440062e9b1cf8b30c2dc | [
"MIT"
] | null | null | null | login.py | wihop/test27 | a4191ba59d8a2024e1fe440062e9b1cf8b30c2dc | [
"MIT"
] | null | null | null | num =1
num =3
| 4.666667 | 6 | 0.571429 | 4 | 14 | 2 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 0.285714 | 14 | 2 | 7 | 7 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
e9c0895add5899be815b789724f6ac052629fa99 | 117 | py | Python | Artesian/MarketData/_Enum/AggregationRule.py | AndreaCuneo/Artesian.SDK-Python | 1f3528293e2da42d9f1901cd94165464cffab3fc | [
"MIT"
] | null | null | null | Artesian/MarketData/_Enum/AggregationRule.py | AndreaCuneo/Artesian.SDK-Python | 1f3528293e2da42d9f1901cd94165464cffab3fc | [
"MIT"
] | null | null | null | Artesian/MarketData/_Enum/AggregationRule.py | AndreaCuneo/Artesian.SDK-Python | 1f3528293e2da42d9f1901cd94165464cffab3fc | [
"MIT"
] | null | null | null | from enum import Enum
class AggregationRule(Enum):
    Undefined = 0
    SumAndDivide = 1
    AverageAndReplicate = 2 | 23.4 | 28 | 0.726496 | 13 | 117 | 6.538462 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.032967 | 0.222222 | 117 | 5 | 29 | 23.4 | 0.901099 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3
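`AggregationRule` in the row above is a plain `enum.Enum`; a minimal usage sketch showing how such members resolve by value, by name, and via the `.value` attribute:

```python
from enum import Enum


class AggregationRule(Enum):
    Undefined = 0
    SumAndDivide = 1
    AverageAndReplicate = 2


# Value lookup and name lookup return the same singleton member.
assert AggregationRule(2) is AggregationRule["AverageAndReplicate"]
print(AggregationRule.SumAndDivide.name, AggregationRule.SumAndDivide.value)  # SumAndDivide 1
```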
e9c1749005c1906c7c1389f59e2a043db52283c1 | 705 | py | Python | src/ros_say/say.py | tsaoyu/ROS-SAY | fb187e6aa88e46bf0dbac2a7a79231d08a82bf5c | [
"MIT"
] | null | null | null | src/ros_say/say.py | tsaoyu/ROS-SAY | fb187e6aa88e46bf0dbac2a7a79231d08a82bf5c | [
"MIT"
] | null | null | null | src/ros_say/say.py | tsaoyu/ROS-SAY | fb187e6aa88e46bf0dbac2a7a79231d08a82bf5c | [
"MIT"
] | null | null | null | import rospy
from std_msgs.msg import String
def publish_to_topic(msg):
    pub = rospy.Publisher('/rossay', String, queue_size=10)
    tx = String()
    tx.data = msg
    pub.publish(tx)


def logdebug(msg, *args, **kwargs):
    rospy.logdebug(msg, *args, **kwargs)
    publish_to_topic(msg)


def loginfo(msg, *args, **kwargs):
    rospy.loginfo(msg, *args, **kwargs)
    publish_to_topic(msg)


def logwarn(msg, *args, **kwargs):
    rospy.logwarn(msg, *args, **kwargs)
    publish_to_topic(msg)


def logerr(msg, *args, **kwargs):
    rospy.logerr(msg, *args, **kwargs)
    publish_to_topic(msg)


def logfatal(msg, *args, **kwargs):
    rospy.logfatal(msg, *args, **kwargs)
    publish_to_topic(msg)
| 22.03125 | 59 | 0.662411 | 99 | 705 | 4.575758 | 0.262626 | 0.154525 | 0.286976 | 0.225166 | 0.357616 | 0.357616 | 0.357616 | 0.291391 | 0 | 0 | 0 | 0.003454 | 0.178723 | 705 | 31 | 60 | 22.741935 | 0.778929 | 0 | 0 | 0.227273 | 0 | 0 | 0.009957 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.272727 | false | 0 | 0.090909 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
e9cef6d99b52cc805ec9ac9ce59b259aa3cf54ca | 151 | py | Python | visual_map/apps.py | ig-rudenko/zabbix-map | 84255b133039426426ddbd31dc93c45acb48dcfa | [
"Apache-2.0"
] | null | null | null | visual_map/apps.py | ig-rudenko/zabbix-map | 84255b133039426426ddbd31dc93c45acb48dcfa | [
"Apache-2.0"
] | null | null | null | visual_map/apps.py | ig-rudenko/zabbix-map | 84255b133039426426ddbd31dc93c45acb48dcfa | [
"Apache-2.0"
] | null | null | null | from django.apps import AppConfig
class VisualMapConfig(AppConfig):
    default_auto_field = 'django.db.models.BigAutoField'
    name = 'visual_map'
| 21.571429 | 56 | 0.768212 | 18 | 151 | 6.277778 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.145695 | 151 | 6 | 57 | 25.166667 | 0.875969 | 0 | 0 | 0 | 0 | 0 | 0.258278 | 0.192053 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
e9de4762f802e5caf1e5de03911f42c066d117e3 | 359 | py | Python | radiopadre/notebook.py | ratt-ru/radiopadre | 3bf934eba69144d9707777a57da0e827625517a3 | [
"MIT"
] | 9 | 2019-08-08T12:32:20.000Z | 2021-07-06T17:50:35.000Z | radiopadre/notebook.py | ratt-ru/radiopadre | 3bf934eba69144d9707777a57da0e827625517a3 | [
"MIT"
] | 70 | 2019-03-26T12:42:23.000Z | 2022-02-14T13:45:03.000Z | radiopadre/notebook.py | ratt-ru/radiopadre | 3bf934eba69144d9707777a57da0e827625517a3 | [
"MIT"
] | null | null | null | from radiopadre.file import ItemBase, FileBase
from radiopadre.render import render_title, render_url, render_preamble, rich_string, htmlize
class NotebookFile(FileBase):
    def __init__(self, *args, **kw):
        FileBase.__init__(self, *args, **kw)

    @property
    def downloadable_url(self):
        return render_url(self.fullpath, notebook=True)
| 27.615385 | 93 | 0.735376 | 44 | 359 | 5.681818 | 0.590909 | 0.112 | 0.096 | 0.112 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.167131 | 359 | 12 | 94 | 29.916667 | 0.83612 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0.125 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
756eb1cfdd22b5a8264cae58a411306eb270498f | 654 | py | Python | step_impl/vowels_steps.py | almeidh/py-gauge | ad27c01084c002464d953d7996497bdc5f926bf8 | [
"MIT"
] | null | null | null | step_impl/vowels_steps.py | almeidh/py-gauge | ad27c01084c002464d953d7996497bdc5f926bf8 | [
"MIT"
] | null | null | null | step_impl/vowels_steps.py | almeidh/py-gauge | ad27c01084c002464d953d7996497bdc5f926bf8 | [
"MIT"
] | null | null | null | from getgauge.python import step, before_scenario, Messages, DataStoreFactory
from utils.page_factory import PageFactory
# --------------------------
# Gauge step implementations
# --------------------------
@step("The word <word> has <number> vowels.")
def assert_no_of_vowels_in(word, number):
    PageFactory.vowels_page.assert_no_of_vowels_in(word, number)


@step("Vowels in English language are <vowels>.")
def assert_default_vowels(given_vowels):
    PageFactory.vowels_page.assert_default_vowels(given_vowels)


@step("Almost all words have vowels <table>")
def assert_words_vowel_count(table):
    PageFactory.vowels_page.assert_table(table) | 29.727273 | 77 | 0.732416 | 83 | 654 | 5.493976 | 0.445783 | 0.059211 | 0.138158 | 0.177632 | 0.254386 | 0.122807 | 0.122807 | 0 | 0 | 0 | 0 | 0 | 0.102446 | 654 | 22 | 78 | 29.727273 | 0.776831 | 0.122324 | 0 | 0 | 0 | 0 | 0.196147 | 0 | 0 | 0 | 0 | 0 | 0.545455 | 1 | 0.272727 | false | 0 | 0.181818 | 0 | 0.454545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3
759f9a07b3d027653188df371076c4d0240ec9de | 623 | py | Python | src/github/models/user.py | isabella232/SGTM | 3793d78e99f89e5f73bac5c44f9d8a18cac75fbf | [
"MIT"
] | 8 | 2020-12-05T00:13:03.000Z | 2022-01-11T11:35:51.000Z | src/github/models/user.py | Asana/SGTM | 0e9e236980ed68e80e021470da6374945bbac501 | [
"MIT"
] | 12 | 2020-12-14T18:21:21.000Z | 2022-03-29T17:06:20.000Z | src/github/models/user.py | isabella232/SGTM | 3793d78e99f89e5f73bac5c44f9d8a18cac75fbf | [
"MIT"
] | 2 | 2021-06-27T09:32:55.000Z | 2022-02-27T23:17:36.000Z | from typing import Optional, Dict, Any
import copy
class User(object):
    def __init__(self, raw_user: Dict[str, Any]):
        if "login" not in raw_user or not raw_user["login"].strip():
            raise ValueError("User must have a login")
        self._raw = copy.deepcopy(raw_user)

    def id(self) -> str:
        return self._raw["id"]

    def login(self) -> str:
        return self._raw["login"]

    def name(self) -> Optional[str]:
        if "name" not in self._raw:
            return None
        return self._raw["name"]

    def to_raw(self) -> Dict[str, Any]:
        return copy.deepcopy(self._raw)
| 25.958333 | 68 | 0.598716 | 89 | 623 | 4.022472 | 0.359551 | 0.136872 | 0.108939 | 0.094972 | 0.111732 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.272873 | 623 | 23 | 69 | 27.086957 | 0.790287 | 0 | 0 | 0 | 0 | 0 | 0.075441 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.294118 | false | 0 | 0.117647 | 0.176471 | 0.764706 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
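The `User` wrapper in the row above validates eagerly and deep-copies its input. This sketch reproduces the class trimmed to the validating constructor plus two accessors, to show the happy path and the rejection of a blank login; the sample payload dicts are made up for illustration:

```python
import copy
from typing import Any, Dict, Optional


class User(object):
    def __init__(self, raw_user: Dict[str, Any]):
        # A missing or whitespace-only login is rejected up front.
        if "login" not in raw_user or not raw_user["login"].strip():
            raise ValueError("User must have a login")
        self._raw = copy.deepcopy(raw_user)

    def login(self) -> str:
        return self._raw["login"]

    def name(self) -> Optional[str]:
        if "name" not in self._raw:
            return None
        return self._raw["name"]


u = User({"login": "octocat", "name": "Mona"})  # hypothetical sample payload
print(u.login())  # octocat
print(User({"login": "octocat"}).name())  # None

try:
    User({"login": "   "})
except ValueError as e:
    print(e)  # User must have a login
```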
75d48195c56aae6a2d0f05e4d6ae50ff8c778bf2 | 462 | py | Python | galaxy/main/migrations/0063_remove_deprecated_role_fields.py | tima/galaxy | b371b973e0e9150f3e8b9b08068828b092982f62 | [
"Apache-2.0"
] | null | null | null | galaxy/main/migrations/0063_remove_deprecated_role_fields.py | tima/galaxy | b371b973e0e9150f3e8b9b08068828b092982f62 | [
"Apache-2.0"
] | null | null | null | galaxy/main/migrations/0063_remove_deprecated_role_fields.py | tima/galaxy | b371b973e0e9150f3e8b9b08068828b092982f62 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import migrations
class Migration(migrations.Migration):

    dependencies = [
        ('main', '0062_move_repository_counters'),
    ]

    operations = [
        migrations.RemoveField(model_name='role', name='average_score'),
        migrations.RemoveField(model_name='role', name='bayesian_score'),
        migrations.RemoveField(model_name='role', name='num_ratings'),
    ]
| 25.666667 | 73 | 0.686147 | 48 | 462 | 6.3125 | 0.604167 | 0.207921 | 0.257426 | 0.29703 | 0.409241 | 0.409241 | 0.283828 | 0 | 0 | 0 | 0 | 0.013228 | 0.181818 | 462 | 17 | 74 | 27.176471 | 0.78836 | 0.045455 | 0 | 0 | 0 | 0 | 0.189066 | 0.066059 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.181818 | 0 | 0.454545 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
75e7f7a28483a2bca9b731fbdc2e199a8fa9ea99 | 1,140 | py | Python | dapper/mods/Lorenz63/boc12.py | sebastienbarthelemy/DAPPER | dc92a7339932af059967bd9cf0a473ae9b8d7bf9 | [
"MIT"
] | 1 | 2019-11-12T21:39:00.000Z | 2019-11-12T21:39:00.000Z | dapper/mods/Lorenz63/boc12.py | cecidip/DAPPER | dc92a7339932af059967bd9cf0a473ae9b8d7bf9 | [
"MIT"
] | null | null | null | dapper/mods/Lorenz63/boc12.py | cecidip/DAPPER | dc92a7339932af059967bd9cf0a473ae9b8d7bf9 | [
"MIT"
] | null | null | null | # Reproduce results from Fig 11 of
# M. Bocquet and P. Sakov (2012): "Combining inflation-free and
# iterative ensemble Kalman filters for strongly nonlinear systems"
from dapper.mods.Lorenz63.sak12 import HMM
# The only diff to sak12 is R: boc12 uses 1 and 8, sak12 uses 2 (and 8)
from dapper import *
HMM.Obs.noise.C = CovMat(eye(3))
HMM.name = os.path.relpath(__file__,'mods/')
####################
# Suggested tuning
####################
# from dapper.mods.Lorenz63.boc12 import HMM # Expected RMSE_a:
# HMM.t.dkObs = 25
# cfgs += iEnKS('Sqrt', N=10,infl=1.02,rot=True) # 0.22
# cfgs += iEnKS('Sqrt', N=3, infl=1.04) # 0.23
# cfgs += iEnKS('Sqrt', N=3, xN=1.0) # 0.22
#
# HMM.t.dkObs = 5
# cfgs += iEnKS('Sqrt', N=10,infl=1.02,rot=True) # 0.13
# cfgs += iEnKS('Sqrt', N=3, infl=1.02) # 0.13
# cfgs += iEnKS('Sqrt', N=3, xN=1.0) # 0.15
# cfgs += iEnKS('Sqrt', N=3, xN=2.0) # 0.14
#
# HMM.t.dkObs = 25 and R=8*eye(3):
# cfgs += iEnKS('Sqrt', N=3, xN=1.0) # 0.70
| 38 | 78 | 0.52193 | 178 | 1,140 | 3.314607 | 0.438202 | 0.122034 | 0.176271 | 0.189831 | 0.311864 | 0.311864 | 0.283051 | 0.20678 | 0.20678 | 0.105085 | 0 | 0.105521 | 0.285088 | 1,140 | 29 | 79 | 39.310345 | 0.618405 | 0.792982 | 0 | 0 | 0 | 0 | 0.02994 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
f92ea7c0c6d49055dbc619624c3e66a7180dfdb3 | 12,311 | py | Python | UltronRoBo/modules/__aimultilanguage.py | UltronRoBo/UltronRoBoAssistant | 874dcf725d453ffabd85543533d2a07676af4d65 | [
"MIT"
] | null | null | null | UltronRoBo/modules/__aimultilanguage.py | UltronRoBo/UltronRoBoAssistant | 874dcf725d453ffabd85543533d2a07676af4d65 | [
"MIT"
] | null | null | null | UltronRoBo/modules/__aimultilanguage.py | UltronRoBo/UltronRoBoAssistant | 874dcf725d453ffabd85543533d2a07676af4d65 | [
"MIT"
] | null | null | null | """
MIT License
Copyright (c) 2021 UltronRoBo
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
"""
import re

import emoji
import requests
from google_trans_new import google_translator
from pyrogram import filters

from UltronRoBo import UltronRobo as ULTRON
from UltronRoBo.helper_extra.aichat import add_chat, get_session, remove_chat
from UltronRoBo.pyrogramee.pluginshelper import admins_only, edit_or_reply

url = "https://acobot-brainshop-ai-v1.p.rapidapi.com/get"
translator = google_translator()
def extract_emojis(s):
    return "".join(c for c in s if c in emoji.UNICODE_EMOJI)
BOT_ID = 1811984200
ultron_chats = []
en_chats = []
@ULTRON.on_message(filters.command("chatbot") & ~filters.edited & ~filters.bot)
@admins_only
async def hmm(_, message):
global asuna_chats
if len(message.command) != 2:
await message.reply_text(
"I only recognize `/chatbot on` and /chatbot `off only`"
)
message.continue_propagation()
status = message.text.split(None, 1)[1]
chat_id = message.chat.id
if status == "ON" or status == "on" or status == "On":
lel = await edit_or_reply(message, "`Processing...`")
lol = add_chat(int(message.chat.id))
if not lol:
await lel.edit("UltronRoBo AI Already Activated In This Chat")
return
await lel.edit(
f"UltronRoBo AI Successfully Added For Users In The Chat {message.chat.id}"
)
elif status == "OFF" or status == "off" or status == "Off":
lel = await edit_or_reply(message, "`Processing...`")
Escobar = remove_chat(int(message.chat.id))
if not Escobar:
await lel.edit("UltronRoBo AI Was Not Activated In This Chat")
return
await lel.edit(
f"UltronRoBo AI Successfully Deactivated For Users In The Chat {message.chat.id}"
)
    elif status.lower() in ("en", "english"):
        if chat_id not in en_chats:
            en_chats.append(chat_id)
            await message.reply_text("English AI chat Enabled!")
            return
        await message.reply_text("English AI chat is already enabled.")
        message.continue_propagation()
else:
        await message.reply_text(
            "I only recognize `/chatbot on` and `/chatbot off`"
        )
@ULTRON.on_message(
filters.text & filters.reply & ~filters.bot & ~filters.via_bot & ~filters.forwarded,
group=2,
)
async def ai_group_reply(client, message):
if message.reply_to_message.from_user.id != BOT_ID:
message.continue_propagation()
msg = message.text
chat_id = message.chat.id
if msg.startswith("/") or msg.startswith("@"):
message.continue_propagation()
if chat_id in en_chats:
        test = msg.replace("tesla", "Aco")
querystring = {
"bid": "178",
"key": "sX5A2PcYZbsN5EY6",
"uid": "mashape",
            "msg": test,
}
headers = {
"x-rapidapi-key": "cf9e67ea99mshecc7e1ddb8e93d1p1b9e04jsn3f1bb9103c3f",
"x-rapidapi-host": "acobot-brainshop-ai-v1.p.rapidapi.com",
}
response = requests.request("GET", url, headers=headers, params=querystring)
result = response.text
result = result.replace('{"cnt":"', "")
result = result.replace('"}', "")
result = result.replace("Aco", "tesla")
result = result.replace("<a href=\\", "<a href =")
        result = result.replace("<\\/a>", "</a>")
pro = result
try:
            await client.send_chat_action(message.chat.id, "typing")
await message.reply_text(pro)
        except Exception as e:
print(e)
else:
u = msg.split()
emj = extract_emojis(msg)
msg = msg.replace(emj, "")
if (
[(k) for k in u if k.startswith("@")]
and [(k) for k in u if k.startswith("#")]
and [(k) for k in u if k.startswith("/")]
and re.findall(r"\[([^]]+)]\(\s*([^)]+)\s*\)", msg) != []
):
h = " ".join(filter(lambda x: x[0] != "@", u))
km = re.sub(r"\[([^]]+)]\(\s*([^)]+)\s*\)", r"", h)
tm = km.split()
jm = " ".join(filter(lambda x: x[0] != "#", tm))
hm = jm.split()
rm = " ".join(filter(lambda x: x[0] != "/", hm))
elif [(k) for k in u if k.startswith("@")]:
rm = " ".join(filter(lambda x: x[0] != "@", u))
elif [(k) for k in u if k.startswith("#")]:
rm = " ".join(filter(lambda x: x[0] != "#", u))
elif [(k) for k in u if k.startswith("/")]:
rm = " ".join(filter(lambda x: x[0] != "/", u))
elif re.findall(r"\[([^]]+)]\(\s*([^)]+)\s*\)", msg) != []:
rm = re.sub(r"\[([^]]+)]\(\s*([^)]+)\s*\)", r"", msg)
else:
rm = msg
# print (rm)
lan = translator.detect(rm)
test = rm
        if "en" not in lan and lan != "":
            test = translator.translate(test, lang_tgt="en")
        # test = emoji.demojize(test.strip())
        test = test.replace("tesla", "Aco")
querystring = {
"bid": "178",
"key": "sX5A2PcYZbsN5EY6",
"uid": "mashape",
            "msg": test,
}
headers = {
"x-rapidapi-key": "cf9e67ea99mshecc7e1ddb8e93d1p1b9e04jsn3f1bb9103c3f",
"x-rapidapi-host": "acobot-brainshop-ai-v1.p.rapidapi.com",
}
response = requests.request("GET", url, headers=headers, params=querystring)
result = response.text
result = result.replace('{"cnt":"', "")
result = result.replace('"}', "")
result = result.replace("Aco", "tesla")
result = result.replace("<a href=\\", "<a href =")
        result = result.replace("<\\/a>", "</a>")
pro = result
        if "en" not in lan and lan != "":
pro = translator.translate(pro, lang_tgt=lan[0])
try:
            await client.send_chat_action(message.chat.id, "typing")
await message.reply_text(pro)
        except Exception as e:
print(e)
@ULTRON.on_message(filters.text & filters.private & filters.reply & ~filters.bot)
async def ai_private_reply(client, message):
msg = message.text
if msg.startswith("/") or msg.startswith("@"):
message.continue_propagation()
u = msg.split()
emj = extract_emojis(msg)
msg = msg.replace(emj, "")
if (
[(k) for k in u if k.startswith("@")]
and [(k) for k in u if k.startswith("#")]
and [(k) for k in u if k.startswith("/")]
and re.findall(r"\[([^]]+)]\(\s*([^)]+)\s*\)", msg) != []
):
h = " ".join(filter(lambda x: x[0] != "@", u))
km = re.sub(r"\[([^]]+)]\(\s*([^)]+)\s*\)", r"", h)
tm = km.split()
jm = " ".join(filter(lambda x: x[0] != "#", tm))
hm = jm.split()
rm = " ".join(filter(lambda x: x[0] != "/", hm))
elif [(k) for k in u if k.startswith("@")]:
rm = " ".join(filter(lambda x: x[0] != "@", u))
elif [(k) for k in u if k.startswith("#")]:
rm = " ".join(filter(lambda x: x[0] != "#", u))
elif [(k) for k in u if k.startswith("/")]:
rm = " ".join(filter(lambda x: x[0] != "/", u))
elif re.findall(r"\[([^]]+)]\(\s*([^)]+)\s*\)", msg) != []:
rm = re.sub(r"\[([^]]+)]\(\s*([^)]+)\s*\)", r"", msg)
else:
rm = msg
# print (rm)
lan = translator.detect(rm)
test = rm
    if "en" not in lan and lan != "":
        test = translator.translate(test, lang_tgt="en")
    # test = emoji.demojize(test.strip())
    test = test.replace("tesla", "Aco")
querystring = {
"bid": "178",
"key": "sX5A2PcYZbsN5EY6",
"uid": "mashape",
        "msg": test,
}
headers = {
"x-rapidapi-key": "cf9e67ea99mshecc7e1ddb8e93d1p1b9e04jsn3f1bb9103c3f",
"x-rapidapi-host": "acobot-brainshop-ai-v1.p.rapidapi.com",
}
response = requests.request("GET", url, headers=headers, params=querystring)
result = response.text
result = result.replace('{"cnt":"', "")
result = result.replace('"}', "")
result = result.replace("Aco", "tesla")
result = result.replace("<a href=\\", "<a href =")
    result = result.replace("<\\/a>", "</a>")
pro = result
    if "en" not in lan and lan != "":
pro = translator.translate(pro, lang_tgt=lan[0])
try:
        await client.send_chat_action(message.chat.id, "typing")
await message.reply_text(pro)
    except Exception as e:
print(e)
@ULTRON.on_message(
filters.regex("ultron|Ultron|huntinbots|hello|hi")
& ~filters.bot
& ~filters.via_bot
& ~filters.forwarded
& ~filters.reply
& ~filters.channel
)
async def ai_keyword_reply(client, message):
msg = message.text
if msg.startswith("/") or msg.startswith("@"):
message.continue_propagation()
u = msg.split()
emj = extract_emojis(msg)
msg = msg.replace(emj, "")
if (
[(k) for k in u if k.startswith("@")]
and [(k) for k in u if k.startswith("#")]
and [(k) for k in u if k.startswith("/")]
and re.findall(r"\[([^]]+)]\(\s*([^)]+)\s*\)", msg) != []
):
h = " ".join(filter(lambda x: x[0] != "@", u))
km = re.sub(r"\[([^]]+)]\(\s*([^)]+)\s*\)", r"", h)
tm = km.split()
jm = " ".join(filter(lambda x: x[0] != "#", tm))
hm = jm.split()
rm = " ".join(filter(lambda x: x[0] != "/", hm))
elif [(k) for k in u if k.startswith("@")]:
rm = " ".join(filter(lambda x: x[0] != "@", u))
elif [(k) for k in u if k.startswith("#")]:
rm = " ".join(filter(lambda x: x[0] != "#", u))
elif [(k) for k in u if k.startswith("/")]:
rm = " ".join(filter(lambda x: x[0] != "/", u))
elif re.findall(r"\[([^]]+)]\(\s*([^)]+)\s*\)", msg) != []:
rm = re.sub(r"\[([^]]+)]\(\s*([^)]+)\s*\)", r"", msg)
else:
rm = msg
# print (rm)
lan = translator.detect(rm)
test = rm
    if "en" not in lan and lan != "":
        test = translator.translate(test, lang_tgt="en")
    # test = emoji.demojize(test.strip())
    test = test.replace("tesla", "Aco")
querystring = {
"bid": "178",
"key": "sX5A2PcYZbsN5EY6",
"uid": "mashape",
        "msg": test,
}
headers = {
"x-rapidapi-key": "cf9e67ea99mshecc7e1ddb8e93d1p1b9e04jsn3f1bb9103c3f",
"x-rapidapi-host": "acobot-brainshop-ai-v1.p.rapidapi.com",
}
response = requests.request("GET", url, headers=headers, params=querystring)
result = response.text
result = result.replace('{"cnt":"', "")
result = result.replace('"}', "")
result = result.replace("Aco", "tesla")
result = result.replace("<a href=\\", "<a href =")
    result = result.replace("<\\/a>", "</a>")
pro = result
    if "en" not in lan and lan != "":
pro = translator.translate(pro, lang_tgt=lan[0])
try:
        await client.send_chat_action(message.chat.id, "typing")
await message.reply_text(pro)
    except Exception as e:
print(e)
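All four handlers above strip the BrainShop payload with chained string replaces. Assuming the endpoint really answers with JSON of the shape `{"cnt": "..."}`, parsing the body as JSON is sturdier; `extract_reply` below is a hypothetical helper sketch, not part of the plugin.

```python
import json

def extract_reply(raw_text):
    # Assumes the API answers with {"cnt": "<reply>"}; fall back to the
    # raw body when the payload is not valid JSON (or not an object).
    try:
        data = json.loads(raw_text)
    except ValueError:
        return raw_text
    if isinstance(data, dict):
        return data.get("cnt", raw_text)
    return raw_text
```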
| 36.208824 | 93 | 0.553001 | 1,552 | 12,311 | 4.338918 | 0.160438 | 0.009356 | 0.05643 | 0.018711 | 0.721859 | 0.702406 | 0.689932 | 0.655628 | 0.655628 | 0.647312 | 0 | 0.017762 | 0.268297 | 12,311 | 339 | 94 | 36.315634 | 0.729796 | 0.098124 | 0 | 0.774194 | 0 | 0 | 0.166366 | 0.063571 | 0 | 0 | 0 | 0 | 0 | 1 | 0.003584 | false | 0 | 0.028674 | 0.003584 | 0.046595 | 0.014337 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
f937fa8c8a16b6213b8f5b89a5b75680e1222f56 | 2,013 | py | Python | worms/criteria/base.py | abiedermann/worms | 026c45a88d5c71b0e035ac83de6f4dc107316ed8 | [
"Apache-2.0"
] | 4 | 2018-01-30T23:13:43.000Z | 2021-02-12T22:36:54.000Z | worms/criteria/base.py | abiedermann/worms | 026c45a88d5c71b0e035ac83de6f4dc107316ed8 | [
"Apache-2.0"
] | 9 | 2018-02-23T00:52:25.000Z | 2022-01-26T00:02:32.000Z | worms/criteria/base.py | abiedermann/worms | 026c45a88d5c71b0e035ac83de6f4dc107316ed8 | [
"Apache-2.0"
] | 4 | 2018-06-28T21:30:14.000Z | 2022-03-30T17:50:42.000Z | import abc
import numpy as np
import homog as hm
from numpy.linalg import inv
from worms.util import jit
Ux = np.array([1, 0, 0, 0])
Uy = np.array([0, 1, 0, 0])
Uz = np.array([0, 0, 1, 0])
class WormCriteria(abc.ABC):
@abc.abstractmethod
def score(self, **kw):
pass
allowed_attributes = (
"last_body_same_as",
"symname",
"is_cyclic",
"alignment",
"from_seg",
"to_seg",
"origin_seg",
"symfile_modifiers",
"crystinfo",
)
class CriteriaList(WormCriteria):
def __init__(self, children):
if isinstance(children, WormCriteria):
children = [children]
self.children = children
def score(self, **kw):
return sum(c.score(**kw) for c in self.children)
def __getattr__(self, name):
if name not in WormCriteria.allowed_attributes:
raise AttributeError("CriteriaList has no attribute: " + name)
r = [getattr(c, name) for c in self.children if hasattr(c, name)]
r = [x for x in r if x is not None]
assert len(r) < 2
return r[0] if len(r) else None
def __getitem__(self, index):
assert isinstance(index, int)
return self.children[index]
def __len__(self):
return len(self.children)
def __iter__(self):
return iter(self.children)
class NullCriteria(WormCriteria):
def __init__(self, from_seg=0, to_seg=-1, origin_seg=None):
self.from_seg = from_seg
self.to_seg = to_seg
self.origin_seg = None
self.is_cyclic = False
self.tolerance = 9e8
self.symname = None
def merge_segment(self, **kw):
return None
def score(self, segpos, **kw):
return np.zeros(segpos[-1].shape[:-2])
def alignment(self, segpos, **kw):
r = np.empty_like(segpos[-1])
r[..., :, :] = np.eye(4)
return r
def jit_lossfunc(self):
@jit
def null_lossfunc(pos, idx, verts):
return 0.0
return null_lossfunc
def iface_rms(self, pose0, prov0, **kw):
return -1
| 23.964286 | 71 | 0.615002 | 282 | 2,013 | 4.216312 | 0.333333 | 0.070648 | 0.030278 | 0.023549 | 0.030278 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018145 | 0.260805 | 2,013 | 83 | 72 | 24.253012 | 0.780914 | 0 | 0 | 0.029851 | 0 | 0 | 0.061103 | 0 | 0 | 0 | 0 | 0 | 0.029851 | 1 | 0.208955 | false | 0.014925 | 0.074627 | 0.104478 | 0.507463 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
f93ab25dcb9271147e77b194862c6c60e2c15b85 | 1,757 | py | Python | nba_data/data/game.py | jaebradley/nba_data | 30d817bbc1c5474774f97f3800354492e382d206 | [
"MIT"
] | 8 | 2017-01-07T13:32:16.000Z | 2019-08-08T17:36:26.000Z | nba_data/data/game.py | jaebradley/nba_data | 30d817bbc1c5474774f97f3800354492e382d206 | [
"MIT"
] | 72 | 2016-09-01T01:21:07.000Z | 2021-03-25T21:41:38.000Z | nba_data/data/game.py | jaebradley/nba_data | 30d817bbc1c5474774f97f3800354492e382d206 | [
"MIT"
] | 4 | 2016-12-06T10:30:59.000Z | 2021-09-08T21:23:43.000Z | class Game:
def __init__(self, id, match_up):
self.id = id
self.match_up = match_up
def __unicode__(self):
return '{0} - {1}'.format(self.get_additional_unicode(), self.get_base_unicode())
def get_base_unicode(self):
        return 'id: {id} | match up: {match_up}'.format(id=self.id, match_up=self.match_up)
def get_additional_unicode(self):
raise NotImplementedError('Implement in concrete classes')
class SeasonGame(Game):
def __init__(self, id, match_up, season):
self.season = season
Game.__init__(self, id, match_up)
def get_base_unicode(self):
return 'season: {season} | {base_unicode}'.format(season=self.season, base_unicode=Game.get_base_unicode(self))
def get_additional_unicode(self):
raise NotImplementedError('Implement in concrete classes')
class LoggedGame(SeasonGame):
def __init__(self, id, match_up, season, start_date, season_type, home_team_outcome):
self.start_date = start_date
self.season_type = season_type
self.home_team_outcome = home_team_outcome
SeasonGame.__init__(self, id=id, match_up=match_up, season=season)
def get_additional_unicode(self):
        return 'start_date: {start_date} | season type: {season_type} | home team outcome: {home_team_outcome}'\
            .format(start_date=self.start_date, season_type=self.season_type, home_team_outcome=self.home_team_outcome)
class ScoreboardGame(SeasonGame):
def __init__(self, id, season, start_time, match_up):
self.start_time = start_time
SeasonGame.__init__(self, id=id, match_up=match_up, season=season)
def get_additional_unicode(self):
return 'start time: {start_time}'.format(start_time=self.start_time)
| 35.857143 | 119 | 0.704041 | 237 | 1,757 | 4.818565 | 0.139241 | 0.085814 | 0.061296 | 0.105079 | 0.583187 | 0.530648 | 0.364273 | 0.294221 | 0.294221 | 0.294221 | 0 | 0.001404 | 0.188958 | 1,757 | 48 | 120 | 36.604167 | 0.8 | 0 | 0 | 0.30303 | 0 | 0 | 0.141719 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.151515 | 0.606061 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
f945d73dc56bc2c6de0605926d234f912917c5c4 | 192 | py | Python | my_first_webdriver.py | williamholtkamp/unc_webscraping | c9ad7a6a492d050ea646ab5b70c6f351ab99e870 | [
"CC0-1.0"
] | 4 | 2021-02-03T00:33:33.000Z | 2021-04-16T21:58:25.000Z | my_first_webdriver.py | williamholtkamp/unc_webscraping | c9ad7a6a492d050ea646ab5b70c6f351ab99e870 | [
"CC0-1.0"
] | null | null | null | my_first_webdriver.py | williamholtkamp/unc_webscraping | c9ad7a6a492d050ea646ab5b70c6f351ab99e870 | [
"CC0-1.0"
] | 3 | 2020-10-23T18:20:37.000Z | 2021-02-05T12:49:07.000Z | #following script opens up your first webdriver browser and goes to the specified link
from selenium import webdriver
driver = webdriver.Chrome()
driver.get('https://www.google.com/')
| 27.428571 | 87 | 0.760417 | 27 | 192 | 5.407407 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15625 | 192 | 6 | 88 | 32 | 0.901235 | 0.442708 | 0 | 0 | 0 | 0 | 0.232323 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
f94db5537f35a72175a5968d1dcc47b2f6d25ba6 | 311 | py | Python | kosmos/zos/ZosNodes/ZOSVirtual.py | Pishoy/jumpscaleX_threebot | 781e839857fecfa601a31d98d86d304e3a6b3b4e | [
"Apache-2.0"
] | null | null | null | kosmos/zos/ZosNodes/ZOSVirtual.py | Pishoy/jumpscaleX_threebot | 781e839857fecfa601a31d98d86d304e3a6b3b4e | [
"Apache-2.0"
] | 546 | 2019-08-29T11:48:19.000Z | 2020-12-06T07:20:45.000Z | kosmos/zos/ZosNodes/ZOSVirtual.py | Pishoy/jumpscaleX_threebot | 781e839857fecfa601a31d98d86d304e3a6b3b4e | [
"Apache-2.0"
] | 5 | 2019-09-26T14:03:05.000Z | 2020-04-16T08:47:10.000Z | from Jumpscale import j
from .ZOSContainer import ZOSContainers
from .ZOSInstance import ZOSInstance
class ZOSVirtual(ZOSInstance):
    """
    Represents a host running the Zero-OS (ZOS) operating system:
    a virtual Zero-OS instance running on a virtual service provider.
    """
def _init(self, **kwargs):
pass
| 18.294118 | 62 | 0.707395 | 40 | 311 | 5.475 | 0.75 | 0.073059 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.234727 | 311 | 16 | 63 | 19.4375 | 0.920168 | 0.337621 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0.166667 | 0.5 | 0 | 0.833333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 3 |
f983560b381da0021faa8b318c7ecd01b487c63f | 16,051 | py | Python | lib/python3.7/site-packages/gatco_apimanager/views/couchdb.py | teomoney1999/ACT_gatco_project | a804a6348efeab90f3114606cfbc73aaebab63e1 | [
"MIT"
] | null | null | null | lib/python3.7/site-packages/gatco_apimanager/views/couchdb.py | teomoney1999/ACT_gatco_project | a804a6348efeab90f3114606cfbc73aaebab63e1 | [
"MIT"
] | 1 | 2019-11-27T08:58:03.000Z | 2019-11-27T08:58:03.000Z | lib/python3.7/site-packages/gatco_apimanager/views/couchdb.py | teomoney1999/ACT_gatco_project | a804a6348efeab90f3114606cfbc73aaebab63e1 | [
"MIT"
] | 2 | 2019-06-19T07:33:45.000Z | 2021-06-21T08:19:31.000Z | # -*- coding: utf-8 -
import asyncio
# import pymongo
import ujson
import math
# from bson.objectid import ObjectId
from cloudant.query import Query
from gatco.exceptions import GatcoException, ServerError
from gatco.response import json, text, HTTPResponse
from gatco.request import json_loads
from . import ModelView
# https://python-cloudant.readthedocs.io/en/latest/getting_started.html
# https://github.com/cloudant/python-cloudant
def to_dict(document, exclude=None, include=None):
obj = dict(document)
columns = list(obj.keys())
if exclude is not None:
for c in columns:
if c in exclude:
del (obj[c])
elif include is not None:
for c in columns:
if c not in include:
del (obj[c])
return obj
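A standalone check of the filtering rules above: `exclude` takes precedence, and `include` is consulted only when `exclude` is None. The helper is duplicated here so the snippet runs on its own.

```python
def to_dict(document, exclude=None, include=None):
    # Same logic as the helper above, copied so this sketch is self-contained.
    obj = dict(document)
    for c in list(obj.keys()):
        if exclude is not None:
            if c in exclude:
                del obj[c]
        elif include is not None:
            if c not in include:
                del obj[c]
    return obj
```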
def response_exception(exception):
if type(exception.message) is dict:
return json(exception.message, status=exception.status_code)
else:
return json({"error_code": "UNKNOWN_ERROR", "error_message": exception.message}, status=520)
class APIView(ModelView):
primary_key = "_id"
db = None
def _compute_results_per_page(self, request):
try:
results_per_page = int(request.args.get('results_per_page'))
except:
results_per_page = self.results_per_page
if results_per_page <= 0:
results_per_page = self.results_per_page
return min(results_per_page, self.max_results_per_page)
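The clamping rule above, restated as a plain function; the `default` and `maximum` values below are illustrative placeholders for `self.results_per_page` and `self.max_results_per_page`.

```python
def clamp_results_per_page(raw_value, default=10, maximum=100):
    # Fall back to the default on missing or non-numeric input and on
    # non-positive values, then cap the result at the allowed maximum.
    try:
        n = int(raw_value)
    except (TypeError, ValueError):
        n = default
    if n <= 0:
        n = default
    return min(n, maximum)
```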
async def _search(self, request, search_params):
is_single = search_params.get('single')
order_by_list = search_params.get('order_by', None)
# paginate
# num_results = 20
results_per_page = self._compute_results_per_page(request)
page_num = 1
if results_per_page > 0:
page_num = int(request.args.get('page', 1))
start = (page_num - 1) * results_per_page
# end = start + results_per_page
# total_pages = int(math.ceil(num_results / results_per_page))
else:
start = 0
# total_pages = 1
query = None
filters = {"doc_type": self.collection_name}
if 'filters' in search_params:
filters = search_params['filters']
filters["doc_type"] = self.collection_name
if order_by_list is not None:
query = Query(self.db.db, selector=filters, sort=order_by_list)
else:
query = Query(self.db.db, selector=filters)
        if is_single:
            result = None  # guard: the query may match no document at all
            for document in query(limit=1)['docs']:
                result = to_dict(document, exclude=self.exclude_columns, include=self.include_columns)
                break
else:
query_result = None
if (results_per_page is not None) and (results_per_page > 0):
if (start is not None) and (start > 0):
query_result = query.result[start:start + results_per_page]
else:
query_result = query.result[:results_per_page]
else:
if (start is not None) and (start > 0):
query_result = query.result[start:]
else:
query_result = query.result
objects = []
for document in query_result:
objects.append(to_dict(document, exclude=self.exclude_columns, include=self.include_columns))
new_page = page_num + 1
result = {
"page": page_num,
"next_page": new_page,
"objects": objects
}
return result
async def search(self, request):
try:
search_params = json_loads(request.args.get('q', '{}'))
except (TypeError, ValueError, OverflowError) as exception:
# current_app.logger.exception(str(exception))
return json(dict(error_code="PARAM_ERROR", error_message='Unable to decode data'), status=520)
try:
for preprocess in self.preprocess['GET_MANY']:
if asyncio.iscoroutinefunction(preprocess):
resp = await preprocess(request=request, search_params=search_params, collection_name=self.collection_name)
else:
resp = preprocess(request=request, search_params=search_params, collection_name=self.collection_name)
if (resp is not None) and isinstance(resp, HTTPResponse):
return resp
except Exception as exception:
return response_exception(exception)
result = await self._search(request, search_params)
if result is None:
return json(dict(error_code="NOT_FOUND", error_message='No result found'), status=520)
try:
headers = {}
for postprocess in self.postprocess['GET_MANY']:
if asyncio.iscoroutinefunction(postprocess):
resp = await postprocess(request=request, result=result, search_params=search_params, collection_name=self.collection_name, headers=headers)
else:
resp = postprocess(request=request, result=result, search_params=search_params, collection_name=self.collection_name, headers=headers)
if (resp is not None) and isinstance(resp, HTTPResponse):
return resp
except Exception as exception:
return response_exception(exception)
return json(result, headers=headers, status=200)
async def _get(self, request, instid):
document = self.db.db.get(instid, remote=True)
if document is not None:
obj = to_dict(document, exclude=self.exclude_columns, include=self.include_columns)
return obj
return None
async def get(self, request, instid=None):
if instid is None:
return await self.search(request)
try:
for preprocess in self.preprocess['GET_SINGLE']:
if asyncio.iscoroutinefunction(preprocess):
resp = await preprocess(request=request, instance_id=instid, collection_name=self.collection_name)
else:
resp = preprocess(request=request, instance_id=instid, collection_name=self.collection_name)
if (resp is not None) and isinstance(resp, HTTPResponse):
return resp
if resp is not None:
instid = resp
except Exception as exception:
return response_exception(exception)
result = await self._get(request, instid)
if result is None:
return json(dict(error_code="NOT_FOUND", error_message='No result found'), status=520)
try:
headers = {}
for postprocess in self.postprocess['GET_SINGLE']:
if asyncio.iscoroutinefunction(postprocess):
resp = await postprocess(request=request, result=result, collection_name=self.collection_name, headers=headers)
else:
resp = postprocess(request=request, result=result, collection_name=self.collection_name, headers=headers)
if (resp is not None) and isinstance(resp, HTTPResponse):
return resp
except Exception as exception:
return response_exception(exception)
return json(result, headers=headers, status=200)
async def delete(self, request, instid=None):
try:
for preprocess in self.preprocess['DELETE_SINGLE']:
if asyncio.iscoroutinefunction(preprocess):
resp = await preprocess(request=request, instance_id=instid, collection_name=self.collection_name)
else:
resp = preprocess(request=request, instance_id=instid, collection_name=self.collection_name)
if (resp is not None) and isinstance(resp, HTTPResponse):
return resp
# See the note under the preprocess in the get() method.
if resp is not None:
instid = resp
except Exception as exception:
return response_exception(exception)
was_deleted = False
if instid is not None:
            document = self.db.db.get(instid, remote=True)
            if document is not None:
                document.delete()
                was_deleted = True
try:
headers = {}
for postprocess in self.postprocess['DELETE_SINGLE']:
if asyncio.iscoroutinefunction(postprocess):
resp = await postprocess(request=request, was_deleted=was_deleted, collection_name=self.collection_name, headers=headers)
else:
resp = postprocess(request=request, was_deleted=was_deleted, collection_name=self.collection_name, headers=headers)
if (resp is not None) and isinstance(resp, HTTPResponse):
return resp
except Exception as exception:
return response_exception(exception)
return json({}, headers=headers, status=200) if was_deleted else json({}, headers=headers, status=520)
async def _post(self, request, data):
if "_id" in data:
del data["_id"]
if "_rev" in data:
del data["_rev"]
data["doc_type"] = self.collection_name
if self.primary_key is not None:
# if data["_id"] not in self.db.db:
document = self.db.db.create_document(data)
obj = to_dict(document, exclude=self.exclude_columns, include=self.include_columns)
del (obj["_rev"])
return obj
async def post(self, request):
content_type = request.headers.get('Content-Type', "")
content_is_json = content_type.startswith('application/json')
if not content_is_json:
msg = 'Request must have "Content-Type: application/json" header'
return json(dict(message=msg), status=520)
try:
data = request.json or {}
except (ServerError, TypeError, ValueError, OverflowError) as exception:
# current_app.logger.exception(str(exception))
return json(dict(message='Unable to decode data'), status=520)
try:
for preprocess in self.preprocess['POST']:
if asyncio.iscoroutinefunction(preprocess):
resp = await preprocess(request=request, data=data, collection_name=self.collection_name)
else:
resp = preprocess(request=request, data=data, collection_name=self.collection_name)
if (resp is not None) and isinstance(resp, HTTPResponse):
return resp
except Exception as exception:
return response_exception(exception)
result = await self._post(request, data)
        if result is None:
            return json(dict(error_code='UNKNOWN_ERROR', error_message=''), status=520)
try:
headers = {}
for postprocess in self.postprocess['POST']:
if asyncio.iscoroutinefunction(postprocess):
resp = await postprocess(request=request, result=result, collection_name=self.collection_name, headers=headers)
else:
resp = postprocess(request=request, result=result, collection_name=self.collection_name, headers=headers)
if (resp is not None) and isinstance(resp, HTTPResponse):
return resp
except Exception as exception:
return response_exception(exception)
return json(result, headers=headers, status=201)
async def _put(self, request, data, instid):
if "doc_type" in data:
del data["doc_type"]
if "_rev" in data:
del data["_rev"]
if instid is not None:
document = self.db.db.get(instid, remote=True)
if document is not None:
update = False
for key, value in data.items():
try:
if document.get(key) != value:
update = True
document[key] = value
                    except Exception:
                        pass
if update:
document.save()
if document is not None:
return to_dict(document, exclude=self.exclude_columns, include=self.include_columns)
else:
if self.primary_key is not None:
data["doc_type"] = self.collection_name
document = self.db.db.create_document(data)
obj = to_dict(document, exclude=self.exclude_columns, include=self.include_columns)
del (obj["_rev"])
return obj
else:
if self.primary_key is not None:
data["doc_type"] = self.collection_name
document = self.db.db.create_document(data)
obj = to_dict(document, exclude=self.exclude_columns, include=self.include_columns)
del (obj["_rev"])
return obj
return None
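The update-or-create flow of `_put`, sketched against a plain dict so it can be exercised without CouchDB; `upsert`, the `store` argument, and the fake key generation are all illustrative assumptions.

```python
def upsert(store, key, data):
    # Update the stored document field by field when the key exists,
    # otherwise create a new entry (mirroring _put's two branches).
    if key is not None and key in store:
        for k, v in data.items():
            if store[key].get(k) != v:
                store[key][k] = v
    else:
        key = key if key is not None else "doc-%d" % len(store)
        store[key] = dict(data)
    return key, store[key]
```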
async def put(self, request, instid=None):
content_type = request.headers.get('Content-Type', "")
content_is_json = content_type.startswith('application/json')
if not content_is_json:
msg = 'Request must have "Content-Type: application/json" header'
return json(dict(message=msg), status=520)
try:
data = request.json or {}
except (ServerError, TypeError, ValueError, OverflowError) as exception:
# current_app.logger.exception(str(exception))
return json(dict(error_code='PARAM_ERROR', error_message='Unable to decode data'), status=520)
for preprocess in self.preprocess['PATCH_SINGLE']:
try:
if asyncio.iscoroutinefunction(preprocess):
resp = await preprocess(request=request, instance_id=instid, data=data, collection_name=self.collection_name)
else:
resp = preprocess(request=request, instance_id=instid, data=data, collection_name=self.collection_name)
if (resp is not None) and isinstance(resp, HTTPResponse):
return resp
# See the note under the preprocess in the get() method.
if resp is not None:
instid = resp
except Exception as exception:
return response_exception(exception)
result = await self._put(request, data, instid)
if result is None:
return json(dict(error_code='UNKNOWN_ERROR', error_message=''), status=520)
headers = {}
try:
for postprocess in self.postprocess['PATCH_SINGLE']:
if asyncio.iscoroutinefunction(postprocess):
resp = await postprocess(request=request, result=result, collection_name=self.collection_name, headers=headers)
else:
resp = postprocess(request=request, result=result, collection_name=self.collection_name, headers=headers)
if (resp is not None) and isinstance(resp, HTTPResponse):
return resp
except Exception as exception:
return response_exception(exception)
return json(result, headers=headers, status=200)
async def patch(self, *args, **kw):
        """Alias for :meth:`put`."""
return await self.put(*args, **kw)
| 40.842239 | 160 | 0.595041 | 1,767 | 16,051 | 5.253537 | 0.099038 | 0.070882 | 0.027146 | 0.060325 | 0.76527 | 0.727459 | 0.708176 | 0.671119 | 0.671119 | 0.643219 | 0 | 0.005858 | 0.319357 | 16,051 | 392 | 161 | 40.946429 | 0.843844 | 0.051648 | 0 | 0.59 | 0 | 0 | 0.041326 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.01 | false | 0.003333 | 0.026667 | 0 | 0.203333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
f989a8ab7bd4c020ae58199b39d5401abadd08a1 | 357 | py | Python | back/api/management/commands/setcustompassword.py | kalbermattenm/geoshop2 | b1c5c9124248e7df4d4653e9b7cb33590e9d11de | [
"BSD-3-Clause"
] | null | null | null | back/api/management/commands/setcustompassword.py | kalbermattenm/geoshop2 | b1c5c9124248e7df4d4653e9b7cb33590e9d11de | [
"BSD-3-Clause"
] | null | null | null | back/api/management/commands/setcustompassword.py | kalbermattenm/geoshop2 | b1c5c9124248e7df4d4653e9b7cb33590e9d11de | [
"BSD-3-Clause"
] | null | null | null | import os
from django.core.management.base import BaseCommand, CommandError
from django.contrib.auth import get_user_model
UserModel = get_user_model()
class Command(BaseCommand):
def handle(self, *args, **options):
u = UserModel.objects.get(username='admin')
u.set_password(os.environ.get('ADMIN_PASSWORD', 'admin'))
u.save()
| 29.75 | 65 | 0.722689 | 47 | 357 | 5.361702 | 0.638298 | 0.079365 | 0.095238 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.156863 | 357 | 11 | 66 | 32.454545 | 0.837209 | 0 | 0 | 0 | 0 | 0 | 0.067227 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0.111111 | 0.333333 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 3 |
f99e4cafa46fa36cad26a48954e6a140b222dc41 | 501 | py | Python | reqcli/errors.py | shiftinv/reqcli | 3b8fe42bcddf7676bb7d9a70f6bd2a430ae7beb1 | [
"Apache-2.0"
] | null | null | null | reqcli/errors.py | shiftinv/reqcli | 3b8fe42bcddf7676bb7d9a70f6bd2a430ae7beb1 | [
"Apache-2.0"
] | null | null | null | reqcli/errors.py | shiftinv/reqcli | 3b8fe42bcddf7676bb7d9a70f6bd2a430ae7beb1 | [
"Apache-2.0"
] | null | null | null | class ResponseStatusError(Exception):
status: int
def __init__(self, message: str, status: int):
super().__init__(message)
self.status = status
def __reduce__(self): # pragma: no cover
return (type(self), (*self.args, self.status))
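Why the custom `__reduce__` matters: an Exception subclass whose `__init__` takes an extra required argument does not round-trip through pickle by default, because only `self.args` is replayed. The class is copied below so the demonstration runs standalone.

```python
import pickle

class ResponseStatusError(Exception):
    # Copy of the class above so the round-trip demo is self-contained.
    status: int

    def __init__(self, message: str, status: int):
        super().__init__(message)
        self.status = status

    def __reduce__(self):
        # Replay both the message and the extra `status` argument.
        return (type(self), (*self.args, self.status))

err = pickle.loads(pickle.dumps(ResponseStatusError("bad gateway", 502)))
```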
class ConfigDependencyError(Exception):
pass
class XmlLoadError(Exception):
pass
class XmlSchemaError(Exception):
pass
class ReaderError(Exception):
pass
class TypeAlreadyLoadedError(Exception):
pass
| 16.7 | 54 | 0.686627 | 51 | 501 | 6.509804 | 0.470588 | 0.195783 | 0.216867 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.215569 | 501 | 29 | 55 | 17.275862 | 0.844784 | 0.031936 | 0 | 0.294118 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0.294118 | 0 | 0.058824 | 0.588235 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 3 |
f9a7923e4fcf5836f4c0dbf26f9c1465c820f925 | 49 | py | Python | cst_modeling/__init__.py | swayli94/cst-modeling3d | 3799aec6fa08ae5e9b66760bf0a8c998527f9a8c | [
"MIT"
] | 5 | 2020-11-17T02:24:55.000Z | 2021-12-18T10:12:21.000Z | cst_modeling/__init__.py | swayli94/cst-modeling3d | 3799aec6fa08ae5e9b66760bf0a8c998527f9a8c | [
"MIT"
] | null | null | null | cst_modeling/__init__.py | swayli94/cst-modeling3d | 3799aec6fa08ae5e9b66760bf0a8c998527f9a8c | [
"MIT"
] | 1 | 2021-03-13T16:19:57.000Z | 2021-03-13T16:19:57.000Z | __version__ = "0.1.13"
__name__ = "cst_modeling"
| 16.333333 | 25 | 0.714286 | 7 | 49 | 3.714286 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093023 | 0.122449 | 49 | 2 | 26 | 24.5 | 0.511628 | 0 | 0 | 0 | 0 | 0 | 0.367347 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
f9abcb53f8c5b40987eb6b1c02d59e4b4c660b34 | 827 | py | Python | tests/test_data.py | alingse/emoji-chengyu | 2d4436212c1d2899dfc12a1c965ea2ddce9a4aab | [
"MIT"
] | 3 | 2020-04-28T03:25:36.000Z | 2022-01-24T04:52:01.000Z | tests/test_data.py | alingse/emoji-chengyu | 2d4436212c1d2899dfc12a1c965ea2ddce9a4aab | [
"MIT"
] | null | null | null | tests/test_data.py | alingse/emoji-chengyu | 2d4436212c1d2899dfc12a1c965ea2ddce9a4aab | [
"MIT"
] | 1 | 2020-04-28T03:25:49.000Z | 2020-04-28T03:25:49.000Z | import unittest
from emoji_chengyu.data import DefaultChengyuManager
from emoji_chengyu.data import CommonChengyuManager
from emoji_chengyu.data import DefaultEmojiManager
from emoji_chengyu.data import clean_tone
from emoji_chengyu.data import split_pinyin
class TestData(unittest.TestCase):
    def test_load_on_init(self):
        assert len(DefaultChengyuManager.chengyu_list) > 0
        assert len(CommonChengyuManager.chengyu_list) > 0
        assert len(DefaultEmojiManager.emoji_list) > 0

    def test_clean_tone(self):
        self.assertEqual(clean_tone('nihao'), 'nihao')
        self.assertEqual(clean_tone('dòu'), 'dou')
        self.assertEqual(clean_tone('zǒu gǒu'), 'zou gou')

    def test_split_pinyin(self):
        pinyin = "ān bù lí mǎ,jiǎ bù lí shēn"
        assert len(split_pinyin(pinyin)) == 9
| 30.62963 | 58 | 0.737606 | 107 | 827 | 5.504673 | 0.383178 | 0.076401 | 0.135823 | 0.169779 | 0.29202 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005882 | 0.177751 | 827 | 26 | 59 | 31.807692 | 0.860294 | 0 | 0 | 0 | 0 | 0 | 0.067715 | 0 | 0 | 0 | 0 | 0 | 0.388889 | 1 | 0.166667 | false | 0 | 0.333333 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
f9b38134648472f5513418f289ad7f59bb4c5113 | 2,017 | py | Python | wlroots/wlr_types/__init__.py | anriha/pywlroots | 13e043fc73b40249d09596f5c26580b1ae55ec54 | [
"NCSA"
] | null | null | null | wlroots/wlr_types/__init__.py | anriha/pywlroots | 13e043fc73b40249d09596f5c26580b1ae55ec54 | [
"NCSA"
] | null | null | null | wlroots/wlr_types/__init__.py | anriha/pywlroots | 13e043fc73b40249d09596f5c26580b1ae55ec54 | [
"NCSA"
] | null | null | null | # Copyright (c) 2019 Sean Vig
from .compositor import Compositor # noqa: F401
from .cursor import Cursor # noqa: F401
from .data_control_v1 import DataControlManagerV1 # noqa: F401
from .data_device_manager import DataDeviceManager # noqa: F401
from .foreign_toplevel_management_v1 import ForeignToplevelManagerV1 # noqa: F401
from .gamma_control_v1 import GammaControlManagerV1 # noqa: F401
from .input_device import InputDevice # noqa: F401
from .input_inhibit import InputInhibitManager # noqa: F401
from .keyboard import Keyboard # noqa: F401
from .layer_shell_v1 import LayerShellV1 # noqa: F401
from .matrix import Matrix # noqa: F401
from .output import Output # noqa: F401
from .output_damage import OutputDamage # noqa: F401
from .output_layout import OutputLayout # noqa: F401
from .pointer import (  # noqa: F401
    PointerEventAxis,
    PointerEventButton,
    PointerEventMotion,
    PointerEventMotionAbsolute,
)
from .pointer_constraints_v1 import (  # noqa: F401
    PointerConstraintsV1,
    PointerConstraintV1,
)
from .primary_selection_v1 import PrimarySelectionV1DeviceManager # noqa: F401
from .relative_pointer_manager_v1 import RelativePointerManagerV1 # noqa: F401
from .scene import Scene, SceneNode # noqa: F401
from .screencopy_v1 import ScreencopyManagerV1 # noqa: F401
from .seat import Seat # noqa: F401
from .surface import Surface, SurfaceState # noqa: F401
from .texture import Texture # noqa: F401
from .virtual_keyboard_v1 import VirtualKeyboardManagerV1 # noqa: F401
from .xcursor_manager import XCursorManager # noqa: F401
from .xdg_decoration_v1 import XdgDecorationManagerV1 # noqa: F401
from .xdg_output_v1 import XdgOutputManagerV1 # noqa: F401
from .xdg_shell import XdgShell # noqa: F401
def __getattr__(name: str):
    if name == "Box":
        from .box import Box  # noqa: F401

        return Box
    try:
        return globals()[name]
    except KeyError:
        raise ImportError(f"cannot import name '{name}' from wlroots.wlr_types")
| 40.34 | 82 | 0.768468 | 242 | 2,017 | 6.256198 | 0.355372 | 0.153236 | 0.198151 | 0.035667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068019 | 0.169063 | 2,017 | 49 | 83 | 41.163265 | 0.835322 | 0.171542 | 0 | 0 | 0 | 0 | 0.032317 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.022727 | false | 0 | 0.681818 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
f9bb4c56c85fe0458266ec52b26fe1dbee6037d6 | 967 | py | Python | tbr_reg/__init__.py | ucl-tbr-group-project/regression | ab02f62cdf83ffc43d9b8e88b0c1833190d65b95 | [
"MIT"
] | null | null | null | tbr_reg/__init__.py | ucl-tbr-group-project/regression | ab02f62cdf83ffc43d9b8e88b0c1833190d65b95 | [
"MIT"
] | null | null | null | tbr_reg/__init__.py | ucl-tbr-group-project/regression | ab02f62cdf83ffc43d9b8e88b0c1833190d65b95 | [
"MIT"
] | null | null | null | from .autoencoders import create_autoencoder
from .autoencoders import train_autoencoder
from .data_utils import load_batches
from .data_utils import encode_data_frame
from .data_utils import x_y_split
from .data_utils import c_d_y_split
from .gans import create_discriminator
from .gans import create_generator
from .gans import create_discriminator_model
from .gans import create_adversarial_model
from .gans import train_gan
from .hyperopt.grid_search import grid_search
from .hyperopt.model_space import model_space_product
from .model_loader import get_model_factory
from .model_loader import load_model_from_file
from .plot_params_vs_tbr import plot_params_vs_tbr
from .plot_reg_performance import plot_reg_performance
from .plot_reg_vs_time import plot_reg_vs_time
from .plot_utils import set_plotting_style
from .plot_utils import density_scatter
from .train import apply_on_y_columns
from .train import fit_multiple
from .train import predict_multiple
| 32.233333 | 54 | 0.872802 | 153 | 967 | 5.130719 | 0.326797 | 0.084076 | 0.089172 | 0.096815 | 0.084076 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.101344 | 967 | 29 | 55 | 33.344828 | 0.903337 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
f9cac33d6aab543b9bb9aaf8d87b105473df95c9 | 33 | py | Python | src/__init__.py | uvraj88/SimpleNetworkService | 9396654f148170f02ce13577a25f8caaa6415a2e | [
"Apache-2.0"
] | null | null | null | src/__init__.py | uvraj88/SimpleNetworkService | 9396654f148170f02ce13577a25f8caaa6415a2e | [
"Apache-2.0"
] | null | null | null | src/__init__.py | uvraj88/SimpleNetworkService | 9396654f148170f02ce13577a25f8caaa6415a2e | [
"Apache-2.0"
] | null | null | null | #init file
__version__ = '1.0.0'
| 11 | 21 | 0.666667 | 6 | 33 | 3 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107143 | 0.151515 | 33 | 2 | 22 | 16.5 | 0.535714 | 0.272727 | 0 | 0 | 0 | 0 | 0.217391 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
ddb24d05ac2913bf9d5467c1c017c03fc1c95ef3 | 8,629 | py | Python | printers/fairresults70.py | simonlet/MissRateSimulator | 89f481a3408d243a61a6907f332e36ca9a2fb45d | [
"MIT"
] | 1 | 2021-05-26T16:03:13.000Z | 2021-05-26T16:03:13.000Z | printers/fairresults70.py | simonlet/MissRateSimulator | 89f481a3408d243a61a6907f332e36ca9a2fb45d | [
"MIT"
] | null | null | null | printers/fairresults70.py | simonlet/MissRateSimulator | 89f481a3408d243a61a6907f332e36ca9a2fb45d | [
"MIT"
] | 3 | 2019-05-27T16:53:28.000Z | 2021-02-07T21:58:11.000Z | import matplotlib
matplotlib.use('Agg')
import matplotlib.patches as mpatches
import random
import math
import numpy as np
import matplotlib.pyplot as plt
import itertools
from matplotlib import rcParams
from matplotlib.backends.backend_pdf import PdfPages
#fileName = 'diffrent_set_size'
fileName = '10tasks_u70'
folder = 'final_plot/'
perfault = []
# plot in pdf
pp = PdfPages(folder + fileName + '.pdf')
percentageU = 70
#title = 'Tasks: '+ repr(10) + ', Utilization:'+repr(percentageU)+'%'
title = 'Tasks: '+ repr(10) + ', $U^N_{SUM}$:'+repr(percentageU)+'%'
faultRate = [10**-2., 10**-4, 10**-6.]
rate102 = [0.00222184606643217, 9.39324716686523e-10, 1.49597848095289e-8, 2.82808218593350e-6, 0.000519849044411903, 7.91475444562763e-10, 7.63137213153905e-9, 0.00844286257163001, 4.85236052205006e-5, 2.90563491251103e-8, 1.91460710499306e-5, 3.74261719880609e-9, 0.00713732790552336, 0.000207450616666342, 1.39423096296592e-7, 1.12540466961398e-6, 0.000365215756628136, 0.000610315643742477, 0.0141338447497429, 9.75821568330343e-11, 3.14328179987514e-9, 7.94471258590973e-5, 4.31921903676913e-9, 3.22952584412667e-8, 1.21789332986402e-8, 3.50470600511955e-9, 1.07798151743278e-9, 2.41743004314943e-9, 1.53109752426535e-5, 6.69294805754113e-5, 4.43053428536352e-5, 0.129091977880312, 6.57197834950264e-8, 8.14766361347744e-10, 0.000109200822785597, 8.33601490379021e-7, 1.42511699542707e-8, 0.0259112869376599, 1.97181386727492e-8, 2.66813604969816e-8, 2.56688412663861e-9, 0.000301936621855498, 0.00273439666925762, 5.40020267302603e-8, 1.74936443476837e-8, 1.54602431135751e-6, 0.000568039358057984, 3.19406528700786e-11, 1.15739385850340e-6, 0.00180186413239562, 3.03198334119634e-5, 1.03149500246218e-9, 0.0129149469144891, 0.0143390629520243, 0.00259558019110573, 1.93580791164615e-8, 2.55111987179379e-7, 0.000152258872221307, 5.42604163306919e-5, 2.01483792895990e-5, 0.000684083196343718, 4.16399041462163e-8, 2.13472945543019e-6, 1.53061079443339e-5, 4.28533411016388e-7, 0.00585649086012912, 2.37257895261147e-7, 0.000104624934316751, 3.01003596485980e-5, 2.27653602379746e-9, 6.29822794216893e-5, 1.44875341319109e-8, 3.62678316357252e-6, 7.02911964737374e-5, 5.77516941349414e-6, 0.0611608199907390, 1.42603058812823e-7, 1.22439187597120e-9, 7.37031693326042e-8, 5.66778932028624e-7, 1.51381148642878e-6, 5.87553980690649e-7, 5.56701215028766e-7, 1.98712384192308e-5, 3.02540207700675e-5, 0.000670861064895008, 0.00833562215374853, 0.000995406161983560, 0.0121058396237000, 2.15574010657646e-5, 2.97706259742780e-6, 5.62510432190370e-6, 1.60632355202967e-8, 1.02592081515856e-5, 
1.63302734258775e-9, 6.53003207019999e-5, 5.89093970422537e-6, 7.97228025405875e-6, 2.43182208854725e-9, 0.00441120623659347]
rate104 = [2.55922634486244e-9, 4.02511098423670e-7, 0.00239760214429604, 2.92184396061692e-10, 4.68027206076662e-11, 2.77189315369545e-7, 4.03757229254190e-10, 3.79332106423175e-6, 1.99132024417317e-9, 3.72599220833455e-5, 1.66893171485177e-6, 0.000140263086000775, 5.49910920259565e-7, 1.46020776398318e-7, 7.59011058216085e-8, 0.000680440579154460, 9.93208001913125e-7, 3.46311405252067e-10, 1.91213199475381e-6, 7.55088383015764e-8, 1.24781955378994e-6, 1.15370434185976e-5, 6.85316427995476e-10, 1.33744752500981e-8, 2.12136263540692e-6, 0.000166945214883034, 1.13641061115819e-6, 7.01225546690983e-9, 3.33002807838448e-8, 3.74556840722565e-10, 3.71368686833925e-9, 1.47221531580665e-8, 1.14071646669328e-8, 7.51558903780034e-8, 4.25350362325632e-5, 1.23779414960434e-7, 4.52282485426895e-9, 9.55235380921678e-9, 7.60192402213061e-10, 1.35566862267848e-8, 7.20241424474404e-8, 4.52843101819219e-5, 1.15163838038676e-9, 7.24721501414755e-8, 1.76426699014602e-8, 6.62810015386497e-8, 8.52047452974755e-11, 8.44107542687781e-10, 4.26862155165545e-13, 1.13999220453730e-9, 1.32066039353684e-6, 3.47672767934463e-8, 2.26590601126801e-10, 1.19406400021389e-7, 7.43728250508719e-9, 9.26329687702886e-9, 9.99327200558436e-10, 2.82290903681309e-6, 2.89794846966364e-7, 2.61412821277112e-8, 1.61921677166592e-8, 6.50828822701966e-8, 1.82341982305757e-9, 7.86976455864671e-11, 5.95354399895515e-10, 2.01648661895825e-7, 4.17308574139251e-6, 5.02162643942448e-12, 1.97158364139061e-6, 1.05249446361136e-10, 0.00135739377433914, 1.66399209160707e-6, 7.57494023507982e-8, 4.08937697570720e-7, 8.69189465901483e-9, 1.69504869756261e-10, 1.40137426975428e-10, 3.95837466833965e-9, 1.23392998307897e-10, 1.04665988150720e-7, 1.30726987992533e-7, 2.92781349377796e-11, 0.000868803822898168, 0.00257277203529016, 2.07299584107239e-9, 2.63019712043447e-7, 2.80363229738044e-10, 3.87962784285876e-6, 8.36214624379606e-9, 1.95584160695076e-11, 2.49866614732545e-7, 0.000611654233942630, 3.26616021684404e-7, 
8.57540892372777e-12, 1.42505813760285e-7, 5.48403240083137e-9, 2.02895691825970e-8, 0.00231019725722616, 8.84974117290134e-11, 1.93458943327190e-7]
rate106 = [8.10246326875006e-8, 3.19572386172112e-9, 3.70214040707043e-7, 2.81485362609499e-9, 2.66235949386522e-9, 8.48893373659518e-9, 0.000251633014266995, 0.000135382856725150, 1.00898330391177e-5, 1.98681838141496e-7, 8.45554235730134e-8, 6.35465746553238e-9, 8.91758800061116e-7, 4.42137419616922e-10, 4.57101772946562e-7, 1.87328885932463e-5, 9.72096057018516e-8, 1.44308567253600e-10, 0.000159427990721121, 0.00130897598013566, 3.67808657193573e-7, 2.97773597334758e-9, 1.33609909282283e-7, 3.96827811355925e-10, 6.97919198165053e-9, 1.18320210195456e-8, 2.16040337962765e-8, 2.08339322122311e-10, 1.98435666604422e-8, 1.42674788399530e-10, 2.07208671508576e-10, 3.62129606893985e-8, 6.04525877332040e-10, 2.35420635231133e-6, 1.05101361047384e-8, 6.03706301598164e-9, 3.53058371018829e-12, 3.41459273578394e-9, 1.53405530923801e-6, 8.61208993386428e-9, 4.86343231386139e-9, 1.91902888494632e-7, 2.69837228723710e-10, 6.84784370482018e-8, 0.000769147005785305, 0.000250265233675757, 1.18203624174800e-7, 4.16933304076519e-10, 1.65326623023707e-11, 2.64428098968946e-7, 4.90666504981627e-11, 2.42137123867731e-8, 4.46923732351041e-8, 4.72588114887817e-11, 1.25224682058919e-7, 1.25886692060796e-5, 4.00180818548924e-8, 8.26883841639840e-9, 1.98476299877318e-11, 1.82068141418345e-9, 7.09273345021130e-9, 8.16940726884649e-11, 5.49826401646357e-10, 4.04186360195653e-5, 4.59721463211207e-7, 6.54987436233689e-11, 4.91919078603306e-8, 1.62387725752927e-9, 1.20516317516089e-9, 2.36968590935437e-9, 7.22480389150433e-10, 6.58666085366840e-7, 1.64611675896198e-8, 4.99008066068288e-12, 1.29465058340227e-7, 2.20179378233447e-6, 1.47427861957454e-8, 3.28139875139643e-12, 3.79133089057844e-11, 6.66624900817902e-7, 5.80694957430743e-11, 2.48210279991865e-7, 6.98262056147752e-9, 1.68626690527661e-10, 3.46041782742860e-8, 7.26467025543694e-5, 6.97675913021538e-8, 4.84110109230271e-9, 6.39669263117586e-7, 1.32584269738637e-8, 3.91429149309245e-11, 1.41625758571705e-9, 3.49283325705739e-9, 
7.49032340673913e-10, 7.68595473810612e-12, 6.22348878558919e-10, 7.59810676061588e-9, 1.75195124267886e-10, 1.79458794153281e-7, 9.95413237899837e-10]
frate102 = []
frate104 = []
frate106 = []
for x in rate102:
    if x != 0:
        frate102.append(x)
for x in rate104:
    if x != 0:
        frate104.append(x)
for x in rate106:
    if x != 0:
        frate106.append(x)
perfault.append(frate102)
perfault.append(frate104)
perfault.append(frate106)
'''
lower_error=[]
upper_error=[]
median = []
for i in range(4):
    median.append(np.median(perfault[i]))
    lower_error.append(min(perfault[i]))
    upper_error.append(max(perfault[i]))
asymmetric_error = (lower_error, upper_error)
print(asymmetric_error)
'''
#the blue box
boxprops = dict(linewidth=2, color='blue')
#the median line
medianprops = dict(linewidth=2.5, color='red')
whiskerprops = dict(linewidth=2.5, color='black')
capprops = dict(linewidth=2.5)
plt.title(title, fontsize=20)
plt.grid(True)
plt.ylabel('Expected Miss Rate', fontsize=20)
plt.xlabel('Fault Rate $P^A_i$', fontsize=22)
ax = plt.subplot()
ax.set_yscale("log")
ax.set_ylim([10**-21,10**0])
ax.tick_params(axis='both', which='major',labelsize=20)
labels = ('$10^{-2}$','$10^{-4}$', '$10^{-6}$')
try:
    ax.boxplot(perfault, 0, '', labels=labels, boxprops=boxprops, whiskerprops=whiskerprops, capprops=capprops)
except ValueError:
    print("ValueError")
figure = plt.gcf()
figure.set_size_inches([10,6.5])
box = mpatches.Patch(color='blue', label='First to Third Quartiles', linewidth=3)
av = mpatches.Patch(color='red', label='Median', linewidth=3)
whisk = mpatches.Patch(color='black', label='Whiskers', linewidth=3)
plt.legend(handles=[av, box, whisk], fontsize=16, frameon=True, loc=1)
pp.savefig()
plt.clf()
pp.close()
| 90.831579 | 2,144 | 0.785375 | 1,179 | 8,629 | 5.731128 | 0.373198 | 0.00444 | 0.008288 | 0.00666 | 0.012432 | 0.002664 | 0 | 0 | 0 | 0 | 0 | 0.631055 | 0.069301 | 8,629 | 94 | 2,145 | 91.797872 | 0.21031 | 0.015877 | 0 | 0.04918 | 0 | 0 | 0.024079 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.147541 | 0 | 0.147541 | 0.016393 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
ddba631bac0c4624ecd16301bbc93a45be5e7b6a | 186 | py | Python | setup.py | uptrace/uptrace-python | 808ee29e6cecfa561224c4f4387f8a9fb95e85a6 | [
"BSD-2-Clause"
] | 7 | 2020-10-10T09:07:06.000Z | 2022-03-17T14:26:05.000Z | setup.py | uptrace/uptrace-python | 808ee29e6cecfa561224c4f4387f8a9fb95e85a6 | [
"BSD-2-Clause"
] | 31 | 2021-02-23T11:31:15.000Z | 2022-03-21T08:06:12.000Z | setup.py | uptrace/uptrace-python | 808ee29e6cecfa561224c4f4387f8a9fb95e85a6 | [
"BSD-2-Clause"
] | 3 | 2020-11-30T13:36:54.000Z | 2022-02-23T17:37:06.000Z | import os
import setuptools
PKG_INFO = {}
with open(os.path.join("src", "uptrace", "version.py")) as f:
    exec(f.read(), PKG_INFO)
setuptools.setup(version=PKG_INFO["__version__"])
| 18.6 | 61 | 0.698925 | 28 | 186 | 4.392857 | 0.642857 | 0.170732 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.123656 | 186 | 9 | 62 | 20.666667 | 0.754601 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
ddbf0cbb40263312c451826ff2e69b53928d7416 | 60 | py | Python | package_updater/__init__.py | superlevure/package_updater | df80df23bd24fb3856cf723081ac4a68f2c412e3 | [
"MIT"
] | null | null | null | package_updater/__init__.py | superlevure/package_updater | df80df23bd24fb3856cf723081ac4a68f2c412e3 | [
"MIT"
] | null | null | null | package_updater/__init__.py | superlevure/package_updater | df80df23bd24fb3856cf723081ac4a68f2c412e3 | [
"MIT"
] | null | null | null | from .package_updater import Update
__version__ = "0.0.3"
| 12 | 35 | 0.75 | 9 | 60 | 4.444444 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.058824 | 0.15 | 60 | 4 | 36 | 15 | 0.72549 | 0 | 0 | 0 | 0 | 0 | 0.084746 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
ddc06c65eb1a2f3358dc1d1c80fb8211ee59e6a3 | 445 | py | Python | polyaxon/api/index/health.py | elyase/polyaxon | 1c19f059a010a6889e2b7ea340715b2bcfa382a0 | [
"MIT"
] | null | null | null | polyaxon/api/index/health.py | elyase/polyaxon | 1c19f059a010a6889e2b7ea340715b2bcfa382a0 | [
"MIT"
] | null | null | null | polyaxon/api/index/health.py | elyase/polyaxon | 1c19f059a010a6889e2b7ea340715b2bcfa382a0 | [
"MIT"
] | null | null | null | from rest_framework import status
from rest_framework.response import Response
from rest_framework.throttling import AnonRateThrottle
from rest_framework.views import APIView
class HealthRateThrottle(AnonRateThrottle):
    scope = 'health'


class HealthView(APIView):
    authentication_classes = ()
    throttle_classes = (HealthRateThrottle,)

    def get(self, request, *args, **kwargs):
        return Response(status=status.HTTP_200_OK)
| 26.176471 | 54 | 0.782022 | 49 | 445 | 6.938776 | 0.571429 | 0.094118 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007895 | 0.146067 | 445 | 16 | 55 | 27.8125 | 0.886842 | 0 | 0 | 0 | 0 | 0 | 0.013483 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.363636 | 0.090909 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
ddcca3c42effd1a86ff82fad018297ec94ad1127 | 138 | py | Python | test/resources/inspectors/python/case3_redefining_builtin.py | hyperskill/hyperstyle | bf3c6e2dc42290ad27f2d30ce42d84a53241544b | [
"Apache-2.0"
] | 18 | 2020-10-05T16:48:11.000Z | 2022-03-22T04:15:38.000Z | test/resources/inspectors/python/case3_redefining_builtin.py | hyperskill/hyperstyle | bf3c6e2dc42290ad27f2d30ce42d84a53241544b | [
"Apache-2.0"
] | 60 | 2020-10-05T17:01:05.000Z | 2022-01-27T12:46:14.000Z | test/resources/inspectors/python/case3_redefining_builtin.py | hyperskill/hyperstyle | bf3c6e2dc42290ad27f2d30ce42d84a53241544b | [
"Apache-2.0"
] | 6 | 2021-02-09T09:31:19.000Z | 2021-08-13T07:45:51.000Z | a = int(input())
b = int(input())
range = range(a, b + 1)
list = list(filter(lambda x: x % 3 == 0, range))
print(sum(list) / len(list))
| 17.25 | 48 | 0.572464 | 25 | 138 | 3.16 | 0.6 | 0.202532 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027027 | 0.195652 | 138 | 7 | 49 | 19.714286 | 0.684685 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.2 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
ddd7affd6e7048e92218de8726cda8982eeb56a1 | 110 | py | Python | nbconvert_reportlab/__init__.py | takluyver/nbconvert-reportlab | 8f921dcd7fb441c2555642d7364b011d4711aaf3 | [
"MIT"
] | 13 | 2016-09-29T17:28:26.000Z | 2019-09-09T15:08:47.000Z | nbconvert_reportlab/__init__.py | takluyver/nbconvert-reportlab | 8f921dcd7fb441c2555642d7364b011d4711aaf3 | [
"MIT"
] | 4 | 2016-09-29T17:45:02.000Z | 2017-11-15T09:31:55.000Z | nbconvert_reportlab/__init__.py | takluyver/nbconvert-reportlab | 8f921dcd7fb441c2555642d7364b011d4711aaf3 | [
"MIT"
] | 3 | 2016-12-06T10:13:59.000Z | 2019-11-29T09:46:49.000Z | """Convert notebooks to PDF using Reportlab
"""
__version__ = '0.2'
from .exporter import ReportlabExporter
| 15.714286 | 43 | 0.754545 | 13 | 110 | 6.076923 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021277 | 0.145455 | 110 | 6 | 44 | 18.333333 | 0.819149 | 0.363636 | 0 | 0 | 0 | 0 | 0.047619 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
ddd93cb01b6f286d2cf3bf053ca404d9d463910d | 270 | py | Python | classes/angle.py | KenNewcomb/Topologist | 8e3faffdb1945ca355b1667aaae48a51df930de1 | [
"MIT"
] | 3 | 2015-07-15T18:42:51.000Z | 2019-05-01T10:52:57.000Z | classes/angle.py | KenNewcomb/Topologist | 8e3faffdb1945ca355b1667aaae48a51df930de1 | [
"MIT"
] | null | null | null | classes/angle.py | KenNewcomb/Topologist | 8e3faffdb1945ca355b1667aaae48a51df930de1 | [
"MIT"
] | null | null | null | # angle.py: A class describing an angle between three atoms.
class Angle:
    atom1 = ""
    atom2 = ""
    atom3 = ""
    angle = 0.0

    def __init__(self, atom1, atom2, atom3, angle):
        self.atom1 = atom1
        self.atom2 = atom2
        self.atom3 = atom3
        self.angle = angle
| 19.285714 | 60 | 0.62963 | 37 | 270 | 4.486486 | 0.432432 | 0.120482 | 0.180723 | 0.240964 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.069652 | 0.255556 | 270 | 13 | 61 | 20.769231 | 0.756219 | 0.214815 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
ddecb00af4970d6b0bfa6886415e8136f7d81a95 | 159 | py | Python | EfficientCoding/Assignment-6-shooting.py | vikbehal/Explore | b35948d8a6894647df3ee462746475f7e66f78f8 | [
"MIT"
] | 3 | 2019-01-29T06:33:34.000Z | 2022-01-26T20:01:04.000Z | EfficientCoding/Assignment-6-shooting.py | vikbehal/Explore | b35948d8a6894647df3ee462746475f7e66f78f8 | [
"MIT"
] | null | null | null | EfficientCoding/Assignment-6-shooting.py | vikbehal/Explore | b35948d8a6894647df3ee462746475f7e66f78f8 | [
"MIT"
] | 1 | 2022-03-11T10:47:29.000Z | 2022-03-11T10:47:29.000Z | def solve(data):
    X, Y, N, W, P1, P2 = data

for _ in range(int(input())):
    data = [int(num) for num in input().split(" ")]
    print(solve(data)) | 22.714286 | 52 | 0.534591 | 26 | 159 | 3.230769 | 0.653846 | 0.214286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017094 | 0.264151 | 159 | 7 | 53 | 22.714286 | 0.700855 | 0 | 0 | 0 | 0 | 0 | 0.006494 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0 | 0 | 0.2 | 0.2 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
ddfc17a7986a68754f19910472f7d868a2858c32 | 25 | py | Python | cmsplugin_vimeo/test/__init__.py | rescale/cmsplugin_vimeo | 246fc21755446412924ff67f3bc4c778244cee09 | [
"BSD-3-Clause"
] | null | null | null | cmsplugin_vimeo/test/__init__.py | rescale/cmsplugin_vimeo | 246fc21755446412924ff67f3bc4c778244cee09 | [
"BSD-3-Clause"
] | null | null | null | cmsplugin_vimeo/test/__init__.py | rescale/cmsplugin_vimeo | 246fc21755446412924ff67f3bc4c778244cee09 | [
"BSD-3-Clause"
] | null | null | null | __author__ = 'francesco'
| 12.5 | 24 | 0.76 | 2 | 25 | 7.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12 | 25 | 1 | 25 | 25 | 0.681818 | 0 | 0 | 0 | 0 | 0 | 0.36 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
fb0c8270e934912c80a9f2eda6592537eac1af02 | 1,434 | py | Python | mibs/pycopia/mibs/SNMP_USM_DH_OBJECTS_MIB_OID.py | kdart/pycopia | 1446fabaedf8c6bdd4ab1fc3f0ea731e0ef8da9d | [
"Apache-2.0"
] | 89 | 2015-03-26T11:25:20.000Z | 2022-01-12T06:25:14.000Z | mibs/pycopia/mibs/SNMP_USM_DH_OBJECTS_MIB_OID.py | kdart/pycopia | 1446fabaedf8c6bdd4ab1fc3f0ea731e0ef8da9d | [
"Apache-2.0"
] | 1 | 2015-07-05T03:27:43.000Z | 2015-07-11T06:21:20.000Z | mibs/pycopia/mibs/SNMP_USM_DH_OBJECTS_MIB_OID.py | kdart/pycopia | 1446fabaedf8c6bdd4ab1fc3f0ea731e0ef8da9d | [
"Apache-2.0"
] | 30 | 2015-04-30T01:35:54.000Z | 2022-01-12T06:19:49.000Z | # python
# This file is generated by a program (mib2py).
import SNMP_USM_DH_OBJECTS_MIB
OIDMAP = {
'1.3.6.1.3.101': SNMP_USM_DH_OBJECTS_MIB.snmpUsmDHObjectsMIB,
'1.3.6.1.3.101.1': SNMP_USM_DH_OBJECTS_MIB.usmDHKeyObjects,
'1.3.6.1.3.101.1.1': SNMP_USM_DH_OBJECTS_MIB.usmDHPublicObjects,
'1.3.6.1.3.101.1.2': SNMP_USM_DH_OBJECTS_MIB.usmDHKickstartGroup,
'1.3.6.1.3.101.2': SNMP_USM_DH_OBJECTS_MIB.usmDHKeyConformance,
'1.3.6.1.3.101.2.1': SNMP_USM_DH_OBJECTS_MIB.usmDHKeyMIBCompliances,
'1.3.6.1.3.101.2.2': SNMP_USM_DH_OBJECTS_MIB.usmDHKeyMIBGroups,
'1.3.6.1.3.101.1.1.1': SNMP_USM_DH_OBJECTS_MIB.usmDHParameters,
'1.3.6.1.3.101.1.1.2.1.1': SNMP_USM_DH_OBJECTS_MIB.usmDHUserAuthKeyChange,
'1.3.6.1.3.101.1.1.2.1.2': SNMP_USM_DH_OBJECTS_MIB.usmDHUserOwnAuthKeyChange,
'1.3.6.1.3.101.1.1.2.1.3': SNMP_USM_DH_OBJECTS_MIB.usmDHUserPrivKeyChange,
'1.3.6.1.3.101.1.1.2.1.4': SNMP_USM_DH_OBJECTS_MIB.usmDHUserOwnPrivKeyChange,
'1.3.6.1.3.101.1.2.1.1.1': SNMP_USM_DH_OBJECTS_MIB.usmDHKickstartIndex,
'1.3.6.1.3.101.1.2.1.1.2': SNMP_USM_DH_OBJECTS_MIB.usmDHKickstartMyPublic,
'1.3.6.1.3.101.1.2.1.1.3': SNMP_USM_DH_OBJECTS_MIB.usmDHKickstartMgrPublic,
'1.3.6.1.3.101.1.2.1.1.4': SNMP_USM_DH_OBJECTS_MIB.usmDHKickstartSecurityName,
'1.3.6.1.3.101.2.2.1': SNMP_USM_DH_OBJECTS_MIB.usmDHKeyMIBBasicGroup,
'1.3.6.1.3.101.2.2.2': SNMP_USM_DH_OBJECTS_MIB.usmDHKeyParamGroup,
'1.3.6.1.3.101.2.2.3': SNMP_USM_DH_OBJECTS_MIB.usmDHKeyKickstartGroup,
}
| 53.111111 | 78 | 0.778243 | 306 | 1,434 | 3.385621 | 0.133987 | 0.07722 | 0.173745 | 0.30888 | 0.573359 | 0.53668 | 0.472973 | 0.198842 | 0.092664 | 0.092664 | 0 | 0.15625 | 0.040446 | 1,434 | 26 | 79 | 55.153846 | 0.596657 | 0.036262 | 0 | 0 | 1 | 0.363636 | 0.269231 | 0.133527 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.045455 | 0 | 0.045455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
fb0d02a5d465c49c56af84ea66d6a3bb2cbf99c5 | 1,105 | py | Python | odp/db/models/client.py | SAEONData/Open-Data-Platform | cfd2a53e145ec86e187d7c4e1260df17ec6dcb03 | [
"MIT"
] | 2 | 2021-03-04T07:09:47.000Z | 2022-01-02T19:23:41.000Z | odp/db/models/client.py | SAEONData/Open-Data-Platform | cfd2a53e145ec86e187d7c4e1260df17ec6dcb03 | [
"MIT"
] | 18 | 2020-09-16T09:16:45.000Z | 2022-01-25T14:17:42.000Z | odp/db/models/client.py | SAEONData/Open-Data-Platform | cfd2a53e145ec86e187d7c4e1260df17ec6dcb03 | [
"MIT"
] | 1 | 2021-06-25T13:02:57.000Z | 2021-06-25T13:02:57.000Z | from sqlalchemy import Column, String, ForeignKey
from sqlalchemy.ext.associationproxy import association_proxy
from sqlalchemy.orm import relationship
from odp.db import Base
from odp.db.models.client_scope import ClientScope
class Client(Base):
    """Client application config. The associated scopes
    represent the set of permissions granted to the client.
    If a client is linked to a provider, then its scopes apply
    only to entities that are associated with that provider.
    """

    __tablename__ = 'client'

    id = Column(String, primary_key=True)
    name = Column(String, unique=True, nullable=False)
    provider_id = Column(String, ForeignKey('provider.id', ondelete='CASCADE'))
    provider = relationship('Provider')

    # many-to-many relationship between client and scope
    client_scopes = relationship('ClientScope', back_populates='client', cascade='all, delete-orphan', passive_deletes=True)
    scopes = association_proxy('client_scopes', 'scope', creator=lambda s: ClientScope(scope=s))

    def __repr__(self):
        return self._repr('id', 'name', 'provider')
| 35.645161 | 124 | 0.742986 | 140 | 1,105 | 5.735714 | 0.514286 | 0.059776 | 0.054795 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.163801 | 1,105 | 30 | 125 | 36.833333 | 0.869048 | 0.247059 | 0 | 0 | 0 | 0 | 0.122373 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0.066667 | 0.333333 | 0.066667 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 3 |
fb1ba3a5f26babae6c55761357c299747dca3d7d | 1,225 | py | Python | obj/category.py | ArthurBartoli/QuestLog | 96acb8279c067547dc1b44556af6c9fd280284aa | [
"MIT"
] | null | null | null | obj/category.py | ArthurBartoli/QuestLog | 96acb8279c067547dc1b44556af6c9fd280284aa | [
"MIT"
] | null | null | null | obj/category.py | ArthurBartoli/QuestLog | 96acb8279c067547dc1b44556af6c9fd280284aa | [
"MIT"
] | null | null | null | from questlog import QuestLog
class Category():
    '''A category is a subfolder of a QuestLog and itself contains another QuestLog'''

    def __init__(self, title, ql_title, ql_parent, ql_desc=''):
        self.title = title
        self.questlog = QuestLog(ql_title, desc=ql_desc)
        self.ql_parent = ql_parent
        self._pos_parent = self.ql_parent.search_log(self.title)

    def __str__(self):
        return f"{self.questlog}"

    def __len__(self):
        return len(self.questlog)

    # Reader Functions
    def read_title(self):
        return self.title

    def read_ql_title(self):
        return self.questlog.read_name()

    def read_desc(self):
        return self.questlog.read_desc()

    # Access Functions
    def change_title(self, title):
        self.title = title

    def change_ql(self, ql_title, ql_desc):
        self.questlog = QuestLog(ql_title, desc=ql_desc)

    def change_parent(self, new_parent):
        self.ql_parent.remove_category(self)
        self.ql_parent = new_parent
        self.ql_parent.add_category(self)
        self._pos_parent = self.ql_parent.search_log(self.title)

    def correct_pos(self):
        self._pos_parent = self.ql_parent.search_log(self.title)
| 29.166667 | 86 | 0.671837 | 170 | 1,225 | 4.535294 | 0.217647 | 0.093385 | 0.108949 | 0.116732 | 0.403372 | 0.281453 | 0.281453 | 0.281453 | 0.185473 | 0.185473 | 0 | 0 | 0.231837 | 1,225 | 41 | 87 | 29.878049 | 0.819341 | 0.090612 | 0 | 0.25 | 0 | 0 | 0.01355 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.357143 | false | 0 | 0.035714 | 0.178571 | 0.607143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
fb2058691cb76f9bf1b051e83d3299359a200c6c | 397 | py | Python | Python_OCR_JE/venv/Lib/site-packages/numpy/typing/tests/data/fail/numerictypes.py | JE-Chen/je_old_repo | a8b2f1ac2eec25758bd15b71c64b59b27e0bcda5 | [
"MIT"
] | null | null | null | Python_OCR_JE/venv/Lib/site-packages/numpy/typing/tests/data/fail/numerictypes.py | JE-Chen/je_old_repo | a8b2f1ac2eec25758bd15b71c64b59b27e0bcda5 | [
"MIT"
] | null | null | null | Python_OCR_JE/venv/Lib/site-packages/numpy/typing/tests/data/fail/numerictypes.py | JE-Chen/je_old_repo | a8b2f1ac2eec25758bd15b71c64b59b27e0bcda5 | [
"MIT"
] | 1 | 2021-04-26T22:41:56.000Z | 2021-04-26T22:41:56.000Z | import numpy as np
# Technically this works, but probably shouldn't. See
#
# https://github.com/numpy/numpy/issues/16366
#
np.maximum_sctype(1) # E: incompatible type "int"
np.issubsctype(1, np.int64) # E: incompatible type "int"
np.issubdtype(1, np.int64) # E: incompatible type "int"
np.find_common_type(np.int64, np.int64) # E: incompatible type "Type[signedinteger[Any]]"
| 28.357143 | 91 | 0.702771 | 59 | 397 | 4.677966 | 0.525424 | 0.188406 | 0.246377 | 0.217391 | 0.384058 | 0.217391 | 0.217391 | 0.217391 | 0 | 0 | 0 | 0.048048 | 0.161209 | 397 | 13 | 92 | 30.538462 | 0.780781 | 0.564232 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.2 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
fb25b3d7a2fe7cd6282ebe55438c530b80c2419e | 517 | py | Python | env/experiments.py | clayton-ho/EGGs_Control | 312f02488b47cf880c6e6600ce10856a871123df | [
"MIT"
] | null | null | null | env/experiments.py | clayton-ho/EGGs_Control | 312f02488b47cf880c6e6600ce10856a871123df | [
"MIT"
] | null | null | null | env/experiments.py | clayton-ho/EGGs_Control | 312f02488b47cf880c6e6600ce10856a871123df | [
"MIT"
] | null | null | null | """
Stores everything needed to write experiments.
"""
__all__ = []
#experiments
from EGGS_labrad.lib.servers.script_scanner.experiment import experiment
__all__.append("experiment")
from EGGS_labrad.lib.servers.script_scanner import experiment_classes
from EGGS_labrad.lib.servers.script_scanner.experiment_classes import *
__all__.extend(experiment_classes.__all__)
#pulser
from EGGS_labrad.lib.servers.pulser import sequence
from EGGS_labrad.lib.servers.pulser.sequence import *
__all__.extend(sequence.__all__) | 30.411765 | 72 | 0.839458 | 67 | 517 | 5.955224 | 0.313433 | 0.100251 | 0.175439 | 0.213033 | 0.478697 | 0.478697 | 0.328321 | 0.235589 | 0 | 0 | 0 | 0 | 0.073501 | 517 | 17 | 73 | 30.411765 | 0.832985 | 0.123791 | 0 | 0 | 0 | 0 | 0.022472 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.555556 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
fb3693408e7092730e43d1020be124f53cd11da7 | 322 | py | Python | server/project/apps/core/forms.py | CalHoll/9001 | 7fe371346d4fefb7b2434262da67b7e1b18057e4 | [
"MIT"
] | 54 | 2017-01-20T20:05:23.000Z | 2022-02-02T12:34:33.000Z | server/project/apps/core/forms.py | CalHoll/9001 | 7fe371346d4fefb7b2434262da67b7e1b18057e4 | [
"MIT"
] | 104 | 2019-11-25T08:33:52.000Z | 2021-08-02T06:17:19.000Z | server/project/apps/core/forms.py | CalHoll/9001 | 7fe371346d4fefb7b2434262da67b7e1b18057e4 | [
"MIT"
] | 9 | 2017-01-26T05:56:01.000Z | 2018-05-15T16:50:19.000Z | import django_filters
from .models import Playlist, FavoritesList
class PlaylistFilter(django_filters.FilterSet):
class Meta:
model = Playlist
fields = ('user_id', )
class FavoritesListFilter(django_filters.FilterSet):
class Meta:
model = FavoritesList
fields = ('user_id', )
| 18.941176 | 52 | 0.686335 | 32 | 322 | 6.75 | 0.5 | 0.180556 | 0.203704 | 0.25 | 0.333333 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0.232919 | 322 | 16 | 53 | 20.125 | 0.874494 | 0 | 0 | 0.4 | 0 | 0 | 0.043478 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
348dc17ec6bd1a910d2e1f23c9de1eb0677fad7a | 459 | py | Python | hello_world.py | edmund-bishanga/pytest_and_selenium_expts | e6e34e80f5d5bf853ffbbe1d96e567568319b827 | [
"MIT"
] | null | null | null | hello_world.py | edmund-bishanga/pytest_and_selenium_expts | e6e34e80f5d5bf853ffbbe1d96e567568319b827 | [
"MIT"
] | null | null | null | hello_world.py | edmund-bishanga/pytest_and_selenium_expts | e6e34e80f5d5bf853ffbbe1d96e567568319b827 | [
"MIT"
] | null | null | null | #!/usr/bin/python
print("Hello world")
print("GOD LOVES YOU")
print("John 3:16\n")
print("CHRIST JESUS: EMMANUEL...")
print("Full of TRUTH & GRACE")
print("John 1:17\n")
print("Love God: with ALL you are... + the kitchen sink!")
print("Love ya Neighbour: as you already love ya self...")
print("ENJOY THE DISCIPLESHIP PROCESS")
print("HALLELUJAH!!!\n")
print("just another pilgrim: Edmund Muzoora BISHANGA")
print("#TeamEMMANUELForever")
print("#YNWA")
| 21.857143 | 58 | 0.697168 | 68 | 459 | 4.705882 | 0.676471 | 0.05625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014888 | 0.122004 | 459 | 20 | 59 | 22.95 | 0.779156 | 0.034858 | 0 | 0 | 0 | 0 | 0.693182 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 3 |
349b39884bb881700fb84a1d3950aac605a02b25 | 2,290 | py | Python | tests/test_abc.py | neurobin/python-ocd | 178bc7923e4702a1d90b2b38bc28515921f8553d | [
"BSD-3-Clause"
] | null | null | null | tests/test_abc.py | neurobin/python-ocd | 178bc7923e4702a1d90b2b38bc28515921f8553d | [
"BSD-3-Clause"
] | 1 | 2020-04-26T16:00:32.000Z | 2020-04-26T16:00:32.000Z | tests/test_abc.py | neurobin/python-easyvar | 178bc7923e4702a1d90b2b38bc28515921f8553d | [
"BSD-3-Clause"
] | null | null | null |
import unittest
from ocd import abc
from ocd.prop import Prop
class Test_abc(unittest.TestCase):
def setUp(self):
# init
pass
def tearDown(self):
# destruct
pass
def test_abc_VarConf_NIMP(self): # NIMP: Not Implemented
class VarConf(abc.VarConf):
pass
with self.assertRaises(TypeError):
VarConf.get_conf()
with self.assertRaises(TypeError):
VarConf.get_conf(None)
with self.assertRaises(TypeError):
VarConf.get_conf(None, None)
with self.assertRaises(NotImplementedError):
VarConf.get_conf(None, None, None)
with self.assertRaises(TypeError):
VarConf.get_conf(None, None, None, None)
conf = VarConf()
with self.assertRaises(TypeError):
conf.get_conf()
with self.assertRaises(TypeError):
conf.get_conf(None)
with self.assertRaises(NotImplementedError):
conf.get_conf(None, None)
with self.assertRaises(TypeError):
conf.get_conf(None, None, None)
def test_abc_VarConf_IMP(self): # IMP: implemented
class VarConf(abc.VarConf):
def get_conf(self, name, value):
return None
conf = VarConf()
p = conf.get_conf(None, None)
self.assertTrue(isinstance(p, Prop) or p is None)
class VarConf(abc.VarConf):
def get_conf(self, name, value):
return Prop()
conf = VarConf()
p = conf.get_conf(None, None)
self.assertTrue(isinstance(p, Prop) or p is None)
class VarConf(abc.VarConf):
def get_conf(self, name, value):
return True
conf = VarConf()
p = conf.get_conf(None, None)
with self.assertRaises(AssertionError):
self.assertTrue(isinstance(p, Prop) or p is None)
class VarConf(abc.VarConf):
def get_conf(self, name, value):
return self
conf = VarConf()
p = conf.get_conf(None, None)
with self.assertRaises(AssertionError):
self.assertTrue(isinstance(p, Prop) or p is None)
if __name__ == '__main__':
unittest.main(verbosity=2)
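The contract these tests exercise — an abstract `get_conf(name, value)` that raises `NotImplementedError` until a subclass overrides it to return a `Prop` or `None` — can be sketched stdlib-only. The class names mirror the test, but these are illustrative stand-ins, not the real `ocd` package:

```python
class Prop:
    """Stand-in for ocd.prop.Prop."""

class VarConf:
    """Stand-in base class: subclasses must override get_conf."""
    def get_conf(self, name, value):
        raise NotImplementedError

class NoConf(VarConf):
    def get_conf(self, name, value):
        return None  # no special property for this variable

class PropConf(VarConf):
    def get_conf(self, name, value):
        return Prop()  # every variable gets a default Prop

try:
    VarConf().get_conf('x', 1)
except NotImplementedError:
    print('base raises')        # base raises

p = NoConf().get_conf('x', 1)
q = PropConf().get_conf('x', 1)
print(p is None, isinstance(q, Prop))  # True True
```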
| 28.987342 | 61 | 0.581659 | 260 | 2,290 | 5 | 0.165385 | 0.091538 | 0.169231 | 0.103846 | 0.771538 | 0.736154 | 0.700769 | 0.621538 | 0.506154 | 0.506154 | 0 | 0.000647 | 0.325328 | 2,290 | 78 | 62 | 29.358974 | 0.840777 | 0.023144 | 0 | 0.610169 | 0 | 0 | 0.003586 | 0 | 0 | 0 | 0 | 0 | 0.254237 | 1 | 0.135593 | false | 0.050847 | 0.050847 | 0.067797 | 0.355932 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
34aa5c06330b1d511b424db33ed44e5d7dd8129d | 189 | py | Python | booktest/forms.py | Adolph-Anthony/study_django | 9ba9037107c8b1763bbc449d9d2c14f028ca0672 | [
"MIT"
] | null | null | null | booktest/forms.py | Adolph-Anthony/study_django | 9ba9037107c8b1763bbc449d9d2c14f028ca0672 | [
"MIT"
] | 1 | 2019-07-01T02:21:06.000Z | 2019-07-02T01:12:54.000Z | booktest/forms.py | Adolph-Anthony/study_django | 9ba9037107c8b1763bbc449d9d2c14f028ca0672 | [
"MIT"
] | null | null | null | from django import forms
class BookInfoForm(forms.Form):
btitle = forms.CharField(label='图书名称', required=True, max_length=20)  # "Book Title"
bpub_date = forms.DateField(label='发行日期', required=True)  # "Publication Date"
| 31.5 | 70 | 0.761905 | 26 | 189 | 5.461538 | 0.769231 | 0.169014 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011905 | 0.111111 | 189 | 5 | 71 | 37.8 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0.042328 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
34ad2ccb8ea8232aec89a5f71c1d054aa77dc2ad | 13,970 | py | Python | bot/New bot.py | AllisonW2323/creativecode | e7c276de46d6208505f2bd1c2922a2f0e0bd1ecd | [
"MIT"
] | null | null | null | bot/New bot.py | AllisonW2323/creativecode | e7c276de46d6208505f2bd1c2922a2f0e0bd1ecd | [
"MIT"
] | null | null | null | bot/New bot.py | AllisonW2323/creativecode | e7c276de46d6208505f2bd1c2922a2f0e0bd1ecd | [
"MIT"
] | null | null | null | {
"origin": [ "Where in the world am I? #Place#? I should be in prison. ", "#greeting#, remember to thank #Place# for their sacrifice of hosting Trump today! Someone save me.", "I escaped to #Place# but Trumps on my tail! I need help!", "Things I overhear: collusion....Idoit....#Phrase#......", "I'm having a great time in #Place#! I flew away and hope to never see him again, wait, is that...? Oh no.", "#greeting#! Decided to stop by at #Place#! If you live here, please keep an eye out so you don't become another victim of Trump! Stay safe!", "Oh my, I just saw a #animal# in #Place#! I hope it carries me away somewhere safe!", "Yay! #animal# is running off with me in #Place#! I'm free!", "True Trump quotes: #true# #impeach", "Many words have been thrown my way like: #noun#. And you know what? I'm not even mad, its true.", "Boy, I sure love to dance with #animal#! Damn it, Trump found me again... see you later friends!", "I know the real reason Trump doesn't own a pet...it's because he is afriad of #animal# and thinks that a dog will act the same way."],
"animal": ["Aardvark", "Abyssinian", "Adelie Penguin", "Affenpinscher", "Afghan Hound", "African Bush Elephant", "African Civet", "African Clawed Frog", "African Forest Elephant", "African Palm Civet", "African Penguin", "African Tree Toad", "Airedale Terrier", "Akbash", "Akita", "Alaskan Malamute", "Albatross", "Aldabra Giant Tortoise", "Alligator", "American Staffordshire Terrier", "American Water Spaniel", "Angelfish", "Ant", "Anteater", "Antelope", "Arctic Fox", "Arctic Hare", "Arctic Wolf", "Armadillo", "Asian Elephant", "Asian Giant Hornet", "Asian Palm Civet", "Asiatic Black Bear", "Australian Mist", "Australian Shepherd", "Australian Terrier", "Avocet", "Axolotl", "Aye Aye", "Baboon", "Bactrian Camel", "Badger", "Balinese", "Banded Palm Civet", "Bandicoot", "Barb", "Barn Owl", "Barnacle", "Barracuda", "Basking Shark", "Basset Hound", "Bat", "Bavarian Mountain Hound", "Beagle", "Bear", "Bearded Collie", "Bearded Dragon", "Beaver", "Bedlington Terrier", "Beetle", "Bengal Tiger", "Bichon Frise", "Binturong", "Bird","Birds Of Paradise", "Birman", "Bison", "Black Bear", "Black Rhinoceros", "Black Russian Terrier", "Black Widow Spider", "Blue Whale", "Bobcat", "Bombay", "Bongo", "Bonobo", "Booby", "Bornean Orang-utan", "Borneo Elephant", "Boston Terrier", "Bottle Nosed Dolphin", "Boykin Spaniel", "Brazilian Terrier", "Brown Bear", "Budgerigar", "Buffalo", "Bull Mastiff", "Bull Shark", "Bull Terrier", "Bullfrog", "Bumble Bee", "Burmese", "Burrowing Frog", "Butterfly", "Butterfly Fish", "Caiman", "Caiman Lizard", "Cairn Terrier", "Camel", "Capybara", "Caracal", "Cassowary", "Caterpillar", "Catfish", "Cavalier King Charles Spaniel", "Centipede", "Cesky Fousek", "Chameleon", "Chamois", "Cheetah", "Chesapeake Bay Retriever", "Chicken", "Chimpanzee", "Chinchilla", "Chinook", "Chinstrap Penguin", "Chipmunk", "Chow Chow", "Cichlid", "Clouded Leopard", "Clown Fish", "Coati", "Cockroach", "Collared Peccary", "Collie", "Common Buzzard", "Common Frog", "Common Loon", "Common 
Toad", "Coral", "Cottontop Tamarin", "Cougar", "Cow", "Coyote", "Crab", "Crab-Eating Macaque", "Crane", "Crested Penguin", "Crocodile", "Cross River Gorilla", "Curly Coated Retriever", "Cuscus", "Cuttlefish", "Darwin's Frog", "Deer", "Desert Tortoise", "Deutsche Bracke", "Dhole", "Dingo", "Discus", "Doberman Pinscher", "Dodo", "Dogo Argentino", "Dogue De Bordeaux", "Dolphin",
"Donkey",
"Dormouse",
"Dragonfly",
"Drever",
"Duck",
"Dugong",
"Dunker",
"Dusky Dolphin",
"Dwarf Crocodile",
"Eagle",
"Earwig",
"Eastern Gorilla",
"Eastern Lowland Gorilla",
"Echidna",
"Edible Frog",
"Egyptian Mau",
"Electric Eel",
"Elephant",
"Elephant Seal",
"Elephant Shrew",
"Emperor Penguin",
"Emperor Tamarin",
"Emu",
"Epagneul Pont Audemer",
"Falcon",
"Fennec Fox",
"Ferret",
"Fin Whale",
"Finnish Spitz",
"Fire-Bellied Toad",
"Fish",
"Flamingo",
"Flounder",
"Fly",
"Flying Squirrel",
"Fossa",
"Fox",
"Frigatebird",
"Frilled Lizard",
"Frog",
"Fur Seal",
"Galapagos Penguin",
"Galapagos Tortoise",
"Gar",
"Gecko",
"Gentoo Penguin",
"Geoffroys Tamarin",
"Gerbil",
"Gharial",
"Giant African Land Snail",
"Giant Clam",
"Giant Panda Bear",
"Giant Schnauzer",
"Gibbon",
"Gila Monster",
"Giraffe",
"Glass Lizard",
"Glow Worm",
"Goat",
"Golden Lion Tamarin",
"Golden Oriole",
"Goose",
"Gopher",
"Gorilla",
"Grasshopper",
"Great White Shark",
"Green Bee-Eater",
"Grey Mouse Lemur",
"Grey Reef Shark",
"Grey Seal",
"Grizzly Bear",
"Grouse",
"Guinea Fowl",
"Guinea Pig",
"Guppy",
"Hammerhead Shark",
"Hamster",
"Hare",
"Harrier",
"Havanese",
"Hedgehog",
"Hercules Beetle",
"Hermit Crab",
"Heron",
"Highland Cattle",
"Himalayan",
"Hippopotamus",
"Honey Bee",
"Horn Shark",
"Horned Frog",
"Horse",
"Horseshoe Crab",
"Howler Monkey",
"Human",
"Humboldt Penguin",
"Hummingbird",
"Humpback Whale",
"Hyena",
"Ibis",
"Ibizan Hound",
"Iguana",
"Impala",
"Indian Elephant",
"Indian Palm Squirrel",
"Indian Rhinoceros",
"Indian Star Tortoise",
"Indochinese Tiger",
"Indri",
"Insect",
"Jackal",
"Jaguar",
"Japanese Chin",
"Japanese Macaque",
"Javan Rhinoceros",
"Javanese",
"Jellyfish",
"Kakapo",
"Kangaroo",
"Keel Billed Toucan",
"Killer Whale",
"King Crab",
"King Penguin",
"Kingfisher",
"Kiwi",
"Koala",
"Komodo Dragon",
"Kudu",
"Labradoodle",
"Labrador Retriever",
"Ladybird",
"Leaf-Tailed Gecko",
"Lemming",
"Lemur",
"Leopard",
"Leopard Cat",
"Leopard Seal",
"Leopard Tortoise",
"Liger",
"Lion",
"Lionfish",
"Little Penguin",
"Lizard",
"Llama",
"Lobster",
"Long-Eared Owl",
"Lynx",
"Macaroni Penguin",
"Macaw",
"Magellanic Penguin",
"Magpie",
"Malayan Civet",
"Malayan Tiger",
"Manatee",
"Mandrill",
"Manta Ray",
"Marine Toad",
"Markhor",
"Marsh Frog",
"Masked Palm Civet",
"Mayfly",
"Meerkat",
"Millipede",
"Minke Whale",
"Mole",
"Molly",
"Mongoose",
"Mongrel",
"Monitor Lizard",
"Monkey",
"Monte Iberia Eleuth",
"Moorhen",
"Moose",
"Moray Eel",
"Moth",
"Mountain Gorilla",
"Mountain Lion",
"Mouse",
"Mule",
"Neanderthal",
"Neapolitan Mastiff",
"Newfoundland",
"Newt",
"Nightingale",
"Norwegian Forest",
"Numbat",
"Nurse Shark",
"Ocelot",
"Octopus",
"Okapi",
"Olm",
"Opossum",
"Orang-utan",
"Ostrich",
"Otter",
"Oyster",
"Pademelon",
"Panther",
"Parrot",
"Patas Monkey",
"Peacock",
"Pelican",
"Penguin",
"Persian",
"Pheasant",
"Pied Tamarin",
"Pig",
"Pika",
"Pike",
"Pink Fairy Armadillo",
"Piranha",
"Platypus",
"Pointer",
"Poison Dart Frog",
"Polar Bear",
"Pond Skater",
"Pool Frog",
"Porcupine",
"Possum",
"Prawn",
"Proboscis Monkey",
"Puffer Fish",
"Puffin",
"Pug",
"Puma",
"Purple Emperor",
"Puss Moth",
"Pygmy Hippopotamus",
"Pygmy Marmoset",
"Quail",
"Quetzal",
"Quokka",
"Quoll",
"Rabbit",
"Raccoon",
"Radiated Tortoise",
"Ragdoll",
"Rat",
"Rattlesnake",
"Red Knee Tarantula",
"Red Panda",
"Red Wolf",
"Red-handed Tamarin",
"Reindeer",
"Rhinoceros",
"River Dolphin",
"River Turtle",
"Robin",
"Rock Hyrax",
"Rockhopper Penguin",
"Roseate Spoonbill",
"Royal Penguin",
"Russian Blue",
"Sabre-Toothed Tiger",
"Saint Bernard",
"Salamander",
"Sand Lizard",
"Saola",
"Scorpion",
"Scorpion Fish",
"Sea Dragon",
"Sea Lion",
"Sea Otter",
"Sea Slug",
"Sea Squirt",
"Sea Turtle",
"Sea Urchin",
"Seahorse",
"Seal",
"Serval",
"Sheep",
"Shih Tzu",
"Shrimp",
"Siamese Fighting Fish",
"Siberian Tiger",
"Silver Dollar",
"Skunk",
"Sloth",
"Slow Worm",
"Snail",
"Snake",
"Snapping Turtle",
"Snowshoe",
"Snowy Owl",
"Somali",
"South China Tiger",
"Spadefoot Toad",
"Sparrow",
"Spectacled Bear",
"Sperm Whale",
"Spider Monkey",
"Spiny Dogfish",
"Sponge",
"Squid",
"Squirrel",
"Squirrel Monkey",
"Sri Lankan Elephant",
"Stag Beetle",
"Starfish",
"Stellers Sea Cow",
"Stick Insect",
"Stingray",
"Stoat",
"Striped Rocket Frog",
"Sumatran Elephant",
"Sumatran Orang-utan",
"Sumatran Rhinoceros",
"Sumatran Tiger",
"Sun Bear",
"Swan",
"Tang",
"Tapanuli Orang-utan",
"Tapir",
"Tarsier",
"Tasmanian Devil",
"Tawny Owl",
"Termite",
"Tetra",
"Thorny Devil",
"Tibetan Mastiff",
"Tiffany",
"Tiger",
"Tiger Salamander",
"Tiger Shark",
"Tortoise",
"Toucan",
"Tree Frog",
"Tropicbird",
"Tuatara",
"Turkey",
"Turkish Angora",
"Uakari",
"Uguisu",
"Umbrellabird",
"Vampire Bat",
"Vervet Monkey",
"Vulture",
"Wallaby",
"Walrus",
"Warthog",
"Wasp",
"Water Buffalo",
"Water Dragon",
"Water Vole",
"Weasel",
"Welsh Corgi",
"West Highland Terrier",
"Western Gorilla",
"Western Lowland Gorilla",
"Whale Shark",
"Whippet",
"White Faced Capuchin",
"White Rhinoceros",
"White Tiger",
"Wild Boar",
"Wildebeest",
"Wolf",
"Wolverine",
"Wombat",
"Woodlouse",
"Woodpecker",
"Woolly Mammoth",
"Woolly Monkey",
"Wrasse",
"X-Ray Tetra",
"Yak",
"Yellow-Eyed Penguin",
"Yorkshire Terrier",
"Zebra",
"Zebra Shark",
"Zebu",
"Zonkey",
"Zorse"],
"noun": ["pheasant", "bug", "far too expensive bunch of thread", "taxpayer payed bunch of twigs", "bobcat toy", "meat hat", "ass hat", "corn colored corncob", "little son", "politician hat made by mommy", "Last minute cosplay", "leftover mop", "water", "vegan top hat", "purple pestilence", "pelicans dinner", "day old fish", "cat vomit", "college student funded wig", "MAGA hat extension", "disrespecting women wig"],
"adjective": ["horrendous", "vegan", "scrawny", "questionable", "swift", "(you said hair Jimmy? That doesn't sound quite right.)", "quality", "affordable", "menswear", "unfitting", "gross", "updo"],
"greeting": ["Howdy", "What's up ya'll", "Yo", "Welcome", "It's me again"],
"Phrase": ["Putin is huuuuge, the best", "I hate everyone", "Alright! I did it! Okay? I did the bad stuff." , "mexicans are in Spain and it's all crazy."],
"true": ["I've always said, 'If you need Viagra, you're probably with the wrong girl.'", "I love her … upper body.", "Look at that face. Would anybody vote for that? Can you imagine that, the face of our next president? I mean, she's a woman, and I'm not supposed to say bad things, but really, folks, come on. Are we serious?", "Why does she keep interrupting everybody?", "She's certainly not hot.", "Horseface" , "I promise not to talk about your massive plastic surgeries that didn't work.", "I know where she went, it's disgusting, I don't want to talk about it … No, it's too disgusting. Don't say it, it's disgusting.", "If she were a man, I don't think she'd get five percent of the vote." , "Does she look presidential, fellas? Give me a break.", "Such a nasty woman.", "Unattractive both inside and out. I fully understand why her former husband left her for a man — he made a good decision.", "A crazed, crying lowlife", "Does she have a good body? No. Does she have a fat ass? Absolutely.", "Sadly, she's no longer a 10.", "She does have a very nice figure ... if [she] weren't my daughter, perhaps I'd be dating her.", "She's actually always been very voluptuous.","How do the breasts look?", "Well, I think that she's got a lot of Marla. She's a really beautiful baby, and she's got Marla's legs. We don't know whether she's got this part yet, but time will tell.", "26,000 unreported sexual assaults in the military — only 238 convictions. What did these geniuses expect when they put men & women together?", "We could say, politically correct, that look doesn't matter, but the look obviously matters. Like you wouldn't have your job if you weren't beautiful.", "I've got to use some Tic Tacs, just in case I start kissing her. You know I'm automatically attracted to beautiful — I just start kissing them. It's like a magnet. Just kiss. I don't even wait. And when you're a star, they let you do it. You can do anything ... Grab them by the pussy. You can do anything." 
, "Nobody has more respect for women than I do. Nobody. Nobody has more respect." ],
"Place": ["Afghanistan", "Albania", "Algeria", "Andorra", "Angola", "Antigua and Barbuda", "Argentina", "Armenia", "Australia", "Austria", "Azerbaijan", "The Bahamas", "Bahrain", "Bangladesh", "Barbados", "Belarus", "Belgium", "Belize", "Benin", "Bhutan", "Bolivia", "Bosnia and Herzegovina", "Botswana", "Brazil", "Brunei", "Bulgaria", "Burkina Faso", "Burundi", "Cabo Verde", "Cambodia", "Cameroon", "Canada", "the Central African Republic", "Chad", "Chile", "China", "Colombia", "Comoros", "Congo", "the Democratic Republic of the Congo", "Costa Rica", "Côte d’Ivoire", "Croatia", "Cuba","Cyprus","the Czech Republic","Denmark","Djibouti","Dominica","Dominican Republic","East Timor (Timor-Leste)","Ecuador","Egypt","El Salvador","Equatorial Guinea","Eritrea","Estonia","Eswatini","Ethiopia","Fiji","Finland","France","Gabon","The Gambia","Georgia","Germany","Ghana","Greece","Grenada","Guatemala","Guinea","Guinea-Bissau","Guyana","Haiti","Honduras","Hungary","Iceland","India","Indonesia","Iran","Iraq","Ireland","Israel","Italy","Jamaica","Japan","Jordan","Kazakhstan","Kenya","Kiribati","Korea, North","Korea, South","Kosovo","Kuwait","Kyrgyzstan","Laos","Latvia","Lebanon","Lesotho","Liberia","Libya","Liechtenstein","Lithuania","Luxembourg","Madagascar","Malawi","Malaysia","Maldives","Mali","Malta","the Marshall Islands","Mauritania","Mauritius","Mexico","Micronesia","Moldova","Monaco","Mongolia","Montenegro","Morocco","Mozambique","Myanmar (Burma)","Namibia","Nauru","Nepal","Netherlands","New Zealand","Nicaragua","Niger","Nigeria","North Macedonia","Norway","Oman","Pakistan","Palau","Panama","Papua New Guinea","Paraguay","Peru","the Philippines","Poland","Portugal","Qatar","Romania","Russia","Rwanda","Saint Kitts and Nevis","Saint Lucia","Saint Vincent and the Grenadines","Samoa","San Marino","Sao Tome and Principe","Saudi Arabia","Senegal","Serbia","Seychelles","Sierra Leone","Singapore","Slovakia","Slovenia","the Solomon Islands","Somalia","South Africa","Spain","Sri 
Lanka","Sudan","Sudan, South","Suriname","Sweden","Switzerland","Syria","Taiwan","Tajikistan","Tanzania","Thailand","Togo","Tonga","Trinidad and Tobago","Tunisia","Turkey","Turkmenistan","Tuvalu","Uganda","Ukraine","the United Arab Emirates","the United Kingdom","the United States","Uruguay","Uzbekistan","Vanuatu","Vatican City","Venezuela","Vietnam","Yemen","Zambia","Zimbabwe"]
}
| 36.098191 | 2,382 | 0.688332 | 1,826 | 13,970 | 5.271084 | 0.616648 | 0.003325 | 0.002494 | 0.002494 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.000812 | 0.118754 | 13,970 | 386 | 2,383 | 36.19171 | 0.780278 | 0 | 0 | 0 | 0 | 0.044503 | 0.776473 | 0.002578 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
34b092edc052b1e0d00c401ee896473f6a64c23d | 283 | py | Python | test_main.py | cupppu/BigDate | 6a9c285bf37dc6f967a23c1e1592fd9c41ed1a71 | [
"MIT"
] | null | null | null | test_main.py | cupppu/BigDate | 6a9c285bf37dc6f967a23c1e1592fd9c41ed1a71 | [
"MIT"
] | null | null | null | test_main.py | cupppu/BigDate | 6a9c285bf37dc6f967a23c1e1592fd9c41ed1a71 | [
"MIT"
] | null | null | null | hour_test = 6
if not args.hr: # args and convertMonDayHr are assumed to be defined elsewhere in this project
hr = convertMonDayHr(hour_test)
elif hour_test < 12:
hr = convertMonDayHr(hour_test)
hr = "上午" + hr
elif hour_test > 12:
hr = convertMonDayHr(hour_test - 12)
hr = "下午" + hr
else:
hr = convertMonDayHr(hour_test)
hr = "下午" + hr | 23.583333 | 40 | 0.632509 | 42 | 283 | 4.095238 | 0.309524 | 0.325581 | 0.488372 | 0.581395 | 0.622093 | 0.453488 | 0.453488 | 0.453488 | 0 | 0 | 0 | 0.033019 | 0.250883 | 283 | 12 | 41 | 23.583333 | 0.778302 | 0 | 0 | 0.416667 | 0 | 0 | 0.021127 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
34c7cc921e0f9deb40eb8faacd83261dbbe1b439 | 1,026 | py | Python | code/Symbols/Symbol_Table.py | antuniooh/Dattebayo-compiler | fd6767aee1c131dcbf76fc061bce43224df03f8b | [
"MIT"
] | null | null | null | code/Symbols/Symbol_Table.py | antuniooh/Dattebayo-compiler | fd6767aee1c131dcbf76fc061bce43224df03f8b | [
"MIT"
] | null | null | null | code/Symbols/Symbol_Table.py | antuniooh/Dattebayo-compiler | fd6767aee1c131dcbf76fc061bce43224df03f8b | [
"MIT"
] | 1 | 2021-12-05T14:00:39.000Z | 2021-12-05T14:00:39.000Z | """
Members:
Name: Antônio Gustavo Muniz        22.119.001-0
Name: João Vitor Dias dos Santos   22.119.006-9
Name: Weverson da Silva Pereira    22.119.004-4
"""
import json
from Symbol import Symbol
class SymbolTable:
""" Class for Symbols tables """
symbol_table = {}
def __init__(self) -> None:
""" Performs the creation of an object of type SymbolTable
"""
pass
@staticmethod
def set_variable(kw_symbol):
""" Set a symbol in the symbol table
:param kw_symbol: Symbol to be added to the symbol table
"""
SymbolTable.symbol_table[kw_symbol.name] = {"name": kw_symbol.name, "type": kw_symbol.type,
"scope": kw_symbol.scope}
@staticmethod
def get_symbol_table():
""" Return the symbol table
"""
return SymbolTable.symbol_table
def __str__(self):
""" Custom log
"""
return json.dumps(SymbolTable.symbol_table, indent=4)
| 22.304348 | 99 | 0.586745 | 123 | 1,026 | 4.715447 | 0.512195 | 0.151724 | 0.113793 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.039943 | 0.316764 | 1,026 | 45 | 100 | 22.8 | 0.787447 | 0.391813 | 0 | 0.133333 | 0 | 0 | 0.022688 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.266667 | false | 0.066667 | 0.133333 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 3 |
34ca493c8ac992db65269a12049777595d6a879a | 22 | py | Python | project/openaid/templatetags/__init__.py | DeppSRL/open-aid | 84130761c00600a8523f4f28467d70ad974859cd | [
"BSD-3-Clause"
] | 11 | 2015-09-30T19:08:37.000Z | 2021-11-08T11:18:04.000Z | project/openaid/templatetags/__init__.py | DeppSRL/open-aid | 84130761c00600a8523f4f28467d70ad974859cd | [
"BSD-3-Clause"
] | 3 | 2015-02-07T23:29:00.000Z | 2015-10-19T04:42:20.000Z | project/openaid/templatetags/__init__.py | DeppSRL/open-aid | 84130761c00600a8523f4f28467d70ad974859cd | [
"BSD-3-Clause"
] | 7 | 2016-06-08T10:11:36.000Z | 2022-02-05T14:26:00.000Z | __author__ = 'joke2k'
| 11 | 21 | 0.727273 | 2 | 22 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.052632 | 0.136364 | 22 | 1 | 22 | 22 | 0.578947 | 0 | 0 | 0 | 0 | 0 | 0.272727 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
34edb633462afc86f41871ccea960661b584383c | 1,247 | py | Python | scrapers/scrapers/spiders/FuseSpider.py | rbbh/Oncase-challenge | b04057ece5354d7bc6221dd7870a1a9693730e32 | [
"MIT"
] | null | null | null | scrapers/scrapers/spiders/FuseSpider.py | rbbh/Oncase-challenge | b04057ece5354d7bc6221dd7870a1a9693730e32 | [
"MIT"
] | null | null | null | scrapers/scrapers/spiders/FuseSpider.py | rbbh/Oncase-challenge | b04057ece5354d7bc6221dd7870a1a9693730e32 | [
"MIT"
] | null | null | null | from datetime import datetime as dt
import scrapy
from scrapers.items import FuseItem
class PostSpider(scrapy.Spider):
name = 'fuse'
allowed_domains = ['fuse.tv']
start_urls = [
"https://www.fuse.tv/latest#read" # the specific URL that lists the site's news stories
]
def parse(self, response):
news = response.xpath('//div[@class="imagetitle"]/h1') # XPath adjusted to match every news item on the site
for iterator in news:
item = FuseItem()
item['title'] = iterator.xpath('a[@class="imagetitle"]/text()').extract()[0] # TODO: it is still unclear how to pass this XPath in the correct form
yield item
| 54.217391 | 469 | 0.380914 | 96 | 1,247 | 4.9375 | 0.708333 | 0.025316 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003578 | 0.551724 | 1,247 | 22 | 470 | 56.681818 | 0.844365 | 0.097033 | 0 | 0 | 0 | 0 | 0.422974 | 0.051647 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.066667 | 0.2 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
34eff7d809d46eea33078c325514201d7c387481 | 1,279 | py | Python | test/test_SensorPose.py | jcandan/WonderPy | ee82322b082e94015258b34b27f23501f8130fa2 | [
"MIT"
] | 46 | 2018-07-31T20:30:41.000Z | 2022-03-23T17:14:51.000Z | test/test_SensorPose.py | jcandan/WonderPy | ee82322b082e94015258b34b27f23501f8130fa2 | [
"MIT"
] | 24 | 2018-08-01T09:59:29.000Z | 2022-02-26T20:57:51.000Z | test/test_SensorPose.py | jcandan/WonderPy | ee82322b082e94015258b34b27f23501f8130fa2 | [
"MIT"
] | 24 | 2018-08-01T19:14:31.000Z | 2021-02-18T13:26:40.000Z | import unittest
from test.robotTestUtil import RobotTestUtil


class MyTestCase(unittest.TestCase):

    def test_pose(self):
        robot = RobotTestUtil.make_fake_dash()

        packet = {}
        packet['2002'] = {
            'x'     : 1.2,
            'y'     : 3.4,
            'degree': 5.6,
        }
        robot.sensors.parse(packet)
        sensor = robot.sensors.pose
        self.assertAlmostEqual(sensor.x, -3.4)
        self.assertAlmostEqual(sensor.y, 1.2)
        self.assertAlmostEqual(sensor.degrees, 5.6)
        self.assertTrue(sensor.watermark_measured is None)
        self.assertAlmostEqual(sensor.watermark_inferred, 0.0)

        packet['2002'] = {
            'x'        : 1.2,
            'y'        : 3.4,
            'degree'   : 5.6,
            'watermark': 3,
        }
        robot.sensors.parse(packet)
        sensor = robot.sensors.pose
        self.assertAlmostEqual(sensor.x, -3.4)
        self.assertAlmostEqual(sensor.y, 1.2)
        self.assertAlmostEqual(sensor.degrees, 5.6)
        self.assertAlmostEqual(sensor.watermark_measured, 3)
        self.assertAlmostEqual(sensor.watermark_inferred, 3)


if __name__ == '__main__':
    unittest.main()
| 28.422222 | 66 | 0.548084 | 128 | 1,279 | 5.359375 | 0.320313 | 0.28863 | 0.367347 | 0.161808 | 0.632653 | 0.501458 | 0.501458 | 0.501458 | 0.501458 | 0.501458 | 0 | 0.043839 | 0.340109 | 1,279 | 44 | 67 | 29.068182 | 0.768957 | 0 | 0 | 0.545455 | 0 | 0 | 0.032056 | 0 | 0 | 0 | 0 | 0 | 0.30303 | 1 | 0.030303 | false | 0 | 0.060606 | 0 | 0.121212 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
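The assertions above imply a fixed axis remap from the raw packet to the pose sensor: `(x, y)` in the packet comes back as `(-y, x)`. A minimal pure-Python sketch of that mapping (the helper name is hypothetical, not part of WonderPy):

```python
def parse_pose(packet_xy):
    """Remap raw packet coordinates the way the test expects:
    (x, y) -> (-y, x), i.e. a 90-degree rotation of the axes."""
    x, y = packet_xy
    return (-y, x)


# Same values as the test packet above:
print(parse_pose((1.2, 3.4)))  # (-3.4, 1.2)
```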
5501ab9251679e9c55580f0c64a632d331a5bc44 | 186 | py | Python | deets.py | lileddie/selfservepwd | abf4a25890a06a48c3e30fa551582fa116a346eb | [
"MIT"
] | null | null | null | deets.py | lileddie/selfservepwd | abf4a25890a06a48c3e30fa551582fa116a346eb | [
"MIT"
] | null | null | null | deets.py | lileddie/selfservepwd | abf4a25890a06a48c3e30fa551582fa116a346eb | [
"MIT"
] | null | null | null | # Password file for the password manager app
loginpass = 'windowsADpasswd'
user = 'domain.name\\username'
passwordReset = 'staticPassword'
new_pass = 'newadpassword'
adServer = 'adServer.domain.name'
| 26.571429 | 36 | 0.811828 | 20 | 186 | 7.5 | 0.85 | 0.133333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075269 | 186 | 6 | 37 | 31 | 0.872093 | 0.188172 | 0 | 0 | 0 | 0 | 0.553333 | 0.14 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.6 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
5503e7724cf930a5319c0edd5862a02412bfb684 | 1,279 | py | Python | piperoni/operators/base.py | CitrineInformatics/piperoni | a5764e9da3a51da0a8962a00fd574a97b173d9a4 | [
"Apache-2.0"
] | 2 | 2021-04-21T19:51:06.000Z | 2021-04-23T17:57:09.000Z | piperoni/operators/base.py | CitrineInformatics/piperoni | a5764e9da3a51da0a8962a00fd574a97b173d9a4 | [
"Apache-2.0"
] | null | null | null | piperoni/operators/base.py | CitrineInformatics/piperoni | a5764e9da3a51da0a8962a00fd574a97b173d9a4 | [
"Apache-2.0"
] | null | null | null | from functools import reduce
from typing import List
from abc import ABC, abstractmethod
import logging

import pandas as pd
import warnings

"""
This module implements the BaseOperator object.

Operators are applied to inputs using the __call__ python API, which wraps
the `transform` methods implemented for each Operator.

Examples
--------
Functionality of the BaseOperator::

    opp = BaseOperator()
    input = "some data"
    output = opp(input)
"""


class BaseOperator(ABC):
    """A generic operator to apply to data."""

    @property
    def logger(self):
        return logging.getLogger("base")

    @abstractmethod
    def transform(self, input_: object) -> object:
        """Implemented in concrete sub-classes of BaseOperator.

        Concrete implementations of this method should accept a single argument
        as input and return a single output (most likely pandas DataFrame).

        Parameters
        ----------
        input_: object
            The single argument for transform.

        Returns
        -------
        object
            The single output of transform.
        """
        pass

    def __call__(self, *args, **kwargs) -> object:
        """Class instances emulate callable methods."""
        return self.transform(*args, **kwargs)
| 23.254545 | 79 | 0.657545 | 144 | 1,279 | 5.770833 | 0.534722 | 0.036101 | 0.036101 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.254105 | 1,279 | 54 | 80 | 23.685185 | 0.871069 | 0.319781 | 0 | 0 | 0 | 0 | 0.008753 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0.066667 | 0.4 | 0.066667 | 0.8 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 3 |
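The module docstring above describes the operator pattern: `__call__` forwards to `transform`, so operator instances are applied to data like plain functions. A self-contained sketch of that pattern with a hypothetical concrete operator (the real package imports `BaseOperator` from `piperoni.operators.base`):

```python
from abc import ABC, abstractmethod


class BaseOperator(ABC):
    """Minimal re-sketch of the operator pattern described above."""

    @abstractmethod
    def transform(self, input_: object) -> object:
        """Concrete sub-classes implement the actual data transformation."""

    def __call__(self, *args, **kwargs) -> object:
        # Calling the instance is just a thin wrapper around transform().
        return self.transform(*args, **kwargs)


class UpperCaseOperator(BaseOperator):
    """Hypothetical concrete operator: upper-cases string input."""

    def transform(self, input_: str) -> str:
        return input_.upper()


opp = UpperCaseOperator()
print(opp("some data"))  # SOME DATA
```

Because every operator is callable, a pipeline is just function composition over a list of operator instances.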
5521ff3fe5fe31c80eacd4765957cf09677868c9 | 3,986 | py | Python | locations/spiders/apple.py | bealbrown/allhours | f750ee7644246a97bd16879f14115d7845f76b89 | [
"MIT"
] | null | null | null | locations/spiders/apple.py | bealbrown/allhours | f750ee7644246a97bd16879f14115d7845f76b89 | [
"MIT"
] | null | null | null | locations/spiders/apple.py | bealbrown/allhours | f750ee7644246a97bd16879f14115d7845f76b89 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
import scrapy
# import json
import re

from locations.hourstudy import inputoutput


class AppleSpider(scrapy.Spider):
    name = "apple"
    allowed_domains = ["apple.com"]
    start_urls = (
        'https://www.apple.com/retail/storelist/',
    )

    def store_hours(self, store_hours):
        result = ''
        for line in range(0, int(len(store_hours) / 2)):
            days = re.search(r'^(\D{3})\s*((through|and|-|&)\s*(\D{3}))?$', store_hours[line * 2])
            if (not days) or (store_hours[line * 2 + 1] == 'Closed'):
                continue
            result += days[1][:2]
            try:
                result += "-" + days[4][:2] + " "
            except Exception:
                result += " "
            hours_str = store_hours[line * 2 + 1].replace("Noon", "12:00 a.m.")
            hours = re.search(r'(\d+):(\d+)\s*((a\.m\.)|(p\.m\.))\s*-\s*(\d+):?(\d+)?\s*((a\.m\.)|(p\.m\.))', hours_str)
            result += str(int(hours[1]) + (12 if hours[3] in ['p.m.', 'm.p.'] else 0)) + ':' + hours[2] + '-'
            result += str(int(hours[6]) + (12 if hours[8] in ['p.m.', 'm.p.'] else 0)) + ':' + hours[7] + ';'
        return result.rstrip(';')

    def parse(self, response):
        shops = response.xpath('//div[@id="usstores"]//li/a/@href')
        for shop in shops:
            yield scrapy.Request(response.urljoin(shop.extract()), callback=self.parse_shops)

    def parse_shops(self, response):
        props = {}
        if response.xpath('//div[contains(@class,"store-details")]//span[contains(@class,"store-phone")]/text()'):
            props['phone'] = response.xpath('//div[contains(@class,"store-details")]//span[contains(@class,"store-phone")]/text()').extract_first()
        if response.xpath('//div[contains(@class,"store-details")]//span[contains(@class,"store-street")]/text()'):
            props['addr_full'] = response.xpath('//div[contains(@class,"store-details")]//span[contains(@class,"store-street")]/text()').extract_first()
        if response.xpath('//div[contains(@class,"store-details")]//span[contains(@class,"store-postal-code")]/text()'):
            props['postcode'] = response.xpath('//div[contains(@class,"store-details")]//span[contains(@class,"store-postal-code")]/text()').extract_first()
        if response.xpath('//div[contains(@class,"store-locality")]//span[contains(@class,"store-postal-code")]/text()'):
            props['city'] = response.xpath('//div[contains(@class,"store-locality")]//span[contains(@class,"store-postal-code")]/text()').extract_first()
        if response.xpath('//div[contains(@class,"store-locality")]//span[contains(@class,"store-region")]/text()'):
            props['state'] = response.xpath('//div[contains(@class,"store-locality")]//span[contains(@class,"store-region")]/text()').extract_first()
        if response.xpath('//li[@class="country-name"]/span/text()'):
            props['country'] = response.xpath('//li[@class="country-name"]/span/text()').extract_first().strip()
        if response.xpath('//div[contains(@class,"copy-software")]/a/@href'):
            pos = re.search(r'&lat=(.+)&long=(.+)', response.xpath('//div[contains(@class,"copy-software")]/a/@href').extract_first())
            props['lat'] = pos[1]
            props['lon'] = pos[2]

        # yield inputoutput(
        #     **props,
        #     website=response.url,
        #     ref=response.xpath('//meta[@property="og:title"]/@content').extract_first(),
        #     country='USA',
        #     opening_hours=self.store_hours(response.xpath('//div[contains(@class,"store-hours-row")]/div/text()').extract()),
        # )

        raw = response.xpath('//div[contains(@class,"store-hours-row")]/div/text()').extract()
        formatted = self.store_hours(raw)
        yield inputoutput(raw, formatted)
| 52.447368 | 160 | 0.561967 | 480 | 3,986 | 4.616667 | 0.258333 | 0.146661 | 0.186823 | 0.162455 | 0.575812 | 0.561372 | 0.554152 | 0.547834 | 0.493682 | 0.452166 | 0 | 0.010091 | 0.204466 | 3,986 | 75 | 161 | 53.146667 | 0.688742 | 0.077772 | 0 | 0 | 0 | 0.22 | 0.398254 | 0.35461 | 0 | 0 | 0 | 0 | 0 | 1 | 0.06 | false | 0 | 0.06 | 0 | 0.22 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
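The `store_hours` method above condenses scraped day/hour strings into a compact schedule with regular expressions. A simplified, self-contained sketch of the time-parsing step (hypothetical helper; unlike the spider's bare `+12` arithmetic, it also handles the 12 a.m./12 p.m. edge cases):

```python
import re


def parse_hours(hours_str):
    """Turn a string like '10:00 a.m. - 9:00 p.m.' into 24-hour 'HH:MM-HH:MM'."""
    m = re.search(
        r'(\d+):(\d+)\s*(a\.m\.|p\.m\.)\s*-\s*(\d+):(\d+)\s*(a\.m\.|p\.m\.)',
        hours_str.replace("Noon", "12:00 p.m."),
    )
    if not m:
        return None

    def to_24h(hour, minute, meridiem):
        # 12 a.m. -> 0, 12 p.m. -> 12, otherwise add 12 for p.m. times
        h = int(hour) % 12 + (12 if meridiem == 'p.m.' else 0)
        return f"{h}:{minute}"

    return to_24h(m[1], m[2], m[3]) + "-" + to_24h(m[4], m[5], m[6])


print(parse_hours("10:00 a.m. - 9:00 p.m."))  # 10:00-21:00
```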
5523c221f96bcb8d726eca4962f30f097b541ff6 | 25 | py | Python | recipes/yolov5_onnx/example.py | notAI-tech/fastDeploy-core | 011f4cba2d2b018efe0a5548978e290225f2e745 | [
"MIT"
] | null | null | null | recipes/yolov5_onnx/example.py | notAI-tech/fastDeploy-core | 011f4cba2d2b018efe0a5548978e290225f2e745 | [
"MIT"
] | 1 | 2020-04-12T13:36:22.000Z | 2020-04-12T13:36:22.000Z | recipes/yolov5_onnx/example.py | notAI-tech/fastDeploy-core | 011f4cba2d2b018efe0a5548978e290225f2e745 | [
"MIT"
] | null | null | null | example = ["zidane.jpg"]
| 12.5 | 24 | 0.64 | 3 | 25 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12 | 25 | 1 | 25 | 25 | 0.727273 | 0 | 0 | 0 | 0 | 0 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
9b4a540421f9c467216dc50871c7a0aedf1eb26a | 396 | py | Python | NST_Hotbox/W_hotbox/Single/apColorSampler/003.py | CreativeLyons/NST_Hotbox | 48d23a651d9578a70b16bcc2c034de4b3586883f | [
"MIT"
] | null | null | null | NST_Hotbox/W_hotbox/Single/apColorSampler/003.py | CreativeLyons/NST_Hotbox | 48d23a651d9578a70b16bcc2c034de4b3586883f | [
"MIT"
] | null | null | null | NST_Hotbox/W_hotbox/Single/apColorSampler/003.py | CreativeLyons/NST_Hotbox | 48d23a651d9578a70b16bcc2c034de4b3586883f | [
"MIT"
] | null | null | null | #----------------------------------------------------------------------------------------------------------
#
# AUTOMATICALLY GENERATED FILE TO BE USED BY W_HOTBOX
#
# NAME: Bake
# COLOR: #685777
# TEXTCOLOR: #ffffff
#
#----------------------------------------------------------------------------------------------------------
ns = nuke.selectedNodes()
for n in ns:
    n.knob('bake').execute()
| 28.285714 | 107 | 0.305556 | 26 | 396 | 4.615385 | 0.884615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016667 | 0.090909 | 396 | 13 | 108 | 30.461538 | 0.316667 | 0.775253 | 0 | 0 | 1 | 0 | 0.051282 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
9b7c2fa4b13bd61043410d116386fb2ea45cedc8 | 999 | py | Python | corehq/form_processor/backends/sql/supply.py | dimagilg/commcare-hq | ea1786238eae556bb7f1cbd8d2460171af1b619c | [
"BSD-3-Clause"
] | null | null | null | corehq/form_processor/backends/sql/supply.py | dimagilg/commcare-hq | ea1786238eae556bb7f1cbd8d2460171af1b619c | [
"BSD-3-Clause"
] | 94 | 2020-12-11T06:57:31.000Z | 2022-03-15T10:24:06.000Z | corehq/form_processor/backends/sql/supply.py | dimagilg/commcare-hq | ea1786238eae556bb7f1cbd8d2460171af1b619c | [
"BSD-3-Clause"
] | null | null | null | from corehq.apps.commtrack.helpers import make_supply_point
from corehq.form_processor.abstract_models import AbstractSupplyInterface
from corehq.form_processor.backends.sql.dbaccessors import CaseAccessorSQL


class SupplyPointSQL(AbstractSupplyInterface):

    @classmethod
    def get_or_create_by_location(cls, location):
        sp = SupplyPointSQL.get_by_location(location)
        if not sp:
            sp = make_supply_point(location.domain, location)
        return sp

    @classmethod
    def get_by_location(cls, location):
        return location.linked_supply_point()

    @staticmethod
    def get_closed_and_open_by_location_id_and_domain(domain, location_id):
        return CaseAccessorSQL.get_case_by_location(domain, location_id)

    @staticmethod
    def get_supply_point(supply_point_id):
        return CaseAccessorSQL.get_case(supply_point_id)

    @staticmethod
    def get_supply_points(supply_point_ids):
        return list(CaseAccessorSQL.get_cases(supply_point_ids))
| 33.3 | 75 | 0.771772 | 122 | 999 | 5.959016 | 0.360656 | 0.121045 | 0.074278 | 0.063274 | 0.154058 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.168168 | 999 | 29 | 76 | 34.448276 | 0.87485 | 0 | 0 | 0.227273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.227273 | false | 0 | 0.136364 | 0.181818 | 0.636364 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
32c9887e376278347c4e15e18c18eb75e940bdbd | 269 | py | Python | Appointment/admin.py | CiganOliviu/InfiniteShoot | 14f7fb21e360e3c58876d82ebbe206054c72958e | [
"MIT"
] | 1 | 2021-04-02T16:45:37.000Z | 2021-04-02T16:45:37.000Z | Appointment/admin.py | CiganOliviu/InfiniteShoot-1 | 6322ae34f88caaffc1de29dfa4f6d86d175810a7 | [
"Apache-2.0"
] | null | null | null | Appointment/admin.py | CiganOliviu/InfiniteShoot-1 | 6322ae34f88caaffc1de29dfa4f6d86d175810a7 | [
"Apache-2.0"
] | null | null | null | from django.contrib import admin

from .models import Appointment


class AppointmentAdmin(admin.ModelAdmin):
    list_display = ('first_name', 'last_name', 'email', 'desired_date', 'sent_moment', 'seen', 'accepted')


admin.site.register(Appointment, AppointmentAdmin)
| 26.9 | 106 | 0.765799 | 31 | 269 | 6.483871 | 0.774194 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107807 | 269 | 9 | 107 | 29.888889 | 0.8375 | 0 | 0 | 0 | 0 | 0 | 0.219331 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.8 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
32d0f3bc24adfab4054c30f3f5e21861d6979f1a | 58 | py | Python | ASCII/t1.py | matewszz/Python | 18b7fc96d3ed294d2002ed484941a0ee8cf18108 | [
"MIT"
] | null | null | null | ASCII/t1.py | matewszz/Python | 18b7fc96d3ed294d2002ed484941a0ee8cf18108 | [
"MIT"
] | null | null | null | ASCII/t1.py | matewszz/Python | 18b7fc96d3ed294d2002ed484941a0ee8cf18108 | [
"MIT"
] | null | null | null | import string
s = 'mary11had a littlee lamb'
print(s) | 11.6 | 30 | 0.689655 | 9 | 58 | 4.444444 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.043478 | 0.206897 | 58 | 5 | 31 | 11.6 | 0.826087 | 0 | 0 | 0 | 0 | 0 | 0.40678 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0.333333 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
32d857afba3a20329784306fd1db72a9f87339f6 | 440 | py | Python | tests/test_utils_data.py | lucasmfaria/image_classifier | 94e077125ee0c4b7a0cb7d2ff043567a3601a516 | [
"MIT"
] | 7 | 2021-12-11T20:24:57.000Z | 2022-03-03T02:56:32.000Z | tests/test_utils_data.py | lucasmfaria/image_classifier | 94e077125ee0c4b7a0cb7d2ff043567a3601a516 | [
"MIT"
] | null | null | null | tests/test_utils_data.py | lucasmfaria/image_classifier | 94e077125ee0c4b7a0cb7d2ff043567a3601a516 | [
"MIT"
] | null | null | null | import sys
from pathlib import Path

try:
    from utils.data import create_aux_dataframe, train_test_valid_split, filter_binary_labels, optimize_dataset, \
        delete_folder, create_split
except ModuleNotFoundError:
    sys.path.insert(0, str(Path(__file__).resolve().parent.parent))
    from utils.data import create_aux_dataframe, train_test_valid_split, filter_binary_labels, optimize_dataset, \
        delete_folder, create_split
| 40 | 114 | 0.795455 | 59 | 440 | 5.525424 | 0.508475 | 0.055215 | 0.079755 | 0.116564 | 0.687117 | 0.687117 | 0.687117 | 0.687117 | 0.687117 | 0.687117 | 0 | 0.002632 | 0.136364 | 440 | 10 | 115 | 44 | 0.855263 | 0 | 0 | 0.444444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.444444 | 0 | 0.444444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
32d86d992fd5bceca71f38045878ec955f5ec00b | 294 | py | Python | src/rlext/misc.py | kngwyu/rlext | b7e74ba4b440d2c6f98ea79dcd0aab15053fa96a | [
"Apache-2.0"
] | null | null | null | src/rlext/misc.py | kngwyu/rlext | b7e74ba4b440d2c6f98ea79dcd0aab15053fa96a | [
"Apache-2.0"
] | null | null | null | src/rlext/misc.py | kngwyu/rlext | b7e74ba4b440d2c6f98ea79dcd0aab15053fa96a | [
"Apache-2.0"
] | null | null | null | import typing as t

import numpy as np
from gym.spaces import Box


def box_action_scaler(action_space: Box) -> t.Callable[[np.ndarray], np.ndarray]:
    shift = action_space.low
    scale = action_space.high - action_space.low
    return lambda action: scale / (1.0 + np.exp(-action)) + shift
| 26.727273 | 81 | 0.717687 | 47 | 294 | 4.361702 | 0.531915 | 0.214634 | 0.136585 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00823 | 0.173469 | 294 | 10 | 82 | 29.4 | 0.835391 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.428571 | 0 | 0.714286 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
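`box_action_scaler` above returns a closure that squashes an unbounded action through a logistic function and rescales it into the Box's `[low, high]` range. A dependency-free sketch of the same math for scalars (the function name is hypothetical; the real helper operates on numpy arrays from a `gym.spaces.Box`):

```python
import math


def make_action_scaler(low, high):
    """Map any real number into the open interval (low, high) via
    scale / (1 + exp(-action)) + shift, as in box_action_scaler above."""
    shift = low
    scale = high - low
    return lambda action: scale / (1.0 + math.exp(-action)) + shift


scaler = make_action_scaler(-1.0, 1.0)
print(scaler(0.0))  # 0.0 — the midpoint of the interval
```

Large positive actions approach `high`, large negative actions approach `low`, and the bounds themselves are never reached.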
32f27312c84306f3edb747f0f69983c26b59d778 | 3,586 | py | Python | WebService/app/models.py | CryptoSalamander/DeepFake-Detection | f3b1c95ce1955a4c203a9f3d1279c5fbade66684 | [
"MIT"
] | null | null | null | WebService/app/models.py | CryptoSalamander/DeepFake-Detection | f3b1c95ce1955a4c203a9f3d1279c5fbade66684 | [
"MIT"
] | null | null | null | WebService/app/models.py | CryptoSalamander/DeepFake-Detection | f3b1c95ce1955a4c203a9f3d1279c5fbade66684 | [
"MIT"
] | 1 | 2021-04-11T06:27:48.000Z | 2021-04-11T06:27:48.000Z | from app import db, login
from datetime import datetime
from flask_login import UserMixin
from werkzeug.security import generate_password_hash, check_password_hash


class User(UserMixin, db.Model):
    id = db.Column(db.Integer, primary_key=True)
    user_name = db.Column(db.String(64), unique=True, nullable=False)
    email = db.Column(db.String(120), unique=True, nullable=False)
    password_hash = db.Column(db.String(60), nullable=False)
    avatar = db.Column(db.String(20), nullable=False, default='avatar.png')
    cover_pic = db.Column(db.String(20), nullable=False, default='cover.png')
    age = db.Column(db.Integer, nullable=False)
    address = db.Column(db.Text, nullable=False)
    register_date = db.Column(
        db.DateTime, nullable=False, default=datetime.utcnow)
    videos = db.relationship('Video', backref='author', lazy='dynamic')
    comments = db.relationship('Comments', backref='author', lazy='dynamic')
    liked = db.relationship(
        'Likes', foreign_keys='Likes.user_id', backref='user', lazy='dynamic')

    def __repr__(self):
        return f"User('{self.user_name}', '{self.email}')"

    def set_password(self, password):
        '''Set the password hash'''
        self.password_hash = generate_password_hash(password)

    def check_password(self, password):
        '''Check the given password against the hash stored in the database'''
        return check_password_hash(self.password_hash, password)

    def like_video(self, video):
        if not self.has_liked_video(video):
            like = Likes(user_id=self.id, video_id=video.id)
            db.session.add(like)

    def unlike_video(self, video):
        if self.has_liked_video(video):
            Likes.query.filter_by(user_id=self.id, video_id=video.id).delete()

    def has_liked_video(self, video):
        return Likes.query.filter(Likes.user_id == self.id, Likes.video_id == video.id).count() > 0


@login.user_loader
def load_user(id):
    '''Load the user into the session managed by LoginManager'''
    return User.query.get(int(id))


class Video(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    video_title = db.Column(db.String(120), nullable=False)
    video_content = db.Column(db.String(40), nullable=False)
    description = db.Column(db.Text, nullable=False)
    category = db.Column(db.String, nullable=False)
    views_count = db.Column(db.Integer, nullable=False, default=0)
    upload_time = db.Column(db.DateTime, nullable=False,
                            default=datetime.utcnow)
    likes_count = db.Column(db.Integer, nullable=False, default=0)
    user_id = db.Column(db.Integer, db.ForeignKey('user.id'), nullable=False)
    comments = db.relationship('Comments', backref='video', lazy='dynamic')
    likes = db.relationship('Likes', backref='video', lazy='dynamic')

    def __repr__(self):
        return f"Video('{self.video_title}', '{self.upload_time}')"


class Likes(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    video_id = db.Column(db.Integer, db.ForeignKey('video.id'), nullable=False)
    user_id = db.Column(db.Integer, db.ForeignKey('user.id'), nullable=False)


class Comments(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    body = db.Column(db.Text)
    comment_time = db.Column(db.DateTime, index=True, default=datetime.utcnow)
    author_id = db.Column(db.Integer, db.ForeignKey('user.id'))
    video_id = db.Column(db.Integer, db.ForeignKey('video.id'))

    def __repr__(self):
        return f"Comments('{self.author_id}', '{self.body}', {self.video_id})"
| 38.55914 | 99 | 0.688511 | 502 | 3,586 | 4.782869 | 0.207171 | 0.086631 | 0.108288 | 0.084965 | 0.44898 | 0.356518 | 0.321533 | 0.297376 | 0.244065 | 0.149521 | 0 | 0.006376 | 0.168991 | 3,586 | 92 | 100 | 38.978261 | 0.799329 | 0.041829 | 0 | 0.140625 | 1 | 0 | 0.090643 | 0.02924 | 0 | 0 | 0 | 0 | 0 | 1 | 0.140625 | false | 0.09375 | 0.0625 | 0.0625 | 0.84375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 3 |
fd0445408b4e2ce0f1a5646a132544bebf3d255c | 190 | py | Python | dojotrust/request.py | kipkemoimayor/Dojo_Trust | 34b0dbe8ae056b0747ca3474b558dd65a59eb5b0 | [
"MIT"
] | 2 | 2019-06-08T20:02:40.000Z | 2021-09-30T07:20:57.000Z | dojotrust/request.py | kipkemoimayor/Dojo_Trust | 34b0dbe8ae056b0747ca3474b558dd65a59eb5b0 | [
"MIT"
] | 12 | 2020-02-12T00:31:48.000Z | 2022-02-10T12:04:08.000Z | dojotrust/request.py | kipkemoimayor/Dojo_Trust | 34b0dbe8ae056b0747ca3474b558dd65a59eb5b0 | [
"MIT"
] | null | null | null | import requests


def location():
    response = requests.get("http://api.ipstack.com/105.27.206.46?access_key=18220f1e8b17fbf02d95d972679ccfef")
    geodata = response.json()
    return geodata
| 27.142857 | 109 | 0.763158 | 23 | 190 | 6.26087 | 0.869565 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.171598 | 0.110526 | 190 | 6 | 110 | 31.666667 | 0.680473 | 0 | 0 | 0 | 0 | 0 | 0.421053 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
fd17982343a6bf5291fbda21f25004d6079a45a1 | 199 | py | Python | students/K33402/Kondrashov_Egor/LR2/commerce/commerce/urls.py | emina13/ITMO_ICT_WebDevelopment_2021-2022 | 498a6138e352e7e0ca40d1eb301bc29416158f51 | [
"MIT"
] | 7 | 2021-09-02T08:20:58.000Z | 2022-01-12T11:48:07.000Z | students/K33402/Kondrashov_Egor/LR2/commerce/commerce/urls.py | emina13/ITMO_ICT_WebDevelopment_2021-2022 | 498a6138e352e7e0ca40d1eb301bc29416158f51 | [
"MIT"
] | 76 | 2021-09-17T23:01:50.000Z | 2022-03-18T16:42:03.000Z | students/K33402/Kondrashov_Egor/LR2/commerce/commerce/urls.py | emina13/ITMO_ICT_WebDevelopment_2021-2022 | 498a6138e352e7e0ca40d1eb301bc29416158f51 | [
"MIT"
] | 60 | 2021-09-04T16:47:39.000Z | 2022-03-21T04:41:27.000Z | """commerce URL Configuration"""
from django.contrib import admin
from django.urls import include, path
urlpatterns = [
    path("admin/", admin.site.urls),
    path("", include("auctions.urls")),
]
| 22.111111 | 38 | 0.698492 | 24 | 199 | 5.791667 | 0.583333 | 0.143885 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.145729 | 199 | 8 | 39 | 24.875 | 0.817647 | 0.130653 | 0 | 0 | 0 | 0 | 0.113772 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
fd3feec3826f9373db9daf6f11ac2a6f5a6d9609 | 431 | py | Python | env/lib/python3.6/site-packages/traits/adaptation/api.py | Raniac/NEURO-LEARN | 3c3acc55de8ba741e673063378e6cbaf10b64c7a | [
"Apache-2.0"
] | 8 | 2019-05-29T09:38:30.000Z | 2021-01-20T03:36:59.000Z | env/lib/python3.6/site-packages/traits/adaptation/api.py | Raniac/neurolearn_dev | 3c3acc55de8ba741e673063378e6cbaf10b64c7a | [
"Apache-2.0"
] | 12 | 2021-03-09T03:01:16.000Z | 2022-03-11T23:59:36.000Z | env/lib/python3.6/site-packages/traits/adaptation/api.py | Raniac/NEURO-LEARN | 3c3acc55de8ba741e673063378e6cbaf10b64c7a | [
"Apache-2.0"
] | 1 | 2020-07-17T12:49:49.000Z | 2020-07-17T12:49:49.000Z | from .adapter import Adapter, PurePythonAdapter
from .adaptation_error import AdaptationError
from .adaptation_manager import (
    adapt,
    AdaptationManager,
    get_global_adaptation_manager,
    provides_protocol,
    register_factory,
    register_offer,
    register_provides,
    reset_global_adaptation_manager,
    set_global_adaptation_manager,
    supports_protocol,
)
from .adaptation_offer import AdaptationOffer
| 22.684211 | 47 | 0.795824 | 43 | 431 | 7.581395 | 0.488372 | 0.208589 | 0.211656 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.164733 | 431 | 18 | 48 | 23.944444 | 0.905556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.266667 | 0 | 0.266667 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
fd41bc6b8352a13522178ab7c7827ca643f6a978 | 3,115 | py | Python | api/users/serializers.py | fujikawahiroaki/webspecimanager | d46a4feec0c695d5231b21e3311f4adbe25435cb | [
"BSD-2-Clause"
] | null | null | null | api/users/serializers.py | fujikawahiroaki/webspecimanager | d46a4feec0c695d5231b21e3311f4adbe25435cb | [
"BSD-2-Clause"
] | 10 | 2020-12-07T08:54:30.000Z | 2022-03-13T12:23:03.000Z | api/users/serializers.py | fujikawahiroaki/webspecimanager | d46a4feec0c695d5231b21e3311f4adbe25435cb | [
"BSD-2-Clause"
] | null | null | null | from rest_framework import serializers
from django.core.validators import RegexValidator

from .models import UserProfile


class UserProfileSerializer(serializers.ModelSerializer):
    """Serializer for the user profile model."""
    user = serializers.HiddenField(default=serializers.CurrentUserDefault())

    class Meta:
        model = UserProfile
        fields = '__all__'
        read_only_fields = ('created_at', 'id')
        extra_kwargs = {
            'contient': {
                'validators': [RegexValidator(r'^[!-~ ]+$',
                                              message='Only ASCII letters, digits, and symbols are allowed')]
            },
            'island_group': {
                'validators': [RegexValidator(r'^[!-~ ]+$',
                                              message='Only ASCII letters, digits, and symbols are allowed')]
            },
            'island': {
                'validators': [RegexValidator(r'^[!-~ ]+$',
                                              message='Only ASCII letters, digits, and symbols are allowed')]
            },
            'state_provice': {
                'validators': [RegexValidator(r'^[!-~ ]+$',
                                              message='Only ASCII letters, digits, and symbols are allowed')]
            },
            'kingdom': {
                'validators': [RegexValidator(r'^[A-Z][a-z]+$',
                                              message='Only a single word of ASCII letters, with only the first letter capitalized, is allowed')]
            },
            'phylum': {
                'validators': [RegexValidator(r'^[A-Z][a-z]+$',
                                              message='Only a single word of ASCII letters, with only the first letter capitalized, is allowed')]
            },
            'class_name': {
                'validators': [RegexValidator(r'^[A-Z][a-z]+$',
                                              message='Only a single word of ASCII letters, with only the first letter capitalized, is allowed')]
            },
            'order': {
                'validators': [RegexValidator(r'^[A-Z][a-z]+$',
                                              message='Only a single word of ASCII letters, with only the first letter capitalized, is allowed')]
            },
            'identified_by': {
                'validators': [RegexValidator(r'^[!-~ ]+$',
                                              message='Only ASCII letters, digits, and symbols are allowed')]
            },
            'collecter': {
                'validators': [RegexValidator(r'^[!-~ ]+$',
                                              message='Only ASCII letters, digits, and symbols are allowed')]
            },
            'preparation_type': {
                'validators': [RegexValidator(r'^[!-~ ]+$',
                                              message='Only ASCII letters, digits, and symbols are allowed')]
            },
            'disposition': {
                'validators': [RegexValidator(r'^[!-~ ]+$',
                                              message='Only ASCII letters, digits, and symbols are allowed')]
            },
            'lifestage': {
                'validators': [RegexValidator(r'^[!-~ ]+$',
                                              message='Only ASCII letters, digits, and symbols are allowed')]
            },
            'establishment_means': {
                'validators': [RegexValidator(r'^[!-~ ]+$',
                                              message='Only ASCII letters, digits, and symbols are allowed')]
            },
            'rights': {
                'validators': [RegexValidator(r'^[!-~ ]+$',
                                              message='Only ASCII letters, digits, and symbols are allowed')]
            },
        }
| 40.986842 | 84 | 0.380417 | 156 | 3,115 | 7.5 | 0.358974 | 0.307692 | 0.320513 | 0.300855 | 0.623077 | 0.292308 | 0.208547 | 0.208547 | 0.208547 | 0.208547 | 0 | 0.002446 | 0.47512 | 3,115 | 75 | 85 | 41.533333 | 0.71315 | 0.006421 | 0 | 0.422535 | 0 | 0 | 0.224992 | 0.033668 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.042254 | 0 | 0.084507 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
b5bf923f69b2f1a195f6a2164fbae124f199c868 | 81 | py | Python | moodle/__init__.py | Crissal1995/new_moodle_importer | c068a2dc2f97a6346031ac4e9a6844a98caa1305 | [
"MIT"
] | null | null | null | moodle/__init__.py | Crissal1995/new_moodle_importer | c068a2dc2f97a6346031ac4e9a6844a98caa1305 | [
"MIT"
] | null | null | null | moodle/__init__.py | Crissal1995/new_moodle_importer | c068a2dc2f97a6346031ac4e9a6844a98caa1305 | [
"MIT"
] | null | null | null | from moodle.automator import Automator
__all__ = ["Automator"]
version = "0.1"
| 13.5 | 38 | 0.728395 | 10 | 81 | 5.5 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028986 | 0.148148 | 81 | 5 | 39 | 16.2 | 0.768116 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
b5c456a5e8c7aba4885bba955e13589c3d341152 | 1,287 | py | Python | torchmetrics/image/__init__.py | radandreicristian/metrics | 8048c77229f47d82d1adc391407f9cd2f5a8e9fa | [
"Apache-2.0"
] | 2 | 2022-01-20T12:33:18.000Z | 2022-03-25T04:30:02.000Z | torchmetrics/image/__init__.py | radandreicristian/metrics | 8048c77229f47d82d1adc391407f9cd2f5a8e9fa | [
"Apache-2.0"
] | null | null | null | torchmetrics/image/__init__.py | radandreicristian/metrics | 8048c77229f47d82d1adc391407f9cd2f5a8e9fa | [
"Apache-2.0"
] | null | null | null | # Copyright The PyTorch Lightning team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from torchmetrics.image.inception import IS, InceptionScore # noqa: F401
from torchmetrics.image.kid import KID, KernelInceptionDistance # noqa: F401
from torchmetrics.image.psnr import PSNR, PeakSignalNoiseRatio # noqa: F401
from torchmetrics.image.ssim import (  # noqa: F401
    SSIM,
    MultiScaleStructuralSimilarityIndexMeasure,
    StructuralSimilarityIndexMeasure,
)
from torchmetrics.utilities.imports import _LPIPS_AVAILABLE, _TORCH_FIDELITY_AVAILABLE

if _TORCH_FIDELITY_AVAILABLE:
    from torchmetrics.image.fid import FID, FrechetInceptionDistance  # noqa: F401

if _LPIPS_AVAILABLE:
    from torchmetrics.image.lpip import LPIPS, LearnedPerceptualImagePatchSimilarity  # noqa: F401
| 44.37931 | 98 | 0.794095 | 163 | 1,287 | 6.208589 | 0.546012 | 0.110672 | 0.124506 | 0.071146 | 0.085968 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020018 | 0.146076 | 1,287 | 28 | 99 | 45.964286 | 0.900819 | 0.485625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.538462 | 0 | 0.538462 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
b5d70ab09684b20a24edec88af826451e55e1ec4 | 195 | py | Python | services/viewcounts/apps.py | RyanFleck/AuxilliaryWebsiteServices | bcaa6689e567fdf9f20f7f4ea84043aa2b6f1378 | [
"MIT"
] | 1 | 2020-11-11T20:20:42.000Z | 2020-11-11T20:20:42.000Z | services/viewcounts/apps.py | RyanFleck/AuxilliaryWebsiteServices | bcaa6689e567fdf9f20f7f4ea84043aa2b6f1378 | [
"MIT"
] | 17 | 2020-11-09T19:04:04.000Z | 2022-03-01T18:08:42.000Z | services/viewcounts/apps.py | RyanFleck/AuxilliaryWebsiteServices | bcaa6689e567fdf9f20f7f4ea84043aa2b6f1378 | [
"MIT"
] | null | null | null | from django.apps import AppConfig
from django.utils.translation import gettext_lazy as _
class ViewCountsConfig(AppConfig):
name = "services.viewcounts"
verbose_name = _("View Counts")
| 24.375 | 54 | 0.774359 | 23 | 195 | 6.391304 | 0.782609 | 0.136054 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148718 | 195 | 7 | 55 | 27.857143 | 0.885542 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
bd1fb555a3aa55bcecdf98f31894d5a52d697994 | 734 | py | Python | backend/src/data_process/datacleaner.py | AnXi-TieGuanYin-Tea/MusicGenreClassifiaction | a0b9f621b0a5d2451180b12af7681756c5abd138 | [
"MIT"
] | 7 | 2018-05-01T19:39:17.000Z | 2020-01-02T17:11:05.000Z | backend/src/data_process/datacleaner.py | AnXi-TieGuanYin-Tea/MusicGenreClassifiaction | a0b9f621b0a5d2451180b12af7681756c5abd138 | [
"MIT"
] | 10 | 2018-12-10T22:16:43.000Z | 2020-08-27T18:23:45.000Z | backend/src/data_process/datacleaner.py | AnXi-TieGuanYin-Tea/MusicGenreClassifiaction | a0b9f621b0a5d2451180b12af7681756c5abd138 | [
"MIT"
] | 2 | 2021-04-16T08:20:17.000Z | 2022-01-06T14:06:44.000Z | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Tue Nov 6 2018
@author: Akihiro Inui
"""
import numpy as np
class DataCleaner:
@staticmethod
def clean_data(input_dataframe):
"""
Replace Inf to NaN, then replace NaN to 0
:param input_dataframe: input pandas dataframe
:return clean dataframe
"""
# Copy input dataframe
clean_dataframe = input_dataframe.copy()
# Replace Inf to NaN
        clean_dataframe = clean_dataframe.replace([np.inf, -np.inf], np.nan)
# Replace NaN to 0
clean_dataframe = clean_dataframe.fillna(0)
return clean_dataframe
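A minimal sketch of what `clean_data` does on a tiny frame (illustrative only, not part of the original module): `Inf` values become `NaN`, then every `NaN` is filled with 0.

```python
import numpy as np
import pandas as pd

# Replicate DataCleaner.clean_data on a throwaway DataFrame.
df = pd.DataFrame({"a": [1.0, np.inf, -np.inf, np.nan]})
clean = df.replace([np.inf, -np.inf], np.nan).fillna(0)
print(clean["a"].tolist())  # [1.0, 0.0, 0.0, 0.0]
```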
| 23.677419 | 74 | 0.637602 | 93 | 734 | 4.903226 | 0.419355 | 0.276316 | 0.201754 | 0.184211 | 0.225877 | 0.225877 | 0.225877 | 0.225877 | 0.225877 | 0.225877 | 0 | 0.018519 | 0.264305 | 734 | 30 | 75 | 24.466667 | 0.825926 | 0.358311 | 0 | 0.222222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.111111 | 0 | 0.444444 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
bd237c8cc029dc3ff7a07afaffbfd766a6cb00a7 | 205 | py | Python | ontraportlib/controllers/__init__.py | LifePosts/ontraport | fb4834e89b897dce3475c89c7e6c34bf8756880e | [
"MIT"
] | null | null | null | ontraportlib/controllers/__init__.py | LifePosts/ontraport | fb4834e89b897dce3475c89c7e6c34bf8756880e | [
"MIT"
] | null | null | null | ontraportlib/controllers/__init__.py | LifePosts/ontraport | fb4834e89b897dce3475c89c7e6c34bf8756880e | [
"MIT"
] | null | null | null | __all__ = [
'base_controller',
'forms_controller',
'landing_page_controller',
'messages_controller',
'objects_controller',
'tasks_controller',
'transactions_controller',
] | 22.777778 | 31 | 0.668293 | 16 | 205 | 7.8125 | 0.625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.214634 | 205 | 9 | 32 | 22.777778 | 0.776398 | 0 | 0 | 0 | 0 | 0 | 0.65 | 0.23 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
bd3a6e6866b8527ae521f3115fb4c8710b8bf0d2 | 1,185 | py | Python | general/cleanup_py.py | xR86/scripts | a28e4b5011d2629805642b63f97038ede7c0b2e5 | [
"MIT"
] | 1 | 2022-02-05T06:35:14.000Z | 2022-02-05T06:35:14.000Z | general/cleanup_py.py | xR86/scripts | a28e4b5011d2629805642b63f97038ede7c0b2e5 | [
"MIT"
] | 2 | 2017-12-30T10:32:54.000Z | 2021-02-16T23:12:26.000Z | general/cleanup_py.py | xR86/scripts | a28e4b5011d2629805642b63f97038ede7c0b2e5 | [
"MIT"
] | null | null | null | """
Utility functions for finding dependencies locations.
"""
import site
def get_package_install_locations():
'''
There was also this method:
```
from distutils.sysconfig import get_python_lib
print(get_python_lib())
```
'''
return site.getsitepackages()
def recursive_find_peer_deps(find_peerdep):
'''
Call get_package_install_locations and start from there.
Find any requirements.txt files where find_peerdep is listed, save to list
Print list with locations of the packages where peerdep is needed(parse pip freeze ??)
'''
pass
def json_file_location_stub():
import json # is this lazy importing ?
return json.__file__
def pip_freeze():
    # pip >= 10 moved its internals; fall back to the old path for older pips
    try:
        from pip._internal.operations import freeze
    except ImportError:
        from pip.operations import freeze
x = freeze.freeze()
return list(x)
if __name__ == '__main__':
# get_package_install_locations()
print(get_package_install_locations())
print()
# json_file_location_stub()
print(json_file_location_stub())
print()
# pip_freeze()
# print(help('modules'))
p_fr = pip_freeze()
# print('\n'.join('{}: {}'.format(*k) for k in enumerate(p_fr)))
print('\n'.join('%s' % pkg for pkg in p_fr))
| 23.7 | 87 | 0.736709 | 171 | 1,185 | 4.824561 | 0.438596 | 0.054545 | 0.082424 | 0.126061 | 0.30303 | 0.233939 | 0.167273 | 0.167273 | 0.167273 | 0.167273 | 0 | 0 | 0.147679 | 1,185 | 49 | 88 | 24.183673 | 0.816832 | 0.547679 | 0 | 0.105263 | 0 | 0 | 0.024096 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.210526 | false | 0.052632 | 0.157895 | 0 | 0.526316 | 0.263158 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 3 |
bd404d49ac15baec2549ab59990990528394ae4f | 134 | py | Python | helm/dagster/schema/schema/charts/dagster/subschema/telemetry.py | asamoal/dagster | 08fad28e4b608608ce090ce2e8a52c2cf9dd1b64 | [
"Apache-2.0"
] | null | null | null | helm/dagster/schema/schema/charts/dagster/subschema/telemetry.py | asamoal/dagster | 08fad28e4b608608ce090ce2e8a52c2cf9dd1b64 | [
"Apache-2.0"
] | null | null | null | helm/dagster/schema/schema/charts/dagster/subschema/telemetry.py | asamoal/dagster | 08fad28e4b608608ce090ce2e8a52c2cf9dd1b64 | [
"Apache-2.0"
] | null | null | null | from pydantic import BaseModel, Extra
class Telemetry(BaseModel):
enabled: bool
class Config:
extra = Extra.forbid
| 14.888889 | 37 | 0.69403 | 15 | 134 | 6.2 | 0.733333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.246269 | 134 | 8 | 38 | 16.75 | 0.920792 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
bd535ff2da475103db73e39bd34a39afb78bca84 | 85 | py | Python | src/hello/__init__.py | SarahDVictoria/hello | e3bdee706acbbc3b5418b005c0d18297caee25e4 | [
"MIT"
] | null | null | null | src/hello/__init__.py | SarahDVictoria/hello | e3bdee706acbbc3b5418b005c0d18297caee25e4 | [
"MIT"
] | null | null | null | src/hello/__init__.py | SarahDVictoria/hello | e3bdee706acbbc3b5418b005c0d18297caee25e4 | [
"MIT"
] | null | null | null | class Hello:
def __init__(self):
while True:
print("Hello!")
| 17 | 27 | 0.517647 | 9 | 85 | 4.444444 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.364706 | 85 | 4 | 28 | 21.25 | 0.740741 | 0 | 0 | 0 | 0 | 0 | 0.070588 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0 | 0.5 | 0.25 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
1fadefca45df4102aacb48cc8d6446d9f9d5635c | 186 | py | Python | mysite/scisheets/ui/__init__.py | ScienceStacks/JViz | c8de23d90d49d4c9bc10da25f4a87d6f44aab138 | [
"Artistic-2.0",
"Apache-2.0"
] | 31 | 2016-11-16T22:34:35.000Z | 2022-03-22T22:16:11.000Z | mysite/scisheets/ui/__init__.py | ScienceStacks/JViz | c8de23d90d49d4c9bc10da25f4a87d6f44aab138 | [
"Artistic-2.0",
"Apache-2.0"
] | 6 | 2017-06-24T06:29:36.000Z | 2022-01-23T06:30:01.000Z | mysite/scisheets/ui/__init__.py | ScienceStacks/JViz | c8de23d90d49d4c9bc10da25f4a87d6f44aab138 | [
"Artistic-2.0",
"Apache-2.0"
] | 4 | 2017-07-27T16:23:50.000Z | 2022-03-12T06:36:13.000Z | """
Extends the scitables core to display tables in a browser.
UITable is a rendering independent user API to tables.
DGTable is an API to tables that gets rendered in YUI DataTable
"""
| 31 | 63 | 0.77957 | 31 | 186 | 4.677419 | 0.741935 | 0.068966 | 0.151724 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.177419 | 186 | 5 | 64 | 37.2 | 0.947712 | 0.951613 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
1fb5d28b4a894485334890c05e234a77200b449a | 731 | py | Python | model/user_achievement.py | fi-ksi/dashboard-alpha | aaa800d02f78f198a95faf27af2bc8afeca4b867 | [
"MIT"
] | 4 | 2017-12-11T00:14:22.000Z | 2022-02-07T15:08:13.000Z | model/user_achievement.py | fi-ksi/dashboard-alpha | aaa800d02f78f198a95faf27af2bc8afeca4b867 | [
"MIT"
] | 94 | 2016-04-29T10:38:37.000Z | 2022-02-10T13:41:29.000Z | model/user_achievement.py | fi-ksi/dashboard-alpha | aaa800d02f78f198a95faf27af2bc8afeca4b867 | [
"MIT"
] | null | null | null | from sqlalchemy import Column, Integer, ForeignKey
from . import Base
from .user import User
from .achievement import Achievement
from .task import Task
class UserAchievement(Base):
__tablename__ = 'user_achievement'
__table_args__ = {
'mysql_engine': 'InnoDB',
'mysql_charset': 'utf8mb4'
}
user_id = Column(Integer, ForeignKey(User.id, ondelete='CASCADE'),
primary_key=True, nullable=False)
achievement_id = Column(Integer,
ForeignKey(Achievement.id, ondelete='CASCADE'),
primary_key=True, nullable=False)
task_id = Column(Integer, ForeignKey(Task.id, ondelete='CASCADE'),
nullable=True)
| 31.782609 | 75 | 0.636115 | 74 | 731 | 6.054054 | 0.378378 | 0.116071 | 0.205357 | 0.167411 | 0.196429 | 0.196429 | 0.196429 | 0.196429 | 0 | 0 | 0 | 0.003745 | 0.269494 | 731 | 22 | 76 | 33.227273 | 0.835206 | 0 | 0 | 0.111111 | 0 | 0 | 0.102599 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.277778 | 0 | 0.611111 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
1fca975aaae74f761eb75ea1f54014d67027a10d | 51,199 | py | Python | library.py | olonok69/TIME_SERIES_MODELING_KERAS | c0e5a6f274d707551e3bcad2d2ccf75b18c39464 | [
"MIT"
] | 1 | 2019-03-14T19:15:32.000Z | 2019-03-14T19:15:32.000Z | library.py | olonok69/TIME_SERIES_MODELING_KERAS | c0e5a6f274d707551e3bcad2d2ccf75b18c39464 | [
"MIT"
] | null | null | null | library.py | olonok69/TIME_SERIES_MODELING_KERAS | c0e5a6f274d707551e3bcad2d2ccf75b18c39464 | [
"MIT"
] | 2 | 2019-03-14T19:15:33.000Z | 2019-05-28T13:12:36.000Z | import os
import sys
import pandas as pd
import numpy as np
import time
import datetime
from sklearn import preprocessing
import matplotlib.pyplot as plt
import matplotlib.pylab as py
#from sklearn import svm
#from sklearn import cross_validation
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import SGDClassifier
from sklearn import neighbors
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import make_scorer
from sklearn.metrics import f1_score
from sklearn.metrics import accuracy_score, classification_report, roc_auc_score, matthews_corrcoef
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score
from sklearn.metrics import classification_report
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix
from sklearn.linear_model import LogisticRegression
from sklearn.decomposition import PCA
#import pandas_datareader.data as web
#import quandl
global kFOLDS
daysAhead = 270
kFOLDS=0
def get_data(symbols, dates):
#"""Read stock data (adjusted close) for given symbols from CSV files."""
df_final = pd.DataFrame(index=dates)
i=0
for symbol in symbols:
path = os.path.dirname(os.path.realpath(__file__))
file_path = path + "\\raw_data\\" + symbol + ".csv"
#print ("Loading csv..." + str(file_path))
        if symbol == "ORB" or symbol == "WT1010":  # OIL
df_temp = pd.read_csv(file_path, parse_dates=True, index_col="Date",usecols=["Date", "Value"], na_values=["nan"])
df_temp = df_temp.rename(columns={"Value": symbol})
        elif symbol in ("EUR", "GBP", "AUD", "JPY"):
df_temp = pd.read_csv(file_path, parse_dates=True, index_col="DATE",usecols=["DATE", "RATE"], na_values=["nan"])
df_temp = df_temp.rename(columns={"RATE": symbol})
elif symbol=="PLAT":
df_temp = pd.read_csv(file_path, parse_dates=True, index_col="Date",usecols=["Date", "London 08:00"], na_values=["nan"])
df_temp = df_temp.rename(columns={"London 08:00": symbol})
elif symbol=="GOLD":
df_temp = pd.read_csv(file_path, parse_dates=True, index_col="Date",usecols=["Date", "USD (AM)"], na_values=["nan"])
df_temp = df_temp.rename(columns={"USD (AM)": symbol})
elif symbol=="SILVER":
df_temp = pd.read_csv(file_path, parse_dates=True, index_col="Date",usecols=["Date", "USD"], na_values=["nan"])
df_temp = df_temp.rename(columns={"USD": symbol})
else:
df_temp = pd.read_csv(file_path, parse_dates=True, index_col="Date",usecols=["Date", "Adj Close"], na_values=["nan"])
df_temp = df_temp.rename(columns={"Adj Close": symbol})
#df_temp = df_temp.rename(columns={"Adj Close": symbol})
df_final = df_final.join(df_temp)
i+=1
if i == 1: # drop dates SPY did not trade
df_final = df_final.dropna(subset=[symbol])
return df_final
def write_Ex(__df_r1, filename, title):
print ("Printing Report..."+ str(title))
total_file = os.path.dirname(os.path.realpath(__file__)) + "\\raw_data\\" + filename
writer = pd.ExcelWriter(total_file)
__df_r1.to_excel(writer,title)
writer.save()
def plot_confusion_matrix(cm,alg, title='Confusion matrix', cmap=plt.cm.Blues):
plt.imshow(cm, interpolation='nearest', cmap=cmap)
title=title+" " +alg
plt.title(title)
plt.colorbar()
names=["Up","Down"]
tick_marks = np.arange(len(names))
plt.xticks(tick_marks, names)
plt.yticks(tick_marks, names)
#plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
def plot_cm(cm, alg):
np.set_printoptions(precision=2)
cm_normalized = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
plt.figure(figsize=(6, 4))
plot_confusion_matrix(cm_normalized, alg)
file = os.path.dirname(os.path.realpath(__file__)) + "\\out\\" +"CM_"+alg+".png"
plt.savefig(file)
def shift_day(df_index,indices_list,indices_day_after):
for indice in indices_list:
if indice not in str(indices_day_after):
df_index[indice] = df_index[indice].shift(1)
return df_index
def Load_DataFrames():
# Open Excel files to load them in a dataframe
path = os.path.dirname(os.path.realpath(__file__))
file_Accuracy = path + "\\raw_data\\Accuracy.xlsx"
# Load files in a dataframe for manipulation
accu = pd.ExcelFile(file_Accuracy)
# Select first sheet in the file
_df_accu = accu.parse(accu.sheet_names[0])
return _df_accu
def get_web_data(symbols, dates):
#"""Read stock data (adjusted close) for given symbols from CSV files."""
df_final = pd.DataFrame(index=dates)
i=0
fechastart='2007-01-01'
fechaend='2017-01-01'
for symbol in symbols:
        import pandas_datareader.data as web  # lazy import; the module-level import is commented out
        df_temp = web.DataReader(symbol, start=fechastart, end=fechaend, data_source='yahoo')[["Adj Close"]]
df_temp = df_temp.rename(columns={"Adj Close": symbol})
df_final = df_final.join(df_temp)
i+=1
if i == 1: # drop dates SPY did not trade
df_final = df_final.dropna(subset=[symbol])
return df_final
def get_web_quandl(symbol, simbolo):
    import quandl  # lazy import; the module-level import is commented out
    df = quandl.get(symbol, trim_start="01/01/2007", trim_end="01/01/2017", authtoken="_N85bWLCNCWz14smKHSi")
name=simbolo
df.columns.values[-1] = 'AdjClose'
df.columns = df.columns + '_' + name
df['Return_%s' %name] = df['AdjClose_%s' %name].pct_change()
return df
def adjusted_return(df, symbol,numDaysArray):
#df_final = pd.DataFrame(index=dates)
for n in numDaysArray:
        #["^GSPC","SPY","^IXIC", "^DJI", "^GDAXI", "^FTSE","^FCHI", "EUR","GBP", "SILVER", "WT1010"]
        if symbol in ("^GSPC", "^IXIC", "^DJI", "^GDAXI", "^FTSE", "^FCHI", "EUR", "GBP", "SILVER", "WT1010"):
            # forward-looking series: next-period n-day return
            M = pd.Series(df[symbol].pct_change(n).shift(-1), name=str(symbol) + '_Adj_' + str(n))
        else:
            M = pd.Series(df[symbol].pct_change(n), name=str(symbol) + '_Adj_' + str(n))
df = df.join(M)
return df
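The `pct_change(n).shift(-1)` pattern in `adjusted_return` aligns each row with its *next* n-period return, which is what makes those columns usable as prediction targets. A small hypothetical check of the alignment:

```python
import pandas as pd

# Today's row carries tomorrow's 1-period return after shift(-1).
s = pd.Series([100.0, 110.0, 121.0])
fwd = s.pct_change(1).shift(-1)
print(round(fwd.iloc[0], 6))  # 0.1 -> the 100 -> 110 move, seen from the first row
```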
def rolling_average(df, symbol,numDaysArray):
for n in numDaysArray:
if symbol=="^GSPC_Adj_1":
M=pd.Series(df[str(symbol)].rolling(window = n, center = False).mean().shift(1), name = str(symbol)+ '_Roll_Avg_' + str(n))
else:
M=pd.Series(df[str(symbol)].rolling(window = n, center = False).mean(), name = str(symbol)+ '_Roll_Avg_' + str(n))
df = df.join(M)
return df
def remove_col(df, symbol, symbol2):
if symbol != symbol2:
del df[symbol]
return df
def normalize(df, symbols):
result = df.copy()
for symbol in df.columns:
max_value = df[symbol].max()
min_value = df[symbol].min()
result[symbol] = (df[symbol] - min_value) / (max_value - min_value)
return result
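`normalize` above is plain column-wise min-max scaling; a toy illustration (not part of the original module):

```python
import pandas as pd

# Each column is mapped onto [0, 1] by (x - min) / (max - min).
df = pd.DataFrame({"price": [10.0, 15.0, 20.0]})
scaled = (df["price"] - df["price"].min()) / (df["price"].max() - df["price"].min())
print(scaled.tolist())  # [0.0, 0.5, 1.0]
```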
def mergeDataframes(out, dataset1,dataset2, symbol1,symbol2):
out= pd.merge(dataset1, dataset2, on='Date', how='left')
return out
def Volatity_n(df, symbol,numDaysArray):
#numDaysArray = [2, 21, 63]
#numDaysArray = [2]
for n in numDaysArray:
#M = pd.Series(df[symbol].pct_change(n).std(), name = 'Volatility_' + str(n))
#if symbol != "^GSPC":
df["Log_Ret"] = np.log(df[symbol] / df[symbol].shift(1))
df[str(symbol)+ "_Vol_"+ str(n)] = df["Log_Ret"].rolling(window=n, center=False).std() * np.sqrt(252) # if we multiply sqrt(252) is anualized volatility
del df["Log_Ret"]
return df
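`Volatity_n` builds a rolling standard deviation of log returns, scaled by `sqrt(252)` to annualize. A self-contained sketch of the same computation on made-up prices:

```python
import numpy as np
import pandas as pd

prices = pd.Series([100.0, 101.0, 99.0, 102.0, 103.0])
log_ret = np.log(prices / prices.shift(1))
# window=2 needs two valid log returns, so the first two entries stay NaN
vol = log_ret.rolling(window=2, center=False).std() * np.sqrt(252)
```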
def MOM(df, symbol,numDaysArray):
for n in numDaysArray:
#if symbol != "^GSPC":
M = pd.Series(df[symbol].diff(n), name = str(symbol)+ '_MoM_' + str(n))
df = df.join(M)
return df
def ExpMovingAverage(df, symbol,numDaysArray):
#ts_log = np.log(df[str(symbol)])
for n in numDaysArray:
#if symbol !="^GSPC":
M = pd.Series(df[symbol].ewm(span=n).mean(), name = str(symbol)+ '_EWA_' + str(n))
df = df.join(M)
#df[str(symbol)+ "_EMA_"+ str(n)] = pd.ewma(df[symbol], span=n, freq="D")
#Series.ewm(min_periods=0,adjust=True,ignore_na=False,span=1,freq=D).mean()
return df
def Label_Change(row):
    if row['^GSPC_Adj_1'] > 0:
        return 'Up'
    return 'Down'

def Label_Change2(row):
    if row['Real'] > 0:
        return 'Up'
    return 'Down'
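These labelling helpers feed a `LabelEncoder` in `prepareDataForClassification`. A toy run of the same idea (scikit-learn assumed available; the encoder sorts classes alphabetically, so `Down` maps to 0 and `Up` to 1):

```python
import pandas as pd
from sklearn import preprocessing

df = pd.DataFrame({"^GSPC_Adj_1": [0.01, -0.02, 0.03]})
df["UpDown"] = df.apply(lambda row: "Up" if row["^GSPC_Adj_1"] > 0 else "Down", axis=1)
le = preprocessing.LabelEncoder()
df["UpDown"] = le.fit(df["UpDown"]).transform(df["UpDown"])
print(df["UpDown"].tolist())  # [1, 0, 1]
```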
def Cleaning(df):  # replace inf with NaN, then backfill/forward-fill remaining gaps
    df = df.replace([np.inf, -np.inf], np.nan)
    df.fillna(method='bfill', inplace=True)
    df.fillna(method='ffill', inplace=True)
    return df
def plotvalidation_curve(y_test,predictions):
fig, ax = plt.subplots()
ax.scatter(y_test, predictions)
ax.plot([y_test.min(), y_test.max()], [y_test.min(), y_test.max()], 'k--', lw=4)
ax.set_xlabel('Measured')
ax.set_ylabel('Predicted')
plt.show()
return
def prepareDataForClassification(dataset, start_test, test):
le = preprocessing.LabelEncoder()
dataset['UpDown'] = dataset.apply (lambda row: Label_Change (row),axis=1)
dataset1=dataset.truncate(before='2003-07-01') #delete all values up to the first rolling average 63 days
#dataset.UpDown[dataset.UpDown >= 0] = 'Up'
#dataset.UpDown[dataset.UpDown < 0] = 'Down'
dataset1.UpDown = le.fit(dataset1.UpDown).transform(dataset1.UpDown)
    # tests 1-6 all use the same feature set: every column except the label
    # and the 1-day adjusted return itself
    if test in (1, 2, 3, 4, 5, 6):
        features = dataset1.columns[0:-1]
        X = dataset1[features]
        del X["^GSPC_Adj_1"]
y = dataset1.UpDown
X_train = X[X.index < start_test]
y_train = y[y.index < start_test]
X_test = X[X.index >= start_test]
y_test = y[y.index >= start_test]
#path = os.path.dirname(os.path.realpath(__file__))
#file_path = path + "\\raw_data\\" + "y_train.csv"
#y_train.to_csv(file_path, index = False)
#file_path = path + "\\raw_data\\" + "y_testn.csv"
#y_test.to_csv(file_path, index = False)
#write_Ex(X_train, "X_train.xlsx", "X_train")
#write_Ex(X_test, "X_test.xlsx", "X_test")
return X, X_train, y_train, X_test, y_test
def performEnsemberBlendingClass(X_train, y_train, X_test, y_test):#, parameters, fout, savemodel):
print("Ensembler Blending Classification")
start_date = "2013-01-01"
end_date = "2017-01-01"
dates = pd.date_range(start_date, end_date)
df_out = pd.DataFrame(index=dates)
df_out["Real"] = y_test.copy()
df_out=df_out.dropna() # so we can modify it
le = preprocessing.LabelEncoder()
df_out['UpDown'] = df_out.apply (lambda row: Label_Change2 (row),axis=1)
df_out.UpDown = le.fit(df_out.UpDown).transform(df_out.UpDown)
for i in range(50):
clf1 = SVC(C=((i+1)*100)+3200, kernel= 'sigmoid', gamma=0.05) #'C': 3500, 'kernel': 'sigmoid', 'gamma': 0.05
clf2 = RandomForestClassifier(n_estimators=4000, criterion= 'gini') #'criterion': 'gini', 'n_estimators': 4000}
clf3 = SGDClassifier(penalty='elasticnet', loss='perceptron', learning_rate= 'invscaling', eta0=0.1, alpha=0.001)
#{'penalty': 'elasticnet', 'eta0': 0.1, 'alpha': 0.001, 'loss': 'perceptron', 'learning_rate': 'invscaling'}
clf4 = AdaBoostClassifier(n_estimators=200, algorithm='SAMME')
clf5= GradientBoostingClassifier(min_samples_leaf=75, n_estimators=90+(i*10), max_features= 'auto',min_samples_split=300, learning_rate=0.1)
clf6 = neighbors.KNeighborsClassifier(algorithm='ball_tree', n_neighbors=350, weights='distance', leaf_size= 30)
clf7 = LinearDiscriminantAnalysis(store_covariance=True, n_components= 1, solver= 'lsqr', shrinkage= 0.1)
print("Fitting")
print("SVC"+str(i))
clf1.fit(X_train, y_train)
print("RFC"+str(i))
clf2.fit(X_train, y_train)
print("SGD"+str(i))
clf3.fit(X_train, y_train)
print("ADA"+str(i))
clf4.fit(X_train, y_train)
print("GTB"+str(i))
clf5.fit(X_train, y_train)
print("KNN"+str(i))
clf6.fit(X_train, y_train)
print("LDA"+str(i))
clf7.fit(X_train, y_train)
print("predicting")
df_out['SVM'+str(i)] = clf1.predict(X_test)
df_out['RFC'+str(i)] = clf2.predict(X_test)
df_out['SDG'+str(i)] = clf3.predict(X_test)
df_out['ADA'+str(i)] = clf4.predict(X_test)
df_out['GTB'+str(i)] = clf5.predict(X_test)
df_out['KNN'+str(i)] = clf6.predict(X_test)
df_out['LDA'+str(i)] = clf7.predict(X_test)
features = df_out.columns[1:]
X = df_out[features]
del X["UpDown"]
#print(X)
y = df_out.UpDown
start_test="2016-01-01"
#print(y)
X_train = X[X.index < start_test]
y_train = y[y.index < start_test]
X_test = X[X.index >= start_test]
y_test = y[y.index >= start_test]
#{'kernel': 'rbf', 'C': 100.0, 'gamma': 0.1}
#{'penalty': 'l2', 'C': 0.001, 'solver': 'newton-cg'}
clfsb= LogisticRegression(penalty= 'l2', C= 0.001, solver= 'newton-cg')
clfsb.fit(X_train, y_train)
clfsb2= SVC(C=100,kernel= 'rbf', gamma=0.1)
clfsb2.fit(X_train, y_train)
#training=end-start
accuracy = clfsb.score(X_test, y_test)
accuracy2 = clfsb2.score(X_test, y_test)
#start = datetime.datetime.now()
predictions = clfsb.predict(X_test)
predictions2 = clfsb2.predict(X_test)
#end = datetime.datetime.now()
Sscore = f1_score(y_test, predictions)
Sscore2 = f1_score(y_test, predictions2)
#predecir=end-start
print("F1 Score Blender " +str(Sscore))
print("Accuracy Blender " +str(accuracy))
print("F1 Score Blender 2 " +str(Sscore2))
print("Accuracy Blender 2 " +str(accuracy2))
cm = confusion_matrix(y_test, predictions)
plot_cm(cm,"Blender1 Classification")
cm1 = confusion_matrix(y_test, predictions2)
plot_cm(cm1,"Blender2 Classification")
return df_out, accuracy, Sscore, accuracy2, Sscore2
def CVperformEnsemberBlendingClass(X_train, y_train, X_test, y_test):#, parameters, fout, savemodel):
def f1_score_on_test(estimator, x, y):
y_pred = estimator.predict(X_test)
return f1_score(y_test.values, y_pred)
print("Ensembler Blending Classification")
start_date = "2013-01-01"
end_date = "2017-01-01"
dates = pd.date_range(start_date, end_date)
df_out = pd.DataFrame(index=dates)
df_out["Real"] = y_test.copy()
df_out=df_out.dropna() # so we can modify it
le = preprocessing.LabelEncoder()
df_out['UpDown'] = df_out.apply (lambda row: Label_Change2 (row),axis=1)
df_out.UpDown = le.fit(df_out.UpDown).transform(df_out.UpDown)
for i in range(2):
clf1 = SVC(C=((i+1)*100)+3200, kernel= 'sigmoid', gamma=0.05) #'C': 3500, 'kernel': 'sigmoid', 'gamma': 0.05
clf2 = RandomForestClassifier(n_estimators=4000, criterion= 'gini') #'criterion': 'gini', 'n_estimators': 4000}
clf3 = SGDClassifier(penalty='elasticnet', loss='perceptron', learning_rate= 'invscaling', eta0=0.1, alpha=0.001)
#{'penalty': 'elasticnet', 'eta0': 0.1, 'alpha': 0.001, 'loss': 'perceptron', 'learning_rate': 'invscaling'}
clf4 = AdaBoostClassifier(n_estimators=200, algorithm='SAMME')
clf5= GradientBoostingClassifier(min_samples_leaf=75, n_estimators=90+(i*10), max_features= 'auto',min_samples_split=300, learning_rate=0.1)
clf6 = neighbors.KNeighborsClassifier(algorithm='ball_tree', n_neighbors=350, weights='distance', leaf_size= 30)
clf7 = LinearDiscriminantAnalysis(store_covariance=True, n_components= 1, solver= 'lsqr', shrinkage= 0.1)
print("Fitting")
print("SVC"+str(i))
clf1.fit(X_train, y_train)
print("RFC"+str(i))
clf2.fit(X_train, y_train)
print("SGD"+str(i))
clf3.fit(X_train, y_train)
print("ADA"+str(i))
clf4.fit(X_train, y_train)
print("GTB"+str(i))
clf5.fit(X_train, y_train)
print("KNN"+str(i))
clf6.fit(X_train, y_train)
print("LDA"+str(i))
clf7.fit(X_train, y_train)
print("predicting")
df_out['SVM'+str(i)] = clf1.predict(X_test)
df_out['RFC'+str(i)] = clf2.predict(X_test)
df_out['SDG'+str(i)] = clf3.predict(X_test)
df_out['ADA'+str(i)] = clf4.predict(X_test)
df_out['GTB'+str(i)] = clf5.predict(X_test)
df_out['KNN'+str(i)] = clf6.predict(X_test)
df_out['LDA'+str(i)] = clf7.predict(X_test)
features = df_out.columns[1:]
X = df_out[features]
del X["UpDown"]
#print(X)
y = df_out.UpDown
start_test="2016-01-01"
#print(y)
X_train = X[X.index < start_test]
y_train = y[y.index < start_test]
X_test = X[X.index >= start_test]
y_test = y[y.index >= start_test]
#clfsb= LogisticRegression(penalty= 'l1', C= 1, solver= 'liblinear')
#clfsb.fit(X_train, y_train)
#clfsb2= SVC(C=0.01,kernel= 'rbf')
#clfsb2.fit(X_train, y_train)
#training=end-start
params_map = [{ 'C': [0.001,0.005, 0.01,0.02, 0.1,0.4, 0.5,0.6,0.7, 1.0, 2.0, 5.0, 10.0, 100.0],'kernel': ['rbf'],'gamma': ['auto',0.001,0.01, 0.05,0.1, 0.5, 1.0,4.0, 5.0, 6.0, 10.0,50.0,100.0,150.0]},
{'kernel': ['linear'], 'C': [1, 10, 100, 1000, 1500, 2000, 2500,3000, 3500,4000]}]#,
#{'kernel': ['poly'], 'C': [1, 10, 100, 1000, 1500, 2000, 2500,3000, 3500,4000],'degree':[2,3,4,5,6,7,8,9,10,15], 'gamma': ['auto',0.001,0.01, 0.05,0.1, 0.5, 1.0, 5.0]},
#{'kernel': ['sigmoid'], 'C': [1, 10, 100, 1000, 1500, 2000, 2500,3000, 3500,4000], 'gamma': ['auto',0.001,0.01, 0.05,0.1, 0.5, 1.0, 5.0]}]
clf = GridSearchCV(SVC(), params_map, scoring=f1_score_on_test, verbose=100)
test=5
clf.fit(X_train, y_train)
#joblib.dump(clf.best_estimator_, file_path, compress = 3)
    df = pd.DataFrame(clf.cv_results_)  # grid_scores_ was removed in scikit-learn 0.20
total_file = os.path.dirname(os.path.realpath(__file__)) + "\\CV\\CVBlender_SVC_" + str(test)+ ".xlsx"
writer = pd.ExcelWriter(total_file)
df.to_excel(writer,"SVC")
writer.save()
param_grid = [{'C':[0.001, 0.01, 0.1, 1, 10, 100, 1000], 'penalty' :["l1"], 'solver' :["liblinear"] },
{'C':[0.001, 0.01, 0.1, 1, 10, 100, 1000], 'penalty' :["l2"], 'solver' :[ "newton-cg", "lbfgs", "sag"] }]
clf1 = GridSearchCV(LogisticRegression(), param_grid, scoring=f1_score_on_test, verbose=100)
clf1.fit(X_train, y_train)
    df = pd.DataFrame(clf1.cv_results_)  # grid_scores_ was removed in scikit-learn 0.20
total_file = os.path.dirname(os.path.realpath(__file__)) + "\\CV\\CVBlenderLRC_" + str(test)+ ".xlsx"
writer = pd.ExcelWriter(total_file)
df.to_excel(writer,"LRC")
writer.save()
return clf.best_params_, clf1.best_params_
def performKNNClass(X_train, y_train, X_test, y_test):#, parameters, fout, savemodel):
print("KNN binary Classification")
#{{'leaf_size': 30, 'algorithm': 'ball_tree', 'n_neighbors': 350, 'weights': 'distance'}
clf = neighbors.KNeighborsClassifier(algorithm='ball_tree', n_neighbors=300, weights='distance', leaf_size= 30)
#clf = neighbors.KNeighborsClassifier()
start = datetime.datetime.now()
clf.fit(X_train, y_train)
end = datetime.datetime.now()
training=end-start
accuracy = clf.score(X_test, y_test)
start = datetime.datetime.now()
predictions = clf.predict(X_test)
end = datetime.datetime.now()
Sscore = f1_score(y_test, predictions)
roc=mat=0
if kFOLDS != 2:
roc = roc_auc_score(y_test, predictions)
mat = matthews_corrcoef(y_test, predictions)
predecir=end-start
#cm = confusion_matrix(y_test, predictions)
#plot_cm(cm,"KNN Binary Classification")
print("F1 Score KNN " +str(Sscore))
return accuracy, Sscore, y_test,predictions , training, predecir,roc,mat
def performRFClass(X_train, y_train, X_test, y_test):#, parameters, fout, savemodel):
print("Random Forest Binary Classification")
clf = RandomForestClassifier(n_estimators=500, criterion= 'gini')
#clf = RandomForestClassifier()# '{'n_estimators': 500, 'criterion': 'gini'}
start = datetime.datetime.now()
clf.fit(X_train, y_train)
end = datetime.datetime.now()
training=end-start
accuracy = clf.score(X_test, y_test)
start = datetime.datetime.now()
predictions = clf.predict(X_test)
end = datetime.datetime.now()
Sscore = f1_score(y_test, predictions)
roc=mat=0
if kFOLDS != 2:
roc = roc_auc_score(y_test, predictions)
mat = matthews_corrcoef(y_test, predictions)
predecir=end-start
#cm = confusion_matrix(y_test, predictions)
#plot_cm(cm,"Random Forest Binary")
print("F1 Score RFC " +str(Sscore))
return accuracy, Sscore, y_test,predictions , training, predecir,roc,mat
def performSGDClass(X_train, y_train, X_test, y_test):#, parameters, fout, savemodel):
print("Stochastic Gradient Descent binary Classification")
#{'learning_rate': 'invscaling', 'alpha': 0.01, 'loss': 'perceptron', 'penalty': 'elasticnet', 'eta0': 0.2}
clf = SGDClassifier( alpha= 0.01,penalty='elasticnet', loss='perceptron', learning_rate= 'invscaling', eta0=0.2)
#clf = SGDClassifier()
start = datetime.datetime.now()
clf.fit(X_train, y_train)
end = datetime.datetime.now()
training=end-start
accuracy = clf.score(X_test, y_test)
start = datetime.datetime.now()
predictions = clf.predict(X_test)
end = datetime.datetime.now()
Sscore = f1_score(y_test, predictions)
roc=mat=0
if kFOLDS != 2:
roc = roc_auc_score(y_test, predictions)
mat = matthews_corrcoef(y_test, predictions)
predecir=end-start
#cm = confusion_matrix(y_test, predictions)
#plot_cm(cm,"Stochastic Gradient Descent")
print("F1 Score SGD " +str(Sscore))
return accuracy, Sscore, y_test,predictions , training, predecir,roc,mat
def performSVMClass(X_train, y_train, X_test, y_test):#, parameters, fout, savemodel):
    print("SVM binary Classification")
    #print(n_components)
    #c = parameters[0]
    #g = parameters[1]
    #{'C': 3500, 'kernel': 'sigmoid', 'gamma': 0.05}
    #{'kernel': 'rbf', 'C': 0.1, 'gamma': 100.0}
    clf = SVC(C=2000, kernel='linear')
    start = datetime.datetime.now()
    clf.fit(X_train, y_train)
    end = datetime.datetime.now()
    training = end - start
    accuracy = clf.score(X_test, y_test)
    start = datetime.datetime.now()
    #df_out['SVM'] = clf.predict(X_test)
    predictions = clf.predict(X_test)
    end = datetime.datetime.now()
    Sscore = f1_score(y_test, predictions, average='binary')
    #cm = confusion_matrix(y_test, predictions)
    #plot_cm(cm, "Support Vector Machines")
    roc = mat = 0
    if kFOLDS != 2:
        roc = roc_auc_score(y_test, predictions)
        mat = matthews_corrcoef(y_test, predictions)
    predecir = end - start
    #write_Ex(df_out, "df_out.xlsx", "df_out")
    #plotvalidation_curve(y_test, predictions)
    print("F1 Score SVM " + str(Sscore))
    return accuracy, Sscore, y_test, predictions, training, predecir, roc, mat
def performSVM_PCAClass(X_train, y_train, X_test, y_test):#, parameters, fout, savemodel):
    print("SVM with PCA binary Classification")
    start_date = "2013-01-01"
    end_date = "2017-01-01"
    dates = pd.date_range(start_date, end_date)
    df_out = pd.DataFrame(index=dates)
    df_out["Real"] = y_test.copy()
    df_out = df_out.dropna()
    n_components = len(X_train.columns) - 1
    pca = PCA(copy=True, n_components=n_components, whiten=False).fit(X_train)
    X_train_pca = pca.transform(X_train)
    X_test_pca = pca.transform(X_test)
    #print(n_components)
    #c = parameters[0]
    #g = parameters[1]
    #{'C': 3500, 'kernel': 'sigmoid', 'gamma': 0.05}
    #{'kernel': 'rbf', 'C': 0.1, 'gamma': 100.0}
    clf = SVC(C=3500, kernel='sigmoid', gamma=0.05)
    start = datetime.datetime.now()
    clf.fit(X_train_pca, y_train)
    end = datetime.datetime.now()
    training = end - start
    accuracy = clf.score(X_test_pca, y_test)
    start = datetime.datetime.now()
    df_out['SVM'] = clf.predict(X_test_pca)
    predictions = clf.predict(X_test_pca)
    end = datetime.datetime.now()
    Sscore = f1_score(y_test, predictions, average='binary')
    #cm = confusion_matrix(y_test, predictions)
    #plot_cm(cm, "Support Vector Machines")
    roc = mat = 0
    if kFOLDS != 2:
        roc = roc_auc_score(y_test, predictions)
        mat = matthews_corrcoef(y_test, predictions)
    predecir = end - start
    #write_Ex(df_out, "df_out.xlsx", "df_out")
    #plotvalidation_curve(y_test, predictions)
    print("F1 Score SVM " + str(Sscore))
    return accuracy, Sscore, y_test, predictions, training, predecir, roc, mat
def performAdaBoostClass(X_train, y_train, X_test, y_test):#, parameters, fout, savemodel):
    print("Ada Boosting binary Classification")
    #{'algorithm': 'SAMME.R', 'n_estimators': 80, 'learning_rate': 0.7}
    clf = AdaBoostClassifier(n_estimators=80, algorithm='SAMME.R', learning_rate=0.7)
    #clf = AdaBoostClassifier()
    start = datetime.datetime.now()
    clf.fit(X_train, y_train)
    end = datetime.datetime.now()
    training = end - start
    accuracy = clf.score(X_test, y_test)
    start = datetime.datetime.now()
    predictions = clf.predict(X_test)
    end = datetime.datetime.now()
    Sscore = f1_score(y_test, predictions)
    roc = mat = 0
    if kFOLDS != 2:
        roc = roc_auc_score(y_test, predictions)
        mat = matthews_corrcoef(y_test, predictions)
    #cm = confusion_matrix(y_test, predictions)
    #plot_cm(cm, "Ada Boosting binary Classification")
    predecir = end - start
    print("F1 Score ADA " + str(Sscore))
    return accuracy, Sscore, y_test, predictions, training, predecir, roc, mat
def performGTBClass(X_train, y_train, X_test, y_test):#, parameters, fout, savemodel):
    print("Gradient Tree Boosting binary Classification")
    #{'max_features': 'auto', 'min_samples_split': 300, 'min_samples_leaf': 75, 'n_estimators': 90, 'learning_rate': 0.1}
    clf = GradientBoostingClassifier(min_samples_leaf=75, n_estimators=90, max_features='auto', min_samples_split=300, learning_rate=0.1)
    #clf = GradientBoostingClassifier()
    start = datetime.datetime.now()
    clf.fit(X_train, y_train)
    end = datetime.datetime.now()
    training = end - start
    accuracy = clf.score(X_test, y_test)
    start = datetime.datetime.now()
    predictions = clf.predict(X_test)
    end = datetime.datetime.now()
    Sscore = f1_score(y_test, predictions)
    roc = mat = 0
    if kFOLDS != 2:
        roc = roc_auc_score(y_test, predictions)
        mat = matthews_corrcoef(y_test, predictions)
    #cm = confusion_matrix(y_test, predictions)
    #plot_cm(cm, "Gradient Tree Boosting Classification")
    predecir = end - start
    print("F1 Score GTB " + str(Sscore))
    return accuracy, Sscore, y_test, predictions, training, predecir, roc, mat
def performLRBClass(X_train, y_train, X_test, y_test):#, parameters, fout, savemodel):
    print("Logistic Regression Binary Classification")
    clf = LogisticRegression(penalty='l1', C=1, solver='liblinear')  # {'penalty': 'l1', 'C': 1, 'solver': 'liblinear'}
    start = datetime.datetime.now()
    clf.fit(X_train, y_train)
    end = datetime.datetime.now()
    training = end - start
    accuracy = clf.score(X_test, y_test)
    start = datetime.datetime.now()
    predictions = clf.predict(X_test)
    end = datetime.datetime.now()
    Sscore = f1_score(y_test, predictions)
    roc = mat = 0
    if kFOLDS != 2:
        roc = roc_auc_score(y_test, predictions)
        mat = matthews_corrcoef(y_test, predictions)
    predecir = end - start
    #cm = confusion_matrix(y_test, predictions)
    #plot_cm(cm, "Logistic Regression Classification")
    print("F1 Score Logistic Regression " + str(Sscore))
    return accuracy, Sscore, y_test, predictions, training, predecir, roc, mat
def performLDAClass(X_train, y_train, X_test, y_test):#, parameters, fout, savemodel):
    print("Linear Discriminant Analysis binary Classification")
    #{'store_covariance': 'True', 'n_components': 1, 'solver': 'lsqr', 'shrinkage': 0.1}
    clf = LinearDiscriminantAnalysis(store_covariance=True, n_components=1, tol=0.5, solver='svd')
    #clf = LinearDiscriminantAnalysis()
    start = datetime.datetime.now()
    clf.fit(X_train, y_train)
    end = datetime.datetime.now()
    training = end - start
    accuracy = clf.score(X_test, y_test)
    start = datetime.datetime.now()
    predictions = clf.predict(X_test)
    end = datetime.datetime.now()
    Sscore = f1_score(y_test, predictions)
    roc = mat = 0
    if kFOLDS != 2:
        roc = roc_auc_score(y_test, predictions)
        mat = matthews_corrcoef(y_test, predictions)
    predecir = end - start
    print("F1 Score LDA " + str(Sscore))
    #cm = confusion_matrix(y_test, predictions)
    #plot_cm(cm, "Linear Discriminant Analysis Classification")
    return accuracy, Sscore, y_test, predictions, training, predecir, roc, mat
def performVotingClass(X_train, y_train, X_test, y_test):#, parameters, fout, savemodel):
    print("Voting binary Classification")
    #clf = LinearDiscriminantAnalysis()
    clf1 = SVC(C=2000, kernel='linear')
    clf2 = neighbors.KNeighborsClassifier(algorithm='ball_tree', n_neighbors=350, weights='distance', leaf_size=30)
    #clf2 = RandomForestClassifier(random_state=1)
    clf3 = LogisticRegression(penalty='l1', C=1, solver='liblinear')
    clf4 = SGDClassifier(alpha=0.01, penalty='elasticnet', loss='perceptron', learning_rate='invscaling', eta0=0.2)
    eclf = VotingClassifier(estimators=[('svc', clf1), ('knn', clf2), ('lgr', clf3), ('sdg', clf4)], voting='hard')
    start = datetime.datetime.now()
    clf1.fit(X_train, y_train)
    clf2.fit(X_train, y_train)
    clf3.fit(X_train, y_train)
    clf4.fit(X_train, y_train)
    eclf.fit(X_train, y_train)
    end = datetime.datetime.now()
    training = end - start
    accuracy = eclf.score(X_test, y_test)
    start = datetime.datetime.now()
    predictions = eclf.predict(X_test)
    end = datetime.datetime.now()
    Sscore = f1_score(y_test, predictions)
    roc = mat = 0
    if kFOLDS != 2:
        roc = roc_auc_score(y_test, predictions)
        mat = matthews_corrcoef(y_test, predictions)
    predecir = end - start
    print("F1 Score Voting Classifier " + str(Sscore))
    #cm = confusion_matrix(y_test, predictions)
    #plot_cm(cm, "Voting binary Classification")
    return accuracy, Sscore, y_test, predictions, training, predecir, roc, mat
def performDTCClass(X_train, y_train, X_test, y_test):#, parameters, fout, savemodel):
    print("DecisionTreeClassifier Classification")
    clf = DecisionTreeClassifier(max_features='sqrt', criterion='entropy')  # {'max_features': 'sqrt', 'criterion': 'entropy'}
    start = datetime.datetime.now()
    clf.fit(X_train, y_train)
    end = datetime.datetime.now()
    training = end - start
    accuracy = clf.score(X_test, y_test)
    start = datetime.datetime.now()
    predictions = clf.predict(X_test)
    end = datetime.datetime.now()
    Sscore = f1_score(y_test, predictions)
    roc = mat = 0
    if kFOLDS != 2:
        roc = roc_auc_score(y_test, predictions)
        mat = matthews_corrcoef(y_test, predictions)
    predecir = end - start
    #cm = confusion_matrix(y_test, predictions)
    #plot_cm(cm, "Decision Tree Classification")
    print("F1 Score DecisionTreeClassifier " + str(Sscore))
    return accuracy, Sscore, y_test, predictions, training, predecir, roc, mat
def GSLDA(X_train, y_train, X_test, y_test, test):
    #Choose all predictors except target & IDcols
    def f1_score_on_test(estimator, x, y):
        # Note: this scorer ignores the CV fold (x, y) and always evaluates
        # on the fixed external test set.
        y_pred = estimator.predict(X_test)
        return f1_score(y_test.values, y_pred)
    n_components_range = [1, 2, 10, 20, 30, 40, 50, 60, 80, 100, 200, 300, 500, 1000]
    params_map = [{'solver': ['lsqr', 'eigen'], 'shrinkage': ['auto', 0.1, 0.3, 0.5, 0.7, 0.9],
                   'store_covariance': [True, False], 'n_components': n_components_range},
                  {'solver': ['svd'], 'tol': [0.3, 0.45, 0.5, 0.55, 0.8, 1.0],
                   'store_covariance': [True, False], 'n_components': n_components_range}]
    clf = GridSearchCV(LinearDiscriminantAnalysis(), param_grid=params_map, scoring=f1_score_on_test, verbose=100)
    clf.fit(X_train, y_train)
    df = pd.DataFrame(clf.grid_scores_)
    total_file = os.path.dirname(os.path.realpath(__file__)) + "\\CV\\CVLDA_" + str(test) + ".xlsx"
    writer = pd.ExcelWriter(total_file)
    df.to_excel(writer, "LDA")
    writer.save()
    return clf.best_params_
def GSGTB(X_train, y_train, X_test, y_test, test):
    #Choose all predictors except target & IDcols
    def f1_score_on_test(estimator, x, y):
        y_pred = estimator.predict(X_test)
        return f1_score(y_test.values, y_pred)
    param_test1 = {'n_estimators': [20, 40, 50, 60, 70, 80, 90, 100, 150, 200], 'learning_rate': [0.1, 0.2],
                   'min_samples_split': [100, 200, 300], 'min_samples_leaf': [10, 30, 40, 50, 75],
                   'max_features': ['auto', 'sqrt']}
    #,'subsample': [0.6, 0.8, 1.0], 'random_state': [10, 20, 30]}
    #param_test2 = [{'n_estimators': [20, 40, 50, 60, 70, 80, 90, 100, 150, 200]}]
    clf = GridSearchCV(GradientBoostingClassifier(), param_grid=param_test1, scoring=f1_score_on_test, n_jobs=1, iid=False, cv=5, verbose=100)
    #clf = GridSearchCV(estimator=GradientBoostingClassifier(learning_rate=0.1, min_samples_split=500, min_samples_leaf=50, max_depth=8, max_features='sqrt', subsample=0.8, random_state=10), param_grid=param_test2, scoring='roc_auc', n_jobs=1, iid=False, cv=5, verbose=100)
    clf.fit(X_train, y_train)
    df = pd.DataFrame(clf.grid_scores_)
    total_file = os.path.dirname(os.path.realpath(__file__)) + "\\CV\\CVGTB_" + str(test) + ".xlsx"
    writer = pd.ExcelWriter(total_file)
    df.to_excel(writer, "GTB")
    writer.save()
    return clf.best_params_
def GSSGD(X_train, y_train, X_test, y_test, test):
    #Choose all predictors except target & IDcols
    def f1_score_on_test(estimator, x, y):
        y_pred = estimator.predict(X_test)
        return f1_score(y_test.values, y_pred)
    print("Stochastic Gradient Descent binary Classification")
    #c = parameters[0]
    #parameters = {'loss': ['hinge', 'log', 'modified_huber', 'squared_hinge', 'perceptron'], 'alpha': [0.1, 0.01, 0.001, 0.0001, 0.00001, 0.000001, 0.0000001], 'n_iter': list(np.arange(1, 1001))}
    #clf = SVC(C=5, kernel='rbf', gamma=0.01)  # {'kernel': 'rbf', 'C': 100.0, 'gamma': 0.5}
    param_test1 = [{'loss': ['hinge', 'log', 'modified_huber', 'squared_hinge', 'perceptron'],
                    'penalty': ['none', 'l2', 'l1', 'elasticnet'],
                    'learning_rate': ['constant', 'optimal', 'invscaling'],
                    'eta0': [0.1, 0.2], 'alpha': [0.1, 0.01, 0.001, 0.0001, 0.00001, 0.000001, 0.0000001]}]
    clf = GridSearchCV(SGDClassifier(), param_grid=param_test1, scoring=f1_score_on_test, n_jobs=1, iid=False, cv=5, verbose=100)
    clf.fit(X_train, y_train)
    df = pd.DataFrame(clf.grid_scores_)
    total_file = os.path.dirname(os.path.realpath(__file__)) + "\\CV\\CVSGD_" + str(test) + ".xlsx"
    writer = pd.ExcelWriter(total_file)
    df.to_excel(writer, "SGD")
    writer.save()
    return clf.best_params_
def GSADA(X_train, y_train, X_test, y_test, test):
    def f1_score_on_test(estimator, x, y):
        y_pred = estimator.predict(X_test)
        return f1_score(y_test.values, y_pred)  #return f1_score(y_test.values, y_pred, pos_label=1)
    param_grid = [{'n_estimators': [20, 40, 50, 60, 70, 80, 90, 100, 150, 200],
                   'algorithm': ['SAMME', 'SAMME.R'], 'learning_rate': [0.1, 0.3, 0.5, 0.7, 1]}]
    DTC = DecisionTreeClassifier(random_state=11, max_features="auto", class_weight="balanced")
    #DTC = SVC(kernel='rbf', C=10.0, gamma=0.1)
    ABC = AdaBoostClassifier(base_estimator=DTC)
    clf = GridSearchCV(ABC, param_grid=param_grid, scoring=f1_score_on_test, verbose=100)
    clf.fit(X_train, y_train)
    df = pd.DataFrame(clf.grid_scores_)
    total_file = os.path.dirname(os.path.realpath(__file__)) + "\\CV\\CVADA_" + str(test) + ".xlsx"
    writer = pd.ExcelWriter(total_file)
    df.to_excel(writer, "ADA")
    writer.save()
    return clf.best_params_
def GSDTC(X_train, y_train, X_test, y_test, test):
    def f1_score_on_test(estimator, x, y):
        y_pred = estimator.predict(X_test)
        return f1_score(y_test.values, y_pred)
    param_grid = [{'criterion': ['gini', 'entropy'], 'max_features': ['auto', 'sqrt', 'log2']}]
    clf = GridSearchCV(DecisionTreeClassifier(), param_grid=param_grid, scoring=f1_score_on_test, verbose=100)
    clf.fit(X_train, y_train)
    df = pd.DataFrame(clf.grid_scores_)
    total_file = os.path.dirname(os.path.realpath(__file__)) + "\\CV\\CVDTC_" + str(test) + ".xlsx"
    writer = pd.ExcelWriter(total_file)
    df.to_excel(writer, "DTC")
    writer.save()
    return clf.best_params_
def GSSVC(X_train, y_train, X_test, y_test, test):
    def f1_score_on_test(estimator, x, y):
        y_pred = estimator.predict(X_test)
        return f1_score(y_test.values, y_pred)
    # TODO: Create the parameters list you wish to tune
    params_map = [{'C': [0.001, 0.005, 0.01, 0.02, 0.1, 0.4, 0.5, 0.6, 0.7, 1.0, 2.0, 5.0, 10.0, 100.0],
                   'kernel': ['rbf'],
                   'gamma': ['auto', 0.001, 0.01, 0.05, 0.1, 0.5, 1.0, 4.0, 5.0, 6.0, 10.0, 50.0, 100.0, 150.0]},
                  {'kernel': ['linear'], 'C': [1, 10, 100, 1000, 1500, 2000, 2500, 3000, 3500, 4000]}]
    #{'kernel': ['poly'], 'C': [1, 10, 100, 1000, 1500, 2000, 2500, 3000, 3500, 4000], 'degree': [2, 3, 4, 5, 6, 7, 8, 9, 10, 15], 'gamma': ['auto', 0.001, 0.01, 0.05, 0.1, 0.5, 1.0, 5.0]},
    #{'kernel': ['sigmoid'], 'C': [1, 10, 100, 1000, 1500, 2000, 2500, 3000, 3500, 4000], 'gamma': ['auto', 0.001, 0.01, 0.05, 0.1, 0.5, 1.0, 5.0]}]
    clf = GridSearchCV(SVC(), params_map, scoring=f1_score_on_test, verbose=100)
    clf.fit(X_train, y_train)
    #joblib.dump(clf.best_estimator_, file_path, compress=3)
    df = pd.DataFrame(clf.grid_scores_)
    total_file = os.path.dirname(os.path.realpath(__file__)) + "\\CV\\CVSVC_" + str(test) + ".xlsx"
    writer = pd.ExcelWriter(total_file)
    df.to_excel(writer, "SVC")
    writer.save()
    return clf.best_params_
def GSKNN(X_train, y_train, X_test, y_test, test):
    def f1_score_on_test(estimator, x, y):
        y_pred = estimator.predict(X_test)
        return f1_score(y_test.values, y_pred)
    tuned_parameters = [{'n_neighbors': [50, 100, 150, 200, 250, 300, 350], 'weights': ['distance', 'uniform'],
                         'algorithm': ['ball_tree', 'kd_tree', 'brute'], 'leaf_size': [30, 40, 50, 60]}]
    clf = GridSearchCV(neighbors.KNeighborsClassifier(), tuned_parameters, cv=10, scoring=f1_score_on_test, verbose=100)
    clf.fit(X_train, y_train)
    df = pd.DataFrame(clf.grid_scores_)
    total_file = os.path.dirname(os.path.realpath(__file__)) + "\\CV\\CVKNN_" + str(test) + ".xlsx"
    writer = pd.ExcelWriter(total_file)
    df.to_excel(writer, "KNN")
    writer.save()
    return clf.best_estimator_
def GSRFC(X_train, y_train, X_test, y_test, test):
    def f1_score_on_test(estimator, x, y):
        y_pred = estimator.predict(X_test)
        return f1_score(y_test.values, y_pred)
    tuned_parameters = [{'n_estimators': [500, 1000, 2000, 2500, 3000, 3500, 4000, 4500, 5000],
                         'criterion': ['gini', 'entropy']}]
    clf = GridSearchCV(RandomForestClassifier(), tuned_parameters, scoring=f1_score_on_test, verbose=100)
    clf.fit(X_train, y_train)
    df = pd.DataFrame(clf.grid_scores_)
    total_file = os.path.dirname(os.path.realpath(__file__)) + "\\CV\\CVRFC_" + str(test) + ".xlsx"
    writer = pd.ExcelWriter(total_file)
    df.to_excel(writer, "RFC")
    writer.save()
    return clf.best_params_
def GSLRC(X_train, y_train, X_test, y_test, test):
    def f1_score_on_test(estimator, x, y):
        y_pred = estimator.predict(X_test)
        return f1_score(y_test.values, y_pred)
    param_grid = [{'C': [0.001, 0.01, 0.1, 1, 10, 100, 1000], 'penalty': ["l1"], 'solver': ["liblinear"]},
                  {'C': [0.001, 0.01, 0.1, 1, 10, 100, 1000], 'penalty': ["l2"], 'solver': ["newton-cg", "lbfgs", "sag"]}]
    clf = GridSearchCV(LogisticRegression(), param_grid, scoring=f1_score_on_test, verbose=100)
    clf.fit(X_train, y_train)
    df = pd.DataFrame(clf.grid_scores_)
    total_file = os.path.dirname(os.path.realpath(__file__)) + "\\CV\\CVLRC_" + str(test) + ".xlsx"
    writer = pd.ExcelWriter(total_file)
    df.to_excel(writer, "LRC")
    writer.save()
    return clf.best_params_
def TimeSeriesCrossValidation(X_train, y_train, number_folds, algorithm):# algorithm=["KNN","RFC","SVM","ADA Bost","GTB","LDA"]
    #print('Parameters --------------------------------> ', parameters)
    print('Size train set: ' + str(X_train.shape))
    k = int(np.floor(float(X_train.shape[0]) / number_folds))
    print('Size of each fold: ' + str(k))
    accuracies = np.zeros(number_folds - 1)
    F1scores = np.zeros(number_folds - 1)
    rocs = np.zeros(number_folds - 1)
    mats = np.zeros(number_folds - 1)
    # loop from the first 2 folds to the total number of folds
    for i in range(2, number_folds + 1):
        print('')
        split = float(i - 1) / i
        print('Splitting the first ' + str(i) + ' chunks at ' + str(i - 1) + '/' + str(i))
        X = X_train[:(k * i)]
        y = y_train[:(k * i)]
        # split percentage we have set above
        index = int(np.floor(X.shape[0] * split))
        # folds used to train the model
        X_trainFolds = X[:index]
        y_trainFolds = y[:index]
        # fold used to test the model
        X_testFolds = X[(index + 1):]
        y_testFolds = y[(index + 1):]
        # algorithm=["KNN","RFC","SVM","ADA Bost","GTB","LDA"]
        if algorithm == "ADA Bost":
            accuracies[i-2], F1scores[i-2], y_test_, predict, training, predecir, rocs[i-2], mats[i-2] = performAdaBoostClass(X_trainFolds, y_trainFolds, X_testFolds, y_testFolds)
        if algorithm == "LDA":
            accuracies[i-2], F1scores[i-2], y_test_, predict, training, predecir, rocs[i-2], mats[i-2] = performLDAClass(X_trainFolds, y_trainFolds, X_testFolds, y_testFolds)
        if algorithm == "GTB":
            accuracies[i-2], F1scores[i-2], y_test_, predict, training, predecir, rocs[i-2], mats[i-2] = performGTBClass(X_trainFolds, y_trainFolds, X_testFolds, y_testFolds)
        if algorithm == "KNN":
            accuracies[i-2], F1scores[i-2], y_test_, predict, training, predecir, rocs[i-2], mats[i-2] = performKNNClass(X_trainFolds, y_trainFolds, X_testFolds, y_testFolds)
        if algorithm == "RFC":
            accuracies[i-2], F1scores[i-2], y_test_, predict, training, predecir, rocs[i-2], mats[i-2] = performRFClass(X_trainFolds, y_trainFolds, X_testFolds, y_testFolds)
        if algorithm == "SVM":
            accuracies[i-2], F1scores[i-2], y_test_, predict, training, predecir, rocs[i-2], mats[i-2] = performSVMClass(X_trainFolds, y_trainFolds, X_testFolds, y_testFolds)
        if algorithm == "SGD":
            accuracies[i-2], F1scores[i-2], y_test_, predict, training, predecir, rocs[i-2], mats[i-2] = performSGDClass(X_trainFolds, y_trainFolds, X_testFolds, y_testFolds)
        if algorithm == "VOT":
            accuracies[i-2], F1scores[i-2], y_test_, predict, training, predecir, rocs[i-2], mats[i-2] = performVotingClass(X_trainFolds, y_trainFolds, X_testFolds, y_testFolds)
        if algorithm == "LRC":
            accuracies[i-2], F1scores[i-2], y_test_, predict, training, predecir, rocs[i-2], mats[i-2] = performLRBClass(X_trainFolds, y_trainFolds, X_testFolds, y_testFolds)
        if algorithm == "DTC":
            accuracies[i-2], F1scores[i-2], y_test_, predict, training, predecir, rocs[i-2], mats[i-2] = performDTCClass(X_trainFolds, y_trainFolds, X_testFolds, y_testFolds)
        print('Accuracy on fold ' + str(i) + ': ' + str(accuracies[i-2]))
        print('F1 Score on fold ' + str(i) + ': ' + str(F1scores[i-2]))
        print('Roc on fold ' + str(i) + ': ' + str(rocs[i-2]))
        print('Matthew Coef on fold ' + str(i) + ': ' + str(mats[i-2]))
    return accuracies.mean(), F1scores.mean(), y_test_, predict, training, predecir, rocs.mean(), mats.mean()
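# The expanding-window arithmetic in TimeSeriesCrossValidation above can be
# checked in isolation. The sketch below is a hedged, integer-arithmetic
# restatement (the helper name is hypothetical, not part of the original
# pipeline): fold i uses the first k * i rows and splits them at (i - 1) / i.

```python
def expanding_window_folds(n_samples, number_folds):
    """Yield (train_end, test_end) boundaries for an expanding window,
    mirroring the k * i chunking and (i - 1) / i split used above."""
    k = n_samples // number_folds          # size of each fold
    for i in range(2, number_folds + 1):
        chunk = k * i                      # the first i chunks of data
        index = chunk * (i - 1) // i       # train/test boundary inside the chunk
        yield index, chunk

# 100 samples split into 5 folds of 20: the training end grows each round.
folds = list(expanding_window_folds(100, 5))
print(folds)
```

# Each successive fold trains on a longer prefix and tests on the next chunk,
# which preserves temporal order (no future data leaks into training).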
def call_alg(X_train, y_train, X_test, y_test, alg):
    if alg == "KNN":
        score, f1score, y_test, predictions, training, predecir, roc, mat = performKNNClass(X_train, y_train, X_test, y_test)
    if alg == "RFC":
        score, f1score, y_test, predictions, training, predecir, roc, mat = performRFClass(X_train, y_train, X_test, y_test)
    if alg == "SVM":
        score, f1score, y_test, predictions, training, predecir, roc, mat = performSVMClass(X_train, y_train, X_test, y_test)
    if alg == "ADA Bost":
        score, f1score, y_test, predictions, training, predecir, roc, mat = performAdaBoostClass(X_train, y_train, X_test, y_test)
    if alg == "GTB":
        score, f1score, y_test, predictions, training, predecir, roc, mat = performGTBClass(X_train, y_train, X_test, y_test)
    if alg == "LDA":
        score, f1score, y_test, predictions, training, predecir, roc, mat = performLDAClass(X_train, y_train, X_test, y_test)
    if alg == "SGD":
        score, f1score, y_test, predictions, training, predecir, roc, mat = performSGDClass(X_train, y_train, X_test, y_test)
    if alg == "VOT":
        score, f1score, y_test, predictions, training, predecir, roc, mat = performVotingClass(X_train, y_train, X_test, y_test)
    if alg == "LRC":
        score, f1score, y_test, predictions, training, predecir, roc, mat = performLRBClass(X_train, y_train, X_test, y_test)
    if alg == "DTC":
        score, f1score, y_test, predictions, training, predecir, roc, mat = performDTCClass(X_train, y_train, X_test, y_test)
    return score, f1score, y_test, predictions, training, predecir, roc, mat
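# The routing in call_alg above could also be written as a dict-based dispatch
# table, avoiding the repeated if-chain. A minimal, self-contained sketch with
# placeholder handlers (the names below are illustrative, not from this file):

```python
def dispatch(alg, handlers, *args):
    """Look up alg in a handler table and call it; mirrors call_alg's routing."""
    if alg not in handlers:
        raise ValueError("unknown algorithm: " + alg)
    return handlers[alg](*args)

# Placeholder handlers standing in for performKNNClass, performRFClass, etc.
handlers = {"DOUBLE": lambda v: 2 * v, "SQUARE": lambda v: v * v}
result = dispatch("DOUBLE", handlers, 21)
```

# With real estimators, handlers would map "KNN" to performKNNClass and so on,
# and unknown algorithm names would fail loudly instead of raising an
# UnboundLocalError on the return line.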
def classifaction_report_csv(report, alg, version, TEST, kFOLDS, score, training, predecir, roc, mat):
    path = os.path.dirname(os.path.realpath(__file__))
    ahora = str(datetime.datetime.now().strftime("%y-%m-%d-%H-%M-%S"))
    file_path = path + "\\out\\" + "classification_report_" + str(alg) + ahora + ".csv"
    report_data = []
    lines = report.split('\n')
    for line in lines[2:-3]:
        row = {}
        row_data = line.split(' ')
        row['Algorithm'] = str(alg)
        row['Accuracy'] = str(score)
        row['roc_auc_score'] = str(roc)
        row['matthews_corrcoef'] = str(mat)
        row['Features'] = str(version)
        row['Version'] = str(TEST)
        row['KFolds'] = str(kFOLDS)
        row['Date'] = str(ahora)
        row['Training'] = str(training)
        row['Predicting'] = str(predecir)
        row['precision'] = float(row_data[1])
        row['recall'] = float(row_data[2])
        row['f1_score'] = float(row_data[3])
        row['support'] = float(row_data[4])
        report_data.append(row)
    dataframe = pd.DataFrame.from_dict(report_data)
    dataframe.to_csv(file_path, index=False)
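# The slicing in classifaction_report_csv assumes sklearn's plain-text
# classification_report layout: a header line, a blank line, one row per class,
# then a blank line and the averages row. A self-contained sketch of the same
# parse on a hard-coded sample report (the numbers are invented for
# illustration; whitespace split() is used here instead of split(' ')):

```python
sample_report = (
    "             precision    recall  f1-score   support\n"
    "\n"
    "          0       0.61      0.57      0.59       100\n"
    "          1       0.63      0.67      0.65       110\n"
    "\n"
    "avg / total       0.62      0.62      0.62       210\n"
)
rows = []
for line in sample_report.split('\n')[2:-3]:   # keep only the per-class rows
    parts = line.split()
    rows.append({'label': parts[0], 'precision': float(parts[1]),
                 'recall': float(parts[2]), 'f1_score': float(parts[3]),
                 'support': int(parts[4])})
print(rows)
```

# The [2:-3] slice skips the header, the blank lines, and the averages row,
# leaving exactly one parsed dict per class.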
def Walk_Forward_Validation_CV(df, start_test, algorithm):# algorithm=["KNN","RFC","SVM","ADA Bost","GTB","LDA"]
    #def prepareDataForClassification(dataset, start_test):
    le = preprocessing.LabelEncoder()
    df['UpDown'] = df.apply(lambda row: Label_Change(row), axis=1)
    dataset1 = df.truncate(before='2003-07-01')  # delete all values up to the first rolling average (63 days)
    dataset1.UpDown = le.fit(dataset1.UpDown).transform(dataset1.UpDown)
    features = dataset1.columns[1:-1]
    X = dataset1[features]
    y = dataset1.UpDown
    X_train = X[X.index < start_test]
    y_train = y[y.index < start_test]
    n_train = len(X_train)
    n_records = len(dataset1)
    number_folds = n_records - n_train
    accuracies = np.zeros(number_folds - 1)
    F1scores = np.zeros(number_folds - 1)
    rocs = np.zeros(number_folds - 1)
    mats = np.zeros(number_folds - 1)
    yreals = np.zeros(number_folds - 1)
    ypredictions = np.zeros(number_folds - 1)
    # walk forward one record at a time: train on everything seen so far,
    # test on the single next record
    for i in range(n_train, n_records - 1):
        X_trainFolds = X[0:i]
        y_trainFolds = y[0:i]
        # fold used to test the model
        X_testFolds = X[i:i+1]
        y_testFolds = y[i:i+1]
        # algorithm=["KNN","RFC","SVM","ADA Bost","GTB","LDA","SGD","GNB","VOT","DTC"]
        if algorithm == "ADA Bost":
            accuracies[i-n_train], F1scores[i-n_train], yreals[i-n_train], ypredictions[i-n_train], training, predecir, rocs[i-n_train], mats[i-n_train] = performAdaBoostClass(X_trainFolds, y_trainFolds, X_testFolds, y_testFolds)
        if algorithm == "LDA":
            accuracies[i-n_train], F1scores[i-n_train], yreals[i-n_train], ypredictions[i-n_train], training, predecir, rocs[i-n_train], mats[i-n_train] = performLDAClass(X_trainFolds, y_trainFolds, X_testFolds, y_testFolds)
        if algorithm == "GTB":
            accuracies[i-n_train], F1scores[i-n_train], yreals[i-n_train], ypredictions[i-n_train], training, predecir, rocs[i-n_train], mats[i-n_train] = performGTBClass(X_trainFolds, y_trainFolds, X_testFolds, y_testFolds)
        if algorithm == "KNN":
            accuracies[i-n_train], F1scores[i-n_train], yreals[i-n_train], ypredictions[i-n_train], training, predecir, rocs[i-n_train], mats[i-n_train] = performKNNClass(X_trainFolds, y_trainFolds, X_testFolds, y_testFolds)
        if algorithm == "RFC":
            accuracies[i-n_train], F1scores[i-n_train], yreals[i-n_train], ypredictions[i-n_train], training, predecir, rocs[i-n_train], mats[i-n_train] = performRFClass(X_trainFolds, y_trainFolds, X_testFolds, y_testFolds)
        if algorithm == "SVM":
            accuracies[i-n_train], F1scores[i-n_train], yreals[i-n_train], ypredictions[i-n_train], training, predecir, rocs[i-n_train], mats[i-n_train] = performSVMClass(X_trainFolds, y_trainFolds, X_testFolds, y_testFolds)
        if algorithm == "SGD":
            accuracies[i-n_train], F1scores[i-n_train], yreals[i-n_train], ypredictions[i-n_train], training, predecir, rocs[i-n_train], mats[i-n_train] = performSGDClass(X_trainFolds, y_trainFolds, X_testFolds, y_testFolds)
        if algorithm == "VOT":
            accuracies[i-n_train], F1scores[i-n_train], yreals[i-n_train], ypredictions[i-n_train], training, predecir, rocs[i-n_train], mats[i-n_train] = performVotingClass(X_trainFolds, y_trainFolds, X_testFolds, y_testFolds)
        if algorithm == "LRC":
            accuracies[i-n_train], F1scores[i-n_train], yreals[i-n_train], ypredictions[i-n_train], training, predecir, rocs[i-n_train], mats[i-n_train] = performLRBClass(X_trainFolds, y_trainFolds, X_testFolds, y_testFolds)
        if algorithm == "DTC":
            accuracies[i-n_train], F1scores[i-n_train], yreals[i-n_train], ypredictions[i-n_train], training, predecir, rocs[i-n_train], mats[i-n_train] = performDTCClass(X_trainFolds, y_trainFolds, X_testFolds, y_testFolds)
        print('Accuracy on fold ' + str(i) + ': ' + str(accuracies[i-n_train]))
        print('F1 Score on fold ' + str(i) + ': ' + str(F1scores[i-n_train]))
        print('ytest fold ' + str(i) + ': ' + str(yreals[i-n_train]))
        print('Prediction on fold ' + str(i) + ': ' + str(ypredictions[i-n_train]))
        print('Roc on fold ' + str(i) + ': ' + str(rocs[i-n_train]))
        print('Matthew Coef on fold ' + str(i) + ': ' + str(mats[i-n_train]))
    return accuracies.mean(), F1scores.mean(), yreals, ypredictions, training, predecir, rocs.mean(), mats.mean()
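# The one-step-ahead split pattern in Walk_Forward_Validation_CV can be
# isolated as pure index arithmetic. A hedged sketch (the helper name is
# hypothetical): every round trains on rows [0, i) and tests on the single
# row i; note the original loop stops at n_records - 1, skipping the last row.

```python
def walk_forward_splits(n_records, n_train):
    """Yield (train_range, test_range) index pairs for one-step-ahead
    validation, mirroring X[0:i] / X[i:i+1] in the loop above."""
    for i in range(n_train, n_records - 1):
        yield (0, i), (i, i + 1)

# 8 records with the first 5 used as the initial training window.
splits = list(walk_forward_splits(8, 5))
print(splits)
```

# The training window grows by one record per round, so each prediction only
# ever uses data that was available before its timestamp.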
# Plot CV scores of a 2D grid search
def plotGridResults2D(x, y, x_label, y_label, grid_scores):
    scores = [s[1] for s in grid_scores]
    scores = np.array(scores).reshape(len(x), len(y))
    plt.figure()
    plt.imshow(scores, interpolation='nearest', cmap=plt.cm.RdYlGn)
    plt.xlabel(y_label)
    plt.ylabel(x_label)
    plt.colorbar()
    plt.xticks(np.arange(len(y)), y, rotation=45)
    plt.yticks(np.arange(len(x)), x)
    plt.title('Validation accuracy')
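# plotGridResults2D relies on reshaping a flat list of scores into a
# len(x) by len(y) grid. The row-major layout that reshape produces can be
# shown without numpy (the helper name below is illustrative):

```python
def reshape_scores(scores, n_rows, n_cols):
    """Row-major reshape, equivalent to np.array(scores).reshape(n_rows, n_cols)."""
    assert len(scores) == n_rows * n_cols
    return [scores[r * n_cols:(r + 1) * n_cols] for r in range(n_rows)]

# Six scores over a 2-value x-axis and 3-value y-axis.
grid = reshape_scores([0.1, 0.2, 0.3, 0.4, 0.5, 0.6], 2, 3)
print(grid)
```

# Each row of the grid corresponds to one value of the first parameter, which
# is why the heatmap puts x on the vertical axis and y on the horizontal one.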
"MIT"
] | null | null | null | __author__ = 'skdutra'
| 11.5 | 22 | 0.73913 | 2 | 23 | 6.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130435 | 23 | 1 | 23 | 23 | 0.65 | 0 | 0 | 0 | 0 | 0 | 0.304348 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
1f215d1c4e58dfcce168b8ac4f42f6fdcd93d9ef | 179 | py | Python | Crawler/bus/bus-crawalData.py | AlbatrossBill/COSC4P02Project | c48682c014ab9de4847d46cffc710d386db93c0f | [
"MIT"
] | 4 | 2022-01-15T22:04:06.000Z | 2022-01-24T01:46:46.000Z | Crawler/bus/bus-crawalData.py | AlbatrossBill/COSC4P02Project | c48682c014ab9de4847d46cffc710d386db93c0f | [
"MIT"
] | null | null | null | Crawler/bus/bus-crawalData.py | AlbatrossBill/COSC4P02Project | c48682c014ab9de4847d46cffc710d386db93c0f | [
"MIT"
] | 1 | 2022-01-24T01:31:57.000Z | 2022-01-24T01:31:57.000Z | import requests
from bs4 import BeautifulSoup
url = "http://whereis.yourbus.com/bustime/api/v3/getroutes?key=n3YEcYp545e55YhAVG65m9CKZ"
doc = requests.get(url)
print(doc.text)
| 19.888889 | 89 | 0.793296 | 24 | 179 | 5.916667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.067485 | 0.089385 | 179 | 8 | 90 | 22.375 | 0.803681 | 0 | 0 | 0 | 0 | 0 | 0.452514 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0.2 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
1f69e1a328f31bb36fa1655ff8885266e5f40852 | 590 | py | Python | vae-jiali/models/base.py | haehn/dicompute | ee4364aaa1258a370bd62bbaf6e577936bf463b3 | [
"MIT"
] | null | null | null | vae-jiali/models/base.py | haehn/dicompute | ee4364aaa1258a370bd62bbaf6e577936bf463b3 | [
"MIT"
] | null | null | null | vae-jiali/models/base.py | haehn/dicompute | ee4364aaa1258a370bd62bbaf6e577936bf463b3 | [
"MIT"
] | null | null | null | import torch.nn as nn
from abc import abstractmethod
class BaseVAE(nn.Module):
def __init__(self):
super(BaseVAE, self).__init__()
def encode(self, input):
raise NotImplementedError
def decode(self, inputs):
raise NotImplementedError
def sample(self, batch_size, current_device, **kwargs):
raise RuntimeWarning()
def generate(self, x, **kwargs):
raise NotImplementedError
@abstractmethod
def forward(self, *inputs):
pass
@abstractmethod
def loss_function(self, *inputs, **kwargs):
pass
| 21.071429 | 59 | 0.650847 | 63 | 590 | 5.920635 | 0.52381 | 0.193029 | 0.144772 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.257627 | 590 | 27 | 60 | 21.851852 | 0.851598 | 0 | 0 | 0.368421 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.368421 | false | 0.105263 | 0.105263 | 0 | 0.526316 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 3 |
1f8eedf3a72903aab2d7d5efd63db57c9070f34c | 2,314 | py | Python | tests/test_client.py | diarmuidw/PyKazoo | 4b6e22c56400c6da1cdd6949c27b687b51712017 | [
"MIT"
] | 3 | 2015-10-16T07:45:38.000Z | 2016-05-19T18:11:42.000Z | tests/test_client.py | diarmuidw/PyKazoo | 4b6e22c56400c6da1cdd6949c27b687b51712017 | [
"MIT"
] | 35 | 2015-08-29T17:33:05.000Z | 2021-06-06T15:53:52.000Z | tests/test_client.py | diarmuidw/PyKazoo | 4b6e22c56400c6da1cdd6949c27b687b51712017 | [
"MIT"
] | 10 | 2015-08-29T17:33:51.000Z | 2021-06-04T20:54:56.000Z | import pykazoo.client
from unittest import TestCase
from pykazoo.accounts import Accounts
from pykazoo.agents import Agents
from pykazoo.authentication import Authentication
from pykazoo.callflows import Callflows
from pykazoo.cdrs import CDRs
from pykazoo.clicktocalls import ClickToCalls
from pykazoo.conferences import Conferences
from pykazoo.devices import Devices
from pykazoo.directories import Directories
from pykazoo.faxes import Faxes
from pykazoo.hotdesks import Hotdesks
from pykazoo.media import Media
from pykazoo.menus import Menus
from pykazoo.metaflows import Metaflows
from pykazoo.phonenumbers import PhoneNumbers
from pykazoo.queues import Queues
from pykazoo.quickcalls import QuickCalls
from pykazoo.resources import Resources
from pykazoo.timedroutes import TimedRoutes
from pykazoo.users import Users
from pykazoo.voicemailboxes import VoicemailBoxes
from pykazoo.webhooks import Webhooks
class TestDevices(TestCase):
def setUp(self):
self.client = pykazoo.client.PyKazooClient('https://localhost:8080/v2')
def test_map_attributes_to_client_objects(self):
assert type(self.client.accounts) is Accounts
assert type(self.client.agents) is Agents
assert type(self.client.authentication) is Authentication
assert type(self.client.callflows) is Callflows
assert type(self.client.cdrs) is CDRs
assert type(self.client.clicktocalls) is ClickToCalls
assert type(self.client.conferences) is Conferences
assert type(self.client.devices) is Devices
assert type(self.client.directories) is Directories
assert type(self.client.faxes) is Faxes
assert type(self.client.hotdesks) is Hotdesks
assert type(self.client.media) is Media
assert type(self.client.menus) is Menus
assert type(self.client.metaflows) is Metaflows
assert type(self.client.phonenumbers) is PhoneNumbers
assert type(self.client.queues) is Queues
assert type(self.client.quickcalls) is QuickCalls
assert type(self.client.resources) is Resources
assert type(self.client.timedroutes) is TimedRoutes
assert type(self.client.users) is Users
assert type(self.client.voicemailboxes) is VoicemailBoxes
assert type(self.client.webhooks) is Webhooks
| 42.851852 | 79 | 0.775713 | 294 | 2,314 | 6.088435 | 0.156463 | 0.128492 | 0.172067 | 0.24581 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002593 | 0.166811 | 2,314 | 53 | 80 | 43.660377 | 0.92583 | 0 | 0 | 0 | 0 | 0 | 0.010804 | 0 | 0 | 0 | 0 | 0 | 0.44 | 1 | 0.04 | false | 0 | 0.48 | 0 | 0.54 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
2f0d09da507f82170b9b07b9d8bfdbfef8c502f1 | 255 | py | Python | src/apps/eggs/__init__.py | SlonSky/django-grasped | 97ea2f6d2e10232fc084a6407fa089df2cdf086e | [
"MIT"
] | null | null | null | src/apps/eggs/__init__.py | SlonSky/django-grasped | 97ea2f6d2e10232fc084a6407fa089df2cdf086e | [
"MIT"
] | null | null | null | src/apps/eggs/__init__.py | SlonSky/django-grasped | 97ea2f6d2e10232fc084a6407fa089df2cdf086e | [
"MIT"
] | null | null | null | """
Put here common independent stuff,
such as utilities, management commands, etc.,
that cannot belong to any single app.
DO NOT let this package become a dump of ugly code.
Putting code here should be the exception,
not an ordinary way of organizing the core.
""" | 28.333333 | 48 | 0.764706 | 42 | 255 | 4.642857 | 0.880952 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.172549 | 255 | 9 | 49 | 28.333333 | 0.924171 | 0.968627 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
2f1885b7f5cc551f3e75a35ef649ca09ab6b0266 | 111 | py | Python | Flask/8_context.py | hive-sql-parser/feitian | d9ff89e18c3604590152b64255409d8760f780b9 | [
"BSD-3-Clause"
] | null | null | null | Flask/8_context.py | hive-sql-parser/feitian | d9ff89e18c3604590152b64255409d8760f780b9 | [
"BSD-3-Clause"
] | null | null | null | Flask/8_context.py | hive-sql-parser/feitian | d9ff89e18c3604590152b64255409d8760f780b9 | [
"BSD-3-Clause"
] | null | null | null | from flask import Flask
@app.route("/index")
#线程局部变量 request
def index():
request.form.get("name")
| 10.090909 | 28 | 0.648649 | 15 | 111 | 4.8 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.198198 | 111 | 10 | 29 | 11.1 | 0.808989 | 0.126126 | 0 | 0 | 0 | 0 | 0.10989 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.25 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
2f3282d19a5398ad14bbc14278ef21323ad8ffc8 | 24 | py | Python | src/MQTTLibrary/version.py | NikoHeikki/robotframework-mqttlibrary | b35614b412287aa4b3cb7082fb2048406f1e5e2a | [
"Apache-2.0"
] | 19 | 2015-02-23T22:53:00.000Z | 2022-03-21T12:08:36.000Z | src/MQTTLibrary/version.py | NikoHeikki/robotframework-mqttlibrary | b35614b412287aa4b3cb7082fb2048406f1e5e2a | [
"Apache-2.0"
] | 23 | 2015-04-22T22:40:14.000Z | 2022-03-22T07:56:10.000Z | src/MQTTLibrary/version.py | NikoHeikki/robotframework-mqttlibrary | b35614b412287aa4b3cb7082fb2048406f1e5e2a | [
"Apache-2.0"
] | 18 | 2015-04-18T00:33:57.000Z | 2022-01-25T09:47:24.000Z | VERSION = '0.7.1.post3'
| 12 | 23 | 0.625 | 5 | 24 | 3 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.190476 | 0.125 | 24 | 1 | 24 | 24 | 0.52381 | 0 | 0 | 0 | 0 | 0 | 0.458333 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
2f3cd28ec535972dfd45e901f7c4459e17087223 | 79 | py | Python | sponsor-challenges/csit/part1 source/opcode.py | wongwaituck/crossctf-2017-finals-public | bb180bcb3fdb559b7d7040fbe01c4fca98322f11 | [
"MIT"
] | 6 | 2017-06-26T15:07:19.000Z | 2018-10-09T20:03:27.000Z | sponsor-challenges/csit/part1 source/opcode.py | wongwaituck/crossctf-2017-finals-public | bb180bcb3fdb559b7d7040fbe01c4fca98322f11 | [
"MIT"
] | null | null | null | sponsor-challenges/csit/part1 source/opcode.py | wongwaituck/crossctf-2017-finals-public | bb180bcb3fdb559b7d7040fbe01c4fca98322f11 | [
"MIT"
] | 1 | 2018-08-18T00:49:02.000Z | 2018-08-18T00:49:02.000Z | class opcode(object):
nul = 1
hello = 2
rhello = 130
get = 160
rget = 161
| 11.285714 | 21 | 0.620253 | 13 | 79 | 3.769231 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.192982 | 0.278481 | 79 | 6 | 22 | 13.166667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
2f3d1c026893da79800202ff6899d5ac3850913b | 221 | py | Python | ifc_builder/__init__.py | amrit3701/ifc-builder | 1c7b9efc09b930a344fa87d3f783c0b6f2178a45 | [
"MIT"
] | 2 | 2019-02-04T06:50:10.000Z | 2020-09-16T07:03:26.000Z | ifc_builder/__init__.py | amrit3701/ifc-builder | 1c7b9efc09b930a344fa87d3f783c0b6f2178a45 | [
"MIT"
] | null | null | null | ifc_builder/__init__.py | amrit3701/ifc-builder | 1c7b9efc09b930a344fa87d3f783c0b6f2178a45 | [
"MIT"
] | 3 | 2018-11-25T16:58:54.000Z | 2020-11-06T10:46:40.000Z | """ ifc_builder package."""
try:
import ifcopenshell
except ModuleNotFoundError:
    raise ModuleNotFoundError(
"""ifcopenshell module not found. Install it by following
http://www.ifcopenshell.org/python.html"""
)
| 22.1 | 65 | 0.665158 | 23 | 221 | 6.347826 | 0.913043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.221719 | 221 | 9 | 66 | 24.555556 | 0.848837 | 0.090498 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.2 | 0 | 0.2 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
2f4cdb08f5f2ead8b25be8f465591ba4787e5fe7 | 288 | py | Python | tfp/test/split_test.py | meetulr/DanceTransformer | 37f06db6e6feb22b6c983e103a15d91923f926b5 | [
"Apache-2.0"
] | 2 | 2020-01-06T16:32:01.000Z | 2020-01-06T19:29:33.000Z | tfp/test/split_test.py | meetulr/DanceTransformer | 37f06db6e6feb22b6c983e103a15d91923f926b5 | [
"Apache-2.0"
] | null | null | null | tfp/test/split_test.py | meetulr/DanceTransformer | 37f06db6e6feb22b6c983e103a15d91923f926b5 | [
"Apache-2.0"
] | 2 | 2020-01-06T16:32:01.000Z | 2020-01-09T05:25:14.000Z | import os
import sys
# sys.path.append("../")
print(os.getcwd())
from tfp.config.config import SPLIT_JSON_LOC
# from tfp.utils.Split import Split
def test_config_for_split():
print(SPLIT_JSON_LOC)
if __name__ == "__main__":
print("Running manual test")
test_config_for_split()
| 16 | 44 | 0.743056 | 44 | 288 | 4.454545 | 0.5 | 0.071429 | 0.122449 | 0.183673 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131944 | 288 | 17 | 45 | 16.941176 | 0.784 | 0.194444 | 0 | 0 | 0 | 0 | 0.117904 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | true | 0 | 0.333333 | 0 | 0.444444 | 0.333333 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
2f79701bc8f6beb7af5874e5007ec569219fe8e0 | 160 | py | Python | test.py | leon709/KafkaConsole | 426409019bbb764f6e54f9be40ef127e66749972 | [
"BSD-2-Clause"
] | 1 | 2018-10-27T04:17:00.000Z | 2018-10-27T04:17:00.000Z | test.py | leon709/KafkaConsole | 426409019bbb764f6e54f9be40ef127e66749972 | [
"BSD-2-Clause"
] | null | null | null | test.py | leon709/KafkaConsole | 426409019bbb764f6e54f9be40ef127e66749972 | [
"BSD-2-Clause"
] | null | null | null |
from kafka_util import KafkaHelper
with KafkaHelper() as h:
consumer = h.get_consumer('test', 'tp_test1')
consumer.seek(-100, 2)
print('done!') | 22.857143 | 53 | 0.66875 | 22 | 160 | 4.727273 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.03937 | 0.20625 | 160 | 7 | 54 | 22.857143 | 0.779528 | 0 | 0 | 0 | 0 | 0 | 0.10625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.2 | null | null | 0.2 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
2f7a077d7fa671ca8aedb46be19ba3ac1f1dfd23 | 577 | py | Python | tests/utils.py | kreyoo/csgo-inv-shuffle | 6392dd1eef1ca87ec25c9cf4845af3f8df3594a5 | [
"MIT"
] | null | null | null | tests/utils.py | kreyoo/csgo-inv-shuffle | 6392dd1eef1ca87ec25c9cf4845af3f8df3594a5 | [
"MIT"
] | 5 | 2021-12-22T19:25:51.000Z | 2022-03-28T19:27:34.000Z | tests/utils.py | kreyoo/csgo-inv-shuffle | 6392dd1eef1ca87ec25c9cf4845af3f8df3594a5 | [
"MIT"
] | null | null | null | import json
def __read_file(file: str):
f = open("tests/test_data/" + file, "r", encoding="utf-8")
data = f.read()
f.close()
return data
def example_data() -> dict:
d = __read_file("test_inventory.json")
return json.loads(d)
def example_csgo_saved_item_shuffles() -> str:
return __read_file("compare_data.txt")
def example_inv_repr() -> str:
return __read_file("inventory_repr.txt")
def new_shuffleconfig() -> str:
f = open("./csgo_saved_item_shuffles.txt", "r", encoding="utf-8")
data = f.read()
f.close()
return data
| 19.896552 | 69 | 0.651646 | 84 | 577 | 4.154762 | 0.380952 | 0.091691 | 0.045845 | 0.074499 | 0.217765 | 0.217765 | 0.217765 | 0.217765 | 0.217765 | 0.217765 | 0 | 0.004301 | 0.194107 | 577 | 28 | 70 | 20.607143 | 0.746237 | 0 | 0 | 0.333333 | 0 | 0 | 0.192374 | 0.051993 | 0 | 0 | 0 | 0 | 0 | 1 | 0.277778 | false | 0 | 0.055556 | 0.111111 | 0.611111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
2f92b84df1b6afd8cad84a0ede31079dde751517 | 113 | py | Python | KatesObjectOrientedPython/KatesObjectOrientedPython.py | katejp/ObjectOrientedPythonReference | ffc30e54b8d1e4a3182e388a544887845ca8c20e | [
"CC0-1.0"
] | null | null | null | KatesObjectOrientedPython/KatesObjectOrientedPython.py | katejp/ObjectOrientedPythonReference | ffc30e54b8d1e4a3182e388a544887845ca8c20e | [
"CC0-1.0"
] | null | null | null | KatesObjectOrientedPython/KatesObjectOrientedPython.py | katejp/ObjectOrientedPythonReference | ffc30e54b8d1e4a3182e388a544887845ca8c20e | [
"CC0-1.0"
] | null | null | null | userName = input("What is your name? ")
print("Hello, {}! What can I help you with today?".format(userName))
| 28.25 | 68 | 0.663717 | 17 | 113 | 4.411765 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.176991 | 113 | 3 | 69 | 37.666667 | 0.806452 | 0 | 0 | 0 | 0 | 0 | 0.571429 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 3 |
2f9eea24aacd7da67201772b20be8422b02eef84 | 143 | py | Python | config.py | SalN3t/NlpSQL | 2e4db3c26c2175204ec5c27d53e63900384b2e84 | [
"MIT"
] | 10 | 2018-05-26T18:00:09.000Z | 2021-11-07T16:56:28.000Z | config.py | SalN3t/NlpSQL | 2e4db3c26c2175204ec5c27d53e63900384b2e84 | [
"MIT"
] | 1 | 2020-01-14T10:04:03.000Z | 2020-01-14T10:04:03.000Z | config.py | SalN3t/NlpSQL | 2e4db3c26c2175204ec5c27d53e63900384b2e84 | [
"MIT"
] | 4 | 2020-05-27T16:48:06.000Z | 2021-12-12T20:21:07.000Z | db_config = {
'user': '##username##',
'passwd': '##password##',
'host': '##host##',
'db': 'employees',
} | 23.833333 | 33 | 0.377622 | 10 | 143 | 5.3 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.34965 | 143 | 6 | 34 | 23.833333 | 0.569892 | 0 | 0 | 0 | 0 | 0 | 0.395833 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.166667 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
85d54d8d8a754b97bd6874604c8486c708f81498 | 1,059 | py | Python | __init__.py | pykit3/k3color | e9976906ab09f095b76ee98b8497b5de8d1525f1 | [
"MIT"
] | null | null | null | __init__.py | pykit3/k3color | e9976906ab09f095b76ee98b8497b5de8d1525f1 | [
"MIT"
] | null | null | null | __init__.py | pykit3/k3color | e9976906ab09f095b76ee98b8497b5de8d1525f1 | [
"MIT"
] | 1 | 2021-08-05T09:22:26.000Z | 2021-08-05T09:22:26.000Z | """
k3color creates colored text on terminal.
"""
from .color import Str
from .color import blue
from .color import cyan
from .color import danger
from .color import dark
from .color import fading_color
from .color import green
from .color import loaded
from .color import normal
from .color import optimal
from .color import percentage
from .color import purple
from .color import red
from .color import warn
from .color import white
from .color import yellow
from .color import darkblue
from .color import darkcyan
from .color import darkgreen
from .color import darkyellow
from .color import darkred
from .color import darkpurple
from .color import darkwhite
__version__ = "0.1.2"
__name__ = "k3color"
__all__ = [
"Str",
"blue",
"cyan",
"danger",
"dark",
"fading_color",
"green",
"loaded",
"normal",
"optimal",
"percentage",
"purple",
"red",
"warn",
"white",
"yellow",
"darkblue",
"darkcyan",
"darkgreen",
"darkyellow",
"darkred",
"darkpurple",
"darkwhite",
]
| 18.258621 | 41 | 0.679887 | 130 | 1,059 | 5.430769 | 0.292308 | 0.293201 | 0.488669 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006024 | 0.216242 | 1,059 | 57 | 42 | 18.578947 | 0.844578 | 0.038716 | 0 | 0 | 0 | 0 | 0.162376 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.46 | 0 | 0.46 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
85d5ec3c535f7e3c5e3b20eac8d01c186304a390 | 291 | py | Python | packages/PythonScripts/scripts/func_params_test.py | AndSviat/public | 4677029ca8ad2dcd8a989b6122e8e2bb21454e7c | [
"MIT"
] | null | null | null | packages/PythonScripts/scripts/func_params_test.py | AndSviat/public | 4677029ca8ad2dcd8a989b6122e8e2bb21454e7c | [
"MIT"
] | null | null | null | packages/PythonScripts/scripts/func_params_test.py | AndSviat/public | 4677029ca8ad2dcd8a989b6122e8e2bb21454e7c | [
"MIT"
] | null | null | null | #name: Func Params Test
#description: suggestions, choices and validators test
#language: python
#tags: test, selenium
#input: string country {suggestions: jsSuggestCountryName}
#input: string vegetable {choices: jsVeggies}
#input: double saltiness {validators: ["jsSaltinessRange"]}
| 36.375 | 60 | 0.773196 | 30 | 291 | 7.5 | 0.733333 | 0.097778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.127148 | 291 | 7 | 61 | 41.571429 | 0.885827 | 0.927835 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
85e0e5e6f8e96309bc8a05791754a2381ab4196b | 267 | py | Python | app/helper.py | fushouhai/flask | dc057109008a467685e5585ee8101aaa8d6de38d | [
"MIT"
] | null | null | null | app/helper.py | fushouhai/flask | dc057109008a467685e5585ee8101aaa8d6de38d | [
"MIT"
] | null | null | null | app/helper.py | fushouhai/flask | dc057109008a467685e5585ee8101aaa8d6de38d | [
"MIT"
] | null | null | null | from flask import current_app
def jsonable():
    # config_now = current_app.config  # pay attention to this point! it points to the same object!
config_now = current_app.config.copy()
config_now.pop('PERMANENT_SESSION_LIFETIME')
return config_now
| 33.375 | 100 | 0.730337 | 37 | 267 | 5.027027 | 0.648649 | 0.193548 | 0.172043 | 0.204301 | 0.268817 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.191011 | 267 | 8 | 101 | 33.375 | 0.861111 | 0.35206 | 0 | 0 | 0 | 0 | 0.154762 | 0.154762 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |