# ast_compat/compat3k6_ast.py — python-compiler-tools/ast-compat @ c01c1cc7 (MIT)
import ast
from .make import *

_unset = object()


class Module(metaclass=SupertypeMeta):
    __past__ = ast.Module
    def __new__(self, body=_unset):
        """start checking validation"""
        if body is _unset: body = []
        return ast.Module(body)

class Interactive(metaclass=SupertypeMeta):
    __past__ = ast.Interactive
    def __new__(self, body=_unset):
        """start checking validation"""
        if body is _unset: body = []
        return ast.Interactive(body)

class Expression(metaclass=SupertypeMeta):
    __past__ = ast.Expression
    def __new__(self, body=_unset):
        """start checking validation"""
        if body is _unset: raise ValueError('body cannot be None.')
        return ast.Expression(body)

class Suite(metaclass=SupertypeMeta):
    __past__ = ast.Suite
    def __new__(self, body=_unset):
        """start checking validation"""
        if body is _unset: body = []
        return ast.Suite(body)

class FunctionDef(metaclass=SupertypeMeta):
    __past__ = ast.FunctionDef
    def __new__(self, name=_unset, args=_unset, body=_unset, decorator_list=_unset, returns=_unset):
        """start checking validation"""
        if name is _unset: raise ValueError('name cannot be None.')
        if args is _unset: raise ValueError('args cannot be None.')
        if body is _unset: body = []
        if decorator_list is _unset: decorator_list = []
        if returns is _unset: returns = None
        return ast.FunctionDef(name, args, body, decorator_list, returns)

class AsyncFunctionDef(metaclass=SupertypeMeta):
    __past__ = ast.AsyncFunctionDef
    def __new__(self, name=_unset, args=_unset, body=_unset, decorator_list=_unset, returns=_unset):
        """start checking validation"""
        if name is _unset: raise ValueError('name cannot be None.')
        if args is _unset: raise ValueError('args cannot be None.')
        if body is _unset: body = []
        if decorator_list is _unset: decorator_list = []
        if returns is _unset: returns = None
        return ast.AsyncFunctionDef(name, args, body, decorator_list, returns)

class ClassDef(metaclass=SupertypeMeta):
    __past__ = ast.ClassDef
    def __new__(self, name=_unset, bases=_unset, keywords=_unset, body=_unset, decorator_list=_unset):
        """start checking validation"""
        if name is _unset: raise ValueError('name cannot be None.')
        if bases is _unset: bases = []
        if keywords is _unset: keywords = []
        if body is _unset: body = []
        if decorator_list is _unset: decorator_list = []
        return ast.ClassDef(name, bases, keywords, body, decorator_list)

class Return(metaclass=SupertypeMeta):
    __past__ = ast.Return
    def __new__(self, value=_unset):
        """start checking validation"""
        if value is _unset: value = None
        return ast.Return(value)

class Delete(metaclass=SupertypeMeta):
    __past__ = ast.Delete
    def __new__(self, targets=_unset):
        """start checking validation"""
        if targets is _unset: targets = []
        return ast.Delete(targets)

class Assign(metaclass=SupertypeMeta):
    __past__ = ast.Assign
    def __new__(self, targets=_unset, value=_unset):
        """start checking validation"""
        if targets is _unset: targets = []
        if value is _unset: raise ValueError('value cannot be None.')
        return ast.Assign(targets, value)

class AugAssign(metaclass=SupertypeMeta):
    __past__ = ast.AugAssign
    def __new__(self, target=_unset, op=_unset, value=_unset):
        """start checking validation"""
        if target is _unset: raise ValueError('target cannot be None.')
        if op is _unset: raise ValueError('op cannot be None.')
        if value is _unset: raise ValueError('value cannot be None.')
        return ast.AugAssign(target, op, value)

class AnnAssign(metaclass=SupertypeMeta):
    __past__ = ast.AnnAssign
    def __new__(self, target=_unset, annotation=_unset, value=_unset, simple=_unset):
        """start checking validation"""
        if target is _unset: raise ValueError('target cannot be None.')
        if annotation is _unset: raise ValueError('annotation cannot be None.')
        if value is _unset: value = None
        if simple is _unset: raise ValueError('simple cannot be None.')
        return ast.AnnAssign(target, annotation, value, simple)

class For(metaclass=SupertypeMeta):
    __past__ = ast.For
    def __new__(self, target=_unset, iter=_unset, body=_unset, orelse=_unset):
        """start checking validation"""
        if target is _unset: raise ValueError('target cannot be None.')
        if iter is _unset: raise ValueError('iter cannot be None.')
        if body is _unset: body = []
        if orelse is _unset: orelse = []
        return ast.For(target, iter, body, orelse)

class AsyncFor(metaclass=SupertypeMeta):
    __past__ = ast.AsyncFor
    def __new__(self, target=_unset, iter=_unset, body=_unset, orelse=_unset):
        """start checking validation"""
        if target is _unset: raise ValueError('target cannot be None.')
        if iter is _unset: raise ValueError('iter cannot be None.')
        if body is _unset: body = []
        if orelse is _unset: orelse = []
        return ast.AsyncFor(target, iter, body, orelse)

class While(metaclass=SupertypeMeta):
    __past__ = ast.While
    def __new__(self, test=_unset, body=_unset, orelse=_unset):
        """start checking validation"""
        if test is _unset: raise ValueError('test cannot be None.')
        if body is _unset: body = []
        if orelse is _unset: orelse = []
        return ast.While(test, body, orelse)

class If(metaclass=SupertypeMeta):
    __past__ = ast.If
    def __new__(self, test=_unset, body=_unset, orelse=_unset):
        """start checking validation"""
        if test is _unset: raise ValueError('test cannot be None.')
        if body is _unset: body = []
        if orelse is _unset: orelse = []
        return ast.If(test, body, orelse)

class With(metaclass=SupertypeMeta):
    __past__ = ast.With
    def __new__(self, items=_unset, body=_unset):
        """start checking validation"""
        if items is _unset: items = []
        if body is _unset: body = []
        return ast.With(items, body)

class AsyncWith(metaclass=SupertypeMeta):
    __past__ = ast.AsyncWith
    def __new__(self, items=_unset, body=_unset):
        """start checking validation"""
        if items is _unset: items = []
        if body is _unset: body = []
        return ast.AsyncWith(items, body)

class Raise(metaclass=SupertypeMeta):
    __past__ = ast.Raise
    def __new__(self, exc=_unset, cause=_unset):
        """start checking validation"""
        if exc is _unset: exc = None
        if cause is _unset: cause = None
        return ast.Raise(exc, cause)

class Try(metaclass=SupertypeMeta):
    __past__ = ast.Try
    def __new__(self, body=_unset, handlers=_unset, orelse=_unset, finalbody=_unset):
        """start checking validation"""
        if body is _unset: body = []
        if handlers is _unset: handlers = []
        if orelse is _unset: orelse = []
        if finalbody is _unset: finalbody = []
        return ast.Try(body, handlers, orelse, finalbody)

class Assert(metaclass=SupertypeMeta):
    __past__ = ast.Assert
    def __new__(self, test=_unset, msg=_unset):
        """start checking validation"""
        if test is _unset: raise ValueError('test cannot be None.')
        if msg is _unset: msg = None
        return ast.Assert(test, msg)

class Import(metaclass=SupertypeMeta):
    __past__ = ast.Import
    def __new__(self, names=_unset):
        """start checking validation"""
        if names is _unset: names = []
        return ast.Import(names)

class ImportFrom(metaclass=SupertypeMeta):
    __past__ = ast.ImportFrom
    def __new__(self, module=_unset, names=_unset, level=_unset):
        """start checking validation"""
        if module is _unset: module = None
        if names is _unset: names = []
        if level is _unset: level = None
        return ast.ImportFrom(module, names, level)

class Global(metaclass=SupertypeMeta):
    __past__ = ast.Global
    def __new__(self, names=_unset):
        """start checking validation"""
        if names is _unset: names = []
        return ast.Global(names)

class Nonlocal(metaclass=SupertypeMeta):
    __past__ = ast.Nonlocal
    def __new__(self, names=_unset):
        """start checking validation"""
        if names is _unset: names = []
        return ast.Nonlocal(names)

class Expr(metaclass=SupertypeMeta):
    __past__ = ast.Expr
    def __new__(self, value=_unset):
        """start checking validation"""
        if value is _unset: raise ValueError('value cannot be None.')
        return ast.Expr(value)

class Pass(metaclass=SupertypeMeta):
    __past__ = ast.Pass
    def __new__(self):
        """start checking validation"""
        return ast.Pass()

class Break(metaclass=SupertypeMeta):
    __past__ = ast.Break
    def __new__(self):
        """start checking validation"""
        return ast.Break()

class Continue(metaclass=SupertypeMeta):
    __past__ = ast.Continue
    def __new__(self):
        """start checking validation"""
        return ast.Continue()

class BoolOp(metaclass=SupertypeMeta):
    __past__ = ast.BoolOp
    def __new__(self, op=_unset, values=_unset):
        """start checking validation"""
        if op is _unset: raise ValueError('op cannot be None.')
        if values is _unset: values = []
        return ast.BoolOp(op, values)

class BinOp(metaclass=SupertypeMeta):
    __past__ = ast.BinOp
    def __new__(self, left=_unset, op=_unset, right=_unset):
        """start checking validation"""
        if left is _unset: raise ValueError('left cannot be None.')
        if op is _unset: raise ValueError('op cannot be None.')
        if right is _unset: raise ValueError('right cannot be None.')
        return ast.BinOp(left, op, right)

class UnaryOp(metaclass=SupertypeMeta):
    __past__ = ast.UnaryOp
    def __new__(self, op=_unset, operand=_unset):
        """start checking validation"""
        if op is _unset: raise ValueError('op cannot be None.')
        if operand is _unset: raise ValueError('operand cannot be None.')
        return ast.UnaryOp(op, operand)

class Lambda(metaclass=SupertypeMeta):
    __past__ = ast.Lambda
    def __new__(self, args=_unset, body=_unset):
        """start checking validation"""
        if args is _unset: raise ValueError('args cannot be None.')
        if body is _unset: raise ValueError('body cannot be None.')
        return ast.Lambda(args, body)

class IfExp(metaclass=SupertypeMeta):
    __past__ = ast.IfExp
    def __new__(self, test=_unset, body=_unset, orelse=_unset):
        """start checking validation"""
        if test is _unset: raise ValueError('test cannot be None.')
        if body is _unset: raise ValueError('body cannot be None.')
        if orelse is _unset: raise ValueError('orelse cannot be None.')
        return ast.IfExp(test, body, orelse)

class Dict(metaclass=SupertypeMeta):
    __past__ = ast.Dict
    def __new__(self, keys=_unset, values=_unset):
        """start checking validation"""
        if keys is _unset: keys = []
        if values is _unset: values = []
        return ast.Dict(keys, values)

class Set(metaclass=SupertypeMeta):
    __past__ = ast.Set
    def __new__(self, elts=_unset):
        """start checking validation"""
        if elts is _unset: elts = []
        return ast.Set(elts)

class ListComp(metaclass=SupertypeMeta):
    __past__ = ast.ListComp
    def __new__(self, elt=_unset, generators=_unset):
        """start checking validation"""
        if elt is _unset: raise ValueError('elt cannot be None.')
        if generators is _unset: generators = []
        return ast.ListComp(elt, generators)

class SetComp(metaclass=SupertypeMeta):
    __past__ = ast.SetComp
    def __new__(self, elt=_unset, generators=_unset):
        """start checking validation"""
        if elt is _unset: raise ValueError('elt cannot be None.')
        if generators is _unset: generators = []
        return ast.SetComp(elt, generators)

class DictComp(metaclass=SupertypeMeta):
    __past__ = ast.DictComp
    def __new__(self, key=_unset, value=_unset, generators=_unset):
        """start checking validation"""
        if key is _unset: raise ValueError('key cannot be None.')
        if value is _unset: raise ValueError('value cannot be None.')
        if generators is _unset: generators = []
        return ast.DictComp(key, value, generators)

class GeneratorExp(metaclass=SupertypeMeta):
    __past__ = ast.GeneratorExp
    def __new__(self, elt=_unset, generators=_unset):
        """start checking validation"""
        if elt is _unset: raise ValueError('elt cannot be None.')
        if generators is _unset: generators = []
        return ast.GeneratorExp(elt, generators)

class Await(metaclass=SupertypeMeta):
    __past__ = ast.Await
    def __new__(self, value=_unset):
        """start checking validation"""
        if value is _unset: raise ValueError('value cannot be None.')
        return ast.Await(value)

class Yield(metaclass=SupertypeMeta):
    __past__ = ast.Yield
    def __new__(self, value=_unset):
        """start checking validation"""
        if value is _unset: value = None
        return ast.Yield(value)

class YieldFrom(metaclass=SupertypeMeta):
    __past__ = ast.YieldFrom
    def __new__(self, value=_unset):
        """start checking validation"""
        if value is _unset: raise ValueError('value cannot be None.')
        return ast.YieldFrom(value)

class Compare(metaclass=SupertypeMeta):
    __past__ = ast.Compare
    def __new__(self, left=_unset, ops=_unset, comparators=_unset):
        """start checking validation"""
        if left is _unset: raise ValueError('left cannot be None.')
        if ops is _unset: ops = []
        if comparators is _unset: comparators = []
        return ast.Compare(left, ops, comparators)

class Call(metaclass=SupertypeMeta):
    __past__ = ast.Call
    def __new__(self, func=_unset, args=_unset, keywords=_unset):
        """start checking validation"""
        if func is _unset: raise ValueError('func cannot be None.')
        if args is _unset: args = []
        if keywords is _unset: keywords = []
        return ast.Call(func, args, keywords)

class Num(metaclass=SupertypeMeta):
    __past__ = ast.Num
    def __new__(self, n=_unset):
        """start checking validation"""
        if n is _unset: raise ValueError('n cannot be None.')
        return ast.Num(n)

class Str(metaclass=SupertypeMeta):
    __past__ = ast.Str
    def __new__(self, s=_unset):
        """start checking validation"""
        if s is _unset: raise ValueError('s cannot be None.')
        return ast.Str(s)

class FormattedValue(metaclass=SupertypeMeta):
    __past__ = ast.FormattedValue
    def __new__(self, value=_unset, conversion=_unset, format_spec=_unset):
        """start checking validation"""
        if value is _unset: raise ValueError('value cannot be None.')
        if conversion is _unset: conversion = None
        if format_spec is _unset: format_spec = None
        return ast.FormattedValue(value, conversion, format_spec)

class JoinedStr(metaclass=SupertypeMeta):
    __past__ = ast.JoinedStr
    def __new__(self, values=_unset):
        """start checking validation"""
        if values is _unset: values = []
        return ast.JoinedStr(values)

class Bytes(metaclass=SupertypeMeta):
    __past__ = ast.Bytes
    def __new__(self, s=_unset):
        """start checking validation"""
        if s is _unset: raise ValueError('s cannot be None.')
        return ast.Bytes(s)

class NameConstant(metaclass=SupertypeMeta):
    __past__ = ast.NameConstant
    def __new__(self, value=_unset):
        """start checking validation"""
        if value is _unset: raise ValueError('value cannot be None.')
        return ast.NameConstant(value)

class Ellipsis(metaclass=SupertypeMeta):
    __past__ = ast.Ellipsis
    def __new__(self):
        """start checking validation"""
        return ast.Ellipsis()

class Constant(metaclass=SupertypeMeta):
    __past__ = ast.Constant
    def __new__(self, value=_unset):
        """start checking validation"""
        if value is _unset: raise ValueError('value cannot be None.')
        return ast.Constant(value)

class Attribute(metaclass=SupertypeMeta):
    __past__ = ast.Attribute
    def __new__(self, value=_unset, attr=_unset, ctx=_unset):
        """start checking validation"""
        if value is _unset: raise ValueError('value cannot be None.')
        if attr is _unset: raise ValueError('attr cannot be None.')
        if ctx is _unset: raise ValueError('ctx cannot be None.')
        return ast.Attribute(value, attr, ctx)

class Subscript(metaclass=SupertypeMeta):
    __past__ = ast.Subscript
    def __new__(self, value=_unset, slice=_unset, ctx=_unset):
        """start checking validation"""
        if value is _unset: raise ValueError('value cannot be None.')
        if slice is _unset: raise ValueError('slice cannot be None.')
        if ctx is _unset: raise ValueError('ctx cannot be None.')
        return ast.Subscript(value, slice, ctx)

class Starred(metaclass=SupertypeMeta):
    __past__ = ast.Starred
    def __new__(self, value=_unset, ctx=_unset):
        """start checking validation"""
        if value is _unset: raise ValueError('value cannot be None.')
        if ctx is _unset: raise ValueError('ctx cannot be None.')
        return ast.Starred(value, ctx)

class Name(metaclass=SupertypeMeta):
    __past__ = ast.Name
    def __new__(self, id=_unset, ctx=_unset):
        """start checking validation"""
        if id is _unset: raise ValueError('id cannot be None.')
        if ctx is _unset: raise ValueError('ctx cannot be None.')
        return ast.Name(id, ctx)

class List(metaclass=SupertypeMeta):
    __past__ = ast.List
    def __new__(self, elts=_unset, ctx=_unset):
        """start checking validation"""
        if elts is _unset: elts = []
        if ctx is _unset: raise ValueError('ctx cannot be None.')
        return ast.List(elts, ctx)

class Tuple(metaclass=SupertypeMeta):
    __past__ = ast.Tuple
    def __new__(self, elts=_unset, ctx=_unset):
        """start checking validation"""
        if elts is _unset: elts = []
        if ctx is _unset: raise ValueError('ctx cannot be None.')
        return ast.Tuple(elts, ctx)

class Load(metaclass=SupertypeMeta):
    __past__ = ast.Load
    def __new__(self):
        """start checking validation"""
        return ast.Load()

class Store(metaclass=SupertypeMeta):
    __past__ = ast.Store
    def __new__(self):
        """start checking validation"""
        return ast.Store()

class Del(metaclass=SupertypeMeta):
    __past__ = ast.Del
    def __new__(self):
        """start checking validation"""
        return ast.Del()

class AugLoad(metaclass=SupertypeMeta):
    __past__ = ast.AugLoad
    def __new__(self):
        """start checking validation"""
        return ast.AugLoad()

class AugStore(metaclass=SupertypeMeta):
    __past__ = ast.AugStore
    def __new__(self):
        """start checking validation"""
        return ast.AugStore()

class Param(metaclass=SupertypeMeta):
    __past__ = ast.Param
    def __new__(self):
        """start checking validation"""
        return ast.Param()

class Slice(metaclass=SupertypeMeta):
    __past__ = ast.Slice
    def __new__(self, lower=_unset, upper=_unset, step=_unset):
        """start checking validation"""
        if lower is _unset: lower = None
        if upper is _unset: upper = None
        if step is _unset: step = None
        return ast.Slice(lower, upper, step)

class ExtSlice(metaclass=SupertypeMeta):
    __past__ = ast.ExtSlice
    def __new__(self, dims=_unset):
        """start checking validation"""
        if dims is _unset: dims = []
        return ast.ExtSlice(dims)

class Index(metaclass=SupertypeMeta):
    __past__ = ast.Index
    def __new__(self, value=_unset):
        """start checking validation"""
        if value is _unset: raise ValueError('value cannot be None.')
        return ast.Index(value)

class And(metaclass=SupertypeMeta):
    __past__ = ast.And
    def __new__(self):
        """start checking validation"""
        return ast.And()

class Or(metaclass=SupertypeMeta):
    __past__ = ast.Or
    def __new__(self):
        """start checking validation"""
        return ast.Or()

class Add(metaclass=SupertypeMeta):
    __past__ = ast.Add
    def __new__(self):
        """start checking validation"""
        return ast.Add()

class Sub(metaclass=SupertypeMeta):
    __past__ = ast.Sub
    def __new__(self):
        """start checking validation"""
        return ast.Sub()

class Mult(metaclass=SupertypeMeta):
    __past__ = ast.Mult
    def __new__(self):
        """start checking validation"""
        return ast.Mult()

class MatMult(metaclass=SupertypeMeta):
    __past__ = ast.MatMult
    def __new__(self):
        """start checking validation"""
        return ast.MatMult()

class Div(metaclass=SupertypeMeta):
    __past__ = ast.Div
    def __new__(self):
        """start checking validation"""
        return ast.Div()

class Mod(metaclass=SupertypeMeta):
    __past__ = ast.Mod
    def __new__(self):
        """start checking validation"""
        return ast.Mod()

class Pow(metaclass=SupertypeMeta):
    __past__ = ast.Pow
    def __new__(self):
        """start checking validation"""
        return ast.Pow()

class LShift(metaclass=SupertypeMeta):
    __past__ = ast.LShift
    def __new__(self):
        """start checking validation"""
        return ast.LShift()

class RShift(metaclass=SupertypeMeta):
    __past__ = ast.RShift
    def __new__(self):
        """start checking validation"""
        return ast.RShift()

class BitOr(metaclass=SupertypeMeta):
    __past__ = ast.BitOr
    def __new__(self):
        """start checking validation"""
        return ast.BitOr()

class BitXor(metaclass=SupertypeMeta):
    __past__ = ast.BitXor
    def __new__(self):
        """start checking validation"""
        return ast.BitXor()

class BitAnd(metaclass=SupertypeMeta):
    __past__ = ast.BitAnd
    def __new__(self):
        """start checking validation"""
        return ast.BitAnd()

class FloorDiv(metaclass=SupertypeMeta):
    __past__ = ast.FloorDiv
    def __new__(self):
        """start checking validation"""
        return ast.FloorDiv()

class Invert(metaclass=SupertypeMeta):
    __past__ = ast.Invert
    def __new__(self):
        """start checking validation"""
        return ast.Invert()

class Not(metaclass=SupertypeMeta):
    __past__ = ast.Not
    def __new__(self):
        """start checking validation"""
        return ast.Not()

class UAdd(metaclass=SupertypeMeta):
    __past__ = ast.UAdd
    def __new__(self):
        """start checking validation"""
        return ast.UAdd()

class USub(metaclass=SupertypeMeta):
    __past__ = ast.USub
    def __new__(self):
        """start checking validation"""
        return ast.USub()

class Eq(metaclass=SupertypeMeta):
    __past__ = ast.Eq
    def __new__(self):
        """start checking validation"""
        return ast.Eq()

class NotEq(metaclass=SupertypeMeta):
    __past__ = ast.NotEq
    def __new__(self):
        """start checking validation"""
        return ast.NotEq()

class Lt(metaclass=SupertypeMeta):
    __past__ = ast.Lt
    def __new__(self):
        """start checking validation"""
        return ast.Lt()

class LtE(metaclass=SupertypeMeta):
    __past__ = ast.LtE
    def __new__(self):
        """start checking validation"""
        return ast.LtE()

class Gt(metaclass=SupertypeMeta):
    __past__ = ast.Gt
    def __new__(self):
        """start checking validation"""
        return ast.Gt()

class GtE(metaclass=SupertypeMeta):
    __past__ = ast.GtE
    def __new__(self):
        """start checking validation"""
        return ast.GtE()

class Is(metaclass=SupertypeMeta):
    __past__ = ast.Is
    def __new__(self):
        """start checking validation"""
        return ast.Is()

class IsNot(metaclass=SupertypeMeta):
    __past__ = ast.IsNot
    def __new__(self):
        """start checking validation"""
        return ast.IsNot()

class In(metaclass=SupertypeMeta):
    __past__ = ast.In
    def __new__(self):
        """start checking validation"""
        return ast.In()

class NotIn(metaclass=SupertypeMeta):
    __past__ = ast.NotIn
    def __new__(self):
        """start checking validation"""
        return ast.NotIn()

class comprehension(metaclass=SupertypeMeta):
    __past__ = ast.comprehension
    def __new__(self, target=_unset, iter=_unset, ifs=_unset, is_async=_unset):
        """start checking validation"""
        if target is _unset: raise ValueError('target cannot be None.')
        if iter is _unset: raise ValueError('iter cannot be None.')
        if ifs is _unset: ifs = []
        if is_async is _unset: raise ValueError('is_async cannot be None.')
        return ast.comprehension(target, iter, ifs, is_async)

class ExceptHandler(metaclass=SupertypeMeta):
    __past__ = ast.ExceptHandler
    def __new__(self, type=_unset, name=_unset, body=_unset):
        """start checking validation"""
        if type is _unset: type = None
        if name is _unset: name = None
        if body is _unset: body = []
        return ast.ExceptHandler(type, name, body)

class arguments(metaclass=SupertypeMeta):
    __past__ = ast.arguments
    def __new__(self, args=_unset, vararg=_unset, kwonlyargs=_unset, kw_defaults=_unset, kwarg=_unset, defaults=_unset):
        """start checking validation"""
        if args is _unset: args = []
        if vararg is _unset: vararg = None
        if kwonlyargs is _unset: kwonlyargs = []
        if kw_defaults is _unset: kw_defaults = []
        if kwarg is _unset: kwarg = None
        if defaults is _unset: defaults = []
        return ast.arguments(args, vararg, kwonlyargs, kw_defaults, kwarg, defaults)

class arg(metaclass=SupertypeMeta):
    __past__ = ast.arg
    def __new__(self, arg=_unset, annotation=_unset):
        """start checking validation"""
        if arg is _unset: raise ValueError('arg cannot be None.')
        if annotation is _unset: annotation = None
        return ast.arg(arg, annotation)

class keyword(metaclass=SupertypeMeta):
    __past__ = ast.keyword
    def __new__(self, arg=_unset, value=_unset):
        """start checking validation"""
        if arg is _unset: arg = None
        if value is _unset: raise ValueError('value cannot be None.')
        return ast.keyword(arg, value)

class alias(metaclass=SupertypeMeta):
    __past__ = ast.alias
    def __new__(self, name=_unset, asname=_unset):
        """start checking validation"""
        if name is _unset: raise ValueError('name cannot be None.')
        if asname is _unset: asname = None
        return ast.alias(name, asname)

class withitem(metaclass=SupertypeMeta):
    __past__ = ast.withitem
    def __new__(self, context_expr=_unset, optional_vars=_unset):
        """start checking validation"""
        if context_expr is _unset: raise ValueError('context_expr cannot be None.')
        if optional_vars is _unset: optional_vars = None
        return ast.withitem(context_expr, optional_vars)
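Every wrapper above simply validates or defaults its fields and then forwards to the corresponding stdlib `ast` constructor. As a quick standalone sanity check of that target API (this sketch is not part of ast-compat, and it uses modern `ast` directly — note that on Python 3.8+ some nodes grew extra fields, e.g. `Module.type_ignores`, which this 3.6/3.7 layer deliberately omits):

```python
import ast

# Build the expression tree for `20 + 22` by hand, exactly as the compat
# constructors do internally, then compile and evaluate it.
expr = ast.Expression(body=ast.BinOp(left=ast.Constant(20),
                                     op=ast.Add(),
                                     right=ast.Constant(22)))
ast.fix_missing_locations(expr)  # fill in the lineno/col_offset attributes
result = eval(compile(expr, "<ast>", "eval"))
print(result)  # 42
```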
# toqito/state_props/is_mixed.py — georgios-ts/toqito @ d9379fb2 (MIT)
"""Check if state is mixed."""
import numpy as np

from toqito.state_props import is_pure


def is_mixed(state: np.ndarray) -> bool:
    r"""
    Determine if a given quantum state is mixed [WikMix]_.

    A mixed state by definition is a state that is not pure.

    Examples
    ==========
    Consider the following density matrix:

    .. math::
        \rho = \begin{pmatrix}
                 \frac{3}{4} & 0 \\
                 0 & \frac{1}{4}
               \end{pmatrix} \in \text{D}(\mathcal{X}).

    Calculating the rank of :math:`\rho` shows that :math:`\rho` is a mixed state. This can be
    confirmed in :code:`toqito` as follows:

    >>> from toqito.states import basis
    >>> from toqito.state_props import is_mixed
    >>> e_0, e_1 = basis(2, 0), basis(2, 1)
    >>> rho = 3 / 4 * e_0 * e_0.conj().T + 1 / 4 * e_1 * e_1.conj().T
    >>> is_mixed(rho)
    True

    References
    ==========
    .. [WikMix] Wikipedia: Quantum state - Mixed states
        https://en.wikipedia.org/wiki/Quantum_state#Mixed_states

    :param state: The density matrix representing the quantum state.
    :return: True if state is mixed and False otherwise.
    """
    return not is_pure(state)
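Since `is_mixed` is just the negation of the purity test, the check reduces to the standard criterion that a density matrix :math:`\rho` is pure iff :math:`\text{Tr}(\rho^2) = 1`. A standalone NumPy sketch of that criterion (a hypothetical helper for illustration, not toqito's implementation, which delegates to `is_pure`):

```python
import numpy as np

def is_mixed_sketch(rho: np.ndarray, tol: float = 1e-9) -> bool:
    # Purity of a density matrix: Tr(rho^2) equals 1 exactly when rho is pure,
    # so any purity strictly below 1 (up to tolerance) signals a mixed state.
    purity = np.trace(rho @ rho).real
    return bool(purity < 1 - tol)

rho_mixed = np.diag([3 / 4, 1 / 4])    # the docstring's example state
rho_pure = np.array([[1.0, 0.0],
                     [0.0, 0.0]])      # |0><0| is pure
print(is_mixed_sketch(rho_mixed))  # True
print(is_mixed_sketch(rho_pure))   # False
```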
# tests/test_macro.py — nenkoru/okama @ 1e202bc8 (MIT)
import pandas as pd
from pytest import mark
from pytest import approx
@mark.inflation
@mark.usefixtures('_init_inflation')
class TestInflation:
def test_get_infl_rub_data(self):
assert self.infl_rub.first_date == pd.to_datetime('1991-01')
assert self.infl_rub.pl.years == 10
assert self.infl_rub.pl.months == 1
assert self.infl_rub.name == 'Russia Inflation Rate'
assert self.infl_rub.type == 'inflation'
assert self.infl_rub.cumulative_inflation[-1] == approx(19576.47386585591, rel=1e-4)
assert self.infl_rub.purchasing_power_1000 == approx(0.05107911300773333, rel=1e-4)
assert self.infl_rub.rolling_inflation[-1] == approx(0.2070533602100877, rel=1e-4)
def test_get_infl_usd_data(self):
assert self.infl_usd.first_date == pd.to_datetime('1913-02')
assert self.infl_usd.pl.years == 10
assert self.infl_usd.pl.months == 0
assert self.infl_usd.name == 'US Inflation Rate'
assert self.infl_usd.type == 'inflation'
assert self.infl_usd.cumulative_inflation[-1] == approx(0.7145424753209466, rel=1e-4)
assert self.infl_usd.purchasing_power_1000 == approx(583.2459763429362, rel=1e-4)
assert self.infl_usd.rolling_inflation[-1] == approx(-0.005813765681402461, rel=1e-4)
def test_get_infl_eur_data(self):
assert self.infl_eur.first_date == pd.to_datetime('1996-02')
assert self.infl_eur.pl.years == 10
assert self.infl_eur.name == 'EU Inflation Rate'
assert self.infl_eur.type == 'inflation'
assert self.infl_eur.cumulative_inflation[-1] == approx(0.20267532488218776, rel=1e-4)
assert self.infl_eur.purchasing_power_1000 == approx(831.4796016106495, rel=1e-4)
assert self.infl_eur.rolling_inflation[-1] == approx(0.02317927930197139, rel=1e-4)
def test_describe(self):
description = self.infl_rub.describe(years=[5])
assert list(description.columns) == ['property', 'period', 'Russia Inflation Rate']
assert description.loc[3, 'Russia Inflation Rate'] == approx(3.0414434004010245, rel=1e-4)
assert description.loc[5, 'Russia Inflation Rate'] == approx(247.43634907784974, rel=1e-4)
def test_annual_inflation_ts(self):
assert self.infl_rub.annual_inflation_ts.iloc[-1] == approx(0.02760000000000007, rel=1e-4)
@mark.rates
@mark.usefixtures('_init_rates')
class TestRates:
def test_rates_rub(self):
assert self.rates_rub.name == 'Max deposit rates (RUB) in Russian banks'
assert self.rates_rub.first_date == pd.to_datetime('2015-01')
assert self.rates_rub.last_date == pd.to_datetime('2020-02')
def test_okid(self):
assert self.rates_rub.okid.sum() == approx(6376.2308059223915, rel=1e-4)
| 46.311475 | 98 | 0.698053 | 405 | 2,825 | 4.679012 | 0.232099 | 0.153034 | 0.184697 | 0.080739 | 0.460158 | 0.179947 | 0.130343 | 0.036412 | 0.036412 | 0.036412 | 0 | 0.134367 | 0.178053 | 2,825 | 60 | 99 | 47.083333 | 0.68174 | 0 | 0 | 0.040816 | 0 | 0 | 0.092035 | 0 | 0 | 0 | 0 | 0 | 0.653061 | 1 | 0.142857 | false | 0 | 0.061224 | 0 | 0.244898 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
66f5d3ec30ee3cb74953f418ac60d6f97eadd0de | 5,233 | py | Python | Python/Programação_em_Python_Essencial/5- Coleções/dicionarios.py | vdonoladev/aprendendo-programacao | 83abbcd6701b2105903b28fd549738863418cfb8 | [
"MIT"
] | null | null | null | Python/Programação_em_Python_Essencial/5- Coleções/dicionarios.py | vdonoladev/aprendendo-programacao | 83abbcd6701b2105903b28fd549738863418cfb8 | [
"MIT"
] | null | null | null | Python/Programação_em_Python_Essencial/5- Coleções/dicionarios.py | vdonoladev/aprendendo-programacao | 83abbcd6701b2105903b28fd549738863418cfb8 | [
"MIT"
] | null | null | null | """
Dictionaries

NOTE: In some programming languages, Python dictionaries are known
as maps.

Dictionaries are collections of key/value pairs. (A mapping between keys and values)

# Lists
[0, 1, 2]
[1, 2, 3]

# Tuples
(0, 1, 2)
(1, 2, 3)

Dictionaries are written with curly braces {}.

print(type({}))

NOTE: About dictionaries
- Key and value are separated by a colon: 'key:value';
- Both keys and values can be of any data type;
- We can mix data types;

# Creating dictionaries

# Form 1 (most common)
paises = {'br': 'Brasil', 'eua': 'Estados Unidos', 'py': 'Paraguay'}
print(paises)
print(type(paises))

# Form 2 (less common)
paises = dict(br='Brasil', eua='Estados Unidos', py='Paraguay')
print(paises)
print(type(paises))

# Accessing elements

# Form 1 - access by key, the same way as with lists/tuples
print(paises['br'])
# print(paises['ru'])
# NOTE: If we try to access a key that does not exist, we get a KeyError

# Form 2 - access via get (recommended)
print(paises.get('br'))
print(paises.get('ru'))
# If get does not find an object with the given key, it returns None instead of raising a KeyError

pais = paises.get('ru')
if pais:
    print(f'Encontrei o país {pais}')
else:
    print('Não encontrei o país')

# We can provide a default value for when the key is not found
pais = paises.get('ru', 'Não encontrado')
print(f'Encontrei o país {pais}')

# We can check whether a given key is present in a dictionary
print('br' in paises)
print('ru' in paises)
print('Estados Unidos' in paises)

if 'ru' in paises:
    russia = paises['ru']

# We can use any hashable (immutable) data type (int, float, string, boolean),
# including tuples, as dictionary keys. Mutable types such as lists and
# dictionaries cannot be used as keys.
# Tuples, for example, are very convenient as dictionary keys precisely
# because they are immutable.
localidades = {
    (35.6895, 39.6917): 'Escritório em Tókio',
    (40.7128, 74.0060): 'Escritório em Nova York',
    (37.7749, 122.4194): 'Escritório em São Paulo',
}
print(localidades)
print(type(localidades))

# Adding elements to a dictionary
receita = {'jan': 100, 'fev': 120, 'mar': 300}
print(receita)
print(type(receita))

# Form 1 - most common
receita['abr'] = 350
print(receita)

# Form 2
novo_dado = {'mai': 500}
receita.update(novo_dado)  # receita.update({'mai': 500})
print(receita)

# Updating data in a dictionary

# Form 1
receita['mai'] = 550
print(receita)

# Form 2
receita.update({'mai': 600})
print(receita)

# CONCLUSION 1: Adding new elements and updating existing ones use the same syntax.
# CONCLUSION 2: Dictionaries can NOT have repeated keys.

# Removing data from a dictionary
receita = {'jan': 100, 'fev': 120, 'mar': 300}
print(receita)

# Form 1 - most common
ret = receita.pop('mar')
print(ret)
print(receita)

# NOTE 1: Here we must ALWAYS pass the key; if the element is not found, a KeyError is raised.
# NOTE 2: When we remove an object, its value is returned.

# Form 2
del receita['fev']
print(receita)

# If the key does not exist, a KeyError is raised
# NOTE: In this case, the removed value is not returned.

# Imagine you run an e-commerce site with a shopping cart to which we add products.

Shopping cart:
    Product 1:
    - name;
    - quantity;
    - price;
    Product 2:
    - name;
    - quantity;
    - price;

# 1 - Could we use a list for this? Yes
carrinho = []
produto1 = ['PlayStation 4', 1, 2300.00]
produto2 = ['God of War 4', 1, 150.00]
carrinho.append(produto1)
carrinho.append(produto2)
print(carrinho)

# But we would have to know the index of each piece of information within a product.

# 2 - Could we use a tuple for this? Yes
produto1 = ('PlayStation 4', 1, 2300.00)
produto2 = ('God of War 4', 1, 150.00)
carrinho = (produto1, produto2)
print(carrinho)

# 3 - Could we use a dictionary for this? Yes
carrinho = []
produto1 = {'Nome': 'PlayStation 4', 'Quantidade': 1, 'Preço': 2300.00}
produto2 = {'Nome': 'God of War 4', 'Quantidade': 1, 'Preço': 150.00}
carrinho.append(produto1)
carrinho.append(produto2)
print(carrinho)

# This way we can easily add or remove products from the cart, and for each
# product we can be sure what each piece of information means.

# Dictionary methods

d = dict(a=1, b=2, c=3)
print(d)
print(type(d))

# Clearing a dictionary (wiping its data)
d.clear()
print(d)

# Copying one dictionary to another

# Form 1 - copy() creates a new, independent dictionary (a shallow copy)
novo = d.copy()
print(novo)
novo['d'] = 4
print(d)
print(novo)

# Form 2 - plain assignment does NOT copy: both names refer to the same dictionary
novo = d
print(novo)
novo['d'] = 4
print(d)
print(novo)
"""
# An unusual way of creating dictionaries
outro = {}.fromkeys('a', 'b')
print(outro)
print(type(outro))

usuario = {}.fromkeys(['nome', 'pontos', 'email', 'profile'], 'desconhecido')
print(usuario)
print(type(usuario))

# The fromkeys method takes two parameters: an iterable and a value.
# For each element of the iterable it creates a key and assigns the given value to it.
veja = {}.fromkeys('teste', 'valor')
print(veja)

veja = {}.fromkeys(range(1, 11), 'novo')
print(veja)
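One caveat with `fromkeys`: if the value you pass in is mutable (a list, for example), every key shares the same object. A small sketch:

```python
# Every key receives a reference to the SAME list object.
compartilhado = {}.fromkeys(['a', 'b', 'c'], [])
compartilhado['a'].append(1)
print(compartilhado)  # {'a': [1], 'b': [1], 'c': [1]}

# If each key needs its own list, use a dict comprehension instead.
independente = {chave: [] for chave in ['a', 'b', 'c']}
independente['a'].append(1)
print(independente)   # {'a': [1], 'b': [], 'c': []}
```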
| 19.672932 | 119 | 0.708962 | 768 | 5,233 | 4.828125 | 0.342448 | 0.019417 | 0.015102 | 0.007282 | 0.178263 | 0.165858 | 0.136192 | 0.136192 | 0.136192 | 0.120011 | 0 | 0.03863 | 0.168928 | 5,233 | 265 | 120 | 19.74717 | 0.81398 | 0.973247 | 0 | 0.2 | 0 | 0 | 0.172414 | 0 | 0 | 0 | 0 | 0.007547 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.6 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
0f043a3435da8c67a9748a89f14743f58b5275cd | 278 | py | Python | app/main/forms.py | lungsang/pyrrha | 5d101494267ba4b8146e2a846adb3cc7813b892f | [
"MIT"
] | null | null | null | app/main/forms.py | lungsang/pyrrha | 5d101494267ba4b8146e2a846adb3cc7813b892f | [
"MIT"
] | null | null | null | app/main/forms.py | lungsang/pyrrha | 5d101494267ba4b8146e2a846adb3cc7813b892f | [
"MIT"
] | null | null | null | from flask_wtf import FlaskForm
from wtforms.fields import (
StringField,
SubmitField,
)
from wtforms.validators import InputRequired
class Delete(FlaskForm):
name = StringField("Name", validators=[InputRequired()])
    submit = SubmitField('Delete this corpus')
| 23.166667 | 60 | 0.748201 | 29 | 278 | 7.137931 | 0.586207 | 0.10628 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.161871 | 278 | 11 | 61 | 25.272727 | 0.888412 | 0 | 0 | 0 | 0 | 0 | 0.079137 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
0f1153340c6e2a7051005933c28218dbe28a1c83 | 138 | py | Python | python/built_ins/q2_input.py | mxdzi/hackerrank | 4455f73e4479a4204b2e1167253f6a02351aa5b7 | [
"MIT"
] | null | null | null | python/built_ins/q2_input.py | mxdzi/hackerrank | 4455f73e4479a4204b2e1167253f6a02351aa5b7 | [
"MIT"
] | null | null | null | python/built_ins/q2_input.py | mxdzi/hackerrank | 4455f73e4479a4204b2e1167253f6a02351aa5b7 | [
"MIT"
] | null | null | null | def main():
x, k = [int(i) for i in input().split()]
p = input()
print(eval(p) == k)
if __name__ == '__main__':
main()
| 13.8 | 44 | 0.492754 | 21 | 138 | 2.857143 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.289855 | 138 | 9 | 45 | 15.333333 | 0.612245 | 0 | 0 | 0 | 0 | 0 | 0.057971 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0 | 0 | 0.166667 | 0.166667 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
0f1d94820bc7fe26fdd23e80e362cc7a142587aa | 553 | py | Python | publish.py | liveldp/consensus | d1d6c3b7f98eb36194c4af093d913a9a34c8c670 | [
"Apache-2.0"
] | null | null | null | publish.py | liveldp/consensus | d1d6c3b7f98eb36194c4af093d913a9a34c8c670 | [
"Apache-2.0"
] | null | null | null | publish.py | liveldp/consensus | d1d6c3b7f98eb36194c4af093d913a9a34c8c670 | [
"Apache-2.0"
] | null | null | null | import json
import pika
import config
exchange = config.AMQP_EXCHANGE
def publish(data):
connection_params = pika.ConnectionParameters(
host=config.AMQP_HOST, port=config.AMQP_PORT,
credentials=pika.credentials.PlainCredentials(config.AMQP_USER, config.AMQP_PASS)
)
connection = pika.BlockingConnection(connection_params)
ch = connection.channel()
ch.basic_publish(exchange=exchange, routing_key='annotations.consensus', body=json.dumps(data))
    print('{} -> |consensus|'.format(data))
connection.close()
| 25.136364 | 99 | 0.74141 | 62 | 553 | 6.467742 | 0.5 | 0.124688 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.151899 | 553 | 21 | 100 | 26.333333 | 0.855011 | 0 | 0 | 0 | 0 | 0 | 0.068716 | 0.037975 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.071429 | 0.214286 | null | null | 0.071429 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
0f1de186d672c85982ee07b2998c3cfaa875f7d9 | 7,440 | py | Python | m2py/misc/matlab2py.py | caiorss/m2py | 4a8f754f04adb151b1967fe13b8f80b4ec169560 | [
"BSD-3-Clause"
] | 13 | 2016-12-10T22:03:18.000Z | 2021-11-27T11:55:41.000Z | m2py/misc/matlab2py.py | caiorss/m2py | 4a8f754f04adb151b1967fe13b8f80b4ec169560 | [
"BSD-3-Clause"
] | null | null | null | m2py/misc/matlab2py.py | caiorss/m2py | 4a8f754f04adb151b1967fe13b8f80b4ec169560 | [
"BSD-3-Clause"
] | 3 | 2017-04-02T00:21:24.000Z | 2021-08-19T14:11:23.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Matlab to Python Code Converter
"""
import re
#print re.sub(r"\[((\S)\s*)+\]", "\[ \2 \]", txt)
function_template = \
"""
def {functioname}({arguments}):
{code}
return {outputs}
"""
def tralaste_general(text):
text= text.replace(';', '') # Remove semicolon end line
text = text.replace('%', '#') # Replace commentary symbol
# Replace multiline comment
text = text.replace("%}", '"""')
text = text.replace("%{", '"""')
# Remove endfunction statement
text = text.replace("endfunction", "")
# Translate logic operators
text = text.replace(r"||", r"or")
text = text.replace(r"&&", r"and")
text = text.replace(r"~=", r"!=")
text = text.replace("(end)", "(-1)")
# Translate exponential operator
text = text.replace("^", "**")
# Missing values; IEEE-754 floating point status flags
    # Raw strings so that \b is a regex word boundary (in a plain string it is a backspace)
    text = re.sub(r'\bNaN\b', 'nan', text)
    text = re.sub(r'\bInf\b', 'inf', text)
    text = re.sub(r'\beps\b', 'spacing(1)', text)
return text
def translate_lists(text):
i = re.finditer("(\[(.*?)\])", text)
for j in i:
rep = j.group(0)
rep2 = re.sub(r'(\s+)', r',\1', rep)
text = re.sub(re.escape(rep), rep2, text)
text = re.sub('\[,', '[', text)
return text
def translate_arange(text):
k = re.compile("\((.*):(.*):(.*)\)'?")
i = k.finditer(text)
for j in i:
rep = j.group(0)
#print "rep = ", rep
#print (j.group(1), j.group(2), j.group(3))
r=j.groups()
rep2 = "arange(%s, %s, %s)" % (r[0], r[2], r[1])
text = re.sub(re.escape(rep), rep2, text)
return text
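To illustrate what the range-translation pass produces, here is a standalone restatement of `translate_arange` (same regex as above; note that the pattern silently drops a trailing Matlab transpose quote, and the greedy `.*` assumes one range per line):

```python
import re


def translate_arange(text):
    # Matlab (start:step:stop) -> numpy arange(start, stop, step)
    pattern = re.compile(r"\((.*):(.*):(.*)\)'?")
    for match in pattern.finditer(text):
        start, step, stop = match.groups()
        replacement = "arange(%s, %s, %s)" % (start, stop, step)
        text = re.sub(re.escape(match.group(0)), replacement, text)
    return text


print(translate_arange("x = (0:0.2:10)';"))  # x = arange(0, 10, 0.2);
```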
def translate_builtin_functions(text):
# disp ---> print
text = re.sub('disp', 'print', text)
text = re.sub('puts', 'print', text)
# Convert plot(x, [y1, y2, y3] to ---> plotx(x, [y1, y2, y3])
text = re.sub('\s(plot)(\(\s*.*,\s*\[.*\]\s*\))', r'\nplotx\2', text)
return text
def translate_function(code):
"""
Translate the function statement of mfiles:
{} - Curly brackets indicates optional values:
File: functioname.m
--------------
% File comment
%
function {[returns] =} <functioname> ({arguments})
<source code>
"""
text = code
it = re.finditer(r"function\s+\[(.*)\]\s*=(.*)\((.*)\)", text, re.M)
# dummy mark to mark expression position
mark = "xxxxx-----"
for i in it:
# print pprint(i.groups())
rep = i.group(0)
outputs, functioname, arguments = i.groups()
text = text.replace(rep, mark)
# print "rep ", rep
# print "outputs ", outputs
# print "functioname ", functioname
# print "arguments ", arguments
try:
header, code = text.split(mark)
except ValueError:
return code
#print "\n\n\n\n"
#print header
#print "--------"
#print code
code = "\n".join([4*' ' + line for line in code.splitlines()])
    text = function_template.format(functioname=functioname,
                                    arguments=arguments,
                                    outputs=outputs,
                                    code=code)
text = header + text
return text
def translate(text):
# Remove trailing new-line character
# and whitespace
text = text.strip()
text = text.strip('\n')
text = tralaste_general(text)
text = translate_arange(text)
text = translate_lists(text)
text = translate_builtin_functions(text)
text = translate_function(text)
return text
code1 = """
f = prod(1:n);
x = (0:0.2:10)';
y1 = trimf(x, [3 4 5]);
y2 = trimf(x, [2 4 7]); % some commentary
y3 = trimf(x, [1 4 9]);
subplot(211)
plot(x, [y1 y2 y3]);%some commentary4344
y1 = trimf(x, [2 3 5]);
y2 = trimf(x, [3 4 7]);
y3 = trimf(x, [4 5 9]); % Another commentary
subplot(212)
plot(x, [y1 y2 y3]);
set(gcf, 'name', 'trimf', 'numbertitle', 'off')
"""
code2 = """
%
% this function do something else
%
function [u]=Standard_FLC(V_mu,V_m)
%Part_I :Member-ship Functions
%Creates a new Mamdani-style FIS structure
a=newfis('optipaper');
a=addvar(a,'input','relvelo',[-1.5 1.5]);
a=addmf(a,'input',1,'n','trapmf',[-1.5 -1.5 -1 0]);
a=addmf(a,'input',1,'z','trapmf',[-0.25 -0.05 0.05 0.25]);
a=addmf(a,'input',1,'p','trapmf',[0 1 1.5 1.5]);
a=addvar(a,'input','velo',[-1.5 1.5]);
a=addmf(a,'input',2,'n','trapmf',[-1.5 -1.5 -1 0]);
a=addmf(a,'input',2,'z','trapmf',[-0.25 -0.05 0.05 0.25]);
a=addmf(a,'input',2,'p','trapmf',[0 1 1.5 1.5]);
a=addvar(a,'output','damper rate',[0 10000]);
a=addmf(a,'output',1,'s','trapmf', [0 0 1800 5000]);
a=addmf(a,'output',1,'m','trimf', [2400 5000 7500]);
a=addmf(a,'output',1,'l','trapmf', [5000 10000 15000 15000]);
ruleList=[ 1 1 3 1 1 1 2 2 1 1 1 3 1 1 1 2 1 2 1 1 2 2 2 1 1 2 3 2 1 1 3 1 1 1 1 3 2 2 1 1 3 3 3 1 1 ];
a=addrule(a,ruleList);
FLC_input=[V_mu,V_m];%defining inputs to fuzzy
u=evalfis(FLC_input,a);%evaluating output a.fis
%fuzzy(a)%--- displays the FIS Editor.%
%note it will display FIS editor for %every time step so for 10 sec it will produce 1001 FIS editors.
%mfedit(a)%---- displays the Membership Function Editor.
%ruleedit(a)%--- displays the Rule Editor.
%ruleview(a)%--- displays the Rule Viewer. %surfview(a)%---- displays the Surface View
"""
code = """
function v2_pT = v2_pT(p, T)
%Release on the IAPWS Industrial formulation 1997 for the Thermodynamic Properties of Water and Steam, September 1997
%6 Equations for Region 2, Section. 6.1 Basic Equation
%Table 11 and 12, Page 14 and 15
J0 = [0, 1, -5, -4, -3, -2, -1, 2, 3];
n0 = [-9.6927686500217, 10.086655968018, -0.005608791128302, 0.071452738081455, -0.40710498223928, 1.4240819171444, -4.383951131945, -0.28408632460772, 0.021268463753307];
Ir = [1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 4, 4, 4, 5, 6, 6, 6, 7, 7, 7, 8, 8, 9, 10, 10, 10, 16, 16, 18, 20, 20, 20, 21, 22, 23, 24, 24, 24];
Jr = [0, 1, 2, 3, 6, 1, 2, 4, 7, 36, 0, 1, 3, 6, 35, 1, 2, 3, 7, 3, 16, 35, 0, 11, 25, 8, 36, 13, 4, 10, 14, 29, 50, 57, 20, 35, 48, 21, 53, 39, 26, 40, 58];
nr = [-1.7731742473213E-03, -0.017834862292358, -0.045996013696365, -0.057581259083432, -0.05032527872793, -3.3032641670203E-05, -1.8948987516315E-04, -3.9392777243355E-03, -0.043797295650573, -2.6674547914087E-05, 2.0481737692309E-08, 4.3870667284435E-07, -3.227767723857E-05, -1.5033924542148E-03, -0.040668253562649, -7.8847309559367E-10, 1.2790717852285E-08, 4.8225372718507E-07, 2.2922076337661E-06, -1.6714766451061E-11, -2.1171472321355E-03, -23.895741934104, -5.905956432427E-18, -1.2621808899101E-06, -0.038946842435739, 1.1256211360459E-11, -8.2311340897998, 1.9809712802088E-08, 1.0406965210174E-19, -1.0234747095929E-13, -1.0018179379511E-09, -8.0882908646985E-11, 0.10693031879409, -0.33662250574171, 8.9185845355421E-25, 3.0629316876232E-13, -4.2002467698208E-06, -5.9056029685639E-26, 3.7826947613457E-06, -1.2768608934681E-15, 7.3087610595061E-29, 5.5414715350778E-17, -9.436970724121E-07];
R = 0.461526; %kJ/(kg K)
Pi = p;
tau = 540 / T;
g0_pi = 1 / Pi;
gr_pi = 0;
for i = 1 : 43
gr_pi = gr_pi + nr(i) * Ir(i) * Pi ^ (Ir(i) - 1) * (tau - 0.5) ^ Jr(i);
end
v2_pT = R * T / p * Pi * (g0_pi + gr_pi) / 1000;
"""
text = translate(code)
print(text)
# for i in it:
#
# print i.start(1)
# print i.group(1)
#
#
# rep = "\b" + i.group(1)
# rep2 = "lotx"
# text = re.sub(rep, rep2, text)
#print text
| 28.288973 | 906 | 0.58871 | 1,155 | 7,440 | 3.765368 | 0.290043 | 0.040469 | 0.03794 | 0.005519 | 0.141642 | 0.07358 | 0.072201 | 0.068981 | 0.048747 | 0.048747 | 0 | 0.210517 | 0.210215 | 7,440 | 262 | 907 | 28.396947 | 0.529612 | 0.151344 | 0 | 0.11194 | 0 | 0.104478 | 0.62433 | 0.158519 | 0 | 0 | 0 | 0 | 0 | 1 | 0.044776 | false | 0 | 0.007463 | 0 | 0.104478 | 0.022388 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
0f2290dc9ea88b725155415d92303556ef43548b | 2,547 | py | Python | pyGRBz/models.py | dcorre/pyGRBz | 4955e9454a19fcc409649ad623c31d5bec66cc64 | [
"MIT"
] | 2 | 2020-05-21T15:06:48.000Z | 2021-08-17T07:22:09.000Z | pyGRBz/models.py | dcorre/pyGRBz | 4955e9454a19fcc409649ad623c31d5bec66cc64 | [
"MIT"
] | null | null | null | pyGRBz/models.py | dcorre/pyGRBz | 4955e9454a19fcc409649ad623c31d5bec66cc64 | [
"MIT"
] | 3 | 2021-07-29T10:42:16.000Z | 2022-03-11T07:15:53.000Z | # -*- coding: utf-8 -*-
from pyGRBz.extinction_correction import sed_extinction
def SPL_lc(t, F0, t0, norm, alpha):
return norm * F0 * (t / t0) ** (-alpha)
def BPL_lc(t, F0, norm, alpha1, alpha2, t1, s):
return (
norm
* F0
* ((t / t1) ** (-s * alpha1) +
(t / t1) ** (-s * alpha2)) ** (-1.0 / s)
)
def SPL_sed(wvl, F0, wvl0, norm, beta):
return norm * F0 * (wvl / wvl0) ** beta
def BPL_sed(wvl, F0, norm, beta1, beta2, wvl1, s):
return (
norm
* F0
* ((wvl / wvl1) ** (s * beta1) +
(wvl / wvl1) ** (s * beta2)) ** (1.0 / s)
)
def template1(wvl, t, F0, wvl0, t0, norm, alpha, beta, z, Av, ext_law,
Host_dust, Host_gas, MW_dust, MW_gas, DLA, igm_att):
return (
norm
* F0
* (t / t0) ** (-alpha)
* (wvl / wvl0) ** beta
* sed_extinction(wvl, z, Av, ext_law=ext_law, Host_dust=Host_dust,
Host_gas=Host_gas, MW_dust=MW_dust,
MW_gas=MW_gas, DLA=DLA, igm_att=igm_att)
)
def template2(wvl, t, F0, wvl0, norm, alpha1, alpha2, t1, s, beta,
z, Av, ext_law, Host_dust, Host_gas, MW_dust, MW_gas, DLA,
igm_att):
Flux = (
norm
* F0
* ((t / t1) ** (-s * alpha1) +
(t / t1) ** (-s * alpha2)) ** (-1.0 / s)
* (wvl / wvl0) ** beta
* sed_extinction(wvl, z, Av, ext_law=ext_law, Host_dust=Host_dust,
Host_gas=Host_gas, MW_dust=MW_dust,
MW_gas=MW_gas, DLA=DLA, igm_att=igm_att)
)
return Flux
def Flux_template(wvl, F0, wvl0, norm, beta, z, Av, ext_law,
Host_dust, Host_gas, MW_dust, MW_gas, DLA, igm_att):
Flux = SPL_sed(wvl, F0, wvl0, norm, beta) * sed_extinction(
wvl,
z,
Av,
ext_law=ext_law,
Host_dust=Host_dust,
Host_gas=Host_gas,
MW_dust=MW_dust,
MW_gas=MW_gas,
DLA=DLA,
igm_att=igm_att,
)
return Flux
def SPL(wvl, t, F0, wvl0, t0, norm, beta, alpha):
return norm * F0 * (wvl / wvl0) ** beta * (t / t0) ** (-alpha)
def BPL(wvl, t, F0, wvl0, t0, norm, beta, alpha1, alpha2, s):
return (
norm
* F0
* (wvl / wvl0) ** beta
* ((t / t0) ** (-s * alpha1) +
(t / t0) ** (-s * alpha2)) ** (-1.0 / s)
)
"""
def Flux_template(wvl,t,F0,wvl0,t0,norm,beta,alpha,z,Av):
return SPL(wvl,t,F0,wvl0,t0,norm,beta,alpha) * sed_extinction(wvl,z,Av)
"""
| 26.810526 | 75 | 0.498626 | 368 | 2,547 | 3.269022 | 0.125 | 0.044888 | 0.089776 | 0.049875 | 0.797174 | 0.710723 | 0.631754 | 0.580216 | 0.516209 | 0.469659 | 0 | 0.048329 | 0.341971 | 2,547 | 94 | 76 | 27.095745 | 0.669451 | 0.008245 | 0 | 0.449275 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.130435 | false | 0 | 0.014493 | 0.101449 | 0.275362 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
0f235260c07430ef0e87025b940d17350563652a | 2,480 | py | Python | xsd-fu/python/generateDS/tests/test.py | jburel/ome-model | 4817c8dfcbe3bfbeafe899c489657769d7ebca60 | [
"BSD-2-Clause"
] | null | null | null | xsd-fu/python/generateDS/tests/test.py | jburel/ome-model | 4817c8dfcbe3bfbeafe899c489657769d7ebca60 | [
"BSD-2-Clause"
] | null | null | null | xsd-fu/python/generateDS/tests/test.py | jburel/ome-model | 4817c8dfcbe3bfbeafe899c489657769d7ebca60 | [
"BSD-2-Clause"
] | null | null | null | #!/usr/bin/env python
import sys, popen2
import getopt
import unittest
class GenTest(unittest.TestCase):
def test_1_generate(self):
cmd = 'python ../generateDS.py -f -o out2sup.py -s out2sub.py --super=out2sup -u gends_user_methods people.xsd'
outfile, infile = popen2.popen2(cmd)
result = outfile.read()
outfile.close()
infile.close()
self.failUnless(len(result) == 0)
def test_2_compare_superclasses(self):
cmd = 'diff out1sup.py out2sup.py'
outfile, infile = popen2.popen2(cmd)
result = outfile.read()
outfile.close()
infile.close()
#print 'len(result):', len(result)
# Ignore the differing lines containing the date/time.
#self.failUnless(len(result) < 130 and result.find('Generated') > -1)
self.failUnless(check_result(result))
def test_3_compare_subclasses(self):
cmd = 'diff out1sub.py out2sub.py'
outfile, infile = popen2.popen2(cmd)
result = outfile.read()
outfile.close()
infile.close()
# Ignore the differing lines containing the date/time.
#self.failUnless(len(result) < 130 and result.find('Generated') > -1)
self.failUnless(check_result(result))
def check_result(result):
flag1 = 0
flag2 = 0
lines = result.split('\n')
len1 = len(lines)
if len1 <= 5:
flag1 = 1
s1 = '\n'.join(lines[:4])
if s1.find('Generated') > -1:
flag2 = 1
return flag1 and flag2
# Make the test suite.
def suite():
# The following is obsolete. See Lib/unittest.py.
#return unittest.makeSuite(GenTest)
loader = unittest.TestLoader()
testsuite = loader.loadTestsFromTestCase(GenTest)
return testsuite
# Make the test suite and run the tests.
def test():
testsuite = suite()
runner = unittest.TextTestRunner(sys.stdout, verbosity=2)
runner.run(testsuite)
USAGE_TEXT = """
Usage:
python test.py [options]
Options:
-h, --help Display this help message.
Example:
python test.py
"""
def usage():
print USAGE_TEXT
sys.exit(-1)
def main():
args = sys.argv[1:]
try:
opts, args = getopt.getopt(args, 'h', ['help'])
except:
usage()
relink = 1
for opt, val in opts:
if opt in ('-h', '--help'):
usage()
if len(args) != 0:
usage()
test()
if __name__ == '__main__':
main()
#import pdb
#pdb.run('main()')
| 24.07767 | 119 | 0.606855 | 310 | 2,480 | 4.780645 | 0.364516 | 0.047233 | 0.038462 | 0.050607 | 0.311741 | 0.311741 | 0.311741 | 0.311741 | 0.311741 | 0.311741 | 0 | 0.026287 | 0.26371 | 2,480 | 102 | 120 | 24.313725 | 0.785323 | 0.1875 | 0 | 0.242857 | 1 | 0.014286 | 0.154845 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.042857 | null | null | 0.014286 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
0f2962652ce3e074e953b056fb2409a9830faf77 | 50 | py | Python | lesson-1/settings.py | alexyvassili/devman-async | 758859b3142cef61e34c663ae9677b955ea1f225 | [
"MIT"
] | null | null | null | lesson-1/settings.py | alexyvassili/devman-async | 758859b3142cef61e34c663ae9677b955ea1f225 | [
"MIT"
] | null | null | null | lesson-1/settings.py | alexyvassili/devman-async | 758859b3142cef61e34c663ae9677b955ea1f225 | [
"MIT"
] | null | null | null | FPS = 20
ANIMATIONS_FOLDER = 'animations_frames'
| 12.5 | 39 | 0.78 | 6 | 50 | 6.166667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.046512 | 0.14 | 50 | 3 | 40 | 16.666667 | 0.813953 | 0 | 0 | 0 | 0 | 0 | 0.34 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
0f2c4c4cf34a0795527faaa6778da02968635989 | 648 | py | Python | connect4/consumers.py | duckpodger/connect4 | ffdbe25d57da16faf066c133b909925b621b8e9d | [
"MIT"
] | null | null | null | connect4/consumers.py | duckpodger/connect4 | ffdbe25d57da16faf066c133b909925b621b8e9d | [
"MIT"
] | null | null | null | connect4/consumers.py | duckpodger/connect4 | ffdbe25d57da16faf066c133b909925b621b8e9d | [
"MIT"
] | null | null | null | from channels import Group
from channels.auth import channel_session_user, channel_session_user_from_http
@channel_session_user_from_http
def connect_game(message, game):
Group('game-' + game).add(message.reply_channel)
message.reply_channel.send({'accept':True})
@channel_session_user
def game_disconnect(message, game):
Group('game-' + game).discard(message.reply_channel)
@channel_session_user_from_http
def connect_games(message):
Group('games').add(message.reply_channel)
message.reply_channel.send({'accept':True})
@channel_session_user
def games_disconnect(message):
Group('games').discard(message.reply_channel) | 30.857143 | 78 | 0.791667 | 88 | 648 | 5.511364 | 0.238636 | 0.173196 | 0.22268 | 0.136082 | 0.614433 | 0.461856 | 0.461856 | 0.313402 | 0.313402 | 0.313402 | 0 | 0 | 0.094136 | 648 | 21 | 79 | 30.857143 | 0.826235 | 0 | 0 | 0.375 | 0 | 0 | 0.049307 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.125 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
0f3005939320ae47b3a2a231f885f696e1c04e39 | 302 | py | Python | gather_vision/tests/test_mgmt_cmd_playlists.py | anotherbyte-net/gather-vision | c86aff76dbe698fea1eae9d93c42e65ccfd5ba99 | [
"Apache-2.0"
] | null | null | null | gather_vision/tests/test_mgmt_cmd_playlists.py | anotherbyte-net/gather-vision | c86aff76dbe698fea1eae9d93c42e65ccfd5ba99 | [
"Apache-2.0"
] | 1 | 2021-12-23T12:28:14.000Z | 2021-12-23T12:28:14.000Z | gather_vision/tests/test_mgmt_cmd_playlists.py | anotherbyte-net/gather-vision | c86aff76dbe698fea1eae9d93c42e65ccfd5ba99 | [
"Apache-2.0"
] | null | null | null | from io import StringIO
from django.core.management import call_command
from django.test import TestCase
class OutageTest(TestCase):
def test_command_output(self):
out = StringIO()
# call_command("playlists", stdout=out)
# self.assertIn("Expected output", out.getvalue())
| 27.454545 | 58 | 0.718543 | 37 | 302 | 5.756757 | 0.594595 | 0.093897 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.18543 | 302 | 10 | 59 | 30.2 | 0.865854 | 0.284768 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.5 | 0 | 0.833333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
0f4886682a12fbe6e4e7ceb87a9ed5367cdf3a96 | 2,194 | py | Python | workflow/migrations/0023_auto_20190831_0542.py | tanmayagarwal/Activity-CE | a49c47053b191ffa5aee9a06e66a7c9644804434 | [
"Apache-2.0"
] | 1 | 2021-07-07T14:39:23.000Z | 2021-07-07T14:39:23.000Z | workflow/migrations/0023_auto_20190831_0542.py | michaelbukachi/Activity | f3d4f4da88ae9539c341ca73cc559b850693d669 | [
"Apache-2.0"
] | null | null | null | workflow/migrations/0023_auto_20190831_0542.py | michaelbukachi/Activity | f3d4f4da88ae9539c341ca73cc559b850693d669 | [
"Apache-2.0"
] | null | null | null | # Generated by Django 2.2.4 on 2019-08-31 12:42
from django.db import migrations, models
class Migration(migrations.Migration):

    dependencies = [
        ('workflow', '0022_auto_20190830_0126'),
    ]

    operations = [
        migrations.AlterField(
            model_name='organization',
            name='form_label',
            field=models.CharField(default='Forms', max_length=255, verbose_name='Form Organization label'),
        ),
        migrations.AlterField(
            model_name='organization',
            name='indicator_label',
            field=models.CharField(default='Indicators', max_length=255, verbose_name='Indicator Organization label'),
        ),
        migrations.AlterField(
            model_name='organization',
            name='level_1_label',
            field=models.CharField(blank=True, default='Programs', max_length=255, verbose_name='Project/Program Organization Level 1 label'),
        ),
        migrations.AlterField(
            model_name='organization',
            name='level_2_label',
            field=models.CharField(blank=True, default='Projects', max_length=255, verbose_name='Project/Program Organization Level 2 label'),
        ),
        migrations.AlterField(
            model_name='organization',
            name='level_3_label',
            field=models.CharField(blank=True, default='Components', max_length=255, verbose_name='Project/Program Organization Level 3 label'),
        ),
        migrations.AlterField(
            model_name='organization',
            name='level_4_label',
            field=models.CharField(blank=True, default='Activities', max_length=255, verbose_name='Project/Program Organization Level 4 label'),
        ),
        migrations.AlterField(
            model_name='organization',
            name='site_label',
            field=models.CharField(default='Locations', max_length=255, verbose_name='Site Organization label'),
        ),
        migrations.AlterField(
            model_name='organization',
            name='stakeholder_label',
            field=models.CharField(default='Stakeholders', max_length=255, verbose_name='Stakeholder Organization label'),
        ),
    ]
# file: netvisor/responses/accounting.py (MIT)
# -*- coding: utf-8 -*-
"""
netvisor.responses.accounting
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:copyright: (c) 2013-2016 by Fast Monkeys Oy.
:license: MIT, see LICENSE for more details.
"""
from ..schemas import AccountingListSchema
from .base import Response
class AccountingListResponse(Response):
    schema_cls = AccountingListSchema
    tag_name = 'vouchers'
# file: example/store/models.py (BSD-3-Clause)
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models
from protected_media.models import ProtectedFileField
class FileCollection(models.Model):
    public_file = models.FileField(upload_to="collection")
    protected_file = ProtectedFileField(upload_to="collection")
# file: components/volume.py (dwm-status, MIT)
from threading import Thread, Timer
import sh
from dwm_status_events import on_signal, trigger_change_event
from status2d import VerticalBar, xres
class Volume:
    def __init__(self):
        self.details = ""
        # Pass the bound method as the thread target: the original called
        # self.set_details() immediately, running it synchronously and
        # handing Thread its return value instead of a callable.
        Thread(target=self.set_details).start()

    def get_details(self):
        vol = int(sh.pamixer("--get-volume"))
        bar = VerticalBar(4, vol, xres["14"], 5).draw()
        return bar

    @on_signal
    @trigger_change_event
    def set_details(self):
        self.details = self.get_details()
        Timer(3, self.set_details).start()

    def __str__(self):
        return self.details
# file: debug.py (led-scoreboard, MIT)
import src.data.scoreboard_config
import time
import sys
debug_enabled = False
time_format = "%H"
def set_debug_status(config):
    global debug_enabled
    debug_enabled = config.debug
    global time_format
    time_format = config.time_format


def __debugprint(text):
    print(text)
    sys.stdout.flush()


def log(text):
    if debug_enabled:
        __debugprint("DEBUG ({}): {}".format(__timestamp(), text))


def warning(text):
    __debugprint("WARNING ({}): {}".format(__timestamp(), text))


def error(text):
    __debugprint("ERROR ({}): {}".format(__timestamp(), text))


def info(text):
    __debugprint("INFO ({}): {}".format(__timestamp(), text))


def __timestamp():
    return time.strftime("{}:%M:%S".format(time_format), time.localtime())
# file: tonic/torch/agents/__init__.py (MIT)
from .agent import Agent
from .a2c import A2C # noqa
from .ddpg import DDPG
from .d4pg import D4PG # noqa
from .mpo import MPO
from .ppo import PPO
from .sac import SAC
from .td3 import TD3
from .trpo import TRPO
# __all__ entries must be strings; listing the classes themselves breaks
# `from tonic.torch.agents import *`.
__all__ = ["Agent", "A2C", "DDPG", "D4PG", "MPO", "PPO", "SAC", "TD3", "TRPO"]
# file: data/creating_csv_for_kaggle.py (MIT)
# coding: utf-8
# In[2]:
import keras
import scipy as sp
import scipy.misc, scipy.ndimage.interpolation
from medpy import metric
import numpy as np
import os
from keras import losses
import tensorflow as tf
from keras.models import Model
from keras.layers import Input,merge, concatenate, Conv2D, MaxPooling2D, Activation, UpSampling2D,Dropout,Conv2DTranspose,add,multiply,Flatten,Dense
from keras.layers.normalization import BatchNormalization as bn
from keras.callbacks import ModelCheckpoint, TensorBoard
from keras.optimizers import RMSprop
from keras import regularizers
from keras import backend as K
from keras.optimizers import Adam
from keras.callbacks import ModelCheckpoint
import nibabel as nib
CUDA_VISIBLE_DEVICES = [0]
os.environ['CUDA_VISIBLE_DEVICES']=','.join([str(x) for x in CUDA_VISIBLE_DEVICES])
#oasis files 1-457
import h5py
path='/home/bahaa/oasis_mri/OAS1_'
# In[3]:
import cv2
from keras.models import load_model
model = load_model('basic_dense_net_dsp_round2.h5')
print(model.summary())
import csv
# In[62]:
fields=['id','landmarks']
with open(r'name.csv', 'a') as f:
    writer = csv.writer(f)
    writer.writerow(fields)
import glob
a=glob.glob('/home/rdey/dsp_final/test/*.jpg')
X_test=[]
print(len(a))
#print(a[0].replace('/home/rdey/dsp_final/train/','').replace('.jpg',''))
for i in range(0, len(a)):
    if i % 1000 == 0:
        print(i)
    # print(('/home/rdey/dsp_final/train/'+str(a[i].replace('/home/rdey/dsp_final/train/','').strip())))
    temp_x = cv2.imread(('/home/rdey/dsp_final/test/' + str(a[i].replace('/home/rdey/dsp_final/test/', '').strip())), 1)
    temp_x = cv2.resize(temp_x, (64, 64)).astype('float32')
    predicted = model.predict(np.array(temp_x).reshape((1,) + temp_x.shape))
    # print(predicted.shape)
    # Take the class with the highest predicted probability.
    max_value = 0
    max_loc = 0
    for j in range(0, len(predicted[0])):
        if predicted[0][j] > max_value:
            max_value = predicted[0][j]
            max_loc = j
    fields = [str(a[i].replace('/home/rdey/dsp_final/test/', '').strip()).replace('.jpg', ''),
              str(max_loc) + ' ' + str(max_value)]
    with open(r'name.csv', 'a') as f:
        writer = csv.writer(f)
        writer.writerow(fields)
    # except:
    #     print('error', i)
# file: tests/lib/__init__.py (tox, MIT)
import subprocess
import pytest
def need_executable(name, check_cmd):
    def wrapper(fn):
        try:
            subprocess.check_output(check_cmd)
        except OSError:
            return pytest.mark.skip(reason="{} is not available".format(name))(fn)
        return fn

    return wrapper


def need_git(fn):
    return pytest.mark.git(need_executable("git", ("git", "--version"))(fn))
# file: gemato/util.py (BSD-2-Clause)
# gemato: Utility functions
# vim:fileencoding=utf-8
# (c) 2017 Michał Górny
# Licensed under the terms of 2-clause BSD license
def path_starts_with(path, prefix):
    """
    Returns True if the specified @path starts with the @prefix,
    performing component-wide comparison. Otherwise returns False.
    """
    return prefix == "" or (path + "/").startswith(prefix.rstrip("/") + "/")


def path_inside_dir(path, directory):
    """
    Returns True if the specified @path is inside @directory,
    performing component-wide comparison. Otherwise returns False.
    """
    return ((directory == "" and path != "")
            or path.rstrip("/").startswith(directory.rstrip("/") + "/"))


def throw_exception(e):
    """
    Raise the given exception. Needed for onerror= argument
    to os.walk(). Useful for other callbacks.
    """
    raise e
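A standalone check of the component-wise comparison the two path helpers above implement; the function bodies are copied verbatim so the snippet runs without the gemato package installed.

```python
def path_starts_with(path, prefix):
    # Appending "/" to both sides makes the prefix match whole components,
    # so "foo/bar" starts with "foo" but not with the partial component "fo".
    return prefix == "" or (path + "/").startswith(prefix.rstrip("/") + "/")


def path_inside_dir(path, directory):
    # A path is inside a directory only if it is strictly below it; a path
    # is never "inside" itself.
    return ((directory == "" and path != "")
            or path.rstrip("/").startswith(directory.rstrip("/") + "/"))


print(path_starts_with("foo/bar", "foo"))   # True
print(path_starts_with("foo/bar", "fo"))    # False
print(path_inside_dir("foo/bar", "foo"))    # True
print(path_inside_dir("foo", "foo"))        # False
```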
# file: setup.py (hoi3tools, MIT)
from setuptools import setup
from hoi3tools import __version__
with open("README.md", encoding="utf-8") as readme:
    long_description = readme.read()

setup(
    name="hoi3tools",
    version=__version__,
    author="Daniel Zakharov",
    author_email="daniel734@bk.ru",
    description="hoi3tools",
    long_description=long_description,
    long_description_content_type="text/markdown",
    keywords="hoi3 game",
    url="https://github.com/jDan735/hoi3tools",
    license="MIT",
    include_package_data=True,
    packages=["hoi3tools"],
    classifiers=[
        "Programming Language :: Python :: 3",
        "Programming Language :: Python :: 3.4",
        "Programming Language :: Python :: 3.5",
        "Programming Language :: Python :: 3.6",
        "Programming Language :: Python :: 3.7",
        "Programming Language :: Python :: 3.8",
        "License :: OSI Approved :: MIT License",
    ],
    python_requires=">=3",
    entry_points={
        "console_scripts": [
            "hoi3tools=hoi3tools.h3t:main",
            "h3t=hoi3tools.h3t:main"
        ]
    }
)
# file: backend/backend/apps/api/views.py (MIT)
from django.views.decorators.http import require_http_methods
from django.apps import apps
from .helpers.GetAlertsByProviderView import GetAlertsByProviderView
from .helpers.PatchAlertsByProviderView import PatchAlertsByProviderView
app_config = apps.get_app_config('api')
@require_http_methods(["GET"])
def get_alerts_by_provider_view(request):
    response_data = GetAlertsByProviderView(request).execute()
    return response_data


@require_http_methods(["PATCH"])
def patch_alerts_by_provider_view(request, alert_id):
    response_data = PatchAlertsByProviderView(request).execute(alert_id)
    return response_data
# file: config.py (intro-to-algotrading-workshop, MIT)
BASE_URL = "https://paper-api.alpaca.markets"
KEY = "your key here"
SECRET_KEY = "your secret key here"
HEADERS = {"APCA-API-KEY-ID": KEY, "APCA-API-SECRET-KEY": SECRET_KEY}
# headers are used to authenticate our request

# file: RandomPasswordGenerator.py (MIT)
""" import random
import string

def GenerateRandomPass(size):
    # range(size), not range[size]: range is called, not indexed.
    RandomPassword = ''.join([random.choice(string.ascii_letters + string.digits) for n in range(size)])
    return RandomPassword

password = GenerateRandomPass(7)
print(password) """
# Importing random to generate
# random string sequence
import random
# Importing string library function
import string
def rand_pass(size):
    # Takes random choices from
    # ascii_letters and digits
    generate_pass = ''.join([random.choice(string.ascii_letters + string.digits) for n in range(size)])
    return generate_pass


# Driver Code
password = 'nac'
it = 0
for i in range(it, 100):
    while password == 'nac':
        password = rand_pass(3)
        it = it + 1
        continue
print(password)
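A hedged, more robust variant of the generator above: for real passwords the stdlib `secrets` module is preferred over `random`, which is not cryptographically secure. The helper name `secure_pass` is our own, not part of the original script.

```python
import secrets
import string


def secure_pass(size):
    # secrets.choice draws from the OS CSPRNG instead of the Mersenne
    # Twister used by random.choice.
    alphabet = string.ascii_letters + string.digits
    return ''.join(secrets.choice(alphabet) for _ in range(size))


print(secure_pass(12))
```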
# file: examples/urlScan.py (VTScan, MIT)
from VTScan import VTScan
Scan = VTScan()
detected = Scan.urlScan(url="https://www.google.com/")
for reports in detected:
    for report in reports:
        print(report)
# file: ogr/services/github/auth_providers/abstract.py (MIT)
# Copyright Contributors to the Packit project.
# SPDX-License-Identifier: MIT
from typing import Optional
import github
class GithubAuthentication:
    """
    Represents a token manager for authentication via GitHub App.
    """

    def get_token(self, namespace: str, repo: str) -> str:
        """
        Get a GitHub token for requested repository.

        Args:
            namespace: Namespace of the repository.
            repo: Name of the repository.

        Returns:
            A token that can be used in PyGithub instance for authentication.
        """
        raise NotImplementedError()

    @property
    def pygithub_instance(self) -> "github.Github":
        """
        Returns:
            Generic PyGithub instance. Used for `GitUser` for example.
        """
        raise NotImplementedError()

    @staticmethod
    def try_create(**kwargs) -> Optional["GithubAuthentication"]:
        """
        Tries to construct authentication object from provided keyword arguments.

        Returns:
            `GithubAuthentication` object or `None` if the creation was not
            successful.
        """
        raise NotImplementedError()
# file: 2/main.py (artificial-self-AMLD-2020, MIT)
# ---
# jupyter:
#   jupytext:
#     formats: ipynb,py:light
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.3.1
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---
# # Task 2: Making GPT-2 speak!
# We will
# * Learn how we can trigger conversational behaviour of GPT-2 with a simple "hack"
# * Be introduced to Huggingface's transformers project
# * See the logic which was so far abstracted away from us (by gpt-2-simple)
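The "hack" the bullets above refer to boils down to prompt formatting: conversational behaviour is triggered purely by how the text fed to GPT-2 is laid out before generation. This sketch shows the string-level idea only; the speaker labels are our own illustrative convention, not the workshop's exact format.

```python
# Build a dialogue-shaped prompt so the model's most likely continuation
# is the next conversational turn.
history = [
    ("Human", "Hello, who are you?"),
    ("Bot", "I am a language model."),
    ("Human", "What can you do?"),
]
prompt = "\n".join("{}: {}".format(speaker, text) for speaker, text in history)
# Ending with an open "Bot:" turn invites the model to speak as the bot.
prompt += "\nBot:"
print(prompt)
```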
# file: gblog/accounts/views.py (MIT)
from django.views.generic import CreateView
from django.http import HttpResponseRedirect
from django.contrib.auth import authenticate, login

from accounts.forms import RegistrationForm
from .models import Account


class AccountCreateView(CreateView):
    template_name = 'form.html'
    form_class = RegistrationForm
    extra_context = {'title': 'Register'}
    success_url = '/'

    def get_context_data(self, **kwargs):
        context = super(AccountCreateView, self).get_context_data(**kwargs)
        if 'form' not in context:
            context['form'] = self.form_class(self.request.GET)
        return context

    def post(self, request, *args, **kwargs):
        self.object = self.get_object
        form = self.form_class(request.POST)
        if form.is_valid():
            account = form.save()
            account.save()
            email = form.cleaned_data.get('email')
            raw_password = form.cleaned_data.get('password1')
            # authenticate() and login() need the django.contrib.auth import
            # added above; the original module used them without importing.
            account = authenticate(email=email, password=raw_password)
            login(request, account)
            return HttpResponseRedirect(self.get_success_url())
        else:
            return self.render_to_response(self.get_context_data(form=form))
# file: examples/00002_compose.py (tcga, MIT)
# RA, 2020-06-18
from tcga.utils import First
from tcga.codons import standard_rna as rna_codons
from tcga.complements import dna_to_rna, dna_to_dna
from tcga.strings import triplets, reverse
X = reverse("CACGAACTTGTCGAGACCATTGCC")
f = First(dna_to_dna.reverse).then(dna_to_rna).then(triplets).each(rna_codons).join(str)
print(X, "=>", f(X))
# file: novaclient/v2/paas/firewalls.py (Apache-2.0)
from novaclient import base
class Firewall(base.Resource):
    """A Firewall."""

    def delete(self):
        return self.manager.delete(self)


class FirewallManager(base.Manager):
    resource_class = Firewall

    def get(self, instance):
        return self._list('/os-daoli-firewalls/%s' % base.getid(instance),
                          'firewall')

    def create(self, **kwargs):
        body = {"firewall": kwargs}
        return self._create('/os-daoli-firewalls', body, 'firewall')

    def delete(self, firewall):
        self._delete('/os-daoli-firewalls/%s' % base.getid(firewall))

    def firewall_exist(self, instance_id, **kwargs):
        resp, body = self.api.client.post('/os-daoli-firewalls/%s/action' %
                                          instance_id, body=kwargs)
        return body

    def firewall_update(self, instance_id, **firewall):
        body = {"firewall": firewall}
        return self._update('/os-daoli-firewalls/%s' % instance_id, body)
# file: sphinx_d/c.py (sphinx-dev, MIT)
#coding=utf-8
"""
c module documentation
"""


class c_cc(object):
    """
    Description of the c_cc class.
    """

    def func_c_cc(self):
        """
        Description of func_c_cc.

        :return:
        """
        return None
7e686c4f63245c60f825e9c9fac742670423e246 | 1,264 | py | Python | rain_atten/powerlaw.py | bzohidov/TomoRain | a894249a666ff71289fbd878dbc31f6d2bd82701 | [
"MIT"
] | null | null | null | rain_atten/powerlaw.py | bzohidov/TomoRain | a894249a666ff71289fbd878dbc31f6d2bd82701 | [
"MIT"
] | null | null | null | rain_atten/powerlaw.py | bzohidov/TomoRain | a894249a666ff71289fbd878dbc31f6d2bd82701 | [
"MIT"
] | null | null | null | #-------------------------------------------------------------------------------
# Name: powerlaw.py
# Purpose: This is a set of power law coefficients(a,b) for the calculation of
# empirical rain attenuation model A = a*R^b.
#-------------------------------------------------------------------------------
def get_coef_ab(freq, mod_type = 'MP'):
'''
Returns power law coefficients according to model type (mod_type)
at a given frequency[Hz].
Input:
freq - frequency, [Hz].
Optional:
mod_type - model type for power law. This can be 'ITU_R2005',
'MP','GAMMA'. By default, mode_type = 'ITU_R2005'.
Output:
a,b - power law coefficients.
'''
if mod_type == 'ITU_R2005':
a = {'18': 0.07393, '23': 0.1285, '38': 0.39225}
b = {'18':1.0404605978, '23': 0.99222272,'38':0.8686641682}
elif mod_type == 'MP':
a = {'18': 0.05911087, '23': 0.1080751, '38': 0.37898495}
b = {'18': 1.08693514, '23': 1.05342886, '38': 0.92876888}
elif mod_type=='GAMMA':
a = {'18': 0.04570854, '23': 0.08174184, '38': 0.28520923}
b = {'18': 1.09211488, '23': 1.08105214, '38': 1.01426258}
freq = int(freq/1e9)
return a[str(freq)], b[str(freq)] | 39.5 | 80 | 0.496835 | 164 | 1,264 | 3.756098 | 0.463415 | 0.068182 | 0.097403 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.210041 | 0.227848 | 1,264 | 32 | 81 | 39.5 | 0.421107 | 0.488133 | 0 | 0 | 0 | 0 | 0.090604 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
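A minimal usage sketch of the power law above, with the 'MP' coefficients for 23 GHz copied from `get_coef_ab`. The rain rate is an illustrative value, not from the original repository.

```python
# 'MP' power-law coefficients for 23 GHz, as listed in get_coef_ab.
a, b = 0.1080751, 1.05342886

R = 10.0        # rain rate [mm/h] (illustrative value)
A = a * R ** b  # specific attenuation A = a*R^b, in dB/km
print(A)
```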
# === package/kedro_viz/__init__.py (kedro-org/kedro-viz, BSD-3-Clause-Clear / Apache-2.0) ===

"""Kedro plugin for visualising a Kedro pipeline"""
__version__ = "4.7.0"
# === constants.py (u-n-i-k/budget_tg_bot, MIT) ===

from dotenv import dotenv_values

env_vars = dotenv_values('.env')
# At module scope locals() is globals(), so this injects every .env entry
# (ADMIN_ID, LOGGER_CHAT_ID, ...) as a module-level name; using globals()
# makes that intent explicit and reliable.
globals().update(env_vars)

ALLOWED_IDS = [int(el) for el in EXTRA_ALLOWED_IDS.split(',') + [ADMIN_ID, LOGGER_CHAT_ID, FAMILY_BUDGET_CHAT_ID]]
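Since `dotenv_values` returns every value as a string, the `int()` conversion above is what turns the injected names into numeric chat IDs. A standalone sketch with hypothetical stand-in values (the real IDs live in `.env`):

```python
# Hypothetical stand-ins for the values normally injected from .env;
# dotenv_values always yields strings, hence the int() conversion.
EXTRA_ALLOWED_IDS = "111,222"
ADMIN_ID = "333"
LOGGER_CHAT_ID = "444"
FAMILY_BUDGET_CHAT_ID = "555"

ALLOWED_IDS = [int(el) for el in EXTRA_ALLOWED_IDS.split(',') + [ADMIN_ID, LOGGER_CHAT_ID, FAMILY_BUDGET_CHAT_ID]]
print(ALLOWED_IDS)  # → [111, 222, 333, 444, 555]
```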
# === PyGL/app.py (chaosphere2112/PyGL, MIT) ===

from OpenGL import GLUT


class Application(object):

    def __init__(self, title="PyGL Application", position=(0, 0), size=(640, 480), fullscreen=False, mouse=True):
        GLUT.glutInit()
        GLUT.glutInitDisplayMode(GLUT.GLUT_DOUBLE | GLUT.GLUT_RGB | GLUT.GLUT_DEPTH)
        GLUT.glutInitWindowPosition(*position)
        GLUT.glutInitWindowSize(*size)
        self.win_id = GLUT.glutCreateWindow(title)
        GLUT.glutDisplayFunc(self.display)
        if mouse:
            self.mouse_pos = (0, 0)
            GLUT.glutMotionFunc(self.move_mouse)
            GLUT.glutPassiveMotionFunc(self.move_mouse)
            GLUT.glutMouseFunc(self.mouse)
        if fullscreen:
            GLUT.glutFullScreen()

    def start(self):
        GLUT.glutMainLoop()

    def display(self):
        pass

    def move_mouse(self, x, y):
        self.mouse_pos = (x, y)
        self.move()

    def mouse(self, button, state, x, y):
        self.mouse_pos = (x, y)
        if state == GLUT.GLUT_DOWN:
            if button == GLUT.GLUT_LEFT_BUTTON:
                self.left_click()
            elif button == GLUT.GLUT_RIGHT_BUTTON:
                self.right_click()
            elif button == GLUT.GLUT_MIDDLE_BUTTON:
                self.middle_click()
            else:
                raise ValueError("Unknown button %s pressed" % str(button))
        elif state == GLUT.GLUT_UP:
            if button == GLUT.GLUT_LEFT_BUTTON:
                self.left_release()
            elif button == GLUT.GLUT_RIGHT_BUTTON:
                self.right_release()
            elif button == GLUT.GLUT_MIDDLE_BUTTON:
                self.middle_release()
            else:
                raise ValueError("Unknown button %s released" % str(button))

    def move(self):
        pass

    def left_click(self):
        pass

    def left_release(self):
        pass

    def middle_click(self):
        pass

    def middle_release(self):
        pass

    def right_click(self):
        pass

    def right_release(self):
        pass

    def keyboard(self, key, x, y):
        pass
# === test_all_modular_release.py (dzhliu/evoNSGA-II, Apache-2.0) ===

from pyGPGOMEA import GPGOMEARegressor as GPG
from sklearn.datasets import load_boston
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import StandardScaler as SS
import os, sys
import matplotlib.pyplot as plt
from pandas import DataFrame
from hyperVolume import HyperVolume
from scipy import stats
class MethodConfig:
    """Shared experiment settings for one comparison method; subclasses
    override only the fields that actually differ between methods."""

    pop_size = 500
    iterations = 100
    methodtype = "none"
    alphafunction = "none"
    algoframework = "NSGA2"
    abbr_name = "none"
    display_name = "none"  # prefix used in the printed summary strings

    def __init__(self):
        self.max_run = 30
        self.init_seed = 42
        self.max_explen = 100
        self.crossover_proportion = 0.5
        self.mutation_proportion = 0.5
        self.prob = "multiobj"
        self.multiobj = "symbreg_size"
        self.tournament = 7
        # strings used for outputting information
        self.RunName_best = self.display_name + "_best_run = "
        self.RunName_worse = self.display_name + "_worse_run = "
        self.RunName_median = self.display_name + "_median_run = "
        self.HVList_name = "HV_list_" + self.display_name + " = "
        # hypervolume statistics recorded after the run (training set)
        self.bestHV = self.worseHV = self.medianHV = self.averageHV = -1
        self.bestid = self.worseid = self.medianid = -1
        self.HV_std = -1
        self.HV_list = []
        # ...and the same statistics on the test set
        self.bestHV_test = self.worseHV_test = self.medianHV_test = self.averageHV_test = -1
        self.bestid_test = self.worseid_test = self.medianid_test = -1
        self.HV_std_test = -1
        self.HV_list_test = []


class Linear(MethodConfig):
    pop_size = 1000   # this method uses a different budget than the others
    iterations = 1
    methodtype = "alpha_dominance"
    alphafunction = "linear"
    abbr_name = "alpha_dominance_linear"
    display_name = "linear"


class Sigmoid(MethodConfig):
    methodtype = "alpha_dominance"
    alphafunction = "sigmoid"
    abbr_name = "alpha_dominance_sigmoid"
    display_name = "sigmoid"


class Cosine(MethodConfig):
    methodtype = "alpha_dominance"
    alphafunction = "cosine"
    abbr_name = "alpha_dominance_cosine"
    display_name = "cosine"


class Duplicate_control(MethodConfig):
    methodtype = "Duplicate_control"
    abbr_name = "Duplicate_control"
    display_name = "Duplicate_control"


class Adaptive_alpha(MethodConfig):
    methodtype = "adaptive_alpha_dominance"
    abbr_name = "adaptive_alpha"
    display_name = "adaptive_alpha"


class SPEA2(MethodConfig):
    algoframework = "SPEA2"
    abbr_name = "SPEA2"
    display_name = "SPEA2"


class NSGA2(MethodConfig):
    methodtype = "classicNSGA2"
    abbr_name = "classicNSGA2"
    display_name = "NSGA2"


class LengthControl_interpol(MethodConfig):
    algoframework = "LengthControlTruncation"
    abbr_name = "LC_interpol"
    display_name = "LengthControl_interpol"

    def __init__(self):
        super().__init__()
        # this method's HV list uses a different label pattern
        self.HVList_name = "LengthControl_interpol_list_NSGA2 = "
def load_data(DataSetFileName):
    if DataSetFileName == "boston":
        X, y = load_boston(return_X_y=True)
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
    else:
        raw = np.loadtxt(DataSetFileName, delimiter=' ')
        X = raw[:, :-1]
        y = np.expand_dims(raw[:, -1], axis=1)
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=88)
    scaler_X = SS()
    scaler_y = SS()
    X_train_scaled = scaler_X.fit_transform(X_train)
    y_train_scaled = scaler_y.fit_transform(y_train.reshape((-1, 1)))
    X_test_scaled = scaler_X.transform(X_test)
    y_test_scaled = scaler_y.transform(y_test.reshape((-1, 1)))
    return scaler_X, scaler_y, X_train_scaled, y_train_scaled, X_test_scaled, y_test_scaled
def main():
    X, y = load_boston(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
    scaler_X = SS()
    scaler_y = SS()
    X_train_scaled = scaler_X.fit_transform(X_train)
    y_train_scaled = scaler_y.fit_transform(y_train.reshape((-1, 1)))
    X_test_scaled = scaler_X.transform(X_test)
    y_test_scaled = scaler_y.transform(y_test.reshape((-1, 1)))

    max_run = 30
    pop_size = 500
    iterations = 100
    init_seed = 42
    max_explen = 100
    crossover_proportion = 0.5
    mutation_proportion = 0.5
    multiobj = "symbreg_size"
    prob = "multiobj"
    methodtype = "Duplicate_control"
    alphafunction = "none"
    tournament = 7
    algoframework = "NSGA2"

    # ===============settings of running all comparison methods====================
    # prob        algoframework   methodtype                 alpha_function
    # multiobj    NSGA2           NA                         NA
    # multiobj    NSGA2           alpha_dominance            cosine
    # multiobj    NSGA2           alpha_dominance            linear
    # multiobj    NSGA2           alpha_dominance            sigmoid
    # multiobj    NSGA2           adaptive_alpha_dominance   NA
    # multiobj    SPEA2           NA                         NA
    # multiobj    evoNSGA2        NA                         NA
    # multiobj    NSGA2DP         NA                         NA
    # =============================================================================
    for i in range(1, max_run + 1):
        print('now is the ' + str(i) + "th instances")
        ea = GPG(gomea=False, caching=False, ims=False, time=999999, elitism=0,
                 generations=iterations, seed=init_seed + i, silent=False, prob=prob,
                 multiobj=multiobj, popsize=pop_size, linearscaling=True, erc=True,
                 maxsize=max_explen, methodtype=methodtype, alphafunction=alphafunction,
                 numofrun=i, tournament=tournament, algoframework=algoframework,
                 functions="+_-_*_p/_sqrt_plog",
                 subcross=crossover_proportion, submut=mutation_proportion)
        ea.fit(X_train_scaled, y_train_scaled)
        print('Test RMSE:', np.sqrt(scaler_y.var_ * mean_squared_error(y_test_scaled, ea.predict(X_test_scaled))))
        print('training RMSE:', np.sqrt(scaler_y.var_ * mean_squared_error(y_train_scaled, ea.predict(X_train_scaled))))
        archive_on_test_set = ea.get_final_archive(X_test_scaled)
        test_preds = [x[0] for x in archive_on_test_set]
        test_mses = [mean_squared_error(y_test_scaled, p) for p in test_preds]
        test_sizes = [x[2] for x in archive_on_test_set]
        popu_test = [test_mses, test_sizes]
        popu_test = DataFrame(popu_test).T.values.tolist()
        np.savetxt("PF_test//PF__" + methodtype + "_archive_runs" + str(i) + ".txt",
                   popu_test, fmt="%f %d", delimiter=" ", newline="\r\n")
        test_sizes = (np.array(test_sizes).astype(float) - 1) / (100 - 1)
        popu_test = [test_mses, test_sizes]
        ea.reset()


if __name__ == '__main__':
    main()
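The script imports `HyperVolume` to summarise each run's Pareto front. For the two-objective (error, size) minimisation case used here, the hypervolume with respect to a reference point reduces to a sorted sweep over the front. A hypothetical standalone helper illustrating the idea (not the script's `HyperVolume` class):

```python
# Hypothetical 2-D hypervolume helper for a minimisation front:
# the dominated area between the front and a reference point, computed
# by sorting the non-dominated points along the first objective.
def hypervolume_2d(front, ref):
    pts = sorted(front)          # ascending in objective 1, so descending in objective 2
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)   # one rectangle per point
        prev_f2 = f2
    return hv

front = [(0.2, 0.9), (0.5, 0.5), (0.9, 0.1)]   # illustrative non-dominated points
print(hypervolume_2d(front, ref=(1.0, 1.0)))   # → 0.32
```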
# === pilot/user/rubin/memory.py (yesw2000/pilot3, Apache-2.0) ===

#!/usr/bin/env python
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Authors:
# - Paul Nilsson, paul.nilsson@cern.ch, 2018-2021


def allow_memory_usage_verifications():
    """
    Should memory usage verifications be performed?

    :return: boolean.
    """

    return False


def memory_usage(job):
    """
    Perform memory usage verification.

    :param job: job object
    :return: exit code (int), diagnostics (string).
    """

    exit_code = 0
    diagnostics = ""

    return exit_code, diagnostics
# === lms/settings.py (julia-shenshina/LMS, MIT) ===

import os
from django.utils import timezone
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/2.1/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'w7zj^5d1kj=(*)j11@v%s6q#ai8ld(nu4v&$br54ein%03tlir'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = [
    'lms',
    'django.apps.registry',
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'rest_framework',
    'rest_framework_swagger',
]

MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
]

ROOT_URLCONF = 'lms.api.urls'

TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
        },
    },
]

WSGI_APPLICATION = 'lms.wsgi.application'

# Database
# https://docs.djangoproject.com/en/2.1/ref/settings/#databases
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'lms/../lms/db.sqlite3'),
    }
}

# Password validation
# https://docs.djangoproject.com/en/2.1/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
    {
        'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
    },
]

# Internationalization
# https://docs.djangoproject.com/en/2.1/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True

# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/2.1/howto/static-files/
STATIC_URL = '/static/'

# Constant fields
START_STUDY_YEARS = [(x, x) for x in range(2000, timezone.now().year + 1)]
STUDY_LEVEL = [(x, x) for x in range(1, 7)]
STUDY_BASES = [
    ("bg", "бюджетная"),     # state-funded
    ("cnt", "контрактная"),  # contract-based
]
STUDY_FORMS = [
    ("full", "очная"),    # full-time
    ("ext", "заочная"),   # extramural
    ("nig", "вечерняя"),  # evening
]
DEGREES = [
    ("bach", "бакалавр"),    # bachelor
    ("mag", "магистр"),      # master
    ("spec", "специалист"),  # specialist
]

# REST API
REST_FRAMEWORK = {
    'DEFAULT_PAGINATION_CLASS': 'rest_framework.pagination.PageNumberPagination',
    'DEFAULT_AUTHENTICATION_CLASSES': ('lms.api.auth.LMSAuthentication',),
    'TEST_REQUEST_DEFAULT_FORMAT': 'json',
    'PAGE_SIZE': 10,
}

SWAGGER_SETTINGS = {
    'OPERATIONS_SORTER': 'alpha',
    'SUPPORTED_SUBMIT_METHODS': ['GET', 'POST', 'PATCH', 'DELETE'],
}
# === tests/test_events.py (scopatz/leyline, BSD-3-Clause) ===

"""Events testing"""
import pytest

from leyline import parse, EVENTS_CTX, EventsVisitor
from leyline.ast import PlainText
from leyline.events import Event, Slide, Sleep

EVENTS_CASES = {
    "Antici...{{sleep(42.0)}}...pation": [
        Event(body=[PlainText(lineno=1, column=1, text='Antici...')]),
        Sleep(duration=42.0, body=[PlainText(lineno=1, column=25, text='...pation')]),
    ],
    "Hello World{{event(start=42.0)}}Next Slide": [
        Event(body=[PlainText(lineno=1, column=1, text='Hello World')]),
        Event(start=42.0, body=[PlainText(lineno=1, column=33, text='Next Slide')]),
    ],
    "Hello World{{slide('A Title')}}Next Slide": [
        Event(body=[PlainText(lineno=1, column=1, text='Hello World')]),
        Slide(title='A Title', body=[[PlainText(lineno=1, column=32, text='Next Slide')]]),
    ],
    "{{slide()}}Hello World{{slide()}}Next Slide\n": [
        Event(),
        Slide(body=[[PlainText(lineno=1, column=12, text='Hello World')]]),
        Slide(body=[[PlainText(lineno=1, column=34, text='Next Slide\n')]]),
    ],
    "Hello World{{slide()}}Next Slide\n": [
        Event(body=[PlainText(lineno=1, column=1, text='Hello World')]),
        Slide(body=[[PlainText(lineno=1, column=23, text='Next Slide\n')]]),
    ],
}


@pytest.mark.parametrize('doc, exp', sorted(EVENTS_CASES.items()))
def test_events(doc, exp):
    tree = parse(doc)
    contexts = {'ctx': EVENTS_CTX}
    visitor = EventsVisitor(contexts=contexts)
    visitor.visit(tree)
    assert exp == visitor.events
0e56ff86016db7508030cfa4624be9094cfe60a8 | 3,555 | py | Python | src/bxgateway/testing/mocks/mock_gateway_node.py | blockchain-development-resources/bxgateway | 761b5085f9c7c6527f0b9aaae06d2f70f3786db2 | [
"MIT"
] | 1 | 2021-11-26T07:49:24.000Z | 2021-11-26T07:49:24.000Z | src/bxgateway/testing/mocks/mock_gateway_node.py | beepool/bxgateway | 761b5085f9c7c6527f0b9aaae06d2f70f3786db2 | [
"MIT"
] | null | null | null | src/bxgateway/testing/mocks/mock_gateway_node.py | beepool/bxgateway | 761b5085f9c7c6527f0b9aaae06d2f70f3786db2 | [
"MIT"
] | 1 | 2021-09-06T02:10:08.000Z | 2021-09-06T02:10:08.000Z | # pyre-ignore-all-errors
from typing import Tuple
from bxcommon.test_utils import helpers
from bxcommon.connections.connection_type import ConnectionType
from bxcommon.connections.node_type import NodeType
from bxcommon.network.socket_connection import SocketConnection
from bxcommon.services.transaction_service import TransactionService
from bxgateway.connections.abstract_gateway_blockchain_connection import AbstractGatewayBlockchainConnection
from bxgateway.connections.abstract_gateway_node import AbstractGatewayNode
from bxgateway.connections.abstract_relay_connection import AbstractRelayConnection
from bxgateway.messages.btc.block_btc_message import BlockBtcMessage
from bxgateway.services.btc.abstract_btc_block_cleanup_service import AbstractBtcBlockCleanupService
from bxgateway.utils.btc.btc_object_hash import BtcObjectHash
from bxgateway.services.btc.btc_block_queuing_service import BtcBlockQueuingService
from bxgateway.testing.mocks.mock_blockchain_connection import MockMessageConverter
class _MockCleanupService(AbstractBtcBlockCleanupService):
def __init__(self, node):
        super(_MockCleanupService, self).__init__(node, 1)

    def clean_block_transactions(self, block_msg: BlockBtcMessage, transaction_service: TransactionService) -> None:
        pass

    def contents_cleanup(self, block_msg: BlockBtcMessage, transaction_service: TransactionService) -> None:
        pass

    def is_marked_for_cleanup(self, block_hash: BtcObjectHash) -> bool:
        return False


class MockGatewayNode(AbstractGatewayNode):
    NODE_TYPE = NodeType.GATEWAY

    def __init__(self, opts):
        if opts.use_extensions:
            helpers.set_extensions_parallelism()
        super(MockGatewayNode, self).__init__(opts)

        self.broadcast_messages = []
        self.send_to_node_messages = []
        self._tx_service = TransactionService(self, 0)
        self.block_cleanup_service = self._get_cleanup_service()
        self.block_queuing_service = BtcBlockQueuingService(self)
        self.message_converter = MockMessageConverter()
        if opts.use_extensions:
            from bxcommon.services.extension_transaction_service import ExtensionTransactionService
            self._tx_service = ExtensionTransactionService(self, self.network_num)
        else:
            self._tx_service = TransactionService(self, self.network_num)

    def broadcast(self, msg, broadcasting_conn=None, prepend_to_queue=False, connection_types=None):
        if connection_types is None:
            connection_types = [ConnectionType.RELAY_ALL]
        self.broadcast_messages.append((msg, connection_types))
        return []

    def send_msg_to_node(self, msg):
        self.send_to_node_messages.append(msg)

    def get_tx_service(self, _network_num=None):
        return self._tx_service

    def build_blockchain_connection(self, socket_connection: SocketConnection, address: Tuple[str, int],
                                    from_me: bool) -> AbstractGatewayBlockchainConnection:
        pass

    def build_relay_connection(self, socket_connection: SocketConnection, address: Tuple[str, int],
                               from_me: bool) -> AbstractRelayConnection:
        pass

    def build_remote_blockchain_connection(self, socket_connection: SocketConnection, address: Tuple[str, int],
                                           from_me: bool) -> AbstractGatewayBlockchainConnection:
        pass

    def _get_cleanup_service(self) -> AbstractBtcBlockCleanupService:
        return _MockCleanupService(self)
# === tests/y2021/test_2021_d1.py (ErikThorsell/advent-of-code-python, MIT) ===
"""TEST MODULE TEMPLATE"""
from advent_of_code.y2021.d1 import solution_1
from advent_of_code.y2021.d1 import solution_2


def test_solution_1():
    example_input = [
        199,
        200,
        208,
        210,
        200,
        207,
        240,
        269,
        260,
        263,
    ]
    example_result = 7
    assert solution_1(example_input) == example_result


def test_solution_2():
    # renamed from a duplicate test_solution_1, which shadowed the first test
    example_input = [
        199,
        200,
        208,
        210,
        200,
        207,
        240,
        269,
        260,
        263,
    ]
    example_result = 5
    assert solution_2(example_input) == example_result
# === pc_data_collector/overhead_test.py (nglrt/virtual_energy_sensor, MIT) ===
import system
import time

sys = system.get_system()

i = 0
while True:
    sys.getParameters()
    i += 1
    print("i = {0}".format(i))  # Python 3 print; the original used the py2 statement form
    # time.sleep(.125)
# === src/main/python/greedy/highest_product.py (mohnoor94/ProblemsSolving, Apache-2.0) ===
"""
Given a list of integers, find the highest product you can get from three of the integers.
The input list_of_ints will always have at least three integers.
***
https://www.interviewcake.com/question/python3/highest-product-of-3?course=fc1&section=greedy
"""
import unittest


def highest_product_of_3(ints):
    if len(ints) < 3:
        raise ValueError('Less than 3 items!')

    highest = max(ints[0], ints[1])
    lowest = min(ints[0], ints[1])
    highest_product_of_2 = ints[0] * ints[1]
    lowest_product_of_2 = ints[0] * ints[1]
    highest_product_of_3_ = ints[0] * ints[1] * ints[2]

    for num in ints[2:]:
        highest_product_of_3_ = max(highest_product_of_3_,
                                    num * highest_product_of_2,
                                    num * lowest_product_of_2)
        highest_product_of_2 = max(highest_product_of_2, num * highest, num * lowest)
        lowest_product_of_2 = min(lowest_product_of_2, num * highest, num * lowest)
        highest = max(highest, num)
        lowest = min(lowest, num)

    return highest_product_of_3_


class Test(unittest.TestCase):

    def test_short_list(self):
        actual = highest_product_of_3([1, 2, 3, 4])
        expected = 24
        self.assertEqual(actual, expected)

    def test_longer_list(self):
        actual = highest_product_of_3([6, 1, 3, 5, 7, 8, 2])
        expected = 336
        self.assertEqual(actual, expected)

    def test_list_has_one_negative(self):
        actual = highest_product_of_3([-5, 4, 8, 2, 3])
        expected = 96
        self.assertEqual(actual, expected)

    def test_list_has_two_negatives(self):
        actual = highest_product_of_3([-10, 1, 3, 2, -10])
        expected = 300
        self.assertEqual(actual, expected)

    def test_list_is_all_negatives(self):
        actual = highest_product_of_3([-5, -1, -3, -2])
        expected = -6
        self.assertEqual(actual, expected)

    def test_error_with_empty_list(self):
        with self.assertRaises(Exception):
            highest_product_of_3([])

    def test_error_with_one_number(self):
        with self.assertRaises(Exception):
            highest_product_of_3([1])

    def test_error_with_two_numbers(self):
        with self.assertRaises(Exception):
            highest_product_of_3([1, 1])


if __name__ == '__main__':
    unittest.main(verbosity=2)
# === source/lib/s3_to_efs/s3_to_efs_test.py (aws-samples/aws-ai-solution-kit, Apache-2.0) ===
import json
import pytest
import logging
import os
from os import environ

import s3_to_efs


def test_lambda_handler():
    event = {
        "ResourceProperties": {
            "Objects": [
                'https://aws-gcr-solutions-assets.s3.cn-northwest-1.amazonaws.com.cn/ai-solution-kit/porn-image-model/v1.0.0/resnest50_fast_4s2x40d.bin',
                'https://aws-gcr-solutions-assets.s3.cn-northwest-1.amazonaws.com.cn/ai-solution-kit/porn-image-model/v1.0.0/resnest50_fast_4s2x40d.mapping',
                'https://aws-gcr-solutions-assets.s3.cn-northwest-1.amazonaws.com.cn/ai-solution-kit/porn-image-model/v1.0.0/resnest50_fast_4s2x40d.xml'],
            "MountPath": "/",
        }
    }
    result = s3_to_efs.lambda_handler(event=event, context='')
    assert 1
# === custom_components/vistapool/dashboard.py (tellerbop/havistapool, MIT) ===
# Utilities for integration with Home Assistant (directly or via MQTT)
import logging
import re

_LOGGER = logging.getLogger(__name__)


class Instrument:
    def __init__(self, component, attr, name, icon=None):
        self._attr = attr
        self._component = component
        self._name = name
        self._connection = None
        self._pool = None
        self._icon = icon

    def __repr__(self):
        return self._name

    def configurate(self, **args):
        pass

    def camel2slug(self, s):
        """Convert camelCase to camel_case.

        >>> camel2slug('fooBar')
        'foo_bar'
        """
        return re.sub("([A-Z])", "_\\1", s).lower().lstrip("_")

    @property
    def slug_attr(self):
        return self.camel2slug(self._attr.replace(".", "_"))

    def setup(self, connection, pool, mutable=True, **config):
        self._connection = connection
        self._pool = pool

        if not mutable and self.is_mutable:
            _LOGGER.info("Skipping %s because mutable", self)
            return False

        if not self.is_supported:
            _LOGGER.debug(
                "%s (%s:%s) is not supported", self, type(self).__name__, self._attr,
            )
            return False

        _LOGGER.debug("%s is supported", self)
        self.configurate(**config)
        return True

    @property
    def full_name(self):
        return "%s %s" % (self.name, self.pool_reference)

    @property
    def component(self):
        return self._component

    @property
    def icon(self):
        return self._icon

    @property
    def name(self):
        return self._name

    @property
    def attr(self):
        return self._attr

    @property
    def pool_name(self):
        return self._pool.name

    @property
    def pid(self):
        return self._pool.pid

    @property
    def pool_reference(self):
        return self._pool.ref

    @property
    def is_mutable(self):
        raise NotImplementedError("Must be set")

    @property
    def is_supported(self):
        supported = self._attr + "_supported"
        if hasattr(self._pool, supported):
            return getattr(self._pool, supported)
        if hasattr(self._pool, self._attr):
            return True
        return False

    @property
    def str_state(self):
        return self.state

    @property
    def state(self):
        if hasattr(self._pool, self._attr):
            return getattr(self._pool, self._attr)
        return self._pool.get_attr(self._attr)

    @property
    def attributes(self):
        return {}


class Sensor(Instrument):
    def __init__(self, attr, name, icon, unit):
        super().__init__(component="sensor", attr=attr, name=name, icon=icon)
        self._unit = unit
        self._convert = False

    def configurate(self, unit_system=None, **config):
        return

    @property
    def is_mutable(self):
        return False

    @property
    def str_state(self):
        if self.unit:
            return "%s %s" % (self.state, self.unit)
        else:
            return "%s" % self.state

    @property
    def state(self):
        val = super().state
        return val

    @property
    def unit(self):
        supported = self._attr + "_unit"
        if hasattr(self._pool, supported):
            return getattr(self._pool, supported)
        return self._unit


class BinarySensor(Instrument):
    def __init__(self, attr, name, device_class, icon=None):
        super().__init__(component="binary_sensor", attr=attr, name=name, icon=icon)
        self.device_class = device_class

    @property
    def is_mutable(self):
        return False

    @property
    def str_state(self):
        if self.state is None:
            _LOGGER.error("Can not encode state %s:%s", self._attr, self.state)
            return "?"
        return "On" if self.state else "Off"

    @property
    def state(self):
        val = super().state
        if isinstance(val, (bool, list)):
            # for list (e.g. bulb_failures):
            # empty list (False) means no problem
            return bool(val)
        elif isinstance(val, str):
            return True if val == "ON" else False
        return val

    @property
    def is_on(self):
        return self.state


class Switch(Instrument):
    def __init__(self, attr, name, icon):
        super().__init__(component="switch", attr=attr, name=name, icon=icon)

    @property
    def is_mutable(self):
        return True

    @property
    def str_state(self):
        return "On" if self.state else "Off"

    def is_on(self):
        return self.state

    def turn_on(self):
        pass

    def turn_off(self):
        pass


class LastUpdate(Instrument):
    def __init__(self):
        super().__init__(
            component="sensor",
            attr="last_update_time",
            name="Last Update",
            icon="mdi:update",
        )
        self.unit = None

    @property
    def is_mutable(self):
        return False

    @property
    def str_state(self):
        ts = super().state
        return str(ts.astimezone(tz=None)) if ts else None

    @property
    def state(self):
        val = super().state
        return val


def create_instruments():
    return [
        LastUpdate(),
        # Sensor(attr="location", name="Location", icon="mdi:map-marker", unit=None),
        Sensor(attr="temperature", name="Temperature", icon="mdi:thermometer", unit="C"),
        Sensor(attr="ph", name="PH", icon="mdi:alpha-p-circle", unit="Ph"),
        Sensor(attr="ph_target", name="PH Target", icon="mdi:alpha-p-box", unit="Ph"),
        Sensor(attr="rx", name="RX", icon="mdi:alpha-r-circle", unit="Rx"),
        Sensor(attr="rx_target", name="RX Target", icon="mdi:alpha-r-box", unit="Rx"),
        Sensor(attr="filtration_type", name="Filtration Type", icon="mdi:filter", unit=""),
        BinarySensor(attr="connected", name="Connected", device_class="connectivity"),
        BinarySensor(attr="rx_relay_state", name="Rx Relay State", device_class=""),
        BinarySensor(attr="ph_relay_state", name="Ph Relay State", device_class=""),
        BinarySensor(attr="filtration_state", name="Filtration State", device_class=""),
    ]


class Dashboard:
    def __init__(self, connection, pool, **config):
        self.instruments = [
            instrument
            for instrument in create_instruments()
            if instrument.setup(connection, pool, **config)
        ]
# === automation_content/whois.py (lara29/junos-automation-with-chatops, MIT) ===
from jnpr.junos import Device
import sys

dev = Device(host=sys.argv[1], user="root", password="Embe1mpls")
dev.open()
# message = dev.facts["hostname"] + " is an " + dev.facts['model'] + " running " + dev.facts["version"]
print(dev.facts["hostname"] + " is an " + dev.facts['model'] + " running " + dev.facts["version"])
dev.close()

# from slacker import Slacker
# slack = Slacker('xoxp-89396208643-89446660672-98529286339-0e5407813326b0e6c079b06d9f87a6f8')
# slack.chat.post_message('#general', message, username='j-bot')
# === code/doubanmusic.py (zefengdaguo/douban_crawler, MIT) ===
from bs4 import BeautifulSoup
import re
import time

import requests

headers0 = {'User-Agent': "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.62 Safari/537.36"}


def Musappend(Mdict, Items):
    """Collect title/intro/date/rating/comment for each listened item into Mdict."""
    for it in Items:
        title = it('a')[0].get_text(strip=True)
        date = it(class_=re.compile('date'))[0].get_text(strip=True)
        try:
            stars = it(class_=re.compile('rat'))[0]['class'][0][6]
        except Exception:
            stars = 'Nah'
        try:
            comment = it(class_=re.compile('comm'))[0].get_text(strip=True).replace('\n', '-')
        except Exception:
            comment = 'Nah'
        try:
            intro = it(class_='intro')[0].get_text(strip=True)
        except Exception:
            intro = 'Nah'
        Mdict[title] = [intro, date, stars, comment]


def HeardList(doubanid):
    firstpage = ('https://music.douban.com/people/' + doubanid +
                 '/collect?sort=time&start=0&filter=all&mode=list&tags_sort=count')
    sess = requests.Session()
    sess.headers.update(headers0)
    request = sess.get(firstpage)
    soup = BeautifulSoup(request.text, 'html.parser')
    items = soup.find_all(class_=re.compile('item'), id=re.compile('li'))
    heard_dic = {}
    Musappend(Mdict=heard_dic, Items=items)
    page = 1
    print(f'Page {page}', request.reason)
    while 1:
        time.sleep(1)
        try:
            NextPage = soup.find(class_='next').link.get('href')
        except Exception:
            print('Reached the last page')
            break
        else:
            request = sess.get(NextPage)
            soup = BeautifulSoup(request.text, 'html.parser')
            items = soup.find_all(class_=re.compile('item'), id=re.compile('li'))
            Musappend(Mdict=heard_dic, Items=items)
            page += 1
            print(f'Page {page}', request.reason)
    fw = open(doubanid + '_Heard_List.csv', 'w', encoding='utf-8_sig')
    fw.write('Album/Single,Intro,Date,Rating,Comment\n')
    for title in heard_dic.keys():
        fw.write(title.replace(',', '、').replace(',', '、') + ',' +
                 heard_dic[title][0].replace(',', '、').replace(',', '、') + ',' +
                 heard_dic[title][1] + ',' + heard_dic[title][2] + ',' +
                 heard_dic[title][3].replace(',', '、').replace(',', '、') + '\n')
    fw.close()


def WMusappend(Mdict, Items):
    """Like Musappend, but without the rating column (for the wish list)."""
    for it in Items:
        title = it('a')[0].get_text(strip=True)
        date = it(class_=re.compile('date'))[0].get_text(strip=True)
        try:
            comment = it(class_=re.compile('comm'))[0].get_text(strip=True).replace('\n', '-')
        except Exception:
            comment = 'Nah'
        try:
            intro = it(class_='intro')[0].get_text(strip=True)
        except Exception:
            intro = 'Nah'
        Mdict[title] = [intro, date, comment]


def WHeardList(doubanid):
    firstpage = ('https://music.douban.com/people/' + doubanid +
                 '/wish?sort=time&start=0&filter=all&mode=list&tags_sort=count')
    sess = requests.Session()
    sess.headers.update(headers0)
    request = sess.get(firstpage)
    soup = BeautifulSoup(request.text, 'html.parser')
    items = soup.find_all(class_=re.compile('item'), id=re.compile('li'))
    whear_dic = {}
    WMusappend(Mdict=whear_dic, Items=items)
    page = 1
    print(f'Page {page}', request.reason)
    while 1:
        time.sleep(1)
        try:
            NextPage = soup.find(class_='next').link.get('href')
        except Exception:
            print('Reached the last page')
            break
        else:
            request = sess.get(NextPage)
            soup = BeautifulSoup(request.text, 'html.parser')
            items = soup.find_all(class_=re.compile('item'), id=re.compile('li'))
            WMusappend(Mdict=whear_dic, Items=items)
            page += 1
            print(f'Page {page}', request.reason)
    fw = open(doubanid + '_MusicWish_List.csv', 'w', encoding='utf-8_sig')
    fw.write('Album/Single,Intro,Date,Message\n')
    for title in whear_dic.keys():
        fw.write(title.replace(',', '、').replace(',', '、') + ',' +
                 whear_dic[title][0].replace(',', '、').replace(',', '、') + ',' +
                 whear_dic[title][1] + ',' +
                 whear_dic[title][2].replace(',', '、').replace(',', '、') + '\n')
    fw.close()


def main():
    print('This program backs up your Douban Music records')
    choice = input('Confirm that you want to back up (yes/no): ')
    if choice == 'yes':
        id = input('Enter your Douban ID: ')
        print('Backing up the "listened" list')
        HeardList(doubanid=id)
        time.sleep(2)
        print('Backing up the "want to listen" list')
        WHeardList(doubanid=id)
        print('The backup is saved in the directory of this exe (if nothing went wrong)')
        print('Feedback: jimsun6428@gmail.com | https://github.com/JimSunJing/douban_clawer')
        input('Press any key to exit')
    else:
        print('bye')
main()

# === StartingWxPython.py (juhyun0/python_wxPython, Unlicense) ===
import wx
app = wx.App()
frame = wx.Frame(parent=None, title='Hello')
frame.Show()
app.MainLoop()
# === src/pyjen/plugins/nullscm.py (TheFriendlyCoder/pyjen, Apache-2.0) ===
"""SCM properties of Jenkins jobs with no source control configuration"""
import xml.etree.ElementTree as ElementTree

from pyjen.utils.xml_plugin import XMLPlugin


class NullSCM(XMLPlugin):
    """SCM plugin for Jobs with no source control configurations"""

    # --------------------------------------------------------------- PLUGIN API
    @staticmethod
    def get_jenkins_plugin_name():
        """str: the name of the Jenkins plugin associated with this PyJen plugin

        This static method is used by the PyJen plugin API to associate this
        class with a specific Jenkins plugin, as it is encoded in the config.xml
        """
        return "hudson.scm.NullSCM"

    @classmethod
    def instantiate(cls):
        """Factory method used to construct instances of this class

        Returns:
            NullSCM:
                instance of this class
        """
        root_node = ElementTree.fromstring('<scm class="hudson.scm.NullSCM"/>')
        return cls(root_node)


PluginClass = NullSCM


if __name__ == "__main__":  # pragma: no cover
    pass
# === codes/models/base_model.py (BrainGardenAI/TecoGAN-PyTorch, Apache-2.0) ===
from collections import OrderedDict
import os.path as osp

import torch

from utils.base_utils import get_logger


class BaseModel():
    def __init__(self, opt):
        self.opt = opt
        self.verbose = opt['verbose']
        self.scale = opt['scale']
        self.logger = get_logger('base')
        self.device = torch.device(opt['device'])

        self.is_train = opt['is_train']
        if self.is_train:
            self.ckpt_dir = opt['train']['ckpt_dir']
            self.log_decay = opt['logger'].get('decay', 0.99)
            self.log_dict = OrderedDict()
            self.running_log_dict = OrderedDict()

    def set_network(self):
        pass

    def config_training(self):
        pass

    def set_criterion(self):
        pass

    def train(self, data):
        pass

    def infer(self, data):
        pass

    def update_learning_rate(self):
        if hasattr(self, 'sched_G') and self.sched_G is not None:
            self.sched_G.step()
        if hasattr(self, 'sched_D') and self.sched_D is not None:
            self.sched_D.step()

    def get_current_learning_rate(self):
        lr_dict = OrderedDict()
        if hasattr(self, 'optim_G'):
            lr_dict['lr_G'] = self.optim_G.param_groups[0]['lr']
        if hasattr(self, 'optim_D'):
            lr_dict['lr_D'] = self.optim_D.param_groups[0]['lr']
        return lr_dict

    def update_running_log(self):
        d = self.log_decay
        for k in self.log_dict.keys():
            current_val = self.log_dict[k]
            running_val = self.running_log_dict.get(k)
            if running_val is None:
                running_val = current_val
            else:
                running_val = d * running_val + (1.0 - d) * current_val
            self.running_log_dict[k] = running_val

    def get_current_log(self):
        return self.log_dict

    def get_running_log(self):
        return self.running_log_dict

    def save(self, current_iter):
        pass

    def save_network(self, net, net_label, current_iter):
        save_filename = '{}_iter{}.pth'.format(net_label, current_iter)
        save_path = osp.join(self.ckpt_dir, save_filename)
        torch.save(net.state_dict(), save_path)

    def save_training_state(self, current_epoch, current_iter):
        # TODO
        pass

    def load_network(self, net, load_path):
        net.load_state_dict(torch.load(load_path), strict=False)

    def pad_sequence(self, lr_data):
        """
        Parameters:
            :param lr_data: tensor in shape tchw
        """
        padding_mode = self.opt['test'].get('padding_mode', 'reflect')
        n_pad_front = self.opt['test'].get('num_pad_front', 0)

        if padding_mode == 'reflect':
            lr_data = torch.cat(
                [lr_data[1: 1 + n_pad_front, ...].flip(0), lr_data], dim=0)
        elif padding_mode == 'replicate':
            lr_data = torch.cat(
                [lr_data[:1, ...].expand(n_pad_front, -1, -1, -1), lr_data],
                dim=0)
        elif padding_mode == 'none':
            n_pad_front = 0
        else:
            raise ValueError('Unrecognized padding mode: {}'.format(
                padding_mode))

        return lr_data, n_pad_front
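The `reflect` branch of `pad_sequence` mirrors frames 1..n (skipping frame 0) and prepends them along the time axis. A minimal standalone sketch of the same operation, using NumPy in place of torch (the function name `pad_front_reflect` is illustrative, not part of the repository):

```python
import numpy as np

def pad_front_reflect(lr_data, n_pad_front):
    # mirror frames 1..n_pad_front (frame 0 excluded) and prepend them,
    # matching torch.cat([lr_data[1:1 + n, ...].flip(0), lr_data], dim=0)
    front = lr_data[1:1 + n_pad_front][::-1]
    return np.concatenate([front, lr_data], axis=0)

frames = np.arange(5).reshape(5, 1, 1, 1)   # t=5 frames, each of shape c=h=w=1
padded = pad_front_reflect(frames, 2)
# time order along axis 0 is now [2, 1, 0, 1, 2, 3, 4]
```

The inference code can then drop the first `n_pad_front` outputs to recover results aligned with the original sequence.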
# === emukit/quadrature/kernels/quadrature_kernels.py (alexgessner/emukit, Apache-2.0) ===
# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
import numpy as np
from typing import List, Tuple
from ...quadrature.interfaces.standard_kernels import IStandardKernel
from .integral_bounds import IntegralBounds
class QuadratureKernel:
"""
Abstract class for covariance function of a Gaussian process than can be integrated
Note that each specific implementation of this class must go with a specific standard kernel as input which
inherits from IStandardKernel. This is because we both want the QuadratureKernel to be backend agnostic
but at the same time QuadratureKernel needs access to specifics of the standard kernel.
An example of a specific QuadratureKernel and IStandardKernel pair is QuadratureRBF and IRBF.
"""
def __init__(self, kern: IStandardKernel, integral_bounds: List[Tuple[float, float]], integral_name: str='') \
-> None:
"""
:param kern: standard emukit kernel
:param integral_bounds: defines the domain of the integral. List of D tuples, where D is the dimensionality
of the integral and the tuples contain the lower and upper bounds of the integral
i.e., [(lb_1, ub_1), (lb_2, ub_2), ..., (lb_D, ub_D)]
:param integral_name: the (variable) name(s) of the integral
"""
self.kern = kern
self.integral_bounds = IntegralBounds(name=integral_name, bounds=integral_bounds)
self.input_dim = self.integral_bounds.dim
def K(self, x1: np.ndarray, x2: np.ndarray) -> np.ndarray:
"""
The kernel k(x1, x2) evaluated at x1 and x2
:param x1: first argument of the kernel
:param x2: second argument of the kernel
:returns: kernel evaluated at x1, x2
"""
return self.kern.K(x1, x2)
# the following methods are integrals of a quadrature kernel
def qK(self, x2: np.ndarray) -> np.ndarray:
"""
Kernel with the first component integrated out aka. kernel mean
:param x2: remaining argument of the once integrated kernel, shape (n_points N, input_dim)
:returns: kernel mean at location x2, shape (1, N)
"""
raise NotImplementedError
def Kq(self, x1: np.ndarray) -> np.ndarray:
"""
Kernel with the second component integrated out aka. kernel mean
:param x1: remaining argument of the once integrated kernel, shape (n_points N, input_dim)
:returns: kernel mean at location x1, shape (N, 1)
"""
raise NotImplementedError
def qKq(self) -> np.float:
"""
Kernel integrated over both arguments x1 and x2
:returns: double integrated kernel
"""
raise NotImplementedError
# the following methods are gradients of a quadrature kernel
def dK_dx1(self, x1: np.ndarray, x2: np.ndarray) -> np.ndarray:
"""
gradient of the kernel wrt x1 evaluated at pair x1, x2
:param x1: first argument of the kernel, shape = (n_points N, input_dim)
:param x2: second argument of the kernel, shape = (n_points M, input_dim)
:return: the gradient of the kernel wrt x1 evaluated at (x1, x2), shape (input_dim, N, M)
"""
return self.kern.dK_dx1(x1, x2)
def dK_dx2(self, x1: np.ndarray, x2: np.ndarray) -> np.ndarray:
"""
gradient of the kernel wrt x2 evaluated at pair x1, x2
Note that it is the transposed gradient wrt x1 evaluated at (x2, x1), i.e., the arguments are switched.
:param x1: first argument of the kernel, shape = (n_points N, N, input_dim)
:param x2: second argument of the kernel, shape = (n_points N, M, input_dim)
:return: the gradient of the kernel wrt x2 evaluated at (x1, x2), shape (input_dim, N, M)
"""
return np.transpose(self.dK_dx1(x1=x2, x2=x1), (0, 2, 1))
def dKdiag_dx(self, x: np.ndarray) -> np.ndarray:
"""
gradient of the diagonal of the kernel (the variance) v(x):=k(x, x) evaluated at x
:param x: argument of the kernel, shape = (n_points M, input_dim)
:return: the gradient of the diagonal of the kernel evaluated at x, shape (input_dim, M)
"""
return self.kern.dKdiag_dx(x)
def dqK_dx(self, x2: np.ndarray) -> np.ndarray:
"""
gradient of the kernel mean (integrated in first argument) evaluated at x2
:param x2: N points at which to evaluate, shape = (n_points N, N, input_dim)
:return: the gradient with shape (input_dim, N)
"""
raise NotImplementedError
def dKq_dx(self, x1: np.ndarray) -> np.ndarray:
"""
gradient of the kernel mean (integrated in second argument) evaluated at x1
:param x1: N points at which to evaluate, shape = (n_points N, N, input_dim)
:return: the gradient with shape (N, input_dim)
"""
raise NotImplementedError
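As a standalone sanity check of the transpose identity documented for `dK_dx2` (a sketch using a toy RBF kernel of my own; the function names `rbf_k` and `rbf_dK_dx1` are illustrative and not part of the library above):

```python
import numpy as np

def rbf_k(x1, x2, lengthscale=1.0):
    """Toy RBF kernel k(x1, x2) for arrays of shape (N, D) and (M, D)."""
    sq = ((x1[:, None, :] - x2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq / lengthscale ** 2)

def rbf_dK_dx1(x1, x2, lengthscale=1.0):
    """Gradient wrt x1, shape (D, N, M): d k(x1_n, x2_m) / d x1_n[d]."""
    K = rbf_k(x1, x2, lengthscale)
    diff = x1[:, None, :] - x2[None, :, :]  # (N, M, D)
    return np.transpose(-diff / lengthscale ** 2 * K[:, :, None], (2, 0, 1))

x1 = np.random.default_rng(0).normal(size=(4, 2))
x2 = np.random.default_rng(1).normal(size=(3, 2))

# The gradient wrt x2 is the (axes 1, 2)-transposed gradient wrt x1 with the
# arguments switched, because k is symmetric: d/dx2 k(x1, x2) = d/dx1 k(x2, x1).
dK_dx2 = np.transpose(rbf_dK_dx1(x2, x1), (0, 2, 1))
```

This mirrors the implementation of `dK_dx2` above: swap the arguments, differentiate wrt the first one, then transpose the two point axes.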
| 41.291667 | 115 | 0.655298 | 707 | 4,955 | 4.519095 | 0.229137 | 0.037559 | 0.051643 | 0.04507 | 0.430986 | 0.412833 | 0.405947 | 0.32989 | 0.309233 | 0.297653 | 0 | 0.020174 | 0.259738 | 4,955 | 119 | 116 | 41.638655 | 0.850872 | 0.614733 | 0 | 0.178571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.357143 | false | 0 | 0.142857 | 0 | 0.678571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
7ecf55baa4fcd48ccc00150d1ee3428823bc6c29 | 296 | py | Python | 1. Primitive Types/02variable.py | ahmadsohail404/helloPython | 7fa0c1c4ae7badecc6adc6e9ce35cce14e86766f | [
"MIT"
] | null | null | null | 1. Primitive Types/02variable.py | ahmadsohail404/helloPython | 7fa0c1c4ae7badecc6adc6e9ce35cce14e86766f | [
"MIT"
] | null | null | null | 1. Primitive Types/02variable.py | ahmadsohail404/helloPython | 7fa0c1c4ae7badecc6adc6e9ce35cce14e86766f | [
"MIT"
] | null | null | null | course = "Python Programming"
message = """
east n west, cats are the best!🐈
"""
print(message)
print(len(course))
print(course[0])
print(course[-1])
print(course[1:4]) # including 1, excluding 4
print(course[3:])  # from index 3 to the end
print(course[:3])  # from the start up to (not including) index 3
print(course[:]) # copy of the original
| 22.769231 | 46 | 0.679054 | 46 | 296 | 4.391304 | 0.543478 | 0.326733 | 0.118812 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.035573 | 0.14527 | 296 | 12 | 47 | 24.666667 | 0.758893 | 0.206081 | 0 | 0 | 0 | 0 | 0.225108 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.666667 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
7ed0f40c9470d352cff8b9043d98b7694e0b36b2 | 385 | py | Python | third_party/gsutil/third_party/protorpc/protorpc/google_imports.py | ravitejavalluri/catapult | 246a39a82c2213d913a96fff020a263838dc76e6 | [
"BSD-3-Clause"
] | 76 | 2015-01-04T13:45:16.000Z | 2022-02-12T11:06:49.000Z | third_party/gsutil/third_party/protorpc/protorpc/google_imports.py | ravitejavalluri/catapult | 246a39a82c2213d913a96fff020a263838dc76e6 | [
"BSD-3-Clause"
] | 27 | 2015-02-12T20:04:37.000Z | 2020-04-28T07:51:39.000Z | third_party/gsutil/third_party/protorpc/protorpc/google_imports.py | ravitejavalluri/catapult | 246a39a82c2213d913a96fff020a263838dc76e6 | [
"BSD-3-Clause"
] | 42 | 2015-01-24T09:49:07.000Z | 2020-10-13T16:59:31.000Z | """Dynamically decide from where to import other SDK modules.
All other protorpc code should import other SDK modules from
this module. If necessary, add new imports here (in both places).
"""
__author__ = 'yey@google.com (Ye Yuan)'
# pylint: disable=g-import-not-at-top
# pylint: disable=unused-import
try:
from google.net.proto import ProtocolBuffer
except ImportError:
pass
| 24.0625 | 65 | 0.761039 | 57 | 385 | 5.070175 | 0.77193 | 0.076125 | 0.096886 | 0.145329 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.150649 | 385 | 15 | 66 | 25.666667 | 0.883792 | 0.657143 | 0 | 0 | 0 | 0 | 0.193548 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.2 | 0.4 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 2 |
7efd7e581f041d6f1b1c0b05b35f1ec56f19dc5d | 845 | py | Python | tests/test_model.py | sepydev/django-user | 1a67caa197f9bb72ec41491cac1ae0a94385da87 | [
"MIT"
] | 1 | 2022-02-05T18:26:02.000Z | 2022-02-05T18:26:02.000Z | tests/test_model.py | mrprocs/django-user | 1a67caa197f9bb72ec41491cac1ae0a94385da87 | [
"MIT"
] | null | null | null | tests/test_model.py | mrprocs/django-user | 1a67caa197f9bb72ec41491cac1ae0a94385da87 | [
"MIT"
] | null | null | null | from django.test import TestCase
from users.models import User
class UserModelTest(TestCase):
def test_create_user(self):
email = 'test@test.com'
password = 'testpass'
user = User.objects.create_user(email, password)
added_user = User.objects.filter(pk=user.pk).first()
self.assertEqual(email, added_user.email)
self.assertTrue(user.check_password(password))
def test_create_superuser(self):
email = 'test@test.com'
password = 'testpass'
user = User.objects.create_superuser(email, password)
added_user = User.objects.filter(pk=user.pk).first()
self.assertEqual(email, added_user.email)
self.assertTrue(user.check_password(password))
self.assertEqual(added_user.is_staff, 1)
self.assertEqual(added_user.is_superuser, 1)
| 33.8 | 61 | 0.685207 | 105 | 845 | 5.361905 | 0.27619 | 0.095915 | 0.106572 | 0.060391 | 0.738899 | 0.646536 | 0.646536 | 0.646536 | 0.646536 | 0.646536 | 0 | 0.002985 | 0.207101 | 845 | 24 | 62 | 35.208333 | 0.837313 | 0 | 0 | 0.526316 | 0 | 0 | 0.049704 | 0 | 0 | 0 | 0 | 0 | 0.315789 | 1 | 0.105263 | false | 0.315789 | 0.105263 | 0 | 0.263158 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
7d24a7405347c19af085d9146f812fecc66c0848 | 665 | py | Python | ui_test/user_flows.py | uktrade/dit-contact-forms | 57d4f040392ce31d8188d492dab5a8ca6eb5cc90 | [
"MIT"
] | 2 | 2020-02-05T11:12:48.000Z | 2021-06-03T14:21:44.000Z | ui_test/user_flows.py | uktrade/dit-contact-forms | 57d4f040392ce31d8188d492dab5a8ca6eb5cc90 | [
"MIT"
] | 8 | 2020-01-28T14:47:14.000Z | 2021-05-28T11:20:48.000Z | ui_test/user_flows.py | uktrade/dit-contact-forms | 57d4f040392ce31d8188d492dab5a8ca6eb5cc90 | [
"MIT"
] | null | null | null | from ui_test.selectors.questionnaire import QUESTIONNAIRE
from ui_test.selectors.form import FORM
def select_questionnaire(browser, options):
for key, value in options.items():
browser.find_by_css(QUESTIONNAIRE[key][value]).click()
browser.find_by_css(QUESTIONNAIRE["continue"]).click()
def submit_form(browser, options):
browser.find_by_css(FORM["message"]).first.type(options["message"])
browser.find_by_css(FORM["name"]).first.type(options["name"])
browser.find_by_css(FORM["email"]).first.type(options["email"])
browser.find_by_css(FORM["accept_terms"]).click()
browser.find_by_css(QUESTIONNAIRE["continue"]).click()
| 39.117647 | 71 | 0.741353 | 90 | 665 | 5.266667 | 0.322222 | 0.162447 | 0.191983 | 0.236287 | 0.42827 | 0.198312 | 0.198312 | 0.198312 | 0 | 0 | 0 | 0 | 0.105263 | 665 | 16 | 72 | 41.5625 | 0.796639 | 0 | 0 | 0.166667 | 0 | 0 | 0.090226 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.166667 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
7d26bf15430855fae6112866c983881399d4ccc1 | 2,025 | py | Python | release/stubs.min/System/Diagnostics/__init___parts/EventLogEntryCollection.py | tranconbv/ironpython-stubs | a601759e6c6819beff8e6b639d18a24b7e351851 | [
"MIT"
] | null | null | null | release/stubs.min/System/Diagnostics/__init___parts/EventLogEntryCollection.py | tranconbv/ironpython-stubs | a601759e6c6819beff8e6b639d18a24b7e351851 | [
"MIT"
] | null | null | null | release/stubs.min/System/Diagnostics/__init___parts/EventLogEntryCollection.py | tranconbv/ironpython-stubs | a601759e6c6819beff8e6b639d18a24b7e351851 | [
"MIT"
] | null | null | null | class EventLogEntryCollection(object):
""" Defines size and enumerators for a collection of System.Diagnostics.EventLogEntry instances. """
def ZZZ(self):
"""hardcoded/mock instance of the class"""
return EventLogEntryCollection()
instance=ZZZ()
"""hardcoded/returns an instance of the class"""
def CopyTo(self,entries,index):
"""
CopyTo(self: EventLogEntryCollection,entries: Array[EventLogEntry],index: int)
Copies the elements of the System.Diagnostics.EventLogEntryCollection to an array of System.Diagnostics.EventLogEntry instances,starting at a particular array index.
entries: The one-dimensional array of System.Diagnostics.EventLogEntry instances that is the destination of the elements copied from the collection. The array must have zero-based
indexing.
index: The zero-based index in the array at which copying begins.
"""
pass
def GetEnumerator(self):
"""
GetEnumerator(self: EventLogEntryCollection) -> IEnumerator
Supports a simple iteration over the System.Diagnostics.EventLogEntryCollection object.
Returns: An object that can be used to iterate over the collection.
"""
pass
def __getitem__(self,*args):
""" x.__getitem__(y) <==> x[y] """
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
def __iter__(self,*args):
""" __iter__(self: IEnumerable) -> object """
pass
def __len__(self,*args):
""" x.__len__() <==> len(x) """
pass
def __repr__(self,*args):
""" __repr__(self: object) -> str """
pass
Count=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Gets the number of entries in the event log (that is,the number of elements in the System.Diagnostics.EventLogEntry collection).
Get: Count(self: EventLogEntryCollection) -> int
"""
| 40.5 | 215 | 0.711111 | 249 | 2,025 | 5.493976 | 0.353414 | 0.074561 | 0.087719 | 0.070175 | 0.179825 | 0.149854 | 0.082602 | 0.082602 | 0.082602 | 0.082602 | 0 | 0 | 0.176296 | 2,025 | 49 | 216 | 41.326531 | 0.820144 | 0.590123 | 0 | 0.368421 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.421053 | false | 0.368421 | 0 | 0 | 0.631579 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
7d29cc7091138f7c29639a629ad6896f1032f9eb | 379 | py | Python | cookbook/c03/p08_fraction.py | itpubs/python3-cookbook | 140f5e4cc0416b9674edca7f4c901b1f58fc1415 | [
"Apache-2.0"
] | 3 | 2018-05-10T01:13:08.000Z | 2018-06-17T12:34:07.000Z | cookbook/c03/p08_fraction.py | itpubs/python3-cookbook | 140f5e4cc0416b9674edca7f4c901b1f58fc1415 | [
"Apache-2.0"
] | 2 | 2020-09-19T17:10:23.000Z | 2020-10-17T16:43:52.000Z | cookbook/c03/p08_fraction.py | itpubs/python3-cookbook | 140f5e4cc0416b9674edca7f4c901b1f58fc1415 | [
"Apache-2.0"
] | 1 | 2020-12-22T06:33:18.000Z | 2020-12-22T06:33:18.000Z | #!/usr/bin/env python
# -*- encoding: utf-8 -*-
"""
Topic: Fraction arithmetic
Desc :
"""
from fractions import Fraction
def frac():
a = Fraction(5, 4)
b = Fraction(7, 16)
    print(a + b)
print(a.numerator, a.denominator)
c = a + b
print(float(c))
print(type(c.limit_denominator(8)))
print(c.limit_denominator(8))
if __name__ == '__main__':
frac()
| 15.791667 | 39 | 0.591029 | 54 | 379 | 3.962963 | 0.592593 | 0.056075 | 0.065421 | 0.168224 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027397 | 0.229551 | 379 | 23 | 40 | 16.478261 | 0.705479 | 0.168865 | 0 | 0 | 0 | 0 | 0.026144 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.083333 | 0 | 0.166667 | 0.416667 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
adb4be3e958e86b2c9b97c9e43374da0e456944b | 826 | py | Python | JugandoCodewars/RomanNumeralsHelper.py | blukitas/JugandoCodewars | aa4938fbba3911ba2d5f6fea2ff35fe3cbbf37b0 | [
"MIT"
] | null | null | null | JugandoCodewars/RomanNumeralsHelper.py | blukitas/JugandoCodewars | aa4938fbba3911ba2d5f6fea2ff35fe3cbbf37b0 | [
"MIT"
] | null | null | null | JugandoCodewars/RomanNumeralsHelper.py | blukitas/JugandoCodewars | aa4938fbba3911ba2d5f6fea2ff35fe3cbbf37b0 | [
"MIT"
] | null | null | null | # Create a RomanNumerals class that can convert a roman numeral to and from an integer value.
# It should follow the API demonstrated in the examples below.
# Multiple roman numeral values will be tested for each helper method.
# Modern Roman numerals are written by expressing each digit separately starting with
# the left most digit and skipping any digit with a value of zero. In Roman numerals
# 1990 is rendered: 1000=M, 900=CM, 90=XC; resulting in MCMXC.
# 2008 is written as 2000=MM, 8=VIII; or MMVIII.
# 1666 uses each Roman symbol in descending order: MDCLXVI.
# Examples
# RomanNumerals.to_roman(1000) # should return 'M'
# RomanNumerals.from_roman('M') # should return 1000
# Help
# | Symbol | Value |
# |--------|-------|
# | I      | 1     |
# | V      | 5     |
# | X      | 10    |
# | L      | 50    |
# | C      | 100   |
# | D      | 500   |
# | M      | 1000  |
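A minimal sketch of the class described above (the class and method names come from the kata's API; the greedy conversion strategy is one common approach, not necessarily the kata's reference solution):

```python
class RomanNumerals:
    """Convert between integers and modern Roman numerals."""

    # Values in descending order, including the subtractive pairs (CM, XC, ...).
    _VALUES = [
        (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
        (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
        (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I"),
    ]

    @staticmethod
    def to_roman(n):
        # Greedily take as many of each value as fit, largest first.
        parts = []
        for value, symbol in RomanNumerals._VALUES:
            count, n = divmod(n, value)
            parts.append(symbol * count)
        return "".join(parts)

    @staticmethod
    def from_roman(roman):
        symbols = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}
        total = 0
        for cur, nxt in zip(roman, roman[1:] + " "):
            value = symbols[cur]
            # A smaller value before a larger one is subtracted (e.g. CM = 900).
            total += -value if value < symbols.get(nxt, 0) else value
        return total
```

For example, `RomanNumerals.to_roman(1990)` walks the table as 1000=M, 900=CM, 90=XC, giving `"MCMXC"` as in the description above.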
| 51.625 | 118 | 0.688862 | 128 | 826 | 4.429688 | 0.6875 | 0.042328 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076336 | 0.207022 | 826 | 15 | 119 | 55.066667 | 0.789313 | 0.96247 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
adb5120d91c1806cfaa651f7a9c06803ccccc48c | 22,713 | py | Python | tests/test_offsets.py | Yelp/yelp_kafka | 6391985ff90e7d121c95dd352cae3aa3bf96abaa | [
"Apache-2.0"
] | 40 | 2016-11-17T20:28:15.000Z | 2021-10-06T01:47:07.000Z | tests/test_offsets.py | Yelp/yelp_kafka | 6391985ff90e7d121c95dd352cae3aa3bf96abaa | [
"Apache-2.0"
] | null | null | null | tests/test_offsets.py | Yelp/yelp_kafka | 6391985ff90e7d121c95dd352cae3aa3bf96abaa | [
"Apache-2.0"
] | 16 | 2016-12-07T01:53:15.000Z | 2021-06-24T02:32:50.000Z | # -*- coding: utf-8 -*-
# Copyright 2016 Yelp Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import unicode_literals
import copy
import mock
import pytest
from kafka.common import NotLeaderForPartitionError
from kafka.common import OffsetCommitResponse
from kafka.common import OffsetFetchResponse
from kafka.common import OffsetResponse
from kafka.common import RequestTimedOutError
from kafka.common import UnknownTopicOrPartitionError
from yelp_kafka.error import InvalidOffsetStorageError
from yelp_kafka.offsets import _verify_commit_offsets_requests
from yelp_kafka.offsets import advance_consumer_offsets
from yelp_kafka.offsets import get_current_consumer_offsets
from yelp_kafka.offsets import get_topics_watermarks
from yelp_kafka.offsets import OffsetCommitError
from yelp_kafka.offsets import PartitionOffsets
from yelp_kafka.offsets import rewind_consumer_offsets
from yelp_kafka.offsets import set_consumer_offsets
from yelp_kafka.offsets import UnknownPartitions
from yelp_kafka.offsets import UnknownTopic
@pytest.fixture(params=[['topic1'], set(['topic1']), ('topic1',)])
def topics(request):
return request.param
class MyKafkaClient(object):
def __init__(
self,
topics,
group_offsets,
high_offsets,
low_offsets
):
self.topics = topics
self.group_offsets = group_offsets
self.high_offsets = high_offsets
self.low_offsets = low_offsets
self.commit_error = False
self.offset_request_error = False
def load_metadata_for_topics(self):
pass
def send_offset_request(
self,
payloads=None,
fail_on_error=True,
callback=None
):
if payloads is None:
payloads = []
resps = []
for req in payloads:
if req.time == -1:
offset = self.high_offsets[req.topic.decode()].get(req.partition, -1)
else:
offset = self.low_offsets[req.topic.decode()].get(req.partition, -1)
if self.offset_request_error:
error_code = NotLeaderForPartitionError.errno
elif req.partition not in self.topics[req.topic.decode()]:
error_code = UnknownTopicOrPartitionError.errno
else:
error_code = 0
resps.append(OffsetResponse(
req.topic.decode(),
req.partition,
error_code,
(offset,)
))
return [resp if not callback else callback(resp) for resp in resps]
def set_commit_error(self):
self.commit_error = True
def set_offset_request_error(self):
self.offset_request_error = True
def send_offset_commit_request(
self,
group,
payloads=None,
fail_on_error=True,
callback=None
):
if payloads is None:
payloads = []
resps = []
for req in payloads:
if not self.commit_error:
self.group_offsets[req.topic.decode()][req.partition] = req.offset
resps.append(
OffsetCommitResponse(
req.topic.decode(),
req.partition,
0
)
)
else:
resps.append(
OffsetCommitResponse(
req.topic.decode(),
req.partition,
RequestTimedOutError.errno
)
)
return [resp if not callback else callback(resp) for resp in resps]
def send_offset_commit_request_kafka(
self,
group,
payloads=None,
fail_on_error=True,
callback=None
):
return self.send_offset_commit_request(
group,
payloads,
fail_on_error,
callback,
)
def has_metadata_for_topic(self, t):
return t in self.topics
def get_partition_ids_for_topic(self, topic):
return self.topics[topic]
def send_offset_fetch_request(
self,
group,
payloads,
fail_on_error,
callback,
):
return self._send_offset_fetch_request_either(
group,
payloads,
fail_on_error,
callback,
)
def send_offset_fetch_request_kafka(
self,
group,
payloads,
fail_on_error,
callback
):
return self._send_offset_fetch_request_either(
group,
payloads,
fail_on_error,
callback,
)
def _send_offset_fetch_request_either(
self,
group,
payloads,
fail_on_error,
callback
):
return [
callback(
OffsetFetchResponse(
req.topic.decode(),
req.partition,
self.group_offsets[req.topic.decode()].get(req.partition, -1),
None,
0 if req.partition in self.group_offsets[req.topic.decode()] else 3
),
)
for req in payloads
]
class TestOffsetsBase(object):
topics = {
'topic1': [0, 1, 2],
'topic2': [0, 1]
}
group = 'group_name'
high_offsets = {
'topic1': {
0: 30,
1: 30,
2: 30,
},
'topic2': {
0: 50,
1: 50
}
}
low_offsets = {
'topic1': {
0: 10,
1: 5,
2: 3,
},
'topic2': {
0: 5,
1: 5,
}
}
group_offsets = {
'topic1': {
0: 30,
1: 20,
2: 10,
},
'topic2': {
0: 15,
}
}
@pytest.fixture
def kafka_client_mock(self):
return MyKafkaClient(
self.topics,
copy.deepcopy(self.group_offsets),
self.high_offsets,
self.low_offsets
)
class TestOffsets(TestOffsetsBase):
def test_get_current_consumer_offsets_invalid_arguments(self, kafka_client_mock):
with pytest.raises(TypeError):
get_current_consumer_offsets(
kafka_client_mock,
"this won't even be consulted",
"this should be a list or dict",
)
def test_get_current_consumer_offsets_unknown_topic(self, kafka_client_mock):
with pytest.raises(UnknownTopic):
get_current_consumer_offsets(
kafka_client_mock,
"this won't even be consulted",
["something that doesn't exist"],
)
def test_get_current_consumer_offsets_unknown_topic_no_fail(self, kafka_client_mock):
actual = get_current_consumer_offsets(
kafka_client_mock,
"this won't even be consulted",
["something that doesn't exist"],
raise_on_error=False
)
assert not actual
def test_get_current_consumer_offsets_unknown_partitions(self, kafka_client_mock):
with pytest.raises(UnknownPartitions):
get_current_consumer_offsets(
kafka_client_mock,
self.group,
{'topic1': [99]},
)
def test_get_current_consumer_offsets_unknown_partitions_no_fail(self, kafka_client_mock):
actual = get_current_consumer_offsets(
kafka_client_mock,
self.group,
{'topic1': [99]},
raise_on_error=False
)
assert not actual
def test_get_current_consumer_offsets_invalid_partition_subset(self, kafka_client_mock):
with pytest.raises(UnknownPartitions):
get_current_consumer_offsets(
kafka_client_mock,
self.group,
{'topic1': [1, 99]},
)
def test_get_current_consumer_offsets_invalid_partition_subset_no_fail(self, kafka_client_mock):
actual = get_current_consumer_offsets(
kafka_client_mock,
self.group,
{'topic1': [1, 99]},
raise_on_error=False
)
assert actual['topic1'][1] == 20
# Partition 99 does not exist so it shouldn't be in the result
assert 99 not in actual['topic1']
def test_get_current_consumer_offsets(self, topics, kafka_client_mock):
actual = get_current_consumer_offsets(
kafka_client_mock,
self.group,
topics
)
assert actual == {'topic1': {0: 30, 1: 20, 2: 10}}
def test_get_current_consumer_offsets_from_zookeeper(
self,
topics,
kafka_client_mock
):
kafka_client_mock = mock.Mock(wraps=kafka_client_mock)
get_current_consumer_offsets(
kafka_client_mock,
self.group,
topics,
offset_storage='zookeeper',
)
assert kafka_client_mock.send_offset_fetch_request.call_count == 1
assert kafka_client_mock.send_offset_fetch_request_kafka.call_count == 0
def test_get_current_consumer_offsets_from_kafka(
self,
topics,
kafka_client_mock
):
kafka_client_mock = mock.Mock(wraps=kafka_client_mock)
get_current_consumer_offsets(
kafka_client_mock,
self.group,
topics,
offset_storage='kafka',
)
assert kafka_client_mock.send_offset_fetch_request.call_count == 0
assert kafka_client_mock.send_offset_fetch_request_kafka.call_count == 1
def test_get_current_consumer_offsets_invalid_storage(
self,
topics,
kafka_client_mock
):
kafka_client_mock = mock.Mock(wraps=kafka_client_mock)
with pytest.raises(InvalidOffsetStorageError):
get_current_consumer_offsets(
kafka_client_mock,
self.group,
topics,
offset_storage='random_string',
)
assert kafka_client_mock.send_offset_fetch_request.call_count == 0
assert kafka_client_mock.send_offset_fetch_request_kafka.call_count == 0
def test_get_topics_watermarks_invalid_arguments(self, kafka_client_mock):
with pytest.raises(TypeError):
get_topics_watermarks(
kafka_client_mock,
"this should be a list or dict",
)
def test_get_topics_watermarks_unknown_topic(self, kafka_client_mock):
with pytest.raises(UnknownTopic):
get_topics_watermarks(
kafka_client_mock,
["something that doesn't exist"],
)
def test_get_topics_watermarks_unknown_topic_no_fail(self, kafka_client_mock):
actual = get_topics_watermarks(
kafka_client_mock,
["something that doesn't exist"],
raise_on_error=False,
)
assert not actual
def test_get_topics_watermarks_unknown_partitions(self, kafka_client_mock):
with pytest.raises(UnknownPartitions):
get_topics_watermarks(
kafka_client_mock,
{'topic1': [99]},
)
def test_get_topics_watermarks_unknown_partitions_no_fail(self, kafka_client_mock):
actual = get_topics_watermarks(
kafka_client_mock,
{'topic1': [99]},
raise_on_error=False,
)
assert not actual
def test_get_topics_watermarks_invalid_partition_subset(self, kafka_client_mock):
with pytest.raises(UnknownPartitions):
get_topics_watermarks(
kafka_client_mock,
{'topic1': [1, 99]},
)
def test_get_topics_watermarks_invalid_partition_subset_no_fail(self, kafka_client_mock):
actual = get_topics_watermarks(
kafka_client_mock,
{'topic1': [1, 99]},
raise_on_error=False,
)
assert actual['topic1'][1] == PartitionOffsets('topic1', 1, 30, 5)
assert 99 not in actual['topic1']
def test_get_topics_watermarks(self, topics, kafka_client_mock):
actual = get_topics_watermarks(
kafka_client_mock,
topics,
)
assert actual == {'topic1': {
0: PartitionOffsets('topic1', 0, 30, 10),
1: PartitionOffsets('topic1', 1, 30, 5),
2: PartitionOffsets('topic1', 2, 30, 3),
}}
def test_get_topics_watermarks_commit_error(self, topics, kafka_client_mock):
kafka_client_mock.set_offset_request_error()
actual = get_topics_watermarks(
kafka_client_mock,
{'topic1': [0]},
)
assert actual == {'topic1': {
0: PartitionOffsets('topic1', 0, -1, -1),
}}
def test__verify_commit_offsets_requests(self, kafka_client_mock):
new_offsets = {
'topic1': {
0: 123,
1: 456,
},
'topic2': {
0: 12,
},
}
valid_new_offsets = _verify_commit_offsets_requests(
kafka_client_mock,
new_offsets,
True
)
assert new_offsets == valid_new_offsets
def test__verify_commit_offsets_requests_invalid_types_raise_error(
self,
kafka_client_mock
):
new_offsets = "my_str"
with pytest.raises(TypeError):
_verify_commit_offsets_requests(
kafka_client_mock,
new_offsets,
True
)
def test__verify_commit_offsets_requests_invalid_types_no_raise_error(
self,
kafka_client_mock
):
new_offsets = {'topic1': 2, 'topic2': 1}
with pytest.raises(TypeError):
_verify_commit_offsets_requests(
kafka_client_mock,
new_offsets,
False
)
def test__verify_commit_offsets_requests_bad_partitions(
self,
kafka_client_mock
):
new_offsets = {
'topic1': {
23: 123,
11: 456,
},
'topic2': {
21: 12,
},
}
with pytest.raises(UnknownPartitions):
_verify_commit_offsets_requests(
kafka_client_mock,
new_offsets,
True
)
def test__verify_commit_offsets_requests_bad_topics(
self,
kafka_client_mock
):
new_offsets = {
'topic32': {
0: 123,
1: 456,
},
'topic33': {
0: 12,
},
}
with pytest.raises(UnknownTopic):
_verify_commit_offsets_requests(
kafka_client_mock,
new_offsets,
True
)
def test__verify_commit_offsets_requests_bad_partitions_no_fail(
self,
kafka_client_mock
):
new_offsets = {
'topic1': {
0: 32,
23: 123,
11: 456,
},
'topic2': {
21: 12,
},
}
valid_new_offsets = _verify_commit_offsets_requests(
kafka_client_mock,
new_offsets,
False
)
expected_valid_offsets = {
'topic1': {
0: 32,
},
}
assert valid_new_offsets == expected_valid_offsets
def test__verify_commit_offsets_requests_bad_topics_no_fail(
self,
kafka_client_mock
):
new_offsets = {
'topic32': {
0: 123,
1: 456,
},
'topic33': {
0: 12,
},
}
valid_new_offsets = _verify_commit_offsets_requests(
kafka_client_mock,
new_offsets,
False
)
assert valid_new_offsets == {}
def test_advance_consumer_offsets(self, kafka_client_mock):
topics = {
'topic1': [0, 1, 2],
'topic2': [0, 1],
}
status = list(advance_consumer_offsets(
kafka_client_mock,
"group",
topics
))
assert status == []
assert kafka_client_mock.group_offsets == self.high_offsets
def test_advance_consumer_offsets_fail(self, kafka_client_mock):
kafka_client_mock.set_commit_error()
topics = {
'topic1': [0, 1, 2],
'topic2': [0, 1],
}
expected_status = [
OffsetCommitError("topic1", 0, RequestTimedOutError.message),
OffsetCommitError("topic1", 1, RequestTimedOutError.message),
OffsetCommitError("topic1", 2, RequestTimedOutError.message),
OffsetCommitError("topic2", 0, RequestTimedOutError.message),
OffsetCommitError("topic2", 1, RequestTimedOutError.message),
]
status = list(advance_consumer_offsets(
kafka_client_mock,
"group",
topics
))
assert len(status) == len(expected_status)
for expected in expected_status:
assert any(actual == expected for actual in status)
assert kafka_client_mock.group_offsets == self.group_offsets
def test_rewind_consumer_offsets_zk(self, kafka_client_mock):
topics = {
'topic1': [0, 1, 2],
'topic2': [0, 1],
}
kafka_client_spy = mock.Mock(wraps=kafka_client_mock)
status = list(rewind_consumer_offsets(
kafka_client_spy,
"group",
topics
))
assert status == []
assert kafka_client_mock.group_offsets == self.low_offsets
assert kafka_client_spy.send_offset_commit_request.called
assert not kafka_client_spy.send_offset_commit_request_kafka.called
def test_rewind_consumer_offsets_kafka(self, kafka_client_mock):
topics = {
'topic1': [0, 1, 2],
'topic2': [0, 1],
}
client_spy = mock.Mock(wraps=kafka_client_mock)
rewind_consumer_offsets(
client_spy,
"group",
topics,
offset_storage='kafka',
)
assert not client_spy.send_offset_commit_request.called
assert client_spy.send_offset_commit_request_kafka.called
def test_rewind_consumer_offsets_fail(self, kafka_client_mock):
kafka_client_mock.set_commit_error()
topics = {
'topic1': [0, 1, 2],
'topic2': [0, 1],
}
expected_status = [
OffsetCommitError("topic1", 0, RequestTimedOutError.message),
OffsetCommitError("topic1", 1, RequestTimedOutError.message),
OffsetCommitError("topic1", 2, RequestTimedOutError.message),
OffsetCommitError("topic2", 0, RequestTimedOutError.message),
OffsetCommitError("topic2", 1, RequestTimedOutError.message),
]
status = list(rewind_consumer_offsets(
kafka_client_mock,
"group",
topics
))
assert len(status) == len(expected_status)
for expected in expected_status:
assert any(actual == expected for actual in status)
assert kafka_client_mock.group_offsets == self.group_offsets
def test_set_consumer_offsets_zk(self, kafka_client_mock):
new_offsets = {
'topic1': {
0: 100,
1: 200,
},
'topic2': {
0: 150,
1: 300,
},
}
kafka_client_spy = mock.Mock(wraps=kafka_client_mock)
status = list(set_consumer_offsets(
kafka_client_spy,
"group",
new_offsets
))
expected_offsets = {
'topic1': {
0: 100,
1: 200,
2: 10,
},
'topic2': {
0: 150,
1: 300,
}
}
assert status == []
assert kafka_client_mock.group_offsets == expected_offsets
assert kafka_client_spy.send_offset_commit_request.called
assert not kafka_client_spy.send_offset_commit_request_kafka.called
def test_set_consumer_offsets_kafka(self, kafka_client_mock):
new_offsets = {
'topic1': {
0: 100,
1: 200,
},
'topic2': {
0: 150,
1: 300,
},
}
kafka_client_spy = mock.Mock(wraps=kafka_client_mock)
set_consumer_offsets(
kafka_client_spy,
"group",
new_offsets,
offset_storage='kafka',
)
assert not kafka_client_spy.send_offset_commit_request.called
assert kafka_client_spy.send_offset_commit_request_kafka.called
def test_set_consumer_offsets_fail(self, kafka_client_mock):
kafka_client_mock.set_commit_error()
new_offsets = {
'topic1': {
0: 100,
1: 200,
},
'topic2': {
0: 150,
1: 300,
},
}
expected_status = [
OffsetCommitError("topic1", 0, RequestTimedOutError.message),
OffsetCommitError("topic1", 1, RequestTimedOutError.message),
OffsetCommitError("topic2", 0, RequestTimedOutError.message),
OffsetCommitError("topic2", 1, RequestTimedOutError.message),
]
status = list(set_consumer_offsets(
kafka_client_mock,
"group",
new_offsets,
raise_on_error=True
))
assert len(status) == len(expected_status)
for expected in expected_status:
assert any(actual == expected for actual in status)
assert kafka_client_mock.group_offsets == self.group_offsets
| 30.123342 | 100 | 0.567737 | 2,300 | 22,713 | 5.256957 | 0.095217 | 0.095526 | 0.115375 | 0.045571 | 0.792987 | 0.746836 | 0.716153 | 0.662476 | 0.576462 | 0.555206 | 0 | 0.026399 | 0.356228 | 22,713 | 753 | 101 | 30.163347 | 0.800506 | 0.027693 | 0 | 0.616893 | 0 | 0 | 0.036841 | 0 | 0 | 0 | 0 | 0 | 0.064857 | 1 | 0.073906 | false | 0.001508 | 0.033183 | 0.012066 | 0.134238 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
adc410d3bdc717f6983dcd4f235c8aa76476793f | 1,740 | py | Python | angrmanagement/ui/menus/view_menu.py | GeistInDerSH/angr-management | 7033aa25957d8d59cea7ba10e296d38b4b6678b7 | [
"BSD-2-Clause"
] | null | null | null | angrmanagement/ui/menus/view_menu.py | GeistInDerSH/angr-management | 7033aa25957d8d59cea7ba10e296d38b4b6678b7 | [
"BSD-2-Clause"
] | null | null | null | angrmanagement/ui/menus/view_menu.py | GeistInDerSH/angr-management | 7033aa25957d8d59cea7ba10e296d38b4b6678b7 | [
"BSD-2-Clause"
] | null | null | null | from PySide2.QtGui import QKeySequence
from PySide2.QtCore import Qt
from .menu import Menu, MenuEntry, MenuSeparator
class ViewMenu(Menu):
def __init__(self, main_window):
super(ViewMenu, self).__init__("&View", parent=main_window)
self.entries.extend([
MenuEntry('Next Tab', main_window.workspace.view_manager.next_tab, shortcut=QKeySequence("Ctrl+Tab")),
MenuEntry('Previous Tab', main_window.workspace.view_manager.previous_tab, shortcut=QKeySequence("Ctrl+Shift+Tab")),
MenuSeparator(),
MenuEntry('New Disassembly View', main_window.workspace.new_disassembly_view, shortcut=QKeySequence("Ctrl+N")),
MenuEntry('Split / Unsplit View', main_window.workspace.toggle_split, shortcut=QKeySequence("Ctrl+D")),
MenuSeparator(),
MenuEntry('Linear Disassembly', main_window.workspace.show_linear_disassembly_view),
MenuEntry('Graph Disassembly', main_window.workspace.show_graph_disassembly_view),
MenuEntry('Symbolic Execution', main_window.workspace.show_symexec_view),
MenuEntry('Symbolic States', main_window.workspace.show_states_view),
MenuEntry('Strings', main_window.workspace.show_strings_view),
MenuEntry('Proximity View', main_window.view_proximity_for_current_function),
MenuEntry('Patches', main_window.workspace.show_patches_view),
MenuEntry('Interaction', main_window.workspace.show_interaction_view),
MenuEntry('Types', main_window.workspace.show_types_view),
MenuEntry('Functions', main_window.workspace.show_functions_view),
MenuEntry('Console', main_window.workspace.show_console_view),
])
| 58 | 128 | 0.716092 | 189 | 1,740 | 6.285714 | 0.285714 | 0.143098 | 0.223906 | 0.193603 | 0.112795 | 0.055556 | 0 | 0 | 0 | 0 | 0 | 0.001397 | 0.177011 | 1,740 | 29 | 129 | 60 | 0.828212 | 0 | 0 | 0.08 | 0 | 0 | 0.13046 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04 | false | 0 | 0.12 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
adc559a3f5f58ac58e40327e4897c7189a74fe8e | 260 | py | Python | answers/MridulMohanta/Day21/question2.py | justshivam/30-DaysOfCode-March-2021 | 64d434c07b9ec875384dee681a3eecefab3ddef0 | [
"MIT"
] | 22 | 2021-03-16T14:07:47.000Z | 2021-08-13T08:52:50.000Z | answers/MridulMohanta/Day21/question2.py | MridulMohanta19/30-DaysOfCode-March-2021 | 42868af99a7eca8702f384ce2bd6519b327bb03b | [
"MIT"
] | 174 | 2021-03-16T21:16:40.000Z | 2021-06-12T05:19:51.000Z | answers/MridulMohanta/Day21/question2.py | MridulMohanta19/30-DaysOfCode-March-2021 | 42868af99a7eca8702f384ce2bd6519b327bb03b | [
"MIT"
] | 135 | 2021-03-16T16:47:12.000Z | 2021-06-27T14:22:38.000Z | n = int(input("Enter N: "))
l = []
for i in range(n):
    inp = int(input("Enter numbers: "))
    l.append(inp)
l.sort()
# Distribute the sorted digits alternately between two numbers so
# their sum is minimal, then print that sum.
a = 0
b = 0
for i in range(n):
    if i % 2 != 0:
        a = a * 10 + l[i]
    else:
        b = b * 10 + l[i]
c = a + b
print(c)
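The loop above splits the sorted digits alternately between two numbers so that their sum is as small as possible. A self-contained sketch of the same scheme (the name `min_sum` is mine, not from the original):

```python
def min_sum(digits):
    # Distribute sorted digits alternately across two numbers,
    # then return the sum of the two numbers.
    a = b = 0
    for i, d in enumerate(sorted(digits)):
        if i % 2:
            a = a * 10 + d
        else:
            b = b * 10 + d
    return a + b

assert min_sum([6, 8, 4, 5, 2, 3]) == 604  # 246 + 358
```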
| 16.25 | 39 | 0.438462 | 52 | 260 | 2.192308 | 0.423077 | 0.140351 | 0.22807 | 0.192982 | 0.22807 | 0.22807 | 0 | 0 | 0 | 0 | 0 | 0.059524 | 0.353846 | 260 | 15 | 40 | 17.333333 | 0.619048 | 0 | 0 | 0.133333 | 0 | 0 | 0.092308 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.066667 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
adc6276d40dce62109eed32b67950aa9b8bc65df | 3,055 | py | Python | isi_mip/invitation/views.py | ISI-MIP/isimip | c2a78c727337e38f3695031e00afd607da7d6dcb | [
"MIT"
] | 4 | 2017-07-05T08:06:18.000Z | 2021-03-01T17:23:18.000Z | isi_mip/invitation/views.py | ISI-MIP/isimip | c2a78c727337e38f3695031e00afd607da7d6dcb | [
"MIT"
] | 4 | 2020-01-31T09:02:57.000Z | 2021-04-20T14:04:35.000Z | isi_mip/invitation/views.py | ISI-MIP/isimip | c2a78c727337e38f3695031e00afd607da7d6dcb | [
"MIT"
] | 4 | 2017-10-12T01:48:55.000Z | 2020-04-29T13:50:03.000Z | import hashlib
import os
from django.conf import settings
from django.contrib import messages
from django.contrib.auth import authenticate, login
from django.contrib.auth.models import User
from django.contrib.sites.shortcuts import get_current_site
from django.core.urlresolvers import reverse
from django.http.response import HttpResponseRedirect
from django.shortcuts import get_object_or_404
from django.template.loader import render_to_string
from django.utils import timezone
from django.views.generic import FormView, UpdateView
from isi_mip.invitation.forms import InvitationForm, RegistrationForm
from isi_mip.invitation.models import Invitation
class InvitationView(FormView):
template_name = 'invitation/invitation_form.html'
form_class = InvitationForm
email_subject_template = 'invitation/email_subject.txt'
email_body_template = 'invitation/email_body.txt'
def __init__(self, **kwargs):
self.valid_until = timezone.now() + timezone.timedelta(days=getattr(settings, "INVITATION_VALID_DAYS", 7))
super(InvitationView, self).__init__(**kwargs)
def get_success_url(self):
return reverse('climatemodels:assign', kwargs={'username': self.invite.user.username})+'?next='+\
reverse('admin:auth_user_changelist')
def form_valid(self, form):
user = User.objects.create(
username=form.data['username'],
email=form.data['email'],
is_active=True)
self.invite = Invitation.objects.create(
user=user,
token=hashlib.sha1(os.urandom(128)).hexdigest(),
valid_from=timezone.now(),
valid_until=self.valid_until)
messages.success(self.request, 'User has been created successfully.')
return super(InvitationView, self).form_valid(form)
class RegistrationView(UpdateView):
template_name = 'invitation/registration_form.html'
form_class = RegistrationForm
model = User
success_url = '/dashboard/'
error_url = '/'
def dispatch(self, request, *args, **kwargs):
pk, token = kwargs['pk'], kwargs['token']
try:
user = User.objects.get(id=pk)
except User.DoesNotExist:
messages.error(request, 'Invalid username')
return HttpResponseRedirect(self.error_url)
try:
invite = Invitation.objects.get(user_id=pk, token=token)
        except Invitation.DoesNotExist:
messages.error(request, 'No invite')
return HttpResponseRedirect(self.error_url)
if invite.valid_until < timezone.now():
messages.error(request, 'Invite not valid anymore')
return HttpResponseRedirect(self.error_url)
return super(RegistrationView, self).dispatch(request, *args, **kwargs)
def form_valid(self, form):
response = super(RegistrationView, self).form_valid(form)
form.instance.backend = 'django.contrib.auth.backends.ModelBackend' # TODO: Replace this with "authenticate()"
login(self.request, form.instance)
return response | 39.166667 | 118 | 0.701473 | 351 | 3,055 | 5.960114 | 0.333333 | 0.052581 | 0.032505 | 0.050191 | 0.073614 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003271 | 0.199345 | 3,055 | 78 | 119 | 39.166667 | 0.852003 | 0.013093 | 0 | 0.107692 | 0 | 0 | 0.117784 | 0.068016 | 0 | 0 | 0 | 0.012821 | 0 | 1 | 0.076923 | false | 0 | 0.230769 | 0.015385 | 0.584615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
adcb45576391d5b429504146d71d332ed8ccaf2f | 228 | py | Python | setup.py | dungagan/mpesaapirouter | f36fd3a7ab296c7bb9afad89db9c08e5e1093f17 | [
"MIT"
] | 6 | 2021-12-04T06:01:36.000Z | 2022-02-28T22:46:38.000Z | setup.py | dungagan/mpesaapirouter | f36fd3a7ab296c7bb9afad89db9c08e5e1093f17 | [
"MIT"
] | 24 | 2020-10-15T10:41:31.000Z | 2021-09-22T19:37:11.000Z | setup.py | dungagan/mpesaapirouter | f36fd3a7ab296c7bb9afad89db9c08e5e1093f17 | [
"MIT"
] | 5 | 2020-10-12T16:41:10.000Z | 2022-02-02T14:56:15.000Z | from setuptools import setup
setup(
name='integration-portal',
version='0.1',
packages=['portalsdk'],
url='www.4cgroup.co.za',
license='MIT',
author='4C Group',
author_email='',
description=''
)
| 17.538462 | 30 | 0.614035 | 26 | 228 | 5.346154 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022222 | 0.210526 | 228 | 12 | 31 | 19 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0.254386 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.090909 | 0 | 0.090909 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
addb82bbb4b17b25786ee115510d56738b63bfb3 | 379 | py | Python | app/board_app/serializers.py | KimKiHyuk/BenefitObserver | 74d59ee2d9dc81f8b8423e14a9ce950fa21f332b | [
"MIT"
] | null | null | null | app/board_app/serializers.py | KimKiHyuk/BenefitObserver | 74d59ee2d9dc81f8b8423e14a9ce950fa21f332b | [
"MIT"
] | 8 | 2021-03-30T13:53:18.000Z | 2022-03-02T14:54:13.000Z | app/board_app/serializers.py | KimKiHyuk/BenefitObserver | 74d59ee2d9dc81f8b8423e14a9ce950fa21f332b | [
"MIT"
] | null | null | null | from rest_framework import serializers
from .models import Posts, Url
class UrlSerializer(serializers.ModelSerializer):
class Meta:
model = Url
fields = ['id', 'url', 'updated_at']
class PostSerializer(serializers.ModelSerializer):
url = UrlSerializer()
class Meta:
model = Posts
fields = ['id', 'title', 'url', 'updated_at']
| 22.294118 | 53 | 0.651715 | 39 | 379 | 6.25641 | 0.487179 | 0.213115 | 0.114754 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.237467 | 379 | 16 | 54 | 23.6875 | 0.844291 | 0 | 0 | 0.181818 | 0 | 0 | 0.092593 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.181818 | 0 | 0.636364 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
bc246716c1cffe722f42bb377d48fd67677027a7 | 391 | py | Python | tau/core/migrations/0002_reset_all_account_webhooks.py | Dj-Viking/tau | 91f324104c68e76e6514228203d1b7832428790f | [
"MIT"
] | 51 | 2021-06-24T16:22:05.000Z | 2022-03-26T02:39:26.000Z | tau/core/migrations/0002_reset_all_account_webhooks.py | Dj-Viking/tau | 91f324104c68e76e6514228203d1b7832428790f | [
"MIT"
] | 41 | 2021-04-05T17:21:13.000Z | 2021-06-13T14:26:40.000Z | tau/core/migrations/0002_reset_all_account_webhooks.py | Dj-Viking/tau | 91f324104c68e76e6514228203d1b7832428790f | [
"MIT"
] | 15 | 2021-02-16T22:14:36.000Z | 2021-06-07T20:14:33.000Z | # Generated by Django 3.1.7 on 2021-11-06 12:18
from django.db import migrations
from constance import config
def toggle_reset_webhooks(apps, schema_editor):
config.RESET_ALL_WEBHOOKS = True
class Migration(migrations.Migration):
dependencies = [
('core', '0001_scope_update_for_chat'),
]
operations = [
migrations.RunPython(toggle_reset_webhooks)
]
| 21.722222 | 51 | 0.71867 | 50 | 391 | 5.4 | 0.76 | 0.081481 | 0.140741 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.060317 | 0.194373 | 391 | 17 | 52 | 23 | 0.796825 | 0.11509 | 0 | 0 | 1 | 0 | 0.087209 | 0.075581 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.181818 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
bc29363710194dfc02bb70707e6c3e3455483325 | 2,492 | py | Python | projects/signals/handlers.py | migleankstutyte/kaavapino | 1fd0b642a66f1ec7c61decf46433dc9f0bf3ed8e | [
"MIT"
] | 3 | 2019-02-07T14:47:00.000Z | 2022-02-15T14:09:38.000Z | projects/signals/handlers.py | migleankstutyte/kaavapino | 1fd0b642a66f1ec7c61decf46433dc9f0bf3ed8e | [
"MIT"
] | 74 | 2017-12-13T09:18:04.000Z | 2022-03-11T23:29:59.000Z | projects/signals/handlers.py | migleankstutyte/kaavapino | 1fd0b642a66f1ec7c61decf46433dc9f0bf3ed8e | [
"MIT"
] | 8 | 2017-12-13T09:31:20.000Z | 2022-02-15T13:10:34.000Z | import os
from django.core.cache import cache
from django.db.models.signals import (
pre_delete, post_save, post_delete, m2m_changed
)
from django.dispatch import receiver
from projects.models import (
ProjectAttributeFile,
Attribute,
DataRetentionPlan,
AttributeValueChoice,
FieldSetAttribute,
ProjectType,
ProjectSubtype,
ProjectFloorAreaSection,
ProjectFloorAreaSectionAttribute,
ProjectFloorAreaSectionAttributeMatrixStructure,
ProjectFloorAreaSectionAttributeMatrixCell,
ProjectPhase,
ProjectPhaseSection,
ProjectPhaseSectionAttribute,
ProjectPhaseFieldSetAttributeIndex,
PhaseAttributeMatrixStructure,
PhaseAttributeMatrixCell,
ProjectPhaseDeadlineSection,
ProjectPhaseDeadlineSectionAttribute,
Deadline,
)
@receiver([post_save, post_delete, m2m_changed], sender=Attribute)
@receiver([post_save, post_delete, m2m_changed], sender=DataRetentionPlan)
@receiver([post_save, post_delete, m2m_changed], sender=AttributeValueChoice)
@receiver([post_save, post_delete, m2m_changed], sender=FieldSetAttribute)
@receiver([post_save, post_delete, m2m_changed], sender=ProjectType)
@receiver([post_save, post_delete, m2m_changed], sender=ProjectSubtype)
@receiver([post_save, post_delete, m2m_changed], sender=ProjectFloorAreaSection)
@receiver([post_save, post_delete, m2m_changed], sender=ProjectFloorAreaSectionAttribute)
@receiver([post_save, post_delete, m2m_changed], sender=ProjectFloorAreaSectionAttributeMatrixStructure)
@receiver([post_save, post_delete, m2m_changed], sender=ProjectFloorAreaSectionAttributeMatrixCell)
@receiver([post_save, post_delete, m2m_changed], sender=ProjectPhase)
@receiver([post_save, post_delete, m2m_changed], sender=ProjectPhaseSection)
@receiver([post_save, post_delete, m2m_changed], sender=ProjectPhaseSectionAttribute)
@receiver([post_save, post_delete, m2m_changed], sender=ProjectPhaseFieldSetAttributeIndex)
@receiver([post_save, post_delete, m2m_changed], sender=PhaseAttributeMatrixStructure)
@receiver([post_save, post_delete, m2m_changed], sender=PhaseAttributeMatrixCell)
@receiver([post_save, post_delete, m2m_changed], sender=ProjectPhaseDeadlineSection)
@receiver([post_save, post_delete, m2m_changed], sender=ProjectPhaseDeadlineSectionAttribute)
@receiver([post_save, post_delete, m2m_changed], sender=Deadline)
def delete_cached_sections(*args, **kwargs):
cache.delete("serialized_phase_sections")
cache.delete("serialized_deadline_sections")
| 45.309091 | 104 | 0.822231 | 238 | 2,492 | 8.327731 | 0.193277 | 0.080727 | 0.12109 | 0.181635 | 0.416751 | 0.416751 | 0.402624 | 0.402624 | 0 | 0 | 0 | 0.008807 | 0.088684 | 2,492 | 54 | 105 | 46.148148 | 0.863937 | 0 | 0 | 0 | 0 | 0 | 0.021268 | 0.021268 | 0 | 0 | 0 | 0 | 0 | 1 | 0.02 | true | 0 | 0.1 | 0 | 0.12 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
bc31c5abb67ca5ed658348589d8cf0c78fbd14c8 | 69 | py | Python | consts.py | guocheng2018/ner_bilstm_crf | 603f2995f6a73ebf63bc4416e98ea2656dbe9eaa | [
"MIT"
] | 15 | 2019-08-06T06:35:11.000Z | 2021-11-01T08:41:37.000Z | consts.py | guocheng2018/ner_bilstm_crf | 603f2995f6a73ebf63bc4416e98ea2656dbe9eaa | [
"MIT"
] | null | null | null | consts.py | guocheng2018/ner_bilstm_crf | 603f2995f6a73ebf63bc4416e98ea2656dbe9eaa | [
"MIT"
] | 2 | 2020-12-22T08:08:29.000Z | 2021-01-06T04:35:24.000Z | PAD = "<PAD>"
UNK = "<UNK>"
START_TAG = "<START>"
STOP_TAG = "<STOP>" | 17.25 | 21 | 0.550725 | 10 | 69 | 3.6 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15942 | 69 | 4 | 22 | 17.25 | 0.62069 | 0 | 0 | 0 | 0 | 0 | 0.328571 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
bc3e716f88584abcfa1330b82fd7fc2b65a09e7f | 1,699 | py | Python | steamworks/interfaces/screenshots.py | zxcvbnm3057/SteamworksPy | 7439499316d5989839af2fda677d179c9d3efcf3 | [
"MIT"
] | 69 | 2019-12-27T23:11:14.000Z | 2022-03-31T08:46:42.000Z | steamworks/interfaces/screenshots.py | zxcvbnm3057/SteamworksPy | 7439499316d5989839af2fda677d179c9d3efcf3 | [
"MIT"
] | 33 | 2018-07-08T01:13:01.000Z | 2020-07-12T16:32:45.000Z | steamworks/interfaces/screenshots.py | zxcvbnm3057/SteamworksPy | 7439499316d5989839af2fda677d179c9d3efcf3 | [
"MIT"
] | 15 | 2020-08-12T11:25:52.000Z | 2022-03-24T21:14:04.000Z | from ctypes import *
from enum import Enum
import steamworks.util as util
from steamworks.enums import *
from steamworks.structs import *
from steamworks.exceptions import *
class SteamScreenshots(object):
def __init__(self, steam: object):
self.steam = steam
if not self.steam.loaded():
raise SteamNotLoadedException('STEAMWORKS not yet loaded')
def AddScreenshotToLibrary(self, filename: str, thumbnail_filename: str, width: int, height: int) -> int:
"""Adds a screenshot to the user's Steam screenshot library from disk
:param filename: str
:param thumbnail_filename: str
:param width: int
:param height: int
:return: int
"""
return self.steam.AddScreenshotToLibrary(filename, thumbnail_filename, width, height)
def HookScreenshots(self, hook: bool) -> None:
"""Toggles whether the overlay handles screenshots
:param hook: bool
:return: None
"""
self.steam.HookScreenshots(hook)
def IsScreenshotsHooked(self) -> bool:
"""Checks if the app is hooking screenshots
:return: bool
"""
return self.steam.IsScreenshotsHooked()
def SetLocation(self, screenshot_handle: int, location: str) -> bool:
"""Sets optional metadata about a screenshot's location
:param screenshot_handle: int
:param location: str
:return: bool
"""
return self.steam.SetLocation(screenshot_handle, location)
def TriggerScreenshot(self) -> None:
"""Causes Steam overlay to take a screenshot
:return: None
"""
self.steam.TriggerScreenshot() | 27.852459 | 109 | 0.648028 | 182 | 1,699 | 5.994505 | 0.357143 | 0.065995 | 0.041247 | 0.03483 | 0.04583 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.266039 | 1,699 | 61 | 110 | 27.852459 | 0.8749 | 0.283696 | 0 | 0 | 0 | 0 | 0.023787 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0.285714 | 0 | 0.761905 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
70c949c421401a9f112b800e66d35d39419117a1 | 575 | py | Python | Logic/or_gate.py | FPGA-4-all/FPGA4all_MyHDL | 20ab2b1ff86d7b13cfca947967a9b4618325fbb1 | [
"MIT"
] | null | null | null | Logic/or_gate.py | FPGA-4-all/FPGA4all_MyHDL | 20ab2b1ff86d7b13cfca947967a9b4618325fbb1 | [
"MIT"
] | null | null | null | Logic/or_gate.py | FPGA-4-all/FPGA4all_MyHDL | 20ab2b1ff86d7b13cfca947967a9b4618325fbb1 | [
"MIT"
] | null | null | null | #or gate class
from myhdl import *
class or_gate:
def __init__(self):
self.inputs = 2
def two_input_or(self, a, b, c):
"""Two Input or Gate
a, b -> Data imputs
c -> output
"""
@always_comb
def or_logic():
c.next = a | b
return or_logic
def three_input_or(self, a, b, c, d):
"""Three Input or Gate
a, b, c -> Data imputs
d -> output
"""
@always_comb
def or_logic():
d.next = a | b | c
return or_logic
| 23 | 41 | 0.464348 | 78 | 575 | 3.230769 | 0.333333 | 0.047619 | 0.047619 | 0.095238 | 0.420635 | 0.31746 | 0 | 0 | 0 | 0 | 0 | 0.003067 | 0.433043 | 575 | 24 | 42 | 23.958333 | 0.769939 | 0.236522 | 0 | 0.428571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.357143 | false | 0 | 0.071429 | 0 | 0.642857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
70d099a6a4fbcad43f1c27292a98db8074226432 | 208 | py | Python | 剑指Offer/No_03.py | lih627/python-algorithm-templates | a61fd583e33a769b44ab758990625d3381793768 | [
"MIT"
] | 24 | 2020-03-28T06:10:25.000Z | 2021-11-23T05:01:29.000Z | 剑指Offer/No_03.py | lih627/python-algorithm-templates | a61fd583e33a769b44ab758990625d3381793768 | [
"MIT"
] | null | null | null | 剑指Offer/No_03.py | lih627/python-algorithm-templates | a61fd583e33a769b44ab758990625d3381793768 | [
"MIT"
] | 8 | 2020-05-18T02:43:16.000Z | 2021-05-24T18:11:38.000Z | class Solution:
def findRepeatNumber(self, nums: List[int]) -> int:
s = set()
for _ in nums:
if _ not in s:
s.add(_)
else:
return _
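The same set-based scan, extracted into a plain function for a quick check (the name `find_repeat` and the explicit `None` for a duplicate-free list are my additions):

```python
def find_repeat(nums):
    # Return the first value that appears a second time, scanning
    # left to right; None if every value is unique.
    seen = set()
    for x in nums:
        if x in seen:
            return x
        seen.add(x)
    return None

assert find_repeat([2, 3, 1, 0, 2, 5, 3]) == 2
```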
| 23.111111 | 55 | 0.427885 | 22 | 208 | 3.863636 | 0.772727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.480769 | 208 | 8 | 56 | 26 | 0.787037 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0 | 0 | 0.375 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
70d2fc15278523f867399ed6662cc560956a1341 | 1,117 | py | Python | posts/models.py | JoyMbugua/arifa | f3eaf5d710e537428f756e6f6b9a437337dc31c8 | [
"MIT"
] | 1 | 2021-06-29T11:52:17.000Z | 2021-06-29T11:52:17.000Z | posts/models.py | JoyMbugua/arifa | f3eaf5d710e537428f756e6f6b9a437337dc31c8 | [
"MIT"
] | null | null | null | posts/models.py | JoyMbugua/arifa | f3eaf5d710e537428f756e6f6b9a437337dc31c8 | [
"MIT"
] | null | null | null | from django.db import models
from django.contrib.auth import get_user_model
from django.urls import reverse
from django.contrib.contenttypes.fields import GenericRelation
from comment.models import Comment
class Post(models.Model):
author = models.ForeignKey(get_user_model(),on_delete=models.CASCADE, related_name='posts')
body = models.TextField()
created = models.DateTimeField(auto_now_add=True)
updated = models.DateTimeField(auto_now=True)
comments = GenericRelation(Comment)
likes = models.ManyToManyField(get_user_model(), related_name='posts_liked', blank=True)
def __str__(self):
return self.body[:20]
def get_absolute_url(self):
return reverse('post_detail', args=[str(self.id)])
def get_likes(self):
return self.likes.all().count()
class BlogPost(models.Model):
title = models.CharField(max_length=200)
author = models.CharField(max_length=100)
body = models.TextField()
link = models.URLField()
image_url = models.URLField()
pub_time = models.DateTimeField()
def __str__(self):
return self.title | 31.914286 | 95 | 0.725157 | 143 | 1,117 | 5.461538 | 0.447552 | 0.051216 | 0.046095 | 0.066581 | 0.051216 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008611 | 0.168308 | 1,117 | 35 | 96 | 31.914286 | 0.832078 | 0 | 0 | 0.148148 | 0 | 0 | 0.02415 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.148148 | false | 0 | 0.185185 | 0.148148 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 2 |
70dd16c9cceea76321027e20d4d65436d853aa48 | 1,156 | py | Python | tutorial-3/src/products/models.py | sugengdcahyo/apprentice-neurabot | e30728c8abfa3befed007dc42ca6cd732f3427d7 | [
"MIT"
] | null | null | null | tutorial-3/src/products/models.py | sugengdcahyo/apprentice-neurabot | e30728c8abfa3befed007dc42ca6cd732f3427d7 | [
"MIT"
] | null | null | null | tutorial-3/src/products/models.py | sugengdcahyo/apprentice-neurabot | e30728c8abfa3befed007dc42ca6cd732f3427d7 | [
"MIT"
] | null | null | null | from django.db import models
from django.db.models.fields.related import ForeignKey
from django.contrib.auth.models import User
from django.utils.translation import ugettext_lazy as _
# Create your models here.
class Type(models.Model):
name = models.CharField(max_length=125)
class Meta:
verbose_name = _("Type")
verbose_name_plural = _("Types")
def __str__(self):
return self.name
def get_absolute_url(self):
return reverse("Type_detail", kwargs={"pk": self.pk})
class Product(models.Model):
name = models.CharField(max_length=200)
price = models.IntegerField()
type = ForeignKey(Type, on_delete=models.CASCADE, related_name='product_type')
user = ForeignKey(User, on_delete=models.CASCADE, related_name='product_user')
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
class Meta:
verbose_name = _("Product")
verbose_name_plural = _("Products")
def __str__(self):
return self.name
def get_absolute_url(self):
return reverse("Product_detail", kwargs={"pk": self.pk})
| 28.9 | 82 | 0.699827 | 148 | 1,156 | 5.202703 | 0.391892 | 0.051948 | 0.031169 | 0.054545 | 0.477922 | 0.353247 | 0.353247 | 0.150649 | 0.150649 | 0.150649 | 0 | 0.006431 | 0.192907 | 1,156 | 39 | 83 | 29.641026 | 0.818864 | 0.020761 | 0 | 0.296296 | 0 | 0 | 0.068142 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.148148 | false | 0 | 0.148148 | 0.148148 | 0.851852 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 2 |
70e493944bd970121bbb199b3e39dcfcf87023fb | 175 | py | Python | aoc20201205b.py | BarnabyShearer/aoc | 4feb66c668b068f0f42ad99b916e80732eba5a2d | [
"MIT"
] | null | null | null | aoc20201205b.py | BarnabyShearer/aoc | 4feb66c668b068f0f42ad99b916e80732eba5a2d | [
"MIT"
] | null | null | null | aoc20201205b.py | BarnabyShearer/aoc | 4feb66c668b068f0f42ad99b916e80732eba5a2d | [
"MIT"
] | null | null | null | from aoc20201205a import seats
def aoc(data):
    # Seat IDs in a full plane differ by 1; a jump of 2 in the
    # sorted list means our seat is the one in between.
    expected = 0
    for seat in sorted(seats(data)):
        if seat == expected:
            return seat - 1
        expected = seat + 2
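Equivalently, the gap can be found by zipping the sorted IDs with themselves shifted by one; a standalone sketch with a literal list in place of `seats(data)` (the name `missing_seat` is mine):

```python
def missing_seat(ids):
    # The missing ID sits where two consecutive sorted IDs differ by 2.
    ids = sorted(ids)
    for lo, hi in zip(ids, ids[1:]):
        if hi - lo == 2:
            return lo + 1

assert missing_seat([3, 4, 6, 7]) == 5
```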
| 17.5 | 36 | 0.548571 | 24 | 175 | 4 | 0.708333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.099099 | 0.365714 | 175 | 9 | 37 | 19.444444 | 0.765766 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.142857 | 0 | 0.428571 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
70f1547bda581751b8d2bb9f6d47b8a4a7d644be | 500 | py | Python | plugins/redis/komand_redis/actions/delete/action.py | lukaszlaszuk/insightconnect-plugins | 8c6ce323bfbb12c55f8b5a9c08975d25eb9f8892 | [
"MIT"
] | 46 | 2019-06-05T20:47:58.000Z | 2022-03-29T10:18:01.000Z | plugins/redis/komand_redis/actions/delete/action.py | lukaszlaszuk/insightconnect-plugins | 8c6ce323bfbb12c55f8b5a9c08975d25eb9f8892 | [
"MIT"
] | 386 | 2019-06-07T20:20:39.000Z | 2022-03-30T17:35:01.000Z | plugins/redis/komand_redis/actions/delete/action.py | lukaszlaszuk/insightconnect-plugins | 8c6ce323bfbb12c55f8b5a9c08975d25eb9f8892 | [
"MIT"
] | 43 | 2019-07-09T14:13:58.000Z | 2022-03-28T12:04:46.000Z | import komand
from .schema import DeleteInput, DeleteOutput
# Custom imports below
class Delete(komand.Action):
def __init__(self):
super(self.__class__, self).__init__(
name="delete", description="Delete", input=DeleteInput(), output=DeleteOutput()
)
def run(self, params={}):
"""Run action"""
count = self.connection.redis.delete(params["key"])
return {"count": count}
def test(self):
"""Test action"""
return {}
| 23.809524 | 91 | 0.61 | 52 | 500 | 5.634615 | 0.538462 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.248 | 500 | 20 | 92 | 25 | 0.779255 | 0.088 | 0 | 0 | 0 | 0 | 0.044944 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.166667 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
cb0a243af905b60cd61fb527c399e552b9bc1f26 | 30,390 | py | Python | pdbremix/gromacs.py | tcardlab/pdbremix | bc7afca41da08f6166e518976b1c6dc40e35d78e | [
"MIT"
] | 50 | 2015-01-23T00:05:20.000Z | 2022-03-09T04:11:31.000Z | pdbremix/gromacs.py | tcardlab/pdbremix | bc7afca41da08f6166e518976b1c6dc40e35d78e | [
"MIT"
] | 7 | 2015-02-18T08:14:30.000Z | 2018-05-17T02:01:58.000Z | pdbremix/gromacs.py | tcardlab/pdbremix | bc7afca41da08f6166e518976b1c6dc40e35d78e | [
"MIT"
] | 27 | 2015-01-15T04:10:30.000Z | 2021-10-11T17:56:41.000Z | # encoding: utf-8
__doc__ = """
Interface to the GROMACS molecular-dynamics package.
The library is split into four sections:
1. Read and write restart files
2. Generate restart files from PDB
3. Run simulations from restart files
4. Read trajectories with some post-processing
Copyright (C) 2009, 2014, Bosco K. Ho
"""
import os
import shutil
import copy
import glob
import xdrlib
import util
import pdbatoms
import v3
import data
import pdbtext
import protein
# ##########################################################
# 1. Reading and writing restart files
# In PDBREMIX, restart files for GROMACS are assumed to
# have the naming scheme:
# 1. topology file: sim.top
# 2. coordinate/velocity file: sim.gro
# Parsers have been written to read .top and .gro files into
# Python structures, and to write these back into .top and .gro
# files, and to convert them into .pdb files
# The units used in GROMACS are:
# - positions: nanometers
# - velocities: nanometers/picosecond
# - force constant: kJ/mol/nm^2
# which will be converted into
# - angstroms
# - picoseconds
# - kilocalories
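The 10x and 0.1x multipliers used throughout the parsers below follow directly from the unit table above; a minimal sketch of the nanometre/angstrom round trip (the helper names are mine, not part of this module):

```python
NM_TO_ANGSTROM = 10.0

def nm_to_ang(x_nm):
    # .gro files store nm; PDBREMIX works in angstroms.
    return x_nm * NM_TO_ANGSTROM

def ang_to_nm(x_ang):
    return x_ang / NM_TO_ANGSTROM

assert nm_to_ang(0.1) == 1.0
assert abs(ang_to_nm(nm_to_ang(1.234)) - 1.234) < 1e-12
```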
def read_top(top, chain=" "):
  """
  Returns a list of (mass, charge, chain_id) for the atoms
  in the topology file, recursing into included chain .itp
  files to recover each atom's chain id.
  """
  util.check_output(top)
  lines = open(top).readlines()
  atoms = []
  is_chain_topologies = False
  top_dir = os.path.dirname(top)
for l in lines:
if not is_chain_topologies:
if 'chain topologies' in l:
is_chain_topologies = True
continue
if l.startswith("#include"):
itp = l.split()[1][1:-1]
itp = os.path.join(top_dir, itp)
if os.path.isfile(itp):
full_chain_name = os.path.splitext(itp)[0]
chain = full_chain_name.split('_')[-1]
these_atoms = read_top(itp, chain)
atoms.extend(these_atoms)
if l.startswith(";"):
break
  is_atoms = False
for l in lines:
if not is_atoms:
if '[ atoms ]' in l:
is_atoms = True
continue
if l.startswith('['):
break
if l.startswith(";"):
continue
if not l.strip():
continue
words = l.split()
n = int(words[0])
res_num = int(words[2])
res_type = words[3]
q = float(words[6])
mass = float(words[7])
atoms.append((mass, q, chain))
return atoms
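read_top picks mass and charge out of columns 7 and 8 of each `[ atoms ]` line; a standalone sketch against a hypothetical line (the values are invented for illustration):

```python
# Typical [ atoms ] line:  nr  type  resnr  residue  atom  cgnr  charge  mass
line = "  1  N3  1  MET  N  1  0.1592  14.01"
words = line.split()
res_num = int(words[2])
res_type = words[3]
q = float(words[6])
mass = float(words[7])
assert (q, mass) == (0.1592, 14.01)
```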
def AtomFromGroLine(line):
"""
Returns an Atom object from a .gro atom line.
"""
atom = pdbatoms.Atom()
atom.res_num = int(line[0:5])
atom.res_type = line[5:8].strip()
atom.type = line[10:15].strip(" ")
atom.element = data.guess_element(
atom.res_type, line[12:15])
atom.num = int(line[15:20])
# 10 x multiplier converts from nm to angstroms
x = 10.0*float(line[20:28])
y = 10.0*float(line[28:36])
z = 10.0*float(line[36:44])
v3.set_vector(atom.pos, x, y, z)
if len(line) > 62:
# 10 x multiplier converts from nm to angstroms
x = 10.0*float(line[44:52])
y = 10.0*float(line[52:60])
z = 10.0*float(line[60:68])
v3.set_vector(atom.vel, x, y, z)
return atom
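AtomFromGroLine depends on the fixed-column .gro layout; a self-contained sketch that builds a line with the same format string write_soup_to_gro uses, then slices it back the same way (the values are hypothetical):

```python
# Round-trip one .gro atom line: fixed-width write, fixed-column read.
line = "%5d%-5s%5s%5d%8.3f%8.3f%8.3f" % (1, "ALA", "N", 1, 0.100, 0.200, 0.300)
res_num = int(line[0:5])
res_type = line[5:8].strip()
atom_type = line[10:15].strip()
x = 10.0 * float(line[20:28])  # nm -> angstrom
assert (res_num, res_type, atom_type) == (1, "ALA", "N")
assert abs(x - 1.0) < 1e-9
```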
def convert_to_pdb_atom_names(soup):
"""
For the soup structure, converts residue
atom names peculiar to .gro into standard PDB names
for interacting with other systems.
"""
for res in soup.residues():
if res.type == "ILE":
if res.has_atom("CD"):
res.change_atom_type("CD", "CD1")
if res.has_atom("OC2"):
res.change_atom_type("OC2", "OXT")
if res.has_atom("OC1"):
res.change_atom_type("OC1", "O")
if res.type == "SOL":
res.set_type("HOH")
res.change_atom_type("HW1", "1H")
res.change_atom_type("HW2", "2H")
res.change_atom_type("OW", "O")
if res.type in data.solvent_res_types:
for a in res.atoms():
a.is_hetatm = True
for atom in res.atoms():
if atom.type[-1].isdigit() and atom.type[0] == "H":
new_atom_type = atom.type[-1] + atom.type[:-1]
res.change_atom_type(atom.type, new_atom_type)
def convert_to_gromacs_atom_names(soup):
"""
For writing into .gro files, converts PDB atom names
into GROMACS specific atom names.
"""
for res in soup.residues():
if res.type == "ILE":
if res.has_atom("CD1"):
res.change_atom_type("CD1", "CD")
if res.has_atom("OXT"):
res.change_atom_type("OXT", "OC2")
if res.has_atom("O"):
res.change_atom_type("O", "OC1")
if res.type == "HOH":
res.set_type("SOL")
res.change_atom_type("1H", "HW1")
res.change_atom_type("2H", "HW2")
res.change_atom_type("O", "OW")
if res.type in data.solvent_res_types:
for a in res.atoms():
a.is_hetatm = False
for atom in res.atoms():
if atom.type[0].isdigit() and atom.type[1] == "H":
new_atom_type = atom.type[1:] + atom.type[0]
res.change_atom_type(atom.type, new_atom_type)
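The two converters above rotate the digit in hydrogen atom names between PDB style ("2HB") and GROMACS style ("HB2"); a standalone sketch of just that rotation, showing the two transforms are mutual inverses (the function names are mine):

```python
def pdb_to_gro(name):
    # PDB "2HB" -> GROMACS "HB2"
    if name[0].isdigit() and name[1] == "H":
        return name[1:] + name[0]
    return name

def gro_to_pdb(name):
    # GROMACS "HB2" -> PDB "2HB"
    if name[-1].isdigit() and name[0] == "H":
        return name[-1] + name[:-1]
    return name

assert pdb_to_gro("2HB") == "HB2"
assert gro_to_pdb("HB2") == "2HB"
assert pdb_to_gro("CA") == "CA"  # non-hydrogen names untouched
```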
def soup_from_top_gro(top, gro, skip_solvent=False):
"""
Returns a Soup built from GROMACS restart files.
If skip_solvent=True, will skip all solvent molecules.
"""
util.check_output(top)
util.check_output(gro)
soup = pdbatoms.Soup()
soup.remaining_text = ""
soup.n_remaining_text = 0
atoms = []
# Read from .gro because .top does not contain water
# residue information, which is "inferred"
lines = open(gro, 'r').readlines()
for i_line, line in enumerate(lines[2:-1]):
atom = AtomFromGroLine(line)
if skip_solvent and atom.res_type == "SOL":
soup.remaining_text = "".join(lines[i_line+2:-1])
soup.n_remaining_text = len(lines[i_line+2:-1])
break
atoms.append(atom)
soup.box = [float(w) for w in lines[-1].split()]
for atom, (mass, q, chain_id) in zip(atoms, read_top(top)):
atom.mass = mass
atom.charge = q
curr_res_num = -1
for a in atoms:
if curr_res_num != a.res_num:
res = pdbatoms.Residue(
a.res_type, a.chain_id, a.res_num)
soup.append_residue(res.copy())
curr_res_num = a.res_num
soup.insert_atom(-1, a)
convert_to_pdb_atom_names(soup)
protein.find_chains(soup)
return soup
def write_soup_to_gro(in_soup, gro):
soup = in_soup.copy()
convert_to_gromacs_atom_names(soup)
f = open(gro, 'w')
f.write("Generated by gromacs.py\n")
atoms = soup.atoms()
n_atom = len(atoms) + soup.n_remaining_text
f.write(str(n_atom) + '\n')
for a in atoms:
# GRO doesn't care about numbering so wrap when full
res_num = a.res_num % 100000
atom_num = a.num % 100000
# 0.1 x multiplier converts from angs back to nm
s = "%5d%-5s%5s%5d%8.3f%8.3f%8.3f%8.4f%8.4f%8.4f\n" % \
(res_num, a.res_type, a.type, atom_num,
a.pos[0]*0.1, a.pos[1]*0.1, a.pos[2]*0.1,
a.vel[0]*0.1, a.vel[1]*0.1, a.vel[2]*0.1)
f.write(s)
if soup.remaining_text:
f.write(soup.remaining_text)
f.write("%10.5f%10.5f%10.5f\n" % (soup.box[0], soup.box[1], soup.box[2]))
f.close()
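`write_soup_to_gro` relies on a fixed-width `%`-format for each atom record, converting angstroms to nm on the way out. A self-contained sketch of that one line (the function name is ours):

```python
def format_gro_atom_line(res_num, res_type, atom_type, atom_num, pos, vel):
    # .gro is a fixed-width format: residue number/name, atom name/number,
    # then position (nm) and velocity (nm/ps). pos/vel arrive in angstroms
    # here, so multiply by 0.1, and wrap the serial numbers at 100000
    # because those fields are only 5 columns wide.
    return "%5d%-5s%5s%5d%8.3f%8.3f%8.3f%8.4f%8.4f%8.4f\n" % (
        res_num % 100000, res_type, atom_type, atom_num % 100000,
        pos[0] * 0.1, pos[1] * 0.1, pos[2] * 0.1,
        vel[0] * 0.1, vel[1] * 0.1, vel[2] * 0.1)
```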
# The following functions wrap the above functions into a
# standard API that does not explicitly reference GROMACS
def expand_restart_files(basename):
"""Returns expanded restart files based on basename"""
top = os.path.abspath(basename + '.top')
crds = os.path.abspath(basename + '.gro')
vels = '' # dummy file for cross-package interface
return top, crds, vels
def get_restart_files(basename):
"""Returns restart files only if they exist"""
top, crds, vels = expand_restart_files(basename)
util.check_files(top, crds)
return top, crds, vels
def soup_from_restart_files(top, crds, vels, skip_solvent=False):
"""Reads pdbatoms.Soup object from restart files."""
return soup_from_top_gro(top, crds, skip_solvent)
def write_soup_to_crds_and_vels(soup, basename):
"""From soup, writes out the coordinate/velocities, used for pulsing"""
write_soup_to_gro(soup, basename + '.gro')
return basename + '.gro', ''
def convert_restart_to_pdb(basename, pdb):
"""Converts restart files with basename into PDB file"""
top, crds, vels = get_restart_files(basename)
soup = soup_from_restart_files(top, crds, vels)
soup.write_pdb(pdb)
# ##########################################################
# # 2. Generate restart files from PDB
# The restart files used for PDBREMIX assume a consistent file-naming scheme.
# For a given basename `sim`, the files are:
# 1. topology file: sim.top
# 2. coordinate/velocity file: sim.gro
# 3. restraint file: sim.pos_re.itp
# To generate a topology file from the PDB file:
# - handles multiple protein chains
# - hydrogens are removed and then regenerated by GROMACS
# - disulfide bonds are auto-detected
# - charged residue protonation states are auto-detected
# - explicit water in cubic box with 10.0 angstrom buffer
# - counterions to neutralize the system
# - GROMACS4.5: AMBER99 force-field
# - GROMACS4.0: GROMOS96 force-field
# Binaries used to generate restart files:
# 1. pdb2gmx - create topologies for atoms in PDB
# 2. editconf - define cubic box
# 3. genbox - add explicit water molecules
# 4. grompp - build .tpr file for energy terms
# 5. genion - add counterions based on .tpr file
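The five binaries above run in a fixed order. As an illustrative sketch of how the command lines chain together (in the real module `data.binary()` locates and runs these, and the exact flags below are assumptions, not the module's literal arguments):

```python
def solvate_commands(basename, ff="amber99", buffer_nm=1.0):
    # Sketch of the five-step tool chain: each entry is (binary, args).
    pdb, gro, top = basename + ".clean.pdb", basename + ".gro", basename + ".top"
    return [
        ("pdb2gmx", "-ignh -ff %s -water spc -f %s -p %s" % (ff, pdb, top)),
        ("editconf", "-f %s -c -d %f -bt cubic" % (gro, buffer_nm)),
        ("genbox", "-cp %s -cs spc216.gro -p %s" % (gro, top)),
        ("grompp", "-c %s -p %s -o %s.tpr" % (gro, top, basename)),
        ("genion", "-s %s.tpr -o %s -p %s -neutral" % (basename, gro, top)),
    ]
```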
def delete_backup_files(tag):
util.clean_fname(*util.re_glob('*', '^#' + tag))
force_field_mdp = """
; Bond parameters
constraints = hbonds ; bonds from heavy atom to H, constrained
continuation = yes ; continuing from a previous run
; constraints = all-bonds ; all bonds (even heavy atom-H bonds) constrained
; constraint-algorithm = lincs ; holonomic constraints
; lincs_iter = 1 ; accuracy of LINCS
; lincs_order = 4 ; also related to accuracy
; Neighborsearching
ns_type = grid ; search neighboring grid cells
nstlist = 5 ; 10 fs
rlist = 1.0 ; short-range neighborlist cutoff (in nm)
; Periodic boundary conditions
pbc = xyz ; 3-dimensional periodic boundary conditions (xyz|no)
; Electrostatics
coulombtype = PME ; Particle Mesh Ewald for long-range electrostatics
pme_order = 4 ; cubic interpolation
fourierspacing = 0.16 ; grid spacing for FFT
rcoulomb = 1.0 ; short-range electrostatic cutoff (in nm)
rvdw = 1.0 ; short-range van der Waals cutoff (in nm)
; Add also refcoord-scaling to work with position restraints and pressure coupling
refcoord-scaling = all
; Dispersion correction
DispCorr = EnerPres ; account for cut-off vdW scheme
"""
ions_mdp = """
; ions.mdp - used as input into grompp to generate ions.tpr
; Parameters describing what to do, when to stop and what to save
integrator = steep ; Algorithm (steep = steepest descent minimization)
emtol = 1000.0 ; Stop minimization when the maximum force < 1000.0 kJ/mol/nm
emstep = 0.01 ; Energy step size
nsteps = 50000 ; Maximum number of (minimization) steps to perform
"""
def neutralize_system_with_salt(
in_top, in_gro, basename, force_field):
"""
Takes a .top file and adds counterions to neutralize the overall
charge of system, and saves to `basename.gro`.
"""
  # Calculate the overall charge in the .top file
qtot = sum([q for mass, q, chain in read_top(in_top)])
counter_ion_charge = -int(round(qtot))
if counter_ion_charge == 0:
shutil.copy(in_gro, basename + '.gro')
return
  # Create a .tpr parameter file for genion to find low-energy sites
in_mdp = basename + '.salt.grompp.mdp'
open(in_mdp, 'w').write(ions_mdp + force_field_mdp)
top = basename + '.top'
if in_top != top:
shutil.copy(in_top, top)
tpr = basename + '.salt.tpr'
out_mdp = basename + '.mdp'
data.binary(
'grompp',
'-f %s -po %s -c %s -p %s -o %s' \
% (in_mdp, out_mdp, in_gro, top, tpr),
basename + '.salt.grompp')
util.check_files(tpr)
# Use genion to generate a gro of system with counterions
gro = basename + '.gro'
# Genion requires user input "SOL" to choose solvent for replacement
input_fname = basename + '.salt.genion.in'
open(input_fname, 'w').write('SOL')
# Different versions of Gromacs use different counterions
charge_str = ""
if 'GROMACS4.5' in force_field:
charge_str = " -pname NA -nname CL "
elif 'GROMACS4.0' in force_field:
charge_str = " -pname NA+ -nname CL- "
else:
    raise ValueError("Cannot recognize force_field " + force_field)
if counter_ion_charge > 0:
charge_str += " -np %d " % counter_ion_charge
else:
charge_str += " -nn %d " % abs(counter_ion_charge)
log = basename + '.salt.genion.log'
data.binary(
'genion',
'-g %s -s %s -o %s -p %s -neutral %s' % \
(log, tpr, gro, top, charge_str),
basename + '.salt.genion',
input_fname)
util.check_files(gro)
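The counterion bookkeeping above reduces to one small rule: round the total topology charge to an integer and add that many ions of the opposite sign, with ion names that differ by GROMACS version. A standalone sketch mirroring that logic (the function name is ours):

```python
def genion_charge_args(qtot, force_field):
    # Round the total charge to the nearest integer and request that
    # many counterions of the opposite sign; GROMACS 4.5 uses NA/CL,
    # GROMACS 4.0 uses NA+/CL-.
    counter_ion_charge = -int(round(qtot))
    if counter_ion_charge == 0:
        return ""
    if "GROMACS4.5" in force_field:
        args = "-pname NA -nname CL"
    elif "GROMACS4.0" in force_field:
        args = "-pname NA+ -nname CL-"
    else:
        raise ValueError("Cannot recognize force_field " + force_field)
    if counter_ion_charge > 0:
        return args + " -np %d" % counter_ion_charge
    return args + " -nn %d" % abs(counter_ion_charge)
```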
def pdb_to_top_and_crds(force_field, pdb, basename, solvent_buffer=10):
"""
Converts a PDB file into GROMACS topology and coordinate files,
  and a fully converted PDB file. These constitute the restart files
of a GROMACS simulation.
"""
util.check_files(pdb)
full_pdb = os.path.abspath(pdb)
save_dir = os.getcwd()
# All intermediate files placed into a subdirectory
util.goto_dir(basename + '.solvate')
# Remove all but protein heavy atoms in a single clean conformation
pdb = basename + '.clean.pdb'
pdbtext.clean_pdb(full_pdb, pdb)
# Generate protein topology in pdb2gmx_gro using pdb2gmx
pdb2gmx_gro = basename + '.pdb2gmx.gro'
top = basename + '.top'
itp = basename + '_posre.itp'
# Choose force field based on GROMACS version
if 'GROMACS4.5' in force_field:
ff = 'amber99'
elif 'GROMACS4.0' in force_field:
ff = 'G43a1'
else:
    raise ValueError("Couldn't work out pdb2gmx for " + force_field)
args = '-ignh -ff %s -water spc -missing -f %s -o %s -p %s -i %s -chainsep id_or_ter -merge all' \
% (ff, pdb, pdb2gmx_gro, top, itp)
data.binary('pdb2gmx', args, basename+'.pdb2gmx')
util.check_files(pdb2gmx_gro)
# Now add a box with editconf
box_gro = basename + '.box.gro'
solvent_buffer_in_nm = solvent_buffer/10.0
data.binary(
'editconf',
'-f %s -o %s -c -d %f -bt cubic' \
% (pdb2gmx_gro, box_gro, solvent_buffer_in_nm),
basename+'.box')
util.check_files(box_gro)
  # Given box dimensions, can now populate with explicit waters
solvated_gro = basename + '.solvated.gro'
data.binary(
'genbox',
'-cp %s -cs spc216.gro -o %s -p %s' \
% (box_gro, solvated_gro, top),
'%s.solvated' % basename)
util.check_files(solvated_gro)
# Neutralize with counterions using genion to place ions
# based on energy parameters processed by grompp
gro = basename + '.gro'
neutralize_system_with_salt(top, solvated_gro, basename, force_field)
util.check_files(gro)
# Make a reference PDB file from restart files for viewing and restraints
convert_restart_to_pdb(basename, basename+'.pdb')
# Copy finished restart files back into original directory
fnames = util.re_glob(
'*', os.path.basename(basename) + r'[^\.]*\.(pdb|itp|gro|mdp|top)$')
for fname in fnames:
shutil.copy(fname, save_dir)
# Cleanup
delete_backup_files(basename)
os.chdir(save_dir)
return top, gro
# ##########################################################
# # 3. Run simulations from restart files
# Simulation approach:
# - cubic periodic box
# - optional positional restraints: 100 kcal/mol/angs**2
# - PME electrostatics on the periodic box
# - Langevin thermostat for constant temperature
# - Parrinello-Rahman barostat with flexible periodic box size
# - constraints on hydrogen atoms bonded to heavy atoms
# Binaries used:
# 1. grompp - process topology files for .tpr file
# 2. mdrun - run MD using .tpr on the .gro file
# mpi version - mpiexec -np 8 /...dir.../gromacs-4.0.7/bin/mdrun
# Files for trajectories:
# 1. coordinate trajectory: md.trr
# 2. restart coordinate/velocity: md.gro
minimization_parms = {
'topology' : 'in.top',
'input_crds' : 'in.gro',
'output_basename' : 'min',
'force_field': 'GROMACS',
'restraint_pdb': '',
'restraint_force': 100.0,
'n_step_minimization' : 100,
}
constant_energy_parms = {
'topology' : 'in.top',
'input_crds' : 'in.gro',
'output_basename' : 'md',
'force_field': 'GROMACS',
'solvent_state': 2,
'surface_area': 1,
'restraint_pdb': '',
'restraint_force': 100.0,
'n_step_per_snapshot' : 50,
'n_step_dynamics' : 1000,
}
langevin_thermometer_parms = {
'topology' : 'in.top',
'input_crds' : 'in.gro',
'output_basename' : 'md',
'force_field': 'GROMACS',
'restraint_pdb': '',
'restraint_force': 100.0,
'random_seed' : 2342,
'temperature_thermometer' : 300.0,
'temperature_initial_velocities': 0.0, # ignored if it is 0.0
'n_step_per_snapshot' : 50,
'n_step_dynamics' : 1000,
}
minimization_mdp = """
; template .mdp file used as input into grompp to generate energy minimization for mdrun
; Parameters describing what to do, when to stop and what to save
integrator = steep ; Algorithm (steep = steepest descent minimization)
emtol = 1000.0 ; Stop minimization when the maximum force < 1000.0 kJ/mol/nm
emstep = 0.01 ; Energy step size
nsteps = %(n_step_minimization)s ; Maximum number of (minimization) steps to perform
"""
dynamics_mdp = """
title = Template for constant temperature/pressure
; Run parameters
integrator = md ; leap-frog integrator
nsteps = %(n_step_dynamics)s ; time = n_step_dynamics*dt
dt = 0.001 ; time-step in ps (1 fs)
; Output control
nstxout = %(n_step_per_snapshot)s ; save coordinates every 0.05 ps
nstvout = %(n_step_per_snapshot)s ; save velocities every 0.05 ps
nstenergy = %(n_step_per_snapshot)s ; save energies every 0.05 ps
nstlog = %(n_step_per_snapshot)s ; update log file every 0.05 ps
; Pressure coupling is on
pcoupl = Parrinello-Rahman ; Pressure coupling on in NPT
pcoupltype = isotropic ; uniform scaling of box vectors
tau_p = 2.0 ; time constant, in ps
ref_p = 1.0 ; reference pressure, in bar
compressibility = 4.5e-5 ; isothermal compressibility of water, bar^-1
"""
temp_mdp = """
; Temperature coupling is on
tcoupl = V-rescale ; modified Berendsen thermostat
tc-grps = Protein Non-Protein ; two coupling groups - more accurate
tau_t = 0.1 0.1 ; time constant, in ps
ref_t = %(temperature_thermometer)s %(temperature_thermometer)s ; reference temperature, one for each group, in K
ld-seed = %(random_seed)s
"""
vel_mdp = """
; Velocity generation
gen_vel = yes ; assign velocities from Maxwell distribution
gen_temp = %(temperature_initial_velocities)s ; temperature for Maxwell distribution
gen_seed = -1 ; generate a random seed
"""
restraint_mdp = """
; Restraints turned on
define = -DPOSRES ; position restrain the protein
"""
def make_mdp(parms):
"""
  Using GROMACS, we will always solvate, use periodic
  boundaries and Particle Mesh Ewald.
"""
if 'n_step_minimization' in parms:
mdp = minimization_mdp
else:
mdp = dynamics_mdp
if 'temperature_thermometer' in parms:
mdp += temp_mdp
else:
mdp += "; Temperature coupling is off\n"
mdp += "tcoupl = no\n"
if 'temperature_initial_velocities' in parms and parms['temperature_initial_velocities'] > 0.0:
mdp += vel_mdp
else:
mdp += "; Velocity generation\n"
mdp += "gen_vel = no ; Velocity generation is off \n"
if 'restraint_pdb' in parms and parms['restraint_pdb']:
mdp += restraint_mdp
mdp += force_field_mdp
return mdp % parms
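`make_mdp` works because the concatenated templates end with `mdp % parms`: Python's %-style dict substitution fills every `%(key)s` placeholder from the parms dictionary. A minimal illustration:

```python
# A tiny stand-in for the .mdp templates above; the real templates
# concatenate several such strings before substituting.
template = (
    "nsteps = %(n_step_dynamics)s\n"
    "nstxout = %(n_step_per_snapshot)s\n")
parms = {"n_step_dynamics": 1000, "n_step_per_snapshot": 50}
mdp_text = template % parms
```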
def replace_include_file(chain, r1, r2):
lines = open(chain, 'r').readlines()
new_lines = []
for l in lines:
if l.startswith('#include'):
new_lines.append(l.replace(r1, r2))
else:
new_lines.append(l)
open(chain, 'w').write(''.join(new_lines))
restraint_header = """
; In this topology include file, you will find position restraint
; entries for all the heavy atoms in your original pdb file.
; This means that all the protons which were added by pdb2gmx are
; not restrained.
[ position_restraints ]
; atom type fx fy fz
"""
def make_restraint_itp(restraint_pdb, force):
txt = restraint_header
atoms = pdbatoms.Soup(restraint_pdb).atoms()
for i, atom in enumerate(atoms):
if atom.bfactor > 0.0:
txt += "%6s 1 %5.f %5.f %5.f\n" % (i+1, force, force, force)
return txt
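The restraint force handed to `make_restraint_itp` is converted from kcal/mol/Å² to GROMACS's kJ/mol/nm² in `run()` below. The exact factor follows from 1 kcal = 4.184 kJ and 1 Å⁻² = 100 nm⁻², i.e. 418.4 (names below are ours):

```python
KCAL_PER_MOL_ANG2_TO_KJ_PER_MOL_NM2 = 4.184 * 10.0 ** 2  # = 418.4

def restraint_force_kj(force_kcal_per_mol_ang2):
    # A restraint of k kcal/mol/A**2 becomes k * 418.4 kJ/mol/nm**2,
    # since 1 kcal = 4.184 kJ and 1 A**-2 = 100 nm**-2.
    return force_kcal_per_mol_ang2 * KCAL_PER_MOL_ANG2_TO_KJ_PER_MOL_NM2
```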
def run(in_parms):
"""
  Run a GROMACS simulation using the PDBREMIX parms dictionary.
"""
parms = copy.deepcopy(in_parms)
basename = parms['output_basename']
# Copies across topology and related *.itp files, with appropriate
# filename renaming in #includes
top = basename + '.top'
in_top = parms['topology']
shutil.copy(in_top, top)
in_name = os.path.basename(in_top).replace('.top', '')
in_dir = os.path.dirname(in_top)
file_tag = "%s/%s_*itp" % (in_dir, in_name)
new_files = [top]
for f in glob.glob(file_tag):
new_f = os.path.basename(f)
new_f = new_f.replace(in_name, basename)
shutil.copy(f, new_f)
new_files.append(new_f)
for f in new_files:
replace_include_file(f, in_name + "_", basename + "_")
# Copy over input coordinates/velocities
in_gro = basename + '.in.gro'
shutil.copy(parms['input_crds'], in_gro)
  # Generate a positional-restraint topology file
if parms['restraint_pdb']:
    # 1 kcal/mol/A**2 = 4.184 kJ/mol/(0.1 nm)**2 = 418.4 kJ/mol/nm**2
    kcalmolang2_to_kJmolnm2 = 418.4
open(basename + '_posre.itp', 'w').write(
make_restraint_itp(
parms['restraint_pdb'],
parms['restraint_force'] * kcalmolang2_to_kJmolnm2))
# Generate .mdp file based on parms
in_mdp = basename + '.grompp.mdp'
open(in_mdp, 'w').write(make_mdp(parms))
# Now run .grompp to generate this .tpr file
tpr = basename + '.tpr'
# .mdp to save complete set of parameters
mdp = basename + '.mdrun.mdp'
data.binary(
'grompp',
'-f %s -po %s -c %s -p %s -o %s' \
% (in_mdp, mdp, in_gro, top, tpr),
basename + '.grompp')
util.check_files(tpr)
# Run simulation with the .tpr file
data.binary(
'mdrun',
'-v -deffnm %s' % (basename),
basename + '.mdrun')
top, crds, vels = get_restart_files(basename)
util.check_output(top)
util.check_output(crds)
# Cleanup
delete_backup_files(basename)
# ##########################################################
# # 4. Read trajectories with some post-processing
# The units used in these files are:
# - positions: nanometers
# - velocities: nanometers/picosecond
# - force constant: kJ/mol/nm^2
n_dim = 3
class TrrReader:
"""
Class to read the coordinates of a GROMACS .trr file.
Attributes:
trr (str) - name of trajectory file
file (file) - file object to trajectory
n_atom (int) - number of atoms simulated
n_frame (int) - number of frames in trajectory
i_frame (int) - index of current frame
frame (array) - container of coordinates of current frame
Methods:
__init__ - initializes trajectory and loads 1st frame
load_frame(i) - loads the i'th frame
__getitem__ - returns the frame
save_to_crd - save current frame to a .crd file
__repr__ - string representation
"""
def __init__(self, trr):
self.trr = trr
self.file = open(self.trr, 'r')
self.read_header()
self.calc_precision()
self.calc_frame_info()
self.i_frame = None
self.load_frame(0)
def read_header(self):
self.u = xdrlib.Unpacker(self.file.read(200))
self.magic = self.u.unpack_int()
self.version = self.u.unpack_string()
self.size_ir = self.u.unpack_int()
self.size_e = self.u.unpack_int()
self.size_box = self.u.unpack_int()
self.size_vir = self.u.unpack_int()
self.size_pres = self.u.unpack_int()
self.size_top = self.u.unpack_int()
self.size_sym = self.u.unpack_int()
self.size_x = self.u.unpack_int()
self.size_v = self.u.unpack_int()
self.size_f = self.u.unpack_int()
self.n_atom = self.u.unpack_int()
self.step = self.u.unpack_int()
self.nre = self.u.unpack_int()
self.t = self.u.unpack_float()
self.lamb = self.u.unpack_float()
self.pos_after_header = self.u.get_position()
def calc_precision(self):
    "Sets self.precision to 4 for single precision, 8 for double precision"
if self.size_box:
self.precision = self.size_box/n_dim/n_dim
elif self.size_x:
self.precision = self.size_x/n_dim
elif self.size_v:
self.precision = self.size_v/n_dim
elif self.size_f:
self.precision = self.size_f/n_dim
else:
      raise ValueError("Cannot determine precision")
if self.precision not in [4, 8]:
      raise ValueError("Precision not single or double!")
def calc_frame_info(self):
"""Given header info and precision, can calculate frame & n_frame"""
n_vec = 0
if self.size_box: n_vec += n_dim
if self.size_vir: n_vec += n_dim
if self.size_pres: n_vec += n_dim
if self.size_x: n_vec += self.n_atom
if self.size_v: n_vec += self.n_atom
if self.size_f: n_vec += self.n_atom
self.size_frame = n_vec*n_dim*self.precision + self.pos_after_header
# Calculates n_frame from end of file
self.file.seek(0, 2)
size_file = self.file.tell()
self.n_frame = size_file / self.size_frame
def __repr__(self):
return "< Gromacs TRR Coord file %s with %d frames of %d atoms >" % \
      (self.trr, self.n_frame, self.n_atom)
def next_3_reals(self):
if self.precision == 4:
return [self.u.unpack_float() for i in range(3)]
if self.precision == 8:
return [self.u.unpack_double() for i in range(3)]
def load_frame(self, i_frame):
if i_frame < - 1*self.n_frame or i_frame >= self.n_frame:
raise IndexError
if i_frame < 0:
i_frame = self.n_frame + i_frame
if i_frame == self.i_frame:
return
self.file.seek(self.pos_after_header + i_frame*self.size_frame)
self.u = xdrlib.Unpacker(self.file.read(self.size_frame))
box, positions, velocities, forces = None, None, None, None
if self.size_box:
box = [self.next_3_reals() for i in range(n_dim)]
if self.size_vir:
dummy = [self.next_3_reals() for i in range(n_dim)]
if self.size_pres:
dummy = [self.next_3_reals() for i in range(n_dim)]
if self.size_x:
positions = [self.next_3_reals() for i in range(self.n_atom)]
if self.size_v:
velocities = [self.next_3_reals() for i in range(self.n_atom)]
if self.size_f:
forces = [self.next_3_reals() for i in range(self.n_atom)]
self.frame = box, positions, velocities, forces
self.i_frame = i_frame
def __getitem__(self, i_frame):
self.load_frame(i_frame)
return self.frame
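`calc_frame_info` above turns the header sizes into a fixed byte count per frame, which is what makes random access by `seek()` possible. The arithmetic in isolation (function name is ours; box/virial/pressure blocks are simplified to a single `has_box` flag here):

```python
def trr_frame_size(header_size, precision, n_atom,
                   has_box, has_x, has_v, has_f, n_dim=3):
    # Every frame repeats the header, then packs each present block as
    # n_dim-component vectors of `precision`-byte reals: one n_dim x n_dim
    # box plus one vector per atom for positions/velocities/forces.
    n_vec = 0
    if has_box:
        n_vec += n_dim
    for present in (has_x, has_v, has_f):
        if present:
            n_vec += n_atom
    return header_size + n_vec * n_dim * precision
```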
class SoupTrajectory():
"""
  Class to interact with a GROMACS trajectory using soup.
Attributes:
soup (Soup) - Soup object holding current coordinates/velocities
trr (str) - coordinate/velocity trajectory file
trr_reader (TrrReader) - the reader of the frames
n_frame (int) - number of frames in trajectory
i_frame (int) - index of current frame
Methods:
__init__ - load coordinate and velocity trajectories and build soup
load_frame - loads new frame into soup
"""
def __init__(self, soup, trr):
self.soup = soup
self.trr = trr
self.trr_reader = TrrReader(self.trr)
self.n_frame = self.trr_reader.n_frame
self.atoms = self.soup.atoms()
self.load_frame(0)
def __repr__(self):
    return "< Gromacs Trajectory object with %d frames of %d atoms >" % \
      (self.n_frame, len(self.atoms))
def load_frame(self, i_frame):
box, positions, velocities, forces = self.trr_reader[i_frame]
for i, atom in enumerate(self.atoms):
v3.set_vector(
atom.pos,
positions[i][0]*10,
positions[i][1]*10,
positions[i][2]*10)
v3.set_vector(
atom.vel,
velocities[i][0]*10,
velocities[i][1]*10,
velocities[i][2]*10)
self.i_frame = self.trr_reader.i_frame
class Trajectory(object):
"""
  Class to interact with a GROMACS trajectory using soup.
Attributes:
basename (str) - basename used to guess all required files
top (str) - topology file of trajectory
gro (str) - coordinate/velocity restart file
trr (str) - coordinate/velocity trajectory file
trr_reader (TrrReader) - the reader of the frames
n_frame (int) - number of frames in trajectory
i_frame (int) - index of current frame
soup (Soup) - Soup object holding current coordinates/velocities
Methods:
__init__ - load coordinate and velocity trajectories and build soup
load_frame - loads new frame into soup
"""
def __init__(self, basename):
self.basename = basename
self.top = basename + '.top'
self.gro = basename + '.gro'
self.soup = soup_from_top_gro(self.top, self.gro)
self.trr = basename + '.trr'
self.soup_trj = SoupTrajectory(self.soup, self.trr)
self.n_frame = self.soup_trj.n_frame
self.i_frame = None
self.load_frame(0)
def load_frame(self, i):
self.soup_trj.load_frame(i)
self.i_frame = self.soup_trj.i_frame
def merge_trajectories(basename, traj_basenames):
"""
Given a bunch of directories with consecutive trajectories, all
with the same basename, the function will splice them into a
  single trajectory with basename in the current directory.
"""
save_dir = os.getcwd()
trr_fname = basename + '.trr'
trr_list = [b + '.trr' for b in traj_basenames]
util.check_files(*trr_list)
f = open(trr_fname, 'w')
for trr in trr_list:
trr = TrrReader(trr)
for i_frame in range(trr.n_frame-1):
trr.file.seek(trr.size_frame*i_frame)
txt = trr.file.read(trr.size_frame)
f.write(txt)
f.close()
# Copy parameters of last pulse into current directory
traj_basename = traj_basenames[-1]
for ext in ['.top', '.itp', '.tpr', '.mdrun.mdp',
'.grompp.mdp', '.gro']:
for f in glob.glob('%s*%s' % (traj_basename, ext)):
g = f.replace(traj_basename, basename)
shutil.copy(f, g)
if g.endswith('.top'):
replace_include_file(g, traj_basename + "_", basename + "_")
os.chdir(save_dir)
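In `merge_trajectories` each segment contributes frames `0..n_frame-2` only, so the frame shared with the start of the next segment is not written twice. That splicing rule in miniature (function name is ours; frames are stand-in tuples rather than raw TRR bytes):

```python
def splice_frame_ranges(n_frames_per_segment):
    # Mirrors the copy loop above: from each segment, only frames
    # 0..n-2 are kept, dropping the frame duplicated at the start
    # of the following segment.
    spliced = []
    for i_segment, n_frame in enumerate(n_frames_per_segment):
        spliced.extend((i_segment, i) for i in range(n_frame - 1))
    return spliced
```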
| 31.041879 | 126 | 0.655281 | 4,482 | 30,390 | 4.286479 | 0.161312 | 0.014158 | 0.010879 | 0.012388 | 0.292578 | 0.223818 | 0.179523 | 0.147148 | 0.134655 | 0.120758 | 0 | 0.01942 | 0.222244 | 30,390 | 978 | 127 | 31.07362 | 0.793408 | 0.131194 | 0 | 0.214286 | 1 | 0.007937 | 0.284556 | 0.018144 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.01746 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
cb0b059756dd240a382f3daa11b8685dde9e57fb | 851 | py | Python | setup.py | hirokazumiyaji/jsonview | b510a136be27b04f33f4685b0c363a876518a113 | [
"MIT"
] | null | null | null | setup.py | hirokazumiyaji/jsonview | b510a136be27b04f33f4685b0c363a876518a113 | [
"MIT"
] | null | null | null | setup.py | hirokazumiyaji/jsonview | b510a136be27b04f33f4685b0c363a876518a113 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
try:
from setuptools import setup
except ImportError:
from distutils.core import setup
setup(name='jsonview',
version='0.1',
description='json view for django',
author='Hirokazu Miyaji',
author_email='hirokazu.miyaji@gmail.com',
keywords=['json', 'django'],
url='https://github.com/hirokazumiyaji/jsonview',
packages=['jsonview'],
license='MIT',
classifiers=[
'Development Status :: 5 - Production/Stable',
'Environment :: Web Environment',
'License :: OSI Approved :: MIT License',
'Intended Audience :: Developers',
'Programming Language :: Python',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.4',
],
)
| 29.344828 | 56 | 0.596945 | 83 | 851 | 6.108434 | 0.674699 | 0.149901 | 0.197239 | 0.102564 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012759 | 0.26322 | 851 | 28 | 57 | 30.392857 | 0.795853 | 0.023502 | 0 | 0 | 0 | 0 | 0.5 | 0.03012 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.125 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
cb2a869715dd0dc17013086a85f438a5038484ff | 297 | py | Python | django_football/teams/urls.py | gvpeek/django_football | a5464452a96ae0ebc4a50f7fd571e3aae509914b | [
"MIT"
] | 1 | 2015-01-26T04:10:52.000Z | 2015-01-26T04:10:52.000Z | django_football/teams/urls.py | gvpeek/django_football | a5464452a96ae0ebc4a50f7fd571e3aae509914b | [
"MIT"
] | 20 | 2015-01-18T19:24:17.000Z | 2015-09-20T19:14:21.000Z | django_football/teams/urls.py | gvpeek/django_football | a5464452a96ae0ebc4a50f7fd571e3aae509914b | [
"MIT"
] | null | null | null | from django.conf.urls import patterns, url
import teams.views
urlpatterns = patterns('',
url(r'detail/(?P<team_id>\d+)/(?P<year>\d+)/$', teams.views.show_team_detail, name='show_team_detail'),
url(r'history/(?P<team_id>\d+)/$', teams.views.show_team_history, name='show_team_history'),
) | 37.125 | 107 | 0.703704 | 47 | 297 | 4.234043 | 0.425532 | 0.160804 | 0.070352 | 0.080402 | 0.190955 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.087542 | 297 | 8 | 108 | 37.125 | 0.734317 | 0 | 0 | 0 | 0 | 0 | 0.328859 | 0.218121 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
cb315cfc7525bc6cb4b2eff87b7f8316237e50cb | 2,960 | py | Python | pytorch-frontend/tools/codegen/api/types.py | AndreasKaratzas/stonne | 2915fcc46cc94196303d81abbd1d79a56d6dd4a9 | [
"MIT"
] | 206 | 2020-11-28T22:56:38.000Z | 2022-03-27T02:33:04.000Z | pytorch-frontend/tools/codegen/api/types.py | AndreasKaratzas/stonne | 2915fcc46cc94196303d81abbd1d79a56d6dd4a9 | [
"MIT"
] | 19 | 2020-12-09T23:13:14.000Z | 2022-01-24T23:24:08.000Z | pytorch-frontend/tools/codegen/api/types.py | AndreasKaratzas/stonne | 2915fcc46cc94196303d81abbd1d79a56d6dd4a9 | [
"MIT"
] | 28 | 2020-11-29T15:25:12.000Z | 2022-01-20T02:16:27.000Z | from tools.codegen.model import *
from dataclasses import dataclass
from typing import Optional, Union, Sequence
# Represents the implicit *this argument for method calls in C++ API
@dataclass(frozen=True)
class ThisArgument:
argument: Argument
# Bundle of arguments that represent a TensorOptions in the C++ API.
@dataclass(frozen=True)
class TensorOptionsArguments:
dtype: Argument
layout: Argument
device: Argument
pin_memory: Argument
def all(self) -> Sequence[Argument]:
return [self.dtype, self.layout, self.device, self.pin_memory]
# Describes an argument (e.g., the x in "f(int x)") in the C++ API
@dataclass(frozen=True)
class CppArgument:
# C++ type, e.g., int
type: str
# C++ name, e.g., x
name: str
# Only used by the header, but we work it out in all cases anyway
default: Optional[str]
# The JIT argument(s) this formal was derived from. May
# correspond to multiple arguments if this is TensorOptions!
# May also correspond to the implicit *this argument!
argument: Union[Argument, TensorOptionsArguments, ThisArgument]
# Default string representation prints the most elaborated form
# of the formal
def __str__(self) -> str:
mb_default = ""
if self.default is not None:
mb_default = f"={self.default}"
return f"{self.type} {self.name}{mb_default}"
# However, you might also find the version with no default useful
def str_no_default(self) -> str:
return f"{self.type} {self.name}"
@dataclass(frozen=True)
class CppExpr:
type: str
expr: str
@dataclass(frozen=True)
class DispatcherExpr:
type: str
expr: str
@dataclass(frozen=True)
class LegacyDispatcherExpr:
type: str
expr: str
@dataclass(frozen=True)
class DispatcherArgument:
type: str
name: str
# dispatcher NEVER has defaults
argument: Union[Argument, TensorOptionsArguments]
# TensorOptionsArguments can occur when not using full c10 dispatch
def __str__(self) -> str:
return f"{self.type} {self.name}"
@dataclass(frozen=True)
class LegacyDispatcherArgument:
type: str
name: str
# Legacy dispatcher arguments have defaults for some reasons (e.g.,
# the function prototypes in CPUType.h are defaulted). There isn't
# really any good reason to do this, as these functions are only
# ever called from a context where all defaulted arguments are
# guaranteed to be given explicitly.
# TODO: Remove this
default: Optional[str]
argument: Union[Argument, TensorOptionsArguments]
# Convention here is swapped because arguably legacy
# dispatcher shouldn't have defaults...
def __str__(self) -> str:
return f"{self.type} {self.name}"
def str_with_default(self) -> str:
mb_default = ""
if self.default is not None:
mb_default = f"={self.default}"
return f"{self.type} {self.name}{mb_default}"
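The pattern used throughout this module — a frozen dataclass whose `__str__` appends `=default` only when a default exists — can be seen in isolation (the `Formal` class below is a stand-in of ours, not part of the codegen API):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Formal:
    # Minimal stand-in for CppArgument's printing behaviour: the
    # default is rendered as "=value" only when one is present.
    type: str
    name: str
    default: Optional[str] = None

    def __str__(self) -> str:
        mb_default = "" if self.default is None else f"={self.default}"
        return f"{self.type} {self.name}{mb_default}"
```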
| 30.833333 | 71 | 0.688514 | 393 | 2,960 | 5.124682 | 0.37659 | 0.059583 | 0.075472 | 0.095333 | 0.268123 | 0.268123 | 0.25422 | 0.25422 | 0.164846 | 0.164846 | 0 | 0.000869 | 0.222297 | 2,960 | 95 | 72 | 31.157895 | 0.874023 | 0.370946 | 0 | 0.644068 | 0 | 0 | 0.091898 | 0.025014 | 0 | 0 | 0 | 0.010526 | 0 | 1 | 0.101695 | false | 0 | 0.050847 | 0.067797 | 0.762712 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
cb38ba429883d5f6ba03ac4a0db8f71efc0fa5c0 | 1,557 | py | Python | mica/report/tests/test_write_report.py | sot/mica | 136a9b0d9521efda5208067b51cf0c8700b4def3 | [
"BSD-3-Clause"
] | null | null | null | mica/report/tests/test_write_report.py | sot/mica | 136a9b0d9521efda5208067b51cf0c8700b4def3 | [
"BSD-3-Clause"
] | 150 | 2015-01-23T17:09:53.000Z | 2022-01-10T00:50:54.000Z | mica/report/tests/test_write_report.py | sot/mica | 136a9b0d9521efda5208067b51cf0c8700b4def3 | [
"BSD-3-Clause"
] | null | null | null | # Licensed under a 3-clause BSD style license - see LICENSE.rst
import tempfile
import os
from pathlib import Path
import getpass
import shutil
import pytest
from warnings import warn
from testr.test_helper import on_head_network, has_sybase
from .. import report
user = getpass.getuser()
try:
import Ska.DBI
with Ska.DBI.DBI(server='sqlsao', dbi='sybase', user=user, database='axafvv') as db:
HAS_SYBASE_ACCESS = True
except Exception:
HAS_SYBASE_ACCESS = False
# If the user should have access, warn about the issue.
if (on_head_network() and not has_sybase() and
Path(os.environ['SKA'], 'data', 'aspect_authorization',
f'sqlsao-axafvv-{user}').exists()):
warn("On HEAD but no sybase access. Run test from production environment.")
HAS_SC_ARCHIVE = os.path.exists(report.starcheck.FILES['data_root'])
@pytest.mark.skipif('not HAS_SYBASE_ACCESS', reason='Report test requires Sybase VV access')
@pytest.mark.skipif('not HAS_SC_ARCHIVE', reason='Report test requires mica starcheck archive')
def test_write_reports():
"""
Make a report and database
"""
tempdir = tempfile.mkdtemp()
# Get a temporary file, but then delete it, because report.py will only
# make a new table if the supplied file doesn't exist
fh, fn = tempfile.mkstemp(dir=tempdir, suffix='.db3')
os.unlink(fn)
report.REPORT_ROOT = tempdir
report.REPORT_SERVER = fn
for obsid in [20001, 15175, 54778]:
report.main(obsid)
os.unlink(fn)
shutil.rmtree(tempdir)
| 31.14 | 95 | 0.705202 | 225 | 1,557 | 4.777778 | 0.506667 | 0.04186 | 0.04186 | 0.035349 | 0.04093 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013535 | 0.19332 | 1,557 | 49 | 96 | 31.77551 | 0.842357 | 0.170199 | 0 | 0.060606 | 0 | 0 | 0.207384 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.030303 | false | 0.060606 | 0.30303 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 2 |
cb47401d1076d9afb4a0c6b53231cabee3a8b489 | 12,494 | py | Python | pgoapi/protos/pogoprotos/networking/requests/messages/platform_client_actions_message_pb2.py | aroo135/pgoapi | 47d384ffe54051363ba380f0cb14ec584feedab1 | [
"MIT"
] | 842 | 2016-08-07T00:07:26.000Z | 2016-08-22T02:41:38.000Z | aiopogo/pogoprotos/networking/requests/messages/platform_client_actions_message_pb2.py | ultrafunkamsterdam/aiopogo | 43444c994a400bc9bc8fd1ccaa6a1f79ff5df1fe | [
"MIT"
] | 102 | 2016-08-22T19:55:24.000Z | 2018-10-28T12:06:33.000Z | aiopogo/pogoprotos/networking/requests/messages/platform_client_actions_message_pb2.py | ultrafunkamsterdam/aiopogo | 43444c994a400bc9bc8fd1ccaa6a1f79ff5df1fe | [
"MIT"
] | 522 | 2016-08-07T00:06:05.000Z | 2016-08-21T18:03:59.000Z | # Generated by the protocol buffer compiler. DO NOT EDIT!
# source: pogoprotos/networking/requests/messages/platform_client_actions_message.proto
import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
from google.protobuf import descriptor_pb2
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor.FileDescriptor(
name='pogoprotos/networking/requests/messages/platform_client_actions_message.proto',
package='pogoprotos.networking.requests.messages',
syntax='proto3',
serialized_pb=_b('\nMpogoprotos/networking/requests/messages/platform_client_actions_message.proto\x12\'pogoprotos.networking.requests.messages\"\xed\x02\n\x1fRegisterPushNotificationMessage\x12\x64\n\tapn_token\x18\x01 \x01(\x0b\x32Q.pogoprotos.networking.requests.messages.RegisterPushNotificationMessage.ApnToken\x12\x64\n\tgcm_token\x18\x02 \x01(\x0b\x32Q.pogoprotos.networking.requests.messages.RegisterPushNotificationMessage.GcmToken\x1aY\n\x08\x41pnToken\x12\x17\n\x0fregistration_id\x18\x01 \x01(\t\x12\x19\n\x11\x62undle_identifier\x18\x02 \x01(\t\x12\x19\n\x11payload_byte_size\x18\x03 \x01(\x05\x1a#\n\x08GcmToken\x12\x17\n\x0fregistration_id\x18\x01 \x01(\t\"\xf5\x01\n\x1fUpdateNotificationStatusMessage\x12\x18\n\x10notification_ids\x18\x01 \x03(\t\x12\x1b\n\x13\x63reate_timestamp_ms\x18\x02 \x03(\x03\x12i\n\x05state\x18\x03 \x01(\x0e\x32Z.pogoprotos.networking.requests.messages.UpdateNotificationStatusMessage.NotificationState\"0\n\x11NotificationState\x12\x0f\n\x0bUNSET_STATE\x10\x00\x12\n\n\x06VIEWED\x10\x01\";\n%OptOutPushNotificationCategoryMessage\x12\x12\n\ncategories\x18\x01 \x03(\tb\x06proto3')
)
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
_UPDATENOTIFICATIONSTATUSMESSAGE_NOTIFICATIONSTATE = _descriptor.EnumDescriptor(
name='NotificationState',
full_name='pogoprotos.networking.requests.messages.UpdateNotificationStatusMessage.NotificationState',
filename=None,
file=DESCRIPTOR,
values=[
_descriptor.EnumValueDescriptor(
name='UNSET_STATE', index=0, number=0,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='VIEWED', index=1, number=1,
options=None,
type=None),
],
containing_type=None,
options=None,
serialized_start=688,
serialized_end=736,
)
_sym_db.RegisterEnumDescriptor(_UPDATENOTIFICATIONSTATUSMESSAGE_NOTIFICATIONSTATE)
_REGISTERPUSHNOTIFICATIONMESSAGE_APNTOKEN = _descriptor.Descriptor(
name='ApnToken',
full_name='pogoprotos.networking.requests.messages.RegisterPushNotificationMessage.ApnToken',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='registration_id', full_name='pogoprotos.networking.requests.messages.RegisterPushNotificationMessage.ApnToken.registration_id', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='bundle_identifier', full_name='pogoprotos.networking.requests.messages.RegisterPushNotificationMessage.ApnToken.bundle_identifier', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='payload_byte_size', full_name='pogoprotos.networking.requests.messages.RegisterPushNotificationMessage.ApnToken.payload_byte_size', index=2,
number=3, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=362,
serialized_end=451,
)
_REGISTERPUSHNOTIFICATIONMESSAGE_GCMTOKEN = _descriptor.Descriptor(
name='GcmToken',
full_name='pogoprotos.networking.requests.messages.RegisterPushNotificationMessage.GcmToken',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='registration_id', full_name='pogoprotos.networking.requests.messages.RegisterPushNotificationMessage.GcmToken.registration_id', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=453,
serialized_end=488,
)
_REGISTERPUSHNOTIFICATIONMESSAGE = _descriptor.Descriptor(
name='RegisterPushNotificationMessage',
full_name='pogoprotos.networking.requests.messages.RegisterPushNotificationMessage',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='apn_token', full_name='pogoprotos.networking.requests.messages.RegisterPushNotificationMessage.apn_token', index=0,
number=1, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='gcm_token', full_name='pogoprotos.networking.requests.messages.RegisterPushNotificationMessage.gcm_token', index=1,
number=2, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[_REGISTERPUSHNOTIFICATIONMESSAGE_APNTOKEN, _REGISTERPUSHNOTIFICATIONMESSAGE_GCMTOKEN, ],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=123,
serialized_end=488,
)
_UPDATENOTIFICATIONSTATUSMESSAGE = _descriptor.Descriptor(
name='UpdateNotificationStatusMessage',
full_name='pogoprotos.networking.requests.messages.UpdateNotificationStatusMessage',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='notification_ids', full_name='pogoprotos.networking.requests.messages.UpdateNotificationStatusMessage.notification_ids', index=0,
number=1, type=9, cpp_type=9, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='create_timestamp_ms', full_name='pogoprotos.networking.requests.messages.UpdateNotificationStatusMessage.create_timestamp_ms', index=1,
number=2, type=3, cpp_type=2, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='state', full_name='pogoprotos.networking.requests.messages.UpdateNotificationStatusMessage.state', index=2,
number=3, type=14, cpp_type=8, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
_UPDATENOTIFICATIONSTATUSMESSAGE_NOTIFICATIONSTATE,
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=491,
serialized_end=736,
)
_OPTOUTPUSHNOTIFICATIONCATEGORYMESSAGE = _descriptor.Descriptor(
name='OptOutPushNotificationCategoryMessage',
full_name='pogoprotos.networking.requests.messages.OptOutPushNotificationCategoryMessage',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='categories', full_name='pogoprotos.networking.requests.messages.OptOutPushNotificationCategoryMessage.categories', index=0,
number=1, type=9, cpp_type=9, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=738,
serialized_end=797,
)
_REGISTERPUSHNOTIFICATIONMESSAGE_APNTOKEN.containing_type = _REGISTERPUSHNOTIFICATIONMESSAGE
_REGISTERPUSHNOTIFICATIONMESSAGE_GCMTOKEN.containing_type = _REGISTERPUSHNOTIFICATIONMESSAGE
_REGISTERPUSHNOTIFICATIONMESSAGE.fields_by_name['apn_token'].message_type = _REGISTERPUSHNOTIFICATIONMESSAGE_APNTOKEN
_REGISTERPUSHNOTIFICATIONMESSAGE.fields_by_name['gcm_token'].message_type = _REGISTERPUSHNOTIFICATIONMESSAGE_GCMTOKEN
_UPDATENOTIFICATIONSTATUSMESSAGE.fields_by_name['state'].enum_type = _UPDATENOTIFICATIONSTATUSMESSAGE_NOTIFICATIONSTATE
_UPDATENOTIFICATIONSTATUSMESSAGE_NOTIFICATIONSTATE.containing_type = _UPDATENOTIFICATIONSTATUSMESSAGE
DESCRIPTOR.message_types_by_name['RegisterPushNotificationMessage'] = _REGISTERPUSHNOTIFICATIONMESSAGE
DESCRIPTOR.message_types_by_name['UpdateNotificationStatusMessage'] = _UPDATENOTIFICATIONSTATUSMESSAGE
DESCRIPTOR.message_types_by_name['OptOutPushNotificationCategoryMessage'] = _OPTOUTPUSHNOTIFICATIONCATEGORYMESSAGE
RegisterPushNotificationMessage = _reflection.GeneratedProtocolMessageType('RegisterPushNotificationMessage', (_message.Message,), dict(
ApnToken = _reflection.GeneratedProtocolMessageType('ApnToken', (_message.Message,), dict(
DESCRIPTOR = _REGISTERPUSHNOTIFICATIONMESSAGE_APNTOKEN,
__module__ = 'pogoprotos.networking.requests.messages.platform_client_actions_message_pb2'
# @@protoc_insertion_point(class_scope:pogoprotos.networking.requests.messages.RegisterPushNotificationMessage.ApnToken)
))
,
GcmToken = _reflection.GeneratedProtocolMessageType('GcmToken', (_message.Message,), dict(
DESCRIPTOR = _REGISTERPUSHNOTIFICATIONMESSAGE_GCMTOKEN,
__module__ = 'pogoprotos.networking.requests.messages.platform_client_actions_message_pb2'
# @@protoc_insertion_point(class_scope:pogoprotos.networking.requests.messages.RegisterPushNotificationMessage.GcmToken)
))
,
DESCRIPTOR = _REGISTERPUSHNOTIFICATIONMESSAGE,
__module__ = 'pogoprotos.networking.requests.messages.platform_client_actions_message_pb2'
# @@protoc_insertion_point(class_scope:pogoprotos.networking.requests.messages.RegisterPushNotificationMessage)
))
_sym_db.RegisterMessage(RegisterPushNotificationMessage)
_sym_db.RegisterMessage(RegisterPushNotificationMessage.ApnToken)
_sym_db.RegisterMessage(RegisterPushNotificationMessage.GcmToken)
UpdateNotificationStatusMessage = _reflection.GeneratedProtocolMessageType('UpdateNotificationStatusMessage', (_message.Message,), dict(
DESCRIPTOR = _UPDATENOTIFICATIONSTATUSMESSAGE,
__module__ = 'pogoprotos.networking.requests.messages.platform_client_actions_message_pb2'
# @@protoc_insertion_point(class_scope:pogoprotos.networking.requests.messages.UpdateNotificationStatusMessage)
))
_sym_db.RegisterMessage(UpdateNotificationStatusMessage)
OptOutPushNotificationCategoryMessage = _reflection.GeneratedProtocolMessageType('OptOutPushNotificationCategoryMessage', (_message.Message,), dict(
DESCRIPTOR = _OPTOUTPUSHNOTIFICATIONCATEGORYMESSAGE,
__module__ = 'pogoprotos.networking.requests.messages.platform_client_actions_message_pb2'
# @@protoc_insertion_point(class_scope:pogoprotos.networking.requests.messages.OptOutPushNotificationCategoryMessage)
))
_sym_db.RegisterMessage(OptOutPushNotificationCategoryMessage)
# @@protoc_insertion_point(module_scope)
# File: devices/custom_sensor.py (repo: bramvreugd/domoticz-zigbee2mqtt-plugin, MIT)
import domoticz
from devices.device import Device
class CustomSensor(Device):
def create_device(self, unit, device_id, device_name):
options = {}
if hasattr(self, 'feature') and 'unit' in self.feature:
options['Custom'] = '1;' + self.feature['unit']
return domoticz.create_device(Unit=unit, DeviceID=device_id, Name=device_name, TypeName="Custom", Options=options)
def get_numeric_value(self, value, device):
return int(value)
def get_string_value(self, value, device):
return str(value)
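The `'1;' + unit` string above appears to follow Domoticz's convention for Custom sensors, where the Options value packs a multiplier and an axis unit separated by a semicolon. A small sketch of building and splitting that string (the helper is illustrative, not part of the plugin):

```python
def custom_options(unit=None):
    # Mirrors the options dict built in CustomSensor.create_device above.
    options = {}
    if unit:
        options["Custom"] = "1;" + unit
    return options

opts = custom_options("lux")
print(opts)                # -> {'Custom': '1;lux'}
multiplier, unit = opts["Custom"].split(";")
print(multiplier, unit)    # -> 1 lux
```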
# File: src/dispatch/auth/models.py (repo: mlioo/dispatch, Apache-2.0)
from typing import Optional
from sqlalchemy import Column, String, Binary
from dispatch.database import Base
from dispatch.models import TimeStampMixin, DispatchBase
from pydantic import validator
class DispatchUser(Base, TimeStampMixin):
email = Column(String, primary_key=True)
password = Column(Binary, nullable=False)
class UserLoginForm(DispatchBase):
email: str
password: str
@validator("email")
def email_required(cls, v):
if not v:
            raise ValueError("Must not be empty string and must be an email")
return v
@validator("password")
def password_required(cls, v):
if not v:
raise ValueError("Must not be empty string")
return v
class UserLoginResponse(DispatchBase):
token: Optional[str]
error: Optional[str]
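Both validators above reduce to the same falsy-value check. Stripped of pydantic, the logic is just the following (an illustrative helper, not part of dispatch):

```python
def require_nonempty(value, message="Must not be empty string"):
    # Mirrors the ValueError raised by the pydantic validators above.
    if not value:
        raise ValueError(message)
    return value

print(require_nonempty("user@example.com"))  # -> user@example.com
```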
# File: app/config/secure.py (repo: yyywang/quota, MIT)
"""
:copyright: © 2019 by the Lin team.
:license: MIT, see LICENSE for more details.
"""
# Security configuration
from app.config.setting import BaseConfig
class DevelopmentSecure(BaseConfig):
"""
开发环境安全性配置
"""
SQLALCHEMY_DATABASE_URI = 'mysql+cymysql://root:python0096@localhost:3306/lin-cms'
SQLALCHEMY_ECHO = False
SECRET_KEY = '\x88W\xf09\x91\x07\x98\x89\x87\x96\xa0A\xc68\xf9\xecJJU\x17\xc5V\xbe\x8b\xef\xd7\xd8\xd3\xe6\x95*4'
WX_APP_ID = 'wx77e07d7773c8baba'
WX_APP_SECRET = 'b4486877733d3bb9a648c1c320a96051'
SITE_DOMAIN = "https://www.znldj.com"
class ProductionSecure(BaseConfig):
"""
生产环境安全性配置
"""
SQLALCHEMY_DATABASE_URI = 'mysql+cymysql://root:MyNewPassword@localhost:3306/lincms'
SQLALCHEMY_ECHO = False
SECRET_KEY = '\x88W\xf09\x91\x07\x98\x89\x87\x96\xa0A\xc68\xf9\xecJJU\x17\xc5V\xbe\x8b\xef\xd7\xd8\xd3\xe6\x95*4'
WX_APP_ID = 'wx77e07d7773c8baba'
WX_APP_SECRET = 'b4486877733d3bb9a648c1c320a96051'
SITE_DOMAIN = "https://www.znldj.com"
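These two classes are typically chosen by environment at startup. The repo's actual wiring lives in app/config/setting.py (not shown), so the selector below is purely a sketch, with stand-in classes replacing the real ones:

```python
import os

class DevelopmentSecure:   # stand-ins for the classes defined above
    SITE_DOMAIN = "https://www.znldj.com"

class ProductionSecure:
    SITE_DOMAIN = "https://www.znldj.com"

def select_secure_config(env=None):
    # Hypothetical selector keyed on a FLASK_ENV-style variable.
    env = env or os.environ.get("FLASK_ENV", "development")
    return {"development": DevelopmentSecure, "production": ProductionSecure}[env]

print(select_secure_config("production").__name__)  # -> ProductionSecure
```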
# File: Ago-Dic-2021/ferrer-ferrer-francisco-antonio/normal_builder/builder.py (repo: AnhellO/DAS_Sistemas, MIT)
import abc
# Product 1
class Car:
def __init__(self, seats=None, engine=None, trip_computer=None, gps=None) -> None:
self.seats = seats
self.engine = engine
self.trip_computer = trip_computer
self.gps = gps
def set_seats(self, seats):
self.seats = seats
def set_engine(self, engine):
self.engine = engine
def set_trip_computer(self, trip_computer):
self.trip_computer = trip_computer
def set_gps(self, gps):
self.gps = gps
# Product 2
class Manual:
def __init__(self, desc_seats=None, desc_engine=None, desc_trip_computer=None, desc_gps=None) -> None:
self.desc_seats = desc_seats
self.desc_engine = desc_engine
self.desc_trip_computer = desc_trip_computer
self.desc_gps = desc_gps
def set_seats(self, desc_seats):
self.desc_seats = desc_seats
def set_engine(self, desc_engine):
self.desc_engine = desc_engine
def set_trip_computer(self, desc_trip_computer):
self.desc_trip_computer = desc_trip_computer
def set_gps(self, desc_gps):
self.desc_gps = desc_gps
# Builder interface
class Builder(metaclass=abc.ABCMeta):
@abc.abstractmethod
def reset(self):
pass
@abc.abstractmethod
def set_seats(self, number):
pass
@abc.abstractmethod
def set_engine(self, engine):
pass
@abc.abstractmethod
def set_trip_computer(self, km):
pass
@abc.abstractmethod
def set_gps(self, has_gps):
pass
@abc.abstractmethod
def get_result(self):
pass
# ConcreteBuilder 1
class CarBuilder(Builder):
def __init__(self) -> None:
self.car = Car()
def reset(self):
self.car = Car()
def set_seats(self, number):
self.car.set_seats(number)
def set_engine(self, engine):
self.car.set_engine(engine)
def set_trip_computer(self, km):
self.car.set_trip_computer(km)
def set_gps(self, has_gps):
self.car.set_gps(has_gps)
def get_result(self):
return self.car
# ConcreteBuilder 2
class CarManualBuilder(Builder):
    def __init__(self) -> None:
self.car_manual = Manual()
def reset(self):
self.car_manual = Manual()
def set_seats(self, desc_number):
self.car_manual.set_seats(desc_number)
def set_engine(self, desc_engine):
self.car_manual.set_engine(desc_engine)
def set_trip_computer(self, desc_km):
self.car_manual.set_trip_computer(desc_km)
def set_gps(self, desc_gps):
self.car_manual.set_gps(desc_gps)
def get_result(self):
        return self.car_manual
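The module defines builders but no director. A common companion is a director class that replays a fixed recipe against any builder implementing the interface. The sketch below re-declares a trimmed `Car`/`CarBuilder` pair so it runs standalone; the `Director` itself is an illustrative addition, not code from the exercise:

```python
# Trimmed re-declarations of the Car/CarBuilder pair above, plus a
# hypothetical Director that replays a fixed recipe against a builder.
class Car:
    def __init__(self):
        self.seats = None
        self.engine = None
        self.gps = None

class CarBuilder:
    def __init__(self):
        self.car = Car()

    def set_seats(self, number):
        self.car.seats = number

    def set_engine(self, engine):
        self.car.engine = engine

    def set_gps(self, has_gps):
        self.car.gps = has_gps

    def get_result(self):
        return self.car

class Director:
    """Runs the same sequence of build steps on any builder."""
    def build_sports_car(self, builder):
        builder.set_seats(2)
        builder.set_engine("V8")
        builder.set_gps(True)
        return builder.get_result()

car = Director().build_sports_car(CarBuilder())
print(car.seats, car.engine, car.gps)  # -> 2 V8 True
```

The same `Director` recipe would work unchanged against `CarManualBuilder`, which is the point of keeping the build steps behind the shared `Builder` interface.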
# File: day05/solution.py (repo: MichalChomo/advent-of-code-2021, MIT)
import numpy as np
def get_int_tuple_from_string_pair(pair):
return tuple((int(x) for x in pair.split(',')))
def get_zeroed_field(vectors):
dimension_size = max(vectors.flat) + 1
return np.zeros((dimension_size, dimension_size), dtype=int)
def get_overlaps_count_from_field(field):
return len([x for x in field.flat if x > 1])
def filter_horizontal_and_vertical_lines(vector):
return vector[0, 0] == vector[1, 0] or vector[0, 1] == vector[1, 1]
def filter_diagonal_lines(vector):
return vector[0, 0] != vector[1, 0] and vector[0, 1] != vector[1, 1]
def get_lines_coordinates(vectors, predicate):
return [(v[0, 0], v[1, 0], v[0, 1], v[1, 1]) for v in vectors if predicate(v)]
def fill_field_by_horizontal_and_vertical_lines(vectors, field):
lines = get_lines_coordinates(vectors, filter_horizontal_and_vertical_lines)
for x1, x2, y1, y2 in lines:
x1, x2 = min(x1, x2), max(x1, x2)
y1, y2 = min(y1, y2), max(y1, y2)
for y in range(y1, y2 + 1):
for x in range(x1, x2 + 1):
field[y][x] += 1
def fill_field_by_diagonal_lines(vectors, field):
lines = get_lines_coordinates(vectors, filter_diagonal_lines)
for x1, x2, y1, y2 in lines:
xrange = range(x1, x2 + 1) if x1 < x2 else range(x1, x2 - 1, -1)
yrange = range(y1, y2 + 1) if y1 < y2 else range(y1, y2 - 1, -1)
for x, y in zip(xrange, yrange):
field[y][x] += 1
def part_one(vectors, field):
fill_field_by_horizontal_and_vertical_lines(vectors, field)
return get_overlaps_count_from_field(field)
def part_two(vectors, field):
fill_field_by_horizontal_and_vertical_lines(vectors, field)
fill_field_by_diagonal_lines(vectors, field)
return get_overlaps_count_from_field(field)
with open('input', 'r') as infile:
vectors = [(get_int_tuple_from_string_pair(pair[0]), get_int_tuple_from_string_pair(pair[1])) for pair in
(line.strip('\n').replace(' ', '').split('->') for line in infile.readlines())]
vectors = np.array(vectors)
print(part_one(vectors, get_zeroed_field(vectors)))
print(part_two(vectors, get_zeroed_field(vectors)))
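`fill_field_by_horizontal_and_vertical_lines` plus `get_overlaps_count_from_field` can be sanity-checked by hand. The numpy-free sketch below redoes the same count on one horizontal and one vertical line that cross exactly once, with a `Counter` standing in for the field array (helper names are illustrative):

```python
# Hand-checkable sketch of the overlap count: a dict plays the role
# of the numpy field used above.
from collections import Counter

def mark_line(field, x1, y1, x2, y2):
    # Horizontal or vertical lines only, matching part one above.
    if x1 == x2:
        for y in range(min(y1, y2), max(y1, y2) + 1):
            field[(x1, y)] += 1
    else:
        for x in range(min(x1, x2), max(x1, x2) + 1):
            field[(x, y1)] += 1

field = Counter()
mark_line(field, 0, 0, 2, 0)   # horizontal: (0,0) (1,0) (2,0)
mark_line(field, 1, 0, 1, 2)   # vertical:   (1,0) (1,1) (1,2)
overlaps = sum(1 for c in field.values() if c > 1)
print(overlaps)  # -> 1, the single crossing at (1, 0)
```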
# File: Python_Files/murach/exercises/ch15/product_viewer/objects.py (repo: Interloper2448/BCGPortfolio, MIT)
class Product:
def __init__(self, name="", price=0.0, discountPercent=0):
self.name = name
self.price = price
self.discountPercent = discountPercent
def getDiscountAmount(self):
return self.price * self.discountPercent / 100
def getDiscountPrice(self):
return self.price - self.getDiscountAmount()
def getDescription(self):
return self.name
class Media(Product):
def __init__(self, name="", price=0.0, discountPercent=0, format=""):
self.format = format
Product.__init__(self, name, price, discountPercent)
# def getDiscription(self):
# return Product.getDescription(self)
class Book(Media):
def __init__(self, name="", price=0.0, discountPercent=0, author="", format="Hardcover"):
self.author = author
Media.__init__(self, name, price, discountPercent, format)
# Book("The Big Short", 15.95, 34, "Michael Lewis", "Ebook")) # ,
def getDescription(self):
return Media.getDescription(self) + " by " + self.author
class Album(Media):
def __init__(self, name="", price=0.0, discountPercent=0, author="", format="cassette"):
Media.__init__(self, name, price, discountPercent, format)
self.author = author
def getDescription(self):
return Media.getDescription(self) + " by " + self.author
class Movie(Media):
def __init__(self, name="", price=0.0, discountPercent=0, year=0, format="DVD"):
Media.__init__(self, name, price, discountPercent, format)
self.year = year
def getDescription(self):
return Media.getDescription(self) + " (" + str(self.year) + ")"
| 31.603774 | 93 | 0.643582 | 188 | 1,675 | 5.542553 | 0.191489 | 0.084453 | 0.103647 | 0.146833 | 0.616123 | 0.541267 | 0.541267 | 0.452015 | 0.361804 | 0.361804 | 0 | 0.019201 | 0.222687 | 1,675 | 52 | 94 | 32.211538 | 0.781106 | 0.077015 | 0 | 0.333333 | 0 | 0 | 0.020117 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.181818 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
# File: test/test_diststats.py (repo: laenan8466/DeerLab, MIT)
import numpy as np
from deerlab import dipolarkernel, diststats, fitregmodel
from deerlab.dd_models import dd_gauss
# Gaussian distribution parameters
Pmean = 4 # nm
Psigma = 0.4 # nm
def assert_descriptor(key,truth):
# ----------------------------------------------------------------------
# High-resolution distance axis
r = np.linspace(2,6,2000)
dr = np.mean(np.diff(r))
# Gaussian distribution
P = dd_gauss(r,[Pmean, Psigma])
descriptor = diststats(r,P)[0][key]
assert abs(descriptor - truth) < dr
# ----------------------------------------------------------------------
def test_rmin():
# ======================================================================
"Check that the minimal distribution distance is correctly computed"
truth = min(r)
assert_descriptor('rmin',truth)
# ======================================================================
def test_rmax():
# ======================================================================
"Check that the maximal distribution distance is correctly computed"
truth = max(r)
assert_descriptor('rmax',truth)
# ======================================================================
def test_int():
# ======================================================================
"Check that the distribution integral is correctly computed"
truth = 1
assert_descriptor('int',truth)
# ======================================================================
def test_mean():
# ======================================================================
"Check that the mean is correctly computed"
truth = Pmean
assert_descriptor('mean',truth)
# ======================================================================
def test_median():
# ======================================================================
"Check that the median is correctly computed"
truth = Pmean
assert_descriptor('median',truth)
# ======================================================================
def test_mode():
# ======================================================================
"Check that the mode is correctly computed"
truth = Pmean
assert_descriptor('mode',truth)
# ======================================================================
def test_modes():
# ======================================================================
"Check that the modes are correctly computed"
truth = Pmean
assert_descriptor('modes',truth)
# ======================================================================
def test_iqm():
# ======================================================================
"Check that the interquartile mean is correctly computed"
truth = Pmean
assert_descriptor('iqm',truth)
# ======================================================================
def test_iqr():
# ======================================================================
"Check that the interquartile range is correctly computed"
truth = 1.349*Psigma
assert_descriptor('iqr',truth)
# ======================================================================
def test_std():
# ======================================================================
"Check that the standard deviation is correctly computed"
truth = Psigma
assert_descriptor('std',truth)
# ======================================================================
def test_variance():
# ======================================================================
"Check that the variance is correctly computed"
truth = Psigma**2
assert_descriptor('var',truth)
# ======================================================================
def test_entropy():
# ======================================================================
"Check that the entropy is correctly computed"
truth = 1/2*np.log(2*np.pi*np.e*Psigma**2)
assert_descriptor('entropy',truth)
# ======================================================================
def test_mad():
# ======================================================================
"Check that the mean absolute deviation is correctly computed"
truth = np.sqrt(2/np.pi)*Psigma
assert_descriptor('mad',truth)
# ======================================================================
def test_modality():
# ======================================================================
"Check that the modality is correctly computed"
truth = 1
assert_descriptor('modality',truth)
# ======================================================================
def test_skewness():
# ======================================================================
"Check that the skewness is correctly computed"
truth = 0
assert_descriptor('skewness',truth)
# ======================================================================
def test_kurtosis():
# ======================================================================
"Check that the excess kurtosis is correctly computed"
truth = 0
assert_descriptor('kurtosis',truth)
# ======================================================================
def test_moment1():
# ======================================================================
"Check that the 1st moment is correctly computed"
truth = Pmean
assert_descriptor('moment1',truth)
# ======================================================================
def test_moment2():
# ======================================================================
"Check that the 2nd moment is correctly computed"
truth = Psigma**2
assert_descriptor('moment2',truth)
# ======================================================================
def test_moment3():
# ======================================================================
"Check that the 3rd moment is correctly computed"
truth = 0
assert_descriptor('moment3',truth)
# ======================================================================
def test_moment4():
# ======================================================================
"Check that the 4th moment is correctly computed"
truth = 3
assert_descriptor('moment4',truth)
# ======================================================================
t = np.linspace(0,5,200)
r = np.linspace(2,6,100)
P = dd_gauss(r,[Pmean, Psigma])
K = dipolarkernel(t,r)
fit = fitregmodel(K@P,K,r)
def assert_uncertainty(key):
# ----------------------------------------------------------------------
desc,uq = diststats(r,fit.P,fit.Puncert)
desc = desc[key]
ci = uq[key].ci(95)
assert (desc >= ci[0]) & (desc <= ci[1])
# ----------------------------------------------------------------------
def test_mean_uncertainty():
# ======================================================================
"Check that the mean confidence intervals are correct."
assert_uncertainty('mean')
# ======================================================================
def test_median_uncertainty():
# ======================================================================
"Check that the median confidence intervals are correct."
assert_uncertainty('median')
# ======================================================================
def test_iqm_uncertainty():
# ======================================================================
"Check that the interquartile mean confidence intervals are correct."
assert_uncertainty('iqm')
# ======================================================================
def test_std_uncertainty():
# ======================================================================
"Check that the standard deviation confidence intervals are correct."
assert_uncertainty('std')
# ======================================================================
def test_iqr_uncertainty():
# ======================================================================
"Check that the interquartile range confidence intervals are correct."
assert_uncertainty('iqr')
# ======================================================================
def test_mad_uncertainty():
# ======================================================================
"Check that the mean absolute deviation confidence intervals are correct."
assert_uncertainty('mad')
# ======================================================================
def test_var_uncertainty():
# ======================================================================
"Check that the variance confidence intervals are correct."
assert_uncertainty('var')
# ======================================================================
def test_entropy_uncertainty():
# ======================================================================
"Check that the entropy confidence intervals are correct."
assert_uncertainty('entropy')
# ======================================================================
def test_skewness_uncertainty():
# ======================================================================
"Check that the skewness confidence intervals are correct."
assert_uncertainty('skewness')
# ======================================================================
def test_kurtosis_uncertainty():
# ======================================================================
"Check that the kurtosis confidence intervals are correct."
assert_uncertainty('kurtosis')
# ======================================================================
def test_moment1_uncertainty():
# ======================================================================
"Check that the 1st moment confidence intervals are correct."
assert_uncertainty('moment1')
# ======================================================================
def test_moment2_uncertainty():
# ======================================================================
"Check that the 2nd moment confidence intervals are correct."
assert_uncertainty('moment2')
# ======================================================================
def test_moment3_uncertainty():
# ======================================================================
"Check that the 3rd moment confidence intervals are correct."
assert_uncertainty('moment3')
# ======================================================================
def test_moment4_uncertainty():
# ======================================================================
"Check that the 4th moment confidence intervals are correct."
assert_uncertainty('moment4')
# ======================================================================
def assert_descriptor_nonuniform_r(key,truth):
# ----------------------------------------------------------------------
# High-resolution non-uniform distance axis
r = np.sqrt(np.linspace(2**2,6**2,2000))
dr = np.min(np.diff(r))
# Gaussian distribution
P = dd_gauss(r,[Pmean, Psigma])
descriptor = diststats(r,P)[0][key]
assert abs(descriptor - truth) < dr
# ----------------------------------------------------------------------
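# A quick stdlib illustration (editorial sketch, not part of the test suite):
# the sqrt-spaced axis built by the helper above is genuinely non-uniform,
# since the spacing between consecutive points shrinks as r grows.

```python
import math

pts = [math.sqrt(x) for x in (4, 12, 20, 28, 36)]  # sqrt-spaced sample axis
diffs = [b - a for a, b in zip(pts, pts[1:])]      # consecutive spacings
# diffs decreases monotonically, so the axis spacing is non-uniform
```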
def test_nonuniform_r_rmin():
# ======================================================================
"Check that the minimal distribution distance is correctly computed"
truth = min(r)
assert_descriptor_nonuniform_r('rmin',truth)
# ======================================================================
def test_nonuniform_r_rmax():
# ======================================================================
"Check that the maximal distribution distance is correctly computed"
truth = max(r)
assert_descriptor_nonuniform_r('rmax',truth)
# ======================================================================
def test_nonuniform_r_int():
# ======================================================================
"Check that the distribution integral is correctly computed"
truth = 1
assert_descriptor_nonuniform_r('int',truth)
# ======================================================================
def test_nonuniform_r_mean():
# ======================================================================
"Check that the mean is correctly computed"
truth = Pmean
assert_descriptor_nonuniform_r('mean',truth)
# ======================================================================
def test_nonuniform_r_median():
# ======================================================================
"Check that the median is correctly computed"
truth = Pmean
assert_descriptor_nonuniform_r('median',truth)
# ======================================================================
def test_nonuniform_r_mode():
# ======================================================================
"Check that the mode is correctly computed"
truth = Pmean
assert_descriptor_nonuniform_r('mode',truth)
# ======================================================================
def test_nonuniform_r_modes():
# ======================================================================
"Check that the modes are correctly computed"
truth = Pmean
assert_descriptor_nonuniform_r('modes',truth)
# ======================================================================
def test_nonuniform_r_iqm():
# ======================================================================
"Check that the interquartile mean is correctly computed"
truth = Pmean
assert_descriptor_nonuniform_r('iqm',truth)
# ======================================================================
def test_nonuniform_r_iqr():
# ======================================================================
"Check that the interquartile range is correctly computed"
truth = 1.349*Psigma
assert_descriptor_nonuniform_r('iqr',truth)
# ======================================================================
def test_nonuniform_r_std():
# ======================================================================
"Check that the standard deviation is correctly computed"
truth = Psigma
assert_descriptor_nonuniform_r('std',truth)
# ======================================================================
def test_nonuniform_r_variance():
# ======================================================================
"Check that the variance is correctly computed"
truth = Psigma**2
assert_descriptor_nonuniform_r('var',truth)
# ======================================================================
def test_nonuniform_r_entropy():
# ======================================================================
"Check that the entropy is correctly computed"
truth = 1/2*np.log(2*np.pi*np.e*Psigma**2)
assert_descriptor_nonuniform_r('entropy',truth)
# ======================================================================
def test_nonuniform_r_mad():
# ======================================================================
"Check that the mean absolute deviation is correctly computed"
truth = np.sqrt(2/np.pi)*Psigma
assert_descriptor_nonuniform_r('mad',truth)
# ======================================================================
def test_nonuniform_r_modality():
# ======================================================================
"Check that the modality is correctly computed"
truth = 1
assert_descriptor_nonuniform_r('modality',truth)
# ======================================================================
def test_nonuniform_r_skewness():
# ======================================================================
"Check that the skewness is correctly computed"
truth = 0
assert_descriptor_nonuniform_r('skewness',truth)
# ======================================================================
def test_nonuniform_r_kurtosis():
# ======================================================================
"Check that the excess kurtosis is correctly computed"
truth = 0
assert_descriptor_nonuniform_r('kurtosis',truth)
# ======================================================================
def test_nonuniform_r_moment1():
# ======================================================================
"Check that the 1st moment is correctly computed"
truth = Pmean
assert_descriptor_nonuniform_r('moment1',truth)
# ======================================================================
def test_nonuniform_r_moment2():
# ======================================================================
"Check that the 2nd moment is correctly computed"
truth = Psigma**2
assert_descriptor_nonuniform_r('moment2',truth)
# ======================================================================
def test_nonuniform_r_moment3():
# ======================================================================
"Check that the 3rd moment is correctly computed"
truth = 0
assert_descriptor_nonuniform_r('moment3',truth)
# ======================================================================
def test_nonuniform_r_moment4():
# ======================================================================
"Check that the 4th moment is correctly computed"
truth = 3
assert_descriptor_nonuniform_r('moment4',truth)
# ======================================================================

from rest_framework import serializers
from .models import DefaultProfile
from service.serializers import ServiceSerializer
class DefaultProfileSerializer(serializers.ModelSerializer):
services = ServiceSerializer(many=True, read_only=True)
class Meta:
model = DefaultProfile
fields = '__all__'

# -*- coding: utf-8 -*-
from . import m_diversity as mdiv
from . import m_quality as mqual
from . import aux_func as aux
def LZc(data, norm = "shuffle_p", concat = "space", threshold = "median", shuffles = 1, ea = "mean"):
"""
Measure LZc. Takes a 2D/3D array input, outputs aggregate (ea) measure over epochs.
"""
data = aux.data_check(data)
return mdiv.LZc(data, norm, concat, threshold, shuffles, ea)
def ACE(data, norm = "shuffle_r", threshold = "median", shuffles = 1, ea = "mean"):
"""
Measure ACE. Takes a 3D array input, outputs aggregate (ea) measure over epochs
"""
data = aux.data_check(data)
return mdiv.ACE(data, norm, threshold, shuffles, ea)
def SCE(data, norm = "shuffle_r", threshold = 0.8, shuffles = 1, ea = "mean", ca = "mean"):
"""
Measure SCE. Takes a 3D array input, outputs aggregate measure
"""
data = aux.data_check(data)
return mdiv.SCE(data, norm, threshold, shuffles, ea, ca)
def stationarity(data, test = "adf", ea = "mean", ca = "mean"):
"""
    Calculates whether the data satisfies stationarity.
    Returns p-value(s); a lower value means the stationarity
    condition is more likely to be satisfied.
    Valid tests to use are:
    'adf' - Augmented Dickey-Fuller test
"""
data = aux.data_check(data)
return mqual.stationarity(data, test, ea, ca)
def normality(data, test = "sw", ea = "mean", ca = "mean"):
"""
    Calculates whether the data is normally distributed.
    Returns p-value(s); a lower value means the normality
    condition is more likely to be satisfied.
    Valid tests to use are:
    'sw' - Shapiro-Wilk test
    'k2' - D'Agostino's K^2 test
"""
data = aux.data_check(data)
return mqual.normality(data, test, ea, ca)

import functools
import re
from django.db import connections, connection
from six import text_type
import sqlparse
from . import app_settings
EXPLORER_PARAM_TOKEN = "$$"
# SQL Specific Things
def passes_blacklist(sql):
clean = functools.reduce(lambda sql, term: sql.upper().replace(term, ""), [t.upper() for t in app_settings.EXPLORER_SQL_WHITELIST], sql)
fails = [bl_word for bl_word in app_settings.EXPLORER_SQL_BLACKLIST if bl_word in clean.upper()]
return not any(fails), fails
def get_connection():
return connections[app_settings.EXPLORER_CONNECTION_NAME] if app_settings.EXPLORER_CONNECTION_NAME else connection
def schema_info():
"""
Construct schema information via introspection of the django models in the database.
:return: Schema information of the following form, sorted by db_table_name.
[
("package.name -> ModelClass", "db_table_name",
[
("db_column_name", "DjangoFieldType"),
(...),
]
)
]
"""
from django.apps import apps
ret = []
for label, app in apps.app_configs.items():
if app.name not in app_settings.EXPLORER_SCHEMA_EXCLUDE_APPS:
for model_name, model in apps.get_app_config(label).models.items():
friendly_model = "%s -> %s" % (app.name, model._meta.object_name)
ret.append((
friendly_model,
model._meta.db_table,
[_format_field(f) for f in model._meta.fields]
))
# Do the same thing for many_to_many fields. These don't show up in the field list of the model
# because they are stored as separate "through" relations and have their own tables
ret += [(
friendly_model,
m2m.rel.through._meta.db_table,
[_format_field(f) for f in m2m.rel.through._meta.fields]
) for m2m in model._meta.many_to_many]
return sorted(ret, key=lambda t: t[1])
def _format_field(field):
return field.get_attname_column()[1], field.get_internal_type()
def param(name):
return "%s%s%s" % (EXPLORER_PARAM_TOKEN, name, EXPLORER_PARAM_TOKEN)
def swap_params(sql, params):
p = params.items() if params else {}
for k, v in p:
        regex = re.compile(r"\$\$%s(?:\:([^\$]+))?\$\$" % str(k).lower(), re.I)
sql = regex.sub(text_type(v), sql)
return sql
def extract_params(text):
    regex = re.compile(r"\$\$([a-z0-9_]+)(?:\:([^\$]+))?\$\$")
params = re.findall(regex, text.lower())
# We support Python 2.6 so can't use a dict comprehension
return dict(zip([p[0] for p in params], [p[1] if len(p) > 1 else '' for p in params]))
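# Illustration (editorial sketch, hypothetical query string): the
# $$name$$ / $$name:default$$ token format parsed by the helpers above.

```python
import re

PARAM_REGEX = re.compile(r"\$\$([a-z0-9_]+)(?:\:([^\$]+))?\$\$")

def extract(text):
    # Returns {param_name: default_value}; params without a default map to ''.
    return dict(PARAM_REGEX.findall(text.lower()))

sql = "SELECT * FROM users WHERE id = $$user_id$$ AND region = $$region:EU$$"
# extract(sql) -> {'user_id': '', 'region': 'eu'} (note the lowercasing)
```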
# Helpers
from django.contrib.auth.forms import AuthenticationForm
from django.contrib.auth.views import login
from django.contrib.auth import REDIRECT_FIELD_NAME
def safe_login_prompt(request):
defaults = {
'template_name': 'admin/login.html',
'authentication_form': AuthenticationForm,
'extra_context': {
'title': 'Log in',
'app_path': request.get_full_path(),
REDIRECT_FIELD_NAME: request.get_full_path(),
},
}
return login(request, **defaults)
def shared_dict_update(target, source):
for k_d1 in target:
if k_d1 in source:
target[k_d1] = source[k_d1]
return target
def safe_cast(val, to_type, default=None):
try:
return to_type(val)
except ValueError:
return default
def get_int_from_request(request, name, default):
val = request.GET.get(name, default)
return safe_cast(val, int, default) if val else None
def get_params_from_request(request):
val = request.GET.get('params', None)
try:
d = {}
tuples = val.split('|')
for t in tuples:
res = t.split(':')
d[res[0]] = res[1]
return d
except Exception:
return None
def get_params_for_url(query):
if query.params:
return '|'.join(['%s:%s' % (p, v) for p, v in query.params.items()])
def url_get_rows(request):
return get_int_from_request(request, 'rows', app_settings.EXPLORER_DEFAULT_ROWS)
def url_get_query_id(request):
return get_int_from_request(request, 'query_id', None)
def url_get_log_id(request):
return get_int_from_request(request, 'querylog_id', None)
def url_get_show(request):
return bool(get_int_from_request(request, 'show', 1))
def url_get_params(request):
return get_params_from_request(request)
def allowed_query_pks(user_id):
return app_settings.EXPLORER_GET_USER_QUERY_VIEWS().get(user_id, [])
def user_can_see_query(request, kwargs):
if not request.user.is_anonymous() and 'query_id' in kwargs:
return int(kwargs['query_id']) in allowed_query_pks(request.user.id)
return False
def fmt_sql(sql):
return sqlparse.format(sql, reindent=True, keyword_case='upper')
def noop_decorator(f):
return f
def get_s3_connection():
import tinys3
return tinys3.Connection(app_settings.S3_ACCESS_KEY,
app_settings.S3_SECRET_KEY,
default_bucket=app_settings.S3_BUCKET)

import boto.ec2
from boto.ec2.connection import EC2Connection
import pickle
region = boto.ec2.regions()
print(region)
# conn = EC2Connection('<AWS_ACCESS_KEY_ID>', '<AWS_SECRET_ACCESS_KEY>')  # credentials redacted
# images = conn.get_all_images()
# pickle.dump(images, open("ami_list.p", "wb"))
# print("Done")
print("Loading Data")
l_images = pickle.load(open("ami_list.p", "rb"))
print(len(l_images))
print(l_images.__dict__)
# for i in l_images:
# print(i)
# print("{}->{}".format(i.name, i.description))
# print(i.location)
# print(i.description)
# print(i.name)
# print(type(l_images))

import unittest
from fizzbuzz_interview_question import fizz_buzz
class Test(unittest.TestCase):
def test_fizzbuzz_interview_question(self):
self.assertEqual(fizz_buzz(3), "Fizz")
self.assertEqual(fizz_buzz(5), "Buzz")
self.assertEqual(fizz_buzz(15), "FizzBuzz")
self.assertEqual(fizz_buzz(4), "4")
if __name__ == '__main__':
    unittest.main()
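# For reference, a minimal fizz_buzz implementation consistent with the
# assertions above (editorial sketch; the real fizzbuzz_interview_question
# module is not shown here):

```python
def fizz_buzz(n):
    # Multiples of both 3 and 5 come first, then each factor alone.
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)
```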

from collections import Counter
from typing import List
from utils import run
def _get_children(data: List[int], days: int):
counter = Counter(data)
for _ in range(days):
counter = Counter({k - 1: v for k, v in counter.items()})
counter[6] += counter[-1]
counter[8] += counter[-1]
del counter[-1]
return sum(counter.values())
@run(",", c=int)
def part1(data: List[int]):
return _get_children(data, 18)
@run(",", c=int)
def part2(data: List[int]):
return _get_children(data, 256)
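# Sanity check (editorial sketch): since utils.run wraps file I/O, the core
# counter-rotation trick can be exercised standalone on the Advent of Code
# 2021 day 6 sample input, 3,4,3,1,2.

```python
from collections import Counter

def simulate(timers, days):
    # Same logic as _get_children, without the run() I/O wrapper.
    counter = Counter(timers)
    for _ in range(days):
        counter = Counter({k - 1: v for k, v in counter.items()})
        counter[6] += counter[-1]  # fish that hit 0 reset to 6...
        counter[8] += counter[-1]  # ...and each spawns a newborn at 8
        del counter[-1]            # Counter.__delitem__ ignores missing keys
    return sum(counter.values())
```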

from common.consumers import Consumer
from whirligig.models import Game
from whirligig.serializers import GameSerializer
class WhirligigConsumer(Consumer):
@property
def routes(self):
return dict(
next_state=lambda game, from_state: game.next_state(from_state),
change_score=lambda game, connoisseurs_score, viewers_score: game.change_score(
connoisseurs_score, viewers_score),
change_timer=lambda game, paused: game.change_timer(paused),
answer_correct=lambda game, is_correct: game.answer_correct(is_correct),
extra_time=lambda game: game.extra_time(),
)
@property
def game_name(self):
return 'whirligig'
def get_game(self, token):
return Game.objects.get(token=token)
def serialize_game(self, game):
return GameSerializer().to_representation(game)

from django.shortcuts import render, redirect, get_object_or_404
from django.urls import reverse
from django.utils.decorators import method_decorator
from django.contrib.auth.decorators import login_required
from django.contrib.auth import authenticate, login
from django.contrib import messages
from django.views.generic.base import TemplateView
from django.views.generic import (
ListView,
CreateView,
UpdateView,
DeleteView,
DetailView
)
from .models import CustomUser, Post
from .forms import UserForm, UserEditForm, PostForm
# Home page
class IndexListView(ListView):
model = Post
template_name = 'app_blog/index.html'
context_object_name = 'posts'
@method_decorator(login_required)
def dispatch(self, *args, **kwargs):
return super().dispatch(*args, **kwargs)
def get_queryset(self):
self.object_list = Post.objects.filter(author=self.request.user).order_by('created_date')
return self.object_list
# Login page
class LoginTemplateView(TemplateView):
template_name = 'app_blog/login.html'
# User registration form
class UserCadastroCreateView(CreateView):
template_name = 'app_blog/user_cadastro.html'
form_class = UserForm
def form_valid(self, form):
return super().form_valid(form)
def get_success_url(self):
return reverse('login')
# User edit form
class UserEditUpdateView(UpdateView):
template_name = 'app_blog/user_edit.html'
form_class = UserEditForm
@method_decorator(login_required)
def dispatch(self, *args, **kwargs):
return super().dispatch(*args, **kwargs)
def get_object(self):
user_id = self.kwargs.get('user_id')
user = self.request.user
return get_object_or_404(CustomUser, id=user_id, username=user)
def form_valid(self, form):
return super().form_valid(form)
def get_success_url(self):
return reverse('index')
# Post creation form
class PostCreateView(CreateView):
template_name = 'app_blog/post_cadastro.html'
form_class = PostForm
@method_decorator(login_required)
def dispatch(self, *args, **kwargs):
return super().dispatch(*args, **kwargs)
def form_valid(self, form):
post = form.save(commit=False)
post.author = self.request.user
post.save()
return super().form_valid(form)
def get_success_url(self):
return reverse('index')
# Post edit form
class PostUpdateView(UpdateView):
template_name = 'app_blog/post_cadastro.html'
form_class = PostForm
@method_decorator(login_required)
def dispatch(self, *args, **kwargs):
return super().dispatch(*args, **kwargs)
def get_object(self):
post_id = self.kwargs.get('post_id')
user = self.request.user
return get_object_or_404(Post, id=post_id, author=user)
def form_valid(self, form):
return super().form_valid(form)
def get_success_url(self):
return reverse('index')
# Delete post
class PostDeleteView(DeleteView):
@method_decorator(login_required)
def dispatch(self, *args, **kwargs):
return super().dispatch(*args, **kwargs)
def get_object(self):
post_id = self.kwargs.get('post_id')
user = self.request.user
return get_object_or_404(Post, id=post_id, author=user)
def get_success_url(self):
return reverse('index')
# Post detail
class PostDetailView(DetailView):
model = Post
template_name = 'app_blog/post_detail.html'
def get_object(self):
post_id = self.kwargs.get('post_id')
return get_object_or_404(Post, id=post_id)
# User profile page
def user_perfil(request, username):
template_name = 'app_blog/user_perfil.html'
current_user = request.user.username
if current_user == username:
return redirect('index')
else:
wanted_user = get_object_or_404(CustomUser, username=username)
return render(request, template_name, {'wanted_user': wanted_user})
# Submit login
def submit_login(request):
if request.POST:
username = request.POST['username']
password = request.POST['password']
user = authenticate(request, username=username, password=password)
if user:
login(request, user)
return redirect('index')
else:
            messages.error(request, "Invalid username/password! Please try again.")
return redirect('login')
# 404 error page
def error_404(request, exception):
    return render(request, '404.html')

from __future__ import absolute_import, division, print_function, unicode_literals
from decimal import Decimal

import unittest

from amaascore.assets.synthetic import Synthetic
from amaascore.tools.generate_asset import generate_synthetic


class SyntheticTest(unittest.TestCase):

    def setUp(self):
        self.longMessage = True  # Print complete error message on failure
        self.synthetic = generate_synthetic()
        self.asset_id = self.synthetic.asset_id

    def tearDown(self):
        pass

    def test_Synthetic(self):
        self.assertEqual(type(self.synthetic), Synthetic)


if __name__ == '__main__':
    unittest.main()
# File: bijou/hook.py (hitlic/agile, Apache-2.0)
from bijou.utils import ToolBox as tbox
from functools import partial


class ListContainer():
    def __init__(self, items):
        self.items = tbox.listify(items)

    def __getitem__(self, idx):
        if isinstance(idx, (int, slice)):
            return self.items[idx]
        if isinstance(idx[0], bool):  # idx is a list of bools: return the items where the mask is True
            assert len(idx) == len(self)  # bool mask
            return [o for m, o in zip(idx, self.items) if m]
        return [self.items[i] for i in idx]

    def __len__(self):
        return len(self.items)

    def __iter__(self):
        return iter(self.items)

    def __setitem__(self, i, o):
        self.items[i] = o

    def __delitem__(self, i):
        del(self.items[i])

    def __repr__(self):
        res = f'{self.__class__.__name__} ({len(self)} items)\n{self.items[:10]}'
        if len(self) > 10:
            res = res[:-1] + '...]'
        return res


class Hook():
    def __init__(self, m, fun, forward):
        if forward:
            self.hook = m.register_forward_hook(partial(fun, self))
        else:
            self.hook = m.register_backward_hook(partial(fun, self))
        self.name = str(m).replace('\n', '').replace(' ', '')

    def remove(self):
        self.hook.remove()

    def __del__(self):
        self.remove()


class Hooks(ListContainer):
    def __init__(self, model, fun, forward=True):
        """
        model: pytorch module
        fun: hook function
        forward: True is forward hook, False is backward hook
        """
        ms = [m for m in model.modules() if len(list(m.children())) == 0]
        super().__init__([Hook(m, fun, forward) for m in ms])

    def __enter__(self, *args):
        return self

    def __exit__(self, *args):
        self.remove()

    def __del__(self):
        self.remove()

    def __delitem__(self, i):
        self[i].remove()
        super().__delitem__(i)

    def remove(self):
        for h in self:
            h.remove()
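`ListContainer` supports three indexing modes: a plain int/slice, a boolean mask, and a list of integer positions. These semantics can be exercised standalone; the sketch below inlines a minimal `listify` as a stand-in for `tbox.listify` (assumed here to normalize any input to a list):

```python
def listify(items):
    # Hypothetical stand-in for bijou's tbox.listify: normalize input to a list.
    if items is None:
        return []
    return list(items) if isinstance(items, (list, tuple)) else [items]


class ListContainer:
    def __init__(self, items):
        self.items = listify(items)

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        if isinstance(idx, (int, slice)):
            return self.items[idx]
        if isinstance(idx[0], bool):               # boolean mask
            assert len(idx) == len(self)
            return [o for m, o in zip(idx, self.items) if m]
        return [self.items[i] for i in idx]        # integer-list (fancy) indexing


lc = ListContainer(['a', 'b', 'c', 'd'])
print(lc[1])                           # b
print(lc[[True, False, True, False]])  # ['a', 'c']
print(lc[[0, 3]])                      # ['a', 'd']
```

Note the order of the checks matters: `bool` is a subclass of `int` in Python, so the mask branch only triggers for non-int/slice indices whose first element is a genuine `True`/`False`, while `[0, 3]` falls through to fancy indexing.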
# File: polyaxon/factories/factory_jobs.py (elyase/polyaxon, MIT)
from faker import Factory as FakerFactory
import factory

from db.models.jobs import Job, JobStatus
from factories.factory_projects import ProjectFactory
from factories.factory_users import UserFactory
from factories.fixtures import job_spec_parsed_content

fake = FakerFactory.create()


class JobFactory(factory.DjangoModelFactory):
    config = job_spec_parsed_content.parsed_data
    user = factory.SubFactory(UserFactory)
    project = factory.SubFactory(ProjectFactory)

    class Meta:
        model = Job


class JobStatusFactory(factory.DjangoModelFactory):
    job = factory.SubFactory(JobFactory)

    class Meta:
        model = JobStatus
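The `SubFactory` declarations above are resolved lazily: building a `JobFactory` instance first builds its `UserFactory` and `ProjectFactory` dependencies. A toy sketch of that resolution mechanism follows (hypothetical names, not the real factory_boy API, which additionally handles `Meta.model` instantiation, sequences, and lazy attributes):

```python
class SubFactory:
    """Declares that a field should be built by another factory."""
    def __init__(self, factory_cls):
        self.factory_cls = factory_cls


class SimpleFactory:
    @classmethod
    def create(cls):
        fields = {}
        for name, value in vars(cls).items():
            if name.startswith('_'):
                continue
            # Recurse into nested factories; copy plain declarations as-is.
            fields[name] = value.factory_cls.create() if isinstance(value, SubFactory) else value
        return fields


class UserFactory(SimpleFactory):
    username = 'test-user'


class JobFactory(SimpleFactory):
    config = {'run': 'echo hello'}
    user = SubFactory(UserFactory)


job = JobFactory.create()
print(job['user']['username'])  # test-user
```

This is why `JobStatusFactory` needs only a single `SubFactory(JobFactory)` line: the entire job, user, and project graph is created transitively.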