# users/views.py (repo: Mohit7143/class, license: BSD-3-Clause)
from django.shortcuts import render, redirect
from django.views.generic import View
from django.contrib.auth.models import User
from .forms import LoginUser, RegisterUser
from django.http import HttpResponse, Http404
from django.contrib.auth import authenticate, login, logout


class UserLogin(View):
    form_class = LoginUser

    def get(self, request):
        return redirect('users:test')

    def post(self, request):
        form = self.form_class(request.POST)
        # note: is_valid must be *called*; the bare bound method is always truthy
        if form.is_valid():
            username = request.POST['username']
            password = request.POST['password']
            user = authenticate(username=username, password=password)
            if user is not None:
                login(request, user)
                return redirect('drinks:index')
        return redirect('users:test')


class UserRegister(View):
    form_class = RegisterUser

    def get(self, request):
        return redirect('users:test')

    def post(self, request):
        form = self.form_class(request.POST)
        if form.is_valid():
            user = form.save(commit=False)
            username = form.cleaned_data['username']
            password = form.cleaned_data['password']
            user.set_password(password)
            user.save()
            user = authenticate(username=username, password=password)
            if user is not None:
                login(request, user)
                return redirect('drinks:index')
        return redirect('users:test')


def LogoutView(request):
    logout(request)
    return redirect('users:test')


def test(request):
    log_form = LoginUser
    Reg_form = RegisterUser
    template = 'users/login_test.html'
    if request.user.is_authenticated:
        return redirect('drinks:index')
    context = {
        'form': log_form,
        'tmp': Reg_form,
    }
    return render(request, template, context)
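One pitfall worth calling out from `UserLogin.post`: in the original source the check was written `if form.is_valid:` without parentheses, which tests the bound method object itself (always truthy), so validation could never fail. A minimal stdlib sketch of that failure mode (the `Form` class here is a hypothetical stand-in, not Django's):

```python
class Form:
    """Hypothetical stand-in for a form whose validation always fails."""
    def is_valid(self):
        return False

form = Form()

# The bound method object is truthy, so `if form.is_valid:` always takes
# the branch, regardless of what the method would return when called.
print(bool(form.is_valid))   # True  (the method object, not the result)
print(form.is_valid())       # False (the actual validation result)
```

This is why linters flag `if obj.method:` on callables: it silently short-circuits the check.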
# books/masteringPython/cp15/setup_template.py (repo: Bingwen-Hu/hackaway, license: BSD-2-Clause)
import setuptools


if __name__ == '__main__':
    setuptools.setup(
        name='Name',
        version='0.1',
        # this automatically detects the packages in the specified
        # (or current) directory if no directory is given.
        packages=setuptools.find_packages(exclude=['tests', 'docs']),
        # the entry points are the big difference between
        # setuptools and distutils; the entry points make it
        # possible to extend setuptools and make it smarter and/or
        # add custom commands
        entry_points={
            # The following would add: python setup.py command_name
            'distutils.commands': [
                'command_name = your_package:YourClass',
            ],
            # the following would make these functions callable as
            # standalone scripts. In this case it would add the spam
            # command to run in your shell.
            'console_scripts': [
                'spam = your_package:SpamClass',
            ],
        },
        # Packages required to use this one; it is possible to
        # specify simply the application name, a specific version
        # or a version range. The syntax is the same as pip accepts.
        install_requires=['docutils>=0.3'],
        # Extra requirements are another amazing feature of setuptools:
        # they allow people to install extra dependencies if they are
        # interested. In this example doing a "pip install name[all]"
        # would install the python-utils package as well.
        # (the keyword is extras_require; the original had extras_requires,
        # which setuptools silently ignores)
        extras_require={
            'all': ['python-utils'],
        },
        # Packages required to install this package, not just for running
        # it but for the actual install. These will not be installed but
        # only downloaded so they can be used during the install.
        # The pytest-runner is a useful example:
        setup_requires=['pytest-runner'],
        # The requirements for the test command. Regular testing is possible
        # through: python setup.py test. The pytest module installs a different
        # command though: python setup.py pytest
        # (the keyword is tests_require; the original had tests_requires)
        tests_require=['pytest'],
        # the package_data, include_package_data and exclude_package_data
        # arguments are used to specify which non-python files should be included
        # in the package. An example would be documentation files. More about this
        # in the next paragraph.
        package_data={
            # include (restructured text) documentation files from any directory
            '': ['*.rst'],
            # include text files from the eggs package
            'eggs': ['*.txt'],
        },
        # If a package is zip_safe the package will be installed as a zip file.
        # This can be faster, but it generally doesn't make too much of a difference
        # and breaks packages if they need access to either the source or the data
        # files. When this flag is omitted setuptools will try to autodetect based
        # on the existence of data files and C extensions. If either exists it will
        # not install the package as a zip. Generally omitting this parameter is the
        # best option, but if you have strange problems with missing files, try
        # disabling zip_safe.
        zip_safe=False,
        # All of the following fields are PyPI metadata fields. When registering a
        # package at PyPI this is used as information on the package page.
        author='Rick van Hattem',
        author_email='wolph@wol.ph',
        # this should be a short description (one line) for the package
        description='Description for the name package',
        # For this parameter I would recommend including the README.rst
        long_description="A very long description",
        # The license should be one of the standard open source licenses:
        # https://opensource.org/licenses/alphabetical
        license='BSD',
        # Homepage url for the package
        url='https://wol.ph/',
    )
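The `distutils.commands` entry point in the template expects a class like `your_package:YourClass`. As a rough sketch (the names here are illustrative, not from the template), such a command subclasses `setuptools.Command` and must define the three methods below:

```python
from setuptools import Command


class SpamCommand(Command):
    """Hypothetical custom command, exposed as: python setup.py spam"""
    description = 'print spam (illustrative example)'
    user_options = []  # no --flags for this command

    def initialize_options(self):
        # set defaults for every entry in user_options here
        pass

    def finalize_options(self):
        # validate or derive final option values here
        pass

    def run(self):
        # the actual work of the command
        print('spam!')
```

Registered under `'distutils.commands': ['spam = your_package:SpamCommand']`, setuptools would instantiate this class and call `run()` when the command is invoked.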
# compressor/base.py (repo: rossowl/django-compressor, license: Apache-2.0)
import os

from django.core.files.base import ContentFile
from django.template.loader import render_to_string
from django.utils.encoding import smart_unicode

from compressor.cache import get_hexdigest, get_mtime
from compressor.conf import settings
from compressor.exceptions import CompressorError, UncompressableFileError
from compressor.filters import CompilerFilter
from compressor.storage import default_storage
from compressor.utils import get_class, staticfiles
from compressor.utils.decorators import cached_property

# Some constants for nicer handling.
SOURCE_HUNK, SOURCE_FILE = 1, 2
METHOD_INPUT, METHOD_OUTPUT = 'input', 'output'


class Compressor(object):
    """
    Base compressor object to be subclassed for content type
    depending implementations details.
    """
    type = None

    def __init__(self, content=None, output_prefix="compressed"):
        self.content = content or ""
        self.output_prefix = output_prefix
        self.charset = settings.DEFAULT_CHARSET
        self.storage = default_storage
        self.split_content = []
        self.extra_context = {}
        self.all_mimetypes = dict(settings.COMPRESS_PRECOMPILERS)
        self.finders = staticfiles.finders

    def split_contents(self):
        """
        To be implemented in a subclass, should return an
        iterable with four values: kind, value, basename, element
        """
        raise NotImplementedError

    def get_basename(self, url):
        try:
            base_url = self.storage.base_url
        except AttributeError:
            base_url = settings.COMPRESS_URL
        if not url.startswith(base_url):
            raise UncompressableFileError(
                "'%s' isn't accessible via COMPRESS_URL ('%s') and can't be"
                " compressed" % (url, base_url))
        basename = url.replace(base_url, "", 1)
        # drop the querystring, which is used for non-compressed cache-busting.
        return basename.split("?", 1)[0]

    def get_filename(self, basename):
        # first try to find it with staticfiles (in debug mode)
        filename = None
        if settings.DEBUG and self.finders:
            filename = self.finders.find(basename)
        # secondly try finding the file in the root
        elif self.storage.exists(basename):
            filename = self.storage.path(basename)
        if filename:
            return filename
        # or just raise an exception as the last resort
        raise UncompressableFileError(
            "'%s' could not be found in the COMPRESS_ROOT '%s'%s" % (
                basename, settings.COMPRESS_ROOT,
                self.finders and " or with staticfiles." or "."))

    @cached_property
    def parser(self):
        return get_class(settings.COMPRESS_PARSER)(self.content)

    @cached_property
    def cached_filters(self):
        return [get_class(filter_cls) for filter_cls in self.filters]

    @cached_property
    def mtimes(self):
        return [str(get_mtime(value))
                for kind, value, basename, elem in self.split_contents()
                if kind == SOURCE_FILE]

    @cached_property
    def cachekey(self):
        return get_hexdigest(''.join(
            [self.content] + self.mtimes).encode(self.charset), 12)

    @cached_property
    def hunks(self):
        for kind, value, basename, elem in self.split_contents():
            if kind == SOURCE_HUNK:
                content = self.filter(value, METHOD_INPUT,
                    elem=elem, kind=kind, basename=basename)
                yield smart_unicode(content)
            elif kind == SOURCE_FILE:
                content = ""
                fd = open(value, 'rb')
                try:
                    content = fd.read()
                except IOError as e:
                    raise UncompressableFileError(
                        "IOError while processing '%s': %s" % (value, e))
                finally:
                    fd.close()
                content = self.filter(content, METHOD_INPUT,
                    filename=value, basename=basename, elem=elem, kind=kind)
                attribs = self.parser.elem_attribs(elem)
                charset = attribs.get("charset", self.charset)
                yield smart_unicode(content, charset.lower())

    @cached_property
    def concat(self):
        return '\n'.join((hunk.encode(self.charset) for hunk in self.hunks))

    def precompile(self, content, kind=None, elem=None, filename=None, **kwargs):
        if not kind:
            return content
        attrs = self.parser.elem_attribs(elem)
        mimetype = attrs.get("type", None)
        if mimetype:
            command = self.all_mimetypes.get(mimetype)
            if command is None:
                if mimetype not in ("text/css", "text/javascript"):
                    raise CompressorError("Couldn't find any precompiler in "
                                          "COMPRESS_PRECOMPILERS setting for "
                                          "mimetype '%s'." % mimetype)
            else:
                return CompilerFilter(content, filter_type=self.type,
                    command=command, filename=filename).input(**kwargs)
        return content

    def filter(self, content, method, **kwargs):
        # run compiler
        if method == METHOD_INPUT:
            content = self.precompile(content, **kwargs)

        for filter_cls in self.cached_filters:
            filter_func = getattr(
                filter_cls(content, filter_type=self.type), method)
            try:
                if callable(filter_func):
                    content = filter_func(**kwargs)
            except NotImplementedError:
                pass
        return content

    @cached_property
    def combined(self):
        return self.filter(self.concat, method=METHOD_OUTPUT)

    def filepath(self, content):
        return os.path.join(settings.COMPRESS_OUTPUT_DIR.strip(os.sep),
            self.output_prefix, "%s.%s" % (get_hexdigest(content, 12), self.type))

    def output(self, mode='file', forced=False):
        """
        The general output method, override in subclass if you need to do
        any custom modification. Calls other mode specific methods or simply
        returns the content directly.
        """
        # First check whether we should do the full compression,
        # including precompilation (or if it's forced)
        if settings.COMPRESS_ENABLED or forced:
            content = self.combined
        elif settings.COMPRESS_PRECOMPILERS:
            # or concatenating it, if pre-compilation is enabled
            content = self.concat
        else:
            # or just doing nothing, when neither
            # compression nor compilation is enabled
            return self.content

        # Short-circuit in case the content is empty.
        if not content:
            return ''

        # Then check for the appropriate output method and call it
        output_func = getattr(self, "output_%s" % mode, None)
        if callable(output_func):
            return output_func(mode, content, forced)

        # Total failure, raise a general exception
        raise CompressorError(
            "Couldn't find output method for mode '%s'" % mode)

    def output_file(self, mode, content, forced=False):
        """
        The output method that saves the content to a file and renders
        the appropriate template with the file's URL.
        """
        new_filepath = self.filepath(content)
        if not self.storage.exists(new_filepath) or forced:
            self.storage.save(new_filepath, ContentFile(content))
        url = self.storage.url(new_filepath)
        return self.render_output(mode, {"url": url})

    def output_inline(self, mode, content, forced=False):
        """
        The output method that directly returns the content for inline
        display.
        """
        return self.render_output(mode, {"content": content})

    def render_output(self, mode, context=None):
        """
        Renders the compressor output with the appropriate template for
        the given mode and template context.
        """
        if context is None:
            context = {}
        context.update(self.extra_context)
        return render_to_string(
            "compressor/%s_%s.html" % (self.type, mode), context)
# -*- coding: utf-8 -*-
# DialogCalibrate.py (repo: n2ee/Wind-Tunnel-GUI, license: BSD-3-Clause)

# Form implementation generated from reading ui file 'DialogCalibrate.ui'
#
# Created by: PyQt5 UI code generator 5.9.2
#
# WARNING! All changes made in this file will be lost!

from PyQt5 import QtCore, QtGui, QtWidgets


class Ui_DialogCalibrate(object):
    def setupUi(self, DialogCalibrate):
        DialogCalibrate.setObjectName("DialogCalibrate")
        DialogCalibrate.resize(451, 240)
        self.btnAoAWingTare = QtWidgets.QPushButton(DialogCalibrate)
        self.btnAoAWingTare.setEnabled(True)
        self.btnAoAWingTare.setGeometry(QtCore.QRect(260, 80, 161, 32))
        self.btnAoAWingTare.setObjectName("btnAoAWingTare")
        self.lblRawAoA = QtWidgets.QLabel(DialogCalibrate)
        self.lblRawAoA.setGeometry(QtCore.QRect(280, 20, 81, 21))
        font = QtGui.QFont()
        font.setPointSize(14)
        self.lblRawAoA.setFont(font)
        self.lblRawAoA.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing|QtCore.Qt.AlignVCenter)
        self.lblRawAoA.setObjectName("lblRawAoA")
        self.txtRawAoA = QtWidgets.QLabel(DialogCalibrate)
        self.txtRawAoA.setGeometry(QtCore.QRect(370, 20, 56, 20))
        font = QtGui.QFont()
        font.setPointSize(18)
        self.txtRawAoA.setFont(font)
        self.txtRawAoA.setObjectName("txtRawAoA")
        self.lblRawAirspeed = QtWidgets.QLabel(DialogCalibrate)
        self.lblRawAirspeed.setGeometry(QtCore.QRect(10, 20, 131, 21))
        font = QtGui.QFont()
        font.setPointSize(14)
        self.lblRawAirspeed.setFont(font)
        self.lblRawAirspeed.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing|QtCore.Qt.AlignVCenter)
        self.lblRawAirspeed.setObjectName("lblRawAirspeed")
        self.txtRawAirspeed = QtWidgets.QLabel(DialogCalibrate)
        self.txtRawAirspeed.setGeometry(QtCore.QRect(150, 20, 56, 20))
        font = QtGui.QFont()
        font.setPointSize(18)
        self.txtRawAirspeed.setFont(font)
        self.txtRawAirspeed.setObjectName("txtRawAirspeed")
        self.btnAirspeedTare = QtWidgets.QPushButton(DialogCalibrate)
        self.btnAirspeedTare.setGeometry(QtCore.QRect(30, 50, 161, 32))
        self.btnAirspeedTare.setObjectName("btnAirspeedTare")
        self.btnDone = QtWidgets.QPushButton(DialogCalibrate)
        self.btnDone.setGeometry(QtCore.QRect(310, 190, 110, 32))
        self.btnDone.setDefault(True)
        self.btnDone.setObjectName("btnDone")
        self.btnAoAPlatformTare = QtWidgets.QPushButton(DialogCalibrate)
        self.btnAoAPlatformTare.setEnabled(True)
        self.btnAoAPlatformTare.setGeometry(QtCore.QRect(260, 50, 161, 32))
        self.btnAoAPlatformTare.setObjectName("btnAoAPlatformTare")
        self.inpAoAOffset = QtWidgets.QDoubleSpinBox(DialogCalibrate)
        self.inpAoAOffset.setGeometry(QtCore.QRect(350, 120, 62, 31))
        font = QtGui.QFont()
        font.setPointSize(14)
        self.inpAoAOffset.setFont(font)
        self.inpAoAOffset.setDecimals(1)
        self.inpAoAOffset.setMaximum(90.0)
        self.inpAoAOffset.setSingleStep(0.1)
        self.inpAoAOffset.setProperty("value", 0.0)
        self.inpAoAOffset.setObjectName("inpAoAOffset")
        self.lblAoAOffset = QtWidgets.QLabel(DialogCalibrate)
        self.lblAoAOffset.setGeometry(QtCore.QRect(240, 120, 101, 21))
        font = QtGui.QFont()
        font.setPointSize(14)
        self.lblAoAOffset.setFont(font)
        self.lblAoAOffset.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing|QtCore.Qt.AlignVCenter)
        self.lblAoAOffset.setObjectName("lblAoAOffset")

        self.retranslateUi(DialogCalibrate)
        self.btnDone.clicked.connect(DialogCalibrate.accept)
        QtCore.QMetaObject.connectSlotsByName(DialogCalibrate)

    def retranslateUi(self, DialogCalibrate):
        _translate = QtCore.QCoreApplication.translate
        DialogCalibrate.setWindowTitle(_translate("DialogCalibrate", "Dialog"))
        self.btnAoAWingTare.setText(_translate("DialogCalibrate", "Set AoA Wing Tare"))
        self.lblRawAoA.setText(_translate("DialogCalibrate", "Raw AoA:"))
        self.txtRawAoA.setText(_translate("DialogCalibrate", "N/A"))
        self.lblRawAirspeed.setText(_translate("DialogCalibrate", "Raw Airspeed:"))
        self.txtRawAirspeed.setText(_translate("DialogCalibrate", "N/A"))
        self.btnAirspeedTare.setText(_translate("DialogCalibrate", "Set Airspeed Tare"))
        self.btnDone.setText(_translate("DialogCalibrate", "Done"))
        self.btnAoAPlatformTare.setText(_translate("DialogCalibrate", "Set AoA Platform Tare"))
        self.lblAoAOffset.setText(_translate("DialogCalibrate", "AoA Offset:"))


if __name__ == "__main__":
    import sys
    app = QtWidgets.QApplication(sys.argv)
    DialogCalibrate = QtWidgets.QDialog()
    ui = Ui_DialogCalibrate()
    ui.setupUi(DialogCalibrate)
    DialogCalibrate.show()
    sys.exit(app.exec_())
# models.py (repo: askomorokhov/fast-api-example, license: MIT)
import datetime

from sqlalchemy import Boolean, Column, ForeignKey, Integer, String, DateTime
from sqlalchemy.orm import relationship

from database import Base


class Org(Base):
    __tablename__ = "orgs"

    id = Column(Integer, primary_key=True, index=True)
    name = Column(String, unique=True, index=True)
    created_at = Column(DateTime, default=datetime.datetime.utcnow)

    buildings = relationship("Building", back_populates="org")


class Building(Base):
    __tablename__ = "buildings"

    id = Column(Integer, primary_key=True, index=True)
    org_id = Column(Integer, ForeignKey(Org.id))
    name = Column(String, unique=True, index=True)
    address = Column(String)

    org = relationship("Org", back_populates="buildings")
    groups = relationship("Group", back_populates="building")


class Group(Base):
    __tablename__ = "groups"

    id = Column(Integer, primary_key=True, index=True)
    building_id = Column(Integer, ForeignKey(Building.id))
    name = Column(String, index=True)

    building = relationship("Building", back_populates="groups")
    points = relationship("Point", back_populates="building")


class Point(Base):
    __tablename__ = "points"

    id = Column(Integer, primary_key=True, index=True)
    # the original referenced ForeignKey(Building.id) here, which looks like a
    # copy-paste slip; a group_id column should reference Group.id
    group_id = Column(Integer, ForeignKey(Group.id))
    device_id = Column(Integer, index=True)
    name = Column(String)
    location = Column(String)

    building = relationship("Group", back_populates="points")
# app/extensions.py (repo: rileymjohnson/fbla, license: MIT)
import logging

from flask_bcrypt import Bcrypt
from flask_caching import Cache
from flask_debugtoolbar import DebugToolbarExtension
from flask_login import LoginManager
from flask_migrate import Migrate
from flask_sqlalchemy import SQLAlchemy

bcrypt = Bcrypt()
login_manager = LoginManager()
db = SQLAlchemy()
migrate = Migrate()
cache = Cache()
debug_toolbar = DebugToolbarExtension()
gunicorn_error_logger = logging.getLogger('gunicorn.error')
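This extensions module follows Flask's application-factory convention: each extension is instantiated at import time without an app and bound later via `init_app(app)` inside the factory. A toy, dependency-free sketch of that deferred-binding pattern (the `Extension` class and `create_app` are illustrative stand-ins, not Flask's actual API):

```python
class Extension:
    """Minimal stand-in for a Flask extension supporting init_app()."""
    def __init__(self, app=None):
        self.app = None
        if app is not None:
            self.init_app(app)

    def init_app(self, app):
        # bind lazily, so one module-level instance can serve any app
        self.app = app


db = Extension()            # module level: created unbound, like extensions.py


def create_app():
    app = {'name': 'demo'}  # stand-in for Flask(__name__)
    db.init_app(app)        # the factory binds the shared instance
    return app


application = create_app()
```

The payoff of this split is that other modules can `import db` without importing the app, avoiding circular imports between models, views, and the application object.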
# tests/scripts/negative_linenumber_offsets.py (repo: benfred/py-spy, license: MIT)
import time


def f():
    [
        # Must be split over multiple lines to see the error.
        # https://github.com/benfred/py-spy/pull/208
        time.sleep(1)
        for _ in range(1000)
    ]


f()
c391b115fbcae9056636fe28c8607436688bbc00 | 6,456 | py | Python | mpc_ros/script/teleop_keyboard.py | NaokiTakahashi12/mpc_ros | 8451fec293a5aee72d5fad0323ec206d08d0ed96 | [
"Apache-2.0"
] | 335 | 2019-03-11T23:03:07.000Z | 2022-03-31T13:40:21.000Z | mpc_ros/script/teleop_keyboard.py | NaokiTakahashi12/mpc_ros | 8451fec293a5aee72d5fad0323ec206d08d0ed96 | [
"Apache-2.0"
] | 30 | 2019-05-02T13:59:14.000Z | 2022-03-30T10:56:34.000Z | mpc_ros/script/teleop_keyboard.py | NaokiTakahashi12/mpc_ros | 8451fec293a5aee72d5fad0323ec206d08d0ed96 | [
"Apache-2.0"
] | 103 | 2018-07-11T15:08:38.000Z | 2022-03-17T13:57:24.000Z | #!/usr/bin/python
# This is a modified verison of turtlebot_teleop.py
# to fullfill the needs of HyphaROS MiniCar use case
# Copyright (c) 2018, HyphaROS Workshop
#
# The original license info are as below:
# Copyright (c) 2011, Willow Garage, Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of the Willow Garage, Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
import sys, select, termios, tty, math
import rospy
from ackermann_msgs.msg import AckermannDriveStamped
header_msg = """
Control HyphaROS Minicar!
-------------------------
Moving around:
        i
   j    k    l
        ,
w/x : increase/decrease throttle bounds by 10%
e/c : increase/decrease steering bounds by 10%
s : safety mode
space key, k : force stop
anything else : keep previous commands
CTRL-C to quit
"""
# Func for getting keyboard value
def getKey(safety_mode):
    if safety_mode:  # wait until keyboard interrupt
        tty.setraw(sys.stdin.fileno())
        select.select([sys.stdin], [], [], 0)
        key = sys.stdin.read(1)
        termios.tcsetattr(sys.stdin, termios.TCSADRAIN, settings)
        return key
    else:  # pass if not detected
        tty.setraw(sys.stdin.fileno())
        rlist, _, _ = select.select([sys.stdin], [], [], 0.1)
        if rlist:
            key = sys.stdin.read(1)
        else:
            key = ''
        termios.tcsetattr(sys.stdin, termios.TCSADRAIN, settings)
        return key
# Func for showing current bounds
def showInfo(speed_bound, angle_bound):
    return "current bounds:\tspeed %s\tangle %s " % (speed_bound, angle_bound)
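The command-constraints section of the main loop below clamps the speed and angle targets to their bounds with four if-statements. The same logic as a reusable helper, shown as a minimal sketch (the name `clamp` is hypothetical and not used by the script):

```python
def clamp(value, bound):
    # Keep value inside [-bound, bound]; mirrors the four if-statements
    # in the command-constraints section of the main loop.
    return max(-bound, min(bound, value))
```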
# Main Func
if __name__ == "__main__":
    settings = termios.tcgetattr(sys.stdin)
    rospy.init_node('minicar_teleop')
    pub_cmd = rospy.Publisher('/ackermann_cmd', AckermannDriveStamped, queue_size=5)
    pub_safe = rospy.Publisher('/ackermann_safe', AckermannDriveStamped, queue_size=5)
    safe_mode = bool(rospy.get_param('~safety_mode', False))  # true for safety cmds
    speed_i = float(rospy.get_param('~speed_incremental', 0.1))  # m/s
    angle_i = float(rospy.get_param('~angle_incremental', 5.0*math.pi/180.0))  # rad (=5 degrees)
    speed_bound = float(rospy.get_param('~speed_bound', 2.0))
    angle_bound = float(rospy.get_param('~angle_bound', 30.0*math.pi/180.0))
    if safe_mode:
        print("Switched to Safety Mode !")

    moveBindings = {
        'i': (speed_i, 0.0),
        'j': (0.0, angle_i),
        'l': (0.0, -angle_i),
        ',': (-speed_i, 0.0),
    }
    boundBindings = {
        'w': (1.1, 1),
        'x': (.9, 1),
        'e': (1, 1.1),
        'c': (1, .9),
    }

    status = 0
    acc = 0.1
    target_speed = 0.0  # m/s
    target_angle = 0.0  # rad

    # Create AckermannDriveStamped msg object
    ackermann_msg = AckermannDriveStamped()
    #ackermann_msg.header.frame_id = 'car_id'  # for future multi-car application

    try:
        print(header_msg)
        print(showInfo(speed_bound, angle_bound))
        while(1):
            key = getKey(safe_mode)
            if key in moveBindings.keys():
                target_speed = target_speed + moveBindings[key][0]
                target_angle = target_angle + moveBindings[key][1]
            elif key in boundBindings.keys():
                speed_bound = speed_bound * boundBindings[key][0]
                angle_bound = angle_bound * boundBindings[key][1]
                print(showInfo(speed_bound, angle_bound))
                if (status == 14):
                    print(header_msg)
                status = (status + 1) % 15
            elif key == ' ' or key == 'k':
                target_speed = 0.0
                target_angle = 0.0
            elif key == 's':  # switch safety mode
                safe_mode = not safe_mode
                if safe_mode:
                    print("Switched to Safety Mode !")
                else:
                    print("Back to Standard Mode !")
            elif key == '\x03':  # ctrl + C
                break

            # Command constraints
            if target_speed > speed_bound:
                target_speed = speed_bound
            if target_speed < -speed_bound:
                target_speed = -speed_bound
            if target_angle > angle_bound:
                target_angle = angle_bound
            if target_angle < -angle_bound:
                target_angle = -angle_bound

            # Publishing command
            #ackermann_msg.header.stamp = rospy.Time.now()  # for future multi-car application
            ackermann_msg.drive.speed = target_speed
            ackermann_msg.drive.steering_angle = target_angle
            if safe_mode:
                pub_safe.publish(ackermann_msg)
            else:
                pub_cmd.publish(ackermann_msg)

    except Exception as e:
        print(e)

    finally:
        ackermann_msg.drive.speed = 0
        ackermann_msg.drive.steering_angle = 0
        pub_cmd.publish(ackermann_msg)
        pub_safe.publish(ackermann_msg)
        termios.tcsetattr(sys.stdin, termios.TCSADRAIN, settings)
| 36.891429 | 95 | 0.629647 | 819 | 6,456 | 4.832723 | 0.343101 | 0.030318 | 0.018949 | 0.020212 | 0.295099 | 0.154118 | 0.137443 | 0.125316 | 0.10763 | 0.078828 | 0 | 0.017131 | 0.276642 | 6,456 | 174 | 96 | 37.103448 | 0.830407 | 0.333178 | 0 | 0.267857 | 0 | 0 | 0.130322 | 0.005881 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.026786 | null | null | 0.071429 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c392d36cc5c68370d275a885ceff96ecfac69bfd | 895 | py | Python | gym-dubins-airplane/gym_dubins_airplane/envs/config.py | hasanisci/gym-dubins-ac | fe205e75f27bf1b8a2858f5b9973ee09e43bfbce | [
"MIT"
] | 2 | 2021-02-06T20:01:56.000Z | 2021-07-12T13:00:49.000Z | gym-dubins-airplane/gym_dubins_airplane/envs/config.py | hasanisci/gym-dubins-ac | fe205e75f27bf1b8a2858f5b9973ee09e43bfbce | [
"MIT"
] | null | null | null | gym-dubins-airplane/gym_dubins_airplane/envs/config.py | hasanisci/gym-dubins-ac | fe205e75f27bf1b8a2858f5b9973ee09e43bfbce | [
"MIT"
] | 2 | 2021-02-14T15:39:15.000Z | 2021-07-12T13:00:53.000Z | import math
class Config:
    G = 9.8
    EPISODES = 1000
    # input dim
    window_width = 800  # pixels
    window_height = 800  # pixels
    window_z = 800  # pixels
    diagonal = 800  # this one is used to normalize dist_to_intruder
    tick = 30
    scale = 30
    # distance param
    minimum_separation = 555 / scale
    NMAC_dist = 150 / scale
    horizon_dist = 4000 / scale
    initial_min_dist = 3000 / scale
    goal_radius = 600 / scale
    # speed
    min_speed = 50 / scale
    max_speed = 80 / scale
    d_speed = 5 / scale
    speed_sigma = 2 / scale
    position_sigma = 10 / scale
    # maximum steps of one episode
    max_steps = 1000
    # reward setting
    position_reward = 10. / 10.
    heading_reward = 10 / 10.
    collision_penalty = -5. / 10
    outside_penalty = -1. / 10
step_penalty = -0.01 / 10 | 21.829268 | 74 | 0.579888 | 113 | 895 | 4.39823 | 0.60177 | 0.054326 | 0.060362 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121107 | 0.35419 | 895 | 41 | 75 | 21.829268 | 0.738754 | 0.158659 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.038462 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
c3939f0937a9f440e452d4556c178d4679036846 | 726 | py | Python | exemplos/exemplo-aula-04-01.py | quitaiskiluisf/TI4F-2021-LogicaProgramacao | d12e5c389a43c98f27726df5618fe529183329a8 | [
"Unlicense"
] | null | null | null | exemplos/exemplo-aula-04-01.py | quitaiskiluisf/TI4F-2021-LogicaProgramacao | d12e5c389a43c98f27726df5618fe529183329a8 | [
"Unlicense"
] | null | null | null | exemplos/exemplo-aula-04-01.py | quitaiskiluisf/TI4F-2021-LogicaProgramacao | d12e5c389a43c98f27726df5618fe529183329a8 | [
"Unlicense"
] | null | null | null | # Presentation
print('Programa para identificar a que cargos eletivos')
print('uma pessoa pode se candidatar com base em sua idade')
print()

# Inputs
idade = int(input('Informe a sua idade: '))

# Processing and outputs
print('Esta pessoa pode se candidatar a estes cargos:')
if (idade < 18):
    print('- Nenhum cargo disponível')
if (idade >= 18):
    print('- Vereador')
if (idade >= 21):
    print('- Deputado Federal')
    print('- Deputado Estadual ou Distrital')
    print('- Prefeito ou Vice-Prefeito')
    print('- Juiz de paz')
if (idade >= 30):
    print('- Governador ou Vice-Governador')
if (idade >= 35):
    print('- Presidente ou Vice-Presidente')
    print('- Senador')
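The chained `if` blocks above can equivalently be driven by a data table of age thresholds. A minimal sketch (the helper `cargos_para_idade` is hypothetical, not part of the original exercise):

```python
# Hypothetical helper: the same age thresholds as the script, kept as data.
def cargos_para_idade(idade):
    regras = [
        (18, ['- Vereador']),
        (21, ['- Deputado Federal', '- Deputado Estadual ou Distrital',
              '- Prefeito ou Vice-Prefeito', '- Juiz de paz']),
        (30, ['- Governador ou Vice-Governador']),
        (35, ['- Presidente ou Vice-Presidente', '- Senador']),
    ]
    # Collect every office whose minimum age the person meets.
    cargos = [c for minimo, lista in regras if idade >= minimo for c in lista]
    return cargos or ['- Nenhum cargo disponível']
```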
| 23.419355 | 61 | 0.634986 | 91 | 726 | 5.065934 | 0.549451 | 0.075922 | 0.052061 | 0.095445 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017889 | 0.230028 | 726 | 30 | 62 | 24.2 | 0.806798 | 0.060606 | 0 | 0 | 0 | 0 | 0.557099 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.684211 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
c39e009f068f7788015732d2e6b0edfd0efd992a | 426 | py | Python | examples/simple.py | realazthat/aiopg-trollius | 6f5edf829d92d1b10c4a1a3a90fe2451539e8dd7 | [
"BSD-2-Clause"
] | 1 | 2021-01-03T00:58:01.000Z | 2021-01-03T00:58:01.000Z | examples/simple.py | 1st1/aiopg | 1bcbd95f9ff97675788dc3dbc2f7889e26b2fba4 | [
"BSD-2-Clause"
] | 2 | 2018-07-20T07:05:46.000Z | 2018-07-20T19:44:44.000Z | examples/simple_old_style.py | soar/aiopg | 9bdff257226b14c1828253efb6d0eb7239b0683a | [
"BSD-2-Clause"
] | 3 | 2018-07-18T06:59:47.000Z | 2018-07-19T22:56:50.000Z | import asyncio
import aiopg
dsn = 'dbname=aiopg user=aiopg password=passwd host=127.0.0.1'
@asyncio.coroutine
def test_select():
    pool = yield from aiopg.create_pool(dsn)
    with (yield from pool.cursor()) as cur:
        yield from cur.execute("SELECT 1")
        ret = yield from cur.fetchone()
        assert ret == (1,)
        print("ALL DONE")
loop = asyncio.get_event_loop()
loop.run_until_complete(test_select())
| 22.421053 | 62 | 0.680751 | 63 | 426 | 4.492063 | 0.587302 | 0.127208 | 0.084806 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.023392 | 0.197183 | 426 | 18 | 63 | 23.666667 | 0.804094 | 0 | 0 | 0 | 0 | 0 | 0.164319 | 0 | 0 | 0 | 0 | 0 | 0.076923 | 1 | 0.076923 | false | 0.076923 | 0.153846 | 0 | 0.230769 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
c3a2516ed5e2983309b0fcf123980be25b43d165 | 1,061 | py | Python | tests/bsmp/test_commands.py | lnls-sirius/pydrs | 4e44cf0272fcf0020139a6c176a708b4642a644a | [
"MIT"
] | null | null | null | tests/bsmp/test_commands.py | lnls-sirius/pydrs | 4e44cf0272fcf0020139a6c176a708b4642a644a | [
"MIT"
] | 1 | 2022-01-14T14:59:09.000Z | 2022-01-21T18:48:32.000Z | tests/bsmp/test_commands.py | lnls-sirius/pydrs | 4e44cf0272fcf0020139a6c176a708b4642a644a | [
"MIT"
] | 1 | 2022-01-14T14:54:14.000Z | 2022-01-14T14:54:14.000Z | from unittest import TestCase
from siriuspy.pwrsupply.bsmp.constants import ConstPSBSMP
from pydrs.bsmp import CommonPSBSMP, EntitiesPS, SerialInterface
class TestSerialCommandsx0(TestCase):
    """Test BSMP consulting methods."""

    def setUp(self):
        """Common setup for all tests."""
        self._serial = SerialInterface(path="/serial", baudrate=9600)
        self._entities = EntitiesPS()
        self._pwrsupply = CommonPSBSMP(
            iointerface=self._serial, entities=self._entities, slave_address=1
        )

    def test_query_protocol_version(self):
        """Test"""

    def test_query_variable(self):
        """Test"""
        self._pwrsupply.pread_variable(var_id=ConstPSBSMP.V_PS_STATUS, timeout=500)

    def test_query_parameter(self):
        """Test"""
        self._pwrsupply.parameter_read(var_id=ConstPSBSMP.P_PS_NAME, timeout=500)

    def test_write_parameter(self):
        """Test"""
        self._pwrsupply.parameter_write(
            var_id=ConstPSBSMP.P_PS_NAME, value="pv_test_name", timeout=500
        )
| 28.675676 | 83 | 0.679548 | 120 | 1,061 | 5.741667 | 0.441667 | 0.075472 | 0.05225 | 0.091437 | 0.179971 | 0.179971 | 0 | 0 | 0 | 0 | 0 | 0.017943 | 0.212064 | 1,061 | 36 | 84 | 29.472222 | 0.80622 | 0.072573 | 0 | 0 | 0 | 0 | 0.019937 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.263158 | false | 0 | 0.157895 | 0 | 0.473684 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c3a46fb3802bbb4ed7a5fbfe67fb1da36da3f753 | 1,316 | py | Python | lib/python/cellranger/feature/utils.py | qiangli/cellranger | 046e24c3275cfbd4516a6ebc064594513a5c45b7 | [
"MIT"
] | 1 | 2019-03-29T04:05:58.000Z | 2019-03-29T04:05:58.000Z | lib/python/cellranger/feature/utils.py | qiangli/cellranger | 046e24c3275cfbd4516a6ebc064594513a5c45b7 | [
"MIT"
] | null | null | null | lib/python/cellranger/feature/utils.py | qiangli/cellranger | 046e24c3275cfbd4516a6ebc064594513a5c45b7 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
#
# Copyright (c) 2018 10X Genomics, Inc. All rights reserved.
#
# Utils for feature-barcoding technology
import numpy as np
import os
import json
import tenkit.safe_json as tk_safe_json
def check_if_none_or_empty(matrix):
    if matrix is None or matrix.get_shape()[0] == 0 or matrix.get_shape()[1] == 0:
        return True
    else:
        return False

def write_json_from_dict(input_dict, out_file_name):
    with open(out_file_name, 'w') as f:
        json.dump(tk_safe_json.json_sanitize(input_dict), f, indent=4, sort_keys=True)

def write_csv_from_dict(input_dict, out_file_name, header=None):
    with open(out_file_name, 'w') as f:
        if header is not None:
            f.write(header)
        for (key, value) in input_dict.iteritems():
            line = str(key) + ',' + str(value) + '\n'
            f.write(line)

def get_depth_string(num_reads_per_cell):
    return str(np.round(float(num_reads_per_cell)/1000, 1)) + "k"

def all_files_present(list_file_paths):
    if list_file_paths is None:
        return False
    files_none = [fpath is None for fpath in list_file_paths]
    if any(files_none):
        return False
    files_present = [os.path.isfile(fpath) for fpath in list_file_paths]
    if not all(files_present):
        return False
    return True
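For comparison, the `all_files_present` check above can be collapsed into a single expression. This is an illustrative sketch only; the name `all_files_present_compact` is not part of this module:

```python
import os

def all_files_present_compact(list_file_paths):
    # None means failure; otherwise every entry must be a non-None,
    # existing file path. An empty list counts as success, matching
    # the behavior of the step-by-step version.
    return list_file_paths is not None and all(
        fpath is not None and os.path.isfile(fpath)
        for fpath in list_file_paths)
```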
| 27.416667 | 86 | 0.680091 | 212 | 1,316 | 3.971698 | 0.415094 | 0.052257 | 0.052257 | 0.053444 | 0.180523 | 0.180523 | 0.180523 | 0.054632 | 0 | 0 | 0 | 0.015564 | 0.218845 | 1,316 | 47 | 87 | 28 | 0.803502 | 0.089666 | 0 | 0.258065 | 0 | 0 | 0.005029 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.16129 | false | 0 | 0.129032 | 0.032258 | 0.516129 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
c3b8ec1c8a770c6bded530d6c754e56f0b14dd73 | 2,818 | py | Python | tests/data.py | MathMagique/azure-cost-mon | 0a2a883eb587ee46bd166f8e23ab0b920eee961a | [
"MIT"
] | 65 | 2017-05-22T16:26:37.000Z | 2022-03-11T06:39:51.000Z | tests/data.py | MathMagique/azure-cost-mon | 0a2a883eb587ee46bd166f8e23ab0b920eee961a | [
"MIT"
] | 27 | 2017-05-02T07:48:34.000Z | 2021-03-31T09:53:56.000Z | tests/data.py | MathMagique/azure-cost-mon | 0a2a883eb587ee46bd166f8e23ab0b920eee961a | [
"MIT"
] | 17 | 2017-06-06T21:39:28.000Z | 2021-07-08T14:13:52.000Z | api_output_for_empty_months = """"Usage Data Extract",
"",
"AccountOwnerId","Account Name","ServiceAdministratorId","SubscriptionId","SubscriptionGuid","Subscription Name","Date","Month","Day","Year","Product","Meter ID","Meter Category","Meter Sub-Category","Meter Region","Meter Name","Consumed Quantity","ResourceRate","ExtendedCost","Resource Location","Consumed Service","Instance ID","ServiceInfo1","ServiceInfo2","AdditionalInfo","Tags","Store Service Identifier","Department Name","Cost Center","Unit Of Measure","Resource Group",'
"""
sample_data = [
    {u'AccountName': u'Platform',
     u'AccountOwnerId': u'donald.duck',
     u'AdditionalInfo': u'',
     u'ConsumedQuantity': 23.0,
     u'ConsumedService': u'Virtual Network',
     u'CostCenter': u'1234',
     u'Date': u'03/01/2017',
     u'Day': 1,
     u'DepartmentName': u'Engineering',
     u'ExtendedCost': 0.499222332425423563466,
     u'InstanceId': u'platform-vnet',
     u'MeterCategory': u'Virtual Network',
     u'MeterId': u'c90286c8-adf0-438e-a257-4468387df385',
     u'MeterName': u'Hours',
     u'MeterRegion': u'All',
     u'MeterSubCategory': u'Gateway Hour',
     u'Month': 3,
     u'Product': u'Windows Azure Compute 100 Hrs Virtual Network',
     u'ResourceGroup': u'',
     u'ResourceLocation': u'All',
     u'ResourceRate': 0.0304347826086957,
     u'ServiceAdministratorId': u'',
     u'ServiceInfo1': u'',
     u'ServiceInfo2': u'',
     u'StoreServiceIdentifier': u'',
     u'SubscriptionGuid': u'abc3455ac-3feg-2b3c5-abe4-ec1111111e6',
     u'SubscriptionId': 23467313421,
     u'SubscriptionName': u'Production',
     u'Tags': u'',
     u'UnitOfMeasure': u'Hours',
     u'Year': 2017},
    {u'AccountName': u'Platform',
     u'AccountOwnerId': u'donald.duck',
     u'AdditionalInfo': u'',
     u'ConsumedQuantity': 0.064076,
     u'ConsumedService': u'Microsoft.Storage',
     u'CostCenter': u'1234',
     u'Date': u'03/01/2017',
     u'Day': 1,
     u'DepartmentName': u'Engineering',
     u'ExtendedCost': 0.50000011123124314235234522345,
     u'InstanceId': u'/subscriptions/abc3455ac-3feg-2b3c5-abe4-ec1111111e6/resourceGroups/my-group/providers/Microsoft.Storage/storageAccounts/ss7q3264domxo',
     u'MeterCategory': u'Windows Azure Storage',
     u'MeterId': u'd23a5753-ff85-4ddf-af28-8cc5cf2d3882',
     u'MeterName': u'Standard IO - Page Blob/Disk (GB)',
     u'MeterRegion': u'All Regions',
     u'MeterSubCategory': u'Locally Redundant',
     u'Month': 3,
     u'Product': u'Locally Redundant Storage Standard IO - Page Blob/Disk',
     u'ResourceGroup': u'my-group',
     u'ResourceLocation': u'euwest',
     u'ResourceRate': 0.0377320156152495,
     u'ServiceAdministratorId': u'',
     u'ServiceInfo1': u'',
     u'ServiceInfo2': u'',
     u'StoreServiceIdentifier': u'',
     u'SubscriptionGuid': u'abc3455ac-3feg-2b3c5-abe4-ec1111111e6',
     u'SubscriptionId': 23467313421,
     u'SubscriptionName': u'Production',
     u'Tags': None,
     u'UnitOfMeasure': u'GB',
     u'Year': 2017}]
| 42.059701 | 484 | 0.710078 | 350 | 2,818 | 5.702857 | 0.368571 | 0.012024 | 0.022545 | 0.033066 | 0.397295 | 0.358717 | 0.342685 | 0.342685 | 0.342685 | 0.342685 | 0 | 0.103668 | 0.110007 | 2,818 | 66 | 485 | 42.69697 | 0.692185 | 0 | 0 | 0.424242 | 0 | 0.030303 | 0.683463 | 0.233854 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c3baff370d4baee282c8608fd09e9a208ebac3e2 | 596 | py | Python | ifitwala_ed/hr/doctype/training_feedback/training_feedback.py | mohsinalimat/ifitwala_ed | 8927695ed9dee36e56571c442ebbe6e6431c7d46 | [
"MIT"
] | 13 | 2020-09-02T10:27:57.000Z | 2022-03-11T15:28:46.000Z | ifitwala_ed/hr/doctype/training_feedback/training_feedback.py | mohsinalimat/ifitwala_ed | 8927695ed9dee36e56571c442ebbe6e6431c7d46 | [
"MIT"
] | 43 | 2020-09-02T07:00:42.000Z | 2021-07-05T13:22:58.000Z | ifitwala_ed/hr/doctype/training_feedback/training_feedback.py | mohsinalimat/ifitwala_ed | 8927695ed9dee36e56571c442ebbe6e6431c7d46 | [
"MIT"
] | 6 | 2020-10-19T01:02:18.000Z | 2022-03-11T15:28:47.000Z | # Copyright (c) 2021, ifitwala and contributors
# For license information, please see license.txt
import frappe
from frappe.model.document import Document
from frappe import _
class TrainingFeedback(Document):
    def validate(self):
        training_event = frappe.get_doc("Training Event", self.training_event)
        if training_event.status != 1:
            frappe.throw(_("{0} must first be submitted").format(_("Training Event")))

        emp_event_details = frappe.db.get_value("Training Event Employee", {
            "parent": self.training_event,
            "employee": self.employee
        }, ["name", "attendance"], as_dict=True)
| 33.111111 | 77 | 0.746644 | 77 | 596 | 5.623377 | 0.61039 | 0.210162 | 0.117783 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011605 | 0.13255 | 596 | 17 | 78 | 35.058824 | 0.825919 | 0.15604 | 0 | 0 | 0 | 0 | 0.212 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.25 | 0 | 0.416667 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c3c4fcd9bbcff4febb72c2cd8b9d3f0b7db5b1a1 | 445 | py | Python | backend/utils/management/commands/createsu.py | stasfilin/rss_portal | e6e9f8d254c80c8a7a40901b3b7dab059f259d55 | [
"MIT"
] | null | null | null | backend/utils/management/commands/createsu.py | stasfilin/rss_portal | e6e9f8d254c80c8a7a40901b3b7dab059f259d55 | [
"MIT"
] | null | null | null | backend/utils/management/commands/createsu.py | stasfilin/rss_portal | e6e9f8d254c80c8a7a40901b3b7dab059f259d55 | [
"MIT"
] | null | null | null | from django.contrib.auth.models import User
from django.core.management.base import BaseCommand
class Command(BaseCommand):
    """
    Command for creating default superuser
    """

    def handle(self, *args, **options):
        if not User.objects.filter(username="admin").exists():
            User.objects.create_superuser("admin", "admin@admin.com", "admin123456")
            self.stdout.write(self.style.SUCCESS("Superuser created"))
| 31.785714 | 84 | 0.689888 | 52 | 445 | 5.884615 | 0.711538 | 0.065359 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016438 | 0.179775 | 445 | 13 | 85 | 34.230769 | 0.821918 | 0.085393 | 0 | 0 | 0 | 0 | 0.13555 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.285714 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
c3ce5c789ffdabd456c4f50d5c4cea5acccf135f | 558 | py | Python | setup.py | whipper-team/morituri-eaclogger | 4cbdeb24d713ab8c9358ac90f4740d8cec76d3c4 | [
"0BSD"
] | 9 | 2018-10-18T13:33:01.000Z | 2022-01-17T19:25:38.000Z | setup.py | JoeLametta/morituri-eaclogger | 4cbdeb24d713ab8c9358ac90f4740d8cec76d3c4 | [
"0BSD"
] | 6 | 2016-07-03T20:47:05.000Z | 2018-02-09T14:58:43.000Z | setup.py | whipper-team/morituri-eaclogger | 4cbdeb24d713ab8c9358ac90f4740d8cec76d3c4 | [
"0BSD"
] | 3 | 2016-07-03T19:58:36.000Z | 2018-02-07T15:34:41.000Z | from setuptools import setup
from eaclogger import __version__ as plugin_version
setup(
    name="whipper-plugin-eaclogger",
    version=plugin_version,
    description="A plugin for whipper which provides EAC style log reports",
    author="JoeLametta, supermanvelo",
    maintainer="JoeLametta",
    license="ISC License",
    url="https://github.com/whipper-team/whipper-plugin-eaclogger",
    packages=["eaclogger", "eaclogger.logger"],
    entry_points={
        "whipper.logger": [
            "eac = eaclogger.logger.eac:EacLogger"
        ]
    }
)
| 29.368421 | 76 | 0.688172 | 60 | 558 | 6.283333 | 0.566667 | 0.068966 | 0.116711 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.195341 | 558 | 18 | 77 | 31 | 0.839644 | 0 | 0 | 0 | 0 | 0 | 0.460573 | 0.096774 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.117647 | 0 | 0.117647 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c3cf716bf2646fc626aab11469e78c6e67817ef2 | 201 | py | Python | main.py | cankut/image-to-prime | d681ac0a6435f41662f638479ee7cfde2f203cb0 | [
"MIT"
] | null | null | null | main.py | cankut/image-to-prime | d681ac0a6435f41662f638479ee7cfde2f203cb0 | [
"MIT"
] | null | null | null | main.py | cankut/image-to-prime | d681ac0a6435f41662f638479ee7cfde2f203cb0 | [
"MIT"
] | null | null | null | from PrimeSearcher import PrimeSearcher
###
ps = PrimeSearcher("./images/euler.jpg")
ps.rescale(60*60, fit_to_original=True)
ps.search(max_iterations=1000, noise_count=1, break_on_find=False)
| 18.272727 | 66 | 0.761194 | 29 | 201 | 5.068966 | 0.827586 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.050279 | 0.109453 | 201 | 10 | 67 | 20.1 | 0.77095 | 0 | 0 | 0 | 0 | 0 | 0.091837 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c3dfc2511f343d7ceb9692a36dbb37532c595d9b | 1,637 | py | Python | src/utils/schema/jobserver/migrations/0013_migrate_job_completion.py | gravitationalwavedc/gwcloud_job_server | fb96ed1dc6baa240d1a38ac1adcd246577285294 | [
"MIT"
] | null | null | null | src/utils/schema/jobserver/migrations/0013_migrate_job_completion.py | gravitationalwavedc/gwcloud_job_server | fb96ed1dc6baa240d1a38ac1adcd246577285294 | [
"MIT"
] | 8 | 2020-06-06T08:39:37.000Z | 2021-09-22T18:01:47.000Z | src/utils/schema/jobserver/migrations/0013_migrate_job_completion.py | gravitationalwavedc/gwcloud_job_server | fb96ed1dc6baa240d1a38ac1adcd246577285294 | [
"MIT"
] | null | null | null | # Generated by Django 3.0.4 on 2020-05-17 22:34
from django.db import migrations, models
from django.utils import timezone
def migrate_job_completion(apps, schema_editor):
    Job = apps.get_model("jobserver", "Job")
    JobHistory = apps.get_model("jobserver", "JobHistory")

    # Iterate over all jobs
    for job in Job.objects.all():
        # Check if there are any _job_completion_ job history instances for this job
        histories = JobHistory.objects.filter(job=job)
        if histories.filter(what='_job_completion_').exists():
            # Nothing more to do for this job
            continue

        job_error = False
        # Check that all job steps exist with a success value
        job_steps = histories.values_list('what').distinct()
        for step in job_steps:
            if not histories.filter(what=step, state=500).exists():
                job_error = True

        if job_error:
            JobHistory.objects.create(
                job=job,
                what="_job_completion_",
                state=400,  # ERROR
                timestamp=timezone.now(),
                details="Error state migrated forwards"
            )
        else:
            JobHistory.objects.create(
                job=job,
                what="_job_completion_",
                state=500,  # SUCCESS
                timestamp=timezone.now(),
                details="Success state migrated forwards"
            )


class Migration(migrations.Migration):

    dependencies = [
        ('jobserver', '0012_auto_20200517_2234'),
    ]

    operations = [
        migrations.RunPython(migrate_job_completion),
    ]
| 31.480769 | 84 | 0.592547 | 181 | 1,637 | 5.20442 | 0.486188 | 0.082803 | 0.05414 | 0.044586 | 0.176221 | 0.10828 | 0.10828 | 0.10828 | 0.10828 | 0 | 0 | 0.035907 | 0.319487 | 1,637 | 51 | 85 | 32.098039 | 0.809695 | 0.145388 | 0 | 0.216216 | 1 | 0 | 0.125809 | 0.016535 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027027 | false | 0 | 0.054054 | 0 | 0.162162 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c3ef2c796af21bf862af0de2fcb23a9f957c5de9 | 732 | py | Python | django_vcr/management/commands/download_tapes.py | areedtomlinson/django-vcr | 902f38346a01a8e6124e859e21d7acb9c97241fc | [
"MIT"
] | null | null | null | django_vcr/management/commands/download_tapes.py | areedtomlinson/django-vcr | 902f38346a01a8e6124e859e21d7acb9c97241fc | [
"MIT"
] | null | null | null | django_vcr/management/commands/download_tapes.py | areedtomlinson/django-vcr | 902f38346a01a8e6124e859e21d7acb9c97241fc | [
"MIT"
] | null | null | null | import os
from optparse import make_option
from django.core.management.base import BaseCommand, CommandError
class Command(BaseCommand):
    args = 'url'
    help = 'Download VCR tapes from remote server'
    option_list = BaseCommand.option_list + (
        make_option(
            '-u', '--url',
            dest='url',
            default=None,
            help='URL for zip/tar/gz file that contains all necessary tapes.'
        ),
    )

    # TODO: make an option to change overwrite behavior
    def handle(self, *args, **options):
        print("This command will download VCR tapes from a remote server.")
        # TODO: use urllib2 to fetch from URL
        # TODO: unzip/uncompress and move to VCR_CASSETTE_PATH
| 27.111111 | 77 | 0.639344 | 92 | 732 | 5.021739 | 0.641304 | 0.04329 | 0.069264 | 0.08658 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001887 | 0.275956 | 732 | 26 | 78 | 28.153846 | 0.869811 | 0.188525 | 0 | 0 | 0 | 0 | 0.281356 | 0 | 0 | 0 | 0 | 0.038462 | 0 | 1 | 0.0625 | false | 0 | 0.1875 | 0 | 0.5 | 0.0625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c3f051a0ba567a1bfa80d8a15622a56fe1837dca | 608 | py | Python | swcms_social/faq/migrations/0005_auto_20180419_1001.py | ivanff/swcms | 20d121003243abcc26e41409bc44f1c0ef3c6c2a | [
"MIT"
] | null | null | null | swcms_social/faq/migrations/0005_auto_20180419_1001.py | ivanff/swcms | 20d121003243abcc26e41409bc44f1c0ef3c6c2a | [
"MIT"
] | 1 | 2019-06-25T11:17:35.000Z | 2019-06-25T11:17:54.000Z | swcms_social/faq/migrations/0005_auto_20180419_1001.py | ivanff/swcms-social | 20d121003243abcc26e41409bc44f1c0ef3c6c2a | [
"MIT"
] | null | null | null | # Generated by Django 2.0.3 on 2018-04-19 07:01
from django.db import migrations
class Migration(migrations.Migration):

    dependencies = [
        ('faq', '0004_auto_20180322_1330'),
    ]

    operations = [
        migrations.AlterModelOptions(
            name='faq',
            options={'ordering': ('subject__order', 'order'), 'verbose_name': 'статья', 'verbose_name_plural': 'статьи'},
        ),
        migrations.AlterModelOptions(
            name='subject',
            options={'ordering': ('order',), 'verbose_name': 'тема помощи', 'verbose_name_plural': 'темы помощи'},
        ),
    ]
| 27.636364 | 121 | 0.597039 | 60 | 608 | 5.866667 | 0.633333 | 0.125 | 0.176136 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068282 | 0.253289 | 608 | 21 | 122 | 28.952381 | 0.707048 | 0.074013 | 0 | 0.266667 | 1 | 0 | 0.306595 | 0.040998 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.066667 | 0 | 0.266667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c3f17a70386052db02c11c15fab00823b5a363bc | 1,245 | py | Python | Missing Numbers.py | Shaikh-Nabeel/HackerRankAlgorithms | 3d11fd2c45bd045fa320691c8ac1c89acadc6d8a | [
"Apache-2.0"
] | null | null | null | Missing Numbers.py | Shaikh-Nabeel/HackerRankAlgorithms | 3d11fd2c45bd045fa320691c8ac1c89acadc6d8a | [
"Apache-2.0"
] | null | null | null | Missing Numbers.py | Shaikh-Nabeel/HackerRankAlgorithms | 3d11fd2c45bd045fa320691c8ac1c89acadc6d8a | [
"Apache-2.0"
] | null | null | null | """
Problem Statement
Numeros, The Artist, had two lists A and B, such that, B was a permutation of A. Numeros was very proud of these lists.
Unfortunately, while transporting them from one exhibition to another, some numbers from List A got left out. Can you
find out the numbers missing from A?
"""
__author__ = 'Danyang'
class Solution(object):
    def solve(self, cipher):
        """
        :param cipher: the cipher
        """
        m, A, n, B = cipher
        result = set()  # {} is for dictionary
        hm = {}
        for a in A:
            if a not in hm:
                hm[a] = 1
            else:
                hm[a] += 1
        for b in B:
            if b not in hm or hm[b] <= 0:
                result.add(b)
            else:
                hm[b] -= 1
        result = sorted(list(result))
        return " ".join(map(str, result))
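The hand-rolled hash map in `solve` computes a multiset difference between B and A. For reference, the same idea expressed with `collections.Counter`, as an illustrative alternative that is not part of the original solution:

```python
from collections import Counter

def missing_numbers(A, B):
    # Values that occur more often in B than in A are exactly the
    # numbers missing from A; Counter subtraction drops the rest.
    return " ".join(map(str, sorted(Counter(B) - Counter(A))))
```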
if __name__ == "__main__":
    import sys
    f = open("1.in", "r")
    # f = sys.stdin
    solution = Solution()
    m = int(f.readline().strip())
    A = map(int, f.readline().strip().split(' '))
    n = int(f.readline().strip())
    B = map(int, f.readline().strip().split(' '))
    cipher = m, A, n, B

    # solve
    s = "%s\n" % (solution.solve(cipher))
    print s,
| 24.411765 | 119 | 0.519679 | 174 | 1,245 | 3.649425 | 0.477011 | 0.025197 | 0.075591 | 0.107087 | 0.110236 | 0.07874 | 0 | 0 | 0 | 0 | 0 | 0.006127 | 0.344578 | 1,245 | 50 | 120 | 24.9 | 0.772059 | 0.032129 | 0 | 0.068966 | 0 | 0 | 0.031765 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.034483 | null | null | 0.034483 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c3f1f74b466b17b50d5baf80f81ac4960271ecd0 | 3,187 | py | Python | jmap/account/imap/mailbox.py | filiphanes/jmap-proxy-python | 86d7aba07c5faad652dd46f418f2b8dd3fe03bc3 | [
"MIT"
] | 14 | 2020-05-12T14:21:23.000Z | 2022-03-17T07:20:25.000Z | jmap/account/imap/mailbox.py | filiphanes/jmap-proxy-python | 86d7aba07c5faad652dd46f418f2b8dd3fe03bc3 | [
"MIT"
] | 1 | 2020-11-28T12:52:09.000Z | 2020-11-28T12:52:09.000Z | jmap/account/imap/mailbox.py | filiphanes/jmap-proxy-python | 86d7aba07c5faad652dd46f418f2b8dd3fe03bc3 | [
"MIT"
] | 4 | 2020-06-03T09:21:01.000Z | 2022-03-11T20:43:58.000Z | from jmap.account.imap.imap_utf7 import imap_utf7_decode, imap_utf7_encode
KNOWN_SPECIALS = set('\\HasChildren \\HasNoChildren \\NoSelect \\NoInferiors \\UnMarked \\Subscribed'.lower().split())
# special use or name magic
ROLE_MAP = {
'inbox': 'inbox',
'drafts': 'drafts',
'draft': 'drafts',
'draft messages': 'drafts',
'bulk': 'junk',
'bulk mail': 'junk',
'junk': 'junk',
'junk mail': 'junk',
'spam mail': 'junk',
'spam messages': 'junk',
'archive': 'archive',
'sent': 'sent',
'sent items': 'sent',
'sent messages': 'sent',
'deleted messages': 'trash',
'trash': 'trash',
'\\inbox': 'inbox',
'\\trash': 'trash',
'\\sent': 'sent',
'\\junk': 'junk',
'\\spam': 'junk',
'\\archive': 'archive',
'\\drafts': 'drafts',
'\\all': 'all',
}
class ImapMailbox(dict):
__slots__ = ('db',)
def __missing__(self, key):
return getattr(self, key)()
def name(self):
try:
parentname, name = self['imapname'].rsplit(self['sep'], maxsplit=1)
except ValueError:
name = self['imapname']
self['name'] = imap_utf7_decode(name.encode())
return self['name']
def parentId(self):
try:
parentname, name = self['imapname'].rsplit(self['sep'], maxsplit=1)
self['parentId'] = self.db.byimapname[parentname]['id']
except ValueError:
self['parentId'] = None
return self['parentId']
def role(self):
for f in self['flags']:
if f not in KNOWN_SPECIALS:
self['role'] = ROLE_MAP.get(f, None)
break
else:
self['role'] = ROLE_MAP.get(self['imapname'].lower(), None)
return self['role']
def sortOrder(self):
        # Inbox first, then other special-role folders, then plain mailboxes.
        # (The original ordering made the inbox check unreachable.)
        return 1 if self['role'] == 'inbox' else (2 if self['role'] else 3)
def isSubscribed(self):
return '\\subscribed' in self['flags']
def totalEmails(self):
return 0
def unreadEmails(self):
return 0
def totalThreads(self):
return self['totalEmails']
def unreadThreads(self):
return self['unreadEmails']
def myRights(self):
can_select = '\\noselect' not in self['flags']
self['myRights'] = {
'mayReadItems': can_select,
'mayAddItems': can_select,
'mayRemoveItems': can_select,
'maySetSeen': can_select,
'maySetKeywords': can_select,
'mayCreateChild': True,
'mayRename': False if self['role'] else True,
'mayDelete': False if self['role'] else True,
'maySubmit': can_select,
}
return self['myRights']
def imapname(self):
encname = imap_utf7_encode(self['name']).decode()
if self['parentId']:
parent = self.db.mailboxes[self['parentId']]
self['imapname'] = parent['imapname'] + parent['sep'] + encname
else:
self['imapname'] = encname
return self['imapname']
def created(self):
return self['uidvalidity']
def updated(self):
return self['uidvalidity'] * self['uidnext']
def deleted(self):
return None
| 26.338843 | 118 | 0.557891 | 338 | 3,187 | 5.174556 | 0.304734 | 0.051458 | 0.02287 | 0.024014 | 0.109777 | 0.089194 | 0.062893 | 0.062893 | 0.062893 | 0.062893 | 0 | 0.005226 | 0.279573 | 3,187 | 120 | 119 | 26.558333 | 0.756533 | 0.007844 | 0 | 0.106383 | 0 | 0 | 0.236784 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.159574 | false | 0 | 0.010638 | 0.106383 | 0.351064 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
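The `ImapMailbox` class above leans on `dict.__missing__` to compute attributes lazily on first access and cache them in the dict itself. A minimal standalone sketch of that pattern (class and key names here are illustrative, not taken from the module):

```python
class LazyRecord(dict):
    """Dict whose missing keys are computed by a same-named method and cached."""

    def __missing__(self, key):
        # Called only when `key` is absent; the method stores the value,
        # so subsequent lookups hit the dict directly.
        return getattr(self, key)()

    def name(self):
        # Derive the display name from the raw IMAP name and separator,
        # mirroring how ImapMailbox.name() splits 'imapname'.
        self['name'] = self['imapname'].rsplit(self['sep'], 1)[-1]
        return self['name']


r = LazyRecord(imapname='INBOX/Drafts', sep='/')
print(r['name'])  # -> Drafts, computed on first access
print(r['name'])  # now served from the cached dict entry
```

The cost of the computation is paid at most once per key, and callers never need to know whether a field was stored up front or derived on demand.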
c3f30ba18a8ccc9b5eb6bda010feb4e01b778707 | 2,987 | py | Python | pyieee1905/multiap_msg.py | evanslai/pyieee1905 | 1007f964c1ebcaf3825a809fb98bc20430e26b64 | [
"MIT"
] | 3 | 2020-07-17T15:58:49.000Z | 2022-01-03T11:40:04.000Z | pyieee1905/multiap_msg.py | jayakumar-ananthakrishnan/pyieee1905 | 8aab72a1bccb9c6c753ca7d3d8913b3cba4c74a0 | [
"MIT"
] | 3 | 2019-03-29T19:31:23.000Z | 2021-09-16T11:51:55.000Z | pyieee1905/multiap_msg.py | jayakumar-ananthakrishnan/pyieee1905 | 8aab72a1bccb9c6c753ca7d3d8913b3cba4c74a0 | [
"MIT"
] | 1 | 2021-09-14T13:58:43.000Z | 2021-09-14T13:58:43.000Z | from pyieee1905.ieee1905_tlv import IEEE1905_TLV
from scapy.packet import Packet, bind_layers
from scapy.fields import BitField, XByteField, XShortField, XShortEnumField
from scapy.layers.l2 import Ether
IEEE1905_MCAST = "01:80:c2:00:00:13"
ieee1905_msg_type = {
0x0000:"TOPOLOGY_DISCOVERY_MESSAGE",
0x0001:"TOPOLOGY_NOTIFICATION_MESSAGE",
0x0002:"TOPOLOGY_QUERY_MESSAGE",
0x0003:"TOPOLOGY_RESPONSE_MESSAGE",
0x0004:"VENDOR_SPECIFIC_MESSAGE",
0x0005:"LINK_METRIC_QUERY_MESSAGE",
0x0006:"LINK_METRIC_RESPONSE_MESSAGE",
0x0007:"AP_AUTOCONFIGURATION_SEARCH_MESSAGE",
0x0008:"AP_AUTOCONFIGURATION_RESPONSE_MESSAGE",
0x0009:"AP_AUTOCONFIGURATION_WSC_MESSAGE",
0x000A:"AP_AUTOCONFIGURATION_RENEW_MESSAGE",
0x000B:"IEEE1905_PUSH_BUTTON_EVENT_NOTIFICATION_MESSAGE",
0x000C:"IEEE1905_PUSH_BUTTON_JOIN_NOTIFICATION_MESSAGE",
0x000D:"HIGHER_LAYER_QUERY_MESSAGE",
0x000E:"HIGHER_LAYER_RESPONSE_MESSAGE",
0x000F:"INTERFACE_POWER_CHANGE_REQUEST_MESSAGE",
0x0010:"INTERFACE_POWER_CHANGE_RESPONSE_MESSAGE",
0x0011:"GENERIC_PHY_QUERY_MESSAGE",
0x0012:"GENERIC_PHY_RESPONSE_MESSAGE",
0x8000:"IEEE1905_ACK_MESSAGE",
0x8001:"AP_CAPABILITY_QUERY_MESSAGE",
0x8002:"AP_CAPABILITY_REPORT_MESSAGE",
0x8003:"MULTI_AP_POLICY_CONFIG_REQUEST_MESSAGE",
0x8004:"CHANNEL_PREFERENCE_QUERY_MESSAGE",
0x8005:"CHANNEL_PREFERENCE_REPORT_MESSAGE",
0x8006:"CHANNEL_SELECTION_REQUEST_MESSAGE",
0x8007:"CHANNEL_SELECTION_RESPONSE_MESSAGE",
0x8008:"OPERATING_CHANNEL_REPORT_MESSAGE",
0x8009:"CLIENT_CAPABILITIES_QUERY_MESSAGE",
0x800A:"CLIENT_CAPABILITIES_REPORT_MESSAGE",
0x800B:"AP_METRICS_QUERY_MESSAGE",
0x800C:"AP_METRICS_RESPONSE_MESSAGE",
0x800D:"ASSOCIATED_STA_LINK_METRICS_QUERY_MESSAGE",
0x800E:"ASSOCIATED_STA_LINK_METRICS_RESPONSE_MESSAGE",
0x800F:"UNASSOCIATED_STA_LINK_METRICS_QUERY_MESSAGE",
0x8010:"UNASSOCIATED_STA_LINK_METRICS_RESPONSE_MESSAGE",
0x8011:"BEACON_METRICS_QUERY_MESSAGE",
    0x8012:"BEACON_METRICS_RESPONSE_MESSAGE",
0x8013:"COMBINED_INFRASTRUCTURE_METRICS_MESSAGE",
0x8014:"CLIENT_STEERING_REQUEST_MESSAGE",
0x8015:"CLIENT_STEERING_BTM_REPORT_MESSAGE",
0x8016:"CLIENT_ASSOCIATION_CONTROL_REQUEST_MESSAGE",
0x8017:"STEERING_COMPLETED_MESSAGE",
0x8018:"HIGHER_LAYER_DATA_MESSAGE",
0x8019:"BACKHAUL_STEERING_REQUEST_MESSAGE",
0x801A:"BACKHAUL_STEERING_RESPONSE_MESSAGE"
}
class MultiAP_Message(Packet):
name = "IEEE 1905 MultiAP Message"
fields_desc = [
XByteField("msg_version", None),
XByteField("msg_reserved", None),
XShortEnumField("msg_type", None, ieee1905_msg_type),
XShortField("msg_id", None),
XByteField("frag_id", None),
BitField("flag_last_frag_ind", 0, 1),
BitField("flag_relay_ind", 0, 1),
BitField("flag_reserved", 0, 6)
]
bind_layers(Ether, MultiAP_Message, type=0x893a)
bind_layers(MultiAP_Message, IEEE1905_TLV, )
| 37.810127 | 75 | 0.781051 | 345 | 2,987 | 6.255072 | 0.414493 | 0.061168 | 0.035218 | 0.022243 | 0.066728 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108637 | 0.127888 | 2,987 | 78 | 76 | 38.294872 | 0.71977 | 0 | 0 | 0 | 0 | 0 | 0.541555 | 0.490952 | 0 | 0 | 0.094504 | 0 | 0 | 1 | 0 | false | 0 | 0.059701 | 0 | 0.104478 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
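The `fields_desc` above describes the 8-byte IEEE 1905 CMDU header: one byte each for version and reserved, two bytes each for message type and message id, one fragment-id byte, then a flags byte whose two most significant bits are the last-fragment and relay indicators. As a scapy-free cross-check, the same layout can be packed with the standard `struct` module (big-endian, matching scapy's network byte order; the helper name is my own):

```python
import struct

def pack_cmdu_header(msg_type, msg_id, frag_id=0, last_frag=True, relay=False):
    """Pack an 8-byte IEEE 1905 CMDU header.

    Layout mirrors MultiAP_Message.fields_desc: version, reserved,
    msg_type, msg_id, frag_id, then a flags byte with the last-fragment
    and relay bits in the two most significant positions.
    """
    flags = (int(last_frag) << 7) | (int(relay) << 6)
    return struct.pack('>BBHHBB', 0, 0, msg_type, msg_id, frag_id, flags)


hdr = pack_cmdu_header(0x8001, 0x0001)  # AP_CAPABILITY_QUERY_MESSAGE
print(hdr.hex())  # -> 0000800100010080
```

Comparing such hand-packed bytes against `bytes(Ether()/MultiAP_Message(...))` is a quick way to confirm the `BitField` ordering does what you expect.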
c3f439b48efe360b61fbb39a47da1c6f455643ce | 2,719 | py | Python | daily_practice/tools/GetMem.py | joakimzhang/qa_study | ff8930e674d45c49bea4e130d14d73d17b090e48 | [
"Apache-2.0"
] | null | null | null | daily_practice/tools/GetMem.py | joakimzhang/qa_study | ff8930e674d45c49bea4e130d14d73d17b090e48 | [
"Apache-2.0"
] | null | null | null | daily_practice/tools/GetMem.py | joakimzhang/qa_study | ff8930e674d45c49bea4e130d14d73d17b090e48 | [
"Apache-2.0"
] | null | null | null | # coding:utf8
'''
Created on 2016-08-30
@author: zhangq
'''
from serial import Serial
import re
from threading import Thread
import time
import datetime
import pygal
import os
class FilterMem(object):
def __init__(self, port, baudrate):
self.serial_obj = Serial()
self.serial_obj.port = port-1
self.serial_obj.baudrate = baudrate
self.connect_uart()
def connect_uart(self):
try:
self.serial_obj.open()
except Exception, e:
if self.serial_obj.isOpen():
self.serial_obj.close()
print e
return 0
def send_thread(self, _command, _period):
self.sent_thread = Thread(target=self.sendfunc,args=(_command, _period))
self.sent_thread.setDaemon(True)
self.sent_thread.start()
#self.getmem()
def getmem(self, keyword, file_name):
today = datetime.date.today()
self.file_name = r"%s_%s" % (file_name, today)
x_list = []
y_list = []
with open("%s.log"%self.file_name, "w") as f:
while 1:
self.info = self.serial_obj.readline()
print self.info
current = datetime.datetime.now()
f_time = "%s-%s-%s %s:%s:%s" % (current.year, current.month, current.day, current.hour, current.minute, current.second)
f.write("%s:%s" % (f_time, self.info))
match_info = re.search("%s.+?(\d+).+bytes" % keyword, self.info)
if match_info:
mem_val = match_info.group(1)
y_list.append(int(mem_val))
x_list.append(current)
print mem_val
if len(y_list)%10 == 0:
self.make_pic(x_list, y_list)
#print match_info.group(0)
#print "bbb"
#time.sleep(1)
def sendfunc(self, _char, _period):
self.serial_obj.write("mon\n")
while 1:
self.serial_obj.write("%s\n" % _char)
time.sleep(_period)
#print _char
# plot a sine wave from 0 to 4pi
def make_pic(self, x_list, y_list):
line_chart = pygal.Line()
line_chart.title = 'Mem usage evolution (in %)'
line_chart.x_labels = x_list
line_chart.add('Mem', y_list)
line_chart.render()
f = open('%s.html' % self.file_name, 'w')
f.write(line_chart.render())
f.close()
if __name__ == "__main__":
my_obj = FilterMem(9, 115200)
my_obj.send_thread("mid", 10)
#my_obj.connect_uart()
    my_obj.getmem("Used", "mem_usage")  # getmem() also needs a log file name prefix
# my_obj.sent_thread.join()
| 29.554348 | 135 | 0.544318 | 348 | 2,719 | 4.043103 | 0.350575 | 0.063966 | 0.083156 | 0.021322 | 0.042644 | 0 | 0 | 0 | 0 | 0 | 0.002207 | 0.015996 | 0.333211 | 2,719 | 91 | 136 | 29.879121 | 0.756757 | 0.059581 | 0 | 0.031746 | 0 | 0 | 0.044836 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.111111 | null | null | 0.047619 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
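The memory figure in `getmem` is pulled out of each UART line with the pattern `"%s.+?(\d+).+bytes" % keyword`. A quick standalone check of how that regex behaves (the sample log line is made up for illustration):

```python
import re

def extract_mem(line, keyword='Used'):
    """Return the first integer between `keyword` and 'bytes', or None."""
    # Same pattern as getmem(): lazy skip after the keyword, capture the
    # digit run, then require 'bytes' somewhere later on the line.
    match = re.search(r"%s.+?(\d+).+bytes" % keyword, line)
    return int(match.group(1)) if match else None


print(extract_mem("Used memory: 123456 total bytes"))  # -> 123456
print(extract_mem("no memory info here"))              # -> None
```

The lazy `.+?` matters: a greedy `.+` would skip past the first digit run and capture only the trailing digits before `bytes`.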
c3f80a30cddcf300830226429e92b2147122720d | 12,689 | py | Python | waffle/tests/test_management.py | theunraveler/django-waffle | 4b27b27c5ce3f044eb8b2c94f40df841bc982884 | [
"BSD-3-Clause"
] | 1 | 2021-09-12T00:44:29.000Z | 2021-09-12T00:44:29.000Z | waffle/tests/test_management.py | theunraveler/django-waffle | 4b27b27c5ce3f044eb8b2c94f40df841bc982884 | [
"BSD-3-Clause"
] | 4 | 2016-06-01T16:23:26.000Z | 2017-01-04T13:04:06.000Z | waffle/tests/test_management.py | theunraveler/django-waffle | 4b27b27c5ce3f044eb8b2c94f40df841bc982884 | [
"BSD-3-Clause"
] | null | null | null | from __future__ import unicode_literals
import six
from django.core.management import call_command, CommandError
from django.contrib.auth.models import Group, User
from waffle import get_waffle_flag_model
from waffle.models import Sample, Switch
from waffle.tests.base import TestCase
class WaffleFlagManagementCommandTests(TestCase):
def test_create(self):
""" The command should create a new flag. """
name = 'test'
percent = 20
Group.objects.create(name='waffle_group')
call_command('waffle_flag', name, percent=percent,
superusers=True, staff=True, authenticated=True,
rollout=True, create=True, group=['waffle_group'])
flag = get_waffle_flag_model().objects.get(name=name)
self.assertEqual(flag.percent, percent)
self.assertIsNone(flag.everyone)
self.assertTrue(flag.superusers)
self.assertTrue(flag.staff)
self.assertTrue(flag.authenticated)
self.assertTrue(flag.rollout)
self.assertEqual(list(flag.groups.values_list('name', flat=True)),
['waffle_group'])
def test_not_create(self):
""" The command shouldn't create a new flag if the create flag is
not set.
"""
name = 'test'
with self.assertRaisesRegexp(CommandError, 'This flag does not exist.'):
call_command('waffle_flag', name, everyone=True, percent=20,
superusers=True, staff=True, authenticated=True,
rollout=True)
self.assertFalse(get_waffle_flag_model().objects.filter(name=name).exists())
def test_update(self):
""" The command should update an existing flag. """
name = 'test'
flag = get_waffle_flag_model().objects.create(name=name)
self.assertIsNone(flag.percent)
self.assertIsNone(flag.everyone)
self.assertTrue(flag.superusers)
self.assertFalse(flag.staff)
self.assertFalse(flag.authenticated)
self.assertFalse(flag.rollout)
percent = 30
call_command('waffle_flag', name, percent=percent,
superusers=False, staff=True, authenticated=True,
rollout=True)
flag.refresh_from_db()
self.assertEqual(flag.percent, percent)
self.assertIsNone(flag.everyone)
self.assertFalse(flag.superusers)
self.assertTrue(flag.staff)
self.assertTrue(flag.authenticated)
self.assertTrue(flag.rollout)
def test_update_activate_everyone(self):
""" The command should update everyone field to True """
name = 'test'
flag = get_waffle_flag_model().objects.create(name=name)
self.assertIsNone(flag.percent)
self.assertIsNone(flag.everyone)
self.assertTrue(flag.superusers)
self.assertFalse(flag.staff)
self.assertFalse(flag.authenticated)
self.assertFalse(flag.rollout)
percent = 30
call_command('waffle_flag', name, everyone=True, percent=percent,
superusers=False, staff=True, authenticated=True,
rollout=True)
flag.refresh_from_db()
self.assertEqual(flag.percent, percent)
self.assertTrue(flag.everyone)
self.assertFalse(flag.superusers)
self.assertTrue(flag.staff)
self.assertTrue(flag.authenticated)
self.assertTrue(flag.rollout)
def test_update_deactivate_everyone(self):
""" The command should update everyone field to False"""
name = 'test'
flag = get_waffle_flag_model().objects.create(name=name)
self.assertIsNone(flag.percent)
self.assertIsNone(flag.everyone)
self.assertTrue(flag.superusers)
self.assertFalse(flag.staff)
self.assertFalse(flag.authenticated)
self.assertFalse(flag.rollout)
percent = 30
call_command('waffle_flag', name, everyone=False, percent=percent,
superusers=False, staff=True, authenticated=True,
rollout=True)
flag.refresh_from_db()
self.assertEqual(flag.percent, percent)
self.assertFalse(flag.everyone)
self.assertFalse(flag.superusers)
self.assertTrue(flag.staff)
self.assertTrue(flag.authenticated)
self.assertTrue(flag.rollout)
def test_list(self):
""" The command should list all flags."""
stdout = six.StringIO()
get_waffle_flag_model().objects.create(name='test')
call_command('waffle_flag', list_flags=True, stdout=stdout)
expected = 'Flags:\nNAME: test\nSUPERUSERS: True\nEVERYONE: None\n' \
'AUTHENTICATED: False\nPERCENT: None\nTESTING: False\n' \
'ROLLOUT: False\nSTAFF: False\nGROUPS: []\nUSERS: []'
actual = stdout.getvalue().strip()
self.assertEqual(actual, expected)
def test_group_append(self):
""" The command should append a group to a flag."""
original_group = Group.objects.create(name='waffle_group')
Group.objects.create(name='append_group')
flag = get_waffle_flag_model().objects.create(name='test')
flag.groups.add(original_group)
flag.refresh_from_db()
self.assertEqual(list(flag.groups.values_list('name', flat=True)),
['waffle_group'])
call_command('waffle_flag', 'test', group=['append_group'],
append=True)
flag.refresh_from_db()
self.assertEqual(list(flag.groups.values_list('name', flat=True)),
['waffle_group', 'append_group'])
self.assertIsNone(flag.everyone)
def test_user(self):
""" The command should replace a user to a flag."""
original_user = User.objects.create_user('waffle_test')
User.objects.create_user('add_user')
flag = get_waffle_flag_model().objects.create(name='test')
flag.users.add(original_user)
flag.refresh_from_db()
self.assertEqual(list(flag.users.values_list('username', flat=True)),
['waffle_test'])
call_command('waffle_flag', 'test', user=['add_user'])
flag.refresh_from_db()
self.assertEqual(list(flag.users.values_list('username', flat=True)),
['add_user'])
self.assertIsNone(flag.everyone)
def test_user_append(self):
""" The command should append a user to a flag."""
original_user = User.objects.create_user('waffle_test')
User.objects.create_user('append_user')
User.objects.create_user('append_user_email', email='test@example.com')
flag = get_waffle_flag_model().objects.create(name='test')
flag.users.add(original_user)
flag.refresh_from_db()
self.assertEqual(list(flag.users.values_list('username', flat=True)),
['waffle_test'])
call_command('waffle_flag', 'test', user=['append_user'],
append=True)
flag.refresh_from_db()
self.assertEqual(list(flag.users.values_list('username', flat=True)),
['waffle_test', 'append_user'])
self.assertIsNone(flag.everyone)
call_command('waffle_flag', 'test', user=['test@example.com'],
append=True)
flag.refresh_from_db()
self.assertEqual(list(flag.users.values_list('username', flat=True)),
['waffle_test', 'append_user', 'append_user_email'])
self.assertIsNone(flag.everyone)
class WaffleSampleManagementCommandTests(TestCase):
def test_create(self):
""" The command should create a new sample. """
name = 'test'
percent = 20
call_command('waffle_sample', name, str(percent), create=True)
sample = Sample.objects.get(name=name)
self.assertEqual(sample.percent, percent)
def test_not_create(self):
""" The command shouldn't create a new sample if the create flag is
not set.
"""
name = 'test'
with self.assertRaisesRegexp(CommandError, 'This sample does not exist'):
call_command('waffle_sample', name, '20')
self.assertFalse(Sample.objects.filter(name=name).exists())
def test_update(self):
""" The command should update an existing sample. """
name = 'test'
sample = Sample.objects.create(name=name, percent=0)
self.assertEqual(sample.percent, 0)
percent = 50
call_command('waffle_sample', name, str(percent))
sample.refresh_from_db()
self.assertEqual(sample.percent, percent)
def test_list(self):
""" The command should list all samples."""
stdout = six.StringIO()
Sample.objects.create(name='test', percent=34)
call_command('waffle_sample', list_samples=True, stdout=stdout)
expected = 'Samples:\ntest: 34.0%'
actual = stdout.getvalue().strip()
self.assertEqual(actual, expected)
class WaffleSwitchManagementCommandTests(TestCase):
def test_create(self):
""" The command should create a new switch. """
name = 'test'
call_command('waffle_switch', name, 'on', create=True)
switch = Switch.objects.get(name=name, active=True)
switch.delete()
call_command('waffle_switch', name, 'off', create=True)
Switch.objects.get(name=name, active=False)
def test_not_create(self):
""" The command shouldn't create a new switch if the create flag is
not set.
"""
name = 'test'
with self.assertRaisesRegexp(CommandError, 'This switch does not exist.'):
call_command('waffle_switch', name, 'on')
self.assertFalse(Switch.objects.filter(name=name).exists())
def test_update(self):
""" The command should update an existing switch. """
name = 'test'
switch = Switch.objects.create(name=name, active=True)
call_command('waffle_switch', name, 'off')
switch.refresh_from_db()
self.assertFalse(switch.active)
call_command('waffle_switch', name, 'on')
switch.refresh_from_db()
self.assertTrue(switch.active)
def test_list(self):
""" The command should list all switches."""
stdout = six.StringIO()
Switch.objects.create(name='switch1', active=True)
Switch.objects.create(name='switch2', active=False)
call_command('waffle_switch', list_switches=True, stdout=stdout)
expected = 'Switches:\nswitch1: on\nswitch2: off'
actual = stdout.getvalue().strip()
self.assertEqual(actual, expected)
class WaffleDeleteManagementCommandTests(TestCase):
def test_delete_flag(self):
""" The command should delete a flag. """
name = 'test_flag'
get_waffle_flag_model().objects.create(name=name)
call_command('waffle_delete', flag_names=[name])
self.assertEqual(get_waffle_flag_model().objects.count(), 0)
def test_delete_swtich(self):
""" The command should delete a switch. """
name = 'test_switch'
Switch.objects.create(name=name)
call_command('waffle_delete', switch_names=[name])
self.assertEqual(Switch.objects.count(), 0)
def test_delete_sample(self):
""" The command should delete a sample. """
name = 'test_sample'
Sample.objects.create(name=name, percent=0)
call_command('waffle_delete', sample_names=[name])
self.assertEqual(Sample.objects.count(), 0)
def test_delete_mix_of_types(self):
""" The command should delete different types of records. """
name = 'test'
get_waffle_flag_model().objects.create(name=name)
Switch.objects.create(name=name)
Sample.objects.create(name=name, percent=0)
call_command('waffle_delete', switch_names=[name], flag_names=[name],
sample_names=[name])
self.assertEqual(get_waffle_flag_model().objects.count(), 0)
self.assertEqual(Switch.objects.count(), 0)
self.assertEqual(Sample.objects.count(), 0)
def test_delete_some_but_not_all_records(self):
""" The command should delete specified records, but leave records
not specified alone. """
flag_1 = 'test_flag_1'
flag_2 = 'test_flag_2'
get_waffle_flag_model().objects.create(name=flag_1)
get_waffle_flag_model().objects.create(name=flag_2)
call_command('waffle_delete', flag_names=[flag_1])
self.assertTrue(get_waffle_flag_model().objects.filter(name=flag_2).exists())
| 38.568389 | 85 | 0.638033 | 1,458 | 12,689 | 5.386145 | 0.09465 | 0.046352 | 0.054119 | 0.048389 | 0.782631 | 0.749395 | 0.665733 | 0.613651 | 0.556985 | 0.491914 | 0 | 0.004491 | 0.245409 | 12,689 | 328 | 86 | 38.685976 | 0.815666 | 0.084246 | 0 | 0.613445 | 0 | 0 | 0.098267 | 0 | 0 | 0 | 0 | 0 | 0.315126 | 1 | 0.092437 | false | 0 | 0.029412 | 0 | 0.138655 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7f055b39eb9d9bedda5195c6c85467b444c1804f | 799 | py | Python | setup.py | Fragalli/brFinance | f06d7b148d20d07361c89158837d47225c4fea1f | [
"MIT"
] | 1 | 2021-07-19T19:12:11.000Z | 2021-07-19T19:12:11.000Z | setup.py | Fragalli/brFinance | f06d7b148d20d07361c89158837d47225c4fea1f | [
"MIT"
] | 1 | 2021-07-22T13:52:30.000Z | 2021-07-22T13:52:30.000Z | setup.py | Fragalli/brFinance | f06d7b148d20d07361c89158837d47225c4fea1f | [
"MIT"
] | null | null | null | from setuptools import setup, find_packages
setup(
name='brfinance',
version='0.1',
packages=find_packages(exclude=['tests*']),
license='MIT',
description='A Python package for webscraping financial data brazilian sources such as CVM, Banco Central, B3, ANBIMA, etc.',
long_description=open('README.md').read(),
install_requires=['numpy', 'beautifulsoup4', 'bs4', 'certifi', 'charset-normalizer', 'colorama', 'configparser', 'crayons', 'fake-useragent',
                      'idna', 'lxml', 'pandas', 'python-dateutil', 'pytz', 'requests', 'selenium', 'six', 'soupsieve', 'urllib3', 'webdriver-manager'],
url='https://github.com/BillMills/python-package-example',
author='Eudes Rodrigo Nunes de Oliveira',
author_email='eudesrodrigo@outlook.com'
)
| 49.9375 | 160 | 0.679599 | 89 | 799 | 6.044944 | 0.876404 | 0.04461 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008837 | 0.150188 | 799 | 15 | 161 | 53.266667 | 0.783505 | 0 | 0 | 0 | 0 | 0.071429 | 0.530663 | 0.030038 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.071429 | 0 | 0.071429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7f094e91b9871860a612632460408191babfa754 | 788 | py | Python | snakeoil/migrations/0001_initial.py | wbcsmarteezgithub/django-snakeoil | ae1a8dab9e14194e48963101ff3349f45aee0ccf | [
"BSD-2-Clause"
] | 1 | 2020-07-03T15:52:25.000Z | 2020-07-03T15:52:25.000Z | snakeoil/migrations/0001_initial.py | wbcsmarteezgithub/django-snakeoil | ae1a8dab9e14194e48963101ff3349f45aee0ccf | [
"BSD-2-Clause"
] | null | null | null | snakeoil/migrations/0001_initial.py | wbcsmarteezgithub/django-snakeoil | ae1a8dab9e14194e48963101ff3349f45aee0ccf | [
"BSD-2-Clause"
] | null | null | null | # Generated by Django 2.2 on 2020-03-30 15:03
from django.db import migrations, models
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='SeoUrl',
fields=[
('head_title', models.CharField(blank=True, max_length=55, verbose_name='head title')),
('meta_description', models.TextField(blank=True, max_length=160, verbose_name='meta description')),
('url', models.CharField(max_length=255, primary_key=True, serialize=False, unique=True, verbose_name='URL')),
],
options={
'verbose_name': 'SEO URL',
'verbose_name_plural': 'SEO URLs',
},
),
]
| 29.185185 | 126 | 0.574873 | 82 | 788 | 5.378049 | 0.573171 | 0.124717 | 0.054422 | 0.081633 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.039927 | 0.300761 | 788 | 26 | 127 | 30.307692 | 0.760436 | 0.054569 | 0 | 0 | 1 | 0 | 0.148048 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.052632 | 0 | 0.263158 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7f0b21821574f29c48de13330ac73d4176385f22 | 2,211 | py | Python | mediafeed/databases/files.py | media-feed/mediafeed | c2fb37b20a5bc41a4299193fa9b11f8a3e3b2acf | [
"MIT"
] | null | null | null | mediafeed/databases/files.py | media-feed/mediafeed | c2fb37b20a5bc41a4299193fa9b11f8a3e3b2acf | [
"MIT"
] | null | null | null | mediafeed/databases/files.py | media-feed/mediafeed | c2fb37b20a5bc41a4299193fa9b11f8a3e3b2acf | [
"MIT"
] | null | null | null | import os
from shutil import rmtree
from ..settings import DATA_PATH
class Thumbnail(object):
def __init__(self, model):
self.model = model
self.module = model.module
self.filename = os.path.join('thumbnails', model.__tablename__, self.module.id, model.id)
self.full_filename = os.path.join(DATA_PATH, self.filename)
self.url = model.thumbnail_url
def __repr__(self):
return '<Thumbnail "%s">' % self.filename
def __bool__(self):
return bool(self.path)
@property
def path(self):
if self.local_path:
return self.filename
if self.online_path:
return self.online_path
return ''
@property
def local_path(self):
if os.path.exists(self.full_filename):
return self.full_filename
@property
def online_path(self):
if self.url:
return self.url
def download(self, options=None):
if options is None:
options = getattr(self.model, 'options', None)
if self.url:
self.module.get_thumbnail(self.full_filename, self.url, options)
def remove(self):
local_path = self.local_path
if local_path:
os.remove(local_path)
def get_media_path(item):
return os.path.join(DATA_PATH, 'medias', item.module_id, item.id)
def get_medias(item):
try:
return {Media(item, filename) for filename in os.listdir(get_media_path(item))}
except FileNotFoundError:
return set()
def remove_medias(item):
try:
rmtree(get_media_path(item))
except FileNotFoundError:
pass
class Media(object):
def __init__(self, item, media_filename):
self.item = item
self.media_filename = media_filename
self.filename = os.path.join('medias', item.module_id, item.id, media_filename)
self.full_filename = os.path.join(DATA_PATH, self.filename)
def __repr__(self):
return '<Media "%s:%s" "%s">' % (self.item.module_id, self.item.id, self.media_filename)
def remove(self):
full_filename = self.full_filename
if os.path.exists(full_filename):
os.remove(full_filename)
| 26.638554 | 97 | 0.635911 | 286 | 2,211 | 4.692308 | 0.174825 | 0.080477 | 0.083458 | 0.053651 | 0.208644 | 0.162444 | 0.068554 | 0.068554 | 0.068554 | 0.068554 | 0 | 0 | 0.259159 | 2,211 | 82 | 98 | 26.963415 | 0.819292 | 0 | 0 | 0.241935 | 0 | 0 | 0.029398 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.225806 | false | 0.016129 | 0.048387 | 0.064516 | 0.483871 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
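`Thumbnail.path` above cascades from a cached local file to the remote URL, and `__bool__` reports whether any usable path exists. The same fallback can be sketched in isolation (class name and file paths here are hypothetical):

```python
import os

class PathFallback:
    """Prefer a cached local file; otherwise fall back to a remote URL."""

    def __init__(self, local_file, url):
        self.local_file = local_file
        self.url = url

    @property
    def path(self):
        if os.path.exists(self.local_file):
            return self.local_file   # cached copy wins
        return self.url or ''        # else remote URL, else empty string

    def __bool__(self):
        # Truthiness mirrors whether any usable path exists,
        # like Thumbnail.__bool__ delegating to bool(self.path).
        return bool(self.path)


t = PathFallback('/nonexistent/thumb.png', 'http://example.com/t.png')
print(t.path)  # local file missing, so the URL is returned
print(bool(PathFallback('/nonexistent/x', '')))  # -> False, nothing available
```

Keeping the fallback behind a property means callers can treat the object like a plain value while the lookup order stays in one place.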
7f0c54650b946eb5ed1cdc2900d388cb5381d2f3 | 1,998 | py | Python | src/robusta/core/triggers/error_event_trigger.py | Rutam21/robusta | 7c918d96362f607488c0e7e0056f436a06dce4ae | [
"MIT"
] | null | null | null | src/robusta/core/triggers/error_event_trigger.py | Rutam21/robusta | 7c918d96362f607488c0e7e0056f436a06dce4ae | [
"MIT"
] | null | null | null | src/robusta/core/triggers/error_event_trigger.py | Rutam21/robusta | 7c918d96362f607488c0e7e0056f436a06dce4ae | [
"MIT"
] | 1 | 2022-02-20T01:49:36.000Z | 2022-02-20T01:49:36.000Z | from ..discovery.top_service_resolver import TopServiceResolver
from ...core.playbooks.base_trigger import TriggerEvent
from ...integrations.kubernetes.autogenerated.triggers import EventAllChangesTrigger, EventChangeEvent
from ...integrations.kubernetes.base_triggers import K8sTriggerEvent
from ...utils.rate_limiter import RateLimiter
class WarningEventTrigger(EventAllChangesTrigger):
rate_limit: int = 3600
def __init__(self,
name_prefix: str = None,
namespace_prefix: str = None,
labels_selector: str = None,
rate_limit: int = 3600,
):
super().__init__(
name_prefix=name_prefix,
namespace_prefix=namespace_prefix,
labels_selector=labels_selector,
)
self.rate_limit = rate_limit
def should_fire(self, event: TriggerEvent, playbook_id: str):
should_fire = super().should_fire(event, playbook_id)
if not should_fire:
return should_fire
if not isinstance(event, K8sTriggerEvent):
return False
exec_event = self.build_execution_event(event, [])
if not isinstance(exec_event, EventChangeEvent):
return False
if not exec_event.obj or not exec_event.obj.involvedObject:
return False
if exec_event.get_event().type != "Warning":
return False
# Perform a rate limit for this service key according to the rate_limit parameter
name = exec_event.obj.involvedObject.name
namespace = exec_event.obj.involvedObject.namespace if exec_event.obj.involvedObject.namespace else ""
        service_key = TopServiceResolver.guess_service_key(name=name, namespace=namespace)
return RateLimiter.mark_and_test(
f"WarningEventTrigger_{playbook_id}_{exec_event.obj.reason}",
service_key if service_key else namespace + ":" + name,
self.rate_limit,
)
| 38.423077 | 110 | 0.669169 | 215 | 1,998 | 5.953488 | 0.344186 | 0.063281 | 0.05625 | 0.08125 | 0.054688 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006761 | 0.25976 | 1,998 | 51 | 111 | 39.176471 | 0.858688 | 0.03954 | 0 | 0.1 | 1 | 0 | 0.033907 | 0.029734 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0.125 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
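`RateLimiter.mark_and_test` above gates repeated firings per service key. A minimal in-memory sketch of such a limiter, assuming it returns True only when at least `period` seconds have passed since the same key last fired (the real Robusta implementation may differ; the `now` parameter is added here for deterministic testing):

```python
import time

class SimpleRateLimiter:
    """Allow a (group, key) pair to fire at most once per `period` seconds."""

    _last_fired = {}

    @classmethod
    def mark_and_test(cls, group, key, period, now=None):
        now = time.time() if now is None else now
        full_key = (group, key)
        last = cls._last_fired.get(full_key)
        if last is not None and now - last < period:
            return False  # still inside the rate-limit window
        cls._last_fired[full_key] = now  # record this firing
        return True


print(SimpleRateLimiter.mark_and_test('WarningEventTrigger', 'ns:pod', 3600, now=0))    # -> True
print(SimpleRateLimiter.mark_and_test('WarningEventTrigger', 'ns:pod', 3600, now=100))  # -> False
```

Scoping the key on `(playbook_id, service_key)` as the trigger does means one noisy service cannot suppress warnings for every other service in the cluster.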
7f13a49ab964463c1066e45f31d90b6e23b47055 | 1,326 | py | Python | apps/order/migrations/0002_auto_20180710_0937.py | jakejie/ShopPro | f0cec134ae77f4449f15a0219123d6a6bce2aad2 | [
"Apache-2.0"
] | 1 | 2019-04-20T16:58:02.000Z | 2019-04-20T16:58:02.000Z | apps/order/migrations/0002_auto_20180710_0937.py | jakejie/ShopPro | f0cec134ae77f4449f15a0219123d6a6bce2aad2 | [
"Apache-2.0"
] | 6 | 2020-06-05T19:57:58.000Z | 2021-09-08T00:49:17.000Z | apps/order/migrations/0002_auto_20180710_0937.py | jakejie/ShopPro | f0cec134ae77f4449f15a0219123d6a6bce2aad2 | [
"Apache-2.0"
] | 1 | 2021-09-10T18:29:28.000Z | 2021-09-10T18:29:28.000Z | # Generated by Django 2.0.7 on 2018-07-10 09:37
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('product', '0001_initial'),
('order', '0001_initial'),
]
operations = [
migrations.AddField(
model_name='orderlist',
name='user',
            field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL, verbose_name='user'),
),
migrations.AddField(
model_name='orderitem',
name='orderNum',
            field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='order.OrderList', verbose_name='order number'),
),
migrations.AddField(
model_name='orderitem',
name='product',
            field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='product.Product', verbose_name='product'),
),
migrations.AddField(
model_name='logistics',
name='orderNum',
            field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='order.OrderList', verbose_name='order number'),
),
]
| 33.15 | 129 | 0.631222 | 143 | 1,326 | 5.72028 | 0.342657 | 0.05868 | 0.085575 | 0.134474 | 0.484108 | 0.484108 | 0.391198 | 0.391198 | 0.391198 | 0.391198 | 0 | 0.022931 | 0.24359 | 1,326 | 39 | 130 | 34 | 0.792622 | 0.033937 | 0 | 0.4375 | 1 | 0 | 0.120407 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.09375 | 0 | 0.21875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7f19df6d90e4eaf71ba87a40f7d247e20f11c992 | 6,625 | py | Python | kunquat/tracker/ui/model/keymapmanager.py | kagu/kunquat | 83a2e972121e6a114ecc5ef4392b501ce926bb06 | [
"CC0-1.0"
] | 13 | 2016-09-01T21:52:49.000Z | 2022-03-24T06:07:20.000Z | kunquat/tracker/ui/model/keymapmanager.py | kagu/kunquat | 83a2e972121e6a114ecc5ef4392b501ce926bb06 | [
"CC0-1.0"
] | 290 | 2015-03-14T10:59:25.000Z | 2022-03-20T08:32:17.000Z | kunquat/tracker/ui/model/keymapmanager.py | kagu/kunquat | 83a2e972121e6a114ecc5ef4392b501ce926bb06 | [
"CC0-1.0"
] | 7 | 2015-03-19T13:28:11.000Z | 2019-09-03T16:21:16.000Z | # -*- coding: utf-8 -*-
#
# Authors: Toni Ruottu, Finland 2014
# Tomi Jylhä-Ollila, Finland 2016-2019
#
# This file is part of Kunquat.
#
# CC0 1.0 Universal, http://creativecommons.org/publicdomain/zero/1.0/
#
# To the extent possible under law, Kunquat Affirmers have waived all
# copyright and related or neighboring rights to Kunquat.
#
class HitKeymapID:
pass
class KeyboardAction():
NOTE = 'note'
OCTAVE_DOWN = 'octave_down'
OCTAVE_UP = 'octave_up'
PLAY = 'play'
REST = 'rest'
SILENCE = 'silence'
def __init__(self, action_type):
super().__init__()
self.action_type = action_type
def __eq__(self, other):
return (self.action_type == other.action_type)
def __hash__(self):
return hash(self.action_type)
class KeyboardNoteAction(KeyboardAction):
def __init__(self, row, index):
super().__init__(KeyboardAction.NOTE)
self.row = row
self.index = index
def _get_fields(self):
return (self.action_type, self.row, self.index)
def __eq__(self, other):
if not isinstance(other, KeyboardNoteAction):
return False
return (self._get_fields() == other._get_fields())
def __hash__(self):
return hash(self._get_fields())
_hit_keymap = {
'is_hit_keymap': True,
'name': 'Hits',
'keymap': [
[0, 7, 1, 8, 2, 9, 3, 10, 4, 11, 5, 12, 6, 13,
14, 23, 15, 24, 16, 25, 17, 26, 18, 27, 19, 28, 20, 29, 21, 30, 22, 31],
[32, 39, 33, 40, 34, 41, 35, 42, 36, 43, 37, 44, 38, 45,
46, 55, 47, 56, 48, 57, 49, 58, 50, 59, 51, 60, 52, 61, 53, 62, 54, 63],
[64, 71, 65, 72, 66, 73, 67, 74, 68, 75, 69, 76, 70, 77,
78, 87, 79, 88, 80, 89, 81, 90, 82, 91, 83, 92, 84, 93, 85, 94, 86, 95],
[96, 103, 97, 104, 98, 105, 99, 106, 100, 107, 101, 108, 102, 109,
110, 119, 111, 120, 112, 121, 113, 122, 114, 123, 115, 124, 116, 125,
117, 126, 118, 127],
],
}
class KeymapManager():
def __init__(self):
self._session = None
self._updater = None
self._ui_model = None
def set_controller(self, controller):
self._session = controller.get_session()
self._updater = controller.get_updater()
self._share = controller.get_share()
def set_ui_model(self, ui_model):
self._ui_model = ui_model
def _are_keymap_actions_valid(self, actions):
ka = KeyboardAction
req_single_action_types = (
ka.OCTAVE_DOWN, ka.OCTAVE_UP, ka.PLAY, ka.REST, ka.SILENCE)
for action_type in req_single_action_types:
if ka(action_type) not in actions:
return False
note_actions = set(a for a in actions if a.action_type == ka.NOTE)
if len(note_actions) != 33:
return False
return True
def get_keyboard_row_sizes(self):
# The number of buttons provided for configuration on each row
# On a QWERTY layout, the leftmost buttons are: 1, Q, A, Z
return (11, 11, 11, 10)
def get_typewriter_row_sizes(self):
return (9, 10, 7, 7)
def get_typewriter_row_offsets(self):
return (1, 0, 1, 0)
def _is_row_layout_valid(self, locs):
row_sizes = self.get_keyboard_row_sizes()
used_locs = set()
for loc in locs:
if loc in used_locs:
return False
used_locs.add(loc)
row, index = loc
if not (0 <= row < len(row_sizes)):
return False
if not (0 <= index < row_sizes[row]):
return False
return True
def set_key_actions(self, actions):
assert self._is_row_layout_valid(actions.keys())
assert self._are_keymap_actions_valid(actions.values())
self._session.keyboard_key_actions = actions
action_locations = { act: loc for (loc, act) in actions.items() }
self._session.keyboard_action_locations = action_locations
def set_key_names(self, names):
assert self._is_row_layout_valid(names.keys())
self._session.keyboard_key_names = names
def set_scancode_locations(self, codes_to_locs):
assert self._is_row_layout_valid(codes_to_locs.values())
self._session.keyboard_scancode_locations = codes_to_locs
def set_key_id_locations(self, ids_to_locs):
assert self._is_row_layout_valid(ids_to_locs.values())
self._session.keyboard_id_locations = ids_to_locs
def get_scancode_location(self, code):
return self._session.keyboard_scancode_locations.get(code, None)
def get_key_id_location(self, key_id):
return self._session.keyboard_id_locations.get(key_id, None)
def get_key_action(self, location):
return self._session.keyboard_key_actions.get(location, None)
def get_key_name(self, location):
return self._session.keyboard_key_names.get(location, None)
def get_action_location(self, action):
if not isinstance(action, KeyboardAction):
assert action != KeyboardAction.NOTE
action = KeyboardAction(action)
return self._session.keyboard_action_locations.get(action, None)
def _get_keymap_ids(self):
notation_mgr = self._ui_model.get_notation_manager()
keymap_ids = notation_mgr.get_notation_ids()
keymap_ids.append(HitKeymapID)
return keymap_ids
def _get_some_keymap_id(self):
keymap_ids = self._get_keymap_ids()
if len(keymap_ids) < 2:
return HitKeymapID
some_id = sorted(keymap_ids)[1]
return some_id
def get_selected_keymap_id(self):
selected_id = self._session.get_selected_notation_id() or self._get_some_keymap_id()
return selected_id
def is_hit_keymap_active(self):
return self._session.is_hit_keymap_active()
def get_selected_keymap(self):
if self.is_hit_keymap_active():
return _hit_keymap
notation_mgr = self._ui_model.get_notation_manager()
notation = notation_mgr.get_selected_notation()
return notation.get_keymap()
def set_hit_keymap_active(self, active):
self._session.set_hit_keymap_active(active)
keymap_data = self.get_selected_keymap()
if keymap_data.get('is_hit_keymap', False):
base_octave = 0
else:
base_octave = keymap_data['base_octave']
typewriter_mgr = self._ui_model.get_typewriter_manager()
typewriter_mgr.set_octave(base_octave)
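The location check above can be lifted out of the class into a standalone sketch. This mirrors the logic of `KeymapManager._is_row_layout_valid`: every `(row, index)` location must be unique, the row must exist, and the index must fit that row's button count. The default `row_sizes` repeats the tuple returned by `get_keyboard_row_sizes`.

```python
def is_row_layout_valid(locs, row_sizes=(11, 11, 11, 10)):
    """Return True iff every (row, index) location is unique and in range."""
    used_locs = set()
    for loc in locs:
        if loc in used_locs:  # duplicate location
            return False
        used_locs.add(loc)
        row, index = loc
        if not (0 <= row < len(row_sizes)):  # row does not exist
            return False
        if not (0 <= index < row_sizes[row]):  # index past this row's size
            return False
    return True
```

For example, `is_row_layout_valid([(0, 0), (3, 9)])` accepts the last button of the bottom row, while `(3, 10)` is rejected because that row only has 10 buttons.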
# ros/src/tl_detector/light_classification/tl_classifier.py (Kuan-HC/Udacity-CarND-Capstone, MIT)
from styx_msgs.msg import TrafficLight
import rospy
import tensorflow as tf
import numpy as np
import cv2
from PIL import Image
# for visualization
#import matplotlib.pyplot as plt
from PIL import ImageDraw
from PIL import ImageColor
import os
class TLClassifier(object):
def __init__(self):
#TODO load classifier
SSD_GRAPH_FILE = "light_classification/model/frozen_inference_graph.pb"
#self.session = None
self.detection_graph = self.__load_graph(os.path.abspath(SSD_GRAPH_FILE))
# visualization
cmap = ImageColor.colormap
self.COLOR_LIST = sorted([c for c in cmap.keys()])
# Create Tensor Session
self.sess = tf.Session(graph = self.detection_graph)
def __load_graph(self, graph_file):
"""Loads a frozen inference graph"""
graph = tf.Graph()
with graph.as_default():
od_graph_def = tf.GraphDef()
with tf.gfile.GFile(graph_file, 'rb') as fid:
serialized_graph = fid.read()
od_graph_def.ParseFromString(serialized_graph)
tf.import_graph_def(od_graph_def, name='')
rospy.loginfo("model loaded")
return graph
def get_classification(self, cv2_img):
"""Determines the color of the traffic light in the image
Args:
image (cv::Mat): image containing the traffic light
Returns:
int: ID of traffic light color (specified in styx_msgs/TrafficLight)
"""
#TODO implement light color prediction
#image = cv2.resize(image, (300, 300))
cv2_img = cv2.cvtColor(cv2_img, cv2.COLOR_BGR2RGB)
image = Image.fromarray(cv2_img)
image_np = np.expand_dims(np.asarray(image, dtype=np.uint8), 0)
# The input placeholder for the image. get_tensor_by_name returns the Tensor with the associated name in the Graph.
image_tensor = self.detection_graph.get_tensor_by_name('image_tensor:0')
# Each box represents a part of the image where a particular object was detected.
detection_boxes = self.detection_graph.get_tensor_by_name('detection_boxes:0')
# Each score represents the level of confidence for each of the objects.
# Score is shown on the result image, together with the class label.
detection_scores = self.detection_graph.get_tensor_by_name('detection_scores:0')
# The classification of the object (integer id).
detection_classes = self.detection_graph.get_tensor_by_name('detection_classes:0')
# Actual detection.
(boxes, scores, classes) = self.sess.run([detection_boxes, detection_scores, detection_classes],
feed_dict={image_tensor: image_np})
# Remove unnecessary dimensions
boxes = np.squeeze(boxes)
scores = np.squeeze(scores)
classes = np.squeeze(classes)
confidence_cutoff = 0.55 # was 0.8
# Filter boxes with a confidence score less than `confidence_cutoff`
boxes, scores, classes = self.__filter_boxes(confidence_cutoff, boxes, scores, classes)
rospy.loginfo("Object Detector output")
# The current box coordinates are normalized to a range between 0 and 1.
# This converts the coordinates to actual locations on the image.
width, height = image.size
box_coords = self.__to_image_coords(boxes, height, width)
light_state = TrafficLight.UNKNOWN
if len(classes)>0:
light_state = self.__classifier(cv2_img, box_coords, classes)
rospy.loginfo("Traffic light from Detector: %d" %light_state)
return light_state
def __filter_boxes(self, min_score, boxes, scores, classes):
"""Return boxes with a confidence >= `min_score`"""
n = len(classes)
idxs = []
for i in range(n):
if scores[i] >= min_score:
idxs.append(i)
filtered_boxes = boxes[idxs, ...]
filtered_scores = scores[idxs, ...]
filtered_classes = classes[idxs, ...]
return filtered_boxes, filtered_scores, filtered_classes
def __to_image_coords(self, boxes, height, width):
"""
The original box coordinate output is normalized, i.e [0, 1].
This converts it back to the original coordinate based on the image
size.
"""
box_coords = np.zeros_like(boxes)
box_coords[:, 0] = boxes[:, 0] * height
box_coords[:, 1] = boxes[:, 1] * width
box_coords[:, 2] = boxes[:, 2] * height
box_coords[:, 3] = boxes[:, 3] * width
return box_coords
def __classifier(self, image, boxes, classes):
traffic_counter = 0
predict_sum = 0.
for i in range(len(boxes)):
if (classes[i]==10):
traffic_counter += 1
bot, left, top, right = boxes[i, ...]
crop_image = image[int(bot):int(top), int(left):int(right)]
'''
Traffic Light classifier - project from intro to self driving cars
'''
predict_sum += self.__estimate_label(crop_image)
# traffic_counter == 0 means no object was detected as a traffic light
if (traffic_counter !=0):
avg = predict_sum/traffic_counter
rospy.loginfo("This group brightness value: %f" %avg)
else:
avg = 0
'''
Traffic light definition in UNKNOWN=4
GREEN=2 YELLOW=1 RED=0
'''
if (avg > 1.0):
return TrafficLight.RED
else:
return TrafficLight.UNKNOWN
'''
Traffic Light classifier - reuse project from intro-to-self-driving-cars
'''
def __estimate_label(self, rgb_image):
rgb_image = cv2.resize(rgb_image,(32,32))
test_image_hsv = cv2.cvtColor(np.array(rgb_image), cv2.COLOR_RGB2HSV)
# Mask HSV channel
masked_red = self.__mask_red(test_image_hsv, rgb_image)
Masked_R_V = self.__Masked_Image_Brightness(masked_red)
AVG_Masked_R = self.__AVG_Brightness(Masked_R_V)
return AVG_Masked_R
def __mask_red(self, HSV_image, rgb_image):
#red_mask_1 = cv2.inRange(HSV_image, (0,50,60), (10,255,255))
red_mask = cv2.inRange(HSV_image, (140,10,100), (180,255,255)) #was (140,36,100)
#red_mask = np.add(red_mask_1,red_mask_2)
masked_image = np.copy(rgb_image)
masked_image[red_mask == 0] = [0, 0, 0]
return masked_image
def __Masked_Image_Brightness(self, image):
masked_Image_HSV = cv2.cvtColor(image, cv2.COLOR_RGB2HSV)
masked_Image_V = masked_Image_HSV[:,:,2]
return masked_Image_V
def __AVG_Brightness(self, image):
height, width = image.shape
brightness_avg = np.sum(image)/(height*width)
return brightness_avg
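Two of the post-processing steps above can be sketched without TensorFlow or numpy. This is a plain-Python restatement of what `__filter_boxes` (drop detections below the confidence cutoff) and `__to_image_coords` (scale normalized `[0, 1]` coordinates back to pixels) do; the real methods perform the same operations on numpy arrays.

```python
def filter_boxes(min_score, boxes, scores, classes):
    """Keep only the detections whose score is at least min_score."""
    keep = [i for i, s in enumerate(scores) if s >= min_score]
    return ([boxes[i] for i in keep],
            [scores[i] for i in keep],
            [classes[i] for i in keep])

def to_image_coords(boxes, height, width):
    """Scale normalized (ymin, xmin, ymax, xmax) boxes to pixel coordinates."""
    return [(ymin * height, xmin * width, ymax * height, xmax * width)
            for (ymin, xmin, ymax, xmax) in boxes]
```

With a 0.55 cutoff, a detection scored 0.3 is dropped before its box is ever converted, which is why the coordinate scaling in the class only runs over the filtered set.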
# template/plugin/datafile.py (vaMuchenje/Template-Python, Artistic-2.0)
#
# The Template-Python distribution is Copyright (C) Sean McAfee 2007-2008,
# derived from the Perl Template Toolkit Copyright (C) 1996-2007 Andy
# Wardley. All Rights Reserved.
#
# The file "LICENSE" at the top level of this source distribution describes
# the terms under which this file may be distributed.
#
import re
from template.plugin import Plugin
from template.util import Sequence
"""
template.plugin.datafile - Plugin to construct records from a simple data file
SYNOPSIS
[% USE mydata = datafile('/path/to/datafile') %]
[% USE mydata = datafile('/path/to/datafile', delim = '|') %]
[% FOREACH record = mydata %]
[% record.this %] [% record.that %]
[% END %]
DESCRIPTION
This plugin provides a simple facility to construct a list of
dictionaries, each of which represents a data record of known
structure, from a data file.
[% USE datafile(filename) %]
A absolute filename must be specified (for this initial implementation
at least - in a future version it might also use the INCLUDE_PATH).
An optional 'delim' parameter may also be provided to specify an
alternate delimiter character.
[% USE userlist = datafile('/path/to/file/users') %]
[% USE things = datafile('items', delim = '|') %]
The format of the file is intentionally simple. The first line
defines the field names, delimited by colons with optional surrounding
whitespace. Subsequent lines then defines records containing data
items, also delimited by colons. e.g.
id : name : email : tel
abw : Andy Wardley : abw@cre.canon.co.uk : 555-1234
neilb : Neil Bowers : neilb@cre.canon.co.uk : 555-9876
Each line is read, split into composite fields, and then used to
initialise a dictionary containing the field names as relevant keys.
The plugin returns an object that encapsulates the dictionaries in the
order as defined in the file.
[% FOREACH user = userlist %]
[% user.id %]: [% user.name %]
[% END %]
The first line of the file MUST contain the field definitions. After
the first line, blank lines will be ignored, along with comment line
which start with a '#'.
BUGS
Should handle file names relative to INCLUDE_PATH.
Doesn't permit use of ':' in a field. Some escaping mechanism is required.
"""
class Datafile(Plugin, Sequence):
"""Template Toolkit Plugin which reads a datafile and constructs a
list object containing hashes representing records in the file.
"""
def __init__(self, context, filename, params=None):
Plugin.__init__(self)
params = params or {}
delim = params.get("delim") or ":"
items = []
line = None
names = None
splitter = re.compile(r'\s*%s\s*' % re.escape(delim))
try:
f = open(filename)
except IOError as e:
return self.fail("%s: %s" % (filename, e))
for line in f:
line = line.rstrip("\n\r")
if not line or line.startswith("#") or line.isspace():
continue
fields = splitter.split(line)
if names is None:
names = fields
else:
fields.extend([None] * (len(names) - len(fields)))
items.append(dict(zip(names, fields)))
f.close()
self.items = items
def __iter__(self):
return iter(self.items)
def as_list(self):
return self.items
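The parse loop in `__init__` can be restated as a small standalone function, which makes the file format described in the docstring concrete: the first non-blank, non-comment line names the fields, and each later line becomes a dict keyed by those names, padded with `None` when a record has fewer fields than the header. This is a sketch of the same logic, not the plugin API itself.

```python
import re

def parse_datafile_lines(lines, delim=':'):
    """Parse header + record lines into a list of dicts, one per record."""
    splitter = re.compile(r'\s*%s\s*' % re.escape(delim))
    names, items = None, []
    for line in lines:
        line = line.rstrip('\r\n')
        if not line or line.startswith('#') or line.isspace():
            continue  # skip blanks and comments (after the header line)
        fields = splitter.split(line)
        if names is None:
            names = fields  # first significant line defines the field names
        else:
            # pad short records so zip() covers every field name
            fields.extend([None] * (len(names) - len(fields)))
            items.append(dict(zip(names, fields)))
    return items
```

Feeding it the `id : name` example from the docstring yields one dict per data line, with missing trailing fields mapped to `None`.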
# sqlalchemy_jsonapi/constants.py (jimbobhickville/sqlalchemy-jsonapi, MIT)
"""
SQLAlchemy-JSONAPI
Constants
Colton J. Provias
MIT License
"""
# Note: on Python < 3.4 the enum34 backport provides this module; it
# installs under the name `enum`, so a plain import covers both cases
# (the old `from enum34 import Enum` fallback could never succeed).
from enum import Enum
class Method(Enum):
""" HTTP Methods used by JSON API """
GET = 'GET'
POST = 'POST'
PATCH = 'PATCH'
DELETE = 'DELETE'
class Endpoint(Enum):
""" Four paths specified in JSON API """
COLLECTION = '/<api_type>'
RESOURCE = '/<api_type>/<obj_id>'
RELATED = '/<api_type>/<obj_id>/<relationship>'
RELATIONSHIP = '/<api_type>/<obj_id>/relationships/<relationship>'
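A hedged usage sketch of how a dispatcher might pair these two enums when registering JSON API routes. The enums are repeated here (abridged) so the sketch runs standalone, and `build_route_table` plus the `'list_view'` handler name are illustrative, not part of this module.

```python
from enum import Enum

class Method(Enum):
    """HTTP methods used by JSON API (mirror of the module's enum)."""
    GET = 'GET'
    POST = 'POST'

class Endpoint(Enum):
    """JSON API paths (mirror, abridged)."""
    COLLECTION = '/<api_type>'
    RESOURCE = '/<api_type>/<obj_id>'

def build_route_table(handlers):
    """Map (Endpoint, Method) pairs to handlers, keyed by their raw values."""
    return {(endpoint.value, method.value): handler
            for (endpoint, method), handler in handlers.items()}
```

Keying the table by `.value` rather than the enum members themselves is what a URL router that only sees raw path and method strings would need.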
# languages/Natlab/src/natlab/tame/builtin/gen/classProp.py (dherre3/mclab-core, Apache-2.0)
# DEPRECATED - THERE IS A PARSER FOR THE CLASS LANGUAGE IN JAVA NOW
# TODO -delete
# processing Class tag - class propagation language
import processTags
import sys
# definition of the class propagation language - in a dictionary
# helper method - converts numbers to a MatlabClassVar
def convertNum(a): return CPNum(a) if isinstance(a, (long, int)) else a;
# 1) python classes
# general matlab class used by the class tag (CP = ClassProp..Info) - defines the operators
class CP():
def __or__ (self, other): return CPUnion(convertNum(self),convertNum(other));
def __ror__ (self, other): return CPUnion(convertNum(self),convertNum(other));
def __and__ (self, other): return CPChain(convertNum(self),convertNum(other));
def __rand__(self, other): return CPChain(convertNum(other),convertNum(self));
def __gt__ (self, other): return CPMap(convertNum(self),convertNum(other));
def __lt__ (self, other): return CPMap(convertNum(other),convertNum(self));
def __repr__(self): return str(self);
# <class1> - represents Matlab builtin class
class CPBuiltin(CP):
def __init__(self,name): self.name = name;
def __str__ (self): return self.name;
def toJava(self): return 'new CPBuiltin("'+self.name+'")';
# class1 | class2 - multiple possibilities for one type
class CPUnion(CP):
def __init__(self,a,b): self.class1 = convertNum(a); self.class2 = convertNum(b);
def __str__ (self): return '('+str(self.class1)+'|'+str(self.class2)+')';
def toJava (self): return 'new CPUnion('+self.class1.toJava()+','+self.class2.toJava()+')';
# class1 & class2 - sequences of types
class CPChain(CP):
def __init__(self,a,b): self.class1 = convertNum(a); self.class2 = convertNum(b);
def __str__ (self): return '('+str(self.class1)+')&('+str(self.class2)+')';
def toJava (self): return 'new CPChain('+self.class1.toJava()+','+self.class2.toJava()+')';
# class1 > class2 - matches lhs, emits rhs
class CPMap(CP):
def __init__(self,a,b): self.args = convertNum(a); self.res = convertNum(b);
def __str__ (self): return str(self.args)+'>'+str(self.res);
def toJava (self): return 'new CPMap('+self.args.toJava()+','+self.res.toJava()+')';
# <n> - a specific other argument, defined by a number - negative is counted from back (i.e. -1 is last)
class CPNum(CP):
def __init__(self,num): self.num = num;
def __str__ (self): return str(self.num);
def toJava (self): return 'new CPNum('+str(self.num)+')';
# coerce(CP denoting the replacement expr for every argument, CP for the affected expr)
# example: coerce((char|logical)>double, (numerical&numerical)>double )
# TODO: this should be a McFunction
class CPCoerce(CP):
def __init__(self,replaceExpr,expr): self.replaceExpr=replaceExpr; self.expr=expr;
def __str__ (self): return 'coerce('+str(self.replaceExpr)+','+str(self.expr)+')'
def toJava (self): return 'new CPCoerce('+self.replaceExpr.toJava()+','+self.expr.toJava()+')'
# unparametric expressions of the language - the string and java representation are given by the constructor
class CPNonParametric(CP):
def __init__(self,str,java): self.str = str; self.java = java;
def __str__(self): return self.str;
def toJava (self): return self.java;
# function of the form name(<expr>,<expr>,...) - the expresions, and string, java are given by the constructor
class CPFunction(CP):
def __init__(self,str,java,*exprs): self.exprs = exprs; self.java = java; self.str = str;
def __str__(self): return self.str+"("+','.join([str(e) for e in self.exprs])+")"
def toJava (self): return self.java+"("+','.join([e.toJava() for e in self.exprs])+")"
# 2) set up keywords of the language in a dictionary
# basic types:
lang = dict(double=CPBuiltin('double'),single=CPBuiltin('single'),char=CPBuiltin('char'),logical=CPBuiltin('logical'),
uint8=CPBuiltin('uint8'),uint16=CPBuiltin('uint16'),uint32=CPBuiltin('uint32'),uint64=CPBuiltin('uint64'),
int8=CPBuiltin('int8'),int16=CPBuiltin('int16'),int32=CPBuiltin('int32'),int64=CPBuiltin('int64'),
function_handle=CPBuiltin('function_handle'))
# union types
lang.update(dict(float=lang['single']|lang['double'], uint=(lang['uint8']|lang['uint16']|lang['uint32']|lang['uint64']),
sint=(lang['int8']|lang['int16']|lang['int32']|lang['int64'])));
lang['int'] = (lang['uint']|lang['sint']);
lang['numeric']= (lang['float']|lang['int']);
lang['matrix'] = (lang['numeric']|lang['char']|lang['logical']);
# non-parametric bits
lang['none'] = CPNonParametric('none', 'new CPNone()');
lang['end'] = CPNonParametric('end', 'new CPEnd()');
lang['begin'] = CPNonParametric('begin', 'new CPBegin()');
lang['any'] = CPNonParametric('any', 'new CPAny()');
lang['parent'] = CPNonParametric('parent','parentClassPropInfo'); # java code is the local variable
lang['error'] = CPNonParametric('error', 'new CPError()');
lang['natlab'] = CPNonParametric('class', 'getClassPropagationInfo()');
lang['matlab'] = CPNonParametric('class', 'getMatlabClassPropagationInfo()');
lang['scalar'] = CPNonParametric('scalar','new CPScalar()');
# other bits of the language
lang['coerce'] = lambda replaceExpr, expr: CPCoerce(replaceExpr,expr)
lang['opt'] = lambda expr: (expr|lang['none']) #note: op(x), being (x|none), will cause an error on the rhs
lang['not'] = lambda typesExpr: CPFunction('not','new CPNot',typesExpr)
lang['arg'] = lambda num : CPNum(num)
# todo - so far star only allows up to 10 repetitions
opt = lang['opt']
lang['star']= lambda expr: opt(expr)&opt(expr)&opt(expr)&opt(expr)&opt(expr)&opt(expr)&opt(expr)&opt(expr)&opt(expr)&opt(expr)
# functions
lang['typeString'] = lambda typesExpr: CPFunction('typeString','new CPTypeString',typesExpr)
# TODO - other possible language features
#variables?
#mult(x,[max],[min]) #tries to match as many as possible max,min may be 0
#matlab - allows matching the current matlab tree for a MatlabClass
# helper method - turns a sequence of CP objects into CPUnion objects
def tupleToCP(seq):
if len(seq) == 1: return seq[0]
return CPUnion(seq[0],tupleToCP(seq[1:]))
# produces the CP tree from the given tagArgs
def parseTagArgs(tagArgs,builtin):
# parse arg
try:
args = processTags.makeArgString(tagArgs);
tree = convertNum(eval(args,lang))
except:
sys.stderr.write(("ERROR: cannot parse/build class propagation information for builtin: "+builtin.name+"\ndef: "+tagArgs+"\n"));
raise
# turn tuple into chain of Unions
if isinstance(tree, tuple): tree = tupleToCP(tree)
return tree
# actual tag definition
def Class(builtin, tagArgs, iset):
# add the interface
iset.add("HasClassPropagationInfo");
# create CP tree
tree = parseTagArgs(tagArgs,builtin)
if (processTags.DEBUG):
print "Class args: ",tagArgs
print "tree: ", tree
#print "java: ", tree.toJava()
# java expr for parent info - find if tag 'Class' is defined for a parent
if (builtin.parent and builtin.parent.getAllTags().has_key('Class')):
parentInfo = 'super.getClassPropagationInfo()'
else:
parentInfo = 'new CPNone()'
# deal with the matlabClass info - check if there is a matlabClass tag defined - if not, emit the default
if (not builtin.getAllTags().has_key('MatlabClass')):
matlabClassMethod = """
public CP getMatlabClassPropagationInfo(){{
return getClassPropagationInfo();
}}
"""; # there's no explicit tag for matlab - just return the class info
else:
matlabClassMethod = ''; # emit nothing - the matlabClass tag will deal with it
# produce code
return matlabClassMethod+"""
private CP classPropInfo = null; //{tree};
public CP getClassPropagationInfo(){{
//set classPropInfo if not defined
if (classPropInfo == null){{
CP parentClassPropInfo = {parentInfo};
classPropInfo = {tree};
}}
return classPropInfo;
}}
""".format(tree=tree.toJava(), javaName=builtin.javaName, parentInfo=parentInfo);
# matlabClass tag definition
def MatlabClass(builtin, tagArgs, iset):
if not builtin.getAllTags().has_key('Class'):
raise Exception('MatlabClass tag defined for builtin '+builtin.name+', but there is no Class tag defined')
# create CP tree
tree = parseTagArgs(tagArgs,builtin)
if (processTags.DEBUG):
print "MatlabClass args: ",tagArgs
print "tree: ", tree
# java expr for parent info - find if tag 'Class' is defined for a parent
if (builtin.parent and builtin.parent.getAllTags().has_key('Class')):
parentInfo = 'super.getMatlabClassPropagationInfo()'
else:
parentInfo = 'new CPNone()'
# produce code
return """
private CP matlabClassPropInfo = null; //{tree};
public CP getMatlabClassPropagationInfo(){{
//set classPropInfo if not defined
if (matlabClassPropInfo == null){{
CP parentClassPropInfo = {parentInfo};
matlabClassPropInfo = {tree};
}}
return matlabClassPropInfo;
}}
""".format(tree=tree.toJava(), javaName=builtin.javaName, parentInfo=parentInfo);
# file eulxml/xmlmap/cerp.py (ig0774/eulxml, Apache-2.0)
#
# Copyright 2010,2011 Emory University Libraries
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import unicode_literals
import codecs
import datetime
import email
import logging
import os
import six
from eulxml import xmlmap
from eulxml.utils.compat import u
logger = logging.getLogger(__name__)
# CERP is described at http://siarchives.si.edu/cerp/ . XML spec available at
# http://www.records.ncdcr.gov/emailpreservation/mail-account/mail-account_docs.html
# schema resolves but appears to be empty as of April 2016
# Current schema : http://www.history.ncdcr.gov/SHRAB/ar/emailpreservation/mail-account/mail-account.xsd
# internally-reused and general-utility objects
#
class _BaseCerp(xmlmap.XmlObject):
'Common CERP namespace declarations'
ROOT_NS = 'http://www.archives.ncdcr.gov/mail-account'
ROOT_NAMESPACES = { 'xm': ROOT_NS }
class Parameter(_BaseCerp):
ROOT_NAME = 'Parameter'
name = xmlmap.StringField('xm:Name')
value = xmlmap.StringField('xm:Value')
def __str__(self):
return '%s=%s' % (self.name, self.value)
def __repr__(self):
return '<%s %s>' % (self.__class__.__name__, str(self))
class Header(_BaseCerp):
ROOT_NAME = 'Header'
name = xmlmap.StringField('xm:Name')
value = xmlmap.StringField('xm:Value')
comments = xmlmap.StringListField('xm:Comments')
def __str__(self):
return '%s: %s' % (self.name, self.value)
def __repr__(self):
return '<%s %s>' % (self.__class__.__name__, self.name)
class _BaseBody(_BaseCerp):
'''Common email header elements'''
content_type_list = xmlmap.StringListField('xm:ContentType')
charset_list = xmlmap.StringListField('xm:Charset')
content_name_list = xmlmap.StringListField('xm:ContentName')
content_type_comments_list = xmlmap.StringListField('xm:ContentTypeComments')
content_type_param_list = xmlmap.NodeListField('xm:ContentTypeParam', Parameter)
transfer_encoding_list = xmlmap.StringListField('xm:TransferEncoding')
transfer_encoding_comments_list = xmlmap.StringListField('xm:TransferEncodingComments')
content_id_list = xmlmap.StringListField('xm:ContentId')
content_id_comments_list = xmlmap.StringListField('xm:ContentIdComments')
description_list = xmlmap.StringListField('xm:Description')
description_comments_list = xmlmap.StringListField('xm:DescriptionComments')
disposition_list = xmlmap.StringListField('xm:Disposition')
disposition_file_name_list = xmlmap.StringListField('xm:DispositionFileName')
disposition_comments_list = xmlmap.StringListField('xm:DispositionComments')
disposition_params = xmlmap.NodeListField('xm:DispositionParams', Parameter)
other_mime_headers = xmlmap.NodeListField('xm:OtherMimeHeader', Header)
class Hash(_BaseCerp):
ROOT_NAME = 'Hash'
HASH_FUNCTION_CHOICES = [ 'MD5', 'WHIRLPOOL', 'SHA1', 'SHA224',
'SHA256', 'SHA384', 'SHA512', 'RIPEMD160']
value = xmlmap.StringField('xm:Value')
function = xmlmap.StringField('xm:Function',
choices=HASH_FUNCTION_CHOICES)
def __str__(self):
return self.value
def __repr__(self):
return '<%s %s>' % (self.__class__.__name__, self.function)
class _BaseExternal(_BaseCerp):
'''Common external entity reference elements'''
EOL_CHOICES = [ 'CR', 'LF', 'CRLF' ]
rel_path = xmlmap.StringField('xm:RelPath')
eol = xmlmap.StringField('xm:Eol', choices=EOL_CHOICES)
hash = xmlmap.NodeField('xm:Hash', Hash)
class _BaseContent(_BaseCerp):
'''Common content encoding elements'''
charset_list = xmlmap.StringListField('xm:CharSet')
transfer_encoding_list = xmlmap.StringListField('xm:TransferEncoding')
#
# messages and bodies
#
class BodyContent(_BaseContent):
ROOT_NAME = 'BodyContent'
content = xmlmap.StringField('xm:Content')
class ExtBodyContent(_BaseExternal, _BaseContent):
ROOT_NAME = 'ExtBodyContent'
local_id = xmlmap.IntegerField('xm:LocalId')
xml_wrapped = xmlmap.SimpleBooleanField('xm:XMLWrapped',
true='1', false='0')
class SingleBody(_BaseBody):
ROOT_NAME = 'SingleBody'
body_content = xmlmap.NodeField('xm:BodyContent', BodyContent)
ext_body_content = xmlmap.NodeField('xm:ExtBodyContent', ExtBodyContent)
child_message = xmlmap.NodeField('xm:ChildMessage', None) # this will be fixed below
@property
def content(self):
return self.body_content or \
self.ext_body_content or \
self.child_message
phantom_body = xmlmap.StringField('xm:PhantomBody')
class MultiBody(_BaseCerp):
ROOT_NAME = 'MultiBody'
preamble = xmlmap.StringField('xm:Preamble')
epilogue = xmlmap.StringField('xm:Epilogue')
single_body = xmlmap.NodeField('xm:SingleBody', SingleBody)
multi_body = xmlmap.NodeField('xm:MultiBody', 'self')
@property
def body(self):
return self.single_body or self.multi_body
class Incomplete(_BaseCerp):
ROOT_NAME = 'Incomplete'
error_type = xmlmap.StringField('xm:ErrorType')
error_location = xmlmap.StringField('xm:ErrorLocation')
def __repr__(self):
return '<%s %s>' % (self.__class__.__name__, self.error_type)
class _BaseMessage(_BaseCerp):
'''Common message elements'''
local_id = xmlmap.IntegerField('xm:LocalId')
message_id = xmlmap.StringField('xm:MessageId')
message_id_supplied = xmlmap.SimpleBooleanField('xm:MessageId/@Supplied',
true='1', false=None)
mime_version = xmlmap.StringField('xm:MimeVersion')
orig_date_list = xmlmap.StringListField('xm:OrigDate') # FIXME: really datetime
# NOTE: eulxml.xmlmap.DateTimeField supports specifying format,
# but we might need additional work since %z only works with
# strftime, not strptime
from_list = xmlmap.StringListField('xm:From')
sender_list = xmlmap.StringListField('xm:Sender')
to_list = xmlmap.StringListField('xm:To')
cc_list = xmlmap.StringListField('xm:Cc')
bcc_list = xmlmap.StringListField('xm:Bcc')
in_reply_to_list = xmlmap.StringListField('xm:InReplyTo')
references_list = xmlmap.StringListField('xm:References')
subject_list = xmlmap.StringListField('xm:Subject')
comments_list = xmlmap.StringListField('xm:Comments')
keywords_list = xmlmap.StringListField('xm:Keywords')
headers = xmlmap.NodeListField('xm:Header', Header)
single_body = xmlmap.NodeField('xm:SingleBody', SingleBody)
multi_body = xmlmap.NodeField('xm:MultiBody', MultiBody)
@property
def body(self):
return self.single_body or self.multi_body
incomplete_list = xmlmap.NodeListField('xm:Incomplete', Incomplete)
def __repr__(self):
return '<%s %s>' % (self.__class__.__name__,
self.message_id or self.local_id or '(no id)')
class Message(_BaseMessage, _BaseExternal):
"""A single email message in a :class:`Folder`."""
ROOT_NAME = 'Message'
STATUS_FLAG_CHOICES = [ 'Seen', 'Answered', 'Flagged', 'Deleted',
'Draft', 'Recent']
status_flags = xmlmap.StringListField('xm:StatusFlag',
choices=STATUS_FLAG_CHOICES)
@classmethod
def from_email_message(cls, message, local_id=None):
'''
Convert an :class:`email.message.Message` or compatible message
object into a CERP XML :class:`eulxml.xmlmap.cerp.Message`. If an
id is specified, it will be stored in the Message <LocalId>.
:param message: `email.message.Message` object
:param local_id: optional message id to be set as `local_id`
:returns: :class:`eulxml.xmlmap.cerp.Message` instance populated
with message information
'''
result = cls()
if local_id is not None:
result.local_id = local_id
message_id = message.get('Message-Id')
if message_id:
result.message_id_supplied = True
result.message_id = message_id
result.mime_version = message.get('MIME-Version')
dates = message.get_all('Date', [])
result.orig_date_list.extend([parse_mail_date(d) for d in dates])
result.from_list.extend(message.get_all('From', []))
result.sender_list.extend(message.get_all('Sender', []))
try:
result.to_list.extend(message.get_all('To', []))
except UnicodeError:
print(repr(message['To']))
raise
result.cc_list.extend(message.get_all('Cc', []))
result.bcc_list.extend(message.get_all('Bcc', []))
result.in_reply_to_list.extend(message.get_all('In-Reply-To', []))
result.references_list.extend(message.get_all('References', []))
result.subject_list.extend(message.get_all('Subject', []))
result.comments_list.extend(message.get_all('Comments', []))
result.keywords_list.extend(message.get_all('Keywords', []))
headers = [ Header(name=key, value=val) for key, val in message.items() ]
result.headers.extend(headers)
# FIXME: skip multipart messages for now
if not message.is_multipart():
result.create_single_body()
# FIXME: this is a small subset of the actual elements CERP allows.
# we should add the rest of them, too.
# message.get_content_type() always returns something. only
# put it in the CERP if a Content-Type was explicitly specified.
if message['Content-Type']:
result.single_body.content_type_list.append(message.get_content_type())
if message.get_content_charset():
result.single_body.charset_list.append(message.get_content_charset())
if message.get_filename():
result.single_body.content_name_list.append(message.get_filename())
# FIXME: attaching the body_content only makes sense for text
# content types. we'll eventually need a better solution for
# non-text messages
result.single_body.create_body_content()
payload = message.get_payload(decode=False)
# if not unicode, attempt to convert
if isinstance(payload, six.binary_type):
charset = message.get_charset()
# decode according to the specified character set, if any
if charset is not None:
charset_decoder = codecs.getdecoder(str(charset))
payload, length = charset_decoder(payload)
# otherwise, just try to convert
else:
payload = u(payload)
# remove any control characters not allowed in XML
control_char_map = dict.fromkeys(range(32))
for i in [9, 10, 13]: # preserve horizontal tab, line feed, carriage return
del control_char_map[i]
payload = u(payload).translate(control_char_map)
result.single_body.body_content.content = payload
else:
# TODO: handle multipart
logger.warning('CERP conversion does not yet handle multipart')
# assume we've normalized newlines:
result.eol = EOLMAP[os.linesep]
return result
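The control-character scrubbing in `from_email_message` relies on `str.translate` with a map from code points to `None`, which deletes those characters. A standalone sketch of the same technique:

```python
# Map every C0 control character to None (delete), but keep the three
# that XML 1.0 allows: horizontal tab (9), line feed (10), carriage return (13).
control_char_map = dict.fromkeys(range(32))
for i in (9, 10, 13):
    del control_char_map[i]

payload = 'line one\x00\x1b\tline two'
clean = payload.translate(control_char_map)
assert clean == 'line one\tline two'
```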
class ChildMessage(_BaseMessage):
ROOT_NAME = 'ChildMessage'
# no additional elements
# Patch-up from above. FIXME: This is necessary because of recursive
# NodeFields. eulxml.xmlmap.NodeField doesn't currently support these
SingleBody.child_message.node_class = ChildMessage
#
# accounts and folders
#
class Mbox(_BaseExternal):
ROOT_NAME = 'Mbox'
# no additional fields
class Folder(_BaseCerp):
"""A single email folder in an :class:`Account`, composed of multiple
:class:`Message` objects and associated metadata."""
ROOT_NAME = 'Folder'
name = xmlmap.StringField('xm:Name')
messages = xmlmap.NodeListField('xm:Message', Message)
subfolders = xmlmap.NodeListField('xm:Folder', 'self')
mboxes = xmlmap.NodeListField('xm:Mbox', Mbox)
def __repr__(self):
return '<%s %s>' % (self.__class__.__name__, self.name)
class ReferencesAccount(_BaseCerp):
ROOT_NAME = 'ReferencesAccount'
REF_TYPE_CHOICES = [ 'PreviousContent', 'SubsequentContent',
'Supplemental', 'SeeAlso', 'SeeInstead' ]
href = xmlmap.StringField('xm:Href')
email_address = xmlmap.StringField('xm:EmailAddress')
reference_type = xmlmap.StringField('xm:RefType',
choices=REF_TYPE_CHOICES)
class Account(_BaseCerp):
"""A single email account associated with a single email address and
composed of multiple :class:`Folder` objects and additional metadata."""
ROOT_NAME = 'Account'
XSD_SCHEMA = 'http://www.history.ncdcr.gov/SHRAB/ar/emailpreservation/mail-account/mail-account.xsd'
email_address = xmlmap.StringField('xm:EmailAddress')
global_id = xmlmap.StringField('xm:GlobalId')
references_accounts = xmlmap.NodeListField('xm:ReferencesAccount',
ReferencesAccount)
folders = xmlmap.NodeListField('xm:Folder', Folder)
def __repr__(self):
return '<%s %s>' % (self.__class__.__name__,
self.global_id or self.email_address or '(no id)')
def parse_mail_date(datestr):
'''Helper method used by :meth:`Message.from_email_message` to
convert dates from rfc822 format to iso 8601.
:param datestr: string containing a date in rfc822 format
:returns: string with date in iso 8601 format
'''
time_tuple = email.utils.parsedate_tz(datestr)
if time_tuple is None:
return datestr
dt = datetime.datetime.fromtimestamp(email.utils.mktime_tz(time_tuple))
return dt.isoformat()
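`parse_mail_date` converts via the local timezone, so the exact output varies by host; a usage sketch that checks only the shape of the result and the pass-through behavior for unparseable strings:

```python
import datetime
import email.utils

def parse_mail_date(datestr):
    # RFC 822 date -> ISO 8601; unparseable strings pass through unchanged
    time_tuple = email.utils.parsedate_tz(datestr)
    if time_tuple is None:
        return datestr
    dt = datetime.datetime.fromtimestamp(email.utils.mktime_tz(time_tuple))
    return dt.isoformat()

iso = parse_mail_date('Fri, 11 Apr 2008 13:25:53 -0400')
assert 'T' in iso and iso.startswith('2008-04-1')
assert parse_mail_date('not a date') == 'not a date'
```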
EOLMAP = {
'\r': 'CR',
'\n': 'LF',
'\r\n': 'CRLF',
}
| 35.887218 | 104 | 0.68238 | 1,685 | 14,319 | 5.596439 | 0.261128 | 0.062354 | 0.068293 | 0.074443 | 0.219512 | 0.142206 | 0.106151 | 0.093637 | 0.093637 | 0.093637 | 0 | 0.005023 | 0.207487 | 14,319 | 398 | 105 | 35.977387 | 0.825961 | 0.21887 | 0 | 0.160173 | 0 | 0.004329 | 0.144631 | 0.012415 | 0 | 0 | 0 | 0.005025 | 0 | 1 | 0.064935 | false | 0 | 0.038961 | 0.056277 | 0.658009 | 0.004329 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
61417ac842baa5fce26e656d22d5a3fc89e691b0 | 3,671 | py | Python | tuning/TensileConfiguration.py | mhbliao/Tensile | 99261adfa7fe6b30b0c76b17b87bbd6cb41bae60 | [
"MIT"
] | null | null | null | tuning/TensileConfiguration.py | mhbliao/Tensile | 99261adfa7fe6b30b0c76b17b87bbd6cb41bae60 | [
"MIT"
] | null | null | null | tuning/TensileConfiguration.py | mhbliao/Tensile | 99261adfa7fe6b30b0c76b17b87bbd6cb41bae60 | [
"MIT"
] | 5 | 2019-07-29T01:23:56.000Z | 2022-03-08T09:28:10.000Z |
import os
import sys
import argparse
################################################################################
# Print Debug
################################################################################
#def printWarning(message):
# print "Tensile::WARNING: %s" % message
# sys.stdout.flush()
def printExit(message):
print "Tensile::FATAL: %s" % message
sys.stdout.flush()
sys.exit(-1)
try:
import yaml
except ImportError:
printExit("You must install PyYAML to use Tensile (to parse config files). See http://pyyaml.org/wiki/PyYAML for installation instructions.")
#HR = "################################################################################"
def ensurePath( path ):
if not os.path.exists(path):
os.makedirs(path)
return path
################################################################################
# Define Constants
################################################################################
def constant(f):
def fset(self, value):
raise TypeError
def fget(self):
return f(self)
return property(fget, fset)
class _Const(object):
@constant
def GlobalParameters(self):
return "GlobalParameters"
@constant
def BenchmarkProblems(self):
return "BenchmarkProblems"
@constant
def LibraryLogic(self):
return "LibraryLogic"
@constant
def LibraryClient(self):
return "LibraryClient"
CONST = _Const()
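The `constant` decorator above builds a read-only property: the getter returns the wrapped function's value and the setter always raises. A Python 3 sketch of the same pattern:

```python
def constant(f):
    def fset(self, value):
        raise TypeError('constant attributes are read-only')
    def fget(self):
        return f(self)
    return property(fget, fset)

class _Const(object):
    @constant
    def GlobalParameters(self):
        return 'GlobalParameters'

CONST = _Const()
assert CONST.GlobalParameters == 'GlobalParameters'
try:
    CONST.GlobalParameters = 'other'
    raise AssertionError('expected TypeError')
except TypeError:
    pass  # assignment is rejected, as intended
```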
################################################################################
# Tuning Configuration Container
################################################################################
class TuningConfiguration:
def __init__(self,filename=None):
if filename is not None:
print ("# Reading configuration: " + filename)
try:
stream = open(filename, "r")
except IOError:
printExit("Cannot open file: %s" % filename )
data = yaml.load(stream, yaml.SafeLoader)
if CONST.GlobalParameters in data:
self.__set_globalParameters(data[CONST.GlobalParameters])
else:
self.__set_globalParameters(None)
if CONST.BenchmarkProblems in data:
self.__set_benchmarkProblems(data[CONST.BenchmarkProblems])
else:
self.__set_benchmarkProblems(None)
if CONST.LibraryLogic in data:
self.__set_libraryLogic(data[CONST.LibraryLogic])
else:
self.__set_libraryLogic(None)
if CONST.LibraryClient in data:
self.__set_libraryClient(data[CONST.LibraryClient])
else:
self.__set_libraryClient(None)
stream.close()
else:
self.__set_globalParameters(None)
self.__set_benchmarkProblems(None)
self.__set_libraryLogic(None)
self.__set_libraryClient(None)
def __get_globalParameters(self):
return self.__globalParameters
def __set_globalParameters(self,value):
self.__globalParameters = value
globalParameters = property(__get_globalParameters,__set_globalParameters)
def __get_benchmarkProblems(self):
return self.__benchmarkProblems
def __set_benchmarkProblems(self,value):
self.__benchmarkProblems = value
benchmarkProblems = property(__get_benchmarkProblems,__set_benchmarkProblems)
def __get_libraryLogic(self):
return self.__libraryLogic
def __set_libraryLogic(self,value):
self.__libraryLogic = value
libraryLogic = property(__get_libraryLogic,__set_libraryLogic)
def __get_libraryClient(self):
return self.__libraryClient
def __set_libraryClient(self,value):
self.__libraryClient = value
libraryClient = property(__get_libraryClient,__set_libraryClient)
| 25.493056 | 143 | 0.602016 | 331 | 3,671 | 6.356495 | 0.268882 | 0.039924 | 0.026141 | 0.024715 | 0.074144 | 0 | 0 | 0 | 0 | 0 | 0 | 0.000329 | 0.171615 | 3,671 | 143 | 144 | 25.671329 | 0.691549 | 0.041406 | 0 | 0.206897 | 0 | 0.011494 | 0.087826 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.057471 | null | null | 0.068966 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
614b094bc6fe81f808b2770a13e59af2778e159e | 321 | py | Python | Audio Features/waveplot.py | kritika58/A-Novel-Framework-Using-Neutrosophy-for-Integrated-Speech-and-Text-Sentiment-Analysis | a16bd5d02f2efd34ad20f496fb59f273fdb9b60c | [
"MIT"
] | null | null | null | Audio Features/waveplot.py | kritika58/A-Novel-Framework-Using-Neutrosophy-for-Integrated-Speech-and-Text-Sentiment-Analysis | a16bd5d02f2efd34ad20f496fb59f273fdb9b60c | [
"MIT"
] | null | null | null | Audio Features/waveplot.py | kritika58/A-Novel-Framework-Using-Neutrosophy-for-Integrated-Speech-and-Text-Sentiment-Analysis | a16bd5d02f2efd34ad20f496fb59f273fdb9b60c | [
"MIT"
] | null | null | null | import matplotlib.pyplot as plt
import librosa.display
plt.rcParams.update({'font.size': 16})
y, sr = librosa.load(librosa.util.example_audio_file())
plt.figure(figsize=(18, 7))
librosa.display.waveplot(y, sr=sr, x_axis='s')
print(sr)
plt.ylabel('Sampling Rate',fontsize=32)
plt.xlabel('Time (s)',fontsize=32)
plt.show() | 29.181818 | 55 | 0.744548 | 53 | 321 | 4.45283 | 0.679245 | 0.118644 | 0.110169 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030201 | 0.071651 | 321 | 11 | 56 | 29.181818 | 0.761745 | 0 | 0 | 0 | 0 | 0 | 0.096273 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.2 | 0 | 0.2 | 0.1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
614d70b8129724b5e053055af8277e59183bfa60 | 1,753 | py | Python | hardware/humidity_rev001/main.py | deniz195/json-sensor | c0e55d39bab3be6eca444273fb5436e1eafe8860 | [
"MIT"
] | 1 | 2018-10-30T11:22:33.000Z | 2018-10-30T11:22:33.000Z | hardware/humidity_rev001/main.py | deniz195/json-sensor | c0e55d39bab3be6eca444273fb5436e1eafe8860 | [
"MIT"
] | null | null | null | hardware/humidity_rev001/main.py | deniz195/json-sensor | c0e55d39bab3be6eca444273fb5436e1eafe8860 | [
"MIT"
] | null | null | null | # Trinket IO demo
# Welcome to CircuitPython 2.0.0 :)
import board
from digitalio import DigitalInOut, Direction, Pull
from analogio import AnalogOut, AnalogIn
import touchio
from adafruit_hid.keyboard import Keyboard
from adafruit_hid.keycode import Keycode
import adafruit_dotstar as dotstar
import time
import neopixel
from busio import I2C
from board import SCL, SDA
import adafruit_sht31d
i2c = I2C(SCL, SDA)
sensor = adafruit_sht31d.SHT31D(i2c)
# # Built in red LED
# led = DigitalInOut(board.D13)
# led.direction = Direction.OUTPUT
# # Analog input on D0
# analog1in = AnalogIn(board.D0)
# # Analog output on D1
# aout = AnalogOut(board.D1)
# # Digital input with pullup on D2
# button = DigitalInOut(board.D2)
# button.direction = Direction.INPUT
# button.pull = Pull.UP
# # Used if we do HID output, see below
# kbd = Keyboard()
######################### HELPERS ##############################
# # Helper to convert analog input to voltage
# def getVoltage(pin):
# return (pin.value * 3.3) / 65536
######################### MAIN LOOP ##############################
averages = 1
# report_time = 0.0
# loop_time = report_time/averages
i = 0
temperature = 0
relative_humidity = 0
while True:
temperature += sensor.temperature
relative_humidity += sensor.relative_humidity
if i == averages - 1:
temperature /= averages
relative_humidity /= averages
output = ""
output += '{'
output += ' "guid": "btrn-tmp-sensor-0001",'
output += ' "temperature": %0.2f,' % temperature
output += ' "relative_humidity": %0.2f' % relative_humidity
output += '}'
print(output)
temperature = 0
relative_humidity = 0
i = (i + 1) % averages
# time.sleep(loop_time)
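Building JSON by string concatenation is easy to get subtly wrong (a trailing comma before `}` is not valid JSON). A host-side sketch that builds the same payload shape and round-trips it through `json.loads` to prove it parses (the helper name is invented; the field names match the firmware's output):

```python
import json

def format_reading(guid, temperature, relative_humidity):
    # Same shape the firmware emits, without a trailing comma
    return ('{ "guid": "%s", "temperature": %0.2f, "relative_humidity": %0.2f }'
            % (guid, temperature, relative_humidity))

payload = format_reading('btrn-tmp-sensor-0001', 23.456, 41.2)
data = json.loads(payload)
assert data['guid'] == 'btrn-tmp-sensor-0001'
assert data['temperature'] == 23.46
assert data['relative_humidity'] == 41.2
```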
| 20.869048 | 72 | 0.654307 | 216 | 1,753 | 5.236111 | 0.407407 | 0.099027 | 0.045093 | 0.049514 | 0.051282 | 0 | 0 | 0 | 0 | 0 | 0 | 0.032982 | 0.187108 | 1,753 | 83 | 73 | 21.120482 | 0.760702 | 0.3417 | 0 | 0.117647 | 0 | 0 | 0.084646 | 0.022638 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.352941 | 0 | 0.352941 | 0.029412 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
61507394f52ac1e05ac449ce9da47bed8f214087 | 50,220 | py | Python | pyxb/binding/content.py | thorstenb/pyxb | 634e86f61dfb73a2900f32fc3d819e9c25365a49 | [
"Apache-2.0"
] | null | null | null | pyxb/binding/content.py | thorstenb/pyxb | 634e86f61dfb73a2900f32fc3d819e9c25365a49 | [
"Apache-2.0"
] | null | null | null | pyxb/binding/content.py | thorstenb/pyxb | 634e86f61dfb73a2900f32fc3d819e9c25365a49 | [
"Apache-2.0"
] | null | null | null | # Copyright 2009, Peter A. Bigot
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain a
# copy of the License at:
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Helper classes that maintain the content model of XMLSchema in the binding
classes.
L{AttributeUse} and L{ElementUse} record information associated with a binding
class, for example the types of values, the original XML QName or NCName, and
the Python field in which the values are stored. They also provide the
low-level interface to set and get the corresponding values in a binding
instance.
@todo: Document new content model
L{Wildcard} holds content-related information used in the content model.
"""
import pyxb
import pyxb.namespace
import basis
import xml.dom
class ContentState_mixin (pyxb.cscRoot):
"""Declares methods used by classes that hold state while validating a
content model component."""
def accepts (self, particle_state, instance, value, element_use):
"""Determine whether the provided value can be added to the instance
without violating state validation.
This method must not throw any non-catastrophic exceptions; general
failures should be transformed to a C{False} return value.
@param particle_state: The L{ParticleState} instance serving as the
parent to this state. The implementation must inform that state when
the proposed value completes the content model.
@param instance: An instance of a subclass of
{basis.complexTypeDefinition}, into which the provided value will be
stored if it is consistent with the current model state.
@param value: The value that is being validated against the state.
@param element_use: An optional L{ElementUse} instance that specifies
the element to which the value corresponds. This will be available
when the value is extracted by parsing a document, but will be absent
if the value was passed as a constructor positional parameter.
@return: C{True} if the value was successfully matched against the
state. C{False} if the value did not match against the state."""
raise Exception('ContentState_mixin.accepts not implemented in %s' % (type(self),))
def notifyFailure (self, sub_state, particle_ok):
"""Invoked by a sub-state to indicate that validation cannot proceed
in the current state.
Normally this is used when an intermediate content model must reset
itself to permit alternative models to be evaluated.
@param sub_state: the state that was unable to accept a value
@param particle_ok: C{True} if the particle that rejected the value is
in an accepting terminal state
"""
raise Exception('ContentState_mixin.notifyFailure not implemented in %s' % (type(self),))
def _verifyComplete (self, parent_particle_state):
"""Determine whether the deep state is complete without further elements.
No-op for non-aggregate state. For aggregate state, all contained
particles should be checked to see whether the overall model can be
satisfied if no additional elements are provided.
This method does not have a meaningful return value; violations of the
content model should produce the corresponding exception (generally,
L{MissingContentError}).
@param parent_particle_state: the L{ParticleState} for which this state
is the term.
"""
pass
class ContentModel_mixin (pyxb.cscRoot):
"""Declares methods used by classes representing content model components."""
def newState (self, parent_particle_state):
"""Return a L{ContentState_mixin} instance that will validate the
state of this model component.
@param parent_particle_state: The L{ParticleState} instance for which
this instance is a term. C{None} for the top content model of a
complex data type.
"""
raise Exception('ContentModel_mixin.newState not implemented in %s' % (type(self),))
def _validateCloneSymbolSet (self, symbol_set_im):
"""Create a mutable copy of the symbol set.
The top-level map is copied, as are the lists of values. The values
within the lists are unchanged, as validation does not affect them."""
rv = symbol_set_im.copy()
for (k, v) in rv.items():
rv[k] = v[:]
return rv
def _validateCloneOutputSequence (self, output_sequence_im):
return output_sequence_im[:]
def _validateReplaceResults (self, symbol_set_out, symbol_set_new, output_sequence_out, output_sequence_new):
"""In-place update of symbol set and output sequence structures.
Use this to copy from temporary mutable structures updated by local
validation into the structures that were passed in once the validation
has succeeded."""
symbol_set_out.clear()
symbol_set_out.update(symbol_set_new)
output_sequence_out[:] = output_sequence_new
def _validate (self, symbol_set, output_sequence):
"""Determine whether an output sequence created from the symbols can
be made consistent with the model.
The symbol set represents letters in an alphabet; the output sequence
orders those letters in a way that satisfies the regular expression
expressed in the model. Both are changed as a result of a successful
validation; both remain unchanged if the validation failed. In
recursing, implementers may assume that C{output_sequence} is
monotonic: its length remains unchanged after an invocation iff the
symbol set also remains unchanged. The L{_validateCloneSymbolSet},
L{_validateCloneOutputSequence}, and L{_validateReplaceResults}
methods are available to help preserve this behavior.
@param symbol_set: A map from L{ElementUse} instances to a list of
values. The order of the values corresponds to the order in which
they should appear. A key of C{None} identifies values that are
stored as wildcard elements. Values are removed from the lists as
they are used; when the last value of a list is removed, its key is
removed from the map. Thus an empty dictionary is the indicator that
no more symbols are available.
@param output_sequence: A mutable list to which should be appended
tuples C{( eu, val )} where C{eu} is an L{ElementUse} from the set of
symbol keys, and C{val} is a value from the corresponding list. A
document generated by producing the elements in the given order is
expected to validate.
@return: C{True} iff the model validates. C{symbol_set} and
C{output_path} must be unmodified if returns C{False}.
"""
raise Exception('ContentState_mixin._validate not implemented in %s' % (type(self),))
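The cloning contract described above, shallow-copying the top-level map while duplicating each value list so that a failed validation leaves the caller's structures untouched, can be sketched as:

```python
def clone_symbol_set(symbol_set):
    # Copy the map and each list of values; the values themselves are
    # shared, since validation never mutates them.
    rv = symbol_set.copy()
    for k, v in rv.items():
        rv[k] = v[:]
    return rv

original = {'elt': ['a', 'b'], None: ['wild']}
working = clone_symbol_set(original)
working['elt'].pop()
del working[None]
# the caller's structure is unaffected by mutations of the clone
assert original == {'elt': ['a', 'b'], None: ['wild']}
```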
class AttributeUse (pyxb.cscRoot):
"""A helper class that encapsulates everything we need to know
about the way an attribute is used within a binding class.
Attributes are stored internally as pairs C{(provided, value)}, where
C{provided} is a boolean indicating whether a value for the attribute was
provided externally, and C{value} is an instance of the attribute
datatype. The C{provided} flag is used to determine whether an XML
attribute should be added to a created DOM node when generating the XML
corresponding to a binding instance.
"""
__name = None # ExpandedName of the attribute
__id = None # Identifier used for this attribute within the owning class
__key = None # Private attribute used in instances to hold the attribute value
__dataType = None # PST datatype
__unicodeDefault = None # Default value as a unicode string, or None
__defaultValue = None # Default value as an instance of datatype, or None
__fixed = False # If True, value cannot be changed
__required = False # If True, attribute must appear
__prohibited = False # If True, attribute must not appear
def __init__ (self, name, id, key, data_type, unicode_default=None, fixed=False, required=False, prohibited=False):
"""Create an AttributeUse instance.
@param name: The name by which the attribute is referenced in the XML
@type name: L{pyxb.namespace.ExpandedName}
@param id: The Python identifier for the attribute within the
containing L{pyxb.basis.binding.complexTypeDefinition}. This is a
public identifier, derived from the local part of the attribute name
and modified to be unique, and is usually used as the name of the
attribute's inspector method.
@type id: C{str}
@param key: The string used to store the attribute
value in the dictionary of the containing
L{pyxb.basis.binding.complexTypeDefinition}. This is mangled so
that it is unique among and is treated as a Python private member.
@type key: C{str}
@param data_type: The class reference to the subclass of
L{pyxb.binding.basis.simpleTypeDefinition} of which the attribute
values must be instances.
@type data_type: C{type}
@keyword unicode_default: The default value of the attribute as
specified in the schema, or None if there is no default attribute
value. The default value (of the keyword) is C{None}.
@type unicode_default: C{unicode}
@keyword fixed: If C{True}, indicates that the attribute, if present,
must have the value that was given via C{unicode_default}. The
default value is C{False}.
@type fixed: C{bool}
@keyword required: If C{True}, indicates that the attribute must appear
in the DOM node used to create an instance of the corresponding
L{pyxb.binding.basis.complexTypeDefinition}. The default value is
C{False}. No more that one of L{required} and L{prohibited} should be
assigned C{True}.
@type required: C{bool}
@keyword prohibited: If C{True}, indicates that the attribute must
B{not} appear in the DOM node used to create an instance of the
corresponding L{pyxb.binding.basis.complexTypeDefinition}. The
default value is C{False}. No more that one of L{required} and
L{prohibited} should be assigned C{True}.
@type prohibited: C{bool}
@raise pyxb.BadTypeValueError: the L{unicode_default} cannot be used
to initialize an instance of L{data_type}
"""
self.__name = name
self.__id = id
self.__key = key
self.__dataType = data_type
self.__unicodeDefault = unicode_default
if self.__unicodeDefault is not None:
self.__defaultValue = self.__dataType.Factory(self.__unicodeDefault)
self.__fixed = fixed
self.__required = required
self.__prohibited = prohibited
def name (self):
"""The expanded name of the element.
@rtype: L{pyxb.namespace.ExpandedName}
"""
return self.__name
def defaultValue (self):
"""The default value of the attribute."""
return self.__defaultValue
def fixed (self):
"""C{True} iff the value of the attribute cannot be changed."""
return self.__fixed
def required (self):
"""Return True iff the attribute must be assigned a value."""
return self.__required
def prohibited (self):
"""Return True iff the attribute must not be assigned a value."""
return self.__prohibited
def provided (self, ctd_instance):
"""Return True iff the given instance has been explicitly given a
value for the attribute.
This is used for things like only generating an XML attribute
assignment when a value was originally given (even if that value
happens to be the default).
"""
return self.__getProvided(ctd_instance)
def id (self):
"""Tag used within Python code for the attribute.
This is not used directly in the default code generation template."""
return self.__id
def key (self):
"""String used as key within object dictionary when storing attribute value."""
return self.__key
def dataType (self):
"""The subclass of L{pyxb.binding.basis.simpleTypeDefinition} of which any attribute value must be an instance."""
return self.__dataType
def __getValue (self, ctd_instance):
"""Retrieve the value information for this attribute in a binding instance.
@param ctd_instance: The instance object from which the attribute is to be retrieved.
@type ctd_instance: subclass of L{pyxb.binding.basis.complexTypeDefinition}
@return: C{(provided, value)} where C{provided} is a C{bool} and
C{value} is C{None} or an instance of the attribute's datatype.
"""
return getattr(ctd_instance, self.__key, (False, None))
def __getProvided (self, ctd_instance):
return self.__getValue(ctd_instance)[0]
def value (self, ctd_instance):
"""Get the value of the attribute from the instance."""
return self.__getValue(ctd_instance)[1]
def __setValue (self, ctd_instance, new_value, provided):
return setattr(ctd_instance, self.__key, (provided, new_value))
def reset (self, ctd_instance):
"""Set the value of the attribute in the given instance to be its
default value, and mark that it has not been provided."""
self.__setValue(ctd_instance, self.__defaultValue, False)
def addDOMAttribute (self, dom_support, ctd_instance, element):
"""If this attribute as been set, add the corresponding attribute to the DOM element."""
( provided, value ) = self.__getValue(ctd_instance)
if provided:
assert value is not None
dom_support.addAttribute(element, self.__name, value.xsdLiteral())
return self
def validate (self, ctd_instance):
(provided, value) = self.__getValue(ctd_instance)
if value is not None:
if self.__prohibited:
raise pyxb.ProhibitedAttributeError('Value given for prohibited attribute %s' % (self.__name,))
if self.__required and not provided:
assert self.__fixed
raise pyxb.MissingAttributeError('Fixed required attribute %s was never set' % (self.__name,))
if not self.__dataType._IsValidValue(value):
raise pyxb.BindingValidationError('Attribute %s value type %s not %s' % (self.__name, type(value), self.__dataType))
self.__dataType.XsdConstraintsOK(value)
else:
if self.__required:
raise pyxb.MissingAttributeError('Required attribute %s does not have a value' % (self.__name,))
return True
def set (self, ctd_instance, new_value):
"""Set the value of the attribute.
This validates the value against the data type, creating a new instance if necessary.
@param ctd_instance: The binding instance for which the attribute
value is to be set
@type ctd_instance: subclass of L{pyxb.binding.basis.complexTypeDefinition}
@param new_value: The value for the attribute
@type new_value: An C{xml.dom.Node} instance, or any value that is
permitted as the input parameter to the C{Factory} method of the
attribute's datatype.
"""
provided = True
if isinstance(new_value, xml.dom.Node):
unicode_value = self.__name.getAttribute(new_value)
if unicode_value is None:
if self.__required:
raise pyxb.MissingAttributeError('Required attribute %s from %s not found' % (self.__name, ctd_instance._ExpandedName or type(ctd_instance)))
provided = False
unicode_value = self.__unicodeDefault
if unicode_value is None:
# Must be optional and absent
provided = False
new_value = None
else:
new_value = unicode_value
else:
assert new_value is not None
if self.__prohibited:
raise pyxb.ProhibitedAttributeError('Value given for prohibited attribute %s' % (self.__name,))
if (new_value is not None) and (not isinstance(new_value, self.__dataType)):
new_value = self.__dataType.Factory(new_value)
if self.__fixed and (new_value != self.__defaultValue):
raise pyxb.AttributeChangeError('Attempt to change value of fixed attribute %s' % (self.__name,))
self.__setValue(ctd_instance, new_value, provided)
return new_value
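The (provided, value) storage convention used by the methods above can be illustrated in isolation. This is a minimal, hypothetical sketch (not PyXB's actual classes): each attribute value is stored on the instance as a tuple whose first element records whether the value was explicitly supplied, and a fixed attribute rejects any value other than its default.

```python
class MiniAttributeUse:
    """Illustrative stand-in for AttributeUse's value storage."""
    def __init__(self, key, default=None, fixed=False):
        self._key = key
        self._default = default
        self._fixed = fixed

    def reset(self, instance):
        # Default value, marked as not provided.
        setattr(instance, self._key, (False, self._default))

    def set(self, instance, value):
        if self._fixed and value != self._default:
            raise ValueError('attempt to change value of fixed attribute')
        setattr(instance, self._key, (True, value))

    def value(self, instance):
        return getattr(instance, self._key)[1]

    def provided(self, instance):
        return getattr(instance, self._key)[0]

class Holder:
    pass

use = MiniAttributeUse('_attr', default='red')
h = Holder()
use.reset(h)
assert use.value(h) == 'red' and not use.provided(h)
use.set(h, 'blue')
assert use.value(h) == 'blue' and use.provided(h)
```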
def _description (self, name_only=False, user_documentation=True):
if name_only:
return str(self.__name)
assert issubclass(self.__dataType, basis._TypeBinding_mixin)
desc = [ str(self.__id), ': ', str(self.__name), ' (', self.__dataType._description(name_only=True, user_documentation=False), '), ' ]
if self.__required:
desc.append('required')
elif self.__prohibited:
desc.append('prohibited')
else:
desc.append('optional')
if self.__defaultValue is not None:
desc.append(', ')
if self.__fixed:
desc.append('fixed')
else:
desc.append('default')
desc.extend(['=', self.__unicodeDefault ])
return ''.join(desc)
class ElementUse (ContentState_mixin, ContentModel_mixin):
"""Aggregate the information relevant to an element of a complex type.
This includes the L{original tag name<name>}, the spelling of L{the
corresponding object in Python <id>}, an L{indicator<isPlural>} of whether
multiple instances might be associated with the field, and other relevant
information.
"""
def name (self):
"""The expanded name of the element.
@rtype: L{pyxb.namespace.ExpandedName}
"""
return self.__name
__name = None
def id (self):
"""The string name of the binding class field used to hold the element
values.
This is the user-visible name and, except where disambiguation was
required, will be equal to the local name of the element."""
return self.__id
__id = None
# The dictionary key used to identify the value of the element. The value
# is the same as that used for private member variables in the binding
# class within which the element declaration occurred.
__key = None
def elementBinding (self):
"""The L{basis.element} instance identifying the information
associated with the element declaration.
"""
return self.__elementBinding
def _setElementBinding (self, element_binding):
# Set the element binding for this use. Only visible at all because
# we have to define the uses before the element instances have been
# created.
self.__elementBinding = element_binding
return self
__elementBinding = None
def isPlural (self):
"""True iff the content model indicates that more than one element
can legitimately belong to this use.
This includes elements in particles with maxOccurs greater than one,
and when multiple elements with the same NCName are declared in the
same type.
"""
return self.__isPlural
__isPlural = False
def __init__ (self, name, id, key, is_plural, element_binding=None):
"""Create an ElementUse instance.
@param name: The name by which the element is referenced in the XML
@type name: L{pyxb.namespace.ExpandedName}
@param id: The Python name for the element within the containing
L{pyxb.binding.basis.complexTypeDefinition}. This is a public
identifier, albeit modified to be unique, and is usually used as the
name of the element's inspector method or property.
@type id: C{str}
@param key: The string used to store the element
value in the dictionary of the containing
L{pyxb.binding.basis.complexTypeDefinition}. This is mangled so
that it is unique within the class and is treated as a Python private member.
@type key: C{str}
@param is_plural: If C{True}, documents for the corresponding type may
have multiple instances of this element. As a consequence, the value
of the element will be a list. If C{False}, the value will be C{None}
if the element is absent, and a reference to an instance of the type
identified by L{pyxb.binding.basis.element.typeDefinition} if present.
@type is_plural: C{bool}
@param element_binding: Reference to the class that serves as the
binding for the element.
"""
self.__name = name
self.__id = id
self.__key = key
self.__isPlural = is_plural
self.__elementBinding = element_binding
def defaultValue (self):
"""Return the default value for this element.
@todo: Right now, this returns C{None} for non-plural and an empty
list for plural elements. Need to support schema-specified default
values for simple-type content.
"""
if self.isPlural():
return []
return None
def value (self, ctd_instance):
"""Return the value for this use within the given instance."""
return getattr(ctd_instance, self.__key, self.defaultValue())
def reset (self, ctd_instance):
"""Set the value for this use in the given element to its default."""
setattr(ctd_instance, self.__key, self.defaultValue())
return self
def set (self, ctd_instance, value):
"""Set the value of this element in the given instance."""
if value is None:
return self.reset(ctd_instance)
assert self.__elementBinding is not None
if basis._TypeBinding_mixin._PerformValidation:
value = self.__elementBinding.compatibleValue(value, is_plural=self.isPlural())
setattr(ctd_instance, self.__key, value)
ctd_instance._addContent(value, self.__elementBinding)
return self
def setOrAppend (self, ctd_instance, value):
"""Invoke either L{set} or L{append}, depending on whether the element
use is plural."""
if self.isPlural():
return self.append(ctd_instance, value)
return self.set(ctd_instance, value)
def append (self, ctd_instance, value):
"""Add the given value as another instance of this element within the binding instance.
@raise pyxb.StructuralBadDocumentError: invoked on an element use that is not plural
"""
if not self.isPlural():
raise pyxb.StructuralBadDocumentError('Cannot append to element with non-plural multiplicity')
values = self.value(ctd_instance)
if basis._TypeBinding_mixin._PerformValidation:
value = self.__elementBinding.compatibleValue(value)
values.append(value)
ctd_instance._addContent(value, self.__elementBinding)
return values
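The plural/non-plural distinction that drives L{setOrAppend} can be sketched standalone. This is a simplified, hypothetical model (names are illustrative, not PyXB's API): a plural use accumulates values in a list, a non-plural use overwrites a single slot.

```python
def default_value(is_plural):
    # Plural elements default to an empty list, non-plural to None,
    # mirroring ElementUse.defaultValue.
    return [] if is_plural else None

def set_or_append(store, key, value, is_plural):
    if is_plural:
        store.setdefault(key, []).append(value)
        return store[key]
    store[key] = value
    return value

store = {}
assert set_or_append(store, 'title', 'v1', is_plural=False) == 'v1'
assert set_or_append(store, 'item', 1, is_plural=True) == [1]
assert set_or_append(store, 'item', 2, is_plural=True) == [1, 2]
```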
def toDOM (self, dom_support, parent, value):
"""Convert the given value to DOM as an instance of this element.
@param dom_support: Helper for managing DOM properties
@type dom_support: L{pyxb.utils.domutils.BindingDOMSupport}
@param parent: The DOM node within which this element should be generated.
@type parent: C{xml.dom.Element}
@param value: The content for this element. May be text (if the
element allows mixed content), or an instance of
L{basis._TypeBinding_mixin}.
"""
if isinstance(value, basis._TypeBinding_mixin):
element_binding = self.__elementBinding
if value._substitutesFor(element_binding):
element_binding = value._element()
assert element_binding is not None
if element_binding.abstract():
raise pyxb.DOMGenerationError('Element %s is abstract but content %s not associated with substitution group member' % (self.name(), value))
element = dom_support.createChildElement(element_binding.name(), parent)
elt_type = element_binding.typeDefinition()
val_type = type(value)
if isinstance(value, basis.complexTypeDefinition):
assert isinstance(value, elt_type)
else:
if isinstance(value, basis.STD_union) and isinstance(value, elt_type._MemberTypes):
val_type = elt_type
if dom_support.requireXSIType() or elt_type._RequireXSIType(val_type):
val_type_qname = value._ExpandedName.localName()
tns_prefix = dom_support.namespacePrefix(value._ExpandedName.namespace())
if tns_prefix is not None:
val_type_qname = '%s:%s' % (tns_prefix, val_type_qname)
dom_support.addAttribute(element, pyxb.namespace.XMLSchema_instance.createExpandedName('type'), val_type_qname)
value._toDOM_csc(dom_support, element)
elif isinstance(value, (str, unicode)):
element = dom_support.createChildElement(self.name(), parent)
element.appendChild(dom_support.document().createTextNode(value))
else:
raise pyxb.LogicError('toDOM with unrecognized value type %s: %s' % (type(value), value))
def _description (self, name_only=False, user_documentation=True):
if name_only:
return str(self.__name)
desc = [ str(self.__id), ': ']
if self.isPlural():
desc.append('MULTIPLE ')
desc.append(self.elementBinding()._description(user_documentation=user_documentation))
return ''.join(desc)
def newState (self, parent_particle_state):
"""Implement parent class method."""
return self
def accepts (self, particle_state, instance, value, element_use):
rv = self._accepts(instance, value, element_use)
if rv:
particle_state.incrementCount()
return rv
def _accepts (self, instance, value, element_use):
if element_use == self:
self.setOrAppend(instance, value)
return True
if element_use is not None:
# If there's a known element, and it's not this one, the content
# does not match. This assumes we handled xsi:type and
# substitution groups earlier, which may be true.
return False
if isinstance(value, xml.dom.Node):
# If we haven't been able to identify an element for this before,
# then we don't recognize it, and will have to treat it as a
# wildcard.
return False
try:
self.setOrAppend(instance, self.__elementBinding.compatibleValue(value, _convert_string_values=False))
return True
except pyxb.BadTypeValueError, e:
pass
#print '%s %s %s in %s' % (instance, value, element_use, self)
return False
def _validate (self, symbol_set, output_sequence):
values = symbol_set.get(self)
#print 'values %s' % (values,)
if values is None:
return False
used = values.pop(0)
output_sequence.append( (self, used) )
if 0 == len(values):
del symbol_set[self]
return True
def __str__ (self):
return 'EU.%s@%x' % (self.__name, id(self))
class Wildcard (ContentState_mixin, ContentModel_mixin):
"""Placeholder for wildcard objects."""
NC_any = '##any' #<<< The namespace constraint "##any"
NC_not = '##other' #<<< A flag indicating constraint "##other"
NC_targetNamespace = '##targetNamespace' #<<< A flag indicating constraint "##targetNamespace"
NC_local = '##local' #<<< A flag indicating constraint "##local"
__namespaceConstraint = None
def namespaceConstraint (self):
"""A constraint on the namespace for the wildcard.
Valid values are:
- L{Wildcard.NC_any}
- A tuple ( L{Wildcard.NC_not}, a L{namespace<pyxb.namespace.Namespace>} instance )
- set(of L{namespace<pyxb.namespace.Namespace>} instances)
Namespaces are represented by their URIs. Absence is
represented by None, both in the "not" pair and in the set.
"""
return self.__namespaceConstraint
PC_skip = 'skip' #<<< No constraint is applied
PC_lax = 'lax' #<<< Validate against available uniquely determined declaration
PC_strict = 'strict' #<<< Validate against declaration or xsi:type which must be available
# One of PC_*
__processContents = None
def processContents (self): return self.__processContents
def __normalizeNamespace (self, nsv):
if nsv is None:
return None
if isinstance(nsv, basestring):
nsv = pyxb.namespace.NamespaceForURI(nsv, create_if_missing=True)
assert isinstance(nsv, pyxb.namespace.Namespace), 'unexpected non-namespace %s' % (nsv,)
return nsv
def __init__ (self, *args, **kw):
# Namespace constraint and process contents are required parameters.
nsc = kw['namespace_constraint']
if isinstance(nsc, tuple):
nsc = (nsc[0], self.__normalizeNamespace(nsc[1]))
elif isinstance(nsc, set):
nsc = set([ self.__normalizeNamespace(_uri) for _uri in nsc ])
self.__namespaceConstraint = nsc
self.__processContents = kw['process_contents']
def matches (self, instance, value):
"""Return True iff the value is a valid match against this wildcard.
Validation per U{Wildcard allows Namespace Name<http://www.w3.org/TR/xmlschema-1/#cvc-wildcard-namespace>}.
"""
ns = None
if isinstance(value, xml.dom.Node):
if value.namespaceURI is not None:
ns = pyxb.namespace.NamespaceForURI(value.namespaceURI)
elif isinstance(value, basis._TypeBinding_mixin):
elt = value._element()
if elt is not None:
ns = elt.name().namespace()
else:
ns = value._ExpandedName.namespace()
else:
raise pyxb.LogicError('Need namespace from value')
if isinstance(ns, pyxb.namespace.Namespace) and ns.isAbsentNamespace():
ns = None
if self.NC_any == self.__namespaceConstraint:
return True
if isinstance(self.__namespaceConstraint, tuple):
(_, constrained_ns) = self.__namespaceConstraint
assert self.NC_not == _
if ns is None:
return False
if constrained_ns == ns:
return False
return True
return ns in self.__namespaceConstraint
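The namespace-constraint test above (per XML Schema's cvc-wildcard-namespace rule) can be restated as a standalone function. This is a hypothetical sketch, not PyXB's API: namespaces are represented here by plain URI strings, with C{None} denoting an absent namespace, as in the L{namespaceConstraint} documentation.

```python
NC_ANY = '##any'
NC_NOT = '##other'

def namespace_matches(constraint, ns):
    if constraint == NC_ANY:
        return True
    if isinstance(constraint, tuple):       # ('##other', excluded_ns)
        tag, excluded = constraint
        assert tag == NC_NOT
        # An absent namespace never matches a "not" constraint.
        return (ns is not None) and (ns != excluded)
    return ns in constraint                 # explicit set of namespaces

assert namespace_matches(NC_ANY, None)
assert not namespace_matches((NC_NOT, 'urn:x'), 'urn:x')
assert namespace_matches((NC_NOT, 'urn:x'), 'urn:y')
assert not namespace_matches((NC_NOT, 'urn:x'), None)
assert namespace_matches({'urn:a', None}, None)
```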
def newState (self, parent_particle_state):
return self
def accepts (self, particle_state, instance, value, element_use):
if isinstance(value, xml.dom.Node):
value_desc = 'value in %s' % (value.nodeName,)
else:
value_desc = 'value of type %s' % (type(value),)
if not self.matches(instance, value):
return False
if not isinstance(value, basis._TypeBinding_mixin):
print 'NOTE: Created unbound wildcard element from %s' % (value_desc,)
assert isinstance(instance.wildcardElements(), list), 'Uninitialized wildcard list in %s' % (instance._ExpandedName,)
instance._appendWildcardElement(value)
particle_state.incrementCount()
return True
def _validate (self, symbol_set, output_sequence):
# @todo check node against namespace constraint and process contents
#print 'WARNING: Accepting node as wildcard match without validating.'
wc_values = symbol_set.get(None)
if wc_values is None:
return False
used = wc_values.pop(0)
output_sequence.append( (None, used) )
if 0 == len(wc_values):
del symbol_set[None]
return True
class SequenceState (ContentState_mixin):
__failed = False
__satisfied = False
def __init__ (self, group, parent_particle_state):
super(SequenceState, self).__init__(group)
self.__sequence = group
self.__parentParticleState = parent_particle_state
self.__particles = group.particles()
self.__index = -1
self.__satisfied = False
self.__failed = False
self.notifyFailure(None, False)
#print 'SS.CTOR %s: %d elts' % (self, len(self.__particles))
def accepts (self, particle_state, instance, value, element_use):
assert self.__parentParticleState == particle_state
assert not self.__failed
#print 'SS.ACC %s: %s %s %s' % (self, instance, value, element_use)
while self.__particleState is not None:
(consume, underflow_exc) = self.__particleState.step(instance, value, element_use)
if consume:
return True
if underflow_exc is not None:
self.__failed = True
raise underflow_exc
return False
def _verifyComplete (self, parent_particle_state):
while self.__particleState is not None:
self.__particleState.verifyComplete()
def notifyFailure (self, sub_state, particle_ok):
self.__index += 1
self.__particleState = None
if self.__index < len(self.__particles):
self.__particleState = ParticleState(self.__particles[self.__index], self)
else:
self.__satisfied = particle_ok
if particle_ok:
self.__parentParticleState.incrementCount()
#print 'SS.NF %s: %d %s %s' % (self, self.__index, particle_ok, self.__particleState)
class ChoiceState (ContentState_mixin):
def __init__ (self, group, parent_particle_state):
self.__parentParticleState = parent_particle_state
super(ChoiceState, self).__init__(group)
self.__choices = [ ParticleState(_p, self) for _p in group.particles() ]
self.__activeChoice = None
#print 'CS.CTOR %s: %d choices' % (self, len(self.__choices))
def accepts (self, particle_state, instance, value, element_use):
#print 'CS.ACC %s %s: %s %s %s' % (self, self.__activeChoice, instance, value, element_use)
if self.__activeChoice is None:
for choice in self.__choices:
#print 'CS.ACC %s candidate %s' % (self, choice)
try:
(consume, underflow_exc) = choice.step(instance, value, element_use)
except Exception, e:
consume = False
underflow_exc = e
#print 'CS.ACC %s: candidate %s : %s' % (self, choice, consume)
if consume:
self.__activeChoice = choice
self.__choices = None
return True
return False
(consume, underflow_exc) = self.__activeChoice.step(instance, value, element_use)
#print 'CS.ACC %s : active choice %s %s %s' % (self, self.__activeChoice, consume, underflow_exc)
if consume:
return True
if underflow_exc is not None:
self.__failed = True
raise underflow_exc
return False
def _verifyComplete (self, parent_particle_state):
rv = True
#print 'CS.VC %s: %s' % (self, self.__activeChoice)
if self.__activeChoice is None:
# Use self.__activeChoice as the iteration value so that it's
# non-None when notifyFailure is invoked.
for self.__activeChoice in self.__choices:
try:
#print 'CS.VC: try %s' % (self.__activeChoice,)
self.__activeChoice.verifyComplete()
return
except Exception, e:
pass
#print 'Missing components %s' % ("\n".join([ "\n ".join([str(_p2.term()) for _p2 in _p.particle().term().particles()]) for _p in self.__choices ]),)
raise pyxb.MissingContentError('choice')
self.__activeChoice.verifyComplete()
def notifyFailure (self, sub_state, particle_ok):
#print 'CS.NF %s %s' % (self, particle_ok)
if particle_ok and (self.__activeChoice is not None):
self.__parentParticleState.incrementCount()
pass
class AllState (ContentState_mixin):
__activeChoice = None
__needRetry = False
def __init__ (self, group, parent_particle_state):
self.__parentParticleState = parent_particle_state
super(AllState, self).__init__(group)
self.__choices = set([ ParticleState(_p, self) for _p in group.particles() ])
#print 'AS.CTOR %s: %d choices' % (self, len(self.__choices))
def accepts (self, particle_state, instance, value, element_use):
#print 'AS.ACC %s %s: %s %s %s' % (self, self.__activeChoice, instance, value, element_use)
self.__needRetry = True
while self.__needRetry:
self.__needRetry = False
if self.__activeChoice is None:
for choice in self.__choices:
#print 'AS.ACC %s candidate %s' % (self, choice)
try:
(consume, underflow_exc) = choice.step(instance, value, element_use)
except Exception, e:
consume = False
underflow_exc = e
#print 'AS.ACC %s: candidate %s : %s' % (self, choice, consume)
if consume:
self.__activeChoice = choice
self.__choices.discard(self.__activeChoice)
return True
return False
(consume, underflow_exc) = self.__activeChoice.step(instance, value, element_use)
#print 'AS.ACC %s : active choice %s %s %s' % (self, self.__activeChoice, consume, underflow_exc)
if consume:
return True
if underflow_exc is not None:
self.__failed = True
raise underflow_exc
return False
def _verifyComplete (self, parent_particle_state):
#print 'AS.VC %s: %s, %d left' % (self, self.__activeChoice, len(self.__choices))
if self.__activeChoice is not None:
self.__activeChoice.verifyComplete()
while self.__choices:
self.__activeChoice = self.__choices.pop()
self.__activeChoice.verifyComplete()
def notifyFailure (self, sub_state, particle_ok):
#print 'AS.NF %s %s' % (self, particle_ok)
self.__needRetry = True
self.__activeChoice = None
if particle_ok and (0 == len(self.__choices)):
self.__parentParticleState.incrementCount()
class ParticleState (pyxb.cscRoot):
def __init__ (self, particle, parent_state=None):
self.__particle = particle
self.__parentState = parent_state
self.__count = -1
#print 'PS.CTOR %s: particle %s' % (self, particle)
self.incrementCount()
def particle (self): return self.__particle
def incrementCount (self):
#print 'PS.IC %s' % (self,)
self.__count += 1
self.__termState = self.__particle.term().newState(self)
self.__tryAccept = True
def verifyComplete (self):
# @TODO@ Set a flag so we can make verifyComplete safe to call
# multiple times?
#print 'PS.VC %s entry' % (self,)
if not self.__particle.satisfiesOccurrences(self.__count):
self.__termState._verifyComplete(self)
if not self.__particle.satisfiesOccurrences(self.__count):
print 'PS.VC %s incomplete' % (self,)
raise pyxb.MissingContentError('incomplete')
if self.__parentState is not None:
self.__parentState.notifyFailure(self, True)
def step (self, instance, value, element_use):
"""Attempt to apply the value as a new instance of the particle's term.
The L{ContentState_mixin} created for the particle's term is consulted
to determine whether the instance can accept the given value. If so,
the particle's maximum occurrence limit is checked; if not, and the
particle has a parent state, it is informed of the failure.
@param instance: An instance of a subclass of
L{basis.complexTypeDefinition}, into which the provided value will be
stored if it is consistent with the current model state.
@param value: The value that is being validated against the state.
@param element_use: An optional L{ElementUse} instance that specifies
the element to which the value corresponds. This will be available
when the value is extracted by parsing a document, but will be absent
if the value was passed as a constructor positional parameter.
@return: C{( consumed, underflow_exc )} A tuple where the first element
is C{True} iff the provided value was accepted in the current state.
When this first element is C{False}, the second element will be
C{None} if the particle's occurrence requirements have been met, and
is an instance of C{MissingElementError} if the observed number of
terms is less than the minimum occurrence count. Depending on
context, the caller may raise this exception, or may try an
alternative content model.
@raise pyxb.UnexpectedElementError: if the value satisfies the particle,
but having done so exceeded the allowable number of instances of the
term.
"""
#print 'PS.STEP %s: %s %s %s' % (self, instance, value, element_use)
# Only try if we're not already at the upper limit on occurrences
consumed = False
underflow_exc = None
# We can try the value against the term if we aren't at the maximum
# count for the term. Also, if we fail to consume, but as a side
# effect of the test the term may have reset itself, we can try again.
self.__tryAccept = True
while self.__tryAccept and (self.__count != self.__particle.maxOccurs()):
self.__tryAccept = False
consumed = self.__termState.accepts(self, instance, value, element_use)
#print 'PS.STEP %s: ta %s %s' % (self, self.__tryAccept, consumed)
self.__tryAccept = self.__tryAccept and (not consumed)
#print 'PS.STEP %s: %s' % (self, consumed)
if consumed:
if not self.__particle.meetsMaximum(self.__count):
raise pyxb.UnexpectedElementError('too many')
else:
if self.__parentState is not None:
self.__parentState.notifyFailure(self, self.__particle.satisfiesOccurrences(self.__count))
if not self.__particle.meetsMinimum(self.__count):
# @TODO@ Use better exception; changing this will require
# changing some unit tests.
#underflow_exc = pyxb.MissingElementError('too few')
underflow_exc = pyxb.UnrecognizedContentError('too few')
return (consumed, underflow_exc)
def __str__ (self):
particle = self.__particle
return 'ParticleState(%d:%d,%s:%s)@%x' % (self.__count, particle.minOccurs(), particle.maxOccurs(), particle.term(), id(self))
class ParticleModel (ContentModel_mixin):
"""Content model dealing with particles: terms with occurrence restrictions"""
def minOccurs (self): return self.__minOccurs
def maxOccurs (self): return self.__maxOccurs
def term (self): return self.__term
def meetsMaximum (self, count):
"""@return: C{True} iff there is no maximum on term occurrence, or the
provided count does not exceed that maximum"""
return (self.__maxOccurs is None) or (count <= self.__maxOccurs)
def meetsMinimum (self, count):
"""@return: C{True} iff the provided count meets the minimum number of
occurrences"""
return count >= self.__minOccurs
def satisfiesOccurrences (self, count):
"""@return: C{True} iff the provided count satisfies the occurrence
requirements"""
return self.meetsMinimum(count) and self.meetsMaximum(count)
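The three occurrence tests above reduce to simple comparisons. This is a hypothetical stand-alone restatement (names are illustrative, not PyXB's API); a C{max_occurs} of C{None} models "unbounded", matching L{meetsMaximum}.

```python
def meets_maximum(count, max_occurs):
    # No maximum, or the count does not exceed it.
    return (max_occurs is None) or (count <= max_occurs)

def meets_minimum(count, min_occurs):
    return count >= min_occurs

def satisfies_occurrences(count, min_occurs=1, max_occurs=1):
    return meets_minimum(count, min_occurs) and meets_maximum(count, max_occurs)

assert satisfies_occurrences(1)                                  # default [1, 1]
assert not satisfies_occurrences(0)                              # below minimum
assert satisfies_occurrences(5, min_occurs=0, max_occurs=None)   # unbounded
assert not satisfies_occurrences(3, min_occurs=1, max_occurs=2)  # above maximum
```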
def __init__ (self, term, min_occurs=1, max_occurs=1):
self.__term = term
self.__minOccurs = min_occurs
self.__maxOccurs = max_occurs
def newState (self):
return ParticleState(self)
def validate (self, symbol_set):
"""Determine whether the particle requirements are satisfiable by the
given symbol set.
The symbol set represents letters in an alphabet. If those letters
can be combined in a way that satisfies the regular expression
expressed in the model, a satisfying sequence is returned and the
symbol set is reduced by the letters used to form the sequence. If
the content model cannot be satisfied, C{None} is returned and the
symbol set remains unchanged.
@param symbol_set: A map from L{ElementUse} instances to a list of
values. The order of the values corresponds to the order in which
they should appear. A key of C{None} identifies values that are
stored as wildcard elements. Values are removed from the lists as
they are used; when the last value of a list is removed, its key is
removed from the map. Thus an empty dictionary is the indicator that
no more symbols are available.
@return: returns C{None}, or a list of tuples C{( eu, val )} where
C{eu} is an L{ElementUse} from the set of symbol keys, and C{val} is a
value from the corresponding list.
"""
output_sequence = []
#print 'Start: %d %s %s : %s' % (self.__minOccurs, self.__maxOccurs, self.__term, symbol_set)
result = self._validate(symbol_set, output_sequence)
#print 'End: %s %s %s' % (result, symbol_set, output_sequence)
if result:
return (symbol_set, output_sequence)
return None
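The symbol-set convention described in the docstring above is the same one consumed by the C{_validate} implementations of L{ElementUse} and L{Wildcard}: keys map to ordered value lists, and removing the last value of a list deletes the key, so an empty dictionary signals that no symbols remain. A minimal, hypothetical sketch of one consumption step:

```python
def consume(symbol_set, key, output_sequence):
    """Take the next value for `key`, mirroring ElementUse._validate."""
    values = symbol_set.get(key)
    if values is None:
        return False
    used = values.pop(0)
    output_sequence.append((key, used))
    if not values:
        # Last value consumed: drop the key entirely.
        del symbol_set[key]
    return True

symbols = {'elt': ['a', 'b']}
out = []
assert consume(symbols, 'elt', out)
assert consume(symbols, 'elt', out)
assert not consume(symbols, 'elt', out)   # exhausted: key was removed
assert out == [('elt', 'a'), ('elt', 'b')] and symbols == {}
```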
def _validate (self, symbol_set, output_sequence):
symbol_set_mut = self._validateCloneSymbolSet(symbol_set)
output_sequence_mut = self._validateCloneOutputSequence(output_sequence)
count = 0
#print 'VAL start %s: %d %s' % (self.__term, self.__minOccurs, self.__maxOccurs)
last_size = len(output_sequence_mut)
while (count != self.__maxOccurs) and self.__term._validate(symbol_set_mut, output_sequence_mut):
#print 'VAL %s old cnt %d, left %s' % (self.__term, count, symbol_set_mut)
this_size = len(output_sequence_mut)
if this_size == last_size:
# Validated without consuming anything. Assume we can
# continue to do so, jump to the minimum, and exit.
if count < self.__minOccurs:
count = self.__minOccurs
break
count += 1
last_size = this_size
result = self.satisfiesOccurrences(count)
if (result):
self._validateReplaceResults(symbol_set, symbol_set_mut, output_sequence, output_sequence_mut)
#print 'VAL end PRT %s res %s: %s %s %s' % (self.__term, result, self.__minOccurs, count, self.__maxOccurs)
return result
class _Group (ContentModel_mixin):
"""Base class for content information pertaining to a U{model
group<http://www.w3.org/TR/xmlschema-1/#Model_Groups>}.
There is a specific subclass for each group compositor.
"""
_StateClass = None
"""A reference to a L{ContentState_mixin} class that maintains state when
validating an instance of this group."""
def particles (self): return self.__particles
def __init__ (self, *particles):
self.__particles = particles
def newState (self, parent_particle_state):
return self._StateClass(self, parent_particle_state)
# All and Sequence share the same validation code, so it's up here.
def _validate (self, symbol_set, output_sequence):
symbol_set_mut = self._validateCloneSymbolSet(symbol_set)
output_sequence_mut = self._validateCloneOutputSequence(output_sequence)
for p in self.particles():
if not p._validate(symbol_set_mut, output_sequence_mut):
return False
self._validateReplaceResults(symbol_set, symbol_set_mut, output_sequence, output_sequence_mut)
return True
class GroupChoice (_Group):
_StateClass = ChoiceState
# Choice requires a different validation algorithm
def _validate (self, symbol_set, output_sequence):
reset_mutables = True
for p in self.particles():
if reset_mutables:
symbol_set_mut = self._validateCloneSymbolSet(symbol_set)
output_sequence_mut = self._validateCloneOutputSequence(output_sequence)
if p._validate(symbol_set_mut, output_sequence_mut):
self._validateReplaceResults(symbol_set, symbol_set_mut, output_sequence, output_sequence_mut)
return True
reset_mutables = len(output_sequence) != len(output_sequence_mut)
return False
class GroupAll (_Group):
_StateClass = AllState
class GroupSequence (_Group):
_StateClass = SequenceState
## Local Variables:
## fill-column:78
## End:
| 44.168865 | 162 | 0.652708 | 6,209 | 50,220 | 5.107103 | 0.122081 | 0.013907 | 0.00596 | 0.014506 | 0.359445 | 0.312835 | 0.277326 | 0.254935 | 0.223873 | 0.209492 | 0 | 0.000902 | 0.271605 | 50,220 | 1,136 | 163 | 44.207746 | 0.865965 | 0.098845 | 0 | 0.38448 | 0 | 0 | 0.039344 | 0.005296 | 0 | 0 | 0 | 0.003521 | 0.021164 | 0 | null | null | 0.007055 | 0.007055 | null | null | 0.003527 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
615449c99a13e49491115643772692dd1ae14eec | 3,078 | py | Python | backend/bundle/controller.py | FlickerSoul/Graphery | 8b1390e1ba96fd2867f0cd8e5fc1d4ad6108121e | [
"MIT"
] | 5 | 2020-08-26T00:15:01.000Z | 2021-01-11T17:24:51.000Z | backend/bundle/controller.py | FlickerSoul/Graphery | 8b1390e1ba96fd2867f0cd8e5fc1d4ad6108121e | [
"MIT"
] | 69 | 2020-08-02T23:45:44.000Z | 2021-04-17T03:04:32.000Z | backend/bundle/controller.py | FlickerSoul/Graphery | 8b1390e1ba96fd2867f0cd8e5fc1d4ad6108121e | [
"MIT"
] | 4 | 2020-09-10T05:40:49.000Z | 2020-12-20T11:44:16.000Z | from __future__ import annotations
import logging
import pathlib
from logging.handlers import TimedRotatingFileHandler
from os import getenv
from typing import Union, List, Mapping
from bundle.utils.recorder import Recorder
from bundle.utils.cache_file_helpers import CacheFolder, USER_DOCS_PATH
from bundle.seeker import tracer
_CACHE_FOLDER_AUTO_DELETE_ENV_NAME = 'CONTROLLER_CACHE_AUTO_DELETE'
is_auto_delete = bool(int(getenv(_CACHE_FOLDER_AUTO_DELETE_ENV_NAME, False)))
_CACHE_PATH_ENV_NAME = 'CONTROLLER_CACHE_PATH'
controller_cache_path = pathlib.Path(getenv(_CACHE_PATH_ENV_NAME, USER_DOCS_PATH))
class _Controller:
_LOG_FILE_NAME = 'graphery_controller_execution.log'
def __init__(self, cache_path=controller_cache_path, auto_delete: bool = is_auto_delete):
self.main_cache_folder = CacheFolder(cache_path, auto_delete=auto_delete)
self.log_folder = CacheFolder(cache_path / 'log', auto_delete=auto_delete)
# TODO think about this, and the log file location in the sight class
self.log_folder.mkdir(parents=True, exist_ok=True)
self.tracer_cls = tracer
self.recorder = Recorder()
self.controller_logger = self._init_logger()
self.main_cache_folder.__enter__()
self.tracer_cls.set_new_recorder(self.recorder)
def _init_logger(self) -> logging.Logger:
log_file_path = self.log_folder.cache_folder_path / self._LOG_FILE_NAME
logger = logging.getLogger('controller.tracer')
logger.setLevel(logging.DEBUG)
log_file_handler = TimedRotatingFileHandler(log_file_path, when='midnight', backupCount=30)
log_file_handler.setLevel(logging.INFO)
formatter = logging.Formatter(
'%(asctime)-15s::%(levelname)s::%(message)s'
)
log_file_handler.setFormatter(formatter)
logger.addHandler(log_file_handler)
return logger
def get_recorded_content(self) -> List[Mapping]:
return self.recorder.get_change_list()
def get_processed_result(self) -> List[Mapping]:
return self.recorder.get_processed_change_list()
def get_processed_result_json(self) -> str:
return self.recorder.get_change_list_json()
def purge_records(self) -> None:
self.recorder.purge()
def __call__(self, dir_name: Union[str, pathlib.Path] = None,
mode: int = 0o777,
auto_delete: bool = False,
*args, **kwargs) -> CacheFolder:
if dir_name:
return self.main_cache_folder.add_cache_folder(dir_name, mode, auto_delete)
else:
return self.main_cache_folder
def __enter__(self) -> _Controller:
self.tracer_cls.set_logger(self.controller_logger)
# TODO give a prompt that the current session is under this time stamp
return self
def __exit__(self, exc_type, exc_val, exc_tb) -> None:
self.tracer_cls.set_logger(None)
def __del__(self) -> None:
self.main_cache_folder.__exit__(None, None, None)
controller = _Controller()
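The rotating-log configuration built by `_init_logger` can be exercised on its own. This standalone sketch (file names and logger name are illustrative) uses the same `TimedRotatingFileHandler` settings: the logger accepts everything, but the handler's INFO threshold keeps DEBUG records out of the file.

```python
import logging
import os
import tempfile
from logging.handlers import TimedRotatingFileHandler

log_dir = tempfile.mkdtemp()
log_path = os.path.join(log_dir, 'example.log')

logger = logging.getLogger('controller.tracer.example')
logger.setLevel(logging.DEBUG)   # the logger itself accepts everything...
handler = TimedRotatingFileHandler(log_path, when='midnight', backupCount=30)
handler.setLevel(logging.INFO)   # ...but the file only records INFO and above
handler.setFormatter(logging.Formatter('%(asctime)-15s::%(levelname)s::%(message)s'))
logger.addHandler(handler)

logger.info('hello')
logger.debug('debug-only')       # filtered out by the handler level
handler.close()

with open(log_path) as f:
    content = f.read()
```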
| 37.084337 | 99 | 0.71345 | 396 | 3,078 | 5.146465 | 0.295455 | 0.058881 | 0.031894 | 0.046614 | 0.182041 | 0.10844 | 0.035329 | 0 | 0 | 0 | 0 | 0.003261 | 0.203054 | 3,078 | 82 | 100 | 37.536585 | 0.827558 | 0.044185 | 0 | 0 | 0 | 0 | 0.051718 | 0.042191 | 0 | 0 | 0 | 0.012195 | 0 | 1 | 0.166667 | false | 0 | 0.15 | 0.05 | 0.466667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
6156bf0e3221f1152ca5fede44ceb1bca071a86d | 1,246 | py | Python | toolchain/riscv/MSYS/python/Lib/test/encoded_modules/__init__.py | zhiqiang-hu/bl_iot_sdk | 154ee677a8cc6a73e6a42a5ff12a8edc71e6d15d | ["Apache-2.0"] | 207 stars | 8 issues | 76 forks
# -*- encoding: utf-8 -*-
# This is a package that contains a number of modules that are used to
# test import from the source files that have different encodings.
# This file (the __init__ module of the package), is encoded in utf-8
# and contains a list of strings from various unicode planes that are
# encoded differently to compare them to the same strings encoded
# differently in submodules. The following list, test_strings,
# contains a list of tuples. The first element of each tuple is the
# suffix that should be prepended with 'module_' to arrive at the
# encoded submodule name, the second item is the encoding and the last
# is the test string. The same string is assigned to the variable
# named 'test' inside the submodule. If the decoding of modules works
# correctly, from module_xyz import test should result in the same
# string as listed below in the 'xyz' entry.
# module, encoding, test string
test_strings = (
('iso_8859_1', 'iso-8859-1', "Les hommes ont oublié cette vérité, "
"dit le renard. Mais tu ne dois pas l'oublier. Tu deviens "
"responsable pour toujours de ce que tu as apprivoisé."),
('koi8_r', 'koi8-r', "Познание бесконечности требует бесконечного времени.")
)
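The comparison the comment block describes relies on each codec round-tripping its test string exactly. A standalone sketch with shortened stand-ins for the strings above (the `module_` + suffix naming convention is the one the comment states):

```python
# Shortened stand-ins for the entries in test_strings above (suffix, codec, text).
samples = (
    ('iso_8859_1', 'iso-8859-1',
     "Les hommes ont oublié cette vérité, dit le renard."),
    ('koi8_r', 'koi8-r',
     "Познание бесконечности требует бесконечного времени."),
)

results = []
for suffix, encoding, text in samples:
    module_name = 'module_' + suffix          # e.g. module_koi8_r on disk
    stored_bytes = text.encode(encoding)      # how that submodule's source is saved
    results.append((module_name, stored_bytes.decode(encoding) == text))
```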
6158ddbd923564b0fc6dc5759ee9c0f2fdcd6fb9 | 3,078 | py | Python | toolkit/utils/report_utils.py | suraj-testing2/Flowers_Toilet | 21c981531a505a8b74ee42c33a3f4d68ef72d7f3 | ["Apache-2.0"] | 22 stars | 17 forks
#!/usr/bin/python
#
# Copyright 2014 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Common functions used in reporting.
Used by both command line tools and ui tools.
"""
from operator import itemgetter
import pprint
import textwrap
DISPLAY_WIDTH = 80
TAB_WIDTH = 4
BORDER = DISPLAY_WIDTH * '-'
SEPARATOR = (DISPLAY_WIDTH // 2) * '-'
# Only needed for wrapping with 3-level indenting.
wrapper = textwrap.TextWrapper(width=DISPLAY_WIDTH,
initial_indent=3 * TAB_WIDTH * ' ',
subsequent_indent=3 * TAB_WIDTH * ' ')
def PrintReportLine(text, indent=False, indent_level=1):
"""Helper to allow report string formatting (e.g. set tabs to 4 spaces).
Args:
text: String text to print.
indent: If True, indent the line.
indent_level: If indent, indent this many tabs.
"""
if indent:
fmt = '%s%%s' % (indent_level * '\t')
else:
fmt = '%s'
    print(str(fmt % text).expandtabs(TAB_WIDTH))
def WrapReportText(text):
"""Helper to allow report string wrapping (e.g. wrap and indent).
Actually invokes textwrap.fill() which returns a string instead of a list.
We always double-indent our wrapped blocks.
Args:
text: String text to be wrapped.
Returns:
String of wrapped and indented text.
"""
return wrapper.fill(text)
class Counter(object):
"""Replaces Collections.Counter when Python 2.7 is not available."""
def __init__(self):
"""Establish internal data structures for counting."""
self._counter = {}
def DebugPrint(self):
"""For debugging show the data structure."""
pprint.pprint(self._counter)
def Increment(self, counter_key, counter_increment=1):
"""Increment a key.
Args:
counter_key: String key that will collect a count.
counter_increment: Int to increment the count; usually 1.
"""
self._counter.setdefault(counter_key, 0)
self._counter[counter_key] += counter_increment
@property
def data(self):
"""Give access to the dictionary for retrieving keys/values."""
return self._counter
def FilterAndSortMostCommon(self, top_n=None):
"""Determine the top_n keys with highest counts in order descending.
Args:
top_n: Int count of the number of keys of interest. If None, list all.
Returns:
List of 2-tuples (key, count) in descending order.
"""
        results = [(k, v) for k, v in self._counter.items()]
if not top_n:
top_n = len(results)
return sorted(results, key=itemgetter(1), reverse=True)[:top_n]
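`FilterAndSortMostCommon` boils down to a `sorted()` over `(key, count)` pairs with `itemgetter(1)` descending, sliced to `top_n`. The same idiom in a minimal standalone sketch (the sample words are made up):

```python
from operator import itemgetter

counts = {}
for word in ["spam", "egg", "spam", "ham", "spam", "egg"]:
    counts.setdefault(word, 0)   # mirrors Increment's setdefault step
    counts[word] += 1

# Highest counts first, then slice down to the requested top_n (here 2).
top_2 = sorted(counts.items(), key=itemgetter(1), reverse=True)[:2]
```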
6158e50e667f81e2d5e70166a7f7dc89a015b4a3 | 323 | py | Python | Sinavro/Types/BaseClass.py | pl-Steve28-lq/SinavroLang | 8f8344bf9d92ba5498f9bbf103fb2c80cde724d2 | ["MIT"] | 4 stars
class SinavroObject: pass
def init(self, val): self.value = val
gencls = lambda n: type(f'Sinavro{n.title()}', (SinavroObject,), {'__init__': init, 'type': n})
SinavroInt = gencls('int')
SinavroFloat = gencls('float')
SinavroString = gencls('string')
SinavroBool = gencls('bool')
SinavroArray = gencls('array')
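`gencls` mints one wrapper class per primitive kind via the three-argument form of `type()`. A self-contained sketch of the same factory with a throwaway usage line:

```python
class SinavroObject:
    pass

def init(self, val):
    self.value = val

# type(name, bases, namespace) builds a SinavroObject subclass per kind
gencls = lambda n: type(f'Sinavro{n.title()}', (SinavroObject,),
                        {'__init__': init, 'type': n})

SinavroInt = gencls('int')
num = SinavroInt(42)
```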
615ad8eba732cef089d5695b78282f0eeaae7437 | 2,824 | py | Python | HUGS/Interface/_upload.py | hugs-cloud/hugs | 65aef97d662746f1382bd03b15f46a6647c7b7f5 | ["Apache-2.0"] | 5 issues | 2 forks
import tempfile
from pathlib import Path
import ipywidgets as widgets
from HUGS.Client import Process
from HUGS.Interface import Credentials
class Upload:
def __init__(self):
self._credentials = Credentials()
self._user = None
def login(self):
return self._credentials.login()
def get_user(self):
return self._credentials.get_user()
def get_results(self):
return self._results
def upload(self):
type_widget = widgets.Dropdown(
options=["CRDS", "GC", "ICOS", "NOAA", "TB", "EUROCOM"],
description="Data type:",
disabled=False,
)
base_url = "https://hugs.acquire-aaai.com/t"
        upload_widget = widgets.FileUpload(multiple=False, description="Select")
transfer_button = widgets.Button(
description="Transfer",
button_style="info",
layout=widgets.Layout(width="10%"),
)
def do_upload(a):
            if not type_widget.value:
status_text.value = (
"<font color='red'>Please select a data type</font>"
)
return
user = self.get_user()
data_type = type_widget.value
if not user.is_logged_in():
status_text.value = "<font color='red'>User not logged in</font>"
return
# Here we get the data as bytes, write it to a tmp directory so we can
# process it using HUGS
# TODO - better processing method? Allow HUGS to accept bytes?
with tempfile.TemporaryDirectory() as tmpdir:
file_content = upload_widget.value
filename = list(file_content.keys())[0]
data_bytes = file_content[filename]["content"]
tmp_filepath = Path(tmpdir).joinpath(filename)
with open(tmp_filepath, "wb") as f:
f.write(data_bytes)
p = Process(service_url=base_url)
result = p.process_files(
user=user, files=tmp_filepath, data_type=data_type
)
self._results = result
# Upload the file to HUGS
if result:
status_text.value = "<font color='green'>Upload successful</font>"
else:
status_text.value = (
"<font color='red'>Unable to process file</font>"
)
transfer_button.on_click(do_upload)
data_hbox = widgets.HBox(children=[type_widget, upload_widget, transfer_button])
status_text = widgets.HTML(
value="<font color='#00BCD4'>Waiting for file</font>"
)
return widgets.VBox(children=[data_hbox, status_text])
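`do_upload` stages the uploaded bytes through a temporary directory so the processing client receives a real file path. The staging step alone, sketched with invented file content (the `value` dict shape mirrors what the code above assumes `FileUpload` provides):

```python
import tempfile
from pathlib import Path

# Illustrative stand-in for upload_widget.value:
# a dict keyed by filename whose entries carry the raw bytes.
file_content = {"data.csv": {"content": b"site,gas\nMHD,co2\n"}}

with tempfile.TemporaryDirectory() as tmpdir:
    filename = list(file_content.keys())[0]
    data_bytes = file_content[filename]["content"]
    tmp_filepath = Path(tmpdir).joinpath(filename)
    with open(tmp_filepath, "wb") as f:
        f.write(data_bytes)
    staged = tmp_filepath.read_bytes()   # what the processing step would read
```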
616e5cf634f2586b19810e30e33e8aba47812b49 | 494 | py | Python | slender/tests/list/test_include.py | torokmark/slender | 3bf815e22f7802ba48706f31ba608cf609e23e68 | ["Apache-2.0"] | 1 star
from unittest import TestCase
from expects import expect, equal, raise_error, be_true, be_false
from slender import List
class TestInclude(TestCase):
def test_include_if_value_in_array(self):
e = List(['apple', 'bear', 'dog', 'plum', 'grape', 'cat', 'anchor'])
expect(e.include('bear')).to(be_true)
def test_include_if_value_not_in_array(self):
e = List(['apple', 'bear', 'cat', 'anchor'])
expect(e.include('dog')).to(be_false)
61741f74b759c1266c2b4eff5463940e41acbff2 | 1,691 | py | Python | wagtailcomments/basic/migrations/0001_initial.py | takeflight/wagtailcomments | 941ae2f16fccb867a33d2ed4c21b8e5e04af712d | ["BSD-3-Clause"] | 7 stars | 2 forks
# -*- coding: utf-8 -*-
# Generated by Django 1.9.7 on 2016-07-29 06:13
from __future__ import unicode_literals
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
import django.utils.timezone
import enumchoicefield.fields
import wagtailcomments.models
class Migration(migrations.Migration):
initial = True
dependencies = [
('contenttypes', '0002_remove_content_type_name'),
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='Comment',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('object_id', models.TextField()),
('user_name', models.CharField(blank=True, max_length=255, null=True)),
('user_email', models.EmailField(blank=True, max_length=254, null=True)),
('datetime', models.DateTimeField(default=django.utils.timezone.now)),
('ip_address', models.GenericIPAddressField()),
('status', enumchoicefield.fields.EnumChoiceField(enum_class=wagtailcomments.models.CommentStatus, max_length=10)),
('body', models.TextField()),
('object_type', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='contenttypes.ContentType')),
('user', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
options={
'swappable': 'WAGTAILCOMMENTS_MODEL',
},
),
]
6179a809c6fd4ff96baedddbf9bb9dae658ef8c7 | 1,256 | py | Python | neutron_plugin_contrail/plugins/opencontrail/neutron_middleware.py | hamzazafar/contrail-neutron-plugin | fb8dbcabc8240e5c47753ae6e2af5556d0a38421 | ["Apache-2.0"] | 3 stars | 1 issue | 5 forks
import logging
from eventlet import corolocal
from eventlet.greenthread import getcurrent
"""
This middleware is used to forward user token to Contrail API server.
Middleware is inserted at head of Neutron pipeline via api-paste.ini file
so that user token can be preserved in local storage of Neutron thread.
This is needed because neutron will later remove the user token before control
finally reaches Contrail plugin. Contrail plugin will retreive the user token
from thread's local storage and pass it to API server via X-AUTH-TOKEN header.
"""
class UserToken(object):
def __init__(self, app, conf):
self._logger = logging.getLogger(__name__)
self._app = app
self._conf = conf
def __call__(self, env, start_response):
# preserve user token for later forwarding to contrail API server
cur_greenlet = getcurrent()
cur_greenlet.contrail_vars = corolocal.local()
cur_greenlet.contrail_vars.token = env.get('HTTP_X_AUTH_TOKEN')
return self._app(env, start_response)
def token_factory(global_conf, **local_conf):
"""Paste factory."""
conf = global_conf.copy()
conf.update(local_conf)
def _factory(app):
return UserToken(app, conf)
return _factory
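`UserToken` parks the token in storage local to the current green thread so the plugin can fetch it later from the same thread of execution. With plain stdlib threads the same hand-off uses `threading.local` (a sketch, not the eventlet code; the function names are invented):

```python
import threading

_request_store = threading.local()   # stands in for eventlet's corolocal.local()

def save_token(env):
    # middleware side: stash the header for the current thread of execution
    _request_store.token = env.get('HTTP_X_AUTH_TOKEN')

def current_token():
    # plugin side: read back what the middleware stashed, if anything
    return getattr(_request_store, 'token', None)

save_token({'HTTP_X_AUTH_TOKEN': 'abc123'})
token = current_token()
```

Because the storage is per-thread, concurrent requests cannot see each other's tokens.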
618446f9da1ab0931b796eb8d9862b418bee6a2b | 1,115 | py | Python | hackerearth/Algorithms/Buggy Bot/solution.py | ATrain951/01.python-com_Qproject | c164dd093954d006538020bdf2e59e716b24d67c | ["MIT"] | 4 stars
"""
# Sample code to perform I/O:
name = input() # Reading input from STDIN
print('Hi, %s.' % name) # Writing output to STDOUT
# Warning: Printing unwanted or ill-formatted data to output will cause the test cases to fail
"""
# Write your code here
from collections import defaultdict
n, m, k = map(int, input().strip().split())
adjacency = defaultdict(list)
for _ in range(m):
a, b = map(int, input().strip().split())
adjacency[a].append(b)
final = [False] * (n + 1)
for node in adjacency[1]:
final[node] = True
buggy = 1
neighboring_nodes = defaultdict(set)
for _ in range(k):
x, y = map(int, input().strip().split())
if final[x]:
final[x] = False
for pa in neighboring_nodes[x]:
adjacency[pa].append(x)
neighboring_nodes[x] = set()
final[y] = True
if buggy == x:
buggy = y
for node in adjacency[buggy]:
final[node] = True
neighboring_nodes[node].add(buggy)
adjacency[buggy] = []
final[buggy] = True
print(sum(final))
print(' '.join(str(i) for i in range(n + 1) if final[i] is True))
618aadd7fe62fbfa6544e105456f6c6a8c7ddd78 | 1,443 | py | Python | app/forms.py | paulpaulaga/ask-paul | 1af662243ac5d05e3f3b237f743a11de25d712e7 | ["MIT"] | 1 issue
from django import forms
from app.models import Question, Profile, Answer
from django.contrib.auth.models import User
class LoginForm(forms.Form):
username = forms.CharField()
password = forms.CharField(widget=forms.PasswordInput)
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.fields['username'].required = True
self.fields['password'].required = True
class AskForm(forms.ModelForm):
class Meta:
model = Question
fields = ['title', 'text']
def save(self, *args, **kwargs):
user = kwargs.pop('user')
question = super().save(*args, **kwargs)
question.user = user
question.save()
return question
class UserSignupForm(forms.ModelForm):
avatar = forms.ImageField()
password = forms.CharField(widget=forms.PasswordInput)
confirm_password = forms.CharField(widget=forms.PasswordInput)
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.fields['email'].required = True
class Meta:
model = User
fields = ['username', 'email', 'first_name', 'last_name', 'password']
class AnswerForm(forms.ModelForm):
class Meta:
model = Answer
fields = ['text']
def save(self, *args, **kwargs):
user = kwargs.pop('user')
answer = super().save(*args, **kwargs)
answer.user = user
        answer.save()
        return answer
618dc6198da7239c2edc0143aac74d64a250d04c | 841 | py | Python | create_widget.py | Adam01/Cylinder | 84ca02e2860df579a4e0514c4060a0c497e167a5 | ["MIT"] | 4 stars | 3 issues
import os
import sys
import shutil
templateDir = "./Cylinder/WidgetTemplate"
widgetOutDir = "./Cylinder/rapyd/Widgets/"
if len(sys.argv) > 1:
name = sys.argv[1]
widgetDir = os.path.join(widgetOutDir, name)
if not os.path.exists(widgetDir):
if os.path.exists(templateDir):
os.mkdir(widgetDir)
for item in os.listdir(templateDir):
widgetTemplateItem = os.path.join(templateDir, item)
widgetItem = os.path.join(widgetDir, name + os.path.splitext(item)[1] )
print "Copying %s as %s" % (widgetTemplateItem, widgetItem)
shutil.copy(widgetTemplateItem, widgetItem)
print "Done"
else:
print "Unable to find template dir"
else:
print "Widget already exists"
else:
print "Specify widget name" | 32.346154 | 87 | 0.62069 | 95 | 841 | 5.494737 | 0.452632 | 0.068966 | 0.057471 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004918 | 0.274673 | 841 | 26 | 88 | 32.346154 | 0.85082 | 0 | 0 | 0.130435 | 0 | 0 | 0.162708 | 0.059382 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.130435 | null | null | 0.217391 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
618ed936ea97eb91d0ed1cb9b6eb7123a3f94c4d | 1,380 | py | Python | examples/06-worker.py | pacslab/serverless-performance-simulator | 5dd1623ad873d005b457676c060232205de7f552 | ["MIT"] | 5 stars | 5 forks
import zmq
import time
import sys
import struct
import multiprocessing
from examples.sim_trace import generate_trace
port = "5556"
if len(sys.argv) > 1:
port = sys.argv[1]
int(port)
socket_addr = "tcp://127.0.0.1:%s" % port
worker_count = multiprocessing.cpu_count() * 2 + 1
# NB: a module-level bool set in the parent would never be seen by the child
# processes, so the shared shutdown flag is a multiprocessing.Event.
stop_signal = multiprocessing.Event()
def worker(context=None, name="worker"):
    context = context or zmq.Context.instance()
    worker = context.socket(zmq.ROUTER)
    worker.connect(socket_addr)
    print(f"Starting worker process: {name}")
    poller = zmq.Poller()
    poller.register(worker, zmq.POLLIN)
    while not stop_signal.is_set():
        socks = dict(poller.poll(timeout=1000))
        if worker in socks and socks[worker] == zmq.POLLIN:
            ident, message = worker.recv_multipart()
            # calculate a trace sample and pack it as a C double
            msg = struct.pack("d", generate_trace())
            worker.send_multipart([ident, msg])
if __name__ == "__main__":
    worker_names = [f"worker-{i}" for i in range(worker_count)]
    worker_processes = [multiprocessing.Process(target=worker, args=(None, n))
                        for n in worker_names]
    _ = [p.start() for p in worker_processes]
    while True:
        try:
            time.sleep(1)
        except KeyboardInterrupt:
            print("Ctrl-c pressed!")
            stop_signal.set()
            [p.join() for p in worker_processes]
            break
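A shutdown flag only crosses the process boundary if it is a shared primitive; a minimal sketch of that hand-off with `multiprocessing.Event` (a POSIX `fork` context is assumed so the sketch needs no `__main__` guard; names are invented):

```python
import multiprocessing
import time

def run(stop_signal, counter):
    # do one unit of work, then keep going until the parent signals shutdown
    while True:
        with counter.get_lock():
            counter.value += 1
        if stop_signal.is_set():
            break
        time.sleep(0.01)

ctx = multiprocessing.get_context("fork")
stop = ctx.Event()
hits = ctx.Value('i', 0)
child = ctx.Process(target=run, args=(stop, hits))
child.start()
time.sleep(0.1)
stop.set()                      # visible inside the child, unlike a plain bool
child.join(timeout=5)
```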
61986b8629c565625618497302e6f724f841c131 | 779 | py | Python | src/visualize/visualize_checkpoint.py | Immocat/ACTOR | c7237e82e333bf2c57f7d8e12f27d0831233befc | ["MIT"] | 164 stars | 14 issues | 27 forks
import os
import matplotlib.pyplot as plt
import torch
from src.utils.get_model_and_data import get_model_and_data
from src.parser.visualize import parser
from .visualize import viz_epoch
import src.utils.fixseed # noqa
plt.switch_backend('agg')
def main():
# parse options
parameters, folder, checkpointname, epoch = parser()
model, datasets = get_model_and_data(parameters)
dataset = datasets["train"]
print("Restore weights..")
checkpointpath = os.path.join(folder, checkpointname)
state_dict = torch.load(checkpointpath, map_location=parameters["device"])
model.load_state_dict(state_dict)
# visualize_params
viz_epoch(model, dataset, epoch, parameters, folder=folder, writer=None)
if __name__ == '__main__':
main()
619f55e5cd0fde19ef49d00d5c74c41a8e978000 | 938 | py | Python | APIs/Biquery/src/infra/models/datasets.py | clarencejlee/jdp | d3d31db0138ff06f2f5ec592d85317941af4f280 | ["MIT"]
from enum import Enum
from tortoise import fields
from tortoise.models import Model
class Status(str, Enum):
PENDING = "pending"
IMPORTED = "imported"
class Datasets(Model):
id = fields.CharField(36, pk=True)
provider = fields.CharField(10, null=False)
provider_id = fields.CharField(255, null=False)
name = fields.CharField(255, null=False)
size = fields.IntField(default=0, null=False)
status = fields.CharEnumField(Status, default=Status.PENDING)
created_at = fields.DatetimeField(auto_now_add=True)
updated_at = fields.DatetimeField(auto_now=True)
def serialize(self):
return {
'id': self.id,
'provider': self.provider,
'name': self.name,
'size': self.size,
'status': self.status.value,
'createdAt': self.created_at.timestamp() * 1000,
'updatedAt': self.updated_at.timestamp() * 1000,
        }
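Because `Status` mixes `str` into the `Enum`, members compare equal to their raw strings, which is what lets the model field store them and `serialize()` emit `.value` directly. A compact sketch:

```python
from enum import Enum

class Status(str, Enum):
    PENDING = "pending"
    IMPORTED = "imported"

row = {"status": Status.PENDING}
serialized = {"status": row["status"].value}   # what serialize() emits
```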
61a07bb292fcae2ff0accde2ff05fef346e1cb96 | 6,046 | py | Python | src/python/pipelines_utils/file_utils.py | InformaticsMatters/pipelines-utils | 07182505500fe2e3ab1db29332f6a7ec98b0c1b6 | ["Apache-2.0"] | 33 issues | 1 fork
#!/usr/bin/env python
# Copyright 2018 Informatics Matters Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import print_function
import os
from . import utils
# Files are normally located in sub-directories of the pipeline module path.
# For example a pipeline module 'pipeline_a.py' that expects to use a file
# or SDF picker would place its files in the directory
# 'pipelines/demo/pipeline_a'.
def pick(filename, directory=None):
"""Returns the named file. If directory is not specified the file is
expected to be located in a sub-directory whose name matches
that of the calling module otherwise the file is expected to be found in
the named directory.
:param filename: The file, whose path is required.
:type filename: ``str``
:param directory: An optional directory.
If not provided it is calculated automatically.
:type directory: ``str``
:return: The full path to the file, or None if it does not exist
:rtype: ``str``
"""
if directory is None:
directory = utils.get_undecorated_calling_module()
# If the 'cwd' is not '/output' (which indicates we're in a Container)
# then remove the CWD and the anticipated '/'
# from the front of the module
if os.getcwd() not in ['/output']:
directory = directory[len(os.getcwd()) + 1:]
file_path = os.path.join(directory, filename)
return file_path if os.path.isfile(file_path) else None
def pick_sdf(filename, directory=None):
"""Returns a full path to the chosen SDF file. The supplied file
is not expected to contain a recognised SDF extension, this is added
automatically.
    If a file with the extension `.sdf.gz` or `.sdf` is found the full path
    to it (including the matched extension) is returned. If this fails,
    `None` is returned.
:param filename: The SDF file basename, whose path is required.
:type filename: ``str``
:param directory: An optional directory.
If not provided it is calculated automatically.
:type directory: ``str``
    :return: The full path to the file including the matched extension,
             or None if no matching file exists
:rtype: ``str``
"""
if directory is None:
directory = utils.get_undecorated_calling_module()
# If the 'cwd' is not '/output' (which indicates we're in a Container)
# then remove the CWD and the anticipated '/'
# from the front of the module
if os.getcwd() not in ['/output']:
directory = directory[len(os.getcwd()) + 1:]
file_path = os.path.join(directory, filename)
if os.path.isfile(file_path + '.sdf.gz'):
return file_path + '.sdf.gz'
elif os.path.isfile(file_path + '.sdf'):
return file_path + '.sdf'
# Couldn't find a suitable SDF file
return None


def pick_csv(filename, directory=None):
    """Returns a full path to the chosen CSV file. The supplied filename
    is not expected to contain a recognised CSV extension; this is added
    automatically.

    If a file with the extension `.csv.gz` or `.csv` is found, the path to
    it (including the extension) is returned. If this fails, `None` is
    returned.

    :param filename: The CSV file basename, whose path is required.
    :type filename: ``str``
    :param directory: An optional directory.
        If not provided it is calculated automatically.
    :type directory: ``str``
    :return: The full path to the file (including its extension),
        or None if it does not exist
    :rtype: ``str``
    """
    if directory is None:
        directory = utils.get_undecorated_calling_module()
        # If the 'cwd' is not '/output' (which indicates we're in a Container)
        # then remove the CWD and the anticipated '/'
        # from the front of the module
        if os.getcwd() not in ['/output']:
            directory = directory[len(os.getcwd()) + 1:]
    file_path = os.path.join(directory, filename)
    if os.path.isfile(file_path + '.csv.gz'):
        return file_path + '.csv.gz'
    elif os.path.isfile(file_path + '.csv'):
        return file_path + '.csv'
    # Couldn't find a suitable CSV file
    return None


def pick_smi(filename, directory=None):
    """Returns a full path to the chosen SMI file. The supplied filename
    is not expected to contain a recognised SMI extension; this is added
    automatically.

    If a file with the extension `.smi.gz` or `.smi` is found, the path to
    it (including the extension) is returned. If this fails, `None` is
    returned.

    :param filename: The SMI file basename, whose path is required.
    :type filename: ``str``
    :param directory: An optional directory.
        If not provided it is calculated automatically.
    :type directory: ``str``
    :return: The full path to the file (including its extension),
        or None if it does not exist
    :rtype: ``str``
    """
    if directory is None:
        directory = utils.get_undecorated_calling_module()
        # If the 'cwd' is not '/output' (which indicates we're in a Container)
        # then remove the CWD and the anticipated '/'
        # from the front of the module
        if os.getcwd() not in ['/output']:
            directory = directory[len(os.getcwd()) + 1:]
    file_path = os.path.join(directory, filename)
    if os.path.isfile(file_path + '.smi.gz'):
        return file_path + '.smi.gz'
    elif os.path.isfile(file_path + '.smi'):
        return file_path + '.smi'
    # Couldn't find a suitable SMI file
    return None
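The three `pick_*` helpers above differ only in the extension list they probe. A minimal, generic sketch of that probing pattern (the `pick_with_extensions` name is illustrative, not part of the original module):

```python
import os
import tempfile


def pick_with_extensions(file_path, extensions):
    """Return the first existing file among file_path + ext, else None."""
    for ext in extensions:
        candidate = file_path + ext
        if os.path.isfile(candidate):
            return candidate
    return None


# Demo: create an uncompressed .sdf file and probe in priority order.
with tempfile.TemporaryDirectory() as tmp:
    base = os.path.join(tmp, 'molecules')
    open(base + '.sdf', 'w').close()
    found = pick_with_extensions(base, ['.sdf.gz', '.sdf'])
    print(found.endswith('.sdf'))  # True
```

A single helper like this would let `pick_sdf`, `pick_csv`, and `pick_smi` share one implementation.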
# =============================================================
# File: the-lambda-trilogy/python/src/lambdas/the_lambda_lith/lambda_function.py
# Repo: InfrastructureHQ/CDK-Patterns (MIT license)
# =============================================================
def lambda_handler(event, context):
    try:
        first_num = event["queryStringParameters"]["firstNum"]
    except KeyError:
        first_num = 0
    try:
        second_num = event["queryStringParameters"]["secondNum"]
    except KeyError:
        second_num = 0
    try:
        operation_type = event["queryStringParameters"]["operation"]
        if operation_type == "add":
            result = add(first_num, second_num)
        elif operation_type == "subtract":
            result = subtract(first_num, second_num)
        elif operation_type == "multiply":
            result = multiply(first_num, second_num)
        else:
            result = "No Operation"
    except KeyError:
        return "No Operation"
    return result


def add(first_num, second_num):
    result = int(first_num) + int(second_num)
    print("The result of %s + %s = %s" % (first_num, second_num, result))
    return {"body": result, "statusCode": 200}


def subtract(first_num, second_num):
    result = int(first_num) - int(second_num)
    print("The result of %s - %s = %s" % (first_num, second_num, result))
    return {"body": result, "statusCode": 200}


def multiply(first_num, second_num):
    result = int(first_num) * int(second_num)
    print("The result of %s * %s = %s" % (first_num, second_num, result))
    return {"body": result, "statusCode": 200}
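A handler of this shape can be exercised locally with a hand-built API Gateway proxy event. A self-contained sketch (the trimmed-down handler below is illustrative, not the repo's code):

```python
def add(first_num, second_num):
    result = int(first_num) + int(second_num)
    return {"body": result, "statusCode": 200}


def lambda_handler(event, context=None):
    # Pull query parameters defensively, mirroring the dispatch above.
    qs = event.get("queryStringParameters") or {}
    first_num = qs.get("firstNum", 0)
    second_num = qs.get("secondNum", 0)
    if qs.get("operation") == "add":
        return add(first_num, second_num)
    return "No Operation"


event = {"queryStringParameters":
         {"firstNum": "4", "secondNum": "2", "operation": "add"}}
print(lambda_handler(event))  # {'body': 6, 'statusCode': 200}
```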
# =============================================================
# File: template1.py
# Repo: gregwa1953/FCM160 (MIT license)
# =============================================================
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# ======================================================
# template1.py
# ------------------------------------------------------
# Created for Full Circle Magazine Issue #160
# Written by G.D. Walters
# Copyright (c) 2020 by G.D. Walters
# This source code is released under the MIT License
# ======================================================
import sys
import os
import platform
import datetime
import sqlite3
from PIL import ImageTk, Image
from dbutils import quote
from fpdf import FPDF
from fpdf import Template
# ======================================================
dbname = "./tests/database/cookbook.db"
imagepath = "./tests/database/recipeimages/"
# The recipe information for the three sample recipes is defined below. These
# have been extracted from the actual database tables and converted into lists...
# ======================================================
recipe_table_dat = []
ingredients_table_dat = []
instructions_table_dat = []
cats_table_dat = []
images_table_dat = []
rtd = (
    166, 'Mongolian Beef and Spring Onions',
    'https://www.allrecipes.com/recipe/201849/mongolian-beef-and-spring-onions/',
    '4 serving(s)', '30 minutes', 5, 1, None,
    'A soy-based Chinese-style beef dish. Best served over soft rice noodles or rice.',
    1)
recipe_table_dat.append(rtd)
# ingredients table
ing = [(1802, 166, None, None, None, '2 teaspoons vegetable oil'),
       (1803, 166, None, None, None, '1 tablespoon finely chopped garlic'),
       (1804, 166, None, None, None, '1/2 teaspoon grated fresh ginger root'),
       (1805, 166, None, None, None, '1/2 cup soy sauce'),
       (1806, 166, None, None, None, '1/2 cup water'),
       (1807, 166, None, None, None, '2/3 cup dark brown sugar'),
       (1808, 166, None, None, None,
        '1 pound beef flank steak, sliced 1/4 inch thick on the diagonal'),
       (1809, 166, None, None, None, '1/4 cup cornstarch'),
       (1810, 166, None, None, None, '1 cup vegetable oil for frying'),
       (1811, 166, None, None, None,
        '2 bunches green onions, cut in 2-inch lengths')]
ingredients_table_dat.append(ing)
# instructions
instructions_table_dat.append((
    157, 166,
'Heat 2 teaspoons of vegetable oil in a saucepan over medium heat, and cook and stir the garlic and ginger until they release their fragrance, about 30 seconds. Pour in the soy sauce, water, and brown sugar. Raise the heat to medium-high, and stir 4 minutes, until the sugar has dissolved and the sauce boils and slightly thickens. Remove sauce from the heat, and set aside.\nPlace the sliced beef into a bowl, and stir the cornstarch into the beef, coating it thoroughly. Allow the beef and cornstarch to sit until most of the juices from the meat have been absorbed by the cornstarch, about 10 minutes.\nHeat the vegetable oil in a deep-sided skillet or wok to 375 degrees F (190 degrees C).\nShake excess cornstarch from the beef slices, and drop them into the hot oil, a few at a time. Stir briefly, and fry until the edges become crisp and start to brown, about 2 minutes. Remove the beef from the oil with a large slotted spoon, and allow to drain on paper towels to remove excess oil.\nPour the oil out of the skillet or wok, and return the pan to medium heat. Return the beef slices to the pan, stir briefly, and pour in the reserved sauce. Stir once or twice to combine, and add the green onions. Bring the mixture to a boil, and cook until the onions have softened and turned bright green, about 2 minutes.\n\n'
))
# Cats
cats_table_dat.append([(166, 'Asian'), (166, 'Beef'), (166, 'Main Dish')])
# Images
images_table_dat.append((95, 166, './MongolianBeefandSpringOnions.png'))
# Amish White Bread
recipe_table_dat.append((
    154, 'Amish White Bread',
    'https://www.allrecipes.com/recipe/6788/amish-white-bread/',
    '24 serving(s)', '150 minutes', 4, 1, None,
    "I got this recipe from a friend. It is very easy, and doesn't take long to make.\n",
    1))
ingredients_table_dat.append([
    (1669, 154, None, None, None,
     '2 cups warm water (110 degrees F/45 degrees C)'),
    (1670, 154, None, None, None, '2/3 cup white sugar'),
    (1671, 154, None, None, None, '1 1/2 tablespoons active dry yeast'),
    (1672, 154, None, None, None, '1 1/2 teaspoons salt'),
    (1673, 154, None, None, None, '1/4 cup vegetable oil'),
    (1674, 154, None, None, None, '6 cups bread flour')
])
instructions_table_dat.append((
    145, 154,
'In a large bowl, dissolve the sugar in warm water, and then stir in yeast. Allow to proof until yeast resembles a creamy foam.\nMix salt and oil into the yeast. Mix in flour one cup at a time. Knead dough on a lightly floured surface until smooth. Place in a well oiled bowl, and turn dough to coat. Cover with a damp cloth. Allow to rise until doubled in bulk, about 1 hour.\nPunch dough down. Knead for a few minutes, and divide in half. Shape into loaves, and place into two well oiled 9x5 inch loaf pans. Allow to rise for 30 minutes, or until dough has risen 1 inch above pans.\nBake at 350 degrees F (175 degrees C) for 30 minutes.\n\n\n'
))
cats_table_dat.append([(154, 'Breads'), (154, 'Breads')])
images_table_dat.append((79, 154, './AmishWhiteBread.png'))
# "Crack" Chicken
recipe_table_dat.append((
    366, 'Crack Chicken', 'https://www.dinneratthezoo.com/crack-chicken/', '6',
    '4 hours 5 minutes', '4.5', 1, None,
    "This crack chicken is creamy ranch flavored chicken that's cooked in the crock pot until tender. A super easy slow cooker recipe that only contains 3 ingredients.\n",
    1))
ingredients_table_dat.append([
    (4821, 366, None, None, None, '2 lbs boneless skinless chicken breasts'),
    (4822, 366, None, None, None, '1 ounce packet ranch seasoning'),
    (4823, 366, None, None, None, '16 ounces cream cheese cut into cubes'),
    (4824, 366, None, None, None,
     'cooked crumbled bacon and green onions for serving optional')
])
instructions_table_dat.append((
    361, 366,
'Place the chicken breasts, ranch seasoning and cream cheese in a slow cooker.\nCook on HIGH for 4 hours or LOW for 6-8 hours.\nShred the chicken with two forks. Stir until everything is thoroughly combined.\nServe, topped with bacon and green onions if desired.\n'
))
cats_table_dat.append([(366, 'American'), (366, 'Chicken'), (366, 'Main Dish'),
                       (366, 'Sandwich')])
images_table_dat.append((319, 366, './CrackChicken.png'))
# End of recipe information definition
# Template...
elements = [
    {'name': 'header', 'type': 'T', 'x1': 17.0, 'y1': 8.0, 'x2': 0, 'y2': 0,
     'font': 'Arial', 'size': 8, 'bold': 0, 'italic': 0, 'underline': 0,
     'foreground': 0, 'background': 0, 'align': 'L', 'text': '', 'priority': 2},
    {'name': 'title', 'type': 'T', 'x1': 17, 'y1': 26, 'x2': 0, 'y2': 0,
     'font': 'Arial', 'size': 22, 'bold': 1, 'italic': 1, 'underline': 0,
     'foreground': 0, 'background': 0, 'align': 'L', 'text': '', 'priority': 2},
    {'name': 'recipeimage', 'type': 'I', 'x1': 17, 'y1': 25, 'x2': 80, 'y2': 89,
     'font': None, 'size': 0, 'bold': 0, 'italic': 0, 'underline': 0,
     'foreground': 0, 'background': 0, 'align': 'L', 'text': 'image',
     'priority': 2},
    {'name': 'ingredientshead', 'type': 'T', 'x1': 17, 'y1': 220, 'x2': 0,
     'y2': 0, 'font': 'Arial', 'size': 12, 'bold': 1, 'italic': 1,
     'underline': 0, 'foreground': 0, 'background': 0, 'align': 'L',
     'text': 'Ingredients:', 'priority': 2},
    {'name': 'ingredientitems', 'type': 'W', 'x1': 17, 'y1': 115, 'x2': 90,
     'y2': 400, 'font': 'Arial', 'size': 11, 'bold': 0, 'italic': 0,
     'underline': 0, 'foreground': 0, 'background': 0, 'align': 'L',
     'text': '', 'priority': 2, 'multiline': True},
    {'name': 'instructionhead', 'type': 'T', 'x1': 17, 'y1': 360, 'x2': 0,
     'y2': 0, 'font': 'Arial', 'size': 12, 'bold': 1, 'italic': 1,
     'underline': 0, 'foreground': 0, 'background': 0, 'align': 'L',
     'text': 'Instructions:', 'priority': 2},
    {'name': 'instructions', 'type': 'W', 'x1': 17, 'y1': 185,
     'x2': 160,  # 200,
     'y2': 400,  # 400,
     'font': 'Arial', 'size': 11, 'bold': 0, 'italic': 0, 'underline': 0,
     'foreground': 0, 'background': 0, 'align': 'L', 'text': '',
     'priority': 1, 'multiline': True},
    {'name': 'description', 'type': 'T', 'x1': 85, 'y1': 28, 'x2': 200,
     'y2': 35, 'font': 'Arial', 'size': 12, 'bold': 1, 'italic': 1,
     'underline': 0, 'foreground': 0, 'background': 0, 'align': 'L',
     'text': '', 'priority': 2, 'multiline': True},
    {'name': 'source', 'type': 'T', 'x1': 85, 'y1': 60, 'x2': 200, 'y2': 66,
     'font': 'Arial', 'size': 10, 'bold': 1, 'italic': 1, 'underline': 0,
     'foreground': 0, 'background': 0, 'align': 'L', 'text': '',
     'priority': 2, 'multiline': True},
    {'name': 'servings', 'type': 'T', 'x1': 26, 'y1': 95, 'x2': 84, 'y2': 98,
     'font': 'Arial', 'size': 10, 'bold': 1, 'italic': 0, 'underline': 0,
     'foreground': 0, 'background': 0, 'align': 'L', 'text': '',
     'priority': 2, 'multiline': False},
    {'name': 'time', 'type': 'T', 'x1': 86, 'y1': 95, 'x2': 145, 'y2': 98,
     'font': 'Arial', 'size': 10, 'bold': 1, 'italic': 0, 'underline': 0,
     'foreground': 0, 'background': 0, 'align': 'L', 'text': '',
     'priority': 2, 'multiline': False},
    {'name': 'rating', 'type': 'T', 'x1': 148, 'y1': 95, 'x2': 200, 'y2': 98,
     'font': 'Arial', 'size': 10, 'bold': 1, 'italic': 0, 'underline': 0,
     'foreground': 0, 'background': 0, 'align': 'L', 'text': '',
     'priority': 2, 'multiline': False},
]


def create_pdf(which):
    recipetitle = recipe_table_dat[which][1]
    recipeid = recipe_table_dat[which][0]
    f = Template(format="Letter", elements=elements, title="Recipe Printout")
    print(f'Elements is a {type(elements)} structure')
    f.add_page()
    # we FILL some of the fields of the template with the information we want
    # note we access the elements treating the template instance as a "dict"
    f["header"] = f"Greg's cookbook - {recipetitle} - Recipe ID {recipeid}"
    f["title"] = recipetitle  # 'Mongolian Beef and Spring Onions'
    f["recipeimage"] = images_table_dat[which][2]
    f["description"] = recipe_table_dat[which][8]
    f["source"] = recipe_table_dat[which][2]
    f['servings'] = f'Servings: {recipe_table_dat[which][3]}'
    f['time'] = f'Total Time: {recipe_table_dat[which][4]}'
    f['rating'] = f'Rating: {recipe_table_dat[which][5]}'
    itms = len(ingredients_table_dat[which])
    ings = ''
    for itm in range(itms):
        ings = ings + ingredients_table_dat[which][itm][5] + "\n"
    # The 'ingredientshead' and 'instructionhead' elements already carry
    # their default text, so only the item lists need filling.
    f["ingredientitems"] = ings
    f["instructions"] = instructions_table_dat[which][2]
    # and now we render the page
    filename = f'./{recipetitle}.pdf'
    f.render(filename)
    print(f'\n\n{"=" * 45}')
    print(' PDF has been generated')
    print(' Please open the PDF manually')
    print('=' * 45)
    print('\n\n')


def menu():
    print('Please select a recipe...')
    print('1 - Mongolian Beef and Spring Onions')
    print('2 - Amish White Bread')
    print('3 - "Crack" Chicken')
    resp = input('Please enter 1, 2, 3 or 0 to quit --> ')
    if resp == "0":
        print('Exiting program!')
        sys.exit(0)
    elif resp in ("1", "2", "3"):
        return resp
    else:
        return -1


def mainroutine():
    loop = True
    while loop:
        resp = menu()
        if resp == -1:
            print('Invalid selection. Please try again')
        else:
            print(f'Requested recipe: {resp} \n')
            create_pdf(int(resp) - 1)


if __name__ == '__main__':
    mainroutine()
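`create_pdf()` fills template elements by name, dict-style. That name-to-element lookup can be sketched with plain dictionaries (the `FakeTemplate` class is a stand-in for `fpdf.Template`, not the real class):

```python
elements = [
    {'name': 'title', 'text': ''},
    {'name': 'servings', 'text': ''},
]


class FakeTemplate:
    """Minimal stand-in supporting template[name] = value, like fpdf.Template."""

    def __init__(self, elements):
        self.texts = {e['name']: e.get('text', '') for e in elements}

    def __setitem__(self, name, value):
        # Reject names that are not defined in the element list.
        if name not in self.texts:
            raise KeyError(name)
        self.texts[name] = value


f = FakeTemplate(elements)
f['title'] = 'Amish White Bread'
f['servings'] = 'Servings: 24 serving(s)'
print(f.texts['title'])  # Amish White Bread
```

A lookup like this is also why element names must match exactly between the template definition and the fill code.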
# =============================================================
# File: api/api.py
# Repo: mshen63/DevUp (MIT license)
# =============================================================
import os
from flask import Flask
from dotenv import load_dotenv
from flask_cors import CORS
from flask_mail import Mail
# Database and endpoints
from flask_migrate import Migrate
from models.models import db
from endpoints.exportEndpoints import exportEndpoints
load_dotenv()
app = Flask(__name__, static_folder="../build", static_url_path="/")
app.register_blueprint(exportEndpoints)
CORS(app)
app.config["MAIL_SERVER"] = "smtp.mailtrap.io"
app.config["MAIL_PORT"] = os.getenv("MAIL_PORT")
app.config["MAIL_USE_TLS"] = True
app.config["MAIL_USE_SSL"] = False
mail = Mail(app)
app.secret_key = "development key"
app.config[
    "SQLALCHEMY_DATABASE_URI"
] = "postgresql+psycopg2://{user}:{passwd}@{host}:{port}/{table}".format(
    user=os.getenv("POSTGRES_USER"),
    passwd=os.getenv("POSTGRES_PASSWORD"),
    host=os.getenv("POSTGRES_HOST"),
    port=5432,
    table=os.getenv("POSTGRES_DB"),
)
app.config["SQLALCHEMY_TRACK_MODIFICATIONS"] = False
db.init_app(app)
migrate = Migrate(app, db)
@app.errorhandler(404)
def not_found(e):
    return "Not found"


@app.route("/")
def index():
    return "Im here"
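The `SQLALCHEMY_DATABASE_URI` above is assembled with `str.format`. A self-contained sketch of that assembly using placeholder credentials in a plain dict (values are illustrative, not the app's real settings):

```python
# Stand-in for the environment variables read via os.getenv() above.
cfg = {"user": "dev", "passwd": "secret", "host": "localhost", "db": "devup"}

uri = "postgresql+psycopg2://{user}:{passwd}@{host}:{port}/{table}".format(
    user=cfg["user"],
    passwd=cfg["passwd"],
    host=cfg["host"],
    port=5432,
    table=cfg["db"],
)
print(uri)  # postgresql+psycopg2://dev:secret@localhost:5432/devup
```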
# =============================================================
# File: setup.py
# Repo: trsav/romodel (MIT license)
# =============================================================
from setuptools import setup, find_packages
setup(
    name='romodel',
    version='0.0.2',
    url='https://github.com/johwiebe/romodel.git',
    author='Johannes Wiebe',
    author_email='j.wiebe17@imperial.ac.uk',
    description='Pyomo robust optimization toolbox',
    packages=find_packages(),
    install_requires=['pyomo', 'numpy'],
)
# =============================================================
# File: stream/migrations/0001_init_models.py
# Repo: freejooo/vigilio (MIT license)
# =============================================================
# Generated by Django 3.1.5 on 2021-03-17 21:30
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion


class Migration(migrations.Migration):

    initial = True

    dependencies = [
        migrations.swappable_dependency(settings.AUTH_USER_MODEL),
    ]

    operations = [
        migrations.CreateModel(
            name="Movie",
            fields=[
                ("id", models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name="ID")),
                ("imdb_id", models.CharField(max_length=10)),
                ("title", models.CharField(blank=True, max_length=120)),
                ("description", models.TextField(blank=True, null=True)),
                ("moviedb_popularity", models.FloatField(blank=True, null=True)),
                ("poster_path_big", models.CharField(blank=True, max_length=255, null=True)),
                ("poster_path_small", models.CharField(blank=True, max_length=255, null=True)),
                ("backdrop_path_big", models.CharField(blank=True, max_length=255, null=True)),
                ("backdrop_path_small", models.CharField(blank=True, max_length=255, null=True)),
                ("duration", models.IntegerField(default=0)),
                ("media_info_raw", models.JSONField(blank=True, default=dict)),
                ("imdb_score", models.FloatField(default=0.0)),
                ("original_language", models.CharField(blank=True, max_length=2, null=True)),
                ("release_date", models.DateField(blank=True, null=True)),
                ("is_adult", models.BooleanField(default=False)),
                ("is_ready", models.BooleanField(default=False)),
                ("updated_at", models.DateTimeField(auto_now=True)),
                ("created_at", models.DateTimeField(auto_now_add=True)),
            ],
        ),
        migrations.CreateModel(
            name="MovieDBCategory",
            fields=[
                ("moviedb_id", models.IntegerField(primary_key=True, serialize=False)),
                ("name", models.CharField(max_length=20)),
                ("updated_at", models.DateTimeField(auto_now=True)),
                ("created_at", models.DateTimeField(auto_now_add=True)),
            ],
        ),
        migrations.CreateModel(
            name="MovieSubtitle",
            fields=[
                ("id", models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name="ID")),
                ("full_path", models.CharField(max_length=255)),
                ("relative_path", models.CharField(max_length=255)),
                ("file_name", models.CharField(max_length=255)),
                ("suffix", models.CharField(max_length=7)),
                ("updated_at", models.DateTimeField(auto_now=True)),
                ("created_at", models.DateTimeField(auto_now_add=True)),
            ],
        ),
        migrations.CreateModel(
            name="UserMovieHistory",
            fields=[
                ("id", models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name="ID")),
                ("current_second", models.IntegerField(default=0)),
                ("remaining_seconds", models.IntegerField(default=0)),
                ("is_watched", models.BooleanField(default=False)),
                ("created_at", models.DateTimeField(auto_now_add=True)),
                ("updated_at", models.DateTimeField(auto_now=True)),
                ("movie", models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name="history", to="stream.movie")),
                ("user", models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
            ],
        ),
        migrations.CreateModel(
            name="MyList",
            fields=[
                ("id", models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name="ID")),
                ("created_at", models.DateTimeField(auto_now_add=True)),
                ("movie", models.OneToOneField(on_delete=django.db.models.deletion.CASCADE, related_name="my_list", to="stream.movie")),
                ("user", models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
            ],
        ),
        migrations.CreateModel(
            name="MovieContent",
            fields=[
                ("id", models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name="ID")),
                ("torrent_source", models.TextField(null=True)),
                ("full_path", models.CharField(blank=True, max_length=255, null=True)),
                ("relative_path", models.CharField(blank=True, max_length=255, null=True)),
                ("main_folder", models.CharField(blank=True, max_length=255, null=True)),
                ("file_name", models.CharField(blank=True, max_length=255, null=True)),
                ("file_extension", models.CharField(blank=True, max_length=255, null=True)),
                ("source_file_name", models.CharField(blank=True, max_length=255, null=True)),
                ("source_file_extension", models.CharField(blank=True, max_length=255, null=True)),
                ("resolution_width", models.IntegerField(default=0)),
                ("resolution_height", models.IntegerField(default=0)),
                ("raw_info", models.TextField(blank=True, null=True)),
                ("is_ready", models.BooleanField(default=False)),
                ("updated_at", models.DateTimeField(auto_now=True)),
                ("created_at", models.DateTimeField(auto_now_add=True)),
                ("movie_subtitle", models.ManyToManyField(blank=True, to="stream.MovieSubtitle")),
            ],
        ),
        migrations.AddField(
            model_name="movie",
            name="movie_content",
            field=models.ManyToManyField(to="stream.MovieContent"),
        ),
        migrations.AddField(
            model_name="movie",
            name="moviedb_category",
            field=models.ManyToManyField(blank=True, to="stream.MovieDBCategory"),
        ),
    ]
61c26d6718f096ebcf35473494cb3a2610963d4a | 1,097 | py | Python | map/migrations/0001_initial.py | benjaoming/django-denmark.org | f708ce95720fd3452913a003ccf8179f869826c4 | [
"MIT"
] | null | null | null | map/migrations/0001_initial.py | benjaoming/django-denmark.org | f708ce95720fd3452913a003ccf8179f869826c4 | [
"MIT"
] | 1 | 2021-05-03T09:27:59.000Z | 2021-05-03T09:27:59.000Z | map/migrations/0001_initial.py | Nadiahansen15/django-denmark.org | d3be44764367a7c1059d7a0e896fbbc56ce1771f | [
"MIT"
] | null | null | null | # Generated by Django 2.0.3 on 2018-03-16 00:17
from django.conf import settings
import django.contrib.gis.db.models.fields
from django.db import migrations, models
import django.db.models.deletion


class Migration(migrations.Migration):

    initial = True

    dependencies = [
        migrations.swappable_dependency(settings.AUTH_USER_MODEL),
    ]

    operations = [
        migrations.CreateModel(
            name='MapEntry',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('location', django.contrib.gis.db.models.fields.PointField(srid=4326)),
                ('name', models.CharField(blank=True, help_text="Leave blank if it's yourself", max_length=256, null=True, verbose_name='Name of place')),
                ('owner', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
            ],
        ),
        migrations.AlterUniqueTogether(
            name='mapentry',
            unique_together={('owner', 'name')},
        ),
    ]
# =============================================================
# File: bot.py
# Repo: Shikib/twiliobot (MIT license)
# =============================================================
from telegram import Updater
from twilio.rest import TwilioRestClient
TELEGRAM_TOKEN = "INSERT_YOUR_TOKEN_HERE"
TWILIO_ACCOUNT_SID = "INSERT_ACCOUNT_SID_HERE"
TWILIO_AUTH_TOKEN = "INSERT_AUTH_TOKEN_HERE"
# initialize telegram updater and dispatcher
updater = Updater(token=TELEGRAM_TOKEN)
dispatcher = updater.dispatcher
# initialize twilio client
client = TwilioRestClient(TWILIO_ACCOUNT_SID, TWILIO_AUTH_TOKEN)


def call(bot, chat_id, cmd_text):
    # set source_number to be a number associated with the account
    source_number = "+18668675309"
    # cmd_text should be in format [Phone Number] [Message]
    cmd_words = cmd_text.split()
    phone_number = cmd_words[0]
    message_text = ' '.join(cmd_words[1:]) if len(cmd_words) > 1 else "Yes"
    # Use Twilio's API to call based on the cmd_text.
    # Note: 'from' is a Python keyword, so the twilio client expects 'from_'.
    call = client.calls.create(
        url="http://demo.twilio.com/docs/voice.xml",
        to=phone_number,
        from_=source_number)
    # Send a confirmation message to the issuer of the command
    msg = "Sent a call to the phone number with the default message."
    bot.sendMessage(chat_id=chat_id, text=msg)


def text(bot, chat_id, cmd_text):
    # set source_number to be a number associated with the account
    source_number = "+18668675309"
    # cmd_text should be in format [Phone Number] [Message]
    cmd_words = cmd_text.split()
    phone_number = cmd_words[0]
    message_text = ' '.join(cmd_words[1:]) if len(cmd_words) > 1 else "Yes"
    # Use Twilio's API to text based on the cmd_text.
    # SMS goes through messages.create; calls.create has no 'body' argument.
    message = client.messages.create(
        to=phone_number,
        from_=source_number,
        body=message_text)
    # Send a confirmation message to the issuer of the command
    msg = "Sent a text to the phone number with your message."
    bot.sendMessage(chat_id=chat_id, text=msg)


def main(bot, update):
    message_text = update.message.text
    chat_id = update.message.chat_id
    words = message_text.split()
    cmd = words[0]
    cmd_text = ' '.join(words[1:]) if len(words) > 1 else ''
    if cmd == '!call':
        call(bot, chat_id, cmd_text)
    elif cmd == '!text':
        text(bot, chat_id, cmd_text)
dispatcher.addTelegramMessageHandler(main)
updater.start_polling()
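Both handlers split `cmd_text` the same way. The parsing convention (`!cmd <number> [message]`) can be sketched and tested without the Telegram or Twilio clients (the `parse_command` helper is illustrative, not part of the bot):

```python
def parse_command(message_text):
    # "!text +15551234567 running late" -> ('!text', '+15551234567', 'running late')
    words = message_text.split()
    cmd = words[0]
    phone_number = words[1] if len(words) > 1 else None
    # Mirror the bot's default: the message falls back to "Yes".
    message = ' '.join(words[2:]) if len(words) > 2 else "Yes"
    return cmd, phone_number, message


print(parse_command('!text +15551234567 running late'))
# ('!text', '+15551234567', 'running late')
print(parse_command('!call +15551234567'))
# ('!call', '+15551234567', 'Yes')
```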
# =============================================================
# File: codes/1hop_getTwitterUserDetails.py
# Repo: hridaydutta123/ActiveProbing (BSD-2-Clause license)
# =============================================================
import tweepy
from tweepy import OAuthHandler
import sys
import ConfigParser
from pymongo import MongoClient
import datetime
from random import randint
import time
# Mongo Settings
# Connect to MongoDB
client = MongoClient("hpc.iiitd.edu.in", 27017, maxPoolSize=50)
# Connect to db bitcoindb
db=client.activeprobing
# This file creates user-language feature generation
if len(sys.argv) < 2:
    print """
    Command : python userLanguages.py <inp-file> <settings-file>
    (IN OUR CASE)
    python userLanguages.py ../../Dataset/username_userID.csv ../settings.txt
    """
    sys.exit(1)
settings_file = sys.argv[1]
# Read config settings
config = ConfigParser.ConfigParser()
config.readfp(open(settings_file))
# Random API key selection
randVal = randint(1,8)
CONSUMER_KEY = config.get('API Keys ' + str(randVal), 'API_KEY')
CONSUMER_SECRET = config.get('API Keys ' + str(randVal), 'API_SECRET')
ACCESS_KEY = config.get('API Keys ' + str(randVal), 'ACCESS_TOKEN')
ACCESS_SECRET = config.get('API Keys ' + str(randVal), 'ACCESS_TOKEN_SECRET')
auth = OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)
api = tweepy.API(auth)
# Get list of userIDs in mongo
allUserIDs = db.followers.distinct("id")
userList = [170995068]
for users in userList:
    followerids = []
    try:
        api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True, compression=True)
        # Get followerids of user
        c = tweepy.Cursor(api.followers_ids, user_id=users)
        for page in c.pages():
            followerids.append(page)
        print "ids=", followerids
    except tweepy.TweepError:
        print "tweepy.TweepError="
    except:
        e = sys.exc_info()[0]
        print "Error: %s" % e
    if not followerids:
        continue
    for ids in followerids[0]:
        print ids
        # Tweepy API get user details
        result = api.get_user(user_id=ids)
        if ids not in allUserIDs:
            # Insert into mongo
            result._json['lastModified'] = datetime.datetime.now()
            result._json['followerOf'] = users
            insertMongo = db.followers.insert_one(result._json)
        else:
            # Check for changes
            userMongoDetails = db.followers.find({'id': ids}, {'friends_count': 1, 'followers_count': 1})
            for values in userMongoDetails:
                existingFollowers = values['followers_count']
                existingFriends = values['friends_count']
            # New followers
            newFollowers = result._json['followers_count']
            newFriends = result._json['friends_count']
            # Check if followers and friends are same as exist in mongo
            if newFollowers != existingFollowers or newFriends != existingFriends:
                changeVals = {'timestamp': datetime.datetime.now(), 'followers_count': newFollowers, 'friends_count': newFriends}
                db.followers.update({'id': ids}, {'$push': {"changes": changeVals}}, False, True)
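The update branch above appends a change record to Mongo whenever the stored follower/friend counts differ from the freshly fetched ones. That diffing step can be isolated as a pure function (the `build_change_record` name is hypothetical, for illustration):

```python
import datetime


def build_change_record(existing, fetched):
    """Return a change record dict when counts differ, else None.

    `existing` and `fetched` are dicts with 'followers_count' and
    'friends_count' keys, mirroring the Mongo documents above.
    """
    if (fetched['followers_count'] != existing['followers_count']
            or fetched['friends_count'] != existing['friends_count']):
        return {
            'timestamp': datetime.datetime.now(),
            'followers_count': fetched['followers_count'],
            'friends_count': fetched['friends_count'],
        }
    return None
```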


# File: troupon/deals/tasks.py | repo: andela/troupon | license: MIT
from celery.decorators import periodic_task
from celery.task.schedules import crontab
from celery.utils.log import get_task_logger

from utils import scraper

logger = get_task_logger(__name__)


# A periodic task; with the minute field left at its default of '*', this
# crontab fires every minute during the 10:00 hour on Wednesdays
@periodic_task(run_every=(crontab(hour=10, day_of_week="Wednesday")))
def send_periodic_emails():
    logger.info("Start task")
    result = scraper.send_periodic_emails()
    logger.info("Task finished: result = %i" % result)
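Celery's `crontab(hour=10, day_of_week="Wednesday")` leaves the minute field at its default of `*`. Expressed as a plain-datetime predicate (a sketch for illustration only, not how Celery evaluates schedules internally):

```python
import datetime


def matches_schedule(dt):
    """True when dt falls inside the crontab window above:
    any minute of the 10:00 hour on a Wednesday."""
    # datetime.weekday(): Monday == 0, so Wednesday == 2
    return dt.weekday() == 2 and dt.hour == 10


# 2024-01-03 was a Wednesday: 10:30 matches, 11:00 does not.
print(matches_schedule(datetime.datetime(2024, 1, 3, 10, 30)))
```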


# File: unit_testing/temp_manage.py | repo: ddsprasad/pythondev | license: MIT
from memsql.common import database
import sys
from datetime import datetime

DATABASE = 'PREPDB'
HOST = '10.1.100.12'
PORT = '3306'
USER = 'root'
PASSWORD = 'In5pir0n@121'


def get_connection(db=DATABASE):
    """ Returns a new connection to the database. """
    return database.connect(host=HOST, port=PORT, user=USER, password=PASSWORD, database=db)


def run_temp_scheduler():
    with get_connection() as conn:
        x = conn.query("call tvf_temp_scheduler()")
        for i in x:
            if 'ERROR' in str(list(i.values())):
                print(datetime.now(), ': ', list(i.values()))
                sys.exit()
            else:
                print(datetime.now(), ': ', list(i.values()))
        print(datetime.now(), ": TEMP TABLES PERSISTED SUCCESSFULLY")


run_temp_scheduler()
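`run_temp_scheduler` stops at the first result row whose stringified values contain `'ERROR'`. That scan can be factored into a pure helper for testing away from the database (the `first_error_row` name is hypothetical):

```python
def first_error_row(rows):
    """Return the first row (a dict) whose values mention 'ERROR', else None.

    Mirrors the check applied to each query-result row above.
    """
    for row in rows:
        if 'ERROR' in str(list(row.values())):
            return row
    return None
```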


# File: tabcmd/parsers/remove_users_parser.py | repo: WillAyd/tabcmd | license: MIT
from .global_options import *
class RemoveUserParser:
    """
    Parser for the removeusers command
    """

    @staticmethod
    def remove_user_parser(manager, command):
        """Method to parse remove user arguments passed by the user"""
        remove_users_parser = manager.include(command)
        remove_users_parser.add_argument('groupname',
                                         help='The group to remove users from.')
        set_users_file_arg(remove_users_parser)
        set_completeness_options(remove_users_parser)
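The parser above registers a positional `groupname` argument on a sub-command. The same shape can be reproduced with the standard library's `argparse` (a standalone sketch, independent of tabcmd's `manager.include` machinery):

```python
import argparse

parser = argparse.ArgumentParser(prog='tabcmd')
subparsers = parser.add_subparsers(dest='command')

# Mirror of remove_user_parser: one positional group-name argument
removeusers = subparsers.add_parser('removeusers')
removeusers.add_argument('groupname',
                         help='The group to remove users from.')

args = parser.parse_args(['removeusers', 'Sales'])
print(args.command, args.groupname)  # → removeusers Sales
```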


# File: Lab6/passengers.py | repo: programiranje3/v2020 | license: MIT
#
# Create the FlightService enumeration that defines the following items (services):
# (free) snack, (free) refreshments, (free) meal, priority boarding,
# (free) onboard wifi, and an item for cases when services are not specified.
#

from enum import Enum


class FlightService(Enum):
    unspecified = 0
    snack = 1
    refreshments = 2
    meal = 3
    priority_boarding = 4
    onboard_wifi = 5
#
# Modify the Passenger class from Lab5 as follows:
#
# In addition to the *name* and *passport* attributes, the class should also have
# the following attributes:
# - *air_miles* - the number of air miles the passenger has accumulated
# - *checked_in* - a boolean indicator, true if the passenger has checked in
# - *services* - a class attribute defining the list of services available to all
# passengers of a particular class (category); available services for various categories
# of passengers should be defined as elements of the FlightService enumeration.
# For the Passenger class, services are unspecified as they depend on the passenger
# category (will be defined in the subclasses).
# Note that the attribute *is_business* (from Lab 5 version of the Passenger class) is dropped,
# as the passenger category will be handled through (Python) class hierarchy.
#
# The following methods of the Passenger class need to be revised:
#
# - constructor (__init__()) - it should receive 4 input arguments, one for
# each attribute; only the arguments for the passenger's name and passport
# have to be specified; the argument for *air_miles* has None as its default value,
# while False is the default value for *checked_in*.
#
# - a method that returns a string representation of a given Passenger object (__str__())
# so that it describes a passenger with the extended set of attributes.
#
# Finally, the following new methods should be added:
#
# - get and set methods (using appropriate decorators) for the *air_miles* attribute;
# the set method should assure that a non-negative integer value is assigned to this attribute
#
# - a class method (available_services()) that returns a list of strings describing services
# available to the passengers; this list is created based on the *services* class attribute.
#
from sys import stderr
class Passenger:

    services = [FlightService.unspecified]

    def __init__(self, name, passport, air_miles=None, checked_in=False):
        self.name = name
        self.passport = passport
        self.air_miles = air_miles
        self.checked_in = checked_in

    @property
    def passport(self):
        return self.__passport

    @passport.setter
    def passport(self, value):
        if (isinstance(value, str)) and (len(value) == 6) and (all([ch.isdigit() for ch in value])):
            self.__passport = value
        elif (isinstance(value, int)) and (len(str(value)) == 6):
            self.__passport = str(value)
        else:
            print("Error! Incorrect passport number! Setting passport to None")
            self.__passport = None

    @property
    def air_miles(self):
        return self.__air_miles

    @air_miles.setter
    def air_miles(self, value):
        self.__air_miles = None
        if (value is None) or (isinstance(value, int) and (value >= 0)):
            self.__air_miles = value
        elif isinstance(value, str):
            try:
                if int(value) >= 0:
                    self.__air_miles = int(value)
            except ValueError:
                stderr.write(f"Error! An incorrect value {value} passed for the air miles attribute\n")
        else:
            print(f"Error! The input value {value} cannot be used for setting the air miles attribute")

    def __str__(self):
        passenger_str = f"{self.name}, with passport number: " + (self.passport if self.passport else "unavailable")
        passenger_str += f", collected {self.air_miles} air miles" if self.air_miles else ""
        passenger_str += "; check-in completed" if self.checked_in else "; not checked in yet"
        return passenger_str

    def __eq__(self, other):
        if isinstance(other, Passenger):
            if self.passport and other.passport:
                return (self.name == other.name) and (self.passport == other.passport)
            else:
                print(f"Cannot determine equality since at least one of the passengers does not have passport number")
                return False
        else:
            print("The other object is not of the Passenger type")
            return False

    @classmethod
    def available_services(cls):
        return [s.name.replace('_', ' ') for s in cls.services]
#
# Create the EconomyPassenger class that extends the Passenger class and has:
#
# - method candidate_for_upgrade that checks if the passenger is a candidate for an upgrade
# and returns an appropriate boolean value; a passenger is a candidate for an upgrade if
# their current air miles exceed the given threshold (input parameter) and the passenger
# has checked in
#
# - changed value for the *services* class attribute so that it includes snack and refreshments
# (as elements of the FlightService enum)
#
# - overridden __str__ method so that it first prints "Economy class passenger" and then
# the available information about the passenger
#
class EconomyPassenger(Passenger):

    services = [FlightService.snack, FlightService.refreshments]

    def candidate_for_upgrade(self, min_air_miles):
        return self.checked_in and self.air_miles and (self.air_miles > min_air_miles)

    def __str__(self):
        return "Economy class passenger " + super().__str__()
#
# Create class BusinessPassenger that extends the Passenger class and has:
#
# - changed value for the services class attribute, so that it includes:
# priority boarding, meal, digital entertainment, and onboard wifi
#
# - overridden __str__ method so that it first prints "Business class passenger" and then
# the available information about the passengers
#
class BusinessPassenger(Passenger):

    services = [FlightService.meal, FlightService.onboard_wifi, FlightService.priority_boarding]

    def __str__(self):
        return "Business class passenger " + super().__str__()
if __name__ == '__main__':
    jim = EconomyPassenger("Jim Jonas", '123456', air_miles=1000)
    # jim.services = [FlightService.onboard_wifi, FlightService.meal]
    print(jim)
    print(jim.__dict__)
    print(jim.available_services())
    print()

    bob = EconomyPassenger("Bob Jones", '987654', checked_in=True)
    print(bob)
    bob.air_miles = '20200'
    print(bob.__dict__)
    print(bob.available_services())
    print()

    mike = BusinessPassenger("Mike Stone", '234567', air_miles=2000)
    print(mike)
    print(mike.__dict__)
    print(mike.available_services())
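The passport setter above accepts either a 6-character digit string or a 6-digit integer. The validation rule on its own, as a pure function (a hypothetical `is_valid_passport` helper that mirrors the setter's checks, including accepting ints whose string form has length 6):

```python
def is_valid_passport(value):
    """True when value is a 6-digit string or an int whose string form
    is 6 characters long, matching Passenger's passport setter."""
    if isinstance(value, str):
        return len(value) == 6 and all(ch.isdigit() for ch in value)
    if isinstance(value, int):
        return len(str(value)) == 6
    return False
```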


# File: src/documents/management/commands/generate_documents.py | repo: Talengi/phase | license: MIT
#!/usr/bin/python
# -*- coding: utf-8 -*-
from django.core.management.base import BaseCommand

from documents.tests.utils import generate_random_documents
from categories.models import Category


class Command(BaseCommand):
    args = '<number_of_documents> <category_id>'
    help = 'Creates a given number of random documents'

    def handle(self, *args, **options):
        nb_of_docs = int(args[0])
        category_id = int(args[1])
        category = Category.objects.get(pk=category_id)
        generate_random_documents(nb_of_docs, category)
        self.stdout.write(
            'Successfully generated {nb_of_docs} documents'.format(
                nb_of_docs=nb_of_docs,
            ).encode()
        )


# File: checkov/kubernetes/checks/resource/k8s/ImagePullPolicyAlways.py | repo: Devocean8-Official/checkov | license: Apache-2.0
import re
] | 1 | 2021-12-16T03:09:55.000Z | 2021-12-16T03:09:55.000Z | import re
from typing import Any, Dict
from checkov.common.models.consts import DOCKER_IMAGE_REGEX
from checkov.common.models.enums import CheckResult
from checkov.kubernetes.checks.resource.base_container_check import BaseK8sContainerCheck
class ImagePullPolicyAlways(BaseK8sContainerCheck):
def __init__(self) -> None:
"""
Image pull policy should be set to always to ensure you get the correct image and imagePullSecrets are correct
Default is 'IfNotPresent' unless image tag is omitted or :latest
https://kubernetes.io/docs/concepts/configuration/overview/#container-images
An admission controller could be used to enforce imagePullPolicy
"""
name = "Image Pull Policy should be Always"
id = "CKV_K8S_15"
# Location: container .imagePullPolicy
super().__init__(name=name, id=id)
def scan_container_conf(self, metadata: Dict[str, Any], conf: Dict[str, Any]) -> CheckResult:
self.evaluated_container_keys = ["image", "imagePullPolicy"]
if conf.get("image"):
# Remove the digest, if present
image_val = conf["image"]
if not isinstance(image_val, str) or image_val.strip() == "":
return CheckResult.UNKNOWN
if "@" in image_val:
image_val = image_val[0 : image_val.index("@")]
(image, tag) = re.findall(DOCKER_IMAGE_REGEX, image_val)[0]
if "imagePullPolicy" not in conf:
if tag == "latest" or tag == "":
# Default imagePullPolicy = Always
return CheckResult.PASSED
else:
# Default imagePullPolicy = IfNotPresent
return CheckResult.FAILED
else:
if conf["imagePullPolicy"] != "Always":
return CheckResult.FAILED
else:
return CheckResult.FAILED
return CheckResult.PASSED
check = ImagePullPolicyAlways()
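The check encodes a small decision table: an explicit `imagePullPolicy` must equal `Always`, while an omitted policy only defaults to `Always` when the tag is `latest` or empty. That table can be sketched as a standalone function (the names here are illustrative, not checkov's API):

```python
def pull_policy_verdict(tag, image_pull_policy=None):
    """Return 'PASSED' or 'FAILED' per the Kubernetes defaulting rules above."""
    if image_pull_policy is not None:
        # An explicit policy passes only when it is exactly "Always"
        return 'PASSED' if image_pull_policy == 'Always' else 'FAILED'
    # No explicit policy: Kubernetes defaults to Always only for :latest
    # or an omitted tag; any other tag defaults to IfNotPresent.
    return 'PASSED' if tag in ('latest', '') else 'FAILED'
```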


# File: jobs/migrations/0018_rename_user_id_recruiterpage_user.py | repo: digitaloxford/do-wagtail | license: MIT
# Generated by Django 3.2.6 on 2021-08-05 11:05
from django.db import migrations


class Migration(migrations.Migration):

    dependencies = [
        ('jobs', '0017_rename_user_recruiterpage_user_id'),
    ]

    operations = [
        migrations.RenameField(
            model_name='recruiterpage',
            old_name='user_id',
            new_name='user',
        ),
    ]


# File: pywxwork/contact/async.py | repo: renqiukai/pywxwork | license: MIT
from loguru import logger
from ..base import base


# NOTE: "async" became a reserved keyword in Python 3.7, so this class name
# only parses on older interpreters.
class async(base):
    def __init__(self, token) -> None:
        super().__init__(token)

    def syncuser(self, data):
        api_name = "batch/syncuser"
        response = self.request(
            api_name=api_name, method="post", json=data)
        logger.debug(response)
        return response

    def replaceuser(self, data):
        """Full overwrite of members
        https://open.work.weixin.qq.com/api/doc/90000/90135/90981
        """
        api_name = "batch/replaceuser"
        response = self.request(
            api_name=api_name, method="post", json=data)
        logger.debug(response)
        return response

    def replaceparty(self, data):
        """Full overwrite of departments
        https://open.work.weixin.qq.com/api/doc/90000/90135/90982
        """
        api_name = "batch/replaceparty"
        response = self.request(
            api_name=api_name, method="post", json=data)
        logger.debug(response)
        return response

    def getresult(self, data):
        """Get the result of an asynchronous task
        https://open.work.weixin.qq.com/api/doc/90000/90135/90983
        """
        api_name = "batch/getresult"
        response = self.request(
            api_name=api_name, method="get", params=data)
        logger.debug(response)
        return response


# File: app/places/migrations/0001_initial.py | repo: aykutgk/nearapp-web | license: MIT
# -*- coding: utf-8 -*-
# Generated by Django 1.10.6 on 2017-04-06 23:40
from __future__ import unicode_literals

from django.conf import settings
import django.contrib.gis.db.models.fields
from django.db import migrations, models
import django.db.models.deletion


class Migration(migrations.Migration):

    initial = True

    dependencies = [
        migrations.swappable_dependency(settings.AUTH_USER_MODEL),
    ]

    operations = [
        migrations.CreateModel(
            name='Place',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('name', models.CharField(max_length=255, verbose_name='Name of the business or place')),
                ('point', django.contrib.gis.db.models.fields.PointField(srid=4326, verbose_name='Latitude/Longitude on Map')),
                ('google_place_id', models.CharField(max_length=60, unique=True, verbose_name='Google place id')),
                ('created_at', models.DateTimeField(auto_now_add=True)),
                ('updated_at', models.DateTimeField(auto_now=True)),
                ('active', models.BooleanField(default=True, verbose_name='Is this business or place still active?')),
                ('google_map_url', models.URLField(blank=True, default=None, null=True, verbose_name='Google map url')),
                ('owner', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL, verbose_name='Owner')),
            ],
        ),
    ]


# File: preprocessing/importlib_read.py | repo: Kunal614/Machine-Learning | license: MIT
# importing lib
import pandas as pd
import numpy as np

# Take data
df = pd.DataFrame({"Name": ['Kunal', 'Mohit', 'Rohit'], "age": [np.nan, 23, 45], "sex": ['M', np.nan, 'M']})
# Check for null values
print(df.isnull().sum())
print(df.describe())
# Ignore the NaN rows
print(len(df.dropna()), df.dropna())
# For ignoring columns
print(len(df.dropna(axis=1)), df.dropna(axis=1))
print(len(df), df)
# df itself is unchanged: dropna returns a copy, so the null counts are the same
print(df.isnull().sum())


# File: isw2-master/src/app/ui/ajustarReloj.py | repo: marlanbar/academic-projects | license: MIT
import datetime
from .consola import Consola
from .uiscreen import UIScreen
from ..core.reloj import Reloj


class AjustarReloj(UIScreen):

    def __init__(self, unMain, unUsuario):
        super().__init__(unMain)
        self.usuario = unUsuario

    def run(self):
        self.consola.prnt("")
        self.consola.prnt(" Ahora: %s" % self.main.getReloj().getFechaYTiempo().strftime("%d/%m/%Y %H:%M:%S"))
        self.consola.prnt("===========================================================")
        self.consola.prnt("1. Ajustar fecha")
        self.consola.prnt("2. Ajustar tiempo")
        self.consola.prnt("3. Avanzar una cantidad de dias")
        self.consola.prnt("4. Correr procesos dependientes del tiempo")
        self.consola.prnt("-----------------------------------------------------------")
        self.consola.prnt("9. Volver a la pantalla anterior")
        self.consola.prnt("")
        opt = self.consola.askInput("Ingrese el número de la opcion que le interesa: ")
        if opt == "1":
            self.consola.prnt("-----------------------------------------------------------")
            self.consola.prnt("")
            inputDate = self.consola.askInput("Ingrese la fecha en formato DD/MM/YYYY: ")
            try:
                date = datetime.datetime.strptime(inputDate, "%d/%m/%Y")
                self.consola.clear()
                self.main.getReloj().resetFechaYTiempo(datetime.datetime.combine(date, self.main.getReloj().getTiempo()))
            except:
                self.consola.clear()
                self.consola.prnt("[ERROR] La fecha ingresada es inválida")
        elif opt == "2":
            self.consola.prnt("-----------------------------------------------------------")
            self.consola.prnt("")
            inputTime = self.consola.askInput("Ingrese el tiempo en formato HH:MM:SS: ")
            try:
                time = datetime.datetime.strptime(inputTime, "%H:%M:%S")
                self.consola.clear()
                self.main.getReloj().resetFechaYTiempo(datetime.datetime.combine(self.main.getReloj().getFecha(), time.time()))
            except:
                self.consola.clear()
                self.consola.prnt("[ERROR] El tiempo ingresado es inválido")
        elif opt == "3":
            self.consola.prnt("-----------------------------------------------------------")
            self.consola.prnt("")
            inputDays = self.consola.askInput("Ingrese la cantidad de dias: ")
            try:
                days = datetime.timedelta(days=int(inputDays))
            except:
                days = None
                self.consola.clear()
                self.consola.prnt("[ERROR] La cantidad ingresada es inválida")
            if days is not None:
                self.consola.clear()
                self.main.getReloj().resetFechaYTiempo(self.main.getReloj().getFechaYTiempo() + days)
        elif opt == "4":
            self.consola.clear()
            self.main.getReloj().notificar()
            self.consola.prnt("[MSG] Procesos ejecutados")
        elif opt == "9":
            from .homeScreen import HomeScreen
            self.consola.clear()
            self.main.setScreen(HomeScreen(self.main, self.usuario))
        else:
            self.consola.clear()


# File: signin/jd.py | repo: nujabse/simpleSignin | license: MIT
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import traceback

from selenium.webdriver import ChromeOptions

from signin.chrome import find_chrome_driver_path, JdSession
from signin.jd_job import jobs_all
from lib.log import logger
from lib.settings import PC_UA
from lib.settings import MOBILE_UA


class JDUser:
    def __init__(self, username, password, jobs_skip=None):
        self.headless = True
        self.logger = logger
        self.ua_pc = PC_UA
        self.ua = MOBILE_UA
        self.username = username
        self.password = password
        self.jobs_skip = jobs_skip or []


class JD:
    def __init__(self, username, password):
        self.user = JDUser(username, password)
        self.session = self.make_session()
        self.job_list = [job for job in jobs_all if job.__name__ not in self.user.jobs_skip]

    def sign(self):
        jobs_failed = []
        for job_class in self.job_list:
            job = job_class(self)
            # Use the mobile User-Agent by default, otherwise the PC User-Agent.
            # if job.is_mobile:
            #     job.session.headers.update({'User-Agent': self.user.ua})
            # else:
            #     job.session.headers.update({'User-Agent': self.user.ua_pc})
            try:
                job.run()
            except Exception as e:
                logger.error('# Job failed: ' + repr(e))
                traceback.print_exc()
            if not job.job_success:
                jobs_failed.append(job.job_name)
        print('=================================')
        print('= Jobs: {}; failed: {}'.format(len(self.job_list), len(jobs_failed)))
        if jobs_failed:
            print('= Failed jobs: {}'.format(jobs_failed))
        else:
            print('= All jobs succeeded ~')
        print('=================================')
        return len(jobs_failed) == 0

    def make_session(self) -> JdSession:
        chrome_path = find_chrome_driver_path()
        session = JdSession(webdriver_path=str(chrome_path),
                            browser='chrome',
                            webdriver_options=ChromeOptions())
        session.webdriver_options.add_argument('lang=zh_CN.UTF-8')
        if self.user.headless:
            session.webdriver_options.add_argument('headless')
        return session


if __name__ == '__main__':
    pass
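The skip-list filtering in `JD.__init__` can be exercised on its own. The job classes below are made-up placeholders standing in for the real `signin.jd_job` job classes:

```python
# Placeholder job classes; the real ones live in signin.jd_job.
class SignIn:
    pass


class Lottery:
    pass


class Shop:
    pass


jobs_all = [SignIn, Lottery, Shop]
jobs_skip = ["Lottery"]

# Same comprehension as JD.__init__: keep every job whose class name is not skipped.
job_list = [job for job in jobs_all if job.__name__ not in jobs_skip]
print([job.__name__ for job in job_list])  # ['SignIn', 'Shop']
```

Matching on `__name__` lets users skip jobs by plain strings in configuration without importing the job classes themselves.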
#!/usr/bin/env python
# Post-Exploitation/LaZagne/Linux/lazagne/softwares/wallet/gnome.py (FOGSEC/TID3xploits, MIT license)
import os

from lazagne.config.write_output import print_debug
from lazagne.config.moduleInfo import ModuleInfo


class Gnome(ModuleInfo):
    def __init__(self):
        options = {'command': '-g', 'action': 'store_true', 'dest': 'gnomeKeyring', 'help': 'Gnome Keyring'}
        ModuleInfo.__init__(self, 'gnomeKeyring', 'wallet', options)

    def run(self, software_name=None):
        if os.getuid() == 0:
            print_debug('WARNING', 'Do not run it with root privileges\n')
            return

        try:
            import gnomekeyring
            if len(gnomekeyring.list_keyring_names_sync()) > 0:
                pwdFound = []
                for keyring in gnomekeyring.list_keyring_names_sync():
                    for id in gnomekeyring.list_item_ids_sync(keyring):
                        values = {}
                        item = gnomekeyring.item_get_info_sync(keyring, id)
                        attr = gnomekeyring.item_get_attributes_sync(keyring, id)
                        if attr:
                            if item.get_display_name():
                                values["Item"] = item.get_display_name()
                            if 'server' in attr:
                                values["Host"] = attr['server']
                            if 'protocol' in attr:
                                values["Protocol"] = attr['protocol']
                            if 'unique' in attr:
                                values["Unique"] = attr['unique']
                            if 'domain' in attr:
                                values["Domain"] = attr['domain']
                            if 'origin_url' in attr:
                                values["URL"] = attr['origin_url']
                            if 'username_value' in attr:
                                values["Login"] = attr['username_value']
                            if 'user' in attr:
                                values["User"] = attr['user']
                            if item.get_secret():
                                values["Password"] = item.get_secret()
                            # write credentials into a text file
                            if len(values) != 0:
                                pwdFound.append(values)
                return pwdFound
            else:
                print_debug('WARNING', 'The Gnome Keyring wallet is empty')
        except Exception as e:
            print_debug('ERROR', 'An error occurs with the Gnome Keyring wallet: {0}'.format(e))
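The chain of per-attribute checks in `run()` can be expressed as a table-driven sketch. `LABELS` and `label_attributes` are illustrative helpers written for this example, not part of LaZagne:

```python
# Mapping of gnomekeyring attribute names to the output labels run() reports.
LABELS = {
    'server': 'Host',
    'protocol': 'Protocol',
    'unique': 'Unique',
    'domain': 'Domain',
    'origin_url': 'URL',
    'username_value': 'Login',
    'user': 'User',
}


def label_attributes(attr):
    """Keep only the known attributes, renamed to their display labels."""
    return {label: attr[key] for key, label in LABELS.items() if key in attr}


print(label_attributes({'server': 'example.com', 'user': 'alice', 'noise': 1}))
# {'Host': 'example.com', 'User': 'alice'}
```

A dict comprehension over a mapping table replaces eight near-identical `if 'key' in attr` branches and makes adding a new attribute a one-line change.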
# nydus/db/base.py (Elec/nydus, Apache-2.0 license)
"""
nydus.db.base
~~~~~~~~~~~~~

:copyright: (c) 2011-2012 DISQUS.
:license: Apache License 2.0, see LICENSE for more details.
"""

__all__ = ('LazyConnectionHandler', 'BaseCluster')

import collections

from nydus.db.map import DistributedContextManager
from nydus.db.routers import BaseRouter, routing_params
from nydus.utils import apply_defaults


def iter_hosts(hosts):
    # this can either be a dictionary (with the key acting as the numeric
    # index) or it can be a sorted list.
    if isinstance(hosts, collections.Mapping):
        return hosts.iteritems()
    return enumerate(hosts)


def create_connection(Connection, num, host_settings, defaults):
    # host_settings can be an iterable or a dictionary depending on the style
    # of connection (some connections share options and simply just need to
    # pass a single host, or a list of hosts)
    if isinstance(host_settings, collections.Mapping):
        return Connection(num, **apply_defaults(host_settings, defaults or {}))
    elif isinstance(host_settings, collections.Iterable):
        return Connection(num, *host_settings, **defaults or {})
    return Connection(num, host_settings, **defaults or {})


class BaseCluster(object):
    """
    Holds a cluster of connections.
    """

    class MaxRetriesExceededError(Exception):
        pass

    def __init__(self, hosts, backend, router=BaseRouter, max_connection_retries=20, defaults=None):
        self.hosts = dict(
            (conn_number, create_connection(backend, conn_number, host_settings, defaults))
            for conn_number, host_settings
            in iter_hosts(hosts)
        )
        self.max_connection_retries = max_connection_retries
        self.install_router(router)

    def __len__(self):
        return len(self.hosts)

    def __getitem__(self, name):
        return self.hosts[name]

    def __getattr__(self, name):
        return CallProxy(self, name)

    def __iter__(self):
        for name in self.hosts.iterkeys():
            yield name

    def install_router(self, router):
        self.router = router(self)

    def execute(self, path, args, kwargs):
        connections = self.__connections_for(path, args=args, kwargs=kwargs)

        results = []
        for conn in connections:
            for retry in xrange(self.max_connection_retries):
                func = conn
                for piece in path.split('.'):
                    func = getattr(func, piece)
                try:
                    results.append(func(*args, **kwargs))
                except tuple(conn.retryable_exceptions), e:
                    if not self.router.retryable:
                        raise e
                    elif retry == self.max_connection_retries - 1:
                        raise self.MaxRetriesExceededError(e)
                    else:
                        conn = self.__connections_for(path, retry_for=conn.num, args=args, kwargs=kwargs)[0]
                else:
                    break

        # If we only had one db to query, we simply return that result
        if len(results) == 1:
            return results[0]
        else:
            return results

    def disconnect(self):
        """Disconnects all connections in cluster"""
        for connection in self.hosts.itervalues():
            connection.disconnect()

    def get_conn(self, *args, **kwargs):
        """
        Returns a connection object from the router given ``args``.

        Useful in cases where a connection cannot be automatically determined
        during all steps of the process. An example of this would be
        Redis pipelines.
        """
        connections = self.__connections_for('get_conn', args=args, kwargs=kwargs)
        if len(connections) == 1:
            return connections[0]
        else:
            return connections

    def map(self, workers=None, **kwargs):
        return DistributedContextManager(self, workers, **kwargs)

    @routing_params
    def __connections_for(self, attr, args, kwargs, **fkwargs):
        return [self[n] for n in self.router.get_dbs(attr=attr, args=args, kwargs=kwargs, **fkwargs)]


class CallProxy(object):
    """
    Handles routing function calls to the proper connection.
    """

    def __init__(self, cluster, path):
        self.__cluster = cluster
        self.__path = path

    def __call__(self, *args, **kwargs):
        return self.__cluster.execute(self.__path, args, kwargs)

    def __getattr__(self, name):
        return CallProxy(self.__cluster, self.__path + '.' + name)


class LazyConnectionHandler(dict):
    """
    Maps clusters of connections within a dictionary.
    """

    def __init__(self, conf_callback):
        self.conf_callback = conf_callback
        self.conf_settings = {}
        self.__is_ready = False

    def __getitem__(self, key):
        if not self.is_ready():
            self.reload()
        return super(LazyConnectionHandler, self).__getitem__(key)

    def is_ready(self):
        return self.__is_ready

    def reload(self):
        from nydus.db import create_cluster

        for conn_alias, conn_settings in self.conf_callback().iteritems():
            self[conn_alias] = create_cluster(conn_settings)
        # Note: must match the name-mangled attribute set in __init__,
        # otherwise is_ready() never becomes True.
        self.__is_ready = True

    def disconnect(self):
        """Disconnects all connections in cluster"""
        for connection in self.itervalues():
            connection.disconnect()
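The `CallProxy` pattern above (build a dotted attribute path lazily in `__getattr__`, then route the final call in `__call__`) can be shown in isolation and in Python 3. `DummyCluster` is a stand-in for `BaseCluster` written for this sketch, not part of nydus:

```python
class DummyCluster:
    """Stand-in for BaseCluster: records the path and arguments it is asked to execute."""

    def execute(self, path, args, kwargs):
        return (path, args, kwargs)


class CallProxy:
    """Builds up a dotted attribute path lazily, then routes the final call."""

    def __init__(self, cluster, path):
        self._cluster = cluster
        self._path = path

    def __getattr__(self, name):
        # Each attribute access extends the path instead of resolving anything.
        return CallProxy(self._cluster, self._path + '.' + name)

    def __call__(self, *args, **kwargs):
        return self._cluster.execute(self._path, args, kwargs)


proxy = CallProxy(DummyCluster(), 'pipeline')
print(proxy.hset.execute('key', 1))  # ('pipeline.hset.execute', ('key', 1), {})
```

Because `__getattr__` is only consulted for attributes that do not exist, arbitrarily deep chains like `cluster.pipeline.hset(...)` accumulate into a single path string that the router can inspect before any connection is touched.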
# tickets_handler/models.py (ChanTerelLy/partnerweb3, MIT license)
import uuid
import re
import json
import datetime

from django.db import models

from partnerweb_parser.manager import NewDesign, Ticket
from partnerweb_parser import system, mail, manager
from partnerweb_parser.date_func import dmYHM_to_datetime
from tickets_handler.tasks import update_date_for_assigned


class Modify(models.Model):
    id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
    created_at = models.DateTimeField(auto_now_add=True)
    updated_at = models.DateTimeField(auto_now=True)

    class Meta:
        abstract = True


class Workers(models.Model):
    name = models.CharField(max_length=250, unique=True)
    number = models.CharField(max_length=50, unique=True)
    master = models.CharField(max_length=250)
    status = models.BooleanField()
    url = models.URLField()
    hiring_date = models.DateField(auto_now=True, blank=True)

    @classmethod
    @system.my_timer
    def replace_num_worker(cls, tickets):
        if tickets is not None:
            for ticket in tickets:
                try:
                    worker = cls.objects.get(number=ticket.operator)
                    ticket.name_operator = worker.name
                except:
                    continue
        return tickets

    def natural_key(self):
        return self.name

    @classmethod
    def update_workers(cls, auth):
        for worker in manager.Worker.get_workers(auth):
            operator = cls.objects.filter(number=worker.number)
            if not operator:
                cls(name=worker.name, number=worker.number, master=worker.master, status=worker.status,
                    url=worker.url).save()
                continue
            operator.update(name=worker.name, master=worker.master, status=worker.status, url=worker.url)

    def __str__(self):
        return self.name


class ChiefInstaller(models.Model):
    full_name = models.CharField(max_length=250, unique=True)
    number = models.CharField(max_length=50, unique=True)


class Installer(models.Model):
    full_name = models.CharField(max_length=250, unique=True)
    number = models.CharField(max_length=50, unique=True)

    @classmethod
    def parse_installers(cls, auth):
        login = NewDesign(auth['login'], auth['operator'], auth['password'])
        tickets = login.retrive_tickets()
        sw_tickets, sw_today = login.switched_tickets(tickets)
        for ticket in sw_tickets:
            info_data = login.ticket_info(ticket.id)
            name, phone = cls.find_installer_in_text(info_data.comments)
            try:
                installer, created = cls.objects.update_or_create(full_name=name)
                if not created:
                    installer.number = phone
                    installer.save()
                else:
                    cls(full_name=name, number=phone).save()
            except:
                continue

    @staticmethod
    def find_installer_in_text(comments):
        for comment in comments:
            data = re.search(r'Назначен Сервис - инженер (\w* \w* \w*), телефон (\d{11})', comment['text'])
            if data:
                name = data.group(1)
                phone = data.group(2)
                return name, phone
        return None, None

    def __str__(self):
        return f'{self.full_name} {self.number}'


class AdditionalTicket(models.Model):
    number = models.IntegerField()
    positive = models.BooleanField()  # add or remove ticket
    operator = models.ForeignKey(Workers, on_delete=models.CASCADE)
    datetime = models.DateTimeField(auto_now=True)

    @classmethod
    def add(cls, payload):
        payload = json.loads(payload.decode('utf-8'))
        cls(number=payload['number'], positive=payload['positive'],
            operator=Workers.objects.get(number=int(payload['operator']))).save()

    @classmethod
    def show(cls, operator):
        AdditionalTicket.objects.filter(operator=Workers.objects.get(phone=operator))

    @classmethod
    def clear_switched_tickets(cls, sw_tickets, all_tickets):
        for t in cls.objects.filter(datetime__month=datetime.datetime.now().month):
            if t.positive:
                for all_t in all_tickets:
                    if isinstance(all_t.ticket_paired_info, Ticket):
                        if all_t.ticket_paired_info.number == t.number or all_t.number == t.number:
                            sw_tickets.append(all_t)
                            continue
            elif not t.positive:
                for sw_t in sw_tickets:
                    try:
                        if t.number == sw_t.ticket_paired_info.number or sw_t.number == t.number:
                            sw_tickets.remove(sw_t)
                            break
                    except:
                        continue
        return sw_tickets

    def __str__(self):
        return str(self.number)


class Employer(models.Model):
    profile_name = models.TextField()
    name = models.TextField()
    email = models.EmailField()
    phone = models.CharField(max_length=10)
    position = models.TextField()
    operator = models.ForeignKey(Workers, on_delete=models.CASCADE)
    operator_password = models.TextField()
    supervisor_password = models.CharField(max_length=50)

    @classmethod
    def find_master(cls, phone):
        master = Workers.objects.get(number=phone).master if Workers.objects.get(number=phone) else None
        master_obj = cls.objects.get(name=master) if master else cls.objects.none()
        return master_obj

    def __str__(self):
        return self.name


class Reminder(models.Model):
    ticket_number = models.TextField()
    client_name = models.TextField()
    client_number = models.CharField(max_length=10)
    timer = models.DateTimeField()
    operator = models.ForeignKey(Workers, on_delete=models.CASCADE)
    link = models.URLField(null=True, blank=True)
    recipient = models.TextField()


class TicketSource(models.Model):
    ticket_number = models.CharField(max_length=20, unique=True)
    source = models.CharField(max_length=50)
    agent = models.ForeignKey(Workers, on_delete=models.CASCADE)
    date = models.DateField(auto_now=True, blank=True)

    @classmethod
    def add_source(cls, ticket_number, source, operator):
        try:
            data = cls.objects.get(ticket_number=ticket_number)
            data.source = source
            data.save()
        except:
            cls(ticket_number=ticket_number, source=source,
                agent=Workers.objects.get(number=operator)).save()

    @classmethod
    def find_source(cls, ticket_number):
        return cls.objects.get(ticket_number=ticket_number).source


class ACL(models.Model):
    code = models.CharField(max_length=50)
    date_end = models.DateField()


class AssignedTickets(models.Model):
    ticket_number = models.IntegerField()
    when_assigned = models.DateTimeField(null=True, blank=True)
    client_address = models.CharField(max_length=200)
    client_name = models.CharField(max_length=150)
    phones = models.CharField(max_length=150)
    assigned_date = models.DateTimeField()
    agent = models.ForeignKey(Workers, null=True, blank=True, on_delete=models.CASCADE)

    @classmethod
    def update(cls, ticket, *args, **kwargs):
        if kwargs.get('satelit_type'):
            ticket = ticket.ticket_paired_info
        db_ticket = cls.objects.filter(ticket_number=ticket.number).first()
        if db_ticket:
            if ticket.assigned_date:
                db_ticket.when_assigned = dmYHM_to_datetime(ticket.assigned_date)
            db_ticket.client_address = ticket.address
            db_ticket.client_name = ticket.name
            db_ticket.phones = ticket.phones
            db_ticket.assigned_date = dmYHM_to_datetime(ticket.call_time)
            db_ticket.agent = Workers.objects.filter(number=ticket.operator).first()
            return db_ticket.save()
        else:
            assigned_date = dmYHM_to_datetime(ticket.call_time)
            agent = Workers.objects.filter(number=ticket.operator).first()
            cls(ticket_number=ticket.number,
                when_assigned=dmYHM_to_datetime(ticket.assigned_date) if ticket.assigned_date else None,
                client_address=ticket.address,
                phones=ticket.phones,
                assigned_date=assigned_date,
                agent=agent,
                client_name=ticket.name).save()
            db_ticket = ticket.__dict__
            db_ticket['mail_to'] = Employer.find_master(ticket.operator).email
            db_ticket['link'] = ''
            mail.EmailSender().agent_assign_ticket(db_ticket)

    def update_date(self):
        update_date_for_assigned()


class AUP(models.Model):
    name = models.CharField(max_length=150)
    position = models.CharField(max_length=100)
    email = models.EmailField()
    phone = models.IntegerField()

    def __str__(self):
        return f' {self.name} - {self.position}'


class FirebaseNotification(Modify):
    ticket_number = models.IntegerField()
    today_count_notification = models.IntegerField(default=0)
    last_call_time = models.DateTimeField(null=True, blank=True)
    last_ticket_status = models.CharField(max_length=255)
    worker = models.ForeignKey(Workers, on_delete=models.CASCADE, null=True, blank=True)
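The installer-extraction regex in `Installer.find_installer_in_text` can be checked against a synthetic comment. The comment text below is invented for illustration; only the pattern comes from the model above:

```python
import re

PATTERN = r'Назначен Сервис - инженер (\w* \w* \w*), телефон (\d{11})'

# Synthetic comment in the shape the parser expects: a three-word name, then an 11-digit phone.
comment = 'Назначен Сервис - инженер Иванов Иван Иванович, телефон 79991234567'

match = re.search(PATTERN, comment)
print(match.group(1), match.group(2))  # Иванов Иван Иванович 79991234567
```

In Python 3, `\w` matches Cyrillic letters by default, so the name group works without an explicit `re.UNICODE` flag.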
# photos/views.py (Otybrian/personal-gallery, MIT license)
from django.shortcuts import render
from django.http import HttpResponse
import datetime as dt
from django.views import View

from photos.models import Image, category


# Create your views here.
def welcome(request):
    return render(request, 'welcome.html')


def display_page(request):
    image = Image.objects.all()
    return render(request, 'all.html', {'image': image})


def viewDetails(request):
    image = Image.objects.all()
    return render(request, 'details.html', {'image': image})


def my_category(request):
    categorys = category.objects.all()
    context = {
        'categorys': categorys,
    }
    return render(request, 'category.html', context)


def search_results(request):
    if 'category_name' in request.GET and request.GET["category_name"]:
        search_term = request.GET.get("category_name")
        searched_category_name = category.search_by_category(search_term)
        message = f"{search_term}"
        return render(request, 'search.html', {"message": message, "category_name": searched_category_name})
    else:
        message = "You haven't searched for any term"
        return render(request, 'search.html', {"message": message})
# Part_2_intermediate/mod_2/lesson_2/homework_1/homework.py (Mikma03/InfoShareacademy_Python_Courses, MIT license)
# Utwórz klasy do reprezentacji Produktu, Zamówienia, Jabłek i Ziemniaków.
# Stwórz po kilka obiektów typu jabłko i ziemniak i wypisz ich typ za pomocą funkcji wbudowanej type.
# Stwórz listę zawierającą 5 zamówień oraz słownik, w którym kluczami będą nazwy produktów
# a wartościami instancje klasy produkt.
class Product:
pass
class Order:
pass
class Apple:
pass
class Potato:
pass
if __name__ == '__main__':
green_apple = Apple()
red_apple = Apple()
fresh_apple = Apple()
print("green apple type:", type(green_apple))
print("red apple type:", type(red_apple))
print("fresh apple type:", type(fresh_apple))
old_potato = Potato()
young_potato = Potato()
print("old potato type:", type(old_potato))
print("young potato type:", type(young_potato))
# orders = [Order(), Order(), Order(), Order(), Order()]
orders = []
for _ in range(5):
orders.append(Order())
print(orders)
products = {
"Jabłko": Product(),
"Ziemniak": Product(),
"Marchew": Product(),
"Ciastka": Product(),
}
print(products)
# rj_gameplay/stp/coordinator.py (RoboJackets/robocup-software, Apache-2.0 license)
"""This module contains the implementation of the coordinator."""
from typing import Any, Dict, Optional, Type, List, Callable

import stp.play
import stp.rc as rc
import stp.role.assignment as assignment
import stp.situation
import stp.skill
from rj_msgs import msg

NUM_ROBOTS = 16


class Coordinator:
    """The coordinator is responsible for using SituationAnalyzer to select the best
    play to run, calling tick() on the play to get the list of skills, then ticking
    all of the resulting skills."""

    __slots__ = [
        "_play_selector",
        "_prev_situation",
        "_prev_play",
        "_prev_role_results",
        "_props",
        "_debug_callback",
    ]

    _play_selector: stp.situation.IPlaySelector
    _prev_situation: Optional[stp.situation.ISituation]
    _prev_play: Optional[stp.play.IPlay]
    _prev_role_results: assignment.FlatRoleResults
    _props: Dict[Type[stp.play.IPlay], Any]

    # TODO(1585): Properly handle type annotations for props instead of using Any.
    def __init__(
        self,
        play_selector: stp.situation.IPlaySelector,
        debug_callback: Callable[[stp.play.IPlay, List[stp.skill.ISkill]], None] = None,
    ):
        self._play_selector = play_selector
        self._props = {}
        self._prev_situation = None
        self._prev_play = None
        self._prev_role_results = {}
        self._debug_callback = debug_callback

    def tick(self, world_state: rc.WorldState) -> List[msg.RobotIntent]:
        """Performs 1 tick of the STP system:

        1. Selects the best play to run given the passed-in world state.
        2. Ticks the best play, collecting the list of skills to run.
        3. Ticks the list of skills.

        :param world_state: The current state of the world.
        """
        # Call situational analysis to see which play should be running.
        cur_situation, cur_play = self._play_selector.select(world_state)
        cur_play_type: Type[stp.play.IPlay] = type(cur_play)

        # Update the props.
        cur_play_props = cur_play.compute_props(self._props.get(cur_play_type, None))

        if isinstance(cur_play, type(self._prev_play)) and not self._prev_play.is_done(world_state):
            cur_play = self._prev_play
            # This should be checked here or in the play selector, so we can restart a play easily

        # Collect the list of skills from the play.
        new_role_results, skills = cur_play.tick(
            world_state, self._prev_role_results, cur_play_props
        )

        self._debug_callback(cur_play, [entry.skill for entry in skills])

        # Get the list of actions from the skills
        intents = [msg.RobotIntent() for i in range(NUM_ROBOTS)]
        intents_dict = {}
        for skill in skills:
            robot = new_role_results[skill][0].role.robot
            intents_dict.update(skill.skill.tick(robot, world_state, intents[robot.id]))

        # Get the list of robot intents from the actions
        for i in range(NUM_ROBOTS):
            if i in intents_dict.keys():
                intents[i] = intents_dict[i]
            else:
                intents[i].motion_command.empty_command = [msg.EmptyMotionCommand()]

        # Update _prev_*.
        self._prev_situation = cur_situation
        self._prev_play = cur_play
        self._prev_role_results = new_role_results
        self._props[cur_play_type] = cur_play_props

        return intents
# taskmanager/src/modules/tasks/domain/task.py (acostapazo/event-manager, MIT license)
from typing import Any, Dict
from meiga import Result, Error, Success
from petisco import AggregateRoot
from datetime import datetime

from taskmanager.src.modules.tasks.domain.description import Description
from taskmanager.src.modules.tasks.domain.events import TaskCreated
from taskmanager.src.modules.tasks.domain.task_id import TaskId
from taskmanager.src.modules.tasks.domain.title import Title


class Task(AggregateRoot):
    def __init__(
        self, task_id: TaskId, title: str, description: str, created_at: datetime
    ):
        self.task_id = task_id
        self.title = title
        self.description = description
        self.created_at = created_at
        super().__init__()

    @staticmethod
    def create(task_id: TaskId, title: Title, description: Description):
        user = Task(task_id, title, description, datetime.utcnow())
        user.record(TaskCreated(task_id))
        return user

    def to_result(self) -> Result[Any, Error]:
        return Success(self)

    def to_dict(self) -> Dict:
        return {
            "task_id": self.task_id,
            "title": self.title,
            "description": self.description,
            "created_at": self.created_at.isoformat(),
        }
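The `to_dict()` serialization above can be exercised without the petisco/meiga dependencies. `PlainTask` is a stripped-down stand-in written for this sketch, mirroring only the serialization logic, not the aggregate-root machinery:

```python
from datetime import datetime


class PlainTask:
    """Stand-in for Task: same to_dict() shape, no AggregateRoot base class."""

    def __init__(self, task_id, title, description, created_at):
        self.task_id = task_id
        self.title = title
        self.description = description
        self.created_at = created_at

    def to_dict(self):
        # Datetimes are not JSON-serializable, so they are flattened to ISO-8601 strings.
        return {
            "task_id": self.task_id,
            "title": self.title,
            "description": self.description,
            "created_at": self.created_at.isoformat(),
        }


task = PlainTask("id-1", "write docs", "draft the README", datetime(2020, 4, 20, 11, 20))
print(task.to_dict()["created_at"])  # 2020-04-20T11:20:00
```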
#!/usr/bin/env python2
# CSD_API/get_from_author.py (andrewtarzia/cage_collect, MIT license)
# -*- coding: utf-8 -*-
# Distributed under the terms of the MIT License.

"""
Script to search for and collect CIFs using a list of authors.

Author: Andrew Tarzia

Date Created: 1 Mar 2019

"""

import sys

import ccdc.search

import CSD_f


def write_entry(file, author, number, DOI, CSD, solvent, disorder):
    """
    Write entry to CIF DB file that contains information for a
    structure.

    """
    with open(file, 'a') as f:
        f.write(
            '{},{},{},{},{},{}\n'.format(author, number, DOI, CSD, solvent, disorder)
        )


def write_REFCODES(file, CSD):
    """
    Write REFCODE to file.

    """
    with open(file, 'a') as f:
        f.write(CSD + '\n')


def main():
    if not len(sys.argv) == 4:
        print """
Usage: get_from_author.py author_file cage_type output_prefix

    author_file (str) - file with list of authors

    cage_type (str) - organic if organic cages, metal if is_organometallic
        organic: sets is_organometallic is False
        metal: sets is_organometallic is True
        anything else: passes this test

    output_prefix (str) - prefix of .txt and .gcd file to output
"""
        sys.exit()
    else:
        author_file = sys.argv[1]
        cage_type = sys.argv[2]
        output_prefix = sys.argv[3]

    out_txt = output_prefix + '.txt'
    out_gcd = output_prefix + '.gcd'
    authors = []
    DOIs = []
    CSD = []
    for line in open(author_file, 'r'):
        authors.append(line.rstrip())

    with open(out_txt, 'w') as f:
        f.write('author,number,DOI,CSD,solvent,disorder\n')
    with open(out_gcd, 'w') as f:
        f.write('')

    count = 0
    count_no = 0
    idents = []
    for i, author in enumerate(authors):
        # break at '-----'
        if '-----' in author:
            break
        count_no += 1
        query = ccdc.search.TextNumericSearch()
        query.add_author(author)
        hits = query.search(database='CSD')
        print author + ': ' + str(len(hits))
        if len(hits) == 0:
            print(author)
        for hit in hits:
            author_list = [
                a.strip()
                for a in hit.entry.publication.authors.split(',')
            ]
            # skip polymeric structures
            if hit.entry.chemical_name is not None:
                if 'catena' in hit.entry.chemical_name:
                    continue
            if hit.entry.is_polymeric is True:
                continue
            # skip if structure is powder study
            if hit.entry.is_powder_study is True:
                continue
            if cage_type == 'organic':
                # skip structures that are NOT purely organic
                if hit.entry.is_organometallic is True:
                    continue
            elif cage_type == 'metal':
                # skip structures that are purely organic
                if hit.entry.is_organometallic is False:
                    continue
            else:
                # do not skip any
                pass
            # note structures with solvent
            solvent = 'n'
            if hit.entry.chemical_name is not None:
                if len(hit.entry.chemical_name.split(' ')) > 1:
                    solvent = 'y'
            # note structures with disorder
            disorder = 'n'
            if hit.entry.has_disorder is True:
                disorder = 'y'
            crystal = hit.crystal
            # write REFCODE to file
            if hit.identifier not in idents:
                idents.append(hit.identifier)
                write_entry(
                    out_txt,
                    author,
                    str(hit.entry.ccdc_number),
                    hit.entry.doi,
                    hit.identifier,
                    solvent,
                    disorder
                )
                write_REFCODES(out_gcd, hit.identifier)
                count += 1

    print str(count) + ' cifs found from ' + str(count_no) + ' authors'


if __name__ == "__main__":
    main()
# src/datastructure/stacks_ex1.py (Valeeswaran/tutorials, MIT license)
import stacks1
def is_match(ch1, ch2):
    match_dict = {
        ")": "(",
        "]": "[",
        "}": "{"
    }
    return match_dict[ch1] == ch2


def is_balanced(s):
    stack = stacks1.Stack()
    for ch in s:
        if ch == '(' or ch == '{' or ch == '[':
            stack.push(ch)
        if ch == ')' or ch == '}' or ch == ']':
            if stack.size() == 0:
                return False
            if not is_match(ch, stack.pop()):
                return False
    return stack.size() == 0
print(is_balanced("))((a+b}{"))
print(is_balanced("((a+b))"))
print(is_balanced("))"))
print(is_balanced("[a+b]*(x+2y)*{hh+kk}")) | 20.677419 | 47 | 0.447738 | 81 | 641 | 3.432099 | 0.37037 | 0.179856 | 0.086331 | 0.172662 | 0.323741 | 0.26259 | 0.176259 | 0.176259 | 0 | 0 | 0 | 0.020882 | 0.327613 | 641 | 31 | 48 | 20.677419 | 0.62413 | 0 | 0 | 0.086957 | 0 | 0 | 0.077882 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086957 | false | 0 | 0.043478 | 0 | 0.304348 | 0.173913 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
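The stacks_ex1.py file above imports a local `stacks1` module that is not included in this dump. A minimal sketch of a `Stack` class providing the `push`, `pop`, and `size` methods the example calls (the interface is inferred from usage, not taken from the actual module) could be:

```python
class Stack:
    """Minimal LIFO stack backed by a Python list."""

    def __init__(self):
        self._items = []

    def push(self, item):
        # Place an item on top of the stack.
        self._items.append(item)

    def pop(self):
        # Remove and return the top item; raises IndexError when empty.
        return self._items.pop()

    def size(self):
        # Number of items currently held.
        return len(self._items)
```

Saving a class like this as `stacks1.py` next to the example would make the `is_balanced` calls runnable.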
f6321b2979bddd05bf08689d0cd5e7faac1097ef | 584 | py | Python | image_pipeline/stages/__init__.py | MarcoGlauser/image_pipeline | a33b8a64773e8a0189155ae625d3060e15fb8def | [
"MIT"
] | null | null | null | image_pipeline/stages/__init__.py | MarcoGlauser/image_pipeline | a33b8a64773e8a0189155ae625d3060e15fb8def | [
"MIT"
] | null | null | null | image_pipeline/stages/__init__.py | MarcoGlauser/image_pipeline | a33b8a64773e8a0189155ae625d3060e15fb8def | [
"MIT"
] | null | null | null | from image_pipeline.stages.crop_stage import CropStage
from image_pipeline.stages.jpeg_lossless_compression import JPEGLossLessCompressionStage
from image_pipeline.stages.jpeg_lossy_compression import JPEGLossyCompressionStage
from image_pipeline.stages.resize_stage import ResizeStage
pre_stages = [
    ResizeStage,
    CropStage,
]  # List[_Stage]

compression_stages = [
    JPEGLossyCompressionStage,
    JPEGLossLessCompressionStage,
]  # List[_Stage]

post_stages = [
    # PlaceholderGenerationStage,
]  # List[_Stage]
stages = pre_stages + compression_stages + post_stages | 27.809524 | 88 | 0.813356 | 58 | 584 | 7.862069 | 0.344828 | 0.078947 | 0.149123 | 0.201754 | 0.118421 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126712 | 584 | 21 | 89 | 27.809524 | 0.894118 | 0.113014 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.266667 | 0 | 0.266667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f6414d22c3496a2fc287bc2eb9d1d63bb35b880e | 5,728 | py | Python | lab8/src/main.py | YaelBenShalom/Intro-to-AI | 37df1fc9316544338b8acfa5264316c4d5ce5915 | [
"MIT"
] | null | null | null | lab8/src/main.py | YaelBenShalom/Intro-to-AI | 37df1fc9316544338b8acfa5264316c4d5ce5915 | [
"MIT"
] | null | null | null | lab8/src/main.py | YaelBenShalom/Intro-to-AI | 37df1fc9316544338b8acfa5264316c4d5ce5915 | [
"MIT"
] | null | null | null | import common
import student_code
import array


class bcolors:
    RED = "\x1b[31m"
    GREEN = "\x1b[32m"
    NORMAL = "\x1b[0m"
def read_data(training_data, test_data1, gold_data1, filename):
    data = array.array('f')
    test = array.array('f')
    with open(filename, 'rb') as fd:
        data.fromfile(fd, common.constants.TRAINING_SIZE *
                      (common.constants.DATA_DIM + 1))
        test.fromfile(fd, common.constants.TEST_SIZE *
                      (common.constants.DATA_DIM + 1))
    for i in range(common.constants.TRAINING_SIZE):
        for j in range(common.constants.DATA_DIM + 1):
            training_data[i][j] = data[i * (common.constants.DATA_DIM + 1) + j]
    for i in range(common.constants.TEST_SIZE):
        for j in range(common.constants.DATA_DIM):
            test_data1[i][j] = test[i * (common.constants.DATA_DIM + 1) + j]
        test_data1[i][common.constants.DATA_DIM] = -1
        gold_data1[i] = test[i * (common.constants.DATA_DIM + 1) +
                             common.constants.DATA_DIM]
    f = open(filename.split('.dat')[0] + '.csv', 'w')
    for i in training_data:
        f.write(str(i[0]) + ',' + str(i[1]) + ',' + str(i[2]) + '\n')
    for i in range(len(test_data1)):
        f.write(str(test_data1[i][0]) + ',' + str(test_data1[i][1]) + ',' +
                str(gold_data1[i]) + '\n')
    f.close()
def read_data_csv(training_data, test_data1, gold_data1, filename):
    data = []
    test = []
    with open(filename, 'r') as fd:
        lines = fd.readlines()
    for i in range(common.constants.TRAINING_SIZE + common.constants.TEST_SIZE):
        if i < common.constants.TRAINING_SIZE:
            data += [float(j) for j in lines[i].strip().split(',')]
        else:
            test += [float(j) for j in lines[i].strip().split(',')]
    for i in range(common.constants.TRAINING_SIZE):
        for j in range(common.constants.DATA_DIM + 1):
            training_data[i][j] = data[i * (common.constants.DATA_DIM + 1) + j]
    for i in range(common.constants.TEST_SIZE):
        for j in range(common.constants.DATA_DIM):
            test_data1[i][j] = test[i * (common.constants.DATA_DIM + 1) + j]
        test_data1[i][common.constants.DATA_DIM] = -1
        gold_data1[i] = test[i * (common.constants.DATA_DIM + 1) +
                             common.constants.DATA_DIM]
def run_experiment1(filename):
    gold_data = [0 for x in range(common.constants.TEST_SIZE)]
    test_data = [[0, 0, 0] for x in range(common.constants.TEST_SIZE)]
    training_data = [[0, 0, 0] for x in range(common.constants.TRAINING_SIZE)]
    # generating data should be hidden from students!
    read_data_csv(training_data, test_data, gold_data, filename)
    # this is one of the two student functions
    # print(training_data)  # , test_data, gold_data
    student_code.part_one_classifier(training_data, test_data)
    # part 1 grading
    error = 0
    for i in range(common.constants.TEST_SIZE):
        if test_data[i][common.constants.DATA_DIM] != gold_data[i]:
            error += 1
    print("Incorrect classification is " + str(error) +
          " out of " + str(common.constants.TEST_SIZE))
    success = True
    if error <= float(common.constants.TEST_SIZE) * .05:
        print("(" + bcolors.GREEN + "SUCCESS" + bcolors.NORMAL + ")")
    else:
        success = False
        print("(" + bcolors.RED + "FAIL" + bcolors.NORMAL + ") maximum " +
              str(float(common.constants.TEST_SIZE) * .05))
    print()
    return success
def run_experiment2(filename):
    gold_data = [0 for x in range(common.constants.TEST_SIZE)]
    test_data = [[0, 0, 0] for x in range(common.constants.TEST_SIZE)]
    training_data = [[0, 0, 0] for x in range(common.constants.TRAINING_SIZE)]
    # generating data should be hidden from students!
    read_data_csv(training_data, test_data, gold_data, filename)
    # this is one of the two student functions
    student_code.part_two_classifier(training_data, test_data)
    # part 2 grading
    error = 0
    for i in range(common.constants.TEST_SIZE):
        if test_data[i][common.constants.DATA_DIM] != gold_data[i]:
            # print("error is data", test_data[i])
            error += 1
    print("Incorrect classification is " + str(error) +
          " out of " + str(common.constants.TEST_SIZE))
    success = True
    if error <= float(common.constants.TEST_SIZE) * .05:
        print("(" + bcolors.GREEN + "SUCCESS" + bcolors.NORMAL + ")")
    else:
        success = False
        print("(" + bcolors.RED + "FAIL" + bcolors.NORMAL + ") maximum " +
              str(float(common.constants.TEST_SIZE) * .05))
    print()
    return success
all_passed = True
filename1 = "../data1.csv"
print("Linear Classifier : Dataset 1")
all_passed = run_experiment1(filename1) and all_passed
filename2 = "../data2.csv"
print("Linear Classifier : Dataset 2")
all_passed = run_experiment1(filename2) and all_passed
filename3 = "../data3.csv"
print("Linear Classifier : Dataset 3")
all_passed = run_experiment1(filename3) and all_passed
filename4 = "../data4.csv"
print("Linear Classifier : Dataset 4")
all_passed = run_experiment1(filename4) and all_passed
filename5 = "../datar1.csv"
print("Accelerometer : Dataset 1")
all_passed = run_experiment2(filename5) and all_passed
filename6 = "../datar2.csv"
print("Accelerometer : Dataset 2")
all_passed = run_experiment2(filename6) and all_passed
filename7 = "../datar3.csv"
print("Accelerometer : Dataset 3")
all_passed = run_experiment2(filename7) and all_passed
if all_passed:
    exit(0)
else:
    exit(1)
| 36.025157 | 86 | 0.622905 | 770 | 5,728 | 4.474026 | 0.142857 | 0.17852 | 0.099274 | 0.114949 | 0.723657 | 0.656023 | 0.598258 | 0.598258 | 0.562845 | 0.546589 | 0 | 0.024706 | 0.24389 | 5,728 | 158 | 87 | 36.253165 | 0.770723 | 0.050454 | 0 | 0.508475 | 0 | 0 | 0.084424 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033898 | false | 0.076271 | 0.025424 | 0 | 0.110169 | 0.127119 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
f64680efdbc765125c942cd42b9be9b6e892d55e | 400 | py | Python | mediumwave/migrations/0011_transmitter_iso.py | soundelec/mwradio | b1a7f6d8c24469c13c5336e292cc316980fdea2d | [
"BSD-3-Clause"
] | null | null | null | mediumwave/migrations/0011_transmitter_iso.py | soundelec/mwradio | b1a7f6d8c24469c13c5336e292cc316980fdea2d | [
"BSD-3-Clause"
] | null | null | null | mediumwave/migrations/0011_transmitter_iso.py | soundelec/mwradio | b1a7f6d8c24469c13c5336e292cc316980fdea2d | [
"BSD-3-Clause"
] | null | null | null | # Generated by Django 2.1.2 on 2018-10-19 14:46
from django.db import migrations, models
class Migration(migrations.Migration):

    dependencies = [
        ('mediumwave', '0010_auto_20181017_1937'),
    ]

    operations = [
        migrations.AddField(
            model_name='transmitter',
            name='iso',
            field=models.CharField(blank=True, max_length=3),
        ),
    ]
| 21.052632 | 61 | 0.605 | 44 | 400 | 5.386364 | 0.840909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 0.28 | 400 | 18 | 62 | 22.222222 | 0.711806 | 0.1125 | 0 | 0 | 1 | 0 | 0.133144 | 0.065156 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f649717f99a11da8f0544eda17de8a09d6562732 | 2,395 | py | Python | ircConnection.py | mutexlox/NO-FIFTH-GLYPH | 4acb5b8e584093ae7faa9f03c86c0e8d686df8f9 | [
"MIT"
] | 1 | 2019-04-16T10:59:38.000Z | 2019-04-16T10:59:38.000Z | ircConnection.py | mutexlox/NO-FIFTH-GLYPH | 4acb5b8e584093ae7faa9f03c86c0e8d686df8f9 | [
"MIT"
] | null | null | null | ircConnection.py | mutexlox/NO-FIFTH-GLYPH | 4acb5b8e584093ae7faa9f03c86c0e8d686df8f9 | [
"MIT"
] | null | null | null | import socket
import select
import config


class IRCConnection:
    def __init__(self, serverName, port=6667):
        self.connection = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.connection.connect((serverName, port))
        self.connection.setblocking(0)

    def sendMessage(self, toSend):
        '''Helper function that sends the given
        string as an IRC message.
        '''
        if not toSend.startswith("PONG"):
            print(toSend)
        self.connection.send(str(toSend) + "\r\n")

    def receive(self):
        '''Receive 512 bytes from the connection (512 bytes == 1 message).
        '''
        # time out after a reasonable period of time so we revoice quickly
        ready = select.select([self.connection], [], [], 0.2)
        if ready[0]:
            return str(self.connection.recv(512))
        else:
            return None

    def setNick(self, nick):
        '''Sets the nick to the given string.
        '''
        self.sendMessage("NICK " + nick)

    def setUser(self, userName, hostName, serverName, realName):
        '''Set the user info as given.
        '''
        self.sendMessage("USER " + userName + " " +
                         hostName + " " +
                         serverName + " :" +
                         realName)

    def authenticate(self, password):
        '''Authenticate with NickServ with the given password.
        '''
        self.sendMessage("PRIVMSG NickServ IDENTIFY " + password)

    def setBot(self, nick):
        '''Tell the server that we're a bot. (Note: this is network-dependent!)
        '''
        config.botIdentify(self, botNick=nick)

    def reply(self, toSend, nick, chan, isPM):
        sendTo = nick if isPM else chan
        self.sendMessage("PRIVMSG " + sendTo + " :" + toSend)

    def quit(self, quitMessage):
        if quitMessage == "":
            self.sendMessage("QUIT")
        else:
            self.sendMessage("QUIT :" + quitMessage)

    def part(self, partMessage, chan):
        if chan != "":
            if partMessage == "":
                self.sendMessage("PART " + chan)
            else:
                self.sendMessage("PART " + chan + " :" + partMessage)

    def join(self, chan):
        self.sendMessage("JOIN " + chan)

    def close(self):
        '''Close the connection.
        '''
        self.connection.close()
| 31.103896 | 79 | 0.548643 | 246 | 2,395 | 5.317073 | 0.422764 | 0.103211 | 0.039755 | 0.051988 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011292 | 0.334447 | 2,395 | 76 | 80 | 31.513158 | 0.809285 | 0.026722 | 0 | 0.06383 | 0 | 0 | 0.044831 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.042553 | 0.06383 | null | null | 0.021277 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f65224814a8fb77dc67c3f651cede815e9b8f99b | 1,786 | py | Python | config.py | vsmelov/neural-music | 0cbe06080e2c257c323ffc93dc673bb1e0edf2c4 | [
"MIT"
] | 2 | 2020-03-06T19:36:17.000Z | 2022-03-09T07:29:08.000Z | config.py | vsmelov/neural-music | 0cbe06080e2c257c323ffc93dc673bb1e0edf2c4 | [
"MIT"
] | null | null | null | config.py | vsmelov/neural-music | 0cbe06080e2c257c323ffc93dc673bb1e0edf2c4 | [
"MIT"
] | null | null | null | # coding: utf-8
import os
base_dir = os.path.dirname(os.path.realpath(__file__))
music_dir = os.path.join(base_dir, 'music-3')
data_dir = os.path.join(base_dir, 'data-3')
weights_dir = os.path.join(data_dir, 'weights')
weights_file = os.path.join(weights_dir, 'weights')
if not os.path.exists(data_dir):
os.makedirs(data_dir)
if not os.path.exists(weights_dir):
os.makedirs(weights_dir)
# choice of rectangular, hanning, hamming, blackman, blackmanharris
window = 'hanning'
N = 2*512 # fft size, must be power of 2 and >= M
M = N-1 # window size
H = int(M / 2) # hop size
fs = 44100
NN = N / 2 + 1  # number of meaningful Fourier coefficients
# limits of human hearing
min_freq = 20
max_freq = 20000
# customary audible range
min_freq = 20
max_freq = 8000
min_freq = 0
max_freq = 4000
# minimum and maximum Fourier coefficient indices
# corresponding to the spectrum limits
min_k = int(min_freq * N / fs)
max_k = int(max_freq * N / fs)
NP = max_k - min_k
print('NP: {}'.format(NP))
zero_db = -80  # threshold of audibility
mem_sec = 3
mem_n = int(mem_sec * fs / H)
gen_time = 5*60
sequence_length = int(gen_time * fs / H)
print('sequence_length: {}'.format(sequence_length))
# DataSet Vectorization params
max_sentence_duration = 1.5 # seconds
max_sentence_len = int(fs * max_sentence_duration / H)
sentences_overlapping = 0.25
sentences_step = int(max_sentence_len * (1 - sentences_overlapping))
# how many frames to skip at the start of each evaluation,
# to warm up the network and let it
# pick up the melody before making predictions
skip_first = 0
# Sin Model
sin_t = -80
minSineDur = 0.001
maxnSines = 200
freqDevOffset = 50
freqDevSlope = 0.001
Ns = N  # size of FFT used in synthesis
| 25.15493 | 80 | 0.731243 | 289 | 1,786 | 4.342561 | 0.474048 | 0.038247 | 0.028685 | 0.031076 | 0.108367 | 0.031873 | 0 | 0 | 0 | 0 | 0 | 0.044415 | 0.167973 | 1,786 | 70 | 81 | 25.514286 | 0.800135 | 0.344345 | 0 | 0.046512 | 0 | 0 | 0.05126 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.023256 | null | null | 0.046512 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
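The `min_k`/`max_k` arithmetic in the config above maps a frequency in Hz to an FFT coefficient index via `k = f * N / fs`. A tiny standalone check of that mapping, with `N` and `fs` copied from the config (the helper name `freq_to_bin` is ours, not the project's), is:

```python
N = 2 * 512   # FFT size, as in the config
fs = 44100    # sample rate in Hz, as in the config


def freq_to_bin(freq_hz):
    # Highest FFT coefficient index at or below freq_hz.
    return int(freq_hz * N / fs)


# The config's final limits of 0 Hz and 4000 Hz give bins 0 and 92,
# so NP = 92 coefficients are kept.
print(freq_to_bin(0), freq_to_bin(4000))
```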
f653d187c14a15a2c7898dab067018d3da9f4ba1 | 1,184 | py | Python | tests/conftest.py | BookOps-CAT/ChangeSubject | bff86ad58685db169a99f9ad3b2df1179cffb5a3 | [
"MIT"
] | 1 | 2022-01-13T20:28:12.000Z | 2022-01-13T20:28:12.000Z | tests/conftest.py | BookOps-CAT/ChangeSubject | bff86ad58685db169a99f9ad3b2df1179cffb5a3 | [
"MIT"
] | null | null | null | tests/conftest.py | BookOps-CAT/ChangeSubject | bff86ad58685db169a99f9ad3b2df1179cffb5a3 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
import pytest
from pymarc import Field, Record
@pytest.fixture
def fake_subfields():
    return ["a", "subA", "x", "subX1", "x", "subX2", "z", "subZ."]


@pytest.fixture
def fake_subjects(fake_subfields):
    return [
        Field(tag="600", indicators=["1", "0"], subfields=fake_subfields),
        Field(tag="650", indicators=[" ", "7"], subfields=fake_subfields),
        Field(tag="650", indicators=[" ", "0"], subfields=fake_subfields),
        Field(tag="653", indicators=[" ", " "], subfields=fake_subfields),
    ]


@pytest.fixture
def stub_bib():
    record = Record()
    record.add_field(
        Field(tag="650", indicators=[" ", "0"], subfields=["a", "LCSH sub A."])
    )
    record.add_field(
        Field(
            tag="650",
            indicators=[" ", "7"],
            subfields=["a", "Unwanted FAST.", "2", "fast"],
        )
    )
    record.add_field(
        Field(
            tag="650",
            indicators=[" ", "7"],
            subfields=["a", "neutral FAST.", "2", "fast"],
        )
    )
    record.add_field(
        Field(tag="907", indicators=[" ", " "], subfields=["a", ".b111111111"])
    )
    return record
| 24.666667 | 79 | 0.525338 | 124 | 1,184 | 4.919355 | 0.330645 | 0.104918 | 0.090164 | 0.172131 | 0.485246 | 0.485246 | 0.414754 | 0.216393 | 0.15082 | 0.15082 | 0 | 0.051843 | 0.266892 | 1,184 | 47 | 80 | 25.191489 | 0.650922 | 0.017736 | 0 | 0.351351 | 0 | 0 | 0.108527 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.081081 | false | 0 | 0.054054 | 0.054054 | 0.216216 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f65e0aca86dc9604b4e7e6ec844b23ecb9545677 | 2,031 | py | Python | bookstore/lib/green/__init__.py | Inveracity/python-grpc-betterproto-quartz | a94a0ea8429d93f3275389a66d0c41ca9b4b616c | [
"MIT"
] | null | null | null | bookstore/lib/green/__init__.py | Inveracity/python-grpc-betterproto-quartz | a94a0ea8429d93f3275389a66d0c41ca9b4b616c | [
"MIT"
] | null | null | null | bookstore/lib/green/__init__.py | Inveracity/python-grpc-betterproto-quartz | a94a0ea8429d93f3275389a66d0c41ca9b4b616c | [
"MIT"
] | null | null | null | # Generated by the protocol buffer compiler. DO NOT EDIT!
# sources: green.proto
# plugin: python-betterproto
from dataclasses import dataclass
from typing import Dict, List
import betterproto
from betterproto.grpc.grpclib_server import ServiceBase
import grpclib
class GreenColors(betterproto.Enum):
MOLDY = 0
MODERN = 1
PASTEL = 2
@dataclass(eq=False, repr=False)
class GreenHexadecimal(betterproto.Message):
hexdict: Dict[int, str] = betterproto.map_field(
1, betterproto.TYPE_INT32, betterproto.TYPE_STRING
)
def __post_init__(self) -> None:
super().__post_init__()
@dataclass(eq=False, repr=False)
class GreenRequest(betterproto.Message):
"""Green request"""
pass
def __post_init__(self) -> None:
super().__post_init__()
@dataclass(eq=False, repr=False)
class GreenResponse(betterproto.Message):
"""Green response"""
hexadecimal: List["GreenHexadecimal"] = betterproto.message_field(1)
def __post_init__(self) -> None:
super().__post_init__()
class GreenStub(betterproto.ServiceStub):
"""Book recommendation call"""
async def green(self) -> "GreenResponse":
request = GreenRequest()
return await self._unary_unary("/green.Green/Green", request, GreenResponse)
class GreenBase(ServiceBase):
"""Book recommendation call"""
async def green(self) -> "GreenResponse":
raise grpclib.GRPCError(grpclib.const.Status.UNIMPLEMENTED)
async def __rpc_green(self, stream: grpclib.server.Stream) -> None:
request = await stream.recv_message()
request_kwargs = {}
response = await self.green(**request_kwargs)
await stream.send_message(response)
def __mapping__(self) -> Dict[str, grpclib.const.Handler]:
return {
"/green.Green/Green": grpclib.const.Handler(
self.__rpc_green,
grpclib.const.Cardinality.UNARY_UNARY,
GreenRequest,
GreenResponse,
),
}
| 25.074074 | 84 | 0.672575 | 217 | 2,031 | 6.0553 | 0.382488 | 0.03653 | 0.03653 | 0.045662 | 0.2207 | 0.2207 | 0.197869 | 0.197869 | 0.094368 | 0.094368 | 0 | 0.004414 | 0.219104 | 2,031 | 80 | 85 | 25.3875 | 0.824086 | 0.090596 | 0 | 0.234043 | 1 | 0 | 0.042763 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.085106 | false | 0.021277 | 0.106383 | 0.021277 | 0.468085 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f6606f4ec40d7bf3ddd3d04cdb7161f2cd3e479b | 8,504 | py | Python | finnhub/models/filing.py | gavinjay/finnhub-python | b5c409dafeda390d14a2b0618ae6f25ab8d76c5b | [
"Apache-2.0"
] | null | null | null | finnhub/models/filing.py | gavinjay/finnhub-python | b5c409dafeda390d14a2b0618ae6f25ab8d76c5b | [
"Apache-2.0"
] | null | null | null | finnhub/models/filing.py | gavinjay/finnhub-python | b5c409dafeda390d14a2b0618ae6f25ab8d76c5b | [
"Apache-2.0"
] | null | null | null | # coding: utf-8
"""
Finnhub API
No description provided (generated by Openapi Generator https://github.com/openapitools/openapi-generator) # noqa: E501
The version of the OpenAPI document: 1.0.0
Generated by: https://openapi-generator.tech
"""
import pprint
import re # noqa: F401
import six
from finnhub.configuration import Configuration
class Filing(object):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
"""
Attributes:
openapi_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
openapi_types = {
'access_number': 'str',
'symbol': 'str',
'cik': 'str',
'form': 'str',
'filed_date': 'datetime',
'accepted_date': 'datetime',
'report_url': 'str',
'filing_url': 'str'
}
attribute_map = {
'access_number': 'accessNumber',
'symbol': 'symbol',
'cik': 'cik',
'form': 'form',
'filed_date': 'filedDate',
'accepted_date': 'acceptedDate',
'report_url': 'reportUrl',
'filing_url': 'filingUrl'
}
def __init__(self, access_number=None, symbol=None, cik=None, form=None, filed_date=None, accepted_date=None, report_url=None, filing_url=None, local_vars_configuration=None): # noqa: E501
"""Filing - a model defined in OpenAPI""" # noqa: E501
if local_vars_configuration is None:
local_vars_configuration = Configuration()
self.local_vars_configuration = local_vars_configuration
self._access_number = None
self._symbol = None
self._cik = None
self._form = None
self._filed_date = None
self._accepted_date = None
self._report_url = None
self._filing_url = None
self.discriminator = None
if access_number is not None:
self.access_number = access_number
if symbol is not None:
self.symbol = symbol
if cik is not None:
self.cik = cik
if form is not None:
self.form = form
if filed_date is not None:
self.filed_date = filed_date
if accepted_date is not None:
self.accepted_date = accepted_date
if report_url is not None:
self.report_url = report_url
if filing_url is not None:
self.filing_url = filing_url
@property
def access_number(self):
"""Gets the access_number of this Filing. # noqa: E501
Access number. # noqa: E501
:return: The access_number of this Filing. # noqa: E501
:rtype: str
"""
return self._access_number
@access_number.setter
def access_number(self, access_number):
"""Sets the access_number of this Filing.
Access number. # noqa: E501
:param access_number: The access_number of this Filing. # noqa: E501
:type: str
"""
self._access_number = access_number
@property
def symbol(self):
"""Gets the symbol of this Filing. # noqa: E501
Symbol. # noqa: E501
:return: The symbol of this Filing. # noqa: E501
:rtype: str
"""
return self._symbol
@symbol.setter
def symbol(self, symbol):
"""Sets the symbol of this Filing.
Symbol. # noqa: E501
:param symbol: The symbol of this Filing. # noqa: E501
:type: str
"""
self._symbol = symbol
@property
def cik(self):
"""Gets the cik of this Filing. # noqa: E501
CIK. # noqa: E501
:return: The cik of this Filing. # noqa: E501
:rtype: str
"""
return self._cik
@cik.setter
def cik(self, cik):
"""Sets the cik of this Filing.
CIK. # noqa: E501
:param cik: The cik of this Filing. # noqa: E501
:type: str
"""
self._cik = cik
@property
def form(self):
"""Gets the form of this Filing. # noqa: E501
Form type. # noqa: E501
:return: The form of this Filing. # noqa: E501
:rtype: str
"""
return self._form
@form.setter
def form(self, form):
"""Sets the form of this Filing.
Form type. # noqa: E501
:param form: The form of this Filing. # noqa: E501
:type: str
"""
self._form = form
@property
def filed_date(self):
"""Gets the filed_date of this Filing. # noqa: E501
Filed date <code>%Y-%m-%d %H:%M:%S</code>. # noqa: E501
:return: The filed_date of this Filing. # noqa: E501
:rtype: datetime
"""
return self._filed_date
@filed_date.setter
def filed_date(self, filed_date):
"""Sets the filed_date of this Filing.
Filed date <code>%Y-%m-%d %H:%M:%S</code>. # noqa: E501
:param filed_date: The filed_date of this Filing. # noqa: E501
:type: datetime
"""
self._filed_date = filed_date
@property
def accepted_date(self):
"""Gets the accepted_date of this Filing. # noqa: E501
Accepted date <code>%Y-%m-%d %H:%M:%S</code>. # noqa: E501
:return: The accepted_date of this Filing. # noqa: E501
:rtype: datetime
"""
return self._accepted_date
@accepted_date.setter
def accepted_date(self, accepted_date):
"""Sets the accepted_date of this Filing.
Accepted date <code>%Y-%m-%d %H:%M:%S</code>. # noqa: E501
:param accepted_date: The accepted_date of this Filing. # noqa: E501
:type: datetime
"""
self._accepted_date = accepted_date
@property
def report_url(self):
"""Gets the report_url of this Filing. # noqa: E501
Report's URL. # noqa: E501
:return: The report_url of this Filing. # noqa: E501
:rtype: str
"""
return self._report_url
@report_url.setter
def report_url(self, report_url):
"""Sets the report_url of this Filing.
Report's URL. # noqa: E501
:param report_url: The report_url of this Filing. # noqa: E501
:type: str
"""
self._report_url = report_url
@property
def filing_url(self):
"""Gets the filing_url of this Filing. # noqa: E501
Filing's URL. # noqa: E501
:return: The filing_url of this Filing. # noqa: E501
:rtype: str
"""
return self._filing_url
@filing_url.setter
def filing_url(self, filing_url):
"""Sets the filing_url of this Filing.
Filing's URL. # noqa: E501
:param filing_url: The filing_url of this Filing. # noqa: E501
:type: str
"""
self._filing_url = filing_url
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.openapi_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, Filing):
return False
return self.to_dict() == other.to_dict()
def __ne__(self, other):
"""Returns true if both objects are not equal"""
if not isinstance(other, Filing):
return True
return self.to_dict() != other.to_dict()
| 26.658307 | 193 | 0.563617 | 1,044 | 8,504 | 4.43295 | 0.124521 | 0.07433 | 0.082973 | 0.082973 | 0.449006 | 0.342697 | 0.307692 | 0.252809 | 0.153414 | 0.088591 | 0 | 0.024554 | 0.334313 | 8,504 | 318 | 194 | 26.742138 | 0.792969 | 0.349718 | 0 | 0.089552 | 1 | 0 | 0.056089 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.164179 | false | 0 | 0.029851 | 0 | 0.328358 | 0.014925 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f66f8d8c41fecc47a0097bf4d34448a0a8795388 | 207 | py | Python | apps/web/api/urls.py | rubmu/QuestBot | 34c42a25e8400060a80e1a234ca7bc7a265769ae | [
"MIT"
] | 16 | 2017-12-01T04:45:50.000Z | 2021-08-28T17:25:11.000Z | apps/web/api/urls.py | daniilzinevich/QuestBot | 927b9f35aa4a4c88bf3825b965387d358c98f413 | [
"MIT"
] | 2 | 2018-08-18T16:42:49.000Z | 2021-02-25T21:16:06.000Z | apps/web/api/urls.py | daniilzinevich/QuestBot | 927b9f35aa4a4c88bf3825b965387d358c98f413 | [
"MIT"
] | 13 | 2018-01-26T14:35:51.000Z | 2020-05-10T20:48:36.000Z | from django.urls import path
from .views import ProcessWebHookAPIView
urlpatterns = [
    path(
        'webhook/<hook_id>/',
        ProcessWebHookAPIView.as_view(),
        name='hooks-handler'
    ),
]
| 17.25 | 40 | 0.642512 | 20 | 207 | 6.55 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.246377 | 207 | 11 | 41 | 18.818182 | 0.839744 | 0 | 0 | 0 | 0 | 0 | 0.149758 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.222222 | 0 | 0.222222 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f67b88939087fdc071c7e83e728a3ef5a850c797 | 319 | py | Python | sunny/publisher/ioloop.py | AnkitAggarwalPEC/HFT-Analytics-Luigi | b01a48d7384accaca1869a7728912019b73f7e9c | [
"MIT"
] | null | null | null | sunny/publisher/ioloop.py | AnkitAggarwalPEC/HFT-Analytics-Luigi | b01a48d7384accaca1869a7728912019b73f7e9c | [
"MIT"
] | null | null | null | sunny/publisher/ioloop.py | AnkitAggarwalPEC/HFT-Analytics-Luigi | b01a48d7384accaca1869a7728912019b73f7e9c | [
"MIT"
] | null | null | null |
import asyncio
class IOLoop(object):
    def __init__(self):
        self.event_loop = asyncio.get_event_loop()
        # state flags consulted by start()
        self._running = False
        self._stopped = False

    def start(self):
        if self._running:
            raise RuntimeError("IOLoop is already running")
        if self._stopped:
            self._stopped = False
            return
| 13.291667 | 59 | 0.583072 | 35 | 319 | 5.028571 | 0.628571 | 0.102273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.341693 | 319 | 23 | 60 | 13.869565 | 0.838095 | 0 | 0 | 0 | 0 | 0 | 0.080386 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0.090909 | 0.090909 | 0 | 0.454545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
9c8686e93160db726ab6ec2f606cf595263e00fa | 1,179 | py | Python | tests/local/test_json.py | abarisain/mopidy | 4026e16996280016ceadfe98bd48f574306e2cef | [
"Apache-2.0"
] | 2 | 2015-07-09T09:36:26.000Z | 2019-10-05T04:13:19.000Z | tests/local/test_json.py | abarisain/mopidy | 4026e16996280016ceadfe98bd48f574306e2cef | [
"Apache-2.0"
] | null | null | null | tests/local/test_json.py | abarisain/mopidy | 4026e16996280016ceadfe98bd48f574306e2cef | [
"Apache-2.0"
] | 1 | 2019-10-05T04:13:10.000Z | 2019-10-05T04:13:10.000Z | from __future__ import unicode_literals
import unittest
from mopidy.local import json
from mopidy.models import Ref
class BrowseCacheTest(unittest.TestCase):
    def setUp(self):
        self.uris = [b'local:track:foo/bar/song1',
                     b'local:track:foo/bar/song2',
                     b'local:track:foo/song3']
        self.cache = json._BrowseCache(self.uris)

    def test_lookup_root(self):
        expected = [Ref.directory(uri='local:directory:foo', name='foo')]
        self.assertEqual(expected, self.cache.lookup('local:directory'))

    def test_lookup_foo(self):
        expected = [Ref.directory(uri='local:directory:foo/bar', name='bar'),
                    Ref.track(uri=self.uris[2], name='song3')]
        self.assertEqual(expected, self.cache.lookup('local:directory:foo'))

    def test_lookup_foo_bar(self):
        expected = [Ref.track(uri=self.uris[0], name='song1'),
                    Ref.track(uri=self.uris[1], name='song2')]
        self.assertEqual(
            expected, self.cache.lookup('local:directory:foo/bar'))

    def test_lookup_foo_baz(self):
        self.assertEqual([], self.cache.lookup('local:directory:foo/baz'))
| 35.727273 | 77 | 0.641221 | 151 | 1,179 | 4.900662 | 0.258278 | 0.113514 | 0.114865 | 0.108108 | 0.504054 | 0.381081 | 0.337838 | 0.337838 | 0.148649 | 0 | 0 | 0.009719 | 0.214589 | 1,179 | 32 | 78 | 36.84375 | 0.789417 | 0 | 0 | 0 | 0 | 0 | 0.18151 | 0.118745 | 0 | 0 | 0 | 0 | 0.166667 | 1 | 0.208333 | false | 0 | 0.166667 | 0 | 0.416667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9c8b2b5dd22e630bf72cf5ab5e527547533efeed | 1,164 | py | Python | Python/Zoo/zoo.py | bill-neely/ITSE1311-1302-Spring2018 | db8cddb66d921a378bcbbd0423e9036ffc3b7a8a | [
"MIT"
] | null | null | null | Python/Zoo/zoo.py | bill-neely/ITSE1311-1302-Spring2018 | db8cddb66d921a378bcbbd0423e9036ffc3b7a8a | [
"MIT"
] | null | null | null | Python/Zoo/zoo.py | bill-neely/ITSE1311-1302-Spring2018 | db8cddb66d921a378bcbbd0423e9036ffc3b7a8a | [
"MIT"
] | null | null | null | class Zoo:
    def __init__(self, name, locations):
        self.name = name
        self.stillActive = True
        self.locations = locations
        self.currentLocation = self.locations[1]

    def changeLocation(self, direction):
        neighborID = self.currentLocation.neighbors[direction]
        self.currentLocation = self.locations[neighborID]

    def exit(self):
        if self.currentLocation.allowExit:
            self.stillActive = False


class Location:
    def __init__(self, id, name, animal, neighbors, allowExit):
        self.id = id
        self.name = name
        self.animal = animal
        self.neighbors = neighbors
        self.allowExit = allowExit


class Animal:
    def __init__(self, name, soundMade, foodEaten, shelterType):
        self.name = name
        self.soundMade = soundMade
        self.foodEaten = foodEaten
        self.shelterType = shelterType

    def speak(self):
        return 'The ' + self.name + ' sounds like: ' + self.soundMade

    def diet(self):
        return 'The ' + self.name + ' eats ' + self.foodEaten

    def shelter(self):
        return 'The ' + self.name + ' prefers: ' + self.shelterType
| 29.846154 | 69 | 0.628007 | 122 | 1,164 | 5.893443 | 0.278689 | 0.089013 | 0.045897 | 0.066759 | 0.087622 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001186 | 0.275773 | 1,164 | 38 | 70 | 30.631579 | 0.85172 | 0 | 0 | 0.096774 | 0 | 0 | 0.036082 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.258065 | false | 0 | 0 | 0.096774 | 0.451613 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9c8f281197fddfa1af410d31983e1c6f137a5fe2 | 2,312 | py | Python | neural-net/tensorflow/datasets.py | burntcustard/DeskBot-Zero | 5efb6bcca21eb66f88499b3671c8fda5f6517b11 | [
"MIT"
] | null | null | null | neural-net/tensorflow/datasets.py | burntcustard/DeskBot-Zero | 5efb6bcca21eb66f88499b3671c8fda5f6517b11 | [
"MIT"
] | null | null | null | neural-net/tensorflow/datasets.py | burntcustard/DeskBot-Zero | 5efb6bcca21eb66f88499b3671c8fda5f6517b11 | [
"MIT"
] | null | null | null |
# Useful tutorial on tensorflow.contrib.data:
# https://kratzert.github.io/2017/06/15/example-of-tensorflows-new-input-pipeline.html
import glob  # Used to generate image filename list

import tensorflow as tf


def input_parser(image_path, label):
    """
    Convert label to one_hot, read image from file & perform adjustments.
    See: https://www.tensorflow.org/api_guides/python/image
    and: https://www.tensorflow.org/programmers_guide/datasets
    """
    image = tf.read_file(image_path)
    image = tf.image.decode_image(image)
    image = tf.reshape(image, [128, 128, 3])
    # image = image[:, :, ::-1]  # BGR -> RGB conversion if needed?
    image = tf.image.rgb_to_grayscale(image)
    # image = tf.image.resize_images(image, [32, 32])
    image = tf.image.convert_image_dtype(image, tf.float32)
    return image, label


def create(training_validation_ratio=0.6, batch_size=4):
    """Creates and returns a dataset, & number of classification classes."""
    # Get image filenames, labels, and the number of classification classes
    filenames = glob.glob("../img/*.png")
    imgLabels = []
    for filename in filenames:
        imgLabels.append(int(filename.split("-d", 1)[1].split('-', 1)[0]))
        # Label is currently just the distance integer (int32).
        # Grouping e.g. having a classifier of "distance between 5 and 10cm"
        # will be done here if it's required.
    num_classes = max(imgLabels)

    numTrainImgs = int(len(filenames) * training_validation_ratio)
    training_paths = tf.constant(filenames[:numTrainImgs])
    training_labels = tf.constant(imgLabels[:numTrainImgs])
    evaluation_paths = tf.constant(filenames[numTrainImgs:])
    evaluation_labels = tf.constant(imgLabels[numTrainImgs:])

    # Create TensorFlow Dataset objects
    training_dataset = tf.data.Dataset.from_tensor_slices(
        (training_paths, training_labels)
    )
    training_dataset = training_dataset.map(input_parser)
    training_dataset = training_dataset.batch(batch_size)

    evaluation_dataset = tf.data.Dataset.from_tensor_slices(
        (evaluation_paths, evaluation_labels)
    )
    evaluation_dataset = evaluation_dataset.map(input_parser)
    evaluation_dataset = evaluation_dataset.batch(batch_size)

    return training_dataset, evaluation_dataset, num_classes
| 39.186441 | 86 | 0.719291 | 296 | 2,312 | 5.456081 | 0.439189 | 0.030341 | 0.029721 | 0.026006 | 0.134985 | 0.044582 | 0.044582 | 0 | 0 | 0 | 0 | 0.017764 | 0.172145 | 2,312 | 58 | 87 | 39.862069 | 0.826019 | 0.339965 | 0 | 0 | 1 | 0 | 0.010094 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.064516 | false | 0 | 0.064516 | 0 | 0.193548 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
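The label-extraction line in datasets.py above (`filename.split("-d", 1)[1].split('-', 1)[0]`) assumes filenames embed a distance integer between a `-d` marker and the next `-`. A standalone check of that parsing, with made-up filenames:

```python
def label_from_filename(filename):
    # Pull the integer between '-d' and the following '-', using the same
    # split chain as datasets.py; e.g. 'capture-d7-run1.png' -> 7.
    return int(filename.split("-d", 1)[1].split('-', 1)[0])


print(label_from_filename("capture-d7-run1.png"))  # 7
print(label_from_filename("shot-d42-x.png"))       # 42
```

Note the fragility this exposes: any other `-d` earlier in the path would break the parse, which is why the comment in `create()` flags the labeling scheme as provisional.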
9c90a31f9f59a4bbc673d9e0cd9b46fab6dc56ae | 2,999 | py | Python | uis/widget_formatter.py | AlberLC/qt-app | 96fe79293de7f570a505845ed4ecd73f9f55572d | [
"MIT"
] | null | null | null | uis/widget_formatter.py | AlberLC/qt-app | 96fe79293de7f570a505845ed4ecd73f9f55572d | [
"MIT"
] | null | null | null | uis/widget_formatter.py | AlberLC/qt-app | 96fe79293de7f570a505845ed4ecd73f9f55572d | [
"MIT"
] | null | null | null | import os
import subprocess
import pathlib


def reemplazar(string):
    return string.replace('self.', 'self.w.').replace('Form"', 'self.w.centralWidget"').replace('Form.', 'self.w.centralWidget.').replace('Form)', 'self.w.centralWidget)').replace('"', "'")


try:
    url_archivo = input('Archivo: ').strip().strip('"').strip("'")
    nombre, extension = os.path.splitext(os.path.basename(url_archivo))
    nombre_clase = nombre.title().replace('_', '')

    if extension == '.ui':
        # Generate the Python code from the .ui file
        url_archivo_py = pathlib.Path(url_archivo).with_suffix('.py')
        subprocess.Popen(f'pyside2-uic {url_archivo} -o {url_archivo_py}', shell=True, stdout=subprocess.PIPE).wait()

        # ---------- Start formatting the .py file ----------
        lineas_resultado = []

        # Read the file's lines
        with open(url_archivo_py) as file:
            lineas = file.readlines()

        i = 0
        # Skip the comments at the top
        while 'from PySide2 import' not in lineas[i]:
            i += 1

        # Start with the class
        while 'class' not in lineas[i]:
            lineas_resultado.append(reemplazar(lineas[i]))
            i += 1

        lineas_resultado.append("\n")
        lineas_resultado.append(f"class {nombre_clase}:\n")
        lineas_resultado.append(f"    def __init__(self, window):\n")
        lineas_resultado.append(f"        self.w = window\n")
        lineas_resultado.append("\n")
        lineas_resultado.append(f"        self.setup_gui()\n")
        lineas_resultado.append("\n")
        lineas_resultado.append(f"    def setup_gui(self):\n")
        lineas_resultado.append(f"        self.w.centralWidget = QtWidgets.QWidget(self.w)\n")
        lineas_resultado.append(f"        self.w.centralWidget.setObjectName('centralWidget')\n")

        # Skip the next 3 lines (def, Form.set, Form.resize) and advance to the following one
        i += 4

        # Copy until a blank line
        while lineas[i] != '\n':
            lineas_resultado.append(reemplazar(lineas[i]))
            i += 1

        # Add the widget to the view
        lineas_resultado.append('        self.w.setCentralWidget(self.w.centralWidget)\n')

        # Copy the blank line
        lineas_resultado.append(reemplazar(lineas[i]))

        # Skip ahead to the setText() calls
        while 'Form.' not in lineas[i]:
            i += 1
        # Skip the Form.setWindowTitle() line
        i += 1

        # Transform the setText() lines
        for linea in lineas[i:]:
            lineas_resultado.append(reemplazar(
                linea.replace('QtWidgets.QApplication.translate("Form", ', '').replace(', None, -1)', '')))

        lineas_resultado.append('    def connect_signals(self, controller):\n')
        lineas_resultado.append('        pass\n')

        # Overwrite the .py file
        with open(url_archivo_py, 'w', encoding='utf-8') as file:
            file.writelines(lineas_resultado)
except Exception as e:
    os.remove(url_archivo_py)
    print(f'{e.__class__.__name__}: {e}')

input('Presione una tecla para continuar...')
| 35.282353 | 189 | 0.64955 | 384 | 2,999 | 4.9375 | 0.333333 | 0.150316 | 0.188291 | 0.127637 | 0.325949 | 0.298523 | 0.246308 | 0.205696 | 0.097046 | 0.049578 | 0 | 0.005017 | 0.202401 | 2,999 | 84 | 190 | 35.702381 | 0.787625 | 0.168723 | 0 | 0.215686 | 0 | 0 | 0.273497 | 0.110528 | 0 | 0 | 0 | 0 | 0 | 1 | 0.019608 | false | 0.019608 | 0.078431 | 0.019608 | 0.117647 | 0.019608 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
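The `reemplazar` helper in widget_formatter.py above rewrites pyside2-uic output so every widget hangs off `self.w` and `Form` references point at the wrapped central widget. Its effect on a sample generated line (the input lines are invented for illustration):

```python
def reemplazar(string):
    # Same chained replacements as widget_formatter.py: route 'self.' and
    # 'Form' references through the wrapped window's central widget,
    # then normalize double quotes to single quotes.
    return (string.replace('self.', 'self.w.')
                  .replace('Form"', 'self.w.centralWidget"')
                  .replace('Form.', 'self.w.centralWidget.')
                  .replace('Form)', 'self.w.centralWidget)')
                  .replace('"', "'"))


print(reemplazar('self.button = QtWidgets.QPushButton(Form)\n'))
# self.w.button = QtWidgets.QPushButton(self.w.centralWidget)
print(reemplazar('Form.setWindowTitle("x")'))
# self.w.centralWidget.setWindowTitle('x')
```

The replacement order matters: `'self.'` must be handled before `'Form)'`, otherwise the `self.` inside the substituted `self.w.centralWidget)` text would be rewritten a second time.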
9c90c9812ecdd62e381c8bf6cedd2186659f9529 | 2,450 | py | Python | applications/HDF5Application/python_scripts/single_mesh_xdmf_output_process.py | AndreaVoltan/MyKratos7.0 | e977752722e8ef1b606f25618c4bf8fd04c434cc | [
"BSD-4-Clause"
] | 2 | 2020-04-30T19:13:08.000Z | 2021-04-14T19:40:47.000Z | applications/HDF5Application/python_scripts/single_mesh_xdmf_output_process.py | AndreaVoltan/MyKratos7.0 | e977752722e8ef1b606f25618c4bf8fd04c434cc | [
"BSD-4-Clause"
] | 1 | 2020-04-30T19:19:09.000Z | 2020-05-02T14:22:36.000Z | applications/HDF5Application/python_scripts/single_mesh_xdmf_output_process.py | AndreaVoltan/MyKratos7.0 | e977752722e8ef1b606f25618c4bf8fd04c434cc | [
"BSD-4-Clause"
] | 1 | 2020-06-12T08:51:24.000Z | 2020-06-12T08:51:24.000Z | import KratosMultiphysics as KM
import KratosMultiphysics.HDF5Application.temporal_output_process_factory as output_factory
import KratosMultiphysics.HDF5Application.file_utilities as file_utils


def Factory(settings, Model):
    """Return a process for writing simulation results for a single mesh to HDF5.

    It also creates the xdmf-file on the fly.
    """
    if not isinstance(settings, KM.Parameters):
        raise Exception("expected input shall be a Parameters object, encapsulating a json string")

    params = settings["Parameters"]

    # setting default "file_settings"
    if not params.Has("file_settings"):
        file_params = KM.Parameters(r'''{
            "file_access_mode"        : "truncate",
            "save_h5_files_in_folder" : true
        }''')
        params.AddValue("file_settings", file_params)
    else:
        if not params["file_settings"].Has("file_access_mode"):
            params["file_settings"].AddEmptyValue("file_access_mode").SetString("truncate")
        if not params["file_settings"].Has("save_h5_files_in_folder"):
            params["file_settings"].AddEmptyValue("save_h5_files_in_folder").SetBool(True)

    model_part_name = params["model_part_name"].GetString()  # name of modelpart must be specified!

    # todo(msandre): collapse older partitioned scripts to their serial counterparts like this
    if Model[model_part_name].GetCommunicator().TotalProcesses() > 1:
        factory_helper = output_factory.PartitionedTemporalOutputFactoryHelper()
    else:
        factory_helper = output_factory.TemporalOutputFactoryHelper()

    (temporal_output_process, _, list_of_results_output) = factory_helper.Execute(params, Model)

    for results_output in list_of_results_output:
        temporal_output_process.AddOutput(results_output)

    temporal_output_process._initial_output.AddOutput(file_utils.DeleteOldH5Files())

    # in case the h5py-module is not installed (e.g. on clusters) we don't want it to crash the simulation!
    # => in such a case the xdmf can be created manually afterwards locally
    try:
        from KratosMultiphysics.HDF5Application.xdmf_io import XdmfOutput
        temporal_output_process.AddOutput(XdmfOutput())  # xdmf should be the last in the list
    except ImportError:
        warn_msg = "XDMF-Writing is not available,\nOnly HDF5 files are written"
        KM.Logger.PrintWarning("SingleMeshXdmfOutputProcess", warn_msg)

    return temporal_output_process
| 50 | 107 | 0.739592 | 300 | 2,450 | 5.81 | 0.433333 | 0.048193 | 0.072289 | 0.022375 | 0.101549 | 0.029834 | 0 | 0 | 0 | 0 | 0 | 0.005464 | 0.178367 | 2,450 | 48 | 108 | 51.041667 | 0.860407 | 0.196327 | 0 | 0.058824 | 0 | 0 | 0.232427 | 0.050282 | 0 | 0 | 0 | 0.020833 | 0 | 1 | 0.029412 | false | 0 | 0.147059 | 0 | 0.205882 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
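The Kratos process above fills in missing `file_settings` defaults before use. The same defaulting pattern, sketched with plain dicts instead of `KM.Parameters` so it runs without Kratos (the function name is made up for this sketch):

```python
def apply_file_settings_defaults(params):
    # Mirror the logic of the process above: create "file_settings" if it is
    # absent, otherwise only fill in the individual missing keys, leaving any
    # user-supplied values untouched.
    defaults = {"file_access_mode": "truncate", "save_h5_files_in_folder": True}
    settings = params.setdefault("file_settings", {})
    for key, value in defaults.items():
        settings.setdefault(key, value)
    return params


print(apply_file_settings_defaults({}))
# {'file_settings': {'file_access_mode': 'truncate', 'save_h5_files_in_folder': True}}
print(apply_file_settings_defaults({"file_settings": {"file_access_mode": "read_only"}}))
# {'file_settings': {'file_access_mode': 'read_only', 'save_h5_files_in_folder': True}}
```

`KM.Parameters` does the same job with `Has`/`AddEmptyValue` because it is a JSON-schema-like wrapper rather than a dict, but the merge semantics are identical.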