# dagger/runtime/local/types.py (raztud/dagger, Apache-2.0)
"""Data types used for local invocations."""
from typing import Any, Generic, Iterable, Iterator, Mapping, NamedTuple, TypeVar, Union
from dagger.serializer import Serializer
T = TypeVar("T")
class OutputFile(NamedTuple):
"""Represents a file in the local file system that holds the serialized value for a node output."""
filename: str
serializer: Serializer
class PartitionedOutput(Generic[T]):
"""Represents a partitioned output explicitly."""
def __init__(self, iterable: Iterable[T]):
"""Build a partitioned output from an Iterable."""
self._iterable = iterable
self._iterator = iter(iterable)
def __iter__(self) -> Iterator[T]:
"""Return an iterator over the partitions of the output."""
return self
def __next__(self) -> T:
"""Return the next element in the partitioned output."""
return next(self._iterator)
def __repr__(self) -> str:
"""Return a human-readable representation of the partitioned output."""
return repr(self._iterable)
#: One of the outputs of a node, which may be partitioned
NodeOutput = Union[OutputFile, PartitionedOutput[OutputFile]]
#: All outputs of a node indexed by their name. Node executions may be partitioned, in which case this is a list.
NodeOutputs = Mapping[str, NodeOutput]
#: All executions of a node. If the node is partitioned there will only be one. Otherwise, there may be many.
NodeExecutions = Union[NodeOutputs, PartitionedOutput[NodeOutputs]]
#: The parameters supplied to a node (plain, not serialized)
NodeParams = Mapping[str, Any]
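A `PartitionedOutput` is a single-pass wrapper: the class stores one shared iterator, so a given instance can only be consumed once. A minimal standalone sketch (re-declaring the class so the snippet runs on its own, for demonstration only):

```python
from typing import Generic, Iterable, Iterator, TypeVar

T = TypeVar("T")

class PartitionedOutput(Generic[T]):
    """Mirror of the class above, for demonstration only."""

    def __init__(self, iterable: Iterable[T]):
        self._iterable = iterable
        self._iterator = iter(iterable)

    def __iter__(self) -> Iterator[T]:
        return self

    def __next__(self) -> T:
        return next(self._iterator)

    def __repr__(self) -> str:
        return repr(self._iterable)

partitions = PartitionedOutput([1, 2, 3])
assert repr(partitions) == "[1, 2, 3]"
assert list(partitions) == [1, 2, 3]
assert list(partitions) == []  # the shared iterator is exhausted: single-pass
```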
# mirrormanager2/lib/pid.py (Devyani-Divs/mirrormanager2, MIT)
# -*- coding: utf-8 -*-
#
# Copyright © 2014 Red Hat, Inc.
#
# This copyrighted material is made available to anyone wishing to use,
# modify, copy, or redistribute it subject to the terms and conditions
# of the GNU General Public License v.2, or (at your option) any later
# version. This program is distributed in the hope that it will be
# useful, but WITHOUT ANY WARRANTY expressed or implied, including the
# implied warranties of MERCHANTABILITY or FITNESS FOR A PARTICULAR
# PURPOSE. See the GNU General Public License for more details. You
# should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# Any Red Hat trademarks that are incorporated in the source
# code or documentation are not subject to the GNU General Public
# License and may only be used or replicated with the express permission
# of Red Hat, Inc.
#
'''
MirrorManager2 internal api to manage PID.
'''
import os
def remove_pidfile(pidfile):
os.unlink(pidfile)
def create_pidfile_dir(pidfile):
piddir = os.path.dirname(pidfile)
if piddir and not os.path.exists(piddir):
        os.makedirs(piddir, mode=0o755)
def write_pidfile(pidfile, pid):
create_pidfile_dir(pidfile)
with open(pidfile, 'w') as stream:
stream.write(str(pid)+'\n')
return 0
def manage_pidfile(pidfile):
"""returns 1 if another process is running that is named in pidfile,
otherwise creates/writes pidfile and returns 0."""
pid = os.getpid()
if not os.path.exists(pidfile):
return write_pidfile(pidfile, pid)
oldpid = ''
try:
with open(pidfile, 'r') as stream:
oldpid = stream.read()
    except IOError:
return 1
# is the oldpid process still running?
try:
os.kill(int(oldpid), 0)
except ValueError: # malformed oldpid
return write_pidfile(pidfile, pid)
    except OSError as err:
if err.errno == 3: # No such process
return write_pidfile(pidfile, pid)
return 1
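The liveness check above relies on `os.kill(pid, 0)`, which delivers no signal but raises if the PID does not exist. A simplified sketch of that probe (the names here are illustrative, not part of the module, and the EPERM case is folded into "not alive"):

```python
import os

def pid_alive(pid):
    """Probe a PID with signal 0; True if a process with that PID exists."""
    try:
        os.kill(int(pid), 0)
    except (ValueError, OSError):
        # ValueError: malformed pid string; OSError: no such process
        return False
    return True

assert pid_alive(os.getpid()) is True
assert pid_alive("not-a-pid") is False
```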
# flask_restful_swagger/resources/static_files.py (flask-restful-swagger, MIT)
# -*- coding: utf-8 -*-
import mimetypes
import os
from flask import abort, send_file
from flask_restful import Resource
from flask_restful_swagger import root_path
from flask_restful_swagger.registry import get_current_registry
from flask_restful_swagger.utils import render_page
__author__ = 'sobolevn'
class StaticFiles(Resource):
# TODO: is it possible to change this signature?
def get(self, **kwargs):
req_registry = get_current_registry()
if not kwargs:
file_path = "index.html"
else:
keys = sorted(kwargs.keys())
file_path = '/'.join(
kwargs[k].strip('/') for k in keys if kwargs[k] is not None
)
if file_path in [ # TODO: refactor to TemplateResource
"index.html",
"o2c.html",
"swagger-ui.js",
"swagger-ui.min.js",
"lib/swagger-oauth.js",
]:
conf = {'resource_list_url': req_registry['spec_endpoint_path']}
return render_page(file_path, conf)
mime = mimetypes.guess_type(file_path)[0]
file_path = os.path.join(root_path, 'static', file_path)
if os.path.exists(file_path):
return send_file(file_path, mimetype=mime)
abort(404)
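The content type sent back for static assets is inferred purely from the file name via `mimetypes.guess_type`, e.g.:

```python
import mimetypes

# .js maps to a JavaScript type (the exact string changed across Python versions)
mime = mimetypes.guess_type("swagger-ui.min.js")[0]
assert mime in ("application/javascript", "text/javascript")

assert mimetypes.guess_type("index.html")[0] == "text/html"

# unknown extensions yield None, so send_file falls back to its own default
assert mimetypes.guess_type("README")[0] is None
```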
# docs/_config/cakephpbranch.py (asgraf/crud, MIT)
import os
from sphinx.util.osutil import SEP
"""
CakePHP Git branch extension.
A simple sphinx extension for adding
the GitHub branch name of the docs' version.
"""
def setup(app):
app.connect('html-page-context', append_template_ctx)
app.add_config_value('branch', '', True)
return app
def append_template_ctx(app, pagename, templatename, ctx, event_arg):
ctx['branch'] = app.config.branch
# annodb/lib/Cytoband.py (always-waiting/Django-Xromate, MIT)
#encoding: utf-8
"""
注释数据库的cytoband表
"""
import mongoengine as mongoe
from annodb.apps import AnnodbConfig, dumpstring
import re
class Cytoband(mongoe.Document):
"""
collection name: cytoband
"""
meta = {
'db_alias': AnnodbConfig.alias,
}
description = mongoe.StringField()
name = mongoe.StringField()
chr = mongoe.StringField()
chrom = mongoe.StringField()
start = mongoe.IntField()
end = mongoe.IntField()
def __str__(self):
string = ["{\n"]
        for k, v in self._data.items():
string.append("\t%s => \n" % k)
string.append(dumpstring(v, level=2))
string.append("}\n")
return "".join(string)
@classmethod
def cytoband_region_to_coordinates(cls, region):
#print region
res = re.search('(1?[1-9]|10|2[0-2]|X|Y)([pqc]\S*)$', region)
if res:
#print region
chrom = res.group(1)
locs = res.group(2).split('-')
#print locs
match = re.compile("|".join(locs))
cytos = cls.objects(chr=chrom, name=match).only('start','end').order_by('start')
#if len(cytos) > 3:
#print "Chr: %s" % chrom
#print len(cytos)
#print "Match: %s" % "|".join(locs)
#print cytos[0].chr, cytos[0].start, cytos[0].end
#print cytos[-1].chr, cytos[-1].start, cytos[-1].end
coord = {'chr': chrom}
if not len(cytos):
                print("%s is not a cytoband locus" % region)
return
if locs[0] == 'pter':
coord['start'] = 0
elif locs[0] == 'cen':
p_or_q = 'p' if locs[1].find("p") != -1 else 'q'
coord['start'] = cls.objects(chr=chrom, description='acen', name=re.compile(p_or_q)).first().start
else:
coord['start'] = cytos.order_by('start')[0].start
if len(locs) > 1 and locs[1] == 'qter':
coord['end'] = cls.objects.aggregate(
{'$match':{'chr':chrom}},
{'$group':{'_id':None, 'end':{'$max':'$end'}}}
).next()['end']
else:
coord['end'] = cytos.order_by('-end')[0].end
return coord
else:
            print("%s is not a cytoband locus" % region)
return
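The region parser hinges on the regular expression in `cytoband_region_to_coordinates`; it splits a locus string such as `17q21.31` into a chromosome part and a band part:

```python
import re

# same pattern as in cytoband_region_to_coordinates
CYTO_RE = re.compile(r'(1?[1-9]|10|2[0-2]|X|Y)([pqc]\S*)$')

m = CYTO_RE.search('17q21.31')
assert m.group(1) == '17' and m.group(2) == 'q21.31'

# ranges across bands stay in group 2 and are split on '-' later
m = CYTO_RE.search('Xp22.1-p21')
assert m.group(1) == 'X' and m.group(2) == 'p22.1-p21'

assert CYTO_RE.search('foo') is None  # not a cytoband locus
```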
# transforms.py (Rars-project/mousepy, MIT)
from torchvision.transforms import *
import numbers
import random

from PIL import Image
from torchvision.transforms import functional
class GroupToTensor(object):
def __init__(self):
pass
def __call__(self, img_group):
return [ToTensor()(img) for img in img_group]
class GroupCenterCrop(object):
def __init__(self, size):
self.size = size
def __call__(self, img_group):
return [CenterCrop(self.size)(img) for img in img_group]
class GroupResize(object):
def __init__(self, size):
self.size = size
def __call__(self, img_group):
return [Resize(self.size)(img) for img in img_group]
class GroupResizeFit(object):
def __init__(self, size):
self.size = size
def __call__(self, img_group):
print(img_group[0].size)
return [Resize(self.size)(img) for img in img_group]
class GroupExpand(object):
def __init__(self, size):
self.size = size
def __call__(self, img_group):
w, h = img_group[0].size
tw, th = self.size
out_images = list()
if(w >= tw and h >= th):
assert img_group[0].size == self.size
return img_group
for img in img_group:
new_im = Image.new("RGB", (tw, th))
            new_im.paste(img, ((tw-w)//2, (th-h)//2))
out_images.append(new_im)
assert out_images[0].size == self.size
return out_images
class GroupRandomCrop(object):
def __init__(self, size):
if isinstance(size, numbers.Number):
self.size = (int(size), int(size))
else:
self.size = size
def __call__(self, img_group):
w, h = img_group[0].size #120, 160
th, tw = self.size #100, 140
out_images = list()
if (w - tw) < 0:
print('W < TW')
for img in img_group:
new_im = Image.new("RGB", (tw, th))
                new_im.paste(img_group[0], ((tw-w)//2, (th-h)//2))
out_images.append(new_im)
return out_images
x1 = random.randint(0, (w - tw))
y1 = random.randint(0, (h - th))
for img in img_group:
if w == tw and h == th:
out_images.append(img)
else:
out_images.append(img.crop((x1, y1, x1 + tw, y1 + th)))
return out_images
class GroupRandomRotation(object):
def __init__(self, max):
self.max = max
def __call__(self, img_group):
angle = random.randint(-self.max, self.max)
return [functional.rotate(img, angle) for img in img_group]
class GroupNormalize(object):
def __init__(self, mean, std):
self.mean = mean
self.std = std
def __call__(self, tensor_list):
# TODO: make efficient
for t, m, s in zip(tensor_list, self.mean, self.std):
t.sub_(m).div_(s)
return tensor_list
class GroupUnormalize(object):
def __init__(self, mean, std):
self.mean = mean
self.std = std
def __call__(self, tensor_list):
# TODO: make efficient
for t, m, s in zip(tensor_list, self.mean, self.std):
t.mul_(s).add_(m)
return tensor_list | 24.008929 | 61 | 0.677203 | 434 | 2,689 | 3.921659 | 0.186636 | 0.098707 | 0.068743 | 0.089894 | 0.592244 | 0.537015 | 0.478848 | 0.464747 | 0.464747 | 0.445946 | 0 | 0.014117 | 0.18334 | 2,689 | 112 | 62 | 24.008929 | 0.760929 | 0.021569 | 0 | 0.534091 | 0 | 0 | 0.004566 | 0 | 0 | 0 | 0 | 0.008929 | 0.022727 | 1 | 0.204545 | false | 0.011364 | 0.045455 | 0.034091 | 0.477273 | 0.022727 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
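`GroupNormalize` and `GroupUnormalize` are inverses of each other: sub-then-div followed by mul-then-add restores the original values. The per-channel arithmetic, sketched without torch (the mean/std values are illustrative ImageNet-style statistics):

```python
MEAN = [0.485, 0.456, 0.406]
STD = [0.229, 0.224, 0.225]

def normalize(channels):
    # equivalent of t.sub_(m).div_(s), channel by channel
    return [(c - m) / s for c, m, s in zip(channels, MEAN, STD)]

def unnormalize(channels):
    # equivalent of t.mul_(s).add_(m), channel by channel
    return [c * s + m for c, m, s in zip(channels, MEAN, STD)]

x = [0.5, 0.25, 0.75]
restored = unnormalize(normalize(x))
assert all(abs(a - b) < 1e-9 for a, b in zip(x, restored))
```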
# greens/main.py (grillazz/fastapi-mongodb, MIT)
from fastapi import FastAPI
from greens import config
from greens.routers import router as v1
from greens.services.repository import get_mongo_meta
from greens.utils import get_logger, init_mongo
global_settings = config.get_settings()
if global_settings.environment == "local":
get_logger("uvicorn")
app = FastAPI()
app.include_router(v1, prefix="/api/v1")
@app.on_event("startup")
async def startup_event():
app.state.logger = get_logger(__name__)
app.state.logger.info("Starting greens on your farmland...mmm")
app.state.mongo_client, app.state.mongo_db, app.state.mongo_collection = await init_mongo(
global_settings.db_name, global_settings.db_url, global_settings.collection
)
@app.on_event("shutdown")
async def shutdown_event():
app.state.logger.info("Parking tractors in garage...")
@app.get("/health-check")
async def health_check():
# # TODO: check settings dependencies passing as args and kwargs
# a = 5
# try:
# assert 5 / 0
# except Exception:
# app.state.logger.exception("My way or highway...")
return await get_mongo_meta()
# QCL_gui/comms.py (alex123go/QCL_controllerViaArduino, MIT)
# -*- coding: utf-8 -*-
"""
Created on Fri Jan 8 00:34:40 2021
@author: Alex1
"""
import serial
import time
class QCL_comms():
    """Serial-port interface to the QCL controller (via Arduino)."""
def __init__(self, arg=None):
super(QCL_comms, self).__init__()
self.arg = arg
self.serActive = False
def connect(self,port = 'COM9'):
self.ser = serial.Serial(port,57600)
# don't know if useful
        self.ser.parity = serial.PARITY_ODD
        self.ser.stopbits = serial.STOPBITS_ONE
        self.ser.bytesize = serial.EIGHTBITS
# let time for bootloader
time.sleep(1)
self.serActive = True
def disconnect(self):
if self.serActive == True:
self.ser.close()
self.serActive = False
def sendCmd(self,cmd):
cmd = cmd + '\n'
self.ser.write(cmd.encode())
time.sleep(0.1)
def powerComb(self,comb='1',status = 0):
cmd = 'Power'+str(comb)+':'+str(status)
self.sendCmd(cmd)
def enableComb(self,comb='1',status = 0):
cmd = 'Enable'+str(comb)+':'+str(status)
self.sendCmd(cmd)
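Every command the class sends is a plain newline-terminated ASCII string of the form `Name<channel>:<value>`, as built in `powerComb`/`enableComb` and terminated in `sendCmd`. A standalone sketch of that framing (the helper name is illustrative):

```python
def frame_cmd(name, comb, status):
    # mirrors powerComb/enableComb + sendCmd: "<name><comb>:<status>\n"
    return name + str(comb) + ':' + str(status) + '\n'

assert frame_cmd('Power', '1', 0) == 'Power1:0\n'
# sendCmd encodes the framed string before writing it to the port
assert frame_cmd('Enable', 2, 1).encode() == b'Enable2:1\n'
```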
# graphing/tests/diagram_maker.py (jonsim/robin-project, MIT)
#!/usr/bin/python
#--------------------------------------------------------------#
#   DESCRIPTION                                                #
#     Takes a depth image (e.g. PNG) in which each pixel's     #
#     colour encodes a depth value, builds depth histograms    #
#     from it and can plot the data as a 3D point cloud.       #
#   AUTHOR                                                     #
#     Jonathan Simmonds                                        #
#--------------------------------------------------------------#
from collections import namedtuple
from numpy import zeros
from scipy.cluster.vq import kmeans2
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.font_manager import FontProperties
import matplotlib
import matplotlib.pyplot as plt
from PIL import Image, ImageDraw
import sys
import struct
#--------------------------------------------------------------#
#--------------------- CLASS DEFINITIONS ----------------------#
#--------------------------------------------------------------#
Point = namedtuple('Point', ['x', 'y', 'z'])
#--------------------------------------------------------------#
#-------------------- FUNCTION DEFINITIONS --------------------#
#--------------------------------------------------------------#
def convert_rgb_to_string (rgb):
r = int(rgb[0]*255)
g = int(rgb[1]*255)
b = int(rgb[2]*255)
s = '#%02x%02x%02x' % (r, g, b)
return s
# H' takes values between 0-1530
# H' = 0- 255 RGB= 255, 0-255, 0
# H' = 255- 510 RGB= 255-0, 255, 0
# H' = 510- 765 RGB= 0, 255, 0-255
# H' = 765-1020 RGB= 0, 255-0, 255
# H' = 1020-1275 RGB= 0-255, 0, 255
# H' = 1275-1530 RGB= 255, 0, 255-0
def convert_rgb_to_depth (rgb):
r = rgb[0]
g = rgb[1]
b = rgb[2]
v = 0
if r == 0 and g == 0 and b == 0:
return 0
if r == 255:
if b == 0:
v = g
else:
v = 1275 + b
elif g == 255:
if r == 0:
v = 510 + b
else:
v = 255 + r
elif b == 255:
if r == 0:
v = 765 + g
else:
v = 1020 + r
v = (v * (4800/1530))
if v < 1:
return 0
else:
return v + 400
def convert_depth_to_rgb (v):
# check for zero
if v <= 400:
return (0, 0, 0)
v -= 400
v /= (4800/1530)
if v >= 1530:
return (1.0, 0.0, 0.0)
"""
if v < 255:
return (255, v, 0)
elif v < 510:
return (510-v, 255, 0)
elif v < 765:
return (0, 255, v-510)
elif v < 1020:
return (0, 1020-v, 255)
elif v < 1275:
return (v-1020, 0, 255)
else:
return (255, 0, 1530-v)"""
if v < 255:
return (1.0, v/255.0, 0.0)
elif v < 510:
return ((510-v)/255.0, 1.0, 0.0)
elif v < 765:
return (0.0, 1.0, (v-510)/255.0)
elif v < 1020:
return (0.0, (1020-v)/255.0, 1.0)
elif v < 1275:
return ((v-1020)/255.0, 0.0, 1.0)
else:
return (1.0, 0.0, (1530-v)/255.0)
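The H' hue table in the comments above can be spot-checked at its breakpoints; a standalone sketch of the forward mapping (hue to RGB on the 0-255 scale, as in the commented-out variant):

```python
def hue_to_rgb255(v):
    # piecewise-linear hue ramp, H' in [0, 1530)
    if v < 255:
        return (255, v, 0)
    elif v < 510:
        return (510 - v, 255, 0)
    elif v < 765:
        return (0, 255, v - 510)
    elif v < 1020:
        return (0, 1020 - v, 255)
    elif v < 1275:
        return (v - 1020, 0, 255)
    else:
        return (255, 0, 1530 - v)

assert hue_to_rgb255(0) == (255, 0, 0)     # pure red
assert hue_to_rgb255(510) == (0, 255, 0)   # pure green
assert hue_to_rgb255(1020) == (0, 0, 255)  # pure blue
assert hue_to_rgb255(1529) == (255, 0, 1)  # wrapping back towards red
```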
def convert_depth_to_gs (v):
gs = v / (10000/255)
return (gs, gs, gs)
# loads a given image into a 2d list
def load_image (filename):
# make the data structure and open the image
data2d = zeros((640, 480))
input_img = Image.open(filename)
# loop through the input image, saving the values
for y in range(0, 480):
for x in range(0, 640):
data2d[x][y] = convert_rgb_to_depth(input_img.getpixel((x, y)))
# return
return data2d
def make_histogram (data2d, side='both'):
# setup the ranges to accomodate the side we want to look at.
# y 0-240 is top half of image, 241-480 is bottom half
xstart = 0
xend = 640
ystart = 0
yend = 480
if side == 'left':
xend = 320
elif side == 'right':
xstart = 320
elif side == 'region':
xstart = 64
xend = 576
ystart = 80
yend = 280
# make the data structure
hist = zeros(10000)
# loop through the input and build the histogram
for y in range(ystart, yend):
for x in range(xstart, xend):
            hist[int(data2d[x][y])] += 1
# produce the histogram colour spectrum
colours = []
for i in range(0, 10000):
colours.append(convert_depth_to_rgb(i))
# print the histogram's stats
"""hist_range1 = 0
for i in range(600, 900):
hist_range1 += hist[i]
hist_range2 = 0
for i in range(900, 1200):
hist_range2 += hist[i]
print "hist[0] =", hist[0]
print "hist[600-899] =", hist_range1
print "hist[900-1199] =", hist_range2"""
# tidy up the histogram
histy = hist.tolist()
histx = range(0, 10000)
histy[0] = 0
# adjust the ranges
for i in range(0, len(histx)):
histx[i] /= 1000.0
for i in range(0, len(histy)):
histy[i] /= 1000.0
# return
return (histx, histy, colours)
def plot_histogram (plot_axes, hist):
plot_axes.bar(hist[0], hist[1], width=1.0/1000.0, color=hist[2], edgecolor=hist[2])
# ax.plot(histx, histy, 'b-')
def setup_plot ():
# create the plot
fig = plt.figure()
plt.subplots_adjust(left=0.1, right=0.95, top=0.95, bottom=0.1)
ax = fig.add_subplot(111)
# remove the top and right borders
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()
# setup the font
font = {'family': 'Quattrocento',
'weight': 'bold',
'size' : 16}
matplotlib.rc('font', **font)
# return
return ax
def setup_plot_labels (ax):
# setup the labels
ax.set_xlabel('Depth (m)')
ax.set_ylabel('Frequency (000\'s of pixels)')
#ax.set_title(r'$\mathrm{Histogram\ of\ IQ:}\ \mu=100,\ \sigma=15$')
ax.set_xlim(0, 5)
ax.set_ylim(0, 4)
ax.set_yticks(range(0, 4+1))
# returns an array A = MxN points with M observations in N dimensions. A[i]=A[i,:] returns the ith
# observation, A[i,0] returns the ith observations's x value etc. A[:,0] returns all the x values.
def generate_3d_data (data2d):
X_SUBSTEP = 4
Y_SUBSTEP = 4
Sp_over_F = (0.2084 / 120)
r = zeros(((640/X_SUBSTEP)*(480/Y_SUBSTEP), 3))
i = 0
for y in range(0, 480, Y_SUBSTEP):
for x in range(0, 640, X_SUBSTEP):
z = data2d[x][y]
if z != 0:
r[i,0] = (x - 320) * z * Sp_over_F # rX
r[i,1] = (240 - y) * z * Sp_over_F # rY
r[i,2] = z # rZ
i += 1
return r
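`generate_3d_data` back-projects each pixel into camera space with a pinhole model: rX = (x - cx) * z * (Sp/F) and rY = (cy - y) * z * (Sp/F), with the image centre at (320, 240). A spot check of that projection (the constant is copied from the function; the helper name is illustrative):

```python
SP_OVER_F = 0.2084 / 120  # pixel-pitch over focal length, as in the script

def backproject(x, y, z, cx=320, cy=240):
    """Pinhole back-projection of pixel (x, y) at depth z into camera space."""
    return ((x - cx) * z * SP_OVER_F, (cy - y) * z * SP_OVER_F, z)

# the optical centre maps onto the optical axis
assert backproject(320, 240, 1000) == (0.0, 0.0, 1000)

# points right of centre get positive X; points above centre get positive Y
rx, ry, rz = backproject(420, 140, 1000)
assert rx > 0 and ry > 0 and rz == 1000
```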
def generate_colors (data2d):
X_SUBSTEP = 4
Y_SUBSTEP = 4
colors = []
for y in range(0, 480, Y_SUBSTEP):
for x in range(0, 640, X_SUBSTEP):
v = data2d[x][y]
colors.append(convert_rgb_to_string(convert_depth_to_rgb(v)))
return colors
"""def plot_2d_data (data2d, colours=None, depth_scale=True):
# Initialise, calculating the resolution and creating the image (bg=green)
res_step = data2d[1].x - data2d[0].x
xres = 640 / res_step
yres = 480 / res_step
image = Image.new("RGB", (xres, yres), (0,255,0))
draw = ImageDraw.Draw(image)
# calculate max depth
max_depth = 0
for data in data2d:
if (data.z > max_depth):
max_depth = data.z
# Depth image
depth_scaling = 255.0 / max_depth
for i in range(len(data2d)):
color = int(data2d[i].z * depth_scaling)
if colours == None:
draw.point((data2d[i].x/res_step, data2d[i].y/res_step), fill=(color, color, color))
else:
draw.point((data2d[i].x/res_step, data2d[i].y/res_step), fill=colours[i])
# Scale
if (depth_scale and xres == 640):
scale_offset_x = 10
scale_offset_y = 10
scale_height = 200
scale_width = 65
gradient_scaling = 255.0 / (scale_height-30)
draw.rectangle([scale_offset_x, scale_offset_y, scale_offset_x+scale_width, scale_offset_y+scale_height], fill=(255,255,255), outline=(0,0,0))
for y in range(scale_height-30):
for x in range(20):
gradient_shade = int(y * gradient_scaling)
draw.point((scale_offset_x+5+x, scale_offset_y+20+y), fill=(gradient_shade, gradient_shade, gradient_shade))
title_string = "DEPTH (mm)"
title_string_s = draw.textsize(title_string)
title_string_offset_x = scale_width / 2 - title_string_s[0] / 2
title_string_offset_x = 4 # Comment this out for a more accurate x offset (at the risk of slight-non-centering)
title_string_offset_y = 2
draw.text((scale_offset_x+title_string_offset_x, scale_offset_y+title_string_offset_y), title_string, fill=(0,0,0))
draw.text((scale_offset_x+25, scale_offset_y+15), "- 0", fill=(0,0,0))
draw.text((scale_offset_x+25, scale_offset_y+scale_height-16), "- " + str(max_depth), fill=(0,0,0))
# show
image.show()"""
def plot_3d_data (data3d, colours=None, filename=None):
# pre-configure the plot environment
matplotlib.rcParams['axes.unicode_minus'] = False
fig = plt.figure()
axe = Axes3D(fig)
fig.patch.set_facecolor('white')
font = {'family': 'Quattrocento',
'weight': 'bold',
'size' : 16}
matplotlib.rc('font', **font)
#axe.grid(True, c='#000000')
#axe.set_aspect('equal')
axe.set_aspect(0.5)
c2 = ['#bbe600', '#00c7d9', '#e68e00']
if colours is None:
axe.scatter(data3d[:,0], data3d[:,1], data3d[:,2], zdir='y', s=2000, c='red', lw=0, marker='o') # use a named colour; matplotlib RGB tuples must be in the 0-1 range, not (255,0,0)
else:
axe.scatter(data3d[:,0], data3d[:,1], data3d[:,2], zdir='y', s=2000, c=colours, lw=0, marker='o')
#axe.dist = 15
axe.set_xlabel('X-axis')
axe.set_xlim3d([-1200, 1200])
axe.set_zlabel('Y-axis')
axe.set_zlim3d([-200, 1000])
axe.set_ylabel('Z-axis')
axe.set_ylim3d([ 1000, 3400])
# return
if filename is None:
plt.show()
else:
#plt.savefig(filename + ".pdf")
plt.savefig(filename + ".svg")
plt.savefig(filename + "_pc.png")
#--------------------------------------------------------------#
#------------------------ MAIN FUNCTION -----------------------#
#--------------------------------------------------------------#
# get command line args
filename = ""
side = 'both'
if len(sys.argv) < 2:
print "ERROR: the program must be called as follows:\n ./diagram_maker.py filename.png ['left' | 'right' | 'region' | 'both']"
sys.exit()
elif len(sys.argv) > 2:
side = sys.argv[2]
filename = sys.argv[1]
# setup
# do stuff
print "Loading image..."
image = load_image(filename)
"""print "Converting..."
data3d = generate_3d_data(image)
colors = generate_colors(image)
print "Plotting..."
plot_3d_data(data3d, colors)
"""
print "Making histogram..."
histogram = make_histogram(image, side=side)
print "Plotting histogram..."
ax = setup_plot()
plot_histogram(ax, histogram)
setup_plot_labels(ax)
print "Outputting histogram..."
# show stuff
#plt.show()
# save stuff
output_filename = filename.split('.')[0] + "_hist_" + side
#plt.savefig(output_filename + ".svg")
plt.savefig(output_filename + ".png")
| 29.486979 | 150 | 0.53537 | 1,591 | 11,323 | 3.692646 | 0.221245 | 0.008851 | 0.006128 | 0.011234 | 0.182468 | 0.126809 | 0.094298 | 0.086128 | 0.086128 | 0.078298 | 0 | 0.089733 | 0.276605 | 11,323 | 383 | 151 | 29.563969 | 0.627518 | 0.212753 | 0 | 0.151042 | 0 | 0.005208 | 0.072321 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.052083 | null | null | 0.026042 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
889a7aad2ff1d55cfc94e99278cda4cf9bda4342 | 893 | py | Python | src/filenames_server.py | jonlwowski012/DropboxROS | e6fd3151aff8400f4f906f2fc5ce0236822dccf9 | [
"Apache-2.0"
] | null | null | null | src/filenames_server.py | jonlwowski012/DropboxROS | e6fd3151aff8400f4f906f2fc5ce0236822dccf9 | [
"Apache-2.0"
] | null | null | null | src/filenames_server.py | jonlwowski012/DropboxROS | e6fd3151aff8400f4f906f2fc5ce0236822dccf9 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
from dropboxros.srv import *
from dropboxros.msg import username, filenames
import rospy
import os
def handle_checkfiles(req):
filenames_cli = filenames()
all_filenames = [f for f in os.listdir('.') if os.path.isfile(f)]
client_files = []
filetimes=[]
for filename in all_filenames:
print req
if req.username.username == filename.split("_",1)[0]:
time = os.path.getmtime(filename)
filetimes.append(time)
client_files.append(filename.split("_",1)[1])
filenames_cli.filenames = client_files
resp = CheckFilesResponse()
resp.filenames = filenames_cli
resp.filetimes = filetimes
return resp
def send_filenames_server():
rospy.init_node('filenames_server', anonymous=True)
s = rospy.Service('/server/check_filenames', CheckFiles, handle_checkfiles)
print "Ready to send files."
rospy.spin()
if __name__ == "__main__":
send_filenames_server()
| 27.060606 | 76 | 0.74804 | 120 | 893 | 5.341667 | 0.45 | 0.056162 | 0.065523 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005148 | 0.129899 | 893 | 32 | 77 | 27.90625 | 0.81982 | 0.022396 | 0 | 0 | 0 | 0 | 0.080275 | 0.026376 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.148148 | null | null | 0.074074 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
88a9276b391f472aac9752caa30962748839bc9a | 4,143 | py | Python | libs/linearmodel.py | kuod/pygcta | b7a82d14b065904dcb47640e8bbad0ca04761ee4 | [
"MIT"
] | 1 | 2016-03-03T16:32:29.000Z | 2016-03-03T16:32:29.000Z | libs/linearmodel.py | kuod/pygcta | b7a82d14b065904dcb47640e8bbad0ca04761ee4 | [
"MIT"
] | null | null | null | libs/linearmodel.py | kuod/pygcta | b7a82d14b065904dcb47640e8bbad0ca04761ee4 | [
"MIT"
] | null | null | null | import numpy as np
import numpy.linalg as la
import pdb
class pygcta(object):
"""
class for pygcta
"""
def __init__(self, Y = None, K = None, X = None):
"""
Constructor
Y: Phenotype OBJECT
K: LIST of kernels
TODO: add covariates later
"""
self.Y = Y
self.K = K
assert np.all(self.Y.SID == self.K[0].SID), 'Tough luck, your samples do not match so complete the matching function!!!!'
self.betas = None
self.pv = None
self.sigma0 = np.array([np.var(self.Y.Y) / (len(self.K) + 1) for x in range(len(self.K) + 1)])### initialize
if X is None:
self.X = np.ones((K[0].K.shape[0],1))
def matching(self, Y, K):
"""
matching Y and K
"""
### sort data Y
sidx = np.argsort(Y.SID)
Y.SID = Y.SID[sidx]
Y.Y = Y.Y[sidx,:]
### sort data K
for kitem in K:
sidx = np.argsort(kitem.SID)
kitem.SID = kitem.SID[sidx]
kitem.K = kitem.K[sidx,:][:,sidx]
### find reference set samples
### too boring to finish this crap....
def likelihood(self, V, P, Vinv=None, X=None):
"""
implement likelihood function
"""
if Vinv is None:
Vinv = la.inv(V)
if X is None:
X = self.X
loglik = -0.5 * (np.log(la.det(V)) + np.log( la.det( np.dot(np.dot(X.T, Vinv), X) ) ) + np.dot( np.dot( self.Y.Y.T, P), self.Y.Y) )
return loglik
def getV(self, sigmai):
"""
calculate V
"""
V = np.zeros(self.K[0].K.shape )
for i,x in enumerate(self.K):
V += sigmai[i] * x.K
V += np.eye(V.shape[0]) * sigmai[-1] ### multiply last one by identity
return V
def getP(self, Vinv):
"""
calculate P
P = V^-1 - V^{-1}X(X'V^{-1}X)^{-1}X'V^{-1}
"""
XVX = np.dot(np.dot(self.X.T, Vinv), self.X)
P = Vinv - np.dot(np.dot(Vinv, self.X) * (1./XVX), np.dot(self.X.T, Vinv))
return P
def emstep(self, P, A, sigmai):
"""
return emstep
"""
Y = self.Y.Y
N = Y.shape[0]
sigma_next = ((sigmai ** 2 ) * np.dot(Y.T, np.dot(P, np.dot(A, np.dot(P, Y)))) + np.trace(sigmai * np.eye(N) - (sigmai ** 2) * np.dot(P,A))) / N
return sigma_next.flatten()
def optimize(self, tol = 1E-4):
"""
MEAT: run optimization
"""
N = self.X.shape[0]
V0 = self.getV(self.sigma0)
Vinv0 = la.inv(V0)
P0 = self.getP(Vinv0)
L_old = self.likelihood(V0, P0, Vinv0)
print "Likelihood before EM step is: %f" % L_old
# Update each variance component?? I have no idea what I am doing?
sigma_next = np.zeros(self.sigma0.shape)
for i in range(len(self.K)):
sigma_next[i] = self.emstep(P0, self.K[i].K, self.sigma0[i])
# How do we update remaining variance component?? Like this or with A=identity?
sigma_next[-1] = self.emstep(P0, np.eye(N), self.sigma0[-1])
V_next = self.getV(sigma_next)
Vinv_next = la.inv(V_next)
P_next = self.getP(Vinv_next)
L_new = self.likelihood(V_next, P_next, Vinv_next)
print "Likelihood after initial EM step is: %f" % L_new
idx = 2
while L_new - L_old > tol:
L_old = L_new
for i in range(len(self.K)):
sigma_next[i] = self.emstep(P_next, self.K[i].K, sigma_next[i])
sigma_next[-1] = self.emstep(P_next, np.eye(N), sigma_next[-1])
V_next = self.getV(sigma_next)
Vinv_next = la.inv(V_next)
P_next = self.getP(Vinv_next)
L_new = self.likelihood(V_next, P_next, Vinv_next)
print "Likelihood after EM step %s is: %f (Likelihood Difference: +%e)" % (idx, L_new, L_new-L_old)
idx += 1
self.sigma = sigma_next
print "Final Sigma is %f" % (sigma_next[1])
if __name__ == "__main__":
pass
| 29.176056 | 152 | 0.510017 | 627 | 4,143 | 3.282297 | 0.240829 | 0.034014 | 0.014577 | 0.019436 | 0.212828 | 0.167153 | 0.152575 | 0.152575 | 0.152575 | 0.152575 | 0 | 0.01649 | 0.341299 | 4,143 | 141 | 153 | 29.382979 | 0.737633 | 0.064687 | 0 | 0.184211 | 0 | 0 | 0.068824 | 0 | 0 | 0 | 0 | 0.007092 | 0.013158 | 0 | null | null | 0.026316 | 0.039474 | null | null | 0.052632 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
88afad722790c14e562b195acce7c6842c0dad80 | 1,755 | py | Python | crawler-demo/GPA.py | colddew/mix-python | 4d6280dde02839bbf29f2c6660828a50086fb8f8 | [
"MIT"
] | 1 | 2017-09-02T13:43:08.000Z | 2017-09-02T13:43:08.000Z | crawler-demo/GPA.py | colddew/mix-python | 4d6280dde02839bbf29f2c6660828a50086fb8f8 | [
"MIT"
] | null | null | null | crawler-demo/GPA.py | colddew/mix-python | 4d6280dde02839bbf29f2c6660828a50086fb8f8 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
import urllib
import urllib2
import cookielib
import re
import string
# GPA calculation
class SDU:
# Class initialization
def __init__(self):
# Login URL
self.loginUrl = 'http://jwxt.sdu.edu.cn:7890/pls/wwwbks/bks_login2.login'
# Grades URL
self.gradeUrl = 'http://jwxt.sdu.edu.cn:7890/pls/wwwbks/bkscjcx.curscopre'
# CookieJar object
self.cookies = cookielib.CookieJar()
# Form data
self.postdata = urllib.urlencode({
'stuid': '201200131012',
'pwd': 'xxxxx'
})
# Build the opener
self.opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(self.cookies))
# List of credits
self.credit = []
# List of grades
self.grades = []
def getPage(self):
req = urllib2.Request(url=self.loginUrl, data=self.postdata)
result = self.opener.open(req)
result = self.opener.open(self.gradeUrl)
# Return this semester's grades page
return result.read().decode('gbk')
def getGrades(self):
# Fetch this semester's grades page
page = self.getPage()
# Regex matching
myItems = re.findall('<TR>.*?<p.*?<p.*?<p.*?<p.*?<p.*?>(.*?)</p>.*?<p.*?<p.*?>(.*?)</p>.*?</TR>', page, re.S)
for item in myItems:
self.credit.append(item[0].encode('gbk'))
self.grades.append(item[1].encode('gbk'))
self.getGrade()
def getGrade(self):
# Compute the total GPA
sum = 0.0
weight = 0.0
for i in range(len(self.credit)):
if (self.grades[i].isdigit()):
sum += string.atof(self.credit[i]) * string.atof(self.grades[i])
weight += string.atof(self.credit[i])
print "GPA for this semester:", sum / weight
if __name__ == '__main__':
sdu = SDU()
sdu.getGrades()
| 27.857143 | 117 | 0.54245 | 202 | 1,755 | 4.643564 | 0.460396 | 0.017058 | 0.022388 | 0.025586 | 0.116205 | 0.071429 | 0.071429 | 0.071429 | 0 | 0 | 0 | 0.025765 | 0.292308 | 1,755 | 62 | 118 | 28.306452 | 0.729469 | 0.065527 | 0 | 0 | 0 | 0.025 | 0.143385 | 0.044923 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.125 | null | null | 0.025 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
88b161d70bb74d5ce1cdfdef84a9ab4306c7f3c1 | 865 | py | Python | resources/constants.py | kuntzer/SALSA-public | 79fd601d3999ac977bbc97be010b2c4ef81e4c35 | [
"BSD-3-Clause"
] | 1 | 2021-07-30T09:59:41.000Z | 2021-07-30T09:59:41.000Z | resources/constants.py | kuntzer/SALSA-public | 79fd601d3999ac977bbc97be010b2c4ef81e4c35 | [
"BSD-3-Clause"
] | null | null | null | resources/constants.py | kuntzer/SALSA-public | 79fd601d3999ac977bbc97be010b2c4ef81e4c35 | [
"BSD-3-Clause"
] | 1 | 2021-07-30T10:38:54.000Z | 2021-07-30T10:38:54.000Z | # Speed of light in m/s
speed_light = 299792458
# J s
Planck_constant = 6.626e-34
# m Wavelength of the V band
wavelenght_visual = 550e-9
# flux density (Jy) in V for a 0 mag star
Fv = 3640.0
# photons s-1 m-2 in Jy
Jy = 1.51e7
# 1 rad = 57.29578 deg
RAD = 57.29578
# 1 AU in cm
AU = 149.59787e11
# Radii in cm
R_Earth = 6378.e5
R_MOON = 1737.4e5
atmosphere = 100.e5
# Julian day corresponding to the beginning of 2018
JD_2018 = 2458119.5
# km in cm
KM = 1.e5
# m2 in cm2
M2 = 1.e4
# Gravitational constant in N (m/kg)2
G = 6.67384e-11
# Gravitational parameter (mu = G *M) for Earth in m^3/s^2
mu_Earth = 3.986e14
# Timestamp for the initial time in the simulation
# timestamp 2018-01-01 00:00 (GMT as used in STK) (valid for Python/Unix)
timestamp_2018_01_01 = 1514764800
sideral_day = 23.9344696 #[h]
version = '0.8'
| 17.3 | 73 | 0.700578 | 169 | 865 | 3.502959 | 0.573965 | 0.065878 | 0.076014 | 0.086149 | 0.091216 | 0 | 0 | 0 | 0 | 0 | 0 | 0.24238 | 0.203468 | 865 | 49 | 74 | 17.653061 | 0.616836 | 0.544509 | 0 | 0 | 0 | 0 | 0.007979 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
31f1e15db446ea3c3af96e18c60a6b2f5449c7df | 390 | py | Python | app/core/migrations/0026_boec_unreadactivitycount.py | VMatyagin/recipe-rest | 46e456330458a71b3315163a68ad82b8e58ca365 | [
"MIT"
] | null | null | null | app/core/migrations/0026_boec_unreadactivitycount.py | VMatyagin/recipe-rest | 46e456330458a71b3315163a68ad82b8e58ca365 | [
"MIT"
] | 1 | 2022-03-11T14:25:08.000Z | 2022-03-11T14:25:08.000Z | app/core/migrations/0026_boec_unreadactivitycount.py | VMatyagin/recipe-rest | 46e456330458a71b3315163a68ad82b8e58ca365 | [
"MIT"
] | null | null | null | # Generated by Django 3.1.13 on 2021-07-16 09:49
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("core", "0025_activity_warning"),
]
operations = [
migrations.AddField(
model_name="boec",
name="unreadActivityCount",
field=models.IntegerField(default=0),
),
]
| 20.526316 | 49 | 0.602564 | 40 | 390 | 5.8 | 0.85 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075269 | 0.284615 | 390 | 18 | 50 | 21.666667 | 0.756272 | 0.117949 | 0 | 0 | 1 | 0 | 0.140351 | 0.061404 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
31f556d3d6ea0c382b77b405f95d1a39b08d9698 | 2,679 | py | Python | plans/migrations/0005_recurring_payments.py | feedgurus/django-plans | cdb9019fa8651dc848a611e341a38e8275f1dbe3 | [
"MIT"
] | 240 | 2018-08-10T04:01:15.000Z | 2022-03-25T06:49:56.000Z | plans/migrations/0005_recurring_payments.py | feedgurus/django-plans | cdb9019fa8651dc848a611e341a38e8275f1dbe3 | [
"MIT"
] | 76 | 2018-07-29T12:37:44.000Z | 2022-03-29T19:38:52.000Z | plans/migrations/0005_recurring_payments.py | feedgurus/django-plans | cdb9019fa8651dc848a611e341a38e8275f1dbe3 | [
"MIT"
] | 55 | 2018-08-06T12:25:04.000Z | 2022-03-21T19:52:43.000Z | # Generated by Django 3.0.5 on 2020-04-15 07:32
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('plans', '0004_create_user_plans'),
]
operations = [
migrations.AddField(
model_name='planpricing',
name='has_automatic_renewal',
field=models.BooleanField(default=False, help_text='Use automatic renewal if possible?', verbose_name='has automatic renewal'),
),
migrations.AlterField(
model_name='plan',
name='order',
field=models.PositiveIntegerField(db_index=True, editable=False, verbose_name='order'),
),
migrations.AlterField(
model_name='quota',
name='order',
field=models.PositiveIntegerField(db_index=True, editable=False, verbose_name='order'),
),
migrations.CreateModel(
name='RecurringUserPlan',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('token', models.CharField(blank=True, default=None, help_text='Token, that will be used for payment renewal. Depends on used payment provider', max_length=255, null=True, verbose_name='recurring token')),
('payment_provider', models.CharField(blank=True, default=None, help_text='Provider, that will be used for payment renewal', max_length=255, null=True, verbose_name='payment provider')),
('amount', models.DecimalField(blank=True, db_index=True, decimal_places=2, max_digits=7, null=True, verbose_name='amount')),
('tax', models.DecimalField(blank=True, db_index=True, decimal_places=2, max_digits=4, null=True, verbose_name='tax')),
('currency', models.CharField(max_length=3, verbose_name='currency')),
('has_automatic_renewal', models.BooleanField(default=False, help_text='Automatic renewal is enabled for associated plan. If False, the plan renewal can be still initiated by user.', verbose_name='has automatic plan renewal')),
('card_expire_year', models.IntegerField(blank=True, null=True)),
('card_expire_month', models.IntegerField(blank=True, null=True)),
('pricing', models.ForeignKey(blank=True, default=None, help_text='Recurring pricing', null=True, on_delete=django.db.models.deletion.CASCADE, to='plans.Pricing')),
('user_plan', models.OneToOneField(on_delete=django.db.models.deletion.CASCADE, related_name='recurring', to='plans.UserPlan')),
],
),
]
| 58.23913 | 243 | 0.659948 | 311 | 2,679 | 5.533762 | 0.347267 | 0.063916 | 0.025567 | 0.04416 | 0.441604 | 0.441604 | 0.3405 | 0.22545 | 0.175479 | 0.175479 | 0 | 0.014232 | 0.213139 | 2,679 | 45 | 244 | 59.533333 | 0.802182 | 0.016797 | 0 | 0.25641 | 1 | 0.025641 | 0.240122 | 0.024316 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.051282 | 0 | 0.128205 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
31fa622fa8e904607330e829284532a853941476 | 3,003 | py | Python | qtop.py | tinaba96/fn2q | cbee3ecaf30563826172a3c2e86cd82458fe08e3 | [
"Apache-2.0"
] | null | null | null | qtop.py | tinaba96/fn2q | cbee3ecaf30563826172a3c2e86cd82458fe08e3 | [
"Apache-2.0"
] | null | null | null | qtop.py | tinaba96/fn2q | cbee3ecaf30563826172a3c2e86cd82458fe08e3 | [
"Apache-2.0"
] | null | null | null | import torch.nn as nn
import numpy
class qtop(): # quantization operators
def __init__(self, model, bw):
self.lev = pow(2., int(bw) - 1) #quantization levels
#self.max = (self.lev - 1.) / self.lev #maximum number
#self.max = (self.lev - 1.) / (self.lev*2.0) #maximum number
#self.max = (self.lev - 1.) / (self.lev*4.0) #maximum number
self.max = (self.lev - 1.) / (self.lev*8.0) #maximum number
# count the number of Conv2d
count_Conv2d = 0
for m in model.modules():
if isinstance(m, nn.Conv2d):
count_Conv2d = count_Conv2d + 1
start_range = 1
end_range = count_Conv2d
#end_range = count_Conv2d - 1 #remove last layer
#end_range = count_Conv2d - 2 #remove 1st layer and last layer
self.q_range = numpy.linspace(start_range,
end_range, end_range-start_range+1)\
.astype('int').tolist()
self.num_of_params = len(self.q_range)
self.saved_params = []
self.target_params = []
self.target_modules = []
index = 0
#index = -1 #remove 1st layer
for m in model.modules():
if isinstance(m, nn.Conv2d):
index = index + 1
if index in self.q_range:
tmp = m.weight.data.clone()
self.saved_params.append(tmp)
self.target_modules.append(m.weight)
def quantization(self):
self.clampConvParams()
self.saveParams()
self.quantizeConvParams()
def clampConvParams(self):
for index in range(self.num_of_params):
#self.target_modules[index].data = \
# self.target_modules[index].data.clamp(-1.0, self.max)
#self.target_modules[index].data = \
# self.target_modules[index].data.clamp(-0.5, self.max)
#self.target_modules[index].data = \
# self.target_modules[index].data.clamp(-0.25, self.max)
self.target_modules[index].data = \
self.target_modules[index].data.clamp(-0.125, self.max)
def saveParams(self):
for index in range(self.num_of_params):
self.saved_params[index].copy_(self.target_modules[index].data)
def quantizeConvParams(self):
for index in range(self.num_of_params):
tmp = self.target_modules[index].data
#tmp = tmp.mul(self.lev).add(0.5).floor().div(self.lev)
#tmp = tmp.mul(self.lev*2.0).add(0.5).floor().div(self.lev*2.0)
#tmp = tmp.mul(self.lev*4.0).add(0.5).floor().div(self.lev*4.0)
tmp = tmp.mul(self.lev*8.0).add(0.5).floor().div(self.lev*8.0)
self.target_modules[index].data = tmp
def restore(self):
for index in range(self.num_of_params):
self.target_modules[index].data.copy_(self.saved_params[index])
| 42.295775 | 76 | 0.562105 | 397 | 3,003 | 4.120907 | 0.18136 | 0.072738 | 0.145477 | 0.174817 | 0.539731 | 0.488998 | 0.432763 | 0.40709 | 0.368582 | 0.326406 | 0 | 0.030332 | 0.308358 | 3,003 | 70 | 77 | 42.9 | 0.757342 | 0.279054 | 0 | 0.173913 | 0 | 0 | 0.00145 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.130435 | false | 0 | 0.043478 | 0 | 0.195652 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
31fa8dc5af0963a71eaeacbd55db31c72c0b792a | 1,101 | py | Python | 22_Generate Parentheses.py | jasoriya/leetcode | 07193f3d79fc3b10b2e9d7cef0de2d8a93759d6d | [
"MIT"
] | null | null | null | 22_Generate Parentheses.py | jasoriya/leetcode | 07193f3d79fc3b10b2e9d7cef0de2d8a93759d6d | [
"MIT"
] | null | null | null | 22_Generate Parentheses.py | jasoriya/leetcode | 07193f3d79fc3b10b2e9d7cef0de2d8a93759d6d | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Wed May 15 13:26:06 2019
@author: Shreyans
"""
"""
Given n pairs of parentheses, write a function to generate all combinations of well-formed parentheses.
For example, given n = 3, a solution set is:
[
"((()))",
"(()())",
"(())()",
"()(())",
"()()()"
]
"""
from random import choice
class Solution:
def generateParenthesis(self, n: int) -> List[str]:
parantheses_map = {'(':n-1, ')':n}
combinations = []
i=0
while i < 2**n: #how to end this while loop?
i+=1
string = '('
temp_map = parantheses_map.copy()
while list(temp_map.values()) != [0, 0]:
append_this = choice(list(parantheses_map.keys()))
val = temp_map[append_this]
if val > 0 and (temp_map['('] < temp_map[')'] or append_this is '('):
temp_map[append_this] = val - 1
string+=append_this
if string not in combinations:
combinations.append(string)
| 26.853659 | 104 | 0.501362 | 126 | 1,101 | 4.269841 | 0.539683 | 0.078067 | 0.048327 | 0.063197 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030429 | 0.343324 | 1,101 | 40 | 105 | 27.525 | 0.713693 | 0.09446 | 0 | 0 | 1 | 0 | 0.008253 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.055556 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
31ffdf8e94530736c021ce0cdfdd1a1fcf6317d0 | 5,751 | py | Python | ranking/bayesian_sets.py | kienpt/site_discovery_public | 61440a8400bcc018c4dd9f8d2a810971ef548d8e | [
"Apache-1.1"
] | 4 | 2020-10-12T14:26:36.000Z | 2021-08-19T17:26:00.000Z | ranking/bayesian_sets.py | kienpt/site_discovery_public | 61440a8400bcc018c4dd9f8d2a810971ef548d8e | [
"Apache-1.1"
] | null | null | null | ranking/bayesian_sets.py | kienpt/site_discovery_public | 61440a8400bcc018c4dd9f8d2a810971ef548d8e | [
"Apache-1.1"
] | null | null | null | from math import sqrt
from numpy import *
import numpy as np
import sys
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF
from sklearn.decomposition import PCA
from sklearn.decomposition import TruncatedSVD
from lemma_tokenizer import LemmaTokenizer
class Bayesian_Sets(object):
def __init__(self, seeds, representation, value_type, decomposition=None):
"""
Parameters
----------
value_type : "tfidf" or "binary"
decomposition: "pca", "nmf", "lsa"
"""
self.text_type = representation
# min_df = 2: filter any token that appears in less than 2 documents
# min_df = 0.125: filter any token that appears in less than 0.125*number_of_docs documents
mdf = max(2/float(len(seeds)), 0.1)
# Create vectorizer
self.value_type = value_type
if value_type=="binary":
#self.vectorizer = CountVectorizer(binary=True, stop_words='english', ngram_range=(1,2))
#self.vectorizer = CountVectorizer(binary=True, stop_words='english', ngram_range=(1,2), max_df=0.75, min_df=0, max_features=1000)
self.vectorizer = CountVectorizer(binary=True, stop_words='english', ngram_range=(1,2), max_df=1.0, min_df=mdf)
elif value_type=="tfidf":
#self.vectorizer = TfidfVectorizer(tokenizer=LemmaTokenizer(), stop_words='english', ngram_range=(1,2), max_df=1.0, min_df=mdf, use_idf=False, norm='l1', sublinear_tf=True)
self.vectorizer = TfidfVectorizer(stop_words='english', ngram_range=(1,2), max_df=1.0, min_df=mdf, use_idf=False, norm='l1', sublinear_tf=False)
else:
print "Wrong value type of Bayesian Sets."
sys.exit()
self.seeds = seeds
self.vect_seeds = self._vectorize_seeds()
for i, v in enumerate(self.vect_seeds):
print self._count(v)
print "Initialized Bayesian sets object. text type = ", self.text_type
#decomposition = 'nmf' # uncomment to use decomposition. Only NMF works because bayesian sets require non-negative inputs
if decomposition=='nmf':
self.model = NMF(n_components=200, init='nndsvd', random_state=0)
print "Created nmf model"
elif decomposition == 'pca':
self.model = PCA(n_components=100)
print "Created pca model"
elif decomposition == 'lsa':
self.model = TruncatedSVD(n_components=100)
print "Created lsa model"
self.decomposition = decomposition
def _count(self, vect):
c = 0
for i in vect:
if i:
c += 1
return c
def _vectorize_seeds(self):
print "Vectorizing seed websites..."
docs = [] # list of strings, used for constructing vectorizer
for w in self.seeds:
docs.extend([p.get_text(self.text_type) for p in w])
#self.vect_seeds = self.vectorizer.fit_transform(docs).todense() # Why converting to dense vector?
self.vectorizer.fit(docs)
if self.value_type=="tfidf":
return np.array([w.get_bstf_vsm(self.vectorizer, self.text_type) for w in self.seeds])
elif self.value_type=="binary":
return np.array([w.get_bsbin_vsm(self.vectorizer, self.text_type) for w in self.seeds])
else:
print "Wrong value type"
return None
def update_seeds(self, new_seeds):
self.seeds.extend(new_seeds)
for w in self.seeds:
w.clear()
self.vect_seeds = self._vectorize_seeds()
def score(self, websites):
print "Scoring..."
if self.value_type=="tfidf":
X = np.array([w.get_bstf_vsm(self.vectorizer, self.text_type) for w in websites])
elif self.value_type=="binary":
X = np.array([w.get_bsbin_vsm(self.vectorizer, self.text_type) for w in websites])
else:
print "Wrong value type"
print 'Shape ', X.shape
if self.decomposition:
self.vect_seeds, X = self._reduce_dim(self.vect_seeds, X)
scores = self.score_helper(self.vect_seeds, X)
results = []
for i, w in enumerate(websites):
results.append((w, scores[i]))
return results
def _reduce_dim(self, T, X):
"""
Use decomposition method to reduce dimension of the two vectors T and X.
Concatenate T and X and apply decomposition to the combined vector.
"""
TX = np.concatenate((T, X), axis=0)
print "Transforming"
transformed_X = self.model.fit_transform(TX)
print "Done transform"
split = T.shape[0]
new_T, _, new_X = np.vsplit(transformed_X, (split, split))
return new_T, new_X
def _reduce_dim_separated(self, T, X):
print "Transforming"
new_T = self.model.fit_transform(T)
new_X = self.model.fit_transform(X)
print "Done transform"
return new_T, new_X
def score_helper(self, D, X) :
''' D-> Query Set
X-> Data Set'''
#Compute Bayesian Sets Parameters
c = 2
N = D.shape[0]
T = concatenate((D,X), axis=0)
m = divide(sum(T, axis=0),float(T.shape[0]))
a = multiply(m, c)
b = multiply(subtract(1,m),c)
at = add(a,sum(D, axis=0))
bt = subtract(add(b,N),sum(D, axis=0))
C = sum(subtract(add(subtract(log(add(a,b)),log(add(add(a,b),N))), log(bt)), log (b)))
q = transpose(add(subtract(subtract(log(at),log(a)),log(bt)), log(b)))
score_X = transpose(add(C, dot(X,q)))
return asarray(score_X)
| 40.5 | 184 | 0.616936 | 779 | 5,751 | 4.414634 | 0.238768 | 0.034022 | 0.024426 | 0.030532 | 0.309974 | 0.224193 | 0.174179 | 0.174179 | 0.174179 | 0.174179 | 0 | 0.014496 | 0.268301 | 5,751 | 141 | 185 | 40.787234 | 0.802757 | 0.149365 | 0 | 0.182692 | 0 | 0 | 0.071397 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.096154 | null | null | 0.144231 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ee01405199aa354afc6ae6eea478f773850f75ba | 1,073 | py | Python | py/orbit/injection/__init__.py | LeoRya/py-orbit | 340b14b6fd041ed8ec2cc25b0821b85742aabe0c | [
"MIT"
] | 17 | 2018-02-09T23:39:06.000Z | 2022-03-04T16:27:04.000Z | py/orbit/injection/__init__.py | LeoRya/py-orbit | 340b14b6fd041ed8ec2cc25b0821b85742aabe0c | [
"MIT"
] | 22 | 2017-05-31T19:40:14.000Z | 2021-09-24T22:07:47.000Z | py/orbit/injection/__init__.py | LeoRya/py-orbit | 340b14b6fd041ed8ec2cc25b0821b85742aabe0c | [
"MIT"
] | 37 | 2016-12-08T19:39:35.000Z | 2022-02-11T19:59:34.000Z | ## \namespace orbit::injection
## \brief These classes are for turn-by-turn injection of particles.
##
## Classes:
## - InjectParts - Class. Does the turn-by-turn injection
## - Joho - Class for generating JOHO style particle distributions
## - addTeapotInjectionNode - Adds an injection node to a teapot lattice
## - TeapotInjectionNode - Creates a teapot style injection Node
from injectparticles import InjectParts
from joho import JohoTransverse, JohoLongitudinal
from InjectionLatticeModifications import addTeapotInjectionNode
from TeapotInjectionNode import TeapotInjectionNode
from distributions import UniformLongDist, UniformLongDistPaint, GULongDist, SNSESpreadDist, SNSESpreadDistPaint
__all__ = []
__all__.append("addTeapotInjectionNode")
__all__.append("TeapotInjectionNode")
__all__.append("InjectParts")
__all__.append("JohoTransverse")
__all__.append("JohoLongitudinal")
__all__.append("UniformLongDist")
__all__.append("UniformLongDistPaint")
__all__.append("SNSESpreadDist")
__all__.append("SNSESpreadDistPaint")
__all__.append("GULongDist")
| 39.740741 | 112 | 0.812675 | 103 | 1,073 | 8.038835 | 0.427184 | 0.108696 | 0.024155 | 0.045894 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.10438 | 1,073 | 26 | 113 | 41.269231 | 0.861603 | 0.338304 | 0 | 0 | 0 | 0 | 0.231214 | 0.031792 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.3125 | 0 | 0.3125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
ee17216ba3d621105a01b8ea844b5782373a584b | 3,717 | py | Python | lucent/optvis/param/lowres.py | TomFrederik/lucent | 266e45832fbfb65f77c0c7ad907c7c51627bf567 | [
"Apache-2.0"
] | 2 | 2022-01-14T13:58:51.000Z | 2022-01-25T16:06:55.000Z | lucent/optvis/param/lowres.py | TomFrederik/lucent | 266e45832fbfb65f77c0c7ad907c7c51627bf567 | [
"Apache-2.0"
] | null | null | null | lucent/optvis/param/lowres.py | TomFrederik/lucent | 266e45832fbfb65f77c0c7ad907c7c51627bf567 | [
"Apache-2.0"
] | null | null | null | # Copyright 2020 The Lucent Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Provides lowres_tensor()."""
from __future__ import absolute_import, division, print_function
from typing import Union, List, Tuple, Optional, Callable
import einops
import numpy as np
import torch
import torch.nn.functional as F
def lowres_tensor(
shape: Union[List, Tuple, torch.Size],
underlying_shape: Union[List, Tuple, torch.Size],
offset: Optional[Union[bool, int, List]] = None,
sd: Optional[float] = 0.01,
) -> Tuple[List[torch.Tensor], Callable]:
"""Produces a tensor paramaterized by a interpolated lower resolution tensor.
This is like what is done in a laplacian pyramid, but a bit more general. It
can be a powerful way to describe images.
:param shape: desired shape of resulting tensor, should be of format (B, C, H, W) #TODO support more shapes
:type shape: Union[List, Tuple, torch.Size]
:param underlying_shape: shape of the tensor being resized into final tensor
:type underlying_shape: Union[List, Tuple, torch.Size]
:param offset: Describes how to offset the interpolated vector (like phase in a
Fourier transform). If None, apply no offset. If int, apply the same
offset to each dimension; if a list use each entry for each dimension.
If False, do not offset. If True, offset by half the ratio between shape and underlying shape (analogous to 90
degrees), defaults to None
:type offset: Optional[Union[bool, int, List]], optional
:param sd: Standard deviation of initial tensor variable., defaults to 0.01
:type sd: Optional[float], optional
:return: One-element list containing the low resolution tensor and the corresponding image function returning the tensor on call.
:rtype: Tuple[List[torch.Tensor], Callable]
"""
if isinstance(offset, float):
raise TypeError('Passing float offset is deprecated!')
# TODO pass device as argument to avoid mixing devices
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
underlying_t = (torch.randn(*underlying_shape) * sd).to(device).requires_grad_(True)
if offset is not None:
# Deal with non-list offset
if not isinstance(offset, list):
offset = len(shape) * [offset]
# Deal with the non-int offset entries
for n in range(len(offset)):
if offset[n] is True:
offset[n] = shape[n] / underlying_shape[n] / 2
if offset[n] is False:
offset[n] = 0
offset[n] = int(offset[n])
def inner():
t = torch.nn.functional.interpolate(einops.rearrange(underlying_t, 'b c h w -> 1 c b h w'), (shape[0], shape[2], shape[3]), mode="trilinear")
t = einops.rearrange(t, 'dummy c b h w -> (dummy b) c h w')
if offset is not None:
# Actually apply offset by padding and then cropping off the excess.
t = F.pad(t, offset, "reflect")
t = t[:shape[0], :shape[1], :shape[2], :shape[3]]
return t
    return [underlying_t], inner
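The `offset` handling in `lowres_tensor` (None / bool / int / per-dimension list) can be restated as a small pure-Python helper. This is an illustrative sketch of the normalization rules above, not part of the library:

```python
# Sketch of lowres_tensor's offset normalization (illustrative only):
# None -> no offset; a scalar is broadcast to every dimension; True becomes
# half the shape/underlying_shape ratio; False becomes 0; entries end as ints.
def normalize_offset(offset, shape, underlying_shape):
    if offset is None:
        return None
    if not isinstance(offset, list):
        offset = len(shape) * [offset]
    normalized = []
    for n, entry in enumerate(offset):
        if entry is True:
            entry = shape[n] / underlying_shape[n] / 2
        elif entry is False:
            entry = 0
        normalized.append(int(entry))
    return normalized


print(normalize_offset(True, (1, 3, 224, 224), (1, 3, 56, 56)))  # [0, 0, 2, 2]
```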


# ---- RFIDIOt-master/hitag2brute.py (kaosbeat/datakamp, MIT) ----
#!/usr/bin/python
# hitag2brute.py - Brute Force hitag2 password
#
# Adam Laurie <adam@algroup.co.uk>
# http://rfidiot.org/
#
# This code is copyright (c) Adam Laurie, 2008, All rights reserved.
# For non-commercial use only, the following terms apply - for all other
# uses, please contact the author:
#
# This code is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This code is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
import rfidiot
import sys
import os
import time
try:
    card = rfidiot.card
except Exception:
    print("Couldn't open reader!")
    os._exit(True)
args= rfidiot.args
card.info('hitag2brute v0.1c')
pwd= 0x00
# start at specified PWD
if len(args) == 1:
pwd= int(args[0],16)
card.settagtype(card.ALL)
if card.select():
    print('Bruteforcing tag:', card.uid)
else:
    print('No tag found!')
    os._exit(True)
while 42:
    PWD = '%08X' % pwd
    if card.h2login(PWD):
        print('Password is %s' % PWD)
        os._exit(False)
    else:
        if not pwd % 16:
            print(PWD + ' \r', end='')
        if not card.select():
            print('No tag found! Last try: %s\r' % PWD, end='')
        else:
            pwd = pwd + 1
        sys.stdout.flush()
    if pwd == 0xffffffff:
        os._exit(True)
os._exit(False)
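The loop above formats each 32-bit candidate key with `'%08X'`, i.e. zero-padded uppercase hex of width 8:

```python
# '%08X' zero-pads a candidate to eight uppercase hex digits,
# matching the 32-bit Hitag2 password space searched above.
candidates = [0, 15, 0x1A2B3C4D, 0xFFFFFFFF]
passwords = ['%08X' % pwd for pwd in candidates]
print(passwords)  # ['00000000', '0000000F', '1A2B3C4D', 'FFFFFFFF']
```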


# ---- fares_validator/warnings.py (TransitApp/gtfs-fares-v2-validator, MIT) ----
# generic warnings
UNEXPECTED_FIELDS = 'A GTFS fares-v2 file has column name(s) not defined in the specification.'
UNUSED_AREA_IDS = 'Areas defined in areas.txt are unused in other fares files.'
UNUSED_NETWORK_IDS = 'Networks defined in routes.txt are unused in other fares files.'
UNUSED_TIMEFRAME_IDS = 'Timeframes defined in timeframes.txt are unused in other fares files.'
# areas.txt
NO_AREAS = 'No areas.txt was found, will assume no areas exist.'
# routes.txt
NO_ROUTES = 'No routes.txt was found, will assume no networks exist.'
# stops.txt
NO_STOPS = 'No stops.txt was found, will assume stops.txt does not reference any areas.'
UNUSED_AREAS_IN_STOPS = 'Areas defined in areas.txt are unused in stops.txt or stop_times.txt.'
# calendar.txt, calendar_dates.txt
NO_SERVICE_IDS = 'Neither calendar.txt or calendar_dates.txt was found, will assume no service_ids for fares data.'
# timeframes.txt
NO_TIMEFRAMES = 'No timeframes.txt was found, will assume no timeframes exist.'
# rider_categories.txt
MAX_AGE_LESS_THAN_MIN_AGE = 'An entry in rider_categories.txt has max_age less than or equal to min_age.'
NO_RIDER_CATEGORIES = 'No rider_categories.txt was found, will assume no rider_categories exist.'
VERY_LARGE_MAX_AGE = 'An entry in rider_categories.txt has a very large max_age.'
VERY_LARGE_MIN_AGE = 'An entry in rider_categories.txt has a very large min_age.'
# fare_containers.txt
NO_FARE_CONTAINERS = 'No fare_containers.txt was found, will assume no fare_containers exist.'
# fare_products.txt
NO_FARE_PRODUCTS = 'No fare_products.txt was found, will assume no fare_products exist.'
OFFSET_AMOUNT_WITHOUT_OFFSET_UNIT = 'An offset_amount in fare_products.txt is defined without an offset_unit, so duration_unit will be used.'
# fare_leg_rules.txt
NO_FARE_LEG_RULES = 'No fare_leg_rules.txt was found, will assume no fare_leg_rules exist.'
# fare_transfer_rules.txt
NO_FARE_TRANSFER_RULES = 'No fare_transfer_rules.txt was found, will assume no fare_transfer_rules exist.'
UNUSED_LEG_GROUPS = 'Leg groups defined in fare_leg_rules.txt are unused in fare_transfer_rules.txt.'


# ---- prla/assignments/a1/birthdays.py (AegirAexx/python-sandbox, Unlicense) ----
from collections import Counter
def birthdays(string):
st = string.split()
dic = Counter(x[0:4] for x in st)
return [tuple([k for k in st if k.startswith(x)])
for x in [x for x in dic if dic[x] > 1]]
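A usage sketch of `birthdays()`; the function is copied here so the example is self-contained. The input format is an assumption inferred from the `x[0:4]` slice: whitespace-separated tokens whose first four characters encode the grouping key (e.g. an `MMDD` date prefix):

```python
from collections import Counter

def birthdays(string):
    st = string.split()
    dic = Counter(x[0:4] for x in st)
    return [tuple([k for k in st if k.startswith(x)])
            for x in [x for x in dic if dic[x] > 1]]

# tokens sharing the prefix "0412" form one group; "0713" appears only once
print(birthdays("0412a 0412b 0713c"))  # [('0412a', '0412b')]
```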


# ---- tools/where.py (china-x-orion/infoeye, MIT) ----
#!/usr/bin/python
"""
Author: rockylinux
E-Mail: Jingzheng.W@gmail.com
"""
import subprocess  # Python 3 replacement for the removed `commands` module
# display the software
# return a list containing installed software
class whereissoftware:
    def __init__(self):
        self.__name = 'whereissoftware'

    def getData(self):
        (status, output) = subprocess.getstatusoutput('dpkg -l | awk \'{print$2, $3}\'')
        #########
        ## 5 is hard coded
        ## please optimize it
        #########
        print([i.split() for i in output.split("\n")[5:]])
        # return output

    def testGetData(self, test):
        if type(test) == type([]):
            for i in test:
                print(i)
        else:
            print(test)


if __name__ == '__main__':
    a = whereissoftware()
    test = a.getData()
    # a.testGetData(test)
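Since the Python 2 `commands` module was removed in Python 3, `subprocess.getstatusoutput` is the drop-in replacement for the call above; it runs a shell command and returns an `(exit_status, output)` pair (this example assumes a POSIX shell with `echo` available):

```python
import subprocess

# Run a trivial shell command and capture its exit status and output.
status, output = subprocess.getstatusoutput('echo hello')
print(status, output)  # 0 hello
```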


# ---- curso_bioinfo/bio01.py (FellowsDevel/learning_python, MIT) ----
######################
# sequence analysis
from Bio.Seq import Seq

seq1 = Seq("ATG")
print('Sequence:', seq1)

# complementary sequence
seq1_comp = seq1.complement()
print('Complementary sequence:', seq1_comp)

# reverse-complementary sequence
seq1_comp_reversa = seq1.reverse_complement()
print('Reverse-complementary sequence:', seq1_comp_reversa)

###########################
# Transcription of a DNA sequence into RNA
dna_seq = Seq("ATG")
rna_seq = dna_seq.transcribe()
print("RNA transcription:", rna_seq)

# back to DNA from the RNA
orig_dna = rna_seq.back_transcribe()
print("Original DNA:", orig_dna)

###########################
# Translation from DNA to RNA to protein
dna_seq = Seq("ATG")
rna_seq = dna_seq.transcribe()
print("RNA transcription:", rna_seq)

# translate the RNA into a protein sequence
protein_seq = rna_seq.translate()
print("Protein sequence:", protein_seq)


# ---- setup.py (vhdeluca/PyStacking, MIT) ----
from setuptools import setup, find_packages
setup(name='pystacking',
version='0.1.0',
description='Python Machine Learning Stacking Maker',
author='Vitor Hugo Medeiros De Luca',
author_email='vitordeluca@gmail.com',
license='MIT',
packages=find_packages(),
python_requires=">=3.5",
tests_require=['pytest'])


# ---- flask/microblog-db/app/__init__.py (qsunny/python, MIT) ----
from flask import Flask
from config import config
from flask_sqlalchemy import SQLAlchemy
# app.config['SECRET_KEY'] = '666666'
# ... add more variables here as needed
# app.config.from_object('config')  # load the configuration file
# app.config.from_object(config[config_name])
# config[config_name].init_app(app)
db = SQLAlchemy()
def create_app(config_name):
app = Flask(__name__) # , static_url_path='/app/static')
app.config.from_object(config[config_name])
config[config_name].init_app(app)
    # views must be imported after db is initialized, otherwise the db import fails
from .main import main as main_blueprint
app.register_blueprint(main_blueprint)
# from .admin import admin as admin_blueprint
# app.register_blueprint(admin_blueprint, url_prefix='/admin')
db.init_app(app)
return app
# from app.front import routes


# ---- get_features/write_tfrecords_sites.py (kslin/miRNA_models, MIT) ----
from optparse import OptionParser
import os
import sys
import time
import numpy as np
import pandas as pd
import tensorflow as tf
import utils
import get_site_features
import tf_utils
np.set_printoptions(threshold=np.inf, linewidth=200)
pd.options.mode.chained_assignment = None
if __name__ == '__main__':
parser = OptionParser()
parser.add_option("--tpm_file", dest="TPM_FILE", help="tpm data")
parser.add_option("--orf_file", dest="ORF_FILE", help="ORF sequences in tsv format")
parser.add_option("--mirseqs", dest="MIR_SEQS", help="tsv with miRNAs and their sequences")
parser.add_option("--mirlen", dest="MIRLEN", type=int)
parser.add_option("-w", "--outfile", dest="OUTFILE", help="location for tfrecords")
parser.add_option("--overlap_dist", dest="OVERLAP_DIST", help="minimum distance between neighboring sites", type=int)
parser.add_option("--only_canon", dest="ONLY_CANON", help="only use canonical sites", default=False, action='store_true')
(options, args) = parser.parse_args()
### READ miRNA DATA and filter for ones to keep ###
MIRNAS = pd.read_csv(options.MIR_SEQS, sep='\t')
MIRNAS = MIRNAS[MIRNAS['use_tpms']]
ALL_GUIDES = sorted(list(MIRNAS['mir'].values))
MIR_DICT = {}
for row in MIRNAS.iterrows():
guide_seq = row[1]['guide_seq']
pass_seq = row[1]['pass_seq']
MIR_DICT[row[1]['mir']] = {
'mirseq': guide_seq,
'site8': utils.rev_comp(guide_seq[1:8]) + 'A',
'one_hot': utils.one_hot_encode(guide_seq[:options.MIRLEN])
}
MIR_DICT[row[1]['mir'] + '*'] = {
'mirseq': pass_seq,
'site8': utils.rev_comp(pass_seq[1:8]) + 'A',
'one_hot': utils.one_hot_encode(pass_seq[:options.MIRLEN])
}
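The `site8` field built above is the reverse complement of miRNA positions 2-8 plus a trailing `A` (the canonical 8mer site). A minimal pure-Python sketch of that construction; the `rev_comp` helper here is an illustrative stand-in for `utils.rev_comp`, and the guide sequence shown is let-7a, used only as an example:

```python
COMPLEMENT = {'A': 'T', 'C': 'G', 'G': 'C', 'T': 'A'}

def rev_comp(seq):
    # reverse the sequence and complement each base
    return ''.join(COMPLEMENT[base] for base in reversed(seq))

guide_seq = 'TGAGGTAGTAGGTTGTATAGTT'  # let-7a guide strand (illustrative)
site8 = rev_comp(guide_seq[1:8]) + 'A'
print(site8)  # CTACCTCA, the canonical let-7 8mer site
```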
### READ EXPRESSION DATA ###
TPM = pd.read_csv(options.TPM_FILE, sep='\t', index_col=0).sort_index()
for mir in ALL_GUIDES:
if mir not in TPM.columns:
raise ValueError('{} given in mirseqs file but not in TPM file.'.format(mir))
num_batches = 10
TPM['batch'] = [ix % num_batches for ix in TPM['ix']]
print("Using mirs: {}".format(ALL_GUIDES))
# read in orf sequences
ORF_SEQS = pd.read_csv(options.ORF_FILE, sep='\t', header=None, index_col=0)
feature_names = ['mir', 'tpm', 'orf_guide_1hot', 'utr3_guide_1hot',
'orf_pass_1hot', 'utr3_pass_1hot']
with tf.python_io.TFRecordWriter(options.OUTFILE) as tfwriter:
for ix, row in enumerate(TPM.iterrows()):
# print progress
if ix % 100 == 0:
print("Processed {}/{} transcripts".format(ix, len(TPM)))
transcript = row[0]
utr3 = row[1]['sequence']
orf = ORF_SEQS.loc[transcript][2]
transcript_sequence = orf + utr3
orf_length = len(orf)
context_dict = tf.train.Features(feature={
'transcript': tf_utils._bytes_feature(transcript.encode('utf-8')),
'batch': tf_utils._int64_feature([row[1]['batch']])
})
total_transcript_sites = 0
features = [[], [], [], [], [], []]
for mir in ALL_GUIDES:
site8 = MIR_DICT[mir]['site8']
mirseq = MIR_DICT[mir]['mirseq']
site8_star = MIR_DICT[mir + '*']['site8']
mirseq_star = MIR_DICT[mir + '*']['mirseq']
features[0].append(tf_utils._bytes_feature(mir.encode('utf-8'))) # mir
features[1].append(tf_utils._float_feature([row[1][mir]])) # tpm
# get sites for guide strand
seqs, locs = get_site_features.get_sites_from_utr(transcript_sequence, site8, overlap_dist=options.OVERLAP_DIST, only_canon=options.ONLY_CANON)
num_orf_sites = len([l for l in locs if l < orf_length])
orf_sites = utils.mir_site_pair_to_ints(mirseq[:options.MIRLEN], ''.join(seqs[:num_orf_sites]))
utr3_sites = utils.mir_site_pair_to_ints(mirseq[:options.MIRLEN], ''.join(seqs[num_orf_sites:]))
features[2].append(tf_utils._int64_feature(orf_sites))
features[3].append(tf_utils._int64_feature(utr3_sites))
total_transcript_sites += len(locs)
                # get sites for passenger strand (star)
seqs, locs = get_site_features.get_sites_from_utr(transcript_sequence, site8_star, overlap_dist=options.OVERLAP_DIST, only_canon=options.ONLY_CANON)
num_orf_sites = len([l for l in locs if l < orf_length])
orf_sites = utils.mir_site_pair_to_ints(mirseq_star[:options.MIRLEN], ''.join(seqs[:num_orf_sites]))
utr3_sites = utils.mir_site_pair_to_ints(mirseq_star[:options.MIRLEN], ''.join(seqs[num_orf_sites:]))
features[4].append(tf_utils._int64_feature(orf_sites))
features[5].append(tf_utils._int64_feature(utr3_sites))
total_transcript_sites += len(locs)
# features[0].append(tf_utils._bytes_feature(mir.encode('utf-8'))) # mir
# features[1].append(tf_utils._float_feature([row[1][mir]])) # tpm
# features[2].append(tf_utils._int64_feature(utils.one_hot_encode(mirseq[:options.MIRLEN]))) # mirseq
# assert len(utils.one_hot_encode(mirseq[:options.MIRLEN])) == 40
# # get sites for guide strand
# seqs, locs = get_site_features.get_sites_from_utr(transcript_sequence, site8, overlap_dist=options.OVERLAP_DIST, only_canon=options.ONLY_CANON)
# num_orf_sites = len([l for l in locs if l < orf_length])
# orf_sites = ''.join(seqs[:num_orf_sites])
# utr3_sites = ''.join(seqs[num_orf_sites:])
# features[3].append(tf_utils._int64_feature(utils.one_hot_encode(orf_sites)))
# features[4].append(tf_utils._int64_feature(utils.one_hot_encode(orf_sites)))
# total_transcript_sites += len(locs)
# features[5].append(tf_utils._int64_feature(utils.one_hot_encode(mirseq_star[:options.MIRLEN]))) # mirseq*
# assert len(utils.one_hot_encode(mirseq_star[:options.MIRLEN])) == 40
# # get sites for guide strand
# seqs, locs = get_site_features.get_sites_from_utr(transcript_sequence, site8_star, overlap_dist=options.OVERLAP_DIST, only_canon=options.ONLY_CANON)
# num_orf_sites = len([l for l in locs if l < orf_length])
# orf_sites = ''.join(seqs[:num_orf_sites])
# utr3_sites = ''.join(seqs[num_orf_sites:])
# features[6].append(tf_utils._int64_feature(utils.one_hot_encode(orf_sites)))
# features[7].append(tf_utils._int64_feature(utils.one_hot_encode(orf_sites)))
# total_transcript_sites += len(locs)
print(total_transcript_sites)
if total_transcript_sites > 0:
feature_dict = tf.train.FeatureLists(feature_list={
feature_names[ix]: tf.train.FeatureList(feature=features[ix]) for ix in range(len(feature_names))
})
# Create the SequenceExample
example = tf.train.SequenceExample(context=context_dict,
feature_lists=feature_dict)
tfwriter.write(example.SerializeToString())
else:
print('Skipping {} because no sites found'.format(transcript))


# ---- 2018/22/caves.py (lvaughn/advent, CC0-1.0) ----
#!/usr/bin/env python3
CAVE_DEPTH = 6969
TARGET_LOC = (9, 796)
GEO_INDEX_CACHE = {
(0, 0): 0,
TARGET_LOC: 0
}
def get_geo_index(x, y):
key = (x, y)
if key not in GEO_INDEX_CACHE:
if y == 0:
GEO_INDEX_CACHE[key] = x * 16807
elif x == 0:
GEO_INDEX_CACHE[key] = y * 48271
else:
GEO_INDEX_CACHE[key] = get_erosion_level(x, y - 1) * get_erosion_level(x - 1, y)
return GEO_INDEX_CACHE[key]
def get_erosion_level(x, y):
return (get_geo_index(x, y) + CAVE_DEPTH) % 20183
total_danger = 0
for x in range(TARGET_LOC[0]+1):
for y in range(TARGET_LOC[1] + 1):
total_danger += get_erosion_level(x, y) % 3
print("Part 1:", total_danger) | 22.030303 | 92 | 0.599725 | 123 | 727 | 3.276423 | 0.317073 | 0.158809 | 0.193548 | 0.158809 | 0.275434 | 0 | 0 | 0 | 0 | 0 | 0 | 0.073724 | 0.272352 | 727 | 33 | 93 | 22.030303 | 0.688091 | 0.028886 | 0 | 0 | 0 | 0 | 0.009915 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086957 | false | 0 | 0 | 0.043478 | 0.173913 | 0.043478 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |


# ---- btcturk_client/tools.py (emre/btcturk-client, MIT) ----
def authenticated_method(func):
def _decorated(self, *args, **kwargs):
if not self.api_key:
raise ValueError("you need to set your API KEY for this method.")
response = func(self, *args, **kwargs)
if response.status_code == 401:
raise ValueError("invalid private/public key for API.")
return response.json()
    return _decorated
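How the decorator behaves in practice, using stand-in client and response classes; all names below are illustrative, not the real btcturk client, and the decorator is copied so the example is self-contained:

```python
def authenticated_method(func):
    def _decorated(self, *args, **kwargs):
        if not self.api_key:
            raise ValueError("you need to set your API KEY for this method.")
        response = func(self, *args, **kwargs)
        if response.status_code == 401:
            raise ValueError("invalid private/public key for API.")
        return response.json()
    return _decorated


class FakeResponse:
    status_code = 200

    def json(self):
        return {"balance": 42}


class FakeClient:
    api_key = None  # unauthenticated by default

    @authenticated_method
    def balance(self):
        return FakeResponse()


client = FakeClient()
try:
    client.balance()
except ValueError as exc:
    print(exc)  # raised because no API key is set

client.api_key = "key"
print(client.balance())  # {'balance': 42}
```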


# ---- varundeboss/apis/urls.py (varundeboss/varundeboss, Apache-2.0) ----
from django.conf.urls import url, include
from django.contrib.auth.models import User
urlpatterns = [
url(r'^test/', include('testapp.urls'), name='Test User/Group API'),
url(r'^resume/', include('apis.jsonresume_org.urls'), name='Json Resume'),
url(r'^schema/', include('apis.schema_org.urls'), name='Schemas'),
url(r'^geoname/', include('apis.geonames_org.urls'), name='Geonames'),
]


# ---- SourceCode/cryptomath.py (Anusha1790/Secure_E-Voting_Mechanism_using_Blind-Signature_and_Digital_Signature; Condor-1.1, RSA-MD, Naumen, Xnet, X11, MS-PL) ----
# Cryptomath Module
import random
def gcd(a, b):
# Returns the GCD of positive integers a and b using the Euclidean Algorithm.
if a>b:
x, y = a, b
else:
y, x = a, b
    while y != 0:
temp = x % y
x = y
y = temp
return x
def extendedGCD(a, b):  # used to find mod inverse
# Returns integers u, v such that au + bv = gcd(a,b).
x, y = a, b
u1, v1 = 1, 0
u2, v2 = 0, 1
while y != 0:
r = x % y
q = (x - r) // y
u, v = u1 - q*u2, v1 - q*v2
x = y
y = r
u1, v1 = u2, v2
u2, v2 = u, v
return (u1, v1)
def findModInverse(a, m):
# Returns the inverse of a modulo m, if it exists.
if gcd(a,m) != 1:
return None
u, v = extendedGCD(a,m)
return u % m
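A quick self-contained check of the property `findModInverse` computes: when gcd(a, m) == 1, the inverse u satisfies a*u ≡ 1 (mod m). Python 3.8+ exposes the same operation as the built-in `pow(a, -1, m)`:

```python
from math import gcd

for a, m in [(3, 7), (17, 3120), (65537, 2**31 - 1)]:
    assert gcd(a, m) == 1
    inv = pow(a, -1, m)        # built-in modular inverse (Python 3.8+)
    assert (a * inv) % m == 1  # the defining property of findModInverse
    print(a, m, inv)
```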
def RabinMiller(n):
# Applies the probabilistic Rabin-Miller test for primality.
if n < 2:
return False
if n == 2:
return True
if n % 2 == 0:
return False
d = n - 1
s = 0
while(d % 2 == 0):
s += 1
d = d // 2
# At this point n - 1 = 2^s*d with d odd.
# Try fifty times to prove that n is composite.
for i in range(50):
a = random.randint(2, n - 1)
if gcd(a, n) != 1:
return False
b = pow(a, d, n)
if b == 1 or b == n - 1:
continue
isWitness = True
r = 1
while(r < s and isWitness):
b = pow(b, 2, n)
if b == n - 1:
isWitness = False
r += 1
if isWitness:
return False
return True
def isPrime(n):
# Determines whether a positive integer n is composite or probably prime.
if n < 2:
return False
smallPrimes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53,
59, 61, 67, 71, 73, 79, 83, 89, 97, 101, 103, 107, 109, 113,
127, 131, 137, 139, 149, 151, 157, 163, 167, 173, 179, 181,
191, 193, 197, 199, 211, 223, 227, 229, 233, 239, 241, 251,
257, 263, 269, 271, 277, 281, 283, 293, 307, 311, 313, 317,
331, 337, 347, 349, 353, 359, 367, 373, 379, 383, 389, 397,
401, 409, 419, 421, 431, 433, 439, 443, 449, 457, 461, 463,
467, 479, 487, 491, 499, 503, 509, 521, 523, 541, 547, 557,
563, 569, 571, 577, 587, 593, 599, 601, 607, 613, 617, 619,
631, 641, 643, 647, 653, 659, 661, 673, 677, 683, 691, 701,
709, 719, 727, 733, 739, 743, 751, 757, 761, 769, 773, 787,
797, 809, 811, 821, 823, 827, 829, 839, 853, 857, 859, 863,
877, 881, 883, 887, 907, 911, 919, 929, 937, 941, 947, 953,
967, 971, 977, 983, 991, 997]
# See if n is a small prime.
if n in smallPrimes:
return True
# See if n is divisible by a small prime.
for p in smallPrimes:
if n % p == 0:
return False
# Apply Fermat test for compositeness.
for base in [2,3,5,7,11]:
if pow(base, n - 1, n) != 1:
return False
# Apply Rabin-Miller test.
return RabinMiller(n)
def findPrime(bits=1024, tries=10000):
# Find a prime with the given number of bits.
x = 2**(bits - 1)
y = 2*x
for i in range(tries):
n = random.randint(x, y)
if n % 2 == 0:
n += 1
if isPrime(n):
return n
    return None


# ---- test/data_processing/test_find_best_worst_lists.py (0xProject/p2p_incentives, Apache-2.0) ----
"""
This module contains unit tests of find_best_worst_lists().
"""
from typing import List, Tuple
import pytest
from data_processing import find_best_worst_lists
from data_types import BestAndWorstLists, InvalidInputError, SpreadingRatio
from .__init__ import RATIO_LIST
# test normal cases
CASES_BEST_WORST_LISTS: List[Tuple[List[SpreadingRatio], BestAndWorstLists]] = [
    # tuples: (input, output)
    (
        [RATIO_LIST[0], RATIO_LIST[1], RATIO_LIST[2]],
        BestAndWorstLists(best=RATIO_LIST[2], worst=RATIO_LIST[0]),
    ),
    (
        [RATIO_LIST[0], RATIO_LIST[1], RATIO_LIST[2], RATIO_LIST[3]],
        BestAndWorstLists(best=RATIO_LIST[2], worst=RATIO_LIST[0]),
    ),
    (
        [RATIO_LIST[3], RATIO_LIST[4], RATIO_LIST[5]],
        BestAndWorstLists(best=RATIO_LIST[4], worst=RATIO_LIST[4]),
    ),
    (
        [RATIO_LIST[4], RATIO_LIST[5], RATIO_LIST[6]],
        BestAndWorstLists(best=RATIO_LIST[6], worst=RATIO_LIST[4]),
    ),
    (
        [RATIO_LIST[5], RATIO_LIST[7], RATIO_LIST[8]],
        BestAndWorstLists(best=RATIO_LIST[5], worst=RATIO_LIST[8]),
    ),
    (
        [RATIO_LIST[3], RATIO_LIST[9], RATIO_LIST[10], RATIO_LIST[11]],
        BestAndWorstLists(best=RATIO_LIST[10], worst=RATIO_LIST[11]),
    ),
    (
        [
            RATIO_LIST[0],
            RATIO_LIST[1],
            RATIO_LIST[2],
            RATIO_LIST[3],
            RATIO_LIST[4],
            RATIO_LIST[5],
            RATIO_LIST[6],
            RATIO_LIST[7],
            RATIO_LIST[8],
            RATIO_LIST[9],
            RATIO_LIST[10],
        ],
        BestAndWorstLists(best=RATIO_LIST[6], worst=RATIO_LIST[0]),
    ),
]
@pytest.mark.parametrize("ratio_list, expected_output", CASES_BEST_WORST_LISTS)
def test_find_best_worst_lists__normal(
    ratio_list: List[SpreadingRatio], expected_output: BestAndWorstLists
) -> None:
    """
    This function tests find_best_worst_lists in normal cases.
    :param ratio_list: list of SpreadingRatio instances
    :param expected_output: an instance of BestAndWorstLists
    :return: None
    """
    actual_output: BestAndWorstLists = find_best_worst_lists(ratio_list)
    for idx in range(2):
        assert len(expected_output[idx]) == len(actual_output[idx])
        for value_idx in range(len(expected_output[idx])):
            if isinstance(expected_output[idx][value_idx], float):
                assert actual_output[idx][value_idx] == pytest.approx(
                    expected_output[idx][value_idx]
                )
            else:  # this is a None
                assert expected_output[idx][value_idx] is actual_output[idx][value_idx]
# test exceptions
def test_find_best_worst_lists__all_none() -> None:
    """
    This function tests find_best_worst_lists when every element is None.
    :return: None
    """
    with pytest.raises(ValueError, match="All entries are None."):
        find_best_worst_lists([RATIO_LIST[3], RATIO_LIST[12], RATIO_LIST[13]])


def test_find_best_worst_lists__empty_input() -> None:
    """
    This function tests find_best_worst_lists when the input is empty.
    :return: None
    """
    with pytest.raises(InvalidInputError):
        find_best_worst_lists([])


def test_find_best_worst_lists__different_length() -> None:
    """
    This function tests find_best_worst_lists when the input length varies.
    :return: None
    """
    with pytest.raises(ValueError, match="Input lists are of different length."):
        find_best_worst_lists([RATIO_LIST[0], RATIO_LIST[1], RATIO_LIST[14]])
ee5f0c55c0a9148be187a8e822f3f9099b855258 | 436 | py | Python | software/input_variable_processing/out_processing/out_parsing.py | Searchlight2/Searchlight2 | 87c85975be49c503f79063d72d321da21a6c6341 | ["MIT"] | 17 | 2018-12-11T13:36:12.000Z | 2022-03-31T06:27:59.000Z

def out_parsing(out_path_parameter, global_variables):
    # default inputs
    out_path = None
    # gets the sub-parameters
    sub_params_list = out_path_parameter.split(",")
    for sub_param in sub_params_list:
        if sub_param.upper().startswith("path=".upper()):
            out_path = sub_param.split("=")[1]
    global_variables["out_path"] = out_path
    print("parsed the out parameter")
    return global_variables
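For illustration, this is how the parser above behaves (function restated compactly in Python 3 so the snippet runs on its own; the `path=` value is a made-up example):

```python
def out_parsing(out_path_parameter, global_variables):
    # Scan comma-separated sub-parameters for a case-insensitive "path=" entry.
    out_path = None
    for sub_param in out_path_parameter.split(","):
        if sub_param.upper().startswith("PATH="):
            out_path = sub_param.split("=")[1]
    global_variables["out_path"] = out_path
    return global_variables

result = out_parsing("path=/tmp/out,format=csv", {})
print(result)  # {'out_path': '/tmp/out'}
```

Sub-parameters that do not start with `path=` (like `format=csv` here) are simply ignored.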
ee61b47e76320e9a05c1dd937770719208d0be1f | 1,381 | py | Python | seqauto/migrations/0028_one_off_set_sequencing_run_date.py | SACGF/variantgrid | 515195e2f03a0da3a3e5f2919d8e0431babfd9c9 | ["RSA-MD"] | 5 | 2021-01-14T03:34:42.000Z | 2022-03-07T15:34:18.000Z
# Generated by Django 3.1.3 on 2021-06-15 02:43
import logging
import re
from datetime import datetime

from django.db import migrations
from django.utils.timezone import make_aware


def _one_off_set_sequencing_run_date(apps, schema_editor):
    SequencingRun = apps.get_model("seqauto", "SequencingRun")
    # Old ones look like: 210226_NB501009_0445_AHV7GVBGXH
    OLD_REGEX = r"^([12]\d{5})_"
    # New ones look like: Exome_20_001_200612_NB501009_0389_AH7TJTBGXG
    NEW_REGEX = r".*_([12]\d{5})_"
    old_pattern = re.compile(OLD_REGEX)
    new_pattern = re.compile(NEW_REGEX)
    sequencing_runs = []
    for sr in SequencingRun.objects.filter(date__isnull=True):
        if m := old_pattern.match(sr.name):
            date_str = m.group(1)
        elif m := new_pattern.match(sr.name):
            date_str = m.group(1)
        else:
            logging.warning("Couldn't work out date from: '%s'", sr.name)
            continue
        dt = datetime.strptime(date_str, "%y%m%d")
        sr.date = make_aware(dt).date()
        sequencing_runs.append(sr)
    if sequencing_runs:
        SequencingRun.objects.bulk_update(sequencing_runs, ["date"], batch_size=500)


class Migration(migrations.Migration):

    dependencies = [
        ('seqauto', '0027_sequencingrun_date'),
    ]

    operations = [
        migrations.RunPython(_one_off_set_sequencing_run_date)
    ]
ee6bda0e1603d22685c5ea9ef912718f6cc0b704 | 606 | py | Python | 346-moving-average-from-data-stream/346-moving-average-from-data-stream.py | jurayev/data-structures-algorithms-solutions | 7103294bafb60117fc77efe4913edcffbeb1ac7a | ["MIT"]
class MovingAverage:
"""
[1,10,3,5]
size = 3
n = 4
[0,1,11,14,19]
"""
def __init__(self, size: int):
self.size = size
self.prefixes = [0]
def next(self, val: int) -> float:
n = len(self.prefixes)
self.prefixes.append(val)
self.prefixes[-1] += self.prefixes[-2]
if n <= self.size:
return self.prefixes[-1] / n
return (self.prefixes[-1] - self.prefixes[n-self.size]) / self.size
# Your MovingAverage object will be instantiated and called as such:
# obj = MovingAverage(size)
# param_1 = obj.next(val) | 26.347826 | 75 | 0.564356 | 84 | 606 | 4.011905 | 0.440476 | 0.284866 | 0.115727 | 0.10089 | 0.148368 | 0 | 0 | 0 | 0 | 0 | 0 | 0.048611 | 0.287129 | 606 | 23 | 76 | 26.347826 | 0.731481 | 0.262376 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0 | 0 | 0.454545 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
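A quick sanity check of the prefix-sum idea, reproducing the `[1, 10, 3, 5]` walkthrough from the docstring (the class is restated here so the snippet runs standalone):

```python
class MovingAverage:
    def __init__(self, size: int):
        self.size = size
        self.prefixes = [0]  # prefixes[k] = sum of the first k values

    def next(self, val: int) -> float:
        n = len(self.prefixes)
        self.prefixes.append(val + self.prefixes[-1])
        if n <= self.size:
            return self.prefixes[-1] / n
        # Window sum = total prefix minus the prefix just before the window.
        return (self.prefixes[-1] - self.prefixes[n - self.size]) / self.size

m = MovingAverage(3)
results = [m.next(v) for v in (1, 10, 3, 5)]
print(results)  # [1.0, 5.5, 4.666..., 6.0]
```

After the four calls, `m.prefixes` is `[0, 1, 11, 14, 19]`, matching the docstring; each average is a difference of two prefixes, so `next` stays O(1) per call.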
ee6f81184fdc128af7c8db59ff5bcfb4f0789978 | 5,429 | py | Python | application/views/authentication.py | aditya369b/Gator-Barter-Website | 5e7151021ae49374a4e18b15b2ccf219baed02a8 | ["MIT"]
"""
Blueprint for all Authentication routes and logic
Use "from passlib.hash import sha256_crypt" as our encryption library
User information stored in the session (Backend) as a dictionary
And accessible from any route
30% written by Alex Kohanim
70% written By Akshay Kasar
If any questions arise from this blueprint, we should both be able to answer
your questions
"""
from flask import Blueprint, render_template, session, request, redirect, flash
import gatorProduct as product # class made by alex
import gatorUser as user # class made by alex
from queries import query
from dbCursor import getCursor
from filterData import filter_data
import bleach
from passlib.hash import sha256_crypt
import time
authentication_blueprint = Blueprint('authentication', __name__)
db = getCursor()[0]
@authentication_blueprint.route("/login", methods=['GET', 'POST'])
def login():
    cursor = getCursor()[1]
    if request.method == "POST":
        email = str(bleach.clean(request.form['email']))
        pwd = str(bleach.clean(request.form['pwd']))
        print(email, " tried to login")
        cursor.execute(query().GET_USER_BY_EMAIL(email))
        data = cursor.fetchone()
        cursor.close()
        if data is None:
            flash("User not found!")
            print("User not found!")
            return render_template("login.html", code=404, message="Page Not Found")
        print(data)
        userObject = user.makeUser(data)
        if sha256_crypt.verify(pwd, userObject.u_pwd):
            print("Authentication Successful")
            flash("Authentication Successful")
            session['sessionUser'] = userObject.toDict()
            session['sessionKey'] = int(time.time() * 1000)
            if 'lazyRegistration' in session:
                # session.pop('lazyRegistration')
                # makeAndInsertMessageForSeller()
                if session['lazyPage'] == 'contact-seller':
                    flash('Message Sent Successfully')
                    return redirect("/contact-seller/" + session['item_id'])
                elif session['lazyPage'] == 'item-posting':
                    flash('Item Pending Approval')
                    return redirect("/item-posting")
            return redirect("/")
        else:
            print("Authentication Failed!")
            flash("Authentication Failed!")
            return render_template("login.html", code=401, message="Unauthorized")
    return render_template("login.html")


@authentication_blueprint.route("/register", methods=['GET', 'POST'])
def register():
    cursor = getCursor()[1]
    if request.method == "POST":
        print(request.form)
        email = str(bleach.clean(request.form['email']))
        password = sha256_crypt.encrypt(
            str(bleach.clean(request.form['password'].strip())))
        confirm_password = sha256_crypt.encrypt(
            str(bleach.clean(request.form['confirm-password'].strip())))
        fname = str(bleach.clean(request.form['fname']))
        lname = str(bleach.clean(request.form['lname']))
        created_ts = str(bleach.clean(time.strftime('%Y-%m-%d %H:%M:%S')))
        updated_ts = str(bleach.clean(time.strftime('%Y-%m-%d %H:%M:%S')))
        if not request.form['password'] == request.form['confirm-password']:
            pass_temp = request.form['password']
            confirm_pass_temp = request.form['confirm-password']
            print(pass_temp, confirm_pass_temp)
            print(pass_temp == confirm_pass_temp)
            flash("passwords do not match")
            return redirect("/register")
        # check if user already exists
        cursor.execute(query().GET_USER_BY_EMAIL(email))
        data = cursor.fetchone()
        if data is not None:
            print("Registeration of " + email +
                  " Failed. User Already Exists!")
            flash("Registeration of " + email +
                  " Failed. User Already Exists!")
            return redirect("/login")
        if not email.endswith("@mail.sfsu.edu"):
            flash("email needs to end with @mail.sfsu.edu")
            return redirect("/register")
        # make new user row in db
        print(query().INSERT_USER(email, password,
                                  fname, lname, created_ts, updated_ts))
        d = cursor.execute(query().INSERT_USER(
            email, password, fname, lname, created_ts, updated_ts))
        print(d)
        db.commit()
        if d == 1:
            cursor.execute(query().GET_USER_BY_EMAIL(email))
            session['sessionUser'] = user.makeUser(cursor.fetchone()).toDict()
            print("Registeration of", email, "Successful")
            flash("Registeration of " + email + " Successful")
            session['sessionKey'] = int(time.time() * 1000)
            if 'lazyRegistration' in session:
                # session.pop('lazyRegistration')
                if session['lazyPage'] == 'contact-seller':
                    return redirect("/contact-seller/" + session['item_id'])
                elif session['lazyPage'] == 'item-posting':
                    return redirect("/item-posting")
            return redirect("/")
    cursor.close()
    print("Simple Register Page Click")
    return render_template("register.html")


@authentication_blueprint.route("/logout")
def logout():
    try:
        session.pop('sessionUser')
    except KeyError:
        pass
    return redirect('/')
ee719a41743e27f679ac28e42919e0c85d9247d8 | 335 | py | Python | services/tests/test_rabbitmq_wrapper_int.py | ToucanBran/gateio-crypto-trading-bot-binance-announcements-new-coins | a83f4f0de3463001855c5e89b8f39c6b20ca3e99 | ["MIT"] | 1 | 2021-11-25T11:34:23.000Z | 2021-11-25T11:34:23.000Z
from services.rabbitmq_wrapper import RabbitMqWrapper
import yaml


def test_given_validRabbitConfigurations_when_openChannelCalled_then_expectSuccessfulConnection(configs):
    wrapper = RabbitMqWrapper(configs["queue"])
    channel = wrapper.open_channel()
    opened = channel.is_open
    wrapper.close_connection()
    assert opened
ee7734053c1d31ed423c75148105c6c7ddde71d8 | 661 | py | Python | mindhome_alpha/erpnext/patches/v13_0/loyalty_points_entry_for_pos_invoice.py | Mindhome/field_service | 3aea428815147903eb9af1d0c1b4b9fc7faed057 | ["MIT"] | 1 | 2021-04-29T14:55:29.000Z | 2021-04-29T14:55:29.000Z
# Copyright (c) 2019, Frappe and Contributors
# License: GNU General Public License v3. See license.txt

from __future__ import unicode_literals
import frappe


def execute():
    '''`sales_invoice` field from loyalty point entry is split into `invoice_type` & `invoice` fields'''
    frappe.reload_doc("Accounts", "doctype", "loyalty_point_entry")

    if not frappe.db.has_column('Loyalty Point Entry', 'sales_invoice'):
        return

    frappe.db.sql(
        """UPDATE `tabLoyalty Point Entry` lpe
        SET lpe.`invoice_type` = 'Sales Invoice', lpe.`invoice` = lpe.`sales_invoice`
        WHERE lpe.`sales_invoice` IS NOT NULL
        AND (lpe.`invoice` IS NULL OR lpe.`invoice` = '')""")
ee7b46f503826b94562405ebac054a8e8b8dd9bd | 1,052 | py | Python | tests/test_Overhang.py | Edinburgh-Genome-Foundry/Overhang | 01dc39dc60361612dd0e56a21a59103e925c2ed1 | ["MIT"]
import overhang
def test_Overhang():
    oh_aaa = overhang.Overhang("AAA")
    assert oh_aaa.overhang == "AAA"
    assert oh_aaa.overhang_rc == "TTT"
    assert oh_aaa.has_multimer is True  # tests count_max_repeat()
    assert oh_aaa.is_good() is True
    assert oh_aaa.gc_content == 0

    oh_ttaa = overhang.Overhang("TTAA")
    assert oh_ttaa.is_palindromic is True
    expected = [
        "L[KIRTMSN]",
        "[YFPILRTHSCDNAGV]*",
        "[IFVL][KN]",
        "L[KIRTMSN]",
        "[YFPILRTHSCDNAGV]*",
        "[IFVL][KN]",
    ]
    for index, pattern in enumerate(oh_ttaa.aa_patterns):
        assert set(pattern) == set(expected[index])

    # first example AAA had no start / stop codons
    oh_atga = overhang.Overhang("ATGA")
    assert oh_atga.has_start_codon is True
    assert oh_atga.has_stop_codon is True
    assert overhang.Overhang("ATGT").has_rc_start_codon is True
    assert overhang.Overhang("TGA").has_rc_stop_codon is True


def test_generate_all_overhangs():
    assert len(overhang.generate_all_overhangs(3)) == 32
ee80975628389032d05fb1deb25e7709ed7637d7 | 1,435 | py | Python | launch_parallel.py | jbr-ai-labs/NeurIPS2020-Flatland-Competition-Solution | bd3c169ffa39063d6bb2b170bca93fa785b7bf71 | ["MIT"] | 6 | 2021-01-21T14:33:56.000Z | 2021-12-31T08:11:41.000Z
import torch
from multiprocessing import Pool, set_start_method
from functools import partial
import argparse
from copy import deepcopy

from train import start_experiment

__device = None


def load_experiments(exp_path, sdevice):
    exp_list = torch.load(exp_path)
    # TODO a better way to choose cuda device for experiment
    # by the way only 4 experiments and 4 cuda is common case
    CUDA_COUNT = 4
    count = 0
    for exp in exp_list:
        if sdevice == "cpu":
            __device = torch.device("cpu")
        elif sdevice == "cuda":
            __device = torch.device("cuda:" + str(count % CUDA_COUNT))
        else:
            assert False
        print(__device)
        exp.device = __device
        count += 1
    return exp_list


# gonna work with ray?
def launch_experiments(exp_list, processes=4):
    set_start_method("spawn")
    with Pool(processes=processes) as pool:
        pool.map(start_experiment, exp_list)


def create_parser():
    parser = argparse.ArgumentParser()
    parser.add_argument('--processes', type=int, default=4, required=False)
    parser.add_argument('--device', type=str, default="cpu", required=False)
    parser.add_argument('--exp-path', type=str, default="generated/exp_list", required=False)
    return parser


if __name__ == "__main__":
    args = create_parser().parse_args()
    launch_experiments(load_experiments(args.exp_path, args.device), args.processes)
ee89a066f66bf3c3cd5e0c8d13d50005a38fed6d | 935 | py | Python | gpgpu/generators/render_volume_defs.py | jpanikulam/experiments | be36319a89f8baee54d7fa7618b885edb7025478 | ["MIT"] | 1 | 2019-04-14T11:40:28.000Z | 2019-04-14T11:40:28.000Z
# %codegen(cl_gen)
import generate_opencl_structs


def main():
    cfg_defd = [
        {
            'type': 'int',
            'length': 1,
            'name': 'method',
        },
        {
            'type': 'float',
            'length': 1,
            'name': 'step_size',
            'default': '0.01'
        },
        {
            'type': 'float',
            'length': 1,
            'name': 'max_dist',
            'default': "25.0"
        },
        {
            'type': 'int',
            'length': 1,
            'name': 'render_mode',
        },
        {
            'type': 'int',
            'length': 1,
            'name': 'max_iteration',
        }
    ]

    definitions = [
        ("RenderVolumeConfig", cfg_defd),
    ]

    destination = "/home/jacob/repos/experiments/gpgpu/kernels/render_volume"
    generate_opencl_structs.write_files(definitions, destination)


if __name__ == '__main__':
    main()
ee8c5c12d544b0e4bc1f671ec6776fbf2f2fa689 | 4,536 | py | Python | Accidence/tstring.py | mkinsz/pymodule | c0b4a750478c526d6c3a7fe0e06f1046b1906075 | ["MIT"]
import cmath
import math
name = 'John'
age = 23
print('%s is %d years old.' % (name, age))
params = {'name': 'John', 'age': 23}
print('%(name)s is %(age)d years old' % params)
mylist = [1, 2, 3]
print("A list: %s" % mylist)
data = ('John', 'Doe', 55.34)
format_string = 'Hello %s %s. Your current balance is $%.2f.'
format_string2 = 'Hello %s %s. Your current balance is $%s.'
print(format_string % data)
print(format_string2 % data)
astring = 'Hello World!'
print(astring.index('o'))
print(astring.count('l'))
print(astring[3:7])
# [start:stop:step].
print(astring[3:10:2])
print(astring[-1])
print(astring[::-1])
print(astring.upper())
print(astring.lower())
print(astring.startswith('Hello'))
print(astring.endswith('asdf'))
print(astring.split())
print(astring.split(' '))
s = "Hey there! what should this string be?"
print('Length of s = %d' % len(s))
print("The first five characters are '%s'" % s[:5]) # Start to 5
print("The next five characters are '%s'" % s[5:10]) # 5 to 10
print("The thirteenth character is '%s'" % s[12]) # Just number 12
print("The characters with odd index are '%s'" % s[1::2]) # (0-based indexing)
print("The last five characters are '%s'" % s[-5:]) # 5th-from-last to end
name = "John"
if name in ["John", "Rick"]:
    print("Your name is either John or Rick.")
statement = True
another_statement = True
if statement == True: print('State True...')
if statement is True:
    print("statement True")
    pass
elif another_statement is True:  # else if
    print('another_statement is True')
    pass
else:
    print('All False...')
    pass
x = [1, 2, 3]
y = [1, 2, 3]
print(x == y) # match the values of the variables
print(x is y) # match the instances themselves
print(not False)
print((not False) == False)
primes = [2, 3, 4, 5]
for prime in primes:
    print(prime)
for x in range(5):
    print(x)
for x in range(3, 8, 2):
    print(x)
count = 0
while True:
    print(count)
    count += 1
    if count >= 5:
        break
for x in range(10):
    # Check if x is even
    if x % 2 == 0:
        continue
    print(x)
# sentence = input("Sentence: ")
# screen_width = 80
# text_width = len(sentence)
# box_width = text_width+6
# left_margin = (screen_width - box_width)
# print()
# print(" " * left_margin + "+" + "-" * (box_width - 2) + "+")
# print(" " * left_margin + " |" + " " * text_width + " |")
# print(" " * left_margin + " | " + sentence + " |")
# print(" " * left_margin + " |" + " " * text_width + " |")
# print(" " * left_margin + "+" + "-" * (box_width - 2) + "+")
# print()
lstring = list("Hello")
print("LString: ", lstring)
slstring = ''.join(lstring)
print(slstring)
names = ["Alice", "Beth", "Cecil", "Dee-Dee", "Earl"]
del names[2]
print(names)
name = list("Perl")
name[1:] = list('ython')
print(name)
numbers = [1, 5]
numbers[1:1] = [2, 3, 4]
print(numbers)
numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(numbers[-3:-1])
print(numbers[-3:])
print(numbers[:3])
print(numbers[:])
print(1/2)
print(1//2)
print(1 % 2)
print(2**3)
print(pow(2, 3))
print(100000000000000000000)
print(0xAF)
print(0o11)
print(abs(-10))
print(round(1./3))
print(math.floor(1.9))
print(math.sqrt(9))
# print(math.sqrt(-1))
print(cmath.sqrt(-1))
temp = 42
print("The temperature is " + repr(temp))
print("C:\\nowhere")
print("C:/nowhere" "/hi")
print(r"C:\nowhere")
from string import Template
s = Template("$x, glorious $x!")
print(s.substitute(x='slurm'))
s = Template("It's ${x}tastic!")
print(s.substitute(x='slurm'))
s = Template("Make $$ selling $x!")
print(s.substitute(x='slurm'))
s = Template("A $thing must never $action.")
d = {}
d['thing'] = 'gentleman'
d['action'] = 'show his socks'
print(s.substitute(d))
print(s.safe_substitute(d))
print("%.10f" % math.pi)
print("%.*s" % (5, "Guide van Rossum"))
print("%10.2f" % math.pi)
print("%010.4f" % math.pi)
print("%-10.2f" % math.pi)
# width = int(input("Please enter width: "))
# price_width = 10
# item_width = width - price_width
# header_format = "%-*s%*s"
# format = "%-*s%*.2f"
# print("=" * width)
# print(header_format % (item_width, "Item", price_width, "Price"))
# print("-" * width)
# print(format % (item_width, "Apples", price_width, 0.4))
# print(format % (item_width, "Pears", price_width, 0.5))
# print(format % (item_width, "Cantaloupes", price_width, 1.92))
# print(format % (item_width, "Dried Apricots (16 oz.)", price_width, 8))
# print(format % (item_width, "Prunes (4 lbs.)", price_width, 12))
# print("=" * width)
ststring = "that's all folks"
print(ststring.title())
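As an aside, the `%`-style formatting used throughout this script has `str.format` and f-string equivalents in Python 3; all three produce identical text (shown here for comparison, the original file deliberately sticks to `%`):

```python
import math

name, age = "John", 23
old = "%s is %d years old. pi = %.2f" % (name, age, math.pi)       # printf-style
new = "{} is {} years old. pi = {:.2f}".format(name, age, math.pi)  # str.format
fstr = f"{name} is {age} years old. pi = {math.pi:.2f}"             # f-string
print(old)
print(old == new == fstr)  # True
```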
c987d91d956b8ffecede2e82f0bdee5c3f9f8769 | 467 | py | Python | tests/utils_test.py | BostonDSA/fest | 2876cd09a5df978145b432fb1e711c9d0322fc25 | ["MIT"] | 10 | 2018-08-05T20:34:03.000Z | 2021-12-01T07:18:35.000Z
from fest import utils
class SomeClass:
    pass


def test_future():
    fut = utils.Future(iter('abcdefg'))
    ret = fut.filter(lambda x: x < 'e').execute()
    exp = list('abcd')
    assert ret == exp


def test_digest():
    ret = {'fizz': 'buzz'}
    assert utils.digest(ret) == 'f45195aef08daea1be5dbb1c7feb5763c5bc7b37'


def test_logger():
    obj = SomeClass()
    ret = utils.logger(obj)
    exp = 'tests.utils_test.SomeClass'
    assert ret.name == exp
c9888c66d0c167c08d08ff0e404deb6b1ffdf3c2 | 903 | py | Python | examples/shorten_url.py | Vincydotzsh/fasmga.py | 31fbacfc5172dced158852cf451cc821a9c0910d | ["MIT"] | 2 | 2021-10-29T21:02:08.000Z | 2021-11-21T13:18:56.000Z
import asyncio
import fasmga
import os

client = fasmga.Client(os.getenv("FGA_TOKEN"))


@client.on("ready")
async def main():
    url = await client.shorten("http://example.com", "your-url-id")
    # replace "your-url-id" with the URL ID you want,
    # or remove it if you want it to generate a random one.
    print("Your shortened URL is:", url)
    print("It will redirect to", url.uri)
    await asyncio.sleep(10)  # wait for 10 seconds if you want to try the shortened site
    await url.edit(
        url="https://google.com", password="mysupersecurepassword"
    )  # edit the redirect URL and add a password
    print(f"Your URL {url} has been edited.")
    print("Now it redirects to", url.uri)
    print("Remember: Passwords aren't stored in URL instances.")
    await client.close()  # closes the client; if you want to keep the event loop running you can comment this line


client.start()
c988c7d9d8c652c7ad8baabf299dfddc7d585ea7 | 1,117 | py | Python | python/simple_run_weekly.py | shibli049/miscellaneous | d46bc1881cb0c72d43c198db2a534a0d25b992ad | [
"MIT"
] | 1 | 2018-08-12T10:15:58.000Z | 2018-08-12T10:15:58.000Z | python/simple_run_weekly.py | shibli049/miscellaneous | d46bc1881cb0c72d43c198db2a534a0d25b992ad | [
"MIT"
] | null | null | null | python/simple_run_weekly.py | shibli049/miscellaneous | d46bc1881cb0c72d43c198db2a534a0d25b992ad | [
"MIT"
] | null | null | null | #! /usr/local/bin/python3.7
"""Usage: cron job
0 */1 * * * simple_run_weekly.py
"""
from datetime import date
import json
from subprocess import run
import logging
logging.basicConfig(level=logging.INFO,
format=' %(asctime)s - %(levelname)s - %(lineno)d - %(message)s')
ALERT_DAY='Thursday'
filename = 'run_flag.json'
CMD = "~/script.sh"
# default
data = {'sent': False}
try:
current_day = date.today().strftime("%A")
logging.info("current day: {}, alert day: {}".format(current_day, ALERT_DAY))
if current_day == ALERT_DAY:
try:
with open(filename, 'r') as f:
data = json.loads("\n".join(f.readlines()))
        except Exception as e:
logging.error("error: {}".format(e))
logging.info("data: {}".format(data))
if not data['sent']:
run(CMD)
data['sent'] = True
except Exception as e:
logging.error("error: {}".format(e))
finally:
t = json.dumps(data, indent=2)
with open(filename, 'w') as f:
f.writelines(t)
| 26.595238 | 85 | 0.57923 | 141 | 1,117 | 4.524823 | 0.510638 | 0.050157 | 0.070533 | 0.08464 | 0.172414 | 0.172414 | 0.172414 | 0.172414 | 0.172414 | 0.172414 | 0 | 0.006024 | 0.256938 | 1,117 | 41 | 86 | 27.243902 | 0.762651 | 0.030439 | 0 | 0.1875 | 0 | 0 | 0.148936 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.125 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c98d7b64e14c7ee91eee04d518b7cdd93828eefa | 5,874 | py | Python | mrwer.py | amali/multiRefWER | d03c6c67897dab3a3a5734fff6d619b3253dc7fd | [
"MIT"
] | 6 | 2020-01-07T07:08:03.000Z | 2021-08-09T11:37:43.000Z | mrwer.py | amali/multiRefWER | d03c6c67897dab3a3a5734fff6d619b3253dc7fd | [
"MIT"
] | null | null | null | mrwer.py | amali/multiRefWER | d03c6c67897dab3a3a5734fff6d619b3253dc7fd | [
"MIT"
] | 2 | 2020-04-05T18:42:59.000Z | 2021-04-16T07:12:45.000Z | #!/usr/bin/python -tt
# this is the main script for MR-WER
#
# Copyright (C) 2017, Qatar Computing Research Institute, HBKU (author: Ahmed Ali)
#
from __future__ import division
import sys
reload(sys)
import codecs
import collections
import re
from subprocess import call
import numpy as np
from mr import *
sys.setdefaultencoding('utf8')
import argparse
def werf(r, h):
# initialisation
D, B = wagner_fischer(r, h)
bt = naive_backtrace(B)
i,d,s,c,aligned_r, aligned_h, operations = align(r, h, bt)
return i,d,s,c,len(r),len(h),aligned_r, aligned_h, operations
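werf delegates the real work to wagner_fischer, naive_backtrace and align from the bundled mr module, which is not shown here. As a self-contained sketch of the same idea (names and layout are mine, not the mr module's API), word error rate is just word-level edit distance divided by the reference length:

```python
def wer(ref, hyp):
    """Word error rate from a plain Wagner-Fischer edit-distance table."""
    r, h = ref.split(), hyp.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i          # i deletions to reach an empty hypothesis
    for j in range(len(h) + 1):
        d[0][j] = j          # j insertions from an empty reference
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # match/substitution
    return d[len(r)][len(h)] / len(r)

# One substitution against a three-word reference -> 1/3
print(wer("the cat sat", "the cat sit"))
```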
def load_file_dict(trans_file):
# we need to handle files with no transcriptions
dict_map={}
with codecs.open(trans_file,'r',encoding='utf-8') as h:
for line in h:
if len(line.rstrip().split(None, 1)) > 1:
(key, val) = line.rstrip().split(None, 1)
dict_map[key] = val
else: dict_map[line.rstrip()] = ""
return dict_map
if __name__ == "__main__":
parser = argparse.ArgumentParser(description='Multi reference evaluation for ASR against one reference or more.')
parser.add_argument('ref', help='one or more reference transcription',nargs='+')
parser.add_argument('hyp', help='ASR hypothesis transcription (must be last argument)')
parser.add_argument('-e', '--show-errors',help='Show error per sentence', action='store_true',default=False)
parser.add_argument("-ma","--show-multiple-alignment",help='Show multi-reference alignment for each sentence',action="store_true",default=False)
parser.add_argument("-a","--show-alignment",help='Show alignment for each sentence',action="store_true", default=False)
args = parser.parse_args()
nref=len(args.ref)
    # load the recognition file
    hyp_dict = load_file_dict(args.hyp)
#with codecs.open(args.hyp,'r',encoding='utf-8') as h:
# hyp_dict = dict(x.rstrip().split(None, 1) for x in h)
#load all the reference files
ref_dict={}
align_ref={}
results_details={}
total_wer=0
# WER here
for idx, ref_file in enumerate(args.ref):
ref_dict[idx]=load_file_dict(ref_file)
        # make sure that all files have the same ids
        if not (sorted(ref_dict[idx].keys()) == sorted(hyp_dict.keys())):
            print "WARNING Files:", ref_file, args.hyp, "have different ids."
i=d=s=c=e=i_t=d_t=s_t=c_t=e_t=wer=wer_t=wc=wc_t=hc=hc_t=0
align_ref[idx]={}
results_details['file_'+str(idx)]={}
        # We calculate the WER per reference file
for key in ref_dict[idx]:
results_details['file_'+str(idx)]['sent_'+key]={}
i,d,s,c,wc,hc,results_details['file_'+str(idx)]['sent_'+key]['aligned_r'], \
results_details['file_'+str(idx)]['sent_'+key]['aligned_h'], \
results_details['file_'+str(idx)]['sent_'+key]['operations'] = werf(ref_dict[idx][key].split(),hyp_dict[key].split())
err=i+d+s
wer=err/wc*100
i_t+=i
d_t+=d
s_t+=s
c_t+=c
wc_t+=wc
hc_t+=hc
            wer='%%WER:%.2f [%d / %d , %d ins, %d del, %d sub]' % (wer,err,wc,i,d,s)
results_details['file_'+str(idx)]['sent_'+key]['wer']=wer
err=i_t+d_t+s_t
wer=err/wc_t*100
        wer='%%Overall WER:%.2f [%d / %d , %d ins, %d del, %d sub]' % (wer,err,wc_t,i_t,d_t,s_t)
total_wer+=(err/wc_t)
results_details['file_'+str(idx)]['wer']=wer
# MR-WER here
i=d=s=c=di=mrwer=i_t=d_t=s_t=c_t=di_t=mrwer_t=0
    # Here, we calculate MR-WER per sentence across all the available references:
for sentence_id in hyp_dict.keys():
results_details['sent_'+sentence_id]={}
i,d,s,c,di,align_compact,align_details = merge_align(results_details,sentence_id,nref)
i_t+=i
d_t+=d
s_t+=s
c_t+=c
mrwer='%%MR-WER:%.2f [%d ins, %d del, %d sub, %d cor, %d del(uncounted)]' % ((i+d+s)/(s+d+c)*100,i,d,s,c,di)
results_details['sent_'+sentence_id]['mrwer']=mrwer
results_details['sent_'+sentence_id]['align_details']=align_details
results_details['sent_'+sentence_id]['align_compact']=align_compact
mrwer='%%Overall MR-WER:%.2f [%d ins, %d del, %d sub, %d cor]' % ((i_t+d_t+s_t)/(s_t+d_t+c_t)*100,i_t,d_t,s_t,c_t)
results_details['mrwer']=mrwer
#Show results here
if args.show_alignment or args.show_multiple_alignment or args.show_errors:
print 'Detailed results:'
for sentence_id in hyp_dict.keys():
print 'ID:', sentence_id
for ref_id in range(nref):
print 'File:', args.ref[ref_id]
print results_details['file_'+str(ref_id)]['sent_'+sentence_id]['wer']
if args.show_alignment:
print 'Ref: ',' '.join(results_details['file_'+str(ref_id)]['sent_'+sentence_id]['aligned_r'])
print 'Hyp: ',' '.join(results_details['file_'+str(ref_id)]['sent_'+sentence_id]['aligned_h'])
print 'Err: ',' '.join(results_details['file_'+str(ref_id)]['sent_'+sentence_id]['operations'])
print ''
print results_details['sent_'+sentence_id]['mrwer']
if args.show_multiple_alignment:
print results_details['sent_'+sentence_id]['align_details']
print '####'
print 'Overall results:'
for ref_id in range(nref):
print 'File:', args.ref[ref_id]
print results_details['file_'+str(ref_id)]['wer']
print '\n', results_details['mrwer']
print '%%Overall AV-WER:%.2f' % (total_wer/nref*100)
| 36.943396 | 148 | 0.597038 | 869 | 5,874 | 3.812428 | 0.208285 | 0.092967 | 0.065198 | 0.076064 | 0.395714 | 0.335346 | 0.278298 | 0.210383 | 0.180199 | 0.124359 | 0 | 0.007715 | 0.249745 | 5,874 | 158 | 149 | 37.177215 | 0.744044 | 0.094995 | 0 | 0.135922 | 0 | 0.038835 | 0.184325 | 0.004721 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.087379 | null | null | 0.165049 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c994200b24b26faebdff2ea952810089ee05791f | 641 | py | Python | tarea4/tarea4-09.py | jmencisom/nb-mn | 799550d5b6524c36ff339067e233c4508b964866 | [
"MIT"
] | 2 | 2018-04-20T13:54:38.000Z | 2018-06-09T01:43:45.000Z | tarea4/tarea4-09.py | jmencisom/nb-mn | 799550d5b6524c36ff339067e233c4508b964866 | [
"MIT"
] | 5 | 2018-03-02T13:08:32.000Z | 2018-03-05T16:02:19.000Z | tarea4/tarea4-09.py | jmencisom/nb-mn | 799550d5b6524c36ff339067e233c4508b964866 | [
"MIT"
] | null | null | null | for i in range(0,3):
f = open("dato.txt")
f.seek(17+(i*77),0)
x1= int(f.read(2))
f.seek(20+(i*77),0)
y1= int(f.read(2))
f.seek(35+(i*77),0)
a= int(f.read(2))
f.seek(38+(i*77),0)
b= int(f.read(2))
f.seek(60+(i*77),0)
x2= int(f.read(2))
f.seek(63+(i*77),0)
y2= int(f.read(2))
f.seek(73+(i*77),0)
r= int(f.read(2))
print (x1,y1,a,b,x2,y2,r ," ")
f.close()
    y = (y1 + ((b / a) * (x2 - x1)))  # equation of the straight line
    d = (r * (1 + y))
    print("equation", y)
    f = open("dato.txt", "a")
    f.write("In option " + str(i) + " the missing equation " + "\n")
f.close()
| 21.366667 | 65 | 0.469579 | 133 | 641 | 2.263158 | 0.353383 | 0.116279 | 0.093023 | 0.209302 | 0.27907 | 0.27907 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 0.25741 | 641 | 30 | 66 | 21.366667 | 0.514706 | 0.035881 | 0 | 0.083333 | 0 | 0 | 0.105178 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c997f8c075bcc99e58177fae2b33ee69441783d5 | 1,969 | py | Python | test_scripts/reg1.py | talih0/dps-for-iot | e16f61da15d1d06f13cfce8a667c710f45441d4b | [
"Apache-2.0"
] | 57 | 2017-12-14T01:37:02.000Z | 2021-11-08T11:19:32.000Z | test_scripts/reg1.py | talih0/dps-for-iot | e16f61da15d1d06f13cfce8a667c710f45441d4b | [
"Apache-2.0"
] | 30 | 2017-11-03T18:40:51.000Z | 2021-06-30T13:47:16.000Z | test_scripts/reg1.py | talih0/dps-for-iot | e16f61da15d1d06f13cfce8a667c710f45441d4b | [
"Apache-2.0"
] | 15 | 2018-03-14T05:56:08.000Z | 2021-04-25T21:29:09.000Z | #!/usr/bin/python
from common import *
import atexit
import time
atexit.register(cleanup)
# Start the registry service
reg1 = reg()
# Start some subscribers
# Delay the starts so that we ensure a fully connected graph.
sub1 = reg_subs('-p {} -c 1 a/b/c'.format(reg1.port))
sub2 = reg_subs('-p {} -c 1 a/b/c'.format(reg1.port))
expect_reg_linked([sub1, sub2])
a_subs = [sub1, sub2]
# Add subscribers one at a time, waiting for each to link into the mesh
for count in range(2, 15):
    sub = reg_subs('-p {} -c {} a/b/c'.format(reg1.port, count))
    expect_reg_linked(sub)
    a_subs.append(sub)
sub16 = reg_subs('-p {} -c 15 1/2/3'.format(reg1.port))
expect_reg_linked(sub16)
sub17 = reg_subs('-p {} -c 16 +/+/#'.format(reg1.port))
expect_reg_linked(sub17)
# Give time for the subscriptions to propagate
time.sleep(15)
# Start some publishers
reg_pubs('-p {} a/b/c -m hello'.format(reg1.port))
reg_pubs('-p {} 1/2/3 -m world'.format(reg1.port))
expect_pub_received(a_subs, 'a/b/c')
expect_pub_received([sub16], '1/2/3')
expect_pub_received([sub17], ['a/b/c', '1/2/3'])
| 32.816667 | 95 | 0.685627 | 372 | 1,969 | 3.475806 | 0.22043 | 0.146945 | 0.205723 | 0.118329 | 0.419954 | 0.419954 | 0.375097 | 0.375097 | 0.375097 | 0.041763 | 0 | 0.076437 | 0.116303 | 1,969 | 59 | 96 | 33.372881 | 0.666667 | 0.098019 | 0 | 0 | 0 | 0 | 0.191525 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.066667 | 0 | 0.066667 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c99969794507c370a480dadfb750763cdd719c6f | 56,380 | py | Python | home/bin/k3b-rm.py | ssokolow/profile | 09f2a842077909d883a08b546659516deec7d719 | [
"MIT"
] | 9 | 2015-04-14T22:27:40.000Z | 2022-02-23T05:33:00.000Z | home/bin/k3b-rm.py | ssokolow/profile | 09f2a842077909d883a08b546659516deec7d719 | [
"MIT"
] | null | null | null | home/bin/k3b-rm.py | ssokolow/profile | 09f2a842077909d883a08b546659516deec7d719 | [
"MIT"
] | 9 | 2015-04-14T22:27:42.000Z | 2017-11-21T11:34:23.000Z | #!/usr/bin/env python2
# -*- coding: utf-8 -*-
# pylint: disable=invalid-name
"""A simple tool for deleting the files listed in a K3b project after it has
been written to a disc. (Useful in concert with gaff-k3b)
--snip--
@note: This currently explicitly uses C{posixpath} rather than C{os.path}
since, as a POSIX-only program, K3b is going to be writing project files
that always use UNIX path separators.
@note: When designing the internals of this tool, I strived to ensure that,
should a fatal bug be encountered, it would be possible to fix it and
then re-run the same command to complete the operation.
@todo: For the love of God, make this its own project and split it into
multiple files!
@todo: Wherever feasible, make mocks actually call what they're supposed to be
mocking so I can use them purely to determine that the right number of
calls were made with the right arguments.
(As is, I'm "testing the mock" too much.)
@todo: Refactor the test suite once I'm no longer burned out on this project.
@todo: Redesign the tests to ensure that Unicode in Python 2.x doesn't cause
errors with print().
@todo: Resolve entities like &#xde4a;&#xdd10;
"""
from __future__ import (absolute_import, division, print_function,
with_statement, unicode_literals)
__appname__ = "File-Deleting Companion for gaff-k3b"
__author__ = "Stephan Sokolow (deitarion/SSokolow)"
__version__ = "0.1"
__license__ = "MIT"
import logging
log = logging.getLogger(__name__)
import errno, os, posixpath, re, shutil, sys, tempfile, zipfile
import xml.etree.cElementTree as ET
if sys.version_info.major < 3:
from xml.sax.saxutils import escape as xmlescape
from urllib import (pathname2url as _pathname2url,
quote as _urlquote,
quote_plus as _urlquote_plus)
def pathname2url(text):
"""Fixup wrapper to make C{urllib.quote} behave as in Python 3"""
return _pathname2url(text if isinstance(text, bytes)
else text.encode('utf-8'))
def urlquote(text):
"""Fixup wrapper to make C{urllib.quote} behave as in Python 3"""
return _urlquote(text if isinstance(text, bytes)
else text.encode('utf-8'))
def urlquote_plus(text, safe=b''):
"""Fixup wrapper to make C{urllib.quote_plus} behave as in Python 3"""
return _urlquote_plus(text if isinstance(text, bytes)
else text.encode('utf-8'), safe)
else: # pragma: nocover
from urllib.request import pathname2url # pylint: disable=E0611,F0401
from urllib.parse import (quote as urlquote, # pylint: disable=E0611,F0401
quote_plus as urlquote_plus)
from xml.sax.saxutils import escape as _xmlescape
def xmlescape(text):
"""Fixup wrapper to make C{xml.sax.saxutils.escape} work with bytes"""
return (_xmlescape(text.decode('latin1')).encode('latin1')
if isinstance(text, bytes) else _xmlescape(text))
# ---=== Actual Code ===---
if sys.version_info.major >= 3: # pragma: nocover
basestring = (bytes, str) # pylint: disable=redefined-builtin
unicode = str # pylint: disable=redefined-builtin
class FSWrapper(object):
"""Centralized overwrite/dry-run control and log-as-fail wrapper."""
overwrite = False
dry_run = True # Fail safe if the connection to --dry-run is broken
def __init__(self, overwrite=overwrite, dry_run=dry_run):
self.overwrite = overwrite
self.dry_run = dry_run
def mergemove(self, src, dest):
"""Move a file or folder, recursively merging it if the target exists.
@param dest: The B{exact} (not parent) path to which to move C{src}.
@return: Dict mapping old paths to new ones.
"""
moved = {}
if os.path.isdir(src) and os.path.exists(dest):
for fname in os.listdir(src):
moved.update(self.mergemove(os.path.join(src, fname),
os.path.join(dest, fname)))
elif self.move(src, dest):
moved[src] = dest
self.remove_emptied_dirs(src)
return moved
# TODO: What if the parent of dest doesn't exist?
def move(self, src, dest):
"""See L{shutil.move}.
@return: C{True} if the move was successful or failed because the
source doesn't exist and the destination already does.
(Used by the code for rewriting paths within other files)
@rtype: C{bool}
"""
dest_exists = os.path.exists(dest)
if not os.path.exists(src):
log.warn("Cannot move nonexistant path: %s", src)
return dest_exists
elif dest_exists and not self.overwrite:
log.warn("Target exists. Skipping: %s", dest)
return False
log.info("%r -> %r", src, dest)
if not self.dry_run:
destdir = os.path.dirname(dest)
try:
if not os.path.exists(destdir):
os.makedirs(destdir) # TODO: Test this branch
shutil.move(src, dest)
except IOError as err: # pragma: nocover
log.error(err)
return True
def remove(self, path):
"""See L{os.unlink} or L{shutil.rmtree} as appropriate.
@return: C{True} on success
"""
if not os.path.exists(path):
log.warn("Cannot remove nonexistant path: %s", path)
return False
log.info("Removing: %s", path)
if not self.dry_run:
if os.path.isdir(path):
shutil.rmtree(path)
else:
os.remove(path)
# TODO: Log and continue in case of exception here
return True
def remove_emptied_dirs(self, paths):
"""Recursively remove empty directories.
(eg. Clean up directory structures after removing a bunch of files.)
"""
while paths:
diminished = set()
# Sort reversed to mimic os.walk(topdown=False) for efficiency
for path in sorted(paths, reverse=True):
parent = os.path.normpath(os.path.dirname(path))
try: # Use os.rmdir as final "do we do?" for atomic operation
if not (os.path.isdir(path) and not os.listdir(path)):
if not os.path.exists(path):
diminished.add(parent)
continue
log.info("Removing empty directory: %s", path)
if not self.dry_run:
os.rmdir(path) # TODO: More dry_run-friendly approach
diminished.add(parent)
except OSError as err:
if err.errno == errno.ENOTEMPTY:
return # TODO: Test this short-circuit optimization
log.warning(err)
# Iteratively walk up until we run out of emptied ancestors
paths = diminished
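remove_emptied_dirs walks upward from each removed path, pruning ancestors as they empty out. The same end result for a whole tree can be sketched bottom-up with os.walk (the helper name here is hypothetical, not part of this tool):

```python
import os
import tempfile

def prune_empty_dirs(root):
    """Walk bottom-up and rmdir every directory that ends up empty."""
    for dirpath, _dirnames, _filenames in os.walk(root, topdown=False):
        # Re-check with listdir: children pruned earlier in this walk
        # may have just emptied this directory.
        if dirpath != root and not os.listdir(dirpath):
            os.rmdir(dirpath)

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, 'a', 'b', 'c'))
prune_empty_dirs(root)
print(os.listdir(root))  # -> []
```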
# TODO: Use a separate SAX-based rewrite option for XML so entities are
# handled automatically. (Use http://code.activestate.com/recipes/265881/)
# ...and at least SUPPORT https://pypi.python.org/pypi/defusedxml/
# TODO: Consider a separate parser for YAML too since it's popular enough
# to potentially be encountered in the wild and has optional escaping that
# could result in an ASCII-to-Unicode path rewrite rendering the file's
    # syntax invalid.
# (Always a risk, but much less likely in other formats because of how the
# src/dest mapping ensures the replacement will use the same encoding that
# was matched.)
# TODO: Consider some kind of autodetect based on extension and, for gzip
# and XML, headers.
def rewrite(self, fpath, mappings):
"""Atomically rewrite substrings within a given file."""
def matcher(match):
"""re.sub callback"""
src = match.group(0)
log.debug("Rewrite in %r: %r -> %r", fpath, src, mappings.get(src))
assert isinstance(src, bytes)
assert isinstance(mappings[src], bytes)
return mappings[src]
# If no mappings were provided, don't bother to read/write a no-op.
if not mappings:
return
# If any input string is unicode rather than bytes, encode as UTF-8
# (A "do what I probably mean" measure chosen as preferable to a
# runtime exception, given that Python has no static typing)
mappings = {
x.encode('utf-8') if isinstance(x, unicode) else x:
y.encode('utf-8') if isinstance(y, unicode) else y
for x, y in mappings.items()
            # TODO: The isinstance(y, unicode) used to use "x" by mistake.
# Write a test case for that using unicode input.
}
log.info("Rewriting paths in %r", fpath)
with open(fpath, 'rb') as fobj:
re_str = fgrep_to_re_str(mappings.keys())
content = fobj.read()
if isinstance(content, unicode):
content = content.encode('utf-8')
assert isinstance(re_str, bytes)
assert isinstance(content, bytes)
rex = re.compile(fgrep_to_re_str(mappings.keys()))
content = rex.sub(matcher, content)
if not self.dry_run:
# Replace atomically by renaming a temp file
with tempfile.NamedTemporaryFile(delete=False,
dir=os.path.split(fpath)[0]) as fobj:
fobj.write(content)
tpath = fobj.name
os.rename(tpath, fpath) # TODO: Did Windows need os.unlink first?
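rewrite() finishes with the classic temp-file-then-rename recipe for atomic replacement. Isolated from the class, the pattern looks roughly like this (the function name is hypothetical):

```python
import os
import tempfile

def atomic_write(path, data):
    """Replace path's contents atomically: write a sibling temp file,
    then rename it over the target so readers never see a partial file."""
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)  # same dir => same filesystem
    try:
        with os.fdopen(fd, 'wb') as fobj:
            fobj.write(data)
        os.replace(tmp, path)  # atomic rename, overwrites on all platforms
    except Exception:
        os.unlink(tmp)  # clean up the temp file if anything went wrong
        raise
```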
def _print(*args, **kwargs):
"""Wrapper for C{print} to allow mocking"""
print(*args, **kwargs)
def main():
"""setuptools-compatible entry point"""
import argparse
parser = argparse.ArgumentParser(
description=__doc__.replace('\r\n', '\n').split('\n--snip--\n')[0])
parser.add_argument('--version', action='version',
version="%%(prog)s v%s" % __version__)
parser.set_defaults(overwrite=False, dry_run=False,
verbose=False, quiet=False)
subparsers = parser.add_subparsers(title='available subcommands')
def new_subcommand(*args, **kwargs):
"""C{subparsers.add_parser} wrapper which adds common arguments."""
parser = subparsers.add_parser(*args, **kwargs)
parser.add_argument('-v', '--verbose', action="count", dest="verbose",
default=2,
help="Increase the verbosity. Use twice for extra effect")
parser.add_argument('-q', '--quiet', action="count", dest="quiet",
default=0,
help="Decrease the verbosity. Use twice for extra effect")
parser.add_argument('-n', '--dry-run', action="store_true",
dest="dry_run",
help="Don't actually modify the filesystem. Just simulate.")
parser.add_argument('paths', metavar='project_file', nargs='+',
help="K3b project file to read from")
return parser
new_subcommand('rm', help='Remove the given files'
).set_defaults(remove_leftovers=True, mode='rm')
mv_parser = new_subcommand('mv', help='Move the files to the given path')
mv_parser.add_argument('--overwrite', action="store_true",
dest="overwrite",
help="Allow %(prog)s to overwrite files at the target location.")
mv_parser.add_argument('--rewrite', metavar='path', action='append',
dest='rewrites', default=[],
help="Perform string substitution on paths within given file")
mv_parser.add_argument('target', metavar='target_dir',
help="Directory to move files into")
mv_parser.set_defaults(remove_leftovers=True, mode='mv')
new_subcommand('ls', help='List paths retrieved'
).set_defaults(remove_leftovers=False, mode='ls')
args = parser.parse_args()
# TODO: How do I prevent set_defaults from clobbering add_argument stuff?
if args.verbose is False:
args.verbose = 2
# Set up clean logging to stderr
log_levels = [logging.CRITICAL, logging.ERROR, logging.WARNING,
logging.INFO, logging.DEBUG]
args.verbose = min(args.verbose - args.quiet, len(log_levels) - 1)
args.verbose = max(args.verbose, 0)
logging.basicConfig(level=log_levels[args.verbose],
format='%(levelname)s: %(message)s')
if args.mode == 'mv' and not os.path.isdir(args.target):
log.critical("Target path is not a directory: %s", args.target)
return 2
# TODO: Test that --overwrite and --dry-run always get passed through
filesystem = FSWrapper(overwrite=args.overwrite, dry_run=args.dry_run)
done = {}
for path in args.paths:
try:
files = parse_k3b_proj(path)
# TODO: Log and continue in case of exception here
except (IOError, zipfile.BadZipfile) as err: # TODO: Test this branch
log.warning("Not a valid K3b project file. Moving: %s", path)
files = {os.path.abspath(path): os.path.basename(path)}
for src_path, dest_rel in sorted(files.items()):
if args.mode == 'mv':
dest_path = mounty_join(args.target, dest_rel)
filesystem.mergemove(src_path, dest_path)
done[src_path] = dest_path
elif args.mode == 'rm':
filesystem.remove(src_path)
else: # args.mode == 'ls'
_print(src_path)
if args.remove_leftovers:
filesystem.remove_emptied_dirs(files)
if args.mode == 'mv':
for path in args.rewrites:
if not os.path.isfile(path): # TODO: Test this branch
log.error("Not a file: %s", path)
continue
filesystem.rewrite(path, done)
def fgrep_to_re_str(patterns):
"""Escape a list of strings and return a regex string to match any of them.
@type patterns: C{[unicode]} or C{[bytes]}
@warning: Providing a C{patterns} which contains both unicode and \
bytestrings is undefined behaviour. It's your own fault if your
Python 3.x program blows up because of it.
@note: Does not C{re.compile} for you in case you want to incorporate it
into a larger regular expression.
"""
if not patterns:
return b'' # FIXME: I know this is going to have unexpected results.
if isinstance(patterns, basestring):
patterns = [patterns]
patterns = [re.escape(x) for x in patterns]
return (b'(' + b'|'.join(patterns) + b')'
if isinstance(patterns[0], bytes)
else u'(%s)' % u'|'.join(patterns))
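A quick sketch of fgrep_to_re_str's effect on str patterns (the inputs are made up; the real function above also handles bytes):

```python
import re

def fgrep_to_re_str(patterns):
    """re.escape each literal and join them into one alternation group."""
    if isinstance(patterns, str):
        patterns = [patterns]
    return '(%s)' % '|'.join(re.escape(x) for x in patterns)

# The dot in "a.b" is escaped, so "axb" is not rewritten:
rex = re.compile(fgrep_to_re_str(['a.b', 'c+d']))
print(rex.sub('X', 'a.b axb c+d'))  # -> X axb X
```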
re_percent_escape_u = re.compile(u"%[0-9a-fA-F]{2}")
re_percent_escape_b = re.compile(re_percent_escape_u.pattern.encode('ascii'))
def lower_percent_escapes(escaped_str):
"""Lowercase the %3C-style escapes in a string."""
rex = re_percent_escape_u
if isinstance(escaped_str, bytes):
rex = re_percent_escape_b
return rex.sub(lambda x: x.group(0).lower(), escaped_str)
def mounty_join(a, b):
"""Join paths C{a} and C{b} while ignoring leading separators on C{b}"""
b = b.lstrip(os.sep).lstrip(os.altsep or os.sep)
return posixpath.join(a, b)
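A quick illustration of why the lstrip matters (paths are made up):

```python
import posixpath

def mounty_join(a, b):
    # posixpath.join('/mnt/cd', '/music/...') would discard '/mnt/cd'
    # because the second argument is absolute; stripping the leading
    # separator keeps the mount-point prefix.
    return posixpath.join(a, b.lstrip('/'))

print(posixpath.join('/mnt/cd', '/music/track.flac'))  # -> /music/track.flac
print(mounty_join('/mnt/cd', '/music/track.flac'))     # -> /mnt/cd/music/track.flac
```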
# TODO: Decide on an exception-handling policy here
def parse_k3b_proj(path):
"""Parse a K3b project file into a list of paths"""
with zipfile.ZipFile(path) as zfh:
xml = zfh.read('maindata.xml')
root = ET.fromstring(xml)
del xml
return parse_proj_directory('/', root.find('./files'))
def parse_proj_directory(parent_path, node):
"""Recursive helper for traversing k3b project XML"""
results = {}
for item in node:
path = posixpath.join(parent_path, item.get("name", ''))
if item.tag == 'file':
results[item.find('.//url').text] = path
elif item.tag == 'directory':
results.update(parse_proj_directory(path, item))
return results
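parse_proj_directory can be exercised without a real K3b project by feeding it a hand-written snippet shaped like maindata.xml (the paths below are made up; parse_dir mirrors the function above in self-contained form):

```python
import posixpath
import xml.etree.ElementTree as ET

XML = ("<k3b_data_project><files>"
       "<file name='track.flac'><url>/music/track.flac</url></file>"
       "<directory name='extras'>"
       "<file name='cover.jpg'><url>/music/cover.jpg</url></file>"
       "</directory>"
       "</files></k3b_data_project>")

def parse_dir(parent, node):
    """Map each source URL to its path inside the project layout."""
    results = {}
    for item in node:
        path = posixpath.join(parent, item.get('name', ''))
        if item.tag == 'file':
            results[item.find('.//url').text] = path
        elif item.tag == 'directory':
            results.update(parse_dir(path, item))
    return results

root = ET.fromstring(XML)
print(parse_dir('/', root.find('./files')))
# {'/music/track.flac': '/track.flac', '/music/cover.jpg': '/extras/cover.jpg'}
```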
def vary_escaped(path):
"""Generate all known encodings of a path that a regex may need to expect.
Seen In the wild:
- Completely unescaped (.pls, .gqv)
- urllib.pathname2url()ed file:/// URLs with upcase %-encoding (.audpl)
- xml.sax.saxutils.escape()ed unicode (XSPF)
@note: For most versatile matching, the file:// portion is omitted from
the generated output in order to also match file://<host>/...
@note: Output ordering is guaranteed to remain consistent between runs
within the same process lifetime so that C{zip()} can be used to
generate before/after pairs.
@warning: This only addresses common encodings used to encapsulate a URI
or path within a structured data file. It is still your responsibility
to ensure that you provide a path in the correct character coding.
This also makes no attempt to hack around XML entity encodings. If you
want to C{sed} paths in an XML file, combine this with a SAX-based
stream filter so you can operate on Unicode codepoints without fear
of rendering the XML invalid or tripping over an entity reference.
@warning: This makes no attempt to work around badly normalized paths or
mixed-case percent-encoding. Attempting to do so would result in a
combinatorial explosion. If you want to deal with messy data, you need
to actually parse and then re-serialize the format.
@todo: Does .audpl perform path separator conversion like pathname2url()
or does it just escape blindly like quote()?
@todo: Do a survey of file formats to determine what else I should do.
Potential examples:
- urllib.quote()ed (pathname2url with no os.sep conversion)
- urllib.quote_plus()ed file:/// URLs with uppercase %-encoding
- urllib.quote_plus()ed file:/// URLs with lowercase %-encoding
"""
# NOTE TO MAINTAINERS: This must return a consistently-ordered result
# but there is currently no way to reliably unit test that.
p2url = pathname2url(path)
return [
path, # literal (.pls, .gqv)
p2url, # url escaped (.audpl)
lower_percent_escapes(p2url),
xmlescape(path), # xml-escaped (XSPF)
]
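For a concrete feel of the variants vary_escaped produces, here is what the stock library calls do to a path containing a space and an ampersand (shown with Python 3's import locations; the path is made up):

```python
from urllib.request import pathname2url
from xml.sax.saxutils import escape as xmlescape

path = '/music/Tom & Jerry.mp3'
print(pathname2url(path))  # -> /music/Tom%20%26%20Jerry.mp3
print(xmlescape(path))     # -> /music/Tom &amp; Jerry.mp3
```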
def vary_escaped_batch(mappings):
"""Apply C{vary_escaped} to a dict of filename mappings."""
output = {}
for src, dest in mappings.items():
output.update(zip(vary_escaped(src), vary_escaped(dest)))
return output
# ---=== Test Suite ===---
if sys.argv[0].rstrip('3').endswith('nosetests'): # pragma: nobranch
import errno, tempfile, unittest
from itertools import product
if sys.version_info.major < 3:
from cStringIO import StringIO
BytesIO = StringIO
open_path = '__builtin__.open'
else: # pragma: nocover
from io import StringIO, BytesIO
open_path = 'builtins.open'
try:
from unittest.mock import ( # pylint: disable=E0611,F0401
patch, call, ANY, DEFAULT, mock_open)
except ImportError:
from mock import patch, call, ANY, DEFAULT, mock_open
def _file_exists(src, *_):
"""Used with C{side_effect} to make filesystem mocks stricter"""
if os.path.exists(src):
return DEFAULT
else:
raise IOError(errno.ENOENT, '%s: %r' %
(os.strerror(errno.ENOENT), src))
def touch_with_parents(path):
"""Touch a file into existence, including parents if needed"""
parent = posixpath.dirname(path)
if not os.path.exists(parent):
os.makedirs(parent)
# `touch $fpath`
open(path, 'a').close()
def test_pathname2url():
"""pathname2url: UTF-8 and Unicode input produce same output"""
assert pathname2url(b'abc/def ghi') == pathname2url(u'abc/def ghi')
def test_urlquote():
"""urlquote: UTF-8 and Unicode input produce same output"""
assert urlquote(b'abc/def ghi') == urlquote(u'abc/def ghi')
def test_urlquote_plus():
"""urlquote_plus: UTF-8 and Unicode input produce same output"""
assert urlquote_plus(b'abc/def ghi') == urlquote_plus(u'abc/def ghi')
def test_xmlescape():
"""xmlescape: both bytes and unicode are supported"""
assert xmlescape(u'<abc & def>') == u'<abc & def>'
assert xmlescape(b'<abc & def>') == b'<abc & def>'
class MockDataMixin(object): # pylint: disable=R0903
"""Code common to both light and heavy tests"""
maxDiff = None
longMessage = True
root_placeholder = u'~ROOT~'
@classmethod
def setUpClass(cls): # NOQA
"""Profiling showed over 25% of test time spent on ElementTree.
This cuts that in half.
"""
test_dom = ET.Element("k3b_data_project")
files = ET.SubElement(test_dom, "files")
cls.expected_tmpl = cls._add_files([], files, depth=2)
cls.xmldata_tmpl = BytesIO()
test_tree = ET.ElementTree(test_dom)
test_tree.write(cls.xmldata_tmpl,
encoding="UTF-8",
xml_declaration=True)
@classmethod
def _make_file_node(cls, dom_parent, fpath):
"""Add a file/url node stack as a child of the given parent"""
fnode = ET.SubElement(dom_parent, "file",
name=posixpath.basename(fpath))
unode = ET.SubElement(fnode, "url")
unode.text = mounty_join(cls.root_placeholder, fpath)
@classmethod
def _add_files(cls, ancestors, dom_parent, depth=0):
"""Generate a list of expected test files and populate test XML"""
expect, parent = {}, os.sep + os.sep.join(ancestors)
for x in list(u'12ñの') + ['dir']:
fpath = posixpath.join(parent, '_'.join(ancestors + [x]))
cls._make_file_node(dom_parent, fpath)
expect[mounty_join(cls.root_placeholder, fpath)] = fpath
# For robustness-testing
ET.SubElement(dom_parent, "garbage")
if depth:
for x in u'45ðあ':
expect.update(cls._add_files(ancestors + [x],
ET.SubElement(dom_parent, "directory", name=x),
depth - 1))
return expect


class TestFSWrapper(unittest.TestCase):  # pylint: disable=R0904
    """Tests for L{FSWrapper}

    @todo: Come up with a clean, robust way to either mock or generate
        filesystem structures and then test postconditions.
        My test suite is currently a mess.
    @todo: Consider using unittest2 so I can get access to the Python 3.4
        self.subTest context manager for things like "for dry_run in ..."
    """
    def setUp(self):  # NOQA
        self.testroot = tempfile.mkdtemp(prefix='k3b-rm_test-')
        self.destroot = tempfile.mkdtemp(prefix='k3b-rm_test-')
        self.addCleanup(self.cleanup)

        def set_mock(patcher):  # pylint: disable=C0111
            patcher.start()
            self.addCleanup(patcher.stop)

        self._rmd = os.rmdir
        for mpath in ("os.remove", "os.unlink", "os.rmdir", "os.rename",
                      "shutil.rmtree", "shutil.move"):
            set_mock(patch(mpath, side_effect=_file_exists, autospec=True))
        for meth in ('warn', 'info'):
            set_mock(patch.object(log, meth, autospec=True))

    def tearDown(self):  # NOQA
        # Simplify tests by expecting reset_mock() on used mocks.
        for mock in (os.remove, os.unlink, os.rmdir, os.rename,
                     shutil.rmtree, shutil.move,
                     log.info, log.warn):
            self.assertFalse(mock.called,  # pylint: disable=E1103
                             "Shouldn't have been called: %s" % mock)

    def cleanup(self):
        """Stuff which should be called after other cleanups, regardless"""
        # Make sure we call this after the mocks are deactivated
        assert getattr(shutil.rmtree, 'called', None) is None
        shutil.rmtree(self.testroot)
        shutil.rmtree(self.destroot)

    def _do_move_tsts(self, wrapper, src, dest):
        """The actual subTest code for test_move"""
        should_succeed = wrapper.overwrite or not os.path.exists(dest)

        for m in (os.rename, shutil.move):
            self.assertFalse(m.called)  # pylint: disable=E1101
        self.assertEqual(should_succeed, wrapper.move(src, dest),
                         "Must return True on successful removal")
        if wrapper.dry_run or not should_succeed:
            for m in (os.rename, shutil.move):
                self.assertFalse(m.called)  # pylint: disable=E1101
        else:
            m = shutil.move
            m.assert_called_once_with(  # pylint: disable=E1101
                src, dest)
            m.reset_mock()  # pylint: disable=E1101
        if should_succeed:
            log.info.assert_called_once_with(ANY, src, dest)
            log.info.reset_mock()
        else:
            log.warn.assert_called_once_with(ANY, dest)
            log.warn.reset_mock()

    def _make_children(self, parent, targets, keepers, depth=3):
        """Recursive helper for generating a test tree"""
        dir_names = u'56ðあ'  # Keep this at least 3 entries long

        dpath = os.path.join(parent, 'target')
        os.makedirs(dpath)
        targets.append(dpath)

        fpath = os.path.join(parent, u'keepœr')
        idx = dir_names.find(os.path.basename(parent))
        if idx == 1:
            touch_with_parents(fpath)
            keepers.add(fpath)
        elif idx == 2:
            os.makedirs(fpath)
            keepers.add(fpath)

        if depth:
            for x in dir_names:
                self._make_children(os.path.join(parent, x),
                                    targets, keepers, depth - 1)

    def test_init_members(self):
        """FSWrapper.__init__: public members are set properly"""
        wrapper = FSWrapper()
        self.assertFalse(wrapper.overwrite, "overwrite=False not default!")
        self.assertTrue(wrapper.dry_run, "dry_run=True not default!")

        for overwrite in (True, False):
            for dry_run in (True, False):
                wrapper = FSWrapper(overwrite, dry_run)
                self.assertEqual(overwrite, wrapper.overwrite)
                self.assertEqual(dry_run, wrapper.dry_run)
@patch.object(FSWrapper, "move", autospec=True)
@patch.object(FSWrapper, "remove_emptied_dirs", autospec=True)
def test_mergemove(self, remdirs, move):
"""FSWrapper.mergemove: normal operation"""
# dry_run=False to catch stuff which should be within the mocks.
wrapper = FSWrapper(dry_run=False)
mergemove = wrapper.mergemove
with patch.object(FSWrapper, 'mergemove', autospec=True) as mmove:
targets, keepers = [], set()
self._make_children(self.testroot, targets, keepers)
new_dest = os.path.join(self.destroot, 'new_dest')
afile = os.path.join(self.testroot, 'afile')
touch_with_parents(afile)
# Directory that doesn't exist at dest
self.assertDictEqual(mergemove(self.testroot, new_dest),
{self.testroot: new_dest})
move.assert_called_once_with(ANY, self.testroot, new_dest)
move.reset_mock()
remdirs.assert_called_once_with(ANY, self.testroot)
remdirs.reset_mock()
# File that doesn't exist at dest
self.assertDictEqual(mergemove(afile, new_dest),
{afile: new_dest})
move.assert_called_once_with(ANY, afile, new_dest)
move.reset_mock()
remdirs.assert_called_once_with(ANY, afile)
remdirs.reset_mock()
# Directory that exists at dest
self.assertFalse(mmove.called)
mmove.return_value = {'FROM': 'TO'}
self.assertDictEqual(mergemove(self.testroot, self.destroot),
{'FROM': 'TO'})
self.assertListEqual(
sorted(mmove.call_args_list), sorted(
[call(ANY, os.path.join(self.testroot, x),
os.path.join(self.destroot, x))
for x in os.listdir(self.testroot)]))
mmove.reset_mock()
# Reaction to a failed self.move call in mergemove
move.return_value = False
self.assertDictEqual(mergemove(self.testroot, new_dest), {})
self.assertFalse(mmove.called)
move.assert_called_once_with(ANY, self.testroot, new_dest)

    def test_move(self):
        """FSWrapper.move: normal operation"""
        # TODO: Extend test to cover the os.makedirs(destdir) step
        paths = []
        for _ in range(2):
            fd, fpath = tempfile.mkstemp(dir=self.testroot)
            dpath = tempfile.mkdtemp(dir=self.testroot)
            os.close(fd)
            paths.extend([fpath, dpath])

        for dry_run in (True, False):
            for owrite in (True, False):
                wrapper = FSWrapper(dry_run=dry_run, overwrite=owrite)
                for src_idx in (0, 1):
                    self._do_move_tsts(wrapper, paths[src_idx],
                                       paths[src_idx] + '_a')
                    # Test overwrite behaviour for both files and folders
                    # as both source and destination
                    for dest_idx in (2, 3):
                        self._do_move_tsts(wrapper, paths[src_idx],
                                           paths[dest_idx])

    def test_move_bad_paths(self):
        """FSWrapper.move: failures due to bad source/destination"""
        test_src = tempfile.mktemp()
        test_dest = tempfile.mktemp()

        for dry_run in (True, False):
            wrapper = FSWrapper(dry_run=dry_run)
            # Note: While it's less portable, I use POSIX command paths
            #       to avoid the overhead of setting up and tearing down
            #       test files when existence should be the only check.
            for src, dest, expected in (
                    (test_src, test_dest, False),
                    (test_src, '/', True),
                    ('/bin/sh', '/bin/echo', False)):
                self.assertEqual(wrapper.move(src, dest), expected,
                                 "Must return %s for %s -> %s" %
                                 (expected, src, dest))
                log.warn.assert_called_once_with(
                    ANY, dest if os.path.exists(src) else src)
                log.warn.reset_mock()

    def test_remove(self):
        """FSWrapper.remove: normal operation"""
        for dry_run in (True, False):
            wrapper = FSWrapper(dry_run=dry_run)
            test_fd, test_path = tempfile.mkstemp(dir=self.testroot)
            os.close(test_fd)

            for mock, path in ((os.remove, test_path),
                               (shutil.rmtree, self.testroot)):
                self.assertFalse(mock.called)  # pylint: disable=E1101
                self.assertTrue(wrapper.remove(path),
                                "Must return True on successful removal")
                if dry_run:
                    self.assertFalse(mock.called)  # pylint: disable=E1101
                else:
                    mock.assert_called_once_with(  # pylint: disable=E1101
                        path)
                    mock.reset_mock()  # pylint: disable=E1101
                log.info.assert_called_once_with(ANY, path)
                log.info.reset_mock()

    def test_remove_nonexistant(self):
        """FSWrapper.remove: nonexistant targets"""
        test_path = tempfile.mktemp()
        for dry_run in (True, False):
            wrapper = FSWrapper(dry_run=dry_run)
            self.assertFalse(wrapper.remove(test_path),
                             "Must return False if file doesn't exist")
            log.warn.assert_called_once_with(ANY, test_path)
            log.warn.reset_mock()

    def test_remove_emptied_dirs(self):
        """FSWrapper.remove_emptied_dirs: basic function"""
        targets, keepers = [], set()
        self._make_children(self.testroot, targets, keepers)

        # Test the handling of nonexistant paths too
        targets.append(tempfile.mktemp())

        # TODO: Test files and nonexistant children of otherwise empty dirs
        for dry_run in (True, False):
            if not dry_run:
                os.rmdir.side_effect = (
                    lambda x: self._rmd(x))  # pylint: disable=E1101,W0108

            wrapper = FSWrapper(dry_run=dry_run)
            wrapper.remove_emptied_dirs(targets)

            # TODO: Assert that log.info was called *in a way that gives
            #       dry-running meaning*.
            self.assertTrue(log.info.called)
            log.info.reset_mock()

            self.assertEqual(
                os.rmdir.called, not dry_run)  # pylint: disable=E1101
            if dry_run:
                continue

            # Test actual os.rmdir use when not dry-running
            try:
                for path in targets:
                    if not os.path.exists(path):
                        continue
                    self.assertIn(
                        call(path),
                        os.rmdir.call_args_list)  # pylint: disable=E1101

                    # Rough test for parent traversal
                    parent = os.path.dirname(path)
                    if len(os.listdir(parent)) > 1:
                        continue
                    self.assertIn(
                        call(parent),
                        os.rmdir.call_args_list)  # pylint: disable=E1101
            finally:
                os.rmdir.reset_mock()  # pylint: disable=E1101

        # Verify that the only remaining things are ancestors of stuff we
        # wanted to keep.
        for path, _, files in os.walk(self.testroot):
            if path.endswith(u'keepœr'):
                continue
            self.assertIn(files, [[], [u'keepœr']],
                          "Must remove all emptied dirs")
        for path in keepers:
            self.assertTrue(os.path.exists(path),
                            "Must not remove files or sub-target dirs")

        # Prevent tearDown from complaining about an expected mock call
        os.rmdir.reset_mock()  # pylint: disable=E1101

    @patch.object(log, 'warning', autospec=True)
    def test_remove_emptied_dirs_exceptional(self, mock):
        """FSWrapper.remove_emptied_dirs: exceptional input"""
        wrapper = FSWrapper(dry_run=False)  # TODO: Test dry_run=True

        # Test that an empty options list doesn't cause errors or call
        # os.rmdir unnecessarily
        wrapper.remove_emptied_dirs([])
        self.assertFalse(os.rmdir.called)  # pylint: disable=E1101

        # Test that a failure to normalize input doesn't raise exceptions
        # TODO: Redesign this so we can actually verify results
        wrapper.remove_emptied_dirs(['/.' + os.path.join(
            tempfile.mktemp(dir='/'))])
        mock.reset_mock()

        # Separately test response to EBUSY by trying to rmdir('/')
        os.rmdir.side_effect = OSError(errno.EBUSY, "FOO")
        wrapper.remove_emptied_dirs(['/bin'])


class TestFSWrapperRewriteFunc(unittest.TestCase):  # pylint: disable=R0904
    """Tests for L{FSWrapper.rewrite}"""

    # --==< Base rewrite() Testing Data >==--

    # TODO: Decide how to handle paths on case-insensitive OSes
    base_test_content = b'\n'.join([
        b"ABC", b"abc", b"AbC",  # Start+end, many cases
        b" ABC ", b"KABC", b"ABCK",  # Various word boundaries
        b" ABCDEFGHIJKLMNOPQRSTUVWXYZ ",  # many on same line
        b"DEF", b"JKLMNOPQ",  # Other matches
        "ЀF".encode('utf-8'),  # Non-ASCII content
        b"XYZ"  # Non-match
    ])
    base_test_expected = b'\n'.join([
        b"DEF", b"abc", b"AbC",  # No ABC->DEF->GHI (one pass only)
        b" DEF ", b"KDEF", b"DEFK",
        b" DEFGHIGHIJKLTunaPQRSTUVWXYZ ",
        b"GHI",
        b"JKLTunaPQ",  # Length-changing replacement
        "ð€F".encode('utf-8'),  # Non-ASCII content
        b"XYZ"
    ])
    base_expected_rewrites = (
        [(b'ABC', b'DEF')] * 5 +
        [(b'DEF', b'GHI'), (b'MNO', b'Tuna')] * 2 +
        [('ЀF'.encode('utf-8'), 'ð€F'.encode('utf-8'))]
    )
    base_test_map = {'ABC': 'DEF', 'DEF': 'GHI',
                     'MNO': 'Tuna', 'ЀF': 'ð€F'}

    # --==< rewrite() Integration Testing Data >==--
    test_map = {
        # Pure 7-bit ASCII
        u"/foo & bar/a/01 - thaz.thuz": u"/qaz & gud/b/01 - thaz.thuz",
        # Plenty of Unicode
        u"/foœ & bær/a/01 - ðaz.þuz": u"/qæz & güd/b/01 - ðaz.þuz",
    }

    # NOTE: For integration testing purposes, it's important to always keep
    #       this up to date with all the formats I foresee operating on.
    test_tmpls = (
        u'#Geeqie collection\n"%(literal)s.jpg"\n#end',  # gqv
        u'title=List\nurl=file://%(pathname2url)s.mp3\ntitle=Đaz',  # audpl
        u'[playlist]\nNumberOfEntries=1\nFile1=%(literal)s.ogg',  # pls
        u'<?xml ?><location>%(xmlescape)s.mp3</location>',  # xspf
        u'file://%(pathname2url)s.mp3',  # m3u8
        u'#EXTM3U\nfile://%(pathname2url)s.mp3',  # m3u8
        # http://gonze.com/playlists/playlist-format-survey.html
        # XXX: Should I bother attempting to deal with Windows-1252 in m3u?
        # XXX: Should I try supporting paths relative to the playlist file?
        # XXX: Is there a reasonable way to support B4S/WPL \-sep'd paths?
    )
    test_paths_encoded = [[{
        'literal': x,
        'pathname2url': pathname2url(x),
        'xmlescape': xmlescape(x),
    } for x in pair] for pair in test_map.items()]
    test_data = [[tmpl % path for path in pair]
                 for tmpl, pair in product(test_tmpls, test_paths_encoded)]

    def setUp(self):  # NOQA
        # Verify we have a (map, before, after) list, paths × tmpls long
        # (And do it every time to improve test isolation)
        self.assertEqual(len(self.test_data),
                         len(self.test_map) * len(self.test_tmpls))
        self.assertTrue(all(len(x) == 2 for x in self.test_data))

        # We want to make sure it works on both Unicode strings and
        # bytestrings (and we want to regenerate on each run to be safe)
        self.test_pairs = self.test_data + [
            [x.encode('utf8') for x in pair] for pair in self.test_data]

        self.testroot = tempfile.mkdtemp(prefix='k3b-rm_test-')
        self.addCleanup(self.cleanup)

        self.empty_path = os.path.join(self.testroot, 'empty_file')
        open(self.empty_path, 'wb').close()

        self.path_with_content = os.path.join(self.testroot, 'stuff_file')
        with open(self.path_with_content, 'wb') as fobj:
            # TODO: Decide how to handle paths on case-insensitive OSes
            fobj.write(self.base_test_content)

        def set_mock(patcher):  # pylint: disable=C0111
            patcher.start()
            self.addCleanup(patcher.stop)

        for meth in ('warn', 'info', 'debug'):
            set_mock(patch.object(log, meth, autospec=True))

    def tearDown(self):  # NOQA
        # Simplify tests by expecting reset_mock() on used mocks.
        for mock in (log.debug, log.info, log.warn):
            self.assertFalse(mock.called,  # pylint: disable=E1103
                             "Shouldn't have been called: %s" % mock)

    def cleanup(self):
        """Stuff which should be called after other cleanups, regardless"""
        shutil.rmtree(self.testroot)

    def _test_rewrite_empty(self, mappings):
        """Shared code to test rewrite() with empty files"""
        for dry_run in (True, False):
            wrapper = FSWrapper(dry_run=dry_run)

            # Verify that a missing mapping key doesn't cause chaos
            wrapper.rewrite(self.empty_path, {})

            # Verify that an empty file doesn't cause chaos
            self.assertEqual(os.stat(self.empty_path).st_size, 0)
            wrapper.rewrite(self.empty_path, mappings)
            self.assertEqual(os.stat(self.empty_path).st_size, 0)

            self.assertFalse(log.debug.called)
            log.info.assert_called_once_with(ANY, self.empty_path)
            log.info.reset_mock()

    def _test_rewrite(self, mappings):
        """Shared code to test rewrite() with content"""
        for dry_run in (True, False):
            wrapper = FSWrapper(dry_run=dry_run)

            # TODO: Test non-empty files shorter than the match string.
            wrapper.rewrite(self.path_with_content, self.base_test_map)
            log.info.assert_called_once_with(ANY, self.path_with_content)
            log.info.reset_mock()
            self.assertListEqual(sorted(log.debug.call_args_list), sorted(
                [call(ANY, self.path_with_content, x[0], x[1])
                 for x in self.base_expected_rewrites]))
            log.debug.reset_mock()

            # Verify actual on-disk result
            if dry_run:
                with open(self.path_with_content, 'rb') as fobj:
                    self.assertEqual(fobj.read(), self.base_test_content)
            else:
                with open(self.path_with_content, 'rb') as fobj:
                    self.assertEqual(fobj.read(), self.base_test_expected)

    def test_rewrite(self):
        """FSWrapper.rewrite: basic functionality (bytestring mappings)"""
        self._test_rewrite_empty({b'foo': b'bar'})
        self._test_rewrite({x.encode('utf8'): y.encode('utf8')
                            for x, y in self.base_test_map.items()})

    def test_rewrite_unicode(self):
        """FSWrapper.rewrite: basic functionality (unicode mappings)"""
        self._test_rewrite_empty({'foo': 'bar'})
        self._test_rewrite(self.base_test_map)

        # TODO: Decide how to handle these broken cases, if at all
        # self._test_rewrite_empty({b'foo': 'bar'})
        # self._test_rewrite_empty({'foo': b'bar'})

    def test_rewrite_integration(self):
        """FSWrapper.rewrite: in combination with vary_escaped()"""
        failures = []

        # TODO: Test with both unicode and bytes versions of test_map
        for before, after in self.test_pairs:
            with patch(open_path, mock_open(read_data=before)) as opn:
                for dry_run in (True, ):  # False): # TODO: Test dry_run=False
                    wrapper = FSWrapper(dry_run=dry_run)
                    wrapper.rewrite('/testfile',
                                    vary_escaped_batch(self.test_map))
                    if not log.debug.called:
                        failures.append((before, after))
                    # log.debug.assert_called_once_with(
                    #     ANY, '/testfile', ANY, ANY)
                    # FIXME: We need to check that the second two ANYs
                    #        are a pair from self.test_map.items()
                    log.debug.reset_mock()

        # TODO: We need to test a "no matches" case with both dry_run
        #       values
        if failures:
            self.fail("Failed to rewrite in one or more cases:\n\t%s" %
                      '\n\t'.join(repr(x) for x in failures))

    def test_vary_escaped_batch(self):
        """H: vary_escaped_batch: most minimal test possible

        (Used to localize bytes/unicode errors)
        """
        result = vary_escaped_batch(self.test_map)
        self.assertIsInstance(result, dict)
        self.assertGreater(len(result), len(self.test_map))
        self.assertDictContainsSubset(self.test_map, result)


class TestK3bRmLightweight(unittest.TestCase, MockDataMixin
                           ):  # pylint: disable=R0904
    """Tests for k3b-rm which require no test tree on the filesystem."""
    @classmethod
    def setUpClass(cls):  # NOQA
        MockDataMixin.setUpClass()

    def test__file_exists(self):
        """L: _file_exists helper for @patch: normal function"""
        self.assertEqual(_file_exists('/'), DEFAULT)
        self.assertRaises(IOError, _file_exists, tempfile.mktemp())

    @patch.object(sys, 'argv',
                  [__file__, 'mv', tempfile.mktemp(), tempfile.mktemp()])
    def test_main_bad_destdir(self):
        """L: main: calls sys.exit(2) for a bad mv target"""
        self.assertEqual(main(), 2, "Must exit with status 2 on bad mv "
                                    "target")

    def test_mounty_join(self):
        """L: mounty_join: proper behaviour"""
        for path_a in ('/foo', '/foo/'):
            for path_b in ('baz', '/baz', '//baz', '///baz'):
                self.assertEqual(mounty_join(path_a, path_b),
                                 '/foo/baz', "%s + %s" % (path_a, path_b))

    @staticmethod
    @patch("os.makedirs", autospec=True)
    @patch(open_path, mock_open(), create=True)
    def test_touch_with_parents(makedirs):
        """L: touch_with_parents: basic operation"""
        touch_with_parents('/bar/foo')
        makedirs.assert_called_once_with('/bar')
        # pylint: disable=E1101
        open.assert_called_once_with('/bar/foo', 'a')

    def test_fgrep_to_re_str(self):
        """L: fgrep_to_re_str: basic operation

        @todo: Consider using unittest2 so I can get access to the
            Python 3.4 self.subTest context manager to simplify this.
        """
        # Test support for unicode input
        self.assertEqual(fgrep_to_re_str([u"abc"]), u"(abc)")
        self.assertEqual(fgrep_to_re_str([u"abc", u"def"]), u"(abc|def)")
        self.assertEqual(fgrep_to_re_str([br"\n[1]".decode('utf8'), u"|"]),
                         br"(\\n\[1\]|\|)".decode('utf8'),
                         "Must escape input strings")

        # Test support for bytes input
        self.assertEqual(fgrep_to_re_str([b"abc"]), b"(abc)")
        self.assertEqual(fgrep_to_re_str([b"abc", b"def"]), b"(abc|def)")
        self.assertEqual(fgrep_to_re_str([br"\n[1]", br"|"]),
                         br"(\\n\[1\]|\|)", "Must escape input strings")

        # Test convenience support for passing a bare string as input
        self.assertEqual(fgrep_to_re_str(u"abc"), u"(abc)")
        self.assertEqual(fgrep_to_re_str(b"abc"), b"(abc)")

        # Test that an empty list doesn't break things
        self.assertIn(fgrep_to_re_str([]), (b'', u'', b'()', u'()'))

        # Verify that escaping isn't being reinvented
        with patch("re.escape") as resc:
            resc.return_value = ""  # Needed to prevent an exception
            fgrep_to_re_str("abc")
            resc.assert_called_once_with("abc")

    def test_lower_percent_escapes(self):
        """L: lower_percent_escapes: basic operation"""
        for before, after in (
                (u"ABCabc", u"ABCabc"),  # Percent-free
                (u"%AFfoo%B0bar%0C", u"%affoo%b0bar%0c"),  # start/mid/end
                (u"%Affoo%Babar%EC", u"%affoo%babar%ec"),  # mixed case
                (u"%affoo%b0bar%0c", u"%affoo%b0bar%0c"),  # already lower
                (u"%02foo%35bar%22", u"%02foo%35bar%22"),  # digit-only
                (u"%ZaZ%KaZ", u"%ZaZ%KaZ")):  # No-hex triple
            # Verify that it works on both unicode and bytes
            self.assertEqual(lower_percent_escapes(before), after)
            self.assertEqual(lower_percent_escapes(before.encode('utf8')),
                             after.encode('utf8'))

    def test_vary_escaped(self):
        """L: vary_escaped: basic operation"""
        # Manually-generated test strings (do not automate)
        test_str = u"/01/fœ & bar/<Båz>"
        expected = [u"/01/fœ & bar/<Båz>",
                    u'/01/f%C5%93%20%26%20bar/%3CB%C3%A5z%3E',
                    u'/01/f%c5%93%20%26%20bar/%3cB%c3%a5z%3e',
                    u'/01/fœ & bar/<Båz>']
        result = vary_escaped(test_str)
        self.assertIsInstance(result, (tuple, list),
                              "Result must have a consistent ordering")
        self.assertListEqual(result, expected)

        # ...and bytestrings to ensure type(input) = type(output)
        test_str = b"/01/foo & bar/<Baz>"
        expected = [b"/01/foo & bar/<Baz>",
                    # TODO: Is this Py3 bytes->str conversion desirable?
                    '/01/foo%20%26%20bar/%3CBaz%3E',
                    '/01/foo%20%26%20bar/%3cBaz%3e',
                    b'/01/foo & bar/<Baz>']
        result = vary_escaped(test_str)
        self.assertIsInstance(result, (tuple, list),
                              "Result must have a consistent ordering")
        self.assertListEqual(result, expected)


class TestK3bRm(unittest.TestCase, MockDataMixin):  # pylint: disable=R0904
    """Test suite for k3b-rm to be run via C{nosetests}."""
    @classmethod
    def setUpClass(cls):  # NOQA
        MockDataMixin.setUpClass()

    def setUp(self):  # NOQA
        """Generate all data necessary for a test run"""
        self.dest = tempfile.mkdtemp(prefix='k3b-rm_test-dest-')
        self.root = tempfile.mkdtemp(prefix='k3b-rm_test-src-')
        self.project = tempfile.NamedTemporaryFile(prefix='k3b-rm_test-',
                                                   suffix='.k3b')
        self.addCleanup(self.cleanup)

        tmp = self.xmldata_tmpl.getvalue().decode('UTF-8').replace(
            self.root_placeholder, self.root)
        if sys.version_info.major < 3:  # pragma: nobranch
            tmp = tmp.encode('UTF-8')
        xmldata = StringIO(tmp)

        with zipfile.ZipFile(self.project, 'w') as zobj:
            zobj.writestr("maindata.xml", xmldata.getvalue())

        self.expected = {}
        for key, value in self.expected_tmpl.items():
            path = key.replace(self.root_placeholder, self.root)
            self.expected[path] = value
            if path.endswith('dir'):
                os.makedirs(path)
            else:
                touch_with_parents(path)

    def cleanup(self):  # NOQA
        """Stuff which should be called after other cleanups, regardless"""
        for x in ('dest', 'root'):
            path = getattr(self, x, None)
            if path is not None:
                try:
                    # Make sure we call this after mocks are deactivated
                    assert getattr(shutil.rmtree, 'called', None) is None
                    shutil.rmtree(path)
                except OSError:
                    pass
            delattr(self, x)
        del self.project
        del self.expected

    # TODO: Decide how to address this
    # @staticmethod
    # def test_main_argparse_hole():
    #     """Hole in argparse isn't triggered"""
    #     with patch.object(sys, 'argv', [__file__]):
    #         main()

    @patch.object(sys.modules[__name__], "_print")
    @patch.object(sys.modules[__name__], "FSWrapper", autospec=True)
    def test_main_ls(self, fswrapper, mock):
        """H: main: ls subcommand function"""
        # Allow FSWrapper to be initialized but don't allow method calls
        fswrapper.return_value = None

        with patch.object(sys, 'argv',
                          [__file__, 'ls', self.project.name]):
            main()
        self.assertListEqual(
            sorted(mock.call_args_list), sorted(
                [call(x) for x in self.expected.keys()]))

    @patch.object(FSWrapper, "mergemove", autospec=True)
    @patch.object(FSWrapper, "remove_emptied_dirs", autospec=True)
    def test_main_move(self, remdirs, mmv):
        """H: main: mv triggers move_batch but only with args"""
        for args in [[], ['--overwrite']]:
            with patch.object(sys, 'argv',
                              [__file__, 'mv', self.project.name, '/'] + args):
                main()

            results = [x[0][1:] for x in mmv.call_args_list]
            self.assertListEqual(
                sorted(results),
                sorted([(x, mounty_join('/', self.expected[x]))
                        for x in self.expected]))
            remdirs.assert_called_once_with(ANY, self.expected)
            mmv.reset_mock()
            remdirs.reset_mock()

    @patch.object(FSWrapper, "move", autospec=True)
    @patch.object(FSWrapper, "remove", autospec=True)
    @patch.object(FSWrapper, "remove_emptied_dirs", autospec=True)
    def test_main_remove(self, remdirs, remove, move):
        """H: main: rm subcommand calls remove() but only properly"""
        with patch.object(sys, 'argv',
                          [__file__, 'rm', self.project.name]):
            main()

        self.assertFalse(move.called)
        remdirs.assert_called_once_with(ANY, self.expected)

        results = [x[0][1] for x in remove.call_args_list]
        self.assertListEqual(sorted(self.expected), sorted(results))

        # TODO: I'll want a test analogous to the old
        #       test_rm_batch_nonexistant

    def test_parse_k3b_proj(self):
        """H: parse_k3b_proj: basic functionality"""
        got = parse_k3b_proj(self.project.name)
        self.assertDictEqual(self.expected, got)


if __name__ == '__main__':  # pragma: nocover
    sys.exit(main())

# ===========================================================================
# File: backend/spider_backend.py
# Repo: sunhailin-Leo/business_data_spider (MIT)
# ===========================================================================
# -*- coding: UTF-8 -*-
"""
Created on 2017年11月10日
@author: Leo
"""
# 第三方库
from flask import Flask, Blueprint
from flask_restful import Api
# 项目内部库
from backend.resources.spider import SpiderList
from backend.resources.spider import SpiderSearch
# 项目版本的URL前缀
version_prefix = "/v1"
# 定义Flask项目
app = Flask(__name__)
api_bp = Blueprint('api', __name__)
api = Api(api_bp)
# 项目资源模板
api.add_resource(SpiderList, version_prefix + '/spider_list')
api.add_resource(SpiderSearch, version_prefix + '/search')
app.register_blueprint(api_bp)
if __name__ == '__main__':
app.run(port=8080)

# ===========================================================================
# File: committees/migrations/0009_auto_20200122_1722.py
# Repo: jonting/volmun (MIT)
# ===========================================================================
# Generated by Django 2.1.15 on 2020-01-22 22:22
from django.db import migrations, models
import django.db.models.deletion


class Migration(migrations.Migration):

    dependencies = [
        ('committees', '0008_auto_20200114_1807'),
    ]

    operations = [
        migrations.AlterField(
            model_name='topic',
            name='committee',
            field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE,
                                    related_name='topics',
                                    to='committees.Committee'),
        ),
    ]

# ===========================================================================
# File: core/templatetags/core/tags.py
# Repo: zachtib/MTGRollCall (MIT)
# ===========================================================================
import re
from django import template
from django.urls import reverse, NoReverseMatch
from django.utils.safestring import mark_safe

register = template.Library()


@register.simple_tag(takes_context=True)
def nav(context, url, text):
    try:
        url = reverse(url)
    except NoReverseMatch:
        pass
    # Special case for landing page
    pattern = f'^{url}$' if url == '/' else f'^{url}'
    path = context['request'].path
    active = re.search(pattern, path)
    return mark_safe(f'<li class="nav-item{" active" if active else ""}">'
                     f'<a class="nav-link" href="{url}">{text}</a>'
                     f'</li>')

# ===========================================================================
# File: src/event.py
# Repo: deepakkarki/pub-sub (MIT)
# ===========================================================================
# event.py
from enum import Enum

# Constants for accessing data fields out of the Control Events'
# data dictionaries
CHORD_RING = "ring"
PREDECESSOR = "predecessor"
SEGMENT = "segment"


class EventType(Enum):
    PAUSE_OPER = 1
    RESUME_OPER = 2
    RESTART_BROKER = 3
    RING_UPDATE = 4
    UPDATE_TOPICS = 5
    VIEW_CHANGE = 6


class ControlEvent():
    def __init__(self, name: EventType, data=None) -> None:
        self.name = name  # enum EventType
        if data is None:
            self.data = {}  # events may have no helper data needed
        else:
            self.data = data  # dictionary for data parameters

# ===========================================================================
# File: polling_stations/apps/addressbase/models.py
# Repo: dantagg/UK-Polling-Stations (BSD-3-Clause)
# ===========================================================================
from django.contrib.gis.db import models
from django.db import connection
from uk_geo_utils.models import (
    AbstractAddress,
    AbstractAddressManager,
    AbstractOnsudManager,
)


class AddressManager(AbstractAddressManager):
    def postcodes_for_district(self, district):
        qs = self.filter(location__within=district.area)
        qs = qs.values_list("postcode", flat=True).distinct()
        return list(qs)

    def points_for_postcode(self, postcode):
        qs = self.filter(postcode=postcode)
        qs = qs.values_list("location", flat=True)
        return list(qs)


class Address(AbstractAddress):
    objects = AddressManager()


class UprnToCouncil(models.Model):
    class Meta:
        indexes = [models.Index(fields=["lad"], name="lookup_lad_idx")]

    objects = AbstractOnsudManager()
    uprn = models.CharField(primary_key=True, max_length=12)
    lad = models.CharField(blank=True, max_length=9)


class Blacklist(models.Model):
    """
    Model for storing postcodes containing UPRNs in >1 local authorities

    This is intentionally de-normalised for performance reasons.
    Ideally ('postcode', 'lad') should be a composite PK,
    but django's ORM doesn't support them.
    """
    postcode = models.CharField(blank=False, max_length=15, db_index=True)
    lad = models.CharField(blank=False, max_length=9)

    class Meta:
        unique_together = ("postcode", "lad")


def get_uprn_hash_table(council_id):
    # Get all the UPRNs in the target local authority.
    # NB: we miss ~25 over the country because of lighthouses etc.
    cursor = connection.cursor()
    cursor.execute(
        """
        SELECT
            a.uprn,
            a.address,
            REPLACE(a.postcode, ' ', ''),
            a.location
        FROM addressbase_address a
        JOIN addressbase_uprntocouncil u ON a.uprn=u.uprn
        WHERE u.lad=%s;
        """,
        [council_id],
    )
    # Return the result as a hash table keyed by UPRN
    return {
        row[0]: {"address": row[1], "postcode": row[2], "location": row[3]}
        for row in cursor.fetchall()
    }
c9b22ca9a67508f5c5903dc519e72df78e81eda6 | 456 | py | Python | bitirmetezi/venv/Lib/site-packages/plot/tk/listTK/upgrade_index.py | busraltun/IMPLEMENTATIONOFEYECONTROLLEDVIRTUALKEYBOARD | fa3a9b150419a17aa82f41b068a5d69d0ff0d0f3 | [
"MIT"
] | 1 | 2020-04-10T08:14:43.000Z | 2020-04-10T08:14:43.000Z | bitirmetezi/venv/Lib/site-packages/plot/tk/listTK/upgrade_index.py | busraltun/IMPLEMENTATIONOFEYECONTROLLEDVIRTUALKEYBOARD | fa3a9b150419a17aa82f41b068a5d69d0ff0d0f3 | [
"MIT"
] | 1 | 2016-11-30T20:37:27.000Z | 2016-12-12T11:55:50.000Z | bitirmetezi/venv/Lib/site-packages/plot/tk/listTK/upgrade_index.py | busraltun/IMPLEMENTATIONOFEYECONTROLLEDVIRTUALKEYBOARD | fa3a9b150419a17aa82f41b068a5d69d0ff0d0f3 | [
"MIT"
] | 1 | 2019-12-18T07:56:00.000Z | 2019-12-18T07:56:00.000Z | """
upgrade a low dimensional index to a higher one
"""
from typing import List


def upgrade_index(index, new_dim):
    # type: (List, int) -> List
    """Upgrade a low dimensional index to a higher one.

    Args:
        index (List): a list of integers
        new_dim (int): the new dimension

    Returns:
        a new index list
    """
    if len(index) >= new_dim:
        return index
    else:
        return upgrade_index(index + [0], new_dim)
| 20.727273 | 54 | 0.609649 | 65 | 456 | 4.184615 | 0.430769 | 0.088235 | 0.080882 | 0.161765 | 0.286765 | 0.286765 | 0.286765 | 0.286765 | 0.286765 | 0 | 0 | 0.003115 | 0.296053 | 456 | 21 | 55 | 21.714286 | 0.844237 | 0.513158 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.166667 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c9b49bd12ca7a9e72a39d50bd29f8c6f4a48b7c3 | 2,108 | py | Python | app/admin/authentic.py | BeyondLam/Flask_Blog_Python3 | 274c932e9ea28bb6c83335e408a2cd9f1cf4fcb6 | [
"Apache-2.0"
] | 2 | 2019-10-25T16:35:41.000Z | 2019-10-26T10:54:00.000Z | app/admin/authentic.py | BeyondLam/Flask_Blog_Python3 | 274c932e9ea28bb6c83335e408a2cd9f1cf4fcb6 | [
"Apache-2.0"
] | null | null | null | app/admin/authentic.py | BeyondLam/Flask_Blog_Python3 | 274c932e9ea28bb6c83335e408a2cd9f1cf4fcb6 | [
"Apache-2.0"
] | null | null | null | from . import admin
from app import db
from flask import request, jsonify, current_app, session
from app.models import Admin, AdminLoginLog
from app.utils.tool import admin_login_required


# Login
@admin.route("/login", methods=["POST"])
def login():
    """Log an admin user in."""
    # Get the request parameters
    req_dict = request.get_json()
    username = req_dict.get("username")
    password = req_dict.get("password")

    # Check that all parameters are present
    if not all([username, password]):
        return jsonify(code=4001, msg="Incomplete parameters")

    # Look up the user
    try:
        admin_info = Admin.query.filter_by(username=username).first()
    except Exception as e:
        current_app.logger.error(e)
        return jsonify(code=4002, msg="Failed to fetch user information")

    # Check that the account exists and is active ("正常" is the stored "normal" status)
    if admin_info is None or admin_info.status != "正常":
        return jsonify(code=4003, msg="No such user; login refused")

    # Compare the submitted password with the one stored in the database
    if admin_info.password != password:
        return jsonify(code=4003, msg="Wrong username or password")

    # Record the admin login in the log
    ip_addr = request.remote_addr  # IP address the admin logged in from
    admin_login_log = AdminLoginLog(admin_id=admin_info.id, ip=ip_addr)
    try:
        db.session.add(admin_login_log)
        db.session.commit()
    except Exception as e:
        current_app.logger.error(e)
        db.session.rollback()

    # On success, save the login state in the session
    session["username"] = admin_info.username
    session["admin_id"] = admin_info.id
    session["avatar"] = admin_info.avatar

    return jsonify(code=200, msg="Login successful")


# Check login status
@admin.route("/session", methods=["GET"])
def check_login():
    """Check the login status."""
    # Try to get the user's name from the session
    username = session.get("username")
    admin_id = session.get("admin_id")
    avatar = session.get("avatar")
    # If a username is stored in the session, the user is logged in
    if username is not None:
        return jsonify(code=200, msg="Logged in", data={"username": username, "admin_id": admin_id, "avatar": avatar})
    else:
        return jsonify(code=4001, msg="Admin not logged in")


# Logout
@admin.route("/session", methods=["DELETE"])
@admin_login_required
def logout():
    """Log out."""
    # Clear the session data
    session.clear()
    return jsonify(code=200, msg="Logged out successfully!")
| 27.376623 | 112 | 0.662713 | 266 | 2,108 | 5.12406 | 0.364662 | 0.059428 | 0.09978 | 0.044021 | 0.188555 | 0.041086 | 0.041086 | 0.041086 | 0 | 0 | 0 | 0.017313 | 0.205408 | 2,108 | 76 | 113 | 27.736842 | 0.796418 | 0.093928 | 0 | 0.085106 | 0 | 0 | 0.090377 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.06383 | false | 0.06383 | 0.106383 | 0 | 0.340426 | 0.021277 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
c9b75430cc7755f89490c45152b50f01cb8d73dc | 653 | py | Python | src/post/views.py | viniciussslima/b2bit-mini-twitter | d37b6d1c3e8d92f5aeee8c8bfdde34b0e4f82e57 | [
"MIT"
] | null | null | null | src/post/views.py | viniciussslima/b2bit-mini-twitter | d37b6d1c3e8d92f5aeee8c8bfdde34b0e4f82e57 | [
"MIT"
] | null | null | null | src/post/views.py | viniciussslima/b2bit-mini-twitter | d37b6d1c3e8d92f5aeee8c8bfdde34b0e4f82e57 | [
"MIT"
] | null | null | null | from rest_framework import generics
from rest_framework.response import Response

from .serializers import CreatePostSerializer, ListPostSerializer
from .models import Post


class PostView(generics.GenericAPIView):
    def post(self, request):
        data = {**request.data, **{"user": request.user.id}}
        serializer = CreatePostSerializer(data=data)
        # raise_exception=True makes DRF return a 400 response on invalid data
        serializer.is_valid(raise_exception=True)
        serializer.save()
        return Response(serializer.data, status=201)


class PostListView(generics.ListAPIView):
    serializer_class = ListPostSerializer

    def get_queryset(self):
        return Post.objects.exclude(user=self.request.user.id)
| 26.12 | 65 | 0.7366 | 71 | 653 | 6.704225 | 0.464789 | 0.033613 | 0.071429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005556 | 0.173047 | 653 | 24 | 66 | 27.208333 | 0.875926 | 0 | 0 | 0 | 0 | 0 | 0.006126 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.133333 | false | 0 | 0.266667 | 0.066667 | 0.733333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
c9c32c9162b28b6a737e113f28ce89e81e9f5c0e | 2,237 | py | Python | test_sms_for_pi.py | viable-hartman/sim-module | bf991c01bda92ed95fc8345ce6f43d07336e70b5 | [
"MIT"
] | 61 | 2015-07-10T00:41:15.000Z | 2021-06-10T22:23:13.000Z | test_sms_for_pi.py | viable-hartman/sim-module | bf991c01bda92ed95fc8345ce6f43d07336e70b5 | [
"MIT"
] | 9 | 2015-08-21T14:03:25.000Z | 2017-08-16T08:33:27.000Z | test_sms_for_pi.py | viable-hartman/sim-module | bf991c01bda92ed95fc8345ce6f43d07336e70b5 | [
"MIT"
] | 30 | 2015-05-09T02:05:30.000Z | 2020-11-05T13:52:24.000Z | #!/usr/bin/python3
import logging

from test_shared import initializeLogs, initializeUartPort, baseOperations
from lib.sim900.smshandler import SimGsmSmsHandler, SimSmsPduCompiler


def printScaPlusPdu(pdu, logger):
    # Print SCA+PDU, just for debugging
    d = pdu.compile()
    if d is None:
        return False

    for (sca, pdu_part) in d:
        logger.info("sendSms(): sca + pdu = \"{0}\"".format(sca + pdu_part))
    return True


def sendSms(sms, pdu, logger):
    # Just for debugging, print all SCA + PDU parts
    printScaPlusPdu(pdu, logger)

    if not sms.sendPduMessage(pdu, 1):
        logger.error("error sending SMS: {0}".format(sms.errorText))
        return False
    return True


def main():
    """
    Tests SMS sending.

    :return: True if everything was OK, otherwise returns False
    """
    print("Please, enter phone number")
    phone_number = input()
    print("Please, enter sms text: ")
    sms_text = input()

    # logging levels
    CONSOLE_LOGGER_LEVEL = logging.INFO
    LOGGER_LEVEL = logging.INFO

    COMPORT_NAME = "/dev/ttyAMA0"

    # WARN: specify the recipient number here!
    TARGET_PHONE_NUMBER = phone_number

    # You can specify an SMS centre number, but it's not necessary. If you do not
    # specify one, the SIM900 module will read the SMS centre number from memory.
    # SMS_CENTER_NUMBER = "+1 050 123 45 67"
    SMS_CENTER_NUMBER = ""

    # adding & initializing port object
    port = initializeUartPort(portName=COMPORT_NAME)

    # initializing logger
    (formatter, logger, consoleLogger) = initializeLogs(LOGGER_LEVEL, CONSOLE_LOGGER_LEVEL)

    # making base operations
    d = baseOperations(port, logger)
    if d is None:
        return False

    (gsm, imei) = d

    # creating object for SMS sending
    sms = SimGsmSmsHandler(port, logger)

    # ASCII
    logger.info("sending sms")
    pduHelper = SimSmsPduCompiler(
        SMS_CENTER_NUMBER,
        TARGET_PHONE_NUMBER,
        "{}\n{}".format(
            sms_text,
            "This is a computer, do not reply!"
        )
    )

    if not sendSms(sms, pduHelper, logger):
        return False

    gsm.closePort()
    return True


if __name__ == "__main__":
    main()
    print("DONE")
| 24.053763 | 114 | 0.643272 | 269 | 2,237 | 5.237918 | 0.423792 | 0.038325 | 0.063875 | 0.012775 | 0.028389 | 0.028389 | 0 | 0 | 0 | 0 | 0 | 0.01335 | 0.263299 | 2,237 | 92 | 115 | 24.315217 | 0.841019 | 0.206527 | 0 | 0.166667 | 0 | 0 | 0.101083 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.0625 | null | null | 0.104167 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c9c4dc33bfa1faa3d19c72bdd72e244b5e361a66 | 694 | py | Python | zpgc_2016b/include/alaudio.py | mpatacchiola/naogui | 2c71c82362edcf66b1a24a5f2af23e9719011146 | [
"MIT"
] | 2 | 2017-12-22T14:33:07.000Z | 2020-07-23T09:35:59.000Z | zpgc_2016c/include/alaudio.py | mpatacchiola/naogui | 2c71c82362edcf66b1a24a5f2af23e9719011146 | [
"MIT"
] | null | null | null | zpgc_2016c/include/alaudio.py | mpatacchiola/naogui | 2c71c82362edcf66b1a24a5f2af23e9719011146 | [
"MIT"
] | 4 | 2016-04-01T10:02:39.000Z | 2018-04-14T08:05:20.000Z | # -*- encoding: UTF-8 -*-
# NB: this script targets the Python 2.7 NAOqi SDK, so it keeps Python 2 syntax.
import sys
import time

# import this module for the nao.py module
sys.path.insert(1, "../include/pynaoqi-python2.7-2.1.3.3-linux64")
from naoqi import ALProxy

if (len(sys.argv) < 2):
    print "Usage: 'python audioplayer_play.py IP [PORT]'"
    sys.exit(1)

IP = sys.argv[1]
PORT = 9559
if (len(sys.argv) > 2):
    PORT = sys.argv[2]

try:
    aup = ALProxy("ALAudioPlayer", IP, PORT)
except Exception, e:
    print "Could not create proxy to ALAudioPlayer"
    print "Error was: ", e
    sys.exit(1)

# Load a file and start playing it one second later
fileId = aup.loadFile("/home/nao/naoqi/mp3/NOGAZE_NOPOINT_SYNTH_RP_01_AB_10.wav")
time.sleep(1)
aup.play(fileId)
| 24.785714 | 108 | 0.690202 | 117 | 694 | 4.034188 | 0.615385 | 0.059322 | 0.050847 | 0.050847 | 0.055085 | 0 | 0 | 0 | 0 | 0 | 0 | 0.046552 | 0.164265 | 694 | 27 | 109 | 25.703704 | 0.767241 | 0.165706 | 0 | 0.1 | 0 | 0 | 0.361739 | 0.173913 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.15 | null | null | 0.15 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c9c5681af8dcd47078d649177ff3c037ef1ceadd | 1,035 | py | Python | pyfc/tempcontainers.py | vrga/pyFanController | 99c707febddc2269d9ab062490eb4a9c45f76230 | [
"MIT"
] | null | null | null | pyfc/tempcontainers.py | vrga/pyFanController | 99c707febddc2269d9ab062490eb4a9c45f76230 | [
"MIT"
] | null | null | null | pyfc/tempcontainers.py | vrga/pyFanController | 99c707febddc2269d9ab062490eb4a9c45f76230 | [
"MIT"
] | null | null | null | from typing import Dict
from datetime import datetime, timezone, timedelta

from .common import mean, ValueBuffer


class TemperatureGroup:
    def __init__(self, name, time_read_sec=1):
        self.name = name
        self.data: Dict[str, ValueBuffer] = {}
        self.last_update = datetime.now(tz=timezone.utc) - timedelta(seconds=10)
        self.time_read = timedelta(seconds=time_read_sec)

    def updatable(self):
        return datetime.now(timezone.utc) - self.last_update > self.time_read

    def update(self, name, device):
        if name not in self.data:
            self.data[name] = ValueBuffer(name, 35)
        self.data[name].update(device)
        self.last_update = datetime.now(timezone.utc)

    def mean(self, device) -> float:
        try:
            if device is None:
                return mean(buffer.mean() for buffer in self.data.values())
            return self.data[device].mean()
        except (KeyError, ZeroDivisionError):
            return 35.0
| 29.571429 | 80 | 0.631884 | 130 | 1,035 | 4.930769 | 0.376923 | 0.074883 | 0.065523 | 0.068643 | 0.078003 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010568 | 0.268599 | 1,035 | 34 | 81 | 30.441176 | 0.836196 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.16 | false | 0 | 0.12 | 0 | 0.52 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
c9c99958e25351a3fac1cd198c276bd7ee072c9a | 4,954 | py | Python | tests/test_ISR.py | engeir/isr-spectrum | 1e6f9d4c95a8bbeeb38395474c16d56e9063c665 | [
"MIT"
] | 1 | 2020-05-04T17:03:21.000Z | 2020-05-04T17:03:21.000Z | tests/test_ISR.py | engeir/isr_spectrum | 7ac0562dd71c3d55bf5991c3fe8de3b5d8a55a02 | [
"MIT"
] | 4 | 2022-01-13T16:54:39.000Z | 2022-03-30T21:03:27.000Z | tests/test_ISR.py | engeir/isr_spectrum | 7ac0562dd71c3d55bf5991c3fe8de3b5d8a55a02 | [
"MIT"
] | null | null | null | """This script implements tests for functions used throughout the program.

Run from the directory `program` with the command

    python -m unittest test.test_ISR -b
"""

import multiprocessing as mp

mp.set_start_method("fork")
import unittest  # pylint: disable=C0413

import numpy as np  # pylint: disable=C0413
import scipy.constants as const  # pylint: disable=C0413
import scipy.integrate as si  # pylint: disable=C0413

from isr_spectrum.utils import spectrum_calculation as isr  # pylint: disable=C0413
from isr_spectrum.utils import vdfs  # pylint: disable=C0413


class TestISR(unittest.TestCase):
    """Check if the output from isr_spectrum is as expected.

    Should return two numpy.ndarrays of equal shape.

    Arguments:
        unittest.TestCase {class} -- inherits from unittest
            to make it a TestCase
    """

    @classmethod
    def setUpClass(cls):
        cls.a, cls.b = None, None

    def setUp(self):
        F0 = 430e6
        K_RADAR = -2 * F0 * 2 * np.pi / const.c
        self.sys_set = {
            "K_RADAR": K_RADAR,
            "B": 5e-4,
            "MI": 16,
            "NE": 2e11,
            "NU_E": 0,
            "NU_I": 0,
            "T_E": 5000,
            "T_I": 2000,
            "T_ES": 90000,
            "THETA": 40 * np.pi / 180,
            "Z": 599,
            "mat_file": "fe_zmuE-07.mat",
            "pitch_angle": "all",
        }
        self.params = {"kappa": 3, "vdf": "gauss_shell", "area": False}

    def tearDown(self):
        self.assertIsInstance(self.a, np.ndarray)
        self.assertIsInstance(self.b, np.ndarray)
        self.assertEqual(self.a.shape, self.b.shape, msg="a.shape != b.shape")

    def test_isr_maxwell(self):
        self.a, self.b, meta_data = isr.isr_spectrum(
            "maxwell", self.sys_set, **self.params
        )
        self.assertEqual(meta_data["kappa"], None)
        self.assertEqual(meta_data["vdf"], None)
        self.assertEqual(meta_data["T_ES"], None)
        self.assertEqual(meta_data["Z"], None)
        self.assertEqual(meta_data["mat_file"], None)

    def test_isr_kappa(self):
        self.a, self.b, meta_data = isr.isr_spectrum(
            "kappa", self.sys_set, **self.params
        )
        self.assertEqual(meta_data["kappa"], 3)
        self.assertEqual(meta_data["vdf"], None)
        self.assertEqual(meta_data["T_ES"], None)
        self.assertEqual(meta_data["Z"], None)
        self.assertEqual(meta_data["mat_file"], None)

    def test_isr_long_calc_gauss(self):
        self.a, self.b, meta_data = isr.isr_spectrum(
            "a_vdf", self.sys_set, **self.params
        )
        self.assertEqual(meta_data["kappa"], None)
        self.assertEqual(meta_data["vdf"], "gauss_shell")
        self.assertEqual(meta_data["T_ES"], 90000)
        self.assertEqual(meta_data["Z"], None)
        self.assertEqual(meta_data["mat_file"], None)

    def test_isr_long_calc_real(self):
        self.params["vdf"] = "real_data"
        self.a, self.b, meta_data = isr.isr_spectrum(
            "a_vdf", self.sys_set, **self.params
        )
        self.assertEqual(meta_data["kappa"], None)
        self.assertEqual(meta_data["vdf"], "real_data")
        self.assertEqual(meta_data["T_ES"], None)
        self.assertEqual(meta_data["Z"], 599)
        self.assertEqual(meta_data["mat_file"], "fe_zmuE-07.mat")


# Reference to TestVDF $\label{lst:testVDF}$
class TestVDF(unittest.TestCase):
    """Tests whether the VDFs are normalized.

    Arguments:
        unittest.TestCase {class} -- inherits from unittest
            to make it a TestCase
    """

    @classmethod
    def setUpClass(cls):
        cls.v = np.linspace(0, (6e6) ** (1 / 3), int(4e4)) ** 3
        cls.params = {
            "m": 9.1093837015e-31,
            "T": 1000,
            "kappa": 3,
            "T_ES": 90000,
            "Z": 300,
            "mat_file": "fe_zmuE-07.mat",
            "pitch_angle": "all",
        }
        cls.f = None
        # cls.fs = []

    # @classmethod
    # def tearDownClass(cls):
    #     np.savez('f', v=cls.v, m=cls.fs[0], k=cls.fs[1], r=cls.fs[2])

    def tearDown(self):
        # The function f is scaled with the Jacobian of cartesian to spherical
        f = self.f.f_0() * self.v ** 2 * 4 * np.pi
        res = si.simps(f, self.v)
        self.assertAlmostEqual(res, 1, places=6)

    def test_vdf_maxwell(self):
        self.f = vdfs.F_MAXWELL(self.v, self.params)
        # self.fs.insert(0, self.f.f_0())

    def test_vdf_kappa(self):
        self.f = vdfs.F_KAPPA(self.v, self.params)
        # self.fs.insert(1, self.f.f_0())

    # def test_vdf_kappa_vol2(self):
    #     self.f = vdfs.F_KAPPA_2(self.v, self.params)

    # def test_vdf_gauss_shell(self):
    #     self.f = vdfs.F_GAUSS_SHELL(self.v, self.params)

    def test_vdf_real_data(self):
        self.f = vdfs.F_REAL_DATA(self.v, self.params)
        # self.fs.insert(2, self.f.f_0())


if __name__ == "__main__":
    unittest.main()
| 31.35443 | 83 | 0.591441 | 684 | 4,954 | 4.115497 | 0.238304 | 0.068206 | 0.134991 | 0.16341 | 0.544938 | 0.509414 | 0.472114 | 0.425577 | 0.378686 | 0.356661 | 0 | 0.03533 | 0.268672 | 4,954 | 157 | 84 | 31.55414 | 0.741651 | 0.223658 | 0 | 0.326531 | 0 | 0 | 0.086138 | 0 | 0 | 0 | 0 | 0 | 0.244898 | 1 | 0.122449 | false | 0 | 0.071429 | 0 | 0.214286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c9cec260a495923e5d533fd7fb343a86e99e3d4e | 1,576 | py | Python | app/model/rental.py | almamallo/ejemplo-sphinx | 65a0c07bead651a15b8409b0387ed6d041db1fd0 | [
"Apache-2.0"
] | null | null | null | app/model/rental.py | almamallo/ejemplo-sphinx | 65a0c07bead651a15b8409b0387ed6d041db1fd0 | [
"Apache-2.0"
] | null | null | null | app/model/rental.py | almamallo/ejemplo-sphinx | 65a0c07bead651a15b8409b0387ed6d041db1fd0 | [
"Apache-2.0"
] | null | null | null | """
Rental
======
"""


class Rental:
    """
    Represents the rental of a mooring for a client's boat.

    :param client: The client renting the mooring.
    :param boat: The client's boat the rental is for.
    :param start_date: Start date of the rental.
    :param end_date: End date of the rental.
    :param position: Position of the mooring.

    :type client: Client
    :type boat: Boat
    :type start_date: datetime
    :type end_date: datetime
    :type position: string
    """

    daily_rental_price = 50

    def __init__(self, client, boat, start_date, end_date, position):
        """Initializes a Rental object."""
        self.client = client
        self.start_date = start_date
        self.end_date = end_date
        self.boat = boat
        self.position = position
        self.__price = self.calculate_price()

    @property
    def price(self):
        """Returns the price of the mooring."""
        return self.__price

    def calculate_price(self):
        """
        Calculates the mooring price from the dates and the daily price.

        :return: The rental price as a function of the dates
        :rtype: float
        """
        return (
            (self.end_date - self.start_date).days
            * self.daily_rental_price
            * self.boat.length
        )

    @classmethod
    def change_daily_rental_price(cls, new_price):
        """
        Changes the daily rental price.

        :param new_price: The new rental price.
        :type new_price: float
        """
        cls.daily_rental_price = new_price
| 25.836066 | 90 | 0.635152 | 205 | 1,576 | 4.717073 | 0.307317 | 0.068252 | 0.066184 | 0.03516 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001764 | 0.280457 | 1,576 | 60 | 91 | 26.266667 | 0.85097 | 0.484772 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.235294 | false | 0 | 0 | 0 | 0.470588 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c9d5727ae103d1267d3ee7871bd00a41ee83b324 | 389 | py | Python | utils/mixins.py | TheKiddos/StaRat | 33807d73276563f636b430e1bbfcb65b645869f7 | [
"MIT"
] | 1 | 2021-05-18T16:33:10.000Z | 2021-05-18T16:33:10.000Z | utils/mixins.py | TheKiddos/StaRat | 33807d73276563f636b430e1bbfcb65b645869f7 | [
"MIT"
] | 3 | 2021-05-18T16:02:32.000Z | 2021-05-21T15:20:12.000Z | utils/mixins.py | TheKiddos/StaRat | 33807d73276563f636b430e1bbfcb65b645869f7 | [
"MIT"
] | 1 | 2021-09-12T22:56:09.000Z | 2021-09-12T22:56:09.000Z | from rest_framework.permissions import AllowAny


class PublicListRetrieveViewSetMixin:
    """Allow anyone to use the list and retrieve actions; return the default permissions and auth otherwise."""

    allowed_actions = ['list', 'retrieve']

    def get_permissions(self):
        if self.action in self.allowed_actions:
            return [AllowAny()]
        return super().get_permissions()
| 32.416667 | 102 | 0.714653 | 43 | 389 | 6.348837 | 0.651163 | 0.095238 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.200514 | 389 | 11 | 103 | 35.363636 | 0.877814 | 0.236504 | 0 | 0 | 0 | 0 | 0.041237 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.142857 | 0 | 0.857143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
c9da7ebc9ebd24b5ce44705e2e55bdb890c52407 | 9,176 | py | Python | Lib/test/test_mailcap.py | sireliah/polish-python | 605df4944c2d3bc25f8bf6964b274c0a0d297cc3 | [
"PSF-2.0"
] | 1 | 2018-06-21T18:21:24.000Z | 2018-06-21T18:21:24.000Z | Lib/test/test_mailcap.py | sireliah/polish-python | 605df4944c2d3bc25f8bf6964b274c0a0d297cc3 | [
"PSF-2.0"
] | null | null | null | Lib/test/test_mailcap.py | sireliah/polish-python | 605df4944c2d3bc25f8bf6964b274c0a0d297cc3 | [
"PSF-2.0"
] | null | null | null | zaimportuj mailcap
zaimportuj os
zaimportuj shutil
zaimportuj test.support
zaimportuj unittest

# Location of mailcap file
MAILCAPFILE = test.support.findfile("mailcap.txt")

# Dict to act jako mock mailcap entry dla this test
# The keys oraz values should match the contents of MAILCAPFILE
MAILCAPDICT = {
    'application/x-movie':
        [{'compose': 'moviemaker %s',
          'x11-bitmap': '"/usr/lib/Zmail/bitmaps/movie.xbm"',
          'description': '"Movie"',
          'view': 'movieplayer %s'}],
    'application/*':
        [{'copiousoutput': '',
          'view': 'echo "This jest \\"%t\\" but jest 50 \\% Greek to me" \\; cat %s'}],
    'audio/basic':
        [{'edit': 'audiocompose %s',
          'compose': 'audiocompose %s',
          'description': '"An audio fragment"',
          'view': 'showaudio %s'}],
    'video/mpeg':
        [{'view': 'mpeg_play %s'}],
    'application/postscript':
        [{'needsterminal': '', 'view': 'ps-to-terminal %s'},
         {'compose': 'idraw %s', 'view': 'ps-to-terminal %s'}],
    'application/x-dvi':
        [{'view': 'xdvi %s'}],
    'message/external-body':
        [{'composetyped': 'extcompose %s',
          'description': '"A reference to data stored w an external location"',
          'needsterminal': '',
          'view': 'showexternal %s %{access-type} %{name} %{site} %{directory} %{mode} %{server}'}],
    'text/richtext':
        [{'test': 'test "`echo %{charset} | tr \'[A-Z]\' \'[a-z]\'`" = iso-8859-8',
          'copiousoutput': '',
          'view': 'shownonascii iso-8859-8 -e richtext -p %s'}],
    'image/x-xwindowdump':
        [{'view': 'display %s'}],
    'audio/*':
        [{'view': '/usr/local/bin/showaudio %t'}],
    'video/*':
        [{'view': 'animate %s'}],
    'application/frame':
        [{'print': '"cat %s | lp"', 'view': 'showframe %s'}],
    'image/rgb':
        [{'view': 'display %s'}]
}


klasa HelperFunctionTest(unittest.TestCase):

    def test_listmailcapfiles(self):
        # The zwróć value dla listmailcapfiles() will vary by system.
        # So verify that listmailcapfiles() returns a list of strings that jest of
        # non-zero length.
        mcfiles = mailcap.listmailcapfiles()
        self.assertIsInstance(mcfiles, list)
        dla m w mcfiles:
            self.assertIsInstance(m, str)

        przy test.support.EnvironmentVarGuard() jako env:
            # According to RFC 1524, jeżeli MAILCAPS env variable exists, use that
            # oraz only that.
            jeżeli "MAILCAPS" w env:
                env_mailcaps = env["MAILCAPS"].split(os.pathsep)
            inaczej:
                env_mailcaps = ["/testdir1/.mailcap", "/testdir2/mailcap"]
                env["MAILCAPS"] = os.pathsep.join(env_mailcaps)
                mcfiles = mailcap.listmailcapfiles()
        self.assertEqual(env_mailcaps, mcfiles)

    def test_readmailcapfile(self):
        # Test readmailcapfile() using test file. It should match MAILCAPDICT.
        przy open(MAILCAPFILE, 'r') jako mcf:
            d = mailcap.readmailcapfile(mcf)
        self.assertDictEqual(d, MAILCAPDICT)

    def test_lookup(self):
        # Test without key
        expected = [{'view': 'mpeg_play %s'}, {'view': 'animate %s'}]
        actual = mailcap.lookup(MAILCAPDICT, 'video/mpeg')
        self.assertListEqual(expected, actual)

        # Test przy key
        key = 'compose'
        expected = [{'edit': 'audiocompose %s',
                     'compose': 'audiocompose %s',
                     'description': '"An audio fragment"',
                     'view': 'showaudio %s'}]
        actual = mailcap.lookup(MAILCAPDICT, 'audio/basic', key)
        self.assertListEqual(expected, actual)

    def test_subst(self):
        plist = ['id=1', 'number=2', 'total=3']
        # test case: ([field, MIMEtype, filename, plist=[]], <expected string>)
        test_cases = [
            (["", "audio/*", "foo.txt"], ""),
            (["echo foo", "audio/*", "foo.txt"], "echo foo"),
            (["echo %s", "audio/*", "foo.txt"], "echo foo.txt"),
            (["echo %t", "audio/*", "foo.txt"], "echo audio/*"),
            (["echo \%t", "audio/*", "foo.txt"], "echo %t"),
            (["echo foo", "audio/*", "foo.txt", plist], "echo foo"),
            (["echo %{total}", "audio/*", "foo.txt", plist], "echo 3")
        ]
        dla tc w test_cases:
            self.assertEqual(mailcap.subst(*tc[0]), tc[1])


klasa GetcapsTest(unittest.TestCase):

    def test_mock_getcaps(self):
        # Test mailcap.getcaps() using mock mailcap file w this dir.
        # Temporarily override any existing system mailcap file by pointing the
        # MAILCAPS environment variable to our mock file.
        przy test.support.EnvironmentVarGuard() jako env:
            env["MAILCAPS"] = MAILCAPFILE
            caps = mailcap.getcaps()
            self.assertDictEqual(caps, MAILCAPDICT)

    def test_system_mailcap(self):
        # Test mailcap.getcaps() przy mailcap file(s) on system, jeżeli any.
        caps = mailcap.getcaps()
        self.assertIsInstance(caps, dict)

        mailcapfiles = mailcap.listmailcapfiles()
        existingmcfiles = [mcf dla mcf w mailcapfiles jeżeli os.path.exists(mcf)]
        jeżeli existingmcfiles:
            # At least 1 mailcap file exists, so test that.
            dla (k, v) w caps.items():
                self.assertIsInstance(k, str)
                self.assertIsInstance(v, list)
                dla e w v:
                    self.assertIsInstance(e, dict)
        inaczej:
            # No mailcap files on system. getcaps() should zwróć empty dict.
            self.assertEqual({}, caps)


klasa FindmatchTest(unittest.TestCase):

    def test_findmatch(self):
        # default findmatch arguments
        c = MAILCAPDICT
        fname = "foo.txt"
        plist = ["access-type=default", "name=john", "site=python.org",
                 "directory=/tmp", "mode=foo", "server=bar"]
        audio_basic_entry = {
            'edit': 'audiocompose %s',
            'compose': 'audiocompose %s',
            'description': '"An audio fragment"',
            'view': 'showaudio %s'
        }
        audio_entry = {"view": "/usr/local/bin/showaudio %t"}
        video_entry = {'view': 'animate %s'}
        message_entry = {
            'composetyped': 'extcompose %s',
            'description': '"A reference to data stored w an external location"',
            'needsterminal': '',
            'view': 'showexternal %s %{access-type} %{name} %{site} %{directory} %{mode} %{server}'
        }

        # test case: (findmatch args, findmatch keyword args, expected output)
        #   positional args: caps, MIMEtype
        #   keyword args: key="view", filename="/dev/null", plist=[]
        #   output: (command line, mailcap entry)
        cases = [
            ([{}, "video/mpeg"], {}, (Nic, Nic)),
            ([c, "foo/bar"], {}, (Nic, Nic)),
            ([c, "video/mpeg"], {}, ('mpeg_play /dev/null', {'view': 'mpeg_play %s'})),
            ([c, "audio/basic", "edit"], {}, ("audiocompose /dev/null", audio_basic_entry)),
            ([c, "audio/basic", "compose"], {}, ("audiocompose /dev/null", audio_basic_entry)),
            ([c, "audio/basic", "description"], {}, ('"An audio fragment"', audio_basic_entry)),
            ([c, "audio/basic", "foobar"], {}, (Nic, Nic)),
            ([c, "video/*"], {"filename": fname}, ("animate %s" % fname, video_entry)),
            ([c, "audio/basic", "compose"],
             {"filename": fname},
             ("audiocompose %s" % fname, audio_basic_entry)),
            ([c, "audio/basic"],
             {"key": "description", "filename": fname},
             ('"An audio fragment"', audio_basic_entry)),
            ([c, "audio/*"],
             {"filename": fname},
             ("/usr/local/bin/showaudio audio/*", audio_entry)),
            ([c, "message/external-body"],
             {"plist": plist},
             ("showexternal /dev/null default john python.org /tmp foo bar", message_entry))
        ]
        self._run_cases(cases)

    @unittest.skipUnless(os.name == "posix", "Requires 'test' command on system")
    def test_test(self):
        # findmatch() will automatically check any "test" conditions oraz skip
        # the entry jeżeli the check fails.
        caps = {"test/pass": [{"test": "test 1 -eq 1"}],
                "test/fail": [{"test": "test 1 -eq 0"}]}
        # test case: (findmatch args, findmatch keyword args, expected output)
        #   positional args: caps, MIMEtype, key ("test")
        #   keyword args: N/A
        #   output: (command line, mailcap entry)
        cases = [
            # findmatch will zwróć the mailcap entry dla test/pass because it evaluates to true
            ([caps, "test/pass", "test"], {}, ("test 1 -eq 1", {"test": "test 1 -eq 1"})),
            # findmatch will zwróć Nic because test/fail evaluates to false
            ([caps, "test/fail", "test"], {}, (Nic, Nic))
        ]
        self._run_cases(cases)

    def _run_cases(self, cases):
        dla c w cases:
            self.assertEqual(mailcap.findmatch(*c[0], **c[1]), c[2])


jeżeli __name__ == '__main__':
    unittest.main()
| 42.091743 | 104 | 0.549041 | 977 | 9,176 | 5.103378 | 0.253838 | 0.028079 | 0.015443 | 0.015042 | 0.282792 | 0.23586 | 0.191536 | 0.179503 | 0.165062 | 0.13578 | 0 | 0.005794 | 0.28531 | 9,176 | 217 | 105 | 42.285714 | 0.754498 | 0.168701 | 0 | 0.134146 | 0 | 0.012195 | 0.316877 | 0.02238 | 0 | 0 | 0 | 0 | 0.085366 | 0 | null | null | 0.012195 | 0.030488 | null | null | 0.006098 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c9ec8f57071dfa5f15c145a3e4c4fa72401b3d22 | 449 | py | Python | tests/graphite_test.py | datastax-labs/hunter | 3631cc3fa529991297a8b631bbae15b138cce307 | [
"Apache-2.0"
] | 17 | 2021-09-03T07:32:40.000Z | 2022-03-24T21:56:22.000Z | tests/graphite_test.py | datastax-labs/hunter | 3631cc3fa529991297a8b631bbae15b138cce307 | [
"Apache-2.0"
] | 1 | 2021-12-02T14:05:07.000Z | 2021-12-02T14:05:07.000Z | tests/graphite_test.py | datastax-labs/hunter | 3631cc3fa529991297a8b631bbae15b138cce307 | [
"Apache-2.0"
] | 2 | 2022-01-18T18:40:41.000Z | 2022-03-11T15:33:25.000Z | from hunter.graphite import compress_target_paths


def test_compress_target_paths():
    paths = [
        "foo.bar.p50",
        "foo.bar.p75",
        "foo.bar.p99",
        "foo.foo.baz.p50",
        "foo.foo.baz.p75",
        "foo.foo.baz.throughput",
        "something.else",
    ]
    assert set(compress_target_paths(paths)) == {
        "foo.bar.{p50,p75,p99}",
        "foo.foo.baz.{p50,p75,throughput}",
        "something.else",
    }
| 22.45 | 49 | 0.556793 | 55 | 449 | 4.418182 | 0.363636 | 0.098765 | 0.148148 | 0.197531 | 0.395062 | 0.271605 | 0.271605 | 0 | 0 | 0 | 0 | 0.061538 | 0.276169 | 449 | 19 | 50 | 23.631579 | 0.686154 | 0 | 0 | 0.125 | 0 | 0 | 0.36971 | 0.167038 | 0 | 0 | 0 | 0 | 0.0625 | 1 | 0.0625 | false | 0 | 0.0625 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c9f3cf8902da7131992e192430916de07d4a813b | 1,209 | py | Python | typeddfs/_mixins/_pretty_print_mixin.py | dmyersturnbull/typed-dfs | 57504da6cf2085e96ef62f35afcc2f733a9910e0 | [
"Apache-2.0"
] | 5 | 2020-07-14T14:03:35.000Z | 2022-01-15T03:34:15.000Z | typeddfs/_mixins/_pretty_print_mixin.py | kokellab/typed-dfs | 11c80080446a28e618a379569606f1c1ba387062 | [
"Apache-2.0"
] | 83 | 2021-01-06T06:19:12.000Z | 2022-03-28T03:06:18.000Z | typeddfs/_mixins/_pretty_print_mixin.py | kokellab/typed-dfs | 11c80080446a28e618a379569606f1c1ba387062 | [
"Apache-2.0"
] | null | null | null | """
Mixin that just overrides _repr_html.
"""
class _PrettyPrintMixin:
"""
A DataFrame with an overridden ``_repr_html_`` and some simple additional methods.
"""
def _repr_html_(self) -> str:
"""
Renders HTML for display() in Jupyter notebooks.
Jupyter automatically uses this function.
Returns:
Just a string containing HTML, which will be wrapped in an HTML object
"""
# noinspection PyProtectedMember
return (
f"<strong>{self.__class__.__name__}: {self._dims()}</strong>\n{super()._repr_html_()}"
)
def _dims(self) -> str:
"""
Returns a string describing the dimensionality.
Returns:
A text description of the dimensions of this DataFrame
"""
# we could handle multi-level columns
# but they're quite rare, and the number of rows is probably obvious when looking at it
if len(self.index.names) > 1:
return f"{len(self)} rows × {len(self.columns)} columns, {len(self.index.names)} index columns"
else:
return f"{len(self)} rows × {len(self.columns)} columns"
__all__ = ["_PrettyPrintMixin"]
| 30.225 | 107 | 0.610422 | 144 | 1,209 | 4.951389 | 0.576389 | 0.058906 | 0.033661 | 0.047686 | 0.112202 | 0.112202 | 0.112202 | 0.112202 | 0.112202 | 0.112202 | 0 | 0.001159 | 0.286187 | 1,209 | 39 | 108 | 31 | 0.822711 | 0.468983 | 0 | 0 | 0 | 0.181818 | 0.435028 | 0.19774 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
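The `_dims` method above branches on the number of index levels when formatting the dimensionality string. The same formatting rule in isolation (the function name `describe_dims` is invented for illustration):

```python
def describe_dims(n_rows, n_cols, n_index_levels):
    """Mirror of _PrettyPrintMixin._dims: mention index columns only for MultiIndex."""
    if n_index_levels > 1:
        return f"{n_rows} rows × {n_cols} columns, {n_index_levels} index columns"
    return f"{n_rows} rows × {n_cols} columns"
```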
c9f5147f17c4bc6062891d7e0f8f3bf64753259d | 1,694 | py | Python | client/src/FifoFile.py | tommccallum/smartbot | 7241aa80a8dfa1f67e411c9000d65addd81ebd3f | [
"MIT"
] | 1 | 2021-01-27T11:18:54.000Z | 2021-01-27T11:18:54.000Z | client/src/FifoFile.py | tommccallum/smartbot | 7241aa80a8dfa1f67e411c9000d65addd81ebd3f | [
"MIT"
] | null | null | null | client/src/FifoFile.py | tommccallum/smartbot | 7241aa80a8dfa1f67e411c9000d65addd81ebd3f | [
"MIT"
] | null | null | null | import logging
import os
class FifoFile:
"""
Fifo object that handles directory and file creation
"""
instance_counter = 1
"""Makes any fifo name unique"""
def __init__(self, fifo_path=None, filename_prefix = "smartbot"):
self.fifo_filename = "{}_{}.{}".format(filename_prefix, str(FifoFile.instance_counter),"fifo")
FifoFile.instance_counter += 1
self.full_path = self._make_fifo(fifo_path, self.fifo_filename)
def write(self, command):
logging.debug("Sending command '{}' to fifo '{}'".format(command,self.full_path))
with open(self.full_path, "w") as out_file:
out_file.write(command + "\n")
logging.info("Written '" + command + "' to " + self.full_path)
def _make_fifo(self, path, filename):
"""
Creates fifo file and returns path to it
"""
self._create_fifo_directory(path)
full_path = os.path.join(path, filename)
self._create_fifo_file(full_path)
return full_path
def _create_fifo_file(self, full_path):
if not os.path.exists(full_path):
logging.info("Making fifo at {}".format(full_path))
os.mkfifo(full_path)
def _create_fifo_directory(self, path):
if not os.path.isdir(path):
logging.info("Creating directory: {}".format(path))
os.makedirs(path)
def delete(self):
logging.debug("attempting to remove fifo {}".format(self.full_path))
if self.full_path and os.path.exists(self.full_path):
logging.debug("Removing {}".format(self.full_path))
os.remove(self.full_path)
def __del__(self):
self.delete()
| 33.215686 | 102 | 0.62987 | 217 | 1,694 | 4.677419 | 0.299539 | 0.126108 | 0.118227 | 0.029557 | 0.070936 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001563 | 0.244392 | 1,694 | 50 | 103 | 33.88 | 0.791406 | 0.0549 | 0 | 0 | 0 | 0 | 0.096732 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.212121 | false | 0 | 0.060606 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
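The `FifoFile` class above wraps directory creation, `os.mkfifo`, and cleanup. A hedged sketch of the same lifecycle using only the standard library (POSIX only; the directory and filename here are temporary placeholders, not smartbot's real paths):

```python
import os
import stat
import tempfile

# Create the fifo directory, make the named pipe, verify it, then clean up.
fifo_dir = tempfile.mkdtemp()
fifo_path = os.path.join(fifo_dir, "smartbot_1.fifo")
if not os.path.exists(fifo_path):
    os.mkfifo(fifo_path)
is_fifo = stat.S_ISFIFO(os.stat(fifo_path).st_mode)
os.remove(fifo_path)
```

Note that, as in `FifoFile.write`, opening a fifo for writing blocks until a reader opens the other end, so a real client needs a consumer process.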
a003824398196538a13a8f6da8daad758512926f | 2,195 | py | Python | run.py | msk-5s/hclust-uniform | 81986510d529d5ca3dd5f81515f010078b729446 | [
"MIT"
] | null | null | null | run.py | msk-5s/hclust-uniform | 81986510d529d5ca3dd5f81515f010078b729446 | [
"MIT"
] | null | null | null | run.py | msk-5s/hclust-uniform | 81986510d529d5ca3dd5f81515f010078b729446 | [
"MIT"
] | null | null | null | # SPDX-License-Identifier: MIT
"""
This script runs phase identification using a single set of parameters.
Note that the results in this script may differ slightly from the results obtained
from `run_suite.py`. This is because the random number generator is only invoked
once in this script, whereas it is invoked multiple times in the suite.
"""
import numpy as np
import sklearn.metrics
import model
import data_factory
#---------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------
def main(): # pylint: disable=too-many-locals, too-many-statements
"""
The main function.
"""
    # We want the randomness to be repeatable.
rng = np.random.default_rng(seed=1337)
#***********************************************************************************************
# Load the data.
#***********************************************************************************************
load = data_factory.make_data(noise_percent=0.005, rng=rng)
labels = data_factory.make_labels()
start = 0
days = 7
width = 96 * days
load = load[start:(start + width), :]
#***********************************************************************************************
# Run phase identification.
#***********************************************************************************************
predictions = model.predict_hclust(labels=labels, load=load, metric="correlation")
accuracy = sklearn.metrics.accuracy_score(y_true=labels, y_pred=predictions)
#***********************************************************************************************
# Print out results.
#***********************************************************************************************
print("-"*50)
print(f"Accuracy: {accuracy}")
print("-"*50)
#---------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------
if __name__ == '__main__':
main()
| 39.196429 | 100 | 0.37631 | 168 | 2,195 | 4.797619 | 0.577381 | 0.037221 | 0.029777 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008312 | 0.123007 | 2,195 | 55 | 101 | 39.909091 | 0.41039 | 0.675171 | 0 | 0.105263 | 0 | 0 | 0.060741 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0.210526 | 0 | 0.263158 | 0.157895 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
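`run.py` above windows the load matrix to `days` worth of readings, where `96 * days` assumes 96 samples per day (15-minute sampling). The slicing step in isolation, on toy data:

```python
import numpy as np

# Keep `days` worth of 15-minute samples (96 per day) starting at `start`.
start, days = 0, 7
width = 96 * days
load = np.arange(2000).reshape(1000, 2)  # toy (timesteps, meters) matrix
window = load[start:start + width, :]
```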
a00c68e4f804718b72bfbfce90e19f26df409b91 | 1,556 | py | Python | backend/tournesol/throttling.py | iamnkc/tournesol | 4a09985f494577917c357783a37dfae02c57fd82 | [
"CC0-1.0"
] | null | null | null | backend/tournesol/throttling.py | iamnkc/tournesol | 4a09985f494577917c357783a37dfae02c57fd82 | [
"CC0-1.0"
] | null | null | null | backend/tournesol/throttling.py | iamnkc/tournesol | 4a09985f494577917c357783a37dfae02c57fd82 | [
"CC0-1.0"
] | null | null | null | from rest_framework.throttling import AnonRateThrottle, ScopedRateThrottle, UserRateThrottle
class BurstAnonRateThrottle(AnonRateThrottle):
"""
    Limit the rate of API calls that may be made by anonymous users.
Should be used to define a rate for a short period of time.
"""
scope = "anon_burst"
class BurstUserRateThrottle(UserRateThrottle):
"""
    Limit the rate of API calls that may be made by authenticated users.
Should be used to define a rate for a short period of time.
"""
scope = "user_burst"
class SustainedAnonRateThrottle(AnonRateThrottle):
"""
    Limit the rate of API calls that may be made by anonymous users.
Should be used in addition to `BurstAnonRateThrottle` to define a rate
that can be sustained for a longer period.
"""
scope = "anon_sustained"
class SustainedUserRateThrottle(UserRateThrottle):
"""
    Limit the rate of API calls that may be made by authenticated users.
Should be used in addition to `BurstUserRateThrottle` to define a rate
that can be sustained for a longer period.
"""
scope = "user_sustained"
class PostScopeRateThrottle(ScopedRateThrottle):
"""
Limit the rate of only HTTP POST by different amounts for various parts
of the API.
All other HTTP methods are authorized, and passed to the next throttle
in the chain.
"""
def allow_request(self, request, view):
if request.method != "POST":
return True
return super().allow_request(request, view)
| 26.372881 | 92 | 0.705656 | 204 | 1,556 | 5.348039 | 0.362745 | 0.036664 | 0.054995 | 0.064161 | 0.493126 | 0.493126 | 0.493126 | 0.471127 | 0.471127 | 0.471127 | 0 | 0 | 0.235219 | 1,556 | 58 | 93 | 26.827586 | 0.916807 | 0.514781 | 0 | 0 | 0 | 0 | 0.080871 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.071429 | 0 | 0.928571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
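`PostScopeRateThrottle` above subjects only HTTP POST requests to the rate check and lets every other method through. The pass-through pattern, stripped of Django REST framework (the class and parameter names here are invented for illustration):

```python
class PostOnlyThrottle:
    """Only POST requests hit the rate check; other methods always pass."""

    def __init__(self, allow_post):
        # Stand-in for the parent throttle's verdict (a fixed bool here).
        self.allow_post = allow_post

    def allow_request(self, method):
        if method != "POST":
            return True
        return self.allow_post
```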
a00cb8f36d21aa55ca080b23c6c59edf9b521eb9 | 5,357 | py | Python | autodraft/draftHost/logic/fantasy.py | gnmerritt/autodraft | b4071f9cc5eaf57beb19ba1aec99377afdbfb09a | [
"MIT"
] | null | null | null | autodraft/draftHost/logic/fantasy.py | gnmerritt/autodraft | b4071f9cc5eaf57beb19ba1aec99377afdbfb09a | [
"MIT"
] | null | null | null | autodraft/draftHost/logic/fantasy.py | gnmerritt/autodraft | b4071f9cc5eaf57beb19ba1aec99377afdbfb09a | [
"MIT"
] | null | null | null | import datetime as d
import uuid
from django.utils import timezone
from draftHost import models
from json import JsonObject, JsonTime, EmailMasker
from performance import ReadOnlyCachedAttribute
import nfl, draft
class JsonFantasyRoster(JsonObject):
fields = ['slots',]
show_id = False
class JsonFantasyDraft(JsonObject):
fields = ['admin', 'team_limit',]
functions = ['time_per_pick_s',
'teams',
'roster',
'draft_start',
'current_time',
'selections',]
@ReadOnlyCachedAttribute
def teams(self):
return models.FantasyTeam.objects.filter(draft=self.db_object)
def get_time_per_pick_s(self):
return self.db_object.time_per_pick
def get_teams(self):
json = []
for team in self.teams:
json_player = JsonFantasyTeam(team)
json_player.show_draft_id = False # already showing the draft...
json.append(json_player.json_dict())
return json
def get_draft_start(self):
time = JsonTime(self.db_object.draft_start)
time.now = timezone.now()
return time.json_dict()
def get_roster(self):
return JsonFantasyRoster(self.db_object.roster).json_dict()
def get_current_time(self):
return JsonTime(d.datetime.now()).json_dict()
def get_selections(self):
selections_queryset = models.FantasySelection.objects.filter(
draft_pick__fantasy_team__draft=self.db_object
)
selections = []
for s in selections_queryset:
json = JsonFantasySelection(s)
json.show_team = True
selections.append(json.json_dict())
return selections
class JsonFantasyTeam(JsonObject):
fields = ['name',]
functions = ['picks', 'selection_ids', 'draft_id', 'email', 'players']
pick_options = { 'show_team': False, }
show_players = False
mask_email = True
@ReadOnlyCachedAttribute
def builder(self):
return draft.PickBuilder(self.db_object)
def get_email(self):
email = self.db_object.email
if self.mask_email:
return EmailMasker(email).masked
return email
def get_picks(self):
return self.builder.get_picks(is_team=True,
options=self.pick_options)
def get_selection_ids(self):
return self.builder.get_selections(is_team=True)
def get_draft_id(self):
return self.db_object.draft.id
def get_players(self):
selections = self.builder.raw_selections(is_team=True)
players = [s.player for s in selections]
json_players = []
for p in players:
json_player = nfl.JsonNflPlayer(p)
json_player.show_team = False
json_players.append(json_player.json_dict())
return json_players
class FantasyTeamCreator(object):
"""Adds a FantasyTeam to a draft given a team name & email"""
def __init__(self, team_form_data):
self.data = team_form_data
def create_team(self):
"""Creates and returns the new team or None on error"""
draft = models.FantasyDraft.objects.get(pk=self.data['draft_id'])
if draft:
del self.data['draft_id']
self.data['draft'] = draft
# check the draft password
if draft.password:
form_pw = self.data['password']
if form_pw != draft.password:
return None
if 'password' in self.data:
del self.data['password']
self.data['auth_key'] = self.get_auth_key()
team, created = models.FantasyTeam.objects.get_or_create(**self.data)
return team
else:
return None
def get_auth_key(self):
key = uuid.uuid4()
return str(key)
class JsonFantasyPick(JsonObject):
fields = ['pick_number',]
functions = ['team', 'expires', 'starts', 'active']
now = None
def get_starts(self):
return self.__time(self.db_object.starts)
def get_expires(self):
return self.__time(self.db_object.expires)
def __time(self, when):
time = JsonTime(when)
if self.now is not None:
time.now = self.now
return time.json_dict()
def get_team(self):
team = JsonFantasyTeam(self.db_object.fantasy_team)
team.show_picks = False
return team.json_dict()
def get_active(self):
if self.now is not None:
return self.db_object.is_active(self.now)
class JsonFantasySelection(JsonObject):
functions = ['team', 'draft_pick', 'player', 'when',]
show_team = False
def get_team(self):
team = JsonFantasyTeam(self.db_object.draft_pick.fantasy_team)
team.show_picks = False
return team.json_dict()
def get_draft_pick(self):
pick = JsonFantasyPick(self.db_object.draft_pick)
pick.show_selection_ids = False
if self.show_team:
pick.show_team = False
return pick.json_dict()
def get_player(self):
player = nfl.JsonNflPlayer(self.db_object.player)
player.show_team = self.show_team
return player.json_dict()
def get_when(self):
return JsonTime(self.db_object.when).json_dict()
| 29.434066 | 81 | 0.62479 | 644 | 5,357 | 4.976708 | 0.180124 | 0.037442 | 0.059906 | 0.034945 | 0.176599 | 0.127301 | 0.116069 | 0.061154 | 0.061154 | 0.033073 | 0 | 0.000259 | 0.279821 | 5,357 | 181 | 82 | 29.596685 | 0.830482 | 0.029867 | 0 | 0.100719 | 0 | 0 | 0.04648 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.179856 | false | 0.035971 | 0.05036 | 0.079137 | 0.568345 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
a00df87ea7efb585c2f792eae62f4a5c9e8616d7 | 397 | py | Python | src/__init__.py | smkell/pyfbl | 658e137da78de71c493f0b8ac3e4ec85f4e773ed | [
"MIT"
] | null | null | null | src/__init__.py | smkell/pyfbl | 658e137da78de71c493f0b8ac3e4ec85f4e773ed | [
"MIT"
] | null | null | null | src/__init__.py | smkell/pyfbl | 658e137da78de71c493f0b8ac3e4ec85f4e773ed | [
"MIT"
] | null | null | null | from flags import Flags
class PositionElgibility(Flags):
""" Bitwise enum for marking a player's egibility.abs
"""
catcher = ()
first_base = ()
second_base = ()
third_base = ()
short_stop = ()
outfield = ()
left_field = ()
center_field = ()
right_field = ()
designated_hitter = ()
pitcher = ()
starting_pitcher = ()
relief_pitcher = () | 20.894737 | 57 | 0.584383 | 39 | 397 | 5.692308 | 0.794872 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.289673 | 397 | 19 | 58 | 20.894737 | 0.787234 | 0.123426 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.066667 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
a00ed78e391cd1ec859988cec050c1c543fc5e91 | 5,142 | py | Python | tests/effects/test_equalizer.py | Nikolay-Lysenko/sinethesizer | fe6855186a00e701113ea5bb4fac104bf8497035 | [
"MIT"
] | 8 | 2019-07-25T12:17:38.000Z | 2021-09-04T19:38:21.000Z | tests/effects/test_equalizer.py | Nikolay-Lysenko/sinethesizer | fe6855186a00e701113ea5bb4fac104bf8497035 | [
"MIT"
] | 7 | 2019-07-20T18:04:54.000Z | 2021-08-03T17:31:26.000Z | tests/effects/test_equalizer.py | Nikolay-Lysenko/sinethesizer | fe6855186a00e701113ea5bb4fac104bf8497035 | [
"MIT"
] | 1 | 2019-10-16T18:44:43.000Z | 2019-10-16T18:44:43.000Z | """
Test `sinethesizer.effects.equalizer` module.
Author: Nikolay Lysenko
"""
from typing import Any, Dict, List
import numpy as np
import pytest
from scipy.signal import spectrogram
from sinethesizer.effects.equalizer import apply_equalizer
from sinethesizer.synth.core import Event
from sinethesizer.oscillators import generate_mono_wave
@pytest.mark.parametrize(
"frequencies, frame_rate, kind, kwargs, spectrogram_params, expected",
[
(
# `frequencies`
[100 * x for x in range(1, 20)],
# `frame_rate`
10000,
# `kind`
'absolute',
# `kwargs`
{
'breakpoint_frequencies': [300, 700],
'gains': [0.2, 1.0],
},
# `spectrogram_params`
{'nperseg': 100},
# `expected`
# In this test case, `expected` contains summed over time power
# for frequencies 0, 100, 200, ..., 1900 respectively.
np.array(
[
0.0021011, 0.0249528, 0.0277226, 0.0387388, 0.0996291,
0.2081294, 0.3571571, 0.5181565, 0.55258, 0.557289,
0.5601418, 0.5615491, 0.5621033, 0.5622196, 0.5619461,
0.5608991, 0.5583538, 0.5535695, 0.5462548, 0.536942
]
)
),
(
# `frequencies`
[100 * x for x in range(1, 20)],
# `frame_rate`
10000,
# `kind`
'absolute',
# `kwargs`
{
'breakpoint_frequencies': [0, 500, 1200, 1900],
'gains': [0, 1.0, 0.1, 1.0],
},
# `spectrogram_params`
{'nperseg': 100},
# `expected`
# In this test case, `expected` contains summed over time power
# for frequencies 0, 100, 200, ..., 1900 respectively.
np.array(
[
0.0062764, 0.0342341, 0.0986968, 0.2045612, 0.3501325,
0.4880824, 0.4132437, 0.306272, 0.2138001, 0.1371348,
0.0776751, 0.03646, 0.0184661, 0.0364665, 0.0775099,
0.136432, 0.2119483, 0.3025262, 0.4070148, 0.5069672
]
)
),
(
# `frequencies`
[100 * x for x in range(1, 20)],
# `frame_rate`
10000,
# `kind`
'absolute',
# `kwargs`
{
'breakpoint_frequencies': [0, 500, 1200, 1900, 5000],
'gains': [0, 1.0, 0.1, 1.0, 1.0],
},
# `spectrogram_params`
{'nperseg': 100},
# `expected`
# In this test case, `expected` contains summed over time power
# for frequencies 0, 100, 200, ..., 1900 respectively.
np.array(
[
0.0062764, 0.0342341, 0.0986968, 0.2045612, 0.3501325,
0.4880824, 0.4132437, 0.306272, 0.2138001, 0.1371348,
0.0776751, 0.03646, 0.0184661, 0.0364665, 0.0775099,
0.136432, 0.2119483, 0.3025262, 0.4070148, 0.5069672
]
)
),
(
# `frequencies`
[100 * x for x in range(1, 20)],
# `frame_rate`
10000,
# `kind`
'relative',
# `kwargs`
{
'breakpoint_frequencies_ratios': [0, 5, 12, 19, 50],
'gains': [0, 1.0, 0.1, 1.0, 1.0],
},
# `spectrogram_params`
{'nperseg': 100},
# `expected`
# In this test case, `expected` contains summed over time power
# for frequencies 0, 100, 200, ..., 1900 respectively.
np.array(
[
0.0062764, 0.0342341, 0.0986968, 0.2045612, 0.3501325,
0.4880824, 0.4132437, 0.306272, 0.2138001, 0.1371348,
0.0776751, 0.03646, 0.0184661, 0.0364665, 0.0775099,
0.136432, 0.2119483, 0.3025262, 0.4070148, 0.5069672
]
)
),
]
)
def test_apply_equalizer(
frequencies: List[float], frame_rate: int, kind: str,
kwargs: Dict[str, Any], spectrogram_params: Dict[str, Any],
expected: np.ndarray
) -> None:
"""Test `apply_equalizer` function."""
waves = [
generate_mono_wave(
'sine', frequency, np.ones(frame_rate), frame_rate
)
for frequency in frequencies
]
sound = sum(waves)
sound = np.vstack((sound, sound))
event = Event(
instrument='any_instrument',
start_time=0,
duration=1,
frequency=min(frequencies),
velocity=1,
effects='',
frame_rate=frame_rate
)
sound = apply_equalizer(sound, event, kind, **kwargs)
spc = spectrogram(sound[0], frame_rate, **spectrogram_params)[2]
result = spc.sum(axis=1)[:len(expected)]
np.testing.assert_almost_equal(result, expected)
| 33.174194 | 75 | 0.483469 | 533 | 5,142 | 4.596623 | 0.26454 | 0.040408 | 0.006122 | 0.029388 | 0.546939 | 0.546939 | 0.546939 | 0.546939 | 0.542857 | 0.542857 | 0 | 0.257473 | 0.394983 | 5,142 | 154 | 76 | 33.38961 | 0.530055 | 0.168028 | 0 | 0.336364 | 0 | 0 | 0.061408 | 0.022437 | 0 | 0 | 0 | 0 | 0.009091 | 1 | 0.009091 | false | 0 | 0.063636 | 0 | 0.072727 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a018fc0229ba9869182407ca67226ca51fe481bd | 1,497 | py | Python | migrations/versions/43162db35c55_.py | NeverLeft/FLASKTPP | 3480131f3386bfc86e45a914f2140949863641dd | [
"Apache-2.0"
] | null | null | null | migrations/versions/43162db35c55_.py | NeverLeft/FLASKTPP | 3480131f3386bfc86e45a914f2140949863641dd | [
"Apache-2.0"
] | null | null | null | migrations/versions/43162db35c55_.py | NeverLeft/FLASKTPP | 3480131f3386bfc86e45a914f2140949863641dd | [
"Apache-2.0"
] | null | null | null | """empty message
Revision ID: 43162db35c55
Revises: 8958fdf1725c
Create Date: 2018-06-12 14:27:07.207294
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = '43162db35c55'
down_revision = '8958fdf1725c'
branch_labels = None
depends_on = None
def upgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.create_table('movies',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('showname', sa.String(length=32), nullable=True),
sa.Column('shownameen', sa.String(length=64), nullable=True),
sa.Column('director', sa.String(length=32), nullable=True),
sa.Column('leadingRole', sa.String(length=256), nullable=True),
sa.Column('type', sa.String(length=32), nullable=True),
sa.Column('country', sa.String(length=64), nullable=True),
sa.Column('language', sa.String(length=32), nullable=True),
sa.Column('duration', sa.Integer(), nullable=True),
sa.Column('screeningmodel', sa.String(length=16), nullable=True),
sa.Column('openday', sa.DateTime(), nullable=True),
sa.Column('backgroundpicture', sa.String(length=256), nullable=True),
sa.Column('flag', sa.Integer(), nullable=True),
sa.Column('isdelete', sa.Boolean(), nullable=True),
sa.PrimaryKeyConstraint('id')
)
# ### end Alembic commands ###
def downgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.drop_table('movies')
# ### end Alembic commands ###
| 33.266667 | 73 | 0.683367 | 191 | 1,497 | 5.329843 | 0.371728 | 0.11002 | 0.178782 | 0.235756 | 0.428291 | 0.428291 | 0.371316 | 0.371316 | 0 | 0 | 0 | 0.057858 | 0.145625 | 1,497 | 44 | 74 | 34.022727 | 0.738077 | 0.197061 | 0 | 0 | 0 | 0 | 0.132189 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.076923 | 0 | 0.153846 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a01d1f0078acd5d1336952fdc0ce79688a83b03b | 215 | py | Python | scripts/check_db.py | brl0/bripy | 3754b5db651180d58645bd7d32c3d5d12528ebde | [
"MIT"
] | null | null | null | scripts/check_db.py | brl0/bripy | 3754b5db651180d58645bd7d32c3d5d12528ebde | [
"MIT"
] | null | null | null | scripts/check_db.py | brl0/bripy | 3754b5db651180d58645bd7d32c3d5d12528ebde | [
"MIT"
] | null | null | null | from sqlalchemy import create_engine
import pandas as pd
database = 'fileinfo.db'
engine = create_engine(f'sqlite:///{database}')
df = pd.read_sql('SELECT * FROM files;', engine)
print(df.info())
print(df.head())
| 21.5 | 48 | 0.725581 | 32 | 215 | 4.78125 | 0.65625 | 0.156863 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.116279 | 215 | 9 | 49 | 23.888889 | 0.805263 | 0 | 0 | 0 | 0 | 0 | 0.237209 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.285714 | 0 | 0.285714 | 0.285714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a01e111c243836aae897ed99b45bdd0d52ea0940 | 17,656 | py | Python | Train_SDAE/stacked_dae.py | glrs/StackedDAE | c21e851dc13e11f201ce7289e854c05956637986 | [
"Apache-2.0"
] | 28 | 2016-03-01T20:21:38.000Z | 2021-11-22T14:23:00.000Z | Train_SDAE/stacked_dae.py | glrs/StackedDAE | c21e851dc13e11f201ce7289e854c05956637986 | [
"Apache-2.0"
] | null | null | null | Train_SDAE/stacked_dae.py | glrs/StackedDAE | c21e851dc13e11f201ce7289e854c05956637986 | [
"Apache-2.0"
] | 15 | 2016-03-05T18:43:35.000Z | 2022-01-07T12:24:36.000Z | from __future__ import division
import tensorflow as tf
import numpy as np
import time
import sklearn
from sklearn.metrics import precision_score, confusion_matrix
from sklearn.metrics import recall_score, f1_score, roc_curve
from dae import DAE_Layer
from os.path import join as pjoin
#from utils import load_data_sets_pretraining, write_csv
from tools.utils import fill_feed_dict, fill_feed_dict_dae
from tools.evaluate import do_eval_summary, evaluation, do_eval
from tools.config import FLAGS
from tools.visualize import make_heatmap
from tensorflow.python.framework.errors import FailedPreconditionError
class Stacked_DAE(object):
def __init__(self, net_shape, session=None, selfish_layers=False):
""" Stack De-noising Autoencoder (SDAE) initialization
Args:
net_shape: The network architecture of the SDAE
session : The tensorflow session
        selfish_layers: Whether the layers are to be trained individually
                        or fed directly from the output of the previous layer
(Theoretically: using it is faster, but memory costly)
Tips:
Using selfish_layers needs some extra handling.
* Feed each individual De-noising Autoencoder (DAE) directly.
(e.g. feed_dict = {sdae.get_layers[i]._x : input_data})
* Reassign/Reload the input data-set with the data-set for the next
layer, obtained by using the genrate_next_dataset() function.
(e.g. in this case load_data_sets_pretraining(next_dataset, split_only=False))
"""
self._sess = session
self._net_shape = net_shape
self.nHLayers = len(self._net_shape) - 2
self._selfish_layers = selfish_layers
self.loss_summaries = None
if self._selfish_layers:
self._x = None
self._y_dataset = {}
else:
self._x = tf.placeholder(dtype=tf.float32, shape=(FLAGS.batch_size, self._net_shape[0]), name='dae_input_layer')
self._dae_layers = []
self._weights = []
self._biases = []
self.weights = []
self.biases = []
self._create_network()
def _create_network(self):
is_last_layer = False
for layer in xrange(self.nHLayers + 1):
with tf.name_scope("Layer_{0}".format(layer)):
if self._selfish_layers:
x = tf.placeholder(dtype=tf.float32, shape=(FLAGS.batch_size, self._net_shape[layer]), name='dae_input_from_layer_{0}'.format(layer))
self._y_dataset[layer] = []
else:
if layer == 0:
x = self._x
else:
x = self._dae_layers[layer-1].clean_activation()
# x = self._dae_layers[layer-1].get_representation_y
new_x = tf.identity(x)
if layer == self.nHLayers:
is_last_layer = True
if FLAGS.bias_node and layer < self.nHLayers:
# Add bias node (experimental)
bias_node = tf.ones(shape=[FLAGS.batch_size, 1], dtype=tf.float32)
new_x = tf.concat(1, [bias_node, x])
dae_layer = DAE_Layer(in_data=new_x, prev_layer_size=self._net_shape[layer],
next_layer_size=self._net_shape[layer+1], nth_layer=layer+1,
last_layer=is_last_layer)
self._dae_layers.append(dae_layer)
@property
def session(self):
return self._sess
@property
def get_layers(self):
return self._dae_layers
@property
def get_weights(self):
# if len(self.weights) != self.nHLayers + 1:
self.weights = []
for n in xrange(self.nHLayers + 1):
if self.get_layers[n].get_w:
try:
self.weights.append(self.session.run(self.get_layers[n].get_w))
except FailedPreconditionError:
break
else:
break
return self.weights
@property
def get_biases(self):
# if len(self.biases) != self.nHLayers + 1:
self.biases = []
for n in xrange(self.nHLayers + 1):
if self.get_layers[n].get_b:
try:
self.biases.append(self.session.run(self.get_layers[n].get_b))
except FailedPreconditionError:
break
else:
break
return self.biases
def get_activation(self, x, layer, use_fixed=True):
return self.session.run(self.get_layers[layer].clean_activation(x_in=x, use_fixed=use_fixed))
# return self.session.run(tf.sigmoid(tf.nn.bias_add(tf.matmul(x, self.get_weights[layer]), self.get_biases[layer]), name='activate'))
def train(self, cost, layer=None):
# with tf.name_scope("Training"):
# Add a scalar summary for the snapshot loss.
self.loss_summaries = tf.scalar_summary(cost.op.name, cost)
if layer is None:
lr = FLAGS.supervised_learning_rate
else:
lr = self.get_layers[layer]._l_rate
# Create the gradient descent optimizer with the given learning rate.
optimizer = tf.train.GradientDescentOptimizer(lr)
# Create a variable to track the global step.
global_step = tf.Variable(0, trainable=False, name='global_step')
# Use the optimizer to apply the gradients that minimize the loss
# (and also increment the global step counter) as a single training step.
train_op = optimizer.minimize(cost, global_step=global_step)
return train_op, global_step
def calc_last_x(self, X, bias_node=False):
tmp = X
for layer in self.get_layers:
if bias_node:
bias_n = tf.ones(shape=[FLAGS.batch_size, 1], dtype=tf.float32)
tmp = tf.concat(1, [bias_n, tmp])
tmp = layer.clean_activation(x_in=tmp, use_fixed=False)
# print(tmp, self._net_shape[-2], self._net_shape[-1])
# dae_layer = DAE_Layer(in_data=tmp, prev_layer_size=self._net_shape[-2],
# next_layer_size=self._net_shape[-1], nth_layer=len(self._net_shape)-1,
# last_layer=True)
#
# self._dae_layers.append(dae_layer)
# tmp = self.get_layers[-1].clean_activation(x_in=tmp, use_fixed=False)
return tmp
def add_final_layer(self, input_x, bias_node=False):
last_x = self.calc_last_x(input_x, bias_node=bias_node)
print "Last layer added:", last_x.get_shape()
return last_x
# def finetune_net(self):
# last_output = self._x
#
# for layer in xrange(self.nHLayers + 1):
# w = self.get_layers[layer]
def genrate_next_dataset(self, from_dataset, layer):
""" Generate next data-set
Note: This function has a meaning only if selfish layers are in use.
It takes as input the data-set and transforms it using the previously
trained layer in order to obtain it's output. The output of that layer
is saved as a data-set to be used as input for the next one.
Args:
from_dataset: The data-set you want to transform (usually
the one that the previous layer is trained on)
layer : The layer to be used for the data transformation
Returns:
numpy array: The new data-set to be used for the next layer
"""
if self._selfish_layers:
for _ in xrange(from_dataset.num_batches):
feed_dict = fill_feed_dict_dae(from_dataset, self.get_layers[layer]._x)
y = self.session.run(self.get_layers[layer].clean_activation(), feed_dict=feed_dict)
for j in xrange(np.asarray(y).shape[0]):
self._y_dataset[layer].append(y[j])
return np.asarray(self._y_dataset[layer])
else:
print "Note: This function has a meaning only if selfish layers are in use."
return None
def pretrain_sdae(input_x, shape):
    with tf.Graph().as_default():  # as g:
        sess = tf.Session()
        sdae = Stacked_DAE(net_shape=shape, session=sess, selfish_layers=False)

        for layer in sdae.get_layers[:-1]:
            with tf.variable_scope("pretrain_{0}".format(layer.which)):
                cost = layer.get_loss
                train_op, global_step = sdae.train(cost, layer=layer.which)

                summary_dir = pjoin(FLAGS.summary_dir, 'pretraining_{0}'.format(layer.which))
                summary_writer = tf.train.SummaryWriter(summary_dir, graph_def=sess.graph_def, flush_secs=FLAGS.flush_secs)
                summary_vars = [layer.get_w_b[0], layer.get_w_b[1]]

                hist_summaries = [tf.histogram_summary(v.op.name, v) for v in summary_vars]
                hist_summaries.append(sdae.loss_summaries)
                summary_op = tf.merge_summary(hist_summaries)

                '''
                You can get all the trainable variables using tf.trainable_variables(),
                and exclude the variables which should be restored from the pretrained model.
                Then you can initialize the other variables.
                '''
                layer.vars_to_init.append(global_step)
                sess.run(tf.initialize_variables(layer.vars_to_init))

                print("\n\n")
                print "| Layer | Epoch | Step | Loss |"

                for step in xrange(FLAGS.pretraining_epochs * input_x.train.num_examples):
                    feed_dict = fill_feed_dict_dae(input_x.train, sdae._x)
                    loss, _ = sess.run([cost, train_op], feed_dict=feed_dict)

                    if step % 1000 == 0:
                        summary_str = sess.run(summary_op, feed_dict=feed_dict)
                        summary_writer.add_summary(summary_str, step)

                        output = "| Layer {0} | Epoch {1} | {2:>6} | {3:10.4f} |"\
                            .format(layer.which, step // input_x.train.num_examples + 1, step, loss)
                        print output

            # Note: Use this style if you are using the selfish_layer choice.
            # This way you keep the activated data to be fed to the next layer.
            # next_dataset = sdae.genrate_next_dataset(from_dataset=input_x.all, layer=layer.which)
            # input_x = load_data_sets_pretraining(next_dataset, split_only=False)

        # Save Weights and Biases for all layers
        for n in xrange(len(shape) - 2):
            w = sdae.get_layers[n].get_w
            b = sdae.get_layers[n].get_b
            W, B = sess.run([w, b])
            np.savetxt(pjoin(FLAGS.output_dir, 'Layer_' + str(n) + '_Weights.txt'), np.asarray(W), delimiter='\t')
            np.savetxt(pjoin(FLAGS.output_dir, 'Layer_' + str(n) + '_Biases.txt'), np.asarray(B), delimiter='\t')
            make_heatmap(W, 'weights_' + str(n))

    print "\nPretraining Finished...\n"
    return sdae
def finetune_sdae(sdae, input_x, n_classes, label_map):
    print "Starting Fine-tuning..."
    sess = sdae.session

    with sess.graph.as_default():
        n_features = sdae._net_shape[0]
        x_pl = tf.placeholder(tf.float32, shape=(FLAGS.batch_size, n_features), name='input_pl')
        labels_pl = tf.placeholder(tf.int32, shape=FLAGS.batch_size, name='labels_pl')
        labels = tf.identity(labels_pl)

        # Get the supervised fine tuning net
        logits = sdae.add_final_layer(x_pl, bias_node=FLAGS.bias_node)
        # logits = sdae.finetune_net(input_x)
        loss = loss_supervised(logits, labels_pl, n_classes)
        train_op, _ = sdae.train(loss)
        eval_correct, corr, y_pred = evaluation(logits, labels_pl)

        hist_summaries = [layer.get_w for layer in sdae.get_layers]
        hist_summaries.extend([layer.get_b for layer in sdae.get_layers])
        hist_summaries = [tf.histogram_summary(v.op.name + "_fine_tuning", v) for v in hist_summaries]
        summary_op = tf.merge_summary(hist_summaries)

        summary_writer = tf.train.SummaryWriter(pjoin(FLAGS.summary_dir, 'fine_tuning'),
                                                graph_def=sess.graph_def,
                                                flush_secs=FLAGS.flush_secs)

        sess.run(tf.initialize_all_variables())

        steps = FLAGS.finetuning_epochs * input_x.train.num_examples
        for step in xrange(steps):
            start_time = time.time()

            feed_dict = fill_feed_dict(input_x.train, x_pl, labels_pl)
            _, loss_value, ev_corr, c, y_true = sess.run([train_op, loss, eval_correct, corr, labels], feed_dict=feed_dict)

            duration = time.time() - start_time

            # Write the summaries and print an overview fairly often.
            if step % 1000 == 0:
                # Print status to stdout.
                print "\nLoss: ", loss_value
                # print "Eval corr:", ev_corr
                # print "Correct:", c
                # print "Y_pred:", y_pred
                # print "Label_pred:", y_true
                # y_true = np.argmax(labels_pl, 0)
                print('Step %d: loss = %.2f (%.3f sec)' % (step, loss_value, duration))
                print 'Evaluation Sum:', ev_corr, '/', len(c)
                # print('Evaluation Corrects:', eval_corr)
                # print('Logits:', lgts)
                print "---------------"

                # Update the events file.
                summary_str = sess.run(summary_op, feed_dict=feed_dict)
                summary_writer.add_summary(summary_str, step)

            if (step + 1) % 1000 == 0 or (step + 1) == steps:
                train_sum = do_eval_summary("training_error",
                                            sess,
                                            eval_correct,
                                            x_pl,
                                            labels_pl,
                                            input_x.train)

                if input_x.validation is not None:
                    val_sum = do_eval_summary("validation_error",
                                              sess,
                                              eval_correct,
                                              x_pl,
                                              labels_pl,
                                              input_x.validation)

                test_sum = do_eval_summary("test_error",
                                           sess,
                                           eval_correct,
                                           x_pl,
                                           labels_pl,
                                           input_x.test)

                summary_writer.add_summary(train_sum, step)
                if input_x.validation is not None:
                    summary_writer.add_summary(val_sum, step)
                summary_writer.add_summary(test_sum, step)

        for n in xrange(len(sdae._net_shape) - 1):
            w = sdae.get_layers[n].get_w
            b = sdae.get_layers[n].get_b
            W, B = sess.run([w, b])
            np.savetxt(pjoin(FLAGS.output_dir, 'Finetuned_Layer_' + str(n) + '_Weights.txt'), np.asarray(W), delimiter='\t')
            np.savetxt(pjoin(FLAGS.output_dir, 'Finetuned_Layer_' + str(n) + '_Biases.txt'), np.asarray(B), delimiter='\t')
            make_heatmap(W, 'Finetuned_weights_' + str(n))

        do_eval(sess, eval_correct, y_pred, x_pl, labels_pl, label_map, input_x.train, title='Final_Train')
        do_eval(sess, eval_correct, y_pred, x_pl, labels_pl, label_map, input_x.test, title='Final_Test')
        if input_x.validation is not None:
            do_eval(sess, eval_correct, y_pred, x_pl, labels_pl, label_map, input_x.validation, title='Final_Validation')

    print "Fine-tuning Finished..."
    return sdae
def loss_supervised(logits, labels, num_classes):
    """Calculates the loss from the logits and the labels.

    Args:
        logits: Logits tensor, float - [batch_size, NUM_CLASSES].
        labels: Labels tensor, int32 - [batch_size].

    Returns:
        loss: Loss tensor of type float.
    """
    # Convert from sparse integer labels in the range [0, NUM_CLASSES)
    # to 1-hot dense float vectors (that is, we will have batch_size vectors,
    # each with NUM_CLASSES values, all of which are 0.0 except there will
    # be a 1.0 in the entry corresponding to the label).
    batch_size = tf.size(labels)
    labels = tf.expand_dims(labels, 1)
    indices = tf.expand_dims(tf.range(0, batch_size), 1)
    concated = tf.concat(1, [indices, labels])
    onehot_labels = tf.sparse_to_dense(concated, tf.pack([batch_size, num_classes]), 1.0, 0.0)
    cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits, onehot_labels, name='xentropy')
    loss = tf.reduce_mean(cross_entropy, name='xentropy_mean')
    return loss
| 43.168704 | 153 | 0.567173 | 2,186 | 17,656 | 4.342635 | 0.168344 | 0.013273 | 0.015169 | 0.010955 | 0.310966 | 0.264932 | 0.219425 | 0.180343 | 0.165596 | 0.137786 | 0 | 0.008006 | 0.342093 | 17,656 | 408 | 154 | 43.27451 | 0.809229 | 0.135535 | 0 | 0.24359 | 0 | 0.004274 | 0.051981 | 0.001832 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.059829 | null | null | 0.051282 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4e42720da03ba3a22d671fec22abf835213e1aa7 | 643 | py | Python | tests/unit/cmds/test_cmd_my_name.py | gsfellis/noob_snhubot | 7caba3cbf672183f37119f1199b2269bbe8792b3 | [
"MIT"
] | 2 | 2018-03-29T02:24:56.000Z | 2018-10-01T18:08:07.000Z | tests/unit/cmds/test_cmd_my_name.py | gsfellis/noob_snhubot | 7caba3cbf672183f37119f1199b2269bbe8792b3 | [
"MIT"
] | 15 | 2018-03-16T15:52:10.000Z | 2018-12-24T17:59:51.000Z | tests/unit/cmds/test_cmd_my_name.py | gsfellis/noob_snhubot | 7caba3cbf672183f37119f1199b2269bbe8792b3 | [
"MIT"
] | 3 | 2018-03-18T23:57:53.000Z | 2019-07-15T20:45:17.000Z | import string
import random

from cmds import my_name
from Bot import Bot


class TestCmdMyName(object):
    cmd = "what's my name?"
    uid = ''.join(random.choice(string.ascii_uppercase + string.digits)
                  for _ in range(9))
    bot = Bot(uid, None, None)

    def test_command(self):
        assert my_name.command == self.cmd

    def test_public(self):
        assert my_name.public

    def test_output(self):
        outstring = "Your name is <@{}>! Did you forget or something?".format(
            self.uid)
        response = my_name.execute(self.cmd, self.uid, self.bot)
        assert response == (outstring, None)
| 24.730769 | 78 | 0.631415 | 87 | 643 | 4.563218 | 0.494253 | 0.075567 | 0.060453 | 0.080605 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002105 | 0.261275 | 643 | 25 | 79 | 25.72 | 0.833684 | 0 | 0 | 0 | 0 | 0 | 0.097978 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 1 | 0.166667 | false | 0 | 0.222222 | 0 | 0.611111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
4e42c88b683a06c42e86880d79e11d92989baf48 | 3,552 | py | Python | src/django_include_bootstrap/migrations/0002_auto_20200112_1311.py | xelaxela13/django_include_bootstrap | 7c37cc2b894c19266b92cb8fa07b1921ddf2aab8 | [
"MIT"
] | null | null | null | src/django_include_bootstrap/migrations/0002_auto_20200112_1311.py | xelaxela13/django_include_bootstrap | 7c37cc2b894c19266b92cb8fa07b1921ddf2aab8 | [
"MIT"
] | null | null | null | src/django_include_bootstrap/migrations/0002_auto_20200112_1311.py | xelaxela13/django_include_bootstrap | 7c37cc2b894c19266b92cb8fa07b1921ddf2aab8 | [
"MIT"
] | null | null | null | # Generated by Django 3.0.2 on 2020-01-12 13:11
from django.db import migrations

bootstrap_version = "4.4.1"
jquery_version = "3.3.1"
popover_version = "1.14.3"
fontawesome_version = "4.7.0"

urls_settings = [
    {
        "library": 4,
        "version": bootstrap_version,
        "url": f"https://stackpath.bootstrapcdn.com/bootstrap/{bootstrap_version}/css/bootstrap.min.css",
        "url_pattern": "https://stackpath.bootstrapcdn.com/bootstrap/{version}/css/bootstrap.min.css",
        "integrity": "sha384-Vkoo8x4CGsO3+Hhxv8T/Q5PaXtkKtu6ug5TOeNV6gBiFeWPGFN9MuhOf23Q9Ifjh",
        "active": True
    },
    {
        "library": 1,
        "version": bootstrap_version,
        "url": f"https://stackpath.bootstrapcdn.com/bootstrap/{bootstrap_version}/js/bootstrap.min.js",
        "url_pattern": "https://stackpath.bootstrapcdn.com/bootstrap/{version}/js/bootstrap.min.js",
        "integrity": "sha384-wfSDF2E50Y2D1uUdj0O3uMBJnjuUD4Ih7YwaYd1iqfktj0Uod8GCExl3Og8ifwB6",
        "active": True
    },
    {
        "library": 1,
        "version": bootstrap_version,
        "url": f"https://stackpath.bootstrapcdn.com/bootstrap/{bootstrap_version}/js/bootstrap.bundle.min.js",
        "url_pattern": "https://stackpath.bootstrapcdn.com/bootstrap/{version}/js/bootstrap.bundle.min.js",
        "integrity": "sha384-6khuMg9gaYr5AxOqhkVIODVIvm9ynTT5J4V1cfthmT+emCG6yVmEZsRHdxlotUnm",
        "active": False
    },
    {
        "library": 2,
        "version": jquery_version,
        "url": f"https://code.jquery.com/jquery-{jquery_version}.min.js",
        "url_pattern": "https://code.jquery.com/jquery-{version}.min.js",
        "integrity": "sha384-tsQFqpEReu7ZLhBV2VZlAu7zcOV+rXbYlF2cqB8txI/8aZajjp4Bqd+V6D5IgvKT",
        "active": True
    },
    {
        "library": 2,
        "version": jquery_version,
        "url": f"https://code.jquery.com//jquery-{jquery_version}.slim.min.js",
        "url_pattern": "https://code.jquery.com//jquery-{version}.slim.min.js",
        "integrity": "sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo",
        "active": False
    },
    {
        "library": 3,
        "version": popover_version,
        "url": f"https://cdnjs.cloudflare.com/ajax/libs/popper.js/{popover_version}/umd/popper.min.js",
        "url_pattern": "https://cdnjs.cloudflare.com/ajax/libs/popper.js/{version}/umd/popper.min.js",
        "integrity": "sha384-ZMP7rVo3mIykV+2+9J3UJ46jBk0WLaUAdn689aCwoqbBJiSnjAK/l8WvCWPIPm49",
        "active": True
    },
    {
        "library": 5,
        "version": fontawesome_version,
        "url": f"https://stackpath.bootstrapcdn.com/font-awesome/{fontawesome_version}/css/font-awesome.min.css",
        "url_pattern": "https://stackpath.bootstrapcdn.com/font-awesome/{version}/css/font-awesome.min.css",
        "integrity": "sha384-wvfXpqpZZVQGK6TAh5PVlGOfQNHSoD2xbE+QkPxCAFlNEevoEH3Sl0sibVcOQVnN",
        "active": True
    }
]


def forwards(apps, schema_editor):
    IncludeBootstrap = apps.get_model('django_include_bootstrap', 'IncludeBootstrap')
    entity = []
    for item in urls_settings:
        entity.append(IncludeBootstrap(**item))
    IncludeBootstrap.objects.bulk_create(entity)


def backwards(apps, schema_editor):
    IncludeBootstrap = apps.get_model('django_include_bootstrap', 'IncludeBootstrap')
    IncludeBootstrap.objects.all().delete()


class Migration(migrations.Migration):

    dependencies = [
        ('django_include_bootstrap', '0001_initial'),
    ]

    operations = [
        migrations.RunPython(forwards, backwards)
    ]
| 39.466667 | 113 | 0.673986 | 363 | 3,552 | 6.487603 | 0.278237 | 0.067941 | 0.088323 | 0.098514 | 0.529936 | 0.502335 | 0.457749 | 0.433546 | 0.355414 | 0.355414 | 0 | 0.046464 | 0.175957 | 3,552 | 89 | 114 | 39.910112 | 0.758114 | 0.012669 | 0 | 0.227848 | 1 | 0.075949 | 0.564051 | 0.16234 | 0 | 0 | 0 | 0 | 0 | 1 | 0.025316 | false | 0 | 0.012658 | 0 | 0.075949 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4e462e00dcf0244ee9090c134358edcc9e6a6c59 | 3,906 | py | Python | USB/python/test-usb2001tc.py | wjasper/Linix_Drivers | 9c5443f3c9d249f341b6b8580929f8cdbdba4079 | [
"JasPer-2.0"
] | 100 | 2016-11-08T15:41:43.000Z | 2022-02-20T19:37:32.000Z | USB/python/test-usb2001tc.py | wjasper/Linix_Drivers | 9c5443f3c9d249f341b6b8580929f8cdbdba4079 | [
"JasPer-2.0"
] | 37 | 2017-01-05T17:48:14.000Z | 2022-01-06T17:43:27.000Z | USB/python/test-usb2001tc.py | wjasper/Linix_Drivers | 9c5443f3c9d249f341b6b8580929f8cdbdba4079 | [
"JasPer-2.0"
] | 65 | 2016-12-20T07:03:44.000Z | 2022-03-14T21:48:35.000Z | #! /usr/bin/python3
#
# Copyright (c) 2019 Warren J. Jasper <wjasper@ncsu.edu>
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
from usb_2001TC import *
import time
import sys
import fcntl
import os
def toContinue():
    answer = input('Continue [yY]? ')
    if (answer == 'y' or answer == 'Y'):
        return True
    else:
        return False


def main():
    # initialize the class
    try:
        usb2001tc = usb_2001TC()
        print("got a device\n")
    except:
        print('No USB-2001TC device found.')
        return

    while True:
        print("\nUSB-2001-TC Testing")
        print("----------------")
        print("Hit 'a' to read AIn.")
        print("Hit 'b' to blink LED.")
        print("Hit 'c' to get calibration slope and offset")
        print("Hit 'C' to get calibration date")
        print("Hit 'd' to set device")
        print("Hit 'i' to get CJC and Analog Input readings")
        print("Hit 'I' to get information about the device")
        print("Hit 'F' to get the CJC reading in degree F")
        print("Hit 'K' to get the CJC reading in degree Kelvin")
        print("Hit 'G' to call get_all")
        print("Hit 'r' to get reset device")
        print("Hit 's' to get serial number")
        print("Hit 'S' to get status")
        print("Hit 't' to get the temperature")
        print("Hit 'T' to write temperature to file")
        print("Hit 'v' to get firmware version")
        print("Hit 'e' to exit.")

        ch = input('\n')

        if ch == 'b':
            count = int(input('Enter number of times to blink: '))
            usb2001tc.Blink(count)
        elif ch == 'a':
            print("AIn = ", usb2001tc.AIn())
        elif ch == 'c':
            print("Calibration data: Slope = ", usb2001tc.getSlope(), " Offset = ", usb2001tc.getOffset())
        elif ch == 'C':
            print('MFG Calibration date: ', usb2001tc.getMFGCAL())
        elif ch == 'd':
            thermo = input("Input Thermocouple type [J,K,R,S,T,N,E,B]: ")
            usb2001tc.sendSensorType(thermo)
            thermo = usb2001tc.getSensorType()
            print("Sensor Type = ", thermo)
        elif ch == 'e':
            usb2001tc.udev.close()
            exit(0)
        elif ch == 'i':
            print("CJC = ", usb2001tc.getCJCDegC(), " degree C")
        elif ch == 'F':
            print("CJC = ", usb2001tc.getCJCDegF(), " degree F")
        elif ch == 'K':
            print("CJC = ", usb2001tc.getCJCDegKelvin(), " degree K")
        elif ch == 'G':
            usb2001tc.GetAll()
        elif ch == 'r':
            usb2001tc.Reset()
        elif ch == 's':
            print("Serial No: %s" % usb2001tc.getSerialNumber())
        elif ch == 'S':
            print("Status = %s" % usb2001tc.getStatus())
        elif ch == 'I':
            print("Manufacturer: %s" % usb2001tc.getManufacturer())
            print("Product: %s" % usb2001tc.getProduct())
            print("Serial No: %s" % usb2001tc.getSerialNumber())
        elif ch == 'v':
            print("Firmware version: %s" % usb2001tc.getFirmwareVersion())
        elif ch == 't':
            # put the board in the correct voltage range +/- 73.125mV
            usb2001tc.setVoltageRange(4)
            ch = input("Input Thermocouple type [J,K,R,S,T,N,E,B]: ")
            tc_type = ch
            for i in range(10):
                temperature = usb2001tc.tc_temperature(tc_type)
                print("Thermocouple type: ", ch, "Temperature = ", temperature, "C ", temperature*9./5. + 32., "F ")
                time.sleep(1)


if __name__ == "__main__":
    main()
| 34.566372 | 109 | 0.622888 | 543 | 3,906 | 4.45488 | 0.38674 | 0.056222 | 0.014882 | 0.023563 | 0.171145 | 0.147995 | 0.114097 | 0.064489 | 0.028111 | 0.028111 | 0 | 0.045455 | 0.233999 | 3,906 | 112 | 110 | 34.875 | 0.763035 | 0.217102 | 0 | 0.023256 | 0 | 0 | 0.332237 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.023256 | false | 0 | 0.05814 | 0 | 0.116279 | 0.406977 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
4e494bf6a380c6128d5a1e9fb593a59a2f054b85 | 13,530 | py | Python | ambra_sdk/service/entrypoints/generated/session.py | agolovkina/sdk-python | 248f9301dd801527b28881e2641eb21fe189a6dc | [
"Apache-2.0"
] | null | null | null | ambra_sdk/service/entrypoints/generated/session.py | agolovkina/sdk-python | 248f9301dd801527b28881e2641eb21fe189a6dc | [
"Apache-2.0"
] | null | null | null | ambra_sdk/service/entrypoints/generated/session.py | agolovkina/sdk-python | 248f9301dd801527b28881e2641eb21fe189a6dc | [
"Apache-2.0"
] | null | null | null | """ Session.
Do not edit this file by hand.
This is generated by parsing api.html service doc.
"""
from ambra_sdk.exceptions.service import AuthFailed
from ambra_sdk.exceptions.service import BadPassword
from ambra_sdk.exceptions.service import Blocked
from ambra_sdk.exceptions.service import BrandNotAllowed
from ambra_sdk.exceptions.service import Disabled
from ambra_sdk.exceptions.service import Expired
from ambra_sdk.exceptions.service import InvalidCode
from ambra_sdk.exceptions.service import InvalidCredentials
from ambra_sdk.exceptions.service import InvalidPin
from ambra_sdk.exceptions.service import InvalidSid
from ambra_sdk.exceptions.service import InvalidSignature
from ambra_sdk.exceptions.service import InvalidUrl
from ambra_sdk.exceptions.service import InvalidVendor
from ambra_sdk.exceptions.service import Lockout
from ambra_sdk.exceptions.service import MissingFields
from ambra_sdk.exceptions.service import MissingInformation
from ambra_sdk.exceptions.service import NoOauth
from ambra_sdk.exceptions.service import NotFound
from ambra_sdk.exceptions.service import OnlyOne
from ambra_sdk.exceptions.service import OtherOauth
from ambra_sdk.exceptions.service import PasswordReset
from ambra_sdk.exceptions.service import PinExpired
from ambra_sdk.exceptions.service import SsoOnly
from ambra_sdk.exceptions.service import ValidationFailed
from ambra_sdk.exceptions.service import WhitelistLockout
from ambra_sdk.service.query import QueryO
class Session:
    """Session."""

    def __init__(self, api):
        self._api = api

    def login(
        self,
        login,
        account_login=None,
        account_name=None,
        email=None,
        location=None,
        new_password=None,
        password=None,
        use_pkey=None,
        validate_session=None,
        vanity=None,
    ):
        """Login.

        :param login: The user account_login or email address
        :param account_login: account_login
        :param account_name: account_name
        :param email: email
        :param location: Login location. (optional)
        :param new_password: Change the password or account password to this. (optional)
        :param password: password
        :param use_pkey: use_pkey
        :param validate_session: If you would like to validate an existing session rather than create a new one pass in the sid of the session to validate in this parameter. It will check if the session is still valid and the credentials are for the session. (optional)
        :param vanity: The account vanity name. (optional)
        """
        request_data = {
            'account_login': account_login,
            'account_name': account_name,
            'email': email,
            'location': location,
            'login': login,
            'new_password': new_password,
            'password': password,
            'use_pkey': use_pkey,
            'validate_session': validate_session,
            'vanity': vanity,
        }

        errors_mapping = {}
        errors_mapping[('BAD_PASSWORD', None)] = BadPassword('The new_password does not meet the password requirements')
        errors_mapping[('BLOCKED', None)] = Blocked('The user is blocked from the system')
        errors_mapping[('BRAND_NOT_ALLOWED', None)] = BrandNotAllowed('The user is limited to some brands to login with allowed_login_brands setting')
        errors_mapping[('DISABLED', None)] = Disabled('The user is disabled and needs to be /user/enabled to allow access')
        errors_mapping[('INVALID_CREDENTIALS', None)] = InvalidCredentials('Invalid user name or password.')
        errors_mapping[('LOCKOUT', None)] = Lockout('Too many failed attempts')
        errors_mapping[('MISSING_FIELDS', None)] = MissingFields('A required field is missing or does not have data in it. The error_subtype holds a array of all the missing fields')
        errors_mapping[('ONLY_ONE', None)] = OnlyOne('You can pass either the password or use_pkey flag, not both')
        errors_mapping[('PASSWORD_RESET', None)] = PasswordReset('The password needs to be changed')
        errors_mapping[('SSO_ONLY', None)] = SsoOnly('The user can only login via SSO')
        errors_mapping[('VALIDATION_FAILED', None)] = ValidationFailed('The session validation failed')
        errors_mapping[('WHITELIST_LOCKOUT', None)] = WhitelistLockout('Login blocked by the account whitelist')
        query_data = {
            'api': self._api,
            'url': '/session/login',
            'request_data': request_data,
            'errors_mapping': errors_mapping,
            'required_sid': False,
        }
        return QueryO(**query_data)

    def user(
        self,
        settings=None,
    ):
        """User.

        :param settings: A JSON list of user settings set via /setting/set to return (optional)
        """
        request_data = {
            'settings': settings,
        }

        errors_mapping = {}
        query_data = {
            'api': self._api,
            'url': '/session/user',
            'request_data': request_data,
            'errors_mapping': errors_mapping,
            'required_sid': True,
        }
        return QueryO(**query_data)

    def permissions(
        self,
        account_id=None,
        namespace_id=None,
    ):
        """Permissions.

        :param account_id: account_id
        :param namespace_id: namespace_id
        """
        request_data = {
            'account_id': account_id,
            'namespace_id': namespace_id,
        }

        errors_mapping = {}
        query_data = {
            'api': self._api,
            'url': '/session/permissions',
            'request_data': request_data,
            'errors_mapping': errors_mapping,
            'required_sid': True,
        }
        return QueryO(**query_data)

    def logout(
        self,
    ):
        """Logout.
        """
        request_data = {
        }

        errors_mapping = {}
        errors_mapping[('MISSING_FIELDS', None)] = MissingFields('A required field is missing or does not have data in it. The error_subtype holds a array of all the missing fields')
        errors_mapping[('NOT_FOUND', None)] = NotFound('The sid was not found')
        query_data = {
            'api': self._api,
            'url': '/session/logout',
            'request_data': request_data,
            'errors_mapping': errors_mapping,
            'required_sid': True,
        }
        return QueryO(**query_data)

    def csrf_enable(
        self,
        redirect_uri,
    ):
        """Csrf enable.

        :param redirect_uri: The URL to redirect to
        """
        request_data = {
            'redirect_uri': redirect_uri,
        }

        errors_mapping = {}
        errors_mapping[('INVALID_URL', None)] = InvalidUrl('The URL must be a relative URL')
        errors_mapping[('MISSING_FIELDS', None)] = MissingFields('A required field is missing or does not have data in it. The error_subtype holds a array of all the missing fields')
        query_data = {
            'api': self._api,
            'url': '/session/csrf/enable',
            'request_data': request_data,
            'errors_mapping': errors_mapping,
            'required_sid': True,
        }
        return QueryO(**query_data)

    def uuid(
        self,
    ):
        """Uuid.
        """
        request_data = {
        }

        errors_mapping = {}
        query_data = {
            'api': self._api,
            'url': '/session/uuid',
            'request_data': request_data,
            'errors_mapping': errors_mapping,
            'required_sid': False,
        }
        return QueryO(**query_data)

    def oauth_start(
        self,
    ):
        """Oauth start.
        """
        request_data = {
        }

        errors_mapping = {}
        errors_mapping[('NO_OAUTH', None)] = NoOauth('OAuth is not setup for the associated brand')
        query_data = {
            'api': self._api,
            'url': '/session/oauth/start',
            'request_data': request_data,
            'errors_mapping': errors_mapping,
            'required_sid': False,
        }
        return QueryO(**query_data)

    def oauth(
        self,
        code,
        redirect_uri,
        vendor,
    ):
        """Oauth.

        :param code: The OAuth code
        :param redirect_uri: The redirect_uri used to get the code parameter
        :param vendor: The OAuth vendor (doximity|google|brand)
        """
        request_data = {
            'code': code,
            'redirect_uri': redirect_uri,
            'vendor': vendor,
        }

        errors_mapping = {}
        errors_mapping[('AUTH_FAILED', None)] = AuthFailed('OAuth failed or a user id was not returned')
        errors_mapping[('INVALID_CODE', None)] = InvalidCode('Invalid code')
        errors_mapping[('INVALID_VENDOR', None)] = InvalidVendor('Invalid vendor')
        errors_mapping[('MISSING_FIELDS', None)] = MissingFields('A required field is missing or does not have data in it. The error_subtype holds a array of all the missing fields')
        errors_mapping[('MISSING_INFORMATION', None)] = MissingInformation('The response from the OAuth provider is missing either the email, first_name or last_name fields')
        errors_mapping[('NO_OAUTH', None)] = NoOauth('OAuth is not setup for the associated brand')
        errors_mapping[('OTHER_OAUTH', None)] = OtherOauth('The user is already setup to OAuth via another vendor')
        query_data = {
            'api': self._api,
            'url': '/session/oauth',
            'request_data': request_data,
            'errors_mapping': errors_mapping,
            'required_sid': False,
        }
        return QueryO(**query_data)

    def oauth_token(
        self,
        client_id,
        client_secret,
        grant_type,
        duration=None,
    ):
        """Oauth token.

        :param client_id: The users email address
        :param client_secret: The users password
        :param grant_type: The grant type, set to client_credentials
        :param duration: The number of seconds the token is valid for (optional and defaults to 3600 with a maximum value of 86400)
        """
        request_data = {
            'client_id': client_id,
            'client_secret': client_secret,
            'duration': duration,
            'grant_type': grant_type,
        }

        errors_mapping = {}
        errors_mapping[('AUTH_FAILED', None)] = AuthFailed('Authentication failed')
        errors_mapping[('LOCKOUT', None)] = Lockout('Too many failed attempts')
        errors_mapping[('MISSING_FIELDS', None)] = MissingFields('A required field is missing or does not have data in it. The error_subtype holds a array of all the missing fields')
        query_data = {
            'api': self._api,
            'url': '/session/oauth/token',
            'request_data': request_data,
            'errors_mapping': errors_mapping,
            'required_sid': False,
        }
        return QueryO(**query_data)

    def pin(
        self,
        pin,
        remember_device=None,
    ):
        """Pin.

        :param pin: The PIN
        :param remember_device: Remember the device as trusted. (optional)
        """
        request_data = {
            'pin': pin,
            'remember_device': remember_device,
        }

        errors_mapping = {}
        errors_mapping[('INVALID_PIN', None)] = InvalidPin('Invalid PIN')
        errors_mapping[('INVALID_SID', None)] = InvalidSid('Invalid sid')
        errors_mapping[('MISSING_FIELDS', None)] = MissingFields('A required field is missing or does not have data in it. The error_subtype holds a array of all the missing fields')
        errors_mapping[('PIN_EXPIRED', None)] = PinExpired('The PIN has expired')
        query_data = {
            'api': self._api,
            'url': '/session/pin',
            'request_data': request_data,
            'errors_mapping': errors_mapping,
            'required_sid': True,
        }
        return QueryO(**query_data)

    def sign(
        self,
        signature,
    ):
        """Sign.

        :param signature: The Base64-encoded signature
        """
        request_data = {
            'signature': signature,
        }

        errors_mapping = {}
        errors_mapping[('INVALID_SID', None)] = InvalidSid('Invalid sid')
        errors_mapping[('INVALID_SIGNATURE', None)] = InvalidSignature('Invalid signature')
        errors_mapping[('MISSING_FIELDS', None)] = MissingFields('A required field is missing or does not have data in it. The error_subtype holds a array of all the missing fields')
        query_data = {
            'api': self._api,
            'url': '/session/sign',
            'request_data': request_data,
            'errors_mapping': errors_mapping,
            'required_sid': True,
        }
        return QueryO(**query_data)

    def ttl(
        self,
    ):
        """Ttl.
        """
        request_data = {
        }

        errors_mapping = {}
        errors_mapping[('EXPIRED', None)] = Expired('Expired')
        errors_mapping[('MISSING_FIELDS', None)] = MissingFields('A required field is missing or does not have data in it. The error_subtype holds a array of all the missing fields')
        query_data = {
            'api': self._api,
            'url': '/session/ttl',
            'request_data': request_data,
            'errors_mapping': errors_mapping,
            'required_sid': True,
        }
        return QueryO(**query_data)
| 37.272727 | 266 | 0.612047 | 1,508 | 13,530 | 5.295093 | 0.13992 | 0.11722 | 0.039073 | 0.068879 | 0.509831 | 0.499937 | 0.379712 | 0.368817 | 0.347151 | 0.331371 | 0 | 0.001145 | 0.290244 | 13,530 | 363 | 267 | 37.272727 | 0.830366 | 0.119734 | 0 | 0.491103 | 1 | 0.02847 | 0.279679 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.046263 | false | 0.035587 | 0.092527 | 0 | 0.185053 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4e49c9b8a2769ed3bdd04410a1169a3e6865dc44 | 2,799 | py | Python | core/models/model.py | Latterlig96/airflow-model-trainer | 7da36aae3036759639ae1c556f41fc70409aa444 | [
"MIT"
] | null | null | null | core/models/model.py | Latterlig96/airflow-model-trainer | 7da36aae3036759639ae1c556f41fc70409aa444 | [
"MIT"
] | null | null | null | core/models/model.py | Latterlig96/airflow-model-trainer | 7da36aae3036759639ae1c556f41fc70409aa444 | [
"MIT"
] | null | null | null | import timm
import torch.nn as nn
import torch
from pytorch_lightning import LightningModule
import torchmetrics
from typing import Dict, Any


class EfficientNetB1(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = timm.create_model(
            'efficientnet_b1', pretrained=True, num_classes=0, in_chans=3
        )
        num_features = self.backbone.num_features
        self.head = nn.Linear(num_features, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.backbone(x)
        out = self.head(out)
        return out


class Model(LightningModule):
    def __init__(self):
        super().__init__()
        self._build_model()
        self._build_criterion()
        self._build_metric()

    def _build_model(self):
        self.model = EfficientNetB1()

    def _build_criterion(self):
        self.criterion = torch.nn.BCEWithLogitsLoss()

    def _build_optimizer(self):
        self.optimizer = torch.optim.Adam(self.parameters(), lr=1e-5)

    def _build_scheduler(self):
        self.scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(
            self.optimizer, T_0=20, eta_min=1e-4
        )

    def _build_metric(self):
        self.metric = torchmetrics.F1Score(num_classes=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.model(x)
        return out

    def training_step(self, batch, batch_idx):
        loss, pred, labels = self._share_step(batch)
        return {'loss': loss, 'pred': pred.detach(), 'labels': labels.detach()}

    def validation_step(self, batch, batch_idx):
        loss, pred, labels = self._share_step(batch)
        return {'loss': loss, 'pred': pred.detach(), 'labels': labels.detach()}

    def training_epoch_end(self, outputs):
        self._share_epoch_end(outputs, 'train')

    def validation_epoch_end(self, outputs):
        self._share_epoch_end(outputs, 'val')

    def _share_step(self, batch):
        images, labels = batch
        pred = self.forward(images.float()).sigmoid()
        loss = self.criterion(pred, labels.float())
        return loss, pred, labels

    def _share_epoch_end(self, outputs, mode):
        preds = []
        labels = []
        for output in outputs:
            pred, label = output['pred'], output['labels']
            preds.append(pred)
            labels.append(label)
        preds = torch.cat(preds)
        labels = torch.cat(labels)
        metric = self.metric(preds, labels)
        self.logger.experiment.log_metric(f'{mode}_loss', metric)

    def configure_optimizers(self) -> Dict[Any, Any]:
        self._build_optimizer()
        self._build_scheduler()
        return {"optimizer": self.optimizer, "lr_scheduler": self.scheduler}
| 31.449438 | 79 | 0.62701 | 330 | 2,799 | 5.090909 | 0.278788 | 0.026786 | 0.023214 | 0.033929 | 0.254762 | 0.254762 | 0.22619 | 0.22619 | 0.22619 | 0.175 | 0 | 0.007212 | 0.256877 | 2,799 | 88 | 80 | 31.806818 | 0.800481 | 0 | 0 | 0.173913 | 0 | 0 | 0.033226 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.231884 | false | 0 | 0.086957 | 0 | 0.434783 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
# File: emtf_nnet/keras/quantization/default_quantize_scheme.py
# Repository: jiafulow/emtf-nnet (license: Apache-2.0)

# The following source code was originally obtained from:
# https://github.com/tensorflow/model-optimization/blob/v0.7.0/tensorflow_model_optimization/python/core/quantization/keras/default_8bit/default_8bit_quantize_scheme.py
# https://github.com/tensorflow/model-optimization/blob/v0.7.0/tensorflow_model_optimization/python/core/quantization/keras/default_8bit/default_8bit_quantize_layout_transform.py
# https://github.com/tensorflow/model-optimization/blob/v0.7.0/tensorflow_model_optimization/python/core/quantization/keras/default_8bit/default_8bit_quantize_registry.py
# ==============================================================================
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Quantization scheme which specifies how quantization should be applied."""
import tensorflow as tf
from tensorflow_model_optimization.python.core.quantization.keras import quantize_aware_activation
from tensorflow_model_optimization.python.core.quantization.keras import quantize_layout_transform
from tensorflow_model_optimization.python.core.quantization.keras import quantize_registry
from tensorflow_model_optimization.python.core.quantization.keras import quantize_scheme
from tensorflow_model_optimization.python.core.quantization.keras import quantize_wrapper
from tensorflow_model_optimization.python.core.quantization.keras import quantizers
from tensorflow_model_optimization.python.core.quantization.keras.graph_transformations import model_transformer
from emtf_nnet.keras.layers import (
    ActivityRegularization, FeatureNormalization, LinearActivation,
    MutatedBatchNormalization, MutatedDense, MutatedDenseFold, ScaleActivation,
    TanhActivation)

from .default_quantize_configs import (
    DefaultDenseQuantizeConfig, DefaultDenseFoldQuantizeConfig,
    DefaultInputQuantizeConfig, DefaultOutputQuantizeConfig, NoOpQuantizeConfig)
from .default_transforms import InputLayerQuantize, MutatedDenseFolding
from .quantizers import FixedRangeQuantizer

class DefaultQuantizeLayoutTransform(quantize_layout_transform.QuantizeLayoutTransform):
  """Default quantization layout transformations."""

  _TRANSFORMS = [
      # InputLayerQuantize(),
      MutatedDenseFolding(),
  ]

  def apply(self, model, layer_quantize_map):
    """Implement default 8-bit transforms.

    Currently this means the following.
      1. Pull activations into layers, and apply fuse activations. (TODO)
      2. Modify range in incoming layers for Concat. (TODO)
      3. Fuse Conv2D/DepthwiseConv2D + BN into single layer.

    Args:
      model: Keras model to be quantized.
      layer_quantize_map: Map with keys as layer names, and values as dicts
        containing custom `QuantizeConfig`s which may have been passed with
        layers.

    Returns:
      (Transformed Keras model to better match TensorFlow Lite backend, updated
      layer quantize map.)
    """
    return model_transformer.ModelTransformer(
        model,
        self._TRANSFORMS,
        candidate_layers=set(layer_quantize_map.keys()),
        layer_metadata=layer_quantize_map).transform()

class DefaultQuantizeRegistry(quantize_registry.QuantizeRegistry):
  """Default quantization registry."""

  def __init__(self, disable_per_axis=False):
    self._layer_quantize_map = {}
    # self._layer_quantize_map[tf.keras.layers.Activation] = DefaultOutputQuantizeConfig()
    # self._layer_quantize_map[tf.keras.layers.BatchNormalization] = DefaultOutputQuantizeConfig()
    # self._layer_quantize_map[tf.keras.layers.Dense] = DefaultDenseQuantizeConfig()
    # self._layer_quantize_map[tf.keras.layers.Rescaling] = NoOpQuantizeConfig()
    self._layer_quantize_map[ActivityRegularization] = NoOpQuantizeConfig()
    self._layer_quantize_map[LinearActivation] = DefaultOutputQuantizeConfig()
    # self._layer_quantize_map[MutatedBatchNormalization] = DefaultOutputQuantizeConfig()
    self._layer_quantize_map[MutatedDense] = DefaultDenseQuantizeConfig()
    self._layer_quantize_map[MutatedDenseFold] = DefaultDenseFoldQuantizeConfig()
    self._layer_quantize_map[FeatureNormalization] = DefaultOutputQuantizeConfig()
    self._layer_quantize_map[ScaleActivation] = NoOpQuantizeConfig()
    self._layer_quantize_map[TanhActivation] = DefaultOutputQuantizeConfig()
    self._disable_per_axis = disable_per_axis  # unused

  def _is_supported_layer(self, layer_class):
    return layer_class in self._layer_quantize_map

  def _get_quantize_config(self, layer_class):
    return self._layer_quantize_map[layer_class]

  def supports(self, layer):
    """Returns whether the registry supports this layer type.

    # TODO(pulkitb): Consider pushing this function up to the registry.

    Args:
      layer: The layer to check for support.

    Returns:
      True/False whether the layer type is supported.
    """
    if self._is_supported_layer(layer.__class__):
      return True
    return False

  def get_quantize_config(self, layer):
    """Returns the quantization config for the given layer.

    Args:
      layer: input layer to return quantize config for.

    Returns:
      Returns the QuantizeConfig for the given layer.
    """
    if not self.supports(layer):
      raise ValueError(
          '`get_quantize_config()` called on an unsupported layer {}. Check '
          'if layer is supported by calling `supports()`. Alternatively, you '
          'can use `QuantizeConfig` to specify a behavior for your layer.'
          .format(layer.__class__))
    return self._get_quantize_config(layer.__class__)

class DefaultQuantizeScheme(quantize_scheme.QuantizeScheme):
  """Quantization scheme which specifies how quantization should be applied."""

  _QUANTIZATION_OBJECTS = {
      'ActivityRegularization': ActivityRegularization,
      'FeatureNormalization': FeatureNormalization,
      'LinearActivation': LinearActivation,
      'MutatedBatchNormalization': MutatedBatchNormalization,
      'MutatedDense': MutatedDense,
      'MutatedDenseFold': MutatedDenseFold,
      'ScaleActivation': ScaleActivation,
      'TanhActivation': TanhActivation,
      'FixedRangeQuantizer': FixedRangeQuantizer,
      'DefaultDenseQuantizeConfig': DefaultDenseQuantizeConfig,
      'DefaultInputQuantizeConfig': DefaultInputQuantizeConfig,
      'DefaultOutputQuantizeConfig': DefaultOutputQuantizeConfig,
      'NoOpQuantizeConfig': NoOpQuantizeConfig,
      'DefaultDenseFoldQuantizeConfig': DefaultDenseFoldQuantizeConfig,
      # from tensorflow_model_optimization
      'QuantizeAwareActivation': quantize_aware_activation.QuantizeAwareActivation,
      'QuantizeWrapper': quantize_wrapper.QuantizeWrapper,
      'QuantizeWrapperV2': quantize_wrapper.QuantizeWrapperV2,
      'AllValuesQuantizer': quantizers.AllValuesQuantizer,
      'LastValueQuantizer': quantizers.LastValueQuantizer,
      'MovingAverageQuantizer': quantizers.MovingAverageQuantizer,
  }

  def __init__(self, disable_per_axis=False):
    self._disable_per_axis = disable_per_axis

  def get_layout_transformer(self):
    return DefaultQuantizeLayoutTransform()

  def get_quantize_registry(self):
    return DefaultQuantizeRegistry(
        disable_per_axis=self._disable_per_axis)
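The lookup pattern used by `DefaultQuantizeRegistry` above — a dict keyed by layer class, guarded by `supports()` — can be shown without TensorFlow. The `DenseLayer`/`DenseConfig` classes below are placeholders standing in for the real Keras/TFMOT types, not part of the original code.

```python
# A minimal, framework-free sketch of the registry pattern: layer classes map
# to config objects, and lookups go through supports()/get_quantize_config().
class DenseLayer:
    pass


class DenseConfig:
    pass


class TinyRegistry:
    def __init__(self):
        # Keyed by layer *class*, mirroring self._layer_quantize_map above.
        self._layer_quantize_map = {DenseLayer: DenseConfig()}

    def supports(self, layer):
        return layer.__class__ in self._layer_quantize_map

    def get_quantize_config(self, layer):
        # Unsupported layers raise, as in DefaultQuantizeRegistry.
        if not self.supports(layer):
            raise ValueError('unsupported layer: {}'.format(layer.__class__))
        return self._layer_quantize_map[layer.__class__]


registry = TinyRegistry()
layer = DenseLayer()
print(registry.supports(layer))                            # True
print(type(registry.get_quantize_config(layer)).__name__)  # DenseConfig
```

Keying by class (rather than instance) means every layer of the same type shares one config object, which is exactly why per-layer overrides need a separate `layer_quantize_map` in the layout transform.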
# File: test_HW1.py
# Repository: zinhart/kvs_boost (license: MIT)

import unittest
import subprocess
import requests

PORT = 8080


class TestHW1(unittest.TestCase):
    def test1(self):
        res = requests.get('http://localhost:' + str(PORT) + '/check')
        self.assertEqual(res.text, 'This is a GET request', msg='Incorrect response to GET request to /check endpoint')
        self.assertEqual(res.status_code, 200, msg='Did not return status 200 to GET request to /check endpoint')

    def test2(self):
        res = requests.post('http://localhost:' + str(PORT) + '/check')
        self.assertEqual(res.text, 'This is a POST request', msg='Incorrect response to POST request to /check endpoint')
        self.assertEqual(res.status_code, 200, msg='Did not return status 200 to POST request to /check endpoint')

    def test3(self):
        res = requests.put('http://localhost:' + str(PORT) + '/check')
        self.assertEqual(res.status_code, 405, msg='Did not return status 405 to PUT request to /check endpoint')

    def test4(self):
        res = requests.get('http://localhost:' + str(PORT) + '/hello?name=Peter')
        self.assertEqual(res.text, 'Hello Peter!', msg='Incorrect response to /hello?name=Peter endpoint')

    def test5(self):
        res = requests.get('http://localhost:' + str(PORT) + '/hello')
        self.assertEqual(res.text, 'Hello user!', msg='Incorrect response to /hello endpoint')


if __name__ == '__main__':
    unittest.main()
# File: shopapp/shoptrans/migrations/0015_auto_20181209_1600.py
# Repository: biseshbhattarai/Transaction-Microservice (license: MIT)

# Generated by Django 2.1 on 2018-12-09 10:15
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('shoptrans', '0014_auto_20181209_1554'),
    ]

    operations = [
        migrations.AlterField(
            model_name='product',
            name='creditor_name',
            field=models.CharField(default='No creditor', max_length=20000, null=True),
        ),
    ]
# File: makeGif.py
# Repository: andersdot/modelMWkinematics (license: MIT)

import imageio
import numpy as np
import sys

filenamepre = sys.argv[1]
filenamepost = sys.argv[2]

images = []
frames = np.arange(35, 165)  # note: defined but unused below
for i in range(9):
    for j in range(9):
        filename = '{0}_{1:03d}_{2:03d}.{3}'.format(filenamepre, i + 1, j + 1, filenamepost)
        print(filename)
        images.append(imageio.imread(filename))
imageio.mimsave(filenamepre + '.gif', images)
#!/usr/bin/env python3
# File: gen_fdm.py
# Repository: lljbash/FastTT (license: MIT)
# -*- coding: utf-8 -*-
import numpy as np

alpha = np.random.rand(7)
alpha /= np.linalg.norm(alpha, 1)
n = 40


def index_to_position(index):
    p = 0
    a, b, c, d, e, f = index
    index = [a, d, b, e, c, f]
    for i in index:
        p = p * n + i
    return p


if __name__ == "__main__":
    with open("fdm.tsv", "w") as f:
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    alpha = np.random.rand(7)
                    alpha /= np.linalg.norm(alpha, 1)
                    p = index_to_position([i, j, k, i, j, k])
                    print("{}\t{}".format(p, alpha[0]), file=f)
                    if i - 1 >= 0:
                        p = index_to_position([i, j, k, i - 1, j, k])
                        print("{}\t{}".format(p, alpha[1]), file=f)
                    if i + 1 < n:
                        p = index_to_position([i, j, k, i + 1, j, k])
                        print("{}\t{}".format(p, alpha[2]), file=f)
                    if j - 1 >= 0:
                        p = index_to_position([i, j, k, i, j - 1, k])
                        print("{}\t{}".format(p, alpha[3]), file=f)
                    if j + 1 < n:
                        p = index_to_position([i, j, k, i, j + 1, k])
                        print("{}\t{}".format(p, alpha[4]), file=f)
                    if k - 1 >= 0:
                        p = index_to_position([i, j, k, i, j, k - 1])
                        print("{}\t{}".format(p, alpha[5]), file=f)
                    if k + 1 < n:
                        p = index_to_position([i, j, k, i, j, k + 1])
                        print("{}\t{}".format(p, alpha[6]), file=f)
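The flattening done by `index_to_position` can be verified in isolation: it permutes the six indices `[a, b, c, d, e, f]` into `[a, d, b, e, c, f]` (interleaving the row and column 3-D indices) and then reads them as the digits of a base-`n` number. The standalone copy below reproduces that function with the same `n = 40`; the specific test index is chosen for illustration.

```python
n = 40  # same grid size as in the script above


def index_to_position(index):
    # Interleave the two 3-D indices (a,b,c) and (d,e,f), then read the
    # result as a base-n number, exactly as the script does.
    p = 0
    a, b, c, d, e, f = index
    index = [a, d, b, e, c, f]
    for i in index:
        p = p * n + i
    return p


# (1,0,0,1,0,0) -> permuted digits (1,1,0,0,0,0) -> 41 * n**4
pos = index_to_position([1, 0, 0, 1, 0, 0])
print(pos)  # 104960000
```

Because the diagonal entry and its six neighbors differ in exactly one of the permuted digits, neighboring stencil entries land at predictable offsets in the flattened index, which is what makes the resulting `fdm.tsv` a valid sparse-tensor coordinate list.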
# File: measures/migrations/0005_auto_20210224_2242.py
# Repository: uktrade/tamato (license: MIT)

# Generated by Django 3.1 on 2021-02-24 22:42
from django.db import migrations


class Migration(migrations.Migration):

    dependencies = [
        ("measures", "0004_rename_condition_component_measurement"),
    ]

    operations = [
        migrations.AlterModelOptions(
            name="measurecondition",
            options={"ordering": ["component_sequence_number"]},
        ),
    ]
#!/usr/bin/env python3
# File: py/mistql/cli.py
# Repository: evinism/millieql (license: MIT)
from typing import Union
import argparse
from mistql import __version__
from mistql import query
import sys
import json
import logging

log = logging.getLogger(__name__)

parser = argparse.ArgumentParser(
    description="CLI for the python MistQL query language implementation"
)
parser.add_argument("--version", "-v", action="version", version=__version__)
parser.add_argument("query", type=str, help="The query to run")

inputgroup = parser.add_mutually_exclusive_group()
inputgroup.add_argument("--data", "-d", type=str, help="The data to run the query on.")
inputgroup.add_argument(
    "--file", "-f", type=str, help="The file to read the data from. Defaults to stdin"
)
parser.add_argument(
    "--output", "-o", type=str, help="The output file. Defaults to stdout"
)
parser.add_argument(
    "--pretty", "-p", action="store_true", help="Pretty print the output"
)


def main(supplied_args=None):
    if supplied_args is None:
        args = parser.parse_args()
    else:
        args = parser.parse_args(supplied_args)

    raw_data: Union[str, bytes]
    if args.data:
        raw_data = args.data
    elif args.file:
        with open(args.file, 'rb') as f:
            raw_data = f.read()
    else:
        raw_data = sys.stdin.buffer.read()
    data = json.loads(raw_data)

    out = query(args.query, data)

    if args.output:
        # TODO: Allow alternate output encodings other than utf-8
        out_bytes = json.dumps(
            out,
            indent=2 if args.pretty else None,
            ensure_ascii=False
        ).encode("utf-8")
        with open(args.output, "wb") as f:
            f.write(out_bytes)
    else:
        print(json.dumps(out, indent=2 if args.pretty else None, ensure_ascii=False))


if __name__ == "__main__":
    main()
# File: csvsplitter.py
# Repository: harryhan1989/csvsplitter (license: MIT)

import sys, os
import py2 as csv
import converter, iohelper
from itertools import groupby


# Note: this file is Python 2 code (print statements, .next(), .iteritems()).
def do_split(csvfilepath, colidx):
    result = {}
    header = []
    iohelper.delete_file_folder('splitted/')
    iohelper.dir_create('splitted/')
    with open(csvfilepath, 'rb') as csvfile:
        csvreader = csv.reader(csvfile, encoding='utf-8')
        header = csvreader.next()
        for row in csvreader:
            if row[colidx] in result:
                result[row[colidx]].append(row)
                if len(result[row[colidx]]) > 100000:
                    print 'start---%s' % row[colidx]
                    with open("splitted/%s.csv" % row[colidx], "ab") as output:
                        wr = csv.writer(output, encoding='utf-8')
                        if os.path.getsize("splitted/%s.csv" % row[colidx]) == 0:
                            wr.writerow(header)
                            converter.convert_to_utf8("splitted/%s.csv" % row[colidx])
                        for line in result[row[colidx]]:
                            wr.writerow(line)
                    print 'end---%s' % row[colidx]
                    result[row[colidx]] = []
            else:
                result[row[colidx]] = [row]
    for attr, value in result.iteritems():
        if attr != '-' and attr != '':
            print 'start---%s' % attr
            with open("splitted/%s.csv" % attr, "ab") as output:
                wr = csv.writer(output, quoting=csv.QUOTE_ALL)
                if os.path.getsize("splitted/%s.csv" % attr) == 0:
                    wr.writerow(header)
                    converter.convert_to_utf8("splitted/%s.csv" % attr)
                for line in value:
                    wr.writerow(line)
            print 'end---%s' % attr
            value = []
4e7d951b6666fe0170d6f367a7dc198639ed32fb | 769 | py | Python | django_todo/apps/core/models.py | maxicecilia/django_todo | d669c89c1885722cdb4e1a366ccff530137c64cc | [
"MIT"
] | null | null | null | django_todo/apps/core/models.py | maxicecilia/django_todo | d669c89c1885722cdb4e1a366ccff530137c64cc | [
"MIT"
] | null | null | null | django_todo/apps/core/models.py | maxicecilia/django_todo | d669c89c1885722cdb4e1a366ccff530137c64cc | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from django.db import models
from django.contrib.auth.models import User


class TaskManager(models.Manager):
    def pending_tasks(self, user):
        return super(TaskManager, self).get_queryset().filter(is_checked=False, user=user).order_by('-date_created')


class Task(models.Model):
    """Task that needs to be done."""
    date_created = models.DateTimeField(auto_now_add=True, blank=True, null=True)
    date_done = models.DateTimeField(blank=True, null=True)
    description = models.CharField('What needs to be done?', max_length=255)
    is_checked = models.BooleanField(default=False)
    user = models.ForeignKey(User, default=None)

    objects = TaskManager()

    def __unicode__(self):
        return u'%s' % self.description
# File: MRT-and-LRT-Stations-master/combine.py
# Repository: shamim-akhtar/tutorial-pathfinding (license: CC0-1.0)

import pandas as pd
bus_stops = pd.read_csv('./bus_stops.csv')
stations = pd.read_csv('./mrt_lrt.csv')


def strip_mrtlrt(row):
    assert (
        row['Name'].endswith(' MRT STATION') or
        row['Name'].endswith(' LRT STATION')
    )
    l = len(' MRT STATION')
    return row['Name'][:-l]


stations['Name'] = stations.apply(strip_mrtlrt, axis=1)
combined = pd.concat((bus_stops, stations))
combined = combined.drop(['Unnamed: 0'], axis=1)
print(combined)
combined.reset_index(drop=True, inplace=True)
combined.to_csv('./combined.csv')
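`strip_mrtlrt` quietly relies on `' MRT STATION'` and `' LRT STATION'` having the same length, so a single slice length covers both suffixes. A pandas-free check of that string logic (the station names below are illustrative examples):

```python
# One slice length works for both suffixes because they are equally long.
l = len(' MRT STATION')
assert len(' LRT STATION') == l  # both suffixes are 12 characters

print('MARINA BAY MRT STATION'[:-l])     # MARINA BAY
print('BUKIT PANJANG LRT STATION'[:-l])  # BUKIT PANJANG
```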
# File: osc_bge/branch/migrations/0017_bgeresource.py
# Repository: jisuhan3201/osc-bge (license: MIT)

# Generated by Django 2.0.9 on 2018-12-13 09:36
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion


class Migration(migrations.Migration):

    dependencies = [
        migrations.swappable_dependency(settings.AUTH_USER_MODEL),
        ('branch', '0016_hoststudent_communication_log'),
    ]

    operations = [
        migrations.CreateModel(
            name='BgeResource',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('created_at', models.DateTimeField(auto_now_add=True, null=True)),
                ('updated_at', models.DateTimeField(auto_now=True, null=True)),
                # 'cateogry' is misspelled in the original migration; it is kept
                # as-is because renaming it here would change the DB schema.
                ('cateogry', models.CharField(blank=True, max_length=80, null=True)),
                ('sub_category', models.CharField(blank=True, max_length=80, null=True)),
                ('title', models.CharField(blank=True, max_length=140, null=True)),
                ('file', models.FileField(blank=True, null=True, upload_to='resources/')),
                ('writer', models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, to=settings.AUTH_USER_MODEL)),
            ],
            options={
                'abstract': False,
            },
        ),
    ]
# File: mayan/apps/folders/forms.py
# Repository: camerondphillips/MAYAN (license: Apache-2.0)

from __future__ import absolute_import, unicode_literals
import logging

from django import forms
from django.core.exceptions import PermissionDenied
from django.utils.translation import ugettext_lazy as _

from acls.models import AccessEntry
from permissions.models import Permission

from .models import Folder
from .permissions import PERMISSION_FOLDER_VIEW

logger = logging.getLogger(__name__)


class FolderForm(forms.ModelForm):
    class Meta:
        model = Folder
        fields = ('title',)


class FolderListForm(forms.Form):
    def __init__(self, *args, **kwargs):
        user = kwargs.pop('user', None)
        logger.debug('user: %s', user)
        super(FolderListForm, self).__init__(*args, **kwargs)
        queryset = Folder.objects.all()
        try:
            Permission.objects.check_permissions(user, [PERMISSION_FOLDER_VIEW])
        except PermissionDenied:
            queryset = AccessEntry.objects.filter_objects_by_access(PERMISSION_FOLDER_VIEW, user, queryset)

        self.fields['folder'] = forms.ModelChoiceField(
            queryset=queryset,
            label=_('Folder'))
# File: collocation_node.py
# Repository: UW-OCP/Collocation-Solver-CUDA (license: MIT)

import numpy as np
import scipy.linalg


class collocation_node:
    """
    Node class for collocation solver, save the required DAE variables
    at each time node
    """

    '''
    Input: size_y - size of the ODE variables
           size_z - size of the DAE variables
           size_p - size of the parameters
           m: stages of the lobatto coefficients
    Construct the empty collocation node with all the
    fields set to zero.
    '''
    def __init__(self, size_y, size_z, size_p, m):
        self.size_y = size_y
        self.size_z = size_z
        self.size_p = size_p
        self.m = m
        self.delta_t = 0
        self.tspan = []
        self.y = np.zeros((size_y), dtype=np.float64)
        self.z = np.zeros((size_z), dtype=np.float64)
        self.y_dot = np.zeros((size_y, m), dtype=np.float64)
        self.z_tilda = np.zeros((size_z, m), dtype=np.float64)
        self.p = np.zeros((size_p), dtype=np.float64)
        self.y_tilda = np.zeros((size_y, m), dtype=np.float64)
        self.f_a = np.zeros(((size_y + size_z) * m), dtype=np.float64)
        self.f_b = np.zeros((size_y), dtype=np.float64)
        self.f_N = []
        self.J = np.zeros(((size_y + size_z) * m, size_y), dtype=np.float64)
        self.W = np.zeros(((size_y + size_z) * m, (size_y + size_z) * m), dtype=np.float64)
        self.V = np.zeros(((size_y + size_z) * m, size_p), dtype=np.float64)
        self.V_N = []
        self.D = np.zeros((size_y, (size_y + size_z) * m), dtype=np.float64)
        self.B = np.zeros((size_y + size_p, size_y), dtype=np.float64)
        self.A = np.zeros((size_y, size_y), dtype=np.float64)
        self.C = np.zeros((size_y, size_y), dtype=np.float64)
        self.H = np.zeros((size_y, size_p), dtype=np.float64)
        self.H_N = []
        self.b = np.zeros((size_y), dtype=np.float64)
        self.b_N = []
        self.delta_y = np.zeros(size_y, dtype=np.float64)
        self.delta_k = np.zeros(((size_y + size_z) * m), dtype=np.float64)
        self.delta_p = np.zeros(size_p, dtype=np.float64)
        '''
        self.C_tilda = np.zeros((size_y, size_y), dtype=np.float64)
        self.G_tilda = np.zeros((size_y, size_y), dtype=np.float64)
        self.H_tilda = np.zeros((size_y, size_p), dtype=np.float64)
        self.b_tilda = np.zeros((size_y), dtype=np.float64)
        self.R = np.zeros()
        self.E = np.zeros()
        self.G = np.zeros()
        self.K = np.zeros()
        self.d = np.zeros()
        self.Rp = np.zeros()
        self.dp = np.zeros()
        self.delta_p = np.zeros()
        self.delta_y = np.zeros()
        self.delta_k = np.zeros()
        '''
    '''
    Input: y - a size_y vector of the value of the ODE variables
    Set the ODE variable y.
    '''
    def set_y(self, y):
        for i in range(self.size_y):
            self.y[i] = y[i]

    '''
    Input: z - a size_z vector of the value of the DAE variables
    Set the DAE variable z.
    '''
    def set_z(self, z):
        for i in range(self.size_z):
            self.z[i] = z[i]

    '''
    Input: p - a size_p x 1 vector of the value of the parameter variables
    Set the parameter variable p.
    '''
    def set_p(self, p):
        for i in range(self.size_p):
            self.p[i] = p[i]

    '''
    Input: delta_t - a double representing the interval of the time
           span between the current node and the next node
    Set the time interval delta_t.
    '''
    def set_delta_t(self, delta_t):
        self.delta_t = delta_t

    '''
    Input: tspan representing the time span between the current node and the next node
    Set the time interval tspan.
    '''
    def set_tspan(self, tspan):
        for i in range(tspan.shape[0]):
            self.tspan.append(tspan[i])

    '''
    Input:
        y_dot : value of the derivative of the ODE variable y
        j : index of the collocation point
    Set the y_dot at the jth collocation point
    '''
    def set_y_dot(self, y_dot, j):
        for i in range(self.size_y):
            self.y_dot[i][j] = y_dot[i]

    '''
    Input:
        z_tilda : value of the DAE variable z
        j : index of the collocation point
    Set the z_tilda at the jth collocation point
    '''
    def set_z_tilda(self, z_tilda, j):
        for i in range(self.size_z):
            self.z_tilda[i][j] = z_tilda[i]

    '''
    Input:
        y_tilda : value of the ODE variable y
        j : index of the collocation point
    Set the y_tilda at the jth collocation point
    '''
    def set_y_tilda(self, y_tilda, j):
        for i in range(self.size_y):
            self.y_tilda[i][j] = y_tilda[i]

    '''
    Input:
        f_a : value of the residual of all the collocation points of the time interval
    Set the residual f_a of the time interval
    '''
    def set_f_a(self, f_a):
        for i in range((self.size_y + self.size_z) * self.m):
            self.f_a[i] = f_a[i]

    '''
    Input:
        f_b : value of the residual at the node
    Set the residual f_b of the node
    '''
    def set_f_b(self, f_b):
        for i in range(self.size_y):
            self.f_b[i] = f_b[i]

    # self.f_N = np.zeros((size_y + size_p), dtype=np.float64)
    def set_f_N(self, f_N):
        for i in range(self.size_y + self.size_p):
            self.f_N.append(f_N[i])
def get_y(self):
y = np.zeros((self.size_y), dtype = np.float64)
for i in range(self.size_y):
y[i] = self.y[i]
return y
def get_y_tilda(self, j):
y = np.zeros((self.size_y), dtype = np.float64)
for i in range(self.size_y):
y[i] = self.y_tilda[i][j]
return y
def get_z_tilda(self, j):
z = np.zeros((self.size_z), dtype = np.float64)
for i in range(self.size_z):
z[i] = self.z_tilda[i][j]
return z
def set_B(self, B):
for i in range(self.size_y + self.size_p):
for j in range(self.size_y):
self.B[i][j] = B[i][j]
# self.VN = np.zeros((size_y + size_p, size_p), dtype = np.float64)
def set_VN(self, VN):
for i in range(self.size_y + self.size_p):
V_row = []
for j in range(self.size_p):
V_row.append(VN[i][j])
self.V_N.append(V_row)
# self.HN = np.zeros((size_y + size_p, size_p), dtype = np.float64)
def set_HN(self, VN):
for i in range(self.size_y + self.size_p):
H_row = []
for j in range(self.size_p):
H_row.append(VN[i][j])
self.H_N.append(H_row)
# self.b_N = np.zeros((size_y + size_p), dtype = np.float64)
def set_bN(self, f_b):
for i in range(self.size_y + self.size_p):
self.b_N.append(-f_b[i])
def set_delta_y(self, delta_y):
for i in range(self.size_y):
self.delta_y[i] = delta_y[i]
def set_delta_k(self, delta_k):
for i in range((self.size_y + self.size_z) * self.m):
self.delta_k[i] = delta_k[i]
    # j_col : jth collocation node
    def set_Jacobian(self, a, b, Dh, Dg, j_col):
        '''
        hy = np.zeros((self.size_y, self.size_y), dtype = np.float64)
        hz = np.zeros((self.size_y, self.size_z), dtype = np.float64)
        hp = np.zeros((self.size_y, self.size_p), dtype = np.float64)
        gy = np.zeros((self.size_z, self.size_y), dtype = np.float64)
        gz = np.zeros((self.size_z, self.size_z), dtype = np.float64)
        gp = np.zeros((self.size_z, self.size_p), dtype = np.float64)
        '''
        # slice the Jacobians of h and g into blocks w.r.t. y, z, and p
        hy = Dh[0 : , 0 : self.size_y]
        hz = Dh[0 : , self.size_y : self.size_y + self.size_z]
        hp = Dh[0 : , self.size_y + self.size_z : ]
        gy = Dg[0 : , 0 : self.size_y]
        gz = Dg[0 : , self.size_y : self.size_y + self.size_z]
        gp = Dg[0 : , self.size_y + self.size_z : ]
        '''
        for i in range(self.size_y):
            for j in range(self.size_y):
                hy[i][j] = Dh[i][j]
            for j in range(self.size_z):
                hz[i][j] = Dh[i][j + self.size_y]
            for j in range(self.size_p):
                hp[i][j] = Dh[i][j + (self.size_y + self.size_z)]
        for i in range(self.size_z):
            for j in range(self.size_y):
                gy[i][j] = Dg[i][j]
            for j in range(self.size_z):
                gz[i][j] = Dg[i][j + self.size_y]
            for j in range(self.size_p):
                gp[i][j] = Dg[i][j + (self.size_y + self.size_z)]
        '''
        start_row_index_h = j_col * (self.size_y + self.size_z)
        self.J[start_row_index_h : start_row_index_h + self.size_y, 0 : self.size_y] = hy
        self.V[start_row_index_h : start_row_index_h + self.size_y, 0 : self.size_p] = hp
        '''
        for i in range(self.size_y):
            for j in range(self.size_y):
                self.J[start_row_index_h + i][j] = hy[i][j]
            for j in range(self.size_p):
                self.V[start_row_index_h + i][j] = hp[i][j]
        '''
        start_row_index_g = start_row_index_h + self.size_y
        self.J[start_row_index_g : start_row_index_g + self.size_z, 0 : self.size_y] = gy
        self.V[start_row_index_g : start_row_index_g + self.size_z, 0 : self.size_p] = gp
        '''
        for i in range(self.size_z):
            for j in range(self.size_y):
                self.J[start_row_index_g + i][j] = gy[i][j]
            for j in range(self.size_p):
                self.V[start_row_index_g + i][j] = gp[i][j]
        '''
        self.D[0 : self.size_y, j_col * (self.size_y + self.size_z) : j_col * (self.size_y + self.size_z) + self.size_y] = self.delta_t * b * np.eye(self.size_y, dtype = np.float64)
        '''
        for i in range(self.size_y):
            self.D[i][i + j_col * (self.size_y + self.size_z)] = self.delta_t * b * 1
        '''
        # for each row block j_col,
        # loop through all the column blocks
        for i in range(self.m):
            start_row_index = j_col * (self.size_y + self.size_z)
            w_tmp = np.zeros(((self.size_y + self.size_z), (self.size_y + self.size_z)), dtype = np.float64)
            if i == j_col:
                start_col_index = i * (self.size_y + self.size_z)
                identity = np.eye(self.size_y, dtype = np.float64)
                w_tmp[0 : self.size_y, 0 : self.size_y] = -identity + self.delta_t * a[j_col, j_col] * hy
                w_tmp[0 : self.size_y, self.size_y : ] = hz
                w_tmp[self.size_y : , 0 : self.size_y] = self.delta_t * a[j_col, j_col] * gy
                w_tmp[self.size_y : , self.size_y : ] = gz
                self.W[start_row_index : start_row_index + (self.size_y + self.size_z), start_col_index : start_col_index + (self.size_y + self.size_z)] = w_tmp
                '''
                for j in range(self.size_y):
                    for k in range(self.size_y):
                        w_tmp[j][k] = -identity[j][k] + self.delta_t * a[j_col][j_col] * hy[j][k]
                    for k in range(self.size_z):
                        w_tmp[j][k + self.size_y] = hz[j][k]
                for j in range(self.size_z):
                    for k in range(self.size_y):
                        w_tmp[j + self.size_y][k] = self.delta_t * a[j_col][j_col] * gy[j][k]
                    for k in range(self.size_z):
                        w_tmp[j + self.size_y][k + self.size_y] = gz[j][k]
                for j in range(self.size_y + self.size_z):
                    for k in range(self.size_y + self.size_z):
                        self.W[start_row_index + j][start_col_index + k] = w_tmp[j][k]
                '''
            else:
                start_col_index = i * (self.size_y + self.size_z)
                w_tmp[0 : self.size_y, 0 : self.size_y] = self.delta_t * a[j_col, i] * hy
                w_tmp[0 : self.size_y, self.size_y : ] = np.zeros((self.size_y, self.size_z), dtype = np.float64)
                w_tmp[self.size_y : , 0 : self.size_y] = self.delta_t * a[j_col, i] * gy
                w_tmp[self.size_y : , self.size_y : ] = np.zeros((self.size_z, self.size_z), dtype = np.float64)
                self.W[start_row_index : start_row_index + (self.size_y + self.size_z), start_col_index : start_col_index + (self.size_y + self.size_z)] = w_tmp
                '''
                for j in range(self.size_y):
                    for k in range(self.size_y):
                        w_tmp[j][k] = self.delta_t * a[j_col][i] * hy[j][k]
                    for k in range(self.size_z):
                        w_tmp[j][k + self.size_y] = 0
                for j in range(self.size_z):
                    for k in range(self.size_y):
                        w_tmp[j + self.size_y][k] = self.delta_t * a[j_col][i] * gy[j][k]
                    for k in range(self.size_z):
                        w_tmp[j + self.size_y][k + self.size_y] = 0
                for j in range(self.size_y + self.size_z):
                    for k in range(self.size_y + self.size_z):
                        self.W[start_row_index + j][start_col_index + k] = w_tmp[j][k]
                '''
    def update_Jacobian(self):
        # W_inv = np.linalg.inv(self.W)
        # identity = np.eye(self.size_y, dtype=np.float64)
        # D_W_inv = np.dot(self.D, W_inv)
        # self.A = -identity + np.dot(D_W_inv, self.J)
        # self.C = np.eye(self.size_y)
        # self.H = np.dot(D_W_inv, self.V)
        # self.b = -self.f_b - np.dot(D_W_inv, self.f_a)
        identity = np.eye(self.size_y, dtype=np.float64)
        # W = P * L * U (scipy.linalg.lu convention), so
        # W^(-1) * X = U^(-1) * L^(-1) * P^(-1) * X
        P, L, U = scipy.linalg.lu(self.W)
        # A = -I + D * W^(-1) * J
        # X = np.dot(P, self.J)
        X = np.linalg.solve(P, self.J)
        L_inv_J = np.linalg.solve(L, X)
        W_inv_J = np.linalg.solve(U, L_inv_J)
        D_W_inv_J = np.dot(self.D, W_inv_J)
        self.A = -identity + D_W_inv_J
        # H = D * W^(-1) * V
        X = np.linalg.solve(P, self.V)
        L_inv_V = np.linalg.solve(L, X)
        W_inv_V = np.linalg.solve(U, L_inv_V)
        self.H = np.dot(self.D, W_inv_V)
        # C = I
        self.C = identity
        # b = -f_b - D * W^(-1) * f_a
        X = np.linalg.solve(P, self.f_a)
        L_inv_f_a = np.linalg.solve(L, X)
        W_inv_f_a = np.linalg.solve(U, L_inv_f_a)
        D_W_inv_f_a = np.dot(self.D, W_inv_f_a)
        self.b = -self.f_b - D_W_inv_f_a
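The `update_Jacobian` method above factors `W` once with `scipy.linalg.lu` and then applies `W^(-1)` to several right-hand sides (`J`, `V`, `f_a`) by three successive solves. A minimal standalone sketch of that pattern, with a hypothetical 2x2 matrix rather than the solver's own data, is below; it also uses `scipy.linalg.solve_triangular` for the `L` and `U` steps, which exploits the triangular structure that a plain `np.linalg.solve` ignores.

```python
import numpy as np
import scipy.linalg


def apply_W_inverse(W, B):
    # Factor once: scipy.linalg.lu returns W = P @ L @ U, so
    # W^(-1) @ B = U^(-1) @ L^(-1) @ P^(-1) @ B -- three cheap solves.
    P, L, U = scipy.linalg.lu(W)
    X = np.linalg.solve(P, B)  # P^(-1) @ B (P is a permutation matrix)
    Y = scipy.linalg.solve_triangular(L, X, lower=True)
    return scipy.linalg.solve_triangular(U, Y, lower=False)


W = np.array([[4.0, 3.0], [6.0, 3.0]])
B = np.eye(2)
W_inv_B = apply_W_inverse(W, B)  # equals np.linalg.inv(W) since B = I
```

Reusing the single factorization for every right-hand side, as the method does, avoids repeating the O(n^3) elimination for each of `J`, `V`, and `f_a`.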
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2012, Michael DeHaan <michael.dehaan@gmail.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
import shutil
import stat
import grp
import pwd
try:
    import selinux
    HAVE_SELINUX=True
except ImportError:
    HAVE_SELINUX=False
DOCUMENTATION = '''
---
module: file
version_added: "historical"
short_description: Sets attributes of files
extends_documentation_fragment: files
description:
  - Sets attributes of files, symlinks, and directories, or removes
    files/symlinks/directories. Many other modules support the same options as
    the M(file) module - including M(copy), M(template), and M(assemble).
notes:
  - See also M(copy), M(template), M(assemble)
requirements: [ ]
author: Michael DeHaan
options:
  path:
    description:
      - 'path to the file being managed. Aliases: I(dest), I(name)'
    required: true
    default: []
    aliases: ['dest', 'name']
  state:
    description:
      - If C(directory), all immediate subdirectories will be created if they
        do not exist, since 1.7 they will be created with the supplied permissions.
        If C(file), the file will NOT be created if it does not exist, see the M(copy)
        or M(template) module if you want that behavior. If C(link), the symbolic
        link will be created or changed. Use C(hard) for hardlinks. If C(absent),
        directories will be recursively deleted, and files or symlinks will be unlinked.
        If C(touch) (new in 1.4), an empty file will be created if the c(path) does not
        exist, while an existing file or directory will receive updated file access and
        modification times (similar to the way `touch` works from the command line).
    required: false
    default: file
    choices: [ file, link, directory, hard, touch, absent ]
  src:
    required: false
    default: null
    choices: []
    description:
      - path of the file to link to (applies only to C(state=link)). Will accept absolute,
        relative and nonexisting paths. Relative paths are not expanded.
  recurse:
    required: false
    default: "no"
    choices: [ "yes", "no" ]
    version_added: "1.1"
    description:
      - recursively set the specified file attributes (applies only to state=directory)
  force:
    required: false
    default: "no"
    choices: [ "yes", "no" ]
    description:
      - 'force the creation of the symlinks in two cases: the source file does
        not exist (but will appear later); the destination exists and is a file (so, we need to unlink the
        "path" file and create symlink to the "src" file in place of it).'
'''

EXAMPLES = '''
# change file ownership, group and mode. When specifying mode using octal numbers, first digit should always be 0.
- file: path=/etc/foo.conf owner=foo group=foo mode=0644
- file: src=/file/to/link/to dest=/path/to/symlink owner=foo group=foo state=link
- file: src=/tmp/{{ item.path }} dest={{ item.dest }} state=link
  with_items:
    - { path: 'x', dest: 'y' }
    - { path: 'z', dest: 'k' }

# touch a file, using symbolic modes to set the permissions (equivalent to 0644)
- file: path=/etc/foo.conf state=touch mode="u=rw,g=r,o=r"

# touch the same file, but add/remove some permissions
- file: path=/etc/foo.conf state=touch mode="u+rw,g-wx,o-rwx"
'''
def get_state(path):
    ''' Find out current state '''

    if os.path.lexists(path):
        if os.path.islink(path):
            return 'link'
        elif os.path.isdir(path):
            return 'directory'
        elif os.stat(path).st_nlink > 1:
            return 'hard'
        else:
            # could be many other things, but defaulting to file
            return 'file'

    return 'absent'


def recursive_set_attributes(module, path, follow, file_args):
    changed = False
    for root, dirs, files in os.walk(path):
        for fsobj in dirs + files:
            fsname = os.path.join(root, fsobj)
            if not os.path.islink(fsname):
                tmp_file_args = file_args.copy()
                tmp_file_args['path'] = fsname
                changed |= module.set_fs_attributes_if_different(tmp_file_args, changed)
            else:
                tmp_file_args = file_args.copy()
                tmp_file_args['path'] = fsname
                changed |= module.set_fs_attributes_if_different(tmp_file_args, changed)
                if follow:
                    fsname = os.path.join(root, os.readlink(fsname))
                    if os.path.isdir(fsname):
                        changed |= recursive_set_attributes(module, fsname, follow, file_args)
                    tmp_file_args = file_args.copy()
                    tmp_file_args['path'] = fsname
                    changed |= module.set_fs_attributes_if_different(tmp_file_args, changed)
    return changed
def main():

    module = AnsibleModule(
        argument_spec = dict(
            state = dict(choices=['file','directory','link','hard','touch','absent'], default=None),
            path = dict(aliases=['dest', 'name'], required=True),
            original_basename = dict(required=False), # Internal use only, for recursive ops
            recurse = dict(default='no', type='bool'),
            force = dict(required=False, default=False, type='bool'),
            diff_peek = dict(default=None),
            validate = dict(required=False, default=None),
            src = dict(required=False, default=None),
        ),
        add_file_common_args=True,
        supports_check_mode=True
    )

    params = module.params
    state = params['state']
    force = params['force']
    diff_peek = params['diff_peek']
    src = params['src']
    follow = params['follow']

    # modify source as we later reload and pass, specially relevant when used by other modules.
    params['path'] = path = os.path.expanduser(params['path'])

    # short-circuit for diff_peek
    if diff_peek is not None:
        appears_binary = False
        try:
            f = open(path)
            b = f.read(8192)
            f.close()
            if "\x00" in b:
                appears_binary = True
        except:
            pass
        module.exit_json(path=path, changed=False, appears_binary=appears_binary)

    prev_state = get_state(path)

    # state should default to file, but since that creates many conflicts,
    # default to 'current' when it exists.
    if state is None:
        if prev_state != 'absent':
            state = prev_state
        else:
            state = 'file'

    # source is both the source of a symlink or an informational passing of the src for a template module
    # or copy module, even if this module never uses it, it is needed to key off some things
    if src is not None:
        src = os.path.expanduser(src)
    else:
        if state in ['link','hard']:
            if follow and state == 'link':
                # use the current target of the link as the source
                src = os.path.realpath(path)
            else:
                module.fail_json(msg='src and dest are required for creating links')

    # original_basename is used by other modules that depend on file.
    if os.path.isdir(path) and state not in ["link", "absent"]:
        basename = None
        if params['original_basename']:
            basename = params['original_basename']
        elif src is not None:
            basename = os.path.basename(src)
        if basename:
            params['path'] = path = os.path.join(path, basename)

    # make sure the target path is a directory when we're doing a recursive operation
    recurse = params['recurse']
    if recurse and state != 'directory':
        module.fail_json(path=path, msg="recurse option requires state to be 'directory'")

    file_args = module.load_file_common_arguments(params)
    changed = False

    if state == 'absent':
        if state != prev_state:
            if not module.check_mode:
                if prev_state == 'directory':
                    try:
                        shutil.rmtree(path, ignore_errors=False)
                    except Exception, e:
                        module.fail_json(msg="rmtree failed: %s" % str(e))
                else:
                    try:
                        os.unlink(path)
                    except Exception, e:
                        module.fail_json(path=path, msg="unlinking failed: %s " % str(e))
            module.exit_json(path=path, changed=True)
        else:
            module.exit_json(path=path, changed=False)

    elif state == 'file':
        if state != prev_state:
            if follow and prev_state == 'link':
                # follow symlink and operate on original
                path = os.path.realpath(path)
                prev_state = get_state(path)
                file_args['path'] = path

            if prev_state not in ['file','hard']:
                # file is not absent and any other state is a conflict
                module.fail_json(path=path, msg='file (%s) is %s, cannot continue' % (path, prev_state))

        changed = module.set_fs_attributes_if_different(file_args, changed)
        module.exit_json(path=path, changed=changed)

    elif state == 'directory':
        if follow and prev_state == 'link':
            path = os.path.realpath(path)
            prev_state = get_state(path)

        if prev_state == 'absent':
            if module.check_mode:
                module.exit_json(changed=True)
            changed = True
            curpath = ''

            # Split the path so we can apply filesystem attributes recursively
            # from the root (/) directory for absolute paths or the base path
            # of a relative path. We can then walk the appropriate directory
            # path to apply attributes.
            for dirname in path.strip('/').split('/'):
                curpath = '/'.join([curpath, dirname])
                # Remove leading slash if we're creating a relative path
                if not os.path.isabs(path):
                    curpath = curpath.lstrip('/')
                if not os.path.exists(curpath):
                    os.mkdir(curpath)
                    tmp_file_args = file_args.copy()
                    tmp_file_args['path'] = curpath
                    changed = module.set_fs_attributes_if_different(tmp_file_args, changed)

        # We already know prev_state is not 'absent', therefore it exists in some form.
        elif prev_state != 'directory':
            module.fail_json(path=path, msg='%s already exists as a %s' % (path, prev_state))

        changed = module.set_fs_attributes_if_different(file_args, changed)

        if recurse:
            changed |= recursive_set_attributes(module, file_args['path'], follow, file_args)

        module.exit_json(path=path, changed=changed)

    elif state in ['link','hard']:

        if os.path.isdir(path) and not os.path.islink(path):
            relpath = path
        else:
            relpath = os.path.dirname(path)

        absrc = os.path.join(relpath, src)
        if not os.path.exists(absrc) and not force:
            module.fail_json(path=path, src=src, msg='src file does not exist, use "force=yes" if you really want to create the link: %s' % absrc)

        if state == 'hard':
            if not os.path.isabs(src):
                module.fail_json(msg="absolute paths are required")
        elif prev_state == 'directory':
            if not force:
                module.fail_json(path=path, msg='refusing to convert between %s and %s for %s' % (prev_state, state, path))
            elif len(os.listdir(path)) > 0:
                # refuse to replace a directory that has files in it
                module.fail_json(path=path, msg='the directory %s is not empty, refusing to convert it' % path)
        elif prev_state in ['file', 'hard'] and not force:
            module.fail_json(path=path, msg='refusing to convert between %s and %s for %s' % (prev_state, state, path))

        if prev_state == 'absent':
            changed = True
        elif prev_state == 'link':
            old_src = os.readlink(path)
            if old_src != src:
                changed = True
        elif prev_state == 'hard':
            if not (state == 'hard' and os.stat(path).st_ino == os.stat(src).st_ino):
                changed = True
                if not force:
                    module.fail_json(dest=path, src=src, msg='Cannot link, different hard link exists at destination')
        elif prev_state in ['file', 'directory']:
            changed = True
            if not force:
                module.fail_json(dest=path, src=src, msg='Cannot link, %s exists at destination' % prev_state)
        else:
            module.fail_json(dest=path, src=src, msg='unexpected position reached')

        if changed and not module.check_mode:
            if prev_state != 'absent':
                # try to replace atomically
                tmppath = '/'.join([os.path.dirname(path), ".%s.%s.tmp" % (os.getpid(), time.time())])
                try:
                    if prev_state == 'directory' and (state == 'hard' or state == 'link'):
                        os.rmdir(path)
                    if state == 'hard':
                        os.link(src, tmppath)
                    else:
                        os.symlink(src, tmppath)
                    os.rename(tmppath, path)
                except OSError, e:
                    if os.path.exists(tmppath):
                        os.unlink(tmppath)
                    module.fail_json(path=path, msg='Error while replacing: %s' % str(e))
            else:
                try:
                    if state == 'hard':
                        os.link(src, path)
                    else:
                        os.symlink(src, path)
                except OSError, e:
                    module.fail_json(path=path, msg='Error while linking: %s' % str(e))

        if module.check_mode and not os.path.exists(path):
            module.exit_json(dest=path, src=src, changed=changed)

        changed = module.set_fs_attributes_if_different(file_args, changed)
        module.exit_json(dest=path, src=src, changed=changed)

    elif state == 'touch':
        if not module.check_mode:

            if prev_state == 'absent':
                try:
                    open(path, 'w').close()
                except OSError, e:
                    module.fail_json(path=path, msg='Error, could not touch target: %s' % str(e))
            elif prev_state in ['file', 'directory', 'hard']:
                try:
                    os.utime(path, None)
                except OSError, e:
                    module.fail_json(path=path, msg='Error while touching existing target: %s' % str(e))
            else:
                module.fail_json(msg='Cannot touch other than files, directories, and hardlinks (%s is %s)' % (path, prev_state))

            try:
                module.set_fs_attributes_if_different(file_args, True)
            except SystemExit, e:
                if e.code:
                    # We take this to mean that fail_json() was called from
                    # somewhere in basic.py
                    if prev_state == 'absent':
                        # If we just created the file we can safely remove it
                        os.remove(path)
                    raise e

        module.exit_json(dest=path, changed=True)

    module.fail_json(path=path, msg='unexpected position reached')

# import module snippets
from ansible.module_utils.basic import *

if __name__ == '__main__':
    main()
from django.shortcuts import render, reverse, get_object_or_404, redirect
from django.views.generic import TemplateView, ListView, DetailView, CreateView, UpdateView, DeleteView
from .models import JobListing
from django.contrib.auth.models import User
from django.contrib.auth.mixins import LoginRequiredMixin, UserPassesTestMixin
from django.contrib.auth.decorators import login_required
from django.utils import timezone
from django.db.models import Q
from django.contrib import messages
import datetime
from jobs.forms import JobCreateForm, JobUpdateForm
def job_create(request):
    if request.method == 'POST':
        form = JobCreateForm(request.POST)
        if form.is_valid():
            form.instance.author = request.user
            form = form.save()
            messages.success(request, "Your job has been created!")
            return redirect(reverse('job-detail', kwargs={'pk': form.pk}))
    else:
        form = JobCreateForm(initial={
            'phone_Number': request.user.profile.phone_Number,
            'company': request.user.profile.company
        })
    return render(request, 'jobs/joblisting_form.html', {'form': form})


def jobUpdateView(request, pk):
    instance = get_object_or_404(JobListing, id=pk)
    form = JobUpdateForm(request.POST or None, instance=instance)
    if form.is_valid():
        form.save()
        messages.success(request, "Your job has been created!")
        return redirect(reverse('job-detail', kwargs={'pk': pk}))
    else:
        g_form = JobUpdateForm(instance=JobListing.objects.get(pk=pk))
    return render(request, 'jobs/jobupdate_form.html', {'g_form': g_form})


class JobPageView(ListView):
    model = JobListing
    template_name = 'jobs/jobs.html'
    context_object_name = 'data'
    ordering = ['-date_posted']


class UserJobPageView(ListView):
    model = JobListing
    template_name = 'jobs/user_jobs.html'
    context_object_name = 'data'
    paginate_by = 3

    def get_queryset(self):
        user = get_object_or_404(User, username=self.kwargs.get('username'))
        return JobListing.objects.filter(author=user).order_by('-date_posted')


class JobDeleteView(LoginRequiredMixin, UserPassesTestMixin, DeleteView):
    model = JobListing
    success_url = '/'

    def test_func(self):
        job = self.get_object()
        return self.request.user == job.author


class JobDetailView(DetailView):
    model = JobListing


# class JobSearchView(ListView):
#     model = JobListing
#     template_name = 'jobs/jobs.html'
#     context_object_name = 'data'
#     ordering = ['-date_posted']
#
#     def get_queryset
4ec2b217554237784ba1b54a40b8214906235fdf | 2,189 | py | Python | pcWacomToMouseTouchpad.py | Amanita-muscaria/rmWacomToMouse | f4c0726c79a04475d123c5e58b50c16734178676 | [
"MIT"
] | 50 | 2019-02-08T10:01:24.000Z | 2021-08-05T21:39:20.000Z | pcWacomToMouseTouchpad.py | trivedigaurav/rmWacomToMouse | 5f6236feafb149b734dcb2b7dc9580ae42941366 | [
"MIT"
] | 5 | 2019-03-09T17:16:53.000Z | 2021-02-27T00:22:15.000Z | pcWacomToMouseTouchpad.py | trivedigaurav/rmWacomToMouse | 5f6236feafb149b734dcb2b7dc9580ae42941366 | [
"MIT"
] | 3 | 2019-08-01T04:11:17.000Z | 2021-02-26T20:31:54.000Z | #!/usr/bin/env python3
'''
Meant to run on your PC.
Receives data generated by rmServerWacomInput.py,
moves the mouse and presses accordingly.
Acts like a touchpad.
'''
import socket
import struct
from pynput.mouse import Button, Controller
mouse = Controller()
# ----------
# Config:
ONLY_DEBUG = False # Only show data. Don't move mouse
CLICK_PRESSURE = 1000
RELEASE_PRESSURE = 100
MAX_DIST = 20 # Max is 50
SPEED = 1.0
INVERT_X = True # Can be used to change orientation
INVERT_Y = True
# ----------
WACOM_WIDTH = 15725 # Values just checked by drawing to the edges
WACOM_HEIGHT = 20967 # ↑
# Source: https://github.com/canselcik/libremarkable/blob/master/src/input/wacom.rs
EV_SYNC = 0
EV_KEY = 1
EV_ABS = 3
WACOM_EVCODE_PRESSURE = 24
WACOM_EVCODE_DISTANCE = 25
WACOM_EVCODE_XTILT = 26
WACOM_EVCODE_YTILT = 27
WACOM_EVCODE_XPOS = 0
WACOM_EVCODE_YPOS = 1
lastXPos = None
lastYPos = None
lastXTilt = -1
lastYTilt = -1
lastDistance = -1
lastPressure = -1
mouseButtonPressed = False
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(('10.11.99.1', 33333))
while True:
    evDevType, evDevCode, evDevValue = struct.unpack('HHi', client.recv(8))

    if evDevType == EV_ABS:
        if evDevCode == WACOM_EVCODE_XPOS:
            if lastDistance < MAX_DIST and lastXPos is not None:
                xDist = (evDevValue - lastXPos) * (-SPEED if INVERT_X else SPEED)
                mouse.move(xDist, 0)
            lastXPos = evDevValue
        elif evDevCode == WACOM_EVCODE_YPOS:
            if lastDistance < MAX_DIST and lastYPos is not None:
                yDist = (evDevValue - lastYPos) * (-SPEED if INVERT_Y else SPEED)
                mouse.move(0, yDist)
            lastYPos = evDevValue
        elif evDevCode == WACOM_EVCODE_XTILT:
            lastXTilt = evDevValue
        elif evDevCode == WACOM_EVCODE_YTILT:
            lastYTilt = evDevValue
        elif evDevCode == WACOM_EVCODE_DISTANCE:
            lastDistance = evDevValue
        elif evDevCode == WACOM_EVCODE_PRESSURE:
            if not ONLY_DEBUG:
                if not mouseButtonPressed and evDevValue > CLICK_PRESSURE:
                    mouse.press(Button.left)
                    mouseButtonPressed = True
                elif mouseButtonPressed and evDevValue <= RELEASE_PRESSURE:
                    mouse.release(Button.left)
                    mouseButtonPressed = False
            lastPressure = evDevValue
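The 8-byte packets consumed above follow the struct format `'HHi'`: two unsigned 16-bit fields (event type and event code) followed by one signed 32-bit value. A small round-trip sketch of that wire format is below; the packed numbers are arbitrary illustrations, not real tablet data.

```python
import struct

# Pack an EV_ABS pressure event (type=3, code=24) with value 1500,
# then unpack it exactly as the receive loop does.
packet = struct.pack('HHi', 3, 24, 1500)
evType, evCode, evValue = struct.unpack('HHi', packet)
```

Note that `struct.calcsize('HHi')` is 8 on common platforms (2 + 2 + 4 bytes, with the `int` already 4-byte aligned), matching the `client.recv(8)` call; a more robust reader would still loop until a full 8 bytes have arrived, since `recv` may return fewer.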
] | null | null | null | from django.urls import path
from .views import *
urlpatterns = [
path('register/', RegisterView.as_view(), name = "register"),
path('<int:pk>/password/', PasswordsChangeView.as_view(), name = "password"),
path('profile/', ProfileView.as_view(), name = "profile"),
path('<int:pk>/edit_profile/', EditProfile.as_view(), name = "edit_profile"),
path('<str:pk>/edit_settings/', EditSettings.as_view(), name = "edit_settings"),
path('create_profile/', CreateProfile.as_view(), name = "create_profile"),
path('password_changed_successful/', password_changed_successful, name = "password_changed_successful"),
path('signin/', signin, name = 'login'),
path('logoutV/', logoutV, name = 'logout'),
] | 43.470588 | 108 | 0.675237 | 84 | 739 | 5.72619 | 0.369048 | 0.074844 | 0.12474 | 0.058212 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142084 | 739 | 17 | 109 | 43.470588 | 0.758675 | 0 | 0 | 0 | 0 | 0 | 0.321622 | 0.135135 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.153846 | 0.153846 | 0 | 0.153846 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
4ed049ed4c7f6112a5a50373ed994443ceae1571 | 1,901 | py | Python | scripts/rancher-cli.py | dustinvanbuskirk/docker-rancher-cd | 3af4b969731f0f8d8aea547d5a24453aa19470ca | [
"MIT"
] | null | null | null | scripts/rancher-cli.py | dustinvanbuskirk/docker-rancher-cd | 3af4b969731f0f8d8aea547d5a24453aa19470ca | [
"MIT"
] | null | null | null | scripts/rancher-cli.py | dustinvanbuskirk/docker-rancher-cd | 3af4b969731f0f8d8aea547d5a24453aa19470ca | [
"MIT"
] | null | null | null | import sys
import getopt
import subprocess


def main(argv):
    rancher_options = ''
    rancher_command = ''
    rancher_args = ''
    try:
        opts, args = getopt.getopt(argv, "o:c:a:", ["help", "rancher_options=", "rancher_command=", "rancher_args="])
    except getopt.GetoptError:
        print('Unrecognized argument; see usage below.')
        print('rancher-cli.py -o "<OPTIONS>" -c "<COMMAND>" -a "<ARGS>"')
        print('To see the rancher CLI help, run rancher-cli.py --help with no additional command line args.')
        print('To see the rancher CLI help for a COMMAND, run rancher-cli.py -c "<COMMAND>" -a "--help" with no additional command line args.')
        sys.exit(2)
    for opt, arg in opts:
        if opt == "--help":
            print('rancher-cli.py -o <rancher_options> -c <rancher_command> -a <rancher_args>')
            print('rancher_options - global flags to pass to rancher, e.g. --debug --version')
            print('rancher_command - the rancher subcommand to run, e.g. inspect')
            print('rancher_args - COMMAND arguments to pass to rancher')
            # Show rancher's own help output as well.
            proc = subprocess.Popen('rancher --help', shell=True)
            proc.communicate()
            sys.exit()
        elif opt in ("-o", "--rancher_options"):
            rancher_options = arg
        elif opt in ("-c", "--rancher_command"):
            rancher_command = arg
        elif opt in ("-a", "--rancher_args"):
            rancher_args = arg
    # Assemble the full rancher invocation from the collected pieces.
    command = 'rancher ' + rancher_options + ' ' + rancher_command + ' ' + rancher_args
    try:
        # Stream child output to our own stdout/stderr; give up after 10 minutes.
        exitcode = subprocess.Popen(command, shell=True, stdout=sys.stdout, stderr=sys.stderr).wait(timeout=10 * 60)
    except subprocess.TimeoutExpired:
        print("Timeout during upgrade")
        exitcode = 1
    sys.exit(exitcode)


if __name__ == "__main__":
    main(sys.argv[1:])
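As a side note, the option spec used above can be exercised in isolation with the standard library alone. This is a minimal sketch; the sample argv values are made up and no `rancher` binary is needed:

```python
import getopt

# Parse a hypothetical command line with the same spec rancher-cli.py uses.
argv = ["-o", "--debug", "-c", "inspect", "-a", "--format json"]
opts, args = getopt.getopt(argv, "o:c:a:", ["help", "rancher_options=", "rancher_command=", "rancher_args="])
print(opts)  # [('-o', '--debug'), ('-c', 'inspect'), ('-a', '--format json')]
```

Note that `getopt` returns each option with its leading dash, which is why the script compares against `"-o"` and `"--rancher_options"` rather than bare names.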
# motoscrape/db.py (sairon/motoscrape, Unlicense)
import glob
] | null | null | null | import glob
import json
ads_db = {}
def load_data():
try:
for file_ in glob.glob("scrapes/*.json"):
with open(file_, "r") as f:
data = json.load(f, "utf-8")
for item in data:
ads_db[item['permalink']] = ads_db
except IOError:
pass
load_data()
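The permalink-indexed load above can be exercised against an in-memory sample without any `scrapes/` directory. This is a minimal sketch; the record fields and values are hypothetical:

```python
import json
import os
import tempfile

ads_db = {}
# Hypothetical ad records mirroring the scraper's JSON shape.
data = [{"permalink": "/ad/1", "title": "CB500"},
        {"permalink": "/ad/2", "title": "SV650"}]

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "sample.json")
    with open(path, "w", encoding="utf-8") as f:
        json.dump(data, f)
    # Re-read the file and index each record by its permalink,
    # just as load_data() does for scrapes/*.json.
    with open(path, "r", encoding="utf-8") as f:
        for item in json.load(f):
            ads_db[item["permalink"]] = item

print(sorted(ads_db))  # ['/ad/1', '/ad/2']
```

Keying by permalink makes reloads idempotent: re-reading the same scrape simply overwrites each record in place.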