# ---- cdc/src/NoteDeid.py (repo: ZebinKang/cdc, license: MIT) ----
#!/usr/bin/python
# -*- coding: utf-8 -*-
'''
The MIT License (MIT)
Copyright (c) 2016 Wei-Hung Weng
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
Title : Clinical Document Classification Pipeline: Deidentification module (deid)
Author : Wei-Hung Weng
Created : 10/21/2016
'''
import os
import subprocess


def RunDeid(folder, deidDir, rpdr=False, erisOne=False):
    # Copy the deid Perl program into the note folder, then run it there.
    print("Copying deid program into folders")
    cwd = os.getcwd()
    os.system("cp -r " + deidDir + "/* " + folder)
    print("Executing deid")
if rpdr:
cmd = "for file in *.txt; do sed 1,2d \"${file}\" > temp && mv temp \"${file}\"; echo '\r\nSTART_OF_RECORD=1||||1||||' | cat - \"$file\" > temp && mv temp \"$file\"; echo '||||END_OF_RECORD\r\n' >> \"$file\"; mv -- \"${file}\" \"${file}.text\"; perl deid.pl \"${file%%.txt.text}\" deid-output.config; sed 's/\[\*\*.*\*\*\]//g' \"${file}.res\" > temp && mv temp \"${file}.res\"; done"
else:
cmd = "for file in *.txt; do echo '\r\nSTART_OF_RECORD=1||||1||||' | cat - \"$file\" > temp && mv temp \"$file\"; echo '||||END_OF_RECORD\r\n' >> \"$file\"; mv -- \"${file}\" \"${file}.text\"; perl deid.pl \"${file%%.txt.text}\" deid-output.config; sed 's/\[\*\*.*\*\*\]//g' \"${file}.res\" > temp && mv temp \"${file}.res\"; done"
os.chdir(folder)
subprocess.check_output(['bash', '-c', cmd])
os.system('rm -rf dict; rm -rf doc; rm -rf GSoutput; rm -rf GSstat; rm -rf lists')
    os.system(r'find . ! -name "*.res" -exec rm -r {} \;')
os.chdir(cwd)
# ---- datasets/__init__.py (repo: cogito233/text-autoaugment, license: MIT) ----
from .imdb import IMDB
from .sst5 import SST5
from .sst2 import SST2
from .trec import TREC
from .yelp2 import YELP2
from .yelp5 import YELP5
__all__ = ('IMDB', 'SST2', 'SST5', 'TREC', 'YELP2', 'YELP5')
def get_dataset(dataset_name, examples, tokenizer, text_transform=None):
dataset_name = dataset_name.lower()
datasets = {
'imdb': IMDB,
'sst2': SST2,
'sst5': SST5,
'trec': TREC,
'yelp2': YELP2,
'yelp5': YELP5,
}
dataset = datasets[dataset_name](examples, tokenizer, text_transform)
return dataset
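The lookup in `get_dataset` is a plain name-to-class dispatch table. A standalone sketch of the same pattern (the `Dummy*` classes below are placeholders, since the real dataset classes need tokenized examples):

```python
# Placeholder classes standing in for IMDB, SST2, etc.
class DummyIMDB:
    def __init__(self, examples, tokenizer, text_transform=None):
        self.examples = examples
        self.tokenizer = tokenizer
        self.text_transform = text_transform


class DummySST2(DummyIMDB):
    pass


def get_dummy_dataset(dataset_name, examples, tokenizer, text_transform=None):
    # Normalize the name, look up the class, and instantiate it.
    datasets = {'imdb': DummyIMDB, 'sst2': DummySST2}
    return datasets[dataset_name.lower()](examples, tokenizer, text_transform)


ds = get_dummy_dataset('SST2', examples=['good movie'], tokenizer=str.split)
print(type(ds).__name__)  # -> DummySST2
```

An unknown name raises `KeyError`, exactly as in the real `get_dataset` above.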
# ---- npc/gui/util.py (repo: Arent128/npc, license: MIT) ----
# Helpers common to the gui
from contextlib import contextmanager
from PyQt5 import QtWidgets
@contextmanager
def safe_command(command):
"""
Helper to suppress AttributeErrors from commands
Args:
command (callable): The command to run. Any AttributeError raised by
the command will be suppressed.
"""
try:
yield command
    except AttributeError:
        pass
def show_error(title, message, parent):
"""
Helper to show a modal error window
Args:
title (str): Title for the error window
message (str): Message text to display
        parent (object): Parent window for the modal. This window will be
            disabled while the modal is visible.
"""
QtWidgets.QMessageBox.warning(parent, title, message, QtWidgets.QMessageBox.Ok)
# ---- test/test_html.py (repo: dominickpastore/pymd4c, license: MIT) ----
# Based on spec_tests.py from
# https://github.com/commonmark/commonmark-spec/blob/master/test/spec_tests.py
# and
# https://github.com/github/cmark-gfm/blob/master/test/spec_tests.py
import sys
import os
import os.path
import re
import md4c
import md4c.domparser
import pytest
from normalize import normalize_html
extension_flags = {
'table': md4c.MD_FLAG_TABLES,
'urlautolink': md4c.MD_FLAG_PERMISSIVEURLAUTOLINKS,
'emailautolink': md4c.MD_FLAG_PERMISSIVEEMAILAUTOLINKS,
'wwwautolink': md4c.MD_FLAG_PERMISSIVEWWWAUTOLINKS,
'tasklist': md4c.MD_FLAG_TASKLISTS,
'strikethrough': md4c.MD_FLAG_STRIKETHROUGH,
'underline': md4c.MD_FLAG_UNDERLINE,
'wikilink': md4c.MD_FLAG_WIKILINKS,
'latexmath': md4c.MD_FLAG_LATEXMATHSPANS,
#TODO Add test cases for the rest of the flags
# (including combination flags)
}
def get_tests(specfile):
line_number = 0
start_line = 0
end_line = 0
example_number = 0
markdown_lines = []
html_lines = []
state = 0 # 0 regular text, 1 markdown example, 2 html output
extensions = []
headertext = ''
tests = []
header_re = re.compile('#+ ')
full_specfile = os.path.join(sys.path[0], 'spec', specfile)
with open(full_specfile, 'r', encoding='utf-8', newline='\n') as specf:
for line in specf:
line_number = line_number + 1
l = line.strip()
if l.startswith("`" * 32 + " example"):
state = 1
extensions = l[32 + len(" example"):].split()
elif l == "`" * 32:
state = 0
example_number = example_number + 1
end_line = line_number
md4c_version = None
for extension in extensions:
if extension.startswith('md4c-'):
md4c_version = extension
break
if md4c_version is not None:
extensions.remove(md4c_version)
md4c_version = md4c_version[5:]
if 'disabled' not in extensions:
tests.append({
"markdown":''.join(markdown_lines).replace('→',"\t"),
"html":''.join(html_lines).replace('→',"\t"),
"example": example_number,
"start_line": start_line,
"end_line": end_line,
"section": headertext,
"file": specfile,
"md4c_version": md4c_version,
"extensions": extensions})
start_line = 0
markdown_lines = []
html_lines = []
elif l == ".":
state = 2
elif state == 1:
if start_line == 0:
start_line = line_number - 1
markdown_lines.append(line)
elif state == 2:
html_lines.append(line)
elif state == 0 and re.match(header_re, line):
headertext = header_re.sub('', line).strip()
return tests
def collect_all_tests():
all_tests = []
specfiles = os.listdir(os.path.join(sys.path[0], 'spec'))
for specfile in specfiles:
all_tests.extend(get_tests(specfile))
return all_tests
def skip_if_older_version(running_version, test_version):
"""Skip the current test if the running version of MD4C is older than the
version required for the test
:param running_version: Running version of MD4C, e.g. "0.4.8"
:type running_version: str
:param test_version: Version of MD4C required for the test
:type test_version: str
"""
if running_version is None or test_version is None:
return
running_version = [int(x) for x in running_version.split('.')]
test_version = [int(x) for x in test_version.split('.')]
for r, t in zip(running_version, test_version):
if r < t:
pytest.skip()
for t in test_version[len(running_version):]:
if t > 0:
pytest.skip("Test requires newer MD4C")
@pytest.fixture
def md4c_version(pytestconfig):
return pytestconfig.getoption('--md4c-version')
@pytest.mark.parametrize(
'test_case', collect_all_tests(),
ids=lambda x: f'{x["file"]}:{x["start_line"]}-{x["section"]}')
def test_html_output(test_case, md4c_version):
"""Test HTMLRenderer with default render flags on the given example"""
skip_if_older_version(md4c_version, test_case['md4c_version'])
parser_flags = 0
for extension in test_case['extensions']:
parser_flags |= extension_flags[extension]
renderer = md4c.HTMLRenderer(parser_flags, 0)
output = renderer.parse(test_case['markdown'])
assert normalize_html(output) == normalize_html(test_case['html'], False)
@pytest.mark.parametrize(
'test_case', collect_all_tests(),
ids=lambda x: f'{x["file"]}:{x["start_line"]}-{x["section"]}')
def test_domparser_html(test_case, md4c_version):
"""Test that the output for DOMParser render() matches HTMLRenderer char
for char"""
skip_if_older_version(md4c_version, test_case['md4c_version'])
parser_flags = 0
for extension in test_case['extensions']:
parser_flags |= extension_flags[extension]
html_renderer = md4c.HTMLRenderer(parser_flags)
html_output = html_renderer.parse(test_case['markdown'])
dom_parser = md4c.domparser.DOMParser(parser_flags)
dom_output = dom_parser.parse(test_case['markdown']).render()
assert html_output == dom_output
#TODO Test keyword arguments for flags
#TODO Test HTML flags
#TODO Test mixing keyword arguments and traditional flags
# ---- fifth.py (repo: leephoter/coding-exam, license: MIT) ----
# abba
# foo bar bar foo
text1 = list(input())    # characters of the pattern, e.g. ['a', 'b', 'b', 'a']
text2 = input().split()  # words of the sentence, e.g. ['foo', 'bar', 'bar', 'foo']
print(text1)
print(text2)

# Encode both sequences as 1/0 so the two patterns can be compared directly:
# 'a' and 'foo' map to 1, everything else to 0.
for i in range(len(text1)):
    text1[i] = 1 if text1[i] == "a" else 0

for i in range(len(text2)):
    text2[i] = 1 if text2[i] == "foo" else 0

print(text1)
print(text2)

print(text1 == text2)
# ---- login_checks.py (repo: mhhoban/basic-blog, license: MIT) ----
"""
Methods for user login
"""
from cgi import escape
from google.appengine.ext import ndb
def login_fields_complete(post_data):
"""
validates that both login fields were filled in
:param post_data:
:return:
"""
try:
user_id = escape(post_data['user_id'], quote=True)
except KeyError:
user_id = False
try:
password = escape(post_data['password'], quote=True)
except KeyError:
password = False
if user_id and password:
return {'complete': True, 'user_id': user_id, 'password': password}
else:
return {'complete': False}
def valid_user_id_check(user_id):
"""
checks that user exists
:param user_id:
:return:
"""
user_key = ndb.Key('User', user_id)
user = user_key.get()
if user:
return True
else:
return False
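`cgi.escape`, imported at the top of this module, was removed in Python 3.8. A rough Python 3 port of the completeness check, using `html.escape` as a stand-in (its escaping differs slightly from `cgi.escape`'s):

```python
from html import escape


def login_fields_complete_py3(post_data):
    # Same shape as login_fields_complete above, minus the cgi dependency.
    try:
        user_id = escape(post_data['user_id'], quote=True)
    except KeyError:
        user_id = False
    try:
        password = escape(post_data['password'], quote=True)
    except KeyError:
        password = False
    if user_id and password:
        return {'complete': True, 'user_id': user_id, 'password': password}
    return {'complete': False}


print(login_fields_complete_py3({'user_id': 'bob', 'password': 'p<w'}))
# -> {'complete': True, 'user_id': 'bob', 'password': 'p&lt;w'}
```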
# ---- setup.py (repo: ANich/patois-stopwords, license: MIT) ----
from setuptools import setup, find_packages
setup(
name='patois-stop-words',
version='0.0.1',
description='A list of patois stop words.',
long_description=open('README.md').read(),
license='MIT',
author='Alexander Nicholson',
author_email='alexj.nich@hotmail.com',
url='https://github.com/ANich/patois-stop-words',
packages=find_packages(),
package_data={
'patois_stop_words': ['words.txt']
},
classifiers=[
'Intended Audience :: Developers',
'Programming Language :: Python :: 3',
],
keywords='patois'
)
# ---- gargantua/utils/elasticsearch.py (repo: Laisky/laisky-blog, license: Apache-2.0) ----
import json
def parse_search_resp(resp):
return [i['_source'] for i in json.loads(resp)['hits']['hits']]
def generate_keyword_search(keyword, field='post_content'):
query = {
"query": {
"match": {
field: keyword
}
}
}
return json.dumps(query)
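A quick round trip of the two helpers (re-declared so the snippet is self-contained; the response body is a hand-made minimal Elasticsearch hit envelope, for illustration only):

```python
import json


def parse_search_resp(resp):
    # Pull the stored documents out of the hits envelope.
    return [i['_source'] for i in json.loads(resp)['hits']['hits']]


def generate_keyword_search(keyword, field='post_content'):
    return json.dumps({"query": {"match": {field: keyword}}})


# Fabricated minimal response in the shape Elasticsearch returns.
fake_resp = json.dumps({'hits': {'hits': [{'_source': {'post_content': 'hello'}}]}})
print(parse_search_resp(fake_resp))      # -> [{'post_content': 'hello'}]
print(generate_keyword_search('hello'))  # match query against post_content
```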
# ---- HackerRank/Python_Learn/03_Strings/13_The_Minion_Game.py (repo: Zubieta/CPP, license: MIT) ----
# https://www.hackerrank.com/challenges/the-minion-game
def minion_game(string):
    string = string.lower()
    consonants = set("bcdfghjklmnpqrstvwxyz")
    vowels = set("aeiou")
    # Stuart gets 1 point for every non-distinct substring that starts
    # with a consonant, Kevin for every one that starts with a vowel.
    score_S, score_K = 0, 0
    length = len(string)
    for i in range(length):
        # No need to enumerate the substrings: once we know the starting
        # character, the number of substrings beginning there is simply
        # the number of characters from here to the end of the string.
        if string[i] in consonants:
            score_S += length - i
        if string[i] in vowels:
            score_K += length - i
    if score_S > score_K:
        print("Stuart %d" % score_S)
    elif score_S < score_K:
        print("Kevin %d" % score_K)
    else:
        print("Draw")
# ---- students/migrations/0010_institutionalemail_title_email.py (repo: estudeplus/perfil, license: MIT) ----
# Generated by Django 2.2.1 on 2019-06-30 00:31
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('students', '0009_auto_20190629_0125'),
]
operations = [
migrations.AddField(
model_name='institutionalemail',
name='title_email',
field=models.CharField(default='Assunto do email', editable=False, max_length=20),
),
]
# ---- ics/structures/secu_avb_settings.py (repo: intrepidcs/python_ics, license: Unlicense) ----
# This file was auto generated; Do not modify, if you value your sanity!
import ctypes
import enum
from ics.structures.can_settings import *
from ics.structures.canfd_settings import *
from ics.structures.s_text_api_settings import *
class flags(ctypes.Structure):
_pack_ = 2
_fields_ = [
('disableUsbCheckOnBoot', ctypes.c_uint32, 1),
('enableLatencyTest', ctypes.c_uint32, 1),
('reserved', ctypes.c_uint32, 30),
]
class secu_avb_settings(ctypes.Structure):
_pack_ = 2
_fields_ = [
('perf_en', ctypes.c_uint16),
('can1', CAN_SETTINGS),
('canfd1', CANFD_SETTINGS),
('can2', CAN_SETTINGS),
('canfd2', CANFD_SETTINGS),
('network_enables', ctypes.c_uint64),
('termination_enables', ctypes.c_uint64),
('pwr_man_timeout', ctypes.c_uint32),
('pwr_man_enable', ctypes.c_uint16),
('network_enabled_on_boot', ctypes.c_uint16),
('iso15765_separation_time_offset', ctypes.c_int16),
('text_api', STextAPISettings),
('flags', flags),
]
_neoECU_AVBSettings = secu_avb_settings
ECU_AVBSettings = secu_avb_settings
SECU_AVBSettings = secu_avb_settings
# ---- tests/configured_tests.py (repo: maxcountryman/flask-security, license: MIT) ----
# -*- coding: utf-8 -*-
from __future__ import with_statement
import base64
import time
import simplejson as json
from flask.ext.security.utils import capture_registrations, \
capture_reset_password_requests, capture_passwordless_login_requests
from flask.ext.security.forms import LoginForm, ConfirmRegisterForm, RegisterForm, \
ForgotPasswordForm, ResetPasswordForm, SendConfirmationForm, \
PasswordlessLoginForm
from flask.ext.security.forms import TextField, SubmitField, valid_user_email
from tests import SecurityTest
class ConfiguredPasswordHashSecurityTests(SecurityTest):
AUTH_CONFIG = {
'SECURITY_PASSWORD_HASH': 'bcrypt',
'SECURITY_PASSWORD_SALT': 'so-salty',
'USER_COUNT': 1
}
def test_authenticate(self):
r = self.authenticate(endpoint="/login")
self.assertIn('Home Page', r.data)
class ConfiguredSecurityTests(SecurityTest):
AUTH_CONFIG = {
'SECURITY_REGISTERABLE': True,
'SECURITY_LOGOUT_URL': '/custom_logout',
'SECURITY_LOGIN_URL': '/custom_login',
'SECURITY_POST_LOGIN_VIEW': '/post_login',
'SECURITY_POST_LOGOUT_VIEW': '/post_logout',
'SECURITY_POST_REGISTER_VIEW': '/post_register',
'SECURITY_UNAUTHORIZED_VIEW': '/unauthorized',
'SECURITY_DEFAULT_HTTP_AUTH_REALM': 'Custom Realm'
}
def test_login_view(self):
r = self._get('/custom_login')
self.assertIn("<h1>Login</h1>", r.data)
def test_authenticate(self):
r = self.authenticate(endpoint="/custom_login")
self.assertIn('Post Login', r.data)
def test_logout(self):
self.authenticate(endpoint="/custom_login")
r = self.logout(endpoint="/custom_logout")
self.assertIn('Post Logout', r.data)
def test_register_view(self):
r = self._get('/register')
self.assertIn('<h1>Register</h1>', r.data)
def test_register(self):
data = dict(email='dude@lp.com',
password='password',
password_confirm='password')
r = self._post('/register', data=data, follow_redirects=True)
self.assertIn('Post Register', r.data)
def test_register_with_next_querystring_argument(self):
data = dict(email='dude@lp.com',
password='password',
password_confirm='password')
r = self._post('/register?next=/page1', data=data, follow_redirects=True)
self.assertIn('Page 1', r.data)
def test_register_json(self):
data = '{ "email": "dude@lp.com", "password": "password", "csrf_token":"%s" }' % self.csrf_token
r = self._post('/register', data=data, content_type='application/json')
data = json.loads(r.data)
self.assertEquals(data['meta']['code'], 200)
def test_register_existing_email(self):
data = dict(email='matt@lp.com',
password='password',
password_confirm='password')
r = self._post('/register', data=data, follow_redirects=True)
msg = 'matt@lp.com is already associated with an account'
self.assertIn(msg, r.data)
def test_unauthorized(self):
self.authenticate("joe@lp.com", endpoint="/custom_auth")
r = self._get("/admin", follow_redirects=True)
msg = 'You are not allowed to access the requested resouce'
self.assertIn(msg, r.data)
def test_default_http_auth_realm(self):
r = self._get('/http', headers={
'Authorization': 'Basic ' + base64.b64encode("joe@lp.com:bogus")
})
self.assertIn('<h1>Unauthorized</h1>', r.data)
self.assertIn('WWW-Authenticate', r.headers)
self.assertEquals('Basic realm="Custom Realm"',
r.headers['WWW-Authenticate'])
class BadConfiguredSecurityTests(SecurityTest):
AUTH_CONFIG = {
'SECURITY_PASSWORD_HASH': 'bcrypt',
'USER_COUNT': 1
}
def test_bad_configuration_raises_runtimer_error(self):
self.assertRaises(RuntimeError, self.authenticate)
class DefaultTemplatePathTests(SecurityTest):
AUTH_CONFIG = {
'SECURITY_LOGIN_USER_TEMPLATE': 'custom_security/login_user.html',
}
def test_login_user_template(self):
r = self._get('/login')
self.assertIn('CUSTOM LOGIN USER', r.data)
class RegisterableTemplatePathTests(SecurityTest):
AUTH_CONFIG = {
'SECURITY_REGISTERABLE': True,
'SECURITY_REGISTER_USER_TEMPLATE': 'custom_security/register_user.html'
}
def test_register_user_template(self):
r = self._get('/register')
self.assertIn('CUSTOM REGISTER USER', r.data)
class RecoverableTemplatePathTests(SecurityTest):
AUTH_CONFIG = {
'SECURITY_RECOVERABLE': True,
'SECURITY_FORGOT_PASSWORD_TEMPLATE': 'custom_security/forgot_password.html',
'SECURITY_RESET_PASSWORD_TEMPLATE': 'custom_security/reset_password.html',
}
def test_forgot_password_template(self):
r = self._get('/reset')
self.assertIn('CUSTOM FORGOT PASSWORD', r.data)
def test_reset_password_template(self):
with capture_reset_password_requests() as requests:
r = self._post('/reset',
data=dict(email='joe@lp.com'),
follow_redirects=True)
t = requests[0]['token']
r = self._get('/reset/' + t)
self.assertIn('CUSTOM RESET PASSWORD', r.data)
class ConfirmableTemplatePathTests(SecurityTest):
AUTH_CONFIG = {
'SECURITY_CONFIRMABLE': True,
'SECURITY_SEND_CONFIRMATION_TEMPLATE': 'custom_security/send_confirmation.html'
}
def test_send_confirmation_template(self):
r = self._get('/confirm')
self.assertIn('CUSTOM SEND CONFIRMATION', r.data)
class PasswordlessTemplatePathTests(SecurityTest):
AUTH_CONFIG = {
'SECURITY_PASSWORDLESS': True,
'SECURITY_SEND_LOGIN_TEMPLATE': 'custom_security/send_login.html'
}
def test_send_login_template(self):
r = self._get('/login')
self.assertIn('CUSTOM SEND LOGIN', r.data)
class RegisterableTests(SecurityTest):
AUTH_CONFIG = {
'SECURITY_REGISTERABLE': True,
'USER_COUNT': 1
}
def test_register_valid_user(self):
data = dict(email='dude@lp.com',
password='password',
password_confirm='password')
self._post('/register', data=data, follow_redirects=True)
r = self.authenticate('dude@lp.com')
self.assertIn('Hello dude@lp.com', r.data)
class ConfirmableTests(SecurityTest):
AUTH_CONFIG = {
'SECURITY_CONFIRMABLE': True,
'SECURITY_REGISTERABLE': True,
'SECURITY_EMAIL_SUBJECT_REGISTER': 'Custom welcome subject',
'USER_COUNT': 1
}
def test_login_before_confirmation(self):
e = 'dude@lp.com'
self.register(e)
r = self.authenticate(email=e)
self.assertIn(self.get_message('CONFIRMATION_REQUIRED'), r.data)
def test_send_confirmation_of_already_confirmed_account(self):
e = 'dude@lp.com'
with capture_registrations() as registrations:
self.register(e)
token = registrations[0]['confirm_token']
self.client.get('/confirm/' + token, follow_redirects=True)
self.logout()
r = self._post('/confirm', data=dict(email=e))
self.assertIn(self.get_message('ALREADY_CONFIRMED'), r.data)
def test_register_sends_confirmation_email(self):
e = 'dude@lp.com'
with self.app.extensions['mail'].record_messages() as outbox:
self.register(e)
self.assertEqual(len(outbox), 1)
self.assertIn(e, outbox[0].html)
self.assertEqual('Custom welcome subject', outbox[0].subject)
def test_confirm_email(self):
e = 'dude@lp.com'
with capture_registrations() as registrations:
self.register(e)
token = registrations[0]['confirm_token']
r = self.client.get('/confirm/' + token, follow_redirects=True)
msg = self.app.config['SECURITY_MSG_EMAIL_CONFIRMED'][0]
self.assertIn(msg, r.data)
def test_invalid_token_when_confirming_email(self):
r = self.client.get('/confirm/bogus', follow_redirects=True)
msg = self.app.config['SECURITY_MSG_INVALID_CONFIRMATION_TOKEN'][0]
self.assertIn(msg, r.data)
def test_send_confirmation_json(self):
r = self._post('/confirm', data='{"email": "matt@lp.com"}',
content_type='application/json')
self.assertEquals(r.status_code, 200)
def test_send_confirmation_with_invalid_email(self):
r = self._post('/confirm', data=dict(email='bogus@bogus.com'))
msg = self.app.config['SECURITY_MSG_USER_DOES_NOT_EXIST'][0]
self.assertIn(msg, r.data)
def test_resend_confirmation(self):
e = 'dude@lp.com'
self.register(e)
r = self._post('/confirm', data={'email': e})
msg = self.get_message('CONFIRMATION_REQUEST', email=e)
self.assertIn(msg, r.data)
def test_user_deleted_before_confirmation(self):
e = 'dude@lp.com'
with capture_registrations() as registrations:
self.register(e)
user = registrations[0]['user']
token = registrations[0]['confirm_token']
with self.app.app_context():
from flask_security.core import _security
_security.datastore.delete(user)
_security.datastore.commit()
r = self.client.get('/confirm/' + token, follow_redirects=True)
msg = self.app.config['SECURITY_MSG_INVALID_CONFIRMATION_TOKEN'][0]
self.assertIn(msg, r.data)
class ExpiredConfirmationTest(SecurityTest):
AUTH_CONFIG = {
'SECURITY_CONFIRMABLE': True,
'SECURITY_REGISTERABLE': True,
'SECURITY_CONFIRM_EMAIL_WITHIN': '1 milliseconds',
'USER_COUNT': 1
}
def test_expired_confirmation_token_sends_email(self):
e = 'dude@lp.com'
with capture_registrations() as registrations:
self.register(e)
token = registrations[0]['confirm_token']
time.sleep(1.25)
with self.app.extensions['mail'].record_messages() as outbox:
r = self.client.get('/confirm/' + token, follow_redirects=True)
self.assertEqual(len(outbox), 1)
self.assertNotIn(token, outbox[0].html)
expire_text = self.AUTH_CONFIG['SECURITY_CONFIRM_EMAIL_WITHIN']
msg = self.app.config['SECURITY_MSG_CONFIRMATION_EXPIRED'][0]
msg = msg % dict(within=expire_text, email=e)
self.assertIn(msg, r.data)
class LoginWithoutImmediateConfirmTests(SecurityTest):
AUTH_CONFIG = {
'SECURITY_CONFIRMABLE': True,
'SECURITY_REGISTERABLE': True,
'SECURITY_LOGIN_WITHOUT_CONFIRMATION': True,
'USER_COUNT': 1
}
def test_register_valid_user_automatically_signs_in(self):
e = 'dude@lp.com'
p = 'password'
data = dict(email=e, password=p, password_confirm=p)
r = self._post('/register', data=data, follow_redirects=True)
self.assertIn(e, r.data)
class RecoverableTests(SecurityTest):
AUTH_CONFIG = {
'SECURITY_RECOVERABLE': True,
'SECURITY_RESET_PASSWORD_ERROR_VIEW': '/',
'SECURITY_POST_FORGOT_VIEW': '/'
}
def test_reset_view(self):
with capture_reset_password_requests() as requests:
r = self._post('/reset',
data=dict(email='joe@lp.com'),
follow_redirects=True)
t = requests[0]['token']
r = self._get('/reset/' + t)
self.assertIn('<h1>Reset password</h1>', r.data)
def test_forgot_post_sends_email(self):
with capture_reset_password_requests():
with self.app.extensions['mail'].record_messages() as outbox:
self._post('/reset', data=dict(email='joe@lp.com'))
self.assertEqual(len(outbox), 1)
def test_forgot_password_json(self):
r = self._post('/reset', data='{"email": "matt@lp.com"}',
content_type="application/json")
self.assertEquals(r.status_code, 200)
def test_forgot_password_invalid_email(self):
r = self._post('/reset',
data=dict(email='larry@lp.com'),
follow_redirects=True)
self.assertIn("Specified user does not exist", r.data)
def test_reset_password_with_valid_token(self):
with capture_reset_password_requests() as requests:
r = self._post('/reset',
data=dict(email='joe@lp.com'),
follow_redirects=True)
t = requests[0]['token']
r = self._post('/reset/' + t, data={
'password': 'newpassword',
'password_confirm': 'newpassword'
}, follow_redirects=True)
r = self.logout()
r = self.authenticate('joe@lp.com', 'newpassword')
self.assertIn('Hello joe@lp.com', r.data)
def test_reset_password_with_invalid_token(self):
r = self._post('/reset/bogus', data={
'password': 'newpassword',
'password_confirm': 'newpassword'
}, follow_redirects=True)
self.assertIn(self.get_message('INVALID_RESET_PASSWORD_TOKEN'), r.data)
class ExpiredResetPasswordTest(SecurityTest):
AUTH_CONFIG = {
'SECURITY_RECOVERABLE': True,
'SECURITY_RESET_PASSWORD_WITHIN': '1 milliseconds'
}
def test_reset_password_with_expired_token(self):
with capture_reset_password_requests() as requests:
r = self._post('/reset', data=dict(email='joe@lp.com'),
follow_redirects=True)
t = requests[0]['token']
time.sleep(1)
r = self._post('/reset/' + t, data={
'password': 'newpassword',
'password_confirm': 'newpassword'
}, follow_redirects=True)
self.assertIn('You did not reset your password within', r.data)
class ChangePasswordTest(SecurityTest):
AUTH_CONFIG = {
'SECURITY_RECOVERABLE': True,
'SECURITY_CHANGEABLE': True,
}
def test_change_password(self):
self.authenticate()
r = self.client.get('/change', follow_redirects=True)
self.assertIn('Change password', r.data)
def test_change_password_invalid(self):
self.authenticate()
r = self._post('/change', data={
'password': 'notpassword',
'new_password': 'newpassword',
'new_password_confirm': 'newpassword'
}, follow_redirects=True)
self.assertNotIn('You successfully changed your password', r.data)
self.assertIn('Invalid password', r.data)
def test_change_password_mismatch(self):
self.authenticate()
r = self._post('/change', data={
'password': 'password',
'new_password': 'newpassword',
'new_password_confirm': 'notnewpassword'
}, follow_redirects=True)
self.assertNotIn('You successfully changed your password', r.data)
self.assertIn('Passwords do not match', r.data)
def test_change_password_bad_password(self):
self.authenticate()
r = self._post('/change', data={
'password': 'password',
'new_password': 'a',
'new_password_confirm': 'a'
}, follow_redirects=True)
self.assertNotIn('You successfully changed your password', r.data)
self.assertIn('Field must be between', r.data)
def test_change_password_success(self):
self.authenticate()
with self.app.extensions['mail'].record_messages() as outbox:
r = self._post('/change', data={
'password': 'password',
'new_password': 'newpassword',
'new_password_confirm': 'newpassword'
}, follow_redirects=True)
self.assertIn('You successfully changed your password', r.data)
self.assertIn('Home Page', r.data)
self.assertEqual(len(outbox), 1)
self.assertIn("Your password has been changed", outbox[0].html)
self.assertIn("/reset", outbox[0].html)
class ChangePasswordPostViewTest(SecurityTest):
AUTH_CONFIG = {
'SECURITY_CHANGEABLE': True,
'SECURITY_POST_CHANGE_VIEW': '/profile',
}
def test_change_password_success(self):
self.authenticate()
r = self._post('/change', data={
'password': 'password',
'new_password': 'newpassword',
'new_password_confirm': 'newpassword'
}, follow_redirects=True)
self.assertIn('Profile Page', r.data)
class ChangePasswordDisabledTest(SecurityTest):
AUTH_CONFIG = {
'SECURITY_CHANGEABLE': False,
}
def test_change_password_endpoint_is_404(self):
self.authenticate()
r = self.client.get('/change', follow_redirects=True)
self.assertEqual(404, r.status_code)
class TrackableTests(SecurityTest):
AUTH_CONFIG = {
'SECURITY_TRACKABLE': True,
'USER_COUNT': 1
}
def test_did_track(self):
e = 'matt@lp.com'
self.authenticate(email=e)
self.logout()
self.authenticate(email=e)
with self.app.test_request_context('/profile'):
user = self.app.security.datastore.find_user(email=e)
self.assertIsNotNone(user.last_login_at)
self.assertIsNotNone(user.current_login_at)
self.assertEquals('untrackable', user.last_login_ip)
self.assertEquals('untrackable', user.current_login_ip)
self.assertEquals(2, user.login_count)
class PasswordlessTests(SecurityTest):
AUTH_CONFIG = {
'SECURITY_PASSWORDLESS': True
}
def test_login_request_for_inactive_user(self):
msg = self.app.config['SECURITY_MSG_DISABLED_ACCOUNT'][0]
r = self._post('/login', data=dict(email='tiya@lp.com'),
follow_redirects=True)
self.assertIn(msg, r.data)
def test_request_login_token_with_json_and_valid_email(self):
data = '{"email": "matt@lp.com", "password": "password", "csrf_token":"%s"}' % self.csrf_token
r = self._post('/login', data=data, content_type='application/json')
self.assertEquals(r.status_code, 200)
self.assertNotIn('error', r.data)
def test_request_login_token_with_json_and_invalid_email(self):
data = '{"email": "nobody@lp.com", "password": "password"}'
r = self._post('/login', data=data, content_type='application/json')
self.assertIn('errors', r.data)
def test_request_login_token_sends_email_and_can_login(self):
e = 'matt@lp.com'
r, user, token = None, None, None
with capture_passwordless_login_requests() as requests:
with self.app.extensions['mail'].record_messages() as outbox:
r = self._post('/login', data=dict(email=e),
follow_redirects=True)
self.assertEqual(len(outbox), 1)
self.assertEquals(1, len(requests))
self.assertIn('user', requests[0])
self.assertIn('login_token', requests[0])
user = requests[0]['user']
token = requests[0]['login_token']
msg = self.app.config['SECURITY_MSG_LOGIN_EMAIL_SENT'][0]
msg = msg % dict(email=user.email)
self.assertIn(msg, r.data)
r = self.client.get('/login/' + token, follow_redirects=True)
msg = self.get_message('PASSWORDLESS_LOGIN_SUCCESSFUL')
self.assertIn(msg, r.data)
r = self.client.get('/profile')
self.assertIn('Profile Page', r.data)
def test_invalid_login_token(self):
msg = self.app.config['SECURITY_MSG_INVALID_LOGIN_TOKEN'][0]
r = self._get('/login/bogus', follow_redirects=True)
self.assertIn(msg, r.data)
def test_token_login_when_already_authenticated(self):
with capture_passwordless_login_requests() as requests:
self._post('/login', data=dict(email='matt@lp.com'),
follow_redirects=True)
token = requests[0]['login_token']
r = self.client.get('/login/' + token, follow_redirects=True)
msg = self.get_message('PASSWORDLESS_LOGIN_SUCCESSFUL')
self.assertIn(msg, r.data)
r = self.client.get('/login/' + token, follow_redirects=True)
msg = self.get_message('PASSWORDLESS_LOGIN_SUCCESSFUL')
self.assertNotIn(msg, r.data)
def test_send_login_with_invalid_email(self):
r = self._post('/login', data=dict(email='bogus@bogus.com'))
self.assertIn('Specified user does not exist', r.data)
class ExpiredLoginTokenTests(SecurityTest):
AUTH_CONFIG = {
'SECURITY_PASSWORDLESS': True,
'SECURITY_LOGIN_WITHIN': '1 milliseconds',
'USER_COUNT': 1
}
def test_expired_login_token_sends_email(self):
e = 'matt@lp.com'
with capture_passwordless_login_requests() as requests:
self._post('/login', data=dict(email=e), follow_redirects=True)
token = requests[0]['login_token']
time.sleep(1.25)
with self.app.extensions['mail'].record_messages() as outbox:
r = self.client.get('/login/' + token, follow_redirects=True)
expire_text = self.AUTH_CONFIG['SECURITY_LOGIN_WITHIN']
msg = self.app.config['SECURITY_MSG_LOGIN_EXPIRED'][0]
msg = msg % dict(within=expire_text, email=e)
self.assertIn(msg, r.data)
self.assertEqual(len(outbox), 1)
self.assertIn(e, outbox[0].html)
self.assertNotIn(token, outbox[0].html)
class AsyncMailTaskTests(SecurityTest):
AUTH_CONFIG = {
'SECURITY_RECOVERABLE': True,
'USER_COUNT': 1
}
def setUp(self):
super(AsyncMailTaskTests, self).setUp()
self.mail_sent = False
def test_send_email_task_is_called(self):
@self.app.security.send_mail_task
def send_email(msg):
self.mail_sent = True
self._post('/reset', data=dict(email='matt@lp.com'))
self.assertTrue(self.mail_sent)
class NoBlueprintTests(SecurityTest):
APP_KWARGS = {
'register_blueprint': False,
}
AUTH_CONFIG = {
'USER_COUNT': 1
}
def test_login_endpoint_is_404(self):
r = self._get('/login')
self.assertEqual(404, r.status_code)
def test_http_auth_without_blueprint(self):
auth = 'Basic ' + base64.b64encode("matt@lp.com:password")
r = self._get('/http', headers={'Authorization': auth})
self.assertIn('HTTP Authentication', r.data)
class ExtendFormsTest(SecurityTest):
class MyLoginForm(LoginForm):
email = TextField('My Login Email Address Field')
class MyRegisterForm(RegisterForm):
email = TextField('My Register Email Address Field')
APP_KWARGS = {
'login_form': MyLoginForm,
'register_form': MyRegisterForm,
}
AUTH_CONFIG = {
'SECURITY_CONFIRMABLE': False,
'SECURITY_REGISTERABLE': True,
}
def test_login_view(self):
r = self._get('/login', follow_redirects=True)
self.assertIn("My Login Email Address Field", r.data)
def test_register(self):
r = self._get('/register', follow_redirects=True)
self.assertIn("My Register Email Address Field", r.data)
class RecoverableExtendFormsTest(SecurityTest):
class MyForgotPasswordForm(ForgotPasswordForm):
email = TextField('My Forgot Password Email Address Field',
validators=[valid_user_email])
class MyResetPasswordForm(ResetPasswordForm):
submit = SubmitField("My Reset Password Submit Field")
APP_KWARGS = {
'forgot_password_form': MyForgotPasswordForm,
'reset_password_form': MyResetPasswordForm,
}
AUTH_CONFIG = {
'SECURITY_RECOVERABLE': True,
}
def test_forgot_password(self):
r = self._get('/reset', follow_redirects=True)
self.assertIn("My Forgot Password Email Address Field", r.data)
def test_reset_password(self):
with capture_reset_password_requests() as requests:
self._post('/reset', data=dict(email='joe@lp.com'),
follow_redirects=True)
token = requests[0]['token']
r = self._get('/reset/' + token)
self.assertIn("My Reset Password Submit Field", r.data)
class PasswordlessExtendFormsTest(SecurityTest):
class MyPasswordlessLoginForm(PasswordlessLoginForm):
email = TextField('My Passwordless Login Email Address Field')
APP_KWARGS = {
'passwordless_login_form': MyPasswordlessLoginForm,
}
AUTH_CONFIG = {
'SECURITY_PASSWORDLESS': True,
}
def test_passwordless_login(self):
r = self._get('/login', follow_redirects=True)
self.assertIn("My Passwordless Login Email Address Field", r.data)
class ConfirmableExtendFormsTest(SecurityTest):
class MyConfirmRegisterForm(ConfirmRegisterForm):
email = TextField('My Confirm Register Email Address Field')
class MySendConfirmationForm(SendConfirmationForm):
email = TextField('My Send Confirmation Email Address Field')
APP_KWARGS = {
'confirm_register_form': MyConfirmRegisterForm,
'send_confirmation_form': MySendConfirmationForm,
}
AUTH_CONFIG = {
'SECURITY_CONFIRMABLE': True,
'SECURITY_REGISTERABLE': True,
}
def test_register(self):
r = self._get('/register', follow_redirects=True)
self.assertIn("My Confirm Register Email Address Field", r.data)
def test_send_confirmation(self):
r = self._get('/confirm', follow_redirects=True)
self.assertIn("My Send Confirmation Email Address Field", r.data)
| 33.407928 | 104 | 0.633952 | 2,909 | 26,125 | 5.462702 | 0.095222 | 0.021081 | 0.050217 | 0.023409 | 0.637153 | 0.578252 | 0.49097 | 0.395884 | 0.364043 | 0.313511 | 0 | 0.005332 | 0.246201 | 26,125 | 781 | 105 | 33.450704 | 0.801605 | 0.000804 | 0 | 0.455017 | 0 | 0.00346 | 0.221562 | 0.063408 | 0 | 0 | 0 | 0 | 0.157439 | 1 | 0.112457 | false | 0.193772 | 0.015571 | 0 | 0.238754 | 0.00519 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
# Global and Local Inversions/Solution.py
# source: chandrikadeb7/Awesome-LeetCode-Python (MIT)
from typing import List


class Solution:
def isIdealPermutation(self, A: List[int]) -> bool:
n = len(A)
g = local = 0
for i in range(1, n):
if A[i] < A[i-1]:
local += 1
if A[i] < i:
diff = i - A[i]
g += diff * (diff+1) // 2
return g == local
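A quick, standalone sanity check of the logic above (not part of the original file — the method is restated as a free function so it can run on its own):

```python
from typing import List


def is_ideal_permutation(A: List[int]) -> bool:
    # Mirrors Solution.isIdealPermutation: count local inversions directly,
    # and accumulate a lower bound on global inversions from how far each
    # element sits to the left of its index.
    g = local = 0
    for i in range(1, len(A)):
        if A[i] < A[i - 1]:
            local += 1
        if A[i] < i:
            diff = i - A[i]
            g += diff * (diff + 1) // 2
    return g == local


assert is_ideal_permutation([1, 0, 2]) is True   # the single inversion is both local and global
assert is_ideal_permutation([1, 2, 0]) is False  # two global inversions, only one local
```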
# encoding: utf-8
# queryable_properties/properties/common.py
# source: W1ldPo1nter/django-queryable-properties (BSD-3-Clause)
import operator
import six
from django.db.models import BooleanField, Field, Q
from ..utils.internal import MISSING_OBJECT, ModelAttributeGetter, QueryPath
from .base import QueryableProperty
from .mixins import AnnotationGetterMixin, AnnotationMixin, boolean_filter, LookupFilterMixin
class BooleanMixin(LookupFilterMixin):
"""
Internal mixin class for common properties that return boolean values,
which is intended to be used in conjunction with one of the annotation
mixins.
"""
filter_requires_annotation = False
def _get_condition(self, cls): # pragma: no cover
"""
Build the query filter condition for this boolean property, which is
used for both the filter and the annotation implementation.
:param type cls: The model class of which a queryset should be filtered
or annotated.
:return: The filter condition for this property.
:rtype: django.db.models.Q
"""
raise NotImplementedError()
@boolean_filter
def get_exact_filter(self, cls):
return self._get_condition(cls)
def get_annotation(self, cls):
from django.db.models import Case, When
return Case(
When(self._get_condition(cls), then=True),
default=False,
output_field=BooleanField()
)
class ValueCheckProperty(BooleanMixin, AnnotationMixin, QueryableProperty):
"""
A property that checks if an attribute of a model instance or a related
object contains a certain value or one of multiple specified values and
returns a corresponding boolean value.
Supports queryset filtering and ``CASE``/``WHEN``-based annotating.
"""
def __init__(self, attribute_path, *values, **kwargs):
"""
Initialize a new property that checks for certain field values.
:param str attribute_path: The name of the attribute to compare
against. May also be a more complex path to
a related attribute using dot-notation (like
with :func:`operator.attrgetter`). If an
intermediate value on the path is None, it
will be treated as "no match" instead of
raising an exception. The behavior is the
same if an intermediate value raises an
ObjectDoesNotExist error.
:param values: The value(s) to check for.
"""
self.attribute_getter = ModelAttributeGetter(attribute_path)
self.values = values
super(ValueCheckProperty, self).__init__(**kwargs)
def get_value(self, obj):
return self.attribute_getter.get_value(obj) in self.values
def _get_condition(self, cls):
return self.attribute_getter.build_filter('in', self.values)
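`ModelAttributeGetter` is defined elsewhere in the package; as a rough, framework-free sketch of the None-tolerant dotted-path lookup that `get_value` relies on (`safe_attr` below is a hypothetical stand-in for illustration, not the library's implementation):

```python
def safe_attr(obj, path):
    # Walk a dotted attribute path like operator.attrgetter would, but treat
    # a None intermediate as "no value" instead of raising AttributeError.
    for name in path.split('.'):
        if obj is None:
            return None
        obj = getattr(obj, name)
    return obj


class Group(object):
    def __init__(self, name):
        self.name = name


class Member(object):
    def __init__(self, group):
        self.group = group


# A ValueCheckProperty-style membership test on the resolved value:
assert safe_attr(Member(Group('admins')), 'group.name') in ('admins', 'staff')
assert safe_attr(Member(None), 'group.name') is None  # broken relation: no match, no exception
```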
class RangeCheckProperty(BooleanMixin, AnnotationMixin, QueryableProperty):
"""
A property that checks if a static or dynamic value is contained in a range
expressed by two field values and returns a corresponding boolean value.
Supports queryset filtering and ``CASE``/``WHEN``-based annotating.
"""
def __init__(self, min_attribute_path, max_attribute_path, value, include_boundaries=True, in_range=True,
include_missing=False, **kwargs):
"""
Initialize a new property that checks if a value is contained in a
range expressed by two field values.
:param str min_attribute_path: The name of the attribute to get the
lower boundary from. May also be a more
complex path to a related attribute
using dot-notation (like with
:func:`operator.attrgetter`). If an
intermediate value on the path is None,
it will be treated as a missing value
instead of raising an exception. The
behavior is the same if an intermediate
value raises an ``ObjectDoesNotExist``
error.
:param str max_attribute_path: The name of the attribute to get the
upper boundary from. The same behavior
as for the lower boundary applies.
:param value: The value which is tested against the boundary. May be a
callable which can be called without any arguments, whose
return value will then be used as the test value.
:param bool include_boundaries: Whether or not the value is considered
a part of the range if it is exactly
equal to one of the boundaries.
:param bool in_range: Configures whether the property should return
``True`` if the value is in range
(``in_range=True``) or if it is out of the range
(``in_range=False``). This also affects the
impact of the ``include_boundaries`` and
``include_missing`` parameters.
:param bool include_missing: Whether or not a missing value is
considered a part of the range (see the
description of ``min_attribute_path``).
Useful e.g. for nullable fields.
"""
self.min_attribute_getter = ModelAttributeGetter(min_attribute_path)
self.max_attribute_getter = ModelAttributeGetter(max_attribute_path)
self.value = value
self.include_boundaries = include_boundaries
self.in_range = in_range
self.include_missing = include_missing
super(RangeCheckProperty, self).__init__(**kwargs)
@property
def final_value(self):
value = self.value
if callable(value):
value = value()
return value
def get_value(self, obj):
value = self.final_value
min_value = self.min_attribute_getter.get_value(obj)
max_value = self.max_attribute_getter.get_value(obj)
lower_operator = operator.le if self.include_boundaries else operator.lt
greater_operator = operator.ge if self.include_boundaries else operator.gt
contained = self.include_missing if min_value in (None, MISSING_OBJECT) else greater_operator(value, min_value)
contained &= self.include_missing if max_value in (None, MISSING_OBJECT) else lower_operator(value, max_value)
return not (contained ^ self.in_range)
def _get_condition(self, cls):
value = self.final_value
lower_condition = self.min_attribute_getter.build_filter('lte' if self.include_boundaries else 'lt', value)
upper_condition = self.max_attribute_getter.build_filter('gte' if self.include_boundaries else 'gt', value)
if self.include_missing:
lower_condition |= self.min_attribute_getter.build_filter('isnull', True)
upper_condition |= self.max_attribute_getter.build_filter('isnull', True)
if not self.in_range:
return ~lower_condition | ~upper_condition
return lower_condition & upper_condition
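The getter above selects its comparison operators from `include_boundaries` and flips the result via `in_range`. A condensed, framework-free sketch of that truth logic for present (non-missing) boundaries — `value_in_range` is a hypothetical helper for illustration only:

```python
import operator


def value_in_range(value, lo, hi, include_boundaries=True, invert=False):
    # Same operator selection as RangeCheckProperty.get_value.
    lower = operator.le if include_boundaries else operator.lt
    greater = operator.ge if include_boundaries else operator.gt
    contained = greater(value, lo) and lower(value, hi)
    return contained != invert  # invert corresponds to in_range=False


assert value_in_range(5, 1, 10) is True
assert value_in_range(10, 1, 10, include_boundaries=False) is False  # boundary excluded
assert value_in_range(0, 1, 10, invert=True) is True                 # "out of range" check
```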
class RelatedExistenceCheckProperty(BooleanMixin, AnnotationGetterMixin, QueryableProperty):
"""
A property that checks whether related objects to the one that uses the
property exist in the database and returns a corresponding boolean value.
Supports queryset filtering and ``CASE``/``WHEN``-based annotating.
"""
def __init__(self, relation_path, **kwargs):
"""
Initialize a new property that checks for the existence of related
objects.
:param str relation_path: The path to the object/field whose existence
is to be checked. May contain the lookup
separator (``__``) to check for more remote
relations.
"""
super(RelatedExistenceCheckProperty, self).__init__(**kwargs)
self.filter = (QueryPath(relation_path) + 'isnull').build_filter(False)
def get_value(self, obj):
return self.get_queryset_for_object(obj).filter(self.filter).exists()
def _get_condition(self, cls):
# Perform the filtering via a subquery to avoid any side-effects that may be introduced by JOINs.
subquery = self.get_queryset(cls).filter(self.filter)
return Q(pk__in=subquery)
class MappingProperty(AnnotationMixin, QueryableProperty):
"""
A property that translates values of an attribute into other values using
defined mappings.
"""
# Copy over Django's implementation to forcibly evaluate a lazy value.
_force_value = six.get_unbound_function(Field.get_prep_value)
def __init__(self, attribute_path, output_field, mappings, default=None, **kwargs):
"""
Initialize a property that maps values from an attribute to other
values.
:param str attribute_path: The name of the attribute to compare
against. May also be a more complex path to
a related attribute using dot-notation (like
with :func:`operator.attrgetter`). If an
intermediate value on the path is None, it
will be treated as "no match" instead of
raising an exception. The behavior is the
same if an intermediate value raises an
``ObjectDoesNotExist`` error.
:param django.db.models.Field output_field: The field to represent the
mapped values in querysets.
:param mappings: An iterable containing 2-tuples that represent the
mappings to use (the first value of each tuple is
mapped to the second value).
:type mappings: collections.Iterable[(object, object)]
        :param default: A default value to return/use in querysets in case
none of the mappings match an encountered value.
Defaults to None.
"""
super(MappingProperty, self).__init__(**kwargs)
self.attribute_getter = ModelAttributeGetter(attribute_path)
self.output_field = output_field
self.mappings = mappings
self.default = default
def get_value(self, obj):
        attribute_value = self.attribute_getter.get_value(obj)
        for from_value, to_value in self.mappings:
            if attribute_value == from_value:
return self._force_value(to_value)
return self._force_value(self.default)
def get_annotation(self, cls):
from django.db.models import Case, Value, When
cases = (When(self.attribute_getter.build_filter('exact', from_value), then=Value(self._force_value(to_value)))
for from_value, to_value in self.mappings)
return Case(*cases, default=Value(self._force_value(self.default)), output_field=self.output_field)
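Stripped of the output-field coercion, the getter above is simply a first-match scan with a fallback; a plain-Python sketch (`map_value` is illustrative, not part of the library):

```python
def map_value(value, mappings, default=None):
    # First matching 2-tuple wins, mirroring MappingProperty.get_value
    # (minus the _force_value coercion through the output field).
    for from_value, to_value in mappings:
        if value == from_value:
            return to_value
    return default


STATES = [(0, 'draft'), (1, 'published')]
assert map_value(1, STATES) == 'published'
assert map_value(99, STATES, default='unknown') == 'unknown'
```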
class AnnotationProperty(AnnotationGetterMixin, QueryableProperty):
"""
A property that is based on a static annotation that is even used to
provide getter values.
"""
def __init__(self, annotation, **kwargs):
"""
Initialize a new property that gets its value by retrieving an
annotated value from the database.
:param annotation: The static annotation to use to determine the value
of this property.
"""
super(AnnotationProperty, self).__init__(**kwargs)
self.annotation = annotation
def get_annotation(self, cls):
return self.annotation
class AggregateProperty(AnnotationProperty):
"""
A property that is based on an aggregate that is used to provide both
queryset annotations as well as getter values.
"""
def __init__(self, aggregate, **kwargs):
"""
Initialize a new property that gets its value by retrieving an
aggregated value from the database.
:param django.db.models.Aggregate aggregate: The aggregate to use to
determine the value of
this property.
"""
super(AggregateProperty, self).__init__(aggregate, **kwargs)
def get_value(self, obj):
return self.get_queryset_for_object(obj).aggregate(**{self.name: self.annotation})[self.name]
# sookie.py
# source: anygard/sookie (MIT)
""" Sookie is a waiter: it waits for a socket to be listening, then it moves on.
Usage:
sookie <socket> [--timeout=<to>] [--retry=<rt>] [--logsocket=<ls>] [--logfacility=<lf>] [--loglevel=<ll>]
sookie -h | --help
sookie --version
Options:
-h --help Show this screen
--version Show version
    --timeout=<to>      Timeout in seconds [default: 1800]
--retry=<rt> Interval between retries in seconds [default: 20]
--logsocket=<ls> Socket to send syslog messages to, only logging to local syslog if omitted.
--logfacility=<lf> The syslog facility to use for logging [default: user]
--loglevel=<ll> The syslog severity level to use, i.e the verbosity level [default: info]
<socket> Socket to wait for, 'host:port'
Sookie is intended to be a simple way of providing some measure of management of
inter-server dependencies in complex environments. All it does is wait for a
socket to start listening for connections, then it exits. It is supposed to be
used as a "smart" sleep in a startup script.
Sookie logs to syslog, and optionally to a remote syslog server as well. Level
and facility values can be taken from syslog(1).
Sookie Stackhouse is a waitress.
exit codes:
0: ok, the server answered
1: waited until timeout
2: invalid syntax
"""
import docopt
import logging
import logging.handlers
import os
import socket
import sys
import time
def main(args):
    if args['--logsocket']:
        # SysLogHandler expects a (host, int-port) tuple, so convert the port
        loghost, logport = args['--logsocket'].split(':')
        logserver = (loghost, int(logport))
    else:
        logserver = None
logfacility = args['--logfacility']
loglevel = args['--loglevel']
logger = logging.getLogger(os.path.basename(__file__))
localsyslog = logging.handlers.SysLogHandler()
if logserver:
remotesyslog = logging.handlers.SysLogHandler(
address=logserver,
facility=logging.handlers.SysLogHandler.facility_names[logfacility]
)
try:
localsyslog.setLevel(logging.handlers.SysLogHandler.priority_names[loglevel])
if logserver:
remotesyslog.setLevel(logging.handlers.SysLogHandler.priority_names[loglevel])
except KeyError:
print "Invalid argument to %s (%s)" % ('--loglevel', args['--loglevel'])
sys.exit(2)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
localsyslog.setFormatter(formatter)
if logserver:
remotesyslog.setFormatter(formatter)
logger.addHandler(localsyslog)
if logserver:
logger.addHandler(remotesyslog)
logger.info('%s Starting' % __file__)
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
option = '--timeout'
timeout = int(args[option])
option = '--retry'
interval = int(args[option])
except ValueError:
print "Invalid argument to %s (%s)" % (option, args[option])
sys.exit(2)
    try:
        # socket.connect needs an integer port, not the raw string from the CLI
        host, port = args['<socket>'].split(':')
        server = (host, int(port))
    except ValueError:
        print "Invalid socket: %s" % args['<socket>']
        sys.exit(2)
timeout_time = time.time() + timeout
is_timeout = False
logger.debug('now: %d, timeout: %d, timeout_time: %d)' % (time.time(), timeout, timeout_time))
while True:
t = time.time()
if t >= timeout_time:
is_timeout = True
break
try:
sock.connect(server)
logger.info('Connect')
print server
logger.debug('%ds to spare' % int(timeout_time-t))
break
        except socket.error:
            # a failed connect can leave the socket unusable; make a fresh one
            sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            logger.debug('Waiting %d more seconds' % interval)
            time.sleep(interval)
except TypeError, E:
print E
print "Invalid socket: %s" % args['<socket>']
sys.exit(2)
logger.info('%s Ending' % __file__)
exitcode = 1 if is_timeout else 0
logger.debug('exitcode: %d' % exitcode)
sys.exit(exitcode)
if __name__ == '__main__':
args = docopt.docopt(__doc__, version='0.1')
main(args)
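The heart of sookie — polling a TCP endpoint until it accepts a connection or a deadline expires — can be sketched in Python 3 without the CLI and syslog plumbing. `wait_for_socket` is a name invented for this sketch, not part of sookie:

```python
import socket
import time


def wait_for_socket(host, port, timeout=1800, interval=20):
    """Return True once host:port accepts a TCP connection, False on timeout."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        # use a fresh socket per attempt: a failed connect leaves one unusable
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            try:
                sock.connect((host, int(port)))
                return True
            except OSError:
                time.sleep(min(interval, max(0, deadline - time.time())))
    return False
```

A caller would map sookie's exit codes onto the boolean: `sys.exit(0 if wait_for_socket(h, p) else 1)`.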
b8bd59b6d2fd731f6b088f01ce1a174d704adcae | 7,568 | py | Python | tests/test_mongoengine_dsl.py | StoneMoe/mongoengine_dsl | 310d77c30e77ba1f695b3d644737fcfc3c2ab304 | [
"MIT"
] | 3 | 2021-08-25T02:08:34.000Z | 2022-03-23T08:32:09.000Z | tests/test_mongoengine_dsl.py | StoneMoe/mongoengine_dsl | 310d77c30e77ba1f695b3d644737fcfc3c2ab304 | [
"MIT"
] | 1 | 2021-08-24T09:41:11.000Z | 2021-08-24T10:02:43.000Z | tests/test_mongoengine_dsl.py | StoneMoe/mongoengine_dsl | 310d77c30e77ba1f695b3d644737fcfc3c2ab304 | [
"MIT"
] | 1 | 2021-08-24T14:25:28.000Z | 2021-08-24T14:25:28.000Z |
#!/usr/bin/env python
import unittest
from mongoengine import Document, Q, StringField, connect
from mongoengine_dsl import Query
from mongoengine_dsl.errors import InvalidSyntaxError, TransformHookError
from tests.utils import ts2dt
class DSLTest(unittest.TestCase):
def test_whitespace(self):
self.assertEqual(
Q(key1='val1') & Q(key2='val2') & Q(key3='val3'),
Query('key1:val1 and key2=val2 and key3==val3'),
)
self.assertEqual(
Q(key1='val1') & Q(key2='val2') & Q(key3='val3'),
Query('key1 : val1 and key2 = val2 and key3 == val3'),
)
def test_token(self):
self.assertEqual(Q(key1='hi_there'), Query('key1: hi_there'))
self.assertEqual(Q(key1='8a'), Query('key1: 8a'))
self.assertEqual(Q(key1='8.8.'), Query('key1: 8.8.'))
self.assertEqual(Q(key1='8.8.8'), Query('key1: 8.8.8'))
self.assertEqual(Q(key1='8.8.8.8'), Query('key1: 8.8.8.8'))
def test_quote_string(self):
self.assertEqual(Q(key1='hi_there'), Query('key1: "hi_there"'))
self.assertEqual(Q(key1='hi_there'), Query("key1: 'hi_there'"))
self.assertEqual(Q(key1='hello world'), Query('key1: "hello world"'))
self.assertEqual(Q(key1='hello world'), Query("key1: 'hello world'"))
self.assertEqual(
Q(key1='escape"this"world'),
Query('key1: "escape\\"this\\"world"'),
)
self.assertEqual(
Q(key1="escape'this'world"),
Query("key1: 'escape\\'this\\'world'"),
)
def test_int(self):
self.assertEqual(Q(key1=1), Query('key1:1'))
self.assertEqual(Q(key1=-1), Query('key1:-1'))
def test_float(self):
self.assertEqual(Q(key1=1.213), Query('key1:1.213'))
self.assertEqual(Q(key1=-1.213), Query('key1:-1.213'))
def test_bool(self):
self.assertEqual(
Q(key1=True) & Q(key2=True) & Q(key3=True),
Query('key1:true and key2:TRUE and key3:True'),
)
self.assertEqual(
Q(key1=False) & Q(key2=False) & Q(key3=False),
Query('key1:false and key2:FALSE and key3:False'),
)
def test_array(self):
self.assertEqual(Q(key1=['hi']), Query('key1:[hi]'))
self.assertEqual(
Q(key1=[False, True, 1, 1.2, 'quote', 'no_quote']),
Query('key1:[false, true, 1, 1.2, "quote", no_quote]'),
)
self.assertEqual( # Full-width comma
Q(key1=[False, True, 1, 1.2, 'quote', 'no_quote']),
Query('key1:[false, true, 1, 1.2, "quote", no_quote]'),
)
self.assertEqual( # no comma
Q(key1=[False, True, 1, 1.2, 'quote', 'no_quote']),
Query('key1:[false true 1 1.2 "quote" no_quote]'),
)
self.assertEqual(Q(key1=[1, [2, 3]]), Query('key1:[1, [2, 3]]')) # nested array
self.assertEqual( # nested more array
Q(key1=[1, 2, [3, [4, 5, 6]]]),
Query('key1:[1, 2, [3, [4, 5, 6]]]'),
)
self.assertRaisesRegex(
InvalidSyntaxError,
'Exclude operator cannot be used in arrays',
Query,
'key1 @ [!,2,3] and key2:"value2"',
)
self.assertRaisesRegex(
InvalidSyntaxError,
'Wildcard operator cannot be used in arrays',
Query,
'key1 !@ [*,2,3] and key2:"value2"',
)
def test_logical_priority(self):
self.assertEqual(
Q(key1='键1') & Q(key2='value2') & Q(键3='value3'),
Query('key1:键1 and key2:"value2" and 键3:value3'),
)
self.assertEqual(
(Q(key1='键1') | Q(key2='value2')) & Q(键3='value3'),
Query('(key1:键1 or key2:"value2") and 键3:value3'),
)
self.assertEqual(
Q(key1='键1') & (Q(key2='value2') | Q(键3='value3')),
Query('key1:键1 and (key2:"value2" or 键3:value3)'),
)
self.assertEqual(
Q(key1='键1') & (Q(key2='value2') | Q(键3='value3') | Q(key4='value4')),
Query('key1:键1 and (key2:"value2" or 键3:value3 or key4: value4)'),
)
def test_equal(self):
self.assertEqual(
Q(key1='val1') & Q(key2='val2') & Q(key3='val3'),
Query('key1:val1 and key2=val2 and key3==val3'),
)
self.assertEqual(
Q(key1='val1') & Q(key2='val2') & Q(key3='val3'),
Query('key1:val1 and key2=val2 and key3==val3'),
)
def test_not_equal(self):
self.assertEqual(Q(key1__ne=1), Query('key1!=1'))
def test_greater_than(self):
self.assertEqual(Q(key1__gt=1), Query('key1>1'))
self.assertEqual(Q(key1__gte=1), Query('key1>=1'))
def test_less_than(self):
self.assertEqual(Q(key1__lt=1), Query('key1<1'))
self.assertEqual(Q(key1__lte=1), Query('key1<=1'))
def test_exists_and_not_exists(self):
self.assertEqual(
Q(key1__exists=True) & Q(key2='value2'),
Query('key1:* and key2:"value2"'),
)
self.assertEqual(
Q(key1__exists=False) & Q(key2='value2'),
Query('key1:! and key2:"value2"'),
)
self.assertRaisesRegex(
InvalidSyntaxError,
'Wildcard operator can only be used for equals',
Query,
'key1 != *',
)
self.assertRaisesRegex(
InvalidSyntaxError,
'Exclude operator can only be used for equals',
Query,
'key1 != !',
)
def test_contain_and_not_contain(self):
self.assertEqual(
Q(key1__in=[1, 2, 3]) & Q(key2='value2'),
Query('key1 @ [1,2,3] and key2:"value2"'),
)
self.assertEqual(
Q(key1__nin=[1, 2, 3]) & Q(key2='value2'),
Query('key1 !@ [1,2,3] and key2:"value2"'),
)
def test_transform_hook(self):
self.assertEqual(
Q(key1=ts2dt(0)) & Q(key2=0),
Query('key1: 0 and key2: 0', transform={'key1': ts2dt}),
)
self.assertEqual( # bypass :*
Q(key1__exists=True) & Q(key2=0),
Query('key1: * and key2: 0', transform={'key1': ts2dt}),
)
self.assertEqual( # bypass :!
Q(key1__exists=False) & Q(key2=0),
Query('key1: ! and key2: 0', transform={'key1': ts2dt}),
)
self.assertEqual( # nested field
Q(nested__key1=ts2dt(0)) & Q(key2=0),
Query('nested.key1: 0 and key2: 0', transform={'nested.key1': ts2dt}),
)
self.assertRaisesRegex( # hook exception handle
TransformHookError,
'Field key1 transform hook error',
Query,
'key1 != abc',
transform={'key1': ts2dt},
)
def test_nested_field(self):
self.assertEqual(Q(key__inner=0), Query('key.inner: 0'))
class OtherTest(unittest.TestCase):
def test_readme_example(self):
connect('mongoengine_test', host='mongomock://localhost')
class User(Document):
fullname = StringField()
User(fullname='Tom').save()
User(fullname='Dick').save()
User(fullname='Harry').save()
self.assertEqual(User.objects(Query('fullname: Dick')).first().fullname, 'Dick')
self.assertEqual(
User.objects(
Query('fullname: dick', transform={'fullname': lambda x: x.title()})
)
.first()
.fullname,
'Dick',
)
b8c32fb0b4535e967806c491e7dce8ba89fb1433 | 1,134 | py | Python | app/cachedmodel/migrations/0001_initial.py | Uniquode/uniquode2 | 385f3e0b26383c042d8da64b52350e82414580ea | [
"MIT"
] | null | null | null | app/cachedmodel/migrations/0001_initial.py | Uniquode/uniquode2 | 385f3e0b26383c042d8da64b52350e82414580ea | [
"MIT"
] | null | null | null | app/cachedmodel/migrations/0001_initial.py | Uniquode/uniquode2 | 385f3e0b26383c042d8da64b52350e82414580ea | [
"MIT"
] | null | null | null |
# Generated by Django 3.2.7 on 2021-09-19 03:41
from django.db import migrations, models
import django.db.models.deletion
import django.db.models.manager
class Migration(migrations.Migration):
initial = True
dependencies = [
('contenttypes', '0002_remove_content_type_name'),
]
operations = [
migrations.CreateModel(
name='CachedModelTypes',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('app_label', models.CharField(max_length=100)),
('model', models.CharField(max_length=100)),
('content_type', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='contenttypes.contenttype')),
],
options={
'ordering': ('app_label', 'model'),
'unique_together': {('app_label', 'model')},
},
managers=[
('objects', django.db.models.manager.Manager()),
('_related', django.db.models.manager.Manager()),
],
),
]
b210a7b86cf5f45e110a190e8d8eb560c075e998 | 397 | py | Python | dotacni_matice/migrations/0002_history.py | CzechInvest/ciis | c6102598f564a717472e5e31e7eb894bba2c8104 | [
"MIT"
] | 1 | 2019-05-26T22:24:01.000Z | 2019-05-26T22:24:01.000Z | dotacni_matice/migrations/0002_history.py | CzechInvest/ciis | c6102598f564a717472e5e31e7eb894bba2c8104 | [
"MIT"
] | 6 | 2019-01-22T14:53:43.000Z | 2020-09-22T16:20:28.000Z | dotacni_matice/migrations/0002_history.py | CzechInvest/ciis | c6102598f564a717472e5e31e7eb894bba2c8104 | [
"MIT"
] | null | null | null |
# Generated by Django 2.2.3 on 2019-12-27 18:46
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('dotacni_matice', '0001_initial'),
]
operations = [
migrations.AlterField(
model_name='dotacnititul',
name='history',
field=models.TextField(blank=True, null=True),
),
]
b213326f9b1abfe3dfc2e0c0ee4f51afa2c00f6e | 778 | py | Python | Software_University/python_basics/exam_preparation/4_exam_prep/renovation.py | Ivanazzz/SoftUni-W3resource-Python | 892321a290e22a91ff2ac2fef5316179a93f2f17 | [
"MIT"
] | 1 | 2022-01-26T07:38:11.000Z | 2022-01-26T07:38:11.000Z | Software_University/python_basics/exam_preparation/4_exam_prep/renovation.py | Ivanazzz/SoftUni-W3resource-Python | 892321a290e22a91ff2ac2fef5316179a93f2f17 | [
"MIT"
] | null | null | null | Software_University/python_basics/exam_preparation/4_exam_prep/renovation.py | Ivanazzz/SoftUni-W3resource-Python | 892321a290e22a91ff2ac2fef5316179a93f2f17 | [
"MIT"
] | null | null | null |
from math import ceil
walls_height = int(input())
walls_width = int(input())
percentage_walls_total_area_not_painted = int(input())
total_walls_area = walls_height * walls_width * 4
quadratic_meters_left = total_walls_area - ceil(total_walls_area * percentage_walls_total_area_not_painted / 100)
while True:
paint_liters = input()
if paint_liters == "Tired!":
print(f"{quadratic_meters_left} quadratic m left.")
break
paint_liters = int(paint_liters)
quadratic_meters_left -= paint_liters
if quadratic_meters_left < 0:
print(f"All walls are painted and you have {abs(quadratic_meters_left)} l paint left!")
break
elif quadratic_meters_left == 0:
print("All walls are painted! Great job, Pesho!")
break
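The loop above can be restated as a pure function, which makes the arithmetic easy to check; the function name and signature below are invented for illustration:

```python
from math import ceil


def area_left_to_paint(height, width, pct_not_painted, litres_painted):
    """Square metres still unpainted after applying each amount in order."""
    total = height * width * 4          # four walls of equal size
    left = total - ceil(total * pct_not_painted / 100)
    for litres in litres_painted:
        left -= litres
        if left <= 0:                   # negative means paint left over
            break
    return left


print(area_left_to_paint(2, 3, 50, [5, 5]))  # 24 m2 total, 12 to paint -> 2 left
```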
b21881d06efcd08194a38d1b8b2a7efa72fa56b5 | 890 | py | Python | src/tools/checkDeckByUrl.py | kentokura/xenoparts | d861ca474accdf1ec7bcf6afcac6be9246cf4c85 | [
"MIT"
] | null | null | null | src/tools/checkDeckByUrl.py | kentokura/xenoparts | d861ca474accdf1ec7bcf6afcac6be9246cf4c85 | [
"MIT"
] | null | null | null | src/tools/checkDeckByUrl.py | kentokura/xenoparts | d861ca474accdf1ec7bcf6afcac6be9246cf4c85 | [
"MIT"
] | null | null | null |
# coding: utf-8
# Your code here!
import csv
def encode_card_by_url(url: str) -> dict:
    """
    Input: deck URL
    Output: { card_id: num }
    Processing:
        Build a dict mapping each card id to its count from the URL.
    """
    site_url, card_url = url.split("c=")
    card_url, key_card_url = card_url.split("&")
    arr_card_id = card_url.split(".")
    deck = { card_id: arr_card_id.count(card_id) for card_id in arr_card_id }
    return deck

# Main processing starts here
deck = encode_card_by_url(input())
card_details = []
# Open the csv; card_db is closed automatically when the with block exits
with open('../db/dmps_card_db.csv') as card_db:
    reader = csv.reader(card_db)
    for row in reader:
        # keep only rows whose card id (assumed to be the first column) is in the deck
        if row and row[0] in deck:
            card_details.append(row)
# Print the results
for card_detail in card_details:
    print(card_detail)
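The URL-counting step above can be isolated and exercised without the csv file; the sample URL below is made up to match the `c=`-parameter shape the script expects:

```python
def count_cards(url):
    """Count occurrences of each card id in the dot-separated 'c=' parameter."""
    card_part = url.split("c=")[1].split("&")[0]
    ids = card_part.split(".")
    return {card_id: ids.count(card_id) for card_id in ids}


# hypothetical deck URL: card 101 appears twice, card 205 once
sample = "https://example.com/deck?c=101.205.101&key=abc"
print(count_cards(sample))  # → {'101': 2, '205': 1}
```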
b21e3496e5fd13e41a572208964a13c7cf7ed7c2 | 3,032 | py | Python | UsbVibrationDevice.py | Suitceyes-Project-Code/Vibration-Pattern-Player | 44d8bac61eed0ee7712eb0299d0d7029f688fe24 | [
"MIT"
] | null | null | null | UsbVibrationDevice.py | Suitceyes-Project-Code/Vibration-Pattern-Player | 44d8bac61eed0ee7712eb0299d0d7029f688fe24 | [
"MIT"
] | null | null | null | UsbVibrationDevice.py | Suitceyes-Project-Code/Vibration-Pattern-Player | 44d8bac61eed0ee7712eb0299d0d7029f688fe24 | [
"MIT"
] | 1 | 2021-10-04T14:26:49.000Z | 2021-10-04T14:26:49.000Z |
import PyCmdMessenger
from VestDeviceBase import VestDevice
class UsbVestDevice(VestDevice):
"""
Basic interface for sending commands to the vest using a
serial port connection.
"""
commands = [["PinSet","gg"],
["PinMute","g"],
["GloveSet","gg*"],
["GloveMute",""],
["FreqSet","g"],
["PinGet","g"],
["FreqGet",""],
["PinState","gg"],
["FreqState","g"],
["StringMsg","s"],
["DebugSet","g"],
["SetMotor", "gg"],
["SetMotorSpeed", "g"]]
def __init__(self, device):
"""
Creates a new instance of Vest.
Inputs:
device:
The path to the device, e.g.: "/dev/ttyACM0" or "COM3"
"""
self._board = PyCmdMessenger.ArduinoBoard(device, baud_rate=115200)
self._connection = PyCmdMessenger.CmdMessenger(self._board, UsbVestDevice.commands, warnings=False)
def __enter__(self):
self.set_frequency(0)
return self
def __exit__(self, type, value, traceback):
# Make sure the vest is muted and that the connection is closed.
self.mute()
self._board.close()
def set_pin(self,pin,value):
"""
Sets a pin to a given value. This sets the vibration intensity of a given pin.
Inputs:
pin:
The pin index whose value should be set. This should be a byte value.
value:
A byte value (0-255) representing the vibration intensity. 0 is no vibration, 255
is the max intensity.
"""
self._connection.send("PinSet",pin,value)
def mute_pin(self,pin):
"""
Sets the vibration intensity for a given pin to 0.
Inputs:
pin: The pin which will be muted.
"""
self._connection.send("PinMute", pin)
def mute(self):
"""
Mutes all pins on the vest.
"""
self._connection.send("GloveMute")
def set_frequency(self,frequency):
"""
Sets the frequency of the entire vest.
Inputs:
frequency: The frequency in milliseconds.
"""
self._connection.send("FreqSet", frequency)
def set_vest(self, pin_value_dict, frequency):
values = []
for key in pin_value_dict:
values.append(key)
values.append(pin_value_dict[key])
values.append(frequency)
self._connection.send("GloveSet", *values)
def get_pin(self,pin):
"""
Gets the vibration intensity for a given pin.
Inputs:
pin: The pin index whose intensity should be fetched.
"""
self._connection.send("PinGet", pin)
return self._connection.receive()
    def set_pins_batched(self, values):
        """
        Sets multiple pins at once from a {pin: intensity} dict.
        """
        for pin in values:
            self.set_pin(pin, values[pin])
b22419f7d9aaad90e17b3010a06a273060fa238e | 1,729 | py | Python | mem_py/login/forms.py | Ciuel/Proyecto-Django | a466659fa7e84e77d0692f4f3c3f8c5f541079d4 | [
"MIT"
] | null | null | null | mem_py/login/forms.py | Ciuel/Proyecto-Django | a466659fa7e84e77d0692f4f3c3f8c5f541079d4 | [
"MIT"
] | null | null | null | mem_py/login/forms.py | Ciuel/Proyecto-Django | a466659fa7e84e77d0692f4f3c3f8c5f541079d4 | [
"MIT"
] | 1 | 2021-07-17T19:41:40.000Z | 2021-07-17T19:41:40.000Z |
from django import forms
from django.contrib.auth.forms import UserCreationForm,AuthenticationForm
from .models import UserProfile
# Create your forms here
class LoginForm(AuthenticationForm):
def __init__(self, request, *args, **kwargs):
super().__init__(self, request, *args, **kwargs)
self.fields['username'].widget.attrs.update({'placeholder':'Nombre de Usuario'})
self.fields['password'].widget.attrs.update({'placeholder':'Contraseña'})
self.error_messages['invalid_login']="Contraseña o Usuario incorrecto"
for fieldname in ['username','password']:
self.fields[fieldname].help_text = None
self.fields[fieldname].label=""
class Meta:
model= UserProfile
fields=[
"username",
"password",
]
class RegisterForm(UserCreationForm):
def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
self.fields['username'].widget.attrs.update({'placeholder':'Nombre de Usuario'})
self.fields['email'].widget.attrs.update({'placeholder':'Email'})
self.fields['password1'].widget.attrs.update({'placeholder':'Contraseña'})
self.fields['password2'].widget.attrs.update({'placeholder':'Repetir Contraseña'})
self.error_messages["password_mismatch"]="Las contraseñas no coinciden"
for fieldname in ['username','email','password1', 'password2']:
self.fields[fieldname].help_text = None
self.fields[fieldname].label=""
class Meta:
model= UserProfile
fields=[
"username",
"email",
"password1",
"password2",
]
b224f08977080d30a8248e3383147fd3fad725df | 1,487 | py | Python | numpy_examples/basic_5_structured_arrays.py | stealthness/sklearn-examples | e755fd3804cc15dd28ff2a38e299e80c83565d0a | [
"BSD-3-Clause"
] | null | null | null | numpy_examples/basic_5_structured_arrays.py | stealthness/sklearn-examples | e755fd3804cc15dd28ff2a38e299e80c83565d0a | [
"BSD-3-Clause"
] | null | null | null | numpy_examples/basic_5_structured_arrays.py | stealthness/sklearn-examples | e755fd3804cc15dd28ff2a38e299e80c83565d0a | [
"BSD-3-Clause"
] | null | null | null | """
Purpose of this file is to give examples of structured arrays
This script is partially dirived from the LinkedIn learning course
https://www.linkedin.com/learning/numpy-data-science-essential-training/create-arrays-from-python-structures
"""
import numpy as np
person_data_def = [('name', 'S6'), ('height', 'f8'), ('weight', 'f8'), ('age', 'i8')]
# create a structured array
people_array = np.zeros(4, dtype=person_data_def)
print(f'The structured array is of type {type(people_array)}\n{people_array}')
# let us change some the data values
# note that any int given for height or weight will be stored as the declared float type
people_array[2] = ('Cat', 130, 56, 22)
people_array[0] = ('Amy', 126, 60, 25)
people_array[1] = ('Bell', 146, 60, 20)
people_array[3] = ('Amy', 140, 80, 55)
print(people_array)
# we can print the information for name, height, weight and age
ages = people_array['age']
print(f'the ages of the people are {ages}')
print(f'The names of the people are {people_array["name"]}')
print(f'The heights of the people are {people_array["height"]}')
print(f'The weights of the people are {people_array["weight"]}')
youthful = ages/2
print(f'The young ages are {youthful}')
# Note that youthful does not change the original data
print(f'The original ages are {ages}')
print(people_array[['name', 'age']])
# Record array is a thin wrapper around structured array
person_record_array = np.rec.array([('a', 100, 80, 50), ('b', 190, 189, 20)])
print(type(person_record_array[0]))
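A point the walkthrough implies but never shows: structured-array fields are ordinary ndarrays, so per-record arithmetic is vectorized. A small sketch reusing the `person_data_def` layout (the sample values are made up):

```python
import numpy as np

dtype = [('name', 'S6'), ('height', 'f8'), ('weight', 'f8'), ('age', 'i8')]
people = np.zeros(2, dtype=dtype)
people[0] = ('Amy', 126, 60, 25)
people[1] = ('Bell', 146, 60, 20)

# each field is a plain ndarray, so this computes one ratio per record at once
bmi = people['weight'] / (people['height'] / 100) ** 2
print(bmi.round(1))
```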
b22bc88a2d140b8c45a0fbac6ce8fea46af69f26 | 1,036 | py | Python | courses/python/mflac/vuln_app/patched_admin.py | tank1st99/securitygym | 2e4fbdf8002afbe51648706906f0db2c294362a6 | [
"MIT"
] | 49 | 2021-05-20T12:49:28.000Z | 2022-03-13T11:35:03.000Z | courses/python/mflac/vuln_app/patched_admin.py | tank1st99/securitygym | 2e4fbdf8002afbe51648706906f0db2c294362a6 | [
"MIT"
] | null | null | null | courses/python/mflac/vuln_app/patched_admin.py | tank1st99/securitygym | 2e4fbdf8002afbe51648706906f0db2c294362a6 | [
"MIT"
] | 5 | 2021-05-20T12:58:34.000Z | 2021-12-05T19:08:13.000Z |
import functools
from flask import Blueprint
from flask import render_template
from flask import g
from flask import redirect
from flask import url_for
from flask import flash
from mflac.vuln_app.db import get_db
bp = Blueprint("admin", __name__, url_prefix="/admin")
def admin_required(view):
@functools.wraps(view)
def wrapped_view(**kwargs):
if g.user is None or not g.user['is_admin']:
flash("Forbidden. You haven't enough permissions")
return redirect(url_for("index.index"))
return view(**kwargs)
return wrapped_view
def login_required(view):
@functools.wraps(view)
def wrapped_view(**kwargs):
if g.user is None:
return redirect(url_for("auth.login"))
return view(**kwargs)
return wrapped_view
@bp.route("/users_list")
@login_required
@admin_required
def users_list():
db = get_db()
users = db.execute("SELECT id, username, is_admin FROM user").fetchall()
return render_template('admin/users_list.html', users=users)
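Stacking `@login_required` and `@admin_required` relies on the standard `functools.wraps` decorator pattern; the Flask-free sketch below shows the same gating with invented stand-ins for `g.user` and the redirect:

```python
import functools


def admin_required(view):
    @functools.wraps(view)
    def wrapped_view(user, **kwargs):
        # mirror the check above: missing user or non-admin is rejected
        if user is None or not user.get('is_admin'):
            return 'redirect:index'
        return view(user, **kwargs)
    return wrapped_view


@admin_required
def users_list(user):
    return ['alice', 'bob']


print(users_list({'is_admin': True}))   # → ['alice', 'bob']
print(users_list({'is_admin': False}))  # → redirect:index
print(users_list.__name__)              # wraps preserves the name → users_list
```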
b22e87213200baf4d5c3c89eb335262571cc546e | 1,486 | py | Python | HackerEarth/Python/BasicProgramming/InputOutput/BasicsOfInputOutput/MinimizeCost.py | cychitivav/programming_exercises | e8e7ddb4ec4eea52ee0d3826a144c7dc97195e78 | [
"MIT"
] | null | null | null | HackerEarth/Python/BasicProgramming/InputOutput/BasicsOfInputOutput/MinimizeCost.py | cychitivav/programming_exercises | e8e7ddb4ec4eea52ee0d3826a144c7dc97195e78 | [
"MIT"
] | null | null | null | HackerEarth/Python/BasicProgramming/InputOutput/BasicsOfInputOutput/MinimizeCost.py | cychitivav/programming_exercises | e8e7ddb4ec4eea52ee0d3826a144c7dc97195e78 | [
"MIT"
] | null | null | null |
#!/usr/bin/env python
"""
You are given an array of numbers Ai which contains positive as well as negative numbers . The cost of the array can be defined as C(X)
C(x) = |A1 + T1| + |A2 + T2| + ... + |An + Tn|, where T is the transfer array which contains N zeros initially.
You need to minimize this cost. You can transfer value from one array element to another if and only if the distance between them is at most K.
Also, transfer value can't be transferred further.
Say array contains 3, -1, -2 and K = 1
if we transfer 3 from 1st element to 2nd, the array becomes
Original Value 3, -1, -2
Transferred value -3, 3, 0
C(x) = |3 - 3| + |-1 + 3| + ... + |-2 + 0| = 4 which is minimum in this case
Note :
Only positive value can be transferred
It is not necessary to transfer whole value i.e partial transfer is also acceptable. This means that if you have A[i] = 5 then you can distribute the value 5 across many other array elements provided that they finally sum to a number less than equal to 5. For example 5 can be transferred in chunks of smaller values say 2 , 3 but their sum should not exceed 5.
INPUT:
First line contains N and K separated by space
Second line denotes an array of size N
OUTPUT:
Minimum value of C(X)
CONSTRAINTS:
1 ≤ N,K ≤ 10^5
-10^9 ≤ Ai ≤ 10^9
"""
import os
__author__ = "Cristian Chitiva"
__date__ = "March 18, 2019"
__email__ = "cychitivav@unal.edu.co"
N=os.read(0,2).decode()
print(type(N))
b2341f237ea46f0ced528101120f6ba97f84d73f | 14,362 | py | Python | ci/unit_tests/functions_deploy/main_test.py | xverges/watson-assistant-workbench | b899784506c7469be332cb58ed447ca8f607ed30 | [
"Apache-2.0"
] | 1 | 2020-03-27T16:39:38.000Z | 2020-03-27T16:39:38.000Z | ci/unit_tests/functions_deploy/main_test.py | xverges/watson-assistant-workbench | b899784506c7469be332cb58ed447ca8f607ed30 | [
"Apache-2.0"
] | 1 | 2021-01-29T16:14:58.000Z | 2021-02-03T16:10:07.000Z | ci/unit_tests/functions_deploy/main_test.py | xverges/watson-assistant-workbench | b899784506c7469be332cb58ed447ca8f607ed30 | [
"Apache-2.0"
] | 1 | 2021-01-22T13:13:36.000Z | 2021-01-22T13:13:36.000Z |
"""
Copyright 2019 IBM Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import os
import uuid
import zipfile
from urllib.parse import quote
import pytest
import requests
import functions_delete_package
import functions_deploy
from wawCommons import getFunctionResponseJson
from ...test_utils import BaseTestCaseCapture
class TestMain(BaseTestCaseCapture):
dataBasePath = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'main_data')
packageBase = "Package-for-WAW-CI-"
def setup_class(cls):
BaseTestCaseCapture.checkEnvironmentVariables(['CLOUD_FUNCTIONS_USERNAME', 'CLOUD_FUNCTIONS_PASSWORD',
'CLOUD_FUNCTIONS_NAMESPACE'])
cls.username = os.environ['CLOUD_FUNCTIONS_USERNAME']
cls.password = os.environ['CLOUD_FUNCTIONS_PASSWORD']
cls.apikey = cls.username + ':' + cls.password
cls.cloudFunctionsUrl = os.environ.get('CLOUD_FUNCTIONS_URL',
'https://us-south.functions.cloud.ibm.com/api/v1/namespaces')
cls.namespace = os.environ['CLOUD_FUNCTIONS_NAMESPACE']
cls.urlNamespace = quote(cls.namespace)
def callfunc(self, *args, **kwargs):
functions_deploy.main(*args, **kwargs)
def _getFunctionsInPackage(self, package):
functionListUrl = self.cloudFunctionsUrl + '/' + self.urlNamespace + '/actions/?limit=0&skip=0'
functionListResp = requests.get(functionListUrl, auth=(self.username, self.password),
headers={'accept': 'application/json'})
assert functionListResp.status_code == 200
functionListJson = functionListResp.json()
functionNames = []
for function in functionListJson:
if (self.namespace + '/' + package) in function['namespace']:
functionNames.append(function['name'])
return functionNames
def setup_method(self):
self.package = self.packageBase + str(uuid.uuid4())
        self.packageCreated = False  # tests should set this to True if they created a cloud functions package
def teardown_method(self):
if self.packageCreated:
# Delete the package
            params = ['-c', os.path.join(self.dataBasePath, 'exampleFunctions.cfg'),
                      '--cloudfunctions_package', self.package, '--cloudfunctions_namespace', self.namespace,
                      '--cloudfunctions_url', self.cloudFunctionsUrl,
                      '--cloudfunctions_apikey', self.apikey]
self.t_fun_noException(functions_delete_package.main, [params])
# @pytest.mark.skipiffails(label='Cloud Functions, Invoking an action with blocking=true returns 202')
@pytest.mark.parametrize('useApikey', [True, False])
def test_functionsUploadFromDirectory(self, useApikey):
"""Tests if functions_deploy uploads all supported functions from given directory."""
params = ['-c', os.path.join(self.dataBasePath, 'exampleFunctions.cfg'),
'--cloudfunctions_package', self.package, '--cloudfunctions_namespace', self.namespace,
'--cloudfunctions_url', self.cloudFunctionsUrl]
if useApikey:
params.extend(['--cloudfunctions_apikey', self.apikey])
else:
params.extend(['--cloudfunctions_username', self.username, '--cloudfunctions_password', self.password])
# upload functions
self.t_noException([params])
self.packageCreated = True
# obtain list of uploaded functions
functionNames = self._getFunctionsInPackage(self.package)
# get original list of cloud function files and check if all of them were uploaded
functionsDir = os.path.join(self.dataBasePath, 'example_functions')
functionFileNames = [os.path.splitext(fileName)[0] for fileName in os.listdir(functionsDir)]
assert set(functionNames) == set(functionFileNames)
# try to call particular functions
for functionName in functionNames:
responseJson = getFunctionResponseJson(self.cloudFunctionsUrl,
self.urlNamespace,
self.username,
self.password,
self.package,
functionName,
{},
{'name': 'unit test'})
assert "Hello unit test!" in responseJson['greeting']
@pytest.mark.skipif(os.environ.get('TRAVIS_EVENT_TYPE') != "cron", reason="This test is nightly build only.")
@pytest.mark.parametrize('useApikey', [True, False])
def test_pythonVersionFunctions(self, useApikey):
"""Tests if it's possible to upload one function into two different version of runtime."""
# Error in response: The 'python:2' runtime is no longer supported. You may read and delete but not update or invoke this action.
for pythonVersion in [3]:
params = ['-c', os.path.join(self.dataBasePath, 'python' + str(pythonVersion) + 'Functions.cfg'),
'--cloudfunctions_package', self.package, '--cloudfunctions_namespace', self.namespace,
'--cloudfunctions_url', self.cloudFunctionsUrl]
if useApikey:
params.extend(['--cloudfunctions_apikey', self.apikey])
else:
params.extend(['--cloudfunctions_username', self.username, '--cloudfunctions_password', self.password])
self.t_noException([params])
self.packageCreated = True
responseJson = getFunctionResponseJson(self.cloudFunctionsUrl,
self.urlNamespace,
self.username,
self.password,
self.package,
'getPythonMajorVersion',
{},
{})
assert pythonVersion == responseJson['majorVersion']
@pytest.mark.skipif(os.environ.get('TRAVIS_EVENT_TYPE') != "cron", reason="This test is nightly build only.")
@pytest.mark.parametrize('useApikey', [True, False])
def test_functionsInZip(self, useApikey):
"""Tests if functions_deploy can handle function in zip file."""
# prepare zip file
dirForZip = os.path.join(self.dataBasePath, "outputs", "pythonZip")
BaseTestCaseCapture.createFolder(dirForZip)
with zipfile.ZipFile(os.path.join(dirForZip, 'testFunc.zip'), 'w') as functionsZip:
for fileToZip in os.listdir(os.path.join(self.dataBasePath, 'zip_functions')):
functionsZip.write(os.path.join(self.dataBasePath, 'zip_functions', fileToZip), fileToZip)
#upload zip file
params = ['--cloudfunctions_package', self.package, '--cloudfunctions_namespace', self.namespace,
'--cloudfunctions_url', self.cloudFunctionsUrl, '--common_functions', [dirForZip]]
if useApikey:
params.extend(['--cloudfunctions_apikey', self.apikey])
else:
params.extend(['--cloudfunctions_username', self.username, '--cloudfunctions_password', self.password])
self.t_noException([params])
self.packageCreated = True
# call function and check if sub-function from non-main file was called
responseJson = getFunctionResponseJson(self.cloudFunctionsUrl,
self.urlNamespace,
self.username,
self.password,
self.package,
'testFunc',
{},
{})
assert "String from helper function" == responseJson['test']
# @pytest.mark.skipiffails(label='Cloud Functions, Invoking an action with blocking=true returns 202')
@pytest.mark.parametrize('useApikey', [True, False])
def test_functionsUploadSequence(self, useApikey):
"""Tests if functions_deploy uploads sequences."""
params = ['-c', os.path.join(self.dataBasePath, 'exampleValidSequences.cfg'),
'--cloudfunctions_package', self.package, '--cloudfunctions_namespace', self.namespace,
'--cloudfunctions_url', self.cloudFunctionsUrl]
if useApikey:
params.extend(['--cloudfunctions_apikey', self.apikey])
else:
params.extend(['--cloudfunctions_username', self.username, '--cloudfunctions_password', self.password])
# upload functions
self.t_noException([params])
self.packageCreated = True
sequenceAnswers = {"a" : "123", "b" : "231", "c" : "312"}
# try to call particular sequences and test their output
for sequenceName in sequenceAnswers:
responseJson = getFunctionResponseJson(self.cloudFunctionsUrl,
self.urlNamespace,
self.username,
self.password,
self.package,
sequenceName,
{},
{})
shouldAnswer = sequenceAnswers[sequenceName]
assert shouldAnswer in responseJson["entries"]
@pytest.mark.skipif(os.environ.get('TRAVIS_EVENT_TYPE') != "cron", reason="This test is nightly build only.")
@pytest.mark.parametrize('useApikey', [True, False])
def test_functionsMissingSequenceComponent(self, useApikey):
"""Tests if functions_deploy fails when uploading a sequence with a nonexistent function."""
params = ['-c', os.path.join(self.dataBasePath, 'exampleNonexistentFunctionRef.cfg'),
'--cloudfunctions_package', self.package, '--cloudfunctions_namespace', self.namespace,
'--cloudfunctions_url', self.cloudFunctionsUrl]
if useApikey:
params.extend(['--cloudfunctions_apikey', self.apikey])
else:
params.extend(['--cloudfunctions_username', self.username, '--cloudfunctions_password', self.password])
# upload functions (will fail AFTER package creation)
self.packageCreated = True
self.t_exitCodeAndLogMessage(1, "Unexpected error code", [params])
@pytest.mark.parametrize('useApikey', [True, False])
def test_functionsMissingSequenceDefinition(self, useApikey):
"""Tests if functions_deploy fails when uploading a sequence without a function list."""
params = ['-c', os.path.join(self.dataBasePath, 'exampleUndefinedSequence.cfg'),
'--cloudfunctions_package', self.package, '--cloudfunctions_namespace', self.namespace,
'--cloudfunctions_url', self.cloudFunctionsUrl]
if useApikey:
params.extend(['--cloudfunctions_apikey', self.apikey])
else:
params.extend(['--cloudfunctions_username', self.username, '--cloudfunctions_password', self.password])
# Fails before anything is uploaded
self.t_exitCodeAndLogMessage(1, "parameter not defined", [params])
def test_badArgs(self):
"""Tests some basic common problems with args."""
self.t_unrecognizedArgs([['--nonExistentArg', 'randomNonPositionalArg']])
self.t_exitCode(1, [[]])
completeArgsList = ['--cloudfunctions_username', self.username,
'--cloudfunctions_password', self.password,
'--cloudfunctions_apikey', self.password + ":" + self.username,
'--cloudfunctions_package', self.package,
'--cloudfunctions_namespace', self.namespace,
'--cloudfunctions_url', self.cloudFunctionsUrl,
'--common_functions', self.dataBasePath]
for argIndex in range(len(completeArgsList)):
if not completeArgsList[argIndex].startswith('--'):
continue
paramName = completeArgsList[argIndex][2:]
argsListWithoutOne = []
for i in range(len(completeArgsList)):
if i != argIndex and i != (argIndex + 1):
argsListWithoutOne.append(completeArgsList[i])
if paramName in ['cloudfunctions_username', 'cloudfunctions_password']:
message = 'combination already set: \'[\'cloudfunctions_apikey\']\''
elif paramName in ['cloudfunctions_apikey']:
                # we have to remove username and password (otherwise it would still be a valid combination of parameters)
argsListWithoutOne = argsListWithoutOne[4:] # remove username and password (leave just apikey)
message = 'Combination 0: \'cloudfunctions_apikey\''
else:
                # we have to remove username and password (otherwise it would always error because both auth types are provided)
argsListWithoutOne = argsListWithoutOne[4:] # remove username and password (leave just apikey)
message = 'required \'' + paramName + '\' parameter not defined'
self.t_exitCodeAndLogMessage(1, message, [argsListWithoutOne])
| 49.524138 | 137 | 0.598176 | 1,289 | 14,362 | 6.573313 | 0.250582 | 0.020772 | 0.014163 | 0.016523 | 0.479051 | 0.460876 | 0.456745 | 0.422283 | 0.408828 | 0.399858 | 0 | 0.004194 | 0.302743 | 14,362 | 289 | 138 | 49.695502 | 0.841921 | 0.152625 | 0 | 0.465969 | 0 | 0 | 0.189792 | 0.111166 | 0 | 0 | 0 | 0 | 0.031414 | 1 | 0.062827 | false | 0.089005 | 0.052356 | 0 | 0.136126 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
b2376657b0293a1d78aa6eb2c5f7730819b325c9 | 867 | py | Python | pychron/experiment/tests/comment_template.py | ael-noblegas/pychron | 6ebbbb1f66a614972b62b7a9be4c784ae61b5d62 | [
"Apache-2.0"
] | 1 | 2019-02-27T21:57:44.000Z | 2019-02-27T21:57:44.000Z | pychron/experiment/tests/comment_template.py | ael-noblegas/pychron | 6ebbbb1f66a614972b62b7a9be4c784ae61b5d62 | [
"Apache-2.0"
] | 80 | 2018-07-17T20:10:20.000Z | 2021-08-17T15:38:24.000Z | pychron/experiment/tests/comment_template.py | AGESLDEO/pychron | 1a81e05d9fba43b797f335ceff6837c016633bcf | [
"Apache-2.0"
] | null | null | null |
from __future__ import absolute_import
__author__ = 'ross'
import unittest
from pychron.experiment.utilities.comment_template import CommentTemplater
class MockFactory(object):
irrad_level = 'A'
irrad_hole = '9'
class CommentTemplaterTestCase(unittest.TestCase):
@classmethod
def setUpClass(cls):
        cls.obj = MockFactory()
def test_render1(self):
self._test_render('irrad_level : irrad_hole', 'A:9')
def test_render2(self):
self._test_render('irrad_level : irrad_hole SCLF', 'A:9SCLF')
def test_render3(self):
self._test_render('irrad_level : irrad_hole <SPACE> SCLF', 'A:9 SCLF')
def _test_render(self, label, expected):
ct = CommentTemplater()
        ct.label = label
r = ct.render(self.obj)
self.assertEqual(expected, r)
if __name__ == '__main__':
unittest.main()
| 22.815789 | 78 | 0.682814 | 105 | 867 | 5.285714 | 0.428571 | 0.072072 | 0.064865 | 0.097297 | 0.2 | 0.2 | 0.2 | 0.2 | 0 | 0 | 0 | 0.010189 | 0.207612 | 867 | 37 | 79 | 23.432432 | 0.797671 | 0 | 0 | 0 | 0 | 0 | 0.140878 | 0 | 0 | 0 | 0 | 0 | 0.041667 | 1 | 0.208333 | false | 0 | 0.125 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b2377bde1e5c8e5670fad099a5e53482fcf577c1 | 1,823 | py | Python | apps/roles/views.py | andipandiber/CajaAhorros | cb0769fc04529088768ea650f9ee048bd9a55837 | [
"MIT"
] | null | null | null | apps/roles/views.py | andipandiber/CajaAhorros | cb0769fc04529088768ea650f9ee048bd9a55837 | [
"MIT"
] | 8 | 2021-03-30T13:39:24.000Z | 2022-03-12T00:36:15.000Z | apps/roles/views.py | andresbermeoq/CajaAhorros | cb0769fc04529088768ea650f9ee048bd9a55837 | [
"MIT"
] | null | null | null | from django.shortcuts import render
from django.urls import reverse_lazy
from django.contrib.auth.mixins import LoginRequiredMixin
from django.views.generic import CreateView, ListView, UpdateView, DeleteView, TemplateView
from .models import Role
class baseView(LoginRequiredMixin, TemplateView):
template_name = 'role/inicio.html'
login_url = reverse_lazy('user_app:login-user')
class createRoleView(LoginRequiredMixin,CreateView):
model = Role
template_name = "role/create.html"
fields = ('__all__')
success_url = reverse_lazy('role_app:base')
login_url = reverse_lazy('user_app:login-user')
def form_valid(self, form):
role = form.save(commit = False)
role.save()
return super(createRoleView, self).form_valid(form)
class updateRoleView(LoginRequiredMixin, UpdateView):
template_name = "role/update.html"
model = Role
fields = ('__all__')
success_url = reverse_lazy('role_app:base')
login_url = reverse_lazy('user_app:login-user')
def post(self, request, *args, **kwargs):
self.object = self.get_object()
return super().post(request, *args, **kwargs)
def form_valid(self, form):
return super(updateRoleView, self).form_valid(form)
class deleteRoleView(LoginRequiredMixin, DeleteView):
model = Role
template_name = "role/delete.html"
success_url = reverse_lazy('role_app:base')
login_url = reverse_lazy('user_app:login-user')
class listRoleView(LoginRequiredMixin, ListView):
template_name = "role/list_all.html"
context_object_name = 'roles'
login_url = reverse_lazy('user_app:login-user')
def get_queryset(self):
key = self.request.GET.get("key", '')
        roles = Role.objects.filter(
            name_role__icontains=key
        )
        return roles
| 29.403226 | 91 | 0.705979 | 221 | 1,823 | 5.597285 | 0.298643 | 0.080032 | 0.090542 | 0.076799 | 0.354891 | 0.248989 | 0.248989 | 0.248989 | 0.248989 | 0.181892 | 0 | 0 | 0.185409 | 1,823 | 61 | 92 | 29.885246 | 0.832997 | 0 | 0 | 0.340909 | 0 | 0 | 0.130913 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.113636 | 0.022727 | 0.840909 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
b238f91a5ac084ae34b9c4b97d9a95b7ebca4518 | 418 | py | Python | hlwtadmin/migrations/0035_location_disambiguation.py | Kunstenpunt/havelovewilltravel | 6a27824b4d3d8b1bf19e0bc0d0648f0f4e8abc83 | [
"Apache-2.0"
] | 1 | 2020-10-16T16:29:01.000Z | 2020-10-16T16:29:01.000Z | hlwtadmin/migrations/0035_location_disambiguation.py | Kunstenpunt/havelovewilltravel | 6a27824b4d3d8b1bf19e0bc0d0648f0f4e8abc83 | [
"Apache-2.0"
] | 365 | 2020-02-03T12:46:53.000Z | 2022-02-27T17:20:46.000Z | hlwtadmin/migrations/0035_location_disambiguation.py | Kunstenpunt/havelovewilltravel | 6a27824b4d3d8b1bf19e0bc0d0648f0f4e8abc83 | [
"Apache-2.0"
] | null | null | null | # Generated by Django 3.0 on 2020-07-22 08:26
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('hlwtadmin', '0034_auto_20200722_1000'),
]
operations = [
migrations.AddField(
model_name='location',
name='disambiguation',
field=models.CharField(blank=True, max_length=200, null=True),
),
]
| 22 | 74 | 0.617225 | 45 | 418 | 5.622222 | 0.844444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108197 | 0.270335 | 418 | 18 | 75 | 23.222222 | 0.721311 | 0.102871 | 0 | 0 | 1 | 0 | 0.144772 | 0.061662 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b23ae56df03e56d6049586357b729e447c6dec2f | 658 | py | Python | 819. Rotate Array/Slicing.py | tulsishankarreddy/leetcode | fffe90b0ab43a57055c248550f31ac18967fe183 | [
"MIT"
] | 1 | 2022-01-19T16:26:49.000Z | 2022-01-19T16:26:49.000Z | 819. Rotate Array/Slicing.py | tulsishankarreddy/leetcode | fffe90b0ab43a57055c248550f31ac18967fe183 | [
"MIT"
] | null | null | null | 819. Rotate Array/Slicing.py | tulsishankarreddy/leetcode | fffe90b0ab43a57055c248550f31ac18967fe183 | [
"MIT"
] | null | null | null | '''This can be solved using list slicing. We modify the list in place by taking the
last k elements of the array (keeping their order) and joining them with the remaining front part of the list.'''
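A quick, standalone illustration of this slicing trick (`rotate_right` is a hypothetical helper name, separate from the Solution class below):

```python
def rotate_right(nums, k):
    # Move the last k elements to the front, in place.
    k = k % len(nums)  # full-cycle rotations are no-ops
    nums[:] = nums[-k:] + nums[:-k]

a = [1, 2, 3, 4, 5, 6, 7]
rotate_right(a, 3)
# a is now [5, 6, 7, 1, 2, 3, 4]
```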
from typing import List
class Solution:
def rotate(self, nums: List[int], k: int) -> None:
"""
Do not return anything, modify nums in-place instead.
"""
        k = k % len(nums)  # reduce k, since rotating by a multiple of len(nums) is a no-op
        nums[:] = nums[-k:] + nums[:-k]  # nums[-k:] -> last k elements, kept in their original order
#nums[:-k] -> front part of the list which is attached to the right of nums[-k:] | 54.833333 | 120 | 0.600304 | 104 | 658 | 3.798077 | 0.548077 | 0.063291 | 0.091139 | 0.098734 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.302432 | 658 | 12 | 120 | 54.833333 | 0.860566 | 0.647416 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b23ca7a903bc4922dc5e8b76e4f255954b93daec | 10,324 | py | Python | ecs/notifications/models.py | programmierfabrik/ecs | 2389a19453e21b2ea4e40b272552bcbd42b926a9 | [
"Apache-2.0"
] | 9 | 2017-02-13T18:17:13.000Z | 2020-11-21T20:15:54.000Z | ecs/notifications/models.py | programmierfabrik/ecs | 2389a19453e21b2ea4e40b272552bcbd42b926a9 | [
"Apache-2.0"
] | 2 | 2021-05-20T14:26:47.000Z | 2021-05-20T14:26:48.000Z | ecs/notifications/models.py | programmierfabrik/ecs | 2389a19453e21b2ea4e40b272552bcbd42b926a9 | [
"Apache-2.0"
] | 4 | 2017-04-02T18:48:59.000Z | 2021-11-23T15:40:35.000Z | from importlib import import_module
from django.conf import settings
from django.db import models
from django.utils.translation import ugettext_lazy as _
from django.utils.translation import ugettext
from django.template import loader
from django.utils.text import slugify
from django.utils import timezone
from reversion.models import Version
from reversion import revisions as reversion
from ecs.documents.models import Document
from ecs.utils.viewutils import render_pdf_context
from ecs.notifications.constants import SAFETY_TYPE_CHOICES
from ecs.notifications.managers import NotificationManager
from ecs.authorization.managers import AuthorizationManager
class NotificationType(models.Model):
name = models.CharField(max_length=80, unique=True)
form = models.CharField(max_length=80, default='ecs.notifications.forms.NotificationForm')
default_response = models.TextField(blank=True)
position = models.IntegerField(default=0)
includes_diff = models.BooleanField(default=False)
grants_vote_extension = models.BooleanField(default=False)
finishes_study = models.BooleanField(default=False)
is_rejectable = models.BooleanField(default=False)
@property
def form_cls(self):
if not hasattr(self, '_form_cls'):
module, cls_name = self.form.rsplit('.', 1)
self._form_cls = getattr(import_module(module), cls_name)
return self._form_cls
def get_template(self, pattern):
template_names = [pattern % name for name in (self.form_cls.__name__, 'base')]
return loader.select_template(template_names)
def __str__(self):
return self.name
class DiffNotification(models.Model):
old_submission_form = models.ForeignKey('core.SubmissionForm', related_name="old_for_notification")
new_submission_form = models.ForeignKey('core.SubmissionForm', related_name="new_for_notification")
class Meta:
abstract = True
def save(self, **kwargs):
super().save()
self.submission_forms = [self.old_submission_form]
self.new_submission_form.is_transient = False
self.new_submission_form.save(update_fields=('is_transient',))
def apply(self):
new_sf = self.new_submission_form
if not self.new_submission_form.is_current and self.old_submission_form.is_current:
new_sf.acknowledge(True)
new_sf.mark_current()
return True
else:
return False
def get_diff(self, plainhtml=False):
from ecs.core.diff import diff_submission_forms
return diff_submission_forms(self.old_submission_form, self.new_submission_form).html(plain=plainhtml)
class Notification(models.Model):
type = models.ForeignKey(NotificationType, null=True, related_name='notifications')
submission_forms = models.ManyToManyField('core.SubmissionForm', related_name='notifications')
documents = models.ManyToManyField('documents.Document', related_name='notifications')
pdf_document = models.OneToOneField(Document, related_name='_notification', null=True)
comments = models.TextField()
timestamp = models.DateTimeField(auto_now_add=True)
user = models.ForeignKey('auth.User', null=True)
objects = NotificationManager()
unfiltered = models.Manager()
def __str__(self):
return '{} für {}'.format(
self.short_name,
' + '.join(str(sf.submission) for sf in self.submission_forms.all())
)
@property
def short_name(self):
sn = getattr(self, 'safetynotification', None)
if sn:
return sn.get_safety_type_display()
return self.type.name
@property
def is_rejected(self):
try:
return self.answer.is_rejected
except NotificationAnswer.DoesNotExist:
return None
def get_submission_form(self):
if self.submission_forms.exists():
return self.submission_forms.all()[0]
return None
def get_submission(self):
sf = self.get_submission_form()
if sf:
return sf.submission
return None
def get_filename(self, suffix='.pdf'):
ec_num = '_'.join(
str(num)
for num in self.submission_forms
.order_by('submission__ec_number')
.distinct()
.values_list('submission__ec_number', flat=True)
)
base = '{}-{}'.format(slugify(ec_num), slugify(self.type.name))
return base[:(250 - len(suffix))] + suffix
def render_pdf(self):
tpl = self.type.get_template('notifications/pdf/%s.html')
submission_forms = self.submission_forms.select_related('submission')
return render_pdf_context(tpl, {
'notification': self,
'submission_forms': submission_forms,
'documents': self.documents.order_by('doctype__identifier', 'date', 'name'),
})
def render_pdf_document(self):
assert self.pdf_document is None
pdfdata = self.render_pdf()
self.pdf_document = Document.objects.create_from_buffer(pdfdata,
doctype='notification', parent_object=self, name=str(self)[:250],
original_file_name=self.get_filename())
self.save()
class ReportNotification(Notification):
study_started = models.BooleanField(default=True)
reason_for_not_started = models.TextField(null=True, blank=True)
recruited_subjects = models.PositiveIntegerField(null=True, blank=False)
finished_subjects = models.PositiveIntegerField(null=True, blank=False)
aborted_subjects = models.PositiveIntegerField(null=True, blank=False)
SAE_count = models.PositiveIntegerField(default=0, blank=False)
SUSAR_count = models.PositiveIntegerField(default=0, blank=False)
class Meta:
abstract = True
class CompletionReportNotification(ReportNotification):
study_aborted = models.BooleanField(default=False)
completion_date = models.DateField()
class ProgressReportNotification(ReportNotification):
runs_till = models.DateField(null=True, blank=True)
class AmendmentNotification(DiffNotification, Notification):
is_substantial = models.BooleanField(default=False)
meeting = models.ForeignKey('meetings.Meeting', null=True,
related_name='amendments')
needs_signature = models.BooleanField(default=False)
def schedule_to_meeting(self):
from ecs.meetings.models import Meeting
meeting = Meeting.objects.filter(started=None).order_by('start').first()
self.meeting = meeting
self.save()
class SafetyNotification(Notification):
safety_type = models.CharField(max_length=6, db_index=True, choices=SAFETY_TYPE_CHOICES, verbose_name=_('Type'))
class CenterCloseNotification(Notification):
investigator = models.ForeignKey('core.Investigator', related_name="closed_by_notification")
close_date = models.DateField()
@reversion.register(fields=('text',))
class NotificationAnswer(models.Model):
notification = models.OneToOneField(Notification, related_name="answer")
text = models.TextField()
is_valid = models.BooleanField(default=True)
is_final_version = models.BooleanField(default=False, verbose_name=_('Proofread'))
is_rejected = models.BooleanField(default=False, verbose_name=_('rate negative'))
pdf_document = models.OneToOneField(Document, related_name='_notification_answer', null=True)
signed_at = models.DateTimeField(null=True)
published_at = models.DateTimeField(null=True)
objects = AuthorizationManager()
unfiltered = models.Manager()
@property
def version_number(self):
return Version.objects.get_for_object(self).count()
def get_render_context(self):
return {
'notification': self.notification,
'documents': self.notification.documents.order_by('doctype__identifier', 'date', 'name'),
'answer': self,
}
def render_pdf(self):
notification = self.notification
tpl = notification.type.get_template('notifications/answers/pdf/%s.html')
return render_pdf_context(tpl, self.get_render_context())
def render_pdf_document(self):
pdfdata = self.render_pdf()
self.pdf_document = Document.objects.create_from_buffer(pdfdata,
doctype='notification_answer', parent_object=self,
name=str(self),
original_file_name=self.notification.get_filename('-answer.pdf')
)
self.save()
def distribute(self):
from ecs.core.models.submissions import Submission
self.published_at = timezone.now()
self.save()
if not self.is_rejected and self.notification.type.includes_diff:
try:
notification = AmendmentNotification.objects.get(pk=self.notification.pk)
notification.apply()
except AmendmentNotification.DoesNotExist:
assert False, "we should never get here"
extend, finish = False, False
if not self.is_rejected:
if self.notification.type.grants_vote_extension:
extend = True
if self.notification.type.finishes_study:
finish = True
for submission in Submission.objects.filter(forms__in=self.notification.submission_forms.values('pk').query):
if extend:
for vote in submission.votes.positive().permanent():
vote.extend()
if finish:
submission.finish()
presenting_parties = submission.current_submission_form.get_presenting_parties()
_ = ugettext
presenting_parties.send_message(
_('New Notification Answer'),
'notifications/answers/new_message.txt',
context={
'notification': self.notification,
'answer': self,
'ABSOLUTE_URL_PREFIX': settings.ABSOLUTE_URL_PREFIX,
},
submission=submission)
NOTIFICATION_MODELS = (
Notification, CompletionReportNotification, ProgressReportNotification,
AmendmentNotification, SafetyNotification, CenterCloseNotification,
)
| 37.955882 | 117 | 0.6852 | 1,113 | 10,324 | 6.142857 | 0.213836 | 0.02662 | 0.040222 | 0.039491 | 0.20667 | 0.159427 | 0.127834 | 0.078689 | 0.043586 | 0.043586 | 0 | 0.001988 | 0.220263 | 10,324 | 271 | 118 | 38.095941 | 0.847329 | 0 | 0 | 0.152074 | 0 | 0 | 0.077586 | 0.019275 | 0 | 0 | 0 | 0 | 0.009217 | 1 | 0.092166 | false | 0 | 0.087558 | 0.018433 | 0.534562 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
b23d36fa5033cff1b7860caf5d44f22ca9d35ade | 3,422 | py | Python | iwjam_import.py | patrickgh3/iwjam | fd6f58bd5217dc13ed475779fe7f1ff6ca7f13be | [
"MIT"
] | null | null | null | iwjam_import.py | patrickgh3/iwjam | fd6f58bd5217dc13ed475779fe7f1ff6ca7f13be | [
"MIT"
] | null | null | null | iwjam_import.py | patrickgh3/iwjam | fd6f58bd5217dc13ed475779fe7f1ff6ca7f13be | [
"MIT"
] | null | null | null | from lxml import etree
import os
import sys
import shutil
import iwjam_util
# Performs an import of a mod project into a base project given a
# previously computed ProjectDiff between them,
# and a list of folder names to prefix
# ('%modname%' will be replaced with the mod's name)
def do_import(base_dir, mod_dir, pdiff, folder_prefixes=['%modname%']):
# Clone base project into out directory
#if os.path.isdir(out_dir):
# print('Out dir already exists, aborting')
# sys.exit()
#shutil.copytree(base_dir, out_dir)
#os.rename(iwjam_util.gmx_in_dir(out_dir),
# os.path.join(out_dir, 'output.project.gmx'))
#base_dir = out_dir
# Replace %modname%
for i, p in enumerate(folder_prefixes):
if p == '%modname%':
folder_prefixes[i] = pdiff.mod_name
# Set up XML
base_gmx = iwjam_util.gmx_in_dir(base_dir)
base_tree = etree.parse(base_gmx)
base_root = base_tree.getroot()
mod_gmx = iwjam_util.gmx_in_dir(mod_dir)
mod_tree = etree.parse(mod_gmx)
mod_root = mod_tree.getroot()
# For each added resource
for addedres in pdiff.added:
# Create a new resource element
new_elt = etree.Element(addedres.restype)
new_elt.text = addedres.elt_text
# Create list of names of groups to traverse/create
group_names = folder_prefixes + addedres.group_names
baseElt = base_root.find(addedres.restype_group_name)
# Create resource type element if it doesn't exist
if baseElt is None:
baseElt = etree.SubElement(base_root, addedres.restype_group_name)
# Traverse groups, creating nonexistent ones along the way
for g in group_names:
# Try to find group element with the current name
nextBaseElt = next(
(c for c in baseElt if c.get('name') == g), None)
# Create group element if it doesn't exist
if nextBaseElt is None:
nextBaseElt = etree.SubElement(baseElt, baseElt.tag)
nextBaseElt.set('name', g)
baseElt = nextBaseElt
# Add the new resource element
baseElt.append(new_elt)
# Write project file
base_tree.write(base_gmx, pretty_print=True)
# Now, copy the files
_recurse_files('', base_dir, mod_dir, [r.name for r in pdiff.added])
# TODO: Modified resources
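The group traversal above relies on a find-or-create pattern for nested resource groups. A minimal standalone sketch of that pattern, shown here with the standard-library ElementTree (whose `Element`, `SubElement`, and attribute calls match the lxml API used in this file); the group names are illustrative only:

```python
import xml.etree.ElementTree as ET

def find_or_create_group(parent, name):
    # Return the child group carrying this ``name`` attribute, creating it
    # (reusing the parent's tag, as GameMaker .gmx group elements do) if absent.
    child = next((c for c in parent if c.get('name') == name), None)
    if child is None:
        child = ET.SubElement(parent, parent.tag)
        child.set('name', name)
    return child

root = ET.Element('objects')
grp = root
for group_name in ['MyMod', 'enemies']:  # e.g. folder_prefixes + group_names
    grp = find_or_create_group(grp, group_name)
```

Calling it again with the same names returns the existing elements instead of duplicating them, which is what keeps repeated imports from growing the project tree.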
def _recurse_files(subpath, base_dir, mod_dir, res_names):
subdirs = [e for e in os.scandir(os.path.join(mod_dir, subpath))
if e.is_dir() and e.name != 'Configs']
files = [e for e in os.scandir(os.path.join(mod_dir, subpath))
if e.is_file()]
for file in files:
resname = file.name.split('.')[0]
extension = file.name.split('.')[-1]
        if subpath.split(os.sep)[0] == 'sprites' and extension == 'png':
resname = '_'.join(resname.split('_')[0:-1])
if resname in res_names:
relpath = os.path.relpath(file.path, mod_dir)
base_file_path = os.path.join(base_dir, relpath)
shutil.copyfile(file.path, base_file_path)
for subdir in subdirs:
relpath = os.path.relpath(subdir.path, mod_dir)
base_path = os.path.join(base_dir, relpath)
if not os.path.exists(base_path):
os.mkdir(base_path)
_recurse_files(relpath, base_dir, mod_dir, res_names)
| 36.404255 | 78 | 0.63647 | 483 | 3,422 | 4.333333 | 0.287785 | 0.0301 | 0.0215 | 0.024845 | 0.139035 | 0.130913 | 0.091734 | 0.042045 | 0.042045 | 0.042045 | 0 | 0.001986 | 0.264465 | 3,422 | 93 | 79 | 36.795699 | 0.829559 | 0.259205 | 0 | 0 | 0 | 0 | 0.01953 | 0 | 0 | 0 | 0 | 0.010753 | 0 | 1 | 0.038462 | false | 0 | 0.115385 | 0 | 0.153846 | 0.019231 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b24345cfa90040fa81b341f92e8e1c158be7a95e | 673 | py | Python | dvc/parsing/__init__.py | mbraakhekke/dvc | 235d4c9a94603131e00c9b770125584fdb369481 | [
"Apache-2.0"
] | null | null | null | dvc/parsing/__init__.py | mbraakhekke/dvc | 235d4c9a94603131e00c9b770125584fdb369481 | [
"Apache-2.0"
] | null | null | null | dvc/parsing/__init__.py | mbraakhekke/dvc | 235d4c9a94603131e00c9b770125584fdb369481 | [
"Apache-2.0"
] | null | null | null | import logging
from itertools import starmap

from funcy import join

from .context import Context
from .interpolate import resolve

logger = logging.getLogger(__name__)

STAGES = "stages"


class DataResolver:
    def __init__(self, d):
        self.context = Context()
        self.data = d

    def _resolve_entry(self, name, definition):
        stage_d = resolve(definition, self.context)
        logger.trace("Resolved stage data for '%s': %s", name, stage_d)
        return {name: stage_d}

    def resolve(self):
        stages = self.data.get(STAGES, {})
        data = join(starmap(self._resolve_entry, stages.items()))
        return {**self.data, STAGES: data}
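`funcy.join` merges the per-stage dicts that `starmap` produces. The same merge can be expressed with only the standard library; the upper-casing `_resolve_entry` below is a stand-in for the real interpolation logic, not DVC's actual behavior:

```python
from functools import reduce
from itertools import starmap

def _resolve_entry(name, definition):
    # stand-in resolver: pretend "resolving" just upper-cases the values
    return {name: {k: v.upper() for k, v in definition.items()}}

stages = {"train": {"cmd": "python train.py"}, "eval": {"cmd": "python eval.py"}}
# funcy.join merges an iterable of dicts; reduce + dict-merge is the stdlib equivalent
data = reduce(lambda a, b: {**a, **b}, starmap(_resolve_entry, stages.items()), {})
print(data)
```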
# File: interviewbit/TwoPointers/kthsmallest.py | repo: zazhang/coding-problems @ 704f0ab | license: MIT
#!/usr/bin/env ipython
"""Coding interview problem (array, math):
See `https://www.interviewbit.com/problems/kth-smallest-element-in-the-array/`
Find the kth smallest element in an unsorted array of non-negative integers.
Definition of kth smallest element:
kth smallest element is the minimum possible n such that there are at least k elements in the array <= n.
In other words, if the array A was sorted, then A[k - 1] ( k is 1 based, while the arrays are 0 based )
NOTE:
You are not allowed to modify the array ( The array is read only ).
Try to do it using constant extra space.
Example:
A : [2 1 4 3 2]
k : 3
answer : 2
"""
class Solution:
    # @param A : tuple of integers
    # @param k : integer
    # @return an integer

    # This implementation is slow, time limit exceeds
    def kthsmallest(self, A, k):
        if type(A) == int:
            return A
        else:
            temp_A = list(A)
            min_list = []
            index = 0
            while index < k:  # k is 1 based, e.g. k=3 means 3 elements
                current_min = min(temp_A)
                min_list.append(current_min)
                temp_A.remove(current_min)
                index += 1
            return max(min_list)

    # This implementation uses extra space, not constant extra space
    def kthsmallest2(self, A, k):
        if type(A) == int:
            return A
        else:
            temp_A = list(A)
            temp_A.sort()
            return temp_A[k - 1]

    # Need a method that is constant space and O(k) time
    def kthsmallest3(self, A, k):
        return None


if __name__ == '__main__':
    s = Solution()  # create Solution object
    A = (1, 3, 2, 234, 5, 6, 1)
    k = 4
    print(s.kthsmallest(A, k))
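The stubbed `kthsmallest3` above asks for a constant-space, read-only method. One standard approach (a sketch, not from the original repo) is a binary search over the value range rather than over positions:

```python
def kthsmallest_bsearch(A, k):
    # O(n log(max-min)) time, O(1) extra space, never mutates A
    lo, hi = min(A), max(A)
    while lo < hi:
        mid = (lo + hi) // 2
        if sum(1 for x in A if x <= mid) >= k:
            hi = mid        # at least k elements <= mid: answer is <= mid
        else:
            lo = mid + 1    # fewer than k elements <= mid: answer is larger
    return lo

print(kthsmallest_bsearch((2, 1, 4, 3, 2), 3))  # 2, matching the docstring example
```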
# File: Test/test_conf_ap/conf_hostapd/create_config.py | repo: liquidinvestigations/wifi-test @ beae867 | license: MIT
#!/usr/bin/env python
from hostapdconf.parser import HostapdConf
from hostapdconf import helpers as ha
import subprocess


def create_hostapd_conf(ssid, password, interface):
    """
    Create a new hostapd.conf with the given ssid, password, interface.
    Overwrites the current config file.
    """
    subprocess.call(['touch', './hostapd.conf'])
    conf = HostapdConf('./hostapd.conf')
    # set some common options
    ha.set_ssid(conf, ssid)
    ha.reveal_ssid(conf)
    ha.set_iface(conf, interface)
    ha.set_driver(conf, ha.STANDARD)
    ha.set_channel(conf, 2)
    ha.enable_wpa(conf, passphrase=password, wpa_mode=ha.WPA2_ONLY)
    ha.set_country(conf, 'ro')
    # my hostapd doesn't like the default values of -1 here, so we set some
    # dummy values
    conf.update({'rts_threshold': 0, 'fragm_threshold': 256})
    print("writing configuration")
    conf.write()


if __name__ == '__main__':
    print("Creating conf file...")
    create_hostapd_conf('test_conf_supplicant', 'password', 'wlan0')
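`hostapd.conf` is a flat `key=value` file, one option per line; a dependency-free sketch of what a writer like the one above ends up producing (the exact keys `hostapdconf` emits may differ from these):

```python
import os
import tempfile

def write_hostapd_conf(path, **options):
    # hostapd.conf is a plain "key=value" file, one option per line
    with open(path, "w") as f:
        for key, value in options.items():
            f.write("{}={}\n".format(key, value))

path = os.path.join(tempfile.gettempdir(), "hostapd_example.conf")
write_hostapd_conf(path, interface="wlan0", ssid="test_conf_supplicant",
                   channel=2, wpa=2, wpa_passphrase="password", country_code="RO")
print(open(path).read().splitlines()[0])  # interface=wlan0
```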
# File: tests/test_paddle.py | repo: ankitshah009/MMdnn @ a03d800 | license: MIT
from __future__ import absolute_import
from __future__ import print_function

import os
import sys

from conversion_imagenet import TestModels
from conversion_imagenet import is_paddle_supported


def get_test_table():
    return { 'paddle' : {
        'resnet50' : [
            TestModels.onnx_emit,
            #TestModels.caffe_emit,
            #TestModels.cntk_emit,
            TestModels.coreml_emit,
            TestModels.keras_emit,
            TestModels.mxnet_emit,
            TestModels.pytorch_emit,
            TestModels.tensorflow_emit
        ],
        'resnet101' : [
            #TestModels.onnx_emit,
            #TestModels.caffe_emit,
            #TestModels.cntk_emit,
            TestModels.coreml_emit,
            TestModels.keras_emit,
            TestModels.mxnet_emit,
            TestModels.pytorch_emit,
            TestModels.tensorflow_emit
        ],
        'vgg16' : [
            TestModels.onnx_emit,
            #TestModels.caffe_emit,
            #TestModels.cntk_emit,
            #TestModels.coreml_emit,
            #TestModels.keras_emit,
            #TestModels.mxnet_emit,
            #TestModels.pytorch_emit,
            #TestModels.tensorflow_emit
        ],
    }}


def test_paddle():
    if not is_paddle_supported():
        return

    # omit tensorflow lead to crash
    import tensorflow as tf

    test_table = get_test_table()
    tester = TestModels(test_table)
    tester._test_function('paddle', tester.paddle_parse)


if __name__ == '__main__':
    test_paddle()
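The test table above maps a model name to the list of emit functions to exercise, with disabled emitters simply commented out. The table-driven pattern itself, stripped of MMdnn specifics (the `emit_*` stand-ins below are hypothetical):

```python
def emit_a(model):
    return "a:" + model

def emit_b(model):
    return "b:" + model

TEST_TABLE = {"resnet50": [emit_a, emit_b], "vgg16": [emit_a]}

def run_table(table):
    # run every enabled emitter for every model, collecting results
    return [emit(model) for model, emitters in table.items() for emit in emitters]

print(run_table(TEST_TABLE))  # ['a:resnet50', 'b:resnet50', 'a:vgg16']
```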
# File: stickyuploads/utils.py | repo: caktus/django-sticky-uploads @ a575396 | license: BSD-3-Clause
from __future__ import unicode_literals
import os

from django.core import signing
from django.core.exceptions import ImproperlyConfigured
from django.core.files.storage import get_storage_class
from django.utils.functional import LazyObject


def serialize_upload(name, storage, url):
    """
    Serialize uploaded file by name and storage. Namespaced by the upload url.
    """
    if isinstance(storage, LazyObject):
        # Unwrap lazy storage class
        storage._setup()
        cls = storage._wrapped.__class__
    else:
        cls = storage.__class__
    return signing.dumps({
        'name': name,
        'storage': '%s.%s' % (cls.__module__, cls.__name__)
    }, salt=url)


def deserialize_upload(value, url):
    """
    Restore file name and storage from serialized value and the upload url.
    """
    result = {'name': None, 'storage': None}
    try:
        result = signing.loads(value, salt=url)
    except signing.BadSignature:
        # TODO: Log invalid signature
        pass
    else:
        try:
            result['storage'] = get_storage_class(result['storage'])
        except (ImproperlyConfigured, ImportError):
            # TODO: Log invalid class
            result = {'name': None, 'storage': None}
    return result


def open_stored_file(value, url):
    """
    Deserialize value for a given upload url and return open file.
    Returns None if deserialization fails.
    """
    upload = None
    result = deserialize_upload(value, url)
    filename = result['name']
    storage_class = result['storage']
    if storage_class and filename:
        storage = storage_class()
        if storage.exists(filename):
            upload = storage.open(filename)
            upload.name = os.path.basename(filename)
    return upload
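`signing.dumps`/`signing.loads` give a salted, tamper-evident round trip. The same idea with only the standard library (an illustration of the mechanism, not Django's actual wire format):

```python
import base64
import hashlib
import hmac
import json

SECRET = b"example-secret"  # stand-in for Django's SECRET_KEY

def dumps(obj, salt):
    payload = base64.urlsafe_b64encode(json.dumps(obj).encode()).decode()
    sig = hmac.new(SECRET + salt.encode(), payload.encode(), hashlib.sha256).hexdigest()
    return payload + ":" + sig

def loads(value, salt):
    payload, sig = value.rsplit(":", 1)
    expected = hmac.new(SECRET + salt.encode(), payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    return json.loads(base64.urlsafe_b64decode(payload))

token = dumps({"name": "report.pdf"}, salt="/upload/")
print(loads(token, salt="/upload/"))  # {'name': 'report.pdf'}
```

Salting by the upload URL, as the Django helpers do, means a token minted for one form cannot be replayed against another.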
# File: gnuradio-3.7.13.4/gr-qtgui/apps/plot_spectrogram_base.py | repo: v1259397/cosmic-gnuradio @ 64c1495 | license: BSD-3-Clause
#!/usr/bin/env python
#
# Copyright 2013 Free Software Foundation, Inc.
#
# This file is part of GNU Radio
#
# GNU Radio is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 3, or (at your option)
# any later version.
#
# GNU Radio is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with GNU Radio; see the file COPYING. If not, write to
# the Free Software Foundation, Inc., 51 Franklin Street,
# Boston, MA 02110-1301, USA.
#
from gnuradio import gr, blocks
from gnuradio.eng_option import eng_option
from optparse import OptionParser
import os, sys
try:
    from gnuradio import qtgui
    from PyQt4 import QtGui, QtCore
    import sip
except ImportError:
    print "Error: Program requires PyQt4 and gr-qtgui."
    sys.exit(1)

try:
    import scipy
except ImportError:
    print "Error: Scipy required (www.scipy.org)."
    sys.exit(1)

try:
    from gnuradio.qtgui.plot_form import *
    from gnuradio.qtgui.plot_base import *
except ImportError:
    from plot_form import *
    from plot_base import *


class plot_base(gr.top_block):
    def __init__(self, filelist, fc, samp_rate, psdsize, start,
                 nsamples, max_nsamples, avg=1.0):
        gr.top_block.__init__(self)

        self._filelist = filelist
        self._center_freq = fc
        self._samp_rate = samp_rate
        self._psd_size = psdsize
        self._start = start
        self._max_nsamps = max_nsamples
        self._nsigs = len(self._filelist)
        self._avg = avg
        self._nsamps = nsamples
        self._auto_scale = False
        self._y_min = -200
        self._y_max = 400
        self._y_range = 130
        self._y_value = 10
        self._is_setup = False

        self.qapp = QtGui.QApplication(sys.argv)

    def setup(self):
        self.skip = blocks.skiphead(self.dsize, self._start)

        n = 0
        self.srcs = list()
        self._data_min = sys.maxint
        self._data_max = -sys.maxint - 1
        for f in self._filelist:
            data, _min, _max = self.read_samples(f, self._start,
                                                 self._nsamps, self._psd_size)
            if(_min < self._data_min):
                self._data_min = _min
            if(_max > self._data_max):
                self._data_max = _max
            self.srcs.append(self.src_type(data))

            # Set default labels based on file names
            fname = f.split("/")[-1]
            self.gui_snk.set_line_label(n, "{0}".format(fname))
            n += 1

        self.connect(self.srcs[0], self.skip)
        self.connect(self.skip, (self.gui_snk, 0))
        for i, s in enumerate(self.srcs[1:]):
            self.connect(s, (self.gui_snk, i + 1))

        self.gui_snk.set_update_time(0)
        self.gui_snk.set_time_per_fft(self._psd_size / self._samp_rate)
        self.gui_snk.enable_menu(False)
        self.gui_snk.set_fft_average(self._avg)

        # Get Python Qt references
        pyQt = self.gui_snk.pyqwidget()
        self.pyWin = sip.wrapinstance(pyQt, QtGui.QWidget)

        self._is_setup = True

    def is_setup(self):
        return self._is_setup

    def set_y_axis(self, y_min, y_max):
        self.gui_snk.set_intensity_range(y_min, y_max)
        return y_min, y_max

    def get_gui(self):
        if(self.is_setup()):
            return self.pyWin
        else:
            return None

    def reset(self, newstart, newnsamps):
        self.stop()
        self.wait()

        self.gui_snk.clear_data()
        self.gui_snk.set_time_per_fft(self._psd_size / self._samp_rate)

        self._start = newstart
        self._nsamps = newnsamps

        self._data_min = sys.maxint
        self._data_max = -sys.maxint - 1
        for s, f in zip(self.srcs, self._filelist):
            data, _min, _max = self.read_samples(f, self._start, newnsamps, self._psd_size)
            if(_min < self._data_min):
                self._data_min = _min
            if(_max > self._data_max):
                self._data_max = _max
            s.set_data(data)

        self.start()


def setup_options(desc):
    parser = OptionParser(option_class=eng_option, description=desc,
                          conflict_handler="resolve")
    parser.add_option("-N", "--nsamples", type="int", default=1000000,
                      help="Set the number of samples to display [default=%default]")
    parser.add_option("-S", "--start", type="int", default=0,
                      help="Starting sample number [default=%default]")
    parser.add_option("-L", "--psd-size", type="int", default=2048,
                      help="Set the FFT size of the PSD [default=%default]")
    parser.add_option("-f", "--center-frequency", type="eng_float", default=0.0,
                      help="Set the center frequency of the signal [default=%default]")
    parser.add_option("-r", "--sample-rate", type="eng_float", default=1.0,
                      help="Set the sample rate of the signal [default=%default]")
    parser.add_option("-a", "--average", type="float", default=1.0,
                      help="Set amount of averaging (smaller=more averaging) [default=%default]")

    (options, args) = parser.parse_args()
    if(len(args) < 1):
        parser.print_help()
        sys.exit(0)

    return (options, args)
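This is a Python 2 file, so it uses `optparse`; in Python 3 the same option set is usually written with `argparse` (an equivalent sketch, not part of GNU Radio; note plain `float` here loses `eng_float`'s engineering-notation suffixes like `1M`):

```python
import argparse

def setup_args(desc, argv=None):
    parser = argparse.ArgumentParser(description=desc)
    parser.add_argument("-N", "--nsamples", type=int, default=1000000,
                        help="Set the number of samples to display")
    parser.add_argument("-S", "--start", type=int, default=0,
                        help="Starting sample number")
    parser.add_argument("-L", "--psd-size", type=int, default=2048,
                        help="Set the FFT size of the PSD")
    parser.add_argument("-f", "--center-frequency", type=float, default=0.0,
                        help="Set the center frequency of the signal")
    parser.add_argument("-r", "--sample-rate", type=float, default=1.0,
                        help="Set the sample rate of the signal")
    parser.add_argument("-a", "--average", type=float, default=1.0,
                        help="Set amount of averaging (smaller=more averaging)")
    return parser.parse_args(argv)

print(setup_args("demo", argv=["-N", "500"]).nsamples)  # 500
```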
# File: alipay/aop/api/domain/TaxReceiptOnceInfo.py | repo: antopen/alipay-sdk-python-all @ 8e51c54 | license: Apache-2.0
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import json

from alipay.aop.api.constant.ParamConstants import *


class TaxReceiptOnceInfo(object):

    def __init__(self):
        self._cognizant_mobile = None
        self._ep_tax_id = None

    @property
    def cognizant_mobile(self):
        return self._cognizant_mobile

    @cognizant_mobile.setter
    def cognizant_mobile(self, value):
        self._cognizant_mobile = value

    @property
    def ep_tax_id(self):
        return self._ep_tax_id

    @ep_tax_id.setter
    def ep_tax_id(self, value):
        self._ep_tax_id = value

    def to_alipay_dict(self):
        params = dict()
        if self.cognizant_mobile:
            if hasattr(self.cognizant_mobile, 'to_alipay_dict'):
                params['cognizant_mobile'] = self.cognizant_mobile.to_alipay_dict()
            else:
                params['cognizant_mobile'] = self.cognizant_mobile
        if self.ep_tax_id:
            if hasattr(self.ep_tax_id, 'to_alipay_dict'):
                params['ep_tax_id'] = self.ep_tax_id.to_alipay_dict()
            else:
                params['ep_tax_id'] = self.ep_tax_id
        return params

    @staticmethod
    def from_alipay_dict(d):
        if not d:
            return None
        o = TaxReceiptOnceInfo()
        if 'cognizant_mobile' in d:
            o.cognizant_mobile = d['cognizant_mobile']
        if 'ep_tax_id' in d:
            o.ep_tax_id = d['ep_tax_id']
        return o
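The class above is one of many generated domain objects following the same to-dict/from-dict round-trip pattern. Reduced to a dependency-free sketch (`OnceInfo` and its field names are illustrative, not the SDK's API):

```python
class OnceInfo:
    FIELDS = ("cognizant_mobile", "ep_tax_id")

    def __init__(self):
        for f in self.FIELDS:
            setattr(self, f, None)

    def to_dict(self):
        # serialize only the fields that were actually set
        return {f: getattr(self, f) for f in self.FIELDS if getattr(self, f) is not None}

    @staticmethod
    def from_dict(d):
        if not d:
            return None
        o = OnceInfo()
        for f in OnceInfo.FIELDS:
            if f in d:
                setattr(o, f, d[f])
        return o

o = OnceInfo.from_dict({"ep_tax_id": "91310000"})
print(o.to_dict())  # {'ep_tax_id': '91310000'}
```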
# File: scripts/rgb2labels.py | repo: theRealSuperMario/supermariopy @ 9fff827 | license: MIT
import numpy as np
from matplotlib import pyplot as plt

"""
https://stackoverflow.com/questions/42750910/convert-rgb-image-to-index-image/62980021#62980021
convert semantic labels from RGB coding to index coding
Steps:
    1. define COLORS (see below)
    2. hash colors
    3. run rgb2index(segmentation_rgb)
see example below
TODO: apparently, using cv2.LUT is much simpler (and maybe faster?)
"""

COLORS = np.array([[0, 0, 0], [0, 0, 255], [255, 0, 0], [0, 255, 0]])
W = np.power(255, [0, 1, 2])
HASHES = np.sum(W * COLORS, axis=-1)
HASH2COLOR = {h: c for h, c in zip(HASHES, COLORS)}
HASH2IDX = {h: i for i, h in enumerate(HASHES)}


def rgb2index(segmentation_rgb):
    """
    turn a 3 channel RGB color to 1 channel index color
    """
    s_shape = segmentation_rgb.shape
    s_hashes = np.sum(W * segmentation_rgb, axis=-1)
    print(np.unique(segmentation_rgb.reshape((-1, 3)), axis=0))
    func = lambda x: HASH2IDX[int(x)]  # noqa
    segmentation_idx = np.apply_along_axis(func, 0, s_hashes.reshape((1, -1)))
    segmentation_idx = segmentation_idx.reshape(s_shape[:2])
    return segmentation_idx


segmentation = np.array([[0, 0, 0], [0, 0, 255], [255, 0, 0]] * 3).reshape((3, 3, 3))
segmentation_idx = rgb2index(segmentation)
print(segmentation)
print(segmentation_idx)

fig, axes = plt.subplots(1, 2, figsize=(6, 3))
axes[0].imshow(segmentation)
axes[0].set_title("Segmentation RGB")
axes[1].imshow(segmentation_idx)
axes[1].set_title("Segmentation IDX")
plt.show()
```
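The script's TODO mentions that hashing plus `apply_along_axis` is not the only option. A fully vectorized alternative (a sketch, not from the original file) compares each pixel against every known color via broadcasting; note that pixels matching no known color silently fall back to index 0:

```python
import numpy as np

COLORS = np.array([[0, 0, 0], [0, 0, 255], [255, 0, 0], [0, 255, 0]])

def rgb2index_vectorized(segmentation_rgb):
    # (H, W, 1, 3) == (4, 3) broadcasts to (H, W, 4, 3); all() over the
    # channel axis gives a per-pixel match mask, argmax picks the index.
    matches = (segmentation_rgb[..., None, :] == COLORS).all(axis=-1)
    return matches.argmax(axis=-1)

seg = np.array([[0, 0, 0], [0, 0, 255], [255, 0, 0]] * 3).reshape((3, 3, 3))
print(rgb2index_vectorized(seg))
```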
# File: setup.py | repo: drrobotk/multilateral_index_calc @ 7b1cf2f | license: MIT
from gettext import find
from setuptools import setup, find_packages

setup(
    name='PriceIndexCalc',
    version='0.1-dev9',
    description='Price Index Calculator using bilateral and multilateral methods',
    author='Dr. Usman Kayani',
    url='https://github.com/drrobotk/PriceIndexCalc',
    author_email='usman.kayani@ons.gov.uk',
    license='MIT',
    packages=find_packages(where="src"),
    package_dir={'': 'src'},
    install_requires=['pandas', 'numpy', 'scipy'],
    include_package_data=True,
)
# File: cryptofeed_werks/bigquery_storage/constants.py | repo: globophobe/crypto-tick-data @ 7ec5d1e | license: MIT
import os
try:
    from google.cloud import bigquery  # noqa
except ImportError:
    BIGQUERY = False
else:
    BIGQUERY = True

GOOGLE_APPLICATION_CREDENTIALS = "GOOGLE_APPLICATION_CREDENTIALS"
BIGQUERY_LOCATION = "BIGQUERY_LOCATION"
BIGQUERY_DATASET = "BIGQUERY_DATASET"


def use_bigquery():
    return (
        BIGQUERY
        and os.environ.get(GOOGLE_APPLICATION_CREDENTIALS)
        and os.environ.get(BIGQUERY_LOCATION)
        # os.environ is not callable; the original was missing .get() here
        and os.environ.get(BIGQUERY_DATASET)
    )
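`use_bigquery` is an environment-variable feature gate: the library is importable and every required variable is set. The general shape, separated from the BigQuery specifics (the `DEMO_*` variable names below are made up for the demo):

```python
import os

def feature_enabled(*required_vars):
    # enabled only when every required environment variable is set and non-empty
    return all(os.environ.get(v) for v in required_vars)

os.environ["DEMO_LOCATION"] = "EU"
os.environ.pop("DEMO_DATASET", None)
print(feature_enabled("DEMO_LOCATION"))                  # True
print(feature_enabled("DEMO_LOCATION", "DEMO_DATASET"))  # False
```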
# File: leetcode/87. Scramble String.py | repo: CSU-FulChou/IOS_er @ 4286677 | license: MIT
# 2021.04.16 hard:
from collections import Counter
from functools import cache  # Python 3.9+; use functools.lru_cache(None) on older versions


class Solution:
    def isScramble(self, s1: str, s2: str) -> bool:
        '''
        DP problem:
        1. The substrings must be equally long; that is already guaranteed.
           If the substrings are equal, return True immediately.
        2. The substrings must contain the same letters in the same counts: Counter().
        At each step there are two ways to split, swapped or not, recursing down;
        getting these two split variants right is the key.
        '''
        @cache
        def dfs(idx1, idx2, length):
            if s1[idx1:length + idx1] == s2[idx2:idx2 + length]:
                return True
            if Counter(s1[idx1:length + idx1]) != Counter(s2[idx2:idx2 + length]):
                return False
            for i in range(1, length):
                # no swap; mind the idx1/idx2 offsets passed here
                if dfs(idx1, idx2, i) and dfs(idx1 + i, idx2 + i, length - i):
                    return True
                # swapped halves
                if dfs(idx1, idx2 + length - i, i) and dfs(idx1 + i, idx2, length - i):
                    return True
            return False

        res = dfs(0, 0, len(s1))
        dfs.cache_clear()
        # print(res)
        return res
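As a quick check of the recursion above, here is a standalone restatement (not from the original file) using `lru_cache` for portability, run on the classic LeetCode examples:

```python
from collections import Counter
from functools import lru_cache

def is_scramble(s1: str, s2: str) -> bool:
    @lru_cache(maxsize=None)
    def dfs(i1: int, i2: int, length: int) -> bool:
        a, b = s1[i1:i1 + length], s2[i2:i2 + length]
        if a == b:
            return True
        if Counter(a) != Counter(b):
            return False
        for i in range(1, length):
            # keep order: left vs left, right vs right
            if dfs(i1, i2, i) and dfs(i1 + i, i2 + i, length - i):
                return True
            # swap halves: left vs right, right vs left
            if dfs(i1, i2 + length - i, i) and dfs(i1 + i, i2, length - i):
                return True
        return False
    return dfs(0, 0, len(s1))

print(is_scramble("great", "rgeat"))   # True
print(is_scramble("abcde", "caebd"))   # False
```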
# File: packaging/setup/plugins/ovirt-engine-setup/all-in-one/super_user.py | repo: SunOfShine/ovirt-engine @ 7684597 | license: Apache-2.0
#
# ovirt-engine-setup -- ovirt engine setup
# Copyright (C) 2013 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
AIO super user password plugin.
"""
import gettext
_ = lambda m: gettext.dgettext(message=m, domain='ovirt-engine-setup')
from otopi import util
from otopi import plugin
from otopi import constants as otopicons
from ovirt_engine_setup import constants as osetupcons
@util.export
class Plugin(plugin.PluginBase):
    """
    AIO super user password plugin.
    """

    def __init__(self, context):
        super(Plugin, self).__init__(context=context)

    def _validateUserPasswd(self, host, user, password):
        valid = False
        import paramiko
        try:
            cli = paramiko.SSHClient()
            cli.set_missing_host_key_policy(paramiko.AutoAddPolicy())
            cli.connect(
                hostname=host,
                username=user,
                password=password
            )
            valid = True
        except paramiko.AuthenticationException:
            pass
        finally:
            cli.close()
        return valid

    @plugin.event(
        stage=plugin.Stages.STAGE_INIT,
    )
    def _init(self):
        self.environment.setdefault(
            osetupcons.AIOEnv.ROOT_PASSWORD,
            None
        )

    @plugin.event(
        stage=plugin.Stages.STAGE_CUSTOMIZATION,
        condition=lambda self: self.environment[
            osetupcons.AIOEnv.CONFIGURE
        ],
        name=osetupcons.Stages.AIO_CONFIG_ROOT_PASSWORD
    )
    def _customization(self):
        interactive = (
            self.environment[osetupcons.AIOEnv.ROOT_PASSWORD] is None
        )
        while self.environment[osetupcons.AIOEnv.ROOT_PASSWORD] is None:
            password = self.dialog.queryString(
                name='AIO_ROOT_PASSWORD',
                note=_("Enter 'root' user password: "),
                prompt=True,
                hidden=True,
            )
            if self._validateUserPasswd(
                host='localhost',
                user='root',
                password=password
            ):
                self.environment[osetupcons.AIOEnv.ROOT_PASSWORD] = password
            else:
                if interactive:
                    self.logger.error(_('Wrong root password, try again'))
                else:
                    raise RuntimeError(_('Wrong root password'))

        self.environment[otopicons.CoreEnv.LOG_FILTER].append(
            self.environment[osetupcons.AIOEnv.ROOT_PASSWORD]
        )


# vim: expandtab tabstop=4 shiftwidth=4
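The customization flow above is a general prompt-validate-retry loop: keep asking while interactive, fail fast otherwise. Reduced to a sketch without otopi or paramiko (names below are illustrative):

```python
def prompt_until_valid(prompt_fn, validate_fn, interactive=True, max_tries=3):
    for _ in range(max_tries):
        candidate = prompt_fn()
        if validate_fn(candidate):
            return candidate
        if not interactive:
            # non-interactive runs cannot re-prompt, so fail immediately
            raise RuntimeError("Wrong root password")
    raise RuntimeError("Too many attempts")

answers = iter(["wrong", "s3cret"])
pw = prompt_until_valid(lambda: next(answers), lambda p: p == "s3cret")
print(pw)  # s3cret
```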
# File: create.py | repo: devanshsharma22/ONE @ 27450ff | license: CC-BY-3.0
from flask import Flask
from models import *
import os

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = os.environ.get('DATABASE_URL')
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
db.init_app(app)


def main():
    db.create_all()


if __name__ == "__main__":
    with app.app_context():
        main()
# File: jmetex/main.py | repo: innovocloud/jmetex @ 5e7c4d9 | license: Apache-2.0
import sys
import time
import argparse

from prometheus_client import start_http_server, Metric, REGISTRY, Summary

from .interfacecollector import InterfaceCollector
from .opticalcollector import OpticalCollector


def main():
    parser = argparse.ArgumentParser(description='JunOS API to Prometheus exporter')
    parser.add_argument('--port', type=int, required=True,
                        help='listen port')
    parser.add_argument('--instance', type=str, required=True,
                        help='instance name')
    parser.add_argument('--rpc_url', type=str, required=True,
                        help='URL of the junos RPC endpoint')
    parser.add_argument('--user', type=str, required=True,
                        help='junos user name')
    parser.add_argument('--password', type=str, required=True,
                        help='junos password')
    args = parser.parse_args()

    start_http_server(args.port)
    REGISTRY.register(InterfaceCollector(args.instance, args.rpc_url, args.user, args.password))
    REGISTRY.register(OpticalCollector(args.instance, args.rpc_url, args.user, args.password))
    while True:
        time.sleep(1)
# File: python/ray/rllib/models/tf/tf_modelv2.py | repo: alex-petrenko/ray @ dfc94ce | license: Apache-2.0
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from ray.rllib.models.modelv2 import ModelV2
from ray.rllib.utils import try_import_tf
tf = try_import_tf()
class TFModelV2(ModelV2):
    """TF version of ModelV2."""

    def __init__(self, obs_space, action_space, output_spec, model_config,
                 name):
        ModelV2.__init__(
            self,
            obs_space,
            action_space,
            output_spec,
            model_config,
            name,
            framework="tf")
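The subclass above does nothing except forward its constructor arguments to the base class while pinning `framework="tf"`, so framework-agnostic code can branch on `self.framework`. A dependency-free mimic of that pattern (class names here are illustrative, not part of RLlib):

```python
class BaseModel:
    """Toy stand-in for ModelV2: records constructor args and the framework."""

    def __init__(self, obs_space, action_space, output_spec, model_config,
                 name, framework=None):
        self.obs_space = obs_space
        self.action_space = action_space
        self.output_spec = output_spec
        self.model_config = model_config
        self.name = name
        self.framework = framework


class TFModel(BaseModel):
    """Forwards everything to the base class with the framework pinned."""

    def __init__(self, obs_space, action_space, output_spec, model_config, name):
        BaseModel.__init__(self, obs_space, action_space, output_spec,
                           model_config, name, framework="tf")


m = TFModel((4,), (2,), 2, {}, "my_model")
```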
# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
"""Client and server classes corresponding to protobuf-defined services."""
import grpc
from google.protobuf import empty_pb2 as google_dot_protobuf_dot_empty__pb2
import runtime_pb2 as runtime__pb2
class RuntimeStub(object):
    """Missing associated documentation comment in .proto file."""

    def __init__(self, channel):
        """Constructor.

        Args:
            channel: A grpc.Channel.
        """
        self.SayHello = channel.unary_unary(
            '/spec.proto.runtime.v1.Runtime/SayHello',
            request_serializer=runtime__pb2.SayHelloRequest.SerializeToString,
            response_deserializer=runtime__pb2.SayHelloResponse.FromString)
        self.InvokeService = channel.unary_unary(
            '/spec.proto.runtime.v1.Runtime/InvokeService',
            request_serializer=runtime__pb2.InvokeServiceRequest.SerializeToString,
            response_deserializer=runtime__pb2.InvokeResponse.FromString)
        self.GetConfiguration = channel.unary_unary(
            '/spec.proto.runtime.v1.Runtime/GetConfiguration',
            request_serializer=runtime__pb2.GetConfigurationRequest.SerializeToString,
            response_deserializer=runtime__pb2.GetConfigurationResponse.FromString)
        self.SaveConfiguration = channel.unary_unary(
            '/spec.proto.runtime.v1.Runtime/SaveConfiguration',
            request_serializer=runtime__pb2.SaveConfigurationRequest.SerializeToString,
            response_deserializer=google_dot_protobuf_dot_empty__pb2.Empty.FromString)
        self.DeleteConfiguration = channel.unary_unary(
            '/spec.proto.runtime.v1.Runtime/DeleteConfiguration',
            request_serializer=runtime__pb2.DeleteConfigurationRequest.SerializeToString,
            response_deserializer=google_dot_protobuf_dot_empty__pb2.Empty.FromString)
        self.SubscribeConfiguration = channel.stream_stream(
            '/spec.proto.runtime.v1.Runtime/SubscribeConfiguration',
            request_serializer=runtime__pb2.SubscribeConfigurationRequest.SerializeToString,
            response_deserializer=runtime__pb2.SubscribeConfigurationResponse.FromString)
        self.TryLock = channel.unary_unary(
            '/spec.proto.runtime.v1.Runtime/TryLock',
            request_serializer=runtime__pb2.TryLockRequest.SerializeToString,
            response_deserializer=runtime__pb2.TryLockResponse.FromString)
        self.Unlock = channel.unary_unary(
            '/spec.proto.runtime.v1.Runtime/Unlock',
            request_serializer=runtime__pb2.UnlockRequest.SerializeToString,
            response_deserializer=runtime__pb2.UnlockResponse.FromString)
        self.GetNextId = channel.unary_unary(
            '/spec.proto.runtime.v1.Runtime/GetNextId',
            request_serializer=runtime__pb2.GetNextIdRequest.SerializeToString,
            response_deserializer=runtime__pb2.GetNextIdResponse.FromString)
        self.GetState = channel.unary_unary(
            '/spec.proto.runtime.v1.Runtime/GetState',
            request_serializer=runtime__pb2.GetStateRequest.SerializeToString,
            response_deserializer=runtime__pb2.GetStateResponse.FromString)
        self.GetBulkState = channel.unary_unary(
            '/spec.proto.runtime.v1.Runtime/GetBulkState',
            request_serializer=runtime__pb2.GetBulkStateRequest.SerializeToString,
            response_deserializer=runtime__pb2.GetBulkStateResponse.FromString)
        self.SaveState = channel.unary_unary(
            '/spec.proto.runtime.v1.Runtime/SaveState',
            request_serializer=runtime__pb2.SaveStateRequest.SerializeToString,
            response_deserializer=google_dot_protobuf_dot_empty__pb2.Empty.FromString)
        self.DeleteState = channel.unary_unary(
            '/spec.proto.runtime.v1.Runtime/DeleteState',
            request_serializer=runtime__pb2.DeleteStateRequest.SerializeToString,
            response_deserializer=google_dot_protobuf_dot_empty__pb2.Empty.FromString)
        self.DeleteBulkState = channel.unary_unary(
            '/spec.proto.runtime.v1.Runtime/DeleteBulkState',
            request_serializer=runtime__pb2.DeleteBulkStateRequest.SerializeToString,
            response_deserializer=google_dot_protobuf_dot_empty__pb2.Empty.FromString)
        self.ExecuteStateTransaction = channel.unary_unary(
            '/spec.proto.runtime.v1.Runtime/ExecuteStateTransaction',
            request_serializer=runtime__pb2.ExecuteStateTransactionRequest.SerializeToString,
            response_deserializer=google_dot_protobuf_dot_empty__pb2.Empty.FromString)
        self.PublishEvent = channel.unary_unary(
            '/spec.proto.runtime.v1.Runtime/PublishEvent',
            request_serializer=runtime__pb2.PublishEventRequest.SerializeToString,
            response_deserializer=google_dot_protobuf_dot_empty__pb2.Empty.FromString)
        self.GetFile = channel.unary_stream(
            '/spec.proto.runtime.v1.Runtime/GetFile',
            request_serializer=runtime__pb2.GetFileRequest.SerializeToString,
            response_deserializer=runtime__pb2.GetFileResponse.FromString)
        self.PutFile = channel.stream_unary(
            '/spec.proto.runtime.v1.Runtime/PutFile',
            request_serializer=runtime__pb2.PutFileRequest.SerializeToString,
            response_deserializer=google_dot_protobuf_dot_empty__pb2.Empty.FromString)
        self.ListFile = channel.unary_unary(
            '/spec.proto.runtime.v1.Runtime/ListFile',
            request_serializer=runtime__pb2.ListFileRequest.SerializeToString,
            response_deserializer=runtime__pb2.ListFileResp.FromString)
        self.DelFile = channel.unary_unary(
            '/spec.proto.runtime.v1.Runtime/DelFile',
            request_serializer=runtime__pb2.DelFileRequest.SerializeToString,
            response_deserializer=google_dot_protobuf_dot_empty__pb2.Empty.FromString)
        self.GetFileMeta = channel.unary_unary(
            '/spec.proto.runtime.v1.Runtime/GetFileMeta',
            request_serializer=runtime__pb2.GetFileMetaRequest.SerializeToString,
            response_deserializer=runtime__pb2.GetFileMetaResponse.FromString)
        self.InvokeBinding = channel.unary_unary(
            '/spec.proto.runtime.v1.Runtime/InvokeBinding',
            request_serializer=runtime__pb2.InvokeBindingRequest.SerializeToString,
            response_deserializer=runtime__pb2.InvokeBindingResponse.FromString)
class RuntimeServicer(object):
    """Missing associated documentation comment in .proto file."""

    def SayHello(self, request, context):
        """SayHello is used for testing."""
        context.set_code(grpc.StatusCode.UNIMPLEMENTED)
        context.set_details('Method not implemented!')
        raise NotImplementedError('Method not implemented!')

    def InvokeService(self, request, context):
        """InvokeService performs RPC calls."""
        context.set_code(grpc.StatusCode.UNIMPLEMENTED)
        context.set_details('Method not implemented!')
        raise NotImplementedError('Method not implemented!')

    def GetConfiguration(self, request, context):
        """GetConfiguration gets configuration from the configuration store."""
        context.set_code(grpc.StatusCode.UNIMPLEMENTED)
        context.set_details('Method not implemented!')
        raise NotImplementedError('Method not implemented!')

    def SaveConfiguration(self, request, context):
        """SaveConfiguration saves configuration into the configuration store."""
        context.set_code(grpc.StatusCode.UNIMPLEMENTED)
        context.set_details('Method not implemented!')
        raise NotImplementedError('Method not implemented!')

    def DeleteConfiguration(self, request, context):
        """DeleteConfiguration deletes configuration from the configuration store."""
        context.set_code(grpc.StatusCode.UNIMPLEMENTED)
        context.set_details('Method not implemented!')
        raise NotImplementedError('Method not implemented!')

    def SubscribeConfiguration(self, request_iterator, context):
        """SubscribeConfiguration gets configuration from the configuration store and subscribes to updates."""
        context.set_code(grpc.StatusCode.UNIMPLEMENTED)
        context.set_details('Method not implemented!')
        raise NotImplementedError('Method not implemented!')

    def TryLock(self, request, context):
        """Distributed Lock API.

        A non-blocking method that tries to acquire a lock with a TTL.
        """
        context.set_code(grpc.StatusCode.UNIMPLEMENTED)
        context.set_details('Method not implemented!')
        raise NotImplementedError('Method not implemented!')

    def Unlock(self, request, context):
        """Missing associated documentation comment in .proto file."""
        context.set_code(grpc.StatusCode.UNIMPLEMENTED)
        context.set_details('Method not implemented!')
        raise NotImplementedError('Method not implemented!')

    def GetNextId(self, request, context):
        """Sequencer API.

        Gets the next unique id with some auto-increment guarantee.
        """
        context.set_code(grpc.StatusCode.UNIMPLEMENTED)
        context.set_details('Method not implemented!')
        raise NotImplementedError('Method not implemented!')

    def GetState(self, request, context):
        """Gets the state for a specific key.

        Below are the APIs compatible with Dapr. We try to keep them the same
        as Dapr's because we want to work with Dapr to build an API spec for
        cloud-native runtimes, like CloudEvents for event data.
        """
        context.set_code(grpc.StatusCode.UNIMPLEMENTED)
        context.set_details('Method not implemented!')
        raise NotImplementedError('Method not implemented!')

    def GetBulkState(self, request, context):
        """Gets a bulk of state items for a list of keys."""
        context.set_code(grpc.StatusCode.UNIMPLEMENTED)
        context.set_details('Method not implemented!')
        raise NotImplementedError('Method not implemented!')

    def SaveState(self, request, context):
        """Saves an array of state objects."""
        context.set_code(grpc.StatusCode.UNIMPLEMENTED)
        context.set_details('Method not implemented!')
        raise NotImplementedError('Method not implemented!')

    def DeleteState(self, request, context):
        """Deletes the state for a specific key."""
        context.set_code(grpc.StatusCode.UNIMPLEMENTED)
        context.set_details('Method not implemented!')
        raise NotImplementedError('Method not implemented!')

    def DeleteBulkState(self, request, context):
        """Deletes a bulk of state items for a list of keys."""
        context.set_code(grpc.StatusCode.UNIMPLEMENTED)
        context.set_details('Method not implemented!')
        raise NotImplementedError('Method not implemented!')

    def ExecuteStateTransaction(self, request, context):
        """Executes transactions for a specified store."""
        context.set_code(grpc.StatusCode.UNIMPLEMENTED)
        context.set_details('Method not implemented!')
        raise NotImplementedError('Method not implemented!')

    def PublishEvent(self, request, context):
        """Publishes events to the specified topic."""
        context.set_code(grpc.StatusCode.UNIMPLEMENTED)
        context.set_details('Method not implemented!')
        raise NotImplementedError('Method not implemented!')

    def GetFile(self, request, context):
        """Gets a file with a stream."""
        context.set_code(grpc.StatusCode.UNIMPLEMENTED)
        context.set_details('Method not implemented!')
        raise NotImplementedError('Method not implemented!')

    def PutFile(self, request_iterator, context):
        """Puts a file with a stream."""
        context.set_code(grpc.StatusCode.UNIMPLEMENTED)
        context.set_details('Method not implemented!')
        raise NotImplementedError('Method not implemented!')

    def ListFile(self, request, context):
        """Lists all files."""
        context.set_code(grpc.StatusCode.UNIMPLEMENTED)
        context.set_details('Method not implemented!')
        raise NotImplementedError('Method not implemented!')

    def DelFile(self, request, context):
        """Deletes a specific file."""
        context.set_code(grpc.StatusCode.UNIMPLEMENTED)
        context.set_details('Method not implemented!')
        raise NotImplementedError('Method not implemented!')

    def GetFileMeta(self, request, context):
        """Gets file metadata; if the file does not exist, returns a code.NotFound error."""
        context.set_code(grpc.StatusCode.UNIMPLEMENTED)
        context.set_details('Method not implemented!')
        raise NotImplementedError('Method not implemented!')

    def InvokeBinding(self, request, context):
        """Invokes binding data against specific output bindings."""
        context.set_code(grpc.StatusCode.UNIMPLEMENTED)
        context.set_details('Method not implemented!')
        raise NotImplementedError('Method not implemented!')
def add_RuntimeServicer_to_server(servicer, server):
    rpc_method_handlers = {
        'SayHello': grpc.unary_unary_rpc_method_handler(
            servicer.SayHello,
            request_deserializer=runtime__pb2.SayHelloRequest.FromString,
            response_serializer=runtime__pb2.SayHelloResponse.SerializeToString),
        'InvokeService': grpc.unary_unary_rpc_method_handler(
            servicer.InvokeService,
            request_deserializer=runtime__pb2.InvokeServiceRequest.FromString,
            response_serializer=runtime__pb2.InvokeResponse.SerializeToString),
        'GetConfiguration': grpc.unary_unary_rpc_method_handler(
            servicer.GetConfiguration,
            request_deserializer=runtime__pb2.GetConfigurationRequest.FromString,
            response_serializer=runtime__pb2.GetConfigurationResponse.SerializeToString),
        'SaveConfiguration': grpc.unary_unary_rpc_method_handler(
            servicer.SaveConfiguration,
            request_deserializer=runtime__pb2.SaveConfigurationRequest.FromString,
            response_serializer=google_dot_protobuf_dot_empty__pb2.Empty.SerializeToString),
        'DeleteConfiguration': grpc.unary_unary_rpc_method_handler(
            servicer.DeleteConfiguration,
            request_deserializer=runtime__pb2.DeleteConfigurationRequest.FromString,
            response_serializer=google_dot_protobuf_dot_empty__pb2.Empty.SerializeToString),
        'SubscribeConfiguration': grpc.stream_stream_rpc_method_handler(
            servicer.SubscribeConfiguration,
            request_deserializer=runtime__pb2.SubscribeConfigurationRequest.FromString,
            response_serializer=runtime__pb2.SubscribeConfigurationResponse.SerializeToString),
        'TryLock': grpc.unary_unary_rpc_method_handler(
            servicer.TryLock,
            request_deserializer=runtime__pb2.TryLockRequest.FromString,
            response_serializer=runtime__pb2.TryLockResponse.SerializeToString),
        'Unlock': grpc.unary_unary_rpc_method_handler(
            servicer.Unlock,
            request_deserializer=runtime__pb2.UnlockRequest.FromString,
            response_serializer=runtime__pb2.UnlockResponse.SerializeToString),
        'GetNextId': grpc.unary_unary_rpc_method_handler(
            servicer.GetNextId,
            request_deserializer=runtime__pb2.GetNextIdRequest.FromString,
            response_serializer=runtime__pb2.GetNextIdResponse.SerializeToString),
        'GetState': grpc.unary_unary_rpc_method_handler(
            servicer.GetState,
            request_deserializer=runtime__pb2.GetStateRequest.FromString,
            response_serializer=runtime__pb2.GetStateResponse.SerializeToString),
        'GetBulkState': grpc.unary_unary_rpc_method_handler(
            servicer.GetBulkState,
            request_deserializer=runtime__pb2.GetBulkStateRequest.FromString,
            response_serializer=runtime__pb2.GetBulkStateResponse.SerializeToString),
        'SaveState': grpc.unary_unary_rpc_method_handler(
            servicer.SaveState,
            request_deserializer=runtime__pb2.SaveStateRequest.FromString,
            response_serializer=google_dot_protobuf_dot_empty__pb2.Empty.SerializeToString),
        'DeleteState': grpc.unary_unary_rpc_method_handler(
            servicer.DeleteState,
            request_deserializer=runtime__pb2.DeleteStateRequest.FromString,
            response_serializer=google_dot_protobuf_dot_empty__pb2.Empty.SerializeToString),
        'DeleteBulkState': grpc.unary_unary_rpc_method_handler(
            servicer.DeleteBulkState,
            request_deserializer=runtime__pb2.DeleteBulkStateRequest.FromString,
            response_serializer=google_dot_protobuf_dot_empty__pb2.Empty.SerializeToString),
        'ExecuteStateTransaction': grpc.unary_unary_rpc_method_handler(
            servicer.ExecuteStateTransaction,
            request_deserializer=runtime__pb2.ExecuteStateTransactionRequest.FromString,
            response_serializer=google_dot_protobuf_dot_empty__pb2.Empty.SerializeToString),
        'PublishEvent': grpc.unary_unary_rpc_method_handler(
            servicer.PublishEvent,
            request_deserializer=runtime__pb2.PublishEventRequest.FromString,
            response_serializer=google_dot_protobuf_dot_empty__pb2.Empty.SerializeToString),
        'GetFile': grpc.unary_stream_rpc_method_handler(
            servicer.GetFile,
            request_deserializer=runtime__pb2.GetFileRequest.FromString,
            response_serializer=runtime__pb2.GetFileResponse.SerializeToString),
        'PutFile': grpc.stream_unary_rpc_method_handler(
            servicer.PutFile,
            request_deserializer=runtime__pb2.PutFileRequest.FromString,
            response_serializer=google_dot_protobuf_dot_empty__pb2.Empty.SerializeToString),
        'ListFile': grpc.unary_unary_rpc_method_handler(
            servicer.ListFile,
            request_deserializer=runtime__pb2.ListFileRequest.FromString,
            response_serializer=runtime__pb2.ListFileResp.SerializeToString),
        'DelFile': grpc.unary_unary_rpc_method_handler(
            servicer.DelFile,
            request_deserializer=runtime__pb2.DelFileRequest.FromString,
            response_serializer=google_dot_protobuf_dot_empty__pb2.Empty.SerializeToString),
        'GetFileMeta': grpc.unary_unary_rpc_method_handler(
            servicer.GetFileMeta,
            request_deserializer=runtime__pb2.GetFileMetaRequest.FromString,
            response_serializer=runtime__pb2.GetFileMetaResponse.SerializeToString),
        'InvokeBinding': grpc.unary_unary_rpc_method_handler(
            servicer.InvokeBinding,
            request_deserializer=runtime__pb2.InvokeBindingRequest.FromString,
            response_serializer=runtime__pb2.InvokeBindingResponse.SerializeToString),
    }
    generic_handler = grpc.method_handlers_generic_handler(
        'spec.proto.runtime.v1.Runtime', rpc_method_handlers)
    server.add_generic_rpc_handlers((generic_handler,))
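The registration above boils down to a method-name → handler table that the server consults for each incoming RPC. A dependency-free mimic of that dispatch is sketched below; `UnaryUnaryHandler`, `EchoServicer`, and `dispatch` are illustrative stand-ins, not part of the grpc API:

```python
class UnaryUnaryHandler:
    """Stand-in for a grpc method handler: wraps the servicer behavior."""

    def __init__(self, behavior):
        self.behavior = behavior


class EchoServicer:
    """Toy servicer implementing a single method."""

    def SayHello(self, request, context):
        return 'hello ' + request


# Name -> handler table, analogous to rpc_method_handlers above.
handlers = {
    'SayHello': UnaryUnaryHandler(EchoServicer().SayHello),
}


def dispatch(method, request):
    """Look up the handler for a method name and invoke its behavior."""
    handler = handlers.get(method)
    if handler is None:
        raise KeyError('unimplemented method: ' + method)
    return handler.behavior(request, context=None)
```

In real grpc, the `method_handlers_generic_handler` wrapper additionally carries the request deserializer and response serializer for each method, so the wire bytes are decoded before and encoded after the behavior runs.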
# This class is part of an EXPERIMENTAL API.
class Runtime(object):
    """Missing associated documentation comment in .proto file."""

    @staticmethod
    def SayHello(request, target, options=(), channel_credentials=None,
                 call_credentials=None, insecure=False, compression=None,
                 wait_for_ready=None, timeout=None, metadata=None):
        return grpc.experimental.unary_unary(
            request, target, '/spec.proto.runtime.v1.Runtime/SayHello',
            runtime__pb2.SayHelloRequest.SerializeToString,
            runtime__pb2.SayHelloResponse.FromString,
            options, channel_credentials, insecure, call_credentials,
            compression, wait_for_ready, timeout, metadata)

    @staticmethod
    def InvokeService(request, target, options=(), channel_credentials=None,
                      call_credentials=None, insecure=False, compression=None,
                      wait_for_ready=None, timeout=None, metadata=None):
        return grpc.experimental.unary_unary(
            request, target, '/spec.proto.runtime.v1.Runtime/InvokeService',
            runtime__pb2.InvokeServiceRequest.SerializeToString,
            runtime__pb2.InvokeResponse.FromString,
            options, channel_credentials, insecure, call_credentials,
            compression, wait_for_ready, timeout, metadata)

    @staticmethod
    def GetConfiguration(request, target, options=(), channel_credentials=None,
                         call_credentials=None, insecure=False, compression=None,
                         wait_for_ready=None, timeout=None, metadata=None):
        return grpc.experimental.unary_unary(
            request, target, '/spec.proto.runtime.v1.Runtime/GetConfiguration',
            runtime__pb2.GetConfigurationRequest.SerializeToString,
            runtime__pb2.GetConfigurationResponse.FromString,
            options, channel_credentials, insecure, call_credentials,
            compression, wait_for_ready, timeout, metadata)

    @staticmethod
    def SaveConfiguration(request, target, options=(), channel_credentials=None,
                          call_credentials=None, insecure=False, compression=None,
                          wait_for_ready=None, timeout=None, metadata=None):
        return grpc.experimental.unary_unary(
            request, target, '/spec.proto.runtime.v1.Runtime/SaveConfiguration',
            runtime__pb2.SaveConfigurationRequest.SerializeToString,
            google_dot_protobuf_dot_empty__pb2.Empty.FromString,
            options, channel_credentials, insecure, call_credentials,
            compression, wait_for_ready, timeout, metadata)

    @staticmethod
    def DeleteConfiguration(request, target, options=(), channel_credentials=None,
                            call_credentials=None, insecure=False, compression=None,
                            wait_for_ready=None, timeout=None, metadata=None):
        return grpc.experimental.unary_unary(
            request, target, '/spec.proto.runtime.v1.Runtime/DeleteConfiguration',
            runtime__pb2.DeleteConfigurationRequest.SerializeToString,
            google_dot_protobuf_dot_empty__pb2.Empty.FromString,
            options, channel_credentials, insecure, call_credentials,
            compression, wait_for_ready, timeout, metadata)

    @staticmethod
    def SubscribeConfiguration(request_iterator, target, options=(),
                               channel_credentials=None, call_credentials=None,
                               insecure=False, compression=None,
                               wait_for_ready=None, timeout=None, metadata=None):
        return grpc.experimental.stream_stream(
            request_iterator, target,
            '/spec.proto.runtime.v1.Runtime/SubscribeConfiguration',
            runtime__pb2.SubscribeConfigurationRequest.SerializeToString,
            runtime__pb2.SubscribeConfigurationResponse.FromString,
            options, channel_credentials, insecure, call_credentials,
            compression, wait_for_ready, timeout, metadata)

    @staticmethod
    def TryLock(request, target, options=(), channel_credentials=None,
                call_credentials=None, insecure=False, compression=None,
                wait_for_ready=None, timeout=None, metadata=None):
        return grpc.experimental.unary_unary(
            request, target, '/spec.proto.runtime.v1.Runtime/TryLock',
            runtime__pb2.TryLockRequest.SerializeToString,
            runtime__pb2.TryLockResponse.FromString,
            options, channel_credentials, insecure, call_credentials,
            compression, wait_for_ready, timeout, metadata)

    @staticmethod
    def Unlock(request, target, options=(), channel_credentials=None,
               call_credentials=None, insecure=False, compression=None,
               wait_for_ready=None, timeout=None, metadata=None):
        return grpc.experimental.unary_unary(
            request, target, '/spec.proto.runtime.v1.Runtime/Unlock',
            runtime__pb2.UnlockRequest.SerializeToString,
            runtime__pb2.UnlockResponse.FromString,
            options, channel_credentials, insecure, call_credentials,
            compression, wait_for_ready, timeout, metadata)

    @staticmethod
    def GetNextId(request, target, options=(), channel_credentials=None,
                  call_credentials=None, insecure=False, compression=None,
                  wait_for_ready=None, timeout=None, metadata=None):
        return grpc.experimental.unary_unary(
            request, target, '/spec.proto.runtime.v1.Runtime/GetNextId',
            runtime__pb2.GetNextIdRequest.SerializeToString,
            runtime__pb2.GetNextIdResponse.FromString,
            options, channel_credentials, insecure, call_credentials,
            compression, wait_for_ready, timeout, metadata)

    @staticmethod
    def GetState(request, target, options=(), channel_credentials=None,
                 call_credentials=None, insecure=False, compression=None,
                 wait_for_ready=None, timeout=None, metadata=None):
        return grpc.experimental.unary_unary(
            request, target, '/spec.proto.runtime.v1.Runtime/GetState',
            runtime__pb2.GetStateRequest.SerializeToString,
            runtime__pb2.GetStateResponse.FromString,
            options, channel_credentials, insecure, call_credentials,
            compression, wait_for_ready, timeout, metadata)

    @staticmethod
    def GetBulkState(request, target, options=(), channel_credentials=None,
                     call_credentials=None, insecure=False, compression=None,
                     wait_for_ready=None, timeout=None, metadata=None):
        return grpc.experimental.unary_unary(
            request, target, '/spec.proto.runtime.v1.Runtime/GetBulkState',
            runtime__pb2.GetBulkStateRequest.SerializeToString,
            runtime__pb2.GetBulkStateResponse.FromString,
            options, channel_credentials, insecure, call_credentials,
            compression, wait_for_ready, timeout, metadata)

    @staticmethod
    def SaveState(request, target, options=(), channel_credentials=None,
                  call_credentials=None, insecure=False, compression=None,
                  wait_for_ready=None, timeout=None, metadata=None):
        return grpc.experimental.unary_unary(
            request, target, '/spec.proto.runtime.v1.Runtime/SaveState',
            runtime__pb2.SaveStateRequest.SerializeToString,
            google_dot_protobuf_dot_empty__pb2.Empty.FromString,
            options, channel_credentials, insecure, call_credentials,
            compression, wait_for_ready, timeout, metadata)

    @staticmethod
    def DeleteState(request, target, options=(), channel_credentials=None,
                    call_credentials=None, insecure=False, compression=None,
                    wait_for_ready=None, timeout=None, metadata=None):
        return grpc.experimental.unary_unary(
            request, target, '/spec.proto.runtime.v1.Runtime/DeleteState',
            runtime__pb2.DeleteStateRequest.SerializeToString,
            google_dot_protobuf_dot_empty__pb2.Empty.FromString,
            options, channel_credentials, insecure, call_credentials,
            compression, wait_for_ready, timeout, metadata)

    @staticmethod
    def DeleteBulkState(request, target, options=(), channel_credentials=None,
                        call_credentials=None, insecure=False, compression=None,
                        wait_for_ready=None, timeout=None, metadata=None):
        return grpc.experimental.unary_unary(
            request, target, '/spec.proto.runtime.v1.Runtime/DeleteBulkState',
            runtime__pb2.DeleteBulkStateRequest.SerializeToString,
            google_dot_protobuf_dot_empty__pb2.Empty.FromString,
            options, channel_credentials, insecure, call_credentials,
            compression, wait_for_ready, timeout, metadata)

    @staticmethod
    def ExecuteStateTransaction(request, target, options=(),
                                channel_credentials=None, call_credentials=None,
                                insecure=False, compression=None,
                                wait_for_ready=None, timeout=None, metadata=None):
        return grpc.experimental.unary_unary(
            request, target,
            '/spec.proto.runtime.v1.Runtime/ExecuteStateTransaction',
            runtime__pb2.ExecuteStateTransactionRequest.SerializeToString,
            google_dot_protobuf_dot_empty__pb2.Empty.FromString,
            options, channel_credentials, insecure, call_credentials,
            compression, wait_for_ready, timeout, metadata)

    @staticmethod
    def PublishEvent(request, target, options=(), channel_credentials=None,
                     call_credentials=None, insecure=False, compression=None,
                     wait_for_ready=None, timeout=None, metadata=None):
        return grpc.experimental.unary_unary(
            request, target, '/spec.proto.runtime.v1.Runtime/PublishEvent',
            runtime__pb2.PublishEventRequest.SerializeToString,
            google_dot_protobuf_dot_empty__pb2.Empty.FromString,
            options, channel_credentials, insecure, call_credentials,
            compression, wait_for_ready, timeout, metadata)

    @staticmethod
    def GetFile(request, target, options=(), channel_credentials=None,
                call_credentials=None, insecure=False, compression=None,
                wait_for_ready=None, timeout=None, metadata=None):
        return grpc.experimental.unary_stream(
            request, target, '/spec.proto.runtime.v1.Runtime/GetFile',
            runtime__pb2.GetFileRequest.SerializeToString,
            runtime__pb2.GetFileResponse.FromString,
            options, channel_credentials, insecure, call_credentials,
            compression, wait_for_ready, timeout, metadata)

    @staticmethod
    def PutFile(request_iterator, target, options=(), channel_credentials=None,
                call_credentials=None, insecure=False, compression=None,
                wait_for_ready=None, timeout=None, metadata=None):
        return grpc.experimental.stream_unary(
            request_iterator, target, '/spec.proto.runtime.v1.Runtime/PutFile',
            runtime__pb2.PutFileRequest.SerializeToString,
            google_dot_protobuf_dot_empty__pb2.Empty.FromString,
            options, channel_credentials, insecure, call_credentials,
            compression, wait_for_ready, timeout, metadata)

    @staticmethod
    def ListFile(request, target, options=(), channel_credentials=None,
                 call_credentials=None, insecure=False, compression=None,
                 wait_for_ready=None, timeout=None, metadata=None):
        return grpc.experimental.unary_unary(
            request, target, '/spec.proto.runtime.v1.Runtime/ListFile',
            runtime__pb2.ListFileRequest.SerializeToString,
            runtime__pb2.ListFileResp.FromString,
            options, channel_credentials, insecure, call_credentials,
            compression, wait_for_ready, timeout, metadata)

    @staticmethod
    def DelFile(request, target, options=(), channel_credentials=None,
                call_credentials=None, insecure=False, compression=None,
                wait_for_ready=None, timeout=None, metadata=None):
        return grpc.experimental.unary_unary(
            request, target, '/spec.proto.runtime.v1.Runtime/DelFile',
            runtime__pb2.DelFileRequest.SerializeToString,
            google_dot_protobuf_dot_empty__pb2.Empty.FromString,
            options, channel_credentials, insecure, call_credentials,
            compression, wait_for_ready, timeout, metadata)

    @staticmethod
    def GetFileMeta(request, target, options=(), channel_credentials=None,
                    call_credentials=None, insecure=False, compression=None,
                    wait_for_ready=None, timeout=None, metadata=None):
        return grpc.experimental.unary_unary(
            request, target, '/spec.proto.runtime.v1.Runtime/GetFileMeta',
            runtime__pb2.GetFileMetaRequest.SerializeToString,
            runtime__pb2.GetFileMetaResponse.FromString,
            options, channel_credentials, insecure, call_credentials,
            compression, wait_for_ready, timeout, metadata)

    @staticmethod
    def InvokeBinding(request, target, options=(), channel_credentials=None,
                      call_credentials=None, insecure=False, compression=None,
                      wait_for_ready=None, timeout=None, metadata=None):
        return grpc.experimental.unary_unary(
            request, target, '/spec.proto.runtime.v1.Runtime/InvokeBinding',
            runtime__pb2.InvokeBindingRequest.SerializeToString,
            runtime__pb2.InvokeBindingResponse.FromString,
            options, channel_credentials, insecure, call_credentials,
            compression, wait_for_ready, timeout, metadata)
from analysis.visit import *
# copy and random are used below; import them explicitly rather than relying
# on the star imports to re-export them.
import copy
import random

from disasm.Types import *
from utils.ail_utils import *
from utils.pp_print import *
from junkcodes import get_junk_codes

obfs_proportion = 0.015
class bb_branchfunc_diversify(ailVisitor):

    def __init__(self, funcs, fb_tbl, cfg_tbl):
        ailVisitor.__init__(self)
        self.funcs = funcs
        self._new_des_id = 0

    def _branch_a_func(self, f):
        fil = self.func_instrs(f)
        find_a_valid_func = False
        for instr in fil:
            op = get_op(instr)
            des = get_cf_des(instr)
            if des is not None and isinstance(des, Label):
                if op in JumpOp:
                    if random.random() > obfs_proportion:
                        continue
                    # two cases are handled: unconditional jmp and conditional jmp
                    if p_op(op) == 'jmp' or p_op(op) == self._ops['jmp']:
                        # a simple jump: cache the destination and call the routine
                        find_a_valid_func = True
                        loc = self._get_loc(instr)
                        i0 = TripleInstr((self._ops['mov'], Label('branch_des'), Label('$' + str(des)), loc, None))
                        loc1 = copy.deepcopy(loc)
                        loc1.loc_label = ''
                        i1 = DoubleInstr((self._ops['call'], Label('branch_routine'), loc1, None))
                        junk1 = get_junk_codes(loc1)
                        junk2 = get_junk_codes(loc1)
                        self.insert_instrs(i0, loc)
                        for _i in junk1:
                            self.insert_instrs(_i, loc)
                        self.replace_instrs(i1, loc, instr)
                        for _i in junk2:
                            self.append_instrs(_i, loc)
                    elif p_op(op) in {'je', 'jne', 'jl', 'jle', 'jg', 'jge'}:
                        # only these conditional jumps are handled
                        find_a_valid_func = True
                        loc = self._get_loc(instr)
                        postfix = p_op(op)[1:]
                        # use a conditional move to replace the conditional jump
                        self._new_des_id += 1
                        fall_through_label = 'fall_through_label_%d' % self._new_des_id
                        loc_no_label = copy.deepcopy(loc)
                        loc_no_label.loc_label = ''
                        loc_fall_through = copy.deepcopy(loc)
                        loc_fall_through.loc_label = fall_through_label + ':'
                        tmp = [
                            DoubleInstr((self._ops['push'], self._regs[0], loc, None)),  # replaces the original jump
                            DoubleInstr((self._ops['push'], self._regs[1], loc_no_label, None)),
                            TripleInstr((self._ops['mov'], self._regs[0], Label('$' + fall_through_label), loc_no_label, None)),
                            TripleInstr((self._ops['mov'], self._regs[1], Label('$' + str(des)), loc_no_label, None)),
                            TripleInstr(('cmov' + postfix, self._regs[0], self._regs[1], loc_no_label, None)),
                            TripleInstr((self._ops['mov'], Label('branch_des'), self._regs[0], loc_no_label, None)),
                            DoubleInstr((self._ops['pop'], self._regs[1], loc_no_label, None)),
                            DoubleInstr((self._ops['pop'], self._regs[0], loc_no_label, None)),
                            DoubleInstr((self._ops['call'], Label('branch_routine'), loc_no_label, None)),
                            SingleInstr((self._ops['nop'], loc_fall_through, None))
                        ]
                        self.replace_instrs(tmp[0], loc, instr)
                        for _i in tmp[1:]:
                            self.append_instrs(_i, loc)
        return find_a_valid_func
def branch_func(self):
# print 'bb branch on %d candidate function' % len(self.funcs)
# select the 1st obfs_proportion functions
# for f in self.funcs[:int(obfs_proportion * len(self.funcs))]:
do_branch = False
for f in self.funcs:
#for f in random.sample(self.funcs, int(obfs_proportion * len(self.funcs)) + 1):
if self._branch_a_func(f):
do_branch = True
self.update_process()
if not do_branch:
            print('no valid function is selected')
def bb_div_branch(self):
self.branch_func()
def get_branch_routine(self, iloc):
"""
return the list of routine instructions for branch functions
:param iloc: the location of instruction that routine being inserted
:return: the list of routine instructions
"""
loc_with_branch_label = copy.deepcopy(iloc)
loc_with_branch_label.loc_label = 'branch_routine: '
loc = copy.deepcopy(iloc)
loc.loc_label = ''
i0 = DoubleInstr((self._ops['pop'], Label('global_des'), loc_with_branch_label, None))
junk = get_junk_codes(loc)
i1 = DoubleInstr((self._ops['jmp'], Label('*branch_des'), loc, None))
res = [i0]
res.extend(junk)
res.append(i1)
return res
def attach_branch_routine(self):
loc = get_loc(self.instrs[-1])
routine_instrs = self.get_branch_routine(loc)
self.instrs.extend(routine_instrs)
def bb_div_process(self):
self.bb_div_branch()
self.attach_branch_routine()
def visit(self, instrs):
        print('start bb branch function')
self.instrs = copy.deepcopy(instrs)
self.bb_div_process()
return self.instrs
| 46.459016 | 128 | 0.534227 | 673 | 5,668 | 4.228826 | 0.219911 | 0.034434 | 0.035137 | 0.039353 | 0.267393 | 0.227337 | 0.185172 | 0.145819 | 0.119115 | 0.119115 | 0 | 0.011053 | 0.361503 | 5,668 | 121 | 129 | 46.842975 | 0.775352 | 0.085392 | 0 | 0.0625 | 0 | 0 | 0.046544 | 0.004231 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.052083 | null | null | 0.03125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b286d23fc369a16764ed55694919ccd382975d06 | 138 | py | Python | main1.py | dubblin27/bible-of-algo | 4f893ba0d32d8d169abf4c4485f105cc8169cdbb | [
"MIT"
] | null | null | null | main1.py | dubblin27/bible-of-algo | 4f893ba0d32d8d169abf4c4485f105cc8169cdbb | [
"MIT"
] | null | null | null | main1.py | dubblin27/bible-of-algo | 4f893ba0d32d8d169abf4c4485f105cc8169cdbb | [
"MIT"
] | null | null | null | su = 0
a = [3,5,6,2,7,1]
print(sum(a))
x, y = input("Enter two values: ").split()
x = int(x)
y = int(y)
su = a[y] + sum(a[:y])
print(su) | 17.25 | 44 | 0.514493 | 34 | 138 | 2.088235 | 0.558824 | 0.112676 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.063063 | 0.195652 | 138 | 8 | 45 | 17.25 | 0.576577 | 0 | 0 | 0 | 0 | 0 | 0.136691 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.25 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
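The last two statements of the script above compute a running total: `a[y] + sum(a[:y])` equals the sum of the first `y + 1` elements of `a`. A minimal standalone sketch of the same computation with `itertools.accumulate` (variable values here are illustrative, not read from `input`):

```python
from itertools import accumulate

a = [3, 5, 6, 2, 7, 1]
prefix = list(accumulate(a))  # prefix[i] == sum(a[:i + 1])
y = 3
# matches the script's a[y] + sum(a[:y])
assert prefix[y] == a[y] + sum(a[:y])
print(prefix[y])  # 16
```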
b292be09587a07ede608a3607cc6852e3db17188 | 925 | py | Python | tools/SDKTool/src/WrappedDeviceAPI/deviceAPI/mobileDevice/android/plugin/Platform_plugin/PlatformWeTest/__init__.py | Passer-D/GameAISDK | a089330a30b7bfe1f6442258a12d8c0086240606 | [
"Apache-2.0"
] | 1,210 | 2020-08-18T07:57:36.000Z | 2022-03-31T15:06:05.000Z | tools/SDKTool/src/WrappedDeviceAPI/deviceAPI/mobileDevice/android/plugin/Platform_plugin/PlatformWeTest/__init__.py | guokaiSama/GameAISDK | a089330a30b7bfe1f6442258a12d8c0086240606 | [
"Apache-2.0"
] | 37 | 2020-08-24T02:48:38.000Z | 2022-01-30T06:41:52.000Z | tools/SDKTool/src/WrappedDeviceAPI/deviceAPI/mobileDevice/android/plugin/Platform_plugin/PlatformWeTest/__init__.py | guokaiSama/GameAISDK | a089330a30b7bfe1f6442258a12d8c0086240606 | [
"Apache-2.0"
] | 275 | 2020-08-18T08:35:16.000Z | 2022-03-31T15:06:07.000Z | # -*- coding: utf-8 -*-
"""
Tencent is pleased to support the open source community by making GameAISDK available.
This source code file is licensed under the GNU General Public License Version 3.
For full details, please refer to the file "LICENSE.txt" which is provided as part of this source code package.
Copyright (C) 2020 THL A29 Limited, a Tencent company. All rights reserved.
"""
import platform
__is_windows_system = platform.platform().lower().startswith('window')
__is_linux_system = platform.platform().lower().startswith('linux')
if __is_windows_system:
from .demo_windows.PlatformWeTest import PlatformWeTest
from .demo_windows.common.AdbTool import AdbTool
elif __is_linux_system:
from .demo_ubuntu16.PlatformWeTest import PlatformWeTest
from .demo_ubuntu16.common.AdbTool import AdbTool
else:
    raise Exception('system is not supported!')
def GetInstance():
return PlatformWeTest()
| 35.576923 | 111 | 0.776216 | 125 | 925 | 5.584 | 0.584 | 0.045845 | 0.040115 | 0.077364 | 0.226361 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015152 | 0.143784 | 925 | 25 | 112 | 37 | 0.866162 | 0.412973 | 0 | 0 | 0 | 0 | 0.061682 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.384615 | 0.076923 | 0.538462 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
b293b0671b5147e6e833e70a808c61e5033f825f | 579 | py | Python | python/codingbat/src/sum_double.py | christopher-burke/warmups | 140c96ada87ec5e9faa4622504ddee18840dce4a | [
"MIT"
] | null | null | null | python/codingbat/src/sum_double.py | christopher-burke/warmups | 140c96ada87ec5e9faa4622504ddee18840dce4a | [
"MIT"
] | 2 | 2022-03-10T03:49:14.000Z | 2022-03-14T00:49:54.000Z | python/codingbat/src/sum_double.py | christopher-burke/warmups | 140c96ada87ec5e9faa4622504ddee18840dce4a | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
"""sum_double
Given two int values, return their sum.
Unless the two values are the same, then return double their sum.
sum_double(1, 2) → 3
sum_double(3, 2) → 5
sum_double(2, 2) → 8
source: https://codingbat.com/prob/p141905
"""
def sum_double(a: int, b: int) -> int:
"""Sum Double.
Return the sum or if a == b return double the sum.
"""
multiply = 1
if a == b:
multiply += 1
return (a + b) * multiply
if __name__ == "__main__":
print(sum_double(1, 2))
print(sum_double(3, 2))
print(sum_double(2, 2))
| 18.09375 | 65 | 0.618307 | 99 | 579 | 3.484848 | 0.383838 | 0.234783 | 0.121739 | 0.063768 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054545 | 0.240069 | 579 | 31 | 66 | 18.677419 | 0.722727 | 0.53886 | 0 | 0 | 0 | 0 | 0.03252 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0 | 0 | 0.222222 | 0.333333 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b293f0ceac4f743a52151b0799d4e433f9e36af9 | 366 | py | Python | src/draw.py | mattdesl/inkyphat-mods | 2867161e66ffce87b75170e081f5ab481ce5e6b1 | [
"MIT"
] | 7 | 2020-04-25T09:24:18.000Z | 2022-01-02T03:24:24.000Z | src/draw.py | mattdesl/inkyphat-mods | 2867161e66ffce87b75170e081f5ab481ce5e6b1 | [
"MIT"
] | null | null | null | src/draw.py | mattdesl/inkyphat-mods | 2867161e66ffce87b75170e081f5ab481ce5e6b1 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
import argparse
from PIL import Image
from inky import InkyPHAT
print("""Inky pHAT/wHAT: Logo
Displays the Inky pHAT/wHAT logo.
""")
type = "phat"
colour = "black"
inky_display = InkyPHAT(colour)
inky_display.set_border(inky_display.BLACK)
img = Image.open("assets/InkypHAT-212x104-bw.png")
inky_display.set_image(img)
inky_display.show() | 18.3 | 50 | 0.762295 | 56 | 366 | 4.857143 | 0.535714 | 0.202206 | 0.088235 | 0.117647 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018462 | 0.112022 | 366 | 20 | 51 | 18.3 | 0.818462 | 0.054645 | 0 | 0 | 0 | 0 | 0.271676 | 0.086705 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.230769 | 0 | 0.230769 | 0.076923 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b296a32574784e1bd7a3f60cbb896711ff7dd880 | 1,230 | py | Python | newsapp/tests.py | Esther-Anyona/four-one-one | 6a5e019b35710941a669c1b49e993b683c99d615 | [
"MIT"
] | null | null | null | newsapp/tests.py | Esther-Anyona/four-one-one | 6a5e019b35710941a669c1b49e993b683c99d615 | [
"MIT"
] | null | null | null | newsapp/tests.py | Esther-Anyona/four-one-one | 6a5e019b35710941a669c1b49e993b683c99d615 | [
"MIT"
] | null | null | null | from django.test import TestCase
from .models import *
from django.contrib.auth.models import User
# Create your tests here.
user = User.objects.get(id=1)
profile = Profile.objects.get(id=1)
neighbourhood = Neighbourhood.objects.get(id=1)
class TestBusiness(TestCase):
def setUp(self):
        self.business = Business(name="hardware", description="your stop for best prices", user=profile, neighbourhood_id=neighbourhood, business_email='mail@gmail.com')
self.business.save()
def test_instance(self):
self.assertTrue(isinstance(self.business,Business))
def test_create_business(self):
self.business.create_business()
businesses=Business.objects.all()
self.assertTrue(len(businesses)>0)
def test_delete_business(self):
self.business.delete_business()
businesses=Business.objects.all()
self.assertTrue(len(businesses)==0)
def test_update_business(self):
self.business.create_business()
# self.business.update_business(self.business.id, 'hardware')
updated_business = Business.objects.all()
self.assertTrue(len(updated_business) > 0)
def tearDown(self):
Business.objects.all().delete()
| 30 | 170 | 0.702439 | 148 | 1,230 | 5.736486 | 0.324324 | 0.127208 | 0.075383 | 0.045936 | 0.288575 | 0.288575 | 0.167256 | 0.167256 | 0.167256 | 0.167256 | 0 | 0.005964 | 0.182114 | 1,230 | 40 | 171 | 30.75 | 0.837972 | 0.06748 | 0 | 0.153846 | 0 | 0 | 0.041084 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 1 | 0.230769 | false | 0 | 0.115385 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b296bd14330ba64af65527855f690dd49d0a2709 | 4,620 | py | Python | ssdlite/load_caffe_weights.py | kkrpawkal/MobileNetv2-SSDLite | b434ed07b46d6e7f733ec97e180b57c8db30cae3 | [
"MIT"
] | null | null | null | ssdlite/load_caffe_weights.py | kkrpawkal/MobileNetv2-SSDLite | b434ed07b46d6e7f733ec97e180b57c8db30cae3 | [
"MIT"
] | null | null | null | ssdlite/load_caffe_weights.py | kkrpawkal/MobileNetv2-SSDLite | b434ed07b46d6e7f733ec97e180b57c8db30cae3 | [
"MIT"
] | null | null | null | import numpy as np
import sys,os
caffe_root = '/home/yaochuanqi/work/ssd/caffe/'
sys.path.insert(0, caffe_root + 'python')
import caffe
deploy_proto = 'deploy.prototxt'
save_model = 'deploy.caffemodel'
weights_dir = 'output'
box_layers = ['conv_13/expand', 'Conv_1', 'layer_19_2_2', 'layer_19_2_3', 'layer_19_2_4', 'layer_19_2_5']
def load_weights(path, shape=None):
weights = None
if shape is None:
weights = np.fromfile(path, dtype=np.float32)
else:
weights = np.fromfile(path, dtype=np.float32).reshape(shape)
os.unlink(path)
return weights
def load_data(net):
for key in net.params.iterkeys():
if type(net.params[key]) is caffe._caffe.BlobVec:
print(key)
if 'mbox' not in key and (key.startswith("conv") or key.startswith("Conv") or key.startswith("layer")):
print('conv')
if key.endswith("/bn"):
prefix = weights_dir + '/' + key.replace('/', '_')
net.params[key][0].data[...] = load_weights(prefix + '_moving_mean.dat')
net.params[key][1].data[...] = load_weights(prefix + '_moving_variance.dat')
net.params[key][2].data[...] = np.ones(net.params[key][2].data.shape, dtype=np.float32)
elif key.endswith("/scale"):
prefix = weights_dir + '/' + key.replace('scale','bn').replace('/', '_')
net.params[key][0].data[...] = load_weights(prefix + '_gamma.dat')
net.params[key][1].data[...] = load_weights(prefix + '_beta.dat')
else:
prefix = weights_dir + '/' + key.replace('/', '_')
ws = np.ones((net.params[key][0].data.shape[0], 1, 1, 1), dtype=np.float32)
if os.path.exists(prefix + '_weights_scale.dat'):
ws = load_weights(prefix + '_weights_scale.dat', ws.shape)
net.params[key][0].data[...] = load_weights(prefix + '_weights.dat', net.params[key][0].data.shape) * ws
if len(net.params[key]) > 1:
net.params[key][1].data[...] = load_weights(prefix + '_biases.dat')
elif 'mbox_loc/depthwise' in key or 'mbox_conf/depthwise' in key:
prefix = key[0:key.find('_mbox')]
index = box_layers.index(prefix)
if 'mbox_loc' in key:
prefix = weights_dir + '/BoxPredictor_' + str(index) + '_BoxEncodingPredictor_depthwise'
else:
prefix = weights_dir + '/BoxPredictor_' + str(index) + '_ClassPredictor_depthwise'
if key.endswith("/bn"):
net.params[key][0].data[...] = load_weights(prefix + '_bn_moving_mean.dat')
net.params[key][1].data[...] = load_weights(prefix + '_bn_moving_variance.dat')
net.params[key][2].data[...] = np.ones(net.params[key][2].data.shape, dtype=np.float32)
elif key.endswith("/scale"):
net.params[key][0].data[...] = load_weights(prefix + '_gamma.dat')
net.params[key][1].data[...] = load_weights(prefix + '_beta.dat')
else:
                    print(key)
net.params[key][0].data[...] = load_weights(prefix + '_weights.dat', net.params[key][0].data.shape)
if len(net.params[key]) > 1:
net.params[key][1].data[...] = load_weights(prefix + '_biases.dat')
elif key.endswith("mbox_loc"):
prefix = key.replace("_mbox_loc", "")
index = box_layers.index(prefix)
prefix = weights_dir + '/BoxPredictor_' + str(index) + '_BoxEncodingPredictor'
net.params[key][0].data[...] = load_weights(prefix + '_weights.dat', net.params[key][0].data.shape)
net.params[key][1].data[...] = load_weights(prefix + '_biases.dat')
elif key.endswith("mbox_conf"):
prefix = key.replace("_mbox_conf", "")
index = box_layers.index(prefix)
prefix = weights_dir + '/BoxPredictor_' + str(index) + '_ClassPredictor'
net.params[key][0].data[...] = load_weights(prefix + '_weights.dat', net.params[key][0].data.shape)
net.params[key][1].data[...] = load_weights(prefix + '_biases.dat')
else:
                print("error key " + key)
net_deploy = caffe.Net(deploy_proto, caffe.TEST)
load_data(net_deploy)
net_deploy.save(save_model)
| 54.352941 | 124 | 0.541775 | 548 | 4,620 | 4.375912 | 0.169708 | 0.108841 | 0.140117 | 0.140117 | 0.672644 | 0.610926 | 0.578816 | 0.491243 | 0.477064 | 0.457048 | 0 | 0.019059 | 0.295887 | 4,620 | 84 | 125 | 55 | 0.718106 | 0 | 0 | 0.381579 | 0 | 0 | 0.148733 | 0.028578 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.039474 | null | null | 0.052632 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b29b61190657129eadf2448fe993cb4e944db000 | 1,096 | py | Python | t/unit/utils/test_div.py | kaiix/kombu | 580b5219cc50cad278c4b664d0e0f85e37a5e9ea | [
"BSD-3-Clause"
] | 1,920 | 2015-01-03T15:43:23.000Z | 2022-03-30T19:30:35.000Z | t/unit/utils/test_div.py | kaiix/kombu | 580b5219cc50cad278c4b664d0e0f85e37a5e9ea | [
"BSD-3-Clause"
] | 949 | 2015-01-02T18:56:00.000Z | 2022-03-31T23:14:59.000Z | t/unit/utils/test_div.py | kaiix/kombu | 580b5219cc50cad278c4b664d0e0f85e37a5e9ea | [
"BSD-3-Clause"
] | 833 | 2015-01-07T23:56:35.000Z | 2022-03-31T22:04:11.000Z | import pickle
from io import BytesIO, StringIO
from kombu.utils.div import emergency_dump_state
class MyStringIO(StringIO):
def close(self):
pass
class MyBytesIO(BytesIO):
def close(self):
pass
class test_emergency_dump_state:
def test_dump(self, stdouts):
fh = MyBytesIO()
stderr = StringIO()
emergency_dump_state(
{'foo': 'bar'}, open_file=lambda n, m: fh, stderr=stderr)
assert pickle.loads(fh.getvalue()) == {'foo': 'bar'}
assert stderr.getvalue()
assert not stdouts.stdout.getvalue()
def test_dump_second_strategy(self, stdouts):
fh = MyStringIO()
stderr = StringIO()
def raise_something(*args, **kwargs):
raise KeyError('foo')
emergency_dump_state(
{'foo': 'bar'},
open_file=lambda n, m: fh,
dump=raise_something,
stderr=stderr,
)
assert 'foo' in fh.getvalue()
assert 'bar' in fh.getvalue()
assert stderr.getvalue()
assert not stdouts.stdout.getvalue()
| 23.319149 | 69 | 0.595803 | 124 | 1,096 | 5.129032 | 0.346774 | 0.081761 | 0.113208 | 0.050314 | 0.355346 | 0.289308 | 0.289308 | 0.289308 | 0.132075 | 0.132075 | 0 | 0 | 0.29562 | 1,096 | 46 | 70 | 23.826087 | 0.823834 | 0 | 0 | 0.363636 | 0 | 0 | 0.024635 | 0 | 0 | 0 | 0 | 0 | 0.212121 | 1 | 0.151515 | false | 0.060606 | 0.090909 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
b29e142efe612167f93b68a27b4c24715a4da2ff | 1,058 | py | Python | zkpytb/json.py | zertrin/zkpytb | 066662d9c7bd233f977302cb11cf888a2a1828d2 | [
"MIT"
] | 2 | 2021-07-17T19:30:17.000Z | 2022-02-14T04:55:46.000Z | zkpytb/json.py | zertrin/zkpytb | 066662d9c7bd233f977302cb11cf888a2a1828d2 | [
"MIT"
] | null | null | null | zkpytb/json.py | zertrin/zkpytb | 066662d9c7bd233f977302cb11cf888a2a1828d2 | [
"MIT"
] | null | null | null | """
Helper functions related to json
Author: Marc Gallet
"""
import datetime
import decimal
import json
import uuid
import pathlib
class JSONEncoder(json.JSONEncoder):
"""
A custom JSONEncoder that can handle a bit more data types than the one from stdlib.
"""
def default(self, o):
# early passthrough if it works by default
try:
return json.JSONEncoder.default(self, o)
except Exception:
pass
# handle Path objects
if isinstance(o, pathlib.Path):
return str(o).replace('\\', '/')
# handle UUID objects
if isinstance(o, uuid.UUID):
return str(o)
if isinstance(o, (datetime.datetime, datetime.time, datetime.date)):
return o.isoformat()
if isinstance(o, datetime.timedelta):
return o.total_seconds()
if isinstance(o, (complex, decimal.Decimal)):
return str(o)
# Let the base class default method raise the TypeError
return json.JSONEncoder.default(self, o)
| 27.128205 | 88 | 0.618147 | 127 | 1,058 | 5.141732 | 0.480315 | 0.091884 | 0.099541 | 0.085758 | 0.101072 | 0.101072 | 0 | 0 | 0 | 0 | 0 | 0 | 0.29206 | 1,058 | 38 | 89 | 27.842105 | 0.871829 | 0.258979 | 0 | 0.181818 | 0 | 0 | 0.003958 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045455 | false | 0.045455 | 0.227273 | 0 | 0.636364 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
b29e7d32ca4c3f659315bd72acd899c4542a2363 | 1,960 | py | Python | back_end/consts.py | DoctorChe/crash_map | e540ab8a45f67ff78c9993ac3eb1b413d4786cd9 | [
"MIT"
] | 1 | 2019-04-04T21:55:24.000Z | 2019-04-04T21:55:24.000Z | back_end/consts.py | DoctorChe/crash_map | e540ab8a45f67ff78c9993ac3eb1b413d4786cd9 | [
"MIT"
] | 2 | 2019-04-14T10:11:25.000Z | 2019-04-25T20:49:54.000Z | back_end/consts.py | DoctorChe/crash_map | e540ab8a45f67ff78c9993ac3eb1b413d4786cd9 | [
"MIT"
] | null | null | null | # encoding: utf-8
# input data constants
MARI_EL = 'Республика Марий Эл'
YOSHKAR_OLA = 'Республика Марий Эл, Йошкар-Ола'
VOLZHSK = 'Республика Марий Эл, Волжск'
VOLZHSK_ADM = 'Республика Марий Эл, Волжский район'
MOUNTIN = 'Республика Марий Эл, Горномарийский район'
ZVENIGOVO = 'Республика Марий Эл, Звениговский район'
KILEMARY = 'Республика Марий Эл, Килемарский район'
KUZHENER = 'Республика Марий Эл, Куженерский район'
TUREK = 'Республика Марий Эл, Мари-Турекский район'
MEDVEDEVO = 'Республика Марий Эл, Медведевский район'
MORKI = 'Республика Марий Эл, Моркинский район'
NEW_TORYAL = 'Республика Марий Эл, Новоторъяльский район'
ORSHANKA = 'Республика Марий Эл, Оршанский район'
PARANGA = 'Республика Марий Эл, Параньгинский район'
SERNUR = 'Республика Марий Эл, Сернурский район'
SOVETSKIY = 'Республика Марий Эл, Советский район'
YURINO = 'Республика Марий Эл, Юринский район'
ADMINISTRATIVE = [YOSHKAR_OLA, VOLZHSK, VOLZHSK_ADM, MOUNTIN, ZVENIGOVO, KILEMARY, KUZHENER, TUREK, MEDVEDEVO, MORKI, NEW_TORYAL, ORSHANKA, PARANGA, SERNUR, SOVETSKIY, YURINO]
# data indices
DATE = 0
TIME = 1
TYPE = 2
LOCATION = 3
STREET = 4
HOUSE_NUMBER = 5
ROAD = 6
KILOMETER = 7
METER = 8
LONGITUDE = 9
LATITUDE = 10
DEATH = 11
DEATH_CHILDREN = 12
INJURY = 13
INJURY_CHILDREN = 14
LONGITUDE_GEOCODE = 15
LATITUDE_GEOCODE = 16
VALID = 17
VALID_STRICT = 18
STREET_REPLACE_DICTIONARY = {
'Кырля': 'Кырли',
'Ленина пр-кт': 'Ленинский проспект',
'Ленина пл': 'Ленинский проспект',
'Л.Шевцовой': 'Шевцовой',
'Панфилова пер': 'Панфилова улица',
'Комсомольская пл': 'Комсомольская ул',
'Маркса пер': 'Маркса ул'
}
# coordinates grid borders
MARI_EL_WEST = 45.619745
MARI_EL_EAST = 50.200041
MARI_EL_SOUTH = 55.830512
MARI_EL_NORTH = 57.343631
YOSHKAR_OLA_WEST = 47.823484
YOSHKAR_OLA_EAST = 47.972560
YOSHKAR_OLA_SOUTH = 56.603073
YOSHKAR_OLA_NORTH = 56.669722
EARTH_MEAN_RADIUS = 6371000
MAX_DISTANCE = 150
# Yandex API constants
HOUSE_YANDEX = 'house' | 26.849315 | 175 | 0.758673 | 261 | 1,960 | 5.563218 | 0.51341 | 0.17562 | 0.199036 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.061788 | 0.14949 | 1,960 | 73 | 176 | 26.849315 | 0.809238 | 0.048469 | 0 | 0 | 0 | 0 | 0.419355 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
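The `EARTH_MEAN_RADIUS`, `MAX_DISTANCE`, and bounding-box constants above are the kind typically used for distance and coordinate-validity checks. A hedged sketch of how they could be applied; the function names `haversine_m` and `in_yoshkar_ola` are hypothetical, not taken from crash_map:

```python
import math

EARTH_MEAN_RADIUS = 6371000  # metres, as defined above
MAX_DISTANCE = 150


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_MEAN_RADIUS * math.asin(math.sqrt(a))


def in_yoshkar_ola(lat, lon):
    """Bounding-box check against the YOSHKAR_OLA_* borders above."""
    return 56.603073 <= lat <= 56.669722 and 47.823484 <= lon <= 47.972560


# two points a couple of blocks apart are within MAX_DISTANCE metres
d = haversine_m(56.6344, 47.8999, 56.6344, 47.9020)
print(d < MAX_DISTANCE)
```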
b2a18a1d5893e676f4cfbf5555c659a91725ab53 | 52,309 | py | Python | tagger-algo.py | li992/MAT | a5fb87b2d1ef667e5eb4a1c4e87caae6f1f75292 | [
"Apache-2.0"
] | null | null | null | tagger-algo.py | li992/MAT | a5fb87b2d1ef667e5eb4a1c4e87caae6f1f75292 | [
"Apache-2.0"
] | null | null | null | tagger-algo.py | li992/MAT | a5fb87b2d1ef667e5eb4a1c4e87caae6f1f75292 | [
"Apache-2.0"
] | null | null | null | import glob,os,stanza,argparse
from datetime import datetime
# route initiation
directory_path = os.getcwd()
#stanford tagger initiation
nlp = stanza.Pipeline('en')
dimDict = {}
# type specifiers
have = ["have","has","'ve","had","having","hath"]
do = ["do","does","did","doing","done"]
wp = ["who","whom","whose","which"]
be = ["be","am","is","are","was","were","been","being","'s","'m","'re"]
who = ["what","where","when","how","whether","why","whoever","whomever","whichever","wherever","whenever","whatever","however"]
preposition = ["against","amid","amidst","among","amongst","at","besides","between","by","despite","during","except","for","from","in","into","minus","notwithstanding","of","off","on","onto","opposite","out","per","plus","pro","than","through","throughout","thru","toward","towards","upon","versus","via","with","within","without"]
public = ["acknowledge","acknowledged","acknowledges","acknowledging","add","adds","adding","added","admit","admits","admitting","admitted","affirm","affirms","affirming","affirmed","agree","agrees","agreeing","agreed","allege","alleges","alleging","alleged","announce","announces","announcing","announced","argue","argues","arguing","argued","assert","asserts","asserting","asserted","bet","bets","betting","boast","boasts","boasting","boasted","certify","certifies","certifying","certified","claim","claims","claiming","claimed","comment","comments","commenting","commented","complain","complains","complaining","complained","concede","concedes","conceding","conceded","confess","confesses","confessing","confessed","confide","confides","confiding","confided","confirm","confirms","confirming","confirmed","contend","contends","contending","contended","convey","conveys","conveying","conveyed","declare","declares","declaring","declared","deny","denies","denying","denied","disclose","discloses","disclosing","disclosed","exclaim","exclaims","exclaiming","exclaimed","explain","explains","explaining","explained","forecast","forecasts","forecasting","forecasted","foretell","foretells","foretelling","foretold","guarantee","guarantees","guaranteeing","guaranteed","hint","hints","hinting","hinted","insist","insists","insisting","insisted","maintain","maintains","maintaining","maintained","mention","mentions","mentioning","mentioned","object","objects","objecting","objected","predict","predicts","predicting","predicted","proclaim","proclaims","proclaiming","proclaimed","promise","promises","promising","promised","pronounce","pronounces","pronouncing","pronounced","prophesy","prophesies","prophesying","prophesied","protest","protests","protesting","protested","remark","remarks","remarking","remarked","repeat","repeats","repeating","repeated","reply","replies","replying","replied","report","reports","reporting","reported","say","says","saying","said","state","states","stating","stated","submit","submits","submitting","submitted","suggest","suggests","suggesting","suggested","swear","swears","swearing","swore","sworn","testify","testifies","testifying","testified","vow","vows","vowing","vowed","warn","warns","warning","warned","write","writes","writing","wrote","written"]
private = ["accept","accepts","accepting","accepted","anticipate","anticipates","anticipating","anticipated","ascertain","ascertains","ascertaining","ascertained","assume","assumes","assuming","assumed","believe","believes","believing","believed","calculate","calculates","calculating","calculated","check","checks","checking","checked","conclude","concludes","concluding","concluded","conjecture","conjectures","conjecturing","conjectured","consider","considers","considering","considered","decide","decides","deciding","decided","deduce","deduces","deducing","deduced","deem","deems","deeming","deemed","demonstrate","demonstrates","demonstrating","demonstrated","determine","determines","determining","determined","discern","discerns","discerning","discerned","discover","discovers","discovering","discovered","doubt","doubts","doubting","doubted","dream","dreams","dreaming","dreamt","dreamed","ensure","ensures","ensuring","ensured","establish","establishes","establishing","established","estimate","estimates","estimating","estimated","expect","expects","expecting","expected","fancy","fancies","fancying","fancied","fear","fears","fearing","feared","feel","feels","feeling","felt","find","finds","finding","found","foresee","foresees","foreseeing","foresaw","forget","forgets","forgetting","forgot","forgotten","gather","gathers","gathering","gathered","guess","guesses","guessing","guessed","hear","hears","hearing","heard","hold","holds","holding","held","hope","hopes","hoping","hoped","imagine","imagines","imagining","imagined","imply","implies","implying","implied","indicate","indicates","indicating","indicated","infer","infers","inferring","inferred","insure","insures","insuring","insured","judge","judges","judging","judged","know","knows","knowing","knew","known","learn","learns","learning","learnt","learned","mean","means","meaning","meant","note","notes","noting","noted","notice","notices","noticing","noticed","observe","observes","observing","observed","perceive","perceives","perceiving","perceived","presume","presumes","presuming","presumed","presuppose","presupposes","presupposing","presupposed","pretend","pretends","pretending","pretended","prove","proves","proving","proved","realize","realise","realising","realizing","realises","realizes","realised","realized","reason","reasons","reasoning","reasoned","recall","recalls","recalling","recalled","reckon","reckons","reckoning","reckoned","recognize","recognise","recognizes","recognises","recognizing","recognising","recognized","recognised","reflect","reflects","reflecting","reflected","remember","remembers","remembering","remembered","reveal","reveals","revealing","revealed","see","sees","seeing","saw","seen","sense","senses","sensing","sensed","show","shows","showing","showed","shown","signify","signifies","signifying","signified","suppose","supposes","supposing","supposed","suspect","suspects","suspecting","suspected","think","thinks","thinking","thought","understand","understands","understanding","understood"]
suasive = ["agree","agrees","agreeing","agreed","allow","allows","allowing","allowed","arrange","arranges","arranging","arranged","ask","asks","asking","asked","beg","begs","begging","begged","command","commands","commanding","commanded","concede","concedes","conceding","conceded","decide","decides","deciding","decided","decree","decrees","decreeing","decreed","demand","demands","demanding","demanded","desire","desires","desiring","desired","determine","determines","determining","determined","enjoin","enjoins","enjoining","enjoined","ensure","ensures","ensuring","ensured","entreat","entreats","entreating","entreated","grant","grants","granting","granted","insist","insists","insisting","insisted","instruct","instructs","instructing","instructed","intend","intends","intending","intended","move","moves","moving","moved","ordain","ordains","ordaining","ordained","order","orders","ordering","ordered","pledge","pledges","pledging","pledged","pray","prays","praying","prayed","prefer","prefers","preferring","preferred","pronounce","pronounces","pronouncing","pronounced","propose","proposes","proposing","proposed","recommend","recommends","recommending","recommended","request","requests","requesting","requested","require","requires","requiring","required","resolve","resolves","resolving","resolved","rule","rules","ruling","ruled","stipulate","stipulates","stipulating","stipulated","suggest","suggests","suggesting","suggested","urge","urges","urging","urged","vote","votes","voting","voted"]
symbols = [",",".","!","@","#","$","%","^","&","*","(",")","<",">","/","?","{","}","[","]","\\","|","-","+","=","~","`"]
indefinitePN = ["anybody","anyone","anything","everybody","everyone","everything","nobody","none","nothing","nowhere","somebody","someone","something"]
quantifier = ["each","all","every","many","much","few","several","some","any"]
quantifierPN = ["everybody","somebody","anybody","everyone","someone","anyone","everything","something","anything"]
conjunctives = ["alternatively","consequently","conversely","eg","e.g.","furthermore","hence","however","i.e.","instead","likewise","moreover","namely","nevertheless","nonetheless","notwithstanding","otherwise","similarly","therefore","thus","viz."]
timeABV = ["afterwards","again","earlier","early","eventually","formerly","immediately","initially","instantly","late","lately","later","momentarily","now","nowadays","once","originally","presently","previously","recently","shortly","simultaneously","subsequently","today","to-day","tomorrow","to-morrow","tonight","to-night","yesterday"]
placeABV = ["aboard","above","abroad","across","ahead","alongside","around","ashore","astern","away","behind","below","beneath","beside","downhill","downstairs","downstream","east","far","hereabouts","indoors","inland","inshore","inside","locally","near","nearby","north","nowhere","outdoors","outside","overboard","overland","overseas","south","underfoot","underground","underneath","uphill","upstairs","upstream","west"]
narrative = ["ask","asks","asked","asking","tell","tells","told","telling"]
# tag specifiers
v = ["VBG","VBN","VB","VBD","VBP","VBZ"]
nn = ["NN","NNP","NNPS","NNS"]
def printWithTime(Strr):
    now = datetime.now()
    dt = now.strftime("%Y-%m-%d %H:%M:%S")
    print(dt + " INFO: " + Strr)
def tagger(data,file,frags):
printWithTime(" Creating Stanford Tags....")
doc = nlp(data)
printWithTime(" Finished")
stftoutfilepath = os.path.join(directory_path,'Results')
tagoutfilepath = os.path.join(directory_path,'Results')
if frags == True:
stftoutfilepath = os.path.join(stftoutfilepath,'StanfordTagsFragment')
tagoutfilepath = os.path.join(tagoutfilepath,'ModifiedTagsFragment')
else:
stftoutfilepath = os.path.join(stftoutfilepath,'StanfordTags')
tagoutfilepath = os.path.join(tagoutfilepath,'ModifiedTags')
stftoutfilepath = os.path.join(stftoutfilepath,file)
tagoutfilepath = os.path.join(tagoutfilepath,file)
out = open(stftoutfilepath,'w')
dout = open(tagoutfilepath,'w')
printWithTime(" Generating Analyzed Tags...")
for i,sent in enumerate(doc.sentences):
linewords=[]
for word in sent.words:
outstr = f'{word.text}_{word.xpos}\n'
linewords.append(f'{word.text}_{word.xpos}')
out.write(outstr)
taglist = taggerAnalyzer(linewords)
for tags in taglist:
dout.write(tags+"\n")
printWithTime(" Finished")
out.close()
dout.close()
return
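# tagger() writes one token per line in "text_XPOS" form and hands the same
# tokens to taggerAnalyzer, which rewrites or appends the tag part after the
# underscore. A minimal standalone sketch of that token convention (the helper
# names make_token/split_token are hypothetical, not part of this script):

```python
def make_token(text, xpos):
    # build a token in the "text_TAG" shape the tagger emits
    return f"{text}_{xpos}"

def split_token(token):
    # split only on the first underscore so appended feature
    # tags (e.g. "have_VBP_PEAS") stay together in the tag part
    text, tags = token.split('_', 1)
    return text, tags.split('_')

print(split_token(make_token("have", "VBP") + "_PEAS"))
```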
def getFinishedFiles(t):
returnList =[]
if t == "merged":
if not os.path.exists(os.path.join(directory_path,'mList.txt')):
return returnList
else:
path = os.path.join(directory_path,'mList.txt')
with open(path,'r') as infile:
for line in infile:
returnList.append(line.replace('\n',''))
return returnList
elif t == "fragment":
if not os.path.exists(os.path.join(directory_path,'fList.txt')):
return returnList
else:
path = os.path.join(directory_path,'fList.txt')
with open(path,'r') as infile:
for line in infile:
returnList.append(line.replace('\n',''))
return returnList
else:
return returnList
def MergedfolderProcess():
#print('folderprocess called')
if not os.path.exists('MergedFiles'):
printWithTime('Error: Please use FileMerger.py to generate file data first')
return []
else:
os.chdir(os.path.join(directory_path,'MergedFiles'))
filenames = glob.glob('*.txt')
validnames =[]
for name in filenames:
validnames.append(name)
#print(validnames)
return validnames
def FragmentfolderProcess():
if not os.path.exists('FileFragments'):
printWithTime('Error: Please use FileMerger.py to generate file data first')
return []
else:
os.chdir(os.path.join(directory_path,'FileFragments'))
filenames=glob.glob('*.txt')
validnames = []
for name in filenames:
validnames.append(name)
return validnames
def taggerAnalyzer(wordList):
#first loop to define prepositions
for i in range(len(wordList)):
word = wordList[i].split('_')
if i<len(wordList)-1:
next_word = wordList[i+1].split('_')
else:
next_word = ['','NULL']
if(word[0].lower()=="to" and (next_word[0] in wp or any(n in next_word for n in ["IN","CD","DT","JJ","PRPS","WPS","NN","NNP","PDT","PRP","WDT","WRB"]))):
wordList[i] = word[0] + "_PIN"
#second loop to define simple types
for i in range(len(wordList)):
word = wordList[i].split('_')
# negation
if("not" in word[0] or "n't" in word[0]) and "RB" in wordList[i]:
wordList[i] = word[0] + "_XX0"
# preposition
if word[0] in preposition:
wordList[i] = word[0] + "_PIN"
#indefinite pronouns
if word[0] in indefinitePN:
wordList[i] = word[0] + "_INPR"
#quantifier
if word[0] in quantifier:
wordList[i] = word[0] + "_QUAN"
#quantifier pronouns
if word[0] in quantifierPN:
wordList[i] = word[0] + "_QUPR"
# third loop to define complex types
for i in range(len(wordList)):
word = wordList[i].split('_')
if i<len(wordList)-4:
fourth_next_word = wordList[i+4].split('_')
else:
fourth_next_word = ['','NULL']
if i<len(wordList)-3:
third_next_word = wordList[i+3].split('_')
else:
third_next_word = ['','NULL']
if i<len(wordList)-2:
second_next_word = wordList[i+2].split('_')
else:
second_next_word = ['','NULL']
if i<len(wordList)-1:
next_word = wordList[i+1].split('_')
else:
next_word = ['','NULL']
if i>=1:
previous_word = wordList[i-1].split('_')
else:
previous_word = ['','NULL']
if i>=2:
second_previous_word = wordList[i-2].split('_')
else:
second_previous_word = ['','NULL']
if i>=3:
third_previous_word = wordList[i-3].split('_')
else:
third_previous_word = ['','NULL']
if i>=4:
fourth_previous_word = wordList[i-4].split('_')
else:
fourth_previous_word = ['','NULL']
if i>=5:
fifth_previous_word = wordList[i-5].split('_')
else:
fifth_previous_word = ['','NULL']
if i>=6:
sixth_previous_word = wordList[i-6].split('_')
else:
sixth_previous_word = ['','NULL']
#adverbial subordinators
if word[0].lower() in ["since","while","whilst","whereupon","whereas","whereby"]:
wordList[i]=wordList[i].replace(word[1],'OSUB')
word = wordList[i].split('_')
if (
(word[0].lower() == "such" and next_word[0].lower() == "that") or
(word[0].lower() == "inasmuch" and next_word[0].lower() == "as") or
(word[0].lower() == "forasmuch" and next_word[0].lower() == "as") or
(word[0].lower() == "insofar" and next_word[0].lower() == "as") or
(word[0].lower() == "insomuch" and next_word[0].lower() == "as") or
(word[0].lower() == "so" and next_word[0].lower() == "that" and any(n in second_next_word for n in ["NN","NNP","JJ"]))
):
wordList[i]=wordList[i].replace(word[1],'OSUB')
word = wordList[i].split('_')
wordList[i+1]=wordList[i+1].replace(next_word[1],"NULL")
next_word = wordList[i+1].split('_')
if ((word[0].lower() =="as") and (next_word[0].lower() in ["long","soon"]) and (second_next_word[0].lower() =="as")):
wordList[i]=wordList[i].replace(word[1],'OSUB')
word = wordList[i].split('_')
wordList[i+1]=wordList[i+1].replace(next_word[1],"NULL")
next_word = wordList[i+1].split('_')
wordList[i+2]=wordList[i+2].replace(second_next_word[1],"NULL")
second_next_word = wordList[i+2].split('_')
#predicative adjectives
if (word[0].lower() in be) and ("JJ" in next_word) and any(n in second_next_word for n in ["JJ","RB","NN","NNP"]):
wordList[i+1]=wordList[i+1].replace(next_word[1],'PRED')
next_word = wordList[i+1].split('_')
if (word[0].lower() in be) and ("RB" in next_word) and ("JJ" in second_next_word) and any(n in third_next_word for n in ["JJ","RB","NN","NNP"]):
wordList[i+2]=wordList[i+2].replace(second_next_word[1],'PRED')
second_next_word = wordList[i+2].split('_')
if (word[0].lower() in be) and ("XX0" in next_word) and ("JJ" in second_next_word) and any(n in third_next_word for n in ["JJ","RB","NN","NNP"]):
wordList[i+2]=wordList[i+2].replace(second_next_word[1],'PRED')
second_next_word = wordList[i+2].split('_')
if (word[0].lower() in be) and ("XX0" in next_word) and ("RB" in second_next_word) and ("JJ" in third_next_word) and any(n in fourth_next_word for n in ["JJ","RB","NN","NNP"]):
wordList[i+3]=wordList[i+3].replace(third_next_word[1],'PRED')
third_next_word = wordList[i+3].split('_')
if ("JJ" in word) and ("PHC" in previous_word) and ("PRED" in second_previous_word):
wordList[i]=wordList[i].replace(word[1],'PRED')
word = wordList[i].split('_')
#tags conjuncts
if (word[0].lower() in symbols and next_word[0].lower() in ["else","altogether","rather"]):
wordList[i+1]=wordList[i+1].replace(next_word[1],"CONJ")
next_word = wordList[i+1].split('_')
if word[0].lower() in conjunctives:
wordList[i]=wordList[i].replace(word[1],"CONJ")
word = wordList[i].split('_')
if ((word[0].lower()=="in" and next_word[0].lower() in ["comparison","contrast","particular","addition","conclusion","consequence","sum","summary"]) or
(word[0].lower()=="for" and next_word[0].lower() in ["example","instance"]) or
(word[0].lower()=="instead" and next_word[0].lower()=="of") or
(word[0].lower()=="by" and next_word[0].lower() in ["contrast","comparison"])):
wordList[i]=wordList[i].replace(word[1],"CONJ")
wordList[i+1]=wordList[i+1].replace(next_word[1],"NULL")
word = wordList[i].split('_')
next_word = wordList[i+1].split('_')
if((word[0].lower()=="in" and next_word[0].lower()=="any" and second_next_word[0].lower() in ["event","case"]) or
(word[0].lower()=="in" and next_word[0].lower()=="other" and second_next_word[0].lower()=="words") or
(word[0].lower()=="as" and next_word[0].lower()=="a" and second_next_word[0].lower() in ["consequence","result"]) or
(word[0].lower()=="on" and next_word[0].lower()=="the" and second_next_word[0].lower()=="contrary") ):
wordList[i]=wordList[i].replace(word[1],"CONJ")
wordList[i+1]=wordList[i+1].replace(next_word[1],"NULL")
wordList[i+2]=wordList[i+2].replace(second_next_word[1],"NULL")
word = wordList[i].split('_')
next_word = wordList[i+1].split('_')
second_next_word = wordList[i+2].split('_')
if(word[0].lower()=="on" and next_word[0].lower()=="the"and second_next_word[0].lower()=="other" and third_next_word[0].lower()=="hand"):
wordList[i]=wordList[i].replace(word[1],"CONJ")
wordList[i+1]=wordList[i+1].replace(next_word[1],"NULL")
wordList[i+2]=wordList[i+2].replace(second_next_word[1],"NULL")
wordList[i+3]=wordList[i+3].replace(third_next_word[1],"NULL")
word = wordList[i].split('_')
next_word = wordList[i+1].split('_')
second_next_word = wordList[i+2].split('_')
third_next_word = wordList[i+3].split('_')
#tags emphatics
if word[0].lower() in ["just","really","most","more"]:
wordList[i]=wordList[i].replace(word[1],"EMPH")
word = wordList[i].split('_')
if((word[0].lower() in ["real","so"] and any(n in next_word for n in ["JJ","PRED"])) or
(word[0].lower() in do and any(n in next_word for n in v))):
wordList[i]=wordList[i].replace(word[1],"EMPH")
word = wordList[i].split('_')
if((word[0].lower() == "for" and next_word[0].lower()=="sure") or
(word[0].lower()=="a" and next_word[0].lower()=="lot") or
(word[0].lower()=="such" and next_word[0].lower()=="a")):
wordList[i]=wordList[i].replace(word[1],"EMPH")
wordList[i+1]=wordList[i+1].replace(next_word[1],"NULL")
word = wordList[i].split('_')
next_word = wordList[i+1].split('_')
#tags phrasal "and" coordination
if word[0].lower()=="and":
if((("RB" in previous_word and "RB" in next_word)) or
(any(n in previous_word for n in nn) and any(n in next_word for n in nn)) or
(any(n in previous_word for n in v) and any(n in next_word for n in v)) or
(any(n in previous_word for n in ["JJ","PRED"]) and any(n in next_word for n in ["JJ","PRED"]))):
wordList[i]=wordList[i].replace(word[1],"PHC")
word = wordList[i].split('_')
#tags pro-verb do
if word[0].lower() in do:
if (all(n not in next_word for n in v) and
("XX0" not in next_word) and
(all(n not in next_word for n in ["RB","XX0"]) and all(n not in second_previous_word for n in v)) and
(all(n not in next_word for n in ["RB","XX0"]) and ("RB" not in second_next_word )and all(n not in third_next_word for n in v)) and
(previous_word[0] not in symbols) and
((previous_word[0].lower() not in wp) and (previous_word[0].lower() not in who))):
wordList[i]+="_PROD"
word = wordList[i].split('_')
#tags WH questions
if (((word[0].lower() in symbols and word[0]!=',') and (next_word[0].lower() in who) and (next_word[0].lower() not in ["however","whatever"]) and ("MD" in second_next_word)) or
((word[0].lower() in symbols and word[0]!=',') and (next_word[0].lower() in who) and (next_word[0].lower() not in ["however","whatever"]) and ((second_next_word[0].lower() in do) or (second_next_word[0].lower() in have) or (second_next_word[0].lower() in be))) or
((word[0].lower() in symbols and word[0]!=',') and (second_next_word[0].lower() in who) and (second_next_word[0].lower() not in ["however","whatever"]) and (third_next_word[0].lower() in be))):
wordList[i+1]+="_WHQU"
next_word = wordList[i+1].split('_')
#tags sentence relatives
if(word[0].lower() in symbols and next_word[0].lower()=="which"):
wordList[i+1]+="_SERE"
next_word = wordList[i+1].split('_')
#tags perfect aspects
if word[0].lower() in have:
if (any(n in next_word for n in ["VBD","VBN"]) or
(any(n in next_word for n in ["RB","XX0"]) and any(n in second_next_word for n in ["VBD","VBN"])) or
(any(n in next_word for n in ["RB","XX0"]) and any(n in second_next_word for n in["RB","XX0"]) and any(n in third_next_word for n in ["VBD","VBN"])) or
(any(n in next_word for n in ["NN","NNP","PRP"]) and any(n in second_next_word for n in ["VBD","VBN"])) or
("XX0" in next_word and any(n in second_next_word for n in["NN","NNP","PRP"]) and any(n in third_next_word for n in ["VBN","VBD"]))):
wordList[i]+="_PEAS"
word = wordList[i].split('_')
#tags passives
if word[0].lower() in be or word[0].lower() in ["have","had","has","get"]:
if((any(n in next_word for n in ["VBD","VBN"])) or
(any(n in next_word for n in ["RB","XX0"]) and any(n in second_next_word for n in ["VBD","VBN"])) or
(any(n in next_word for n in ["RB","XX0"]) and any(n in second_next_word for n in ["RB","XX0"]) and any(n in third_next_word for n in ["VBD","VBN"])) or
("XX0" in next_word and any(n in second_next_word for n in ["NN","NNP","PRP"]) and any(n in third_next_word for n in ["VBD","VBN"])) or
(any(n in next_word for n in ["NN","NNP","PRP"]) and any(n in second_next_word for n in ["VBD","VBN"]))):
wordList[i] +="_PASS"
word = wordList[i].split('_')
#tags "by passives"
if word[0].lower() in be or word[0].lower() in ["have","had","has","get"]:
if ((any(n in next_word for n in ["VBD","VBN"]) and second_next_word[0].lower() =="by") or
(any(n in next_word for n in ["RB","XX0"]) and any(n in second_next_word for n in ["VBD","VBN"]) and third_next_word[0].lower()=="by") or
(any(n in next_word for n in ["RB","XX0"]) and any(n in second_next_word for n in ["RB","XX0"]) and any(n in third_next_word for n in ["VBD","VBN"]) and fourth_next_word[0].lower()=="by") or
(any(n in next_word for n in ["NN","NNP","PRP"]) and any(n in second_next_word for n in ["VBD","VBN"]) and third_next_word[0].lower()=="by") or
("XX0" in next_word and any(n in second_next_word for n in ["NN","NNP","PRP"]) and any(n in third_next_word for n in ["VBD","VBN"]) and fourth_next_word[0].lower()=="by")):
if ("PASS" in wordList[i]):
wordList[i]=wordList[i].replace("PASS","BYPA")
else:
wordList[i]+="_BYPA"
word = wordList[i].split('_')
#tags be as main verb
if(("EX" not in second_previous_word and "EX" not in previous_word and word[0].lower() in be and any(n in next_word for n in ["CD","DT","PDT","PRPS","PRP","JJ","PRED","PIN","QUAN"])) or
("EX" not in second_previous_word and "EX" not in previous_word and word[0].lower() in be and any(n in next_word for n in ["RB","XX0"]) and any(n in second_next_word for n in ["CD","DT","PDT","PRPS","PRP","JJ","PRED","PIN","QUAN"]))):
wordList[i] +="_BEMA"
word = wordList[i].split('_')
#tags wh clauses
if (word[0].lower() in suasive or word[0].lower() in public or word[0].lower() in private) and (next_word[0].lower() in wp or next_word[0].lower() in who) and (second_next_word[0].lower() not in do and second_next_word[0].lower() not in be and second_next_word[0].lower() not in have and ('MD' not in second_next_word)):
wordList[i+1]+="_WHCL"
next_word = wordList[i+1].split('_')
#tags pied-piping relative clauses
if "PIN" in word and next_word[0].lower() in ["who","whom","whose","which"]:
wordList[i+1]+="_PIRE"
next_word = wordList[i+1].split('_')
#tags stranded prepositions
if "PIN" in word and next_word[0].lower()!="besides" and next_word[0].lower() in [",","."]:
wordList[i] +="_STPR"
word = wordList[i].split('_')
#tags split infinitives
if ((word[0].lower()=="to" and (any(n in next_word for n in ["RB","AMP","DWNT"]) or next_word[0].lower() in ["just","really","most","more"]) and any(n in second_next_word for n in v)) or
(word[0].lower()=="to" and (any(n in next_word for n in ["RB","AMP","DWNT"]) or next_word[0].lower() in ["just","really","most","more"]) and any(n in second_next_word for n in ["RB","AMP","DWNT"]) and any(n in third_next_word for n in v))):
wordList[i] +="_SPIN"
word = wordList[i].split('_')
#tags split auxiliaries
if(((word[0].lower() in do or word[0].lower() in have or word[0].lower() in be or "MD" in word) and (any(n in next_word for n in ["RB","AMP","DWNT"]) or (next_word[0].lower() in ["just","really","most","more"])) and any(n in second_next_word for n in v)) or
((word[0].lower() in do or word[0].lower() in have or word[0].lower() in be or "MD" in word) and (any(n in next_word for n in ["RB","AMP","DWNT"]) or (next_word[0].lower() in ["just","really","most","more"])) and ("RB" in second_next_word) and any(n in third_next_word for n in v))):
wordList[i] +="_SPAU"
word = wordList[i].split('_')
#tags synthetic negation
if((word[0].lower()=="no" and any(n in next_word for n in ["JJ","PRED","NN","NNP"])) or
word[0].lower() =="neither" or
word[0].lower() =="nor"):
wordList[i] = wordList[i].replace(word[1],"SYNE")
word = wordList[i].split('_')
#tags time adverbials
if(word[0].lower() in timeABV):
wordList[i] = wordList[i].replace(word[1],"TIME")
word = wordList[i].split('_')
if(word[0].lower()=="soon" and next_word[0].lower()=="as"):
wordList[i] = wordList[i].replace(word[1],"TIME")
word = wordList[i].split('_')
#tags place adverbials
if word[0].lower() in placeABV and "NNP" not in word:
wordList[i] = wordList[i].replace(word[1],"PLACE")
word = wordList[i].split('_')
#tags 'that' verb complement
if((previous_word[0].lower() in ["and","nor","but","or","also"] or previous_word[0] in symbols )and word[0].lower()=="that" and (next_word[0].lower()=="there" or any(n in next_word for n in ["DT","QUAN","CD","PRP","NNS","NNP"])) or
((previous_word[0].lower() in public or previous_word[0].lower() in private or previous_word[0].lower() in suasive or (previous_word[0].lower() in ["seem","seems","seemed","seeming","appear","appears","appeared","appearing"] and any(n in previous_word for n in v))) and word[0].lower()=="that" and (next_word[0].lower() in do or next_word[0].lower() in be or next_word[0].lower() in have or any(n in next_word for n in v) or "MD" in next_word or next_word[0].lower()=="and")) or
((fourth_previous_word[0] in public or fourth_previous_word[0] in private or fourth_previous_word[0] in suasive) and "PIN" in third_previous_word and any(n in second_previous_word for n in nn) and any(n in previous_word for n in nn) and word[0].lower() =="that") or
((fifth_previous_word[0] in public or fifth_previous_word[0] in private or fifth_previous_word[0] in suasive ) and "PIN" in fourth_previous_word and any(n in third_previous_word for n in nn) and any(n in second_previous_word for n in nn) and any(n in previous_word for n in nn) and word[0].lower() =="that") or
((sixth_previous_word[0] in public or sixth_previous_word[0] in private or sixth_previous_word[0] in suasive ) and "PIN" in fifth_previous_word and any(n in fourth_previous_word for n in nn) and any(n in third_previous_word for n in nn) and any(n in second_previous_word for n in nn) and any(n in previous_word for n in nn) and word[0].lower() =="that")):
if(word[0].lower()=="that"):
wordList[i] = wordList[i].replace(word[1],"THVC")
word = wordList[i].split('_')
#tags 'that' adjective complements
if (any(n in previous_word for n in ["JJ","PRED"]) and word[0].lower()=="that"):
wordList[i] = wordList[i].replace(word[1],"THAC")
word = wordList[i].split('_')
#tags present participial clauses
if previous_word[0] in symbols and "VBG" in word and (any(n in next_word for n in ["PIN","DT","QUAN","CD","WPS","PRP","RB"]) or next_word[0].lower() in wp or next_word[0].lower() in who):
wordList[i] += "_PRESP"
word = wordList[i].split('_')
#tags past participial clauses
if previous_word[0] in symbols and "VBN" in word and any(n in next_word for n in ["PIN","RB"]):
wordList[i] += "_PASTP"
word = wordList[i].split('_')
#tags past participial WHIZ deletion relatives
if (any(n in previous_word for n in nn) or ("QUPR" in previous_word)) and ("VBN" in word) and (any(n in next_word for n in ["PIN","RB"]) or (next_word[0].lower() in be)):
wordList[i] += "_WZPAST"
word = wordList[i].split('_')
#tags present participial WHIZ deletion relatives
if any(n in previous_word for n in nn) and "VBG" in word:
wordList[i] += "_WZPRES"
word = wordList[i].split('_')
#tags "that" relative clauses on subject position
if ((any(n in previous_word for n in nn) and (word[0].lower()=="that") and (any(n in next_word for n in v) or "MD" in next_word or next_word[0].lower() in do or next_word[0].lower() in be or next_word[0].lower() in have)) or
(any(n in previous_word for n in nn) and (word[0].lower()=="that") and any(n in next_word for n in ["RB","XX0"]) and (any(n in second_next_word for n in v) or "MD" in second_next_word or second_next_word[0].lower() in do or second_next_word[0].lower() in be or second_next_word[0].lower() in have)) or
(any(n in previous_word for n in nn) and (word[0].lower()=="that") and any(n in next_word for n in ["RB","XX0"]) and any(n in second_next_word for n in ["RB","XX0"]) and (any(n in third_next_word for n in v) or "MD" in third_next_word or third_next_word[0].lower() in do or third_next_word[0].lower() in be or third_next_word[0].lower() in have))):
wordList[i] = wordList[i].replace(word[1],"TSUB")
word = wordList[i].split('_')
#tags "that" relative clauses on object position
if((any(n in previous_word for n in nn) and (word[0].lower() =="that") and (next_word[0].lower() in ["it","i","we","he","she","they"] or any(n in next_word for n in ["DT","QUAN","CD","JJ","NNS","NNP","PRPS"])))or
(any(n in previous_word for n in nn) and (word[0].lower()=="that") and any(n in next_word for n in nn) and "POS" in second_next_word)):
wordList[i] = wordList[i].replace(word[1],"TOBJ")
word = wordList[i].split('_')
#tags WH relative clauses on subject position
if((all(n not in third_previous_word[0].lower() for n in narrative) and any(n in previous_word for n in nn) and (word[0].lower() in wp) and ((next_word[0].lower() in do) or (next_word[0].lower() in be) or (next_word[0].lower() in have) or any(n in next_word for n in v) or ("MD" in next_word))) or
(all(n not in third_previous_word[0].lower() for n in narrative) and any(n in previous_word for n in nn) and (word[0].lower() in wp) and any(n in next_word for n in ["RB","XX0"]) and(second_next_word[0].lower() in do or second_next_word[0].lower() in be or second_next_word[0].lower() in have or any(n in second_next_word for n in v) or "MD" in second_next_word)) or
(all(n not in third_previous_word[0].lower() for n in narrative) and any(n in previous_word for n in nn) and (word[0].lower() in wp) and any(n in next_word for n in ["RB","XX0"]) and any(n in second_next_word for n in ["RB","XX0"]) and (third_next_word[0].lower() in do or third_next_word[0].lower() in be or third_next_word[0].lower() in have or any(n in third_next_word for n in v) or "MD" in third_next_word))):
wordList[i] +="_WHSUB"
word = wordList[i].split('_')
#tags WH relative clauses on object position
if(all(n not in third_previous_word[0].lower() for n in narrative) and any(n in previous_word for n in nn) and (word[0].lower() in wp) and ((next_word[0].lower() not in do) and (next_word[0].lower() not in be) and (next_word[0].lower() not in have) and all(n not in next_word for n in v) and all(n not in next_word for n in ["MD","RB","XX0"]))):
wordList[i] += "_WHOBJ"
word = wordList[i].split('_')
#tags hedges
if word[0].lower()=="maybe":
wordList[i]= wordList[i].replace(word[1],"HDG")
word = wordList[i].split('_')
if((word[0].lower()=="at" and next_word[0].lower()=="about") or
(word[0].lower()=="something" and next_word[0].lower()=="like")):
wordList[i]= wordList[i].replace(word[1],"HDG")
wordList[i+1]= next_word[0].lower()+"_NULL"
word = wordList[i].split('_')
next_word = wordList[i+1].split('_')
if word[0].lower()=="more" and next_word[0].lower()=="or" and second_next_word[0].lower()=="less":
wordList[i]= wordList[i].replace(word[1],"HDG")
wordList[i+1]= next_word[0].lower()+"_NULL"
wordList[i+2]= second_next_word[0].lower()+"_NULL"
word = wordList[i].split('_')
next_word = wordList[i+1].split('_')
second_next_word = wordList[i+2].split('_')
if (((any(n in second_previous_word for n in ["DT","QUAN","CD","JJ","PRED","PRPS"]) or second_previous_word[0].lower() in who) and previous_word[0].lower()=="sort" and word[0].lower()=="of")or
((any(n in second_previous_word for n in ["DT","QUAN","CD","JJ","PRED","PRPS"]) or second_previous_word[0].lower() in who) and previous_word[0].lower()=="kind" and word[0].lower()=="of")):
wordList[i]= wordList[i].replace(word[1],"HDG")
wordList[i-1]= previous_word[0].lower()+"_NULL"
word = wordList[i].split('_')
previous_word = wordList[i-1].split('_')
#tags discourse particles
if (previous_word[0] in symbols) and (word[0].lower() in ["well","now","anyhow","anyways"]):
wordList[i] =wordList[i].replace(word[1],"DPAR")
word = wordList[i].split('_')
for i in range(len(wordList)):
word = wordList[i].split('_')
if i<len(wordList)-1:
next_word = wordList[i+1].split('_')
else:
next_word = ['','NULL']
#tags demonstrative pronouns
if (((word[0].lower() in ["that","this","these","those"]) and ("NULL" not in word) and ((next_word[0].lower() in do) or (next_word[0].lower() in be) or (next_word[0].lower() in have) or (next_word[0].lower() in wp) or any(n in next_word for n in v) or( "MD" in next_word) or (next_word[0].lower()=="and") or (next_word[0] in symbols)) and all(n not in word for n in ["TOBJ","TSUB","THAC","THVC"])) or
((word[0].lower()=="that") and (next_word[0].lower() in ["'s","is"]))):
wordList[i] = wordList[i].replace(word[1],"DEMP")
word = wordList[i].split('_')
for i in range(len(wordList)):
word = wordList[i].split('_')
if i<len(wordList)-1:
next_word = wordList[i+1].split('_')
else:
next_word = ['','NULL']
#tags demonstratives
if word[0].lower() in ["that","this","these","those"] and all(n not in word for n in ["DEMP","TOBJ","TSUB","THAC","THVC","NULL"]):
wordList[i] = wordList[i].replace(word[1],"DEMO")
word = wordList[i].split('_')
for i in range(len(wordList)):
word = wordList[i].split('_')
if i<len(wordList)-4:
fourth_next_word = wordList[i+4].split('_')
else:
fourth_next_word = ['','NULL']
if i<len(wordList)-3:
third_next_word = wordList[i+3].split('_')
else:
third_next_word = ['','NULL']
if i<len(wordList)-2:
second_next_word = wordList[i+2].split('_')
else:
second_next_word = ['','NULL']
if i<len(wordList)-1:
next_word = wordList[i+1].split('_')
else:
next_word = ['','NULL']
#tags subordinator-that deletion
if (((word[0].lower() in public or word[0].lower() in private or word[0].lower() in suasive) and (next_word[0].lower() in ["i","we","she","he","they"] or "DEMP" in next_word)) or
((word[0].lower() in public or word[0].lower() in private or word[0].lower() in suasive) and ("PRP" in next_word or any(n in next_word for n in nn)) and (second_next_word[0].lower() in do or second_next_word[0].lower() in have or second_next_word[0].lower() in be or any(n in second_next_word for n in v) or "MD" in second_next_word)) or
((word[0].lower() in public or word[0].lower() in private or word[0].lower() in suasive) and any(n in next_word for n in ["JJ","PRED","RB","DT","QUAN","CD","PRPS"]) and any(n in second_next_word for n in nn) and (third_next_word[0].lower() in do or third_next_word[0].lower() in have or third_next_word[0].lower() in be or any(n in third_next_word for n in v) or "MD" in third_next_word)) or
((word[0].lower() in public or word[0].lower() in private or word[0].lower() in suasive) and any(n in next_word for n in ["JJ","PRED","RB","DT","QUAN","CD","PRPS"]) and any(n in second_next_word for n in ["JJ","PRED"]) and any(n in third_next_word for n in nn) and (fourth_next_word[0].lower() in do or fourth_next_word[0].lower() in have or fourth_next_word[0].lower() in be or any(n in fourth_next_word for n in v) or "MD" in fourth_next_word))):
wordList[i] += "_THATD"
word = wordList[i].split('_')
for i in range(len(wordList)):
word = wordList[i].split('_')
if i<len(wordList)-1:
next_word = wordList[i+1].split('_')
else:
next_word = ['','NULL']
if i<len(wordList)-2:
second_next_word = wordList[i+2].split('_')
else:
second_next_word = ['','NULL']
if i>=1:
previous_word = wordList[i-1].split('_')
else:
previous_word = ['','NULL']
#tags independent clause coordination
if (((previous_word[0]==",") and (word[0].lower()=="and") and (next_word[0].lower() in ["it","so","then","you","u","we","he","she","they"] or ("DEMP" in next_word))) or
((previous_word[0]==",") and (word[0].lower()=="and") and (next_word[0].lower()=="there") and any(n in second_next_word for n in be)) or
((previous_word[0] in symbols) and (word[0].lower()=="and")) or
((word[0].lower()=="and") and (next_word[0].lower() in wp or next_word[0].lower() in who or next_word[0].lower() in ["because","although","though","tho","if","unless"] or any(n in next_word for n in ["OSUB","DPAR","CONJ"])))):
wordList[i] = wordList[i].replace(word[1],"ANDC")
word = wordList[i].split('_')
for i in range(len(wordList)):
word = wordList[i].split('_')
#basic tags
if word[0].lower() in ["absolutely","altogether","completely","enormously","entirely","extremely","fully","greatly","highly","intensely","perfectly","strongly","thoroughly","totally","utterly","very"]:
wordList[i] = wordList[i].replace(word[1],"AMP")
word = wordList[i].split('_')
if word[0].lower() in ["almost","barely","hardly","merely","mildly","nearly","only","partially","partly","practically","scarcely","slightly","somewhat"]:
wordList[i] = wordList[i].replace(word[1],"DWNT")
word = wordList[i].split('_')
if ("tion"in word[0].lower() or "ment" in word[0].lower() or "ness" in word[0].lower() or "nesses" in word[0].lower() or "ity" in word[0].lower() or "ities" in word[0].lower()) and any(n in word for n in nn):
wordList[i] = wordList[i].replace(word[1],"NOMZ")
word = wordList[i].split('_')
if ("ing" in word[0].lower() and any(n in word for n in nn)) or ("ings" in word[0].lower() and any(n in word for n in nn)):
wordList[i] = wordList[i].replace(word[1],"GER")
word = wordList[i].split('_')
if any(n in word for n in nn):
wordList[i] = wordList[i].replace(word[1],"NN")
word = wordList[i].split('_')
if any(n in word for n in ["JJS","JJR"]):
wordList[i] = wordList[i].replace(word[1],"JJ")
word = wordList[i].split('_')
if any(n in word for n in ["RBS","RBR","WRB"]):
wordList[i] = wordList[i].replace(word[1],"RB")
word = wordList[i].split('_')
if any(n in word for n in ["VBP","VBZ"]):
wordList[i] = wordList[i].replace(word[1],"VPRT")
word = wordList[i].split('_')
if word[0].lower() in ["i","me","we","us","my","our","myself","ourselves"]:
wordList[i] = wordList[i].replace(word[1],"FPP1")
word = wordList[i].split('_')
if word[0].lower() in ["you","your","yourself","yourselves","thy","thee","thyself","thou"]:
wordList[i] = wordList[i].replace(word[1],"SPP2")
word = wordList[i].split('_')
if word[0].lower() in ["she","he","they","her","his","them","him","their","himself","herself","themselves"]:
wordList[i] = wordList[i].replace(word[1],"TPP3")
word = wordList[i].split('_')
if word[0].lower() in ["it","its","itself"]:
wordList[i] = wordList[i].replace(word[1],"PIT")
word = wordList[i].split('_')
if word[0].lower() in ["because"]:
wordList[i] = wordList[i].replace(word[1],"CAUS")
word = wordList[i].split('_')
if word[0].lower() in ["although","though","tho"]:
wordList[i] = wordList[i].replace(word[1],"CONC")
word = wordList[i].split('_')
if word[0].lower() in ["if","unless"]:
wordList[i] = wordList[i].replace(word[1],"COND")
word = wordList[i].split('_')
if (word[0].lower() in ["can","may","might","could"]) or ("ca" in word[0].lower() and "MD" in word):
wordList[i] = wordList[i].replace(word[1],"POMD")
word = wordList[i].split('_')
if word[0].lower() in ["ought","should","must"]:
wordList[i] = wordList[i].replace(word[1],"NEMD")
word = wordList[i].split('_')
if (word[0].lower() in ["would","shall"]) or (("will" in word[0].lower() or "ll" in word[0].lower() or "wo" in word[0].lower() or "sha" in word[0].lower() or "'d" in word[0].lower()) and "MD" in word):
wordList[i] = wordList[i].replace(word[1],"PRMD")
word = wordList[i].split('_')
if word[0].lower() in public:
wordList[i] += "_PUBV"
word = wordList[i].split('_')
if word[0].lower() in private:
wordList[i] += "_PRIV"
word = wordList[i].split('_')
if word[0].lower() in suasive:
wordList[i] += "_SUAV"
word = wordList[i].split('_')
if word[0].lower() in ["seem","seems","seemed","seeming","appear","appears","appeared","appearing"] and any(n in word for n in v):
wordList[i] += "_SMP"
word = wordList[i].split('_')
if (word[0].lower() in ["\'ll","\'d"] or ("n\'t" in word[0].lower() and "XX0" in word) or ("\'" in word[0].lower() and any(n in word for n in v))):
wordList[i] += "_CONT"
word = wordList[i].split('_')
return wordList
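# Throughout taggerAnalyzer a feature is marked in one of two ways: by
# substituting the Stanford tag (wordList[i].replace(word[1], "NEW")) or by
# appending an extra feature tag (wordList[i] += "_NEW"). A standalone sketch
# of the two idioms (function names here are hypothetical):

```python
def substitute_tag(token, new_tag):
    # swap the POS tag for a feature tag, keeping the word text
    text, _ = token.split('_', 1)
    return f"{text}_{new_tag}"

def append_tag(token, feature_tag):
    # keep the POS tag and add a feature tag after it
    return f"{token}_{feature_tag}"

print(substitute_tag("maybe_RB", "HDG"))   # maybe_HDG
print(append_tag("have_VBP", "PEAS"))      # have_VBP_PEAS
```

# Note that str.replace substitutes every occurrence, so a tag string that
# also appears inside the word text could be rewritten there too; splitting on
# the first underscore, as above, avoids that edge case.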
def merged():
    printWithTime("Merged files tagging progress started")
    wordList = MergedfolderProcess()
    finishedList = getFinishedFiles("merged")
    for file in wordList:
        if file in finishedList:
            printWithTime("File: "+file+" has been processed, now moving to the next file")
            continue
        else:
            printWithTime("Now processing file: "+file+"...")
            filepath = os.path.join(directory_path,"MergedFiles")
            filepath = os.path.join(filepath,file)
            with open(filepath,'r') as filecontent:
                data = filecontent.read().replace('\n',' ')
            tagger(data,file,False)
            printWithTime("Tag generation complete: "+file+"")
            # record the finished file so an interrupted run can resume
            finishedFileRecorder = open(os.path.join(directory_path,'mList.txt'),'a')
            finishedFileRecorder.write(file+"\n")
            finishedFileRecorder.close()
    printWithTime("Tagging program finished\nPlease use tagger-count.py to generate analysis data")
    return
def fragments():
    printWithTime("File fragments tagging progress started")
    wordList = FragmentfolderProcess()
    finishedList = getFinishedFiles("fragment")
    for file in wordList:
        if file in finishedList:
            printWithTime("File: "+file+" has been processed, now moving to the next file")
            continue
        else:
            printWithTime("Now processing file: "+file+"...")
            filepath = os.path.join(directory_path,"FileFragments")
            filepath = os.path.join(filepath,file)
            with open(filepath,'r') as filecontent:
                data = filecontent.read().replace('\n',' ')
            tagger(data,file,True)
            printWithTime("Tag generation complete: "+file+"")
            # record the finished file so an interrupted run can resume
            finishedFileRecorder = open(os.path.join(directory_path,'fList.txt'),'a')
            finishedFileRecorder.write(file+"\n")
            finishedFileRecorder.close()
    printWithTime("Tagging program finished\nPlease use tagger-count.py -f true to generate analysis data")
parser = argparse.ArgumentParser(description="MAT tagging algorithm")
parser.add_argument('-f','--fragment',type=str,default="false",help='To generate tags for merged files, set this value to false; to generate tags for file fragments, set this value to true')
parser.add_argument('-r','--restart',type=str,default="false",help='If you want to restart the program so that it processes from the beginning, set this value to true; otherwise, set it to false')

if not os.path.exists('Results'):
    os.mkdir(os.path.join(os.getcwd(),'Results'))
os.chdir(os.path.join(os.getcwd(),'Results'))
if not os.path.exists('StanfordTags'):
    os.mkdir(os.path.join(os.getcwd(),'StanfordTags'))
if not os.path.exists('ModifiedTags'):
    os.mkdir(os.path.join(os.getcwd(),'ModifiedTags'))
if not os.path.exists('StanfordTagsFragment'):
    os.mkdir(os.path.join(os.getcwd(),'StanfordTagsFragment'))
if not os.path.exists('ModifiedTagsFragment'):
    os.mkdir(os.path.join(os.getcwd(),'ModifiedTagsFragment'))
os.chdir('..')

args = parser.parse_args()
if args.fragment == "true":
    if args.restart == "true":
        if os.path.exists('fList.txt'):
            os.remove(os.path.join(directory_path,'fList.txt'))
    fragments()
else:
    if args.restart == "true":
        if os.path.exists('mList.txt'):
            os.remove(os.path.join(directory_path,'mList.txt'))
    merged()
# library/real/display_real.py | console-beaver/MIT-Racecar-cbeast | MIT
"""
Copyright Harvey Mudd College
MIT License
Spring 2020
Contains the Display module of the racecar_core library
"""
import cv2 as cv
import os

from nptyping import NDArray

from display import Display


class DisplayReal(Display):
    __WINDOW_NAME: str = "RACECAR display window"
    __DISPLAY: str = ":1"

    def __init__(self):
        self.__display_found = (
            self.__DISPLAY
            in os.popen(
                "cd /tmp/.X11-unix && for x in X*; do echo \":${x#X}\"; done "
            ).read()
        )
        if self.__display_found:
            os.environ["DISPLAY"] = self.__DISPLAY
        else:
            print(f"Display {self.__DISPLAY} not found.")

    def create_window(self) -> None:
        if self.__display_found:
            cv.namedWindow(self.__WINDOW_NAME)

    def show_color_image(self, image: NDArray) -> None:
        if self.__display_found:
            cv.imshow(self.__WINDOW_NAME, image)
            cv.waitKey(1)
# test_default.py | dukedhx/tokenflex-reporting-python-script | MIT
#####################################################################
## Copyright (c) Autodesk, Inc. All rights reserved
## Written by Forge Partner Development
##
## Permission to use, copy, modify, and distribute this software in
## object code form for any purpose and without fee is hereby granted,
## provided that the above copyright notice appears in all copies and
## that both that copyright notice and the limited warranty and
## restricted rights notice below appear in all supporting
## documentation.
##
## AUTODESK PROVIDES THIS PROGRAM "AS IS" AND WITH ALL FAULTS.
## AUTODESK SPECIFICALLY DISCLAIMS ANY IMPLIED WARRANTY OF
## MERCHANTABILITY OR FITNESS FOR A PARTICULAR USE. AUTODESK, INC.
## DOES NOT WARRANT THAT THE OPERATION OF THE PROGRAM WILL BE
## UNINTERRUPTED OR ERROR FREE.
#####################################################################
import simple_http_server as SimpleHTTPServer
import consumption_reporting as ConsumptionReporting
from threading import Thread
from time import sleep
import pytest


@pytest.mark.skip()
def shutdownServer():
    sleep(30)
    SimpleHTTPServer.httpd.shutdown()


def testServer():
    thread = Thread(target=shutdownServer)
    thread.start()
    SimpleHTTPServer.startHttpServer()
    thread.join()


def testConsumption():
    ConsumptionReporting.start(None)
# fuzzinator/tracker/github_tracker.py | akosthekiss/fuzzinator | BSD-3-Clause
# Copyright (c) 2016-2022 Renata Hodovan, Akos Kiss.
#
# Licensed under the BSD 3-Clause License
# <LICENSE.rst or https://opensource.org/licenses/BSD-3-Clause>.
# This file may not be copied, modified, or distributed except
# according to those terms.
try:
    # FIXME: very nasty, but a recent PyGithub version began to depend on
    # pycrypto transitively, which is a PITA on Windows (can easily fail with an
    # ``ImportError: No module named 'winrandom'``) -- so, we just don't care
    # for now if we cannot load the github module at all. This workaround just
    # postpones the error to the point when ``GithubTracker`` is actually used,
    # so be warned, don't do that on Windows!
    from github import Github, GithubException
except ImportError:
    pass

from .tracker import Tracker, TrackerError


class GithubTracker(Tracker):
    """
    GitHub_ issue tracker.

    .. _GitHub: https://github.com/

    **Mandatory parameter of the issue tracker:**

    - ``repository``: repository name in user/repo format.

    **Optional parameter of the issue tracker:**

    - ``token``: a personal access token for authenticating.

    **Example configuration snippet:**

    .. code-block:: ini

        [sut.foo]
        tracker=fuzzinator.tracker.GithubTracker

        [sut.foo.tracker]
        repository=alice/foo
        token=1234567890123456789012345678901234567890
    """

    def __init__(self, *, repository, token=None):
        self.repository = repository
        self.ghapi = Github(login_or_token=token)
        self.project = self.ghapi.get_repo(repository)

    def find_duplicates(self, *, title):
        try:
            issues = list(self.ghapi.search_issues('repo:{repository} is:issue is:open {title}'.format(repository=self.repository, title=title)))
            return [(issue.html_url, issue.title) for issue in issues]
        except GithubException as e:
            raise TrackerError('Finding possible duplicates failed') from e

    def report_issue(self, *, title, body):
        try:
            new_issue = self.project.create_issue(title=title, body=body)
            return new_issue.html_url
        except GithubException as e:
            raise TrackerError('Issue reporting failed') from e
# qmdz_const.py | cygnushan/measurement | MIT
# -*- coding: utf-8 -*-
import sys
import os

from init_op import read_config

# ROOT_PATH = os.path.split(os.path.realpath(__file__))[0]
if getattr(sys, 'frozen', None):
    ROOT_DIR = os.path.dirname(sys.executable)
else:
    ROOT_DIR = os.path.dirname(__file__)

VI_CONF_PATH = ROOT_DIR + "\conf\VI_CONF.ini"
ST_CONF_PATH = ROOT_DIR + "\conf\ST_CONF.ini"
SC_CONF_PATH = ROOT_DIR + "\conf\SC_CONF.ini"
SYS_CONF_PATH = ROOT_DIR + "\conf\SYS_CONF.ini"

vrange_dict = {0:"AUTO", 1:"1e-6", 2:"10e-6", 3:"100e-6", 4:"1e-3", 5:"10e-3",
               6:"100e-3", 7:"1", 8:"10", 9:"210"}

irange_dict = {0:"AUTO", 1:"10e-9", 2:"100e-9", 3:"1e-6", 4:"10e-6", 5:"100e-6",
               6:"1e-3", 7:"10e-3", 8:"100e-3", 9:"1"}

gas_coef = {0:1.000, 1:1.400, 2:0.446, 3:0.785, 4:0.515, 5:0.610, 6:0.500,
            7:0.250, 8:0.410, 9:0.350, 10:0.300, 11:0.250, 12:0.260, 13:1.000,
            14:0.740, 15:0.790, 16:1.010, 17:1.000, 18:1.400, 19:1.400, 20:1.000,
            21:0.510, 22:0.990, 23:0.710, 24:1.400, 25:0.985, 26:0.630, 27:0.280,
            28:0.620, 29:1.360}

res_range = {0:"100", 1:"1e3", 2:"10e3", 3:"100e3", 4:"1e6", 5:"10e6", 6:"100e6", 7:"200e6"}

res_det = 0

VI_ILIST = []
IV_VLIST = []
VI_GAS = []

ST_GAS_AUTO = [0,0,0,0,0,0,0,0]
ST_GAS_MODE = 0  # 0: automatic control, 1: manual
SC_GAS_MODE = 0  # 0: automatic control, 1: manual
SC_FLOW1 = []
SC_FLOW2 = []
SC_FLOW3 = []
SC_GAS_PARA = []

hold_time = 60
low_offset = 0.2
high_offset = 1
up_slot = 1
down_slot = 1
critical_temp = 500
measure_times = 1
temp_list = []

Auto_Range = 1

# 2400 instrument settings (global variables)
MEAS_MODE = 0    # 0: 2-wire, 1: 4-wire
OUTPUT_MODE = 0  # 0: pulsed output, 1: continuous output
VI_MODE = 1

# Measurement time periods
TIME_t1 = 0
TIME_t2 = 0
TIME_t3 = 0
TIME_t4 = 0
TIME_SUM = 0

# [flowmeter1 state, flow1 value, flowmeter2 state, flow2 value,
#  flowmeter3 state, flow3 value, air pump state, air flow value]
t1_gas = []
t2_gas = []
t3_gas = []
t4_gas = []

flowmeter1_state = 0
flowmeter2_state = 0
flowmeter3_state = 0
airpump_state = 0

color_list = ["Aqua","Black","Fuchsia","Gray","Green","Lime","Maroon","Navy",
              "Red","Silver","Teal","Yellow","Blue","Olive","Purple","White"]

PARA_NAME = ['SteP','HIAL','LoAL','HdAL','LdAL','AHYS','CtrL','M5',
             'P','t','CtI','InP','dPt','SCL','SCH','AOP',
             'Scb','OPt','OPL','OPH','AF','RUNSTA','Addr','FILt',
             'AmAn','Loc','c01','t01','c02','t02','c03','t03']

PARA_DEFAULT = [1,8000,-1960,9999,9999,2,3,50,65,20,2,0,1,0,
                5000,5543,0,0,0,100,6,12,1,10,27,808]

def get_range(key):
    key_value = read_config(SYS_CONF_PATH, 'HMTS48', key)
    return key_value

flow1_range = int(get_range('flow1_range'))
flow2_range = int(get_range('flow2_range'))
flow3_range = int(get_range('flow3_range'))
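The flow-meter ranges are resolved through `read_config` from `init_op`, which is project code not shown here. A minimal sketch of what such a helper presumably does, built only on the standard library (the section name `HMTS48` and key `flow1_range` mirror the calls above; the helper's exact semantics are an assumption):

```python
# Hedged sketch: read one key from one section of an INI file,
# the way read_config(SYS_CONF_PATH, 'HMTS48', key) is used above.
import configparser
import os
import tempfile

def read_config(path, section, key):
    parser = configparser.ConfigParser()
    parser.read(path)
    return parser.get(section, key)

# demo against a throwaway INI file
with tempfile.NamedTemporaryFile('w', suffix='.ini', delete=False) as f:
    f.write('[HMTS48]\nflow1_range = 500\n')
    ini_path = f.name

result = read_config(ini_path, 'HMTS48', 'flow1_range')
print(result)  # 500 (as a string; callers wrap it in int())
os.unlink(ini_path)
```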
# project/repository/user.py | tobiasaditya/fastapi-blog | MIT
from typing import List
from fastapi import APIRouter
from fastapi.params import Depends
from fastapi import HTTPException, status
from sqlalchemy.orm.session import Session

from project import schema, models, database, hashing

router = APIRouter(
    prefix="/user",
    tags=['Users']
)


@router.post('/new')
def create_user(request: schema.User, db: Session = Depends(database.get_db)):
    hashed_pass = hashing.get_password_hash(request.password)
    new_user = models.User(name=request.name, username=request.username, password=hashed_pass)
    db.add(new_user)
    db.commit()
    db.refresh(new_user)
    return request


@router.get('/find', response_model=List[schema.showUser])
def show_user_all(db: Session = Depends(database.get_db)):
    all_users = db.query(models.User).all()
    return all_users


@router.get('/find/{id}', response_model=schema.showUser)
def show_user_id(id: int, db: Session = Depends(database.get_db)):
    selected_user = db.query(models.User).filter(models.User.id == id).first()
    if not selected_user:
        raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail=f"User {id} not found.")
    return selected_user


# @router.put('/{id}')
# def update_project_id(id: int, request: schema.Project, db: Session = Depends(database.get_db)):
#     # Search for the project's id
#     selected_project = db.query(models.Project).filter(models.Project.id == id)
#     if not selected_project.first():
#         raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail=f"Project {id} not found.")
#     selected_project.update(dict(request))
#     return {'status': f'project {id} updated'}


# @router.delete('/{id}')
# def delete_project_id(id: int, db: Session = Depends(database.get_db)):
#     selected_project = db.query(models.Project).filter(models.Project.id == id).first()
#     if not selected_project:
#         raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail=f"Project {id} not found.")
#     db.delete(selected_project)
#     db.commit()
#     return {'status': f'delete project_id {id} successful'}
# Ports.py | bullgom/pysnn2 | MIT
AP = "AP"
BP = "BP"
ARRIVE = "ARRIVE"
NEUROMODULATORS = "NEUROMODULATORS"
TARGET = "TARGET"
OBSERVE = "OBSERVE"
SET_FREQUENCY = "SET_FREQUENCY"
DEACTIVATE = "DEACTIVATE"
ENCODE_INFORMATION = "ENCODE_INFORMATION"
| 13.625 | 41 | 0.724771 | 22 | 218 | 7 | 0.5 | 0.155844 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.151376 | 218 | 15 | 42 | 14.533333 | 0.832432 | 0 | 0 | 0 | 0 | 0 | 0.362385 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a25ad39526f4933af2df581028f2688cffce6933 | 2,117 | py | Python | pychron/fractional_loss_calculator.py | ASUPychron/pychron | dfe551bdeb4ff8b8ba5cdea0edab336025e8cc76 | [
"Apache-2.0"
] | 31 | 2016-03-07T02:38:17.000Z | 2022-02-14T18:23:43.000Z | pychron/fractional_loss_calculator.py | ASUPychron/pychron | dfe551bdeb4ff8b8ba5cdea0edab336025e8cc76 | [
"Apache-2.0"
] | 1,626 | 2015-01-07T04:52:35.000Z | 2022-03-25T19:15:59.000Z | pychron/fractional_loss_calculator.py | UIllinoisHALPychron/pychron | f21b79f4592a9fb9dc9a4cb2e4e943a3885ededc | [
"Apache-2.0"
] | 26 | 2015-05-23T00:10:06.000Z | 2022-03-07T16:51:57.000Z | # ===============================================================================
# Copyright 2019 ross
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ===============================================================================
from numpy import linspace
from traits.api import HasTraits, Int, Float, Instance, on_trait_change
from traitsui.api import View, VGroup, UItem, Item, HGroup

from pychron.graph.graph import Graph
from pychron.processing.argon_calculations import calculate_fractional_loss


class FractionalLossCalculator(HasTraits):
    graph = Instance(Graph)

    temp = Float(475)
    min_age = Int(1)
    max_age = Int(1000)
    radius = Float(0.1)

    def __init__(self, *args, **kw):
        super(FractionalLossCalculator, self).__init__(*args, **kw)

        self.graph = g = Graph()
        g.new_plot()

        xs, ys = self._calculate_data()
        g.new_series(xs, ys)

    def _calculate_data(self):
        xs = linspace(self.min_age, self.max_age)
        fs = [calculate_fractional_loss(ti, self.temp, self.radius) for ti in xs]
        return xs, fs

    @on_trait_change("temp, radius, max_age, min_age")
    def _replot(self):
        xs, ys = self._calculate_data()
        self.graph.set_data(xs)
        self.graph.set_data(ys, axis=1)

    def traits_view(self):
        a = HGroup(Item("temp"), Item("radius"), Item("min_age"), Item("max_age"))
        v = View(VGroup(a, UItem("graph", style="custom")))
        return v


if __name__ == "__main__":
    f = FractionalLossCalculator()
    f.configure_traits()
| 34.145161 | 82 | 0.616911 | 269 | 2,117 | 4.684015 | 0.468401 | 0.047619 | 0.020635 | 0.025397 | 0.033333 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011014 | 0.185168 | 2,117 | 61 | 83 | 34.704918 | 0.71942 | 0.359471 | 0 | 0.060606 | 0 | 0 | 0.054518 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.121212 | false | 0 | 0.151515 | 0 | 0.515152 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
# config/urls.py | erik-sn/tagmap | MIT
from django.conf import settings
from django.conf.urls import url, include
from django.contrib import admin

from api.views import index

urlpatterns = [
    url(r'^admin/', admin.site.urls),
    url(r'^api/', include('api.urls')),
]

# troubleshooting tool
if settings.TOOLBAR:
    import debug_toolbar
    urlpatterns = [
        url(r'^__debug__/', include(debug_toolbar.urls)),
    ] + urlpatterns

"""
If we are serving the base html file through django then
route all non-matching urls to the html file where they
will be processed on the client by the react application
"""
if settings.SERVER_TYPE.upper() == 'DJANGO':
    urlpatterns += [url(r'^.*$', index)]
# prplatform/exercises/migrations/0002_auto_20180508_1200.py | piehei/prplatform | MIT
# Generated by Django 2.0.4 on 2018-05-08 12:00
from django.db import migrations


class Migration(migrations.Migration):

    dependencies = [
        ('courses', '0013_auto_20180426_0754'),
        ('exercises', '0001_initial'),
    ]

    operations = [
        migrations.RenameModel(
            old_name='GeneralExercise',
            new_name='SubmissionExercise',
        ),
    ]
# spirit/utils/paginator/infinite_paginator.py | rterehov/Spirit | MIT
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.http import Http404

from infinite_scroll_pagination.paginator import SeekPaginator, EmptyPage


def paginate(request, query_set, lookup_field, per_page=15, page_var='value'):
    # TODO: remove
    page_pk = request.GET.get(page_var, None)
    paginator = SeekPaginator(query_set, per_page=per_page, lookup_field=lookup_field)

    # First page
    if page_pk is None:
        return paginator.page()

    try:
        obj = query_set.model.objects.get(pk=page_pk)
    except query_set.model.DoesNotExist:
        raise Http404()

    value = getattr(obj, lookup_field)

    try:
        page = paginator.page(value=value, pk=page_pk)
    except EmptyPage:
        raise Http404()

    return page
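`paginate` delegates the page arithmetic to `SeekPaginator`, which pages by the last seen value of `lookup_field` rather than by offset. As an illustration only (not the library's implementation), keyset pagination on a plain sorted list looks like this:

```python
# Illustrative sketch of keyset ("seek") pagination: the page boundary
# is the last value already seen, not a numeric offset, so new rows
# inserted ahead of the boundary cannot shift items between pages.
def seek_page(items, per_page, last_value=None):
    if last_value is None:          # first page: no boundary yet
        return items[:per_page]
    remaining = [item for item in items if item > last_value]
    return remaining[:per_page]

data = [10, 20, 30, 40, 50]
print(seek_page(data, 2))                  # [10, 20]
print(seek_page(data, 2, last_value=20))   # [30, 40]
```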
# lib/exaproxy/configuration.py | oriolarcas/exaproxy | BSD-2-Clause
# encoding: utf-8
"""
configuration.py
Created by Thomas Mangin on 2011-11-29.
Copyright (c) 2011-2013 Exa Networks. All rights reserved.
"""
# NOTE: reloading mid-program not possible
import os
import sys
import logging
import pwd
import math
import socket
import struct
_application = None
_config = None
_defaults = None
class ConfigurationError (Exception):
pass
_syslog_name_value = {
'CRITICAL' : logging.CRITICAL,
'ERROR' : logging.ERROR,
'WARNING' : logging.WARNING,
'INFO' : logging.INFO,
'DEBUG' : logging.DEBUG,
}
_syslog_value_name = {
logging.CRITICAL : 'CRITICAL',
logging.ERROR : 'ERROR',
logging.WARNING : 'WARNING',
logging.INFO : 'INFO',
logging.DEBUG : 'DEBUG',
}
class NoneDict (dict):
def __getitem__ (self,name):
return None
nonedict = NoneDict()
home = os.path.normpath(sys.argv[0]) if sys.argv[0].startswith('/') else os.path.normpath(os.path.join(os.getcwd(),sys.argv[0]))
class value (object):
    @staticmethod
    def nop (_):
        return _

    @staticmethod
    def syslog (log):
        if log not in _syslog_name_value:
            raise TypeError('invalid log level %s' % log)
        return _syslog_name_value[log]

    @staticmethod
    def root (path):
        roots = home.split(os.sep)
        location = []
        for index in range(len(roots)-1, -1, -1):
            if roots[index] in ('lib', 'bin'):
                if index:
                    location = roots[:index]
                break
        root = os.path.join(*location)
        paths = [
            os.path.normpath(os.path.join(os.path.join(os.sep, root, path))),
            os.path.normpath(os.path.expanduser(value.unquote(path))),
            os.path.normpath(os.path.join('/', path)),
            os.path.normpath(os.path.join('/', 'usr', path)),
        ]
        return paths

    @staticmethod
    def integer (_):
        value = int(_)
        if value <= 0:
            raise TypeError('the value must be positive')
        return value

    @staticmethod
    def lowunquote (_):
        return _.strip().strip('\'"').lower()

    @staticmethod
    def unquote (_):
        return _.strip().strip('\'"')

    @staticmethod
    def boolean (_):
        return _.lower() in ('1', 'yes', 'on', 'enable', 'true')

    @staticmethod
    def list (_):
        return _.split()

    @staticmethod
    def ports (_):
        try:
            return [int(x) for x in _.split()]
        except ValueError:
            raise TypeError('can not parse the data as a list of ports')

    @staticmethod
    def methods (_):
        return _.upper().split()

    @staticmethod
    def user (_):
        try:
            pwd.getpwnam(_)
            # uid = answer[2]
        except KeyError:
            raise TypeError('user %s is not found on this system' % _)
        return _

    @staticmethod
    def folder (path):
        paths = value.root(path)
        options = [path for path in paths if os.path.exists(path)]
        if not options:
            raise TypeError('%s does not exist' % path)
        first = options[0]
        if not first:
            raise TypeError('%s does not exist' % first)
        return first

    @staticmethod
    def conf (path):
        first = value.folder(path)
        if not os.path.isfile(first):
            raise TypeError('%s is not a file' % path)
        return first

    @staticmethod
    def resolver (path):
        global _application
        paths = value.root('etc/%s/dns/resolv.conf' % _application)
        paths.append(os.path.normpath(os.path.join('/', 'etc', 'resolv.conf')))
        paths.append(os.path.normpath(os.path.join('/', 'var', 'run', 'resolv.conf')))
        for resolver in paths:
            if os.path.exists(resolver):
                with open(resolver) as r:
                    if 'nameserver' in (line.strip().split(None, 1)[0].lower() for line in r.readlines() if line.strip()):
                        return resolver
        raise TypeError('resolv.conf can not be found (are you using DHCP without any network setup ?)')

    @staticmethod
    def exe (path):
        argv = path.split(' ', 1)
        program = value.conf(argv.pop(0))
        if not os.access(program, os.X_OK):
            raise TypeError('%s is not an executable' % program)
        return program if not argv else '%s %s' % (program, argv[0])

    @staticmethod
    def services (string):
        try:
            services = []
            for service in value.unquote(string).split():
                host, port = service.split(':')
                services.append((host, int(port)))
            return services
        except ValueError:
            raise TypeError('can not parse the data as a list of host:port services')

    @staticmethod
    def ranges (string):
        try:
            ranges = []
            for service in value.unquote(string).split():
                network, netmask = service.split('/')
                if ':' in network:
                    high, low = struct.unpack('!QQ', socket.inet_pton(socket.AF_INET6, network))
                    start = (high << 64) + low
                    end = start + pow(2, 128-int(netmask)) - 1
                    ranges.append((6, start, end))
                else:
                    start = struct.unpack('!L', socket.inet_pton(socket.AF_INET, network))[0]
                    end = start + pow(2, 32-int(netmask)) - 1
                    ranges.append((4, start, end))
            return ranges
        except ValueError:
            raise TypeError('can not parse the data as an IP range')

    @staticmethod
    def redirector (name):
        if name == 'url' or name.startswith('icap://'):
            return name
        raise TypeError('invalid redirector protocol %s, options are url or icap://' % name)
class string (object):
    @staticmethod
    def nop (_):
        return _

    @staticmethod
    def syslog (log):
        if log not in _syslog_value_name:
            raise TypeError('invalid log level %s' % log)
        return _syslog_value_name[log]

    @staticmethod
    def quote (_):
        return "'%s'" % str(_)

    @staticmethod
    def lower (_):
        return str(_).lower()

    @staticmethod
    def path (path):
        split = sys.argv[0].split('lib/%s' % _application)
        if len(split) > 1:
            prefix = os.sep.join(split[:1])
            if prefix and path.startswith(prefix):
                path = path[len(prefix):]
        home = os.path.expanduser('~')
        if path.startswith(home):
            return "'~%s'" % path[len(home):]
        return "'%s'" % path

    @staticmethod
    def list (_):
        return "'%s'" % ' '.join((str(x) for x in _))

    @staticmethod
    def services (_):
        l = ' '.join(('%s:%d' % (host, port) for host, port in _))
        return "'%s'" % l

    @staticmethod
    def ranges (_):
        def convert ():
            for (proto, start, end) in _:
                bits = int(math.log(end-start+1, 2))
                if proto == 4:
                    network = socket.inet_ntop(socket.AF_INET, struct.pack('!L', start))
                    yield '%s/%d' % (network, 32-bits)
                else:
                    high = struct.pack('!Q', start >> 64)
                    # keep the low 64 bits of the 128-bit address
                    low = struct.pack('!Q', start & 0xFFFFFFFFFFFFFFFF)
                    network = socket.inet_ntop(socket.AF_INET6, high+low)
                    yield '%s/%d' % (network, 128-bits)
        return "'%s'" % ' '.join(convert())
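The IPv4 arithmetic in `value.ranges` and `string.ranges` round-trips between CIDR text and numeric `(proto, start, end)` tuples. Extracted into a standalone sketch (same stdlib calls as above, outside the classes):

```python
# Round-trip of the IPv4 branch: "network/mask" text -> numeric range -> text.
import math
import socket
import struct

def parse_v4(cidr):
    network, netmask = cidr.split('/')
    start = struct.unpack('!L', socket.inet_pton(socket.AF_INET, network))[0]
    end = start + pow(2, 32 - int(netmask)) - 1
    return (4, start, end)

def format_v4(start, end):
    bits = int(math.log(end - start + 1, 2))   # size of the block, in address bits
    network = socket.inet_ntop(socket.AF_INET, struct.pack('!L', start))
    return '%s/%d' % (network, 32 - bits)

proto, start, end = parse_v4('10.0.0.0/24')
print(format_v4(start, end))  # 10.0.0.0/24
```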
import ConfigParser
class Store (dict):
def __getitem__ (self,key):
return dict.__getitem__(self,key.replace('_','-'))
def __setitem__ (self,key,value):
return dict.__setitem__(self,key.replace('_','-'),value)
def __getattr__ (self,key):
return dict.__getitem__(self,key.replace('_','-'))
def __setattr__ (self,key,value):
return dict.__setitem__(self,key.replace('_','-'),value)
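The Store class above lets configuration keys be read and written interchangeably with underscores or dashes. A minimal Python 3 usage sketch (the `tcp_port` key is a made-up example):

```python
class Store(dict):
    """Dict whose keys treat '_' and '-' as equivalent (mirrors Store above)."""
    def __getitem__(self, key):
        return dict.__getitem__(self, key.replace('_', '-'))
    def __setitem__(self, key, value):
        return dict.__setitem__(self, key.replace('_', '-'), value)
    def __getattr__(self, key):
        return dict.__getitem__(self, key.replace('_', '-'))
    def __setattr__(self, key, value):
        return dict.__setitem__(self, key.replace('_', '-'), value)

s = Store()
s.tcp_port = 8080              # stored under the key 'tcp-port'
print(s['tcp-port'], s.tcp_port)
```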
def _configuration (conf):
    location = os.path.join(os.sep, *home.split(os.sep))
while location and location != '/':
location, directory = os.path.split(location)
if directory in ('lib','bin'):
break
_conf_paths = []
if conf:
_conf_paths.append(os.path.abspath(os.path.normpath(conf)))
if location:
_conf_paths.append(os.path.normpath(os.path.join(location,'etc',_application,'%s.conf' % _application)))
_conf_paths.append(os.path.normpath(os.path.join('/','etc',_application,'%s.conf' % _application)))
_conf_paths.append(os.path.normpath(os.path.join('/','usr','etc',_application,'%s.conf' % _application)))
configuration = Store()
ini = ConfigParser.ConfigParser()
ini_files = [path for path in _conf_paths if os.path.exists(path)]
if ini_files:
ini.read(ini_files[0])
for section in _defaults:
default = _defaults[section]
for option in default:
convert = default[option][0]
try:
proxy_section = '%s.%s' % (_application,section)
env_name = '%s.%s' % (proxy_section,option)
rep_name = env_name.replace('.','_')
if env_name in os.environ:
conf = os.environ.get(env_name)
elif rep_name in os.environ:
conf = os.environ.get(rep_name)
else:
try:
# raise and set the default
conf = value.unquote(ini.get(section,option,nonedict))
except (ConfigParser.NoSectionError,ConfigParser.NoOptionError):
# raise and set the default
conf = value.unquote(ini.get(proxy_section,option,nonedict))
# name without an = or : in the configuration and no value
if conf is None:
conf = default[option][2]
except (ConfigParser.NoSectionError,ConfigParser.NoOptionError):
conf = default[option][2]
try:
configuration.setdefault(section,Store())[option] = convert(conf)
except TypeError,error:
raise ConfigurationError('invalid value for %s.%s : %s (%s)' % (section,option,conf,str(error)))
return configuration
def load (application=None,defaults=None,conf=None):
global _application
global _defaults
global _config
if _config:
return _config
if conf is None:
raise RuntimeError('You can not have an import using load() before main() initialised it')
_application = application
_defaults = defaults
_config = _configuration(conf)
return _config
def default ():
for section in sorted(_defaults):
for option in sorted(_defaults[section]):
values = _defaults[section][option]
default = "'%s'" % values[2] if values[1] in (string.list,string.path,string.quote) else values[2]
yield '%s.%s.%s %s: %s. default (%s)' % (_application,section,option,' '*(20-len(section)-len(option)),values[3],default)
def ini (diff=False):
for section in sorted(_config):
if section in ('proxy','debug'):
continue
header = '\n[%s]' % section
for k in sorted(_config[section]):
v = _config[section][k]
if diff and _defaults[section][k][0](_defaults[section][k][2]) == v:
continue
if header:
print header
header = ''
print '%s = %s' % (k,_defaults[section][k][1](v))
def env (diff=False):
print
for section,values in _config.items():
if section in ('proxy','debug'):
continue
for k,v in values.items():
if diff and _defaults[section][k][0](_defaults[section][k][2]) == v:
continue
if _defaults[section][k][1] == string.quote:
print "%s.%s.%s='%s'" % (_application,section,k,v)
continue
print "%s.%s.%s=%s" % (_application,section,k,_defaults[section][k][1](v))
# ==== expenses/migrations/0001_initial.py (inducer/expensely, licenses: MIT, Unlicense) ====
# -*- coding: utf-8 -*-
# Generated by Django 1.10.1 on 2016-09-24 23:01
from __future__ import unicode_literals
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
import django.utils.timezone
class Migration(migrations.Migration):
initial = True
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='Account',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('symbol', models.CharField(max_length=20)),
('name', models.CharField(max_length=200)),
('category', models.CharField(choices=[('fund', 'Funding source'), ('exp', 'Expenses'), ('other', 'Other')], max_length=10)),
],
options={
'ordering': ['symbol'],
},
),
migrations.CreateModel(
name='AccountGroup',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=200)),
],
),
migrations.CreateModel(
name='Currency',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('symbol', models.CharField(max_length=10)),
('name', models.CharField(max_length=200)),
],
options={
'verbose_name_plural': 'currencies',
},
),
migrations.CreateModel(
name='Entry',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('valid_date', models.DateField(default=django.utils.timezone.now)),
('description', models.CharField(max_length=200)),
('create_date', models.DateTimeField(default=django.utils.timezone.now)),
],
options={
'ordering': ['valid_date', 'description'],
'verbose_name_plural': 'entries',
},
),
migrations.CreateModel(
name='EntryCategory',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=200)),
],
options={
'verbose_name_plural': 'entry categories',
},
),
migrations.CreateModel(
name='EntryComment',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('create_date', models.DateTimeField(default=django.utils.timezone.now)),
('comment', models.TextField()),
('creator', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
('entry', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='expenses.Entry')),
],
),
migrations.CreateModel(
name='EntryComponent',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('amount', models.DecimalField(decimal_places=2, max_digits=19)),
('account', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='entry_components', to='expenses.Account')),
('entry', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='components', to='expenses.Entry')),
],
options={
'ordering': ['amount'],
},
),
migrations.CreateModel(
name='EntryValidation',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('create_date', models.DateTimeField(default=django.utils.timezone.now)),
('comments', models.TextField(blank=True, null=True)),
('entry_component', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='expenses.EntryComponent')),
('validator', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
),
migrations.AddField(
model_name='entry',
name='category',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='expenses.EntryCategory'),
),
migrations.AddField(
model_name='entry',
name='creator',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL),
),
migrations.AddField(
model_name='account',
name='currency',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='expenses.Currency'),
),
migrations.AddField(
model_name='account',
name='group',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='expenses.AccountGroup'),
),
migrations.AddField(
model_name='account',
name='guardian',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL),
),
]
# ==== src/server.py (awsassets/superfish, license: MIT) ====
import flask
from flask import *
def Serve(email_form, password_form, rd, dic, host="0.0.0.0", port="8080"):
app = Flask(__name__, template_folder="../clone")
# login storage
class Login:
email = ""
pwd = ""
ip = ""
# forms
@app.get("/")
def index():
return render_template("index.html")
@app.post("/login")
def login():
Login.ip = request.remote_addr
Login.email = request.form.get(email_form)
Login.pwd = request.form.get(password_form)
ouputfunc = dic["func"]
res = dic["res"]
ouputfunc(res=res, Login=Login)
return flask.redirect(rd)
print("\n-= Flask Logs =-")
    app.run(host=host, port=port)

# ==== SOURCE/test_ex01.py (PinkPhayate/Information_Access_Design, license: MIT) ====
import re, io, os.path, os
def remove_tag(str):
alldigit = re.compile(r"^<.+")
if alldigit.search(str) != None:
return False
return True
for line in open('./../text_list', "r"):
filename = './../TXT/tragedies/'+line.rstrip()
print filename
f = open("./../TXT/test_"+line.rstrip(),"w")
    for doc_line in io.open(filename, "r", encoding="utf-16"):
        if remove_tag(doc_line):
            # remove signature characters (punctuation)
            doc_line = re.sub(re.compile("[!-/:-@[-`{-~;?]"), "", doc_line).rstrip()
            f.write(doc_line.encode('utf-8'))
f.close()
# ==== package_name/__init__.py (netserf/template-python-repo, license: MIT) ====
# `name` is the name of the package as used for `pip install package`
name = "package-name"
# `path` is the name of the package for `import package`
path = name.lower().replace("-", "_").replace(" ", "_")
version = "0.1.0"
author = "Author Name"
author_email = ""
description = "" # summary
license = "MIT"
# ==== home/pi/_testing/logging-test.py (rc-bellergy/pxpi, license: MIT) ====
#!/usr/bin/env python
import logging
import logging.handlers
import os
# Logging to file
dir_path = os.path.dirname(os.path.realpath(__file__))
logging.basicConfig(filename=dir_path + "/test.log", format='%(asctime)s - %(message)s', level=logging.INFO, filemode='w')
# Logging messages to the console
console = logging.StreamHandler()
logger = logging.getLogger()
logger.addHandler(console)
# Logging test
logging.info("** Testing **")

# ==== test_client.py (ericjmartin/slackview, license: MIT) ====
import os
from slack_sdk.web import WebClient
from slack_sdk.socket_mode import SocketModeClient
# Initialize SocketModeClient with an app-level token + WebClient
client = SocketModeClient(
    # This app-level token will be used only for establishing a connection
    app_token=os.environ.get("SLACK_APP_TOKEN"),
    # You will be using this WebClient for performing Web API calls in listeners
    web_client=WebClient(token=os.environ.get("SLACK_BOT_TOKEN"))  # xoxb-111-222-xyz
)
from slack_sdk.socket_mode.response import SocketModeResponse
from slack_sdk.socket_mode.request import SocketModeRequest
def process(client: SocketModeClient, req: SocketModeRequest):
print('here')
if req.type == "events_api":
# Acknowledge the request anyway
response = SocketModeResponse(envelope_id=req.envelope_id)
client.send_socket_mode_response(response)
# Add a reaction to the message if it's a new message
if req.payload["event"]["type"] == "message" \
and req.payload["event"].get("subtype") is None:
client.web_client.reactions_add(
name="eyes",
channel=req.payload["event"]["channel"],
timestamp=req.payload["event"]["ts"],
)
# Add a new listener to receive messages from Slack
# You can add more listeners like this
client.socket_mode_request_listeners.append(process)
# Establish a WebSocket connection to the Socket Mode servers
client.connect()
# Just not to stop this process
from threading import Event
Event().wait()

# ==== mod_equations-master/mod_equations.py (userElaina/hg8, license: MIT) ====
#!/usr/bin/python
# -*- coding: utf-8 -*-
'''
Author @55-AA
19 July, 2016
'''
import copy
def gcd(a, b):
"""
Return the greatest common denominator of integers a and b.
gmpy2.gcd(a, b)
"""
while b:
a, b = b, a % b
return a
def lcm(a, b):
return a * b / (gcd(a, b))
def egcd(a, b):
"""
ax + by = 1
ax ≡ 1 mod b
Return a 3-element tuple (g, x, y), the g = gcd(a, b)
gmpy2.gcdext(a, b)
"""
if a == 0:
return (b, 0, 1)
else:
g, y, x = egcd(b % a, a)
return (g, x - (b // a) * y, y)
def mod_inv(a, m):
"""
ax ≡ 1 mod m
gmpy2.invert(a, m)
"""
g, x, y = egcd(a, m)
assert g == 1
return x % m
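egcd/mod_inv above implement the extended Euclidean algorithm. The originals are Python 2; a behaviourally equivalent Python 3 check:

```python
def egcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if a == 0:
        return (b, 0, 1)
    g, y, x = egcd(b % a, a)
    return (g, x - (b // a) * y, y)

def mod_inv(a, m):
    """Return x with a*x ≡ 1 (mod m); requires gcd(a, m) == 1."""
    g, x, _ = egcd(a, m)
    assert g == 1
    return x % m

print(mod_inv(3, 26))   # 9, since 3 * 9 = 27 ≡ 1 (mod 26)
```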
def int2mem(x):
"""
0x12233 => '\x33\x22\x01'
"""
pad_even = lambda x : ('', '0')[len(x)%2] + x
x = list(pad_even(format(x, 'x')).decode('hex'))
x.reverse()
return ''.join(x)
def mem2int(x):
"""
'\x33\x22\x01' => 0x12233
"""
x = list(x)
x.reverse()
return int(''.join(x).encode('hex'), 16)
###########################################################
# class
###########################################################
class GaussMatrix:
    """
    A*X ≡ B (mod p), where p is a positive integer.

    Solves a system of modular linear equations by Gaussian elimination:
    first reduce to upper-triangular form, then back-substitute.
        When r(A) < n there are always multiple solutions;
        when r(A) == n there are multiple solutions or a unique one;
        when r(A) != r(A~) there is no solution.
    r(A) is the rank of the coefficient matrix, r(A~) the rank of the
    augmented matrix, and n the number of unknowns.

    http://www.docin.com/p-1063811671.html discusses the LU-decomposition
    approach for gcd(|A|, m) = 1; this implementation also covers the
    gcd(|A|, m) > 1 case.

    Reduction rules:
        1. Prefer coefficients coprime with the modulus.
        2. Otherwise, add a multiple of another row so that the resulting
           coefficient has the smallest GCD with the modulus.
        3. Move the coefficient obtained in 1 or 2 onto the diagonal.

    Constructor arguments:
        matrix: augmented matrix of the system (last column holds the
            constant terms), e.g.
            matrix = [
                [ 69, 75, 78, 36, 58],
                [ 46, 68, 51, 26, 42],
                [ 76, 40, 42, 49, 11],
                [ 11, 45, 2, 45, 1],
                [ 15, 67, 60, 14, 72],
                [ 76, 67, 73, 56, 58],
                [ 67, 15, 68, 54, 75],
            ]
        mod: the modulus

    Methods:
        gauss(): solve the system

    Output attributes:
        error_str: error message
        count: number of solutions
    """
def __init__(self, matrix, mod):
self.matrix = copy.deepcopy(matrix)
self.d = None
self.r = len(matrix)
self.c = len(matrix[0])
self.N = len(matrix[0]) - 1
self.mod = mod
self.count = 1
self.error_str = "unknown error"
def verify_solution(self, solution):
for d in self.matrix:
result = 0
for r in map(lambda x,y:0 if None == y else x*y, d, solution):
result += r
if (result % self.mod) != ((d[-1]) % self.mod):
return 0
return 1
def swap_row(self, ra, rb):
(self.d[ra], self.d[rb]) = (self.d[rb], self.d[ra])
def swap_col(self, ca, cb):
for j in range(self.r):
(self.d[j][ca], self.d[j][cb]) = (self.d[j][cb], self.d[j][ca])
def inv_result(self, r, n):
        """
        Solve for the n-th unknown; r holds the solutions already found,
        shaped like [None, None, ..., x_{n+1}, ...].

        a*x ≡ b (mod m) is solvable iff gcd(a, m) | b: it always has a
        solution when a and m are coprime, and otherwise only when b is
        divisible by gcd(a, m).
        Solutions have the form x0 + k*(m/gcd(a, m)), where x0 is the
        smallest non-negative particular solution and k any integer.
        Returns [x0, x1, ..., xn] with x0 < x1 < ... < xn < m.
        """
b = self.d[n][self.N]
a = self.d[n][n]
m = self.mod
k = gcd(a, m)
for j in xrange(n + 1, self.N):
b = (b - (self.d[n][j] * r[j] % m)) % m
if 1 == k:
return [mod_inv(a, m) * b % m]
else:
if k == gcd(k, b):
a /= k
b /= k
m /= k
x0 = mod_inv(a, m) * b % m
x = []
for i in xrange(k):
x.append(x0 + m*i)
return x
return None
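inv_result relies on the standard theory for a single congruence a*x ≡ b (mod m): solvable iff gcd(a, m) divides b, with gcd(a, m) distinct solutions modulo m. A Python 3 sketch (the function name is ours; `pow(a, -1, m)` needs Python 3.8+):

```python
from math import gcd

def solve_lin_cong(a, b, m):
    """All x in [0, m) with a*x ≡ b (mod m)."""
    k = gcd(a, m)
    if b % k:
        return []                      # no solution
    a2, b2, m2 = a // k, b // k, m // k
    x0 = pow(a2, -1, m2) * b2 % m2     # smallest particular solution
    return [x0 + m2 * i for i in range(k)]

print(solve_lin_cong(6, 4, 8))   # [2, 6]
```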
def find_min_gcd_row_col(self, i, j):
        # Look for a diagonal coefficient already coprime with the modulus
for k in xrange(i, self.r):
for l in xrange(j, self.c - 1):
if(1 == gcd(self.d[k][l], self.mod)):
return [k, l]
def add_min_gcd(a, b, m):
r = [m, 1]
g = gcd(a, b)
if g:
i = a / g
for j in xrange(i):
g = gcd((a + j * b) % m, m)
if g < r[0]:
r[0] = g
r[1] = j
if g == 1:
break
return r
        # Look for the diagonal coefficient with the smallest GCD after row addition.
        # [GCD after addition, multiplier, row to reduce, row to add, column to reduce]
r = [self.mod, 1, i, i + 1, j]
for k in xrange(i, self.r):
for kk in xrange(k+1, self.r):
for l in range(j, self.c - 1):
rr = add_min_gcd(self.d[k][l], self.d[kk][l], self.mod)
if rr[0] < r[0]:
r[0] = rr[0]
r[1] = rr[1]
r[2] = k
r[3] = kk
r[4] = l
pass
if(1 == rr[0]):
break
g = r[0]
n = r[1]
k = r[2]
kk = r[3]
l = r[4]
if n and g < self.mod:
self.d[k] = map(lambda x, y : (x + n*y)%self.mod, self.d[k], self.d[kk])
return [k, l]
def mul_row(self, i, k, j):
a = self.d[k][j]
b = self.d[i][j]
def get_mul(a, b, m):
k = gcd(a, m)
if 1 == k:
return mod_inv(a, m) * b % m
else:
if k == gcd(k, b):
return mod_inv(a/k, m/k) * (b/k) % (m/k)
return None
if b:
mul = get_mul(a, b, self.mod)
if None == mul:
print_matrix(self.d)
assert(mul != None)
self.d[i] = map(lambda x, y : (y - x*mul) % self.mod, self.d[k], self.d[i])
def gauss(self):
        """
        Returns the solution vectors: unique, multiple, or None (no solution).
        e.g. [[61, 25, 116, 164], [61, 60, 116, 94], [61, 95, 116, 24], [61, 130, 116, 129], [61, 165, 116, 59]]
        """
self.d = copy.deepcopy(self.matrix)
for i in xrange(self.r):
for j in xrange(self.c):
                self.d[i][j] = self.matrix[i][j] % self.mod  # normalise negative coefficients
if self.r < self.N:
            self.d.extend([[0] * self.c for _ in xrange(self.N - self.r)])  # independent zero rows, no list aliasing
        # Reduce to upper-triangular form
index = [x for x in xrange(self.N)]
for i in range(self.N):
tmp = self.find_min_gcd_row_col(i, i)
if(tmp):
self.swap_row(i, tmp[0])
(index[i], index[tmp[1]]) = (index[tmp[1]], index[i])
self.swap_col(i, tmp[1])
else:
self.error_str = "no min"
return None
for k in range(i + 1, self.r):
self.mul_row(k, i, i)
# print_matrix(self.d)
if self.r > self.N:
for i in xrange(self.N, self.r):
for j in xrange(self.c):
if self.d[i][j]:
self.error_str = "r(A) != r(A~)"
return None
        # Determine the number of solutions
for i in xrange(self.N):
self.count *= gcd(self.d[i][i], self.mod)
if self.count > 100:
self.error_str = "solution too more:%d" % (self.count)
return None
        # Back substitution
result = [[None]*self.N]
for i in range(self.N - 1, -1, -1):
new_result = []
for r in result:
ret = self.inv_result(r, i)
if ret:
for rr in ret:
l = r[:]
l[i] = rr
new_result.append(l)
else:
self.error_str = "no inv:i=%d" % (i)
return None
result = new_result
        # Undo the unknown reordering caused by column swaps
for i in xrange(len(result)) :
def xchg(a, b):
result[i][b] = a
map(xchg, result[i][:], index)
return result
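For intuition, here is a brute-force check of a tiny system A*X ≡ B (mod 7), the same problem gauss() solves by elimination (the numbers are made up for illustration):

```python
import itertools

A = [[2, 3], [1, 4]]
B = [1, 2]
mod = 7

solutions = [
    (x0, x1)
    for x0, x1 in itertools.product(range(mod), repeat=2)
    if all((row[0] * x0 + row[1] * x1) % mod == b % mod
           for row, b in zip(A, B))
]
print(solutions)   # [(1, 2)] -- unique, since gcd(det(A), 7) = gcd(5, 7) = 1
```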
###########################################################
# test
###########################################################
def print_array(x):
prn = "\t["
for j in x:
if j:
prn += "%3d, " % j
else:
prn += " 0, "
print prn[:-2]+"],"
def print_matrix(x):
print "["
for i in x:
print_array(i)
print "]"
def random_test(times):
import random
for i in xrange(times):
print "\n============== random test %d ==============\n" % i
mod = random.randint(5, 999)
col = random.randint(2, 30)
row = random.randint(2, 30)
solution = map(lambda x : random.randint(0, mod - 1), [xc for xc in xrange(col)])
matrix = []
for y in xrange(row):
array = map(lambda x : random.randint(0, mod), [xc for xc in xrange(col)])
t = 0
for j in map(lambda x,y:0 if None == y else x*y, array, solution):
t += j
array.append(t % mod)
matrix.append(array)
run_test(mod, solution, matrix)
def static_test_ex():
mod=256
solution=[]
# matrix=[
# [2,3,0,11],
# [1,1,0,6],
# [3,0,1,22],
# ]
matrix=[list(map(int,i.split())) for i in open('m2.txt','r').read().splitlines()]
run_test(mod, solution, matrix)
def static_test():
mod = 26
solution = [23,15,19,13,25,17,24,18,11]
matrix = [
[11,12,7,0,0,0,0,0,0],
[0,0,0,11,12,7,0,0,0],
[0,0,0,0,0,0,11,12,7],
[14,18,23,0,0,0,0,0,0],
[0,0,0,14,18,23,0,0,0],
[0,0,0,0,0,0,14,18,23],
[17,5,19,0,0,0,0,0,0],
[0,0,0,17,5,19,0,0,0],
[0,0,0,0,0,0,17,5,19],
]
for i in xrange(len(matrix)):
t = 0
for j in map(lambda x,y:0 if None == y else x*y, matrix[i], solution):
t += j
matrix[i].append(t % mod)
run_test(mod, solution, matrix)
def run_test(mod, solution, matrix):
print "row = %d, col = %d" % (len(matrix), len(matrix[0])-1)
print "mod = %d" % (mod)
print "solution =", solution
print "matrix ="
print_matrix(matrix)
g = GaussMatrix(matrix, mod)
ret = g.gauss()
if not ret:
print "error:"
print_matrix(g.d)
print "error_str:", g.error_str
else:
print "times:", g.count
print "result:"
print_matrix(ret)
def DSA_comK():
    """
    Reusing the same random k in two DSA signatures leaks the private key x.
    p: an L-bit prime, L a multiple of 64 between 512 and 1024;
    q: a 160-bit prime factor of p - 1;
    g: g = h^((p-1)/q) mod p, where h < p - 1 and h^((p-1)/q) mod p > 1;
    x: x < q, the private key;
    y: y = g^x mod p; (p, q, g, y) is the public key;
    r = (g^k mod p) mod q
    s = (k^(-1) * (HASH(m) + x*r)) mod q
    The signature is (m, r, s).
    """
import hashlib
p = 0x8c286991e30fd5341b7832ce9fe869c0a73cf79303c2959ab677d980237abf7ecf853015c9a086c4330252043525a4fa60c64397421caa290225d6bc6ec6b122cd1da4bba1b13f51daca8b210156a28a0c3dbf17a7826f738fdfa87b22d7df990908c13dbd0a1709bbbab5f816ddba6c8166ef5696414538f6780fdce987552b
g = 0x49874582cd9af51d6f554c8fae68588c383272c357878d7f4079c6edcda3bcbf1f2cbada3f7d541a5b1ae7f046199f8f51d72db60a2601bd3375a3b48d7a3c9a0c0e4e8a0680f7fb98a8610f042e10340d2453d3c811088e48c5d6dd834eaa5509daeb430bcd9de8aabc239d698a655004e3f0a2ee456ffe9331c5f32c66f90d
q = 0x843437e860962d85d17d6ee4dd2c43bc4aec07a5
m1 = 0x3132333435363738
r1 = 0x4d91a491d95e4eef4196a583cd282ca0e625f36d
s1 = 0x3639b47678abf7545397fc9a1af108537fd1dfac
m2 = 0x49276c6c206265206261636b2e
r2 = 0x4d91a491d95e4eef4196a583cd282ca0e625f36d
s2 = 0x314c044409a94f4961340212b42ade005fb27b0a
# M1 = mem2int(hashlib.sha1(int2mem(m1)).digest())
M1 = int(hashlib.sha1('3132333435363738'.decode('hex')).hexdigest(), 16)
# M2 = mem2int(hashlib.sha1(int2mem(m2)).digest())
M2 = int(hashlib.sha1('49276c6c206265206261636b2e'.decode("hex")).hexdigest(), 16)
matrix_c = [
[0x3639b47678abf7545397fc9a1af108537fd1dfac, -0x4d91a491d95e4eef4196a583cd282ca0e625f36d, M1],
[0x314c044409a94f4961340212b42ade005fb27b0a, -0x4d91a491d95e4eef4196a583cd282ca0e625f36d, M2]
]
print "mod = %d" % (q)
print "matrix ="
print_matrix(matrix_c)
Gauss = GaussMatrix(matrix_c, q)
ret = Gauss.gauss()
if not ret:
print "error:"
print_matrix(Gauss.d)
print "error_str:", Gauss.error_str
else:
k = ret[0][0]
x = ret[0][1]
print "k: %x" % (k)
print "x: %x" % (x)
print Gauss.verify_solution(ret[0])
exit(0)
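The attack in DSA_comK follows from the two signing equations sharing k: subtracting them gives k = (h1 - h2)/(s1 - s2) mod q, and then x = (s1*k - h1)/r mod q. A toy-number Python 3 walkthrough (all values are made up; q is far too small for real DSA):

```python
q = 23                       # toy group order (real DSA uses a 160-bit q)
x_true, k_true, r = 7, 5, 9  # private key, reused nonce, shared r value
h1, h2 = 11, 3               # the two message hashes

inv = lambda a: pow(a, -1, q)
s1 = inv(k_true) * (h1 + x_true * r) % q
s2 = inv(k_true) * (h2 + x_true * r) % q

# The same k in both signatures leaks it, and then the private key:
k = (h1 - h2) * inv(s1 - s2) % q
x = (s1 * k - h1) * inv(r) % q
print(k, x)   # 5 7
```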
def qwq(matrix):
mod=256
solution=[]
run_test(mod, solution, matrix)
# if __name__ == "__main__":
# DSA_comK()
# static_test()
# static_test_ex()
# random_test(1)
# exit(0)
con=[26,28,38,39,40,50,52,79,80,81,91,103,105,115,116,117]
for kk in xrange(144):
if kk in con:
continue
for k in xrange(kk):
if k in con:
continue
matrix=[list(map(int,i.split())) for i in open('n2.txt','r').read().splitlines()]
_p=list()
_p.append(int(k))
if k//12>=1:
_p.append(int(k-12))
if k//12<=10:
_p.append(int(k+12))
if k//12>=2:
_p.append(int(k-24))
if k//12<=9:
_p.append(int(k+24))
if k%12>=1:
_p.append(int(k-1))
if k%12<=10:
_p.append(int(k+1))
if k%12>=2:
_p.append(int(k-2))
if k%12<=9:
_p.append(int(k+2))
_p.append(int(kk))
if kk//12>=1:
_p.append(int(kk-12))
if kk//12<=10:
_p.append(int(kk+12))
if kk//12>=2:
_p.append(int(kk-24))
if kk//12<=9:
_p.append(int(kk+24))
if kk%12>=1:
_p.append(int(kk-1))
if kk%12<=10:
_p.append(int(kk+1))
if kk%12>=2:
_p.append(int(kk-2))
if kk%12<=9:
_p.append(int(kk+2))
for i in sorted(set(_p))[::-1]:
matrix.pop(i)
qwq(matrix)
# input()
| 27.34556 | 266 | 0.454712 | 1,922 | 14,165 | 3.299168 | 0.168574 | 0.015455 | 0.019871 | 0.022709 | 0.23971 | 0.185144 | 0.166693 | 0.143037 | 0.051254 | 0.051254 | 0 | 0.137951 | 0.37515 | 14,165 | 517 | 267 | 27.398453 | 0.578014 | 0.030851 | 0 | 0.151976 | 0 | 0 | 0.02731 | 0.002269 | 0 | 0 | 0.082017 | 0 | 0.006079 | 0 | null | null | 0.00304 | 0.009119 | null | null | 0.085106 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a2a5a42b6d09e31e44930e2b97a11e3ac6f3bf02 | 6,304 | py | Python | vnpy/app/portfolio_strategy/strategies/daily_amplitude_2_days_volitility_strategy.py | franklili3/vnpy | 4d710553302eb3587e4acb2ff8ce151660fb9c17 | [
"MIT"
] | null | null | null | vnpy/app/portfolio_strategy/strategies/daily_amplitude_2_days_volitility_strategy.py | franklili3/vnpy | 4d710553302eb3587e4acb2ff8ce151660fb9c17 | [
"MIT"
] | null | null | null | vnpy/app/portfolio_strategy/strategies/daily_amplitude_2_days_volitility_strategy.py | franklili3/vnpy | 4d710553302eb3587e4acb2ff8ce151660fb9c17 | [
"MIT"
] | null | null | null | from typing import List, Dict
from datetime import datetime
import numpy as np
from vnpy.app.portfolio_strategy import StrategyTemplate, StrategyEngine
from vnpy.trader.utility import BarGenerator, ArrayManager
from vnpy.trader.object import BarData, TickData
from vnpy.trader.constant import Interval
class AmplitudeVolitilityStrategy(StrategyTemplate):
""""""
author = "Frank li"
    volitility_window = 2
    stocks_number = 1
    rebalance_days = 1
    price_add = 0.01    # price offset applied when sending orders
    #leg1_symbol = ""

    parameters = [
        "volitility_window",
        "stocks_number",
        "rebalance_days",
        "price_add"
    ]
#variables = [
#"leg1_symbol",
#]
def __init__(
self,
strategy_engine: StrategyEngine,
strategy_name: str,
vt_symbols: List[str],
setting: dict
):
""""""
super().__init__(strategy_engine, strategy_name, vt_symbols, setting)
self.bgs: Dict[str, BarGenerator] = {}
self.targets: Dict[str, int] = {}
self.last_tick_time: datetime = None
        self.amplitude_data: Dict[str, np.ndarray] = {}
        self.volitility_data: Dict[str, float] = {}
        self.sorted_volitility_data: List[tuple] = []
        self.selected_symbols: List[str] = []
self.bar_count: int = 0
        for vt_symbol in self.vt_symbols:
            self.amplitude_data[vt_symbol] = np.zeros(self.volitility_window)
            self.volitility_data[vt_symbol] = 0.0
# Obtain contract info
#self.leg1_symbol, self.leg2_symbol = vt_symbols
def on_bar(bar: BarData):
""""""
pass
for vt_symbol in self.vt_symbols:
self.targets[vt_symbol] = 0
            self.bgs[vt_symbol] = BarGenerator(on_bar)  # daily bars are cut manually in on_tick
def on_init(self):
"""
Callback when strategy is inited.
"""
self.write_log("策略初始化")
self.load_bars(2)
def on_start(self):
"""
Callback when strategy is started.
"""
self.write_log("策略启动")
def on_stop(self):
"""
Callback when strategy is stopped.
"""
self.write_log("策略停止")
def on_tick(self, tick: TickData):
"""
Callback of new tick data update.
"""
if (
self.last_tick_time
and self.last_tick_time.day != tick.datetime.day
):
bars = {}
for vt_symbol, bg in self.bgs.items():
bars[vt_symbol] = bg.generate()
self.on_bars(bars)
bg: BarGenerator = self.bgs[tick.vt_symbol]
bg.update_tick(tick)
self.last_tick_time = tick.datetime
def on_bars(self, bars: Dict[str, BarData]):
""""""
self.cancel_all()
self.bar_count += 1
        # Update the rolling amplitude window and volatility of every symbol
        for vt_symbol in self.vt_symbols:
            bar = bars.get(vt_symbol)
            if not bar:
                continue
            amplitude = (bar.high_price - bar.low_price) / bar.open_price
            self.amplitude_data[vt_symbol][:-1] = self.amplitude_data[vt_symbol][1:]
            self.amplitude_data[vt_symbol][-1] = amplitude
            self.volitility_data[vt_symbol] = self.amplitude_data[vt_symbol].std()

        # Only rebalance every rebalance_days bars
        if self.bar_count % self.rebalance_days != 0:
            return

        # Rank symbols by amplitude volatility, highest first,
        # and select the top stocks_number of them
        self.sorted_volitility_data = sorted(
            self.volitility_data.items(), key=lambda item: item[1], reverse=True
        )
        self.selected_symbols = [
            item[0] for item in self.sorted_volitility_data[:self.stocks_number]
        ]
        print('self.selected_symbols:', self.selected_symbols)
'''
self.spread_data[:-1] = self.spread_data[1:]
self.spread_data[-1] = self.current_spread
self.spread_count += 1
if self.spread_count <= self.boll_window:
return
# Calculate boll value
buf: np.array = self.spread_data[-self.boll_window:]
std = buf.std()
self.boll_mid = buf.mean()
self.boll_up = self.boll_mid + self.boll_dev * std
self.boll_down = self.boll_mid - self.boll_dev * std
'''
# Calculate new target position
# 等权重
weight = 1 / len(self.selected_symbols)
# 目前持仓列表
stock_hold_now = [equity.symbol for equity in self.get_pos()]
print('stock_hold_now:', stock_hold_now)
# 需要买入的股票列表
stock_to_buy = [i for i not in stock_hold_now if i in self.selected_symbols]
print('stock_to_buy:', stock_to_buy)
# 继续持有股票列表
no_need_to_sell = [i for i in stock_hold_now if i in self.selected_symbols]
print('no_need_to_sell:', no_need_to_sell)
# 卖出股票列表
stock_to_sell = [i for i in stock_hold_now if i not in no_need_to_sell]
print('stock_to_sell:', stock_to_sell)
# 执行卖出
for stock in stock_to_sell:
current_pos = self.get_pos(stock)
volume = current_pos
bar = bars[stock]
price = bar.close_price + self.price_add
self.sell(stock, price, volume)
# 如果当天没有买入就返回
if len(stock_to_buy) == 0:
return
# 执行买入
for s_t_b in stock_to_buy:
bar = bars[stock]
price = bar.close_price + self.price_add
volume = 1
self.buy(s_t_b, volume)
self.put_event() | 32 | 92 | 0.584391 | 786 | 6,304 | 4.426209 | 0.215013 | 0.064386 | 0.034493 | 0.043691 | 0.277666 | 0.221903 | 0.221903 | 0.200057 | 0.191434 | 0.17735 | 0 | 0.008128 | 0.316942 | 6,304 | 197 | 93 | 32 | 0.799814 | 0.085977 | 0 | 0.166667 | 0 | 0 | 0.029035 | 0.004405 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.009259 | 0.064815 | null | null | 0.046296 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
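The buy/keep/sell bookkeeping above can be checked in isolation. The helper below is a standalone sketch of that list logic; the symbol names are invented for illustration:

```python
def rebalance_lists(selected, held):
    """Split symbols into buy / keep / sell groups for one rebalance."""
    to_buy = [s for s in selected if s not in held]   # newly selected, not yet held
    to_keep = [s for s in held if s in selected]      # still in the selection
    to_sell = [s for s in held if s not in to_keep]   # dropped from the selection
    return to_buy, to_keep, to_sell

buy, keep, sell = rebalance_lists(
    selected=["600519.SSE", "000001.SZSE"],
    held=["000001.SZSE", "600036.SSE"],
)
print(buy, keep, sell)  # ['600519.SSE'] ['000001.SZSE'] ['600036.SSE']
```

Together the three lists partition the union of holdings and selections, so every symbol receives exactly one action per rebalance.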
# FILE: CareerTinderServer/CareerTinder/migrations/0002_auto_20160918_0011.py
# (repo: sarojaerabelli/HVGS, MIT license)
# -*- coding: utf-8 -*-
# Generated by Django 1.10.1 on 2016-09-18 04:11
from __future__ import unicode_literals

import CareerTinder.listfield
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('CareerTinder', '0001_initial'),
    ]

    operations = [
        migrations.RemoveField(
            model_name='hiree',
            name='date_of_birth',
        ),
        migrations.RemoveField(
            model_name='hiree',
            name='name',
        ),
        migrations.AddField(
            model_name='hiree',
            name='college',
            field=models.CharField(default='mit', max_length=100),
            preserve_default=False,
        ),
        migrations.AddField(
            model_name='hiree',
            name='degree',
            field=models.CharField(choices=[(b'BA', b"Bachelor's"), (b'MA', b"Master's"), (b'DO', b'Doctorate')], default='ba', max_length=10),
            preserve_default=False,
        ),
        migrations.AddField(
            model_name='hiree',
            name='first_name',
            field=models.CharField(default='john', max_length=50),
            preserve_default=False,
        ),
        migrations.AddField(
            model_name='hiree',
            name='last_name',
            field=models.CharField(default='doe', max_length=50),
            preserve_default=False,
        ),
        migrations.AddField(
            model_name='hiree',
            name='major',
            field=models.CharField(default='cs', max_length=100),
            preserve_default=False,
        ),
        migrations.AddField(
            model_name='hiree',
            name='year',
            field=models.IntegerField(default=2019),
            preserve_default=False,
        ),
        migrations.AddField(
            model_name='recruiter',
            name='hirees',
            field=CareerTinder.listfield.ListField(default=b''),
        ),
        migrations.AlterField(
            model_name='company',
            name='logo',
            field=models.ImageField(upload_to=b'media/logos/'),
        ),
        migrations.AlterField(
            model_name='hiree',
            name='face_picture',
            field=models.ImageField(upload_to=b'media/faces/'),
        ),
        migrations.AlterField(
            model_name='hiree',
            name='resume_picture',
            field=models.FileField(upload_to=b'media/resumes/'),
        ),
    ]
# FILE: corefacility/core/models/module.py
# (repo: serik1987/corefacility, RSA-MD license)
import uuid
from django.db import models


class Module(models.Model):
    """
    Defines the information related to the application module that is stored in the
    database, not as a Django application
    """
    uuid = models.UUIDField(db_index=True, editable=False, primary_key=True, default=uuid.uuid4,
                            help_text="The UUID provides quick access to the application during routing")
    parent_entry_point = models.ForeignKey("EntryPoint", null=True, on_delete=models.RESTRICT,
                                           related_name="modules", editable=False,
                                           help_text="List of all modules connected to this entry point")
    alias = models.SlugField(editable=False,
                             help_text="A short name that can be used to identify the module in the app")
    name = models.CharField(max_length=128, editable=False, db_index=True,
                            help_text="The name through which the module is visible in the system")
    html_code = models.TextField(null=True, editable=False,
                                 help_text="When the module is visible on the frontend as a widget, this field "
                                           "holds the module HTML code to show")
    app_class = models.CharField(max_length=1024, editable=False,
                                 help_text="The python class connected to the module")
    user_settings = models.JSONField(help_text="Settings defined by the user and stored in the JSON format")
    is_application = models.BooleanField(default=True, editable=False,
                                         help_text="True if the module is an application")
    is_enabled = models.BooleanField(default=True,
                                     help_text="True if the module is switched on")

    class Meta:
        unique_together = ["alias", "parent_entry_point"]
# FILE: 14-semparsing/ucca/scripts/find_constructions.py
# (repo: ariasjose/nn4nlp-code, Apache-2.0 license)
from argparse import ArgumentParser
from ucca import constructions
from ucca.ioutil import read_files_and_dirs

if __name__ == "__main__":
    argparser = ArgumentParser(description="Extract linguistic constructions from UCCA corpus.")
    argparser.add_argument("passages", nargs="+", help="the corpus, given as xml/pickle file names")
    constructions.add_argument(argparser, False)
    argparser.add_argument("-v", "--verbose", action="store_true", help="print tagged text for each passage")
    args = argparser.parse_args()
    for passage in read_files_and_dirs(args.passages):
        if args.verbose:
            print("%s:" % passage.ID)
        extracted = constructions.extract_edges(passage, constructions=args.constructions, verbose=args.verbose)
        if any(extracted.values()):
            if not args.verbose:
                print("%s:" % passage.ID)
            for construction, edges in extracted.items():
                if edges:
                    print("  %s:" % construction.description)
                    for edge in edges:
                        print("    %s [%s %s]" % (edge, edge.tag, edge.child))
        print()
# FILE: bulletin/factories.py
# (repo: ralphqq/ph-earthquake-dashboard, MIT license)
from datetime import timedelta
import random

from django.utils import timezone

import factory


class BulletinFactory(factory.DjangoModelFactory):

    class Meta:
        model = 'bulletin.Bulletin'

    url = factory.Sequence(lambda n: f'https://www.sitepage.com/{n}')
    latitude = factory.Faker(
        'pydecimal',
        right_digits=2,
        min_value=-90,
        max_value=90
    )
    longitude = factory.Faker(
        'pydecimal',
        right_digits=2,
        min_value=-180,
        max_value=180
    )
    depth = factory.Faker(
        'pydecimal',
        right_digits=1,
        min_value=0,
        max_value=500
    )
    magnitude = factory.Faker(
        'pydecimal',
        right_digits=1,
        min_value=1,
        max_value=10
    )
    location = factory.Faker('address')

    @factory.sequence
    def time_of_quake(n):
        """Creates sequence of datetime obj 30 minutes apart."""
        td = timedelta(minutes=30)
        return timezone.now() - (n * td)
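The `time_of_quake` sequence above simply steps back 30 minutes per generated object. The arithmetic can be verified standalone; the fixed `now` below replaces `timezone.now()` for determinism:

```python
from datetime import datetime, timedelta

def time_of_quake(n, now):
    """n-th generated timestamp, each one 30 minutes earlier than the previous."""
    return now - n * timedelta(minutes=30)

now = datetime(2020, 6, 1, 12, 0)
times = [time_of_quake(n, now) for n in range(3)]
print(times[1])             # 2020-06-01 11:30:00
print(times[0] - times[2])  # 1:00:00
```

So factory instance `n` is created `n * 30` minutes in the past, giving strictly decreasing timestamps as the sequence counter grows.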
# FILE: twitter/Update.py
# (repo: bhargavz/py-twitter-sentiment-analysis, MIT license)
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# FILE: Update.py
#
# This class provides mechanisms to update, reply to, retweet status
# messages and send direct messages
#
# Copyright by Author. All rights reserved. Not for reuse without
# express permissions.
#
import sys, time, json, logging
from sochi.twitter.Login import Login
from sochi.twitter.TwitterBase import TwitterBase
from sochi.twitter.auth_settings import *
class Update(TwitterBase):
    def __init__(self,
                 name="Update",
                 logger=None,
                 args=(),
                 kwargs={}):
        TwitterBase.__init__(self, name=name, logger=logger,
                             args=args, kwargs=kwargs)
        self.update_url = "https://api.twitter.com/1.1/statuses/update.json"
        self.retweet_url = "https://api.twitter.com/1.1/statuses/retweet/"
        self.direct_url = "https://api.twitter.com/1.1/direct_messages/new.json"
        self.status_update = False
        self.status_retweet = False
        self.direct_message = False
        self.max_status_len = 140
        self.set_request_type_as_status_update()

    ##
    # Sets the type of request to a status update
    #
    def set_request_type_as_status_update(self):
        if( not self.querying ):
            self.status_update = True
            self.status_retweet = False
            self.direct_message = False
            self.clear_request_params()
            self.set_request_domain(self.update_url)

    ##
    # Sets the type of request to a retweet request
    #
    def set_request_type_as_retweet(self):
        if( not self.querying ):
            self.status_update = False
            self.status_retweet = True
            self.direct_message = False
            self.clear_request_params()
            self.set_request_domain(self.retweet_url)

    ##
    # Sets the type of request to a direct message
    #
    def set_request_type_as_direct_message(self):
        if( not self.querying ):
            self.status_update = False
            self.status_retweet = False
            self.direct_message = True
            self.clear_request_params()
            self.set_request_domain(self.direct_url)

    ##
    # Sets the status to be set when the request is made
    #
    def set_status(self, status=None, doit=False):
        if( not self.querying ):
            if( status and self.status_update ):
                status = self._trim_status(status)
                self.set_request_param(kw="status", val=status)
                if( doit ):
                    if( self.running ):
                        self.start_request()
                    else:
                        self.make_request()
            else:
                self.clear_request_params()

    ##
    # Sets whether or not this status message is in reply to another message
    #
    def set_in_reply_to(self, status_id=None):
        if( not self.querying ):
            if( status_id and self.status_update ):
                self.set_request_param(kw="in_reply_to_status_id", val=str(status_id))
            else:
                self.clear_request_params()

    ##
    # Sets the latitude and longitude
    #
    def set_location(self, lat=None, lon=None):
        if( not self.querying ):
            if( lat and lon and self.status_update ):
                self.set_request_param(kw="lat", val=str(lat))
                self.set_request_param(kw="long", val=str(lon))
            else:
                self.clear_request_params()

    ##
    # Sets the status to be an @-reply to the specified user
    #
    def set_at_reply_message(self, username=None, status=None, doit=False):
        if( not self.querying ):
            if( username and status and self.status_update ):
                status = "@"+str(username)+" "+str(status)
                status = self._trim_status(status)
                self.set_request_param(kw="status", val=status)
                if( doit ):
                    if( self.running ):
                        self.start_request()
                    else:
                        self.make_request()
            elif( status ):
                self.set_status(status=status, doit=doit)
            else:
                self.clear_request_params()
    ##
    # Sets a direct message to be sent to a specific user, either using
    # username or user_id
    #
    def set_direct_message(self, username=None, user_id=None, status=None, doit=False):
        if( not self.querying ):
            if( (username or user_id) and status and self.direct_message ):
                status = self._trim_status(status)
                self.set_request_param(kw="text", val=status)
                if( username ):
                    self.set_request_param(kw="screen_name", val=username)
                if( user_id ):
                    self.set_request_param(kw="user_id", val=user_id)
                if( doit ):
                    if( self.running ):
                        self.start_request()
                    else:
                        self.make_request()
            else:
                self.clear_request_params()

    ##
    # Will retweet the specified tweet ID
    #
    def set_retweet(self, tweet_id=None, doit=False):
        if( not self.querying ):
            if( tweet_id and self.status_retweet ):
                url = self.retweet_url+str(tweet_id)+".json"
                self.clear_request_params()
                self.set_request_domain(url)
                if( doit ):
                    if( self.running ):
                        self.start_request()
                    else:
                        self.make_request()
            else:
                self.clear_request_params()

    ##
    # Trim the status message to fit the 140-character limit of Twitter
    #
    def _trim_status(self, status=None):
        if( status ):
            status = unicode(status)
            if( len(status) > self.max_status_len ):
                mesg = "Status too long, truncated."
                self.logger.info(mesg)
                mesg = "Old status: \"%s\""%(status)
                self.logger.info(mesg)
                status = status[:self.max_status_len]
                mesg = "New status: \"%s\""%(status)
                self.logger.info(mesg)
        return status
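The truncation rule in `_trim_status` is easy to exercise on its own. Below is a minimal Python 3 sketch, with the logger replaced by a returned flag and `MAX_STATUS_LEN` standing in for `self.max_status_len`:

```python
MAX_STATUS_LEN = 140  # assumed stand-in for self.max_status_len

def trim_status(status):
    """Return (status, was_truncated), honoring the 140-character cap."""
    if len(status) > MAX_STATUS_LEN:
        return status[:MAX_STATUS_LEN], True
    return status, False

s, cut = trim_status("x" * 150)
print(len(s), cut)  # 140 True
```

Statuses at or under the cap pass through unchanged; only longer ones are sliced to exactly 140 characters.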
    ##
    # Basically a cheap version of make_request for a status update
    #
    def update_status(self):
        if( self.running ):
            self.start_request()
        else:
            self.make_request()

    ##
    #
    #
    def make_request(self):
        # this code is not reentrant, don't make the request twice
        if( self.querying ):
            return

        self.querying = True
        try:
            # this must be a POST request as defined by the "Update" spec
            #print "domain",self.get_request_domain()
            #print "payload",self.get_request_params()
            self.set_request(domain=self.get_request_domain(),
                             method="POST",
                             payload=self.get_request_params())
            request_results = self._make_request(request=self._request_data)
            js = None
            if( request_results and request_results.text ):
                try:
                    #print request_results.text
                    js = request_results.json()
                except ValueError, e:
                    mesg = "JSON ValueError: "+str(e)
                    self.logger.info(mesg)
                    js = None
            if( js ):
                #print json.dumps(js, sort_keys=True, indent=4)
                self.put_message(m=js)
            self.querying = False
        except:
            self.querying = False
            raise
        return
def parse_params(argv):
    auth = None
    user = None
    status = None
    direct = None
    retweet = None
    favorite = None
    json = False
    limits = False
    debug = False
    logging = False

    pc = 1
    while( pc < len(argv) ):
        param = argv[pc]
        if( param == "-auth"):
            pc += 1
            auth = argv[pc]
        if( param == "-user"):
            pc += 1
            user = argv[pc]
        if( param == "-status"):
            pc += 1
            status = argv[pc]
        if( param == "-s"):
            pc += 1
            status = argv[pc]
        if( param == "-direct"):
            pc += 1
            direct = argv[pc]
        if( param == "-d"):
            pc += 1
            direct = argv[pc]
        if( param == "-retweet"):
            pc += 1
            retweet = argv[pc]
        if( param == "-r"):
            pc += 1
            retweet = argv[pc]
        if( param == "-favorite"):
            pc += 1
            favorite = argv[pc]
        if( param == "-f"):
            pc += 1
            favorite = argv[pc]
        if( param == "-log"):
            logging = True
        if( param == "-debug"):
            debug = True
        if( param == "-json"):
            json = True
        if( param == "-limits"):
            limits = True
        pc += 1

    return {'auth':auth, 'user':user,
            'status':status, 'direct':direct, 'retweet':retweet, 'favorite':favorite,
            'logging':logging, 'debug':debug, 'json':json, 'limits':limits }

def usage(argv):
    print "USAGE: python %s -auth <appname> -user <auth_user> -status \"<message>\" [-direct <username>] [-retweet <tweet_id>] [-log] [-json] "%(argv[0])
    sys.exit(0)
def main(argv):
    if len(argv) < 4:
        usage(argv)
    p = parse_params(argv)
    print p

    twit = Update()
    twit.set_user_agent(agent="random")

    if( p['logging'] ):
        log_fname = twit.get_preferred_logname()
        fmt = '[%(asctime)s][%(module)s:%(funcName)s():%(lineno)d] %(levelname)s:%(message)s'
        logging.basicConfig(filename=log_fname, format=fmt, level=logging.INFO)
        log = logging.getLogger("twit_tools")

    lg = None
    if( not p['auth'] and not p['user'] ):
        print "Must have authenticating User and Application!"
        usage(argv)
        return

    if( p['auth'] ):
        app = p['auth']
        app_keys = TWITTER_APP_OAUTH_PAIR(app=p['auth'])
        app_token_fname = TWITTER_APP_TOKEN_FNAME(app=p['auth'])
        lg = Login( name="StatusUpdateLogin",
                    app_name=p['auth'],
                    app_user=p['user'],
                    token_fname=app_token_fname)
        if( p['debug'] ):
            lg.set_debug(True)
        ## Key and secret for specified application
        lg.set_consumer_key(consumer_key=app_keys['consumer_key'])
        lg.set_consumer_secret(consumer_secret=app_keys['consumer_secret'])
        lg.login()
        twit.set_auth_obj(obj=lg)

    if( p['retweet'] ):
        twit.set_request_type_as_retweet()
        twit.set_retweet(tweet_id=p['retweet'])
    elif( p['direct'] and p['status'] ):
        twit.set_request_type_as_direct_message()
        twit.set_direct_message(status=p['status'], username=p['direct'])
    elif( p['status'] ):
        twit.set_request_type_as_status_update()
        twit.set_status(status=p['status'])
    else:
        print "Must supply a status message to post!"
        return

    twit.update_status()
    twit.wait_request()

    if( twit.messages() > 0 ):
        m = twit.get_message()
        if( m ):
            if( p['json'] ):
                print json.dumps(m, indent=4, sort_keys=True)
            else:
                if( "created_at" in m and "user" in m ):
                    print "At %s, user %s posted:"%(m['created_at'], m['user']['name'])
                    print m['text'].encode('utf-8')
                elif( "error" in m or "errors" in m ):
                    print "Error response."
                else:
                    print "Not clear what this response was!"
                    print json.dumps(m, indent=4, sort_keys=True)
        else:
            print "Nothing returned!"

    if( twit.had_warning() ):
        print "WARNING:", twit.get_last_warning()
    if( twit.had_error() ):
        print "ERROR:", twit.get_last_error()
    return

if __name__ == '__main__':
    main(sys.argv)
# FILE: tool/pylib/generator/output/PartBuilder.py
# (repo: mever/qooxdoo, MIT license)
#!/usr/bin/env python
# -*- coding: utf-8 -*-
################################################################################
#
# qooxdoo - the new era of web development
#
# http://qooxdoo.org
#
# Copyright:
# 2006-2010 1&1 Internet AG, Germany, http://www.1und1.de
#
# License:
# MIT: https://opensource.org/licenses/MIT
# See the LICENSE file in the project's top-level directory for details.
#
# Authors:
# * Sebastian Werner (wpbasti)
# * Thomas Herchenroeder (thron7)
# * Richard Sternagel (rsternagel)
#
################################################################################
##
# PartBuilder -- create packages and associates parts to packages, from parts configuration and class list
#
# Interface:
# - PartBuilder.getPackages()
##
from misc.Collections import OrderedDict
from misc.Collections import DefaultOrderedDict
from generator.output.Part import Part
from generator.output.Package import Package
from generator.code.Class import CompileOptions
from generator.config.Config import ConfigurationError
class PartBuilder(object):

    def __init__(self, console, depLoader):
        self._console = console
        self._depLoader = depLoader
    ##
    # interface method
    def getPackages(self, partIncludes, smartExclude, jobContext, script):
        # Get config settings
        jobConfig = jobContext["jobconf"]
        self._jobconf = jobConfig
        minPackageSize = jobConfig.get("packages/sizes/min-package", 0)
        minPackageSizeForUnshared = jobConfig.get("packages/sizes/min-package-unshared", None)
        partsCfg = jobConfig.get("packages/parts", {})
        collapseCfg = jobConfig.get("packages/collapse", [])
        boot = jobConfig.get("packages/init", "boot")
        script.boot = boot

        # Automatically add boot part to collapse list
        if boot in partsCfg and not boot in collapseCfg:
            collapseCfg.append(boot)

        # Preprocess part data
        script.parts = {}   # map of Parts
        script.parts = self._getParts(partIncludes, partsCfg, script)
        self._checkPartsConfig(script.parts)
        script.parts = self._getPartDeps(script, smartExclude)

        # Compute packages
        script.packages = []   # array of Packages
        script.packages = self._createPackages(script)
        self._checkPackagesAgainstClassList(script)
        script.sortParts()
        self._printPartStats(script)

        # Collapse parts by collapse order
        self._console.info("Collapsing parts ", feed=False)
        self.collapsePartsByOrder(script)

        # Collapse parts by package size
        self.collapsePartsBySize(script, minPackageSize, minPackageSizeForUnshared)

        # Size collapsing might introduce new dependencies to the boot part;
        # try to assure a single package
        if len(script.parts[script.boot].packages) > 1:
            quickCollapseConfig = { 0 : set((script.parts[script.boot],)) }
            self.collapsePartsByOrder(script, quickCollapseConfig)
            assert len(script.parts[script.boot].packages) == 1
        self._printPartStats(script)

        # Post process results
        resultParts = self._getFinalPartData(script)

        # Re-sort part packages, to clean up ordering issues from merging
        # - (the issue here is that part packages are only re-sorted during merges
        #   when new packages are actually added to the part, but *not* when existing
        #   packages receive a merge package whose package dependencies are already
        #   fulfilled in the part; still, package dependencies among the existing
        #   packages might change, so a re-sorting is necessary to support proper
        #   load order)
        script.sortParts()

        script = self._getFinalClassList(script)
        #resultClasses = util.dictToList(resultClasses) # turn map into list, easier for upstream methods
        self._console.dotclear()

        if True: #self._console.getLevel() < self._console._levels["info"]: # - not working!
            self.verifyParts(script.parts, script)

        # Return
        # {Map}   resultParts[partId] = [packageId1, packageId2]
        # {Array} resultClasses[packageId] = [class1, class2]
        #return boot, resultParts, resultClasses
        return resultParts, script
    ##
    # Check that head lists (part.initial_deps) are non-overlapping
    # @param {Map} { partId : generator.code.Part }
    def _checkPartsConfig(self, parts):
        headclasses = dict((x.name, set(x.initial_deps)) for x in parts.values())
        for partid, parthead in headclasses.items():
            for partid1, parthead1 in ((x,y) for x,y in headclasses.items() if x != partid):
                #print "Checking %s - %s" % (partid, partid1)
                overlap = parthead.intersection(parthead1)
                if overlap:
                    raise ConfigurationError("Part '%s' and '%s' have overlapping includes: %r" % (partid, partid1, list(overlap)))
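`_checkPartsConfig` reduces to pairwise set intersections over the parts' head class lists. A standalone sketch of that check (part names and class ids below are invented):

```python
from itertools import combinations

def find_overlaps(headclasses):
    """Yield (part_a, part_b, shared) for every pair of parts whose head lists overlap."""
    for a, b in combinations(headclasses, 2):
        shared = headclasses[a] & headclasses[b]
        if shared:
            yield a, b, shared

heads = {"boot": {"qx.core.Init"}, "gui": {"qx.ui.Button"}, "bad": {"qx.ui.Button"}}
print(list(find_overlaps(heads)))  # [('gui', 'bad', {'qx.ui.Button'})]
```

Using `itertools.combinations` visits each unordered pair once, whereas the nested loops in the method above test every ordered pair and so compare each pair twice.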
    ##
    # Check that all classes from the global class list are contained in
    # *some* package.
    def _checkPackagesAgainstClassList(self, script):
        allpackageclasses = set([])
        for package in script.packages:
            allpackageclasses.update(package.classes)
        missingclasses = set(script.classesObj).difference(allpackageclasses)
        if missingclasses:
            raise ValueError("These necessary classes are not covered by parts: %r" % list(missingclasses))
    def verifyParts(self, partsMap, script):

        def handleError(msg):
            if bomb_on_error:
                raise RuntimeError(msg)
            else:
                self._console.warn("! " + msg)

        self._console.info("Verifying parts ", feed=False)
        self._console.indent()
        bomb_on_error = self._jobconf.get("packages/verifier-bombs-on-error", True)
        allpartsclasses = []

        # 5) Check consistency between package.part_mask and part.packages
        self._console.debug("Verifying packages-to-parts relations...")
        self._console.indent()
        for package in script.packages:
            for part in partsMap.values():
                if package.part_mask & part.bit_mask:
                    if package not in part.packages:
                        handleError("Package '%d' supposed to be in part '%s', but isn't" % (package.id, part.name))
        self._console.outdent()

        self._console.debug("Verifying individual parts...")
        #self._console.indent()
        for part in partsMap.values():
            if part.is_ignored:  # skip ignored parts
                continue
            self._console.debug("Part: %s" % part.name)
            self._console.dot()
            self._console.indent()

            # get set of current classes in this part
            classList = []
            classPackage = []
            for packageIdx, package in enumerate(part.packages):  # TODO: not sure this is sorted
                for pos, classId in enumerate(x.id for x in package.classes):
                    classList.append(classId)
                    classPackage.append((package.id, pos))
            allpartsclasses.extend(classList)

            # 1) Check the initial part-defining classes are included (trivial sanity)
            for classId in part.initial_deps:
                if classId not in classList:
                    handleError("Defining class not included in part: '%s'" % (classId,))

            # 2) Check individual class deps are fulfilled in part
            # 3) Check classes are in load-order
            # alternative: check part.deps against classSet
            classIdx = -1
            for packageIdx, package in enumerate(part.packages):
                for clazz in package.classes:
                    classIdx += 1
                    classDeps, _ = clazz.getCombinedDeps(script.classesAll, script.variants, script.jobconfig)
                    loadDeps = set(x.name for x in classDeps['load'])
                    ignoreDeps = set(x.name for x in classDeps['ignore'])
                    # we cannot enforce runDeps here, as e.g. the 'boot'
                    # part necessarily lacks classes from subsequent parts
                    # (that's the whole point of parts)
                    for depsId in loadDeps.difference(ignoreDeps):
                        try:
                            depsIdx = classList.index(depsId)
                        except ValueError:
                            handleError("Unfulfilled dependency of class '%s'[%d,%d]: '%s'" %
                                        (clazz.id, package.id, classIdx, depsId))
                            continue
                        if depsId in loadDeps and classIdx < depsIdx:
                            handleError("Load-dep loaded after using class ('%s'[%d,%d]): '%s'[%d,%d]" %
                                        (clazz.id, package.id, classIdx,
                                         depsId, classPackage[depsIdx][0], classPackage[depsIdx][1]))
                    #if missingDeps:  # there is a load dep not in the part
                    #    self._console.warn("Unfulfilled load dependencies of class '%s': %r" % (classId, tuple(missingDeps)))
            self._console.outdent()
        #self._console.outdent()

        # 4) Check all classes from the global class list are contained in
        #    *some* part
        missingclasses = set(x.id for x in script.classesObj).difference(allpartsclasses)
        if missingclasses:
            handleError("These necessary classes are not covered by parts: %r" % list(missingclasses))

        self._console.dotclear()
        self._console.outdent()
        return
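The load-order rule that `verifyParts` enforces, namely that every load-time dependency appears before its user in the part's flattened class list, can be sketched with plain index comparisons (the class names below are invented):

```python
def load_order_violations(class_list, load_deps):
    """Return (user, dep) pairs where dep is missing or loads after its user."""
    index = {c: i for i, c in enumerate(class_list)}
    bad = []
    for user, deps in load_deps.items():
        for dep in deps:
            if dep not in index or index[dep] > index[user]:
                bad.append((user, dep))
    return bad

classes = ["qx.Bootstrap", "qx.Class", "qx.ui.Button"]
deps = {"qx.ui.Button": ["qx.Class"], "qx.Class": ["qx.Bootstrap", "qx.Missing"]}
print(load_order_violations(classes, deps))  # [('qx.Class', 'qx.Missing')]
```

Building the `index` dict once keeps each lookup O(1), whereas the `classList.index(...)` calls in the method above rescan the list per dependency.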
##
# create the set of parts, each part with a unique single-bit bit mask
# @returns {Map} parts = { partName : Part() }
def _getParts(self, partIncludes, partsCfg, script):
self._console.debug("Creating part structures...")
self._console.indent()
parts = {}
for partPos, partId in enumerate(partIncludes):
npart = Part(partId) # create new Part object
npart.bit_mask = script.getPartBitMask() # add unique bit
initial_deps = list(set(partIncludes[partId]).difference(script.excludes)) # defining classes from config minus expanded excludes
npart.initial_deps = initial_deps # for later cross-part checking
npart.deps = initial_deps[:] # own copy, as this will get expanded
if 'expected-load-order' in partsCfg[partId]:
npart.collapse_index = partsCfg[partId]['expected-load-order']
if 'no-merge-private-package' in partsCfg[partId]:
npart.no_merge_private_package = partsCfg[partId]['no-merge-private-package']
parts[partId] = npart
self._console.debug("Part #%s => %s" % (partId, npart.bit_mask))
self._console.outdent()
return parts
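The single-bit masks handed out above drive all later part/package bookkeeping: a class used by several parts accumulates their bits with OR. A minimal, self-contained sketch of that scheme (names here are illustrative, not the generator's real API):

```python
def assign_bit_masks(part_names):
    """Give every part a distinct power-of-two mask: 1, 2, 4, 8, ..."""
    return {name: 1 << pos for pos, name in enumerate(part_names)}

def classes_to_masks(part_classes, masks):
    """Map classId -> OR-ed mask of all parts that need the class."""
    usage = {}
    for part, classes in part_classes.items():
        for class_id in classes:
            usage[class_id] = usage.get(class_id, 0) | masks[part]
    return usage

masks = assign_bit_masks(["boot", "gui", "admin"])  # 0b001, 0b010, 0b100
usage = classes_to_masks(
    {"boot": ["Core"], "gui": ["Core", "Widget"], "admin": ["Core", "Table"]},
    masks,
)
# "Core" is needed by all three parts, so its mask has all three bits set.
assert usage["Core"] == 0b111
assert usage["Widget"] == masks["gui"]
```

Because each part owns exactly one bit, a class's usage mask later doubles as a package id, and membership questions reduce to cheap bitwise tests.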
##
# create the complete list of class dependencies for each part
def _getPartDeps(self, script, smartExclude):
parts = script.parts
variants = script.variants
globalClassList = [x.id for x in script.classesObj]
self._console.debug("")
self._console.info("Assembling parts")
self._console.indent()
for part in parts.values():
self._console.info("part %s " % part.name, feed=False)
# Exclude initial classes of other parts
partExcludes = []
for otherPartId in parts:
if otherPartId != part.name:
partExcludes.extend(parts[otherPartId].initial_deps)
# Extend with smart excludes
partExcludes.extend(smartExclude)
# Remove unknown classes before checking dependencies
for classId in part.deps[:]:
if classId not in globalClassList:
part.deps.remove(classId)
# Checking we have something to include
if len(part.deps) == 0:
self._console.info("Part #%s is ignored in current configuration" % part.name)
part.is_ignored = True
continue
# Finally resolve the dependencies
# do not allow blocked loaddeps, as this would make the part unloadable
partClasses = self._depLoader.classlistFromInclude(part.deps, partExcludes, variants, script=script, allowBlockLoaddeps=False)
# Remove all unknown classes -- TODO: Can this ever happen here?!
for classId in partClasses[:]: # need to work on a copy because of changes in the loop
#if not classId in globalClassList:
if classId not in script.classes: # check against application class list
self._console.warn("Removing unknown class dependency '%s' from config of part #%s" % (classId, part.name))
partClasses.remove(classId)
# Store
self._console.debug("Part #%s depends on %s classes" % (part.name, len(partClasses)))
part.deps = partClasses
self._console.outdent()
return parts
##
# Cut an initial set of packages out of the set of classes needed by the parts
# @returns {Array} [ Package ]
def _createPackages(self, script):
##
# Collect classes from parts, recording which class is used in which part
# @returns {Map} { classId : parts_bit_mask }
def getClassesFromParts(partObjs):
allClasses = DefaultOrderedDict(lambda: 0)
for part in partObjs:
for classId in part.deps:
allClasses[classId] |= part.bit_mask # a class used by multiple parts gets multiple bits
return allClasses
##
# Create packages from classes
# @returns {Array} [ Package ]
def getPackagesFromClasses(allClasses):
packages = {}
for classId in allClasses:
pkgId = allClasses[classId]
# create a Package if necessary
if pkgId not in packages:
package = Package(pkgId)
packages[pkgId] = package
# store classId with this package
#packages[pkgId].classes.append(classId)
packages[pkgId].classes.append(classesObj[classId])
return packages.values()
# ---------------------------------------------------------------
self._console.indent()
parts = script.parts.values()
classesObj = OrderedDict((cls.id, cls) for cls in script.classesObj)
# generate list of all classes from the part dependencies
allClasses = getClassesFromParts(parts)
# Create a package for each set of classes which
# are used by the same parts
packages = getPackagesFromClasses(allClasses)
# Register packages with using parts
for package in packages:
for part in parts:
if package.id & part.bit_mask:
part.packages.append(package)
# Register dependencies between packages
for package in packages:
# get all direct (load)deps of this package
allDeps = set(())
for clazz in package.classes:
classDeps, _ = clazz.getCombinedDeps(script.classesAll, script.variants, script.jobconfig)
loadDeps = set(x.name for x in classDeps['load'])
allDeps.update(loadDeps)
# record the other packages in which these classes are contained
for classId in allDeps:
for otherpackage in packages:
if otherpackage != package and classId in (x.id for x in otherpackage.classes):
package.packageDeps.add(otherpackage)
self._console.outdent()
return packages
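The grouping performed by `getPackagesFromClasses()` can be pictured in isolation: classes sharing the exact same parts-usage mask land in one package, and a package belongs to a part iff the package id has that part's bit set. A hedged, stand-alone sketch (not the generator's own classes):

```python
from collections import defaultdict

def group_into_packages(class_masks):
    """class_masks: classId -> usage mask. Equal masks share one package."""
    packages = defaultdict(list)   # package id == the shared usage mask
    for class_id, mask in class_masks.items():
        packages[mask].append(class_id)
    return dict(packages)

pkgs = group_into_packages(
    {"Core": 0b111, "Widget": 0b010, "Table": 0b100, "Form": 0b010}
)
assert pkgs[0b010] == ["Widget", "Form"]   # identical usage -> one package
# registering packages with parts is the bit test `package.id & part.bit_mask`:
gui_bit = 0b010
assert [p for p in pkgs if p & gui_bit] == [0b111, 0b010]
```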
def _computePackageSize(self, package, variants, script):
packageSize = 0
compOptions = CompileOptions()
compOptions.optimize = script.optimize
compOptions.format = True
compOptions.variantset = variants
self._console.indent()
for clazz in package.classes:
packageSize += clazz.getCompiledSize(compOptions, featuremap=script._featureMap)
self._console.outdent()
return packageSize
##
# Support for merging small packages.
#
# Small (as specified in the config) packages are detected, starting with
# those that are used by the fewest parts, and are merged into packages that
# are used by the same and more parts.
def collapsePartsBySize(self, script, minPackageSize, minPackageSizeForUnshared):
if minPackageSize is None or minPackageSize == 0:
return
variants = script.variants
self._console.debug("")
self._console.debug("Collapsing parts by package sizes...")
self._console.indent()
self._console.debug("Minimum size: %sKB" % minPackageSize)
self._console.indent()
if minPackageSizeForUnshared is None:
minPackageSizeForUnshared = minPackageSize
# Start at the end with the sorted list
# e.g. merge 4->7 etc.
allPackages = script.packagesSorted()
allPackages.reverse()
# make a dict {part.bit_mask: part}
allPartBitMasks = dict((x.bit_mask, x) for x in script.parts.values())
oldpackages = set([])
while oldpackages != set(script.packages):
oldpackages = set(script.packages)
allPackages = script.packagesSorted()
allPackages.reverse()
# Test and optimize 'fromId'
for fromPackage in allPackages:
self._console.dot()
# possibly protect part-private package from merging
if fromPackage.id in allPartBitMasks.keys(): # fromPackage.id == a part's bit mask
if allPartBitMasks[fromPackage.id].no_merge_private_package:
self._console.debug("Skipping private package #%s" % (fromPackage.id,))
continue
packageSize = self._computePackageSize(fromPackage, variants, script) / 1024
self._console.debug("Package #%s: %sKB" % (fromPackage.id, packageSize))
# check selectability
if (fromPackage.part_count == 1) and (packageSize >= minPackageSizeForUnshared):
continue
if (fromPackage.part_count > 1) and (packageSize >= minPackageSize):
continue
# assert: the package is shared and smaller than minPackageSize
# or: the package is unshared and smaller than minPackageSizeForUnshared
self._console.indent()
mergedPackage, targetPackage = self._mergePackage(fromPackage, script, script.packages)
if mergedPackage: # mergedPackage == fromPackage on success
script.packages.remove(fromPackage)
self._console.outdent()
self._console.dotclear()
self._console.outdent()
self._console.outdent()
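The two size thresholds applied in the loop above can be stated compactly: a package used by exactly one part must reach `minPackageSizeForUnshared` to survive, a shared one only `minPackageSize`. A small sketch of that keep/merge decision (function name is illustrative):

```python
def should_merge(size_kb, part_count, min_size, min_size_unshared=None):
    """True if the package is below its applicable size threshold."""
    if min_size_unshared is None:          # mirrors the fallback above
        min_size_unshared = min_size
    threshold = min_size_unshared if part_count == 1 else min_size
    return size_kb < threshold

assert should_merge(5, 1, min_size=20, min_size_unshared=10)       # tiny private pkg
assert not should_merge(15, 1, min_size=20, min_size_unshared=10)  # big enough unshared
assert should_merge(15, 3, min_size=20, min_size_unshared=10)      # shared, below 20KB
```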
##
# get the "smallest" package (in the sense of _sortPackages()) that is
# in all parts mergePackage is in, and is earlier in the corresponding
# packages lists
def _findMergeTarget(self, mergePackage, packages):
##
# if another package id has the same bits turned on, it is available
# in the same parts.
def areInSameParts(mergePackage, package):
return (mergePackage.id & package.id) == mergePackage.id
##
# check if any of the deps of mergePackage depend on targetPackage -
# if merging mergePackage into targetPackage, this would be creating
# circular dependencies
def noCircularDeps(mergePackage, targetPackage):
for package in mergePackage.packageDeps:
if targetPackage in package.packageDeps:
return False
return True
##
# check that the targetPackage is loaded in those parts
# where mergePackage's deps are loaded
def depsAvailWhereTarget (mergePackage, targetPackage):
for depsPackage in mergePackage.packageDeps:
if not areInSameParts(targetPackage, depsPackage):
return False
return True
# ----------------------------------------------------------------------
allPackages = reversed(Package.sort(packages))
# sorting and reversing assures we try "smaller" package id's first
addtl_merge_constraints = self._jobconf.get("packages/additional-merge-constraints", True)
for targetPackage in allPackages:
if mergePackage.id == targetPackage.id: # no self-merging ;)
continue
if not areInSameParts(mergePackage, targetPackage):
self._console.debug("Problematic #%d (used by different parts)" % targetPackage.id)
continue
if not noCircularDeps(mergePackage, targetPackage):
self._console.debug("Problematic #%d (circular dependencies)" % targetPackage.id)
if addtl_merge_constraints:
continue
# why accept this by default?
if not depsAvailWhereTarget(mergePackage, targetPackage):
self._console.debug("Problematic #%d (dependencies not always available)" % targetPackage.id)
if addtl_merge_constraints:
continue
# why accept this by default?
yield targetPackage
yield None
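The `areInSameParts()` check above is just a bitwise subset test: a merge candidate may only move into a package that is loaded in at least the same parts. Isolated for clarity:

```python
def is_subset_of_parts(merge_id, target_id):
    """True if every part bit of merge_id is also set in target_id."""
    return (merge_id & target_id) == merge_id

assert is_subset_of_parts(0b010, 0b110)      # target covers the gui part and more
assert not is_subset_of_parts(0b011, 0b110)  # target lacks the boot bit
```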
##
# Support for collapsing parts along their expected load order
#
# Packages are merged in parts that define an expected load order, starting
# with the boot part and continuing with groups of parts that have the same
# load index, in increasing order. Within a group, packages are merged from
# least used to more often used, and with packages unique to one of the parts
# in the group to packages that are common to all parts.
# Target packages for one group are blocked for the merge process of the next,
# to avoid merging all packages into one "monster" package that all parts
# share eventually.
def collapsePartsByOrder(self, script, collapse_groups=None):
def getCollapseGroupsOrdered(parts, ):
# returns dict of parts grouped by collapse index
# { 0 : set('boot'), 1 : set(part1, part2), 2 : ... }
collapseGroups = {}
# pre-define boot part with collapse index 0
boot = self._jobconf.get("packages/init", "boot")
collapseGroups[0] = set((parts[boot],))
# get configured load groups
for partname in parts:
part = parts[partname]
collidx = getattr(part, 'collapse_index', None)
if collidx:
if collidx < 1 : # not allowed
raise RuntimeError("Collapse index must be 1 or greater (Part: %s)" % partname)
else:
if collidx not in collapseGroups:
collapseGroups[collidx] = set(())
collapseGroups[collidx].add(part)
return collapseGroups
def isUnique(package, collapse_group):
return sum([int(package in part.packages) for part in collapse_group]) == 1
def isCommon(package, collapse_group):
return sum([int(package in part.packages) for part in collapse_group]) == len(collapse_group)
def getUniquePackages(part, collapse_group):
uniques = {}
for package in part.packages:
if isUnique(package, collapse_group):
if (package.id == part.bit_mask and # possibly protect "private" package
part.no_merge_private_package):
pass
else:
uniques[package.id] = package
return uniques
getUniquePackages.key = 'unique'
def getCommonPackages(part, collapse_group):
commons = {}
for package in part.packages:
if isCommon(package, collapse_group):
commons[package.id] = package
return commons
getCommonPackages.key = 'common'
def mergeGroupPackages(selectFunc, collapse_group, script, seen_targets):
self._console.debug("collapsing %s packages..." % selectFunc.key)
self._console.indent()
curr_targets = set(())
for part in collapse_group:
oldpackages = []
while oldpackages != part.packages:
oldpackages = part.packages[:]
for package in reversed(part.packagesSorted): # start with "smallest" package
selected_packages = selectFunc(part, collapse_group) # re-calculate b.o. modified part.packages
if package.id in selected_packages:
(mergedPackage,
targetPackage) = self._mergePackage(package, script,
selected_packages.values(), # TODO: How should areInSameParts() ever succeed with uniquePackages?!
seen_targets)
if mergedPackage: # on success: mergedPackage == package
script.packages.remove(mergedPackage)
curr_targets.add(targetPackage)
seen_targets.update(curr_targets)
self._console.outdent()
return script.parts, script.packages
# ---------------------------------------------------------------------
self._console.debug("")
self._console.debug("Collapsing parts by collapse order...")
self._console.indent()
if collapse_groups is None:
collapse_groups = getCollapseGroupsOrdered(script.parts, )
seen_targets = set(())
for collidx in sorted(collapse_groups.keys()): # go through groups in load order
self._console.dot()
collgrp = collapse_groups[collidx]
self._console.debug("Collapse group %d %r" % (collidx, [x.name for x in collgrp]))
self._console.indent()
script.parts, script.packages = mergeGroupPackages(getUniquePackages, collgrp, script, seen_targets)
script.parts, script.packages = mergeGroupPackages(getCommonPackages, collgrp, script, seen_targets)
self._console.outdent()
self._console.dotclear()
self._console.outdent()
return
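The bucketing done by `getCollapseGroupsOrdered()` can be sketched on its own: parts are grouped by their configured collapse index, the boot part is pinned at index 0, and groups are then processed in increasing order. An assumed-API sketch using plain dicts instead of `Part` objects:

```python
def group_by_collapse_index(part_indices, boot="boot"):
    """part_indices: part name -> collapse index (or None if unconfigured)."""
    groups = {0: {boot}}                    # boot part is always group 0
    for name, idx in part_indices.items():
        if name == boot or idx is None:
            continue
        if idx < 1:                         # not allowed, as above
            raise ValueError("Collapse index must be 1 or greater")
        groups.setdefault(idx, set()).add(name)
    return groups

groups = group_by_collapse_index(
    {"boot": None, "settings": 2, "editor": 1, "help": 1}
)
assert sorted(groups) == [0, 1, 2]
assert groups[1] == {"editor", "help"}      # same index -> same merge group
```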
##
# Try to find a target package for <fromPackage> within <packages> and, if
# found, merge <fromPackage> into the target package. <seen_targets>, if
# given, is a skip list for potential targets.
#
# On merge, maintains package.dependencies and part.packages, but leaves it
# to the caller to potentially remove <fromPackage> from script.packages.
# @return (<fromPackage>,<toPackage>) on success, else (None,None)
def _mergePackage(self, fromPackage, script, packages, seen_targets=None):
def updatePartDependencies(part, packageDeps):
for package in packageDeps:
if package not in part.packages:
# add package
part.packages.append(package)
# update package's part bit mask
package.part_mask |= part.bit_mask
# make sure the new package's dependencies are also included
updatePartDependencies(part, package.packageDeps)
return
def mergeContAndDeps(fromPackage, toPackage):
# Merging package content
toPackage.classes.extend(fromPackage.classes)
# Merging package dependencies
depsDelta = fromPackage.packageDeps.difference(set((toPackage,))) # make sure toPackage is not included
self._console.debug("Adding packages dependencies to target package: %s" % (map(str, sorted([x.id for x in depsDelta])),))
toPackage.packageDeps.update(depsDelta)
toPackage.packageDeps.difference_update(set((fromPackage,))) # remove potential dependency to fromPackage
self._console.debug("Target package #%s now depends on: %s" % (toPackage.id, map(str, sorted([x.id for x in toPackage.packageDeps]))))
return toPackage
# ----------------------------------------------------------------------
self._console.debug("Search a target package for package #%s" % (fromPackage.id,))
self._console.indent()
# find toPackage
toPackage = None
for toPackage in self._findMergeTarget(fromPackage, packages):
if toPackage == None:
break
elif seen_targets is not None:
if toPackage not in seen_targets:
break
else:
break
if not toPackage:
self._console.outdent()
return None, None
self._console.debug("Merge package #%s into #%s" % (fromPackage.id, toPackage.id))
self._console.indent()
# Merge package content and dependencies
toPackage = mergeContAndDeps(fromPackage, toPackage)
# Update package dependencies:
# all packages that depended on fromPackage depend now on toPackage
for package in script.packages:
if fromPackage in package.packageDeps:
# replace fromPackage with toPackage
package.packageDeps.difference_update(set((fromPackage,)))
package.packageDeps.update(set((toPackage,)))
# Update part information:
# remove the fromPackage from all parts using it, and add new dependencies to parts
# using toPackage
for part in script.parts.values():
# remove the merged package
if fromPackage in part.packages:
# we can simply remove the package, as we know the target package is also there
part.packages.remove(fromPackage)
# check additional dependencies for all parts
if toPackage in part.packages:
# this could be a part method
# if the toPackage is in part, we might need to add additional packages that toPackage now depends on
updatePartDependencies(part, fromPackage.packageDeps)
# remove of fromPackage from global packages list is easier handled in the caller
self._console.outdent()
self._console.outdent()
return fromPackage, toPackage # to allow caller check for merging and further clean-up fromPackage
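After a merge, every package that depended on the merged package must be pointed at the target instead, taking care not to give the target a dependency on itself. A toy version of that rewiring step, using plain dicts of sets rather than `Package` objects:

```python
def rewire_deps(all_deps, from_pkg, to_pkg):
    """all_deps: package -> set of packages it depends on (mutated in place)."""
    for pkg, deps in all_deps.items():
        if from_pkg in deps:
            deps.discard(from_pkg)          # drop the merged package
            if pkg != to_pkg:               # avoid a self-dependency
                deps.add(to_pkg)
    return all_deps

deps = {"A": {"B"}, "B": set(), "C": {"B", "A"}}
rewire_deps(deps, from_pkg="B", to_pkg="A")
assert deps == {"A": set(), "B": set(), "C": {"A"}}
```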
def _getFinalPartData(self, script):
parts = script.parts
packageIds = [x.id for x in script.packagesSorted()]
resultParts = {}
for toId, fromId in enumerate(packageIds):
for partId in parts:
if fromId in parts[partId].packages:
if not partId in resultParts:
resultParts[partId] = [toId]
else:
resultParts[partId].append(toId)
return resultParts
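`_getFinalPartData()` renumbers packages positionally: each part ends up listing the indices of its packages within the final sorted package order. A stand-alone sketch of that mapping (hypothetical helper name):

```python
def final_part_data(sorted_pkg_ids, part_packages):
    """sorted_pkg_ids: package ids in load order;
    part_packages: part name -> list of package ids it uses."""
    result = {}
    for pos, pkg_id in enumerate(sorted_pkg_ids):
        for part, pkgs in part_packages.items():
            if pkg_id in pkgs:
                result.setdefault(part, []).append(pos)
    return result

data = final_part_data(
    [7, 2, 4],
    {"boot": [7], "gui": [7, 2], "admin": [7, 4]},
)
assert data == {"boot": [0], "gui": [0, 1], "admin": [0, 2]}
```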
def _getFinalClassList(self, script):
packages = script.packagesSorted()
for package in packages:
# TODO: temp. kludge, to pass classIds to sortClasses()
# sortClasses() should take Class() objects directly
classMap = OrderedDict((cls.id, cls) for cls in package.classes)
classIds = self._depLoader.sortClasses(classMap.keys(), script.variants, script.buildType)
package.classes = [classMap[x] for x in classIds]
return script
##
# <currently not used>
def _sortPackagesTopological(self, packages): # packages : [Package]
import graph
# create graph object
gr = graph.digraph()
# add classes as nodes
gr.add_nodes(packages)
# for each load dependency add a directed edge
for package in packages:
for dep in package.packageDeps:
gr.add_edge(package, dep)
# cycle check?
cycle_nodes = gr.find_cycle()
if cycle_nodes:
raise RuntimeError("Detected circular dependencies between packages: %r" % cycle_nodes)
packageList = gr.topological_sorting()
return packageList
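The method above leans on an external `graph` module. For illustration, a dependency-free sketch of the same idea (Kahn's algorithm with cycle detection), ordered so that each package precedes its dependencies, mirroring `add_edge(package, dep)` followed by `topological_sorting()`:

```python
from collections import deque

def topo_sort(deps):
    """deps: package -> set of packages it depends on.
    Raises on cycles; otherwise returns dependents-first order."""
    indeg = {n: 0 for n in deps}
    for ds in deps.values():
        for d in ds:
            indeg[d] += 1                   # edge package -> dep
    queue = deque(n for n, d in indeg.items() if d == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for d in deps[n]:
            indeg[d] -= 1
            if indeg[d] == 0:
                queue.append(d)
    if len(order) != len(deps):             # leftover nodes imply a cycle
        raise RuntimeError("Detected circular dependencies between packages")
    return order

order = topo_sort({"app": {"gui", "core"}, "gui": {"core"}, "core": set()})
assert order.index("app") < order.index("gui") < order.index("core")
```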
def _printPartStats(self, script):
packages = dict([(x.id,x) for x in script.packages])
parts = script.parts
packageIds = sorted(packages.keys(), reverse=True)
self._console.debug("")
self._console.debug("Package summary : %d packages" % len(packageIds))
self._console.indent()
for packageId in packageIds:
self._console.debug("Package #%s contains %s classes" % (packageId, len(packages[packageId].classes)))
self._console.debug("%r" % packages[packageId].classes)
self._console.debug("Package #%s depends on these packages: %s" % (packageId, map(str, sorted([x.id for x in packages[packageId].packageDeps]))))
self._console.outdent()
self._console.debug("")
self._console.debug("Part summary : %d parts" % len(parts))
self._console.indent()
packages_used_in_parts = 0
for part in parts.values():
packages_used_in_parts += len(part.packages)
self._console.debug("Part #%s packages(%d): %s" % (part.name, len(part.packages), ", ".join('#'+str(x.id) for x in part.packages)))
self._console.debug("")
self._console.debug("Total of packages used in parts: %d" % packages_used_in_parts)
self._console.outdent()
self._console.debug("")
| 42.593516 | 157 | 0.600907 | 3,526 | 34,160 | 5.756948 | 0.171299 | 0.048771 | 0.026799 | 0.003104 | 0.174294 | 0.120055 | 0.082566 | 0.04951 | 0.046406 | 0.036159 | 0 | 0.002485 | 0.30483 | 34,160 | 801 | 158 | 42.646692 | 0.852318 | 0.243823 | 0 | 0.263499 | 0 | 0.00216 | 0.068685 | 0.006998 | 0 | 0 | 0 | 0.001248 | 0.00216 | 0 | null | null | 0.00216 | 0.015119 | null | null | 0.006479 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a2d721ef72b39de52022137d721dac292cbddcad | 890 | py | Python | Python/Topics/Sending-Email/05-pdf-attachment.py | shihab4t/Software-Development | 0843881f2ba04d9fca34e44443b5f12f509f671e | [
"Unlicense"
] | null | null | null | Python/Topics/Sending-Email/05-pdf-attachment.py | shihab4t/Software-Development | 0843881f2ba04d9fca34e44443b5f12f509f671e | [
"Unlicense"
] | null | null | null | Python/Topics/Sending-Email/05-pdf-attachment.py | shihab4t/Software-Development | 0843881f2ba04d9fca34e44443b5f12f509f671e | [
"Unlicense"
] | null | null | null | import imghdr
import smtplib
import os
from email.message import EmailMessage
EMAIL_ADDRESS = os.environ.get("GMAIL_ADDRESS")
EMAIL_PASSWORD = os.environ.get("GMAIL_APP_PASS")
pdfs = ["/home/shihab4t/Downloads/Profile.pdf"]
with smtplib.SMTP_SSL("smtp.gmail.com", 465) as smtp:
smtp.login(EMAIL_ADDRESS, EMAIL_PASSWORD)
reciver = "shihab4tdev@gmail.com"
msg = EmailMessage()
msg["Subject"] = "Grab dinner this weekend? 2"
msg["From"] = EMAIL_ADDRESS
msg["To"] = reciver
msg.set_content("How about dinner at 6pm this Saturday")
for pdf in pdfs:
with open(pdf, "rb") as pdf:
pdf_data = pdf.read()
pdf_name = pdf.name
msg.add_attachment(pdf_data, maintype="application",
subtype="octet-stream", filename=pdf_name)
smtp.send_message(msg)
print(f"Email was sent to {reciver}")
| 26.969697 | 69 | 0.665169 | 120 | 890 | 4.8 | 0.541667 | 0.0625 | 0.041667 | 0.059028 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010043 | 0.216854 | 890 | 32 | 70 | 27.8125 | 0.816356 | 0 | 0 | 0 | 0 | 0 | 0.257303 | 0.064045 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.086957 | 0.173913 | 0 | 0.173913 | 0.043478 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
a2df2293ad90461c1622171c3d5669f2f6f7fd84 | 2,791 | py | Python | yggdrasil/metaschema/datatypes/InstanceMetaschemaType.py | astro-friedel/yggdrasil | 5ecbfd083240965c20c502b4795b6dc93d94b020 | [
"BSD-3-Clause"
] | null | null | null | yggdrasil/metaschema/datatypes/InstanceMetaschemaType.py | astro-friedel/yggdrasil | 5ecbfd083240965c20c502b4795b6dc93d94b020 | [
"BSD-3-Clause"
] | null | null | null | yggdrasil/metaschema/datatypes/InstanceMetaschemaType.py | astro-friedel/yggdrasil | 5ecbfd083240965c20c502b4795b6dc93d94b020 | [
"BSD-3-Clause"
] | null | null | null | from yggdrasil.metaschema.datatypes import MetaschemaTypeError
from yggdrasil.metaschema.datatypes.MetaschemaType import MetaschemaType
from yggdrasil.metaschema.datatypes.JSONObjectMetaschemaType import (
JSONObjectMetaschemaType)
from yggdrasil.metaschema.properties.ArgsMetaschemaProperty import (
ArgsMetaschemaProperty)
class InstanceMetaschemaType(MetaschemaType):
r"""Type for evaluating instances of Python classes."""
name = 'instance'
description = 'Type for Python class instances.'
properties = ['class', 'args']
definition_properties = ['class']
metadata_properties = ['class', 'args']
extract_properties = ['class', 'args']
python_types = (object, )
cross_language_support = False
@classmethod
def validate(cls, obj, raise_errors=False):
r"""Validate an object to check if it could be of this type.
Args:
obj (object): Object to validate.
raise_errors (bool, optional): If True, errors will be raised when
the object fails to be validated. Defaults to False.
Returns:
bool: True if the object could be of this type, False otherwise.
"""
# Base not called because every python object should pass validation
# against the object class
try:
ArgsMetaschemaProperty.instance2args(obj)
return True
except MetaschemaTypeError:
if raise_errors:
raise ValueError("Class doesn't have an input_args attribute.")
return False
@classmethod
def encode_data(cls, obj, typedef):
r"""Encode an object's data.
Args:
obj (object): Object to encode.
typedef (dict): Type definition that should be used to encode the
object.
Returns:
string: Encoded object.
"""
args = ArgsMetaschemaProperty.instance2args(obj)
if isinstance(typedef, dict) and ('args' in typedef):
typedef_args = {'properties': typedef['args']}
else:
typedef_args = None
return JSONObjectMetaschemaType.encode_data(args, typedef_args)
@classmethod
def decode_data(cls, obj, typedef):
r"""Decode an object.
Args:
obj (string): Encoded object to decode.
typedef (dict): Type definition that should be used to decode the
object.
Returns:
object: Decoded object.
"""
# TODO: Normalization can be removed if metadata is normalized
typedef = cls.normalize_definition(typedef)
args = JSONObjectMetaschemaType.decode_data(
obj, {'properties': typedef.get('args', {})})
return typedef['class'](**args)
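The encode/decode pair above serializes an instance through its constructor arguments and rebuilds it by calling the class with them. A hedged, self-contained sketch of that round-trip idea (the real types go through the metaschema property machinery and encode the class by name; the `input_args` method here is a stand-in for what `instance2args` extracts):

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y
    def input_args(self):                 # stand-in for instance2args()
        return {"x": self.x, "y": self.y}

def encode(obj):
    # for illustration only: store the class object directly
    return {"class": type(obj), "args": obj.input_args()}

def decode(data):
    return data["class"](**data["args"])  # mirrors typedef['class'](**args)

p2 = decode(encode(Point(1, 2)))
assert (p2.x, p2.y) == (1, 2)
```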
| 34.036585 | 78 | 0.636331 | 288 | 2,791 | 6.104167 | 0.357639 | 0.025597 | 0.052332 | 0.054608 | 0.112628 | 0.048919 | 0.048919 | 0.048919 | 0.048919 | 0 | 0 | 0.001002 | 0.284486 | 2,791 | 81 | 79 | 34.45679 | 0.879319 | 0.322465 | 0 | 0.071429 | 0 | 0 | 0.089138 | 0 | 0 | 0 | 0 | 0.012346 | 0 | 1 | 0.071429 | false | 0 | 0.095238 | 0 | 0.47619 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a2df9c5cd443a1cdbe81e54c4e448271480f6781 | 368 | py | Python | battleships/migrations/0004_auto_20181202_1852.py | ArturAdamczyk/Battleships | 748e4fa87ed0c17c57abbdf5a0a2bca3c91dff24 | [
"MIT"
] | null | null | null | battleships/migrations/0004_auto_20181202_1852.py | ArturAdamczyk/Battleships | 748e4fa87ed0c17c57abbdf5a0a2bca3c91dff24 | [
"MIT"
] | null | null | null | battleships/migrations/0004_auto_20181202_1852.py | ArturAdamczyk/Battleships | 748e4fa87ed0c17c57abbdf5a0a2bca3c91dff24 | [
"MIT"
] | null | null | null | # Generated by Django 2.1.3 on 2018-12-02 17:52
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('battleships', '0003_auto_20181202_1832'),
]
operations = [
migrations.RenameField(
model_name='coordinate',
old_name='ship',
new_name='ship1',
),
]
| 19.368421 | 51 | 0.589674 | 39 | 368 | 5.410256 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.124031 | 0.298913 | 368 | 18 | 52 | 20.444444 | 0.693798 | 0.122283 | 0 | 0 | 1 | 0 | 0.165109 | 0.071651 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a2e589c4ee6ca6ac8b468da944e0f2d14d31872f | 695 | py | Python | toto/methods/client_error.py | VNUELIVE/Toto | 6940b4114fc6b680e0d40ae248b7d2599c954f81 | [
"MIT"
] | null | null | null | toto/methods/client_error.py | VNUELIVE/Toto | 6940b4114fc6b680e0d40ae248b7d2599c954f81 | [
"MIT"
] | null | null | null | toto/methods/client_error.py | VNUELIVE/Toto | 6940b4114fc6b680e0d40ae248b7d2599c954f81 | [
"MIT"
] | null | null | null | import logging
from toto.invocation import *
@requires('client_error', 'client_type')
def invoke(handler, parameters):
'''A convenience method for writing browser errors
to Toto's server log. It works with the ``registerErrorHandler()`` method in ``toto.js``.
The "client_error" parameter should be set to the string to be written to Toto's log.
Currently, the "client_type" parameter must be set to "browser_js" for an event
to be written. Otherwise, this method has no effect.
Requires: ``client_error``, ``client_type``
'''
if parameters['client_type'] != 'browser_js':
return {'logged': False}
logging.error(str(parameters['client_error']))
return {'logged': True}
| 36.578947 | 91 | 0.723741 | 99 | 695 | 4.979798 | 0.525253 | 0.089249 | 0.077079 | 0.10142 | 0.117647 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.155396 | 695 | 18 | 92 | 38.611111 | 0.839864 | 0.576978 | 0 | 0 | 0 | 0 | 0.247273 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.25 | 0 | 0.625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
a2e5b6c37644bb0cda6e0ffc3d078b3332260604 | 1,945 | py | Python | parallelformers/policies/gptj.py | Oaklight/parallelformers | 57fc36f81734c29aaf814e092ce13681d3c28ede | [
"Apache-2.0"
] | 454 | 2021-07-18T02:51:23.000Z | 2022-03-31T04:00:53.000Z | parallelformers/policies/gptj.py | Oaklight/parallelformers | 57fc36f81734c29aaf814e092ce13681d3c28ede | [
"Apache-2.0"
] | 16 | 2021-07-18T10:47:21.000Z | 2022-03-22T18:49:57.000Z | parallelformers/policies/gptj.py | Oaklight/parallelformers | 57fc36f81734c29aaf814e092ce13681d3c28ede | [
"Apache-2.0"
] | 33 | 2021-07-18T04:48:28.000Z | 2022-03-14T22:16:36.000Z | # Copyright 2021 TUNiB inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from transformers.models.gptj.modeling_gptj import GPTJBlock
from parallelformers.policies.base import Layer, Policy
from parallelformers.utils import AllReduceLinear
class GPTJPolicy(Policy):
@staticmethod
def replace_arguments(config, world_size):
return {
# 1. reduce hidden size
"attn.embed_dim": config.hidden_size // world_size,
# 2. reduce number of heads
"attn.num_attention_heads": config.n_head // world_size,
}
@staticmethod
def attn_qkv():
return [
Layer(weight="attn.q_proj.weight"),
Layer(weight="attn.k_proj.weight"),
Layer(weight="attn.v_proj.weight"),
]
@staticmethod
def attn_out():
return [
Layer(
weight="attn.out_proj.weight",
replace=AllReduceLinear,
),
]
@staticmethod
def mlp_in():
return [
Layer(
weight="mlp.fc_in.weight",
bias="mlp.fc_in.bias",
),
]
@staticmethod
def mlp_out():
return [
Layer(
weight="mlp.fc_out.weight",
bias="mlp.fc_out.bias",
replace=AllReduceLinear,
),
]
@staticmethod
def original_layer_class():
return GPTJBlock
| 28.188406 | 74 | 0.607712 | 224 | 1,945 | 5.169643 | 0.491071 | 0.051813 | 0.058722 | 0.027634 | 0.081174 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007396 | 0.304884 | 1,945 | 68 | 75 | 28.602941 | 0.849112 | 0.303856 | 0 | 0.4 | 0 | 0 | 0.130045 | 0.017937 | 0 | 0 | 0 | 0 | 0 | 1 | 0.133333 | false | 0 | 0.066667 | 0.133333 | 0.355556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
a2e61afbf4f6a03e376d0464c7acf87dc5bb080e | 503 | py | Python | app/modules/checkerbox.py | hboueix/PyCheckers | c1339a004f30f76a33461b52f9633bbbd1204bb0 | [
"MIT"
] | null | null | null | app/modules/checkerbox.py | hboueix/PyCheckers | c1339a004f30f76a33461b52f9633bbbd1204bb0 | [
"MIT"
] | null | null | null | app/modules/checkerbox.py | hboueix/PyCheckers | c1339a004f30f76a33461b52f9633bbbd1204bb0 | [
"MIT"
] | null | null | null | import pygame
class Checkerbox(pygame.sprite.Sprite):
def __init__(self, size, color, coords):
super().__init__()
self.rect = pygame.Rect(coords[0], coords[1], size, size)
self.color = color
self.hovered = False
def draw(self, screen):
pygame.draw.rect(screen, self.color, self.rect)
def set_hovered(self):
if self.rect.collidepoint(pygame.mouse.get_pos()):
self.hovered = True
else:
self.hovered = False
| 23.952381 | 65 | 0.606362 | 62 | 503 | 4.758065 | 0.435484 | 0.081356 | 0.108475 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005464 | 0.272366 | 503 | 20 | 66 | 25.15 | 0.800546 | 0 | 0 | 0.142857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.214286 | false | 0 | 0.071429 | 0 | 0.357143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a2f4994690266aa4a640429912d46124db104724 | 1,461 | py | Python | tests/unittests/types/test_array.py | TrigonDev/apgorm | 5b593bfb5a200708869e079248c25786608055d6 | [
"MIT"
] | 8 | 2022-01-21T23:07:29.000Z | 2022-03-26T12:03:57.000Z | tests/unittests/types/test_array.py | TrigonDev/apgorm | 5b593bfb5a200708869e079248c25786608055d6 | [
"MIT"
] | 22 | 2021-12-23T00:43:41.000Z | 2022-03-23T13:17:32.000Z | tests/unittests/types/test_array.py | TrigonDev/apgorm | 5b593bfb5a200708869e079248c25786608055d6 | [
"MIT"
] | 3 | 2022-01-15T20:58:33.000Z | 2022-01-26T21:36:13.000Z | # MIT License
#
# Copyright (c) 2021 TrigonDev
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
import pytest
from apgorm.types import Array, Int # for subtypes
@pytest.mark.parametrize("subtype", [Int(), Array(Int()), Array(Array(Int()))])
def test_array_init(subtype):
    a = Array(subtype)
    assert a.subtype is subtype


def test_array_sql():
    assert Array(Int())._sql == "INTEGER[]"
    assert Array(Array(Int()))._sql == "INTEGER[][]"
| 38.447368 | 79 | 0.750171 | 217 | 1,461 | 5.023041 | 0.516129 | 0.080734 | 0.023853 | 0.033028 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003292 | 0.168378 | 1,461 | 37 | 80 | 39.486486 | 0.893827 | 0.735113 | 0 | 0 | 0 | 0 | 0.074176 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.222222 | false | 0 | 0.222222 | 0 | 0.444444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a2f8a7986f7bf085148eeaed0a44176810f81182 | 747 | py | Python | code/searchers.py | trunc8/mespp | 8348bdd0ba8f584ef7196c0064b8e5bafa38a0fb | [
"MIT"
] | 2 | 2021-07-07T17:01:17.000Z | 2022-03-30T05:28:44.000Z | code/searchers.py | trunc8/mespp | 8348bdd0ba8f584ef7196c0064b8e5bafa38a0fb | [
"MIT"
] | null | null | null | code/searchers.py | trunc8/mespp | 8348bdd0ba8f584ef7196c0064b8e5bafa38a0fb | [
"MIT"
] | 1 | 2021-07-07T17:00:54.000Z | 2021-07-07T17:00:54.000Z | #!/usr/bin/env python3
# trunc8 did this
import numpy as np
class Searchers:
    def __init__(self, g,
                 N=100,
                 M=2,
                 initial_positions=np.array([90, 58]),
                 target_initial_position=45):
        '''
        g: Graph of environment
        N: Number of vertices
        M: Number of searchers
        initial_positions: Starting positions of searchers
        '''
        self.N = N
        self.M = M
        self.initial_positions = initial_positions
        self.positions = self.initial_positions.copy()
        self.initial_belief = np.zeros(N+1)
        capture_offset = 1  # For less confusion while indexing
        vertex = target_initial_position + capture_offset
        self.initial_belief[vertex] = 1

    def updatePositions(self):
        pass
 | 25.758621 | 58 | 0.649264 | 96 | 747 | 4.875 | 0.510417 | 0.17094 | 0.089744 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027322 | 0.26506 | 747 | 29 | 59 | 25.758621 | 0.825137 | 0.255689 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0.058824 | 0.058824 | 0 | 0.235294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
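The belief-vector indexing in the row above (vertex ids shifted by `capture_offset` so index 0 can hold a reserved state) can be sketched standalone; the variable names and values below are illustrative, not from the repository:

```python
import numpy as np

N = 5                      # number of graph vertices (assumed)
target_start = 2           # 0-based starting vertex of the target
capture_offset = 1         # index 0 is reserved, so vertex v maps to v + 1

# Belief vector over N vertices plus the reserved slot, as in the row above.
initial_belief = np.zeros(N + 1)
initial_belief[target_start + capture_offset] = 1.0

print(initial_belief)  # → [0. 0. 0. 1. 0. 0.]
```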
a2fa1506f35030e5726f14dab7372d11ea530f9d | 1,015 | py | Python | vogue/api/api_v1/api.py | mayabrandi/vogue | 463e6417a9168eadb0d11dea2d0f97919494bcd3 | [
"MIT"
] | 1 | 2021-12-16T19:29:17.000Z | 2021-12-16T19:29:17.000Z | vogue/api/api_v1/api.py | mayabrandi/vogue | 463e6417a9168eadb0d11dea2d0f97919494bcd3 | [
"MIT"
] | 188 | 2018-10-25T06:13:17.000Z | 2022-02-25T19:47:06.000Z | vogue/api/api_v1/api.py | mayabrandi/vogue | 463e6417a9168eadb0d11dea2d0f97919494bcd3 | [
"MIT"
] | null | null | null | from fastapi import FastAPI
from vogue.api.api_v1.endpoints import (
    insert_documents,
    home,
    common_trends,
    sequencing,
    genootype,
    reagent_labels,
    prepps,
    bioinfo_covid,
    bioinfo_micro,
    bioinfo_mip,
    update,
)
from vogue.settings import static_files

app = FastAPI()
app.mount(
    "/static",
    static_files,
    name="static",
)
app.include_router(home.router, tags=["home"])
app.include_router(common_trends.router, tags=["common_trends"])
app.include_router(sequencing.router, tags=["sequencing"])
app.include_router(genootype.router, tags=["genotype"])
app.include_router(reagent_labels.router, tags=["index"])
app.include_router(prepps.router, tags=["preps"])
app.include_router(bioinfo_micro.router, tags=["bioinfo_micro"])
app.include_router(bioinfo_covid.router, tags=["bioinfo_covid"])
app.include_router(bioinfo_mip.router, tags=["bioinfo_mip"])
app.include_router(update.router, tags=["update"])
app.include_router(insert_documents.router, tags=["sample"])
| 27.432432 | 64 | 0.747783 | 130 | 1,015 | 5.607692 | 0.269231 | 0.150892 | 0.241427 | 0.09465 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00111 | 0.112315 | 1,015 | 36 | 65 | 28.194444 | 0.807991 | 0 | 0 | 0 | 0 | 0 | 0.105419 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.09375 | 0 | 0.09375 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a2fe2076a061b4411e718858d451c717a3acc756 | 318 | py | Python | Chapter01/displacy-save-as-image-1-4-5.py | indrasmartmob/Mastering-spaCy | 756876902eee8437d6d9ddcef2ba7ffabfc970a3 | [
"MIT"
] | 76 | 2021-07-07T14:32:42.000Z | 2022-03-27T17:15:15.000Z | Chapter01/displacy-save-as-image-1-4-5.py | indrasmartmob/Mastering-spaCy | 756876902eee8437d6d9ddcef2ba7ffabfc970a3 | [
"MIT"
] | 4 | 2021-08-18T18:08:23.000Z | 2022-03-27T03:30:27.000Z | Chapter01/displacy-save-as-image-1-4-5.py | indrasmartmob/Mastering-spaCy | 756876902eee8437d6d9ddcef2ba7ffabfc970a3 | [
"MIT"
] | 38 | 2021-07-09T22:23:38.000Z | 2022-03-12T07:11:37.000Z | #!/usr/bin/env python3
import spacy
from spacy import displacy
from pathlib import Path
nlp = spacy.load("en_core_web_md")
doc = nlp("I'm a butterfly.")
svg = displacy.render(doc, style="dep", jupyter=False)
filename = "butterfly.svg"
output_path = Path(filename)
output_path.open("w", encoding="utf-8").write(svg)
| 22.714286 | 54 | 0.735849 | 51 | 318 | 4.490196 | 0.686275 | 0.104803 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007092 | 0.113208 | 318 | 13 | 55 | 24.461538 | 0.804965 | 0.066038 | 0 | 0 | 0 | 0 | 0.176271 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
0c042004c2d10428499c1e729e50d34d388b3eb9 | 519 | py | Python | sources/101_test.py | Painatalman/python101 | 9727ca03da46f81813fc2d338b8ba22fc0d8b78b | [
"Apache-2.0"
] | null | null | null | sources/101_test.py | Painatalman/python101 | 9727ca03da46f81813fc2d338b8ba22fc0d8b78b | [
"Apache-2.0"
] | null | null | null | sources/101_test.py | Painatalman/python101 | 9727ca03da46f81813fc2d338b8ba22fc0d8b78b | [
"Apache-2.0"
] | null | null | null | from fruits import validate_fruit
fruits = ["banana", "lemon", "apple", "orange", "batman"]
print fruits
def list_fruits(fruits, byName=True):
    if byName:
        # WARNING: this won't make a copy of the list and return it. It will change the list FOREVER
        fruits.sort()
    for index, fruit in enumerate(fruits):
        if validate_fruit(fruit):
            print "Fruit nr %d is %s" % (index, fruit)
        else:
            print "This %s is no fruit!" % (fruit)
list_fruits(fruits)
print fruits
| 22.565217 | 100 | 0.628131 | 73 | 519 | 4.410959 | 0.575342 | 0.080745 | 0.099379 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.265896 | 519 | 22 | 101 | 23.590909 | 0.845144 | 0.17341 | 0 | 0.153846 | 0 | 0 | 0.152225 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.076923 | null | null | 0.307692 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
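A Python 3 rendering of the listing logic in the row above may help; `validate_fruit` is stubbed here because the `fruits` module itself is not part of this row, and the stub's rule is an assumption:

```python
def validate_fruit(fruit):
    # Hypothetical stand-in for fruits.validate_fruit from the repo.
    return fruit != "batman"


def list_fruits(fruits, by_name=True):
    lines = []
    if by_name:
        fruits.sort()  # NOTE: mutates the caller's list, as the original warns
    for index, fruit in enumerate(fruits):
        if validate_fruit(fruit):
            lines.append("Fruit nr %d is %s" % (index, fruit))
        else:
            lines.append("This %s is no fruit!" % fruit)
    return lines


fruits = ["banana", "lemon", "apple", "orange", "batman"]
print("\n".join(list_fruits(fruits)))
```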
0c08ff139766a0d536bcc09bc242b07f333b8755 | 853 | py | Python | HackerRank/Two Sum/Two Sum.py | nikku1234/Code-Practise | 94eb6680ea36efd10856c377000219285f77e5a4 | [
"Apache-2.0"
] | 9 | 2020-07-02T06:06:17.000Z | 2022-02-26T11:08:09.000Z | HackerRank/Two Sum/Two Sum.py | nikku1234/Code-Practise | 94eb6680ea36efd10856c377000219285f77e5a4 | [
"Apache-2.0"
] | 1 | 2021-11-04T17:26:36.000Z | 2021-11-04T17:26:36.000Z | HackerRank/Two Sum/Two Sum.py | nikku1234/Code-Practise | 94eb6680ea36efd10856c377000219285f77e5a4 | [
"Apache-2.0"
] | 8 | 2021-01-31T10:31:12.000Z | 2022-03-13T09:15:55.000Z | class Solution:
    def twoSum(self, nums: List[int], target: int) -> List[int]:
        """Naive Logic"""
        ''' for i in range(len(nums)):
                left = nums[i+1:]
                for j in range(len(left)):
                    if (nums[i]+left[j]) == target:
                        return i, j+i+1
        '''
        '''Better Logic'''
        '''
        k = 0
        for i in nums:
            k = k+1
            if target-i in nums[k:]:
                return (k - 1, nums[k:].index(target - i) + k)
        '''
        '''Going for a better logic HashTable'''
        hash_table = {}
        for i in range(len(nums)):
            hash_table[nums[i]] = i
        for i in range(len(nums)):
            if target-nums[i] in hash_table:
                if hash_table[target-nums[i]] != i:
                    return [i, hash_table[target-nums[i]]]
        return []
 | 32.807692 | 64 | 0.444314 | 114 | 853 | 3.280702 | 0.263158 | 0.048128 | 0.064171 | 0.088235 | 0.251337 | 0.144385 | 0 | 0 | 0 | 0 | 0 | 0.009766 | 0.399766 | 853 | 26 | 65 | 32.807692 | 0.720703 | 0 | 0 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
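The hash-table variant in the row above can be sketched as a standalone function (the `two_sum` name and the test values are illustrative, not part of the dataset row):

```python
from typing import List


def two_sum(nums: List[int], target: int) -> List[int]:
    """Return [i, j] with i != j and nums[i] + nums[j] == target, else []."""
    # One pass builds value -> (last) index; a second pass looks up complements in O(1).
    index_of = {value: i for i, value in enumerate(nums)}
    for i, value in enumerate(nums):
        j = index_of.get(target - value)
        if j is not None and j != i:
            return [i, j]
    return []


print(two_sum([2, 7, 11, 15], 9))  # → [0, 1]
```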
0c09891ffb40760a1dcac5e46984a7d055ce0caf | 2,587 | py | Python | web/app/djrq/admin/admin.py | bmillham/djrq2 | c84283b75a7c15da1902ebfc32b7d75159c09e20 | [
"MIT"
] | 1 | 2016-11-23T20:50:00.000Z | 2016-11-23T20:50:00.000Z | web/app/djrq/admin/admin.py | bmillham/djrq2 | c84283b75a7c15da1902ebfc32b7d75159c09e20 | [
"MIT"
] | 15 | 2017-01-15T04:18:40.000Z | 2017-02-25T04:13:06.000Z | web/app/djrq/admin/admin.py | bmillham/djrq2 | c84283b75a7c15da1902ebfc32b7d75159c09e20 | [
"MIT"
] | null | null | null | # encoding: utf-8
from web.ext.acl import when
from ..templates.admin.admintemplate import page as _page
from ..templates.admin.requests import requeststemplate, requestrow
from ..templates.requests import requestrow as rr
from ..send_update import send_update
import cinje
@when(when.matches(True, 'session.authenticated', True), when.never)
class Admin:
    __dispatch__ = 'resource'
    __resource__ = 'admin'

    from .suggestions import Suggestions as suggestions
    from .mistags import Mistags as mistags
    from .auth import Auth as auth
    from .logout import Logout as logout
    from .showinfo import ShowInfo as showinfo
    from .requestoptions import RequestOptions as requestoptions
    from .catalogoptions import CatalogOptions as catalogoptions
    from .uploadfiles import UploadFiles as uploadfiles
    from .updatedatabase import UpdateDatabase as updatedatabase
    from .changepw import ChangePassword as changepw
    from .showhistory import ShowHistory as showhistory
    from .restoredatabase import RestoreDatabase as restoredatabase, CurrentProgress as currentprogress
    from .updatehistory import UpdateHistory as updatehistory

    def __init__(self, context, name, *arg, **args):
        self._name = name
        self._ctx = context
        self.queries = context.queries

    def get(self, *arg, **args):
        if len(arg) > 0 and arg[0] != 'requests':
            return "Page not found: {}".format(arg[0])
        if 'view_status' not in args:
            args['view_status'] = 'New/Pending'
        if 'change_status' in args:
            changed_row = self.queries.change_request_status(args['id'], args['status'])
            try:
                request_row = cinje.flatten(rr(changed_row))
            except:
                request_row = ''  # Row was deleted
            np_info = self.queries.get_requests_info(status=args['view_status'])
            send_update(self._ctx.websocket, requestbutton=np_info.request_count, request_row=request_row, new_request_status=args['status'], request_id=args['id'])  # Update the request count button
            send_update(self._ctx.websocket_admin, requestbutton=np_info.request_count)  # Update the request count button
        requestlist = self.queries.get_requests(status=args['view_status'])
        try:
            requestinfo = np_info
        except:
            requestinfo = self.queries.get_requests_info(status=args['view_status'])
        return requeststemplate(_page, "Requests", self._ctx, requestlist=requestlist, view_status=args['view_status'], requestinfo=requestinfo)
| 43.847458 | 198 | 0.709702 | 307 | 2,587 | 5.80456 | 0.28013 | 0.039282 | 0.039282 | 0.044893 | 0.145903 | 0.051627 | 0.051627 | 0.051627 | 0.051627 | 0 | 0 | 0.001947 | 0.20603 | 2,587 | 58 | 199 | 44.603448 | 0.865628 | 0.036722 | 0 | 0.085106 | 0 | 0 | 0.069964 | 0.008444 | 0 | 0 | 0 | 0 | 0 | 1 | 0.042553 | false | 0.021277 | 0.404255 | 0 | 0.553191 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
0c0c55cfe0bc18dae70bf566cb7d439dd048fafe | 602 | py | Python | udp/src/server.py | matthewchute/net-prot | 82d2d92b3c88afb245161780fdd7909d7bf15eb1 | [
"MIT"
] | null | null | null | udp/src/server.py | matthewchute/net-prot | 82d2d92b3c88afb245161780fdd7909d7bf15eb1 | [
"MIT"
] | null | null | null | udp/src/server.py | matthewchute/net-prot | 82d2d92b3c88afb245161780fdd7909d7bf15eb1 | [
"MIT"
] | null | null | null | import constants, helpers, os
temp_msg = None
whole_msg = b''
file_path = None
helpers.sock.bind(constants.IP_PORT)
print "Server Ready"
# receive
while temp_msg != constants.EOF:
    datagram = helpers.sock.recvfrom(constants.BUFFER_SIZE)
    temp_msg = datagram[0]
    if file_path is None:
        print("Receiving " + temp_msg.decode() + "...")
        file_path = os.path.join(constants.SERVER_FILE_PATH, temp_msg.decode())
    else:
        whole_msg += temp_msg

whole_msg = whole_msg.strip(constants.EOF)
with open(file_path, 'wb') as sFile:
    sFile.write(whole_msg)
print "Received"
| 22.296296 | 79 | 0.696013 | 86 | 602 | 4.651163 | 0.465116 | 0.105 | 0.065 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002037 | 0.184385 | 602 | 26 | 80 | 23.153846 | 0.812627 | 0.011628 | 0 | 0 | 0 | 0 | 0.059122 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.055556 | null | null | 0.166667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0c0dcfc232bbe604e854e762de0825bd246ecc01 | 3,697 | py | Python | sdk/apimanagement/azure-mgmt-apimanagement/azure/mgmt/apimanagement/models/hostname_configuration.py | tzhanl/azure-sdk-for-python | 18cd03f4ab8fd76cc0498f03e80fbc99f217c96e | [
"MIT"
] | 1 | 2021-09-07T18:36:04.000Z | 2021-09-07T18:36:04.000Z | sdk/apimanagement/azure-mgmt-apimanagement/azure/mgmt/apimanagement/models/hostname_configuration.py | tzhanl/azure-sdk-for-python | 18cd03f4ab8fd76cc0498f03e80fbc99f217c96e | [
"MIT"
] | 2 | 2019-10-02T23:37:38.000Z | 2020-10-02T01:17:31.000Z | sdk/apimanagement/azure-mgmt-apimanagement/azure/mgmt/apimanagement/models/hostname_configuration.py | tzhanl/azure-sdk-for-python | 18cd03f4ab8fd76cc0498f03e80fbc99f217c96e | [
"MIT"
] | 1 | 2019-06-17T22:18:23.000Z | 2019-06-17T22:18:23.000Z | # coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
from msrest.serialization import Model
class HostnameConfiguration(Model):
    """Custom hostname configuration.

    All required parameters must be populated in order to send to Azure.

    :param type: Required. Hostname type. Possible values include: 'Proxy',
     'Portal', 'Management', 'Scm', 'DeveloperPortal'
    :type type: str or ~azure.mgmt.apimanagement.models.HostnameType
    :param host_name: Required. Hostname to configure on the Api Management
     service.
    :type host_name: str
    :param key_vault_id: Url to the KeyVault Secret containing the Ssl
     Certificate. If absolute Url containing version is provided, auto-update
     of ssl certificate will not work. This requires Api Management service to
     be configured with MSI. The secret should be of type
     *application/x-pkcs12*
    :type key_vault_id: str
    :param encoded_certificate: Base64 Encoded certificate.
    :type encoded_certificate: str
    :param certificate_password: Certificate Password.
    :type certificate_password: str
    :param default_ssl_binding: Specify true to setup the certificate
     associated with this Hostname as the Default SSL Certificate. If a client
     does not send the SNI header, then this will be the certificate that will
     be challenged. The property is useful if a service has multiple custom
     hostname enabled and it needs to decide on the default ssl certificate.
     The setting only applied to Proxy Hostname Type. Default value: False .
    :type default_ssl_binding: bool
    :param negotiate_client_certificate: Specify true to always negotiate
     client certificate on the hostname. Default Value is false. Default value:
     False .
    :type negotiate_client_certificate: bool
    :param certificate: Certificate information.
    :type certificate: ~azure.mgmt.apimanagement.models.CertificateInformation
    """

    _validation = {
        'type': {'required': True},
        'host_name': {'required': True},
    }

    _attribute_map = {
        'type': {'key': 'type', 'type': 'str'},
        'host_name': {'key': 'hostName', 'type': 'str'},
        'key_vault_id': {'key': 'keyVaultId', 'type': 'str'},
        'encoded_certificate': {'key': 'encodedCertificate', 'type': 'str'},
        'certificate_password': {'key': 'certificatePassword', 'type': 'str'},
        'default_ssl_binding': {'key': 'defaultSslBinding', 'type': 'bool'},
        'negotiate_client_certificate': {'key': 'negotiateClientCertificate', 'type': 'bool'},
        'certificate': {'key': 'certificate', 'type': 'CertificateInformation'},
    }

    def __init__(self, **kwargs):
        super(HostnameConfiguration, self).__init__(**kwargs)
        self.type = kwargs.get('type', None)
        self.host_name = kwargs.get('host_name', None)
        self.key_vault_id = kwargs.get('key_vault_id', None)
        self.encoded_certificate = kwargs.get('encoded_certificate', None)
        self.certificate_password = kwargs.get('certificate_password', None)
        self.default_ssl_binding = kwargs.get('default_ssl_binding', False)
        self.negotiate_client_certificate = kwargs.get('negotiate_client_certificate', False)
        self.certificate = kwargs.get('certificate', None)
| 48.012987 | 94 | 0.678117 | 426 | 3,697 | 5.751174 | 0.366197 | 0.029388 | 0.063673 | 0.022857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00166 | 0.185285 | 3,697 | 76 | 95 | 48.644737 | 0.811753 | 0.572085 | 0 | 0 | 0 | 0 | 0.337725 | 0.072122 | 0 | 0 | 0 | 0 | 0 | 1 | 0.038462 | false | 0.076923 | 0.038462 | 0 | 0.192308 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |