hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
f91114b11e69845ded8fd16b0a8d6a5869cb61e2 | 2,777 | py | Python | app/routers/sentences.py | ephraimberkovitch/cadet | 40ff288bfa96a3a0615fdf0b4d79246bc0fb0011 | [
"MIT"
] | 2 | 2021-06-23T14:03:09.000Z | 2021-11-21T01:06:03.000Z | app/routers/sentences.py | ephraimberkovitch/cadet | 40ff288bfa96a3a0615fdf0b4d79246bc0fb0011 | [
"MIT"
] | 13 | 2021-06-23T16:07:57.000Z | 2021-07-09T20:51:09.000Z | app/routers/sentences.py | ephraimberkovitch/cadet | 40ff288bfa96a3a0615fdf0b4d79246bc0fb0011 | [
"MIT"
] | 2 | 2021-06-23T16:09:32.000Z | 2022-03-18T12:44:25.000Z | import importlib.util # https://stackoverflow.com/questions/67631/how-to-import-a-module-given-the-full-path
import json
from pathlib import Path
from fastapi import APIRouter, Depends
from fastapi.responses import HTMLResponse, RedirectResponse
from fastapi import Request, Form
from fastapi.templating import Jinja2Templates
templates = Jinja2Templates(directory="app/templates")
from app.util.login import get_current_username
router = APIRouter(dependencies=[Depends(get_current_username)])
@router.get("/sentences")
async def create(request: Request):
new_lang = Path.cwd() / "new_lang"
if len(list(new_lang.iterdir())) > 0:
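        # only one language scaffold is expected; use the first directory found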
path = list(new_lang.iterdir())[0]
path = path / "examples.py"
spec = importlib.util.spec_from_file_location("sentences", str(path))
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
sentences = module.sentences
#ltr or rtl
nlp = get_nlp()
writing_system = nlp.vocab.writing_system['direction']
return templates.TemplateResponse(
"sentences.html", {"request": request, "sentences": sentences, "writing_system":writing_system}
)
else:
return templates.TemplateResponse(
"error_please_create.html", {"request": request}
)
@router.post("/update_sentences")
async def update_sentences(request: Request, sentences: str = Form(...)):
sentences = json.loads(sentences)
new_lang = Path.cwd() / "new_lang"
if new_lang.exists():
if len(list(new_lang.iterdir())) > 0:
name = list(new_lang.iterdir())[0].name
examples_file = Path.cwd() / "new_lang" / name / "examples.py"
examples = examples_file.read_text()
start = examples.find("sentences = [") + 13
end = examples.find("]")
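            # NOTE: this takes the first "]" in the file, assuming it closes the sentences list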
sents = ""
for sentence in sentences:
sentence = sentence.replace('&nbsp','').replace(' ','').replace('\n','').strip() #bug from the template
sents += '"""' + sentence + '""",'
examples_file.write_text(examples[:start] + sents + examples[end:])
return sentences
def get_nlp():
# Load language object as nlp
new_lang = Path.cwd() / "new_lang"
lang_name = list(new_lang.iterdir())[0].name
try:
mod = __import__(f"new_lang.{lang_name}", fromlist=[lang_name.capitalize()])
    except SyntaxError:  # unable to load __init__ due to a syntax error
        # redirect to the editor so the user can correct the file before proceeding
        return RedirectResponse(url="/edit?file_name=tokenizer_exceptions.py")
cls = getattr(mod, lang_name.capitalize())
nlp = cls()
return nlp
| 39.112676 | 128 | 0.649622 | 327 | 2,777 | 5.351682 | 0.370031 | 0.056 | 0.031429 | 0.051429 | 0.110857 | 0.110857 | 0.083429 | 0 | 0 | 0 | 0 | 0.006467 | 0.220382 | 2,777 | 70 | 129 | 39.671429 | 0.801848 | 0.080663 | 0 | 0.122807 | 0 | 0 | 0.132364 | 0.024745 | 0 | 0 | 0 | 0 | 0 | 1 | 0.017544 | false | 0 | 0.192982 | 0 | 0.298246 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f916bc28e5684a59f27cde56fa82ff78daa7b8b0 | 2,797 | py | Python | tests/test_options.py | mje-nz/pythontexfigures | 60fdaaf0ec0b5d7176b2f36f7b7dd41ff3af0d84 | [
"BSD-3-Clause"
] | 4 | 2020-08-29T21:51:51.000Z | 2021-07-03T08:52:13.000Z | tests/test_options.py | mje-nz/pythontexfigures | 60fdaaf0ec0b5d7176b2f36f7b7dd41ff3af0d84 | [
"BSD-3-Clause"
] | null | null | null | tests/test_options.py | mje-nz/pythontexfigures | 60fdaaf0ec0b5d7176b2f36f7b7dd41ff3af0d84 | [
"BSD-3-Clause"
] | null | null | null | """Tests for pythontexfigures LaTeX package options."""
from pathlib import Path
import pytest
from util import build # noqa: I900
DOCUMENT_TEMPLATE = r"""
\documentclass{article}
%(pre)s
\usepackage{pgf}
\usepackage{pythontex}
\usepackage[%(options)s]{pythontexfigures}
%(post)s
\begin{document}
%(body)s
\end{document}
"""
BODY = r"\pyfig{test.py}"
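# minimal figure script: \pyfig is expected to import it and call its main() during the build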
SCRIPT = """
import matplotlib
def main():
open("result.txt", "w").write(str(matplotlib.rcParams["font.size"]))
"""
def document(options="", pre="", post="", body=BODY):
"""Fill in LaTeX document template."""
return DOCUMENT_TEMPLATE % dict(options=options, pre=pre, post=post, body=body)
def test_build_default(in_temp_dir):
"""Test building a simple document with a simple figure using default options."""
Path("main.tex").write_text(document())
Path("test.py").write_text(SCRIPT)
build("main.tex")
def test_missing_figure(in_temp_dir):
"""Test build fails if a figure script is missing."""
Path("main.tex").write_text(document())
with pytest.raises(AssertionError):
build("main.tex")
def test_build_in_subfolder(in_temp_dir):
"""Test keeping scripts in a subfolder."""
Path("main.tex").write_text(document(post=r"\pythontexfigurespath{scripts}"))
Path("scripts").mkdir()
Path("scripts/test.py").write_text(SCRIPT)
build("main.tex")
@pytest.mark.parametrize(
"name,expected",
(
("normalsize", 10),
("small", 9),
("footnotesize", 8),
("scriptsize", 7),
("6", 6),
),
)
def test_font_size(in_temp_dir, name: str, expected: float):
"""Test building with different font sizes."""
Path("main.tex").write_text(document(options="fontsize=" + name))
Path("test.py").write_text(SCRIPT)
build("main.tex")
assert float(Path("result.txt").read_text()) == expected
def test_relative(in_temp_dir):
"""Test building with the relative option."""
Path("main.tex").write_text(
document(
options="relative",
pre=r"\usepackage[abspath]{currfile}",
body=r"\include{tex/body.tex}",
)
)
Path("tex").mkdir()
Path("tex/body.tex").write_text(BODY)
Path("tex/test.py").write_text(SCRIPT)
build("main.tex")
def test_relative_subfolder(in_temp_dir):
"""Test building with the relative option and scripts in a subfolder."""
Path("main.tex").write_text(
document(
options="relative",
pre=r"\usepackage[abspath]{currfile}",
post=r"\pythontexfigurespath{scripts}",
body=r"\include{tex/body.tex}",
)
)
Path("tex/scripts").mkdir(parents=True)
Path("tex/body.tex").write_text(BODY)
Path("tex/scripts/test.py").write_text(SCRIPT)
build("main.tex")
| 26.638095 | 85 | 0.643904 | 359 | 2,797 | 4.905292 | 0.275766 | 0.06644 | 0.054514 | 0.054514 | 0.444634 | 0.404316 | 0.372516 | 0.352641 | 0.319705 | 0.164679 | 0 | 0.004392 | 0.185913 | 2,797 | 104 | 86 | 26.894231 | 0.768994 | 0.144083 | 0 | 0.32 | 0 | 0.013333 | 0.313217 | 0.127497 | 0 | 0 | 0 | 0 | 0.026667 | 1 | 0.093333 | false | 0 | 0.053333 | 0 | 0.16 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f91a9b5d30b6f6b627352832df3808c0d8ef429a | 3,030 | py | Python | manager/cached_templates/templates/dash.html.py | jordancarlson08/MyStuff | 4f4f6fdd298ce00e4a1f8a4621aaf94c0ccdb773 | [
"Apache-2.0"
] | null | null | null | manager/cached_templates/templates/dash.html.py | jordancarlson08/MyStuff | 4f4f6fdd298ce00e4a1f8a4621aaf94c0ccdb773 | [
"Apache-2.0"
] | null | null | null | manager/cached_templates/templates/dash.html.py | jordancarlson08/MyStuff | 4f4f6fdd298ce00e4a1f8a4621aaf94c0ccdb773 | [
"Apache-2.0"
] | null | null | null | # -*- coding:ascii -*-
from mako import runtime, filters, cache
UNDEFINED = runtime.UNDEFINED
__M_dict_builtin = dict
__M_locals_builtin = locals
_magic_number = 9
_modified_time = 1397084402.354378
_enable_loop = True
_template_filename = 'C:\\Users\\Jordan Carlson\\Desktop\\MyStuff\\manager\\templates/dash.html'
_template_uri = 'dash.html'
_source_encoding = 'ascii'
import os, os.path, re
_exports = ['content']
def _mako_get_namespace(context, name):
try:
return context.namespaces[(__name__, name)]
except KeyError:
_mako_generate_namespaces(context)
return context.namespaces[(__name__, name)]
def _mako_generate_namespaces(context):
pass
def _mako_inherit(template, context):
_mako_generate_namespaces(context)
return runtime._inherit_from(context, 'base.htm', _template_uri)
def render_body(context,**pageargs):
__M_caller = context.caller_stack._push_frame()
try:
__M_locals = __M_dict_builtin(pageargs=pageargs)
request = context.get('request', UNDEFINED)
def content():
return render_content(context._locals(__M_locals))
__M_writer = context.writer()
# SOURCE LINE 1
__M_writer('<!--## This is the base page for both the dashboards. Sprouting off of this one will be a manager and an admin page with minute\r\n')
# SOURCE LINE 3
__M_writer('\r\n')
# SOURCE LINE 6
__M_writer(' \r\n')
# SOURCE LINE 8
__M_writer(' ')
__M_writer('\r\n\r\n')
if 'parent' not in context._data or not hasattr(context._data['parent'], 'content'):
context['self'].content(**pageargs)
# SOURCE LINE 23
__M_writer('<!--ends content-->\r\n')
return ''
finally:
context.caller_stack._pop_frame()
def render_content(context,**pageargs):
__M_caller = context.caller_stack._push_frame()
try:
request = context.get('request', UNDEFINED)
def content():
return render_content(context)
__M_writer = context.writer()
# SOURCE LINE 10
__M_writer(' ')
__M_writer('\r\n <h2>Welcome back, ')
# SOURCE LINE 11
__M_writer(str(request.user.first_name))
__M_writer(' ')
__M_writer(str(request.user.last_name))
__M_writer("!</h2></br>\r\n <p>Use the left-side navigation bar to view connecting pages and options by clicking on each section heading. To view your own account information, log out, or return to your dashboard, use the dropdown menu in the upper right hand corner.</p>\r\n\r\n <div class='vertical_spacer6'></div>\r\n <div class='vertical_spacer6'></div>\r\n <div class='vertical_spacer6'></div>\r\n <div class='vertical_spacer6'></div>\r\n <div class='vertical_spacer6'></div>\r\n <div class='vertical_spacer6'></div>\r\n <div class='vertical_spacer6'></div>\r\n <div class='vertical_spacer6'></div>\r\n\r\n")
return ''
finally:
context.caller_stack._pop_frame()
| 38.35443 | 630 | 0.663366 | 406 | 3,030 | 4.62069 | 0.366995 | 0.020256 | 0.021322 | 0.042644 | 0.452026 | 0.362473 | 0.293177 | 0.293177 | 0.293177 | 0.249467 | 0 | 0.015455 | 0.209901 | 3,030 | 78 | 631 | 38.846154 | 0.76817 | 0.039934 | 0 | 0.448276 | 0 | 0.034483 | 0.326087 | 0.116977 | 0 | 0 | 0 | 0 | 0 | 1 | 0.12069 | false | 0.017241 | 0.034483 | 0.034483 | 0.275862 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f92515e6b724a9961231bbc88b0302170c10e53f | 1,332 | py | Python | paprika/actions/files/Pipe.py | thunder-/paprika | af262407ec9c195dbb5a7c205510e6ad2fb65f36 | [
"MIT"
] | null | null | null | paprika/actions/files/Pipe.py | thunder-/paprika | af262407ec9c195dbb5a7c205510e6ad2fb65f36 | [
"MIT"
] | null | null | null | paprika/actions/files/Pipe.py | thunder-/paprika | af262407ec9c195dbb5a7c205510e6ad2fb65f36 | [
"MIT"
] | null | null | null | from paprika.repositories.FileRepository import FileRepository
from paprika.repositories.ProcessPropertyRepository import ProcessPropertyRepository
from paprika.repositories.ProcessRepository import ProcessRepository
from paprika.system.logger.Logger import Logger
from paprika.actions.Actionable import Actionable
class Pipe(Actionable):
def __init__(self):
Actionable.__init__(self)
def execute(self, connector, process_action):
job_name = process_action['job_name']
logger = Logger(connector, self)
file_repository = FileRepository(connector)
process_repository = ProcessRepository(connector)
process_property_repository = ProcessPropertyRepository(connector)
# retrieve the file properties
process = process_repository.find_by_id(process_action['pcs_id'])
file_id = process_property_repository.get_property(process, 'file_id')
file = file_repository.find_by_id(file_id)
filename = file['filename']
locked = file_repository.locked(file)
if locked:
logger.info(job_name, 'file: ' + filename + " locked ")
return process_action
else:
logger.info(job_name, 'file: ' + filename + " not locked ")
logger.info(job_name, filename + " state: " + file['state'])
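            # NOTE: the unlocked branch falls through, so execute() implicitly returns None here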
| 34.153846 | 84 | 0.712462 | 138 | 1,332 | 6.623188 | 0.297101 | 0.060175 | 0.075492 | 0.055799 | 0.095186 | 0.063457 | 0 | 0 | 0 | 0 | 0 | 0 | 0.205706 | 1,332 | 38 | 85 | 35.052632 | 0.863894 | 0.021021 | 0 | 0 | 0 | 0 | 0.057011 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.08 | false | 0 | 0.2 | 0 | 0.36 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f9275e748370b9802c302be3f62b6834e789955f | 4,113 | py | Python | aptl3/procrustes/generalized.py | matteoterruzzi/aptl3 | 680ab58ffa79d0eee293729d36f677a588350519 | [
"MIT"
] | null | null | null | aptl3/procrustes/generalized.py | matteoterruzzi/aptl3 | 680ab58ffa79d0eee293729d36f677a588350519 | [
"MIT"
] | null | null | null | aptl3/procrustes/generalized.py | matteoterruzzi/aptl3 | 680ab58ffa79d0eee293729d36f677a588350519 | [
"MIT"
] | null | null | null | from typing import Dict, Optional
import logging
import numpy as np
from .orthogonal import OrthogonalProcrustesModel
class GeneralizedProcrustesAnalysis:
"""https://en.wikipedia.org/wiki/Generalized_Procrustes_analysis"""
def __init__(self, *, max_iter: int = 10):
self.dim: Optional[int] = None
self.procrustes_distance: Optional[float] = None
self.n_samples: Optional[int] = None
self.orthogonal_models: Dict[int, OrthogonalProcrustesModel] = dict() # map embedding_id --> model
self.max_iter: int = max_iter
assert max_iter >= 2
def fit(self, aligned_embedding_samples: Dict[int, np.ndarray]) -> None:
assert self.max_iter >= 2 # otherwise one of the models will be left undefined
assert len(set(samples.shape[0] for samples in aligned_embedding_samples.values())) == 1, "alignment error"
dims: Dict[int, int] = {e: samples.shape[1] for e, samples in aligned_embedding_samples.items()}
# take max dim manifold as reference
reference_embedding_id: Optional[int] = max(dims.keys(), key=dims.get) # this will become None after iter 0
reference: np.ndarray = aligned_embedding_samples[reference_embedding_id]
dim: int = reference.shape[1]
# superimpose all other instances to current reference shape
models: Dict[int, OrthogonalProcrustesModel] = dict()
procrustes_distance: Optional[float] = None
i: int = -1
for i in range(self.max_iter):
mean = np.zeros_like(reference)
for embedding_id, src_dim in dims.items():
x = aligned_embedding_samples[embedding_id]
if embedding_id == reference_embedding_id:
logging.debug(f'Using embedding #{embedding_id} as reference with {dim=} ...')
# R would be the identity matrix
mean += x
else:
logging.debug(f'Fitting orthogonal procrustes for embedding #{embedding_id} {src_dim=} ...')
model = OrthogonalProcrustesModel(src_dim, dim)
models[embedding_id] = model
model.fit(x, reference)
logging.debug(f'Fitted with {model.scale/reference.shape[0]=:.2%} ...')
y = model.predict(x)
mean += y
# compute the mean shape of the current set of superimposed shapes
mean /= len(aligned_embedding_samples)
old_procrustes_distance = procrustes_distance
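            # Procrustes distance: RMS distance between the new mean shape and the previous reference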
procrustes_distance = np.linalg.norm(mean - reference) / np.sqrt(mean.shape[0])
# take as new reference the average shape along the axis of the manifolds and repeat until convergence
reference_embedding_id = None
reference = mean
logging.debug(f'Done GPA iteration #{i} ({procrustes_distance=:.2%}) ...')
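            # stop early once the relative improvement in Procrustes distance drops below 1%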
if old_procrustes_distance is not None and procrustes_distance / old_procrustes_distance >= .99:
break
assert procrustes_distance is not None
logging.debug(f'GPA fitted ({procrustes_distance=:.2%} '
f'{"reached max_iter" if i >= self.max_iter-1 else "converged"}).')
# store results only at end, to avoid inconsistent state
self.dim = dim
self.procrustes_distance = float(procrustes_distance)
self.n_samples = reference.shape[0]
self.orthogonal_models = models
def predict(self, src_embedding_id: int, dest_embedding_id: int, x: np.ndarray):
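        # map x into the shared reference space via the source model, then back out through the destination model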
a = self.orthogonal_models[src_embedding_id]
assert a.src_dim <= self.dim
b = self.orthogonal_models[dest_embedding_id]
assert b.src_dim <= self.dim
assert a.dest_dim == b.dest_dim == self.dim
assert x.shape[1] == a.src_dim
y = a.transform(x)
assert y.shape[0] == x.shape[0]
assert y.shape[1] == a.dest_dim == b.dest_dim == self.dim
z = b.inverse_transform(y)
assert z.shape[0] == x.shape[0]
assert z.shape[1] == b.src_dim
return z
| 47.275862 | 116 | 0.628738 | 515 | 4,113 | 4.858252 | 0.275728 | 0.065947 | 0.055156 | 0.011191 | 0.142286 | 0.033573 | 0.018385 | 0.018385 | 0 | 0 | 0 | 0.008722 | 0.275225 | 4,113 | 86 | 117 | 47.825581 | 0.830594 | 0.126672 | 0 | 0 | 0 | 0 | 0.100615 | 0.025713 | 0 | 0 | 0 | 0 | 0.181818 | 1 | 0.045455 | false | 0 | 0.060606 | 0 | 0.136364 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f92915a4474c636fb23ee209bc0a8b0878710ea4 | 7,538 | py | Python | analysis/general/analyze_table.py | cmu-db/cmdbac | 1f981e6f110728e51ba4ffdb90ff2d4ce091057a | [
"Apache-2.0"
] | 31 | 2016-04-07T04:54:29.000Z | 2021-11-30T02:30:57.000Z | analysis/general/analyze_table.py | cmu-db/db-webcrawler | 1f981e6f110728e51ba4ffdb90ff2d4ce091057a | [
"Apache-2.0"
] | 22 | 2015-12-19T14:49:18.000Z | 2021-09-07T23:48:24.000Z | analysis/general/analyze_table.py | cmu-db/dbac | 1f981e6f110728e51ba4ffdb90ff2d4ce091057a | [
"Apache-2.0"
] | 7 | 2016-05-13T01:02:01.000Z | 2019-10-06T16:52:54.000Z | #!/usr/bin/env python
import os, sys
sys.path.append(os.path.join(os.path.dirname(__file__), os.pardir))
sys.path.append(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir))
sys.path.append(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir, "core"))
import re
import csv
from utils import dump_all_stats, filter_repository
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "cmudbac.settings")
import django
django.setup()
from library.models import *
TABLES_DIRECTORY = 'tables'
def table_stats(directory = '.'):
stats = {}
for repo in Repository.objects.exclude(latest_successful_attempt = None):
if filter_repository(repo):
continue
statistics = Statistic.objects.filter(attempt = repo.latest_successful_attempt)
if len(statistics) == 0:
continue
for s in statistics:
if s.description == 'num_transactions':
continue
if s.description not in stats:
stats[s.description] = {}
project_type_name = repo.project_type.name
if project_type_name not in stats[s.description]:
stats[s.description][project_type_name] = []
stats[s.description][project_type_name].append(s.count)
dump_all_stats(directory, stats)
def column_stats(directory = '.'):
stats = {'column_nullable': {}, 'column_type': {}, 'column_extra': {}, 'column_num': {}}
for repo in Repository.objects.exclude(latest_successful_attempt = None):
if filter_repository(repo):
continue
column_informations = Information.objects.filter(attempt = repo.latest_successful_attempt).filter(name = 'columns')
constraint_informations = Information.objects.filter(attempt = repo.latest_successful_attempt).filter(name = 'constraints')
num_table_statistics = Statistic.objects.filter(attempt = repo.latest_successful_attempt).filter(description = 'num_tables')
if len(column_informations) > 0 and len(constraint_informations) > 0 and len(num_table_statistics) > 0:
column_information = column_informations[0]
constraint_information = constraint_informations[0]
num_tables = num_table_statistics[0].count
project_type_name = repo.project_type.name
if project_type_name not in stats['column_nullable']:
stats['column_nullable'][project_type_name] = {}
if project_type_name not in stats['column_type']:
stats['column_type'][project_type_name] = {}
if project_type_name not in stats['column_extra']:
stats['column_extra'][project_type_name] = {}
if project_type_name not in stats['column_num']:
stats['column_num'][project_type_name] = []
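            # each Information.description holds stringified row tuples; the regex captures one "(...)" group per row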
if repo.latest_successful_attempt.database.name == 'PostgreSQL':
regex = '(\(.*?\))[,\]]'
elif repo.latest_successful_attempt.database.name == 'MySQL':
regex = '(\(.*?\))[,\)]'
table_stats = {'column_nullable': {}, 'column_type': {}, 'column_extra': {}, 'column_num': {}}
for column in re.findall(regex, column_information.description):
cells = column.split(',')
table = str(cells[2]).replace("'", "").strip()
nullable = str(cells[6]).replace("'", "").strip()
if table not in table_stats['column_nullable']:
table_stats['column_nullable'][table] = {}
table_stats['column_nullable'][table][nullable] = table_stats['column_nullable'][table].get(nullable, 0) + 1
_type = str(cells[7]).replace("'", "").strip()
if table not in table_stats['column_type']:
table_stats['column_type'][table] = {}
table_stats['column_type'][table][_type] = table_stats['column_type'][table].get(_type, 0) + 1
extra = str(cells[16]).replace("'", "").strip()
if extra:
if table not in table_stats['column_extra']:
table_stats['column_extra'][table] = {}
table_stats['column_extra'][table][extra] = table_stats['column_extra'][table].get(extra, 0) + 1
if table not in table_stats['column_num']:
table_stats['column_num'][table] = 0
table_stats['column_num'][table] += 1
for column in re.findall(regex, constraint_information.description):
cells = column.split(',')
if repo.latest_successful_attempt.database.name == 'PostgreSQL':
constraint_type = str(cells[6]).replace("'", "").strip()
elif repo.latest_successful_attempt.database.name == 'MySQL':
constraint_type = str(cells[5])[:-1].replace("'", "").strip()
if repo.latest_successful_attempt.database.name == 'PostgreSQL':
table = str(cells[5]).replace("'", "").strip()
elif repo.latest_successful_attempt.database.name == 'MySQL':
table = str(cells[4]).replace("'", "").strip()
if table not in table_stats['column_extra']:
table_stats['column_extra'][table] = {}
table_stats['column_extra'][table][constraint_type] = table_stats['column_extra'][table].get(constraint_type, 0) + 1
for stats_type in table_stats:
for table in table_stats[stats_type]:
if isinstance(table_stats[stats_type][table], dict):
for second_type in table_stats[stats_type][table]:
if second_type not in stats[stats_type][project_type_name]:
stats[stats_type][project_type_name][second_type] = []
stats[stats_type][project_type_name][second_type].append(table_stats[stats_type][table][second_type])
else:
stats[stats_type][project_type_name].append(table_stats[stats_type][table])
dump_all_stats(directory, stats)
def index_stats(directory = TABLES_DIRECTORY):
stats = {'index_type': {}}
for repo in Repository.objects.exclude(latest_successful_attempt = None):
if filter_repository(repo):
continue
index_informations = Information.objects.filter(attempt = repo.latest_successful_attempt).filter(name = 'indexes')
if len(index_informations) > 0:
index_information = index_informations[0]
project_type_name = repo.project_type.name
if project_type_name not in stats['index_type']:
stats['index_type'][project_type_name] = {}
if repo.latest_successful_attempt.database.name == 'PostgreSQL':
regex = '(\(.*?\))[,\]]'
elif repo.latest_successful_attempt.database.name == 'MySQL':
regex = '(\(.*?\))[,\)]'
for column in re.findall(regex, index_information.description):
cells = column.split(',')
_type = cells[13].replace("'", "").strip()
stats['index_type'][project_type_name][_type] = stats['index_type'][project_type_name].get(_type, 0) + 1
dump_all_stats(directory, stats)
def main():
# active
table_stats(TABLES_DIRECTORY)
column_stats(TABLES_DIRECTORY)
index_stats(TABLES_DIRECTORY)
# working
# deprecated
if __name__ == '__main__':
main()
| 45.963415 | 132 | 0.608782 | 838 | 7,538 | 5.190931 | 0.118138 | 0.073333 | 0.086207 | 0.08069 | 0.695402 | 0.623678 | 0.489885 | 0.471724 | 0.428966 | 0.377471 | 0 | 0.005929 | 0.261608 | 7,538 | 163 | 133 | 46.245399 | 0.775602 | 0.006102 | 0 | 0.290323 | 0 | 0 | 0.094685 | 0.002938 | 0 | 0 | 0 | 0 | 0 | 1 | 0.032258 | false | 0 | 0.048387 | 0 | 0.080645 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f92c4cb10c3058dd32a39d35e1301e33c83939d1 | 5,107 | py | Python | examples/ex_4shapes.py | cristobalfuenzalida/grafica | cf7bb90c4c5c34ee56d328188111c917a0d10389 | [
"MIT"
] | null | null | null | examples/ex_4shapes.py | cristobalfuenzalida/grafica | cf7bb90c4c5c34ee56d328188111c917a0d10389 | [
"MIT"
] | null | null | null | examples/ex_4shapes.py | cristobalfuenzalida/grafica | cf7bb90c4c5c34ee56d328188111c917a0d10389 | [
"MIT"
] | null | null | null | # coding=utf-8
"""Drawing 4 shapes with different transformations"""
import glfw
from OpenGL.GL import *
import OpenGL.GL.shaders
import numpy as np
import sys
import os.path
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
import grafica.basic_shapes as bs
import grafica.easy_shaders as es
import grafica.transformations as tr
import grafica.performance_monitor as pm
__author__ = "Daniel Calderon"
__license__ = "MIT"
# We will use 32 bits data, so an integer has 4 bytes
# 1 byte = 8 bits
SIZE_IN_BYTES = 4
# A class to store the application control
class Controller:
def __init__(self):
self.fillPolygon = True
# we will use the global controller as communication with the callback function
controller = Controller()
# This function will be executed whenever a key is pressed or released
def on_key(window, key, scancode, action, mods):
if action != glfw.PRESS:
return
global controller
if key == glfw.KEY_SPACE:
controller.fillPolygon = not controller.fillPolygon
elif key == glfw.KEY_ESCAPE:
glfw.set_window_should_close(window, True)
else:
print('Unknown key')
if __name__ == "__main__":
# Initialize glfw
    if not glfw.init():
        sys.exit(1)
# Creating a glfw window
width = 600
height = 600
title = "Displaying multiple shapes - Modern OpenGL"
window = glfw.create_window(width, height, title, None, None)
    if not window:
        glfw.terminate()
        sys.exit(1)
glfw.make_context_current(window)
# Connecting the callback function 'on_key' to handle keyboard events
glfw.set_key_callback(window, on_key)
# Binding artificial vertex array object for validation
VAO = glGenVertexArrays(1)
glBindVertexArray(VAO)
# Creating our shader program and telling OpenGL to use it
pipeline = es.SimpleTransformShaderProgram()
glUseProgram(pipeline.shaderProgram)
# Setting up the clear screen color
glClearColor(0.15, 0.15, 0.15, 1.0)
# Creating shapes on GPU memory
shapeTriangle = bs.createRainbowTriangle()
gpuTriangle = es.GPUShape().initBuffers()
pipeline.setupVAO(gpuTriangle)
gpuTriangle.fillBuffers(shapeTriangle.vertices, shapeTriangle.indices, GL_STATIC_DRAW)
shapeQuad = bs.createRainbowQuad()
gpuQuad = es.GPUShape().initBuffers()
pipeline.setupVAO(gpuQuad)
gpuQuad.fillBuffers(shapeQuad.vertices, shapeQuad.indices, GL_STATIC_DRAW)
perfMonitor = pm.PerformanceMonitor(glfw.get_time(), 0.5)
# glfw will swap buffers as soon as possible
glfw.swap_interval(0)
# Application loop
while not glfw.window_should_close(window):
# Measuring performance
perfMonitor.update(glfw.get_time())
glfw.set_window_title(window, title + str(perfMonitor))
# Using GLFW to check for input events
glfw.poll_events()
# Filling or not the shapes depending on the controller state
if (controller.fillPolygon):
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL)
else:
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE)
# Clearing the screen
glClear(GL_COLOR_BUFFER_BIT)
# Using the time as the theta parameter
theta = glfw.get_time()
# Triangle
triangleTransform = tr.matmul([
tr.translate(0.5, 0.5, 0),
tr.rotationZ(2 * theta),
tr.uniformScale(0.5)
])
# Updating the transform attribute
glUniformMatrix4fv(glGetUniformLocation(pipeline.shaderProgram, "transform"), 1, GL_TRUE, triangleTransform)
# Drawing function
pipeline.drawCall(gpuTriangle)
# Another instance of the triangle
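        # (its scale oscillates with cos/sin of theta, giving a pulsing effect)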
triangleTransform2 = tr.matmul([
tr.translate(-0.5, 0.5, 0),
tr.scale(
0.5 + 0.2 * np.cos(1.5 * theta),
0.5 + 0.2 * np.sin(2 * theta),
0)
])
glUniformMatrix4fv(glGetUniformLocation(pipeline.shaderProgram, "transform"), 1, GL_TRUE, triangleTransform2)
pipeline.drawCall(gpuTriangle)
# Quad
quadTransform = tr.matmul([
tr.translate(-0.5, -0.5, 0),
tr.rotationZ(-theta),
tr.uniformScale(0.7)
])
glUniformMatrix4fv(glGetUniformLocation(pipeline.shaderProgram, "transform"), 1, GL_TRUE, quadTransform)
pipeline.drawCall(gpuQuad)
# Another instance of the Quad
quadTransform2 = tr.matmul([
tr.translate(0.5, -0.5, 0),
tr.shearing(0.3 * np.cos(theta), 0, 0, 0, 0, 0),
tr.uniformScale(0.7)
])
glUniformMatrix4fv(glGetUniformLocation(pipeline.shaderProgram, "transform"), 1, GL_TRUE, quadTransform2)
pipeline.drawCall(gpuQuad)
# Once the drawing is rendered, buffers are swap so an uncomplete drawing is never seen.
glfw.swap_buffers(window)
# freeing GPU memory
gpuTriangle.clear()
gpuQuad.clear()
glfw.terminate() | 29.865497 | 117 | 0.667319 | 616 | 5,107 | 5.407468 | 0.371753 | 0.007205 | 0.009006 | 0.027619 | 0.219754 | 0.184329 | 0.166917 | 0.136295 | 0.091264 | 0.091264 | 0 | 0.022527 | 0.243783 | 5,107 | 171 | 118 | 29.865497 | 0.839979 | 0.209125 | 0 | 0.171717 | 0 | 0 | 0.0287 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.020202 | false | 0 | 0.10101 | 0 | 0.141414 | 0.010101 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f92c752c2d9a2ddf52f10a14162916f713687399 | 6,240 | py | Python | bika/lims/adapters/widgetvisibility.py | hocinebendou/bika.gsoc | 85bc0c587de7f52073ae0e89bddbc77bf875f295 | [
"MIT"
] | null | null | null | bika/lims/adapters/widgetvisibility.py | hocinebendou/bika.gsoc | 85bc0c587de7f52073ae0e89bddbc77bf875f295 | [
"MIT"
] | null | null | null | bika/lims/adapters/widgetvisibility.py | hocinebendou/bika.gsoc | 85bc0c587de7f52073ae0e89bddbc77bf875f295 | [
"MIT"
] | null | null | null | # -*- coding:utf-8 -*-
from bika.lims.interfaces import IAnalysisRequestsFolder, IBatch, IClient
from bika.lims.interfaces import IATWidgetVisibility
from bika.lims.utils import getHiddenAttributesForClass
from Products.CMFCore.utils import getToolByName
from Products.CMFCore.WorkflowCore import WorkflowException
from types import DictType
from zope.interface import implements
_marker = []
class WorkflowAwareWidgetVisibility(object):
"""This adapter allows the schema definition to have different widget visibility
settings for different workflow states in the primary review_state workflow.
With this it is possible to write:
    StringField(
        'fieldName',
        widget=StringWidget(
            label=_('field Name'),
            visible={
                'edit': 'visible',  # regular AT uses these and they override
                'view': 'visible',  # everything; without 'edit' you cannot edit
                'wf_state': {'edit': 'invisible', 'view': 'visible'},
                'other_state': {'edit': 'visible', 'view': 'invisible'},
            },
        ),
    )
The rules about defaults, "hidden", "visible" and "invisible" are the same
as those from the default Products.Archetypes.Widget.TypesWidget#isVisible
"""
implements(IATWidgetVisibility)
def __init__(self, context):
self.context = context
self.sort = 100
def __call__(self, context, mode, field, default):
"""
"""
state = default if default else 'visible'
workflow = getToolByName(self.context, 'portal_workflow')
try:
review_state = workflow.getInfoFor(self.context, 'review_state')
except WorkflowException:
return state
vis_dic = field.widget.visible
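        # widget.visible may map review states to per-mode visibility dicts (see the class docstring)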
if type(vis_dic) is not DictType or review_state not in vis_dic:
return state
inner_vis_dic = vis_dic.get(review_state, state)
        if type(inner_vis_dic) is DictType:
            state = inner_vis_dic.get(mode, state)
elif not inner_vis_dic:
state = 'invisible'
elif inner_vis_dic < 0:
state = 'hidden'
return state
class SamplingWorkflowWidgetVisibility(object):
"""This will force the 'Sampler' and 'DateSampled' widget default to 'visible'.
We must check the attribute saved on the sample, not the bika_setup value.
"""
implements(IATWidgetVisibility)
def __init__(self, context):
self.context = context
self.sort = 100
def __call__(self, context, mode, field, default):
sw_fields = ['Sampler', 'DateSampled']
state = default if default else 'invisible'
fieldName = field.getName()
if fieldName in sw_fields \
and hasattr(self.context, 'getSamplingWorkflowEnabled') \
and self.context.getSamplingWorkflowEnabled():
if mode == 'header_table':
state = 'prominent'
elif mode == 'view':
state = 'visible'
return state
class ClientFieldWidgetVisibility(object):
"""The Client field is editable by default in ar_add. This adapter
will force the Client field to be hidden when it should not be set
by the user.
"""
implements(IATWidgetVisibility)
def __init__(self, context):
self.context = context
self.sort = 10
def __call__(self, context, mode, field, default):
state = default if default else 'hidden'
fieldName = field.getName()
if fieldName != 'Client':
return state
parent = self.context.aq_parent
if IBatch.providedBy(parent):
if parent.getClient():
return 'hidden'
if IClient.providedBy(parent):
return 'hidden'
return state
class BatchARAdd_BatchFieldWidgetVisibility(object):
"""This will force the 'Batch' field to 'hidden' in ar_add when the parent
context is a Batch.
"""
implements(IATWidgetVisibility)
def __init__(self, context):
self.context = context
self.sort = 10
def __call__(self, context, mode, field, default):
state = default if default else 'visible'
fieldName = field.getName()
if fieldName == 'Batch' and context.aq_parent.portal_type == 'Batch':
return 'hidden'
return state
class OptionalFieldsWidgetVisibility(object):
"""Remove 'hidden attributes' (fields in registry bika.lims.hiddenattributes).
fieldName = field.getName()
"""
implements(IATWidgetVisibility)
def __init__(self, context):
self.context = context
self.sort = 5
def __call__(self, context, mode, field, default):
state = default if default else 'visible'
hiddenattributes = getHiddenAttributesForClass(context.portal_type)
if field.getName() in hiddenattributes:
state = "hidden"
return state
class HideARPriceFields(object):
"""Hide related fields in ARs when ShowPrices is disabled
"""
implements(IATWidgetVisibility)
def __init__(self, context):
self.context = context
self.sort = 3
def __call__(self, context, mode, field, default):
fields = ['InvoiceExclude']
ShowPrices = context.bika_setup.getShowPrices()
state = default if default else 'invisible'
fieldName = field.getName()
if fieldName in fields and not ShowPrices:
state = 'invisible'
return state
class HideClientDiscountFields(object):
"""Hide related fields in ARs when ShowPrices is disabled
"""
implements(IATWidgetVisibility)
def __init__(self, context):
self.context = context
self.sort = 3
def __call__(self, context, mode, field, default):
fields = ['BulkDiscount', 'MemberDiscountApplies']
ShowPrices = context.bika_setup.getShowPrices()
state = default if default else 'invisible'
fieldName = field.getName()
if fieldName in fields and not ShowPrices:
state = 'invisible'
return state
| 33.548387 | 84 | 0.63766 | 658 | 6,240 | 5.901216 | 0.259878 | 0.073654 | 0.057687 | 0.064898 | 0.424414 | 0.35385 | 0.35385 | 0.35385 | 0.35385 | 0.35385 | 0 | 0.00333 | 0.278205 | 6,240 | 185 | 85 | 33.72973 | 0.858792 | 0.229327 | 0 | 0.577586 | 0 | 0 | 0.059417 | 0.010082 | 0 | 0 | 0 | 0 | 0 | 1 | 0.12069 | false | 0 | 0.060345 | 0 | 0.353448 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
f92cdb722f985bc83c1789147a169d25558c6579 | 1,534 | py | Python | leetcode/python/706_design_hashmap.py | VVKot/leetcode-solutions | 7d6e599b223d89a7861929190be715d3b3604fa4 | [
"MIT"
] | 4 | 2019-04-22T11:57:36.000Z | 2019-10-29T09:12:56.000Z | leetcode/python/706_design_hashmap.py | VVKot/coding-competitions | 7d6e599b223d89a7861929190be715d3b3604fa4 | [
"MIT"
] | null | null | null | leetcode/python/706_design_hashmap.py | VVKot/coding-competitions | 7d6e599b223d89a7861929190be715d3b3604fa4 | [
"MIT"
] | null | null | null | class ListNode:
def __init__(self, key: int, val: int):
self.pair = (key, val)
self.next = None
class MyHashMap:
def __init__(self):
self.size = 1000
self.store = [None] * self.size
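        # fixed bucket count; collisions are handled by chaining ListNodes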
def _get_hash(self, key):
return key % self.size
def put(self, key: int, value: int) -> None:
hash = self._get_hash(key)
if self.store[hash] is None:
self.store[hash] = ListNode(key, value)
else:
curr = self.store[hash]
while True:
if curr.pair[0] == key:
curr.pair = (key, value)
return
if curr.next is None:
break
curr = curr.next
curr.next = ListNode(key, value)
def get(self, key: int) -> int:
hash = self._get_hash(key)
curr = self.store[hash]
while curr:
if curr.pair[0] == key:
return curr.pair[1]
else:
curr = curr.next
return -1
def remove(self, key: int) -> None:
hash = self._get_hash(key)
curr = prev = self.store[hash]
if not curr:
return
if curr.pair[0] == key:
self.store[hash] = curr.next
else:
curr = curr.next
while curr:
if curr.pair[0] == key:
prev.next = curr.next
break
else:
curr, prev = curr.next, prev.next
| 26.912281 | 53 | 0.458931 | 179 | 1,534 | 3.843575 | 0.178771 | 0.093023 | 0.113372 | 0.063953 | 0.268895 | 0.177326 | 0.139535 | 0 | 0 | 0 | 0 | 0.011587 | 0.437419 | 1,534 | 56 | 54 | 27.392857 | 0.785632 | 0 | 0 | 0.458333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0 | 0.020833 | 0.270833 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
005563def602f5a05fa7ab40a2f6437d84a5f513 | 3,777 | py | Python | panoptes_aggregation/extractors/sw_extractor.py | alnah005/aggregation-for-caesar | b2422f4c007857531ac3ff2636b567adb667dd0c | [
"Apache-2.0"
] | 9 | 2018-04-11T13:44:32.000Z | 2022-03-09T16:39:26.000Z | panoptes_aggregation/extractors/sw_extractor.py | alnah005/aggregation-for-caesar | b2422f4c007857531ac3ff2636b567adb667dd0c | [
"Apache-2.0"
] | 217 | 2017-07-27T09:20:15.000Z | 2022-03-21T11:15:33.000Z | panoptes_aggregation/extractors/sw_extractor.py | hughdickinson/aggregation-for-caesar | d6bca0a1126e0397315d5773401c71075c33ee2f | [
"Apache-2.0"
] | 10 | 2018-11-12T21:36:48.000Z | 2022-02-07T11:50:03.000Z | '''
Shakespeares World Text Extractor
---------------------------------
This module provides a function to extract the `text` data from
annotations made on Shakespeares World and AnnoTate.
'''
import bs4
from collections import OrderedDict
import copy
import numpy as np
import html
import warnings
from .extractor_wrapper import extractor_wrapper
warnings.filterwarnings("ignore", category=UserWarning, module='bs4')
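# transcription markup tags that are preserved when stripping stray HTML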
tag_whitelist = [
'sw-ex',
'sl',
'brev-y',
'sw-ins',
'sw-del',
'sw-unclear',
'sw-sup',
'label',
'graphic'
]
def clean_text(s):
'''
Clean text from Shakespeares World and AnnoTate classification to prepare
it for aggregation. Unicode characters, `xml`, and `html` are removed.
Parameters
----------
s : string
A string to be cleaned
Returns
-------
clean_s : string
The string with all unicode, `xml`, and `html` removed
'''
s_out = s.encode('ascii', 'ignore').decode('ascii')
if '<xml>' in s_out:
        # the user copied and pasted in from Microsoft Office
# these classifications are a mess, just strip all tags
soup = bs4.BeautifulSoup(s_out, 'lxml')
s_out = soup.get_text().replace('\n', '')
elif '<' in s_out:
# remove html tags (these should never have been in the text to begin with)
soup = bs4.BeautifulSoup(s_out, 'html.parser')
for match in soup.findAll():
if (match.text.strip() == '') or (match.name not in tag_whitelist):
match.unwrap()
s_out = str(soup)
    # unescape html and replace non-breaking spaces (\xa0) with a normal space
s_out = html.unescape(s_out).replace('\xa0', ' ')
return s_out
@extractor_wrapper(gold_standard=True)
def sw_extractor(classification, gold_standard=False, **kwargs):
'''Extract text annotations from Shakespeares World and AnnoTate.
Parameters
----------
classification : dict
A dictionary containing an `annotations` key that is a list of
panoptes annotations
Returns
-------
extraction : dict
A dictionary with one key for each `frame`. The value for each frame
        is a dict with `text`, a list-of-lists of transcribed words, `points`, a
        dict with the list-of-lists of `x` and `y` positions of each space between words,
        and `slope`, a list of the slopes (in deg) of each line drawn.
        For `points` and `text` there is one inner list for each annotation made
on the frame.
'''
extract = OrderedDict()
blank_frame = OrderedDict([
('points', OrderedDict([('x', []), ('y', [])])),
('text', []),
('slope', []),
('gold_standard', gold_standard)
])
frame = 'frame0'
extract[frame] = copy.deepcopy(blank_frame)
if len(classification['annotations']) > 0:
annotation = classification['annotations'][0]
if isinstance(annotation['value'], list):
for value in annotation['value']:
if ('startPoint' in value) and ('endPoint' in value) and ('text' in value):
x = [value['startPoint']['x'], value['endPoint']['x']]
y = [value['startPoint']['y'], value['endPoint']['y']]
if (None not in x) and (None not in y):
text = [clean_text(value['text'])]
dx = x[-1] - x[0]
dy = y[-1] - y[0]
slope = np.rad2deg(np.arctan2(dy, dx))
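                        # slope of the drawn line in degrees, measured from its start point to its end point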
extract[frame]['text'].append(text)
extract[frame]['points']['x'].append(x)
extract[frame]['points']['y'].append(y)
extract[frame]['slope'].append(slope)
return extract
| 34.651376 | 91 | 0.577707 | 461 | 3,777 | 4.67679 | 0.35141 | 0.018553 | 0.027829 | 0.038961 | 0.051948 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005541 | 0.283294 | 3,777 | 108 | 92 | 34.972222 | 0.790912 | 0.358221 | 0 | 0 | 0 | 0 | 0.114547 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033333 | false | 0 | 0.116667 | 0 | 0.183333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0056d447a08163829490cb5c9d6443f2c725e44f | 15,111 | py | Python | src/wakeexchange/GeneralWindFarmGroups.py | WISDEM/wake-exchange | 1e9bb1266799517afeca0358c3237f9250bacfa4 | [
"Apache-2.0"
] | 2 | 2018-03-27T17:47:04.000Z | 2018-12-27T09:02:27.000Z | src/wakeexchange/GeneralWindFarmGroups.py | WISDEM/wake-exchange | 1e9bb1266799517afeca0358c3237f9250bacfa4 | [
"Apache-2.0"
] | null | null | null | src/wakeexchange/GeneralWindFarmGroups.py | WISDEM/wake-exchange | 1e9bb1266799517afeca0358c3237f9250bacfa4 | [
"Apache-2.0"
] | 4 | 2018-03-15T20:53:04.000Z | 2019-03-21T07:41:18.000Z | import numpy as np
from openmdao.api import Group, IndepVarComp, ParallelGroup, ScipyGMRES, NLGaussSeidel
from openmdao.core.mpi_wrap import MPI
if MPI:
from openmdao.api import PetscKSP
from wakeexchange.floris import floris_wrapper, add_floris_params_IndepVarComps
from wakeexchange.gauss import add_gauss_params_IndepVarComps
from GeneralWindFarmComponents import WindFrame, AdjustCtCpYaw, MUX, WindFarmAEP, DeMUX, \
CPCT_Interpolate_Gradients_Smooth, WindDirectionPower, add_gen_params_IdepVarComps, \
CPCT_Interpolate_Gradients
class RotorSolveGroup(Group):
def __init__(self, nTurbines, direction_id=0, datasize=0, differentiable=True,
use_rotor_components=False, nSamples=0, wake_model=floris_wrapper,
wake_model_options=None):
super(RotorSolveGroup, self).__init__()
if wake_model_options is None:
wake_model_options = {'differentiable': differentiable, 'use_rotor_components': use_rotor_components,
'nSamples': nSamples}
from openmdao.core.mpi_wrap import MPI
# set up iterative solvers
epsilon = 1E-6
if MPI:
self.ln_solver = PetscKSP()
else:
self.ln_solver = ScipyGMRES()
self.nl_solver = NLGaussSeidel()
self.ln_solver.options['atol'] = epsilon
self.add('CtCp', CPCT_Interpolate_Gradients_Smooth(nTurbines, direction_id=direction_id, datasize=datasize),
promotes=['gen_params:*', 'yaw%i' % direction_id,
'wtVelocity%i' % direction_id, 'Cp_out'])
# TODO refactor the model component instance
self.add('floris', wake_model(nTurbines, direction_id=direction_id, wake_model_options=wake_model_options),
promotes=(['model_params:*', 'wind_speed', 'axialInduction',
'turbineXw', 'turbineYw', 'rotorDiameter', 'yaw%i' % direction_id, 'hubHeight',
'wtVelocity%i' % direction_id]
if (nSamples == 0) else
['model_params:*', 'wind_speed', 'axialInduction',
'turbineXw', 'turbineYw', 'rotorDiameter', 'yaw%i' % direction_id, 'hubHeight',
'wtVelocity%i' % direction_id, 'wsPositionX', 'wsPositionY', 'wsPositionZ',
'wsArray%i' % direction_id]))
self.connect('CtCp.Ct_out', 'floris.Ct')
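        # Ct from the rotor model drives the wake model, whose velocities feed back into CtCp; the nonlinear solver iterates this coupling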
class DirectionGroup(Group):
"""
Group containing all necessary components for wind plant calculations
in a single direction
"""
def __init__(self, nTurbines, direction_id=0, use_rotor_components=False, datasize=0,
differentiable=True, add_IdepVarComps=True, params_IdepVar_func=add_floris_params_IndepVarComps,
params_IndepVar_args=None, nSamples=0, wake_model=floris_wrapper, wake_model_options=None, cp_points=1,
cp_curve_spline=None):
super(DirectionGroup, self).__init__()
if add_IdepVarComps:
if params_IdepVar_func is not None:
if (params_IndepVar_args is None) and (wake_model is floris_wrapper):
params_IndepVar_args = {'use_rotor_components': False}
elif params_IndepVar_args is None:
params_IndepVar_args = {}
params_IdepVar_func(self, **params_IndepVar_args)
add_gen_params_IdepVarComps(self, datasize=datasize)
self.add('directionConversion', WindFrame(nTurbines, differentiable=differentiable, nSamples=nSamples),
promotes=['*'])
if use_rotor_components:
self.add('rotorGroup', RotorSolveGroup(nTurbines, direction_id=direction_id,
datasize=datasize, differentiable=differentiable,
nSamples=nSamples, use_rotor_components=use_rotor_components,
wake_model=wake_model, wake_model_options=wake_model_options),
promotes=(['gen_params:*', 'yaw%i' % direction_id, 'wtVelocity%i' % direction_id,
'model_params:*', 'wind_speed', 'axialInduction',
'turbineXw', 'turbineYw', 'rotorDiameter', 'hubHeight']
if (nSamples == 0) else
['gen_params:*', 'yaw%i' % direction_id, 'wtVelocity%i' % direction_id,
'model_params:*', 'wind_speed', 'axialInduction',
'turbineXw', 'turbineYw', 'rotorDiameter', 'hubHeight', 'wsPositionX', 'wsPositionY', 'wsPositionZ',
'wsArray%i' % direction_id]))
else:
self.add('CtCp', AdjustCtCpYaw(nTurbines, direction_id, differentiable),
promotes=['Ct_in', 'Cp_in', 'gen_params:*', 'yaw%i' % direction_id])
self.add('myModel', wake_model(nTurbines, direction_id=direction_id, wake_model_options=wake_model_options),
promotes=(['model_params:*', 'wind_speed', 'axialInduction',
'turbineXw', 'turbineYw', 'rotorDiameter', 'yaw%i' % direction_id, 'hubHeight',
'wtVelocity%i' % direction_id]
if (nSamples == 0) else
['model_params:*', 'wind_speed', 'axialInduction',
'turbineXw', 'turbineYw', 'rotorDiameter', 'yaw%i' % direction_id, 'hubHeight',
'wtVelocity%i' % direction_id, 'wsPositionXw', 'wsPositionYw', 'wsPositionZ',
'wsArray%i' % direction_id]))
self.add('powerComp', WindDirectionPower(nTurbines=nTurbines, direction_id=direction_id, differentiable=True,
use_rotor_components=use_rotor_components, cp_points=cp_points,
cp_curve_spline=cp_curve_spline),
promotes=['air_density', 'generatorEfficiency', 'rotorDiameter',
'wtVelocity%i' % direction_id, 'rated_power',
'wtPower%i' % direction_id, 'dir_power%i' % direction_id, 'cut_in_speed', 'cp_curve_cp',
'cp_curve_vel'])
if use_rotor_components:
self.connect('rotorGroup.Cp_out', 'powerComp.Cp')
else:
self.connect('CtCp.Ct_out', 'myModel.Ct')
self.connect('CtCp.Cp_out', 'powerComp.Cp')
class AEPGroup(Group):
"""
Group containing all necessary components for wind plant AEP calculations using the FLORIS model
"""
def __init__(self, nTurbines, nDirections=1, use_rotor_components=False, datasize=0,
differentiable=True, optimizingLayout=False, nSamples=0, wake_model=floris_wrapper,
wake_model_options=None, params_IdepVar_func=add_floris_params_IndepVarComps,
params_IndepVar_args=None, cp_points=1, cp_curve_spline=None, rec_func_calls=False):
super(AEPGroup, self).__init__()
if wake_model_options is None:
wake_model_options = {'differentiable': differentiable, 'use_rotor_components': use_rotor_components,
'nSamples': nSamples, 'verbose': False}
# providing default unit types for general MUX/DeMUX components
power_units = 'kW'
direction_units = 'deg'
wind_speed_units = 'm/s'
# add necessary inputs for group
self.add('dv0', IndepVarComp('windDirections', np.zeros(nDirections), units=direction_units), promotes=['*'])
self.add('dv1', IndepVarComp('windSpeeds', np.zeros(nDirections), units=wind_speed_units), promotes=['*'])
self.add('dv2', IndepVarComp('windFrequencies', np.ones(nDirections)), promotes=['*'])
self.add('dv3', IndepVarComp('turbineX', np.zeros(nTurbines), units='m'), promotes=['*'])
self.add('dv4', IndepVarComp('turbineY', np.zeros(nTurbines), units='m'), promotes=['*'])
self.add('dv4p5', IndepVarComp('hubHeight', np.zeros(nTurbines), units='m'), promotes=['*'])
# add vars to be seen by MPI and gradient calculations
self.add('dv5', IndepVarComp('rotorDiameter', np.zeros(nTurbines), units='m'), promotes=['*'])
self.add('dv6', IndepVarComp('axialInduction', np.zeros(nTurbines)), promotes=['*'])
self.add('dv7', IndepVarComp('generatorEfficiency', np.zeros(nTurbines)), promotes=['*'])
self.add('dv8', IndepVarComp('air_density', val=1.1716, units='kg/(m*m*m)'), promotes=['*'])
self.add('dv9', IndepVarComp('rated_power', np.ones(nTurbines)*5000., units='kW',
desc='rated power for each turbine', pass_by_obj=True), promotes=['*'])
if not use_rotor_components:
self.add('dv10', IndepVarComp('Ct_in', np.zeros(nTurbines)), promotes=['*'])
self.add('dv11', IndepVarComp('Cp_in', np.zeros(nTurbines)), promotes=['*'])
self.add('dv12', IndepVarComp('cp_curve_cp', np.zeros(datasize),
desc='cp curve cp data', pass_by_obj=True), promotes=['*'])
self.add('dv13', IndepVarComp('cp_curve_vel', np.zeros(datasize), units='m/s',
desc='cp curve velocity data', pass_by_obj=True), promotes=['*'])
self.add('dv14', IndepVarComp('cut_in_speed', np.zeros(nTurbines), units='m/s',
desc='cut-in speed of wind turbines', pass_by_obj=True), promotes=['*'])
# add variable tree IndepVarComps
add_gen_params_IdepVarComps(self, datasize=datasize)
# indep variable components for wake model
if params_IdepVar_func is not None:
if (params_IndepVar_args is None) and (wake_model is floris_wrapper):
params_IndepVar_args = {'use_rotor_components': False}
elif params_IndepVar_args is None:
params_IndepVar_args = {}
params_IdepVar_func(self, **params_IndepVar_args)
# add components and groups
self.add('windDirectionsDeMUX', DeMUX(nDirections, units=direction_units))
self.add('windSpeedsDeMUX', DeMUX(nDirections, units=wind_speed_units))
# print("initializing parallel groups")
# if use_parallel_group:
# direction_group = ParallelGroup()
# else:
# direction_group = Group()
pg = self.add('all_directions', ParallelGroup(), promotes=['*'])
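        # one DirectionGroup per wind direction; ParallelGroup allows concurrent evaluation under MPI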
if use_rotor_components:
for direction_id in np.arange(0, nDirections):
# print('assigning direction group %i'.format(direction_id))
pg.add('direction_group%i' % direction_id,
DirectionGroup(nTurbines=nTurbines, direction_id=direction_id,
use_rotor_components=use_rotor_components, datasize=datasize,
differentiable=differentiable, add_IdepVarComps=False, nSamples=nSamples,
wake_model=wake_model, wake_model_options=wake_model_options, cp_points=cp_points),
promotes=(['gen_params:*', 'model_params:*', 'air_density',
'axialInduction', 'generatorEfficiency', 'turbineX', 'turbineY', 'hubHeight',
'yaw%i' % direction_id, 'rotorDiameter', 'rated_power', 'wtVelocity%i' % direction_id,
'wtPower%i' % direction_id, 'dir_power%i' % direction_id]
if (nSamples == 0) else
['gen_params:*', 'model_params:*', 'air_density',
'axialInduction', 'generatorEfficiency', 'turbineX', 'turbineY', 'hubHeight',
'yaw%i' % direction_id, 'rotorDiameter', 'rated_power', 'wsPositionX', 'wsPositionY',
'wsPositionZ', 'wtVelocity%i' % direction_id,
'wtPower%i' % direction_id, 'dir_power%i' % direction_id, 'wsArray%i' % direction_id]))
else:
for direction_id in np.arange(0, nDirections):
# print('assigning direction group %i'.format(direction_id))
pg.add('direction_group%i' % direction_id,
DirectionGroup(nTurbines=nTurbines, direction_id=direction_id,
use_rotor_components=use_rotor_components, datasize=datasize,
differentiable=differentiable, add_IdepVarComps=False, nSamples=nSamples,
wake_model=wake_model, wake_model_options=wake_model_options, cp_points=cp_points,
cp_curve_spline=cp_curve_spline),
promotes=(['Ct_in', 'Cp_in', 'gen_params:*', 'model_params:*', 'air_density', 'axialInduction',
'generatorEfficiency', 'turbineX', 'turbineY', 'yaw%i' % direction_id, 'rotorDiameter',
'hubHeight', 'rated_power', 'wtVelocity%i' % direction_id, 'wtPower%i' % direction_id,
'dir_power%i' % direction_id, 'cut_in_speed', 'cp_curve_cp', 'cp_curve_vel']
if (nSamples == 0) else
['Ct_in', 'Cp_in', 'gen_params:*', 'model_params:*', 'air_density', 'axialInduction',
'generatorEfficiency', 'turbineX', 'turbineY', 'yaw%i' % direction_id, 'rotorDiameter',
'hubHeight', 'rated_power', 'cut_in_speed', 'wsPositionX', 'wsPositionY', 'wsPositionZ',
'wtVelocity%i' % direction_id, 'wtPower%i' % direction_id,
'dir_power%i' % direction_id, 'wsArray%i' % direction_id, 'cut_in_speed', 'cp_curve_cp',
'cp_curve_vel']))
# print("parallel groups initialized")
self.add('powerMUX', MUX(nDirections, units=power_units))
self.add('AEPcomp', WindFarmAEP(nDirections, rec_func_calls=rec_func_calls), promotes=['*'])
# connect components
self.connect('windDirections', 'windDirectionsDeMUX.Array')
self.connect('windSpeeds', 'windSpeedsDeMUX.Array')
for direction_id in np.arange(0, nDirections):
self.add('y%i' % direction_id, IndepVarComp('yaw%i' % direction_id, np.zeros(nTurbines), units='deg'), promotes=['*'])
self.connect('windDirectionsDeMUX.output%i' % direction_id, 'direction_group%i.wind_direction' % direction_id)
self.connect('windSpeedsDeMUX.output%i' % direction_id, 'direction_group%i.wind_speed' % direction_id)
self.connect('dir_power%i' % direction_id, 'powerMUX.input%i' % direction_id)
        self.connect('powerMUX.Array', 'dirPowers') | 61.178138 | 132 | 0.591225 | 1,497 | 15,111 | 5.699399 | 0.144289 | 0.091538 | 0.066104 | 0.022855 | 0.639709 | 0.594585 | 0.562705 | 0.50211 | 0.450891 | 0.438233 | 0 | 0.004864 | 0.292568 | 15,111 | 247 | 133 | 61.178138 | 0.793265 | 0.05367 | 0 | 0.364641 | 0 | 0 | 0.190807 | 0.011088 | 0 | 0 | 0 | 0.004049 | 0 | 1 | 0.016575 | false | 0.022099 | 0.044199 | 0 | 0.077348 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0059a5764d2c1aef345a0cb4799bf9b121c30933 | 1,571 | py | Python | sources/led.py | marcoplaitano/iot-weather-station | a882cb3e59b7ac9d855dce40a9b562460008673f | [
"MIT"
] | null | null | null | sources/led.py | marcoplaitano/iot-weather-station | a882cb3e59b7ac9d855dce40a9b562460008673f | [
"MIT"
] | null | null | null | sources/led.py | marcoplaitano/iot-weather-station | a882cb3e59b7ac9d855dce40a9b562460008673f | [
"MIT"
] | null | null | null | class Led():
"""
This class represents a led connected to one of the board's digital pins.
It also needs the mqtt client in order to send notifications to the user.
"""
def __init__(self, pin, client):
self._pin = pin
pinMode(self._pin, OUTPUT)
self._client = client
# turned off by default
self._is_on = False
digitalWrite(self._pin, LOW)
def on(self):
"""
Turns the led on and notifies the user via MQTT.
"""
if self._is_on:
return
digitalWrite(self._pin, HIGH)
self._is_on = True
self._client.publish("iot-marco/data/led", "on")
def off(self):
"""
Turns the led off and notifies the user via MQTT.
"""
if not self._is_on:
return
digitalWrite(self._pin, LOW)
self._is_on = False
self._client.publish("iot-marco/data/led", "off")
def state(self):
"""
Returns a string representing the current state of the device.
"""
return "on" if self._is_on else "off"
def control(self, command):
"""
Usually called when a message is received in the "iot-marco/commands/led" subtopic.
The payload of the message (the command parameter here) is the action to perform.
"""
if command == "get-state":
self._client.publish("iot-marco/data/led", self.state())
elif command == "turn-on":
self.on()
elif command == "turn-off":
self.off()
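
# Illustrative usage sketch (editor's addition, not part of the original
# module). On the real board, pinMode/digitalWrite and OUTPUT/HIGH/LOW come
# from the embedded runtime; here they are stubbed so the MQTT logic of the
# class can be exercised on a desktop Python interpreter.
if __name__ == "__main__":
    pinMode = digitalWrite = lambda *args: None  # stub the board runtime
    OUTPUT, LOW, HIGH = 0, 0, 1

    class _StubClient:
        def publish(self, topic, payload):
            print(topic, "->", payload)

    led = Led(5, _StubClient())  # the pin number 5 is arbitrary here
    led.control("turn-on")       # prints: iot-marco/data/led -> on
    led.control("get-state")     # prints: iot-marco/data/led -> on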
| 28.563636 | 91 | 0.565245 | 204 | 1,571 | 4.230392 | 0.372549 | 0.048667 | 0.05562 | 0.069525 | 0.25029 | 0.25029 | 0.25029 | 0 | 0 | 0 | 0 | 0 | 0.330999 | 1,571 | 54 | 92 | 29.092593 | 0.821123 | 0.316996 | 0 | 0.214286 | 0 | 0 | 0.09234 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.178571 | false | 0 | 0 | 0 | 0.321429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
005c7e1d5f864be8035bf00abdbe086c5d6aba26 | 1,548 | py | Python | SpatialDataset.py | pradeepppc/PSHUIM | 401f6cb1dcae21d49fd5877ecd3754a1b80c3b4e | [
"MIT"
] | null | null | null | SpatialDataset.py | pradeepppc/PSHUIM | 401f6cb1dcae21d49fd5877ecd3754a1b80c3b4e | [
"MIT"
] | null | null | null | SpatialDataset.py | pradeepppc/PSHUIM | 401f6cb1dcae21d49fd5877ecd3754a1b80c3b4e | [
"MIT"
] | null | null | null | from Transaction import Transaction


class Dataset:
    def __init__(self, datasetpath, neighbors):
        # per-instance state (kept off the class so separate Dataset objects
        # do not share transactions or maxItem)
        self.transactions = []
        self.maxItem = 0
        with open(datasetpath, 'r') as f:
            lines = f.readlines()
            for line in lines:
                self.transactions.append(self.createTransaction(line, neighbors))
        print('Transaction Count: ' + str(len(self.transactions)))

    def createTransaction(self, line, neighbors):
        # each input line has the form "<items>:<transaction utility>:<utilities>"
        trans_list = line.strip().split(':')
        transactionUtility = int(trans_list[1])
        itemsString = trans_list[0].strip().split(' ')
        utilityString = trans_list[2].strip().split(' ')
        # pmuString = trans_list[3].strip().split(' ')
        items = []
        utilities = []
        pmus = []
        for idx, item in enumerate(itemsString):
            item_int = int(item)
            if item_int > self.maxItem:
                self.maxItem = item_int
            items.append(item_int)
            utilities.append(int(utilityString[idx]))
            # the pmu of an item is its own utility plus the utilities of the
            # other items in the transaction that are among its neighbors
            pm = int(utilityString[idx])
            if item_int in neighbors:
                for j in range(0, len(itemsString)):
                    if j != idx:
                        if int(itemsString[j]) in neighbors[item_int]:
                            pm += int(utilityString[j])
            pmus.append(pm)
        return Transaction(items, utilities, transactionUtility, pmus)

    def getMaxItem(self):
        return self.maxItem

    def getTransactions(self):
        return self.transactions
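
# Hedged usage sketch (editor's addition): createTransaction expects each line
# to look like "<items>:<transaction utility>:<utilities>", with items and
# utilities space-separated and aligned by position. The file contents and the
# neighbor map below are made up for illustration; the snippet is commented
# out because it needs the Transaction class at runtime.
#
# with open('toy.txt', 'w') as f:
#     f.write("1 2 3:9:2 3 4\n2 4:5:1 4\n")
# ds = Dataset('toy.txt', neighbors={1: {2}, 2: {1, 4}, 3: set(), 4: {2}})
# print(ds.getMaxItem())  # 4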
| 33.652174 | 81 | 0.565245 | 159 | 1,548 | 5.408805 | 0.345912 | 0.048837 | 0.02093 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005731 | 0.323643 | 1,548 | 45 | 82 | 34.4 | 0.815664 | 0.028424 | 0 | 0 | 0 | 0 | 0.015313 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.108108 | false | 0 | 0.027027 | 0.054054 | 0.297297 | 0.027027 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
005db1df0bd94d276687c3061304891f57e05a5e | 2,047 | py | Python | viadot/tasks/aselite.py | angelika233/viadot | 99a4c5b622ad099a44ab014a47ba932a747c0ae6 | [
"MIT"
] | null | null | null | viadot/tasks/aselite.py | angelika233/viadot | 99a4c5b622ad099a44ab014a47ba932a747c0ae6 | [
"MIT"
] | null | null | null | viadot/tasks/aselite.py | angelika233/viadot | 99a4c5b622ad099a44ab014a47ba932a747c0ae6 | [
"MIT"
] | null | null | null | import json
import prefect

from typing import Any, Dict

from prefect import Task
from prefect.tasks.secrets import PrefectSecret

from .azure_key_vault import AzureKeyVaultSecret
from viadot.config import local_config
from viadot.sources import AzureSQL


class ASELiteToDF(Task):
    def __init__(
        self, credentials: Dict[str, Any] = None, query: str = None, *args, **kwargs
    ):
        """
        Task for obtaining data from an ASElite source.

        Args:
            credentials (Dict[str, Any], optional): ASElite SQL Database credentials. Defaults to None.
            query (str, optional): Query to perform on a database. Defaults to None.

        Returns: Pandas DataFrame
        """
        self.credentials = credentials
        self.query = query

        super().__init__(
            name="ASElite_to_df",
            *args,
            **kwargs,
        )

    def __call__(self, *args, **kwargs):
        """Download from the ASElite database to a DataFrame."""
        return super().__call__(*args, **kwargs)

    def run(
        self,
        query: str,
        credentials: Dict[str, Any] = None,
        credentials_secret: str = None,
        vault_name: str = None,
    ):
        logger = prefect.context.get("logger")

        if not credentials_secret:
            try:
                credentials_secret = PrefectSecret("aselite").run()
            except ValueError:
                pass

        if credentials_secret:
            credentials_str = AzureKeyVaultSecret(
                credentials_secret, vault_name=vault_name
            ).run()
            credentials = json.loads(credentials_str)
            logger.info("Loaded credentials from Key Vault")
        else:
            credentials = local_config.get("ASELite_SQL")
            logger.info("Loaded credentials from local source")

        aselite = AzureSQL(credentials=credentials)
        logger.info("Connected to ASELITE SOURCE")
        df = aselite.to_df(query=query)
        logger.info("Successfully collected data from query")
        return df
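
# Hedged usage sketch (editor's addition): running the task inside a
# Prefect 1.x flow; the query and secret name are placeholders, and the
# snippet is commented out because it needs a live database and key vault.
#
# from prefect import Flow
#
# with Flow("aselite_extract") as flow:
#     df = ASELiteToDF()(
#         query="SELECT TOP 10 * FROM some_table",
#         credentials_secret="aselite-credentials",
#     )
# flow.run()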
| 31.984375 | 103 | 0.612115 | 217 | 2,047 | 5.617512 | 0.327189 | 0.069729 | 0.044299 | 0.051682 | 0.091879 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.301905 | 2,047 | 63 | 104 | 32.492063 | 0.853044 | 0.139228 | 0 | 0.041667 | 0 | 0 | 0.099353 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0.020833 | 0.166667 | 0 | 0.291667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
005e310b4e5c1b997b5367bd0bd0c33f2bd0f301 | 3,062 | py | Python | base_templates/cfsr_conversion_template.py | bstdenis/pycfs | bbb8576bbae757e9a6baca8294e18c03d3513361 | [
"Apache-2.0"
] | 1 | 2018-03-14T21:27:20.000Z | 2018-03-14T21:27:20.000Z | base_templates/cfsr_conversion_template.py | bstdenis/pycfs | bbb8576bbae757e9a6baca8294e18c03d3513361 | [
"Apache-2.0"
] | null | null | null | base_templates/cfsr_conversion_template.py | bstdenis/pycfs | bbb8576bbae757e9a6baca8294e18c03d3513361 | [
"Apache-2.0"
] | null | null | null | """CFSR & CVSv2 conversion
This requires a running IPCluster:
$ ipcluster start -n 12
"""

import os

from ipyparallel import Client

import cfsr

path_input = '/some/path'
path_output = '/some/path'
path_pycfs = '/some/path'  # The path to cfsr.py and gribou.py

var_names = ['pressfc']  # The RDA archive cfsr dataset prefix
grib_var_names = ['Surface pressure']
"""The grib_var_names can be obtained from gribou.all_str_dump(file_name)"""
grib_levels = [None]
"""The grib levels are set to None if there are no vertical level units
in the gribou.all_str_dump(file_name), otherwise the number is used
(e.g. grib_levels = [2] for one 2 meter variable)"""
nc_var_names = ['ps']
nc_units = ['Pa']
"""nc_var_names can also be obtained from the gribou.all_str_dump(file_name)"""
nc_format = 'NETCDF4_CLASSIC'

initial_year = 1979
final_year = 2010
months = ['01', '02', '03', '04', '05', '06',
          '07', '08', '09', '10', '11', '12']
grib_source = 'rda'
resolution = 'highres'
"""This is related to the grid choice on the RDA portal. Generally, if
the higher resolution is selected, set to 'highres'. For lower resolutions,
the file names should have a *.l.gdas.* structure; in this case set to
'lowres'"""
cache_size = 100

rc = Client()
with rc[:].sync_imports():
    import sys
rc[:].execute("sys.path.append('{0}')".format(path_pycfs))
with rc[:].sync_imports():
    import cfsr
with rc[:].sync_imports():
    import gribou
lview = rc.load_balanced_view()

mylviews = []
for i, var_name in enumerate(var_names):
    for yyyy in range(initial_year, final_year + 1):
        for mm in months:
            vym = (var_name, str(yyyy), mm)
            ncvym = (nc_var_names[i], str(yyyy), mm)
            if resolution in ['highres', 'prmslmidres', 'ocnmidres']:
                if (yyyy > 2011) or ((yyyy == 2011) and (int(mm) > 3)):
                    grib_file = "{0}.cdas1.{1}{2}.grb2".format(*vym)
                else:
                    grib_file = "{0}.gdas.{1}{2}.grb2".format(*vym)
                file_name = "{0}_1hr_cfsr_reanalysis_{1}{2}.nc".format(*ncvym)
                nc_file = os.path.join(path_output, file_name)
            elif resolution in ['lowres', 'ocnlowres']:
                grib_file = "{0}.l.gdas.{1}{2}.grb2".format(*vym)
                file_name = "{0}_1hr_cfsr_reanalysis_lowres_{1}{2}.nc".format(
                    *ncvym)
                nc_file = os.path.join(path_output, file_name)
            grib_file = os.path.join(path_input, grib_file)
            if not os.path.isfile(grib_file):
                continue
            print(grib_file)
            mylviews.append(lview.apply(
                cfsr.hourly_grib2_to_netcdf, grib_file, grib_source, nc_file,
                nc_var_names[i], grib_var_names[i], grib_levels[i],
                cache_size=cache_size, overwrite_nc_units=nc_units[i],
                nc_format=nc_format))
    if nc_var_names[i] in ['tasmin', 'tasmax']:
        print("WARNING: this is a cumulative min/max variable; need to run "
              "cfsr_sampling.py afterwards.")
| 36.452381 | 78 | 0.622796 | 445 | 3,062 | 4.085393 | 0.379775 | 0.044004 | 0.027503 | 0.023102 | 0.19582 | 0.129813 | 0.10341 | 0.10341 | 0.10341 | 0.10341 | 0 | 0.031814 | 0.240366 | 3,062 | 83 | 79 | 36.891566 | 0.749785 | 0.051927 | 0 | 0.12069 | 0 | 0 | 0.17472 | 0.059534 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.155172 | 0 | 0.155172 | 0.034483 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
006313f3e486bead27c403273cfdcb3ea0ba6767 | 369 | py | Python | ex055.py | Roninho514/Treinamento-Python | fc6ad0b64fb3dc3cfa5381f8fc53b5b3243a7ff6 | [
"MIT"
] | null | null | null | ex055.py | Roninho514/Treinamento-Python | fc6ad0b64fb3dc3cfa5381f8fc53b5b3243a7ff6 | [
"MIT"
] | null | null | null | ex055.py | Roninho514/Treinamento-Python | fc6ad0b64fb3dc3cfa5381f8fc53b5b3243a7ff6 | [
"MIT"
] | null | null | null | PesoMaior = 0
PesoMenor = 0
# read the weight of 5 people, keeping track of the highest and lowest values
for c in range(1, 6):
    peso = float(input('Weight of person {}: '.format(c)))
    if c == 1:
        PesoMaior = peso
        PesoMenor = peso
    else:
        if PesoMaior < peso:
            PesoMaior = peso
        elif PesoMenor > peso:
            PesoMenor = peso
print('The lowest weight is {} and the highest is {}'.format(PesoMenor, PesoMaior))
| 26.357143 | 68 | 0.558266 | 48 | 369 | 4.291667 | 0.520833 | 0.18932 | 0.165049 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02008 | 0.325203 | 369 | 13 | 69 | 28.384615 | 0.807229 | 0 | 0 | 0.307692 | 0 | 0 | 0.135501 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
006c69847c24c35304954da3e05ad12d45e9a338 | 1,461 | py | Python | day10/code/main.py | JoseTomasTocino/AdventOfCode2020 | 19b22c3f9ef2331f08c2ad78f49f200a5f4adfc9 | [
"MIT"
] | null | null | null | day10/code/main.py | JoseTomasTocino/AdventOfCode2020 | 19b22c3f9ef2331f08c2ad78f49f200a5f4adfc9 | [
"MIT"
] | null | null | null | day10/code/main.py | JoseTomasTocino/AdventOfCode2020 | 19b22c3f9ef2331f08c2ad78f49f200a5f4adfc9 | [
"MIT"
] | null | null | null | import logging
from functools import lru_cache

logger = logging.getLogger(__name__)


def parse_adapter_input(adapters):
    # Separate by lines, convert to integer, prepend the initial adapter (0) and append the final adapter (max + 3)
    adapters = [0] + sorted(int(x) for x in adapters.split("\n") if x)
    adapters.append(max(adapters) + 3)
    return adapters


def get_adapter_differences(adapters):
    # Given all adapters need to be used, this is just a matter of sorting them and computing the differences
    adapters = parse_adapter_input(adapters)
    adapters_delta = [adapters[i + 1] - adapters[i] for i in range(len(adapters) - 1)]
    return adapters_delta


def get_adapter_path_count(adapters):
    # Parse and convert adapters to tuple (because lru_cache decorated functions need hashable arguments)
    adapters = tuple(parse_adapter_input(adapters))
    return get_adapter_path_count_priv(adapters)


@lru_cache()
def get_adapter_path_count_priv(adapters, current=0):
    # Get the next adapter indices
    next_indices = [x for x in range(current + 1, current + 4) if x < len(adapters)]

    # If there are no more indices, we're at base case so return 1
    if not next_indices:
        return 1

    # Otherwise, sum all branches from matching adapters (according to <= 3 criteria)
    return sum(
        get_adapter_path_count_priv(adapters, i)
        for i in next_indices
        if adapters[i] - adapters[current] <= 3
    )
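
# Worked example (editor's addition): drives the functions above with the
# small adapter list from the Advent of Code 2020 day 10 puzzle statement.
if __name__ == '__main__':
    sample = "16\n10\n15\n5\n1\n11\n7\n19\n6\n12\n4"
    deltas = get_adapter_differences(sample)
    print(deltas.count(1) * deltas.count(3))  # part 1: 7 * 5 = 35
    print(get_adapter_path_count(sample))     # part 2: 8 distinct arrangements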
| 33.204545 | 115 | 0.71937 | 215 | 1,461 | 4.725581 | 0.413953 | 0.049213 | 0.055118 | 0.074803 | 0.137795 | 0.091535 | 0 | 0 | 0 | 0 | 0 | 0.011178 | 0.20397 | 1,461 | 43 | 116 | 33.976744 | 0.862425 | 0.330595 | 0 | 0 | 0 | 0 | 0.00206 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.083333 | 0 | 0.458333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00703f62cc89efa8a39ec540a09b926b4227e9f8 | 1,334 | py | Python | USAP_H37Rv/Tools/smalt-0.7.5/test/cigar_test.py | SANBI-SA-archive/s_bvc_pipeline | 43948b4ea5db6333633361dd91f7e7b320392fb2 | [
"Apache-2.0"
] | null | null | null | USAP_H37Rv/Tools/smalt-0.7.5/test/cigar_test.py | SANBI-SA-archive/s_bvc_pipeline | 43948b4ea5db6333633361dd91f7e7b320392fb2 | [
"Apache-2.0"
] | null | null | null | USAP_H37Rv/Tools/smalt-0.7.5/test/cigar_test.py | SANBI-SA-archive/s_bvc_pipeline | 43948b4ea5db6333633361dd91f7e7b320392fb2 | [
"Apache-2.0"
] | null | null | null | # test cigar strings
PROGNAM = "../src/smalt"
FNAM_REF = "cigar_ref.fa.gz"
FNAM_READ1 = "cigar_read1.fq"
FNAM_READ2 = "cigar_read2.fq"

TMPFIL_PREFIX = "TMPcig"

KMER = 13
NSKIP = 2


def smalt_index(df, index_name, fasta_name, kmer, nskip):
    tup = (PROGNAM, 'index',
           '-k', '%i' % (int(kmer)),
           '-s', '%i' % (int(nskip)),
           index_name,
           fasta_name)
    df.call(tup, "when indexing")


def smalt_map(df, oufilnam, indexnam, readfil, matefil, typ="fastq", flags=[]):
    tup = [PROGNAM, 'map']
    if len(flags) > 0:
        tup.extend(flags)
    tup.extend([
        '-f', typ,
        '-o', oufilnam,
        indexnam,
        readfil, matefil])
    df.call(tup, "when mapping")


if __name__ == '__main__':
    from testdata import DataFiles

    df = DataFiles()

    refnam = df.joinData(FNAM_REF)
    readnamA = df.joinData(FNAM_READ1)
    readnamB = df.joinData(FNAM_READ2)

    indexnam = df.addIndex(TMPFIL_PREFIX)
    oufilnam = df.addTMP(TMPFIL_PREFIX + ".sam")

    smalt_index(df, indexnam, refnam, KMER, NSKIP)
    smalt_map(df, oufilnam, indexnam, readnamA, readnamB, "sam", ["-x"])
    # print("Test ok.")
    df.cleanup()
    exit()
| 23.821429 | 79 | 0.596702 | 167 | 1,334 | 4.598802 | 0.401198 | 0.036458 | 0.054688 | 0.046875 | 0.200521 | 0.132813 | 0.132813 | 0.132813 | 0.132813 | 0.132813 | 0 | 0.010183 | 0.263868 | 1,334 | 55 | 80 | 24.254545 | 0.771894 | 0.025487 | 0 | 0.1 | 0 | 0 | 0.098689 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0.125 | 0 | 0.175 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00718672508db15a300c29553a2f5ac491f6405f | 9,071 | py | Python | GUI/main.py | Uttirna/Fuzzy-Expert-System | 814708556b86c69769d935fd256af08952fdde8d | [
"MIT"
] | null | null | null | GUI/main.py | Uttirna/Fuzzy-Expert-System | 814708556b86c69769d935fd256af08952fdde8d | [
"MIT"
] | null | null | null | GUI/main.py | Uttirna/Fuzzy-Expert-System | 814708556b86c69769d935fd256af08952fdde8d | [
"MIT"
] | null | null | null | from tkinter import *
from PIL import ImageTk,Image
import matlab.engine
eng = matlab.engine.start_matlab()
font11 = "-family Arial -size 19 -weight normal -slant roman " \
"-underline 0 -overstrike 0"
font12 = "-family Arial -size 12 -weight normal -slant roman " \
"-underline 0 -overstrike 0"
font14 = "-family Arial -size 15 -weight normal -slant roman " \
"-underline 0 -overstrike 0"
font15 = "-family Arial -size 12 -weight bold -slant roman " \
"-underline 0 -overstrike 0"
root = Tk()
TFrame1 = Frame(root)
TFrame1.place(relx=0.01, rely=0.02, relheight=0.94, relwidth=0.48)
TFrame1.configure(relief=GROOVE)
TFrame1.configure(borderwidth="2")
TFrame1.configure(relief=GROOVE)
TFrame1.configure(width=465)
TLabel = Label(TFrame1)
TLabel.place(relx=0.3, rely=0.04, height=38, width=350)
TLabel.configure(background="#d9d9d9")
TLabel.configure(foreground="#000000")
TLabel.configure(font=font11)
TLabel.configure(relief=FLAT)
TLabel.configure(text='''Enter Patient's data ''')
#--------------------------------INPUT 1-------------------------------
TLabel1 = Label(TFrame1)
TLabel1.place(relx=0.02, rely=0.15, height=39, width=150)
TLabel1.configure(background="#d9d9d9")
TLabel1.configure(foreground="#000000")
TLabel1.configure(font=font12)
TLabel1.configure(relief=FLAT)
TLabel1.configure(text='''Clump Thickness''')
TEntry_Clump = Entry(TFrame1)
TEntry_Clump.place(relx=0.24, rely=0.15, relheight=0.05, relwidth=0.53)
TEntry_Clump.configure(width=246)
TEntry_Clump.configure(takefocus="")
TEntry_Clump.configure(cursor="ibeam")
#---------------------------------INPUT 2-------------------------------
TLabel2 = Label(TFrame1)
TLabel2.place(relx=0.02, rely=0.24, height=39, width=150)
TLabel2.configure(background="#d9d9d9")
TLabel2.configure(foreground="#000000")
TLabel2.configure(font=font12)
TLabel2.configure(relief=FLAT)
TLabel2.configure(text='''Uniformity Cell Size''')
TEntry_UCellSize = Entry(TFrame1)
TEntry_UCellSize.place(relx=0.24, rely=0.24, relheight=0.05, relwidth=0.53)
TEntry_UCellSize.configure(width=246)
TEntry_UCellSize.configure(takefocus="")
TEntry_UCellSize.configure(cursor="ibeam")
#---------------------------------INPUT 3-------------------------------
TLabel3 = Label(TFrame1)
TLabel3.place(relx=0.02, rely=0.33, height=39, width=150)
TLabel3.configure(background="#d9d9d9")
TLabel3.configure(foreground="#000000")
TLabel3.configure(font=font12)
TLabel3.configure(relief=FLAT)
TLabel3.configure(text='''Uniformity Cell Shape''')
TEntry_UCellShape = Entry(TFrame1)
TEntry_UCellShape.place(relx=0.24, rely=0.33, relheight=0.05, relwidth=0.53)
TEntry_UCellShape.configure(width=246)
TEntry_UCellShape.configure(takefocus="")
TEntry_UCellShape.configure(cursor="ibeam")
#----------------------------------------INPUT 4----------------------------------
TLabel4 = Label(TFrame1)
TLabel4.place(relx=0.02, rely=0.41, height=39, width=150)
TLabel4.configure(background="#d9d9d9")
TLabel4.configure(foreground="#000000")
TLabel4.configure(font=font12)
TLabel4.configure(relief=FLAT)
TLabel4.configure(text='''Marginal Adhesion''')
TEntry_MarAdh = Entry(TFrame1)
TEntry_MarAdh.place(relx=0.24, rely=0.41, relheight=0.05, relwidth=0.53)
TEntry_MarAdh.configure(width=246)
TEntry_MarAdh.configure(takefocus="")
TEntry_MarAdh.configure(cursor="ibeam")
#-----------------------------INPUT 5-----------------------------------------
TLabel5 = Label(TFrame1)
TLabel5.place(relx=0.02, rely=0.5, height=39, width=150)
TLabel5.configure(background="#d9d9d9")
TLabel5.configure(foreground="#000000")
TLabel5.configure(font=font12)
TLabel5.configure(relief=FLAT)
TLabel5.configure(text='''Single Epi Cell Size''')
TEntry_EpiCellSize = Entry(TFrame1)
TEntry_EpiCellSize.place(relx=0.24, rely=0.5, relheight=0.05, relwidth=0.53)
TEntry_EpiCellSize.configure(width=246)
TEntry_EpiCellSize.configure(takefocus="")
TEntry_EpiCellSize.configure(cursor="ibeam")
#-----------------------------INPUT 6--------------------------------------
TLabel6 = Label(TFrame1)
TLabel6.place(relx=0.02, rely=0.61, height=39, width=150)
TLabel6.configure(background="#d9d9d9")
TLabel6.configure(foreground="#000000")
TLabel6.configure(font=font12)
TLabel6.configure(relief=FLAT)
TLabel6.configure(text='''Bare Nuclei''')
TEntry_Bare = Entry(TFrame1)
TEntry_Bare.place(relx=0.24, rely=0.61, relheight=0.05, relwidth=0.53)
TEntry_Bare.configure(width=246)
TEntry_Bare.configure(takefocus="")
TEntry_Bare.configure(cursor="ibeam")
#-----------------------------INPUT 7------------------------------------
TLabel7 = Label(TFrame1)
TLabel7.place(relx=0.02, rely=0.70, height=39, width=150)
TLabel7.configure(background="#d9d9d9")
TLabel7.configure(foreground="#000000")
TLabel7.configure(font=font12)
TLabel7.configure(relief=FLAT)
TLabel7.configure(text='''Bland Chromatin''')
TEntry_Chromatin = Entry(TFrame1)
TEntry_Chromatin.place(relx=0.24, rely=0.70, relheight=0.05, relwidth=0.53)
TEntry_Chromatin.configure(width=246)
TEntry_Chromatin.configure(takefocus="")
TEntry_Chromatin.configure(cursor="ibeam")
#---------------------------INPUT 8----------------------------------------
TLabel8 = Label(TFrame1)
TLabel8.place(relx=0.02, rely=0.79, height=39, width=150)
TLabel8.configure(background="#d9d9d9")
TLabel8.configure(foreground="#000000")
TLabel8.configure(font=font12)
TLabel8.configure(relief=FLAT)
TLabel8.configure(text='''Normal Nucleoli''')
TEntry_Normal = Entry(TFrame1)
TEntry_Normal.place(relx=0.24, rely=0.79, relheight=0.05, relwidth=0.53)
TEntry_Normal.configure(width=246)
TEntry_Normal.configure(takefocus="")
TEntry_Normal.configure(cursor="ibeam")
#---------------------------------INPUT 9-------------------------------
TLabel9 = Label(TFrame1)
TLabel9.place(relx=0.02, rely=0.88, height=39, width=150)
TLabel9.configure(background="#d9d9d9")
TLabel9.configure(foreground="#000000")
TLabel9.configure(font=font12)
TLabel9.configure(relief=FLAT)
TLabel9.configure(text='''Mitosis''')
TEntry_Mitosis = Entry(TFrame1)
TEntry_Mitosis.place(relx=0.24, rely=0.88, relheight=0.05, relwidth=0.53)
TEntry_Mitosis.configure(width=246)
TEntry_Mitosis.configure(takefocus="")
TEntry_Mitosis.configure(cursor="ibeam")
# -----------------------------------------------------------------------
TButton_eval = Button(TFrame1, command=lambda w=TFrame1: get_all_entry_widgets_text_content(w))
TButton_eval.place(relx=0.34, rely=0.95, height=35, width=126)
TButton_eval.configure(takefocus="")
TButton_eval.configure(text='''Evaluate''')
# -----------------------------------------------------------------------
TLabel_Output = Label(root)
TLabel_Output.place(relx=0.60, rely=0.06, height=38, width=436)
TLabel_Output.configure(background="#d9d9d9")
TLabel_Output.configure(foreground="#000000")
TLabel_Output.configure(font=font11)
TLabel_Output.configure(relief=FLAT)
TLabel_Output.configure(anchor=CENTER)
TLabel_Output.configure(text='''Breast Cancer Stage :''')
TLabel_Output.configure(width=436)
Canvas_Graph = Canvas(root)
Canvas_Graph.place(relx=0.51, rely=0.16, relheight=0.66, relwidth=0.47)
Canvas_Graph.configure(background="white")
Canvas_Graph.configure(borderwidth="2")
Canvas_Graph.configure(highlightbackground="#e0ded1")
Canvas_Graph.configure(highlightcolor="black")
Canvas_Graph.configure(insertbackground="black")
Canvas_Graph.configure(relief=RIDGE)
Canvas_Graph.configure(selectbackground="#cac8bc")
Canvas_Graph.configure(selectforeground="black")
Canvas_Graph.configure(width=456)
TLabel_OutputText = Label(root)
TLabel_OutputText.place(relx=0.60, rely=0.87, height=38, width=500)
TLabel_OutputText.configure(background="#d9d9d9")
TLabel_OutputText.configure(foreground="#000000")
TLabel_OutputText.configure(font=font14)
TLabel_OutputText.configure(relief=FLAT)
TLabel_OutputText.configure(anchor=CENTER)
TLabel_OutputText.configure(width=500)
def get_all_entry_widgets_text_content(parent_widget):
    # collect the text of every Entry widget inside the frame, in order
    args = []
    children_widgets = parent_widget.winfo_children()
    for child_widget in children_widgets:
        if child_widget.winfo_class() == 'Entry':
            args.append(child_widget.get())
    # print(args)
    # print(type(args[1]))
    # TODO: check that all inputs are valid and that no input field is empty
    doMATLABProcessing(args)


def doMATLABProcessing(data):
    # contact MATLAB through its engine API to evaluate the fuzzy system
    for i in range(len(data)):
        data[i] = float(data[i])
    val = eng.evalFuzzy(data[0], data[1], data[2], data[3], data[4],
                        data[5], data[6], data[7], data[8])
    outputMsg(val)
    eng.createOutputGraph(val, nargout=0)
    tk_img = ImageTk.PhotoImage(Image.open("output.jpg"))
    Canvas_Graph.create_image((150, 100), image=tk_img, anchor=NW)
    # keep a reference so the image is not garbage-collected
    Canvas_Graph.tk_img = tk_img


def outputMsg(val):
    if val > 0.8:
        text = "You are at high risk of Breast Cancer"
    elif val > 0.6:
        text = "You are at medium risk of Breast Cancer"
    else:
        text = "You are at low risk of Breast Cancer"
    TLabel_OutputText['text'] = text
root.mainloop() | 35.996032 | 100 | 0.697167 | 1,163 | 9,071 | 5.347377 | 0.190026 | 0.034732 | 0.038591 | 0.017366 | 0.152275 | 0.145843 | 0.062711 | 0.020743 | 0 | 0 | 0 | 0.071687 | 0.08654 | 9,071 | 252 | 101 | 35.996032 | 0.678856 | 0.103737 | 0 | 0.031414 | 0 | 0 | 0.108058 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.015707 | false | 0 | 0.015707 | 0 | 0.031414 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0071a7c290c35a89f8f4928e5517cd558ffff2f5 | 11,473 | py | Python | rs5archive.py | DarkStarSword/miasmata-fixes | d320f5e68cd5ebabd14efd7af021afa7e63d161e | [
"MIT"
] | 10 | 2015-06-13T17:27:18.000Z | 2021-02-14T13:03:11.000Z | rs5archive.py | DarkStarSword/miasmata-fixes | d320f5e68cd5ebabd14efd7af021afa7e63d161e | [
"MIT"
] | 2 | 2020-07-11T18:34:57.000Z | 2021-03-07T02:27:46.000Z | rs5archive.py | DarkStarSword/miasmata-fixes | d320f5e68cd5ebabd14efd7af021afa7e63d161e | [
"MIT"
] | 1 | 2016-03-23T22:26:23.000Z | 2016-03-23T22:26:23.000Z | #!/usr/bin/env python
# Based loosely on git://github.com/klightspeed/RS5-Extractor
#
# I wanted something a bit lower level that didn't convert the contained files
# so I could examine the format for myself. Don't expect this to be feature
# complete for a while.

# Fix print function for Python 2 deficiency regarding non-ascii encoded text files:
from __future__ import print_function
import utf8file
print = utf8file.print

try:
    from PySide import QtCore
except ImportError:
    class RS5Patcher(object):
        def tr(self, msg):
            return msg
else:
    # For PySide translations without being overly verbose...
    class RS5Patcher(QtCore.QObject):
        pass
RS5Patcher = RS5Patcher()

import struct
import zlib
import sys
import os
import collections

import rs5file

chunk_extensions = {
    ('IMAG', 'DATA'): '.dds',
}


def progress(percent=None, msg=None):
    if msg is not None:
        print(msg)


# http://msdn.microsoft.com/en-us/library/system.datetime.fromfiletimeutc.aspx:
# A Windows file time is a 64-bit value that represents the number of
# 100-nanosecond intervals that have elapsed since 12:00 midnight,
# January 1, 1601 A.D. (C.E.) Coordinated Universal Time (UTC).
import calendar
win_epoch = calendar.timegm((1601, 1, 1, 0, 0, 0))


def from_win_time(win_time):
    return win_time / 10000000 + win_epoch


def to_win_time(unix_time):
    return (unix_time - win_epoch) * 10000000


def mkdir_recursive(path):
    if path == '':
        return
    (head, tail) = os.path.split(path)
    mkdir_recursive(head)
    if not os.path.exists(path):
        os.mkdir(path)
    elif not os.path.isdir(path):
        raise OSError(17, '%s exists but is not a directory' % path)


class NotAFile(Exception):
    pass


class Rs5CompressedFile(object):
    def gen_dir_ent(self):
        return struct.pack('<QIQ4sQQ',
                           self.data_off, self.compressed_size, self.u1,
                           self.type, self.uncompressed_size << 1 | self.u2,
                           to_win_time(self.modtime)) + self.filename + '\0'

    def read(self):
        self.fp.seek(self.data_off)
        return self.fp.read(self.compressed_size)

    def decompress(self):
        return zlib.decompress(self.read())


class Rs5CompressedFileDecoder(Rs5CompressedFile):
    def __init__(self, f, data):
        self.fp = f
        (self.data_off, self.compressed_size, self.u1, self.type,
         self.uncompressed_size, modtime) = struct.unpack('<QIQ4sQQ', data[:40])
        self.u2 = self.uncompressed_size & 0x1
        if not self.u2:
            raise NotAFile()
        self.uncompressed_size >>= 1
        filename_len = data[40:].find('\0')
        self.filename = data[40:40 + filename_len]
        self.modtime = from_win_time(modtime)

    def extract(self, base_path, strip, overwrite):
        dest = os.path.join(base_path, self.filename.replace('\\', os.path.sep))
        if os.path.isfile(dest) and not overwrite:  # and size != 0
            print('Skipping %s - file exists.' % dest, file=sys.stderr)
            return
        (dir, file) = os.path.split(dest)
        mkdir_recursive(dir)
        f = open(dest, 'wb')
        try:
            data = self.decompress()
            if strip:
                contents = rs5file.Rs5FileDecoder(data)
                assert(contents.magic == self.type)
                assert(contents.filename == self.filename)
                # assert(len(contents.data) == filesize)  # Removed because it breaks --strip
                f.write(contents.data)
            else:
                f.write(data)
        except zlib.error as e:
            print('ERROR EXTRACTING %s: %s. Skipping decompression!' % (dest, str(e)), file=sys.stderr)
            f.write(self.read())
        f.close()
        os.utime(dest, (self.modtime, self.modtime))

    def extract_chunks(self, base_path, overwrite):
        dest = os.path.join(base_path, self.filename.replace('\\', os.path.sep))
        # "TEX/Rock_Set2_Moss " ends in a space, which Windows can't handle, so strip it:
        dest = dest.rstrip()
        data = self.decompress()
        try:
            chunks = rs5file.Rs5ChunkedFileDecoder(data)
        except:
            # print('NOTE: %s does not contain chunks, extracting whole file...' % dest, file=sys.stderr)
            return self.extract(base_path, False, overwrite)
        if os.path.exists(dest) and not os.path.isdir(dest):
            print('WARNING: %s exists, but is not a directory, skipping!' % dest, file=sys.stderr)
            return
        mkdir_recursive(dest)
        path = os.path.join(dest, '00-HEADER')
        if os.path.isfile(path) and not overwrite:  # and size != 0
            print('Skipping %s - file exists.' % dest, file=sys.stderr)
        else:
            f = open(path, 'wb')
            f.write(chunks.header())
            f.close()
        for (i, chunk) in enumerate(chunks.itervalues(), 1):
            extension = (self.type, chunk.name)
            path = os.path.join(dest, '%.2i-%s%s' % (i, chunk.name, chunk_extensions.get(extension, '')))
            if os.path.isfile(path) and not overwrite:  # and size != 0
                print('Skipping %s - file exists.' % dest, file=sys.stderr)
                continue
            f = open(path, 'wb')
            f.write(chunk.data)
            f.close()


class Rs5CompressedFileEncoder(Rs5CompressedFile):
    def __init__(self, fp, filename=None, buf=None, seek_cb=None):
        if filename is not None:
            self.modtime = os.stat(filename).st_mtime
            uncompressed = open(filename, 'rb').read()
        else:
            import time
            self.modtime = time.time()
            uncompressed = buf
        self.uncompressed_size = len(uncompressed)
        contents = rs5file.Rs5FileDecoder(uncompressed)
        (self.type, self.filename) = (contents.magic, contents.filename)
        compressed = zlib.compress(uncompressed)
        self.compressed_size = len(compressed)
        self.u1 = 0x30080000000
        self.u2 = 1
        self.fp = fp
        if seek_cb is not None:
            seek_cb(self.compressed_size)
        self.data_off = fp.tell()
        fp.write(compressed)


class Rs5CompressedFileRepacker(Rs5CompressedFile):
    def __init__(self, newfp, oldfile, seek_cb=None):
        self.compressed_size = oldfile.compressed_size
        self.u1 = oldfile.u1
        self.type = oldfile.type
        self.uncompressed_size = oldfile.uncompressed_size
        self.u2 = oldfile.u2
        self.modtime = oldfile.modtime
        self.filename = oldfile.filename
        self.fp = newfp
        if seek_cb is not None:
            seek_cb(self.compressed_size)
        self.data_off = newfp.tell()
        newfp.write(oldfile.read())


class Rs5CentralDirectory(collections.OrderedDict):
    @property
    def d_size(self):
        return self.ent_len * (1 + len(self))


class Rs5CentralDirectoryDecoder(Rs5CentralDirectory):
    def __init__(self, real_fp=None):
        self.fp.seek(self.d_off)
        data = self.fp.read(self.ent_len)
        (d_off1, self.d_orig_len, flags) = struct.unpack('<QII', data[:16])
        assert(self.d_off == d_off1)
        if real_fp is None:
            real_fp = self.fp
        collections.OrderedDict.__init__(self)
        for f_off in range(self.d_off + self.ent_len, self.d_off + self.d_orig_len, self.ent_len):
            try:
                entry = Rs5CompressedFileDecoder(real_fp, self.fp.read(self.ent_len))
                self[entry.filename] = entry
            except NotAFile:
                # XXX: Figure out what these are.
                # I think they are just deleted files
                continue


class Rs5CentralDirectoryEncoder(Rs5CentralDirectory):
    def write_directory(self):
        self.d_off = self.fp.tell()
        self.d_orig_len = self.d_size
        dir_hdr = struct.pack('<QII', self.d_off, self.d_size, self.flags)
        pad = '\0' * (self.ent_len - len(dir_hdr))  # XXX: Not sure if any data here is important
        self.fp.write(dir_hdr + pad)
        for file in self.itervalues():
            ent = file.gen_dir_ent()
            pad = '\0' * (self.ent_len - len(ent))  # XXX: Not sure if any data here is important
            self.fp.write(ent + pad)


class Rs5ArchiveDecoder(Rs5CentralDirectoryDecoder):
    def __init__(self, f):
        self.fp = f
        magic = f.read(8)
        if magic != 'CFILEHDR':
            raise ValueError('Invalid file header')
        (self.d_off, self.ent_len, self.u1) = struct.unpack('<QII', f.read(16))
        Rs5CentralDirectoryDecoder.__init__(self)


class Rs5ArchiveEncoder(Rs5CentralDirectoryEncoder):
    header_len = 24
    ent_len = 168
    u1 = 0
    flags = 0x80000000

    def __init__(self, filename):
        Rs5CentralDirectoryEncoder.__init__(self)
        self.fp = open(filename, 'wb')
        self.fp.seek(self.header_len)

    def add(self, filename, seek_cb=None, progress=progress):
        progress(msg=RS5Patcher.tr("Adding {0}...").format(filename))
        entry = Rs5CompressedFileEncoder(self.fp, filename, seek_cb=seek_cb)
        self[entry.filename] = entry

    def add_from_buf(self, buf, seek_cb=None, progress=progress):
        entry = Rs5CompressedFileEncoder(self.fp, buf=buf, seek_cb=seek_cb)
        progress(msg=RS5Patcher.tr("Adding {0}...").format(entry.filename))
        self[entry.filename] = entry

    def add_chunk_dir(self, path, seek_cb=None):
        print("Adding chunks from {0}...".format(path))
        files = sorted(os.listdir(path))
        files.remove('00-HEADER')
        header = open(os.path.join(path, '00-HEADER'), 'rb')
        header = rs5file.Rs5FileDecoder(header.read())
        chunks = collections.OrderedDict()
        for filename in files:
            chunk_path = os.path.join(path, filename)
            if not os.path.isfile(chunk_path) or '-' not in filename:
                print('Skipping {0}: not a valid chunk'.format(chunk_path))
                continue
            chunk_name = filename.split('-', 1)[1]
            print('  {0}'.format(filename))
            chunk = open(chunk_path, 'rb')
            chunk = rs5file.Rs5ChunkEncoder(chunk_name, chunk.read())
            chunks[chunk.name] = chunk
        chunks = rs5file.Rs5ChunkedFileEncoder(header.magic, header.filename, header.u2, chunks)
        entry = Rs5CompressedFileEncoder(self.fp, buf=chunks.encode(), seek_cb=seek_cb)
        self[entry.filename] = entry

    def write_header(self, progress=progress):
        progress(msg=RS5Patcher.tr("Writing RS5 header..."))
        self.fp.seek(0)
        self.fp.write(struct.pack('<8sQII', 'CFILEHDR', self.d_off, self.ent_len, self.u1))

    def save(self, progress=progress):
        progress(msg=RS5Patcher.tr("Writing central directory..."))
        self.write_directory()
        self.write_header(progress=progress)
        self.fp.flush()
        progress(msg=RS5Patcher.tr("RS5 Written"))
        self.do_timestamp_workaround(progress)

    def do_timestamp_workaround(self, progress=progress):
        # Miasmata v2.0.0.4 has a bizarre bug where the menu is blank
        # other than the 'created by...' if the main.rs5 timestamp is
        # certain values. I do not yet fully understand what values it
        # can and cannot accept, so force everything to a known working
        # time.
        import time
        fake_time = time.mktime((2015, 2, 16, 4, 5, 0, 0, 0, -1))
        # fake_time = time.time() - (30 * 60)
        progress(msg=RS5Patcher.tr("Setting timestamp on %s to %s to work around the v2.0.0.4 bug" %
                                   (self.fp.name, time.asctime(time.localtime(fake_time)))))
        os.utime(self.fp.name, (fake_time, fake_time))


class Rs5ArchiveUpdater(Rs5ArchiveEncoder, Rs5ArchiveDecoder):
    def __init__(self, fp):
        return Rs5ArchiveDecoder.__init__(self, fp)

    def seek_eof(self):
        self.fp.seek(0, 2)

    def seek_find_hole(self, size):
        '''Safe fallback version - always seeks to the end of file'''
        return self.seek_eof()

    def add(self, filename, progress=progress):
        return Rs5ArchiveEncoder.add(self, filename, seek_cb=self.seek_find_hole, progress=progress)

    def add_chunk_dir(self, path):
        return Rs5ArchiveEncoder.add_chunk_dir(self, path, seek_cb=self.seek_find_hole)

    def add_from_buf(self, buf, progress=progress):
        return Rs5ArchiveEncoder.add_from_buf(self, buf, seek_cb=self.seek_find_hole, progress=progress)

    def save(self, progress=progress):
        self.seek_find_hole(self.d_size)
        progress(msg=RS5Patcher.tr("Writing central directory..."))
        self.write_directory()
        # When updating an existing archive we use an extra flush
        # before writing the header to reduce the risk of writing a bad
        # header in case of an IO error, power failure, etc:
        self.fp.flush()
        self.write_header(progress=progress)
        self.fp.flush()
        progress(msg=RS5Patcher.tr("RS5 Written"))
        self.do_timestamp_workaround(progress)
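
# Hedged usage sketch (editor's addition): listing an existing archive and
# extracting one file. The archive path is a placeholder, and this module
# targets Python 2, so the snippet is left commented out.
#
# with open('main.rs5', 'rb') as f:
#     archive = Rs5ArchiveDecoder(f)
#     for name, entry in archive.iteritems():
#         print(name, entry.uncompressed_size)
#     archive.values()[0].extract('extracted', strip=False, overwrite=True)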
# vi:expandtab:sw=4:ts=4
| 33.44898 | 104 | 0.718644 | 1,690 | 11,473 | 4.754438 | 0.220118 | 0.021655 | 0.011201 | 0.0229 | 0.261854 | 0.214312 | 0.188923 | 0.159801 | 0.145115 | 0.1257 | 0 | 0.022793 | 0.15105 | 11,473 | 342 | 105 | 33.546784 | 0.802156 | 0.142857 | 0 | 0.209125 | 0 | 0 | 0.061447 | 0 | 0 | 0 | 0.002654 | 0 | 0.011407 | 1 | 0.117871 | false | 0.007605 | 0.04943 | 0.038023 | 0.292776 | 0.041825 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0075780408e5fb0346ee917819c012a1cdc3aae4 | 1,433 | py | Python | discounts/src/handlers/users.py | dalmarcogd/mobstore | 0b542b9267771a1f4522990d592028dc30ee246f | [
"Apache-2.0"
] | null | null | null | discounts/src/handlers/users.py | dalmarcogd/mobstore | 0b542b9267771a1f4522990d592028dc30ee246f | [
"Apache-2.0"
] | null | null | null | discounts/src/handlers/users.py | dalmarcogd/mobstore | 0b542b9267771a1f4522990d592028dc30ee246f | [
"Apache-2.0"
] | null | null | null | import logging
from typing import Dict

from src.database import queries
from src.exceptions.exceptions import UnrecognizedEventOperation


def _get_user(message: Dict) -> Dict:
    return {
        'id': message.get('user_id'),
        'first_name': message.get('first_name'),
        'last_name': message.get('last_name'),
        'birth_date': message.get('birth_date'),
    }


def _handle_create_user(user: Dict):
    queries.create_user(user)


def _handle_update_user(user: Dict):
    queries.update_user(user.get('id'), user)


def _handle_delete_user(user: Dict):
    queries.delete_user(user.get('id'))


def handle_users_events(message: Dict):
    event_type: str = message.get('event_type')
    operation: str = message.get('operation')
    if event_type == 'users':
        if operation == 'create':
            user = _get_user(message)
            _handle_create_user(user)
            logging.info(f"user id={user.get('id')} created")
        elif operation == 'update':
            user = _get_user(message)
            _handle_update_user(user)
            logging.info(f"user id={user.get('id')} updated")
        elif operation == 'delete':
            user = _get_user(message)
            _handle_delete_user(user)
            logging.info(f"user id={user.get('id')} deleted")
        else:
            raise UnrecognizedEventOperation(operation)
    else:
        raise UnrecognizedEventOperation(event_type)
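
# Hedged usage sketch (editor's addition): the message shape below is inferred
# from _get_user and handle_users_events rather than from a documented schema,
# and the call is commented out because it writes to the database.
#
# handle_users_events({
#     'event_type': 'users',
#     'operation': 'create',
#     'user_id': 1,
#     'first_name': 'Ada',
#     'last_name': 'Lovelace',
#     'birth_date': '1815-12-10',
# })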
| 28.66 | 64 | 0.641312 | 173 | 1,433 | 5.069364 | 0.231214 | 0.082098 | 0.051311 | 0.064994 | 0.201824 | 0.119726 | 0.119726 | 0.119726 | 0.119726 | 0.119726 | 0 | 0 | 0.235171 | 1,433 | 49 | 65 | 29.244898 | 0.800182 | 0 | 0 | 0.135135 | 0 | 0 | 0.145848 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.135135 | false | 0 | 0.108108 | 0.027027 | 0.27027 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
007a3635fa0f60908ea0d0ead5d33a15e47d6adb | 4,913 | py | Python | pymobiledevice3/services/os_trace.py | iOSForensics/pymobiledevice3 | 6b148f4e58cc51cb44c18935913a3e6cec5b60d5 | [
"MIT"
] | 1 | 2022-01-20T16:53:15.000Z | 2022-01-20T16:53:15.000Z | pymobiledevice3/services/os_trace.py | iOSForensics/pymobiledevice3 | 6b148f4e58cc51cb44c18935913a3e6cec5b60d5 | [
"MIT"
] | null | null | null | pymobiledevice3/services/os_trace.py | iOSForensics/pymobiledevice3 | 6b148f4e58cc51cb44c18935913a3e6cec5b60d5 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
import logging
import plistlib
import struct
import tempfile
import typing
from datetime import datetime
from tarfile import TarFile

from construct import (Struct, Bytes, Int32ul, Optional, Enum, Byte, Adapter,
                       Int16ul, this, Computed, RepeatUntil)

from pymobiledevice3.exceptions import PyMobileDevice3Exception
from pymobiledevice3.lockdown import LockdownClient
from pymobiledevice3.services.base_service import BaseService
from pymobiledevice3.utils import try_decode

CHUNK_SIZE = 4096
TIME_FORMAT = '%H:%M:%S'
SYSLOG_LINE_SPLITTER = '\n\x00'


class TimestampAdapter(Adapter):
    def _decode(self, obj, context, path):
        return datetime.fromtimestamp(obj.seconds + (obj.microseconds / 1000000))

    def _encode(self, obj, context, path):
        return list(map(int, obj.split(".")))


timestamp_t = Struct(
    'seconds' / Int32ul,
    Bytes(4),
    'microseconds' / Int32ul
)

syslog_t = Struct(
    Bytes(9),
    'pid' / Int32ul,
    Bytes(42),
    'timestamp' / TimestampAdapter(timestamp_t),
    Bytes(1),
    'level' / Enum(Byte, Notice=0, Info=0x01, Debug=0x02, Error=0x10, Fault=0x11),
    Bytes(38),
    'image_name_size' / Int16ul,
    'message_size' / Int16ul,
    Bytes(6),
    '_subsystem_size' / Int32ul,
    '_category_size' / Int32ul,
    Bytes(4),
    '_filename' / RepeatUntil(lambda x, lst, ctx: lst[-1] == 0, Byte),
    'filename' / Computed(lambda ctx: try_decode(bytearray(ctx._filename[:-1]))),
    '_image_name' / Bytes(this.image_name_size),
    'image_name' / Computed(lambda ctx: try_decode(ctx._image_name[:-1])),
    '_message' / Bytes(this.message_size),
    'message' / Computed(lambda ctx: try_decode(ctx._message[:-1])),
    'label' / Optional(Struct(
        '_subsystem' / Bytes(this._._subsystem_size),
        'subsystem' / Computed(lambda ctx: try_decode(ctx._subsystem[:-1])),
        '_category' / Bytes(this._._category_size),
        'category' / Computed(lambda ctx: try_decode(ctx._category[:-1])),
    )),
)


class OsTraceService(BaseService):
    """
    Provides an API for the following operations:
    * Show the process list (process name and pid).
    * Stream syslog lines in binary form, with optional filtering by pid.
    * Get the old stored syslog archive in PAX format (can be extracted using `pax -r < filename`).
      The archive contains the contents of the `/var/db/diagnostics` directory.
    """
    SERVICE_NAME = 'com.apple.os_trace_relay'

    def __init__(self, lockdown: LockdownClient):
        super().__init__(lockdown, self.SERVICE_NAME)
        self.logger = logging.getLogger(__name__)

    def get_pid_list(self):
        self.service.send_plist({'Request': 'PidList'})

        # ignore first received unknown byte
        self.service.recvall(1)

        response = self.service.recv_prefixed()
        return plistlib.loads(response)

    def create_archive(self, out: typing.IO, size_limit: int = None, age_limit: int = None, start_time: int = None):
        request = {'Request': 'CreateArchive'}

        if size_limit is not None:
            request.update({'SizeLimit': size_limit})

        if age_limit is not None:
            request.update({'AgeLimit': age_limit})

        if start_time is not None:
            request.update({'StartTime': start_time})

        self.service.send_plist(request)

        assert 1 == self.service.recvall(1)[0]
        assert plistlib.loads(self.service.recv_prefixed()).get('Status') == 'RequestSuccessful', 'Invalid status'

        while True:
            try:
                assert 3 == self.service.recvall(1)[0], 'invalid magic'
            except ConnectionAbortedError:
                break
            out.write(self.service.recv_prefixed(endianity='<'))

    def collect(self, out: str, size_limit: int = None, age_limit: int = None, start_time: int = None):
        """
        Collect the system logs into a .logarchive that can be viewed later with tools such as log or Console.
        """
        with tempfile.NamedTemporaryFile() as tar:
            self.create_archive(tar, size_limit=size_limit, age_limit=age_limit, start_time=start_time)
            TarFile(tar.name).extractall(out)

    def syslog(self, pid=-1):
        self.service.send_plist({'Request': 'StartActivity', 'MessageFilter': 65535, 'Pid': pid, 'StreamFlags': 60})

        length_length, = struct.unpack('<I', self.service.recvall(4))
        length = int(self.service.recvall(length_length)[::-1].hex(), 16)
        response = plistlib.loads(self.service.recvall(length))

        if response.get('Status') != 'RequestSuccessful':
            raise PyMobileDevice3Exception(f'got invalid response: {response}')

        while True:
            assert b'\x02' == self.service.recvall(1)
            length, = struct.unpack('<I', self.service.recvall(4))
            line = self.service.recvall(length)
            entry = syslog_t.parse(line)
            yield entry
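
# Hedged usage sketch (editor's addition): streaming the live syslog of a
# paired iOS device. It requires a connected device and usbmuxd, so it is
# left commented out.
#
# from pymobiledevice3.lockdown import LockdownClient
#
# lockdown = LockdownClient()
# for entry in OsTraceService(lockdown).syslog():
#     print(entry.timestamp, entry.pid, entry.message)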
| 35.861314 | 116 | 0.659475 | 591 | 4,913 | 5.326565 | 0.37225 | 0.055909 | 0.051461 | 0.031766 | 0.176938 | 0.108005 | 0.054003 | 0.054003 | 0.02986 | 0.02986 | 0 | 0.023371 | 0.216161 | 4,913 | 136 | 117 | 36.125 | 0.794079 | 0.097904 | 0 | 0.041237 | 0 | 0 | 0.10192 | 0.005484 | 0 | 0 | 0.003656 | 0 | 0.041237 | 1 | 0.072165 | false | 0 | 0.123711 | 0.020619 | 0.257732 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
007bd3b4346ee7f082df7ea4d0b57beef7747110 | 3,783 | py | Python | clases/unidad1/08_soluciones.py | magister-informatica-uach/INFO147 | 3898eb6f589a22beefb5972a0c911bb9dd098c6d | [
"Unlicense"
] | 13 | 2019-04-12T21:10:39.000Z | 2021-10-12T14:30:09.000Z | clases/unidad1/08_soluciones.py | magister-informatica-uach/INFO147 | 3898eb6f589a22beefb5972a0c911bb9dd098c6d | [
"Unlicense"
] | null | null | null | clases/unidad1/08_soluciones.py | magister-informatica-uach/INFO147 | 3898eb6f589a22beefb5972a0c911bb9dd098c6d | [
"Unlicense"
] | 12 | 2019-04-12T20:00:09.000Z | 2021-06-17T21:48:53.000Z | # -*- coding: utf-8 -*-
# These notebook solutions assume the usual session imports are available:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from IPython.display import display

# Exercise: read_csv
conv = dict.fromkeys(['open', 'close', 'high', 'low', 'next_weeks_open', 'next_weeks_close'],
                     lambda x: float(x.strip("$")))
df = pd.read_csv("dow_jones_index.data", sep=',', header=0, index_col='date',
                 converters=conv, parse_dates=[2])
display(df.head(),
        df.dtypes)

AA_df = df[df["stock"] == "AA"].loc["2011-03-01":"2011-06-01"][["open", "high", "low", "close"]]

# Optional: plotting the stock values
import matplotlib.dates as mdates

fig, ax = plt.subplots(figsize=(7, 4))
aa_dates_mpl = mdates.date2num(AA_df.index.values)
for date, stock_open, stock_close in zip(aa_dates_mpl, AA_df['open'].values, AA_df['close'].values):
    ax.arrow(x=date,
             y=stock_open,
             dx=0.,
             dy=stock_close - stock_open,
             head_width=2, head_length=0.1, fc='k', ec='k')
ax.fill_between(AA_df.index.values, AA_df['low'].values, AA_df['high'].values, alpha=0.5);
ax.set_ylabel("AA stock price")
fig.autofmt_xdate()

# Exercise: MultiIndex
df = pd.read_excel("Cantidad-de-Viviendas-por-Tipo.xlsx",
                   sheet_name=1,                # import the second sheet (housing)
                   usecols=list(range(1, 20)),  # import columns 1 to 20
                   header=1,                    # the header is in the second row
                   skiprows=[2],                # drop row 2 since it is invalid
                   index_col='ORDEN'            # use the comuna order as the index
                   ).dropna()                   # drop rows containing NaN
df.set_index(["NOMBRE REGIÓN", "NOMBRE PROVINCIA", "NOMBRE COMUNA"], inplace=True)
display(df.head())

idx = pd.IndexSlice
display(df.loc[("LOS RÍOS")],
        df.loc[idx[:, ["RANCO", "OSORNO"], :], :],
        df.loc[idx[:, :, ["VALDIVIA", "FRUTILLAR"]], :])

col_mask = df.columns[4:-1]
display(col_mask)
display(df.loc[idx[:, "VALDIVIA", :], col_mask].head(),
        df.loc[idx[:, "VALDIVIA", :], col_mask].sum())
"""
Expected sums (the column names are the Spanish headers from the source file):
Viviendas Particulares Ocupadas con Moradores Presentes                            94771.0
Viviendas Particulares Ocupadas con Moradores Ausentes                              5307.0
Viviendas Particulares Desocupadas (en Venta, para arriendo, Abandonada u otro)     6320.0
Viviendas Particulares Desocupadas\n(de Temporada)                                  6910.0
Viviendas Colectivas                                                                 386.0
"""

# Exercise: groupby
df = pd.read_excel("Cantidad-de-Viviendas-por-Tipo.xlsx",
                   sheet_name=1,                # import the second sheet (housing)
                   usecols=list(range(1, 20)),  # import columns 1 to 20
                   header=1,                    # the header is in the second row
                   skiprows=[2],                # drop row 2 since it is invalid
                   index_col='ORDEN'            # use the comuna order as the index
                   ).dropna()                   # drop rows containing NaN
df.set_index(["NOMBRE REGIÓN", "NOMBRE PROVINCIA", "NOMBRE COMUNA"], inplace=True)

mask = ["Viviendas Particulares Ocupadas con Moradores Presentes",
        "Viviendas Particulares Ocupadas con Moradores Ausentes"]
display(df.groupby(by="NOMBRE REGIÓN", sort=False)[mask].aggregate([np.mean, np.std]).head(5))


def responsables(x):
    # groups where the proportion of occupied (present) dwellings over the
    # total of occupied dwellings is greater than 98%
    return x[mask[0]] / (x[mask[0]] + x[mask[1]]) > 0.98


display(df.groupby("NOMBRE COMUNA", sort=False).filter(responsables)[mask])


def normalizar(x):
    # z-score the numeric columns, pass everything else through unchanged
    if x.dtype == np.float64:
        return (x - x.mean()) / x.std()
    else:
        return x


display(df.groupby(by="NOMBRE REGIÓN", sort=False)[mask].transform(normalizar).head(10))
| 41.571429 | 116 | 0.615385 | 509 | 3,783 | 4.489195 | 0.373281 | 0.027571 | 0.014004 | 0.056018 | 0.452079 | 0.444639 | 0.337856 | 0.337856 | 0.337856 | 0.300219 | 0 | 0.030028 | 0.242929 | 3,783 | 90 | 117 | 42.033333 | 0.767807 | 0.177901 | 0 | 0.285714 | 0 | 0 | 0.200076 | 0.026626 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035714 | false | 0 | 0.017857 | 0.017857 | 0.107143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
007cfba22ceeaf6f6ca692eeda5fb2bfcabe9273 | 1,114 | py | Python | flameshot.py | baltpeter/albert-extensions | 616adac2e23b878695d027bd0fb2253c0e7c8cd1 | [
"MIT"
] | 5 | 2020-10-24T11:45:55.000Z | 2022-03-04T20:53:03.000Z | flameshot.py | baltpeter/albert-extensions | 616adac2e23b878695d027bd0fb2253c0e7c8cd1 | [
"MIT"
] | null | null | null | flameshot.py | baltpeter/albert-extensions | 616adac2e23b878695d027bd0fb2253c0e7c8cd1 | [
"MIT"
] | 1 | 2020-08-05T23:54:46.000Z | 2020-08-05T23:54:46.000Z | # -*- coding: utf-8 -*-
"""Quickly open Flameshot to make a screenshot."""
from albert import *
import os
__title__ = "Flameshot shortcut"
__version__ = "0.4.1"
__triggers__ = "fs"
__authors__ = "Benjamin Altpeter"
iconPath = iconLookup("flameshot")
def handleQuery(query):
if not query.isTriggered:
return
results = []
results.append(
Item(
id=__title__,
icon=iconPath,
text="Open Flameshot in GUI mode",
subtext="This will run `flameshot gui`.",
completion=query.rawString,
actions=[
# We need to wait for the Albert prompt to disappear, otherwise it will be in the screenshot. Waiting for 0.2 seconds seems long enough but I am not sure. Maybe there is a cleaner way to do this?
# We cannot use the more appropriate `ProcAction` here because (afaik) the subprocess.run-style array cannot issue commands like the one we want.
FuncAction("Open Flameshot", lambda: os.system("(sleep 0.2 && flameshot gui)&"))
]
)
)
return results
| 30.108108 | 211 | 0.623878 | 137 | 1,114 | 4.927007 | 0.70073 | 0.057778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01005 | 0.285458 | 1,114 | 36 | 212 | 30.944444 | 0.83794 | 0.363555 | 0 | 0 | 0 | 0 | 0.21398 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041667 | false | 0 | 0.083333 | 0 | 0.208333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
007ebda16d6e0ea5fd1d6856b8b7eb7a57276487 | 959 | py | Python | github.py | PeacefulWindy/MyHome | dd2d810c2aeb887bb88fb9bd881f9e145b4c7670 | [
"MIT"
] | null | null | null | github.py | PeacefulWindy/MyHome | dd2d810c2aeb887bb88fb9bd881f9e145b4c7670 | [
"MIT"
] | null | null | null | github.py | PeacefulWindy/MyHome | dd2d810c2aeb887bb88fb9bd881f9e145b4c7670 | [
"MIT"
] | null | null | null | import os
repository = "origin"
branch = "master"

# path fragments that are always skipped
defaultIgnoreList = [
    "__pycache__",
    "migrations",
    ".git",
    ".log",
    ".vscode",
]
# project-specific path fragments to skip
ignoreList = [
    "uwsgi",
    "config",
]

os.system("git pull " + repository + " " + branch)

# walk the working tree, expanding directories as they are encountered
fetchList = os.listdir()
filesList = []
i = 0
while i < len(fetchList):
    path = fetchList[i]
    if os.path.isdir(path):
        l2 = os.listdir(path)
        for it in l2:
            fetchList.append(os.path.join(path, it))
    else:
        # keep the file only if no ignore pattern appears in its path
        isFind = False
        for it in defaultIgnoreList:
            if path.find(it) != -1:
                isFind = True
                break
        if not isFind:
            for it in ignoreList:
                if path.find(it) != -1:
                    isFind = True
                    break
        if not isFind:
            filesList.append(path)
    i += 1

for it in filesList:
    os.system("git add " + it)
os.system("git commit")
os.system("git push " + repository + " " + branch)
| 18.09434 | 51 | 0.527633 | 111 | 959 | 4.522523 | 0.396396 | 0.063745 | 0.087649 | 0.047809 | 0.155378 | 0.155378 | 0.155378 | 0.155378 | 0.155378 | 0.155378 | 0 | 0.009464 | 0.338895 | 959 | 52 | 52 | 18.442308 | 0.782334 | 0 | 0 | 0.190476 | 0 | 0 | 0.101147 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.02381 | 0 | 0.02381 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
007f650565eee305a6b7e5a8d62cbc3f16371f91 | 766 | py | Python | examples/zoo_models.py | Pandinosaurus/PyTouch | 3a52bc004bebffe8da75294be53f193062d6577f | [
"MIT"
] | null | null | null | examples/zoo_models.py | Pandinosaurus/PyTouch | 3a52bc004bebffe8da75294be53f193062d6577f | [
"MIT"
] | null | null | null | examples/zoo_models.py | Pandinosaurus/PyTouch | 3a52bc004bebffe8da75294be53f193062d6577f | [
"MIT"
] | null | null | null | # Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.
from pytouch import PyTouchZoo, sensors
def main():
pytouch_zoo = PyTouchZoo()
# list available pytouch zoo models
available_models = pytouch_zoo.list_models()
print(available_models)
# load DIGIT sensor touch detect model from pytouch zoo
touch_detect_model = pytouch_zoo.load_model_from_zoo( # noqa: F841
"touchdetect_resnet18", sensors.DigitSensor
)
# load custom PyTorch-Lightning saved model
custom_model = pytouch_zoo.load_model("/path/to/pl/model") # noqa: F841
# create custom onnx session for inference
custom_onnx = pytouch_zoo.load_onnx_session("/path/to/onnx/model") # noqa: F841
if __name__ == "__main__":
main()
| 28.37037 | 84 | 0.720627 | 99 | 766 | 5.30303 | 0.474747 | 0.133333 | 0.08 | 0.072381 | 0.091429 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017799 | 0.193211 | 766 | 26 | 85 | 29.461538 | 0.831715 | 0.356397 | 0 | 0 | 0 | 0 | 0.132231 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.083333 | 0 | 0.166667 | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00811e646789a59ada6dde6962fef84c0e2caf2f | 1,663 | py | Python | docs/smurff/datasets.py | msteijaert/smurff | e6066d51e1640e9aad0118628ba72c9d662919fb | [
"MIT"
] | null | null | null | docs/smurff/datasets.py | msteijaert/smurff | e6066d51e1640e9aad0118628ba72c9d662919fb | [
"MIT"
] | null | null | null | docs/smurff/datasets.py | msteijaert/smurff | e6066d51e1640e9aad0118628ba72c9d662919fb | [
"MIT"
] | null | null | null | from .prepare import make_train_test
import os
import tempfile
import scipy.io as sio
from hashlib import sha256
try:
import urllib.request as urllib_request # for Python 3
except ImportError:
import urllib2 as urllib_request # for Python 2
urls = {
"chembl-IC50-346targets.mm" :
(
"http://homes.esat.kuleuven.be/~jsimm/chembl-IC50-346targets.mm",
"10c3e1f989a7a415a585a175ed59eeaa33eff66272d47580374f26342cddaa88",
),
"chembl-IC50-compound-feat.mm" :
(
"http://homes.esat.kuleuven.be/~jsimm/chembl-IC50-compound-feat.mm",
"f9fe0d296272ef26872409be6991200dbf4884b0cf6c96af8892abfd2b55e3bc",
),
}
def load_one(filename):
(url, expected_sha) = urls[filename]
with tempfile.TemporaryDirectory() as tmpdirname:
output = os.path.join(tmpdirname, filename)
urllib_request.urlretrieve(url, output)
actual_sha = sha256(open(output, "rb").read()).hexdigest()
        assert actual_sha == expected_sha, 'sha256 mismatch for ' + filename
matrix = sio.mmread(output)
return matrix
def load_chembl():
"""Downloads a small subset of the ChEMBL dataset.
Returns
-------
ic50_train: sparse matrix
sparse train matrix
ic50_test: sparse matrix
sparse test matrix
feat: sparse matrix
sparse row features
"""
# load bioactivity and features
ic50 = load_one("chembl-IC50-346targets.mm")
feat = load_one("chembl-IC50-compound-feat.mm")
## creating train and test sets
ic50_train, ic50_test = make_train_test(ic50, 0.2)
return (ic50_train, ic50_test, feat)
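
if __name__ == "__main__":
    # Illustrative usage sketch (not in the original module): downloads the two
    # files listed in `urls` and prints the resulting matrix shapes.
    ic50_train, ic50_test, feat = load_chembl()
    print(ic50_train.shape, ic50_test.shape, feat.shape)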
| 26.396825 | 80 | 0.658449 | 188 | 1,663 | 5.712766 | 0.414894 | 0.055866 | 0.055866 | 0.061453 | 0.175047 | 0.074488 | 0.074488 | 0.074488 | 0.074488 | 0 | 0 | 0.102073 | 0.245941 | 1,663 | 62 | 81 | 26.822581 | 0.754386 | 0.176789 | 0 | 0.057143 | 0 | 0.028571 | 0.274792 | 0.177139 | 0 | 0 | 0 | 0 | 0.028571 | 1 | 0.057143 | false | 0 | 0.228571 | 0 | 0.342857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00817e2c28c51ab9c1788435a4d1e1e979d74b71 | 1,115 | py | Python | anipick/season.py | pengode-handal/anipick | f711620d9c12581f13f951204151b60eac4e1736 | [
"MIT"
] | 1 | 2022-03-02T07:59:15.000Z | 2022-03-02T07:59:15.000Z | build/lib/anipick/season.py | pengode-handal/anipick | f711620d9c12581f13f951204151b60eac4e1736 | [
"MIT"
] | null | null | null | build/lib/anipick/season.py | pengode-handal/anipick | f711620d9c12581f13f951204151b60eac4e1736 | [
"MIT"
] | null | null | null | import requests
from bs4 import BeautifulSoup
from datetime import date
class Seasonal:
    year = str(date.today().year)
doy = date.today().timetuple().tm_yday
# "day of year" ranges for the northern hemisphere
spring = range(80, 172)
summer = range(172, 264)
fall = range(264, 355)
# winter = everything else
if doy in spring:
season = 'spring'
elif doy in summer:
season = 'summer'
elif doy in fall:
season = 'fall'
else:
season = 'winter'
def __init__(self, limit='3', years=year, seasons=season):
        response = requests.get(f'https://myanimelist.net/anime/season/{years}/{seasons}')
        soup = BeautifulSoup(response.content, 'html.parser')
        limit = int(limit)
        if limit > 9:
            raise ValueError('too many requests, limit max is 9')
nama = soup.find_all('a', {"class": "link-title"})[:limit]
name = []
for namaa in nama:
name.append(namaa.text)
name = ', '.join(name)
self.name = name or None
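
if __name__ == '__main__':
    # Illustrative usage sketch (not in the original module): fetch the top 3
    # titles for the current year and season.
    print(Seasonal(limit='3').name)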
| 26.547619 | 85 | 0.56861 | 138 | 1,115 | 4.550725 | 0.572464 | 0.023885 | 0.028662 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028351 | 0.304036 | 1,115 | 41 | 86 | 27.195122 | 0.780928 | 0.065471 | 0 | 0 | 0 | 0 | 0.134745 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033333 | false | 0 | 0.1 | 0 | 0.366667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
008325e0c3e4eb71571c7a9cc6c994f7e82989e8 | 919 | py | Python | papamath/calc/__main__.py | PapaTRex/papamath | 2f3857d93d0d9a01a1026aee891a545baa8cc202 | [
"MIT"
] | 2 | 2019-11-25T08:48:32.000Z | 2019-11-27T14:42:12.000Z | papamath/calc/__main__.py | PapaTRex/papamath | 2f3857d93d0d9a01a1026aee891a545baa8cc202 | [
"MIT"
] | null | null | null | papamath/calc/__main__.py | PapaTRex/papamath | 2f3857d93d0d9a01a1026aee891a545baa8cc202 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# --*-- encoding=utf-8 --*--
import datetime as dt
import sys
from . import addition
from . import subtraction
from ..eval import quiz
def main():
"""
Run this program with limit and times
    Call the program with the maximum value and the number of questions as arguments.
e.g. python3 -m papamath.calc
python3 -m papamath.calc 20
python3 -m papamath.calc 20 50
"""
limit = int(sys.argv[1]) if len(sys.argv) > 1 else 100
times = int(sys.argv[2]) if len(sys.argv) > 2 else 50
summary = quiz.repeat(
[addition.add_ints(limit), subtraction.sub_ints(limit)], times=times)
num = len(summary['question'].unique())
if num > 0:
total_time = summary['spent'].sum()
average_time = total_time / num
        print(f'You solved {num} math problems in {str(dt.timedelta(seconds=total_time))}, '
              f'averaging {str(dt.timedelta(seconds=average_time))} per problem. Keep it up!')
if __name__ == '__main__':
main()
| 25.527778 | 77 | 0.627856 | 130 | 919 | 4.323077 | 0.515385 | 0.049822 | 0.085409 | 0.106762 | 0.078292 | 0 | 0 | 0 | 0 | 0 | 0 | 0.029661 | 0.229597 | 919 | 35 | 78 | 26.257143 | 0.764124 | 0.22198 | 0 | 0 | 0 | 0 | 0.18915 | 0.158358 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.277778 | 0 | 0.333333 | 0.055556 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
008665988949a52336cbdcd31e365f4d40afb967 | 1,020 | py | Python | sean/seanWebapp/rasaAPI.py | OKaemii/RASA-AI-Chatbot | 3fe08461c4b22166bbdc479a304483de360f0358 | [
"Apache-2.0"
] | null | null | null | sean/seanWebapp/rasaAPI.py | OKaemii/RASA-AI-Chatbot | 3fe08461c4b22166bbdc479a304483de360f0358 | [
"Apache-2.0"
] | null | null | null | sean/seanWebapp/rasaAPI.py | OKaemii/RASA-AI-Chatbot | 3fe08461c4b22166bbdc479a304483de360f0358 | [
"Apache-2.0"
] | null | null | null | import json
import requests
RASA = "http://localhost:5005"
RASA_API = RASA + "/webhooks/rest/webhook"
def helloworld():
response = requests.request("GET", RASA)
return response.text
def version():
headers = {
'Content-Type':"application/json",
}
response = requests.request("GET", RASA +"/version", headers=headers)
return response.text
def message(message, sender="TheLegend27", debug=0):
data = {
"sender":sender,
"message":str(message)
}
headers = {
'Content-Type':"application/json",
'X-Requested-With': 'XMLHttpRequest',
'Connection': 'keep-alive',
}
    try:
        response = requests.post(RASA_API, data=json.dumps(data), headers=headers)
    except requests.exceptions.RequestException:
        # no response from rasa server, is it running?
        return {"text": "ERROR 1"}
if (debug == 1):
print(response.status_code)
print(response.content)
print(json.loads(response.text))
    try:
        return json.loads(response.text)
    except ValueError:
        # rasa server responded, but the body is not valid JSON
return {"text":"ERROR 0"} | 21.25 | 76 | 0.688235 | 129 | 1,020 | 5.418605 | 0.465116 | 0.06867 | 0.065808 | 0.074392 | 0.180258 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011641 | 0.157843 | 1,020 | 48 | 77 | 21.25 | 0.802095 | 0.105882 | 0 | 0.285714 | 0 | 0 | 0.22967 | 0.024176 | 0 | 0 | 0 | 0 | 0 | 1 | 0.085714 | false | 0 | 0.057143 | 0 | 0.285714 | 0.085714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
008796684a761f1722ef09d1e5677dfbfa57b5c7 | 1,246 | py | Python | src/Day26_PathWithMinimumEffort.py | ruarfff/leetcode-jan-2021 | 9436c0d6b82e83c0b21a498c998fa9e41d443d3c | [
"MIT"
] | null | null | null | src/Day26_PathWithMinimumEffort.py | ruarfff/leetcode-jan-2021 | 9436c0d6b82e83c0b21a498c998fa9e41d443d3c | [
"MIT"
] | null | null | null | src/Day26_PathWithMinimumEffort.py | ruarfff/leetcode-jan-2021 | 9436c0d6b82e83c0b21a498c998fa9e41d443d3c | [
"MIT"
] | null | null | null | from typing import List
import math
import heapq
class Solution:
def minimumEffortPath(self, heights: List[List[int]]) -> int:
row = len(heights)
col = len(heights[0])
diff = [[math.inf] * col for _ in range(row)]
diff[0][0] = 0
visited = [[False] * col for _ in range(row)]
queue = [(0, 0, 0)]
while queue:
difference, x, y = heapq.heappop(queue)
visited[x][y] = True
for dx, dy in [[0, 1], [1, 0], [0, -1], [-1, 0]]:
adjacent_x = x + dx
adjacent_y = y + dy
if (
0 <= adjacent_x < row
and 0 <= adjacent_y < col
and not visited[adjacent_x][adjacent_y]
):
current_difference = abs(
heights[adjacent_x][adjacent_y] - heights[x][y]
)
max_difference = max(current_difference, diff[x][y])
if diff[adjacent_x][adjacent_y] > max_difference:
diff[adjacent_x][adjacent_y] = max_difference
heapq.heappush(queue, (max_difference, adjacent_x, adjacent_y))
return diff[-1][-1]
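
# Illustrative check (LeetCode 1631, example 1):
# Solution().minimumEffortPath([[1, 2, 2], [3, 8, 2], [5, 3, 5]]) returns 2,
# since the route 1 -> 3 -> 5 -> 3 -> 5 has a maximum absolute difference of 2.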
| 37.757576 | 87 | 0.480738 | 144 | 1,246 | 4.006944 | 0.298611 | 0.109185 | 0.147314 | 0.155979 | 0.176776 | 0.121317 | 0.121317 | 0 | 0 | 0 | 0 | 0.025572 | 0.403692 | 1,246 | 32 | 88 | 38.9375 | 0.751009 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033333 | false | 0 | 0.1 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0088227bd88febb18c1ee9e4681c9ef4348e66de | 564 | py | Python | scripts/data_filepaths.py | dhlab-epfl/student-project-repo-structure | ca5ed4cb438b571ce6f7f59285a060e594d41f25 | [
"Apache-2.0"
] | 1 | 2022-03-31T10:18:42.000Z | 2022-03-31T10:18:42.000Z | scripts/data_filepaths.py | dhlab-epfl/student-project-repo-structure | ca5ed4cb438b571ce6f7f59285a060e594d41f25 | [
"Apache-2.0"
] | null | null | null | scripts/data_filepaths.py | dhlab-epfl/student-project-repo-structure | ca5ed4cb438b571ce6f7f59285a060e594d41f25 | [
"Apache-2.0"
] | 2 | 2022-03-17T10:48:30.000Z | 2022-03-29T14:06:51.000Z | from os.path import join as j
relative_path_to_root = j("..","..")
# use os.path.join() liberally (here aliased as "j"); it ensures cross-OS compatible paths
data_folder = j(relative_path_to_root, "data")
figures_folder = j(relative_path_to_root, "report", "figures")
# STEP 0 download data
# ===================================
s0_folder = j(data_folder, "s0_downloaded_data")
s0_balzac_books = j(s0_folder, "balzac_books.json")
s0_figure_sinusoid = j(figures_folder, "s0_sinusoid.png")
# STEP 1 train model
# ===================================
# ... | 26.857143 | 94 | 0.638298 | 80 | 564 | 4.2125 | 0.475 | 0.080119 | 0.115727 | 0.133531 | 0.204748 | 0.148368 | 0 | 0 | 0 | 0 | 0 | 0.016162 | 0.12234 | 564 | 21 | 95 | 26.857143 | 0.664646 | 0.368794 | 0 | 0 | 0 | 0 | 0.202857 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
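# For illustration: j("data", "s0_downloaded_data") yields "data/s0_downloaded_data"
# on POSIX systems and "data\\s0_downloaded_data" on Windows.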
0088affa65e017cd642613de8dad471ca5844b13 | 578 | py | Python | sample_code/ThompsonSampling-master/montecarlo_thompsonbernoulli.py | sou350121/A-Tutorial-on-Thompson-Sampling | c0e7226c4f4258128d6d68682bfd32b44bbb6763 | [
"MIT"
] | 2 | 2021-04-15T12:15:07.000Z | 2021-04-16T01:25:14.000Z | sample_code/ThompsonSampling-master/montecarlo_thompsonbernoulli.py | sou350121/A-Tutorial-on-Thompson-Sampling | c0e7226c4f4258128d6d68682bfd32b44bbb6763 | [
"MIT"
] | null | null | null | sample_code/ThompsonSampling-master/montecarlo_thompsonbernoulli.py | sou350121/A-Tutorial-on-Thompson-Sampling | c0e7226c4f4258128d6d68682bfd32b44bbb6763 | [
"MIT"
] | null | null | null | import random
from bernoulliarm import *
from thompsonbernoulli import *
from bandittestframe import *
random.seed(1)
means = [0.1, 0.1, 0.1, 0.1, 0.9]
n_arms = len(means)
random.shuffle(means)
arms = [BernoulliArm(mu) for mu in means]
print("Best arm is " + str(means.index(max(means))))
f = open("thompson_bernoulli_results.tsv", "w+")
algo = ThompsonBernoulli([], [], [], [])
algo.initialize(n_arms)
results = test_algorithm(algo, arms, 5000, 250)
for i in range(len(results[0])):
f.write("\t".join([str(results[j][i]) for j in range(len(results))])+ "\n")
f.close()
| 26.272727 | 79 | 0.681661 | 92 | 578 | 4.228261 | 0.48913 | 0.020566 | 0.030848 | 0.030848 | 0.023136 | 0.023136 | 0.023136 | 0 | 0 | 0 | 0 | 0.037773 | 0.129758 | 578 | 21 | 80 | 27.52381 | 0.735586 | 0 | 0 | 0 | 0 | 0 | 0.083045 | 0.051903 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.235294 | 0 | 0.235294 | 0.058824 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00896d625149256b262a7e4769e38b6b117df618 | 653 | py | Python | pymath/topkfrequentelements/__init__.py | JASTYN/pythonmaster | 46638ab09d28b65ce5431cd0759fe6df272fb85d | [
"Apache-2.0",
"MIT"
] | 3 | 2017-05-02T10:28:13.000Z | 2019-02-06T09:10:11.000Z | pymath/topkfrequentelements/__init__.py | JASTYN/pythonmaster | 46638ab09d28b65ce5431cd0759fe6df272fb85d | [
"Apache-2.0",
"MIT"
] | 2 | 2017-06-21T20:39:14.000Z | 2020-02-25T10:28:57.000Z | pymath/topkfrequentelements/__init__.py | JASTYN/pythonmaster | 46638ab09d28b65ce5431cd0759fe6df272fb85d | [
"Apache-2.0",
"MIT"
] | 2 | 2016-07-29T04:35:22.000Z | 2017-01-18T17:05:36.000Z | from collections import Counter
from typing import List
from datastructures.trees.heaps.min_heap import MinHeap
def top_k_frequent(nums: List[int], k: int) -> List[int]:
counter = Counter(nums)
return [x for x, y in counter.most_common(k)]
def top_k_frequent_with_min_heap(nums: List[int], k: int) -> List[int]:
"""
Uses a Min Heap to get the top k frequent elements
"""
counter = Counter(nums)
arr = []
for num, count in counter.items():
arr.append([-count, num])
min_heap = MinHeap(arr)
ans = []
for _ in range(k):
a = min_heap.remove_min()
ans.append(a[1])
return ans
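
# Illustrative check (assuming MinHeap.remove_min pops the smallest entry):
# top_k_frequent([1, 1, 1, 2, 2, 3], 2) == [1, 2]
# top_k_frequent_with_min_heap([1, 1, 1, 2, 2, 3], 2) == [1, 2]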
| 21.064516 | 71 | 0.635528 | 99 | 653 | 4.060606 | 0.414141 | 0.087065 | 0.089552 | 0.074627 | 0.109453 | 0.109453 | 0.109453 | 0 | 0 | 0 | 0 | 0.002028 | 0.245023 | 653 | 30 | 72 | 21.766667 | 0.813387 | 0.07657 | 0 | 0.117647 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0 | 0.176471 | 0 | 0.411765 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
008a9bbd3e50567c47ccabd458436be492fd0ce2 | 2,530 | py | Python | assignments/12_csv_grad/solution.py | prasiddhigyawali/python_clone | 7c2d609301dc49e77236baa0403ca9a3042bd047 | [
"MIT"
] | 1 | 2021-05-19T19:07:56.000Z | 2021-05-19T19:07:56.000Z | csv_filter/solution.py | LongNguyen1984/biofx_python | b8d45dc38d968674c6b641051b73f8ed1503b1e4 | [
"MIT"
] | 1 | 2020-02-11T20:15:59.000Z | 2020-02-11T20:15:59.000Z | csv_filter/solution.py | LongNguyen1984/biofx_python | b8d45dc38d968674c6b641051b73f8ed1503b1e4 | [
"MIT"
] | 24 | 2020-01-15T17:34:40.000Z | 2021-08-23T05:57:24.000Z | #!/usr/bin/env python3
""" Filter delimited records """
import argparse
import csv
import re
import sys
# --------------------------------------------------
def get_args():
"""Get command-line arguments"""
parser = argparse.ArgumentParser(
description='Filter delimited records',
formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument('-f',
'--file',
metavar='FILE',
type=argparse.FileType('rt'),
help='Input file',
required=True)
parser.add_argument('-v',
'--val',
help='Value for filter',
metavar='val',
type=str,
required=True)
parser.add_argument('-c',
'--col',
help='Column for filter',
metavar='col',
type=str,
default='')
parser.add_argument('-o',
'--outfile',
help='Output filename',
type=argparse.FileType('wt'),
default='out.csv')
parser.add_argument('-d',
'--delimiter',
help='Input delimiter',
metavar='delim',
type=str,
default=',')
return parser.parse_args()
# --------------------------------------------------
def main():
"""Make a jazz noise here"""
args = get_args()
search_for = args.val
search_col = args.col
reader = csv.DictReader(args.file, delimiter=args.delimiter)
if search_col and search_col not in reader.fieldnames:
print(f'--col "{search_col}" not a valid column!', file=sys.stderr)
print(f'Choose from {", ".join(reader.fieldnames)}')
sys.exit(1)
writer = csv.DictWriter(args.outfile, fieldnames=reader.fieldnames)
writer.writeheader()
num_written = 0
for rec in reader:
text = rec.get(search_col) if search_col else ' '.join(rec.values())
if re.search(search_for, text, re.IGNORECASE):
num_written += 1
writer.writerow(rec)
# args.outfile.write(text + '\n')
print(f'Done, wrote {num_written:,} to "{args.outfile.name}".')
# --------------------------------------------------
if __name__ == '__main__':
main()
| 29.08046 | 76 | 0.462055 | 230 | 2,530 | 4.96087 | 0.417391 | 0.047327 | 0.074496 | 0.03681 | 0.050833 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002484 | 0.363636 | 2,530 | 86 | 77 | 29.418605 | 0.706211 | 0.111462 | 0 | 0.086207 | 0 | 0 | 0.140997 | 0.021554 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034483 | false | 0 | 0.068966 | 0 | 0.12069 | 0.051724 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
008b2b646bc08ddb8a8fd147bc3c69e71605e887 | 7,043 | py | Python | labeled_nuclei_project/models.py | amarotaylor/MSI_prediction | b85acc0758b0c28049553142e1f278dd2cc7de4f | [
"MIT"
] | 1 | 2019-04-10T13:42:22.000Z | 2019-04-10T13:42:22.000Z | labeled_nuclei_project/models.py | amarotaylor/MSI_prediction | b85acc0758b0c28049553142e1f278dd2cc7de4f | [
"MIT"
] | null | null | null | labeled_nuclei_project/models.py | amarotaylor/MSI_prediction | b85acc0758b0c28049553142e1f278dd2cc7de4f | [
"MIT"
] | null | null | null | import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
class ConvNet(nn.Module):
def __init__(self, n_conv_layers, n_fc_layers, kernel_size, n_conv_filters, hidden_size, dropout=0.5):
super(ConvNet, self).__init__()
self.n_conv_layers = n_conv_layers
self.n_fc_layers = n_fc_layers
self.kernel_size = kernel_size
self.n_conv_filters = n_conv_filters
self.hidden_size = hidden_size
self.conv_layers = []
self.fc_layers = []
self.m = nn.MaxPool2d(2, stride=2)
self.n = nn.Dropout(dropout)
self.relu = nn.ReLU()
in_channels = 3
for layer in range(self.n_conv_layers):
self.conv_layers.append(nn.Conv2d(in_channels, self.n_conv_filters[layer], self.kernel_size[layer]))
self.conv_layers.append(self.relu)
self.conv_layers.append(self.m)
in_channels = self.n_conv_filters[layer]
        in_channels = in_channels * 25  # assumes the conv stack leaves a 5x5 spatial map
for layer in range(self.n_fc_layers):
self.fc_layers.append(nn.Linear(in_channels, self.hidden_size[layer]))
self.fc_layers.append(self.relu)
self.fc_layers.append(self.n)
in_channels = self.hidden_size[layer]
self.conv = nn.Sequential(*self.conv_layers)
self.fc = nn.Sequential(*self.fc_layers)
self.classification_layer = nn.Linear(in_channels, 2)
def forward(self, x):
embed = self.conv(x)
embed = embed.view(x.shape[0],-1)
y = self.fc(embed)
return y
class Attention(nn.Module):
def __init__(self, input_size, hidden_size, output_size, gated=True):
super(Attention, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.output_size = output_size
self.gated = gated
self.V = nn.Linear(input_size, hidden_size)
self.U = nn.Linear(input_size, hidden_size)
self.w = nn.Linear(hidden_size, output_size)
self.sigm = nn.Sigmoid()
self.tanh = nn.Tanh()
self.sm = nn.Softmax(dim=0)
def forward(self, h):
if self.gated == True:
a = self.sm(self.w(self.tanh(self.V(h)) * self.sigm(self.U(h))))
else:
a = self.sm(self.w(self.tanh(self.V(h))))
return a
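
# Shape sketch (illustrative): for a bag of N instance embeddings h with shape
# [N, input_size], forward() returns attention weights a with shape
# [N, output_size], normalized across the N instances (Softmax over dim=0).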
class pool(nn.Module):
def __init__(self,attn = None):
super(pool,self).__init__()
self.attn = attn
def forward(self,x):
if self.attn == None:
return torch.mean(x,0)
else:
a = self.attn(x)
v = torch.transpose(a, dim0=0, dim1=1).matmul(x)
return v.squeeze(0)
class Generator(nn.Module):
def __init__(self, n_conv_layers, kernel_size, n_conv_filters, hidden_size, n_rnn_layers, dropout=0.5):
super(Generator, self).__init__()
self.n_conv_layers = n_conv_layers
self.kernel_size = kernel_size
self.n_conv_filters = n_conv_filters
self.hidden_size = hidden_size
self.n_rnn_layers = n_rnn_layers
self.conv_layers = []
self.m = nn.MaxPool2d(2, stride=2)
self.relu = nn.ReLU()
in_channels = 3
for layer in range(self.n_conv_layers):
self.conv_layers.append(nn.Conv2d(in_channels, self.n_conv_filters[layer], self.kernel_size[layer]))
self.conv_layers.append(self.relu)
self.conv_layers.append(self.m)
in_channels = self.n_conv_filters[layer]
self.conv = nn.Sequential(*self.conv_layers)
in_channels = in_channels * 25
self.lstm = nn.LSTM(in_channels, self.hidden_size, self.n_rnn_layers, batch_first=True,
dropout=dropout, bidirectional=True)
in_channels = hidden_size * 2
self.classification_layer = nn.Linear(in_channels, 2)
def forward(self, x):
embed = self.conv(x)
embed = embed.view(1,x.shape[0],-1)
self.lstm.flatten_parameters()
output, hidden = self.lstm(embed)
y = self.classification_layer(output)
return y
def zero_grad(self):
"""Sets gradients of all model parameters to zero."""
for p in self.parameters():
if p.grad is not None:
p.grad.data.zero_()
def update_tile_shape(H_in, W_in, kernel_size, dilation = 1., padding = 0., stride = 1.):
H_out = (H_in + 2. * padding - dilation * (kernel_size-1) -1)/stride + 1
W_out = (W_in + 2. * padding - dilation * (kernel_size-1) -1)/stride + 1
return int(np.floor(H_out)),int(np.floor(W_out))
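
# Worked example (matches the H_in = W_in = 27 default below): a 3x3 convolution
# gives update_tile_shape(27, 27, 3) == (25, 25), and the following 2x2 max-pool
# gives update_tile_shape(25, 25, 2, stride=2) == (12, 12).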
class Neighborhood_Generator(nn.Module):
def __init__(self, n_conv_layers, n_fc_layers, kernel_size, n_conv_filters, hidden_size, dropout=0.5,
dilation = 1., padding = 0, H_in = 27, W_in = 27):
super(Neighborhood_Generator, self).__init__()
# set class attributes
self.n_conv_layers = n_conv_layers
self.kernel_size = kernel_size
self.n_conv_filters = n_conv_filters
self.hidden_size = hidden_size
self.n_fc_layers = n_fc_layers
self.conv_layers = []
self.fc_layers = []
self.n = nn.Dropout(dropout)
self.m = nn.MaxPool2d(2, stride=2)
self.relu = nn.ReLU()
self.H_in, self.W_in = H_in, W_in
# perform the encoding
in_channels = 3
for layer in range(self.n_conv_layers):
self.conv_layers.append(nn.Conv2d(in_channels, self.n_conv_filters[layer], self.kernel_size[layer]))
self.conv_layers.append(self.relu)
self.conv_layers.append(self.m)
# convolution
self.H_in, self.W_in = update_tile_shape(self.H_in, self.W_in, kernel_size[layer])
# max pooling
self.H_in, self.W_in = update_tile_shape(self.H_in, self.W_in, 2, stride = 2)
in_channels = self.n_conv_filters[layer]
# compute concatenation size
in_channels = in_channels * self.H_in * self.W_in * 5
# infer the z
for layer in range(self.n_fc_layers):
self.fc_layers.append(nn.Linear(in_channels, self.hidden_size[layer]))
self.fc_layers.append(self.relu)
self.fc_layers.append(self.n)
in_channels = self.hidden_size[layer]
self.conv = nn.Sequential(*self.conv_layers)
self.fc = nn.Sequential(*self.fc_layers)
self.classification_layer = nn.Linear(in_channels, 2)
def forward(self, x, neighbors):
embed = self.conv(x)
embed = embed.view(x.shape[0],-1)
e_neighbors = [torch.index_select(embed,0,n) for n in neighbors]
embed_n = torch.stack([torch.cat([e.unsqueeze(0),n],0).view(-1) for e,n in zip(embed,e_neighbors)])
output = self.fc(embed_n)
logits = self.classification_layer(output)
return logits | 40.017045 | 112 | 0.608689 | 997 | 7,043 | 4.04012 | 0.128385 | 0.034757 | 0.040218 | 0.033515 | 0.678252 | 0.633565 | 0.603029 | 0.569513 | 0.544935 | 0.523088 | 0 | 0.013223 | 0.280562 | 7,043 | 176 | 113 | 40.017045 | 0.781725 | 0.021866 | 0 | 0.537931 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.082759 | false | 0 | 0.027586 | 0 | 0.193103 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0092953e0c320d80fd162ef756008fc8c9889fe1 | 1,521 | py | Python | graph.py | williamium3000/VAE-pytorch | 81ecde20f76bd54400e05f66decac960f5591820 | [
"MIT"
] | 4 | 2021-03-17T01:23:02.000Z | 2021-06-04T06:42:41.000Z | graph.py | williamium3000/VAE-pytorch | 81ecde20f76bd54400e05f66decac960f5591820 | [
"MIT"
] | null | null | null | graph.py | williamium3000/VAE-pytorch | 81ecde20f76bd54400e05f66decac960f5591820 | [
"MIT"
] | null | null | null | import json
import numpy as np
import matplotlib.pyplot as plt
from tensorboard.backend.event_processing import event_accumulator
def load_data_from_tensorboard(path):
ea=event_accumulator.EventAccumulator(path)
ea.Reload()
    loss_events = ea.scalars.Items('loss')  # renamed: these are loss scalars, not PSNR
    data = [i.value for i in loss_events]
return data
# CVAE
BCE_loss = load_data_from_tensorboard("runs/Mar17_05-00-30_06bed19cdc6a/loss_BCE/events.out.tfevents.1615957245.06bed19cdc6a.20271.2")
KL_loss = load_data_from_tensorboard("runs/Mar17_05-00-30_06bed19cdc6a/loss_KLD/events.out.tfevents.1615957245.06bed19cdc6a.20271.3")
loss = load_data_from_tensorboard("runs/Mar17_05-00-30_06bed19cdc6a/loss_loss/events.out.tfevents.1615957245.06bed19cdc6a.20271.1")
# VAE
# BCE_loss = load_data_from_tensorboard("runs/Mar17_05-00-23_06bed19cdc6a/loss_BCE/events.out.tfevents.1615957234.06bed19cdc6a.20225.2")
# KL_loss = load_data_from_tensorboard("runs/Mar17_05-00-23_06bed19cdc6a/loss_KLD/events.out.tfevents.1615957234.06bed19cdc6a.20225.3")
# loss = load_data_from_tensorboard("runs/Mar17_05-00-23_06bed19cdc6a/loss_loss/events.out.tfevents.1615957234.06bed19cdc6a.20225.1")
x = list(range(len(KL_loss)))
ax1 = plt.subplot(1,1,1)
ax1.plot(x, BCE_loss, color="red",linewidth=1, label = "BCE loss")
ax1.plot(x, KL_loss, color="blue",linewidth=1, label = "KL loss")
ax1.plot(x, loss, color="yellow",linewidth=1, label = "total loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.title("loss with respect to epoch(CVAE)")
ax1.legend()
plt.show()
| 40.026316 | 136 | 0.788297 | 239 | 1,521 | 4.803347 | 0.317992 | 0.10453 | 0.073171 | 0.140244 | 0.562718 | 0.562718 | 0.315331 | 0.315331 | 0.315331 | 0.315331 | 0 | 0.153352 | 0.078238 | 1,521 | 37 | 137 | 41.108108 | 0.665478 | 0.268902 | 0 | 0 | 0 | 0.130435 | 0.32821 | 0.253165 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043478 | false | 0 | 0.173913 | 0 | 0.26087 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00941e35e061a46578e9fdc493d82e26af9063ed | 6,774 | py | Python | rsopt/simulation.py | radiasoft/rsopt | 6d4d123dd61e30c7f562b2f5a28c3ccbbcddbde3 | [
"Apache-2.0"
] | 6 | 2020-11-03T16:51:50.000Z | 2022-02-13T20:40:05.000Z | rsopt/simulation.py | radiasoft/rsopt | 6d4d123dd61e30c7f562b2f5a28c3ccbbcddbde3 | [
"Apache-2.0"
] | 97 | 2020-05-18T18:24:49.000Z | 2022-03-23T15:42:42.000Z | rsopt/simulation.py | radiasoft/rsopt | 6d4d123dd61e30c7f562b2f5a28c3ccbbcddbde3 | [
"Apache-2.0"
] | 4 | 2020-08-18T23:19:55.000Z | 2021-12-08T20:55:09.000Z | import logging
import time
import numpy as np
import os
import rsopt.conversion
from libensemble import message_numbers
from libensemble.executors.executor import Executor
from collections.abc import Iterable  # collections.Iterable was removed in Python 3.10
# TODO: This should probably be in libe_tools right?
_POLL_TIME = 1 # seconds
_PENALTY = 1e9
def get_x_from_H(H, sim_specs):
# 'x' may have different name depending on software being used
# Assumes vector data
x_name = sim_specs['in'][0]
x = H[x_name][0]
return x.tolist()
def get_signature(parameters, settings):
# TODO: signature just means dict with settings and params. This should be renamed if it is kept.
# No lambda functions are allowed in settings and parameter names may not be referenced
# Just needs to insert parameter keys into the settings dict, but they won't have usable values yet
signature = settings.copy()
for key in parameters.keys():
signature[key] = None
return signature
def _parse_x(x, parameters):
x_struct = {}
if not isinstance(x, Iterable):
x = [x, ]
for val, name in zip(x, parameters.keys()):
x_struct[name] = val
# Remove used parameters
for _ in parameters.keys():
x.pop(0)
return x_struct
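
# Illustrative behaviour: with parameters ordered {'a': ..., 'b': ...},
# _parse_x([1.0, 2.0, 99.0], parameters) returns {'a': 1.0, 'b': 2.0} and pops
# the consumed entries in place, leaving x == [99.0].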
def compose_args(x, parameters, settings):
args = None # Not used for now
x_struct = _parse_x(x, parameters)
signature = get_signature(parameters, settings)
kwargs = signature.copy()
for key in kwargs.keys():
if key in x_struct:
kwargs[key] = x_struct[key]
return args, kwargs
def format_evaluation(sim_specs, container):
if not hasattr(container, '__iter__'):
container = (container,)
# FUTURE: Type check for container values against spec
outspecs = sim_specs['out']
output = np.zeros(1, dtype=outspecs)
if len(outspecs) == 1:
output[output.dtype.names[0]] = container
return output
for spec, value in zip(output.dtype.names, container):
output[spec] = value
return output
class SimulationFunction:
def __init__(self, jobs: list, objective_function: callable):
# Received from libEnsemble during function evaluation
self.H = None
self.J = {}
self.persis_info = None
self.sim_specs = None
self.libE_info = None
self.log = logging.getLogger('libensemble')
self.jobs = jobs
self.objective_function = objective_function
self.switchyard = None
def __call__(self, H, persis_info, sim_specs, libE_info):
self.H = H
self.persis_info = persis_info
self.sim_specs = sim_specs
self.libE_info = libE_info
x = get_x_from_H(H, self.sim_specs)
halt_job_sequence = False
for job in self.jobs:
# Generate input values
_, kwargs = compose_args(x, job.parameters, job.settings)
self.J['inputs'] = kwargs
# Call preprocessors
if job.pre_process:
for f_pre in job._setup._preprocess:
f_pre(self.J)
# Generate input files for simulation
job._setup.generate_input_file(kwargs, '.')
if self.switchyard and job.input_distribution:
if os.path.exists(job.input_distribution):
os.remove(job.input_distribution)
self.switchyard.write(job.input_distribution, job.code)
job_timeout_sec = job.timeout
if job.executor:
# MPI Job or non-Python executable
exctr = Executor.executor
task = exctr.submit(**job.executor_args)
while True:
time.sleep(_POLL_TIME)
task.poll()
if task.finished:
if task.state == 'FINISHED':
sim_status = message_numbers.WORKER_DONE
self.J['status'] = sim_status
f = None
break
elif task.state == 'FAILED':
sim_status = message_numbers.TASK_FAILED
self.J['status'] = sim_status
halt_job_sequence = True
break
else:
self.log.warning("Unknown task failure")
sim_status = message_numbers.TASK_FAILED
self.J['status'] = sim_status
halt_job_sequence = True
break
elif task.runtime > job_timeout_sec:
self.log.warning('Task Timed out, aborting Job chain')
sim_status = message_numbers.WORKER_KILL_ON_TIMEOUT
self.J['status'] = sim_status
task.kill() # Timeout
halt_job_sequence = True
break
else:
# Serial Python Job
f = job.execute(**kwargs)
sim_status = message_numbers.WORKER_DONE
# NOTE: Right now f is not passed to the objective function. Would need to go inside J. Or pass J into
# function job.execute(**kwargs)
if halt_job_sequence:
break
if job.output_distribution:
self.switchyard = rsopt.conversion.create_switchyard(job.output_distribution, job.code)
self.J['switchyard'] = self.switchyard
if job.post_process:
for f_post in job._setup._postprocess:
f_post(self.J)
if sim_status == message_numbers.WORKER_DONE and not halt_job_sequence:
# Use objective function is present
if self.objective_function:
val = self.objective_function(self.J)
output = format_evaluation(self.sim_specs, val)
self.log.info('val: {}, output: {}'.format(val, output))
else:
# If only serial python was run then then objective_function doesn't need to be defined
try:
output = format_evaluation(self.sim_specs, f)
except NameError as e:
print(e)
print("An objective function must be defined if final Job is is not Python")
else:
# TODO: Temporary penalty. Need to add a way to adjust this.
self.log.warning('Penalty was used because result could not be evaluated')
output = format_evaluation(self.sim_specs, _PENALTY)
return output, persis_info, sim_status | 35.652632 | 118 | 0.57204 | 784 | 6,774 | 4.767857 | 0.288265 | 0.025682 | 0.019262 | 0.036918 | 0.127341 | 0.103531 | 0.041199 | 0.041199 | 0.041199 | 0.041199 | 0 | 0.002064 | 0.356215 | 6,774 | 190 | 119 | 35.652632 | 0.855079 | 0.15028 | 0 | 0.164179 | 0 | 0 | 0.047611 | 0 | 0 | 0 | 0 | 0.005263 | 0 | 1 | 0.052239 | false | 0 | 0.059701 | 0 | 0.171642 | 0.014925 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00950bf5ea1da7a9712f6f48a0aef005059be905 | 753 | py | Python | BasicAlgorithms/recursion.py | bhattvishal/intro_to_py-algo | 63cea2eff4ef13cca96c0d09e723a08ea19a9006 | [
"MIT"
] | 1 | 2020-11-26T11:06:56.000Z | 2020-11-26T11:06:56.000Z | 2.Algorithms/1.Basic.Algorithms/recursion.py | bhattvishal/programming-learning-python | 78498bfbe7c1c7b1bda53756ca8552ab30fbf538 | [
"MIT"
] | null | null | null | 2.Algorithms/1.Basic.Algorithms/recursion.py | bhattvishal/programming-learning-python | 78498bfbe7c1c7b1bda53756ca8552ab30fbf538 | [
"MIT"
] | 1 | 2020-11-04T11:07:57.000Z | 2020-11-04T11:07:57.000Z | # In this example we will use the concept of RECURSION
# to the the Power and the Factorial of a given Number
def calculatePower(number, power):
if power == 0:
return 1
else:
return number * calculatePower(number, power - 1)
def calculateFactorial(number):
if number == 0:
return 1
else:
return number * calculateFactorial(number - 1)
def main():
print("{0} to the power of {1} is: {2}".format(2, 3, calculatePower(2, 3)))
print("{0} to the power of {1} is: {2}".format(10, 2, calculatePower(10, 2)))
print("Factorial of {0} is: {1}".format(0, calculateFactorial(0)))
print("Factorial of {0} is: {1}".format(5, calculateFactorial(5)))
if __name__ == "__main__":
main() | 30.12 | 82 | 0.622842 | 107 | 753 | 4.308411 | 0.317757 | 0.032538 | 0.10846 | 0.052061 | 0.338395 | 0.338395 | 0.234273 | 0.121475 | 0.121475 | 0.121475 | 0 | 0.052356 | 0.239044 | 753 | 25 | 83 | 30.12 | 0.752182 | 0.139442 | 0 | 0.235294 | 0 | 0 | 0.182663 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.176471 | false | 0 | 0 | 0 | 0.411765 | 0.235294 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0095e7647eecae4b37ed02faff6cc32416ebf3b2 | 653 | py | Python | trainers/__init__.py | kobakobashu/posenet-python | 52290733504fd0a130cc2301bad5db761c14a4e9 | [
"Apache-2.0"
] | null | null | null | trainers/__init__.py | kobakobashu/posenet-python | 52290733504fd0a130cc2301bad5db761c14a4e9 | [
"Apache-2.0"
] | null | null | null | trainers/__init__.py | kobakobashu/posenet-python | 52290733504fd0a130cc2301bad5db761c14a4e9 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
"""Executor
These functions are for execution.
"""
from configs.supported_info import SUPPORTED_TRAINER
from trainers.default_trainer import DefaultTrainer
def get_trainer(cfg: object) -> object:
"""Get trainer
Args:
cfg: Config of the project.
Returns:
Trainer object.
Raises:
        NotImplementedError: If the trainer you want to use is not supported.
"""
trainer_name = cfg.train.trainer.name
if trainer_name not in SUPPORTED_TRAINER:
raise NotImplementedError('The trainer is not supported.')
if trainer_name == "default":
return DefaultTrainer(cfg) | 20.40625 | 75 | 0.683002 | 78 | 653 | 5.615385 | 0.564103 | 0.100457 | 0.059361 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001996 | 0.232772 | 653 | 32 | 76 | 20.40625 | 0.872255 | 0.350689 | 0 | 0 | 0 | 0 | 0.094241 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.25 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0099b3dcb3c67b6bba13b50c4b607f68f7d224da | 9,779 | py | Python | defender.py | robomotic/bulkavscanners | 2b0862945e43e5160f6103e23999eb1e05b36852 | [
"Apache-2.0"
] | 1 | 2021-12-15T11:31:24.000Z | 2021-12-15T11:31:24.000Z | defender.py | robomotic/bulkavscanners | 2b0862945e43e5160f6103e23999eb1e05b36852 | [
"Apache-2.0"
] | null | null | null | defender.py | robomotic/bulkavscanners | 2b0862945e43e5160f6103e23999eb1e05b36852 | [
"Apache-2.0"
] | null | null | null | from subprocess import Popen, PIPE
import re
import glob
import os
import pickle
import json
import time
import logging
import argparse
import datetime
import hashlib
__author__ = "Paolo Di Prodi"
__copyright__ = "Copyright 2018, Paolo Di Prodi"
__license__ = "Apache 2.0"
__version__ = "0.99"
__email__ = "contact [AT] logstotal.com"
STATE_FOLDER = os.path.join('progress','defender')
os.makedirs(STATE_FOLDER,exist_ok=True)
LOG_FOLDER = 'logs'
os.makedirs(LOG_FOLDER,exist_ok=True)
logging.basicConfig(
format="%(asctime)s [%(threadName)-12.12s] [%(levelname)-5.5s] %(message)s",
handlers=[
logging.FileHandler("{0}/{1}.log".format(LOG_FOLDER, 'defenderscan')),
logging.StreamHandler()
],level=logging.DEBUG)
class WinDefenderProcessor():
md5rex = re.compile(r"[0-9a-f]{32}$",re.IGNORECASE)
sha1rex = re.compile(r"[0-9a-f]{40}$",re.IGNORECASE)
sha256rex = re.compile(r"[0-9a-f]{64}$",re.IGNORECASE)
threatrex = re.compile(r"^Threat\s+\:\s(.+)$")
resrex = re.compile(r"^Resources\s+\:\s(\d+)\stotal$")
filerex = re.compile(r"^\s+file\s+\:\s(.*)$")
headrex = re.compile(r"found\s(\d+)\sthreats\.$")
def __init__(self,hashtype='md5'):
self.preferred_hash = hashtype
self.state_path = os.path.join(STATE_FOLDER,'state.pk')
self.get_version()
if os.path.exists(self.state_path):
with open(self.state_path, 'rb') as handle:
self.state = pickle.load(handle)
else:
os.makedirs(LOG_FOLDER,exist_ok=True)
self.state = []
@staticmethod
def hash_file(path):
with open(path, 'rb') as file:
data = file.read()
info = {
'md5': hashlib.md5(data).hexdigest(),
'sha1': hashlib.sha1(data).hexdigest(),
'sha256': hashlib.sha256(data).hexdigest()}
return info
def get_version(self):
        # TODO: query the real versions via WMI; hardcoded for now
self.engine = '1.1.14800.3'
self.platform = ' 4.14.17613.18039'
self.signature = '1.267.196.0'
def get_hash(self,filename):
match = re.findall(self.md5rex, filename)
if match:
return ('md5',match[0].lower())
match = re.findall(self.sha1rex, filename)
if match:
return ('sha1', match[0].lower())
match = re.findall(self.sha256rex, filename)
if match:
return ('sha256', match[0].lower())
return (None,None)
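    # Illustrative behaviour: for a VirusShare-style filename such as
    # "VirusShare_000a50c55a2f4517d2e27b21f4b27e3b", get_hash returns
    # ('md5', '000a50c55a2f4517d2e27b21f4b27e3b'); names without a trailing
    # hex digest return (None, None).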
def scan_folder(self,path,recursive = False, batch = 10):
''' Scan file one by one '''
if path.endswith(os.path.sep):
self.files = list(glob.iglob(path + '*', recursive=recursive))
else:
self.files = list(glob.iglob(path + os.path.sep + '*', recursive=recursive))
logging.info("Preparing to scan {0} total files".format(len(self.files)))
# which files were not scanned from last time
notscanned = [path for path in self.files if path not in self.state]
if len(notscanned) == 0:
            logging.warning("No new files to scan")
return
interrupted = False
for filepath in notscanned:
if interrupted == True:
break
            report = None  # ensure the yield in the finally clause is always defined
            try:
[hashtype, value] = self.get_hash(os.path.basename(filepath))
defender_process = Popen(['mpcmdrun', '-scan', '-scantype', '3', '-DisableRemediation', '-file', filepath],
stdout=PIPE,stderr = PIPE)
out, err = defender_process.communicate()
if len(err.decode('utf-8')) > 0:
logging.error(err.decode('utf-8'))
summary = self.parse_defender_out(out.decode('utf-8'))
if hashtype is None or hashtype!=self.preferred_hash:
                    # compute the hash of the file ourselves
                    all_hash = WinDefenderProcessor.hash_file(filepath)
hashtype = self.preferred_hash
value = all_hash[self.preferred_hash]
if 'Found' not in summary:
report = {hashtype: value, "defender": '',
"engine": self.engine, "platform": self.platform,
"signature": self.signature,
'scanTime': datetime.datetime.utcnow().isoformat()}
elif summary['Found'] > 0 :
for threat in summary['Threats']:
report = {hashtype: value, "defender": threat['Threat'],
"engine": self.engine, "build": self.platform,
"signature": self.signature,
'scanTime': datetime.datetime.utcnow().isoformat()}
self.state += [filepath]
except KeyboardInterrupt:
# remove the last chunk just in case
del self.state[-1:]
# kill the clamscan process
if defender_process.poll() is not None:
defender_process.kill()
logging.warning("Terminating batch process....")
interrupted = True
finally:
yield report
with open(self.state_path, 'wb') as handle:
pickle.dump(self.state, handle, protocol=pickle.HIGHEST_PROTOCOL)
def parse_defender_out(self,report):
# 'Threat : Virus:DOS/Svir'
# 'Resources : 1 total'
# ' file : F:\\VirusShare_xxxxx\\VirusShare_000a50c55a2f4517d2e27b21f4b27e3b'
lines = report.split('\r\n')
header = False
begin_manifest = False
end_manifest = False
summary = {}
detection = {}
for line in lines:
if 'LIST OF DETECTED THREATS' in line or 'Scan starting...' in line or 'Scan finished.' in line:
header = True
continue
elif line.startswith("Scanning"):
match = re.findall(self.headrex, line)
if match:
summary["Found"] = int(match[0])
summary["Threats"] = []
header = False
elif 'Threat information' in line:
begin_manifest = True
continue
elif line.count('-') == len(line):
end_manifest = True
# time to flush!
if len(detection.keys())>0:
summary["Threats"].append(detection)
detection = {}
begin_manifest = False
elif begin_manifest == True:
match = re.findall(self.threatrex, line)
if match:
detection['Threat'] = match[0]
match = re.findall(self.resrex, line)
if match:
detection['Resources'] = match[0]
match = re.findall(self.filerex, line)
if match:
if 'Files' in detection:
detection['Files'].append(match[0])
else:
detection['Files'] = [match[0]]
return summary
if __name__ == "__main__":
parser = argparse.ArgumentParser(description='Scan an entire folder with ClamAv')
parser.add_argument('--version',action='store_true',
help='Display Clam version')
parser.add_argument('--scan',action='store_true',
help='Scan a folder')
parser.add_argument('--folder',dest='folder', type=str,
help='Folder with virus samples')
parser.add_argument('--detections',dest='detections', type=str,
help='Folder with the output of the scan')
parser.add_argument('--merge', action='store_true',
help='Merge previous scans')
parser.add_argument('--recursive', action='store_true',
help='Scan all files with nested folders')
parser.add_argument('--batchsize', default = 140, type = int,
help='Batch scanning in groups')
parser.add_argument('--newline', action='store_true', default= True,
help='Use new lines in json output')
args = parser.parse_args()
    if args.merge:
        processor = WinDefenderProcessor()
        # NOTE: merge_scans is not defined in this module; this branch will fail until it is implemented
        merged = processor.merge_scans(args.detections, args.newline)
    if args.version:
        processor = WinDefenderProcessor()
        processor.get_version()
        print("Engine {0} Platform {1}".format(processor.engine, processor.platform))
        print("Signature version {0}".format(processor.signature))
if args.scan:
if args.folder:
processor = WinDefenderProcessor()
os.makedirs(args.detections,exist_ok=True)
reports = []
for report in processor.scan_folder(args.folder,recursive=args.recursive):
reports.append(report)
if len(reports) >= args.batchsize:
output_file = os.path.join(args.detections, "%s.json" % int(time.time()))
with open(output_file, "w") as file:
json.dump(reports, file)
logging.info("Saved %d " % len(reports))
reports = []
if len(reports) > 0:
output_file = os.path.join(args.detections, "%s.json" % int(time.time()))
with open(output_file, "w") as file:
json.dump(reports, file)
logging.info("Saved %d " % len(reports)) | 35.952206 | 123 | 0.536762 | 1,035 | 9,779 | 4.974879 | 0.269565 | 0.017479 | 0.026413 | 0.024471 | 0.151097 | 0.12585 | 0.098272 | 0.075354 | 0.075354 | 0.075354 | 0 | 0.020963 | 0.341446 | 9,779 | 272 | 124 | 35.952206 | 0.778571 | 0.037938 | 0 | 0.183168 | 0 | 0.004951 | 0.13168 | 0.00809 | 0 | 0 | 0 | 0 | 0 | 1 | 0.029703 | false | 0 | 0.054455 | 0 | 0.158416 | 0.009901 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
009a7194d72bb19c9a7e47c1c2bfc99ff6b60d94 | 5,731 | py | Python | python-flask-server/openapi_server/dcc/disease_utils.py | broadinstitute/genetics-kp-dev | 902a153a33942ba5d224c129db0ae58562927085 | [
"MIT"
] | null | null | null | python-flask-server/openapi_server/dcc/disease_utils.py | broadinstitute/genetics-kp-dev | 902a153a33942ba5d224c129db0ae58562927085 | [
"MIT"
] | 8 | 2021-06-14T18:10:53.000Z | 2022-03-23T18:30:10.000Z | python-flask-server/openapi_server/dcc/disease_utils.py | broadinstitute/genetics-kp-dev | 902a153a33942ba5d224c129db0ae58562927085 | [
"MIT"
] | 1 | 2022-02-22T21:24:58.000Z | 2022-02-22T21:24:58.000Z |
# imports
import json
import requests
import logging
import sys
logging.basicConfig(level=logging.INFO, format=f'[%(asctime)s] - %(levelname)s - %(name)s %(threadName)s : %(message)s')
handler = logging.StreamHandler(sys.stdout)
logger = logging.getLogger(__name__)
# constants
URL_ONTOLOGY_KP = "https://stars-app.renci.org/sparql-kp/query"
# methods
def build_query(predicate, subject_category, subject_id, object_category, object_id):
''' will build a trapi v1.1 query '''
edges = {"e00": {"predicates": [predicate], "subject": "n00", "object": "n01"}}
nodes = {"n00": {}, "n01": {}}
if subject_category:
nodes["n00"]["categories"] = [subject_category]
if object_category:
nodes["n01"]["categories"] = [object_category]
if subject_id:
if isinstance(subject_id, list):
nodes["n00"]["ids"] = subject_id
else:
nodes["n00"]["ids"] = [subject_id]
if object_id:
if isinstance(object_id, list):
nodes["n01"]["ids"] = object_id
else:
nodes["n01"]["ids"] = [object_id]
message = {"query_graph": {"edges": edges, "nodes": nodes}}
result = {"message": message}
# return
return result
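
# For illustration, build_query("biolink:subclass_of", None, None, None,
# "MONDO:0007972") produces:
#   {"message": {"query_graph": {
#       "edges": {"e00": {"predicates": ["biolink:subclass_of"],
#                         "subject": "n00", "object": "n01"}},
#       "nodes": {"n00": {}, "n01": {"ids": ["MONDO:0007972"]}}}}}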
def get_node_list(json_response):
''' will extract the nodes from the trapi v1.1 response'''
result = []
# get the nodes
    if json_response and json_response.get("message") and json_response.get("message").get("knowledge_graph"):
        knowledge_graph = json_response.get("message").get("knowledge_graph")
# loop
if knowledge_graph.get("nodes"):
for key, values in knowledge_graph.get("nodes").items():
result.append(key)
# return result
return result
def query_service(url, query):
    ''' will do a POST call to a service with a trapi v1.1 query '''
response = None
# call
try:
response = requests.post(url, json=query).json()
except (RuntimeError, TypeError, NameError, ValueError):
print('ERROR: query_service - REST query or decoding JSON has failed')
# return
return response
def get_disease_descendants(disease_id, category=None, debug=False):
''' will query the trapi v1.1 ontology kp and return the descendant diseases '''
# initialize
list_diseases = []
json_query = build_query(predicate="biolink:subclass_of", subject_category=category, object_category=category, subject_id=None, object_id=disease_id)
# print result
if debug:
print("the query is: \n{}".format(json.dumps(json_query, indent=2)))
# query the KP and get the results
response = query_service(URL_ONTOLOGY_KP, json_query)
list_diseases = get_node_list(response)
# always add itself back in in case there was error and empty list returned
list_diseases.append(disease_id)
# get unique elements in the list
list_diseases = list(set(list_diseases))
# log
if debug:
print("got the child disease list: {}".format(list_diseases))
# return
return list_diseases
def get_disease_descendants_from_list(list_curie_id, category=None, log=False):
''' will query the trapi v1.1 ontology kp and return the descendant diseases, will return list of (original, new) tuples '''
# initialize
list_result = []
list_filtered = [item for item in list_curie_id if item.split(':')[0] in ['EFO', 'MONDO']]
json_query = build_query(predicate="biolink:subclass_of", subject_category=category, object_category=category, subject_id=None, object_id=list_filtered)
# print result
if log:
logger.info("reduced efo/mondo input descendant list from: {} to: {}".format(list_curie_id, list_filtered))
if len(list_filtered) > 0:
logger.info("the query is: \n{}".format(json.dumps(json_query, indent=2)))
# query the KP and get the results
json_response = query_service(URL_ONTOLOGY_KP, json_query)
# get the nodes
if json_response and json_response.get("message") and json_response.get("message").get("knowledge_graph"):
knowledge_graph = json_response.get("message").get("knowledge_graph")
# loop
logger.info("edges: {}".format(knowledge_graph.get("edges")))
if knowledge_graph.get("edges"):
for key, value in knowledge_graph.get("edges").items():
descendant = (value.get("object"), value.get("subject"))
list_result.append(descendant)
# get unique elements in the list
list_result = list(set(list_result))
# log
if log:
for item in list_result:
logger.info("got the web descendant disease entry: {}".format(item))
# return
return list_result
# test
if __name__ == "__main__":
disease_id = "MONDO:0007972" # meniere's disease
disease_id = "MONDO:0020066" # ehler's danlos
# disease_id = "MONDO:0005267" # heart disease
get_disease_descendants(disease_id=disease_id, category="biolink:DiseaseOrPhenotypicFeature", debug=True)
get_disease_descendants(disease_id=disease_id, debug=True)
# json_query = build_query(predicate="biolink:subclass_of", subject_category="biolink:Disease", object_category="biolink:Disease", subject_id=None, object_id=)
# test server error catching
disease_id = "NCBIGene:1281"
get_disease_descendants(disease_id=disease_id, debug=True)
# # print result
# print("the query is: \n{}".format(json.dumps(json_query, indent=2)))
# # query the KP and get the results
# response = query_service(URL_ONTOLOGY_KP, json_query)
# list_diseases = get_node_list(response)
# print("got the child disease list: {}".format(list_diseases))
| 36.974194 | 163 | 0.66847 | 742 | 5,731 | 4.96496 | 0.212938 | 0.031759 | 0.02443 | 0.035831 | 0.419381 | 0.376764 | 0.376764 | 0.347448 | 0.336048 | 0.285559 | 0 | 0.013702 | 0.210434 | 5,731 | 154 | 164 | 37.214286 | 0.800442 | 0.218112 | 0 | 0.144578 | 0 | 0.012048 | 0.163946 | 0.00771 | 0 | 0 | 0 | 0 | 0 | 1 | 0.060241 | false | 0 | 0.048193 | 0 | 0.168675 | 0.036145 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
009bd895b6d6ee032c9cee2c80ab93e4d5be6020 | 2,991 | py | Python | trend/src/zutils/zrpc/client/work_thread.py | limingmax/WFCode | f2e6d2fcf05ad9fdaac3a69603afee047ed37ca3 | [
"Apache-2.0"
] | 2 | 2018-10-23T01:56:46.000Z | 2018-10-23T01:56:49.000Z | trend/src/zutils/zrpc/client/work_thread.py | limingmax/WFCode | f2e6d2fcf05ad9fdaac3a69603afee047ed37ca3 | [
"Apache-2.0"
] | null | null | null | trend/src/zutils/zrpc/client/work_thread.py | limingmax/WFCode | f2e6d2fcf05ad9fdaac3a69603afee047ed37ca3 | [
"Apache-2.0"
] | null | null | null | import traceback
import time
import threading
from zutils.zrpc.client.task_template import TaskTemplateWithFrame, TaskTemplateWithoutFrame
class WorkThread(threading.Thread):
status = 'init'
thread_num = 0
thread_dict = dict()
cur_frame = None
def __init__(self, run_instance):
threading.Thread.__init__(self)
self.run_instance = run_instance
self.thread_id = str(WorkThread.thread_num)
WorkThread.thread_num += 1
WorkThread.thread_dict[self.thread_id] = [self, None]
self.run_type = None
if isinstance(run_instance, TaskTemplateWithFrame):
self.run_type = 'withframe'
elif isinstance(run_instance, TaskTemplateWithoutFrame):
self.run_type = 'withoutframe'
else:
            raise Exception('run_instance does not inherit from TaskTemplate')
def run(self):
instance_frame = WorkThread.thread_dict[self.thread_id]
while WorkThread.status == 'run':
try:
if self.run_type == 'withframe':
frame = instance_frame[1]
instance_frame[1] = None
if frame is not None:
self.run_instance.run(frame)
else:
self.run_instance.run()
except:
traceback.print_exc()
time.sleep(1)
self.run_instance.sleep()
@staticmethod
def set_frame(frame):
thread_dict = WorkThread.thread_dict
for key in thread_dict:
thread_dict[key][1] = frame
@staticmethod
def start_all():
WorkThread.status = 'run'
thread_dict = WorkThread.thread_dict
for key in thread_dict:
thread_dict[key][0].start()
print(type(thread_dict[key][0].run_instance), 'start')
print('start all thread')
@staticmethod
def stop_all():
WorkThread.status = 'exit'
thread_dict = WorkThread.thread_dict
for key in thread_dict:
thread_dict[key][0].join()
print(type(thread_dict[key][0].run_instance), 'stop')
print('stop all thread')
if __name__ == '__main__':
from zutils.task.client.task_template import TaskTemplate
class XXX(TaskTemplate):
def sleep(self):
print('xxxxxx')
time.sleep(1)
class YYY(TaskTemplate):
def sleep(self):
print('yyyyyy')
time.sleep(3)
class ZZZ(TaskTemplate):
def run(self, frame):
WorkThread.set_frame(frame + 1)
def sleep(self):
print('zzzzzz')
time.sleep(3)
WorkThread(XXX())
WorkThread(YYY())
WorkThread(ZZZ())
WorkThread.set_frame(0)
WorkThread.start_all()
time.sleep(15)
WorkThread.stop_all()
#
# a = {
# 'a':1,
# 'b':2
# }
#
#
#
#
#
# v = a['a']
# a['a'] = None
# print(v)
# print(a)
#
| 24.317073 | 92 | 0.567369 | 324 | 2,991 | 5.027778 | 0.225309 | 0.104359 | 0.046041 | 0.034377 | 0.230203 | 0.194598 | 0.15531 | 0.15531 | 0.113567 | 0.113567 | 0 | 0.009462 | 0.328653 | 2,991 | 122 | 93 | 24.516393 | 0.801793 | 0.023738 | 0 | 0.222222 | 0 | 0 | 0.048209 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.061728 | 0 | 0.271605 | 0.098765 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
009e8776cfa85887a7c94901dc021527afa5ca46 | 11,509 | py | Python | tests/data23/recipe-577507.py | JohannesBuchner/pystrict3 | f442a89ac6a23f4323daed8ef829d8e9e1197f90 | [
"BSD-2-Clause"
] | 1 | 2020-06-05T08:53:26.000Z | 2020-06-05T08:53:26.000Z | tests/data23/recipe-577507.py | JohannesBuchner/pystrict3 | f442a89ac6a23f4323daed8ef829d8e9e1197f90 | [
"BSD-2-Clause"
] | 1 | 2020-06-04T13:47:19.000Z | 2020-06-04T13:47:57.000Z | tests/data23/recipe-577507.py | JohannesBuchner/pystrict3 | f442a89ac6a23f4323daed8ef829d8e9e1197f90 | [
"BSD-2-Clause"
] | 1 | 2020-11-07T17:02:46.000Z | 2020-11-07T17:02:46.000Z | ##
# This module provides a powerful 'switch'-like dispatcher system.
# Values for switch cases can be anything comparable via '==', a string
# for use on the left-hand side of the 'in' operator, or a regular expression.
# Iterables of these types can also be used.
__author__ = 'Mike Kent'
import re
class SwitchError(Exception): pass
CPAT_TYPE = type(re.compile('.'))
STR_TYPE = type('')
LIST_TYPE = type([])
TUPLE_TYPE = type(())
class Switch(object):
def __init__(self):
self.exactCases = {}
self.inCases = []
self.patternCases = []
self.defaultHandler = None
##
# Try each 'in' case, in the order they were
# specified, stopping if we get a match.
# Return a tuple of the string we are searching for in the target string,
# and the case handler found, or (None, None) if no match found.
def _findInCase(self, switchValue):
for inStr, aHandler in self.inCases:
if inStr in switchValue:
return (inStr, aHandler)
return (None, None)
##
# Try each regex pattern (using re.search), in the order they were
# specified, stopping if we get a match.
# Return a tuple of the re match object and the case handler found, or
# (None, None) if no match found.
def _findRegExCase(self, switchValue):
for cpat, aHandler in self.patternCases:
matchObj = cpat.search(switchValue)
if matchObj is not None:
return (matchObj, aHandler)
return (None, None)
##
# Switch on a switch value. A match against the exact
# (non-regular-expression) case matches is tried first. If that doesn't
# find a match, then if the switch value is a string, the 'in' case
# matches are tried next, in the order they were registered. If that
# doesn't find a match, then if the switch value is a string,
# the regular-expression case matches are tried next, in
# the order they were registered. If that doesn't find a match, and
# a default case handler was registered, the default case handler is used.
# If no match was found, and no default case handler was registered,
# SwitchError is raised.
# If a switch match is found, the corresponding case handler is called.
# The switch value is passed as the first positional parameter, along with
# any other positional and keyword parameters that were passed to the
# switch method. The switch method returns the return value of the
# called case handler.
def switch(self, switchValue, *args, **kwargs):
caseHandler = None
switchType = type(switchValue)
try:
# Can we find an exact match for this switch value?
# For an exact match, we will pass the case value to the case
# handler.
caseHandler = self.exactCases.get(switchValue)
caseValue = switchValue
except TypeError:
pass
# If no exact match, and we have 'in' cases to try,
# see if we have a matching 'in' case for this switch value.
# For an 'in' operation, we will be passing the left-hand side of
# 'in' operator to the case handler.
if not caseHandler and switchType in (STR_TYPE, LIST_TYPE, TUPLE_TYPE) \
and self.inCases:
caseValue, caseHandler = self._findInCase(switchValue)
# If no 'in' match, and we have regex patterns to try,
# see if we have a matching regex pattern for this switch value.
# For a RegEx match, we will be passing the re.matchObject to the
# case handler.
if not caseHandler and switchType == STR_TYPE and self.patternCases:
caseValue, caseHandler = self._findRegExCase(switchValue)
# If still no match, see if we have a default case handler to use.
if not caseHandler:
caseHandler = self.defaultHandler
caseValue = switchValue
# If still no case handler was found for the switch value,
# raise a SwitchError.
if not caseHandler:
raise SwitchError("Unknown case value %r" % switchValue)
# Call the case handler corresponding to the switch value,
# passing it the case value, and any other parameters passed
# to the switch, and return that case handler's return value.
return caseHandler(caseValue, *args, **kwargs)
##
    # Register a case handler and the case value it should handle.
# This is a function decorator for a case handler. It doesn't
# actually modify the decorated case handler, it just registers it.
# It takes a case value (any object that is valid as a dict key),
# or any iterable of such case values.
def case(self, caseValue):
def wrap(caseHandler):
# If caseValue is not an iterable, turn it into one so
# we can handle everything the same.
caseValues = ([ caseValue ] if not hasattr(caseValue, '__iter__') \
else caseValue)
for aCaseValue in caseValues:
# Raise SwitchError on a dup case value.
if aCaseValue in self.exactCases:
raise SwitchError("Duplicate exact case value '%s'" % \
aCaseValue)
# Add it to the dict for finding exact case matches.
self.exactCases[aCaseValue] = caseHandler
return caseHandler
return wrap
##
# Register a case handler for handling a regular expression.
def caseRegEx(self, caseValue):
def wrap(caseHandler):
# If caseValue is not an iterable, turn it into one so
# we can handle everything the same.
caseValues = ([ caseValue ] if not hasattr(caseValue, '__iter__') \
else caseValue)
for aCaseValue in caseValues:
# If this item is not a compiled regular expression, compile it.
if type(aCaseValue) != CPAT_TYPE:
aCaseValue = re.compile(aCaseValue)
# Raise SwitchError on a dup case value.
for thisCaseValue, _ in self.patternCases:
if aCaseValue.pattern == thisCaseValue.pattern:
raise SwitchError("Duplicate regex case value '%s'" % \
aCaseValue.pattern)
self.patternCases.append((aCaseValue, caseHandler))
return caseHandler
return wrap
##
# Register a case handler for handling an 'in' operation.
def caseIn(self, caseValue):
def wrap(caseHandler):
# If caseValue is not an iterable, turn it into one so
# we can handle everything the same.
caseValues = ([ caseValue ] if not hasattr(caseValue, '__iter__') \
else caseValue)
for aCaseValue in caseValues:
# Raise SwitchError on a dup case value.
for thisCaseValue, _ in self.inCases:
if aCaseValue == thisCaseValue:
raise SwitchError("Duplicate 'in' case value '%s'" % \
aCaseValue)
            # Add it to the list of 'in' values.
self.inCases.append((aCaseValue, caseHandler))
return caseHandler
return wrap
##
# This is a function decorator for registering the default case handler.
def default(self, caseHandler):
self.defaultHandler = caseHandler
return caseHandler
if __name__ == '__main__': # pragma: no cover
# Example uses
# Instantiate a switch object.
mySwitch = Switch()
# Register some cases and case handlers, using the handy-dandy
# decorators.
# A default handler
@mySwitch.default
def gotDefault(value, *args, **kwargs):
print("Default handler: I got unregistered value %r, "\
"with args: %r and kwargs: %r" % \
(value, args, kwargs))
return value
# A single numeric case value.
@mySwitch.case(0)
def gotZero(value, *args, **kwargs):
print("gotZero: I got a %d, with args: %r and kwargs: %r" % \
(value, args, kwargs))
return value
# A range of numeric case values.
@mySwitch.case(list(range(5, 10)))
def gotFiveThruNine(value, *args, **kwargs):
print("gotFiveThruNine: I got a %d, with args: %r and kwargs: %r" % \
(value, args, kwargs))
return value
# A string case value, for an exact match.
@mySwitch.case('Guido')
def gotGuido(value, *args, **kwargs):
print("gotGuido: I got '%s', with args: %r and kwargs: %r" % \
(value, args, kwargs))
return value
# A string value for use with the 'in' operator.
@mySwitch.caseIn('lo')
def gotLo(value, *args, **kwargs):
print("gotLo: I got '%s', with args: %r and kwargs: %r" % \
(value, args, kwargs))
return value
# A regular expression pattern match in a string.
# You can also pass in a pre-compiled regular expression.
@mySwitch.caseRegEx(r'\b([Pp]y\w*)\b')
def gotPyword(matchObj, *args, **kwargs):
print("gotPyword: I got a matchObject where group(1) is '%s', "\
"with args: %r and kwargs: %r" % \
(matchObj.group(1), args, kwargs))
return matchObj
    # And lastly, you can pass an iterable to case, caseIn, and
# caseRegEx.
@mySwitch.case([ 99, 'yo', 200 ])
def gotStuffInSeq(value, *args, **kwargs):
print("gotStuffInSeq: I got %r, with args: %r and kwargs: %r" % \
(value, args, kwargs))
return value
# Now show what we can do.
got = mySwitch.switch(0)
# Returns 0, prints "gotZero: I got a 0, with args: () and kwargs: {}"
got = mySwitch.switch(6, flag='boring')
# Returns 6, prints "gotFiveThruNine: I got a 6, with args: () and
# kwargs: {'flag': 'boring'}"
got = mySwitch.switch(10, 42)
# Returns 10, prints "Default handler: I got unregistered value 10,
# with args: (42,) and kwargs: {}"
got = mySwitch.switch('Guido', BDFL=True)
# Returns 'Guido', prints "gotGuido: I got 'Guido', with args: () and
# kwargs: {'BDFL': True}"
got = mySwitch.switch('Anyone seen Guido around?')
    # Returns 'Anyone seen Guido around?', prints "Default handler: I got
# unregistered value 'Anyone seen Guido around?', with args: () and
# kwargs: {}", 'cause we used 'case' and not 'caseIn'.
got = mySwitch.switch('Yep, and he said "hello".', 99, yes='no')
# Returns 'lo', prints "gotLo: I got 'lo', with args: (99,) and
# kwargs: {'yes': 'no'}", 'cause we found the 'lo' in 'hello'.
got = mySwitch.switch('Bird is the Python word of the day.')
# Returns a matchObject, prints "gotPyword: I got a matchObject where
# group(1) is 'Python', with args: () and kwargs: {}"
got = mySwitch.switch('yo')
# Returns 'yo', prints "gotStuffInSeq: I got 'yo', with args: () and
# kwargs: {}"
| 41.103571 | 80 | 0.588583 | 1,412 | 11,509 | 4.766997 | 0.181303 | 0.032685 | 0.026742 | 0.01248 | 0.367107 | 0.33472 | 0.314664 | 0.280493 | 0.264151 | 0.239489 | 0 | 0.004121 | 0.325311 | 11,509 | 279 | 81 | 41.250896 | 0.862717 | 0.416283 | 0 | 0.315385 | 0 | 0 | 0.104602 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.138462 | false | 0.015385 | 0.007692 | 0 | 0.307692 | 0.053846 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
009eb040adfa94c81028b6d03f34a0e3a951ff0a | 1,221 | py | Python | three_play/v3/models/requests.py | rnag/three-play | 694a9f96ff5edf5f72c04827004b644c0a51365a | [
"MIT"
] | null | null | null | three_play/v3/models/requests.py | rnag/three-play | 694a9f96ff5edf5f72c04827004b644c0a51365a | [
"MIT"
] | 2 | 2021-06-12T22:37:35.000Z | 2021-06-12T22:38:41.000Z | three_play/v3/models/requests.py | rnag/three-play | 694a9f96ff5edf5f72c04827004b644c0a51365a | [
"MIT"
] | null | null | null | from typing import Optional, List
from requests import Session
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.util.retry import Retry
from ...config.requests import (
DEFAULT_MAX_RETRIES, DEFAULT_BACKOFF_FACTOR, DEFAULT_STATUS_FORCE_LIST)
class SessionWithRetry(Session):
def __init__(self, auth=None,
num_retries=DEFAULT_MAX_RETRIES,
backoff_factor=DEFAULT_BACKOFF_FACTOR,
additional_status_force_list: Optional[List[int]] = None):
super().__init__()
self.auth = auth
        status_force_list = list(DEFAULT_STATUS_FORCE_LIST)  # copy, so the shared default is not mutated
# Retry on additional status codes (ex. HTTP 400) if needed
if additional_status_force_list:
status_force_list.extend(additional_status_force_list)
retry_strategy = Retry(
read=0,
total=num_retries,
status_forcelist=status_force_list,
method_whitelist=["HEAD", "GET", "PUT", "POST", "DELETE", "OPTIONS", "TRACE"],
backoff_factor=backoff_factor
)
adapter = HTTPAdapter(max_retries=retry_strategy)
self.mount("https://", adapter)
self.mount("http://", adapter)
| 32.131579 | 90 | 0.669124 | 137 | 1,221 | 5.620438 | 0.416058 | 0.114286 | 0.155844 | 0.097403 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005423 | 0.244881 | 1,221 | 37 | 91 | 33 | 0.829718 | 0.046683 | 0 | 0 | 0 | 0 | 0.040448 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.038462 | false | 0 | 0.192308 | 0 | 0.269231 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00a019cbccdf847c336f738afaee1f639509b8fa | 2,397 | py | Python | linkedlist/Reference_code/q10.py | pengfei-chen/algorithm_qa | c2ccdcb77004e88279d61e4e433ee49527fc34d6 | [
"MIT"
] | 79 | 2018-03-27T12:37:49.000Z | 2022-01-21T10:18:17.000Z | linkedlist/Reference_code/q10.py | pengfei-chen/algorithm_qa | c2ccdcb77004e88279d61e4e433ee49527fc34d6 | [
"MIT"
] | null | null | null | linkedlist/Reference_code/q10.py | pengfei-chen/algorithm_qa | c2ccdcb77004e88279d61e4e433ee49527fc34d6 | [
"MIT"
] | 27 | 2018-04-08T03:07:06.000Z | 2021-10-30T00:01:50.000Z | """
问题描述:假设链表中每个节点的值都在[0,9]之间,那么链表整体就可以代表一个整数,
例如:9->3->7,可以代表整数937.给定两个这种链表的头结点head1和head2,请生
成代表两个整数相加值的结果链表。例如:链表1为9->3->7,链表2为6->3,最后生成
新的结果链表为1->0->0->0.
思路:
1)如果将链表先转化为整数相加,再转成链表,可能会出现溢出
2)可以使用逆序栈将链表节点压入栈,再进行操作
3)利用链表的逆序求解,这样不会占用额外空间复杂度
"""
from linkedlist.toolcls import Node, PrintMixin
class ListAddTool(PrintMixin):
@staticmethod
def add_list(head1, head2):
if head1 is None:
return head2
if head2 is None:
return head1
reversed_list1 = ListAddTool.revert_linked_list(head1)
reversed_list2 = ListAddTool.revert_linked_list(head2)
new_head = None
new_list = None
flag = 0
while reversed_list1 is not None or reversed_list2 is not None:
if reversed_list1 is None:
value1 = 0
else:
value1 = reversed_list1.value
if reversed_list2 is None:
value2 = 0
else:
value2 = reversed_list2.value
temp = value1 + value2 + flag
if temp/10 >= 1:
flag = 1
if new_list is None:
new_head = Node(temp % 10)
new_list = new_head
else:
new_list.next = Node(temp % 10)
new_list = new_list.next
else:
flag = 0
if new_list is None:
new_head = Node(temp)
new_list = new_head
else:
new_list.next = Node(temp)
new_list = new_list.next
if reversed_list1 is not None:
reversed_list1 = reversed_list1.next
if reversed_list2 is not None:
reversed_list2 = reversed_list2.next
if flag == 1:
new_list.next = Node(1)
reversed_new_head = ListAddTool.revert_linked_list(new_head)
return reversed_new_head
@staticmethod
def revert_linked_list(head):
pre = None
while head is not None:
next = head.next
head.next = pre
pre = head
head = next
return pre
if __name__ == '__main__':
node1 = Node(9)
node1.next = Node(9)
node1.next.next = Node(9)
node2 = Node(1)
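    # 999 + 1 = 1000, so the printed list should be 1->0->0->0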
ListAddTool.print_list(ListAddTool.add_list(node1, node2)) | 26.340659 | 71 | 0.546099 | 278 | 2,397 | 4.510791 | 0.248201 | 0.066986 | 0.035885 | 0.064593 | 0.208931 | 0.118022 | 0.106858 | 0.106858 | 0.106858 | 0.059011 | 0 | 0.051351 | 0.382562 | 2,397 | 91 | 72 | 26.340659 | 0.795946 | 0.099708 | 0 | 0.241935 | 0 | 0 | 0.003719 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.032258 | false | 0 | 0.016129 | 0 | 0.129032 | 0.016129 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00a6a2f92810e78323611c3932598a5778ba1e13 | 1,310 | py | Python | src/cnn_cifar10/mixup.py | chrismolli/redes_neuronales_semester_project | 3309d102b809b395af39f7b570927e23d10db5ea | [
"MIT"
] | null | null | null | src/cnn_cifar10/mixup.py | chrismolli/redes_neuronales_semester_project | 3309d102b809b395af39f7b570927e23d10db5ea | [
"MIT"
] | null | null | null | src/cnn_cifar10/mixup.py | chrismolli/redes_neuronales_semester_project | 3309d102b809b395af39f7b570927e23d10db5ea | [
"MIT"
] | null | null | null | import numpy as np
def mixup_extend_data(x,y,n):
""" MIXUP_EXTEND_DATA will use the mixup technique to append
n inter-class representations to the given data. y must be
in a one hot representation.
"""
    # containers for the newly generated samples
x_extend = []
y_extend = []
# create new data
for i in range(n):
# draw two indices
first = int(x.shape[0] * np.random.rand())
second = int(x.shape[0] * np.random.rand())
        while second == first:
            second = int(x.shape[0] * np.random.rand())
# draw mixup ratio from [0.2,0.4]
mix_ratio = 0.2 * (np.random.rand() + 1)
# mix up
(x_, y_) = mixup(x[first],x[second],y[first],y[second],mix_ratio)
# append to extended data set
x_extend.append(x_)
y_extend.append(y_)
# join datasets
x_extend = np.stack(x_extend,axis=0)
x_extend = np.concatenate([x,x_extend],axis=0)
y_extend = np.stack(y_extend, axis=0)
y_extend = np.concatenate([y, y_extend], axis=0)
# return modified dataset
return x_extend, y_extend
def mixup(x1,x2,y1,y2,mix_ratio):
""" MIXUP creates a inter-class datapoint using mix_ratio
"""
x = mix_ratio * x1 + (1-mix_ratio) * x2
y = mix_ratio * y1 + (1-mix_ratio) * y2
return (x,y)
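
# A hedged usage sketch (not from the original module): extend a toy one-hot
# labelled dataset with five mixed samples. The shapes are assumptions.
if __name__ == '__main__':
    x = np.random.rand(10, 4)                   # 10 samples, 4 features
    y = np.eye(2)[np.random.randint(0, 2, 10)]  # one-hot labels
    x_ext, y_ext = mixup_extend_data(x, y, 5)
    print(x_ext.shape, y_ext.shape)             # (15, 4) (15, 2)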
| 31.190476 | 73 | 0.600763 | 209 | 1,310 | 3.62201 | 0.339713 | 0.084544 | 0.063408 | 0.035667 | 0.136063 | 0.136063 | 0.058124 | 0 | 0 | 0 | 0 | 0.026123 | 0.269466 | 1,310 | 41 | 74 | 31.95122 | 0.76489 | 0.268702 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.045455 | 0 | 0.227273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00a6ba612c79d98cfdae3a445d8e46b386ab0def | 5,014 | py | Python | mlc_tools/module_php/writer.py | mlc-tools/mlc-tools | 1ee8e82e438cda2cc1efd334d69773d1a29a0e0c | [
"MIT"
] | 1 | 2018-05-07T09:32:57.000Z | 2018-05-07T09:32:57.000Z | mlc_tools/module_php/writer.py | mlc-tools/mlc-tools | 1ee8e82e438cda2cc1efd334d69773d1a29a0e0c | [
"MIT"
] | 4 | 2019-09-27T09:33:34.000Z | 2020-04-13T13:48:02.000Z | mlc_tools/module_php/writer.py | mlc-tools/mlc-tools | 1ee8e82e438cda2cc1efd334d69773d1a29a0e0c | [
"MIT"
] | 1 | 2018-02-23T01:04:44.000Z | 2018-02-23T01:04:44.000Z | from ..base import WriterBase
from ..core.object import AccessSpecifier
from .serializer import Serializer
class Writer(WriterBase):
def __init__(self, out_directory):
WriterBase.__init__(self, out_directory)
def write_class(self, cls):
self.set_initial_values(cls)
declaration_list = ''
initialization_list = ''
for member in cls.members:
declare, init = self.write_object(member)
declaration_list += declare + '\n'
if init:
initialization_list += init + '\n'
functions = ''
for method in cls.functions:
text = self.write_function(method)
functions += text
imports = ''
name = cls.name
extend = ''
        include_pattern = '\nrequire_once "{}.php";'
if cls.superclasses:
extend = ' extends ' + cls.superclasses[0].name
            imports += include_pattern.format(cls.superclasses[0].name)
for obj in cls.members:
if self.model.has_class(obj.type):
if obj.type != cls.name:
                    imports += include_pattern.format(obj.type)
elif obj.type in ['list', 'map']:
for arg in obj.template_args:
if self.model.has_class(arg.type) and arg.type != cls.name:
                        imports += include_pattern.format(arg.type)
        imports += include_pattern.format('Factory')
if 'DataStorage' in functions:
            imports += include_pattern.format('DataStorage')
constructor_args, constructor_body = self.get_constructor_data(cls)
out = PATTERN_FILE.format(name=name,
extend=extend,
declarations=declaration_list,
initialize_list=initialization_list,
functions=functions,
imports=imports,
constructor_args=constructor_args,
constructor_body=constructor_body)
return [
('%s.php' % cls.name, self.prepare_file(out))
]
def write_object(self, obj):
out_init = ''
value = obj.initial_value
cls_type = self.model.get_class(obj.type) if self.model.has_class(obj.type) else None
if (value in [None, '"NONE"'] and not obj.is_pointer) or (cls_type and cls_type.type == 'enum'):
if obj.type == "string":
value = '""'
elif obj.type == "int":
value = "0"
elif obj.type == "float":
value = "0"
elif obj.type == "uint":
value = "0"
elif obj.type == "bool":
value = "false"
elif obj.type == "list":
value = "array()"
elif obj.type == "map":
value = "array()"
else:
if cls_type is not None and cls_type.type == 'enum':
value = None
if obj.initial_value:
initial_value = obj.initial_value.replace('::', '::$')
else:
initial_value = '{}::${}'.format(cls_type.name, cls_type.members[0].name)
out_init = '$this->{} = {};'.format(obj.name, initial_value)
elif cls_type:
out_init = '$this->{} = new {}();'.format(obj.name, obj.type)
if obj.is_static:
out_declaration = AccessSpecifier.to_string(obj.access) + ' static ${0} = {1};'
else:
out_declaration = AccessSpecifier.to_string(obj.access) + ' ${0} = {1};'
out_declaration = out_declaration.format(obj.name, Serializer().convert_initialize_value(value))
return out_declaration, out_init
def prepare_file(self, text):
text = self.prepare_file_codestype_php(text)
text = text.replace('::TYPE', '::$TYPE')
text = text.replace('nullptr', 'null')
text = text.replace('foreach(', 'foreach (')
text = text.replace('for(', 'for (')
text = text.replace('if(', 'if (')
text = text.replace(' extends', ' extends')
text = text.strip()
return text
def get_method_arg_pattern(self, obj):
return '{type} ${name}={value}' if obj.initial_value is not None else '{type} ${name}'
def get_method_pattern(self, method):
return PATTERN_METHOD
def get_required_args_to_function(self, method):
return None
def add_static_modifier_to_method(self, text):
return 'static ' + text
PATTERN_FILE = '''<?php
{imports}
class {name} {extend}
{{
//members:
{declarations}
public function __construct({constructor_args})
{{
{initialize_list}
{constructor_body}
}}
//functions
{functions}
}};
?>
'''
PATTERN_METHOD = '''{access} function {name}({args})
{{
{body}
}}
'''
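
# Illustrative only (not part of mlc-tools): PATTERN_METHOD.format(
#     access='public', name='getName', args='', body='return $this->name;')
# expands to:
#   public function getName()
#   {
#   return $this->name;
#   }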
| 34.342466 | 104 | 0.534503 | 522 | 5,014 | 4.942529 | 0.195402 | 0.037985 | 0.029845 | 0.050388 | 0.137209 | 0.084496 | 0.084496 | 0 | 0 | 0 | 0 | 0.003038 | 0.343438 | 5,014 | 145 | 105 | 34.57931 | 0.78068 | 0 | 0 | 0.121951 | 0 | 0 | 0.127044 | 0.006183 | 0 | 0 | 0 | 0 | 0 | 1 | 0.065041 | false | 0 | 0.089431 | 0.03252 | 0.219512 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00a723f498f71a64d605273ee1d3b575ef1bfb52 | 823 | py | Python | logistic_regression.py | Nikilnick97/Natural-Language-Processing | 5d2118b8ee9517b8313ed6204061ddefa07d31c0 | [
"MIT"
] | null | null | null | logistic_regression.py | Nikilnick97/Natural-Language-Processing | 5d2118b8ee9517b8313ed6204061ddefa07d31c0 | [
"MIT"
] | null | null | null | logistic_regression.py | Nikilnick97/Natural-Language-Processing | 5d2118b8ee9517b8313ed6204061ddefa07d31c0 | [
"MIT"
] | null | null | null | from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from local_time import LocalTime
class Logistic_Regression:
@staticmethod
def get_best_hyperparameter(X_train, y_train, y_val, X_val):
# This gets the best hyperparameter for Regularisation
best_accuracy = 0.0
best_c = 0.0
for c in [0.01, 0.05, 0.25, 0.5, 1]:
lr = LogisticRegression(C=c)
lr.fit(X_train, y_train)
accuracy_ = accuracy_score(y_val, lr.predict(X_val))
if accuracy_ > best_accuracy:
best_accuracy = accuracy_
best_c = c
print ("---Accuracy for C=%s: %s" % (c, accuracy_))
print(LocalTime.get(), "best hyperparameter for regularisation: c = ", best_c)
return best_c | 39.190476 | 86 | 0.63548 | 110 | 823 | 4.518182 | 0.4 | 0.040241 | 0.084507 | 0.04829 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026936 | 0.27825 | 823 | 21 | 87 | 39.190476 | 0.809764 | 0.063183 | 0 | 0 | 0 | 0 | 0.088312 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.166667 | 0 | 0.333333 | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00acc1b4537dd37e9aabfa6f56d35705dc7f3a8e | 3,017 | py | Python | core/app/alarm/tests/test_save_motion.py | mxmaxime/mx-tech-house | f6b66b8390b348e48d4c6ea0da51e409f3845fd6 | [
"MIT"
] | 2 | 2021-04-29T19:28:59.000Z | 2021-04-29T21:20:32.000Z | core/app/alarm/tests/test_save_motion.py | mxmaxime/mx-tech-house | f6b66b8390b348e48d4c6ea0da51e409f3845fd6 | [
"MIT"
] | 101 | 2020-06-26T19:51:24.000Z | 2021-03-28T09:35:55.000Z | core/app/alarm/tests/test_save_motion.py | mxmaxime/mx-tech-house | f6b66b8390b348e48d4c6ea0da51e409f3845fd6 | [
"MIT"
] | null | null | null | import dataclasses
from alarm.use_cases.data import Detection
import uuid
from decimal import Decimal
from django.forms import model_to_dict
from django.test import TestCase
from django.utils import timezone
from freezegun import freeze_time
from alarm.business.in_motion import save_motion
from alarm.factories import AlarmStatusFactory
from camera.factories import CameraROIFactory, CameraRectangleROIFactory
from alarm.models import AlarmStatus
from camera.models import CameraMotionDetectedBoundingBox, CameraMotionDetected, CameraRectangleROI
from devices.models import Device
class SaveMotionTestCase(TestCase):
def setUp(self) -> None:
self.alarm_status: AlarmStatus = AlarmStatusFactory()
self.device: Device = self.alarm_status.device
self.event_ref = str(uuid.uuid4())
def test_save_motion(self):
start_motion_time = timezone.now()
with freeze_time(start_motion_time):
save_motion(self.device, [], self.event_ref, True)
motion = CameraMotionDetected.objects.filter(device__device_id=self.device.device_id)
self.assertTrue(motion.exists())
motion = motion[0]
self.assertEqual(motion.motion_started_at, start_motion_time)
self.assertEqual(str(motion.event_ref), self.event_ref)
self.assertIsNone(motion.motion_ended_at)
end_motion_time = timezone.now()
with freeze_time(end_motion_time):
save_motion(self.device, [], self.event_ref, False)
motion = CameraMotionDetected.objects.get(device__device_id=self.device.device_id)
self.assertEqual(motion.motion_started_at, start_motion_time)
self.assertEqual(motion.motion_ended_at, end_motion_time)
self.assertEqual(str(motion.event_ref), self.event_ref)
def test_save_motion_rectangles(self):
detections = (
Detection(
bounding_box=[],
bounding_box_point_and_size={'x': 10, 'y': 15, 'w': 200, 'h': 150},
class_id='people',
score=0.8
),
)
save_motion(self.device, detections, self.event_ref, True)
motions = CameraMotionDetected.objects.filter(device__device_id=self.device.device_id)
        self.assertEqual(len(motions), 1)
motion = motions[0]
bounding_boxes = CameraMotionDetectedBoundingBox.objects.filter(camera_motion_detected=motion)
        self.assertEqual(len(bounding_boxes), len(detections))
for bounding_box, detection in zip(bounding_boxes, detections):
detection_plain = dataclasses.asdict(detection)
expected_bounding_box = detection_plain['bounding_box_point_and_size']
expected_bounding_box['score'] = detection.score
self.assertEqual(
model_to_dict(bounding_box, exclude=('camera_motion_detected', 'id')),
expected_bounding_box
)
| 39.181818 | 102 | 0.694398 | 344 | 3,017 | 5.825581 | 0.27907 | 0.049401 | 0.035928 | 0.053892 | 0.323852 | 0.300898 | 0.300898 | 0.244012 | 0.228044 | 0.186128 | 0 | 0.007253 | 0.223069 | 3,017 | 76 | 103 | 39.697368 | 0.847696 | 0 | 0 | 0.066667 | 0 | 0 | 0.021883 | 0.016247 | 0 | 0 | 0 | 0 | 0.166667 | 1 | 0.05 | false | 0 | 0.233333 | 0 | 0.3 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00ad9bd4f7342edae60739235bd9aaa111ee0531 | 16,881 | py | Python | whisper/app.py | jphacks/SD_1809 | e7f354cf9c66f46d3e60a9a55f810d007f8b885e | [
"MIT"
] | null | null | null | whisper/app.py | jphacks/SD_1809 | e7f354cf9c66f46d3e60a9a55f810d007f8b885e | [
"MIT"
] | null | null | null | whisper/app.py | jphacks/SD_1809 | e7f354cf9c66f46d3e60a9a55f810d007f8b885e | [
"MIT"
] | 2 | 2018-10-21T02:31:58.000Z | 2020-11-06T12:45:37.000Z | from __future__ import unicode_literals
import errno
import os
import sys
import tempfile
import concurrent.futures as futures
import json
import re
from argparse import ArgumentParser
from flask import Flask, request, abort
from linebot import (
LineBotApi, WebhookHandler
)
from linebot.exceptions import (
LineBotApiError, InvalidSignatureError
)
from linebot.models import (
MessageEvent, TextMessage, TextSendMessage,
SourceUser, SourceGroup, SourceRoom,
TemplateSendMessage, ConfirmTemplate, MessageAction,
ButtonsTemplate, ImageCarouselTemplate, ImageCarouselColumn, URIAction,
PostbackAction, DatetimePickerAction,
CameraAction, CameraRollAction, LocationAction,
CarouselTemplate, CarouselColumn, PostbackEvent,
StickerMessage, StickerSendMessage, LocationMessage, LocationSendMessage,
ImageMessage, VideoMessage, AudioMessage, FileMessage,
UnfollowEvent, FollowEvent, JoinEvent, LeaveEvent, BeaconEvent,
FlexSendMessage, BubbleContainer, ImageComponent, BoxComponent,
TextComponent, SpacerComponent, IconComponent, ButtonComponent,
SeparatorComponent, QuickReply, QuickReplyButton
)
app = Flask(__name__)
# get channel_secret and channel_access_token from your environment variable
channel_secret = os.getenv('LINE_CHANNEL_SECRET', None)
channel_access_token = os.getenv('LINE_CHANNEL_ACCESS_TOKEN', None)
if channel_secret is None:
print('Specify LINE_CHANNEL_SECRET as environment variable.')
sys.exit(1)
if channel_access_token is None:
print('Specify LINE_CHANNEL_ACCESS_TOKEN as environment variable.')
sys.exit(1)
print(channel_secret, file=sys.stderr)
print(channel_access_token, file=sys.stderr)
line_bot_api = LineBotApi(channel_access_token)
handler = WebhookHandler(channel_secret)
static_tmp_path = os.path.join(os.path.dirname(__file__), 'static', 'tmp')
# ========================= whisper-specific fields ========================
from UserData import UserData
from PlantAnimator import PlantAnimator
from beaconWhisperEvent import BeaconWhisperEvent
# If this import fails, run: pip install clova-cek-sdk
import cek
from flask import jsonify
user_data = UserData()
plant_animator = PlantAnimator(user_data, line_bot_api)
beacon_whisper_event = BeaconWhisperEvent(line_bot_api,user_data)
# If reading user_id below throws an error, use the hard-coded id instead
# user_id = "U70418518785e805318db128d8014710e"
user_id = user_data.json_data["user_id"]
# =========================================================================
# =========================Fields for Clova===============================
# application_id: set to the Extension ID issued when the skill was registered in the LINE Clova app
clova = cek.Clova(
application_id = "com.clovatalk.whisper",
default_language = "ja",
debug_mode = False
)
# =========================================================================
# function for create tmp dir for download content
def make_static_tmp_dir():
try:
os.makedirs(static_tmp_path)
except OSError as exc:
if exc.errno == errno.EEXIST and os.path.isdir(static_tmp_path):
pass
else:
raise
@app.route("/callback", methods=['POST'])
def callback():
# get X-Line-Signature header value
signature = request.headers['X-Line-Signature']
# get request body as text
body = request.get_data(as_text=True)
app.logger.info("Request body: " + body)
# handle webhook body
try:
handler.handle(body, signature)
except LineBotApiError as e:
print("Got exception from LINE Messaging API: %s\n" % e.message)
for m in e.error.details:
print(" %s: %s" % (m.property, m.message))
print("\n")
except InvalidSignatureError:
abort(400)
return 'OK'
# Endpoint that accepts POST requests to /clova
@app.route('/clova', methods=['POST'])
def my_service():
body_dict = clova.route(body=request.data, header=request.headers)
response = jsonify(body_dict)
response.headers['Content-Type'] = 'application/json;charset-UTF-8'
return response
# Handlers for the LINE webhook callback are defined below
# Fired when a user follows the bot
@handler.add(FollowEvent)
def follow_event(event):
global user_id
user_id = event.source.user_id
user_data.set_user_id(user_id)
line_bot_api.reply_message(
event.reply_token, TextSendMessage(text="初めまして。whisperです!\nよろしくね(^^♪"))
@handler.add(MessageEvent, message=TextMessage)
def handle_text_message(event):
print("text message")
text = event.message.text
split_msg = re.split('[\ | ]', text)
reply_texts = create_reply(split_msg, event, source="text")
if reply_texts is not None:
reply_texts = (reply_texts,) if isinstance(reply_texts, str) else reply_texts
msgs = [TextSendMessage(text=s) for s in reply_texts]
line_bot_api.reply_message(event.reply_token, msgs)
@handler.add(MessageEvent, message=LocationMessage)
def handle_location_message(event):
line_bot_api.reply_message(
event.reply_token,
LocationSendMessage(
title=event.message.title, address=event.message.address,
latitude=event.message.latitude, longitude=event.message.longitude
)
)
@handler.add(MessageEvent, message=StickerMessage)
def handle_sticker_message(event):
line_bot_api.reply_message(
event.reply_token,
StickerSendMessage(
package_id=event.message.package_id,
sticker_id=event.message.sticker_id)
)
# Other Message Type
@handler.add(MessageEvent, message=(ImageMessage, VideoMessage, AudioMessage))
def handle_content_message(event):
if isinstance(event.message, ImageMessage):
ext = 'jpg'
elif isinstance(event.message, VideoMessage):
ext = 'mp4'
elif isinstance(event.message, AudioMessage):
ext = 'm4a'
else:
return
message_content = line_bot_api.get_message_content(event.message.id)
with tempfile.NamedTemporaryFile(dir=static_tmp_path, prefix=ext + '-', delete=False) as tf:
for chunk in message_content.iter_content():
tf.write(chunk)
tempfile_path = tf.name
dist_path = tempfile_path + '.' + ext
dist_name = os.path.basename(dist_path)
os.rename(tempfile_path, dist_path)
line_bot_api.reply_message(
event.reply_token, [
TextSendMessage(text='Save content.'),
TextSendMessage(text=request.host_url + os.path.join('static', 'tmp', dist_name))
])
@handler.add(MessageEvent, message=FileMessage)
def handle_file_message(event):
message_content = line_bot_api.get_message_content(event.message.id)
with tempfile.NamedTemporaryFile(dir=static_tmp_path, prefix='file-', delete=False) as tf:
for chunk in message_content.iter_content():
tf.write(chunk)
tempfile_path = tf.name
dist_path = tempfile_path + '-' + event.message.file_name
dist_name = os.path.basename(dist_path)
os.rename(tempfile_path, dist_path)
line_bot_api.reply_message(
event.reply_token, [
TextSendMessage(text='Save file.'),
TextSendMessage(text=request.host_url + os.path.join('static', 'tmp', dist_name))
])
@handler.add(UnfollowEvent)
def handle_unfollow():
app.logger.info("Got Unfollow event")
@handler.add(JoinEvent)
def handle_join(event):
line_bot_api.reply_message(
event.reply_token,
TextSendMessage(text='Joined this ' + event.source.type))
@handler.add(LeaveEvent)
def handle_leave():
app.logger.info("Got leave event")
@handler.add(PostbackEvent)
def handle_postback(event):
if event.postback.data == 'ping':
line_bot_api.reply_message(
event.reply_token, TextSendMessage(text='pong'))
elif event.postback.data == 'datetime_postback':
line_bot_api.reply_message(
event.reply_token, TextSendMessage(text=event.postback.params['datetime']))
elif event.postback.data == 'date_postback':
line_bot_api.reply_message(
event.reply_token, TextSendMessage(text=event.postback.params['date']))
elif event.postback.data in ('set_beacon_on', 'set_beacon_off'):
        # Behavior for the "Yes"/"No" buttons when configuring whether to use the beacon
beacon_whisper_event.set_beacon(event)
else:
        # Ask for confirmation before deleting a plant's name
data = event.postback.data.split()
if data[0] == 'delete_plant':
plant_animator.delete_plant(data[1])
elif data[0] == 'delete_plant_cancel':
line_bot_api.reply_message(
event.reply_token,
TextSendMessage(
text='ありがとう^^'
)
)
# Handler invoked when a LINE beacon is detected
@handler.add(BeaconEvent)
def handle_beacon(event):
if plant_animator.listen_beacon_span():
beacon_whisper_event.activation_msg(event)
    if user_data.json_data['use_line_beacon'] == 1:
        # While the beacon is in eco mode, assume the user has been home the
        # whole time and skip the greeting
        if not plant_animator.check_beacon_eco_time():
line_bot_api.reply_message(
event.reply_token,
TextSendMessage(
text='おかえりなさい!'
))
plant_animator.listen_beacon(user_data.json_data['use_line_beacon'])
#--------------------------------------
# Dispatcher to the reply-building methods
#--------------------------------------
lines = (
"植物の呼び出し", " ハロー `植物の名前`",
"植物の登録:", " 登録 `植物の名前`",
"植物の削除", " 削除 `植物の名前`",
"会話の終了", ' またね')
help_msg = os.linesep.join(lines)
def create_reply(split_text, event=None, source=None):
"""
テキストとして受け取ったメッセージとclovaから受け取ったメッセージを同列に扱うために
応答メッセージ生成へのディスパッチ部分を抜き出す
input: string[]
output: None or iterable<string>
"""
def decorate_text(plant, text):
return plant.display_name + ": " + text
text = split_text[0]
if text == 'bye':
if isinstance(event.source, SourceGroup):
line_bot_api.reply_message(
event.reply_token, TextSendMessage(text='またね、今までありがとう'))
line_bot_api.leave_group(event.source.group_id)
elif isinstance(event.source, SourceRoom):
line_bot_api.reply_message(
event.reply_token, TextSendMessage(text='またね、今までありがとう'))
line_bot_api.leave_room(event.source.room_id)
else:
line_bot_api.reply_message(
event.reply_token,
TextSendMessage(text="この会話から退出させることはできません"))
    # Configure the beacon on behalf of the user
elif text in {'beacon', 'ビーコン'}:
return beacon_whisper_event.config_beacon_msg(event)
elif text in {"help", "ヘルプ"}:
return help_msg
elif text in {'またね', 'じゃあね', 'バイバイ'}:
plant = plant_animator.plant
text = plant_animator.disconnect()
if source == "text":
text = decorate_text(plant, text)
return text
    # Create and register a plant
elif text in {'登録', 'ようこそ'}:
if len(split_text) == 2:
name = split_text[1]
return plant_animator.register_plant(name)
elif len(split_text) == 1:
return "名前が設定されていません"
else:
return "メッセージが不正です", "例:登録 `植物の名前`"
    # Call up a random plant
elif text == "誰かを呼んで":
reply = plant_animator.clova_random_connect()
if source == "text":
reply = decorate_text(plant_animator.plant, reply)
return reply
    # Command to connect to a plant
elif split_text[0] in {'ハロー', 'hello', 'こんにちは', 'こんばんは', 'おはよう', 'ごきげんよう'}:
if len(split_text) == 2:
reply = plant_animator.connect(split_text[1])
if source == "text":
reply = decorate_text(plant_animator.plant, reply)
return reply
elif len(split_text) == 1:
return "植物が選択されていません"
else:
return "メッセージが不正です:", "例:ハロー `植物の名前`"
    # Command to delete a plant
elif split_text[0] == {'削除'}:
if len(split_text) == 2:
return plant_animator.delete_plant(split_text[1])
elif len(split_text) == 1:
return "植物が選択されていません"
else:
return "メッセージが不正です:" , "例:削除 `植物の名前`"
    # Disabled variant: confirm deletion with a template message first
# if split_msg[1] is not None:
# confirm_template = ConfirmTemplate(text= split_msg[1] +"の情報を削除します\n本当によろしいですか?\n", actions=[
# PostbackAction(label='Yes', data='delete_plant '+ split_msg[1], displayText='はい'),
# PostbackAction(label='No', data='delete_plant_cancel '+ split_msg[1], displayText='いいえ'),
# ])
# template_message = TemplateSendMessage(
# alt_text='Confirm alt text', template=confirm_template)
# line_bot_api.reply_message(event.reply_token, template_message)
# else:
# line_bot_api.reply_message(
# event.reply_token,
# TextSendMessage(
# text='植物が選択されていません'
# )
# )
else:
text = plant_animator.communicate(text)
if source == "text":
if plant_animator.connecting():
text = decorate_text(plant_animator.plant, text)
else:
text = [text, help_msg]
return text
# line_bot_api.reply_message(
# event.reply_token, TextSendMessage(text=event.message.text))
#--------------------------------------
# Dispatcher to the reply-building methods - end
#--------------------------------------
# Clova event handlers are defined below
# Handles the launch request
@clova.handle.launch
def launch_request_handler(clova_request):
welcome_japanese = cek.Message(message="おかえりなさい!", language="ja")
response = clova.response([welcome_japanese])
return response
@clova.handle.default
def no_response(clova_request):
text = plant_animator.communicate("hogehoge")
if plant_animator.connecting():
text = "%s: よくわかんないや" % plant_animator.plant.display_name
return clova.response(text)
# This is where Communicate fires.
# It is registered as the default handler for debugging; properly it should
# be written as @clova.handle.intent("Communication"), with the intent
# configured on the Clova app side.
# ToDo: leaving the Connect flow unconfigured is unfriendly; fine-tune the
# canned responses in the LINE Clova app (there may not be enough time this
# round).
# @clova.handle.default
# @clova.handle.intent("AskStatus")
# def communication(clova_request):
# msg = plant_animator.communicate("調子はどう?", None)
# if msg is None:
# msg = "誰ともお話ししていません"
# message_japanese = cek.Message(message=msg, language="ja")
# response = clova.response([message_japanese])
# return response
# @clova.handle.intent("AskWater")
# def ask_water(clova_request):
# msg = plant_animator.communicate("水はいる?", None)
# if msg is None:
# msg = "誰ともお話ししていません"
# message_japanese = cek.Message(message=msg, language="ja")
# response = clova.response([message_japanese])
# return response
# @clova.handle.intent("AskLuminous")
# def ask_luminous(clova_request):
# msg = plant_animator.communicate("日当たりはどう?", None)
# if msg is None:
# msg = "誰ともお話ししていません"
# message_japanese = cek.Message(message=msg, language="ja")
# response = clova.response([message_japanese])
# return response
#--------------------------
# start Clova setting
#--------------------------
def define_clova_handler(intent, text):
@clova.handle.intent(intent)
def handler(clova_request):
        # There may be a bug here,
        # depending on the format of `text`
print("clova intent = %s" % intent)
msg = create_reply([text], source="clova")
# msg = plant_animator.communicate(text, None)
if msg is None:
msg = "誰ともお話ししていません"
message_japanese = cek.Message(message=msg, language="ja")
response = clova.response([message_japanese])
return response
return handler
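# Note: the factory function above is deliberate. Registering handlers
# directly inside the loop below with a closure over `text` would hit
# Python's late-binding closures, leaving every intent bound to the last
# (intent, text) pair read from the JSON file.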
with open("data/clova_setting.json") as f:
js = json.load(f)
intent_text_dict = js["intent_text_dict"]
    # Register an event handler for each Clova intent
for k ,v in intent_text_dict.items():
define_clova_handler(k, v)
#-------------------------------
# end Clova setting
#-------------------------------
import time
# should be modified when required
def update():
plant_animator.update()
def main_loop(clock_span):
while 1:
time.sleep(clock_span)
update()
if __name__ == "__main__":
arg_parser = ArgumentParser(
usage='Usage: python ' + __file__ + ' [--port <port>] [--help]'
)
arg_parser.add_argument('-p', '--port', type=int, default=8000, help='port')
arg_parser.add_argument('-d', '--debug', default=False, help='debug')
options = arg_parser.parse_args()
# create tmp dir for download content
make_static_tmp_dir()
def push_message(msg):
line_bot_api.push_message(user_id, TextSendMessage(text=msg))
plant_animator.push_message = push_message
with futures.ThreadPoolExecutor(2) as exec:
exec.submit(app.run, debug=options.debug, port=options.port)
exec.submit(main_loop, 0.9)
| 32.842412 | 107 | 0.647237 | 1,846 | 16,881 | 5.706392 | 0.22156 | 0.017277 | 0.024682 | 0.025631 | 0.299506 | 0.275679 | 0.245681 | 0.240175 | 0.23315 | 0.232675 | 0 | 0.004779 | 0.219004 | 16,881 | 513 | 108 | 32.906433 | 0.794144 | 0.205794 | 0 | 0.239264 | 0 | 0 | 0.08482 | 0.011364 | 0 | 0 | 0 | 0.001949 | 0 | 1 | 0.070552 | false | 0.003067 | 0.058282 | 0.003067 | 0.196319 | 0.027607 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00ae834c9dc4befc6140c5f9a714686060eba6c7 | 815 | py | Python | list_and_dict.py | MauVal96/python-intermediate | ff9a4263b10b48feb07abc3e6ae471f77fa2e089 | [
"MIT"
] | null | null | null | list_and_dict.py | MauVal96/python-intermediate | ff9a4263b10b48feb07abc3e6ae471f77fa2e089 | [
"MIT"
] | null | null | null | list_and_dict.py | MauVal96/python-intermediate | ff9a4263b10b48feb07abc3e6ae471f77fa2e089 | [
"MIT"
] | null | null | null | # Nested Lists and Dictionaries
def run():
my_list = [1, "Hello", True, 4.5]
my_dict = {
"firstname": "Mauricio",
"lastname": "Valadez"
}
super_list = [
{"firstname": "Mauricio", "lastname": "Valadez"},
{"firstname": "Carlos", "lastname": "García"},
{"firstname": "Francisco", "lastname": "Hernández"},
{"firstname": "Laura", "lastname": "Pérez"},
{"firstname": "Gabriela", "lastname": "Rojas"}
]
super_dict = {
"natural_nums": [1,2,3,4,5],
"integer_nums": [-1, -2, 0, 1, 2],
"float_nums": [1.2, 3.7, 9.86]
}
for key, value in super_dict.items():
print(key, "-", value)
for item in super_list:
print(item["firstname"], "-", item["lastname"])
if __name__ == '__main__':
run() | 27.166667 | 60 | 0.521472 | 89 | 815 | 4.58427 | 0.52809 | 0.019608 | 0.044118 | 0.156863 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.03367 | 0.271166 | 815 | 30 | 61 | 27.166667 | 0.653199 | 0.035583 | 0 | 0 | 0 | 0 | 0.319745 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041667 | false | 0 | 0 | 0 | 0.041667 | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00af5e0ffc302574cdfb6f82132240cba4c78b5d | 460 | py | Python | batman.py | Elry/py-basics | 7e1003ee5a0124291b5d3afe6ec074d93883fb19 | [
"MIT"
] | null | null | null | batman.py | Elry/py-basics | 7e1003ee5a0124291b5d3afe6ec074d93883fb19 | [
"MIT"
] | null | null | null | batman.py | Elry/py-basics | 7e1003ee5a0124291b5d3afe6ec074d93883fb19 | [
"MIT"
] | null | null | null | from bat import Bat
from ubermesch import Ubermesch
class Batman(Bat, Ubermesch):
def __init__(self, *args, **kwargs):
Ubermesch.__init__(self, 'anonymous', movie=True, superpowers=['Wealthy'], *args, **kwargs)
Bat.__init__(self, *args, can_fly=False, **kwargs)
self.name = "neo"
def sing(self):
return "tototototoototot"
if __name__ == '__main__':
sup = Batman
print(Batman.__mro__)
print(sup.get_species())
print(sup.sing())
| 25.555556 | 95 | 0.686957 | 58 | 460 | 5 | 0.534483 | 0.082759 | 0.082759 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.163043 | 460 | 17 | 96 | 27.058824 | 0.753247 | 0 | 0 | 0 | 0 | 0 | 0.093478 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.142857 | 0.071429 | 0.428571 | 0.214286 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00afcb16830c0c578437b292e2d8f2163e751822 | 9,400 | py | Python | application.py | Lambda-School-Labs/Job-Funnl-ds-2 | 3603964839a8946363f9238b091aba9abace7d0d | [
"MIT"
] | null | null | null | application.py | Lambda-School-Labs/Job-Funnl-ds-2 | 3603964839a8946363f9238b091aba9abace7d0d | [
"MIT"
] | 2 | 2020-05-02T00:16:16.000Z | 2021-08-23T20:41:52.000Z | application.py | Lambda-School-Labs/Job-Funnl-ds-2 | 3603964839a8946363f9238b091aba9abace7d0d | [
"MIT"
] | 5 | 2020-03-05T17:19:15.000Z | 2020-03-26T13:44:15.000Z |
import os
import subprocess
import sys
import logging
import time
from os.path import join, dirname
from flask import Flask, jsonify, request, send_file
from flask.logging import default_handler
from datafunctions.log.log import startLog, getLogFile, tailLogFile
SCRAPER_NAME = './run_scrapers.py'
SCRAPER_NAME_PS = SCRAPER_NAME[2:]
MODEL_NAME = './run_models.py'
MODEL_NAME_PS = MODEL_NAME[2:]
LDA17_NN_PATH = join(dirname(__file__), 'datafunctions/model/models/lda17_files/nearest_neighbors')
LDA17_M_PATH = join(dirname(__file__), 'datafunctions/model/models/lda17_files/model')
LDA17_ME_PATH = join(dirname(__file__), 'datafunctions/model/models/lda17_files/model.expElogbeta.npy')
LDA17_MI_PATH = join(dirname(__file__), 'datafunctions/model/models/lda17_files/model.id2word')
LDA17_MS_PATH = join(dirname(__file__), 'datafunctions/model/models/lda17_files/model.state')
LDA17_ID_PATH = join(dirname(__file__), 'datafunctions/model/models/lda17_files/id2word')
startLog(getLogFile(__file__))
APP_LOG = logging.getLogger(__name__)
APP_LOG.info('Creating app...')
application = Flask(__name__)
werkzeug_logger = logging.getLogger('werkzeug')
for handler in APP_LOG.handlers:
werkzeug_logger.addHandler(handler)
application.logger.addHandler(handler)
@application.route('/')
def index():
return '''
<html><head></head><body>
Health check: <a href="/health">/health</a>
<br>
Start scrapers: <a href="/start">/start</a>
<br>
Kill scrapers: <a href="/kill">/kill</a>
<br>
Start models: <a href="/start-models">/start-models</a>
<br>
Kill models: <a href="/kill-models">/kill-models</a>
<br>
Application logs: <a href="/logs?file=application.py&lines=50">/logs?file=application.py</a>
<br>
Scraper logs: <a href="/logs?file=run_scrapers.py&lines=100">/logs?file=run_scrapers.py</a>
<br>
Model logs: <a href="/logs?file=run_models.py&lines=100">/logs?file=run_models.py</a>
</body></html>
'''
@application.route('/logs', methods=['GET'])
def logs():
"""
Gets the last n lines of a given log
"""
APP_LOG.info(f'/logs called with args {request.args}')
logfile = request.args.get('file', None)
lines = request.args.get('lines', 1000)
if logfile is None:
return('''
<pre>
Parameters:
file: The file to get logs for
Required
Usually one of either application.py or run_scrapers.py
lines: Number of lines to get
Defaults to 1000
</pre>
''')
try:
res = tailLogFile(logfile, n_lines=lines)
return (f'<pre>{res}</pre>')
except Exception as e:
return(f'Exception {type(e)} getting logs: {e}')
@application.route('/health', methods=['GET'])
def health():
"""
Prints various health info about the machine.
"""
APP_LOG.info('/health called')
outputs = {}
outputs['scrapers running'] = check_running(SCRAPER_NAME)
outputs['models running'] = check_running(MODEL_NAME)
outputs['free'] = os.popen('free -h').read()
outputs['dstat'] = os.popen('dstat -cdlimnsty 1 0').read()
outputs['top'] = os.popen('top -bn1').read()
outputs['ps'] = os.popen('ps -Afly --forest').read()
APP_LOG.info(f'Health results: {outputs}')
r = ''
for key, val in outputs.items():
r += f'''
<hr />
<h4>{key}</h4>
<pre style="white-space: pre-wrap; overflow-wrap: break-word;">{val}</pre>
'''
return r
@application.route('/kill', methods=['GET', 'POST'])
def kill():
"""
Kills the web scrapers.
"""
initial_state = check_running(SCRAPER_NAME)
running = initial_state
try:
APP_LOG.info('/kill called')
tries = 0
max_tries = 5
while running and tries < max_tries:
APP_LOG.info(f'Scraper running, attempting to kill it (try {tries + 1} of {max_tries})')
r = os.system(
                rf'kill $(ps -Af | grep {SCRAPER_NAME_PS} | grep -v grep | grep -oP "^[a-zA-Z\s]+[0-9]+" | grep -oP "[0-9]+")'
)
APP_LOG.info(f'Kill call exited with code: {r}')
tries += 1
running = check_running(SCRAPER_NAME)
if running:
wait_time = 2
APP_LOG.info(f'Waiting {wait_time} seconds...')
time.sleep(wait_time)
except Exception as e:
APP_LOG.warn(f'Exception while killing scrapers: {e}')
APP_LOG.warn(e, exc_info=True)
return f'''
<html><body>
<h4>initially running</h4>
<pre>{initial_state}</pre>
<hr />
<h4>scrapers running</h4>
<pre>{running}</pre>
</html></body>
'''
@application.route('/start', methods=['GET', 'POST'])
def start():
"""
Starts the web scrapers.
"""
tries = 0
result = {
'running': False,
'tries': 0,
'message': 'Unknown failure.'
}
try:
APP_LOG.info('/start called')
max_tries = 5
while not check_running(SCRAPER_NAME) and tries < max_tries:
APP_LOG.info(f'Scraper not running, attempting to start it (try {tries + 1} of {max_tries})')
start_and_disown(SCRAPER_NAME)
wait_time = 0
APP_LOG.info(f'Waiting {wait_time} seconds...')
time.sleep(wait_time)
tries += 1
if check_running(SCRAPER_NAME):
APP_LOG.info(f'Scraper running.')
if tries == 0:
result = {
'running': True,
'tries': tries,
'message': f'{SCRAPER_NAME} already running.'
}
else:
result = {
'running': True,
'tries': tries,
'message': f'{SCRAPER_NAME} started after {tries} tries.'
}
else:
result = {
'running': False,
'tries': tries,
'message': f'Failed to start {SCRAPER_NAME} after {tries} tries.'
}
# APP_LOG.info(f'run_scrapers stdout: {p.stdout.read()}')
# APP_LOG.info(f'run_scrapers stderr: {p.stderr.read()}')
APP_LOG.info(f'result: {result}')
except Exception as e:
result = {
'running': False,
'tries': tries,
'message': f'Aborting after {type(e)} exception on try {tries}: {e}'
}
APP_LOG.warn(f'result: {result}')
APP_LOG.warn(e, exc_info=True)
return jsonify(result)
@application.route('/kill-models', methods=['GET', 'POST'])
def kill_models():
"""
Kills the topic models.
"""
initial_state = check_running(MODEL_NAME)
running = initial_state
try:
APP_LOG.info('/kill-models called')
tries = 0
max_tries = 5
while running and tries < max_tries:
APP_LOG.info(f'Models running, attempting to kill it (try {tries + 1} of {max_tries})')
r = os.system(
                rf'kill $(ps -Af | grep {MODEL_NAME_PS} | grep -v grep | grep -oP "^[a-zA-Z\s]+[0-9]+" | grep -oP "[0-9]+")'
)
APP_LOG.info(f'Kill call exited with code: {r}')
tries += 1
running = check_running(MODEL_NAME)
if running:
wait_time = 2
APP_LOG.info(f'Waiting {wait_time} seconds...')
time.sleep(wait_time)
except Exception as e:
APP_LOG.warn(f'Exception while killing models: {e}')
APP_LOG.warn(e, exc_info=True)
return f'''
<html><body>
<h4>initially running</h4>
<pre>{initial_state}</pre>
<hr />
<h4>models running</h4>
<pre>{running}</pre>
</html></body>
'''
@application.route('/start-models', methods=['GET', 'POST'])
def start_models():
"""
Starts the topic models.
"""
tries = 0
result = {
'running': False,
'tries': 0,
'message': 'Unknown failure.'
}
try:
APP_LOG.info('/start-models called')
max_tries = 5
while not check_running(MODEL_NAME) and tries < max_tries:
APP_LOG.info(f'Models not running, attempting to start it (try {tries + 1} of {max_tries})')
start_and_disown(MODEL_NAME)
wait_time = 0
APP_LOG.info(f'Waiting {wait_time} seconds...')
time.sleep(wait_time)
tries += 1
if check_running(MODEL_NAME):
APP_LOG.info(f'Models running.')
if tries == 0:
result = {
'running': True,
'tries': tries,
'message': f'{MODEL_NAME} already running.'
}
else:
result = {
'running': True,
'tries': tries,
'message': f'{MODEL_NAME} started after {tries} tries.'
}
else:
result = {
'running': False,
'tries': tries,
'message': f'Failed to start {MODEL_NAME} after {tries} tries.'
}
APP_LOG.info(f'result: {result}')
except Exception as e:
result = {
'running': False,
'tries': tries,
'message': f'Aborting after {type(e)} exception on try {tries}: {e}'
}
APP_LOG.warn(f'result: {result}')
APP_LOG.warn(e, exc_info=True)
return jsonify(result)
@application.route('/models/lda17-nn')
def models_lda17_nn():
'''
Returns the pickled NearestNeighbors model for the LDA17 model.
'''
# At some point, this should be replaced with an autogenerated route or a static route
return send_file(LDA17_NN_PATH)
@application.route('/models/lda17-m')
def models_lda17_m():
return send_file(LDA17_M_PATH)
@application.route('/models/lda17-m.expElogbeta.npy')
def models_lda17_me():
return send_file(LDA17_ME_PATH)
@application.route('/models/lda17-m.id2word')
def models_lda17_mi():
return send_file(LDA17_MI_PATH)
@application.route('/models/lda17-m.state')
def models_lda17_ms():
return send_file(LDA17_MS_PATH)
@application.route('/models/lda17-id')
def models_lda17_id():
return send_file(LDA17_ID_PATH)
def check_running(pname):
APP_LOG.info(f'check_running called, pname: {pname}')
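    # The ps | grep pipeline exits 0 only when a matching process line
    # survives the grep -v filters, so a 0 shell status means "running".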
result = os.system(f'ps -Af | grep -v grep | grep -v log | grep {pname}')
APP_LOG.info(f'exit code: {result}')
return result == 0
def start_and_disown(pname):
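    # Detach the child: nohup ignores SIGHUP, /dev/null replaces the stdio
    # streams, and os.setpgrp gives it its own process group so it outlives
    # the Flask worker.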
with open(os.devnull, 'r+b', 0) as DEVNULL:
subprocess.Popen(['nohup', sys.executable, pname],
stdin=DEVNULL, stdout=DEVNULL, stderr=DEVNULL, close_fds=True, preexec_fn=os.setpgrp)
if __name__ == '__main__':
APP_LOG.info('Starting Flask dev server...')
application.run() | 26.934097 | 113 | 0.67266 | 1,369 | 9,400 | 4.450694 | 0.162893 | 0.036435 | 0.044313 | 0.036107 | 0.555063 | 0.504021 | 0.465288 | 0.45741 | 0.43788 | 0.388971 | 0 | 0.016527 | 0.163191 | 9,400 | 349 | 114 | 26.934097 | 0.758073 | 0.047021 | 0 | 0.431655 | 0 | 0.021583 | 0.417229 | 0.096379 | 0 | 0 | 0 | 0 | 0 | 1 | 0.053957 | false | 0 | 0.032374 | 0.021583 | 0.136691 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00b06694be371e9a46f14a04d71474d24287deb3 | 1,757 | py | Python | badbaby/static/expyfun/ids_expyfun.py | ktavabi/bad-baby | 241fbafd3ca9e23f25aae4eb3bbc90e666c76e8f | [
"MIT"
] | 1 | 2020-12-08T05:29:02.000Z | 2020-12-08T05:29:02.000Z | badbaby/static/expyfun/ids_expyfun.py | ktavabi/bad-baby | 241fbafd3ca9e23f25aae4eb3bbc90e666c76e8f | [
"MIT"
] | 15 | 2020-03-13T20:46:50.000Z | 2021-09-01T21:31:43.000Z | badbaby/static/expyfun/ids_expyfun.py | ktavabi/bad-baby | 241fbafd3ca9e23f25aae4eb3bbc90e666c76e8f | [
"MIT"
] | 3 | 2020-09-21T18:30:42.000Z | 2020-12-14T19:15:27.000Z | # -*- coding: utf-8 -*-
# Authors: Kambiz Tavabi <ktavabi@gmail.com>
#
# simplified bsd-3 license
"""Script for infant basic auditory testing using infant directed speech (IDS)"""
import numpy as np
from os import path as op
from expyfun import ExperimentController
from expyfun.stimuli import read_wav
from expyfun._trigger_controllers import decimals_to_binary
from expyfun import assert_version
assert_version('8511a4d')
fs = 24414
stim_dir = op.join(op.dirname(__file__), 'stimuli', 'ids')
sound_files = ['inForest_part-1-rms.wav',
'inForest_part-2-rms.wav',
'inForest_part-3-rms.wav',
'inForest_part-4-rms.wav',
'inForest_part-5-rms.wav']
sound_files = {j: op.join(stim_dir, k)
for j, k in enumerate(sound_files)}
wavs = [np.ascontiguousarray(read_wav(v)) for _, v in sorted(sound_files.items())]
# number of bits needed to encode a stimulus index as a binary trigger code
n_bits = int(np.floor(np.log2(len(wavs)))) + 1
with ExperimentController('IDS', stim_db=75, stim_fs=fs, stim_rms=0.01,
check_rms=None, suppress_resamp=True) as ec:
for ii, wav in enumerate(wavs):
# stamp trigger line prior to stimulus onset
ec.clear_buffer()
ec.load_buffer(wav[0])
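        # decimals_to_binary([ii], [n_bits]) encodes the stimulus index as an
        # n_bits-wide binary pattern for the TTL trigger line.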
ec.identify_trial(ec_id=str(ii), ttl_id=decimals_to_binary([ii], [n_bits]))
# our next start time is our last start time, plus
# the stimulus duration
stim_len = 1./fs * len(wav[0][0]) # in seconds
ec.start_stimulus() # stamps stimulus onset
ec.wait_secs(stim_len) # wait through stimulus duration to stop the playback
ec.stop()
ec.trial_ok()
ec.check_force_quit() # make sure we're not trying to quit
| 38.195652 | 85 | 0.6642 | 261 | 1,757 | 4.298851 | 0.501916 | 0.053476 | 0.049911 | 0.064171 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020588 | 0.225953 | 1,757 | 45 | 86 | 39.044444 | 0.804412 | 0.260672 | 0 | 0 | 0 | 0 | 0.105304 | 0.089704 | 0 | 0 | 0 | 0 | 0.066667 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00b5e451e5aaad2a4c60d85866a433f916c39652 | 5,760 | py | Python | run_test.py | drstmane/whatismyschema | bb93df49d513cb5ebafb275b83ec3dc1f6eea5bb | [
"BSD-3-Clause"
] | 8 | 2018-10-30T15:30:29.000Z | 2021-05-17T06:17:56.000Z | run_test.py | drstmane/whatismyschema | bb93df49d513cb5ebafb275b83ec3dc1f6eea5bb | [
"BSD-3-Clause"
] | 10 | 2019-07-13T12:17:07.000Z | 2019-08-27T18:23:53.000Z | run_test.py | drstmane/whatismyschema | bb93df49d513cb5ebafb275b83ec3dc1f6eea5bb | [
"BSD-3-Clause"
] | 2 | 2019-08-11T20:55:49.000Z | 2019-09-25T13:24:33.000Z | #!/bin/env python
# coding: utf8
#
# WhatIsMySchema
#
# Copyright (c) 2018 Tim Gubner
#
#
import unittest
from whatismyschema import *
class WhatIsMySchemaTestCase(unittest.TestCase):
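	# Shared helpers: normalize inferred type strings for comparison and
	# assert on the column types / NULL counts that whatismyschema produces.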
def fix_type(self, t):
return t.lower().replace(" ", "").replace("\n", "")
def check_type(self, col, expect):
types = col.determine_type()
self.assertTrue(len(types) > 0)
expect = self.fix_type(expect)
data = self.fix_type(types[0])
self.assertEqual(data, expect)
def check_types(self, cols, types):
self.assertEqual(len(cols), len(types))
for (col, tpe) in zip(cols, types):
self.check_type(col, tpe)
def check_null(self, cols, isnull):
self.assertEqual(len(cols), len(isnull))
for (col, null) in zip(cols, isnull):
if null:
self.assertTrue(col.num_nulls > 0)
else:
self.assertEqual(col.num_nulls, 0)
def check_all_null_val(self, cols, val):
isnull = []
for r in range(0, len(cols)):
isnull.append(val)
self.check_null(cols, isnull)
def check_all_null(self, cols):
self.check_all_null_val(cols, True)
def check_none_null(self, cols):
self.check_all_null_val(cols, False)
class TableTests(WhatIsMySchemaTestCase):
def testDates1(self):
table = Table()
table.seperator = ","
table.push("2013-08-29,2013-08-05 15:23:13.716532")
self.check_types(table.columns,
["date", "datetime"])
self.check_none_null(table.columns)
table.check()
def testSep1(self):
table = Table()
table.seperator = "seperator"
table.push("Hallo|seperator|Welt")
self.check_types(table.columns,
["varchar(6)", "varchar(5)"])
self.check_none_null(table.columns)
table.check()
def testInt1(self):
table = Table()
table.seperator = "|"
table.push("0")
table.push("-127")
table.push("127")
self.check_types(table.columns,
["tinyint"])
self.check_none_null(table.columns)
table.check()
def testDec1(self):
table = Table()
table.seperator = "|"
table.push("42")
table.push("42.44")
table.push("42.424")
table.push("4.424")
self.check_types(table.columns,
["decimal(5,3)"])
self.check_none_null(table.columns)
table.check()
def test1(self):
table = Table()
table.seperator = "|"
table.push("Str1|Str2|42|42|13")
table.push("Ha|Str3333|42.42|Test|34543534543543")
self.check_types(table.columns,
["varchar(4)", "varchar(7)", "decimal(4,2)", "varchar(4)", "bigint"])
self.check_none_null(table.columns)
table.check()
def testColMismatch1(self):
table = Table()
table.seperator = ","
table.push("1")
table.push("1,2")
table.push("1")
self.check_types(table.columns,
["tinyint", "tinyint"])
self.check_null(table.columns, [False, True])
table.check()
def testIssue4(self):
table = Table()
table.seperator = ","
table.push("0.0390625")
table.push("0.04296875")
self.check_types(table.columns,
["decimal(8,8)"])
self.check_null(table.columns, [False])
table.check()
def testDecZeros(self):
table = Table()
table.seperator = "|"
table.push(".1000|000.0|.4")
table.push(".123|1.1|.423")
self.check_types(table.columns, [
"decimal(3, 3)", "decimal(2, 1)", "decimal(3, 3)"])
self.check_null(table.columns, [False, False, False])
table.check()
def testIssue7a(self):
table = Table()
table.seperator = "|"
table.push("123|.1|1.23")
table.push("1|.123|12.3")
self.check_types(table.columns,
["tinyint", "decimal(3,3)", "decimal(4,2)"])
self.check_null(table.columns, [False, False, False])
table.check()
def testIssue7b(self):
table = Table()
table.seperator = "|"
table.push("123|1|1.23|12.3")
table.push("0.123|.1|.123|.123")
self.check_types(table.columns,
["decimal(6,3)", "decimal(2,1)", "decimal(4,3)", "decimal(5,3)"])
self.check_null(table.columns, [False, False, False, False])
table.check()
def testIssue5a(self):
table = Table()
table.seperator = "|"
table.push("1||a")
table.push("2||b")
table.push("3||c")
self.check_types(table.columns,
["tinyint", "boolean", "varchar(1)"])
self.check_null(table.columns, [False, True, False])
table.check()
def testIssue5b(self):
table = Table()
table.seperator = "|"
table.parent_null_value = "="
table.push("1|=|a")
table.push("2|=|b")
table.push("3|=|c")
self.check_types(table.columns,
["tinyint", "boolean", "varchar(1)"])
self.check_null(table.columns, [False, True, False])
table.check()
class CliTests(WhatIsMySchemaTestCase):
def run_process(self, cmd, file):
path = os.path.dirname(os.path.abspath(__file__))
p = subprocess.Popen("python {path}/whatismyschema.py{sep}{cmd}{sep}{path}/{file}".format(
path=path, cmd=cmd, file=file,
sep=" " if len(cmd) > 0 else ""), shell=True,
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
if p.returncode:
raise Exception(err)
else:
# Print stdout from cmd call
if err is None:
err = ""
if out is None:
out = ""
self.assertEqual(0, len(err.decode('utf8').strip()))
return self.fix_type(out.decode('utf8').strip())
def testParallel1(self):
for num_process in [1, 2, 4, 8]:
for chunk_size in [1, 10, 100]:
for begin in [0, 1]:
flags = "--parallel-chunk-size {chunk_size} --parallelism {parallel} --begin {begin}".format(
chunk_size=chunk_size, parallel=num_process, begin=begin)
out = self.run_process(flags, "test1.txt")
if begin == 0:
expect = self.fix_type("col0varchar(5)notnullcol1varchar(2)notnullcol2varchar(3)notnull")
self.assertEqual(out, expect)
elif begin == 1:
expect = self.fix_type("col0decimal(4,2)notnullcol1tinyintnotnullcol2smallintnotnull")
self.assertEqual(out, expect)
else:
assert(False)
if __name__ == '__main__':
unittest.main() | 24 | 98 | 0.661285 | 810 | 5,760 | 4.606173 | 0.207407 | 0.067542 | 0.045028 | 0.06111 | 0.449209 | 0.39614 | 0.294827 | 0.246047 | 0.207987 | 0.121683 | 0 | 0.049825 | 0.156771 | 5,760 | 240 | 99 | 24 | 0.718345 | 0.017535 | 0 | 0.357542 | 0 | 0 | 0.153601 | 0.044771 | 0 | 0 | 0 | 0 | 0.055866 | 1 | 0.117318 | false | 0 | 0.011173 | 0.005587 | 0.156425 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00b7bd96ba6d79608c698351436afa9e3e25d037 | 266 | py | Python | scrapper/urls.py | walaazidane/final-project- | eb52353b95078b19696df0f8cca1ea5fd88f0983 | [
"MIT"
] | 31 | 2017-07-25T13:22:57.000Z | 2021-01-18T10:05:54.000Z | scrapper/urls.py | shashank-sharma/mythical-learning | f1105fceee8196cda275d9c72398c4e3a99b3f3c | [
"MIT"
] | 34 | 2017-05-14T06:40:24.000Z | 2019-07-15T14:37:16.000Z | scrapper/urls.py | walaazidane/final-project- | eb52353b95078b19696df0f8cca1ea5fd88f0983 | [
"MIT"
] | 4 | 2017-07-29T08:18:48.000Z | 2019-09-17T15:37:08.000Z | from django.conf.urls import url
from . import views
urlpatterns = [
url(r'^problem/', views.problem, name = 'problem'),
url(r'^blog/', views.blog, name = 'blog'),
    url(r'^cpp/', views.cpp, name = 'cpp'),
url(r'^$', views.temple, name = 'temple'),
]
| 24.181818 | 55 | 0.597744 | 37 | 266 | 4.297297 | 0.378378 | 0.100629 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.18797 | 266 | 10 | 56 | 26.6 | 0.736111 | 0 | 0 | 0 | 0 | 0 | 0.154135 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00b863ca4baba5322bd85d3764d5e6d8375946cf | 1,018 | py | Python | publica_admin/admin/views_admin.py | publica-io/django-publica-admin | 27b36172048773cb697548494c843a376527d324 | [
"BSD-3-Clause"
] | null | null | null | publica_admin/admin/views_admin.py | publica-io/django-publica-admin | 27b36172048773cb697548494c843a376527d324 | [
"BSD-3-Clause"
] | null | null | null | publica_admin/admin/views_admin.py | publica-io/django-publica-admin | 27b36172048773cb697548494c843a376527d324 | [
"BSD-3-Clause"
] | null | null | null | from django.contrib import admin
try:
from views.models import *
except ImportError:
pass
else:
from images_admin import ImageInline
from ..mixins import *
class ViewLinkageInline(admin.StackedInline):
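    # Links a View to arbitrary content through a generic foreign key
    # (content_type + object_id); related_lookup_fields below is presumably
    # the django-grappelli hint for rendering that generic lookup.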
fields = (
'order',
'content_type',
'object_id',
'enabled'
)
model = ViewLinkage
extra = 0
related_lookup_fields = {
'generic': [['content_type', 'object_id'], ],
}
class ViewAdmin(TemplatesAdminMixin, PublicaModelAdminMixin, admin.ModelAdmin):
fields = (
'title',
'slug',
'short_title',
'text',
'template',
'enabled'
)
inlines = [
ViewLinkageInline,
ImageInline,
]
prepopulated_fields = {
'slug': ('title', )
}
class Media:
js = TinyMCETextMixin.Media.js
admin.site.register(View, ViewAdmin)
| 19.960784 | 83 | 0.509823 | 77 | 1,018 | 6.623377 | 0.636364 | 0.043137 | 0.066667 | 0.07451 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001623 | 0.394892 | 1,018 | 50 | 84 | 20.36 | 0.826299 | 0 | 0 | 0.102564 | 0 | 0 | 0.107073 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.025641 | 0.128205 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00b9bdc2fc25f447d02c74734b9528389a1c8e25 | 5,068 | py | Python | functional/tests/identity/v3/test_user.py | ankur-gupta91/osc-ip-cap | 9a64bbc31fcc0872f52ad2d92c550945eea5cc97 | [
"Apache-2.0"
] | null | null | null | functional/tests/identity/v3/test_user.py | ankur-gupta91/osc-ip-cap | 9a64bbc31fcc0872f52ad2d92c550945eea5cc97 | [
"Apache-2.0"
] | null | null | null | functional/tests/identity/v3/test_user.py | ankur-gupta91/osc-ip-cap | 9a64bbc31fcc0872f52ad2d92c550945eea5cc97 | [
"Apache-2.0"
] | null | null | null | # Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from tempest_lib.common.utils import data_utils
from functional.tests.identity.v3 import test_identity
class UserTests(test_identity.IdentityTests):
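    # Each test drives the `openstack user ...` CLI and asserts on the parsed
    # table/show output.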
def test_user_create(self):
self._create_dummy_user()
def test_user_delete(self):
username = self._create_dummy_user(add_clean_up=False)
raw_output = self.openstack('user delete '
'--domain %(domain)s '
'%(name)s' % {'domain': self.domain_name,
'name': username})
self.assertEqual(0, len(raw_output))
def test_user_list(self):
raw_output = self.openstack('user list')
items = self.parse_listing(raw_output)
self.assert_table_structure(items, test_identity.BASIC_LIST_HEADERS)
def test_user_set(self):
username = self._create_dummy_user()
raw_output = self.openstack('user show '
'--domain %(domain)s '
'%(name)s' % {'domain': self.domain_name,
'name': username})
user = self.parse_show_as_object(raw_output)
new_username = data_utils.rand_name('NewTestUser')
new_email = data_utils.rand_name() + '@example.com'
raw_output = self.openstack('user set '
'--email %(email)s '
'--name %(new_name)s '
'%(id)s' % {'email': new_email,
'new_name': new_username,
'id': user['id']})
self.assertEqual(0, len(raw_output))
raw_output = self.openstack('user show '
'--domain %(domain)s '
'%(name)s' % {'domain': self.domain_name,
'name': new_username})
updated_user = self.parse_show_as_object(raw_output)
self.assertEqual(user['id'], updated_user['id'])
self.assertEqual(new_email, updated_user['email'])
def test_user_set_default_project_id(self):
username = self._create_dummy_user()
project_name = self._create_dummy_project()
# get original user details
raw_output = self.openstack('user show '
'--domain %(domain)s '
'%(name)s' % {'domain': self.domain_name,
'name': username})
user = self.parse_show_as_object(raw_output)
# update user
raw_output = self.openstack('user set '
'--project %(project)s '
'--project-domain %(project_domain)s '
'%(id)s' % {'project': project_name,
'project_domain':
self.domain_name,
'id': user['id']})
self.assertEqual(0, len(raw_output))
# get updated user details
raw_output = self.openstack('user show '
'--domain %(domain)s '
'%(name)s' % {'domain': self.domain_name,
'name': username})
updated_user = self.parse_show_as_object(raw_output)
# get project details
raw_output = self.openstack('project show '
'--domain %(domain)s '
'%(name)s' % {'domain': self.domain_name,
'name': project_name})
project = self.parse_show_as_object(raw_output)
# check updated user details
self.assertEqual(user['id'], updated_user['id'])
self.assertEqual(project['id'], updated_user['default_project_id'])
def test_user_show(self):
username = self._create_dummy_user()
raw_output = self.openstack('user show '
'--domain %(domain)s '
'%(name)s' % {'domain': self.domain_name,
'name': username})
items = self.parse_show(raw_output)
self.assert_show_fields(items, self.USER_FIELDS)
| 49.686275 | 78 | 0.500592 | 505 | 5,068 | 4.792079 | 0.239604 | 0.07438 | 0.069835 | 0.090909 | 0.480579 | 0.447107 | 0.384298 | 0.371901 | 0.371901 | 0.301653 | 0 | 0.002632 | 0.400158 | 5,068 | 101 | 79 | 50.178218 | 0.793421 | 0.12944 | 0 | 0.533333 | 0 | 0 | 0.130692 | 0 | 0 | 0 | 0 | 0 | 0.12 | 1 | 0.08 | false | 0 | 0.026667 | 0 | 0.12 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00ba9c94000c21d53effafeefc5ec42d4ed5165f | 1,825 | py | Python | src/main.py | gltchitm/encpad | faab373f0f224c8a8dbd6d09a5fad5c14d02d03a | [
"MIT"
] | null | null | null | src/main.py | gltchitm/encpad | faab373f0f224c8a8dbd6d09a5fad5c14d02d03a | [
"MIT"
] | null | null | null | src/main.py | gltchitm/encpad | faab373f0f224c8a8dbd6d09a5fad5c14d02d03a | [
"MIT"
] | null | null | null | import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk
from store import store
from create_new_notepad import CreateNewNotepad
from notepad_editor import NotepadEditor
from unlock_notepad import UnlockNotepad
from welcome import Welcome
store['FORMAT_VERSION'] = '2'
store['APPLICATION_VERSION'] = '1.0.0'
class Encpad(Gtk.Window):
def __init__(self):
Gtk.Window.__init__(self, title='Encpad')
store['password'] = None
store['notepad'] = None
self.set_border_width(20)
self.set_default_size(360, 560)
self.set_resizable(False)
stack = Gtk.Stack()
stack.set_transition_type(Gtk.StackTransitionType.SLIDE_LEFT_RIGHT)
stack.set_transition_duration(200)
store['stack'] = stack
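        # every screen is a named child of the stack; navigation happens by
        # switching the visible child name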
stack.add_named(Welcome(), 'welcome')
stack.add_named(UnlockNotepad(), 'unlock_notepad')
stack.add_named(CreateNewNotepad(), 'create_new_notepad')
stack.add_named(NotepadEditor(), 'notepad_editor')
stack.set_visible_child_name('welcome')
self.connect('delete-event', self.window_delete)
self.add(stack)
def window_delete(self, _widget, _event):
if store['confirm_close']:
dialog = Gtk.MessageDialog(
message_type=Gtk.MessageType.QUESTION,
buttons=Gtk.ButtonsType.YES_NO,
text='You have unsaved changes.'
)
dialog.format_secondary_text('Are you sure you want to exit without saving them?')
response = dialog.run()
dialog.destroy()
if response != Gtk.ResponseType.YES:
return True
return False
if __name__ == '__main__':
window = Encpad()
window.connect('destroy', Gtk.main_quit)
window.show_all()
Gtk.main()
| 28.968254 | 94 | 0.650959 | 213 | 1,825 | 5.314554 | 0.450704 | 0.028269 | 0.045936 | 0.035336 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012328 | 0.244384 | 1,825 | 62 | 95 | 29.435484 | 0.808557 | 0 | 0 | 0 | 0 | 0 | 0.134795 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.042553 | false | 0.021277 | 0.148936 | 0 | 0.255319 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00babb53ff1072afcca974e8051db281df6bfdf9 | 492 | py | Python | jit_compiling/test.py | ChambinLee/norm_cuda | 627573a16dc50c517254057225c31cb7193017fc | [
"MIT"
] | 3 | 2021-11-13T07:35:45.000Z | 2022-03-13T13:00:09.000Z | jit_compiling/test.py | ChambinLee/norm_cuda | 627573a16dc50c517254057225c31cb7193017fc | [
"MIT"
] | null | null | null | jit_compiling/test.py | ChambinLee/norm_cuda | 627573a16dc50c517254057225c31cb7193017fc | [
"MIT"
] | 1 | 2022-02-28T12:26:05.000Z | 2022-02-28T12:26:05.000Z | import torch
from torch.utils.cpp_extension import load
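# `load` JIT-compiles the listed C++/CUDA sources into an importable
# extension on first run (and reuses the build cache afterwards)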
norm = load(name="two_norm",
sources=["two_norm/two_norm_bind.cpp", "two_norm/two_norm_kernel.cu"],
verbose=True)
n,m = 8,3
a = torch.randn(n,m)
b = torch.randn(n,m)
c = torch.zeros(1)
print("a:\n",a)
print("\nb:\n",b)
a = a.cuda()
b = b.cuda()
c = c.cuda()
norm.two_norm(a,b,c,n,m)
torch.cuda.synchronize()
print("\nresult by two_norm:",c)
print("\nresult by torch.norm:",torch.norm(a-b))
| 18.222222 | 89 | 0.621951 | 90 | 492 | 3.288889 | 0.355556 | 0.165541 | 0.111486 | 0.094595 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007444 | 0.180894 | 492 | 26 | 90 | 18.923077 | 0.727047 | 0 | 0 | 0 | 0 | 0 | 0.23374 | 0.107724 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.111111 | 0.222222 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00bd3c651188f07e87054fbd345e246fcb4dcb27 | 3,931 | py | Python | tint/protocols/msgpackp.py | bmuller/tint | e74a3e4c46f71dfcb2574920467ad791d29de6fe | [
"MIT"
] | 1 | 2015-02-18T18:33:44.000Z | 2015-02-18T18:33:44.000Z | tint/protocols/msgpackp.py | 8468/tint | e74a3e4c46f71dfcb2574920467ad791d29de6fe | [
"MIT"
] | null | null | null | tint/protocols/msgpackp.py | 8468/tint | e74a3e4c46f71dfcb2574920467ad791d29de6fe | [
"MIT"
] | null | null | null | from collections import deque
from twisted.protocols.policies import TimeoutMixin
from twisted.internet.protocol import Protocol
from twisted.internet.defer import Deferred, TimeoutError, maybeDeferred
import umsgpack
from tint.log import Logger
class NoSuchCommand(Exception):
"""
Exception raised when a non existent command is called.
"""
class Command(object):
def __init__(self, command, args):
self.command = command
self.args = args
self._deferred = Deferred()
def encode(self):
c = [self.command] + list(self.args)
return umsgpack.packb(c)
@classmethod
    def decode(cls, data):
        parts = umsgpack.unpackb(data)
        return cls(parts[0], parts[1:])
def success(self, value):
self._deferred.callback(value)
def fail(self, error):
self._deferred.errback(error)
def __str__(self):
args = ", ".join([str(a) for a in self.args])
return "<Command %s(%s)>" % (self.command, args)
class MsgPackProtocol(Protocol, TimeoutMixin):
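    # Wire format (as parsed in dataReceived below):
    #   "<length> <marker><msgpack payload>"
    # where the marker byte is '>' for a command, '<' for a response and
    # 'e' for an error.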
_disconnected = False
_buffer = ''
_expectedLength = None
def __init__(self, timeOut=10):
self._current = deque()
self.persistentTimeOut = self.timeOut = timeOut
self.log = Logger(system=self)
def _cancelCommands(self, reason):
while self._current:
cmd = self._current.popleft()
cmd.fail(reason)
def timeoutConnection(self):
self._cancelCommands(TimeoutError("Connection timeout"))
self.transport.loseConnection()
def connectionLost(self, reason):
self._disconnected = True
self._cancelCommands(reason)
Protocol.connectionLost(self, reason)
def dataReceived(self, data):
self.resetTimeout()
self._buffer += data
        if self._expectedLength is None:
            if ' ' not in self._buffer:
                return  # wait until the full length prefix has arrived
            parts = self._buffer.split(' ', 1)
            self._expectedLength = int(parts[0])
            self._buffer = parts[1]
if len(self._buffer) >= self._expectedLength:
data = self._buffer[1:self._expectedLength]
if self._buffer[0] == '>':
self.commandReceived(data)
elif self._buffer[0] == '<':
self.responseReceived(data)
elif self._buffer[0] == 'e':
self.errorReceived(data)
self._buffer = self._buffer[self._expectedLength:]
self._expectedLength = None
if len(self._buffer) > 0:
self.dataReceived('')
def commandReceived(self, data):
cmdObj = Command.decode(data)
cmd = getattr(self, "cmd_%s" % cmdObj.command, None)
if cmd is None:
raise NoSuchCommand("%s is not a valid command" % cmdObj.command)
self.log.debug("RPC command received: %s" % cmdObj)
d = maybeDeferred(cmd, *cmdObj.args)
d.addCallback(self.sendResult)
d.addErrback(self.sendError)
def sendError(self, error):
result = umsgpack.packb(str(error))
self.transport.write("%i e%s" % (len(result) + 1, result))
def sendResult(self, result):
result = umsgpack.packb(result)
self.transport.write("%i <%s" % (len(result) + 1, result))
def sendCommand(self, cmd, args):
if not self._current:
self.setTimeout(self.persistentTimeOut)
cmdObj = Command(cmd, args)
self._current.append(cmdObj)
data = cmdObj.encode()
self.transport.write("%i >%s" % (len(data) + 1, data))
return cmdObj._deferred
def responseReceived(self, data):
unpacked = umsgpack.unpackb(data)
self.log.debug("result received: %s" % data)
self._current.popleft().success(unpacked)
def errorReceived(self, data):
unpacked = umsgpack.unpackb(data)
self.log.debug("error received: %s" % data)
self._current.popleft().fail(Exception(unpacked))
| 30.710938 | 77 | 0.618418 | 431 | 3,931 | 5.529002 | 0.25522 | 0.04616 | 0.018464 | 0.018884 | 0.11582 | 0.099874 | 0.039446 | 0.039446 | 0.039446 | 0 | 0 | 0.005183 | 0.263801 | 3,931 | 127 | 78 | 30.952756 | 0.818245 | 0.013991 | 0 | 0.021053 | 0 | 0 | 0.03886 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.178947 | false | 0 | 0.063158 | 0 | 0.347368 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00be5d7cca91ea09fe8f701d0fc9b5adbb1c6967 | 1,435 | py | Python | discoin/utils.py | Discoin/discoin.py | 4a3459dfaab6695fe88d05290465a1b7842b3606 | [
"MIT"
] | 2 | 2020-07-26T11:29:47.000Z | 2021-09-08T22:38:35.000Z | discoin/utils.py | Discoin/discoin.py | 4a3459dfaab6695fe88d05290465a1b7842b3606 | [
"MIT"
] | 8 | 2020-02-11T14:23:38.000Z | 2021-04-16T21:38:15.000Z | discoin/utils.py | Discoin/discoin.py | 4a3459dfaab6695fe88d05290465a1b7842b3606 | [
"MIT"
] | null | null | null | from .config import DOMAIN
from .errors import InternalServerError, BadRequest, InvalidMethod, WebTimeoutError
from aiohttp import ClientSession
from asyncio import TimeoutError
async def api_request(session: ClientSession, method: str, url_path: str, headers: dict=None, json: dict=None):
    '''
    Arguments marked with * are required.
    *`session` = the aiohttp session
    *`method` = `GET`, `POST`, or `PATCH`
    *`url_path` = the API endpoint path (appended to DOMAIN)
    `headers` = headers for the API request
    `json` = JSON body for the API request
    '''
url = DOMAIN + url_path
try:
if method.upper() == "GET":
api_response = await session.get(url, headers=headers, json=json)
elif method.upper() == "POST":
api_response = await session.post(url, headers=headers, json=json)
elif method.upper() == "PATCH":
api_response = await session.patch(url, headers=headers, json=json)
else:
raise InvalidMethod("Invalid method provided. Must be `GET`, `POST`, or `PATCH`")
except TimeoutError:
raise WebTimeoutError("Your request has timed out, most likely due to the discoin API being down.")
if api_response.status >= 500:
raise InternalServerError(f"The Discoin API returned the status code {api_response.status}")
elif api_response.status >= 400:
raise BadRequest(f"The Discoin API returned the status code {api_response.status}")
return api_response | 42.205882 | 111 | 0.672474 | 178 | 1,435 | 5.353933 | 0.359551 | 0.09234 | 0.071354 | 0.072403 | 0.219307 | 0.193075 | 0.193075 | 0.193075 | 0.109129 | 0.109129 | 0 | 0.005405 | 0.226481 | 1,435 | 34 | 112 | 42.205882 | 0.853153 | 0 | 0 | 0 | 0 | 0 | 0.218063 | 0.034174 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.181818 | 0 | 0.227273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00be7b3c81233764127a44082d1d7982d1b01666 | 896 | py | Python | sprinkl_async/const.py | ptorsten/sprinkl-async | 1f62b1799f19bc604e13f65cbde1f705caefcd78 | [
"Apache-2.0"
] | 5 | 2020-03-15T21:24:56.000Z | 2020-07-17T02:14:29.000Z | sprinkl_async/const.py | ptorsten/sprinkl-async | 1f62b1799f19bc604e13f65cbde1f705caefcd78 | [
"Apache-2.0"
] | null | null | null | sprinkl_async/const.py | ptorsten/sprinkl-async | 1f62b1799f19bc604e13f65cbde1f705caefcd78 | [
"Apache-2.0"
] | 2 | 2019-08-12T00:40:29.000Z | 2020-06-21T22:35:17.000Z | """Declare package constants."""
#
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from datetime import timedelta
from .__version__ import __version__
DEFAULT_TIMEOUT = 30
USER_AGENT = "sprinkl-async/" + __version__
TOKEN_LIFETIME = timedelta(hours=23)
SPRINKL_ENDPOINT = "https://api.sprinkl.com/v1"
SPRINKL_AUTH_ENDPOINT = SPRINKL_ENDPOINT + "/authenticate"
| 33.185185 | 74 | 0.767857 | 128 | 896 | 5.226563 | 0.695313 | 0.089686 | 0.038864 | 0.047833 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017083 | 0.15067 | 896 | 26 | 75 | 34.461538 | 0.862024 | 0.642857 | 0 | 0 | 0 | 0 | 0.17608 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.285714 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00be7f4d50bc2df415f28d4e095c788f4191d555 | 10,338 | py | Python | mmdet/core/anchor/orient_anchor_target.py | qinr/MRDet | 44b608ec6007db204dcb6b82eff9e356fc7b56d0 | [
"Apache-2.0"
] | 8 | 2021-05-25T10:49:00.000Z | 2021-11-28T04:01:02.000Z | mmdet/core/anchor/orient_anchor_target.py | qinr/MRDet | 44b608ec6007db204dcb6b82eff9e356fc7b56d0 | [
"Apache-2.0"
] | null | null | null | mmdet/core/anchor/orient_anchor_target.py | qinr/MRDet | 44b608ec6007db204dcb6b82eff9e356fc7b56d0 | [
"Apache-2.0"
] | 2 | 2021-07-22T12:53:37.000Z | 2021-12-17T12:53:51.000Z | import torch
from ..bbox import (PseudoSampler, assign_and_sample, bbox2delta, build_assigner,
delta2hbboxrec5, hbbox2rbboxRec_v2, rbboxPoly2Rectangle,
rec2target)
from ..utils import multi_apply
def orient_anchor_target(bbox_pred_list,
anchor_list,
valid_flag_list,
gt_bboxes_list,
gt_rbboxes_poly_list,
img_metas,
target_means_hbb,
target_stds_hbb,
target_means_obb,
target_stds_obb,
cfg,
gt_bboxes_ignore_list=None,
gt_labels_list=None,
label_channels=1,
sampling=True,
unmap_outputs=True):
"""Compute regression and classification targets for anchors.
Args:
anchor_list (list[list]): Multi level anchors of each image.
valid_flag_list (list[list]): Multi level valid flags of each image.
gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image.
img_metas (list[dict]): Meta info of each image.
target_means (Iterable): Mean value of regression targets.
target_stds (Iterable): Std value of regression targets.
cfg (dict): RPN train configs.
Returns:
tuple
"""
num_imgs = len(img_metas)
assert len(anchor_list) == len(valid_flag_list) == num_imgs
# anchor number of multi levels
num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]]
# concat all level anchors and flags to a single tensor
bbox_pred_new_list = []
for i in range(num_imgs):
assert len(anchor_list[i]) == len(valid_flag_list[i])
anchor_list[i] = torch.cat(anchor_list[i])
valid_flag_list[i] = torch.cat(valid_flag_list[i])
bbox_preds = []
for j in range(len(bbox_pred_list)):
bbox_preds.append(bbox_pred_list[j][i].permute(1, 2, 0).reshape(-1, 4))
bbox_preds = torch.cat(bbox_preds)
bbox_pred_new_list.append(bbox_preds)
# compute targets for each image
if gt_bboxes_ignore_list is None:
gt_bboxes_ignore_list = [None for _ in range(num_imgs)]
if gt_labels_list is None:
gt_labels_list = [None for _ in range(num_imgs)]
(all_labels, all_label_weights, all_bbox_targets, all_bbox_weights,
all_obb_targets, all_obb_weights,
pos_inds_list, neg_inds_list) = multi_apply(
orient_anchor_target_single,
bbox_pred_new_list,
anchor_list,
valid_flag_list,
gt_bboxes_list,
gt_rbboxes_poly_list,
gt_bboxes_ignore_list,
gt_labels_list,
img_metas,
target_means_hbb=target_means_hbb,
target_stds_hbb=target_stds_hbb,
target_means_obb=target_means_obb,
target_stds_obb=target_stds_obb,
cfg=cfg,
label_channels=label_channels,
sampling=sampling,
unmap_outputs=unmap_outputs)
# no valid anchors
if any([labels is None for labels in all_labels]):
return None
# sampled anchors of all images
num_total_pos = sum([max(inds.numel(), 1) for inds in pos_inds_list])
num_total_neg = sum([max(inds.numel(), 1) for inds in neg_inds_list])
# split targets to a list w.r.t. multiple levels
labels_list = images_to_levels(all_labels, num_level_anchors)
label_weights_list = images_to_levels(all_label_weights, num_level_anchors)
bbox_targets_list = images_to_levels(all_bbox_targets, num_level_anchors)
bbox_weights_list = images_to_levels(all_bbox_weights, num_level_anchors)
obb_targets_list = images_to_levels(all_obb_targets, num_level_anchors)
obb_weights_list = images_to_levels(all_obb_weights, num_level_anchors)
return (labels_list, label_weights_list, bbox_targets_list,
bbox_weights_list, obb_targets_list, obb_weights_list,
num_total_pos, num_total_neg)
def images_to_levels(target, num_level_anchors):
"""Convert targets by image to targets by feature level.
[target_img0, target_img1] -> [target_level0, target_level1, ...]
"""
target = torch.stack(target, 0)
level_targets = []
start = 0
for n in num_level_anchors:
end = start + n
level_targets.append(target[:, start:end].squeeze(0))
start = end
return level_targets
def orient_anchor_target_single(bbox_pred,
flat_anchors,
valid_flags,
gt_bboxes,
gt_rbboxes_poly,
gt_bboxes_ignore,
gt_labels,
img_meta,
target_means_hbb,
target_stds_hbb,
target_means_obb,
target_stds_obb,
cfg,
label_channels=1,
sampling=True,
unmap_outputs=True):
inside_flags = anchor_inside_flags(flat_anchors, valid_flags,
img_meta['img_shape'][:2],
cfg.allowed_border)
    # inside_flags: flags the anchors that lie inside the image
if not inside_flags.any():
        return (None, ) * 8  # length must match the 8-tuple returned below
# assign gt and sample anchors
anchors = flat_anchors[inside_flags, :]
bbox_pred = bbox_pred[inside_flags, :]
    # `anchors` now holds only the anchors that fall inside the image
    # match anchors against gt_bboxes to get positive/negative samples, then
    # wrap the results with a sampler for convenient downstream use
if sampling:
assign_result, sampling_result = assign_and_sample(
anchors, gt_bboxes, gt_bboxes_ignore, None, cfg)
else:
bbox_assigner = build_assigner(cfg.assigner)
assign_result = bbox_assigner.assign(anchors, gt_bboxes,
gt_bboxes_ignore, gt_labels)
bbox_sampler = PseudoSampler()
sampling_result = bbox_sampler.sample(assign_result, anchors,
gt_bboxes)
num_valid_anchors = anchors.shape[0]
bbox_targets = torch.zeros_like(anchors)
bbox_weights = torch.zeros_like(anchors)
labels = anchors.new_zeros(num_valid_anchors, dtype=torch.long)
label_weights = anchors.new_zeros(num_valid_anchors, dtype=torch.float)
obb_targets = torch.zeros_like(anchors)
obb_weights = torch.zeros_like(anchors)
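    # oriented-box (obb) targets/weights; filled in for positive samples below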
    pos_inds = sampling_result.pos_inds  # indices of positive samples
    neg_inds = sampling_result.neg_inds  # indices of negative samples
pos_bbox_pred = bbox_pred[pos_inds, :]
if len(pos_inds) > 0:
pos_bbox_targets = bbox2delta(sampling_result.pos_bboxes,
sampling_result.pos_gt_bboxes,
target_means_hbb, target_stds_hbb)
        # convert bboxes to deltas, normalized with target_means/target_stds
bbox_targets[pos_inds, :] = pos_bbox_targets
        bbox_weights[pos_inds, :] = 1.0  # positive samples get weight 1, negatives 0
pos_bbox_rec = delta2hbboxrec5(sampling_result.pos_bboxes,
pos_bbox_pred,
target_means_hbb,
target_stds_hbb)
pos_gt_rbboxes_poly = gt_rbboxes_poly[sampling_result.pos_assigned_gt_inds, :]
pos_gt_rbboxes_rec = rbboxPoly2Rectangle(pos_gt_rbboxes_poly)
pos_obb_targets = rec2target(pos_bbox_rec, pos_gt_rbboxes_rec,
target_means_obb, target_stds_obb)
obb_targets[pos_inds, :] = pos_obb_targets
obb_weights[pos_inds, :] = 1.0
if gt_labels is None:
labels[pos_inds] = 1
else:
labels[pos_inds] = gt_labels[sampling_result.pos_assigned_gt_inds]
if cfg.pos_weight <= 0:
label_weights[pos_inds] = 1.0
else:
label_weights[pos_inds] = cfg.pos_weight
if len(neg_inds) > 0:
label_weights[neg_inds] = 1.0
# map up to original set of anchors
if unmap_outputs:
num_total_anchors = flat_anchors.size(0)
labels = unmap(labels, num_total_anchors, inside_flags)
label_weights = unmap(label_weights, num_total_anchors, inside_flags)
bbox_targets = unmap(bbox_targets, num_total_anchors, inside_flags)
bbox_weights = unmap(bbox_weights, num_total_anchors, inside_flags)
obb_targets = unmap(obb_targets, num_total_anchors, inside_flags)
obb_weights = unmap(obb_weights, num_total_anchors, inside_flags)
    # labels: the label assigned to each anchor
    # label_weights: per-anchor cls_loss weight; negatives get 1, positives get 1 or cfg.pos_weight
    # bbox_targets: delta between each anchor and its matched gt_bbox, used for regression
    # bbox_weights: per-anchor bbox_reg weight; positives 1, negatives 0
    # pos_inds: indices of the positive anchors
    # neg_inds: indices of the negative anchors
return (labels, label_weights, bbox_targets, bbox_weights, obb_targets,
obb_weights, pos_inds, neg_inds)
# check whether anchors cross the image boundary
def anchor_inside_flags(flat_anchors,
valid_flags,
img_shape,
allowed_border=0):
img_h, img_w = img_shape[:2]
if allowed_border >= 0:
inside_flags = valid_flags & \
(flat_anchors[:, 0] >= -allowed_border).type(torch.uint8) & \
(flat_anchors[:, 1] >= -allowed_border).type(torch.uint8) & \
(flat_anchors[:, 2] < img_w + allowed_border).type(torch.uint8) & \
(flat_anchors[:, 3] < img_h + allowed_border).type(torch.uint8)
else:
inside_flags = valid_flags
return inside_flags
def unmap(data, count, inds, fill=0):
""" Unmap a subset of item (data) back to the original set of items (of
size count) """
if data.dim() == 1:
ret = data.new_full((count, ), fill)
ret[inds] = data
else:
new_size = (count, ) + data.size()[1:]
ret = data.new_full(new_size, fill)
ret[inds, :] = data
return ret
| 42.896266 | 87 | 0.600019 | 1,225 | 10,338 | 4.680816 | 0.161633 | 0.019533 | 0.023544 | 0.020928 | 0.323858 | 0.253401 | 0.136031 | 0.097314 | 0.039414 | 0.039414 | 0 | 0.00949 | 0.327239 | 10,338 | 240 | 88 | 43.075 | 0.814953 | 0.134842 | 0 | 0.186813 | 0 | 0 | 0.001046 | 0 | 0 | 0 | 0 | 0 | 0.010989 | 1 | 0.027473 | false | 0 | 0.016484 | 0 | 0.082418 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00c004e3df93f878ec9ce6f08213502410bf4836 | 12,588 | py | Python | eufySecurityApi/api.py | Rihan9/uefiSecurityApi | 27ef9446b1cab55244f2b90bafee33bcfcfe53b5 | [
"MIT"
] | 1 | 2021-02-17T22:32:48.000Z | 2021-02-17T22:32:48.000Z | eufySecurityApi/api.py | Rihan9/uefiSecurityApi | 27ef9446b1cab55244f2b90bafee33bcfcfe53b5 | [
"MIT"
] | null | null | null | eufySecurityApi/api.py | Rihan9/uefiSecurityApi | 27ef9446b1cab55244f2b90bafee33bcfcfe53b5 | [
"MIT"
] | null | null | null | from eufySecurityApi.const import (
TWO_FACTOR_AUTH_METHODS, API_BASE_URL, API_HEADERS, RESPONSE_ERROR_CODE,
ENDPOINT_LOGIN,ENDPOINT_DEVICE_LIST, DEVICE_TYPE, ENDPOINT_STATION_LIST, ENDPOINT_REQUEST_VERIFY_CODE,
ENDPOINT_TRUST_DEVICE_LIST, ENDPOINT_TRUST_DEVICE_ADD
)
import logging, json, copy, functools, requests, asyncio
from datetime import datetime, timedelta #, time as dtTime
from eufySecurityApi.model import Device
# import time
_LOGGER = logging.getLogger(__name__)
class Api():
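    # Thin client for the Eufy Security HTTP API: handles login (including
    # two-factor verification), token lifetime and device/station listing.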
def __init__(self, username=None, password=None, token=None, domain=API_BASE_URL, token_expire_at=None, preferred2FAMethod=TWO_FACTOR_AUTH_METHODS.EMAIL):
        self._username = username
self._password = password
self._preferred2FAMethod = preferred2FAMethod
self._token = token
self._tokenExpiration = None if token_expire_at is None else datetime.fromtimestamp(token_expire_at)
self._refreshToken = None
self._domain = domain
self.headers = API_HEADERS
self._LOGGER = logging.getLogger(__name__)
self.devices = {}
self.stations = {}
self._userId = None
# self.headers['timezone'] =
# dtTime(dtTime.fromisoformat(time.strptime(time.localtime(), '%HH:%MM'))) - dtTime(dtTime.fromisoformat(time.strptime(time.gmtime(), '%HH:%MM')))
@property
def userId(self):
return self._userId
async def authenticate(self):
        # (re)authenticate when there is no token or the token has expired
        if(self._token is None or self._tokenExpiration is None or self._tokenExpiration <= datetime.now()):
response = await self._request('POST', ENDPOINT_LOGIN, {
'email': self._username,
'password': self._password
}, self.headers)
if(response.status_code != 200):
                self._LOGGER.error('Unexpected response code: %s, on url: %s' % (response.status_code, response.request.url))
                raise LoginException('Unexpected response code: %s, on url: %s' % (response.status_code, response.request.url))
dataresult = response.json()
self._LOGGER.debug('login response: %s' % dataresult)
# self._LOGGER.debug('%s, %s' % (type(dataresult['code']), dataresult['code']))
if(RESPONSE_ERROR_CODE(dataresult['code']) == RESPONSE_ERROR_CODE.WHATEVER_ERROR):
self._token = dataresult['data']['auth_token']
self._tokenExpiration = datetime.fromtimestamp(dataresult['data']['token_expires_at'])
if('domain' in dataresult['data'] and dataresult['data']['domain'] != '' and dataresult['data']['domain'] != self._domain):
self._token = None
self._tokenExpiration = None
self._domain = dataresult['data']['domain']
self._LOGGER.info('Switching to new domain: %s', self._domain)
return await self.authenticate()
self._LOGGER.debug('Token: %s' %self._token)
self._LOGGER.debug('Token expire at: %s' % self._tokenExpiration)
self._userId = dataresult['data']['user_id']
return 'OK'
elif(RESPONSE_ERROR_CODE(dataresult['code']) == RESPONSE_ERROR_CODE.NEED_VERIFY_CODE):
self._LOGGER.info('need two factor authentication. Send verification code...')
#dataresult['data']
self._token = dataresult['data']['auth_token']
self._tokenExpiration = datetime.fromtimestamp(dataresult['data']['token_expires_at'])
self._userId = dataresult['data']['user_id']
self._LOGGER.debug('Token: %s' %self._token)
self._LOGGER.debug('Token expire at: %s' % self._tokenExpiration)
await self.requestVerifyCode()
return "send_verify_code"
else:
message = 'Unexpected API response code %s: %s (%s)' % (dataresult['code'], dataresult['msg'], response.request.url)
self._LOGGER.error(message)
raise LoginException(message)
else:
return 'OK'
async def update(self, device_sn=None):
if(device_sn is None):
await self.get_stations()
await self.get_devices()
else:
await self.get_devices(device_sn)
async def get_devices(self, device_sn=None):
data = {}
if(device_sn is not None):
data['device_sn'] = device_sn
response = await self._request('POST', ENDPOINT_DEVICE_LIST, data, self.headers)
if(response.status_code != 200):
self._LOGGER.error('Unexpected response code: %s, on url: %s' % (response.status_code, response.request.url))
raise LoginException('Unexpected response code: %s, on url: %s' % (response.status_code, response.request.url))
dataresult = response.json()
self._LOGGER.debug('get_devices response: %s' % dataresult)
if(RESPONSE_ERROR_CODE(dataresult['code']) != RESPONSE_ERROR_CODE.WHATEVER_ERROR):
message = 'Unexpected API response code %s: %s' % (dataresult['code'], dataresult['msg'])
self._LOGGER.error(message)
raise ApiException(message)
for device in dataresult['data']:
try:
deviceType = DEVICE_TYPE(device['device_type'])
if(device['device_sn'] not in self.devices):
self.devices[device['device_sn']] = Device.fromType(self, deviceType)
self.devices[device['device_sn']].init(device)
else:
self.devices[device['device_sn']].update(device)
except Exception as e:
self._LOGGER.exception(e)
return self.devices
async def get_stations(self):
response = await self._request('POST', ENDPOINT_STATION_LIST, {}, self.headers)
if(response.status_code != 200):
self._LOGGER.error('Unexpected response code: %s, on url: %s' % (response.status_code, response.request.url))
raise LoginException('Unexpected response code: %s, on url: %s' % (response.status_code, response.request.url))
dataresult = response.json()
self._LOGGER.debug('get_stations response: %s' % dataresult)
if(RESPONSE_ERROR_CODE(dataresult['code']) != RESPONSE_ERROR_CODE.WHATEVER_ERROR):
message = 'Unexpected API response code %s: %s' % (dataresult['code'], dataresult['msg'])
self._LOGGER.error(message)
raise ApiException(message)
for device in dataresult['data']:
try:
deviceType = DEVICE_TYPE(device['device_type'])
self.stations[device['station_sn']] = Device.fromType(self, deviceType)
self.stations[device['station_sn']].init(device)
except Exception as e:
self._LOGGER.exception(e)
return self.stations
async def get_device(self, deviceId):
pass
async def refresh_token(self):
pass
async def invalidate_token(self):
self._token = None
self._refreshToken = None
self._tokenExpiration = None
async def requestVerifyCode(self):
response = await self._request('POST', ENDPOINT_REQUEST_VERIFY_CODE, {
'message_type': self._preferred2FAMethod.value
}, self.headers)
if(response.status_code != 200):
self._LOGGER.error('Unexpected response code: %s, on url: %s' % (response.status_code, response.request.url))
raise ApiException('Unexpected response code: %s, on url: %s' % (response.status_code, response.request.url))
dataresult = response.json()
self._LOGGER.debug('request verify code response: %s' % dataresult)
if(RESPONSE_ERROR_CODE(dataresult['code']) != RESPONSE_ERROR_CODE.WHATEVER_ERROR):
message = 'Unexpected API response code %s: %s' % (dataresult['code'], dataresult['msg'])
self._LOGGER.error(message)
raise ApiException(message)
return 'OK'
async def sendVerifyCode(self, verifyCode):
# check verify code #
response = await self._request('POST', ENDPOINT_LOGIN, {
'verify_code': verifyCode,
'transaction': datetime.now().timestamp()
}, self.headers)
if(response.status_code != 200):
            self._LOGGER.error('Unexpected response code: %s, on url: %s' % (response.status_code, response.request.url))
            raise ApiException('Unexpected response code: %s, on url: %s' % (response.status_code, response.request.url))
dataresult = response.json()
self._LOGGER.debug('send verify code response: %s' % dataresult)
if(RESPONSE_ERROR_CODE(dataresult['code']) != RESPONSE_ERROR_CODE.WHATEVER_ERROR):
message = 'Unexpected API response code %s: %s' % (dataresult['code'], dataresult['msg'])
self._LOGGER.error(message)
raise ApiException(message)
# if ok, add this device to trust device list #
response = await self._request('POST', ENDPOINT_TRUST_DEVICE_ADD, {
'verify_code': verifyCode,
'transaction': datetime.now().timestamp()
}, self.headers)
if(response.status_code != 200):
self._LOGGER.error('Unexpected response code: %s, on url: %s' % (response.status_code, response.request.url))
raise ApiException('Unexpected response code: %s, on url: %s' % (response.status_code, response.request.url))
dataresult = response.json()
self._LOGGER.debug('add trust device response: %s' % dataresult)
if(RESPONSE_ERROR_CODE(dataresult['code']) != RESPONSE_ERROR_CODE.WHATEVER_ERROR):
message = 'Unexpected API response code %s: %s' % (dataresult['code'], dataresult['msg'])
self._LOGGER.error(message)
raise ApiException(message)
response = await self._request('GET', ENDPOINT_TRUST_DEVICE_LIST, None, self.headers)
if(response.status_code != 200):
self._LOGGER.error('Unexpected response code: %s, on url: %s' % (response.status_code, response.request.url))
raise ApiException('Unexpected response code: %s, on url: %s' % (response.status_code, response.request.url))
dataresult = response.json()
self._LOGGER.debug('add trust device response: %s' % dataresult)
if(RESPONSE_ERROR_CODE(dataresult['code']) != RESPONSE_ERROR_CODE.WHATEVER_ERROR):
message = 'Unexpected API response code %s: %s' % (dataresult['code'], dataresult['msg'])
self._LOGGER.error(message)
raise ApiException(message)
isTrusted = False
for trusted in dataresult['data']['list']:
if(trusted['is_current_device'] == 1):
self._tokenExpiration = (datetime.now() + timedelta(days=365*10))
isTrusted = True
return 'OK' if isTrusted else 'KO'
@property
def connected(self):
return self._token != None and self._tokenExpiration > datetime.now()
@property
def base_url(self):
return ('https://%s/v1' % self._domain)
@property
def token(self):
return self._token
@property
def token_expire_at(self):
return self._tokenExpiration.timestamp()
@property
def domain(self):
return self._domain
async def _request(self, method, url, data, headers={}) -> requests.Response:
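        # `requests` is synchronous, so the actual HTTP call is dispatched to
        # the event loop's default executor below to avoid blocking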
try:
loop = asyncio.get_running_loop()
except:
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
call = None
if(method == 'GET'):
call = requests.get
elif(method == 'POST'):
call = requests.post
else:
raise ApiException('Unsupported operation: %s' % method)
url = self.base_url + url
newHeaders = copy.copy(headers)
        # attach the auth token to everything except the initial login request
        # (url was prefixed with base_url above, so compare with endswith)
        if(not url.endswith(ENDPOINT_LOGIN) or (data and 'verify_code' in data)):
newHeaders['X-Auth-Token'] = self._token
self._LOGGER.debug('method: %s' % method)
self._LOGGER.debug('url: %s' % url)
self._LOGGER.debug('data: %s' % data)
self._LOGGER.debug('headers: %s' % newHeaders)
response = await loop.run_in_executor(None, functools.partial(call, url, json=data, headers=newHeaders))
#response = call(url, json=data, headers=newHeaders)
return response
class ApiException(Exception):
pass
class LoginException(ApiException):
pass | 48.045802 | 158 | 0.622418 | 1,382 | 12,588 | 5.478292 | 0.112156 | 0.046229 | 0.049927 | 0.042531 | 0.576938 | 0.554748 | 0.505349 | 0.479461 | 0.473121 | 0.473121 | 0 | 0.00343 | 0.258818 | 12,588 | 262 | 159 | 48.045802 | 0.808039 | 0.032968 | 0 | 0.491071 | 0 | 0 | 0.139193 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.03125 | false | 0.040179 | 0.017857 | 0.026786 | 0.129464 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00c203f198873d9000f664434964603c2fa22d2f | 4,595 | py | Python | test/test_accuracy.py | darebrawley/doppelganger-asa | 6d01e56b8ee8218f047724fdb4a5b1a2bf104c5f | [
"Apache-2.0"
] | 25 | 2019-05-30T21:13:58.000Z | 2022-01-25T09:52:55.000Z | test/test_accuracy.py | darebrawley/doppelganger-asa | 6d01e56b8ee8218f047724fdb4a5b1a2bf104c5f | [
"Apache-2.0"
] | 2 | 2020-01-30T20:32:15.000Z | 2020-02-21T20:20:57.000Z | test/test_accuracy.py | darebrawley/doppelganger-asa | 6d01e56b8ee8218f047724fdb4a5b1a2bf104c5f | [
"Apache-2.0"
] | 11 | 2019-05-15T17:10:01.000Z | 2021-02-10T18:07:54.000Z | # Copyright 2017 Sidewalk Labs | https://www.apache.org/licenses/LICENSE-2.0
from __future__ import (
absolute_import, division, print_function, unicode_literals
)
import mock
from mock import Mock
import unittest
import pandas as pd
import numpy as np
from doppelganger import Accuracy
from doppelganger.accuracy import ErrorStat
class TestAccuracy(unittest.TestCase):
def _mock_variable_bins(self):
return [
('num_people', '1'),
('num_people', '3'),
('num_people', '2'),
('num_people', '4+'),
('num_vehicles', '1'),
('num_vehicles', '0'),
('num_vehicles', '2'),
('num_vehicles', '3+'),
('age', '0-17'),
('age', '18-34'),
('age', '65+'),
('age', '35-64'),
]
def _mock_state_puma(self):
return [('20', '00500'), ('20', '00602'), ('20', '00604'), ('29', '00901'), ('29', '00902')]
def _mock_comparison_dataframe(self):
# Just the top 10 lines of a sample PUMS file, counts will NOT line up with marginals.
return pd.DataFrame(
data=[
[0, 1, 2],
[0, 1, 2],
[0, 1, 2],
[0, 1, 2],
[0, 1, 2],
[0, 1, 2],
[0, 1, 2],
[0, 1, 2],
[0, 1, 2],
[0, 1, 2],
[0, 1, 2],
[0, 1, 2],
],
columns=['pums', 'marginal', 'gen'],
index=self._mock_variable_bins())
@mock.patch('doppelganger.Accuracy._comparison_dataframe')
def test_error_metrics(self, mock_comparison_dataframe):
accuracy = Accuracy(Mock(), Mock(), Mock(), Mock(), Mock(), Mock(), Mock())
accuracy.comparison_dataframe = self._mock_comparison_dataframe()
self.assertEqual(accuracy.root_mean_squared_error(), (1.0, 1.0))
self.assertListEqual(accuracy.root_squared_error().mean().tolist(), [1.0, 1.0])
self.assertListEqual(accuracy.absolute_pct_error().mean().tolist(),
[2.0, 0.66666666666666663])
@mock.patch('doppelganger.Accuracy.from_data_dir')
@mock.patch('doppelganger.Accuracy._comparison_dataframe')
    def test_error_report(self, mock_comparison_dataframe, mock_from_data_dir):
accuracy = Accuracy(Mock(), Mock(), Mock(), Mock(), Mock(), Mock(), Mock())
accuracy.comparison_dataframe = self._mock_comparison_dataframe()
accuracy.from_data_dir.return_value = accuracy
state_puma = dict()
state_puma['20'] = ['00500', '00602', '00604']
state_puma['29'] = ['00901', '00902']
expected_columns = ['marginal-pums', 'marginal-doppelganger']
df_puma, df_variable, df_total =\
accuracy.error_report(
state_puma, 'fake_dir',
marginal_variables=['num_people', 'num_vehicles', 'age'],
statistic=ErrorStat.ABSOLUTE_PCT_ERROR
)
# Test df_total
df_total_expected = pd.Series(
[2.00000, 0.666667],
index=expected_columns
)
self.assertTrue(all((df_total - df_total_expected) < 1))
# Test df_puma
expected_puma_data = np.reshape([2.0, 2/3.0]*5, (5, 2))
df_expected_puma = pd.DataFrame(
data=expected_puma_data,
index=self._mock_state_puma(),
columns=expected_columns
)
self.assertTrue((df_expected_puma == df_puma).all().all())
# Test df_variable
expected_variable_data = np.reshape([2.0, 2/3.0]*12, (12, 2))
df_expected_variable = pd.DataFrame(
data=expected_variable_data,
index=self._mock_variable_bins(),
columns=expected_columns
)
self.assertTrue((df_expected_variable == df_variable).all().all())
        # Test unimplemented statistic name
        with self.assertRaises(Exception):
            accuracy.error_report(
                state_puma, 'fake_dir',
                marginal_variables=['num_people', 'num_vehicles', 'age'],
                statistic='wrong-statistic-name'
            )
| 37.357724 | 100 | 0.515996 | 474 | 4,595 | 4.753165 | 0.261603 | 0.013316 | 0.015979 | 0.01953 | 0.372392 | 0.315135 | 0.315135 | 0.246782 | 0.230803 | 0.177541 | 0 | 0.063615 | 0.353428 | 4,595 | 122 | 101 | 37.663934 | 0.694716 | 0.051578 | 0 | 0.24 | 0 | 0 | 0.102735 | 0.032636 | 0 | 0 | 0 | 0 | 0.07 | 1 | 0.05 | false | 0.01 | 0.09 | 0.03 | 0.18 | 0.01 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00c77972f828a4eaba5156777d6d5f0bfdf8f57c | 8,038 | py | Python | metaspace/engine/scripts/update_diagnostics.py | METASPACE2020/METASPACE | e1acd9a409f84a78eed7ca9713258c09b0e137ca | [
"Apache-2.0"
] | null | null | null | metaspace/engine/scripts/update_diagnostics.py | METASPACE2020/METASPACE | e1acd9a409f84a78eed7ca9713258c09b0e137ca | [
"Apache-2.0"
] | null | null | null | metaspace/engine/scripts/update_diagnostics.py | METASPACE2020/METASPACE | e1acd9a409f84a78eed7ca9713258c09b0e137ca | [
"Apache-2.0"
] | null | null | null | import argparse
import logging
import warnings
from concurrent.futures import ThreadPoolExecutor
import numpy as np
from lithops import Storage
from lithops.storage.utils import CloudObject
from sm.engine.annotation.diagnostics import (
DiagnosticType,
extract_dataset_diagnostics,
add_diagnostics,
del_diagnostics,
)
from sm.engine.annotation.imzml_reader import LithopsImzMLReader, FSImzMLReader
from sm.engine.db import DB
from sm.engine.storage import get_s3_client
from sm.engine.util import GlobalInit, split_cos_path, split_s3_path
logger = logging.getLogger('engine')
def parse_input_path_for_lithops(sm_config, input_path):
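    # Resolve an s3://-, s3a://- or IBM COS-style input path to a Lithops
    # Storage handle plus the imzML/ibd CloudObjects found under it.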
if input_path.startswith('s3://') or input_path.startswith('s3a://'):
backend = 'aws_s3'
bucket, prefix = split_s3_path(input_path)
else:
backend = 'ibm_cos'
bucket, prefix = split_cos_path(input_path)
storage = Storage(sm_config['lithops'], backend)
if backend == 'aws_s3' and sm_config['lithops']['aws_s3']['endpoint'].startswith('http://'):
# WORKAROUND for local Minio access
# Lithops forces the url to HTTPS, so overwrite the S3 client with a fixed client
# https://github.com/lithops-cloud/lithops/issues/708
storage.storage_handler.s3_client = get_s3_client()
keys_in_path = storage.list_keys(bucket, prefix)
imzml_keys = [key for key in keys_in_path if key.lower().endswith('.imzml')]
ibd_keys = [key for key in keys_in_path if key.lower().endswith('.ibd')]
debug_info = f'Path {input_path} had keys: {keys_in_path}'
assert len(imzml_keys) == 1, f'Couldn\'t determine imzML file. {debug_info}'
assert len(ibd_keys) == 1, f'Couldn\'t determine ibd file. {debug_info}'
imzml_cobject = CloudObject(storage.backend, bucket, imzml_keys[0])
ibd_cobject = CloudObject(storage.backend, bucket, ibd_keys[0])
return storage, imzml_cobject, ibd_cobject
def process_dataset(sm_config, del_first, ds_id):
logger.info(f'Processing {ds_id}')
try:
if del_first:
del_diagnostics(ds_id)
ds = DB().select_one_with_fields('SELECT * FROM dataset WHERE id = %s', (ds_id,))
input_path = ds['input_path']
if input_path.startswith('/'):
imzml_reader = FSImzMLReader(input_path)
if not imzml_reader.is_mz_from_metadata or not imzml_reader.is_tic_from_metadata:
logger.info(f'{ds_id} missing metadata, reading spectra...')
for _ in imzml_reader.iter_spectra(np.arange(imzml_reader.n_spectra)):
# Read all spectra so that mz/tic data is populated
pass
else:
storage, imzml_cobject, ibd_cobject = parse_input_path_for_lithops(
sm_config, input_path
)
imzml_reader = LithopsImzMLReader(
storage,
imzml_cobject=imzml_cobject,
ibd_cobject=ibd_cobject,
)
if not imzml_reader.is_mz_from_metadata or not imzml_reader.is_tic_from_metadata:
logger.info(f'{ds_id} missing metadata, reading spectra...')
chunk_size = 1000
for chunk_start in range(0, imzml_reader.n_spectra, chunk_size):
chunk_end = min(imzml_reader.n_spectra, chunk_start + chunk_size)
chunk = np.arange(chunk_start, chunk_end)
for _ in imzml_reader.iter_spectra(storage, chunk):
# Read all spectra so that mz/tic data is populated
pass
diagnostics = extract_dataset_diagnostics(ds_id, imzml_reader)
add_diagnostics(diagnostics)
return ds_id, True
except Exception:
logger.error(f'Failed to process {ds_id}', exc_info=True)
return ds_id, False
def find_dataset_ids(ds_ids_param, sql_where, missing, failed, succeeded):
db = DB()
if ds_ids_param:
specified_ds_ids = ds_ids_param.split(',')
elif sql_where:
specified_ds_ids = db.select_onecol(f"SELECT id FROM dataset WHERE {sql_where}")
else:
specified_ds_ids = None
if not missing:
# Default to processing all datasets missing diagnostics
missing = specified_ds_ids is None and not failed and not succeeded
ds_type_counts = db.select(
'SELECT d.id, COUNT(DISTINCT dd.type), COUNT(dd.error) '
'FROM dataset d LEFT JOIN dataset_diagnostic dd on d.id = dd.ds_id '
'WHERE d.status = \'FINISHED\' '
'GROUP BY d.id'
)
if missing or failed or succeeded:
# Get ds_ids based on status (or filter specified ds_ids on status)
status_ds_ids = set()
for ds_id, n_diagnostics, n_errors in ds_type_counts:
if missing and (n_diagnostics or 0) < len(DiagnosticType):
status_ds_ids.add(ds_id)
elif failed and n_errors > 0:
status_ds_ids.add(ds_id)
elif succeeded and n_diagnostics == len(DiagnosticType) and n_errors == 0:
status_ds_ids.add(ds_id)
if specified_ds_ids is not None:
# Keep order, if directly specified
ds_ids = [ds_id for ds_id in specified_ds_ids if ds_id in status_ds_ids]
else:
# Order by ID descending, so that newer DSs are updated first
ds_ids = sorted(status_ds_ids, reverse=True)
else:
ds_ids = specified_ds_ids
assert ds_ids, 'No datasets found'
return ds_ids
def run_diagnostics(sm_config, ds_ids, del_first, jobs):
failed_ds_ids = []
with ThreadPoolExecutor(jobs or None) as executor:
map_func = executor.map if jobs != 1 else map
for i, (ds_id, success) in enumerate(
map_func(lambda ds_id: process_dataset(sm_config, del_first, ds_id), ds_ids)
):
logger.info(f'Completed {ds_id} ({i}/{len(ds_ids)})')
if not success:
failed_ds_ids.append(ds_id)
if failed_ds_ids:
logger.error(f'Failed datasets ({len(failed_ds_ids)}): {failed_ds_ids}')
def main():
parser = argparse.ArgumentParser(
description='Reindex or update dataset results. NOTE: FDR diagnostics are unsupported as '
'they require the dataset to be completely reprocessed.'
)
parser.add_argument('--config', default='conf/config.json', help='SM config path')
parser.add_argument('--ds-id', help='DS id (or comma-separated list of ids)')
parser.add_argument('--sql-where', help='SQL WHERE clause for datasets table')
parser.add_argument(
'--missing',
action='store_true',
help='(Default if ds-id/failed/succeeded not specified) '
'Process datasets that are missing diagnostics',
)
parser.add_argument(
'--failed',
action='store_true',
help='Process datasets that have errors in their diagnostics',
)
parser.add_argument(
'--succeeded', action='store_true', help='Process datasets even if they have diagnostics'
)
parser.add_argument(
'--del-first', action='store_true', help='Delete existing diagnostics before regenerating'
)
parser.add_argument('--jobs', '-j', type=int, default=1, help='Number of parallel jobs to run')
parser.add_argument('--verbose', '-v', action='store_true')
args = parser.parse_args()
with GlobalInit(config_path=args.config) as sm_config:
if not args.verbose:
logging.getLogger('lithops.storage.backends').setLevel(logging.WARNING)
warnings.filterwarnings('ignore', module='pyimzml')
ds_ids = find_dataset_ids(
ds_ids_param=args.ds_id,
sql_where=args.sql_where,
missing=args.missing,
failed=args.failed,
succeeded=args.succeeded,
)
run_diagnostics(
sm_config=sm_config,
ds_ids=ds_ids,
del_first=args.del_first,
jobs=args.jobs,
)
if __name__ == '__main__':
main()
| 39.596059 | 99 | 0.652152 | 1,072 | 8,038 | 4.642724 | 0.223881 | 0.035162 | 0.025316 | 0.012859 | 0.215592 | 0.179024 | 0.125779 | 0.120555 | 0.106892 | 0.090416 | 0 | 0.004672 | 0.254417 | 8,038 | 202 | 100 | 39.792079 | 0.825797 | 0.059716 | 0 | 0.121212 | 0 | 0 | 0.172231 | 0.009141 | 0 | 0 | 0 | 0 | 0.018182 | 1 | 0.030303 | false | 0.012121 | 0.072727 | 0 | 0.127273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00ca91da745c632cc521c76e550462f23371dbf5 | 733 | py | Python | cli/progress.py | merwane/shield | 067d4ed9c84946479c200c0f7bcf47f7bfce3b80 | [
"MIT"
] | null | null | null | cli/progress.py | merwane/shield | 067d4ed9c84946479c200c0f7bcf47f7bfce3b80 | [
"MIT"
] | null | null | null | cli/progress.py | merwane/shield | 067d4ed9c84946479c200c0f7bcf47f7bfce3b80 | [
"MIT"
] | null | null | null | import colorama
colorama.init()
def print_static_progress_bar(title, percentage, text, color="white"):
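    """Print a one-line, 50-character progress bar.

    A hypothetical call and its (approximate) output:

        print_static_progress_bar("Scan", 40, "encrypting", color="green")
        # Scan: |████████████████████——————————————————————————————| encrypting (40%)
    """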
empty_bar = "—" * 50
if color == 'red':
color = colorama.Fore.RED
elif color == 'green':
color = colorama.Fore.GREEN
elif color == 'blue':
color = colorama.Fore.BLUE
else:
color = colorama.Fore.WHITE
fill_char = color + "█" + colorama.Fore.WHITE
# convert percentage to position
    position = round((percentage * 50) / 100)
    filled_bar = (fill_char * position) + empty_bar[position:]
final_bar = "{}: |{}| {} ({}%)".format(title, filled_bar, text, percentage)
print(final_bar) | 26.178571 | 79 | 0.608458 | 87 | 733 | 5 | 0.402299 | 0.137931 | 0.156322 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016484 | 0.255116 | 733 | 28 | 80 | 26.178571 | 0.776557 | 0.040928 | 0 | 0 | 0 | 0 | 0.051282 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0.052632 | 0 | 0.105263 | 0.105263 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00cbf5b142a833fac446bd82a1109ad6d28c184a | 888 | py | Python | maga/run_server.py | minkefusiji/TimeSeriesAnalysisPlugin | 85baac82cece9bac7cabb053673df7cc20efa50d | [
"MIT"
] | null | null | null | maga/run_server.py | minkefusiji/TimeSeriesAnalysisPlugin | 85baac82cece9bac7cabb053673df7cc20efa50d | [
"MIT"
] | null | null | null | maga/run_server.py | minkefusiji/TimeSeriesAnalysisPlugin | 85baac82cece9bac7cabb053673df7cc20efa50d | [
"MIT"
] | null | null | null | from os import environ
from maga.maga_plugin_service import MagaPluginService
from common.plugin_model_api import api, PluginModelAPI, PluginModelListAPI, PluginModelTrainAPI, \
PluginModelInferenceAPI, app, PluginModelParameterAPI
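
# Wire the Maga plugin service into the shared Flask app's REST endpoints.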
multivarite = MagaPluginService()
api.add_resource(PluginModelListAPI(multivarite), '/multivarite/models')
api.add_resource(PluginModelAPI(multivarite), '/multivarite/model', '/multivarite/model/<model_key>')
api.add_resource(PluginModelTrainAPI(multivarite), '/multivarite/<model_key>/train')
api.add_resource(PluginModelInferenceAPI(multivarite), '/multivarite/<model_key>/inference')
api.add_resource(PluginModelParameterAPI(multivarite), '/multivarite/parameters')
if __name__ == '__main__':
HOST = environ.get('SERVER_HOST', '0.0.0.0')
PORT = environ.get('SERVER_PORT', 56789)
app.run(HOST, PORT, threaded=True, use_reloader=False) | 49.333333 | 101 | 0.801802 | 96 | 888 | 7.177083 | 0.416667 | 0.043541 | 0.101597 | 0.087083 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010976 | 0.076577 | 888 | 18 | 102 | 49.333333 | 0.829268 | 0 | 0 | 0 | 0 | 0 | 0.214848 | 0.131609 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.214286 | 0 | 0.214286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00cbfa16839c500b9eb995200206a0b3bc8f7b5b | 5,766 | py | Python | src/jf/config.py | diseaz-joom/dsaflow | 3d5cc8caa5ff0b0db3b7590cd27d9421ade88f6c | [
"MIT"
] | null | null | null | src/jf/config.py | diseaz-joom/dsaflow | 3d5cc8caa5ff0b0db3b7590cd27d9421ade88f6c | [
"MIT"
] | 6 | 2022-03-25T13:24:04.000Z | 2022-03-29T13:24:36.000Z | src/jf/config.py | diseaz-joom/dsaflow | 3d5cc8caa5ff0b0db3b7590cd27d9421ade88f6c | [
"MIT"
] | null | null | null | #!/usr/bin/python3
# -*- mode: python; coding: utf-8 -*-
from typing import List, Dict, Optional, Generator, Tuple
import collections
from jf import command
from jf import git
from jf import schema
class Error(Exception):
'''Base for errors in the module.'''
SEPARATOR = schema.SEPARATOR
class JfTemplateCfg(schema.SectionCfg):
'''Jflow template config.'''
KEYS = [
'version', 'upstream', 'fork',
'lreview_prefix', 'lreview_suffix',
'review_prefix', 'review_suffix',
'ldebug_prefix', 'ldebug_suffix',
'debug_prefix', 'debug_suffix',
]
version = schema.Value(schema.IntType, ['version'], default=0)
upstream = schema.MaybeValue(schema.StrType, ['upstream'])
fork = schema.MaybeValue(schema.StrType, ['fork'])
ldebug_prefix = schema.MaybeValue(schema.StrType, ['debug-local-prefix'])
ldebug_suffix = schema.MaybeValue(schema.StrType, ['debug-local-suffix'])
debug_prefix = schema.MaybeValue(schema.StrType, ['debug-prefix'])
debug_suffix = schema.MaybeValue(schema.StrType, ['debug-suffix'])
lreview_prefix = schema.MaybeValue(schema.StrType, ['public-prefix'])
lreview_suffix = schema.MaybeValue(schema.StrType, ['public-suffix'])
review_prefix = schema.MaybeValue(schema.StrType, ['remote-prefix'])
review_suffix = schema.MaybeValue(schema.StrType, ['remote-suffix'])
class JfCfg(schema.SectionCfg):
'''Section of global Jflow config.'''
remote = schema.Value(schema.StrType, ['remote'], default='origin')
template = schema.Map(JfTemplateCfg, ['template'])
default_green = schema.ListValue(schema.StrType, ['default-green'])
autosync = schema.Value(schema.BoolType, ['autosync'], default=False)
class JfBranchCfg(schema.SectionCfg):
'''Jflow configuration for a branch.'''
KEYS = [
'version', 'remote',
'upstream', 'fork',
'lreview', 'review',
'ldebug', 'debug',
'hidden', 'protected', 'tested', 'sync',
'debug_prefix', 'debug_suffix',
]
version = schema.Value(schema.IntType, ['version'], default=0)
remote = schema.MaybeValue(schema.StrType, ['remote-name'])
upstream = schema.Value(schema.BranchType, ['upstream'], git.ZeroBranchName)
upstream_shortcut = schema.MaybeValue(schema.StrType, ['upstream-shortcut'])
fork = schema.Value(schema.BranchType, ['fork'], git.ZeroBranchName)
fork_shortcut = schema.MaybeValue(schema.StrType, ['fork-shortcut'])
ldebug = schema.MaybeValue(schema.BranchType, ['ldebug'])
debug = schema.MaybeValue(schema.BranchType, ['debug'])
lreview = schema.MaybeValue(schema.BranchType, ['public'])
review = schema.MaybeValue(schema.BranchType, ['remote'])
debug_prefix = schema.MaybeValue(schema.StrType, ['debug-prefix'])
debug_suffix = schema.MaybeValue(schema.StrType, ['debug-suffix'])
# Properties below are not only for jflow-controlled branches
hidden = schema.Value(schema.BoolType, ['hidden'], default=False)
protected = schema.Value(schema.BoolType, ['protected'], default=False)
sync = schema.Value(schema.BoolType, ['sync'], default=False)
fork_from = schema.MaybeValue(schema.BranchType, ['fork-from'])
tested = schema.MaybeValue(schema.BranchType, ['tested'])
class StgitBranchCfg(schema.SectionCfg):
'''Stgit configuration for a branch.'''
version = schema.Value(schema.IntType, ['stackformatversion'], default=0)
parentbranch = schema.MaybeValue(schema.StrType, ['parentbranch'])
class GitRemoteCfg(schema.SectionCfg):
'''Remote configuration.'''
url = schema.MaybeValue(schema.StrType, ['url'])
class GitBranchCfg(schema.SectionCfg):
'''Branches configuration.'''
jf = schema.Section(JfBranchCfg, ['jflow'])
stgit = schema.Section(StgitBranchCfg, ['stgit'])
remote = schema.Value(schema.StrType, ['remote'], default='')
merge = schema.Value(schema.StrType, ['merge'], default='')
description = schema.Value(schema.StrType, ['description'], default='')
class Root(schema.Root):
def __init__(self) -> None:
schema.Root.__init__(self, GitConfigHolder())
branch = schema.Map(GitBranchCfg, ['branch'])
remote = schema.Map(GitRemoteCfg, ['remote'])
jf = schema.Section(JfCfg, ['jflow'])
class GitConfigHolder:
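    '''Lazily cached view of `git config --list`, with mutators that keep the
    cache consistent with the underlying git config.'''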
def __init__(self) -> None:
self._config: Optional[Dict[str, List[str]]] = None
@property
def config(self) -> Dict[str, List[str]]:
if self._config is not None:
return self._config
self._config = collections.defaultdict(list)
for name, value in self._gen_config():
self._config[name].append(value)
return self._config
@staticmethod
def _gen_config() -> Generator[Tuple[str, str], None, None]:
for line in command.read(['git', 'config', '--list']):
name, value = line.split('=', 1)
yield name, value
def set(self, name: str, value: str) -> None:
command.run(['git', 'config', '--local', name, value])
if self._config is None:
return
self._config[name] = [value]
def reset(self, name: str, value: str) -> None:
command.run(['git', 'config', '--local', '--replace-all', name, value])
if self._config is None:
return
self._config[name] = [value]
def append(self, name: str, value: str) -> None:
command.run(['git', 'config', '--local', '--add', name, value])
if self._config is None:
return
self._config[name].append(value)
def unset(self, name: str) -> None:
command.run(['git', 'config', '--local', '--unset-all', name])
if self._config is None:
return
del self._config[name]
| 34.321429 | 80 | 0.653833 | 633 | 5,766 | 5.870458 | 0.192733 | 0.099031 | 0.136168 | 0.13267 | 0.347417 | 0.233854 | 0.203175 | 0.17169 | 0.17169 | 0.17169 | 0 | 0.001283 | 0.189213 | 5,766 | 167 | 81 | 34.526946 | 0.793583 | 0.054284 | 0 | 0.236364 | 0 | 0 | 0.133887 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.072727 | false | 0 | 0.045455 | 0 | 0.663636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00ccc3e05c2f8c10d5fcb5b0ce3e5aab48ee4131 | 3,005 | py | Python | 60/60_2.py | bobismijnnaam/bobe-euler | 111abdf37256d19c4a8c4e1a071db52929acf9d9 | [
"MIT"
] | null | null | null | 60/60_2.py | bobismijnnaam/bobe-euler | 111abdf37256d19c4a8c4e1a071db52929acf9d9 | [
"MIT"
] | null | null | null | 60/60_2.py | bobismijnnaam/bobe-euler | 111abdf37256d19c4a8c4e1a071db52929acf9d9 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
from Utils import *
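
# Project Euler problem 60: find a set of five primes for which concatenating
# any two of them, in either order, produces another prime, minimising the sum
# of the set.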
def isOkay(ps, p1, p2):
newP1 = int(str(p1) + str(p2))
newP2 = int(str(p2) + str(p1))
return (newP1 in ps or isPrime(newP1)) and (newP2 in ps or isPrime(newP2))
def isCombination(indices):
return len(indices) == len(set(indices))
def isFinished(indices, indexLimit):
return indices[-1] >= indexLimit
def nextIndicesWithShort(indices, indexLimit, pl, foundMinimum):
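    # Odometer-style increment over the index tuple, with pruning shortcuts:
    # whole ranges of candidates are skipped as soon as a single prime, or a
    # partial sum of primes, already reaches the best total found so far.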
indices[0] += 1
# print("Before:", indices)
for i in range(len(indices[:-1])):
v = indices[i]
# print("During:", i, v, indices)
if v >= indexLimit:
indices[i] -= indexLimit
indices[i + 1] += 1
# Short the computation if the current index already exceeds the found minimum
# print(i, v)
elif pl[v] >= foundMinimum:
indices[i + 1] += 1
for j in range(i + 1):
indices[j] = 0
elif sum([pl[pi] for pi in indices[i:]]) >= foundMinimum:
indices[i + 1] += 1
for j in range(i + 1):
indices[j] = 0
# print("After:", indices)
def isCool(ps, pl, indices):
for i in range(len(indices)):
for j in range(i + 1, len(indices)):
# print(i, j, len(indices))
# print(indices[i], indices[j])
# print(len(pl))
if not isOkay(ps, pl[indices[i]], pl[indices[j]]):
return False
return True
if __name__ == "__main__":
limit = 674
digits = 5
nj = NumberJuggler(limit)
pl = nj.primeList
ps = set(pl)
indexLimit = len(pl)
foundMinimum = digits * limit
foundGroup = None
indices = list(range(digits))
i = 0
while not isFinished(indices, indexLimit):
print(indices)
        if not isCombination(indices):
nextIndicesWithShort(indices, indexLimit, pl, foundMinimum)
continue
# Check if the higher indices are compatible with eachother
mustContinue = False
for j in range(len(indices) - 2, 0, -1):
if indices[j] == 0:
if not isCool(ps, pl, indices[j:]):
# Skip all the indices
for k in range(j + 1):
indices[k] = indexLimit - 1
nextIndicesWithShort(indices, indexLimit, pl, foundMinimum)
mustContinue = True
break
if mustContinue: continue
i += 1
if i % 100000 == 0: print(indices)
if isCool(ps, pl, indices):
total = sum([pl[i] for i in indices])
if total < foundMinimum:
foundMinimum = total
foundGroup = [pl[i] for i in indices]
print("New foundMinimum:", foundMinimum)
print("Group:", foundGroup)
nextIndicesWithShort(indices, indexLimit, pl, foundMinimum)
print("New foundMinimum:", foundMinimum)
print("Group:", foundGroup)
| 29.752475 | 86 | 0.541764 | 353 | 3,005 | 4.589235 | 0.25779 | 0.039506 | 0.091358 | 0.096296 | 0.30679 | 0.180864 | 0.153086 | 0.054321 | 0.054321 | 0.054321 | 0 | 0.02381 | 0.343095 | 3,005 | 100 | 87 | 30.05 | 0.796859 | 0.119468 | 0 | 0.205882 | 0 | 0 | 0.020501 | 0 | 0 | 0 | 0 | 0.01 | 0 | 1 | 0.073529 | false | 0 | 0.014706 | 0.029412 | 0.161765 | 0.088235 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00ce017e64b311a8af155a0d7cd8fdd51f298861 | 1,954 | py | Python | update_npm_deps.py | wikimedia/integration-dashboard | 7e9614efbb59a50b00fdad11a6785d2a9e1ab8b2 | [
"CC0-1.0"
] | null | null | null | update_npm_deps.py | wikimedia/integration-dashboard | 7e9614efbb59a50b00fdad11a6785d2a9e1ab8b2 | [
"CC0-1.0"
] | null | null | null | update_npm_deps.py | wikimedia/integration-dashboard | 7e9614efbb59a50b00fdad11a6785d2a9e1ab8b2 | [
"CC0-1.0"
] | null | null | null | #!/usr/bin/env python3
from collections import OrderedDict
import glob
import json
import os
import os.path
import subprocess
import lib
argv = lib.cli_config()
if len(argv) > 1:
extension = argv[1]
else:
extension = None
def update(package_json):
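    """Bump every devDependency of the given package.json to the latest
    published npm version, run `npm test`, and commit/push on success."""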
os.chdir(os.path.dirname(package_json))
print(package_json.split('/')[-2])
updating = []
out = subprocess.check_output(['git', 'diff', '--name-only']).decode()
if 'package.json' in out:
print('WARNING: package.json has local changes')
return
with open(package_json, 'r') as f:
j = json.load(f, object_pairs_hook=OrderedDict)
for package, version in j['devDependencies'].items():
if lib.get_npm_version(package) != version:
i = (package, version, lib.get_npm_version(package))
print('Updating %s: %s --> %s' % i)
updating.append(i)
j['devDependencies'][package] = lib.get_npm_version(package)
if not updating:
print('Nothing to update')
return
with open(package_json, 'w') as f:
out = json.dumps(j, indent=' ')
f.write(out + '\n')
subprocess.call(['npm', 'install'])
try:
subprocess.check_call(['npm', 'test'])
except subprocess.CalledProcessError:
print('Error updating %s' % package_json)
return
msg = 'build: Updating development dependencies\n\n'
for tup in updating:
msg += '* %s: %s → %s\n' % tup
print(msg)
lib.commit_and_push(files=['package.json'], msg=msg, branch='master',
topic='bump-dev-deps')
if extension == 'mediawiki':
packages = [os.path.join(lib.MEDIAWIKI_DIR, 'package.json')]
else:
packages = glob.glob(os.path.join(lib.EXTENSIONS_DIR, '*/package.json'))
for package in sorted(packages):
ext_name = package.split('/')[-2]
if extension and extension != ext_name:
continue
update(package)
| 31.015873 | 76 | 0.614637 | 250 | 1,954 | 4.716 | 0.416 | 0.102629 | 0.022901 | 0.040712 | 0.100933 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003374 | 0.241556 | 1,954 | 62 | 77 | 31.516129 | 0.791498 | 0.010747 | 0 | 0.090909 | 0 | 0 | 0.157867 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.018182 | false | 0 | 0.127273 | 0 | 0.2 | 0.109091 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00cf0d4b8c3b5d362e932a9c1aef4cf5350b79f9 | 4,821 | py | Python | tests/integration/put_cars_call_test.py | ikostan/REST_API_AUTOMATION | cdb4d30fbc7457b2a403b4dad6fe1efa2e754681 | [
"Unlicense"
] | 8 | 2020-03-17T09:15:28.000Z | 2022-01-29T19:50:45.000Z | tests/integration/put_cars_call_test.py | ikostan/REST_API_AUTOMATION | cdb4d30fbc7457b2a403b4dad6fe1efa2e754681 | [
"Unlicense"
] | 1 | 2021-06-02T00:26:58.000Z | 2021-06-02T00:26:58.000Z | tests/integration/put_cars_call_test.py | ikostan/REST_API_AUTOMATION | cdb4d30fbc7457b2a403b4dad6fe1efa2e754681 | [
"Unlicense"
] | 1 | 2021-11-22T16:10:27.000Z | 2021-11-22T16:10:27.000Z | #!/path/to/interpreter
"""
PUT Call Integration Test
"""
# Created by Egor Kostan.
# GitHub: https://github.com/ikostan
# LinkedIn: https://www.linkedin.com/in/egor-kostan/
import base64
import allure
import unittest
from flask import json
from api.cars_app import app
@allure.epic('Simple Flask App')
@allure.parent_suite('REST API')
@allure.suite("Integration Tests")
@allure.sub_suite("Positive Tests")
@allure.feature("PUT")
@allure.story('Car Update')
class PutCarsCallTestCase(unittest.TestCase):
"""
Testing a JSON API implemented in Flask.
PUT Call Integration Test.
PUT method requests for the enclosed entity be stored under
the supplied Request-URI. If the Request-URI refers to an
already existing resource – an update operation will happen,
otherwise create operation should happen if Request-URI is a
valid resource URI (assuming client is allowed to determine
resource identifier).
PUT method is idempotent. So if you send retry a request
multiple times, that should be equivalent to single request
modification.
Use PUT when you want to modify a singular resource
which is already a part of resources collection.
PUT replaces the resource in its entirety.
"""
def setUp(self) -> None:
with allure.step("Prepare test data"):
self.car_original = {"name": "Creta",
"brand": "Hyundai",
"price_range": "8-14 lacs",
"car_type": "hatchback"}
self.car_updated = {"name": "Creta",
"brand": "Hyundai",
"price_range": "6-9 lacs",
"car_type": "hatchback"}
self.non_admin_user = {"name": "eric",
"password": "testqxf2",
"perm": "non_admin"}
self.admin_user = {"name": "qxf2",
"password": "qxf2",
"perm": "admin"}
def test_put_cars_update_non_admin(self):
"""
Test PUT call using non admin user credentials.
:return:
"""
allure.dynamic.title("Update car properties using "
"PUT call and non admin credentials")
allure.dynamic.severity(allure.severity_level.NORMAL)
with allure.step("Send PUT request"):
response = app.test_client().put(
'{}{}'.format('/cars/update/',
self.car_original['name']),
data=json.dumps(self.car_updated),
content_type='application/json',
# Testing Flask application
# with basic authentication
# Source: https://gist.github.com/jarus/1160696
headers={
'Authorization': 'Basic ' +
base64.b64encode(bytes(self.non_admin_user['name'] +
":" +
self.non_admin_user['password'],
'ascii')).decode('ascii')
}
)
with allure.step("Verify status code"):
self.assertEqual(200, response.status_code)
with allure.step("Convert response into JSON data"):
data = json.loads(response.get_data(as_text=True))
# print("\nDATA:\n{}\n".format(data)) # Debug only
with allure.step("Verify 'successful' flag"):
self.assertTrue(data['response']['successful'])
with allure.step("Verify updated car data"):
self.assertDictEqual(self.car_updated,
data['response']['car'])
def test_put_cars_update_admin(self):
"""
Test PUT call using admin user credentials.
:return:
"""
allure.dynamic.title("Update car properties using "
"PUT call and admin credentials")
allure.dynamic.severity(allure.severity_level.NORMAL)
with allure.step("Send PUT request"):
response = app.test_client().put(
'{}{}'.format('/cars/update/',
self.car_original['name']),
data=json.dumps(self.car_updated),
content_type='application/json',
# Testing Flask application
# with basic authentication
# Source: https://gist.github.com/jarus/1160696
headers={
'Authorization': 'Basic ' +
base64.b64encode(bytes(self.admin_user['name'] +
":" +
self.admin_user['password'],
'ascii')).decode('ascii')
}
)
with allure.step("Verify status code"):
self.assertEqual(200, response.status_code)
with allure.step("Convert response into JSON data"):
data = json.loads(response.get_data(as_text=True))
# print("\nDATA:\n{}\n".format(data)) # Debug only
with allure.step("Verify 'successful' flag"):
self.assertTrue(data['response']['successful'])
with allure.step("Verify updated car data"):
self.assertDictEqual(self.car_updated,
data['response']['car'])
| 31.717105 | 77 | 0.607758 | 553 | 4,821 | 5.216998 | 0.327306 | 0.038128 | 0.05338 | 0.041594 | 0.615598 | 0.574003 | 0.535182 | 0.535182 | 0.535182 | 0.535182 | 0 | 0.010774 | 0.268409 | 4,821 | 151 | 78 | 31.927152 | 0.806918 | 0.266542 | 0 | 0.560976 | 0 | 0 | 0.23506 | 0 | 0 | 0 | 0 | 0 | 0.073171 | 1 | 0.036585 | false | 0.04878 | 0.060976 | 0 | 0.109756 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00cf8aae92b1dc888c5056333687fc326d0907ae | 356 | py | Python | tests/test_robots.py | knaik/TikTok | 7a29be575e7ab2be7d725cc4d8c81ce7df349db7 | [
"MIT"
] | 37 | 2020-04-08T01:06:30.000Z | 2022-03-29T02:04:10.000Z | tests/test_robots.py | knaik/TikTok | 7a29be575e7ab2be7d725cc4d8c81ce7df349db7 | [
"MIT"
] | 5 | 2020-06-12T03:38:06.000Z | 2022-03-15T08:54:09.000Z | tests/test_robots.py | knaik/TikTok | 7a29be575e7ab2be7d725cc4d8c81ce7df349db7 | [
"MIT"
] | 22 | 2020-04-21T22:20:33.000Z | 2022-03-22T08:55:20.000Z | import os, sys
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from robots import getAllowedAgents
def test_uas():
# valid as of 2020/05/25
uas = getAllowedAgents()
assert set(uas) == {'Googlebot', 'Applebot', 'Bingbot', 'DuckDuckBot', 'Naverbot', 'Twitterbot', 'Yandex'}
if __name__ == '__main__':
test_uas() | 32.363636 | 110 | 0.69382 | 46 | 356 | 5.065217 | 0.695652 | 0.077253 | 0.111588 | 0.128755 | 0.137339 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026144 | 0.140449 | 356 | 11 | 111 | 32.363636 | 0.735294 | 0.061798 | 0 | 0 | 0 | 0 | 0.201201 | 0 | 0 | 0 | 0 | 0 | 0.125 | 1 | 0.125 | false | 0 | 0.25 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00d0550f561d4fd3f38265b53b44c4789f7b9141 | 4,411 | py | Python | tests/test_api.py | agounaris/lehmanbrothers | c82ce0c9ed1abc51170915153e15e5240726c607 | [
"MIT"
] | null | null | null | tests/test_api.py | agounaris/lehmanbrothers | c82ce0c9ed1abc51170915153e15e5240726c607 | [
"MIT"
] | 1 | 2021-11-15T17:46:29.000Z | 2021-11-15T17:46:29.000Z | tests/test_api.py | agounaris/warren | c82ce0c9ed1abc51170915153e15e5240726c607 | [
"MIT"
] | null | null | null | from warren.api.statements import BalanceSheet
from warren.api.statements import IncomeStatement
from warren.api.statements import CashFlow
from warren.api.statements import FinancialPerformance
data = {
'ebit': 100,
'total_assets': 100,
'net_income': 100,
'total_stockholder_equity': 100,
'gross_profit': 100,
'total_revenue': 100,
'year': 2016
}
previous_data = {
'ebit': 100,
'total_assets': 100,
'net_income': 100,
'total_stockholder_equity': 100,
'gross_profit': 100,
'total_revenue': 100,
'year': 2016
}
class TestApi(object):
def test_init_balance_sheet(self):
balance_sheet = BalanceSheet(data)
assert isinstance(balance_sheet, BalanceSheet)
def test_init_income_statement(self):
income_statement = IncomeStatement(data)
assert isinstance(income_statement, IncomeStatement)
def test_init_cash_flow(self):
cash_flow = CashFlow(data)
assert isinstance(cash_flow, CashFlow)
def test_init_financial_statements(self):
balance_sheet = BalanceSheet(data)
income_statement = IncomeStatement(data)
cash_flow = CashFlow(data)
fs = FinancialPerformance('TEST',
[income_statement],
[balance_sheet],
[cash_flow])
assert isinstance(fs, FinancialPerformance)
def test_get_balance_sheet_by_year(self):
balance_sheet = BalanceSheet(data)
income_statement = IncomeStatement(data)
cash_flow = CashFlow(data)
fs = FinancialPerformance('TEST',
[income_statement],
[balance_sheet],
[cash_flow])
assert isinstance(fs.get_balance_sheet_by_year(2016), BalanceSheet)
def test_get_income_statement_by_year(self):
balance_sheet = BalanceSheet(data)
income_statement = IncomeStatement(data)
cash_flow = CashFlow(data)
fs = FinancialPerformance('TEST',
[income_statement],
[balance_sheet],
[cash_flow])
assert isinstance(fs.get_income_statement_by_year(2016),
IncomeStatement)
def test_get_cash_flow_by_year(self):
balance_sheet = BalanceSheet(data)
income_statement = IncomeStatement(data)
cash_flow = CashFlow(data)
fs = FinancialPerformance('TEST',
[income_statement],
[balance_sheet],
[cash_flow])
assert isinstance(fs.get_cash_flow_by_year(2016), CashFlow)
def test_return_on_assets(self):
bss = [BalanceSheet(data), BalanceSheet(previous_data)]
ics = [IncomeStatement(data), IncomeStatement(previous_data)]
cfs = [CashFlow(data), CashFlow(previous_data)]
fs = FinancialPerformance('TEST',
bss,
ics,
cfs)
assert fs.return_on_asset(2016) == 1.0
def test_return_on_equity(self):
bss = [BalanceSheet(data), BalanceSheet(previous_data)]
ics = [IncomeStatement(data), IncomeStatement(previous_data)]
cfs = [CashFlow(data), CashFlow(previous_data)]
fs = FinancialPerformance('TEST',
bss,
ics,
cfs)
assert fs.return_on_equity(2016) == 1.0
def test_profit_margin(self):
bss = [BalanceSheet(data), BalanceSheet(previous_data)]
ics = [IncomeStatement(data), IncomeStatement(previous_data)]
cfs = [CashFlow(data), CashFlow(previous_data)]
fs = FinancialPerformance('TEST',
bss,
ics,
cfs)
assert fs.profit_margin(2016) == 1.0
def test_balance_sheet_compare(self):
assert BalanceSheet(data) == BalanceSheet(previous_data)
def test_income_statement_compare(self):
assert IncomeStatement(data) == IncomeStatement(previous_data)
def test_cash_flow_compare(self):
assert CashFlow(data) == CashFlow(previous_data)
| 32.674074 | 75 | 0.579914 | 411 | 4,411 | 5.941606 | 0.126521 | 0.068796 | 0.074529 | 0.085995 | 0.747338 | 0.582719 | 0.582719 | 0.582719 | 0.582719 | 0.582719 | 0 | 0.025247 | 0.335525 | 4,411 | 134 | 76 | 32.91791 | 0.807915 | 0 | 0 | 0.647059 | 0 | 0 | 0.042167 | 0.010882 | 0 | 0 | 0 | 0 | 0.127451 | 1 | 0.127451 | false | 0 | 0.039216 | 0 | 0.176471 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00d442d399c62daee46ef62ed81ee028a91e9d40 | 10,712 | py | Python | metalprot/basic/filter.py | lonelu/Metalprot | e51bee472c975aa171bdb6ee426a07ca69f110ee | [
"MIT"
] | null | null | null | metalprot/basic/filter.py | lonelu/Metalprot | e51bee472c975aa171bdb6ee426a07ca69f110ee | [
"MIT"
] | null | null | null | metalprot/basic/filter.py | lonelu/Metalprot | e51bee472c975aa171bdb6ee426a07ca69f110ee | [
"MIT"
] | null | null | null | import numpy as np
import prody as pr
from prody.measure.transform import calcRMSD
from scipy.spatial.distance import cdist
import itertools
from sklearn.neighbors import NearestNeighbors
from .vdmer import pair_wise_geometry_matrix
class Search_filter:
def __init__(self, filter_abple = False, filter_phipsi = True, max_phipsi_val = 15,
filter_vdm_score = False, min_vdm_score = 0, filter_vdm_count = False, min_vdm_clu_num = 20,
after_search_filter_geometry = False, filter_based_geometry_structure = False, angle_tol = 5, aa_aa_tol = 0.5, aa_metal_tol = 0.2,
pair_angle_range = None, pair_aa_aa_dist_range = None, pair_metal_aa_dist_range = None,
after_search_filter_qt_clash = False, vdm_vdm_clash_dist = 2.7, vdm_bb_clash_dist = 2.2,
after_search_open_site_clash = True, open_site_dist = 3.0, write_filtered_result = False,
selfcenter_filter_member_phipsi = True):
self.filter_abple = filter_abple
self.filter_phipsi = filter_phipsi
self.max_phipsi_val = max_phipsi_val
self.filter_vdm_score = filter_vdm_score
self.min_vdm_score = min_vdm_score
self.filter_vdm_count = filter_vdm_count
self.min_vdm_clu_num = min_vdm_clu_num
self.after_search_filter_geometry = after_search_filter_geometry
self.filter_based_geometry_structure = filter_based_geometry_structure
self.angle_tol = angle_tol
self.aa_aa_tol = aa_aa_tol
self.aa_metal_tol = aa_metal_tol
self.pair_angle_range = pair_angle_range # [90, 110]
self.pair_aa_aa_dist_range = pair_aa_aa_dist_range # [2.8, 3.4]
self.pair_metal_aa_dist_range = pair_metal_aa_dist_range # [2.0, 2.3]
self.after_search_filter_qt_clash = after_search_filter_qt_clash
self.vdm_vdm_clash_dist = vdm_vdm_clash_dist
self.vdm_bb_clash_dist = vdm_bb_clash_dist
self.after_search_open_site_clash = after_search_open_site_clash
self.open_site_dist = open_site_dist
self.write_filtered_result = write_filtered_result
self.selfcenter_filter_member_phipsi = selfcenter_filter_member_phipsi
def para2string(self):
parameters = "Filter parameters: \n"
parameters += 'filter_abple: ' + str(self.filter_abple) + ' \n'
parameters += 'filter_phipsi: ' + str(self.filter_phipsi) + ' \n'
parameters += 'max_phipsi_val: ' + str(self.max_phipsi_val) + ' \n'
parameters += 'filter_vdm_score: ' + str(self.filter_vdm_score) + ' \n'
parameters += 'min_vdm_score: ' + str(self.min_vdm_score) + ' \n'
parameters += 'filter_vdm_count: ' + str(self.filter_vdm_count) + ' \n'
parameters += 'min_vdm_clu_num: ' + str(self.min_vdm_clu_num) + ' \n'
parameters += 'after_search_filter_geometry: ' + str(self.after_search_filter_geometry) + ' \n'
parameters += 'filter_based_geometry_structure: ' + str(self.filter_based_geometry_structure) + ' \n'
parameters += 'pair_angle_range: ' + str(self.pair_angle_range) + ' \n'
parameters += 'pair_aa_aa_dist_range: ' + str(self.pair_aa_aa_dist_range) + ' \n'
parameters += 'pair_metal_aa_dist_range: ' + str(self.pair_metal_aa_dist_range) + ' \n'
parameters += 'filter_qt_clash: ' + str(self.after_search_filter_qt_clash) + ' \n'
parameters += 'vdm_vdm_clash_dist: ' + str(self.vdm_vdm_clash_dist) + ' \n'
parameters += 'vdm_bb_clash_dist: ' + str(self.vdm_bb_clash_dist) + ' \n'
parameters += 'after_search_open_site_clash: ' + str(self.after_search_open_site_clash) + ' \n'
parameters += 'open_site_dist: ' + str(self.open_site_dist) + ' \n'
parameters += 'write_filtered_result: ' + str(self.write_filtered_result) + ' \n'
parameters += 'selfcenter_filter_member_phipsi: ' + str(self.selfcenter_filter_member_phipsi) + ' \n'
return parameters
@staticmethod
def after_search_geo_pairwise_satisfied(combinfo, pair_angle_range = None, pair_aa_aa_dist_range = None, pair_metal_aa_dist_range = None):
'''
range = (75, 125) for Zn.
        If all pairwise angles fall within the range, the geometry is satisfied.
'''
satisfied = True
if pair_angle_range:
for an in combinfo.angle_pair:
if an < pair_angle_range[0] or an > pair_angle_range[1]:
combinfo.pair_angle_ok = -1
satisfied = False
break
if pair_aa_aa_dist_range:
for ad in combinfo.aa_aa_pair:
if ad < pair_aa_aa_dist_range[0] or ad > pair_aa_aa_dist_range[1]:
combinfo.pair_aa_aa_dist_ok = -1
satisfied = False
break
if pair_metal_aa_dist_range:
combinfo.pair_aa_metal_dist_ok = 1
for amd in combinfo.metal_aa_pair:
if amd < pair_metal_aa_dist_range[0] or amd > pair_metal_aa_dist_range[1]:
combinfo.pair_aa_metal_dist_ok = -1
satisfied = False
break
return satisfied
@staticmethod
def get_min_geo(geometry, geo_struct, metal_sel = 'name NI MN ZN CO CU MG FE' ):
'''
Metal must be the last atom in the prody object.
'''
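        # Brute-force every ordering of the coordinating atoms and keep the
        # permutation whose superposition onto the ideal geometry yields the
        # lowest RMSD (oxygens are excluded from the alignment)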
aa_coords = geo_struct.select('not ' + metal_sel).getCoords()
metal_coord = geo_struct.select(metal_sel).getCoords()[0]
ct_len = len(aa_coords)
min_rmsd = 0
min_geo_struct = None
for xs in itertools.permutations(range(ct_len), ct_len):
_geo_struct = geo_struct.copy()
coords = []
for x in xs:
coords.append(aa_coords[x])
coords.append(metal_coord)
_geo_struct.setCoords(np.array(coords))
pr.calcTransformation(_geo_struct.select('not oxygen'), geometry).apply(_geo_struct)
rmsd = pr.calcRMSD(_geo_struct.select('not oxygen'), geometry)
if not min_geo_struct:
min_geo_struct = _geo_struct
min_rmsd = rmsd
elif rmsd < min_rmsd:
min_geo_struct = _geo_struct
min_rmsd = rmsd
return min_geo_struct, min_rmsd
@staticmethod
def after_search_geo_strcut_satisfied(comb_info, min_geo_struct, angle_tol, aa_aa_tol, aa_metal_tol):
aa_aa_pair, metal_aa_pair, angle_pair = pair_wise_geometry_matrix(min_geo_struct)
info_aa_aa_pair, info_metal_aa_pair, info_angle_pair = pair_wise_geometry_matrix(comb_info.geometry)
satisfied = True
comb_info.pair_aa_metal_dist_ok = 1
for i in range(len(metal_aa_pair)):
if info_metal_aa_pair[i] < metal_aa_pair[i] - aa_metal_tol or info_metal_aa_pair[i] > metal_aa_pair[i] + aa_metal_tol:
comb_info.pair_aa_metal_dist_ok = -1
satisfied = False
break
comb_info.pair_aa_aa_dist_ok = 1
for i, j in itertools.combinations(range(aa_aa_pair.shape[0]), 2):
if info_aa_aa_pair[i, j] < aa_aa_pair[i, j] - aa_aa_tol or info_aa_aa_pair[i, j] > aa_aa_pair[i, j] + aa_aa_tol:
comb_info.pair_aa_aa_dist_ok = -1
satisfied = False
break
comb_info.pair_angle_ok = 1
for i, j in itertools.combinations(range(aa_aa_pair.shape[0]), 2):
if info_angle_pair[i, j] < angle_pair[i, j] - angle_tol or info_angle_pair[i, j] > angle_pair[i, j] + angle_tol:
comb_info.pair_angle_ok = -1
satisfied = False
break
return satisfied
@staticmethod
def vdm_clash(vdms, target, vdm_vdm_clash_dist = 2.7, vdm_bb_clash_dist = 2.2, unsupperimposed = True, wins = None, align_sel = 'name N CA C'):
'''
        Clash checking with the sklearn.neighbors NearestNeighbors method.
        All sc atoms except CB of each vdM are used for clash checking.
        All bb atoms of the target are used for clash checking.
        If a clash is detected, return True.
'''
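        # Collect the (optionally re-superimposed) side-chain heavy atoms of each
        # vdM, then radius-query them against the other vdMs and the target backbone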
coords = []
for i in range(len(vdms)):
vdm = vdms[i]
if unsupperimposed:
win = wins[i]
target_sel = 'resindex ' + str(win) + ' and ' + align_sel
query_sel = 'resindex ' + str(vdm.contact_resind) + ' and '+ align_sel
if len(vdm.query.select(query_sel)) != len(target.select(target_sel)):
                    print('superimpose_target_bb not happening')
continue
transform = pr.calcTransformation(vdm.query.select(query_sel), target.select(target_sel))
transform.apply(vdm.query)
vdm_sel = 'protein and heavy and sc and not name CB'
coord = vdm.query.select(vdm_sel).getCoords()
coords.append(coord)
for i, j in itertools.combinations(range(len(coords)), 2):
neigh_y = NearestNeighbors(radius= vdm_vdm_clash_dist)
neigh_y.fit(coords[i])
x_in_y = neigh_y.radius_neighbors(coords[j])
x_has_y = any([True if len(a) >0 else False for a in x_in_y[1]])
if x_has_y:
return True
bb_coord = target.select('protein and heavy and bb').getCoords()
for i in range(len(coords)):
neigh_y = NearestNeighbors(radius= vdm_bb_clash_dist)
neigh_y.fit(bb_coord)
x_in_y = neigh_y.radius_neighbors(coords[i])
x_has_y = any([True if len(a) >0 else False for a in x_in_y[1]])
if x_has_y:
return True
return False
@staticmethod
def open_site_clashing(vdms, target, ideal_geo, open_site_dist = 3.0):
'''
        The open site of ideal_geo must be an oxygen; the other atoms must not be oxygens.
        If a clash is detected, return True.
'''
ideal_geo_coord = [ideal_geo.select('oxygen')[0].getCoords()]
coords = []
for i in range(len(vdms)):
vdm = vdms[i]
vdm_sel = 'protein and heavy and sc and not name CB'
coord = vdm.query.select(vdm_sel).getCoords()
coords.extend(coord)
bb_coord = target.select('protein and heavy and bb').getCoords()
coords.extend(bb_coord)
neigh_y = NearestNeighbors(radius= open_site_dist)
neigh_y.fit(coords)
x_in_y = neigh_y.radius_neighbors(ideal_geo_coord)
x_has_y = any([True if len(a) >0 else False for a in x_in_y[1]])
if x_has_y:
return True
return False
| 43.193548 | 147 | 0.629854 | 1,484 | 10,712 | 4.166442 | 0.123315 | 0.017467 | 0.032023 | 0.02329 | 0.474527 | 0.331069 | 0.261362 | 0.231926 | 0.203461 | 0.163351 | 0 | 0.009245 | 0.283047 | 10,712 | 247 | 148 | 43.368421 | 0.795833 | 0.045743 | 0 | 0.283333 | 0 | 0 | 0.073151 | 0.021141 | 0 | 0 | 0 | 0 | 0 | 1 | 0.038889 | false | 0 | 0.038889 | 0 | 0.133333 | 0.005556 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00d68ed3e96b3394d7d99bd2cd2947cd940a0e2a | 29,199 | py | Python | ucsl/ucsl_classifier.py | neurospin-projects/2021_rlouiset_ucsl | 7e98a58d9940164c23f748fa9c974cf556fa28b0 | [
"BSD-3-Clause"
] | 2 | 2021-07-19T12:42:58.000Z | 2021-12-21T09:56:32.000Z | ucsl/ucsl_classifier.py | neurospin-projects/2021_rlouiset_ucsl | 7e98a58d9940164c23f748fa9c974cf556fa28b0 | [
"BSD-3-Clause"
] | null | null | null | ucsl/ucsl_classifier.py | neurospin-projects/2021_rlouiset_ucsl | 7e98a58d9940164c23f748fa9c974cf556fa28b0 | [
"BSD-3-Clause"
] | null | null | null | import copy
import logging
from sklearn.base import ClassifierMixin
from sklearn.metrics import adjusted_rand_score as ARI
from sklearn.mixture import GaussianMixture
from ucsl.base import *
from ucsl.utils import *
class UCSL_C(BaseEM, ClassifierMixin):
"""ucsl classifier.
Implementation of Mike Tipping"s Relevance Vector Machine for
classification using the scikit-learn API.
Parameters
----------
clustering : string or object, optional (default="gaussian_mixture")
Clustering method for the Expectation step,
If not specified, "gaussian_mixture" (spherical by default) will be used.
It must be one of "k_means", "gaussian_mixture"
It can also be a sklearn-like object with fit, predict and fit_predict methods.
    maximization : string or object, optional (default="lr")
        Classification method for the maximization step,
        If not specified, "lr" (Logistic Regression) will be used.
        It must be one of "lr", "svc".
        It can also be a sklearn-like object with fit and predict methods and coef_ and intercept_ attributes.
negative_weighting : string, optional (default="soft")
negative samples weighting applied during the Maximization step,
If not specified, UCSL original "soft" will be used.
It must be one of "uniform", "soft", "hard".
ie : the importance weight of non-clustered samples in the sub-classifiers estimation
positive_weighting : string, optional (default="hard")
positive samples weighting applied during the Maximization step,
If not specified, UCSL original "hard" will be used.
It must be one of "uniform", "soft", "hard".
ie : the importance weight of clustered samples in the sub-classifiers estimation
n_clusters : int, optional (default=2)
numbers of subtypes we are assuming (equal to K in UCSL original paper)
If not specified, the value of 2 will be used.
Must be > 1.
label_to_cluster : int, optional (default=1)
which label we are clustering into subgroups
If not specified, the value of 1 will be used.
ie : label_to_cluster is similar to "positive class" in UCSL original paper
Must be 0 or 1.
n_iterations : int, optional (default=10)
numbers of Expectation-Maximization step performed per consensus run
If not specified, the value of 10 will be used.
Must be > 1.
n_consensus : int, optional (default=10)
numbers of Expectation-Maximization loops performed before ensembling of all the clusterings
If not specified, the value of 10 will be used.
Must be > 1.
stability_threshold : float, optional (default=0.9)
Adjusted rand index threshold between 2 successive iterations clustering
If not specified, the value of 0.9 will be used.
Must be between 0 and 1.
noise_tolerance_threshold : float, optional (default=10)
        Threshold tolerance in the Gram-Schmidt algorithm.
Given an orthogonalized vector, if its norm is inferior to 1 / noise_tolerance_threshold,
we do not add it to the orthonormalized basis.
Must be > 0.
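
    Examples
    --------
    A minimal usage sketch (with hypothetical arrays `X_train`, `y_train` and
    `X_test`, where label 1 is the class being subtyped):

    >>> model = UCSL_C(n_clusters=2, label_to_cluster=1)
    >>> model.fit(X_train, y_train)
    >>> y_clsf, y_clusters = model.predict(X_test)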
"""
def __init__(self, stability_threshold=0.9, noise_tolerance_threshold=10,
n_consensus=10, n_iterations=10,
n_clusters=2, label_to_cluster=1,
                 clustering='gaussian_mixture', maximization='lr',
negative_weighting='soft', positive_weighting='hard',
training_label_mapping=None):
super().__init__(clustering=clustering, maximization=maximization,
stability_threshold=stability_threshold, noise_tolerance_threshold=noise_tolerance_threshold,
n_consensus=n_consensus, n_iterations=n_iterations)
# define the number of clusters needed
self.n_clusters = n_clusters
# define which label we want to cluster
self.label_to_cluster = label_to_cluster
# define the mapping of labels before fitting the algorithm
# for example, one may want to merge 2 labels together before fitting to check if clustering separate them well
if training_label_mapping is None:
self.training_label_mapping = {label: label for label in range(2)}
else:
self.training_label_mapping = training_label_mapping
# define what are the weightings we want for each label
        assert (negative_weighting in ['hard', 'soft', 'uniform']), \
            "negative_weighting must be one of 'hard', 'soft', 'uniform'"
        assert (positive_weighting in ['hard', 'soft', 'uniform']), \
            "positive_weighting must be one of 'hard', 'soft', 'uniform'"
self.negative_weighting = negative_weighting
self.positive_weighting = positive_weighting
# store directions from the Maximization method and store intercepts
self.coefficients = {cluster_i: [] for cluster_i in range(self.n_clusters)}
self.intercepts = {cluster_i: [] for cluster_i in range(self.n_clusters)}
# store intermediate and consensus results in dictionaries
self.cluster_labels_ = None
self.clustering_assignments = None
# define barycenters saving dictionaries
self.barycenters = None
# define orthonormal directions basis and clustering methods at each consensus step
self.orthonormal_basis = {c: {} for c in range(n_consensus)}
self.clustering_method = {c: {} for c in range(n_consensus)}
def fit(self, X_train, y_train):
"""Fit the ucsl model according to the given training data.
Parameters
----------
X_train : array-like, shape (n_samples, n_features)
Training vectors.
y_train : array-like, shape (n_samples,)
Target values.
Returns
-------
self
"""
# apply label mapping (in our case we merged "BIPOLAR" and "SCHIZOPHRENIA" into "MENTAL DISEASE" for our xp)
y_train_copy = y_train.copy()
for original_label, new_label in self.training_label_mapping.items():
y_train_copy[y_train == original_label] = new_label
# run the algorithm
self.run(X_train, y_train_copy, idx_outside_polytope=self.label_to_cluster)
return self
def predict(self, X, y_true=None):
"""Predict classification and clustering using the UCSL model.
Parameters
----------
X : array-like, shape (n_samples, n_features)
Query points to be evaluate.
y_true : array-like, shape (n_samples, n_features)
Ground truth classification labels.
Returns
-------
        y_pred_clsf : array, shape (n_samples,)
            Predictions of the binary classification task for the query points.
            Returns y_true instead if y_true is not None.
        y_pred_clusters : array, shape (n_samples,)
            Predictions of the clustering task for the query points.
BEWARE : if y_true is not None, clustering prediction of samples considered "negative"
(with classification ground truth label different than label_to_cluster) are set to -1.
BEWARE : if y_true is None, clustering predictions of samples considered "negative"
(when classification prediction different than label_to_cluster) are set to -1.
"""
y_pred_proba_clsf = self.predict_classif_proba(X)
y_pred_clsf = np.argmax(y_pred_proba_clsf, 1)
y_pred_proba_clusters = self.predict_clusters(X)
y_pred_clusters = np.argmax(y_pred_proba_clusters, 1)
if y_true is None :
y_pred_clusters[y_pred_clsf == (1 - self.label_to_cluster)] = -1
return y_pred_clsf, y_pred_clusters
else :
y_pred_clusters[y_true == (1 - self.label_to_cluster)] = -1
return y_true, y_pred_clusters
def predict_proba(self, X, y_true=None):
"""Predict using the ucsl model.
Parameters
----------
X : array-like, shape (n_samples, n_features)
Query points to be evaluate.
Returns
-------
        y_pred_clsf : array, shape (n_samples,)
            Predicted class labels for the query points (argmax of the classification probabilities).
        y_pred_proba_clusters : array, shape (n_samples, n_clusters)
            Probabilistic predictions of the clustering task for the query points.
BEWARE : if y_true is not None, clustering prediction of samples considered "negative"
(with classification ground truth label different than label_to_cluster) are set to -1.
BEWARE : if y_true is None, clustering predictions of samples considered "negative"
(when classification prediction different than label_to_cluster) are set to -1.
"""
y_pred_proba_clsf = self.predict_classif_proba(X)
y_pred_clsf = np.argmax(y_pred_proba_clsf, 1)
y_pred_proba_clusters = self.predict_clusters(X)
if y_true is None :
y_pred_proba_clusters[y_pred_clsf == (1 - self.label_to_cluster)] = -1
return y_pred_clsf, y_pred_proba_clusters
else :
y_pred_proba_clusters[y_true == (1 - self.label_to_cluster)] = -1
return y_true, y_pred_proba_clusters
def predict_classif_proba(self, X):
"""Predict using the ucsl model.
Parameters
----------
X : array-like, shape (n_samples, n_features)
Query points to be evaluate.
Returns
-------
y_pred : array, shape (n_samples, n_labels)
Predictions of the probabilities of the query points belonging to labels.
"""
y_pred = np.zeros((len(X), 2))
distances_to_hyperplanes = self.compute_distances_to_hyperplanes(X)
        # compute the predictions w.r.t. the clusters previously found
cluster_predictions = self.predict_clusters(X)
y_pred[:, self.label_to_cluster] = sum(
[cluster_predictions[:, cluster] * distances_to_hyperplanes[:, cluster] for cluster in range(self.n_clusters)])
        # compute probabilities with a sigmoid
y_pred[:, self.label_to_cluster] = sigmoid(
y_pred[:, self.label_to_cluster] / np.max(y_pred[:, self.label_to_cluster]))
y_pred[:, 1 - self.label_to_cluster] = 1 - y_pred[:, self.label_to_cluster]
return y_pred
def compute_distances_to_hyperplanes(self, X):
"""Predict using the ucsl model.
Parameters
----------
X : array-like, shape (n_samples, n_features)
Query points to be evaluate.
Returns
-------
        distances_to_hyperplanes : array, shape (n_samples, n_clusters)
            Signed point/hyperplane margins for each cluster's sub-classifier.
"""
# first compute points distances to hyperplane
distances_to_hyperplanes = np.zeros((len(X), self.n_clusters))
for cluster_i in range(self.n_clusters):
coefficient = self.coefficients[cluster_i]
intercept = self.intercepts[cluster_i]
distances_to_hyperplanes[:, cluster_i] = X @ coefficient[0] + intercept[0]
return distances_to_hyperplanes
def predict_clusters(self, X):
"""Predict clustering for each label in a hierarchical manner.
Parameters
----------
X : array-like, shape (n_samples, n_features)
Training vectors.
Returns
-------
        y_pred_proba_clusters : array, shape (n_samples, n_clusters)
            Normalized cluster membership scores for each sample.
"""
X_proj = X @ self.orthonormal_basis[-1].T
Q_distances = np.zeros((len(X_proj), self.n_clusters))
        for cluster in range(self.n_clusters):
            # Manhattan distance to each cluster barycenter in the projected space
            # (works for both the 1D and the multi-dimensional projections)
            Q_distances[:, cluster] = np.sum(np.abs(X_proj - self.barycenters[cluster]), 1)
        # mirror the convention used in expectation_step: a smaller distance to a
        # barycenter means a higher cluster score
        Q_distances /= np.sum(Q_distances, 1)[:, None]
        y_pred_proba_clusters = 1 - Q_distances
return y_pred_proba_clusters
def run(self, X, y, idx_outside_polytope):
# set label idx_outside_polytope outside of the polytope by setting it to positive labels
y_polytope = np.copy(y)
        # if a label is inside the polytope, its distance is negative and it is not clustered
y_polytope[y_polytope != idx_outside_polytope] = -1
# if label is outside of the polytope, the distance is positive and the label is clustered
y_polytope[y_polytope == idx_outside_polytope] = 1
index_positives = np.where(y_polytope == 1)[0] # index for Positive labels (outside polytope)
index_negatives = np.where(y_polytope == -1)[0] # index for Negative labels (inside polytope)
n_consensus = self.n_consensus
# define the clustering assignment matrix (each column correspond to one consensus run)
self.clustering_assignments = np.zeros((len(index_positives), n_consensus))
for consensus in range(n_consensus):
# first we initialize the clustering matrix S, with the initialization strategy set in self.initialization
S, cluster_index = self.initialize_clustering(X, y_polytope, index_positives)
if self.negative_weighting in ['uniform']:
S[index_negatives] = 1 / self.n_clusters
elif self.negative_weighting in ['hard']:
S[index_negatives] = np.rint(S[index_negatives])
if self.positive_weighting in ['hard']:
S[index_positives] = np.rint(S[index_positives])
cluster_index = self.run_EM(X, y_polytope, S, cluster_index, index_positives, index_negatives, consensus)
# update the cluster index for the consensus clustering
self.clustering_assignments[:, consensus] = cluster_index
if n_consensus > 1:
self.clustering_ensembling(X, y_polytope, index_positives, index_negatives)
def initialize_clustering(self, X, y_polytope, index_positives):
"""Perform a bagging of the previously obtained clusterings and compute new hyperplanes.
Parameters
----------
X : array-like, shape (n_samples, n_features)
Training vectors.
y_polytope : array-like, shape (n_samples,)
Target values.
index_positives : array-like, shape (n_positives_samples,)
indexes of the positive labels being clustered
Returns
-------
        S : array-like, shape (n_samples, n_clusters)
            Cluster prediction matrix.
        cluster_index : array-like, shape (n_positives_samples,)
            argmax cluster assignment for the positive samples.
"""
S = np.ones((len(y_polytope), self.n_clusters)) / self.n_clusters
if self.clustering in ["k_means"]:
KM = KMeans(n_clusters=self.n_clusters, init="random", n_init=1).fit(X[index_positives])
S = one_hot_encode(KM.predict(X))
if self.clustering in ["gaussian_mixture"]:
GMM = GaussianMixture(n_components=self.n_clusters, init_params="random", n_init=1,
covariance_type="spherical").fit(X[index_positives])
S = GMM.predict_proba(X)
else:
custom_clustering_method_ = copy.deepcopy(self.clustering)
S_positives = custom_clustering_method_.fit_predict(X[index_positives])
S_distances = np.zeros((len(X), np.max(S_positives) + 1))
for cluster in range(np.max(S_positives) + 1):
S_distances[:, cluster] = np.sum(
np.abs(X - np.mean(X[index_positives][S_positives == cluster], 0)[None, :]), 1)
S_distances /= np.sum(S_distances, 1)[:, None]
            S = 1 - S_distances
cluster_index = np.argmax(S[index_positives], axis=1)
return S, cluster_index
def maximization_step(self, X, y_polytope, S):
if self.maximization == "svc":
for cluster in range(self.n_clusters):
cluster_assignment = np.ascontiguousarray(S[:, cluster])
SVM_coefficient, SVM_intercept = launch_svc(X, y_polytope, cluster_assignment)
self.coefficients[cluster] = SVM_coefficient
self.intercepts[cluster] = SVM_intercept
elif self.maximization == "lr":
for cluster in range(self.n_clusters):
cluster_assignment = np.ascontiguousarray(S[:, cluster])
logistic_coefficient, logistic_intercept = launch_logistic(X, y_polytope, cluster_assignment)
self.coefficients[cluster] = logistic_coefficient
self.intercepts[cluster] = logistic_intercept
else:
for cluster in range(self.n_clusters):
cluster_assignment = np.ascontiguousarray(S[:, cluster])
self.maximization.fit(X, y_polytope, sample_weight=cluster_assignment)
self.coefficients[cluster] = self.maximization.coef_
self.intercepts[cluster] = self.maximization.intercept_
def expectation_step(self, X, S, index_positives, consensus):
"""Update clustering method (update clustering distribution matrix S).
Parameters
----------
X : array-like, shape (n_samples, n_features)
Training vectors.
        S : array-like, shape (n_samples, n_clusters)
            Cluster prediction matrix.
index_positives : array-like, shape (n_positives_samples,)
indexes of the positive labels being clustered
consensus : int
which consensus is being run ?
Returns
-------
        S : array-like, shape (n_samples, n_clusters)
            Cluster prediction matrix.
cluster_index : array-like, shape (n_positives_samples, )
clusters predictions argmax for positive samples.
"""
# get directions basis
directions_basis = []
for cluster in range(self.n_clusters):
directions_basis.extend(self.coefficients[cluster])
norm_directions = [np.linalg.norm(direction) for direction in directions_basis]
directions_basis = np.array(directions_basis) / np.array(norm_directions)[:, None]
        # apply the Gram-Schmidt algorithm
orthonormalized_basis = self.graam_schmidt(directions_basis)
self.orthonormal_basis[consensus] = orthonormalized_basis
self.orthonormal_basis[-1] = np.array(orthonormalized_basis).copy()
X_proj = X @ self.orthonormal_basis[consensus].T
# get centroids or barycenters
centroids = [np.mean(S[index_positives, cluster][:, None] * X_proj[index_positives, :], 0) for cluster in
range(self.n_clusters)]
if self.clustering == 'k_means':
self.clustering_method[consensus] = KMeans(n_clusters=self.n_clusters, init=np.array(centroids),
n_init=1).fit(X_proj[index_positives])
Q_positives = self.clustering_method[consensus].fit_predict(X_proj[index_positives])
Q_distances = np.zeros((len(X_proj), np.max(Q_positives) + 1))
for cluster in range(np.max(Q_positives) + 1):
Q_distances[:, cluster] = np.sum(
np.abs(X_proj - self.clustering_method[consensus].cluster_centers_[cluster]), 1)
Q_distances = Q_distances / np.sum(Q_distances, 1)[:, None]
Q = 1 - Q_distances
elif self.clustering == 'gaussian_mixture':
self.clustering_method[consensus] = GaussianMixture(n_components=self.n_clusters,
covariance_type="spherical",
means_init=np.array(centroids)).fit(
X_proj[index_positives])
Q = self.clustering_method[consensus].predict_proba(X_proj)
self.clustering_method[-1] = copy.deepcopy(self.clustering_method[consensus])
else:
self.clustering_method[consensus] = copy.deepcopy(self.clustering)
Q_positives = self.clustering_method[consensus].fit_predict(X_proj[index_positives])
Q_distances = np.zeros((len(X_proj), np.max(Q_positives) + 1))
for cluster in range(np.max(Q_positives) + 1):
Q_distances[:, cluster] = np.sum(
np.abs(X_proj - np.mean(X_proj[index_positives][Q_positives == cluster], 0)[None, :]), 1)
Q_distances = Q_distances / np.sum(Q_distances, 1)[:, None]
Q = 1 - Q_distances
# define matrix clustering
S = Q.copy()
cluster_index = np.argmax(Q[index_positives], axis=1)
return S, cluster_index
def graam_schmidt(self, directions_basis):
        # compute the most important vectors first, because Gram-Schmidt is not invariant by permutation when the matrix is not square
scores = []
for i, direction_i in enumerate(directions_basis):
scores_i = []
for j, direction_j in enumerate(directions_basis):
if i != j:
scores_i.append(np.linalg.norm(direction_i - (np.dot(direction_i, direction_j) * direction_j)))
scores.append(np.mean(scores_i))
directions = directions_basis[np.array(scores).argsort()[::-1], :]
# orthonormalize coefficient/direction basis
basis = []
for v in directions:
            w = v - sum(np.dot(v, b) * b for b in basis)
if len(basis) >= 2:
if np.linalg.norm(w) * self.noise_tolerance_threshold > 1:
basis.append(w / np.linalg.norm(w))
elif np.linalg.norm(w) > 1e-2:
basis.append(w / np.linalg.norm(w))
return np.array(basis)
def run_EM(self, X, y_polytope, S, cluster_index, index_positives, index_negatives, consensus):
"""Perform a bagging of the previously obtained clustering and compute new hyperplanes.
Parameters
----------
X : array-like, shape (n_samples, n_features)
Training vectors.
y_polytope : array-like, shape (n_samples,)
Target values.
        S : array-like, shape (n_samples, n_clusters)
            Cluster prediction matrix.
cluster_index : array-like, shape (n_positives_samples, )
clusters predictions argmax for positive samples.
index_positives : array-like, shape (n_positives_samples,)
indexes of the positive labels being clustered
        index_negatives : array-like, shape (n_negatives_samples,)
            indexes of the negative labels (inside the polytope)
consensus : int
index of consensus
Returns
-------
        cluster_index : array-like, shape (n_positives_samples,)
            final cluster assignment of the positive samples.
"""
best_cluster_consistency = 1
        if consensus == -1:
            # final ensembling run performed after consensus clustering
            consensus = self.n_consensus + 1
            best_cluster_consistency = 0
for iteration in range(self.n_iterations):
# check for degenerate clustering for positive labels (warning) and negatives (might be normal)
for cluster in range(self.n_clusters):
if np.count_nonzero(S[index_positives, cluster]) == 0:
logging.debug("Cluster dropped, one cluster have no positive points anymore, in iteration : %d" % (
iteration - 1))
logging.debug("Re-initialization of the clustering...")
S, cluster_index = self.initialize_clustering(X, y_polytope, index_positives)
if np.max(S[index_negatives, cluster]) < 0.5:
logging.debug(
"Cluster too far, one cluster have no negative points anymore, in consensus : %d" % (
iteration - 1))
logging.debug("Re-distribution of this cluster negative weight to 'all'...")
S[index_negatives, cluster] = 1 / self.n_clusters
# re-init directions for each clusters
self.coefficients = {cluster_i: [] for cluster_i in range(self.n_clusters)}
self.intercepts = {cluster_i: [] for cluster_i in range(self.n_clusters)}
# run maximization step
self.maximization_step(X, y_polytope, S)
# decide the convergence based on the clustering stability
S_hold = S.copy()
S, cluster_index = self.expectation_step(X, S, index_positives, consensus)
# applying the negative weighting set as input
if self.negative_weighting in ['uniform']:
S[index_negatives] = 1 / self.n_clusters
elif self.negative_weighting in ['hard']:
S[index_negatives] = np.rint(S[index_negatives])
if self.positive_weighting in ['hard']:
S[index_positives] = np.rint(S[index_positives])
            # check the clustering stability with the Adjusted Rand Index as stopping criterion
cluster_consistency = ARI(np.argmax(S[index_positives], 1), np.argmax(S_hold[index_positives], 1))
if cluster_consistency > best_cluster_consistency:
best_cluster_consistency = cluster_consistency
self.coefficients[-1] = copy.deepcopy(self.coefficients)
self.intercepts[-1] = copy.deepcopy(self.intercepts)
self.orthonormal_basis[-1] = copy.deepcopy(self.orthonormal_basis[consensus])
self.clustering_method[-1] = copy.deepcopy(self.clustering_method[consensus])
if cluster_consistency > self.stability_threshold:
break
return cluster_index
def predict_clusters_proba_from_cluster_labels(self, X):
"""Predict positive and negative points clustering probabilities.
Parameters
----------
X : array-like, shape (n_samples, n_features)
Training vectors.
Returns
-------
S : array-like, shape (n_samples, n_samples)
Cluster prediction matrix.
"""
X_clustering_assignments = np.zeros((len(X), self.n_consensus))
for consensus in range(self.n_consensus):
X_proj = X @ self.orthonormal_basis[consensus].T
if self.clustering in ['k_means', 'gaussian_mixture']:
X_clustering_assignments[:, consensus] = self.clustering_method[consensus].predict(X_proj)
else:
X_clustering_assignments[:, consensus] = self.clustering_method[consensus].fit_predict(X_proj)
similarity_matrix = compute_similarity_matrix(self.clustering_assignments,
clustering_assignments_to_pred=X_clustering_assignments)
Q = np.zeros((len(X), self.n_clusters))
y_clusters_train_ = self.cluster_labels_
for cluster in range(self.n_clusters):
Q[:, cluster] = np.mean(similarity_matrix[y_clusters_train_ == cluster], 0)
Q /= np.sum(Q, 1)[:, None]
return Q
def clustering_ensembling(self, X, y_polytope, index_positives, index_negatives):
"""Perform a bagging of the previously obtained clustering and compute new hyperplanes.
Parameters
----------
X : array-like, shape (n_samples, n_features)
Training vectors.
y_polytope : array-like, shape (n_samples,)
Modified target values.
        index_positives : array-like, shape (n_positives_samples,)
            indexes of the positive labels being clustered
        index_negatives : array-like, shape (n_negatives_samples,)
            indexes of the negative labels
Returns
-------
None
"""
# perform consensus clustering
consensus_cluster_index = compute_spectral_clustering_consensus(self.clustering_assignments, self.n_clusters)
# save clustering predictions computed by bagging step
self.cluster_labels_ = consensus_cluster_index
# update clustering matrix S
S = self.predict_clusters_proba_from_cluster_labels(X)
if self.negative_weighting in ['uniform']:
S[index_negatives] = 1 / self.n_clusters
elif self.negative_weighting in ['hard']:
S[index_negatives] = np.rint(S[index_negatives])
if self.positive_weighting in ['hard']:
S[index_positives] = np.rint(S[index_positives])
cluster_index = self.run_EM(X, y_polytope, S, consensus_cluster_index, index_positives, index_negatives, -1)
# save barycenters and final predictions
self.cluster_labels_ = cluster_index
X_proj = X @ self.orthonormal_basis[-1].T
self.barycenters = [
np.mean(X_proj[index_positives][cluster_index == cluster], 0)[None, :] for cluster in
range(np.max(cluster_index) + 1)]
| 48.183168 | 128 | 0.637385 | 3,560 | 29,199 | 5.022753 | 0.10618 | 0.030535 | 0.023489 | 0.025166 | 0.512276 | 0.468766 | 0.415972 | 0.370952 | 0.327219 | 0.322633 | 0 | 0.006055 | 0.276071 | 29,199 | 605 | 129 | 48.26281 | 0.839862 | 0.351998 | 0 | 0.264286 | 0 | 0 | 0.032197 | 0 | 0 | 0 | 0 | 0 | 0.007143 | 1 | 0.053571 | false | 0 | 0.025 | 0 | 0.128571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00d842aa29ccd09a11bbcd8359c46b6c0494b69e | 400 | py | Python | doubi_sdust/0005.py | saurabh896/python-1 | f8d3aedf4c0fe6e24dfa3269ea7e642c9f7dd9b7 | [
"MIT"
] | 3,976 | 2015-01-01T15:49:39.000Z | 2022-03-31T03:47:56.000Z | doubi_sdust/0005.py | dwh65416396/python | 1a7e3edd1cd3422cc0eaa55471a0b42e004a9a1a | [
"MIT"
] | 97 | 2015-01-11T02:59:46.000Z | 2022-03-16T14:01:56.000Z | doubi_sdust/0005.py | dwh65416396/python | 1a7e3edd1cd3422cc0eaa55471a0b42e004a9a1a | [
"MIT"
] | 3,533 | 2015-01-01T06:19:30.000Z | 2022-03-28T13:14:54.000Z | '''
Problem 0005: you have a directory full of photos; resize them all so that
none exceeds the iPhone 5 screen resolution.
'''
from PIL import Image
import os.path
def Size(dirPath, size_x, size_y):
    f_list = os.listdir(dirPath)
    for i in f_list:
        if os.path.splitext(i)[1] == '.jpg':
            # build the full path so the script also works when the current
            # working directory is not dirPath
            img_path = os.path.join(dirPath, i)
            img = Image.open(img_path)
            img.thumbnail((size_x, size_y))
            img.save(img_path)
            print(i)
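# Note: Image.thumbnail() resizes in place and preserves the aspect ratio, so
# each image ends up fitting *within* (size_x, size_y) rather than matching it
# exactly.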
Size('D:\PyCharm 2017.1.3\projects', 1136, 640) | 26.666667 | 52 | 0.595 | 63 | 400 | 3.68254 | 0.650794 | 0.051724 | 0.077586 | 0.086207 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.063973 | 0.2575 | 400 | 15 | 53 | 26.666667 | 0.717172 | 0.13 | 0 | 0 | 0 | 0 | 0.093842 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.181818 | 0 | 0.272727 | 0.090909 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00d92d1664a279f5e2296cfecc6155261207c717 | 672 | py | Python | src/trials/trial_input.py | BattleManCWT/Job-Hunter-Search-Engine-with-crawled-Job-Dataset | a132783148832c284c0fdb385039cfa14c17b937 | [
"MIT"
] | null | null | null | src/trials/trial_input.py | BattleManCWT/Job-Hunter-Search-Engine-with-crawled-Job-Dataset | a132783148832c284c0fdb385039cfa14c17b937 | [
"MIT"
] | null | null | null | src/trials/trial_input.py | BattleManCWT/Job-Hunter-Search-Engine-with-crawled-Job-Dataset | a132783148832c284c0fdb385039cfa14c17b937 | [
"MIT"
] | 2 | 2020-12-14T02:35:40.000Z | 2021-03-19T06:29:31.000Z | import tkinter as tk
windows = tk.Tk()
windows.title("输入框、文本框")
windows.geometry("500x300") #界面大小
#设置输入框,对象是在windows上,show参数--->显示文本框输入时显示方式None:文字不加密,show="*"加密
e = tk.Entry(windows,show=None)
e.pack()
def insert_point():
var = e.get() #获取输入的信息
t.insert("insert",var) #参数1:插入方式,参数2:插入的数据
def insert_end():
var = e.get()
t.insert("end",var)
# the buttons insert the entry's data at the cursor position / at the end
b1 = tk.Button(windows, text="insert point", width=15, height=2, command=insert_point)
b1.pack()
b2 = tk.Button(windows, text="insert end", width=15, height=2, command=insert_end)
b2.pack()
# create the text box
t = tk.Text(windows, height=2)
t.pack()
windows.mainloop()
# print(len("我我我")) | 22.4 | 83 | 0.666667 | 102 | 672 | 4.352941 | 0.480392 | 0.081081 | 0.031532 | 0.085586 | 0.234234 | 0.121622 | 0 | 0 | 0 | 0 | 0 | 0.033217 | 0.14881 | 672 | 30 | 84 | 22.4 | 0.743007 | 0.184524 | 0 | 0.105263 | 0 | 0 | 0.087719 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.105263 | false | 0 | 0.052632 | 0 | 0.157895 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00db9ea31d36e006618e7ee363274309f264c275 | 1,321 | py | Python | fabulous/services/google.py | rusrushal13/fabulous | 61979213c5f1381e686e8a516606d9963e91f8b1 | [
"BSD-3-Clause"
] | null | null | null | fabulous/services/google.py | rusrushal13/fabulous | 61979213c5f1381e686e8a516606d9963e91f8b1 | [
"BSD-3-Clause"
] | null | null | null | fabulous/services/google.py | rusrushal13/fabulous | 61979213c5f1381e686e8a516606d9963e91f8b1 | [
"BSD-3-Clause"
] | null | null | null | """~google <search term> will return three results from the google search for <search term>"""
import re
from googleapiclient.discovery import build
my_api_key = "Your API Key(Link: https://console.developers.google.com/apis/dashboard)"
my_cse_id = "Your Custom Search Engine ID(Link: https://cse.google.co.in/cse/)"
"""fuction to fetch data from Google Custom Search Engine API"""
def google(searchterm, api_key, cse_id, **kwargs):
service = build("customsearch", "v1", developerKey=api_key, cache_discovery=False)
res = service.cse().list(q=searchterm, cx=cse_id, **kwargs).execute()
return res['items']
"""fuction to return first three search results"""
def google_search(searchterm):
results = google(searchterm, my_api_key, my_cse_id, num=10)
length = len(results)
retval = ""
    # collect at most the first three result links
    for index in range(min(length, 3)):
        retval += results[index]['link'] + "\n"
return retval
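# Illustrative usage (assumes the API key and CSE id placeholders above are
# replaced with real credentials):
#   google_search("python unittest")
# returns up to three result links, one per line.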
def on_message(msg, server):
text = msg.get("text", "")
match = re.findall(r"~google (.*)", text)
if not match:
return
searchterm = match[0]
return google_search(searchterm)
on_bot_message = on_message
| 30.72093 | 94 | 0.677517 | 183 | 1,321 | 4.786885 | 0.431694 | 0.034247 | 0.018265 | 0.034247 | 0.052511 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005618 | 0.191522 | 1,321 | 42 | 95 | 31.452381 | 0.814607 | 0.066616 | 0 | 0.066667 | 0 | 0 | 0.165319 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.166667 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00dddb8657c3b8c3b329fb18c769f7bd6e7f90ef | 2,255 | py | Python | codebase/models/extra_layers.py | abdussamettrkr/dirt-t | a605d0c31a4bec9e60eb533704cd5e423601c060 | [
"MIT"
] | null | null | null | codebase/models/extra_layers.py | abdussamettrkr/dirt-t | a605d0c31a4bec9e60eb533704cd5e423601c060 | [
"MIT"
] | null | null | null | codebase/models/extra_layers.py | abdussamettrkr/dirt-t | a605d0c31a4bec9e60eb533704cd5e423601c060 | [
"MIT"
] | null | null | null | import tensorflow as tf
import tensorbayes as tb
import numpy as np
from codebase.args import args
from tensorbayes.tfutils import softmax_cross_entropy_with_two_logits as softmax_xent_two
from tensorflow.contrib.framework import add_arg_scope
@add_arg_scope
def normalize_perturbation(d, scope=None):
with tf.name_scope(scope, 'norm_pert'):
output = tf.nn.l2_normalize(d, axis=list(range(1, len(d.shape))))
return output
@add_arg_scope
def scale_gradient(x, scale, scope=None, reuse=None):
with tf.name_scope('scale_grad'):
output = (1 - scale) * tf.stop_gradient(x) + scale * x
return output
@add_arg_scope
def noise(x, std, phase, scope=None, reuse=None):
with tf.name_scope(scope, 'noise'):
eps = tf.random_normal(tf.shape(x), 0.0, std)
output = tf.where(phase, x + eps, x)
return output
@add_arg_scope
def leaky_relu(x, a=0.2, name=None):
with tf.name_scope(name, 'leaky_relu'):
return tf.maximum(x, a * x)
@add_arg_scope
def basic_accuracy(a, b, scope=None):
with tf.name_scope(scope, 'basic_acc'):
a = tf.argmax(a, 1)
b = tf.argmax(b, 1)
eq = tf.cast(tf.equal(a, b), 'float32')
output = tf.reduce_mean(eq)
return output
@add_arg_scope
def perturb_image(x, p, classifier, pert='vat', scope=None):
with tf.name_scope(scope, 'perturb_image'):
eps = 1e-6 * normalize_perturbation(tf.random_normal(shape=tf.shape(x)))
# Predict on randomly perturbed image
eps_p = classifier(x + eps, phase=True, reuse=True)
loss = softmax_xent_two(labels=p, logits=eps_p)
# Based on perturbed image, get direction of greatest error
eps_adv = tf.gradients(loss, [eps], aggregation_method=2)[0]
# Use that direction as adversarial perturbation
eps_adv = normalize_perturbation(eps_adv)
x_adv = tf.stop_gradient(x + args.radius * eps_adv)
return x_adv
@add_arg_scope
def vat_loss(x, p, classifier, scope=None):
with tf.name_scope(scope, 'smoothing_loss'):
x_adv = perturb_image(x, p, classifier)
p_adv = classifier(x_adv, phase=True, reuse=True)
loss = tf.reduce_mean(softmax_xent_two(labels=tf.stop_gradient(p), logits=p_adv))
return loss
| 33.656716 | 89 | 0.686475 | 353 | 2,255 | 4.186969 | 0.286119 | 0.032476 | 0.05954 | 0.066306 | 0.273342 | 0.198241 | 0.159675 | 0.044655 | 0 | 0 | 0 | 0.008296 | 0.198226 | 2,255 | 66 | 90 | 34.166667 | 0.809181 | 0.062084 | 0 | 0.215686 | 0 | 0 | 0.037897 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.137255 | false | 0 | 0.117647 | 0 | 0.392157 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00de70ef58eee9f1f78b0b92159d53c4e743910d | 1,366 | py | Python | Projects/Online Workouts/w3resource/String/program-72.py | ivenpoker/Python-Projects | 2975e1bd687ec8dbcc7a4842c13466cb86292679 | [
"MIT"
] | 1 | 2019-09-23T15:51:45.000Z | 2019-09-23T15:51:45.000Z | Projects/Online Workouts/w3resource/String/program-72.py | ivenpoker/Python-Projects | 2975e1bd687ec8dbcc7a4842c13466cb86292679 | [
"MIT"
] | 5 | 2021-02-08T20:47:19.000Z | 2022-03-12T00:35:44.000Z | Projects/Online Workouts/w3resource/String/program-72.py | ivenpoker/Python-Projects | 2975e1bd687ec8dbcc7a4842c13466cb86292679 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
############################################################################################
# #
# Program purpose: Removes all consecutive duplicates from a given string. #
# Program Author : Happi Yvan <ivensteinpoker@gmail.com> #
# Creation Date : November 5, 2019 #
# #
############################################################################################
import itertools
def obtain_data_from_user(input_mess: str) -> str:
is_valid, user_data = False, ''
while is_valid is False:
try:
user_data = input(input_mess)
if len(user_data) == 0:
raise ValueError('Please enter some string to work with')
is_valid = True
except ValueError as ve:
print(f'[ERROR]: {ve}')
return user_data
def do_processing(main_str: str) -> str:
return ''.join(i for i, _ in itertools.groupby(main_str))
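# Illustrative behaviour: itertools.groupby groups *consecutive* equal
# characters, so keeping the first element of each group collapses runs, e.g.
#   do_processing('aaabccdd') -> 'abcd'
#   do_processing('aabaa')    -> 'aba'   (non-adjacent duplicates survive)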
if __name__ == "__main__":
main_data = obtain_data_from_user(input_mess='Enter some string data: ')
print(f'New string: {do_processing(main_str=main_data)}')
| 44.064516 | 92 | 0.428258 | 124 | 1,366 | 4.459677 | 0.556452 | 0.057866 | 0.050633 | 0.065099 | 0.097649 | 0.097649 | 0 | 0 | 0 | 0 | 0 | 0.008083 | 0.366032 | 1,366 | 30 | 93 | 45.533333 | 0.630485 | 0.243777 | 0 | 0 | 0 | 0 | 0.184814 | 0.050143 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0 | 0.058824 | 0.058824 | 0.294118 | 0.117647 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00dfce7e1992e7c1b676d1670007946bf45fd716 | 13,235 | py | Python | ukb/transforms/__init__.py | wi905252/ukb-cardiac-mri | 3177dde898a65b1d7f385b78e4f134de3852bea5 | [
"Apache-2.0"
] | 19 | 2018-05-30T22:13:17.000Z | 2022-01-18T14:04:40.000Z | ukb/transforms/__init__.py | wi905252/ukb-cardiac-mri | 3177dde898a65b1d7f385b78e4f134de3852bea5 | [
"Apache-2.0"
] | 1 | 2019-08-07T07:29:07.000Z | 2019-08-07T08:54:10.000Z | ukb/transforms/__init__.py | wi905252/ukb-cardiac-mri | 3177dde898a65b1d7f385b78e4f134de3852bea5 | [
"Apache-2.0"
] | 8 | 2019-07-03T23:19:43.000Z | 2021-11-15T17:09:24.000Z | from .ukbb import *
from .augmentations import *
from .multi_series import *
import random  # used by RandomOrder.__call__ (may not be guaranteed by the star imports above)
from torchvision.transforms import Compose
class RandomTransforms(object):
"""Base class for a list of transformations with randomness
Args:
transforms (list or tuple): list of transformations
"""
def __init__(self, transforms, out_range=(0.0, 1.0)):
assert isinstance(transforms, (list, tuple))
self.transforms = transforms
self.out_range = out_range
def __call__(self, *args, **kwargs):
raise NotImplementedError()
def __repr__(self):
format_string = self.__class__.__name__ + '('
for t in self.transforms:
format_string += '\n'
format_string += ' {0}'.format(t)
format_string += '\n)'
return format_string
class RandomOrder(RandomTransforms):
"""Apply a list of transformations in a random order
"""
def __call__(self, img):
order = list(range(len(self.transforms)))
random.shuffle(order)
for i in order:
img = self.transforms[i](img)
rescale = RescaleIntensity(out_range=self.out_range)
img = rescale(img)
return img
class ComposeMultiChannel(object):
"""Composes several transforms together for multi channel operations.
Args:
transforms (list of ``Transform`` objects): list of transforms to compose.
Example:
>>> transforms.Compose([
>>> transforms.CenterCrop(10),
>>> transforms.ToTensor(),
>>> ])
"""
def __init__(self, transforms):
self.transforms = transforms
def __call__(self, img1, img2, img3):
for t in self.transforms:
img1, img2, img3 = t(img1, img2, img3)
return img1, img2, img3
def __repr__(self):
format_string = self.__class__.__name__ + '('
for t in self.transforms:
format_string += '\n'
format_string += ' {0}'.format(t)
format_string += '\n)'
return format_string
##############################################################################
# SINGLE Series Transforms (to be used on flow_250_*_MAG)
##############################################################################
############################
# Preprocessing Transforms
############################
def compose_preprocessing(preprocessing):
"""
Compose a preprocessing transform to be performed.
Params
------
preprocessing : dict
- dictionary defining all preprocessing steps to be taken with their
values
e.g. {"FrameSelector" : "var",
"Rescale_Intensity" : [0, 255],
"Gamma_Correction" : 2.0}
Return
------
torchvision.transforms.Compose
"""
# Frame Selector
if (preprocessing["FrameSelector"]["name"] == "FrameSelectionVar"):
frame_selector = FrameSelectionVar(n_frames=preprocessing["n_frames"])
else:
frame_selector = FrameSelectionStd(n_frames=preprocessing["n_frames"],
channel=preprocessing["FrameSelector"]["channel"],
epsilon=preprocessing["FrameSelector"]["epsilon"])
# Rescale Intensity
if ("Rescale_Intensity" in preprocessing):
intensity_rescale = RescaleIntensity(out_range=tuple(preprocessing["Rescale_Intensity"]))
else:
intensity_rescale = NullTransform()
# Gamma Correction
if ("Gamma_Correction" in preprocessing):
gamma_correct = GammaCorrection(gamma=preprocessing["Gamma_Correction"]["gamma"],
intensity=preprocessing["Gamma_Correction"]["intensity"])
else:
gamma_correct = NullTransform()
return Compose([frame_selector, intensity_rescale, gamma_correct])
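# Illustrative call (the parameter values below are hypothetical):
#   preprocess = compose_preprocessing({
#       "n_frames": 6,
#       "FrameSelector": {"name": "FrameSelectionVar"},
#       "Rescale_Intensity": [0, 255],
#       "Gamma_Correction": {"gamma": 2.0, "intensity": 255},
#   })
#   series = preprocess(series)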
###########################
# Augmentation Transforms
###########################
def compose_augmentation(augmentations, seed=1234):
"""
Compose an augmentation transform to be performed.
Params
------
augmentations : dict
- dictionary defining all augmentation steps to be taken with their
values
e.g.
{
"RandomCrop" : {
"size" : 28,
"padding" : 12
},
"RandomRotation" : {
"degrees" : 25
},
"RandomTranslation" : {
"translate" : (0.2, 0.8)
},
"RandomShear" : {
"shear" : 12.5
},
"RandomAffine" : {
"degrees" : 5,
"translate" : (0.5, 0.5),
"scale" : 0.8,
"shear" : 15.0
},
"Randomize" : 0
}
Return
------
torchvision.transforms.Compose (ordered transforms)
OR
torchvision.transforms.RandomOrder (randomly ordered transforms)
"""
# Padding
if ("Pad" in augmentations):
if ("padding" in augmentations["Pad"]):
padding = augmentations["Pad"]["padding"]
else:
padding = 0
if ("fill" in augmentations["Pad"]):
fill = augmentations["Pad"]["fill"]
else:
fill = 0
if ("padding_mode" in augmentations["Pad"]):
padding_mode = augmentations["Pad"]["padding_mode"]
else:
padding_mode = 'constant'
pad = Pad(
padding=padding,
fill=fill, padding_mode=padding_mode)
else:
pad = NullAugmentation()
# Random Horizontal Flip
if ("RandomHorizontalFlip" in augmentations):
if ("probability" in augmentations["RandomHorizontalFlip"]):
probability = augmentations["RandomHorizontalFlip"]["probability"]
else:
probability = 0.5
random_horizontal = RandomHorizontalFlip(p=probability, seed=seed)
else:
random_horizontal = NullAugmentation()
# Random Vertical Flip
if ("RandomVerticalFlip" in augmentations):
if ("probability" in augmentations["RandomVerticalFlip"]):
probability = augmentations["RandomVerticalFlip"]["probability"]
else:
probability = 0.5
random_vertical = RandomVerticalFlip(p=probability, seed=seed)
else:
random_vertical = NullAugmentation()
# Random Cropping
if ("RandomCrop" in augmentations):
if ("padding" in augmentations["RandomCrop"]):
padding = augmentations["RandomCrop"]["padding"]
else:
padding = 0
random_crop = RandomCrop(
augmentations["RandomCrop"]["size"],
padding=padding, seed=seed)
else:
random_crop = NullAugmentation()
# Random Rotation
if ("RandomRotation" in augmentations):
if ("resample" in augmentations["RandomRotation"]):
resample = augmentations["RandomRotation"]["resample"]
else:
resample = False
if ("center" in augmentations["RandomRotation"]):
center = augmentations["RandomRotation"]["center"]
else:
center = None
random_rotation = RandomRotation(
augmentations["RandomRotation"]["degrees"],
resample=resample, center=center, seed=seed)
else:
random_rotation = NullAugmentation()
# Random Translation
if ("RandomTranslation" in augmentations):
if ("resample" in augmentations["RandomTranslation"]):
resample = augmentations["RandomTranslation"]["resample"]
else:
resample = False
random_translation = RandomTranslation(
augmentations["RandomTranslation"]["translate"], resample=resample,
seed=seed)
else:
random_translation = NullAugmentation()
# Random Shear
if ("RandomShear" in augmentations):
if ("resample" in augmentations["RandomShear"]):
resample = augmentations["RandomShear"]["resample"]
else:
resample = False
random_shear = RandomShear(
augmentations["RandomShear"]["shear"], resample=resample,
seed=seed)
else:
random_shear = NullAugmentation()
# Random Affine
if ("RandomAffine" in augmentations):
if ("translate" in augmentations["RandomAffine"]):
translate = augmentations["RandomAffine"]["translate"]
else:
translate = None
if ("scale" in augmentations["RandomAffine"]):
scale = augmentations["RandomAffine"]["scale"]
else:
scale = None
if ("shear" in augmentations["RandomAffine"]):
shear = augmentations["RandomAffine"]["shear"]
else:
shear = None
if ("resample" in augmentations["RandomAffine"]):
resample = augmentations["RandomAffine"]["resample"]
else:
resample = False
if ("fillcolor" in augmentations["RandomAffine"]):
fillcolor = augmentations["RandomAffine"]["fillcolor"]
else:
fillcolor = 0
random_affine = RandomAffine(
augmentations["RandomAffine"]["degrees"],
translate=translate, scale=scale, shear=shear,
resample=resample, fillcolor=fillcolor, seed=seed)
else:
random_affine = NullAugmentation()
try:
if (augmentations["Randomize"]):
if ("PixelRange" in augmentations):
return RandomOrder(
[random_crop, random_rotation, random_translation,
random_shear, random_affine])
else:
return RandomOrder(
[random_crop, random_rotation, random_translation,
random_shear, random_affine])
except: # This will fail when "Randomize" is not defined in augmentations
pass
return Compose([pad, random_horizontal, random_vertical, random_crop,
random_rotation, random_translation, random_shear,
random_affine])
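# Illustrative call (hypothetical values): with "Randomize" truthy the chosen
# transforms are applied in random order, otherwise in the fixed order above:
#   augment = compose_augmentation({
#       "RandomCrop": {"size": 28, "padding": 12},
#       "RandomRotation": {"degrees": 25},
#       "Randomize": 1,
#   }, seed=1234)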
##############################################################################
# Postprocessing Transforms
##############################################################################
def compose_postprocessing(postprocessing):
"""
Compose a postprocessing transform to be performed.
Params
------
postprocessing : dict
- dictionary defining all preprocessing steps to be taken with their
values
e.g. {"Name" : "RescaleIntensity"}
OR
{"Name" : "StdNormalize"}
Return
------
torchvision.transforms.Compose
"""
if (postprocessing["Name"] == "StdNormalize"):
postprocess = StdNormalize()
else:
postprocess = RescaleIntensity(out_range=(0.0, 1.0))
return Compose([postprocess])
##############################################################################
# MULTIPLE Series Transforms (to be used on ALL flow_250_* series)
##############################################################################
############################
# Preprocessing Transforms
############################
def compose_preprocessing_multi(preprocessing):
"""
Compose a preprocessing transform to be performed on MULTI series.
Params
------
preprocessing : dict
- dictionary defining all preprocessing steps to be taken with their
values
e.g. {"FrameSelector" : "var",
"Rescale_Intensity" : [0, 255],
"Gamma_Correction" : 2.0}
Return
------
torchvision.transforms.Compose
"""
# Frame Selector
if (preprocessing["FrameSelector"]["name"] == "FrameSelectionVarMulti"):
frame_selector = FrameSelectionVarMulti(n_frames=preprocessing["n_frames"])
# Rescale Intensity
if ("RescaleIntensityMulti" in preprocessing):
intensity_rescale = RescaleIntensityMulti(out_range=tuple(preprocessing["RescaleIntensityMulti"]))
else:
intensity_rescale = NullTransformMulti()
return ComposeMultiChannel([frame_selector, intensity_rescale])
#############################
# Postprocessing Transforms
#############################
def compose_postprocessing_multi(postprocessing):
"""
Compose a postprocessing transform to be performed on MULTI series.
Params
------
postprocessing : dict
- dictionary defining all preprocessing steps to be taken with their
values
e.g. {"Name" : "RescaleIntensity"}
OR
{"Name" : "StdNormalize"}
Return
------
torchvision.transforms.Compose
"""
if (postprocessing["Name"] == "StdNormalizeMulti"):
postprocess = StdNormalizeMulti()
else:
postprocess = RescaleIntensityMulti(out_range=(0.0, 1.0))
return ComposeMultiChannel([postprocess])
| 32.280488 | 106 | 0.552172 | 1,064 | 13,235 | 6.734962 | 0.162594 | 0.05233 | 0.018979 | 0.017583 | 0.362964 | 0.297656 | 0.221742 | 0.216718 | 0.181273 | 0.176947 | 0 | 0.008838 | 0.29898 | 13,235 | 409 | 107 | 32.359413 | 0.763527 | 0.25833 | 0 | 0.316583 | 0 | 0 | 0.132579 | 0.007391 | 0 | 0 | 0 | 0 | 0.005025 | 1 | 0.060302 | false | 0.005025 | 0.020101 | 0 | 0.150754 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00e12d4d2f525ec2940b6bf94f093ede3873009b | 2,985 | py | Python | src/clm/views/admin_cm/iso_image.py | cc1-cloud/cc1 | 8113673fa13b6fe195cea99dedab9616aeca3ae8 | [
"Apache-2.0"
] | 11 | 2015-05-06T14:16:54.000Z | 2022-02-08T23:21:31.000Z | src/clm/views/admin_cm/iso_image.py | fortress-shell/cc1 | 8113673fa13b6fe195cea99dedab9616aeca3ae8 | [
"Apache-2.0"
] | 1 | 2015-10-30T21:08:11.000Z | 2015-10-30T21:08:11.000Z | src/clm/views/admin_cm/iso_image.py | fortress-shell/cc1 | 8113673fa13b6fe195cea99dedab9616aeca3ae8 | [
"Apache-2.0"
] | 5 | 2016-02-12T22:01:38.000Z | 2021-12-06T16:56:54.000Z | # -*- coding: utf-8 -*-
# @COPYRIGHT_begin
#
# Copyright [2010-2014] Institute of Nuclear Physics PAN, Krakow, Poland
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# @COPYRIGHT_end
"""@package src.clm.views.admin_cm.iso_image
@alldecoratedby{src.clm.utils.decorators.admin_cm_log}
"""
from clm.utils.decorators import admin_cm_log, cm_request
from clm.utils.cm import CM
from clm.utils.exception import CLMException
from clm.models.user import User
@admin_cm_log(log=False, pack=True)
def get_list(cm_id, caller_id, cm_password):
"""
@cm_request{iso_image.get_list()}
@clmview_admin_cm
"""
names = {}
resp = CM(cm_id).send_request("admin_cm/iso_image/get_list/", caller_id=caller_id, cm_password=cm_password)
for img in resp['data']:
if str(img['user_id']) not in names:
try:
user = User.objects.get(pk=img['user_id'])
names[str(img['user_id'])] = user.first + " " + user.last
except:
raise CLMException('user_get')
img['owner'] = names[str(img['user_id'])]
return resp['data']
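# Note: the `names` dict memoizes user_id -> "first last" lookups so each image
# owner is resolved from the User table at most once per call.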
@admin_cm_log(log=False, pack=False)
@cm_request
def get_by_id(cm_response, **data):
"""
@clmview_admin_cm
@cm_request_transparent{iso_image.get_by_id()}
"""
return cm_response
@admin_cm_log(log=True, pack=False)
@cm_request
def delete(cm_response, **data):
"""
@clmview_admin_cm
@cm_request_transparent{iso_image.delete()}
"""
return cm_response
@admin_cm_log(log=True, pack=False)
@cm_request
def edit(cm_response, **data):
"""
@clmview_admin_cm
@cm_request_transparent{iso_image.edit()}
"""
return cm_response
@admin_cm_log(log=True, pack=False)
@cm_request
def download(cm_response, **data):
"""
@clmview_admin_cm
@cm_request_transparent{iso_image.download()}
"""
return cm_response
@admin_cm_log(log=True, pack=False)
@cm_request
def copy(cm_response, **data):
"""
@clmview_admin_cm
@cm_request_transparent{iso_image.copy()}
"""
return cm_response
@admin_cm_log(log=True, pack=False)
@cm_request
def set_public(cm_response, **data):
"""
@clmview_admin_cm
@cm_request_transparent{iso_image.set_public()}
"""
return cm_response
@admin_cm_log(log=True, pack=False)
@cm_request
def set_private(cm_response, **data):
"""
@clmview_admin_cm
@cm_request_transparent{iso_image.set_private()}
"""
return cm_response
| 25.084034 | 111 | 0.688777 | 429 | 2,985 | 4.531469 | 0.291375 | 0.072016 | 0.05144 | 0.053498 | 0.445988 | 0.419753 | 0.374486 | 0.374486 | 0.374486 | 0.374486 | 0 | 0.005363 | 0.18794 | 2,985 | 118 | 112 | 25.29661 | 0.796617 | 0.420436 | 0 | 0.444444 | 0 | 0 | 0.050193 | 0.018018 | 0 | 0 | 0 | 0 | 0 | 1 | 0.177778 | false | 0.044444 | 0.088889 | 0 | 0.444444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00e222ae0c3c60f984161de1c02f4b88fd98c046 | 3,424 | py | Python | follow_getters.py | gameoverfran/seguridad_informatica | 59ca7d749e9378f19d2d82580d1ddf3f3762ce15 | [
"MIT"
] | null | null | null | follow_getters.py | gameoverfran/seguridad_informatica | 59ca7d749e9378f19d2d82580d1ddf3f3762ce15 | [
"MIT"
] | null | null | null | follow_getters.py | gameoverfran/seguridad_informatica | 59ca7d749e9378f19d2d82580d1ddf3f3762ce15 | [
"MIT"
] | null | null | null | import json
import os
from datetime import datetime
from time import sleep
import requests
from getters_maker import get_request_hash, get_insta_query, get_cookies
def __get_follow_requests(variables, res_json, personal_path, json_piece):
# json_formatted_str = json.dumps(res_json, indent=1)
# print(json_formatted_str)
full_info_dict = dict()
file_name = json_piece + "_info.txt"
for node in res_json["data"]["user"][str(json_piece)]["edges"]:
node = node["node"]
full_info_dict["id"] = node["id"]
full_info_dict["username"] = node["username"]
full_info_dict["full_name"] = node["full_name"]
full_info_dict["is_private"] = node["is_private"]
full_info_dict["is_verified"] = node["is_verified"]
full_info_dict["followed_by_viewer"] = node["followed_by_viewer"]
full_info_dict["requested_by_viewer"] = node["requested_by_viewer"]
with open(os.path.join(personal_path, file_name), 'a+') as file:
json.dump(full_info_dict, file, indent=1)
if res_json["data"]["user"][str(json_piece)]["page_info"]["has_next_page"]:
variables["after"] = res_json["data"]["user"][str(json_piece)]["page_info"]["end_cursor"]
return True
return False
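# Note: True means Instagram's GraphQL response reported another page; the
# end_cursor stored in variables["after"] is the pagination token for the
# next request.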
def __get_user_followers(id_user, username, dir_name, session_id, param, action):
reintentos_maximos = 3
    sleep_time = 10  # seconds to wait between paginated requests
variables = {
'id': id_user,
'first': 50
}
params = {
"query_hash": param,
"variables": json.dumps(variables)
}
has_next_page = True
error = False
reintentos_actuales = 0
print("------------------------------------------------------")
while has_next_page and reintentos_actuales < reintentos_maximos:
res = requests.get(get_insta_query(), params=params, cookies=get_cookies(session_id, JSON_codification=False))
if res.status_code == 200:
print("Obteniendo " + action + " de " + username + "...")
reintentos_actuales = 0
try:
if action == "followers":
has_next_page = __get_follow_requests(variables, res.json(), dir_name, 'edge_followed_by')
else:
has_next_page = __get_follow_requests(variables, res.json(), dir_name, 'edge_follow')
if has_next_page:
params["variables"] = json.dumps(variables)
sleep(sleep_time)
except Exception as err:
print("Se produjo un error en get_user_followers: ", err)
reintentos_actuales = reintentos_maximos
error = True
else:
reintentos_actuales += 1
sleep(sleep_time)
if not error:
print("Todos los " + action + " de " + username + " han sido obtenidos")
return reintentos_actuales < reintentos_maximos
def get_user_followers_ing(id_user, username, dir_name, session_id, boolean_followes, boolean_following):
a = False
b = False
if boolean_followes:
a = __get_user_followers(id_user, username, dir_name, session_id, get_request_hash()['followers'], 'followers')
if boolean_following:
b = __get_user_followers(id_user, username, dir_name, session_id, get_request_hash()['following'], 'following')
return a and b
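# Illustrative top-level call (argument values are hypothetical):
#   ok = get_user_followers_ing(id_user, "some_username", "out_dir",
#                               session_id, True, True)
# appends follower/following records to edge_followed_by_info.txt and
# edge_follow_info.txt inside out_dir.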
| 40.761905 | 120 | 0.618283 | 413 | 3,424 | 4.769976 | 0.268765 | 0.036548 | 0.054822 | 0.034518 | 0.22132 | 0.22132 | 0.204569 | 0.175635 | 0.175635 | 0.140102 | 0 | 0.005523 | 0.259638 | 3,424 | 83 | 121 | 41.253012 | 0.771598 | 0.024241 | 0 | 0.085714 | 0 | 0 | 0.15335 | 0.016595 | 0 | 0 | 0 | 0 | 0 | 1 | 0.042857 | false | 0 | 0.085714 | 0 | 0.185714 | 0.057143 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00e2c2a5ff0187278d94fe3397052c65091ac840 | 4,895 | py | Python | IS31FL3741_DisplayIO/sports_glasses/code.py | albinger/Adafruit_Learning_System_Guides | 4fe2da261fe5d1ca282b86bd3b93ee1466346fa7 | [
"MIT"
] | null | null | null | IS31FL3741_DisplayIO/sports_glasses/code.py | albinger/Adafruit_Learning_System_Guides | 4fe2da261fe5d1ca282b86bd3b93ee1466346fa7 | [
"MIT"
] | null | null | null | IS31FL3741_DisplayIO/sports_glasses/code.py | albinger/Adafruit_Learning_System_Guides | 4fe2da261fe5d1ca282b86bd3b93ee1466346fa7 | [
"MIT"
] | null | null | null | # SPDX-FileCopyrightText: 2022 Mark Komus
#
# SPDX-License-Identifier: MIT
import random
import time
import board
import busio
import digitalio
import displayio
import framebufferio
import is31fl3741
from adafruit_is31fl3741.is31fl3741_PixelBuf import IS31FL3741_PixelBuf
from adafruit_is31fl3741.led_glasses_map import (
glassesmatrix_ledmap_no_ring,
left_ring_map_no_inner,
right_ring_map_no_inner,
)
from adafruit_display_text import label
from adafruit_bitmap_font import bitmap_font
from adafruit_led_animation.animation.chase import Chase
from adafruit_debouncer import Debouncer
# List of possible messages to display. Randomly chosen
MESSAGES = (
"GO TEAM GO",
"WE ARE NUMBER 1",
"I LIKE THE HALFTIME SHOW",
)
# Colors used for the text and ring lights
BLUE_TEXT = (0, 20, 255)
BLUE_RING = (0, 10, 120)
YELLOW_TEXT = (220, 210, 0)
YELLOW_RING = (150, 140, 0)
def ScrollMessage(text, color, repeat):
"""Scroll a message across the eyeglasses a set number of times"""
text_area.text = text
text_area.color = color
# Start the message just off the side of the glasses
x = display.width
text_area.x = x
# Determine the width of the message to scroll
width = text_area.bounding_box[2]
for _ in range(repeat):
while x != -width:
x = x - 1
text_area.x = x
# Update the switch and if it has been pressed abort scrolling this message
switch.update()
if not switch.value:
return
time.sleep(0.025) # adjust to change scrolling speed
x = display.width
def Score(text, color, ring_color, repeat):
"""Show a scrolling text message and animated effects on the eye rings.
The messages scrolls left to right, then right to left while the eye rings
are animated using the adafruit_led_animation library."""
# Set up a led animation chase sequence for both eyelights
chase_left = Chase(left_eye, speed=0.11, color=ring_color, size=8, spacing=4)
chase_right = Chase(right_eye, speed=0.07, color=ring_color, size=8, spacing=4)
text_area.text = text
text_area.color = color
x = display.width
text_area.x = x
width = text_area.bounding_box[2]
for _ in range(repeat):
# Scroll the text left and animate the eyes
while x != -width:
x = x - 1
text_area.x = x
chase_left.animate()
chase_right.animate()
time.sleep(0.008) # adjust to change scrolling speed
# Scroll the text right and animate the eyes
while x != display.width:
x = x + 1
text_area.x = x
chase_left.animate()
chase_right.animate()
time.sleep(0.008) # adjust to change scrolling speed
# Remove any existing displays
displayio.release_displays()
# Set up the top button used to trigger a special message when pressed
switch_pin = digitalio.DigitalInOut(board.SWITCH)
switch_pin.direction = digitalio.Direction.INPUT
switch_pin.pull = digitalio.Pull.UP
switch = Debouncer(switch_pin)
# Initialize the LED Glasses
#
# In this example scale is set to True. When True the logical display is
# three times the physical display size and scaled down to allow text to
# look more natural for small display sizes. Hence the display is created
# as 54x15 when the physical display is 18x5.
#
i2c = busio.I2C(board.SCL, board.SDA, frequency=1000000)
is31 = is31fl3741.IS31FL3741(i2c=i2c)
is31_framebuffer = is31fl3741.IS31FL3741_FrameBuffer(
is31, 54, 15, glassesmatrix_ledmap_no_ring, scale=True, gamma=True
)
display = framebufferio.FramebufferDisplay(is31_framebuffer, auto_refresh=True)
# Set up the left and right eyelight rings
# init is set to False as the IS31FL3741_FrameBuffer has already initialized the IS31FL3741 driver
left_eye = IS31FL3741_PixelBuf(
is31, left_ring_map_no_inner, init=False, auto_write=False
)
right_eye = IS31FL3741_PixelBuf(
is31, right_ring_map_no_inner, init=False, auto_write=False
)
# Dim the display. Full brightness is BRIGHT
is31_framebuffer.brightness = 0.2
# Load the font to be used - scrolly only has upper case letters
font = bitmap_font.load_font("/fonts/scrolly.bdf")
# Set up the display elements
text_area = label.Label(font, text="", color=(0, 0, 0))
text_area.y = 8
group = displayio.Group()
group.append(text_area)
display.show(group)
while True:
# Run the debouncer code to get the updated switch value
switch.update()
# If the switch has been pressed interrupt start a special message
if not switch.value:
Score("SCORE!", YELLOW_TEXT, BLUE_RING, 2)
# If the switch is not pressed pick a random message and scroll it
left_eye.fill(BLUE_RING)
right_eye.fill(BLUE_RING)
left_eye.show()
right_eye.show()
ScrollMessage(random.choice(MESSAGES), YELLOW_TEXT, 2)
| 31.178344 | 98 | 0.712972 | 720 | 4,895 | 4.709722 | 0.304167 | 0.033029 | 0.01327 | 0.014745 | 0.192274 | 0.170451 | 0.158655 | 0.129165 | 0.109112 | 0.08729 | 0 | 0.046814 | 0.214505 | 4,895 | 156 | 99 | 31.378205 | 0.835111 | 0.342186 | 0 | 0.3125 | 0 | 0 | 0.023021 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.020833 | false | 0 | 0.145833 | 0 | 0.177083 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00e3f141945c6b600fe68b26e38b8881cab5f607 | 1,014 | py | Python | counter.py | Ruthenic/word-predictor | d8ba87e4a0e055f7d4fe529d038a87254b7b234a | [
"Unlicense"
] | 4 | 2021-01-14T20:41:46.000Z | 2021-11-21T16:50:17.000Z | counter.py | Ruthenic/word-predictor | d8ba87e4a0e055f7d4fe529d038a87254b7b234a | [
"Unlicense"
] | null | null | null | counter.py | Ruthenic/word-predictor | d8ba87e4a0e055f7d4fe529d038a87254b7b234a | [
"Unlicense"
] | null | null | null | counts = []
arrin = {}
printed=0
with open('model.txt') as f:
for line in f:
doesContainAlready = False
line = line.replace('\n', '')
pre = line.split(';')[0]
post = line.split(';')[1]
n=0
#for i in counts:
# if i.startswith(line):
# doesContainAlready = True
# break
# n+=1 #good riddance (maybe)
try:
n = int(arrin.get(line))
doesContainAlready = True
except:
doesContainAlready = False
if doesContainAlready == False:
counts.append(line + ';1')
arrin[line] = len(counts) - 1
elif doesContainAlready == True:
try:
counts[n] = line + ';' + str(int(counts[n].split(';')[2]) + 1)
except:
pass
printed+=1
print(printed)
print(counts)
with open('counts.txt', 'w') as f:
for i in counts:
i = i.replace('\n', '')
f.write(i + '\n')
| 28.166667 | 78 | 0.467456 | 110 | 1,014 | 4.309091 | 0.381818 | 0.14557 | 0.025316 | 0.050633 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016103 | 0.387574 | 1,014 | 35 | 79 | 28.971429 | 0.747182 | 0.116371 | 0 | 0.2 | 0 | 0 | 0.035955 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.033333 | 0 | 0 | 0 | 0.133333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00edf6e571545af5f6320b841e38389297b22abd | 680 | py | Python | Utils/Submission/Submission.py | MaurizioFD/recsys-challenge-2020-twitter | 95dc024fb4f8777aa62e1304536daece640428de | [
"Apache-2.0"
] | 44 | 2020-07-09T11:31:17.000Z | 2022-03-04T05:50:48.000Z | Utils/Submission/Submission.py | kiminh/recsys-challenge-2020-twitter | 567f0db40be7db3d21c360f2ca6cdf2addc7c698 | [
"Apache-2.0"
] | 3 | 2020-10-02T18:55:21.000Z | 2020-10-13T22:13:58.000Z | Utils/Submission/Submission.py | kiminh/recsys-challenge-2020-twitter | 567f0db40be7db3d21c360f2ca6cdf2addc7c698 | [
"Apache-2.0"
] | 9 | 2020-08-08T14:55:59.000Z | 2021-09-06T09:17:03.000Z | import RootPath
def create_submission_file(tweets, users, predictions, output_file):
# Preliminary checks
assert len(tweets) == len(users), f"length are different tweets -> {len(tweets)}, and users -> {len(users)} "
assert len(users) == len(predictions), f"length are different users -> {len(users)}, and predictions -> {len(predictions)} "
assert len(tweets) == len(predictions), f"length are different tweets -> {len(tweets)}, and predictions -> {len(predictions)} "
file = open(RootPath.get_root().joinpath(output_file), "w")
for i in range(len(tweets)):
file.write(f"{tweets[i]},{users[i]},{round(predictions[i], 4)}\n")
file.close() | 40 | 131 | 0.670588 | 90 | 680 | 5.011111 | 0.366667 | 0.099778 | 0.066519 | 0.126386 | 0.268293 | 0.268293 | 0.16408 | 0.16408 | 0 | 0 | 0 | 0.001748 | 0.158824 | 680 | 17 | 132 | 40 | 0.786713 | 0.026471 | 0 | 0 | 0 | 0 | 0.438729 | 0.068079 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.111111 | false | 0 | 0.111111 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00f08386ad2af26c1e6b09fd6ddb6558568930d8 | 4,327 | py | Python | Software/Child Drone/Control/picamapriltag.py | sangminoh715/SKARS-Capstone-Project | 87cfadd1a650d2f492b38f87ab42c41641a06dd0 | [
"MIT"
] | null | null | null | Software/Child Drone/Control/picamapriltag.py | sangminoh715/SKARS-Capstone-Project | 87cfadd1a650d2f492b38f87ab42c41641a06dd0 | [
"MIT"
] | null | null | null | Software/Child Drone/Control/picamapriltag.py | sangminoh715/SKARS-Capstone-Project | 87cfadd1a650d2f492b38f87ab42c41641a06dd0 | [
"MIT"
] | null | null | null | import time
import picamera
import apriltag
import cv2
import numpy as np
import math
import threading
from parameters import Parameters
# Create a pool of image processors
done = False
lock = threading.Lock()
pool = []
np.set_printoptions(suppress=True)
##########################################################################
class ImageProcessor(threading.Thread):
def __init__(self, width, height, parameters):
super(ImageProcessor, self).__init__()
self.height = height
self.width = width
self.detector = apriltag.Detector()
self.tag_size = 3.0
self.parameters = (0,0,0,0) #x,y,z,r
self.paramstruct = parameters;
# self.paramstruct = Parameters();
fov_x = 62.2*math.pi/180
fov_y = 48.8*math.pi/180
fx = self.width/(2*math.tan(fov_x/2))
fy = self.height/(2*math.tan(fov_y/2))
self.camera_params = (fx, fy, width/2, height/2)
self.img = np.empty((self.width * self.height * 3,),dtype=np.uint8)
self.event = threading.Event()
self.terminated = False
self.start()
def run(self):
# This method runs in a separate thread
global done
while not self.terminated:
# Wait for an image to be written to the stream
if self.event.wait(1):
try:
t = time.time()
self.img = self.img.reshape((self.height,self.width,3))
self.img = cv2.cvtColor(self.img,cv2.COLOR_BGR2GRAY)
results = self.detector.detect(self.img)
for i, detection in enumerate(results):
pose, e0, e1 = self.detector.detection_pose(detection,self.camera_params,self.tag_size)
mat = np.array(pose)
T = mat[0:3,3]
# print("MAT:", mat)
rz = -math.atan2(mat[1,0],mat[0,0])
lock.acquire()
self.paramstruct.add(np.array(mat[0:3,3]), rz, t)
lock.release()
if results == []:
lock.acquire()
self.paramstruct.softReset()
lock.release()
finally:
# Reset the stream and event
self.img = np.empty((self.width * self.height * 3,),dtype=np.uint8)
self.event.clear()
# Return ourselves to the pool
with lock:
pool.append(self)
class PiCam(object):
def __init__(self, multi, parameters):
self.width = 160 #640
self.height = 128 #480
self.params = parameters
self.multi = multi
global pool
if (multi):
pool = [ImageProcessor(self.width,self.height,self.params) for i in range(8)]
else:
pool = [ImageProcessor(self.width,self.height,self.params) for i in range(1)]
def streams(self):
global done
global lock
while not done:
with lock:
if pool:
processor = pool.pop()
else:
processor = None
if processor:
yield processor.img
processor.event.set()
else:
# When the pool is starved, wait a while for it to refill
time.sleep(0.1)
def start(self):
with picamera.PiCamera() as camera:
width = self.width
height = self.height
camera.sensor_mode = 4
camera.framerate=30
camera.exposure_mode = 'sports'
camera.resolution = (self.width, self.height)
time.sleep(2)
camera.capture_sequence(self.streams(), 'bgr', use_video_port=True)
# Shut down the processors in an orderly fashion
while pool:
with lock:
processor = pool.pop()
processor.terminated = True
processor.join()
#######################
if __name__ == "__main__":
paramstruct = Parameters()
cam = PiCam(True, paramstruct)
cam.start()
| 30.048611 | 111 | 0.502196 | 473 | 4,327 | 4.51797 | 0.327696 | 0.046327 | 0.030416 | 0.044455 | 0.105756 | 0.105756 | 0.105756 | 0.105756 | 0.105756 | 0.105756 | 0 | 0.025009 | 0.380864 | 4,327 | 143 | 112 | 30.258741 | 0.772676 | 0.078807 | 0 | 0.16 | 0 | 0 | 0.004387 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0.08 | 0 | 0.15 | 0.01 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00f0d88feb828efc6d566faea34795c42d5f74d4 | 9,913 | py | Python | Listing.py | l1mc/consulting_project-Shui_On_Land | 60522160607d940d4e566fcb922d7c49bbf6a83c | [
"MIT"
] | 2 | 2020-09-25T02:35:28.000Z | 2020-10-25T13:11:38.000Z | Listing.py | l1mc/Consulting-Service-for-Shui-On-Land | 60522160607d940d4e566fcb922d7c49bbf6a83c | [
"MIT"
] | null | null | null | Listing.py | l1mc/Consulting-Service-for-Shui-On-Land | 60522160607d940d4e566fcb922d7c49bbf6a83c | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Mon Aug 17 14:28:27 2020
@author: Mingcong Li
"""
import difflib  # used to compute string similarity
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import copy  # used for deep copies
import matplotlib.ticker as mtick  # used to change axis tick formats
plt.rcParams['font.sans-serif'] = ['FangSong']  # set the default font (supports Chinese labels)
plt.rcParams['axes.unicode_minus'] = False  # keep minus signs from rendering as boxes in saved figures
# Helper that cross-tabulates listings by year and industry
def pivot1(listn, version):
    # csv_data[csv_data['area'].isna()]
    subset = csv_data[csv_data['area'].isin(listn)]
    subset['list_date_short'] = subset['list_date'].apply(str).str[0:4]
    global result
    result = pd.crosstab(subset.list_date_short, subset.industry, margins=True)
    result.to_excel(r'D:\桌面的文件夹\实习\睿丛\output_%s.xls' % version)
    return
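# Note (added for clarity): pivot1 reads the module-level csv_data loaded below
# and publishes its crosstab through the module-level `result`, so it must be
# called only after the data-loading step.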
# The three geographic levels used for the tallies
list1 = ['南京', '苏州', '无锡', '常州', '镇江', '扬州', '泰州', '南通', '淮安', '连云港', '盐城', '徐州', '宿迁', '杭州', '宁波', '温州', '绍兴', '湖州', '嘉兴', '金华', '衢州', '台州', '丽水', '舟山', '合肥 ', '马鞍山', '淮北', '宿州', '阜阳', '蚌埠', '淮南', '滁州', '六安', '巢湖', '芜湖', '亳州', '安庆', '池州', '铜陵', '宣城', '黄山', '上海', '江苏', '安徽', '浙江']
list2 = ['南京', '苏州', '无锡', '常州', '镇江', '扬州', '泰州', '南通', '淮安', '连云港', '盐城', '徐州', '宿迁', '杭州', '宁波', '温州', '绍兴', '湖州', '嘉兴', '金华', '衢州', '台州', '丽水', '舟山', '上海', '江苏', '浙江']
list3 = ['上海']
# Load the data
csv_file = r'D:\桌面的文件夹\实习\睿丛\分年份、分行业统计长三角地区当年上市数量\df_stock.csv'
csv_data = pd.read_csv(csv_file, low_memory=False)  # low_memory=False avoids the mixed-dtype warning
print(csv_data)
csv_data.info()
csv_data.head()
csv_data.describe()
csv_data.head(50)
# Run the cross-tabulation at all three geographic levels
pivot1(list1, 'list1')
pivot1(list2, 'list2')
pivot1(list3, 'list3')
result  # inspect the cross-tabulation result
# Clean up the industry names
# Prepare the Shenwan (SWS) industry classification data
Tpye = pd.read_excel(r'D:\桌面的文件夹\实习\睿丛\分年份、分行业统计长三角地区当年上市数量\申银万国行业分类标准 .xlsx', sheet_name='处理', header=None)  # load the industry classification
type1 = Tpye.sort_values(1, axis=0)  # sort by industry code, ascending
type1 = type1.drop_duplicates(subset=0, keep='first', inplace=False, ignore_index=False)  # drop duplicate rows: when a parent and a child category share a name, keep only the parent
type1 = type1.rename(columns={0: 'industry'})  # name the industry-name column
type1 = type1.rename(columns={1: 'code'})  # name the industry-code column
type1 = type1.set_index("industry")  # make the industry name the row label for the later join
print(type1.index.is_unique)  # confirms the row labels are unique
type1
# Insert an empty column at the front to hold the matching results
test = result.T.iloc[0:79, :]  # drop the "All" row from the industry types
col_name = test.columns.tolist()  # pull all column names of the frame into a list
col_name.insert(0, 'new')  # insert a column named "new" at position 0; it starts out as all NaN
test = test.reindex(columns=col_name)  # DataFrame.reindex() rebuilds the row/column index
test
# Match the Shenwan classification onto the original categories
test.iloc[:, 0] = test.index.map(lambda x: difflib.get_close_matches(x, type1.index, cutoff=0.3, n=1)[0])  # map() applies a function to each element of an iterable
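# Illustrative behaviour of the fuzzy match above (the example string is
# hypothetical):
#   difflib.get_close_matches('银行业', type1.index, cutoff=0.3, n=1)
# returns the single Shenwan industry name most similar to '银行业', or raises
# IndexError via [0] if nothing clears the 0.3 similarity cutoff.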
test.head(60)  # inspect the matching results
test.iloc[61:81, :]  # inspect the matching results
test.to_excel(r'D:\桌面的文件夹\实习\睿丛\行业分类匹配结果.xls')  # export the matches and fix the incorrect ones by hand in Excel; 11 items needed manual adjustment
# Convert the industry names into the Shenwan naming system.
# Re-import and tidy up
data = pd.read_excel(r'D:\桌面的文件夹\实习\睿丛\行业分类匹配结果_修改后.xls', index_col='industry')  # reload the industry tallies with the corrected matches
data = data.groupby(data.index).sum()  # sum duplicated industries, since concat requires a unique index; note that child and parent industries are still intermixed here
# Join
outcome = pd.concat([data, type1], axis=1, join='inner', ignore_index=False)  # join on the index (works for object dtype); 'inner' keeps the intersection, 'outer' the union; data's index is a subset of type1's, so 'inner' is safe; axis=1 joins horizontally
# Rewrite the industry codes
outcome['code'] = outcome['code'].apply(str).str[0:2].map(lambda x: x + '0000')  # map every code to its level-1 industry code, i.e. the last four digits become 0
outcome['code'] = outcome['code'].astype('int64')
# Build the new index
outcome1 = outcome.set_index('code')
outcome1 = outcome1.groupby(outcome1.index).sum()
type2 = type1.reset_index().set_index('code')  # turn the former 'industry' index back into an ordinary column
outcome2 = pd.concat([outcome1, type2], axis=1, join='inner', ignore_index=False)  # attach the Chinese level-1 Shenwan names; the index dtypes must match on both sides or the join comes back empty
result = outcome2.set_index('industry').T
row_name = result.index.tolist()  # pull all row labels of the frame into a list
type(row_name[1])  # confirm the elements are strings
row_name.insert(1, '1991')  # insert a row labelled '1991'; the earlier cross-tabulation drops years with no listings
row_name.insert(15, '2005')
row_name.insert(-8, '2013')
result = result.reindex(index=row_name)  # DataFrame.reindex() rebuilds the row/column index
result.iloc[[1, 15, -9], :] = 0.0  # fill the NaN rows with zeros
result  # result is now the fully tidied master data set
# The data tidying is done at this point.
# The analysis starts below
nameDF = pd.DataFrame()  # empty frame that stores the analysis type and industry names
# Pull the per-industry listing totals, used by analyses 1 and 2
industry = result[31:32]  # take the final "All" row of totals
# 1. The 10 industries with the most listings
# Extract
temp1 = industry.T.sort_values('All', ascending=False, inplace=False)[0:11]  # industry names plus listing counts
temp1
# Plot
title = 'Top 10 industries by number of listings over the past 30 years'  # set the title separately so it can also be stored in nameDF
fig1 = temp1.plot(kind='bar', fontsize=16, figsize=(14, 14*0.618), title=title, rot=0, legend='')  # style the chart
fig1.axes.title.set_size(20)  # size the title
# Save
fig1.figure.savefig(r'D:\桌面的文件夹\实习\睿丛\%s.png' % title)  # save the figure
type(temp1)  # check temp1's type
stri = ','  # separator
seq = temp1.index.tolist()  # get the industry names
industryName = stri.join(seq)  # join all list elements into one string
s = pd.Series([title, industryName])  # keep the title and industry names
nameDF = nameDF.append(s, ignore_index=True)  # append to the frame
# 2. The 10 industries with the fewest listings. This code is more reusable than block 1.
# Extract
temp2 = industry.T.sort_values('All', ascending=True, inplace=False)[0:11].sort_values('All', ascending=False, inplace=False)  # same rule as block 1: take the bottom 10 ascending, then re-sort them descending
# Plot
title = 'Bottom 10 industries by number of listings over the past 30 years'  # set the title separately so it can also be stored in nameDF
fig2 = temp2.plot(kind='bar', fontsize=16, figsize=(14, 14*0.618), title=title, rot=0, legend='')  # style the chart
fig2.axes.title.set_size(20)  # size the title
fmt = '%.0f'
yticks = mtick.FormatStrFormatter(fmt)
fig2.yaxis.set_major_formatter(yticks)  # no decimal places; every number in the frame is a float
# Save
fig2.figure.savefig(r'D:\桌面的文件夹\实习\睿丛\%s.png' % title)  # save the figure
seq = temp2.index.tolist()  # get the industry names
industryName = stri.join(seq)  # join all list elements into one string
s = pd.Series([title, industryName])  # keep the title and industry names
nameDF = nameDF.append(s, ignore_index=True)  # append to the frame
# 3. Pull the total number of listings per year
# Extract
result['All'] = result.apply(lambda x: x.sum(), axis=1)  # add a per-row total; the next step extracts exactly this column
# Plot
title = 'Annual number of listings in Shanghai over the past 30 years'
temp3 = result.iloc[:, -1].drop(['All'])
fig3 = temp3.plot(kind='line', fontsize=16, figsize=(14, 14*0.618), use_index=True, title=title, rot=0)
fig3.axes.title.set_size(20)
# Save
fig3.figure.savefig(r'D:\桌面的文件夹\实习\睿丛\%s.png' % title)  # save the figure
# Merge years together to smooth out the fluctuations
result4 = result.iloc[:-1, :]
# 4. Five-year buckets, absolute counts
i = 0
data_new = pd.DataFrame()
while i < (result.shape[0] - 1):
    try:
        data_new = data_new.append(result4.iloc[i, :] + result4.iloc[i+1, :] + result4.iloc[i+2, :] + result4.iloc[i+3, :] + result4.iloc[i+4, :], ignore_index=True)
    except:
        # fewer than five years left: skip the incomplete bucket
        i += 5
    i += 5
s = data_new.sum(axis=0)
data_new = data_new.append(s, ignore_index=True)
data_new
# Extract
title = 'Listing counts of the 12 industries with the most listings'
temp4 = data_new.T.sort_values(by=[6], ascending=False, inplace=False).iloc[0:12, :-1].T
# Plot
fig4 = temp4.plot(kind='line', subplots=True, sharex=True, sharey=True, fontsize=16, layout=(3, 4), figsize=(18, 18*0.618), use_index=True, title=title, legend=True, rot=90)
labels = ['1990-1994', '1995-1999', '2000-2004', '2005-2009', '2010-2014', '2015-2019']  # bucket labels
x = np.arange(len(labels))  # the label locations
fig4[1, 1].set_xticks(x)  # set the ticks
fig4[1, 1].set_xticklabels(labels)  # set the tick labels
fmt = '%.0f'
yticks = mtick.FormatStrFormatter(fmt)
fig4[1, 1].yaxis.set_major_formatter(yticks)  # no decimal places; every number in the frame is a float
# Save
fig4[1, 1].figure.savefig(r'D:\桌面的文件夹\实习\睿丛\%s.png' % title)  # save the figure; fig4 is an ndarray of AxesSubplot objects, so grabbing any one of them reaches the shared figure -- here the subplot in row 2, column 2
fig4[0, 0].figure.show()
seq = temp4.T.index.tolist()  # get the industry names
industryName = stri.join(seq)  # join all list elements into one string
s = pd.Series([title, industryName])  # keep the title and industry names
nameDF = nameDF.append(s, ignore_index=True)  # append to the frame
# 5. Five-year buckets, relative shares
# Prepare the totals
data_reg = copy.deepcopy(data_new)  # a deep copy keeps the original frame intact; its layout is about to change completely, and re-running the analysis against a mutated frame would break
data_reg['All'] = data_reg.sum(axis=1)  # total listings across all industries per bucket, placed in the last column; the per-industry totals already exist in row 6
# Compute the relative shares
data_reg = data_reg.div(data_reg.iloc[:, -1], axis=0).iloc[:, :-1]  # the regression data set, expressed as shares
# Extract
title = 'Listing shares of the 12 industries with the most listings'
temp5 = data_reg.T.sort_values(by=[6], ascending=False, inplace=False).iloc[0:12, :-1].T
# Plot
fig5 = temp5.plot(kind='line', subplots=True, sharex=True, sharey=True, fontsize=16, layout=(3, 4), figsize=(18, 18*0.618), use_index=True, title=title, legend=True, rot=90)
labels = ['1990-1994', '1995-1999', '2000-2004', '2005-2009', '2010-2014', '2015-2019']  # bucket labels
x = np.arange(len(labels))  # the label locations
fig5[1, 1].set_xticks(x)  # set the x-axis ticks
fig5[1, 1].set_xticklabels(labels)  # set the x-axis tick labels
fig5[1, 1].yaxis.set_major_formatter(mtick.PercentFormatter(1, 0))  # format the y axis as whole-number percentages; the first argument is the value mapped to 100%, the second the number of decimals
# Save
fig5[1, 1].figure.savefig(r'D:\桌面的文件夹\实习\睿丛\%s.png' % title)  # save the figure; as with fig4, any subplot of the ndarray reaches the shared figure -- here row 2, column 2
fig5[0, 0].figure.show()
seq = temp5.T.index.tolist()  # get the industry names
industryName = stri.join(seq)  # join all list elements into one string
s = pd.Series([title, industryName])  # keep the title and industry names
nameDF = nameDF.append(s, ignore_index=True)  # append to the frame
# Classify industries by fitting a regression to each trend
# Set up X, Y, and the model
Y_train = data_reg.iloc[:-1, :].T
X_train = pd.DataFrame(np.arange(6).reshape((-1, 1)))
from sklearn.linear_model import LinearRegression
linreg = LinearRegression()
# Fit one line per industry
i = 0
box = np.array([])
while i < (Y_train.shape[0]):
    print(i)
    linreg.fit(X_train, Y_train.iloc[i, :])
    i += 1
    box = np.hstack((box, linreg.coef_))
# Fitted slopes
print(box)
Y_train[6] = box
# Plot
# The 15 fastest-growing industries
temp11 = Y_train.sort_values(by=[6], ascending=False, inplace=False).iloc[:15, :-1].T
fig11 = temp11.plot(kind='line', ax=None, subplots=True, sharex=True, sharey=True, fontsize=16, layout=(3, 5), figsize=(18, 18*0.618), use_index=True, title='The 15 fastest-growing industries', grid=None, legend=True, style=None, logx=False, logy=False, loglog=False, xticks=None, yticks=None, xlim=None, ylim=None, rot=0, xerr=None, secondary_y=False, sort_columns=False)
# The 15 fastest-declining industries
temp12 = Y_train.sort_values(by=[6], ascending=True, inplace=False).iloc[:15, :-1].T
fig12 = temp12.plot(kind='line', ax=None, subplots=True, sharex=True, sharey=True, fontsize=16, layout=(3, 5), figsize=(18, 18*0.618), use_index=True, title='The 15 fastest-declining industries', grid=None, legend=True, style=None, logx=False, logy=False, loglog=False, xticks=None, yticks=None, xlim=None, ylim=None, rot=0, xerr=None, secondary_y=False, sort_columns=False)
| 39.181818 | 346 | 0.71381 | 1,478 | 9,913 | 4.711096 | 0.317321 | 0.014218 | 0.011489 | 0.014362 | 0.454545 | 0.408014 | 0.378142 | 0.328163 | 0.304323 | 0.298147 | 0 | 0.054399 | 0.098759 | 9,913 | 252 | 347 | 39.337302 | 0.724983 | 0.219409 | 0 | 0.164557 | 0 | 0 | 0.120821 | 0.0408 | 0 | 0 | 0 | 0 | 0 | 1 | 0.006329 | false | 0 | 0.044304 | 0 | 0.056962 | 0.025316 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
00f51b27af02d7780983835d377f6e4f85ccb09f | 904 | py | Python | Algorithms/Mergesort/python/mergesort.py | Ritik7042/Data-Structures-Algorithms-Hacktoberfest-2K19 | 47550ec865e215aa7f577a4de40aac431af0d52d | [
"MIT"
] | 51 | 2019-09-30T18:49:55.000Z | 2020-11-26T10:23:15.000Z | Algorithms/Mergesort/python/mergesort.py | rvk7895/Data-Structures-Algorithms-Hacktoberfest-2K19 | 52beb5da65263bdea0d27070aa690e0ed5966139 | [
"MIT"
] | 208 | 2019-09-30T17:44:05.000Z | 2019-12-13T13:02:38.000Z | Algorithms/Mergesort/python/mergesort.py | rvk7895/Data-Structures-Algorithms-Hacktoberfest-2K19 | 52beb5da65263bdea0d27070aa690e0ed5966139 | [
"MIT"
] | 299 | 2019-09-30T14:49:35.000Z | 2021-10-02T17:06:56.000Z | #!/usr/bin/env python3
def mergesort(unsorted_list):
n = len(unsorted_list)
if n > 1:
m = n // 2
left = unsorted_list[:m]
right = unsorted_list[m:]
mergesort(left)
mergesort(right)
merge(unsorted_list, left, right)
def merge(original, left, right):
i = j = k = 0
nleft = len(left)
nright = len(right)
while i < nleft and j < nright:
if left[i] < right[j]:
original[k] = left[i]
i += 1
else:
original[k] = right[j]
j += 1
k += 1
while i < nleft:
original[k] = left[i]
i += 1
k += 1
while j < nright:
original[k] = right[j]
j += 1
k += 1
if __name__ == '__main__':
example_list = [-1, 1, 1, 0, -2, 199, 204, 1000, -400, 6]
print(example_list)
mergesort(example_list)
print(example_list)
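    # Extra sanity check (an addition, not part of the original file): the
    # in-place mergesort should agree with Python's built-in sorted().
    import random
    sample = [random.randint(-1000, 1000) for _ in range(100)]
    expected = sorted(sample)
    mergesort(sample)
    assert sample == expected, "mergesort disagrees with sorted()"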
| 23.179487 | 61 | 0.495575 | 122 | 904 | 3.532787 | 0.295082 | 0.139211 | 0.020882 | 0.064965 | 0.162413 | 0.162413 | 0.088167 | 0.088167 | 0 | 0 | 0 | 0.052817 | 0.371681 | 904 | 38 | 62 | 23.789474 | 0.705986 | 0.02323 | 0 | 0.382353 | 0 | 0 | 0.00907 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0 | 0 | 0.058824 | 0.058824 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
da949dce61a4080b2cdaf17c3d09ee56ed83cdfc | 3,538 | py | Python | coef.py | jooohhn/venezuelan-economic-analysis | b61559c385677f7023240655ae636a732b0d21dd | [
"MIT"
] | 2 | 2019-05-11T06:02:01.000Z | 2019-05-14T10:09:22.000Z | coef.py | jooohhn/venezuelan-economic-analysis | b61559c385677f7023240655ae636a732b0d21dd | [
"MIT"
] | null | null | null | coef.py | jooohhn/venezuelan-economic-analysis | b61559c385677f7023240655ae636a732b0d21dd | [
"MIT"
] | null | null | null | import csv
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# Correlation coef for oil prices to GDP per capita
dfOil = pd.read_excel('./data/Oil Prices.xls', dtype={'Date': int, 'Value': float})
dfOil = dfOil.rename(columns={'Date': 'Year', 'Value': 'Oil Price per Barrel (USD)'})
dfGdp = pd.read_csv('./data/Per capita GDP at current prices - US Dollars.csv', header=0)
dfGdp = dfGdp.sort_values('Year', ascending=True)
dfGdp = dfGdp[dfGdp['Country or Area']=='Venezuela (Bolivarian Republic of)']
dfGdp = dfGdp[['Year', 'Value']]
dfGdp = dfGdp.rename(columns={'Value': 'GDP per Capita (USD)'})
dfOil = dfOil.set_index('Year')
dfGdp = dfGdp.set_index('Year')
# print(dfGdp)
dfJoin = dfOil.join(dfGdp, on='Year', how='inner', lsuffix=' - Oil Price', rsuffix='- GDP per Capita')
print('------------------------------------------------------------------------------------------------')
print(dfJoin.corr(method='pearson'))
print('')
sns.lmplot(x='Oil Price per Barrel (USD)',y='GDP per Capita (USD)', data=dfJoin, fit_reg=True)
plt.savefig('./Oil Price to GDP per Capita')
# Correlation coef for GDP and inflation
dfGdp = pd.read_csv('./data/Per capita GDP at current prices - US Dollars.csv', header=0)
dfGdp = dfGdp.sort_values('Year', ascending=True)
dfGdp = dfGdp[dfGdp['Country or Area']=='Venezuela (Bolivarian Republic of)']
dfGdp = dfGdp[['Year', 'Value']]
dfGdp = dfGdp.rename(columns={'Value': 'GDP per Capita (USD)'})
dfGdp = dfGdp.set_index('Year')
dfInflation = pd.read_csv('./data/Inflation.csv', header=0)
dfInflation = dfInflation[dfInflation['Country Name']=='Venezuela, RB']
dfInflation = dfInflation.set_index('Country Name')
obj = {'year': [], 'rate': []}
for index, row in dfInflation.iterrows():
for year in range(2009, 2017):
obj['year'].append(year)
obj['rate'].append(row[str(year)])
dfInflation = pd.DataFrame(data={'Inflation rate %': obj['rate']}, index=obj['year'])
dfJoin = dfGdp.join(dfInflation, on='Year', how='inner')
print('------------------------------------------------------------------------------------------------')
print(dfJoin.corr(method='pearson'))
print('')
sns.lmplot(x='Inflation rate %',
y='GDP per Capita (USD)', data=dfJoin, fit_reg=True)
plt.savefig('./Inflation rate % to GDP per Capita')
# Correlation coef for GDP per capita and infant mortality rate
dfGdp = pd.read_csv('./data/Per capita GDP at current prices - US Dollars.csv', header=0)
dfGdp = dfGdp.sort_values('Year', ascending=True)
dfGdp = dfGdp[dfGdp['Country or Area']=='Venezuela (Bolivarian Republic of)']
dfGdp = dfGdp[['Year', 'Value']]
dfGdp = dfGdp.rename(columns={'Value': 'GDP per Capita (USD)'})
dfGdp = dfGdp.set_index('Year')
dfMortality = pd.read_csv('./data/Infant Mortaility.csv', header=0)
dfMortality = dfMortality[dfMortality['Country Name']=='Venezuela, RB']
obj = {'year': [], 'deaths': []}
for index, row in dfMortality.iterrows():
for year in range(1960, 2017):
obj['year'].append(year)
obj['deaths'].append(row[str(year)])
dfMortality = pd.DataFrame(data={'Infant deaths per 1,000 live births': obj['deaths']}, index=obj['year'])
dfJoin = dfGdp.join(dfMortality, on='Year', how='inner')
print('------------------------------------------------------------------------------------------------')
print(dfJoin.corr(method='pearson'))
print('')
sns.lmplot(x='Infant deaths per 1,000 live births',y='GDP per Capita (USD)', data=dfJoin, fit_reg=True)
plt.savefig('./Infant deaths per 1,000 live births rate % to GDP per Capita')
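# Refactor sketch (an addition, not part of the original script): the GDP frame
# is loaded and reshaped three times above; a small helper like the one below
# would remove the duplication. The function name is an illustrative assumption.
def load_venezuela_gdp(path='./data/Per capita GDP at current prices - US Dollars.csv'):
    df = pd.read_csv(path, header=0)
    df = df.sort_values('Year', ascending=True)
    df = df[df['Country or Area'] == 'Venezuela (Bolivarian Republic of)']
    df = df[['Year', 'Value']].rename(columns={'Value': 'GDP per Capita (USD)'})
    return df.set_index('Year')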
| 49.138889 | 106 | 0.639344 | 476 | 3,538 | 4.716387 | 0.207983 | 0.080178 | 0.058797 | 0.040089 | 0.624053 | 0.567038 | 0.521604 | 0.482851 | 0.45167 | 0.45167 | 0 | 0.010493 | 0.11108 | 3,538 | 71 | 107 | 49.830986 | 0.703339 | 0.046919 | 0 | 0.47541 | 0 | 0 | 0.403029 | 0.085536 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.065574 | 0 | 0.065574 | 0.147541 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
da97e76fb81b60192f3404927df48c6a16e6c0cb | 5,351 | py | Python | SDKs/CryCode/3.7.0/GameDll/uberfiles/genuber.py | amrhead/FireNET | 34d439aa0157b0c895b20b2b664fddf4f9b84af1 | [
"BSD-2-Clause"
] | 4 | 2017-12-18T20:10:16.000Z | 2021-02-07T21:21:24.000Z | SDKs/CryCode/3.7.0/GameDll/uberfiles/genuber.py | amrhead/FireNET | 34d439aa0157b0c895b20b2b664fddf4f9b84af1 | [
"BSD-2-Clause"
] | null | null | null | SDKs/CryCode/3.7.0/GameDll/uberfiles/genuber.py | amrhead/FireNET | 34d439aa0157b0c895b20b2b664fddf4f9b84af1 | [
"BSD-2-Clause"
] | 3 | 2019-03-11T21:36:15.000Z | 2021-02-07T21:21:26.000Z | #!/bin/python
import re
import sys
import os
class Config:
def __init__(self):
options = self.__parseOptions()
self.projectFileName = options.project
self.sourcesPerFile = options.number
self.mkFileName = options.makefile
self.mkProjectFileName = options.mkproject
self.destinationFolder = os.path.basename(os.getcwd())
self.excludeFileNames = ['OrganicMotion/OrganicMotionClient.cpp']
def __parseOptions(self):
from optparse import OptionParser
optionParser = OptionParser()
optionParser.add_option('-p', '--project', help='Project file name', default='../GameDllSDK.vcxproj', type='string')
        optionParser.add_option('-k', '--mkproject', help='Input makefile project file name', default='../Project.mk', type='string')
optionParser.add_option('-n', '--number', help='Number of sources per file', default='25', type='int')
optionParser.add_option('-m', '--makefile', help='Makefile name', default='Project.mk', type='string')
(options, args) = optionParser.parse_args()
return options
def __str__(self):
return 'projectFileName="%s" destinationFolder="%s" sourcesPerFile="%d" mkFileName="%s"' % (self.projectFileName, self.destinationFolder, self.sourcesPerFile, self.mkFileName)
class Parser:
def __init__(self, config):
self.config = config
self.reFileName = re.compile(r'<ClCompile\s*Include\s*=\s*\"([^\"]*)\"\s*/?>', re.DOTALL)
def parseFileNames(self):
fileNames = []
projectFileContent = open(self.config.projectFileName).read()
for match in self.reFileName.findall(projectFileContent):
fileName = match.replace('\\', '/').strip('./')
isSourceFile = fileName.endswith('.cpp') or fileName.endswith('.c')
if isSourceFile and not fileName in fileNames and not fileName in self.config.excludeFileNames:
fileNames.append(fileName)
return fileNames
def createUnityFileName(self, unityFileID):
return 'GameSDK_%d_uber.cpp' % unityFileID
    def splitFileNames(self, fileNames):
        splittedFilesDict = {}
        subList = []
        for fileName in fileNames:
            if len(subList) == self.config.sourcesPerFile:
                unityFileName = self.createUnityFileName(len(splittedFilesDict))
                splittedFilesDict[unityFileName] = subList
                subList = []
            subList.append(fileName)
        if len(subList) > 0:
            unityFileName = self.createUnityFileName(len(splittedFilesDict))
            splittedFilesDict[unityFileName] = subList
        return splittedFilesDict
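# Aside (an addition, not part of the original tool): the manual accumulator in
# splitFileNames above is plain fixed-size chunking, which can also be written
# with slicing. Minimal sketch using the same 25-sources-per-file default:
def _split_into_chunks(names, per_file=25):
    """Group names into consecutive chunks of at most per_file entries."""
    return {'GameSDK_%d_uber.cpp' % i: names[j:j + per_file]
            for i, j in enumerate(range(0, len(names), per_file))}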
class Generator:
def __init__(self, config):
self.config = config
self.unityBuildFirstLine = 'ifeq ($(MKOPTION_UNITYBUILD),1)'
self.unityBuildLastLine = 'endif'
def __writeRemovedSourceFiles(self, splittedFileNames, mkFile):
print (self.unityBuildFirstLine, file=mkFile)
print ('PROJECT_SOURCES_CPP_REMOVE += \\', file=mkFile)
removedCounter = 0
for (unityFileName, codeFileNames) in splittedFileNames.items():
for codeFileName in codeFileNames:
removedCounter = removedCounter + 1
print ('\t%s \\' % codeFileName, file=mkFile)
print ('', file=mkFile)
print ('Writing removed sources in "%s" - %d' % (self.config.mkFileName, removedCounter))
def __writeUnityFileNames(self, splittedFileNames, mkFile):
print ('PROJECT_SOURCES_CPP_ADD += \\', file=mkFile)
for (unityFileName, codeFileNames) in splittedFileNames.items():
            print ('\t%s/%s \\' % (self.config.destinationFolder, unityFileName), file=mkFile)
print ('', file=mkFile)
print (self.unityBuildLastLine, file=mkFile)
print ('Writing unity file names to be compiled in "%s" - %d' % (self.config.mkFileName, len(splittedFileNames)))
def __writeUnityFiles(self, splittedFileNames):
unityFileNamesWritten = []
for (unityFileName, codeFileNames) in splittedFileNames.items():
print ('Generating unity file: %s - %d' % (unityFileName, len(codeFileNames)))
unityFile = open(unityFileName, 'w')
print ('#ifdef _DEVIRTUALIZE_\n\t#include <GameSDK_devirt_defines.h>\n#endif\n', file=unityFile)
for codeFileName in codeFileNames:
print ('#include "../%s"' % codeFileName, file=unityFile)
print ('\n#ifdef _DEVIRTUALIZE_\n\t#include <GameSDK_wrapper_includes.h>\n#endif', file=unityFile)
unityFile.flush()
unityFileNamesWritten.append(unityFileName)
for fileName in os.listdir('./'):
if fileName not in unityFileNamesWritten and fileName.endswith('_uber.cpp'):
print ('Clearing:', fileName)
file = open(fileName, 'w')
file.close()
    def __writeProjectFile(self, splittedFileNames):
mkFile = open(self.config.mkFileName, 'w')
mkPrjFile = open(self.config.mkProjectFileName)
copyCurrentLine = True
for line in mkPrjFile:
if line.startswith(self.unityBuildFirstLine):
copyCurrentLine = False
self.__writeRemovedSourceFiles(splittedFileNames, mkFile)
self.__writeUnityFileNames(splittedFileNames, mkFile)
if copyCurrentLine:
print (line.rstrip('\n'), file=mkFile)
if line.startswith(self.unityBuildLastLine):
copyCurrentLine = True
mkFile.flush()
def writeFiles(self, splittedFileNames):
try:
            self.__writeProjectFile(splittedFileNames)
self.__writeUnityFiles(splittedFileNames)
except IOError as errorMessage:
print ('IO error: %s' % errorMessage)
config = Config()
parser = Parser(config)
fileNames = parser.parseFileNames()
splittedFileNames = parser.splitFileNames(fileNames)
generator = Generator(config)
generator.writeFiles(splittedFileNames)
| 33.030864 | 177 | 0.724164 | 561 | 5,351 | 6.802139 | 0.276292 | 0.028826 | 0.023585 | 0.024371 | 0.212788 | 0.173742 | 0.098532 | 0.068134 | 0 | 0 | 0 | 0.001312 | 0.145393 | 5,351 | 161 | 178 | 33.236025 | 0.833151 | 0.002243 | 0 | 0.165217 | 0 | 0 | 0.150618 | 0.060697 | 0 | 0 | 0 | 0 | 0 | 1 | 0.113043 | false | 0 | 0.034783 | 0.017391 | 0.217391 | 0.147826 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
da98edb50b64c4ae9596fe0d3c027f34ac473584 | 2,275 | py | Python | 2022/04/challenge04.py | mharty3/preppin-data | 9fad9b4fdd2ef9a12f7a32b03930179faa2284ea | [
"MIT"
] | null | null | null | 2022/04/challenge04.py | mharty3/preppin-data | 9fad9b4fdd2ef9a12f7a32b03930179faa2284ea | [
"MIT"
] | null | null | null | 2022/04/challenge04.py | mharty3/preppin-data | 9fad9b4fdd2ef9a12f7a32b03930179faa2284ea | [
"MIT"
] | null | null | null | #%%
# https://preppindata.blogspot.com/2022/01/2022-week-4-prep-school-travel-plans.html
# 2022-01-26
import pandas as pd
RAW = pd.read_csv('2022/04/inputs/travel_plans.csv')
MISTAKES = {
'Scoter': 'Scooter',
'Walkk': 'Walk',
'Carr': 'Car',
'Bycycle': 'Bicycle',
'Scootr': 'Scooter',
'Wallk': 'Walk',
'WAlk': 'Walk',
'Waalk': 'Walk',
'Helicopeter': 'Helicopter'
}
SUSTAINABILITY = {
'Car': 'Non-Sustainable',
'Bicycle': 'Sustainable',
'Scooter': 'Sustainable',
'Walk': 'Sustainable',
'Aeroplane': 'Non-Sustainable',
'Helicopter': 'Non-Sustainable',
'Van': 'Non-Sustainable',
"Mum's Shoulders": 'Sustainable',
'Hopped': 'Sustainable',
"Dad's Shoulders": 'Sustainable',
'Skipped': 'Sustainable',
    'Jumped': 'Sustainable'
}
trip_counts_per_day = (RAW
.drop(columns=['Student ID'])
.count()
.rename('trips_per_day')
)
output = (RAW
.melt(id_vars='Student ID', value_name='method', var_name='day')
.assign(method = lambda df_: (df_['method']
.map(MISTAKES)
.fillna(df_['method'])),
)
.groupby(['day', 'method'])['Student ID']
.count()
.reset_index()
.rename(columns={'Student ID': 'number_of_trips'})
.join(trip_counts_per_day, on='day')
.assign(
sustainable=lambda df_: df_['method'].map(SUSTAINABILITY),
percent_trips_per_day=lambda df_: df_['number_of_trips'] / df_['trips_per_day']
)
.round(2)
.rename(columns={'sustainable': 'Sustainable?',
'method': 'Method of Travel',
'day': 'Weekday',
'number_of_trips': 'Number of Trips',
'trips_per_day': 'Trips per day',
'percent_trips_per_day': '% of trips per day'})
).to_csv('2022/04/outputs/output.csv',
columns=['Sustainable?', 'Method of Travel', 'Weekday',
'Number of Trips', 'Trips per day', '% of trips per day'],
index=False)
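# Sanity-check sketch (an addition, not part of the original solution): within
# each weekday the '% of trips per day' values should sum to roughly 1 before rounding.
check = pd.read_csv('2022/04/outputs/output.csv')
print(check.groupby('Weekday')['% of trips per day'].sum()) # each close to 1.0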
| 32.042254 | 91 | 0.52 | 225 | 2,275 | 5.084444 | 0.382222 | 0.057692 | 0.086538 | 0.061189 | 0.107517 | 0.074301 | 0.041958 | 0 | 0 | 0 | 0 | 0.020539 | 0.315165 | 2,275 | 70 | 92 | 32.5 | 0.713736 | 0.042198 | 0 | 0.033898 | 0 | 0 | 0.363678 | 0.035862 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.016949 | 0 | 0.016949 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
da998cae86d173be7166b8bbd15b244a3cd77208 | 2,864 | py | Python | plugins/pg_invalid_indexes.py | xinferum/mamonsu-plugins | 9ddce580a1b030e67b1d6334c631cc76770bee9a | [
"MIT"
] | null | null | null | plugins/pg_invalid_indexes.py | xinferum/mamonsu-plugins | 9ddce580a1b030e67b1d6334c631cc76770bee9a | [
"MIT"
] | null | null | null | plugins/pg_invalid_indexes.py | xinferum/mamonsu-plugins | 9ddce580a1b030e67b1d6334c631cc76770bee9a | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from mamonsu.plugins.pgsql.plugin import PgsqlPlugin as Plugin
from mamonsu.plugins.pgsql.pool import Pooler
class PgInvalidIndexes(Plugin):
Interval = 60
    DEFAULT_CONFIG = {
        'Interval': str(60), # Default interval in seconds
    }
zbx_key = "invalid_indexes_count"
#
query_agent_discovery = "SELECT json_build_object ('data',json_agg(json_build_object('{#TABLE_IDX}', '" + zbx_key + "')));"
    # Count the invalid indexes in the current database
query = """
SELECT COUNT(*)
FROM pg_index i
JOIN pg_class c ON i.indexrelid = c.oid
JOIN pg_class c2 ON i.indrelid = c2.oid
JOIN pg_namespace n2 ON c2.relnamespace = n2.oid
WHERE (NOT i.indisready OR NOT i.indisvalid)
AND NOT EXISTS (SELECT 1 FROM pg_stat_activity where datname = current_database() AND query ilike '%concurrently%' AND pid <> pg_backend_pid());
"""
AgentPluginType = 'pg'
key_rel_part = 'pgsql.' + zbx_key
key_rel_part_discovery = key_rel_part+'{0}'
def run(self, zbx):
objects = []
for info_dbs in Pooler.query("select datname from pg_catalog.pg_database where datistemplate = false and datname not in ('mamonsu','postgres')"):
objects.append({'{#TABLE_IDX}': info_dbs[0]})
result = Pooler.query(self.query, info_dbs[0])
zbx.send(self.key_rel_part+'[{0}]'.format(info_dbs[0]), result[0][0])
zbx.send(self.key_rel_part+'[]', zbx.json({'data': objects}))
def discovery_rules(self, template, dashboard=False):
rule = {
'name': 'Invalid indexes in database discovery',
'key': self.key_rel_part_discovery.format('[{0}]'.format(self.Macros[self.Type])),
'filter': '{#TABLE_IDX}:.*'
}
items = [
{'key': self.right_type(self.key_rel_part_discovery, var_discovery="{#TABLE_IDX},"),
'name': 'Invalid indexes in database: {#TABLE_IDX}',
'units': Plugin.UNITS.none,
'value_type': Plugin.VALUE_TYPE.numeric_unsigned,
'delay': self.Interval},
]
conditions = [
{
'condition': [
{'macro': '{#TABLE_IDX}',
'value': '.*',
'formulaid': 'A'}
]
}
]
triggers = [{
            'name': 'PostgreSQL: invalid indexes found in database {#TABLE_IDX} on {HOSTNAME} (value={ITEM.LASTVALUE})',
'expression': '{#TEMPLATE:'+self.right_type(self.key_rel_part_discovery, var_discovery="{#TABLE_IDX},")+'.last()}>0',
'priority': 2
}
]
return template.discovery_rule(rule=rule, conditions=conditions, items=items, triggers=triggers)
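# Background note (an addition): an index typically becomes invalid when CREATE
# INDEX CONCURRENTLY fails or is cancelled part-way through. A quick manual
# check, equivalent in spirit to the query above, runnable in psql:
#   SELECT c.relname FROM pg_index i JOIN pg_class c ON i.indexrelid = c.oid
#   WHERE NOT i.indisvalid OR NOT i.indisready;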
| 40.914286 | 153 | 0.574721 | 329 | 2,864 | 4.805471 | 0.37386 | 0.040481 | 0.050601 | 0.044276 | 0.14864 | 0.098672 | 0.098672 | 0.070841 | 0.070841 | 0.070841 | 0 | 0.012764 | 0.288757 | 2,864 | 69 | 154 | 41.507246 | 0.763378 | 0.035964 | 0 | 0 | 0 | 0.035088 | 0.370464 | 0.050435 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035088 | false | 0 | 0.035088 | 0 | 0.245614 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
daa23b88ceb8bf087be4f6e681687b1a4883adc2 | 636 | py | Python | beep_alarm.py | XC-Li/Raspberry_Projects | 48b61832641fea1dcbd24b651266fe767d8cd254 | [
"MIT"
] | null | null | null | beep_alarm.py | XC-Li/Raspberry_Projects | 48b61832641fea1dcbd24b651266fe767d8cd254 | [
"MIT"
] | null | null | null | beep_alarm.py | XC-Li/Raspberry_Projects | 48b61832641fea1dcbd24b651266fe767d8cd254 | [
"MIT"
] | null | null | null | from time import ctime
from time import sleep
from sakshat import SAKSHAT
from sakspins import SAKSPins as Pins
saks = SAKSHAT()
alarm = [2011] # alarm times encoded as HHMM integers, e.g. 2011 == 20:11
alarm_run = True

def tact_event_handler(pin, status): # bound by the SAKS board to tact-button events
    global alarm_run
    if pin == Pins.TACT_RIGHT:
        print("Stop timer")
        alarm_run = False

try:
    while alarm_run: # the right tact button flips this flag to stop the loop
        current_time = ctime()
        current_time = current_time[11:13] + current_time[14:16] # "HH" + "MM"
        print(current_time)
        if int(current_time) in alarm:
            saks.buzzer.beep(1)
        sleep(2)
except KeyboardInterrupt:
    print("End")

saks.ledrow.off()
saks.buzzer.off()
daa5d4cc65519f90d470b60fa1a16f721ffb4184 | 8,019 | py | Python | appointment/views.py | aksheus/patient_appointment_system | d718f676e5b20197c8629e9eb9f9a47eb94b3ffe | [
"Apache-2.0"
] | null | null | null | appointment/views.py | aksheus/patient_appointment_system | d718f676e5b20197c8629e9eb9f9a47eb94b3ffe | [
"Apache-2.0"
] | null | null | null | appointment/views.py | aksheus/patient_appointment_system | d718f676e5b20197c8629e9eb9f9a47eb94b3ffe | [
"Apache-2.0"
] | null | null | null | from django.shortcuts import render,get_object_or_404
from django.http import HttpResponse,Http404,HttpResponseRedirect
from appointment.models import Patient,Appointment
from django.template import RequestContext,loader
from django.core.urlresolvers import reverse
from django.utils import timezone
from datetime import datetime,timedelta
import phonenumbers
from django.core.mail import EmailMessage
from django.views.generic import View
# Create your views here.
class Index(View):
    def get(self,request):
        dt=[]
        now=timezone.now()
        one_day=timedelta(days=1)
        two_day=timedelta(days=2)
        h=timedelta(hours=1)
        while now.hour!=int(9): # walk back to the 09:00 start time
            now=now-h
        s=timedelta(seconds=1)
        while now.second != int(0):
            now=now-s
        m=timedelta(minutes=1)
        while now.minute!=int(0):
            now=now-m
        now=now.replace(microsecond=0) # drop microseconds too, so generated slots can match booked datetimes exactly
        m=timedelta(minutes=10)
        dt.append(now)
        won=now
        while won.hour!=int(13): # fill 10-minute slots up to 13:00
            won=won+m
            dt.append(won)
        dt.pop() # drop the 13:00 slot itself
        for x in range(len(dt)):
            won=dt[x]+one_day
            dt.append(won)
            won=dt[x]+two_day
            dt.append(won)
        # dt now holds every possible slot over three days; remove the ones
        # already booked, i.e. present in the database
        a=Appointment.objects.all()
        a=[x.appointment_datetime for x in a]
        display_list=[str(x) for x in list(set(dt)-set(a))]
        display_list.sort()
        display_list=[x[:19] for x in display_list] # trim to 'YYYY-MM-DD HH:MM:SS'
        context={'display_list': display_list}
        return render(request,'appointment/index.html',context)
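# Aside (an addition, not part of the original view): the field-by-field clock
# walk above can be expressed directly with datetime.replace. A minimal sketch
# under the same assumptions (09:00-13:00 window, 10-minute slots, 3 days):
def candidate_slots(now, days=3, start_hour=9, end_hour=13, step_minutes=10):
    base = now.replace(hour=start_hour, minute=0, second=0, microsecond=0)
    step = timedelta(minutes=step_minutes)
    slots_per_day = (end_hour - start_hour) * 60 // step_minutes
    return [base + timedelta(days=d) + step * s
            for d in range(days) for s in range(slots_per_day)]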
class Form_handle(View):
    def post(self,request):
        """Create a patient object and check whether it is already in the db.
        If it is, don't store a duplicate; instead check whether the new
        appointment falls within 15 days of the previous one ('review') or
        not ('fresh'), retrieve that patient object from the db, create the
        appointment object and point it at the patient.
        If the patient isn't in the db, store the patient, then create an
        appointment pointing at that patient and store it."""
F=request.POST
try:
pp=Patient.objects.get(patient_name=F['name'],
patient_email=F['email']
)
            try:
                app=pp.appointment
            except Appointment.DoesNotExist:
                app=None
            comp=datetime.strptime(F['datetime'],'%Y-%m-%d %H:%M:%S')
            if app is not None and (comp-app.appointment_datetime).days <= 15: # review: within 15 days of the previous appointment
store_app=Appointment(
appointment_datetime=comp,
fresh_or_review=True,
appointment_problem=F['problem'])
store_app.save()
pp.appointment=store_app
pp.save()
mail_to_doctor=EmailMessage('appointment for %s'%pp.patient_name,
store_app.appointment_problem,
to=['spvijayal@gmail.com']
)
mail_to_doctor.send() #returns 1 on success or SMTP standard errors
mess='''Respected Sir/Madam,
Your review appointment is scheduled on %s'''%F['datetime']
mail_to_patient=EmailMessage('clinic\'s name',
mess,
to=['%s'%pp.patient_email]
)
mail_to_patient.send()
else:
store_app=Appointment(
appointment_datetime=comp,
appointment_problem=F['problem'])
store_app.save()
pp.appointment=store_app
pp.save()
mail_to_doctor=EmailMessage('appointment for %s'%pp.patient_name,
store_app.appointment_problem,
to=['spvijayal@gmail.com']
)
mail_to_doctor.send()
mess='''Respected Sir/Madam,
Your fresh appointment is scheduled on %s'''%F['datetime']
mail_to_patient=EmailMessage('clinic\'s name',
mess,
to=['%s'%pp.patient_email]
)
mail_to_patient.send()
return HttpResponseRedirect('results/')
except Patient.DoesNotExist:
try:
z=phonenumbers.parse(F['phonenum'],"IN")
except phonenumbers.NumberParseException:
cont={'error_message': ' Invalid Phone Number '}
return render(request,'appointment/index_l.html',cont)
if int(F['age']) >= 120 or int(F['age']) < 1:
con={'error_message': '%s is your age eh !! Nice try'%F['age']}
return render(request,'appointment/index_l.html',con)
if len(F['phonenum'][3:])!=10:
cont={'error_message': ' Invalid Phone Number '}
return render(request,'appointment/index_l.html',cont)
            try:
                for digit in F['phonenum'][1:]: # every character after the leading '+' must be a digit
                    int(digit) # raises ValueError otherwise
            except ValueError:
cont={'error_message': ' Invalid Phone Number '}
return render(request,'appointment/index_l.html',cont)
if not phonenumbers.is_possible_number(z):
cont={'error_message': ' Invalid Phone Number '}
return render(request,'appointment/index_l.html',cont)
            if not phonenumbers.is_valid_number(z):
cont={'error_message': ' Invalid Phone Number '}
return render(request,'appointment/index_l.html',cont)
            email_doms=['aol.com','comcast.net','facebook.com',
                        'gmail.com','hotmail.com','msn.com',
                        'outlook.com','yahoo.com','yahoo.co.in'
                        ]
if str(F['email']).split('@')[0] == '':
err_mail={'error_message':' Invalid email address '}
return render(request,'appointment/index_l.html',err_mail)
if F['email'].split('@')[1] not in email_doms :
err_mail={'error_message':' No support for email by %s'%F['email'].split('@')[1]}
return render(request,'appointment/index_l.html',err_mail)
comp=datetime.strptime(F['datetime'],'%Y-%m-%d %H:%M:%S')
store_app=Appointment(
appointment_datetime=comp,
appointment_problem=F['problem'])
store_app.save()
p=Patient(appointment=store_app,
patient_name=F['name'],
patient_age=int(F['age']),
patient_sex=F['sex'],
patient_email=F['email'],
patient_phone=F['phonenum'])
p.save()
mail_to_doctor=EmailMessage('appointment for %s'%p.patient_name,
store_app.appointment_problem,
to=['spvijayal@gmail.com']
)
mail_to_doctor.send()
mess='''Respected Sir/Madam,
We are glad to offer our services,Kindly visit the clinic on %s'''%F['datetime']
mail_to_patient=EmailMessage('clinic\'s name',
mess,
to=['%s'%p.patient_email]
)
mail_to_patient.send()
return HttpResponseRedirect('results/')
class Results(View):
def get(self,request):
return render(request,'appointment/index_l.html')
| 41.765625 | 120 | 0.529118 | 886 | 8,019 | 4.671558 | 0.23702 | 0.023194 | 0.045905 | 0.072481 | 0.458806 | 0.414835 | 0.404687 | 0.385359 | 0.37497 | 0.329307 | 0 | 0.007259 | 0.364385 | 8,019 | 192 | 121 | 41.765625 | 0.804787 | 0.076069 | 0 | 0.407407 | 0 | 0.006173 | 0.180766 | 0.032421 | 0 | 0 | 0 | 0 | 0 | 1 | 0.018519 | false | 0.006173 | 0.061728 | 0.006173 | 0.17284 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
daa7627c934e5e470f68c42789921d00362402ef | 2,395 | py | Python | terraform-provider-hpam/mock-hpam-server/fake_hpam.py | f-guichard/terraform-provider-examples | cbef217eb82df1f1c8798af2eebc065f2357b1a7 | [
"Apache-2.0"
] | null | null | null | terraform-provider-hpam/mock-hpam-server/fake_hpam.py | f-guichard/terraform-provider-examples | cbef217eb82df1f1c8798af2eebc065f2357b1a7 | [
"Apache-2.0"
] | null | null | null | terraform-provider-hpam/mock-hpam-server/fake_hpam.py | f-guichard/terraform-provider-examples | cbef217eb82df1f1c8798af2eebc065f2357b1a7 | [
"Apache-2.0"
] | null | null | null | # -*- coding: UTF-8 -*-
#Othello.java style : single file program
import os
import time
from flask import Flask
from flask import jsonify
from flask import request
# Global variables section
_CREATE_DELAY = 2
PORT = 30026 # Port assignment; update for CloudFoundry
CONTROLLER_VERSION = "v1"
_CONTROLLER_NAME = "Asset Mgmt Controller"
_26E_URL = "/"+CONTROLLER_VERSION+"/26e"
_26E_ID = "/"+CONTROLLER_VERSION+"/26e/<id>"
_HELPER_RESPONSE = {
    _CONTROLLER_NAME: CONTROLLER_VERSION,
    "GET "+_26E_URL : {
        "method": "GET",
        "parameters": "",
        "return code": "200"
    },
    "GET "+_26E_ID : {
        "method": "GET",
        "parameters": "a VIP identifier",
        "return code": "200"
    },
    "POST "+_26E_URL : {
        "method": "POST",
        "parameters": "json body like {}",
        "return code": "201"
    },
    "PATCH "+_26E_ID : {
        "method": "PATCH",
        "parameters": "json body like : {vipid : 'DESCRIPTION':'DESCRIPTION'}",
        "return code": "200"
    },
    "DELETE "+_26E_ID : {
        "method": "DELETE",
        "parameters": "a VIP identifier",
        "return code": "200"
    }
}
ramDic = {}
app = Flask(__name__)
@app.route('/')
def index():
return 'WORKING'
@app.route('/help')
def help():
return jsonify(_HELPER_RESPONSE)
@app.route(_26E_URL, methods=['GET'])
def list_assets():
    # Return the whole in-memory store keyed by asset id
    response = jsonify(ramDic)
response.status_code = 200
return response
@app.route(_26E_ID, methods=['GET'])
def list_asset(id):
response = jsonify(ramDic.get(id))
response.status_code = 200
return response
@app.route(_26E_URL, methods=['POST'])
def create_assets():
    body = request.get_json(force=True)
    new_id = str(len(ramDic)) # simple incremental id (mock only; ids can collide after deletes)
    ramDic[new_id] = body
    response = jsonify({'id': new_id}, {"obj": ramDic[new_id]})
    response.status_code = 201
    time.sleep(_CREATE_DELAY)
    return response
@app.route(_26E_ID, methods=['PATCH'])
def patch_assets(id):
response = jsonify('NOT IMPLEMENTED YET')
response.status_code = 200
return response
@app.route(_26E_ID, methods=['DELETE'])
def delete_assets(id):
response = jsonify(ramDic.pop(id))
response.status_code = 200
return response
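# Usage sketch (an addition): exercising the mock with curl, assuming the port
# configured above (30026):
#   curl -X POST http://localhost:30026/v1/26e -d '{"DESCRIPTION": "test vip"}'
#   curl http://localhost:30026/v1/26e/0
#   curl -X DELETE http://localhost:30026/v1/26e/0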
app.debug = True
app.run(host='0.0.0.0', port=PORT)
| 24.947917 | 89 | 0.604175 | 283 | 2,395 | 4.918728 | 0.314488 | 0.028736 | 0.057471 | 0.068247 | 0.194684 | 0.194684 | 0.166667 | 0.142241 | 0.112069 | 0.079023 | 0 | 0.041505 | 0.245511 | 2,395 | 95 | 90 | 25.210526 | 0.728832 | 0.05762 | 0 | 0.194805 | 0 | 0 | 0.192022 | 0.012987 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.064935 | 0.025974 | 0.246753 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
daaa7152b73fa0a13aa5df80e0a7b9ce4f28141b | 1,489 | py | Python | setup.py | OpenMindInnovation/quetzal | 3940dfe8e3d2a1060ec89ba4e575365563042bf9 | [
"BSD-3-Clause"
] | null | null | null | setup.py | OpenMindInnovation/quetzal | 3940dfe8e3d2a1060ec89ba4e575365563042bf9 | [
"BSD-3-Clause"
] | null | null | null | setup.py | OpenMindInnovation/quetzal | 3940dfe8e3d2a1060ec89ba4e575365563042bf9 | [
"BSD-3-Clause"
] | null | null | null | # http://flask.pocoo.org/docs/1.0/patterns/packages/
from setuptools import setup, find_packages
import versioneer
authors = [
('David Ojeda', 'david@dojeda.com'),
]
author_names = ', '.join(tup[0] for tup in authors)
author_emails = ', '.join(tup[1] for tup in authors)
setup(
name='quetzal',
packages=find_packages(exclude=['docs', 'migrations', 'tests']),
namespace_packages=['quetzal'],
include_package_data=True,
    python_requires='>=3.6',
install_requires=[
'Flask',
'werkzeug',
'Flask-Login',
'Flask-Principal',
'connexion',
'celery',
'kombu',
'Flask-Celery-Helper',
'SQLAlchemy',
'Flask-SQLAlchemy',
'Flask-Migrate',
'alembic',
'psycopg2-binary',
'sqlparse',
'requests',
'Click',
'syslog-rfc5424-formatter',
'apscheduler',
'gunicorn',
'google-cloud-storage',
],
author=author_names,
author_email=author_emails,
classifiers=[
'Development Status :: 4 - Beta',
'Framework :: Flask',
'License :: OSI Approved :: BSD License',
'Operating System :: OS Independent',
'Programming Language :: Python :: 3 :: Only',
'Topic :: Database',
'Topic :: Scientific/Engineering',
'Topic :: System :: Archiving',
],
version=versioneer.get_version(),
cmdclass=versioneer.get_cmdclass(),
zip_safe=False,
)
| 25.672414 | 68 | 0.577569 | 144 | 1,489 | 5.868056 | 0.659722 | 0.028402 | 0.018935 | 0.035503 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013774 | 0.268637 | 1,489 | 57 | 69 | 26.122807 | 0.762167 | 0.03358 | 0 | 0.039216 | 0 | 0 | 0.374391 | 0.032011 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.039216 | 0 | 0.039216 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |