hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
a657192a2a6097dd374a729b11c2e62a804c6b55 | 6,581 | py | Python | bin/postprocess-exe.py | ktanidis2/Modified_CosmoSIS_for_galaxy_number_count_angular_power_spectra | 07e5d308c6a8641a369a3e0b8d13c4104988cd2b | [
"BSD-2-Clause"
] | 1 | 2021-09-15T10:10:26.000Z | 2021-09-15T10:10:26.000Z | bin/postprocess-exe.py | ktanidis2/Modified_CosmoSIS_for_galaxy_number_count_angular_power_spectra | 07e5d308c6a8641a369a3e0b8d13c4104988cd2b | [
"BSD-2-Clause"
] | null | null | null | bin/postprocess-exe.py | ktanidis2/Modified_CosmoSIS_for_galaxy_number_count_angular_power_spectra | 07e5d308c6a8641a369a3e0b8d13c4104988cd2b | [
"BSD-2-Clause"
] | 1 | 2021-06-11T15:29:43.000Z | 2021-06-11T15:29:43.000Z | #!/usr/bin/env python
from __future__ import print_function
from cosmosis.postprocessing.postprocess import postprocessor_for_sampler
from cosmosis.postprocessing.inputs import read_input
from cosmosis.postprocessing.plots import Tweaks
from cosmosis.runtime.utils import mkdir
import sys
import argparse
import os
parser = argparse.ArgumentParser(description="Post-process cosmosis output")
parser.add_argument("inifile", nargs="+")
mcmc=parser.add_argument_group(title="MCMC", description="Options for MCMC-type samplers")
mcmc.add_argument("--burn", default=0.0, type=float, help="Fraction or number of samples to burn at the start")
mcmc.add_argument("--thin", default=1, type=int, help="Keep every n'th sampler in MCMC")
mcmc.add_argument("--weights", action='store_true', help="Look for a weight column in a generic MCMC file")
general=parser.add_argument_group(title="General", description="General options for controlling postprocessing")
general.add_argument("-o","--outdir", default=".", help="Output directory for all generated files")
general.add_argument("-p","--prefix", default="", help="Prefix for all generated files")
general.add_argument("--more-latex", default="", help="Load an additional latex file to the default")
general.add_argument("--no-latex", action='store_true', help="Do not use latex-style labels, just use the text")
general.add_argument("--blind-add", action='store_true', help="Blind results by adding adding a secret value to each parameter")
general.add_argument("--blind-mul", action='store_true', help="Blind results by scaling by a secret value for each parameter")
general.add_argument("--pdb", action='store_true', help="Run the debugger if any of the postprocessing stages fail")
inputs=parser.add_argument_group(title="Inputs", description="Options controlling the inputs to this script")
inputs.add_argument("--text", action='store_true', help="Tell postprocess that its argument is a text file, regardless of its suffix")
inputs.add_argument("--derive", default="", help="Read a python script with functions in that derive new columns from existing ones")
plots=parser.add_argument_group(title="Plotting", description="Plotting options")
plots.add_argument("--legend", help="Add a legend to the plot with the specified titles, separated by | (the pipe symbol)")
plots.add_argument("--legend-loc", default='best', help="The location of the legend: best, UR, UL, LL, LR, R, CL, CR, LC, UC, C (use quotes for the ones with two words.)")
plots.add_argument("--swap", action='store_true', help="Swap the ordering of the parameters in (x,y)")
plots.add_argument("--only", type=str, dest='prefix_only', help="Only make 2D plots where both parameter names start with this")
plots.add_argument("--either", type=str, dest='prefix_either', help="Only make 2D plots where one of the parameter names starts with this.")
plots.add_argument("--no-plots", action='store_true', help="Do not make any default plots")
plots.add_argument("--no-2d", action='store_true', help="Do not make any 2D plots")
plots.add_argument("--no-alpha", dest='alpha', action='store_false', help="No alpha effect - shaded contours will not be visible through other ones")
plots.add_argument("-f", "--file-type", default="png", help="Filename suffix for plots")
plots.add_argument("--no-smooth", dest='smooth', default=True, action='store_false', help="Do not smooth grid plot joint constraints")
plots.add_argument("--n-kde", default=100, type=int, help="Number of KDE smoothing points per dimension to use for MCMC 2D curves. Reduce to speed up, but can make plots look worse.")
plots.add_argument("--factor-kde", default=2.0, type=float, help="Smoothing factor for MCMC plots. More makes plots look better but can smooth out too much.")
plots.add_argument("--no-fill", dest='fill', default=True, action='store_false', help="Do not fill in 2D constraint plots with color")
plots.add_argument("--extra", dest='extra', default="", help="Load extra post-processing steps from this file.")
plots.add_argument("--tweaks", dest='tweaks', default="", help="Load plot tweaks from this file.")
plots.add_argument("--no-image", dest='image', default=True, action='store_false', help="Do not plot the image in 2D grids; just show the contours")
plots.add_argument("--run-max-post", default="", help="Run the test sampler on maximum-posterior sample and save to the named directory.")
def main(args):
#Read the command line arguments and load the
#ini file that created the run
args = parser.parse_args(args)
for ini_filename in args.inifile:
if not os.path.exists(ini_filename):
raise ValueError("The file (or directory) {} does not exist.".format(ini_filename))
#Make the directory for the outputs to go in.
mkdir(args.outdir)
outputs = {}
#Deal with legends, if any
if args.legend:
labels = args.legend.split("|")
if len(labels)!=len(args.inifile):
raise ValueError("You specified {} legend names but {} files to plot".format(len(labels), len(args.inifile)))
else:
labels = args.inifile
if len(args.inifile)>1 and args.run_max_post:
raise ValueError("Can only use the --run-max-post argument with a single parameter file for now")
for i,ini_filename in enumerate(args.inifile):
sampler, ini = read_input(ini_filename, args.text, args.weights)
        processor_class = postprocessor_for_sampler(sampler.strip())
#We do not know how to postprocess everything.
if processor_class is None:
print("I do not know how to postprocess output from the %s sampler"%sampler)
sampler = None
continue
#Create and run the postprocessor
processor = processor_class(ini, labels[i], i, **vars(args))
#Inherit any plots from the previous postprocessor
#so we can make plots with multiple datasets on
processor.outputs.update(outputs)
#We can load extra plots to make from a python
#script here
if args.extra:
processor.load_extra_steps(args.extra)
        #Optionally add a step in which we re-run the maximum-posterior sample
if args.run_max_post:
processor.add_rerun_bestfit_step(args.run_max_post)
#Run the postprocessor and make the outputs for this chain
processor.run()
#Save the outputs ready for the next post-processor in case
        #they want to add to it (e.g. two constraints on the same axes)
outputs = processor.outputs
if sampler is None:
return
#Run any tweaks that the user specified
if args.tweaks:
tweaks = Tweaks.instances_from_file(args.tweaks)
for tweak in tweaks:
processor.apply_tweaks(tweak)
#Save all the image files and close the text files
processor.finalize()
if __name__=="__main__":
main(sys.argv[1:])
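# Usage sketch (hypothetical chain files; the flags come from the parser above):
#   main(["chain_a.txt", "chain_b.txt", "--legend", "Run A|Run B", "-o", "plots"])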
| 53.072581 | 183 | 0.752925 | 1,021 | 6,581 | 4.760039 | 0.290891 | 0.076955 | 0.055967 | 0.035185 | 0.169342 | 0.100823 | 0.064198 | 0.034979 | 0 | 0 | 0 | 0.002952 | 0.124905 | 6,581 | 123 | 184 | 53.504065 | 0.840945 | 0.104543 | 0 | 0 | 0 | 0.02439 | 0.4355 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.012195 | false | 0 | 0.097561 | 0 | 0.121951 | 0.02439 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6571d51ba20f89c596630ca6b73cb59f6d6f6e4 | 224 | py | Python | Task 3.py | IsSveshuD/lab_10 | 7b6c6f69e9ee272e95300f325b1f1a251b3b07b6 | [
"MIT"
] | null | null | null | Task 3.py | IsSveshuD/lab_10 | 7b6c6f69e9ee272e95300f325b1f1a251b3b07b6 | [
"MIT"
] | null | null | null | Task 3.py | IsSveshuD/lab_10 | 7b6c6f69e9ee272e95300f325b1f1a251b3b07b6 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
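# Reads integers from stdin and multiplies them together until a 0 is entered,
# then prints and returns the running product.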
def t():
r = 1
while 1:
ch = int(input())
if not ch: break
r *= ch
print(r)
    return r
if __name__ == '__main__':
print(t())
| 13.176471 | 26 | 0.446429 | 31 | 224 | 2.967742 | 0.709677 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028571 | 0.375 | 224 | 16 | 27 | 14 | 0.628571 | 0.191964 | 0 | 0 | 0 | 0 | 0.044693 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0 | 0 | 0.2 | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6592e64bf1d5f88b19a5726eb07a0d0e13e1847 | 1,715 | py | Python | setup.py | AmineSoukara/PyAnime4Up | 495c6123a60e28e8a447da4152793c9201729e6a | [
"MIT"
] | 2 | 2021-10-01T20:51:20.000Z | 2021-11-12T04:45:16.000Z | setup.py | AmineSoukara/PyAnime4Up | 495c6123a60e28e8a447da4152793c9201729e6a | [
"MIT"
] | null | null | null | setup.py | AmineSoukara/PyAnime4Up | 495c6123a60e28e8a447da4152793c9201729e6a | [
"MIT"
] | null | null | null | """
PyAnime4Up
~~~~~~~~~
:Copyright: (c) 2021 By Amine Soukara <https://github.com/AmineSoukara>.
:License: MIT, See LICENSE For More Details.
:Description: A Selenium-less Python Anime4Up Library
"""
from setuptools import find_packages, setup
AUTHOR = "AmineSoukara"
EMAIL = "AmineSoukara@gmail.com"
URL = "https://github.com/AmineSoukara/PyAnime4Up"
# Get the long description
with open("README.md", "r", encoding="utf-8") as fh:
long_description = fh.read()
VERSION = '1.8'
setup(
name="PyAnime4Up",
version=VERSION,
description="A Selenium-less Python Anime4Up Library",
long_description=long_description,
long_description_content_type="text/markdown",
author=AUTHOR,
author_email=EMAIL,
url=URL,
license="MIT",
packages=find_packages(),
keywords="Anime Anime4Up Scrapper Python",
project_urls={
"Source": "https://github.com/AmineSoukara/PyAnime4Up",
"Documentation": "https://github.com/AmineSoukara/PyAnime4Up#readme",
"Tracker": "https://github.com/AmineSoukara/PyAnime4Up/issues",
},
classifiers=[
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Topic :: Software Development :: Build Tools",
"Natural Language :: English",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Topic :: Internet",
],
python_requires=">=3.6",
install_requires=["aiohttp", "urllib3", "bs4", "requests"],
)
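# Packaging sketch (standard setuptools workflow; the wheel filename is illustrative):
#   python setup.py sdist bdist_wheel
#   pip install dist/PyAnime4Up-1.8-py3-none-any.whl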
| 30.625 | 77 | 0.653061 | 183 | 1,715 | 6.054645 | 0.535519 | 0.049639 | 0.063177 | 0.117329 | 0.211191 | 0.081227 | 0.081227 | 0 | 0 | 0 | 0 | 0.021106 | 0.198834 | 1,715 | 55 | 78 | 31.181818 | 0.785298 | 0.127114 | 0 | 0 | 0 | 0 | 0.515111 | 0.014775 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.025 | 0 | 0.025 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a659c422f6fefeb24b335ffa862836f6bd0ada52 | 1,032 | py | Python | test/test_middleware.py | sunhailin-Leo/fastapi_apollo_middleware | 32351406141dbd87254efd4516288a556adbe72a | [
"MIT"
] | 2 | 2021-03-26T03:54:43.000Z | 2021-03-28T10:51:19.000Z | test/test_middleware.py | sunhailin-Leo/fastapi_apollo_middleware | 32351406141dbd87254efd4516288a556adbe72a | [
"MIT"
] | null | null | null | test/test_middleware.py | sunhailin-Leo/fastapi_apollo_middleware | 32351406141dbd87254efd4516288a556adbe72a | [
"MIT"
] | null | null | null | import time
import pytest
from fastapi import FastAPI, Request
from fastapi.testclient import TestClient
from fastapi_apollo_middleware.middleware import (
FastAPIApolloMiddleware,
startup_apollo_cycle_task,
)
@pytest.fixture(name="test_middleware")
def test_middleware():
def _test_middleware(**profiler_kwargs):
app = FastAPI()
app.add_middleware(
FastAPIApolloMiddleware,
apollo_app_id="test-fastapi",
)
@app.on_event("startup")
async def startup():
await startup_apollo_cycle_task(namespaces=["application"])
@app.get("/test")
        async def normal_request(request: Request):
return {"retMsg": "Normal Request test Success!"}
return app
return _test_middleware
class TestProfilerMiddleware:
@pytest.fixture
def client(self, test_middleware):
return TestClient(test_middleware())
def test_apollo(self, client):
# request
request_path = "/test"
client.get(request_path)
| 23.454545 | 71 | 0.669574 | 107 | 1,032 | 6.224299 | 0.35514 | 0.126126 | 0.076577 | 0.094595 | 0.072072 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.243217 | 1,032 | 43 | 72 | 24 | 0.852753 | 0.006783 | 0 | 0.064516 | 0 | 0 | 0.086999 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.129032 | false | 0 | 0.16129 | 0.032258 | 0.451613 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a65cc33c30371070823025c24bcfdcb3d64f61dc | 655 | py | Python | HW/final/test/preprocess.py | houzeyu2683/IRRHW | c44298ad14c468eff36bc75ebc63abdc9ba24d55 | [
"Apache-2.0"
] | null | null | null | HW/final/test/preprocess.py | houzeyu2683/IRRHW | c44298ad14c468eff36bc75ebc63abdc9ba24d55 | [
"Apache-2.0"
] | null | null | null | HW/final/test/preprocess.py | houzeyu2683/IRRHW | c44298ad14c468eff36bc75ebc63abdc9ba24d55 | [
"Apache-2.0"
] | 1 | 2022-01-16T03:40:34.000Z | 2022-01-16T03:40:34.000Z |
import pandas
'''
Build a term-document matrix from the text data
and save it to the specified location.
'''
information = pandas.read_csv("csv/information.csv")
import text
vocabulary = text.vocabulary()
vocabulary.build(content = information['abstract'], title=information['title_e'])
matrix = pandas.DataFrame(vocabulary.frequency, dtype='int')
matrix.index = vocabulary.term
matrix.columns = vocabulary.title
matrix.to_csv('frequency matrix.csv')
'''
Build a word2vec model; the word vectors are saved to the specified location.
'''
embedding = text.embedding(content=information['abstract'], tokenize=vocabulary.tokenize)
embedding.build(what='model', by='SG', window=8, dimension=150, epoch=10)
embedding.save(path='./vector.model')
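# A later session could reload the saved matrix, e.g. (sketch):
#   matrix = pandas.read_csv('frequency matrix.csv', index_col=0)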
| 22.586207 | 89 | 0.757252 | 81 | 655 | 6.08642 | 0.54321 | 0.056795 | 0.105477 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011725 | 0.08855 | 655 | 28 | 90 | 23.392857 | 0.81407 | 0 | 0 | 0 | 0 | 0 | 0.151675 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a65d8a9e5583c81d0cc550a14888526c903f8d10 | 5,039 | py | Python | fai/files.py | st31ny/pyfai | e81aa4b6a62bb4b5f27b5dc7e83dc2fa93862846 | [
"MIT"
] | 2 | 2021-12-20T00:47:06.000Z | 2021-12-21T15:04:42.000Z | fai/files.py | st31ny/pyfai | e81aa4b6a62bb4b5f27b5dc7e83dc2fa93862846 | [
"MIT"
] | null | null | null | fai/files.py | st31ny/pyfai | e81aa4b6a62bb4b5f27b5dc7e83dc2fa93862846 | [
"MIT"
] | null | null | null | """ File Handling
=============
Pyfai strictly differentiates between virtual paths in the target system
(:any:`TargetPath`) and physical paths in the installer system
(:any:`InstallerPath`). While the former are always rooted in the target
system's filesystem root, only the latter can be resolved as pyfai is
running in the installer system.
Most functions in this package work on :any:`TargetPath`\\ s. Sometimes,
however, access to the actual target filesystem is required. Therefore
this module provides functions to convert between both path types.
During softupdate, the installer system actually IS the target system, so
in this case the value of a :any:`TargetPath` to a specific file is
identical to the :any:`InstallerPath` to the same file, although both are
still separate classes.
Since a :any:`TargetPath` is only virtual and not (at least not during an
install) always resolvable it is an alias to :any:`pathlib.PurePosixPath`.
Conversely, an :any:`InstallerPath` is actually simply a path in the
currently running system, so it aliases :any:`pathlib.PosixPath`.
To convert between :any:`TargetPath`\\ s and :any:`InstallerPath`\\ s, use
:any:`resolve()` and :any:`unresolve()`.
"""
from __future__ import annotations
from typing import Sequence
import pathlib
from . import env, subprocess
InstallerPath: type = pathlib.PosixPath
"""Physical path in the installer system"""
_ip_root = InstallerPath('/')
TargetPath: type = pathlib.PurePosixPath
"""Virtual path in the target system"""
_tp_root = TargetPath('/')
def resolve(target_path: TargetPath) -> InstallerPath:
"""Resolve a path in the target system
:param target_path: pure path in the target system
:return: absolute path in the installer system within :any:`env.target`
"""
if target_path.is_absolute():
target_path = target_path.relative_to(_tp_root)
result = env.target / target_path
assert env.target in result.parents
assert result.is_absolute()
return result
def unresolve(installer_path: InstallerPath) -> TargetPath:
"""Find the target path for a resolved path
:param installer_path: resolved path
:return: absolute path in target system
:raise ValueError: if :any:`installer_path` not within :any:`env.target`
"""
result = _tp_root / installer_path.relative_to(env.target)
assert result.is_absolute()
return result
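# Round-trip sketch (illustrative; assumes env.target resolves to the
# InstallerPath '/target', which is an assumption, not a pyfai default):
#   resolve(TargetPath('/etc/hosts'))              -> InstallerPath('/target/etc/hosts')
#   unresolve(InstallerPath('/target/etc/hosts'))  -> TargetPath('/etc/hosts')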
def chmod(path: TargetPath,
*,
mode: int = 0o644,
user: str = 'root',
group: str = 'root'):
"""Change mode and owner/group of a file
:param path: path of file to chmod
:param mode: desired file mode
:param user: desired file owner
:param group: desired file group
:raises FileNotFoundError: if :any:`path <chmod.params.path>` does not exist
This function is idempotent.
"""
assert not user.startswith('-')
assert not group.startswith('-')
resolve(path).chmod(mode)
# we need to run this in the target to resolve user names correctly
subprocess.run(['chown', f'{user}:{group}', str(path)])
def mkdir(path: TargetPath,
*,
mode: int = 0o755,
user: str = 'root',
group: str = 'root'):
"""Create a directory relative to target
:param path: path of directory to create
:param mode: desired directory mode
:param user: desired directory owner
:param group: desired directory group
:raise FileExistsError: if :any:`path <mkdir.params.path>` is a non-directory file
Parent directories are created with default mode/owner/group if they do not
exist.
This function is idempotent.
"""
resolve(path).mkdir(mode=mode, parents=True, exist_ok=True)
chmod(path, mode=mode, user=user, group=group)
def fcopy(
*args: Sequence[TargetPath],
recursively: bool = False,
user: str = 'root',
group: str = 'root',
mode: int = 0o644,
remove_backup: bool = True,
delete_orphan: bool = True,
ignore_warnings: bool = True,
):
""" Run `fcopy(8)`_
:param args: paths of files to install
:param recursively: enable recursive mode (``-r``)
:param user: set file owner (``-m``)
:param group: set file group (``-m``)
:param mode: set file mode (``-m``)
:param remove_backup: remove ``*.pre_fcopy`` backup files (``-B``)
:param delete_orphan: delete target files when no class applies (``-d``)
:param ignore_warnings: ignore warnings when no class applies (``-i``)
.. _`fcopy(8)`: https://fai-project.org/doc/man/fcopy.html
"""
arg_map = {
'-B': remove_backup,
'-d': delete_orphan,
'-i': ignore_warnings,
'-r': recursively,
}
the_mode = f'{user},{group},{mode:o}'
fargs = ['fcopy', '-v', '-m', the_mode]
for option_name, option_set in arg_map.items():
if option_set:
fargs.append(option_name)
fargs.extend(str(p) for p in args)
subprocess.run_installer(fargs)
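# Usage sketch (hypothetical target paths; keyword arguments as documented above):
#   fcopy(TargetPath('/etc/motd'), TargetPath('/etc/issue'), user='root', mode=0o644)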
| 33.593333 | 86 | 0.670173 | 674 | 5,039 | 4.937685 | 0.292285 | 0.016526 | 0.019832 | 0.025541 | 0.095553 | 0.0622 | 0.022236 | 0 | 0 | 0 | 0 | 0.00356 | 0.219488 | 5,039 | 149 | 87 | 33.818792 | 0.842614 | 0.54872 | 0 | 0.224138 | 0 | 0 | 0.044799 | 0.011843 | 0 | 0 | 0 | 0 | 0.086207 | 1 | 0.086207 | false | 0 | 0.068966 | 0 | 0.189655 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a65e4a75490f93c29d34c895001bf593d4b147ba | 1,595 | py | Python | libs/openldap/openldap.py | wrobelda/craft-blueprints-kde | 366f460cecd5baebdf3a695696767c8c0e5e7c7e | [
"BSD-2-Clause"
] | 14 | 2017-09-04T09:01:03.000Z | 2022-01-04T20:09:00.000Z | libs/openldap/openldap.py | wrobelda/craft-blueprints-kde | 366f460cecd5baebdf3a695696767c8c0e5e7c7e | [
"BSD-2-Clause"
] | 14 | 2017-12-15T08:11:22.000Z | 2020-12-29T19:11:13.000Z | libs/openldap/openldap.py | wrobelda/craft-blueprints-kde | 366f460cecd5baebdf3a695696767c8c0e5e7c7e | [
"BSD-2-Clause"
] | 19 | 2017-09-05T19:16:21.000Z | 2020-10-18T12:46:06.000Z | import info
class subinfo(info.infoclass):
def setTargets(self):
for ver in ['2.4.28', '2.4.33', '2.4.36', '2.4.45']:
self.targets[ver] = ('ftp://ftp.openldap.org/pub/OpenLDAP/'
'openldap-release/openldap-' + ver + '.tgz')
self.targetInstSrc[ver] = 'openldap-' + ver
self.patchToApply['2.4.28'] = [('openldap-2.4.28-20120212.diff', 1)]
self.patchToApply['2.4.33'] = [('openldap-2.4.33-20130124.diff', 1)]
self.patchToApply['2.4.36'] = [('openldap-2.4.36-20131003.diff', 1)]
# self.patchToApply['2.4.36'] = [('openldap-2.4.36-20170627.diff', 1)]
self.patchToApply['2.4.45'] = [('openldap-2.4.45-20170628.diff', 1)]
self.targetDigests['2.4.28'] = 'd888beae1723002a5a2ff5509d3040df40885774'
self.targetDigests['2.4.33'] = '0cea642ba2dae1eb719da41bfedb9eba72ad504d'
self.targetDigests['2.4.36'] = 'da0e18a28a5dade5c98d9a382fd8f0a676a12aca'
self.description = "an open source implementation of the Lightweight Directory Access Protocol"
self.defaultTarget = '2.4.45'
def setDependencies(self):
self.runtimeDependencies["virtual/base"] = None
self.runtimeDependencies["libs/cyrus-sasl"] = None
self.runtimeDependencies["libs/pcre"] = None
self.runtimeDependencies["libs/openssl"] = None
from Package.CMakePackageBase import *
class Package(CMakePackageBase):
def __init__(self, **args):
CMakePackageBase.__init__(self)
# self.subinfo.options.configure.args = "-DBUILD_TOOL=ON -DBUILD_TESTS=ON "
| 43.108108 | 103 | 0.64326 | 187 | 1,595 | 5.433155 | 0.374332 | 0.035433 | 0.023622 | 0.088583 | 0.11811 | 0.11811 | 0.072835 | 0.072835 | 0.072835 | 0.072835 | 0 | 0.141304 | 0.192476 | 1,595 | 36 | 104 | 44.305556 | 0.647516 | 0.089028 | 0 | 0 | 0 | 0 | 0.348276 | 0.205517 | 0 | 0 | 0 | 0 | 0 | 1 | 0.12 | false | 0 | 0.08 | 0 | 0.28 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a65e59791dc58d5a43721bf93a162bd5c1d59c87 | 4,610 | py | Python | cogs/fun/cog.py | uselessvevo/bonny-biboni-bot | ad6d0dc9e688dc9264638103bd07c8a95c2aaa56 | [
"MIT"
] | null | null | null | cogs/fun/cog.py | uselessvevo/bonny-biboni-bot | ad6d0dc9e688dc9264638103bd07c8a95c2aaa56 | [
"MIT"
] | null | null | null | cogs/fun/cog.py | uselessvevo/bonny-biboni-bot | ad6d0dc9e688dc9264638103bd07c8a95c2aaa56 | [
"MIT"
] | null | null | null | """
Description: Old fun module. Will be rewritten
Version: 0620/prototype
Author: useless_vevo
"""
# Standard library
import os
import hashlib
import requests
from io import BytesIO
# Discord
import discord
from discord.ext import commands
# Pillow/PIL
from PIL import Image
from PIL import ImageFont
from PIL import ImageDraw
# Common
from tools.locales import tr
from tools.locales import alias
class Fun(commands.Cog):
def __init__(self, bot):
self.bot = bot
self._resources = os.path.join(os.path.dirname(__file__), 'resources')
self._images_folder = os.path.join(self._resources, 'images')
self._temp_images_folder = os.path.join(self._resources, 'images', 'Temp')
if not os.path.exists(self._temp_images_folder):
os.makedirs(self._temp_images_folder)
# tools
@staticmethod
async def get_image(ctx):
history_limit = 2000
formats = ('png', 'gif', 'jpeg', 'jpg')
async for c in ctx.history(limit=history_limit):
if len(c.attachments) > 0:
background_url = c.attachments[0].url
background_ext = background_url.split('.')[-1]
return background_url if background_ext in formats else None
    def save_image(self, url):
        # Derive a stable local filename from the URL hash, then download the
        # image and save it into the temp folder.
        filename = f'hash_{hashlib.sha1(url.encode()).hexdigest()[:8]}.jpg'
        output_file = os.path.join(self._temp_images_folder, filename)
        response = requests.get(url)
        image = Image.open(BytesIO(response.content))
        image.save(output_file, 'PNG')
        return output_file
@commands.command(aliases=alias('impact-meme'), pass_context=True)
@commands.cooldown(2, 3)
async def impact_meme(self, ctx, *string):
# Forked from: https://github.com/Littlemansmg/Discord-Meme-Generator
image_path = self.save_image(await self.get_image(ctx))
font_path = f'{self._resources}/Fonts/impact.ttf'
if string:
string_size = len(string) // 2
top_string = ' '.join(string[:string_size])
bottom_string = ' '.join(string[string_size:])
with Image.open(image_path) as image:
size = image.size
font_size = int(size[1] / 5)
font = ImageFont.truetype(font_path, font_size)
edit = ImageDraw.Draw(image)
# find biggest font size that works
top_text_size = font.getsize(top_string)
bottom_text_size = font.getsize(bottom_string)
while top_text_size[0] > size[0] - 20 or bottom_text_size[0] > size[0] - 20:
font_size = font_size - 1
                # recompute the text sizes with the reduced font until both lines fit
font = ImageFont.truetype(font_path, font_size)
top_text_size = font.getsize(top_string)
bottom_text_size = font.getsize(bottom_string)
# find top centered position for top text
top_text_posx = (size[0] / 2) - (top_text_size[0] / 2)
top_text_posy = 0
top_text_pos = (top_text_posx, top_text_posy)
# find bottom centered position for bottom text
bottom_text_posx = (size[0] / 2) - (bottom_text_size[0] / 2)
bottom_text_posy = size[1] - bottom_text_size[1] - 10
bottom_text_pos = (bottom_text_posx, bottom_text_posy)
# draw outlines
# there may be a better way
outline_range = int(font_size / 15)
for x in range(-outline_range, outline_range + 1):
for y in range(-outline_range, outline_range + 1):
edit.text(
(top_text_pos[0] + x, top_text_pos[1] + y),
top_string,
(0, 0, 0),
font=font
)
edit.text(
(bottom_text_pos[0] + x, bottom_text_pos[1] + y),
bottom_string,
(0, 0, 0),
font=font
)
edit.text(top_text_pos, top_string, (255, 255, 255), font=font)
edit.text(bottom_text_pos, bottom_string, (255, 255, 255), font=font)
image.save(image_path, 'PNG')
await ctx.send(file=discord.File(image_path))
os.remove(image_path)
else:
await ctx.send(tr('Cogs.Fun.Fun.ImpactMemeEmptyString', ctx))
def setup(bot):
bot.add_cog(Fun(bot))
| 35.736434 | 92 | 0.565727 | 562 | 4,610 | 4.414591 | 0.275801 | 0.056429 | 0.028214 | 0.032245 | 0.28295 | 0.208787 | 0.180572 | 0.108021 | 0.054817 | 0.054817 | 0 | 0.023499 | 0.335358 | 4,610 | 128 | 93 | 36.015625 | 0.786227 | 0.081562 | 0 | 0.139535 | 0 | 0 | 0.042705 | 0.028944 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034884 | false | 0.011628 | 0.127907 | 0 | 0.197674 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6641926389cb84ab0fd992f95aedcd6c3ef503b | 1,079 | py | Python | tests/test_regions.py | luigialberti/pytriangle | 99ecafc299a692ef0f33e262bc7a1c912d3aa694 | [
"MIT"
] | null | null | null | tests/test_regions.py | luigialberti/pytriangle | 99ecafc299a692ef0f33e262bc7a1c912d3aa694 | [
"MIT"
] | null | null | null | tests/test_regions.py | luigialberti/pytriangle | 99ecafc299a692ef0f33e262bc7a1c912d3aa694 | [
"MIT"
] | null | null | null | import math
import triangle
import numpy
pointBoundary = [ (-1, -1),
(-1, 1.0),
( 0, 1.0),
( 0, -1),
( 1, 1),
( 1, -3)]
points = pointBoundary
segs = [(0, 1),(1, 2),(2, 3),(3, 0), (2, 4), (4, 5), (5, 3)]
# physical tags to apply to the segments; must have the same length as segs
segTags = [ 5,5,5,4,7,7,7]
t = triangle.Triangle()
t.set_points(points)
t.set_segments(segs, segTags)
# regions can be defined with a regional attribute 'r' at x,y coordinates.
# Moreover it is possible to specify the area constraint in that region with
# the fourth parameter 'a'
# regions = [(x,y,r,a),...]
regions = [ (-0.5, 0.5, 10, 0.1),
(0.5, 0.5, 20, 0.5)]
t.set_regions(regions)
t.triangulate(mode='qpzAe', area=1)
# a function to plot the triangulation, for fast check of regional attributes
# within the mesh triangles
t.plot_mesh().show()
print(t.get_triangles())
# note that to have edges in "t" we need to use the "e" switch in
# triangulate!
print(t.get_edges())
| 22.957447 | 77 | 0.586654 | 174 | 1,079 | 3.603448 | 0.465517 | 0.022329 | 0.019139 | 0.012759 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.065081 | 0.2595 | 1,079 | 46 | 78 | 23.456522 | 0.71965 | 0.414272 | 0 | 0 | 0 | 0 | 0.008052 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.136364 | 0 | 0.136364 | 0.090909 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a665215b275a48c5f84ca29a35c2258cf294cefc | 4,703 | py | Python | src/Tester.py | ujjawalmisra/json-ws-test | bece383d414c12b8827afc59da5c7a98d4c46b0f | [
"MIT"
] | 1 | 2017-02-09T14:52:25.000Z | 2017-02-09T14:52:25.000Z | src/Tester.py | ujjawalmisra/json-ws-test | bece383d414c12b8827afc59da5c7a98d4c46b0f | [
"MIT"
] | null | null | null | src/Tester.py | ujjawalmisra/json-ws-test | bece383d414c12b8827afc59da5c7a98d4c46b0f | [
"MIT"
] | null | null | null | import argparse
import json
import pprint
import Logger
from executors.EndLoopExecutor import EndLoopExecutor
from executors.ExecutorFactory import ExecutorFactory
from DictUtils import DictUtils
class Tester:
__LOGGER = Logger.getLogger('Tester')
def __init__(self, configFilePath):
Tester.__LOGGER.debug("created Tester")
with open(configFilePath, 'r') as configFile:
self.__config = DictUtils.convert(json.load(configFile))
Tester.__LOGGER.debug("loaded config from: " + configFilePath)
def showConfig(self):
pprint.pprint(self.__config)
if None != self.__config['tests']:
for test in self.__config['tests']:
pprint.pprint(test)
def __isValidStep(self, step):
Tester.__LOGGER.debug("validating step: " + str(step))
        return step is not None and step.get('construct') is not None
def __formatResultSeparator(self):
return "|" + ("-" * 30) + "|" + (("-" * 14 + "|") * 3)
def __formatResultHead1(self):
s = "|" + "[sid]".center(30) + "|"
for t in ['total', 'passed', 'failed']:
s+= ("[" + t + "]").center(14) + "|"
return s
def __formatResultHead2(self):
s = "|" + "".ljust(30) + "|"
i = len(['total', 'passed', 'failed'])
while i > 0:
s += "count".rjust(6)
s += "avg(ms)".rjust(8)
s += "|"
i -= 1
return s
def __formatResultStr(self, sid, data):
s = "|" + sid.ljust(30) + "|"
for t in ['total', 'passed', 'failed']:
if 0 == data[t]['count']:
avgTime = 0
else:
avgTime = int(data[t]['time']*1000/data[t]['count'])
s += str(data[t]['count']).rjust(6)
s += str(avgTime).rjust(8)
s += "|"
return s
def run(self):
Tester.__LOGGER.info("in run")
if not 'steps' in self.__config:
Tester.__LOGGER.info("no test steps to execute")
return
default = DictUtils.defaultIfNone(self.__config, None, 'default')
control = {'loop':{'running': False, 'count': 0, 'steps': []},
'session':{'running': False, 'steps': {}},
'result':{'total':{'count':0, 'time':0},
'passed':{'count':0, 'time':0},
'failed':{'count':0, 'time':0},
'steps':{}
}
}
for step in self.__config['steps']:
            if not self.__isValidStep(step):
continue
executor = ExecutorFactory.getExecutor(step['construct'])
if None == executor:
Tester.__LOGGER.error("no executor found for construct: " + step['construct'])
continue
executor.execute(default, step, control)
if isinstance(executor, EndLoopExecutor):
while control['loop']['running']:
for tStep in control['loop']['steps']:
tStep['executor'].execute(default, tStep['step'], control)
executor.execute(default, step, control)
Tester.__LOGGER.info("================================")
Tester.__LOGGER.info("[SUMMARY JSON]")
Tester.__LOGGER.info(str(control['result']))
Tester.__LOGGER.info("================================")
Tester.__LOGGER.info("================================")
Tester.__LOGGER.info("[SUMMARY]")
Tester.__LOGGER.info(self.__formatResultSeparator())
Tester.__LOGGER.info(self.__formatResultHead1())
Tester.__LOGGER.info(self.__formatResultSeparator())
Tester.__LOGGER.info(self.__formatResultHead2())
Tester.__LOGGER.info(self.__formatResultSeparator())
for step in self.__config['steps']:
if not 'sid' in step:
continue
sid = step['sid']
sidData = control['result']['steps'][sid]
Tester.__LOGGER.info(self.__formatResultStr(sid, sidData))
Tester.__LOGGER.info(self.__formatResultSeparator())
Tester.__LOGGER.info(self.__formatResultStr('OVERALL', control['result']))
Tester.__LOGGER.info(self.__formatResultSeparator())
Tester.__LOGGER.info("================================")
#--------------------------------
# [main]
#--------------------------------
parser = argparse.ArgumentParser()
parser.add_argument('config', help="config file containing the tests")
args = parser.parse_args()
T = Tester(args.config)
T.run()
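# A minimal hypothetical config for this script (the 'default'/'steps'/'sid'/'construct'
# keys come from the code above; the construct name itself is an assumption):
# {
#     "default": {},
#     "steps": [ {"sid": "step-1", "construct": "SomeConstruct"} ]
# }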
| 37.624 | 94 | 0.51818 | 438 | 4,703 | 5.340183 | 0.251142 | 0.117999 | 0.12313 | 0.076956 | 0.264643 | 0.186404 | 0.179564 | 0.102608 | 0.078239 | 0 | 0 | 0.010906 | 0.298108 | 4,703 | 124 | 95 | 37.927419 | 0.697667 | 0.015097 | 0 | 0.22549 | 0 | 0 | 0.137208 | 0.027658 | 0.019608 | 0 | 0 | 0 | 0 | 1 | 0.078431 | false | 0.039216 | 0.068627 | 0.009804 | 0.22549 | 0.029412 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a666e2b483e7e338818d32db2cd3f68b92fb8795 | 7,504 | py | Python | tests/L0/run_transformer/run_gpt_minimal_test.py | jpool-nv/apex | d36397d2b8ce5c8854997e4ec2828e056e8fda89 | [
"BSD-3-Clause"
] | null | null | null | tests/L0/run_transformer/run_gpt_minimal_test.py | jpool-nv/apex | d36397d2b8ce5c8854997e4ec2828e056e8fda89 | [
"BSD-3-Clause"
] | null | null | null | tests/L0/run_transformer/run_gpt_minimal_test.py | jpool-nv/apex | d36397d2b8ce5c8854997e4ec2828e056e8fda89 | [
"BSD-3-Clause"
] | 1 | 2021-12-20T00:49:01.000Z | 2021-12-20T00:49:01.000Z | from functools import partial
from typing import List
import time
import torch
from apex.transformer import parallel_state
from apex.transformer.tensor_parallel import model_parallel_cuda_manual_seed
from apex.transformer.pipeline_parallel.utils import setup_microbatch_calculator
from apex.transformer.pipeline_parallel.utils import (
average_losses_across_data_parallel_group,
)
from apex.transformer.pipeline_parallel.utils import get_ltor_masks_and_position_ids
from apex.transformer.pipeline_parallel.schedules.common import build_model
from apex.transformer.pipeline_parallel.schedules.common import (
_get_params_for_weight_decay_optimization,
)
from apex.transformer.pipeline_parallel.schedules.fwd_bwd_pipelining_without_interleaving import (
forward_backward_pipelining_without_interleaving,
)
from apex.transformer.testing.standalone_gpt import gpt_model_provider
from apex.transformer.testing import global_vars
from apex.transformer.testing.commons import TEST_SUCCESS_MESSAGE
from apex.transformer.testing.commons import initialize_distributed
MANUAL_SEED = 42
inds = None
data_idx = 0
N_VOCAB = 128
def download_fancy_data():
# import requests
# response = requests.get('https://internet.com/book.txt')
# text = ' '.join(response.text.split())
text = """
An original sentence not subject to any license restrictions, copyright, or royalty payments. Nothing to see here. Commercial or non-commercial use. Research or non-research purposes. The quick brown fox jumps over the lazy dog. Lorem ipsum.
"""
text = text * 1024
encoded = text.encode("ascii", "replace")
ints = [int(encoded[i]) for i in range(len(encoded))]
return torch.tensor(ints)
# build a batch given sequence_len and batch size
def generate_fancy_data_labels(sequence_len, batch_size):
global data_idx
global inds
global MANUAL_SEED
temps = list()
for i in range(batch_size):
if inds is None or data_idx >= len(inds):
# hack as use of RNG will fall out of sync due to pipelines being different
model_parallel_cuda_manual_seed(MANUAL_SEED)
inds = torch.randperm(effective_length, device="cuda")
MANUAL_SEED += 1
data_idx = 0
data_idx_ = data_idx
offset = inds[data_idx_]
data_idx += 1
curr = fancy_data[offset : offset + sequence_len + 1].clone().detach()
temps.append(curr)
temp = torch.stack(temps, dim=0).cuda()
return temp
easy_data = None
def get_batch(int_tensors: List[torch.Tensor]):
data = int_tensors[0]
# Unpack.
tokens_ = data.long()
labels = tokens_[:, 1:].contiguous()
tokens = tokens_[:, :-1].contiguous()
# Get the masks and position ids.
attention_mask, loss_mask, position_ids = get_ltor_masks_and_position_ids(
tokens,
N_VOCAB, # tokenizer.eod,
False, # args.reset_position_ids,
False, # args.reset_attention_mask,
False, # args.eod_mask_loss,
)
return tokens, labels, loss_mask, attention_mask, position_ids
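# e.g. a data row [5, 7, 9, 11] yields tokens [5, 7, 9] and labels [7, 9, 11]:
# each position is trained to predict the following token.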
# Ref: https://github.com/NVIDIA/Megatron-LM/blob/b31e1296354e979722627a6c4dedafe19b51fa97/pretrain_gpt.py#L75
def loss_func(loss_mask, output_tensor):
losses = output_tensor.float()
loss_mask = loss_mask.view(-1).float()
loss = torch.sum(losses.view(-1) * loss_mask) / loss_mask.sum()
# Reduce loss for logging.
averaged_loss = average_losses_across_data_parallel_group([loss])
return loss, {"lm loss": averaged_loss[0]}
# Ref: https://github.com/NVIDIA/Megatron-LM/blob/b31e1296354e979722627a6c4dedafe19b51fa97/pretrain_gpt.py#L86
def fwd_step_func(batch, model):
"""Forward step."""
tokens, labels, loss_mask, attention_mask, position_ids = get_batch(batch)
output_tensor = model(tokens, position_ids, attention_mask, labels=labels)
return output_tensor, partial(loss_func, loss_mask)
def train(model, optim, pipeline_model_parallel_size, async_comm):
sequence_len = global_vars.get_args().seq_length
micro_batch_size = global_vars.get_args().micro_batch_size
hidden_size = global_vars.get_args().hidden_size
fwd_bwd_func = forward_backward_pipelining_without_interleaving
    tensor_shape = (sequence_len, micro_batch_size, hidden_size)
runtime = 0
# training loop
for i in range(3):
since = time.time()
if torch.distributed.get_rank() == 0:
print("begin iter", i)
batch = [
generate_fancy_data_labels(args.seq_length, args.global_batch_size)
for _ in range(pipeline_model_parallel_size)
]
if torch.distributed.get_rank() == 0:
print("finished making batch...")
optim.zero_grad()
fwd_bwd_func(
fwd_step_func, batch, model, forward_only=False, tensor_shape=tensor_shape, async_comm=async_comm
)
if torch.distributed.get_rank() == 0:
print("finished forward step")
optim.step()
if torch.distributed.get_rank() == 0:
print("finished iter", i)
runtime += time.time() - since
return runtime / 3.0
if __name__ == "__main__":
init = True
for async_comm in (False, True):
global fancy_data
global effective_length
if init:
init = False
global_vars.set_global_variables()
args = global_vars.get_args()
fancy_data = download_fancy_data()
effective_length = fancy_data.size(0) - args.seq_length
initialize_distributed()
world_size = torch.distributed.get_world_size()
failure = None
args.padded_vocab_size = 128
batch_size = args.global_batch_size
micro_batch_size = args.micro_batch_size
setup_microbatch_calculator(
args.rank,
args.rampup_batch_size,
args.global_batch_size,
args.micro_batch_size,
args.data_parallel_size, # args.data_parallel_size,
)
world_size = torch.distributed.get_world_size()
print(args.tensor_model_parallel_size, "MODEL PARALLEL SIZE")
parallel_state.initialize_model_parallel(
tensor_model_parallel_size_=args.tensor_model_parallel_size,
pipeline_model_parallel_size_=args.pipeline_model_parallel_size,
)
pipeline_model_parallel_size = (
parallel_state.get_pipeline_model_parallel_world_size()
)
model_parallel_cuda_manual_seed(0)
model = build_model(
gpt_model_provider,
wrap_with_ddp=True,
virtual_pipeline_model_parallel_size=None,
cpu_offload=args.cpu_offload,
)
assert isinstance(model, list), model
_param_groups = _get_params_for_weight_decay_optimization(model)
optim = torch.optim.Adam(_param_groups)
runtime = train(model, optim, args.pipeline_model_parallel_size, async_comm)
parallel_state.destroy_model_parallel()
torch.distributed.barrier()
if torch.distributed.get_rank() == 0:
print(TEST_SUCCESS_MESSAGE)
print("Average Iteration Time:", runtime)
| 38.482051 | 244 | 0.678971 | 927 | 7,504 | 5.171521 | 0.257821 | 0.046099 | 0.047559 | 0.036504 | 0.38569 | 0.308719 | 0.19587 | 0.121193 | 0.055069 | 0.037547 | 0 | 0.016129 | 0.239872 | 7,504 | 194 | 245 | 38.680412 | 0.824334 | 0.087287 | 0 | 0.077922 | 0 | 0.006494 | 0.058504 | 0 | 0 | 0 | 0 | 0 | 0.006494 | 1 | 0.038961 | false | 0 | 0.103896 | 0 | 0.181818 | 0.045455 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a66d35e9c6f79553a73555c39661a31163d7a94c | 16,851 | py | Python | src/auth/wxapi/wechat_api.py | zeroleo12345/authen | 700e5b6842aecc61c0a3f96bd5ef480fbeecbc46 | [
"MIT"
] | null | null | null | src/auth/wxapi/wechat_api.py | zeroleo12345/authen | 700e5b6842aecc61c0a3f96bd5ef480fbeecbc46 | [
"MIT"
] | null | null | null | src/auth/wxapi/wechat_api.py | zeroleo12345/authen | 700e5b6842aecc61c0a3f96bd5ef480fbeecbc46 | [
"MIT"
] | 1 | 2019-11-13T05:59:35.000Z | 2019-11-13T05:59:35.000Z | #coding:utf-8
import sys
import traceback
import time
import datetime
import pytz
import base64
import binascii
import hashlib
# Third-party libraries
from flask import Flask, request, redirect, jsonify, session, abort, render_template, Response
from decouple import config
# Our own libraries
from mybase.mylog3 import log
from mybase.myencrypt2 import encrypt_aes, decrypt_aes
from mybase.myutil import basetype_to_str
from mybase.mysqlpool import MysqlPool, IntegrityError
from mybase.mycksum import calmd5
from mybase.myrandom import MyRandom
from auth.wxapi import wxapi
from auth.utils.webutil import WebUtil
# Global variables
WEBUTIL = WebUtil()
g_tz = pytz.timezone('Asia/Shanghai')
TOKEN_SECRET = config('TOKEN_SECRET')
IV = config('IV')
@wxapi.before_app_first_request
def init_my_blueprint():
pass
def get_session():
    openid, wxid = '', ''
    if 'openid' in session:  # the Flask mechanism guarantees the session cannot be tampered with!
        openid = session['openid']
        log.d( "old session: {}", session )
    if 'wxid' in session:
        wxid = session['wxid']
    return (openid, wxid)
# /wxapi/heartbeat
@wxapi.route('/heartbeat', methods=['GET'])
def Page_Index():
log.d(sys._getframe().f_code.co_name)
try:
return "heartbeat"
except:
log.e(traceback.format_exc())
@wxapi.route('/hwinfo.json', methods=['POST'])
def Page_HWinfoJson():
log.d(sys._getframe().f_code.co_name)
# log.d( 'request url: {}', request.url )
log.d("request form: {}", request.form.items().__str__())
try:
json_param = basetype_to_str(request.json)
msgtype = json_param['msgtype']
func = getattr(HardwareInfo, msgtype, None)
if func:
return func(json_param)
else:
log.e('unknown msgtype: {}', msgtype)
return jsonify({"code": '系统错误'})
except:
log.e(traceback.format_exc())
return jsonify({"code": '系统错误'})
class HardwareInfo(object):
    # Save the client's hardware information
@staticmethod
def login_success(json_param):
log.d(sys._getframe().f_code.co_name)
log.d("json_param = {}", json_param)
userinfo = json_param['userinfo']
wxid = userinfo['wxid']
alias = userinfo['alias']
nickname = userinfo['nickname']
qq = userinfo['qq']
email = userinfo['email']
appversion = userinfo['appversion']
#
hwinfo = json_param['hwinfo']
log.d("hwinfo = {}", hwinfo)
##
IMEI = hwinfo['IMEI']
android_id = hwinfo['android_id']
Line1Number = hwinfo['Line1Number']
SimSerialNumber = hwinfo['SimSerialNumber']
IMSI = hwinfo['IMSI']
SimCountryIso = hwinfo['SimCountryIso']
SimOperator = hwinfo['SimOperator']
SimOperatorName = hwinfo['SimOperatorName']
NetworkCountryIso = hwinfo['NetworkCountryIso']
NetworkOperator = hwinfo['NetworkOperator']
NetworkOperatorName = hwinfo['NetworkOperatorName']
NetworkType = hwinfo['NetworkType']
PhoneType = hwinfo['PhoneType']
SimState = hwinfo['SimState']
MacAddress = hwinfo['MacAddress']
SSID = hwinfo['SSID']
BSSID = hwinfo['BSSID']
RELEASE = hwinfo['RELEASE']
SDK = hwinfo['SDK']
CPU_ABI = hwinfo['CPU_ABI']
CPU_ABI2 = hwinfo['CPU_ABI2']
widthPixels = hwinfo['widthPixels']
heightPixels = hwinfo['heightPixels']
RadioVersion = hwinfo['RadioVersion']
BRAND = hwinfo['BRAND']
MODEL = hwinfo['MODEL']
PRODUCT = hwinfo['PRODUCT']
MANUFACTURER = hwinfo['MANUFACTURER']
cpuinfo = hwinfo['cpuinfo']
HARDWARE = hwinfo['HARDWARE']
FINGERPRINT = hwinfo['FINGERPRINT']
DISPLAY = hwinfo['DISPLAY']
INCREMENTAL = hwinfo['INCREMENTAL']
SERIAL = hwinfo['SERIAL']
        # Compute the MD5 checksum
key = '{wxid}{heightPixels}{widthPixels}{appversion}{RELEASE}{MODEL}{BRAND}{android_id}{MANUFACTURER}{PRODUCT}{FINGERPRINT}{cpuinfo}'.format(
wxid=wxid, heightPixels=heightPixels, widthPixels=widthPixels, appversion=appversion, RELEASE=RELEASE, MODEL=MODEL, BRAND=BRAND, android_id=android_id,
MANUFACTURER=MANUFACTURER, PRODUCT=PRODUCT, FINGERPRINT=FINGERPRINT, cpuinfo=cpuinfo
)
cksum = calmd5(key)
try:
with MysqlPool(WEBUTIL.mysql_config) as p:
ret = p.execute( """INSERT INTO hardware( wxid, alias, nickname, qq, email, appversion, cksum, IMEI, android_id, Line1Number, SimSerialNumber, IMSI, SimCountryIso, SimOperator, SimOperatorName, NetworkCountryIso, NetworkOperator, NetworkOperatorName, NetworkType, PhoneType, SimState, MacAddress, SSID, BSSID, `RELEASE`, SDK, CPU_ABI, CPU_ABI2, widthPixels, heightPixels, RadioVersion, BRAND, MODEL, PRODUCT, MANUFACTURER, cpuinfo, HARDWARE, FINGERPRINT, DISPLAY, INCREMENTAL, SERIAL
) VALUES (
%s, %s, %s, %s, %s, %s, %s, %s, %s, %s,
%s, %s, %s, %s, %s, %s, %s, %s, %s, %s,
%s, %s, %s, %s, %s, %s, %s, %s, %s, %s,
%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s
)""",
(wxid, alias, nickname, qq, email, appversion, cksum, IMEI, android_id, Line1Number, SimSerialNumber, IMSI, SimCountryIso, SimOperator, SimOperatorName, NetworkCountryIso, NetworkOperator, NetworkOperatorName, NetworkType, PhoneType, SimState, MacAddress, SSID, BSSID, RELEASE, SDK, CPU_ABI, CPU_ABI2, widthPixels, heightPixels, RadioVersion, BRAND, MODEL, PRODUCT, MANUFACTURER, cpuinfo, HARDWARE, FINGERPRINT, DISPLAY, INCREMENTAL, SERIAL)
)
p.commit()
return jsonify({"code": ''})
except IntegrityError:
log.w('hardware duplicated')
return jsonify({"code": ''})
except Exception as e:
raise e
# "新设备登陆, 需要验证", 返回旧硬件信息
@staticmethod
def login_new_device(json_param):
log.d(sys._getframe().f_code.co_name)
userinfo = json_param['userinfo']
user = userinfo['user']
try:
with MysqlPool(WEBUTIL.mysql_config) as p:
rows = p.select( 'SELECT IMEI, android_id, Line1Number, SimSerialNumber, IMSI, SimCountryIso, SimOperator, SimOperatorName, NetworkCountryIso, NetworkOperator, NetworkOperatorName, NetworkType, PhoneType, SimState, MacAddress, SSID, BSSID, `RELEASE`, SDK, CPU_ABI, CPU_ABI2, widthPixels, heightPixels, RadioVersion, BRAND, MODEL, PRODUCT, MANUFACTURER, cpuinfo, HARDWARE, FINGERPRINT, DISPLAY, INCREMENTAL, SERIAL\
FROM hardware WHERE wxid=%s OR alias=%s OR qq=%s OR email=%s ORDER BY update_time',
(user, user, user, user)
)
if not rows:
log.w('no record, user: {}', user)
return jsonify( {} )
                row = rows[0]
                return jsonify( {"action": 'hook_hardware', "hwinfo": row} )
except Exception as e:
raise e
@wxapi.route('/auth.json', methods=['POST'])
def Page_AuthenticJson():
log.d(sys._getframe().f_code.co_name)
# log.d( 'request url: {}', request.url )
log.d("request form: {}", request.form.items().__str__())
try:
json_param = basetype_to_str(request.json)
msgtype = json_param['msgtype']
func = getattr(Authentic, msgtype, None)
if func:
return func(json_param)
else:
log.e('unknown msgtype: {}', msgtype)
return jsonify({"code": '系统错误'})
except:
log.e(traceback.format_exc())
return jsonify( {"code": '系统错误'} )
class Authentic(object):
    # Simple verification from the UI
@staticmethod
def authentic_simple(json_param):
log.d(sys._getframe().f_code.co_name)
token = request.headers.get('token', None)
device_id = request.headers.get('device-id', None)
build_variant = request.headers.get('build-variant', None)
real_ip = request.headers.get('X-Real-Ip', None)
log.d( "device-id: {}, ip: {}, build-variant: {}, token: {}", device_id, real_ip, build_variant, token)
log.d( "json_param = {}", json_param )
md5_token = json_param['md5_token']
with MysqlPool(WEBUTIL.mysql_config) as p:
rows = p.select( 'SELECT token FROM tb_toolkit_token WHERE md5_token=%s and is_enable=1',
(md5_token,)
)
log.d( "rows={}", rows )
if not rows:
                # Verification failed
log.w('auth_simple fail, device-id: {}, ip: {}, build-variant: {}', device_id, real_ip, build_variant)
return jsonify({"code": '验证失败'})
            if len(rows) > 1:
                log.e('md5 token duplicate!')
row = rows[0]
            # User token table: md5_token = MD5(token+TOKEN_SECRET); SELECT token FROM tb_toolkit_token WHERE md5_token=?
            # Computing MD5_TOKEN, method 1 (recommended): update tb_toolkit_token set md5_token=MD5(concat(token, TOKEN_SECRET));
            # Computing MD5_TOKEN, method 2: ./mycksum.py test_calmd5 "13857e53aa9cbf9e0e9fe38b01" + $TOKEN_SECRET
user_token = row['token']
res_md5_token = calmd5(user_token + TOKEN_SECRET)
log.i("authentic simple success, res_md5_token: {}", res_md5_token)
return jsonify({"code": '', "md5_token": res_md5_token})
    # Plugin verification, "I" package
@staticmethod
def authentic_i(json_param):
log.d(sys._getframe().f_code.co_name)
token = request.headers.get('token', None)
device_id = request.headers.get('device-id', None)
build_variant = request.headers.get('build-variant', None)
real_ip = request.headers.get('X-Real-Ip', None)
log.d( "device-id: {}, ip: {}, build-variant: {}, token: {}", device_id, real_ip, build_variant, token)
log.d( "json_param = {}", json_param )
key = json_param['key']
encrypt_last_token = json_param['last_token']
last_token = decrypt_aes(key, encrypt_last_token, iv=IV, usebase64=True)
        # "I" package: no need to check that key differs from the table; the decrypted last_token must equal user_token in the table
with MysqlPool(WEBUTIL.mysql_config) as p:
rows = p.select( 'SELECT token FROM tb_toolkit_token WHERE token=%s and is_enable=1',
(last_token,)
)
if not rows:
                # Verification failed
log.w('auth_i fail, device-id: {}, ip: {}, build-variant: {}', device_id, real_ip, build_variant)
return jsonify({"code": '验证失败'})
if len(rows) > 1:
log.e('user token duplicate!')
row = rows[0]
            user_token = row['token'].encode('utf8')  # note: must convert to utf8 (Unicode by default) to stay consistent with the Java side!
            # {
            #     token = Base64( AES(data=user_token, key=the client's random string, initVector=IV) ),
            #     last_token = Base64( AES(data=new_last_token, key=the client's random string, initVector=IV) )
            # }
# 1.
res_token = encrypt_aes(key, user_token, iv=IV)
log.d( 'user_token: {}, len: {}, type: {}', user_token, len(user_token), type(user_token) )
log.d( 'key: {}', key )
log.d( 'res_token: {}, len: {}, type: {}', repr(res_token), len(res_token), type(res_token) )
log.d( 'raw res_token: {}', binascii.b2a_hex(res_token) )
res_token = base64.urlsafe_b64encode( res_token )
log.d( 'base64 res_token: {}, len: {}', res_token, len(res_token) )
# 2.
new_key = MyRandom.randomStr(10)
new_last_token = calmd5( str(int(time.time())) + MyRandom.randomStr(7) )
res_last_token = encrypt_aes(new_key, new_last_token, iv=IV)
# log.d( 'new_last_token: {}, len: {}, type: {}', new_last_token, len(new_last_token), type(new_last_token) )
# log.d( 'key: {}', key )
# log.d( 'raw res_last_token: {}, len: {}, type: {}', repr(res_last_token), len(res_last_token), type(res_last_token) )
# log.d( 'raw res_token: {}', binascii.b2a_hex(res_last_token) )
res_last_token = base64.urlsafe_b64encode( res_last_token )
log.d('base64 res_last_token: {}, len: {}', res_last_token, len(res_last_token))
log.d('auth_i success, update to new token: {}', new_last_token)
            # Persist the new last_token and return
ret = p.insert( 'UPDATE tb_toolkit_token SET last_token=%s WHERE token=%s',
(new_last_token, user_token)
)
return jsonify({"code": '', "key": new_key, "token": res_token, "last_token": res_last_token})
    # plugin authentication, U package
@staticmethod
def authentic_u(json_param):
log.d(sys._getframe().f_code.co_name)
token = request.headers.get('token', None)
device_id = request.headers.get('device-id', None)
build_variant = request.headers.get('build-variant', None)
real_ip = request.headers.get('X-Real-Ip', None)
log.d( "device-id: {}, ip: {}, build-variant: {}, token: {}", device_id, real_ip, build_variant, token)
log.d( "json_param = {}", json_param )
key = json_param['key']
encrypt_last_token = json_param['last_token']
last_token = decrypt_aes(key, encrypt_last_token, iv=IV, usebase64=True)
log.d('upload last token: {}, device-id: {}, ip: {}, build-variant: {}', last_token, device_id, real_ip, build_variant)
        # U package: no need to check that key differs from the one in the table; the decrypted last_token must match the stored last_token
with MysqlPool(WEBUTIL.mysql_config) as p:
rows = p.select( 'SELECT token FROM tb_toolkit_token WHERE last_token=%s and is_enable=1',
(last_token,)
)
if not rows:
                # authentication failed
log.e('auth_u fail!')
return jsonify({"code": '验证失败'})
if len(rows) > 1:
log.e('user token duplicate!')
row = rows[0]
updated_timestamp = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S') # "2017-10-11 00:00:00"
ret = p.insert( 'UPDATE tb_toolkit_token SET update_time=%s WHERE last_token=%s',
(updated_timestamp, last_token)
)
            user_token = row['token'].encode('utf8')  # note: must be encoded as utf8 (it is Unicode by default) to stay consistent with the Java side!
            # {
            #   token = Base64( AES(data=user_token, key=the client's current random string, initVector=IV) ),
            #   last_token = Base64( AES(data=last_token, key=the client's current random string, initVector=IV) )
            # }
res_token = encrypt_aes(key, user_token, iv=IV, usebase64=True)
# new_last_token = calmd5( str(int(time.time())) + MyRandom.randomStr(7) )
new_key = MyRandom.randomStr(10)
res_last_token = encrypt_aes(new_key, last_token, iv=IV, usebase64=True)
log.d( 'res_last_token: {}, len: {}', res_last_token, len(res_last_token) )
log.d( 'auth_u success' )
return jsonify({"code": '', "key": new_key, "token": res_token, "last_token": res_last_token})
# http://139.199.171.40/wxapi/token.json?source=lynatgz
@wxapi.route('/token.json', methods=['GET'])
def create_token():
log.d(sys._getframe().f_code.co_name)
try:
source = request.args.get('source', None)
if source not in ['qqplugin', 'lynatgz']:
return jsonify({"code": '不支持此source值'})
device_id = request.headers.get('device-id', None)
build_variant = request.headers.get('build-variant', None)
real_ip = request.headers.get('X-Real-Ip', None)
log.d( "device-id: {}, ip: {}, build-variant: {}", device_id, real_ip, build_variant)
token = hashlib.md5(str(int(time.time())) + MyRandom.randomStr(7)).hexdigest()
md5_token = calmd5(token + TOKEN_SECRET)
expires_at = datetime.datetime.now() + datetime.timedelta(days=30)
openid = 'production'
with MysqlPool(WEBUTIL.mysql_config) as p:
ret = p.insert( 'INSERT INTO tb_toolkit_token(openid, token, md5_token, expires_at, source) VALUES (%s, %s, %s, %s, %s)',
(openid, token, md5_token, expires_at, source)
)
if not ret:
                # record unchanged (insert failed), handle with care!
log.e( 'not insert tb_toolkit_token record, openid: {}, ret: {}', openid, ret )
return jsonify({"code": '系统错误'})
log.i( 'insert tb_toolkit_token success, openid: {}, token: {}, md5_token: {}, ret: {}', openid, token, md5_token, ret )
            # return the encrypted token
new_key = MyRandom.randomStr(10)
res_token = encrypt_aes(new_key, token, iv='xiaobaizhushou', usebase64=True)
log.d( 'res_token: {}, len: {}', res_token, len(res_token) )
return jsonify({"code": '', "key": new_key, "token": res_token, "md5_token": md5_token})
except:
log.e(traceback.format_exc())
return jsonify({"code": '系统错误'})
| 46.808333 | 499 | 0.600142 | 1,980 | 16,851 | 4.927273 | 0.147475 | 0.045203 | 0.012915 | 0.0164 | 0.592661 | 0.545818 | 0.52132 | 0.49221 | 0.475707 | 0.449877 | 0 | 0.01198 | 0.261943 | 16,851 | 359 | 500 | 46.938719 | 0.772453 | 0.08949 | 0 | 0.408935 | 0 | 0.030928 | 0.21968 | 0.009742 | 0 | 0 | 0 | 0 | 0 | 1 | 0.037801 | false | 0.003436 | 0.061856 | 0 | 0.185567 | 0.003436 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a66da1cf042e3419a188397e4e43b785aa423282 | 935 | py | Python | main.py | TheRockerfly/weather_app | 60e9dd8e535c0dfc318c8064bce6b31492943e93 | [
"MIT"
] | null | null | null | main.py | TheRockerfly/weather_app | 60e9dd8e535c0dfc318c8064bce6b31492943e93 | [
"MIT"
] | 1 | 2021-06-02T01:44:04.000Z | 2021-06-02T01:44:04.000Z | main.py | TheRockerfly/weather_app | 60e9dd8e535c0dfc318c8064bce6b31492943e93 | [
"MIT"
] | null | null | null | from pprint import pprint as pp
from flask import Flask, render_template, request
from module.weather import query_api
app = Flask(__name__)
@app.route('/')
def index():
return render_template(
'weather.html',
data=[{'name': 'Toronto'}, {'name': 'Montreal'}, {'name': 'Calgary'},
{'name': 'Ottawa'}, {'name': 'Edmonton'}, {'name': 'Mississauga'},
{'name': 'Winnipeg'}, {'name': 'Vancouver'}, {'name': 'Brampton'},
{'name': 'Quebec'}])
@app.route("/result", methods=['GET', 'POST'])
def result():
data = []
error = None
select = request.form.get('comp_select')
resp = query_api(select)
pp(resp)
if resp:
data.append(resp)
    if not data:
        error = 'Bad Response from Weather API'
return render_template(
'result.html',
data=data,
error=error)
if __name__ == '__main__':
app.run(debug=True)
| 25.27027 | 80 | 0.571123 | 106 | 935 | 4.867925 | 0.481132 | 0.081395 | 0.077519 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001422 | 0.248128 | 935 | 36 | 81 | 25.972222 | 0.732575 | 0 | 0 | 0.068966 | 0 | 0 | 0.218182 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068966 | false | 0 | 0.103448 | 0.034483 | 0.241379 | 0.034483 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a67191887ff2e4cbe5a722f8867e0bdf2eaf5490 | 1,256 | py | Python | audio/tests/backends/base.py | jerryuhoo/PaddleSpeech | 1eec7b5e042da294c7524af92f0fae4c32a71aa3 | [
"Apache-2.0"
] | 1,379 | 2021-11-10T02:42:21.000Z | 2022-03-31T13:34:25.000Z | audio/tests/backends/base.py | jerryuhoo/PaddleSpeech | 1eec7b5e042da294c7524af92f0fae4c32a71aa3 | [
"Apache-2.0"
] | 268 | 2021-11-10T14:07:34.000Z | 2022-03-31T02:25:20.000Z | audio/tests/backends/base.py | jerryuhoo/PaddleSpeech | 1eec7b5e042da294c7524af92f0fae4c32a71aa3 | [
"Apache-2.0"
] | 296 | 2021-11-15T02:37:11.000Z | 2022-03-31T12:14:46.000Z | # Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import unittest
import urllib.request
mono_channel_wav = 'https://paddlespeech.bj.bcebos.com/PaddleAudio/zh.wav'
multi_channels_wav = 'https://paddlespeech.bj.bcebos.com/PaddleAudio/cat.wav'
class BackendTest(unittest.TestCase):
def setUp(self):
self.initWavInput()
def initWavInput(self):
self.files = []
for url in [mono_channel_wav, multi_channels_wav]:
if not os.path.isfile(os.path.basename(url)):
urllib.request.urlretrieve(url, os.path.basename(url))
self.files.append(os.path.basename(url))
def initParmas(self):
raise NotImplementedError
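# Illustrative concrete subclass (hypothetical, not part of this module),
# showing how a backend test would fill in initParmas:
#
#     class SoundfileBackendTest(BackendTest):
#         def initParmas(self):
#             self.num_channels = 1
#             self.sample_rate = 16000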
| 35.885714 | 77 | 0.725318 | 175 | 1,256 | 5.16 | 0.588571 | 0.066445 | 0.046512 | 0.056478 | 0.093023 | 0.093023 | 0.093023 | 0 | 0 | 0 | 0 | 0.00779 | 0.182325 | 1,256 | 34 | 78 | 36.941176 | 0.87147 | 0.464172 | 0 | 0 | 0 | 0 | 0.162367 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1875 | false | 0 | 0.1875 | 0 | 0.4375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a671fe0e76961bcf5c4e0d73d1fd01eb4c998058 | 6,091 | py | Python | 07keras/09fasttext_multi_classification.py | KEVINYZY/python-tutorial | ae43536908eb8af56c34865f52a6e8644edc4fa3 | [
"Apache-2.0"
] | 2 | 2021-01-04T10:44:44.000Z | 2022-02-13T07:53:41.000Z | 07keras/09fasttext_multi_classification.py | zm79287/python-tutorial | d0f7348e1da4ff954e3add66e1aae55d599283ee | [
"Apache-2.0"
] | null | null | null | 07keras/09fasttext_multi_classification.py | zm79287/python-tutorial | d0f7348e1da4ff954e3add66e1aae55d599283ee | [
"Apache-2.0"
] | 2 | 2020-11-23T08:58:51.000Z | 2022-02-13T07:53:42.000Z | # -*- coding: utf-8 -*-
# Author: XuMing <xuming624@qq.com>
# Brief: This example demonstrates the use of fasttext for text classification
# Bi-gram : 0.9056 test accuracy after 5 epochs.
import os
import keras
import numpy as np
from keras.layers import Dense
from keras.layers import Embedding
from keras.layers import GlobalAveragePooling1D
from keras.models import Sequential
from keras.preprocessing import sequence
def get_corpus(data_dir):
"""
Get the corpus data with retrieve
:param data_dir:
:return:
"""
words = []
labels = []
for file_name in os.listdir(data_dir):
with open(os.path.join(data_dir, file_name), mode='r', encoding='utf-8') as f:
for line in f:
# label in first sep
parts = line.rstrip().split(',', 1)
if parts and len(parts) > 1:
# keras categorical label start with 0
lbl = int(parts[0]) - 1
sent = parts[1]
sent_split = sent.split()
words.append(sent_split)
labels.append(lbl)
return words, labels
def vectorize_words(words, word_idx):
inputs = []
for word in words:
inputs.append([word_idx[w] for w in word])
return inputs
def create_ngram_set(input_list, ngram_value=2):
"""
Create a set of n-grams
:param input_list: [1, 2, 3, 4, 9]
:param ngram_value: 2
:return: {(1, 2),(2, 3),(3, 4),(4, 9)}
"""
return set(zip(*[input_list[i:] for i in range(ngram_value)]))
def add_ngram(sequences, token_indice, ngram_range=2):
"""
Augment the input list by appending n-grams values
:param sequences:
:param token_indice:
:param ngram_range:
:return:
Example: adding bi-gram
>>> sequences = [[1, 3, 4, 5], [1, 3, 7, 9, 2]]
>>> token_indice = {(1, 3): 1337, (9, 2): 42, (4, 5): 2017}
>>> add_ngram(sequences, token_indice, ngram_range=2)
[[1, 3, 4, 5, 1337, 2017], [1, 3, 7, 9, 2, 1337, 42]]
"""
new_seq = []
    for seq in sequences:
        new_list = seq[:]
for i in range(len(new_list) - ngram_range + 1):
for ngram_value in range(2, ngram_range + 1):
ngram = tuple(new_list[i:i + ngram_value])
if ngram in token_indice:
new_list.append(token_indice[ngram])
new_seq.append(new_list)
return new_seq
ngram_range = 2
num_classes = 3
max_features = 20000
max_len = 400
batch_size = 32
embedding_dims = 50
epochs = 10
SAVE_MODEL_PATH = 'fasttext_multi_classification_model.h5'
pwd_path = os.path.abspath(os.path.dirname(__file__))
print('pwd_path:', pwd_path)
train_data_dir = os.path.join(pwd_path, '../data/sogou_classifier_data/train')
test_data_dir = os.path.join(pwd_path, '../data/sogou_classifier_data/test')
print('data_dir path:', train_data_dir)
print('loading data...')
x_train, y_train = get_corpus(train_data_dir)
x_test, y_test = get_corpus(test_data_dir)
y_train = keras.utils.to_categorical(y_train, num_classes=num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes=num_classes)
sent_maxlen = max(map(len, (x for x in x_train + x_test)))
print('-')
print('Sentence max length:', sent_maxlen, 'words')
print('Number of training data:', len(x_train))
print('Number of test data:', len(x_test))
print('-')
print('Here\'s what a "sentence" tuple looks like (label, sentence):')
print(y_train[0], x_train[0])
print('-')
print('Vectorizing the word sequences...')
print('Average train sequence length: {}'.format(np.mean(list(map(len, x_train)), dtype=int)))
print('Average test sequence length: {}'.format(np.mean(list(map(len, x_test)), dtype=int)))
vocab = set()
for w in x_train + x_test:
vocab |= set(w)
vocab = sorted(vocab)
vocab_size = len(vocab) + 1
print('Vocab size:', vocab_size, 'unique words')
word_idx = dict((c, i + 1) for i, c in enumerate(vocab))
ids_2_word = dict((value, key) for key, value in word_idx.items())
x_train = vectorize_words(x_train, word_idx)
x_test = vectorize_words(x_test, word_idx)
if ngram_range > 1:
print('Adding {}-gram features'.format(ngram_range))
# n-gram set from train data
ngram_set = set()
for input_list in x_train:
for i in range(2, ngram_range + 1):
ng_set = create_ngram_set(input_list, ngram_value=i)
ngram_set.update(ng_set)
# add to n-gram
start_index = max_features + 1
token_indice = {v: k + start_index for k, v in enumerate(ngram_set)}
indice_token = {token_indice[k]: k for k in token_indice}
max_features = np.max(list(indice_token.keys())) + 1
# augment x_train and x_test with n-grams features
x_train = add_ngram(x_train, token_indice, ngram_range)
x_test = add_ngram(x_test, token_indice, ngram_range)
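    # Illustrative example of the augmentation above (hypothetical indices): if
    # the bi-gram (4, 9) was assigned index max_features + 1, a sentence
    # [1, 4, 9] becomes [1, 4, 9, max_features + 1] after add_ngram().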
train_mean_len = np.mean(list(map(len, x_train)), dtype=int)
test_mean_len = np.mean(list(map(len, x_test)), dtype=int)
print('Average train sequence length: {}'.format(train_mean_len))
print('Average test sequence length: {}'.format(test_mean_len))
print('pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=max_len)
x_test = sequence.pad_sequences(x_test, maxlen=max_len)
print('build model...')
model = Sequential()
# embed layer by maps vocab index into emb dimensions
model.add(Embedding(max_features, embedding_dims, input_length=max_len))
# pooling the embedding
model.add(GlobalAveragePooling1D())
# output multi classification of num_classes
model.add(Dense(num_classes, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_test, y_test))
model.save(SAVE_MODEL_PATH)
print('save model:', SAVE_MODEL_PATH)
probs = model.predict(x_test, batch_size=batch_size)
assert len(probs) == len(y_test)
for label, prob in zip(y_test, probs):
print('label_test_index:%s\tprob_index:%s\tprob:%s' % (label.argmax(), prob.argmax(), prob.max()))
| 34.805714 | 102 | 0.67099 | 937 | 6,091 | 4.149413 | 0.219851 | 0.024691 | 0.020576 | 0.021605 | 0.164095 | 0.14249 | 0.105453 | 0.088477 | 0.064815 | 0.024177 | 0 | 0.023337 | 0.197997 | 6,091 | 174 | 103 | 35.005747 | 0.772569 | 0.161221 | 0 | 0.025862 | 0 | 0 | 0.117494 | 0.034828 | 0 | 0 | 0 | 0 | 0.008621 | 1 | 0.034483 | false | 0 | 0.068966 | 0 | 0.137931 | 0.189655 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a675137fe059ab035f6c8c74928e31aa79eec616 | 961 | py | Python | scrapy_store_project/goods/tasks.py | MaksNech/pylab2018_ht_22 | 5c862b23203e93bf4cbdf5f1e5777f29052f2f69 | [
"MIT"
] | null | null | null | scrapy_store_project/goods/tasks.py | MaksNech/pylab2018_ht_22 | 5c862b23203e93bf4cbdf5f1e5777f29052f2f69 | [
"MIT"
] | 10 | 2020-02-11T23:54:49.000Z | 2022-03-11T23:42:36.000Z | scrapy_store_project/goods/tasks.py | MaksNech/pylab2018_ht_22 | 5c862b23203e93bf4cbdf5f1e5777f29052f2f69 | [
"MIT"
] | 1 | 2020-12-02T09:32:19.000Z | 2020-12-02T09:32:19.000Z | import requests
import tempfile
from celery.task import task
from django.core import files
from celery.utils.log import get_task_logger
from .models import Bag
logger = get_task_logger(__name__)
@task(
name="save_goods_to_db"
)
def save_goods_to_db(items_list):
for item in items_list:
bag = Bag(
title=item['title'],
brand=item['brand'],
image=item['image'],
price=item['price'],
size=item['size'],
description=item['description']
)
request = requests.get(item['image'], stream=True)
if request.status_code != requests.codes.ok:
continue
file_name = item['image'].split('/')[-1]
lf = tempfile.NamedTemporaryFile()
for block in request.iter_content(1024 * 8):
if not block:
break
lf.write(block)
bag.image.save(file_name, files.File(lf))
bag.save()
| 24.025 | 58 | 0.591051 | 118 | 961 | 4.644068 | 0.457627 | 0.04927 | 0.047445 | 0.047445 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00885 | 0.294485 | 961 | 39 | 59 | 24.641026 | 0.79941 | 0 | 0 | 0 | 0 | 0 | 0.064516 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.032258 | false | 0 | 0.193548 | 0 | 0.225806 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6762cb2023d77e741f45f1530edf25cc49e8843 | 2,835 | py | Python | bokeh-app/main.py | jdkent/hrfViz | ab9cf3587cd8387388550f3bef3086ff866a7559 | [
"BSD-3-Clause"
] | null | null | null | bokeh-app/main.py | jdkent/hrfViz | ab9cf3587cd8387388550f3bef3086ff866a7559 | [
"BSD-3-Clause"
] | null | null | null | bokeh-app/main.py | jdkent/hrfViz | ab9cf3587cd8387388550f3bef3086ff866a7559 | [
"BSD-3-Clause"
] | null | null | null | ''' Present an interactive function explorer with slider widgets.
Scrub the sliders to change the properties of the ``hrf`` curve, or
type into the title text box to update the title of the plot.
Use the ``bokeh serve`` command to run the example by executing:
bokeh serve sliders.py
at your command prompt. Then navigate to the URL
http://localhost:5006/sliders
in your browser.
'''
import numpy as np
from bokeh.io import curdoc
from bokeh.layouts import row, column
from bokeh.models import ColumnDataSource
from bokeh.models.widgets import Slider, TextInput
from bokeh.plotting import figure
from nistats import hemodynamic_models
# Set up data
model = hemodynamic_models._gamma_difference_hrf(tr=2)
x = np.arange(0, len(model))
source = ColumnDataSource(data=dict(x=x, y=model))
# Set up plot
thr = 0.01
plot = figure(plot_height=400, plot_width=400, title="my hrf wave",
tools="crosshair,pan,reset,save,wheel_zoom",
x_range=[0, np.max(x)], y_range=[np.min(model)-thr, np.max(model)+thr])
plot.line('x', 'y', source=source, line_width=3, line_alpha=0.6)
# Set up widgets
text = TextInput(title="title", value='my hrf')
delay = Slider(title="delay", value=6.0, start=0, end=10, step=0.1)
time_length = Slider(title="time_length", value=32.0, start=16, end=48, step=0.1)
onset = Slider(title="onset", value=0.0, start=0.0, end=10, step=0.1)
undershoot = Slider(title="undershoot", value=16.0, start=4, end=32, step=0.1)
dispersion = Slider(title="dispersion", value=1.0, start=0.1, end=5.0, step=0.1)
u_dispersion = Slider(title="u_dispersion", value=1.0, start=0.1, end=5.0, step=0.1)
ratio = Slider(title="ratio", value=0.167, start=0.01, end=2.0, step=0.1)
scale = Slider(title="amplitude", value=1, start=0, end=5, step=0.1)
# Set up callbacks
def update_title(attrname, old, new):
plot.title.text = text.value
text.on_change('value', update_title)
def update_data(attrname, old, new):
# Get the current slider values
dy = delay.value
tl = time_length.value
on = onset.value
un = undershoot.value
di = dispersion.value
ud = u_dispersion.value
ra = ratio.value
# Generate the new curve
model = hemodynamic_models._gamma_difference_hrf(
tr=2, time_length=tl, onset=on, delay=dy, undershoot=un,
dispersion=di, u_dispersion=ud, ratio=ra
) * scale.value
x = np.arange(0, len(model))
source.data = dict(x=x, y=model)
for w in [delay, time_length, onset, undershoot, dispersion, u_dispersion, ratio, scale]:
w.on_change('value', update_data)
# Set up layouts and add to document
inputs = column(text, delay, time_length, onset,
                undershoot, dispersion, u_dispersion, ratio,
                scale)
curdoc().add_root(row(inputs, plot, width=800))
curdoc().title = "My HRF"
| 33.75 | 96 | 0.699824 | 462 | 2,835 | 4.218615 | 0.311688 | 0.010262 | 0.024628 | 0.010775 | 0.201129 | 0.201129 | 0.172396 | 0.147768 | 0.103643 | 0.103643 | 0 | 0.037288 | 0.167549 | 2,835 | 83 | 97 | 34.156627 | 0.788559 | 0.186243 | 0 | 0.041667 | 0 | 0 | 0.061928 | 0.015264 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041667 | false | 0 | 0.145833 | 0 | 0.1875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a67778c67ef1eefa67194e04daebceb9183fd006 | 682 | py | Python | python-leetcode/laozhang/tree/leetcode_814_.py | sweeneycai/cs-summary-reflection | c4220b153baa6b1b93a11c7e5637d42e3429481f | [
"Apache-2.0"
] | 227 | 2019-04-09T00:36:00.000Z | 2022-03-29T05:05:03.000Z | python-leetcode/laozhang/tree/leetcode_814_.py | sweeneycai/cs-summary-reflection | c4220b153baa6b1b93a11c7e5637d42e3429481f | [
"Apache-2.0"
] | 139 | 2019-06-14T01:53:11.000Z | 2022-02-16T11:08:40.000Z | python-leetcode/laozhang/tree/leetcode_814_.py | sweeneycai/cs-summary-reflection | c4220b153baa6b1b93a11c7e5637d42e3429481f | [
"Apache-2.0"
] | 89 | 2019-04-10T07:00:54.000Z | 2022-03-23T01:36:03.000Z | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
# coding=utf-8
"""
814. Binary Tree Pruning
"""
from laozhang import TreeNode
class Solution:
    def pruneTree(self, root: TreeNode) -> TreeNode:
        def helper(node: TreeNode) -> TreeNode:
            if not node:
                return None
            # Prune the children first (post-order), then decide whether this
            # node itself has become a 0-valued leaf that should be dropped.
            node.left = helper(node.left)
            node.right = helper(node.right)
            if node.val == 0 and not node.left and not node.right:
                return None
            return node
        return helper(root)
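# Example from the problem statement (illustrative): the tree [1, null, 0, 0, 1]
# is pruned to [1, null, 0, null, 1]; the 0-valued leaf under the 0 node is
# removed, and an all-zero root subtree would be pruned to None as well.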
| 28.416667 | 107 | 0.530792 | 85 | 682 | 4.258824 | 0.341176 | 0.132597 | 0.110497 | 0.055249 | 0.077348 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018433 | 0.363636 | 682 | 23 | 108 | 29.652174 | 0.815668 | 0.09824 | 0 | 0.142857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.071429 | 0 | 0.357143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6777a5db732418a0a72fdae43c10c8041048b60 | 4,816 | py | Python | .pyinstaller/run_astropy_tests.py | lmichel/astropy | 67f944f6145ae4899e7bf6e335ffcb24c9493ac3 | [
"BSD-3-Clause"
] | null | null | null | .pyinstaller/run_astropy_tests.py | lmichel/astropy | 67f944f6145ae4899e7bf6e335ffcb24c9493ac3 | [
"BSD-3-Clause"
] | null | null | null | .pyinstaller/run_astropy_tests.py | lmichel/astropy | 67f944f6145ae4899e7bf6e335ffcb24c9493ac3 | [
"BSD-3-Clause"
] | null | null | null | import os
import shutil
import sys
import erfa # noqa
import pytest
import astropy # noqa
if len(sys.argv) == 3 and sys.argv[1] == '--astropy-root':
ROOT = sys.argv[2]
else:
# Make sure we don't allow any arguments to be passed - some tests call
# sys.executable which becomes this script when producing a pyinstaller
# bundle, but we should just error in this case since this is not the
# regular Python interpreter.
if len(sys.argv) > 1:
print("Extra arguments passed, exiting early")
sys.exit(1)
for root, dirnames, files in os.walk(os.path.join(ROOT, 'astropy')):
# NOTE: we can't simply use
# test_root = root.replace('astropy', 'astropy_tests')
# as we only want to change the one which is for the module, so instead
# we search for the last occurrence and replace that.
pos = root.rfind('astropy')
test_root = root[:pos] + 'astropy_tests' + root[pos + 7:]
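    # For example (illustrative path): '/ci/astropy/astropy/io' becomes
    # '/ci/astropy/astropy_tests/io', leaving any earlier 'astropy' in the
    # path untouched.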
# Copy over the astropy 'tests' directories and their contents
for dirname in dirnames:
final_dir = os.path.relpath(os.path.join(test_root, dirname), ROOT)
# We only copy over 'tests' directories, but not astropy/tests (only
# astropy/tests/tests) since that is not just a directory with tests.
if dirname == 'tests' and not root.endswith('astropy'):
shutil.copytree(os.path.join(root, dirname), final_dir, dirs_exist_ok=True)
else:
# Create empty __init__.py files so that 'astropy_tests' still
# behaves like a single package, otherwise pytest gets confused
# by the different conftest.py files.
init_filename = os.path.join(final_dir, '__init__.py')
if not os.path.exists(os.path.join(final_dir, '__init__.py')):
os.makedirs(final_dir, exist_ok=True)
with open(os.path.join(final_dir, '__init__.py'), 'w') as f:
f.write("#")
# Copy over all conftest.py files
for file in files:
if file == 'conftest.py':
final_file = os.path.relpath(os.path.join(test_root, file), ROOT)
shutil.copy2(os.path.join(root, file), final_file)
# Add the top-level __init__.py file
with open(os.path.join('astropy_tests', '__init__.py'), 'w') as f:
f.write("#")
# Remove test file that tries to import all sub-packages at collection time
os.remove(os.path.join('astropy_tests', 'utils', 'iers', 'tests', 'test_leap_second.py'))
# Remove convolution tests for now as there are issues with the loading of the C extension.
# FIXME: one way to fix this would be to migrate the convolution C extension away from using
# ctypes and using the regular extension mechanism instead.
shutil.rmtree(os.path.join('astropy_tests', 'convolution'))
os.remove(os.path.join('astropy_tests', 'modeling', 'tests', 'test_convolution.py'))
os.remove(os.path.join('astropy_tests', 'modeling', 'tests', 'test_core.py'))
os.remove(os.path.join('astropy_tests', 'visualization', 'tests', 'test_lupton_rgb.py'))
# FIXME: PIL minversion check does not work
os.remove(os.path.join('astropy_tests', 'visualization', 'wcsaxes', 'tests', 'test_misc.py'))
os.remove(os.path.join('astropy_tests', 'visualization', 'wcsaxes', 'tests', 'test_wcsapi.py'))
# FIXME: The following tests rely on the fully qualified name of classes which
# don't seem to be the same.
os.remove(os.path.join('astropy_tests', 'table', 'mixins', 'tests', 'test_registry.py'))
# Copy the top-level conftest.py
shutil.copy2(os.path.join(ROOT, 'astropy', 'conftest.py'),
os.path.join('astropy_tests', 'conftest.py'))
# We skip a few tests, which are generally ones that rely on explicitly
# checking the name of the current module (which ends up starting with
# astropy_tests rather than astropy).
SKIP_TESTS = ['test_exception_logging_origin',
'test_log',
'test_configitem',
'test_config_noastropy_fallback',
'test_no_home',
'test_path',
'test_rename_path',
'test_data_name_third_party_package',
'test_pkg_finder',
'test_wcsapi_extension',
'test_find_current_module_bundle',
'test_minversion',
'test_imports',
'test_generate_config',
'test_generate_config2',
'test_create_config_file',
'test_download_parallel_fills_cache']
# Run the tests!
sys.exit(pytest.main(['astropy_tests',
'-k ' + ' and '.join('not ' + test for test in SKIP_TESTS)],
plugins=['pytest_doctestplus.plugin',
'pytest_openfiles.plugin',
'pytest_remotedata.plugin',
'pytest_astropy_header.display']))
| 44.592593 | 95 | 0.648463 | 662 | 4,816 | 4.55136 | 0.351964 | 0.04381 | 0.06306 | 0.056422 | 0.212745 | 0.176236 | 0.159642 | 0.107202 | 0.085961 | 0.070362 | 0 | 0.002447 | 0.236296 | 4,816 | 107 | 96 | 45.009346 | 0.816748 | 0.321429 | 0 | 0.0625 | 0 | 0 | 0.316342 | 0.100093 | 0 | 0 | 0 | 0.009346 | 0 | 1 | 0 | false | 0.015625 | 0.109375 | 0 | 0.109375 | 0.015625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a680b15fd060d64cf9e6fa6cd3e6835e870e01ee | 5,109 | py | Python | DataManagements/UserCatesDBManagement.py | CHUht/Hangout_Recommendations | 477752da8259e821bb58487cdb9d483a6b208a3f | [
"MIT"
] | 2 | 2020-02-03T22:08:15.000Z | 2021-03-11T18:37:47.000Z | DataManagements/UserCatesDBManagement.py | CHUht/Hangout_Recommendations_Back_End | 477752da8259e821bb58487cdb9d483a6b208a3f | [
"MIT"
] | null | null | null | DataManagements/UserCatesDBManagement.py | CHUht/Hangout_Recommendations_Back_End | 477752da8259e821bb58487cdb9d483a6b208a3f | [
"MIT"
] | null | null | null | import sqlite3
from DataManagements.BackendAPIStaticList import singleton
from DataManagements.BackendAPIStaticList import cate_map
@singleton
class UserCatesManager:
def __init__(self):
pass
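    # Assumed schema for the UserCates table, inferred from the queries below
    # (the table itself is created elsewhere; this is only a sketch):
    #   CREATE TABLE UserCates (user_id INTEGER, cate_type INTEGER);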
def dbconnect(self):
"""
connect to the database
:return: None
"""
self.connection = sqlite3.connect("Database.db", check_same_thread=False)
self.controller = self.connection.cursor()
def dbdeconnect(self):
"""
        disconnect from the database
        :return: None
"""
self.connection.close()
def get_all_cates(self):
"""
this function returns a list of strings, all kinds of categories
:return: list of all categories
"""
to_return = list(cate_map.values())
return to_return
def insert_user_cates(self, user_id:int, cate_type_list:set):
"""
        This function adds category preferences for a user to the UserCates table!
        Only categories the user does not already have are inserted.
        We assume the check for a valid user_id is done at the front end level.
"""
self.dbconnect()
sql_command = """
SELECT cate_type
FROM UserCates
WHERE user_id = '{0}'
""".format(user_id)
self.controller.execute(sql_command)
already_cates = self.controller.fetchall()
for i in range(len(already_cates)):
already_cates[i] = already_cates[i][0]
already_cates = set(already_cates)
to_insert = cate_type_list - already_cates
for cate_type in to_insert:
sql_command = """
INSERT INTO UserCates(user_id, cate_type)
VALUES ( ?, ?);
"""
values = (user_id,cate_type)
self.controller.execute(sql_command, values)
self.connection.commit()
self.dbdeconnect()
def return_user_cates(self, user_id):
"""
        This function returns the category types selected by the given user.
        It queries the UserCates table by user_id and flattens the result rows
        into a plain list of category ids.
"""
self.dbconnect()
sql_command = """
SELECT cate_type
FROM UserCates
WHERE user_id='{0}'
""".format(user_id)
self.controller.execute(sql_command)
result = self.controller.fetchall()
for i in range(len(result)):
result[i] = result[i][0]
self.dbdeconnect()
return result
def return_cate_user(self, cate_type:int):
"""
        This function takes in a category type and returns the ids of all
        users who selected that category.
        The result rows are flattened into a plain list of user ids.
"""
self.dbconnect()
sql_command = """
SELECT user_id
FROM UserCates
WHERE cate_type='{0}'
""".format(cate_type)
self.controller.execute(sql_command)
query_result = self.controller.fetchall()
for i in range(len(query_result)):
query_result[i] = query_result[i][0]
self.dbdeconnect()
return query_result
def check_database(self):
# Returns everything in it
self.dbconnect()
sql_command = """
SELECT *
FROM UserCates
"""
self.controller.execute(sql_command)
# print('checke_database')
# for col in self.controller.fetchall():
# print(col)
result = self.controller.fetchall()
self.dbdeconnect()
return result
def delete_user_table(self):
"""
        Created for debugging
Deletes the data in the user table!
"""
self.dbconnect()
sql_command = """
DELETE FROM UserCates;
"""
self.controller.execute(sql_command)
self.connection.commit()
sql_command = """
VACUUM;
"""
self.controller.execute(sql_command)
self.connection.commit()
self.dbdeconnect()
def drop_table(self):
"""
        Created for debugging
Drops the table!
"""
self.dbconnect()
sql_command = """
DROP TABLE UserCates;
"""
self.connection.execute(sql_command)
self.dbdeconnect()
if __name__ == "__main__":
userCatesManager = UserCatesManager()
# userCatesManager.insert_user_tags(0,[1,4,13])
userCatesManager.insert_user_cates(0,{1,5,12})
userCatesManager.insert_user_cates(1,{1,6,14})
print(userCatesManager.return_user_cates(0))
print(userCatesManager.return_cate_user(1))
print(userCatesManager.check_database())
print(userCatesManager.get_all_cates())
# userCatesManager.delete_user_table()
# UserCatesManager.drop_table()
| 30.963636 | 83 | 0.562145 | 534 | 5,109 | 5.194757 | 0.245318 | 0.057678 | 0.049027 | 0.060562 | 0.359048 | 0.270728 | 0.204398 | 0.155732 | 0.105984 | 0.075703 | 0 | 0.007541 | 0.351145 | 5,109 | 164 | 84 | 31.152439 | 0.829261 | 0.188491 | 0 | 0.475248 | 0 | 0 | 0.210747 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.09901 | false | 0.009901 | 0.029703 | 0 | 0.178218 | 0.039604 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a683a3c1e348a3f60fd5ed371539e1911665f6e6 | 2,255 | py | Python | pupillae/fons/generator/dice_roller.py | chomouri/pupillae | 7c178eee78e5bff224c8982ca0a3674ea8830e55 | [
"MIT"
] | null | null | null | pupillae/fons/generator/dice_roller.py | chomouri/pupillae | 7c178eee78e5bff224c8982ca0a3674ea8830e55 | [
"MIT"
] | null | null | null | pupillae/fons/generator/dice_roller.py | chomouri/pupillae | 7c178eee78e5bff224c8982ca0a3674ea8830e55 | [
"MIT"
] | null | null | null | import random
import re
# Third-party modules:
# Local modules:
# Function WAI:
def roll_3d6():
"""Rolls 3d6"""
total = 0
for i in range(3):
roll = random.randint(1, 6)
# print(f"Rolling 1d6: {roll}")
total += roll
return total
# Function WAI:
def roll_4d6d():
"""Rolls 4d6, drops lowest"""
total = []
for i in range(4):
roll = random.randint(1, 6)
# print(f"Rolling 1d6: {roll}")
total.append(roll)
total.sort()
del total[0]
return sum(total)
def roll_die(quantity, sides):
array_raw = []
for i in range(quantity):
roll = random.randint(1, sides)
array_raw.append(roll)
return array_raw
def process_roll(message):
reply = re.split(r'd', message)
return reply
def parse_roll(message):
error_msg = []
#Split into groups, keeping trigger as index [0]
roll_grps = message.split(" ")
    # Need at least a trigger and one dice expression.
    if len(roll_grps) < 2:
        error_msg.append("No dice to roll")
        return f"Malformed Expression: {error_msg}."
else:
# Check if an argument in the roll is invalid.
inv_grp = False
current_grp = roll_grps[1]
quant_side = re.split(r'd', current_grp, 1)
if len(quant_side) == 2:
quant = quant_side[0]
if quant.isdigit():
quant = int(quant_side[0])
if quant > 100:
error_msg.append("Too many dice")
inv_grp = True
else:
error_msg.append("Number of dice must be numeric")
inv_grp = True
sides = quant_side[1]
if sides.isdigit():
sides = int(quant_side[1])
if sides > 100:
error_msg.append("Too many sides of the dice")
inv_grp = True
else:
error_msg.append("Number of sides must be numeric")
inv_grp = True
else:
error_msg.append("I can only roll one set of dice at the moment")
inv_grp = True
if inv_grp:
return f"Malformed Expression: {error_msg}."
else:
array = roll_die(quant, sides)
return f"{quant}x d{sides} = {array}\n--Total: {sum(array)}."
| 27.5 | 77 | 0.536585 | 294 | 2,255 | 3.996599 | 0.340136 | 0.054468 | 0.071489 | 0.028085 | 0.29617 | 0.238298 | 0.166809 | 0.142979 | 0.142979 | 0.142979 | 0 | 0.024793 | 0.356098 | 2,255 | 81 | 78 | 27.839506 | 0.784435 | 0.122838 | 0 | 0.196721 | 0 | 0.016393 | 0.126595 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.081967 | false | 0 | 0.032787 | 0 | 0.213115 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a684db8114e6f751f36ca0a1f53bcfc8192318de | 3,088 | py | Python | tests/test_swaplink.py | aratz-lasa/py-swaplink | a1821953704215648749ccde65d6c4d9201998af | [
"MIT"
] | 1 | 2019-10-21T08:54:32.000Z | 2019-10-21T08:54:32.000Z | tests/test_swaplink.py | aratz-lasa/py-swaplink | a1821953704215648749ccde65d6c4d9201998af | [
"MIT"
] | 1 | 2021-06-02T00:33:01.000Z | 2021-06-02T00:33:01.000Z | tests/test_swaplink.py | aratz-lasa/py-swaplink | a1821953704215648749ccde65d6c4d9201998af | [
"MIT"
] | null | null | null | import asyncio
import random
from asyncio import Event
from typing import List, Any
import pytest
from swaplink import defaults
from tests.utils import setup_network_by_relative_loads
# for speeding up tests
defaults.HBEAT_SEND_FREQUENCY *= 0.3
defaults.HBEAT_CHECK_FREQUENCY *= 0.3
defaults.RPC_TIMEOUT *= 0.3
@pytest.mark.asyncio
async def test_swaplink_neighbour_retrieval():
my_num_links = 3
others_amount = 10
others_relative_load = [random.randrange(2, 20) for _ in range(others_amount)]
my_network, other_networks = await setup_network_by_relative_loads(
my_num_links, others_relative_load
)
await asyncio.sleep(defaults.HBEAT_CHECK_FREQUENCY * 1.5)
neighbours = my_network.list_neighbours()
assert len(neighbours) >= int(
my_num_links * 0.8
    )  # todo: how many links should it have after two cycles?
# clean up
await my_network.leave()
for network in other_networks:
await network.leave()
@pytest.mark.asyncio
async def test_swaplink_callback():
callback_flag = Event()
callback_neighbors = []
def callback(neighbors: List[Any]):
nonlocal callback_flag, callback_neighbors
callback_neighbors = neighbors
callback_flag.set()
my_num_links = 3
others_amount = 10
others_relative_load = [random.randrange(2, 20) for _ in range(others_amount)]
my_network, other_networks = await setup_network_by_relative_loads(
my_num_links, others_relative_load
)
my_network.list_neighbours(callback)
await asyncio.sleep(defaults.HBEAT_CHECK_FREQUENCY * 1.5)
    current_neighbors = my_network.list_neighbours(callback)
    assert callback_flag.is_set()
    assert callback_neighbors == current_neighbors
# clean up
await my_network.leave()
for network in other_networks:
await network.leave()
@pytest.mark.asyncio
async def test_swaplink_random_selection():
my_relative_load = 5
others_amount = 10
others_relative_load = [random.randrange(2, 20) for _ in range(others_amount)]
my_network, other_networks = await setup_network_by_relative_loads(
my_relative_load, others_relative_load
)
await asyncio.sleep(defaults.HBEAT_CHECK_FREQUENCY * 1.5)
random_nodes = []
for _ in range(others_amount):
random_nodes.append(await my_network.select())
unique_nodes = set(random_nodes)
RANDOMNESS = 0.5 # todo: implement good randomness test
assert len(unique_nodes) >= (RANDOMNESS * others_amount)
# clean up
await my_network.leave()
for network in other_networks:
await network.leave()
@pytest.mark.asyncio
async def test_swaplink_leave():
my_num_links = 3
others_amount = 10
others_relative_load = [random.randrange(2, 20) for _ in range(others_amount)]
my_network, other_networks = await setup_network_by_relative_loads(
my_num_links, others_relative_load
)
await asyncio.sleep(defaults.HBEAT_CHECK_FREQUENCY * 1.5)
await my_network.leave()
for network in other_networks:
await network.leave()
| 29.692308 | 82 | 0.731865 | 414 | 3,088 | 5.142512 | 0.202899 | 0.050728 | 0.067637 | 0.051667 | 0.633631 | 0.581494 | 0.581494 | 0.564115 | 0.564115 | 0.542508 | 0 | 0.016881 | 0.194301 | 3,088 | 103 | 83 | 29.980583 | 0.838826 | 0.045013 | 0 | 0.487179 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009709 | 0.051282 | 1 | 0.012821 | false | 0 | 0.089744 | 0 | 0.102564 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6850dc67dbc459d1feab97522a0718104e78923 | 846 | py | Python | opening_window.py | m3hrab/Snake-and-ladder-game | bc69d280eb893eb658701218b7b42e321e5c41f3 | [
"MIT"
] | 1 | 2021-10-04T04:01:49.000Z | 2021-10-04T04:01:49.000Z | opening_window.py | m3hrab/Snake-and-Ladder-game | bc69d280eb893eb658701218b7b42e321e5c41f3 | [
"MIT"
] | null | null | null | opening_window.py | m3hrab/Snake-and-Ladder-game | bc69d280eb893eb658701218b7b42e321e5c41f3 | [
"MIT"
] | null | null | null | import pygame
import time
class Intro():
"""
A class that represent opening window of the game
and this window hold the play button, music button,
game sound button
"""
def __init__(self,screen):
self.screen = screen
def show_open_window(self):
# load the image
intro_image = pygame.image.load('images/intro.png')
# get the window rect and screen rect
intro_image_rect = intro_image.get_rect()
screen_rect = self.screen.get_rect()
# set the image rect
intro_image_rect.center = screen_rect.center
# draw the window
self.screen.blit(intro_image,intro_image_rect)
# make the most recently drawn screen visible
pygame.display.flip()
time.sleep(10)
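# Illustrative usage (assuming a pygame display has been initialised; the
# window size here is hypothetical):
#
#     pygame.init()
#     screen = pygame.display.set_mode((800, 600))
#     Intro(screen).show_open_window()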
| 24.171429 | 60 | 0.599291 | 105 | 846 | 4.647619 | 0.428571 | 0.122951 | 0.086066 | 0.07377 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00354 | 0.332151 | 846 | 34 | 61 | 24.882353 | 0.860177 | 0.295508 | 0 | 0 | 0 | 0 | 0.030019 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0 | 0.153846 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a68586830537b4c7d2e69cd8efe90faed5602e2b | 13,741 | py | Python | gewittergefahr/scripts/deep_learning_helper.py | dopplerchase/GewitterGefahr | 4415b08dd64f37eba5b1b9e8cc5aa9af24f96593 | [
"MIT"
] | 26 | 2018-10-04T01:07:35.000Z | 2022-01-29T08:49:32.000Z | gewittergefahr/scripts/deep_learning_helper.py | liuximarcus/GewitterGefahr | d819874d616f98a25187bfd3091073a2e6d5279e | [
"MIT"
] | 4 | 2017-12-25T02:01:08.000Z | 2018-12-19T01:54:21.000Z | gewittergefahr/scripts/deep_learning_helper.py | liuximarcus/GewitterGefahr | d819874d616f98a25187bfd3091073a2e6d5279e | [
"MIT"
] | 11 | 2017-12-10T23:05:29.000Z | 2022-01-29T08:49:33.000Z | """Handles input args for training deep-learning models."""
import numpy
from gewittergefahr.gg_utils import soundings
from gewittergefahr.deep_learning import cnn
from gewittergefahr.deep_learning import deep_learning_utils as dl_utils
TIME_FORMAT = '%Y-%m-%d-%H%M%S'
SOUNDING_HEIGHTS_M_AGL = soundings.DEFAULT_HEIGHT_LEVELS_M_AGL + 0
INPUT_MODEL_FILE_ARG_NAME = 'input_model_file_name'
SOUNDING_FIELDS_ARG_NAME = 'sounding_field_names'
NORMALIZATION_TYPE_ARG_NAME = 'normalization_type_string'
NORMALIZATION_FILE_ARG_NAME = 'normalization_param_file_name'
MIN_NORM_VALUE_ARG_NAME = 'min_normalized_value'
MAX_NORM_VALUE_ARG_NAME = 'max_normalized_value'
TARGET_NAME_ARG_NAME = 'target_name'
SHUFFLE_TARGET_ARG_NAME = 'shuffle_target'
DOWNSAMPLING_CLASSES_ARG_NAME = 'downsampling_classes'
DOWNSAMPLING_FRACTIONS_ARG_NAME = 'downsampling_fractions'
MONITOR_ARG_NAME = 'monitor_string'
WEIGHT_LOSS_ARG_NAME = 'weight_loss_function'
X_TRANSLATIONS_ARG_NAME = 'x_translations_px'
Y_TRANSLATIONS_ARG_NAME = 'y_translations_px'
ROTATION_ANGLES_ARG_NAME = 'ccw_rotation_angles_deg'
NOISE_STDEV_ARG_NAME = 'noise_standard_deviation'
NUM_NOISINGS_ARG_NAME = 'num_noisings'
FLIP_X_ARG_NAME = 'flip_in_x'
FLIP_Y_ARG_NAME = 'flip_in_y'
TRAINING_DIR_ARG_NAME = 'input_training_dir_name'
FIRST_TRAINING_TIME_ARG_NAME = 'first_training_time_string'
LAST_TRAINING_TIME_ARG_NAME = 'last_training_time_string'
NUM_EX_PER_TRAIN_ARG_NAME = 'num_ex_per_train_batch'
VALIDATION_DIR_ARG_NAME = 'input_validation_dir_name'
FIRST_VALIDATION_TIME_ARG_NAME = 'first_validation_time_string'
LAST_VALIDATION_TIME_ARG_NAME = 'last_validation_time_string'
NUM_EX_PER_VALIDN_ARG_NAME = 'num_ex_per_validn_batch'
NUM_EPOCHS_ARG_NAME = 'num_epochs'
NUM_TRAINING_BATCHES_ARG_NAME = 'num_training_batches_per_epoch'
NUM_VALIDATION_BATCHES_ARG_NAME = 'num_validation_batches_per_epoch'
OUTPUT_DIR_ARG_NAME = 'output_dir_name'
INPUT_MODEL_FILE_HELP_STRING = (
'Path to input file (containing either trained or untrained CNN). Will be '
'read by `cnn.read_model`. The architecture of this CNN will be copied.')
SOUNDING_FIELDS_HELP_STRING = (
'List of sounding fields. Each must be accepted by '
'`soundings.check_field_name`. Input will contain each sounding field at '
'each of the following heights (metres AGL). If you do not want to train '
'with soundings, make this a list with one empty string ("").\n{0:s}'
).format(str(SOUNDING_HEIGHTS_M_AGL))
NORMALIZATION_TYPE_HELP_STRING = (
'Normalization type (used for both radar images and soundings). See doc '
'for `deep_learning_utils.normalize_radar_images` or '
'`deep_learning_utils.normalize_soundings`.')
NORMALIZATION_FILE_HELP_STRING = (
'Path to file with normalization params (used for both radar images and '
'soundings). See doc for `deep_learning_utils.normalize_radar_images` or '
'`deep_learning_utils.normalize_soundings`.')
MIN_NORM_VALUE_HELP_STRING = (
'Minimum value for min-max normalization (used for both radar images and '
'soundings). See doc for `deep_learning_utils.normalize_radar_images` or '
'`deep_learning_utils.normalize_soundings`.')
MAX_NORM_VALUE_HELP_STRING = (
'Max value for min-max normalization (used for both radar images and '
'soundings). See doc for `deep_learning_utils.normalize_radar_images` or '
'`deep_learning_utils.normalize_soundings`.')
TARGET_NAME_HELP_STRING = 'Name of target variable.'
SHUFFLE_TARGET_HELP_STRING = (
'Boolean flag. If 1, will randomly shuffle target values over all '
'examples.')
DOWNSAMPLING_CLASSES_HELP_STRING = (
'List of classes (integer labels) for downsampling. If you do not want '
'downsampling, leave this alone.')
DOWNSAMPLING_FRACTIONS_HELP_STRING = (
'List of downsampling fractions. The [k]th downsampling fraction goes with'
' the [k]th class in `{0:s}`, and the sum of all downsampling fractions '
'must be 1.0. If you do not want downsampling, leave this alone.'
).format(DOWNSAMPLING_CLASSES_ARG_NAME)
MONITOR_HELP_STRING = (
'Function used to monitor validation performance (and implement early '
'stopping). Must be in the following list.\n{0:s}'
).format(str(cnn.VALID_MONITOR_STRINGS))
WEIGHT_LOSS_HELP_STRING = (
'Boolean flag. If 1, each class in the loss function will be weighted by '
'the inverse of its frequency in training data. If 0, no such weighting '
'will be done.')
X_TRANSLATIONS_HELP_STRING = (
'x-translations for data augmentation (pixel units). See doc for '
'`data_augmentation.shift_radar_images`. If you do not want translation '
'augmentation, leave this alone.')
Y_TRANSLATIONS_HELP_STRING = (
'y-translations for data augmentation (pixel units). See doc for '
'`data_augmentation.shift_radar_images`. If you do not want translation '
'augmentation, leave this alone.')
ROTATION_ANGLES_HELP_STRING = (
'Counterclockwise rotation angles for data augmentation. See doc for '
'`data_augmentation.rotate_radar_images`. If you do not want rotation '
'augmentation, leave this alone.')
NOISE_STDEV_HELP_STRING = (
'Standard deviation for Gaussian noise. See doc for '
'`data_augmentation.noise_radar_images`. If you do not want noising '
'augmentation, leave this alone.')
NUM_NOISINGS_HELP_STRING = (
'Number of times to replicate each example with noise. See doc for '
'`data_augmentation.noise_radar_images`. If you do not want noising '
'augmentation, leave this alone.')
FLIP_X_HELP_STRING = (
'Boolean flag. If 1, will flip each radar image in the x-direction.')
FLIP_Y_HELP_STRING = (
'Boolean flag. If 1, will flip each radar image in the y-direction.')
TRAINING_DIR_HELP_STRING = (
'Name of directory with training data. Files therein will be found by '
'`input_examples.find_many_example_files` (with shuffled = True) and read '
'by `input_examples.read_example_file`.')
TRAINING_TIME_HELP_STRING = (
'Time (format "yyyy-mm-dd-HHMMSS"). Only examples from the period '
'`{0:s}`...`{1:s}` will be used for training.'
).format(FIRST_TRAINING_TIME_ARG_NAME, LAST_TRAINING_TIME_ARG_NAME)
NUM_EX_PER_TRAIN_HELP_STRING = 'Number of examples per training batch.'
VALIDATION_DIR_HELP_STRING = (
'Same as `{0:s}` but for on-the-fly validation. If you do not want '
'validation, leave this alone.'
).format(TRAINING_DIR_ARG_NAME)
VALIDATION_TIME_HELP_STRING = (
'Time (format "yyyy-mm-dd-HHMMSS"). Only examples from the period '
'`{0:s}`...`{1:s}` will be used for validation. If you do not want '
'validation, leave this alone.'
).format(FIRST_VALIDATION_TIME_ARG_NAME, LAST_VALIDATION_TIME_ARG_NAME)
NUM_EX_PER_VALIDN_HELP_STRING = 'Number of examples per validation batch.'
NUM_EPOCHS_HELP_STRING = 'Number of training epochs.'
NUM_TRAINING_BATCHES_HELP_STRING = 'Number of training batches in each epoch.'
NUM_VALIDATION_BATCHES_HELP_STRING = (
'Number of validation batches in each epoch.')
OUTPUT_DIR_HELP_STRING = (
'Path to output directory. The newly trained CNN and metafiles will be '
'saved here.')
DEFAULT_SOUNDING_FIELD_NAMES = [
soundings.RELATIVE_HUMIDITY_NAME, soundings.SPECIFIC_HUMIDITY_NAME,
soundings.VIRTUAL_POTENTIAL_TEMPERATURE_NAME,
soundings.U_WIND_NAME, soundings.V_WIND_NAME
]
DEFAULT_NORM_TYPE_STRING = dl_utils.Z_NORMALIZATION_TYPE_STRING + ''
DEFAULT_MIN_NORM_VALUE = -1.
DEFAULT_MAX_NORM_VALUE = 1.
DEFAULT_DOWNSAMPLING_CLASSES = numpy.array([0, 1], dtype=int)
DEFAULT_DOWNSAMPLING_FRACTIONS = numpy.array([0.5, 0.5])
DEFAULT_MONITOR_STRING = cnn.LOSS_FUNCTION_STRING + ''
DEFAULT_WEIGHT_LOSS_FLAG = 0
DEFAULT_X_TRANSLATIONS_PX = numpy.array([0], dtype=int)
DEFAULT_Y_TRANSLATIONS_PX = numpy.array([0], dtype=int)
DEFAULT_CCW_ROTATION_ANGLES_DEG = numpy.array([0], dtype=float)
DEFAULT_NOISE_STDEV = 0.05
DEFAULT_NUM_NOISINGS = 0
DEFAULT_FLIP_X_FLAG = 0
DEFAULT_FLIP_Y_FLAG = 0
DEFAULT_NUM_EXAMPLES_PER_BATCH = 512
DEFAULT_NUM_EPOCHS = 100
DEFAULT_NUM_TRAINING_BATCHES_PER_EPOCH = 32
DEFAULT_NUM_VALIDATION_BATCHES_PER_EPOCH = 16
def add_input_args(argument_parser):
"""Adds input args to ArgumentParser object.
:param argument_parser: Instance of `argparse.ArgumentParser` (may already
contain some input args).
:return: argument_parser: Same as input but with new args added.
"""
argument_parser.add_argument(
'--' + INPUT_MODEL_FILE_ARG_NAME, type=str, required=True,
help=INPUT_MODEL_FILE_HELP_STRING)
argument_parser.add_argument(
'--' + SOUNDING_FIELDS_ARG_NAME, type=str, nargs='+', required=False,
default=DEFAULT_SOUNDING_FIELD_NAMES, help=SOUNDING_FIELDS_HELP_STRING)
argument_parser.add_argument(
'--' + NORMALIZATION_TYPE_ARG_NAME, type=str, required=False,
default=DEFAULT_NORM_TYPE_STRING, help=NORMALIZATION_TYPE_HELP_STRING)
argument_parser.add_argument(
'--' + NORMALIZATION_FILE_ARG_NAME, type=str, required=True,
help=NORMALIZATION_FILE_HELP_STRING)
argument_parser.add_argument(
'--' + MIN_NORM_VALUE_ARG_NAME, type=float, required=False,
default=DEFAULT_MIN_NORM_VALUE, help=MIN_NORM_VALUE_HELP_STRING)
argument_parser.add_argument(
'--' + MAX_NORM_VALUE_ARG_NAME, type=float, required=False,
default=DEFAULT_MAX_NORM_VALUE, help=MAX_NORM_VALUE_HELP_STRING)
argument_parser.add_argument(
'--' + TARGET_NAME_ARG_NAME, type=str, required=True,
help=TARGET_NAME_HELP_STRING)
argument_parser.add_argument(
'--' + SHUFFLE_TARGET_ARG_NAME, type=int, required=False, default=0,
help=SHUFFLE_TARGET_HELP_STRING)
argument_parser.add_argument(
'--' + DOWNSAMPLING_CLASSES_ARG_NAME, type=int, nargs='+',
required=False, default=DEFAULT_DOWNSAMPLING_CLASSES,
help=DOWNSAMPLING_CLASSES_HELP_STRING)
argument_parser.add_argument(
'--' + DOWNSAMPLING_FRACTIONS_ARG_NAME, type=float, nargs='+',
required=False, default=DEFAULT_DOWNSAMPLING_FRACTIONS,
help=DOWNSAMPLING_FRACTIONS_HELP_STRING)
argument_parser.add_argument(
'--' + MONITOR_ARG_NAME, type=str, required=False,
default=DEFAULT_MONITOR_STRING, help=MONITOR_HELP_STRING)
argument_parser.add_argument(
'--' + WEIGHT_LOSS_ARG_NAME, type=int, required=False,
default=DEFAULT_WEIGHT_LOSS_FLAG, help=WEIGHT_LOSS_HELP_STRING)
argument_parser.add_argument(
'--' + X_TRANSLATIONS_ARG_NAME, type=int, nargs='+', required=False,
default=DEFAULT_X_TRANSLATIONS_PX, help=X_TRANSLATIONS_HELP_STRING)
argument_parser.add_argument(
'--' + Y_TRANSLATIONS_ARG_NAME, type=int, nargs='+', required=False,
default=DEFAULT_Y_TRANSLATIONS_PX, help=Y_TRANSLATIONS_HELP_STRING)
argument_parser.add_argument(
'--' + ROTATION_ANGLES_ARG_NAME, type=float, nargs='+', required=False,
default=DEFAULT_CCW_ROTATION_ANGLES_DEG,
help=ROTATION_ANGLES_HELP_STRING)
argument_parser.add_argument(
'--' + NOISE_STDEV_ARG_NAME, type=float, required=False,
default=DEFAULT_NOISE_STDEV, help=NOISE_STDEV_HELP_STRING)
argument_parser.add_argument(
'--' + NUM_NOISINGS_ARG_NAME, type=int, required=False,
default=DEFAULT_NUM_NOISINGS, help=NUM_NOISINGS_HELP_STRING)
argument_parser.add_argument(
'--' + FLIP_X_ARG_NAME, type=int, required=False,
default=DEFAULT_FLIP_X_FLAG, help=FLIP_X_HELP_STRING)
argument_parser.add_argument(
'--' + FLIP_Y_ARG_NAME, type=int, required=False,
default=DEFAULT_FLIP_Y_FLAG, help=FLIP_Y_HELP_STRING)
argument_parser.add_argument(
'--' + TRAINING_DIR_ARG_NAME, type=str, required=True,
help=TRAINING_DIR_HELP_STRING)
argument_parser.add_argument(
'--' + FIRST_TRAINING_TIME_ARG_NAME, type=str, required=True,
help=TRAINING_TIME_HELP_STRING)
argument_parser.add_argument(
'--' + LAST_TRAINING_TIME_ARG_NAME, type=str, required=True,
help=TRAINING_TIME_HELP_STRING)
argument_parser.add_argument(
'--' + NUM_EX_PER_TRAIN_ARG_NAME, type=int, required=False,
default=DEFAULT_NUM_EXAMPLES_PER_BATCH,
help=NUM_EX_PER_TRAIN_HELP_STRING)
argument_parser.add_argument(
'--' + VALIDATION_DIR_ARG_NAME, type=str, required=False, default='',
help=VALIDATION_DIR_HELP_STRING)
argument_parser.add_argument(
'--' + FIRST_VALIDATION_TIME_ARG_NAME, type=str, required=False,
default='', help=VALIDATION_TIME_HELP_STRING)
argument_parser.add_argument(
'--' + LAST_VALIDATION_TIME_ARG_NAME, type=str, required=False,
default='', help=VALIDATION_TIME_HELP_STRING)
argument_parser.add_argument(
'--' + NUM_EX_PER_VALIDN_ARG_NAME, type=int, required=False,
default=DEFAULT_NUM_EXAMPLES_PER_BATCH,
help=NUM_EX_PER_VALIDN_HELP_STRING)
argument_parser.add_argument(
'--' + NUM_EPOCHS_ARG_NAME, type=int, required=False,
default=DEFAULT_NUM_EPOCHS, help=NUM_EPOCHS_HELP_STRING)
argument_parser.add_argument(
'--' + NUM_TRAINING_BATCHES_ARG_NAME, type=int, required=False,
default=DEFAULT_NUM_TRAINING_BATCHES_PER_EPOCH,
help=NUM_TRAINING_BATCHES_HELP_STRING)
argument_parser.add_argument(
'--' + NUM_VALIDATION_BATCHES_ARG_NAME, type=int, required=False,
default=DEFAULT_NUM_VALIDATION_BATCHES_PER_EPOCH,
help=NUM_VALIDATION_BATCHES_HELP_STRING)
argument_parser.add_argument(
'--' + OUTPUT_DIR_ARG_NAME, type=str, required=True,
help=OUTPUT_DIR_HELP_STRING)
return argument_parser
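# Illustrative usage from a hypothetical training script (the argument names
# are the ones registered above):
#
#     import argparse
#     argument_parser = argparse.ArgumentParser()
#     argument_parser = add_input_args(argument_parser)
#     args = argument_parser.parse_args()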
| 40.414706 | 80 | 0.755695 | 1,893 | 13,741 | 5.07607 | 0.127839 | 0.049537 | 0.054844 | 0.080654 | 0.571652 | 0.492351 | 0.429805 | 0.335519 | 0.276199 | 0.229576 | 0 | 0.003885 | 0.157121 | 13,741 | 339 | 81 | 40.533923 | 0.825764 | 0.019358 | 0 | 0.22179 | 0 | 0.003891 | 0.320443 | 0.081133 | 0 | 0 | 0 | 0 | 0 | 1 | 0.003891 | false | 0 | 0.015564 | 0 | 0.023346 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6864cff30190d27e7138fb69ceaf1b96919cb06 | 1,859 | py | Python | custom_components/rinnaitouch/__init__.py | funtastix/rinnaitouch | 272b3a4dcd8bcb66a9656c3f9ca497a74a7244b6 | [
"MIT"
] | 4 | 2022-02-17T22:26:14.000Z | 2022-03-31T05:45:53.000Z | custom_components/rinnaitouch/__init__.py | funtastix/rinnaitouch | 272b3a4dcd8bcb66a9656c3f9ca497a74a7244b6 | [
"MIT"
] | 8 | 2022-02-19T01:37:16.000Z | 2022-03-29T21:12:29.000Z | custom_components/rinnaitouch/__init__.py | funtastix/rinnaitouch | 272b3a4dcd8bcb66a9656c3f9ca497a74a7244b6 | [
"MIT"
] | null | null | null | """Set up main entity."""
# pylint: disable=duplicate-code
import logging
from dataclasses import dataclass
from homeassistant.config_entries import ConfigEntry
from homeassistant.exceptions import ConfigEntryNotReady
from homeassistant.const import CONF_HOST
from homeassistant.core import HomeAssistant
from homeassistant.helpers.entity import Entity
from homeassistant.const import Platform
from pyrinnaitouch import RinnaiSystem
from .const import DOMAIN
_LOGGER = logging.getLogger(__name__)
PLATFORMS = [
Platform.CLIMATE,
Platform.SWITCH,
Platform.BINARY_SENSOR,
Platform.SENSOR,
Platform.BUTTON,
Platform.SELECT
]
async def async_setup_entry(hass: HomeAssistant, entry: ConfigEntry):
"""Set up the rinnaitouch integration from a config entry."""
ip_address = entry.data.get(CONF_HOST)
_LOGGER.debug("Get controller with IP: %s", ip_address)
try:
system = RinnaiSystem.get_instance(ip_address)
#scenes = await system.getSupportedScenes()
scenes = []
await system.get_status()
except (
Exception,
ConnectionError,
ConnectionRefusedError,
) as err:
raise ConfigEntryNotReady from err
hass.data.setdefault(DOMAIN, {})[entry.entry_id] = RinnaiData(system=system, scenes=scenes)
hass.config_entries.async_setup_platforms(entry, PLATFORMS)
return True
async def async_unload_entry(hass: HomeAssistant, entry: ConfigEntry):
"""Unload a config entry."""
if unload_ok := await hass.config_entries.async_unload_platforms(entry, PLATFORMS):
hass.data[DOMAIN].pop(entry.entry_id)
return unload_ok
@dataclass
class RinnaiData:
"""Data for the Rinnai Touch integration."""
system: RinnaiSystem
scenes: list
class RinnaiEntity(Entity):
"""Base entity."""
def __init__(self):
pass
| 26.942029 | 95 | 0.731038 | 212 | 1,859 | 6.254717 | 0.415094 | 0.076923 | 0.033183 | 0.042232 | 0.057315 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.18397 | 1,859 | 68 | 96 | 27.338235 | 0.874094 | 0.077999 | 0 | 0 | 0 | 0 | 0.016169 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.021739 | false | 0.021739 | 0.217391 | 0 | 0.369565 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a686bc5588bf96d2912d72d16b95f0e69f7601c3 | 4,238 | py | Python | python/pydiffx/dom/writer.py | beanbaginc/diffx | d913b4c94fa91bdabd8083882bd1acbe451ed5f8 | [
"MIT"
] | null | null | null | python/pydiffx/dom/writer.py | beanbaginc/diffx | d913b4c94fa91bdabd8083882bd1acbe451ed5f8 | [
"MIT"
] | null | null | null | python/pydiffx/dom/writer.py | beanbaginc/diffx | d913b4c94fa91bdabd8083882bd1acbe451ed5f8 | [
"MIT"
] | 1 | 2022-02-20T16:29:08.000Z | 2022-02-20T16:29:08.000Z | """Writer for generating a DiffX file from DOM objects."""
from __future__ import unicode_literals
import six
from pydiffx.writer import DiffXWriter
from pydiffx.sections import CONTENT_SECTIONS
class DiffXDOMWriter(object):
"""A writer for generating a DiffX file from DOM objects.
This will write a :py:class:`~pydiffx.dom.objects.DiffX` object tree
to a byte stream, such as a file, HTTP response, or memory-backed stream.
If constructing manually, one instance can be reused for multiple DiffX
objects.
"""
#: The class to instantiate for writing to a stream.
#:
#: Subclasses can set this if they need to use a more specialized writer.
#:
#: Type:
#: type
writer_cls = DiffXWriter
_remapped_options = {
'diff': {
'type': 'diff_type',
},
'meta': {
'format': 'meta_format'
},
}
def write_stream(self, diffx, stream):
"""Write a DiffX object to a stream.
Args:
diffx (pydiffx.dom.objects.DiffX):
The DiffX object to write.
stream (file or io.IOBase):
The byte stream to write to.
Raises:
pydiffx.errors.BaseDiffXError:
The DiffX contents could not be written. Details will be in
the error message.
"""
main_options = diffx.options.copy()
version = main_options.pop('version', DiffXWriter.VERSION)
encoding = main_options.pop('encoding', None)
writer = self.writer_cls(stream,
version=version,
encoding=encoding,
**main_options)
for subsection in diffx:
self._write_section(subsection, writer)
def _write_section(self, section, writer):
"""Write a section to the stream.
Args:
section (pydiffx.dom.objects.BaseDiffXSection):
The section to write.
writer (pydiffx.dom.writer.DiffXWriter):
The streaming writer to write with.
"""
if section.section_id in CONTENT_SECTIONS:
self._write_content_section(section, writer)
else:
self._write_container_section(section, writer)
def _write_container_section(self, section, writer):
"""Write a container section to the stream.
Args:
section (pydiffx.dom.objects.BaseDiffXContainerSection):
The container section to write.
writer (pydiffx.dom.writer.DiffXWriter):
The streaming writer to write with.
"""
write_func = getattr(writer, 'new_%s' % section.section_name)
write_func(**self._get_options(section))
for subsection in section:
self._write_section(subsection, writer)
def _write_content_section(self, section, writer):
"""Write a content section to the stream.
If there's no content to write, the section will be skipped.
Args:
section (pydiffx.dom.objects.BaseDiffXContentSection):
The content section to write.
writer (pydiffx.dom.writer.DiffXWriter):
The streaming writer to write with.
"""
content = section.content
if content:
write_func = getattr(writer, 'write_%s' % section.section_name)
write_func(content, **self._get_options(section))
def _get_options(self, section):
"""Return options to write for a given section.
This will take care of renaming any options as appropriate to pass
to the writer function.
Args:
section (pydiffx.dom.objects.BaseDiffXSection):
The section being written.
Returns:
dict:
The options to pass to the writer function.
"""
options = section.options
try:
remapped_options = self._remapped_options[section.section_name]
except KeyError:
return options
return {
remapped_options.get(_key, _key): _value
for _key, _value in six.iteritems(options)
}
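# Illustrative only (added sketch, not part of pydiffx): one way this writer
# might be driven. It assumes pydiffx.dom.objects.DiffX can be instantiated
# empty; adapt the DOM construction to the real API as needed.
if __name__ == '__main__':
    import io
    from pydiffx.dom.objects import DiffX
    stream = io.BytesIO()
    DiffXDOMWriter().write_stream(DiffX(), stream)
    print(stream.getvalue())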
| 30.271429 | 77 | 0.596272 | 467 | 4,238 | 5.280514 | 0.261242 | 0.028386 | 0.041363 | 0.034063 | 0.317518 | 0.306164 | 0.226683 | 0.194242 | 0.164639 | 0.092457 | 0 | 0 | 0.327513 | 4,238 | 139 | 78 | 30.489209 | 0.865263 | 0.434875 | 0 | 0.040816 | 0 | 0 | 0.033103 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.102041 | false | 0 | 0.081633 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a687d7feca22a1b98095bff58c06be97ab2cb464 | 1,540 | py | Python | L3_numpy_pandas_2D/A_2D_data.py | angelmtenor/IDAFC | 9d23746fd02e4eda2569d75b3c7a1383277e6e78 | [
"MIT"
] | null | null | null | L3_numpy_pandas_2D/A_2D_data.py | angelmtenor/IDAFC | 9d23746fd02e4eda2569d75b3c7a1383277e6e78 | [
"MIT"
] | null | null | null | L3_numpy_pandas_2D/A_2D_data.py | angelmtenor/IDAFC | 9d23746fd02e4eda2569d75b3c7a1383277e6e78 | [
"MIT"
] | null | null | null | import numpy as np
# Subway ridership for 5 stations on 10 different days
ridership = np.array([
[0, 0, 2, 5, 0],
[1478, 3877, 3674, 2328, 2539],
[1613, 4088, 3991, 6461, 2691],
[1560, 3392, 3826, 4787, 2613],
[1608, 4802, 3932, 4477, 2705],
[1576, 3933, 3909, 4979, 2685],
[95, 229, 255, 496, 201],
[2, 0, 1, 27, 0],
[1438, 3785, 3589, 4174, 2215],
[1342, 4043, 4009, 4665, 3033]
])
# Change False to True for each block of code to see what it does
# Accessing elements
if False:
print(ridership[1, 3])
print(ridership[1:3, 3:5])
print(ridership[1, :])
# Vectorized operations on rows or columns
if False:
print(ridership[0, :] + ridership[1, :])
print(ridership[:, 0] + ridership[:, 1])
# Vectorized operations on entire arrays
if False:
a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
b = np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3]])
print(a + b)
def mean_riders_for_max_station(ridership):
"""
Fill in this function to find the station with the maximum riders on the
first day, then return the mean riders per day for that station. Also
    return the mean ridership overall for comparison.
Hint: NumPy's argmax() function might be useful:
http://docs.scipy.org/doc/numpy/reference/generated/numpy.argmax.html
"""
overall_mean = ridership.mean()
max_station = ridership[0, :].argmax()
mean_for_max = ridership[:, max_station].mean()
return overall_mean, mean_for_max
print(mean_riders_for_max_station(ridership))
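# Added illustration: the same per-station statistics can be computed with
# NumPy's axis argument, which generalizes the argmax-based approach above.
print(ridership[0, :].argmax())  # station with the most riders on day one
print(ridership.mean(axis=0))    # mean riders per day for every station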
| 28 | 76 | 0.63961 | 238 | 1,540 | 4.071429 | 0.52521 | 0.072239 | 0.04644 | 0.043344 | 0.173375 | 0.066047 | 0 | 0 | 0 | 0 | 0 | 0.164315 | 0.217532 | 1,540 | 54 | 77 | 28.518519 | 0.639834 | 0.343506 | 0 | 0.1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033333 | false | 0 | 0.033333 | 0 | 0.1 | 0.233333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a688464b1c3891545d1f144a4ff2e40f465e7ddf | 1,158 | py | Python | tests/init/feature.py | Carsten-Leue/ReduxPY | 3ca633c4d4d8a53418ee1049d571d1094feb14be | [
"MIT"
] | 13 | 2020-04-30T15:06:45.000Z | 2021-11-28T20:57:34.000Z | tests/init/feature.py | Carsten-Leue/ReduxPY | 3ca633c4d4d8a53418ee1049d571d1094feb14be | [
"MIT"
] | 4 | 2020-04-29T19:43:10.000Z | 2021-02-04T15:19:44.000Z | tests/init/feature.py | Carsten-Leue/ReduxPY | 3ca633c4d4d8a53418ee1049d571d1094feb14be | [
"MIT"
] | 4 | 2021-02-13T01:18:11.000Z | 2022-02-03T08:04:16.000Z | from typing import Any
from rx import pipe
from rx.operators import ignore_elements, map
from redux import (
    Action,
    ReduxFeatureModule,
    combine_epics,
    create_action,
    create_feature_module,
    handle_actions,
    of_init_feature,
    of_type,
    select_action_payload,
    select_feature,
)
INIT_FEATURE = "INIT_FEATURE"
ADD_INIT_ACTION = "ADD_INIT_ACTION"
add_init_action = create_action(ADD_INIT_ACTION)
select_init_feature_module = select_feature(INIT_FEATURE)
def create_init_feature() -> ReduxFeatureModule:
"""
Constructs a new sample feature
"""
def handle_init_action(state: Any, action: Action) -> Any:
return select_action_payload(action)
sample_reducer = handle_actions({ADD_INIT_ACTION: handle_init_action})
add_epic = pipe(of_type(ADD_INIT_ACTION), ignore_elements(),)
init_epic = pipe(
of_init_feature(INIT_FEATURE), map(lambda x: add_init_action("init")),
)
sample_epic = combine_epics(add_epic, init_epic)
return create_feature_module(INIT_FEATURE, sample_reducer, sample_epic)
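# Illustrative only: constructing the feature module defined above. Wiring it
# into a running ReduxPY store is assumed to follow the library's
# feature-module API and is not shown here.
sample_feature_module = create_init_feature()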
| 23.632653 | 78 | 0.736615 | 149 | 1,158 | 5.315436 | 0.275168 | 0.125 | 0.114899 | 0.07197 | 0.049242 | 0.049242 | 0 | 0 | 0 | 0 | 0 | 0 | 0.185665 | 1,158 | 48 | 79 | 24.125 | 0.839873 | 0.02677 | 0 | 0 | 0 | 0 | 0.028029 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.125 | 0.03125 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a68ae2307a359f7e47d9b87116672f1fc3da5cfb | 909 | py | Python | backend/backend/urls.py | Zhiwei1996/Todoist | c260ac051e909243395c98e8b4f45a42abb548ea | [
"MIT"
] | null | null | null | backend/backend/urls.py | Zhiwei1996/Todoist | c260ac051e909243395c98e8b4f45a42abb548ea | [
"MIT"
] | null | null | null | backend/backend/urls.py | Zhiwei1996/Todoist | c260ac051e909243395c98e8b4f45a42abb548ea | [
"MIT"
] | null | null | null | #!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""backend URL Configuration
"""
from django.conf.urls import include, url
from django.contrib import admin
from rest_framework.schemas import get_schema_view
from rest_framework.documentation import include_docs_urls
from todoist import views
API_TITLE = 'Todoist API'
API_DESCRIPTION = 'A Web API for creating and viewing todolist.'
schema_view = get_schema_view(title=API_TITLE)
urlpatterns = [
url(r'^$', views.index, name='index'),
url(r'^admin/', include(admin.site.urls)),
url(r'^api/v01/', include('todoist.urls')),
url(r'^api/v01/auth/', include('rest_framework.urls',
namespace='rest_framework')),
url(r'^api/v01/schema/$', schema_view),
url(r'^api/v01/docs/', include_docs_urls(title=API_TITLE,
description=API_DESCRIPTION, public=False))
]
| 31.344828 | 88 | 0.665567 | 120 | 909 | 4.883333 | 0.4 | 0.040956 | 0.047782 | 0.068259 | 0.047782 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013699 | 0.19692 | 909 | 28 | 89 | 32.464286 | 0.789041 | 0.075908 | 0 | 0 | 0 | 0 | 0.201923 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.277778 | 0 | 0.277778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a68cf4ecb5b81e1cac7d631028ac2e152c08518b | 1,192 | py | Python | secs_to_time.py | bonny1992/oneplus-notificator-fixed | 2e47d8b21b3100332f2a59b95d79b49e0d680bae | [
"Apache-2.0"
] | null | null | null | secs_to_time.py | bonny1992/oneplus-notificator-fixed | 2e47d8b21b3100332f2a59b95d79b49e0d680bae | [
"Apache-2.0"
] | null | null | null | secs_to_time.py | bonny1992/oneplus-notificator-fixed | 2e47d8b21b3100332f2a59b95d79b49e0d680bae | [
"Apache-2.0"
] | null | null | null | def secs_to_time(seconds):
    """Format a duration given in seconds as an Italian time string."""
    if seconds < 60:
        return "{seconds} secondi".format(seconds=seconds)
    # divmod yields proper remainders; the original math.ceil arithmetic
    # returned the *total* remaining minutes/seconds instead of the remainder
    # after the larger units, producing outputs like "1 giorni, 12 ore,
    # 720 minuti, ...".
    minutes, seconds = divmod(int(seconds), 60)
    hours, minutes = divmod(minutes, 60)
    days, hours = divmod(hours, 24)
    if days > 0:
        return "{days} giorni, {hours} ore, {minutes} minuti, {seconds} secondi".format(
            days=days, hours=hours, minutes=minutes, seconds=seconds
        )
    if hours > 0:
        return "{hours} ore, {minutes} minuti, {seconds} secondi".format(
            hours=hours, minutes=minutes, seconds=seconds
        )
    return "{minutes} minuti, {seconds} secondi".format(
        minutes=minutes, seconds=seconds
    )
print(secs_to_time(300))
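# Added sanity check for the divmod-based arithmetic above:
# 90061 seconds = 1 day, 1 hour, 1 minute, 1 second.
print(secs_to_time(90061))  # -> "1 giorni, 1 ore, 1 minuti, 1 secondi"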
| 37.25 | 92 | 0.494128 | 135 | 1,192 | 4.333333 | 0.192593 | 0.107692 | 0.094017 | 0.138462 | 0.500855 | 0.420513 | 0.358974 | 0.088889 | 0 | 0 | 0 | 0.067445 | 0.353188 | 1,192 | 31 | 93 | 38.451613 | 0.69131 | 0 | 0 | 0 | 0 | 0 | 0.136745 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034483 | false | 0 | 0.034483 | 0 | 0.206897 | 0.034483 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6929cee90954c7741a45c1aea4c6b9d650cee79 | 2,042 | py | Python | backend/jobs/tests.py | k3ndr1c/ML-Platform | 35af599602f94b386f13c9ba2c0fa2b678a54375 | [
"MIT"
] | null | null | null | backend/jobs/tests.py | k3ndr1c/ML-Platform | 35af599602f94b386f13c9ba2c0fa2b678a54375 | [
"MIT"
] | null | null | null | backend/jobs/tests.py | k3ndr1c/ML-Platform | 35af599602f94b386f13c9ba2c0fa2b678a54375 | [
"MIT"
] | null | null | null | from django.contrib.auth import get_user_model
from django.test import TestCase
from django.urls import reverse
from rest_framework import status
from rest_framework.test import APIClient
from .models import Job
CREATE_JOB_URL = reverse('jobs:create')
GET_PREDICTIONS_LIST_URL = reverse('jobs:predictions-list')
def sample_job(user, **params):
"""Create and return a sample job"""
return Job.objects.create(user=user)
def create_user(**param):
return get_user_model().objects.create_user(**param)
class PublicJobsApiTests(TestCase):
"""Test unauthenticated jobs API access"""
def setUp(self):
self.client = APIClient()
def test_auth_required_get_predictions(self):
"""Test that authentication is required to get predictions"""
res = self.client.get(GET_PREDICTIONS_LIST_URL)
self.assertEqual(res.status_code, status.HTTP_403_FORBIDDEN)
def test_auth_required_create_job(self):
"""Test that authentication is required to create new job"""
res = self.client.post(CREATE_JOB_URL, {})
self.assertEqual(res.status_code, status.HTTP_403_FORBIDDEN)
class PrivateJobsApiTests(TestCase):
"""Test authenticated Jobs API access"""
def setUp(self):
self.user = create_user(
email='test@testcase.com',
username='testuser',
phone_number='1234567890',
first_name='John',
middle_name= 'C',
last_name='Die',
mail_address='123 River St',
occupation='student',
password='testpassword',
)
self.client = APIClient()
self.client.force_authenticate(user=self.user)
def test_create_job(self):
"""Test creating a job"""
res = self.client.post(CREATE_JOB_URL, {})
self.assertEqual(res.status_code, status.HTTP_201_CREATED)
job = Job.objects.get(id=res.data['id'])
self.assertEqual(1, job.id)
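    def test_get_predictions_list_authenticated(self):
        """Illustrative sketch (added): an authenticated request to the
        predictions list endpoint must not be rejected for missing auth."""
        res = self.client.get(GET_PREDICTIONS_LIST_URL)
        self.assertNotEqual(res.status_code, status.HTTP_403_FORBIDDEN)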
| 30.029412 | 69 | 0.677767 | 250 | 2,042 | 5.348 | 0.356 | 0.044877 | 0.026926 | 0.04712 | 0.253553 | 0.253553 | 0.253553 | 0.153328 | 0.153328 | 0.153328 | 0 | 0.014393 | 0.217434 | 2,042 | 67 | 70 | 30.477612 | 0.822278 | 0.114104 | 0 | 0.190476 | 0 | 0 | 0.060742 | 0.011811 | 0 | 0 | 0 | 0 | 0.095238 | 1 | 0.166667 | false | 0.02381 | 0.166667 | 0.02381 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a693893f40cf33adad9c304b0df64f4d022f6bf1 | 22,949 | py | Python | swords/__init__.py | p-lambda/swords | 04ca75370d0ce098a7f4db68240fc8e79a4f7b3b | [
"CC-BY-3.0"
] | 25 | 2021-05-24T06:54:45.000Z | 2022-03-18T15:30:39.000Z | swords/__init__.py | p-lambda/swords | 04ca75370d0ce098a7f4db68240fc8e79a4f7b3b | [
"CC-BY-3.0"
] | 2 | 2021-06-11T02:39:47.000Z | 2021-09-20T15:06:46.000Z | swords/__init__.py | p-lambda/swords | 04ca75370d0ce098a7f4db68240fc8e79a4f7b3b | [
"CC-BY-3.0"
] | 2 | 2021-11-19T09:06:30.000Z | 2022-03-24T18:31:40.000Z | from collections import defaultdict
import copy
from enum import Enum
import hashlib
import json
# From UD v2: https://universaldependencies.org/u/pos/
class Pos(Enum):
UNKNOWN = 0
# Open class
ADJ = 1
ADV = 2
INTJ = 3
NOUN = 4
PROPN = 5
VERB = 6
# Closed class
ADP = 7
AUX = 8
CCONJ = 9
DET = 10
NUM = 11
PART = 12
PRON = 13
SCONJ = 14
# Other
PUNCT = 15
SYM = 16
X = 17
# https://universaldependencies.org/tagset-conversion/en-penn-uposf.html
PTB_POS_TO_POS = """
#=>SYM
$=>SYM
''=>PUNCT
,=>PUNCT
-LRB-=>PUNCT
-RRB-=>PUNCT
.=>PUNCT
:=>PUNCT
AFX=>ADJ
CC=>CCONJ
CD=>NUM
DT=>DET
EX=>PRON
FW=>X
HYPH=>PUNCT
IN=>ADP
JJ=>ADJ
JJR=>ADJ
JJS=>ADJ
LS=>X
MD=>VERB
NIL=>X
NN=>NOUN
NNP=>PROPN
NNPS=>PROPN
NNS=>NOUN
PDT=>DET
POS=>PART
PRP=>PRON
PRP$=>DET
RB=>ADV
RBR=>ADV
RBS=>ADV
RP=>ADP
SYM=>SYM
TO=>PART
UH=>INTJ
VB=>VERB
VBD=>VERB
VBG=>VERB
VBN=>VERB
VBP=>VERB
VBZ=>VERB
WDT=>DET
WP=>PRON
WP$=>DET
WRB=>ADV
``=>PUNCT
""".strip().splitlines()
PTB_POS_TO_POS = {k:Pos[v] for k, v in [l.split('=>') for l in PTB_POS_TO_POS]}
_AIT_POS_TO_POS = {
'UNKN': Pos.UNKNOWN,
'VERB': Pos.VERB,
'NOUN': Pos.NOUN,
'PRON': Pos.PRON,
'ADJ': Pos.ADJ,
'ADV': Pos.ADV,
'ADP': Pos.ADP,
'CONJ': Pos.CCONJ,
'DET': Pos.DET,
'NUM': Pos.NUM,
'PRT': Pos.PART,
'OTH': Pos.X,
'PUNC': Pos.PUNCT,
'PROP': Pos.PROPN,
'PHRS': Pos.UNKNOWN,
}
_POS_TO_AIT_POS = {v:k for k, v in _AIT_POS_TO_POS.items()}
_POS_TO_AIT_POS[Pos.UNKNOWN] = 'UNKN'
_POS_TO_AIT_POS[Pos.INTJ] = 'UNKN'
_POS_TO_AIT_POS[Pos.AUX] = 'UNKN'
_POS_TO_AIT_POS[Pos.SCONJ] = 'CONJ'
_POS_TO_AIT_POS[Pos.SYM] = 'PUNC'
assert len(_POS_TO_AIT_POS) == len(Pos)
class Label(Enum):
FALSE = 0
TRUE = 1
FALSE_IMPLICIT = 2
TRUE_IMPLICIT = 3
UNSURE = 4
def _dict_checksum(d):
d_json = json.dumps(d, sort_keys=True)
return hashlib.sha1(d_json.encode('utf-8')).hexdigest()
class LexSubGenerationTask:
def __init__(self, extra=None):
self.__cid_to_context = {}
self.__tid_to_target = {}
if extra is not None:
try:
json.dumps(extra)
except:
raise ValueError('Extra information must be JSON serializable')
self.extra = extra
def stats(self):
return len(self.__cid_to_context), len(self.__tid_to_target)
def id(self):
return 'gt:' + _dict_checksum({
'contexts': sorted(list(self.all_context_ids())),
'targets': sorted(list(self.all_target_ids())),
})
@classmethod
def create_context(cls, context_str, extra=None):
if type(context_str) != str or len(context_str) == 0:
raise ValueError('Invalid context string')
if extra is not None:
try:
json.dumps(extra)
except:
raise ValueError('Extra information must be JSON serializable')
context = {
'context': context_str
}
if extra is not None:
context['extra'] = extra
return context
@classmethod
def create_target(cls, context_id, target_str, offset, pos=None, extra=None):
if not context_id.startswith('c:'):
raise ValueError('Invalid context ID')
# TODO: Make sure target_str.strip() == target_str?
if type(target_str) != str or len(target_str) == 0:
raise ValueError('Invalid target string')
if type(offset) != int:
raise ValueError('Invalid target offset')
if pos is not None and not isinstance(pos, Pos):
raise ValueError('Invalid target part-of-speech')
if extra is not None:
try:
json.dumps(extra)
except:
raise ValueError('Extra information must be JSON serializable')
target = {
'context_id': context_id,
'target': target_str,
'offset': offset,
'pos': pos,
}
if extra is not None:
target['extra'] = extra
return target
@classmethod
def context_id(cls, context):
return 'c:' + _dict_checksum({
# NOTE: Context is case-sensitive
'context': context['context'],
})
@classmethod
def target_id(cls, target):
return 't:' + _dict_checksum({
'context_id': target['context_id'],
# NOTE: Target is case-insensitive (because context has case info)
'target': target['target'].lower(),
'offset': target['offset'],
# NOTE: POS is an *input* to generation models, so it should be considered part of the target checksum
'pos': None if target['pos'] is None else target['pos'].name
})
def has_context(self, context_id):
return context_id in self.__cid_to_context
def has_target(self, target_id):
return target_id in self.__tid_to_target
def get_context(self, context_id):
if context_id not in self.__cid_to_context:
raise ValueError('Invalid context ID')
return self.__cid_to_context[context_id]
def get_target(self, target_id):
if target_id not in self.__tid_to_target:
raise ValueError('Invalid target ID')
return self.__tid_to_target[target_id]
def add_context(self, context_or_context_str, extra=None, update_ok=False):
if type(context_or_context_str) == dict:
if extra is not None:
raise ValueError()
context = context_or_context_str
context = self.create_context(context['context'], extra=context.get('extra'))
else:
context = self.create_context(context_or_context_str, extra=extra)
cid = self.context_id(context)
if not update_ok and cid in self.__cid_to_context:
raise ValueError('Context ID already exists')
self.__cid_to_context[cid] = context
return cid
def add_target(self, target_or_context_id, target_str=None, offset=None, pos=None, extra=None, update_ok=False):
if type(target_or_context_id) == dict:
if any([kwarg is not None for kwarg in [target_str, offset, pos, extra]]):
raise ValueError()
target = target_or_context_id
target = self.create_target(target['context_id'], target['target'], target['offset'], pos=target['pos'], extra=target.get('extra'))
else:
if any([kwarg is None for kwarg in [target_str, offset]]):
raise ValueError()
target = self.create_target(target_or_context_id, target_str, offset, pos=pos, extra=extra)
tid = self.target_id(target)
if not update_ok and tid in self.__tid_to_target:
raise ValueError('Target ID already exists')
context_id = target['context_id']
if not self.has_context(context_id):
raise ValueError('Invalid context ID')
context = self.get_context(context_id)
if context['context'][target['offset']:target['offset']+len(target['target'])].lower() != target['target'].lower():
raise ValueError('Target not found at offset')
self.__tid_to_target[tid] = target
return tid
def all_context_ids(self):
return self.__cid_to_context.keys()
def all_target_ids(self):
return self.__tid_to_target.keys()
def get_generator_inputs(self, target_id):
if not self.has_target(target_id):
raise ValueError('Invalid target ID')
target = self.get_target(target_id)
context = self.get_context(target['context_id'])
return {
'context': context['context'],
'target': target['target'],
'target_offset': target['offset'],
'target_pos': target['pos']
}
def iter_generator_input(self, batch_size=None, sort=True, sort_by='context_len_descending'):
if sort:
if sort_by == 'context_len_descending':
cid_to_tids = defaultdict(list)
for tid in self.all_target_ids():
target = self.get_target(tid)
cid_to_tids[target['context_id']].append(tid)
cids_sorted = sorted(cid_to_tids.keys(), key=lambda x: -len(self.get_context(x)['context']))
tids = []
for cid in cids_sorted:
tids.extend(cid_to_tids[cid])
else:
raise ValueError()
else:
tids = list(self.all_target_ids())
if batch_size is None:
for tid in tids:
yield tid, self.get_generator_inputs(tid)
else:
for i in range(0, len(tids), batch_size):
yield [(tid, self.get_generator_inputs(tid)) for tid in tids[i:i+batch_size]]
def as_dict(self):
result = {
'contexts': copy.deepcopy(self.__cid_to_context),
'targets': copy.deepcopy(self.__tid_to_target),
}
for tid, target in result['targets'].items():
target['pos'] = None if target['pos'] is None else target['pos'].name
if self.extra is not None:
result['extra'] = self.extra
return result
@classmethod
def from_dict(cls, d):
i = cls(extra=d.get('extra'))
for cid, context in d['contexts'].items():
_cid = i.add_context(context)
assert _cid == cid
for tid, target in d['targets'].items():
target['pos'] = None if target['pos'] is None else Pos[target['pos']]
_tid = i.add_target(target)
assert _tid == tid
return i
class LexSubRankingTask(LexSubGenerationTask):
def __init__(self, substitutes_lemmatized, *args, **kwargs):
super().__init__(*args, **kwargs)
if type(substitutes_lemmatized) != bool:
raise ValueError('Substitutes lemmatized must be True or False')
self.substitutes_lemmatized = substitutes_lemmatized
self.__sid_to_substitute = {}
self.__tid_to_sids = defaultdict(set)
def stats(self):
return super().stats() + (len(self.__sid_to_substitute),)
def id(self):
return 'rt:' + _dict_checksum({
'generation_task_id': super().id(),
'substitutes': sorted(self.all_substitute_ids()),
'substitutes_lemmatized': self.substitutes_lemmatized
})
@classmethod
def create_substitute(cls, target_id, substitute_str, extra=None):
if not target_id.startswith('t:'):
raise ValueError('Invalid target ID')
# TODO: Make sure substitute_str.strip() == substitute_str?
if type(substitute_str) != str or len(substitute_str) == 0:
raise ValueError('Invalid substitute string')
if extra is not None:
try:
json.dumps(extra)
except:
raise ValueError('Extra information must be JSON serializable')
substitute = {
'target_id': target_id,
'substitute': substitute_str
}
if extra is not None:
substitute['extra'] = extra
return substitute
@classmethod
def substitute_id(cls, substitute):
return 's:' + _dict_checksum({
'target_id': substitute['target_id'],
# TODO: Change this? (e.g. for acronyms)?
# NOTE: Substitute is case-insensitive (because context has case info)
'substitute': substitute['substitute'].lower()
})
def has_substitute(self, substitute_id):
return substitute_id in self.__sid_to_substitute
def get_substitute(self, substitute_id):
if not self.has_substitute(substitute_id):
raise ValueError('Invalid substitute ID')
return self.__sid_to_substitute[substitute_id]
def add_substitute(self, substitute_or_target_id, substitute_str=None, extra=None, update_ok=False):
if type(substitute_or_target_id) == dict:
if any([kwarg is not None for kwarg in [substitute_str, extra]]):
raise ValueError()
substitute = substitute_or_target_id
substitute = self.create_substitute(substitute['target_id'], substitute['substitute'], extra=substitute.get('extra'))
else:
if substitute_str is None:
raise ValueError()
substitute = self.create_substitute(substitute_or_target_id, substitute_str, extra=extra)
sid = self.substitute_id(substitute)
if not update_ok and sid in self.__sid_to_substitute:
raise ValueError('Substitute ID already exists')
target_id = substitute['target_id']
if not self.has_target(target_id):
raise ValueError('Invalid target ID')
self.__sid_to_substitute[sid] = substitute
self.__tid_to_sids[target_id].add(sid)
return sid
def all_substitute_ids(self, target_id=None):
if target_id is not None:
if not self.has_target(target_id):
raise ValueError('Invalid target ID')
return self.__tid_to_sids[target_id]
else:
return self.__sid_to_substitute.keys()
def get_ranker_inputs(self, substitute_id):
if not self.has_substitute(substitute_id):
raise ValueError('Invalid substitute ID')
substitute = self.get_substitute(substitute_id)
target = self.get_target(substitute['target_id'])
context = self.get_context(target['context_id'])
return {
'context': context['context'],
'target': target['target'],
'target_offset': target['offset'],
'target_pos': target['pos'],
'substitute': substitute['substitute'],
'substitute_lemmatized': self.substitutes_lemmatized
}
def as_dict(self):
result = super().as_dict()
result.update({
'substitutes': copy.deepcopy(self.__sid_to_substitute),
'substitutes_lemmatized': self.substitutes_lemmatized,
})
return result
@classmethod
def from_dict(cls, d):
i = cls(
substitutes_lemmatized=d['substitutes_lemmatized'],
extra=d.get('extra'))
# TODO: Any way to use super here?
for cid, context in d['contexts'].items():
_cid = i.add_context(context)
assert _cid == cid
for tid, target in d['targets'].items():
target['pos'] = None if target['pos'] is None else Pos[target['pos']]
_tid = i.add_target(target)
assert _tid == tid
for sid, substitute in d['substitutes'].items():
_sid = i.add_substitute(substitute)
assert _sid == sid
return i
class LexSubDataset(LexSubRankingTask):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.__sid_to_labels = {}
def stats(self, include_uninformative_labels=False):
allow_list = [Label.TRUE, Label.TRUE_IMPLICIT, Label.FALSE]
if include_uninformative_labels:
allow_list.extend([Label.FALSE_IMPLICIT, Label.UNSURE])
num_labels = sum([len([l for l in labels if l in allow_list]) for labels in self.__sid_to_labels.values()])
return super().stats() + (num_labels,)
def id(self):
return 'd:' + _dict_checksum({
'ranking_task_id': super().id(),
'substitute_labels': sorted([(sid, [l.name for l in labels]) for sid, labels in self.__sid_to_labels.items()], key=lambda x: x[0])
})
def get_substitute_labels(self, substitute_id):
if not self.has_substitute(substitute_id):
raise ValueError('Invalid substitute ID')
return self.__sid_to_labels[substitute_id]
def add_substitute(self, substitute_or_target_id, labels_or_substitute_str, labels=None, extra=None, update_ok=False):
if type(substitute_or_target_id) == dict:
if any([kwarg is not None for kwarg in [labels, extra]]):
raise ValueError()
labels = labels_or_substitute_str
sid = super().add_substitute(substitute_or_target_id, update_ok=update_ok)
else:
if any([kwarg is None for kwarg in [labels_or_substitute_str, labels]]):
raise ValueError()
sid = super().add_substitute(substitute_or_target_id, labels_or_substitute_str, extra=extra, update_ok=update_ok)
if labels is None or len(labels) == 0:
raise ValueError('Labels must not be empty')
if sid in self.__sid_to_labels:
old_labels = self.__sid_to_labels[sid]
if labels[:len(old_labels)] != old_labels:
raise ValueError('Labels should only be updated')
self.__sid_to_labels[sid] = labels
return sid
def as_dict(self):
result = super().as_dict()
result.update({
'substitute_labels': {sid:[l.name for l in labels] for sid, labels in self.__sid_to_labels.items()}
})
return result
@classmethod
def from_dict(cls, d):
i = cls(
substitutes_lemmatized=d['substitutes_lemmatized'],
extra=d.get('extra'))
# TODO: Any way to use super here?
for cid, context in d['contexts'].items():
_cid = i.add_context(context)
assert _cid == cid
for tid, target in d['targets'].items():
target['pos'] = None if target['pos'] is None else Pos[target['pos']]
_tid = i.add_target(target)
assert _tid == tid
for sid, substitute in d['substitutes'].items():
_sid = i.add_substitute(substitute, [Label[l] for l in d['substitute_labels'][sid]])
assert _sid == sid
return i
def as_ait(self):
d = {
'ss': [],
}
cid_to_s_attrs = {}
for cid in self.all_context_ids():
context = self.get_context(cid)
s_attrs = {
'id': cid,
's': context['context'],
'extra': context.get('extra'),
'ws': []
}
try:
s_attrs['split'] = context['extra']['split']
except:
pass
cid_to_s_attrs[cid] = s_attrs
d['ss'].append(s_attrs)
tid_to_w_attrs = {}
for tid in self.all_target_ids():
target = self.get_target(tid)
w_attrs = {
'id': tid,
'w': target['target'],
'off': target['offset'],
'pos': [_POS_TO_AIT_POS[target['pos']]],
'extra': target.get('extra'),
'wprimes': []
}
tid_to_w_attrs[tid] = w_attrs
cid_to_s_attrs[target['context_id']]['ws'].append(w_attrs)
for sid in self.all_substitute_ids():
substitute = self.get_substitute(sid)
labels = self.get_substitute_labels(sid)
wp_attrs = {
'id': sid,
'wprime': substitute['substitute'],
'human_labels': [l.name for l in labels]
}
if substitute.get('extra') is not None:
wp_attrs['extra'] = substitute.get('extra')
tid_to_w_attrs[substitute['target_id']]['wprimes'].append(wp_attrs)
d['wprimes_lemmatized'] = self.substitutes_lemmatized
if self.extra is not None:
d['extra'] = self.extra
return d
@classmethod
def from_ait(cls, d):
i = cls(
substitutes_lemmatized=d.get('wprimes_lemmatized', False),
extra=d.get('extra'))
for s_attrs in d['ss']:
split = s_attrs.get('split')
extra = s_attrs.get('extra')
if split is not None:
if extra is None:
extra = {}
extra['split'] = split
cid = i.add_context(s_attrs['s'], extra=extra, update_ok=True)
for w_attrs in s_attrs['ws']:
try:
pos = _AIT_POS_TO_POS[w_attrs['pos'][0]]
except Exception as e:
print(w_attrs['pos'])
raise e
# TODO: Add rest of POS list?
tid = i.add_target(
cid,
w_attrs['w'],
w_attrs['off'],
pos=pos,
extra=w_attrs.get('extra'),
update_ok=True)
for wp_attrs in w_attrs['wprimes']:
sid = LexSubDataset.substitute_id(LexSubDataset.create_substitute(tid, wp_attrs['wprime']))
labels = [Label[l] for l in wp_attrs['human_labels']]
if i.has_substitute(sid):
labels = i.get_substitute_labels(sid) + labels
i.add_substitute(
tid,
wp_attrs['wprime'],
labels,
extra=wp_attrs.get('extra'),
update_ok=True)
return i
class LexSubResult:
def __init__(self, substitutes_lemmatized):
self.substitutes_lemmatized = substitutes_lemmatized
self.__tid_to_substitutes = {}
def __len__(self):
return len(self.__tid_to_substitutes)
def has_substitutes(self, target_id):
return target_id in self.__tid_to_substitutes
def get_substitutes(self, target_id):
if not self.has_substitutes(target_id):
raise ValueError('Invalid target ID')
return self.__tid_to_substitutes[target_id]
def _process_substitutes(self, target_id, substitutes):
if not target_id.startswith('t:'):
raise ValueError('Invalid target ID')
processed = []
for i, substitute in enumerate(substitutes):
if type(substitute) in [tuple, list] and len(substitute) == 2:
substitute, score = substitute
try:
score = float(score)
except:
raise ValueError('Invalid score')
processed.append((substitute, score))
elif type(substitute) == str:
processed.append((substitute, float(-i)))
else:
raise ValueError('Substitute must be (str, float) tuple or str')
processed = sorted(processed, key=lambda x: -x[1])
return processed
def add_substitutes(self, target_id, substitutes):
self.__tid_to_substitutes[target_id] = self._process_substitutes(target_id, substitutes)
def all_target_ids(self):
return self.__tid_to_substitutes.keys()
def iter_ranker_input(self, batch_size=None, sort=True, sort_by='context_len_descending'):
raise NotImplementedError()
"""
if sort:
if sort_by == 'context_len_descending':
cid_to_sids = defaultdict(list)
for sid in self.all_substitute_ids(iter_ok=True):
substitute = self.get_substitute(sid)
cid = self.get_target(substitute['target_id'])['context_id']
cid_to_sids[cid].append(sid)
cids_sorted = sorted(cid_to_sids.keys(), key=lambda x: -len(self.get_context(x)['context']))
sids = []
for cid in cids_sorted:
sids.extend(cid_to_sids[cid])
else:
raise ValueError()
else:
sids = self.all_substitute_ids()
if batch_size is None:
for sid in sids:
yield sid, self.get_ranker_inputs(sid)
else:
for i in range(0, len(sids), batch_size):
yield [(sid, self.get_ranker_inputs(sid)) for sid in sids[i:i+batch_size]]
"""
def as_dict(self):
return {
'substitutes_lemmatized': self.substitutes_lemmatized,
'substitutes': copy.deepcopy(self.__tid_to_substitutes)
}
@classmethod
def from_dict(cls, d):
i = cls(substitutes_lemmatized=d['substitutes_lemmatized'])
for tid, substitutes in d['substitutes'].items():
i.add_substitutes(tid, substitutes)
return i
class LexSubNoDuplicatesResult(LexSubResult):
def add_substitutes(self, target_id, substitutes):
processed = self._process_substitutes(target_id, substitutes)
if len(set([s.lower() for s, _ in processed])) != len(processed):
raise ValueError('Duplicate substitutes encountered')
super().add_substitutes(target_id, processed)
@classmethod
def from_dict(cls, d, aggregate_fn=lambda l: max(l)):
i = cls(substitutes_lemmatized=d['substitutes_lemmatized'])
for tid, substitutes in d['substitutes'].items():
substitute_lowercase_to_substitutes_and_scores = defaultdict(list)
for substitute, score in substitutes:
substitute_lowercase_to_substitutes_and_scores[substitute.lower()].append((substitute, score))
deduped = []
for _, substitutes_and_scores in substitute_lowercase_to_substitutes_and_scores.items():
substitute = sorted([sub for sub, _ in substitutes_and_scores], key=lambda x: sum(1 for c in x if x.isupper()))[-1]
score = aggregate_fn([score for _, score in substitutes_and_scores])
deduped.append((substitute, score))
i.add_substitutes(tid, deduped)
return i
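# Illustrative only: a minimal end-to-end sketch of the dataset API above,
# using a made-up context/target/substitute. Guarded so importing the
# package stays side-effect free.
if __name__ == '__main__':
    dataset = LexSubDataset(substitutes_lemmatized=True)
    cid = dataset.add_context('The quick fox jumped.')
    tid = dataset.add_target(cid, 'quick', 4, pos=Pos.ADJ)
    dataset.add_substitute(tid, 'fast', labels=[Label.TRUE])
    print(dataset.stats())  # (num contexts, targets, substitutes, labels)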
| 31.610193 | 137 | 0.657066 | 3,117 | 22,949 | 4.591594 | 0.095605 | 0.031861 | 0.011948 | 0.01076 | 0.514184 | 0.398407 | 0.320151 | 0.271241 | 0.251118 | 0.23903 | 0 | 0.00257 | 0.219966 | 22,949 | 725 | 138 | 31.653793 | 0.796939 | 0.028847 | 0 | 0.306667 | 0 | 0 | 0.123209 | 0.011247 | 0 | 0 | 0 | 0.001379 | 0.015 | 1 | 0.088333 | false | 0.001667 | 0.008333 | 0.028333 | 0.223333 | 0.001667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a69603d25cb114ace361699985f470a1488e41a6 | 7,239 | py | Python | src/gumbel_social_transformer/st_model_tcn.py | tedhuang96/gst | ac300d34e17fa2d6639c1df329ac1e8f80bccaec | [
"MIT"
] | 8 | 2021-11-28T21:16:27.000Z | 2022-03-22T06:56:16.000Z | src/gumbel_social_transformer/st_model_tcn.py | tedhuang96/gst | ac300d34e17fa2d6639c1df329ac1e8f80bccaec | [
"MIT"
] | null | null | null | src/gumbel_social_transformer/st_model_tcn.py | tedhuang96/gst | ac300d34e17fa2d6639c1df329ac1e8f80bccaec | [
"MIT"
] | null | null | null | import torch
import torch.nn as nn
from src.gumbel_social_transformer.gumbel_social_transformer import GumbelSocialTransformer
from src.gumbel_social_transformer.temporal_convolution_net import TemporalConvolutionNet
def offset_error_square_full_partial(x_pred, x_target, loss_mask_ped, loss_mask_pred_seq):
assert x_pred.shape[0] == loss_mask_ped.shape[0] == loss_mask_pred_seq.shape[0] == 1
assert x_pred.shape[1] == x_target.shape[1] == loss_mask_pred_seq.shape[2]
assert x_pred.shape[2] == x_target.shape[2] == loss_mask_ped.shape[1] == loss_mask_pred_seq.shape[1]
assert x_pred.shape[3] == x_target.shape[3] == 2
loss_mask_rel_pred = loss_mask_pred_seq.permute(0, 2, 1).unsqueeze(-1)
x_pred_m = x_pred * loss_mask_rel_pred
x_target_m = x_target * loss_mask_rel_pred
x_pred_m = x_pred_m * loss_mask_ped.unsqueeze(1).unsqueeze(-1)
x_target_m = x_target_m * loss_mask_ped.unsqueeze(1).unsqueeze(-1)
pos_pred = torch.cumsum(x_pred_m, dim=1)
pos_target = torch.cumsum(x_target_m, dim=1)
offset_error_sq = (((pos_pred-pos_target)**2.).sum(3))[0]
eventual_loss_mask = loss_mask_rel_pred[0,:,:,0] * loss_mask_ped[0]
offset_error_sq = offset_error_sq * eventual_loss_mask
return offset_error_sq, eventual_loss_mask
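def _correlated_sample_check(n_samples=100000, corr=0.6):
    """Illustrative helper (added, never called by the model): verifies the
    reparameterization used in st_model.sample_gaussian below, where
    y = corr*z1 + sqrt(1 - corr^2)*z2 turns two independent standard normals
    into a pair whose correlation is corr."""
    z = torch.empty(n_samples, 2).normal_()
    x = z[:, 0]
    y = corr * z[:, 0] + ((1. - corr ** 2.) ** 0.5) * z[:, 1]
    # Empirical Pearson correlation; should approach `corr` as n grows.
    empirical = ((x - x.mean()) * (y - y.mean())).mean() / (x.std() * y.std())
    return empirical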
class st_model(nn.Module):
def __init__(self, args, device='cuda:0'):
super(st_model, self).__init__()
if args.spatial == 'gumbel_social_transformer':
self.node_embedding = nn.Linear(args.motion_dim, args.embedding_size).to(device)
self.edge_embedding = nn.Linear(args.motion_dim, 2 * args.embedding_size).to(device)
self.gumbel_social_transformer = GumbelSocialTransformer(
args.embedding_size,
args.spatial_num_heads,
args.spatial_num_heads_edges,
args.spatial_num_layers,
dim_feedforward=128,
dim_hidden=32,
dropout=0.1,
activation="relu",
attn_mech="vanilla",
ghost=args.ghost,
).to(device)
else:
raise RuntimeError('The spatial component is not found.')
if args.temporal == 'temporal_convolution_net':
self.temporal_conv_net = TemporalConvolutionNet(
in_channels=args.embedding_size,
out_channels=args.output_dim,
dim_hidden=32,
nconv=6,
obs_seq_len=args.obs_seq_len,
pred_seq_len=args.pred_seq_len).to(device)
else:
raise RuntimeError('The temporal component is not tcn.')
self.args = args
def raw2gaussian(self, prob_raw):
mu = prob_raw[:,:,:,:2]
sx, sy = torch.exp(prob_raw[:,:,:,2:3]), torch.exp(prob_raw[:,:,:,3:4])
corr = torch.tanh(prob_raw[:,:,:,4:5])
gaussian_params = (mu, sx, sy, corr)
return gaussian_params
def sample_gaussian(self, gaussian_params, device='cuda:0', detach_sample=False, sampling=True):
mu, sx, sy, corr = gaussian_params
if sampling:
if detach_sample:
mu, sx, sy, corr = mu.detach(), sx.detach(), sy.detach(), corr.detach()
sample_unit = torch.empty(mu.shape).normal_().to(device)
sample_unit_x, sample_unit_y = sample_unit[:,:,:,0:1], sample_unit[:,:,:,1:2]
sample_x = sx*sample_unit_x
sample_y = corr*sy*sample_unit_x+((1.-corr**2.)**0.5)*sy*sample_unit_y
sample = torch.cat((sample_x, sample_y), dim=3)+mu
else:
sample = mu
return sample
def edge_evolution(self, xt_plus, At, device='cuda:0'):
xt_plus = xt_plus[0,0]
At = At[0, 0]
num_nodes, motion_dim = xt_plus.shape
xt_plus_row = torch.ones(num_nodes,num_nodes,motion_dim).to(device)*xt_plus.view(num_nodes,1,motion_dim)
xt_plus_col = torch.ones(num_nodes,num_nodes,motion_dim).to(device)*xt_plus.view(1,num_nodes,motion_dim)
At_plus = At + (xt_plus_row - xt_plus_col)
At_plus = At_plus.unsqueeze(0).unsqueeze(0)
return At_plus
def forward(self, x, A, attn_mask, loss_mask_rel, tau=1., hard=False, sampling=True, device='cuda:0'):
info = {}
loss_mask_per_pedestrian = (loss_mask_rel[0].sum(1)==self.args.obs_seq_len+self.args.pred_seq_len).float().unsqueeze(0)
if self.args.only_observe_full_period:
assert loss_mask_per_pedestrian.shape[0] == 1
attn_mask = []
for tt in range(self.args.obs_seq_len):
attn_mask.append(torch.outer(loss_mask_per_pedestrian[0], loss_mask_per_pedestrian[0]).float())
attn_mask = torch.stack(attn_mask, dim=0).unsqueeze(0)
if self.args.spatial == 'gumbel_social_transformer':
x_embedding = self.node_embedding(x)[0]
A_embedding = self.edge_embedding(A)[0]
attn_mask = attn_mask[0].permute(0,2,1)
xs, sampled_edges, edge_multinomial, attn_weights = self.gumbel_social_transformer(x_embedding, A_embedding, attn_mask, tau=tau, hard=hard, device=device)
xs = xs.unsqueeze(0)
info['sampled_edges'], info['edge_multinomial'], info['attn_weights'] = sampled_edges, edge_multinomial, attn_weights
else:
raise RuntimeError("The spatial component is not found.")
if self.args.only_observe_full_period:
loss_mask_rel_full_partial = loss_mask_per_pedestrian[0]
else:
loss_mask_rel_obs = loss_mask_rel[0,:,:self.args.obs_seq_len]
loss_mask_rel_full_partial = loss_mask_rel_obs[:,-1]
if self.args.decode_style == 'readout':
xs = xs * loss_mask_rel_obs.permute(1,0).unsqueeze(-1)
xs = xs * loss_mask_rel_full_partial.unsqueeze(-1)
if self.args.temporal == 'temporal_convolution_net':
prob_raw_pred = self.temporal_conv_net(xs)
else:
raise RuntimeError('The temporal component can only be tcn for readout decode_style.')
x_sample_pred, A_sample_pred = [], []
A_sample = A[:, -1:]
for tt in range(self.args.pred_seq_len):
prob_raw = prob_raw_pred[:, tt:tt+1]
gaussian_params = self.raw2gaussian(prob_raw)
x_sample = self.sample_gaussian(gaussian_params, device=device, detach_sample=self.args.detach_sample, sampling=sampling)
A_sample = self.edge_evolution(x_sample, A_sample, device=device)
x_sample_pred.append(x_sample)
A_sample_pred.append(A_sample)
x_sample_pred = torch.cat(x_sample_pred, dim=1)
A_sample_pred = torch.cat(A_sample_pred, dim=1)
gaussian_params_pred = self.raw2gaussian(prob_raw_pred)
info['A_sample_pred'] = A_sample_pred
info['loss_mask_rel_full_partial'] = loss_mask_rel_full_partial.unsqueeze(0)
info['loss_mask_per_pedestrian'] = loss_mask_per_pedestrian
results = (gaussian_params_pred, x_sample_pred, info)
return results
else:
raise RuntimeError("The decoder style is not found.") | 49.924138 | 166 | 0.644426 | 1,021 | 7,239 | 4.219393 | 0.152791 | 0.066852 | 0.038301 | 0.034123 | 0.368617 | 0.229573 | 0.127205 | 0.094243 | 0.048282 | 0.048282 | 0 | 0.018474 | 0.244785 | 7,239 | 145 | 167 | 49.924138 | 0.769526 | 0 | 0 | 0.085938 | 0 | 0 | 0.061188 | 0.020442 | 0 | 0 | 0 | 0 | 0.039063 | 1 | 0.046875 | false | 0 | 0.03125 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a69669d6bb96032de0f69b7cc74ce536e85ea475 | 7,598 | py | Python | my_source/__myglobal.py | IBNBlank/Shooting_Stars | 38a642ab5a6d1cd59c480f11ae8eea9c86192a46 | [
"MIT"
] | 3 | 2018-07-28T15:00:16.000Z | 2021-07-15T12:21:58.000Z | my_source/__myglobal.py | IBNBlank/Shooting_Stars | 38a642ab5a6d1cd59c480f11ae8eea9c86192a46 | [
"MIT"
] | null | null | null | my_source/__myglobal.py | IBNBlank/Shooting_Stars | 38a642ab5a6d1cd59c480f11ae8eea9c86192a46 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# @Author: IBNBlank
# @Date: 2018-07-22 19:56:30
# @Last Modified by: IBNBlank
# @Last Modified time: 2018-07-28 22:19:12
import pygame
from os import path
##### Color Define #####
COLOR = {
"BLACK": (0,0,0),
"WHITE": (255,255,255),
"RED": (255,0,0),
"YELLOW": (255,255,0),
"GREEN": (0,255,0),
"LIGHT_BLUE": (100,255,255)
}
##### Path Define #####
### image ###
image_dir = path.join(path.join(path.dirname(__file__), path.pardir), 'image')
player_img = path.join(image_dir, 'player')
block_img = path.join(image_dir, 'block')
plane_img = path.join(image_dir, 'plane')
bullet_img = path.join(image_dir, 'bullet')
powerup_img = path.join(image_dir, 'powerup')
ui_img = path.join(image_dir, 'ui')
explosion_img = path.join(image_dir, 'explosion')
player_one_img = path.join(player_img, 'player_one')
player_two_img = path.join(player_img, 'player_two')
player_three_img = path.join(player_img, 'player_three')
player_four_img = path.join(player_img, 'player_four')
block_one_img = path.join(block_img, 'block_one')
block_two_img = path.join(block_img, 'block_two')
block_three_img = path.join(block_img, 'block_three')
block_four_img = path.join(block_img, 'block_four')
plane_one_img = path.join(plane_img, 'plane_one')
plane_two_img = path.join(plane_img, 'plane_two')
plane_three_img = path.join(plane_img, 'plane_three')
bullet_my_img = path.join(bullet_img, 'my_bullet')
bullet_enemy_img = path.join(bullet_img, 'enemy_bullet')
bomb_my_img = path.join(bullet_img, 'my_bomb')
explosion_my_img = path.join(explosion_img, 'my_explosion')
explosion_enemy_img = path.join(explosion_img, 'enemy_explosion')
explosion_bullet_img = path.join(explosion_img, 'bullet_explosion')
atk_img = path.join(powerup_img, 'atk_up')
hp_img = path.join(powerup_img, 'hp_up')
speed_img = path.join(powerup_img, 'speed_up')
bomb_img = path.join(powerup_img, 'bomb_up')
life_img = path.join(powerup_img, 'life_up')
### music ###
music_dir = path.join(path.join(path.dirname(__file__), path.pardir), 'music')
##### Image Define #####
### Explosion ###
EXPLOSION_PLAYER_ANIMATION = []
for i in range(24):
if i < 10:
explosion_temp_img = pygame.image.load(
path.join(explosion_my_img, 'expl_11_000{0}.png'.format(i)))
else:
explosion_temp_img = pygame.image.load(
path.join(explosion_my_img, 'expl_11_00{0}.png'.format(i)))
explosion_temp_img = pygame.transform.rotozoom(explosion_temp_img, 0, 1.5)
EXPLOSION_PLAYER_ANIMATION.append(explosion_temp_img)
EXPLOSION_ENEMY_ANIMATION = []
for i in range(24):
if i < 10:
explosion_temp_img = pygame.image.load(
path.join(explosion_enemy_img, 'expl_02_000{0}.png'.format(i)))
else:
explosion_temp_img = pygame.image.load(
path.join(explosion_enemy_img, 'expl_02_00{0}.png'.format(i)))
explosion_temp_img = pygame.transform.rotozoom(explosion_temp_img, 0, 5.2)
EXPLOSION_ENEMY_ANIMATION.append(explosion_temp_img)
EXPLOSION_BULLET_ANIMATION = []
for i in range(9):
explosion_temp_img = pygame.image.load(
path.join(explosion_bullet_img, 'regularExplosion0{0}.png'.format(i)))
explosion_temp_img = pygame.transform.rotozoom(explosion_temp_img, 0, 0.5)
EXPLOSION_BULLET_ANIMATION.append(explosion_temp_img)
### Image ###
IMAGE = {
"BACKGROUND": {
"BACKGROUND_ONE": pygame.image.load(path.join(image_dir, 'background_one.png')),
"BACKGROUND_TWO": pygame.image.load(path.join(image_dir, 'background_two.jpg')),
"BACKGROUND_THREE": pygame.image.load(path.join(image_dir, 'background_three.jpg'))
},
"PLAYER_ONE": {
"ORIGIN": pygame.image.load(path.join(player_one_img, 'origin.png')),
"BLANK": pygame.image.load(path.join(player_one_img, 'blank.png'))
},
"PLAYER_TWO": {
"ORIGIN": pygame.image.load(path.join(player_two_img, 'origin.png')),
"BLANK": pygame.image.load(path.join(player_two_img, 'blank.png'))
},
"PLAYER_THREE": {
"ORIGIN": pygame.image.load(path.join(player_three_img, 'origin.png')),
"BLANK": pygame.image.load(path.join(player_three_img, 'blank.png'))
},
"PLAYER_FOUR": {
"ORIGIN": pygame.image.load(path.join(player_four_img, 'origin.png')),
"BLANK": pygame.image.load(path.join(player_four_img, 'blank.png'))
},
"BLOCK_ONE": pygame.image.load(path.join(block_one_img, 'origin.png')),
"BLOCK_TWO": pygame.image.load(path.join(block_two_img, 'origin.png')),
"BLOCK_THREE": pygame.image.load(path.join(block_three_img, 'origin.png')),
"BLOCK_FOUR": pygame.image.load(path.join(block_four_img, 'origin.png')),
"PLANE_ONE": pygame.image.load(path.join(plane_one_img, 'origin.png')),
"PLANE_TWO": pygame.image.load(path.join(plane_two_img, 'origin.png')),
"PLANE_THREE": pygame.image.load(path.join(plane_three_img, 'origin.png')),
"BULLET": {
"MY_BULLET": {
"BULLET": pygame.image.load(path.join(bullet_my_img, 'origin.png')),
"SHOT_LIGHT": pygame.image.load(path.join(bullet_my_img, 'shot_light.png'))
},
"ENEMY_BULLET": {
"BULLET": pygame.image.load(path.join(bullet_enemy_img, 'origin.png')),
"SHOT_LIGHT": pygame.image.load(path.join(bullet_enemy_img, 'shot_light.png'))
},
"MY_BOMB": pygame.image.load(path.join(bomb_my_img, 'origin.png'))
},
"MY_EXPLOSION": EXPLOSION_PLAYER_ANIMATION,
"ENEMY_EXPLOSION": EXPLOSION_ENEMY_ANIMATION,
"BULLET_EXPLOSION": EXPLOSION_BULLET_ANIMATION,
"POWER_UP": {
"ATK_UP": pygame.image.load(path.join(atk_img, 'origin.png')),
"HP_UP": pygame.image.load(path.join(hp_img, 'origin.png')),
"SPEED_UP": pygame.image.load(path.join(speed_img, 'origin.png')),
"BOMB_UP": pygame.image.load(path.join(bomb_img, 'origin.png')),
"LIFE_UP": pygame.image.load(path.join(life_img, 'origin.png'))
},
"UI":{
"BOMB_ICON": pygame.image.load(path.join(ui_img, 'bomb_icon.png'))
}
}
### Size Define ###
SIZE = {
"SCREEN": (600,800),
"PLAYER_ONE": (53,40),
"PLAYER_ONE_LIFE": (26,26),
"PLAYER_TWO": (53,40),
"PLAYER_TWO_LIFE": (26,26),
"PLAYER_THREE": (53,40),
"PLAYER_THREE_LIFE": (26,26),
"PLAYER_FOUR": (53,53),
"PLAYER_FOUR_LIFE": (26,26),
"PLANE_SMALL": (52,42),
"PLANE_MIDDLE": (104,84),
"PLANE_LARGE": (156,126),
"MY_BULLET": (10,60),
"ENEMY_BULLET": (9,50),
"MY_BOMB": (208,576),
"MY_SHOT_LIGHT": (50,50),
"ENEMY_SHOT_LIGHT": (40,40),
"POWER_UP": (30,30),
"BOMB_ICON": (26,26)
}
##### Music Define #####
MUSIC = {
"BACKGROUND": {
"BACKGROUND_ONE": path.join(music_dir, 'background_one.ogg'),
"BACKGROUND_TWO": path.join(music_dir, 'background_two.mp3'),
"BACKGROUND_THREE": path.join(music_dir, 'background_three.wav'),
},
"MY_SHOT": path.join(music_dir, 'my_shot.wav'),
"ENEMY_SHOT": path.join(music_dir, 'enemy_shot.wav'),
"EXPLOSION": path.join(music_dir, 'explosion.wav'),
"POWER_UP":path.join(music_dir, 'power_up.ogg')
}
##### Variable Define #####
FPS = 60
score = 0
enemy_time = 2000
last_time = 0
joystick_flag = True
scene_flag = 1
last_scene_flag = 1
##### Draw Text Define #####
def draw_text(text, surface, color, size, x, y):
    # match_font() expects a font family name; for an unknown name it returns
    # None, in which case pygame.font.Font falls back to the default font.
    font_name = pygame.font.match_font('my_font.ttf')
font = pygame.font.Font(font_name, size)
text_surface = font.render(text, True, color)
text_rect = text_surface.get_rect()
    text_rect.midtop = (x, y)
surface.blit(text_surface, text_rect) | 39.572917 | 91 | 0.672809 | 1,101 | 7,598 | 4.355132 | 0.128065 | 0.123462 | 0.106361 | 0.134724 | 0.556413 | 0.448175 | 0.294473 | 0.279458 | 0.199583 | 0.189572 | 0 | 0.031449 | 0.154646 | 7,598 | 192 | 92 | 39.572917 | 0.715086 | 0.035931 | 0 | 0.077844 | 0 | 0 | 0.209019 | 0.00332 | 0 | 0 | 0 | 0 | 0 | 1 | 0.005988 | false | 0 | 0.011976 | 0 | 0.017964 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a69754150f6815f08c8e7d520f80806f794c55ac | 1,845 | py | Python | python/example.py | LenakeTech/crowdin-hybrid-sso-examples | 7da0868f6537d6abb2f6004f428e15002535eb8d | [
"Apache-2.0"
] | 1 | 2021-06-08T14:29:53.000Z | 2021-06-08T14:29:53.000Z | python/example.py | LenakeTech/crowdin-hybrid-sso-examples | 7da0868f6537d6abb2f6004f428e15002535eb8d | [
"Apache-2.0"
] | null | null | null | python/example.py | LenakeTech/crowdin-hybrid-sso-examples | 7da0868f6537d6abb2f6004f428e15002535eb8d | [
"Apache-2.0"
] | 2 | 2021-03-31T02:59:45.000Z | 2021-09-08T10:54:19.000Z | # Copyright 2019 Crowdin
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# -*- coding: utf-8 -*-
import json
import base64, urllib, datetime
from time import mktime
from Crypto.Cipher import AES
def get_user_data(projects, registered):
data = {
'user_id': "12345678901",
'login': "johndoe",
'user_email': "john.doe@mail.com",
'display_name': "John Doe",
'locale': "en_US",
'gender': 1,
'projects': ",".join(projects),
'expiration': mktime((datetime.datetime.now() + datetime.timedelta(minutes=20)).timetuple()),
'languages': "uk,ro,fr",
'role': 0,
'redirect_to': "https://crowdin.com/project/docx-project"
}
    return data
def encrypt(data, api_key):
iv = api_key[16:32]
api_key = api_key[0:16]
data = json.dumps(data)
length = 16 - (len(data) % 16)
data += chr(length)*length
encryptor = AES.new(api_key, AES.MODE_CBC, iv)
d = encryptor.encrypt(data)
base64enc = base64.b64encode(d)
return urllib.pathname2url(base64enc)
basepath = "https://crowdin.com/join"
owner_login = " -- OWNERS LOGIN -- "
api_key = " -- OWNERS API KEY -- "
projects = ["docx-project", "csv-project"]
hash_part = encrypt(get_user_data(projects, False), api_key)
link = "%s?h=%s&uid=%s" % (basepath, hash_part, owner_login)
if len(link) > 2000:
raise Exception("Link is too long.")
| 30.245902 | 97 | 0.692683 | 266 | 1,845 | 4.725564 | 0.567669 | 0.038186 | 0.020684 | 0.025457 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.032552 | 0.16748 | 1,845 | 60 | 98 | 30.75 | 0.785807 | 0.304607 | 0 | 0 | 0 | 0 | 0.240536 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0.131579 | 0 | 0.236842 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a69792fc01aa5b8cfd3d5c9274c48d77d86db356 | 6,928 | py | Python | chrischan-master/chrischan-master/main.py | b9king/Discord-Bots | e6b08eeeb8de0952726883cfce0717d4866eacc9 | [
"MIT"
] | null | null | null | chrischan-master/chrischan-master/main.py | b9king/Discord-Bots | e6b08eeeb8de0952726883cfce0717d4866eacc9 | [
"MIT"
] | null | null | null | chrischan-master/chrischan-master/main.py | b9king/Discord-Bots | e6b08eeeb8de0952726883cfce0717d4866eacc9 | [
"MIT"
] | null | null | null |
import discord
import helper
from helper import *
client=discord.Client()
@client.event
async def on_ready():
print('logged in as')
print(client.user.name)
print(client.user.id)
print('-----')
@client.event
async def on_guild_join(guild):
    name = "**<:Png:590089990780878848> Christian Weston Chandler Bot**"
    # Command descriptions shown in the welcome message below.
    command1 = "~Qotn Get the *Quote Of The Now*"
    command2 = "~Begging Get Chris' begging stats"
    command3 = "~Tdic Get the *This Day In Christory*"
    command4 = "~Dyk Get the *Did You Know* about Chris"
    command5 = "~Aotn Get the *Article of the Now*"
    command6 = "~Cwcki (name) will try to summarize an article for you and link you it"
    command7 = "~Christorian gives you the link to dive into the rabbit hole!"
    join_message = """Hello {}
I'm {}, created by b9king#6857 with help from I am Moonslice#4132
My commands are:
{}
{}
{}
{}
{}
{}
{}
You can support my creator here: https://www.patreon.com/b9king
""".format(guild.name, name, command1, command2, command3, command4, command5, command6, command7)
    # Send the welcome message to the first text channel the bot may speak in.
    for channel in guild.text_channels:
        if channel.permissions_for(guild.me).send_messages:
            await channel.send(join_message)
            break
@client.event
async def on_message(message):
if message.content.startswith("(debug 124)"):
x = message.content.replace("(debug 124)","")
await client.change_presence(status=discord.Status.online, activity=discord.Game(x))
elif message.content == "<@590092097609138196>":
name = "**{}**".format(message.mentions[0].name)
command1 = "~Qotn Get the *Quote Of The Now*"
command2 = "~Begging Get Chris' begging stats"
command3 = "~Tdic Get the *This Day In Christory*"
command4 = "~Dyk Get the *Did You Know* about Chris"
command5 = "~Aotn Get the *Article of the Now*"
command6 = "~Cwcki (name) will try to summarize an article for you and link you it"
command7 = "~Christorian gives you the link to dive into the rabbit hole!"
Helpmessage = """
**Thanks for adding me to {}**!
***Description***
I am the CWCki bot. I give out information about the mayor of CWCville, the beloved, Christian Weston Chandler. Please don't bully/ troll them, they need help more than anything at this point.
***Commands***
{}
{}
{}
{}
{}
{}
{}
***Support The Creator***
Please support the creator by sharing me to other servers using the following link:
https://discordapp.com/api/oauth2/authorize?client_id=590092097609138196&permissions=0&scope=bot
or through the following links.
-<:patreon:630306170791395348> https://www.patreon.com/b9king
-<:paypal:630306883105849354> https://www.paypal.com/paypalme2/b9king
Or you can visit him here: https://benignking.xyz :heart:
""".format(message.guild.name,command1,command2,command3,command4,command5,command6,command7)
embed=discord.Embed(title="", url="https://www.patreon.com/b9king", description= Helpmessage, color=0x00ffff)
embed.set_thumbnail(url= message.mentions[0].avatar_url)
await message.channel.send(embed=embed)
if message.content == "~Qotn":
x = quoteOfTheNow()
embed=discord.Embed(title="", color=0x4eda12)
#embed.set_author(name="Quote Of The Now:", icon_url="https://files.catbox.moe/29zpbx.PNG")
embed.add_field(name="Quote Of THe Now:", value= x, inline=True)
await message.channel.send(embed=embed)
elif message.content == "~Begging":
x = chrisChanBegging()
z = ""
for i in x:
z += i + "\n"
embed=discord.Embed(title="", color=0x4eda12)
embed.add_field(name="Financhu:", value= z, inline=True)
await message.channel.send(embed=embed)
elif message.content == "~Tdic":
x = thisDayInChristory()
embed=discord.Embed(title="", color=0x4eda12)
embed.add_field(name="This Day In Christory:", value= x, inline=True)
await message.channel.send(embed=embed)
elif message.content == "~Dyk":
x = didYouKnow()
z = ""
for i in x:
z += "⚪��" + i + "\n"
embed=discord.Embed(title="", color=0x4eda12)
embed.add_field(name="Did You Know:", value= z, inline=True)
await message.channel.send(embed=embed)
elif message.content == "~Aotn":
x = articleOfTheNow()
link = x[1]
x = x[0]
embed=discord.Embed(title="Click here for article", color=0x4eda12, url = link)
embed.add_field(name="Article Of The Now:", value= x, inline=True)
await message.channel.send(embed=embed)
elif message.content.startswith( "~Cwcki"):
link = ""
z = message.content.replace("~Cwcki ","")
x = articleSummary(z)
embed=discord.Embed(title="", color=0x4eda12, url = link)
embed.add_field(name= z, value= x[0] + "\n" + x[1], inline=True)
await message.channel.send(embed=embed)
elif message.content == "~Christorian":
x = rabbitHole()
embed=discord.Embed(title="Become a Christorian", color=0x4eda12, url = x[0])
await message.channel.send(embed=embed)
#_________________________________________________________________
#________________Help Command_____________________________________
elif message.content.startswith("(debug 124 CWC)"):
x = message.content.replace("(debug 124 CWC)","")
await client.change_presence(status=discord.Status.online, activity=discord.Game(x))
elif message.content == "~Help CWC":
name = "**🎱 Fortune Teller**"
command1 = "~Qotn Get the *Quote Of The Now*"
command2 = "~Begging Get Chris' begging stats"
command3 = "~Tdic Get the *This Day In Christory*"
command4 = "~Dyk Get the *Did You Know* about Chris"
command5 = "~Aotn Get the *Article of the Now*"
command6 = "~Cwcki (name) will try to summarize an article for you and link you it"
command7 = "~Christorian gives you the link to dive into the rabbit hole!"
Helpmessage = """
**Thanks for adding me to {}**!
*I'm a bot that can give you a rundown on the creator of Sonichu*
My commands are:
{}
{}
{}
{}
{}
{}
{}
**Click the embed to support my creator**
""".format(message.guild.name,command1,command2,command3,command4,command5,command6,command7)
embed=discord.Embed(title="CWCki Bot Help", url="https://www.patreon.com/b9king", description= Helpmessage, color=0x00ffff)
embed.set_thumbnail(url="https://files.catbox.moe/pky9p3.png")
await message.channel.send(embed=embed)
# the bot token is a secret: read it from the environment (e.g. set
# DISCORD_TOKEN before launching) instead of committing it to the repository
client.run(os.environ["DISCORD_TOKEN"])
| 34.81407 | 200 | 0.604648 | 819 | 6,928 | 4.943834 | 0.267399 | 0.048407 | 0.04001 | 0.048901 | 0.601383 | 0.541615 | 0.476167 | 0.466288 | 0.433193 | 0.433193 | 0.000289 | 0.041147 | 0.270352 | 6,928 | 198 | 201 | 34.989899 | 0.758853 | 0.043303 | 0 | 0.464789 | 0 | 0.014085 | 0.38544 | 0.025072 | 0 | 0 | 0.010874 | 0 | 0 | 1 | 0 | false | 0 | 0.021127 | 0 | 0.021127 | 0.028169 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6985ed0535d74a9aa8aab737b5e80423499a130 | 6,433 | py | Python | qtvscodestyle/vscode/color_registry_manager.py | Greatness7/QtVSCodeStyle | 2654ca967c7ae5db3ce3fb46657ace9f1104f6b9 | [
"MIT"
] | 8 | 2021-10-04T00:21:25.000Z | 2022-03-14T19:57:03.000Z | qtvscodestyle/vscode/color_registry_manager.py | Greatness7/QtVSCodeStyle | 2654ca967c7ae5db3ce3fb46657ace9f1104f6b9 | [
"MIT"
] | null | null | null | qtvscodestyle/vscode/color_registry_manager.py | Greatness7/QtVSCodeStyle | 2654ca967c7ae5db3ce3fb46657ace9f1104f6b9 | [
"MIT"
] | 3 | 2021-11-15T23:58:33.000Z | 2022-02-01T18:50:01.000Z | # =============================================================================================
# QtVSCodeStyle.
#
# Copyright (c) 2015- Microsoft Corporation
# Copyright (c) 2021- Yunosuke Ohsugi
#
# Distributed under the terms of the MIT License.
# See https://github.com/microsoft/vscode/blob/main/LICENSE.txt
#
# Original code:
# https://github.com/microsoft/vscode/blob/main/src/vs/platform/theme/common/colorRegistry.ts
#
# (see NOTICE.md in the QtVSCodeStyle root directory for details)
# =============================================================================================
from __future__ import annotations
from enum import Enum, auto
from typing import Optional, Union
from qtvscodestyle.vscode.color import RGBA, Color
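# Subclassing str lets _resolve_color_value tell a registered color id apart
# from a literal color string such as "#ff0000", since it dispatches on the
# exact type() of the value.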
class _ColorIdentifier(str):
pass
_ColorValue = Union[Color, str, _ColorIdentifier, dict, None]
class _ColorTransformType(Enum):
Darken = auto()
Lighten = auto()
Transparent = auto()
OneOf = auto()
LessProminent = auto()
IfDefinedThenElse = auto()
class ColorRegistry:
_default_colors: dict[str, dict[str, _ColorValue]] = {"dark": {}, "light": {}, "hc": {}}
def __init__(self) -> None:
self._colors = {
"dark": ColorRegistry._default_colors["dark"].copy(),
"light": ColorRegistry._default_colors["light"].copy(),
"hc": ColorRegistry._default_colors["hc"].copy(),
}
@classmethod
def _register_default_color(cls, id: str, defaults: Union[dict[str, _ColorValue], None]) -> _ColorIdentifier:
cls._default_colors["dark"][id] = None if defaults is None else defaults["dark"]
cls._default_colors["light"][id] = None if defaults is None else defaults["light"]
cls._default_colors["hc"][id] = None if defaults is None else defaults["hc"]
return _ColorIdentifier(id)
def register_color(self, id: str, color: str, theme: str) -> None:
if self._colors[theme].get(id):
self._colors[theme][id] = color
def get_colors(self, theme: str) -> dict[str, Optional[Color]]:
colors_resolved = {}
for id, color_value in self._colors[theme].items():
color_value_resolved = self._resolve_color_value(color_value, theme)
colors_resolved[id] = color_value_resolved
return colors_resolved
def _resolve_color_value(self, color_value: _ColorValue, theme: str) -> Union[Color, None]:
if color_value is None:
return None
elif type(color_value) is str:
if color_value == "transparent":
return Color(RGBA(0, 0, 0, 0))
return Color.from_hex(color_value)
elif type(color_value) is Color:
return color_value
elif type(color_value) is _ColorIdentifier:
return self._resolve_color_value(self._colors[theme][color_value], theme)
elif type(color_value) is dict:
return self._execute_transform(color_value, theme)
    def _is_defined(self, color_id: _ColorIdentifier, theme: str) -> bool:
        return ColorRegistry._default_colors[theme][color_id] != self._colors[theme][color_id]
def _execute_transform(self, transform: dict, theme: str) -> Union[Color, None]: # noqa: C901
if transform["op"] is _ColorTransformType.Darken:
color_value = self._resolve_color_value(transform["value"], theme)
if type(color_value) is Color:
return color_value.darken(transform["factor"])
elif transform["op"] is _ColorTransformType.Lighten:
color_value = self._resolve_color_value(transform["value"], theme)
if type(color_value) is Color:
return color_value.lighten(transform["factor"])
elif transform["op"] is _ColorTransformType.Transparent:
color_value = self._resolve_color_value(transform["value"], theme)
if type(color_value) is Color:
return color_value.transparent(transform["factor"])
elif transform["op"] is _ColorTransformType.OneOf:
for candidate in transform["values"]:
color = self._resolve_color_value(candidate, theme)
if color:
return color
elif transform["op"] is _ColorTransformType.IfDefinedThenElse:
return self._resolve_color_value(
transform["then"] if self._is_defines(transform["if_"], theme) else transform["else_"],
theme,
)
elif transform["op"] is _ColorTransformType.LessProminent:
from_ = self._resolve_color_value(transform["value"], theme)
if not from_:
return None
background_color = self._resolve_color_value(transform["background"], theme)
if not background_color:
return from_.transparent(transform["factor"] * transform["transparency"])
if from_.is_darker_than(background_color):
color = Color.get_lighter_color(from_, background_color, transform["factor"])
else:
color = Color.get_darker_color(from_, background_color, transform["factor"])
return color.transparent(transform["transparency"])
return None
register_color = ColorRegistry._register_default_color
def darken(color_value: _ColorValue, factor: float) -> dict:
return {"op": _ColorTransformType.Darken, "value": color_value, "factor": factor}
def lighten(color_value: _ColorValue, factor: float) -> dict:
return {"op": _ColorTransformType.Lighten, "value": color_value, "factor": factor}
def transparent(color_value: _ColorValue, factor: float) -> dict:
return {"op": _ColorTransformType.Transparent, "value": color_value, "factor": factor}
def one_of(*color_values: _ColorValue) -> dict:
return {"op": _ColorTransformType.OneOf, "values": list(color_values)}
def if_defined_then_else(if_arg: _ColorIdentifier, then_arg: _ColorValue, else_arg: _ColorValue) -> dict:
return {"op": _ColorTransformType.IfDefinedThenElse, "if_": if_arg, "then": then_arg, "else_": else_arg}
def less_prominent(
color_value: _ColorValue, background_color_value: _ColorValue, factor: float, transparency: float
) -> dict:
return {
"op": _ColorTransformType.LessProminent,
"value": color_value,
"background": background_color_value,
"factor": factor,
"transparency": transparency,
}
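

# --- Illustrative usage (not part of the original module) ---
# A minimal sketch of how the helpers above compose; the hex value and the
# 0.2 / 0.8 factors are made-up examples. Each helper returns a plain dict
# that ColorRegistry resolves lazily, so derived colors can be declared
# before any concrete colors exist:
#
#   hover = transparent(darken("#0e639c", 0.2), 0.8)
#   registry = ColorRegistry()
#   resolved = registry._resolve_color_value(hover, "dark")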
| 41.237179 | 113 | 0.645733 | 705 | 6,433 | 5.625532 | 0.164539 | 0.110943 | 0.042864 | 0.047655 | 0.373424 | 0.266515 | 0.224155 | 0.159102 | 0.113464 | 0.067322 | 0 | 0.002979 | 0.217162 | 6,433 | 155 | 114 | 41.503226 | 0.784551 | 0.089849 | 0 | 0.082569 | 0 | 0 | 0.048296 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.119266 | false | 0.009174 | 0.036697 | 0.06422 | 0.46789 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a69a0ad61363ec1a9cf3a9af6dcd2a95de3f22fa | 495 | py | Python | python/test_region.py | csd2022fuchuang/yolov5-opencv-cpp-python | 5b52dbffed6733a1353bd27a0001c09821ee0714 | [
"MIT"
] | null | null | null | python/test_region.py | csd2022fuchuang/yolov5-opencv-cpp-python | 5b52dbffed6733a1353bd27a0001c09821ee0714 | [
"MIT"
] | null | null | null | python/test_region.py | csd2022fuchuang/yolov5-opencv-cpp-python | 5b52dbffed6733a1353bd27a0001c09821ee0714 | [
"MIT"
] | 1 | 2022-03-24T09:01:45.000Z | 2022-03-24T09:01:45.000Z | from shapely.geometry import Polygon
import cv2
image = cv2.imread("a.png")
window_name = 'Image'
# Center coordinates
center_coordinates = (120, 50)
# Radius of circle
radius = 20
# Blue color in BGR
color = (255, 0, 0)
# Line thickness of 2 px
thickness = 2
# Using cv2.circle() method
# Draw a circle with blue line borders of thickness of 2 px
image = cv2.circle(image, center_coordinates, radius, color, thickness)
# Displaying the image
cv2.imshow(window_name, image)
cv2.waitKey(0) | 19.8 | 71 | 0.739394 | 77 | 495 | 4.701299 | 0.506494 | 0.088398 | 0.082873 | 0.077348 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.053269 | 0.165657 | 495 | 25 | 72 | 19.8 | 0.823245 | 0.365657 | 0 | 0 | 0 | 0 | 0.032573 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.181818 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a69d50893a1048337e98b47e928f156375ca87ed | 8,247 | py | Python | pyNastran/converters/cart3d/test_cart3d.py | jtran10/pyNastran | 4aed8e05b91576c2b50ee835f0497a9aad1d2cb0 | [
"BSD-3-Clause"
] | null | null | null | pyNastran/converters/cart3d/test_cart3d.py | jtran10/pyNastran | 4aed8e05b91576c2b50ee835f0497a9aad1d2cb0 | [
"BSD-3-Clause"
] | null | null | null | pyNastran/converters/cart3d/test_cart3d.py | jtran10/pyNastran | 4aed8e05b91576c2b50ee835f0497a9aad1d2cb0 | [
"BSD-3-Clause"
] | null | null | null | """tests non-gui related Cart3d class/interface"""
from __future__ import print_function
import os
import unittest
from numpy import array_equal, allclose
import pyNastran
from pyNastran.converters.cart3d.cart3d import read_cart3d
from pyNastran.converters.cart3d.cart3d_to_nastran import cart3d_to_nastran_filename, cart3d_to_nastran_model
from pyNastran.converters.cart3d.cart3d_to_stl import cart3d_to_stl_filename
from pyNastran.converters.cart3d.cart3d_to_tecplot import cart3d_to_tecplot
from pyNastran.converters.cart3d.input_c3d_reader import read_input_c3d
import pyNastran.converters.cart3d.input_cntl_reader
from pyNastran.utils.log import get_logger
PKG_PATH = pyNastran.__path__[0]
TEST_PATH = os.path.join(PKG_PATH, 'converters', 'cart3d', 'models')
class TestCart3d(unittest.TestCase):
def test_cart3d_io_01(self):
"""geometry"""
lines = (
"7 6\n"
"0.000000 0.000000 0.000000\n"
"1.000000 0.000000 0.000000\n"
"2.000000 0.000000 0.000000\n"
"1.000000 1.000000 0.000000\n"
"2.000000 1.000000 0.000000\n"
"1.000000 -1.000000 0.000000\n"
"2.000000 -1.000000 0.000000\n"
"1 4 2\n"
"2 4 5\n"
"2 5 3\n"
"2 6 1\n"
"5 6 2\n"
"5 5 2\n"
"1\n"
"2\n"
"3\n"
"2\n"
"4\n"
"6\n"
)
infile_name = os.path.join(TEST_PATH, 'flat_full.tri')
with open(infile_name, 'w') as f:
f.write(lines)
log = get_logger(level='warning', encoding='utf-8')
cart3d = read_cart3d(infile_name, log=log, debug=False)
assert len(cart3d.points) == 7, 'npoints=%s' % len(cart3d.points)
assert len(cart3d.elements) == 6, 'nelements=%s' % len(cart3d.elements)
assert len(cart3d.regions) == 6, 'nregions=%s' % len(cart3d.regions)
assert len(cart3d.loads) == 0, 'nloads=%s' % len(cart3d.loads)
os.remove(infile_name)
def test_cart3d_io_02(self):
"""geometry + results"""
lines = (
"5 3 6\n"
"0. 0. 0.\n"
"1. 0. 0.\n"
"2. 0. 0.\n"
"1. 1. 0.\n"
"2. 1. 0.\n"
"1 4 2\n"
"2 4 5\n"
"2 5 3\n"
"1\n"
"2\n"
"3\n"
"1.\n"
"1. 1. 1. 1. 1.\n"
"2.\n"
"2. 2. 2. 2. 2.\n"
"3.\n"
"3. 3. 3. 3. 3.\n"
"4.\n"
"4. 4. 4. 4. 4.\n"
"5.\n"
"5. 5. 5. 5. 5.\n"
)
cart3d_filename = os.path.join(TEST_PATH, 'flat.tri')
with open(cart3d_filename, 'w') as f:
f.write(lines)
log = get_logger(level='warning', encoding='utf-8')
cart3d = read_cart3d(cart3d_filename, log=log, debug=False,
result_names=None)
assert len(cart3d.points) == 5, 'npoints=%s' % len(cart3d.points)
assert len(cart3d.elements) == 3, 'nelements=%s' % len(cart3d.elements)
assert len(cart3d.regions) == 3, 'nregions=%s' % len(cart3d.regions)
assert len(cart3d.loads) == 14, 'nloads=%s' % len(cart3d.loads) # was 10
assert len(cart3d.loads['Cp']) == 5, 'nCp=%s' % len(cart3d.loads['Cp'])
outfile_name = os.path.join(TEST_PATH, 'flat.bin.tri')
cart3d.loads = None
cart3d.write_cart3d(outfile_name, is_binary=True)
cnormals = cart3d.get_normals()
nnormals = cart3d.get_normals_at_nodes(cnormals)
os.remove(cart3d_filename)
os.remove(outfile_name)
def test_cart3d_io_03(self):
"""read/write geometry in ascii/binary"""
log = get_logger(level='warning', encoding='utf-8')
infile_name = os.path.join(TEST_PATH, 'threePlugs.bin.tri')
outfile_name = os.path.join(TEST_PATH, 'threePlugs_out.tri')
outfile_name_bin = os.path.join(TEST_PATH, 'threePlugs_bin2.tri')
outfile_name_bin_out = os.path.join(TEST_PATH, 'threePlugs_bin_out.tri')
cart3d = read_cart3d(infile_name, log=log, debug=False)
cart3d.write_cart3d(outfile_name, is_binary=False)
cart3d.write_cart3d(outfile_name_bin, is_binary=True)
cart3d_ascii = read_cart3d(outfile_name, log=log, debug=False)
check_array(cart3d.points, cart3d_ascii.points)
check_array(cart3d.elements, cart3d_ascii.elements)
cart3d_bin = read_cart3d(outfile_name_bin, log=log, debug=False)
check_array(cart3d.points, cart3d_bin.points)
check_array(cart3d.elements, cart3d_ascii.elements)
#print(cart3d_bin.points)
#print('---------------')
#print(cart3d_bin.points)
os.remove(outfile_name)
os.remove(outfile_name_bin)
cart3d.write_cart3d(outfile_name_bin_out, is_binary=False)
os.remove(outfile_name_bin_out)
def test_cart3d_to_stl(self):
"""convert to stl"""
log = get_logger(level='warning', encoding='utf-8')
cart3d_filename = os.path.join(TEST_PATH, 'threePlugs.bin.tri')
stl_filename = os.path.join(TEST_PATH, 'threePlugs.stl')
cart3d_to_stl_filename(cart3d_filename, stl_filename, log=log)
#os.remove(stl_filename)
def test_cart3d_to_tecplot(self):
"""convert to tecplot"""
log = get_logger(level='warning', encoding='utf-8')
cart3d_filename = os.path.join(TEST_PATH, 'threePlugs.bin.tri')
tecplot_filename = os.path.join(TEST_PATH, 'threePlugs.plt')
cart3d_to_tecplot(cart3d_filename, tecplot_filename, log=log)
#os.remove(tecplot_filename)
def test_cart3d_to_nastran_01(self):
"""convert to nastran small field"""
log = get_logger(level='warning', encoding='utf-8')
cart3d_filename = os.path.join(TEST_PATH, 'threePlugs.bin.tri')
bdf_filename = os.path.join(TEST_PATH, 'threePlugs.bdf')
cart3d_to_nastran_filename(cart3d_filename, bdf_filename, log=log)
os.remove(bdf_filename)
def test_cart3d_to_nastran_02(self):
"""convert to nastran large field"""
log = get_logger(level='warning', encoding='utf-8')
cart3d_filename = os.path.join(TEST_PATH, 'threePlugs.bin.tri')
bdf_filename = os.path.join(TEST_PATH, 'threePlugs2.bdf')
model = cart3d_to_nastran_model(cart3d_filename, log=log)
model.write_bdf(bdf_filename, size=16)
self.assertAlmostEqual(model.nodes[1].xyz[0], 1.51971436,
msg='if this is 0.0, then the assign_type method had the float32 check removed')
os.remove(bdf_filename)
#model.write_bdf(out_filename=None, encoding=None, size=8,
#is_double=False,
#interspersed=True,
#enddata=None)
#def test_cart3d_input_cntl(self):
#"""tests the input.cntl reading"""
#from pyNastran.converters.cart3d.input_cntl_reader import read_input_cntl
#input_cntl_filename = os.path.join(TEST_PATH, '')
#read_input_cntl(input_cntl_filename, log=None, debug=False)
def test_cart3d_input_c3d(self):
"""tests the input.c3d reading"""
log = get_logger(level='warning', encoding='utf-8')
input_c3d_filename = os.path.join(TEST_PATH, 'input.c3d')
read_input_c3d(input_c3d_filename, log=log, debug=False, stack=True)
def check_array(points, points2):
nnodes = points.shape[0]
msg = ''
nfailed = 0
if not array_equal(points, points2):
for nid in range(nnodes):
p1 = points[nid]
p2 = points2[nid]
abs_sum_delta = sum(abs(p1-p2))
if not allclose(abs_sum_delta, 0.0, atol=1e-6):
msg += 'n=%s p1=%s p2=%s diff=%s\nsum(abs(p1-p2))=%s\n' % (
nid, str(p1), str(p2), str(p1-p2), abs_sum_delta)
nfailed += 1
if nfailed == 10:
break
if msg:
#print(msg)
raise RuntimeError(msg)
if __name__ == '__main__': # pragma: no cover
import time
time0 = time.time()
unittest.main()
print("dt = %s" % (time.time() - time0))
| 38.900943 | 111 | 0.600946 | 1,128 | 8,247 | 4.190603 | 0.151596 | 0.030463 | 0.038079 | 0.050349 | 0.539454 | 0.45568 | 0.356251 | 0.283478 | 0.232917 | 0.140681 | 0 | 0.068114 | 0.26834 | 8,247 | 211 | 112 | 39.085308 | 0.71528 | 0.089851 | 0 | 0.229814 | 0 | 0.006211 | 0.140516 | 0.006851 | 0 | 0 | 0 | 0 | 0.062112 | 1 | 0.055901 | false | 0 | 0.080745 | 0 | 0.142857 | 0.012422 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a69e23253f289cfbf526d1538461b44447ecb128 | 2,260 | py | Python | python/evolution_reader.py | hzla/Pokeweb | 203f8449179a5a0aeb4fa5d48e483048f09b24d1 | [
"MIT"
] | 3 | 2021-03-30T22:07:40.000Z | 2021-06-11T02:32:06.000Z | python/evolution_reader.py | hzla/Pokeweb | 203f8449179a5a0aeb4fa5d48e483048f09b24d1 | [
"MIT"
] | 2 | 2021-07-03T18:04:09.000Z | 2022-01-12T18:02:30.000Z | python/evolution_reader.py | hzla/Pokeweb | 203f8449179a5a0aeb4fa5d48e483048f09b24d1 | [
"MIT"
] | 1 | 2021-09-06T18:20:23.000Z | 2021-09-06T18:20:23.000Z | import ndspy
import ndspy.rom
import io
import os
import os.path
import json
import copy
def set_global_vars():
global ROM_NAME, NARC_FORMAT, POKEDEX, METHODS, ITEMS, MOVES
    with open('session_settings.json', "r") as infile:
        settings = json.load(infile)
ROM_NAME = settings['rom_name']
ITEMS = open(f'{ROM_NAME}/texts/items.txt', mode="r").read().splitlines()
POKEDEX = open(f'{ROM_NAME}/texts/pokedex.txt', "r").read().splitlines()
MOVES = open(f'{ROM_NAME}/texts/moves.txt', mode="r").read().splitlines()
METHODS = open(f'Reference_Files/evo_methods.txt', mode="r").read().splitlines()
NARC_FORMAT = []
for n in range(0, 7):
NARC_FORMAT.append([2, f'method_{n}'])
NARC_FORMAT.append([2, f'param_{n}'])
NARC_FORMAT.append([2, f'target_{n}'])
def output_evolutions_json(narc):
set_global_vars()
data_index = 0
while len(narc.files) < 800:
narc.files.append(narc.files[0])
for data in narc.files:
data_name = data_index
read_narc_data(data, NARC_FORMAT, data_name, "evolutions")
data_index += 1
def read_narc_data(data, narc_format, file_name, narc_name):
stream = io.BytesIO(data)
file = {"raw": {}, "readable": {} }
#USE THE FORMAT LIST TO PARSE BYTES
for entry in narc_format:
file["raw"][entry[1]] = read_bytes(stream, entry[0])
#CONVERT TO READABLE FORMAT USING CONSTANTS/TEXT BANKS
file["readable"] = to_readable(file["raw"], file_name)
#OUTPUT TO JSON
if not os.path.exists(f'{ROM_NAME}/json/{narc_name}'):
os.makedirs(f'{ROM_NAME}/json/{narc_name}')
with open(f'{ROM_NAME}/json/{narc_name}/{file_name}.json', "w") as outfile:
json.dump(file, outfile)
def to_readable(raw, file_name):
readable = copy.deepcopy(raw)
for n in range(0,7):
readable[f'method_{n}'] = METHODS[raw[f'method_{n}']]
readable[f'target_{n}'] = POKEDEX[raw[f'target_{n}']]
if raw[f'method_{n}'] in [5,6,17,18,19,20]:
readable[f'param_{n}'] = ITEMS[raw[f'param_{n}']]
elif raw[f'method_{n}'] == 21:
readable[f'param_{n}'] = MOVES[raw[f'param_{n}']]
elif raw[f'method_{n}'] == 22:
readable[f'param_{n}'] = POKEDEX[raw[f'param_{n}']]
        # any other evolution method keeps its raw parameter value
return readable
def read_bytes(stream, n):
return int.from_bytes(stream.read(n), 'little')
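

# --- Illustrative usage (not part of the original module) ---
# A minimal sketch of how read_bytes walks the little-endian fields that
# NARC_FORMAT describes: each (size, name) pair consumes `size` bytes.
# The byte string below is an invented example, not real game data.
#
#   stream = io.BytesIO(b'\x04\x00\x20\x00\x9b\x00')
#   method = read_bytes(stream, 2)   # -> 4
#   param = read_bytes(stream, 2)    # -> 32
#   target = read_bytes(stream, 2)   # -> 155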
| 25.681818 | 81 | 0.684071 | 369 | 2,260 | 4.01626 | 0.254743 | 0.04251 | 0.033063 | 0.032389 | 0.24359 | 0.152497 | 0.033738 | 0.033738 | 0.033738 | 0 | 0 | 0.014864 | 0.136726 | 2,260 | 87 | 82 | 25.977011 | 0.744746 | 0.04469 | 0 | 0.034483 | 0 | 0 | 0.203248 | 0.106729 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086207 | false | 0 | 0.155172 | 0.017241 | 0.275862 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6a017b54a214e79dd2139fbb9e101b8d4539ba6 | 609 | py | Python | Python/797.py | JWang169/LintCodeJava | b75b06fa1551f5e4d8a559ef64e1ac29db79c083 | [
"CNRI-Python"
] | 1 | 2020-12-10T05:36:15.000Z | 2020-12-10T05:36:15.000Z | Python/797.py | JWang169/LintCodeJava | b75b06fa1551f5e4d8a559ef64e1ac29db79c083 | [
"CNRI-Python"
] | null | null | null | Python/797.py | JWang169/LintCodeJava | b75b06fa1551f5e4d8a559ef64e1ac29db79c083 | [
"CNRI-Python"
] | 3 | 2020-04-06T05:55:08.000Z | 2021-08-29T14:26:54.000Z | class Solution:
def allPathsSourceTarget(self, graph: List[List[int]]) -> List[List[int]]:
N = len(graph) - 1
mappings = {}
for i, nodes in enumerate(graph):
mappings[i] = nodes
result = []
queue = deque([[0]])
while queue:
path = queue.popleft()
last = path[-1]
if last == N:
result.append(path)
continue
nexts = mappings[last]
for nxt in nexts:
queue.append(path + [nxt])
return result
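

# --- Illustrative usage (not part of the original solution) ---
# graph[i] lists the nodes reachable from i; the BFS above extends partial
# paths until they reach node N. With the classic sample input:
#
#   Solution().allPathsSourceTarget([[1, 2], [3], [3], []])
#   # -> [[0, 1, 3], [0, 2, 3]]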
| 27.681818 | 78 | 0.438424 | 58 | 609 | 4.603448 | 0.534483 | 0.059925 | 0.082397 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009036 | 0.454844 | 609 | 22 | 79 | 27.681818 | 0.795181 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6a02d8b5ce16766be107720170a139dd7634fa9 | 661 | py | Python | day-07/part-2/div.py | lypnol/adventofcode-2021 | 8ba277d698e8c59ca9cd554acc135473f5964b87 | [
"MIT"
] | 6 | 2021-11-29T15:32:27.000Z | 2021-12-10T12:24:26.000Z | day-07/part-2/div.py | lypnol/adventofcode-2021 | 8ba277d698e8c59ca9cd554acc135473f5964b87 | [
"MIT"
] | 9 | 2021-11-29T15:38:04.000Z | 2021-12-13T14:54:16.000Z | day-07/part-2/div.py | lypnol/adventofcode-2021 | 8ba277d698e8c59ca9cd554acc135473f5964b87 | [
"MIT"
] | 3 | 2021-12-02T19:11:44.000Z | 2021-12-22T20:52:47.000Z | from tool.runners.python import SubmissionPy
class DivSubmission(SubmissionPy):
def run(self, s):
"""
:param s: input in string format
:return: solution flag
"""
# Your code goes here
positions = [int(x) for x in s.split(",")]
min_pos, max_pos = min(positions), max(positions)
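        # moving a crab a distance d costs 1+2+...+d = d*(d+1)/2 fuel;
        # the trailing >>1 performs that integer division by two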
return min(sum(((abs(x-x0))*(abs(x-x0)+1))>>1 for x0 in positions) for x in range(min_pos, max_pos+1))
def test_div():
"""
    Run `python -m pytest ./day-07/part-2/div.py` to test the submission.
"""
assert (
DivSubmission().run(
"""
""".strip()
)
== None
)
| 23.607143 | 110 | 0.547655 | 88 | 661 | 4.056818 | 0.579545 | 0.022409 | 0.033613 | 0.067227 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019481 | 0.301059 | 661 | 27 | 111 | 24.481481 | 0.753247 | 0.220877 | 0 | 0 | 0 | 0 | 0.002169 | 0 | 0 | 0 | 0 | 0 | 0.076923 | 1 | 0.153846 | false | 0 | 0.076923 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6a05141e02ec09a3368291709071af8ba627d0b | 1,698 | py | Python | test/coreneuron/test_psolve.py | niltonlk/nrn | 464541abbf72fe58de77b16bf0e1df425a280b89 | [
"BSD-3-Clause"
] | null | null | null | test/coreneuron/test_psolve.py | niltonlk/nrn | 464541abbf72fe58de77b16bf0e1df425a280b89 | [
"BSD-3-Clause"
] | 1 | 2021-04-13T09:19:55.000Z | 2021-04-13T09:19:55.000Z | test/coreneuron/test_psolve.py | niltonlk/nrn | 464541abbf72fe58de77b16bf0e1df425a280b89 | [
"BSD-3-Clause"
] | null | null | null | import os
import pytest
import sys
import traceback
enable_gpu = bool(os.environ.get("CORENRN_ENABLE_GPU", ""))
from neuron import h, gui
pc = h.ParallelContext()
def model():
pc.gid_clear()
for s in h.allsec():
h.delete_section(sec=s)
s = h.Section()
s.L = 10
s.diam = 10
s.insert("hh")
ic = h.IClamp(s(0.5))
ic.delay = 0.1
ic.dur = 0.1
ic.amp = 0.5 * 0
syn = h.ExpSyn(s(0.5))
nc = h.NetCon(None, syn)
nc.weight[0] = 0.001
return {"s": s, "ic": ic, "syn": syn, "nc": nc}
def test_psolve():
# sequence of psolve with only beginning initialization
m = model()
vvec = h.Vector()
h.tstop = 5
vvec.record(m["s"](0.5)._ref_v, sec=m["s"])
def run(tstop):
pc.set_maxstep(10)
h.finitialize(-65)
m["nc"].event(3.5)
m["nc"].event(2.6)
h.continuerun(1) # Classic NEURON so psolve starts at t>0
while h.t < tstop:
pc.psolve(h.t + 1)
run(h.tstop)
vvec_std = vvec.c() # standard result
from neuron import coreneuron
coreneuron.enable = True
coreneuron.verbose = 0
coreneuron.gpu = enable_gpu
h.CVode().cache_efficient(True)
run(h.tstop)
if vvec_std.eq(vvec) == 0:
for i, x in enumerate(vvec_std):
print("%.3f %g %g %g" % (i * h.dt, x, vvec[i], x - vvec[i]))
assert vvec_std.eq(vvec)
assert vvec_std.size() == vvec.size()
coreneuron.enable = False
if __name__ == "__main__":
try:
test_psolve()
except:
traceback.print_exc()
# Make the CTest test fail
sys.exit(42)
# The test doesn't exit without this.
if enable_gpu:
h.quit()
| 22.64 | 72 | 0.567727 | 267 | 1,698 | 3.509363 | 0.438202 | 0.037353 | 0.009605 | 0.027748 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.031993 | 0.282097 | 1,698 | 74 | 73 | 22.945946 | 0.736669 | 0.099529 | 0 | 0.034483 | 0 | 0 | 0.036113 | 0 | 0 | 0 | 0 | 0 | 0.034483 | 1 | 0.051724 | false | 0 | 0.103448 | 0 | 0.172414 | 0.034483 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6a2e757449885213e04fced8e6ceed98541f40d | 431 | py | Python | docs/make_tutorials.py | jonholdship/SpectralRadex | 2c38e136b963aac43ec2b17101da56a83c9884e4 | [
"MIT"
] | 1 | 2020-09-23T10:57:03.000Z | 2020-09-23T10:57:03.000Z | docs/make_tutorials.py | jonholdship/SpectralRadex | 2c38e136b963aac43ec2b17101da56a83c9884e4 | [
"MIT"
] | null | null | null | docs/make_tutorials.py | jonholdship/SpectralRadex | 2c38e136b963aac43ec2b17101da56a83c9884e4 | [
"MIT"
] | null | null | null | import subprocess
import glob
import os
# Convert the tutorials
for fn in glob.glob("../examples/*.ipynb"):
name = os.path.splitext(os.path.split(fn)[1])[0]
outfn = os.path.join("tutorials", name + ".rst")
print("Building {0}...".format(name))
subprocess.check_call(
"jupyter nbconvert --template _templates/tutorial_rst.tpl --to rst "
+ fn
+ " --output-dir tutorials",
shell=True,) | 30.785714 | 76 | 0.63109 | 56 | 431 | 4.803571 | 0.660714 | 0.066915 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008746 | 0.204176 | 431 | 14 | 77 | 30.785714 | 0.77551 | 0.048724 | 0 | 0 | 0 | 0 | 0.332518 | 0.066015 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6a56754f5a359f5f3f898b5f76be76879351729 | 1,224 | py | Python | sorting/k_closest_points_to_origin.py | elenaborisova/A2SV-interview-prep | 02b7166a96d22221cd6adaedf14f845537f0752d | [
"MIT"
] | null | null | null | sorting/k_closest_points_to_origin.py | elenaborisova/A2SV-interview-prep | 02b7166a96d22221cd6adaedf14f845537f0752d | [
"MIT"
] | null | null | null | sorting/k_closest_points_to_origin.py | elenaborisova/A2SV-interview-prep | 02b7166a96d22221cd6adaedf14f845537f0752d | [
"MIT"
] | null | null | null | import math
import heapq
# Time: O(n * log n); Space: O(n)
# MinHeap
def k_closest(points, k):
distances = {} # O(n) space
h = [] # O(n) space
for point in points:
distance = math.sqrt(point[0] ** 2 + point[1] ** 2)
if distance not in distances:
distances[distance] = []
distances[distance].append(point)
heapq.heappush(h, distance) # O(log n) time
res = [] # O(k) space
for _ in range(k): # O(k) time
smallest = heapq.heappop(h) # O(log n) time
res.append(distances[smallest][-1])
distances[smallest].pop()
return res
# MaxHeap; Optimized
# Time: O(n log k); Space: O(k)
def k_closest2(points, k):
h = [] # O(k) space
for point in points: # O(n) time
distance = math.sqrt(point[0] ** 2 + point[1] ** 2)
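        # store the negated distance so heapq's min-heap acts as a max-heap:
        # the farthest of the k kept points sits at h[0], ready to be evicted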
heapq.heappush(h, (distance * -1, point)) # O(log k) time
if len(h) > k:
heapq.heappop(h) # O(log k) time
res = [] # O(k) space
for distance, point in h: # O(k) time
res.append(point)
return res
# Test cases:
print(k_closest2([[1, 3], [-2, 2]], 1))
print(k_closest2([[3, 3], [5, -1], [-2, 4]], 2))
print(k_closest2([[0, 1], [1, 0]], 2))
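# expected (point order within each answer may vary):
#   [[-2, 2]]
#   [[3, 3], [-2, 4]]
#   [[0, 1], [1, 0]]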
| 23.538462 | 66 | 0.535948 | 188 | 1,224 | 3.457447 | 0.228723 | 0.018462 | 0.032308 | 0.046154 | 0.281538 | 0.144615 | 0.092308 | 0.092308 | 0.092308 | 0 | 0 | 0.035632 | 0.289216 | 1,224 | 51 | 67 | 24 | 0.711494 | 0.196895 | 0 | 0.322581 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.064516 | false | 0 | 0.064516 | 0 | 0.193548 | 0.096774 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6a73500dcaa5434137903d1b0436c04ea1797ad | 270 | py | Python | src/guarded_suspension/receiver.py | tukeJonny/python_patterns | 3e14032030f60ea764ff50e3a5ac1b5dcda4b553 | [
"MIT"
] | null | null | null | src/guarded_suspension/receiver.py | tukeJonny/python_patterns | 3e14032030f60ea764ff50e3a5ac1b5dcda4b553 | [
"MIT"
] | null | null | null | src/guarded_suspension/receiver.py | tukeJonny/python_patterns | 3e14032030f60ea764ff50e3a5ac1b5dcda4b553 | [
"MIT"
] | null | null | null | #-*- coding: utf-8 -*-
from threading import Thread
class Receiver(Thread):
def __init__(self, queue):
super().__init__()
self.daemon = True
self.queue = queue
def run(self):
while True:
num = self.queue.get()
print("[<==] received {0}".format(num))
| 18 | 42 | 0.637037 | 36 | 270 | 4.555556 | 0.666667 | 0.164634 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009091 | 0.185185 | 270 | 14 | 43 | 19.285714 | 0.736364 | 0.077778 | 0 | 0 | 0 | 0 | 0.072581 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.1 | 0 | 0.4 | 0.1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6a958c70054d2579b52a859472989d158f49b98 | 406 | py | Python | raspy/tests/test_IO/test_IOException.py | cyrusbuilt/RasPy | 1e34840cc90ea7f19317e881162209d3d819eb09 | [
"MIT"
] | null | null | null | raspy/tests/test_IO/test_IOException.py | cyrusbuilt/RasPy | 1e34840cc90ea7f19317e881162209d3d819eb09 | [
"MIT"
] | null | null | null | raspy/tests/test_IO/test_IOException.py | cyrusbuilt/RasPy | 1e34840cc90ea7f19317e881162209d3d819eb09 | [
"MIT"
] | null | null | null | """Tests for IOException."""
from raspy.io.io_exception import IOException
class TestIOException(object):
"""Test methods for IOException."""
def test_io_exception(self):
"""Test the exception."""
caught = None
try:
raise IOException("This is a test.")
except Exception as ex:
caught = ex
assert isinstance(caught, IOException)
| 21.368421 | 48 | 0.615764 | 44 | 406 | 5.613636 | 0.636364 | 0.11336 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.283251 | 406 | 18 | 49 | 22.555556 | 0.848797 | 0.17734 | 0 | 0 | 0 | 0 | 0.04717 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 1 | 0.111111 | false | 0 | 0.111111 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6aa1696a440193088edbc66879e995334f69dc4 | 1,634 | py | Python | languages/admin.py | City-of-Helsinki/kukkuu | 61f26bc622928fd04f6a397f832aaffff789e806 | [
"MIT"
] | null | null | null | languages/admin.py | City-of-Helsinki/kukkuu | 61f26bc622928fd04f6a397f832aaffff789e806 | [
"MIT"
] | 157 | 2019-10-08T07:58:59.000Z | 2022-03-20T23:00:17.000Z | languages/admin.py | City-of-Helsinki/kukkuu | 61f26bc622928fd04f6a397f832aaffff789e806 | [
"MIT"
] | 3 | 2019-10-07T12:06:26.000Z | 2022-01-25T14:03:14.000Z | from django.contrib import admin
from django.db.models import Count
from django.utils.translation import gettext_lazy as _
from parler.admin import TranslatableAdmin
from parler.utils.context import switch_language
from .models import Language
@admin.register(Language)
class LanguageAdmin(TranslatableAdmin):
list_display = (
"alpha_3_code",
"get_name_fi",
"get_name_sv",
"get_name_en",
"get_guardian_count",
)
list_display_links = ("alpha_3_code", "get_name_fi", "get_name_sv", "get_name_en")
fields = ("alpha_3_code", "name", "get_guardian_count")
readonly_fields = ("get_guardian_count",)
def get_queryset(self, request):
return (
super()
.get_queryset(request)
.prefetch_related("translations")
.translated()
.annotate(Count("guardians"))
            .annotate(has_code=Count("alpha_3_code"))  # rows with null codes sort first
.order_by("has_code", "translations__name", "id")
)
def get_name_fi(self, obj):
with switch_language(obj, "fi"):
return obj.name
get_name_fi.short_description = _("Finnish")
def get_name_sv(self, obj):
with switch_language(obj, "sv"):
return obj.name
get_name_sv.short_description = _("Swedish")
def get_name_en(self, obj):
with switch_language(obj, "en"):
return obj.name
get_name_en.short_description = _("English")
def get_guardian_count(self, obj):
return obj.guardians__count
get_guardian_count.short_description = _("Guardian count")
| 29.178571 | 86 | 0.653611 | 200 | 1,634 | 4.99 | 0.31 | 0.084168 | 0.08016 | 0.051102 | 0.218437 | 0.158317 | 0.074148 | 0.074148 | 0.074148 | 0.074148 | 0 | 0.003223 | 0.240514 | 1,634 | 55 | 87 | 29.709091 | 0.800967 | 0.017136 | 0 | 0.069767 | 0 | 0 | 0.163342 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.116279 | false | 0 | 0.139535 | 0.046512 | 0.488372 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6acbe27d3c0f67a17173db25575993280383538 | 13,656 | py | Python | Gearshift.py | TonyWhitley/gearbox | efdba8ecd88600418b39e38cdeccbb1f9a327ceb | [
"MIT"
] | null | null | null | Gearshift.py | TonyWhitley/gearbox | efdba8ecd88600418b39e38cdeccbb1f9a327ceb | [
"MIT"
] | null | null | null | Gearshift.py | TonyWhitley/gearbox | efdba8ecd88600418b39e38cdeccbb1f9a327ceb | [
"MIT"
] | null | null | null | # Gearshift.py - monitors the rFactor 2 shared memory values for the shifter
# and clutch and if a gear change is not done properly it repeatedly sends a
# "Neutral" key press to prevent the gear being selected.
#
# Inspired by http://www.richardjackett.com/grindingtranny
# I borrowed Grind_default.wav from there to make the noise of the grinding
# gears.
#
# The game has to have a key mapped as "Neutral". (Default: Numpad 0)
#
BUILD_REVISION = 61 # The git branch commit count
versionStr = 'gearshift V3.2.%d' % BUILD_REVISION
versionDate = '2020-02-20'
credits = "Reads the clutch and shifter from rF2 using\n" \
"The Iron Wolf's rF2 Shared Memory Tools.\n" \
"https://github.com/TheIronWolfModding/rF2SharedMemoryMapPlugin\n" \
"Inspired by http://www.richardjackett.com/grindingtranny\n" \
"I borrowed Grind_default.wav from there to make the noise of the grinding gears.\n\n"
import sys

from threading import Timer
from winsound import PlaySound, SND_FILENAME, SND_LOOP, SND_ASYNC
from tkinter import messagebox
try:
from configIni import Config, configFileName
except: # It's a rFactory component
from gearshift.configIni import Config, configFileName
import pyDirectInputKeySend.directInputKeySend as directInputKeySend
from readJSONfile import Json
from pyDirectInputKeySend.directInputKeySend import DirectInputKeyCodeTable, rfKeycodeToDIK
from mockMemoryMap import gui
from memoryMapInputs import Controls
# Main config variables, loaded from gearshift.ini
mockInput = False # If True then use mock input
ClutchEngaged = 90 # (0 - 100) the point in the travel where the clutch engages
doubleDeclutch = False # Not yet implemented
reshift = True # If True then neutral has to be selected before
# retrying failed change. If False then just have
# to de-clutch
###############################################################################
# Nothing much to twiddle with from here on
# Config variables, also loaded from gearshift.ini
global debug
debug = 0 # 0, 1, 2 or 3
neutralButton = None # The key used to force neutral, whatever the shifter says
graunchWav = None
controller_file = None
# Gear change events
clutchDisengage = 'clutchDisengage'
clutchEngage = 'clutchEngage'
gearSelect = 'gearSelect'
gearDeselect = 'gearDeselect'
graunchTimeout = 'graunchTimeout' # Memory-mapped mode
smStop = 'stop' # Stop the state machine
#globals
gearState = 'neutral' # TBD
ClutchPrev = 2 # Active states are 0 and 1 so 2 is "unknown"
graunch_o = None
#################################################################################
# AHK replacement fns
def SetTimer(callback, mS):
if mS > 0:
timer = Timer(mS / 1000, callback)
timer.start()
else:
pass # TBD delete timer?
def SoundPlay(soundfile):
PlaySound(soundfile, SND_FILENAME|SND_LOOP|SND_ASYNC)
def SoundStop():
PlaySound(None, SND_FILENAME)
def msgBox(text):
    print(text)
#################################################################################
def quit(errorCode):
# User presses a key before exiting program
print('\n\nPress Enter to exit')
input()
sys.exit(errorCode)
#################################################################################
class graunch:
def __init__(self):
self.graunching = False
def graunchStart(self):
# Start the graunch noise and sending "Neutral"
# Start the noise
global graunchWav
SoundPlay(graunchWav)
self.graunching = True
self.graunch2()
if debug >= 2:
msgBox('GRAUNCH!')
def graunchStop(self):
if self.graunching:
SoundStop() # stop the noise
self.graunching = False
self.graunch1()
def graunch1(self):
# Send the "Neutral" key release
directInputKeySend.ReleaseKey(neutralButton)
if self.graunching:
SetTimer(self.graunch2, 20)
def graunch2(self):
if self.graunching:
# Send the "Neutral" key press
directInputKeySend.PressKey(neutralButton)
SetTimer(self.graunch3, 3000)
SetTimer(self.graunch1, 20) # Ensure neutralButton is released
if debug >= 1:
directInputKeySend.PressReleaseKey('DIK_G')
def graunch3(self):
""" Shared memory.
Neutral key causes gearDeselect event but if player doesn't move shifter
to neutral then rF2 will quickly report that it's in gear again,
causing a gearSelect event.
If SM is still in neutral (gearSelect hasn't happened) when this timer
expires then player has moved shifter to neutral
"""
gearStateMachine(graunchTimeout)
def isGraunching(self):
return self.graunching
######################################################################
def gearStateMachine(event):
global gearState
global graunch_o
global debug
# Gear change states
neutral = 'neutral'
clutchDown = 'clutchDown'
waitForDoubleDeclutchUp= 'waitForDoubleDeclutchUp'
clutchDownGearSelected = 'clutchDownGearSelected'
inGear = 'inGear'
graunching = 'graunching'
graunchingClutchDown = 'graunchingClutchDown'
neutralKeySent = 'neutralKeySent'
if debug >= 3:
msgBox('gearState %s event %s' % (gearState, event))
# event check (debug)
if event == clutchDisengage:
pass
elif event == clutchEngage:
pass
elif event == gearSelect:
pass
elif event == gearDeselect:
pass
elif event == graunchTimeout:
pass
elif event == smStop:
graunch_o.graunchStop()
gearState = neutral
else:
msgBox('gearStateMachine() invalid event %s' % event)
if gearState == neutral:
if event == clutchDisengage:
gearState = clutchDown
if debug >= 1:
directInputKeySend.PressKey('DIK_D')
elif event == gearSelect:
graunch_o.graunchStart()
gearState = graunching
elif event == graunchTimeout:
graunch_o.graunchStop()
elif gearState == clutchDown:
if event == gearSelect:
gearState = clutchDownGearSelected
elif event == clutchEngage:
gearState = neutral
if debug >= 1:
directInputKeySend.PressKey('DIK_U')
elif gearState == waitForDoubleDeclutchUp:
if event == clutchEngage:
gearState = neutral
if debug >= 2:
msgBox('Double declutch spin up the box')
elif event == gearSelect:
graunch_o.graunchStart()
gearState = graunching
elif gearState == clutchDownGearSelected:
if event == clutchEngage:
gearState = inGear
if debug >= 2:
msgBox('In gear')
elif event == gearDeselect:
if doubleDeclutch:
gearState = waitForDoubleDeclutchUp
else:
gearState = clutchDown
elif gearState == inGear:
if event == gearDeselect:
gearState = neutral
if debug >= 2:
msgBox('Knocked out of gear')
elif event == clutchDisengage:
gearState = clutchDownGearSelected
elif event == gearSelect: # smashed straight through without neutral.
# I don't think this can happen if rF2, only with mock inputs...
graunch_o.graunchStart()
gearState = graunching
elif gearState == graunching:
if event == clutchDisengage:
if reshift == False:
if debug >= 1:
directInputKeySend.PressKey('DIK_R')
gearState = clutchDownGearSelected
else:
gearState = graunchingClutchDown
graunch_o.graunchStop()
if debug >= 1:
directInputKeySend.PressKey('DIK_G')
elif event == clutchEngage:
graunch_o.graunchStart() # graunch again
elif event == gearDeselect:
gearState = neutralKeySent
elif event == gearSelect:
graunch_o.graunchStop()
graunch_o.graunchStart() # graunch again
pass
elif gearState == neutralKeySent:
# rF2 will have put it into neutral but if shifter
# still in gear it will have put it back in gear again
if event == gearSelect:
gearState = graunching
elif event == graunchTimeout:
# timed out and still not in gear, player has
# shifted to neutral
gearState = neutral
graunch_o.graunchStop()
elif gearState == graunchingClutchDown:
if event == clutchEngage:
graunch_o.graunchStart() # graunch again
gearState = graunching
elif event == gearDeselect:
gearState = clutchDown
graunch_o.graunchStop()
else:
        msgBox('Bad gearStateMachine() state %s' % gearState)
if gearState != graunching and gearState != neutralKeySent:
graunch_o.graunchStop() # belt and braces - sometimes it gets stuck. REALLY????
def WatchClutch(Clutch):
# Clutch 100 is up, 0 is down to the floor
global ClutchPrev
ClutchState = 1 # engaged
if Clutch < ClutchEngaged:
ClutchState = 0 # clutch is disengaged
if ClutchState != ClutchPrev:
if ClutchState == 0:
gearStateMachine(clutchDisengage)
else:
gearStateMachine(clutchEngage)
ClutchPrev = ClutchState
#############################################################
def memoryMapCallback(clutchEvent=None, gearEvent=None, stopEvent=False):
if clutchEvent != None:
WatchClutch(clutchEvent)
if gearEvent != None:
if gearEvent == 0: # Neutral
gearStateMachine(gearDeselect)
else:
gearStateMachine(gearSelect)
if stopEvent:
gearStateMachine(smStop)
def ShowButtons():
pass
def main():
global graunch_o
global debug
global graunchWav
global ClutchEngaged
global controller_file
global neutralButton
config_o = Config()
debug = config_o.get('miscellaneous', 'debug')
if not debug: debug = 0
graunchWav = config_o.get('miscellaneous', 'wav file')
mockInput = config_o.get('miscellaneous', 'mock input')
reshift = config_o.get('miscellaneous', 'reshift') == 1
ClutchEngaged = config_o.get('clutch', 'bite point')
neutralButton = config_o.get('miscellaneous', 'neutral button')
ignitionButton = config_o.get('miscellaneous', 'ignition button')
controller_file = config_o.get_controller_file()
if neutralButton in DirectInputKeyCodeTable: # (it must be)
neutralButtonKeycode = neutralButton[4:]
else:
print('\ngearshift.ini "neutral button" entry "%s" not recognised.\nIt must be one of:' % neutralButton)
for _keyCode in DirectInputKeyCodeTable:
print(_keyCode, end=', ')
quit(99)
if ignitionButton in DirectInputKeyCodeTable: # (it must be)
_ignitionButton = ignitionButton[4:]
else:
print('\ngearshift.ini "ignition button" entry "%s" not recognised.\nIt must be one of:' % ignitionButton)
for _keyCode in DirectInputKeyCodeTable:
print(_keyCode, end=', ')
quit(99)
graunch_o = graunch()
controls_o = Controls(debug=debug,mocking=mockInput)
controls_o.run(memoryMapCallback)
return controls_o, graunch_o, neutralButtonKeycode
#############################################################
def get_neutral_control(_controller_file_test=None):
"""
Get the keycode specified in controller.json
"""
global controller_file
global neutralButton
if _controller_file_test:
_controller_file = _controller_file_test
else:
_controller_file = controller_file
_JSON_O = Json(_controller_file)
neutral_control = _JSON_O.get_item("Control - Neutral")
if neutral_control:
keycode = rfKeycodeToDIK(neutral_control[1])
        if keycode != neutralButton:
            err = F'"Control - Neutral" in {_controller_file}\n'\
                  F'does not match {configFileName} "neutral button" entry'
            messagebox.showinfo('Config error', err)
return
err = F'"Control - Neutral" not in {_controller_file}\n'\
F'See {configFileName} "controller_file" entry'.format()
messagebox.showinfo('Config error', err)
if __name__ == "__main__":
controls_o, graunch_o, neutralButtonKeycode = main()
instructions = 'If gear selection fails this program will send %s ' \
'to the active window until you reselect a gear.\n\n' \
'You can minimise this window now.\n' \
'Do not close it until you have finished racing.' % neutralButtonKeycode
#############################################################
# Using shared memory, reading clutch state and gear selected direct from rF2
# mockInput: testing using the simple GUI to poke inputs into the memory map
# otherwise just use the GUI slightly differently
root = gui(mocking=mockInput,
instructions=instructions,
graunch_o=graunch_o,
controls_o=controls_o
)
get_neutral_control()
if root != 'OK':
root.mainloop()
controls_o.stop()
| 33.635468 | 110 | 0.614162 | 1,386 | 13,656 | 5.972583 | 0.262626 | 0.020295 | 0.009664 | 0.016671 | 0.196787 | 0.132278 | 0.09483 | 0.053636 | 0.053636 | 0.025127 | 0 | 0.008451 | 0.272115 | 13,656 | 405 | 111 | 33.718519 | 0.824346 | 0.16923 | 0 | 0.358885 | 0 | 0.006969 | 0.136206 | 0.004224 | 0 | 0 | 0 | 0 | 0 | 1 | 0.062718 | false | 0.027875 | 0.034843 | 0.003484 | 0.111498 | 0.020906 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6ad7b6a876ff429c200629b8a6d1cb6df41dea0 | 2,225 | py | Python | rooms/management/commands/seed_amenities.py | alstn2468/Django_Airbnb_Clone | eeb61e4a36320a0b269d96f47cc6755dbc4c40f8 | [
"MIT"
] | 5 | 2019-11-26T00:34:24.000Z | 2021-01-04T06:04:48.000Z | rooms/management/commands/seed_amenities.py | alstn2468/Django_Airbnb_Clone | eeb61e4a36320a0b269d96f47cc6755dbc4c40f8 | [
"MIT"
] | 3 | 2021-06-09T19:05:40.000Z | 2021-09-08T01:49:01.000Z | rooms/management/commands/seed_amenities.py | alstn2468/Django_Airbnb_Clone | eeb61e4a36320a0b269d96f47cc6755dbc4c40f8 | [
"MIT"
] | 6 | 2019-11-24T11:47:09.000Z | 2021-08-16T20:21:35.000Z | from core.management.commands.custom_command import CustomCommand
from rooms.models import Amenity
class Command(CustomCommand):
help = "Automatically create amenities"
def handle(self, *args, **options):
try:
amenities = [
"Air conditioning",
"Alarm Clock",
"Balcony",
"Bathroom",
"Bathtub",
"Bed Linen",
"Boating",
"Cable TV",
"Carbon monoxide detectors",
"Chairs",
"Children Area",
"Coffee Maker in Room",
"Cooking hob",
"Cookware & Kitchen Utensils",
"Dishwasher",
"Double bed",
"En suite bathroom",
"Free Parking",
"Free Wireless Internet",
"Freezer",
"Fridge / Freezer",
"Golf",
"Hair Dryer",
"Heating",
"Hot tub",
"Indoor Pool",
"Ironing Board",
"Microwave",
"Outdoor Pool",
"Outdoor Tennis",
"Oven",
"Queen size bed",
"Restaurant",
"Shopping Mall",
"Shower",
"Smoke detectors",
"Sofa",
"Stereo",
"Swimming pool",
"Toilet",
"Towels",
"TV",
]
self.stdout.write(self.style.SUCCESS("■ START CREATE AMENITIES"))
for idx, name in enumerate(amenities):
Amenity.objects.create(name=name)
self.progress_bar(
idx + 1,
len(amenities),
prefix="■ PROGRESS",
suffix="Complete",
length=40,
)
self.stdout.write(self.style.SUCCESS("■ SUCCESS CREATE ALL AMENITIES!"))
except Exception as e:
self.stdout.write(self.style.ERROR(f"■ {e}"))
self.stdout.write(self.style.ERROR("■ FAIL CREATE AMENITIES"))
| 30.902778 | 84 | 0.417978 | 169 | 2,225 | 5.52071 | 0.668639 | 0.042872 | 0.064309 | 0.081458 | 0.132905 | 0.132905 | 0.132905 | 0 | 0 | 0 | 0 | 0.002595 | 0.480449 | 2,225 | 71 | 85 | 31.338028 | 0.800173 | 0 | 0 | 0 | 0 | 0 | 0.263371 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.015625 | false | 0 | 0.03125 | 0 | 0.078125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6aecdaa9ab63d5c899f2872cb0bfd6fc69bc486 | 7,898 | py | Python | vision/image_classification/mobile/mobilenet_model.py | pedro-abundio-wang/image-classification | 952719d7561b9998add0daf71d61e55cb6103eaf | [
"Apache-2.0"
] | null | null | null | vision/image_classification/mobile/mobilenet_model.py | pedro-abundio-wang/image-classification | 952719d7561b9998add0daf71d61e55cb6103eaf | [
"Apache-2.0"
] | null | null | null | vision/image_classification/mobile/mobilenet_model.py | pedro-abundio-wang/image-classification | 952719d7561b9998add0daf71d61e55cb6103eaf | [
"Apache-2.0"
] | null | null | null | """MobileNet model for Keras.
Related papers
- https://arxiv.org/abs/1704.04861
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from tensorflow.keras import backend
from tensorflow.keras import models
from tensorflow.keras import layers
def _conv_block(input_tensor,
filters,
alpha,
kernel=(3, 3),
strides=(1, 1)):
"""Adds an initial convolution layer (with batch normalization and relu).
Arguments:
input_tensor: input tensor
filters: Integer, the dimensionality of the output space.
alpha: controls the width of the network. - If `alpha` < 1.0,
proportionally decreases the number of filters in each layer. - If
`alpha` > 1.0, proportionally increases the number of filters in each
layer. - If `alpha` = 1, default number of filters from the paper are
used at each layer.
kernel: An integer or tuple/list of 2 integers, specifying the width and
height of the 2D convolution window. Can be a single integer to
specify the same value for all spatial dimensions.
strides: An integer or tuple/list of 2 integers, specifying the strides
of the convolution along the width and height. Can be a single integer
to specify the same value for all spatial dimensions.
Returns:
Output tensor of block.
"""
if backend.image_data_format() == 'channels_first':
channel_axis = 1
else: # channels_last
channel_axis = -1
filters = int(filters * alpha)
x = layers.ZeroPadding2D(
padding=((0, 1), (0, 1)),
name='conv1_pad')(input_tensor)
x = layers.Conv2D(
filters=filters,
kernel_size=kernel,
padding='valid',
use_bias=False,
strides=strides,
name='conv1')(x)
x = layers.BatchNormalization(
axis=channel_axis,
name='conv1_bn')(x)
x = layers.Activation('relu', name='conv1_relu')(x)
return x
def _depthwise_conv_block(input_tensor,
pointwise_conv_filters,
alpha,
depth_multiplier=1,
strides=(1, 1),
block_id=1):
"""Adds a depthwise convolution block.
A depthwise convolution block consists of a depthwise conv,
batch normalization, relu, pointwise convolution,
batch normalization and relu activation.
Arguments:
input_tensor: input tensor.
pointwise_conv_filters: Integer, the dimensionality of the output space.
alpha: controls the width of the network. - If `alpha` < 1.0,
proportionally decreases the number of filters in each layer. - If
`alpha` > 1.0, proportionally increases the number of filters in each
layer. - If `alpha` = 1, default number of filters from the paper are
used at each layer.
depth_multiplier: The number of depthwise convolution output channels
for each input channel. The total number of depthwise convolution
output channels will be equal to `filters_in * depth_multiplier`.
strides: An integer or tuple/list of 2 integers, specifying the strides
of the convolution along the width and height. Can be a single integer
to specify the same value for all spatial dimensions.
block_id: Integer, a unique identification designating the block number.
Returns:
Output tensor of block.
"""
if backend.image_data_format() == 'channels_first':
channel_axis = 1
else: # channels_last
channel_axis = -1
pointwise_conv_filters = int(pointwise_conv_filters * alpha)
if strides == (1, 1):
x = input_tensor
else:
x = layers.ZeroPadding2D(
padding=((0, 1), (0, 1)),
name='conv_pad_%d' % block_id)(input_tensor)
x = layers.DepthwiseConv2D(
kernel_size=(3, 3),
padding='same' if strides == (1, 1) else 'valid',
depth_multiplier=depth_multiplier,
strides=strides,
use_bias=False,
name='conv_dw_%d' % block_id)(x)
x = layers.BatchNormalization(
axis=channel_axis,
name='conv_dw_%d_bn' % block_id)(x)
x = layers.Activation('relu', name='conv_dw_%d_relu' % block_id)(x)
x = layers.Conv2D(
pointwise_conv_filters, (1, 1),
padding='same',
use_bias=False,
strides=(1, 1),
name='conv_pw_%d' % block_id)(x)
x = layers.BatchNormalization(
axis=channel_axis,
name='conv_pw_%d_bn' % block_id)(x)
x = layers.Activation('relu', name='conv_pw_%d_relu' % block_id)(x)
return x
def mobilenet(num_classes=1000,
batch_size=None,
resolution_scale=224,
width_multiplier=1.0,
depth_multiplier=1,
dropout=1e-3):
"""Instantiates the architecture.
Arguments:
width_multiplier: Controls the width of the network. This is known as the width
multiplier in the MobileNet paper. - If `alpha` < 1.0, proportionally
decreases the number of filters in each layer. - If `alpha` > 1.0,
proportionally increases the number of filters in each layer. - If
`alpha` = 1, default number of filters from the paper are used at each
layer. Default to 1.0.
resolution_scale: 128, 160, 192, 224
depth_multiplier: Depth multiplier for depthwise convolution. Default to 1.0.
dropout: Dropout rate. Default to 0.001.
num_classes: `int` number of classes for image classification.
batch_size: Size of the batches for each step.
Returns:
A Keras model instance.
"""
input_shape = (resolution_scale, resolution_scale, 3)
img_input = layers.Input(shape=input_shape, batch_size=batch_size)
x = img_input
if backend.image_data_format() == 'channels_first':
x = layers.Permute((3, 1, 2))(x)
shape = (int(1024 * width_multiplier), 1, 1)
else: # channels_last
shape = (1, 1, int(1024 * width_multiplier))
x = _conv_block(x, 32, width_multiplier, strides=(2, 2))
x = _depthwise_conv_block(x, 64, width_multiplier, depth_multiplier, block_id=1)
x = _depthwise_conv_block(x, 128, width_multiplier, depth_multiplier, strides=(2, 2), block_id=2)
x = _depthwise_conv_block(x, 128, width_multiplier, depth_multiplier, block_id=3)
x = _depthwise_conv_block(x, 256, width_multiplier, depth_multiplier, strides=(2, 2), block_id=4)
x = _depthwise_conv_block(x, 256, width_multiplier, depth_multiplier, block_id=5)
x = _depthwise_conv_block(x, 512, width_multiplier, depth_multiplier, strides=(2, 2), block_id=6)
x = _depthwise_conv_block(x, 512, width_multiplier, depth_multiplier, block_id=7)
x = _depthwise_conv_block(x, 512, width_multiplier, depth_multiplier, block_id=8)
x = _depthwise_conv_block(x, 512, width_multiplier, depth_multiplier, block_id=9)
x = _depthwise_conv_block(x, 512, width_multiplier, depth_multiplier, block_id=10)
x = _depthwise_conv_block(x, 512, width_multiplier, depth_multiplier, block_id=11)
x = _depthwise_conv_block(x, 1024, width_multiplier, depth_multiplier, strides=(2, 2), block_id=12)
x = _depthwise_conv_block(x, 1024, width_multiplier, depth_multiplier, block_id=13)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Reshape(shape, name='reshape')(x)
x = layers.Dropout(dropout, name='dropout')(x)
x = layers.Conv2D(num_classes, (1, 1), padding='same', name='conv_preds')(x)
x = layers.Reshape((num_classes,), name='reshape_')(x)
x = layers.Activation(activation='softmax', name='predictions', dtype='float32')(x)
# Create model.
return models.Model(img_input, x, name='mobilenet_%0.2f_%d' % (width_multiplier, resolution_scale))
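# A minimal usage sketch, assuming the `backend`, `layers` and `models`
# aliases imported at the top of this module come from `tensorflow.keras`:
#
#   model = mobilenet(num_classes=10, resolution_scale=128, width_multiplier=0.5)
#   model.summary()  # the classification head should end in shape (None, 10)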
| 38.339806 | 103 | 0.661433 | 1,053 | 7,898 | 4.765432 | 0.17284 | 0.03069 | 0.074731 | 0.049223 | 0.56098 | 0.525707 | 0.482264 | 0.47489 | 0.465923 | 0.439219 | 0 | 0.032215 | 0.245379 | 7,898 | 205 | 104 | 38.526829 | 0.809732 | 0.373639 | 0 | 0.342857 | 0 | 0 | 0.057398 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028571 | false | 0 | 0.057143 | 0 | 0.114286 | 0.009524 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6b0a6f69b4e8e457b33054cf5afa5f57a1ac191 | 943 | py | Python | tests/test_preprocessing.py | temuller/cosmo_phot | 011333f84486614cb9339d3874dc072c45ebed23 | [
"MIT"
] | null | null | null | tests/test_preprocessing.py | temuller/cosmo_phot | 011333f84486614cb9339d3874dc072c45ebed23 | [
"MIT"
] | null | null | null | tests/test_preprocessing.py | temuller/cosmo_phot | 011333f84486614cb9339d3874dc072c45ebed23 | [
"MIT"
] | null | null | null | import unittest
from hostphot.cutouts import download_images
from hostphot.coadd import coadd_images
from hostphot.image_masking import create_mask
class TestHostPhot(unittest.TestCase):
def test_preprocessing(self):
coadd_filters = 'riz'
survey = 'PS1'
name = 'SN2004eo'
host_ra, host_dec = 308.2092, 9.92755 # coordinates of the host galaxy of SN2004eo
download_images(name, host_ra, host_dec, survey=survey)
# coadd
coadd_images(name, coadd_filters, survey)
# masking
coadd_mask_params = create_mask(name, host_ra, host_dec,
filt=coadd_filters, survey=survey,
extract_params=True)
for filt in 'grizy':
create_mask(name, host_ra, host_dec, filt, survey=survey,
common_params=coadd_mask_params)
if __name__ == '__main__':
unittest.main()
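# A minimal invocation sketch, assuming the hostphot package is installed and
# the PS1 image server is reachable:
#   python -m unittest tests/test_preprocessing.py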
| 31.433333 | 81 | 0.623542 | 109 | 943 | 5.082569 | 0.412844 | 0.043321 | 0.072202 | 0.093863 | 0.142599 | 0.111913 | 0.111913 | 0.111913 | 0 | 0 | 0 | 0.033537 | 0.304348 | 943 | 29 | 82 | 32.517241 | 0.810976 | 0.04878 | 0 | 0 | 0 | 0 | 0.030235 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0.2 | 0 | 0.3 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6b264207fe8e823f52476d11d008957908297b6 | 4,395 | py | Python | bluetooth/objects/Device.py | Exus1/alfa-blue-me | 4b2f9f549967b44688e753a64b0578ebbfedf430 | [
"MIT"
] | null | null | null | bluetooth/objects/Device.py | Exus1/alfa-blue-me | 4b2f9f549967b44688e753a64b0578ebbfedf430 | [
"MIT"
] | 1 | 2020-07-06T14:36:18.000Z | 2021-01-27T09:13:12.000Z | bluetooth/objects/Device.py | Exus1/alfa-blue-me | 4b2f9f549967b44688e753a64b0578ebbfedf430 | [
"MIT"
] | null | null | null | import dbus
from module.EventBus import EventBus, mainEventBus
from bluetooth.objects.Player import Player
class Device:
event_bus: EventBus
__path: str
__dbus_obj: dbus.proxies.ProxyObject
__dbus_iface: dbus.proxies.Interface
__dbus_props_iface: dbus.proxies.Interface
__player_path: str = None
__player: Player = None
def __init__(self, path: str):
self.event_bus = EventBus()
self.__path = path
self.__dbus_obj = dbus.SystemBus().get_object('org.bluez', path)
self.__dbus_obj.connect_to_signal(
'PropertiesChanged',
self.__on_properties_changed,
dbus_interface='org.freedesktop.DBus.Properties'
)
self.__dbus_iface = dbus.Interface(self.__dbus_obj, 'org.bluez.Device1')
self.__dbus_props_iface = dbus.Interface(self.__dbus_obj, 'org.freedesktop.DBus.Properties')
self.__find_player()
def __del__(self):
if self.__player:
del self.__player
self.event_bus.off_all()
del self.event_bus
def is_connected(self):
return self.get_prop('Connected')
def is_paired(self):
return self.get_prop('Paired')
def pair(self):
self.__dbus_iface.Pair()
def connect(self):
self.__dbus_iface.Connect()
def disconnect(self):
self.__dbus_iface.Disconnect()
def connect_profile(self, profile):
self.__dbus_iface.ConnectProfile(profile)
def has_a2dp(self):
uuids = self.get_prop('UUIDs')
return '0000110d-0000-1000-8000-00805f9b34fb' in uuids
def has_player(self):
return self.__player is not None
def get_player(self):
if not self.__player:
raise Exception("Device has no player: " + self.__path)
return self.__player
def get_address(self):
return self.get_prop('Address')
def get_rssi(self):
return self.get_prop('RSSI')
def get_name(self):
return self.get_prop('Name', 'Unknown')
def __find_player(self):
if self.__player is not None:
return
obj = dbus.SystemBus().get_object('org.bluez', "/")
mgr = dbus.Interface(obj, 'org.freedesktop.DBus.ObjectManager')
for path, ifaces in mgr.GetManagedObjects().items():
if str(path).startswith(self.__path):
adapter = ifaces.get('org.bluez.MediaPlayer1')
if not adapter:
continue
self.__set_player(path)
def __on_properties_changed(self, interface, changed: dict, invalidated):
if 'Connected' in changed:
self.__on_connected_property_change(changed.get('Connected'))
if 'Player' in changed:
self.__on_player_change(changed.get('Player'))
if 'Paired' in changed:
self.__on_paired_change(changed.get('Paired'))
def __on_connected_property_change(self, value):
if not value:
self.event_bus.trigger('disconnected')
mainEventBus.trigger('device:disconnected', {
'device': self
})
else:
self.event_bus.trigger('connected')
mainEventBus.trigger('device:connected', {
'device': self
})
def __on_player_change(self, path):
self.__set_player(path)
def __on_paired_change(self, value):
if not value:
self.event_bus.trigger('unpaired')
mainEventBus.trigger('device:unpaired', {
'device': self
})
else:
self.event_bus.trigger('paired')
mainEventBus.trigger('device:paired', {
'device': self
})
def __set_player(self, player_path: str):
self.__player_path = player_path
if self.__player:
del self.__player
self.__player = Player(self.__player_path)
self.__player.event_bus.add_forwarding('active-player', self.event_bus)
self.event_bus.trigger('player-changed', {
'player': self.get_player()
})
def get_prop(self, prop_name: str, default=None):
try:
return self.__dbus_props_iface.Get('org.bluez.Device1', prop_name)
except Exception:
return default
def get_all_props(self):
return self.__dbus_props_iface.GetAll('org.bluez.Device1')
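# A minimal usage sketch, assuming a running BlueZ stack; the object path
# below is a hypothetical placeholder, not a real device:
#
#   device = Device('/org/bluez/hci0/dev_AA_BB_CC_DD_EE_FF')
#   print(device.get_name(), device.is_connected())
#   if device.has_player():
#       player = device.get_player()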
| 31.392857 | 100 | 0.619113 | 506 | 4,395 | 5.009881 | 0.193676 | 0.051282 | 0.042604 | 0.033531 | 0.251677 | 0.152268 | 0.134911 | 0.034714 | 0.034714 | 0.034714 | 0 | 0.010072 | 0.277133 | 4,395 | 139 | 101 | 31.618705 | 0.78785 | 0 | 0 | 0.168142 | 0 | 0 | 0.113993 | 0.03504 | 0 | 0 | 0 | 0 | 0 | 1 | 0.19469 | false | 0 | 0.026549 | 0.061947 | 0.39823 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6b2ccd85fff302304cf6c5368fb0672775dec55 | 2,213 | py | Python | objmap.py | runejuhl/bin | 948b246c92540e4d7451538879847513864c0219 | [
"MIT"
] | null | null | null | objmap.py | runejuhl/bin | 948b246c92540e4d7451538879847513864c0219 | [
"MIT"
] | null | null | null | objmap.py | runejuhl/bin | 948b246c92540e4d7451538879847513864c0219 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
import sys
import subprocess
import re
import os.path
def main():
if len(sys.argv) < 2:
print('Missing argument')
sys.exit(-1)
exe = sys.argv[1]
if not os.path.isfile(exe):
print('Not a file, sir.')
sys.exit(-2)
# decode the captured bytes so the line-by-line processing below works under Python 3
o = subprocess.check_output(['objdump', '-M', 'intel', '-d', exe]).decode()
r = subprocess.check_output(['readelf', '-a', exe]).decode()
s = subprocess.check_output(['strings', '-t', 'x', exe]).decode()
# match addresses in strings output
regex = re.compile('^[ ]+(?P<addr>[0-9a-f]+) (.*)$')
strings = {}
for line in str(s).split('\n'):
match = regex.search(line)
if not match:
continue
(addr, string) = match.groups()
strings[int(addr, 16)+0x8048000] = string
# match output from readelf
regex = re.compile('^[ ]+(?P<num>[0-9]+): (?P<addr>[0-9a-f]+)[ ]+(?P<size>[0-9]+)[ ]+OBJECT[ ]+(?P<bind>GLOBAL|WEAK|LOCAL)[ ]+(?P<vis>DEFAULT|HIDDEN)[ ]+(?P<ndx>[0-9]+|ABS|UND)[ ]+(?P<name>.+)$')
variables = {}
for line in str(r).split('\n'):
match = regex.search(line)
if not match:
continue
g = match.groupdict()
variables[int(match.group('addr'), 16)] = match.groupdict()
# Match addresses and strings
def stringrepl(matchobj):
# if matchobj is None:
# return None
saddr = matchobj.groups()[1]
addr = int(saddr, 16)
if addr in strings:
return '%s ;; "%s"' % (matchobj.groups()[0], strings[addr])
return matchobj.groups()[0]
# Match addresses and variables
def varrepl(matchobj):
# if matchobj is None:
# return None
saddr = matchobj.groups()[1]
addr = int(saddr, 16)
if addr in variables:
var = variables[addr]
return '%s ;; var %s (%s, size %i)' % (matchobj.group('match'), var['name'], var['bind'].lower(), int(var['size']))
return matchobj.groups()[0]
replaced = re.sub(r'(.*?(0x[0-9a-f]{7,}).*)', stringrepl, str(o))
replaced = re.sub(r'(?P<match>.*?(0x[0-9a-f]{7,}).*)', varrepl, replaced)
print(replaced)
if __name__ == '__main__':
main()
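# A minimal invocation sketch, assuming a 32-bit ELF binary (the 0x8048000
# offset added above is the classic 32-bit load address):
#   python objmap.py ./target_binary > annotated.asm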
| 29.118421 | 199 | 0.536376 | 289 | 2,213 | 4.069204 | 0.33564 | 0.059524 | 0.013605 | 0.02551 | 0.231293 | 0.204082 | 0.204082 | 0.204082 | 0.204082 | 0.204082 | 0 | 0.026429 | 0.264799 | 2,213 | 75 | 200 | 29.506667 | 0.696374 | 0.095798 | 0 | 0.244898 | 0 | 0.020408 | 0.197791 | 0.079317 | 0 | 0 | 0.004518 | 0 | 0 | 1 | 0.061224 | false | 0 | 0.081633 | 0 | 0.22449 | 0.061224 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6b55e4c1d8bfa3eac38de71d15f89c1f1da3226 | 641 | py | Python | analysis/cohort_pickle_checks.py | opensafely/covid-vaccine-not-received | 5b8c7e4219e654cf2fcf6f5013a5ef6e9256f26d | [
"MIT"
] | null | null | null | analysis/cohort_pickle_checks.py | opensafely/covid-vaccine-not-received | 5b8c7e4219e654cf2fcf6f5013a5ef6e9256f26d | [
"MIT"
] | null | null | null | analysis/cohort_pickle_checks.py | opensafely/covid-vaccine-not-received | 5b8c7e4219e654cf2fcf6f5013a5ef6e9256f26d | [
"MIT"
] | null | null | null | ''' Count number of declines with uncertain dates.
'''
import os
import pandas as pd
import numpy as np
input_path="output/cohort.pickle"
output_path="output/cohort_pickle_checks.csv"
backend = os.getenv("OPENSAFELY_BACKEND", "expectations")
cohort = pd.read_pickle(input_path)
cohort = cohort.loc[pd.notnull(cohort["decl_first_dat"])]
cohort["decline date incorrect"] = np.where(cohort["decl_first_dat"] < "2020-12-08", 1, 0)
checks = cohort.groupby(["decline date incorrect"])["sex"].count()
checks = 100*checks/checks.sum()
print(checks)
#checks = cohort.agg({"max","min", "count"}).transpose()
checks.to_csv(output_path)
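# A minimal invocation sketch; the backend falls back to "expectations" when
# the OPENSAFELY_BACKEND environment variable is unset:
#   OPENSAFELY_BACKEND=expectations python analysis/cohort_pickle_checks.py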
| 24.653846 | 90 | 0.730109 | 93 | 641 | 4.892473 | 0.548387 | 0.03956 | 0.07033 | 0.096703 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022569 | 0.101404 | 641 | 25 | 91 | 25.64 | 0.767361 | 0.160686 | 0 | 0 | 0 | 0 | 0.339015 | 0.058712 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.230769 | 0 | 0.230769 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6b73f047e331caed47c666c6df9c31375f316f1 | 2,794 | py | Python | archive/experiment_4_train.py | marjanin/tendon_stiffness | b1dc379b09bbf9c044410a6bc51afbee0cba2e05 | [
"MIT"
] | 1 | 2020-07-20T02:04:46.000Z | 2020-07-20T02:04:46.000Z | archive/experiment_4_train.py | marjanin/tendon_stiffness | b1dc379b09bbf9c044410a6bc51afbee0cba2e05 | [
"MIT"
] | null | null | null | archive/experiment_4_train.py | marjanin/tendon_stiffness | b1dc379b09bbf9c044410a6bc51afbee0cba2e05 | [
"MIT"
] | 1 | 2020-05-11T11:41:39.000Z | 2020-05-11T11:41:39.000Z | import gym
import numpy as np
from stable_baselines.common.policies import MlpPolicy as common_MlpPolicy
from stable_baselines.ddpg.policies import MlpPolicy as DDPG_MlpPolicy
from stable_baselines.common.vec_env import DummyVecEnv
from stable_baselines.ddpg.noise import NormalActionNoise, OrnsteinUhlenbeckActionNoise, AdaptiveParamNoiseSpec
from stable_baselines import PPO1, PPO2, DDPG
#defining the variables
RL_method = "PPO1"
experiment_ID = "experiment_4_test"
save_name_extension = RL_method
total_timesteps = 1000
stiffness_versions = 9
for stiffness_value in range(stiffness_versions):
stiffness_value_str = "stiffness_{}".format(stiffness_value)
log_dir = "./logs/{}/{}/{}/".format(experiment_ID, RL_method, stiffness_value_str)
# defining the environments
env = gym.make('TSNMILeg{}-v1'.format(stiffness_value))
#env = gym.wrappers.Monitor(env, "./tmp/gym-results", video_callable=False, force=True)
# defining the initial model
if RL_method == "PPO1":
model = PPO1(common_MlpPolicy, env, verbose=1, tensorboard_log=log_dir)
elif RL_method == "PPO2":
env = DummyVecEnv([lambda: env])
model = PPO2(common_MlpPolicy, env, verbose=1, tensorboard_log=log_dir)
elif RL_method == "DDPG":
env = DummyVecEnv([lambda: env])
n_actions = env.action_space.shape[-1]
param_noise = None
action_noise = OrnsteinUhlenbeckActionNoise(mean=np.zeros(n_actions), sigma=float(0.5)* 5 * np.ones(n_actions))
model = DDPG(DDPG_MlpPolicy, env, verbose=1, param_noise=param_noise, action_noise=action_noise, tensorboard_log=log_dir)
else:
raise ValueError("Invalid RL mode")
# setting the environment on the model
#model.set_env(env)
# training the model
model.learn(total_timesteps=total_timesteps)
# saving the trained model
model.save(log_dir+"/model")
# ## running the trained model
# # remove to demonstrate saving and loading
# del model
# # defining the environments
# su_env = gym.make('HalfCheetah_nssu-v3')
# su_env = DummyVecEnv([lambda: su_env])
# ru_env = gym.make('HalfCheetah_nsru-v3')
# ru_env = DummyVecEnv([lambda: ru_env])
# # loading the trained model
# if RL_method == "PPO2":
# model = PPO2.load("trainedmodel-HalfCheetah_nssuru_"+save_name_extension)
# elif RL_method == "DDPG":
# model = DDPG.load("trainedmodel-HalfCheetah_nssuru_"+save_name_extension)
# else:
# raise ValueError("Invalid RL mode")
# # setting the second environment
# model.set_env(ru_env)
# #model = DDPG.load("PPO2-HalfCheetah_nssu-v3_test2")
# obs = ru_env.reset()
# while True:
# action, _states = model.predict(obs)
# obs, rewards, dones, info = ru_env.step(action)
# ru_env.render()
#import pdb; pdb.set_trace()
#tensorboard --logdir=/Users/alimarjaninejad/Documents/github/marjanin/gym_ali/log/
#http://Alis-MacBook-Pro.local:6006
| 39.914286 | 123 | 0.764853 | 387 | 2,794 | 5.30491 | 0.359173 | 0.031174 | 0.046274 | 0.029226 | 0.146128 | 0.146128 | 0.146128 | 0.097418 | 0.056503 | 0.056503 | 0 | 0.012971 | 0.117037 | 2,794 | 70 | 124 | 39.914286 | 0.819214 | 0.433787 | 0 | 0.064516 | 0 | 0 | 0.061648 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.225806 | 0 | 0.225806 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6b874afe3ecfcd5e956781905981e176b3e59f4 | 605 | py | Python | geotrek/settings/env_dev.py | GeotrekCE/Geotrek-admin | efcc7a6c2ccb6aee6b299b22f33f236dd8a23d91 | [
"BSD-2-Clause"
] | 50 | 2016-10-19T23:01:21.000Z | 2022-03-28T08:28:34.000Z | geotrek/settings/env_dev.py | GeotrekCE/Geotrek-admin | efcc7a6c2ccb6aee6b299b22f33f236dd8a23d91 | [
"BSD-2-Clause"
] | 1,422 | 2016-10-27T10:39:40.000Z | 2022-03-31T13:37:10.000Z | geotrek/settings/env_dev.py | GeotrekCE/Geotrek-admin | efcc7a6c2ccb6aee6b299b22f33f236dd8a23d91 | [
"BSD-2-Clause"
] | 46 | 2016-10-27T10:59:10.000Z | 2022-03-22T15:55:56.000Z | #
# Django Development
# ..........................
DEBUG = True
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
#
# Developer additions
# ..........................
INSTALLED_APPS = (
'django_extensions',
'debug_toolbar',
'drf_yasg',
) + INSTALLED_APPS
INTERNAL_IPS = type(str('c'), (), {'__contains__': lambda *a: True})()
ALLOWED_HOSTS = ['*']
MIDDLEWARE += (
'debug_toolbar.middleware.DebugToolbarMiddleware',
)
#
# Use some default tiles
# ..........................
LOGGING['loggers']['geotrek']['level'] = 'DEBUG'
LOGGING['loggers']['']['level'] = 'DEBUG'
| 18.90625 | 70 | 0.578512 | 53 | 605 | 6.358491 | 0.735849 | 0.077151 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133884 | 605 | 31 | 71 | 19.516129 | 0.64313 | 0.236364 | 0 | 0 | 0 | 0 | 0.411504 | 0.205752 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6b88d0c63863ec876ce31f9690f31d086738e0d | 1,629 | py | Python | parsing/accessDB.py | marquettecomputationalsocialscience/seniordesign1819 | cc11c6f46dbcb1c2fb69e1ef2017953f0e6b066f | [
"MIT"
] | 5 | 2018-08-30T19:15:21.000Z | 2019-03-25T17:13:39.000Z | parsing/accessDB.py | marquettecomputationalsocialscience/seniordesign1819 | cc11c6f46dbcb1c2fb69e1ef2017953f0e6b066f | [
"MIT"
] | 15 | 2018-09-03T18:39:25.000Z | 2019-05-15T07:00:43.000Z | parsing/accessDB.py | marquettecomputationalsocialscience/seniordesign1819 | cc11c6f46dbcb1c2fb69e1ef2017953f0e6b066f | [
"MIT"
] | 8 | 2018-09-03T19:11:33.000Z | 2018-11-14T22:32:22.000Z | import os
import pandas as pd
from datetime import datetime as dt
# Read raw data in
root = os.path.expanduser('../data/')
files = [root + f for f in os.listdir(root) if f.endswith('.csv') and f != 'addresses.csv']
dfs = [pd.read_csv(f, header=0, index_col='ID', parse_dates=['Date/Time']) for f in files]
df = pd.concat(dfs)
df['Police District'] = df['Police District'].astype(str)
addrDB = pd.read_csv('../data/addresses.csv', header=0, index_col=0)
# Function to get the coordinates of a given address
def geoLoc(addr):
if addr in addrDB.index:
return [addrDB.loc[addr, 'Latitude'], addrDB.loc[addr, 'Longitude']]
return ['', '']
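# A hedged example: the address below is hypothetical and only resolves if it
# appears in addresses.csv; unknown addresses yield ['', ''].
#   lat, lon = geoLoc('2100 N 50TH ST')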
# Function to get a set of data
def filter(startDate='', endDate='', dayOfWeek=-1, call='', nature='', status='', doGeoLoc=False):
filtered = df
if call != '':
filtered = filtered[filtered['Call Number'] == call]
if nature != '':
filtered = filtered[filtered['Nature of Call'] == nature]
if status != '':
filtered = filtered[filtered['Status'] == status]
if startDate != '':
filtered = filtered[filtered['Date/Time'] >= dt.strptime(startDate, '%m/%d/%Y')]
if endDate != '':
filtered = filtered[filtered['Date/Time'] < dt.strptime(endDate, '%m/%d/%Y')]
if dayOfWeek >= 0:
filtered = filtered[filtered['Date/Time'].dt.dayofweek == dayOfWeek]
if doGeoLoc:
results = filtered.loc[:, 'Location'].apply(geoLoc)
filtered[['Latitude', 'Longitude']] = pd.DataFrame(results.values.tolist(), index=results.index, columns=['Latitude', 'Longitude'])
return filtered.sort_values(by='Date/Time') | 41.769231 | 139 | 0.638428 | 217 | 1,629 | 4.764977 | 0.37788 | 0.185687 | 0.139265 | 0.081238 | 0.11412 | 0.11412 | 0.081238 | 0 | 0 | 0 | 0 | 0.003748 | 0.181093 | 1,629 | 39 | 140 | 41.769231 | 0.771364 | 0.055249 | 0 | 0 | 0 | 0 | 0.149089 | 0.013672 | 0 | 0 | 0 | 0 | 0 | 1 | 0.064516 | false | 0 | 0.096774 | 0 | 0.258065 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6bb84f81d229ef9b0b05112770f57fbf3a0daf2 | 1,007 | py | Python | main.py | leandrovrabelo/pico_max7219 | 16c695908c9e39740406dc60c650168b8b4d7a5d | [
"MIT"
] | 3 | 2021-04-02T09:06:39.000Z | 2021-12-22T11:13:43.000Z | main.py | leandrovrabelo/pico_max7219 | 16c695908c9e39740406dc60c650168b8b4d7a5d | [
"MIT"
] | null | null | null | main.py | leandrovrabelo/pico_max7219 | 16c695908c9e39740406dc60c650168b8b4d7a5d | [
"MIT"
] | 1 | 2021-11-06T21:01:42.000Z | 2021-11-06T21:01:42.000Z | from max7219 import Matrix8x8
from machine import Pin, SPI
from utime import sleep
if __name__ == '__main__':
CS = Pin(5, Pin.OUT) # GPIO5 pin 7
CLK = Pin(6) # GPIO6 pin 9
DIN = Pin(7) # GPIO7 pin 10
BRIGHTNESS = 3 # from 0 to 15
text1 = "Hello World!"
text2 = "PICO PI"
# CLK = GPIO6 and MOSI (DIN) = GPIO7 are the default pins of SPI0, so you can omit them
spi = SPI(0, baudrate=10_000_000, sck=CLK, mosi=DIN)
display = Matrix8x8(spi, CS, 1, orientation=1)
display.brightness(BRIGHTNESS)
display.invert = False
while True:
# all on
display.fill(True)
display.show()
sleep(0.5)
# all off
display.fill(False)
display.show()
sleep(0.5)
# show a string scrolling through the Matrix
display.text_scroll(text1)
# show a string one character at a time
display.one_char_a_time(text2, delay=0.25)
| 27.972222 | 89 | 0.571003 | 139 | 1,007 | 4.035971 | 0.553957 | 0.01426 | 0.057041 | 0.060606 | 0.064171 | 0 | 0 | 0 | 0 | 0 | 0 | 0.070997 | 0.342602 | 1,007 | 36 | 90 | 27.972222 | 0.776435 | 0.226415 | 0 | 0.173913 | 0 | 0 | 0.036735 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.130435 | 0 | 0.130435 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6bc42fe55c00a39f6b5dcc77087968fc2014f3d | 3,319 | py | Python | tests/test_dynamicprinter.py | DerekYu177/Tooling | 4b1c0490375659e716be708db254cb4f1b8f2b6b | [
"MIT"
] | 2 | 2018-06-28T20:30:25.000Z | 2022-01-03T15:14:39.000Z | tests/test_dynamicprinter.py | DerekYu177/Tooling | 4b1c0490375659e716be708db254cb4f1b8f2b6b | [
"MIT"
] | 4 | 2018-06-26T23:33:14.000Z | 2018-07-09T00:44:17.000Z | tests/test_dynamicprinter.py | DerekYu177/dynamictableprint | 4b1c0490375659e716be708db254cb4f1b8f2b6b | [
"MIT"
] | null | null | null | """
Tests the dynamic table print module
"""
import unittest
from unittest import mock
import pandas as pd
from dynamictableprint.dynamicprinter import DynamicTablePrint
def mock_terminal_size(_):
"""
Does what it says
"""
return [80]
class TestDynamicTablePrint(unittest.TestCase):
"""
Tests the wrapper DynamicTablePrint
"""
def setUp(self):
length = 30
raw_data = {
'something_good': ["FOOD"*2 for i in range(length)],
'something_bad': ["WORK"*20 for i in range(length)],
'squished': ["SQUISHABLE"*4 for i in range(length)],
'saved': ["CANADA"*3 for i in range(length)],
}
self.dataframe = pd.DataFrame.from_dict(
raw_data,
)
self.blank_dataframe = pd.DataFrame.from_dict(
{
'stupid': [],
'idiot': [],
'how could I forget': [],
'that blank': [],
'was a think': [],
}
)
self.auco = DynamicTablePrint(
self.dataframe,
angel_column='saved',
squish_column='squished',
)
@mock.patch('os.get_terminal_size', side_effect=mock_terminal_size)
def test_system_screen_width(self, _os_function):
"""
Tests that we make the correct call to os.get_terminal_size
"""
screen_width, _widths, _modified_dataframe = self.auco.fit_screen()
self.assertEqual(screen_width, 80)
def test_system_fallback_width(self):
"""
In the case where we cannot get at the system settings, we set a default
"""
self.assertEqual(self.auco.screen_width, self.auco.config.default_screen_width)
def test_settable_screen_width(self):
"""
User is allowed to set the screen width
"""
dtp = DynamicTablePrint(self.dataframe, screen_width=100)
self.assertEqual(dtp.screen_width, 100)
def test_printable_screen_width(self):
"""
Ensuring that we have the appropriate amount of space for columns
"""
default_screen_width = 80
printable_width = default_screen_width - 2 - 3*3
assert DynamicTablePrint.printable_screen_width(
['something_good', 'something_bad', 'squished', 'saved'],
default_screen_width) == printable_width
def test_empty_dataframe(self):
"""
if printing an empty dataframe, nothing should happen
"""
dtp = DynamicTablePrint(self.blank_dataframe)
dtp.config.empty_banner = 'Test in Progress'
dtp.squish_calculator = mock.MagicMock()
dtp.write_to_screen()
dtp.squish_calculator.assert_not_called()
def test_set_index(self):
"""
check to see that the index has been fixed during initialization
"""
dataframe = {
'big_column' : ['a' * i for i in range(30, 0, -1)]
}
dataframe = pd.DataFrame.from_dict(dataframe)
dataframe = dataframe.sort_values(by='big_column')
dtp = DynamicTablePrint(dataframe, screen_width=100)
indices = dtp.data_frame.index.values
for index in range(30):
assert index == indices[index]
if __name__ == '__main__':
unittest.main()
| 31.018692 | 87 | 0.602591 | 374 | 3,319 | 5.128342 | 0.379679 | 0.086027 | 0.015641 | 0.028676 | 0.079249 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013288 | 0.297077 | 3,319 | 106 | 88 | 31.311321 | 0.80883 | 0.134378 | 0 | 0 | 0 | 0 | 0.086022 | 0 | 0 | 0 | 0 | 0 | 0.092308 | 1 | 0.123077 | false | 0 | 0.061538 | 0 | 0.215385 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6c11429802da894fcfc47476b4c7e126e53cb6b | 1,778 | py | Python | pitfall/helpers/aws/utils.py | bincyber/pitfall | 680c33ae30a2ed1d2bbf742f74accc34b81b1f5b | [
"Apache-2.0"
] | 33 | 2019-11-06T03:45:55.000Z | 2020-12-15T09:14:42.000Z | pitfall/helpers/aws/utils.py | bincyber/pitfall | 680c33ae30a2ed1d2bbf742f74accc34b81b1f5b | [
"Apache-2.0"
] | 3 | 2019-11-19T19:02:44.000Z | 2020-03-29T17:52:11.000Z | pitfall/helpers/aws/utils.py | bincyber/pitfall | 680c33ae30a2ed1d2bbf742f74accc34b81b1f5b | [
"Apache-2.0"
] | 1 | 2020-07-29T07:33:52.000Z | 2020-07-29T07:33:52.000Z | # Copyright 2019 Ali (@bincyber)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import List, Dict, Any
import boto3
import random
DEFAULT_REGION = "us-east-1"
def extract_tags(tag_set: List[Dict[str, Any]]) -> dict:
"""
Returns a dictionary containing the keys and values extracted from an AWS tag set.
:param tag_set: a list of Tag objects, eg. [{'Key': 'Name', 'Value': 'test'}]
:type tag_set: list
:returns: a dictionary of Tag key/value pairs
:rtype: dict
"""
tags = {}
for i in tag_set:
k = i["Key"]
v = i["Value"]
tags[k] = v
return tags
def get_all_regions() -> List[str]:
"""
Gets a list of AWS regions available in this account.
:returns: a list of AWS regions
:rtype: list
"""
ec2 = boto3.client('ec2', region_name=DEFAULT_REGION)
r = ec2.describe_regions()
available_regions = []
for i in r["Regions"]:
region = i["RegionName"]
available_regions.append(region)
return available_regions
def get_random_region() -> str:
"""
Gets a random AWS region from the regions available in this account.
:returns: a random AWS region
:rtype: str
"""
regions = get_all_regions()
return random.choice(regions)
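# A minimal usage sketch; `extract_tags` is pure, while the region helpers
# assume AWS credentials with permission to call ec2:DescribeRegions:
if __name__ == "__main__":
    print(extract_tags([{"Key": "Name", "Value": "test"}]))  # -> {'Name': 'test'}
    print(get_random_region())  # e.g. 'us-west-2'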
| 25.4 | 86 | 0.669854 | 258 | 1,778 | 4.546512 | 0.449612 | 0.051151 | 0.017903 | 0.02728 | 0.085251 | 0.063086 | 0.063086 | 0 | 0 | 0 | 0 | 0.010241 | 0.231159 | 1,778 | 69 | 87 | 25.768116 | 0.847842 | 0.564117 | 0 | 0 | 0 | 0 | 0.054015 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.136364 | false | 0 | 0.136364 | 0 | 0.409091 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6c6d6983ffc811129d05e0b916f71312c4f08b8 | 1,115 | py | Python | cmdbox/scaffold_templates/urls.py | vitorfs/cmdbox | 97806c02caf5947ec855286212e61db714e3fb02 | [
"MIT"
] | 1 | 2019-09-07T11:49:11.000Z | 2019-09-07T11:49:11.000Z | cmdbox/scaffold_templates/urls.py | vitorfs/cmdbox | 97806c02caf5947ec855286212e61db714e3fb02 | [
"MIT"
] | null | null | null | cmdbox/scaffold_templates/urls.py | vitorfs/cmdbox | 97806c02caf5947ec855286212e61db714e3fb02 | [
"MIT"
] | 2 | 2018-09-04T08:33:17.000Z | 2020-09-18T20:26:46.000Z | from django.conf.urls import url
from cmdbox.scaffold_templates import views
urlpatterns = [
url(r'^$', views.scaffold_templates, name='list'),
url(r'^(?P<slug>[^/]+)/$', views.details, name='details'),
url(r'^(?P<slug>[^/]+)/add-file/$', views.add_file, name='add_file'),
url(r'^(?P<slug>[^/]+)/add-folder/$', views.add_folder, name='add_folder'),
url(r'^(?P<slug>[^/]+)/(?P<file_id>\d+)/add-file/$', views.add_children_file, name='add_children_file'),
url(r'^(?P<slug>[^/]+)/(?P<file_id>\d+)/add-folder/$', views.add_children_folder,
name='add_children_folder'),
url(r'^(?P<slug>[^/]+)/(?P<file_id>\d+)/rename/$', views.rename_file, name='rename_file'),
url(r'^(?P<slug>[^/]+)/(?P<file_id>\d+)/duplicate/$', views.duplicate_file, name='duplicate_file'),
url(r'^(?P<slug>[^/]+)/(?P<file_id>\d+)/delete/$', views.delete_file, name='delete_file'),
url(r'^(?P<slug>[^/]+)/edit/$', views.edit, name='edit'),
url(r'^(?P<slug>[^/]+)/edit/(?P<file_id>\d+)/$', views.edit_file, name='edit_file'),
url(r'^(?P<slug>[^/]+)/delete/$', views.delete, name='delete'),
]
| 53.095238 | 108 | 0.594619 | 166 | 1,115 | 3.825301 | 0.162651 | 0.075591 | 0.086614 | 0.155906 | 0.292913 | 0.181102 | 0.181102 | 0.181102 | 0.181102 | 0 | 0 | 0 | 0.095964 | 1,115 | 20 | 109 | 55.75 | 0.62996 | 0 | 0 | 0 | 0 | 0 | 0.451121 | 0.325561 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.117647 | 0 | 0.117647 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6c9952b1ebee49b75a8c48cec1cb1ca06df5caf | 2,653 | py | Python | [20]_break_AES_in_CTR_mode_statistically/analyzer_gui_entry.py | lucasg/Cryptopals | 095e80d0ab9acdda4e5804b45cdba932231086ff | [
"MIT"
] | 19 | 2016-08-01T03:45:39.000Z | 2022-02-01T19:48:52.000Z | [20]_break_AES_in_CTR_mode_statistically/analyzer_gui_entry.py | lucasg/Cryptopals | 095e80d0ab9acdda4e5804b45cdba932231086ff | [
"MIT"
] | null | null | null | [20]_break_AES_in_CTR_mode_statistically/analyzer_gui_entry.py | lucasg/Cryptopals | 095e80d0ab9acdda4e5804b45cdba932231086ff | [
"MIT"
] | 6 | 2019-04-27T02:09:46.000Z | 2021-04-05T15:09:51.000Z | # -*- coding: utf-8 -*-
import tkinter as tk
from tkinter import ttk
# Entry : a tkinter.Entry override which is more practical to use.
# There is a label to the left and a button to the right (both optional):
#
# ----------------------------------------------
# | | | |
# | Label | Entry(redim) | Button |
# | | | |
# ----------------------------------------------
#
# The custom entry is resizable and every component too.
class AnalyserGUIEntry(ttk.Frame):
# Constructor
def __init__(self, master = None, **kwargs ):
# IP frame
ttk.Frame.__init__( self, master, **kwargs)
self.entry = None
self.label = None
self.button = None
# Place the elements that have been initialised.
# Solves the initialisation-order problem.
def pack(self, **kwargs):
# Label Placement to the left
if self.label != None:
self.label.pack( side = tk.LEFT,
fill = tk.Y )
# Button Placement to the right
if self.button != None:
self.button.pack( side = tk.RIGHT,
fill = tk.Y )
# Entry Placement
if self.entry != None:
self.entry.pack( fill = tk.BOTH,
expand = tk.TRUE )
# Frame Placement
ttk.Frame.pack( self, side = tk.TOP, fill = tk.X ,
#expand = tk.X,
**kwargs )
# Add an Entry to the center
def add_entry(self, text_value, **kwargs):
# Text Entry constructor
self.text_value = tk.StringVar()
self.entry = ttk.Entry( self,
textvariable = self.text_value ,
justify = tk.RIGHT,
style = 'FTMEntry.TEntry',
**kwargs
)
self.set_value( text_value )
# Add a label to the left of the entry
def add_label(self, text, **kwargs ):
# Label Constructor
self.label_text = tk.StringVar()
self.label = ttk.Label( self,
textvariable = self.label_text,
anchor = tk.E ,
justify = tk.RIGHT,
style = 'FTMEntry.TLabel',
**kwargs
)
self.set_label( text )
# Add a [...] button to the right
# the button's style can always be overrided with kwargs parameters
def add_button(self, **kwargs):
self.button = ttk.Button( self,
width = 4,
text = "...",
style = 'FTMEntry.TButton',
**kwargs
)
# Label getter/setter
def get_label(self):
return self.label_text.get()
# Label getter/setter
def set_label(self, text):
return self.label_text.set( text )
# Entry getter/setter
def get_value(self):
return self.text_value.get().rstrip(" ")
# Entry getter/setter
def set_value(self, text):
# trailing whitespace for aesthetic purpose
return self.text_value.set( text + " " )
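# A minimal usage sketch, assuming a display is available and that the
# 'FTMEntry.*' ttk styles referenced above have been configured via ttk.Style():
#
#   root = tk.Tk()
#   widget = AnalyserGUIEntry(root)
#   widget.add_label('Key')
#   widget.add_entry('value')
#   widget.add_button(command=root.destroy)
#   widget.pack()
#   root.mainloop()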
| 24.33945 | 71 | 0.591406 | 338 | 2,653 | 4.56213 | 0.286982 | 0.046693 | 0.042153 | 0.014267 | 0.076524 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001023 | 0.263098 | 2,653 | 109 | 72 | 24.33945 | 0.787724 | 0.345646 | 0 | 0.12963 | 0 | 0 | 0.031085 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.037037 | 0.074074 | 0.296296 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6ca3f09187518fec89422b29515f0f9cc5492c9 | 4,756 | py | Python | slack_janitor/main.py | paraita/slack-janitor | 64b3bdf76967276743144d9ef2db7f3fb5deaaea | [
"MIT"
] | null | null | null | slack_janitor/main.py | paraita/slack-janitor | 64b3bdf76967276743144d9ef2db7f3fb5deaaea | [
"MIT"
] | null | null | null | slack_janitor/main.py | paraita/slack-janitor | 64b3bdf76967276743144d9ef2db7f3fb5deaaea | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
"""Bulk deletion of files on your Slack workspace
Requires a valid token from:
https://api.slack.com/custom-integrations/legacy-tokens
"""
import os
import sys
import json
import urllib.parse
import http.client
import calendar
import argparse
import time
from datetime import datetime, timedelta
URL_DOMAIN="slack.com"
URL_LIST="/api/files.list"
URL_DEL="/api/files.delete"
def parse_args(argv):
TOKEN = None
if 'SLACK_TOKEN' in os.environ and os.environ['SLACK_TOKEN'] != '':
TOKEN = os.environ['SLACK_TOKEN']
parser = argparse.ArgumentParser()
parser.add_argument('--token', '-t', help="Your Slack API Token. If none is provided, we'll try to use your SLACK_TOKEN environment variable instead.", default=TOKEN)
parser.add_argument('--days', '-d',
help='Only remove files older than the specified amount of days',
type=int, default=10)
parser.add_argument('--retries', '-r', help='Number of retries before aborting cleaning',
type=int, default=10)
parser.add_argument('--cooldown', '-c', help='Time (s) to wait before another attempt to clean',
type=int, default=3)
return parser.parse_args(argv)
def no_error(response):
status = response.code
if status != 200:
print("Shit happened !")
print("Status: %s" % status)
print("Reason: %s" % response.reason)
return False
else:
return True
def _delete_file(f, headers, cnt, total, token):
"""Delete one file with the Slack API (actual implementation)
"""
timestamp = str(calendar.timegm(datetime.now().utctimetuple()))
params = urllib.parse.urlencode({
'token': token,
'file': f['id'],
'set_active': 'true',
'_attempts': '1',
't': timestamp
})
conn = http.client.HTTPSConnection(URL_DOMAIN)
conn.request("POST", URL_DEL, body=params, headers=headers)
response = conn.getresponse()
if no_error(response):
print("[{}/{}] deleted {} ({})".format(cnt, total, f['name'].encode('utf-8'), f['id']))
return True
else:
return False
def delete_file(f, headers, cnt, total, args):
"""Delete one file with the Slack API
"""
cooldown_try = 0
while cooldown_try <= args.retries:
if _delete_file(f, headers, cnt, total, args.token):
return True
print(f"Let's cool down for {args.cooldown} seconds...")
time.sleep(args.cooldown)
cooldown_try += 1
print("Max number of retries reached!")
return False
def get_all_files(files_list, params, headers, args):
"""Fetch all files going through all the pages
"""
files = files_list['files']
paging = files_list['paging']
current_page = paging['page']
print("Fetching all files to delete")
while current_page < paging['pages']:
print("Fetching page {}/{}".format(current_page, paging['pages']-1))
conn = http.client.HTTPSConnection(URL_DOMAIN)
conn.request("POST", URL_LIST, body=params, headers=headers)
response = conn.getresponse()
responsejson = json.loads(response.read())
files = files + responsejson['files']
current_page += 1
paging = responsejson['paging']
total_nb_files = len(files)
print("There's %s files to delete" % total_nb_files)
for i in range(total_nb_files):
f = files[i]
if not delete_file(f, headers, i+1, total_nb_files, args):
print("Will exit because an error occured during the deletion of %s" % f['id'])
sys.exit(1)
def main(argv=None):
args = parse_args(argv)
DAYS = args.days
if args.token is None:
print("Could not find a valid Slack Token")
sys.exit(1)
date = str(calendar.timegm((datetime.now() + timedelta(- args.days)).utctimetuple()))
params = urllib.parse.urlencode({
'token': args.token,
'ts_date': date
})
headers = {
'Content-type': 'application/x-www-form-urlencoded'
}
conn = http.client.HTTPSConnection(URL_DOMAIN)
conn.request("POST", URL_LIST, body=params, headers=headers)
response = conn.getresponse()
response_code = response.code
if no_error(response):
get_all_files(json.loads(response.read()), params, headers, args)
else:
print("Will exit because an error occured during the initial fetch of the files list")
sys.exit(1)
if __name__ == "__main__":
main()
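# A minimal invocation sketch, assuming a legacy token with permission to list
# and delete files:
#   SLACK_TOKEN=xoxp-... python main.py --days 30 --retries 5 --cooldown 3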
| 35.759398 | 170 | 0.626997 | 614 | 4,756 | 4.752443 | 0.306189 | 0.015422 | 0.023304 | 0.024674 | 0.290953 | 0.271761 | 0.242289 | 0.153873 | 0.153873 | 0.139136 | 0 | 0.005569 | 0.244954 | 4,756 | 132 | 171 | 36.030303 | 0.807018 | 0.065181 | 0 | 0.285714 | 0 | 0.008929 | 0.219556 | 0.007469 | 0 | 0 | 0 | 0 | 0 | 1 | 0.053571 | false | 0 | 0.080357 | 0 | 0.196429 | 0.116071 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6caf3fe4edf041c7511413b1ca3d3833f7691c2 | 22,030 | py | Python | executors/location.py | thevickypedia/jarvis | 4bea623bb9f1618509d4fcd77a696c638011799c | [
"MIT"
] | null | null | null | executors/location.py | thevickypedia/jarvis | 4bea623bb9f1618509d4fcd77a696c638011799c | [
"MIT"
] | null | null | null | executors/location.py | thevickypedia/jarvis | 4bea623bb9f1618509d4fcd77a696c638011799c | [
"MIT"
] | null | null | null | import json
import math
import os
import pathlib
import re
import socket
import ssl
import sys
import urllib.error
import urllib.request
import webbrowser
from difflib import SequenceMatcher
from typing import NoReturn, Tuple, Union
import certifi
import yaml
from geopy.distance import geodesic
from geopy.exc import GeocoderUnavailable, GeopyError
from geopy.geocoders import Nominatim, options
from pyicloud import PyiCloudService
from pyicloud.exceptions import (PyiCloudAPIResponseException,
PyiCloudFailedLoginException)
from pyicloud.services.findmyiphone import AppleDevice
from speedtest import Speedtest
from timezonefinder import TimezoneFinder
from executors import controls
from executors.logger import logger
from modules.audio import listener, speaker
from modules.conditions import keywords
from modules.exceptions import NoInternetError
from modules.models import models
from modules.utils import shared, support
env = models.env
fileio = models.FileIO()
# configures the SSL context and the geolocator used to resolve latitude, longitude and address information
options.default_ssl_context = ssl.create_default_context(cafile=certifi.where())
geo_locator = Nominatim(scheme="http", user_agent="test/1", timeout=3)
def device_selector(phrase: str = None) -> Union[AppleDevice, None]:
"""Selects a device using the received input string.
See Also:
- Scores each device's name and model against the phrase using ``SequenceMatcher``.
- The device with the closest match is returned.
Args:
phrase: Takes the voice recognized statement as argument.
Returns:
AppleDevice:
Returns the selected device from the class ``AppleDevice``
"""
if not all([env.icloud_user, env.icloud_pass]):
logger.warning("ICloud username or password not found.")
return
icloud_api = PyiCloudService(env.icloud_user, env.icloud_pass)
devices = [device for device in icloud_api.devices]
if not phrase:
phrase = socket.gethostname().split('.')[0] # Temporary fix
devices_str = [{str(device).split(":")[0].strip(): str(device).split(":")[1].strip()} for device in devices]
closest_match = [
(SequenceMatcher(a=phrase, b=key).ratio() + SequenceMatcher(a=phrase, b=val).ratio()) / 2
for device in devices_str for key, val in device.items()
]
index = closest_match.index(max(closest_match))
return icloud_api.devices[index]
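# A hedged example: device_selector(phrase="iphone") scores every registered
# device's name and model with SequenceMatcher and returns the closest match.
#   target = device_selector(phrase="iphone")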
def get_coordinates_from_ip() -> Tuple[float, float]:
"""Uses public IP to retrieve latitude and longitude. If fails, uses ``Speedtest`` module.
Returns:
tuple:
Returns latitude and longitude as a tuple.
"""
try:
info = json.load(urllib.request.urlopen(url="https://ipinfo.io/json"))
coordinates = tuple(map(float, info.get('loc', '0,0').split(',')))
except urllib.error.HTTPError as error:
logger.error(error)
coordinates = (0.0, 0.0)
if coordinates == (0.0, 0.0):
st = Speedtest()
return float(st.results.client["lat"]), float(st.results.client["lon"])
else:
return coordinates
def get_location_from_coordinates(coordinates: tuple) -> dict:
"""Uses the latitude and longitude information to get the address information.
Args:
coordinates: Takes the latitude and longitude as a tuple.
Returns:
dict:
Location address.
"""
try:
locator = geo_locator.reverse(coordinates, language="en")
return locator.raw["address"]
except (GeocoderUnavailable, GeopyError) as error:
logger.error(error)
return {}
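# A minimal usage sketch chaining the two helpers above (requires internet
# access for both the IP lookup and the reverse geocode):
#   lat, lon = get_coordinates_from_ip()
#   print(get_location_from_coordinates((lat, lon)).get("city"))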
def location_services(device: AppleDevice) -> Union[NoReturn,
Tuple[str or float or None, str or float or None, str or None]]:
"""Gets the current location of an Apple device.
Args:
device: Passed when locating a particular Apple device.
Returns:
None or Tuple[str or float, str or float, str or float]:
- On success, returns ``current latitude``, ``current longitude`` and ``location`` information as a ``dict``.
- On failure, calls the ``restart()`` or ``terminator()`` function depending on the error.
Raises:
PyiCloudFailedLoginException: Restarts if occurs once. Uses location by IP, if occurs once again.
"""
try:
# tries with icloud api to get your device's location for precise location services
if not device:
if not (device := device_selector()):
raise PyiCloudFailedLoginException
raw_location = device.location()
if not raw_location and sys._getframe(1).f_code.co_name == "locate": # noqa
return None, None, None
elif not raw_location:
raise PyiCloudAPIResponseException(reason=f"Unable to retrieve location for {device}")
else:
coordinates = raw_location["latitude"], raw_location["longitude"]
os.remove("pyicloud_error") if os.path.isfile("pyicloud_error") else None
except (PyiCloudAPIResponseException, PyiCloudFailedLoginException) as error:
if device:
logger.error(f"Unable to retrieve location::{error}")
caller = sys._getframe(1).f_code.co_name # noqa
if caller == "<module>":
if os.path.isfile("pyicloud_error"):
logger.error(f"Exception raised by {caller} once again. Proceeding...")
os.remove("pyicloud_error")
else:
logger.error(f"Exception raised by {caller}. Restarting.")
pathlib.Path("pyicloud_error").touch()
controls.restart_control(quiet=True)
coordinates = get_coordinates_from_ip()
except ConnectionError as error:
logger.error(error)
raise NoInternetError
if location_info := get_location_from_coordinates(coordinates=coordinates):
return *coordinates, location_info
else:
logger.error("Error retrieving address from latitude and longitude information. Initiating self reboot.")
speaker.speak(text=f"Received an error while retrieving your address {env.title}! "
"I think a restart should fix this.")
controls.restart_control(quiet=True)
def write_current_location() -> NoReturn:
"""Extracts location information from public IP address and writes it to a yaml file."""
if os.path.isfile(fileio.location):
try:
with open(fileio.location) as file:
data = yaml.load(stream=file, Loader=yaml.FullLoader) or {}
except yaml.YAMLError as error:
data = {}
logger.error(error)
address = data.get("address")
if address and data.get("reserved") and data.get("latitude") and data.get("longitude") and \
address.get("city", address.get("hamlet")) and address.get("country") and \
address.get("state", address.get("county")):
logger.info(f"{fileio.location} is reserved.")
logger.warning("Automatic location detection has been disabled!")
return
current_lat, current_lon = get_coordinates_from_ip()
location_info = get_location_from_coordinates(coordinates=(current_lat, current_lon))
current_tz = TimezoneFinder().timezone_at(lat=current_lat, lng=current_lon)
logger.info(f"Writing location info in {fileio.location}")
with open(fileio.location, 'w') as location_writer:
yaml.dump(data={"timezone": current_tz, "latitude": current_lat, "longitude": current_lon,
"address": location_info},
stream=location_writer, default_flow_style=False)
def location() -> NoReturn:
"""Gets the user's current location."""
try:
with open(fileio.location) as file:
current_location = yaml.load(stream=file, Loader=yaml.FullLoader)
except yaml.YAMLError as error:
logger.error(error)
speaker.speak(text=f"I'm sorry {env.title}! I wasn't able to get the location details. Please check the logs.")
return
speaker.speak(text=f"I'm at {current_location.get('address', {}).get('road', '')} - "
f"{current_location.get('address', {}).get('city', '')} "
f"{current_location.get('address', {}).get('state', '')} - "
f"in {current_location.get('address', {}).get('country', '')}")
def locate_device(target_device: AppleDevice) -> NoReturn:
"""Speaks the location information of the target device.
Args:
target_device: Takes the target device as an argument.
"""
try:
ignore_lat, ignore_lon, location_info_ = location_services(device=target_device)
except NoInternetError:
speaker.speak(text="I was unable to connect to the internet. Please check your connection settings and retry.",
run=True)
return
lookup = str(target_device).split(":")[0].strip()
if not location_info_:
speaker.speak(text=f"I wasn't able to locate your {lookup} {env.title}! It is probably offline.")
else:
if shared.called_by_offline:
post_code = location_info_["postcode"].split("-")[0]
else:
post_code = '"'.join(list(location_info_["postcode"].split("-")[0]))
iphone_location = f"Your {lookup} is near {location_info_['road']}, {location_info_['city']} " \
f"{location_info_['state']}. Zipcode: {post_code}, {location_info_['country']}"
stat = target_device.status()
bat_percent = f"Battery: {round(stat['batteryLevel'] * 100)} %, " if stat["batteryLevel"] else ""
device_model = stat["deviceDisplayName"]
phone_name = stat["name"]
speaker.speak(text=f"{iphone_location}. Some more details. {bat_percent} Name: {phone_name}, "
f"Model: {device_model}")
def locate(phrase: str) -> None:
"""Locates an Apple device using icloud api for python.
Args:
phrase: Takes the voice recognized statement as argument and extracts device name from it.
"""
if not (target_device := device_selector(phrase=phrase)):
support.no_env_vars()
return
if shared.called_by_offline:
locate_device(target_device=target_device)
return
sys.stdout.write(f"\rLocating your {target_device}")
target_device.play_sound()
before_keyword, keyword, after_keyword = str(target_device).partition(":") # partitions the hostname info
if before_keyword == "Accessory":
after_keyword = after_keyword.replace(f"{env.name}’s", "").replace(f"{env.name}'s", "").strip()
speaker.speak(text=f"I've located your {after_keyword} {env.title}!")
else:
speaker.speak(text=f"Your {before_keyword} should be ringing now {env.title}!")
speaker.speak(text="Would you like to get the location details?", run=True)
if not (phrase_location := listener.listen(timeout=3, phrase_limit=3)):
return
elif not any(word in phrase_location.lower() for word in keywords.ok):
return
locate_device(target_device=target_device)
if env.icloud_recovery:
speaker.speak(text="I can also enable lost mode. Would you like to do it?", run=True)
phrase_lost = listener.listen(timeout=3, phrase_limit=3)
if any(word in phrase_lost.lower() for word in keywords.ok):
target_device.lost_device(number=env.icloud_recovery, text="Return my phone immediately.")
speaker.speak(text="I've enabled lost mode on your phone.")
else:
speaker.speak(text=f"No action taken {env.title}!")
def distance(phrase) -> NoReturn:
"""Extracts the start and end location to get the distance for it.
Args:
phrase:Takes the phrase spoken as an argument.
"""
check = phrase.split() # str to list
places = []
for word in check:
if word[0].isupper() or "." in word: # looks for words that start with uppercase
try:
next_word = check[check.index(word) + 1] # looks if words after an uppercase word is also one
if next_word[0].isupper():
places.append(f"{word + ' ' + check[check.index(word) + 1]}")
else:
if word not in " ".join(places):
places.append(word)
except IndexError: # catches exception on lowercase word after an upper case word
if word not in " ".join(places):
places.append(word)
if len(places) >= 2:
start = places[0]
end = places[1]
elif len(places) == 1:
start = None
end = places[0]
else:
start, end = None, None
distance_controller(start, end)
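# A hedged example of the capitalisation-based parsing above:
#   distance("how far is New York from Boston")
# extracts ["New York", "Boston"] and calls distance_controller("New York", "Boston").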
def distance_controller(origin: str = None, destination: str = None) -> None:
"""Calculates distance between two locations.
Args:
origin: Takes the starting place name as an optional argument.
destination: Takes the destination place name as optional argument.
Notes:
- If ``origin`` is None, Jarvis takes the current location as ``origin``.
- If ``destination`` is None, Jarvis will ask for a destination from the user.
"""
if not destination:
speaker.speak(text="Destination please?")
if shared.called_by_offline:
return
speaker.speak(run=True)
if destination := listener.listen(timeout=3, phrase_limit=4):
if len(destination.split()) > 2:
speaker.speak(text=f"I asked for a destination {env.title}, not a sentence. Try again.")
distance_controller()
if "exit" in destination or "quit" in destination or "Xzibit" in destination:
return
if origin:
# if starting_point is received gets latitude and longitude of that location
desired_start = geo_locator.geocode(origin)
sys.stdout.write(f"\r{desired_start.address} **")
start = desired_start.latitude, desired_start.longitude
start_check = None
else:
try:
with open(fileio.location) as file:
current_location = yaml.load(stream=file, Loader=yaml.FullLoader)
except yaml.YAMLError as error:
logger.error(error)
speaker.speak(text=f"I neither received an origin location nor was able to get my location {env.title}!")
return
start = (current_location["latitude"], current_location["longitude"])
start_check = "My Location"
sys.stdout.write("::TO::") if origin else sys.stdout.write("\r::TO::")
desired_location = geo_locator.geocode(destination)
if desired_location:
end = desired_location.latitude, desired_location.longitude
else:
end = destination[0], destination[1]
if not all(isinstance(v, float) for v in start) or not all(isinstance(v, float) for v in end):
speaker.speak(text=f"I don't think {destination} exists {env.title}!")
return
miles = round(geodesic(start, end).miles) # calculates miles from starting point to destination
sys.stdout.write(f"** {desired_location.address} - {miles}")
if shared.called["directions"]:
# calculates drive time using t = d/s with an assumed average speed of 60 mph; only applies when the location is in the same country
shared.called["directions"] = False
avg_speed = 60
t_taken = miles / avg_speed
if miles < avg_speed:
drive_time = int(t_taken * 60)
speaker.speak(text=f"It might take you about {drive_time} minutes to get there {env.title}!")
else:
drive_time = math.ceil(t_taken)
if drive_time == 1:
speaker.speak(text=f"It might take you about {drive_time} hour to get there {env.title}!")
else:
speaker.speak(text=f"It might take you about {drive_time} hours to get there {env.title}!")
elif start_check:
text = f"{env.title}! You're {miles} miles away from {destination}. "
if not shared.called["locate_places"]:
text += f"You may also ask where is {destination}"
speaker.speak(text=text)
else:
speaker.speak(text=f"{origin} is {miles} miles away from {destination}.")
return
def locate_places(phrase: str = None) -> None:
"""Gets location details of a place.
Args:
phrase: Takes the phrase spoken as an argument.
"""
place = support.get_capitalized(phrase=phrase) if phrase else None
# if no words found starting with an upper case letter, fetches word after the keyword 'is' eg: where is Chicago
if not place:
keyword = "is"
before_keyword, keyword, after_keyword = phrase.partition(keyword)
place = after_keyword.replace(" in", "").strip()
if not place:
if shared.called_by_offline:
speaker.speak(text=f"I need a location to get you the details {env.title}!")
return
speaker.speak(text="Tell me the name of a place!", run=True)
if not (converted := listener.listen(timeout=3, phrase_limit=4)) or "exit" in converted or "quit" in converted \
or "Xzibit" in converted:
return
place = support.get_capitalized(phrase=converted)
if not place:
keyword = "is"
before_keyword, keyword, after_keyword = converted.partition(keyword)
place = after_keyword.replace(" in", "").strip()
try:
with open(fileio.location) as file:
current_location = yaml.load(stream=file, Loader=yaml.FullLoader)
except yaml.YAMLError as error:
logger.error(error)
current_location = {"address": {"country": "United States"}}
try:
destination_location = geo_locator.geocode(place)
coordinates = destination_location.latitude, destination_location.longitude
located = geo_locator.reverse(coordinates, language="en")
data = located.raw
address = data["address"]
county = address["county"] if "county" in address else None
city = address["city"] if "city" in address.keys() else None
state = address["state"] if "state" in address.keys() else None
country = address["country"] if "country" in address else None
if place in country:
speaker.speak(text=f"{place} is a country")
elif place in (city or county):
speaker.speak(
text=f"{place} is in {state}" if country == current_location["address"]["country"] else
f"{place} is in {state} in {country}")
elif place in state:
speaker.speak(text=f"{place} is a state in {country}")
elif (city or county) and state and country:
if country == current_location["address"]["country"]:
speaker.speak(text=f"{place} is in {city or county}, {state}")
else:
speaker.speak(text=f"{place} is in {city or county}, {state}, in {country}")
if shared.called_by_offline:
return
shared.called["locate_places"] = True
except (TypeError, AttributeError):
speaker.speak(text=f"{place} is not a real place on Earth {env.title}! Try again.")
if shared.called_by_offline:
return
locate_places(phrase=None)
distance_controller(origin=None, destination=place)
def directions(phrase: str = None, no_repeat: bool = False) -> None:
"""Opens Google Maps for a route between starting and destination.
Uses reverse geocoding to calculate latitude and longitude for both start and destination.
Args:
phrase: Takes the phrase spoken as an argument.
no_repeat: A placeholder flag switched during ``recursion`` so that, ``Jarvis`` doesn't repeat himself.
"""
place = support.get_capitalized(phrase=phrase)
place = place.replace("I ", "").strip() if place else None
if not place:
speaker.speak(text="You might want to give a location.", run=True)
if converted := listener.listen(timeout=3, phrase_limit=4):
place = support.get_capitalized(phrase=converted)
place = place.replace("I ", "").strip()
if not place:
if no_repeat:
return
speaker.speak(text=f"I can't take you to anywhere without a location {env.title}!")
directions(phrase=None, no_repeat=True)
if "exit" in place or "quit" in place or "Xzibit" in place:
return
destination_location = geo_locator.geocode(place)
if not destination_location:
return
try:
coordinates = destination_location.latitude, destination_location.longitude
except AttributeError:
return
located = geo_locator.reverse(coordinates, language="en")
address = located.raw["address"]
end_country = address["country"] if "country" in address else None
end = f"{located.latitude},{located.longitude}"
try:
with open(fileio.location) as file:
current_location = yaml.load(stream=file, Loader=yaml.FullLoader)
except yaml.YAMLError as error:
logger.error(error)
speaker.speak(text=f"I wasn't able to get your current location to calculate the distance {env.title}!")
return
start_country = current_location["address"]["country"]
start = current_location["latitude"], current_location["longitude"]
maps_url = f"https://www.google.com/maps/dir/{start}/{end}/"
webbrowser.open(maps_url)
speaker.speak(text=f"Directions on your screen {env.title}!")
if start_country and end_country:
if re.match(start_country, end_country, flags=re.IGNORECASE):
shared.called["directions"] = True
distance_controller(origin=None, destination=place)
else:
speaker.speak(text="You might need a flight to get there!")
| 44.236948 | 120 | 0.644303 | 2,749 | 22,030 | 5.073118 | 0.17279 | 0.030116 | 0.039008 | 0.030475 | 0.321239 | 0.255557 | 0.175678 | 0.114872 | 0.094579 | 0.063172 | 0 | 0.003086 | 0.249796 | 22,030 | 497 | 121 | 44.325956 | 0.840745 | 0.145801 | 0 | 0.281167 | 0 | 0.005305 | 0.195077 | 0.019918 | 0 | 0 | 0 | 0 | 0 | 1 | 0.03183 | false | 0.007958 | 0.079576 | 0 | 0.188329 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6d2975738e714400f26abe4b83b94afadae7e7d | 1,746 | py | Python | sdk/machinelearning/azure-mgmt-machinelearningcompute/azure/mgmt/machinelearningcompute/models/system_service.py | iscai-msft/azure-sdk-for-python | 83715b95c41e519d5be7f1180195e2fba136fc0f | [
"MIT"
] | 8 | 2021-01-13T23:44:08.000Z | 2021-03-17T10:13:36.000Z | sdk/machinelearning/azure-mgmt-machinelearningcompute/azure/mgmt/machinelearningcompute/models/system_service.py | iscai-msft/azure-sdk-for-python | 83715b95c41e519d5be7f1180195e2fba136fc0f | [
"MIT"
] | 226 | 2019-07-24T07:57:21.000Z | 2019-10-15T01:07:24.000Z | sdk/machinelearning/azure-mgmt-machinelearningcompute/azure/mgmt/machinelearningcompute/models/system_service.py | iscai-msft/azure-sdk-for-python | 83715b95c41e519d5be7f1180195e2fba136fc0f | [
"MIT"
] | 3 | 2016-05-03T20:49:46.000Z | 2017-10-05T21:05:27.000Z | # coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
from msrest.serialization import Model
class SystemService(Model):
"""Information about a system service deployed in the cluster.
Variables are only populated by the server, and will be ignored when
sending a request.
:param system_service_type: The system service type. Possible values
include: 'None', 'ScoringFrontEnd', 'BatchFrontEnd'
:type system_service_type: str or
~azure.mgmt.machinelearningcompute.models.SystemServiceType
:ivar public_ip_address: The public IP address of the system service
:vartype public_ip_address: str
    :ivar version: The version of the system service
:vartype version: str
"""
_validation = {
'system_service_type': {'required': True},
'public_ip_address': {'readonly': True},
'version': {'readonly': True},
}
_attribute_map = {
'system_service_type': {'key': 'systemServiceType', 'type': 'str'},
'public_ip_address': {'key': 'publicIpAddress', 'type': 'str'},
'version': {'key': 'version', 'type': 'str'},
}
def __init__(self, system_service_type):
super(SystemService, self).__init__()
self.system_service_type = system_service_type
self.public_ip_address = None
self.version = None
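

# Usage sketch (not part of the generated file): only the required
# ``system_service_type`` is set client-side; the read-only fields are
# populated by the service when the model is deserialized from a response.
#
#     svc = SystemService(system_service_type="ScoringFrontEnd")
#     assert svc.public_ip_address is None and svc.version is None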
| 36.375 | 76 | 0.63173 | 192 | 1,746 | 5.5625 | 0.505208 | 0.133895 | 0.127341 | 0.039326 | 0.093633 | 0 | 0 | 0 | 0 | 0 | 0 | 0.000707 | 0.190149 | 1,746 | 47 | 77 | 37.148936 | 0.754597 | 0.56701 | 0 | 0 | 0 | 0 | 0.257184 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.058824 | 0 | 0.294118 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6d52658de85b23a61ea3688d54052bd4719e515 | 10,597 | py | Python | Chapter09/wallet/wallet_widgets/send_widget.py | HowToBeCalculated/Hands-On-Blockchain-for-Python-Developers | f9634259dd3dc509f36a5ccf3a5182c0d2ec79c4 | [
"MIT"
] | 62 | 2019-03-18T04:41:41.000Z | 2022-03-31T05:03:13.000Z | Chapter09/wallet/wallet_widgets/send_widget.py | HowToBeCalculated/Hands-On-Blockchain-for-Python-Developers | f9634259dd3dc509f36a5ccf3a5182c0d2ec79c4 | [
"MIT"
] | 2 | 2020-06-14T21:56:03.000Z | 2022-01-07T05:32:01.000Z | Chapter09/wallet/wallet_widgets/send_widget.py | HowToBeCalculated/Hands-On-Blockchain-for-Python-Developers | f9634259dd3dc509f36a5ccf3a5182c0d2ec79c4 | [
"MIT"
] | 42 | 2019-02-22T03:10:36.000Z | 2022-02-20T04:47:04.000Z | from PySide2.QtWidgets import (QWidget,
QGridLayout,
QVBoxLayout,
QHBoxLayout,
QPushButton,
QLabel,
QInputDialog,
QLineEdit,
QToolTip,
QComboBox,
QApplication,
QSlider,
QSizePolicy)
from PySide2.QtCore import Slot, SIGNAL, QSize, Qt
from PySide2.QtGui import QPixmap, QMovie, QPalette, QColor
from os.path import isdir, exists
from os import mkdir
from tools.util import render_avatar
from blockchain import blockchain, SendTransaction
from wallet_threads.send_thread import SendThread
from wallet_threads.send_token_thread import SendTokenThread
class SendWidget(QWidget):
tokens_file = 'tokens.json'
def __init__(self, parent=None):
super(SendWidget, self).__init__(parent)
self.token_name = 'Ethereum'
self.setupSenderSection()
self.setupDestinationSection()
self.setupTokenSection()
self.setupProgressSection()
self.setupSendButtonSection()
self.setupFeeSection()
self.send_thread = SendThread()
self.send_thread.send_transaction.connect(self.sendTransactionFinished)
self.send_token_thread = SendTokenThread()
self.send_token_thread.send_token_transaction.connect(self.sendTransactionFinished)
layout = QGridLayout()
layout.addLayout(self.sender_layout, 0, 0)
layout.addLayout(self.destination_layout, 0, 1)
layout.addLayout(self.progress_layout, 1, 0, 1, 2, Qt.AlignCenter)
layout.addLayout(self.token_layout, 2, 0)
layout.addLayout(self.send_layout, 2, 1)
layout.addLayout(self.slider_layout, 3, 0)
self.setLayout(layout)
def setupSenderSection(self):
accounts = blockchain.get_accounts()
sender_label = QLabel("Sender")
sender_label.setSizePolicy(QSizePolicy.Maximum, QSizePolicy.Maximum)
self.balance_label = QLabel("Balance: ")
self.balance_label.setSizePolicy(QSizePolicy.Maximum, QSizePolicy.Maximum)
self.avatar = QLabel()
self.sender_combo_box = QComboBox()
self.sender_items = []
for account, balance in accounts:
self.sender_items.append(account)
self.sender_combo_box.addItems(self.sender_items)
self.sender_combo_box.setSizePolicy(QSizePolicy.Maximum, QSizePolicy.Maximum)
self.sender_combo_box.currentTextChanged.connect(self.filterSender)
first_account = self.sender_items[0]
self.filterSender(first_account)
self.setAvatar(first_account, self.avatar)
self.sender_layout = QVBoxLayout()
sender_wrapper_layout = QHBoxLayout()
sender_right_layout = QVBoxLayout()
sender_right_layout.addWidget(sender_label)
sender_right_layout.addWidget(self.sender_combo_box)
sender_right_layout.addWidget(self.balance_label)
sender_wrapper_layout.addWidget(self.avatar)
sender_wrapper_layout.addLayout(sender_right_layout)
sender_wrapper_layout.addStretch()
self.sender_layout.addLayout(sender_wrapper_layout)
self.sender_layout.addStretch()
def setupDestinationSection(self):
self.destination_layout = QVBoxLayout()
destination_label = QLabel("Destination")
destination_label.setSizePolicy(QSizePolicy.Maximum, QSizePolicy.Maximum)
self.destination_line_edit = QLineEdit()
        self.destination_line_edit.setFixedWidth(380)
self.destination_line_edit.setSizePolicy(QSizePolicy.Maximum, QSizePolicy.Maximum)
self.destination_layout.addWidget(destination_label)
self.destination_layout.addWidget(self.destination_line_edit)
self.destination_layout.addStretch()
def setupTokenSection(self):
token_label = QLabel("Token")
token_label.setSizePolicy(QSizePolicy.Maximum, QSizePolicy.Maximum)
token_combo_box = QComboBox()
tokens = blockchain.get_tokens()
first_token = 'Ethereum'
items = [first_token]
self.token_address = {'Ethereum': '0xcccccccccccccccccccccccccccccccccccccccc'}
self.token_informations = {}
for address, token_from_json in tokens.items():
token_information = blockchain.get_token_named_tuple(token_from_json, address)
self.token_informations[token_information.name] = token_information
self.token_address[token_information.name] = token_information.address
items.append(token_information.name)
self.amount_label = QLabel("Amount (in ethers)")
token_combo_box.addItems(items)
token_combo_box.setSizePolicy(QSizePolicy.Maximum, QSizePolicy.Maximum)
token_combo_box.currentTextChanged.connect(self.filterToken)
self.token_avatar = QLabel()
self.filterToken(first_token)
token_address = self.token_address[first_token]
self.setAvatar(token_address, self.token_avatar)
self.token_layout = QVBoxLayout()
token_wrapper_layout = QHBoxLayout()
token_right_layout = QVBoxLayout()
token_right_layout.addWidget(token_label)
token_right_layout.addWidget(token_combo_box)
token_wrapper_layout.addWidget(self.token_avatar)
token_wrapper_layout.addLayout(token_right_layout)
token_wrapper_layout.addStretch()
self.token_layout.addLayout(token_wrapper_layout)
def setupProgressSection(self):
self.progress_layout = QHBoxLayout()
progress_vertical_layout = QVBoxLayout()
progress_wrapper_layout = QHBoxLayout()
self.progress_label = QLabel()
movie = QMovie('icons/ajax-loader.gif')
self.progress_label.setMovie(movie)
movie.start()
self.progress_label.setSizePolicy(QSizePolicy.Maximum, QSizePolicy.Maximum)
self.progress_description_label = QLabel()
self.progress_description_label.setText("Transaction is being confirmed. Please wait!")
self.progress_description_label.setSizePolicy(QSizePolicy.Maximum, QSizePolicy.Maximum)
progress_wrapper_layout.addWidget(self.progress_label)
progress_wrapper_layout.addWidget(self.progress_description_label)
progress_vertical_layout.addLayout(progress_wrapper_layout, 1)
self.progress_layout.addLayout(progress_vertical_layout)
self.sendTransactionFinished()
def setupSendButtonSection(self):
self.send_layout = QVBoxLayout()
self.amount_line_edit = QLineEdit()
self.send_button = QPushButton("Send")
self.send_button.setSizePolicy(QSizePolicy.Maximum, QSizePolicy.Maximum)
self.send_button.clicked.connect(self.sendButtonClicked)
pal = self.send_button.palette()
pal.setColor(QPalette.Button, QColor(Qt.green))
self.send_button.setAutoFillBackground(True)
self.send_button.setPalette(pal)
self.send_button.update()
self.send_layout.addWidget(self.amount_label)
self.send_layout.addWidget(self.amount_line_edit)
self.send_layout.addWidget(self.send_button)
def setupFeeSection(self):
self.slider_layout = QVBoxLayout()
fee_label = QLabel("Fee")
self.fee_slider = QSlider(Qt.Horizontal)
self.fee_slider.setRange(1, 10)
self.fee_slider.setValue(3)
self.fee_slider.valueChanged.connect(self.feeSliderChanged)
self.gwei_label = QLabel()
self.feeSliderChanged(3)
self.slider_layout.addWidget(fee_label)
self.slider_layout.addWidget(self.fee_slider)
self.slider_layout.addWidget(self.gwei_label)
def filterToken(self, token_name):
address = self.token_address[token_name]
token_information = None
if token_name != 'Ethereum':
token_information = self.token_informations[token_name]
self.amount_label.setText("Amount")
else:
self.amount_label.setText("Amount (in ethers)")
self.updateBalanceLabel(token_name, self.sender_account, token_information)
self.setAvatar(address, self.token_avatar)
self.token_name = token_name
def filterSender(self, account_address):
self.sender_account = account_address
token_information = None
if self.token_name != 'Ethereum':
token_information = self.token_informations[self.token_name]
self.updateBalanceLabel(self.token_name, account_address, token_information)
self.setAvatar(account_address, self.avatar)
def updateBalanceLabel(self, token_name, account_address, token_information=None):
if token_name == 'Ethereum':
self.balance_label.setText("Balance: %.5f ethers" % blockchain.get_balance(account_address))
else:
self.balance_label.setText("Balance: %d coins" % blockchain.get_token_balance(account_address, token_information))
def setAvatar(self, code, avatar):
img_filename = render_avatar(code)
pixmap = QPixmap(img_filename)
avatar.setPixmap(pixmap)
def feeSliderChanged(self, value):
self.gwei_label.setText("%d GWei" % value)
self.fee = value
def sendButtonClicked(self):
password, ok = QInputDialog.getText(self, "Create A New Transaction",
"Password:", QLineEdit.Password)
if ok and password != '':
self.progress_label.setVisible(True)
self.progress_description_label.setVisible(True)
tx = SendTransaction(sender=self.sender_account,
password=password,
destination=self.destination_line_edit.text(),
amount=self.amount_line_edit.text(),
fee=self.fee)
token_information = None
if self.token_name != 'Ethereum':
token_information = self.token_informations[self.token_name]
self.send_token_thread.prepareTransaction(tx, token_information)
self.send_token_thread.start()
else:
self.send_thread.prepareTransaction(tx)
self.send_thread.start()
def sendTransactionFinished(self):
self.progress_label.setVisible(False)
self.progress_description_label.setVisible(False)
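

# Minimal manual-test harness (a sketch; assumes a reachable blockchain node
# and that the sibling wallet modules imported above are on the path):
#
#     import sys
#     app = QApplication(sys.argv)
#     widget = SendWidget()
#     widget.show()
#     sys.exit(app.exec_())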
| 42.388 | 126 | 0.678116 | 1,083 | 10,597 | 6.376731 | 0.158818 | 0.033884 | 0.033015 | 0.060817 | 0.267593 | 0.181436 | 0.131914 | 0.071532 | 0.028092 | 0.028092 | 0 | 0.0036 | 0.239879 | 10,597 | 249 | 127 | 42.558233 | 0.853755 | 0 | 0 | 0.048077 | 0 | 0 | 0.031235 | 0.005945 | 0 | 0 | 0.003963 | 0 | 0 | 1 | 0.067308 | false | 0.019231 | 0.043269 | 0 | 0.120192 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6d68744f1e709ba14365f28499cedb2613a508d | 1,097 | py | Python | CodeInterview/python/chapter2.py | espang/books | 821c92833968dca8b8a0456464f2e33211601abb | [
"MIT"
] | null | null | null | CodeInterview/python/chapter2.py | espang/books | 821c92833968dca8b8a0456464f2e33211601abb | [
"MIT"
] | null | null | null | CodeInterview/python/chapter2.py | espang/books | 821c92833968dca8b8a0456464f2e33211601abb | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Thu Feb 25 23:08:27 2016
@author: eikes
"""
class Node(object):
def __init__(self, value, next_node=None):
self.value = value
self.next_node = next_node
def has_next(self):
return self.next_node is not None
def __repr__(self):
vals = []
i = self
while i.has_next():
vals.append(i.value)
i = i.next_node
vals.append(i.value)
return '[ {0} ]'.format(', '.join(map(str, vals)))
def remove_dups(node):
    """Remove nodes with duplicate values from an unsorted linked list."""
    current, last = node, None
values = set()
while current is not None:
if current.value in values:
            # value already in linked list --> remove current
last.next_node = current.next_node
else:
values.add(current.value)
last = current
current = current.next_node
def k_to_end(node, k):
    """Print the k-th node from the end; returns the node's index from the end."""
    if not node.has_next():
if k == 1:
            print(node)
return 1
idx = k_to_end(node.next_node, k) + 1
if idx == k:
print(node)
return idx | 24.377778 | 61 | 0.547858 | 150 | 1,097 | 3.84 | 0.373333 | 0.125 | 0.041667 | 0.055556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.023384 | 0.337284 | 1,097 | 45 | 62 | 24.377778 | 0.768913 | 0.111212 | 0 | 0.121212 | 0 | 0 | 0.009307 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.151515 | false | 0 | 0 | 0.030303 | 0.30303 | 0.060606 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6d7926ffd9c46e7000b4eeb76ac9fcded9a18e3 | 2,878 | py | Python | dotup/__init__.py | audiolion/dotup | 8c7243d5606340288a7968a6ff3f64d331d5d0e0 | [
"MIT"
] | 4 | 2019-02-17T01:04:15.000Z | 2019-02-20T13:39:25.000Z | dotup/__init__.py | audiolion/dotup | 8c7243d5606340288a7968a6ff3f64d331d5d0e0 | [
"MIT"
] | 4 | 2019-02-17T00:57:32.000Z | 2019-02-17T22:32:11.000Z | dotup/__init__.py | audiolion/dotup | 8c7243d5606340288a7968a6ff3f64d331d5d0e0 | [
"MIT"
] | null | null | null | __version__ = '0.3.2'
import os
from pathlib import Path
import click
import crayons
def update_symlink(directory, filename, force=None):
force = False if force is None else force
home = str(Path.home())
try:
os.symlink(f'{home}/{directory}/{filename}', f'{home}/{filename}')
return True
except FileExistsError:
if force:
os.remove(f'{home}/{filename}')
os.symlink(f'{home}/{directory}/{filename}', f'{home}/{filename}')
return True
return False
def get_dotfiles(home, directory):
dotfile_dirlist = map(
lambda filename: f'{home}/{directory}/{filename}',
os.listdir(f'{home}/{directory}'),
)
dotfile_paths = filter(os.path.isfile, dotfile_dirlist)
dotfiles = map(lambda path: path.replace(f'{home}/{directory}/', ''), dotfile_paths)
return dotfiles
def check_dotfiles_directory_exists(home, directory):
return os.path.isdir(f'{home}/{directory}')
@click.command()
@click.option(
'--directory',
'-d',
default="dotfiles",
help="Dotfiles directory name. Must be located in home dir.",
)
@click.option('--force', is_flag=True, help="Overwrite existing symlinks.")
def dotup(directory, force):
home = str(Path.home())
exists = check_dotfiles_directory_exists(home, directory)
if not exists:
print(
f'\nError: no dotfile directory found at {crayons.yellow(f"{home}/{directory}")}\n'
)
print(
f'Use {crayons.cyan("dotup --directory")} to specify your dotfile directory name.'
)
return
print(f'\nSymlinking dotfiles found in {crayons.cyan(f"{home}/{directory}")}\n')
non_dotfiles = []
dotfiles = get_dotfiles(home, directory)
for filename in dotfiles:
if filename[0] != '.':
non_dotfiles.append(filename)
continue
success = update_symlink(directory, filename, force)
if success:
print(
f'Symlinked {crayons.red(filename)}@ -> {home}/{directory}/{filename}'
)
else:
prompt_remove = click.confirm(
f'\nFile already exists at {crayons.yellow(f"{home}/{filename}")}, overwrite it?'
)
if prompt_remove:
update_symlink(directory, filename, True)
print(
f'Symlinked {crayons.red(filename)}@ -> {home}/{directory}/{filename}'
)
else:
print(f'{crayons.magenta("Skipping")} {filename}')
for filename in non_dotfiles:
print(
f'\n{crayons.magenta("Skipped")} {crayons.yellow(f"{home}/{directory}/{filename}")}',
f'-- filename does not begin with \033[4m{crayons.cyan(".")}\033[0m',
)
if __name__ == "__main__":
dotup() # pragma: no cover
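
# Programmatic use (sketch): symlink a single dotfile without going through
# the CLI, assuming ~/dotfiles/.vimrc exists.
#
#     update_symlink('dotfiles', '.vimrc', force=True)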
| 29.979167 | 97 | 0.59451 | 324 | 2,878 | 5.179012 | 0.311728 | 0.11621 | 0.075089 | 0.052443 | 0.329559 | 0.18236 | 0.133492 | 0.133492 | 0.133492 | 0.133492 | 0 | 0.005652 | 0.262335 | 2,878 | 95 | 98 | 30.294737 | 0.784739 | 0.005559 | 0 | 0.192308 | 0 | 0.012821 | 0.32972 | 0.158741 | 0 | 0 | 0 | 0 | 0 | 1 | 0.051282 | false | 0 | 0.076923 | 0.012821 | 0.205128 | 0.089744 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6d9cb957ff30829049142edec6723b7649061c1 | 4,312 | py | Python | plugins/quetz_tos/quetz_tos/api.py | fcollonval/quetz | 6f604a29e13ef80d1b5e7ec48408841d0e4d482a | [
"BSD-3-Clause"
] | 108 | 2020-09-16T16:15:01.000Z | 2022-03-29T02:49:31.000Z | plugins/quetz_tos/quetz_tos/api.py | fcollonval/quetz | 6f604a29e13ef80d1b5e7ec48408841d0e4d482a | [
"BSD-3-Clause"
] | 317 | 2020-09-07T18:37:33.000Z | 2022-03-25T13:10:41.000Z | plugins/quetz_tos/quetz_tos/api.py | janjagusch/quetz | 4d88b4695166d310823a48e81e025983846afd05 | [
"BSD-3-Clause"
] | 36 | 2020-09-07T22:01:27.000Z | 2022-03-26T17:06:07.000Z | import os
import uuid
from tempfile import SpooledTemporaryFile
from fastapi import APIRouter, Depends, File, HTTPException, UploadFile, status
from sqlalchemy.orm.session import Session
from quetz import authorization, dao
from quetz.config import Config
from quetz.deps import get_dao, get_db, get_rules
from .db_models import TermsOfService, TermsOfServiceSignatures
router = APIRouter()
config = Config()
pkgstore = config.get_package_store()
def post_file(file):
    if type(file.file) is SpooledTemporaryFile and not hasattr(file.file, "seekable"):
file.file.seekable = file.file._file.seekable
file.file.seek(0, os.SEEK_END)
file.file.seek(0)
# channel_name is passed as "root" since we want to upload the file
# in a host-wide manner i.e. independent of individual channels.
# Azure and S3 necessarily require the creation of `containers` and `buckets`
# (mapped to individual channels) before we can upload a file there.
# Hence, the container / bucket will be `root`
pkgstore.add_file(file.file.read(), "root", file.filename)
return file.filename
@router.get("/api/tos", tags=['Terms of Service'])
def get_current_tos(db: Session = Depends(get_db)):
current_tos = (
db.query(TermsOfService).order_by(TermsOfService.time_created.desc()).first()
)
if current_tos:
f = pkgstore.serve_path("root", current_tos.filename)
data_bytes = f.read()
return {
"id": str(uuid.UUID(bytes=current_tos.id)),
"content": data_bytes.decode('utf-8'),
"uploader_id": str(uuid.UUID(bytes=current_tos.uploader_id)),
"filename": current_tos.filename,
"time_created": current_tos.time_created,
}
else:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail="terms of service file not found",
)
@router.post("/api/tos/sign", status_code=201, tags=['Terms of Service'])
def sign_current_tos(
tos_id: str = "",
db: Session = Depends(get_db),
dao: dao.Dao = Depends(get_dao),
auth: authorization.Rules = Depends(get_rules),
):
user_id = auth.assert_user()
user = dao.get_user(user_id)
if tos_id:
try:
tos_id_bytes = uuid.UUID(tos_id).bytes
except Exception:
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN,
detail=f"{tos_id} is not a valid hexadecimal string",
)
selected_tos = (
db.query(TermsOfService)
.filter(TermsOfService.id == tos_id_bytes)
.one_or_none()
)
if not selected_tos:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail=f"terms of service with id {tos_id} not found",
)
else:
selected_tos = (
db.query(TermsOfService)
.order_by(TermsOfService.time_created.desc())
.first()
)
if selected_tos:
signature = (
db.query(TermsOfServiceSignatures)
.filter(TermsOfServiceSignatures.user_id == user_id)
.filter(TermsOfServiceSignatures.tos_id == selected_tos.id)
.one_or_none()
)
if signature:
return (
f"TOS already signed for {user.username}"
f" at {signature.time_created}."
)
else:
signature = TermsOfServiceSignatures(
user_id=user_id, tos_id=selected_tos.id
)
db.add(signature)
db.commit()
return f"TOS signed for {user.username}"
else:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail="terms of service file not found",
)
@router.post("/api/tos/upload", status_code=201, tags=['Terms of Service'])
def upload_tos(
db: Session = Depends(get_db),
auth: authorization.Rules = Depends(get_rules),
tos_file: UploadFile = File(...),
):
user_id = auth.assert_server_roles(
["owner"], "To upload new Terms of Services you need to be a server owner."
)
filename = post_file(tos_file)
tos = TermsOfService(uploader_id=user_id, filename=filename)
db.add(tos)
db.commit()
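

# Client-side sketch (assumptions: a quetz server on localhost:8000 and a
# valid API key sent in the X-API-Key header, as the quetz client does):
#
#     import requests
#     current = requests.get("http://localhost:8000/api/tos").json()
#     requests.post("http://localhost:8000/api/tos/sign",
#                   headers={"X-API-Key": "<api-key>"})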
| 32.666667 | 85 | 0.629174 | 529 | 4,312 | 4.94896 | 0.283554 | 0.022918 | 0.032086 | 0.042781 | 0.347212 | 0.25783 | 0.196715 | 0.175325 | 0.149351 | 0.149351 | 0 | 0.006986 | 0.269712 | 4,312 | 131 | 86 | 32.916031 | 0.824389 | 0.073284 | 0 | 0.247706 | 0 | 0 | 0.114286 | 0.006266 | 0 | 0 | 0 | 0 | 0.018349 | 1 | 0.036697 | false | 0 | 0.082569 | 0 | 0.155963 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6da277871e2e14767fe369496d352e64e854912 | 1,546 | py | Python | tests/unit/dataactvalidator/test_a26_appropriations.py | chambers-brian/SIG_Digital-Strategy_SI_ODP_Backend | 3de8cedf69d5a0c9fad8239734bd6291cf583936 | [
"CC0-1.0"
] | null | null | null | tests/unit/dataactvalidator/test_a26_appropriations.py | chambers-brian/SIG_Digital-Strategy_SI_ODP_Backend | 3de8cedf69d5a0c9fad8239734bd6291cf583936 | [
"CC0-1.0"
] | null | null | null | tests/unit/dataactvalidator/test_a26_appropriations.py | chambers-brian/SIG_Digital-Strategy_SI_ODP_Backend | 3de8cedf69d5a0c9fad8239734bd6291cf583936 | [
"CC0-1.0"
] | null | null | null | from tests.unit.dataactcore.factories.staging import AppropriationFactory
from tests.unit.dataactcore.factories.domain import SF133Factory
from tests.unit.dataactvalidator.utils import number_of_errors, query_columns
_FILE = 'a26_appropriations'
_TAS = 'a26_appropriations_tas'
def test_column_headers(database):
expected_subset = {'row_number', 'contract_authority_amount_cpe',
'lines', 'amounts'}
actual = set(query_columns(_FILE, database))
assert (actual & expected_subset) == expected_subset
def test_success(database):
""" Tests that ContractAuthorityAmountTotal_CPE is provided if TAS has contract authority value
provided in GTAS """
tas = "".join([_TAS, "_success"])
sf1 = SF133Factory(tas=tas, period=1, fiscal_year=2016, line=1540, amount=1)
sf2 = SF133Factory(tas=tas, period=1, fiscal_year=2016, line=1640, amount=1)
ap = AppropriationFactory(tas=tas, contract_authority_amount_cpe=1)
assert number_of_errors(_FILE, database, models=[sf1, sf2, ap]) == 0
def test_failure(database):
""" Tests that ContractAuthorityAmountTotal_CPE is not provided if TAS has contract authority value
provided in GTAS """
tas = "".join([_TAS, "_failure"])
sf1 = SF133Factory(tas=tas, period=1, fiscal_year=2016, line=1540, amount=1)
sf2 = SF133Factory(tas=tas, period=1, fiscal_year=2016, line=1640, amount=1)
ap = AppropriationFactory(tas=tas, contract_authority_amount_cpe=0)
assert number_of_errors(_FILE, database, models=[sf1, sf2, ap]) == 1
| 37.707317 | 103 | 0.73674 | 200 | 1,546 | 5.48 | 0.32 | 0.032847 | 0.065693 | 0.087591 | 0.655109 | 0.594891 | 0.50365 | 0.50365 | 0.50365 | 0.50365 | 0 | 0.054323 | 0.154593 | 1,546 | 40 | 104 | 38.65 | 0.784239 | 0.143596 | 0 | 0.181818 | 0 | 0 | 0.082181 | 0.039171 | 0 | 0 | 0 | 0 | 0.136364 | 1 | 0.136364 | false | 0 | 0.136364 | 0 | 0.272727 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6dba61497cd544b0b3b024aa19484ec4d75797d | 5,334 | py | Python | mysql.py | alphagov/fabric-scripts | c162a43acdef9dec41acd7b2127f2cdef78be347 | [
"MIT"
] | 46 | 2015-03-21T00:45:27.000Z | 2021-11-16T04:33:29.000Z | mysql.py | alphagov/fabric-scripts | c162a43acdef9dec41acd7b2127f2cdef78be347 | [
"MIT"
] | 123 | 2015-03-02T12:10:31.000Z | 2021-11-16T10:29:27.000Z | mysql.py | alphagov/fabric-scripts | c162a43acdef9dec41acd7b2127f2cdef78be347 | [
"MIT"
] | 19 | 2015-02-09T11:06:10.000Z | 2021-04-22T16:52:28.000Z | from fabric.api import abort, env, hide, run, settings, task
from fabric.operations import prompt
def run_mysql_command(cmd):
run('sudo -i mysql -e "{}"'.format(cmd))
def switch_slow_query_log(value):
run_mysql_command('SET GLOBAL slow_query_log = "{}"'.format(value))
@task
def stop_slow_query_log(*args):
switch_slow_query_log('OFF')
@task
def start_slow_query_log(*args):
switch_slow_query_log('ON')
@task
def fix_replication_from_slow_query_log_after_upgrade():
"""
Used to fix issues seen when upgrading mysql
If you see the error
'Error 'You cannot 'ALTER' a log table if logging is enabled' on query.
when running show slave status, after a mysql upgrade, it is resolved by
running this task
"""
run_mysql_command("STOP SLAVE;")
run_mysql_command("SET GLOBAL slow_query_log = 'OFF';")
run_mysql_command("START SLAVE;")
run_mysql_command("SET GLOBAL slow_query_log = 'ON';")
run_mysql_command("show slave status\G;")
@task
def setup_slave_from_master(master):
"""
Sets up a slave from a master by:
- configuring MySQL replication config
- using the replicate_slave_from_master task to do an initial dump to the slave
Usage: fab environment -H mysql-slave-1.backend mysql.setup_slave_from_master:'mysql-master-1.backend'
"""
if len(env.hosts) > 1:
        exit('This job is currently only set up to run against one slave at a time')
mysql_master = prompt("Master host (eg 'master.mysql' or 'whitehall-master.mysql'):")
replication_username = 'replica_user'
replication_password = prompt("Password for MySQL user {0}:".format(replication_username))
run_mysql_command("STOP SLAVE;")
run_mysql_command("CHANGE MASTER TO MASTER_HOST='{0}', MASTER_USER='{1}', MASTER_PASSWORD='{2}';".format(
mysql_master, replication_username, replication_password))
replicate_slave_from_master(master)
@task
def replicate_slave_from_master(master):
"""
Updates a slave from a master by taking a dump from the master,
copying it to the slave and then restoring the dump.
Usage: fab environment -H mysql-slave-1.backend mysql.replicate_slave_from_master:'mysql-master-1.backend'
"""
if len(env.hosts) > 1:
        exit('This job is currently only set up to run against one slave at a time')
with settings(host_string=master):
# `--single-transaction` in conjunction with `--master-data` avoids
# locking tables for any significant length of time. See
# https://web.archive.org/web/20160308163516/https://dev.mysql.com/doc/refman/5.5/en/mysqldump.html#option_mysqldump_single-transaction
run('sudo -i mysqldump -u root --all-databases --master-data --single-transaction --quick --add-drop-database > dump.sql')
with settings(host_string=master, forward_agent=True):
run('scp dump.sql {0}:~'.format(env.hosts[0]))
with settings(host_string=master):
run('rm dump.sql')
run_mysql_command("STOP SLAVE")
run_mysql_command("SET GLOBAL slow_query_log=OFF")
with hide('running', 'stdout'):
database_file_size = run("stat --format='%s' dump.sql")
print('Importing MySQL database which is {0}GB, this might take a while...'.format(round(int(database_file_size) / (1024 * 1024 * 1024 * 1.0), 1)))
run('sudo -i mysql -uroot < dump.sql')
run('rm dump.sql')
run_mysql_command("START SLAVE")
run_mysql_command("SET GLOBAL slow_query_log=ON")
slave_status()
@task
def reset_slave():
"""
Used to reset a slave if MySQL replication is failing
If you see that the slave is 'NULL' seconds behind the master,
the problem may be resolved by running this task.
See docs on 'RESET SLAVE':
https://dev.mysql.com/doc/refman/5.5/en/reset-slave.html
"""
# Confirm slave status in case we need to refer to the values later
slave_status()
run_mysql_command("STOP SLAVE;")
with hide('everything'):
# Store last known log file and position
master_log_file = run("sudo -i mysql -e 'SHOW SLAVE STATUS\G' | grep '^\s*Relay_Master_Log_File:' | awk '{ print $2 }'")
master_log_pos = run("sudo -i mysql -e 'SHOW SLAVE STATUS\G' | grep '^\s*Exec_Master_Log_Pos:' | awk '{ print $2 }'")
if not master_log_file or not master_log_pos:
abort("Failed to determine replication log file and position, aborting.")
# Forget log file and position
run_mysql_command("RESET SLAVE;")
# Repoint log file and position to last known values
run_mysql_command("CHANGE MASTER TO MASTER_LOG_FILE='{}', MASTER_LOG_POS={};"
.format(master_log_file, master_log_pos))
run_mysql_command("START SLAVE;")
with hide('everything'):
seconds_behind_master = run("sudo -i mysql -e 'SHOW SLAVE STATUS\G' | grep '^\s*Seconds_Behind_Master:' | awk '{ print $2 }'")
# Compare as a string to ensure we got a non-nil value from MySQL
if seconds_behind_master != '0':
abort("Slave is still behind master by {} seconds; run mysql.slave_status to check status"
.format(seconds_behind_master))
@task
def slave_status():
"""
Show status of MySQL replication on slave; must be run against the slave host
"""
run_mysql_command("SHOW SLAVE STATUS\G;")
| 35.324503 | 151 | 0.691226 | 790 | 5,334 | 4.487342 | 0.264557 | 0.042877 | 0.076164 | 0.018336 | 0.376869 | 0.304372 | 0.281241 | 0.241467 | 0.201128 | 0.160508 | 0 | 0.011671 | 0.19685 | 5,334 | 150 | 152 | 35.56 | 0.815826 | 0.290776 | 0 | 0.338028 | 0 | 0.056338 | 0.390184 | 0.041404 | 0 | 0 | 0 | 0 | 0 | 1 | 0.126761 | false | 0.042254 | 0.042254 | 0 | 0.169014 | 0.056338 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6dc1338864b8a9e2eb892ae7bb5fddbec7266c6 | 1,158 | py | Python | Experiment/stream.py | zainbaq/pyOpenBCI | 524d0263502ba5c3360be7a0fb4f1022dfe3108f | [
"MIT"
] | null | null | null | Experiment/stream.py | zainbaq/pyOpenBCI | 524d0263502ba5c3360be7a0fb4f1022dfe3108f | [
"MIT"
] | null | null | null | Experiment/stream.py | zainbaq/pyOpenBCI | 524d0263502ba5c3360be7a0fb4f1022dfe3108f | [
"MIT"
] | null | null | null | from pyOpenBCI import OpenBCIGanglion
from pylsl import StreamInfo, StreamOutlet
import numpy as np
import argparse
import json
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument('--config_path', type=str, default='config/board_config.json')
return parser.parse_args()
SCALE_FACTOR_EEG = (4500000) / 24 / (2 ** 23 - 1)  # uV/count
args = parse_args()
with open(args.config_path) as f:
BOARD_CONFIG = json.load(f)
print("Creating LSL stream for EEG. \nName: OpenBCIEEG\nID: OpenBCItestEEG\n")
info_eeg = StreamInfo('OpenBCIEEG', 'EEG', 4, 250, 'float32', 'OpenBCItestEEG')
outlet_eeg = StreamOutlet(info_eeg)
info = StreamInfo('MarkerStream', 'Markers', 4, 0, 'string', 'OpenBCItestMarkers')
# next make an outlet
outlet = StreamOutlet(info)
markernames = ['Marker']
def lsl_streamers(sample):
    # Scale the raw ADC counts to microvolts and push onto the EEG outlet.
    print(sample.channels_data)
    outlet_eeg.push_sample(np.array(sample.channels_data) * SCALE_FACTOR_EEG)
    # outlet.push_sample(markernames[0])
    print(np.array(sample.channels_data) * SCALE_FACTOR_EEG)
board = OpenBCIGanglion(mac=BOARD_CONFIG['mac_address'])
board.start_stream(lsl_streamers)
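
# Consuming the streams elsewhere (sketch, standard pylsl API):
#
#     from pylsl import StreamInlet, resolve_stream
#     inlet = StreamInlet(resolve_stream('type', 'EEG')[0])
#     sample, timestamp = inlet.pull_sample()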
| 27.571429 | 86 | 0.753022 | 156 | 1,158 | 5.410256 | 0.487179 | 0.031991 | 0.049763 | 0.049763 | 0.092417 | 0.092417 | 0.092417 | 0.092417 | 0 | 0 | 0 | 0.021632 | 0.121762 | 1,158 | 41 | 87 | 28.243902 | 0.80826 | 0.070812 | 0 | 0 | 0 | 0 | 0.186916 | 0.02243 | 0 | 0 | 0 | 0 | 0 | 1 | 0.08 | false | 0 | 0.2 | 0 | 0.32 | 0.12 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6e3c38a31c4479f6488038ae2839813f652c121 | 17,672 | py | Python | source/methods_upload_user_stats.py | CheyenneNS/metrics | cfeeac6d01d99679897a998b193d630ada169c61 | [
"MIT"
] | null | null | null | source/methods_upload_user_stats.py | CheyenneNS/metrics | cfeeac6d01d99679897a998b193d630ada169c61 | [
"MIT"
] | null | null | null | source/methods_upload_user_stats.py | CheyenneNS/metrics | cfeeac6d01d99679897a998b193d630ada169c61 | [
"MIT"
] | null | null | null | from pymongo import MongoClient
from pymongo import ReadPreference
import json as _json
import os
import mysql.connector as mysql
import requests
requests.packages.urllib3.disable_warnings()
# NOTE get_user_info_from_auth2 sets up the initial dict.
#The following functions update certain fields in the dict.
# So get_user_info_from_auth2 must be called before get_internal_users and get_user_orgs_count
metrics_mysql_password = os.environ['METRICS_MYSQL_PWD']
mongoDB_metrics_connection = os.environ['MONGO_PATH']
profile_url = os.environ['PROFILE_URL']
kb_internal_user_url = os.environ['KB_INTERNAL_USER_URL']
sql_host = os.environ['SQL_HOST']
query_on = os.environ['QUERY_ON']
to_auth2 = os.environ['AUTH2_SUFFIX']
to_groups = os.environ['GRP_SUFFIX']
to_workspace = os.environ['WRK_SUFFIX']

# The error path in get_institution_and_country references _CT, _AJ and
# ServerError, which this module never defines or imports. Minimal stand-ins
# (an assumption about their intent, following the KBase client convention)
# are added here so the code runs.
_CT = "content-type"
_AJ = "application/json"


class ServerError(Exception):
    def __init__(self, name, code, message):
        super().__init__("{} ({}): {}".format(name, code, message))
        self.name = name
        self.code = code
        self.message = message
def get_user_info_from_auth2():
""" get auth2 info and kbase_internal_users. Creates initial dict for the data. """
client_auth2 = MongoClient(mongoDB_metrics_connection+to_auth2)
db_auth2 = client_auth2.auth2
user_stats_dict = {} #dict that will have userid as the key,
#value is a dict with name, signup_date, last_signin_date,
#and email (that gets values from this function)
#orcid may be present and populated by this function.
#later called functions will populate kbase_internal_user, num_orgs and ...
user_info_query = db_auth2.users.find({},{"_id":0,"user":1,"email":1,"display":1,"create":1,"login":1})
for record in user_info_query:
if record["user"] =="***ROOT***":
continue
user_stats_dict[record["user"]]={"name":record["display"],
"signup_date":record["create"],
"last_signin_date":record["login"],
"email":record["email"],
"kbase_internal_user":False,
"institution":None,
"country":None,
"orcid":None,
"num_orgs":0,
"narrative_count":0,
"shared_count":0,
"narratives_shared" : 0
}
#Get all users with an ORCID authentication set up.
users_orcid_query = db_auth2.users.find({"idents.prov": "OrcID"},
{"user":1,"idents.prov":1,"idents.prov_id":1,"_id":0})
for record in users_orcid_query:
for ident in record["idents"]:
if ident["prov"] == "OrcID":
#just use the first orcid seen.
user_stats_dict[record["user"]]["orcid"] = ident["prov_id"]
continue
client_auth2.close()
return user_stats_dict
def get_internal_users(user_stats_dict):
"""
Gets the internal users from the kb_internal_staff google sheet that Roy maintains.
"""
params = (
('tqx', 'out:csv'),
('sheet', 'KBaseStaffAssociatedUsernamesPastPresent'),
)
response = requests.get(kb_internal_user_url, params=params)
if (response.status_code != 200):
print("ERROR - KB INTERNAL USER GOOGLE SHEET RESPONSE STATUS CODE : " + str(response.status_code))
print("KB INTERNAL USER will not get updated until this is fixed. Rest of the uuser upload should work.")
return user_stats_dict
lines = response.text.split("\n")
if len(lines) < 390:
print("SOMETHING IS WRONG WITH KBASE INTERNAL USERS LIST: " + str(response.status_code))
users_not_found_count = 0
for line in lines:
elements = line.split(",")
user = elements[0][1:-1]
if user in user_stats_dict:
user_stats_dict[user]["kbase_internal_user"] = True
else:
users_not_found_count += 1
if users_not_found_count > 0:
print("NUMBER OF USERS FOUND IN KB_INTERNAL GOOGLE SHEET THAT WERE NOT FOUND IN THE AUTH2 RECORDS : " + str(users_not_found_count))
return user_stats_dict
def get_user_orgs_count(user_stats_dict):
""" Gets the count of the orgs that users belong to and populates the onging data structure"""
client_orgs = MongoClient(mongoDB_metrics_connection+to_groups)
db_orgs = client_orgs.groups
orgs_query = db_orgs.groups.find({},{"name":1,"memb.user":1,"_id":0})
for record in orgs_query:
for memb in record["memb"]:
if memb["user"] in user_stats_dict:
user_stats_dict[memb["user"]]["num_orgs"] += 1
client_orgs.close()
return user_stats_dict
def get_user_narrative_stats(user_stats_dict):
"""
gets narrative summary stats (number of naratives,
number of shares, number of narratives shared for each user
"""
client_workspace = MongoClient(mongoDB_metrics_connection+to_workspace)
db_workspace = client_workspace.workspace
ws_user_dict = {}
#Get all the legitimate narratives and and their respective user (not del, saved(not_temp))
all_nar_cursor = db_workspace.workspaces.find({"del" : False,
"meta" : {"k" : "is_temporary", "v" : "false"} },
{"owner":1,"ws":1,"name":1,"_id":0})
for record in all_nar_cursor:
# TO REMOVE OLD WORKSPACE METHOD OF 1 WS for all narratives.
if "name" in record and record["name"] == record["owner"] + ":home" :
continue
#narrative to user mapping
ws_user_dict[record["ws"]] = record["owner"]
#increment user narrative count
user_stats_dict[record["owner"]]["narrative_count"] += 1
#Get all the narratives that have been shared and how many times they have been shared.
aggregation_string=[{
"$match" : {"perm" : { "$in": [ 10,20,30 ]}}
},{
"$group" : {"_id" : "$id", "shared_count" : { "$sum" : 1 }}
}]
all_shared_perms_cursor=db_workspace.workspaceACLs.aggregate(aggregation_string)
for record in db_workspace.workspaceACLs.aggregate(aggregation_string):
if record["_id"] in ws_user_dict:
user_stats_dict[ws_user_dict[record["_id"]]]["shared_count"] += record["shared_count"]
user_stats_dict[ws_user_dict[record["_id"]]]["narratives_shared"] += 1
return user_stats_dict
def get_institution_and_country(user_stats_dict):
"""
Gets the institution and country information for the user from the profile information
"""
url = profile_url
headers = dict()
arg_hash = {'method': "UserProfile.get_user_profile",
'params': [list(user_stats_dict.keys())],
'version': '1.1',
'id': 123
}
body = _json.dumps(arg_hash)
timeout = 1800
trust_all_ssl_certificates = 1
ret = requests.post(url, data=body, headers=headers,
timeout=timeout,
verify=not trust_all_ssl_certificates)
ret.encoding = 'utf-8'
if ret.status_code == 500:
if ret.headers.get(_CT) == _AJ:
err = ret.json()
if 'error' in err:
raise Exception(err)
else:
raise ServerError('Unknown', 0, ret.text)
else:
raise ServerError('Unknown', 0, ret.text)
if not ret.ok:
ret.raise_for_status()
resp = ret.json()
if 'result' not in resp:
raise ServerError('Unknown', 0, 'An unknown server error occurred')
print(str(len(resp['result'][0])))
replaceDict = { '-':' ', ')':' ', '.': ' ', '(':'', '/':'', ',':'', ' +': ' ' }
counter = 0
    for obj in resp['result'][0]:
if obj is None:
continue
        counter += 1
if obj['user']['username'] in user_stats_dict:
user_stats_dict[obj['user']['username']]["country"] = obj['profile']['userdata'].get('country')
institution = obj['profile']['userdata'].get('organization')
if institution == None:
if 'affiliations'in obj['profile']['userdata']:
affiliations = obj['profile']['userdata']['affiliations']
try:
institution = affiliations[0]['organization']
except IndexError:
try:
institution = obj['profile']['userdata']['organization']
except:
pass
if institution:
for key, replacement in replaceDict.items():
#institution = institution.str.replace(key, replacement)
institution = institution.replace(key, replacement)
institution = institution.rstrip()
user_stats_dict[obj['user']['username']]["institution"] = institution
return user_stats_dict
def upload_user_data(user_stats_dict):
"""
Takes the User Stats dict that is populated by the other functions and
then populates the user_info and user_system_summary_stats tables
in the metrics MySQL DB.
"""
total_users = len(user_stats_dict.keys())
    rows_info_inserted = 0
    rows_info_updated = 0
    rows_stats_inserted = 0
#connect to mysql
db_connection = mysql.connect(
host = sql_host,
user = "metrics",
passwd = metrics_mysql_password,
database = "metrics"
)
cursor = db_connection.cursor()
query = "use "+query_on
cursor.execute(query)
#get all existing users
existing_user_info = dict()
query = "select username, display_name, email, orcid, kb_internal_user, institution, " \
"country, signup_date, last_signin_date from user_info"
cursor.execute(query)
for (username, display_name, email, orcid, kb_internal_user, institution,
country, signup_date, last_signin_date) in cursor:
existing_user_info[username]={"name":display_name,
"email":email,
"orcid":orcid,
"kb_internal_user":kb_internal_user,
"institution":institution,
"country":country,
"signup_date":signup_date,
"last_signin_date":last_signin_date}
print("Number of existing users:" + str(len(existing_user_info)))
prep_cursor = db_connection.cursor(prepared=True)
user_info_insert_statement = "insert into user_info " \
"(username,display_name,email,orcid,kb_internal_user, " \
"institution,country,signup_date,last_signin_date) " \
"values(%s,%s,%s,%s,%s, " \
"%s,%s,%s,%s);"
update_prep_cursor = db_connection.cursor(prepared=True)
user_info_update_statement = "update user_info " \
"set display_name = %s, email = %s, " \
"orcid = %s, kb_internal_user = %s, " \
"institution = %s, country = %s, " \
"signup_date = %s, last_signin_date = %s " \
"where username = %s;"
new_user_info_count = 0
users_info_updated_count = 0
for username in user_stats_dict:
#check if new user_info exists in the existing user info, if not insert the record.
if username not in existing_user_info:
input = (username,user_stats_dict[username]["name"],
user_stats_dict[username]["email"],user_stats_dict[username]["orcid"],
user_stats_dict[username]["kbase_internal_user"],
user_stats_dict[username]["institution"],user_stats_dict[username]["country"],
user_stats_dict[username]["signup_date"],user_stats_dict[username]["last_signin_date"])
prep_cursor.execute(user_info_insert_statement,input)
            new_user_info_count += 1
else:
#Check if anything has changed in the user_info, if so update the record
if not ((user_stats_dict[username]["last_signin_date"] is None or
user_stats_dict[username]["last_signin_date"].strftime("%Y-%m-%d %H:%M:%S") ==
str(existing_user_info[username]["last_signin_date"])) and
(user_stats_dict[username]["signup_date"].strftime("%Y-%m-%d %H:%M:%S") ==
str(existing_user_info[username]["signup_date"])) and
user_stats_dict[username]["country"] == existing_user_info[username]["country"] and
user_stats_dict[username]["institution"] ==
existing_user_info[username]["institution"] and
user_stats_dict[username]["kbase_internal_user"] ==
existing_user_info[username]["kb_internal_user"] and
user_stats_dict[username]["orcid"] == existing_user_info[username]["orcid"] and
user_stats_dict[username]["email"] == existing_user_info[username]["email"] and
user_stats_dict[username]["name"] == existing_user_info[username]["name"]):
input = (user_stats_dict[username]["name"],user_stats_dict[username]["email"],
user_stats_dict[username]["orcid"],
user_stats_dict[username]["kbase_internal_user"],
user_stats_dict[username]["institution"],user_stats_dict[username]["country"],
user_stats_dict[username]["signup_date"],
user_stats_dict[username]["last_signin_date"],username)
update_prep_cursor.execute(user_info_update_statement,input)
users_info_updated_count += 1
db_connection.commit()
print("Number of new users info inserted:" + str(new_user_info_count))
print("Number of users updated:" + str(users_info_updated_count))
#NOW DO USER SUMMARY STATS
user_summary_stats_insert_statement = "insert into user_system_summary_stats " \
"(username,num_orgs, narrative_count, " \
"shared_count, narratives_shared) " \
"values(%s,%s,%s,%s,%s);"
existing_user_summary_stats = dict()
query = "select username, num_orgs, narrative_count, shared_count, narratives_shared " \
"from user_system_summary_stats_current"
cursor.execute(query)
for (username, num_orgs, narrative_count, shared_count, narratives_shared) in cursor:
existing_user_summary_stats[username]={"num_orgs":num_orgs,
"narrative_count":narrative_count,
"shared_count":shared_count,
"narratives_shared":narratives_shared}
print("Number of existing user summaries:" + str(len(existing_user_summary_stats)))
    new_user_summary_count = 0
    existing_user_summary_count = 0
for username in user_stats_dict:
if username not in existing_user_summary_stats:
#if user does not exist insert
input = (username,user_stats_dict[username]["num_orgs"],
user_stats_dict[username]["narrative_count"],user_stats_dict[username]["shared_count"],
user_stats_dict[username]["narratives_shared"])
prep_cursor.execute(user_summary_stats_insert_statement,input)
            new_user_summary_count += 1
else:
#else see if the new data differs from the most recent snapshot. If it does differ, do an insert
if not (user_stats_dict[username]["num_orgs"] ==
existing_user_summary_stats[username]["num_orgs"] and
user_stats_dict[username]["narrative_count"] ==
existing_user_summary_stats[username]["narrative_count"] and
user_stats_dict[username]["shared_count"] ==
existing_user_summary_stats[username]["shared_count"] and
user_stats_dict[username]["narratives_shared"] ==
existing_user_summary_stats[username]["narratives_shared"]):
input = (username,user_stats_dict[username]["num_orgs"],
user_stats_dict[username]["narrative_count"],user_stats_dict[username]["shared_count"],
user_stats_dict[username]["narratives_shared"])
prep_cursor.execute(user_summary_stats_insert_statement,input)
                existing_user_summary_count += 1
db_connection.commit()
# THIS CODE is to update any of the 434 excluded users that had accounts made for them
# but never logged in. In case any of them ever do log in, they will be removed from
# the excluded list
query = "UPDATE metrics.user_info set exclude = False where last_signin_date is not NULL"
cursor.execute(query)
db_connection.commit()
print("Number of new users summary inserted:" + str(new_user_summary_count))
print("Number of existing users summary inserted:" + str(existing_user_summary_count))
return 1
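

# Hypothetical driver (not in the original module) illustrating the call
# order required by the NOTE at the top: get_user_info_from_auth2 builds the
# dict, and every later function enriches it before the final upload.
if __name__ == "__main__":
    stats = get_user_info_from_auth2()
    stats = get_internal_users(stats)
    stats = get_user_orgs_count(stats)
    stats = get_user_narrative_stats(stats)
    stats = get_institution_and_country(stats)
    upload_user_data(stats)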
| 48.416438 | 139 | 0.589577 | 2,017 | 17,672 | 4.885969 | 0.164601 | 0.061187 | 0.087062 | 0.078843 | 0.388128 | 0.275799 | 0.216844 | 0.178995 | 0.150888 | 0.124099 | 0 | 0.00792 | 0.306983 | 17,672 | 364 | 140 | 48.549451 | 0.796767 | 0.115607 | 0 | 0.142857 | 0 | 0 | 0.194509 | 0.01753 | 0 | 0 | 0 | 0 | 0 | 1 | 0.021429 | false | 0.010714 | 0.021429 | 0 | 0.067857 | 0.039286 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6e5807938177f25388fc19a921008773b63bd36 | 1,096 | py | Python | profiles/tests.py | Jokotoye18/Learning_log | 7de278e252c9abd23a462cf6d7358e8ed22cb66f | [
"MIT"
] | null | null | null | profiles/tests.py | Jokotoye18/Learning_log | 7de278e252c9abd23a462cf6d7358e8ed22cb66f | [
"MIT"
] | 4 | 2021-03-30T13:25:57.000Z | 2021-09-22T19:04:09.000Z | profiles/tests.py | Jokotoye18/Learning_log | 7de278e252c9abd23a462cf6d7358e8ed22cb66f | [
"MIT"
] | null | null | null | from django.test import TestCase, Client
from django.contrib.auth import get_user_model
from .models import Profile
from allauth.account.forms import SignupForm
from django.urls import reverse
class ProfileModelTest(TestCase):
def setUp(self):
self.user = get_user_model().objects.create_user(
email = 'test@gmail.com',
username = 'testname'
)
self.profile = Profile.objects.create(
user = self.user,
location = 'ilorin',
interest = 'sport',
about = 'test about'
)
def test_profile_model_text_representation(self):
self.assertEqual(f'{self.profile}', f'{self.user.username} profile')
def test_profile_content(self):
self.assertEqual(f'{self.profile.user}', f'{self.user}')
self.assertEqual(f'{self.profile.location}', 'ilorin')
self.assertEqual(f'{self.profile.interest}', 'sport')
self.assertEqual(f'{self.profile.about}', 'test about')
class ProfileViewTest(TestCase):
    # Wrapped in a TestCase method so the request runs at test time rather
    # than at import time, as the original class-level statements did.
    def test_profile_view(self):
        c = Client()
        resp = c.get(reverse('profiles:profile'))
        # An anonymous client may be redirected to login, so accept either.
        self.assertIn(resp.status_code, (200, 302))
| 31.314286 | 76 | 0.642336 | 127 | 1,096 | 5.456693 | 0.354331 | 0.050505 | 0.11544 | 0.1443 | 0.206349 | 0.089466 | 0 | 0 | 0 | 0 | 0 | 0 | 0.231752 | 1,096 | 35 | 77 | 31.314286 | 0.82304 | 0 | 0 | 0 | 0 | 0 | 0.198724 | 0.041933 | 0 | 0 | 0 | 0 | 0.185185 | 1 | 0.111111 | false | 0 | 0.185185 | 0 | 0.444444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6e7bb7bfc30d3b8e909b178aa21379802d6f3d7 | 26,013 | py | Python | pyro/infer/autoguide/gaussian.py | Jayanth-kumar5566/pyro | a98bb57e1704997a3e01c76a7820c0b1db909ee3 | [
"Apache-2.0"
] | 4,959 | 2017-11-03T14:39:17.000Z | 2019-02-04T16:14:30.000Z | pyro/infer/autoguide/gaussian.py | Jayanth-kumar5566/pyro | a98bb57e1704997a3e01c76a7820c0b1db909ee3 | [
"Apache-2.0"
] | 985 | 2017-11-03T14:27:56.000Z | 2019-02-02T18:52:54.000Z | pyro/infer/autoguide/gaussian.py | Jayanth-kumar5566/pyro | a98bb57e1704997a3e01c76a7820c0b1db909ee3 | [
"Apache-2.0"
] | 564 | 2017-11-03T15:05:55.000Z | 2019-01-31T14:02:29.000Z | # Copyright Contributors to the Pyro project.
# SPDX-License-Identifier: Apache-2.0
import itertools
from abc import ABCMeta, abstractmethod
from collections import OrderedDict, defaultdict
from contextlib import ExitStack
from types import SimpleNamespace
from typing import Callable, Dict, Optional, Set, Tuple, Union
import torch
from torch.distributions import biject_to
import pyro
import pyro.distributions as dist
import pyro.poutine as poutine
from pyro.distributions import constraints
from pyro.infer.inspect import get_dependencies, is_sample_site
from pyro.nn.module import PyroModule, PyroParam
from pyro.ops.linalg import ignore_torch_deprecation_warnings
from pyro.poutine.runtime import am_i_wrapped, get_plates
from pyro.poutine.util import site_is_subsample
from .guides import AutoGuide
from .initialization import InitMessenger, init_to_feasible
from .utils import deep_getattr, deep_setattr, helpful_support_errors
# Helper to dispatch to concrete subclasses of AutoGaussian, e.g.
# AutoGaussian(model, backend="dense")
# is converted to
# AutoGaussianDense(model)
# The intent is to avoid proliferation of subclasses and docstrings,
# and provide a single interface AutoGaussian(...).
class AutoGaussianMeta(type(AutoGuide), ABCMeta):
backends = {}
default_backend = "dense"
def __init__(cls, *args, **kwargs):
super().__init__(*args, **kwargs)
assert cls.__name__.startswith("AutoGaussian")
key = cls.__name__.replace("AutoGaussian", "").lower()
cls.backends[key] = cls
def __call__(cls, *args, **kwargs):
if cls is AutoGaussian:
backend = kwargs.pop("backend", cls.default_backend)
cls = cls.backends[backend]
return super(AutoGaussianMeta, cls).__call__(*args, **kwargs)
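

# Dispatch example (sketch): thanks to the metaclass above, a subclass named
# ``AutoGaussianDense`` is registered under the key "dense", so calling
# ``AutoGaussian(model, backend="dense")`` returns an ``AutoGaussianDense``
# instance without the caller importing the subclass directly.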
class AutoGaussian(AutoGuide, metaclass=AutoGaussianMeta):
"""
Gaussian guide with optimal conditional independence structure.
This is equivalent to a full rank :class:`AutoMultivariateNormal` guide,
but with a sparse precision matrix determined by dependencies and plates in
the model [1]. Depending on model structure, this can have asymptotically
better statistical efficiency than :class:`AutoMultivariateNormal` .
This guide implements multiple backends for computation. All backends use
the same statistically optimal parametrization. The default "dense" backend
has computational complexity similar to :class:`AutoMultivariateNormal` .
The experimental "funsor" backend can be asymptotically cheaper in terms of
time and space (using Gaussian tensor variable elimination [2,3]), but
incurs large constant overhead. The "funsor" backend requires `funsor
<https://funsor.pyro.ai>`_ which can be installed via ``pip install
pyro-ppl[funsor]``.
The guide currently does not depend on the model's ``*args, **kwargs``.
Example::
guide = AutoGaussian(model)
svi = SVI(model, guide, ...)
Example using experimental funsor backend::
!pip install pyro-ppl[funsor]
guide = AutoGaussian(model, backend="funsor")
svi = SVI(model, guide, ...)
**References**
[1] S.Webb, A.Goliński, R.Zinkov, N.Siddharth, T.Rainforth, Y.W.Teh, F.Wood (2018)
"Faithful inversion of generative models for effective amortized inference"
https://dl.acm.org/doi/10.5555/3327144.3327229
[2] F.Obermeyer, E.Bingham, M.Jankowiak, J.Chiu, N.Pradhan, A.M.Rush, N.Goodman
(2019)
"Tensor Variable Elimination for Plated Factor Graphs"
http://proceedings.mlr.press/v97/obermeyer19a/obermeyer19a.pdf
[3] F. Obermeyer, E. Bingham, M. Jankowiak, D. Phan, J. P. Chen
(2019)
"Functional Tensors for Probabilistic Programming"
https://arxiv.org/abs/1910.10775
:param callable model: A Pyro model.
:param callable init_loc_fn: A per-site initialization function.
See :ref:`autoguide-initialization` section for available functions.
:param float init_scale: Initial scale for the standard deviation of each
(unconstrained transformed) latent variable.
:param str backend: Back end for performing Gaussian tensor variable
elimination. Defaults to "dense"; other options include "funsor".
"""
scale_constraint = constraints.softplus_positive
def __init__(
self,
model: Callable,
*,
init_loc_fn: Callable = init_to_feasible,
init_scale: float = 0.1,
backend: Optional[str] = None, # used only by metaclass
):
if not isinstance(init_scale, float) or not (init_scale > 0):
raise ValueError(f"Expected init_scale > 0. but got {init_scale}")
self._init_scale = init_scale
self._original_model = (model,)
model = InitMessenger(init_loc_fn)(model)
super().__init__(model)
@staticmethod
def _prototype_hide_fn(msg):
# In contrast to the AutoGuide base class, this includes observation
# sites and excludes deterministic sites.
return not is_sample_site(msg)
def _setup_prototype(self, *args, **kwargs) -> None:
super()._setup_prototype(*args, **kwargs)
self.locs = PyroModule()
self.scales = PyroModule()
self.white_vecs = PyroModule()
self.prec_sqrts = PyroModule()
self._factors = OrderedDict()
self._plates = OrderedDict()
self._event_numel = OrderedDict()
self._unconstrained_event_shapes = OrderedDict()
# Trace model dependencies.
model = self._original_model[0]
self._original_model = None
self.dependencies = poutine.block(get_dependencies)(model, args, kwargs)[
"prior_dependencies"
]
# Eliminate observations with no upstream latents.
for d, upstreams in list(self.dependencies.items()):
if all(self.prototype_trace.nodes[u]["is_observed"] for u in upstreams):
del self.dependencies[d]
del self.prototype_trace.nodes[d]
# Collect factors and plates.
for d, site in self.prototype_trace.nodes.items():
# Prune non-essential parts of the trace to save memory.
pruned_site, site = site, site.copy()
pruned_site.clear()
# Collect factors and plates.
if site["type"] != "sample" or site_is_subsample(site):
continue
assert all(f.vectorized for f in site["cond_indep_stack"])
self._factors[d] = self._compress_site(site)
plates = frozenset(site["cond_indep_stack"])
if site["fn"].batch_shape != _plates_to_shape(plates):
raise ValueError(
f"Shape mismatch at site '{d}'. "
"Are you missing a pyro.plate() or .to_event()?"
)
if site["is_observed"]:
# Break irrelevant observation plates.
plates &= frozenset().union(
*(self._plates[u] for u in self.dependencies[d] if u != d)
)
self._plates[d] = plates
# Create location-scale parameters, one per latent variable.
if site["is_observed"]:
# This may slightly overestimate, e.g. for Multinomial.
self._event_numel[d] = site["fn"].event_shape.numel()
# Account for broken irrelevant observation plates.
for f in set(site["cond_indep_stack"]) - plates:
self._event_numel[d] *= f.size
continue
with helpful_support_errors(site):
init_loc = biject_to(site["fn"].support).inv(site["value"]).detach()
batch_shape = site["fn"].batch_shape
event_shape = init_loc.shape[len(batch_shape) :]
self._unconstrained_event_shapes[d] = event_shape
self._event_numel[d] = event_shape.numel()
event_dim = len(event_shape)
deep_setattr(self.locs, d, PyroParam(init_loc, event_dim=event_dim))
deep_setattr(
self.scales,
d,
PyroParam(
torch.full_like(init_loc, self._init_scale),
constraint=self.scale_constraint,
event_dim=event_dim,
),
)
# Create parameters for dependencies, one per factor.
for d, site in self._factors.items():
u_size = 0
for u in self.dependencies[d]:
if not self._factors[u]["is_observed"]:
broken_shape = _plates_to_shape(self._plates[u] - self._plates[d])
u_size += broken_shape.numel() * self._event_numel[u]
d_size = self._event_numel[d]
if site["is_observed"]:
d_size = min(d_size, u_size) # just an optimization
batch_shape = _plates_to_shape(self._plates[d])
# Create parameters of each Gaussian factor.
white_vec = init_loc.new_zeros(batch_shape + (d_size,))
# We initialize with noise to avoid singular gradient.
prec_sqrt = torch.rand(
batch_shape + (u_size, d_size),
dtype=init_loc.dtype,
device=init_loc.device,
)
prec_sqrt.sub_(0.5).mul_(self._init_scale)
if not site["is_observed"]:
# Initialize the [d,d] block to the identity matrix.
prec_sqrt.diagonal(dim1=-2, dim2=-1).fill_(1)
deep_setattr(self.white_vecs, d, PyroParam(white_vec, event_dim=1))
deep_setattr(self.prec_sqrts, d, PyroParam(prec_sqrt, event_dim=2))
@staticmethod
def _compress_site(site):
# Save memory by retaining only necessary parts of the site.
return {
"name": site["name"],
"type": site["type"],
"cond_indep_stack": site["cond_indep_stack"],
"is_observed": site["is_observed"],
"fn": SimpleNamespace(
support=site["fn"].support,
batch_shape=site["fn"].batch_shape,
event_dim=site["fn"].event_dim,
),
}
def forward(self, *args, **kwargs) -> Dict[str, torch.Tensor]:
if self.prototype_trace is None:
self._setup_prototype(*args, **kwargs)
aux_values = self._sample_aux_values(temperature=1.0)
values, log_densities = self._transform_values(aux_values)
# Replay via Pyro primitives.
plates = self._create_plates(*args, **kwargs)
for name, site in self._factors.items():
if site["is_observed"]:
continue
with ExitStack() as stack:
for frame in site["cond_indep_stack"]:
stack.enter_context(plates[frame.name])
values[name] = pyro.sample(
name,
dist.Delta(values[name], log_densities[name], site["fn"].event_dim),
)
return values
def median(self, *args, **kwargs) -> Dict[str, torch.Tensor]:
"""
Returns the posterior median value of each latent variable.
:return: A dict mapping sample site name to median tensor.
:rtype: dict
"""
with torch.no_grad(), poutine.mask(mask=False):
aux_values = self._sample_aux_values(temperature=0.0)
values, _ = self._transform_values(aux_values)
return values
def _transform_values(
self,
aux_values: Dict[str, torch.Tensor],
) -> Tuple[Dict[str, torch.Tensor], Union[float, torch.Tensor]]:
# Learnably transform auxiliary values to user-facing values.
values = {}
log_densities = defaultdict(float)
compute_density = am_i_wrapped() and poutine.get_mask() is not False
for name, site in self._factors.items():
if site["is_observed"]:
continue
loc = deep_getattr(self.locs, name)
scale = deep_getattr(self.scales, name)
unconstrained = aux_values[name] * scale + loc
# Transform to constrained space.
transform = biject_to(site["fn"].support)
values[name] = transform(unconstrained)
if compute_density:
assert transform.codomain.event_dim == site["fn"].event_dim
log_densities[name] = transform.inv.log_abs_det_jacobian(
values[name], unconstrained
) - scale.log().reshape(site["fn"].batch_shape + (-1,)).sum(-1)
return values, log_densities
@abstractmethod
def _sample_aux_values(self, *, temperature: float) -> Dict[str, torch.Tensor]:
raise NotImplementedError
class AutoGaussianDense(AutoGaussian):
"""
Dense implementation of :class:`AutoGaussian` .
    The following are equivalent::

        guide = AutoGaussian(model, backend="dense")
        guide = AutoGaussianDense(model)
    """
def _setup_prototype(self, *args, **kwargs):
super()._setup_prototype(*args, **kwargs)
# Collect global shapes and per-axis indices.
self._dense_shapes = {}
global_indices = {}
pos = 0
for d, event_shape in self._unconstrained_event_shapes.items():
batch_shape = self._factors[d]["fn"].batch_shape
self._dense_shapes[d] = batch_shape, event_shape
end = pos + (batch_shape + event_shape).numel()
global_indices[d] = torch.arange(pos, end).reshape(batch_shape + (-1,))
pos = end
self._dense_size = pos
# Create sparse -> dense precision scatter indices.
self._dense_scatter = {}
for d, site in self._factors.items():
prec_sqrt_shape = deep_getattr(self.prec_sqrts, d).shape
info_vec_shape = prec_sqrt_shape[:-1]
precision_shape = prec_sqrt_shape[:-1] + prec_sqrt_shape[-2:-1]
index1 = torch.zeros(info_vec_shape, dtype=torch.long)
index2 = torch.zeros(precision_shape, dtype=torch.long)
# Collect local offsets and create index1 for info_vec blockwise.
upstreams = [
u for u in self.dependencies[d] if not self._factors[u]["is_observed"]
]
local_offsets = {}
pos = 0
for u in upstreams:
local_offsets[u] = pos
broken_plates = self._plates[u] - self._plates[d]
pos += self._event_numel[u] * _plates_to_shape(broken_plates).numel()
u_index = global_indices[u]
# Permute broken plates to the right of preserved plates.
u_index = _break_plates(u_index, self._plates[u], self._plates[d])
# Scatter global indices into the [u] block.
u_start = local_offsets[u]
u_stop = u_start + u_index.size(-1)
index1[..., u_start:u_stop] = u_index
# Create index2 for precision blockwise.
for u, v in itertools.product(upstreams, upstreams):
u_index = global_indices[u]
v_index = global_indices[v]
# Permute broken plates to the right of preserved plates.
u_index = _break_plates(u_index, self._plates[u], self._plates[d])
v_index = _break_plates(v_index, self._plates[v], self._plates[d])
# Scatter global indices into the [u,v] block.
u_start = local_offsets[u]
u_stop = u_start + u_index.size(-1)
v_start = local_offsets[v]
v_stop = v_start + v_index.size(-1)
index2[
..., u_start:u_stop, v_start:v_stop
] = self._dense_size * u_index.unsqueeze(-1) + v_index.unsqueeze(-2)
self._dense_scatter[d] = index1.reshape(-1), index2.reshape(-1)
def _sample_aux_values(self, *, temperature: float) -> Dict[str, torch.Tensor]:
mvn = self._dense_get_mvn()
if temperature == 0:
# Simply return the mode.
flat_samples = mvn.mean
elif temperature == 1:
# Sample from a dense joint Gaussian over flattened variables.
flat_samples = pyro.sample(
f"_{self._pyro_name}_latent", mvn, infer={"is_auxiliary": True}
)
else:
raise NotImplementedError(f"Invalid temperature: {temperature}")
samples = self._dense_unflatten(flat_samples)
return samples
def _dense_unflatten(self, flat_samples: torch.Tensor) -> Dict[str, torch.Tensor]:
# Convert a single flattened sample to a dict of shaped samples.
sample_shape = flat_samples.shape[:-1]
samples = {}
pos = 0
for d, (batch_shape, event_shape) in self._dense_shapes.items():
end = pos + (batch_shape + event_shape).numel()
flat_sample = flat_samples[..., pos:end]
pos = end
# Assumes sample shapes are left of batch shapes.
samples[d] = flat_sample.reshape(
torch.broadcast_shapes(sample_shape, batch_shape) + event_shape
)
return samples
def _dense_flatten(self, samples: Dict[str, torch.Tensor]) -> torch.Tensor:
        # Convert a dict of shaped samples to a single flattened sample.
flat_samples = []
for d, (batch_shape, event_shape) in self._dense_shapes.items():
shape = samples[d].shape
sample_shape = shape[: len(shape) - len(batch_shape) - len(event_shape)]
flat_samples.append(samples[d].reshape(sample_shape + (-1,)))
return torch.cat(flat_samples, dim=-1)
def _dense_get_mvn(self):
# Create a dense joint Gaussian over flattened variables.
flat_info_vec = torch.zeros(self._dense_size)
flat_precision = torch.zeros(self._dense_size**2)
for d, (index1, index2) in self._dense_scatter.items():
white_vec = deep_getattr(self.white_vecs, d)
prec_sqrt = deep_getattr(self.prec_sqrts, d)
info_vec = (prec_sqrt @ white_vec[..., None])[..., 0]
precision = prec_sqrt @ prec_sqrt.transpose(-1, -2)
flat_info_vec.scatter_add_(0, index1, info_vec.reshape(-1))
flat_precision.scatter_add_(0, index2, precision.reshape(-1))
info_vec = flat_info_vec
precision = flat_precision.reshape(self._dense_size, self._dense_size)
scale_tril = _precision_to_scale_tril(precision)
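        # loc below is covariance @ info_vec, computed as L @ (L.T @ info_vec)
        # with covariance = L @ L.T: the usual information-form to moment-form
        # conversion for a Gaussian.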
loc = (
scale_tril @ (scale_tril.transpose(-1, -2) @ info_vec.unsqueeze(-1))
).squeeze(-1)
return dist.MultivariateNormal(loc, scale_tril=scale_tril)
class AutoGaussianFunsor(AutoGaussian):
"""
Funsor implementation of :class:`AutoGaussian` .
    The following are equivalent::

        guide = AutoGaussian(model, backend="funsor")
        guide = AutoGaussianFunsor(model)
    """
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
_import_funsor()
def _setup_prototype(self, *args, **kwargs):
super()._setup_prototype(*args, **kwargs)
funsor = _import_funsor()
# Check TVE condition 1: plate nesting is monotone.
for d in self._factors:
pd = {p.name for p in self._plates[d]}
for u in self.dependencies[d]:
pu = {p.name for p in self._plates[u]}
if pu <= pd:
continue # ok
raise NotImplementedError(
"Expected monotone plate nesting, but found dependency "
f"{repr(u)} -> {repr(d)} leaves plates {pu - pd}. "
"Consider splitting into multiple guides via AutoGuideList, "
"or replacing the plate in the model by .to_event()."
)
# Determine TVE problem shape.
factor_inputs: Dict[str, OrderedDict[str, funsor.Domain]] = {}
eliminate: Set[str] = set()
plate_to_dim: Dict[str, int] = {}
for d, site in self._factors.items():
inputs = OrderedDict()
for f in sorted(self._plates[d], key=lambda f: f.dim):
plate_to_dim[f.name] = f.dim
inputs[f.name] = funsor.Bint[f.size]
eliminate.add(f.name)
for u in self.dependencies[d]:
if self._factors[u]["is_observed"]:
continue
inputs[u] = funsor.Reals[self._unconstrained_event_shapes[u]]
eliminate.add(u)
factor_inputs[d] = inputs
self._funsor_factor_inputs = factor_inputs
self._funsor_eliminate = frozenset(eliminate)
self._funsor_plate_to_dim = plate_to_dim
self._funsor_plates = frozenset(plate_to_dim)
def _sample_aux_values(self, *, temperature: float) -> Dict[str, torch.Tensor]:
funsor = _import_funsor()
# Convert torch to funsor.
particle_plates = frozenset(get_plates())
plate_to_dim = self._funsor_plate_to_dim.copy()
plate_to_dim.update({f.name: f.dim for f in particle_plates})
factors = {}
for d, inputs in self._funsor_factor_inputs.items():
batch_shape = torch.Size(
p.size for p in sorted(self._plates[d], key=lambda p: p.dim)
)
white_vec = deep_getattr(self.white_vecs, d)
prec_sqrt = deep_getattr(self.prec_sqrts, d)
factors[d] = funsor.gaussian.Gaussian(
white_vec=white_vec.reshape(batch_shape + white_vec.shape[-1:]),
prec_sqrt=prec_sqrt.reshape(batch_shape + prec_sqrt.shape[-2:]),
inputs=inputs,
)
# Perform Gaussian tensor variable elimination.
if temperature == 1:
samples, log_prob = _try_possibly_intractable(
funsor.recipes.forward_filter_backward_rsample,
factors=factors,
eliminate=self._funsor_eliminate,
plates=frozenset(plate_to_dim),
sample_inputs={f.name: funsor.Bint[f.size] for f in particle_plates},
)
else:
samples, log_prob = _try_possibly_intractable(
funsor.recipes.forward_filter_backward_precondition,
factors=factors,
eliminate=self._funsor_eliminate,
plates=frozenset(plate_to_dim),
)
# Substitute noise.
sample_shape = torch.Size(f.size for f in particle_plates)
noise = torch.randn(sample_shape + log_prob.inputs["aux"].shape)
noise.mul_(temperature)
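        # temperature=0 zeroes the noise (deterministic mode, used by median);
        # temperature=1 leaves unit Gaussian noise (an exact posterior sample).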
aux = funsor.Tensor(noise)[tuple(f.name for f in particle_plates)]
with funsor.interpretations.memoize():
samples = {k: v(aux=aux) for k, v in samples.items()}
log_prob = log_prob(aux=aux)
# Convert funsor to torch.
if am_i_wrapped() and poutine.get_mask() is not False:
log_prob = funsor.to_data(log_prob, name_to_dim=plate_to_dim)
pyro.factor(f"_{self._pyro_name}_latent", log_prob, has_rsample=True)
samples = {
k: funsor.to_data(v, name_to_dim=plate_to_dim) for k, v in samples.items()
}
return samples
def _precision_to_scale_tril(P):
# Ref: https://nbviewer.jupyter.org/gist/fehiepsi/5ef8e09e61604f10607380467eb82006#Precision-to-scale_tril
Lf = torch.linalg.cholesky(torch.flip(P, (-2, -1)))
L_inv = torch.transpose(torch.flip(Lf, (-2, -1)), -2, -1)
L = torch.linalg.solve_triangular(
L_inv, torch.eye(P.shape[-1], dtype=P.dtype, device=P.device), upper=False
)
return L
@ignore_torch_deprecation_warnings()
def _try_possibly_intractable(fn, *args, **kwargs):
# Convert ValueError into NotImplementedError.
try:
return fn(*args, **kwargs)
except ValueError as e:
if str(e) != "intractable!":
raise e from None
raise NotImplementedError(
"Funsor backend found intractable plate nesting. "
'Consider using AutoGaussian(..., backend="dense"), '
"splitting into multiple guides via AutoGuideList, or "
"replacing some plates in the model by .to_event()."
) from e
def _plates_to_shape(plates):
shape = [1] * max([0] + [-f.dim for f in plates])
for f in plates:
shape[f.dim] = f.size
return torch.Size(shape)
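# For example (hypothetical plates): plates at dim=-2 (size 3) and dim=-1
# (size 4) give torch.Size([3, 4]); an empty plate set gives torch.Size([]).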
def _break_plates(x, all_plates, kept_plates):
"""
Reshapes and permutes a tensor ``x`` with event_dim=1 and batch shape given
by ``all_plates`` by breaking all plates not in ``kept_plates``. Each
broken plate is moved into the event shape, and finally the event shape is
    flattened back to a single dimension.
"""
assert x.shape[:-1] == _plates_to_shape(all_plates) # event_dim == 1
kept_plates = kept_plates & all_plates
broken_plates = all_plates - kept_plates
if not broken_plates:
return x
if not kept_plates:
# Empty batch shape.
return x.reshape(-1)
batch_shape = _plates_to_shape(kept_plates)
if max(p.dim for p in kept_plates) < min(p.dim for p in broken_plates):
# No permutation is necessary.
return x.reshape(batch_shape + (-1,))
# We need to permute broken plates left past kept plates.
event_dims = {-1} | {p.dim - 1 for p in broken_plates}
perm = sorted(range(-x.dim(), 0), key=lambda d: (d in event_dims, d))
return x.permute(perm).reshape(batch_shape + (-1,))
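# For example (a sketch): x of shape (3, 4, 5) under plates i (dim=-2, size 3)
# and j (dim=-1, size 4), keeping only plate j, is permuted and reshaped to
# (4, 15), folding the broken plate i into the flattened event dimension.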
def _import_funsor():
try:
import funsor
except ImportError as e:
raise ImportError(
'AutoGaussian(..., backend="funsor") requires funsor. '
"Try installing via: pip install pyro-ppl[funsor]"
) from e
funsor.set_backend("torch")
return funsor
__all__ = [
"AutoGaussian",
]
| 41.290476 | 110 | 0.615462 | 3,143 | 26,013 | 4.878142 | 0.180719 | 0.018262 | 0.007827 | 0.010566 | 0.237412 | 0.186408 | 0.154579 | 0.117793 | 0.103118 | 0.098748 | 0 | 0.00897 | 0.284281 | 26,013 | 629 | 111 | 41.356121 | 0.814534 | 0.207934 | 0 | 0.187204 | 0 | 0 | 0.054453 | 0.002468 | 0 | 0 | 0 | 0 | 0.009479 | 1 | 0.054502 | false | 0 | 0.063981 | 0.004739 | 0.180095 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6eab1523ee5589b73c20527590238599c8dfd16 | 320 | py | Python | ABC/063/c.py | fumiyanll23/AtCoder | 362ca9fcacb5415c1458bc8dee5326ba2cc70b65 | [
"MIT"
] | null | null | null | ABC/063/c.py | fumiyanll23/AtCoder | 362ca9fcacb5415c1458bc8dee5326ba2cc70b65 | [
"MIT"
] | null | null | null | ABC/063/c.py | fumiyanll23/AtCoder | 362ca9fcacb5415c1458bc8dee5326ba2cc70b65 | [
"MIT"
] | null | null | null | def main():
# input
N = int(input())
ss = [int(input()) for _ in range(N)]
# compute
ans = sum(ss)
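    # A total that is a multiple of 10 displays as 0, so greedily drop the
    # smallest score that is not itself a multiple of 10 (dropping a multiple
    # of 10 could never change the remainder).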
for s in sorted(ss):
if ans%10==0 and s%10!=0:
ans -= s
# output
if ans%10 == 0:
print(0)
else:
print(ans)
if __name__ == '__main__':
main()
| 15.238095 | 41 | 0.453125 | 46 | 320 | 2.956522 | 0.478261 | 0.066176 | 0.102941 | 0.117647 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.05102 | 0.3875 | 320 | 20 | 42 | 16 | 0.642857 | 0.0625 | 0 | 0 | 0 | 0 | 0.027027 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0 | 0 | 0.076923 | 0.153846 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6ecd6ac96ad20775915e71b1e12991c1fd7b239 | 4,711 | py | Python | ja_timex/timex.py | otokunaga2/ja-timex | 4534ecbb3d4e780d4777ed239bb832ce849fa0c1 | [
"MIT"
] | null | null | null | ja_timex/timex.py | otokunaga2/ja-timex | 4534ecbb3d4e780d4777ed239bb832ce849fa0c1 | [
"MIT"
] | null | null | null | ja_timex/timex.py | otokunaga2/ja-timex | 4534ecbb3d4e780d4777ed239bb832ce849fa0c1 | [
"MIT"
] | null | null | null | import re
from collections import defaultdict
from typing import DefaultDict, Dict, List
from ja_timex.number_normalizer import NumberNormalizer
from ja_timex.tag import TIMEX
from ja_timex.tagger import AbstimeTagger, DurationTagger, ReltimeTagger, SetTagger
from ja_timex.util import is_parial_pattern_of_number_expression
class TimexParser:
def __init__(
self,
number_normalizer=NumberNormalizer(),
abstime_tagger=AbstimeTagger(),
duration_tagger=DurationTagger(),
reltime_tagger=ReltimeTagger(),
set_tagger=SetTagger(),
custom_tagger=None,
) -> None:
self.number_normalizer = number_normalizer
self.abstime_tagger = abstime_tagger
self.duration_tagger = duration_tagger
self.reltime_tagger = reltime_tagger
self.set_tagger = set_tagger
self.custom_tagger = custom_tagger
self.all_patterns = {}
self.all_patterns["abstime"] = self.abstime_tagger.patterns
self.all_patterns["duration"] = self.duration_tagger.patterns
self.all_patterns["reltime"] = self.reltime_tagger.patterns
self.all_patterns["set"] = self.set_tagger.patterns
if self.custom_tagger:
self.all_patterns["custom"] = self.custom_tagger.patterns
# TODO: set default timezone by pendulum
def parse(self, raw_text: str) -> List[TIMEX]:
        # Recognize and normalize number expressions
processed_text = self._normalize_number(raw_text)
        # Extract temporal expressions
all_extracts = self._extract(processed_text)
type2extracts = self._drop_duplicates(processed_text, all_extracts)
        # Normalize the extracted expressions
timex_tags = self._parse(type2extracts)
        # Attach additional information to the normalized tags
timex_tags = self._modify_additional_information(timex_tags, processed_text)
return timex_tags
def _normalize_number(self, raw_text: str) -> str:
return self.number_normalizer.normalize(raw_text)
def _extract(self, processed_text: str) -> List[Dict]:
all_extracts = []
        # Apply the regex patterns of every tagger in turn
for type_name, patterns in self.all_patterns.items():
for pattern in patterns:
                # Detect pattern occurrences in the text
re_iter = re.finditer(pattern.re_pattern, processed_text)
for re_match in re_iter:
if is_parial_pattern_of_number_expression(re_match, processed_text):
continue
all_extracts.append({"type_name": type_name, "re_match": re_match, "pattern": pattern})
return all_extracts
def _drop_duplicates(self, processed_text: str, all_extracts: List[Dict]) -> DefaultDict[str, List[Dict]]:
type2extracts = defaultdict(list)
text_coverage_flag = [False] * len(processed_text)
long_order_extracts = sorted(all_extracts, key=lambda x: len(x["re_match"].group()), reverse=True)
for target_extract in long_order_extracts:
start_i, end_i = target_extract["re_match"].span()
            # Accept a candidate only if all of its characters are still unused
if any(text_coverage_flag[start_i:end_i]) is False:
text_coverage_flag[start_i:end_i] = [True] * (end_i - start_i)
type2extracts[target_extract["type_name"]].append(target_extract)
return type2extracts
def _parse(self, type2extracts: DefaultDict[str, List[Dict]]) -> List[TIMEX]:
results = []
for type_name, extracts in type2extracts.items():
for extract in extracts:
if type_name == "abstime":
results.append(self.abstime_tagger.parse_with_pattern(extract["re_match"], extract["pattern"]))
elif type_name == "duration":
results.append(self.duration_tagger.parse_with_pattern(extract["re_match"], extract["pattern"]))
elif type_name == "reltime":
results.append(self.reltime_tagger.parse_with_pattern(extract["re_match"], extract["pattern"]))
elif type_name == "set":
results.append(self.set_tagger.parse_with_pattern(extract["re_match"], extract["pattern"]))
elif type_name == "custom":
results.append(self.custom_tagger.parse_with_pattern(extract["re_match"], extract["pattern"]))
return results
def _modify_additional_information(self, timex_tags: List[TIMEX], processed_text: str) -> List[TIMEX]:
# update @tid
modified_tags = []
sorted_timex_tags = sorted(timex_tags, key=lambda x: x.span[0] if x.span else 0)
for i, timex in enumerate(sorted_timex_tags):
timex.tid = f"t{i}"
modified_tags.append(timex)
return modified_tags
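# A minimal usage sketch (the sample text and printed attributes below are
# illustrative assumptions; see the ja-timex docs for the TIMEX interface):
#
#   parser = TimexParser()
#   for timex in parser.parse("2021年7月18日に出発します"):
#       print(timex.tid, timex.text, timex.value)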
| 42.441441 | 116 | 0.659733 | 542 | 4,711 | 5.431734 | 0.197417 | 0.026155 | 0.035666 | 0.037364 | 0.189198 | 0.141304 | 0.118886 | 0.101223 | 0.101223 | 0.084239 | 0 | 0.002537 | 0.246869 | 4,711 | 110 | 117 | 42.827273 | 0.827227 | 0.032902 | 0 | 0 | 0 | 0 | 0.041795 | 0 | 0 | 0 | 0 | 0.009091 | 0 | 1 | 0.08642 | false | 0 | 0.08642 | 0.012346 | 0.259259 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6ee6c5c06774806a22bfd966c8beb3678968a98 | 1,481 | py | Python | pydynamo_brain/pydynamo_brain/ui/tilefigs.py | ubcbraincircuits/pyDynamo | 006eb6edb5e54670574dbfdf7d249e9037f01ffc | [
"MIT"
] | 4 | 2021-12-16T22:32:47.000Z | 2022-01-03T05:42:12.000Z | pydynamo_brain/pydynamo_brain/ui/tilefigs.py | padster/pyDynamo | 006eb6edb5e54670574dbfdf7d249e9037f01ffc | [
"MIT"
] | 1 | 2021-11-15T18:14:20.000Z | 2021-11-15T18:14:36.000Z | pydynamo_brain/pydynamo_brain/ui/tilefigs.py | padster/pyDynamo | 006eb6edb5e54670574dbfdf7d249e9037f01ffc | [
"MIT"
] | 1 | 2022-01-21T23:03:24.000Z | 2022-01-21T23:03:24.000Z | import math
from PyQt5.QtCore import QRect
from PyQt5.QtWidgets import QDesktopWidget
# pyqt5 version of matlab tilefigs function
def tileFigs(stackWindows):
# Filter out only open windows:
stackWindows = [w for w in stackWindows if w is not None]
assert len(stackWindows) > 0
    hspc = 10    # Horizontal space.
    topspc = 40  # Space above top figure.
    medspc = 40  # Space between figures.
    botspc = 10  # Space below bottom figure.
# Get screen size
geom = QDesktopWidget().availableGeometry()
scrwid = geom.width()
scrhgt = geom.height()
    # Grid shape parameter: the ideal fraction nv/nh (we take the ceiling).
    ratio = (scrhgt * 0.5) / scrwid
nfigs = len(stackWindows) # Number of figures. i.e. nv*nh
nv = max(1, math.ceil(math.sqrt(nfigs * ratio))) # Number of figures V.
nh = max(2, math.ceil(nfigs / nv)) # Number of figures H.
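    # Worked example: 5 windows on a 1920x1080 screen give
    # ratio = 540 / 1920 ~ 0.28, nv = ceil(sqrt(5 * 0.28)) = 2 and
    # nh = ceil(5 / 2) = 3, i.e. a 2-row by 3-column grid.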
# Figure width and height
figwid = (scrwid - (nh + 1) * hspc) / nh
fighgt = (scrhgt - (topspc + botspc) - (nv - 1) * medspc) / nv
# Put the figures where they belong
for row in range(nv):
for col in range(nh):
idx = row * nh + col
if idx < nfigs:
figlft = (col + 1) * hspc + col * figwid
figtop = row * medspc + topspc + row * fighgt
                stackWindows[idx].resize(int(figwid), int(fighgt))
                stackWindows[idx].move(int(figlft), int(figtop))
stackWindows[idx].redraw()
| 35.261905 | 81 | 0.609048 | 191 | 1,481 | 4.722513 | 0.492147 | 0.026608 | 0.049889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017992 | 0.286968 | 1,481 | 41 | 82 | 36.121951 | 0.836174 | 0.26266 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.035714 | 1 | 0.035714 | false | 0 | 0.107143 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6f06952cd7645789cd9e5361ca955bdb8ae1570 | 2,126 | py | Python | gmmmml/policies/prediction.py | andycasey/gmmmml | 1acfeea14514fb7ed44ccdfadefc87f996fee86f | [
"MIT"
] | 3 | 2021-02-15T05:37:01.000Z | 2021-09-22T22:06:12.000Z | gmmmml/policies/prediction.py | andycasey/gmmmml | 1acfeea14514fb7ed44ccdfadefc87f996fee86f | [
"MIT"
] | null | null | null | gmmmml/policies/prediction.py | andycasey/gmmmml | 1acfeea14514fb7ed44ccdfadefc87f996fee86f | [
"MIT"
] | 1 | 2021-02-15T05:37:06.000Z | 2021-02-15T05:37:06.000Z |
import logging
import numpy as np
from .base import Policy
logger_name, *_ = __name__.split(".")
logger = logging.getLogger(logger_name)
class BasePredictionPolicy(Policy):
def __init__(self, *args, **kwargs):
super(BasePredictionPolicy, self).__init__(*args, **kwargs)
def predict(self, y, **kwargs):
raise NotImplementedError("should be implemented by the sub-classes")
class DefaultPredictionPolicy(BasePredictionPolicy):
def predict(self, y, **kwargs):
N, D = y.shape
        # Predict ahead: from K = 1 up to twice the largest K trialled so far.
Kp = 1 + np.arange(2 * np.max(self.model._state_K))
logger.info("Predicting between K = {0} and K = {1}".format(Kp[0], Kp[-1]))
K, I, I_var, I_lower = self.model._predict_message_length(Kp, N, D, **kwargs)
K_min = K[np.argmin(I)]
logger.info(f"Predicted minimum message length at K = {K_min}")
return (K, I, I_var, I_lower)
class LookaheadFromInitialisationPredictionPolicy(BasePredictionPolicy):
def __init__(self, *args, **kwargs):
super(LookaheadFromInitialisationPredictionPolicy, self).__init__(*args, **kwargs)
def predict(self, y, **kwargs):
"""
Predict the message length only up to the K value that was trialled
during the initialisation procedure.
"""
N, D = y.shape
"""
K_inits = np.logspace(0, np.log10(N/2.0), self.meta["K_init"], dtype=int)
K_max = K_inits[1 + self.model._num_initialisations]
"""
K_max = int(np.ceil(1.5 * [*self.model._results][self.model._num_initialisations - 1]))
Kp = np.arange(1, 1 + K_max).astype(int)
logger.info("Predicting between K = {0} and K = {1}".format(Kp[0], Kp[-1]))
K, I, I_var, I_lower = self.model._predict_message_length(Kp, N, D, **kwargs)
K_min = K[np.argmin(I)]
logger.info(f"Predicted minimum message length at K = {K_min}")
return (K, I, I_var, I_lower)
class NoPredictionPolicy(BasePredictionPolicy):
def predict(self, y, **kwargs):
return None | 26.911392 | 95 | 0.623236 | 280 | 2,126 | 4.539286 | 0.321429 | 0.042486 | 0.04406 | 0.047207 | 0.451613 | 0.451613 | 0.346184 | 0.346184 | 0.346184 | 0.284815 | 0 | 0.013068 | 0.24412 | 2,126 | 79 | 96 | 26.911392 | 0.777847 | 0.062559 | 0 | 0.529412 | 0 | 0 | 0.117092 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.176471 | false | 0 | 0.088235 | 0.029412 | 0.470588 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6f0c204d86ffc44292e3d7658bed03896f0bc41 | 818 | py | Python | binarytree/q16.py | pengfei-chen/algorithm_qa | c2ccdcb77004e88279d61e4e433ee49527fc34d6 | [
"MIT"
] | 79 | 2018-03-27T12:37:49.000Z | 2022-01-21T10:18:17.000Z | binarytree/q16.py | pengfei-chen/algorithm_qa | c2ccdcb77004e88279d61e4e433ee49527fc34d6 | [
"MIT"
] | null | null | null | binarytree/q16.py | pengfei-chen/algorithm_qa | c2ccdcb77004e88279d61e4e433ee49527fc34d6 | [
"MIT"
] | 27 | 2018-04-08T03:07:06.000Z | 2021-10-30T00:01:50.000Z | """
Problem: given a sorted array sortArr known to contain no duplicate values,
build a height-balanced binary search tree from it such that the tree's
in-order traversal matches sortArr.
"""
from binarytree.toolcls import Node
from binarytree.q3 import PrintTree
class ReconstructBalancedBST:
@classmethod
def reconstruct(cls, arr):
if len(arr) == 0 or arr is None:
return None
return cls.reconstruct_detail(arr, 0, len(arr)-1)
@classmethod
def reconstruct_detail(cls, arr, start, end):
if start > end:
return None
pos = (start + end)//2
node = Node(arr[pos])
node.left = cls.reconstruct_detail(arr, start, pos-1)
node.right = cls.reconstruct_detail(arr, pos+1, end)
return node
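# For example, arr = [1, 2, 3, 4, 5, 6, 7, 8, 9] puts the midpoint 5 at the
# root, builds the left subtree from [1..4] and the right from [6..9], so the
# in-order traversal reproduces arr.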
if __name__ == '__main__':
arr = [1, 2, 3, 4, 5, 6, 7, 8, 9]
PrintTree.print_tree(ReconstructBalancedBST.reconstruct(arr)) | 23.371429 | 65 | 0.643032 | 102 | 818 | 5.029412 | 0.45098 | 0.132554 | 0.116959 | 0.134503 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025974 | 0.246944 | 818 | 35 | 65 | 23.371429 | 0.806818 | 0.09291 | 0 | 0.2 | 0 | 0 | 0.010884 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.1 | 0 | 0.45 | 0.05 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6f1580c564937d6e53353810c7dfb2323b8dd6c | 1,988 | py | Python | tests/settings.py | acv-auctions/manifold | b798b0dd6c2f96395d47f700fd2ed0451b80331b | [
"Apache-2.0"
] | 2 | 2018-06-08T10:14:40.000Z | 2018-06-09T10:49:17.000Z | tests/settings.py | acv-auctions/manifold | b798b0dd6c2f96395d47f700fd2ed0451b80331b | [
"Apache-2.0"
] | 1 | 2019-01-15T18:38:51.000Z | 2019-01-15T18:38:51.000Z | tests/settings.py | acv-auctions/manifold | b798b0dd6c2f96395d47f700fd2ed0451b80331b | [
"Apache-2.0"
] | null | null | null | """
Copyright 2018 ACV Auctions
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
# pylint: disable=W0401,W0614
import sys
from django.conf.global_settings import *
DEBUG = True
DEBUG_PROPAGATE_EXCEPTIONS = True
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': ':memory:'
}
}
SECRET_KEY = 'not very secret in tests'
INSTALLED_APPS = [
'manifold',
'tests.example_app'
]
ALLOWED_HOSTS = [
'*'
]
# HTTP Settings
WSGI_APPLICATION = 'manifold.http.application'
ROOT_URLCONF = 'manifold.http'
MANIFOLD = {
'default': {
'file': 'tests/example.thrift',
'service': 'ExampleService'
},
'non-default': {
'file': 'tests/secondary.thrift',
'service': 'DummyService',
'host': '127.0.0.1',
'port': 9090
}
}
# Logging Configuration
LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'formatters': {
'verbose': {
'format': '%(asctime)s %(levelname)s [%(name)s:%(lineno)s]'
' %(module)s %(process)d %(thread)d %(message)s'
}
},
'handlers': {
'default': {
'level': 'DEBUG' if DEBUG else 'INFO',
'class': 'logging.StreamHandler',
'stream': sys.stdout,
'formatter': 'verbose',
},
},
'loggers': {
'': {
'handlers': ['default'],
'level': 'DEBUG' if DEBUG else 'INFO',
'propagate': True
},
}
}
| 24.243902 | 72 | 0.598089 | 221 | 1,988 | 5.330317 | 0.628959 | 0.050934 | 0.022071 | 0.027165 | 0.067912 | 0.067912 | 0.067912 | 0.067912 | 0 | 0 | 0 | 0.019191 | 0.266097 | 1,988 | 81 | 73 | 24.54321 | 0.788211 | 0.309859 | 0 | 0.087719 | 0 | 0 | 0.391336 | 0.102056 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.035088 | 0 | 0.035088 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6f16c8c9f6389943041ded44b395967b6721d7b | 21,562 | py | Python | Source/osdr_ml_modeler/optimizer.py | ArqiSoft/ml-services | 0c9beacc4a98c3f55ed56969a8b7eb84c4209c21 | [
"MIT"
] | null | null | null | Source/osdr_ml_modeler/optimizer.py | ArqiSoft/ml-services | 0c9beacc4a98c3f55ed56969a8b7eb84c4209c21 | [
"MIT"
] | null | null | null | Source/osdr_ml_modeler/optimizer.py | ArqiSoft/ml-services | 0c9beacc4a98c3f55ed56969a8b7eb84c4209c21 | [
"MIT"
] | 2 | 2018-12-22T13:46:31.000Z | 2019-06-18T16:46:08.000Z | import csv
import json
import os
import shutil
from collections import OrderedDict
from time import time
import numpy
import redis
from sklearn import model_selection
from MLLogger import BaseMLLogger
from exception_handler import MLExceptionHandler
from general_helper import (
get_oauth, make_stream_from_sdf, make_directory, get_multipart_object,
post_data_to_blob, fetch_token
)
from learner.algorithms import (
CLASSIFIER, REGRESSOR, model_type_by_code, NAIVE_BAYES, ELASTIC_NETWORK,
TRAINER_CLASS, ALGORITHM, CODES
)
from mass_transit.MTMessageProcessor import PureConsumer, PurePublisher
from mass_transit.mass_transit_constants import (
OPTIMIZE_TRAINING, TRAINING_OPTMIZATION_FAILED, TRAINING_OPTIMIZED
)
from messages import training_optimization_failed, model_training_optimized
from processor import sdf_to_csv
os.environ['OAUTHLIB_INSECURE_TRANSPORT'] = '1'
BLOB_URL = '{}/blobs'.format(os.environ['OSDR_BLOB_SERVICE_URL'])
REDIS_CLIENT = redis.StrictRedis(host='redis', db=0)
TEMP_FOLDER = os.environ['OSDR_TEMP_FILES_FOLDER']
LOGGER = BaseMLLogger(
log_name='logger', log_file_name='sds-ml-training-optimizer')
try:
EXPIRATION_TIME = int(os.environ['REDIS_EXPIRATION_TIME_SECONDS'])
except KeyError:
EXPIRATION_TIME = 12*60*60 # 12 hours
    LOGGER.error('Redis expiration time not defined. Set it to 12 hours')
OPTIMIZER_FORMATTER = '{:.04f}'.format
# optimizer fingerprint sets
# the optimal set will be found in this list and used later for training the model
# all other sets will be shown in the optimizer report and in the training report
BASE_FINGERPRINTS = [
[
{'Type': 'DESC'}, {'Type': 'AVALON', 'Size': 512},
{'Type': 'ECFP', 'Radius': 3, 'Size': 128},
{'Type': 'FCFC', 'Radius': 2, 'Size': 256}
], [
{'Type': 'MACCS'}, {'Type': 'AVALON', 'Size': 256},
{'Type': 'ECFP', 'Radius': 4, 'Size': 1024},
{'Type': 'FCFC', 'Radius': 4, 'Size': 256}
], [
{'Type': 'DESC'}, {'Type': 'AVALON', 'Size': 256},
{'Type': 'ECFP', 'Radius': 4, 'Size': 512},
{'Type': 'FCFC', 'Radius': 2, 'Size': 128}
], [
{'Type': 'DESC'}, {'Type': 'MACCS'},
{'Type': 'ECFP', 'Radius': 2, 'Size': 128},
{'Type': 'FCFC', 'Radius': 4, 'Size': 256}
], [
{'Type': 'DESC'}, {'Type': 'ECFP', 'Radius': 3, 'Size': 1024},
{'Type': 'FCFC', 'Radius': 4, 'Size': 256}
], [
{'Type': 'DESC'}, {'Type': 'ECFP', 'Radius': 2, 'Size': 512},
{'Type': 'FCFC', 'Radius': 2, 'Size': 512}
], [
{'Type': 'DESC'}, {'Type': 'MACCS'},
{'Type': 'ECFP', 'Radius': 2, 'Size': 1024},
{'Type': 'FCFC', 'Radius': 3, 'Size': 512}
], [
{'Type': 'ECFP', 'Radius': 2, 'Size': 512},
{'Type': 'FCFC', 'Radius': 3, 'Size': 128}
], [
{'Type': 'DESC'}, {'Type': 'MACCS'},
{'Type': 'ECFP', 'Radius': 3, 'Size': 512},
{'Type': 'FCFC', 'Radius': 2, 'Size': 128}
], [
{'Type': 'DESC'}, {'Type': 'AVALON', 'Size': 128},
{'Type': 'ECFP', 'Radius': 3, 'Size': 512}
], [
{'Type': 'DESC'}, {'Type': 'AVALON', 'Size': 128},
{'Type': 'ECFP', 'Radius': 2, 'Size': 128},
{'Type': 'FCFC', 'Radius': 2, 'Size': 128}
], [
{'Type': 'DESC'}, {'Type': 'MACCS'}, {'Type': 'AVALON', 'Size': 512},
{'Type': 'FCFC', 'Radius': 4, 'Size': 128}
]
]
@MLExceptionHandler(
logger=LOGGER, fail_publisher=TRAINING_OPTMIZATION_FAILED,
fail_message_constructor=training_optimization_failed
)
def find_optimal_parameters(body):
"""
    Pika callback function used by the ml optimizer.
    Finds the optimal training fingerprints set for the input dataset,
    using only 1000 (by default) or fewer structures from it.
    Sends the overall optimization result to Redis, for later use in the
    ml training report.
:param body: RabbitMQ MT message's body
:type body: dict
"""
oauth = get_oauth()
# check input methods
if not body['Methods']:
raise ValueError('Empty Methods')
# calculate metrics for each fingerprints set
metrics, target_metric = fingerprints_grid_search(
oauth, body, BASE_FINGERPRINTS)
    # send all metrics to redis;
    # they are later added to the training report
REDIS_CLIENT.setex(
'optimizer_metrics_{}'.format(body['CorrelationId']), EXPIRATION_TIME,
json.dumps(metrics)
)
# find best fingerprints set
optimal_fingerprints = sorted(
metrics.values(), key=lambda value: value['metrics'][target_metric],
reverse=True
)[0]['fptype']
# set other default 'optimal' parameters for training model
body['SubSampleSize'] = 1.0
body['TestDataSize'] = 0.3
body['Scaler'] = 'MinMax'
body['KFold'] = 5
body['Fingerprints'] = optimal_fingerprints
body['OptimizationMethod'] = 'default'
body['NumberOfIterations'] = 100
# make optimizer metrics csv and post it to blob storage
    formatted_metrics = format_optimizer_metrics(
        metrics, model_type_by_code(body['Methods'][0].lower()))
csv_path = '{}/ml_optimizer/{}/optimizing.csv'.format(
TEMP_FOLDER, body['CorrelationId'])
write_optimized_metrics_to_csv(formatted_metrics, csv_path)
multipart_model = get_multipart_object(
body, csv_path, 'application/x-spss-sav',
additional_fields={'ParentId': body['TargetFolderId']}
)
# send optimizer metrics csv file to blob storage
fetch_token(oauth)
response = post_data_to_blob(oauth, multipart_model)
LOGGER.info('Optimizer csv status code: {}'.format(response.status_code))
# send best fingerprints set and 'optimal' parameters to training model
training_optimized = model_training_optimized(body)
training_optimized_message_publisher = PurePublisher(TRAINING_OPTIMIZED)
training_optimized_message_publisher.publish(training_optimized)
# clear current optimization folder
shutil.rmtree(
'{}/ml_optimizer/{}'.format(TEMP_FOLDER, body['CorrelationId']),
ignore_errors=True
)
def write_optimized_metrics_to_csv(metrics, csv_file_path):
    """Transpose formatted per-fingerprint metrics into csv rows and write them."""
    csv_formatted_metrics = OrderedDict()
for key, value in metrics.items():
if 'fingerprints' not in csv_formatted_metrics.keys():
csv_formatted_metrics['fingerprints'] = dict()
column_name = key
if key == '0':
column_name = 'Fingerprints set'
csv_formatted_metrics['fingerprints'][column_name] = column_name
for sub_key, sub_value in value.items():
if sub_key not in csv_formatted_metrics.keys():
csv_formatted_metrics[sub_key] = dict()
csv_formatted_metrics[sub_key][column_name] = sub_value
with open(csv_file_path, 'w') as f:
w = csv.DictWriter(f, csv_formatted_metrics.keys())
subkeys = csv_formatted_metrics['fingerprint_processing_time'].keys()
for row_key in subkeys:
row_dict = dict()
for key, value in csv_formatted_metrics.items():
row_dict[key] = value[row_key]
w.writerow(row_dict)
def fingerprints_as_string(fingerprints):
"""
    Format a fingerprints list as a human-readable string.
:param fingerprints: fingerprints set as list
:type fingerprints: list
:return: fingerprints set as string
:rtype: str
"""
all_fingerprints_string = []
# loop all fingerprints values in list
for fingerprint in fingerprints:
fingerprint_string = '{}'.format(fingerprint['Type'])
if 'Radius' in fingerprint.keys():
fingerprint_string += ' {} radius'.format(fingerprint['Radius'])
if 'Size' in fingerprint.keys():
fingerprint_string += ' {} size'.format(fingerprint['Size'])
all_fingerprints_string.append(fingerprint_string)
return ', '.join(all_fingerprints_string)
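# For example:
# fingerprints_as_string([{'Type': 'DESC'}, {'Type': 'ECFP', 'Radius': 2, 'Size': 512}])
# returns 'DESC, ECFP 2 radius 512 size'.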
def fingerprints_grid_search(
oauth, body, fingerprints, subsample_size=1000
):
"""
    Search for the optimal combination of fingerprints.
    subsample_size molecules are extracted from the initial dataset and used
    to train multiple models with varying combinations of fingerprints.
    :param oauth: OAuth2 session used to access the blob storage
    :param body: RabbitMQ MT message's body
:param fingerprints: list of fingerprints' combinations
:param subsample_size: number of objects that will be used to train model
:return: dict with fingerprints' metrics and statistics
"""
# make folder for current optimization
optimizer_folder = '{}/ml_optimizer/{}'.format(
TEMP_FOLDER, body['CorrelationId'])
make_directory(optimizer_folder)
# download and save sdf file
stream = make_stream_from_sdf(body, oauth)
filename = body['SourceFileName']
temporary_sdf_filename = '{}/tmp_{}.sdf'.format(optimizer_folder, filename)
temporary_sdf_file = open(temporary_sdf_filename, 'wb')
temporary_sdf_file.write(stream.getvalue())
temporary_sdf_file.close()
# extract sample (which have subsample_size) from source dataset
prediction_target = body['ClassName']
mode = model_type_by_code(body['Methods'][0].lower())
sample_file_name = extract_sample_dataset(
input_file_name=temporary_sdf_filename, subsample_size=subsample_size,
prediction_target=prediction_target, mode=mode
)
# define classifier and regressor models for optimizing
if mode == CLASSIFIER:
model_code = NAIVE_BAYES
target_metric = 'test__AUC'
elif mode == REGRESSOR:
model_code = ELASTIC_NETWORK
target_metric = 'test__R2'
else:
        raise ValueError('Unknown mode: {}'.format(mode))
# loop all base fingerprints sets to find best set
metrics = dict()
for fingerprint_number, fptype in enumerate(fingerprints):
        # build the dataframe depending on the fingerprint set
        # and the model type (classifier or regressor)
start_fps_processing = time()
if mode == CLASSIFIER:
dataframe = sdf_to_csv(
sample_file_name, fptype=fptype,
class_name_list=prediction_target
)
elif mode == REGRESSOR:
dataframe = sdf_to_csv(
sample_file_name, fptype=fptype,
value_name_list=prediction_target
)
else:
raise ValueError('Unknown mode: {}'.format(mode))
fps_processing_time_seconds = time() - start_fps_processing
# train model
start_current_training = time()
classic_classifier = ALGORITHM[TRAINER_CLASS][model_code](
sample_file_name, prediction_target, dataframe, subsample_size=1.0,
test_set_size=0.2, seed=0, fptype=fptype, scale='minmax',
n_split=1, output_path=optimizer_folder
)
classic_classifier.train_model(CODES[model_code])
current_training_time_seconds = time() - start_current_training
# add formatted model's metrics and times to heap
formatted_metrics = format_metrics(
classic_classifier.metrics[model_code]['mean'])
metrics.update({
fingerprint_number: {
'fptype': fptype,
'metrics': formatted_metrics,
'fingerprint_processing_time': fps_processing_time_seconds,
'prediction_time': current_training_time_seconds
}
})
return metrics, target_metric
def extract_sample_dataset(
input_file_name, subsample_size, prediction_target, mode
):
"""
    Generate a subsampled dataset and write it to a corresponding file.
:param input_file_name: name of input file
:param subsample_size:
number of structures that will be used to train model
:param prediction_target: name of the target variable
:param mode: classification or regression
:return: name of subsampled file
"""
prediction_target = '<' + prediction_target + '>'
valid_list = extract_sample_mols(
input_file_name, mode, subsample_size=subsample_size,
prediction_target=prediction_target
)
sample_file_name = write_sample_sdf(input_file_name, valid_list)
return sample_file_name
def write_sample_sdf(input_file_name, valid_list):
"""
    Write a temporary file containing the subset of pre-selected structures.
:param input_file_name: name of input file
:param valid_list: list of indexes of pre-selected structures
:return: name of subsampled file
"""
sample_file_name = '{}_sample.sdf'.format(input_file_name.split('.')[0])
sample_file = open(sample_file_name, 'w')
mol = []
i = 0
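    # '$$$$' delimits molecule records in an SDF file; a record is copied to
    # the sample file only if its index was pre-selected in valid_list.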
for line in open(input_file_name):
mol.append(line)
        if line[:4] == '$$$$':
            # Check membership before incrementing so indexes stay 0-based,
            # matching the indexes produced by extract_sample_mols.
            if i in valid_list:
                for mol_line in mol:
                    sample_file.write(mol_line)
                valid_list.remove(i)
            i += 1
            mol = []
sample_file.close()
return sample_file_name
def extract_sample_mols(
input_file_name, mode, prediction_target='', n_bins=20,
critical_ratio=0.05, subsample_size=1000,
):
"""
    Generate a list of indexes. The subset of structures with the
    corresponding indexes will be used for the subsequent model training.
:param input_file_name: name of input file
:param mode: classification or regression
:param prediction_target: name of the target variable
:param n_bins: number of bins that will be used to split dataset
(in a stratified manner) in regression mode
:param critical_ratio: minimal fraction of minor class objects.
If actual value is less than critical_ratio,
major/minor classes ratio will be changed to critical_ratio
:param subsample_size:
number of structures that will be used to train model
:return: list of indexes
"""
counter = 0
values_list = list()
mol_numbers = list()
with open(input_file_name, 'r') as infile:
for line in infile:
if prediction_target in line:
values_list.append(next(infile, '').strip())
if line[:4] == '$$$$':
mol_numbers.append(counter)
counter += 1
mol_numbers = numpy.array(mol_numbers)
if mol_numbers.size <= subsample_size:
valid_list = mol_numbers
else:
if mode == CLASSIFIER:
temp_values_list = []
for value in values_list:
try:
temp_value = value.upper()
if temp_value == 'TRUE':
temp_values_list.append(1)
elif temp_value == 'FALSE':
temp_values_list.append(0)
else:
temp_values_list.append(int(temp_value))
except (AttributeError, ValueError):
temp_values_list.append(None)
values_list = numpy.array(temp_values_list, dtype=int)
true_class_indexes = numpy.argwhere(values_list == 1).flatten()
false_class_indexes = numpy.argwhere(values_list == 0).flatten()
if true_class_indexes.size > false_class_indexes.size:
major_class_indexes = true_class_indexes
minor_class_indexes = false_class_indexes
else:
major_class_indexes = false_class_indexes
minor_class_indexes = true_class_indexes
if minor_class_indexes.size < subsample_size * critical_ratio:
new_num_major_indexes = subsample_size - minor_class_indexes.size
valid_list = numpy.hstack((
minor_class_indexes,
numpy.random.choice(
major_class_indexes, new_num_major_indexes, replace=False
)
))
else:
if minor_class_indexes.size/mol_numbers.size > critical_ratio:
train_fraction = subsample_size / mol_numbers.size
new_num_minor_indexes = train_fraction * minor_class_indexes.size
new_num_major_indexes = train_fraction * major_class_indexes.size
valid_list = (numpy.hstack((
numpy.random.choice(
minor_class_indexes, round(new_num_minor_indexes),
replace=False
),
numpy.random.choice(
major_class_indexes, round(new_num_major_indexes),
replace=False
)
)))
else:
valid_list = numpy.hstack((
numpy.random.choice(
minor_class_indexes,
round(subsample_size * critical_ratio),
replace=False
),
numpy.random.choice(
major_class_indexes,
round(subsample_size * (1 - critical_ratio)),
replace=False
)
))
elif mode == REGRESSOR:
values_list = numpy.array(values_list, dtype=float)
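            # Stratify the continuous target: bin values into n_bins
            # percentile buckets, then split proportionally per bucket.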
percentiles = numpy.percentile(
values_list, numpy.linspace(0, 100, n_bins + 1))
falls_into = numpy.searchsorted(percentiles, values_list)
falls_into[falls_into == 0] = 1
x_train, x_test, y_train, y_test = model_selection.train_test_split(
mol_numbers, falls_into, stratify=falls_into,
train_size=subsample_size
)
valid_list = x_train
else:
raise ValueError('Unknown mode: {}'.format(mode))
return valid_list.tolist()
def format_metrics(metrics):
"""
    Return a dict with formatted metrics keys: each tuple-of-strings key
    becomes a single string joined with a dunder ('__').
:param metrics: unformatted metrics. keys looks like ('test', 'AUC')
:type metrics: dict
:return: formatted metrics. keys looks like 'test__AUC'
:rtype: dict
"""
formatted_metrics = dict()
for key, value in metrics.items():
formatted_metrics['{}__{}'.format(key[0], key[1])] = value
return formatted_metrics
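# For example: format_metrics({('test', 'AUC'): 0.91}) returns {'test__AUC': 0.91}.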
def format_optimizer_metrics(optimal_metrics_dict, model_type):
    """Format per-fingerprint optimizer metrics as csv-ready table columns."""
# prepare metrics table headers
if model_type == CLASSIFIER:
formatted_metrics = OrderedDict({
'0': {
'fingerprint_processing_time': 'FP computation time, sec',
'test__ACC': 'Test ACC',
'test__AUC': 'Test AUC',
'test__Matthews_corr': 'Test Matthews corr coeff',
'prediction_time': 'training time, sec'
}
})
elif model_type == REGRESSOR:
formatted_metrics = OrderedDict({
'0': {
'fingerprint_processing_time': 'FP computation time, sec',
'test__R2': 'Test R2',
'test__RMSE': 'Test RMSE',
'prediction_time': 'training time, sec'
}
})
else:
raise ValueError('Unknown model type: {}'.format(model_type))
    # fill metrics table values, matching the header columns
if model_type == CLASSIFIER:
for model_number, model_data in optimal_metrics_dict.items():
fingerprints_string = fingerprints_as_string(model_data['fptype'])
formatted_metrics[fingerprints_string] = {
'fingerprint_processing_time': OPTIMIZER_FORMATTER(
model_data['fingerprint_processing_time']),
'test__ACC': OPTIMIZER_FORMATTER(
model_data['metrics']['test__ACC']),
'test__AUC': OPTIMIZER_FORMATTER(
model_data['metrics']['test__AUC']),
'test__Matthews_corr': OPTIMIZER_FORMATTER(
model_data['metrics']['test__Matthews_corr']),
'prediction_time': OPTIMIZER_FORMATTER(
model_data['prediction_time'])
}
elif model_type == REGRESSOR:
for model_number, model_data in optimal_metrics_dict.items():
fingerprints_string = fingerprints_as_string(model_data['fptype'])
formatted_metrics[fingerprints_string] = {
'fingerprint_processing_time': OPTIMIZER_FORMATTER(
model_data['fingerprint_processing_time']),
'test__R2': OPTIMIZER_FORMATTER(
model_data['metrics']['test__R2']),
'test__RMSE': OPTIMIZER_FORMATTER(
model_data['metrics']['test__RMSE']),
'prediction_time': OPTIMIZER_FORMATTER(
model_data['prediction_time'])
}
else:
raise ValueError('Unknown model type: {}'.format(model_type))
return formatted_metrics
if __name__ == '__main__':
try:
PREFETCH_COUNT = int(
os.environ['OSDR_RABBIT_MQ_ML_OPTIMIZER_PREFETCH_COUNT'])
except KeyError:
PREFETCH_COUNT = 1
LOGGER.error('Prefetch count not defined. Set it to 1')
OPTIMIZE_TRAINING['event_callback'] = find_optimal_parameters
TRAIN_MODELS_COMMAND_CONSUMER = PureConsumer(
OPTIMIZE_TRAINING, infinite_consuming=True,
prefetch_count=PREFETCH_COUNT
)
TRAIN_MODELS_COMMAND_CONSUMER.start_consuming()
| 37.240069 | 85 | 0.621927 | 2,402 | 21,562 | 5.317652 | 0.169858 | 0.030063 | 0.012213 | 0.019025 | 0.339779 | 0.25593 | 0.216785 | 0.185783 | 0.154466 | 0.086119 | 0 | 0.012001 | 0.27734 | 21,562 | 578 | 86 | 37.304498 | 0.807727 | 0.163575 | 0 | 0.284314 | 0 | 0 | 0.126636 | 0.02465 | 0 | 0 | 0 | 0 | 0 | 1 | 0.022059 | false | 0 | 0.041667 | 0 | 0.080882 | 0.080882 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6f1f913f6a078effa24dda0c369361db5605649 | 1,415 | py | Python | tutorials/preprocessing/plot_extract_gfp_peaks.py | mscheltienne/pycrostates | be87adf69c94b2b179064f337acd8a49d01c305d | [
"BSD-3-Clause"
] | 1 | 2021-12-14T09:58:57.000Z | 2021-12-14T09:58:57.000Z | tutorials/preprocessing/plot_extract_gfp_peaks.py | mscheltienne/pycrostates | be87adf69c94b2b179064f337acd8a49d01c305d | [
"BSD-3-Clause"
] | null | null | null | tutorials/preprocessing/plot_extract_gfp_peaks.py | mscheltienne/pycrostates | be87adf69c94b2b179064f337acd8a49d01c305d | [
"BSD-3-Clause"
] | null | null | null | """
Global field power peaks extraction
===================================
This example demonstrates how to extract global field power (GFP) peaks from an EEG recording.
"""
#%%
# We start by loading some example data:
import mne
from mne.io import read_raw_eeglab
from pycrostates.datasets import lemon
raw_fname = lemon.load_data(subject_id='010004', condition='EC')
raw = read_raw_eeglab(raw_fname, preload=True)
raw.pick('eeg')
raw.set_eeg_reference('average')
#%%
# We can then use the :func:`~pycrostates.preprocessing.extract_gfp_peaks`
# function to extract samples with highest global field power.
# The ``min_peak_distance`` parameter sets the minimum number of samples
# between two selected peaks.
from pycrostates.preprocessing import extract_gfp_peaks
raw_peaks = extract_gfp_peaks(raw, min_peak_distance=3)
raw_peaks
#%%
#
# .. warning::
#
#    The returned object will always be a :class:`~mne.io.Raw`, but it should
#    not be used for any purpose other than fitting a clustering algorithm. To
#    avoid misuse of this object, we have deliberately set its sampling
#    frequency to -1.
raw_peaks.info['sfreq']
#%%
# Note that this function can also be used on :class:`~mne.Epochs` but
# will always return a :class:`~mne.io.Raw` instance.
epochs = mne.make_fixed_length_epochs(raw, duration=2, preload=True)
epochs_peaks = extract_gfp_peaks(epochs, min_peak_distance=3)
epochs_peaks
| 28.3 | 93 | 0.744876 | 214 | 1,415 | 4.775701 | 0.509346 | 0.039139 | 0.058708 | 0.035225 | 0.027397 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009039 | 0.139929 | 1,415 | 49 | 94 | 28.877551 | 0.830731 | 0.582332 | 0 | 0 | 0 | 0 | 0.040636 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.285714 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6fa1d12f414932e8c4b3f796cf576788abf842d | 2,284 | py | Python | setup.py | bluedynamics/souper.plone | 64afe8dc1f87f45c2e96e305f5fff2104ac21007 | [
"BSD-3-Clause"
] | 2 | 2015-05-05T15:16:44.000Z | 2019-07-09T12:53:52.000Z | setup.py | bluedynamics/souper.plone | 64afe8dc1f87f45c2e96e305f5fff2104ac21007 | [
"BSD-3-Clause"
] | 5 | 2015-06-02T06:42:00.000Z | 2021-02-13T15:31:29.000Z | setup.py | bluedynamics/souper.plone | 64afe8dc1f87f45c2e96e305f5fff2104ac21007 | [
"BSD-3-Clause"
] | 3 | 2015-05-05T15:17:25.000Z | 2018-10-12T11:10:55.000Z | from setuptools import setup, find_packages
import sys
import os
version = '1.3.2.dev0'
shortdesc = \
"Plone Souper Integration: Container for many lightweight queryable Records"
longdesc = open(os.path.join(os.path.dirname(__file__), 'README.rst')).read()
longdesc += open(os.path.join(os.path.dirname(__file__), 'CHANGES.rst')).read()
longdesc += open(os.path.join(os.path.dirname(__file__), 'LICENSE.rst')).read()
setup(name='souper.plone',
version=version,
description=shortdesc,
long_description=longdesc,
classifiers=[
'Development Status :: 5 - Production/Stable',
'License :: OSI Approved :: BSD License',
'Operating System :: OS Independent',
'Programming Language :: Python',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Framework :: Zope :: 2',
'Framework :: Zope :: 4',
'Framework :: Plone :: 4.3',
'Framework :: Plone :: 5.0',
'Framework :: Plone :: 5.1',
'Framework :: Plone :: 5.2',
'Framework :: Plone :: Addon',
'Intended Audience :: Developers',
'Topic :: Software Development :: Libraries :: Python Modules'
], # Get strings from http://pypi.python.org/pypi?%3Aaction=list_classifiers
keywords='container data record catalog',
author='BlueDynamics Alliance',
author_email='dev@bluedynamics.com',
url='http://pypi.python.org/pypi/souper.plone',
license='BSD',
packages=find_packages('src'),
package_dir={'': 'src'},
namespace_packages=['souper'],
include_package_data=True,
zip_safe=False,
install_requires=[
'setuptools',
'Products.CMFPlone',
'souper',
],
extras_require={
'test': [
'plone.app.testing',
'interlude',
'plone.api',
"zopyx.txng3.core ; python_version<'3'",
],
},
entry_points="""
# -*- Entry points: -*-
[z3c.autoinclude.plugin]
target = plone
""",
)
| 35.138462 | 83 | 0.572242 | 228 | 2,284 | 5.618421 | 0.495614 | 0.028103 | 0.117096 | 0.042155 | 0.135051 | 0.102264 | 0.102264 | 0.102264 | 0.102264 | 0.071819 | 0 | 0.016354 | 0.277145 | 2,284 | 64 | 84 | 35.6875 | 0.75954 | 0.031086 | 0 | 0.048387 | 0 | 0 | 0.469923 | 0.010855 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.048387 | 0 | 0.048387 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a6ffa89ddaaff36c4e4467a85ca26b7f3d836ded | 3,118 | py | Python | tests/entities/test_member.py | rennerocha/asociate-old | 75e946a1db909299b1442a4bce3b78a2ddf2aafb | [
"MIT"
] | null | null | null | tests/entities/test_member.py | rennerocha/asociate-old | 75e946a1db909299b1442a4bce3b78a2ddf2aafb | [
"MIT"
] | null | null | null | tests/entities/test_member.py | rennerocha/asociate-old | 75e946a1db909299b1442a4bce3b78a2ddf2aafb | [
"MIT"
] | null | null | null | import uuid
import pytest
from asociate.entities.association import Association
from asociate.entities.member import Member
pytestmark = [
pytest.mark.entities,
]
def test_member_init():
member = Member(
first_name="Arthur",
last_name="Dent",
email="arthur.dent@deepthought.com",
phone="912340042",
)
assert member.first_name == "Arthur"
assert member.last_name == "Dent"
assert member.email == "arthur.dent@deepthought.com"
assert member.phone == "912340042"
def test_member_init_from_dict():
member_dict = {
"first_name": "Arthur",
"last_name": "Dent",
"email": "arthur.dent@deepthought.com",
"phone": "912340042",
}
member = Member.from_dict(member_dict)
assert member.first_name == "Arthur"
assert member.last_name == "Dent"
assert member.email == "arthur.dent@deepthought.com"
assert member.phone == "912340042"
def test_member_full_name():
member_dict = {
"first_name": "Arthur",
"last_name": "Dent",
"email": "arthur.dent@deepthought.com",
"phone": "912340042",
}
member = Member.from_dict(member_dict)
assert member.full_name == "Arthur Dent"
def test_member_repr():
member = Member(
first_name="Arthur",
last_name="Dent",
email="arthur.dent@deepthought.com",
phone="912340042",
)
assert repr(member) == f"<Member: {member.first_name} {member.last_name}>"
def test_member_join_association(association, member):
member.join(association)
assert member in association.members
def test_error_if_try_join_not_valid_association(member):
with pytest.raises(ValueError) as excinfo:
member.join("not_a_valid_association_instance")
assert "Expected Association instance." in str(excinfo.value)
def test_member_model_to_dict():
member_dict = {
"first_name": "Arthur",
"last_name": "Dent",
"email": "arthur.dent@deepthought.com",
"phone": "912340042",
"active": False,
}
member = Member.from_dict(member_dict)
assert member.to_dict() == member_dict
def test_member_comparison():
member_dict = {
"first_name": "Arthur",
"last_name": "Dent",
"email": "arthur.dent@deepthought.com",
"phone": "912340042",
}
member_1 = Member.from_dict(member_dict)
member_2 = Member.from_dict(member_dict)
assert member_1 == member_2
def test_member_can_join_more_than_one_association(association, member):
code = uuid.uuid4()
association_1_dict = {
"code": code,
"name": "Association 1",
"slug": "association_1",
}
association_1 = Association.from_dict(association_1_dict)
code = uuid.uuid4()
association_2_dict = {
"code": code,
"name": "Association 2",
"slug": "association_1",
}
association_2 = Association.from_dict(association_2_dict)
member.join(association_1)
member.join(association_2)
assert member in association_1.members
assert member in association_2.members
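# A minimal sketch of the Member entity these tests imply -- NOT the
# project's actual implementation; every attribute and method below is
# inferred from the assertions above, so treat it as illustrative only.
class MemberSketch:
    def __init__(self, first_name, last_name, email, phone, active=False):
        self.first_name = first_name
        self.last_name = last_name
        self.email = email
        self.phone = phone
        self.active = active

    @classmethod
    def from_dict(cls, data):
        # Mirror of to_dict(): build an instance straight from a plain dict.
        return cls(**data)

    def to_dict(self):
        return {
            "first_name": self.first_name,
            "last_name": self.last_name,
            "email": self.email,
            "phone": self.phone,
            "active": self.active,
        }

    @property
    def full_name(self):
        return f"{self.first_name} {self.last_name}"

    def __eq__(self, other):
        return (isinstance(other, MemberSketch)
                and self.to_dict() == other.to_dict())

    def __repr__(self):
        return f"<Member: {self.first_name} {self.last_name}>"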
| 25.349593 | 78 | 0.653945 | 364 | 3,118 | 5.340659 | 0.164835 | 0.08642 | 0.053498 | 0.106996 | 0.511317 | 0.471193 | 0.471193 | 0.452675 | 0.43107 | 0.43107 | 0 | 0.038017 | 0.223861 | 3,118 | 122 | 79 | 25.557377 | 0.765289 | 0 | 0 | 0.505376 | 0 | 0 | 0.220334 | 0.079538 | 0 | 0 | 0 | 0 | 0.172043 | 1 | 0.096774 | false | 0 | 0.043011 | 0 | 0.139785 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4701cb86fa4a2a5e7e7875805246fbc42f864585 | 2,256 | py | Python | manage_data.py | lastone9182/console-keep | 250b49653be9d370a1bb0f1c39c5f853c2eaa47e | [
"MIT"
] | null | null | null | manage_data.py | lastone9182/console-keep | 250b49653be9d370a1bb0f1c39c5f853c2eaa47e | [
"MIT"
] | null | null | null | manage_data.py | lastone9182/console-keep | 250b49653be9d370a1bb0f1c39c5f853c2eaa47e | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from datetime import datetime
class Unit:
def __init__(self, element):
self.annotations = element['annotationsGroup']['annotations']
self.id = element['id']
self.parentId = element['parentId']
self.title = element['title'] if 'title' in element else ''
self.text = element['text'] if 'text' in element else ''
self.sortValue = element['sortValue'] if 'sortValue' in element else 0
self.reminders = element['reminders']
self.type = element['type']
self.to_datetime(element['timestamps'])
def to_datetime(self, timestamps):
result = dict()
for k, v in timestamps.items():
if k != 'kind':
result[k] = datetime.strptime(v, '%Y-%m-%dT%H:%M:%S.%fZ')
self.timestamps = result
class State:
def __init__(self, current):
self.parents = 'root'
self.current = current
class UnitGroup:
def __init__(self, data):
self.data = data
        self.total_lists = list(enumerate(self.gen_lists()))
self.dicts = dict()
def gen_lists(self):
for element in self.data:
yield Unit(element)
def refresh(self, flag):
self.dicts = dict()
idx = 0
for e in self.gen_lists():
if e.parentId == flag:
idx += 1
self.dicts[idx] = e
return idx
def ls(self, flag, **options):
idx = self.refresh(flag)
num = options['num']
if num is not None:
idx = num if idx > num else idx
self.gen_print(idx)
def gen_print(self, num):
for i, e in self.dicts.items():
if i > num:
break
anno_ = e.annotations
anno_query = ''
for w in anno_:
if 'webLink' in w:
w_url = w['webLink']['url']
anno_query = w_url
else:
anno_query = ''
            ts_ = datetime.strftime(e.timestamps['created'], '%Y-%m-%d %I:%M %p')
print('{:<2} {} {:<10s} {:<20s} {:>4s} \n {}'.format(i, ts_, e.title, e.text, e.type, anno_query)) | 31.333333 | 112 | 0.520833 | 278 | 2,256 | 4.104317 | 0.309353 | 0.021034 | 0.028922 | 0.029798 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007422 | 0.343085 | 2,256 | 72 | 112 | 31.333333 | 0.762483 | 0.009309 | 0 | 0.067797 | 0 | 0 | 0.093107 | 0.0094 | 0 | 0 | 0 | 0 | 0 | 1 | 0.135593 | false | 0 | 0.016949 | 0 | 0.220339 | 0.050847 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
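# Hypothetical usage of the classes above (the shape of the 'data' payload
# -- a Keep-style export of note elements -- is inferred from the keys
# accessed in Unit.__init__):
# group = UnitGroup(data=exported_notes)
# group.ls('root', num=10)  # print up to 10 children of the root node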
47024e2477ee4ebf0940cce361da35002ab9f51a | 11,447 | py | Python | models/deoldify.py | wangruohui/DeOldify-OpenMMLab | 798b1b6675bfbc76d976a3cd6c87915c9b32dcdb | [
"MIT"
] | 6 | 2021-11-02T06:20:22.000Z | 2022-02-14T04:08:50.000Z | models/deoldify.py | wangruohui/DeOldify-OpenMMLab | 798b1b6675bfbc76d976a3cd6c87915c9b32dcdb | [
"MIT"
] | null | null | null | models/deoldify.py | wangruohui/DeOldify-OpenMMLab | 798b1b6675bfbc76d976a3cd6c87915c9b32dcdb | [
"MIT"
] | 1 | 2021-11-03T09:44:00.000Z | 2021-11-03T09:44:00.000Z | # Copyright (c) OpenMMLab. All rights reserved.
import numbers
import os.path as osp
import mmcv
import numpy as np
import torch
from mmcv.runner import auto_fp16
from mmedit.core import tensor2img
from mmedit.models.base import BaseModel
from mmedit.models.builder import build_backbone, build_component, build_loss
from mmedit.models.common import set_requires_grad
from mmedit.models.registry import MODELS
@MODELS.register_module()
class DeOldify(BaseModel):
"""DeOldify model for image colorization.
Ref:
https://github.com/jantic/DeOldify
Args:
generator (dict): Config for the generator.
discriminator (dict): Config for the discriminator.
gan_loss (dict): Config for the gan loss.
perceptual_loss (dict): Config for the perceptual loss. Default: None.
train_cfg (dict): Config for training. Default: None.
You may change the training of gan by setting:
`disc_steps`: how many discriminator updates after one generator
update.
`disc_init_steps`: how many discriminator updates at the start of
the training.
These two keys are useful when training with WGAN.
test_cfg (dict): Config for testing. Default: None.
You may change the testing of gan by setting:
`show_input`: whether to show input real images.
pretrained (str): Path for pretrained model. Default: None.
"""
def __init__(self,
generator,
discriminator,
gan_loss,
perceptual_loss=None,
train_cfg=None,
test_cfg=None,
pretrained=None):
super().__init__()
self.train_cfg = train_cfg
self.test_cfg = test_cfg
# generator
self.generator = build_backbone(generator)
# discriminator
self.discriminator = build_component(discriminator)
# losses
assert gan_loss is not None # gan loss cannot be None
self.gan_loss = build_loss(gan_loss)
self.perceptual_loss = build_loss(
perceptual_loss) if perceptual_loss else None
self.disc_steps = 1 if self.train_cfg is None else self.train_cfg.get('disc_steps', 1)
self.disc_init_steps = (0 if self.train_cfg is None else self.train_cfg.get('disc_init_steps', 0))
self.step_counter = 0 # counting training steps
self.show_input = (False if self.test_cfg is None else self.test_cfg.get('show_input', False))
# support fp16
self.fp16_enabled = False
self.init_weights(pretrained)
def init_weights(self, pretrained=None):
"""Initialize weights for the model.
Args:
pretrained (str, optional): Path for pretrained weights. If given
None, pretrained weights will not be loaded. Default: None.
"""
# self.generator.init_weights(pretrained=pretrained)
# self.discriminator.init_weights(pretrained=pretrained)
pass
    def setup(self, img_gray, img_color, meta):
        """Perform necessary pre-processing steps.

        Args:
            img_gray (Tensor): Input gray image.
            img_color (Tensor): Input color image.
            meta (list[dict]): Input meta data.

        Returns:
            Tensor, Tensor, list[str]: The gray/color images, and \
                the image path as the metadata.
        """
        image_gray_real = img_gray
        image_color_real = img_color
        image_path = [v['img_gray_path'] for v in meta]
        return image_gray_real, image_color_real, image_path
@auto_fp16(apply_to=('img_gray', ))
    def forward(self, img_gray, img_color=None, meta=None, test_mode=False,
                **kwargs):
        """Forward function.

        Args:
            img_gray (Tensor): Input gray image.
            img_color (Tensor): Input color image. Default: None.
            meta (list[dict]): Input meta data. Default: None.
            test_mode (bool): Whether in test mode or not. Default: False.
            kwargs (dict): Other arguments.
        """
        if test_mode:
            return self.forward_test(img_gray, img_color, meta, **kwargs)

        return self.forward_train(img_gray, img_color, meta)
def forward_train(self, img_gray, img_color, meta):
"""Forward function for training.
Args:
img_gray (Tensor): Input gray image.
img_color (Tensor): Input color image.
meta (list[dict]): Input meta data.
Returns:
dict: Dict of forward results for training.
"""
# necessary setup
img_gray_real, img_color_real, _ = self.setup(
img_gray, img_color, meta)
img_color_fake = self.generator(img_gray_real)
results = dict(img_gray_real=img_gray_real,
img_color_fake=img_color_fake, img_color_real=img_color_real)
return results
def forward_test(self,
img_gray,
img_color=None,
meta=None,
save_image=False,
save_path=None,
iteration=None):
"""Forward function for testing.
Args:
img_gray (Tensor): Input gray image.
img_color (Tensor): Input color image. Default: None
meta (list[dict]): Input meta data.
save_image (bool, optional): If True, results will be saved as
images. Default: False.
save_path (str, optional): If given a valid str path, the results
will be saved in this path. Default: None.
iteration (int, optional): Iteration number. Default: None.
Returns:
dict: Dict of forward and evaluation results for testing.
"""
img_gray_real, img_color_real = img_gray, img_color
img_color_fake = self.generator(img_gray_real)
results = dict(img_gray=img_gray_real.cpu(), img_color_fake=img_color_fake.cpu())
if img_color_real is not None:
results['img_color_real'] = img_color_real.cpu()
# save image
if save_image:
img_gray_path = meta[0]['img_gray_path']
folder_name = osp.splitext(osp.basename(img_gray_path))[0]
if isinstance(iteration, numbers.Number):
save_path = osp.join(save_path, folder_name,
f'{folder_name}-{iteration + 1:06d}.png')
elif iteration is None:
save_path = osp.join(save_path, f'{folder_name}.png')
else:
raise ValueError('iteration should be number or None, '
f'but got {type(iteration)}')
mmcv.imwrite(tensor2img(img_color_fake), save_path)
return results
def forward_dummy(self, img):
"""Used for computing network FLOPs.
Args:
img (Tensor): Dummy input used to compute FLOPs.
Returns:
Tensor: Dummy output produced by forwarding the dummy input.
"""
out = self.generator(img)
return out
def backward_discriminator(self, outputs):
"""Backward function for the discriminator.
Args:
outputs (dict): Dict of forward results.
Returns:
dict: Loss dict.
"""
# GAN loss for the discriminator
losses = dict()
# conditional GAN
fake_ab = torch.cat(
(outputs['img_gray_real'], outputs['img_color_fake']), 1)
fake_pred = self.discriminator(fake_ab.detach())
losses['loss_gan_d_fake'] = self.gan_loss(
fake_pred, target_is_real=False, is_disc=True)
real_ab = torch.cat(
(outputs['img_gray_real'], outputs['img_color_real']), 1)
real_pred = self.discriminator(real_ab)
losses['loss_gan_d_real'] = self.gan_loss(
real_pred, target_is_real=True, is_disc=True)
loss_d, log_vars_d = self.parse_losses(losses)
loss_d *= 0.5
loss_d.backward()
return log_vars_d
def backward_generator(self, outputs):
"""Backward function for the generator.
Args:
outputs (dict): Dict of forward results.
Returns:
dict: Loss dict.
"""
losses = dict()
# GAN loss for the generator
fake_ab = torch.cat(
            (outputs['img_gray_real'], outputs['img_color_fake']), 1)
fake_pred = self.discriminator(fake_ab)
losses['loss_gan_g'] = self.gan_loss(
fake_pred, target_is_real=True, is_disc=False)
# perceptual loss for the generator
if self.perceptual_loss:
losses['loss_perceptual'] = self.perceptual_loss(outputs['img_color_fake'],
outputs['img_color_real'])
loss_g, log_vars_g = self.parse_losses(losses)
loss_g.backward()
return log_vars_g
def train_step(self, data_batch, optimizer):
"""Training step function.
Args:
data_batch (dict): Dict of the input data batch.
optimizer (dict[torch.optim.Optimizer]): Dict of optimizers for
the generator and discriminator.
Returns:
dict: Dict of loss, information for logger, the number of samples\
and results for visualization.
"""
# data
img_gray = data_batch['img_gray']
img_color = data_batch['img_color']
meta = data_batch['meta']
# forward generator
outputs = self.forward(img_gray, img_color, meta, test_mode=False)
log_vars = dict()
# discriminator
set_requires_grad(self.discriminator, True)
# optimize
optimizer['discriminator'].zero_grad()
log_vars.update(self.backward_discriminator(outputs=outputs))
optimizer['discriminator'].step()
# generator, no updates to discriminator parameters.
if (self.step_counter % self.disc_steps == 0
and self.step_counter >= self.disc_init_steps):
set_requires_grad(self.discriminator, False)
# optimize
optimizer['generator'].zero_grad()
log_vars.update(self.backward_generator(outputs=outputs))
optimizer['generator'].step()
self.step_counter += 1
log_vars.pop('loss', None) # remove the unnecessary 'loss'
results = dict(
log_vars=log_vars,
            num_samples=len(outputs['img_gray_real']),
            results=dict(
                img_gray_real=outputs['img_gray_real'].cpu(),
                img_color_fake=outputs['img_color_fake'].cpu(),
                img_color_real=outputs['img_color_real'].cpu()))
return results
def val_step(self, data_batch, **kwargs):
"""Validation step function.
Args:
data_batch (dict): Dict of the input data batch.
kwargs (dict): Other arguments.
Returns:
dict: Dict of evaluation results for validation.
"""
# data
img_gray = data_batch['img_gray']
img_color = data_batch['img_color']
meta = data_batch['meta']
# forward generator
results = self.forward(img_gray, img_color, meta,
test_mode=True, **kwargs)
return results
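# Illustrative (hypothetical) construction showing the train_cfg/test_cfg
# keys documented in the class docstring; the component configs below are
# placeholders, not project defaults.
# model = DeOldify(
#     generator=dict(type='SomeGenerator'),
#     discriminator=dict(type='SomeDiscriminator'),
#     gan_loss=dict(type='GANLoss', gan_type='vanilla'),
#     train_cfg=dict(disc_steps=1, disc_init_steps=0),
#     test_cfg=dict(show_input=False),
# )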
| 35.006116 | 106 | 0.599983 | 1,365 | 11,447 | 4.810256 | 0.164103 | 0.046299 | 0.019799 | 0.027414 | 0.328815 | 0.269571 | 0.205148 | 0.18474 | 0.175297 | 0.163722 | 0 | 0.003436 | 0.313619 | 11,447 | 326 | 107 | 35.113497 | 0.832252 | 0.353892 | 0 | 0.128571 | 0 | 0 | 0.073329 | 0.003548 | 0 | 0 | 0 | 0 | 0.007143 | 1 | 0.071429 | false | 0.007143 | 0.078571 | 0 | 0.221429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4702b4bd55352e823ec087a35e72ae44e05776e2 | 9,631 | py | Python | btceapi/trade.py | Queeq/btce-api | b9f1045e604e46d89574a960915c51d8076a5c6d | [
"MIT"
] | 1 | 2018-04-16T09:04:37.000Z | 2018-04-16T09:04:37.000Z | btceapi/trade.py | Queeq/btce-api | b9f1045e604e46d89574a960915c51d8076a5c6d | [
"MIT"
] | null | null | null | btceapi/trade.py | Queeq/btce-api | b9f1045e604e46d89574a960915c51d8076a5c6d | [
"MIT"
] | null | null | null | # Copyright (c) 2013 Alan McIntyre
import urllib
import hashlib
import hmac
import warnings
from datetime import datetime
from btceapi import common
from btceapi import keyhandler
class InvalidNonceException(Exception):
def __init__(self, method, expectedNonce, actualNonce):
Exception.__init__(self)
self.method = method
self.expectedNonce = expectedNonce
self.actualNonce = actualNonce
def __str__(self):
return "Expected a nonce greater than %d" % self.expectedNonce
class TradeAccountInfo(object):
'''An instance of this class will be returned by
a successful call to TradeAPI.getInfo.'''
def __init__(self, info):
funds = info.get(u'funds')
for c in common.all_currencies:
setattr(self, "balance_%s" % c, funds.get(unicode(c), 0))
self.open_orders = info.get(u'open_orders')
self.server_time = datetime.fromtimestamp(info.get(u'server_time'))
self.transaction_count = info.get(u'transaction_count')
rights = info.get(u'rights')
self.info_rights = (rights.get(u'info') == 1)
self.withdraw_rights = (rights.get(u'withdraw') == 1)
self.trade_rights = (rights.get(u'trade') == 1)
class TransactionHistoryItem(object):
'''A list of instances of this class will be returned by
a successful call to TradeAPI.transHistory.'''
def __init__(self, transaction_id, info):
self.transaction_id = transaction_id
items = ("type", "amount", "currency", "desc",
"status", "timestamp")
for n in items:
setattr(self, n, info.get(n))
self.timestamp = datetime.fromtimestamp(self.timestamp)
class TradeHistoryItem(object):
'''A list of instances of this class will be returned by
a successful call to TradeAPI.tradeHistory.'''
def __init__(self, transaction_id, info):
self.transaction_id = transaction_id
items = ("pair", "type", "amount", "rate", "order_id",
"is_your_order", "timestamp")
for n in items:
setattr(self, n, info.get(n))
self.timestamp = datetime.fromtimestamp(self.timestamp)
class OrderItem(object):
'''A list of instances of this class will be returned by
a successful call to TradeAPI.activeOrders.'''
def __init__(self, order_id, info):
self.order_id = int(order_id)
vnames = ("pair", "type", "amount", "rate", "timestamp_created",
"status")
for n in vnames:
setattr(self, n, info.get(n))
self.timestamp_created = datetime.fromtimestamp(self.timestamp_created)
class TradeResult(object):
'''An instance of this class will be returned by
a successful call to TradeAPI.trade.'''
def __init__(self, info):
self.received = info.get(u"received")
self.remains = info.get(u"remains")
self.order_id = info.get(u"order_id")
funds = info.get(u'funds')
for c in common.all_currencies:
setattr(self, "balance_%s" % c, funds.get(unicode(c), 0))
class CancelOrderResult(object):
'''An instance of this class will be returned by
a successful call to TradeAPI.cancelOrder.'''
def __init__(self, info):
self.order_id = info.get(u"order_id")
funds = info.get(u'funds')
for c in common.all_currencies:
setattr(self, "balance_%s" % c, funds.get(unicode(c), 0))
def setHistoryParams(params, from_number, count_number, from_id, end_id,
order, since, end):
if from_number is not None:
params["from"] = "%d" % from_number
if count_number is not None:
params["count"] = "%d" % count_number
if from_id is not None:
params["from_id"] = "%d" % from_id
if end_id is not None:
params["end_id"] = "%d" % end_id
if order is not None:
if order not in ("ASC", "DESC"):
raise Exception("Unexpected order parameter: %r" % order)
params["order"] = order
if since is not None:
params["since"] = "%d" % since
if end is not None:
params["end"] = "%d" % end
class TradeAPI(object):
def __init__(self, key, handler):
self.key = key
self.handler = handler
if not isinstance(self.handler, keyhandler.KeyHandler):
raise Exception("The handler argument must be a"
" keyhandler.KeyHandler")
# We depend on the key handler for the secret
self.secret = handler.getSecret(key)
def _post(self, params, connection=None, raiseIfInvalidNonce=False):
params["nonce"] = self.handler.getNextNonce(self.key)
encoded_params = urllib.urlencode(params)
# Hash the params string to produce the Sign header value
H = hmac.new(self.secret, digestmod=hashlib.sha512)
H.update(encoded_params)
sign = H.hexdigest()
if connection is None:
connection = common.BTCEConnection()
headers = {"Key": self.key, "Sign": sign}
result = connection.makeJSONRequest("/tapi", headers, encoded_params)
success = result.get(u'success')
if not success:
err_message = result.get(u'error')
method = params.get("method", "[uknown method]")
if "invalid nonce" in err_message:
# If the nonce is out of sync, make one attempt to update to
# the correct nonce. This sometimes happens if a bot crashes
# and the nonce file doesn't get saved, so it's reasonable to
# attempt one correction. If multiple threads/processes are
# attempting to use the same key, this mechanism will
# eventually fail and the InvalidNonce will be emitted so that
# you'll end up here reading this comment. :)
# The assumption is that the invalid nonce message looks like
# "invalid nonce parameter; on key:4, you sent:3"
s = err_message.split(",")
expected = int(s[-2].split(":")[1])
actual = int(s[-1].split(":")[1])
if raiseIfInvalidNonce:
raise InvalidNonceException(method, expected, actual)
warnings.warn("The nonce in the key file is out of date;"
" attempting to correct.")
self.handler.setNextNonce(self.key, expected + 1)
return self._post(params, connection, True)
elif "no orders" in err_message and method == "ActiveOrders":
# ActiveOrders returns failure if there are no orders;
# intercept this and return an empty dict.
return {}
raise Exception("%s call failed with error: %s"
% (method, err_message))
if u'return' not in result:
raise Exception("Response does not contain a 'return' item.")
return result.get(u'return')
def getInfo(self, connection=None):
params = {"method": "getInfo"}
return TradeAccountInfo(self._post(params, connection))
def transHistory(self, from_number=None, count_number=None,
from_id=None, end_id=None, order="DESC",
since=None, end=None, connection=None):
params = {"method": "TransHistory"}
setHistoryParams(params, from_number, count_number, from_id, end_id,
order, since, end)
orders = self._post(params, connection)
result = []
for k, v in orders.items():
result.append(TransactionHistoryItem(int(k), v))
# We have to sort items here because the API returns a dict
if "ASC" == order:
result.sort(key=lambda a: a.transaction_id, reverse=False)
elif "DESC" == order:
result.sort(key=lambda a: a.transaction_id, reverse=True)
return result
def tradeHistory(self, from_number=None, count_number=None,
from_id=None, end_id=None, order=None,
since=None, end=None, pair=None, connection=None):
params = {"method": "TradeHistory"}
setHistoryParams(params, from_number, count_number, from_id, end_id,
order, since, end)
if pair is not None:
common.validatePair(pair)
params["pair"] = pair
orders = self._post(params, connection)
result = []
for k, v in orders.items():
result.append(TradeHistoryItem(k, v))
return result
def activeOrders(self, pair=None, connection=None):
params = {"method": "ActiveOrders"}
if pair is not None:
common.validatePair(pair)
params["pair"] = pair
orders = self._post(params, connection)
result = []
for k, v in orders.items():
result.append(OrderItem(k, v))
return result
def trade(self, pair, trade_type, rate, amount, connection=None):
common.validateOrder(pair, trade_type, rate, amount)
params = {"method": "Trade",
"pair": pair,
"type": trade_type,
"rate": common.formatCurrency(rate, pair),
"amount": common.formatCurrency(amount, pair)}
return TradeResult(self._post(params, connection))
def cancelOrder(self, order_id, connection=None):
params = {"method": "CancelOrder",
"order_id": order_id}
return CancelOrderResult(self._post(params, connection)) | 36.206767 | 79 | 0.602326 | 1,155 | 9,631 | 4.911688 | 0.200866 | 0.011987 | 0.015512 | 0.029614 | 0.377402 | 0.323991 | 0.312004 | 0.312004 | 0.306187 | 0.306187 | 0 | 0.002931 | 0.291559 | 9,631 | 266 | 80 | 36.206767 | 0.828521 | 0.13903 | 0 | 0.267045 | 0 | 0 | 0.098006 | 0.002554 | 0 | 0 | 0 | 0 | 0 | 1 | 0.096591 | false | 0 | 0.039773 | 0.005682 | 0.238636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
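# Standalone sketch of the Sign-header computation _post() performs above
# (dummy secret and params; Python 2 idioms to match the module):
# import hmac, hashlib, urllib
# params = urllib.urlencode({"method": "getInfo", "nonce": 1})
# H = hmac.new("secret", digestmod=hashlib.sha512)
# H.update(params)
# headers = {"Key": "MY-API-KEY", "Sign": H.hexdigest()}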
4702d3ea64c33128158593ce3ec94ea461005ad4 | 776 | py | Python | inventory_management/inventory/inventory.py | gmaher/blog_posts | 2db1c0f88adaa76a4bbd188fc3ac230ff9eaefb5 | [
"MIT"
] | null | null | null | inventory_management/inventory/inventory.py | gmaher/blog_posts | 2db1c0f88adaa76a4bbd188fc3ac230ff9eaefb5 | [
"MIT"
] | null | null | null | inventory_management/inventory/inventory.py | gmaher/blog_posts | 2db1c0f88adaa76a4bbd188fc3ac230ff9eaefb5 | [
"MIT"
] | 1 | 2019-12-15T17:17:10.000Z | 2019-12-15T17:17:10.000Z | import numpy as np
class Forecaster:
def __init__(self, c, m, sigma):
self.c = c
self.m = m
self.sigma = sigma
def predict(self, Y0, T):
Yhat = np.zeros((T+self.m))
Yhat[:self.m] = Y0
for i in range(self.m,T+self.m):
Yhat[i] = self.c + Yhat[i-self.m] + self.sigma*np.random.randn()
return Yhat[self.m:]
def sim(x0, y0, U, A, B, C, T, forecaster):
n = x0.shape[0]
X = np.zeros((n, T))
X[:,0] = x0
S = np.zeros((T))
Y = forecaster.predict(y0, T)
for i in range(1,T):
y = Y[i-1]
if y > X[0,i-1]:
S[i-1] = X[0,i-1]
else:
S[i-1] = y
X[:,i] = A.dot(X[:,i-1]) + B.dot(U[i-1]) + C.dot(S[i-1])
return X,Y,S
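# Hypothetical driver for sim() above, runnable as a script. It wires a
# 2-state system to the seasonal random-walk demand from Forecaster; the
# matrices A, B, C and the order stream U are illustrative placeholders.
if __name__ == '__main__':
    T = 52
    f = Forecaster(c=0.1, m=4, sigma=1.0)
    A = np.eye(2)                     # state carries over unchanged
    B = np.array([[1.0], [0.0]])      # orders replenish stock (state 0)
    C = np.array([-1.0, 0.0])         # sales deplete stock (state 0)
    U = np.ones((T, 1)) * 5.0         # constant order stream
    X, Y, S = sim(np.array([10.0, 0.0]), np.ones(4) * 5.0, U, A, B, C, T, f)
    print(X.shape, Y.shape, S.shape)  # (2, 52) (52,) (52,)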
| 20.972973 | 76 | 0.45232 | 142 | 776 | 2.443662 | 0.274648 | 0.04611 | 0.025937 | 0.057637 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.039841 | 0.353093 | 776 | 36 | 77 | 21.555556 | 0.651394 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.115385 | false | 0 | 0.038462 | 0 | 0.269231 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4703b8b3047a8ef94c23101c750894c129122120 | 1,388 | py | Python | chillpill_examples/cloud_hp_tuning_from_train_fn/run_hp_search.py | kevinbache/chillpill_examples | d9c5fac9972f1afbf7bb4e6b6e5388b9f52c73c3 | [
"MIT"
] | null | null | null | chillpill_examples/cloud_hp_tuning_from_train_fn/run_hp_search.py | kevinbache/chillpill_examples | d9c5fac9972f1afbf7bb4e6b6e5388b9f52c73c3 | [
"MIT"
] | null | null | null | chillpill_examples/cloud_hp_tuning_from_train_fn/run_hp_search.py | kevinbache/chillpill_examples | d9c5fac9972f1afbf7bb4e6b6e5388b9f52c73c3 | [
"MIT"
] | null | null | null | """This module runs a distributed hyperparameter tuning job on Google Cloud AI Platform."""
from pathlib import Path
import numpy as np
import chillpill
from chillpill import packages, params, search
from chillpill_examples.cloud_hp_tuning_from_train_fn import train
if __name__ == '__main__':
# Create a Cloud AI Platform Hyperparameter Search object
    spec = search.HyperparamSearchSpec(
max_trials=10,
max_parallel_trials=5,
max_failed_trials=2,
hyperparameter_metric_tag='val_acc',
)
# Add parameter search ranges for this problem.
my_param_ranges = train.MyParams(
activation=params.Categorical(['relu', 'tanh']),
num_layers=params.Integer(min_value=1, max_value=3),
num_neurons=params.Discrete(np.logspace(2, 8, num=7, base=2)),
dropout_rate=params.Double(min_value=-0.1, max_value=0.9),
learning_rate=params.Discrete(np.logspace(-6, 2, 17, base=10)),
batch_size=params.Integer(min_value=1, max_value=128),
)
    spec.add_parameters(my_param_ranges)
# Run hyperparameter search job
    spec.run_from_train_fn(
train_fn=train.train_fn,
additional_package_root_dirs=[str(packages.find_package_root(chillpill))],
cloud_staging_bucket='chillpill-staging-bucket',
gcloud_project_name='kb-experiment',
region='us-central1',
)
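# Hypothetical shape of the train module this script drives (names are
# inferred from the imports and usages above and may differ from the real
# chillpill_examples code):
# class MyParams(params.ParameterSet): ...  # plain-value twin of the ranges
# def train_fn(hp): ...                     # builds/fits a model from hp and
#                                           # reports 'val_acc' for tuning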
| 35.589744 | 91 | 0.713977 | 186 | 1,388 | 5.048387 | 0.526882 | 0.029819 | 0.028754 | 0.044728 | 0.063898 | 0.063898 | 0.063898 | 0 | 0 | 0 | 0 | 0.022183 | 0.18804 | 1,388 | 38 | 92 | 36.526316 | 0.811003 | 0.157061 | 0 | 0 | 0 | 0 | 0.061102 | 0.020654 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.178571 | 0 | 0.178571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4706c990a64c224b744ada7eefc49fc8bb2c5644 | 858 | py | Python | tests/timing/test_timing_encoding.py | brendanhasz/dsutils | e780e904f7bf0ec5e14aa7ddb337f01f29779143 | [
"MIT"
] | 1 | 2019-09-14T16:59:34.000Z | 2019-09-14T16:59:34.000Z | tests/timing/test_timing_encoding.py | brendanhasz/dsutils | e780e904f7bf0ec5e14aa7ddb337f01f29779143 | [
"MIT"
] | null | null | null | tests/timing/test_timing_encoding.py | brendanhasz/dsutils | e780e904f7bf0ec5e14aa7ddb337f01f29779143 | [
"MIT"
] | 7 | 2020-01-19T14:40:08.000Z | 2022-01-14T12:50:30.000Z | """Tests timing of encoding classes
"""
import time
import numpy as np
import pandas as pd
#import matplotlib.pyplot as plt
from dsutils.encoding import MultiTargetEncoderLOO
def test_timing_MultiTargetEncoderLOO():
"""Tests timing of encoding.MultiTargetEncoderLOO"""
# Dummy data
N = 10000
Nc = 100
df = pd.DataFrame()
cat1 = [str(e) for e in np.floor(Nc*np.random.randn(N))]
cat2 = [str(e) for e in np.floor(Nc*np.random.randn(N))]
df['a'] = [cat1[i]+','+cat2[i] for i in range(len(cat1))]
df['b'] = np.random.randn(N)
df['y'] = np.random.randn(N)
# Encode the data
mte = MultiTargetEncoderLOO(cols='a')
t0 = time.time()
mte.fit_transform(df[['a', 'b']], df['y'])
t1 = time.time()
print('Elapsed time: ', t1-t0)
if __name__ == "__main__":
test_timing_MultiTargetEncoderLOO() | 22.578947 | 61 | 0.638695 | 125 | 858 | 4.28 | 0.448 | 0.059813 | 0.097196 | 0.104673 | 0.157009 | 0.123364 | 0.123364 | 0.123364 | 0.123364 | 0.123364 | 0 | 0.024854 | 0.202797 | 858 | 38 | 62 | 22.578947 | 0.75731 | 0.160839 | 0 | 0 | 0 | 0 | 0.042553 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0.2 | 0 | 0.25 | 0.05 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
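# For reference, leave-one-out target encoding replaces category c in row i
# by the mean target over the *other* rows of that category -- a common
# formulation, not necessarily MultiTargetEncoderLOO's exact one:
#   enc_i = (sum_{j in c} y_j - y_i) / (n_c - 1)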
47092f96a5b0022e0fac79b527512d8f3952d111 | 3,409 | py | Python | instagram/views.py | derrokip34/Instagram-Clone | 63fdff902382b5e4986667566a901cca748b5731 | [
"MIT"
] | null | null | null | instagram/views.py | derrokip34/Instagram-Clone | 63fdff902382b5e4986667566a901cca748b5731 | [
"MIT"
] | 4 | 2020-06-02T13:03:34.000Z | 2021-06-10T22:59:11.000Z | instagram/views.py | derrokip34/Instagram-Clone | 63fdff902382b5e4986667566a901cca748b5731 | [
"MIT"
] | null | null | null | from django.shortcuts import render,redirect
from django.contrib.auth.models import User
from django.contrib.auth.decorators import login_required
from .models import Image,Profile,Comments
from .forms import UpdateProfile,UpdateUser,PostImageForm,CommentForm
from django.conf.urls import url
# Create your views here.
@login_required(login_url='/accounts/login')
def home(request):
current_user = request.user
images = Image.get_all_images()
title = 'Welcome to Instagram'
return render(request, 'index.html',{'title':title,'images':images,'current_user':current_user})
@login_required(login_url='/accounts/login')
def profile(request,id):
current_user = request.user
user = User.objects.filter(id=id).first()
user_profile = user.profile
profile = Profile.get_by_id(id)
images = Image.get_profile_images(id)
title = f'@{user.username} Instagram photos'
return render(request, 'profile.html',{'user':user,'current_user':current_user,'profile':user_profile,"images":images,'title':title})
@login_required(login_url='/accounts/login')
def update_profile(request):
current_user = request.user
if request.method == 'POST':
u_form = UpdateUser(request.POST,instance=request.user)
p_form = UpdateProfile(request.POST,request.FILES,instance=request.user.profile)
if u_form.is_valid() and p_form.is_valid():
u_form.save()
p_form.save()
return redirect('userProfile',id=current_user.id)
else:
u_form = UpdateUser(instance=request.user)
p_form = UpdateProfile(instance=request.user.profile)
title = f'Update @{current_user.username} profile'
return render(request,'update_profile.html', {'title':title,'user_form':u_form,'profile_form':p_form,'current_user':current_user})
@login_required(login_url='/accounts/login')
def post_image(request):
current_user = request.user
if request.method == 'POST':
img_form = PostImageForm(request.POST,request.FILES)
if img_form.is_valid():
image = img_form.save(commit=False)
image.owner = current_user
image.profile = current_user.profile
image.save()
return redirect('home')
else:
img_form = PostImageForm()
title = 'New Post'
return render(request, 'new_post.html',{'title':title,'img_form':img_form,'current_user':current_user})
@login_required(login_url='/accounts/login')
def comment(request,image_id):
current_user = request.user
image = Image.objects.filter(id=image_id).first()
comment_form = CommentForm()
# comments = Comments.objects.all()
if request.method == 'POST':
comment_form = CommentForm(request.POST,request.FILES)
if comment_form.is_valid():
comment = comment_form.save(commit=False)
comment.image = image
comment.user = current_user
comment.save()
return redirect('home')
else:
comment_form = CommentForm()
comments = Comments.objects.filter(image_id=image_id).all()
title = 'Comments'
return render(request,'comments.html',{'comment_form':comment_form,'image':image,'current_user':current_user,'comments':comments})
def like_image(request,image_id):
image = Image.objects.filter(id=image_id).first()
image.likes += 1
image.save()
return redirect('/')
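# Hypothetical URL wiring for the views above (the route names match the
# redirect()/render() calls; the path patterns are illustrative only):
# urlpatterns = [
#     url(r'^$', home, name='home'),
#     url(r'^profile/(?P<id>\d+)/$', profile, name='userProfile'),
#     url(r'^profile/update/$', update_profile, name='update_profile'),
#     url(r'^image/(?P<image_id>\d+)/comment/$', comment, name='comment'),
#     url(r'^image/(?P<image_id>\d+)/like/$', like_image, name='like_image'),
# ]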
| 37.054348 | 137 | 0.694045 | 431 | 3,409 | 5.306265 | 0.174014 | 0.096196 | 0.045912 | 0.045912 | 0.33756 | 0.259292 | 0.187582 | 0.155225 | 0.122868 | 0.080892 | 0 | 0.000358 | 0.180698 | 3,409 | 91 | 138 | 37.461538 | 0.818475 | 0.01672 | 0 | 0.324324 | 0 | 0 | 0.131084 | 0.007166 | 0 | 0 | 0 | 0 | 0 | 1 | 0.081081 | false | 0 | 0.081081 | 0 | 0.283784 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4709430dc3111986d3531b884a393ec597c0e3c5 | 10,596 | py | Python | wahltraud/bot/bot.py | wdr-data/wahltraud | 0f972680f6cdbb66028aa8a39fc4a78a3e0ca08a | [
"RSA-MD"
] | 7 | 2017-07-02T12:25:45.000Z | 2019-05-27T10:39:41.000Z | wahltraud/bot/bot.py | wdr-data/wahltraud | 0f972680f6cdbb66028aa8a39fc4a78a3e0ca08a | [
"RSA-MD"
] | null | null | null | wahltraud/bot/bot.py | wdr-data/wahltraud | 0f972680f6cdbb66028aa8a39fc4a78a3e0ca08a | [
"RSA-MD"
] | null | null | null | import logging
from threading import Thread
from time import sleep
import os
import json
import schedule
#from django.utils.timezone import localtime, now
from apiai import ApiAI
from backend.models import Push, FacebookUser, Wiki
from .fb import send_text, send_buttons, button_postback, PAGE_TOKEN
from .handlers.payloadhandler import PayloadHandler
from .handlers.texthandler import TextHandler
from .handlers.apiaihandler import ApiAiHandler
from .callbacks.simple import (get_started, push, subscribe, unsubscribe, wiki, story,
apiai_fulfillment, about_manifesto, menue_manifesto, about,
questions,share_bot, push_step, menue_candidates, menue_data,
more_data, sunday_poll, greetings, presidents, chancelor, who_votes)
from .callbacks.shared import (get_pushes, get_breaking, send_push, schema)
from .callbacks import candidate, district, browse_lists, manifesto, party
from .data import by_district_id
# TODO: The idea is simple. When a user sends "subscribe" to the bot, the
# server stores a record for that sender_id (in a database or in memory),
# then runs a timer that distributes news messages to every subscribed
# sender_id.
# Enable logging
logger = logging.getLogger(__name__)
logger.info('FB Wahltraud Logging')
API_AI_TOKEN = os.environ.get('WAHLTRAUD_API_AI_TOKEN', 'na')
ADMINS = [
1781215881903416, # Christian
1450422691688898, # Jannes
1543183652404650, # Lisa
]
def make_event_handler():
ai = ApiAI(API_AI_TOKEN)
handlers = [
ApiAiHandler(greetings, 'gruss'),
PayloadHandler(greetings, ['gruss']),
PayloadHandler(get_started, ['start']),
PayloadHandler(about, ['about']),
PayloadHandler(story, ['push_id', 'next_state']),
PayloadHandler(get_started, ['wahltraud_start_payload']),
PayloadHandler(share_bot, ['share_bot']),
PayloadHandler(subscribe, ['subscribe']),
PayloadHandler(unsubscribe, ['unsubscribe']),
ApiAiHandler(subscribe, 'anmelden'),
ApiAiHandler(unsubscribe, 'abmelden'),
PayloadHandler(push_step, ['push', 'next_state']),
PayloadHandler(push, ['push']),
ApiAiHandler(push, 'push'),
ApiAiHandler(district.result_nation_17,'Ergebnisse'),
ApiAiHandler(wiki, 'wiki'),
ApiAiHandler(who_votes, 'wer_darf_wählen'),
PayloadHandler(menue_candidates, ['menue_candidates']),
PayloadHandler(questions, ['questions']),
PayloadHandler(menue_data, ['menue_data']),
PayloadHandler(more_data, ['more_data']),
PayloadHandler(menue_manifesto, ['menue_manifesto']),
PayloadHandler(about_manifesto, ['about_manifesto']),
ApiAiHandler(presidents, 'bundespräsident'),
ApiAiHandler(chancelor, 'bundeskanzler'),
ApiAiHandler(candidate.basics, 'kandidat'),
ApiAiHandler(party.basics, 'parteien'),
ApiAiHandler(party.top_candidates_apiai, 'spitzenkandidat'),
#ApiAiHandler(sunday_poll, 'umfrage'),
PayloadHandler(party.show_parties, ['show_parties']),
PayloadHandler(party.show_electorial, ['show_electorial']),
PayloadHandler(party.show_party_options, ['show_party_options']),
PayloadHandler(party.show_party_candidates,['show_party_candidates']),
PayloadHandler(party.show_list_all, ['show_list_all']),
PayloadHandler(party.show_top_candidates,['show_top_candidates']),
ApiAiHandler(candidate.candidate_check, 'kandidatencheck'),
PayloadHandler(candidate.candidate_check_start,['candidate_check_start']),
PayloadHandler(district.result_state_17,['result_state_17']),
PayloadHandler(district.select_state_result,['select_state_result']),
PayloadHandler(district.intro_district, ['intro_district']),
PayloadHandler(candidate.intro_candidate, ['intro_candidate']),
PayloadHandler(district.show_13, ['show_13']),
PayloadHandler(district.result_17, ['result_17']),
PayloadHandler(district.result_first_vote, ['result_first_vote']),
PayloadHandler(district.result_second_vote, ['result_second_vote']),
PayloadHandler(district.novi, ['novi']),
PayloadHandler(district.show_structural_data, ['show_structural_data']),
PayloadHandler(candidate.search_candidate_list, ['search_candidate_list']),
PayloadHandler(candidate.payload_basics, ['payload_basics']),
PayloadHandler(candidate.more_infos_nrw, ['more_infos_nrw']),
PayloadHandler(candidate.no_video_to_show, ['no_video_to_show']),
PayloadHandler(candidate.show_video, ['show_video']),
PayloadHandler(candidate.show_random_candidate, ['show_random_candidate']),
PayloadHandler(district.show_candidates, ['show_candidates']),
ApiAiHandler(district.find_district, 'wahlkreis_finder'),
PayloadHandler(district.show_district, ['show_district']),
ApiAiHandler(browse_lists.apiai, 'liste'),
PayloadHandler(browse_lists.intro_lists, ['intro_lists']),
PayloadHandler(browse_lists.select_state, ['select_state']),
PayloadHandler(browse_lists.select_party, ['select_party']),
PayloadHandler(browse_lists.show_list, ['show_list', 'state', 'party']),
PayloadHandler(manifesto.manifesto_start, ['manifesto_start']),
PayloadHandler(manifesto.show_word_payload, ['show_word']),
PayloadHandler(manifesto.show_sentence_payload, ['show_sentence']),
PayloadHandler(manifesto.show_paragraph, ['show_paragraph']),
PayloadHandler(manifesto.show_manifesto, ['show_manifesto']),
ApiAiHandler(manifesto.show_word_apiai, 'wahlprogramm'),
TextHandler(apiai_fulfillment, '.*'),
]
def event_handler(data):
"""handle all incoming messages"""
messaging_events = data['entry'][0]['messaging']
logger.debug(messaging_events)
for event in messaging_events:
referral = event.get('referral')
if referral:
ref = referral.get('ref')
                logging.info('Bot wurde mit bekanntem User geteilt: ' + ref)
if ref.startswith('WK'):
wk = int(ref.replace("WK", ""))
dis = by_district_id[str(wk)]
send_text(
event['sender']['id'],
'Hi, schön dich wieder zu sehen! \nNovi sagt, du möchtest etwas über deinen Wahlkreis "{wk}" wissen? Sehr gerne...'.format(
wk=dis['district']
)
)
district.send_district(event['sender']['id'], dis['uuid'])
else:
send_text(
event['sender']['id'],
'Willkommen zurück. Was kann ich für dich tun?'
)
message = event.get('message')
if message:
text = message.get('text')
if (text is not None
and event.get('postback') is None
and message.get('quick_reply') is None):
request = ai.text_request()
request.lang = 'de'
request.query = text
request.session_id = event['sender']['id']
response = request.getresponse()
nlp = json.loads(response.read().decode())
logging.info(nlp)
message['nlp'] = nlp
for handler in handlers:
try:
if handler.check_event(event):
try:
handler.handle_event(event)
except Exception as e:
logging.exception("Handling event failed")
try:
sender_id = event['sender']['id']
send_text(
sender_id,
'Huppsala, das hat nicht funktioniert :('
)
if int(sender_id) in ADMINS:
txt = str(e)
txt = txt.replace(PAGE_TOKEN, '[redacted]')
txt = txt.replace(API_AI_TOKEN, '[redacted]')
send_text(sender_id, txt)
except:
pass
finally:
break
except:
logging.exception("Testing handler failed")
return event_handler
handle_events = make_event_handler()
def push_notification():
data = get_pushes()
if not data:
return
user_list = FacebookUser.objects.values_list('uid', flat=True)
unavailable_user_ids = list()
for user in user_list:
logger.debug("Send Push to: " + user)
try:
schema(data, user)
except Exception as e:
logger.exception("Push failed")
try:
if e.args[0]['code'] == 551: # User is unavailable (probs deleted chat or account)
unavailable_user_ids.append(user)
logging.info('Removing user %s', user)
except:
pass
sleep(2)
for user in unavailable_user_ids:
try:
FacebookUser.objects.get(uid=user).delete()
except:
logging.exception('Removing user %s failed', user)
def push_breaking():
data = get_breaking()
if data is None or data.delivered:
return
user_list = FacebookUser.objects.values_list('uid', flat=True)
for user in user_list:
logger.debug("Send Push to: " + user)
# media = '327430241009143'
# send_attachment_by_id(user, media, 'image')
try:
send_push(user, data)
except:
logger.exception("Push failed")
sleep(1)
data.delivered = True
data.save(update_fields=['delivered'])
schedule.every(30).seconds.do(push_breaking)
schedule.every().day.at("18:00").do(push_notification)
#schedule.every().day.at("08:00").do(push_notification)
def schedule_loop():
while True:
schedule.run_pending()
sleep(1)
schedule_loop_thread = Thread(target=schedule_loop, daemon=True)
schedule_loop_thread.start()
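# To wire a new feature (illustrative; the callback and route names below
# are made up), add both routes to the handlers list inside
# make_event_handler():
#     ApiAiHandler(my_callback, 'my_intent'),       # NLP intent matches
#     PayloadHandler(my_callback, ['my_payload']),  # button postbacks
# and have my_callback reply to the event via send_text(...).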
| 40.288973 | 147 | 0.60806 | 1,056 | 10,596 | 5.882576 | 0.283144 | 0.014166 | 0.022215 | 0.009015 | 0.037669 | 0.030908 | 0.030908 | 0.030908 | 0.030908 | 0.030908 | 0 | 0.012546 | 0.285391 | 10,596 | 262 | 148 | 40.442748 | 0.807845 | 0.057097 | 0 | 0.15942 | 0 | 0.004831 | 0.141267 | 0.012934 | 0 | 0 | 0 | 0.003817 | 0 | 1 | 0.024155 | false | 0.009662 | 0.077295 | 0 | 0.115942 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
470e472fe7ae958fe62d97b5ce9dca8b55363b3a | 4,994 | py | Python | dbix/postgresql.py | alexbodn/python-dbix | 769298cc510a95437d9d7e7641b616b6aec97ced | [
"Apache-2.0"
] | null | null | null | dbix/postgresql.py | alexbodn/python-dbix | 769298cc510a95437d9d7e7641b616b6aec97ced | [
"Apache-2.0"
] | 3 | 2021-03-25T21:40:41.000Z | 2021-11-15T17:46:46.000Z | dbix/postgresql.py | alexbodn/python-dbix | 769298cc510a95437d9d7e7641b616b6aec97ced | [
"Apache-2.0"
] | null | null | null |
from .sqlschema import SQLSchema, SQLResultSet
import psycopg2
import psycopg2.extensions as pe
class POSTGRESQLResultSet(SQLResultSet):
def perform_insert(self, script, param, pk_fields, table, new_key):
        script += u' returning %s' % u','.join([
self.schema.render_name(field) for field in pk_fields
])
res = self.schema.db_execute(script, param)
return res.fetchone()
class POSTGRESQL(SQLSchema):
rs_class = POSTGRESQLResultSet
_type_conv = dict(
enum='varchar',
boolean='integer',
datetime='timestamp',
tinyint='integer',
mediumtext='text',
)
getdate = dict(
timestamp="CLOCK_TIMESTAMP() at time zone 'utc'",
date="cast((CLOCK_TIMESTAMP() at time zone 'utc') as DATE)",
time="cast((CLOCK_TIMESTAMP() at time zone 'utc') as TIME)",
)
deferred_fk = "DEFERRABLE INITIALLY DEFERRED"
render_paramplace = '%s'
on_update_trigger = """
CREATE OR REPLACE FUNCTION "trf_%(table)s%%(c)d_before"() RETURNS trigger AS
$BODY$
BEGIN
IF "new"."%(field)s"="old"."%(field)s" THEN
"new"."%(field)s" = %(getdate_tr)s;
END IF;
RETURN NEW;
END;
$BODY$ LANGUAGE plpgsql;
DROP TRIGGER IF EXISTS "tr_%(table)s%%(c)d_before" ON "%(table)s";
CREATE TRIGGER "tr_%(table)s%%(c)d_before"
BEFORE UPDATE
ON "%(table)s" FOR EACH ROW
EXECUTE PROCEDURE "trf_%(table)s%%(c)d_before"();
"""
inline_fk = False
dsn = "dbname='%(db)s' user='%(user)s' host='%(host)s' password='%(password)s'"
dsn_dba = "dbname='postgres' user='%(user_dba)s' host='%(host)s' password='%(password_dba)s'"
def __init__(self, **connectparams):
super(POSTGRESQL, self).__init__()
self.type_render['serial primary key'] = self.type_render['integer']
self.connectparams = dict(connectparams)
self.connectparams.pop('db', None)
def render_name(self, name):
return '"%s"' % name
def render_autoincrement(self, attrs, entity, name):
attrs, __ = super(POSTGRESQL, self).render_autoincrement(
attrs, entity, name)
if attrs.get('is_auto_increment'):
attrs['data_type'] = 'serial primary key'
self.this_render_pk = False
return attrs, ''
def fk_disable(self):
self.db_executelist([
'ALTER TABLE %s DISABLE TRIGGER ALL' % entity['table'] \
for entity in self.entities
])
def fk_enable(self):
self.db_executelist([
'ALTER TABLE %s ENABLE TRIGGER ALL' % entity['table'] \
for entity in self.entities
])
def isdba(self):
return 'user_dba' in self.connectparams \
and 'password_dba' in self.connectparams
def db_create(self, dbname):
if not self.isdba():
return
conn = psycopg2.connect(self.dsn_dba % self.connectparams)
conn.set_isolation_level(pe.ISOLATION_LEVEL_AUTOCOMMIT)
cur = conn.cursor()
connectparams = dict(db=dbname)
connectparams.update(self.connectparams)
cur.execute(
"""
CREATE DATABASE %(db)s WITH OWNER=%(user)s;
""" % connectparams
)
cur.close()
conn.close()
dbs = self.db_list()
return dbs and dbname in dbs
def db_drop(self, dbname):
if not self.isdba():
return
dbs = self.db_list()
if dbs and dbname not in dbs:
return True
if dbname == self.dbname:
self.db_disconnect()
conn = psycopg2.connect(
self.dsn_dba % self.connectparams,
)
conn.set_isolation_level(pe.ISOLATION_LEVEL_AUTOCOMMIT)
cur = conn.cursor()
cur.execute("DROP DATABASE %(db)s;" % dict(db=dbname))
cur.close()
conn.close()
dbs = self.db_list()
return dbs and dbname not in dbs
def db_connect(self, dbname):
try:
connectparams = dict(db=dbname)
connectparams.update(self.connectparams)
self.connection = psycopg2.connect(self.dsn % connectparams,)
self.connection.set_isolation_level(
pe.ISOLATION_LEVEL_READ_COMMITTED)
self.dbname = dbname
return True
except:
self.db_reset()
return False
def db_disconnect(self):
if not self.connection:
return
self.connection.close()
self.db_reset()
def db_commit(self):
if not self.connection:
return
self.connection.commit()
def db_rollback(self):
if not self.connection:
return
self.connection.rollback()
def db_name(self):
return self.dbname
def db_list(self):
try:
conn = self.connection
if not conn:
connectparams = dict(db='postgres')
connectparams.update(self.connectparams)
conn = psycopg2.connect(self.dsn % connectparams)
cur = conn.cursor()
cur.execute("SELECT datname FROM pg_database;")
res = [row[0] for row in cur.fetchall()]
cur.close()
if not self.connection:
conn.close()
return res
except:
return None
def db_execute(self, script, param=list()):
self.pre_execute(script, param)
cur = self.db_cursor()
cur.execute(self.query_prefix + script, param)
#for notice in self.connection.notices:
# print (notice)
return cur
def db_executemany(self, script, param=list()):
cur = self.db_cursor()
cur.executemany(self.query_prefix + script, param)
return cur
def db_executescript(self, script):
return self.db_execute(script + ";\nselect 0=1;")
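# Hypothetical connection sketch (parameter names follow the dsn/dsn_dba
# templates above; all values are placeholders):
# schema = POSTGRESQL(host='localhost', user='app', password='secret',
#                     user_dba='postgres', password_dba='adminpw')
# if schema.db_create('mydb'):
#     schema.db_connect('mydb')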
| 25.222222 | 94 | 0.692231 | 686 | 4,994 | 4.902332 | 0.236152 | 0.019625 | 0.016057 | 0.009515 | 0.369908 | 0.311924 | 0.250669 | 0.213797 | 0.119536 | 0.119536 | 0 | 0.002177 | 0.172006 | 4,994 | 197 | 95 | 25.350254 | 0.811125 | 0.010613 | 0 | 0.308176 | 0 | 0.018868 | 0.221266 | 0.0504 | 0 | 0 | 0 | 0 | 0 | 1 | 0.113208 | false | 0.018868 | 0.018868 | 0.025157 | 0.327044 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
470f91dbbbad1edd443e920c9753325697ced101 | 2,232 | py | Python | datahub/cleanup/cleanup_config.py | Staberinde/data-hub-api | 3d0467dbceaf62a47158eea412a3dba827073300 | [
"MIT"
] | 6 | 2019-12-02T16:11:24.000Z | 2022-03-18T10:02:02.000Z | datahub/cleanup/cleanup_config.py | Staberinde/data-hub-api | 3d0467dbceaf62a47158eea412a3dba827073300 | [
"MIT"
] | 1,696 | 2019-10-31T14:08:37.000Z | 2022-03-29T12:35:57.000Z | datahub/cleanup/cleanup_config.py | Staberinde/data-hub-api | 3d0467dbceaf62a47158eea412a3dba827073300 | [
"MIT"
] | 9 | 2019-11-22T12:42:03.000Z | 2021-09-03T14:25:05.000Z | from datetime import datetime
from typing import Any, Mapping, NamedTuple, Sequence, Union
from dateutil.relativedelta import relativedelta
from dateutil.utils import today
from django.db.models import Q
from django.utils.timezone import utc
class DatetimeLessThanCleanupFilter(NamedTuple):
"""Represents a filter in a ModelCleanupConfig."""
# The field to use with the age threshold defined below
date_field: str
# Records older than this will match this filter
age_threshold: Union[relativedelta, datetime]
# Whether null values should be included in the filter (and considered as expired)
include_null: bool = False
@property
def cut_off_date(self):
"""Absolute date to use as as the cut-off (records older than this will be deleted)."""
if isinstance(self.age_threshold, datetime):
return self.age_threshold
return today(tzinfo=utc) - self.age_threshold
def as_q(self):
"""Returns a Q object for this filter."""
range_kwargs = {
f'{self.date_field}__lt': self.cut_off_date,
}
q = Q(**range_kwargs)
if self.include_null:
isnull_kwargs = {
f'{self.date_field}__isnull': True,
}
q |= Q(**isnull_kwargs)
return q
class ModelCleanupConfig(NamedTuple):
"""
Clean-up configuration for a model.
Defines the criteria for determining which records should be cleaned up.
"""
# The filters to apply to the model to determine the records to clean up.
# The filters will be combined using an AND operator, so records will only be
# cleaned up if they match all of the filters
filters: Sequence[DatetimeLessThanCleanupFilter]
# Fields (e.g. `Company.get_meta('interactions')`) to ignore when checking for
# referencing objects
excluded_relations: Sequence[Any] = ()
# Filters that referencing objects must match (where they exist). The keys are
# model fields e.g. Company._meta.get_field('interactions'). If multiple filters
# are specified for a field, they are combined using the AND operator
relation_filter_mapping: Mapping[Any, Sequence[DatetimeLessThanCleanupFilter]] = None
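# Illustrative config instance ('modified_on' is a hypothetical field name,
# not tied to a specific model): expire records whose modified_on is more
# than five years old, counting NULL values as expired too.
_EXAMPLE_CONFIG = ModelCleanupConfig(
    filters=(
        DatetimeLessThanCleanupFilter(
            date_field='modified_on',
            age_threshold=relativedelta(years=5),
            include_null=True,
        ),
    ),
)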
| 36 | 95 | 0.696237 | 291 | 2,232 | 5.243986 | 0.415808 | 0.039318 | 0.031455 | 0.026212 | 0.057667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.230735 | 2,232 | 61 | 96 | 36.590164 | 0.888759 | 0.433692 | 0 | 0 | 0 | 0 | 0.037736 | 0.037736 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.2 | 0 | 0.633333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4711f0f7f0f4b6e8dfb47ed59210af2da72647a5 | 39,693 | py | Python | cinder/tests/unit/volume/drivers/test_hgst.py | rackerlabs/cinder | 4295ff0a64f781c3546f6c6e0816dbb8100133cb | [
"Apache-2.0"
] | 1 | 2019-02-08T05:24:58.000Z | 2019-02-08T05:24:58.000Z | cinder/tests/unit/volume/drivers/test_hgst.py | rackerlabs/cinder | 4295ff0a64f781c3546f6c6e0816dbb8100133cb | [
"Apache-2.0"
] | 1 | 2021-03-21T11:38:29.000Z | 2021-03-21T11:38:29.000Z | cinder/tests/unit/volume/drivers/test_hgst.py | rackerlabs/cinder | 4295ff0a64f781c3546f6c6e0816dbb8100133cb | [
"Apache-2.0"
] | 15 | 2017-01-12T10:35:10.000Z | 2019-04-19T08:22:10.000Z | # Copyright (c) 2015 HGST Inc
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from oslo_concurrency import processutils
from cinder import context
from cinder import exception
from cinder import test
from cinder.volume import configuration as conf
from cinder.volume.drivers.hgst import HGSTDriver
from cinder.volume import volume_types
class HGSTTestCase(test.TestCase):
# Need to mock these since we use them on driver creation
@mock.patch('pwd.getpwnam', return_value=1)
@mock.patch('grp.getgrnam', return_value=1)
@mock.patch('socket.gethostbyname', return_value='123.123.123.123')
def setUp(self, mock_ghn, mock_grnam, mock_pwnam):
"""Set up UUT and all the flags required for later fake_executes."""
super(HGSTTestCase, self).setUp()
self.stubs.Set(processutils, 'execute', self._fake_execute)
self._fail_vgc_cluster = False
self._fail_ip = False
self._fail_network_list = False
self._fail_domain_list = False
self._empty_domain_list = False
self._fail_host_storage = False
self._fail_space_list = False
self._fail_space_delete = False
self._fail_set_apphosts = False
self._fail_extend = False
self._request_cancel = False
self._return_blocked = 0
self.configuration = mock.Mock(spec=conf.Configuration)
self.configuration.safe_get = self._fake_safe_get
self._reset_configuration()
self.driver = HGSTDriver(configuration=self.configuration,
execute=self._fake_execute)
def _fake_safe_get(self, value):
"""Don't throw exception on missing parameters, return None."""
try:
val = getattr(self.configuration, value)
except AttributeError:
val = None
return val
def _reset_configuration(self):
"""Set safe and sane values for config params."""
self.configuration.num_volume_device_scan_tries = 1
self.configuration.volume_dd_blocksize = '1M'
self.configuration.volume_backend_name = 'hgst-1'
self.configuration.hgst_storage_servers = 'stor1:gbd0,stor2:gbd0'
self.configuration.hgst_net = 'net1'
self.configuration.hgst_redundancy = '0'
self.configuration.hgst_space_user = 'kane'
self.configuration.hgst_space_group = 'xanadu'
self.configuration.hgst_space_mode = '0777'
def _parse_space_create(self, *cmd):
"""Eats a vgc-cluster space-create command line to a dict."""
self.created = {'storageserver': ''}
cmd = list(*cmd)
while cmd:
param = cmd.pop(0)
if param == "-n":
self.created['name'] = cmd.pop(0)
elif param == "-N":
self.created['net'] = cmd.pop(0)
elif param == "-s":
self.created['size'] = cmd.pop(0)
elif param == "--redundancy":
self.created['redundancy'] = cmd.pop(0)
elif param == "--user":
self.created['user'] = cmd.pop(0)
elif param == "--user":
self.created['user'] = cmd.pop(0)
elif param == "--group":
self.created['group'] = cmd.pop(0)
elif param == "--mode":
self.created['mode'] = cmd.pop(0)
elif param == "-S":
self.created['storageserver'] += cmd.pop(0) + ","
else:
pass
def _parse_space_extend(self, *cmd):
"""Eats a vgc-cluster space-extend commandline to a dict."""
self.extended = {'storageserver': ''}
cmd = list(*cmd)
while cmd:
param = cmd.pop(0)
if param == "-n":
self.extended['name'] = cmd.pop(0)
elif param == "-s":
self.extended['size'] = cmd.pop(0)
elif param == "-S":
self.extended['storageserver'] += cmd.pop(0) + ","
else:
pass
if self._fail_extend:
raise processutils.ProcessExecutionError(exit_code=1)
else:
return '', ''
def _parse_space_delete(self, *cmd):
"""Eats a vgc-cluster space-delete commandline to a dict."""
self.deleted = {}
cmd = list(*cmd)
while cmd:
param = cmd.pop(0)
if param == "-n":
self.deleted['name'] = cmd.pop(0)
else:
pass
if self._fail_space_delete:
raise processutils.ProcessExecutionError(exit_code=1)
else:
return '', ''
def _parse_space_list(self, *cmd):
"""Eats a vgc-cluster space-list commandline to a dict."""
json = False
nameOnly = False
cmd = list(*cmd)
while cmd:
param = cmd.pop(0)
if param == "--json":
json = True
elif param == "--name-only":
nameOnly = True
elif param == "-n":
pass # Don't use the name here...
else:
pass
if self._fail_space_list:
raise processutils.ProcessExecutionError(exit_code=1)
elif nameOnly:
return "space1\nspace2\nvolume1\n", ''
elif json:
return HGST_SPACE_JSON, ''
else:
return '', ''
def _parse_network_list(self, *cmd):
"""Eat a network-list command and return error or results."""
if self._fail_network_list:
raise processutils.ProcessExecutionError(exit_code=1)
else:
return NETWORK_LIST, ''
def _parse_domain_list(self, *cmd):
"""Eat a domain-list command and return error, empty, or results."""
if self._fail_domain_list:
raise processutils.ProcessExecutionError(exit_code=1)
elif self._empty_domain_list:
return '', ''
else:
return "thisserver\nthatserver\nanotherserver\n", ''
def _fake_execute(self, *cmd, **kwargs):
"""Sudo hook to catch commands to allow running on all hosts."""
cmdlist = list(cmd)
exe = cmdlist.pop(0)
if exe == 'vgc-cluster':
exe = cmdlist.pop(0)
if exe == "request-cancel":
self._request_cancel = True
if self._return_blocked > 0:
return 'Request cancelled', ''
else:
raise processutils.ProcessExecutionError(exit_code=1)
elif self._fail_vgc_cluster:
raise processutils.ProcessExecutionError(exit_code=1)
elif exe == "--version":
return "HGST Solutions V2.5.0.0.x.x.x.x.x", ''
elif exe == "space-list":
return self._parse_space_list(cmdlist)
elif exe == "space-create":
self._parse_space_create(cmdlist)
if self._return_blocked > 0:
self._return_blocked = self._return_blocked - 1
out = "VGC_CREATE_000002\nBLOCKED\n"
raise processutils.ProcessExecutionError(stdout=out,
exit_code=1)
return '', ''
elif exe == "space-delete":
return self._parse_space_delete(cmdlist)
elif exe == "space-extend":
return self._parse_space_extend(cmdlist)
elif exe == "host-storage":
if self._fail_host_storage:
raise processutils.ProcessExecutionError(exit_code=1)
return HGST_HOST_STORAGE, ''
elif exe == "domain-list":
return self._parse_domain_list()
elif exe == "network-list":
return self._parse_network_list()
elif exe == "space-set-apphosts":
if self._fail_set_apphosts:
raise processutils.ProcessExecutionError(exit_code=1)
return '', ''
else:
raise NotImplementedError
elif exe == 'ip':
if self._fail_ip:
raise processutils.ProcessExecutionError(exit_code=1)
else:
return IP_OUTPUT, ''
elif exe == 'dd':
self.dd_count = -1
for p in cmdlist:
if 'count=' in p:
self.dd_count = int(p[6:])
return DD_OUTPUT, ''
else:
return '', ''
@mock.patch('pwd.getpwnam', return_value=1)
@mock.patch('grp.getgrnam', return_value=1)
@mock.patch('socket.gethostbyname', return_value='123.123.123.123')
def test_vgc_cluster_not_present(self, mock_ghn, mock_grnam, mock_pwnam):
"""Test exception when vgc-cluster returns an error."""
# Should pass
self._fail_vgc_cluster = False
self.driver.check_for_setup_error()
# Should throw exception
self._fail_vgc_cluster = True
self.assertRaises(exception.VolumeDriverException,
self.driver.check_for_setup_error)
@mock.patch('pwd.getpwnam', return_value=1)
@mock.patch('grp.getgrnam', return_value=1)
@mock.patch('socket.gethostbyname', return_value='123.123.123.123')
def test_parameter_redundancy_invalid(self, mock_ghn, mock_grnam,
mock_pwnam):
"""Test when hgst_redundancy config parameter not 0 or 1."""
# Should pass
self.driver.check_for_setup_error()
# Should throw exceptions
self.configuration.hgst_redundancy = ''
self.assertRaises(exception.VolumeDriverException,
self.driver.check_for_setup_error)
self.configuration.hgst_redundancy = 'Fred'
self.assertRaises(exception.VolumeDriverException,
self.driver.check_for_setup_error)
@mock.patch('pwd.getpwnam', return_value=1)
@mock.patch('grp.getgrnam', return_value=1)
@mock.patch('socket.gethostbyname', return_value='123.123.123.123')
def test_parameter_user_invalid(self, mock_ghn, mock_grnam, mock_pwnam):
"""Test exception when hgst_space_user doesn't map to UNIX user."""
# Should pass
self.driver.check_for_setup_error()
# Should throw exceptions
mock_pwnam.side_effect = KeyError()
self.configuration.hgst_space_user = ''
self.assertRaises(exception.VolumeDriverException,
self.driver.check_for_setup_error)
self.configuration.hgst_space_user = 'Fred!`'
self.assertRaises(exception.VolumeDriverException,
self.driver.check_for_setup_error)
@mock.patch('pwd.getpwnam', return_value=1)
@mock.patch('grp.getgrnam', return_value=1)
@mock.patch('socket.gethostbyname', return_value='123.123.123.123')
def test_parameter_group_invalid(self, mock_ghn, mock_grnam, mock_pwnam):
"""Test exception when hgst_space_group doesn't map to UNIX group."""
# Should pass
self.driver.check_for_setup_error()
# Should throw exceptions
mock_grnam.side_effect = KeyError()
self.configuration.hgst_space_group = ''
self.assertRaises(exception.VolumeDriverException,
self.driver.check_for_setup_error)
self.configuration.hgst_space_group = 'Fred!`'
self.assertRaises(exception.VolumeDriverException,
self.driver.check_for_setup_error)
@mock.patch('pwd.getpwnam', return_value=1)
@mock.patch('grp.getgrnam', return_value=1)
@mock.patch('socket.gethostbyname', return_value='123.123.123.123')
def test_parameter_mode_invalid(self, mock_ghn, mock_grnam, mock_pwnam):
"""Test exception when mode for created spaces isn't proper format."""
# Should pass
self.driver.check_for_setup_error()
# Should throw exceptions
self.configuration.hgst_space_mode = ''
self.assertRaises(exception.VolumeDriverException,
self.driver.check_for_setup_error)
self.configuration.hgst_space_mode = 'Fred'
self.assertRaises(exception.VolumeDriverException,
self.driver.check_for_setup_error)
@mock.patch('pwd.getpwnam', return_value=1)
@mock.patch('grp.getgrnam', return_value=1)
@mock.patch('socket.gethostbyname', return_value='123.123.123.123')
def test_parameter_net_invalid(self, mock_ghn, mock_grnam, mock_pwnam):
"""Test exception when hgst_net not in the domain."""
# Should pass
self.driver.check_for_setup_error()
# Should throw exceptions
self._fail_network_list = True
self.configuration.hgst_net = 'Fred'
self.assertRaises(exception.VolumeDriverException,
self.driver.check_for_setup_error)
self._fail_network_list = False
@mock.patch('pwd.getpwnam', return_value=1)
@mock.patch('grp.getgrnam', return_value=1)
@mock.patch('socket.gethostbyname', return_value='123.123.123.123')
def test_ip_addr_fails(self, mock_ghn, mock_grnam, mock_pwnam):
"""Test exception when IP ADDR command fails."""
# Should pass
self.driver.check_for_setup_error()
# Throw exception, need to clear internal cached host in driver
self._fail_ip = True
self.driver._vgc_host = None
self.assertRaises(exception.VolumeDriverException,
self.driver.check_for_setup_error)
@mock.patch('pwd.getpwnam', return_value=1)
@mock.patch('grp.getgrnam', return_value=1)
@mock.patch('socket.gethostbyname', return_value='123.123.123.123')
def test_domain_list_fails(self, mock_ghn, mock_grnam, mock_pwnam):
"""Test exception when domain-list fails for the domain."""
# Should pass
self.driver.check_for_setup_error()
# Throw exception, need to clear internal cached host in driver
self._fail_domain_list = True
self.driver._vgc_host = None
self.assertRaises(exception.VolumeDriverException,
self.driver.check_for_setup_error)
@mock.patch('pwd.getpwnam', return_value=1)
@mock.patch('grp.getgrnam', return_value=1)
@mock.patch('socket.gethostbyname', return_value='123.123.123.123')
def test_not_in_domain(self, mock_ghn, mock_grnam, mock_pwnam):
"""Test exception when Cinder host not domain member."""
# Should pass
self.driver.check_for_setup_error()
# Throw exception, need to clear internal cached host in driver
self._empty_domain_list = True
self.driver._vgc_host = None
self.assertRaises(exception.VolumeDriverException,
self.driver.check_for_setup_error)
@mock.patch('pwd.getpwnam', return_value=1)
@mock.patch('grp.getgrnam', return_value=1)
@mock.patch('socket.gethostbyname', return_value='123.123.123.123')
def test_parameter_storageservers_invalid(self, mock_ghn, mock_grnam,
mock_pwnam):
"""Test exception when the storage servers are invalid/missing."""
# Should pass
self.driver.check_for_setup_error()
# Storage_hosts missing
self.configuration.hgst_storage_servers = ''
self.assertRaises(exception.VolumeDriverException,
self.driver.check_for_setup_error)
# missing a : between host and devnode
self.configuration.hgst_storage_servers = 'stor1,stor2'
self.assertRaises(exception.VolumeDriverException,
self.driver.check_for_setup_error)
# missing a : between host and devnode
self.configuration.hgst_storage_servers = 'stor1:gbd0,stor2'
self.assertRaises(exception.VolumeDriverException,
self.driver.check_for_setup_error)
# Host not in cluster
self.configuration.hgst_storage_servers = 'stor1:gbd0'
self._fail_host_storage = True
self.assertRaises(exception.VolumeDriverException,
self.driver.check_for_setup_error)
def test_update_volume_stats(self):
"""Get cluster space available, should pass."""
actual = self.driver.get_volume_stats(True)
self.assertEqual('HGST', actual['vendor_name'])
self.assertEqual('hgst', actual['storage_protocol'])
self.assertEqual(90, actual['total_capacity_gb'])
self.assertEqual(87, actual['free_capacity_gb'])
self.assertEqual(0, actual['reserved_percentage'])
def test_update_volume_stats_redundancy(self):
"""Get cluster space available, half-sized - 1 for mirrors."""
self.configuration.hgst_redundancy = '1'
actual = self.driver.get_volume_stats(True)
self.assertEqual('HGST', actual['vendor_name'])
self.assertEqual('hgst', actual['storage_protocol'])
self.assertEqual(44, actual['total_capacity_gb'])
self.assertEqual(43, actual['free_capacity_gb'])
self.assertEqual(0, actual['reserved_percentage'])
def test_update_volume_stats_cached(self):
"""Get cached cluster space, should not call executable."""
self._fail_host_storage = True
actual = self.driver.get_volume_stats(False)
self.assertEqual('HGST', actual['vendor_name'])
self.assertEqual('hgst', actual['storage_protocol'])
self.assertEqual(90, actual['total_capacity_gb'])
self.assertEqual(87, actual['free_capacity_gb'])
self.assertEqual(0, actual['reserved_percentage'])
def test_update_volume_stats_error(self):
"""Test that when host-storage gives an error, return unknown."""
self._fail_host_storage = True
actual = self.driver.get_volume_stats(True)
self.assertEqual('HGST', actual['vendor_name'])
self.assertEqual('hgst', actual['storage_protocol'])
self.assertEqual('unknown', actual['total_capacity_gb'])
self.assertEqual('unknown', actual['free_capacity_gb'])
self.assertEqual(0, actual['reserved_percentage'])
@mock.patch('socket.gethostbyname', return_value='123.123.123.123')
def test_create_volume(self, mock_ghn):
"""Test volume creation, ensure appropriate size expansion/name."""
ctxt = context.get_admin_context()
extra_specs = {}
type_ref = volume_types.create(ctxt, 'hgst-1', extra_specs)
volume = {'id': '1', 'name': 'volume1',
'display_name': '',
'volume_type_id': type_ref['id'],
'size': 10}
ret = self.driver.create_volume(volume)
expected = {'redundancy': '0', 'group': 'xanadu',
'name': 'volume10', 'mode': '0777',
'user': 'kane', 'net': 'net1',
'storageserver': 'stor1:gbd0,stor2:gbd0,',
'size': '12'}
self.assertDictMatch(expected, self.created)
        # Check the returned provider; note that the provider_id is hashed
expected_pid = {'provider_id': 'volume10'}
self.assertDictMatch(expected_pid, ret)
@mock.patch('socket.gethostbyname', return_value='123.123.123.123')
def test_create_volume_name_creation_fail(self, mock_ghn):
"""Test volume creation exception when can't make a hashed name."""
ctxt = context.get_admin_context()
extra_specs = {}
type_ref = volume_types.create(ctxt, 'hgst-1', extra_specs)
volume = {'id': '1', 'name': 'volume1',
'display_name': '',
'volume_type_id': type_ref['id'],
'size': 10}
self._fail_space_list = True
self.assertRaises(exception.VolumeDriverException,
self.driver.create_volume, volume)
@mock.patch('socket.gethostbyname', return_value='123.123.123.123')
def test_create_snapshot(self, mock_ghn):
"""Test creating a snapshot, ensure full data of original copied."""
# Now snapshot the volume and check commands
snapshot = {'volume_name': 'volume10',
'volume_id': 'xxx', 'display_name': 'snap10',
'name': '123abc', 'volume_size': 10, 'id': '123abc',
'volume': {'provider_id': 'space10'}}
ret = self.driver.create_snapshot(snapshot)
        # We must copy the entire underlying storage, ~12 GB, not just 10 GB:
        # 12e9 bytes / (1 MiB dd blocksize) ~= 11444 blocks.
self.assertEqual(11444, self.dd_count)
# Check space-create command
expected = {'redundancy': '0', 'group': 'xanadu',
'name': snapshot['display_name'], 'mode': '0777',
'user': 'kane', 'net': 'net1',
'storageserver': 'stor1:gbd0,stor2:gbd0,',
'size': '12'}
self.assertDictMatch(expected, self.created)
# Check the returned provider
expected_pid = {'provider_id': 'snap10'}
self.assertDictMatch(expected_pid, ret)
@mock.patch('socket.gethostbyname', return_value='123.123.123.123')
def test_create_cloned_volume(self, mock_ghn):
"""Test creating a clone, ensure full size is copied from original."""
ctxt = context.get_admin_context()
extra_specs = {}
type_ref = volume_types.create(ctxt, 'hgst-1', extra_specs)
orig = {'id': '1', 'name': 'volume1', 'display_name': '',
'volume_type_id': type_ref['id'], 'size': 10,
'provider_id': 'space_orig'}
clone = {'id': '2', 'name': 'clone1', 'display_name': '',
'volume_type_id': type_ref['id'], 'size': 10}
pid = self.driver.create_cloned_volume(clone, orig)
        # We must copy the entire underlying storage, ~12 GB, not just 10 GB
self.assertEqual(11444, self.dd_count)
# Check space-create command
expected = {'redundancy': '0', 'group': 'xanadu',
'name': 'clone1', 'mode': '0777',
'user': 'kane', 'net': 'net1',
'storageserver': 'stor1:gbd0,stor2:gbd0,',
'size': '12'}
self.assertDictMatch(expected, self.created)
# Check the returned provider
expected_pid = {'provider_id': 'clone1'}
self.assertDictMatch(expected_pid, pid)
@mock.patch('socket.gethostbyname', return_value='123.123.123.123')
def test_add_cinder_apphosts_fails(self, mock_ghn):
"""Test exception when set-apphost can't connect volume to host."""
ctxt = context.get_admin_context()
extra_specs = {}
type_ref = volume_types.create(ctxt, 'hgst-1', extra_specs)
orig = {'id': '1', 'name': 'volume1', 'display_name': '',
'volume_type_id': type_ref['id'], 'size': 10,
'provider_id': 'space_orig'}
clone = {'id': '2', 'name': 'clone1', 'display_name': '',
'volume_type_id': type_ref['id'], 'size': 10}
self._fail_set_apphosts = True
self.assertRaises(exception.VolumeDriverException,
self.driver.create_cloned_volume, clone, orig)
@mock.patch('socket.gethostbyname', return_value='123.123.123.123')
def test_create_volume_from_snapshot(self, mock_ghn):
"""Test creating volume from snapshot, ensure full space copy."""
ctxt = context.get_admin_context()
extra_specs = {}
type_ref = volume_types.create(ctxt, 'hgst-1', extra_specs)
snap = {'id': '1', 'name': 'volume1', 'display_name': '',
'volume_type_id': type_ref['id'], 'size': 10,
'provider_id': 'space_orig'}
volume = {'id': '2', 'name': 'volume2', 'display_name': '',
'volume_type_id': type_ref['id'], 'size': 10}
pid = self.driver.create_volume_from_snapshot(volume, snap)
        # We must copy the entire underlying storage, ~12 GB, not just 10 GB
self.assertEqual(11444, self.dd_count)
# Check space-create command
expected = {'redundancy': '0', 'group': 'xanadu',
'name': 'volume2', 'mode': '0777',
'user': 'kane', 'net': 'net1',
'storageserver': 'stor1:gbd0,stor2:gbd0,',
'size': '12'}
self.assertDictMatch(expected, self.created)
# Check the returned provider
expected_pid = {'provider_id': 'volume2'}
self.assertDictMatch(expected_pid, pid)
@mock.patch('socket.gethostbyname', return_value='123.123.123.123')
def test_create_volume_blocked(self, mock_ghn):
"""Test volume creation where only initial space-create is blocked.
        This should actually pass: the create is blocked, but request-cancel
        returns an error, meaning the request got unblocked before we could
        kill the space request.
"""
ctxt = context.get_admin_context()
extra_specs = {}
type_ref = volume_types.create(ctxt, 'hgst-1', extra_specs)
volume = {'id': '1', 'name': 'volume1',
'display_name': '',
'volume_type_id': type_ref['id'],
'size': 10}
self._return_blocked = 1 # Block & fail cancel => create succeeded
ret = self.driver.create_volume(volume)
expected = {'redundancy': '0', 'group': 'xanadu',
'name': 'volume10', 'mode': '0777',
'user': 'kane', 'net': 'net1',
'storageserver': 'stor1:gbd0,stor2:gbd0,',
'size': '12'}
self.assertDictMatch(expected, self.created)
# Check the returned provider
expected_pid = {'provider_id': 'volume10'}
self.assertDictMatch(expected_pid, ret)
self.assertTrue(self._request_cancel)
@mock.patch('socket.gethostbyname', return_value='123.123.123.123')
def test_create_volume_blocked_and_fail(self, mock_ghn):
"""Test volume creation where space-create blocked permanently.
This should fail because the initial create was blocked and the
request-cancel succeeded, meaning the create operation never
completed.
"""
ctxt = context.get_admin_context()
extra_specs = {}
type_ref = volume_types.create(ctxt, 'hgst-1', extra_specs)
volume = {'id': '1', 'name': 'volume1',
'display_name': '',
'volume_type_id': type_ref['id'],
'size': 10}
self._return_blocked = 2 # Block & pass cancel => create failed. :(
self.assertRaises(exception.VolumeDriverException,
self.driver.create_volume, volume)
self.assertTrue(self._request_cancel)
def test_delete_volume(self):
"""Test deleting existing volume, ensure proper name used."""
ctxt = context.get_admin_context()
extra_specs = {}
type_ref = volume_types.create(ctxt, 'hgst-1', extra_specs)
volume = {'id': '1', 'name': 'volume1',
'display_name': '',
'volume_type_id': type_ref['id'],
'size': 10,
'provider_id': 'volume10'}
self.driver.delete_volume(volume)
expected = {'name': 'volume10'}
self.assertDictMatch(expected, self.deleted)
def test_delete_volume_failure_modes(self):
"""Test cases where space-delete fails, but OS delete is still OK."""
ctxt = context.get_admin_context()
extra_specs = {}
type_ref = volume_types.create(ctxt, 'hgst-1', extra_specs)
volume = {'id': '1', 'name': 'volume1',
'display_name': '',
'volume_type_id': type_ref['id'],
'size': 10,
'provider_id': 'volume10'}
self._fail_space_delete = True
# This should not throw an exception, space-delete failure not problem
self.driver.delete_volume(volume)
self._fail_space_delete = False
volume['provider_id'] = None
# This should also not throw an exception
self.driver.delete_volume(volume)
def test_delete_snapshot(self):
"""Test deleting a snapshot, ensure proper name is removed."""
ctxt = context.get_admin_context()
extra_specs = {}
type_ref = volume_types.create(ctxt, 'hgst-1', extra_specs)
snapshot = {'id': '1', 'name': 'volume1',
'display_name': '',
'volume_type_id': type_ref['id'],
'size': 10,
'provider_id': 'snap10'}
self.driver.delete_snapshot(snapshot)
expected = {'name': 'snap10'}
self.assertDictMatch(expected, self.deleted)
def test_extend_volume(self):
"""Test extending a volume, check the size in GB vs. GiB."""
ctxt = context.get_admin_context()
extra_specs = {}
type_ref = volume_types.create(ctxt, 'hgst-1', extra_specs)
volume = {'id': '1', 'name': 'volume1',
'display_name': '',
'volume_type_id': type_ref['id'],
'size': 10,
'provider_id': 'volume10'}
self.extended = {'name': '', 'size': '0',
'storageserver': ''}
self.driver.extend_volume(volume, 12)
expected = {'name': 'volume10', 'size': '2',
'storageserver': 'stor1:gbd0,stor2:gbd0,'}
self.assertDictMatch(expected, self.extended)
def test_extend_volume_noextend(self):
"""Test extending a volume where Space does not need to be enlarged.
Because Spaces are generated somewhat larger than the requested size
from OpenStack due to the base10(HGST)/base2(OS) mismatch, they can
sometimes be larger than requested from OS. In that case a
volume_extend may actually be a noop since the volume is already large
enough to satisfy OS's request.
"""
ctxt = context.get_admin_context()
extra_specs = {}
type_ref = volume_types.create(ctxt, 'hgst-1', extra_specs)
volume = {'id': '1', 'name': 'volume1',
'display_name': '',
'volume_type_id': type_ref['id'],
'size': 10,
'provider_id': 'volume10'}
self.extended = {'name': '', 'size': '0',
'storageserver': ''}
self.driver.extend_volume(volume, 10)
expected = {'name': '', 'size': '0',
'storageserver': ''}
self.assertDictMatch(expected, self.extended)
def test_space_list_fails(self):
"""Test exception is thrown when we can't call space-list."""
ctxt = context.get_admin_context()
extra_specs = {}
type_ref = volume_types.create(ctxt, 'hgst-1', extra_specs)
volume = {'id': '1', 'name': 'volume1',
'display_name': '',
'volume_type_id': type_ref['id'],
'size': 10,
'provider_id': 'volume10'}
self.extended = {'name': '', 'size': '0',
'storageserver': ''}
self._fail_space_list = True
self.assertRaises(exception.VolumeDriverException,
self.driver.extend_volume, volume, 12)
def test_cli_error_not_blocked(self):
"""Test the _blocked handler's handlinf of a non-blocked error.
The _handle_blocked handler is called on any process errors in the
code. If the error was not caused by a blocked command condition
(syntax error, out of space, etc.) then it should just throw the
exception and not try and retry the command.
"""
ctxt = context.get_admin_context()
extra_specs = {}
type_ref = volume_types.create(ctxt, 'hgst-1', extra_specs)
volume = {'id': '1', 'name': 'volume1',
'display_name': '',
'volume_type_id': type_ref['id'],
'size': 10,
'provider_id': 'volume10'}
self.extended = {'name': '', 'size': '0',
'storageserver': ''}
self._fail_extend = True
self.assertRaises(exception.VolumeDriverException,
self.driver.extend_volume, volume, 12)
self.assertFalse(self._request_cancel)
@mock.patch('socket.gethostbyname', return_value='123.123.123.123')
    def test_initialize_connection(self, mock_ghn):
"""Test that the connection_info for Nova makes sense."""
volume = {'name': '123', 'provider_id': 'spacey'}
conn = self.driver.initialize_connection(volume, None)
expected = {'name': 'spacey', 'noremovehost': 'thisserver'}
self.assertDictMatch(expected, conn['data'])
# Below are some command outputs we emulate
IP_OUTPUT = """
3: em2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state
link/ether 00:25:90:d9:18:09 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.23/24 brd 192.168.0.255 scope global em2
valid_lft forever preferred_lft forever
inet6 fe80::225:90ff:fed9:1809/64 scope link
valid_lft forever preferred_lft forever
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 123.123.123.123/8 scope host lo
valid_lft forever preferred_lft forever
inet 169.254.169.254/32 scope link lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master
link/ether 00:25:90:d9:18:08 brd ff:ff:ff:ff:ff:ff
inet6 fe80::225:90ff:fed9:1808/64 scope link
valid_lft forever preferred_lft forever
"""
HGST_HOST_STORAGE = """
{
"hostStatus": [
{
"node": "tm33.virident.info",
"up": true,
"isManager": true,
"cardStatus": [
{
"cardName": "/dev/sda3",
"cardSerialNumber": "002f09b4037a9d521c007ee4esda3",
"cardStatus": "Good",
"cardStateDetails": "Normal",
"cardActionRequired": "",
"cardTemperatureC": 0,
"deviceType": "Generic",
"cardTemperatureState": "Safe",
"partitionStatus": [
{
"partName": "/dev/gbd0",
"partitionState": "READY",
"usableCapacityBytes": 98213822464,
"totalReadBytes": 0,
"totalWriteBytes": 0,
"remainingLifePCT": 100,
"flashReservesLeftPCT": 100,
"fmc": true,
"vspaceCapacityAvailable": 94947041280,
"vspaceReducedCapacityAvailable": 87194279936,
"_partitionID": "002f09b4037a9d521c007ee4esda3:0",
"_usedSpaceBytes": 3266781184,
"_enabledSpaceBytes": 3266781184,
"_disabledSpaceBytes": 0
}
]
}
],
"driverStatus": {
"vgcdriveDriverLoaded": true,
"vhaDriverLoaded": true,
"vcacheDriverLoaded": true,
"vlvmDriverLoaded": true,
"ipDataProviderLoaded": true,
"ibDataProviderLoaded": false,
"driverUptimeSecs": 4800,
"rVersion": "20368.d55ec22.master"
},
"totalCapacityBytes": 98213822464,
"totalUsedBytes": 3266781184,
"totalEnabledBytes": 3266781184,
"totalDisabledBytes": 0
},
{
"node": "tm32.virident.info",
"up": true,
"isManager": false,
"cardStatus": [],
"driverStatus": {
"vgcdriveDriverLoaded": true,
"vhaDriverLoaded": true,
"vcacheDriverLoaded": true,
"vlvmDriverLoaded": true,
"ipDataProviderLoaded": true,
"ibDataProviderLoaded": false,
"driverUptimeSecs": 0,
"rVersion": "20368.d55ec22.master"
},
"totalCapacityBytes": 0,
"totalUsedBytes": 0,
"totalEnabledBytes": 0,
"totalDisabledBytes": 0
}
],
"totalCapacityBytes": 98213822464,
"totalUsedBytes": 3266781184,
"totalEnabledBytes": 3266781184,
"totalDisabledBytes": 0
}
"""
HGST_SPACE_JSON = """
{
"resources": [
{
"resourceType": "vLVM-L",
"resourceID": "vLVM-L:698cdb43-54da-863e-1699-294a080ce4db",
"state": "OFFLINE",
"instanceStates": {},
"redundancy": 0,
"sizeBytes": 12000000000,
"name": "volume10",
"nodes": [],
"networks": [
"net1"
],
"components": [
{
"resourceType": "vLVM-S",
"resourceID": "vLVM-S:698cdb43-54da-863e-eb10-6275f47b8ed2",
"redundancy": 0,
"order": 0,
"sizeBytes": 12000000000,
"numStripes": 1,
"stripeSizeBytes": null,
"name": "volume10s00",
"state": "OFFLINE",
"instanceStates": {},
"components": [
{
"name": "volume10h00",
"resourceType": "vHA",
"resourceID": "vHA:3e86da54-40db-8c69-0300-0000ac10476e",
"redundancy": 0,
"sizeBytes": 12000000000,
"state": "GOOD",
"components": [
{
"name": "volume10h00",
"vspaceType": "vHA",
"vspaceRole": "primary",
"storageObjectID": "vHA:3e86da54-40db-8c69--18130019e486",
"state": "Disconnected (DCS)",
"node": "tm33.virident.info",
"partName": "/dev/gbd0"
}
],
"crState": "GOOD"
},
{
"name": "volume10v00",
"resourceType": "vShare",
"resourceID": "vShare:3f86da54-41db-8c69-0300-ecf4bbcc14cc",
"redundancy": 0,
"order": 0,
"sizeBytes": 12000000000,
"state": "GOOD",
"components": [
{
"name": "volume10v00",
"vspaceType": "vShare",
"vspaceRole": "target",
"storageObjectID": "vShare:3f86da54-41db-8c64bbcc14cc:T",
"state": "Started",
"node": "tm33.virident.info",
"partName": "/dev/gbd0_volume10h00"
}
]
}
]
}
],
"_size": "12GB",
"_state": "OFFLINE",
"_ugm": "",
"_nets": "net1",
"_hosts": "tm33.virident.info(12GB,NC)",
"_ahosts": "",
"_shosts": "tm33.virident.info(12GB)",
"_name": "volume10",
"_node": "",
"_type": "vLVM-L",
"_detail": "vLVM-L:698cdb43-54da-863e-1699-294a080ce4db",
"_device": ""
}
]
}
"""
NETWORK_LIST = """
Network Name Type Flags Description
------------ ---- ---------- ------------------------
net1 IPv4 autoConfig 192.168.0.0/24 1Gb/s
net2 IPv4 autoConfig 192.168.10.0/24 10Gb/s
"""
DD_OUTPUT = """
1+0 records in
1+0 records out
1024 bytes (1.0 kB) copied, 0.000427529 s, 2.4 MB/s
"""
| 42.407051 | 78 | 0.586149 | 4,307 | 39,693 | 5.212213 | 0.142094 | 0.016838 | 0.016838 | 0.021649 | 0.648492 | 0.603011 | 0.580471 | 0.52813 | 0.490668 | 0.477126 | 0 | 0.043752 | 0.292319 | 39,693 | 935 | 79 | 42.452406 | 0.755429 | 0.128436 | 0 | 0.550781 | 0 | 0.00651 | 0.293947 | 0.043472 | 0 | 0 | 0 | 0 | 0.082031 | 1 | 0.052083 | false | 0.00651 | 0.010417 | 0 | 0.092448 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
47145db2bd97cca1e96b5c6c5f106f848f49e927 | 1,526 | py | Python | p287m/find_duplicate.py | l33tdaima/l33tdaima | 0a7a9573dc6b79e22dcb54357493ebaaf5e0aa90 | [
"MIT"
] | 1 | 2020-02-20T12:04:46.000Z | 2020-02-20T12:04:46.000Z | p287m/find_duplicate.py | l33tdaima/l33tdaima | 0a7a9573dc6b79e22dcb54357493ebaaf5e0aa90 | [
"MIT"
] | null | null | null | p287m/find_duplicate.py | l33tdaima/l33tdaima | 0a7a9573dc6b79e22dcb54357493ebaaf5e0aa90 | [
"MIT"
] | null | null | null | from typing import List
class Solution:
def findDuplicate(self, nums: List[int]) -> int:
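        # Binary search over the *value* range [1, n-1], not over indices.
        # Count entries in [lo, mid): by the pigeonhole principle, if that
        # count exceeds the half-range size, the duplicate lies there, and
        # eq > 1 means mid itself is the duplicate.
        # O(n log n) time, O(1) extra space, nums left unmodified.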
lo, hi = 1, len(nums) - 1
while lo < hi:
mid = (lo + hi) // 2
lt, eq = 0, 0
for n in nums:
if n == mid:
eq += 1
elif lo <= n < mid:
lt += 1
# print(lo, hi, mid, lt, eq)
if eq > 1:
return mid
if lt <= mid - lo:
lo = mid + 1
else:
hi = mid - 1
return lo
def findDuplicateON(self, nums: List[int]) -> int:
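        # Floyd's cycle detection: treat nums as a function i -> nums[i].
        # A repeated value makes the iteration x, nums[x], nums[nums[x]], ...
        # enter a cycle whose entrance is exactly the duplicate.
        # O(n) time, O(1) extra space.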
# Find the intersection point of the two runners.
tortoise = hare = nums[0]
while True:
tortoise = nums[tortoise]
hare = nums[nums[hare]]
if tortoise == hare:
break
# Find the "entrance" to the cycle.
tortoise = nums[0]
while tortoise != hare:
tortoise = nums[tortoise]
hare = nums[hare]
return hare
# TESTS
tests = [
([1, 1], 1),
([1, 2, 1], 1),
([1, 1, 1], 1),
([1, 3, 4, 2, 2], 2),
([3, 1, 3, 4, 2], 3),
([3, 1, 3, 3, 2], 3),
([1, 3, 4, 2, 1], 1),
([7, 9, 7, 4, 2, 8, 7, 7, 1, 5], 7),
([3, 1, 4, 5, 2, 6, 9, 8, 7, 9], 9),
]
for t in tests:
sol = Solution()
actual = sol.findDuplicate(t[0])
print("Find duplicate in", t[0], "->", actual)
assert actual == t[1]
assert sol.findDuplicateON(t[0]) == t[1]
| 25.433333 | 57 | 0.415465 | 209 | 1,526 | 3.033493 | 0.272727 | 0.031546 | 0.033123 | 0.031546 | 0.175079 | 0.029968 | 0 | 0 | 0 | 0 | 0 | 0.085352 | 0.431848 | 1,526 | 59 | 58 | 25.864407 | 0.645905 | 0.074705 | 0 | 0.041667 | 0 | 0 | 0.013504 | 0 | 0 | 0 | 0 | 0 | 0.041667 | 1 | 0.041667 | false | 0 | 0.020833 | 0 | 0.145833 | 0.020833 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4715f519d8a21d72474f587c7da538635e521fe0 | 13,733 | py | Python | examples/gaussian_processes/plot_sparse_log_cox_gaussian_process_keras.py | ltiao/scribbles | 9f30ea92ee348154568a7791751634d1feaba774 | [
"MIT"
] | 1 | 2020-03-01T04:36:36.000Z | 2020-03-01T04:36:36.000Z | examples/gaussian_processes/plot_sparse_log_cox_gaussian_process_keras.py | ltiao/scribbles | 9f30ea92ee348154568a7791751634d1feaba774 | [
"MIT"
] | 3 | 2020-01-02T19:09:40.000Z | 2020-01-02T19:11:02.000Z | examples/gaussian_processes/plot_sparse_log_cox_gaussian_process_keras.py | ltiao/scribbles | 9f30ea92ee348154568a7791751634d1feaba774 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Variational Sparse Log Cox Gaussian Process
===========================================
Here we fit a sparse variational Gaussian process to a log Gaussian Cox
process model of event counts: a latent log-rate function gets a GP prior,
observations are Poisson with rate exp(f), and the kernel hyperparameters,
inducing points, and variational posterior are learned jointly by maximizing
the evidence lower bound (ELBO) on the intractable log marginal likelihood.
"""
# sphinx_gallery_thumbnail_number = 3
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
from tensorflow.keras.layers import Layer, InputLayer
from tensorflow.keras.initializers import Identity, Constant
from sklearn.preprocessing import MinMaxScaler
from scribbles.datasets import coal_mining_disasters_load_data
from scribbles.plotting import fill_between_stddev
from scribbles.utils import get_kl_weight
from collections import defaultdict
# %%
# shortcuts
tfd = tfp.distributions
kernels = tfp.math.psd_kernels
# constants
num_train = 2048 # nbr training points in synthetic dataset
num_test = 40
num_features = 1 # dimensionality
num_index_points = 256 # nbr of index points
num_samples = 25
quadrature_size = 20
num_inducing_points = 50
num_epochs = 2000
batch_size = 64
shuffle_buffer_size = 500
jitter = 1e-6
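# jitter above: added to covariance-matrix diagonals for numerically stable
# Cholesky factorizations (standard GP practice)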
kernel_cls = kernels.MaternFiveHalves
seed = 8888 # set random seed for reproducibility
random_state = np.random.RandomState(seed)
x_min, x_max = 0.0, 1.0
y_min, y_max = -0.05, 0.7
# index points
X_q = np.linspace(x_min, x_max, num_index_points).reshape(-1, num_features)
# %%
# Coal mining disasters dataset
# -----------------------------
scaler = MinMaxScaler()
Z, y = coal_mining_disasters_load_data(base_dir="../../datasets/")
X = scaler.fit_transform(Z)
y = y.astype(np.float64)
# %%
# Probability densities
fig, ax = plt.subplots()
ax.vlines(Z.squeeze(), ymin=-0.025, ymax=0.0, linewidth=0.6 * y)
ax.set_ylim(-0.05, 0.8)
ax.set_xlabel("days")
ax.set_ylabel("incidents")
plt.show()
# %%
# Encapsulate Variational Gaussian Process (particular variable initialization)
# in a Keras / TensorFlow Probability Mixin Layer.
# Clean and simple if we restrict to single-output (`event_shape = ()`) and
# `feature_ndim = 1` (i.e. inputs are simply vectors rather than matrices or
# tensors).
class VariationalGaussianProcess1D(tfp.layers.DistributionLambda):
def __init__(self, kernel_wrapper, num_inducing_points,
inducing_index_points_initializer, mean_fn=None, jitter=1e-6,
convert_to_tensor_fn=tfd.Distribution.sample, **kwargs):
def make_distribution(x):
return VariationalGaussianProcess1D.new(
x, kernel_wrapper=self.kernel_wrapper,
inducing_index_points=self.inducing_index_points,
variational_inducing_observations_loc=(
self.variational_inducing_observations_loc),
variational_inducing_observations_scale=(
self.variational_inducing_observations_scale),
mean_fn=self.mean_fn,
observation_noise_variance=tf.exp(
self.log_observation_noise_variance),
jitter=self.jitter)
super(VariationalGaussianProcess1D, self).__init__(
make_distribution_fn=make_distribution,
convert_to_tensor_fn=convert_to_tensor_fn,
dtype=kernel_wrapper.dtype)
self.kernel_wrapper = kernel_wrapper
self.inducing_index_points_initializer = inducing_index_points_initializer
self.num_inducing_points = num_inducing_points
self.mean_fn = mean_fn
self.jitter = jitter
self._dtype = self.kernel_wrapper.dtype
def build(self, input_shape):
input_dim = input_shape[-1]
# TODO: Fix initialization!
self.inducing_index_points = self.add_weight(
name="inducing_index_points",
shape=(self.num_inducing_points, input_dim),
initializer=self.inducing_index_points_initializer,
dtype=self.dtype)
self.variational_inducing_observations_loc = self.add_weight(
name="variational_inducing_observations_loc",
shape=(self.num_inducing_points,),
initializer="zeros", dtype=self.dtype)
self.variational_inducing_observations_scale = self.add_weight(
name="variational_inducing_observations_scale",
shape=(self.num_inducing_points, self.num_inducing_points),
initializer=Identity(gain=1.0), dtype=self.dtype)
self.log_observation_noise_variance = self.add_weight(
name="log_observation_noise_variance",
initializer=Constant(-5.0), dtype=self.dtype)
@staticmethod
def new(x, kernel_wrapper, inducing_index_points, mean_fn,
variational_inducing_observations_loc,
variational_inducing_observations_scale,
observation_noise_variance, jitter, name=None):
# ind = tfd.Independent(base, reinterpreted_batch_ndims=1)
# bijector = tfp.bijectors.Transpose(rightmost_transposed_ndims=2)
# d = tfd.TransformedDistribution(ind, bijector=bijector)
return tfd.VariationalGaussianProcess(
kernel=kernel_wrapper.kernel, index_points=x,
inducing_index_points=inducing_index_points,
variational_inducing_observations_loc=(
variational_inducing_observations_loc),
variational_inducing_observations_scale=(
variational_inducing_observations_scale),
mean_fn=mean_fn,
observation_noise_variance=observation_noise_variance,
jitter=jitter)
# %%
# Kernel wrapper layer
class KernelWrapper(Layer):
# TODO: Support automatic relevance determination
def __init__(self, kernel_cls=kernels.ExponentiatedQuadratic,
dtype=None, **kwargs):
super(KernelWrapper, self).__init__(dtype=dtype, **kwargs)
self.kernel_cls = kernel_cls
self.log_amplitude = self.add_weight(
name="log_amplitude",
initializer="zeros", dtype=dtype)
self.log_length_scale = self.add_weight(
name="log_length_scale",
initializer="zeros", dtype=dtype)
def call(self, x):
# Never called -- this is just a layer so it can hold variables
# in a way Keras understands.
return x
@property
def kernel(self):
return self.kernel_cls(amplitude=tf.exp(self.log_amplitude),
length_scale=tf.exp(self.log_length_scale))
# %%
# Poisson likelihood.
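# Wrapping in tfd.Independent with reinterpreted_batch_ndims=1 sums the
# per-observation Poisson log-probs, so log_prob(y) returns one scalar per
# sampled latent function f instead of a vector over data points.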
def make_poisson_likelihood(f):
return tfd.Independent(tfd.Poisson(log_rate=f),
reinterpreted_batch_ndims=1)
# %%
def log_likelihood(y, f):
likelihood = make_poisson_likelihood(f)
return likelihood.log_prob(y)
# %%
# Helper Model factory method.
def build_model(input_dim, jitter=1e-6):
inducing_index_points_initial = random_state.choice(X.squeeze(),
num_inducing_points) \
.reshape(-1, num_features)
inducing_index_points_initializer = (
tf.constant_initializer(inducing_index_points_initial))
return tf.keras.Sequential([
InputLayer(input_shape=(input_dim,)),
VariationalGaussianProcess1D(
kernel_wrapper=KernelWrapper(kernel_cls=kernel_cls,
dtype=tf.float64),
num_inducing_points=num_inducing_points,
inducing_index_points_initializer=inducing_index_points_initializer,
jitter=jitter)
])
# %%
model = build_model(input_dim=num_features, jitter=jitter)
optimizer = tf.keras.optimizers.Adam()
# %%
@tf.function
def nelbo(X_batch, y_batch):
qf = model(X_batch)
ell = qf.surrogate_posterior_expected_log_likelihood(
observations=y_batch,
log_likelihood_fn=log_likelihood,
quadrature_size=quadrature_size)
kl = qf.surrogate_posterior_kl_divergence_prior()
kl_weight = get_kl_weight(num_train, batch_size)
return - ell + kl_weight * kl
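# %%
# Mini-batch ELBO note: the loss above is -(ELL - w * KL). We assume (not
# shown in this file) that get_kl_weight(num_train, batch_size) returns
# roughly batch_size / num_train, so that summing per-batch losses over an
# epoch approximates the full-dataset negative ELBO with the KL counted once.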
# %%
@tf.function
def train_step(X_batch, y_batch):
with tf.GradientTape() as tape:
loss = nelbo(X_batch, y_batch)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
return loss
# %%
dataset = tf.data.Dataset.from_tensor_slices((X, y)) \
.shuffle(seed=seed, buffer_size=shuffle_buffer_size) \
.batch(batch_size, drop_remainder=True)
# %%
keys = ["inducing_index_points",
"variational_inducing_observations_loc",
"variational_inducing_observations_scale",
"log_observation_noise_variance",
"log_amplitude", "log_length_scale"]
# %%
history = defaultdict(list)
for epoch in range(num_epochs):
for step, (X_batch, y_batch) in enumerate(dataset):
loss = train_step(X_batch, y_batch)
print("epoch={epoch:04d}, loss={loss:.4f}"
.format(epoch=epoch, loss=loss.numpy()))
history["nelbo"].append(loss.numpy())
for key, tensor in zip(keys, model.get_weights()):
history[key].append(tensor)
# %%
inducing_index_points_history = history.pop("inducing_index_points")
variational_inducing_observations_loc_history = (
history.pop("variational_inducing_observations_loc"))
inducing_index_points = inducing_index_points_history[-1]
variational_inducing_observations_loc = (
variational_inducing_observations_loc_history[-1])
# %%
# Log density ratio, log-odds, or logits.
fig, ax = plt.subplots()
ax.plot(X_q, model(X_q).mean().numpy().T,
label="posterior mean")
fill_between_stddev(X_q.squeeze(),
model(X_q).mean().numpy().squeeze(),
model(X_q).stddev().numpy().squeeze(), alpha=0.1,
label="posterior std dev", ax=ax)
ax.scatter(inducing_index_points, np.full_like(inducing_index_points, -3.5),
marker='^', c="tab:gray", label="inducing inputs", alpha=0.4)
ax.scatter(inducing_index_points, variational_inducing_observations_loc,
marker='+', c="tab:blue", label="inducing variable mean")
ax.set_xlabel(r"$x$")
ax.set_ylabel(r"$\log \lambda(x)$")
ax.legend()
plt.show()
# %%
Z_q = scaler.inverse_transform(X_q)
# %%
d = tfd.Independent(tfd.LogNormal(loc=model(X_q).mean(),
scale=model(X_q).stddev()),
reinterpreted_batch_ndims=1)
# %%
# Density ratio.
fig, ax = plt.subplots()
ax.plot(X_q, d.mean().numpy().T, label="transformed posterior mean")
fill_between_stddev(X_q.squeeze(),
d.mean().numpy().squeeze(),
d.stddev().numpy().squeeze(), alpha=0.1,
label="transformed posterior std dev", ax=ax)
ax.vlines(X.squeeze(), ymin=-0.025, ymax=0.0, linewidth=0.6 * y)
ax.set_xlabel('$x$')
ax.set_ylim(y_min, y_max)
ax.set_xlabel(r"$x$")
ax.set_ylabel(r"$\lambda(x)$")
ax.legend()
plt.show()
# %%
# Predictive mean samples.
posterior_predictive = tf.keras.Sequential([
model, tfp.layers.IndependentPoisson(event_shape=(num_index_points,))
])
# %%
fig, ax = plt.subplots()
ax.plot(X_q, posterior_predictive(X_q).mean())
ax.vlines(X.squeeze(), ymin=-0.025, ymax=0.0, linewidth=0.6 * y)
ax.set_xlabel('$x$')
ax.set_ylim(y_min, y_max)
# ax.legend()
plt.show()
# %%
def make_posterior_predictive(num_samples=None, seed=None):
def posterior_predictive(x):
f_samples = model(x).sample(num_samples, seed=seed)
return make_poisson_likelihood(f=f_samples)
return posterior_predictive
# %%
posterior_predictive = make_posterior_predictive(num_samples, seed=seed)
# %%
fig, ax = plt.subplots()
ax.plot(X_q, posterior_predictive(X_q).mean().numpy().T, color="tab:blue",
linewidth=0.8, alpha=0.6)
ax.vlines(X.squeeze(), ymin=-0.025, ymax=0.0, linewidth=0.6 * y)
ax.set_xlabel('$x$')
ax.set_ylim(y_min, y_max)
# ax.legend()
plt.show()
# %%
def get_inducing_index_points_data(inducing_index_points):
df = pd.DataFrame(np.hstack(inducing_index_points).T)
df.index.name = "epoch"
df.columns.name = "inducing index points"
s = df.stack()
s.name = 'x'
return s.reset_index()
# %%
data = get_inducing_index_points_data(inducing_index_points_history)
# %%
fig, ax = plt.subplots()
sns.lineplot(x='x', y="epoch", hue="inducing index points", palette="viridis",
sort=False, data=data, alpha=0.8, ax=ax)
ax.set_xlabel(r'$x$')
plt.show()
# %%
variational_inducing_observations_scale_history = (
history.pop("variational_inducing_observations_scale"))
# %%
fig, (ax1, ax2) = plt.subplots(ncols=2, sharex=True, sharey=True)
im1 = ax1.imshow(variational_inducing_observations_scale_history[0],
vmin=-0.1, vmax=1.1)
im2 = ax2.imshow(variational_inducing_observations_scale_history[-1],
vmin=-0.1, vmax=1.1)
fig.colorbar(im2, ax=[ax1, ax2], extend="both", orientation="horizontal")
ax1.set_xlabel(r"$i$")
ax1.set_ylabel(r"$j$")
ax2.set_xlabel(r"$i$")
plt.show()
# %%
history_df = pd.DataFrame(history)
history_df.index.name = "epoch"
history_df.reset_index(inplace=True)
# %%
fig, ax = plt.subplots()
sns.lineplot(x="epoch", y="nelbo", data=history_df, alpha=0.8, ax=ax)
ax.set_yscale("log")
plt.show()
# %%
parameters_df = history_df.drop(columns="nelbo") \
.rename(columns=lambda s: s.replace('_', ' '))
# %%
g = sns.PairGrid(parameters_df, hue="epoch", palette="RdYlBu", corner=True)
g = g.map_lower(plt.scatter, facecolor="none", alpha=0.6)
| 26.874755 | 82 | 0.675599 | 1,730 | 13,733 | 5.094798 | 0.220809 | 0.046177 | 0.066826 | 0.050147 | 0.355911 | 0.260041 | 0.199229 | 0.135126 | 0.068414 | 0.062968 | 0 | 0.013663 | 0.205927 | 13,733 | 510 | 83 | 26.927451 | 0.79459 | 0.106677 | 0 | 0.173432 | 0 | 0 | 0.066497 | 0.02878 | 0 | 0 | 0 | 0.001961 | 0 | 1 | 0.055351 | false | 0 | 0.04797 | 0.01845 | 0.154982 | 0.00369 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
471654000ff572cf541ee8cb15531d2329efd52d | 389 | py | Python | 06-sayi-tahmin-oyunu.py | omerkocadayi/WeWantEd--Python-a-Giris- | bedde84d0933d05a3a73b894c90c7d04736c3bba | [
"MIT"
] | 2 | 2017-03-26T13:02:42.000Z | 2017-04-03T00:50:19.000Z | 06-sayi-tahmin-oyunu.py | omerkocadayi/WeWantEd--Python-a-Giris- | bedde84d0933d05a3a73b894c90c7d04736c3bba | [
"MIT"
] | null | null | null | 06-sayi-tahmin-oyunu.py | omerkocadayi/WeWantEd--Python-a-Giris- | bedde84d0933d05a3a73b894c90c7d04736c3bba | [
"MIT"
] | null | null | null | import random
sayi = random.randint(1, 100)  # sayi: the secret number to guess
print("Welcome to the Guessing Game")
sayac = 0  # sayac: attempt counter
while True:
    tahmin = int(input("Enter a number: "))  # tahmin: the player's guess
    sayac += 1
    if tahmin == sayi:
        print("\nCongratulations, you guessed it in {} tries!".format(sayac))
        break
    elif tahmin < sayi:
        print("Enter a larger number")
    else:
        print("Enter a smaller number")
| 20.473684 | 66 | 0.604113 | 49 | 389 | 4.795918 | 0.612245 | 0.114894 | 0.12766 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021352 | 0.277635 | 389 | 18 | 67 | 21.611111 | 0.814947 | 0 | 0 | 0 | 0 | 0 | 0.311054 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.071429 | 0 | 0.071429 | 0.285714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |