hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ccff8fb01ea3f497743ce74c1e9b8975a96ada59 | 5,544 | py | Python | PYTHON/dgemm_omp.py | dbaaha/Kernels | 232fc44fc9427dd7b56862cec2d46296c467b4e8 | [
"BSD-3-Clause"
] | 346 | 2015-06-07T19:55:15.000Z | 2022-03-18T07:55:10.000Z | PYTHON/dgemm_omp.py | dbaaha/Kernels | 232fc44fc9427dd7b56862cec2d46296c467b4e8 | [
"BSD-3-Clause"
] | 202 | 2015-06-16T15:28:05.000Z | 2022-01-06T18:26:13.000Z | PYTHON/dgemm_omp.py | dbaaha/Kernels | 232fc44fc9427dd7b56862cec2d46296c467b4e8 | [
"BSD-3-Clause"
] | 101 | 2015-06-15T22:06:46.000Z | 2022-01-13T02:56:02.000Z | #!/usr/bin/env python3
#
# Copyright (c) 2015, Intel Corporation
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following
# disclaimer in the documentation and/or other materials provided
# with the distribution.
# * Neither the name of Intel Corporation nor the names of its
# contributors may be used to endorse or promote products
# derived from this software without specific prior written
# permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
# COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
#*******************************************************************
#
# NAME: dgemm
#
# PURPOSE: This program tests the efficiency with which a dense matrix
# dense multiplication is carried out
#
# USAGE: The program takes as input the matrix order,
# the number of times the matrix-matrix multiplication
# is carried out.
#
# <progname> <# iterations> <matrix order>
#
# The output consists of diagnostics to make sure the
# algorithm worked, and of timing statistics.
#
# HISTORY: Written by Rob Van der Wijngaart, February 2009.
# Converted to Python by Jeff Hammond, February 2016.
# PyOMP support, ave+std_dev by Tim Mattson, May 2021
# *******************************************************************
import sys
from numba import njit
from numba.openmp import openmp_context as openmp
from numba.openmp import omp_set_num_threads, omp_get_thread_num, omp_get_num_threads, omp_get_wtime
import numpy as np
#from time import process_time as timer
#@njit(enable_ssa=False, cache=True) What does "enable_ssa" mean?
@njit(fastmath=True)
def dgemm(iters, order):
    # ********************************************************************
    # ** Allocate space for the input and transpose matrix
    # ********************************************************************
    print('inside dgemm')
    A = np.zeros((order, order))
    B = np.zeros((order, order))
    C = np.zeros((order, order))
    for i in range(order):
        A[:, i] = float(i)
        B[:, i] = float(i)

    # print(omp_get_num_threads())
    for kiter in range(0, iters + 1):
        if kiter == 1:
            t0 = omp_get_wtime()
            tSum = 0.0
            tsqSum = 0.0
        with openmp("parallel for schedule(static) private(j,k)"):
            for i in range(order):
                for k in range(order):
                    for j in range(order):
                        C[i][j] += A[i][k] * B[k][j]
        if kiter > 0:
            tkiter = omp_get_wtime()
            t = tkiter - t0
            tSum = tSum + t
            tsqSum = tsqSum + t * t
            t0 = tkiter
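
    # Iteration 0 is an untimed warm-up; the mean and sample standard
    # deviation below are computed over the `iters` timed iterations.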
    dgemmAve = tSum / iters
    dgemmStdDev = ((tsqSum - iters * dgemmAve * dgemmAve) / (iters - 1)) ** 0.5
    print('finished with computations')

    # ********************************************************************
    # ** Analyze and output results.
    # ********************************************************************
    checksum = 0.0
    for i in range(order):
        for j in range(order):
            checksum += C[i][j]

    ref_checksum = order * order * order
    ref_checksum *= 0.25 * (order - 1.0) * (order - 1.0)
    ref_checksum *= (iters + 1)

    epsilon = 1.e-8
    if abs((checksum - ref_checksum) / ref_checksum) < epsilon:
        print('Solution validates')
        nflops = 2.0 * order * order * order
        recipDiff = (1.0 / (dgemmAve - dgemmStdDev) - 1.0 / (dgemmAve + dgemmStdDev))
        GfStdDev = 1.e-6 * nflops * recipDiff / 2.0
        print('nflops: ', nflops)
        print('Rate: ', 1.e-6 * nflops / dgemmAve, ' +/- (MF/s): ', GfStdDev)
    else:
        print('ERROR: Checksum = ', checksum, ', Reference checksum = ', ref_checksum, '\n')
        # sys.exit("ERROR: solution did not validate")
# ********************************************************************
# read and test input parameters
# ********************************************************************
print('Parallel Research Kernels version ') #, PRKVERSION
print('Python Dense matrix-matrix multiplication: C = A x B')
if len(sys.argv) != 3:
    print('argument count = ', len(sys.argv))
    sys.exit("Usage: ./dgemm <# iterations> <matrix order>")

itersIn = int(sys.argv[1])
if itersIn < 1:
    sys.exit("ERROR: iterations must be >= 1")

orderIn = int(sys.argv[2])
if orderIn < 1:
    sys.exit("ERROR: order must be >= 1")
print('Number of iterations = ', itersIn)
print('Matrix order = ', orderIn)
dgemm(itersIn, orderIn)
| 37.459459 | 100 | 0.590729 | 689 | 5,544 | 4.711176 | 0.391872 | 0.021565 | 0.022181 | 0.015712 | 0.085952 | 0.066235 | 0.05915 | 0.05915 | 0.041898 | 0.041898 | 0 | 0.014345 | 0.220418 | 5,544 | 147 | 101 | 37.714286 | 0.736696 | 0.539683 | 0 | 0.078125 | 0 | 0 | 0.167404 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.015625 | false | 0 | 0.078125 | 0 | 0.09375 | 0.171875 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ccff9565d795674cf4b93f869a86dae2c49d6c9e | 1,859 | py | Python | DS_Algo/quick_sort.py | YorkFish/git_study | 6e023244daaa22e12b24e632e76a13e5066f2947 | [
"MIT"
] | null | null | null | DS_Algo/quick_sort.py | YorkFish/git_study | 6e023244daaa22e12b24e632e76a13e5066f2947 | [
"MIT"
] | null | null | null | DS_Algo/quick_sort.py | YorkFish/git_study | 6e023244daaa22e12b24e632e76a13e5066f2947 | [
"MIT"
] | null | null | null | # coding:utf-8
# example 17: quick_sort.py
import random
# def quick_sort(array):
#     if len(array) <= 1:
#         return array
#     pivot_idx = 0
#     pivot = array[pivot_idx]
#     less_part = [num for num in array[pivot_idx + 1:] if num <= pivot]
#     great_part = [num for num in array[pivot_idx + 1:] if num > pivot]
#     return quick_sort(less_part) + [pivot] + quick_sort(great_part)


# def test_quick_sort():
#     import random
#     array = [random.randint(1, 100) for _ in range(10)]
#     sorted_array = sorted(array)
#     my_sorted_array = quick_sort(array)
#     assert my_sorted_array == sorted_array
def partition(array, start, stop):  # [start, stop)
    pivot_idx = start
    pivot = array[pivot_idx]
    left = pivot_idx + 1
    right = stop - 1
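    # Move `left` and `right` toward each other, swapping any pair that sits
    # on the wrong side of the pivot; when the cursors cross, `right` is the
    # pivot's final position.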
    while left <= right:
        while left <= right and array[left] < pivot:
            left += 1
        while left <= right and pivot <= array[right]:
            right -= 1
        if left < right:
            array[left], array[right] = array[right], array[left]
    array[pivot_idx], array[right] = array[right], array[pivot_idx]
    return right


def test_partition():
    lst = [3, 1, 4, 2]
    assert partition(lst, 0, len(lst)) == 2
    lst = [1, 2, 3, 4]
    assert partition(lst, 0, len(lst)) == 0
    lst = [4, 3, 2, 1]
    assert partition(lst, 0, len(lst)) == 3
    lst = [3, 5, 4, 3, 6, 7, 2, 3]
    assert partition(lst, 0, len(lst)) == 1


def quick_sort_inplace(array, start, stop):  # [start, stop)
    if start < stop:
        pivot = partition(array, start, stop)
        quick_sort_inplace(array, start, pivot)
        quick_sort_inplace(array, pivot + 1, stop)


def test_quick_sort_inplace():
    seq = [random.randint(-100, 100) for _ in range(10)]
    sorted_seq = sorted(seq)
    quick_sort_inplace(seq, 0, len(seq))
    assert seq == sorted_seq
| 27.338235 | 72 | 0.603012 | 273 | 1,859 | 3.952381 | 0.179487 | 0.091752 | 0.084337 | 0.070436 | 0.331789 | 0.203892 | 0.072289 | 0.072289 | 0.072289 | 0.072289 | 0 | 0.040117 | 0.262507 | 1,859 | 67 | 73 | 27.746269 | 0.7469 | 0.324906 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.147059 | 1 | 0.117647 | false | 0 | 0.029412 | 0 | 0.176471 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
69003cd799998d8973cda2d04b1c36351df5836d | 2,304 | py | Python | tests/test_zestimation.py | jibanCat/gpy_dla_detection | 4d987adec75a417313fdc6601ee41a0ea60a0a2e | [
"MIT"
] | 1 | 2020-07-31T01:31:52.000Z | 2020-07-31T01:31:52.000Z | tests/test_zestimation.py | jibanCat/gpy_dla_detection | 4d987adec75a417313fdc6601ee41a0ea60a0a2e | [
"MIT"
] | 12 | 2020-07-20T18:55:15.000Z | 2021-09-23T05:08:26.000Z | tests/test_zestimation.py | jibanCat/gpy_dla_detection | 4d987adec75a417313fdc6601ee41a0ea60a0a2e | [
"MIT"
] | null | null | null | """
A test file for testing zestimation
The learned file could be downloaded at
[learned_zqso_only_model_outdata_full_dr9q_minus_concordance_norm_1176-1256.mat]
(https://drive.google.com/file/d/1SqAU_BXwKUx8Zr38KTaA_nvuvbw-WPQM/view?usp=sharing)
"""
import os
import re
import time
import numpy as np
from .test_selection import filenames, z_qsos
from gpy_dla_detection.read_spec import read_spec, retrieve_raw_spec
from gpy_dla_detection.zqso_set_parameters import ZParameters
from gpy_dla_detection.zqso_samples import ZSamples
from gpy_dla_detection.zqso_gp import ZGPMAT
def test_zestimation(nspec: int):
    filename = filenames[nspec]

    if not os.path.exists(filename):
        plate, mjd, fiber_id = re.findall(
            r"spec-([0-9]+)-([0-9]+)-([0-9]+).fits", filename,
        )[0]
        retrieve_raw_spec(int(plate), int(mjd), int(fiber_id))

    params = ZParameters()
    z_qso_samples = ZSamples(params)

    wavelengths, flux, noise_variance, pixel_mask = read_spec(filename)

    z_qso_gp = ZGPMAT(
        params,
        z_qso_samples,
        learned_file="data/dr12q/processed/learned_zqso_only_model_outdata_full_dr9q_minus_concordance_norm_1176-1256.mat",
    )

    tic = time.time()
    z_qso_gp.inference_z_qso(wavelengths, flux, noise_variance, pixel_mask)
    print("Z True : {:.3g}".format(z_qsos[nspec]))
    toc = time.time()
    print("spent {} mins; {} seconds".format((toc - tic) // 60, (toc - tic) % 60))

    return z_qso_gp.z_map, z_qsos[nspec]
def test_batch(num_quasars: int = 100):
    all_z_diffs = np.zeros((num_quasars,))
    for nspec in range(num_quasars):
        z_map, z_true = test_zestimation(nspec)
        z_diff = z_map - z_true
        print("[Info] z_diff = z_map - z_true = {:.8g}".format(z_diff))
        all_z_diffs[nspec] = z_diff

    print("[Info] abs(z_diff) < 0.5 = {:.4g}".format(accuracy(all_z_diffs, 0.5)))
    print("[Info] abs(z_diff) < 0.05 = {:.4g}".format(accuracy(all_z_diffs, 0.05)))

    # we got ~99% accuracy in https://arxiv.org/abs/2006.07343
    # so at least we need to ensure ~98% here
    assert accuracy(all_z_diffs, 0.5) > 0.98


def accuracy(z_diff: np.ndarray, z_thresh: float):
    num_quasars = z_diff.shape[0]
    corrects = (np.abs(z_diff) < z_thresh).sum()
    return corrects / num_quasars
| 29.538462 | 123 | 0.69401 | 359 | 2,304 | 4.175487 | 0.392758 | 0.03002 | 0.03002 | 0.0507 | 0.274183 | 0.228152 | 0.122749 | 0.088059 | 0.088059 | 0.088059 | 0 | 0.037546 | 0.179253 | 2,304 | 77 | 124 | 29.922078 | 0.755156 | 0.147569 | 0 | 0 | 0 | 0 | 0.143734 | 0.069054 | 0 | 0 | 0 | 0 | 0.022727 | 1 | 0.068182 | false | 0 | 0.204545 | 0 | 0.318182 | 0.113636 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
690437062ad9bc153f8073ff84de085091fb62c1 | 5,249 | py | Python | megaman/embedding/base.py | jakevdp/Mmani | 681b6cdbd358b207e8b6c4a482262c84bea15bd7 | [
"BSD-2-Clause"
] | 303 | 2016-03-03T00:44:37.000Z | 2022-03-14T03:43:38.000Z | megaman/embedding/base.py | jakevdp/Mmani | 681b6cdbd358b207e8b6c4a482262c84bea15bd7 | [
"BSD-2-Clause"
] | 52 | 2016-02-26T21:41:31.000Z | 2021-06-27T08:33:51.000Z | megaman/embedding/base.py | jakevdp/Mmani | 681b6cdbd358b207e8b6c4a482262c84bea15bd7 | [
"BSD-2-Clause"
] | 67 | 2016-03-03T22:38:35.000Z | 2022-01-12T08:03:47.000Z | """ base estimator class for megaman """
# Author: James McQueen -- <jmcq@u.washington.edu>
# LICENSE: Simplified BSD https://github.com/mmp2/megaman/blob/master/LICENSE
import numpy as np
from scipy.sparse import isspmatrix
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.utils.validation import check_array
from ..geometry.geometry import Geometry
# from sklearn.utils.validation import FLOAT_DTYPES
FLOAT_DTYPES = (np.float64, np.float32, np.float16)
class BaseEmbedding(BaseEstimator, TransformerMixin):
    """Base Class for all megaman embeddings.

    Inherits BaseEstimator and TransformerMixin from sklearn.

    BaseEmbedding creates the common interface to the geometry
    class for all embeddings as well as providing a common
    .fit_transform().

    Parameters
    ----------
    n_components : integer
        number of coordinates for the manifold.
    radius : float (optional)
        radius for adjacency and affinity calculations. Will be overridden if
        either is set in `geom`
    geom : dict or megaman.geometry.Geometry object
        specification of geometry parameters: keys are
        ["adjacency_method", "adjacency_kwds", "affinity_method",
         "affinity_kwds", "laplacian_method", "laplacian_kwds"]

    Attributes
    ----------
    geom_ : a fitted megaman.geometry.Geometry object.
    """
    def __init__(self, n_components=2, radius=None, geom=None):
        self.n_components = n_components
        self.radius = radius
        self.geom = geom

    def _validate_input(self, X, input_type):
        if input_type == 'data':
            sparse_formats = None
        elif input_type in ['adjacency', 'affinity']:
            sparse_formats = ['csr', 'coo', 'lil', 'bsr', 'dok', 'dia']
        else:
            raise ValueError("unrecognized input_type: {0}".format(input_type))
        return check_array(X, dtype=FLOAT_DTYPES, accept_sparse=sparse_formats)

    # # The world is not ready for this...
    # def estimate_radius(self, X, input_type='data', intrinsic_dim=None):
    #     """Estimate a radius based on the data and intrinsic dimensionality
    #
    #     Parameters
    #     ----------
    #     X : array_like, [n_samples, n_features]
    #         dataset for which radius is estimated
    #     intrinsic_dim : int (optional)
    #         estimated intrinsic dimensionality of the manifold. If not
    #         specified, then intrinsic_dim = self.n_components
    #
    #     Returns
    #     -------
    #     radius : float
    #         The estimated radius for the fit
    #     """
    #     if input_type == 'affinity':
    #         return None
    #     elif input_type == 'adjacency':
    #         return X.max()
    #     elif input_type == 'data':
    #         if intrinsic_dim is None:
    #             intrinsic_dim = self.n_components
    #         mean_std = np.std(X, axis=0).mean()
    #         n_features = X.shape[1]
    #         return 0.5 * mean_std / n_features ** (1. / (intrinsic_dim + 6))
    #     else:
    #         raise ValueError("Unrecognized input_type: {0}".format(input_type))

    def fit_geometry(self, X=None, input_type='data'):
        """Inputs self.geom, and produces the fitted geometry self.geom_"""
        if self.geom is None:
            self.geom_ = Geometry()
        elif isinstance(self.geom, Geometry):
            self.geom_ = self.geom
        else:
            try:
                kwds = dict(**self.geom)
            except TypeError:
                raise ValueError("geom must be a Geometry instance or "
                                 "a mappable/dictionary")
            self.geom_ = Geometry(**kwds)

        if self.radius is not None:
            self.geom_.set_radius(self.radius, override=False)
        # if self.radius == 'auto':
        #     if X is not None and input_type != 'affinity':
        #         self.geom_.set_radius(self.estimate_radius(X, input_type),
        #                               override=False)
        # else:
        #     self.geom_.set_radius(self.radius,
        #                           override=False)

        if X is not None:
            self.geom_.set_matrix(X, input_type)

        return self

    def fit_transform(self, X, y=None, input_type='data'):
        """Fit the model from data in X and transform X.

        Parameters
        ----------
        input_type : string, one of: 'data', 'distance' or 'affinity'.
            The values of input data X. (default = 'data')
        X : array-like, shape (n_samples, n_features)
            Training vector, where n_samples is the number of samples
            and n_features is the number of features.

        If self.input_type is 'distance':

        X : array-like, shape (n_samples, n_samples),
            Interpret X as precomputed distance or adjacency graph
            computed from samples.

        Returns
        -------
        X_new : array-like, shape (n_samples, n_components)
        """
        self.fit(X, y=y, input_type=input_type)
        return self.embedding_

    def transform(self, X, y=None, input_type='data'):
        raise NotImplementedError("transform() not implemented. "
                                  "Try fit_transform()")
| 36.451389 | 81 | 0.599543 | 613 | 5,249 | 4.986949 | 0.288744 | 0.061825 | 0.025515 | 0.016683 | 0.167484 | 0.117763 | 0.100752 | 0.085051 | 0.064115 | 0.036637 | 0 | 0.004347 | 0.298724 | 5,249 | 143 | 82 | 36.706294 | 0.826134 | 0.547152 | 0 | 0.047619 | 0 | 0 | 0.086915 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.119048 | false | 0 | 0.119048 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
690613c67cd63310af621f929b186a79abbe5cd8 | 1,032 | py | Python | ppr-api/migrations/versions/d3f96fb8b8e5_update_user_profile_definition.py | cameron-freshworks/ppr | 01d6f5d300c791aebad5e58bb4601e9be2ccfc46 | [
"Apache-2.0"
] | 4 | 2020-01-21T21:46:42.000Z | 2021-02-24T18:30:24.000Z | ppr-api/migrations/versions/d3f96fb8b8e5_update_user_profile_definition.py | cameron-freshworks/ppr | 01d6f5d300c791aebad5e58bb4601e9be2ccfc46 | [
"Apache-2.0"
] | 1,313 | 2019-10-18T22:48:16.000Z | 2022-03-30T17:42:47.000Z | ppr-api/migrations/versions/d3f96fb8b8e5_update_user_profile_definition.py | cameron-freshworks/ppr | 01d6f5d300c791aebad5e58bb4601e9be2ccfc46 | [
"Apache-2.0"
] | 201 | 2019-10-18T21:34:41.000Z | 2022-03-31T20:07:42.000Z | """update user profile definition
Revision ID: d3f96fb8b8e5
Revises: 2b13f89aa1b3
Create Date: 2021-10-18 15:45:33.906745
"""
from alembic import op
import sqlalchemy as sa
from alembic_utils.pg_function import PGFunction
from sqlalchemy import text as sql_text
# revision identifiers, used by Alembic.
revision = 'd3f96fb8b8e5'
down_revision = '2b13f89aa1b3'
branch_labels = None
depends_on = None
# Update user profile to add registrations table and miscellaneous (future) preferences.
def upgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    op.add_column('user_profiles', sa.Column('registrations_table', sa.JSON(), nullable=True))
    op.add_column('user_profiles', sa.Column('misc_preferences', sa.JSON(), nullable=True))
    # ### end Alembic commands ###


def downgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    op.drop_column('user_profiles', 'misc_preferences')
    op.drop_column('user_profiles', 'registrations_table')
    # ### end Alembic commands ###
| 30.352941 | 94 | 0.738372 | 131 | 1,032 | 5.679389 | 0.48855 | 0.053763 | 0.096774 | 0.061828 | 0.260753 | 0.198925 | 0.198925 | 0.11828 | 0 | 0 | 0 | 0.052273 | 0.147287 | 1,032 | 33 | 95 | 31.272727 | 0.793182 | 0.386628 | 0 | 0 | 0 | 0 | 0.245378 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.285714 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
6906a12f2953d09ffd721ec5d1611ca70e378fb9 | 2,948 | py | Python | questions/available-captures-for-rook/Solution.py | marcus-aurelianus/leetcode-solutions | 8b43e72fe1f51c84abc3e89b181ca51f09dc7ca6 | [
"MIT"
] | 141 | 2017-12-12T21:45:53.000Z | 2022-03-25T07:03:39.000Z | questions/available-captures-for-rook/Solution.py | marcus-aurelianus/leetcode-solutions | 8b43e72fe1f51c84abc3e89b181ca51f09dc7ca6 | [
"MIT"
] | 32 | 2015-10-05T14:09:52.000Z | 2021-05-30T10:28:41.000Z | questions/available-captures-for-rook/Solution.py | marcus-aurelianus/leetcode-solutions | 8b43e72fe1f51c84abc3e89b181ca51f09dc7ca6 | [
"MIT"
] | 56 | 2015-09-30T05:23:28.000Z | 2022-03-08T07:57:11.000Z | """
On an 8 x 8 chessboard, there is one white rook. There also may be empty squares, white bishops, and black pawns. These are given as characters 'R', '.', 'B', and 'p' respectively. Uppercase characters represent white pieces, and lowercase characters represent black pieces.
The rook moves as in the rules of Chess: it chooses one of four cardinal directions (north, east, west, and south), then moves in that direction until it chooses to stop, reaches the edge of the board, or captures an opposite colored pawn by moving to the same square it occupies. Also, rooks cannot move into the same square as other friendly bishops.
Return the number of pawns the rook can capture in one move.
Example 1:
Input: [[".",".",".",".",".",".",".","."],[".",".",".","p",".",".",".","."],[".",".",".","R",".",".",".","p"],[".",".",".",".",".",".",".","."],[".",".",".",".",".",".",".","."],[".",".",".","p",".",".",".","."],[".",".",".",".",".",".",".","."],[".",".",".",".",".",".",".","."]]
Output: 3
Explanation:
In this example the rook is able to capture all the pawns.
Example 2:
Input: [[".",".",".",".",".",".",".","."],[".","p","p","p","p","p",".","."],[".","p","p","B","p","p",".","."],[".","p","B","R","B","p",".","."],[".","p","p","B","p","p",".","."],[".","p","p","p","p","p",".","."],[".",".",".",".",".",".",".","."],[".",".",".",".",".",".",".","."]]
Output: 0
Explanation:
Bishops are blocking the rook to capture any pawn.
Example 3:
Input: [[".",".",".",".",".",".",".","."],[".",".",".","p",".",".",".","."],[".",".",".","p",".",".",".","."],["p","p",".","R",".","p","B","."],[".",".",".",".",".",".",".","."],[".",".",".","B",".",".",".","."],[".",".",".","p",".",".",".","."],[".",".",".",".",".",".",".","."]]
Output: 3
Explanation:
The rook can capture the pawns at positions b5, d6 and f5.
Note:
board.length == board[i].length == 8
board[i][j] is either 'R', '.', 'B', or 'p'
There is exactly one cell with board[i][j] == 'R'
"""
class Solution(object):
    def numRookCaptures(self, board):
        """
        :type board: List[List[str]]
        :rtype: int
        """
        ri, rj = 0, 0
        found = False
        for i in xrange(len(board)):
            for j in xrange(len(board[0])):
                c = board[i][j]
                if c == 'R':
                    ri, rj = i, j
                    found = True
                    break
            if found:
                break

        num = 0
        dirs = [[1, 0], [-1, 0], [0, 1], [0, -1]]
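
        # Scan outward from the rook in each cardinal direction: skip empty
        # squares, capture the first pawn ('p') found, and stop at a bishop
        # ('B') or the edge of the board.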
        for di, dj in dirs:
            i, j = ri + di, rj + dj
            while i >= 0 and i < len(board) and j >= 0 and j < len(board[0]):
                c = board[i][j]
                if c == '.':
                    pass
                elif c == 'p':
                    num += 1
                    break
                else:
                    break
                i += di
                j += dj
        return num
| 39.837838 | 353 | 0.402307 | 341 | 2,948 | 3.478006 | 0.387097 | 0.033727 | 0.035413 | 0.030354 | 0.065767 | 0.052277 | 0.052277 | 0.033727 | 0.033727 | 0 | 0 | 0.012897 | 0.263569 | 2,948 | 74 | 354 | 39.837838 | 0.533395 | 0.678426 | 0 | 0.206897 | 0 | 0 | 0.003363 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034483 | false | 0.034483 | 0 | 0 | 0.103448 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0
6908c59f82b4dce18b0359af8fb11f6688af03cf | 3,200 | py | Python | test/test_npu/test_network_ops/test_sin.py | Ascend/pytorch | 39849cf72dafe8d2fb68bd1679d8fd54ad60fcfc | [
"BSD-3-Clause"
] | 1 | 2021-12-02T03:07:35.000Z | 2021-12-02T03:07:35.000Z | test/test_npu/test_network_ops/test_sin.py | Ascend/pytorch | 39849cf72dafe8d2fb68bd1679d8fd54ad60fcfc | [
"BSD-3-Clause"
] | 1 | 2021-11-12T07:23:03.000Z | 2021-11-12T08:28:13.000Z | test/test_npu/test_network_ops/test_sin.py | Ascend/pytorch | 39849cf72dafe8d2fb68bd1679d8fd54ad60fcfc | [
"BSD-3-Clause"
] | null | null | null | # Copyright (c) 2020, Huawei Technologies.All rights reserved.
#
# Licensed under the BSD 3-Clause License (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://opensource.org/licenses/BSD-3-Clause
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import torch
import numpy as np
import sys
import copy
from common_utils import TestCase, run_tests
from common_device_type import dtypes, instantiate_device_type_tests
from util_test import create_common_tensor
class TestSin(TestCase):
    def cpu_op_exec(self, input1):
        output = torch.sin(input1)
        output = output.numpy()
        return output

    def npu_op_exec(self, input1):
        output = torch.sin(input1)
        output = output.to("cpu")
        output = output.numpy()
        return output

    def npu_op_exec_out(self, input1, input2):
        torch.sin(input1, out=input2)
        output = input2.to("cpu")
        output = output.numpy()
        return output

    def test_sin_common_shape_format(self, device):
        shape_format = [
            [[np.float32, 0, (5, 3)]],
        ]
        for item in shape_format:
            cpu_input1, npu_input1 = create_common_tensor(item[0], -10, 10)
            cpu_output = self.cpu_op_exec(cpu_input1)
            npu_output = self.npu_op_exec(npu_input1)
            self.assertRtolEqual(cpu_output, npu_output)

    def test_sin_out_common_shape_format(self, device):
        shape_format = [
            [[np.float16, -1, (4, 3, 128, 128)], [np.float16, -1, (4, 3, 128, 128)]],
            [[np.float16, 0, (4, 3, 128, 128)], [np.float16, 0, (10, 3, 64, 128)]],
            [[np.float16, 0, (4, 3, 128, 128)], [np.float16, 0, (2, 3, 256, 128)]],
            [[np.float32, 0, (4, 3, 128, 128)], [np.float32, 0, (4, 3, 128, 128)]],
            [[np.float32, 0, (4, 3, 128, 128)], [np.float32, 0, (8, 3, 64, 128)]],
            [[np.float32, -1, (4, 3, 128, 128)], [np.float32, -1, (4, 3, 256, 64)]],
        ]
        for item in shape_format:
            cpu_input1, npu_input1 = create_common_tensor(item[0], -10, 10)
            cpu_input2, npu_input2 = create_common_tensor(item[0], -10, 10)
            cpu_input3, npu_input3 = create_common_tensor(item[1], -10, 10)
            if cpu_input1.dtype == torch.float16:
                cpu_input1 = cpu_input1.to(torch.float32)
            cpu_output = self.cpu_op_exec(cpu_input1)
            npu_output_out1 = self.npu_op_exec_out(npu_input1, npu_input2)
            npu_output_out2 = self.npu_op_exec_out(npu_input1, npu_input3)
            cpu_output = cpu_output.astype(npu_output_out1.dtype)
            self.assertRtolEqual(cpu_output, npu_output_out1)
            self.assertRtolEqual(cpu_output, npu_output_out2)
instantiate_device_type_tests(TestSin, globals(), except_for='cpu')
if __name__ == "__main__":
    run_tests()
| 42.105263 | 89 | 0.637813 | 453 | 3,200 | 4.284768 | 0.273731 | 0.028336 | 0.020608 | 0.032973 | 0.455951 | 0.455951 | 0.381762 | 0.381762 | 0.275631 | 0.225657 | 0 | 0.081675 | 0.24625 | 3,200 | 75 | 90 | 42.666667 | 0.723051 | 0.179375 | 0 | 0.290909 | 0 | 0 | 0.006508 | 0 | 0 | 0 | 0 | 0 | 0.054545 | 1 | 0.090909 | false | 0 | 0.127273 | 0 | 0.290909 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
690958b4739c7c62384b219e19362e11eacddb43 | 968 | py | Python | src/simmate/website/core_components/filters/dynamics.py | laurenmm/simmate-1 | c06b94c46919b01cda50f78221ad14f75c100a14 | [
"BSD-3-Clause"
] | 9 | 2021-12-21T02:58:21.000Z | 2022-01-25T14:00:06.000Z | src/simmate/website/core_components/filters/dynamics.py | laurenmm/simmate-1 | c06b94c46919b01cda50f78221ad14f75c100a14 | [
"BSD-3-Clause"
] | 51 | 2022-01-01T15:59:58.000Z | 2022-03-26T21:25:42.000Z | src/simmate/website/core_components/filters/dynamics.py | laurenmm/simmate-1 | c06b94c46919b01cda50f78221ad14f75c100a14 | [
"BSD-3-Clause"
] | 7 | 2022-01-01T03:44:32.000Z | 2022-03-29T19:59:27.000Z | # -*- coding: utf-8 -*-
from simmate.website.core_components.filters import (
    Structure,
    Forces,
    Thermodynamics,
    Calculation,
)
from simmate.database.base_data_types.dynamics import (
    DynamicsRun as DynamicsRunTable,
    DynamicsIonicStep as DynamicsIonicStepTable,
)


class DynamicsRun(Structure, Calculation):
    class Meta:
        model = DynamicsRunTable
        fields = dict(
            temperature_start=["range"],
            temperature_end=["range"],
            time_step=["range"],
            nsteps=["range"],
            **Structure.get_fields(),
            **Calculation.get_fields(),
        )


class DynamicsIonicStep(Structure, Forces, Thermodynamics):
    class Meta:
        model = DynamicsIonicStepTable
        fields = dict(
            number=["range"],
            temperature=["range"],
            **Structure.get_fields(),
            **Thermodynamics.get_fields(),
            **Forces.get_fields(),
        )
| 25.473684 | 59 | 0.599174 | 79 | 968 | 7.202532 | 0.481013 | 0.079086 | 0.101933 | 0.080844 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001449 | 0.28719 | 968 | 37 | 60 | 26.162162 | 0.823188 | 0.021694 | 0 | 0.193548 | 0 | 0 | 0.031746 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.064516 | 0 | 0.193548 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
690b4a69f377b976688d836e3b5ab8c8dc9b6884 | 3,821 | py | Python | webproject/taller1/algoritmoCoseno.py | jairocollante/sr | f395c0f9aef804ec0100edcfe1a1c6ccab2494a1 | [
"MIT"
] | null | null | null | webproject/taller1/algoritmoCoseno.py | jairocollante/sr | f395c0f9aef804ec0100edcfe1a1c6ccab2494a1 | [
"MIT"
] | null | null | null | webproject/taller1/algoritmoCoseno.py | jairocollante/sr | f395c0f9aef804ec0100edcfe1a1c6ccab2494a1 | [
"MIT"
] | null | null | null | import numpy as np
import pandas as pd
from taller1.models import Userid_Timestamp_Count
class Coseno():

    def recomendacionUsuario(self, usuario_activo):
        print("Modelo Coseno Usuario")
        cant = 10
        df_mapreduce = Coseno.cargarDatos(self)
        print("df_mapreduce.shape", df_mapreduce.shape)
        df_pivot = df_mapreduce.pivot('userid', 'artist', 'count').fillna(0)
        print("Pivot.shape=", df_pivot.shape)
        lista_coseno_usuario = Coseno.iterarUsuario(self, df_pivot, usuario_activo)
        print("Termina calculo coseno=", len(lista_coseno_usuario))
        lista_coseno_usuario.sort(key=lambda k: k['coseno'], reverse=True)
        print("Termina ordenar lista coseno")
        usuario_mas_similar = lista_coseno_usuario[0]['usuario_similar']
        print("Usuario mas similar=", usuario_mas_similar)
        lista_recomendacion = Coseno.artistaMasEscuchadoPorUsuario(self, usuario_mas_similar, cant, df_pivot)
        resp = {"lista_coseno_usuario": lista_coseno_usuario[:cant],
                "lista_recomendacion": lista_recomendacion}
        return resp

    def cargarDatos(self):
        # df_mapreduce = pd.read_csv('part-r-00000', sep='\t', names=['userid', 'artist', 'count'])
        df_mapreduce = pd.DataFrame(list(Userid_Timestamp_Count.objects.all().values('userid', 'artist', 'count')))
        return df_mapreduce.dropna()
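
    # Compare the active user's artist play-count vector against every other
    # user's vector using cosine similarity.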
    def iterarUsuario(self, df_pivot, usuario_activo):
        v_usuario_activo = df_pivot.loc[usuario_activo].values
        lista_coseno = []
        for user_evaluado in df_pivot.index.tolist():
            if usuario_activo != user_evaluado:
                object = {}
                object['usuario_similar'] = user_evaluado
                v_usuario_evaluado = df_pivot.loc[user_evaluado].values
                object['coseno'] = Coseno.cos_sim(self, v_usuario_activo, v_usuario_evaluado)
                lista_coseno.append(object)
        return lista_coseno

    def valorCoseno(self):
        return val['coseno']

    def artistaMasEscuchadoPorUsuario(self, usuario_evaluado, cant, df_pivot):
        artistas_escuchados = df_pivot.loc[usuario_evaluado]
        df_r = pd.DataFrame(artistas_escuchados)
        df_r = df_r.sort_values(by=[usuario_evaluado], ascending=False).index.tolist()
        return df_r[:cant]

    def cos_sim(self, a, b):
        # Takes 2 vectors a, b and returns the cosine similarity according
        # to the definition of the dot product
        dot_product = np.dot(a, b)
        norm_a = np.linalg.norm(a)
        norm_b = np.linalg.norm(b)
        return dot_product / (norm_a * norm_b)

    def recomendacionItem(self, usuario_activo):
        print("Modelo Coseno Item")
        df_mapreduce = Coseno.cargarDatos(self)
        print("df_mapreduce.shape", df_mapreduce.shape)
        df_pivotA = df_mapreduce.pivot('userid', 'artist', 'count').fillna(0)
        print("Usuario Pivot.shape=", df_pivotA.shape)
        artista_activo = Coseno.artistaMasEscuchadoPorUsuario(self, usuario_activo, 10, df_pivotA)
        cant = 10
        df_pivot = df_mapreduce.pivot('artist', 'userid', 'count').fillna(0)
        print("Artista Pivot.shape=", df_pivot.shape)
        lista_coseno_artista = Coseno.iterarArtistas(self, df_pivot, artista_activo[:1])
        print("Termina calculo coseno=", len(lista_coseno_artista))
        lista_coseno_artista.sort(key=lambda k: k['coseno'], reverse=True)
        print("Termina ordenar lista coseno")
        resp = {"lista_coseno_artista": lista_coseno_artista[:cant],
                "artista_activo": artista_activo}
        return resp

    def iterarArtistas(self, df_pivot_artista, artista_activo):
        v_artista_activo = df_pivot_artista.loc[artista_activo].values
        lista_coseno = []
        for artista_evaluado in df_pivot_artista.index.tolist():
            if artista_activo != artista_evaluado:
                object = {}
                object['artista_similar'] = artista_evaluado
                v_artista_evaluado = df_pivot_artista.loc[artista_evaluado].values
                object['coseno'] = Coseno.cos_sim(self, v_artista_activo, v_artista_evaluado)
                lista_coseno.append(object)
        return lista_coseno
| 43.420455 | 108 | 0.741952 | 509 | 3,821 | 5.298625 | 0.210216 | 0.077494 | 0.046719 | 0.026696 | 0.42195 | 0.35076 | 0.252874 | 0.199481 | 0.163886 | 0.098628 | 0 | 0.0055 | 0.143418 | 3,821 | 88 | 109 | 43.420455 | 0.818515 | 0.048678 | 0 | 0.24 | 0 | 0 | 0.131134 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.106667 | false | 0 | 0.04 | 0.013333 | 0.266667 | 0.16 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0
690bf7914b0e1d6a8fa65445caa5e076e78c0f57 | 5,646 | py | Python | python/dataconversion/shapefile_to_plt.py | Mehrdadj93/handyscripts | 5df9a69e17345ca5a3e42dda2424da2da0ab6f12 | [
"MIT"
] | 66 | 2018-09-21T22:55:34.000Z | 2022-03-22T14:29:57.000Z | python/dataconversion/shapefile_to_plt.py | Mehrdadj93/handyscripts | 5df9a69e17345ca5a3e42dda2424da2da0ab6f12 | [
"MIT"
] | 4 | 2018-10-04T22:09:01.000Z | 2022-03-31T16:18:38.000Z | python/dataconversion/shapefile_to_plt.py | Mehrdadj93/handyscripts | 5df9a69e17345ca5a3e42dda2424da2da0ab6f12 | [
"MIT"
] | 50 | 2018-09-23T15:50:55.000Z | 2022-03-06T06:59:33.000Z | """Convert Shapefiles to Tecplot plt format
usage:
> python shapefile_to_plt.py shapefile.shp outfile.plt
Necessary modules
-----------------
pyshp
The Python Shapefile Library (pyshp) reads and writes ESRI Shapefiles in pure Python.
https://pypi.python.org/pypi/pyshp
https://www.esri.com/library/whitepapers/pdfs/shapefile.pdf
Description
-----------
This script is used to convert Shapefiles (.shp) to Tecplot plt format.
Users will need to answer a few questions about their shapefile to accurately
import into Tecplot format.
First select a conversion type: Convert to a single zone or one zone per shape.
Next select variable names to use: x/y or lon/lat
Finally, if using one zone per shape, select the column to name the zones
After running the script, append the new plt file to the active frame and match the
variable names.
"""
import sys
import os
import time
import shapefile as sf
import tecplot as tp
from tecplot.constant import *
def create_connectivity_list(shape, element_offset=0):
    """Use the element indices for each shape to create the connectivity list"""
    num_points = len(shape.points)
    num_parts = len(shape.parts)
    elements = []
    for i in range(num_parts):
        # parts[] returns the point index at the start of each part
        # These values will define the connectivity list of the line segments
        p1 = shape.parts[i]
        # Check to see if we're at the last part so we don't over index the list
        if i < num_parts - 1:
            p2 = shape.parts[i + 1] - 1
        else:
            p2 = num_points - 1
        p1 += element_offset
        p2 += element_offset
        # Create the connectivity list for this part. Each point is connected to the next
        for i in range(p1, p2):
            elements.append((i, i + 1))
    return elements


def convert_to_single_zone(s, zone_name, dataset):
    """Loop over all the shapes, collecting their point values and generating
    the FE-Line Segment connectivity list."""
    x = []
    y = []
    elements = []
    num_points = 0
    for shapeRec in s.shapeRecords():
        elements.extend(create_connectivity_list(shapeRec.shape, num_points))
        x.extend([n[0] for n in shapeRec.shape.points])
        y.extend([n[1] for n in shapeRec.shape.points])
        num_points += len(shapeRec.shape.points)

    # Now that we have the points and connectivity list we add a zone to the dataset
    zone = dataset.add_fe_zone(ZoneType.FELineSeg, zone_name, num_points, len(elements))
    zone.values(0)[:] = x
    zone.values(1)[:] = y
    zone.nodemap[:] = elements


def convert_to_one_zone_per_shape(s, name_index, dataset):
    """Create a Tecplot zone for each shape"""
    for i, shapeRec in enumerate(s.shapeRecords()):
        # Extract the zone name from the appropriate location in the shape record
        zone_name = shapeRec.record[name_index]
        if len(zone_name) == 0:
            zone_name = 'NONE'
        num_points = len(shapeRec.shape.points)
        elements = create_connectivity_list(shapeRec.shape)
        x = [n[0] for n in shapeRec.shape.points]
        y = [n[1] for n in shapeRec.shape.points]

        # Create the Tecplot zone and add the point data as well as the connectivity list
        zone = dataset.add_fe_zone(ZoneType.FELineSeg, zone_name, num_points, len(elements))
        zone.values(0)[:] = x
        zone.values(1)[:] = y
        zone.nodemap[:] = elements

        # Print dots to give the user an indication that something is happening
        sys.stdout.write('.')
        sys.stdout.flush()


def get_var_names():
    """Choose the variable names to use"""
    print("1 - Use 'x' and 'y'")
    print("2 - Use 'lon' and 'lat'")
    var_name_choice = int(input("Enter your choice for variable names: ")) - 1
    return var_name_choice


def get_name_index(shape_reader):
    """Displays Shapefile column used to name zones"""
    first_record = shape_reader.shapeRecords()[0].record
    # Record is the "column" information for the shape
    index = 1
    for f, r in zip(shape_reader.fields[1:], first_record):
        print(index, "- ", f[0], ": ", r)
        index += 1
    name_index = int(input("Enter the index to use for zone names: ")) - 1
    return name_index


def get_conversion_option(shape_records):
    """Prompts user for conversion options"""
    print("1 - Convert to a single zone")
    print("2 - Convert to one zone per shape (%d zones) (this can take a while)" % (len(shape_records)))
    import_option = int(input("Enter your conversion selection: "))
    return import_option


def main(shapefilename, outfilename):
    # define index from record for zone name
    s = sf.Reader(shapefilename)
    shape_records = s.shapeRecords()
    conversion_option = get_conversion_option(shape_records)

    if get_var_names() == 0:
        x_var_name = 'x'
        y_var_name = 'y'
    else:
        x_var_name = 'lon'
        y_var_name = 'lat'

    dataset = tp.active_frame().create_dataset("Shapefile", [x_var_name, y_var_name])

    if conversion_option == 1:  # Single Zone
        start = time.time()
        convert_to_single_zone(s, os.path.basename(shapefilename), dataset)
    else:  # One Zone per Shape
        name_index = get_name_index(s)
        start = time.time()
        convert_to_one_zone_per_shape(s, name_index, dataset)

    tp.data.save_tecplot_plt(outfilename)
    print("Elapsed time: ", time.time() - start)


if len(sys.argv) != 3:
    print("Usage:\nshapefile_to_plt.py shapefile.shp outfile.plt")
else:
    shapefilename = sys.argv[1]
    outfilename = sys.argv[2]
    main(shapefilename, outfilename)
| 33.607143 | 104 | 0.669146 | 825 | 5,646 | 4.45697 | 0.249697 | 0.039162 | 0.016318 | 0.024476 | 0.214849 | 0.150122 | 0.126734 | 0.11096 | 0.096274 | 0.081044 | 0 | 0.008972 | 0.230074 | 5,646 | 167 | 105 | 33.808383 | 0.836899 | 0.333688 | 0 | 0.170213 | 0 | 0 | 0.091939 | 0.00728 | 0 | 0 | 0 | 0 | 0 | 1 | 0.074468 | false | 0 | 0.085106 | 0 | 0.202128 | 0.074468 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
690cf46e3a65cc79fc30a8178dbe551901ad1473 | 2,746 | py | Python | mastering_oop/special_methods/card_factory.py | brittainhard/py | aede05530ad05a8319fef7e76b49e4bf3cebebac | [
"MIT"
] | null | null | null | mastering_oop/special_methods/card_factory.py | brittainhard/py | aede05530ad05a8319fef7e76b49e4bf3cebebac | [
"MIT"
] | null | null | null | mastering_oop/special_methods/card_factory.py | brittainhard/py | aede05530ad05a8319fef7e76b49e4bf3cebebac | [
"MIT"
] | null | null | null | """When creating factory functions, plain functions are good unless you need to
inherit from a higher level class. If you don't need to inherit, dont use a
class."""
from functools import partial
from .suits import *
from .cards import *
def card(rank, suit):
    if rank == 1:
        return AceCard('A', suit)
    elif 2 <= rank < 11:
        return NumberCard(str(rank), suit)
    elif 11 <= rank < 14:
        name = {11: "J", 12: "Q", 13: "K"}[rank]
        return FaceCard(name, suit)
    else:
        """The else clause is there to make explicit what inputs this function
        will handle"""
        raise Exception("Rank out of range.")


def card_better_elif(rank, suit):
    if rank == 1:
        return AceCard('A', suit)
    elif 2 <= rank < 11:
        return NumberCard(str(rank), suit)
    elif rank == 11:
        return FaceCard("J", suit)
    elif rank == 12:
        return FaceCard("Q", suit)
    elif rank == 13:
        return FaceCard("K", suit)
    else:
        """The else clause is there to make explicit what inputs this function
        will handle"""
        raise Exception("Rank out of range.")


def card_mapping(rank, suit):
    """Get the desired rank. If the rank isn't there by default, return a number
    card"""
    class_ = {1: AceCard, 11: FaceCard, 12: FaceCard, 13: FaceCard}.get(rank, NumberCard)
    return class_(rank, suit)


def card_functools_mapping(rank, suit):
    part_class = {
        1: partial(AceCard, 'A'),
        11: partial(FaceCard, 'J'),
        12: partial(FaceCard, 'Q'),
        13: partial(FaceCard, 'K')
    }.get(rank, partial(NumberCard, str(rank)))
    return part_class(suit)


class CardFactory:
    """This class is designed to contain a 'fluent api'. That means that one
    function call happens after the next. In the example, it's x.a().b(). This
    class is returning itself, which the next function uses to generate the
    card. We are containing this in one object for the sake of simplicity.

    It seems like the minute we decide to do a different API... I don't know
    how this would be useful exactly. A lot of these are just examples of stuff
    you can do with collections."""

    def rank(self, rank):
        self.class_, self.rank_str = {
            1: (AceCard, 'A'),
            11: (FaceCard, 'J'),
            12: (FaceCard, 'Q'),
            13: (FaceCard, 'K')
        }.get(rank, (NumberCard, str(rank)))
        return self

    def suit(self, suit):
        return self.class_(self.rank_str, suit)

    def get_deck(self):
        return [self.rank(r + 1).suit(s) for r in range(13)
                for s in (Club, Diamond, Heart, Spade)]


factory_functions = [card, card_better_elif, card_mapping,
                     card_functools_mapping]
| 31.563218 | 89 | 0.617626 | 393 | 2,746 | 4.264631 | 0.335878 | 0.033413 | 0.040573 | 0.016706 | 0.243437 | 0.21957 | 0.21957 | 0.21957 | 0.21957 | 0.21957 | 0 | 0.023928 | 0.269483 | 2,746 | 86 | 90 | 31.930233 | 0.811565 | 0.257101 | 0 | 0.226415 | 0 | 0 | 0.028729 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.132075 | false | 0 | 0.056604 | 0.037736 | 0.45283 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
690eeeccad42026d1f4e91d779a9b52f4b5eb52e | 847 | py | Python | Python3/1311-Get-Watched-Videos-by-Your-Friends/soln.py | wyaadarsh/LeetCode-Solutions | 3719f5cb059eefd66b83eb8ae990652f4b7fd124 | [
"MIT"
] | 5 | 2020-07-24T17:48:59.000Z | 2020-12-21T05:56:00.000Z | Python3/1311-Get-Watched-Videos-by-Your-Friends/soln.py | zhangyaqi1989/LeetCode-Solutions | 2655a1ffc8678ad1de6c24295071308a18c5dc6e | [
"MIT"
] | null | null | null | Python3/1311-Get-Watched-Videos-by-Your-Friends/soln.py | zhangyaqi1989/LeetCode-Solutions | 2655a1ffc8678ad1de6c24295071308a18c5dc6e | [
"MIT"
] | 2 | 2020-07-24T17:49:01.000Z | 2020-08-31T19:57:35.000Z | class Solution:
    def watchedVideosByFriends(self, watchedVideos: List[List[str]], friends: List[List[int]], ID: int, level: int) -> List[str]:
        n = len(friends)
        # BFS
        frontier = [ID]
        levels = {ID: 0}
        nsteps = 0
        while frontier:
            if level == 0:
                break
            level -= 1
            next_level = []
            for u in frontier:
                for v in friends[u]:
                    if v not in levels:
                        levels[v] = nsteps + 1
                        next_level.append(v)
            frontier = next_level
            nsteps += 1
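
        # `frontier` now holds the friends at exactly the requested distance;
        # count their watched videos and sort by (frequency, then name).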
        counter = collections.Counter()
        for ID in frontier:
            for video in watchedVideos[ID]:
                counter[video] += 1
        return sorted(counter, key=lambda x: (counter[x], x))
| 33.88 | 129 | 0.472255 | 91 | 847 | 4.362637 | 0.417582 | 0.06801 | 0.050378 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014523 | 0.430933 | 847 | 24 | 130 | 35.291667 | 0.809129 | 0.003542 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043478 | false | 0 | 0 | 0 | 0.130435 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
690f42d3fe72fbc49129a7da627897da2c12d466 | 2,614 | py | Python | tests/fixtures/client.py | radiac/mara | 413f1f9f4c7117839a8c03d72733d6f75494ddd3 | [
"BSD-3-Clause"
] | 16 | 2015-11-22T13:12:46.000Z | 2020-09-04T06:42:55.000Z | tests/fixtures/client.py | radiac/mara | 413f1f9f4c7117839a8c03d72733d6f75494ddd3 | [
"BSD-3-Clause"
] | 8 | 2016-01-09T23:32:46.000Z | 2019-09-30T23:30:49.000Z | tests/fixtures/client.py | radiac/mara | 413f1f9f4c7117839a8c03d72733d6f75494ddd3 | [
"BSD-3-Clause"
] | 7 | 2016-07-19T04:39:31.000Z | 2020-09-04T06:43:06.000Z | from __future__ import annotations
import logging
import socket
import pytest
from .constants import TEST_HOST, TEST_PORT
logger = logging.getLogger("tests.fixtures.client")
class BaseClient:
    """
    Blocking test client to connect to an app server
    """

    name: str

    def __init__(self, name: str):
        self.name = name

    def __str__(self):
        return self.name
class SocketClient(BaseClient):
    socket: socket.socket | None
    buffer: bytes

    def __init__(self, name: str):
        super().__init__(name)
        self.buffer = b""

    def connect(self, host: str, port: int):
        logger.debug(f"Socket client {self} connecting")
        self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.socket.connect((host, port))
        logger.debug(f"Socket client {self} connected")

    def write(self, raw: bytes):
        if not self.socket:
            raise ValueError("Socket not open")
        logger.debug(f"Socket client {self} writing {raw!r}")
        self.socket.sendall(raw)

    def read(self, len: int = 1024) -> bytes:
        if not self.socket:
            raise ValueError("Socket not open")
        raw: bytes = self.socket.recv(len)
        logger.debug(f"Socket client {self} received {raw!r}")
        return raw
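
    # Buffer raw socket reads and return one CRLF-terminated line at a time,
    # keeping any remainder for the next call.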
    def read_line(self, len: int = 1024) -> bytes:
        if b"\r\n" not in self.buffer:
            self.buffer += self.read(len)
        if b"\r\n" not in self.buffer:
            raise ValueError("Line not found")
        line, self.buffer = self.buffer.split(b"\r\n", 1)
        return line

    def close(self):
        if not self.socket:
            raise ValueError("Socket not open")
        logger.debug(f"Socket client {self} closing")
        self.socket.close()
        logger.debug(f"Socket client {self} closed")


@pytest.fixture
def socket_client_factory(request: pytest.FixtureRequest):
    """
    Socket client factory fixture

    Usage::

        def test_client(app_harness, socket_client_factory):
            app_harness(myapp)
            client = socket_client_factory()
            client.write(b'hello')
            assert client.read() == b'hello'
    """
    clients = []

    def connect(name: str | None = None, host: str = TEST_HOST, port: int = TEST_PORT):
        client_name = request.node.name
        if name is not None:
            client_name = f"{client_name}:{name}"
        client = SocketClient(client_name)
        client.connect(host, port)
        clients.append(client)
        return client

    yield connect

    for client in clients:
        client.close()
| 25.627451 | 87 | 0.613236 | 331 | 2,614 | 4.722054 | 0.253776 | 0.076775 | 0.046065 | 0.069098 | 0.266155 | 0.243122 | 0.150352 | 0.150352 | 0.12476 | 0.12476 | 0 | 0.004762 | 0.27697 | 2,614 | 101 | 88 | 25.881188 | 0.822222 | 0.109028 | 0 | 0.163934 | 0 | 0 | 0.132366 | 0.009235 | 0 | 0 | 0 | 0 | 0 | 1 | 0.163934 | false | 0 | 0.081967 | 0.016393 | 0.393443 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
690f93b1369267fc52a8a48543ba3893e460a7b1 | 7,768 | py | Python | pi.py | przemyslawgarlinski/pi_iowrap | 3eef421ebe718dd3bfe1723d1ede6b6a7bc76599 | [
"MIT"
] | 1 | 2017-11-26T22:12:16.000Z | 2017-11-26T22:12:16.000Z | pi.py | przemyslawgarlinski/pi_iowrap | 3eef421ebe718dd3bfe1723d1ede6b6a7bc76599 | [
"MIT"
] | null | null | null | pi.py | przemyslawgarlinski/pi_iowrap | 3eef421ebe718dd3bfe1723d1ede6b6a7bc76599 | [
"MIT"
] | null | null | null | """Standard raspberry GPIO access layer.
It defines abstract layer that extends InOutInterface to access all standard
ports on rapsberry pi. It uses RPi.GPIO under the hood.
Thanks to that you have a standardized way of accessing these ports, as well
as any others implementing InOutInterface.
"""
import logging
from base import InOutInterface
from base import get_gpio
from base import Settings
from base import PortListener
from exceptions import InvalidPortNumberError
from port import Port
class PiInterface(InOutInterface):
"""Standard GPIO interface abstraction layer.
Some examples of raw calls to ports using RPi.GPIO
GPIO.setmode(GPIO.BOARD) // set usual port numbering
GPIO.setup(7, GPIO.OUT)
GPIO.output(7, GPIO.HIGH)
GPIO.output(7, GPIO.LOW)
GPIO.cleanup()
"""
_GROUND = (6, 9, 14, 20, 25, 30, 34, 39)
_POWER_5V = (2, 4)
_POWER_3V3 = (1, 17)
_I2C = (3, 5, 27, 28)
_FORBIDDEN = _GROUND + _POWER_5V + _POWER_3V3 + _I2C
PULL_UP = 'pull_up'
PULL_DOWN = 'pull_down'
def __init__(self):
super(PiInterface, self).__init__(40)
for number in range(1, 41):
if number not in self._FORBIDDEN:
self._ports[number] = Port(self, number)
# Defines the pull up or pull down rezistor for inputs.
# Possible values are:
# 1. self.PULL_UP
# 2. self.PULL_DOWN
# 3. None (input fluctuating by default)
self.pull_up_down_rezistor = self.PULL_UP
self._port_listeners = {}
self._initialize_ports()
def __str__(self):
return 'Raspberry PI GPIO'
def _validate_port_number(self, port_number):
super(PiInterface, self)._validate_port_number(port_number)
if port_number in self._GROUND:
raise InvalidPortNumberError(
'This port number(%d) is reserved for GROUND.', port_number)
if port_number in self._POWER_3V3:
raise InvalidPortNumberError(
'This port number(%d) is reserved for 3.3V POWER.', port_number)
if port_number in self._POWER_5V:
raise InvalidPortNumberError(
'This port number(%d) is reserved for 5V POWER.', port_number)
if port_number in self._I2C:
raise InvalidPortNumberError(
'This port number(%d) is reserved for I2c.', port_number)
if port_number in self._FORBIDDEN:
raise InvalidPortNumberError(
'This port number(%d) is forbidden to take.', port_number)
def _gpio_setup(self, port_number, gpio_attr_name):
self._validate_port_number(port_number)
if Settings.IS_NO_HARDWARE_MODE:
logging.warning('No hardware mode, no value written')
else:
gpio = get_gpio()
if gpio_attr_name == 'IN':
# Special case for settings port as input.
# Pullup or pulldown rezistor should be set here.
kwargs = {}
if self.pull_up_down_rezistor == self.PULL_UP:
kwargs['pull_up_down'] = gpio.PUD_UP
elif self.pull_up_down_rezistor == self.PULL_DOWN:
kwargs['pull_up_down'] = gpio.PUD_DOWN
gpio.setup(
port_number,
getattr(gpio, gpio_attr_name),
**kwargs)
else:
gpio.setup(port_number, getattr(gpio, gpio_attr_name))
def _gpio_output(self, port_number, value):
self._validate_port_number(port_number)
if Settings.IS_NO_HARDWARE_MODE:
logging.warning('No hardware mode, no value written')
else:
gpio = get_gpio()
gpio.output(
port_number,
gpio.HIGH if value == self.HIGH else gpio.LOW
)
def get_value(self, port_number):
self._validate_port_number(port_number)
value = self._check_no_hardware_port_value(port_number)
if value is not None:
return value
else:
gpio = get_gpio()
value = gpio.input(port_number)
# logging.debug(
# 'Read gpio port value (%s): %s',
# self.get_port(port_number),
# value)
return self.HIGH if value == gpio.HIGH else self.LOW
def set_as_input(self, port_number):
self._gpio_setup(port_number, 'IN')
self._in_out_registry[port_number] = self._INPUT
return self
def set_as_output(self, port_number):
self._gpio_setup(port_number, 'OUT')
self._in_out_registry[port_number] = self._OUTPUT
return self
def set_high(self, port_number):
self._validate_port_number(port_number)
self._validate_write_port_number(port_number)
self._gpio_output(port_number, self.HIGH)
return self
def set_low(self, port_number):
self._validate_port_number(port_number)
self._validate_write_port_number(port_number)
self._gpio_output(port_number, self.LOW)
return self
def add_event(
self,
port_number,
on_rising_callback=None,
on_falling_callback=None):
"""Adds listening event on given port.
In this case 2nd argument passed to a callback is a value read
during callback invocation, which in theory might not be the one
that actually cause triggering the event.
"""
if Settings.IS_NO_HARDWARE_MODE:
logging.warning('No hardware mode, adding read event failed.')
else:
port_listener = self._port_listeners.get(port_number)
if not port_listener:
port_listener = _PiPortListener(self.get_port(port_number))
gpio = get_gpio()
gpio.add_event_detect(
port_number,
gpio.BOTH,
callback=port_listener.trigger_callbacks,
bouncetime=Settings.READ_SWITCH_DEBOUNCE)
self._port_listeners[port_number] = port_listener
if on_rising_callback:
logging.debug(
'Adding rising callback for interface (%s) on port %d',
self, port_number)
port_listener.add_rising_callback(on_rising_callback)
if on_falling_callback:
logging.debug(
'Adding falling callback for interface (%s) on port %d',
self, port_number)
port_listener.add_falling_callback(on_falling_callback)
def clear_read_events(self, port_number):
if not Settings.IS_NO_HARDWARE_MODE:
get_gpio().remove_event_detect(port_number)
if port_number in self._port_listeners:
del self._port_listeners[port_number]
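# A minimal usage sketch (illustrative only; assumes a Raspberry Pi with
# RPi.GPIO installed and the BOARD pin numbering used by this module):
#
# interface = PiInterface()
# interface.set_as_output(7).set_high(7)   # drive physical pin 7 high
# interface.set_as_input(11)               # pin 11 gets the pull-up by default
# print(interface.get_value(11))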
class _PiPortListener(PortListener):
def get_callbacks_to_trigger(self):
if not self._rising_callbacks and not self._falling_callbacks:
return []
to_trigger = []
port_value = self.port.value
if (port_value == InOutInterface.HIGH):
to_trigger.extend(self._rising_callbacks)
logging.debug(
'Event detected on interface (%s) on port (%d). '
'Type: RISING.',
self.port.interface,
self.port.number)
elif (port_value == InOutInterface.LOW):
to_trigger.extend(self._falling_callbacks)
logging.debug(
'Event detected on interface (%s) on port (%d). '
'Type: FALLING.',
self.port.interface,
self.port.number)
return to_trigger | 36.469484 | 80 | 0.608651 | 938 | 7,768 | 4.76226 | 0.204691 | 0.145512 | 0.043877 | 0.035818 | 0.40206 | 0.375196 | 0.351019 | 0.296844 | 0.242668 | 0.164316 | 0 | 0.009972 | 0.315783 | 7,768 | 213 | 81 | 36.469484 | 0.83048 | 0.137873 | 0 | 0.304636 | 0 | 0 | 0.094242 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086093 | false | 0 | 0.046358 | 0.006623 | 0.251656 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
6910edd2d74b03e0e705138a62b8524d93b325d6 | 4,190 | py | Python | cogs/ments.py | NastyHub/yewon | 62e7666a6be8c970871d15af4dfbbcd3ff0a97fd | [
"MIT"
] | null | null | null | cogs/ments.py | NastyHub/yewon | 62e7666a6be8c970871d15af4dfbbcd3ff0a97fd | [
"MIT"
] | null | null | null | cogs/ments.py | NastyHub/yewon | 62e7666a6be8c970871d15af4dfbbcd3ff0a97fd | [
"MIT"
] | null | null | null | import discord
import os
import json
from discord.ext import commands, tasks
import time
import asyncio
import random
from discord.utils import MAX_ASYNCIO_SECONDS
##########################################################################
#generalrole = discord.utils.get(ctx.guild.roles, id=661454256251076613)
#logchannel = discord.utils.get(client.get_all_channels(), id = 753619980548833401)
#SERVER INFO
ownerid = 631441731350691850
chanwoo = 631441731350691850
yewon = 819734468465786891
saji = 785135229894524959
donggu = 543680309661663233
hanjae = 406822771524501516
mintchocolate = 434328592739074048
csticker = 864745666580316170
dohyun = 652531481767444498
##########################################################################
#USEFUL FUNCTIONS
##########################################################################
def checkidentity(supposeid):
if int(supposeid) == chanwoo:
return "chanwoo"
elif int(supposeid) == yewon:
return "yewon"
elif int(supposeid) == saji:
return "saji"
elif int(supposeid) == donggu:
return "donggu"
elif int(supposeid) == hanjae:
return "hanjae"
elif int(supposeid) == mintchocolate:
return "mint"
elif int(supposeid) == csticker:
return "csticker"
elif int(supposeid) == dohyun:
return "dohyun"
else:
return None
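# Design note: the if/elif chain above is equivalent to a dict lookup,
# e.g. (sketch using the same module-level IDs):
# IDENTITIES = {chanwoo: "chanwoo", yewon: "yewon", saji: "saji", ...}
# def checkidentity(supposeid):
#     return IDENTITIES.get(int(supposeid))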
def sendrandom(providedlist, min, max):
howmuchtosend = random.randint(min, max)
sizeoflist = len(providedlist)
i = 1
returnlist = []
while i <= howmuchtosend:
i += 1
thingtoadd = providedlist[random.randrange(0, sizeoflist)]
returnlist.append(thingtoadd)
return returnlist
def getlist(sendid):
sendid = str(sendid)
path = "ments/ments.json"
with open(path) as f:
jsondata = json.load(f)
try:
mylist = jsondata[sendid]
except KeyError:
mylist = None
return mylist
class ments(commands.Cog):
def __init__(self, client):
self.client = client
@commands.command(aliases=["테스트"])
async def test(self, ctx):
checkme = checkidentity(ctx.author.id)
#await ctx.message.delete()
if ctx.author.id == 434328592739074048:
await ctx.send('...나는 모구모구')
await ctx.send(file=discord.File('image/mogumogu.jpg'))
else:
grablist = getlist(ctx.author.id)
if grablist is None:
await ctx.send("아직 너는 잘 모르겠는데..")
else:
herelist = sendrandom(grablist, 1, 1)
for i in herelist:
await ctx.send(i)
@commands.command()
async def joinvc(self, ctx):
if ctx.author.id == ownerid:
await ctx.message.delete()
channel = ctx.author.voice.channel
await channel.connect()
@commands.command()
async def leavevc(self, ctx):
if ctx.author.id == ownerid:
await ctx.message.delete()
await ctx.voice_client.disconnect()
@commands.command()
async def sendjson(self, ctx):
if ctx.author.id == ownerid:
await ctx.author.send(file=discord.File('ments/ments.json'))
@commands.command(aliases=["전송"])
async def dm(self, ctx, target: discord.Member, *, message):
try:
await ctx.message.delete()
except:
await ctx.send("이 명령어는 서버에서 사용해 주세요")
embed = discord.Embed(
title = f"📨 메세지가 도착했습니다!",
description = f"```{message}```\n\n답장해도 보내지지 않으니 직접 그 사람에게 말하세용\n명령어: `?전송 @유저 메세지 내용`",
color = discord.Color.from_rgb(255,105,180)
)
embed.set_footer(text=f"{ctx.author.name}님이 보낸 메세지")
try:
await target.send(embed=embed)
except:
await ctx.send(f"{target.mention}, 도착한 메세지가 있었지만 디엠 수신 기능이 꺼져있어 보내지 못하였습니다.")
#find a channel with an id 879895499338039301 from all the servers the bot is in
channel = discord.utils.get(self.client.get_all_channels(), id = 879895499338039301)
await channel.send(embed=embed)
def setup(client):
client.add_cog(ments(client)) | 28.310811 | 100 | 0.587351 | 458 | 4,190 | 5.344978 | 0.393013 | 0.039216 | 0.045752 | 0.034314 | 0.071487 | 0.053513 | 0.053513 | 0.053513 | 0.053513 | 0.039216 | 0 | 0.085613 | 0.258473 | 4,190 | 148 | 101 | 28.310811 | 0.701963 | 0.068019 | 0 | 0.165138 | 0 | 0.009174 | 0.085101 | 0.006253 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045872 | false | 0 | 0.073395 | 0 | 0.229358 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
691177c77ccb4ef9f4c89444502885d55b50a94c | 5,007 | py | Python | data/process_data.py | MitraG/Disaster-Response-Project | 179d875f9d16aba08cca14d9517531fb29b28041 | [
"OLDAP-2.4"
] | null | null | null | data/process_data.py | MitraG/Disaster-Response-Project | 179d875f9d16aba08cca14d9517531fb29b28041 | [
"OLDAP-2.4"
] | null | null | null | data/process_data.py | MitraG/Disaster-Response-Project | 179d875f9d16aba08cca14d9517531fb29b28041 | [
"OLDAP-2.4"
] | null | null | null | #First, we import the relevant libraries
import sys
import pandas as pd
from sqlalchemy import create_engine
def load_data(messages_filepath, categories_filepath):
'''This function will load the messages and categories datasets.
Then, this function will merge the datasets by left join using the common id and then return a pandas dataframe.
If the input is invalid or the data does not exist, this function will raise an error.
INPUT:
messages_filepath --> location of messages data file from the project root
categories_filepath --> location of the categories data file from the project root
OUTPUT:
df --> a DataFrame containing the merged dataset
'''
#load the messages dataset
messages = pd.read_csv(messages_filepath)
#load the categories dataset
categories = pd.read_csv(categories_filepath)
#merge the two datasets
df = pd.merge(messages, categories, on='id', how = 'left')
return df
def clean_data(df):
''' This function will clean and prepare the merged data to make it more efficient to work with.
The steps this function will take to clean and prepare the data are:
- Split the categories into separate category columns
- Rename every column to its corresponding category
- Convert category values to a boolean format (0 and 1)
- Replace the original categories column in the merged dataframe with the new category columns
- Drop any duplicates in the newly merged dataset
If the input is invalid or the data does not exist, this function will raise an error.
INPUT:
df --> a Pandas DataFrame with the merged data
OUTPUT:
df --> a new Pandas DataFrame with each category as a column and its entries as 0/1 indicators.
This is to flag if a message is classified under each category column.
'''
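#For illustration (inferred from the slicing below): a raw categories string
#like "related-1;request-0;offer-0" splits into columns named
#related, request, offer holding the numeric values 1, 0, 0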
#Split the categories into 36 individual category columns and create a dataframe
cat_cols = df["categories"].str.split(";", expand=True)
#Rename every column to its corresponding category
##First, calling the first row of cat_cols to extract a new list of new column names
##Using a lambda function that takes everything
##up to the second to last character of each string with slicing
row = cat_cols.iloc[0]
string_slicer = lambda x: x[:-2]
cat_colnames = [string_slicer(i) for i in list(row)]
cat_cols.columns = cat_colnames
#Convert category values to a boolean format (0 and 1)
#Iterating through the category columns in df to keep only the last character of each string (the 1 or 0)
##Then convert the string into a numeric value
##Using the slicing method once again
int_slicer = lambda x: int(x[-1])
for column in cat_cols:
cat_cols[column] = [int_slicer(i) for i in list(cat_cols[column])]
#Replace the original categories column in the merged dataframe with the new category columns
df = df.drop(['categories'], axis=1)
df = pd.merge(df, cat_cols, left_index=True, right_index=True)
df['related'] = df['related'].astype('str').str.replace('2', '1')
df['related'] = df['related'].astype('int')
#Drop any duplicates in the newly merged dataset
df = df.drop_duplicates()
return df
def save_data(df, database_filename):
''' This function will load the prepared data into a SQLite database file.
If the input is invalid or the data does not exist, this function will raise an error.
INPUT:
df --> a Pandas DataFrame containing the prepared data
database_filename --> path of the SQLite database file used to store data for model ingestion
'''
engine = create_engine('sqlite:///{}'.format(database_filename))
df.to_sql('categorised_messages', engine, index=False, if_exists='replace')
def main():
''' This is the main ETL function that extracts, transforms and loads the data.
'''
if len(sys.argv) == 4:
messages_filepath, categories_filepath, database_filepath = sys.argv[1:]
print('Loading data...\n MESSAGES: {}\n CATEGORIES: {}'
.format(messages_filepath, categories_filepath))
df = load_data(messages_filepath, categories_filepath)
print('Cleaning data...')
df = clean_data(df)
print('Saving data...\n DATABASE: {}'.format(database_filepath))
save_data(df, database_filepath)
print('Cleaned data saved to database!')
else:
print('Please provide the filepaths of the messages and categories '\
'datasets as the first and second argument respectively, as '\
'well as the filepath of the database to save the cleaned data '\
'to as the third argument. \n\nExample: python process_data.py '\
'disaster_messages.csv disaster_categories.csv '\
'DisasterResponse.db')
if __name__ == '__main__':
main() | 37.931818 | 117 | 0.675654 | 694 | 5,007 | 4.792507 | 0.279539 | 0.028864 | 0.038485 | 0.04089 | 0.312688 | 0.251052 | 0.19994 | 0.174083 | 0.149429 | 0.149429 | 0 | 0.004791 | 0.24965 | 5,007 | 132 | 118 | 37.931818 | 0.88049 | 0.496105 | 0 | 0.043478 | 0 | 0 | 0.243035 | 0.03129 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086957 | false | 0 | 0.065217 | 0 | 0.195652 | 0.108696 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
691235d4163755651c608d7db64917c20c45cfde | 1,568 | py | Python | randomseq/modules/make_random.py | andreagrioni/special-couscous | 17b8dcd0bcafab2f6952ddf3b38cd1292f62cee7 | [
"MIT"
] | null | null | null | randomseq/modules/make_random.py | andreagrioni/special-couscous | 17b8dcd0bcafab2f6952ddf3b38cd1292f62cee7 | [
"MIT"
] | 1 | 2021-08-17T12:17:29.000Z | 2021-08-17T12:17:29.000Z | randomseq/modules/make_random.py | andreagrioni/special-couscous | 17b8dcd0bcafab2f6952ddf3b38cd1292f62cee7 | [
"MIT"
] | null | null | null | import pandas as pd
from modules import bedtools
from modules import intervals
def generator(ARGUMENTS):
if not ARGUMENTS.input_bed and not ARGUMENTS.gtf_anno:
print(f"get random intervals from genome {ARGUMENTS.reference}")
RANDOM_BED = bedtools.random_interval(
ARGUMENTS.reference, ARGUMENTS.int_size, ARGUMENTS.N
)
elif ARGUMENTS.gtf_anno:
print(f"get intervals from annotation file {ARGUMENTS.gtf_anno}")
RANDOM_BED = intervals.gtf_to_bed(
file_name=ARGUMENTS.gtf_anno,
feature=ARGUMENTS.feature,
int_size=ARGUMENTS.int_size,
N=ARGUMENTS.N,
)
elif ARGUMENTS.input_bed:
print(f"load input bed file {ARGUMENTS.input_bed}")
RANDOM_BED = ARGUMENTS.input_bed
else:
print("nothing to do")
RANDOM_BED = None # ensure the name is bound for the checks below
if ARGUMENTS.avoid_int and RANDOM_BED:
print("removing positive intervals")
RANDOM_BED = bedtools.intersect(
RANDOM_BED, ARGUMENTS.avoid_int, opt=ARGUMENTS.intersect_opt
)
return RANDOM_BED
def make_set(ARGUMENTS):
df_list = list()
tmp_size = 0
while tmp_size < ARGUMENTS.N:
RANDOM_BED = generator(ARGUMENTS)
tmp_df = pd.read_csv(RANDOM_BED, sep="\t", header=None)
tmp_size += tmp_df.shape[0]
df_list.append(tmp_df)
if df_list:
merge_df = pd.concat(df_list, axis=0).sample(n=ARGUMENTS.N)
merge_df.to_csv(RANDOM_BED, sep="\t", header=False, index=False)
return RANDOM_BED
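# Minimal usage sketch (ARGUMENTS is assumed to be an argparse-style namespace
# exposing reference, int_size, N, avoid_int, etc. as read above):
# bed_path = make_set(args)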
if __name__ == "__main__":
pass
| 28 | 73 | 0.658163 | 207 | 1,568 | 4.7343 | 0.323672 | 0.10102 | 0.069388 | 0.042857 | 0.095918 | 0.095918 | 0 | 0 | 0 | 0 | 0 | 0.002562 | 0.253189 | 1,568 | 55 | 74 | 28.509091 | 0.83433 | 0 | 0 | 0.047619 | 0 | 0 | 0.128827 | 0.026786 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047619 | false | 0.02381 | 0.071429 | 0 | 0.166667 | 0.119048 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
69146b024afb8e8179d7794495072111c92f9cf1 | 532 | py | Python | Modulo3/aula18.py | Werberty/Curso-em-Video-Python3 | 24c0299edd635fb9c2db2ecbaf8532d292f92d49 | [
"MIT"
] | 1 | 2022-03-06T11:37:47.000Z | 2022-03-06T11:37:47.000Z | Modulo3/aula18.py | Werberty/Curso-em-Video-Python3 | 24c0299edd635fb9c2db2ecbaf8532d292f92d49 | [
"MIT"
] | null | null | null | Modulo3/aula18.py | Werberty/Curso-em-Video-Python3 | 24c0299edd635fb9c2db2ecbaf8532d292f92d49 | [
"MIT"
] | null | null | null | test = list()
test.append('Werberty')
test.append(21)
galera = list()
galera.append(test[:])
test[0] = 'Maria'
test[1] = 22
galera.append(test[:])
print(galera)
pessoal = [['joão', 19], ['Ana', 33], ['Joaquim', 13], ['Maria', 45]]
print(pessoal[1])
print(pessoal[2][1])
for p in pessoal:
print(f'{p[0]} tem {p[1]} anos de idade.')
galerinha = list()
dado = list()
for c in range(0, 3):
dado.append(str(input('Nome: ')))
dado.append(int(input('idade: ')))
galerinha.append(dado[:])
dado.clear()
print(galerinha) | 22.166667 | 69 | 0.614662 | 81 | 532 | 4.037037 | 0.469136 | 0.061162 | 0.097859 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.046256 | 0.146617 | 532 | 24 | 70 | 22.166667 | 0.674009 | 0 | 0 | 0.090909 | 0 | 0 | 0.144465 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.227273 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
6914bfcaeef4fb7954973326e68cefa7ccd0e8c9 | 4,667 | py | Python | summariser/ngram_vector/vector_generator.py | UKPLab/ijcai2019-relis | 8a40762dcfa90c075a4f6591cbdceb468026ef17 | [
"MIT"
] | 5 | 2019-06-30T14:45:12.000Z | 2020-07-26T12:59:36.000Z | summariser/ngram_vector/vector_generator.py | UKPLab/ijcai2019-relis | 8a40762dcfa90c075a4f6591cbdceb468026ef17 | [
"MIT"
] | 1 | 2020-07-11T10:47:57.000Z | 2020-09-16T10:53:36.000Z | summariser/ngram_vector/vector_generator.py | UKPLab/ijcai2019-relis | 8a40762dcfa90c075a4f6591cbdceb468026ef17 | [
"MIT"
] | 2 | 2019-12-24T02:10:42.000Z | 2020-04-27T05:39:49.000Z | from summariser.ngram_vector.base import Sentence
from summariser.utils.data_helpers import *
from nltk.stem.porter import PorterStemmer
from summariser.ngram_vector.state_type import *
import random
class Vectoriser:
def __init__(self,docs,sum_len=100,no_stop_words=True,stem=True,block=1,base=200,lang='english'):
self.docs = docs
self.without_stopwords = no_stop_words
self.stem = stem
self.block_num = block
self.base_length = base
self.language = lang
self.sum_token_length = sum_len
self.stemmer = PorterStemmer()
self.stoplist = set(stopwords.words(self.language))
self.sim_scores = {}
self.stemmed_sentences_list = []
self.load_data()
def sampleRandomReviews(self,num,heuristic_reward=True,rouge_reward=True,models=None):
heuristic_list = []
rouge_list = []
act_list = []
for ii in range(num):
state = State(self.sum_token_length, self.base_length, len(self.sentences),
self.block_num, self.language)
while state.available_sents != [0]:
new_id = random.choice(state.available_sents)
if new_id == 0:
continue
if new_id > 0 and len(self.sentences[new_id-1].untokenized_form.split(' ')) > self.sum_token_length:
continue
state.updateState(new_id-1,self.sentences)
actions = state.historical_actions
act_list.append(actions)
if heuristic_reward:
rew = state.getTerminalReward(self.sentences,self.stemmed_sentences_list,self.sent2tokens,self.sim_scores)
heuristic_list.append(rew)
if rouge_reward:
assert models is not None
r_dic = {}
for model in models:
model_name = model[0].split('/')[-1].strip()
rew = state.getOptimalTerminalRougeScores(model)
r_dic[model_name] = rew
rouge_list.append(r_dic)
return act_list, heuristic_list, rouge_list
def getSummaryVectors(self,summary_acts_list):
vector_list = []
for act_list in summary_acts_list:
state = State(self.sum_token_length, self.base_length, len(self.sentences), self.block_num, self.language)
for i, act in enumerate(act_list):
state.updateState(act, self.sentences, read=True)
vector = state.getSelfVector(self.top_ngrams_list, self.sentences)
vector_list.append(vector)
return vector_list
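# Rough usage sketch (hypothetical docs value: a list of
# (doc_name, [sentence, ...]) pairs, as unpacked by load_data below):
# vec = Vectoriser(docs, sum_len=100)
# acts, heuristic_rewards, _ = vec.sampleRandomReviews(10, rouge_reward=False)
# vectors = vec.getSummaryVectors(acts)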
def sent2tokens(self, sent_str):
if self.without_stopwords and self.stem:
return sent2stokens_wostop(sent_str, self.stemmer, self.stoplist, self.language)
elif not self.without_stopwords and self.stem:
return sent2stokens(sent_str, self.stemmer, self.language)
elif self.without_stopwords and not self.stem:
return sent2tokens_wostop(sent_str, self.stoplist, self.language)
else: # both false
return sent2tokens(sent_str, self.language)
def load_data(self):
self.sentences = []
for doc_id, doc in enumerate(self.docs):
doc_name, doc_sents = doc
doc_tokens_list = []
for sent_id, sent_text in enumerate(doc_sents):
token_sent = word_tokenize(sent_text, self.language)
current_sent = Sentence(token_sent, doc_id, sent_id + 1)
untokenized_form = untokenize(token_sent)
current_sent.untokenized_form = untokenized_form
current_sent.length = len(untokenized_form.split(' '))
self.sentences.append(current_sent)
sent_tokens = self.sent2tokens(untokenized_form)
doc_tokens_list.extend(sent_tokens)
stemmed_form = ' '.join(sent_tokens)
self.stemmed_sentences_list.append(stemmed_form)
#print('total sentence num: ' + str(len(self.sentences)))
self.state_length_computer = StateLengthComputer(self.block_num, self.base_length, len(self.sentences))
self.top_ngrams_num = self.state_length_computer.getStatesLength(self.block_num)
self.vec_length = self.state_length_computer.getTotalLength()
sent_list = []
for sent in self.sentences:
sent_list.append(sent.untokenized_form)
self.top_ngrams_list = getTopNgrams(sent_list, self.stemmer, self.language,
self.stoplist, 2, self.top_ngrams_num)
| 42.045045 | 122 | 0.630812 | 548 | 4,667 | 5.120438 | 0.229927 | 0.055595 | 0.021383 | 0.025659 | 0.162153 | 0.11119 | 0.070563 | 0.058446 | 0.058446 | 0.058446 | 0 | 0.006876 | 0.283265 | 4,667 | 110 | 123 | 42.427273 | 0.831988 | 0.014142 | 0 | 0.022472 | 0 | 0 | 0.002393 | 0 | 0 | 0 | 0 | 0 | 0.011236 | 1 | 0.05618 | false | 0 | 0.05618 | 0 | 0.191011 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
69212d7ab58acd86f0a6a7a1366fa6f42c3e9584 | 1,296 | py | Python | src/openprocurement/tender/openua/procedure/models/award.py | ProzorroUKR/openprocurement.api | 2855a99aa8738fb832ee0dbad4e9590bd3643511 | [
"Apache-2.0"
] | 10 | 2020-02-18T01:56:21.000Z | 2022-03-28T00:32:57.000Z | src/openprocurement/tender/openua/procedure/models/award.py | quintagroup/openprocurement.api | 2855a99aa8738fb832ee0dbad4e9590bd3643511 | [
"Apache-2.0"
] | 26 | 2018-07-16T09:30:44.000Z | 2021-02-02T17:51:30.000Z | src/openprocurement/tender/openua/procedure/models/award.py | ProzorroUKR/openprocurement.api | 2855a99aa8738fb832ee0dbad4e9590bd3643511 | [
"Apache-2.0"
] | 15 | 2019-08-08T10:50:47.000Z | 2022-02-05T14:13:36.000Z | from schematics.types import StringType, BooleanType, MD5Type, BaseType
from schematics.exceptions import ValidationError
from schematics.types.compound import ModelType
from openprocurement.api.models import ListType
from openprocurement.tender.core.procedure.models.award import (
Award as BaseAward,
PatchAward as BasePatchAward,
PostAward as BasePostAward,
)
from openprocurement.tender.core.procedure.models.milestone import QualificationMilestoneListMixin
from openprocurement.tender.openua.procedure.models.item import Item
class Award(QualificationMilestoneListMixin, BaseAward):
complaints = BaseType()
items = ListType(ModelType(Item, required=True))
qualified = BooleanType(default=False)
eligible = BooleanType(default=False)
def validate_qualified(self, data, qualified):
if data["status"] == "active" and not qualified:
raise ValidationError("This field is required.")
def validate_eligible(self, data, eligible):
if data["status"] == "active" and not eligible:
raise ValidationError("This field is required.")
class PatchAward(BasePatchAward):
items = ListType(ModelType(Item, required=True))
qualified = BooleanType()
eligible = BooleanType()
class PostAward(BasePostAward):
pass
| 35.027027 | 98 | 0.759259 | 135 | 1,296 | 7.274074 | 0.392593 | 0.077393 | 0.076375 | 0.059063 | 0.336049 | 0.336049 | 0.118126 | 0.118126 | 0 | 0 | 0 | 0.000916 | 0.157407 | 1,296 | 36 | 99 | 36 | 0.898352 | 0 | 0 | 0.142857 | 0 | 0 | 0.054012 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0.035714 | 0.25 | 0 | 0.678571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
6922080e0ef28e36cb9344b32948cabc43d991c9 | 4,448 | py | Python | tests/test_exch_uniform.py | computationalmodelling/fidimag | 07a275c897a44ad1e0d7e8ef563f10345fdc2a6e | [
"BSD-2-Clause"
] | 53 | 2016-02-27T09:40:21.000Z | 2022-01-19T21:37:44.000Z | tests/test_exch_uniform.py | computationalmodelling/fidimag | 07a275c897a44ad1e0d7e8ef563f10345fdc2a6e | [
"BSD-2-Clause"
] | 132 | 2016-02-26T13:18:58.000Z | 2021-12-01T21:52:42.000Z | tests/test_exch_uniform.py | computationalmodelling/fidimag | 07a275c897a44ad1e0d7e8ef563f10345fdc2a6e | [
"BSD-2-Clause"
] | 32 | 2016-02-26T13:21:40.000Z | 2022-03-08T08:54:51.000Z | from __future__ import print_function
import numpy as np
from fidimag.atomistic import Sim
from fidimag.common import CuboidMesh
from fidimag.atomistic import UniformExchange
def init_m(pos):
x, y, z = pos
return (x - 0.5, y - 0.5, z - 0.5)
def test_exch_1d():
"""
Test the x component of the exchange field
in a 1D mesh, with the spin ordering:
0 1 2 3 4
"""
mesh = CuboidMesh(nx=5, ny=1, nz=1)
sim = Sim(mesh)
exch = UniformExchange(1)
sim.add(exch)
sim.set_m(init_m, normalise=False)
field = exch.compute_field()
assert field[0] == 1
assert field[1 * 3] == 2
assert field[2 * 3] == 4
assert field[3 * 3] == 6
assert field[4 * 3] == 3
assert np.max(field[2::3]) == 0
assert np.max(field[1::3]) == 0
def test_exch_1d_pbc():
mesh = CuboidMesh(nx=5, ny=1, nz=1, periodicity=(True, False, False))
sim = Sim(mesh)
exch = UniformExchange(1)
sim.add(exch)
sim.set_m(init_m, normalise=False)
field = exch.compute_field()
assert field[0] == 1 + 4
assert field[3] == 2
assert field[6] == 4
assert field[9] == 6
assert field[12] == 3 + 0
assert np.max(field[2::3]) == 0
assert np.max(field[1::3]) == 0
def test_exch_2d():
mesh = CuboidMesh(nx=5, ny=2, nz=1)
sim = Sim(mesh)
exch = UniformExchange(1)
sim.add(exch)
sim.set_m(init_m, normalise=False)
field = exch.compute_field()
assert np.max(field[2::3]) == 0
assert field[0] == 1
assert field[3] == 2 + 1
assert field[6] == 1 + 2 + 3
assert field[9] == 2 + 3 + 4
assert field[12] == 3 + 4
def test_exch_2d_pbc2d():
"""
Test the exchange field components in a 2D mesh with PBCs
The mesh sites:
3 4 5 --> (0,1,0) (1,1,0) (2,1,0)
y ^ 0 1 2 (0,0,0) (1,0,0) (2,0,0)
|
x -->
The expected components are in increasing order along x
"""
mesh = CuboidMesh(nx=3, ny=2, nz=1, periodicity=(True, True, False))
print(mesh.neighbours)
sim = Sim(mesh)
exch = UniformExchange(1)
sim.add(exch)
sim.set_m(init_m, normalise=False)
field = exch.compute_field()
expected_x = np.array([3, 4, 5, 3, 4, 5])
expected_y = np.array([2, 2, 2, 2, 2, 2])
# Since the field ordering is now: fx1 fy1 fz1 fx2 ...
# We extract the x components jumping in steps of 3
assert np.max(abs(field[::3] - expected_x)) == 0
# For the y component is similar, now we start at the 1th
# entry and jump in steps of 3
assert np.max(abs(field[1::3] - expected_y)) == 0
# Similarly for the z component
assert np.max(field[2::3]) == 0
def test_exch_3d():
"""
Test the exchange field of the spins in this 3D mesh:
bottom layer:
8 9 10 11
4 5 6 7 x 2
0 1 2 3
Assertions are according to the mx component of the spins, since J is set
to 1
Spin components are given according to the (i, j) index position in the
lattice:
i lattice site
[[ 0. 0. 0.] --> 0 j=0
[ 1. 0. 0.] --> 1
[ 2. 0. 0.] --> 2
[ 3. 0. 0.] --> 3
[ 0. 1. 0.] --> 4 j=1
[ 1. 1. 0.]
...
Remember the field ordering: fx0, fy0, fz0, fx1, ...
"""
mesh = CuboidMesh(nx=4, ny=3, nz=2)
sim = Sim(mesh)
exch = UniformExchange(1)
sim.add(exch)
sim.set_m(init_m, normalise=False)
field = exch.compute_field()
# print field
# Exchange from 0th spin
assert field[0] == 1
# Exchange from 1st spin
# spin: 2 0 5 13
# mx: 2 0 1 1
assert field[3] == 2 + 0 + 1 + 1
# Exchange from 2nd spin
# spin: 3 1 6 14
# mx: 3 1 2 2
assert field[6] == 3 + 1 + 2 + 2
# ...
assert field[9] == 2 + 3 + 3
assert field[4 * 3] == 1
assert field[5 * 3] == 5
assert field[6 * 3] == 10
assert field[7 * 3] == 11
def test_exch_energy_1d():
mesh = CuboidMesh(nx=2, ny=1, nz=1)
sim = Sim(mesh)
exch = UniformExchange(1.23)
sim.add(exch)
sim.set_m((0, 0, 1))
energy = exch.compute_energy()
assert energy == -1.23
if __name__ == '__main__':
# test_exch_1d()
# test_exch_1d_pbc()
# test_exch_2d()
test_exch_2d_pbc2d()
# test_exch_3d()
# test_exch_energy_1d()
| 23.046632 | 77 | 0.546088 | 717 | 4,448 | 3.297071 | 0.175732 | 0.107022 | 0.037225 | 0.035533 | 0.409052 | 0.348562 | 0.315567 | 0.30753 | 0.282572 | 0.258037 | 0 | 0.088206 | 0.319469 | 4,448 | 192 | 78 | 23.166667 | 0.692765 | 0.317221 | 0 | 0.413793 | 0 | 0 | 0.002787 | 0 | 0 | 0 | 0 | 0 | 0.367816 | 1 | 0.08046 | false | 0 | 0.057471 | 0 | 0.149425 | 0.022989 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
6928610238b2e0cae526f79687ff41ce7a474164 | 4,882 | py | Python | Competing_Algorithm/demo.py | ZhiQiu976/project-Indian-Buffet-Process | a4817550f2ca1778333066fa03ec6bb5b9cb4240 | [
"MIT"
] | null | null | null | Competing_Algorithm/demo.py | ZhiQiu976/project-Indian-Buffet-Process | a4817550f2ca1778333066fa03ec6bb5b9cb4240 | [
"MIT"
] | null | null | null | Competing_Algorithm/demo.py | ZhiQiu976/project-Indian-Buffet-Process | a4817550f2ca1778333066fa03ec6bb5b9cb4240 | [
"MIT"
] | 1 | 2020-04-30T17:26:26.000Z | 2020-04-30T17:26:26.000Z | """Demo for latent factor model"""
from __future__ import division
import numpy as np
import numpy.random as nr
import matplotlib.pyplot as plt
from IBPFM import IBPFM
from utils.tracePlot import trace
from utils.scaledimage import scaledimage
N = 100
chain = 1000
K_finite = 6
# # read the keyboard input for the number of images
# N = raw_input("Enter the number of noisy images for learning features: ")
# try:
# N = int(N)
# except ValueError:
# print "Not a number"
# sys.exit('Try again')
# # read the keyboard input for the number of MCMC chain
# chain = raw_input("Enter the number of MCMC chain: ")
# try:
# chain = int(chain)
# except ValueError:
# print "Not a number"
# sys.exit('Try again')
# # read the keyboard input for the number of finite K
# K_finite = raw_input("Enter the finite number (upper bound) of features K: ")
# try:
# K_finite = int(K_finite)
# except ValueError:
# print "Not a number"
# sys.exit('Try again')
# ------------------------------------------------------------------------------
# Model parameter
(alpha, alpha_a, alpha_b) = (1., 1., 1.)
(sigma_x, sigma_xa, sigma_xb) = (.5, 1., 1.)
(sigma_a, sigma_aa, sigma_ab) = (1., 1., 1.)
# ------------------------------------------------------------------------------
# Generate image data from the known features
feature1 = np.array([[0,1,0,0,0,0],[1,1,1,0,0,0],[0,1,0,0,0,0],\
[0,0,0,0,0,0],[0,0,0,0,0,0],[0,0,0,0,0,0]])
feature2 = np.array([[0,0,0,1,1,1],[0,0,0,1,0,1],[0,0,0,1,1,1],\
[0,0,0,0,0,0],[0,0,0,0,0,0],[0,0,0,0,0,0]])
feature3 = np.array([[0,0,0,0,0,0],[0,0,0,0,0,0],[0,0,0,0,0,0],\
[1,0,0,0,0,0],[1,1,0,0,0,0],[1,1,1,0,0,0]])
feature4 = np.array([[0,0,0,0,0,0],[0,0,0,0,0,0],[0,0,0,0,0,0],\
[0,0,0,1,1,1],[0,0,0,0,1,0],[0,0,0,0,1,0]])
D = 36
f1 = feature1.reshape(D)
f2 = feature2.reshape(D)
f3 = feature3.reshape(D)
f4 = feature4.reshape(D)
trueWeights = np.vstack((f1, f2, f3, f4))
# ------------------------------------------------------------------------------
# Generate noisy image data
K = 4
sig_x_true = 0.5
A = np.vstack((f1, f2, f3, f4)).astype(float)
Z_orig = nr.binomial(1, 0.5, (N, K)).astype(float)
V_orig = nr.normal(0, 1, size=(N, K))
# V_orig = nr.exponential(1, size=(N, K))
Z_orig = np.multiply(Z_orig, V_orig)
X = np.dot(Z_orig, A)
noise = nr.normal(0, sig_x_true, (N, D))
X += noise
# ------------------------------------------------------------------------------
# Return MCMC result
(K_save, alpha_save, sigma_x_save, sigma_a_save, loglikelihood_save, Z_save, A_save) = \
IBPFM(iteration=chain, data=X, upperbound_K=K_finite,
alpha=(alpha, alpha_a, alpha_b),
sigma_x=(sigma_x, sigma_xa, sigma_xb),
sigma_a=(sigma_a, sigma_aa, sigma_ab), realvaluedZ=True,
proposeNewfeature=True,
updateAlpha=True, updateSigma_x=True, updateSigma_a=True,
initZ=None, stdData=False)
# Save trace plots
trace(K_save, alpha_save, sigma_x_save, sigma_a_save, loglikelihood_save)
# Save true latent feature plot
(orig, sub) = plt.subplots(1, 4)
for sa in sub.flatten():
sa.set_visible(False)
orig.suptitle('True Latent Features')
for (i, true) in enumerate(trueWeights):
ax = sub[i]
ax.set_visible(True)
scaledimage(true.reshape(6, 6), pixwidth=3, ax=ax)
orig.set_size_inches(13, 3)
orig.savefig('Original_Latent_Features.png')
plt.close()
# Save some of example figures from data X
examples = X[0:4, :]
(ex, sub) = plt.subplots(1, 4)
for sa in sub.flatten():
sa.set_visible(False)
ex.suptitle('Image Examples')
for (i, true) in enumerate(examples):
ax = sub[i]
ax.set_visible(True)
scaledimage(true.reshape(6, 6), pixwidth=3, ax=ax)
ex.set_size_inches(13, 3)
ex.savefig('Image_Examples.png')
plt.close()
# Show and save result
lastZ = Z_save[:, :, chain]
mcount = (lastZ != 0).astype(int).sum(axis=0)
index = np.where(mcount > 0)
lastK = K_save[chain].astype(int)
lastA = A_save[index, :, chain]
A = lastA.reshape(len(index[0]), D)
A_row = A.shape[0]
for i in range(A_row):
cur_row = A[i, :].tolist()
abs_row = [abs(j) for j in cur_row]
max_index = abs_row.index(max(abs_row))
if cur_row[max_index] < 0:
A[i, :] = -np.array(cur_row)
K = max(len(trueWeights), len(A))
(fig, subaxes) = plt.subplots(2, K)
for sa in subaxes.flatten():
sa.set_visible(False)
fig.suptitle('Ground truth (top) vs learned factors (bottom)')
for (idx, trueFactor) in enumerate(trueWeights):
ax = subaxes[0, idx]
ax.set_visible(True)
scaledimage(trueFactor.reshape(6, 6),
pixwidth=3, ax=ax)
for (idx, learnedFactor) in enumerate(A):
ax = subaxes[1, idx]
scaledimage(learnedFactor.reshape(6, 6),
pixwidth=3, ax=ax)
ax.set_visible(True)
#fig.savefig("IBP_meanA.png")
plt.show()
| 30.135802 | 88 | 0.599549 | 820 | 4,882 | 3.460976 | 0.213415 | 0.072586 | 0.09408 | 0.105708 | 0.388302 | 0.32382 | 0.26709 | 0.251233 | 0.239253 | 0.236082 | 0 | 0.055968 | 0.17288 | 4,882 | 161 | 89 | 30.322981 | 0.646855 | 0.262392 | 0 | 0.195876 | 0 | 0 | 0.035413 | 0.00787 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.072165 | 0 | 0.072165 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
6928d12a9070c21f90e40630002456e946b09b39 | 1,391 | py | Python | ex3_1_dnn_mnist_cl.py | yoongon/keraspp | 4950e2e78bfd19095b88fd3a1ca74ffedba819a5 | [
"MIT"
] | null | null | null | ex3_1_dnn_mnist_cl.py | yoongon/keraspp | 4950e2e78bfd19095b88fd3a1ca74ffedba819a5 | [
"MIT"
] | null | null | null | ex3_1_dnn_mnist_cl.py | yoongon/keraspp | 4950e2e78bfd19095b88fd3a1ca74ffedba819a5 | [
"MIT"
] | null | null | null | # 기본 파라미터 설정 #########################
Nin = 784
Nh_l = [100, 50]
number_of_class = 10
Nout = number_of_class
# Classification DNN model implementation ########################
from keras import layers, models
class DNN(models.Sequential):
def __init__(self, Nin, Nh_l, Nout):
super().__init__()
self.add(layers.Dense(Nh_l[0], activation='relu', input_shape=(Nin,), name='Hidden-1'))
self.add(layers.Dense(Nh_l[1], activation='relu', name='Hidden-2'))
self.add(layers.Dense(Nout, activation='softmax'))
self.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
# Data preparation ##############################
import numpy as np
from keras import datasets
from keras.utils import np_utils
(X_train, y_train), (X_test, y_test) = datasets.mnist.load_data()
Y_train = np_utils.to_categorical(y_train)
Y_test = np_utils.to_categorical(y_test)
L, W, H = X_train.shape
X_train = X_train.reshape(-1, W * H)
X_test = X_test.reshape(-1, W * H)
X_train = X_train / 255.0
X_test = X_test / 255.0
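# each 28x28 image is flattened to a W*H = 784 vector (matching Nin) and
# pixel values are rescaled from [0, 255] to [0, 1]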
# Classification DNN training and testing ####################
model = DNN(Nin, Nh_l, Nout)
history = model.fit(X_train, Y_train, epochs=5, batch_size=100, validation_split=0.2)
performance_test = model.evaluate(X_test, Y_test, batch_size=100)
print('Test Loss and Accuracy ->', performance_test)
| 33.119048 | 95 | 0.608914 | 205 | 1,391 | 3.882927 | 0.404878 | 0.052764 | 0.048995 | 0.067839 | 0.133166 | 0.052764 | 0 | 0 | 0 | 0 | 0 | 0.029623 | 0.199137 | 1,391 | 41 | 96 | 33.926829 | 0.684919 | 0.035226 | 0 | 0 | 0 | 0 | 0.074373 | 0.019402 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034483 | false | 0 | 0.137931 | 0 | 0.206897 | 0.034483 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
69301dd6f35763c4b558452e2a75490a7e95b5cb | 1,176 | py | Python | ChestnutPatcher/inject-lib.py | chestnut-sandbox/Chestnut | b42b9eb902e0928e8b549339788f83bb009290c1 | [
"Zlib"
] | 7 | 2020-12-08T02:00:14.000Z | 2021-05-10T13:12:35.000Z | ChestnutPatcher/inject-lib.py | cc0x1f/Chestnut | b42b9eb902e0928e8b549339788f83bb009290c1 | [
"Zlib"
] | 2 | 2022-01-03T13:51:48.000Z | 2022-01-26T15:42:44.000Z | ChestnutPatcher/inject-lib.py | cc0x1f/Chestnut | b42b9eb902e0928e8b549339788f83bb009290c1 | [
"Zlib"
] | 2 | 2021-05-15T03:06:07.000Z | 2021-08-06T18:11:35.000Z | import sys
import lief
import json
import struct
import os
def filter_file(fname):
f = fname.replace("/", "_") + ".json"
if f[0] == ".":
f = f[1:]
return f
def main(fname):
# load filter
ffname = "policy_%s" % filter_file(fname)
filters = None
try:
filters = json.loads(open(ffname).read())
except:
print("[-] Could not load filter file %s" % ffname)
return 1
print("[+] Allowed syscalls: %d" % len(filters["syscalls"]))
# inject sandboxing library
binary = lief.parse(fname)
binary.add_library("libchestnut.so")
# add seccomp library as well
binary.add_library("libseccomp.so.2")
binary.write("%s_patched" % fname)
with open("%s_patched" % fname, "ab") as elf:
filter_data = json.dumps(filters).encode()
elf.write(filter_data)
elf.write(struct.pack("I", len(filter_data)))
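# Layout appended to the patched ELF by the writes above:
# <JSON filter bytes><4-byte length packed with struct "I" (native byte order)>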
os.chmod("%s_patched" % fname, 0o755)
#print(binary)
print("[+] Saved patched binary as %s_patched" % fname)
if __name__ == "__main__":
if len(sys.argv) != 2:
print("Usage: %s <binary>" % sys.argv[0])
else:
main(sys.argv[1])
| 25.021277 | 64 | 0.590986 | 154 | 1,176 | 4.376623 | 0.422078 | 0.047478 | 0.077151 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012514 | 0.252551 | 1,176 | 46 | 65 | 25.565217 | 0.754266 | 0.066327 | 0 | 0 | 0 | 0 | 0.190302 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.147059 | 0 | 0.264706 | 0.117647 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
69331d5c6bc354bd17e9f9d5696da9f4cbff069b | 5,826 | py | Python | scoff/parsers/generic.py | brunosmmm/scoff | e1a0b5f98dd9e60f41f3f7cfcda9038ffd80e138 | [
"MIT"
] | null | null | null | scoff/parsers/generic.py | brunosmmm/scoff | e1a0b5f98dd9e60f41f3f7cfcda9038ffd80e138 | [
"MIT"
] | 1 | 2020-03-20T13:57:52.000Z | 2021-03-11T17:25:25.000Z | scoff/parsers/generic.py | brunosmmm/scoff | e1a0b5f98dd9e60f41f3f7cfcda9038ffd80e138 | [
"MIT"
] | null | null | null | """Generic regex-based parser."""
import re
from collections import deque
from typing import Union, Any, List, Deque, Tuple, Dict, Callable
from scoff.parsers.linematch import MatcherError, LineMatcher
EMPTY_LINE = re.compile(rb"\s*$")
class ParserError(Exception):
"""Parser error."""
class DataParser:
"""Simple data parser.
Tokens are regular expression-based
"""
def __init__(
self,
initial_state: Union[str, int, None] = None,
consume_spaces: bool = False,
**kwargs,
):
"""Initialize.
:param initial_state: Initial state of the parser
:param consume_spaces: Consume stray space characters
"""
self._state_hooks = {}
super().__init__(**kwargs)
self._state_stack: Deque[Union[str, int, None]] = deque()
self._state = initial_state
self._consume = consume_spaces
self._current_position = 1
self._current_line = 1
self._data = None
self._abort = False
@property
def state(self):
"""Get current state."""
return self._state
def add_state_hook(self, state: Union[str, int], hook: Callable):
"""Add state hook (callback).
A callback will be called when the parser reaches a specified state.
:param state: The parser state to add a callback to
:param hook: The callback to be added
"""
if not callable(hook):
raise TypeError("hook must be callable")
if state not in self.states:
print(self.states)
raise ParserError(f"unknown state '{state}'")
if state not in self._state_hooks:
self._state_hooks[state] = {hook}
else:
self._state_hooks[state] |= {hook}
def _handle_match(self, candidate):
"""Handle candidate match."""
def _handle_options(self, **options: Any):
"""Handle candidate options."""
def _try_parse(
self, candidates: List[LineMatcher], position: int
) -> Tuple[int, LineMatcher, Dict[str, str]]:
if self._consume:
m = EMPTY_LINE.match(self._data, position)
if m is not None:
# an empty line, consume
return (m.span()[1], None, None)
for candidate in candidates:
try:
if not isinstance(candidate, LineMatcher):
raise TypeError("candidate must be LineMatcher object")
size, fields = candidate.parse_first(self._data, position)
except MatcherError:
continue
options = candidate.options.copy()
change_state = options.pop("change_state", None)
push_state = options.pop("push_state", None)
pop_state = options.pop("pop_state", None)
if change_state is not None:
self._change_state(change_state)
elif push_state is not None:
self._push_state(push_state)
elif pop_state is not None:
self._pop_state(pop_state)
# handle other options
self._handle_options(**options)
# handle other custom options
self._handle_match(candidate)
# advance position
self._current_position += size
# advance line
self._current_line += (
self._data.count(b"\n", position, position + size) + 1
)
return (size, candidate, fields)
raise ParserError("could not parse data")
def _current_state_function(self, position: int) -> int:
if not hasattr(self, "_state_{}".format(self._state)):
raise RuntimeError(f"in unknown state: {self._state}")
size, stmt, fields = getattr(self, "_state_{}".format(self._state))(
position
)
# call hooks
if self._state in self._state_hooks:
for hook in self._state_hooks[self._state]:
hook(self._state, stmt, fields)
return size
def _abort_parser(self):
"""Stop parsing."""
self._abort = True
@property
def current_pos(self):
"""Get current position."""
return self._current_position
@property
def current_line(self):
"""Get current line."""
return self._current_line
@property
def states(self):
"""Get possible states."""
return [
attr_name[len("_state_"):]
for attr_name in dir(self)
if attr_name.startswith("_state_")
and callable(getattr(self, attr_name))
and attr_name != "_state_change_handler"
]
def parse(self, data: str) -> int:
"""Parse data.
:param data: Textual data to be parsed
:return: Current position in data
"""
self._data = data.encode()
self._current_position = 1
self._current_line = 1
current_pos = 0
while current_pos < len(data):
if self._abort is True:
break
size = self._current_state_function(current_pos)
# consume data
current_pos += size + 1
return current_pos
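# A minimal subclass sketch (hypothetical matcher; state methods must be named
# _state_<name> and return (size, matched_candidate, fields), as expected by
# _current_state_function above):
#
# class MyParser(DataParser):
#     def _state_idle(self, position):
#         return self._try_parse([my_line_matcher], position)
#
# parser = MyParser(initial_state='idle')
# parser.parse('...data...')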
def _state_change_handler(self, old_state, new_state):
"""State change handler."""
def _change_state(self, new_state):
"""Change state."""
old_state = self._state
self._state = new_state
# call state change handler
self._state_change_handler(old_state, new_state)
def _push_state(self, new_state):
"""Push into state stack and change state."""
self._state_stack.append(self._state)
self._change_state(new_state)
def _pop_state(self, count):
"""Pop from state stack and change state."""
for num in range(count):
state = self._state_stack.pop()
self._change_state(state)
| 31.15508 | 76 | 0.583934 | 667 | 5,826 | 4.874063 | 0.230885 | 0.066441 | 0.025838 | 0.014765 | 0.103045 | 0.037527 | 0.022147 | 0.022147 | 0 | 0 | 0 | 0.002265 | 0.317885 | 5,826 | 186 | 77 | 31.322581 | 0.815803 | 0.154652 | 0 | 0.068376 | 0 | 0 | 0.040717 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.136752 | false | 0 | 0.034188 | 0 | 0.25641 | 0.008547 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
6933f6dc14a4268b062d6716b8b6fce13e8b8ff9 | 34,279 | py | Python | cfg/model/CfgModel.py | sdnellen/ordt-config-tool | 30cc7342c5bc0f574b2a4a8d207230e1fa527615 | [
"Apache-2.0"
] | 1 | 2019-12-06T19:11:28.000Z | 2019-12-06T19:11:28.000Z | utils/cfgtool/cfg/model/CfgModel.py | mytoys/open-register-design-tool | 5d6dea268f77546a9a786a16603f50e974d87050 | [
"Apache-2.0"
] | null | null | null | utils/cfgtool/cfg/model/CfgModel.py | mytoys/open-register-design-tool | 5d6dea268f77546a9a786a16603f50e974d87050 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/python3
'''
@author: snellenbach
Config sequence model
'''
from enum import Enum, unique
import re
from cfg.model.RegModelWrapper import RegModelWrapper
from cfg.model.Utils import MsgUtils
from cfg.output.OutBuilder import OutBuilder as ob
# ------- config model node classes
class BaseCfgNode:
_nodeStack = [] # stack of active config nodes for auto-add
_outBuilder = None
def __init__(self, sourceAstNode=None, comment=''):
self.sourceAstNode = sourceAstNode
self.comment = comment
self.children = []
self.parent = None
self.allowedTags = set() # set of allowed versions for this level (the parser currently allows this in class and method)
# add this node to parent (top of stack)
if __class__._nodeStack:
self.parent = __class__._nodeStack[-1]
self.parent.addChild(self)
def addChild(self, child):
self.children.append(child)
def popChild(self):
''' pop last added child from this node '''
if self.children:
self.children.pop()
def display(self, indent = 0):
''' display config model node info recursively '''
print(' '*indent + 'base:')
for child in self.children:
child.display(indent+1)
@staticmethod
def finishNode(omit):
''' Pop current node from the active model stack. Optionally, remove this node if omit is set. '''
__class__.popNode()
if omit:
parent = __class__._nodeStack[-1]
parent.popChild()
@staticmethod
def popNode():
''' pop cfg node from top of the stack '''
return __class__._nodeStack.pop()
@staticmethod
def peekNode():
''' return cfg node at top of the stack '''
return __class__._nodeStack[-1]
def hierDisplay(self, indent, s):
''' display config model node info recursively '''
print(' '*indent + s)
for child in self.children:
child.display(indent+1)
def resolvePaths(self):
''' resolve all paths in this config model node info recursively '''
for child in self.children:
child.resolvePaths()
def setOutBuilder(self, outBuilder):
''' set specified output builder '''
#print(f'BaseCfgNode setOutBuilder: called in {type(self)}, outBuilder type={type(outBuilder)}')
BaseCfgNode._outBuilder = outBuilder
def generateOutput(self):
''' generate specified output for this config model recursively '''
#print(f'BaseCfgNode generateOutput: called in {type(self)}')
for child in self.children:
child.generateOutput()
class HierCfgNode(BaseCfgNode):
''' hierarchical node (pushed to node stack on create) '''
def __init__(self, sourceAstNode = None, comment=''):
BaseCfgNode.__init__(self, sourceAstNode, comment)
# append this node to the stack
__class__._nodeStack.append(self)
self.vars = {} # dict of vars defined in this node scope
def whatami(self):
return 'unspecified hierarchy'
def findVar(self, varName, allowInputs = True):
''' find a variable by name traversing from current node thru ancestors '''
if self.vars.__contains__(varName):
retVar = self.vars[varName]
if allowInputs or (type(retVar) is not CfgInputVariable):
return retVar
MsgUtils.errorExit('input variable ' + varName + ' can not be assigned a value.')
return None
elif self.parent is None:
return None
else:
return self.parent.findVar(varName)
def getInputList(self):
return {k: v for k, v in self.vars.items() if type(v) is CfgInputVariable}
def verifyInputParms(self, inputListStr, callingNode):
''' check that a list of call parameter strings matches inputs for this hier and return the list of resolved inputs '''
if type(inputListStr) is not str:
MsgUtils.errorExit(f'misformed input list found when in call of {self.whatami()} {self.name}')
inputList = [] if not inputListStr else inputListStr.split(',')
inputCount = len(inputList)
inputParms = self.getInputList()
inputParmCount = len(inputParms)
#print(f"HierCfgNode verifyInputParms: inputList={inputList}, in len={inputCount}, vars=({', '.join(str(e) for e in inputParms.values())}), parm len={inputParmCount}, callNode type={type(callingNode)}")
if inputCount != inputParmCount:
MsgUtils.errorExit(f'incorrect number of input parameters (found {inputCount}, expected {inputParmCount}) in call of {self.whatami()} {self.name}')
# loop over the call values and resolve each against the type of the corresponding input parameter
resolvedInputList = []
for inVal, inParm in zip(inputList, inputParms.values()):
resolvedInputList.append(CfgVariable.resolveRhsExpression(inVal, inParm.vartype, True, True))
return resolvedInputList
class CfgClassNode(HierCfgNode):
_classes = {}
_current = None
def __init__(self, name, sourceAstNode = None, comment=''):
HierCfgNode.__init__(self, sourceAstNode, comment)
self.name = name
self.methods = {}
__class__._classes[self.name] = self
__class__._current = self
#print('creating class node, name=', self.name)
def whatami(self):
return 'class'
@staticmethod
def getCurrent():
''' return last created CfgClassNode '''
return __class__._current
@staticmethod
def findClass(className):
''' return a CfgClassNode by name '''
return None if className not in __class__._classes else __class__._classes[className]
def findMethod(self, methodName):
''' return a CfgMethodNode in this class by name '''
return None if methodName not in self.methods else self.methods[methodName]
def display(self, indent = 0):
inParms = self.getInputList()
self.hierDisplay(indent, f"class: {self.name}, vars=({', '.join(str(e) for e in self.vars.values())}), inputs=({', '.join(str(e) for e in inParms.values())}), allowed versions='{self.allowedTags}")
def generateOutput(self):
''' generate specified output for this class node '''
#print(f'CfgClassNode generateOutput: called in {type(self)}')
BaseCfgNode._outBuilder.enterClass(self)
for child in self.children:
child.generateOutput()
BaseCfgNode._outBuilder.exitClass(self)
class CfgMethodNode(HierCfgNode):
def __init__(self, name, sourceAstNode = None, comment=''):
HierCfgNode.__init__(self, sourceAstNode, comment)
self.name = name
self.args = []
# add method to dict in current class scope
parent = BaseCfgNode._nodeStack[-2]
parent.methods[self.name] = self
#print('creating method node, name=', self.name)
def whatami(self):
return 'method'
def display(self, indent = 0):
inParms = self.getInputList()
self.hierDisplay(indent, f"method: {self.name}, vars=({', '.join(str(e) for e in self.vars.values())}), inputs=({', '.join(str(e) for e in inParms.values())})")
def generateOutput(self):
''' generate specified output for this method node '''
#print(f'CfgMethodNode generateOutput: called in {type(self)}')
BaseCfgNode._outBuilder.enterMethod(self)
for child in self.children:
child.generateOutput()
BaseCfgNode._outBuilder.exitMethod(self)
@unique
class ConfigAssignType(Enum):
UNSUPPORTED = 0
EQ = 1
def isSupported(self):
return self is not ConfigAssignType.UNSUPPORTED
@staticmethod
def resolve(opStr):
''' convert a string to ConfigAssignType '''
if type(opStr) is ConfigAssignType: # if type is already correct, just return input
return opStr
if opStr == '=':
return ConfigAssignType.EQ
else:
return ConfigAssignType.UNSUPPORTED
class CfgAssign(BaseCfgNode):
def __init__(self, left=None, op=ConfigAssignType.UNSUPPORTED, right=None, sourceAstNode = None):
BaseCfgNode.__init__(self, sourceAstNode)
self.op = ConfigAssignType.resolve(op)
self.left = left # TODO - resolve here and remove checks from builder or allow default var create?
self.right = right # maybe pass target type into assign? or verify type match?
def isValid(self):
if self.op.isSupported() and (self.left is not None) and (self.right is not None):
return True
return False
def isRead(self):
''' return True if assign involves a reg read '''
return (type(self.right) is CfgReadNode)
def display(self, indent = 0):
self.hierDisplay(indent, f'assign: {self.left} {self.op.name} {self.right}')
def resolvePaths(self):
if self.isRead():
self.right.resolvePaths()
class CfgMethodCall(BaseCfgNode):
def __init__(self, className, methodName, parmList, sourceAstNode = None):
BaseCfgNode.__init__(self, sourceAstNode)
# if className specified in call path resolve class as a variable, else use current class
if className:
cfgClassVar = CfgVariable.resolveRhsExpression(className, CfgClassNode, False, True) # find the class variable
#self.cfgClass = CfgClassNode.getCurrent() # TODO add findVar option for non-none className
self.cfgClass = CfgClassNode.findClass(cfgClassVar.val[0].name) # TODO - saved call name structure should be fixed
else:
self.cfgClass = CfgClassNode.getCurrent()
#if not cfgClass:
# MsgUtils.errorExit('unable to resolve cfgClass ' + str(className) + ' in call of method ' + methodName)
self.cfgMethod = self.cfgClass.findMethod(methodName)
if not self.cfgMethod:
MsgUtils.errorExit(f'unable to resolve method {methodName} in cfgClass {self.cfgClass.name}')
self.parmList = self.cfgMethod.verifyInputParms(parmList, self.parent)
def display(self, indent = 0):
self.hierDisplay(indent, f'call: cfgClass={self.cfgClass.name}, method={self.cfgMethod.name}, parms={self.parmList}')
class CfgCaseNode(HierCfgNode):
def __init__(self, selectVar, sourceAstNode = None):
HierCfgNode.__init__(self, sourceAstNode)
self.selectVar = HierCfgNode.findVar(self, selectVar)
#print('creating case node, select var=' + str(self.selectVar))
def display(self, indent = 0):
self.hierDisplay(indent, f'case: select var={self.selectVar}')
class CfgCaseBlockNode(HierCfgNode):
_currentChoices = set() # init current choice set
def __init__(self, sourceAstNode = None):
HierCfgNode.__init__(self, sourceAstNode)
self.selectVals = set(__class__._currentChoices) # copy current set of choices
__class__._currentChoices.clear() # clear current choices
#print('creating case block node, choices=' + str(self.selectVals))
def display(self, indent = 0):
self.hierDisplay(indent, f'case block: choices={self.selectVals}')
@staticmethod
def addChoice(choiceName):
__class__._currentChoices.add(choiceName)
class CfgNumericForNode(HierCfgNode):
def __init__(self, name, rangeStart, rangeEnd, sourceAstNode = None):
HierCfgNode.__init__(self, sourceAstNode)
self.forVar = CfgVariable(name, CfgNumDataType)
self.rangeStart = CfgVariable.resolveRhsExpression(rangeStart, CfgNumDataType)
self.rangeEnd = CfgVariable.resolveRhsExpression(rangeEnd, CfgNumDataType)
#print('creating numeric for loop node, iterator var=' + str(self.forVar) + ' rangeStart=' + str(self.rangeStart) + ' rangeEnd=' + str(self.rangeEnd))
def display(self, indent = 0):
self.hierDisplay(indent, f'for (numeric): iterator={self.forVar} rangeStart={self.rangeStart} rangeEnd={self.rangeEnd}')
class CfgPathForNode(HierCfgNode):
def __init__(self, name, path, sourceAstNode = None):
HierCfgNode.__init__(self, sourceAstNode)
self.forVar = CfgVariable(name, CfgPathDataType)
self.path = CfgVariable.resolveRhsExpression(path, CfgPathDataType) # create path range
self.forVar.val = self.path # assign path to loop var so full path prefix can be extracted recursively using var
#print('creating path for loop node, iterator var=' + str(self.forVar) + ' path=' + str(self.path))
def display(self, indent = 0):
self.hierDisplay(indent, f'for (path): iterator={self.forVar}, range={self.path}')
def resolvePaths(self):
''' resolve paths in this for node '''
print(f'resolve CfgPathForNode path: {self.path}') # TODO
if type(self.path) is CfgPathDataType:
self.path.resolvePath(self.allowedTags) #TODO - any checks for a var?, how is version resolve handled?
# resolve paths in child nodes
for child in self.children:
child.resolvePaths()
class CfgPrintNode(BaseCfgNode):
def __init__(self, form, form_vars, sourceAstNode = None):
BaseCfgNode.__init__(self, sourceAstNode)
self.form = form # form can also be a list of comma separated args
self.form_vars = form_vars
#print('creating display node, form=', self.form, 'form_vars=', self.form_vars)
def display(self, indent = 0):
self.hierDisplay(indent, 'print: ' + str(self.form) + ', vars=' + str(self.form_vars))
class CfgWaitNode(BaseCfgNode):
def __init__(self, time, sourceAstNode = None):
BaseCfgNode.__init__(self, sourceAstNode)
self.time = time # time in ms
#print('creating wait node, time=', self.time)
def display(self, indent = 0):
self.hierDisplay(indent, 'wait: ' + str(self.time))
class CfgWriteNode(BaseCfgNode):
def __init__(self, path, value, wtype, isRmw = False, sourceAstNode = None):
BaseCfgNode.__init__(self, sourceAstNode)
self.path = CfgVariable.resolveRhsExpression(path, CfgPathDataType)
self.wtype = CfgPathHierType.resolve(wtype)
self.value = CfgVariable.resolveRhsExpression(value, CfgNumDataType)
self.isRmw = isRmw
#print('creating write node, path=', str(self.path), 'value=', str(self.value))
def display(self, indent = 0):
self.hierDisplay(indent, 'write: ' + str(self.path) + ', wtype=' + str(self.wtype) + ', value=' + str(self.value) + ', rmw=' + str(self.isRmw))
def resolvePaths(self):
''' resolve paths in this write node '''
        print(f'resolve CfgWriteNode path: {self.path}, wtype: {self.wtype}, rmw: {self.isRmw} --- self.path type={type(self.path)}') # TODO
if type(self.path) is CfgPathDataType:
self.path.resolvePath(self.allowedTags, self.wtype) #TODO - any checks for a var?, how is version resolve handled?
def generateOutput(self):
''' generate specified output for this write node '''
#print(f'CfgWriteNode generateOutput: called in {type(self)}')
if self.wtype.isReg():
BaseCfgNode._outBuilder.doRegWrite(self)
else:
BaseCfgNode._outBuilder.doFieldWrite(self)
class CfgWhileNode(HierCfgNode):
def __init__(self, compare, delay = 1, timeout = None, sourceAstNode = None):
HierCfgNode.__init__(self, sourceAstNode)
self.compare = compare
self.delay = delay
self.timeout = timeout
#print('creating poll node, compare=', self.compare, 'delay=', self.delay)
CfgWaitNode(self.delay)
def display(self, indent = 0):
prefix = 'poll ' if self.compare.isPoll() else ''
self.hierDisplay(indent, prefix + 'while: ' + str(self.compare) + ' timeout=' + str(self.timeout))
def isPoll(self):
''' return True if compare involves a reg read '''
return self.compare.isPoll()
def resolvePaths(self):
        ''' resolve paths in this while node '''
if self.isPoll():
self.compare.resolvePaths()
for child in self.children:
child.resolvePaths()
# ------- config model support classes (not BaseCfgNode children)
@unique
class CfgPathHierType(Enum):
UNKNOWN = 0
REGSET = 1
REG = 2
FIELDSET = 3
FIELD = 4
@staticmethod
def resolve(hierStr):
''' convert a string to CfgPathHierType '''
if type(hierStr) is CfgPathHierType: # if type is already correct, just return input
return hierStr
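        # NOTE: 'RegSet' and 'FieldSet' must be tested before 'Reg' and 'Field',
        # since the latter are substrings of the former.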
if 'RegSet' in hierStr:
return CfgPathHierType.REGSET
elif 'FieldSet' in hierStr:
return CfgPathHierType.FIELDSET
elif 'Reg' in hierStr:
return CfgPathHierType.REG
elif 'Field' in hierStr:
return CfgPathHierType.FIELD
else:
return CfgPathHierType.UNKNOWN
def isReg(self):
return self is CfgPathHierType.REG
def matchesRegModelType(self, regModType):
if self is CfgPathHierType.UNKNOWN:
return True
#print(f' -> CfgPathHierType matchesRegModelType: self type={self.name}, regModType={regModType.name}') # TODO
if self.name == regModType.name:
return True
return False
class CfgReadNode():
def __init__(self, path, rtype = CfgPathHierType.UNKNOWN, sourceAstNode = None):
self.path = CfgVariable.resolveRhsExpression(path, CfgPathDataType)
self.rtype = CfgPathHierType.resolve(rtype)
self.sourceAstNode = sourceAstNode # TODO - change to srcInfo
#print('creating read node, path=', self.path)
def __str__(self):
return f'read {self.path}, rtype={self.rtype}'
def resolvePaths(self):
''' resolve paths in this read '''
print(f'resolve CfgReadNode path: {self.path}, rtype={self.rtype}') # TODO
if type(self.path) is CfgPathDataType:
self.path.resolvePath(set(), self.rtype) # read node has no allowed tag override, TODO - any checks for a var?, how is version resolve handled?
# ------- config model data classes
class CfgDataType():
def __init__(self):
pass
def isValid(self):
return hasattr(self, 'val') and (self.val is not None)
class CfgBoolDataType(CfgDataType):
def __init__(self):
pass
class CfgNumDataType(CfgDataType):
def __init__(self, s):
self.size = None
self.hasSize = False
intval = __class__.strToInt(s)
if intval is not None:
self.val = intval
@staticmethod
def strToInt(s):
''' convert str to int if possible, else return None '''
try:
out = int(s, 0)
return out
except ValueError:
return None
    def __str__(self):
        if not self.isValid():
            return 'invalid num'
        return str(self.val) + (f'(size={self.size})' if self.size else '')
def needsSize(self):
return self.hasSize and self.size is None
class CfgEnumDataType(CfgNumDataType): # FIXME use separate type
def __init__(self):
pass
class CfgPathDataElement():
def __init__(self, pelemstr):
self.name = None # invalid if name is None
self.start = None
self.end = None
self.isIndexed = False
        self.hasRange = False # element is indexed with a range or wildcard (start/end may be unresolved)
self.annotations = {}
if '[' in pelemstr: # detect an array
self.isIndexed = True
pat = re.compile('(\\w+)\\s*\\[(.*)\\]')
mat = pat.match(pelemstr)
if mat:
self.name = mat.group(1)
arraystr = mat.group(2)
if ':' in arraystr:
self.hasRange = True
pat = re.compile('(\\w+|\\*)\\s*:\\s*(\\w+|\\*)')
mat = pat.match(arraystr)
if mat:
leftstr = mat.group(1)
rightstr = mat.group(2)
if leftstr == '*':
self.hasRange = True
else:
self.start = leftstr
if rightstr == '*':
self.hasRange = True
else:
self.end = rightstr
#else:
# print('CfgPathDataElement array match failed for s=' + arraystr)
elif '*' in arraystr: # detect full range wildcard
self.hasRange = True
else:
self.start = arraystr # single index case
self.end = arraystr
else:
self.name = pelemstr # scalar, so just save the name
def isVar(self):
''' return true if this path element is a path variable '''
return hasattr(self, 'baseVar')
def isRootVar(self):
''' return true if this path element is a path variable representing root of the reg model '''
return self.isVar() and (self.name == 'root')
def needsResolution(self):
return self.isIndexed and ((self.start is None) or (self.end is None))
def getElementString(self, unrollBase, leftIdx, rightIdx=None):
if unrollBase and self.isVar() and not self.isRootVar():
return self.baseVar.val.genFullPathStr()
if not self.isIndexed:
return self.name
        if rightIdx is None or (rightIdx == leftIdx):
return f'{self.name}[{leftIdx}]'
return f'{self.name}[{leftIdx}:{rightIdx}]'
def getFullElementString(self): # TODO
''' return full element string '''
startStr = str(self.start) if self.start else '*'
endStr = str(self.end) if self.end else '*'
return self.getElementString(True, startStr, endStr)
def getRawElementString(self):
''' return raw element string '''
startStr = str(self.start) if self.start else '*'
endStr = str(self.end) if self.end else '*'
return self.getElementString(False, startStr, endStr)
def getSampleElementString(self):
''' return sample element string for model lookup with indices set to 0 '''
return self.getElementString(True, 0)
def __str__(self):
return self.getRawElementString()
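# Illustrative parses (examples added for clarity; behavior per the regexes above):
#   CfgPathDataElement('blk')       -> name='blk', isIndexed=False
#   CfgPathDataElement('regs[3]')   -> name='regs', start='3', end='3', isIndexed=True
#   CfgPathDataElement('regs[1:4]') -> name='regs', start='1', end='4', hasRange=True
#   CfgPathDataElement('regs[*]')   -> name='regs', hasRange=True, start/end unresolved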
class CfgPathDataType(CfgDataType):
def __init__(self, pathstr):
self.htype = CfgPathHierType.UNKNOWN # resolved path type is unknown by default
self.call = None # default to no call
basepathstr = ''
if '(' in pathstr: # detect a call and remove from path
pat = re.compile('(.*)\\.(\\w+)')
mat = pat.match(pathstr)
if mat:
basepathstr = mat.group(1)
self.call = mat.group(2)
#print(f'found call match path={self.val}, call={self.call}')
else:
basepathstr = pathstr # TODO - store as path elem tuples? also TODO allow range wildcards
# create a list of path elements
self.val = []
newlist = basepathstr.split('.')
for elemstr in newlist:
elem = CfgPathDataElement(elemstr)
self.val.append(elem)
# check for valid path var extract
if not self.val:
            MsgUtils.errorExit(f'unable to create path from string={pathstr}')
firstPathElement = self.getBasePathElem()
# check for valid path base variable
baseVar = CfgVariable.resolveLhsExpression(firstPathElement.name, CfgPathDataType, False, False) # check for existing base path variable
if not baseVar:
MsgUtils.errorExit(f'unable to resolve root of path {pathstr}')
firstPathElement.baseVar = baseVar # save the referenced path variable in first element
def genFullPathStr(self):
''' return path with base var unrolled '''
return '.'.join([ elem.getFullElementString() for elem in self.getPathList() ])
def genRawPathStr(self):
''' return raw path (no base var unroll) '''
return '.'.join([ elem.getRawElementString() for elem in self.getPathList() ])
def genSamplePathStr(self):
''' return sample path for model lookup with all indices set to 0 '''
return '.'.join([ elem.getSampleElementString() for elem in self.getPathList() ])
def hasCall(self):
return self.call is not None
def setRegset(self):
self.htype = CfgPathHierType.REGSET
def setReg(self):
self.htype = CfgPathHierType.REG
def setFieldset(self):
self.htype = CfgPathHierType.FIELDSET
def setField(self):
self.htype = CfgPathHierType.FIELD
def getBasePathElem(self):
''' return the base path element '''
return self.getPathList()[0]
def getBasePathVar(self):
''' return the base path variable '''
return self.getBasePathElem().baseVar
def needsResolution(self):
if not self.getBasePathVar(): # or self.getBasePath().needsResolution(): # TODO - need variable needsResolution method?
return True
for elem in self.getPathList(): # check to see if any path elems are unresolved
if elem.needsResolution():
return True
return False
def isMultiPath(self):
for elem in self.getPathList(): # check to see if any path elems have more than single element range
if elem.hasRange:
return True
return False
    def resolvePath(self, allowedTags, targetType=CfgPathHierType.UNKNOWN):
''' resolve path type and any path index wildcards by referencing the regmodel '''
print(f' -> resolvePath CfgPathDataType raw path: {self} full path: {self.genFullPathStr()} sample path: {self.genSamplePathStr()}') # TODO
regModel = RegModelWrapper.getRegModelRoot()
if not regModel:
if self.needsResolution():
MsgUtils.errorExit(f'Path {self} has unresolved info, but no register model is defined.')
return # if no model and resolved we're done
# extract valid version tags and annotate path elements for each
validTags = RegModelWrapper.getValidTags(allowedTags)
print(f' -> resolvePath CfgPathDataType: allowedTags={allowedTags}, regmod tags: {RegModelWrapper.getRegModelTags()} valid tags: {validTags}') # TODO
for tag in validTags:
plist = regModel.get_path_instance_list(tag, self.genSamplePathStr())
if 'error' in plist:
MsgUtils.errorExit(f'Path {self.genRawPathStr()} was not found in register model using tag="{tag}".')
if not targetType.matchesRegModelType(plist['type']): # check that path type returned from model matches target
MsgUtils.errorExit(f'Expected type of path {self.genRawPathStr()} ({targetType}) does not match returned register model type ({plist["type"]}).')
            # TODO - check that MultiPath elems are allowed
self.annotatePath(tag, plist['instances'])
#print(f' -> resolvePath CfgPathDataType model returns: {plist}')
def annotatePath(self, tag, regModelPath):
# extract the full path by expanding lead path vars
expandedPath = self.getExpandedPathList()
#print(f' -> CfgPathDataType annotatePath: this path len={len(self.getPathList())}, expanded path len={len(expandedPath)}, regmod path len={len(regModelPath)}, path={regModelPath}')
if len(expandedPath) != len(regModelPath):
MsgUtils.errorExit(f'Path {self.genRawPathStr()} does not match form of returned register model path.')
# now loop and append regmodel info to local (non expanded) path elements
localIndex = len(expandedPath) - len(self.getPathList())
for pathElem, regModElem in zip(self.getPathList(), regModelPath[localIndex:]): # only annotate local path elements
print(f' -> CfgPathDataType annotatePath: element annotation, tag={tag}, elem={pathElem.name}, mod elem type={type(regModElem)}')
annotation = RegModelWrapper.createAnnotation(regModElem)
pathElem.annotations[tag] = annotation # annotate pathElem by tag
def getPathList(self):
''' return non-expanded path list '''
return self.val
def getExpandedPathList(self):
''' generate full path list by unrolling base path variable '''
if self.getBasePathElem().isRootVar():
return self.getPathList()
else:
if len(self.getPathList()) > 1:
return self.getBasePathElem().baseVar.val.getExpandedPathList() + self.getPathList()[1:] # remove lead element and append remainder
else:
                return self.getBasePathElem().baseVar.val.getExpandedPathList()
def __str__(self):
return f'ptype={self.htype.name}, path={self.genRawPathStr()}, needsResolution={self.needsResolution()}'
# ------- variable classes
class CfgVariable:
def __init__(self, name, vartype = CfgNumDataType):
self.name = name
self.vartype = vartype
self.val = None
# add var in current scope
parent = BaseCfgNode._nodeStack[-1]
if parent.findVar(self.name):
MsgUtils.errorExit('variable ' + self.name + ' is already defined.')
if not name.isalnum():
MsgUtils.errorExit('variable name ' + self.name + ' is not valid.')
parent.vars[self.name] = self
#print (f'--- cfg_model CfgVariable: adding var {self.name}, parent type is {type(parent)}')
def __str__(self):
return self.vartype.__name__ + ' ' + self.name
@staticmethod
def resolveRhsExpression(inVal, targetVarType, allowInstCreate = True, exitOnFail = True): # targetVarType is valid CfgDataType
''' given an unknown rhs expression, return an existing variable or instance (new from str or existing) of specified target data type '''
if type(inVal) is targetVarType: # already target type so done
return inVal
if (type(inVal) is CfgVariable) and (inVal.vartype is targetVarType): # already a variable so done
return inVal
if type(inVal) is str:
# try to find an existing variable
foundVar = HierCfgNode.peekNode().findVar(inVal)
if (foundVar is not None) and (foundVar.vartype is targetVarType):
return foundVar
# else try creating new target instance
if allowInstCreate:
newVal = targetVarType(inVal)
if newVal.isValid():
return newVal
if exitOnFail:
MsgUtils.errorExit('unable to resolve rhs expression ' + str(inVal) + ' to a value or variable.')
@staticmethod
def resolveLhsExpression(inVar, targetVarType, allowVarCreate = True, exitOnFail = True): # targetVarType is valid CfgDataType
''' given an unknown lhs expression, return an existing variable or create a new variable of specified target data type from str '''
if (type(inVar) is CfgVariable) and (inVar.vartype is targetVarType): # already a variable so done
return inVar
if type(inVar) is str:
# try to find an existing (non-input) variable
foundVar = HierCfgNode.peekNode().findVar(inVar, False) # input variables are not allowed on lhs
if (foundVar is not None) and (foundVar.vartype is targetVarType):
return foundVar
# else create a new var of target type
if allowVarCreate:
return CfgVariable(inVar, targetVarType)
if exitOnFail:
MsgUtils.errorExit('unable to resolve lhs expression ' + str(inVar) + ' to a variable.')
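# Resolution summary (inferred from the two methods above): an expression may
# already be a value/variable of the target type, or a string. Strings first
# try a variable lookup in the current scope; an rhs string may then fall back
# to constructing a new target-type instance (e.g. resolveRhsExpression('0x10',
# CfgNumDataType) yields a CfgNumDataType holding 16), while an lhs string may
# fall back to declaring a new variable of the target type.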
class CfgInputVariable(CfgVariable):
def __str__(self):
return 'input ' + self.vartype.__name__ + ' ' + self.name
# ------- config model compare class
@unique
class ConfigCompareType(Enum):
UNSUPPORTED = 0
EQ = 1
NE = 2
GT = 3
LT = 4
GE = 5
LE = 6
def isSupported(self):
        return self is not ConfigCompareType.UNSUPPORTED
@staticmethod
def resolve(opStr):
''' convert a string to ConfigCompareType '''
if type(opStr) is ConfigCompareType: # if type is already correct, just return input
return opStr
if opStr == '==':
return ConfigCompareType.EQ
elif opStr == '!=':
return ConfigCompareType.NE
elif opStr == '>':
return ConfigCompareType.GT
elif opStr == '<':
return ConfigCompareType.LT
elif opStr == '>=':
return ConfigCompareType.GE
elif opStr == '<=':
return ConfigCompareType.LE
        else:
return ConfigCompareType.UNSUPPORTED
class CfgCompare():
def __init__(self, left=None, op=ConfigCompareType.UNSUPPORTED, right=None):
self.op = op if type(op) is ConfigCompareType else ConfigCompareType.resolve(op)
        self.left = left    # TODO - extract non-CfgReadNode operands into a val or variable
        self.right = right  # TODO - extract non-CfgReadNode operands into a val or variable
def isValid(self):
if self.op.isSupported() and (self.left is not None) and (self.right is not None):
return True
return False
def leftIsPoll(self):
return type(self.left) is CfgReadNode
def rightIsPoll(self):
return type(self.right) is CfgReadNode
def isPoll(self):
''' return True if compare involves a reg read '''
return self.leftIsPoll() or self.rightIsPoll()
def __str__(self):
return f'l=({self.left}) op={self.op.name} r=({self.right})'
def resolvePaths(self):
''' resolve paths in this compare node '''
if self.leftIsPoll():
self.left.resolvePaths()
if self.rightIsPoll():
self.right.resolvePaths()
# ------ config model visitor TODO
| 47.675939 | 210 | 0.632603 | 3,825 | 34,279 | 5.59451 | 0.131242 | 0.013459 | 0.011823 | 0.01215 | 0.308472 | 0.261788 | 0.22258 | 0.168513 | 0.130427 | 0.110846 | 0 | 0.001985 | 0.265177 | 34,279 | 718 | 211 | 47.74234 | 0.847553 | 0.223956 | 0 | 0.29432 | 0 | 0.017212 | 0.10098 | 0.026361 | 0 | 0 | 0 | 0.001393 | 0 | 1 | 0.194492 | false | 0.006885 | 0.008606 | 0.034423 | 0.437177 | 0.015491 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
69357d4b61dce4eb42c2cf196abf2f132d27de5f | 2,061 | py | Python | tests/test_dict_qtable.py | fgka/reinforcement-learning-py | e4c582d192b36a270efce5e1512596b72466c8f7 | [
"MIT"
] | null | null | null | tests/test_dict_qtable.py | fgka/reinforcement-learning-py | e4c582d192b36a270efce5e1512596b72466c8f7 | [
"MIT"
] | null | null | null | tests/test_dict_qtable.py | fgka/reinforcement-learning-py | e4c582d192b36a270efce5e1512596b72466c8f7 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# vim: ai:sw=4:ts=4:sta:et:fo=croql
# coding=utf-8
import pytest
# Uncomment to run test in debug mode
# import pudb; pudb.set_trace()
from reinforcement_learning.dict_qtable import DictQTable
from test_qaction import QActionTest
from test_qstate import QStateTest
"""
DictQTable
"""
@pytest.mark.incremental
class TestDictQTable(object):
action_a = QActionTest(3)
action_b = QActionTest(4)
action_c = QActionTest(5)
state_a = QStateTest([action_a, action_b])
state_b = QStateTest([action_c])
value_a = 123.1
value_b = 234.5
def test_set_value(self):
# given
obj = DictQTable()
obj.set_value(self.state_a, self.action_a, self.value_a)
# when
stored_states = obj.get_all_stored_states()
# then
assert stored_states is not None, 'Table: {}'.format(obj)
        assert len(stored_states) == 1, 'Table: {}'.format(obj)
assert stored_states[0] is self.state_a, 'Table: {}'.format(obj)
value = obj.get_value(self.state_a, self.action_a)
assert value is not None, 'Table: {}'.format(obj)
        assert value == self.value_a, 'Table: {}'.format(obj)
def test_get_stored_action_values(self):
# given
obj = DictQTable()
obj.set_value(self.state_a, self.action_a, self.value_a)
obj.set_value(self.state_a, self.action_b, self.value_b)
# when
stored_action_values = obj.get_stored_action_values(self.state_a)
# then
assert stored_action_values is not None, 'Table: {}'.format(obj)
        assert len(stored_action_values) == 2, 'Table: {}'.format(obj)
assert self.action_a in stored_action_values.keys(), \
'Table: {}'.format(obj)
        assert stored_action_values[self.action_a] == self.value_a, \
'Table: {}'.format(obj)
assert self.action_b in stored_action_values.keys(), \
'Table: {}'.format(obj)
        assert stored_action_values[self.action_b] == self.value_b, \
'Table: {}'.format(obj)
| 33.786885 | 73 | 0.654051 | 291 | 2,061 | 4.395189 | 0.257732 | 0.094605 | 0.120407 | 0.125098 | 0.472244 | 0.41595 | 0.379984 | 0.296325 | 0.272088 | 0.212666 | 0 | 0.010665 | 0.226589 | 2,061 | 60 | 74 | 34.35 | 0.791719 | 0.080058 | 0 | 0.210526 | 0 | 0 | 0.053026 | 0 | 0 | 0 | 0 | 0 | 0.289474 | 1 | 0.052632 | false | 0 | 0.105263 | 0 | 0.368421 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
693580d8911168f775041c87c4274d3d07d8d2de | 2,851 | py | Python | handlers/acceptmember.py | micjerry/groupservice | 807e5d53533897ac36d9bf1cce30aee09979ea9f | [
"Apache-2.0"
] | 1 | 2015-12-14T08:31:30.000Z | 2015-12-14T08:31:30.000Z | handlers/acceptmember.py | micjerry/groupservice | 807e5d53533897ac36d9bf1cce30aee09979ea9f | [
"Apache-2.0"
] | null | null | null | handlers/acceptmember.py | micjerry/groupservice | 807e5d53533897ac36d9bf1cce30aee09979ea9f | [
"Apache-2.0"
] | null | null | null | import tornado.web
import tornado.gen
import json
import io
import logging
import motor
from bson.objectid import ObjectId
import mickey.userfetcher
from mickey.basehandler import BaseHandler
class AcceptMemberHandler(BaseHandler):
@tornado.web.asynchronous
@tornado.gen.coroutine
def post(self):
coll = self.application.db.groups
publish = self.application.publish
token = self.request.headers.get("Authorization", "")
data = json.loads(self.request.body.decode("utf-8"))
groupid = data.get("groupid", "")
inviteid = data.get("invite_id", self.p_userid)
members = data.get("members", [])
logging.info("begin to add members to group %s" % groupid)
if not groupid or not members:
logging.error("invalid request")
self.set_status(403)
self.finish()
return
result = yield coll.find_one({"_id":ObjectId(groupid)})
if not result:
logging.error("group %s does not exist" % groupid)
self.set_status(404)
self.finish()
return
if result.get("owner", "") != self.p_userid:
logging.error("%s are not the owner" % self.p_userid)
self.set_status(403)
self.finish()
            return
#get exist members
exist_ids = [x.get("id", "") for x in result.get("members", [])]
# get members and the receivers
add_members = list(filter(lambda x: x not in exist_ids, [x.get("id", "") for x in members]))
notify = {}
notify["name"] = "mx.group.authgroup_invited"
notify["pub_type"] = "any"
notify["nty_type"] = "device"
notify["msg_type"] = "other"
notify["groupid"] = groupid
notify["groupname"] = result.get("name", "")
notify["userid"] = inviteid
opter_info = yield mickey.userfetcher.getcontact(inviteid, token)
if opter_info:
notify["username"] = opter_info.get("name", "")
else:
logging.error("get user info failed %s" % inviteid)
adddb_members = list(filter(lambda x: x.get("id", "") in add_members, members))
append_result = yield coll.find_and_modify({"_id":ObjectId(groupid)},
{
"$addToSet":{"appendings":{"$each": adddb_members}},
"$unset": {"garbage": 1}
})
if append_result:
self.set_status(200)
publish.publish_multi(add_members, notify)
else:
            self.set_status(500)
            logging.error("add user failed %s" % groupid)
            # fall through to finish() below so the 500 response is actually sent
self.finish()
| 33.541176 | 105 | 0.54437 | 308 | 2,851 | 4.938312 | 0.357143 | 0.039448 | 0.042735 | 0.021039 | 0.101249 | 0.101249 | 0.068376 | 0.026298 | 0 | 0 | 0 | 0.008976 | 0.335672 | 2,851 | 84 | 106 | 33.940476 | 0.794087 | 0.016485 | 0 | 0.166667 | 0 | 0 | 0.121028 | 0.009282 | 0 | 0 | 0 | 0 | 0 | 1 | 0.015152 | false | 0 | 0.136364 | 0 | 0.212121 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
693bf1abd077feaa155858e8c51deee32624b50d | 14,767 | py | Python | example/hermes_bot.py | azalio/python-icq-bot | b5ab8306d2abf8c259da71db1a3195c842d51110 | [
"MIT"
] | null | null | null | example/hermes_bot.py | azalio/python-icq-bot | b5ab8306d2abf8c259da71db1a3195c842d51110 | [
"MIT"
] | null | null | null | example/hermes_bot.py | azalio/python-icq-bot | b5ab8306d2abf8c259da71db1a3195c842d51110 | [
"MIT"
] | null | null | null | import logging.config
import random
import re
from collections import defaultdict
from datetime import datetime
from enum import Enum
import requests
from example.util import log_call
from icq.bot import ICQBot, FileNotFoundException
from icq.constant import TypingStatus
from icq.filter import MessageFilter
from icq.handler import MessageHandler
try:
from urllib import parse
except ImportError:
import urlparse as parse
logging.config.fileConfig("logging.ini")
log = logging.getLogger(__name__)
NAME = "Hermes Bot"
VERSION = "0.0.2"
TOKEN = "000.0000000000.0000000000:000000000"
PHRASES = (
"Sweet lion of Zion!", "Sweet manatee of Galilee!", "Sweet llamas of the Bahamas!",
"Sweet something... of... someplace...", "Great cow of Moscow!", "Sweet giant anteater of Santa Anita!",
"Sweet ghost of Babylon!", "Sacred boa of West and Eastern Samoa!", "Sacred hog of Prague!",
"Cursed bacteria of Liberia!", "Sweet guinea pig of Winnipeg!", "Great bonda of Uganda!",
"Sweet three-toed sloth of the ice planet Hoth!", "Sweet honey bee of infinity!",
"Sweet yeti of the Serengeti!", "Sweet bongo of the Congo!", "Sweet squid of Madrid!",
"Sweet kookaburra of Edinburgh!", "Sweet topology of cosmology!", "Sweet coincidence of Port-au-Prince!",
"Sweet orca of Mallorca!", "Sweet candelabra of Le Havre, LaBarbara!"
)
def logging_iterator(name, iterable):
for item in iterable:
log.debug("Processing line ({name}): '{item}'.".format(name=name, item=item))
yield item
class HTTPMethod(Enum):
GET = "GET"
POST = "POST"
HEAD = "HEAD"
OPTIONS = "OPTIONS"
PUT = "PUT"
DELETE = "DELETE"
TRACE = "TRACE"
CONNECT = "CONNECT"
PATCH = "PATCH"
class HTTPRequest(object):
pattern = re.compile(r"^Connected to (?P<host>\S+) \((?P<ip>[^)]+)\) port (?P<port>\d+) \(#\d+\)$", re.IGNORECASE)
_pattern_request_line = re.compile(
r"^(?P<method>" + "|".join(m.value for m in HTTPMethod) + r")\s(?P<uri>/\S*)\sHTTP/(?P<version>\d\.\d)$",
flags=re.IGNORECASE
)
_pattern_http_header = re.compile(
r"^\s*(?P<name>X-[^:]*?|Host|User-Agent|Accept|Accept-Encoding|Connection|Content-Length|Content-Type|Expect|If"
r"-None-Match)\s*:\s*(?P<value>.*?)\s*$", flags=re.IGNORECASE
)
@log_call
def __init__(self, ip, method, url, version, headers, data):
super(HTTPRequest, self).__init__()
self.ip = ip
self.method = method
self.url = url
self.version = version
self.headers = headers
self.data = data
@staticmethod
@log_call
def parse(match, lines):
for line in lines:
request_line_match = HTTPRequest._pattern_request_line.search(line)
if request_line_match:
log.debug("Line matched with 'HTTPRequest._pattern_request_line' pattern.")
break
else:
raise ParseException("Can't find request line!")
headers = defaultdict(list)
for line in lines:
header_match = re.search(HTTPRequest._pattern_http_header, line)
if header_match:
headers[header_match.group("name")].append(header_match.group("value"))
else:
break
method = HTTPMethod(request_line_match.group("method"))
# Crutch for handling "Expect" request.
if "Expect" in headers:
if len(headers["Expect"]) != 1 and headers["Expect"][0] != "100-continue":
raise ParseException("Unknown 'Expect' request header value ('{}')!".format(headers["Expect"]))
line = next(lines)
if line != "HTTP/1.1 100 Continue":
raise ParseException("Unknown status line ('{}') for 'Expect' response!".format(line))
line = next(lines)
if line == "We are completely uploaded and fine":
# No data, seems like client logging bug.
data = None
else:
data = line
else:
if method is HTTPMethod.GET:
data = None
elif method is HTTPMethod.POST:
data = next(lines)
else:
raise ParseException("Unsupported HTTP method ('{}')!".format(method))
return HTTPRequest(
ip=match.group("ip"),
method=method,
url=parse.urlparse("{scheme}://{host}{uri}".format(
scheme={80: "HTTP", 443: "HTTPS"}[int(match.group("port"))],
host=match.group("host"),
uri=request_line_match.group("uri")
)),
version=request_line_match.group("version"),
headers=headers,
data=data
)
def __repr__(self):
return (
"HTTPRequest(method='{self.method}', url='{self.url}', version='{self.version}', headers='{self.headers}', "
"data='{self.data}')".format(self=self)
)
class HTTPResponse(object):
pattern = re.compile(r"^HTTP/(?P<version>\d\.\d)\s(?P<status_code>\d{3})\s(?P<reason_phrase>.+)$", re.IGNORECASE)
_pattern_http_header = re.compile(
r"^\s*(?P<name>X-[^:]*?|Server|Date|Content-Type|Content-Length|Content-Encoding|Connection|Keep-Alive|Access-C"
r"ontrol-Allow-Origin|Transfer-Encoding|Pragma|Cache-Control|ETag|Strict-Transport-Security|Set-Cookie)\s*:\s*("
r"?P<value>.*?)\s*$", re.IGNORECASE
)
_pattern_elapsed = re.compile(r"^Completed in (?P<elapsed>\d+) ms$", re.IGNORECASE)
@log_call
def __init__(self, version, status_code, reason_phrase, headers, data, elapsed):
super(HTTPResponse, self).__init__()
self.version = version
self.status_code = status_code
self.reason_phrase = reason_phrase
self.headers = headers
self.data = data
self.elapsed = elapsed
@staticmethod
@log_call
def parse(match, lines):
headers = defaultdict(list)
        # Collect recognised header lines (per _pattern_http_header) and stop
        # at the first non-header line, mirroring HTTPRequest.parse.
        for line in lines:
            header_match = re.search(HTTPResponse._pattern_http_header, line)
            if header_match:
                headers[header_match.group("name")].append(header_match.group("value"))
            else:
                break
data = next(lines)
for line in lines:
elapsed_match = re.search(HTTPResponse._pattern_elapsed, line)
if elapsed_match:
log.debug("Line matched with 'HTTPResponse._pattern_elapsed' pattern.")
elapsed = elapsed_match.group("elapsed")
break
else:
raise ParseException("Can't find elapsed time!")
return HTTPResponse(
version=match.group("version"),
status_code=match.group("status_code"),
reason_phrase=match.group("reason_phrase"),
headers=headers,
data=data,
elapsed=elapsed
)
def __repr__(self):
return (
"HTTPResponse(version='{self.version}', status_code='{self.status_code}', reason_phrase='{self.reason_phras"
"e}', headers='{self.headers}', data='{self.data}', elapsed='{self.elapsed}')".format(self=self)
)
class LogRecord(object):
pattern = re.compile(
r"^\[(?P<week_day>Sun|Mon|Tue|Wed|Thu|Fri|Sat)\s(?P<month>Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)\s{1,"
r"2}(?P<day>\d{1,2})\s(?P<hour>\d{2}):(?P<minute>\d{2}):(?P<second>\d{2})\s(?P<year>\d+)\.(?P<microsecond>\d{1,"
r"3})\]\.\[(?:0x)?[0-9a-fA-F]+\]\s*$", re.IGNORECASE
)
@log_call
def __init__(self, date_time, request=None, response=None):
super(LogRecord, self).__init__()
self.date_time = date_time
self.request = request
self.response = response
@staticmethod
@log_call
def parse(match, lines):
date_time = datetime(
year=int(match.group("year")),
month=int(datetime.strptime(match.group("month"), "%b").month),
day=int(match.group("day")),
hour=int(match.group("hour")),
minute=int(match.group("minute")),
second=int(match.group("second")),
microsecond=int(match.group("microsecond")) * 1000,
)
for line in lines:
request_match = HTTPRequest.pattern.search(line)
if request_match:
log.debug("Line matched with 'HTTPRequest.pattern' pattern.")
buffer = []
# noinspection PyAssignmentToLoopOrWithParameter
for line in lines:
response_match = re.search(HTTPResponse.pattern, line)
if response_match:
log.debug("Line matched with 'HTTPResponse.pattern' pattern.")
return LogRecord(
date_time=date_time,
request=HTTPRequest.parse(request_match, logging_iterator(HTTPRequest.__name__, buffer)),
response=HTTPResponse.parse(
response_match, logging_iterator(HTTPResponse.__name__, list(lines))
)
)
else:
buffer.append(line)
return LogRecord(date_time=date_time)
def fix_log(lines):
status_line_regexp = re.compile(r"^(?P<body>.*)(?P<status_line>HTTP/\d\.\d\s\d{3}\s.+)$", re.IGNORECASE)
connection_left_regexp = re.compile(r"^.*Connection #\d+ to host \S+ left intact$", re.IGNORECASE)
upload_sent_regexp = re.compile(r"^.*upload completely sent off: \d+ out of \d+ bytes$", re.IGNORECASE)
prev_line = None
for line in lines:
log.debug("Processing line: '{}'.".format(line))
if prev_line == "HTTP/1.1 100 Continue":
match = re.search(status_line_regexp, line)
if match:
log.debug("Fixing '100-continue' problem line.")
yield match.group("body")
yield match.group("status_line")
elif re.search(connection_left_regexp, line):
log.debug("Fixing 'Connection blah-blah left intact' problem line.")
# yield re.split(connection_left_split_regexp, line)[0]
elif re.search(upload_sent_regexp, line):
log.debug("Fixing 'Upload completely sent blah-blah' problem line.")
# result = re.split(upload_sent_split_regexp, line)[0]
else:
yield line
prev_line = line
def iterate_log(lines):
buffer = []
match = None
for line in lines:
m = re.search(LogRecord.pattern, line)
if m:
log.debug("Line matched with 'LogRecord.pattern' pattern.")
if buffer and match:
yield LogRecord.parse(match, logging_iterator(LogRecord.__name__, buffer))
buffer = []
match = m
else:
buffer.append(line)
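# Illustrative log fragment (hypothetical, but shaped to match the patterns
# above -- LogRecord.pattern, HTTPRequest.pattern and HTTPResponse.pattern):
#   [Mon Jan  2 03:04:05 2017.123].[0x7fabc]
#   Connected to api.example.com (203.0.113.7) port 443 (#0)
#   GET /aim/startSession HTTP/1.1
#   Host: api.example.com
#   HTTP/1.1 200 OK
#   Content-Type: application/json
#   {"response": {}}
#   Completed in 42 ms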
def file_callback(bot, event):
source_uin = event.data["source"]["aimId"]
message = event.data["message"]
try:
bot.set_typing(target=source_uin, typing_status=TypingStatus.TYPING)
# Getting info for file in message.
path = parse.urlsplit(message.strip()).path
file_id = path.rsplit("/", 1).pop()
file_info_response = bot.get_file_info(file_id=file_id)
if file_info_response.status_code == requests.codes.not_found:
raise FileNotFoundException
url = file_info_response.json()["file_list"].pop()["dlink"]
# Starting file download.
file_response = bot.http_session.get(url, stream=True)
if file_response.encoding is None:
file_response.encoding = "utf-8"
# Downloading file and calculating stats.
stats = defaultdict(int)
status_codes = defaultdict(int)
for log_record in iterate_log(fix_log(
line for line in file_response.iter_lines(chunk_size=1024, decode_unicode=True) if line
)):
if log_record.request:
stats["requests_count"] += 1
if log_record.request.url.path == "/aim/startSession":
stats["start_session_count"] += 1
if log_record.request.url.path == "/genToken":
stats["gen_token_count"] += 1
if log_record.response:
key = log_record.response.status_code + " " + log_record.response.reason_phrase
status_codes[key] += 1
else:
stats["no_response_count"] += 1
bot.send_im(
target=source_uin,
message=(
"Total requests: {requests_count}\n /aim/startSession: {start_session_count}\n /genToken: {gen_to"
"ken_count}\n\nResponse count by status code:\n{status_codes}\n\nFound problems:\n{problems}\n\n{phrase"
"}"
).format(
requests_count=stats["requests_count"],
start_session_count=stats["start_session_count"],
gen_token_count=stats["gen_token_count"],
status_codes="\n".join([
" {code}: {count}".format(
code=code, count=count
) for (code, count) in sorted(status_codes.items())
]),
problems=" Requests without response: {no_response_count}".format(
no_response_count=stats["no_response_count"]
),
phrase=random.choice(PHRASES)
)
)
except FileNotFoundException:
bot.send_im(target=source_uin, message=random.choice(PHRASES) + " Give me your log right now!")
except ParseException as e:
bot.send_im(
target=source_uin,
message="{phrase} Log format is not supported! Error: '{error}'.".format(
phrase=random.choice(PHRASES), error=e
)
)
raise
except Exception:
bot.send_im(target=source_uin, message=random.choice(PHRASES) + " Something has gone wrong!")
raise
finally:
bot.set_typing(target=source_uin, typing_status=TypingStatus.NONE)
class ParseException(Exception):
pass
def main():
# Creating a new bot instance.
bot = ICQBot(token=TOKEN, name=NAME, version=VERSION)
# Registering message handlers.
bot.dispatcher.add_handler(MessageHandler(
callback=file_callback,
filters=MessageFilter.file & ~(MessageFilter.image | MessageFilter.video | MessageFilter.audio)
))
# Starting a polling thread watching for new events from server. This is a non-blocking call.
bot.start_polling()
# Blocking the current thread while the bot is working until SIGINT, SIGTERM or SIGABRT is received.
bot.idle()
if __name__ == "__main__":
main()
| 36.825436 | 120 | 0.590709 | 1,720 | 14,767 | 4.926744 | 0.237209 | 0.024782 | 0.011801 | 0.013217 | 0.187397 | 0.138423 | 0.11199 | 0.06396 | 0.034694 | 0.022658 | 0 | 0.008487 | 0.281845 | 14,767 | 400 | 121 | 36.9175 | 0.79057 | 0.039209 | 0 | 0.214511 | 0 | 0.0347 | 0.251146 | 0.091499 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041009 | false | 0.003155 | 0.047319 | 0.006309 | 0.173502 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
693d787b8fc4cb803c97288d52b6f488a6db0a75 | 2,920 | py | Python | qstat_live.py | romeromig/qstat_live | dde8ceb956dc0689a1c40c06ff20d58990488765 | [
"MIT"
] | null | null | null | qstat_live.py | romeromig/qstat_live | dde8ceb956dc0689a1c40c06ff20d58990488765 | [
"MIT"
] | null | null | null | qstat_live.py | romeromig/qstat_live | dde8ceb956dc0689a1c40c06ff20d58990488765 | [
"MIT"
] | null | null | null | #! /usr/bin/python3
import curses
import sys
import subprocess
def main_menu(stdscr):
k = 0
# Start colors in curses
curses.start_color()
curses.init_pair(1, curses.COLOR_CYAN, curses.COLOR_BLACK)
curses.init_pair(2, curses.COLOR_RED, curses.COLOR_BLACK)
curses.init_pair(3, curses.COLOR_BLACK, curses.COLOR_WHITE)
# Set mode
switch = 0
# Loop where k is the last character pressed
while True:
if k == ord('q'):
sys.exit()
# Respond if the switch was pressed
if k == ord('.'):
if switch == 0:
switch = 1
else:
switch = 0
k = -1
# Initialization
curses.curs_set(False)
stdscr.nodelay(True)
stdscr.clear()
height, width = stdscr.getmaxyx()
# Call qstat
if switch == 0:
process = subprocess.Popen("qstat -u '*'", stdout=subprocess.PIPE, shell=True)
else:
process = subprocess.Popen('qstat', stdout=subprocess.PIPE)
stdout, stderr = process.communicate()
        qstat = stdout.decode().splitlines()  # decode bytes instead of parsing their repr()
# Strings
statusbarstr = " github.com/miferg | '.' to toggle all or user | 'q' to exit "
if switch == 0:
title = " qstat all users, {} jobs".format(len(qstat)-2)
title_empty = " qstat all users, no jobs"
if switch == 1:
title = " qstat current user, {} jobs".format(len(qstat)-2)
title_empty = " qstat current user, no jobs"
# Centering calculations
start_x_title = int((width // 2) - (len(title) // 2) - len(title) % 2)
# Render status bar
stdscr.attron(curses.color_pair(3))
stdscr.addstr(height-1, 0, statusbarstr)
stdscr.addstr(height-1, len(statusbarstr), " " * (width - len(statusbarstr) - 1))
stdscr.attroff(curses.color_pair(3))
# Rendering title
stdscr.attron(curses.color_pair(3))
stdscr.attron(curses.A_BOLD)
        if not qstat:  # qstat printed nothing, i.e. there are no jobs
stdscr.addstr(0, 0, title_empty)
stdscr.addstr(0, len(title_empty), " " * (width - len(title) - 1))
else:
stdscr.addstr(0, 0, title)
stdscr.addstr(0, len(title), " " * (width - len(title) - 1))
stdscr.attroff(curses.color_pair(3))
# Turning off attributes for title
stdscr.attroff(curses.color_pair(2))
stdscr.attroff(curses.A_BOLD)
# Print the qstat report, line by line until the screen is filled
for i in range(0, min(len(qstat),height-3)):
stdscr.addstr(i+1, 0, qstat[i])
# Refresh the screen
stdscr.refresh()
curses.napms(100)
# Wait for next input
k = stdscr.getch()
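        # stdscr.nodelay(True) above makes getch() non-blocking (it returns -1
        # when no key is pressed), so together with napms(100) the loop simply
        # redraws about ten times per second.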
def main():
curses.wrapper(main_menu)
if __name__ == "__main__":
main()
| 28.627451 | 90 | 0.564726 | 367 | 2,920 | 4.395095 | 0.332425 | 0.075016 | 0.046497 | 0.039678 | 0.225666 | 0.15871 | 0.121513 | 0.042157 | 0 | 0 | 0 | 0.024354 | 0.310959 | 2,920 | 101 | 91 | 28.910891 | 0.777336 | 0.121233 | 0 | 0.190476 | 0 | 0 | 0.078431 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.031746 | false | 0 | 0.047619 | 0 | 0.079365 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
693fc462e0aaf1cceaf2297cb92e001c5129520c | 2,458 | py | Python | nwbwidgets/utils/plotly.py | NeurodataWithoutBorders/nwb-jupyter-widgets | 0d11e5d7b193c53d744b13c6404186ac84f4a5c1 | [
"BSD-3-Clause-LBNL"
] | 35 | 2019-03-10T23:39:17.000Z | 2021-11-16T11:50:33.000Z | nwbwidgets/utils/plotly.py | catalystneuro/nwb-jupyter-widgets | 0d11e5d7b193c53d744b13c6404186ac84f4a5c1 | [
"BSD-3-Clause-LBNL"
] | 158 | 2019-03-12T21:40:24.000Z | 2022-03-16T14:35:55.000Z | nwbwidgets/utils/plotly.py | catalystneuro/nwb-jupyter-widgets | 0d11e5d7b193c53d744b13c6404186ac84f4a5c1 | [
"BSD-3-Clause-LBNL"
] | 20 | 2019-03-08T14:30:27.000Z | 2021-11-08T16:31:26.000Z | import plotly.graph_objects as go
import numpy as np
def multi_trace(x, y, color, label=None, fig=None, insert_nans=False):
"""Create multiple traces that are associated with a single legend label
Parameters
----------
x: array-like
y: array-like
color: str
label: str, optional
    fig: go.FigureWidget, optional
    insert_nans: bool, optional
        if True, concatenate the traces into a single scattergl trace with
        NaN separators instead of one trace per line

    Returns
    -------
    fig: go.FigureWidget
    """
if fig is None:
fig = go.FigureWidget()
if insert_nans:
y_nans = []
x_nans = []
        for xx, yy in zip(x, y):
            y_nans.append(np.append(yy, np.nan))
            x_nans.append(np.append(xx, np.nan))
        y_plot = np.concatenate(y_nans, axis=0)
        x_plot = np.concatenate(x_nans, axis=0)
fig.add_scattergl(
x=x_plot,
y=y_plot,
name=label,
line={"color": color},
)
return fig
else:
for i, yy in enumerate(y):
if label is not None and i:
showlegend = False
else:
showlegend = True
fig.add_scattergl(
x=x,
y=yy,
legendgroup=label,
name=label,
showlegend=showlegend,
line={"color": color},
)
return fig
def event_group(
times_list,
offset=0,
color="Black",
label=None,
fig=None,
marker=None,
line_width=None,
):
"""Create an event raster that are all associated with a single legend label
Parameters
----------
times_list: list of array-like
offset: float, optional
label: str, optional
    fig: go.FigureWidget, optional

    optional, passed to go.Scattergl.marker:
    marker: str
    line_width: float
    color: str
        default: "Black"

    Returns
    -------
    fig: go.FigureWidget
    """
if fig is None:
fig = go.FigureWidget()
if label is not None:
showlegend = True
else:
showlegend = False
for i, times in enumerate(times_list):
if len(times):
fig.add_scattergl(
x=times,
y=np.ones_like(times) * (i + offset),
marker=dict(
color=color, line_width=line_width, symbol=marker, line_color=color
),
legendgroup=str(label),
name=label,
showlegend=showlegend,
mode="markers",
)
showlegend = False
return fig
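

def _example_usage():
    """A minimal sketch (not part of the original module) combining the two
    helpers above; names and values are illustrative only."""
    x = np.arange(100)
    ys = [np.sin(x / 10.0) + i for i in range(3)]
    # three line traces sharing one legend entry
    fig = multi_trace(x, ys, color="blue", label="lfp")
    # a raster of three event trains sharing another legend entry, offset above
    spikes = [np.sort(np.random.rand(20)) * 100 for _ in range(3)]
    return event_group(spikes, offset=4, color="Black", label="units", fig=fig)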
| 22.345455 | 87 | 0.513832 | 284 | 2,458 | 4.352113 | 0.302817 | 0.022654 | 0.055016 | 0.038835 | 0.325243 | 0.18123 | 0.127832 | 0.059871 | 0.059871 | 0 | 0 | 0.001989 | 0.386493 | 2,458 | 109 | 88 | 22.550459 | 0.817639 | 0.203417 | 0 | 0.384615 | 0 | 0 | 0.011911 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.030769 | false | 0 | 0.030769 | 0 | 0.107692 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
69417b7d7a3a4bf350d3fdf5bf9bebda6e608488 | 5,997 | py | Python | django_python3_saml/saml_settings.py | IronCountySchoolDistrict/django-python3-saml | 06d6198ed6c2b9ebfbfe4d6782715d91b6a468d8 | [
"BSD-3-Clause"
] | 6 | 2018-04-16T16:38:59.000Z | 2022-02-10T09:02:11.000Z | django_python3_saml/saml_settings.py | IronCountySchoolDistrict/django-python3-saml | 06d6198ed6c2b9ebfbfe4d6782715d91b6a468d8 | [
"BSD-3-Clause"
] | 1 | 2018-10-18T20:59:11.000Z | 2018-10-19T13:42:43.000Z | django_python3_saml/saml_settings.py | IronCountySchoolDistrict/django-python3-saml | 06d6198ed6c2b9ebfbfe4d6782715d91b6a468d8 | [
"BSD-3-Clause"
] | 6 | 2018-04-16T17:06:12.000Z | 2020-05-06T11:32:39.000Z | from django.conf import settings
class SAMLServiceProviderSettings(object):
contact_info = {
        # Contact information template; it is recommended to supply
        # technical and support contacts.
"technical": {
"givenName": settings.SAML['CONTACT_INFO']['TECHNICAL']['GIVEN_NAME'],
"emailAddress": settings.SAML['CONTACT_INFO']['TECHNICAL']['EMAIL'],
},
"support": {
"givenName": settings.SAML['CONTACT_INFO']['SUPPORT']['GIVEN_NAME'],
"emailAddress": settings.SAML['CONTACT_INFO']['SUPPORT']['EMAIL'],
}
}
organization_info = {
# Organization information template, the info in en_US lang is
# recommended, add more if required.
"en-US": {
"name": settings.SAML['ORGANIZATION_INFO']['EN_US']['NAME'],
"displayname": settings.SAML['ORGANIZATION_INFO']['EN_US']['DISPLAY_NAME'],
"url": settings.SAML['ORGANIZATION_INFO']['EN_US']['URL'],
}
}
def __init__(self,
debug=False,
strict=True,
sp_metadata_url=None, sp_login_url=None, sp_logout_url=None, sp_x509cert=None, sp_private_key=None, # Service provider settings (e.g. us)
idp_metadata_url=None, idp_sso_url=None, idp_slo_url=None, idp_x509cert=None, idp_x509_fingerprint=None, # Identify provider settings (e.g. onelogin)
):
super(SAMLServiceProviderSettings, self).__init__()
        self.settings = {
# If strict is True, then the Python Toolkit will reject unsigned
# or unencrypted messages if it expects them to be signed or encrypted.
# Also it will reject the messages if the SAML standard is not strictly
# followed. Destination, NameId, Conditions ... are validated too.
"strict": strict,
# Enable debug mode (outputs errors).
"debug": debug,
# Service Provider Data that we are deploying.
"sp": {
# Identifier of the SP entity (must be a URI)
"entityId": sp_metadata_url,
# Specifies info about where and how the <AuthnResponse> message MUST be
# returned to the requester, in this case our SP.
"assertionConsumerService": {
# URL Location where the <Response> from the IdP will be returned
"url": sp_login_url,
# SAML protocol binding to be used when returning the <Response>
# message. OneLogin Toolkit supports this endpoint for the
# HTTP-POST binding only.
"binding": "urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"
},
# Specifies info about where and how the <Logout Response> message MUST be
# returned to the requester, in this case our SP.
"singleLogoutService": {
# URL Location where the <Response> from the IdP will be returned
"url": sp_logout_url,
# SAML protocol binding to be used when returning the <Response>
# message. OneLogin Toolkit supports the HTTP-Redirect binding
# only for this endpoint.
"binding": "urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"
},
# Specifies the constraints on the name identifier to be used to
# represent the requested subject.
            # Take a look at src/onelogin/saml2/constants.py to see the NameIdFormats that are supported.
"NameIDFormat": "urn:oasis:names:tc:SAML:2.0:nameid-format:unspecified",
# Usually x509cert and privateKey of the SP are provided by files placed at
# the certs folder. But we can also provide them with the following parameters
'x509cert': sp_x509cert,
'privateKey': sp_private_key
},
# Identity Provider Data that we want connected with our SP.
"idp": {
# Identifier of the IdP entity (must be a URI)
"entityId": idp_metadata_url,
# SSO endpoint info of the IdP. (Authentication Request protocol)
"singleSignOnService": {
# URL Target of the IdP where the Authentication Request Message
# will be sent.
"url": idp_sso_url,
# SAML protocol binding to be used when returning the <Response>
# message. OneLogin Toolkit supports the HTTP-Redirect binding
# only for this endpoint.
"binding": "urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"
},
# SLO endpoint info of the IdP.
"singleLogoutService": {
# URL Location of the IdP where SLO Request will be sent.
"url": idp_slo_url,
# SAML protocol binding to be used when returning the <Response>
# message. OneLogin Toolkit supports the HTTP-Redirect binding
# only for this endpoint.
"binding": "urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"
},
# Public x509 certificate of the IdP
"x509cert": idp_x509cert,
# Instead of use the whole x509cert you can use a fingerprint
# (openssl x509 -noout -fingerprint -in "idp.crt" to generate it)
"certFingerprint": idp_x509_fingerprint
},
"organization": self.organization_info,
'contactPerson': self.contact_info,
}
if not idp_x509cert:
del self.settings['idp']['x509cert']
if not idp_x509_fingerprint:
del self.settings['idp']['certFingerprint']
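
# A minimal usage sketch (hypothetical endpoints and keys; not part of the
# original module). The resulting ``settings`` dict has the structure the
# python3-saml toolkit consumes:
#
#   sp = SAMLServiceProviderSettings(
#       debug=True,
#       sp_metadata_url='https://sp.example.com/saml/metadata/',
#       sp_login_url='https://sp.example.com/saml/acs/',
#       sp_logout_url='https://sp.example.com/saml/sls/',
#       sp_x509cert='MII...',
#       sp_private_key='MII...',
#       idp_metadata_url='https://idp.example.com/saml/metadata',
#       idp_sso_url='https://idp.example.com/sso',
#       idp_slo_url='https://idp.example.com/slo',
#       idp_x509cert='MII...',
#   )
#   saml_settings = sp.settings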
| 51.698276 | 167 | 0.571786 | 651 | 5,997 | 5.165899 | 0.298003 | 0.011894 | 0.014273 | 0.022302 | 0.398454 | 0.348498 | 0.305679 | 0.254237 | 0.254237 | 0.254237 | 0 | 0.01426 | 0.345173 | 5,997 | 115 | 168 | 52.147826 | 0.842119 | 0.408371 | 0 | 0.079365 | 0 | 0 | 0.224222 | 0.077978 | 0 | 0 | 0 | 0 | 0.015873 | 1 | 0.015873 | false | 0 | 0.015873 | 0 | 0.079365 | 0.063492 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
6943f352d6732b6ea4e8c626dd8012e42b34ad09 | 25,972 | py | Python | heat/engine/parser.py | citrix-openstack-build/heat | fa31873529481472e037e3ce157b87f8057fe622 | [
"Apache-2.0"
] | null | null | null | heat/engine/parser.py | citrix-openstack-build/heat | fa31873529481472e037e3ce157b87f8057fe622 | [
"Apache-2.0"
] | null | null | null | heat/engine/parser.py | citrix-openstack-build/heat | fa31873529481472e037e3ce157b87f8057fe622 | [
"Apache-2.0"
] | null | null | null | # vim: tabstop=4 shiftwidth=4 softtabstop=4
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import functools
import re
from oslo.config import cfg
from heat.engine import environment
from heat.common import exception
from heat.engine import dependencies
from heat.common import identifier
from heat.engine import resource
from heat.engine import resources
from heat.engine import scheduler
from heat.engine import template
from heat.engine import timestamp
from heat.engine import update
from heat.engine.parameters import Parameters
from heat.engine.template import Template
from heat.engine.clients import Clients
from heat.db import api as db_api
from heat.openstack.common import log as logging
from heat.openstack.common.gettextutils import _
from heat.common.exception import StackValidationFailed
logger = logging.getLogger(__name__)
(PARAM_STACK_NAME, PARAM_REGION) = ('AWS::StackName', 'AWS::Region')
class Stack(object):
ACTIONS = (CREATE, DELETE, UPDATE, ROLLBACK, SUSPEND, RESUME
) = ('CREATE', 'DELETE', 'UPDATE', 'ROLLBACK', 'SUSPEND',
'RESUME')
STATUSES = (IN_PROGRESS, FAILED, COMPLETE
) = ('IN_PROGRESS', 'FAILED', 'COMPLETE')
created_time = timestamp.Timestamp(functools.partial(db_api.stack_get,
show_deleted=True),
'created_at')
updated_time = timestamp.Timestamp(functools.partial(db_api.stack_get,
show_deleted=True),
'updated_at')
_zones = None
def __init__(self, context, stack_name, tmpl, env=None,
stack_id=None, action=None, status=None,
status_reason='', timeout_mins=60, resolve_data=True,
disable_rollback=True, parent_resource=None, owner_id=None):
'''
Initialise from a context, name, Template object and (optionally)
Environment object. The database ID may also be initialised, if the
stack is already in the database.
'''
if owner_id is None:
if re.match("[a-zA-Z][a-zA-Z0-9_.-]*$", stack_name) is None:
raise ValueError(_('Invalid stack name %s'
' must contain only alphanumeric or '
'\"_-.\" characters, must start with alpha'
) % stack_name)
self.id = stack_id
self.owner_id = owner_id
self.context = context
self.clients = Clients(context)
self.t = tmpl
self.name = stack_name
self.action = action
self.status = status
self.status_reason = status_reason
self.timeout_mins = timeout_mins
self.disable_rollback = disable_rollback
self.parent_resource = parent_resource
self._resources = None
self._dependencies = None
resources.initialise()
self.env = env or environment.Environment({})
self.parameters = Parameters(self.name, self.t,
user_params=self.env.params)
self._set_param_stackid()
if resolve_data:
self.outputs = self.resolve_static_data(self.t[template.OUTPUTS])
else:
self.outputs = {}
@property
def resources(self):
if self._resources is None:
template_resources = self.t[template.RESOURCES]
self._resources = dict((name, resource.Resource(name, data, self))
for (name, data) in
template_resources.items())
return self._resources
@property
def dependencies(self):
if self._dependencies is None:
self._dependencies = self._get_dependencies(
self.resources.itervalues())
return self._dependencies
def reset_dependencies(self):
self._dependencies = None
@property
def root_stack(self):
'''
Return the root stack if this is nested (otherwise return self).
'''
if (self.parent_resource and self.parent_resource.stack):
return self.parent_resource.stack.root_stack
return self
def total_resources(self):
'''
Total number of resources in a stack, including nested stacks below.
'''
total = 0
for res in iter(self.resources.values()):
if hasattr(res, 'nested') and res.nested():
total += res.nested().total_resources()
total += 1
return total
def _set_param_stackid(self):
'''
Update self.parameters with the current ARN which is then provided
via the Parameters class as the AWS::StackId pseudo parameter
'''
# This can fail if constructor called without a valid context,
# as it is in many tests
try:
stack_arn = self.identifier().arn()
except (AttributeError, ValueError, TypeError):
logger.warning("Unable to set parameters StackId identifier")
else:
self.parameters.set_stack_id(stack_arn)
@staticmethod
def _get_dependencies(resources):
'''Return the dependency graph for a list of resources.'''
deps = dependencies.Dependencies()
        for res in resources:  # avoid shadowing the imported 'resource' module
            res.add_dependencies(deps)
return deps
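    # The dependency graph built here drives both traversal orders below:
    # __iter__ yields resources in the order they should be started and
    # __reversed__ in the order they should be stopped.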
@classmethod
def load(cls, context, stack_id=None, stack=None, resolve_data=True,
parent_resource=None, show_deleted=True):
'''Retrieve a Stack from the database.'''
if stack is None:
stack = db_api.stack_get(context, stack_id,
show_deleted=show_deleted)
if stack is None:
message = 'No stack exists with id "%s"' % str(stack_id)
raise exception.NotFound(message)
template = Template.load(context, stack.raw_template_id)
env = environment.Environment(stack.parameters)
stack = cls(context, stack.name, template, env,
stack.id, stack.action, stack.status, stack.status_reason,
stack.timeout, resolve_data, stack.disable_rollback,
parent_resource, owner_id=stack.owner_id)
return stack
def store(self, backup=False):
'''
Store the stack in the database and return its ID
If self.id is set, we update the existing stack
'''
s = {
'name': self._backup_name() if backup else self.name,
'raw_template_id': self.t.store(self.context),
'parameters': self.env.user_env_as_dict(),
'owner_id': self.owner_id,
'username': self.context.username,
'tenant': self.context.tenant_id,
'action': self.action,
'status': self.status,
'status_reason': self.status_reason,
'timeout': self.timeout_mins,
'disable_rollback': self.disable_rollback,
}
if self.id:
db_api.stack_update(self.context, self.id, s)
else:
# Create a context containing a trust_id and trustor_user_id
# if trusts are enabled
if cfg.CONF.deferred_auth_method == 'trusts':
trust_context = self.clients.keystone().create_trust_context()
new_creds = db_api.user_creds_create(trust_context)
else:
new_creds = db_api.user_creds_create(self.context)
s['user_creds_id'] = new_creds.id
new_s = db_api.stack_create(self.context, s)
self.id = new_s.id
self._set_param_stackid()
return self.id
def _backup_name(self):
return '%s*' % self.name
def identifier(self):
'''
Return an identifier for this stack.
'''
return identifier.HeatIdentifier(self.context.tenant_id,
self.name, self.id)
def __iter__(self):
'''
Return an iterator over this template's resources in the order that
they should be started.
'''
return iter(self.dependencies)
def __reversed__(self):
'''
Return an iterator over this template's resources in the order that
they should be stopped.
'''
return reversed(self.dependencies)
def __len__(self):
'''Return the number of resources.'''
return len(self.resources)
def __getitem__(self, key):
'''Get the resource with the specified name.'''
return self.resources[key]
def __setitem__(self, key, value):
'''Set the resource with the specified name to a specific value.'''
self.resources[key] = value
def __contains__(self, key):
'''Determine whether the stack contains the specified resource.'''
return key in self.resources
def keys(self):
'''Return a list of resource keys for the stack.'''
return self.resources.keys()
def __str__(self):
'''Return a human-readable string representation of the stack.'''
return 'Stack "%s"' % self.name
def resource_by_refid(self, refid):
'''
Return the resource in this stack with the specified
refid, or None if not found
'''
for r in self.resources.values():
if r.state in (
(r.CREATE, r.IN_PROGRESS),
(r.CREATE, r.COMPLETE),
(r.RESUME, r.IN_PROGRESS),
(r.RESUME, r.COMPLETE),
(r.UPDATE, r.IN_PROGRESS),
(r.UPDATE, r.COMPLETE)) and r.FnGetRefId() == refid:
return r
def validate(self):
'''
http://docs.amazonwebservices.com/AWSCloudFormation/latest/\
APIReference/API_ValidateTemplate.html
'''
# TODO(sdake) Should return line number of invalid reference
# Check duplicate names between parameters and resources
dup_names = set(self.parameters.keys()) & set(self.resources.keys())
if dup_names:
logger.debug("Duplicate names %s" % dup_names)
raise StackValidationFailed(message="Duplicate names %s" %
dup_names)
for res in self:
try:
result = res.validate()
except exception.Error as ex:
logger.exception(ex)
raise ex
except Exception as ex:
logger.exception(ex)
raise StackValidationFailed(message=str(ex))
if result:
raise StackValidationFailed(message=result)
def requires_deferred_auth(self):
'''
Returns whether this stack may need to perform API requests
during its lifecycle using the configured deferred authentication
method.
'''
return any(res.requires_deferred_auth for res in self)
def state_set(self, action, status, reason):
'''Update the stack state in the database.'''
if action not in self.ACTIONS:
raise ValueError("Invalid action %s" % action)
if status not in self.STATUSES:
raise ValueError("Invalid status %s" % status)
self.action = action
self.status = status
self.status_reason = reason
if self.id is None:
return
stack = db_api.stack_get(self.context, self.id)
stack.update_and_save({'action': action,
'status': status,
'status_reason': reason})
@property
def state(self):
'''Returns state, tuple of action, status.'''
return (self.action, self.status)
def timeout_secs(self):
'''
Return the stack creation timeout in seconds, or None if no timeout
should be used.
'''
if self.timeout_mins is None:
return None
return self.timeout_mins * 60
def create(self):
'''
Create the stack and all of the resources.
'''
def rollback():
if not self.disable_rollback and self.state == (self.CREATE,
self.FAILED):
self.delete(action=self.ROLLBACK)
creator = scheduler.TaskRunner(self.stack_task,
action=self.CREATE,
reverse=False,
post_func=rollback)
creator(timeout=self.timeout_secs())
@scheduler.wrappertask
def stack_task(self, action, reverse=False, post_func=None):
'''
A task to perform an action on the stack and all of the resources
        in forward or reverse dependency order, as specified by reverse.
'''
self.state_set(action, self.IN_PROGRESS,
'Stack %s started' % action)
stack_status = self.COMPLETE
reason = 'Stack %s completed successfully' % action.lower()
res = None
def resource_action(r):
            # Find e.g. resource.create and call it
            action_l = action.lower()
            handle = getattr(r, action_l)
return handle()
action_task = scheduler.DependencyTaskGroup(self.dependencies,
resource_action,
reverse)
try:
yield action_task()
except exception.ResourceFailure as ex:
stack_status = self.FAILED
reason = 'Resource %s failed: %s' % (action.lower(), str(ex))
except scheduler.Timeout:
stack_status = self.FAILED
reason = '%s timed out' % action.title()
self.state_set(action, stack_status, reason)
if callable(post_func):
post_func()
def _backup_stack(self, create_if_missing=True):
'''
Get a Stack containing any in-progress resources from the previous
stack state prior to an update.
'''
s = db_api.stack_get_by_name(self.context, self._backup_name(),
owner_id=self.id)
if s is not None:
logger.debug('Loaded existing backup stack')
return self.load(self.context, stack=s)
elif create_if_missing:
prev = type(self)(self.context, self.name, self.t, self.env,
owner_id=self.id)
prev.store(backup=True)
logger.debug('Created new backup stack')
return prev
else:
return None
def update(self, newstack):
'''
Compare the current stack with newstack,
and where necessary create/update/delete the resources until
this stack aligns with newstack.
        Note that update of existing stack resources depends on update
        being implemented in the underlying resource types.
        Update will fail if it exceeds the specified timeout. The default is
        60 minutes, set in the constructor.
'''
updater = scheduler.TaskRunner(self.update_task, newstack)
updater()
@scheduler.wrappertask
def update_task(self, newstack, action=UPDATE):
if action not in (self.UPDATE, self.ROLLBACK):
logger.error("Unexpected action %s passed to update!" % action)
self.state_set(self.UPDATE, self.FAILED,
"Invalid action %s" % action)
return
if self.status != self.COMPLETE:
if (action == self.ROLLBACK and
self.state == (self.UPDATE, self.IN_PROGRESS)):
logger.debug("Starting update rollback for %s" % self.name)
else:
self.state_set(action, self.FAILED,
'State invalid for %s' % action)
return
self.state_set(self.UPDATE, self.IN_PROGRESS,
'Stack %s started' % action)
oldstack = Stack(self.context, self.name, self.t, self.env)
backup_stack = self._backup_stack()
try:
update_task = update.StackUpdate(self, newstack, backup_stack,
rollback=action == self.ROLLBACK)
updater = scheduler.TaskRunner(update_task)
self.env = newstack.env
self.parameters = newstack.parameters
try:
updater.start(timeout=self.timeout_secs())
yield
while not updater.step():
yield
finally:
self.reset_dependencies()
if action == self.UPDATE:
reason = 'Stack successfully updated'
else:
reason = 'Stack rollback completed'
stack_status = self.COMPLETE
except scheduler.Timeout:
stack_status = self.FAILED
reason = 'Timed out'
except exception.ResourceFailure as e:
reason = str(e)
stack_status = self.FAILED
if action == self.UPDATE:
# If rollback is enabled, we do another update, with the
# existing template, so we roll back to the original state
if not self.disable_rollback:
yield self.update_task(oldstack, action=self.ROLLBACK)
return
else:
logger.debug('Deleting backup stack')
backup_stack.delete()
self.state_set(action, stack_status, reason)
# flip the template to the newstack values
# Note we do this on success and failure, so the current
# stack resources are stored, even if one is in a failed
# state (otherwise we won't remove them on delete)
self.t = newstack.t
template_outputs = self.t[template.OUTPUTS]
self.outputs = self.resolve_static_data(template_outputs)
self.store()
def delete(self, action=DELETE):
'''
Delete all of the resources, and then the stack itself.
The action parameter is used to differentiate between a user
initiated delete and an automatic stack rollback after a failed
create, which amount to the same thing, but the states are recorded
differently.
'''
if action not in (self.DELETE, self.ROLLBACK):
logger.error("Unexpected action %s passed to delete!" % action)
self.state_set(self.DELETE, self.FAILED,
"Invalid action %s" % action)
return
stack_status = self.COMPLETE
reason = 'Stack %s completed successfully' % action.lower()
self.state_set(action, self.IN_PROGRESS, 'Stack %s started' % action)
backup_stack = self._backup_stack(False)
if backup_stack is not None:
backup_stack.delete()
if backup_stack.status != backup_stack.COMPLETE:
errs = backup_stack.status_reason
failure = 'Error deleting backup resources: %s' % errs
self.state_set(action, self.FAILED,
'Failed to %s : %s' % (action, failure))
return
action_task = scheduler.DependencyTaskGroup(self.dependencies,
resource.Resource.destroy,
reverse=True)
try:
scheduler.TaskRunner(action_task)(timeout=self.timeout_secs())
except exception.ResourceFailure as ex:
stack_status = self.FAILED
reason = 'Resource %s failed: %s' % (action.lower(), str(ex))
except scheduler.Timeout:
stack_status = self.FAILED
reason = '%s timed out' % action.title()
self.state_set(action, stack_status, reason)
if stack_status != self.FAILED:
# If we created a trust, delete it
stack = db_api.stack_get(self.context, self.id)
user_creds = db_api.user_creds_get(stack.user_creds_id)
trust_id = user_creds.get('trust_id')
if trust_id:
self.clients.keystone().delete_trust(trust_id)
# delete the stack
db_api.stack_delete(self.context, self.id)
self.id = None
def suspend(self):
'''
        Suspend the stack, which invokes handle_suspend for all stack resources,
        waits for all resources to become SUSPEND_COMPLETE, then declares the
stack SUSPEND_COMPLETE.
Note the default implementation for all resources is to do nothing
other than move to SUSPEND_COMPLETE, so the resources must implement
handle_suspend for this to have any effect.
'''
sus_task = scheduler.TaskRunner(self.stack_task,
action=self.SUSPEND,
reverse=True)
sus_task(timeout=self.timeout_secs())
def resume(self):
'''
        Resume the stack, which invokes handle_resume for all stack resources,
        waits for all resources to become RESUME_COMPLETE, then declares the
stack RESUME_COMPLETE.
Note the default implementation for all resources is to do nothing
other than move to RESUME_COMPLETE, so the resources must implement
handle_resume for this to have any effect.
'''
        resume_task = scheduler.TaskRunner(self.stack_task,
                                           action=self.RESUME,
                                           reverse=False)
        resume_task(timeout=self.timeout_secs())
def output(self, key):
'''
Get the value of the specified stack output.
'''
value = self.outputs[key].get('Value', '')
return self.resolve_runtime_data(value)
def restart_resource(self, resource_name):
'''
stop resource_name and all that depend on it
start resource_name and all that depend on it
'''
deps = self.dependencies[self[resource_name]]
failed = False
for res in reversed(deps):
try:
scheduler.TaskRunner(res.destroy)()
except exception.ResourceFailure as ex:
failed = True
logger.error('delete: %s' % str(ex))
for res in deps:
if not failed:
try:
res.state_reset()
scheduler.TaskRunner(res.create)()
except exception.ResourceFailure as ex:
logger.exception('create')
failed = True
else:
res.state_set(res.CREATE, res.FAILED,
'Resource restart aborted')
        # TODO(asalkeld) if any of this fails we should
# restart the whole stack
def get_availability_zones(self):
if self._zones is None:
self._zones = [
zone.zoneName for zone in
self.clients.nova().availability_zones.list(detailed=False)]
return self._zones
def resolve_static_data(self, snippet):
return resolve_static_data(self.t, self, self.parameters, snippet)
def resolve_runtime_data(self, snippet):
return resolve_runtime_data(self.t, self.resources, snippet)
def resolve_static_data(template, stack, parameters, snippet):
'''
Resolve static parameters, map lookups, etc. in a template.
Example:
>>> from heat.common import template_format
>>> template_str = '# JSON or YAML encoded template'
>>> template = Template(template_format.parse(template_str))
>>> parameters = Parameters('stack', template, {'KeyName': 'my_key'})
>>> resolve_static_data(template, None, parameters, {'Ref': 'KeyName'})
'my_key'
'''
return transform(snippet,
[functools.partial(template.resolve_param_refs,
parameters=parameters),
functools.partial(template.resolve_availability_zones,
stack=stack),
functools.partial(template.resolve_resource_facade,
stack=stack),
template.resolve_find_in_map,
template.reduce_joins])
def resolve_runtime_data(template, resources, snippet):
return transform(snippet,
[functools.partial(template.resolve_resource_refs,
resources=resources),
functools.partial(template.resolve_attributes,
resources=resources),
template.resolve_split,
template.resolve_member_list_to_map,
template.resolve_select,
template.resolve_joins,
template.resolve_replace,
template.resolve_base64])
def transform(data, transformations):
'''
Apply each of the transformation functions in the supplied list to the data
in turn.
'''
for t in transformations:
data = t(data)
return data
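# A minimal usage sketch of transform() with plain functions standing in for
# the resolver partials above (the lambdas are illustrative only, not Heat
# APIs):
#
#     >>> transform({'Ref': 'KeyName'}, [lambda d: dict(d, Seen=True),
#     ...                                lambda d: sorted(d.keys())])
#     ['Ref', 'Seen']
#
# Each transformation receives the output of the previous one, which is exactly
# how resolve_static_data() and resolve_runtime_data() chain their resolvers.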
| 37.262554 | 79 | 0.578354 | 2,867 | 25,972 | 5.106034 | 0.161842 | 0.012979 | 0.01052 | 0.01093 | 0.240317 | 0.189152 | 0.164834 | 0.119066 | 0.107111 | 0.089214 | 0 | 0.001111 | 0.341522 | 25,972 | 696 | 80 | 37.316092 | 0.854871 | 0.197713 | 0 | 0.23341 | 0 | 0 | 0.0557 | 0.001203 | 0 | 0 | 0 | 0.002874 | 0 | 1 | 0.100687 | false | 0.004577 | 0.045767 | 0.009153 | 0.249428 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
69453942628ce1c37639781c43bed0432a313dc3 | 1,547 | py | Python | src/puzzle_1_you_will_all_conform/my/please_conform_squared.py | foryourselfand/mit_6_S095_programming_for_the_puzzled | 88371bd8461709011acbed6066ac4f40c5cde29e | ["MIT"] | null | null | null | src/puzzle_1_you_will_all_conform/my/please_conform_squared.py | foryourselfand/mit_6_S095_programming_for_the_puzzled | 88371bd8461709011acbed6066ac4f40c5cde29e | ["MIT"] | null | null | null | src/puzzle_1_you_will_all_conform/my/please_conform_squared.py | foryourselfand/mit_6_S095_programming_for_the_puzzled | 88371bd8461709011acbed6066ac4f40c5cde29e | ["MIT"] | null | null | null
from typing import List
from please_conform import PleaseConform
from structures import Interval
class PleaseConformSquared(PleaseConform):
def please_conform(self, caps: List[str]) -> List[Interval]:
if len(caps) == 0:
return list()
caps: List[str] = caps.copy()
caps.append('end')
interval_inputs: List[Interval] = list()
count_forward: int = 0
count_backward: int = 0
index_previous: int = 0
for index_current in range(1, len(caps)):
cap_current = caps[index_current]
cap_previous = caps[index_previous]
if cap_current != cap_previous:
interval_input = Interval(start=index_previous,
end=index_current - 1,
cap_type=cap_previous)
interval_inputs.append(interval_input)
if cap_previous == 'F':
count_forward += 1
else:
count_backward += 1
index_previous = index_current
cap_to_flip: str
if count_forward < count_backward:
cap_to_flip = 'F'
else:
cap_to_flip = 'B'
interval_results: List[Interval] = list()
for interval_input in interval_inputs:
if interval_input.cap_type == cap_to_flip:
interval_result: Interval = interval_input
interval_results.append(interval_result)
return interval_results
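# Worked example of the interval logic above (Interval fields as imported from
# structures: start, end, cap_type):
#   caps = ['F', 'F', 'B', 'F'] produces the runs (0, 1, 'F'), (2, 2, 'B') and
#   (3, 3, 'F'); two 'F' runs versus one 'B' run, so the minority cap 'B' is
#   flipped and please_conform() returns the single interval (2, 2, 'B').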
| 30.333333 | 64 | 0.564318 | 163 | 1,547 | 5.079755 | 0.276074 | 0.078502 | 0.043478 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008147 | 0.365223 | 1,547 | 50 | 65 | 30.94 | 0.835031 | 0 | 0 | 0.054054 | 0 | 0 | 0.003878 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027027 | false | 0 | 0.081081 | 0 | 0.189189 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
694611c01663b1d27e4ebc26f84bb2603c45ff7c | 5,784 | py | Python | apps/almoxarifado/apps/cont/forms.py | mequetrefe-do-subtroco/web_constel | 57b5626fb17b4fefc740cbe1ac95fd4ab90147bc | ["MIT"] | 1 | 2020-06-18T09:03:53.000Z | 2020-06-18T09:03:53.000Z | apps/almoxarifado/apps/cont/forms.py | gabrielhjs/web_constel | 57b5626fb17b4fefc740cbe1ac95fd4ab90147bc | ["MIT"] | 33 | 2020-06-16T18:59:33.000Z | 2021-08-12T21:33:17.000Z | apps/almoxarifado/apps/cont/forms.py | gabrielhjs/web_constel | 57b5626fb17b4fefc740cbe1ac95fd4ab90147bc | ["MIT"] | null | null | null
from django import forms
from .models import *
class FormCadastraModelo(forms.ModelForm):
class Meta:
model = Modelo
fields = ['nome', 'descricao', ]
def __init__(self, *args, **kwargs):
super(FormCadastraModelo, self).__init__(*args, **kwargs)
for key in self.fields.keys():
self.fields[key].widget.attrs.update({'class': 'form-control'})
class FormCadastraSecao(forms.ModelForm):
class Meta:
model = Secao
fields = ['nome', 'descricao', ]
def __init__(self, *args, **kwargs):
super(FormCadastraSecao, self).__init__(*args, **kwargs)
for key in self.fields.keys():
self.fields[key].widget.attrs.update({'class': 'form-control'})
class FormEntradaOnt1(forms.Form):
modelo = forms.ChoiceField()
secao = forms.ChoiceField()
def __init__(self, *args, **kwargs):
super(FormEntradaOnt1, self).__init__(*args, **kwargs)
modelos = Modelo.objects.all().order_by('nome')
modelos_name = [(i.id, i.nome.upper()) for i in modelos]
self.fields['modelo'] = forms.ChoiceField(
choices=modelos_name,
label='Modelo',
help_text='Modelo das ONT\'s a serem inseridas',
)
secoes = Secao.objects.all().order_by('nome')
secoes_name = [(i.id, i.nome.upper()) for i in secoes]
self.fields['secao'] = forms.ChoiceField(
choices=secoes_name,
label='Seção',
help_text='Atividade de destino das ONT\'s a serem inseridas',
)
for key in self.fields.keys():
self.fields[key].widget.attrs.update({'class': 'form-control'})
class NonstickyCharfield(forms.TextInput):
"""Custom text input widget that's "non-sticky"
(i.e. does not remember submitted values).
"""
def get_context(self, name, value, attrs):
value = None # Clear the submitted value.
return super().get_context(name, value, attrs)
class FormEntradaOnt2(forms.Form):
serial = forms.CharField(required=True, widget=NonstickyCharfield())
def __init__(self, *args, **kwargs):
super(FormEntradaOnt2, self).__init__(*args, **kwargs)
self.fields['serial'].widget.attrs.update(
{'autofocus': 'autofocus', 'required': 'required'}
)
for key in self.fields.keys():
self.fields[key].widget.attrs.update({'class': 'form-control'})
def clean(self):
form_data = super().clean()
serial = form_data['serial'].upper()
if serial.find('4857544', 0, 7) >= 0:
if len(serial) != 16:
self.errors['serial'] = ['Serial de Ont Huawei inválido']
return form_data
elif serial.find('ZNTS', 0, 5) >= 0:
if len(serial) != 12:
self.errors['serial'] = ['Serial de Ont Zhone inválido']
return form_data
else:
self.errors['serial'] = ['Serial de Ont inválido']
return form_data
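# A standalone sketch of the serial rules enforced by FormEntradaOnt2.clean()
# above, handy for unit tests outside a form. The helper name is hypothetical;
# the prefixes and lengths mirror the checks above exactly:
def _is_valid_ont_serial(serial):
    serial = serial.upper()
    if serial.find('4857544', 0, 7) >= 0:  # Huawei ONT: 16-character serial
        return len(serial) == 16
    if serial.find('ZNTS', 0, 5) >= 0:     # Zhone ONT: 12-character serial
        return len(serial) == 12
    return False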
class FormOntFechamento(forms.Form):
serial = forms.CharField(required=True, widget=NonstickyCharfield())
def __init__(self, *args, **kwargs):
super(FormOntFechamento, self).__init__(*args, **kwargs)
self.fields['serial'].widget.attrs.update(
{'autofocus': 'autofocus', 'required': 'required'}
)
for key in self.fields.keys():
self.fields[key].widget.attrs.update({'class': 'form-control'})
def clean(self):
form_data = super().clean()
serial = form_data['serial'].upper()
if Ont.objects.filter(codigo=serial).exists():
form_data['serial'] = Ont.objects.get(codigo=serial)
else:
self.errors['serial'] = ['Ont não cadastrada no sistema, cadastre-a para registrá-la como com defeito']
return form_data
class FormOntManutencao1(forms.Form):
modelo = forms.ChoiceField()
def __init__(self, *args, **kwargs):
super(FormOntManutencao1, self).__init__(*args, **kwargs)
modelos = Modelo.objects.all().order_by('nome')
modelos_name = [(i.id, i.nome.upper()) for i in modelos]
self.fields['modelo'] = forms.ChoiceField(
choices=modelos_name,
label='Modelo',
)
for key in self.fields.keys():
self.fields[key].widget.attrs.update({'class': 'form-control'})
class FormPswLogin(forms.Form):
"""
    User login form for psw
"""
username = forms.CharField(max_length=150, label='Chave da Copel')
password = forms.CharField(widget=forms.PasswordInput)
widgets = {
'password': forms.PasswordInput(),
}
def __init__(self, *args, **kwargs):
super(FormPswLogin, self).__init__(*args, **kwargs)
for key in self.fields.keys():
self.fields[key].widget.attrs.update({'class': 'form-control'})
class FormPswContrato(forms.Form):
"""
    Contract search form for psw
"""
contratos = forms.CharField(
label='Contratos',
widget=forms.TextInput(
attrs={'placeholder': 'Ex: 1234567,1234568, 1234569'}
)
)
def __init__(self, *args, **kwargs):
super(FormPswContrato, self).__init__(*args, **kwargs)
for key in self.fields.keys():
self.fields[key].widget.attrs.update({'class': 'form-control'})
class FormSerial(forms.Form):
"""
    Serial search form
"""
serial = forms.CharField(label='Serial', required=False)
def __init__(self, *args, **kwargs):
super(FormSerial, self).__init__(*args, **kwargs)
for key in self.fields.keys():
self.fields[key].widget.attrs.update({'class': 'form-control'})
| 28.492611 | 115 | 0.602006 | 651 | 5,784 | 5.202765 | 0.219662 | 0.067907 | 0.055211 | 0.039858 | 0.613227 | 0.576912 | 0.500443 | 0.500443 | 0.475642 | 0.442279 | 0 | 0.010862 | 0.251902 | 5,784 | 202 | 116 | 28.633663 | 0.771897 | 0.038382 | 0 | 0.5 | 0 | 0 | 0.114867 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.098361 | false | 0.016393 | 0.016393 | 0 | 0.336066 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
69469bc9f4f19c9f16e8cc58a6b94958c3abce9d | 1,980 | py | Python | DMD/pyDMD.py | yusovm/GEMSEC | d9abd43d27e05607e7b1ea8c99fcc736abd204fd | ["MIT"] | null | null | null | DMD/pyDMD.py | yusovm/GEMSEC | d9abd43d27e05607e7b1ea8c99fcc736abd204fd | ["MIT"] | null | null | null | DMD/pyDMD.py | yusovm/GEMSEC | d9abd43d27e05607e7b1ea8c99fcc736abd204fd | ["MIT"] | null | null | null
# -*- coding: utf-8 -*-
"""
Created on Mon Mar 2 16:43:45 2020
@author: micha
"""
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from MD_Analysis import Angle_Calc
from pydmd import DMD
pdb="pdbs/WT_295K_200ns_50ps_0_run.pdb"
#Extract phi, psi angles
AC=Angle_Calc(pdb)
Angle_DF=AC.get_phi_psi()
def cossin(data):
cols = data.columns
data = data.to_numpy()
coss = np.cos(data/180.*np.pi)
sins = np.sin(data/180.*np.pi)
res=pd.DataFrame()
for i in range(len(cols)):
res[cols[i]+"_cos"] = coss[:,i]
res[cols[i]+"_sin"] = sins[:,i]
return res
def halftime(data):
dropindex = [1+2*i for i in (range(int(data.shape[0]/2)))]
return data.drop(dropindex)
#half = halftime(Angle_DF)
angle_cossin = cossin(Angle_DF)
angle_cossin_full = angle_cossin.copy()
angle_cossin_full.drop(angle_cossin_full.tail(1).index,inplace=True)
f=angle_cossin_full.to_numpy()
dt=50*(10**-12)
xi=np.linspace(np.min(f),np.max(f),f.shape[0])
t=np.linspace(0,f.shape[0],f.shape[1])*dt #+200*10**-9
Xgrid,T=np.meshgrid(xi,t)
dmd = DMD(svd_rank=40)
dmd.fit(f.T)
xl=np.linspace(0,4000*dt,2000)
yl=range(40)
xlabel,ylabel=np.meshgrid(xl,yl)
#Actual
fig = plt.figure(figsize=(17,6))
plt.pcolor(xl, yl, f.real.T)
plt.yticks([])
plt.title('Actual Data')
plt.colorbar()
plt.show()
fig.savefig("PyDMD Actual Data.png")
#Reconstructed
fig2 = plt.figure(figsize=(17,6))
plt.pcolor(xl, yl, dmd.reconstructed_data.real)
plt.yticks([])
plt.title('Reconstructed Data')
plt.colorbar()
plt.show()
fig2.savefig("PyDMD Reconstructed Data.png")
#Error
fig3 = plt.figure(figsize=(17,6))
# sqrt(x)**2 is a no-op, so the original plotted the signed error while the
# title claimed RMSE; plot the element-wise error magnitude instead
plt.pcolor(xl, yl, np.abs(f.T - dmd.reconstructed_data))
plt.yticks([])
plt.title('Absolute Error')
plt.colorbar()
plt.show()
fig3.savefig("PyDMD Error.png")
#Eigenvalues
dmd.plot_eigs(show_axes=True, show_unit_circle=True)
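# Optional sanity check on the fit: pydmd exposes the DMD eigenvalues through
# the `eigs` property; modes with |lambda| > 1 grow along the trajectory while
# |lambda| < 1 modes decay.
growth = np.abs(dmd.eigs)
print("modes with |lambda| > 1:", int((growth > 1).sum()))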
| 21.758242 | 69 | 0.663636 | 332 | 1,980 | 3.855422 | 0.391566 | 0.051563 | 0.046875 | 0.042188 | 0.142188 | 0.075 | 0.075 | 0.075 | 0.075 | 0 | 0 | 0.0454 | 0.165657 | 1,980 | 90 | 70 | 22 | 0.729419 | 0.084848 | 0 | 0.157895 | 0 | 0 | 0.084557 | 0.019378 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035088 | false | 0 | 0.105263 | 0 | 0.175439 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
69496f7144482f3f124f969247fa77f335fc5db1 | 695 | py | Python | models/t_compensate_event_definition.py | THM-MA/XSDATA-waypoint | dd94442f9d6677c525bf3ebb03c15fec52fa1079 | ["MIT"] | null | null | null | models/t_compensate_event_definition.py | THM-MA/XSDATA-waypoint | dd94442f9d6677c525bf3ebb03c15fec52fa1079 | ["MIT"] | null | null | null | models/t_compensate_event_definition.py | THM-MA/XSDATA-waypoint | dd94442f9d6677c525bf3ebb03c15fec52fa1079 | ["MIT"] | null | null | null
from dataclasses import dataclass, field
from typing import Optional
from xml.etree.ElementTree import QName
from .t_event_definition import TEventDefinition
__NAMESPACE__ = "http://www.omg.org/spec/BPMN/20100524/MODEL"
@dataclass
class TCompensateEventDefinition(TEventDefinition):
class Meta:
name = "tCompensateEventDefinition"
wait_for_completion: Optional[bool] = field(
default=None,
metadata={
"name": "waitForCompletion",
"type": "Attribute",
}
)
activity_ref: Optional[QName] = field(
default=None,
metadata={
"name": "activityRef",
"type": "Attribute",
}
)
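# Example instantiation (a sketch only: it assumes the inherited
# TEventDefinition fields are all optional, as is typical for xsdata-generated
# models, and the QName value is purely illustrative):
#
# definition = TCompensateEventDefinition(
#     wait_for_completion=True,
#     activity_ref=QName("{http://www.omg.org/spec/BPMN/20100524/MODEL}task_1"),
# )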
| 24.821429 | 61 | 0.640288 | 63 | 695 | 6.920635 | 0.650794 | 0.055046 | 0.073395 | 0.110092 | 0.12844 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015534 | 0.258993 | 695 | 27 | 62 | 25.740741 | 0.831068 | 0 | 0 | 0.26087 | 0 | 0 | 0.188489 | 0.03741 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.173913 | 0 | 0.347826 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
694980cd79dd13058b0c41b67c32eb322a3674e4 | 822 | py | Python | plugins.py | ddfabbro/translatorbot | a14a442ec840d81e3d8bbc6faa15e52f68145655 | ["Unlicense"] | null | null | null | plugins.py | ddfabbro/translatorbot | a14a442ec840d81e3d8bbc6faa15e52f68145655 | ["Unlicense"] | null | null | null | plugins.py | ddfabbro/translatorbot | a14a442ec840d81e3d8bbc6faa15e52f68145655 | ["Unlicense"] | null | null | null
import html
from googletrans import Translator
from slackbot.bot import default_reply, respond_to, listen_to
translator = Translator()
def translate(message):
    msg_in = html.unescape(message.body["text"])
    if msg_in == "":
        return  # nothing to translate; the original fell through and hit an undefined 'text'
    if translator.detect(msg_in).lang == "en":
        text = translator.translate(msg_in, dest="ja").text
    else:
        text = translator.translate(msg_in, dest="en").text
    msg_out = "```{}```".format(text)
    if message.thread_ts == message.body["event_ts"]:
        message.send(msg_out)
    else:
        message.reply(msg_out)
@default_reply
def my_default_handler(message):
translate(message)
@respond_to(".*")
def all_replies(message):
translate(message)
@listen_to(".*")
def all_messages(message):
translate(message)
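# Behaviour sketch of the three handlers above (all route to translate()):
#     translator.detect("hello").lang      -> "en", so the text goes to dest="ja"
#     translator.detect("こんにちは").lang -> "ja" (non-English), so dest="en"
# The translation is echoed back wrapped in ``` fences, sent in-thread when the
# triggering message was itself a thread reply, otherwise as a normal reply.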
| 24.176471 | 65 | 0.647202 | 101 | 822 | 5.069307 | 0.376238 | 0.048828 | 0.134766 | 0.101563 | 0.125 | 0.125 | 0 | 0 | 0 | 0 | 0 | 0 | 0.217762 | 822 | 33 | 66 | 24.909091 | 0.796268 | 0 | 0 | 0.2 | 0 | 0 | 0.036496 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.16 | false | 0 | 0.12 | 0 | 0.28 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
694a62087121f0e7a903137a5cded3d86b3d17e4 | 1,324 | py | Python | app.py | muelletm/search | 3087dcfd26861b1386c38575b53cb026cb1045f8 | ["Apache-2.0"] | 1 | 2022-03-25T19:14:53.000Z | 2022-03-25T19:14:53.000Z | app.py | muelletm/search | 3087dcfd26861b1386c38575b53cb026cb1045f8 | ["Apache-2.0"] | null | null | null | app.py | muelletm/search | 3087dcfd26861b1386c38575b53cb026cb1045f8 | ["Apache-2.0"] | 4 | 2022-03-10T18:40:44.000Z | 2022-03-10T19:20:30.000Z
import collections
import os
from pathlib import Path
from typing import List
import streamlit as st
from sentence_transformers import SentenceTransformer
from search.engine import Engine, Result
from search.model import load_minilm_model
from search.utils import get_memory_usage
os.environ["TOKENIZERS_PARALLELISM"] = "false"
_DATA_DIR = os.environ.get("DATA_DIR", "data/people_pm_minilm")
st.set_page_config(page_title="Search Engine", layout="wide")
st.markdown(
"""
<style>
.big-font {
font-size:20px;
}
</style>
""",
unsafe_allow_html=True,
)
@st.cache(allow_output_mutation=True)
def load_engine() -> Engine:
engine = Engine(
data_dir=Path(_DATA_DIR),
)
return engine
@st.cache(allow_output_mutation=True)
def load_model() -> SentenceTransformer:
return load_minilm_model()
engine = load_engine()
model = load_model()
st.error("Create a text input for the query.")
st.error("Create a slider with the number of results to retrieve.")
with st.spinner("Querying index ..."):
st.error("Get query embedding.")
st.error("Search results (engine.search).")
# Show the results.
# You can use st.markdown to render markdown.
# e.g. st.markdown("**text**") will add text in bold font.
st.error("Render results")
st.markdown(f"**Mem Usage**: {get_memory_usage()}MB")
| 21.354839 | 67 | 0.728852 | 191 | 1,324 | 4.890052 | 0.450262 | 0.037473 | 0.03212 | 0.038544 | 0.079229 | 0.079229 | 0.079229 | 0.079229 | 0 | 0 | 0 | 0.00177 | 0.146526 | 1,324 | 61 | 68 | 21.704918 | 0.824779 | 0.089124 | 0 | 0.058824 | 0 | 0 | 0.246504 | 0.056818 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.264706 | 0.029412 | 0.382353 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
694b3b51b65fa886685be715d3c914e309e0c1fe | 1,596 | py | Python | interface/exemplos1/04.py | ell3a/estudos-python | 09808a462aa3e73ad433501acb11f62217548af8 | ["MIT"] | null | null | null | interface/exemplos1/04.py | ell3a/estudos-python | 09808a462aa3e73ad433501acb11f62217548af8 | ["MIT"] | null | null | null | interface/exemplos1/04.py | ell3a/estudos-python | 09808a462aa3e73ad433501acb11f62217548af8 | ["MIT"] | null | null | null
from tkinter import *
class EditBoxWindow:
def __init__(self, parent = None):
if parent == None:
parent = Tk()
self.myParent = parent
self.top_frame = Frame(parent)
        # Create the scrollbar
scrollbar = Scrollbar(self.top_frame)
self.editbox = Text(self.top_frame, yscrollcommand=scrollbar.set)
scrollbar.pack(side=RIGHT, fill=Y)
scrollbar.config(command=self.editbox.yview)
        # Text area
self.editbox.pack(anchor=CENTER, fill=BOTH)
self.top_frame.pack(side=TOP)
        # Text to search for
self.bottom_left_frame = Frame(parent)
self.textfield = Entry(self.bottom_left_frame)
self.textfield.pack(side=LEFT, fill=X, expand=1)
        # Find button
buttonSearch = Button(self.bottom_left_frame, text='Find', command=self.find)
buttonSearch.pack(side=RIGHT)
self.bottom_left_frame.pack(side=LEFT, expand=1)
self.bottom_right_frame = Frame(parent)
def find(self):
self.editbox.tag_remove('found', '1.0', END)
s = self.textfield.get()
if s:
idx = '1.0'
while True:
idx =self.editbox.search(s, idx, nocase=1, stopindex=END)
if not idx:
break
lastidx = '%s+%dc' % (idx, len(s))
self.editbox.tag_add('found', idx, lastidx)
idx = lastidx
self.editbox.tag_config('found', foreground='red')
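# Note on the index arithmetic in find(): Tk text indices are 'line.column'
# strings, so '%s+%dc' % (idx, len(s)) means "idx plus len(s) characters". A
# match of 'foo' at index '2.5' therefore yields lastidx '2.5+3c', and tag_add
# highlights the half-open range [idx, lastidx).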
if __name__ == "__main__":
    root = Tk()
    myapp = EditBoxWindow(root)
    root.mainloop()  # required when run as a script, or the window closes immediately
| 32.571429 | 85 | 0.58396 | 193 | 1,596 | 4.678756 | 0.393782 | 0.085271 | 0.053156 | 0.084164 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006289 | 0.302632 | 1,596 | 49 | 86 | 32.571429 | 0.805031 | 0.042607 | 0 | 0 | 0 | 0 | 0.027559 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.027778 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0
694cb06a76643cd64ded70df62959d8318b7af93 | 426 | py | Python | app/app.py | cagriozkurt/EksiSansur | 071f5e136d58f7fdd5ba32c8387904b2710d04a5 | ["MIT"] | null | null | null | app/app.py | cagriozkurt/EksiSansur | 071f5e136d58f7fdd5ba32c8387904b2710d04a5 | ["MIT"] | null | null | null | app/app.py | cagriozkurt/EksiSansur | 071f5e136d58f7fdd5ba32c8387904b2710d04a5 | ["MIT"] | 1 | 2022-03-22T13:50:41.000Z | 2022-03-22T13:50:41.000Z
import psycopg
from flask import Flask, render_template
from flask_compress import Compress
app = Flask(__name__)
DATABASE_URL = ""
Compress(app)
@app.route("/")
def index():
with psycopg.connect(DATABASE_URL, sslmode="require") as conn:
with conn.cursor() as cur:
cur.execute("SELECT * FROM topics;")
items = cur.fetchall()
return render_template("index.html", items=items)
| 25.058824 | 66 | 0.671362 | 53 | 426 | 5.226415 | 0.54717 | 0.064982 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.213615 | 426 | 16 | 67 | 26.625 | 0.826866 | 0 | 0 | 0 | 0 | 0 | 0.091549 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.230769 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
69512ed9252aea21d648e173c9d6e12c14061403 | 1,404 | py | Python | 2020/07/ape.py | notxenonbox/adventofcode | 82cd8fafdf21c988bd7383f2b6d71cec04282e65 | ["Unlicense"] | null | null | null | 2020/07/ape.py | notxenonbox/adventofcode | 82cd8fafdf21c988bd7383f2b6d71cec04282e65 | ["Unlicense"] | null | null | null | 2020/07/ape.py | notxenonbox/adventofcode | 82cd8fafdf21c988bd7383f2b6d71cec04282e65 | ["Unlicense"] | null | null | null
import re
class Bag:
def __init__(self, _name, _contents):
self.name = _name
self.contents = _contents
self.c_cache = None
self.has_cache = {}
    def hasBagType(self, _name, bags):
        # memoise: the original try/except returned before ever writing to
        # has_cache, so the cache never filled and every call recursed fully
        if _name in self.has_cache:
            return self.has_cache[_name]
        if _name == self.name:
            result = True
        else:
            result = any(bags[i[1]].hasBagType(_name, bags)
                         for i in self.contents)
        self.has_cache[_name] = result
        return result
    def children_count(self):
        # note: relies on the module-level 'bags' dict built below
        if self.c_cache is not None:
            return self.c_cache
count = 0
for i in self.contents:
count += i[0] + (i[0] * bags[i[1]].children_count())
self.c_cache = count
return count
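# Worked example of the counting rule above: a bag holding 2 blue bags, each of
# which holds 3 green bags, contains 2 + 2 * 3 = 8 bags in total (the outer bag
# itself is not counted, which is also why part 1 below starts at -1).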
input_lines = []
with open('input.txt') as f:
input_lines = f.readlines()
input_lines = list(filter(None, input_lines))
bags = {}
for i in input_lines:
bag, contents = re.search(r'^((?:[\w]+ ){2})bags contain ([\S\s]+)', i).groups()
if contents.strip() == "no other bags.":
bag = bag.strip()
bags[bag] = Bag(bag, [])
continue
contents = contents.split(', ')
contents = list(map(lambda x: re.search(r'(\d)+ ((?:[\w]+ ){2})', x).groups(), contents))
# cleaning up
contents = list(map(lambda x: (int(x[0]), x[1].strip()), contents))
bag = bag.strip()
bags[bag] = Bag(bag, contents)
part1 = -1  # start at -1: the "shiny gold" bag matches itself in the loop below
for i in bags.values():
if i.hasBagType("shiny gold", bags):
part1 += 1
print(f'part 1: {part1}')
print(f'part 2: {bags["shiny gold"].children_count()}')
| 22.285714 | 90 | 0.623219 | 215 | 1,404 | 3.948837 | 0.302326 | 0.042403 | 0.047114 | 0.03298 | 0.150766 | 0.056537 | 0.056537 | 0 | 0 | 0 | 0 | 0.014184 | 0.196581 | 1,404 | 63 | 91 | 22.285714 | 0.738475 | 0.007835 | 0 | 0.16 | 0 | 0 | 0.110632 | 0.017241 | 0 | 0 | 0 | 0 | 0 | 1 | 0.06 | false | 0 | 0.02 | 0 | 0.22 | 0.04 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0
69517bd1fdcbc759ae3114b27d1f3038e73dc9c5 | 3,209 | py | Python | src/drive.py | Matej-Chmel/pydrive-chat | 551504335bcebbeed239f1961b7bffa3f45d220d | ["Apache-2.0", "CC0-1.0"] | null | null | null | src/drive.py | Matej-Chmel/pydrive-chat | 551504335bcebbeed239f1961b7bffa3f45d220d | ["Apache-2.0", "CC0-1.0"] | null | null | null | src/drive.py | Matej-Chmel/pydrive-chat | 551504335bcebbeed239f1961b7bffa3f45d220d | ["Apache-2.0", "CC0-1.0"] | null | null | null
from datetime import datetime, timedelta
from io import BytesIO
from pathlib import Path
from time import altzone, daylight, localtime, timezone
from pydrive.auth import GoogleAuth, AuthenticationRejected
from pydrive.drive import GoogleDrive as Drive, GoogleDriveFile as File
from requests import patch
from .auth import gauth
from ._this import ENDL, res_
CHAT_LOG: File = None
FILE_TYPE = 'application/vnd.google-apps.file'
FOLDER_TYPE = 'application/vnd.google-apps.folder'
LAST_READ: datetime = None
UTC_OFFSET_SECS = -(altzone if daylight and localtime().tm_isdst > 0 else timezone)
drive: Drive = None
def setup_gauth():
path = res_('client_secrets.json')
if not Path(path).is_file():
raise FileNotFoundError
GoogleAuth.DEFAULT_SETTINGS['client_config_file'] = path
def empty_contents_of_(file):
patch(
f"https://www.googleapis.com/upload/drive/v3/files/{file['id']}?uploadType=multipart",
headers={'Authorization': f"Bearer {gauth.credentials.token_response['access_token']}"},
files={
'data': ('metadata', '{}', 'application/json'),
'file': BytesIO()
}
)
def ensure_item(title: str, mime_type=None, parents=None, trashed=False):
query = f"title='{title}'"
if mime_type:
query += f" and mimeType='{mime_type}'"
if parents:
query += f""" and {
' and '.join(f"'{item['id']}' in parents" for item in parents)
}""" if type(parents) is list else f" and '{parents['id']}' in parents"
if trashed is not None:
query += f' and trashed={str(trashed).lower()}'
try:
return drive.ListFile({'q': query}).GetList()[0]
except IndexError:
metadata = {'title': title}
if mime_type:
metadata['mimeType'] = mime_type
if parents:
metadata['parents'] = [
{'id': item['id']} for item in parents
] if type(parents) is list else [{'id': parents['id']}]
file = drive.CreateFile(metadata)
file.Upload()
return file
def log_into_drive():
creds_path = res_('creds.json')
if Path(creds_path).is_file():
gauth.LoadCredentialsFile(creds_path)
else:
try:
gauth.LocalWebserverAuth()
gauth.SaveCredentialsFile(creds_path)
        except Exception:  # auth flow failed or was rejected by the user
return None
return Drive(gauth)
def login_and_init():
global CHAT_LOG, drive
drive = log_into_drive()
if drive is None:
return False
app_data = ensure_item('AppData', FOLDER_TYPE)
app_folder = ensure_item('pydrive-chat', FOLDER_TYPE, app_data)
CHAT_LOG = ensure_item('chat_log.txt', parents=app_folder)
return True
def append_to_log(text):
CHAT_LOG.SetContentString(f'{CHAT_LOG.GetContentString()}{text}{ENDL}')
CHAT_LOG.Upload()
def overwrite_log(text=None):
if not text:
empty_contents_of_(CHAT_LOG)
CHAT_LOG.Upload()
CHAT_LOG.SetContentString('')
else:
CHAT_LOG.SetContentString(text)
CHAT_LOG.Upload()
def read_log():
return CHAT_LOG.GetContentString()
def read_if_modified():
    def was_modified():
        # 'global' must be declared inside this nested function: the assignment
        # below would otherwise make LAST_READ local here and the comparison
        # would raise UnboundLocalError. (LINES_READ was declared but never used.)
        global LAST_READ
        modified_at = when_modified()
if LAST_READ < modified_at:
LAST_READ = modified_at
return True
return False
if LAST_READ is None or was_modified():
return CHAT_LOG.GetContentString()
return None
def when_modified():
return datetime.strptime(CHAT_LOG['modifiedDate'], '%Y-%m-%dT%H:%M:%S.%fZ') + timedelta(seconds=UTC_OFFSET_SECS)
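# Example of the conversion above: a Drive 'modifiedDate' of
# '2021-06-01T12:00:00.000Z' parses as naive UTC, and adding UTC_OFFSET_SECS
# shifts it into local time, e.g. datetime(2021, 6, 1, 14, 0) at a +02:00 offset.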
| 26.966387 | 113 | 0.727641 | 454 | 3,209 | 4.955947 | 0.306167 | 0.046667 | 0.012 | 0.021333 | 0.099556 | 0.034667 | 0.034667 | 0.034667 | 0.034667 | 0.034667 | 0 | 0.001087 | 0.139607 | 3,209 | 118 | 114 | 27.194915 | 0.813836 | 0 | 0 | 0.193878 | 0 | 0.010204 | 0.192895 | 0.078841 | 0 | 0 | 0 | 0 | 0 | 1 | 0.112245 | false | 0 | 0.091837 | 0.020408 | 0.326531 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
69550175d4982933c72091480c87edac34bffafc | 1,967 | py | Python | tests/test_yaml_files.py | graeme-winter/data | 6e359b169c35d1a6569fd316f7b7ab19fa5812b8 | ["BSD-3-Clause"] | null | null | null | tests/test_yaml_files.py | graeme-winter/data | 6e359b169c35d1a6569fd316f7b7ab19fa5812b8 | ["BSD-3-Clause"] | null | null | null | tests/test_yaml_files.py | graeme-winter/data | 6e359b169c35d1a6569fd316f7b7ab19fa5812b8 | ["BSD-3-Clause"] | null | null | null
# -*- coding: utf-8 -*-
from __future__ import absolute_import, division, print_function
import pkg_resources
import pytest
import string
import yaml
definition_yamls = {
fn
for fn in pkg_resources.resource_listdir("dials_data", "definitions")
if fn.endswith(".yml")
}
hashinfo_yamls = {
fn
for fn in pkg_resources.resource_listdir("dials_data", "hashinfo")
if fn.endswith(".yml")
}
def is_valid_name(filename):
if not filename.endswith(".yml") or len(filename) <= 4:
return False
allowed_characters = frozenset(string.ascii_letters + string.digits + "_")
return all(c in allowed_characters for c in filename[:-4])
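# e.g. is_valid_name("insulin.yml") is True, while is_valid_name("bad-name.yml")
# is False because '-' is not in the allowed character set above.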
@pytest.mark.parametrize("yaml_file", definition_yamls)
def test_yaml_file_is_valid_definition(yaml_file):
assert is_valid_name(yaml_file)
definition = yaml.safe_load(
pkg_resources.resource_stream("dials_data", "definitions/" + yaml_file).read()
)
fields = set(definition)
required = {"name", "data", "description"}
optional = {"license", "url", "author"}
assert fields >= required, "Required fields missing: " + str(
sorted(required - fields)
)
assert fields <= (required | optional), "Unknown fields present: " + str(
sorted(fields - required - optional)
)
@pytest.mark.parametrize("yaml_file", hashinfo_yamls)
def test_yaml_file_is_valid_hashinfo(yaml_file):
assert is_valid_name(yaml_file)
assert (
yaml_file in definition_yamls
), "hashinfo file present without corresponding definition file"
hashinfo = yaml.safe_load(
pkg_resources.resource_stream("dials_data", "hashinfo/" + yaml_file).read()
)
fields = set(hashinfo)
required = {"definition", "formatversion", "verify"}
assert fields >= required, "Required fields missing: " + str(
sorted(required - fields)
)
assert fields <= required, "Unknown fields present: " + str(
sorted(fields - required)
)
| 31.222222 | 86 | 0.68785 | 234 | 1,967 | 5.547009 | 0.316239 | 0.067797 | 0.061633 | 0.01849 | 0.514638 | 0.437596 | 0.437596 | 0.329738 | 0.278891 | 0.206471 | 0 | 0.001895 | 0.195221 | 1,967 | 62 | 87 | 31.725806 | 0.818067 | 0.010676 | 0 | 0.192308 | 0 | 0 | 0.170782 | 0 | 0 | 0 | 0 | 0 | 0.134615 | 1 | 0.057692 | false | 0 | 0.096154 | 0 | 0.192308 | 0.019231 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
695579af900de69250d3b0d15ec3f825c2990c7f | 18,839 | py | Python | PacELF_Phase3/scripts/csvColumnToXMLFile.py | pacelf/pacelf | cd9f3608843eaf7d9dff6e20e06ee4bf773467e3 | ["MIT"] | null | null | null | PacELF_Phase3/scripts/csvColumnToXMLFile.py | pacelf/pacelf | cd9f3608843eaf7d9dff6e20e06ee4bf773467e3 | ["MIT"] | 2 | 2021-10-06T01:58:48.000Z | 2022-02-18T04:52:34.000Z | PacELF_Phase3/scripts/csvColumnToXMLFile.py | pacelf/pacelf | cd9f3608843eaf7d9dff6e20e06ee4bf773467e3 | ["MIT"] | null | null | null
# -*- coding: utf-8 -*-
# This program is designed to create XML sidecar files for harvesting metadata into Mediaflux.
#
# Author: Jay van Schyndel
# Date: 02 May 2017.
#
# Significant modifications done by: Daniel Baird
# Date: 2018 and early 2019
#
# Scenario. Metadata is stored in an MS Excel file in various columns.
# Excel has been used to create a new column representing the metadata in the required XML format.
# The file is then saved as a CSV.
# This program will open the CSV file, read the appropriate column and save the XML into a sidecar file based on the name of the data file.
# Note the program assumes there is a header row in the CSV file. It skips processesing the first row.
#
# new example: python ./scripts/csvColumnToXMLFile.py "./rawdata/excel/PacELF Phases 1_2_3 13Dec2018.csv" "/Users/pvrdwb/projects/PacELFDocs/PacELFphase3/" ../docs --location="HardcopyLocation2018"
#
# old example: python csvColumnToXMLFile.py rawSpreadsheet/PacELF_Phase_1_AND_2.csv ~/projects/PacELFDocs/PacELF\ PDFs ./docs
#
import os
import re
import sys
import csv
import shutil
import argparse
parser = argparse.ArgumentParser(description="Create XML sidecar files from a CSV file")
parser.add_argument(
"metadata_csv", metavar="metadataCSV", help="CSV file containing the XML"
)
parser.add_argument(
"src_folder", metavar="sourceFolder", help="directory containing the source files"
)
parser.add_argument(
"dest_folder", metavar="destinationFolder", help="Path of the destination folder"
)
parser.add_argument(
"--title",
metavar="titleColumn",
help="Column containing the title",
default="Title",
)
parser.add_argument(
"--xml", metavar="xmlColumn", help="Column containing the XML", default="XML"
)
parser.add_argument(
"--access",
metavar="accessColumn",
help="Column containing the Access Rights",
default="Access Rights",
)
parser.add_argument(
"--type", metavar="accessColumn", help="Column containing the Type", default="Type"
)
parser.add_argument(
"--file",
metavar="fileColumn",
help="Column containing the data file name",
default="PDF",
)
parser.add_argument(
"--location",
metavar="primaryLocationColumn",
help="Column containing the primary hardcopy location",
default="Hardcopy Locations",
)
try:
args = parser.parse_args()
except:
sys.exit(0)
print("Processing CSV file: ", args.metadata_csv)
# this is all the location typos we've found
loc_replacements = {}
loc_replacements[r"JCU WHOCC Ichimori collectoin"] = r"JCU WHOCC Ichimori collection"
loc_replacements[r"JCU WHOCC Ichimori Collection"] = r"JCU WHOCC Ichimori collection"
loc_replacements[r"JCU WHOCC ICHIMORI Collection"] = r"JCU WHOCC Ichimori collection"
loc_replacements[r"JCU WHO Ichimori Collection"] = r"JCU WHOCC Ichimori collection"
loc_replacements[r"JCU WHO Ichimori collection"] = r"JCU WHOCC Ichimori collection"
loc_replacements[r"JCU WHO CC Ichimori Collection"] = r"JCU WHOCC Ichimori collection"
loc_replacements[r"JCUWHOCC Ichimori collection"] = r"JCU WHOCC Ichimori collection"
loc_replacements[r"JCUWHOCC Ichimori Collection"] = r"JCU WHOCC Ichimori collection"
loc_replacements[r"Ichimori Collection"] = r"JCU WHOCC Ichimori collection"
loc_replacements[r"JCU WHOCC Nagasaki Collection"] = r"JCU WHOCC Ichimori collection"
loc_replacements[r"JCU WHOCC Nagasaki collection"] = r"JCU WHOCC Ichimori collection"
loc_replacements[r"WHO DPS Suva"] = r"WHO DPS Fiji"
loc_replacements[r"WHO HQ Geneva"] = r"WHO Geneva"
# -----------------------------------------------------------------------------
def clean_hc_location(loc):
if loc in loc_replacements:
return loc_replacements[loc]
else:
return loc
# -----------------------------------------------------------------------------
def clean_xml_content(xml_string):
"""
Given some xml in string form that we got right from the spreadsheet,
clean it up
"""
for old_loc in loc_replacements:
old = r"<hardcopy_location>([^<]*)" + old_loc + r"([^<]*)</hardcopy_location>"
new = (
r"<hardcopy_location>\1"
+ loc_replacements[old_loc]
+ r"\2</hardcopy_location>"
)
xml_string = re.sub(old, new, xml_string)
return xml_string
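# Worked example of the substitution above: with the mapping
# 'JCU WHO Ichimori Collection' -> 'JCU WHOCC Ichimori collection',
#   <hardcopy_location>Box 3, JCU WHO Ichimori Collection</hardcopy_location>
# becomes
#   <hardcopy_location>Box 3, JCU WHOCC Ichimori collection</hardcopy_location>
# (the capture groups keep any text around the known-bad phrase intact).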
# -----------------------------------------------------------------------------
def get_location_info(location):
locations = {}
locations[
"JCU WHOCC Ichimori collection"
] = "James Cook University, Bldg 41 Rm 207, Townsville, Queensland 4811, Australia"
locations[
"JCU WHOCC"
] = "James Cook University, Bldg 41 Rm 207, Townsville, Queensland 4811, Australia"
locations[
"JCU Cairns (PMG)"
] = "James Cook University, Bldg E1 Rm 003C, Cairns, Queensland 4870, Australia"
locations[
"WHO DPS Fiji"
] = "World Health Organization, Level 4, Provident Plaza One, Downtown Boulevard, 33 Ellery Street, Suva, Fiji"
locations[
"WHO WPRO Manila"
] = "P.O. Box 2932, United Nations Ave. cor. Taft Ave, 1000 Manila, Philippines"
locations["WHO Geneva"] = "Avenue Appia 20, 1202 Geneva, Switzerland"
locations[
"JCU library"
] = "James Cook University, Eddie Koiko Mabo library, Bldg 18, Townsville, Queensland 4811, Australia"
return locations[location]
# -----------------------------------------------------------------------------
with open(args.metadata_csv, "rb") as csvfile:
metadataReader = csv.DictReader(csvfile, delimiter=",")
counts = {
"rows": 0,
"docs": 0,
"restrict": 0,
"hc": 0,
"restrict_hc": 0,
"write_err": 0,
"copy_err": 0,
"sidecar_err": 0,
"no_doc": 0,
"doc_missing": 0,
"sidecars": 0,
}
for row in metadataReader:
counts["rows"] += 1
# Skipping first row as it contains the header row.
if counts["rows"] > 1:
real_file = row[args.file]
xml_content = row[args.xml]
# clean the XML (this part is special to the specific data we're getting)
xml_content = clean_xml_content(xml_content)
doc_access = row[args.access]
doc_type = row[args.type]
hc_location = (
row[args.location].split(";")[0].strip()
) # semicolon separated list -- get the first one
hc_location = clean_hc_location(hc_location)
doc_title = row[args.title]
# bail if there's no title
if doc_title == "":
continue
else:
# print("LOOKING: " + doc_title)
counts["docs"] += 1
pass
# destination for the xml file
flat_file_name, file_ext = os.path.splitext(real_file)
# maybe there are subdirs in the file name, we'll flatten those out
flat_file_name = flat_file_name.replace("/", "#")
# copy the file there
# maybe we have to fake up the content coz it's restricted or something
fake_content = False
if doc_access == "Restricted" and doc_type == "Hardcopy" and hc_location:
# it's a restricted hardcopy with a location
counts["restrict_hc"] += 1
fake_content = "".join(
[
'The document "',
doc_title,
'" is unavailable due to data sensitivity, publisher restrictions or is not digitised. ',
"Please e-mail pacelf@jcu.edu.au or write to:\n\n ",
get_location_info(hc_location),
"\n\nto negotiate gaining access to this item.",
]
)
elif (
doc_access == "Restricted"
and doc_type == "Hardcopy"
and not hc_location
):
# it's a restricted hardcopy with no location
counts["restrict_hc"] += 1
fake_content = "".join(
[
'The document "',
doc_title,
'" is unavailable due to data sensitivity, publisher restrictions or is not digitised. ',
"Please e-mail pacelf@jcu.edu.au to negotiate gaining access to this item.",
]
)
elif doc_access != "Restricted" and doc_type == "Hardcopy" and hc_location:
# it's an unrestricted hardcopy with a location
counts["hc"] += 1
fake_content = "".join(
[
'The document "',
doc_title,
'" is not available in digital format. ',
"A copy is held at:\n\n ",
get_location_info(hc_location),
"\n\nplease write or email pacelf@jcu.edu.au to request a copy.",
]
)
elif (
doc_access != "Restricted"
and doc_type == "Hardcopy"
and not hc_location
):
# it's an unrestricted hardcopy with no location
counts["hc"] += 1
fake_content = "".join(
[
'The document "',
doc_title,
'" is not available in digital format. ',
"Please e-mail pacelf@jcu.edu.au to request a copy.",
]
)
elif doc_access == "Restricted" and doc_type != "Hardcopy":
# it's a restricted PDF
counts["restrict"] += 1
fake_content = "".join(
[
'The document "',
doc_title,
'" is unavailable due to data sensitivity, publisher restrictions or is not digitised. ',
"Please e-mail pacelf@jcu.edu.au to negotiate gaining access to this item.",
]
)
elif flat_file_name == "":
# any other situation where there's no doc
counts["no_doc"] += 1
fake_content = "".join(
[
'The document "',
doc_title,
'" is not available in digital format. ',
"Please e-mail pacelf@jcu.edu.au to discuss access.",
]
)
if flat_file_name == "":
flat_file_name = "PacELF_Phase2_" + str(counts["rows"])
#
# by now have fake content to use, or we expect the doc to be available.
#
# destination for the real file
real_dest_file = os.path.join(args.dest_folder, flat_file_name + file_ext)
# destination for the proxy document (.txt extension)
fake_dest_path = os.path.join(args.dest_folder, flat_file_name + ".txt")
if fake_content:
# write the fake content, if we have it
try:
file = open(fake_dest_path, "w")
file.write(fake_content)
file.close()
# print(unicode('PROXIED: ') + unicode(doc_title))
except ValueError as e:
counts["write_err"] += 1
print("Couldn't write content to: " + real_dest_file)
print(e)
else:
# we didn't have fake content, so use the real doc/pdf
real_file_path = os.path.join(args.src_folder, real_file)
if real_file == "":
print('No doc file specified for "' + doc_title + '"')
counts["no_doc"] += 1
continue
# try to copy the file --------
# first let's get some common error versions of the filename
fn_to_try = [real_file_path]
fn_to_try.append(
re.sub(r"\.pdf$", r" .pdf", real_file_path)
) # space before the pdf
fn_to_try.append(re.sub(r"\\", r"/", real_file_path)) # other slashes
fn_to_try.append(re.sub(r"$", r".pdf", real_file_path)) # add .pdf
fn_to_try.append(
re.sub(
r"Multicountry Pacific", r"multicountry pacific", real_file_path
)
) # upper case
fn_to_try.append(
re.sub(
r"Mulitcountry Pacific", r"multicountry pacific", real_file_path
)
) # typo & upper case
# some straight fixes
fn_to_try.append(
re.sub(
r"\\\.pdf$",
r"PDF version\.pdf",
re.sub(
r"Mulitcountry Pacific",
r"multicountry pacific",
real_file_path,
),
)
) # two fixes
fn_to_try.append(
re.sub(
r"PacELF_102", r"PacELF_102 Jarno et al 2006", real_file_path
)
) # add author
fn_to_try.append(
re.sub(
r"PacELF_448",
r"PacELF_448 Andrews et al 2012 PLOS PATHOGENS ",
real_file_path,
)
)
fn_to_try.append(
re.sub(
r"PacELF_493",
r"PacELF_493 Brelsfoard et al 2008 PLOS NTDs Interspecific hybridization South Pacific filariasis vectors",
real_file_path,
)
)
fn_to_try.append(
re.sub(
r"PacELF_508",
r"PacELF_508 Burkot et al 2013 MAL J Barrier screens",
real_file_path,
)
)
fn_to_try.append(
re.sub(
r"PacELF_314",
r"PacELF_314 Stolk et al 2013 PLOS NTDs",
real_file_path,
)
)
fn_to_try.append(
re.sub(
r"PacELF_317",
r"PacELF_317 Debrah et al 2006 PLOS PATHOGENS Doxycycline reduces VGF and improves pathology LF",
real_file_path,
)
)
fn_to_try.append(
re.sub(
r"PacELF_319",
r"PacELF_319 Hooper et al 2014 PLOS NTDs Asseesing progress in reducing at risk population after 13 years",
real_file_path,
)
)
fn_to_try.append(
re.sub(
r"\\2001-05 PRG Fiji May-Jun 2011\\",
r"/2011-05 PRG Fiji May-Jun 2011/",
real_file_path,
)
)
fn_to_try.append(
re.sub(
r"PacELF_414 WPRO PMM 2011 report_2011 Oct 31\.pdf",
r"PacELF_414 WPRO PMM 2011 report_2011 Oct 31 PDF version.pdf",
real_file_path,
)
)
fn_to_try.append(
re.sub(
r"Multicountry Pacific/PacELF_585",
r"French Polynesia/PacELF_585",
real_file_path,
)
)
fn_to_try.append(
re.sub(
r"Manson-Bahr 1912 FIlariasis and elephantiasis in Fiji LSHTM b21356658",
r"Manson-Bahr 1912 FIlariasis and elephantiasis in Fiji LSHTM b21356658",
real_file_path,
)
)
# find the first that is a file
for pth in fn_to_try:
if os.path.isfile(pth):
break
# try copying that
if os.path.isfile(pth):
try:
shutil.copyfile(pth, real_dest_file)
# print(' COPIED: ' + doc_title)
except shutil.Error as e:
counts["copy_err"] += 1
print("Could not copy doc: " + pth)
print(e)
else:
print(
"Could not find doc file for title: '"
+ doc_title
+ "', file: "
+ pth
)
counts["doc_missing"] += 1
#
# Now we've got content there, make the xml sidecar file
#
xml_dest_file = flat_file_name + ".xml"
xml_dest_path = args.dest_folder + "/" + xml_dest_file
try:
file = open(xml_dest_path, "w")
file.write(xml_content)
file.close()
counts["sidecars"] += 1
except ValueError as e:
counts["sidecar_err"] += 1
print("Oops, this one is dodgy: " + xml_dest_path)
print("ValueError: ", e)
print("\nSummary:")
print(
"".join(
[
" ",
str(counts["rows"]),
" rows read: ",
str(counts["docs"]),
" documents processed, ",
str(counts["sidecars"]),
" metadata sidecars produced;",
"\n ",
str(counts["hc"]),
" hard copies, ",
str(counts["restrict"]),
" restricted docs, ",
str(counts["restrict_hc"]),
" restricted hard copies;",
"\n ",
str(counts["copy_err"]),
" copy errors, ",
str(counts["write_err"]),
" write errors, ",
str(counts["sidecar_err"]),
" sidecar errors, ",
str(counts["doc_missing"]),
" docs not locatable, ",
str(counts["no_doc"]),
" docs not listed.",
"\n",
]
)
)
| 37.157791 | 197 | 0.49578 | 1,970 | 18,839 | 4.600508 | 0.228426 | 0.021185 | 0.025157 | 0.024385 | 0.388613 | 0.366435 | 0.344147 | 0.339181 | 0.310493 | 0.29626 | 0 | 0.022933 | 0.400499 | 18,839 | 506 | 198 | 37.231225 | 0.779529 | 0.144116 | 0 | 0.3257 | 0 | 0.002545 | 0.294829 | 0.007297 | 0 | 0 | 0 | 0 | 0 | 1 | 0.007634 | false | 0.002545 | 0.015267 | 0 | 0.033079 | 0.02799 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
69566d12c06529c48fecd538d7a1ad6d03fffd43 | 1,147 | py | Python | authenticationCwProject/authenticationCwApp/views.py | cs-fullstack-2019-spring/django-authentication-cw-bettyjware11 | 0afa675b0b6602a89ecfff9ee29a62c95f677de6 | ["Apache-2.0"] | null | null | null | authenticationCwProject/authenticationCwApp/views.py | cs-fullstack-2019-spring/django-authentication-cw-bettyjware11 | 0afa675b0b6602a89ecfff9ee29a62c95f677de6 | ["Apache-2.0"] | null | null | null | authenticationCwProject/authenticationCwApp/views.py | cs-fullstack-2019-spring/django-authentication-cw-bettyjware11 | 0afa675b0b6602a89ecfff9ee29a62c95f677de6 | ["Apache-2.0"] | null | null | null
from django.shortcuts import render
from django.shortcuts import HttpResponse
from .forms import FoodFitnessForm
from django.contrib.auth.models import User
# function to test with
def index(request):
return HttpResponse("You made it.")
# function to create new user
def createUser(request):
form = FoodFitnessForm(request.POST or None)
context = {
"form": form
}
if request.method == "POST":
print(request.POST)
User.objects.create_user(request.POST["username"], request.POST["calories"], request.POST["date"])
return render(request, "authenticationCwApp/confirmUser.html")
return render(request, 'authenticationCwApp/createUser.html', context)
# function to confirm new user
def confirmUser(request):
form = FoodFitnessForm(request.GET or None)
context = {
"form": form
}
if request.method == 'GET':
        # create_user takes (username, email, password); the original passed a
        # fourth positional argument, which raises a TypeError. Mirror the
        # argument pattern used in createUser above.
        User.objects.create_user(request.GET["username"], request.GET["calories"], request.GET["date"])
form.save()
return HttpResponse("New Food Calorie Tracker Created!!!!!")
return render(request, "authenticationCwApp/confirmUser.html", context)
| 28.675 | 107 | 0.70619 | 133 | 1,147 | 6.075188 | 0.360902 | 0.068069 | 0.070545 | 0.141089 | 0.289604 | 0.220297 | 0.089109 | 0.089109 | 0 | 0 | 0 | 0 | 0.173496 | 1,147 | 39 | 108 | 29.410256 | 0.852321 | 0.068003 | 0 | 0.153846 | 0 | 0 | 0.198122 | 0.100469 | 0 | 0 | 0 | 0 | 0 | 1 | 0.115385 | false | 0 | 0.153846 | 0.038462 | 0.461538 | 0.038462 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
695934996d2e79a5c72ef6834b5be0584eb22c99 | 4,763 | py | Python | csv-manipulation/csv-merge.py | bohnacker/data-manipulation | a46cdfdeca8d242038b118509c20b1eb39ea5b36 | ["MIT"] | 2 | 2020-06-05T15:57:50.000Z | 2020-06-30T12:59:00.000Z | csv-manipulation/csv-merge.py | bohnacker/data-manipulation | a46cdfdeca8d242038b118509c20b1eb39ea5b36 | ["MIT"] | null | null | null | csv-manipulation/csv-merge.py | bohnacker/data-manipulation | a46cdfdeca8d242038b118509c20b1eb39ea5b36 | ["MIT"] | null | null | null
from __future__ import print_function
# A script to help you with manipulating CSV-files. This is especially necessary when dealing with
# CSVs that have more than 65536 lines because those can not (yet) be opened in Excel or Numbers.
# This script works with two files from ourworldindata.org:
# https://ourworldindata.org/age-structure and https://ourworldindata.org/gender-ratio
# This script MERGES two CSVs.
#
# Usage:
# - Adjust filenames and delimiters.
# - Variable matchColumns: names of the matching columns in the first CSV
# - Variable withColumns: names of the matching columns in the second CSV
# - Variable copyColumns: which columns from the second CSV should be copied
#     to the first. If copyColumns is [], it copies all columns except
#     those defined in the variable 'withColumns'
# Examples:
#     copyColumns = ['latitude', 'longitude']   Will copy those two columns
#     copyColumns = []                          Will copy all columns
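# For example, with matchColumns = withColumns = ['Code', 'Year'] the script
# joins rows that agree on both columns (an illustrative sketch; the column
# names other than Code/Year are made up):
#   first CSV:  Code=DEU, Year=2015, MedianAge=45.9
#   second CSV: Code=DEU, Year=2015, PercentFemale=50.9
#   result row: Code=DEU, Year=2015, MedianAge=45.9, PercentFemale=50.9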
# ---------------------------------------------
# Change the parameters according to your task:
# Give the name of the CSV file where you want to add columns
readFileName1 = 'worlddata-median-age.csv' # <--- Adjust here
# What delimiter is used in this CSV? Usually ',' or ';'
readDelimiter1 = ',' # <--- Adjust here (have a look in your source CSV)
# Give the name of the CSV file that gives additional values
readFileName2 = 'worlddata-share-population-female.csv' # <--- Adjust here
# What delimiter is used in this CSV? Usually ',' or ';'
readDelimiter2 = ',' # <--- Adjust here (have a look in your source CSV)
# The result will be a new CSV file:
writeFileName = 'worlddata_merged.csv' # <--- Adjust here (has to be different than readFileName1)
# You can give a different delimiter for the result.
writeDelimiter = ',' # <--- Adjust here (';' is usually good)
matchColumns = ['Code', 'Year'] # <--- Adjust here
withColumns = ['Code', 'Year'] # <--- Adjust here
copyColumns = ['PercentFemale'] # <--- Adjust here
# # Second example for merging longitude/latitude data to a file with countries
# readFileName1 = 'wintergames_winners.csv'
# readDelimiter1 = ';'
# readFileName2 = 'longitude-latitude.csv'
# readDelimiter2 = ','
# writeFileName = 'wintergame_winners_merged.csv'
# writeDelimiter = ';'
# matchColumns = ['NOC']
# withColumns = ['IOC']
# copyColumns = ['latitude', 'longitude']
# ----------------------------------------------
# No need to change anything from here on ...
import csv
from collections import OrderedDict
readFile1 = open(readFileName1)
reader1 = csv.DictReader(readFile1, delimiter=readDelimiter1)
rows1 = list(reader1)
readFile2 = open(readFileName2)
reader2 = csv.DictReader(readFile2, delimiter=readDelimiter2)
rows2 = list(reader2)
writeFile = open(writeFileName, 'w')
writer = csv.writer(writeFile, delimiter=writeDelimiter)
# This writes the field names to the result.csv
headings1 = list(reader1.fieldnames)
if copyColumns == []:
    copyColumns = list(filter(lambda x: x not in withColumns, reader2.fieldnames))
writer.writerow(headings1 + copyColumns)
# create dict from second csv to speed up finding stuff
print('Preparing merge')
print('----------------------')
dic = {}
unique = True
for row in rows2:
key = tuple(row[x] for x in withColumns)
# for col in withColumns:
# key = key + row[col] + '__'
    if any(key):  # skip rows whose key columns are all empty
if key in dic:
unique = False
else:
dic[key] = row
if not unique:
    print('Warning: The columns "%s" in the second CSV have duplicate values, which could result in incorrect matching.' % withColumns)
print('----------------------')
print('Merging')
failed = []
numRows = 0
perc = 0
for i, row in enumerate(rows1):
if float(i) / len(rows1) > perc:
print('#', end='')
perc = perc + 0.01
values = []
val = tuple(row[x] for x in matchColumns)
# for col in matchColumns:
# val = val + row[col] + '__'
for key in headings1:
values.append(row[key])
    for key in copyColumns:
        try:
            values.append(dic[val][key])
        except KeyError:
            # append an empty value so the output columns stay aligned
            values.append('')
            if val not in failed:
                failed.append(val)
writer.writerow(values)
print('\n----------------------')
print('%d value(s) could not be found in the second CSV, so matching was not possible for every row.' % len(failed))
print("These values couldn't be matched:")
print(failed[:100])
if (len(failed) > 100):
print('... and %d more' % (len(failed) - 100))
| 33.780142 | 131 | 0.618728 | 567 | 4,763 | 5.174603 | 0.365079 | 0.030675 | 0.01636 | 0.014315 | 0.103613 | 0.103613 | 0.093388 | 0.05726 | 0.05726 | 0.034083 | 0 | 0.013611 | 0.244174 | 4,763 | 140 | 132 | 34.021429 | 0.801389 | 0.4789 | 0 | 0.031746 | 0 | 0.031746 | 0.186392 | 0.053196 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.047619 | 0 | 0.047619 | 0.190476 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
695b6b62546eaa04ce79f4261d88a6eb1b8a86ea | 16,047 | py | Python | script/python/assoc_sclrt_kernels_spliceai_eval_top_hits.py | HealthML/faatpipe | 8292df4f34c99f035756a1acbd2c79055f652958 | [
"Apache-2.0"
] | 2 | 2021-12-06T09:00:52.000Z | 2022-03-03T15:03:51.000Z | script/python/assoc_sclrt_kernels_spliceai_eval_top_hits.py | HealthML/faatpipe | 8292df4f34c99f035756a1acbd2c79055f652958 | [
"Apache-2.0"
] | null | null | null | script/python/assoc_sclrt_kernels_spliceai_eval_top_hits.py | HealthML/faatpipe | 8292df4f34c99f035756a1acbd2c79055f652958 | [
"Apache-2.0"
] | null | null | null | import os
# os.environ["OMP_NUM_THREADS"] = "16"
import logging
logging.basicConfig(filename=snakemake.log[0], level=logging.INFO)
import pandas as pd
import numpy as np
# seak imports
from seak.data_loaders import intersect_ids, EnsemblVEPLoader, VariantLoaderSnpReader, CovariatesLoaderCSV
from seak.scoretest import ScoretestNoK
from seak.lrt import LRTnoK, pv_chi2mixture, fit_chi2mixture
from pysnptools.snpreader import Bed
import pickle
import sys
from util.association import BurdenLoaderHDF5
from util import Timer
class GotNone(Exception):
pass
# set up the covariatesloader
covariatesloader = CovariatesLoaderCSV(snakemake.params.phenotype,
snakemake.input.covariates_tsv,
snakemake.params.covariate_column_names,
sep='\t',
path_to_phenotypes=snakemake.input.phenotypes_tsv)
# initialize the null models
Y, X = covariatesloader.get_one_hot_covariates_and_phenotype('noK')
null_model_score = ScoretestNoK(Y, X)
null_model_lrt = LRTnoK(X, Y)
# set up function to filter variants:
def maf_filter(mac_report):
# load the MAC report, keep only observed variants with MAF below threshold
    # 'hiconf_reg' is included so the high-confidence filter below can use it
    mac_report = pd.read_csv(mac_report, sep='\t', usecols=['SNP', 'MAF', 'Minor', 'alt_greater_ref', 'hiconf_reg'])
if snakemake.params.filter_highconfidence:
vids = mac_report.SNP[(mac_report.MAF < snakemake.params.max_maf) & (mac_report.Minor > 0) & ~(mac_report.alt_greater_ref.astype(bool)) & (mac_report.hiconf_reg.astype(bool))]
else:
vids = mac_report.SNP[(mac_report.MAF < snakemake.params.max_maf) & (mac_report.Minor > 0) & ~(mac_report.alt_greater_ref.astype(bool))]
# this has already been done in filter_variants.py
    # load the variant annotation, keep only variants in high-confidence regions
# anno = pd.read_csv(anno_tsv, sep='\t', usecols=['Name', 'hiconf_reg'])
# vids_highconf = anno.Name[anno.hiconf_reg.astype(bool).values]
# vids = np.intersect1d(vids, vids_highconf)
return mac_report.set_index('SNP').loc[vids]
def get_regions():
# load the results, keep those below a certain p-value
results = pd.read_csv(snakemake.input.results_tsv, sep='\t')
kern = snakemake.params.kernels
if isinstance(kern, str):
kern = [kern]
pvcols_score = ['pv_score_' + k for k in kern ]
pvcols_lrt = ['pv_lrt_' + k for k in kern]
statcols = ['lrtstat_' + k for k in kern]
results = results[['gene', 'n_snp', 'cumMAC', 'nCarrier'] + statcols + pvcols_score + pvcols_lrt]
# get genes below threshold
genes = [results.gene[results[k] < 1e-7].values for k in pvcols_score + pvcols_lrt ]
genes = np.unique(np.concatenate(genes))
if len(genes) == 0:
return None
# set up the regions to loop over for the chromosome
    regions = pd.read_csv(snakemake.input.regions_bed, sep='\t', header=None, usecols=[0, 1, 2, 3, 5], dtype={0: str, 1: np.int32, 2: np.int32, 3: str, 5: str})
regions.columns = ['chrom', 'start', 'end', 'name', 'strand']
regions['strand'] = regions.strand.map({'+': 'plus', '-': 'minus'})
regions = regions.set_index('name').loc[genes]
regions = regions.join(results.set_index('gene'), how='left').reset_index()
return regions
# genotype path, vep-path:
assert len(snakemake.params.ids) == len(snakemake.input.bed), 'Error: length of chromosome IDs does not match length of genotype files'
geno_vep = zip(snakemake.params.ids, snakemake.input.bed, snakemake.input.vep_tsv, snakemake.input.ensembl_vep_tsv, snakemake.input.mac_report, snakemake.input.h5_lof, snakemake.input.iid_lof, snakemake.input.gid_lof)
# get the top hits
regions_all = get_regions()
if regions_all is None:
logging.info('No genes pass significance threshold, exiting.')
sys.exit(0)
# where we store the results
stats = []
i_gene = 0
# enter the chromosome loop:
timer = Timer()
for i, (chromosome, bed, vep_tsv, ensembl_vep_tsv, mac_report, h5_lof, iid_lof, gid_lof) in enumerate(geno_vep):
if chromosome.replace('chr','') not in regions_all.chrom.unique():
continue
# set up the ensembl vep loader for the chromosome
spliceaidf = pd.read_csv(vep_tsv,
sep='\t',
usecols=['name', 'chrom', 'end', 'gene', 'max_effect', 'DS_AG', 'DS_AL', 'DS_DG', 'DS_DL', 'DP_AG', 'DP_AL', 'DP_DG', 'DP_DL'],
index_col='name')
# get set of variants for the chromosome:
mac_report = maf_filter(mac_report)
filter_vids = mac_report.index.values
# filter by MAF
keep = intersect_ids(filter_vids, spliceaidf.index.values)
spliceaidf = spliceaidf.loc[keep]
spliceaidf.reset_index(inplace=True)
# filter by impact:
spliceaidf = spliceaidf[spliceaidf.max_effect >= snakemake.params.min_impact]
# set up the regions to loop over for the chromosome
regions = regions_all.copy()
# discard all genes for which we don't have annotations
gene_ids = regions.name.str.split('_', expand=True) # table with two columns, ensembl-id and gene-name
regions['gene'] = gene_ids[1] # this is the gene name
regions['ensembl_id'] = gene_ids[0]
regions.set_index('gene', inplace=True)
genes = intersect_ids(np.unique(regions.index.values), np.unique(spliceaidf.gene)) # intersection of gene names
regions = regions.loc[genes].reset_index() # subsetting
regions = regions.sort_values(['chrom', 'start', 'end'])
# check if the variants are protein LOF variants, load the protein LOF variants:
ensemblvepdf = pd.read_csv(ensembl_vep_tsv, sep='\t', usecols=['Uploaded_variation', 'Gene'])
# this column will contain the gene names:
genes = intersect_ids(np.unique(ensemblvepdf.Gene.values), regions.ensembl_id) # intersection of ensembl gene ids
ensemblvepdf = ensemblvepdf.set_index('Gene').loc[genes].reset_index()
ensemblvepdf['gene'] = gene_ids.set_index(0).loc[ensemblvepdf.Gene.values].values
# set up the merge
ensemblvepdf.drop(columns=['Gene'], inplace=True) # get rid of the ensembl ids, will use gene names instead
ensemblvepdf.rename(columns={'Uploaded_variation': 'name'}, inplace=True)
ensemblvepdf['is_plof'] = 1.
ensemblvepdf = ensemblvepdf[~ensemblvepdf.duplicated()] # if multiple ensembl gene ids map to the same gene names, this prevents a crash.
# we add a column to the dataframe indicating whether the variant is already annotated as protein loss of function by the ensembl variant effect predictor
spliceaidf = pd.merge(spliceaidf, ensemblvepdf, on=['name', 'gene'], how='left', validate='one_to_one')
spliceaidf['is_plof'] = spliceaidf['is_plof'].fillna(0.).astype(bool)
# initialize the loader
# Note: we use "end" here because the start + 1 = end, and we need 1-based coordiantes (this would break if we had indels)
eveploader = EnsemblVEPLoader(spliceaidf['name'], spliceaidf['chrom'].astype('str') + ':' + spliceaidf['end'].astype('str'), spliceaidf['gene'], data=spliceaidf[['max_effect', 'is_plof', 'DS_AG', 'DS_AL', 'DS_DG', 'DS_DL', 'DP_AG', 'DP_AL', 'DP_DG', 'DP_DL']].values)
# set up the variant loader (splice variants) for the chromosome
plinkloader = VariantLoaderSnpReader(Bed(bed, count_A1=True, num_threads=4))
plinkloader.update_variants(eveploader.get_vids())
plinkloader.update_individuals(covariatesloader.get_iids())
# set up the protein LOF burden loader
bloader_lof = BurdenLoaderHDF5(h5_lof, iid_lof, gid_lof)
bloader_lof.update_individuals(covariatesloader.get_iids())
# set up the splice genotype + vep loading function
def get_splice(interval):
try:
V1 = eveploader.anno_by_interval(interval, gene=interval['name'].split('_')[1])
except KeyError:
raise GotNone
if V1.index.empty:
raise GotNone
vids = V1.index.get_level_values('vid')
V1 = V1.droplevel(['gene'])
temp_genotypes, temp_vids = plinkloader.genotypes_by_id(vids, return_pos=False)
temp_genotypes -= np.nanmean(temp_genotypes, axis=0)
G1 = np.ma.masked_invalid(temp_genotypes).filled(0.)
ncarrier = np.sum(G1 > 0.5, axis=0)
cummac = mac_report.loc[vids].Minor
# spliceAI max score
weights = V1[0].values.astype(np.float64)
is_plof = V1[1].values.astype(bool)
splice_preds_all = V1.iloc[:,2:]
splice_preds_all.columns = ['DS_AG', 'DS_AL', 'DS_DG', 'DS_DL', 'DP_AG', 'DP_AL', 'DP_DG', 'DP_DL']
# "standardized" positions -> codon start positions
# pos = V1[0].values.astype(np.int32)
return G1, vids, weights, ncarrier, cummac, is_plof, splice_preds_all
# set up the protein-LOF loading function
def get_plof(interval):
try:
            G2 = bloader_lof.genotypes_by_id(interval['name']).astype(float)  # np.float is deprecated in newer NumPy
except KeyError:
G2 = None
return G2
# set up the test-function for a single gene
def test_gene(interval, seed):
pval_dict = {}
pval_dict['gene'] = interval['name']
called = []
def pv_score(GV):
pv = null_model_score.pv_alt_model(GV)
if pv < 0.:
pv = null_model_score.pv_alt_model(GV, method='saddle')
return pv
def call_score(GV, name, vids=None):
if name not in pval_dict:
pval_dict[name] = {}
called.append(name)
pval_dict[name] = {}
# single-marker p-values
pval_dict[name]['pv_score'] = np.array([pv_score(GV[:,i,np.newaxis]) for i in range(GV.shape[1])])
# single-marker coefficients
beta = [ null_model_score.coef(GV[:,i,np.newaxis]) for i in range(GV.shape[1]) ]
pval_dict[name]['beta'] = np.array([x['beta'][0,0] for x in beta])
pval_dict[name]['betaSd'] = np.array([np.sqrt(x['var_beta'][0,0]) for x in beta])
if vids is not None:
pval_dict[name]['vid'] = vids
def call_lrt(GV, name, vids=None):
if name not in pval_dict:
pval_dict[name] = {}
called.append(name)
            # get gene parameters, test statistics and single-marker regression weights
lik = null_model_lrt.altmodel(GV)
pval_dict[name]['nLL'] = lik['nLL']
pval_dict[name]['sigma2'] = lik['sigma2']
pval_dict[name]['lrtstat'] = lik['stat']
pval_dict[name]['h2'] = lik['h2']
logdelta = null_model_lrt.model1.find_log_delta(GV.shape[1])
pval_dict[name]['log_delta'] = logdelta['log_delta']
pval_dict[name]['coef_random'] = null_model_lrt.model1.getPosteriorWeights(logdelta['beta'], logdelta=logdelta['log_delta'])
if vids is not None:
pval_dict[name]['vid'] = vids
# load splice variants
G1, vids, weights, ncarrier, cummac, is_plof, splice_preds_all = get_splice(interval)
# keep indicates which variants are NOT "protein LOF" variants, i.e. variants already identified by the ensembl VEP
keep = ~is_plof
# these are common to all kernels
pval_dict['vid'] = vids
pval_dict['weights'] = weights
pval_dict['MAC'] = cummac
pval_dict['nCarrier'] = ncarrier
pval_dict['not_LOF'] = keep
for col in splice_preds_all.columns:
pval_dict[col] = splice_preds_all[col].values.astype(np.float32)
# single-variant p-values:
call_score(G1, 'variant_pvals') # single variant p-values and coefficients estimated independently
call_lrt(G1.dot(np.diag(np.sqrt(weights), k=0)), 'variant_pvals') # single variant coefficients estimated *jointly* after weighting
# sanity checks
assert len(vids) == interval['n_snp'], 'Error: number of variants does not match! expected: {} got: {}'.format(interval['n_snp'], len(vids))
        assert cummac.sum() == interval['cumMAC'], 'Error: cumMAC does not match! expected: {}, got: {}'.format(interval['cumMAC'], cummac.sum())
        # do a score burden test (max weighted); this is different from the baseline!
G1_burden = np.max(np.where(G1 > 0.5, np.sqrt(weights), 0.), axis=1, keepdims=True)
call_score(G1_burden, 'linwb')
call_lrt(G1_burden, 'linwb')
# linear weighted kernel
G1 = G1.dot(np.diag(np.sqrt(weights), k=0))
# do a score test (linear weighted)
call_score(G1, 'linw', vids=vids)
call_lrt(G1, 'linw')
# load plof burden
G2 = get_plof(interval)
if G2 is not None:
call_score(G2, 'LOF')
call_lrt(G2, 'LOF')
if np.any(keep):
# merged (single variable)
G1_burden_mrg = np.maximum(G2, G1_burden)
call_score(G1_burden_mrg, 'linwb_mrgLOF')
call_lrt(G1_burden_mrg, 'linwb_mrgLOF')
# concatenated ( >= 2 variables)
# we separate out the ones that are already part of the protein LOF variants!
G1 = np.concatenate([G1[:, keep], G2], axis=1)
call_score(G1, 'linw_cLOF', vids=np.array(vids[keep].tolist() + [-1]))
call_lrt(G1, 'linw_cLOF')
else:
            logging.info('All Splice-AI variants for gene {} were already identified by the Ensembl variant effect predictor'.format(interval['name']))
return pval_dict, called
logging.info('loaders for chromosome {} initialized in {:.1f} seconds.'.format(chromosome, timer.check()))
# run tests for all genes on the chromosome
for _, region in regions.iterrows():
try:
gene_stats, called = test_gene(region, i_gene)
except GotNone:
continue
# build the single-variant datafame
single_var_columns = ['gene', 'vid', 'weights', 'MAC', 'nCarrier', 'not_LOF', 'DS_AG', 'DS_AL', 'DS_DG', 'DS_DL', 'DP_AG', 'DP_AL', 'DP_DG', 'DP_DL']
sv_df = pd.DataFrame.from_dict({k: gene_stats[k] for k in single_var_columns})
sv_df['pv_score'] = gene_stats['variant_pvals']['pv_score'] # single-variant p-values estimated independently
sv_df['coef_random'] = gene_stats['variant_pvals']['coef_random'] # single-variant coefficients estimated jointly after weighting
        sv_df['beta'] = gene_stats['variant_pvals']['beta']  # single-variant coefficients estimated independently *without* weighting
sv_df['betaSd'] = gene_stats['variant_pvals']['betaSd'] # standard errors for the single-variant coefficients estimated independently *without* weighting
sv_df['pheno'] = snakemake.params.phenotype
out_dir = os.path.join(snakemake.params.out_dir_stats, region['name'])
os.makedirs(out_dir, exist_ok=True)
sv_df.to_csv(out_dir + '/variants.tsv.gz', sep='\t', index=False)
for k in called:
if k == 'variant_pvals':
continue
results_dict = gene_stats[k]
            df_cols = ['pv_score', 'coef_random', 'beta', 'betaSd', 'vid']  # parts of the dict that have length > 1
df = pd.DataFrame.from_dict(data={k: results_dict[k] for k in df_cols if k in results_dict})
df['gene'] = gene_stats['gene']
df['pheno'] = snakemake.params.phenotype
df.to_csv(out_dir + '/{}.tsv.gz'.format(k), sep='\t', index=False)
# other cols ['nLL', 'sigma2', 'lrtstat', 'h2', 'log_delta']
other_cols = {k: v for k, v in results_dict.items() if k not in df_cols}
other_cols['gene'] = gene_stats['gene']
other_cols['pheno'] = snakemake.params.phenotype
pickle.dump(other_cols, open(out_dir + '/{}_stats.pkl'.format(k), 'wb'))
i_gene += 1
logging.info('tested {} genes...'.format(i_gene))
timer.reset() | 42.228947 | 271 | 0.645541 | 2,173 | 16,047 | 4.598251 | 0.202945 | 0.020016 | 0.016813 | 0.003503 | 0.179243 | 0.132706 | 0.110889 | 0.096677 | 0.081465 | 0.076261 | 0 | 0.010103 | 0.228952 | 16,047 | 380 | 272 | 42.228947 | 0.797462 | 0.200287 | 0 | 0.114155 | 0 | 0 | 0.114707 | 0 | 0 | 0 | 0 | 0 | 0.013699 | 1 | 0.03653 | false | 0.009132 | 0.054795 | 0 | 0.127854 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
695bd5a3034969d8612674c32efea1218548dce0 | 4,741 | py | Python | animations/dafP1.py | TristanCacqueray/demo-render | 4c8403e684165e5e75c046ee023c1f794a6650a8 | [
"Apache-2.0"
] | 9 | 2018-02-19T14:17:12.000Z | 2021-03-27T14:46:28.000Z | animations/dafP1.py | TristanCacqueray/demo-render | 4c8403e684165e5e75c046ee023c1f794a6650a8 | [
"Apache-2.0"
] | null | null | null | animations/dafP1.py | TristanCacqueray/demo-render | 4c8403e684165e5e75c046ee023c1f794a6650a8 | [
"Apache-2.0"
] | null | null | null | #!/bin/env python
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import yaml
from utils.animation import Animation, run_main
from utils.audio import SpectroGram, AudioMod
p = """
formula: |
z.imag = fabs(z.imag);
z = cdouble_powr(z, mod);
z = cdouble_add(z, c);
z = cdouble_log(z);
kernel: mean-distance
kernel_params: "double mod"
kernel_params_mod:
- mod
mod: 1
xyinverted: True
gradient: render_data/Solankii Gradients for Gimp/Gradient-#21.ggr
c_imag: -0.13422671142194348
c_real: 0.298544669649099
i_step: 0.012438114469344182
julia: true
map_center_imag: 0.298544669649099
map_center_real: -0.1217885969525993
map_radius: 0.12438114469344182
r_step: 0.012438114469344182
radius: 51.16156978094776
"""
class Demo(Animation):
def __init__(self):
self.scenes = [
[4000, None],
[3500, self.ending],
[3286, self.zoom],
[2526, self.verse4],
[2025, self.verse3],
[1770, self.verse2],
[1520, self.tr1],
[754, self.verse1],
[0, self.intro],
]
        super().__init__(yaml.safe_load(p))  # safe_load avoids PyYAML's unsafe-load warning
def setAudio(self, audio):
self.audio = audio
self.spectre = SpectroGram(audio.audio_frame_size)
self.audio_events = {
"low": AudioMod((0, 12), "max", decay=10),
"mid": AudioMod((152, 483), "max", decay=5),
"hgh": AudioMod((12, 456), "avg"),
}
def ending(self, frame):
self.params["c_imag"] -= 4e-5 * self.low + 1e-4 * self.mid + 1e-5
self.params["grad_freq"] += 2e-1 * self.hgh
def zoom(self, frame):
if self.scene_init:
self.imag_mod = self.logspace(self.params["c_imag"],
0.9187686207968877)
self.rad_mod = self.logspace(self.params["radius"], 0.03)
self.freq_mod = self.logspace(self.params["grad_freq"], 0.20)
self.params["grad_freq"] = self.freq_mod[self.scene_pos]
self.params["radius"] = self.rad_mod[self.scene_pos]
if frame < 3400:
self.params["c_imag"] = self.imag_mod[self.scene_pos]
else:
self.params["c_imag"] -= 4e-5 * self.low + 1e-4 * self.mid
def verse4(self, frame):
if self.scene_init:
self.rad_mod = self.logspace(self.params["radius"], 3606)
self.params["radius"] = self.rad_mod[self.scene_pos]
self.params["c_imag"] += 5e-6 * self.low
self.params["c_real"] -= 5e-6 * self.mid
def verse3(self, frame):
if self.scene_init:
self.rad_mod = self.logspace(self.params["radius"], 556)
self.params["radius"] = self.rad_mod[self.scene_pos]
self.params["c_imag"] += 8e-5 * self.mid
self.params["c_real"] -= 1e-5 * self.low
self.params["grad_freq"] += 1e-2 * self.hgh
def verse2(self, frame):
if self.scene_init:
self.base_real = self.params["c_real"]
self.params["c_imag"] -= 8e-5 * self.low
self.params["grad_freq"] += 1e-2 * self.mid
# self.params["c_real"] += 1e-4 * self.mid
# self.params["c_real"] += 1e-4 * self.mid
def tr1(self, frame):
if self.scene_init:
self.rad_mod = self.logspace(self.params["radius"], 129)
self.params["radius"] = self.rad_mod[self.scene_pos]
self.params["grad_freq"] -= 1e-2 * self.low
self.params["c_imag"] += 1e-4 * self.mid
def verse1(self, frame):
if self.scene_init:
self.rad_mod = self.linspace(self.params["radius"], 0.1)
self.params["radius"] = self.rad_mod[self.scene_pos]
self.params["c_imag"] += 4e-5 * self.low
self.params["c_real"] += 1e-4 * self.mid
self.params["grad_freq"] += 2e-2 * self.hgh
def intro(self, frame):
if self.scene_init:
self.base_real = self.params["c_real"]
self.rad_mod = self.linspace(self.params["radius"], 0.08)
self.params["radius"] = self.rad_mod[self.scene_pos]
self.params["c_imag"] += 4e-5 * self.low
self.params["c_real"] = self.base_real + 2e-4 * self.hgh
self.params["grad_freq"] += 3e-3 * self.mid
if __name__ == "__main__":
run_main(Demo())
| 34.605839 | 75 | 0.607467 | 670 | 4,741 | 4.153731 | 0.274627 | 0.136543 | 0.071146 | 0.060367 | 0.440532 | 0.398491 | 0.385196 | 0.355372 | 0.341718 | 0.306504 | 0 | 0.08228 | 0.248893 | 4,741 | 136 | 76 | 34.860294 | 0.699242 | 0.133938 | 0 | 0.163462 | 0 | 0 | 0.195748 | 0.005132 | 0 | 0 | 0 | 0 | 0 | 1 | 0.096154 | false | 0 | 0.028846 | 0 | 0.134615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
695cb03fad71e52d66b52655147eb22d30993c0d | 366 | py | Python | backup/server/__init__.py | TheSithPadawan/CSE222A-CourseProject | ded7232fa76a8c57e2355a6559573f71e6e63871 | [
"MIT"
] | 1 | 2019-02-17T06:41:59.000Z | 2019-02-17T06:41:59.000Z | backup/server/__init__.py | TheSithPadawan/CSE222A-CourseProject | ded7232fa76a8c57e2355a6559573f71e6e63871 | [
"MIT"
] | 1 | 2019-02-21T05:19:31.000Z | 2019-03-02T06:38:33.000Z | backup/server/__init__.py | TheSithPadawan/CSE222A-CourseProject | ded7232fa76a8c57e2355a6559573f71e6e63871 | [
"MIT"
] | 1 | 2022-02-12T05:18:49.000Z | 2022-02-12T05:18:49.000Z | from flask import Flask
from flask_sqlalchemy import SQLAlchemy
db_url = 'postgresql://postgres:postgres@localhost:5432/postgres'
db = SQLAlchemy()
def create_app():
app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = db_url
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
# initialize database
db.init_app(app)
return app
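# A minimal usage sketch (hypothetical entry point, not part of this package):
#
#     from server import create_app, db
#     app = create_app()
#     with app.app_context():
#         db.create_all()  # create tables for any models registered on db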
| 22.875 | 65 | 0.743169 | 46 | 366 | 5.630435 | 0.5 | 0.069498 | 0.146718 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013029 | 0.161202 | 366 | 15 | 66 | 24.4 | 0.830619 | 0.051913 | 0 | 0 | 0 | 0 | 0.311047 | 0.311047 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.2 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
696b0fe7fb16a5ee74d9c65d62711192a825d49b | 942 | py | Python | loop_fix_sample_rate.py | Crystalwarrior/AO2-Scripts | acbabc64706374362bd93662d9ce1e2cf8eb35fb | [
"MIT"
] | 1 | 2020-11-21T14:27:27.000Z | 2020-11-21T14:27:27.000Z | loop_fix_sample_rate.py | Crystalwarrior/AO2-Scripts | acbabc64706374362bd93662d9ce1e2cf8eb35fb | [
"MIT"
] | null | null | null | loop_fix_sample_rate.py | Crystalwarrior/AO2-Scripts | acbabc64706374362bd93662d9ce1e2cf8eb35fb | [
"MIT"
] | 1 | 2020-08-14T02:44:46.000Z | 2020-08-14T02:44:46.000Z | import os
from os import path
old_sample_rate = float(input("What was the original sample rate? "))
new_sample_rate = float(input("What is the new sample rate? "))
for file in os.listdir(os.getcwd()):
name = file.rsplit(".",1)[0]
if file.rsplit(".",1)[-1] == "opus" and path.exists(name + ".opus.txt"):
print('\n')
print(name)
f = open(name + ".opus.txt", "r")
lines = f.readlines()
f.close()
new_lines = []
for line in lines:
args = line.split('=')
command = args[0].strip()
samples = int(args[1].strip())
new_samples = int(samples * (new_sample_rate / old_sample_rate))
new_line = f'{command}={new_samples}'
print(f'Converting {line.strip()} to {new_line.strip()}')
new_lines.append(new_line)
f = open(name + ".opus.txt", "w")
f.write('\n'.join(new_lines))
f.close() | 36.230769 | 76 | 0.549894 | 129 | 942 | 3.891473 | 0.387597 | 0.119522 | 0.077689 | 0.079681 | 0.159363 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008837 | 0.279193 | 942 | 26 | 77 | 36.230769 | 0.730486 | 0 | 0 | 0.083333 | 0 | 0 | 0.184518 | 0.02439 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.083333 | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
696b8fef41b6b27dc44ddf7473367dc19aba29de | 2,221 | py | Python | reviewboard/extensions/templatetags/rb_extensions.py | Khan/reviewboard | 51ec4261e67b8bf4e2cfa9a0894a97b16509ad33 | [
"MIT"
] | 1 | 2015-09-11T15:50:17.000Z | 2015-09-11T15:50:17.000Z | reviewboard/extensions/templatetags/rb_extensions.py | Khan/reviewboard | 51ec4261e67b8bf4e2cfa9a0894a97b16509ad33 | [
"MIT"
] | null | null | null | reviewboard/extensions/templatetags/rb_extensions.py | Khan/reviewboard | 51ec4261e67b8bf4e2cfa9a0894a97b16509ad33 | [
"MIT"
] | null | null | null | from django import template
from django.conf import settings
from django.template.loader import render_to_string
from djblets.util.decorators import basictag
from reviewboard.extensions.hooks import DiffViewerActionHook, \
NavigationBarHook, \
ReviewRequestActionHook, \
ReviewRequestDropdownActionHook
register = template.Library()
def action_hooks(context, hookcls, action_key="action",
template_name="extensions/action.html"):
"""Displays all registered action hooks from the specified ActionHook."""
s = ""
for hook in hookcls.hooks:
for actions in hook.get_actions(context):
if actions:
new_context = {
action_key: actions
}
context.update(new_context)
s += render_to_string(template_name, new_context)
return s
@register.tag
@basictag(takes_context=True)
def diffviewer_action_hooks(context):
"""Displays all registered action hooks for the diff viewer."""
return action_hooks(context, DiffViewerActionHook)
@register.tag
@basictag(takes_context=True)
def review_request_action_hooks(context):
"""Displays all registered action hooks for review requests."""
return action_hooks(context, ReviewRequestActionHook)
@register.tag
@basictag(takes_context=True)
def review_request_dropdown_action_hooks(context):
"""Displays all registered action hooks for review requests."""
return action_hooks(context,
ReviewRequestDropdownActionHook,
"actions",
"extensions/action_dropdown.html")
@register.tag
@basictag(takes_context=True)
def navigation_bar_hooks(context):
"""Displays all registered navigation bar entries."""
s = ""
for hook in NavigationBarHook.hooks:
for nav_info in hook.get_entries(context):
if nav_info:
context.push()
context['entry'] = nav_info
s += render_to_string("extensions/navbar_entry.html", context)
context.pop()
return s
| 30.847222 | 78 | 0.641603 | 225 | 2,221 | 6.16 | 0.288889 | 0.087302 | 0.090909 | 0.077922 | 0.344877 | 0.29798 | 0.29798 | 0.243146 | 0.243146 | 0.131313 | 0 | 0 | 0.281405 | 2,221 | 71 | 79 | 31.28169 | 0.868421 | 0.130122 | 0 | 0.25 | 0 | 0 | 0.051941 | 0.042497 | 0 | 0 | 0 | 0 | 0 | 1 | 0.104167 | false | 0 | 0.104167 | 0 | 0.3125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
696c68c8ad6875cf4824c538720ffa0e9657eb27 | 8,671 | py | Python | opengnn/models/model.py | CoderPat/OpenGNN | bc54328ad4aa034098073c72153eed361b4266ce | [
"MIT"
] | 32 | 2019-01-28T13:38:21.000Z | 2022-03-29T08:39:00.000Z | opengnn/models/model.py | CoderPat/OpenGNN | bc54328ad4aa034098073c72153eed361b4266ce | [
"MIT"
] | 1 | 2020-01-16T03:09:18.000Z | 2020-01-16T03:44:20.000Z | opengnn/models/model.py | CoderPat/OpenGNN | bc54328ad4aa034098073c72153eed361b4266ce | [
"MIT"
] | 7 | 2019-03-07T14:13:15.000Z | 2022-03-15T10:40:41.000Z | from abc import ABC, abstractmethod
from typing import Dict, Any
import tensorflow as tf
import numpy as np
from opengnn.utils.data import diverse_batch, batch_and_bucket_by_size
from opengnn.utils.data import filter_examples_by_size, truncate_examples_by_size
def optimize(loss: tf.Tensor, params: Dict[str, Any]):
global_step = tf.train.get_or_create_global_step()
optimizer = params.get('optimizer', 'Adam')
if optimizer != 'Adam':
optimizer_class = getattr(tf.train, optimizer, None)
if optimizer_class is None:
raise ValueError("Unsupported optimizer %s" % optimizer)
optimizer_params = params.get("optimizer_params", {})
def optimizer(lr): return optimizer_class(lr, **optimizer_params)
learning_rate = params['learning_rate']
if params.get('decay_rate') is not None:
learning_rate = tf.train.exponential_decay(
learning_rate,
global_step,
decay_steps=params.get('decay_steps', 1),
decay_rate=params['decay_rate'],
staircase=True)
return tf.contrib.layers.optimize_loss(
loss=loss,
global_step=global_step,
learning_rate=learning_rate,
clip_gradients=params['clip_gradients'],
summaries=[
"learning_rate",
"global_gradient_norm",
],
optimizer=optimizer,
name="optimizer")
class Model(ABC):
def __init__(self,
name: str,
features_inputter=None,
labels_inputter=None) -> None:
self.name = name
self.features_inputter = features_inputter
self.labels_inputter = labels_inputter
def model_fn(self):
def _model_fn(features, labels, mode, params, config=None):
if mode == tf.estimator.ModeKeys.TRAIN:
with tf.variable_scope(self.name):
# build models graph
outputs, predictions = self.__call__(
features, labels, mode, params, config)
# compute loss, tb_loss and train_op
loss, tb_loss = self.compute_loss(
features, labels, outputs, params, mode)
train_op = optimize(loss, params)
return tf.estimator.EstimatorSpec(
mode, loss=tb_loss, train_op=train_op)
elif mode == tf.estimator.ModeKeys.EVAL:
with tf.variable_scope(self.name):
# build models graph
outputs, predictions = self.__call__(
features, labels, mode, params, config)
# compute loss, tb_loss and metric ops
loss, tb_loss = self.compute_loss(
features, labels, outputs, params, mode)
metrics = self.compute_metrics(
features, labels, predictions)
                    # TODO: this assumes that the loss across validation can be
                    # calculated as the average over the loss of each minibatch,
                    # which is not always the case (e.g. cross entropy averaged over time and batch),
                    # but if minibatches are correctly shuffled this is a good approximation for now
return tf.estimator.EstimatorSpec(
mode, loss=tb_loss, eval_metric_ops=metrics)
elif mode == tf.estimator.ModeKeys.PREDICT:
with tf.variable_scope(self.name):
# build models graph
_, predictions = self.__call__(
features, labels, mode, params, config)
return tf.estimator.EstimatorSpec(
mode, predictions=predictions)
return _model_fn
def input_fn(self,
mode: tf.estimator.ModeKeys,
batch_size: int,
metadata,
features_file,
labels_file=None,
sample_buffer_size=None,
maximum_features_size=None,
maximum_labels_size=None,
features_bucket_width=None,
labels_bucket_width=None,
num_threads=None):
assert not (mode != tf.estimator.ModeKeys.PREDICT and
labels_file is None)
# the function returned
def _input_fn():
self.initialize(metadata)
feat_dataset, feat_process_fn, feat_batch_fn, features_size_fn =\
self.get_features_builder(features_file, mode)
if labels_file is not None:
labels_dataset, labels_process_fn, \
labels_batch_fn, labels_size_fn = \
self.get_labels_builder(labels_file, mode)
dataset = tf.data.Dataset.zip((feat_dataset, labels_dataset))
def process_fn(features, labels):
return feat_process_fn(features), labels_process_fn(labels, features)
def batch_fn(dataset, batch_size):
return diverse_batch(
dataset, batch_size,
(feat_batch_fn, labels_batch_fn))
example_size_fns = [features_size_fn, labels_size_fn]
bucket_widths = [features_bucket_width, labels_bucket_width]
maximum_example_size = (maximum_features_size, maximum_labels_size)
else:
dataset = feat_dataset
process_fn = feat_process_fn
batch_fn = feat_batch_fn
example_size_fns = features_size_fn
bucket_widths = features_bucket_width
maximum_example_size = maximum_features_size
# shuffle, process batch and allow repetition
# TODO: Fix derived seed (bug in tensorflow)
seed = np.random.randint(np.iinfo(np.int64).max)
if sample_buffer_size is not None:
dataset = dataset.shuffle(
sample_buffer_size,
reshuffle_each_iteration=False,
seed=seed)
dataset = dataset.map(process_fn, num_parallel_calls=num_threads or 4)
dataset = dataset.apply(filter_examples_by_size(
example_size_fns=example_size_fns,
maximum_example_sizes=maximum_example_size))
dataset = dataset.apply(batch_and_bucket_by_size(
batch_size=batch_size,
batch_fn=batch_fn,
bucket_widths=bucket_widths,
example_size_fns=example_size_fns))
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.repeat()
return dataset.prefetch(None)
return _input_fn
def initialize(self, metadata):
"""
        Runs model-specific initialization (e.g. vocabulary loading).
Args:
metadata: A dictionary containing additional metadata set
by the user.
"""
if self.features_inputter is not None:
self.features_inputter.initialize(metadata)
if self.labels_inputter is not None:
self.labels_inputter.initialize(metadata)
@abstractmethod
def __call__(self, features, labels, mode, params, config=None):
raise NotImplementedError()
@abstractmethod
def compute_loss(self, features, labels, outputs, params, mode):
raise NotImplementedError()
@abstractmethod
def compute_metrics(self, features, labels, predictions):
raise NotImplementedError()
def get_features_builder(self, features_file, mode):
if self.features_inputter is None:
raise NotImplementedError()
dataset = self.features_inputter.make_dataset(features_file, mode)
process_fn = self.features_inputter.process
batch_fn = self.features_inputter.batch
size_fn = self.features_inputter.get_example_size
return dataset, process_fn, batch_fn, size_fn
def get_labels_builder(self, labels_file, mode):
if self.labels_inputter is None:
raise NotImplementedError()
dataset = self.labels_inputter.make_dataset(labels_file, mode)
process_fn = self.labels_inputter.process
batch_fn = self.labels_inputter.batch
size_fn = self.labels_inputter.get_example_size
return dataset, process_fn, batch_fn, size_fn
| 39.958525 | 93 | 0.587937 | 913 | 8,671 | 5.297919 | 0.20701 | 0.018813 | 0.033078 | 0.02853 | 0.357039 | 0.237544 | 0.199504 | 0.167459 | 0.104817 | 0.095927 | 0 | 0.000705 | 0.345404 | 8,671 | 216 | 94 | 40.143519 | 0.85148 | 0.075539 | 0 | 0.170886 | 0 | 0 | 0.020282 | 0 | 0 | 0 | 0 | 0.00463 | 0.006329 | 1 | 0.094937 | false | 0 | 0.037975 | 0.018987 | 0.208861 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
696cdcbe00ab21ab20807832ccc7573329262037 | 894 | py | Python | pyopendds/dev/itl2py/Output.py | jwillemsen/pyopendds | fd025416a02433dd42eeaf1ae449b6c1e19d177e | [
"MIT"
] | 19 | 2020-03-10T22:23:00.000Z | 2022-03-30T01:18:56.000Z | pyopendds/dev/itl2py/Output.py | jwillemsen/pyopendds | fd025416a02433dd42eeaf1ae449b6c1e19d177e | [
"MIT"
] | 28 | 2020-02-15T18:07:08.000Z | 2022-03-31T18:38:57.000Z | pyopendds/dev/itl2py/Output.py | jwillemsen/pyopendds | fd025416a02433dd42eeaf1ae449b6c1e19d177e | [
"MIT"
] | 6 | 2021-04-29T07:39:11.000Z | 2022-01-21T13:38:13.000Z | from pathlib import Path
from .ast import NodeVisitor
class Output(NodeVisitor):
def __init__(self, context: dict, path: Path, templates: dict):
self.context = context
self.path = path
self.templates = {}
for filename, template in templates.items():
self.templates[path / filename] = context['jinja'].get_template(template)
def write(self):
if self.context['dry_run']:
print('######################################## Create Dir', self.path)
else:
self.path.mkdir(exist_ok=True)
for path, template in self.templates.items():
content = template.render(self.context)
if self.context['dry_run']:
print('======================================== Write file', path)
print(content)
else:
path.write_text(content)
| 33.111111 | 85 | 0.530201 | 91 | 894 | 5.10989 | 0.406593 | 0.11828 | 0.055914 | 0.068817 | 0.103226 | 0.103226 | 0 | 0 | 0 | 0 | 0 | 0 | 0.282998 | 894 | 26 | 86 | 34.384615 | 0.725429 | 0 | 0 | 0.190476 | 0 | 0 | 0.135347 | 0.089485 | 0 | 0 | 0 | 0 | 0 | 1 | 0.095238 | false | 0 | 0.095238 | 0 | 0.238095 | 0.142857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
696f6073b8eafcb697b67113138ad0cec4dabea2 | 3,423 | py | Python | Lab7/Q3.py | fancent/PHY407 | 38ce8badb9537060becc255ec64e6de2968ca73c | [
"MIT"
] | 1 | 2020-12-20T17:30:06.000Z | 2020-12-20T17:30:06.000Z | Lab7/Q3.py | fancent/PHY407 | 38ce8badb9537060becc255ec64e6de2968ca73c | [
"MIT"
] | null | null | null | Lab7/Q3.py | fancent/PHY407 | 38ce8badb9537060becc255ec64e6de2968ca73c | [
"MIT"
] | 1 | 2021-06-12T14:21:13.000Z | 2021-06-12T14:21:13.000Z | """
This file aims to simulate the Belousov–Zhabotinsky reaction,
is a chemical mixture which, when heated, undergoes a series
of reactions that cause the chemical concentrations in the
mixture to oscillate between two extremes (x, y).
"""
import numpy as np
import matplotlib.pyplot as plt
## Constants
a = 1
b = 3
x0 = 0 # x concentration level (M)
y0 = 0 # y concentration level (M)
targetAcc = 10 ** -10 # target accuracy for BS method
start = 0 # start time (s)
end = 20 # end time (s)
def f(r):
"""
This function calculates the equations for the BZ reaction
"""
x = r[0]
y = r[1]
dxdt = 1 - ((b + 1) * x) + (a * (x ** 2) * y)
dydt = (b * x) - (a * (x ** 2) * y)
return np.array([dxdt, dydt], float)
def midpoint(r, n, H):
"""
This function calculates the modified mid-point method
given in the textbook
"""
r2 = np.copy(r)
h = H / n
r1 = r + 0.5 * h * f(r)
r2 += h * f(r1)
for _ in range(n - 1):
r1 += h * f(r2)
r2 += h * f(r1)
return 0.5 * (r1 + r2 + 0.5 * h * f(r2))
def BZ_reaction():
"""
This function simulates the entire Belousov–Zhabotinsky reaction
from start time to end time with the given constants at the
beginning of the file using the Bulirsch–Stoer method with
recursion instead of a while loop.
"""
r = np.array([x0, y0], float)
tpoints = [start]
xpoints = [r[0]]
ypoints = [r[1]]
def BS(r, t, H):
"""
This function is just a shell for the following recursive
function if n, the number of recursive calls, exceeds 8.
Then we will redo the calculation with a smaller H.
"""
def BS_row(R1, n):
"""
This function calculates the row of extrapolation estimates.
Then it calculates the error and check if it falls under
our desired accuracy. If not, it will recurse on itself
with a larger n. If yes, then it will update the list of
variables.
"""
if n > 8:
r1 = BS(r, t, H / 2)
return BS(r1, t + H / 2, H / 2)
else:
R2 = [midpoint(r, n, H)]
for m in range(1, n):
R2.append(R2[m - 1] + (R2[m - 1] - R1[m - 1]) / ((n / (n - 1)) ** (2 * (m)) - 1))
R2 = np.array(R2, float)
error_vector = (R2[n - 2] - R1[n - 2]) / ((n / (n - 1)) ** (2 * (n - 1)) - 1)
error = np.sqrt(error_vector[0] ** 2 + error_vector[1] ** 2)
target_accuracy = H * targetAcc
if error < target_accuracy:
tpoints.append(t + H)
xpoints.append(R2[n - 1][0])
ypoints.append(R2[n - 1][1])
return R2[n - 1]
else:
return BS_row(R2, n + 1)
return BS_row(np.array([midpoint(r, 1, H)], float), 2)
BS(r, start, end - start)
return tpoints, xpoints, ypoints
#plotting our results
t, x, y = BZ_reaction()
fig, graph = plt.subplots()
graph.plot(t, x, 'r', label="x")
graph.plot(t, y, 'b', label="y")
graph.plot(t, x, 'r.')
graph.plot(t, y, 'b.')
graph.set(xlabel='time (s)', ylabel='concentration level (M)',
title='Belousov–Zhabotinsky concentration level over time')
graph.grid()
graph.legend()
fig.savefig("q3.png")
plt.show() | 31.990654 | 101 | 0.532282 | 512 | 3,423 | 3.544922 | 0.3125 | 0.008815 | 0.008815 | 0.041322 | 0.031956 | 0 | 0 | 0 | 0 | 0 | 0 | 0.038698 | 0.33567 | 3,423 | 107 | 102 | 31.990654 | 0.757696 | 0.334502 | 0 | 0.0625 | 0 | 0 | 0.045564 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.078125 | false | 0 | 0.03125 | 0 | 0.21875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
69708664ff337e44ffae2d3817c35e032d51bec6 | 2,747 | py | Python | tests/solr_tests/tests/management_commands.py | kmgroup/django-haystack-1.2.x-for-d1.8 | b9368f169eec06e78131e81d6968cc6580d6ddee | [
"BSD-3-Clause"
] | 1 | 2016-02-24T19:40:05.000Z | 2016-02-24T19:40:05.000Z | tests/solr_tests/tests/management_commands.py | kmgroup/django-haystack-1.2.x-for-d1.8 | b9368f169eec06e78131e81d6968cc6580d6ddee | [
"BSD-3-Clause"
] | null | null | null | tests/solr_tests/tests/management_commands.py | kmgroup/django-haystack-1.2.x-for-d1.8 | b9368f169eec06e78131e81d6968cc6580d6ddee | [
"BSD-3-Clause"
] | null | null | null | import pysolr
from django.conf import settings
from django.core.management import call_command
from django.test import TestCase
from haystack import indexes
from haystack.sites import SearchSite
from core.models import MockModel
class SolrMockSearchIndex(indexes.SearchIndex):
text = indexes.CharField(document=True, use_template=True)
name = indexes.CharField(model_attr='author', faceted=True)
pub_date = indexes.DateField(model_attr='pub_date')
class ManagementCommandTestCase(TestCase):
fixtures = ['bulk_data.json']
def setUp(self):
super(ManagementCommandTestCase, self).setUp()
self.solr = pysolr.Solr(settings.HAYSTACK_SOLR_URL)
self.site = SearchSite()
self.site.register(MockModel, SolrMockSearchIndex)
# Stow.
import haystack
self.old_site = haystack.site
haystack.site = self.site
def tearDown(self):
import haystack
haystack.site = self.old_site
super(ManagementCommandTestCase, self).tearDown()
def test_basic_commands(self):
call_command('clear_index', interactive=False, verbosity=0)
self.assertEqual(self.solr.search('*:*').hits, 0)
call_command('update_index', verbosity=0)
self.assertEqual(self.solr.search('*:*').hits, 23)
call_command('clear_index', interactive=False, verbosity=0)
self.assertEqual(self.solr.search('*:*').hits, 0)
call_command('rebuild_index', interactive=False, verbosity=0)
self.assertEqual(self.solr.search('*:*').hits, 23)
def test_remove(self):
call_command('clear_index', interactive=False, verbosity=0)
self.assertEqual(self.solr.search('*:*').hits, 0)
call_command('update_index', verbosity=0)
self.assertEqual(self.solr.search('*:*').hits, 23)
# Remove a model instance.
MockModel.objects.get(pk=1).delete()
self.assertEqual(self.solr.search('*:*').hits, 23)
# Plain ``update_index`` doesn't fix it.
call_command('update_index', verbosity=0)
self.assertEqual(self.solr.search('*:*').hits, 23)
# With the remove flag, it's gone.
call_command('update_index', remove=True, verbosity=0)
self.assertEqual(self.solr.search('*:*').hits, 22)
def test_multiprocessing(self):
call_command('clear_index', interactive=False, verbosity=0)
self.assertEqual(self.solr.search('*:*').hits, 0)
# Watch the output, make sure there are multiple pids.
call_command('update_index', verbosity=2, workers=2, batchsize=5)
self.assertEqual(self.solr.search('*:*').hits, 23)
| 36.626667 | 73 | 0.650528 | 316 | 2,747 | 5.541139 | 0.306962 | 0.054826 | 0.11936 | 0.144489 | 0.432895 | 0.415191 | 0.415191 | 0.375214 | 0.350657 | 0.349515 | 0 | 0.014513 | 0.222424 | 2,747 | 74 | 74 | 37.121622 | 0.805243 | 0.056425 | 0 | 0.38 | 0 | 0 | 0.068832 | 0 | 0 | 0 | 0 | 0 | 0.22 | 1 | 0.1 | false | 0 | 0.18 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
6971294bf9fe140162bcbb44e819402a23fefea8 | 15,633 | py | Python | cons3rt/models/user.py | cons3rt/cons3rt-python-sdk | f0bcb295735ac55bbe47448fcbd95d2c7beb3ec0 | [
"RSA-MD"
] | null | null | null | cons3rt/models/user.py | cons3rt/cons3rt-python-sdk | f0bcb295735ac55bbe47448fcbd95d2c7beb3ec0 | [
"RSA-MD"
] | null | null | null | cons3rt/models/user.py | cons3rt/cons3rt-python-sdk | f0bcb295735ac55bbe47448fcbd95d2c7beb3ec0 | [
"RSA-MD"
] | null | null | null | """
Copyright 2020 Jackpine Technologies Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
# coding: utf-8
"""
cons3rt - Copyright Jackpine Technologies Corp.
NOTE: This file is auto-generated. Do not edit the file manually.
"""
import pprint
import re # noqa: F401
import six
from cons3rt.configuration import Configuration
__author__ = 'Jackpine Technologies Corporation'
__copyright__ = 'Copyright 2020, Jackpine Technologies Corporation'
__license__ = 'Apache 2.0',
__version__ = '1.0.0'
__maintainer__ = 'API Support'
__email__ = 'support@cons3rt.com'
class User(object):
"""NOTE: This class is auto-generated. Do not edit the class manually.
"""
"""
Attributes:
openapi_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
openapi_types = {
'created_at': 'int',
'updated_at': 'int',
'administered_clouds': 'list[Cloud]',
'administered_virt_realms': 'list[VirtualizationRealm]',
'certificates': 'list[Certificate]',
'comment': 'str',
'default_project': 'Project',
'email': 'str',
'firstname': 'str',
'id': 'int',
'lastname': 'str',
'log_entries': 'list[LogEntry]',
'organization': 'str',
'project_count': 'int',
'state': 'str',
'terms_of_service_accepted': 'bool',
'username': 'str'
}
attribute_map = {
'created_at': 'createdAt',
'updated_at': 'updatedAt',
'administered_clouds': 'administeredClouds',
'administered_virt_realms': 'administeredVirtRealms',
'certificates': 'certificates',
'comment': 'comment',
'default_project': 'defaultProject',
'email': 'email',
'firstname': 'firstname',
'id': 'id',
'lastname': 'lastname',
'log_entries': 'logEntries',
'organization': 'organization',
'project_count': 'projectCount',
'state': 'state',
'terms_of_service_accepted': 'termsOfServiceAccepted',
'username': 'username'
}
def __init__(self, created_at=None, updated_at=None, administered_clouds=None, administered_virt_realms=None, certificates=None, comment=None, default_project=None, email=None, firstname=None, id=None, lastname=None, log_entries=None, organization=None, project_count=None, state=None, terms_of_service_accepted=None, username=None, local_vars_configuration=None): # noqa: E501
"""User - a model defined in OpenAPI""" # noqa: E501
if local_vars_configuration is None:
local_vars_configuration = Configuration()
self.local_vars_configuration = local_vars_configuration
self._created_at = None
self._updated_at = None
self._administered_clouds = None
self._administered_virt_realms = None
self._certificates = None
self._comment = None
self._default_project = None
self._email = None
self._firstname = None
self._id = None
self._lastname = None
self._log_entries = None
self._organization = None
self._project_count = None
self._state = None
self._terms_of_service_accepted = None
self._username = None
self.discriminator = None
if created_at is not None:
self.created_at = created_at
if updated_at is not None:
self.updated_at = updated_at
if administered_clouds is not None:
self.administered_clouds = administered_clouds
if administered_virt_realms is not None:
self.administered_virt_realms = administered_virt_realms
if certificates is not None:
self.certificates = certificates
if comment is not None:
self.comment = comment
if default_project is not None:
self.default_project = default_project
if email is not None:
self.email = email
if firstname is not None:
self.firstname = firstname
if id is not None:
self.id = id
if lastname is not None:
self.lastname = lastname
if log_entries is not None:
self.log_entries = log_entries
if organization is not None:
self.organization = organization
if project_count is not None:
self.project_count = project_count
if state is not None:
self.state = state
if terms_of_service_accepted is not None:
self.terms_of_service_accepted = terms_of_service_accepted
if username is not None:
self.username = username
@property
def created_at(self):
"""Gets the created_at of this User. # noqa: E501
:return: The created_at of this User. # noqa: E501
:rtype: int
"""
return self._created_at
@created_at.setter
def created_at(self, created_at):
"""Sets the created_at of this User.
:param created_at: The created_at of this User. # noqa: E501
:type: int
"""
self._created_at = created_at
@property
def updated_at(self):
"""Gets the updated_at of this User. # noqa: E501
:return: The updated_at of this User. # noqa: E501
:rtype: int
"""
return self._updated_at
@updated_at.setter
def updated_at(self, updated_at):
"""Sets the updated_at of this User.
:param updated_at: The updated_at of this User. # noqa: E501
:type: int
"""
self._updated_at = updated_at
@property
def administered_clouds(self):
"""Gets the administered_clouds of this User. # noqa: E501
:return: The administered_clouds of this User. # noqa: E501
:rtype: list[Cloud]
"""
return self._administered_clouds
@administered_clouds.setter
def administered_clouds(self, administered_clouds):
"""Sets the administered_clouds of this User.
:param administered_clouds: The administered_clouds of this User. # noqa: E501
:type: list[Cloud]
"""
self._administered_clouds = administered_clouds
@property
def administered_virt_realms(self):
"""Gets the administered_virt_realms of this User. # noqa: E501
:return: The administered_virt_realms of this User. # noqa: E501
:rtype: list[VirtualizationRealm]
"""
return self._administered_virt_realms
@administered_virt_realms.setter
def administered_virt_realms(self, administered_virt_realms):
"""Sets the administered_virt_realms of this User.
:param administered_virt_realms: The administered_virt_realms of this User. # noqa: E501
:type: list[VirtualizationRealm]
"""
self._administered_virt_realms = administered_virt_realms
@property
def certificates(self):
"""Gets the certificates of this User. # noqa: E501
:return: The certificates of this User. # noqa: E501
:rtype: list[Certificate]
"""
return self._certificates
@certificates.setter
def certificates(self, certificates):
"""Sets the certificates of this User.
:param certificates: The certificates of this User. # noqa: E501
:type: list[Certificate]
"""
self._certificates = certificates
@property
def comment(self):
"""Gets the comment of this User. # noqa: E501
:return: The comment of this User. # noqa: E501
:rtype: str
"""
return self._comment
@comment.setter
def comment(self, comment):
"""Sets the comment of this User.
:param comment: The comment of this User. # noqa: E501
:type: str
"""
self._comment = comment
@property
def default_project(self):
"""Gets the default_project of this User. # noqa: E501
:return: The default_project of this User. # noqa: E501
:rtype: Project
"""
return self._default_project
@default_project.setter
def default_project(self, default_project):
"""Sets the default_project of this User.
:param default_project: The default_project of this User. # noqa: E501
:type: Project
"""
self._default_project = default_project
@property
def email(self):
"""Gets the email of this User. # noqa: E501
:return: The email of this User. # noqa: E501
:rtype: str
"""
return self._email
@email.setter
def email(self, email):
"""Sets the email of this User.
:param email: The email of this User. # noqa: E501
:type: str
"""
self._email = email
@property
def firstname(self):
"""Gets the firstname of this User. # noqa: E501
:return: The firstname of this User. # noqa: E501
:rtype: str
"""
return self._firstname
@firstname.setter
def firstname(self, firstname):
"""Sets the firstname of this User.
:param firstname: The firstname of this User. # noqa: E501
:type: str
"""
self._firstname = firstname
@property
def id(self):
"""Gets the id of this User. # noqa: E501
:return: The id of this User. # noqa: E501
:rtype: int
"""
return self._id
@id.setter
def id(self, id):
"""Sets the id of this User.
:param id: The id of this User. # noqa: E501
:type: int
"""
self._id = id
@property
def lastname(self):
"""Gets the lastname of this User. # noqa: E501
:return: The lastname of this User. # noqa: E501
:rtype: str
"""
return self._lastname
@lastname.setter
def lastname(self, lastname):
"""Sets the lastname of this User.
:param lastname: The lastname of this User. # noqa: E501
:type: str
"""
self._lastname = lastname
@property
def log_entries(self):
"""Gets the log_entries of this User. # noqa: E501
:return: The log_entries of this User. # noqa: E501
:rtype: list[LogEntry]
"""
return self._log_entries
@log_entries.setter
def log_entries(self, log_entries):
"""Sets the log_entries of this User.
:param log_entries: The log_entries of this User. # noqa: E501
:type: list[LogEntry]
"""
self._log_entries = log_entries
@property
def organization(self):
"""Gets the organization of this User. # noqa: E501
:return: The organization of this User. # noqa: E501
:rtype: str
"""
return self._organization
@organization.setter
def organization(self, organization):
"""Sets the organization of this User.
:param organization: The organization of this User. # noqa: E501
:type: str
"""
self._organization = organization
@property
def project_count(self):
"""Gets the project_count of this User. # noqa: E501
:return: The project_count of this User. # noqa: E501
:rtype: int
"""
return self._project_count
@project_count.setter
def project_count(self, project_count):
"""Sets the project_count of this User.
:param project_count: The project_count of this User. # noqa: E501
:type: int
"""
self._project_count = project_count
@property
def state(self):
"""Gets the state of this User. # noqa: E501
:return: The state of this User. # noqa: E501
:rtype: str
"""
return self._state
@state.setter
def state(self, state):
"""Sets the state of this User.
:param state: The state of this User. # noqa: E501
:type: str
"""
allowed_values = ["REQUESTED", "ACTIVE", "INACTIVE"] # noqa: E501
if self.local_vars_configuration.client_side_validation and state not in allowed_values: # noqa: E501
raise ValueError(
"Invalid value for `state` ({0}), must be one of {1}" # noqa: E501
.format(state, allowed_values)
)
self._state = state
@property
def terms_of_service_accepted(self):
"""Gets the terms_of_service_accepted of this User. # noqa: E501
:return: The terms_of_service_accepted of this User. # noqa: E501
:rtype: bool
"""
return self._terms_of_service_accepted
@terms_of_service_accepted.setter
def terms_of_service_accepted(self, terms_of_service_accepted):
"""Sets the terms_of_service_accepted of this User.
:param terms_of_service_accepted: The terms_of_service_accepted of this User. # noqa: E501
:type: bool
"""
self._terms_of_service_accepted = terms_of_service_accepted
@property
def username(self):
"""Gets the username of this User. # noqa: E501
:return: The username of this User. # noqa: E501
:rtype: str
"""
return self._username
@username.setter
def username(self, username):
"""Sets the username of this User.
:param username: The username of this User. # noqa: E501
:type: str
"""
self._username = username
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.openapi_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, User):
return False
return self.to_dict() == other.to_dict()
def __ne__(self, other):
"""Returns true if both objects are not equal"""
if not isinstance(other, User):
return True
return self.to_dict() != other.to_dict()
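# Illustrative usage of this generated model (not part of the original file).
# The generated __init__ conventionally accepts the properties above as
# keyword arguments; the names and values below are made up:
# user = User(username="jdoe", email="jdoe@example.com", state="ACTIVE")
# print(user.to_dict())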
# --- File: Topics/6_Compute/files/lambda_keepsecret.py
# --- Repo: nsvijay04b1/AWS_Certified_Solutions_Architect_Professional (Apache-2.0)
from __future__ import print_function
import json
import boto3
print('Loading function')
s3 = boto3.client('s3')
bucket_of_interest = "secretcatpics"
# For a PutObjectAcl API Event, gets the bucket and key name from the event
# If the object is not private, then it makes the object private by making a
# PutObjectAcl call.
def lambda_handler(event, context):
# Get bucket name from the event
bucket = event['Records'][0]['s3']['bucket']['name']
if (bucket != bucket_of_interest):
print("Doing nothing for bucket = " + bucket)
return
# Get key name from the event
key = event['Records'][0]['s3']['object']['key']
# If object is not private then make it private
if not (is_private(bucket, key)):
print("Object with key=" + key + " in bucket=" + bucket + " is not private!")
make_private(bucket, key)
else:
print("Object with key=" + key + " in bucket=" + bucket + " is already private.")
# Checks an object with given bucket and key is private
def is_private(bucket, key):
# Get the object ACL from S3
acl = s3.get_object_acl(Bucket=bucket, Key=key)
# Private object should have only one grant which is the owner of the object
if (len(acl['Grants']) > 1):
return False
# If the canonical owner and grantee IDs do not match, then conclude that the object
# is not private
owner_id = acl['Owner']['ID']
grantee_id = acl['Grants'][0]['Grantee']['ID']
if (owner_id != grantee_id):
return False
return True
# Makes an object with given bucket and key private by calling the PutObjectAcl API.
def make_private(bucket, key):
s3.put_object_acl(Bucket=bucket, Key=key, ACL="private")
print("Object with key=" + key + " in bucket=" + bucket + " is marked as private.")
# --- File: solutions/dog_muffin.py
# --- Repo: attawesome/Computer-Vision-with-OpenCV (MIT)
import cv2
import numpy as np
import matplotlib.pyplot as plt
image = np.flip(cv2.imread('../img/dog_muffin.jpg'), axis=2)
mask = np.zeros(image.shape[:2], dtype="uint8")
cv2.rectangle(mask, (90, 120), (160, 190), 255, -1)
masked = cv2.bitwise_and(image, image, mask=mask)
plt.figure(figsize=(20, 10))
plt.imshow(np.flip(masked, axis =2))
plt.title('Masked Image'), plt.xticks([]), plt.yticks([])
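# Note: when run as a standalone script (rather than in a notebook), matplotlib
# needs an explicit call to display the figure; uncomment if needed:
# plt.show()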
# --- File: jetavator/schema_registry/VaultObject.py
# --- Repo: jetavator/jetavator (Apache-2.0)
from __future__ import annotations
from typing import Any, Dict, List
from abc import ABC, abstractmethod
from datetime import datetime
from collections import namedtuple
from lazy_property import LazyProperty
from .sqlalchemy_tables import ObjectDefinition
import wysdom
from jetavator.services import ComputeServiceABC
from .ProjectABC import ProjectABC
VaultObjectKey = namedtuple('VaultObjectKey', ['type', 'name'])
HubKeyColumn = namedtuple('HubKeyColumn', ['name', 'source'])
class VaultObject(wysdom.UserObject, wysdom.RegistersSubclasses, ABC):
name: str = wysdom.UserProperty(str)
type: str = wysdom.UserProperty(str)
optional_yaml_properties = []
def __init__(
self,
project: ProjectABC,
sqlalchemy_object: ObjectDefinition
) -> None:
self.project = project
self._sqlalchemy_object = sqlalchemy_object
super().__init__(self.definition)
def __repr__(self) -> str:
class_name = type(self).__name__
return f'{class_name}({self.name})'
@classmethod
def subclass_instance(
cls,
project: ProjectABC,
definition: ObjectDefinition
) -> VaultObject:
return cls.registered_subclass_instance(
definition.type,
project,
definition
)
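# Note (assumption): registered_subclass_instance comes from wysdom's
# RegistersSubclasses mixin and is assumed to dispatch on the 'type' key,
# returning an instance of a concrete subclass -- e.g. a hypothetical
# Satellite subclass registered for definitions with type == 'satellite'.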
@LazyProperty
def key(self) -> VaultObjectKey:
return VaultObjectKey(self.type, self.name)
@property
def definition(self) -> Dict[str, Any]:
return self._sqlalchemy_object.definition
def export_sqlalchemy_object(self) -> ObjectDefinition:
if self._sqlalchemy_object.version != str(self.project.version):
raise ValueError(
"ObjectDefinition version must match project version "
"and cannot be updated."
)
self._sqlalchemy_object.deploy_dt = str(datetime.now())
return self._sqlalchemy_object
@abstractmethod
def validate(self) -> None:
pass
@property
def compute_service(self) -> ComputeServiceABC:
return self.project.compute_service
@property
def full_name(self) -> str:
return f'{self.type}_{self.name}'
@property
def checksum(self) -> str:
return str(self._sqlalchemy_object.checksum)
@property
def dependent_satellites(self) -> List[VaultObject]:
return [
satellite
for satellite in self.project.satellites.values()
if any(
dependency.type == self.type
and dependency.name == self.name
for dependency in satellite.pipeline.dependencies
)
]
| 26.49505 | 72 | 0.65284 | 264 | 2,676 | 6.424242 | 0.318182 | 0.084906 | 0.070755 | 0.028302 | 0.03184 | 0.03184 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2642 | 2,676 | 100 | 73 | 26.76 | 0.861351 | 0 | 0 | 0.093333 | 0 | 0 | 0.062079 | 0.017951 | 0 | 0 | 0 | 0 | 0 | 1 | 0.146667 | false | 0.013333 | 0.133333 | 0.093333 | 0.453333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
15ced209f13f0b2a1de3e15ddd92542a87b54d79 | 29,172 | py | Python | ginga/canvas/types/plots.py | kyraikeda/ginga | e0ce979de4a87e12ba7a90eec0517a0be05d14bc | [
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause"
] | 76 | 2015-01-05T14:46:14.000Z | 2022-03-23T04:10:54.000Z | ginga/canvas/types/plots.py | kyraikeda/ginga | e0ce979de4a87e12ba7a90eec0517a0be05d14bc | [
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause"
] | 858 | 2015-01-17T01:55:12.000Z | 2022-03-08T20:20:31.000Z | ginga/canvas/types/plots.py | kyraikeda/ginga | e0ce979de4a87e12ba7a90eec0517a0be05d14bc | [
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause"
] | 60 | 2015-01-14T21:59:07.000Z | 2022-02-13T03:38:49.000Z | #
# plots.py -- classes for plots added to Ginga canvases.
#
# This is open-source software licensed under a BSD license.
# Please see the file LICENSE.txt for details.
#
import sys
import numpy as np
from ginga.canvas.CanvasObject import (CanvasObjectBase, _color,
register_canvas_types,
colors_plus_none)
from ginga.misc import Bunch
from ginga.canvas.types.layer import CompoundObject
from .basic import Path
from ginga.misc.ParamSet import Param
class XYPlot(CanvasObjectBase):
"""
Plotable object that defines a single path representing an X/Y line plot.
Like a Path, but has some optimization to reduce the actual number of
points in the path, depending on the scale and pan of the viewer.
"""
@classmethod
def get_params_metadata(cls):
return [
Param(name='linewidth', type=int, default=2,
min=0, max=20, widget='spinbutton', incr=1,
description="Width of outline"),
Param(name='linestyle', type=str, default='solid',
valid=['solid', 'dash'],
description="Style of outline (default: solid)"),
Param(name='color',
valid=colors_plus_none, type=_color, default='black',
description="Color of text"),
Param(name='alpha', type=float, default=1.0,
min=0.0, max=1.0, widget='spinfloat', incr=0.05,
description="Opacity of outline"),
]
def __init__(self, name=None, color='black',
linewidth=1, linestyle='solid',
alpha=1.0, x_acc=None, y_acc=None, **kwargs):
super(XYPlot, self).__init__(color=color, linewidth=linewidth,
linestyle=linestyle, alpha=alpha,
**kwargs)
self.name = name
self.kind = 'xyplot'
self.x_func = None
nul_arr = np.array([])
if x_acc is not None:
self.x_func = lambda arr: nul_arr if arr.size == 0 else x_acc(arr)
if y_acc is None:
y_acc = np.mean
self.y_func = lambda arr: nul_arr if arr.size == 0 else y_acc(arr)
self.points = np.copy(nul_arr)
self.limits = np.array([(0.0, 0.0), (0.0, 0.0)])
self.plot_xlim = (None, None)
self.path = Path([], color=color, linewidth=linewidth,
linestyle=linestyle, alpha=alpha, coord='data')
self.path.get_cpoints = self.get_cpoints
def plot_xy(self, xpts, ypts):
"""Convenience function for plotting X and Y points that are in
separate arrays.
"""
self.plot(np.asarray((xpts, ypts)).T)
def plot(self, points, limits=None):
"""Plot `points`, a list, tuple or array of (x, y) points.
Parameters
----------
points : array-like
list, tuple or array of (x, y) points
limits : array-like, optional
array of (xmin, ymin), (xmax, ymax)
Limits will be calculated if not passed in.
"""
self.points = np.asarray(points)
self.plot_xlim = (None, None)
# set or calculate limits
if limits is not None:
# passing limits saves costly min/max calculation
self.limits = np.asarray(limits)
else:
self._calc_limits(self.points)
def _calc_limits(self, points):
"""Internal routine to calculate the limits of `points`.
"""
# TODO: what should limits be if there are no points?
if len(points) == 0:
self.limits = np.array([[0.0, 0.0], [0.0, 0.0]])
else:
x_vals, y_vals = points.T
self.limits = np.array([(x_vals.min(), y_vals.min()),
(x_vals.max(), y_vals.max())])
def calc_points(self, viewer, start_x, stop_x):
"""Called when recalculating our path's points.
"""
# in case X axis is flipped
start_x, stop_x = min(start_x, stop_x), max(start_x, stop_x)
new_xlim = (start_x, stop_x)
if new_xlim == self.plot_xlim:
# X limits are the same, no need to recalculate points
return
self.plot_xlim = new_xlim
points = self.get_data_points(points=self.points)
if len(points) == 0:
self.path.points = points
return
x_data, y_data = points.T
# if we can determine the visible region shown on the plot
# limit the points to those within the region
if np.all(np.isfinite([start_x, stop_x])):
idx = np.logical_and(x_data >= start_x, x_data <= stop_x)
points = points[idx]
if self.x_func is not None:
# now find all points position in canvas X coord
cpoints = self.get_cpoints(viewer, points=points)
cx, cy = cpoints.T
# Reduce each group of Y points that map to a unique X via a
# function that collapses them to a single value. The appropriate
# function depends on the purpose of the plot, but mean() is a
# sensible default.
_, i = np.unique(cx, return_index=True)
gr_pts = np.split(points, i)
x_data = np.array([self.x_func(a.T[0]) for a in gr_pts
if len(a) > 0])
y_data = np.array([self.y_func(a.T[1]) for a in gr_pts
if len(a) > 0])
assert len(x_data) == len(y_data)
points = np.array((x_data, y_data)).T
self.path.points = points
def recalc(self, viewer):
"""Called when recalculating our path's points.
"""
# select only points within range of the current pan/zoom
bbox = viewer.get_pan_rect()
if bbox is None:
self.path.points = []
return
start_x, stop_x = bbox[0][0], bbox[2][0]
self.calc_points(viewer, start_x, stop_x)
def get_cpoints(self, viewer, points=None, no_rotate=False):
"""Mostly internal routine used to calculate the native positions
to draw the plot.
"""
# If points are passed, they are assumed to be in data space
if points is None:
points = self.path.get_points()
return viewer.tform['data_to_plot'].to_(points)
def update_resize(self, viewer, dims):
"""Called when the viewer is resized."""
self.recalc(viewer)
def get_latest(self):
"""Get the latest (last) point on the plot. Returns None if there
are no points.
"""
if len(self.points) == 0:
return None
return self.points[-1]
def get_limits(self, lim_type):
"""Get the limits of the data or the visible part of the plot.
If `lim_type` == 'data' returns the limits of all the data points.
Otherwise returns the limits of the visible plot area. Limits
are returned in the form ((xmin, ymin), (xmax, ymax)), as an array.
"""
if lim_type == 'data':
# data limits
return np.asarray(self.limits)
# plot limits
self.path.crdmap = self.crdmap
if len(self.path.points) > 0:
llur = self.path.get_llur()
llur = [llur[0:2], llur[2:4]]
else:
llur = [(0.0, 0.0), (0.0, 0.0)]
return np.asarray(llur)
def draw(self, viewer):
"""Draw the plot. Normally not called by the user, but by the viewer
as needed.
"""
self.path.crdmap = self.crdmap
self.recalc(viewer)
if len(self.path.points) > 0:
self.path.draw(viewer)
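# A minimal usage sketch (assumes a ginga viewer/canvas is already configured;
# the data values are illustrative):
# xy = XYPlot(name='flux', color='blue')
# xy.plot_xy(np.arange(100), np.random.normal(size=100))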
class Axis(CompoundObject):
"""
Base class for axis plotables.
"""
def __init__(self, title=None, num_labels=4, font='sans',
fontsize=10.0):
super(Axis, self).__init__()
self.aide = None
self.num_labels = num_labels
self.title = title
self.font = font
self.fontsize = fontsize
self.grid_alpha = 1.0
self.format_value = self._format_value
def register_decor(self, aide):
self.aide = aide
def _format_value(self, v):
"""Default formatter for XAxis labels.
"""
return "%.4g" % v
def set_grid_alpha(self, alpha):
"""Set the transparency (alpha) of the XAxis grid lines.
`alpha` should be between 0.0 and 1.0
"""
for i in range(self.num_labels):
grid = self.grid[i]
grid.alpha = alpha
def get_data_xy(self, viewer, pt):
arr_pts = np.asarray(pt)
x, y = viewer.tform['data_to_plot'].from_(arr_pts).T[:2]
flips = viewer.get_transforms()
if flips[2]:
x, y = y, x
return (x, y)
def get_title(self):
titles_d = self.aide.get_axes_titles()
return titles_d[self.kind]
def add_plot(self, viewer, plot_src):
# Axis objects typically do not need to do anything when a
# plot is added--they recalculate labels in update_elements()
pass
def delete_plot(self, viewer, plot_src):
# Axis objects typically do not need to do anything when a
# plot is deleted--they recalculate labels in update_elements()
pass
class XAxis(Axis):
"""
Plotable object that defines X axis labels and grid lines.
"""
def __init__(self, title=None, num_labels=4, font='sans',
fontsize=10.0):
super(XAxis, self).__init__(title=title, num_labels=num_labels,
font=font, fontsize=fontsize)
self.kind = 'axis_x'
self.txt_ht = 0
self.title_wd = 0
self.pad_px = 5
def register_decor(self, aide):
self.aide = aide
# add X grid
self.grid = Bunch.Bunch()
for i in range(self.num_labels):
self.grid[i] = aide.dc.Line(0, 0, 0, 0, color=aide.grid_fg,
linestyle='dash', linewidth=1,
alpha=self.grid_alpha,
coord='window')
self.objects.append(self.grid[i])
self.axis_bg = aide.dc.Rectangle(0, 0, 100, 100, color=aide.norm_bg,
fill=True, fillcolor=aide.axis_bg,
coord='window')
self.objects.append(self.axis_bg)
self.lbls = Bunch.Bunch()
for i in range(self.num_labels):
self.lbls[i] = aide.dc.Text(0, 0, text='', color='black',
font=self.font,
fontsize=self.fontsize,
coord='window')
self.objects.append(self.lbls[i])
self._title = aide.dc.Text(0, 0, text='', color='black',
font=self.font, fontsize=self.fontsize,
alpha=0.0,
coord='window')
self.objects.append(self._title)
def update_elements(self, viewer):
"""This method is called if the plot is set with new points,
or is scaled or panned with existing points.
Update the XAxis labels to reflect the new values and/or pan/scale.
"""
for i in range(self.num_labels):
lbl = self.lbls[i]
# get data coord equivalents
x, y = self.get_data_xy(viewer, (lbl.x, lbl.y))
# format according to user's preference
lbl.text = self.format_value(x)
def update_bbox(self, viewer, dims):
"""This method is called if the viewer's window is resized.
Update all the XAxis elements to reflect the new dimensions.
"""
title = self.get_title()
self._title.text = title if title is not None else '555.55'
self.title_wd, self.txt_ht = viewer.renderer.get_dimensions(self._title)
wd, ht = dims[:2]
y_hi = ht
if title is not None:
# remove Y space for X axis title
y_hi -= self.txt_ht + 4
# remove Y space for X axis labels
y_hi -= self.txt_ht + self.pad_px
self.aide.update_plot_bbox(y_hi=y_hi)
def update_resize(self, viewer, dims, xy_lim):
"""This method is called if the viewer's window is resized.
Update all the XAxis elements to reflect the new dimensions.
"""
x_lo, y_lo, x_hi, y_hi = xy_lim
wd, ht = dims[:2]
# position axis title
title = self.get_title()
cx, cy = wd // 2 - self.title_wd // 2, ht - 4
if title is not None:
self._title.x = cx
self._title.y = cy
self._title.alpha = 1.0
cy = cy - self.txt_ht
else:
self._title.alpha = 0.0
# set X labels/grid as needed
# calculate evenly spaced interval on X axis in window coords
a = (x_hi - x_lo) // (self.num_labels - 1)
cx = x_lo
for i in range(self.num_labels):
lbl = self.lbls[i]
lbl.x, lbl.y = cx, cy
# get data coord equivalents
x, y = self.get_data_xy(viewer, (cx, cy))
# convert to formatted label
lbl.text = self.format_value(x)
grid = self.grid[i]
grid.x1 = grid.x2 = cx
grid.y1, grid.y2 = y_lo, y_hi
cx += a
self.axis_bg.x1, self.axis_bg.x2 = 0, wd
self.axis_bg.y1, self.axis_bg.y2 = y_hi, ht
class YAxis(Axis):
"""
Plotable object that defines Y axis labels and grid lines.
"""
def __init__(self, title=None, num_labels=4, font='sans', fontsize=10.0):
super(YAxis, self).__init__(title=title, num_labels=num_labels,
font=font, fontsize=fontsize)
self.kind = 'axis_y'
self.title_wd = 0
self.txt_wd = 0
self.txt_ht = 0
self.pad_px = 4
def register_decor(self, aide):
self.aide = aide
# add Y grid
self.grid = Bunch.Bunch()
for i in range(self.num_labels):
self.grid[i] = aide.dc.Line(0, 0, 0, 0, color=aide.grid_fg,
linestyle='dash', linewidth=1,
alpha=self.grid_alpha,
coord='window')
self.objects.append(self.grid[i])
# bg for RHS Y axis labels
self.axis_bg = aide.dc.Rectangle(0, 0, 100, 100, color=aide.norm_bg,
fill=True, fillcolor=aide.axis_bg,
coord='window')
self.objects.append(self.axis_bg)
# bg for LHS Y axis title
self.axis_bg2 = aide.dc.Rectangle(0, 0, 100, 100, color=aide.norm_bg,
fill=True, fillcolor=aide.axis_bg,
coord='window')
self.objects.append(self.axis_bg2)
# Y grid (tick) labels
self.lbls = Bunch.Bunch()
for i in range(self.num_labels):
self.lbls[i] = aide.dc.Text(0, 0, text='', color='black',
font=self.font,
fontsize=self.fontsize,
coord='window')
self.objects.append(self.lbls[i])
# Y title
self._title = aide.dc.Text(0, 0, text=self.title, color='black',
font=self.font,
fontsize=self.fontsize,
alpha=0.0,
rot_deg=90.0,
coord='window')
self.objects.append(self._title)
def update_elements(self, viewer):
"""This method is called if the plot is set with new points,
or is scaled or panned with existing points.
Update the YAxis labels to reflect the new values and/or pan/scale.
"""
# set Y labels/grid as needed
for i in range(self.num_labels):
lbl = self.lbls[i]
# get data coord equivalents
x, y = self.get_data_xy(viewer, (lbl.x, lbl.y))
lbl.text = self.format_value(y)
def update_bbox(self, viewer, dims):
"""This method is called if the viewer's window is resized.
Update all the YAxis elements to reflect the new dimensions.
"""
title = self.get_title()
self._title.text = title if title is not None else '555.55'
wd, ht = dims[:2]
self.title_wd, self.txt_ht = viewer.renderer.get_dimensions(self._title)
# TODO: not sure this will give us the maximum length of number
text = self.format_value(sys.float_info.max)
t = self.aide.dc.Text(0, 0, text=text,
fontsize=self.fontsize, font=self.font)
self.txt_wd, _ = viewer.renderer.get_dimensions(t)
if title is not None:
x_lo = self.txt_ht + 2 + self.pad_px
else:
x_lo = 0
x_hi = wd - (self.txt_wd + 4) - self.pad_px
self.aide.update_plot_bbox(x_lo=x_lo, x_hi=x_hi)
def update_resize(self, viewer, dims, xy_lim):
"""This method is called if the viewer's window is resized.
Update all the YAxis elements to reflect the new dimensions.
"""
x_lo, y_lo, x_hi, y_hi = xy_lim
wd, ht = dims[:2]
# position axis title
title = self.get_title()
cx = self.txt_ht + 2
cy = ht // 2 + self.title_wd // 2
if title is not None:
self._title.x = cx
self._title.y = cy
self._title.alpha = 1.0
else:
self._title.alpha = 0.0
cx = x_hi + self.pad_px
cy = y_hi
# set Y labels/grid as needed
a = (y_hi - y_lo) // (self.num_labels - 1)
for i in range(self.num_labels):
lbl = self.lbls[i]
# calculate evenly spaced interval on Y axis in window coords
lbl.x, lbl.y = cx, cy
# get data coord equivalents
x, y = self.get_data_xy(viewer, (cx, cy))
lbl.text = self.format_value(y)
grid = self.grid[i]
grid.x1, grid.x2 = x_lo, x_hi
grid.y1 = grid.y2 = cy
cy -= a
self.axis_bg.x1, self.axis_bg.x2 = x_hi, wd
self.axis_bg.y1, self.axis_bg.y2 = y_lo, y_hi
self.axis_bg2.x1, self.axis_bg2.x2 = 0, x_lo
self.axis_bg2.y1, self.axis_bg2.y2 = y_lo, y_hi
class PlotBG(CompoundObject):
"""
Plotable object that defines the plot background.
Can include a warning line and an alert line. If the last Y value
plotted exceeds the warning line then the background changes color.
For example, you might be plotting detector values and want to set
a warning if a certain threshold is crossed and an alert if the
detector has saturated (alerts are higher than warnings).
"""
def __init__(self, warn_y=None, alert_y=None, linewidth=1):
super(PlotBG, self).__init__()
self.y_lbl_info = [warn_y, alert_y]
self.warn_y = warn_y
self.alert_y = alert_y
self.linewidth = linewidth
# default warning check
self.check_warning = self._check_warning
self.norm_bg = 'white'
self.warn_bg = 'lightyellow'
self.alert_bg = 'mistyrose2'
self.kind = 'plot_bg'
self.pickable = True
self.opaque = True
def register_decor(self, aide):
self.aide = aide
# add a backdrop that we can change color for visual warnings
self.bg = aide.dc.Rectangle(0, 0, 100, 100, color=aide.norm_bg,
fill=True, fillcolor=aide.norm_bg,
fillalpha=1.0,
coord='window')
self.objects.append(self.bg)
# add warning and alert lines
self.ln_warn = aide.dc.Line(0, self.warn_y, 1, self.warn_y,
color='gold3', linewidth=self.linewidth,
alpha=0.0, coord='window')
self.objects.append(self.ln_warn)
self.ln_alert = aide.dc.Line(0, self.alert_y, 1, self.alert_y,
color='red', linewidth=self.linewidth,
alpha=0.0, coord='window')
self.objects.append(self.ln_alert)
def warning(self):
self.bg.fillcolor = self.warn_bg
def alert(self):
self.bg.fillcolor = self.alert_bg
def normal(self):
self.bg.fillcolor = self.norm_bg
def _check_warning(self):
max_y = None
for i, plot_src in enumerate(self.aide.plots.values()):
limits = plot_src.get_limits('data')
y = limits[1][1]
max_y = y if max_y is None else max(max_y, y)
if max_y is not None:
if self.alert_y is not None and max_y > self.alert_y:
self.alert()
elif self.warn_y is not None and max_y > self.warn_y:
self.warning()
else:
self.normal()
def update_elements(self, viewer):
"""This method is called if the plot is set with new points,
or is scaled or panned with existing points.
Update the background color and warning/alert lines to reflect the new
values and/or pan/scale.
"""
y_lo, y_hi = self.aide.bbox.T[1].min(), self.aide.bbox.T[1].max()
# adjust warning/alert lines
if self.warn_y is not None:
x, y = self.get_canvas_xy(viewer, (0, self.warn_y))
if y_lo <= y <= y_hi:
self.ln_warn.alpha = 1.0
else:
# y out of range of plot area, so make it invisible
self.ln_warn.alpha = 0.0
self.ln_warn.y1 = self.ln_warn.y2 = y
if self.alert_y is not None:
x, y = self.get_canvas_xy(viewer, (0, self.alert_y))
if y_lo <= y <= y_hi:
self.ln_alert.alpha = 1.0
else:
# y out of range of plot area, so make it invisible
self.ln_alert.alpha = 0.0
self.ln_alert.y1 = self.ln_alert.y2 = y
self.check_warning()
def update_bbox(self, viewer, dims):
# this object does not adjust the plot bbox at all
pass
def update_resize(self, viewer, dims, xy_lim):
"""This method is called if the viewer's window is resized.
Update all the PlotBG elements to reflect the new dimensions.
"""
# adjust bg to window size, in case it changed
x_lo, y_lo, x_hi, y_hi = xy_lim
wd, ht = dims[:2]
self.bg.x1, self.bg.y1 = x_lo, y_lo
self.bg.x2, self.bg.y2 = x_hi, y_hi
# adjust warning/alert lines
if self.warn_y is not None:
x, y = self.get_canvas_xy(viewer, (0, self.warn_y))
self.ln_warn.x1, self.ln_warn.x2 = x_lo, x_hi
self.ln_warn.y1 = self.ln_warn.y2 = y
if self.alert_y is not None:
x, y = self.get_canvas_xy(viewer, (0, self.alert_y))
self.ln_alert.x1, self.ln_alert.x2 = x_lo, x_hi
self.ln_alert.y1 = self.ln_alert.y2 = y
def add_plot(self, viewer, plot_src):
pass
def delete_plot(self, viewer, plot_src):
pass
def get_canvas_xy(self, viewer, pt):
arr_pts = np.asarray(pt)
return viewer.tform['data_to_plot'].to_(arr_pts).T[:2]
class PlotTitle(CompoundObject):
"""
Plotable object that defines the plot title and keys.
"""
def __init__(self, title='', font='sans', fontsize=12.0):
super(PlotTitle, self).__init__()
self.font = font
self.fontsize = fontsize
self.title = title
self.txt_ht = 0
self.kind = 'plot_title'
self.format_label = self._format_label
self.pad_px = 5
def register_decor(self, aide):
self.aide = aide
self.title_bg = aide.dc.Rectangle(0, 0, 100, 100, color=aide.norm_bg,
fill=True, fillcolor=aide.axis_bg,
coord='window')
self.objects.append(self.title_bg)
self.lbls = dict()
self.lbls[0] = aide.dc.Text(0, 0, text=self.title, color='black',
font=self.font,
fontsize=self.fontsize,
coord='window')
self.objects.append(self.lbls[0])
def _format_label(self, lbl, plot_src):
"""Default formatter for PlotTitle labels.
"""
lbl.text = "{0:}".format(plot_src.name)
def update_elements(self, viewer):
"""This method is called if the plot is set with new points,
or is scaled or panned with existing points.
Update the PlotTitle labels to reflect the new values.
"""
for i, plot_src in enumerate(self.aide.plots.values()):
lbl = self.lbls[plot_src]
self.format_label(lbl, plot_src)
def update_bbox(self, viewer, dims):
"""This method is called if the viewer's window is resized.
Update all the PlotTitle elements to reflect the new dimensions.
"""
wd, ht = dims[:2]
if self.txt_ht == 0:
_, self.txt_ht = viewer.renderer.get_dimensions(self.lbls[0])
y_lo = self.txt_ht + self.pad_px
self.aide.update_plot_bbox(y_lo=y_lo)
def update_resize(self, viewer, dims, xy_lim):
"""This method is called if the viewer's window is resized.
Update all the PlotTitle elements to reflect the new dimensions.
"""
x_lo, y_lo, x_hi, y_hi = xy_lim
wd, ht = dims[:2]
nplots = len(list(self.aide.plots.keys())) + 1
# set title labels as needed
a = wd // (nplots + 1)
cx, cy = 4, self.txt_ht
lbl = self.lbls[0]
lbl.x, lbl.y = cx, cy
for i, plot_src in enumerate(self.aide.plots.values()):
cx += a
lbl = self.lbls[plot_src]
lbl.x, lbl.y = cx, cy
self.format_label(lbl, plot_src)
self.title_bg.x1, self.title_bg.x2 = 0, wd
self.title_bg.y1, self.title_bg.y2 = 0, y_lo
def add_plot(self, viewer, plot_src):
text = plot_src.name
color = plot_src.color
lbl = self.aide.dc.Text(0, 0, text=text, color=color,
font=self.font,
fontsize=self.fontsize,
coord='window')
self.lbls[plot_src] = lbl
self.objects.append(lbl)
lbl.crdmap = self.lbls[0].crdmap
self.format_label(lbl, plot_src)
# reorder and place labels
dims = viewer.get_window_size()
self.update_resize(viewer, dims, self.aide.llur)
def delete_plot(self, viewer, plot_src):
lbl = self.lbls[plot_src]
del self.lbls[plot_src]
self.objects.remove(lbl)
# reorder and place labels
dims = viewer.get_window_size()
self.update_resize(viewer, dims, self.aide.llur)
class CalcPlot(XYPlot):
def __init__(self, name=None, x_fn=None, y_fn=None, color='black',
linewidth=1, linestyle='solid', alpha=1.0, **kwdargs):
super(CalcPlot, self).__init__(name=name,
color=color, linewidth=linewidth,
linestyle=linestyle, alpha=alpha,
**kwdargs)
self.kind = 'calcplot'
if x_fn is None:
x_fn = lambda x: x # noqa
self.x_fn = x_fn
if y_fn is None:
y_fn = lambda y: y # noqa
self.y_fn = y_fn
def plot(self, y_fn, x_fn=None):
if x_fn is not None:
self.x_fn = x_fn
self.y_fn = y_fn
self.plot_xlim = (None, None)
def calc_points(self, viewer, start_x, stop_x):
# in case X axis is flipped
start_x, stop_x = min(start_x, stop_x), max(start_x, stop_x)
new_xlim = (start_x, stop_x)
if new_xlim == self.plot_xlim:
# X limits are the same, no need to recalculate points
return
self.plot_xlim = new_xlim
# use the viewer passed in as a parameter; self.viewer is never assigned
wd, ht = viewer.get_window_size()
x_pts = self.x_fn(np.linspace(start_x, stop_x, wd, dtype=float))  # np.float was removed in NumPy 1.24
y_pts = self.y_fn(x_pts)
points = np.array((x_pts, y_pts)).T
self.path.points = points
def get_limits(self, lim_type):
try:
llur = self.path.get_llur()
limits = [llur[0:2], llur[2:4]]
return np.array(limits)
except Exception:
return np.array(((0.0, 0.0), (0.0, 0.0)))
# register our types
register_canvas_types(dict(xyplot=XYPlot, calcplot=CalcPlot))
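# Illustrative only: a CalcPlot evaluates a function of x instead of storing
# points, so plotting a curve reduces to:
# cp = CalcPlot(name='sine', color='red')
# cp.plot(y_fn=np.sin)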
# --- File: process_disqualified_directors_data.py
# --- Repo: Global-Witness/uk-companies-house-parsers-public (MIT)
import csv
import os
import sys
from collections import defaultdict
PERSONS_OUTPUT_FILENAME_TEMPLATE = "persons_data_%s.csv"
DISQUALIFICATIONS_FILENAME_TEMPLATE = "disqualifications_data_%s.csv"
EXEMPTIONS_FILENAME_TEMPLATE = 'exemptions_data_%s.csv'
SNAPSHOT_HEADER_IDENTIFIER = "DISQUALS"
TRAILER_RECORD_IDENTIFIER = "DISQUALS"
PERSON_RECORD_TYPE = '1'
DISQUALIFICATION_RECORD_TYPE = '2'
EXEMPTION_RECORD_TYPE = '3'
def process_header_row(row):
header_identifier = row[0:8]
print(header_identifier)
run_number = row[8:12]
production_date = row[12:20]
if header_identifier != SNAPSHOT_HEADER_IDENTIFIER:
print(
"Unsuported file type from header: '%s'. Expecting a snapshot header: '%s'"
% (header_identifier, SNAPSHOT_HEADER_IDENTIFIER))
sys.exit(1)
print("Processing snapshot file with run number %s from date %s" %
(run_number, production_date))
def process_person_row(row, output_writer):
record_type = row[0]
person_number = str(row[1:13])
person_dob = row[13:21]
person_postcode = row[21:29]
person_variable_ind = int(row[29:33])
person_details = row[33:33 + person_variable_ind]
person_details = person_details.split('<')
title = person_details[0]
forenames = person_details[1]
surname = person_details[2]
honours = person_details[3]
address_line_1 = person_details[4]
address_line_2 = person_details[5]
posttown = person_details[6]
county = person_details[7]
country = person_details[8]
nationality = person_details[9]
corporate_number = person_details[10]
country_registration = person_details[11]
output_writer.writerow([
record_type, person_number, person_dob, person_postcode,
person_details, title, forenames, surname, honours, address_line_1,
address_line_2, posttown, county, country, nationality,
corporate_number, country_registration
])
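# Layout of the fixed-width person record, restated from the slicing above
# (offsets come directly from the code; no real data shown):
#   row[0]      record type ('1')
#   row[1:13]   person number
#   row[13:21]  date of birth (YYYYMMDD)
#   row[21:29]  postcode
#   row[29:33]  length of the '<'-delimited variable section
#   row[33:...] title<forenames<surname<honours<address_1<address_2<posttown
#               <county<country<nationality<corporate_number<country_registration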
def process_disqualification_row(row, output_writer):
record_type = row[0]
person_number = str(row[1:13])
disqual_start_date = row[13:21]
disqual_end_date = row[21:29]
section_of_act = row[29:49]
disqual_type = row[49:79]
disqual_order_date = row[79:87]
case_number = row[87:117]
company_name = row[117:277]
court_name_variable_ind = int(row[277:281])
court_name = row[281:281 + court_name_variable_ind]
output_writer.writerow([
record_type, person_number, disqual_start_date, disqual_end_date,
section_of_act, disqual_type, disqual_order_date, case_number,
company_name, court_name
])
def process_exemption_row(row, output_writer):
record_type = row[0]
person_number = str(row[1:9])
exemption_start_date = row[13:21]
exemption_end_date = row[21:29]
exemption_purpose = int(row[29:39])
exemption_purpose_dict = defaultdict(
lambda: '', {
1: 'Promotion',
2: 'Formation',
3:
'Directorships or other participation in management of a company',
4:
'Designated member/member or other participation in management of an LLP',
5: 'Receivership in relation to a company or LLP'
})
exemption_purpose = exemption_purpose_dict[exemption_purpose]
exemption_company_name_ind = int(row[39:43])
exemption_company_name = row[43:43 + exemption_company_name_ind]
output_writer.writerow([
record_type, person_number, exemption_start_date, exemption_end_date,
exemption_purpose, exemption_company_name
])
def init_person_output_file(filename):
output_persons_file = open(filename, 'w')
persons_writer = csv.writer(output_persons_file, delimiter=",")
persons_writer.writerow([
"record_type", "person_number", "person_dob", "person_postcode",
"person_details", 'title', 'forenames', 'surname', 'honours',
'address_line_1', 'address_line_2', 'posttown', 'county', 'country',
'nationality', 'corporate_number', 'country_registration'
])
return output_persons_file, persons_writer
def init_disquals_output_file(filename):
output_disquals_file = open(filename, 'w')
disqauls_writer = csv.writer(output_disquals_file, delimiter=",")
disqauls_writer.writerow([
"record_type", "person_number", "disqual_start_date",
"disqual_end_date", "section_of_act", "disqual_type",
"disqual_order_date", "case_number", "company_name", "court_name"
])
return output_disquals_file, disqauls_writer
def init_exemptions_output_file(filename):
output_exemptions_file = open(filename, 'w')
exemptions_writer = csv.writer(output_exemptions_file, delimiter=",")
exemptions_writer.writerow([
"record_type", "person_number", "exemption_start_date",
"exemption_end_date", "exemption_purpose", "exemption_company_name"
])
return output_exemptions_file, exemptions_writer
def init_input_files(output_folder, base_input_name):
persons_output_filename = os.path.join(
output_folder, PERSONS_OUTPUT_FILENAME_TEMPLATE % (base_input_name))
disquals_output_filename = os.path.join(
output_folder, DISQUALIFICATIONS_FILENAME_TEMPLATE % (base_input_name))
exemptions_output_filename = os.path.join(
output_folder, EXEMPTIONS_FILENAME_TEMPLATE % (base_input_name))
print("Saving companies data to %s" % persons_output_filename)
print("Saving persons data to %s" % disquals_output_filename)
print("Saving persons data to %s" % exemptions_output_filename)
output_persons_file, output_persons_writer = init_person_output_file(
persons_output_filename)
output_disquals_file, output_disquals_writer = init_disquals_output_file(
disquals_output_filename)
output_exemptions_file, output_exemptions_writer = init_exemptions_output_file(
exemptions_output_filename)
return output_persons_file, output_persons_writer, output_disquals_file, output_disquals_writer, output_exemptions_file, output_exemptions_writer
def process_company_appointments_data(input_file, output_folder,
base_input_name):
persons_processed = 0
disquals_processed = 0
exemptions_processed = 0
output_persons_file, output_persons_writer, output_disquals_file, output_disquals_writer, output_exemptions_file, output_exemptions_writer = init_input_files(
output_folder, base_input_name)
for row_num, row in enumerate(input_file):
if row_num == 0:
process_header_row(row)
elif row[0:8] == TRAILER_RECORD_IDENTIFIER:
# End of file
record_count = int(row[45:53])
print(
"Reached end of file. Processed %s == %s records: %s persons, %s disquals, %s exemptions."
% (record_count, persons_processed + disquals_processed +
exemptions_processed, persons_processed, disquals_processed,
exemptions_processed))
output_persons_file.close()
output_disquals_file.close()
output_exemptions_file.close()
sys.exit(0)
elif row[0] == PERSON_RECORD_TYPE:
process_person_row(row, output_persons_writer)
persons_processed += 1
elif row[0] == DISQUALIFICATION_RECORD_TYPE:
process_disqualification_row(row, output_disquals_writer)
disquals_processed += 1
elif row[0] == EXEMPTION_RECORD_TYPE:
process_exemption_row(row, output_exemptions_writer)
exemptions_processed += 1
if __name__ == '__main__':
if len(sys.argv) < 3:
print(
'Usage: python process_disqualified_directors_data.py input_file output_folder\n',
'E.g. python process_disqualified_directors_data.py Prod195_1111_ni_sample.dat ./output/'
)
sys.exit(1)
input_filename = sys.argv[1]
output_folder = sys.argv[2]
input_file = open(input_filename, 'r')
base_input_name = os.path.basename(input_filename)
# Do not include the extension in the base input name
base_input_name = os.path.splitext(base_input_name)[0]
process_company_appointments_data(input_file, output_folder,
base_input_name)

# --- File: myPackage/tools.py
# --- Repo: marcgonzmont/ML_Challenge (CC0-1.0)
import errno  # note: 'from os import errno' breaks on Python 3.7+; errno is its own stdlib module
from os import makedirs
from os.path import exists, join
import numpy as np
from matplotlib import pyplot as plt
import itertools
from sklearn import preprocessing
def makeDir(path):
'''
Create the output path if it doesn't exist.
see: https://stackoverflow.com/questions/273192/how-can-i-create-a-directory-if-it-does-not-exist
:param path: path to be created
:return: none
'''
try:
if not exists(path):
makedirs(path)
print("\nCreated '{}' folder\n".format(path))
except OSError as e:
if e.errno != errno.EEXIST:
raise
def split_train_test(data, test_ratio):
shuffled_indices = np.random.permutation(len(data))
test_set_size = int(len(data) * test_ratio)
test_indices = shuffled_indices[:test_set_size]
train_indices = shuffled_indices[test_set_size:]
train_set = [data[i] for i in train_indices]
test_set = [data[i] for i in test_indices]
return np.asarray(train_set), np.asarray(test_set)
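# Illustrative use: an 80/20 split of 100 samples.
# train, test = split_train_test(list(range(100)), test_ratio=0.2)
# assert len(train) == 80 and len(test) == 20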
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Greens):
'''
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
:param cm: confusion matrix
:param classes: array of classes' names
:param normalize: boolean
:param title: plot title
:param cmap: colour of matrix background
:return: plot confusion matrix
'''
# plt_name = altsep.join((plot_path,"".join((title,".png"))))
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
print('\nSum of main diagonal')
print(np.trace(cm))
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label', labelpad=0)
# plt.savefig(plt_name)
plt.show()
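# Illustrative use with scikit-learn (labels are made up):
# from sklearn.metrics import confusion_matrix
# cm = confusion_matrix(y_true, y_pred)
# plot_confusion_matrix(cm, classes=['negative', 'positive'], normalize=True)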
def normalize(data):
'''
Normalize input data [0, 1]
:param data: input data
:return: normalized data
'''
scaler = preprocessing.MinMaxScaler()
data_min_max = scaler.fit_transform(data)
return data_min_max
# --- File: maskrcnn_benchmark/data/datasets/kitti.py
# --- Repo: pwllr/IDA-3D (MIT)
import os
import torch
import torch.utils.data
from PIL import Image
import sys
import numpy as np
if sys.version_info[0] == 2:
import xml.etree.cElementTree as ET
else:
import xml.etree.ElementTree as ET
from maskrcnn_benchmark.structures.bounding_box import BoxList
from maskrcnn_benchmark.structures.bounding_box import ObjectList
def read_calib(calib_file_path):
data = {}
with open(calib_file_path, 'r') as f:
for line in f.readlines():
line = line.rstrip()
if len(line)==0: continue
key, value = line.split(':', 1)
data[key] = np.array([float(x) for x in value.split()])
return data
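# Example of the calib line format read_calib() parses, "KEY: v1 v2 ... vN"
# (values below are illustrative; P2/P3 are flattened 3x4 projection matrices):
# P2: 721.5377 0.0 609.5593 44.85728 0.0 721.5377 172.854 0.2163791 0.0 0.0 1.0 0.002745884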
class KittiDataset(torch.utils.data.Dataset):
CLASSES = (
"__background__ ",
"car",
)
def __init__(self, data_dir, split, use_difficult=False, transforms=None):
self.root = data_dir
self.image_set = split
self.keep_difficult = use_difficult
self.transforms = transforms
self._annopath = os.path.join(self.root, "label_3d", "%s.xml")
self._image_left_path = os.path.join(self.root, "image_2", "%s.png")
self._image_right_path = os.path.join(self.root, "image_3", "%s.png")
self._calib_path = os.path.join(self.root, "calib", "%s.txt")
self._imgsetpath = os.path.join(self.root, "splits", "%s.txt")
with open(self._imgsetpath % self.image_set) as f:
self.ids = f.readlines()
self.ids = [x.strip("\n") for x in self.ids]
self.id_to_img_map = {k: v for k, v in enumerate(self.ids)}
cls = KittiDataset.CLASSES
self.class_to_ind = dict(zip(cls, range(len(cls))))
self.categories = dict(zip(range(len(cls)), cls))
def __getitem__(self, index):
img_id = self.ids[index]
img_left = Image.open(self._image_left_path % img_id).convert("RGB")
img_right = Image.open(self._image_right_path % img_id).convert("RGB")
target = self.get_groundtruth(index)
target_object = self.get_groundtruth(index)
target_left = target_object.get_field("left_box")
target_right = target_object.get_field("right_box")
target_left = target_left.clip_to_image(remove_empty=True)
target_right = target_right.clip_to_image(remove_empty=True)
if self.transforms is not None:
img_left, target_left = self.transforms(img_left, target_left)
img_right, target_right = self.transforms(img_right, target_right)
target_object.add_field("left_box", target_left)
target_object.add_field("right_box", target_right)
calib = self.preprocess_calib(index)
return img_left, img_right, target, calib, index
def __len__(self):
return len(self.ids)
def get_groundtruth(self, index):
img_id = self.ids[index]
anno = ET.parse(self._annopath % img_id).getroot()
anno = self._preprocess_annotation(anno)
height, width = anno["im_info"]
left_target = BoxList(anno["left_boxes"], (width, height), mode="xyxy")
left_target.add_field("labels", anno["labels"])
left_target.add_field("difficult", anno["difficult"])
right_target = BoxList(anno["right_boxes"], (width, height), mode="xyxy")
right_target.add_field("labels", anno["labels"])
right_target.add_field("difficult", anno["difficult"])
object_target = ObjectList()
object_target.add_field("left_box", left_target)
object_target.add_field("right_box", right_target)
object_target.add_field("labels", anno["labels"])
object_target.add_field("left_centers", anno["left_centers"])
object_target.add_field("right_centers", anno["right_centers"])
object_target.add_field("positions_xy", anno["positions_xy"])
object_target.add_field("positions_z", anno["positions_z"])
object_target.add_field("dimensions", anno["dimensions"])
object_target.add_field("alpha", anno["alpha"])
object_target.add_field("beta", anno["beta"])
object_target.add_field("corners", anno["corners"])
assert object_target.is_equal()
return object_target
def preprocess_calib(self, index):
img_id = self.ids[index]
calib_path = self._calib_path % img_id
calib = read_calib(calib_path)
P2 = np.reshape(calib['P2'], [3,4])
P3 = np.reshape(calib['P3'], [3,4])
c_u = P2[0,2]
c_v = P2[1,2]
f_u = P2[0,0]
f_v = P2[1,1]
b_x_2 = P2[0,3]/(f_u) # relative
b_y_2 = P2[1,3]/(f_v)
b_x_3 = P3[0,3]/(f_u) # relative
b_y_3 = P3[1,3]/(f_v)
b = abs(b_x_3 - b_x_2)
return {
"cu": c_u, "cv": c_v,
"fu": f_u, "fv": f_v,
"b": b,
"bx2":b_x_2,
}
def _preprocess_annotation(self, target):
left_boxes = []
right_boxes = []
gt_classes = []
difficult_boxes = []
TO_REMOVE = 0
#3d parameters
left_centers = []
right_centers = []
dimensions = []
positions_xy = []
positions_z = []
rotations = []
alphas = []
pconers = []
#occluded = []
#truncted = []
for obj in target.iter("object"):
difficult = int(obj.find("difficult").text) == 1
if not self.keep_difficult and difficult:
continue
name = obj.find("name").text.lower().strip()
left_bb = obj.find("left_bndbox")
left_box = [
left_bb.find("xmin").text,
left_bb.find("ymin").text,
left_bb.find("xmax").text,
left_bb.find("ymax").text,
]
left_bndbox = tuple(
map(lambda x: x - TO_REMOVE, list(map(float, left_box)))
)
left_boxes.append(left_bndbox)
left_center = [
left_bb.find("center").find("x").text,
left_bb.find("center").find("y").text,
]
left_center = list(map(float, left_center))
left_centers.append(left_center)
right_bb = obj.find("right_bndbox")
right_box = [
right_bb.find("xmin").text,
right_bb.find("ymin").text,
right_bb.find("xmax").text,
right_bb.find("ymax").text,
]
right_bndbox = tuple(
map(lambda x: x - TO_REMOVE, list(map(float, right_box)))
)
right_boxes.append(right_bndbox)
right_center = [
right_bb.find("center").find("x").text,
right_bb.find("center").find("y").text,
]
right_center = list(map(float, right_center))
right_centers.append(right_center)
gt_classes.append(self.class_to_ind[name])
difficult_boxes.append(difficult)
position_xy = [
obj.find("position").find("x").text,
obj.find("position").find("y").text,
]
position_xy = list(map(float, position_xy))
positions_xy.append(position_xy)
position_z = [
obj.find("position").find("z").find("depth").text,
obj.find("position").find("z").find("disp").text,
]
position_z = list(map(float, position_z))
positions_z.append(position_z)
dimension = [
obj.find("dimensions").find("h").text,
obj.find("dimensions").find("w").text,
obj.find("dimensions").find("l").text,
]
dimension = list(map(float, dimension))
dimensions.append(dimension)
alp = float(obj.find("alpha").text)
alphas.append(alp)
rot = float(obj.find("rotation").text)
rotations.append(rot)
pc = []
corners = obj.find("corners")
for i in range(8):
pc_str = corners.find("pc%d"%i).text
pc_i = [float(pc_s) for pc_s in pc_str.split(',')]
pc.append(pc_i)
pconers.append(pc)
size = target.find("size")
im_info = tuple(map(int, (size.find("height").text, size.find("width").text)))
res = {
"left_boxes": torch.tensor(left_boxes, dtype=torch.float32).view(-1,4),
"right_boxes": torch.tensor(right_boxes, dtype=torch.float32).view(-1,4),
"labels": torch.tensor(gt_classes),
"difficult": torch.tensor(difficult_boxes),
"left_centers": torch.tensor(left_centers, dtype=torch.float32).view(-1,2),
"right_centers": torch.tensor(right_centers, dtype=torch.float32).view(-1,2),
"positions_xy": torch.tensor(positions_xy, dtype=torch.float32).view(-1,2),
"positions_z": torch.tensor(positions_z, dtype=torch.float32).view(-1,2),
"dimensions": torch.tensor(dimensions, dtype=torch.float32).view(-1,3),
"alpha": torch.tensor(alphas, dtype=torch.float32),
"beta": torch.tensor(rotations, dtype=torch.float32),
"corners": torch.tensor(pconers, dtype=torch.float32).view(-1,8,7),
"im_info": im_info,
}
return res
def get_img_info(self, index):
img_id = self.ids[index]
anno = ET.parse(self._annopath % img_id).getroot()
size = anno.find("size")
im_info = tuple(map(int, (size.find("height").text, size.find("width").text)))
return {"height": im_info[0], "width": im_info[1]}
def map_class_id_to_class_name(self, class_id):
return KittiDataset.CLASSES[class_id]
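# Illustrative construction (paths assumed; split lists are expected under
# <data_dir>/splits as %s.txt):
# ds = KittiDataset("/data/kitti/training", split="train")
# img_left, img_right, target, calib, idx = ds[0]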
# --- File: escriptcore/py_src/faultsystems.py
# --- Repo: markendr/esys-escript.github.io (Apache-2.0)
##############################################################################
#
# Copyright (c) 2003-2020 by The University of Queensland
# http://www.uq.edu.au
#
# Primary Business: Queensland, Australia
# Licensed under the Apache License, version 2.0
# http://www.apache.org/licenses/LICENSE-2.0
#
# Development until 2012 by Earth Systems Science Computational Center (ESSCC)
# Development 2012-2013 by School of Earth Sciences
# Development from 2014 by Centre for Geoscience Computing (GeoComp)
# Development from 2019 by School of Earth and Environmental Sciences
#
##############################################################################
from __future__ import print_function, division
__copyright__="""Copyright (c) 2003-2020 by The University of Queensland
http://www.uq.edu.au
Primary Business: Queensland, Australia"""
__license__="""Licensed under the Apache License, version 2.0
http://www.apache.org/licenses/LICENSE-2.0"""
__url__="https://launchpad.net/escript-finley"
#from esys.escript import sqrt, EPSILON, cos, sin, Lsup, atan, length, matrixmult, wherePositive, matrix_mult, inner, Scalar, whereNonNegative, whereNonPositive, maximum, minimum, sign, whereNegative, whereZero
import esys.escriptcore.pdetools as pdt
#from .util import *
from . import util as es
import numpy
import math
__all__= ['FaultSystem']
class FaultSystem(object):
"""
The FaultSystem class defines a system of faults in the Earth's crust.
A fault system is defined by a set of faults indexed by a tag. Each fault is defined by a starting point V0 and a list of
strikes ``strikes`` and lengths ``ls``. The strikes and the lengths are used to define a polyline with points ``V[i]`` such that
- ``V[0]=V0``
- ``V[i]=V[i]+ls[i]*array(cos(strikes[i]),sin(strikes[i]),0)``
So ``strikes`` defines the angle between the direction of the fault segment and the x0 axis. ls[i]==0 is allowed.
In case of a 3D model a fault plane is defined through a dip and depth.
The class provides a mechanism to parametrise each fault with the domain [0,w0_max] x [0, w1_max] (to [0,w0_max] in the 2D case).
"""
NOTAG="__NOTAG__"
MIN_DEPTH_ANGLE=0.1
def __init__(self,dim=3):
"""
Sets up the fault system
:param dim: spatial dimension
:type dim: ``int`` of value 2 or 3
"""
if not (dim == 2 or dim == 3):
raise ValueError("only dimension2 2 and 3 are supported.")
self.__dim=dim
self.__top={}
self.__ls={}
self.__strikes={}
self.__strike_vectors={}
self.__medDepth={}
self.__total_length={}
if dim ==2:
self.__depths=None
self.__depth_vectors=None
self.__dips=None
self.__bottom=None
self.__normals=None
else:
self.__depths={}
self.__depth_vectors={}
self.__dips={}
self.__bottom={}
self.__normals={}
self.__offsets={}
self.__w1_max={}
self.__w0_max={}
self.__center=None
self.__orientation = None
def getStart(self,tag=None):
"""
returns the starting point of fault ``tag``
:rtype: ``numpy.array``.
"""
return self.getTopPolyline(tag)[0]
def getTags(self):
"""
returns a list of the tags used by the fault system
:rtype: ``list``
"""
return list(self.__top.keys())
def getDim(self):
"""
returns the spatial dimension
:rtype: ``int``
"""
return self.__dim
def getTopPolyline(self, tag=None):
"""
returns the polyline used to describe fault tagged by ``tag``
:param tag: the tag of the fault
:type tag: ``float`` or ``str``
:return: the list of vertices defining the top of the fault. The coordinates are ``numpy.array``.
"""
if tag is None: tag=self.NOTAG
return self.__top[tag]
def getStrikes(self, tag=None):
"""
:return: the strikes of the segments in fault ``tag``
:rtype: ``list`` of ``float``
"""
if tag is None: tag=self.NOTAG
return self.__strikes[tag]
def getStrikeVectors(self, tag=None):
"""
:return: the strike vectors of fault ``tag``
:rtype: ``list`` of ``numpy.array``.
"""
if tag is None: tag=self.NOTAG
return self.__strike_vectors[tag]
def getLengths(self, tag=None):
"""
:return: the lengths of segments in fault ``tag``
:rtype: ``list`` of ``float``
"""
if tag is None: tag=self.NOTAG
return self.__ls[tag]
def getTotalLength(self, tag=None):
"""
:return: the total unrolled length of fault ``tag``
:rtype: ``float``
"""
if tag is None: tag=self.NOTAG
return self.__total_length[tag]
def getMediumDepth(self,tag=None):
"""
returns the medium depth of fault ``tag``
:rtype: ``float``
"""
if tag is None: tag=self.NOTAG
return self.__medDepth[tag]
def getDips(self, tag=None):
"""
returns the list of the dips of the segments in fault ``tag``
:param tag: the tag of the fault
:type tag: ``float`` or ``str``
:return: the list of segment dips. In the 2D case None is returned.
"""
if tag is None: tag=self.NOTAG
if self.getDim()==3:
return self.__dips[tag]
else:
return None
def getBottomPolyline(self, tag=None):
"""
returns the list of the vertices defining the bottom of the fault ``tag``
:param tag: the tag of the fault
:type tag: ``float`` or ``str``
:return: the list of vertices. In the 2D case None is returned.
"""
if tag is None: tag=self.NOTAG
if self.getDim()==3:
return self.__bottom[tag]
else:
return None
def getSegmentNormals(self, tag=None):
"""
returns the list of the normals of the segments in fault ``tag``
:param tag: the tag of the fault
:type tag: ``float`` or ``str``
:return: the list of vectors normal to the segments. In the 2D case None is returned.
"""
if tag is None: tag=self.NOTAG
if self.getDim()==3:
return self.__normals[tag]
else:
return None
def getDepthVectors(self, tag=None):
"""
returns the list of the depth vectors at the top vertices of fault ``tag``.
:param tag: the tag of the fault
:type tag: ``float`` or ``str``
:return: the list of depth vectors. In the 2D case None is returned.
"""
if tag is None: tag=self.NOTAG
if self.getDim()==3:
return self.__depth_vectors[tag]
else:
return None
def getDepths(self, tag=None):
"""
returns the list of the depths of the segments in fault ``tag``.
:param tag: the tag of the fault
:type tag: ``float`` or ``str``
:return: the list of segment depths. In the 2D case None is returned.
"""
if tag is None: tag=self.NOTAG
if self.getDim()==3:
return self.__depths[tag]
else:
return None
def getW0Range(self,tag=None):
"""
returns the range of the parameterization in ``w0``
:rtype: two ``float``
"""
return self.getW0Offsets(tag)[0], self.getW0Offsets(tag)[-1]
def getW1Range(self,tag=None):
"""
returns the range of the parameterization in ``w1``
:rtype: two ``float``
"""
if tag is None: tag=self.NOTAG
return -self.__w1_max[tag],0
def getW0Offsets(self, tag=None):
"""
returns the offsets for the parametrization of fault ``tag``.
:return: the offsets in the parametrization
:rtype: ``list`` of ``float``
"""
if tag is None: tag=self.NOTAG
return self.__offsets[tag]
def getCenterOnSurface(self):
"""
returns the center point of the fault system at the surface
:rtype: ``numpy.array``
"""
if self.__center is None:
self.__center=numpy.zeros((3,), numpy.float64)  # numpy.float was removed in NumPy 1.24
counter=0
for t in self.getTags():
for s in self.getTopPolyline(t):
self.__center[:2]+=s[:2]
counter+=1
self.__center/=counter
return self.__center[:self.getDim()]
def getOrientationOnSurface(self):
"""
returns the orientation of the fault system in RAD on the surface around the fault system center
:rtype: ``float``
"""
if self.__orientation is None:
center=self.getCenterOnSurface()
covariant=numpy.zeros((2,2))
for t in self.getTags():
for s in self.getTopPolyline(t):
covariant[0,0]+=(center[0]-s[0])**2
covariant[0,1]+=(center[1]-s[1])*(center[0]-s[0])
covariant[1,1]+=(center[1]-s[1])**2
covariant[1,0]+=(center[1]-s[1])*(center[0]-s[0])
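# Descriptive note (comment added, not in the original source): the dominant
# eigenvector of this 2x2 covariance matrix of the top vertices gives the
# principal in-plane direction of the fault system; its angle against the
# x0 axis is returned as the orientation.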
e, V=numpy.linalg.eigh(covariant)
if e[0]>e[1]:
d=V[:,0]
else:
d=V[:,1]
if abs(d[0])>0.:
self.__orientation=es.atan(d[1]/d[0])
else:
self.__orientation=math.pi/2
return self.__orientation
def transform(self, rot=0, shift=numpy.zeros((3,))):
"""
applies a shift and a consecutive rotation in the x0x1 plane.
:param rot: rotation angle in RAD
:type rot: ``float``
:param shift: shift vector to be applied before rotation
:type shift: ``numpy.array`` of size 2 or 3
"""
if self.getDim() == 2:
mat=numpy.array([[es.cos(rot), -es.sin(rot)], [es.sin(rot), es.cos(rot)] ])
else:
mat=numpy.array([[es.cos(rot), -es.sin(rot),0.], [es.sin(rot), es.cos(rot),0.], [0.,0.,1.] ])
for t in self.getTags():
self.addFault(strikes = [ s+ rot for s in self.getStrikes(t) ], \
ls = self.getLengths(t), \
V0=numpy.dot(mat,self.getStart(t)+shift), \
tag =t, \
dips=self.getDips(t),\
depths=self.getDepths(t), \
w0_offsets=self.getW0Offsets(t), \
w1_max=-self.getW1Range(t)[0])
def addFault(self, strikes, ls, V0=[0.,0.,0.],tag=None, dips=None, depths= None, w0_offsets=None, w1_max=None):
"""
adds a new fault to the fault system. The fault is named by ``tag``.
The fault is defined by a starting point V0, a list of strikes ``strikes`` and lengths ``ls``. The strikes and lengths
are used to define a polyline with points ``V[i]`` such that
- ``V[0]=V0``
- ``V[i+1]=V[i]+ls[i]*array(cos(strikes[i]),sin(strikes[i]),0)``
So ``strikes`` defines the angle between the direction of the fault segment and the x0 axis. In 3D ``ls[i]==0`` is allowed.
In case of a 3D model a fault plane is defined through a dip ``dips`` and depth ``depths``.
From the dip and the depth the polyline ``bottom`` of the bottom of the fault is computed.
Each segment in the fault is described by the four vertices ``v0=top[i]``, ``v1=top[i+1]``, ``v2=bottom[i]`` and ``v3=bottom[i+1]``
The segment is parametrized by ``w0`` and ``w1`` with ``w0_offsets[i]<=w0<=w0_offsets[i+1]`` and ``-w1_max<=w1<=0``. Moreover
- ``(w0,w1)=(w0_offsets[i] , 0)->v0``
- ``(w0,w1)=(w0_offsets[i+1], 0)->v1``
- ``(w0,w1)=(w0_offsets[i] , -w1_max)->v2``
- ``(w0,w1)=(w0_offsets[i+1], -w1_max)->v3``
If no ``w0_offsets`` is given,
- ``w0_offsets[0]=0``
- ``w0_offsets[i]=w0_offsets[i-1]+L[i]``
where ``L[i]`` is the length of the segments on the top in 2D and in the middle of the segment in 3D.
If no ``w1_max`` is given, the average fault depth is used.
:param strikes: list of strikes. This is the angle of the fault segment direction with x0 axis. Right hand rule applies.
:type strikes: ``list`` of ``float``
:param ls: list of fault lengths. In the case of a 3D fault a segment may have length 0.
:type ls: ``list`` of ``float``
:param V0: start point of the fault
:type V0: ``list`` or ``numpy.array`` with 2 or 3 components. ``V0[2]`` must be zero.
:param tag: the tag of the fault. If fault ``tag`` already exists it is overwritten.
:type tag: ``float`` or ``str``
:param dips: list of dip angles. Right hand rule around strike direction applies.
:type dips: ``list`` of ``float``
:param depths: list of segment depths. Values must be positive in the 3D case.
:type depths: ``list`` of ``float``
:param w0_offsets: ``w0_offsets[i]`` defines the offset of the segment ``i`` in the fault to be used in the parametrization of the fault. If not present the cumulative length of the fault segments is used.
:type w0_offsets: ``list`` of ``float`` or ``None``
:param w1_max: the maximum value used for parametrization of the fault in the depth direction. If not present the mean depth of the fault segments is used.
:type w1_max: ``float``
:note: In the three dimensional case the lists ``dip`` and ``top`` must have the same length.
"""
if tag is None:
tag=self.NOTAG
else:
if self.NOTAG in self.getTags():
raise ValueError('Attempt to add a fault with no tag to a set of existing faults')
if not isinstance(strikes, list): strikes=[strikes, ]
n_segs=len(strikes)
if not isinstance(ls, list): ls=[ ls for i in range(n_segs) ]
if not n_segs==len(ls):
raise ValueError("number of strike direction and length must match.")
if len(V0)>2:
if abs(V0[2])>0: raise ValueError("start point must be on the surface (3rd component == 0)")
if self.getDim()==2 and not (dips is None and depths is None) :
raise ValueError('Spatial dimension two does not support dip and depth for faults.')
if not dips is None:
if not isinstance(dips, list): dips=[dips for i in range(n_segs) ]
if n_segs != len(dips):
raise ValueError('length of dips must be one less than the length of top.')
if not depths is None:
if not isinstance(depths, list): depths=[depths for i in range(n_segs+1) ]
if n_segs+1 != len(depths):
raise ValueError('length of depths must be equal to the length of top.')
if w0_offsets is not None:
if len(w0_offsets) != n_segs+1:
raise ValueError('expected length of w0_offsets is %s'%(n_segs+1))
self.__center=None
self.__orientation = None
#
# in the 2D case we don't allow zero length:
#
if self.getDim() == 2:
for l in ls:
if l<=0: raise ValueError("length must be positive")
else:
for l in ls:
if l<0: raise ValueError("length must be non-negative")
for i in range(n_segs+1):
if depths[i]<0: raise ValueError("negative depth.")
#
# translate start point to a numpy array
#
V0= numpy.array(V0[:self.getDim()],numpy.double)
#
# set strike vectors:
#
strike_vectors=[]
top_polyline=[V0]
total_length=0
for i in range(n_segs):
v=numpy.zeros((self.getDim(),))
v[0]=es.cos(strikes[i])
v[1]=es.sin(strikes[i])
strike_vectors.append(v)
top_polyline.append(top_polyline[-1]+ls[i]*v)
total_length+=ls[i]
#
# normal and depth direction
#
if self.getDim()==3:
normals=[]
for i in range(n_segs):
normals.append(numpy.array([es.sin(dips[i])*strike_vectors[i][1],-es.sin(dips[i])*strike_vectors[i][0], es.cos(dips[i])]) )
d=numpy.cross(strike_vectors[0],normals[0])
if d[2]>0:
f=-1
else:
f=1
depth_vectors=[f*depths[0]*d/numpy.linalg.norm(d) ]
for i in range(1,n_segs):
d=-numpy.cross(normals[i-1],normals[i])
d_l=numpy.linalg.norm(d)
if d_l<=0:
d=numpy.cross(strike_vectors[i],normals[i])
d_l=numpy.linalg.norm(d)
else:
for L in [ strike_vectors[i], strike_vectors[i-1]]:
if numpy.linalg.norm(numpy.cross(L,d)) <= self.MIN_DEPTH_ANGLE * numpy.linalg.norm(L) * d_l:
raise ValueError("%s-th depth vector %s too flat."%(i, d))
if d[2]>0:
f=-1
else:
f=1
depth_vectors.append(f*d*depths[i]/d_l)
d=numpy.cross(strike_vectors[n_segs-1],normals[n_segs-1])
if d[2]>0:
f=-1
else:
f=1
depth_vectors.append(f*depths[n_segs]*d/numpy.linalg.norm(d))
bottom_polyline=[ top_polyline[i]+depth_vectors[i] for i in range(n_segs+1) ]
#
# calculate offsets if required:
#
if w0_offsets is None:
w0_offsets=[0.]
for i in range(n_segs):
if self.getDim()==3:
w0_offsets.append(w0_offsets[-1]+(float(numpy.linalg.norm(bottom_polyline[i+1]-bottom_polyline[i]))+ls[i])/2.)
else:
w0_offsets.append(w0_offsets[-1]+ls[i])
w0_max=max(w0_offsets)
if self.getDim()==3:
self.__normals[tag]=normals
self.__depth_vectors[tag]=depth_vectors
self.__depths[tag]=depths
self.__dips[tag]=dips
self.__bottom[tag]=bottom_polyline
self.__ls[tag]=ls
self.__strikes[tag]=strikes
self.__strike_vectors[tag]=strike_vectors
self.__top[tag]=top_polyline
self.__total_length[tag]=total_length
self.__offsets[tag]=w0_offsets
if self.getDim()==2:
self.__medDepth[tag]=0.
else:
self.__medDepth[tag]=sum([ numpy.linalg.norm(v) for v in depth_vectors])/len(depth_vectors)
if w1_max is None or self.getDim()==2: w1_max=self.__medDepth[tag]
self.__w0_max[tag]=w0_max
self.__w1_max[tag]=w1_max
def getMaxValue(self,f, tol=es.sqrt(es.EPSILON)):
"""
returns the tag of the fault where ``f`` takes its maximum value and a `Locator` object which can be used to collect values from `Data` class objects at the location where the maximum is taken.
:param f: a distribution of values
:type f: `escript.Data`
:param tol: relative tolerance used to decide if a point is on the fault
:type tol: ``float``
:return: the tag of the fault where the maximum is taken, and a `Locator` object to collect the value at the location of the maximum.
"""
ref=-es.Lsup(f)*2
f_max=ref
t_max=None
loc_max=None
x=f.getFunctionSpace().getX()
for t in self.getTags():
p,m=self.getParametrization(x,tag=t, tol=tol)
loc=((m*f)+(1.-m)*ref).internal_maxGlobalDataPoint()
f_t=f.getTupleForGlobalDataPoint(*loc)[0]
if f_t>f_max:
f_max=f_t
t_max=t
loc_max=loc
if loc_max is None:
return None, None
else:
return t_max, pdt.Locator(x.getFunctionSpace(),x.getTupleForGlobalDataPoint(*loc_max))
def getMinValue(self,f, tol=es.sqrt(es.EPSILON)):
"""
returns the tag of the fault where ``f`` takes its minimum value and a `Locator` object which can be used to collect values from `Data` class objects at the location where the minimum is taken.
:param f: a distribution of values
:type f: `escript.Data`
:param tol: relative tolerance used to decide if a point is on the fault
:type tol: ``float``
:return: the tag of the fault where the minimum is taken, and a `Locator` object to collect the value at the location of the minimum.
"""
ref=es.Lsup(f)*2
f_min=ref
t_min=None
loc_min=None
x=f.getFunctionSpace().getX()
for t in self.getTags():
p,m=self.getParametrization(x,tag=t, tol=tol)
loc=((m*f)+(1.-m)*ref).internal_minGlobalDataPoint()
f_t=f.getTupleForGlobalDataPoint(*loc)[0]
if f_t<f_min:
f_min=f_t
t_min=t
loc_min=loc
if loc_min is None:
return None, None
else:
return t_min, pdt.Locator(x.getFunctionSpace(),x.getTupleForGlobalDataPoint(*loc_min))
def getParametrization(self,x,tag=None, tol=es.sqrt(es.EPSILON), outsider=None):
"""
returns the parametrization of the fault ``tag`` in the fault system. In fact, the values of the parametrization are returned for the given coordinates ``x``. In addition to the value of the parametrization, a mask is returned indicating whether the given location is on the fault within the given tolerance ``tol``.
Typical usage of this method is
dom=Domain(..)
x=dom.getX()
fs=FaultSystem()
fs.addFault(tag=3,...)
p, m=fs.getParametrization(x, outsider=0,tag=3)
saveDataCSV('x.csv',p=p, x=x, mask=m)
to create a file with the coordinates of the points in ``x`` which are on the fault (as ``mask=m``) together with their location ``p`` in the fault coordinate system.
:param x: location(s)
:type x: `escript.Data` object or ``numpy.array``
:param tag: the tag of the fault
:param tol: relative tolerance to check if location is on fault.
:type tol: ``float``
:param outsider: value used for parametrization values outside the fault. If not present an appropriate value is chosen.
:type outsider: ``float``
:return: the coordinates ``x`` in the coordinate system of the fault and a mask indicating coordinates in the fault by 1 (0 elsewhere)
:rtype: `escript.Data` object or ``numpy.array``
"""
offsets=self.getW0Offsets(tag)
w1_range=self.getW1Range(tag)
w0_range=self.getW0Range(tag)[1]-self.getW0Range(tag)[0]
if outsider is None:
outsider=min(self.getW0Range(tag)[0],self.getW0Range(tag)[1])-abs(w0_range)/es.sqrt(es.EPSILON)
if isinstance(x,list): x=numpy.array(x, numpy.double)
updated=x[0]*0
if self.getDim()==2:
#
#
p=x[0]*0 + outsider
top=self.getTopPolyline(tag)
for i in range(1,len(top)):
d=top[i]-top[i-1]
h=x-top[i-1]
h_l=es.length(h)
d_l=es.length(d)
s=es.inner(h,d)/d_l**2
s=s*es.whereNonPositive(s-1.-tol)*es.whereNonNegative(s+tol)
m=es.whereNonPositive(es.length(h-s*d)-tol*es.maximum(h_l,d_l))*(1.-updated)
p=(1.-m)*p+m*(offsets[i-1]+(offsets[i]-offsets[i-1])*s)
updated=es.wherePositive(updated+m)
else:
p=x[:2]*0 + outsider
top=self.getTopPolyline(tag)
bottom=self.getBottomPolyline(tag)
n=self.getSegmentNormals(tag)
for i in range(len(top)-1):
h=x-top[i]
R=top[i+1]-top[i]
r=bottom[i+1]-bottom[i]
D0=bottom[i]-top[i]
D1=bottom[i+1]-top[i+1]
s_upper=es.matrix_mult(numpy.linalg.pinv(numpy.vstack((R,D1)).T),h)
s_lower=es.matrix_mult(numpy.linalg.pinv(numpy.vstack((r,D0)).T),h)
m_ul=es.wherePositive(s_upper[0]-s_upper[1])
s=s_upper*m_ul+s_lower*(1-m_ul)
s0=s[0]
s1=s[1]
m=es.whereNonNegative(s0+tol)*es.whereNonPositive(s0-1.-tol)*es.whereNonNegative(s1+tol)*es.whereNonPositive(s1-1.-tol)
s0=s0*m
s1=s1*m
atol=tol*es.maximum(es.length(h),es.length(top[i]-bottom[i+1]))
m=es.whereNonPositive(es.length(h-s0*R-s1*D1)*m_ul+(1-m_ul)*es.length(h-s0*r-s1*D0)-atol)
p[0]=(1.-m)*p[0]+m*(offsets[i]+(offsets[i+1]-offsets[i])*s0)
p[1]=(1.-m)*p[1]+m*(w1_range[1]+(w1_range[0]-w1_range[1])*s1)
updated=es.wherePositive(updated+m)
return p, updated
def getSideAndDistance(self,x,tag=None):
"""
returns the side and the distance at ``x`` from the fault ``tag``.
:param x: location(s)
:type x: `escript.Data` object or ``numpy.array``
:param tag: the tag of the fault
:return: the side of ``x`` (positive means to the right of the fault, negative to the left) and the distance to the fault. Note that a value of zero for the side means that the side is undefined.
"""
d=None
side=None
if self.getDim()==2:
mat=numpy.array([[0., 1.], [-1., 0.] ])
s=self.getTopPolyline(tag)
for i in range(1,len(s)):
q=(s[i]-s[i-1])
h=x-s[i-1]
q_l=es.length(q)
qt=es.matrixmult(mat,q) # orthogonal direction
t=es.inner(q,h)/q_l**2
t=es.maximum(es.minimum(t,1,),0.)
p=h-t*q
dist=es.length(p)
lside=es.sign(es.inner(p,qt))
if d is None:
d=dist
side=lside
else:
m=es.whereNegative(d-dist)
m2=es.wherePositive(es.whereZero(abs(lside))+m)
d=dist*(1-m)+d*m
side=lside*(1-m2)+side*m2
else:
ns=self.getSegmentNormals(tag)
top=self.getTopPolyline(tag)
bottom=self.getBottomPolyline(tag)
for i in range(len(top)-1):
h=x-top[i]
R=top[i+1]-top[i]
r=bottom[i+1]-bottom[i]
D0=bottom[i]-top[i]
D1=bottom[i+1]-top[i+1]
s_upper=es.matrix_mult(numpy.linalg.pinv(numpy.vstack((R,D1)).T),h)
s_lower=es.matrix_mult(numpy.linalg.pinv(numpy.vstack((r,D0)).T),h)
m_ul=es.wherePositive(s_upper[0]-s_upper[1])
s=s_upper*m_ul+s_lower*(1-m_ul)
s=es.maximum(es.minimum(s,1.),0)
p=h-(m_ul*R+(1-m_ul)*r)*s[0]-(m_ul*D1+(1-m_ul)*D0)*s[1]
dist=es.length(p)
lside=es.sign(es.inner(p,ns[i]))
if d is None:
d=dist
side=lside
else:
m=es.whereNegative(d-dist)
m2=es.wherePositive(es.whereZero(abs(lside))+m)
d=dist*(1-m)+d*m
side=lside*(1-m2)+side*m2
return side, d
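# ---------------------------------------------------------------------------
# Illustrative usage sketch (added; not part of the original module). The
# fault geometry below is hypothetical and only shows how addFault() and the
# accessors fit together.
# ---------------------------------------------------------------------------
# import math
# fs = FaultSystem(dim=3)
# fs.addFault(tag=1,
#             strikes=[math.pi/8, math.pi/6],  # angle of each segment against the x0 axis
#             ls=[10000., 5000.],              # length of each segment
#             V0=[0., 0., 0.],                 # start point, on the surface
#             dips=[math.pi/4, math.pi/4],     # one dip per segment
#             depths=[500., 500., 500.])       # one depth per vertex (n_segs+1 values)
# print(fs.getTotalLength(1))                  # 15000.0, the unrolled length
# print(fs.getW0Range(1), fs.getW1Range(1))    # parametrization domain of the fault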
| 37.803231 | 305 | 0.595284 | 3,880 | 25,744 | 3.862113 | 0.106186 | 0.015682 | 0.018018 | 0.010277 | 0.445245 | 0.389456 | 0.363764 | 0.335269 | 0.305706 | 0.292759 | 0 | 0.022846 | 0.26379 | 25,744 | 680 | 306 | 37.858824 | 0.767794 | 0.361793 | 0 | 0.338462 | 0 | 0 | 0.050321 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.012821 | 0 | 0.164103 | 0.002564 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
15d2546aeaf271239203d19ad3ebcd6e67edaf83 | 9,680 | py | Python | acs/acs/Core/FileParsingManager.py | wangji1/test-framework-and-suites-for-android | 59564f826f205fe7fab64f45b88b1a6dde6900af | [
"Apache-2.0"
] | null | null | null | acs/acs/Core/FileParsingManager.py | wangji1/test-framework-and-suites-for-android | 59564f826f205fe7fab64f45b88b1a6dde6900af | [
"Apache-2.0"
] | null | null | null | acs/acs/Core/FileParsingManager.py | wangji1/test-framework-and-suites-for-android | 59564f826f205fe7fab64f45b88b1a6dde6900af | [
"Apache-2.0"
] | null | null | null | """
:copyright: (c)Copyright 2013, Intel Corporation All Rights Reserved.
The source code contained or described here in and all documents related
to the source code ("Material") are owned by Intel Corporation or its
suppliers or licensors. Title to the Material remains with Intel Corporation
or its suppliers and licensors. The Material contains trade secrets and
proprietary and confidential information of Intel or its suppliers and
licensors.
The Material is protected by worldwide copyright and trade secret laws and
treaty provisions. No part of the Material may be used, copied, reproduced,
modified, published, uploaded, posted, transmitted, distributed, or disclosed
in any way without Intel's prior express written permission.
No license under any patent, copyright, trade secret or other intellectual
property right is granted to or conferred upon you by disclosure or delivery
of the Materials, either expressly, by implication, inducement, estoppel or
otherwise. Any license under such intellectual property rights must be express
and approved by Intel in writing.
:organization: INTEL MCG PSI
:summary: Implements file parsing manager
:since: 05/03/2013
:author: vdechefd
"""
import os
import lxml.etree as et
from acs.ErrorHandling.AcsConfigException import AcsConfigException
from acs.Core.Report.ACSLogging import LOGGER_FWK
from acs.Core.PathManager import Paths
import acs.UtilitiesFWK.Utilities as Utils
class FileParsingManager:
""" FileParsingManager
This class implements the File Parsing Manager.
This manager takes XML files as inputs and parses them into dictionaries.
It will parse:
- use case catalog
- bench config
- equipment catalog
- campaign
"""
def __init__(self, bench_config_name, equipment_catalog, global_config):
self._file_extension = ".xml"
self._execution_config_path = Paths.EXECUTION_CONFIG
self._equipment_catalog_path = Paths.EQUIPMENT_CATALOG
self._bench_config_name = (bench_config_name if os.path.isfile(bench_config_name) else
os.path.join(self._execution_config_path, bench_config_name + self._file_extension))
self._equipment_catalog_name = equipment_catalog + self._file_extension
self._global_config = global_config
self._ucase_catalogs = None
self._logger = LOGGER_FWK
def parse_bench_config(self):
"""
This function parses the bench config XML file into a dictionary.
"""
def __parse_node(node):
"""
This private function parses a node of the bench config document.
:rtype: dict
:return: Data stored in a dictionary.
"""
dico = {}
name = node.get('name', "")
if name:
# store all keys (except 'name')/value in a dict
for key in [x for x in node.attrib if x != "name"]:
dico[key] = node.attrib[key]
node_list = node.xpath('./*')
if node_list:
for node_item in node_list:
name = node_item.get('name', "")
if name:
dico[name] = __parse_node(node_item)
return dico
def __parse_bench_config(document):
"""
Latest version of the bench config parsing function, adapted for multi-phone setups.
:type document: object
:param document: xml document parsed by etree
:rtype: dict
:return: Data stored in a dictionary.
"""
# parse bench_config (dom method)
bench_config = {}
node_list = document.xpath('/BenchConfig/*/*')
for node in node_list:
name = node.get('name', "")
if name:
bench_config[name] = __parse_node(node)
return bench_config
# body of the parse_bench_config() function.
if not os.path.isfile(self._bench_config_name):
error_msg = "Bench config file : %s does not exist" % self._bench_config_name
raise AcsConfigException(AcsConfigException.FILE_NOT_FOUND, error_msg)
try:
document = et.parse(self._bench_config_name)
except et.XMLSyntaxError:
_, error_msg, _ = Utils.get_exception_info()
error_msg = "{}; {}".format(self._bench_config_name, error_msg)
raise AcsConfigException(AcsConfigException.XML_PARSING_ERROR, error_msg)
result = __parse_bench_config(document)
bench_config_parameters = Utils.BenchConfigParameters(dictionnary=result,
bench_config_file=self._bench_config_name)
return bench_config_parameters
def parse_equipment_catalog(self):
"""
This function parses the equipment catalog XML file into a dictionary.
"""
# Instantiate empty dictionaries
eqt_type_dic = {}
# Get the xml doc
equipment_catalog_path = os.path.join(self._equipment_catalog_path, self._equipment_catalog_name)
if not os.path.isfile(equipment_catalog_path):
error_msg = "Equipment catalog file : %s does not exist" % equipment_catalog_path
raise AcsConfigException(AcsConfigException.FILE_NOT_FOUND, error_msg)
try:
equipment_catalog_doc = et.parse(equipment_catalog_path)
except et.XMLSyntaxError:
_, error_msg, _ = Utils.get_exception_info()
error_msg = "{}; {}".format(equipment_catalog_path, error_msg)
raise AcsConfigException(AcsConfigException.XML_PARSING_ERROR, error_msg)
root_node = equipment_catalog_doc.xpath('/Equipment_Catalog')
if not root_node:
raise AcsConfigException(AcsConfigException.FILE_NOT_FOUND,
"Wrong XML: could not find expected document root node: "
"'Equipment_Catalog'")
# Parse EquipmentTypes
list_eq_types = root_node[0].xpath('./EquipmentType')
for eq_type in list_eq_types:
eqt_type_dic.update(self._load_equipment_type(eq_type))
self._global_config.equipmentCatalog = eqt_type_dic.copy()
def _load_equipment_type(self, node):
"""
This function parses an "EquipmentType" XML Tag into a dictionary
:type node: Etree node
:param node: the "EquipmentType" node
:rtype dic: dict
:return: a dictionary of equipment
"""
dic = {}
eqt_type_name = node.get("name", "")
if eqt_type_name:
dic[eqt_type_name] = self._load_equipments(node)
return dic
def _load_equipments(self, node):
"""
This function parses "Equipment" XML Tags into a dictionary
:type node: Etree node
:param node: the node containing "Equipment" nodes
"""
# Get common equipment type parameters
dic = {}
dic.update(self._get_parameters(node))
eqt_nodes = node.xpath('./Equipment')
for sub_node in eqt_nodes:
eqt_model = sub_node.get("name", "")
if eqt_model:
dic[eqt_model] = self._get_parameters(sub_node)
dic[eqt_model].update(self._load_transport(sub_node))
dic[eqt_model].update(self._load_features(sub_node))
dic[eqt_model].update(self._load_controllers(sub_node))
return dic
def _load_transport(self, node):
"""
This function parses a "Transport" XML Tags from a node into a dictionary
:type node: DOM node
:param node: the node from which to get all parameters value
:rtype dic: dict
:return: a dictionary of transports
"""
dic = {}
transport_node = node.xpath('./Transports')
if transport_node:
dic["Transports"] = self._get_parameters(transport_node[0])
return dic
def _load_controllers(self, node):
"""
This function parses a "Controllers" XML Tags from a node into a dictionary
:type node: DOM node
:param node: the node from which to get all parameters value
:rtype dic: dict
:return: the dictionary of controllers
"""
dic = {}
transport_node = node.xpath('./Controllers')
if transport_node:
dic["Controllers"] = self._get_parameters(transport_node[0])
return dic
def _load_features(self, node):
"""
This function parses a "Features" XML Tags from a node into a dictionary
:type node: Element node
:param node: the node from which to get all parameters value
:rtype dic: dict
:return: a dictionary of features
"""
dic = {}
transport_node = node.xpath('./Features')
if transport_node:
dic["Features"] = self._get_parameters(transport_node[0])
return dic
def _get_parameters(self, node):
"""
This function parses all "Parameter" XML Tags from a node into a dictionary
:type node: Element node
:param node: the node from which to get all parameters value
:rtype dic: dict
:return: a dictionary of parameters
"""
dic = {}
parameters = node.xpath('./Parameter')
for parameter in parameters:
name = parameter.get("name", "")
value = parameter.get("value", "")
if name:
dic[name] = value
return dic
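# ---------------------------------------------------------------------------
# Illustrative sketch (added; not part of the original module). It shows the
# nested dictionary shape produced by parse_bench_config() for a minimal,
# hypothetical bench config; the element and attribute names are examples
# only, not the real ACS schema. Given
#
#   <BenchConfig>
#     <Phones>
#       <Phone name="PHONE1" deviceModel="dummy">
#         <Parameter name="serialNumber" value="0123"/>
#       </Phone>
#     </Phones>
#   </BenchConfig>
#
# __parse_bench_config() returns
#
#   {"PHONE1": {"deviceModel": "dummy", "serialNumber": {"value": "0123"}}}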
| 36.946565 | 119 | 0.630785 | 1,135 | 9,680 | 5.171806 | 0.21674 | 0.048722 | 0.028109 | 0.022658 | 0.36184 | 0.273765 | 0.235945 | 0.207496 | 0.191141 | 0.16201 | 0 | 0.002343 | 0.294421 | 9,680 | 261 | 120 | 37.088123 | 0.857101 | 0.335537 | 0 | 0.262712 | 0 | 0 | 0.05807 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.09322 | false | 0 | 0.050847 | 0 | 0.228814 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
15d53e64696bdcd31c356310ca351fd13adec82e | 2,256 | py | Python | Chapter 16/excel_to_csv.py | ostin-r/automate-boring-stuff-solutions | 78f0a2981e6520ff2907285e666168a0f35eba02 | [
"FTL"
] | 4 | 2021-06-14T10:37:58.000Z | 2021-12-30T17:49:17.000Z | Chapter 16/excel_to_csv.py | ostin-r/automate-boring-stuff-solutions | 78f0a2981e6520ff2907285e666168a0f35eba02 | [
"FTL"
] | null | null | null | Chapter 16/excel_to_csv.py | ostin-r/automate-boring-stuff-solutions | 78f0a2981e6520ff2907285e666168a0f35eba02 | [
"FTL"
] | 1 | 2021-07-29T15:26:54.000Z | 2021-07-29T15:26:54.000Z | '''
Austin Richards 4/14/21
excel_to_csv.py automates the conversion of many xlsx
files into csv files. This program names the csv file
in the format <filename>_<sheetname>.csv
'''
import logging
import os, csv, openpyxl
from pathlib import Path
from openpyxl.utils import get_column_letter
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s: %(message)s')
def get_all_paths(directory):
'''
returns all paths in a directory (and its sub-directories)
as a list
'''
file_paths = []
for path, dirs, files in os.walk(directory):
for filename in files:
filepath = os.path.join(path, filename)
file_paths.append(filepath)
return file_paths
def excel_to_csv(directory):
# get the absolute path, make a folder within it to save converted files, get files to convert
directory = os.path.abspath(directory)
new_dir = os.path.join(directory, 'converted_csv_files')
os.makedirs(new_dir, exist_ok=True)
all_paths = [path for path in get_all_paths(directory) if path.endswith('.xlsx')]
for excel_file in all_paths:
workbook = openpyxl.load_workbook(excel_file)
excel_filename = Path(excel_file).stem
print(f'copying {excel_filename}...')
for sheet_name in workbook.sheetnames:
print(f' copying {sheet_name}...')
# create a csv filename with the excel filename and sheetname, put in new folder
csv_filename = f'{excel_filename}_{sheet_name}.csv'
csv_filepath = os.path.join(new_dir, csv_filename)
new_file = open(csv_filepath, 'w', newline='')
csv_writer = csv.writer(new_file)
# get data from the xlsx file and write it to the new csv
sheet = workbook[sheet_name]
for row_num in range(1, sheet.max_row + 1):
row_data = []
for col_num in range(1, sheet.max_column + 1):
col_letter = get_column_letter(col_num)
row_data.append(sheet[col_letter + str(row_num)].value)
# write data to the new csv
csv_writer.writerow(row_data)
new_file.close()
print('files copied.')
excel_to_csv('Chapter 16') | 35.25 | 98 | 0.649379 | 318 | 2,256 | 4.424528 | 0.339623 | 0.028429 | 0.021322 | 0.028429 | 0.027008 | 0.027008 | 0 | 0 | 0 | 0 | 0 | 0.006579 | 0.258865 | 2,256 | 64 | 99 | 35.25 | 0.834928 | 0.223404 | 0 | 0 | 0 | 0 | 0.092281 | 0.019153 | 0 | 0 | 0 | 0 | 0 | 1 | 0.054054 | false | 0 | 0.108108 | 0 | 0.189189 | 0.081081 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
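# Example (added; not part of the original script, file names are hypothetical):
# given 'Chapter 16/report.xlsx' with sheets 'Q1' and 'Q2', the call above
# writes 'Chapter 16/converted_csv_files/report_Q1.csv' and 'report_Q2.csv'.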
15d571532daed68e7ee3f7aca094467594fa8bdd | 2,041 | py | Python | kg/diff/templates/real_abs_rel_template.py | kevinsogo/compgen | c765fdb3008d41f409836a45ad5a506db6a99e74 | [
"MIT"
] | 6 | 2019-11-30T17:03:13.000Z | 2021-09-30T05:08:31.000Z | kg/diff/templates/real_abs_rel_template.py | kevinsogo/compgen | c765fdb3008d41f409836a45ad5a506db6a99e74 | [
"MIT"
] | 1 | 2020-01-20T12:13:03.000Z | 2020-01-20T12:13:03.000Z | kg/diff/templates/real_abs_rel_template.py | kevinsogo/compgen | c765fdb3008d41f409836a45ad5a506db6a99e74 | [
"MIT"
] | null | null | null | # Checks for an XXX error ### @replace "XXX", ('absolute/relative' if has_rel else 'absolute')
# with an error of at most 1e-XXX ### @replace "XXX", prec
# Don't edit this file. Edit real_abs_rel_template.py instead, and then run _real_check_gen.py
# Oh, actually, you're editing the correct file. Go on. ### @if False
raise Exception("You're not supposed to run this!!!") ### @if False
from itertools import zip_longest
from decimal import Decimal, InvalidOperation
from kg.checkers import * ### @keep @import
EPS = 0 ### @replace 0, f"Decimal('1e-{prec}')"
EPS *= 1+Decimal('1e-5') # add some leniency
@set_checker()
@default_score
def checker(input_file, output_file, judge_file, **kwargs):
worst = 0
for line1, line2 in zip_longest(output_file, judge_file):
if (line1 is None) != (line2 is None): raise WA("Unequal number of lines")
p1 = line1.rstrip().split(" ")
p2 = line2.rstrip().split(" ")
if len(p1) != len(p2): raise WA("Incorrect number of values in line")
for v1, v2 in zip(p1, p2):
if v1 != v2: # they're different as tokens. try considering them as numbers
try:
err = error(Decimal(v1), Decimal(v2)) ### @replace "error", "abs_rel_error" if has_rel else "abs_error"
except InvalidOperation:
raise WA(f"Unequal tokens that are not numbers: {v1!r} != {v2!r}")
worst = max(worst, err)
if err > EPS:
print('Found an error of', worst) ### @keep @if format not in ('hr', 'cms')
raise WA("Bad precision.")
print('Worst error:', worst) ### @keep @if format not in ('pg', 'hr', 'cms')
help_ = ('Compare if two sequences of real numbers are "close enough" (by XXX). ' ### @replace 'XXX', '1e-' + str(prec)
"Uses XXX error.") ### @replace 'XXX', 'absolute/relative' if has_rel else 'absolute'
if __name__ == '__main__': chk(help=help_)
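# Note (added; not part of the original template): a common definition for
# these helpers -- assumed here, kg.checkers is authoritative -- is
#   abs_error(a, b)     = |a - b|
#   abs_rel_error(a, b) = |a - b| / max(1, |b|)
# i.e. the relative variant accepts answers whose error is small relative to
# the judge value once that value exceeds 1.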
| 52.333333 | 123 | 0.588437 | 283 | 2,041 | 4.134276 | 0.45583 | 0.034188 | 0.020513 | 0.030769 | 0.129915 | 0.129915 | 0.092308 | 0.092308 | 0.092308 | 0.092308 | 0 | 0.019555 | 0.273395 | 2,041 | 38 | 124 | 53.710526 | 0.769386 | 0.353748 | 0 | 0 | 0 | 0 | 0.223612 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034483 | false | 0 | 0.103448 | 0 | 0.137931 | 0.068966 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
15d5daf61123ba5cf4f2cb454d6eaf622ef68705 | 1,099 | py | Python | osspeak/recognition/actions/library/process.py | OSSpeak/OSSpeak | 327c38a37684165f87bf8d76ab2ca135b43b8ab7 | [
"MIT"
] | 1 | 2020-03-17T10:24:41.000Z | 2020-03-17T10:24:41.000Z | osspeak/recognition/actions/library/process.py | OSSpeak/OSSpeak | 327c38a37684165f87bf8d76ab2ca135b43b8ab7 | [
"MIT"
] | 12 | 2016-09-28T05:16:00.000Z | 2020-11-27T22:32:40.000Z | osspeak/recognition/actions/library/process.py | OSSpeak/OSSpeak | 327c38a37684165f87bf8d76ab2ca135b43b8ab7 | [
"MIT"
] | null | null | null | import time
import os
import subprocess
import threading
class ProcessHandler:
def __init__(self, *args, on_output=None):
self.process = subprocess.Popen(args, stdin=subprocess.PIPE,
stderr=subprocess.PIPE, stdout=subprocess.PIPE, shell=True)
self.on_output = on_output
self.start_stdout_listening()
def send_message(self, msg):
if not isinstance(msg, bytes):
msg = msg.encode('utf8')
if not msg.endswith(b'\n'):
msg += b'\n'
self.process.stdin.write(msg)
try:
self.process.stdin.flush()
except OSError:
print(f'Process {self} already closed')
def dispatch_process_output(self):
for line in self.process.stdout:
line = line.decode('utf8')
if self.on_output is not None:  # run() creates handlers without a callback
self.on_output(line)
def start_stdout_listening(self):
t = threading.Thread(target=self.dispatch_process_output, daemon=True)
t.start()
def run(s):
proc = ProcessHandler(s)
return proc
def run_sync(s):
return subprocess.run(s, shell=True).stdout | 28.179487 | 78 | 0.626934 | 138 | 1,099 | 4.862319 | 0.427536 | 0.04769 | 0.035768 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002481 | 0.266606 | 1,099 | 39 | 79 | 28.179487 | 0.830025 | 0 | 0 | 0 | 0 | 0 | 0.037273 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1875 | false | 0 | 0.125 | 0.03125 | 0.40625 | 0.03125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
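# ---------------------------------------------------------------------------
# Illustrative usage sketch (added; not part of the original module). The
# command and callback below are hypothetical.
# ---------------------------------------------------------------------------
# def echo(line):
#     print("child said:", line, end="")
#
# proc = ProcessHandler("cat", on_output=echo)  # stdout lines go to echo()
# proc.send_message("hello")                    # '\n' is appended if missing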
15d6695d5c93e5916f4ed45db19a2876ea82e9ee | 977 | py | Python | setup.py | haaspt/whatsnew | 0524ad2b6132593282946073f2d647ea0ce960e8 | [
"MIT"
] | 2 | 2015-09-02T21:14:26.000Z | 2015-09-02T22:23:04.000Z | setup.py | haaspt/whatsnew | 0524ad2b6132593282946073f2d647ea0ce960e8 | [
"MIT"
] | 2 | 2018-01-02T00:54:49.000Z | 2018-01-02T00:56:01.000Z | setup.py | haaspt/whatsnew | 0524ad2b6132593282946073f2d647ea0ce960e8 | [
"MIT"
] | null | null | null | import os
from setuptools import setup
with open('README.md') as readme_file:
readme = readme_file.read()
requirements = ['click', 'feedparser', 'beautifulsoup4']
setup(
name = "whatsnew",
version = "0.13",
author = "Patrick Tyler Haas",
author_email = "patrick.tyler.haas@gmail.com",
description = ("A lightweight, convenient tool to get an overview of the day's headlines right from your command line."),
license = "MIT",
keywords = "",
url = "https://github.com/haaspt/whatsnew",
scripts=['main.py', 'newsfeeds.py', 'config.py'],
install_requires=requirements,
long_description=readme,
entry_points = {
'console_scripts': [
'whatsnew = main:main'
],
},
classifiers=[
'Development Status :: 2 - Pre-Alpha',
'Intended Audience :: Developers',
'License :: OSI Approved :: BSD License',
'Natural Language :: English',
'Programming Language :: Python :: 3.6',
],
)
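# Example (added; not part of the original file): installing in editable mode
# exposes the 'whatsnew' console script declared in entry_points:
#   pip install -e .
#   whatsnew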
| 29.606061 | 125 | 0.602866 | 102 | 977 | 5.72549 | 0.784314 | 0.041096 | 0.054795 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009615 | 0.254862 | 977 | 32 | 126 | 30.53125 | 0.792582 | 0 | 0 | 0.068966 | 0 | 0.034483 | 0.47697 | 0.028659 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.068966 | 0 | 0.068966 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
15da0373b1191b0c411cc06fa6774cfb3700fb3a | 2,880 | py | Python | Flask-Server/timeswitch/server.py | weichweich/pi-timeswitch | c4428783fbf8b2294f7a6f55c312beeabae94d6f | [
"MIT"
] | 4 | 2015-10-12T19:13:22.000Z | 2018-07-18T17:55:48.000Z | Flask-Server/timeswitch/server.py | weichweich/pi-timeswitch | c4428783fbf8b2294f7a6f55c312beeabae94d6f | [
"MIT"
] | null | null | null | Flask-Server/timeswitch/server.py | weichweich/pi-timeswitch | c4428783fbf8b2294f7a6f55c312beeabae94d6f | [
"MIT"
] | 2 | 2017-04-25T16:19:09.000Z | 2022-01-24T08:15:12.000Z | import argparse
import logging
import sys
from timeswitch.switch.manager import SwitchManager
from timeswitch.app import setup_app
from timeswitch.api import setup_api
from timeswitch.model import setup_model
# ######################################
# # parsing commandline args
# ######################################
def parse_arguments():
PARSER = argparse.ArgumentParser(description='Timeswitch for the\
GPIOs of a Raspberry Pi with a web interface.')
PARSER.add_argument('-f', '--file', dest='schedule_file', metavar='file',
type=str, required=True,
help='A JSON-file containing the schedule.')
PARSER.add_argument('--debug', action='store_true',
help='Enable debug logging and run Flask in debug mode.')
PARSER.add_argument('--create', dest='create', action='store_true',
help='Creates a new database. DELETES ALL DATA!!')
PARSER.add_argument('--manager', dest='manager', action='store_true',
help='Start the manager which switches the GPIOs at specified times.')
PARSER.add_argument('--static', dest='static_dir', metavar='file',
type=str, help='Folder with static files to serve')
return PARSER.parse_args()
# ######################################
# # Logging:
# ######################################
def setup_logger(debug=True):
# set up logging to file - see previous section for more details
logging.basicConfig(level=logging.DEBUG,
format='%(asctime)s %(name)-20s \
%(levelname)-8s %(message)s',
datefmt='%m-%d %H:%M',
filename='piSwitch.log',
filemode='a')
# define a Handler which writes INFO messages or higher to the sys.stderr
console = logging.StreamHandler()
if debug:
console.setLevel(logging.DEBUG)
else:
console.setLevel(logging.INFO)
# set a format which is simpler for console use
formatter = logging.Formatter('%(levelname)-8s:%(name)-8s:%(message)s')
# tell the handler to use this format
console.setFormatter(formatter)
# add the handler to the root logger
logging.getLogger('').addHandler(console)
def start(cmd_args, app, switch_model):
switch_manager = None
if cmd_args.manager:
switch_manager = SwitchManager(switch_model)
switch_manager.start()
try:
app.run(debug=cmd_args.debug)
finally:
if cmd_args.manager:
switch_manager.stop()
def main():
cmd_args = parse_arguments()
setup_logger(cmd_args.debug)
app = setup_app(static_folder=cmd_args.static_dir, static_url_path='')
model = setup_model(app)
_ = setup_api(app, model)
start(cmd_args, app, model)
if __name__ == '__main__':
main()
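# Example invocation (added; not part of the original file, paths are
# hypothetical):
#   python server.py --file schedule.json --manager --debug --static ./static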
| 32 | 94 | 0.601389 | 329 | 2,880 | 5.118541 | 0.386018 | 0.033254 | 0.050475 | 0.033848 | 0.099762 | 0.099762 | 0.065321 | 0.065321 | 0.065321 | 0.065321 | 0 | 0.002291 | 0.242361 | 2,880 | 89 | 95 | 32.359551 | 0.769478 | 0.100347 | 0 | 0.071429 | 0 | 0 | 0.162196 | 0.015683 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.125 | 0 | 0.214286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
15db2920ca4b3d1d42695dca0b6816fa94b0f30d | 12,055 | py | Python | utils/videoloader.py | mie-lab/traffic4cast | aea6f90e8884c01689c84255c99e96d2b58dc470 | [
"Apache-2.0"
] | 8 | 2020-07-26T20:54:58.000Z | 2022-03-01T14:36:13.000Z | utils/videoloader.py | mie-lab/traffic4cast | aea6f90e8884c01689c84255c99e96d2b58dc470 | [
"Apache-2.0"
] | null | null | null | utils/videoloader.py | mie-lab/traffic4cast | aea6f90e8884c01689c84255c99e96d2b58dc470 | [
"Apache-2.0"
] | 5 | 2019-11-05T09:46:01.000Z | 2021-01-24T04:42:53.000Z | import sys, os, time
from pathlib import Path
import pickle
import datetime as dt
import glob
import h5py
import numpy as np
from matplotlib import pyplot as plt
from multiprocessing import Pool
from functools import partial
import torch
from torchvision import datasets, transforms
def subsample(x, n=0, m=200):
return x[..., n:m, n:m]
def _get_tstamp_string(tstamp_ix):
"""Calculates the timestamp in hh:mm based on the file index
Args:
tstamp_ix (int): Index of the single frame
Returns:
Str: hh:mm
"""
total_minutes = tstamp_ix*5
hours = total_minutes // 60
minutes = total_minutes % 60
return hours, minutes
class trafic4cast_dataset(torch.utils.data.Dataset):
"""Dataloader for trafic4cast data
Attributes:
compression (str): h5py compression method used for the precomputed files
do_precomp_path (list): raw files that still need to be preprocessed
num_frames (int): number of frames per clip (12 input + 3 prediction)
reduce (bool): collapse the time dimension into the channel dimension
source_root (str): directory with the raw competition data
split_type (str): one of 'training', 'validation' or 'test'
target_file_paths (list): paths of the preprocessed h5 files
target_root (str): directory used to store the preprocessed data
transform (callable): transform applied to x before returning it
valid_test_clips (list): (file path, frame index) tuples of valid test clips
"""
def __init__(self, source_root, target_root="precomuted_data",
split_type='train',
cities=['Berlin', 'Istanbul', 'Moscow'],
transform=None, reduce=False, compression=None,
num_frames=15, do_subsample=None, filter_test_times=False,
return_features=False, return_city=False):
"""Dataloader for the trafic4cast competition
Usage Dataloader:
The dataloader is situated in "videoloader.py", to use it, you have
to download the competition data and set two paths. "source_root"
and "target_root".
source_root: Is the directory with the raw competition data.
The expected file structure is shown below.
target_root: This directory will be used to store the
preprocessed data (about 200 GB)
Expected folder structure for raw data:
-source_root
- Berlin
-Berlin_test
-Berlin_training
-Berlin_validation
-Istanbul
-Istanbul_test
-…
-Moscow
-…
Args:
source_root (str): Is the directory with the raw competition data.
target_root (str, optional): This directory will be used to store the
preprocessed data
split_type (str, optional): Can be ['training', 'validation', 'test']
cities (list, optional): This can be used to limit the data loader to a
subset of cities. Has to be a list! Default is ['Berlin', 'Moscow', 'Istanbul']
transform (None, optional): Transform applied to x before returning it.
reduce (bool, optional): This option collapses the time dimension into the
(color) channel dimension.
compression (str, optional): The h5py compression method to store the
preprocessed data. 'compression=None' is the fastest.
num_frames (int, optional): Number of frames per clip (default 15; 12 input frames plus 3 prediction frames).
do_subsample (tuple, optional): Tuple of two integers. Returns only a part of the image. Slices the
image in the 'pixel' dimensions with x = x[n:m, n:m]. with m>n
filter_test_times (bool, optional): Filters output data, such that only valid (city-dependend) test-times are returned.
"""
self.reduce = reduce
self.source_root = source_root
self.target_root = target_root
self.transform = transform
self.split_type = split_type
self.compression = compression
self.cities = cities
self.num_frames = num_frames
self.subsample = False
self.filter_test_times = filter_test_times
self.return_features = return_features
self.return_city = return_city
if self.filter_test_times:
tt_dict2 = {}
tt_dict = pickle.load(open(os.path.join('.', 'utils', 'test_timestamps.dict'), "rb"))
for city, values in tt_dict.items():
values.sort()
tt_dict2[city] = values
self.valid_test_times = tt_dict2
if do_subsample is not None:
self.subsample = True
self.n = do_subsample[0]
self.m = do_subsample[1]
source_file_paths = []
for city in cities:
source_file_paths = source_file_paths + glob.glob(
os.path.join(self.source_root, city, '*_' + self.split_type,
'*.h5'))
do_precomp_path = []
missing_target_files = []
for raw_file_path in source_file_paths:
target_file = raw_file_path.replace(
self.source_root, self.target_root)
if not os.path.exists(target_file):
do_precomp_path.append(raw_file_path)
missing_target_files.append(target_file)
self.do_precomp_path = do_precomp_path
target_dirs = list(set([str(Path(x).parent)
for x in missing_target_files]))
for target_dir in target_dirs:
if not os.path.exists(target_dir):
os.makedirs(target_dir)
with Pool() as pool:
pool.map(self.precompute_clip, self.do_precomp_path)
pool.close()
pool.join()
target_file_paths = []
for city in cities:
target_file_paths = target_file_paths + glob.glob(
os.path.join(self.target_root, city, '*_' + self.split_type,
'*.h5'))
self.target_file_paths = target_file_paths
if self.split_type == 'test':
precomp_read_test = partial(self.precompute_clip, mode='reading_test')
with Pool() as pool:
valid_test_clips = pool.map(precomp_read_test,
self.target_file_paths)
pool.close()
pool.join()
valid_test_clips = [valid_tuple for sublist in valid_test_clips
for valid_tuple in sublist]
valid_test_clips.sort()
self.valid_test_clips = valid_test_clips
def precompute_clip(self, source_path, mode='writing'):
"""Summary
Args:
source_path (TYPE): Description
mode (str, optional): Description
Returns:
TYPE: Description
"""
target_path = source_path.replace(self.source_root, self.target_root)
f_source = h5py.File(source_path, 'r')
data1 = f_source['array']
data1 = data1[:]
if mode == 'writing':
data1 = np.moveaxis(data1, 3, 1)
f_target = h5py.File(target_path, 'w')
dset = f_target.create_dataset('array', (288, 3, 495, 436),
chunks=(1, 3, 495, 436),
dtype='uint8', data=data1,
compression=self.compression)
f_target.close()
if mode == 'reading_test':
valid_test_clips = []
for tstamp_ix in range(288-15):
clip = data1[tstamp_ix:tstamp_ix+self.num_frames, :, :, :]
sum_first_train_frame = np.sum(clip[0, :, :, :])
sum_last_train_frame = np.sum(clip[11, :, :, :])
if (sum_first_train_frame != 0) and (sum_last_train_frame != 0):
valid_test_clips.append((source_path, tstamp_ix))
f_source.close()
if mode == 'reading_test':
return valid_test_clips
def __len__(self):
if self.split_type == 'test':
return len(self.valid_test_clips)
elif self.filter_test_times:
return len(self.target_file_paths) * 5
else:
return len(self.target_file_paths) * 272
def __getitem__(self, idx):
"""Summary
Args:
idx (TYPE): Description
Returns:
TYPE: Description
"""
return_dict = {}
if torch.is_tensor(idx):
idx = idx.tolist()
if self.split_type == 'test':
target_file_path, tstamp_ix = self.valid_test_clips[idx]
elif self.filter_test_times:
file_ix = idx // 5
valid_tstamp_ix = idx % 5
target_file_path = self.target_file_paths[file_ix]
city_name_path = Path(target_file_path.replace(self.target_root,''))
city_name = city_name_path.parts[1]
tstamp_ix = self.valid_test_times[city_name][valid_tstamp_ix]
else:
file_ix = idx // 272
tstamp_ix = idx % 272
target_file_path = self.target_file_paths[file_ix]
if self.return_features:
# create feature vector
date_string = Path(target_file_path).name.split('_')[0]
date_datetime = dt.datetime.strptime(date_string, '%Y%m%d')
hour, minute = _get_tstamp_string(tstamp_ix)
# feature_vector = []
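# Cyclical encoding (comment added): hour/minute/month are mapped onto the
# unit circle via sin/cos so that wrap-around neighbours (e.g. 23:55 and
# 00:05) stay close in feature space; weekday and week number are scaled
# to [0, 1].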
sin_hours = np.sin(2*np.pi/24 * hour)
cos_hours = np.cos(2*np.pi/24 * hour)
sin_mins = np.sin(2*np.pi/60 * minute)
cos_mins = np.cos(2*np.pi/60 * minute)
sin_month = np.sin(2*np.pi/12 * date_datetime.month)
cos_month = np.cos(2*np.pi/12 * date_datetime.month)
weekday_ix = date_datetime.weekday() / 6
week_number = date_datetime.isocalendar()[1] / 52
feature_vector = np.asarray([sin_hours, cos_hours, sin_mins,
cos_mins, sin_month, cos_month,
weekday_ix, week_number]).ravel()
feature_vector = torch.from_numpy(feature_vector)
feature_vector = feature_vector.to(dtype=torch.float)
return_dict['feature_vector'] = feature_vector
if self.return_city:
city_name_path = Path(target_file_path.replace(self.target_root,''))
city_name = city_name_path.parts[1]
return_dict['city_names'] = city_name
# predict the next 3 frames (y) from the preceding 12 frames (x)
f = h5py.File(target_file_path, 'r')
sample = f.get('array')
x = sample[tstamp_ix:tstamp_ix+12, :, :, :]
y = sample[tstamp_ix+12:tstamp_ix+15, :, :, :]
if self.reduce:
# stack all time dimensions into the channels.
# all channels of the same timestamp are kept together
x = np.moveaxis(x, (0, 1), (2, 3))
x = np.reshape(x, (495, 436, 36))
x = torch.from_numpy(x)
x = x.permute(2, 0, 1) # Dimensions: time/channels, h, w
y = np.moveaxis(y, (0, 1), (2, 3))
y = np.reshape(y, (495, 436, 9))
y = torch.from_numpy(y)
y = y.permute(2, 0, 1)
y = y.to(dtype=torch.float)  # cast from uint8 (ByteTensor) to float
x = x.to(dtype=torch.float)  # cast from uint8 (ByteTensor) to float
else:
x = torch.from_numpy(x)
y = torch.from_numpy(y)
y = y.to(dtype=torch.float)  # cast from uint8 (ByteTensor) to float
x = x.to(dtype=torch.float)  # cast from uint8 (ByteTensor) to float
f.close()
if self.subsample:
x = subsample(x,self.n,self.m)
y = subsample(y,self.n,self.m)
if self.transform is not None:
x = self.transform(x)
return x, y, return_dict | 34.942029 | 132 | 0.555454 | 1,436 | 12,055 | 4.443593 | 0.194986 | 0.03291 | 0.026328 | 0.017866 | 0.226297 | 0.156872 | 0.118163 | 0.105313 | 0.071149 | 0.058925 | 0 | 0.018006 | 0.355039 | 12,055 | 345 | 133 | 34.942029 | 0.801929 | 0.233513 | 0 | 0.181818 | 0 | 0 | 0.022516 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.032086 | false | 0.005348 | 0.064171 | 0.005348 | 0.139037 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
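# ---------------------------------------------------------------------------
# Illustrative usage sketch (added; not part of the original module). The
# directory paths are hypothetical.
# ---------------------------------------------------------------------------
# dataset = trafic4cast_dataset(source_root='/data/raw',
#                               target_root='/data/precomputed',
#                               split_type='training',
#                               cities=['Berlin'],
#                               reduce=True)  # x: (36, 495, 436), y: (9, 495, 436)
# loader = torch.utils.data.DataLoader(dataset, batch_size=2, shuffle=True)
# x, y, extras = next(iter(loader))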
15de738d2ebcbbe25e92a2d8f2d43e4cb6af3d82 | 1,760 | py | Python | pychunkedgraph/ingest/ingestion_utils.py | perlman/PyChunkedGraph | 2c582f46a8292010e8f9f54c94c63af0b172bdad | [
"MIT"
] | null | null | null | pychunkedgraph/ingest/ingestion_utils.py | perlman/PyChunkedGraph | 2c582f46a8292010e8f9f54c94c63af0b172bdad | [
"MIT"
] | null | null | null | pychunkedgraph/ingest/ingestion_utils.py | perlman/PyChunkedGraph | 2c582f46a8292010e8f9f54c94c63af0b172bdad | [
"MIT"
] | null | null | null | import numpy as np
from pychunkedgraph.backend import chunkedgraph, chunkedgraph_utils
import cloudvolume
def initialize_chunkedgraph(cg_table_id, ws_cv_path, chunk_size, cg_mesh_dir,
fan_out=2, instance_id=None, project_id=None):
""" Initalizes a chunkedgraph on BigTable
:param cg_table_id: str
name of chunkedgraph
:param ws_cv_path: str
path to watershed segmentation on Google Cloud
:param chunk_size: np.ndarray
array of three ints
:param cg_mesh_dir: str
mesh folder name
:param fan_out: int
fan out of chunked graph (2 == Octree)
:param instance_id: str
Google instance id
:param project_id: str
Google project id
:return: ChunkedGraph
"""
ws_cv = cloudvolume.CloudVolume(ws_cv_path)
bbox = np.array(ws_cv.bounds.to_list()).reshape(2, 3)
# assert np.all(bbox[0] == 0)
# assert np.all((bbox[1] % chunk_size) == 0)
n_chunks = ((bbox[1] - bbox[0]) / chunk_size).astype(int)  # np.int was removed in NumPy 1.24
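# Comment added (interpretation, not from the source): n_layers is the number
# of abstraction layers needed so that repeated fan_out-fold coarsening covers
# the largest chunk-grid dimension, plus two extra layers (the atomic layer
# and the root layer).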
n_layers = int(np.ceil(chunkedgraph_utils.log_n(np.max(n_chunks), fan_out))) + 2
dataset_info = ws_cv.info
dataset_info["mesh"] = cg_mesh_dir
dataset_info["data_dir"] = ws_cv_path
dataset_info["graph"] = {"chunk_size": [int(s) for s in chunk_size]}
kwargs = {"table_id": cg_table_id,
"chunk_size": chunk_size,
"fan_out": np.uint64(fan_out),
"n_layers": np.uint64(n_layers),
"dataset_info": dataset_info,
"is_new": True}
if instance_id is not None:
kwargs["instance_id"] = instance_id
if project_id is not None:
kwargs["project_id"] = project_id
cg = chunkedgraph.ChunkedGraph(**kwargs)
return cg
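# ---------------------------------------------------------------------------
# Illustrative usage sketch (added; not part of the original module). Table
# name, CloudVolume path and chunk size below are hypothetical.
# ---------------------------------------------------------------------------
# cg = initialize_chunkedgraph(
#     cg_table_id="my_chunkedgraph",
#     ws_cv_path="gs://my-bucket/ws_segmentation",
#     chunk_size=np.array([512, 512, 64]),
#     cg_mesh_dir="meshes")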
| 30.877193 | 84 | 0.64375 | 251 | 1,760 | 4.25498 | 0.318725 | 0.067416 | 0.029963 | 0.02809 | 0.031835 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011416 | 0.253409 | 1,760 | 56 | 85 | 31.428571 | 0.80137 | 0.289205 | 0 | 0 | 0 | 0 | 0.084041 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04 | false | 0 | 0.12 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
15dec2f55095e3b6ec802c41f0cedadb58146312 | 2,691 | py | Python | brew_gui.py | mburgess00/brew_controller | 913e3b37b9421759db5186e5f0e44cf8f4fd7f6a | [
"Apache-2.0"
] | null | null | null | brew_gui.py | mburgess00/brew_controller | 913e3b37b9421759db5186e5f0e44cf8f4fd7f6a | [
"Apache-2.0"
] | null | null | null | brew_gui.py | mburgess00/brew_controller | 913e3b37b9421759db5186e5f0e44cf8f4fd7f6a | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python3
from guizero import App, Text, Slider, Combo, PushButton, Box, Picture
pause = True
def readsensors():
return {"hlt" : 160, "rims" : 152, "bk" : 75}
def handlepause():
global pause
global pauseState
print("Pause Button pressed")
if pause:
print("running")
pause = not pause
pauseState.value=("Running")
hltFlame.visible=True
rimsFlame.visible=True
bkFlame.visible=True
else:
print("pausing")
pause = not pause
pauseState.value=("Paused")
hltFlame.visible=False
rimsFlame.visible=False
bkFlame.visible=False
return
app = App(title="Brew GUI", width=1280, height=768, layout="grid")
vertPad = Picture(app, image="blank_vert.gif", grid=[0,0])
hltBox = Box(app, layout="grid", grid=[1,0])
hltPad = Picture(hltBox, image="blank.gif", grid=[0,0])
hltTitle = Text(hltBox, text="HLT", grid=[0,1], align="top")
hltText = Text(hltBox, text="180", grid=[0,2], align="top")
hltSlider = Slider(hltBox, start=212, end=100, horizontal=False, grid=[0,3], align="top")
hltSlider.tk.config(length=500, width=50)
hltFlamePad = Picture(hltBox, image="blank_flame.gif", grid=[0,4])
hltFlame = Picture(hltBox, image="flame.gif", grid=[0,4])
rimsBox = Box(app, layout="grid", grid=[2,0])
rimsPad = Picture(rimsBox, image="blank.gif", grid=[0,0])
rimsTitle = Text(rimsBox, text="RIMS", grid=[0,1], align="top")
rimsText = Text(rimsBox, text="180", grid=[0,2], align="top")
rimsSlider = Slider(rimsBox, start=212, end=100, horizontal=False, grid=[0,3], align="top")
rimsSlider.tk.config(length=500, width=50)
rimsFlamePad = Picture(rimsBox, image="blank_flame.gif", grid=[0,4])
rimsFlame = Picture(rimsBox, image="flame.gif", grid=[0,4])
bkBox = Box(app, layout="grid", grid=[3,0])
bkPad = Picture(bkBox, image="blank.gif", grid=[0,0])
bkTitle = Text(bkBox, text="BK", grid=[0,1], align="top")
bkText = Text(bkBox, text="75", grid=[0,2], align="top")
bkSlider = Slider(bkBox, start=100, end=0, horizontal=False, grid=[0,3], align="top")
bkSlider.tk.config(length=500, width=50)
bkFlamePad = Picture(bkBox, image="blank_flame.gif", grid=[0,4])
bkFlame = Picture(bkBox, image="flame.gif", grid=[0,4])
modeBox = Box(app, layout="grid", grid=[4,0])
modePad = Picture(modeBox, image="blank.gif", grid=[0,0])
modeTitle = Text(modeBox, text="Mode", grid=[0,0], align="top")
mode = Combo(modeBox, options=["HLT", "RIMS", "BK"], grid=[1,0])
pauseState = Text(modeBox, text="Paused", grid=[0,1])
pauseButton = PushButton(modeBox, icon="pause-play.gif", command=handlepause, grid=[1,1])
hltFlame.visible=False
rimsFlame.visible=False
bkFlame.visible=False
app.display()
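# Illustrative sketch (added; not part of the original script): readsensors()
# is defined but never called; guizero's repeat() could poll it to refresh
# the temperature labels, e.g. before app.display():
#
# def refresh():
#     temps = readsensors()
#     hltText.value = temps["hlt"]
#     rimsText.value = temps["rims"]
#     bkText.value = temps["bk"]
# app.repeat(1000, refresh)  # poll once per second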
| 37.901408 | 91 | 0.670011 | 393 | 2,691 | 4.577608 | 0.264631 | 0.061145 | 0.048916 | 0.043357 | 0.414675 | 0.307949 | 0.193997 | 0.114508 | 0.114508 | 0.047804 | 0 | 0.047783 | 0.136752 | 2,691 | 70 | 92 | 38.442857 | 0.726647 | 0.007804 | 0 | 0.135593 | 0 | 0 | 0.107156 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033898 | false | 0 | 0.016949 | 0.016949 | 0.084746 | 0.050847 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
15e018d003308b6e32514ff4f0cb219f8f099c3b | 2,498 | py | Python | tests/integration/test_integration.py | nuvolos-cloud/resolos | 0918066cab7b11ef04ae005f3e052b14a65ded68 | [
"MIT"
] | 1 | 2021-11-30T06:47:24.000Z | 2021-11-30T06:47:24.000Z | tests/integration/test_integration.py | nuvolos-cloud/resolos | 0918066cab7b11ef04ae005f3e052b14a65ded68 | [
"MIT"
] | 1 | 2021-04-08T12:56:39.000Z | 2021-04-08T12:56:39.000Z | tests/integration/test_integration.py | nuvolos-cloud/resolos | 0918066cab7b11ef04ae005f3e052b14a65ded68 | [
"MIT"
] | null | null | null | from click.testing import CliRunner
from resolos.interface import (
    res_remote_add,
    res_remote_remove,
    res_init,
    res_sync,
    res
)
from resolos.remote import read_remote_db, list_remote_ids, delete_remote
from tests.common import verify_result
import logging
from pathlib import Path
import os
logger = logging.getLogger(__name__)
USER = os.environ["TEST_USER"]
PWD = os.environ["SSHPASS"]
HOST = os.environ["TEST_HOST"]
class TestIntegration:
remote_id = "test_remote"
def test_job(self, *args):
runner = CliRunner()
with runner.isolated_filesystem() as fs:
# Initialize a new local project
logger.info(f"Initializing new project in {fs}")
verify_result(runner.invoke(res, ["-v", "DEBUG", "info"]))
verify_result(runner.invoke(res_init, ["-y"]))
# Add remote
logger.info(f"### Adding remote in {fs}")
verify_result(
runner.invoke(
res_remote_add,
[self.remote_id, "-y", "-h", HOST, "-p", "3144", "-u", USER, "--remote-path", "/data/integration_test", "--conda-install-path", "/data", "--conda-load-command", "source /data/miniconda/bin/activate"]
)
)
remotes_list = read_remote_db()
assert self.remote_id in remotes_list
remotes_settings = remotes_list[self.remote_id]
assert remotes_settings["hostname"] == HOST
assert remotes_settings["username"] == USER
# Run job
with (Path(fs) / "test_script.py").open("w") as py:
py.write("""with open('test_output.txt', 'w') as txtf:
txtf.write('Hello, world!')""")
logger.info(f"### Syncing with remote {self.remote_id}")
verify_result(runner.invoke(res_sync, ["-r", self.remote_id]))
logger.info(f"### Running test job on {self.remote_id}")
verify_result(runner.invoke(res, ["-v", "DEBUG", "job", "-r", self.remote_id, "run", "-p", "normal", "python test_script.py"]))
# Sync back job results
logger.info(f"### Syncing results from remote {self.remote_id}")
verify_result(runner.invoke(res_sync, ["-r", self.remote_id]))
assert (Path(fs) / "test_output.txt").exists()
# Remove remote
logger.info(f"### Removing remote {self.remote_id}")
            verify_result(runner.invoke(res_remote_remove, [self.remote_id])) | 43.068966 | 219 | 0.601281 | 308 | 2,498 | 4.681818 | 0.331169 | 0.066574 | 0.09154 | 0.116505 | 0.222607 | 0.203884 | 0.195562 | 0.144244 | 0.117198 | 0.085992 | 0 | 0.002158 | 0.257806 | 2,498 | 58 | 220 | 43.068966 | 0.77562 | 0.034027 | 0 | 0.081633 | 0 | 0 | 0.234635 | 0.030316 | 0 | 0 | 0 | 0 | 0.081633 | 1 | 0.020408 | false | 0.020408 | 0.142857 | 0 | 0.204082 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
15e034a0325db7cea2ebf82178c6ea8ad80d5cda | 946 | py | Python | test/test_add_group.py | romanovaes/python_training | 5df3a9b716e7659fb8f61e0b55e5217cc6a1a89e | [
"Apache-2.0"
] | null | null | null | test/test_add_group.py | romanovaes/python_training | 5df3a9b716e7659fb8f61e0b55e5217cc6a1a89e | [
"Apache-2.0"
] | null | null | null | test/test_add_group.py | romanovaes/python_training | 5df3a9b716e7659fb8f61e0b55e5217cc6a1a89e | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
from builtins import *
from model.group import Group
import pytest
import allure
#@pytest.mark.parametrize("group", testdata, ids=[repr(x) for x in testdata])
@allure.step('test_add_group')
def test_add_group(app, db, json_groups, check_ui):
    group = json_groups
    #with pytest.allure.step('Given a group list'):
    old_group = db.get_group_list()
    #with pytest.allure.step('When I add a group % to the list' % group):
    app.group.create(group)
    #with pytest.allure.step('Then the group list is equal to the old list with the added group'):
    new_group = db.get_group_list()
    old_group.append(group)
    assert sorted(old_group, key=Group.id_or_max) == sorted(new_group, key=Group.id_or_max)
    if check_ui:
        assert sorted(map(app.group.clean_gap_from_group, new_group), key=Group.id_or_max) == sorted(app.group.get_group_list(), key=Group.id_or_max)
| 35.037037 | 153 | 0.689218 | 151 | 946 | 4.112583 | 0.364238 | 0.072464 | 0.064412 | 0.077295 | 0.21095 | 0.125604 | 0.125604 | 0 | 0 | 0 | 0 | 0.001305 | 0.190275 | 946 | 26 | 154 | 36.384615 | 0.809399 | 0.32241 | 0 | 0 | 0 | 0 | 0.022187 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 1 | 0.071429 | false | 0 | 0.285714 | 0 | 0.357143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
15e3d38045eea34ae5ee1b0e2ce54a0a8b72223a | 3,988 | py | Python | test/test_docx_context.py | ieric2/docx2python | 4a1266811f9bd71f5eea5e0d458a391fd9eb4f73 | [
"MIT"
] | 52 | 2019-07-08T19:37:45.000Z | 2022-03-30T11:36:08.000Z | test/test_docx_context.py | ieric2/docx2python | 4a1266811f9bd71f5eea5e0d458a391fd9eb4f73 | [
"MIT"
] | 29 | 2019-08-29T09:48:24.000Z | 2022-03-13T13:58:58.000Z | test/test_docx_context.py | ieric2/docx2python | 4a1266811f9bd71f5eea5e0d458a391fd9eb4f73 | [
"MIT"
] | 23 | 2019-08-29T11:33:13.000Z | 2022-03-03T17:22:35.000Z | #!/usr/bin/env python3
# _*_ coding: utf-8 _*_
"""Test docx2python.docx_context.py
author: Shay Hill
created: 6/26/2019
"""
import os
import shutil
import zipfile
from collections import defaultdict
from typing import Any, Dict
import pytest
from docx2python.docx_context import (
    collect_docProps,
    collect_numFmts,
    get_context,
    pull_image_files,
)
class TestCollectNumFmts:
"""Test strip_text.collect_numFmts """
# noinspection PyPep8Naming
def test_gets_formats(self) -> None:
"""Retrieves formats from example.docx
This isn't a great test. There are numbered lists I've added then removed as
I've edited my test docx. These still appear in the docx file. I could
compare directly with the extracted numbering xml file, but even then I'd be
comparing to something I don't know to be accurate. This just tests that all
numbering formats are represented.
"""
zipf = zipfile.ZipFile("resources/example.docx")
numId2numFmts = collect_numFmts(zipf.read("word/numbering.xml"))
formats = {x for y in numId2numFmts.values() for x in y}
assert formats == {
"lowerLetter",
"upperLetter",
"lowerRoman",
"upperRoman",
"bullet",
"decimal",
}
class TestCollectDocProps:
"""Test strip_text.collect_docProps """
def test_gets_properties(self) -> None:
"""Retrieves properties from docProps"""
zipf = zipfile.ZipFile("resources/example.docx")
props = collect_docProps(zipf.read("docProps/core.xml"))
assert props["creator"] == "Shay Hill"
assert props["lastModifiedBy"] == "Shay Hill"
@pytest.fixture
def docx_context() -> Dict[str, Any]:
"""result of running strip_text.get_context"""
zipf = zipfile.ZipFile("resources/example.docx")
return get_context(zipf)
# noinspection PyPep8Naming
class TestGetContext:
"""Text strip_text.get_context """
def test_docProp2text(self, docx_context) -> None:
"""All targets mapped"""
zipf = zipfile.ZipFile("resources/example.docx")
props = collect_docProps(zipf.read("docProps/core.xml"))
assert docx_context["docProp2text"] == props
def test_numId2numFmts(self, docx_context) -> None:
"""All targets mapped"""
zipf = zipfile.ZipFile("resources/example.docx")
numId2numFmts = collect_numFmts(zipf.read("word/numbering.xml"))
assert docx_context["numId2numFmts"] == numId2numFmts
def test_numId2count(self, docx_context) -> None:
"""All numIds mapped to a default dict defaulting to 0"""
for numId in docx_context["numId2numFmts"]:
assert isinstance(docx_context["numId2count"][numId], defaultdict)
assert docx_context["numId2count"][numId][0] == 0
def test_lists(self) -> None:
"""Pass silently when no numbered or bulleted lists."""
zipf = zipfile.ZipFile("resources/basic.docx")
context = get_context(zipf)
assert "numId2numFmts" not in context
assert "numId2count" not in context
class TestPullImageFiles:
"""Test strip_text.pull_image_files """
def test_pull_image_files(self) -> None:
"""Copy image files to output path."""
zipf = zipfile.ZipFile("resources/example.docx")
context = get_context(zipf)
pull_image_files(zipf, context, "delete_this/path/to/images")
assert os.listdir("delete_this/path/to/images") == ["image1.png", "image2.jpg"]
# clean up
shutil.rmtree("delete_this")
def test_no_image_files(self) -> None:
"""Pass silently when no image files."""
zipf = zipfile.ZipFile("resources/basic.docx")
context = get_context(zipf)
pull_image_files(zipf, context, "delete_this/path/to/images")
assert os.listdir("delete_this/path/to/images") == []
# clean up
shutil.rmtree("delete_this")
| 33.79661 | 87 | 0.658977 | 476 | 3,988 | 5.388655 | 0.327731 | 0.060039 | 0.05614 | 0.084211 | 0.363743 | 0.355166 | 0.284211 | 0.284211 | 0.284211 | 0.284211 | 0 | 0.010407 | 0.228937 | 3,988 | 117 | 88 | 34.08547 | 0.82374 | 0.243731 | 0 | 0.287879 | 0 | 0 | 0.195848 | 0.081661 | 0 | 0 | 0 | 0 | 0.166667 | 1 | 0.136364 | false | 0 | 0.106061 | 0 | 0.318182 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
15e42de122c93447226408dd404b3ccbe892d9e6 | 1,022 | py | Python | src/transporters/approximator/transporter.py | mucharafal/optics_generator_python | c14d4e5f19f921f4dc0a98129bca9d31754b72ad | [
"MIT"
] | null | null | null | src/transporters/approximator/transporter.py | mucharafal/optics_generator_python | c14d4e5f19f921f4dc0a98129bca9d31754b72ad | [
"MIT"
] | null | null | null | src/transporters/approximator/transporter.py | mucharafal/optics_generator_python | c14d4e5f19f921f4dc0a98129bca9d31754b72ad | [
"MIT"
] | null | null | null | import transporters.approximator.runner as ra
from data.parameters_names import ParametersNames as Parameters
def transport(approximator, particles):
"""matrix in format returned by data.particles_generator functions"""
segments = dict()
segments["start"] = particles
matrix_for_transporter = particles.get_default_coordinates_of(Parameters.X, Parameters.THETA_X, Parameters.Y,
Parameters.THETA_Y, Parameters.PT)
transported_particles = ra.transport(approximator, matrix_for_transporter)
segments["end"] = particles.__class__(transported_particles, get_mapping())
return segments
def get_mapping():
    mapping = {
        Parameters.X: 0,
        Parameters.THETA_X: 1,
        Parameters.Y: 2,
        Parameters.THETA_Y: 3,
        Parameters.PT: 4
    }
    return mapping
def get_transporter(approximator):
    def transporter(particles):
        return transport(approximator, particles)
    return transporter
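# Editor's addition (illustrative, not in the original module): typical usage
# of the closure returned by get_transporter(); the `approximator` and
# `particles` objects are assumed to come from the surrounding project and are
# not constructed here.
#
#     transporter = get_transporter(approximator)
#     segments = transporter(particles)
#     start_particles, end_particles = segments["start"], segments["end"]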
| 28.388889 | 113 | 0.683953 | 106 | 1,022 | 6.386792 | 0.415094 | 0.088626 | 0.088626 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006418 | 0.237769 | 1,022 | 35 | 114 | 29.2 | 0.862644 | 0.061644 | 0 | 0 | 0 | 0 | 0.008395 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.173913 | false | 0 | 0.086957 | 0.043478 | 0.434783 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
15f39d0f421f2c653443422d6ac14afb93981bf5 | 5,264 | py | Python | app/projects/tests/test_portfolio_api.py | nestor-san/cooperation-fit | 1a922233345698970c7e18e6213ad0320de70cce | [
"MIT"
] | null | null | null | app/projects/tests/test_portfolio_api.py | nestor-san/cooperation-fit | 1a922233345698970c7e18e6213ad0320de70cce | [
"MIT"
] | null | null | null | app/projects/tests/test_portfolio_api.py | nestor-san/cooperation-fit | 1a922233345698970c7e18e6213ad0320de70cce | [
"MIT"
] | null | null | null | from django.contrib.auth import get_user_model
from django.urls import reverse
from django.test import TestCase
from rest_framework import status
from rest_framework.test import APIClient
from core.models import PortfolioItem
from projects.serializers import PortfolioItemSerializer
PORTFOLIO_URL = reverse('projects:portfolioitem-list')
def detail_url(portfolio_id):
"""Return the detail URL of a portfolio item"""
return reverse('projects:portfolioitem-detail', args=[portfolio_id])
class PublicPortfolioApiTests(TestCase):
"""Test the publicly available projects API"""
def setUp(self):
self.client = APIClient()
def test_login_not_required(self):
"""Test that login is not required to access the endpoint"""
res = self.client.get(PORTFOLIO_URL)
self.assertEqual(res.status_code, status.HTTP_200_OK)
def test_retrieve_portfolio_list(self):
"""Test retrieving a list of portfolio items"""
sample_user = get_user_model().objects.create_user(
'test@xemob.com',
'testpass'
)
PortfolioItem.objects.create(user=sample_user,
name='Portfolio Item 1')
PortfolioItem.objects.create(user=sample_user,
name='Portfolio Item 2')
res = self.client.get(PORTFOLIO_URL)
portfolio_items = PortfolioItem.objects.all().order_by('-name')
serializer = PortfolioItemSerializer(portfolio_items, many=True)
self.assertEqual(res.status_code, status.HTTP_200_OK)
self.assertEqual(res.data, serializer.data)
class PrivatePortfolioApiTests(TestCase):
"""Test the private portfolio API"""
def setUp(self):
self.client = APIClient()
self.user = get_user_model().objects.create_user(
'test@xemob.com',
'testpass'
)
self.client.force_authenticate(self.user)
def test_create_portfolio_item_successfully(self):
"""Test creating a new portfolio item"""
payload = {'name': 'New portfolio item', 'user': self.user.id}
self.client.post(PORTFOLIO_URL, payload)
exists = PortfolioItem.objects.filter(
user=self.user,
name=payload['name']
).exists()
self.assertTrue(exists)
def test_create_portfolio_item_invalid(self):
"""Test creating a portfolio item with invalid payload"""
payload = {'name': '', 'user': self.user.id}
res = self.client.post(PORTFOLIO_URL, payload)
self.assertEqual(res.status_code, status.HTTP_400_BAD_REQUEST)
def test_partial_portfolio_update_successfully(self):
"""Test partial updating a project by owner is successful"""
portfolio_item = PortfolioItem.objects.create(user=self.user,
name='Portfolio Item 1')
payload = {'name': 'Alt portfolio item'}
url = detail_url(portfolio_item.id)
res = self.client.patch(url, payload)
portfolio_item.refresh_from_db()
self.assertEqual(res.status_code, status.HTTP_200_OK)
self.assertEqual(portfolio_item.name, payload['name'])
def test_partial_portfolio_update_invalid(self):
"""Test updating a portfolio item by not owner is invalid"""
self.user2 = get_user_model().objects.create_user(
'other@xemob.com',
'testpass'
)
portfolio_item = PortfolioItem.objects.create(user=self.user2,
name='Portfolio Item 1')
payload = {'name': 'Alt portfolio item'}
url = detail_url(portfolio_item.id)
res = self.client.patch(url, payload)
portfolio_item.refresh_from_db()
self.assertEqual(res.status_code, status.HTTP_403_FORBIDDEN)
self.assertNotEqual(portfolio_item.name, payload['name'])
def test_full_portfolio_update_successful(self):
"""Test updating a portfolio item by owner is successful with PUT"""
portfolio_item = PortfolioItem.objects.create(user=self.user,
name='Portfolio Item 1')
payload = {'user': self.user.id, 'name': 'Alt portfolio item'}
url = detail_url(portfolio_item.id)
res = self.client.put(url, payload)
portfolio_item.refresh_from_db()
self.assertEqual(res.status_code, status.HTTP_200_OK)
self.assertEqual(portfolio_item.name, payload['name'])
def test_full_portfolio_update_invalid(self):
"""Test updateing a portfolio item by not owner is invalid with PUT"""
self.user2 = get_user_model().objects.create_user(
'other@xemob.com',
'testpass'
)
portfolio_item = PortfolioItem.objects.create(user=self.user2,
name='Portfolio Item 1')
payload = {'user': self.user.id, 'name': 'Alt portfolio item'}
url = detail_url(portfolio_item.id)
res = self.client.put(url, payload)
portfolio_item.refresh_from_db()
self.assertEqual(res.status_code, status.HTTP_403_FORBIDDEN)
self.assertNotEqual(portfolio_item.name, payload['name'])
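    # Editor's note (illustrative, not in the original file): the four update
    # tests differ only in HTTP verb, payload and ownership; a small helper
    # could remove the duplication, e.g. (sketch):
    #
    #     def _update(self, owner, method, payload):
    #         item = PortfolioItem.objects.create(user=owner, name='Portfolio Item 1')
    #         res = getattr(self.client, method)(detail_url(item.id), payload)
    #         item.refresh_from_db()
    #         return res, item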
| 38.992593 | 78 | 0.644187 | 606 | 5,264 | 5.412541 | 0.181518 | 0.13872 | 0.051829 | 0.05122 | 0.629268 | 0.590854 | 0.553659 | 0.506707 | 0.486585 | 0.43872 | 0 | 0.007912 | 0.255699 | 5,264 | 134 | 79 | 39.283582 | 0.82925 | 0.101634 | 0 | 0.56383 | 0 | 0 | 0.084956 | 0.011984 | 0 | 0 | 0 | 0 | 0.138298 | 1 | 0.117021 | false | 0.042553 | 0.074468 | 0 | 0.223404 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
15f6ebf3d219ce6b1cd4eb680d9639aa61bcb259 | 369 | py | Python | test.py | liyao001/EVBUS | 8730ce6b062bc31df27506a06723dee3b5ab511a | [
"Apache-2.0"
] | null | null | null | test.py | liyao001/EVBUS | 8730ce6b062bc31df27506a06723dee3b5ab511a | [
"Apache-2.0"
] | null | null | null | test.py | liyao001/EVBUS | 8730ce6b062bc31df27506a06723dee3b5ab511a | [
"Apache-2.0"
] | null | null | null | from EVBUS import EVBUS
from sklearn.datasets import load_boston
import sklearn.model_selection as xval
boston = load_boston()
Y = boston.data[:, 12]
X = boston.data[:, 0:12]
bos_X_train, bos_X_test, bos_y_train, bos_y_test = xval.train_test_split(X, Y, test_size=0.3)
evbus = EVBUS.varU(bos_X_train, bos_y_train, bos_X_test)
v = evbus.calculate_variance()
print(v)
| 26.357143 | 93 | 0.772358 | 68 | 369 | 3.882353 | 0.397059 | 0.060606 | 0.068182 | 0.090909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021472 | 0.116531 | 369 | 13 | 94 | 28.384615 | 0.788344 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.3 | 0 | 0.3 | 0.1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
15f98430ae44305a9e31a13aeb782706e3a1fd29 | 2,141 | py | Python | aws/olympic-api/olympic/app.py | kevinle-1/olympic-api | c341328eb9c4ce26fcd08199abf1fb996deebbbf | [
"MIT"
] | 4 | 2021-07-29T06:29:33.000Z | 2021-08-31T22:38:21.000Z | aws/olympic-api/olympic/app.py | kevinle-1/olympic-api | c341328eb9c4ce26fcd08199abf1fb996deebbbf | [
"MIT"
] | null | null | null | aws/olympic-api/olympic/app.py | kevinle-1/olympic-api | c341328eb9c4ce26fcd08199abf1fb996deebbbf | [
"MIT"
] | 2 | 2021-07-25T08:51:52.000Z | 2021-07-25T18:06:24.000Z | import json
import requests
from bs4 import BeautifulSoup
PAGE_URL = 'https://olympics.com/tokyo-2020/olympic-games/en/results/all-sports/medal-standings.htm'
def get_table(html=None):
    if not html: html = requests.get(PAGE_URL).content
    site = BeautifulSoup(html, 'html.parser')
    table = site.find('table', {'id': 'medal-standing-table'})
    return table.findAll('tr')[1:]  # Remove header
def get_num(value):
    try:
        return int(value.find('a').getText())
    except:
        return 0
def get_counts(entry):
    values = entry.findAll('td', {'class': 'text-center'})
    return int(values[0].find('strong').getText()), {  # 4 total, 3 bronze, 2 silver, 1 gold, 0 rank
        'gold': get_num(values[1]),
        'silver': get_num(values[2]),
        'bronze': get_num(values[3]),
        'total': get_num(values[4]),
    }
def get_rankings():
    rankings = []
    for country in get_table():
        rank, medals = get_counts(country)
        rankings.append({
            'country': country.find('a', {'class': 'country'}).getText(),
            'country_alpha3': country.find('div', {'class': 'playerTag'})['country'],
            'rank': rank,
            'medals': medals
        })
    return rankings
def lambda_handler(event, context):
    try:
        country = event['queryStringParameters']['country']
    except:
        country = None
    print(f'Request -> Country: {country}')
    rankings = get_rankings()
    if country:
        if len(country) == 3:
            for country_ranking in rankings:
                if country == country_ranking['country_alpha3']:
                    rankings = country_ranking
    return {
        "statusCode": 200,
        "headers": {
            "Access-Control-Allow-Headers": "Content-Type,X-Amz-Date,X-Amz-Security-Token,Authorization,X-Api-Key,X-Requested-With,Accept,Access-Control-Allow-Methods,Access-Control-Allow-Origin,Access-Control-Allow-Headers",
            "Access-Control-Allow-Origin": "*",
            "Access-Control-Allow-Methods": "GET"
        },
        "body": json.dumps(rankings),
        "isBase64Encoded": False
    }
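# --- Editor's addition (illustrative, not in the original file): a local
# smoke test that calls the handler the way API Gateway would; the query
# string value "USA" is an arbitrary example (needs network access to run).
if __name__ == "__main__":
    fake_event = {"queryStringParameters": {"country": "USA"}}
    print(lambda_handler(fake_event, None))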
| 30.15493 | 225 | 0.601588 | 250 | 2,141 | 5.076 | 0.432 | 0.061466 | 0.085106 | 0.039401 | 0.066194 | 0.066194 | 0.066194 | 0 | 0 | 0 | 0 | 0.015442 | 0.243811 | 2,141 | 70 | 226 | 30.585714 | 0.768376 | 0.026623 | 0 | 0.072727 | 0 | 0.036364 | 0.294712 | 0.135577 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.054545 | 0 | 0.254545 | 0.018182 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
15fb00a29a6af7f533e2efe7c7832130560820ef | 5,348 | py | Python | crawlers/core/thread_types.py | nonemaw/YeTi | 92a3ba89f5b7fd8b2d5d3f5929ade0bf0b9e5cbe | [
"MIT"
] | 1 | 2017-10-04T12:21:20.000Z | 2017-10-04T12:21:20.000Z | crawlers/core/thread_types.py | nonemaw/YeTi | 92a3ba89f5b7fd8b2d5d3f5929ade0bf0b9e5cbe | [
"MIT"
] | null | null | null | crawlers/core/thread_types.py | nonemaw/YeTi | 92a3ba89f5b7fd8b2d5d3f5929ade0bf0b9e5cbe | [
"MIT"
] | null | null | null | import queue
import json
import logging
import threading
from crawlers.core.flags import FLAGS
class BaseThread(threading.Thread):
    def __init__(self, name: str, worker, pool):
        threading.Thread.__init__(self, name=name)
        self._worker = worker  # can be a Fetcher/Parser/Saver instance
        self._thread_pool = pool  # ThreadPool

    def running(self):
        return

    def run(self):
        logging.warning(f'{self.__class__.__name__}[{self.getName()}] started...')
        while True:
            try:
                # keep running self.working() and checking result
                # break (terminate) thread when self.working() failed
                # break (terminate) thread when queue is empty, and all jobs
                # are done
                if not self.running():
                    break
            except queue.Empty:
                if self._thread_pool.all_done():
                    break
            except Exception as e:
                import sys, os
                exc_type, exc_obj, exc_tb = sys.exc_info()
                fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1]
                logging.warning(f'{self.__class__.__name__} end: error={str(e)}, file={str(fname)}, line={str(exc_tb.tb_lineno)}')
                break
        logging.warning(f'{self.__class__.__name__}[{self.getName()}] ended...')
class FetcherThread(BaseThread):
    def __init__(self, name: str, worker, pool, session=None):
        super().__init__(name, worker, pool)
        self.session = session

    def running(self):
        """
        invoke Fetcher's working()
        content: (status_code, url, html_text)
        """
        priority, url, data, deep, repeat = self._thread_pool.get_task(FLAGS.FETCH)
        try:
            data = json.loads(data)
        except:
            data = {}
        fetch_result, data, content = self._worker.working(url, data, repeat, self.session)
        # fetch success, update FETCH counter, add task to task_queue_p, for
        # parser's further process
        if isinstance(data, dict):
            data = json.dumps(data)
        if fetch_result == 1:
            self._thread_pool.update_flag(FLAGS.FETCH, 1)
            self._thread_pool.put_task(FLAGS.PARSE, (priority, url, data, deep, content))
        # fetch failed, put back to task_queue_f and repeat later
        elif fetch_result == 0:
            self._thread_pool.put_task(FLAGS.FETCH, (priority + 1, url, data, deep, repeat + 1))
        # current round of fetcher is done, notify task_queue_f with
        # task_done() to stop block
        self._thread_pool.finish_task(FLAGS.FETCH)
        return False if fetch_result == -1 else True
class ParserThread(BaseThread):
    def __init__(self, name: str, worker, pool):
        super().__init__(name, worker, pool)

    def running(self):
        """
        invoke Parser's working()
        get all required urls from target html text
        content: (status_code, url, html_text)
        """
        priority, url, data, deep, content = self._thread_pool.get_task(FLAGS.PARSE)
        try:
            data = json.loads(data)
        except:
            data = {}
        parse_result = 1
        urls = []
        stamp = ()
        # if data is empty or its 'save' flag is falsy, parse the
        # html; otherwise skip
        if not data or not data.get('save'):
            parse_result, urls, stamp = self._worker.working(priority, url, data, deep, content)
        if parse_result > 0:
            self._thread_pool.update_flag(FLAGS.PARSE, 1)
            # add each url in urls list into task_queue_f, waiting for
            # fetcher's further process
            for _url, _data, _priority in urls:
                if isinstance(_data, dict):
                    _data = json.dumps(_data)
                self._thread_pool.put_task(FLAGS.FETCH, (_priority, _url, _data, deep + 1, 0))
            # add current url (already fetched/parsed) into task_queue_s,
            # waiting for saver's further process
            #
            # if data in task_queue_p has a positive 'save' value, or no data but with an url
            if (data and data.get('save')) or (not data and url):
                try:
                    # when saving to task_queue_s, delete 'save' key
                    del data['save']
                    del data['type']
                    data = json.dumps(data)
                except:
                    pass
                self._thread_pool.put_task(FLAGS.SAVE, (url, data, stamp))
        # current round of parser is done, notify task_queue_p with
        # task_done() to stop block
        self._thread_pool.finish_task(FLAGS.PARSE)
        return True
class SaverThread(BaseThread):
    def __init__(self, name: str, worker, pool):
        super().__init__(name, worker, pool)

    def running(self):
        """
        invoke Saver's working()
        """
        url, data, stamp = self._thread_pool.get_task(FLAGS.SAVE)
        save_result = self._worker.working(url, data, stamp)
        if save_result:
            self._thread_pool.update_flag(FLAGS.SAVE, 1)
        # current round of saver is done, notify task_queue_s with
        # task_done() to stop block
        self._thread_pool.finish_task(FLAGS.SAVE)
        return True
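# Editor's addition (illustrative): how a crawler might wire these threads up.
# The ThreadPool interface (get_task/put_task/update_flag/finish_task/all_done)
# is the one used above; the Fetcher/Parser/Saver worker classes and the
# thread counts are assumptions about the surrounding project.
#
#     pool = ThreadPool(...)
#     threads = [FetcherThread(f"fetcher-{i}", Fetcher(), pool) for i in range(4)]
#     threads += [ParserThread("parser-0", Parser(), pool)]
#     threads += [SaverThread("saver-0", Saver(), pool)]
#     for t in threads:
#         t.start()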
| 35.184211 | 130 | 0.583209 | 665 | 5,348 | 4.454135 | 0.237594 | 0.050641 | 0.070898 | 0.032073 | 0.411884 | 0.341999 | 0.259284 | 0.229575 | 0.139095 | 0.139095 | 0 | 0.003572 | 0.319559 | 5,348 | 151 | 131 | 35.417219 | 0.810387 | 0.227188 | 0 | 0.318182 | 0 | 0.011364 | 0.053785 | 0.034612 | 0 | 0 | 0 | 0 | 0 | 1 | 0.102273 | false | 0.011364 | 0.068182 | 0.011364 | 0.261364 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
15fb6a076074eb470434200bde6610f2e5b3ecae | 1,848 | py | Python | develop/tools/import-prep.py | Gautamverma66/pycon | 1ca95727619dbbe82483227e0964402b433131ee | [
"BSD-3-Clause"
] | null | null | null | develop/tools/import-prep.py | Gautamverma66/pycon | 1ca95727619dbbe82483227e0964402b433131ee | [
"BSD-3-Clause"
] | null | null | null | develop/tools/import-prep.py | Gautamverma66/pycon | 1ca95727619dbbe82483227e0964402b433131ee | [
"BSD-3-Clause"
] | 1 | 2020-09-30T18:09:16.000Z | 2020-09-30T18:09:16.000Z | #!/usr/bin/env python2.7
#
# Take various CSV inputs and produce a ready-to-import conference schedule.
import pandas as pd
from datetime import date
def main():
    dfs = []

    t = pd.read_csv('talks.csv')
    t['kind_slug'] = 'talk'
    t['proposal_id'] = t.pop('proposal')
    t['day'] = date(2016, 5, 30) + pd.to_timedelta(t['day'], 'd')
    t['room'] = 'Session ' + t['room']
    t = t[['kind_slug', 'proposal_id', 'day', 'time', 'duration', 'room']]
    dfs.append(t)

    t = pd.read_csv('~/Downloads/PyCon 2016 Tutorial Counts - Sheet1.csv')
    rooms = {str(title).strip().lower(): room_name
             for title, room_name in t[['Title', 'Room Name']].values}
    t = pd.read_csv('tutorials.csv')
    t['kind_slug'] = 'tutorial'
    t['proposal_id'] = t.pop('ID')
    t['day'] = pd.to_datetime(t['Day Slot'])
    t['time'] = t['Time Slot'].str.extract('([^ ]*)')
    t['duration'] = 200
    t['room'] = t['Title'].str.strip().str.lower().map(rooms)
    t = t[['kind_slug', 'proposal_id', 'day', 'time', 'duration', 'room']]
    dfs.append(t)

    t = pd.read_csv('sponsor-tutorials-edited.csv')
    t = t[t['ID'].notnull()].copy()
    t['kind_slug'] = 'sponsor-tutorial'
    #t['kind_slug'] = 'tutorial'
    t['proposal_id'] = t.pop('ID').astype(int)
    t['day'] = pd.to_datetime(t['Day Slot'])
    t['time'] = t['Time Slot'].str.extract('([^ ]*)')
    t['room'] = t['Room']
    # t = t.sort_values(['Title'])
    # t['room'] = t.groupby(['day', 'time'])['room'].cumsum()
    # t['room'] = t['room'].apply(lambda n: 'Sponsor Room {}'.format(n))
    t = t[['kind_slug', 'proposal_id', 'day', 'time', 'duration', 'room']]
    dfs.append(t)

    #t.to_csv('schedule.csv', index=False)
    c = pd.concat(dfs).rename(columns={'time': 'start'})
    c.to_csv('schedule.csv', index=False)
if __name__ == '__main__':
    main()
| 28.875 | 75 | 0.566017 | 272 | 1,848 | 3.724265 | 0.3125 | 0.017769 | 0.062192 | 0.039487 | 0.395854 | 0.381046 | 0.329714 | 0.329714 | 0.329714 | 0.329714 | 0 | 0.011371 | 0.191017 | 1,848 | 63 | 76 | 29.333333 | 0.666221 | 0.169372 | 0 | 0.277778 | 0 | 0 | 0.309758 | 0.018337 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027778 | false | 0 | 0.055556 | 0 | 0.083333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
15fdf4924a3b098bc325a7b35edf84b4ae175290 | 997 | py | Python | utilities/resize_images.py | bmhopkinson/Marsh_Ann | 7d1baaa444392622967dd1ed12f9c7a23c5fb018 | [
"MIT"
] | null | null | null | utilities/resize_images.py | bmhopkinson/Marsh_Ann | 7d1baaa444392622967dd1ed12f9c7a23c5fb018 | [
"MIT"
] | null | null | null | utilities/resize_images.py | bmhopkinson/Marsh_Ann | 7d1baaa444392622967dd1ed12f9c7a23c5fb018 | [
"MIT"
] | null | null | null | import os
import cv2
import numpy as np
import re
path_regex = re.compile('^.+?/(.*)')
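# path_regex drops everything up to and including the first '/' of a walked
# directory path, so the output mirrors the input tree under outdir
# (e.g. '../Marsh_Images_BH/Row1' -> 'Marsh_Images_BH/Row1').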
def resize_image(im, factor):
    row, col, chan = im.shape
    col_re = np.rint(col*factor).astype(int)
    row_re = np.rint(row*factor).astype(int)
    im = cv2.resize(im, (col_re, row_re))  # resize patch
    return im
imdir = '../Marsh_Images_BH/Row1_1_2748to2797'
outdir = './image_resize_BH'
for (dirpath, dirname, files) in os.walk(imdir, topdown=True):
    for name in files:
        fullpath = os.path.join(dirpath, name)
        print(name)
        m = path_regex.findall(dirpath)
        dirpath_sub = m[0]
        new_dirpath = os.path.join(outdir, dirpath_sub)
        if not os.path.isdir(new_dirpath):
            os.makedirs(new_dirpath)
        file_base = os.path.splitext(name)[0]
        im = cv2.imread(fullpath)
        im_alt = resize_image(im, 0.2)
        outfile = file_base + '_small.jpg'
        outpath = os.path.join(new_dirpath, outfile)
        cv2.imwrite(outpath, im_alt)
| 27.694444 | 64 | 0.636911 | 148 | 997 | 4.121622 | 0.432432 | 0.04918 | 0.04918 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.023438 | 0.229689 | 997 | 35 | 65 | 28.485714 | 0.770833 | 0.012036 | 0 | 0 | 0 | 0 | 0.077236 | 0.036585 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035714 | false | 0 | 0.142857 | 0 | 0.214286 | 0.035714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
15fe4d22550f5d656a8766cbb160d9cad971c027 | 5,133 | py | Python | load.py | OdysseyScorpio/FactionGist | a3af4b52557890cb9c2cad20a740545917db7ec4 | [
"MIT"
] | 3 | 2019-10-17T08:28:55.000Z | 2020-06-02T15:43:32.000Z | load.py | OdysseyScorpio/FactionGist | a3af4b52557890cb9c2cad20a740545917db7ec4 | [
"MIT"
] | 11 | 2019-10-17T08:32:09.000Z | 2019-10-21T07:14:13.000Z | load.py | OdysseyScorpio/FactionGist | a3af4b52557890cb9c2cad20a740545917db7ec4 | [
"MIT"
] | 3 | 2019-10-17T08:33:51.000Z | 2021-07-05T18:05:38.000Z | import sys
import os
import ttk
import Tkinter as tk
import tkMessageBox
from ttkHyperlinkLabel import HyperlinkLabel
from config import applongname, appversion
import myNotebook as nb
import json
import requests
import zlib
import re
import webbrowser
this = sys.modules[__name__]
this.apiURL = "http://factiongist.herokuapp.com"
FG_VERSION = "0.0.3"
availableFactions = tk.StringVar()
try:
    this_fullpath = os.path.realpath(__file__)
    this_filepath, this_extension = os.path.splitext(this_fullpath)
    config_file = this_filepath + "config.json"
    with open(config_file) as f:
        data = json.load(f)
    availableFactions.set(data)
except:
    availableFactions.set("everyone")
if(availableFactions.get() == "everyone"):
    msginfo = ['Please update your Reporting Faction.',
               '\nYou can report to one or many factions,'
               'simply separate each faction with a comma.\n'
               '\nFile > Settings > FactionGist']
    tkMessageBox.showinfo("Reporting Factions", "\n".join(msginfo))
def plugin_app(parent):
    this.parent = parent
    this.frame = tk.Frame(parent)
    filter_update()
    return this.frame
def filter_update():
    this.parent.after(300000, filter_update)
    response = requests.get(this.apiURL + "/listeningFor")
    if(response.status_code == 200):
        this.listening = response.content
def plugin_start(plugin_dir):
    awake = requests.get(this.apiURL)
    check_version()
    return 'FactionGist'
def plugin_prefs(parent):
    PADX = 10  # formatting
    frame = nb.Frame(parent)
    frame.columnconfigure(5, weight=1)
    HyperlinkLabel(frame, text='FactionGist GitHub', background=nb.Label().cget('background'),
                   url='https://github.com/OdysseyScorpio/FactionGist', underline=True).grid(columnspan=2, padx=PADX, sticky=tk.W)
    nb.Label(frame, text="FactionGist - crazy-things-might-happen-pre-pre-alpha release Version {VER}".format(
        VER=FG_VERSION)).grid(columnspan=2, padx=PADX, sticky=tk.W)
    nb.Label(frame).grid()  # spacer
    nb.Button(frame, text="UPGRADE", command=upgrade_callback).grid(row=10, column=0,
                                                                    columnspan=2, padx=PADX, sticky=tk.W)
    nb.lblReportingFactions = tk.Label(frame)
    nb.lblReportingFactions.grid(
        row=3, column=0, columnspan=2, padx=PADX, sticky=tk.W)
    nb.lblReportingFactions.config(text='Factions I am supporting')
    nb.Entry1 = tk.Entry(frame, textvariable=availableFactions)
    nb.Entry1.grid(row=4, column=0, columnspan=2, padx=PADX, sticky=tk.W+tk.E)
    return frame
def check_version():
response = requests.get(this.apiURL + "/version")
version = response.content
if version != FG_VERSION:
upgrade_callback()
def upgrade_callback():
    this_fullpath = os.path.realpath(__file__)
    this_filepath, this_extension = os.path.splitext(this_fullpath)
    corrected_fullpath = this_filepath + ".py"
    try:
        response = requests.get(this.apiURL + "/download")
        if (response.status_code == 200):
            with open(corrected_fullpath, "wb") as f:
                f.seek(0)
                f.write(response.content)
                f.truncate()
                f.flush()
                os.fsync(f.fileno())
            this.upgrade_applied = True  # Latch on upgrade successful
            msginfo = ['Upgrade has completed successfully.',
                       'Please close and restart EDMC']
            tkMessageBox.showinfo("Upgrade status", "\n".join(msginfo))
            sys.stderr.write("Finished plugin upgrade!\n")
        else:
            msginfo = ['Upgrade failed. Bad server response',
                       'Please try again']
            tkMessageBox.showinfo("Upgrade status", "\n".join(msginfo))
    except:
        sys.stderr.writelines(
            "Upgrade problem when fetching the remote data: {E}\n".format(E=sys.exc_info()[0]))
        msginfo = ['Upgrade encountered a problem.',
                   'Please try again, and restart if problems persist']
        tkMessageBox.showinfo("Upgrade status", "\n".join(msginfo))
def dashboard_entry(cmdr, is_beta, entry):
    this.cmdr = cmdr
def journal_entry(cmdr, is_beta, system, station, entry, state):
    if entry['event'] in this.listening:
        entry['commanderName'] = cmdr
        entry['pluginVersion'] = FG_VERSION
        entry['currentSystem'] = system
        entry['currentStation'] = station
        entry['reportingFactions'] = [availableFactions.get()]
        transmit_json = json.dumps(entry)
        url_jump = this.apiURL + '/events'
        headers = {'content-type': 'application/json'}
        response = requests.post(
            url_jump, data=transmit_json, headers=headers, timeout=7)
def plugin_stop():
sys.stderr.writelines("\nGood bye commander\n")
config = availableFactions.get()
this_fullpath = os.path.realpath(__file__)
this_filepath, this_extension = os.path.splitext(this_fullpath)
config_file = this_filepath + "config.json"
with open(config_file, 'w') as f:
json.dump(config, f)
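# Editor's note: plugin_start, plugin_app, plugin_prefs, plugin_stop,
# dashboard_entry and journal_entry are the hook names that EDMC
# (E:D Market Connector) looks up in a plugin's load.py; EDMC calls them at
# startup, when building the UI and preferences pane, on dashboard/journal
# events, and at shutdown.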
| 36.147887 | 130 | 0.653029 | 607 | 5,133 | 5.413509 | 0.352554 | 0.018259 | 0.024346 | 0.028911 | 0.258065 | 0.21759 | 0.21759 | 0.176506 | 0.176506 | 0.165855 | 0 | 0.0091 | 0.229301 | 5,133 | 141 | 131 | 36.404255 | 0.821537 | 0.008767 | 0 | 0.142857 | 0 | 0 | 0.181943 | 0.007671 | 0 | 0 | 0 | 0 | 0 | 1 | 0.07563 | false | 0 | 0.109244 | 0 | 0.210084 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
15ff229d57bd444a73d08386dd948e890ca375a0 | 12,542 | py | Python | hexa/plugins/connector_s3/models.py | qgerome/openhexa-app | 8c9377b2ad972121d8e9575f5d52420212b52ed4 | [
"MIT"
] | 4 | 2021-07-19T12:53:21.000Z | 2022-01-26T17:45:02.000Z | hexa/plugins/connector_s3/models.py | qgerome/openhexa-app | 8c9377b2ad972121d8e9575f5d52420212b52ed4 | [
"MIT"
] | 20 | 2021-05-17T12:27:06.000Z | 2022-03-30T11:35:26.000Z | hexa/plugins/connector_s3/models.py | qgerome/openhexa-app | 8c9377b2ad972121d8e9575f5d52420212b52ed4 | [
"MIT"
] | 2 | 2021-09-07T04:19:59.000Z | 2022-02-08T15:33:29.000Z | import os
from logging import getLogger
from django.core.exceptions import ValidationError
from django.db import models, transaction
from django.template.defaultfilters import filesizeformat, pluralize
from django.urls import reverse
from django.utils import timezone
from django.utils.translation import gettext_lazy as _
from hexa.catalog.models import CatalogQuerySet, Datasource, Entry
from hexa.catalog.sync import DatasourceSyncResult
from hexa.core.models import Base, Permission
from hexa.core.models.cryptography import EncryptedTextField
from hexa.plugins.connector_s3.api import (
    S3ApiError,
    get_object_metadata,
    head_bucket,
    list_objects_metadata,
)
from hexa.plugins.connector_s3.region import AWSRegion
logger = getLogger(__name__)
class Credentials(Base):
"""We actually only need one set of credentials. These "principal" credentials will be then used to generate
short-lived credentials with a tailored policy giving access only to the buckets that the user team can
access"""
class Meta:
verbose_name = "S3 Credentials"
verbose_name_plural = "S3 Credentials"
ordering = ("username",)
username = models.CharField(max_length=200)
access_key_id = EncryptedTextField()
secret_access_key = EncryptedTextField()
default_region = models.CharField(
max_length=50, default=AWSRegion.EU_CENTRAL_1, choices=AWSRegion.choices
)
user_arn = models.CharField(max_length=200)
app_role_arn = models.CharField(max_length=200)
@property
def display_name(self):
return self.username
class BucketPermissionMode(models.IntegerChoices):
READ_ONLY = 1, "Read Only"
READ_WRITE = 2, "Read Write"
class BucketQuerySet(CatalogQuerySet):
    def filter_by_mode(self, user, mode: BucketPermissionMode = None):
        if user.is_active and user.is_superuser:
            # if SU -> all buckets are RW; so if mode is provided and mode == RO -> no buckets available
            if mode == BucketPermissionMode.READ_ONLY:
                return self.none()
            else:
                return self
        if mode is None:
            # return all buckets
            modes = [BucketPermissionMode.READ_ONLY, BucketPermissionMode.READ_WRITE]
        else:
            modes = [mode]
        return self.filter(
            bucketpermission__team__in=[t.pk for t in user.team_set.all()],
            bucketpermission__mode__in=modes,
        ).distinct()

    def filter_for_user(self, user):
        if user.is_active and user.is_superuser:
            return self
        return self.filter(
            bucketpermission__team__in=[t.pk for t in user.team_set.all()],
        ).distinct()
class Bucket(Datasource):
    def get_permission_set(self):
        return self.bucketpermission_set.all()

    class Meta:
        verbose_name = "S3 Bucket"
        ordering = ("name",)

    name = models.CharField(max_length=200)
    region = models.CharField(
        max_length=50, default=AWSRegion.EU_CENTRAL_1, choices=AWSRegion.choices
    )

    objects = BucketQuerySet.as_manager()
    searchable = True  # TODO: remove (see comment in datasource_index command)

    @property
    def principal_credentials(self):
        try:
            return Credentials.objects.get()
        except (Credentials.DoesNotExist, Credentials.MultipleObjectsReturned):
            raise ValidationError(
                "The S3 connector plugin should be configured with a single Credentials entry"
            )

    def refresh(self, path):
        metadata = get_object_metadata(
            principal_credentials=self.principal_credentials,
            bucket=self,
            object_key=path,
        )
        try:
            s3_object = Object.objects.get(bucket=self, key=path)
        except Object.DoesNotExist:
            Object.create_from_metadata(self, metadata)
        except Object.MultipleObjectsReturned:
            logger.warning(
                "Bucket.refresh(): incoherent object list for bucket %s", self.id
            )
        else:
            s3_object.update_from_metadata(metadata)
            s3_object.save()

    def clean(self):
        try:
            head_bucket(principal_credentials=self.principal_credentials, bucket=self)
        except S3ApiError as e:
            raise ValidationError(e)

    def sync(self):
        """Sync the bucket by querying the S3 API"""
        s3_objects = list_objects_metadata(
            principal_credentials=self.principal_credentials,
            bucket=self,
        )
        # Lock the bucket
        with transaction.atomic():
            Bucket.objects.select_for_update().get(pk=self.pk)
            # Sync data elements
            with transaction.atomic():
                created_count = 0
                updated_count = 0
                identical_count = 0
                deleted_count = 0
                remote = set()
                local = {str(x.key): x for x in self.object_set.all()}
                for s3_object in s3_objects:
                    key = s3_object["Key"]
                    remote.add(key)
                    if key in local:
                        if (
                            s3_object.get("ETag") == local[key].etag
                            and s3_object["Type"] == local[key].type
                        ):
                            # If it has the same key but not the same ETag, the file was updated on S3
                            # (Sometimes the ETag contains double quotes -> strip them)
                            identical_count += 1
                        else:
                            updated_count += 1
                            local[key].update_from_metadata(s3_object)
                            local[key].save()
                    else:
                        Object.create_from_metadata(self, s3_object)
                        created_count += 1
                # cleanup unmatched objects
                for key, obj in local.items():
                    if key not in remote:
                        deleted_count += 1
                        obj.delete()
            # Flag the datasource as synced
            self.last_synced_at = timezone.now()
            self.save()
        return DatasourceSyncResult(
            datasource=self,
            created=created_count,
            updated=updated_count,
            identical=identical_count,
            deleted=deleted_count,
        )

    @property
    def content_summary(self):
        count = self.object_set.count()
        return (
            ""
            if count == 0
            else _("%(count)d object%(suffix)s")
            % {"count": count, "suffix": pluralize(count)}
        )

    def populate_index(self, index):
        index.last_synced_at = self.last_synced_at
        index.content = self.content_summary
        index.path = [self.pk.hex]
        index.external_id = self.name
        index.external_name = self.name
        index.external_type = "bucket"
        index.search = f"{self.name}"
        index.datasource_name = self.name
        index.datasource_id = self.id

    @property
    def display_name(self):
        return self.name

    def __str__(self):
        return self.display_name

    def writable_by(self, user):
        if not user.is_active:
            return False
        elif user.is_superuser:
            return True
        elif (
            BucketPermission.objects.filter(
                bucket=self,
                team_id__in=user.team_set.all().values("id"),
                mode=BucketPermissionMode.READ_WRITE,
            ).count()
            > 0
        ):
            return True
        else:
            return False

    def get_absolute_url(self):
        return reverse(
            "connector_s3:datasource_detail", kwargs={"datasource_id": self.id}
        )
class BucketPermission(Permission):
bucket = models.ForeignKey("Bucket", on_delete=models.CASCADE)
mode = models.IntegerField(
choices=BucketPermissionMode.choices, default=BucketPermissionMode.READ_WRITE
)
class Meta:
unique_together = [("bucket", "team")]
def index_object(self):
self.bucket.build_index()
def __str__(self):
return f"Permission for team '{self.team}' on bucket '{self.bucket}'"
class ObjectQuerySet(CatalogQuerySet):
    def filter_for_user(self, user):
        if user.is_active and user.is_superuser:
            return self
        return self.filter(bucket__in=Bucket.objects.filter_for_user(user))
class Object(Entry):
    def get_permission_set(self):
        return self.bucket.bucketpermission_set.all()

    class Meta:
        verbose_name = "S3 Object"
        ordering = ("key",)
        unique_together = [("bucket", "key")]

    bucket = models.ForeignKey("Bucket", on_delete=models.CASCADE)
    key = models.TextField()
    parent_key = models.TextField()
    size = models.PositiveBigIntegerField()
    storage_class = models.CharField(max_length=200)  # TODO: choices
    type = models.CharField(max_length=200)  # TODO: choices
    last_modified = models.DateTimeField(null=True, blank=True)
    etag = models.CharField(max_length=200, null=True, blank=True)

    objects = ObjectQuerySet.as_manager()
    searchable = True  # TODO: remove (see comment in datasource_index command)

    def save(self, *args, **kwargs):
        if self.parent_key is None:
            self.parent_key = self.compute_parent_key(self.key)
        super().save(*args, **kwargs)

    def populate_index(self, index):
        index.last_synced_at = self.bucket.last_synced_at
        index.external_name = self.filename
        index.path = [self.bucket.pk.hex, self.pk.hex]
        index.context = self.parent_key
        index.external_id = self.key
        index.external_type = self.type
        index.external_subtype = self.extension
        index.search = f"{self.filename} {self.key}"
        index.datasource_name = self.bucket.name
        index.datasource_id = self.bucket.id

    def __repr__(self):
        return f"<Object s3://{self.bucket.name}/{self.key}>"

    @property
    def display_name(self):
        return self.filename

    @property
    def filename(self):
        if self.key.endswith("/"):
            return os.path.basename(self.key[:-1])
        return os.path.basename(self.key)

    @property
    def extension(self):
        return os.path.splitext(self.key)[1].lstrip(".")

    def full_path(self):
        return f"s3://{self.bucket.name}/{self.key}"

    @classmethod
    def compute_parent_key(cls, key):
        if key.endswith("/"):  # This is a directory
            return os.path.dirname(os.path.dirname(key)) + "/"
        else:  # This is a file
            return os.path.dirname(key) + "/"

    @property
    def file_size_display(self):
        return filesizeformat(self.size) if self.size > 0 else "-"

    @property
    def type_display(self):
        if self.type == "directory":
            return _("Directory")
        else:
            if verbose_file_type := self.verbose_file_type:
                return verbose_file_type
            else:
                return _("File")

    @property
    def verbose_file_type(self):
        file_type = {
            "xlsx": "Excel file",
            "md": "Markdown document",
            "ipynb": "Jupyter Notebook",
            "csv": "CSV file",
        }.get(self.extension)
        if file_type:
            return _(file_type)
        else:
            return None

    def update_from_metadata(self, metadata):
        self.key = metadata["Key"]
        self.parent_key = self.compute_parent_key(metadata["Key"])
        self.size = metadata["Size"]
        self.storage_class = metadata["StorageClass"]
        self.type = metadata["Type"]
        self.last_modified = metadata["LastModified"]
        self.etag = metadata["ETag"]

    @classmethod
    def create_from_metadata(cls, bucket, metadata):
        return cls.objects.create(
            bucket=bucket,
            key=metadata["Key"],
            parent_key=cls.compute_parent_key(metadata["Key"]),
            storage_class=metadata["StorageClass"],
            last_modified=metadata["LastModified"],
            etag=metadata["ETag"],
            type=metadata["Type"],
            size=metadata["Size"],
        )

    def get_absolute_url(self):
        return reverse(
            "connector_s3:object_detail",
            kwargs={"bucket_id": self.bucket.id, "path": self.key},
        )
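# Editor's addition (illustrative, not in the original module): syncing a
# bucket and reading the returned DatasourceSyncResult counters; assumes a
# single Credentials row and a Bucket named "my-bucket" already exist.
#
#     bucket = Bucket.objects.get(name="my-bucket")
#     result = bucket.sync()
#     print(result.created, result.updated, result.identical, result.deleted)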
| 32.492228 | 112 | 0.603253 | 1,391 | 12,542 | 5.265996 | 0.196262 | 0.020478 | 0.022116 | 0.029488 | 0.264164 | 0.22116 | 0.199317 | 0.148805 | 0.102116 | 0.089829 | 0 | 0.007787 | 0.303779 | 12,542 | 385 | 113 | 32.576623 | 0.831081 | 0.060676 | 0 | 0.2443 | 0 | 0 | 0.062048 | 0.010639 | 0 | 0 | 0 | 0.002597 | 0 | 1 | 0.104235 | false | 0 | 0.045603 | 0.045603 | 0.384365 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
c60143a70f259a4e959098d87fdd7674fa7e33dc | 1,618 | py | Python | vi/vi/csp/backtrack_search.py | pveierland/permve-ntnu-it3105 | 6a7e4751de47b091c1c9c59560c19a8452698d81 | [
"CC0-1.0"
] | null | null | null | vi/vi/csp/backtrack_search.py | pveierland/permve-ntnu-it3105 | 6a7e4751de47b091c1c9c59560c19a8452698d81 | [
"CC0-1.0"
] | null | null | null | vi/vi/csp/backtrack_search.py | pveierland/permve-ntnu-it3105 | 6a7e4751de47b091c1c9c59560c19a8452698d81 | [
"CC0-1.0"
] | null | null | null | import vi.csp
import collections
import operator
BacktrackStatistics = collections.namedtuple(
    'BacktrackStatistics', ['calls', 'failures'])
def backtrack_search(network):
    statistics = BacktrackStatistics(calls=0, failures=0)
    # Ensure arc consistency before making any assumptions:
    return backtrack(vi.csp.general_arc_consistency(network), statistics)
def backtrack(network, statistics):
    def select_unassigned_variable():
        # Use Minimum-Remaining-Values heuristic:
        return min(((variable, domain)
                    for variable, domain in network.domains.items()
                    if len(domain) > 1),
                   key=operator.itemgetter(1))[0]

    def order_domain_variables():
        return network.domains[variable]

    statistics = BacktrackStatistics(statistics.calls + 1,
                                     statistics.failures)

    if all(len(domain) == 1 for domain in network.domains.values()):
        return network, statistics

    variable = select_unassigned_variable()

    for value in order_domain_variables():
        successor = network.copy()
        successor.domains[variable] = [value]
        successor = vi.csp.general_arc_consistency_rerun(successor, variable)

        if all(len(domain) >= 1
               for domain in successor.domains.values()):
            result, statistics = backtrack(successor, statistics)
            if result:
                return result, statistics

    statistics = BacktrackStatistics(statistics.calls,
                                     statistics.failures + 1)
    return None, statistics
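# Editor's notes (not in the original module):
# - Illustrative usage, assuming `network` was built with the vi.csp helpers:
#       solution, stats = backtrack_search(network)
#       if solution is not None:
#           print(stats.calls, 'calls,', stats.failures, 'failures')
# - Caveat: select_unassigned_variable() keys min() on the domain itself
#   (operator.itemgetter(1)), which compares domains lexicographically; a
#   strict Minimum-Remaining-Values ordering would key on len(domain) instead.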
| 32.36 | 77 | 0.644623 | 158 | 1,618 | 6.512658 | 0.329114 | 0.066084 | 0.029155 | 0.029155 | 0.101069 | 0.050535 | 0.050535 | 0.050535 | 0 | 0 | 0 | 0.007614 | 0.269468 | 1,618 | 49 | 78 | 33.020408 | 0.862944 | 0.057478 | 0 | 0 | 0 | 0 | 0.021025 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.121212 | false | 0 | 0.090909 | 0.060606 | 0.393939 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
c601aedeadeb98d2c741f809b0374722d418f823 | 1,377 | py | Python | k_means.py | lgq9220/easy12306 | e31abd1c7675e2acb37f4653ab88cae49d2317cc | [
"Artistic-2.0"
] | null | null | null | k_means.py | lgq9220/easy12306 | e31abd1c7675e2acb37f4653ab88cae49d2317cc | [
"Artistic-2.0"
] | null | null | null | k_means.py | lgq9220/easy12306 | e31abd1c7675e2acb37f4653ab88cae49d2317cc | [
"Artistic-2.0"
] | null | null | null | #! env python
# coding: utf-8
# Purpose: cluster the text (character) images with the k-means algorithm
import os
import time
import sys
import cv2
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.externals import joblib
def get_img_as_vector(fn):
    im = cv2.imread(fn)
    im = im[:, :, 0]
    retval, dst = cv2.threshold(im, 128, 1, cv2.THRESH_BINARY_INV)
    return dst.reshape(dst.size)
def main():
    # Read the training data
    print('Start: read data', time.process_time())
    fns = os.listdir('ocr')
    X = [get_img_as_vector(os.path.join('ocr', fn)) for fn in fns]
    print('Samples', len(X), 'Feature', len(X[0]))
    # PCA
    print('Start: PCA', time.process_time())
    pca = PCA(n_components=0.99)
    pca.fit(X)
    X = pca.transform(X)
    print('Samples', len(X), 'Feature', len(X[0]))
    sys.stdout.flush()
    # Train
    print('Start: train', time.process_time())
    n_clusters = 2000  # number of cluster centers
    estimator = KMeans(n_clusters, n_init=1, max_iter=20, verbose=True)
    estimator.fit(X)
    print('Clusters', estimator.n_clusters, 'Iter', estimator.n_iter_)
    print('Start: classify', time.process_time())
    fp = open('result11.txt', 'w')
    for fn, c in zip(fns, estimator.labels_):
        print(fn, c, file=fp)
    fp.close()
    print('Start: save model', time.process_time())
    joblib.dump(estimator, 'k-means11.pkl')
if __name__ == '__main__':
    main()
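# Editor's note (illustrative, not in the original script): the saved
# estimator can be reloaded later with joblib.load('k-means11.pkl'), but the
# fitted PCA is *not* persisted here, so assigning clusters to new images
# would also require refitting (or separately dumping) the PCA that produced
# the reduced features.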
| 27.54 | 71 | 0.647785 | 205 | 1,377 | 4.204878 | 0.473171 | 0.058005 | 0.087007 | 0.032483 | 0.064965 | 0.064965 | 0.064965 | 0.064965 | 0 | 0 | 0 | 0.023466 | 0.195352 | 1,377 | 49 | 72 | 28.102041 | 0.754513 | 0.052288 | 0 | 0.054054 | 0 | 0 | 0.115562 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.054054 | false | 0 | 0.189189 | 0 | 0.27027 | 0.243243 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
c60244641d848895dbb47d903f04c522a8e5355d | 14,735 | py | Python | datasets/blender_efficient_sm.py | ktiwary2/nerf_pl | 99d40cba3a2d9a11d6988cb1a74cf29035a1ab5e | [
"MIT"
] | null | null | null | datasets/blender_efficient_sm.py | ktiwary2/nerf_pl | 99d40cba3a2d9a11d6988cb1a74cf29035a1ab5e | [
"MIT"
] | null | null | null | datasets/blender_efficient_sm.py | ktiwary2/nerf_pl | 99d40cba3a2d9a11d6988cb1a74cf29035a1ab5e | [
"MIT"
] | null | null | null | import torch
from torch.utils.data import Dataset
import json
import numpy as np
import os
from PIL import Image, ImageFilter
from torchvision import transforms as T
from models.camera import Camera
from tqdm import tqdm
from .ray_utils import *
class BlenderEfficientShadows(Dataset):
    def __init__(self, root_dir, split='train', img_wh=(800, 800), hparams=None):
        self.root_dir = root_dir
        self.split = split
        assert img_wh[0] == img_wh[1], 'image width must equal image height!'
        self.img_wh = img_wh
        print("Training Image size:", img_wh)
        self.define_transforms()
        self.white_back = True
        # self.white_back = False  # Setting it to False (!)
        self.hparams = hparams
        self.black_and_white = False
        if self.hparams is not None and self.hparams.black_and_white_test:
            self.black_and_white = True
        self.read_meta()
        self.hparams.coords_trans = False
        print("------------")
        print("NOTE: self.hparams.coords_trans is set to {} ".format(self.hparams.coords_trans))
        print("------------")
def read_meta(self):
# self.split = 'train'
with open(os.path.join(self.root_dir,
# f"transforms_train.json"), 'r') as f:
f"transforms_{self.split}.json"), 'r') as f:
self.meta = json.load(f)
w, h = self.img_wh
print("Root Directory: ".format(self.root_dir))
# if 'bunny' or 'box' or 'vase' in self.root_dir:
# res = 200 # these imgs have original size of 200
# else:
# res = 800
res = 800
if 'resolution' in self.meta.keys():
res = self.meta['resolution']
print("-------------------------------")
print("RESOLUTION OF THE ORIGINAL IMAGE IS SET TO {}".format(res))
print("-------------------------------")
self.focal = 0.5*res/np.tan(0.5*self.meta['camera_angle_x']) # original focal length
# when W=res
self.focal *= self.img_wh[0]/res # modify focal length to match size self.img_wh
################
self.light_camera_focal = 0.5*res/np.tan(0.5*self.meta['light_camera_angle_x']) # original focal length
################
# if 'bunny' or 'box' or 'vase' in self.root_dir:
# self.light_camera_focal = 0.5*res/np.tan(0.5*self.meta['light_angle_x']) # original focal length
# else:
# self.light_camera_focal = 0.5*res/np.tan(0.5*self.meta['light_camera_angle_x']) # original focal length
# when W=res
self.light_camera_focal *= self.img_wh[0]/res # modify focal length to match size self.img_wh
# bounds, common for all scenes
self.near = 1.0
self.far = 200.0
# probably need to change this
self.light_near = 1.0
self.light_far = 200.0
self.bounds = np.array([self.near, self.far])
# ray directions for all pixels, same for all images (same H, W, focal)
self.directions = \
get_ray_directions(h, w, self.focal) # (h, w, 3)
### Light Camera Matrix
################
pose = np.array(self.meta['light_camera_transform_matrix'])[:3, :4]
################
# if 'bunny' or 'box' or 'vase' in self.root_dir:
# self.meta['light_angle_x'] = 0.5 * self.meta['light_angle_x']
# print("Changing the HFOV of Light")
# pose = np.array(self.meta['frames'][0]['light_transform'])[:3, :4]
# else:
# pose = np.array(self.meta['light_camera_transform_matrix'])[:3, :4]
self.l2w = torch.FloatTensor(pose)
pixels_u = torch.arange(0, w, 1)
pixels_v = torch.arange(0, h, 1)
i, j = np.meshgrid(pixels_v.numpy(), pixels_u.numpy(), indexing='xy')
i = torch.tensor(i) + 0.5 #.unsqueeze(2)
j = torch.tensor(j)+ 0.5 #.unsqueeze(2)
self.light_pixels = torch.stack([i,j, torch.ones_like(i)], axis=-1).view(-1, 3) # (H*W,3)
light_directions = get_ray_directions(h, w, self.light_camera_focal) # (h, w, 3)
rays_o, rays_d = get_rays(light_directions, self.l2w) # both (h*w, 3)
self.light_rays = torch.cat([rays_o, rays_d,
self.light_near*torch.ones_like(rays_o[:, :1]),
self.light_far*torch.ones_like(rays_o[:, :1])],
1) # (h*w, 8)
################
hfov = self.meta['light_camera_angle_x'] * 180./np.pi
################
# if 'bunny' or 'box' or 'vase' in self.root_dir:
# hfov = self.meta['light_angle_x'] * 180./np.pi
# else:
# hfov = self.meta['light_camera_angle_x'] * 180./np.pi
self.light_ppc = Camera(hfov, (h, w))
self.light_ppc.set_pose_using_blender_matrix(self.l2w, self.hparams.coords_trans)
print("LIGHT: c2w: {}\n, camera:{}\n, eye:{}\n".format(self.l2w, self.light_ppc.camera, self.light_ppc.eye_pos))
### Light Camera Matrix
# new_frames = []
# # only do on a single image
# for frame in self.meta['frames']:
# if 'r_137' in frame['file_path']:
# a = [frame]
# new_frames.extend(a * 10)
# break
# self.meta['frames'] = new_frames
if self.split == 'val':
new_frames = []
for frame in self.meta['frames']:
###### load the RGB+SM Image
file_path = frame['file_path'].split('/')
sm_file_path = 'sm_'+ file_path[-1]
sm_path = os.path.join(self.root_dir, f"{sm_file_path}.png")
## Continue if not os.path.exists(shadows)
if not os.path.exists(sm_path):
continue
else:
new_frames.append(frame)
self.meta['frames'] = new_frames
if self.split == 'train': # create buffer of all rays and rgb data
self.image_paths = []
self.poses = []
self.all_rays = []
self.all_rgbs = []
self.all_ppc = []
self.all_pixels = []
for frame in tqdm(self.meta['frames']):
#### change it to load the shadow map
file_path = frame['file_path'].split('/')
file_path = 'sm_'+ file_path[-1]
################
image_path = os.path.join(self.root_dir, f"{file_path}.png")
self.image_paths += [image_path]
## Continue if not os.path.exists(shadows)
if not os.path.exists(image_path):
continue
print("Processing Frame {}".format(image_path))
#####
# real processing begins
pose = np.array(frame['transform_matrix'])[:3, :4]
self.poses += [pose]
c2w = torch.FloatTensor(pose)
hfov = self.meta['camera_angle_x'] * 180./np.pi
ppc = Camera(hfov, (h, w))
ppc.set_pose_using_blender_matrix(c2w, self.hparams.coords_trans)
self.all_ppc.extend([ppc]*h*w)
img = Image.open(image_path)
img = img.resize(self.img_wh, Image.LANCZOS)
                if self.hparams.blur != -1:
img = img.filter(ImageFilter.GaussianBlur(self.hparams.blur))
                img = self.transform(img) # (3, h, w)
                img = img.view(3, -1).permute(1, 0) # (h*w, 3) RGB
# Figure out where the rays originated from
pixels_u = torch.arange(0, w, 1)
pixels_v = torch.arange(0, h, 1)
i, j = np.meshgrid(pixels_v.numpy(), pixels_u.numpy(), indexing='xy')
i = torch.tensor(i) + 0.5 #.unsqueeze(2)
                j = torch.tensor(j) + 0.5 #.unsqueeze(2)
pixels = torch.stack([i,j, torch.ones_like(i)], axis=-1).view(-1, 3) # (H*W,3)
rays_o, rays_d = get_rays(self.directions, c2w)
rays = torch.cat([rays_o, rays_d,
self.near*torch.ones_like(rays_o[:, :1]),
self.far*torch.ones_like(rays_o[:, :1])],
1) # (H*W, 8)
print("-------------------------------")
print("frame: {}\n, c2w: {}\n, camera:{}\n, eye:{}\n".format(file_path, c2w, ppc.camera, ppc.eye_pos))
print("-------------------------------")
self.all_rgbs += [img]
self.all_rays += [rays]
self.all_pixels += [pixels]
            self.all_rays = torch.cat(self.all_rays, 0) # (len(self.meta['frames'])*h*w, 8)
            self.all_pixels = torch.cat(self.all_pixels, 0) # (len(self.meta['frames'])*h*w, 3)
            self.all_rgbs = torch.cat(self.all_rgbs, 0) # (len(self.meta['frames'])*h*w, 3)
print("self.all_rgbs.shape, self.all_rays.shape, self.all_pixels.shape, all_ppc.shape",
self.all_rgbs.shape, self.all_rays.shape, self.all_pixels.shape, len(self.all_ppc))
            if float(self.hparams.white_pix) != -1:
print("-------------------------- rgb max {}, min {}".format(self.all_rgbs.max(), self.all_rgbs.min()))
print("only Training on pixels with shadow map values > 0.")
all_bw = (self.all_rgbs[:,0] + self.all_rgbs[:,1] + self.all_rgbs[:,2])/3.
idx = torch.where(all_bw > float(self.hparams.white_pix))
self.all_rgbs = self.all_rgbs[idx]
self.all_pixels = self.all_pixels[idx]
self.all_rays = self.all_rays[idx]
new_ppc = []
for i in idx[0]:
new_ppc.append(self.all_ppc[i])
self.all_ppc = new_ppc
print("self.all_rgbs.shape, self.all_rays.shape, self.all_pixels.shape, all_ppc.shape",
self.all_rgbs.shape, self.all_rays.shape, self.all_pixels.shape, len(self.all_ppc))
def define_transforms(self):
self.transform = T.ToTensor()
def __len__(self):
if self.split == 'train':
return len(self.all_rays)
elif self.split == 'val':
return 8 # only validate 8 images (to support <=8 gpus)
else:
return len(self.meta['frames'])
def __getitem__(self, idx):
"""
Processes and return rays, rgbs PER image
instead of on a ray by ray basis. Albeit slower,
Implementation of shadow mapping is easier this way.
"""
if self.split == 'train': # use data in the buffers
# pose = self.poses[idx]
# c2w = torch.FloatTensor(pose)
sample = {'rays': self.all_rays[idx], # (8) Ray originating from pixel (i,j)
'pixels': self.all_pixels[idx], # pixel where the ray originated from
'rgbs': self.all_rgbs[idx], # (h*w,3)
# 'ppc': [self.all_ppc[idx].eye_pos, self.all_ppc[idx].camera],
# 'light_ppc': [self.light_ppc.eye_pos, self.light_ppc.camera],
'ppc': {
'eye_pos': self.all_ppc[idx].eye_pos,
'camera': self.all_ppc[idx].camera,
},
'light_ppc': {
'eye_pos': self.light_ppc.eye_pos,
'camera': self.light_ppc.camera,
},
# 'c2w': pose, # (3,4)
# pixel where the light ray originated from
'light_pixels': self.light_pixels, #(h*w, 3)
# light rays
'light_rays': self.light_rays, #(h*w,8)
}
else: # create data for each image separately
frame = self.meta['frames'][idx]
file_path = frame['file_path'].split('/')
file_path = 'sm_'+ file_path[-1]
c2w = torch.FloatTensor(frame['transform_matrix'])[:3, :4]
###########
w, h = self.img_wh
hfov = self.meta['camera_angle_x'] * 180./np.pi
ppc = Camera(hfov, (h, w))
ppc.set_pose_using_blender_matrix(c2w, self.hparams.coords_trans)
eye_poses = [ppc.eye_pos]*h*w
cameras = [ppc.camera]*h*w
###########
img = Image.open(os.path.join(self.root_dir, f"{file_path}.png"))
img = img.resize(self.img_wh, Image.LANCZOS)
            if self.hparams.blur != -1:
img = img.filter(ImageFilter.GaussianBlur(self.hparams.blur))
            img = self.transform(img) # (3, H, W)
            img = img.view(3, -1).permute(1, 0) # (H*W, 3) RGB
# img = img[:, :3]*img[:, -1:] + (1-img[:, -1:]) # blend A to RGB
pixels_u = torch.arange(0, w, 1)
pixels_v = torch.arange(0, h, 1)
i, j = np.meshgrid(pixels_v.numpy(), pixels_u.numpy(), indexing='xy')
i = torch.tensor(i) + 0.5 #.unsqueeze(2)
            j = torch.tensor(j) + 0.5 #.unsqueeze(2)
pixels = torch.stack([i,j, torch.ones_like(i)], axis=-1).view(-1, 3) # (H*W,3)
rays_o, rays_d = get_rays(self.directions, c2w)
rays = torch.cat([rays_o, rays_d,
self.near*torch.ones_like(rays_o[:, :1]),
self.far*torch.ones_like(rays_o[:, :1])],
1) # (H*W, 8)
# print("rays.shape", rays.shape)
# valid_mask = (img[-1]>0).flatten() # (H*W) valid color area
sample = {'rays': rays,
'pixels': pixels, # pixel where rays originated from
'rgbs': img,
'ppc': {
'eye_pos': eye_poses,
'camera': cameras,
},
'light_ppc': {
'eye_pos': self.light_ppc.eye_pos,
'camera': self.light_ppc.camera,
},
# pixel where the light ray originated from
'light_pixels': self.light_pixels, #(h*w, 3)
# light rays
'light_rays': self.light_rays, #(h*w,8)
}
return sample | 44.651515 | 120 | 0.498812 | 1,852 | 14,735 | 3.805076 | 0.13067 | 0.048673 | 0.024975 | 0.018731 | 0.561799 | 0.490563 | 0.467149 | 0.438059 | 0.409394 | 0.406556 | 0 | 0.021092 | 0.350051 | 14,735 | 330 | 121 | 44.651515 | 0.714733 | 0.19905 | 0 | 0.382075 | 0 | 0.009434 | 0.100252 | 0.023977 | 0 | 0 | 0 | 0 | 0.004717 | 1 | 0.023585 | false | 0 | 0.04717 | 0 | 0.09434 | 0.080189 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
c60337a0cc834ad033d09738b410868a1fcb6ef6 | 625 | py | Python | python_submission/5.longest-palindromic-substring.197920566.notac.py | stavanmehta/leetcode | 1224e43ce29430c840e65daae3b343182e24709c | [
"Apache-2.0"
] | null | null | null | python_submission/5.longest-palindromic-substring.197920566.notac.py | stavanmehta/leetcode | 1224e43ce29430c840e65daae3b343182e24709c | [
"Apache-2.0"
] | null | null | null | python_submission/5.longest-palindromic-substring.197920566.notac.py | stavanmehta/leetcode | 1224e43ce29430c840e65daae3b343182e24709c | [
"Apache-2.0"
] | null | null | null | class Solution(object):
def longestPalindrome(self, s):
"""
:type s: str
:rtype: str
"""
if len(s) == 1:
return s
start = 0
end = len(s)
maxlength = 0
longest = ""
        while start < len(s):
            substring = s[start:end]
            if substring == substring[::-1] and len(substring) > maxlength:
                maxlength = len(substring)
                longest = substring
            else:
                end -= 1
            if start == end:
                start += 1
                end = len(s)
return longest
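
# A faster alternative sketch (hypothetical helper, not part of the original
# submission): expand around every center in O(n^2) instead of slicing O(n^3)
# substrings as above.
def longest_palindrome_expand(s):
    best = s[:1]
    for center in range(len(s)):
        # try an odd-length center (c, c) and an even-length center (c, c+1)
        for left, right in ((center, center), (center, center + 1)):
            while left >= 0 and right < len(s) and s[left] == s[right]:
                left -= 1
                right += 1
            candidate = s[left + 1:right]  # the loop overshoots one step each side
            if len(candidate) > len(best):
                best = candidate
    return best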
| 26.041667 | 75 | 0.4192 | 60 | 625 | 4.366667 | 0.383333 | 0.061069 | 0.053435 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018293 | 0.4752 | 625 | 23 | 76 | 27.173913 | 0.780488 | 0.0384 | 0 | 0.105263 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0 | 0 | 0.210526 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
c6074a650ad47398b9b59002465392e2c249c2e3 | 14,325 | py | Python | data-preprocess/ice-vision-data-merger-pipeline.py | jingwoo4710/mmdetection-icevision | da82741b29fdd1eb77b4e7483ff2a515d43d1760 | [
"Apache-2.0"
] | 4 | 2020-03-13T00:12:44.000Z | 2021-06-25T07:54:17.000Z | data-preprocess/ice-vision-data-merger-pipeline.py | jingwoo4710/mmdetection-icevision | da82741b29fdd1eb77b4e7483ff2a515d43d1760 | [
"Apache-2.0"
] | 4 | 2020-03-13T00:24:15.000Z | 2022-03-12T00:19:03.000Z | data-preprocess/ice-vision-data-merger-pipeline.py | jingwoo4710/mmdetection-icevision | da82741b29fdd1eb77b4e7483ff2a515d43d1760 | [
"Apache-2.0"
] | 1 | 2021-03-07T06:24:08.000Z | 2021-03-07T06:24:08.000Z | #!/usr/bin/env python
# coding: utf-8
# In[93]:
import os
from shutil import copyfile
import json
print("cwd = ", os.getcwd())
current_folder = os.getcwd()
#extracted_train_data = os.path.join(current_folder, "extracted_train_data")
extracted_train_data = "/dataset/training/"
#annotations_dir = '/data/annotations'
copied_train_data = "/data/dataset/training/"
# In[100]:
data_location = "/dataset/training/"
files_list = []
neural_net_list = []
linear_mappings = []
for subdir, dirs, files in os.walk(data_location):
for file in set(files):
if file.endswith('.pnm'):
current_file = os.path.join(subdir, file)
files_list.append(current_file)
print(len(files_list))
prev_file_number = 0
prev_file_dir_name = ""
prev_neural_net = ""
counter = 0
linear_list = []
########################################## EDIT #####################################################################
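# split frames into detector anchors vs. interpolation candidates: whenever the
# accumulated frame-number gap reaches 5, the frame is sent to the network;
# the frames in between inherit linearly interpolated boxes later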
for file in sorted(files_list):
file_name_split = file.split('/')
file_number = int(file_name_split[-1].split(".pnm")[0])
dir_name = file_name_split[-3] + file_name_split[-2]
counter += file_number - prev_file_number
if(prev_file_dir_name != dir_name):
counter = 0
neural_net_list.append(file)
prev_neural_net = file
linear_list = []
else:
if(counter >= 5):
neural_net_list.append(file)
linear_mappings.append({ "linear_list": linear_list, "predecessor": prev_neural_net, "successor": file })
counter = 0
prev_neural_net = file
linear_list = []
else:
#linear_mappings[file] = "linear"
linear_list.append(file)
# print("making linear", file)
prev_file_number = file_number
prev_file_dir_name = dir_name
with open('linear_mappings.json', 'w') as outfile:
json.dump(linear_mappings, outfile)
# for file in file_body:
# if (file_body[file] == "neuralnet"):
# print(file)
# for file in file_body:
# if (file_body[file] == "linear"):
# print(file)
# In[97]:
#neural_net_list[] - list of images to be sent to neural network
import os
import glob
from mmdet.apis import init_detector, inference_detector, show_result, write_result
import time
import datetime
config_file = '/root/ws/mmdetection-icevision/configs/dcn/cascade_rcnn_dconv_c3-c5_r50_fpn_1x_all_classes.py'
#model = init_detector(config_file, checkpoint_file, device='cuda:0')
#epch_count = 1
#for epochs in glob.glob(os.path.join('/data_tmp/icevisionmodels/cascade_rcnn_dconv_c3-c5_r50_fpn_1x_all_classes/', '*.pth')):
checkpoint_file = '/data/trained_models/cascade_rcnn_dconv_c3-c5_r50_fpn_1x_135_classes/epoch_15.pth'
#checkpoint_file = epochs
# build the model from a config file and a checkpoint file
model = init_detector(config_file, checkpoint_file, device='cuda:0')
TEST_RESULT_PATH = "/data/test_results/"
img_count = 0
#print(img_count)
FINAL_ONLINE_TEST_PATH = "/data/train_subset/"
#FINAL_ONLINE_TEST_PATH = '/data/test_results/2018-02-13_1418/left/'
#for TEST_SET_PATH in (FINAL_ONLINE_TEST_PATH + "2018-02-16_1515_left/", FINAL_ONLINE_TEST_PATH + "2018-03-16_1424_left/", FINAL_ONLINE_TEST_PATH + "2018-03-23_1352_right/"):
#print(TEST_SET_PATH)
#imgs = glob.glob('/dataset/training/**/*.pnm', recursive=True)
for img in neural_net_list:
ts = time.time()
st = datetime.datetime.fromtimestamp(ts).strftime('%Y-%m-%d %H:%M:%S')
print ("time =", st)
#imgs = ['test.jpg', '000000.jpg']
#print(img) # /dataset/training/2018-02-13_1418/left/020963.pnm --> required format 2018-02-13_1418_left/000033
name = img.split("/") # ['', 'home', 'luminosity', 'ws', 'icevision', 'data', 'final', '2018-02-16_1515_left', '001887.jpg']
#print(name)
base = name[-1].split(".")[0] # ['001887', 'jpg']
#print(base)
name = name[-3] + "_" + name[-2]
tmp = name
name = name + "/" + base
#print(name)
######## Remove
#name_tmp = base.split("_")
#name = name_tmp[0] + "_" + name_tmp[1] + "_" + name_tmp[2] + "/" + name_tmp[-1]
#name = "annotation_train_subset/" + base
#base_list = base.split("_")
#name = base_list[0] + "_" + base_list[1] + "_" + base_list[2] + "/" + base_list[3]
##########Remove
result = inference_detector(model, img)
#write_result(name, result, model.CLASSES, out_file=os.path.join(TEST_RESULT_PATH, 'my_test_multi_scale_epch_{}.tsv'.format(epch_count))) # use name instead name1 for hackthon submission
#show_result(img, result, model.CLASSES, out_file= TEST_RESULT_PATH + 'bboxs/' + tmp + ".pnm")
write_result(name, result, model.CLASSES, out_file=os.path.join(TEST_RESULT_PATH, 'my_test_epch_15_interpolation.tsv')) # use name instead name1 for hackthon submission
    img_count += 1
#print(img_count)
print("num = %d name = %s" %(img_count,name))
# In[103]:
import os
import glob
import csv
from shutil import copyfile
def linear_interpolation(pred, succ, lin_images, input_tsv, step, out_tsv):
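    """Write interpolated detections for the frames between `pred` and `succ`.

    For every class detected in both anchor frames, each bounding-box corner is
    moved linearly from the predecessor box toward the successor box, and one
    row per in-between frame is appended to `out_tsv`.
    """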
lin_images.sort()
succ_base_name = os.path.basename(succ).split(".")[0]
pred_base_name = os.path.basename(pred).split(".")[0]
#copyfile(input_tsv, out_tsv)
tsv_file = csv.reader(open(input_tsv, "r"), delimiter="\t")
prd_classes = []
suc_classes = []
prd_keys = set()
suc_keys = set()
for row in tsv_file:
# print("row = ", row)
# print('ped_keys = ', prd_keys)
# print('suc_keys = ', suc_keys)
# frame xtl ytl xbr ybr class temporary data
# 2018-02-13_1418_left/020963 679 866 754 941 3.27
prd_record = {} #defaultdict(list)
suc_record = {} #defaultdict(list)
#print("row[0] = ", row[0])
x = os.path.join(os.path.basename(os.path.dirname(pred)),os.path.basename(pred))
y = os.path.basename(os.path.dirname(os.path.dirname(pred)))
dict_key = y + "_" + x
x2 = os.path.join(os.path.basename(os.path.dirname(succ)),os.path.basename(succ))
y2 = os.path.basename(os.path.dirname(os.path.dirname(succ)))
dict_key2 = y2 + "_" + x2
# print('y = ', y)
# print("x = ", x)
# print("dict_key = ", dict_key.split('.')[0])
if row[0] == dict_key.split('.')[0]:
if row[5] not in prd_keys:
print("pred check cleared")
prd_record["class"] = row[5]
prd_record["xtl"] = row[1]
prd_record["ytl"] = row[2]
prd_record["xbr"] = row[3]
prd_record["ybr"] = row[4]
print("prd_record['ybr'] = ", prd_record["ybr"])
prd_keys.add(row[5])
# #prd_record[row[5]].append(row[1]) #xtl
# prd_record[row[5]].append(row[2]) #ytl
# prd_record[row[5]].append(row[3]) #xbr
# prd_record[row[5]].append(row[4]) #ybr
prd_classes.append(prd_record)
            else:
                # the same class seen twice in one anchor frame is ambiguous; drop it
                prd_classes = [c for c in prd_classes if c["class"] != row[5]]
                print("dropped duplicate predecessor class", row[5])
elif row[0] == dict_key2.split('.')[0]:
print("Succ check cleared")
if row[5] not in suc_keys:
suc_record["class"] = row[5]
suc_record["xtl"] = row[1]
suc_record["ytl"] = row[2]
suc_record["xbr"] = row[3]
suc_record["ybr"] = row[4]
suc_keys.add(row[5])
# suc_record[row[5]].append(row[1])
# suc_record[row[5]].append(row[2])
# suc_record[row[5]].append(row[3])
# suc_record[row[5]].append(row[4])
suc_classes.append(suc_record)
            else:
                suc_classes = [c for c in suc_classes if c["class"] != row[5]]
                print("dropped duplicate successor class", row[5])
#print("prd_keys = ", prd_keys)
common_classes = prd_keys.intersection(suc_keys)
print(common_classes)
for common_class in common_classes:
for prd_class in prd_classes:
if prd_class["class"] == common_class:
for suc_class in suc_classes:
if suc_class["class"] == common_class:
xtl_gr = (int(prd_class["xtl"]) - int(suc_class["xtl"])) / step
ytl_gr = (int(prd_class["ytl"]) - int(suc_class["ytl"])) / step
xbr_gr = (int(prd_class["xbr"]) - int(suc_class["xbr"])) / step
ybr_gr = (int(prd_class["ybr"]) - int(suc_class["ybr"])) / step
print(xtl_gr, ytl_gr, xbr_gr, ybr_gr)
for f in lin_images:
curr_base = os.path.basename(f).split(".")[0]
# print("curr_base = ", curr_base)
# print("pred_base_name = ", pred_base_name)
# print("f = ", f)
factor = int(curr_base) - int(pred_base_name)
curr_xtl = int(prd_class["xtl"]) + (factor * xtl_gr)
curr_ytl = int(prd_class["ytl"]) + (factor * ytl_gr)
curr_xbr = int(prd_class["xbr"]) + (factor * xbr_gr)
curr_ybr = int(prd_class["ybr"]) + (factor * ybr_gr)
temp = ''
with open(out_tsv, mode = 'a') as result_file:
result_file_writer = csv.writer(result_file, delimiter = '\t')
result_file_writer.writerow([f, str(curr_xtl), str(curr_ytl), str(curr_xbr), str(curr_ybr), prd_class["class"], temp, temp])
# In[105]:
#load the linear mappings.json
import csv
linear_mappings = "/root/ws/mmdetection-icevision/data-preprocess/linear_mappings.json"
input_tsv = os.path.join(TEST_RESULT_PATH, 'my_test_epch_15_interpolation_copy.tsv')
out_tsv = os.path.join(TEST_RESULT_PATH, 'my_test_epch_15_interpolation_copy.tsv')
interpolation_mappings = []
with open(linear_mappings, 'r') as f:
interpolation_mappings = json.load(f)
for i in interpolation_mappings:
pred = i["predecessor"]
succ = i['successor']
interpol_list = i['linear_list']
step = 5
linear_interpolation(pred, succ, interpol_list, input_tsv, step, out_tsv)
# if i["predecessor"] == neural_net_list[100]:
# break
# In[70]:
# trial code
# extracted_train_data = "/home/sgj/temp/test_data/2018-03-16_1324"
# for subdir, dirs, files in os.walk(extracted_train_data):
# print("subdir = ", subdir)
# for file in files:
# if file.endswith('.jpg'):
# current_file = os.path.join(subdir, file)
# #folder_name = os.path.basename(os.path.dirname(current_file))
# #expected_name = folder_name + '_' + os.path.basename(current_file)
# y = file.split("_")
# expected_name = y[0] + "_" + y[1] + "_left_jpgs_" + y[2]
# absolute_expected_name = os.path.join(os.path.dirname(current_file),expected_name)
# os.rename(current_file, absolute_expected_name)
# In[37]:
extracted_train_data = "/home/sgj/temp/train_data/2018-02-13_1418_left_jpgs"
for subdir, dirs, files in os.walk(extracted_train_data):
print("subdir = ", subdir)
for file in files:
if file.endswith('.jpg'):
current_file = os.path.join(subdir, file)
folder_name = os.path.basename(os.path.dirname(current_file))
expected_name = folder_name + '_' + os.path.basename(current_file)
absolute_expected_name = os.path.join(os.path.dirname(current_file),expected_name)
os.rename(current_file, absolute_expected_name)
# In[25]:
# move out un-annotated images -
# ARGS -
# Annotations data tsv
# Extracted images folder
# Destination folder for annotated_data
import os
annotation_data_tsv_folder = "/home/sgj/nvme/ice-vision/annotations/test/all_validation_annotations"
extracted_images_folder = "/home/sgj/temp/test_data/all_validation_images"
#dest_annotated_imgs = "/home/sgj/nvme/ice-vision/annotated_data/val"
dest_annotated_imgs = "/home/sgj/temp/ice-vision/annotated_data/val"
os.makedirs(dest_annotated_imgs)
img_count = 0
for root, dirs, files in os.walk(annotation_data_tsv_folder):
for name in files:
if name.endswith('.tsv'):
prefix = name.split(".")[0]
image_name = prefix + ".jpg"
expected_img_path = os.path.join(extracted_images_folder, image_name)
new_image_path = os.path.join(dest_annotated_imgs, image_name)
if os.path.exists(expected_img_path):
img_count = img_count + 1
os.rename(expected_img_path, new_image_path)
else:
print("image missing-----------------------")
print("total images = ", img_count)
# In[18]:
temp = "2018-02-13_1418_left_jpgs_014810.tsv"
temp.split(".")[0]
# In[3]:
for subdir, dirs, files in os.walk(copied_train_data):
print("subdir = ", subdir)
for file in files:
if file.endswith('.pnm'):
current_file = os.path.join(subdir, file)
print('current file = ', current_file)
cam_dir = current_file.split('/')[-2]
#print("cam dir = ", cam_dir)
date_dir = current_file.split('/')[-3]
#print("date_dir = ", date_dir)
expected_folder = '/data/train_subset/'
expected_file_name = date_dir + "_" + cam_dir + "_" + os.path.basename(current_file)
expected_file_path = os.path.join(expected_folder, expected_file_name)
#copyfile(current_file, dst_file_path)
os.rename(current_file, expected_file_path)
print("expected_file_path = ", expected_file_path)
# In[4]:
# In[ ]:
| 35.02445 | 194 | 0.585829 | 1,859 | 14,325 | 4.243141 | 0.152232 | 0.031947 | 0.021552 | 0.016227 | 0.36676 | 0.29285 | 0.240365 | 0.208418 | 0.204868 | 0.177485 | 0 | 0.031364 | 0.269948 | 14,325 | 408 | 195 | 35.110294 | 0.722892 | 0.281396 | 0 | 0.207921 | 0 | 0 | 0.117788 | 0.064564 | 0 | 0 | 0 | 0 | 0 | 1 | 0.004951 | false | 0 | 0.069307 | 0 | 0.074257 | 0.084158 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
c609dcc818d4247b3e931541707e6e65fd9fc433 | 1,683 | py | Python | osu/graph_metadata.py | brandondong/osu-beatmap-generator | 7ca14793ef6a48a65cfd1a564f3b24d940a6051a | [
"MIT"
] | null | null | null | osu/graph_metadata.py | brandondong/osu-beatmap-generator | 7ca14793ef6a48a65cfd1a564f3b24d940a6051a | [
"MIT"
] | 1 | 2021-06-01T23:50:59.000Z | 2021-06-01T23:50:59.000Z | osu/graph_metadata.py | brandondong/osu-beatmap-generator | 7ca14793ef6a48a65cfd1a564f3b24d940a6051a | [
"MIT"
] | null | null | null | import os
import matplotlib.pyplot as plt
import numpy as np
from models import models_util
DIFFICULTY_LABEL = "Star Difficulty"
BPM_LABEL = "BPM"
LENGTH_LABEL = "Length"
CS_LABEL = "Circle Size"
DRAIN_LABEL = "HP Drain"
ACCURACY_LABEL = "Accuracy"
AR_LABEL = "Approach Rate"
SAVE_FOLDER = "visualization/"
def print_property_values(labels, values):
for idx, value in enumerate(values):
print(f"{labels[idx]}: {value}")
print()
# Data rows are in the format of [difficulty_rating],[bpm],[total_length],[cs],[drain],[accuracy],[ar].
labels = [DIFFICULTY_LABEL, BPM_LABEL, LENGTH_LABEL, CS_LABEL, DRAIN_LABEL, ACCURACY_LABEL, AR_LABEL]
filename_labels = []
for label in labels:
filename_labels.append(label.lower().replace(" ", "_"))
# Keep track of each property in separate rows.
points = np.transpose(models_util.load_metadata_dataset())
mins = points.min(axis=-1)
maxes = points.max(axis=-1)
means = np.mean(points, axis=-1)
print("Minimum values:")
print_property_values(labels, mins)
print("Maximum values:")
print_property_values(labels, maxes)
print("Mean values:")
print_property_values(labels, means)
# Plot graphs for each input output feature pair.
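# columns 0-2 (difficulty, bpm, length) are plotted against columns 3-6
# (cs, drain, accuracy, ar), one hexbin density plot per pair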
os.makedirs(SAVE_FOLDER, exist_ok=True)  # assumption: output folder may not exist yet; savefig fails otherwise
for i in range(3):
for j in range(3, 7):
        plt.clf()  # start each plot from a clean figure so successive hexbins don't stack up
        plt.hexbin(points[i], points[j], gridsize=50, cmap="inferno")
plt.axis([mins[i], maxes[i], mins[j], maxes[j]])
x_label = labels[i]
y_label = labels[j]
plt.title(f"{y_label} vs {x_label}")
plt.xlabel(x_label)
plt.ylabel(y_label)
x_file_label = filename_labels[i]
y_file_label = filename_labels[j]
image_name = os.path.join(SAVE_FOLDER, f"{y_file_label}_vs_{x_file_label}.png")
print(f"Saving graph to {image_name}.")
plt.savefig(image_name) | 28.525424 | 103 | 0.734403 | 263 | 1,683 | 4.494297 | 0.387833 | 0.043993 | 0.064298 | 0.084602 | 0.07868 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005431 | 0.124777 | 1,683 | 59 | 104 | 28.525424 | 0.797013 | 0.115865 | 0 | 0 | 0 | 0 | 0.160269 | 0.024242 | 0 | 0 | 0 | 0 | 0 | 1 | 0.022727 | false | 0 | 0.090909 | 0 | 0.113636 | 0.227273 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
c60bdc534000a18e4fc655a5cff50a94af3d4302 | 2,077 | py | Python | ADE/train.py | BinahHu/ADE-Longtail | 4aabf1cbf50746e610b91362c40cbcb7884dd170 | [
"Apache-2.0"
] | null | null | null | ADE/train.py | BinahHu/ADE-Longtail | 4aabf1cbf50746e610b91362c40cbcb7884dd170 | [
"Apache-2.0"
] | null | null | null | ADE/train.py | BinahHu/ADE-Longtail | 4aabf1cbf50746e610b91362c40cbcb7884dd170 | [
"Apache-2.0"
] | null | null | null | # import some common libraries
# import some common detectron2 utilities
from detectron2.config import get_cfg
from detectron2.data import (
build_detection_test_loader,
build_detection_train_loader,
)
from detectron2.engine import default_argument_parser, default_setup, launch, DefaultTrainer
# import ADE related package
from dataset.ade import register_all_ade
from dataset.my_mapper import MyDatasetMapper
from transforms.my_resize import MyResize
from modeling.backbone.my_build import register_my_backbone
from modeling.roi_heads.roi_cls import register_roi_cls
from additional_cfg import set_additional_cfg
class Trainer(DefaultTrainer):
@classmethod
def build_train_loader(cls, cfg):
"""
Returns:
iterable
It now calls :func:`detectron2.data.build_detection_train_loader`.
Overwrite it if you'd like a different data loader.
"""
return build_detection_train_loader(cfg, mapper=MyDatasetMapper(cfg, is_train=True, augmentations=[
MyResize(cfg.INPUT.RESIZE_SHORT, cfg.INPUT.RESIZE_LONG)]))
def setup(args):
"""
Create configs and perform basic setups.
"""
cfg = get_cfg()
cfg.merge_from_file(args.config_file)
cfg.merge_from_list(args.opts)
cfg = set_additional_cfg(cfg)
cfg.freeze()
default_setup(
cfg, args
) # if you don't like any of the default setup, write your own setup code
return cfg
def register_all(cfg):
register_all_ade(cfg.DATASETS.ADE_ROOT)
register_my_backbone()
register_roi_cls()
def main(args):
cfg = setup(args)
register_all(cfg)
trainer = Trainer(cfg)
trainer.resume_or_load(resume=args.resume)
return trainer.train()
if __name__ == "__main__":
args = default_argument_parser().parse_args()
print("Command Line Args:", args)
launch(
main,
args.num_gpus,
num_machines=args.num_machines,
machine_rank=args.machine_rank,
dist_url=args.dist_url,
args=(args,),
)
| 26.628205 | 107 | 0.703418 | 269 | 2,077 | 5.159851 | 0.390335 | 0.040346 | 0.041066 | 0.054035 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003081 | 0.218585 | 2,077 | 77 | 108 | 26.974026 | 0.852126 | 0.168031 | 0 | 0 | 0 | 0 | 0.01603 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.1875 | 0 | 0.354167 | 0.020833 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
c60dde17e3e2f4dd2320cb3a7f244103bed46b66 | 2,411 | py | Python | src/messages.py | Colk-tech/gcpdiscord | b2a43dca8db5a17e6e72f36c7e895db19b836067 | [
"MIT"
] | 4 | 2020-12-31T09:41:09.000Z | 2022-02-20T14:13:41.000Z | src/messages.py | Colk-tech/gcpdiscord | b2a43dca8db5a17e6e72f36c7e895db19b836067 | [
"MIT"
] | 3 | 2020-12-28T18:19:44.000Z | 2021-01-03T14:51:59.000Z | src/messages.py | Colk-tech/gcpdiscord | b2a43dca8db5a17e6e72f36c7e895db19b836067 | [
"MIT"
] | 2 | 2020-12-31T05:42:57.000Z | 2022-03-24T07:54:25.000Z | PERMISSION_DENIED_MESSAGE: str = "***PERMISSION DENIED!*** \n" \
"You are not permitted to use this command. \n" \
"Please contact to your server master. \n."
ERROR_OCCURRED_MESSAGE: str = "***ERROR OCCURRED!*** \n" \
"Error has occurred while executing gcp request command. \n" \
"Please contact to your server master or the software developer. \n" \
"Error: {} \n"
OPERATION_COMPLETED_MESSAGE: str = "***Operation Completed! ***\n" \
"Operation: {} has successfully completed. \n" \
"This may take more 2~3 minutes that the Minecraft Server starts (stops)."
INSTANCE_IS_ALREADY_IN_REQUESTED_STATUS: str = "***Already in status of {}.*** \n" \
"The instance is already in the status. \n" \
"No operation has done."
PRE_STOP_OPERATION_PROCESSING: str = "Processing pre-stop operation... \n" \
"Trying to shutdown Minecraft server from the console channel. \n" \
"Whichever the operation is completed or not, " \
"the server will shutdown in 5 minutes forcibly."
REQUEST_RECEIVED: str = "Operation: {} has been requested. \n" \
"Please wait until the operation is done. \n"
START_REQUEST_RECEIVED_MESSAGE = "Trying to start the gcp server. \n" \
"It takes 3 sec at least to complete the operation. \n" \
"The minecraft server will start as soon as gcp server started. \n" \
"PLEASE WAIT UNTIL YOU RECEIVE MESSAGE 'SERVER HAS STARTED!' " \
"BEFORE YOU JOIN THE MINECRAFT SERVER."
STOP_REQUEST_RECEIVED_MESSAGE = "Trying to stop the gcp server. \n" \
"It takes 5 minutes at least to complete the operation. \n" \
"We will issue `stop` command in console channel. \n" \
"And then, we will wait for 5 minutes for the Minecraft server stops." \
"After all the process is done, we will shutdown GCP instance finally."
| 70.911765 | 109 | 0.517628 | 256 | 2,411 | 4.792969 | 0.34375 | 0.061125 | 0.05868 | 0.03423 | 0.193969 | 0.145069 | 0.112469 | 0.06357 | 0 | 0 | 0 | 0.004172 | 0.403567 | 2,411 | 33 | 110 | 73.060606 | 0.849096 | 0 | 0 | 0 | 0 | 0 | 0.542099 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
c60e59d24506527b02637849c4f5442ee5efa5c7 | 4,035 | py | Python | util/SR2SC.py | lorteddie/dcmqi | 4f668745f4f8b2a67e6dcfdee187ac7793e07116 | [
"BSD-3-Clause"
] | null | null | null | util/SR2SC.py | lorteddie/dcmqi | 4f668745f4f8b2a67e6dcfdee187ac7793e07116 | [
"BSD-3-Clause"
] | null | null | null | util/SR2SC.py | lorteddie/dcmqi | 4f668745f4f8b2a67e6dcfdee187ac7793e07116 | [
"BSD-3-Clause"
] | null | null | null | from pydicom.sr import _snomed_dict
import os
import re
folder = "E:\\work\\QIICR\\dcmqi"
Out_Folder = "E:\\work\\QIICR\\renamed_dcmqi"
def recursive_file_find(address, regexp):
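    """Recursively collect files under `address` whose full path matches `regexp`."""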
filelist = os.listdir(address)
approvedlist=[]
for filename in filelist:
fullpath = os.path.join(address, filename)
if os.path.isdir(fullpath):
approvedlist.extend(recursive_file_find(fullpath, regexp))
elif re.match(regexp, fullpath) is not None:
approvedlist.append(fullpath)
return approvedlist
def GetFileString(filename):
File_object = open(filename, "r")
try:
Content = File_object.read()
except:
print("Couldn't read the file")
Content = ""
File_object.close()
return Content
def WriteTextInFile(filename, txt):
folder = os.path.dirname(filename)
if not os.path.exists(folder):
os.makedirs(folder)
File_object = open(filename, "w")
File_object.write(txt)
File_object.close()
def FindRegex(regexp, text, extend=[0, 0], printout=False):
found_iters = re.finditer(regexp, text)
founds = list(found_iters)
ii = []
for mmatch in founds:
yy = text[mmatch.start() - extend[0]:mmatch.end() + extend[1]]
counter = "[%04d ]" % len(ii)
if (printout):
print(counter + yy)
ii.append(yy)
return ii
def ReplaceQuotedText(find_text, rep_text, text):
pattern = "(\"\s*" + find_text + "\s*\")|('\s*" + find_text + "\s*')"
    replacement = "\"" + rep_text + "\""
new_text = re.sub(pattern, replacement, text)
View = ShowReplaceMent(pattern, replacement, text)
return [new_text, View]
def FindAndReplace(find_text, rep_text, text):
newtext = re.sub(find_text, rep_text, text)
x = ShowReplaceMent(find_text, rep_text, text)
return [newtext, x]
def ShowReplaceMent(find_text, rep_text, text):
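    """Render a before/after preview line for every occurrence of `find_text`."""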
output = []
text_seq = FindRegex("\\n.*(" + find_text + ").*\\n", text, [-1, -1])
for line_txt in text_seq:
found_iters = re.finditer(find_text, line_txt)
founds = list(found_iters)
if len(founds) > 0:
mmatch = founds[0]
yy = line_txt[:mmatch.start()] + \
"{ [" + line_txt[mmatch.start():mmatch.end()] + "]-->[" + rep_text + "] }" + \
line_txt[mmatch.end():]
output.append(yy)
return output
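# pydicom ships the SNOMED RT <-> SNOMED CT code table; use it to translate codes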
code_map = _snomed_dict.mapping["SCT"]
details = []
# recursive_file_find(folder, all_files, "(.*\\.cpp$)|(.*\\.h$)|(.*\\.json$)")
all_files = recursive_file_find(folder, "(?!.*\.git.*)")
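# the negative lookahead keeps anything under .git out of the file list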
for jj, f in enumerate(all_files, start=1):
f_content = (GetFileString(f))
if len(f_content) == 0:
continue
    [f_content, x] = ReplaceQuotedText("SRT", "SCT", f_content)
    details = x
    [f_content, x] = FindAndReplace(r",\s*SRT\s*,", ",SCT,", f_content)
    details.extend(x)
    [f_content, x] = FindAndReplace(" SRT ", " SCT ", f_content)
    details.extend(x)
    [f_content, x] = FindAndReplace("_SRT_", "_SCT_", f_content)
    details.extend(x)
    [f_content, x] = FindAndReplace("srt.h", "sct.h", f_content)
    details.extend(x)
    for srt_code, sct_code in code_map.items():
# f_content = ReplaceQuotedText(srt_code, sct_code, f_content)
[f_content, x] = FindAndReplace(srt_code, sct_code, f_content)
details.extend(x)
if len(details) == 0:
continue
edited_file_name = f.replace(folder, Out_Folder)
edited_file_log = f.replace(folder, os.path.join(Out_Folder, '..\\log')) + ".txt"
WriteTextInFile(edited_file_name, f_content)
print("------------------------------------------------------------------------")
f_number = "(file %03d ) " % jj
print(f_number + f)
logg = ""
for m, c in zip(details, range(0, len(details))):
indent = "\t\t\t%04d" % c
logg += (indent + m + "\n")
if len(logg) != 0:
WriteTextInFile(edited_file_log, logg)
print("the find/replace process finished ...")
| 32.28 | 95 | 0.603965 | 512 | 4,035 | 4.576172 | 0.242188 | 0.058045 | 0.023047 | 0.03201 | 0.172002 | 0.113103 | 0.065301 | 0.065301 | 0.065301 | 0.065301 | 0 | 0.006061 | 0.223048 | 4,035 | 124 | 96 | 32.540323 | 0.741308 | 0.033953 | 0 | 0.108911 | 0 | 0 | 0.092683 | 0.031836 | 0 | 0 | 0 | 0 | 0 | 1 | 0.069307 | false | 0 | 0.039604 | 0 | 0.168317 | 0.069307 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
c60ef77ba35cbe6b2d5368873bee63645c0514b7 | 9,417 | py | Python | disentanglement_lib/visualize/visualize_util.py | erow/disentanglement_lib | c875207fdeadc44880277542447544941bc0bd0a | [
"Apache-2.0"
] | null | null | null | disentanglement_lib/visualize/visualize_util.py | erow/disentanglement_lib | c875207fdeadc44880277542447544941bc0bd0a | [
"Apache-2.0"
] | null | null | null | disentanglement_lib/visualize/visualize_util.py | erow/disentanglement_lib | c875207fdeadc44880277542447544941bc0bd0a | [
"Apache-2.0"
] | null | null | null | # coding=utf-8
# Copyright 2018 The DisentanglementLib Authors. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Utility functions for the visualization code."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import math
from disentanglement_lib.utils import resources
import numpy as np
from PIL import Image
import scipy
from six.moves import range
import torch
import imageio
import matplotlib.pyplot as plt
import matplotlib.animation as animation
def array_animation(data, fps=20):
fig, ax = plt.subplots(figsize=(3, 3))
plt.tight_layout()
ax.set_axis_off()
if len(data.shape) == 4:
data = data.transpose([0, 2, 3, 1])
im = ax.imshow(data[0], vmin=0, vmax=1)
def init():
im.set_data(data[0])
return (im,)
# animation function. This is called sequentially
def animate(i):
data_slice = data[i]
im.set_data(data_slice)
return (im,)
# call the animator. blit=True means only re-draw the parts that have changed.
anim = animation.FuncAnimation(fig, animate, init_func=init,
frames=len(data), interval=1000 / fps, blit=True)
return anim
def traversal_latents(base_latent, traversal_vector, dim):
l = len(traversal_vector)
traversals = base_latent.repeat(l, 1)
traversals[:, dim] = traversal_vector
return traversals
def plot_bar(axes, images, label=None):
for ax, img in zip(axes, images):
if img.shape[2] == 3:
ax.imshow(img)
elif img.shape[2] == 1:
ax.imshow(img.squeeze(2), cmap='gray')
ax.axis('off')
if label:
axes[-1].get_yaxis().set_label_position("right")
axes[-1].set_ylabel(label)
def sigmoid(x):
return 1 / (1 + np.exp(-np.clip(x, -20, 20)))
def plt_sample_traversal(mu, decode, traversal_len=5, dim_list=range(4), r=3):
"""
:param mu: Tensor: [1,dim]
:param decode:
:param traversal_len:
:param dim_list:
:param r:
:return:
"""
dim_len = len(dim_list)
if len(mu.shape) == 1:
mu = mu.unsqueeze(0)
with torch.no_grad():
fig, axes = plt.subplots(dim_len, traversal_len, squeeze=False,
figsize=(traversal_len, dim_len,))
plt.tight_layout(pad=0.1)
plt.subplots_adjust(wspace=0.01, hspace=0.05)
for i, dim in enumerate(dim_list):
base_latents = mu.clone()
linear_traversal = torch.linspace(-r, r, traversal_len)
traversals = traversal_latents(base_latents, linear_traversal, dim)
recon_batch = decode(traversals)
plot_bar(axes[i, :], recon_batch)
return fig
def save_image(image, image_path):
"""Saves an image in the [0,1]-valued Numpy array to image_path.
Args:
image: Numpy array of shape (height, width, {1,3}) with values in [0, 1].
image_path: String with path to output image.
"""
# Copy the single channel if we are provided a grayscale image.
if image.shape[2] == 1:
image = np.repeat(image, 3, axis=2)
image = np.ascontiguousarray(image)
image *= 255.
image = image.astype(np.uint8) # disable the converting warning
with open(image_path, "wb") as path:
img = Image.fromarray(image, mode="RGB")
img.save(path)
def grid_save_images(images, image_path):
"""Saves images in list of [0,1]-valued np.arrays on a grid.
Args:
images: List of Numpy arrays of shape (height, width, {1,3}) with values in
[0, 1].
image_path: String with path to output image.
"""
side_length = int(math.floor(math.sqrt(len(images))))
image_rows = [
np.concatenate(
images[side_length * i:side_length * i + side_length], axis=0)
for i in range(side_length)
]
tiled_image = np.concatenate(image_rows, axis=1)
print(image_path)
save_image(tiled_image, image_path)
def padded_grid(images, num_rows=None, padding_px=10, value=None):
"""Creates a grid with padding in between images."""
num_images = len(images)
if num_rows is None:
num_rows = best_num_rows(num_images)
# Computes how many empty images we need to add.
num_cols = int(np.ceil(float(num_images) / num_rows))
num_missing = num_rows * num_cols - num_images
# Add the empty images at the end.
all_images = images + [np.ones_like(images[0])] * num_missing
# Create the final grid.
rows = [padded_stack(all_images[i * num_cols:(i + 1) * num_cols], padding_px,
1, value=value) for i in range(num_rows)]
return padded_stack(rows, padding_px, axis=0, value=value)
def padded_stack(images, padding_px=10, axis=0, value=None):
"""Stacks images along axis with padding in between images."""
padding_arr = padding_array(images[0], padding_px, axis, value=value)
new_images = [images[0]]
for image in images[1:]:
new_images.append(padding_arr)
new_images.append(image)
return np.concatenate(new_images, axis=axis)
def padding_array(image, padding_px, axis, value=None):
"""Creates padding image of proper shape to pad image along the axis."""
shape = list(image.shape)
shape[axis] = padding_px
if value is None:
return np.ones(shape, dtype=image.dtype)
else:
assert len(value) == shape[-1]
shape[-1] = 1
return np.tile(value, shape)
def best_num_rows(num_elements, max_ratio=4):
"""Automatically selects a smart number of rows."""
best_remainder = num_elements
best_i = None
i = int(np.sqrt(num_elements))
while True:
if num_elements > max_ratio * i * i:
return best_i
remainder = (i - num_elements % i) % i
if remainder == 0:
return i
if remainder < best_remainder:
best_remainder = remainder
best_i = i
i -= 1
def pad_around(image, padding_px=10, axis=None, value=None):
"""Adds a padding around each image."""
# If axis is None, pad both the first and the second axis.
if axis is None:
image = pad_around(image, padding_px, axis=0, value=value)
axis = 1
padding_arr = padding_array(image, padding_px, axis, value=value)
return np.concatenate([padding_arr, image, padding_arr], axis=axis)
def add_below(image, padding_px=10, value=None):
"""Adds a footer below."""
if len(image.shape) == 2:
image = np.expand_dims(image, -1)
if image.shape[2] == 1:
image = np.repeat(image, 3, 2)
if image.shape[2] != 3:
raise ValueError("Could not convert image to have three channels.")
with open(resources.get_file("disentanglement_lib.png"), "rb") as f:
footer = np.array(Image.open(f).convert("RGB")) * 1.0 / 255.
missing_px = image.shape[1] - footer.shape[1]
if missing_px < 0:
return image
if missing_px > 0:
padding_arr = padding_array(footer, missing_px, axis=1, value=value)
footer = np.concatenate([padding_arr, footer], axis=1)
return padded_stack([image, footer], padding_px, axis=0, value=value)
def save_animation(list_of_animated_images, image_path, fps):
full_size_images = []
for single_images in zip(*list_of_animated_images):
full_size_images.append(
pad_around(add_below(padded_grid(list(single_images)))))
imageio.mimwrite(image_path, full_size_images, fps=fps)
def cycle_factor(starting_index, num_indices, num_frames):
"""Cycles through the state space in a single cycle."""
grid = np.linspace(starting_index, starting_index + 2 * num_indices,
num=num_frames, endpoint=False)
grid = np.array(np.ceil(grid), dtype=np.int64)
grid -= np.maximum(0, 2 * grid - 2 * num_indices + 1)
grid += np.maximum(0, -2 * grid - 1)
return grid
def cycle_gaussian(starting_value, num_frames, loc=0., scale=1.):
"""Cycles through the quantiles of a Gaussian in a single cycle."""
starting_prob = scipy.stats.norm.cdf(starting_value, loc=loc, scale=scale)
grid = np.linspace(starting_prob, starting_prob + 2.,
num=num_frames, endpoint=False)
grid -= np.maximum(0, 2 * grid - 2)
grid += np.maximum(0, -2 * grid)
grid = np.minimum(grid, 0.999)
grid = np.maximum(grid, 0.001)
return np.array([scipy.stats.norm.ppf(i, loc=loc, scale=scale) for i in grid])
def cycle_interval(starting_value, num_frames, min_val, max_val):
"""Cycles through the state space in a single cycle."""
starting_in_01 = (starting_value - min_val) / (max_val - min_val)
grid = np.linspace(starting_in_01, starting_in_01 + 2.,
num=num_frames, endpoint=False)
grid -= np.maximum(0, 2 * grid - 2)
grid += np.maximum(0, -2 * grid)
return grid * (max_val - min_val) + min_val
| 34.119565 | 84 | 0.653818 | 1,387 | 9,417 | 4.284066 | 0.240808 | 0.018176 | 0.015315 | 0.014137 | 0.143554 | 0.106698 | 0.099461 | 0.07001 | 0.07001 | 0.056547 | 0 | 0.021859 | 0.232452 | 9,417 | 275 | 85 | 34.243636 | 0.800221 | 0.207922 | 0 | 0.063584 | 0 | 0 | 0.012598 | 0.003149 | 0 | 0 | 0 | 0 | 0.00578 | 1 | 0.109827 | false | 0 | 0.075145 | 0.00578 | 0.289017 | 0.011561 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
c60efc48bff6adf1e39c4b9e35874cde0aa11abd | 6,515 | py | Python | Lab/Lab5/lab5_group12_server.py | Zhuoyue-Xing/IOT---INTELLIG-CONNECTED-SYS | 09111380e5ad36663e90de5bcd22691619c9a2f1 | [
"Apache-2.0"
] | 1 | 2020-03-04T21:51:42.000Z | 2020-03-04T21:51:42.000Z | Lab/Lab5/lab5_group12_server.py | Zhuoyue-Xing/IOT---INTELLIG-CONNECTED-SYS | 09111380e5ad36663e90de5bcd22691619c9a2f1 | [
"Apache-2.0"
] | null | null | null | Lab/Lab5/lab5_group12_server.py | Zhuoyue-Xing/IOT---INTELLIG-CONNECTED-SYS | 09111380e5ad36663e90de5bcd22691619c9a2f1 | [
"Apache-2.0"
] | null | null | null | # Created by Chenye Yang, Haokai Zhao, Zhuoyue Xing on 2019/10/13.
# Copyright © 2019 Chenye Yang, Haokai Zhao, Zhuoyue Xing . All rights reserved.
from machine import Pin, I2C, RTC, Timer
import socket
import ssd1306
import time
import network
import urequests
import json
# ESP8266 connects to a router
def ConnectWIFI(essid, key):
import network
sta_if = network.WLAN(network.STA_IF) # config a station object
if not sta_if.isconnected(): # if the connection is not established
print('connecting to network...')
sta_if.active(True) # activate the station interface
sta_if.connect(essid, key) # connect to WiFi network
while not sta_if.isconnected():
print('connecting')
pass
print('network config:', sta_if.ifconfig()) # check the IP address
# ap_if.active(False) # disable the access-point interface
else:
print('network config:', sta_if.ifconfig()) # IP address, subnet mask, gateway and DNS server
# Get Current Time and Set ESP8266 Time to current time
def SetCurtTime():
# World Clock API
url = "http://worldclockapi.com/api/json/est/now"
webTime = json.loads(urequests.get(url).text) # returns a json string, convert it to json
    webTime = webTime['currentDateTime'].split('T') # currentDateTime string like 2019-10-13T15:05-04:00
date = list(map(int, webTime[0].split('-'))) # extract time numbers
time = list(map(int, webTime[1].split('-')[0].split(':')))
timeTuple = (date[0], date[1], date[2], 0, time[0], time[1], 0, 0) # (year, month, day, weekday, hours, minutes, seconds, mseconds)
rtc.datetime(timeTuple) # set a specific date and time
# print(rtc.datetime())
# Show current time on OLED
def OLEDShowTime():
weekday = {0:'Monday', 1:'Tuesday', 2:'Wednesday', 3:'Thursday', 4:'Friday', 5:'Saturday', 6:'Sunday'}
    # Store the current date and time in an <int> list
timeList = list(map(int, rtc.datetime()))
    # Convert <int> list to <str>
dateStr = "{:0>4d}".format(timeList[0])+'-'+"{:0>2d}".format(timeList[1])+'-'+"{:0>2d}".format(timeList[2])
# weekStr = weekday[timeList[3]]
timeStr = "{:0>2d}".format(timeList[4])+':'+"{:0>2d}".format(timeList[5])+':'+"{:0>2d}".format(timeList[6])
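    # "{:0>2d}" zero-pads each field to two digits, e.g. 7 -> "07"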
# Put string to OLED
oled.text(dateStr, 0, 0) # (message, x, y, color)
# oled.text(weekStr, 0, 11)
oled.text(timeStr, 0, 22)
# OLED show whether ESP8266 received commands
def OLEDRecvComd(received):
if received:
oled.text('RCVD', 80, 22) # Received the correct command
else:
        oled.text('MISS', 80, 22) # Received a command, but NOT a correct one
# Judge the Command received
def WhatCommand(cmd):
# cmd is Command in string, like: "turn on display"
global FLAG_True_Comd # Flag about whether it is a right command
    global FLAG_Display_On # Flag about whether OLED can display things
global FLAG_Show_Time # Flag about whether the current time is shown on OLED
if cmd == 'turn on display':
FLAG_True_Comd = 1
FLAG_Display_On = 1
elif cmd == 'turn off display':
FLAG_True_Comd = 1
FLAG_Display_On = 0
elif cmd == 'show current time':
FLAG_True_Comd = 1
FLAG_Show_Time = 1
elif cmd == 'close current time':
FLAG_True_Comd = 1
FLAG_Show_Time = 0
else:
FLAG_True_Comd = 0 # not a right command
# Judge what to display on OLED, and display OLED with Timer
def WhatShowOLED(p):
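    """Timer callback: decide what to draw and refresh the OLED."""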
global FLAG_True_Comd # Flag about whether it is a right command
    global FLAG_Display_On # Flag about whether OLED can display things
global FLAG_Show_Time # Flag about whether the current time is shown on OLED
global showComd
if FLAG_Display_On: # able to display OLED
oled.text(showComd,0,11)
OLEDRecvComd(FLAG_True_Comd) # whether it's a right command, show on OLED
if FLAG_Show_Time: # show time on OLED if be able to
OLEDShowTime()
oled.show() # display text
else:
oled.fill(0) # fill OLED with black
oled.show() # display all black
oled.fill(0) # refresh, remove residue
# ESP8266 as a server to listen and response
def ListenResponse():
    # a response about receiving a right JSON POST request
goodHTML = """<!DOCTYPE html>
<html>
<head> <title>Good Command</title> </head>
<body> <h1>The command from you is received by ESP8266</h1></body>
</html>
"""
    # a response about NOT receiving a JSON POST request
badHTML = """<!DOCTYPE html>
<html>
<head> <title>Bad Command</title> </head>
    <body> <h1>The command from you is NOT in JSON format</h1></body>
</html>
"""
addr = socket.getaddrinfo('0.0.0.0', 80)[0][-1] # Set web server port number to 80
s = socket.socket()
s.bind(addr) # Bind the socket to address
s.listen(1) # Enable a server to accept connections
print('listening on', addr)
global FLAG_True_Comd # Flag about whether it is a right command
    global FLAG_Display_On # Flag about whether OLED can display things
global FLAG_Show_Time # Flag about whether the current time is shown on OLED
FLAG_True_Comd = 0
FLAG_Display_On = 0
FLAG_Show_Time = 0
while True:
print("FLAG_True_Comd", FLAG_True_Comd)
print("FLAG_Display_On", FLAG_Display_On)
print("FLAG_Show_Time", FLAG_Show_Time)
# accept the connect to 80 port
cl, addr = s.accept()
print('client connected from', addr)
# ESP8266 listen from the port
# The client terminal instruction should be like:
# curl -H "Content-Type:application/json" -X POST -d '{"Command":"turn on display"}' http://192.168.50.100:80
cl_receive = cl.recv(500).decode("utf-8").split("\r\n")[-1] # get the whole request and try to split it
try: # if the request is in a JSON POST format
cl_receive = json.loads(cl_receive) # convert the json string to json
print(cl_receive['Command'])
global showComd
showComd = cl_receive['Command']
        except ValueError: # if not, give the response about not receiving a JSON POST
response = "HTTP/1.1 501 Implemented\r\n\r\nBad"
        else: # if it can be transformed to JSON, give the good response
response = "HTTP/1.1 200 OK\r\n\r\nGood"
WhatCommand(cl_receive['Command']) # judge what's the command received
# write to the port, i.e., give response
cl.send(response)
cl.close()
if __name__ == '__main__':
i2c = I2C(-1, scl=Pin(5), sda=Pin(4)) # initialize access to the I2C bus
i2c.scan()
oled = ssd1306.SSD1306_I2C(128, 32, i2c) # the width=128 and height=32
rtc = RTC()
tim = Timer(-1)
tim.init(period=100, mode=Timer.PERIODIC, callback=WhatShowOLED)
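    # the 100 ms periodic virtual timer keeps the OLED refreshed ~10x per second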
ConnectWIFI('Columbia University','') # connect esp8266 to a router
SetCurtTime()
ListenResponse() # Show ESP8266 Pins to test server
| 35.601093 | 132 | 0.701765 | 1,024 | 6,515 | 4.383789 | 0.282227 | 0.021386 | 0.032078 | 0.018935 | 0.214079 | 0.203386 | 0.175763 | 0.160615 | 0.145912 | 0.129873 | 0 | 0.039351 | 0.176976 | 6,515 | 183 | 133 | 35.601093 | 0.79765 | 0.407982 | 0 | 0.31746 | 0 | 0 | 0.208344 | 0.005809 | 0.015873 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0.007937 | 0.063492 | 0 | 0.119048 | 0.079365 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
c60fff305331c5af7ece82691a51268226e5c661 | 453 | py | Python | cord19/scripts/metadata/prefixes.py | udel-cbcb/covid19kg_rdf | 3cd8dd6c4654333777db6127f2a3f2e01b92b0ac | [
"CC-BY-4.0"
] | null | null | null | cord19/scripts/metadata/prefixes.py | udel-cbcb/covid19kg_rdf | 3cd8dd6c4654333777db6127f2a3f2e01b92b0ac | [
"CC-BY-4.0"
] | null | null | null | cord19/scripts/metadata/prefixes.py | udel-cbcb/covid19kg_rdf | 3cd8dd6c4654333777db6127f2a3f2e01b92b0ac | [
"CC-BY-4.0"
] | null | null | null | from rdflib import Namespace, XSD
from rdflib.namespace import DC, DCTERMS
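# IRI namespaces for minting CORD-19 resource identifiers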
FHIRCAT_CORD = Namespace("http://fhircat.org/cord-19/")
SSO = Namespace("http://semanticscholar.org/cv-research/")
DOI = Namespace("https://doi.org/")
PUBMED = Namespace("https://www.ncbi.nlm.nih.gov/pubmed/")
PMC = Namespace("https://www.ncbi.nlm.nih.gov/pmc/articles/")
MS_ACADEMIC = Namespace("https://academic.microsoft.com/paper/")
FHIR = Namespace("http://hl7.org/fhir/") | 45.3 | 64 | 0.735099 | 64 | 453 | 5.171875 | 0.515625 | 0.169184 | 0.102719 | 0.126888 | 0.181269 | 0.181269 | 0.181269 | 0 | 0 | 0 | 0 | 0.007109 | 0.068433 | 453 | 10 | 65 | 45.3 | 0.777251 | 0 | 0 | 0 | 0 | 0 | 0.477974 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.222222 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
c6111b7e68c98d9dc89d8452745da9d3fb412b2a | 6,568 | py | Python | pbutils/argparsers.py | phonybone/phonybone_utils | d95f226ddfc62a1d69b5ff6f53de86188fe0c8f9 | [
"MIT"
] | null | null | null | pbutils/argparsers.py | phonybone/phonybone_utils | d95f226ddfc62a1d69b5ff6f53de86188fe0c8f9 | [
"MIT"
] | null | null | null | pbutils/argparsers.py | phonybone/phonybone_utils | d95f226ddfc62a1d69b5ff6f53de86188fe0c8f9 | [
"MIT"
] | null | null | null | import sys
import os
import argparse
from types import MethodType
from importlib import import_module
from argparse import RawTextHelpFormatter # Note: this applies to all options, might not always be what we want...
from pbutils.configs import get_config, get_config_from_data, to_dict, inject_opts, CP
from .strings import qw, ppjson
from .streams import warn
def parser_stub(docstr):
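    """Return an ArgumentParser preloaded with the shared --config/-v/-q/-d flags."""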
parser = argparse.ArgumentParser(description=docstr, formatter_class=RawTextHelpFormatter)
parser.add_argument('--config', default=_get_default_config_fn())
parser.add_argument('-v', '--verbose', action='store_true', help='verbose')
parser.add_argument('-q', '--silent', action='store_true', help='silent mode')
parser.add_argument('-d', '--debug', action='store_true', help='debugging flag')
# leave comments here as templates
# parser.add_argument('required_arg')
# parser.add_argument('--something', default='', help='')
# parser.add_argument('args', nargs=argparse.REMAINDER)
return parser
def _get_default_config_fn():
# warning: pyenv breaks this with its shims
fn = sys.argv[0].replace('.py', '.ini')
if not fn.endswith('.ini'):
fn += '.ini'
return fn
def _assemble_config(opts, default_section_name='default'):
'''
This builds a config by the following steps:
1. read config file (as specified in opts, otherwise empty)
2. inject environment vars as specified by config
3. inject opts
returns a ConfigParserRaw object.
'''
if opts.config:
try:
config = get_config(opts.config, config_type='Raw')
except OSError as e:
if e.errno == 2 and e.filename != _get_default_config_fn():
raise
else:
config = get_config_from_data(f'[{default_section_name}]')
if opts.debug:
warn(f'skipping non-existent config file {opts.config}')
inject_opts(config, opts)
# add a convenience method to get opts, which are stored in the default section:
def opt(self, opt, default=None):
try:
return self.get(default_section_name, opt)
except CP.NoOptionError:
if default is not None:
return default
else:
raise
config.opt = MethodType(opt, config)
return config
def wrap_main(main, parser, args=sys.argv[1:]):
'''
create config from config file and cmd-line args;
set os.environ['DEBUG'] if -d;
Call main(config);
trap exceptions; if they occur, print an error message (with optional stack trace)
and set exit value appropriately.
'''
opts = parser.parse_args(args)
config = _assemble_config(opts)
if opts.debug:
os.environ['DEBUG'] = 'True'
warn(opts)
if opts.silent and opts.verbose:
warn('WARNING: both --silent and --verbose are set. Your output may be weird')
try:
rc = main(config) or 0
sys.exit(rc)
except Exception as e:
if 'DEBUG' in os.environ:
import traceback
traceback.print_exc()
else:
print('error: {} {}'.format(type(e), e))
sys.exit(1)
def parser_config(parser, config):
'''
Use sections/values from the config file to initialize an argparser.
One cmd-line arg per config section; that is, each section contains
all the args needed for a call to parser.add_argument()
Example:
names = ['-x', '--some-option']
section = {'type': int, 'action': 'store_true', etc} (Note: conflict)
'''
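    # a matching INI section might look like:
    #   [some-option]
    #   short_name = x
    #   type = int
    #   default = 3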
for section in config.sections():
section_dict = to_dict(config, section)
names = ['--'+section]
if 'short_name' in section_dict:
names.append('-'+section_dict.pop('short_name'))
if 'type' in section_dict:
actual_type = eval(section_dict['type'])
section_dict['type'] = actual_type
if 'default' in section_dict:
section_dict['default'] = actual_type(section_dict['default'])
if 'action' in section_dict:
action = section_dict['action']
if action not in qw('store store_const store_true store_false append append_const count help version'):
# action must be fully qualified name (module and class) of a class derived from argparse.Action
modname, clsname = action.rsplit('.', 1)
mod = import_module(modname)
cls = getattr(mod, clsname)
section_dict['action'] = cls
parser.add_argument(*names, **section_dict)
class FloatIntStrParserAction(argparse.Action):
'''
Convert a string value to float, int, or str as possible.
To be used as value to 'action' kwarg of argparse.parser.add_argument, eg:
parser.add_argument('--some-value', action=FloatIntStrParserAction, ...)
This is called one time for each value on command line.
NOTE: use of this class as an Action precludes the use of the 'type' kwarg in add_argument!
'''
def __init__(self, **kwargs):
super(FloatIntStrParserAction, self).__init__(**kwargs)
def __call__(self, parser, namespace, values, option_string):
if self.type is not None: # coerce to that type
setattr(namespace, self.dest, self.type(values))
return
for t in [int, float, str]: # order important
try:
setattr(namespace, self.dest, t(values))
break
            except ValueError:
pass
else:
parser.error("Error processing negc_var '{}'".format(values)) # should never get here
if __name__ == '__main__':
def getopts(opts_ini=None):
import argparse
parser = argparse.ArgumentParser()
if opts_ini:
opts_config = get_config(opts_ini)
parser.add_argument('-v', action='store_true', help='verbose mode')
parser.add_argument('-q', action='store_true', help='silent mode')
        parser.add_argument('-d', '--debug', action='store_true', help='debugging flag')
opts = parser.parse_args()
if opts.debug:
os.environ['DEBUG'] = 'True'
print(ppjson(vars(opts)))
return opts
# -----------------------------------------------------------------------
opts_ini = os.path.abspath(os.path.join(os.path.dirname(__file__), 'opts.ini'))
if not os.path.exists(opts_ini):
        warn('{}: no such file'.format(opts_ini))
opts_ini = None
opts = getopts(opts_ini)
| 34.568421 | 115 | 0.622259 | 828 | 6,568 | 4.786232 | 0.300725 | 0.041635 | 0.060056 | 0.028766 | 0.079233 | 0.054504 | 0.038355 | 0.023719 | 0.023719 | 0.023719 | 0 | 0.00185 | 0.25944 | 6,568 | 189 | 116 | 34.751323 | 0.812911 | 0.25609 | 0 | 0.149123 | 0 | 0 | 0.124526 | 0.005057 | 0 | 0 | 0 | 0 | 0 | 1 | 0.078947 | false | 0.008772 | 0.105263 | 0 | 0.254386 | 0.026316 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
c612683f9e5f1b8a554762571a5c3752edd3f4c6 | 513 | py | Python | api/management/commands/user_stats.py | kopf/zzzz | eeaebc24c7c2c290e167dcf1a74c18586a3a75a7 | [
"BSD-3-Clause"
] | 10 | 2019-04-16T18:08:55.000Z | 2022-03-17T21:30:47.000Z | api/management/commands/user_stats.py | kopf/zzzz | eeaebc24c7c2c290e167dcf1a74c18586a3a75a7 | [
"BSD-3-Clause"
] | 3 | 2019-04-16T18:26:41.000Z | 2021-06-10T21:22:13.000Z | api/management/commands/user_stats.py | kopf/zzzz | eeaebc24c7c2c290e167dcf1a74c18586a3a75a7 | [
"BSD-3-Clause"
] | 1 | 2021-05-23T07:10:04.000Z | 2021-05-23T07:10:04.000Z | #!/usr/bin/env python3
import json
from django.core.management.base import BaseCommand
from api.models import User
class Command(BaseCommand):
    help = 'Print a dict of user stats (number of users signed up per day) to stdout'
def handle(self, *args, **options):
stats = {}
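        # histogram: ISO date string -> number of signups that day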
for user in User.objects.all():
date_str = user.date_joined.strftime('%Y-%m-%d')
stats.setdefault(date_str, 0)
stats[date_str] += 1
print(json.dumps(stats, indent=4))
| 27 | 86 | 0.639376 | 73 | 513 | 4.438356 | 0.739726 | 0.064815 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010309 | 0.243665 | 513 | 18 | 87 | 28.5 | 0.824742 | 0.040936 | 0 | 0 | 0 | 0 | 0.164969 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.25 | 0 | 0.5 | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |