hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
b173edffb27dcdcdb6a451358d5f8c1711b1370c | 1,024 | py | Python | tutorials/W1D5_DimensionalityReduction/solutions/W1D5_Tutorial2_Solution_d65c4e04.py | raxosiris/course-content | 6904ae919b91aeb885d73b53cb9b02e6ec73d9cd | [
"CC-BY-4.0"
] | 26 | 2020-07-01T20:38:44.000Z | 2021-06-20T06:37:27.000Z | tutorials/W1D5_DimensionalityReduction/solutions/W1D5_Tutorial2_Solution_d65c4e04.py | raxosiris/course-content | 6904ae919b91aeb885d73b53cb9b02e6ec73d9cd | [
"CC-BY-4.0"
] | 3 | 2020-06-23T03:46:36.000Z | 2020-07-07T05:26:01.000Z | tutorials/W1D5_DimensionalityReduction/solutions/W1D5_Tutorial2_Solution_d65c4e04.py | raxosiris/course-content | 6904ae919b91aeb885d73b53cb9b02e6ec73d9cd | [
"CC-BY-4.0"
] | 16 | 2020-07-06T06:48:02.000Z | 2021-07-30T08:18:52.000Z | def pca(X):
"""
Performs PCA on multivariate data.
Args:
X (numpy array of floats) : Data matrix each column corresponds to a
different random variable
Returns:
(numpy array of floats) : Data projected onto the new basis
    (numpy array of floats) : Corresponding matrix of eigenvectors
    (numpy array of floats) : Vector of eigenvalues
"""
# Subtract the mean of X
X = X - np.mean(X, axis=0)
# Calculate the sample covariance matrix
cov_matrix = get_sample_cov_matrix(X)
# Calculate the eigenvalues and eigenvectors
evals, evectors = np.linalg.eigh(cov_matrix)
# Sort the eigenvalues in descending order
evals, evectors = sort_evals_descending(evals, evectors)
# Project the data onto the new eigenvector basis
score = change_of_basis(X, evectors)
return score, evectors, evals
# Perform PCA on the data matrix X
score, evectors, evals = pca(X)
# Plot the data projected into the new basis
with plt.xkcd():
plot_data_new_basis(score) | 30.117647 | 72 | 0.704102 | 145 | 1,024 | 4.889655 | 0.42069 | 0.056417 | 0.067701 | 0.101551 | 0.062059 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001266 | 0.228516 | 1,024 | 34 | 73 | 30.117647 | 0.896203 | 0.617188 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b175b67d7f4aab8d0a2e801a138e913e8a9b0d2f | 250 | py | Python | AtCoder/ABC/160-169/ABC169_B.py | sireline/PyCode | 8578467710c3c1faa89499f5d732507f5d9a584c | [
"MIT"
] | null | null | null | AtCoder/ABC/160-169/ABC169_B.py | sireline/PyCode | 8578467710c3c1faa89499f5d732507f5d9a584c | [
"MIT"
] | null | null | null | AtCoder/ABC/160-169/ABC169_B.py | sireline/PyCode | 8578467710c3c1faa89499f5d732507f5d9a584c | [
"MIT"
] | null | null | null | N = int(input())
A = sorted([int(n) for n in input().split()], reverse=True)
M = 10**18
ans = A[0]
for i in range(1, len(A)):
    # A is sorted in descending order, so a zero at the end means the product is 0
    if A[-1] == 0:
        ans = 0
        break
    ans *= A[i]
    # The problem asks for -1 as soon as the product exceeds 10**18
    if ans > M:
        ans = -1
        break
print(ans)
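# Sample check (assumed AtCoder ABC169 B samples, shown here for illustration):
#   "2\n1000000000 1000000000" -> 1000000000000000000
#   "3\n101 9901 999999000001" -> -1 (the product just exceeds 10**18)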
| 17.857143 | 59 | 0.46 | 45 | 250 | 2.555556 | 0.488889 | 0.069565 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.06135 | 0.348 | 250 | 13 | 60 | 19.230769 | 0.644172 | 0 | 0 | 0.153846 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b1768d9079cffc26a1a73e9fe9638e50a8008732 | 3,254 | py | Python | src/primaires/scripting/parser/tests.py | stormi/tsunami | bdc853229834b52b2ee8ed54a3161a1a3133d926 | [
"BSD-3-Clause"
] | null | null | null | src/primaires/scripting/parser/tests.py | stormi/tsunami | bdc853229834b52b2ee8ed54a3161a1a3133d926 | [
"BSD-3-Clause"
] | null | null | null | src/primaires/scripting/parser/tests.py | stormi/tsunami | bdc853229834b52b2ee8ed54a3161a1a3133d926 | [
"BSD-3-Clause"
] | null | null | null | # -*-coding:Utf-8 -*
# Copyright (c) 2010 LE GOFF Vincent
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
# * Neither the name of the copyright holder nor the names of its contributors
# may be used to endorse or promote products derived from this software
# without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT
# OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
"""Fichier contenant la classe Tests, détaillée plus bas."""
from .expression import Expression
from . import expressions
class Tests(Expression):
"""Expression tests."""
nom = "tests"
def __init__(self):
"""Constructeur de l'expression."""
Expression.__init__(self)
self.nom = None
self.contraire = False
self.expressions = ()
def __repr__(self):
expressions = [str(e) for e in self.expressions]
chaine = " ".join(expressions)
if self.contraire:
chaine = "!" + chaine
return chaine
@classmethod
def parsable(cls, chaine):
"""Retourne True si la chaîne est parsable, False sinon."""
return True
@classmethod
def parser(cls, chaine):
"""Parse la chaîne.
Retourne l'objet créé et la partie non interprétée de la chaîne.
"""
objet = cls()
        # Parse the expressions
types = ("nombre", "chaine", "fonction", "operateur", "connecteur",
"variable", "calcul")
expressions = []
if chaine.startswith("!"):
objet.contraire = True
chaine = chaine[1:]
while chaine.strip():
arg, chaine = cls.choisir(types, chaine)
expressions.append(arg)
objet.expressions = tuple(expressions)
return objet, chaine
@property
def code_python(self):
"""Retourne le code Python du test."""
py_tests = [t.code_python for t in self.expressions]
code = " ".join(py_tests)
if self.contraire:
code = "not " + code
return code
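# Minimal usage sketch (hypothetical input; `parser` returns the parsed
# object plus whatever part of the string it did not consume):
#   tests, rest = Tests.parser("!variable")
#   print(tests.code_python)  # -> "not <code for variable>" (illustrative)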
| 33.546392 | 79 | 0.673018 | 402 | 3,254 | 5.405473 | 0.487562 | 0.027612 | 0.015647 | 0.021169 | 0.084676 | 0.062586 | 0.062586 | 0.062586 | 0.062586 | 0.062586 | 0 | 0.002449 | 0.247081 | 3,254 | 96 | 80 | 33.895833 | 0.88449 | 0.55378 | 0 | 0.1 | 0 | 0 | 0.047965 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.05 | 0 | 0.325 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b1770f02e1fe6200aa1c481b16cf52744245ffb7 | 5,440 | py | Python | tests/test_chartbuilder.py | imduffy15/pyhelm | 2edea7f39e640ecc13e2f9a0e81de439f5c057ac | [
"Apache-2.0"
] | null | null | null | tests/test_chartbuilder.py | imduffy15/pyhelm | 2edea7f39e640ecc13e2f9a0e81de439f5c057ac | [
"Apache-2.0"
] | null | null | null | tests/test_chartbuilder.py | imduffy15/pyhelm | 2edea7f39e640ecc13e2f9a0e81de439f5c057ac | [
"Apache-2.0"
] | null | null | null | from __future__ import unicode_literals
from unittest import TestCase
try:
from unittest import mock
except ImportError:
import mock
import io
from hapi.chart.template_pb2 import Template
from hapi.chart.metadata_pb2 import Metadata
from hapi.chart.config_pb2 import Config
from google.protobuf.any_pb2 import Any
from pyhelm.chartbuilder import ChartBuilder
class TestChartBuilder(TestCase):
_chart = io.StringIO(
"""
apiVersion: v1
description: testing
name: foobar
version: 1.2.3
appVersion: 3.2.1
"""
)
_values = io.StringIO(
"""
---
foo:
bar: baz
"""
)
_file = io.StringIO("")
_files_walk = (
x
for x in [
("charts", "", []),
("templates", "", []),
("files", "", [".helmignore", "Chart.yaml", "data"]),
]
)
_template = io.StringIO(
"""
---
apiVersion: v1
kind: Deployment
metadata:
name: {{ include "foo.fullname" . }}
namespace: "{{ .Values.namespace }}"
"""
)
_templates_walk = (x for x in [("t", "", ["deployment.yaml"])])
_mock_source_clone = "pyhelm.chartbuilder.ChartBuilder.source_clone"
def setUp(self):
ChartBuilder._logger = mock.Mock()
def test_no_type(self):
cb = ChartBuilder({"name": "", "source": {}})
self.assertIsNone(cb.source_directory)
cb._logger.exception.assert_called()
def test_unknown_type_with_parent(self):
cb = ChartBuilder(
{
"name": "bar",
"parent": "foo",
"source": {"location": "test", "type": "none"},
}
)
self.assertIsNone(cb.source_directory)
cb._logger.info.assert_called()
cb._logger.exception.assert_called()
@mock.patch("pyhelm.chartbuilder.repo.git_clone", return_value="/test")
def test_git(self, _0):
cb = ChartBuilder(
{
"name": "foo",
"source": {"location": "test", "type": "git", "subpath": "foo"},
}
)
self.assertEqual(cb.source_directory, "/test/foo")
cb._logger.info.assert_called()
cb._logger.exception.assert_not_called()
@mock.patch("pyhelm.chartbuilder.repo.from_repo", return_value="/test")
def test_repo(self, _0):
cb = ChartBuilder(
{"name": "foo", "source": {"location": "test", "type": "repo"}}
)
self.assertEqual(cb.source_directory, "/test/")
cb._logger.info.assert_called()
cb._logger.exception.assert_not_called()
def test_directory(self):
cb = ChartBuilder(
{"name": "foo", "source": {"location": "dir", "type": "directory"}}
)
self.assertEqual(cb.source_directory, "dir/")
cb._logger.info.assert_called()
cb._logger.exception.assert_not_called()
@mock.patch("pyhelm.chartbuilder.codecs.open", return_value=_chart)
@mock.patch(_mock_source_clone, return_value="")
def test_get_metadata(self, _0, _1):
m = ChartBuilder({}).get_metadata()
self.assertIsInstance(m, Metadata)
@mock.patch("pyhelm.chartbuilder.codecs.open", return_value=_file)
@mock.patch("pyhelm.chartbuilder.os.walk", return_value=_files_walk)
@mock.patch(_mock_source_clone, return_value="test")
def test_get_files(self, _0, _1, _2):
f = ChartBuilder({}).get_files()
self.assertEqual(len(f), 1)
self.assertIsInstance(f[0], Any)
@mock.patch(_mock_source_clone, return_value="test")
def test_get_values_not_found(self, _0):
ChartBuilder({}).get_values()
ChartBuilder._logger.warn.assert_called()
@mock.patch("pyhelm.chartbuilder.codecs.open", return_value=_values)
@mock.patch("pyhelm.chartbuilder.os.path.exists", return_value=True)
@mock.patch(_mock_source_clone, return_value="test")
def test_get_values(self, _0, _1, _2):
v = ChartBuilder({}).get_values()
self.assertIsInstance(v, Config)
@mock.patch("pyhelm.chartbuilder.codecs.open", return_value=_template)
@mock.patch("pyhelm.chartbuilder.os.walk", return_value=_templates_walk)
@mock.patch(_mock_source_clone, return_value="test")
def test_get_templates(self, _0, _1, _2):
t = ChartBuilder({"name": "foo"}).get_templates()
ChartBuilder._logger.warn.assert_called()
self.assertEqual(len(t), 1)
self.assertIsInstance(t[0], Template)
@mock.patch("pyhelm.chartbuilder.ChartBuilder.get_metadata")
@mock.patch("pyhelm.chartbuilder.ChartBuilder.get_templates")
@mock.patch("pyhelm.chartbuilder.ChartBuilder.get_values")
@mock.patch("pyhelm.chartbuilder.ChartBuilder.get_files")
@mock.patch("pyhelm.chartbuilder.Chart")
def test_get_helm_chart_exists(self, _0, _1, _2, _3, _4):
cb = ChartBuilder(
{
"name": "foo",
"source": {},
"dependencies": [{"name": "bar", "source": {}}],
}
)
cb._helm_chart = "123"
self.assertEqual(cb.get_helm_chart(), "123")
cb._helm_chart = None
cb.get_helm_chart()
cb._logger.info.assert_called()
@mock.patch("pyhelm.chartbuilder.repo")
def test_source_cleanup(self, mock_repo):
ChartBuilder(
{"name": "foo", "source": {"type": "directory", "location": "test"}}
).source_cleanup()
mock_repo.source_cleanup.assert_called()
| 31.445087 | 80 | 0.61875 | 610 | 5,440 | 5.254098 | 0.177049 | 0.056162 | 0.070203 | 0.126365 | 0.508892 | 0.418409 | 0.321061 | 0.26209 | 0.20468 | 0.166615 | 0 | 0.009802 | 0.231066 | 5,440 | 172 | 81 | 31.627907 | 0.756395 | 0 | 0 | 0.19685 | 0 | 0 | 0.175043 | 0.105912 | 0 | 0 | 0 | 0 | 0.19685 | 1 | 0.102362 | false | 0 | 0.086614 | 0 | 0.251969 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b178677fae7c2482783419a1c62f44895e9f54b9 | 1,879 | py | Python | aiida_crystal_dft/io/basis.py | tilde-lab/aiida-crystal-dft | 971fd13a3f414d6e80cc654dc92a8758f6e0365c | [
"MIT"
] | 2 | 2019-02-05T16:49:08.000Z | 2020-01-29T12:27:14.000Z | aiida_crystal_dft/io/basis.py | tilde-lab/aiida-crystal-dft | 971fd13a3f414d6e80cc654dc92a8758f6e0365c | [
"MIT"
] | 36 | 2020-03-09T19:35:10.000Z | 2021-12-07T22:13:31.000Z | aiida_crystal_dft/io/basis.py | tilde-lab/aiida-crystal-dft | 971fd13a3f414d6e80cc654dc92a8758f6e0365c | [
"MIT"
] | 1 | 2019-11-13T23:12:10.000Z | 2019-11-13T23:12:10.000Z | # Copyright (c) Andrey Sobolev, 2019. Distributed under MIT license, see LICENSE file.
"""
The module that deals with reading and parsing *.basis files
"""
from ase.data import atomic_numbers
from .parsers import gto_basis_parser
class BasisFile:
parser = gto_basis_parser()
def __init__(self):
self.basis_dict = {}
def parse(self, content):
"""Reads basis set from string"""
self.basis_dict = BasisFile.parser.parseString(content).asDict()
return self.basis_dict
def read(self, file_name):
"""Reads basis set from file"""
with open(file_name, 'r') as f:
self.basis_dict = BasisFile.parser.parseString(f.read()).asDict()
return self.basis_dict
class BasisAdapter:
def __init__(self, basis):
"""The class adapts either BasisFamily or a list of Basis instances"""
from aiida_crystal_dft.data.basis_family import CrystalBasisFamilyData
self.basis = basis
if isinstance(basis, CrystalBasisFamilyData):
self.basis_type = "basis_family"
elif isinstance(basis, list) and all([hasattr(b, "content") for b in basis]):
self.basis_type = "list"
else:
raise ValueError("Basis must be represented with a BasisFamily or a list of Basis instances")
@property
def predefined(self):
if self.basis_type == "basis_family":
return self.basis.predefined
return False
def get_basis(self, element):
if self.basis_type == "basis_family":
return self.basis.get_basis(element)
number = atomic_numbers[element]
for b in self.basis:
ecp_add = 0 if b.all_electron else 200
if b.content.split()[0] == str(number + ecp_add):
return b
raise KeyError("No basis for element {} in list".format(element))
| 32.964912 | 105 | 0.645024 | 240 | 1,879 | 4.9 | 0.395833 | 0.107143 | 0.055272 | 0.045918 | 0.256803 | 0.193878 | 0.127551 | 0.069728 | 0.069728 | 0 | 0 | 0.006484 | 0.261309 | 1,879 | 56 | 106 | 33.553571 | 0.840778 | 0.142097 | 0 | 0.108108 | 0 | 0 | 0.095658 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.162162 | false | 0 | 0.081081 | 0 | 0.486486 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b17b7e1f38b6df6cbd566778afec766be25871e5 | 1,943 | py | Python | source/betterCalculator.py | TwMoonBear-Arsenal/BetterCalculator | 7b9e9b9fecdcb422853ff83a4084971ce91c35df | [
"MIT"
] | 2 | 2021-03-15T11:31:36.000Z | 2021-03-15T11:36:34.000Z | source/betterCalculator.py | TwMoonBear-Arsenal/BetterCalculator | 7b9e9b9fecdcb422853ff83a4084971ce91c35df | [
"MIT"
] | null | null | null | source/betterCalculator.py | TwMoonBear-Arsenal/BetterCalculator | 7b9e9b9fecdcb422853ff83a4084971ce91c35df | [
"MIT"
] | 5 | 2021-03-15T12:00:51.000Z | 2021-03-15T12:23:42.000Z | import argparse # from std
# I'm so handsome
# I'm super handsome
# I'm so handsome
class BetterCalculator:
@staticmethod
def Conj(x, y):
return str(x)+str(y) # 宇森
@staticmethod
def Pow(x, y):
a = 1
for i in range(y):
a = a * x
        return a  # uploaded by 庭維~~~ let's go go go
@staticmethod
def Mod(x, y):
t = int(x/y)
x = x-(y*t)
        return x  # 學姊組 ("the seniors' group"); int(x/y) truncates toward zero, so this can differ from x % y for negatives
@staticmethod
def Add(x, y):
return x+y
@staticmethod
def Minus(x, y):
return x-y
@staticmethod
def Multiple(x, y):
return x*y
@staticmethod
def Divide(x, y):
return x/y
def main():
    # Prepare the argument parser
example_text = "example usage: py.exe .\BetterCalculator.py 5 c 3"
parser = argparse.ArgumentParser(
description="好一點數學運算功能", epilog=example_text)
parser.add_argument("x", help="第1個數字",
type=int)
parser.add_argument("operator", help="運算符號:加法(+), 減法(+), 乘法(+), 除法(+), 連接(c),次方(p), 餘數(m)}",
type=str)
parser.add_argument("y", help="第2個數字",
type=int)
args = parser.parse_args()
if(args.operator == "+"):
ans = BetterCalculator.Add(args.x, args.y)
elif(args.operator == "*"):
ans = BetterCalculator.Multiple(args.x, args.y)
elif(args.operator == "-"):
ans = BetterCalculator.Minus(args.x, args.y)
elif(args.operator == "/"):
ans = BetterCalculator.Divide(args.x, args.y)
elif(args.operator == "c"):
ans = BetterCalculator.Conj(args.x, args.y)
elif(args.operator == "p"):
ans = BetterCalculator.Pow(args.x, args.y)
elif(args.operator == "m"):
ans = BetterCalculator.Mod(args.x, args.y)
else:
ans = "輸入錯誤"
# end
if(ans == "輸入錯誤"):
print(ans)
else:
print("結果:", args.x, "", args.operator, "", args.y, "=", ans)
print()
if __name__ == "__main__":
main()
| 23.130952 | 96 | 0.52702 | 244 | 1,943 | 4.139344 | 0.307377 | 0.025743 | 0.071287 | 0.069307 | 0.29505 | 0.285149 | 0.285149 | 0.133663 | 0.133663 | 0 | 0 | 0.003751 | 0.313948 | 1,943 | 83 | 97 | 23.409639 | 0.753938 | 0.026248 | 0 | 0.177419 | 0 | 0.016129 | 0.083422 | 0.011158 | 0 | 0 | 0 | 0 | 0 | 1 | 0.129032 | false | 0 | 0.016129 | 0.080645 | 0.274194 | 0.048387 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b17d1087dd36e4d564161d7b60a734947a07b93b | 4,178 | py | Python | run-community-detection.py | gyauney/shakespeare-and-company-social-readership | 9ad68af449474c79ab421834c4a53866f5bb121e | [
"MIT"
] | 2 | 2021-09-07T18:00:49.000Z | 2021-09-09T16:34:31.000Z | run-community-detection.py | gyauney/shakespeare-and-company-social-readership | 9ad68af449474c79ab421834c4a53866f5bb121e | [
"MIT"
] | null | null | null | run-community-detection.py | gyauney/shakespeare-and-company-social-readership | 9ad68af449474c79ab421834c4a53866f5bb121e | [
"MIT"
] | null | null | null | from graph import *
import numpy as np
import networkx as nx
import json
import csv
from collections import defaultdict, OrderedDict
import itertools
import operator
import math
from community_detection import get_communities
import argparse
# parse the command-line arguments
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument('--num_groups', required=True, type=int)
parser.add_argument('--verbose', action='store_true', default=False)
return parser.parse_args()
def main():
args = parse_args()
# Test the community detection algorithm
# with the simple "Zachary's Karate Club" dataset.
# The vertices get split into two clear groups,
# which you can see in the resulting text file 'karate_community-percents.txt'.
G = nx.karate_club_graph()
A = nx.to_numpy_array(G)
vertices_in_order, edge_to_weight, vertex_to_neighbors, n = convert_adjacency_matrix_to_list(A)
C = get_communities(edge_to_weight, vertex_to_neighbors, n, 2, 5, False)
export_to_gephi(edge_to_weight, vertices_in_order, 'karate', C)
save_vertices_by_group_percents(vertex_to_neighbors, C, 2, vertices_in_order, 'karate')
# load the full shakespeare and company dataset
books, members, events = load_shakespeare_and_company_data('data')
book_uri_to_text = map_book_uris_to_text(books)
# load the books that are present in both Shakespeare and Company and the UCSD Goodreads book graph
# these were gotten in a separate preprocessing step
with open('data/book-uris-in-both-goodreads-and-sc.json', 'r') as f:
overlap_book_uris = json.load(f)
# load a dict that maps from Goodreads book id to summary string
# this was also constructed in a separate preprocessing step
with open('data/goodreads-book-id-to-text.json', 'r') as f:
goodreads_book_id_to_text = json.load(f)
# get the data in the format we need to construct graphs
# the lists of books have been pruned to only include:
# 1) books that are common to both datasets
# 2) books that will have at least one edge in the graphs
# n.b SC and Goodreads contain a different number of connected books,
# so the graphs have different numbers of vertices
# these are dicts from person to books they interacted with
sc_borrower_to_books = internal_get_sc_borrower_to_books(books, events, overlap_book_uris)
with open('data/goodreads-user-to-books.json', 'r') as f:
goodreads_user_to_books = json.load(f)
# Shakespeare and Company: create a graph and run the community detection algorithm
dataset = 'shakespeare-and-company_{}-groups'.format(args.num_groups)
books_in_vertex_order, book_to_vertex_index, edge_to_weight, vertex_to_neighbors, n = create_books_graph(sc_borrower_to_books)
print('Shakespeare and Company, # of vertices: {:,}'.format(n))
print('Shakespeare and Company, # of unique edges: {:,}'.format(int(len(edge_to_weight)/2)))
C = get_communities(edge_to_weight, vertex_to_neighbors, n, args.num_groups, 1, args.verbose)
# save the results in html and gephi format
save_html_with_community_summaries(n, edge_to_weight, vertex_to_neighbors, C, args.num_groups, books_in_vertex_order, dataset, book_uri_to_text)
export_to_gephi(edge_to_weight, books_in_vertex_order, dataset, C)
# Goodreads: create a graph and run the community detection algorithm
dataset = 'goodreads_{}-groups'.format(args.num_groups)
books_in_vertex_order, book_to_vertex_index, edge_to_weight, vertex_to_neighbors, n = create_books_graph(goodreads_user_to_books)
print('Goodreads, # of vertices: {:,}'.format(n))
print('Goodreads, # of unique edges: {:,}'.format(int(len(edge_to_weight)/2)))
C = get_communities(edge_to_weight, vertex_to_neighbors, n, args.num_groups, 1, args.verbose)
# save the results in html and gephi format
save_html_with_community_summaries(n, edge_to_weight, vertex_to_neighbors, C, args.num_groups, books_in_vertex_order, dataset, goodreads_book_id_to_text)
export_to_gephi(edge_to_weight, books_in_vertex_order, dataset, C)
if __name__ == '__main__':
main() | 51.580247 | 157 | 0.750359 | 636 | 4,178 | 4.652516 | 0.275157 | 0.02636 | 0.052721 | 0.048665 | 0.442717 | 0.387969 | 0.362623 | 0.352484 | 0.325448 | 0.325448 | 0 | 0.002585 | 0.166587 | 4,178 | 81 | 158 | 51.580247 | 0.847214 | 0.28674 | 0 | 0.081633 | 0 | 0 | 0.127746 | 0.049003 | 0 | 0 | 0 | 0 | 0 | 1 | 0.040816 | false | 0 | 0.22449 | 0 | 0.285714 | 0.081633 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b1816468b8f98f5cdb37e870aac1d5b571b0d4da | 1,606 | py | Python | subparsers/list_single_image.py | dman777/image_share | a99c4f0a386201e10eaec0ab0efa50271b7ae122 | [
"MIT"
] | 1 | 2016-10-07T06:50:37.000Z | 2016-10-07T06:50:37.000Z | subparsers/list_single_image.py | dman777/image_share | a99c4f0a386201e10eaec0ab0efa50271b7ae122 | [
"MIT"
] | null | null | null | subparsers/list_single_image.py | dman777/image_share | a99c4f0a386201e10eaec0ab0efa50271b7ae122 | [
"MIT"
] | null | null | null | # List single image detail
def list_single_image_subparser(subparser):
list_image = subparser.add_parser('list-single-image',
description=('***List details'
' of a single image'),
help=('List details of a single image'
' of producers/consumers account'))
group_key = list_image.add_mutually_exclusive_group(required=True)
group_key.add_argument('-pn',
'--producer-username',
help="Producer\'s(source account) username")
group_key.add_argument('-cn',
'--consumer-username',
dest='producer_username',
metavar='CONSUMER_USERNAME',
help="Consumer\'s(destination account) username")
group_apikey = list_image.add_mutually_exclusive_group(required=True)
group_apikey.add_argument('-pa',
'--producer-apikey',
help="Producer\'s(source account) apikey")
group_apikey.add_argument('-ca',
'--consumer-apikey',
dest='producer_apikey',
metavar='CONSUMER_APIKEY',
help="Consumer\'s(destination account) apikey")
list_image.add_argument('-u',
'--uuid',
required=True,
help="Image ID number")
| 53.533333 | 79 | 0.472603 | 132 | 1,606 | 5.530303 | 0.30303 | 0.075342 | 0.061644 | 0.038356 | 0.364384 | 0.208219 | 0.139726 | 0.139726 | 0.139726 | 0 | 0 | 0 | 0.430884 | 1,606 | 29 | 80 | 55.37931 | 0.798687 | 0.014944 | 0 | 0 | 0 | 0 | 0.273418 | 0.029114 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035714 | false | 0 | 0 | 0 | 0.035714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b182b1069922065b5677f3bbd7ee9821e6bb1faa | 5,743 | py | Python | python/seldon_deploy_sdk/models/v1_firing_alert.py | SachinVarghese/seldon-deploy-sdk | 2c70e249c084f113a998ab876c29843ae5f6a99a | [
"Apache-2.0"
] | 6 | 2021-02-18T14:37:54.000Z | 2022-01-13T13:27:43.000Z | python/seldon_deploy_sdk/models/v1_firing_alert.py | SachinVarghese/seldon-deploy-sdk | 2c70e249c084f113a998ab876c29843ae5f6a99a | [
"Apache-2.0"
] | 14 | 2021-01-04T16:32:03.000Z | 2021-12-13T17:53:59.000Z | python/seldon_deploy_sdk/models/v1_firing_alert.py | SachinVarghese/seldon-deploy-sdk | 2c70e249c084f113a998ab876c29843ae5f6a99a | [
"Apache-2.0"
] | 7 | 2021-03-17T09:05:55.000Z | 2022-01-05T10:39:56.000Z | # coding: utf-8
"""
Seldon Deploy API
API to interact and manage the lifecycle of your machine learning models deployed through Seldon Deploy. # noqa: E501
OpenAPI spec version: v1alpha1
Contact: hello@seldon.io
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
import pprint
import re # noqa: F401
import six
class V1FiringAlert(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
"""
"""
Attributes:
swagger_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
swagger_types = {
'name': 'str',
'severity': 'str',
'description': 'str',
'title': 'str',
'active_at': 'datetime'
}
attribute_map = {
'name': 'name',
'severity': 'severity',
'description': 'description',
'title': 'title',
'active_at': 'activeAt'
}
def __init__(self, name=None, severity=None, description=None, title=None, active_at=None): # noqa: E501
"""V1FiringAlert - a model defined in Swagger""" # noqa: E501
self._name = None
self._severity = None
self._description = None
self._title = None
self._active_at = None
self.discriminator = None
if name is not None:
self.name = name
if severity is not None:
self.severity = severity
if description is not None:
self.description = description
if title is not None:
self.title = title
if active_at is not None:
self.active_at = active_at
@property
def name(self):
"""Gets the name of this V1FiringAlert. # noqa: E501
:return: The name of this V1FiringAlert. # noqa: E501
:rtype: str
"""
return self._name
@name.setter
def name(self, name):
"""Sets the name of this V1FiringAlert.
:param name: The name of this V1FiringAlert. # noqa: E501
:type: str
"""
self._name = name
@property
def severity(self):
"""Gets the severity of this V1FiringAlert. # noqa: E501
:return: The severity of this V1FiringAlert. # noqa: E501
:rtype: str
"""
return self._severity
@severity.setter
def severity(self, severity):
"""Sets the severity of this V1FiringAlert.
:param severity: The severity of this V1FiringAlert. # noqa: E501
:type: str
"""
self._severity = severity
@property
def description(self):
"""Gets the description of this V1FiringAlert. # noqa: E501
:return: The description of this V1FiringAlert. # noqa: E501
:rtype: str
"""
return self._description
@description.setter
def description(self, description):
"""Sets the description of this V1FiringAlert.
:param description: The description of this V1FiringAlert. # noqa: E501
:type: str
"""
self._description = description
@property
def title(self):
"""Gets the title of this V1FiringAlert. # noqa: E501
:return: The title of this V1FiringAlert. # noqa: E501
:rtype: str
"""
return self._title
@title.setter
def title(self, title):
"""Sets the title of this V1FiringAlert.
:param title: The title of this V1FiringAlert. # noqa: E501
:type: str
"""
self._title = title
@property
def active_at(self):
"""Gets the active_at of this V1FiringAlert. # noqa: E501
:return: The active_at of this V1FiringAlert. # noqa: E501
:rtype: datetime
"""
return self._active_at
@active_at.setter
def active_at(self, active_at):
"""Sets the active_at of this V1FiringAlert.
:param active_at: The active_at of this V1FiringAlert. # noqa: E501
:type: datetime
"""
self._active_at = active_at
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.swagger_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
if issubclass(V1FiringAlert, dict):
for key, value in self.items():
result[key] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, V1FiringAlert):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
"""Returns true if both objects are not equal"""
return not self == other
| 26.104545 | 122 | 0.56521 | 648 | 5,743 | 4.895062 | 0.192901 | 0.037831 | 0.119798 | 0.108764 | 0.336381 | 0.280895 | 0.2686 | 0.187264 | 0.080706 | 0.02396 | 0 | 0.022919 | 0.339021 | 5,743 | 219 | 123 | 26.223744 | 0.812698 | 0.329096 | 0 | 0.071429 | 0 | 0 | 0.046997 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.163265 | false | 0 | 0.030612 | 0 | 0.336735 | 0.020408 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b183ef3b26d927146b78ac2292d4b54a87576ce7 | 1,193 | py | Python | tests/base/test_correspondance.py | Prithwijit-Chak/simpeg | d93145d768b5512621cdd75566b4a8175fee9ed3 | [
"MIT"
] | 358 | 2015-03-11T05:48:41.000Z | 2022-03-26T02:04:12.000Z | tests/base/test_correspondance.py | Prithwijit-Chak/simpeg | d93145d768b5512621cdd75566b4a8175fee9ed3 | [
"MIT"
] | 885 | 2015-01-19T09:23:48.000Z | 2022-03-29T12:08:34.000Z | tests/base/test_correspondance.py | Prithwijit-Chak/simpeg | d93145d768b5512621cdd75566b4a8175fee9ed3 | [
"MIT"
] | 214 | 2015-03-11T05:48:43.000Z | 2022-03-02T01:05:11.000Z | import unittest
import numpy as np
from discretize import TensorMesh
from SimPEG import (
maps,
regularization,
)
np.random.seed(10)
class LinearCorrespondenceTest(unittest.TestCase):
def setUp(self):
dh = 1.0
nx = 12
ny = 12
hx = [(dh, nx)]
hy = [(dh, ny)]
mesh = TensorMesh([hx, hy], "CN")
# reg
actv = np.ones(len(mesh), dtype=bool)
# maps
wires = maps.Wires(("m1", mesh.nC), ("m2", mesh.nC))
corr = regularization.LinearCorrespondence(
mesh, wire_map=wires, indActive=actv,
)
self.mesh = mesh
self.corr = corr
def test_order_full_hessian(self):
"""
Test deriv and deriv2 matrix of linear correspondance with approx_hessian=True
"""
corr = self.corr
self.assertTrue(corr._test_deriv())
self.assertTrue(corr._test_deriv2(expectedOrder=2))
def test_deriv2_no_arg(self):
m = np.random.randn(2 * len(self.mesh))
corr = self.corr
v = np.random.rand(len(m))
W = corr.deriv2(m)
Wv = corr.deriv2(m, v)
np.testing.assert_allclose(Wv, W @ v)
| 20.568966 | 86 | 0.569992 | 149 | 1,193 | 4.47651 | 0.489933 | 0.035982 | 0.035982 | 0.065967 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020706 | 0.311819 | 1,193 | 57 | 87 | 20.929825 | 0.791717 | 0.073764 | 0 | 0.058824 | 0 | 0 | 0.005566 | 0 | 0 | 0 | 0 | 0 | 0.088235 | 1 | 0.088235 | false | 0 | 0.117647 | 0 | 0.235294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b1867f62bd4962d42ecfed0cf0a8b67766ff738d | 819 | py | Python | tests/functional/test_install.py | Archstacker/thefuck | ebe53f0d181c28ec2f7a86f46d7d51a7d48bbd9e | [
"MIT"
] | null | null | null | tests/functional/test_install.py | Archstacker/thefuck | ebe53f0d181c28ec2f7a86f46d7d51a7d48bbd9e | [
"MIT"
] | null | null | null | tests/functional/test_install.py | Archstacker/thefuck | ebe53f0d181c28ec2f7a86f46d7d51a7d48bbd9e | [
"MIT"
] | 1 | 2021-06-21T09:01:08.000Z | 2021-06-21T09:01:08.000Z | import pytest
from pexpect import TIMEOUT
from tests.functional.utils import spawn, functional, bare
envs = ((u'bash', 'ubuntu-bash', u'''
FROM ubuntu:latest
RUN apt-get update
RUN apt-get install -yy bash
'''), (u'bash', 'generic-bash', u'''
FROM fedora:latest
RUN dnf install -yy python-devel sudo which gcc
'''))
@functional
@pytest.mark.skipif(
bool(bare), reason="Can't be tested in bare run")
@pytest.mark.parametrize('shell, tag, dockerfile', envs)
def test_installation(request, shell, tag, dockerfile):
proc = spawn(request, tag, dockerfile, shell, install=False)
proc.sendline(u'cat /src/install.sh | sh - && $0')
proc.sendline(u'thefuck --version')
assert proc.expect([TIMEOUT, u'The Fuck'], timeout=600)
proc.sendline(u'fuck')
assert proc.expect([TIMEOUT, u'No fucks given'])
| 31.5 | 64 | 0.703297 | 121 | 819 | 4.752066 | 0.520661 | 0.026087 | 0.067826 | 0.08 | 0.083478 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005706 | 0.144078 | 819 | 25 | 65 | 32.76 | 0.814551 | 0 | 0 | 0 | 0 | 0 | 0.355311 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 1 | 0.045455 | false | 0 | 0.136364 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b1868b45008215b850065d34845daf6a3b5cf396 | 3,371 | py | Python | holocron/utils.py | chenjun2hao/Holocron | 039cdb5238df523ca8a09fea31a2ac9d5f04a0ba | [
"MIT"
] | 1 | 2019-11-28T10:01:58.000Z | 2019-11-28T10:01:58.000Z | holocron/utils.py | chenjun2hao/Holocron | 039cdb5238df523ca8a09fea31a2ac9d5f04a0ba | [
"MIT"
] | null | null | null | holocron/utils.py | chenjun2hao/Holocron | 039cdb5238df523ca8a09fea31a2ac9d5f04a0ba | [
"MIT"
] | null | null | null | #!/usr/bin/python
# -*- coding: utf-8 -*-
"""
Utils
"""
import numpy as np
import torch
from PIL import Image
from matplotlib import cm
class ActivationMapper(object):
"""Implements a class activation map extractor as described in https://arxiv.org/abs/1512.04150
Args:
model (torch.nn.Module): input model
conv_layer (str): name of the last convolutional layer
fc_layer (str): name of the fully connected layer
"""
conv_fmap = None
def __init__(self, model, conv_layer, fc_layer):
if not hasattr(model, conv_layer) or not hasattr(model, fc_layer):
raise ValueError(f"Unable to find submodules {conv_layer} and {fc_layer} in the model")
self.conv_layer = conv_layer
self.fc_layer = fc_layer
self.model = model
# Forward hook
self.model._modules.get(self.conv_layer).register_forward_hook(self.__hook)
# Softmax weight
self.smax_weights = self.model._modules.get(fc_layer).weight.data
def __hook(self, module, input, output):
self.conv_fmap = output.data
def get_activation_maps(self, class_idxs, normalized=True):
"""Recreate class activation maps
Args:
class_idxs (list<int>): class indices for expected activation maps
normalized (bool): should the activation map be normalized
Returns:
batch_cams (torch.Tensor<float>): activation maps of the last forwarded batch
"""
if any(idx >= self.smax_weights.size(0) for idx in class_idxs):
raise ValueError("Expected class_idx to be lower than number of output classes")
if self.conv_fmap is None:
raise TypeError("Inputs need to be forwarded in the model for the conv features to be hooked")
# Flatten spatial dimensions of feature map
batch_cams = self.smax_weights[class_idxs, :] @ torch.flatten(self.conv_fmap, 2)
# Normalize feature map
if normalized:
batch_cams -= batch_cams.min(dim=2, keepdim=True)[0]
batch_cams /= batch_cams.max(dim=2, keepdim=True)[0]
return batch_cams.view(self.conv_fmap.size(0), len(class_idxs), self.conv_fmap.size(3), self.conv_fmap.size(2)).cpu()
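# Minimal CAM usage sketch (layer names are illustrative and depend on the
# architecture; a forward pass must happen before maps can be extracted):
#   mapper = ActivationMapper(model, 'layer4', 'fc')
#   _ = model(img_batch)                      # fills the hooked feature map
#   cams = mapper.get_activation_maps([282])  # CAM for class index 282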
def overlay_mask(img, mask, colormap='jet', alpha=0.7):
"""Overlay a colormapped mask on a background image
Args:
img (PIL.Image.Image): background image
mask (PIL.Image.Image): mask to be overlayed in grayscale
colormap (str): colormap to be applied on the mask
alpha (float): transparency of the background image
Returns:
overlayed_img (PIL.Image.Image): overlayed image
"""
if not isinstance(img, Image.Image) or not isinstance(mask, Image.Image):
raise TypeError('img and mask arguments need to be PIL.Image')
if not isinstance(alpha, float) or alpha < 0 or alpha >= 1:
raise ValueError('alpha argument is expected to be of type float between 0 and 1')
cmap = cm.get_cmap(colormap)
# Resize mask and apply colormap
overlay = mask.resize(img.size, resample=Image.BICUBIC)
overlay = (255 * cmap(np.asarray(overlay) ** 2)[:, :, 1:]).astype(np.uint8)
# Overlay the image with the mask
overlayed_img = Image.fromarray((alpha * np.asarray(img) + (1 - alpha) * overlay).astype(np.uint8))
return overlayed_img
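# Hypothetical usage: blend a grayscale CAM (as a PIL image) onto the input:
#   result = overlay_mask(pil_img, cam_img, colormap='jet', alpha=0.7)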
| 35.861702 | 125 | 0.667161 | 474 | 3,371 | 4.626582 | 0.329114 | 0.029184 | 0.032832 | 0.021888 | 0.030096 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012796 | 0.234945 | 3,371 | 93 | 126 | 36.247312 | 0.837534 | 0.312074 | 0 | 0 | 0 | 0 | 0.141354 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.111111 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b189304c2d819514d9049a9353f5e30cfb6e913f | 12,621 | py | Python | chart-installation/generate_map_files/scripts/symbol.py | juliensam/SMAC-M | 795b088f27adfa4db693e04cb6b6b98b19a19d34 | [
"MIT"
] | 38 | 2017-09-15T12:46:50.000Z | 2022-03-21T13:07:49.000Z | chart-installation/generate_map_files/scripts/symbol.py | Greenroom-Robotics/SMAC-M | 3416c469ca71aff05411811e3afeb37a8f0c3838 | [
"MIT"
] | 15 | 2017-12-07T16:46:12.000Z | 2021-08-05T15:55:33.000Z | chart-installation/generate_map_files/scripts/symbol.py | Greenroom-Robotics/SMAC-M | 3416c469ca71aff05411811e3afeb37a8f0c3838 | [
"MIT"
] | 26 | 2017-11-02T17:57:58.000Z | 2022-01-31T16:22:11.000Z | # This file is kept only for backwards compatibility. Edit the one in ../mapgen
import abc
from enum import Enum
from operator import attrgetter
import os
class VectorSymbol:
def __new__(cls, element):
if element.find('HPGL') is not None:
return super().__new__(cls)
return None
def __init__(self, element):
self.element = element
self.name = element.find('name').text
self.hpgl = element.find('HPGL').text
vector = element.find('vector')
origin = vector.find('origin')
self.offset_x = int(origin.attrib['x'])
self.offset_y = int(origin.attrib['y'])
self.width = int(vector.attrib['width'])
self.height = int(vector.attrib['height'])
self.subsymbols = []
self._parse_colors()
self._parse_vector_symbol()
def as_style(self, color_table):
return '\n\n'.join(s.as_style(color_table) for s in self.subsymbols)
@property
def as_symbol(self):
return '\n\n'.join(s.as_symbol for s in self.subsymbols)
def _parse_vector_symbol(self):
pen = '-'
width = 1
points = []
polygon_buffer = []
position = (None, None)
opacity = 100
subsymbols = []
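        # HPGL opcodes handled below (as interpreted by this parser):
        # SP=select pen, SW=stroke width, ST=transparency, PM=polygon mode,
        # PU=pen up (move), PD=pen down (draw), FP=fill polygon,
        # EP=edge (outline) polygon, CI=circle.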
for instruction in self.hpgl.split(';'):
if not instruction:
continue
command, args = instruction[:2], instruction[2:]
if command == 'SP':
if pen != args and len(points) > 2:
subsymbols.append(SubSymbol(self, pen, width,
opacity, points))
pen = args
elif command == 'SW':
width = int(args)
elif command == 'ST':
opacity = 100 - (int(args) * 25)
elif command == 'PM':
if len(points) > 2:
subsymbols.append(SubSymbol(self, pen, width,
opacity, points))
if args == '2':
# Exit polygon mode
polygon_buffer = list(points)
elif args == '0':
# Enter polygon mode
points = list(position)
else:
raise Exception('Subpolygons are not implemented')
elif command == 'PU':
if len(points) > 2:
subsymbols.append(SubSymbol(self, pen, width,
opacity, points))
coordinates = map(int, args.split(','))
for x, y in zip(coordinates, coordinates):
position = (x - self.offset_x, y - self.offset_y)
points = list(position)
elif command == 'PD':
coordinates = map(int, args.split(','))
for x, y in zip(coordinates, coordinates):
position = (x - self.offset_x, y - self.offset_y)
points.extend(position)
elif command == 'FP':
subsymbols.append(SubSymbol(self, pen, width, opacity,
polygon_buffer, filled=True))
elif command == 'EP':
subsymbols.append(SubSymbol(self, pen, width, opacity,
polygon_buffer))
elif command == 'CI':
subsymbols.append(CircleSubSymbol(self, pen, width, opacity,
int(args), position[0]))
else:
import warnings
warnings.warn('Not implemented: ' + command)
if len(points) > 2:
subsymbols.append(SubSymbol(self, pen, width,
opacity, points))
self.subsymbols = self._merge_symbols(subsymbols)
def _parse_colors(self):
color_ref = self.element.find('color-ref').text
self.colors = {}
while color_ref:
color, color_ref = color_ref[:6], color_ref[6:]
self.colors[color[0]] = color[1:]
def _merge_symbols(self, subsymbols):
subsymbols.sort(key=attrgetter('center'))
merged_symbols = []
while subsymbols:
symbol = subsymbols.pop()
for i, other in reversed(list(enumerate(subsymbols))):
if other & symbol:
subsymbols.pop(i)
symbol.merge(other)
merged_symbols.append(symbol)
for i, symbol in enumerate(merged_symbols):
symbol.set_name('{}_{}'.format(self.name, i))
return merged_symbols
class SubSymbol:
vector_template = """
SYMBOL
NAME "{symname}"
TYPE VECTOR
FILLED {filled}
POINTS
{points}
END
END"""
style_template = """
STYLE
SYMBOL "{symbol}"
COLOR {color}
INITIALGAP {initialgap}
GAP -{gap}
SIZE {size}
WIDTH {stroke_width}
OPACITY {opacity}
ANGLE AUTO
END
"""
def __init__(self, parent, pen, width, opacity, points, filled=False):
self.parent = parent
self.pen = pen
self.stroke_width = width
self.opacity = opacity
self.points = points
self.filled = filled
self.name = parent.name
self._calc_extremes()
def _calc_extremes(self):
self.left = min(p for p in self.points[::2] if p != -99)
self.right = max(self.points[::2])
self.top = min(p for p in self.points[1::2] if p != -99)
self.bottom = max(self.points[1::2])
def set_name(self, name):
self.name = name
@property
def center(self):
return (self.left + self.right) / 2
@property
def height(self):
return (self.bottom - self.top) or self.parent.height
@property
def normalised_points(self):
for i in range(0, len(self.points), 2):
if self.points[i] == -99:
yield -99
yield -99
else:
yield self.points[i] - self.left
yield self.points[i + 1] - self.top
def __and__(self, other):
if not isinstance(other, SubSymbol):
return NotImplemented
# Symbols only overlap if each contain the other's center
if not (other.left <= self.center <= other.right) and (
self.left <= other.center <= other.right):
return False
return (
self.pen == other.pen
and self.stroke_width == other.stroke_width
and self.opacity == other.opacity
and self.filled == other.filled
)
def merge(self, other):
self.points += [-99, -99] + other.points
self._calc_extremes()
@property
def as_symbol(self):
return self.vector_template.format(
symname=self.name,
filled=self.filled,
points=' '.join(map(str, self.normalised_points)),
)
def as_style(self, color_table):
color_key = self.parent.colors.get(self.pen, 'NODTA')
return self.style_template.format(
symbol=self.name,
color=color_table[color_key].rgb,
opacity=self.opacity,
size=self.height * 0.03,
stroke_width=0.3 * self.stroke_width,
# We use a slightly larger ratio for the gap to prevent
# overcrowding
initialgap=self.center * 0.04,
gap=self.parent.width * 0.04,
)
class CircleSubSymbol(SubSymbol):
vector_template = """
SYMBOL
NAME "{symname}"
TYPE ELLIPSE
FILLED {filled}
POINTS
{points}
END
END"""
center = None
def __and__(self, other):
return False
def __rand__(self, other):
return False
def __init__(self, parent, pen, width, opacity, radius, center):
self.center = center
super().__init__(parent, pen, width, opacity, (radius, radius),
filled=False)
@property
def normalised_points(self):
return self.points
@property
def width(self):
return self.points[0] * 2
class Pattern(metaclass=abc.ABCMeta):
class FillType(Enum):
Linear = 'L'
Staggered = 'S'
color = 'NODTA'
gap = 0
height = 0
width = 0
stroke_width = 0
@classmethod
def from_element(cls, element):
if element.find('bitmap'):
return BitmapPattern(element)
if element.find('vector'):
return VectorPattern(element)
def __init__(self, element):
self.name = element.find('name').text
self.fill_type = self.FillType(element.find('filltype').text)
def generate_bitmap(self, image, output_path):
# Default implementation noops; BitmapPattern will override
pass
@property
def size(self):
return max(self.height, self.width)
def as_style(self, color_table, ttt):
return '''
STYLE
SYMBOL "{symbol}"
COLOR {color}
GAP {gap}
SIZE {size}
WIDTH {stroke_width}
END
'''.format(
ttt=ttt,
symbol=self.name,
color=color_table[self.color].rgb,
gap=self.size + self.gap,
size=self.size,
stroke_width=self.stroke_width,
)
@abc.abstractmethod
def as_symbol(self, subdir):
pass
class BitmapPattern(Pattern):
def __init__(self, element):
super().__init__(element)
bitmap = element.find('bitmap')
self.gap = int(bitmap.find('distance').attrib['min'])
self.width = int(bitmap.attrib['width'])
self.height = int(bitmap.attrib['height'])
pivot = bitmap.find('pivot')
self.pivot_x = int(pivot.attrib['x'])
self.pivot_y = int(pivot.attrib['y'])
location = bitmap.find('graphics-location')
self.bitmap_x = int(location.attrib['x'])
self.bitmap_y = int(location.attrib['y'])
def generate_bitmap(self, image, output_path):
with image[self.bitmap_x:self.bitmap_x + self.width,
self.bitmap_y:self.bitmap_y + self.height] as symbol:
symbol.save(filename=os.path.join(output_path,
self.name + '.png'))
def as_symbol(self, symboltype):
return """
SYMBOL
NAME "{symname}"
TYPE PIXMAP
IMAGE "symbols-{symboltype}/{symname}.png"
END""".format(symname=self.name, symboltype=symboltype)
class VectorPattern(Pattern):
def __init__(self, element):
super().__init__(element)
self.hpgl = element.find('HPGL').text
vector = element.find('vector')
origin = vector.find('origin')
self.offset_x = int(origin.attrib['x'])
self.offset_y = int(origin.attrib['y'])
self.width = int(vector.attrib['width']) * 0.03
self.height = int(vector.attrib['height']) * 0.03
color_ref = element.find('color-ref').text
self.color = color_ref[1:] or 'NODTA' # Assume only one colour
distance = vector.find('distance')
self.gap = int(distance.attrib['min']) * 0.03
self._parse_vector()
def _parse_vector(self):
self.points = []
for instruction in self.hpgl.split(';'):
if not instruction:
continue
command, args = instruction[:2], instruction[2:]
if command == 'SP':
pass # Assume only one pen
elif command == 'SW':
self.stroke_width = int(args) * 0.3
elif command == 'PU':
if self.points:
self.points += [-99, -99]
x, y = map(int, args.split(','))
self.points += [x - self.offset_x, y - self.offset_y]
elif command == 'PD':
if not args:
continue # The PU already set our point
coordinates = map(int, args.split(','))
for x, y in zip(coordinates, coordinates):
self.points += [x - self.offset_x, y - self.offset_y]
else:
import warnings
warnings.warn('Pattern command not implemented: ' + command)
def as_symbol(self, symboltype):
return """
SYMBOL
NAME "{symname}"
TYPE VECTOR
POINTS
{points}
END
END""".format(symname=self.name, points=' '.join(map(str, self.points)))
| 30.633495 | 79 | 0.532525 | 1,381 | 12,621 | 4.749457 | 0.154236 | 0.027443 | 0.022869 | 0.020277 | 0.407227 | 0.328709 | 0.269553 | 0.220156 | 0.196676 | 0.179296 | 0 | 0.009941 | 0.354409 | 12,621 | 411 | 80 | 30.708029 | 0.795042 | 0.029079 | 0 | 0.411411 | 0 | 0 | 0.095973 | 0.00294 | 0 | 0 | 0 | 0 | 0 | 1 | 0.102102 | false | 0.009009 | 0.018018 | 0.039039 | 0.234234 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b189783108a886fb457198a07d1431e930302d18 | 5,040 | py | Python | lib/colors.py | ikea-lisp-code/firemix | a4e2af316fa3ec7e847bc892256eee71c1d5619c | [
"MIT"
] | 2 | 2018-10-04T18:54:33.000Z | 2019-08-10T22:33:16.000Z | lib/colors.py | ikea-lisp-code/firemix | a4e2af316fa3ec7e847bc892256eee71c1d5619c | [
"MIT"
] | null | null | null | lib/colors.py | ikea-lisp-code/firemix | a4e2af316fa3ec7e847bc892256eee71c1d5619c | [
"MIT"
] | null | null | null | import colorsys
import numpy as np
def float_to_uint8(float_color):
"""
Converts a float color (0 to 1.0) to uint8 (0 to 255)
"""
return tuple(map(lambda x: int(255.0 * x), float_color))
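# e.g. float_to_uint8((1.0, 0.5, 0.0)) -> (255, 127, 0) (int() truncates)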
def uint8_to_float(uint8_color):
"""
Converts a uint8 color (0 to 255) to float (0 to 1.0)
"""
return tuple(map(lambda x: float(x) / 255.0, uint8_color))
def rgb_uint8_to_hsv_float(rgb_color):
return colorsys.rgb_to_hsv(*uint8_to_float(rgb_color))
def hsv_float_to_rgb_uint8(hsv_color):
return float_to_uint8(colorsys.hsv_to_rgb(*hsv_color))
def clip(low, input, high):
return min(max(input, low), high)
def blend_to_buffer(source, destination, progress, mode):
h1,l1,s1 = source.T
h2,l2,s2 = destination.T
if mode == 'overwrite':
not_dark = l1 > 0.0
destination[not_dark] = source[not_dark]
else:
raise NotImplementedError
return destination
def hls_blend(start, end, output_buffer, progress, mode, fade_length=1.0, ease_power=0.5):
p = abs(progress)
startPower = (1.0 - p) / fade_length
startPower = clip(0.0, startPower, 1.0)
startPower = pow(startPower, ease_power)
endPower = p / fade_length
endPower = clip(0.0, endPower, 1.0)
endPower = pow(endPower, ease_power)
h1,l1,s1 = start.T
h2,l2,s2 = end.T
np.clip(l1,0,1,l1)
np.clip(l2,0,1,l2)
np.clip(s1,0,1,s1)
np.clip(s2,0,1,s2)
startWeight = (1.0 - 2 * np.abs(0.5 - l1)) * s1
endWeight = (1.0 - 2 * np.abs(0.5 - l2)) * s2
s = (s1 * startPower + s2 * endPower)
x1 = np.cos(2 * np.pi * h1) * startPower * startWeight
x2 = np.cos(2 * np.pi * h2) * endPower * endWeight
y1 = np.sin(2 * np.pi * h1) * startPower * startWeight
y2 = np.sin(2 * np.pi * h2) * endPower * endWeight
x = x1 + x2
y = y1 + y2
if progress >= 0:
l = np.maximum(l1 * startPower, l2 * endPower)
opposition = np.sqrt(np.square((x1-x2)/2) + np.square((y1-y2)/2))
if mode == 'multiply':
l -= opposition
elif mode == 'add':
l = np.maximum(l, opposition, l)
else: # hacky support for old blend
l = np.sqrt(np.square(x) + np.square(y)) / 2
h = np.arctan2(y, x) / (2*np.pi)
    # Zero hue and saturation where there is no colour vector to point at
    nocolor = (x == 0) & (y == 0)
    h = np.where(nocolor, 0, h)
    s = np.where(nocolor, 0, s)
np.clip(l, 0, 1, l)
if output_buffer is not None:
frame = output_buffer
frame[:, 0] = h
frame[:, 1] = l
frame[:, 2] = s
else:
frame = np.asarray([h, l, s]).T
return frame
def rgb_to_hls(arr):
""" fast rgb_to_hls using numpy array """
# adapted from Arnar Flatberg
# http://www.mail-archive.com/numpy-discussion@scipy.org/msg06147.html
arr = arr.astype("float32") / 255.0
out = np.empty_like(arr)
arr_max = arr.max(-1)
delta = arr.ptp(-1)
arr_min = arr.min(-1)
total = arr_max + arr_min
l = total / 2.0
s = delta / total
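    # total == 0 (pure black) makes this 0/0 = NaN; NaNs are zeroed below.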
idx = (l > 0.5)
s[idx] = delta[idx] / (2.0 - total[idx])
# red is max
idx = (arr[:,:,0] == arr_max)
out[idx, 0] = (arr[idx, 1] - arr[idx, 2]) / delta[idx]
# green is max
idx = (arr[:,:,1] == arr_max)
out[idx, 0] = 2. + (arr[idx, 2] - arr[idx, 0] ) / delta[idx]
# blue is max
idx = (arr[:,:,2] == arr_max)
out[idx, 0] = 4. + (arr[idx, 0] - arr[idx, 1] ) / delta[idx]
out[:,:,0] = (out[:,:,0]/6.0) % 1.0
out[:,:,1] = l
out[:,:,2] = s
idx = (delta==0)
out[idx, 2] = 0.0
out[idx, 0] = 0.0
# remove NaN
out[np.isnan(out)] = 0
return out
def hls_to_rgb(hls):
"""
Converts HLS color array [[H,L,S]] to RGB array.
http://en.wikipedia.org/wiki/HSL_and_HSV#From_HSL
Returns [[R,G,B]] in [0..1]
Adapted from: http://stackoverflow.com/questions/4890373/detecting-thresholds-in-hsv-color-space-from-rgb-using-python-pil/4890878#4890878
"""
H = hls[:, 0]
L = hls[:, 1]
S = hls[:, 2]
C = (1 - np.absolute(2 * L - 1)) * S
Hp = H * 6.0
    i = Hp.astype(int)  # np.int was removed from NumPy; the builtin is equivalent
#f = Hp - i # |H' mod 2| ?
X = C * (1 - np.absolute(np.mod(Hp, 2) - 1))
#X = C * (1 - f)
# initialize with zero
R = np.zeros(H.shape, float)
G = np.zeros(H.shape, float)
B = np.zeros(H.shape, float)
# handle each case:
#mask = (Hp >= 0) == ( Hp < 1)
mask = i % 6 == 0
R[mask] = C[mask]
G[mask] = X[mask]
#mask = (Hp >= 1) == ( Hp < 2)
mask = i == 1
R[mask] = X[mask]
G[mask] = C[mask]
#mask = (Hp >= 2) == ( Hp < 3)
mask = i == 2
G[mask] = C[mask]
B[mask] = X[mask]
#mask = (Hp >= 3) == ( Hp < 4)
mask = i == 3
G[mask] = X[mask]
B[mask] = C[mask]
#mask = (Hp >= 4) == ( Hp < 5)
mask = i == 4
R[mask] = X[mask]
B[mask] = C[mask]
#mask = (Hp >= 5) == ( Hp < 6)
mask = i == 5
R[mask] = C[mask]
B[mask] = X[mask]
m = L - 0.5*C
R += m
G += m
B += m
rgb = np.empty_like(hls)
rgb[:, 0] = R
rgb[:, 1] = G
rgb[:, 2] = B
return rgb
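# Illustrative round trip (an added example, not part of the original module):
#   rgb_to_hls(np.array([[[255, 0, 0]]], dtype=np.uint8))  # -> [[[0.0, 0.5, 1.0]]]  (H, L, S)
#   hls_to_rgb(np.array([[0.0, 0.5, 1.0]]))                # -> [[1.0, 0.0, 0.0]]    (R, G, B)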
| 23.661972 | 142 | 0.533532 | 831 | 5,040 | 3.162455 | 0.199759 | 0.006849 | 0.020548 | 0.012557 | 0.160198 | 0.08067 | 0.041096 | 0.019026 | 0.019026 | 0 | 0 | 0.066611 | 0.288095 | 5,040 | 212 | 143 | 23.773585 | 0.665831 | 0.16627 | 0 | 0.116279 | 0 | 0 | 0.006552 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.069767 | false | 0 | 0.015504 | 0.023256 | 0.155039 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b189a02c156e82cbc1089021cc91f8bbf47984dc | 3,177 | py | Python | SNAS/dist_util_torch.py | cwlacewe/SNAS-Series | 92ac8031f718235aecaefb9967851f8f355dbca0 | [
"MIT"
] | 133 | 2020-03-23T02:36:09.000Z | 2022-03-07T03:33:44.000Z | SNAS/dist_util_torch.py | cwlacewe/SNAS-Series | 92ac8031f718235aecaefb9967851f8f355dbca0 | [
"MIT"
] | 10 | 2020-04-05T16:47:54.000Z | 2021-06-03T09:08:24.000Z | SNAS/dist_util_torch.py | cwlacewe/SNAS-Series | 92ac8031f718235aecaefb9967851f8f355dbca0 | [
"MIT"
] | 27 | 2020-03-29T05:35:13.000Z | 2022-03-03T03:24:24.000Z | import os
import time
import math
import torch
import torch.distributed as dist
from torch.nn import Module
from torch.utils.data.distributed import DistributedSampler
from torch.utils.data.sampler import Sampler
import multiprocessing as mp
import numpy as np
from torch.distributed import get_world_size, get_rank
def init_dist(backend='nccl', master_ip='10.10.17.21', port=29500):
if mp.get_start_method(allow_none=True) is None:
mp.set_start_method('spawn')
# node_list = os.environ['SLURM_NODELIST']
# if '[' in node_list:
# beg = node_list.find('[')
# pos1 = node_list.find('-', beg)
# if pos1 < 0:
# pos1 = 1000
# pos2 = node_list.find(',', beg)
# if pos2 < 0:
# pos2 = 1000
# node_list = node_list[:min(pos1, pos2)].replace('[', '')
# addr = node_list[8:].replace('-', '.')
# os.environ['MASTER_ADDR'] = addr
os.environ['MASTER_ADDR'] = master_ip
os.environ['MASTER_PORT'] = str(port)
rank = int(os.environ['RANK'])
world_size = int(os.environ['WORLD_SIZE'])
num_gpus = torch.cuda.device_count()
torch.cuda.set_device(rank % num_gpus)
device = torch.device('cuda')
dist.init_process_group(backend=backend)
return rank, world_size, device
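# Usage note (illustrative): init_dist assumes the launcher has already
# exported RANK and WORLD_SIZE for every process, e.g.
#   RANK=0 WORLD_SIZE=2 python train.py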
def reduce_gradients(model, sync=False):
    """ sum gradients across ranks (dist.all_reduce defaults to SUM) """
    for name, param in model.named_parameters():
        if param.requires_grad:
            dist.all_reduce(param.grad.detach())
def reduce_tensorgradients(tensor_list, sync=False):
    """ sum gradients across ranks (dist.all_reduce defaults to SUM) """
    for param in tensor_list:
        if param.requires_grad and param.grad is not None:
            dist.all_reduce(param.grad.detach())
def part_reduce_gradients(tensor_list, param_count, sync=False):
""" average gradients """
    dist.all_reduce(param_count.detach())
    idx = 0  # renamed from `id` to avoid shadowing the builtin
    for param in tensor_list:
        if param.requires_grad:
            if param_count[idx] != 0:
                dist.all_reduce(param.grad.data)
                param.grad.div_(param_count[idx])
            idx += 1
def broadcast_params(model):
""" broadcast model parameters """
for name,p in model.state_dict().items():
dist.broadcast(p, 0)
| 30.548077 | 68 | 0.616934 | 425 | 3,177 | 4.421176 | 0.235294 | 0.063864 | 0.038318 | 0.031932 | 0.632251 | 0.592336 | 0.592336 | 0.529005 | 0.529005 | 0.487493 | 0 | 0.027295 | 0.238905 | 3,177 | 103 | 69 | 30.84466 | 0.749793 | 0.423985 | 0 | 0.139535 | 0 | 0 | 0.03413 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.116279 | false | 0 | 0.255814 | 0 | 0.395349 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b189e72c933116bb80d6f18519a94078119c05fc | 5,034 | py | Python | sushy/tests/unit/resources/system/test_secure_boot.py | calsoft-internal/sushy | ffa84846fdb8691042f2156e64649a98feb1f4a2 | [
"Apache-2.0"
] | 37 | 2017-03-24T10:17:37.000Z | 2022-02-10T19:42:26.000Z | sushy/tests/unit/resources/system/test_secure_boot.py | calsoft-internal/sushy | ffa84846fdb8691042f2156e64649a98feb1f4a2 | [
"Apache-2.0"
] | null | null | null | sushy/tests/unit/resources/system/test_secure_boot.py | calsoft-internal/sushy | ffa84846fdb8691042f2156e64649a98feb1f4a2 | [
"Apache-2.0"
] | 29 | 2017-07-19T21:28:06.000Z | 2021-06-09T05:20:32.000Z | # Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import json
from unittest import mock
from sushy import exceptions
from sushy.resources.system import constants
from sushy.resources.system import secure_boot
from sushy.resources.system import secure_boot_database
from sushy.tests.unit import base
class SecureBootTestCase(base.TestCase):
def setUp(self):
super(SecureBootTestCase, self).setUp()
self.conn = mock.Mock()
with open('sushy/tests/unit/json_samples/secure_boot.json') as f:
self.secure_boot_json = json.load(f)
self.conn.get.return_value.json.return_value = self.secure_boot_json
self.secure_boot = secure_boot.SecureBoot(
self.conn, '/redfish/v1/Systems/437XR1138R2/SecureBoot',
registries={}, redfish_version='1.1.0')
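        # The connector is fully mocked, so each test drives the resource
        # with canned JSON samples rather than a live Redfish endpoint.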
def test__parse_attributes(self):
self.secure_boot._parse_attributes(self.secure_boot_json)
self.assertEqual('1.1.0', self.secure_boot.redfish_version)
self.assertEqual('SecureBoot', self.secure_boot.identity)
self.assertEqual('UEFI Secure Boot', self.secure_boot.name)
self.assertIsNone(self.secure_boot.description)
self.assertIs(False, self.secure_boot.enabled)
self.assertEqual(constants.SECURE_BOOT_DISABLED,
self.secure_boot.current_boot)
self.assertEqual(constants.SECURE_BOOT_MODE_DEPLOYED,
self.secure_boot.mode)
@mock.patch.object(secure_boot.LOG, 'warning', autospec=True)
def test_get_allowed_reset_keys_values(self, mock_log):
self.assertEqual({constants.SECURE_BOOT_RESET_KEYS_TO_DEFAULT,
constants.SECURE_BOOT_RESET_KEYS_DELETE_ALL,
constants.SECURE_BOOT_RESET_KEYS_DELETE_PK},
self.secure_boot.get_allowed_reset_keys_values())
self.assertFalse(mock_log.called)
@mock.patch.object(secure_boot.LOG, 'warning', autospec=True)
def test_get_allowed_reset_keys_values_no_values(self, mock_log):
self.secure_boot._actions.reset_keys.allowed_values = None
self.assertEqual({constants.SECURE_BOOT_RESET_KEYS_TO_DEFAULT,
constants.SECURE_BOOT_RESET_KEYS_DELETE_ALL,
constants.SECURE_BOOT_RESET_KEYS_DELETE_PK},
self.secure_boot.get_allowed_reset_keys_values())
self.assertTrue(mock_log.called)
@mock.patch.object(secure_boot.LOG, 'warning', autospec=True)
def test_get_allowed_reset_keys_values_custom_values(self, mock_log):
self.secure_boot._actions.reset_keys.allowed_values = [
'ResetAllKeysToDefault',
'IamNotRedfishCompatible',
]
self.assertEqual({constants.SECURE_BOOT_RESET_KEYS_TO_DEFAULT},
self.secure_boot.get_allowed_reset_keys_values())
self.assertFalse(mock_log.called)
def test_set_enabled(self):
self.secure_boot.set_enabled(True)
self.conn.patch.assert_called_once_with(
'/redfish/v1/Systems/437XR1138R2/SecureBoot',
data={'SecureBootEnable': True})
def test_set_enabled_wrong_type(self):
self.assertRaises(exceptions.InvalidParameterValueError,
self.secure_boot.set_enabled, 'banana')
def test_reset_keys(self):
self.secure_boot.reset_keys(
constants.SECURE_BOOT_RESET_KEYS_TO_DEFAULT)
self.conn.post.assert_called_once_with(
'/redfish/v1/Systems/437XR1138R2/SecureBoot'
'/Actions/SecureBoot.ResetKeys',
data={'ResetKeysType': 'ResetAllKeysToDefault'})
def test_reset_keys_wrong_value(self):
self.assertRaises(exceptions.InvalidParameterValueError,
self.secure_boot.reset_keys, 'DeleteEverything')
def test_databases(self):
self.conn.get.return_value.json.reset_mock()
with open('sushy/tests/unit/json_samples/'
'secure_boot_database_collection.json') as f:
self.conn.get.return_value.json.return_value = json.load(f)
result = self.secure_boot.databases
self.assertIsInstance(
result, secure_boot_database.SecureBootDatabaseCollection)
self.conn.get.return_value.json.assert_called_once_with()
self.conn.get.return_value.json.reset_mock()
self.assertIs(result, self.secure_boot.databases)
self.conn.get.return_value.json.assert_not_called()
| 44.548673 | 78 | 0.696663 | 615 | 5,034 | 5.417886 | 0.252033 | 0.129052 | 0.096639 | 0.057023 | 0.552821 | 0.477191 | 0.457383 | 0.414166 | 0.340036 | 0.267407 | 0 | 0.009388 | 0.217124 | 5,034 | 112 | 79 | 44.946429 | 0.836082 | 0.108462 | 0 | 0.219512 | 0 | 0 | 0.098368 | 0.074223 | 0 | 0 | 0 | 0 | 0.256098 | 1 | 0.121951 | false | 0 | 0.085366 | 0 | 0.219512 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b18a4d1b8a503fb102ce851b7a6740bc0a49bb70 | 6,220 | py | Python | heat/engine/resources/openstack/neutron/lbaas/loadbalancer.py | hongbin/heat | 1a8495eaa728d36ccff4b0285befb5a02e31b3de | [
"Apache-2.0"
] | 1 | 2018-07-04T07:59:26.000Z | 2018-07-04T07:59:26.000Z | heat/engine/resources/openstack/neutron/lbaas/loadbalancer.py | ljzjohnson/heat | 9e463f4af77513980b1fd215d5d2ad3bf7b979f9 | [
"Apache-2.0"
] | 1 | 2018-07-26T22:07:09.000Z | 2018-07-26T22:07:09.000Z | heat/engine/resources/openstack/neutron/lbaas/loadbalancer.py | ljzjohnson/heat | 9e463f4af77513980b1fd215d5d2ad3bf7b979f9 | [
"Apache-2.0"
] | 3 | 2018-07-19T17:43:37.000Z | 2019-11-15T22:13:30.000Z | #
# Copyright 2015 IBM Corp.
#
# All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from neutronclient.common import exceptions
from heat.common import exception
from heat.common.i18n import _
from heat.engine import attributes
from heat.engine import constraints
from heat.engine import properties
from heat.engine.resources.openstack.neutron import neutron
from heat.engine import support
from heat.engine import translation
class LoadBalancer(neutron.NeutronResource):
"""A resource for creating LBaaS v2 Load Balancers.
This resource creates and manages Neutron LBaaS v2 Load Balancers,
which allows traffic to be directed between servers.
"""
support_status = support.SupportStatus(version='6.0.0')
required_service_extension = 'lbaasv2'
entity = 'loadbalancer'
PROPERTIES = (
DESCRIPTION, NAME, PROVIDER, VIP_ADDRESS, VIP_SUBNET,
ADMIN_STATE_UP, TENANT_ID
) = (
'description', 'name', 'provider', 'vip_address', 'vip_subnet',
'admin_state_up', 'tenant_id'
)
ATTRIBUTES = (
VIP_ADDRESS_ATTR, VIP_PORT_ATTR, VIP_SUBNET_ATTR, POOLS_ATTR
) = (
'vip_address', 'vip_port_id', 'vip_subnet_id', 'pools'
)
properties_schema = {
DESCRIPTION: properties.Schema(
properties.Schema.STRING,
_('Description of this Load Balancer.'),
update_allowed=True,
default=''
),
NAME: properties.Schema(
properties.Schema.STRING,
_('Name of this Load Balancer.'),
update_allowed=True
),
PROVIDER: properties.Schema(
properties.Schema.STRING,
_('Provider for this Load Balancer.'),
constraints=[
constraints.CustomConstraint('neutron.lbaas.provider')
],
),
VIP_ADDRESS: properties.Schema(
properties.Schema.STRING,
_('IP address for the VIP.'),
constraints=[
constraints.CustomConstraint('ip_addr')
],
),
VIP_SUBNET: properties.Schema(
properties.Schema.STRING,
_('The name or ID of the subnet on which to allocate the VIP '
'address.'),
constraints=[
constraints.CustomConstraint('neutron.subnet')
],
required=True
),
ADMIN_STATE_UP: properties.Schema(
properties.Schema.BOOLEAN,
_('The administrative state of this Load Balancer.'),
default=True,
update_allowed=True
),
TENANT_ID: properties.Schema(
properties.Schema.STRING,
_('The ID of the tenant who owns the Load Balancer. Only '
'administrative users can specify a tenant ID other than '
'their own.'),
constraints=[
constraints.CustomConstraint('keystone.project')
],
)
}
attributes_schema = {
VIP_ADDRESS_ATTR: attributes.Schema(
_('The VIP address of the LoadBalancer.'),
type=attributes.Schema.STRING
),
VIP_PORT_ATTR: attributes.Schema(
_('The VIP port of the LoadBalancer.'),
type=attributes.Schema.STRING
),
VIP_SUBNET_ATTR: attributes.Schema(
_('The VIP subnet of the LoadBalancer.'),
type=attributes.Schema.STRING
),
POOLS_ATTR: attributes.Schema(
_('Pools this LoadBalancer is associated with.'),
type=attributes.Schema.LIST,
support_status=support.SupportStatus(version='9.0.0')
),
}
def translation_rules(self, props):
return [
translation.TranslationRule(
props,
translation.TranslationRule.RESOLVE,
[self.VIP_SUBNET],
client_plugin=self.client_plugin(),
finder='find_resourceid_by_name_or_id',
entity='subnet'
),
]
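    # The RESOLVE rule above lets users supply either a subnet name or ID;
    # Heat converts it to the canonical UUID before handle_create runs.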
def handle_create(self):
properties = self.prepare_properties(
self.properties,
self.physical_resource_name()
)
properties['vip_subnet_id'] = properties.pop(self.VIP_SUBNET)
lb = self.client().create_loadbalancer(
{'loadbalancer': properties})['loadbalancer']
self.resource_id_set(lb['id'])
def check_create_complete(self, data):
return self.client_plugin().check_lb_status(self.resource_id)
def handle_update(self, json_snippet, tmpl_diff, prop_diff):
if prop_diff:
self.client().update_loadbalancer(
self.resource_id,
{'loadbalancer': prop_diff})
return prop_diff
def check_update_complete(self, prop_diff):
if prop_diff:
return self.client_plugin().check_lb_status(self.resource_id)
return True
def handle_delete(self):
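        # Intentionally a no-op: the actual delete call is issued from
        # check_delete_complete, which retries until Neutron reports the
        # load balancer gone, even from a transitional or error state.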
pass
def check_delete_complete(self, data):
if self.resource_id is None:
return True
try:
try:
if self.client_plugin().check_lb_status(self.resource_id):
self.client().delete_loadbalancer(self.resource_id)
except exception.ResourceInError:
# Still try to delete loadbalancer in error state
self.client().delete_loadbalancer(self.resource_id)
except exceptions.NotFound:
# Resource is gone
return True
return False
def resource_mapping():
return {
'OS::Neutron::LBaaS::LoadBalancer': LoadBalancer
}
| 32.736842 | 78 | 0.609646 | 652 | 6,220 | 5.645706 | 0.292945 | 0.0652 | 0.030427 | 0.060853 | 0.270035 | 0.176039 | 0.153763 | 0.123064 | 0.071991 | 0.06031 | 0 | 0.004396 | 0.305145 | 6,220 | 189 | 79 | 32.910053 | 0.847293 | 0.134566 | 0 | 0.296552 | 0 | 0 | 0.151272 | 0.01552 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055172 | false | 0.006897 | 0.062069 | 0.02069 | 0.234483 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b18d16c879cac6fd3708e309b0082f070678f07a | 13,242 | py | Python | openselfsup/datasets/contrastive_boxes.py | ChenhongyiYang/CCOP | 956b952a42017ab1390adcaa6573c7764bedafec | [
"Apache-2.0"
] | 22 | 2021-11-29T12:05:41.000Z | 2022-03-10T23:09:27.000Z | openselfsup/datasets/contrastive_boxes.py | ChenhongyiYang/CCOP | 956b952a42017ab1390adcaa6573c7764bedafec | [
"Apache-2.0"
] | 1 | 2022-02-04T21:24:48.000Z | 2022-02-04T22:23:02.000Z | openselfsup/datasets/contrastive_boxes.py | ChenhongyiYang/CCOP | 956b952a42017ab1390adcaa6573c7764bedafec | [
"Apache-2.0"
] | 2 | 2022-03-24T11:16:06.000Z | 2022-03-30T08:41:11.000Z | import torch
from PIL import Image
from openselfsup.datasets.registry import DATASETS
from openselfsup.datasets.base import BaseDataset
import math
import copy
import random
import logging
import numpy as np
from typing import List, Optional, Union
import torchvision.transforms.transforms
from torchvision.transforms import RandomResizedCrop
import torchvision.transforms.functional as TVF
from detectron2.structures import Boxes
def RandomResizedCrop_get_params(width, height, scale: List[float], ratio: List[float]):
"""Get parameters for ``crop`` for a random sized crop.
Args:
img (PIL Image or Tensor): Input image.
scale (list): range of scale of the origin size cropped
ratio (list): range of aspect ratio of the origin aspect ratio cropped
Returns:
tuple: params (i, j, h, w) to be passed to ``crop`` for a random
sized crop.
"""
area = height * width
for _ in range(10):
target_area = area * torch.empty(1).uniform_(scale[0], scale[1]).item()
log_ratio = torch.log(torch.tensor(ratio))
aspect_ratio = torch.exp(
torch.empty(1).uniform_(log_ratio[0], log_ratio[1])
).item()
w = int(round(math.sqrt(target_area * aspect_ratio)))
h = int(round(math.sqrt(target_area / aspect_ratio)))
if 0 < w <= width and 0 < h <= height:
i = torch.randint(0, height - h + 1, size=(1,)).item()
j = torch.randint(0, width - w + 1, size=(1,)).item()
return i, j, h, w
# Fallback to central crop
in_ratio = float(width) / float(height)
if in_ratio < min(ratio):
w = width
h = int(round(w / min(ratio)))
elif in_ratio > max(ratio):
h = height
w = int(round(h * max(ratio)))
else: # whole image
w = width
h = height
i = (height - h) // 2
j = (width - w) // 2
return i, j, h, w
def get_jitter_iou_mat(boxes, jitter):
with torch.no_grad():
if type(boxes) is Boxes:
boxes = boxes.tensor
if type(jitter) is Boxes:
jitter = jitter.tensor
n_jitter = jitter.size(1)
boxes_ = boxes[:, None, :].repeat(1, n_jitter, 1) # [N, n_jitter, 4]
xmin1 = boxes_[:, :, 0]
ymin1 = boxes_[:, :, 1]
xmax1 = boxes_[:, :, 2]
ymax1 = boxes_[:, :, 3]
xmin2 = jitter[:, :, 0]
ymin2 = jitter[:, :, 1]
xmax2 = jitter[:, :, 2]
ymax2 = jitter[:, :, 3]
xmin_int = torch.max(xmin1, xmin2)
ymin_int = torch.max(ymin1, ymin2)
xmax_int = torch.min(xmax1, xmax2)
ymax_int = torch.min(ymax1, ymax2)
w_int = (xmax_int - xmin_int).clamp(min=0.)
h_int = (ymax_int - ymin_int).clamp(min=0.)
area_int = w_int * h_int
area1 = (xmax1 - xmin1) * (ymax1 - ymin1)
area2 = (xmax2 - xmin2) * (ymax2 - ymin2)
iou = area_int / (area1 + area2 - area_int) # [N, n_jitter]
return iou
class AdvancedInstanceContrastAug(object):
def __init__(self, size, scale, ratio, area_thr, part_thr, min_box_num, max_try):
self.size = size
self.scale = scale
self.ratio = ratio
self.small_thr = area_thr
self.part_thr = part_thr
self.min_box_num = min_box_num
self.max_try = max_try
def _filter_out_boxes(self, boxes, ori_areas):
# filter out small (zero) boxes
areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
# filter out partial boxes
part_keep = (areas / ori_areas) > self.part_thr
return part_keep
def _one_try(self, img, boxes, scale):
if scale is None:
top, left, h, w = RandomResizedCrop.get_params(img, self.scale, self.ratio)
else:
top, left, h, w = RandomResizedCrop.get_params(img, scale, self.ratio)
new_boxes = boxes.clone()
new_boxes[:, 0] = (new_boxes[:, 0] - left).clamp(min=0, max=w)
new_boxes[:, 1] = (new_boxes[:, 1] - top).clamp(min=0, max=h)
new_boxes[:, 2] = (new_boxes[:, 2] - left).clamp(min=0, max=w)
new_boxes[:, 3] = (new_boxes[:, 3] - top).clamp(min=0, max=h)
return (top, left, h, w), new_boxes
def _resize_box(self, boxes, h, w):
h = float(h)
w = float(w)
boxes[:, 0] *= self.size / w
boxes[:, 1] *= self.size / h
boxes[:, 2] *= self.size / w
boxes[:, 3] *= self.size / h
return boxes
def apply_random_hflip(self, img, boxes):
_img = img.copy()
_boxes = boxes.clone()
width, height = _img.size
r = random.random()
if r > 0.5:
_img = _img.transpose(Image.FLIP_LEFT_RIGHT)
_boxes[:, 0] = width - boxes[:, 2]
_boxes[:, 2] = width - boxes[:, 0]
return _img, _boxes
def apply_box_jitter(self, boxes, xy_mag, wh_mag, n_jitter, iou_range):
assert len(iou_range) == 2
assert iou_range[0] < iou_range[1]
if len(boxes) == 0:
return boxes
with torch.no_grad():
x1 = boxes[:, 0]
y1 = boxes[:, 1]
x2 = boxes[:, 2]
y2 = boxes[:, 3]
x = (x1 + x2) * 0.5
y = (y1 + y2) * 0.5
w = x2 - x1
h = y2 - y1
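            # Sample n_jitter candidate perturbations per box: centers shift
            # by up to +/-0.5*xy_mag of the box's own width/height, and sizes
            # rescale by exp(+/-0.5*wh_mag).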
t_xy = (torch.rand(size=(len(boxes), n_jitter, 2)) - 0.5) * xy_mag # [N, n_jitter,2]
t_wh = (torch.rand(size=(len(boxes), n_jitter, 2)) - 0.5) * wh_mag # [N, n_jitter,2]
x_jitter = w[:, None] * t_xy[:, :, 0] + x[:, None]
y_jitter = h[:, None] * t_xy[:, :, 1] + y[:, None]
w_jitter = torch.exp(t_wh[:, :, 0]) * w[:, None]
h_jitter = torch.exp(t_wh[:, :, 1]) * h[:, None]
x1_jitter = x_jitter - 0.5 * w_jitter
y1_jitter = y_jitter - 0.5 * h_jitter
x2_jitter = x_jitter + 0.5 * w_jitter
y2_jitter = y_jitter + 0.5 * h_jitter
jit_boxes = torch.stack((x1_jitter, y1_jitter, x2_jitter, y2_jitter), dim=2) # [N, n_jitter, 4]
box_jitter_iou_mat = get_jitter_iou_mat(boxes, jit_boxes) # [N, n_jitter]
final_boxes = []
for i in range(len(jit_boxes)):
iou_vec = box_jitter_iou_mat[i]
valid_inds = torch.arange(n_jitter, dtype=torch.long)[(iou_vec >= iou_range[0]) & (iou_vec <= iou_range[1])]
if len(valid_inds) == 0:
final_boxes.append(jit_boxes[i][0])
else:
selected_ind = valid_inds[torch.randperm(len(valid_inds))[0]]
final_boxes.append(jit_boxes[i][selected_ind])
return torch.stack(final_boxes, dim=0)
def apply_aug(self, img, boxes, scale=None, resize=True, ret_keep=False):
num_boxes = len(boxes)
ori_areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
min_box_num = min(num_boxes, self.min_box_num)
current_max = 0
cur_crop_params = None
cur_keep = None
cur_boxes = None
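        # Try up to max_try random crops; if none keeps min_box_num boxes,
        # fall back to the best crop seen (the one preserving the most boxes).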
for i in range(self.max_try):
crop_params, new_boxes = self._one_try(img, boxes, scale)
keep = self._filter_out_boxes(new_boxes, ori_areas)
n = keep.int().sum()
if n >= min_box_num:
y, x, h, w = crop_params
if not ret_keep:
new_boxes = new_boxes[keep]
if resize:
new_img = TVF.resized_crop(img, y, x, h, w, [self.size, self.size])
new_boxes = self._resize_box(new_boxes, h, w)
else:
new_img = TVF.crop(img, y, x, h, w)
im_w, im_h = img.size
scale = np.sqrt(w * h) / np.sqrt(im_w * im_h)
if not ret_keep:
return new_img, new_boxes, scale
else:
return new_img, new_boxes, keep, scale
if n >= current_max:
cur_crop_params = crop_params
cur_keep = keep
cur_boxes = new_boxes
current_max = n
if not ret_keep:
new_boxes = cur_boxes[cur_keep]
else:
new_boxes = cur_boxes
y, x, h, w = cur_crop_params
if resize:
new_img = TVF.resized_crop(img, y, x, h, w, [self.size, self.size])
new_boxes = self._resize_box(new_boxes, h, w)
else:
new_img = TVF.crop(img, y, x, h, w)
im_w, im_h = img.size
scale = np.sqrt(w * h) / np.sqrt(im_w * im_h)
if not ret_keep:
return new_img, new_boxes, scale
else:
return new_img, new_boxes, cur_keep, scale
def __call__(self, img, instances):
cropped_img, new_boxes, scale = self.apply_aug(img, instances.gt_boxes.tensor)
img1, boxes1 = self.apply_random_hflip(cropped_img, new_boxes)
img2, boxes2 = self.apply_random_hflip(cropped_img, new_boxes)
boxes1 = self.apply_box_jitter(boxes1, 1., 1., n_jitter=5, iou_range=(0.3, 0.6))
boxes2 = self.apply_box_jitter(boxes2, 1., 1., n_jitter=5, iou_range=(0.3, 0.6))
return img1, img2, boxes1, boxes2
class ScaleInstanceContrastAug(object):
def __init__(self, size, scales, ratio, area_thr, part_thr, min_box_num, max_try):
        self.aug = AdvancedInstanceContrastAug(size, (0.2, 1.), ratio, area_thr, part_thr, min_box_num, max_try)
self.small_thr = area_thr
self.scales = scales
def __call__(self, img, instances, ret_class=False):
contrast_boxes = instances.gt_boxes.tensor
img1, boxes1, keep1, _ = self.aug.apply_aug(img, contrast_boxes, scale=self.scales[0], resize=True, ret_keep=True)
img2, boxes2, keep2, _ = self.aug.apply_aug(img, contrast_boxes, scale=self.scales[1], resize=True, ret_keep=True)
classes = instances.gt_classes
area1 = (boxes1[:, 2] - boxes1[:, 0]) * (boxes1[:, 3] - boxes1[:, 1])
area2 = (boxes2[:, 2] - boxes2[:, 0]) * (boxes2[:, 3] - boxes2[:, 1])
area_keep = (area1 > self.small_thr) & (area2 > self.small_thr)
boxes1 = boxes1[keep1 & keep2 & area_keep]
boxes2 = boxes2[keep1 & keep2 & area_keep]
classes = classes[keep1 & keep2 & area_keep]
img1, boxes1 = self.aug.apply_random_hflip(img1, boxes1)
img2, boxes2 = self.aug.apply_random_hflip(img2, boxes2)
r = random.random()
if r > 0.5:
boxes1 = boxes1.clamp(min=0, max=self.aug.size - 1)
boxes2 = boxes2.clamp(min=0, max=self.aug.size - 1)
if ret_class:
return img1, img2, boxes1, boxes2, classes
else:
return img1, img2, boxes1, boxes2
else:
boxes1 = boxes1.clamp(min=0, max=self.aug.size - 1)
boxes2 = boxes2.clamp(min=0, max=self.aug.size - 1)
if ret_class:
return img2, img1, boxes2, boxes1, classes
else:
return img2, img1, boxes2, boxes1
@DATASETS.register_module
class ContrastiveBox(BaseDataset):
"""Dataset for contrastive learning methods that forward
two views of the image at a time (MoCo, SimCLR).
"""
def __init__(self, data_source, pipeline, aug_dict, prefetch = False):
super(ContrastiveBox, self).__init__(data_source, pipeline, prefetch=False)
self.aug = ScaleInstanceContrastAug(
size=aug_dict['size'],
scales=aug_dict['scales'],
ratio=(3. / 4, 4. / 3),
area_thr=aug_dict['area_thr'],
part_thr=aug_dict['part_thr'],
min_box_num=aug_dict['min_box_num'],
max_try=aug_dict['max_try'],
)
self.max_box_num = data_source.max_box_num
def __getitem__(self, idx):
data_dict = self.data_source.get_sample(idx)
img = Image.open(data_dict['file_name']).convert('RGB')
img1, img2, boxes1, boxes2, classes = self.aug(img, data_dict['instances'], ret_class=True)
n_boxes = len(boxes1)
img1 = self.pipeline(img1)
img2 = self.pipeline(img2)
img_cat = torch.cat((img1.unsqueeze(0), img2.unsqueeze(0)), dim=0)
boxes_ret1 = img_cat.new_full((self.max_box_num, 4), 0, dtype=torch.float32)
boxes_ret2 = img_cat.new_full((self.max_box_num, 4), 0, dtype=torch.float32)
boxes_ret1[:n_boxes] = boxes1
boxes_ret2[:n_boxes] = boxes2
classes_ret = img_cat.new_full((self.max_box_num,), -1, dtype=torch.int64)
classes_ret[:n_boxes] = classes
boxes_cat = torch.cat((boxes_ret1.unsqueeze(0), boxes_ret2.unsqueeze(0)), dim=0)
boxes_num = img_cat.new_full((1,), 0, dtype=torch.int32) + n_boxes
return dict(img=img_cat, boxes=boxes_cat, boxes_num=boxes_num, classes=classes_ret, img_id=int(data_dict['img_id']))
def __len__(self):
return self.data_source.get_length()
def evaluate(self, scores, keyword, logger=None, **kwargs):
        raise NotImplementedError | 36.379121 | 124 | 0.572043 | 1,844 | 13,242 | 3.882863 | 0.132863 | 0.031285 | 0.013827 | 0.013408 | 0.346089 | 0.264525 | 0.23324 | 0.215922 | 0.173743 | 0.165642 | 0 | 0.033254 | 0.300559 | 13,242 | 364 | 125 | 36.379121 | 0.739797 | 0.047349 | 0 | 0.216912 | 0 | 0 | 0.005657 | 0 | 0 | 0 | 0 | 0 | 0.007353 | 1 | 0.058824 | false | 0 | 0.058824 | 0.003676 | 0.202206 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b18d320cb7ed8eb6cf8836a116e5f9ed46ae0d1d | 3,933 | py | Python | examples/image/compute_segment_fluo.py | softbear/squidpy_notebooks | 28a5989105e705f06070fbe52b3dfb4d5741e3bd | [
"MIT"
] | null | null | null | examples/image/compute_segment_fluo.py | softbear/squidpy_notebooks | 28a5989105e705f06070fbe52b3dfb4d5741e3bd | [
"MIT"
] | null | null | null | examples/image/compute_segment_fluo.py | softbear/squidpy_notebooks | 28a5989105e705f06070fbe52b3dfb4d5741e3bd | [
"MIT"
] | null | null | null | #!/usr/bin/env python
"""
Cell-segmentation for fluorescence images
-----------------------------------------
This example shows how to use the high resolution tissue images to segment nuclei.
This information can be used to compute additional image features like cell count and cell size per spot
(see :ref:`sphx_glr_auto_examples_image_compute_segmentation_features.py`).
This example shows how to use :func:`squidpy.im.segment` and explains the parameters you can use.
We provide two segmentation models :class:`squidpy.im.SegmentationWatershed`
and :class:`squidpy.im.SegmentationBlob`.
In addition, you can use a custom segmentation function, like a pre-trained :mod:`tensorflow.keras` model,
to perform the segmentation utilizing :class:`squidpy.im.SegmentationCustom`.
Note that when using the provided segmentation models `'blob'` and `'watershed'`, the quality of the
cell-segmentation depends on the quality of your tissue images.
In this example we use the DAPI stain of a fluorescence dataset to compute the segmentation.
For harder cases, you may want to provide your own pre-trained segmentation model.
.. seealso::
See :ref:`sphx_glr_auto_examples_image_compute_segment_hne.py` for an example on how to
calculate a cell-segmentation of an H&E stain.
"""
import squidpy as sq
import numpy as np
import matplotlib.pyplot as plt
# load fluorescence tissue image
img = sq.datasets.visium_fluo_image_crop()
###############################################################################
# We crop the image to a smaller segment.
# This is only to speed things up, :func:`squidpy.im.segment` can also process very large images
# (see :ref:`sphx_glr_auto_examples_image_compute_process_hires.py`.)
crop = img.crop_corner(1000, 1000, size=1000)
###############################################################################
# The tissue image in this dataset contains four fluorescence stains.
# The first one is DAPI, which we will use for the nuclei-segmentation.
fig, axes = plt.subplots(1, 3, figsize=(10, 20))
for i, ax in enumerate(axes):
crop.show("image", channel=i, ax=ax)
###############################################################################
# We segment the image with :func:`squidpy.im.segment` using watershed segmentation
# (``method="watershed"``).
# With the arguments ``layer`` and ``channel`` we define the image layer and
# channel of the image that should be segmented.
#
# With ``kwargs`` we can provide keyword arguments to the segmentation model.
# For watershed segmentation, we need to set a threshold to create the mask image.
# You can either set a manual threshold, or use automated
# `Otsu thresholding <https://en.wikipedia.org/wiki/Otsu%27s_method>`_.
# For this fluorescence image example, Otsu's thresh works very well,
# thus we will use ``thresh = None``.
# See :ref:`sphx_glr_auto_examples_image_compute_segment_hne.py`
# for an example where we use a manually defined threshold.
#
# In addition, we can specify if the values greater or equal than
# the threshold should be in the mask (default)
# or if the values smaller to the threshold should be in the mask (``geq = False``).
sq.im.segment(img=crop, layer="image", channel=0, method="watershed", thresh=None, geq=True)
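# For reference, a manual threshold can be passed instead of ``None``; the
# value below is purely illustrative and would need tuning per image:
# sq.im.segment(img=crop, layer="image", channel=0, method="watershed", thresh=50000, geq=True)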
###############################################################################
# The segmented crop is saved in the layer ``segmented_watershed``.
# This behavior can be changed with the arguments ``copy`` and ``layer_added``.
# The result of the segmentation is a label image that can be used to extract features like the
# number of cells from the image.
print(crop)
print(f"Number of segments in crop: {len(np.unique(crop['segmented_watershed']))}")
fig, axes = plt.subplots(1, 2)
crop.show("image", channel=0, ax=axes[0])
_ = axes[0].set_title("DAPI")
crop.show("segmented_watershed", cmap="jet", interpolation="none", ax=axes[1])
_ = axes[1].set_title("segmentation")
| 45.206897 | 106 | 0.692601 | 567 | 3,933 | 4.730159 | 0.368607 | 0.020134 | 0.014914 | 0.019389 | 0.126771 | 0.112603 | 0.094705 | 0.07308 | 0.045488 | 0.045488 | 0 | 0.00818 | 0.129672 | 3,933 | 86 | 107 | 45.732558 | 0.775343 | 0.723621 | 0 | 0 | 0 | 0 | 0.188859 | 0.061141 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.1875 | 0 | 0.1875 | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b18fa117de3daa5c91e021692f647795832b7e86 | 472 | py | Python | python/searching/jump-search.py | sesn/learn-algorithms | cbd803e1c4ca70edf2a715fca7d1bca9455e5d3f | [
"MIT"
] | null | null | null | python/searching/jump-search.py | sesn/learn-algorithms | cbd803e1c4ca70edf2a715fca7d1bca9455e5d3f | [
"MIT"
] | null | null | null | python/searching/jump-search.py | sesn/learn-algorithms | cbd803e1c4ca70edf2a715fca7d1bca9455e5d3f | [
"MIT"
] | null | null | null | import math
def jump_search(arr, n, x):
    # Jump ahead in blocks of ~sqrt(n) elements until the end of the current
    # block is no longer smaller than x (indices must be ints in Python 3).
    step = int(math.sqrt(n))
    prev = 0
    while arr[min(step, n) - 1] < x:
        prev = step
        step += int(math.sqrt(n))
        if prev >= n:
            return False
    # Linear scan within the block that may contain x.
    while arr[prev] < x:
        prev += 1
        if prev == min(step, n):
            return False
    return arr[prev] == x
arr = [23, 512, 214, 12, 5, 67, 1, 4, 65]
arr.sort()  # jump search requires a sorted list
result = jump_search(arr, len(arr), 1)
print('Search element found:', result)
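# A miss exercises both early-return paths; with block size sqrt(n) the search
# costs O(sqrt(n)) comparisons on a sorted list.
print('Search element found:', jump_search(arr, len(arr), 999))  # -> False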
| 17.481481 | 40 | 0.563559 | 78 | 472 | 3.384615 | 0.435897 | 0.125 | 0.098485 | 0.098485 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.057637 | 0.264831 | 472 | 26 | 41 | 18.153846 | 0.70317 | 0 | 0 | 0.15 | 0 | 0 | 0.048729 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0.05 | 0 | 0.3 | 0.05 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b1902d18c9cd9392cab1674b8a37ed796e9f7e88 | 9,432 | py | Python | python/federatedml/util/label_transform.py | QuantumA/FATE | 89a3dd593252128c1bf86fb1014b25a629bdb31a | [
"Apache-2.0"
] | 1 | 2022-02-07T06:23:15.000Z | 2022-02-07T06:23:15.000Z | python/federatedml/util/label_transform.py | JavaGreenHands/FATE | ea1e94b6be50c70c354d1861093187e523af32f2 | [
"Apache-2.0"
] | 11 | 2020-10-09T09:53:50.000Z | 2021-12-06T16:14:51.000Z | python/federatedml/util/label_transform.py | JavaGreenHands/FATE | ea1e94b6be50c70c354d1861093187e523af32f2 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright 2021 The FATE Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
import numpy as np
from federatedml.model_base import Metric, MetricMeta
from federatedml.model_base import ModelBase
from federatedml.param.label_transform_param import LabelTransformParam
from federatedml.protobuf.generated import label_transform_meta_pb2, label_transform_param_pb2
from federatedml.statistic.data_overview import get_label_count, get_predict_result_labels
from federatedml.util import LOGGER
class LabelTransformer(ModelBase):
def __init__(self):
super().__init__()
self.model_param = LabelTransformParam()
self.metric_name = "label_transform"
self.metric_namespace = "train"
self.metric_type = "LABEL_TRANSFORM"
self.model_param_name = 'LabelTransformParam'
self.model_meta_name = 'LabelTransformMeta'
self.weight_mode = None
self.encoder_key_type = None
self.encoder_value_type = None
self.label_encoder = None
self.label_list = None
def _init_model(self, params):
self.model_param = params
self.label_encoder = params.label_encoder
self.label_list = params.label_list
self.need_run = params.need_run
def update_label_encoder(self, data):
if self.label_encoder is not None:
LOGGER.info(f"label encoder provided")
if self.label_list is not None:
LOGGER.info(f"label list provided")
self.encoder_key_type = {str(v): type(v).__name__ for v in self.label_list}
else:
data_type = data.schema.get("content_type")
if data_type is None:
label_count = get_label_count(data)
labels = sorted(label_count.keys())
# predict result
else:
labels = sorted(get_predict_result_labels(data))
self.label_encoder = dict(zip(labels, range(len(labels))))
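            # e.g. sorted labels [-1, 1] produce the encoder {-1: 0, 1: 1}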
if self.encoder_key_type is None:
self.encoder_key_type = {str(k): type(k).__name__ for k in self.label_encoder.keys()}
self.encoder_value_type = {str(k): type(v).__name__ for k, v in self.label_encoder.items()}
self.label_encoder = {load_value_to_type(k, self.encoder_key_type[str(k)]): v for k, v in self.label_encoder.items()}
def _get_meta(self):
meta = label_transform_meta_pb2.LabelTransformMeta(
need_run=self.need_run
)
return meta
def _get_param(self):
label_encoder = self.label_encoder
if self.label_encoder is not None:
label_encoder = {str(k): str(v) for k, v in self.label_encoder.items()}
param = label_transform_param_pb2.LabelTransformParam(
label_encoder=label_encoder,
encoder_key_type=self.encoder_key_type,
encoder_value_type=self.encoder_value_type)
return param
def export_model(self):
meta_obj = self._get_meta()
param_obj = self._get_param()
result = {
self.model_meta_name: meta_obj,
self.model_param_name: param_obj
}
self.model_output = result
return result
def load_model(self, model_dict):
meta_obj = list(model_dict.get('model').values())[0].get(self.model_meta_name)
param_obj = list(model_dict.get('model').values())[0].get(self.model_param_name)
self.need_run = meta_obj.need_run
self.encoder_key_type = param_obj.encoder_key_type
self.encoder_value_type = param_obj.encoder_value_type
self.label_encoder = {
load_value_to_type(k, self.encoder_key_type[k]): load_value_to_type(v, self.encoder_value_type[k])
for k, v in param_obj.label_encoder.items()
}
return
def callback_info(self):
metric_meta = MetricMeta(name='train',
metric_type=self.metric_type,
extra_metas={
"label_encoder": self.label_encoder
})
self.callback_metric(metric_name=self.metric_name,
metric_namespace=self.metric_namespace,
metric_data=[Metric(self.metric_name, 0)])
self.tracker.set_metric_meta(metric_namespace=self.metric_namespace,
metric_name=self.metric_name,
metric_meta=metric_meta)
@staticmethod
def replace_instance_label(instance, label_encoder):
new_instance = copy.deepcopy(instance)
label_replace_val = label_encoder.get(instance.label)
if label_replace_val is None:
raise ValueError(f"{instance.label} not found in given label encoder")
new_instance.label = label_replace_val
return new_instance
@staticmethod
def replace_predict_label(predict_inst, label_encoder):
transform_predict_inst = copy.deepcopy(predict_inst)
true_label, predict_label, predict_score, predict_detail, result_type = transform_predict_inst.features
true_label_replace_val, predict_label_replace_val = label_encoder.get(true_label), label_encoder.get(predict_label)
if true_label_replace_val is None:
raise ValueError(f"{true_label_replace_val} not found in given label encoder")
if predict_label_replace_val is None:
raise ValueError(f"{predict_label_replace_val} not found in given label encoder")
label_encoder_detail = {str(k): v for k, v in label_encoder.items()}
predict_detail = {label_encoder_detail[label]: score for label, score in predict_detail.items()}
transform_predict_inst.features = [true_label_replace_val, predict_label_replace_val, predict_score,
predict_detail, result_type]
return transform_predict_inst
@staticmethod
def replace_predict_label_cluster(predict_inst, label_encoder):
transform_predict_inst = copy.deepcopy(predict_inst)
true_label, predict_label = transform_predict_inst.features[0], transform_predict_inst.features[1]
true_label, predict_label = label_encoder[true_label], label_encoder[predict_label]
transform_predict_inst.features = [true_label, predict_label]
return transform_predict_inst
@staticmethod
def transform_data_label(data, label_encoder):
data_type = data.schema.get("content_type")
if data_type == "cluster_result":
return data.mapValues(lambda v: LabelTransformer.replace_predict_label_cluster(v, label_encoder))
elif data_type == "predict_result":
predict_detail = data.first()[1].features[3]
            if len(predict_detail) == 1 and list(predict_detail.keys())[0] == "label":
LOGGER.info(f"Regression prediction result provided. Original data returned.")
return data
return data.mapValues(lambda v: LabelTransformer.replace_predict_label(v, label_encoder))
elif data_type is None:
return data.mapValues(lambda v: LabelTransformer.replace_instance_label(v, label_encoder))
else:
raise ValueError(f"unknown data type: {data_type} encountered. Label transform aborted.")
def transform(self, data):
LOGGER.info(f"Enter Label Transformer Transform")
if self.label_encoder is None:
raise ValueError(f"Input Label Encoder is None. Label Transform aborted.")
label_encoder = self.label_encoder
data_type = data.schema.get("content_type")
# revert label encoding if predict result
if data_type is not None:
label_encoder = dict(zip(self.label_encoder.values(), self.label_encoder.keys()))
result_data = LabelTransformer.transform_data_label(data, label_encoder)
result_data.schema = data.schema
self.callback_info()
return result_data
def fit(self, data):
LOGGER.info(f"Enter Label Transform Fit")
self.update_label_encoder(data)
result_data = LabelTransformer.transform_data_label(data, self.label_encoder)
result_data.schema = data.schema
self.callback_info()
return result_data
# also used in feature imputation, to be moved to common util
def load_value_to_type(value, value_type):
if value is None:
loaded_value = None
elif value_type in ["int", "int64", "long", "float", "float64", "double"]:
loaded_value = getattr(np, value_type)(value)
elif value_type in ["str", "_str"]:
loaded_value = str(value)
else:
raise ValueError(f"unknown value type: {value_type}")
return loaded_value | 43.869767 | 125 | 0.674512 | 1,202 | 9,432 | 4.99584 | 0.167221 | 0.103913 | 0.050624 | 0.02398 | 0.43164 | 0.358035 | 0.274438 | 0.222148 | 0.172523 | 0.119234 | 0 | 0.00365 | 0.244699 | 9,432 | 215 | 126 | 43.869767 | 0.839276 | 0.08768 | 0 | 0.152439 | 0 | 0 | 0.079898 | 0.00594 | 0 | 0 | 0 | 0 | 0 | 1 | 0.091463 | false | 0 | 0.04878 | 0 | 0.231707 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b190f92b38f263a01220cb020ab62c05fa2af5c2 | 1,860 | py | Python | sim_utils/make_condor_job_script.py | kaiwen-kakuiii/metadetect-sims | a0fd133ca5bc946c6ce769e8657ef2ce10226953 | [
"BSD-3-Clause"
] | 2 | 2021-07-12T09:41:51.000Z | 2022-01-27T08:13:33.000Z | sim_utils/make_condor_job_script.py | kaiwen-kakuiii/metadetect-sims | a0fd133ca5bc946c6ce769e8657ef2ce10226953 | [
"BSD-3-Clause"
] | 6 | 2019-04-04T23:53:27.000Z | 2021-07-30T11:35:20.000Z | sim_utils/make_condor_job_script.py | kaiwen-kakuiii/metadetect-sims | a0fd133ca5bc946c6ce769e8657ef2ce10226953 | [
"BSD-3-Clause"
] | 2 | 2020-10-30T18:14:29.000Z | 2021-07-22T16:34:56.000Z | #!/usr/bin/env python
import os
PREAMBLE = """\
#
# always keep these at the defaults
#
Universe = vanilla
kill_sig = SIGINT
+Experiment = "astro"
# copy env. variables to the job
GetEnv = True
#
# options below you can change safely
#
# don't send email
Notification = Never
# Run this executable. executable bits must be set
Executable = {script_name}
# A guess at the memory usage, including virtual memory
Image_Size = 1000000
# this restricts the jobs to use the shared pool
# Do this if your job will exceed 2 hours
#requirements = (cpu_experiment == "sdcc")
# each time the Queue command is called, it makes a new job
# and sends the last specified arguments. job_name will show
# up if you use the condortop job viewer
"""
try:
from config import N_PATCHES_PER_JOB
except Exception:
N_PATCHES_PER_JOB = 200
def _append_job(fp, num, output_dir):
fp.write("""\
+job_name = "sim-{num:05d}"
Arguments = {n_patches} {num} {output_dir}
Queue
""".format(n_patches=N_PATCHES_PER_JOB, num=num, output_dir=output_dir))
try:
from config import N_PATCHES as n_patches
except Exception:
n_patches = 10_000_000
n_jobs = n_patches // N_PATCHES_PER_JOB
n_jobs_per_script = 500
n_scripts = n_jobs // n_jobs_per_script
cwd = os.path.abspath(os.getcwd())
try:
os.makedirs(os.path.join(cwd, 'outputs'))
except Exception:
pass
try:
os.makedirs(os.path.join(cwd, 'outputs', 'logs'))
except Exception:
pass
script_name = os.path.join(cwd, "job_condor.sh")
output_dir = os.path.join(cwd, "outputs")
script = PREAMBLE.format(script_name=script_name)
job_ind = 1
for snum in range(n_scripts):
with open('condor_job_%05d.desc' % snum, 'w') as fp:
fp.write(script)
        for num in range(job_ind, job_ind + n_jobs_per_script):
_append_job(fp, num, output_dir)
    job_ind += n_jobs_per_script
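# Each description file can then be submitted on its own (assuming a standard
# HTCondor setup), e.g.: condor_submit condor_job_00000.desc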
| 22.142857 | 72 | 0.694624 | 291 | 1,860 | 4.257732 | 0.460481 | 0.064568 | 0.035513 | 0.045198 | 0.185634 | 0.169492 | 0.053269 | 0.053269 | 0 | 0 | 0 | 0.024275 | 0.202688 | 1,860 | 83 | 73 | 22.409639 | 0.811194 | 0.010753 | 0 | 0.16129 | 0 | 0 | 0.475258 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.016129 | false | 0.032258 | 0.048387 | 0 | 0.064516 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b1922fa29738afd1530f715a217458e365e06fd8 | 851 | py | Python | class_data.py | m-ymn/Python-Apps | 09d1ff8de554902a5fcdc34a868eab31df0c28f7 | [
"MIT"
] | 1 | 2020-03-09T18:16:13.000Z | 2020-03-09T18:16:13.000Z | class_data.py | m-ymn/Python-Apps | 09d1ff8de554902a5fcdc34a868eab31df0c28f7 | [
"MIT"
] | null | null | null | class_data.py | m-ymn/Python-Apps | 09d1ff8de554902a5fcdc34a868eab31df0c28f7 | [
"MIT"
] | null | null | null |
class Person:
    def __init__(self, name, age, num1, num2):
        self.name = name
        self.age = age
        self.num1 = num1
        self.num2 = num2
class Person2:
    def __init__(self, name, age, num1, num2):
        self.name = name
        self.age = age
        self.num1 = num1
        self.num2 = num2
p1 = Person("ymn", 10, 20.3, 44.765)
p2 = Person2("tbb", 24, 10, 250.45)
l1 = [p1, p2, Person("abb", 1.0, 0.41, 5564)]
def avg_sum(ln):
    sum1 = 0
    for obj1 in ln:
        obj_d = vars(obj1)  # dict of attribute name -> value for this object
        l2 = list(obj_d.values())
        print(l2)
        for obj in l2:
            # only numeric attributes contribute to the sum
            if isinstance(obj, (int, float, complex)):
                sum1 += obj
    print(sum1)
avg_sum(l1)
| 19.790698 | 87 | 0.564042 | 129 | 851 | 3.627907 | 0.449612 | 0.068376 | 0.047009 | 0.064103 | 0.307692 | 0.307692 | 0.307692 | 0.307692 | 0.307692 | 0.307692 | 0 | 0.098007 | 0.292597 | 851 | 42 | 88 | 20.261905 | 0.679402 | 0.130435 | 0 | 0.384615 | 0 | 0 | 0.012329 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.115385 | false | 0 | 0 | 0 | 0.192308 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b1948cde186fadcc50dcc26df5dd51fb0a848ec5 | 6,480 | py | Python | src/helix_trajectory.py | RobertMilijas/uav_ros_general | d9ef09e2da4802891296a9410e5031675c09c066 | [
"BSD-3-Clause"
] | 1 | 2021-12-20T13:43:22.000Z | 2021-12-20T13:43:22.000Z | src/helix_trajectory.py | RobertMilijas/uav_ros_general | d9ef09e2da4802891296a9410e5031675c09c066 | [
"BSD-3-Clause"
] | 4 | 2020-12-21T15:15:13.000Z | 2021-06-16T10:40:29.000Z | src/helix_trajectory.py | RobertMilijas/uav_ros_general | d9ef09e2da4802891296a9410e5031675c09c066 | [
"BSD-3-Clause"
] | 2 | 2021-02-15T13:35:18.000Z | 2021-05-24T12:44:18.000Z | #!/usr/bin/env python
import copy
import math
import sys, os
import time
from geometry_msgs.msg import Pose, Point, Transform, Twist
from nav_msgs.msg import Path
from std_msgs.msg import String
from visualization_msgs.msg import Marker
import rospy
from topp_ros.srv import GetHelixPoints, GetHelixPointsResponse, \
GetHelixPointsRequest, GenerateTrajectory, GenerateTrajectoryRequest, \
GenerateTrajectoryResponse
from trajectory_msgs.msg import JointTrajectory, JointTrajectoryPoint, \
MultiDOFJointTrajectory, MultiDOFJointTrajectoryPoint
class HelicalTrajectory():
""" Generate and execute a helical trajectory.
To avoid discontinuities, also generate a linear trajectory from
the current carrot reference to the start of the helix.
Requires the following scripts from the topp_ros package:
get_helix_points.py
generate_toppra_trajectory.py
"""
def __init__(self):
# Helix parameters
self.r = 5
self.angleStep = 1
self.x0 = 0.0
self.y0 = 0.0
self.z0 = 2.0
self.zf = 5.0
self.deltaZ = 0.2
self.velocities = 1.00
self.accelerations = 0.5
self.trajectory_sampling_period = 0.01
# Define services
self.trajectory_type = rospy.get_param("~trajectory_type",
"generate_toppra_trajectory")
self.request_trajectory_service = rospy.ServiceProxy(
self.trajectory_type, GenerateTrajectory)
self.get_helix_points_service = rospy.ServiceProxy(
"get_helix_points", GetHelixPoints)
# Define publishers
self.trajectory_pub = rospy.Publisher('helix/trajectory',
MultiDOFJointTrajectory, queue_size=1)
self.trajectory_point_pub = rospy.Publisher('position_hold/trajectory',
MultiDOFJointTrajectoryPoint, queue_size=1)
# Subscribe to carrot
self.carrot_sub = rospy.Subscriber('carrot/trajectory',
MultiDOFJointTrajectoryPoint, self.CarrotCb)
self.carrot_status_sub = rospy.Subscriber('carrot/status', String, self.StatusCb)
# Initialize flags
self.carrot_received = False
self.carrot_status = String()
self.executing_trajectory = False
self.trajectory_generated = False
def JointTrajectory2MultiDofTrajectory(self, joint_trajectory):
multi_dof_trajectory = MultiDOFJointTrajectory()
for i in range(0, len(joint_trajectory.points)):
temp_point = MultiDOFJointTrajectoryPoint()
temp_transform = Transform()
temp_transform.translation.x = joint_trajectory.points[i].positions[0]
temp_transform.translation.y = joint_trajectory.points[i].positions[1]
temp_transform.translation.z = joint_trajectory.points[i].positions[2]
temp_transform.rotation.w = 1.0
temp_vel = Twist()
temp_vel.linear.x = joint_trajectory.points[i].velocities[0]
temp_vel.linear.y = joint_trajectory.points[i].velocities[1]
temp_vel.linear.z = joint_trajectory.points[i].velocities[2]
temp_acc = Twist()
temp_acc.linear.x = joint_trajectory.points[i].accelerations[0]
temp_acc.linear.y = joint_trajectory.points[i].accelerations[1]
temp_acc.linear.z = joint_trajectory.points[i].accelerations[2]
temp_point.transforms.append(temp_transform)
temp_point.velocities.append(temp_vel)
temp_point.accelerations.append(temp_acc)
multi_dof_trajectory.points.append(temp_point)
return multi_dof_trajectory
def StatusCb(self, data):
if data.data != self.carrot_status:
rospy.loginfo("Carrot status changed from %s to %s", self.carrot_status, data.data)
self.carrot_status = data.data
def CarrotCb(self, data):
if self.carrot_received == False and self.carrot_status == "HOLD":
self.carrot_data = data
self.carrot_received = True
rospy.logwarn("Carrot received in position hold mode. Generating trajectory")
# Set up helical trajectory
helix_request = GetHelixPointsRequest()
helix_request.r = self.r
helix_request.angleStep = self.angleStep
helix_request.x0 = self.x0
helix_request.y0 = self.y0
helix_request.z0 = self.z0
helix_request.zf = self.zf
helix_request.deltaZ = self.deltaZ
# Get the points
helix_response = self.get_helix_points_service(helix_request)
# Prepend the current carrot position
first_point = JointTrajectoryPoint()
initial_pose = data.transforms[0].translation
first_point.positions = [initial_pose.x, initial_pose.y, initial_pose.z, 0]
helix_response.helix_points.points.insert(0, first_point)
# GetHelixPoints just returns the points. The dynamical part must be
# provided by the user.
dof = len(helix_response.helix_points.points[0].positions)
helix_response.helix_points.points[0].velocities = [self.velocities]*dof
helix_response.helix_points.points[0].accelerations = [self.accelerations]*dof
# Now call the trajectory generation service
trajectory_request = GenerateTrajectoryRequest()
trajectory_request.waypoints = helix_response.helix_points
trajectory_request.sampling_frequency = 100
trajectory_response = self.request_trajectory_service(trajectory_request)
# Repack the received trajectory to MultiDof
self.multi_dof_trajectory = self.JointTrajectory2MultiDofTrajectory(
trajectory_response.trajectory)
self.trajectory_generated = True
rospy.loginfo("Trajectory generated")
self.trajectory_pub.publish(self.multi_dof_trajectory)
def Run(self):
while not rospy.is_shutdown():
            if self.trajectory_generated:
                if not self.executing_trajectory:
                    self.executing_trajectory = True
                    rospy.loginfo("Trajectory execution started")
                # Guard against popping from an exhausted trajectory once all
                # points have been published.
                if self.multi_dof_trajectory.points:
                    self.trajectory_point_pub.publish(
                        self.multi_dof_trajectory.points.pop(0))
            rospy.sleep(self.trajectory_sampling_period)
if __name__ == "__main__":
rospy.init_node("quad_helix_trajectory_node")
a = HelicalTrajectory()
a.Run()
| 37.674419 | 93 | 0.678086 | 720 | 6,480 | 5.8875 | 0.252778 | 0.045294 | 0.04954 | 0.046709 | 0.152866 | 0.078084 | 0 | 0 | 0 | 0 | 0 | 0.011247 | 0.24537 | 6,480 | 171 | 94 | 37.894737 | 0.855624 | 0.098611 | 0 | 0 | 0 | 0 | 0.053488 | 0.013156 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045045 | false | 0 | 0.099099 | 0 | 0.162162 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b194d8704816a0157d770f0945bf947a71c4526d | 6,737 | py | Python | wanderbits/things.py | Who8MyLunch/WanderBits | 058685971f5ab2083c9fdd7bd2eba960c2ae5992 | [
"MIT"
] | null | null | null | wanderbits/things.py | Who8MyLunch/WanderBits | 058685971f5ab2083c9fdd7bd2eba960c2ae5992 | [
"MIT"
] | 1 | 2018-01-13T20:53:38.000Z | 2018-01-13T20:53:38.000Z | wanderbits/things.py | Who8MyLunch/WanderBits | 058685971f5ab2083c9fdd7bd2eba960c2ae5992 | [
"MIT"
] | null | null | null | #!/usr/bin/python
from __future__ import division, print_function, unicode_literals
"""
Things class for WanderBits, a text-based adventure game.
"""
import abc
import errors
# Helpers
def find_thing(many_things, name):
"""
Find a matching Thing by name.
"""
if not isinstance(name, basestring):
msg = 'name must be a string: {:s}'.format(str(name))
raise errors.ThingError(msg)
for t in many_things:
if t.name.lower() == name.lower():
return t
msg = 'Unable to find matching Thing: {:s}'.format(name)
raise errors.FindThingError(msg)
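# Example (illustrative): find_thing([lamp, key], 'Lamp') returns the lamp
# Thing, matching case-insensitively; a miss raises FindThingError.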
#################################################
class Thing(object):
"""
Things class for WanderBits, a text-based adventure game.
This class is a base class. Inherit from this class to implement
a particular game item.
"""
__metaclass__ = abc.ABCMeta
@abc.abstractmethod
def __init__(self, **kwargs):
"""
Initialize Thing class.
Each kind of game item needs to be implemented as a subclass of
the Thing base class.
"""
base_property_keys = ['name', 'description']
self._properties = {}
self.update_properties(base_property_keys, kwargs)
# Things are able to contain other Things.
self._container = []
# Which Thing contains the current Thing.
self._parent = None
def __repr__(self):
return 'Thing [{:s}]'.format(self.name)
def update_properties(self, property_keys, mapping):
"""Update this Thing's inherent property values.
"""
for k in property_keys:
try:
self._properties[k] = mapping[k]
except KeyError:
print(k)
raise
@property
def name(self):
"""This Thing's characteristic name.
"""
return self._properties['name']
@property
def kind(self):
"""This Thing's characteristic kind of thing.
"""
return self._properties['kind']
@property
def description(self):
"""This Thing's description.
"""
return self._properties['description']
@property
def size(self):
"""This Thing's physical size.
"""
try:
return self._properties['size']
except KeyError:
return 0
            # msg = "Thing hasn't a size: {:s}".format(self.name)
            # raise errors.ThingError(msg)
@property
def capacity(self):
"""This Thing's physical size.
"""
try:
return self._properties['capacity']
except KeyError:
return 0
            # msg = "Thing hasn't a capacity: {:s}".format(self.name)
            # raise errors.ThingError(msg)
@property
def parent(self):
"""Another Thing that contains self.
"""
return self._parent
@parent.setter
def parent(self, value):
if isinstance(value, Thing) or value is None:
# TODO: I don't like having None here as a valid input.
self._parent = value
else:
msg = 'Parent must be a Thing: {:s}'.format(str(value))
raise errors.ThingError(msg)
def add(self, obj):
"""Place new object inside oneself.
"""
if not isinstance(obj, Thing):
msg = 'Object must be a Thing: {:s}'.format(str(obj))
raise errors.ThingError(msg)
if obj in self._container:
            msg = '{:s} already contains {:s}'.format(str(self), str(obj))
raise errors.ThingError(msg)
if self.available_space < obj.size:
            msg = 'Not enough room in {:s} to contain {:s}'.format(
                str(self), str(obj))
raise errors.ThingError(msg)
        # Add to container, update its parent.
self._container.append(obj)
obj.parent = self
def remove(self, obj):
"""Remove object from oneself.
"""
try:
# Remove from container, remove self as parent.
self._container.remove(obj)
obj.parent = None
except ValueError:
            msg = '{:s} does not contain {:s}'.format(str(self), str(obj))
raise errors.ThingError(msg)
@property
def container(self):
"""A list of Things contained by this Thing.
"""
return self._container
@property
def available_space(self):
"""Amount of space inside this Thing available for storing more Things.
"""
contained_size = 0
for T in self._container:
contained_size += T.size
return self.capacity - contained_size
#################################################
#################################################
# nice discussion that clarifies inheriting from an abstract class and
# also using super():
# http://pymotw.com/2/abc/#concrete-methods-in-abcs
class Room(Thing):
"""Room object.
"""
property_keys = ['connections', 'size', 'capacity']
def __init__(self, **kwargs):
super(Room, self).__init__(**kwargs)
self.update_properties(self.property_keys, kwargs)
self.update_properties(['kind'], {'kind': 'room'})
@property
def connections(self):
"""
Mapping to other rooms.
"""
return self._properties['connections']
#################################################
class Item(Thing):
"""Item object.
"""
property_keys = ['size', 'capacity']
def __init__(self, **kwargs):
super(Item, self).__init__(**kwargs)
self.update_properties(self.property_keys, kwargs)
self.update_properties(['kind'], {'kind': 'item'})
#################################################
class User(Thing):
"""User object.
"""
property_keys = ['size', 'capacity']
def __init__(self, **kwargs):
super(User, self).__init__(**kwargs)
self.update_properties(self.property_keys, kwargs)
self.update_properties(['kind'], {'kind': 'user'})
@property
def local_things(self):
"""
Return list of Things that are nearby.
These are Things that may be either physically manipulated or observed.
This includes the current room, Things in the room, Things held
by the user. Does not include Things inside Things held
by the user.
"""
# User should be contained by a room.
room = self.parent
# List of things.
things = [room] + room.container + self.container
# Remove self from list.
# things.remove(self)
return things
#################################################
if __name__ == '__main__':
pass
| 26.840637 | 79 | 0.556776 | 745 | 6,737 | 4.90604 | 0.22953 | 0.019152 | 0.045964 | 0.052531 | 0.306977 | 0.264843 | 0.253352 | 0.232011 | 0.221614 | 0.151573 | 0 | 0.000841 | 0.293751 | 6,737 | 250 | 80 | 26.948 | 0.767339 | 0.262728 | 0 | 0.290598 | 0 | 0 | 0.084785 | 0 | 0 | 0 | 0 | 0.004 | 0 | 1 | 0.17094 | false | 0.008547 | 0.025641 | 0.008547 | 0.384615 | 0.017094 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b19a6a78bdd63994716e9aedd33658a47903076d | 2,038 | py | Python | Source/Chapter3/02 kmeans.py | irmoralesb/MLForDevsBook | 4e990d720ef5888525d09d2e27e37a4db21a75db | [
"Unlicense"
] | null | null | null | Source/Chapter3/02 kmeans.py | irmoralesb/MLForDevsBook | 4e990d720ef5888525d09d2e27e37a4db21a75db | [
"Unlicense"
] | null | null | null | Source/Chapter3/02 kmeans.py | irmoralesb/MLForDevsBook | 4e990d720ef5888525d09d2e27e37a4db21a75db | [
"Unlicense"
] | null | null | null | import numpy as np
import matplotlib.pyplot as plt
samples = np.array([[1, 2], [12, 2], [0, 1], [10, 0], [9, 1], [8, 2], [0, 10], [1, 8], [2, 9], [9, 9], [10, 8], [8, 9]],
                   dtype=float)  # the np.float alias was removed in modern NumPy; builtin float is equivalent
centers = np.array([[3, 2], [2, 6], [9, 3], [7, 6]], dtype=float)
N = len(samples)
fig, ax = plt.subplots()
samples_trans = samples.transpose()
ax.scatter(samples_trans[0], samples_trans[1], marker='o', s=100)
centers_trans = centers.transpose()
ax.scatter(centers_trans[0], centers_trans[1], marker='s', s=100, color='black')
plt.plot()
plt.show()
def distance(sample, centroids):
distances = np.zeros(len(centroids))
for i in range(0, len(centroids)):
dist = np.sqrt(sum(pow(np.subtract(sample, centroids[i]), 2)))
distances[i] = dist
return distances
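# Worked example (hypothetical inputs): distance([1, 2], [[3, 2], [1, 5]])
# gives [2.0, 3.0] -- the Euclidean distance sqrt(sum((sample - c)**2))
# from the sample to each centroid.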
def show_current_status(samples, centers, clusters, plotnumber):
plt.subplot(620 + plotnumber)
samples_transposed = samples.transpose()
plt.scatter(samples_transposed[0], samples_transposed[1], marker='o', s=150, c=clusters)
centers_transposed = centers.transpose()
plt.scatter(centers_transposed[0], centers_transposed[1], marker='s', s=100, color='black')
plt.plot()
plt.show()
def kmeans(centroids, samples, K, plotresults):
plt.figure(figsize=(20, 20))
distances = np.zeros((N, K))
new_centroids = np.zeros((K, 2))
final_centroids = np.zeros((K, 2))
    clusters = np.zeros(len(samples), int)
for i in range(0, len(samples)):
distances[i] = distance(samples[i], centroids)
clusters[i] = np.argmin(distances[i])
new_centroids[clusters[i]] += samples[i]
        divisor = np.bincount(clusters).astype(float)
divisor.resize([K])
for j in range(0, K):
final_centroids[j] = np.nan_to_num(np.divide(new_centroids[j], divisor[j]))
        if i > 3 and plotresults:
show_current_status(samples[:i], final_centroids, clusters[:i], i - 3)
return final_centroids
finalcenters = kmeans(centers, samples, 4, True)
| 35.754386 | 120 | 0.638862 | 298 | 2,038 | 4.285235 | 0.291946 | 0.027408 | 0.018794 | 0.014096 | 0.112764 | 0.084573 | 0.061081 | 0.061081 | 0.061081 | 0.061081 | 0 | 0.043531 | 0.18842 | 2,038 | 56 | 121 | 36.392857 | 0.728537 | 0 | 0 | 0.088889 | 0 | 0 | 0.006869 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.044444 | 0 | 0.155556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b19fb07fbde39a0bbd8982fd7215d6986bd04cd4 | 3,239 | py | Python | csvimport.py | datanobokemono/elasticsearch-py-importer | 2562cbf3ea60390f9c06694e25bb42c61e77075a | [
"Apache-2.0"
] | 3 | 2015-09-16T21:52:11.000Z | 2017-04-20T07:35:32.000Z | csvimport.py | datanobokemono/elasticsearch-py-importer | 2562cbf3ea60390f9c06694e25bb42c61e77075a | [
"Apache-2.0"
] | 1 | 2015-09-18T06:37:59.000Z | 2015-09-18T06:37:59.000Z | csvimport.py | datanobokemono/elasticsearch-py-importer | 2562cbf3ea60390f9c06694e25bb42c61e77075a | [
"Apache-2.0"
] | null | null | null | # datanobokemono
# v 0.1
import datetime
import csv
import json
import sys
import bokuno_console
from elasticsearch import Elasticsearch
from elasticsearch import helpers
# Get arguments as dictionary of parameters as keys and their values
# If the parameter was given without value it will return 0 by default
args = bokuno_console.get_args()
# ------------------------------------------------------------------------------------------------------
# Get the index name
index_name = "" if not "-i" in args else args["-i"]
# Get the type name
type_name = "" if not "-t" in args else args["-t"]
# Setting ID column
id_name = "" if not "-id" in args else args["-id"]
# Getting CSV file name from arguments
filename = ""
if "-f" in args:
filename = args["-f"]
# ------------------------------------------------------------------------------------------------------
# Checking vital parameters
if filename and index_name and type_name and id_name:
# Initialize ElasticSearch
es = Elasticsearch()
# Setting bulk size that defines how many rows will be sent in one request to Elastic
# and how many rows of CSV data will be hold in memory
bulk_size = 1000
if "-bs" in args:
bulk_size = args["-bs"]
bulk_data = []
# Rows processed meter
count = 0
# Set the progress step in order to track the import progress in command line logs.
# Progress step is set by -p parameter
progress_divisor = 0
if "-p" in args:
progress_divisor = bulk_size if args["-p"] == 0 else int(args["-p"])
# Log execution time to optimize it
time_started = datetime.datetime.now()
# Initializing default values for empty row cells from json object in -d parameter
default_values = {}
if "-d" in args:
default_values = json.loads(args["-d"])
# Log beginning of parsing
print("Parsing file %s" % filename)
# Opening the file and processing rows
with open(filename, 'rt', encoding='utf-8') as csv_file:
if "-ch" in args:
csv_data = csv.reader(csv_file)
else:
csv_data = csv.DictReader(csv_file)
for row in csv_data:
count += 1
#TODO: AUTOINCREMENT
# Checking if row has id value
            if row[id_name]:
                id_value = row[id_name]
            else:
                # no id on this row: skip it rather than reuse a stale id_value
                continue
# Setting default values
if "-d" in args:
for k in default_values:
if not row[k]:
row[k] = default_values[k]
# Collecting data as bulk action
action = {
"_index" : index_name,
"_type" : type_name,
"_id" : id_value,
"_source" : row
}
bulk_data.append(action)
# -p in arguments stands for Progress allows to track current row
if progress_divisor and count % progress_divisor == 0:
bokuno_console.update_print ("%d rows processed\r" % count)
if count % bulk_size == 0:
helpers.bulk(es, bulk_data)
bulk_data = []
# Index any data left from CSV file
if len(bulk_data):
helpers.bulk(es, bulk_data)
bulk_data = []
# Outputting number of rows processed
bokuno_console.update_print ("%d rows processed\r" % count)
print("")
# Outputting
time_finished = datetime.datetime.now()
time_took = time_finished - time_started
print("Your operation took %d minutes and %d seconds" % divmod(time_took.days * 86400 + time_took.seconds, 60))
else:
print("ERROR: Check your import settings")
| 26.768595 | 112 | 0.649892 | 466 | 3,239 | 4.399142 | 0.319742 | 0.026341 | 0.013171 | 0.020488 | 0.092683 | 0.092683 | 0.07122 | 0.042927 | 0.042927 | 0 | 0 | 0.00808 | 0.197592 | 3,239 | 120 | 113 | 26.991667 | 0.780685 | 0.372646 | 0 | 0.171875 | 0 | 0 | 0.098951 | 0 | 0 | 0 | 0 | 0.008333 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.125 | 0.09375 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b1a06b1e031671e64fed7dc215a9d9daf4ce49be | 1,443 | py | Python | Ene-Jun-2021/perez-sanchez-jose-jahir/Examen/Ejercicio5/srp_test.py | bryanbalderas/DAS_Sistemas | 1e31f088c0de7134471025a5730b0abfc19d936e | [
"MIT"
] | 41 | 2017-09-26T09:36:32.000Z | 2022-03-19T18:05:25.000Z | Ene-Jun-2021/perez-sanchez-jose-jahir/Examen/Ejercicio5/srp_test.py | bryanbalderas/DAS_Sistemas | 1e31f088c0de7134471025a5730b0abfc19d936e | [
"MIT"
] | 67 | 2017-09-11T05:06:12.000Z | 2022-02-14T04:44:04.000Z | Ene-Jun-2021/perez-sanchez-jose-jahir/Examen/Ejercicio5/srp_test.py | bryanbalderas/DAS_Sistemas | 1e31f088c0de7134471025a5730b0abfc19d936e | [
"MIT"
] | 210 | 2017-09-01T00:10:08.000Z | 2022-03-19T18:05:12.000Z | import unittest
from srp import *
# The unit tests I ran while refactoring
class SrpTest(unittest.TestCase):
def test_string(self):
user = Usuario(
nombre='Ramanujan',
edad=25,
direccion='Calle X, #Y Colonia Z'
)
self.assertEqual(user.serializar("string"),"Nombre: Ramanujan\nEdad: 25\nDireccion: Calle X, #Y Colonia Z")
def test_dic(self):
user = Usuario(
nombre='Ramanujan',
edad=25,
direccion='Calle X, #Y Colonia Z'
)
self.assertEqual(user.serializar("diccionario"),{'nombre': 'Ramanujan', 'edad': 25, 'direccion': 'Calle X, #Y Colonia Z'})
def test_json(self):
user = Usuario(
nombre='Ramanujan',
edad=25,
direccion='Calle X, #Y Colonia Z'
)
self.assertEqual(user.serializar("json"),'{"nombre": "Ramanujan", "edad": 25, "direccion": "Calle X, #Y Colonia Z"}')
def test_html(self):
user = Usuario(
nombre='Ramanujan',
edad=25,
direccion='Calle X, #Y Colonia Z'
)
self.assertEqual(user.serializar("html"), '<table border="1"><tr><th>nombre</th><td>Ramanujan</td></tr><tr><th>edad</th><td>25</td></tr><tr><th>direccion</th><td>Calle X, #Y Colonia Z</td></tr></table>')
if __name__ == "__main__":
unittest.main() | 37 | 211 | 0.554401 | 167 | 1,443 | 4.718563 | 0.281437 | 0.060914 | 0.071066 | 0.142132 | 0.630711 | 0.611675 | 0.611675 | 0.583756 | 0.583756 | 0.583756 | 0 | 0.016602 | 0.290367 | 1,443 | 39 | 212 | 37 | 0.75293 | 0.038808 | 0 | 0.484848 | 0 | 0.060606 | 0.356164 | 0.084355 | 0 | 0 | 0 | 0 | 0.121212 | 1 | 0.121212 | false | 0 | 0.060606 | 0 | 0.212121 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b1a306a05e3e59889eb9d8e002dd241eb43261b7 | 3,035 | py | Python | main.py | animesh-srivastava/twitter-spam-detection | e5e8ce04a1a62ef9d82d21564b16276bdc6673eb | [
"MIT"
] | null | null | null | main.py | animesh-srivastava/twitter-spam-detection | e5e8ce04a1a62ef9d82d21564b16276bdc6673eb | [
"MIT"
] | null | null | null | main.py | animesh-srivastava/twitter-spam-detection | e5e8ce04a1a62ef9d82d21564b16276bdc6673eb | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Thu Apr 9 16:50:44 2020
@author: animesh-srivastava
"""
#%% Importing the modules
import os
import numpy as np
import pandas as pd
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split, KFold
from sklearn.preprocessing import LabelEncoder, StandardScaler
from keras.models import Sequential, load_model
from keras.layers import Dense, Dropout
#%% Importing and cleaning the data
df = pd.read_csv("train.csv")
df.drop(["Id","Tweet","location","Unnamed: 8","Unnamed: 9","Unnamed: 10","Unnamed: 11"],axis=1,inplace=True)
df.dropna(inplace=True)
df = df[df["following"].apply(lambda x: x.isnumeric())]
df = df[df["followers"].apply(lambda x: x.isnumeric())]
df = df[df["actions"].apply(lambda x: x.isnumeric())]
df = df[df["is_retweet"].apply(lambda x: x.isnumeric())]
df = df[df["Type"].apply(lambda x: x=="Quality" or x=="Spam")]
#%% Label Encoder on the final column
enc = LabelEncoder()
df["Type"] = enc.fit_transform(df["Type"])
#%% Defining the input and the target values
x = df.iloc[:,0:4].values
y = df.iloc[:,4:5].values
sc = StandardScaler()
x = sc.fit_transform(x)
#%% Defining the model
def evaluate_model(x_train,x_test,y_train,y_test,count):
model = Sequential()
model.add(Dense(16,activation='relu',input_shape=[4]))
model.add(Dense(32,activation='relu'))
model.add(Dense(16,activation='relu'))
model.add(Dense(8,activation='relu'))
model.add(Dense(4,activation='relu'))
model.add(Dense(1,activation='sigmoid'))
model.compile(optimizer='adam',loss='binary_crossentropy',metrics=['accuracy'])
history = model.fit(x_train,y_train,batch_size = 5, epochs = 10,validation_data=(x_test,y_test))
if not os.path.isdir("./weights/"):
os.mkdir("./weights/")
model.save("./weights/model_part_"+str(count)+".hdf5")
val, val_acc = model.evaluate(x_test, y_test)
return model, val_acc
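# Note (illustrative): the sigmoid output yields probabilities in (0, 1);
# the 0.5 threshold used below maps them to the LabelEncoder classes, which
# are assigned alphabetically ("Quality" -> 0, "Spam" -> 1).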
#%% K-fold cross validation
acc,model_list = list(),list()
n_folds = 10
count = 1
kfold = KFold(n_folds,shuffle=True)
for train,test in kfold.split(x,y):
print("Running k fold cross validation training with k = "+str(n_folds)+". The Current count is "+str(count))
model, val_acc = evaluate_model(x[train],x[test],y[train],y[test],count)
count=count+1
acc.append(val_acc)
model_list.append(model)
#%% Measuring the accuracy
accuracy = np.mean(acc)
print(f'Accuracy is {accuracy*100}% (std {np.std(acc)*100})')
#%%
x_train,x_test,y_train,y_test = train_test_split(x,y,test_size=1./n_folds)
y_pred = model.predict(x_test)
y_pred = (y_pred>0.5)
#%%
cm = confusion_matrix(y_test, y_pred)
print(cm)
#%%
tdf = pd.read_csv('test.csv')
x_val = tdf.iloc[:,2:6].values
x_val = sc.transform(x_val)
y_val = model.predict(x_val)
tdf["Type"] = y_val>0.5
tdf["Type"].replace({True: 1, False: 0},inplace=True)
tdf["Confidence"] = y_val*100
tdf["Type"] = enc.inverse_transform(tdf["Type"])
tdf.to_csv("predicted.csv")
| 36.566265 | 114 | 0.683031 | 483 | 3,035 | 4.165631 | 0.331263 | 0.019881 | 0.017893 | 0.032306 | 0.175447 | 0.135189 | 0.106362 | 0.106362 | 0.039761 | 0.039761 | 0 | 0.022657 | 0.14201 | 3,035 | 82 | 115 | 37.012195 | 0.75 | 0.099835 | 0 | 0 | 0 | 0 | 0.151596 | 0.007979 | 0 | 0 | 0 | 0 | 0 | 1 | 0.015873 | false | 0 | 0.126984 | 0 | 0.15873 | 0.047619 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b1a307e271622f31efa3c1b0d29a930531a60035 | 12,552 | py | Python | datasimple/xl.py | CraigKelly/datasimple | 1458149f789b7aeb0e2d7886bc9ba5fd5600d700 | [
"Apache-2.0"
] | 1 | 2018-05-29T18:12:13.000Z | 2018-05-29T18:12:13.000Z | datasimple/xl.py | CraigKelly/datasimple | 1458149f789b7aeb0e2d7886bc9ba5fd5600d700 | [
"Apache-2.0"
] | null | null | null | datasimple/xl.py | CraigKelly/datasimple | 1458149f789b7aeb0e2d7886bc9ba5fd5600d700 | [
"Apache-2.0"
] | null | null | null | """Provide helpers for Excel files (using openpyxl)."""
import glob
import os.path as pth
import sys
from argparse import ArgumentParser
from collections import ChainMap
from contextlib import closing
from datetime import datetime
from openpyxl import load_workbook, Workbook
from openpyxl.styles import NamedStyle, Font, PatternFill
from .core import norm_ws, kv, read_config
from .cli import log
# Luckily the functionality we want to already out there
globmatch = glob.fnmatch.fnmatch
def _val(cell):
v = cell.value
if v is None:
return ''
t = type(v)
if t is str:
if v and v[0] == "'" and v.find("'", 1) < 0:
v = v[1:] # starts with ' and doesn't have a match: old excel "force string" method
return norm_ws(v)
elif t is int:
return v
elif t is float:
return v
elif t is datetime:
return v
else:
return norm_ws(repr(v))
# This is generally just for us, but some might find it useful
def load_ro_workbook(xlsx_file):
"""Read a workbook opened read-only for fast access."""
return load_workbook(filename=xlsx_file, read_only=True, data_only=True)
def ws_sheet_names(xlsx_file, log_on_open=True):
"""Return the sheet names in the XLSX file."""
if log_on_open:
log('SCAN: [!c]{:s}[!/c]', xlsx_file)
with closing(load_ro_workbook(xlsx_file)) as wb:
return [str(i) for i in wb.sheetnames]
def ws_scan_raw(xlsx_file, sheet_name, log_on_open=True):
"""Iterator for every row in sheet_name in xlsx_file."""
if log_on_open:
log('OPEN: [!c]{:s} => {:s}[!/c]', xlsx_file, sheet_name)
with closing(load_ro_workbook(xlsx_file)) as wb:
ws = wb[sheet_name]
for row in ws:
yield [_val(cell) for cell in row]
def ws_scan(xlsx_file, sheet_name, log_on_open=True):
"""Iterator for every row in sheet_name in xlsx_file, returned as dict."""
headers = None
for vals in ws_scan_raw(xlsx_file, sheet_name, log_on_open=log_on_open):
if not headers:
headers = vals
continue
yield dict((k, v) for k, v in zip(headers, vals) if k)
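# e.g. for a sheet whose header row is ['id', 'name'] and whose first data
# row is [1, 'widget'], ws_scan yields {'id': 1, 'name': 'widget'}; columns
# with an empty header are dropped by the `if k` guard.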
def add_named_style(
wb,
name,
number_format=None,
font_name='Calibri',
font_size=10,
font_color='FF000000',
fill=None
):
"""Add a named style to the open workbook.
Already named styles are ignored. See add_default_styles for example usage.
"""
if name in wb.named_styles:
return
sty = NamedStyle(name=name)
sty.font = Font(name=font_name, size=font_size, color=font_color)
if fill:
sty.fill = fill
if number_format:
sty.number_format = number_format
wb.add_named_style(sty)
def add_default_styles(wb):
"""Add default styles we use in some tools"""
add_named_style(wb, 'IMHeader', font_color='FFFFFFFF', fill=PatternFill('solid', fgColor='FF4F81BD'))
add_named_style(wb, 'IMNormal')
add_named_style(wb, 'IMInt', number_format='0')
add_named_style(wb, 'IMFloat', number_format='0.0000')
add_named_style(wb, 'IMComma', number_format='#,##0')
add_named_style(wb, 'IMPercent', number_format='0.000%')
add_named_style(wb, 'IMCurrency', number_format='_($* #,##0_);_($* (#,##0);_($* "-"??_);_(@_)')
add_named_style(wb, 'IMDate', number_format='m/d/yy')
def ics_report_params(xlsx_file, sheet_name='Cover'):
"""Return a dictionary of report parameters for an ICS batch system report.
    If a parameter appears more than once, the values are returned as a list."""
parms = dict()
for vals in ws_scan(xlsx_file, sheet_name):
val = vals['Parms']
k, v = kv(val)
if not k:
continue # invalid line
if k in parms:
# We have a multi-value parameter
if type(parms[k]) is not list:
parms[k] = [parms[k]]
parms[k].append(v)
else:
# Just a normal, single val parameter
parms[k] = v
return parms
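# Example (sketch; cell values are hypothetical, and kv() is assumed to split
# 'key: value' lines): a Cover sheet whose 'Parms' column contains
# 'Region: EMEA', 'Site: 12' and 'Site: 99' yields
# {'Region': 'EMEA', 'Site': ['12', '99']}.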
def _int(v):
"""Convert to int for excel, but default to original value."""
try:
if v is None or v == '':
return ''
if type(v) is str:
# we handle strings like '2,345.00'
return int(float(v.replace(',', '')))
return int(v)
except ValueError:
return v
def _acct(v):
"""Convert to accounting (float) for excel, but default to original value."""
try:
if v is None or v == '':
return ''
return float(v)
except ValueError:
return v
class ValueMapper(object):
"""Support column-name-based value casting and cell formatting.
After creation, call create_mapper with a sheet name to create
a mapper function. That function takes a column name and a
value then returns (StyleName, value)."""
def __init__(self, config_file=None):
self.default_mapping = {
'Qty': 'IMComma',
'ExtPrice': 'IMCurrency',
}
self.config_file = config_file
if self.config_file:
with open(self.config_file) as inp:
cfg_text = str(inp.read()).strip()
self.config = read_config(cfg_text)
else:
self.config = dict()
self.type_map = {
int: 'IMComma',
float: 'IMCurrency',
}
self.convert_map = {
'IMComma': _int,
'IMInt': _int,
'IMCurrency': _acct,
'IMFloat': _acct,
'IMPercent': _acct,
}
def create_mapper(self, sheet_name, log_choices=False):
col_map = ChainMap(
self.config.get(sheet_name, {}),
self.config.get('WORKBOOK', {}),
self.default_mapping
)
if log_choices:
log('Col Map for {}: {}', sheet_name, col_map)
log('CONFIG: {}', self.config)
type_map = dict(self.type_map)
convert_map = dict(self.convert_map)
# We allow wildcards in the column names now
wildcards = list((k.lower(), v) for k, v in col_map.items() if '*' in k)
def m(col_name, val):
# See if there's a col name mapping
style_name = col_map.get(col_name, '')
src = 'col_map'
if not style_name:
# See if there's a wildcard match in the col mappings
lcn = col_name.lower()
for patt, sty in wildcards:
if globmatch(lcn, patt):
style_name, src = sty, 'col_map:wildcard'
break
if not style_name:
# Punt based on type
style_name = type_map.get(type(val), '')
src = 'type_map'
if not style_name:
# Still stumped: just default
style_name = 'IMNormal'
src = 'default_style'
# We perform some conversions automatically
conv = convert_map.get(style_name, None)
if conv:
val = conv(val)
if log_choices:
log('{}, {} => {} via {}', col_name, val, style_name, src)
# Finally done
return style_name, val
return m
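# A minimal usage sketch (file and column names are hypothetical): with a
# config that maps 'Total*' to IMCurrency for sheet 'Sales',
#
#   vm = ValueMapper('mapper.cfg')
#   m = vm.create_mapper('Sales')
#   m('TotalPrice', 1234)  # -> ('IMCurrency', 1234.0)  via the wildcard rule
#   m('Qty', '7')          # -> ('IMComma', 7)          via the default mapping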
class XlsxImporter(object):
"""Base class used to create xlsx import scripts.update_metadata
This is for our csv2xlsx and sql2xlsx tools - and hopefully future stuff as
well. It should also be handy for custom spreadsheet creation.
"""
def __init__(self):
"""Construction."""
pass
def add_args(self, argparser):
"""Add any arguments parser arguments necessary."""
raise NotImplementedError
def validate_args(self, args):
"""Validate values for custom arguments given in add_args."""
raise NotImplementedError
def customize_val_mapper(self, value_mapper):
"""Optional way to customize ValueMapper instance."""
pass
def get_data(self, args):
"""Return tuple of (cols, rows).
Cols is an iterable of column names.
Rows is an iterable of rows where each row is an iterable of values.
Each row should correspond to the column list each.
"""
raise NotImplementedError
def before_save(self, args, wb, sheet):
"""Optional last chance at the sheet before save."""
pass
def main(self, cmdline_args=None):
"""Our main contribution: this is the logic that we provide."""
# Our default arguments
parser = ArgumentParser()
parser.add_argument('-b', '--book', type=str, required=True, help='Name of workbook to create/update')
parser.add_argument('-s', '--sheetname', type=str, required=True, help='Name of worksheet to create')
parser.add_argument('-m', '--mapper', type=str, default='', help='Mapper config file to use')
parser.add_argument('-f', '--freeze', type=str, default='', help='Optional cell at which to perform a Freeze Panes (e.g. use A2 to freeze top row)')
parser.add_argument('-t', '--transpose', action='store_true', default=False, help='If set, transpose result sheet')
# Get any necessary arguments, parse everything, and then perform
# init validation
self.add_args(parser)
if cmdline_args is None:
cmdline_args = list(sys.argv[1:])
args = parser.parse_args(args=cmdline_args)
self.validate_args(args)
# Create col/value mapper
if args.mapper:
assert pth.isfile(args.mapper)
mapper_src = ValueMapper(args.mapper)
self.customize_val_mapper(mapper_src)
mapper = mapper_src.create_mapper(args.sheetname)
# Create or open workbook and get our worksheet ready
if pth.isfile(args.book):
log('Opening [!y]{:s}[!/y]', args.book)
wb = load_workbook(args.book)
else:
log('Creating [!y]{:s}[!/y]', args.book)
wb = Workbook()
wb.guess_types = False
if args.sheetname in wb.sheetnames:
log('Removing previous sheet [!r]{:s}[!/r]', args.sheetname)
del wb[args.sheetname]
log('Creating worksheet [!y]{:s}[!/y]', args.sheetname)
sheet = wb.create_sheet(args.sheetname)
# openpyxl creates a default worksheet named 'Sheet': remove it unless that's
# what we just created...
if args.sheetname != 'Sheet' and 'Sheet' in wb.sheetnames:
log('Removing DEFAULT SHEET named [!r]Sheet[!/r]')
del wb['Sheet']
# Add our standard styles
add_default_styles(wb)
# Now we need the data that we'll be writing
col_names, rows = self.get_data(args)
        # simplify cell writing, and handle transposition
def _write_cell(r, c, v, sty):
if args.transpose:
r, c = c, r
sheet.cell(row=r, column=c, value=v).style = sty
# Create header row
col_names = list(col_names) # Go ahead and freeze column names
for idx, col in enumerate(col_names):
_write_cell(1, idx+1, col, 'IMHeader')
# Now create all rows
count = 0
for row in rows:
for idx, val in enumerate(row):
col = col_names[idx]
style, val = mapper(col, val)
_write_cell(count+2, idx+1, val, style)
count += 1
if count == 1:
log('[!g]First Record Written![!/g]')
elif count % 5000 == 0:
log('Records: [!g]{:,d}[!/g]', count)
# Finalize sheet - we autofit cols, freeze if necessary, and save
for col in sheet.columns:
            # key for column_dimensions; in openpyxl >= 2.6 .column returns
            # an int, so col[0].column_letter is needed there instead
            column = col[0].column
max_length = 6
for cell in col:
v = cell.value
if v:
max_length = max(max_length, len(str(cell.value)))
adjusted_width = (max_length + 3.2) * 0.88 # calc is totally arbitrary
sheet.column_dimensions[column].width = adjusted_width
# Freeze if requests
if args.freeze:
log('Freezing sheet at [!c]{}[!/c]', args.freeze)
sheet.freeze_panes = args.freeze
# Give our implementor one last shot
self.before_save(args, wb, sheet)
log('[!c]Saving[!/c]')
wb.save(args.book)
wb.close()
log('[!br][!w]DONE[!/w][!/br] -> Rows: [!g]{:,d}[!/g]', count)
| 32.772846 | 162 | 0.585723 | 1,667 | 12,552 | 4.272945 | 0.233353 | 0.016847 | 0.018251 | 0.018953 | 0.119753 | 0.080163 | 0.072441 | 0.051383 | 0.051383 | 0.040994 | 0 | 0.006525 | 0.304095 | 12,552 | 382 | 163 | 32.858639 | 0.80893 | 0.21869 | 0 | 0.149798 | 0 | 0.008097 | 0.103653 | 0.002498 | 0 | 0 | 0 | 0 | 0.004049 | 1 | 0.08502 | false | 0.012146 | 0.048583 | 0 | 0.218623 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b1a31e04d47f6d68b068910c2a1aab377ea81915 | 6,746 | py | Python | connect/graph_service.py | dondon486/outlook_hack | 244d710ea178ae1765a48d77b8c6ba2b80be27af | [
"MIT"
] | null | null | null | connect/graph_service.py | dondon486/outlook_hack | 244d710ea178ae1765a48d77b8c6ba2b80be27af | [
"MIT"
] | null | null | null | connect/graph_service.py | dondon486/outlook_hack | 244d710ea178ae1765a48d77b8c6ba2b80be27af | [
"MIT"
] | 1 | 2021-01-05T00:21:18.000Z | 2021-01-05T00:21:18.000Z | # Copyright (c) Microsoft. All rights reserved. Licensed under the MIT license.
# See LICENSE in the project root for license information.
import requests
import uuid
import json
from connect.data import get_email_text
from collections import Counter
from collections import OrderedDict
from flask import request
# The base URL for the Microsoft Graph API.
graph_api_endpoint = 'https://graph.microsoft.com/v1.0{0}'
owner_id = 'moagarw@microsoft.com'
def call_getMails(access_token):
    # The resource URL for the signed-in user's sent messages.
send_mail_url = graph_api_endpoint.format('/me/mailFolders/SentItems/messages')
# Set request headers.
headers = {
'User-Agent': 'python_tutorial/1.0',
'Authorization': 'Bearer {0}'.format(access_token),
'Accept': 'application/json',
'Content-Type': 'application/json'
}
# Use these headers to instrument calls. Makes it easier
# to correlate requests and responses in case of problems
# and is a recommended best practice.
request_id = str(uuid.uuid4())
instrumentation = {
'client-request-id': request_id,
'return-client-request-id': 'true'
}
headers.update(instrumentation)
params = {'$select': 'toRecipients', '$top': '50'}
search_crt = add_search()
if search_crt:
params['$search'] = search_crt
response = requests.get(send_mail_url, params , headers=headers, verify=False)
list = json.loads(response.text)
l = []
h = {}
if list.get('value'):
for v in list.get('value'):
recs = v.get('toRecipients')
for r in recs:
l.append(r.get('emailAddress').get('address'))
h[r.get('emailAddress').get('address')] = r.get('emailAddress')
l = remove_redundant(l)
l = sort_by_freq(l)
return [h.get(i) for i in l]
def remove_redundant(list):
if owner_id in list:
list.remove(owner_id)
existing_users = request.args.getlist('existing_users[]')
if existing_users:
list = [x for x in list if x not in existing_users]
return list
def add_search():
existing_users = request.args.getlist('existing_users[]')
limiter = ''
search = ''
if existing_users:
for u in existing_users:
search = search + limiter + u
limiter = '+'
if search:
search = 'to:[' + search + ']'
search = '"' + search + '"'
return search
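# e.g. with existing_users == ['a@x.com', 'b@x.com'] this returns the string
# '"to:[a@x.com+b@x.com]"', which the caller passes as the $search parameter
# to narrow the sent-items scan to mails addressed to those users.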
def call_getCalendarUsers(access_token):
    # The resource URL for the signed-in user's calendar events.
send_mail_url = graph_api_endpoint.format('/me/calendar/events')
# Set request headers.
headers = {
'User-Agent': 'python_tutorial/1.0',
'Authorization': 'Bearer {0}'.format(access_token),
'Accept': 'application/json',
'Content-Type': 'application/json'
}
# Use these headers to instrument calls. Makes it easier
# to correlate requests and responses in case of problems
# and is a recommended best practice.
request_id = str(uuid.uuid4())
instrumentation = {
'client-request-id': request_id,
'return-client-request-id': 'true'
}
headers.update(instrumentation)
response = requests.get(send_mail_url, {'$select': 'attendees', '$top': '50'}, headers=headers, verify=False)
list = json.loads(response.text)
l = []
h = {}
for v in list.get('value'):
recs = v.get('attendees')
for r in recs:
l.append(r.get('emailAddress').get('address'))
h[r.get('emailAddress').get('address')] = r.get('emailAddress')
l = sort_by_freq(l)
return [h.get(i) for i in l]
def call_getCalendarRooms(access_token):
    # The resource URL for the signed-in user's calendar events.
send_mail_url = graph_api_endpoint.format('/me/calendar/events')
# Set request headers.
headers = {
'User-Agent': 'python_tutorial/1.0',
'Authorization': 'Bearer {0}'.format(access_token),
'Accept': 'application/json',
'Content-Type': 'application/json'
}
# Use these headers to instrument calls. Makes it easier
# to correlate requests and responses in case of problems
# and is a recommended best practice.
request_id = str(uuid.uuid4())
instrumentation = {
'client-request-id': request_id,
'return-client-request-id': 'true'
}
headers.update(instrumentation)
response = requests.get(send_mail_url, {'$select': 'location', '$top': '50'}, headers=headers, verify=False)
list = json.loads(response.text)
l = []
h = {}
for v in list.get('value'):
r = v.get('location')
l.append(r.get('displayName'))
h[r.get('displayName')] = r
l = sort_by_freq(l)
return [h.get(i) for i in l]
#top 5 sorted y frequency
def sort_by_freq(orig_list):
final_recs = [item for items, c in Counter(orig_list).most_common() for item in [items] * c]
final_recs = list(OrderedDict.fromkeys(final_recs))
return final_recs[:5]
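# e.g. sort_by_freq(['a', 'b', 'a', 'c', 'a', 'b']) -> ['a', 'b', 'c']:
# Counter.most_common orders by frequency, the list expansion keeps that
# order, OrderedDict.fromkeys dedupes, and the slice keeps the top 5.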
def call_sendMail_endpoint(access_token, alias, emailAddress):
# The resource URL for the sendMail action.
send_mail_url = graph_api_endpoint.format('/me/sendMail')
# Set request headers.
headers = {
'User-Agent': 'python_tutorial/1.0',
'Authorization': 'Bearer {0}'.format(access_token),
'Accept': 'application/json',
'Content-Type': 'application/json'
}
# Use these headers to instrument calls. Makes it easier
# to correlate requests and responses in case of problems
# and is a recommended best practice.
request_id = str(uuid.uuid4())
instrumentation = {
'client-request-id': request_id,
'return-client-request-id': 'true'
}
headers.update(instrumentation)
# Create the email that is to be sent with API.
email = {
'Message': {
'Subject': 'Welcome to Office 365 development with Python and the Office 365 Connect sample',
'Body': {
'ContentType': 'HTML',
'Content': get_email_text('mohit agarwal')
},
'ToRecipients': [
{
'EmailAddress': {
'Address': emailAddress
}
}
]
},
'SaveToSentItems': 'true'
}
response = requests.post(url=send_mail_url, headers=headers, data=json.dumps(email), verify=False, params=None)
# Check if the response is 202 (success) or not (failure).
if (response.status_code == requests.codes.accepted):
return response.status_code
else:
return "{0}: {1}".format(response.status_code, response.text)
| 32.432692 | 115 | 0.622739 | 829 | 6,746 | 4.95778 | 0.224367 | 0.035037 | 0.021411 | 0.016545 | 0.610706 | 0.610706 | 0.599027 | 0.577616 | 0.577616 | 0.564964 | 0 | 0.007561 | 0.254966 | 6,746 | 207 | 116 | 32.589372 | 0.810187 | 0.171064 | 0 | 0.48 | 0 | 0 | 0.216391 | 0.027139 | 0 | 0 | 0 | 0 | 0 | 1 | 0.046667 | false | 0 | 0.046667 | 0 | 0.146667 | 0.013333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b1a6791f1a8ec7c7688477b282d11ddab3a33905 | 4,161 | py | Python | userbot/plugins/utube.py | Marshmellow098/AK-CRAZY-TECH-BOT | 66b6f57810447b31bb81d675bd1fd5a58740ceb6 | [
"MIT"
] | null | null | null | userbot/plugins/utube.py | Marshmellow098/AK-CRAZY-TECH-BOT | 66b6f57810447b31bb81d675bd1fd5a58740ceb6 | [
"MIT"
] | null | null | null | userbot/plugins/utube.py | Marshmellow098/AK-CRAZY-TECH-BOT | 66b6f57810447b31bb81d675bd1fd5a58740ceb6 | [
"MIT"
] | null | null | null | import re
import random
from userbot import bot
from userbot.utils import admin_cmd
IF_EMOJI = re.compile(
"["
"\U0001F1E0-\U0001F1FF" # flags (iOS)
"\U0001F300-\U0001F5FF" # symbols & pictographs
"\U0001F600-\U0001F64F" # emoticons
"\U0001F680-\U0001F6FF" # transport & map symbols
"\U0001F700-\U0001F77F" # alchemical symbols
"\U0001F780-\U0001F7FF" # Geometric Shapes Extended
"\U0001F800-\U0001F8FF" # Supplemental Arrows-C
"\U0001F900-\U0001F9FF" # Supplemental Symbols and Pictographs
"\U0001FA00-\U0001FA6F" # Chess Symbols
"\U0001FA70-\U0001FAFF" # Symbols and Pictographs Extended-A
"\U00002702-\U000027B0" # Dingbats
"]+")
def deEmojify(inputString: str) -> str:
"""Remove emojis and other non-safe characters from string"""
return re.sub(IF_EMOJI, '', inputString)
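# e.g. deEmojify('song 🎵 name') -> 'song  name'; the cleaned text is what
# gets forwarded to the inline bots below.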
@borg.on(admin_cmd(pattern="uta(?: |$)(.*)"))
async def nope(doit):
ok = doit.pattern_match.group(1)
if not ok:
if doit.is_reply:
            ok = (await doit.get_reply_message()).message
        else:
            await doit.edit("`Sir, please give me a query to search and I'll download it for you!`")
            return
sticcers = await bot.inline_query(
"Lybot", f"{(deEmojify(ok))}")
await sticcers[0].click(doit.chat_id,
reply_to=doit.reply_to_msg_id,
silent=True if doit.is_reply else False,
hide_via=True)
await doit.delete()
import asyncio
import os
from pathlib import Path
from telethon.errors.rpcerrorlist import YouBlockedUserError
# edit_delete and delete_messages are called below; they are assumed to live
# in the same helpers module (helper names vary across userbot forks)
from userbot.utils import admin_cmd, delete_messages, edit_delete, edit_or_reply
SEARCH_STRING = "<code>Ok weit, searching....</code>"
NOT_FOUND_STRING = "<code>Sorry !I am unable to find any results to your query</code>"
SENDING_STRING = "<code>Ok I found something related to that.....</code>"
BOT_BLOCKED_STRING = "<code>Please unblock @utubebot and try again</code>"
@bot.on(admin_cmd(pattern="ut (.*)"))
async def fetcher(event):
if event.fwd_from:
return
song = event.pattern_match.group(1)
chat = "@utubebot"
event = await event.edit(SEARCH_STRING, parse_mode="html")
async with event.client.conversation(chat) as conv:
try:
purgeflag = await conv.send_message("/start")
await conv.get_response()
await conv.send_message(song)
ok = await conv.get_response()
            while not ok.edit_hide:
await asyncio.sleep(0.1)
ok = await event.client.get_messages(chat, ids=ok.id)
baka = await event.client.get_messages(chat)
if baka[0].message.startswith(
("Sorry I found nothing..")
):
await delete_messages(event, chat, purgeflag)
return await edit_delete(
event, NOT_FOUND_STRING, parse_mode="html", time=5
)
await event.edit(SENDING_STRING, parse_mode="html")
await baka[0].click(0)
music = await conv.get_response()
await event.client.send_read_acknowledge(conv.chat_id)
except YouBlockedUserError:
await event.edit(BOT_BLOCKED_STRING, parse_mode="html")
return
await event.client.send_file(
event.chat_id,
music,
caption=f"<b>==> <code>{song}</code></b>",
parse_mode="html",
)
await event.delete()
await delete_messages(event, chat, purgeflag)
@borg.on(admin_cmd(pattern="utv(?: |$)(.*)"))
async def nope(doit):
ok = doit.pattern_match.group(1)
if not ok:
if doit.is_reply:
            ok = (await doit.get_reply_message()).message
        else:
            await doit.edit("`Please give a query to search!`")
            return
sticcers = await bot.inline_query(
"vid", f"{(deEmojify(ok))}")
await sticcers[0].click(doit.chat_id,
reply_to=doit.reply_to_msg_id,
silent=True if doit.is_reply else False,
hide_via=True)
await doit.delete()
| 37.151786 | 96 | 0.608027 | 501 | 4,161 | 4.914172 | 0.351297 | 0.032494 | 0.026401 | 0.021121 | 0.355808 | 0.31844 | 0.190089 | 0.190089 | 0.190089 | 0.190089 | 0 | 0.047793 | 0.275895 | 4,161 | 111 | 97 | 37.486486 | 0.769333 | 0.068733 | 0 | 0.3 | 0 | 0 | 0.182996 | 0.065837 | 0 | 0 | 0 | 0 | 0 | 1 | 0.01 | false | 0 | 0.09 | 0 | 0.16 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b1a6b4dfbb1002cd6d657bb9cefea37a57502ce7 | 2,974 | py | Python | dev/sherpa/stats/compare_stats.py | grburgess/gammapy | 609e460698caca7223afeef5e71826c7b32728d1 | [
"BSD-3-Clause"
] | 3 | 2019-01-28T12:21:14.000Z | 2019-02-10T19:58:07.000Z | dev/sherpa/stats/compare_stats.py | grburgess/gammapy | 609e460698caca7223afeef5e71826c7b32728d1 | [
"BSD-3-Clause"
] | null | null | null | dev/sherpa/stats/compare_stats.py | grburgess/gammapy | 609e460698caca7223afeef5e71826c7b32728d1 | [
"BSD-3-Clause"
] | null | null | null | """This script calculated WStat using different implemented methods. It's
purpose is to aid the decision which is the 'correct' WStat to be used"""
import numpy as np
from gammapy.utils.random import get_random_state
def get_test_data():
n_bins = 1
random_state = get_random_state(3)
model = random_state.rand(n_bins) * 20
data = random_state.poisson(model)
staterror = np.sqrt(data)
off_vec = random_state.poisson(0.7 * model)
alpha = np.array([0.2] * len(model))
return data, model, staterror, off_vec, alpha
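# All three backends below should agree on the total statistic for the same
# (n_on, mu_signal, n_off, alpha): WStat profiles out the unknown background
# by plugging in its conditional maximum-likelihood estimate mu_bkg.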
def calc_wstat_sherpa():
data, model, staterr, off_vec, alpha = get_test_data()
import sherpa.stats as ss
wstat = ss.WStat()
# We assume equal exposure
bkg = dict(bkg = off_vec,
exposure_time=[1, 1],
backscale_ratio=1./alpha,
data_size=len(model)
)
stat = wstat.calc_stat(data, model, staterror=staterr, bkg=bkg)
print("Sherpa stat: {}".format(stat[0]))
print("Sherpa fvec: {}".format(stat[1]))
def calc_wstat_gammapy():
data, model, staterr, off_vec, alpha = get_test_data()
from gammapy.stats import wstat
from gammapy.stats.fit_statistics import (
_get_wstat_background,
_get_wstat_extra_terms,
)
# background estimate
bkg = _get_wstat_background(data, off_vec, alpha, model)
print("Gammapy mu_bkg: {}".format(bkg))
statsvec = wstat(n_on=data,
mu_signal=model,
n_off=off_vec,
alpha=alpha)
print("Gammapy stat: {}".format(np.sum(statsvec)))
print("Gammapy statsvec: {}".format(statsvec))
print("---> with extra terms")
extra_terms = _get_wstat_extra_terms(data, off_vec)
print("Gammapy extra terms: {}".format(extra_terms))
statsvec = wstat(n_on=data,
mu_signal=model,
n_off=off_vec,
alpha=alpha, extra_terms=True)
print("Gammapy stat: {}".format(np.sum(statsvec)))
print("Gammapy statsvec: {}".format(statsvec))
def calc_wstat_xspec():
data, model, staterr, off_vec, alpha = get_test_data()
from xspec_stats import xspec_wstat as wstat
from xspec_stats import xspec_wstat_f, xspec_wstat_d
# alpha = t_s / t_b
t_b = 1. / alpha
t_s = 1
d = xspec_wstat_d(t_s, t_b, model, data, off_vec)
f = xspec_wstat_f(data, off_vec, t_s, t_b, model, d)
bkg = f * t_b
stat = wstat(t_s, t_b, model, data, off_vec)
print("XSPEC mu_bkg (f * t_b): {}".format(bkg))
print("XSPEC stat: {}".format(stat))
if __name__ == "__main__":
data, model, staterr, off_vec, alpha = get_test_data()
print("Test data")
print("n_on: {}".format(data))
print("n_off: {}".format(off_vec))
print("alpha: {}".format(alpha))
print("n_pred: {}".format(model))
print("\n")
calc_wstat_sherpa()
print("\n")
calc_wstat_gammapy()
print("\n")
calc_wstat_xspec()
| 30.979167 | 73 | 0.627438 | 420 | 2,974 | 4.178571 | 0.221429 | 0.051282 | 0.050142 | 0.043305 | 0.288889 | 0.283761 | 0.251852 | 0.251852 | 0.230199 | 0.186895 | 0 | 0.006667 | 0.243443 | 2,974 | 95 | 74 | 31.305263 | 0.773333 | 0.068931 | 0 | 0.236111 | 0 | 0 | 0.095255 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.097222 | 0 | 0.166667 | 0.263889 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b1a91488cba0242f7d4423e74b04268b77d35360 | 2,188 | py | Python | setup.py | Rippling/pdfminer.six | 112de9c52f71464d6c32568d72bb098a220b482e | [
"MIT"
] | null | null | null | setup.py | Rippling/pdfminer.six | 112de9c52f71464d6c32568d72bb098a220b482e | [
"MIT"
] | null | null | null | setup.py | Rippling/pdfminer.six | 112de9c52f71464d6c32568d72bb098a220b482e | [
"MIT"
] | null | null | null | from setuptools import setup
import sys
import pdfminer as package
requires = ['six', 'pycryptodome', 'sortedcontainers']
if sys.version_info >= (3, 0):
requires.append('chardet')
import os
def _get_rp_version(actual_version):
# if __PROJECT_GIT_COMMIT_SHA is provided, that gets the highest precedence.
# if not available, then we can check if it's present in the file (the file
# is never checked in, so it's used only during the package installation step)
rp_commit_sha = None
    try:
        with open('rp-version', 'r') as f:
            rp_commit_sha = f.read()
    except FileNotFoundError:
        pass
rp_commit_sha = os.environ.get('__PROJECT_GIT_COMMIT_SHA', None) or rp_commit_sha
if rp_commit_sha is not None:
with open('rp-version', 'w') as f:
f.write(rp_commit_sha)
actual_version = actual_version + "." + rp_commit_sha
return actual_version
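# e.g. (illustrative values) with package.__version__ == '20181108' and
# __PROJECT_GIT_COMMIT_SHA=abc123 in the environment, the installed version
# becomes '20181108.abc123', and 'rp-version' caches the sha for later runs.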
version = package.__version__
version = _get_rp_version(version)
setup(
name='pdfminer.six',
version=version,
packages=['pdfminer'],
package_data={'pdfminer': ['cmap/*.pickle.gz']},
install_requires=requires,
description='PDF parser and analyzer',
long_description=package.__doc__,
license='MIT/X',
author='Yusuke Shinyama + Philippe Guglielmetti',
author_email='pdfminer@goulu.net',
url='https://github.com/pdfminer/pdfminer.six',
scripts=[
'tools/pdf2txt.py',
'tools/dumppdf.py',
'tools/latin2ascii.py',
],
keywords=[
'pdf parser',
'pdf converter',
'layout analysis',
'text mining',
],
classifiers=[
'Programming Language :: Python',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'Development Status :: 5 - Production/Stable',
'Environment :: Console',
'Intended Audience :: Developers',
'Intended Audience :: Science/Research',
'License :: OSI Approved :: MIT License',
'Topic :: Text Processing',
],
)
| 29.972603 | 85 | 0.638483 | 257 | 2,188 | 5.252918 | 0.521401 | 0.06 | 0.057037 | 0.057778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009004 | 0.238574 | 2,188 | 72 | 86 | 30.388889 | 0.801321 | 0.102834 | 0 | 0.050847 | 0 | 0 | 0.390505 | 0.012251 | 0 | 0 | 0 | 0 | 0 | 1 | 0.016949 | false | 0.016949 | 0.067797 | 0 | 0.101695 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b1a9c8d83bfc9d2d410dc480f07512901417b367 | 453 | py | Python | global_v.py | anilozdemir/TSSL-BP | f0613047b7e4309f25e0a490fc96467fced5fbf0 | [
"MIT"
] | null | null | null | global_v.py | anilozdemir/TSSL-BP | f0613047b7e4309f25e0a490fc96467fced5fbf0 | [
"MIT"
] | null | null | null | global_v.py | anilozdemir/TSSL-BP | f0613047b7e4309f25e0a490fc96467fced5fbf0 | [
"MIT"
] | null | null | null | import torch
dtype = None
device = None
n_steps = None
syn_a = None
tau_s = None
def init(dty, dev, n_t, ts):
    global dtype, device, n_steps, syn_a, tau_s
dtype = dty
device = dev
n_steps = n_t
tau_s = ts
syn_a = torch.zeros((1, 1, 1, 1, n_steps), dtype=dtype, device=device)
syn_a[..., 0] = 1
for t in range(n_steps-1):
syn_a[..., t+1] = syn_a[..., t] - syn_a[..., t] / tau_s
syn_a /= tau_s
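    # syn_a is therefore a discrete exponential synaptic kernel:
    # syn_a[..., t] == (1 - 1/tau_s)**t / tau_s, decaying toward zero over
    # the n_steps window.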
| 21.571429 | 74 | 0.578366 | 85 | 453 | 2.835294 | 0.294118 | 0.13278 | 0.062241 | 0.049793 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.024169 | 0.269316 | 453 | 20 | 75 | 22.65 | 0.703927 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.058824 | 0 | 0.117647 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b1aaf8d8b254e35a312d496cedb1a43b986a7dda | 16,126 | py | Python | yggdrasil/serialize/PandasSerialize.py | cropsinsilico/cis_interface | 6723015faa66ea470395a927b86ad5b58ce1c29c | [
"BSD-3-Clause"
] | 8 | 2017-10-17T20:33:27.000Z | 2018-12-16T21:07:41.000Z | yggdrasil/serialize/PandasSerialize.py | cropsinsilico/cis_interface | 6723015faa66ea470395a927b86ad5b58ce1c29c | [
"BSD-3-Clause"
] | 44 | 2017-09-28T07:17:17.000Z | 2019-01-22T21:47:09.000Z | yggdrasil/serialize/PandasSerialize.py | cropsinsilico/cis_interface | 6723015faa66ea470395a927b86ad5b58ce1c29c | [
"BSD-3-Clause"
] | 7 | 2018-03-28T20:06:03.000Z | 2018-08-04T16:39:08.000Z | import pandas
import copy
import numpy as np
import warnings
import io as sio
from yggdrasil import platform, serialize
from yggdrasil.metaschema.datatypes.JSONArrayMetaschemaType import (
JSONArrayMetaschemaType)
from yggdrasil.serialize.AsciiTableSerialize import AsciiTableSerialize
from yggdrasil.communication.transforms.PandasTransform import PandasTransform
class PandasSerialize(AsciiTableSerialize):
r"""Class for serializing/deserializing Pandas data frames.
Args:
no_header (bool, optional): If True, headers will not be read or
serialized from/to tables. Defaults to False.
str_as_bytes (bool, optional): If True, strings in columns are
read as bytes. Defaults to False.
"""
_seritype = 'pandas'
_schema_subtype_description = ('Serializes tables using the pandas package.')
_schema_properties = {'no_header': {'type': 'boolean',
'default': False},
'str_as_bytes': {'type': 'boolean',
'default': False}}
_schema_excluded_from_inherit = ['as_array']
default_read_meth = 'read'
as_array = True
concats_as_str = False
# has_header = False
def __init__(self, *args, **kwargs):
self.write_header_once = False
self.dont_write_header = kwargs.pop('dont_write_header',
kwargs.get('no_header', False))
return super(PandasSerialize, self).__init__(*args, **kwargs)
@property
def empty_msg(self):
r"""obj: Object indicating empty message."""
if self.numpy_dtype:
return pandas.DataFrame(np.zeros(0, self.numpy_dtype))
else:
return pandas.DataFrame(columns=self.get_field_names())
def get_field_names(self, *args, **kwargs):
r"""Get the field names for an array of fields.
Args:
*args: Arguments are passed to the parent class's method.
**kwargs: Keyword arguments are passed to the parent class's
method.
Returns:
list: Names for each field in the data type.
"""
if self.no_header:
return None
return super(PandasSerialize, self).get_field_names(*args, **kwargs)
@classmethod
def apply_field_names(cls, frame, field_names=None):
r"""Apply field names as columns to a frame, first checking for a mapping.
If there is a direct mapping, the columns are reordered to match the order
of the field names. If there is not an overlap in the field names and
columns, a one-to-one mapping is assumed, but a warning is issued. If there
is a partial overlap, an error is raised.
Args:
frame (pandas.DataFrame): Frame to apply field names to as columns.
field_names (list, optional): New field names that should be applied.
If not provided, the original frame will be returned unaltered.
Returns:
pandas.DataFrame: Frame with updated field names.
Raises:
RuntimeError: If there is a partial overlap between the field names
and columns.
"""
if field_names is None:
return frame
cols = frame.columns.tolist()
if len(field_names) != len(cols):
raise RuntimeError(("Number of field names (%d) doesn't match "
+ "number of columns in data frame (%d).")
% (len(field_names), len(cols)))
# Check for missing fields
fmiss = []
for f in field_names:
if f not in cols:
fmiss.append(f)
if fmiss:
if len(fmiss) == len(field_names):
warnings.warn("Assuming direct mapping of field names to columns. "
+ "This may not be correct.")
frame.columns = field_names
else:
# Partial overlap
raise RuntimeError("%d fields (%s) missing from frame: %s"
% (len(fmiss), str(fmiss), str(frame)))
else:
# Reorder columns
frame = frame[field_names]
return frame
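    # e.g. with field_names ['b', 'a']: a frame with columns ['a', 'b'] is
    # reordered; a frame with columns ['x', 'y'] is renamed outright (with a
    # warning); a frame with columns ['a', 'y'] raises RuntimeError.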
def cformat2nptype(self, *args, **kwargs):
r"""Method to convert c format string to numpy data type.
Args:
*args: Arguments are passed to serialize.cformat2nptype.
**kwargs: Keyword arguments are passed to serialize.cformat2nptype.
Returns:
np.dtype: Corresponding numpy data type.
"""
out = super(PandasSerialize, self).cformat2nptype(*args, **kwargs)
if (out.char == 'S') and (not self.str_as_bytes):
out = np.dtype('U%d' % out.itemsize)
return out
def func_serialize(self, args):
r"""Serialize a message.
Args:
args (obj): Python object to be serialized.
Returns:
bytes, str: Serialized message.
"""
if not isinstance(args, pandas.DataFrame):
raise TypeError(("Pandas DataFrame required. Invalid type "
+ "of '%s' provided.") % type(args))
fd = sio.StringIO()
# For Python 3 and higher, bytes need to be encoded
args_ = copy.deepcopy(args)
for c in args.columns:
if isinstance(args_[c][0], bytes):
args_[c] = args_[c].apply(lambda s: s.decode('utf-8'))
if (self.field_names is None) and (not self.no_header):
self.field_names = self.get_field_names()
args_ = self.apply_field_names(args_, self.field_names)
if not self.no_header:
cols = args_.columns.tolist()
if cols == list(range(len(cols))):
args_ = self.apply_field_names(args_, ['f%d' % i for i in
range(len(cols))])
args_.to_csv(fd, index=False,
# Not in pandas <0.24
# line_terminator=self.newline.decode("utf-8"),
sep=self.delimiter.decode("utf-8"),
mode='w', encoding='utf8',
header=(not self.dont_write_header))
if self.write_header_once:
self.dont_write_header = True
out = fd.getvalue()
fd.close()
# Required to change out \r\n for \n on windows
out = out.encode("utf-8")
out = out.replace(platform._newline, self.newline)
return out
def func_deserialize(self, msg):
r"""Deserialize a message.
Args:
msg (str, bytes): Message to be deserialized.
Returns:
obj: Deserialized Python object.
"""
fd = sio.BytesIO(msg)
names = None
dtype = None
if self.initialized:
np_dtype = self.numpy_dtype
dtype = {}
if self.no_header:
dtype_names = range(len(np_dtype.names))
else:
dtype_names = np_dtype.names
for n in dtype_names:
if np_dtype[n].char in ['U', 'S']:
dtype[n] = object
else:
dtype[n] = np_dtype[n]
kws = dict(sep=self.delimiter.decode("utf-8"),
names=names,
dtype=dtype,
encoding='utf8',
skipinitialspace=True)
if self.no_header:
kws['header'] = None
out = pandas.read_csv(fd, **kws)
out = out.dropna(axis='columns', how='all')
fd.close()
if self.str_as_bytes:
# Make sure strings are bytes
for c, d in zip(out.columns, out.dtypes):
if (d == object) and isinstance(out[c][0], str):
out[c] = out[c].apply(lambda s: s.encode('utf-8'))
# On windows, long != longlong and longlong requires special cformat
# For now, long will be used to preserve the use of %ld to match long
if platform._is_win: # pragma: windows
if np.dtype('longlong').itemsize == 8:
new_dtypes = dict()
for c, d in zip(out.columns, out.dtypes):
if d == np.dtype('longlong'):
new_dtypes[c] = np.int32
else:
new_dtypes[c] = d
out = out.astype(new_dtypes, copy=False)
# Reorder if necessary
out = self.apply_field_names(out, self.get_field_names())
if dtype is not None:
out = out.astype(dtype, copy=False)
if (self.field_names is None) and (not self.no_header):
self.field_names = out.columns.tolist()
if not self.initialized:
typedef = JSONArrayMetaschemaType.encode_type(out)
self.update_serializer(extract=True, **typedef)
return out
@property
def send_converter(self):
kws = {}
field_names = self.get_field_names()
if field_names is not None:
kws['field_names'] = field_names
return PandasTransform(**kws)
@classmethod
def object2dict(cls, obj, **kwargs):
r"""Convert a message object into a dictionary.
Args:
obj (object): Object that would be serialized by this class and
should be returned in a dictionary form.
**kwargs: Additional keyword arguments are ignored.
Returns:
dict: Dictionary version of the provided object.
"""
if isinstance(obj, pandas.DataFrame):
return serialize.pandas2dict(obj)
return super(PandasSerialize, cls).object2dict(obj, as_array=True,
**kwargs)
@classmethod
def object2array(cls, obj, **kwargs):
r"""Convert a message object into an array.
Args:
obj (object): Object that would be serialized by this class and
should be returned in an array form.
**kwargs: Additional keyword arguments are ignored.
Returns:
np.array: Array version of the provided object.
"""
if isinstance(obj, pandas.DataFrame):
return serialize.pandas2numpy(obj)
return super(PandasSerialize, cls).object2array(obj, as_array=True,
**kwargs)
@classmethod
def concatenate(cls, objects, **kwargs):
r"""Concatenate objects to get object that would be recieved if
the concatenated serialization were deserialized.
Args:
objects (list): Objects to be concatenated.
**kwargs: Additional keyword arguments are ignored.
Returns:
list: Set of objects that results from concatenating those provided.
"""
if len(objects) == 0:
return []
if isinstance(objects[0], pandas.DataFrame):
field_names = objects[0].columns.tolist()
for i in range(1, len(objects)):
objects[i] = cls.apply_field_names(objects[i],
field_names)
return [pandas.concat(objects, ignore_index=True)]
out = super(PandasSerialize, cls).concatenate(objects, as_array=True,
**kwargs)
return out
def consolidate_array(self, out):
r"""Consolidate message into a structure numpy array if possible.
Args:
out (list, tuple, np.ndarray): Object to consolidate into a
structured numpy array.
Returns:
np.ndarray: Structured numpy array containing consolidated message.
Raises:
ValueError: If the array cannot be consolidated.
"""
if isinstance(out, pandas.DataFrame):
out = serialize.pandas2numpy(out)
return super(PandasSerialize, self).consolidate_array(out)
@classmethod
def get_testing_options(cls, not_as_frames=False, no_names=False,
no_header=False, **kwargs):
r"""Method to return a dictionary of testing options for this class.
Args:
not_as_frames (bool, optional): If True, the returned example
includes data that is not in a pandas data frame. Defaults to
False.
no_names (bool, optional): If True, an example is returned where the
names are not provided to the deserializer. Defaults to False.
no_header (bool, optional): If True, an example is returned
where a header is not included. Defaults to False.
Returns:
dict: Dictionary of variables to use for testing.
"""
kwargs.setdefault('table_string_type', 'string')
field_names = None
out = super(PandasSerialize, cls).get_testing_options(array_columns=True,
**kwargs)
if kwargs['table_string_type'] == 'bytes':
out['kwargs']['str_as_bytes'] = True
for k in ['as_array']: # , 'format_str']:
if k in out['kwargs']:
del out['kwargs'][k]
out['extra_kwargs'] = {}
if no_names:
for x in [out['kwargs'], out]:
if 'field_names' in x:
del x['field_names']
header_line = b'f0\tf1\tf2\n'
elif no_header:
for x in [out['kwargs'], out]:
if 'field_names' in x:
del x['field_names']
header_line = b''
out['kwargs']['no_header'] = True
for x in out['typedef']['items']:
x.pop('title', None)
else:
if 'field_names' in out['kwargs']:
field_names = out['kwargs']['field_names']
header_line = b'name\tcount\tsize\n'
out['contents'] = (header_line
+ b'one\t1\t1.0\n'
+ b'two\t2\t2.0\n'
+ b'three\t3\t3.0\n'
+ b'one\t1\t1.0\n'
+ b'two\t2\t2.0\n'
+ b'three\t3\t3.0\n')
out['concatenate'] = [([], [])]
if not_as_frames:
pass
elif no_header:
out['objects'] = [serialize.list2pandas(x) for x in out['objects']]
out['dtype'] = np.dtype(','.join([x[1] for x in out['dtype'].descr]))
else:
if field_names is None:
field_names = ['f0', 'f1', 'f2']
out['objects'] = [serialize.list2pandas(x, names=field_names)
for x in out['objects']]
out['kwargs']['datatype'] = copy.deepcopy(out['typedef'])
if no_names:
for x in out['kwargs']['datatype']['items']:
x.pop('title', None)
out['empty'] = pandas.DataFrame(np.zeros(0, out['dtype']))
return out
def enable_file_header(self):
r"""Set serializer attributes to enable file headers to be included in
the serializations."""
self.dont_write_header = False
self.write_header_once = True
def disable_file_header(self):
r"""Set serializer attributes to disable file headers from being
included in the serializations."""
self.dont_write_header = True
self.write_header_once = True
def serialize_file_header(self):
r"""Return the serialized header information that should be prepended
to files serialized using this class.
Returns:
bytes: Header string that should be written to the file.
"""
return b''
def deserialize_file_header(self, fd):
r"""Deserialize the header information from the file and update the
serializer.
Args:
fd (file): File containing header.
"""
pass
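# Hedged sketch (not part of the original module): the tab-delimited round
# trip used by func_serialize/func_deserialize, reduced to plain pandas calls.
# The column names below are made up for illustration.
if __name__ == "__main__":
    import io
    example = pandas.DataFrame({'name': ['one', 'two'], 'count': [1, 2], 'size': [1.0, 2.0]})
    buf = io.StringIO()
    example.to_csv(buf, index=False, sep='\t')  # serialize with a header row
    buf.seek(0)
    roundtrip = pandas.read_csv(buf, sep='\t', dtype={'name': object}, skipinitialspace=True)
    assert roundtrip.equals(example)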
| 38.304038 | 83 | 0.552648 | 1,861 | 16,126 | 4.681891 | 0.177861 | 0.059681 | 0.012051 | 0.007231 | 0.253759 | 0.201767 | 0.150121 | 0.136692 | 0.100999 | 0.073224 | 0 | 0.005851 | 0.353528 | 16,126 | 420 | 84 | 38.395238 | 0.829928 | 0.27713 | 0 | 0.237354 | 0 | 0 | 0.085606 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066148 | false | 0.007782 | 0.035019 | 0 | 0.214008 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b1abe503415f197b207c80c32b172f6a9c25c079 | 432 | py | Python | src/feature_extractor.py | amalbed/ml-test | 434a11332ad8b8e2f994fa885103d4327fffc88b | [
"MIT"
] | null | null | null | src/feature_extractor.py | amalbed/ml-test | 434a11332ad8b8e2f994fa885103d4327fffc88b | [
"MIT"
] | null | null | null | src/feature_extractor.py | amalbed/ml-test | 434a11332ad8b8e2f994fa885103d4327fffc88b | [
"MIT"
] | null | null | null | from rdkit.Chem import rdMolDescriptors, MolFromSmiles, rdmolfiles, rdmolops
def fingerprint_features(smile_string, radius=2, size=2048):
mol = MolFromSmiles(smile_string)
new_order = rdmolfiles.CanonicalRankAtoms(mol)
mol = rdmolops.RenumberAtoms(mol, new_order)
return rdMolDescriptors.GetMorganFingerprintAsBitVect(
mol, radius, nBits=size, useChirality=True, useBondTypes=True, useFeatures=False
)
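# Hedged usage sketch (not part of the original file); requires RDKit and an
# arbitrary example SMILES, here 'CCO' (ethanol).
if __name__ == "__main__":
    fp = fingerprint_features('CCO', radius=2, size=2048)
    print(len(fp), fp.GetNumOnBits())  # 2048 bits total, only a handful set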
| 39.272727 | 88 | 0.780093 | 45 | 432 | 7.377778 | 0.666667 | 0.066265 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013441 | 0.138889 | 432 | 10 | 89 | 43.2 | 0.879032 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.125 | 0 | 0.375 | 0.25 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b1ac2b8d1a7c55441c4b2757f0e3aef40511a78e | 1,341 | py | Python | project/core/views/XploreParser.py | tacitia/ThoughtFlow | d8402a7c434f97bc382e37890852711a7e7f161b | [
"MIT"
] | null | null | null | project/core/views/XploreParser.py | tacitia/ThoughtFlow | d8402a7c434f97bc382e37890852711a7e7f161b | [
"MIT"
] | 1 | 2016-02-04T03:17:13.000Z | 2016-02-04T03:17:13.000Z | project/core/views/XploreParser.py | tacitia/ThoughtFlow | d8402a7c434f97bc382e37890852711a7e7f161b | [
"MIT"
] | null | null | null | import xml.etree.ElementTree as ET
import sys, json
import pprint
import os
def getElementValue(element):
return None if element is None else element.text
def getElementArrayValue(element, name):
return [] if element is None else [e.text for e in element.findall(name)]
def getEntries():
entries = []
for i in range(1,5):
filename = 'core/xploreData/TVCGPubs' + str(i) + '.xml'
tree = ET.parse(filename)
root = tree.getroot()
for child in root.findall('document'):
title = child.find('title')
abstract = child.find('abstract')
if title is None or abstract is None:
continue
authors = child.find('authors')
affiliations = child.find('affiliations')
controlledterms = child.find('controlledterms')
thesaurusterms = child.find('thesaurusterms')
date = child.find('py')
publicationId = child.find('publicationId')
entries.append({
'title': getElementValue(title),
'authors': getElementValue(authors),
'abstract': getElementValue(abstract),
'affiliations': getElementValue(affiliations),
'terms': getElementArrayValue(controlledterms, 'term') + getElementArrayValue(thesaurusterms, 'term'),
'publicationId': getElementValue(publicationId),
'date': getElementValue(date)
})
return entries | 35.289474 | 110 | 0.676361 | 142 | 1,341 | 6.387324 | 0.401408 | 0.079383 | 0.024256 | 0.033076 | 0.041896 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001876 | 0.205071 | 1,341 | 38 | 111 | 35.289474 | 0.848968 | 0 | 0 | 0 | 0 | 0 | 0.129657 | 0.017884 | 0 | 0 | 0 | 0 | 0 | 1 | 0.085714 | false | 0 | 0.114286 | 0.057143 | 0.285714 | 0.028571 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
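# Hedged sketch exercising the two helpers from XploreParser.py above on an
# inline document (the real code reads core/xploreData/TVCGPubs<i>.xml):
doc = ET.fromstring('<document><title>T</title>'
                    '<controlledterms><term>a</term><term>b</term></controlledterms>'
                    '</document>')
print(getElementValue(doc.find('title')))                         # T
print(getElementArrayValue(doc.find('controlledterms'), 'term'))  # ['a', 'b']
print(getElementArrayValue(doc.find('thesaurusterms'), 'term'))   # [] (missing element)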
b1adc444e29d8e8768de02f020ffb84b43d3b777 | 4,380 | py | Python | tools/nntool/importer/tflite2/handlers/handler.py | mfkiwl/gap_sdk | 642b798dfdc7b85ccabe6baba295033f0eadfcd4 | [
"Apache-2.0"
] | null | null | null | tools/nntool/importer/tflite2/handlers/handler.py | mfkiwl/gap_sdk | 642b798dfdc7b85ccabe6baba295033f0eadfcd4 | [
"Apache-2.0"
] | null | null | null | tools/nntool/importer/tflite2/handlers/handler.py | mfkiwl/gap_sdk | 642b798dfdc7b85ccabe6baba295033f0eadfcd4 | [
"Apache-2.0"
] | 1 | 2021-11-11T02:12:25.000Z | 2021-11-11T02:12:25.000Z | # Copyright (C) 2020 GreenWaves Technologies, SAS
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
import inspect
from importer.common.handler_options import HandlerOptions, handler_option
from .. import common
@handler_option('remove_quantize_ops', val_type=bool, default=True, desc="remove cast and quantization operations on non-constant tensors")
@handler_option('load_quantization', val_type=bool, default=False, desc="load TFLITE tensor quantization", shortcut='q')
@handler_option('use_lut_sigmoid', val_type=bool, default=False, desc="Map logistic node from tflite onto LUT based sigmoid operation (supported only with sq8 quantization)")
@handler_option('use_lut_tanh', val_type=bool, default=False, desc="Map TANH node from tflite onto LUT based tanh operation (supported only with sq8 quantization)")
# @handler_option('insert_relus', val_type=bool, default=False, desc="Insert RELUs if quantization scaling implies that they may be necessary to ensure float calculation")
class Handler(HandlerOptions):
""" This class is base handler class.
Base backend and frontend base handler class inherit this class.
All operator handler MUST put decorator @tflite_op to register corresponding op.
"""
TFLITE_OP = None
TFLITE_CUSTOM_OP = None
GRAPH_VERSION = 0
PARTIAL_SUPPORT = False
NOT_SUPPORTED = False
PS_DESCRIPTION = ''
@classmethod
def check_cls(cls):
if not cls.TFLITE_OP:
common.LOG.warning(
"%s doesn't have TFLITE_OP. "
"Please use Handler.tflite_op decorator to register TFLITE_OP.",
cls.__name__)
@classmethod
def handle(cls, node, **kwargs):
""" Main method in handler. It will find corresponding versioned handle method,
whose name format is `version_%d`. So prefix `version_` is reserved.
DON'T use it for other purpose.
:param node: Operator for backend.
:param kwargs: Other args.
:return: TensorflowNode for backend.
"""
possible_versions = [ver for ver in cls.get_versions() if ver <= node.op_version]
if possible_versions:
handle_version = max(possible_versions)
ver_handle = getattr(cls, "version_{}".format(handle_version), None)
#pylint: disable=not-callable
return ver_handle(node, **kwargs)
raise ValueError(
"{} version {} is not implemented.".format(node.op_type, node.op_version))
@classmethod
def get_versions(cls):
""" Get all support versions.
:return: Version list.
"""
versions = []
for k, _ in inspect.getmembers(cls, inspect.ismethod):
if k.startswith("version_"):
versions.append(int(k.replace("version_", "")))
return versions
@staticmethod
def tflite_op(op):
return Handler.property_register("TFLITE_OP", op)
@staticmethod
def tflite_custom_op(op):
return Handler.property_register("TFLITE_CUSTOM_OP", op)
@staticmethod
def partial_support(ps):
return Handler.property_register("PARTIAL_SUPPORT", ps)
@staticmethod
def not_supported(ps):
return Handler.property_register("NOT_SUPPORTED", ps)
@staticmethod
def ps_description(psd):
return Handler.property_register("PS_DESCRIPTION", psd)
@staticmethod
def property_register(name, value):
def deco(cls):
setattr(cls, name, value)
return cls
return deco
tflite_op = Handler.tflite_op
tflite_custom_op = Handler.tflite_custom_op
partial_support = Handler.partial_support
not_supported = Handler.not_supported
ps_description = Handler.ps_description
property_register = Handler.property_register
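# Hedged sketch (not part of the module): registering a fake op handler and
# dispatching through Handler.handle() with a minimal node stub; all names
# below are illustrative only.
if __name__ == "__main__":
    @tflite_op("FAKE_OP")
    class FakeOpHandler(Handler):
        @classmethod
        def version_1(cls, node, **kwargs):
            return "handled %s at version 1" % node.op_type
    class NodeStub:
        op_type = "FAKE_OP"
        op_version = 3  # any op_version >= 1 dispatches to version_1
    print(FakeOpHandler.get_versions())      # [1]
    print(FakeOpHandler.handle(NodeStub()))  # handled FAKE_OP at version 1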
| 37.118644 | 174 | 0.703425 | 565 | 4,380 | 5.299115 | 0.357522 | 0.02672 | 0.046092 | 0.03006 | 0.171677 | 0.150969 | 0.104876 | 0.036072 | 0 | 0 | 0 | 0.002326 | 0.214612 | 4,380 | 117 | 175 | 37.435897 | 0.868023 | 0.316667 | 0 | 0.140625 | 0 | 0 | 0.19667 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.15625 | false | 0 | 0.046875 | 0.078125 | 0.453125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b1ae4ccedece7b833d0b77ec448feab75928550a | 6,434 | py | Python | catalyst/dl/callbacks/checkpoint.py | 162/catalyst | b4ba36be52c51160e0fabecdcb084a8d5cd96cb7 | [
"MIT"
] | null | null | null | catalyst/dl/callbacks/checkpoint.py | 162/catalyst | b4ba36be52c51160e0fabecdcb084a8d5cd96cb7 | [
"MIT"
] | null | null | null | catalyst/dl/callbacks/checkpoint.py | 162/catalyst | b4ba36be52c51160e0fabecdcb084a8d5cd96cb7 | [
"MIT"
] | null | null | null | from typing import Dict
import os
from catalyst.dl.core import Callback, RunnerState
from catalyst.dl import utils
class CheckpointCallback(Callback):
"""
Checkpoint callback to save/restore your model/criterion/optimizer/metrics.
"""
def __init__(
self, save_n_best: int = 3, resume: str = None, resume_dir: str = None
):
"""
Args:
save_n_best: number of best checkpoints to keep
resume: path to checkpoint to load and initialize runner state
resume_dir: directory containing the ``resume`` checkpoint
"""
self.save_n_best = save_n_best
self.resume = resume
self.resume_dir = resume_dir
self.top_best_metrics = []
self._keys_from_state = ["resume", "resume_dir"]
@staticmethod
def load_checkpoint(*, filename, state: RunnerState):
if os.path.isfile(filename):
print(f"=> loading checkpoint {filename}")
checkpoint = utils.load_checkpoint(filename)
state.epoch = checkpoint["epoch"]
utils.unpack_checkpoint(
checkpoint,
model=state.model,
criterion=state.criterion,
optimizer=state.optimizer,
scheduler=state.scheduler
)
print(
f"loaded checkpoint {filename} (epoch {checkpoint['epoch']})")
else:
raise Exception("no checkpoint found at {filename}")
def save_checkpoint(
self,
logdir: str,
checkpoint: Dict,
is_best: bool,
save_n_best: int = 5,
main_metric: str = "loss",
minimize_metric: bool = True
):
suffix = f"{checkpoint['stage']}.{checkpoint['epoch']}"
filepath = utils.save_checkpoint(
logdir=f"{logdir}/checkpoints/",
checkpoint=checkpoint,
suffix=suffix,
is_best=is_best,
is_last=True
)
checkpoint_metric = checkpoint["valid_metrics"][main_metric]
self.top_best_metrics.append((filepath, checkpoint_metric))
self.top_best_metrics = sorted(
self.top_best_metrics,
key=lambda x: x[1],
reverse=not minimize_metric
)
if len(self.top_best_metrics) > save_n_best:
last_item = self.top_best_metrics.pop(-1)
last_filepath = last_item[0]
os.remove(last_filepath)
def pack_checkpoint(self, **kwargs):
return utils.pack_checkpoint(**kwargs)
def on_stage_start(self, state: RunnerState):
for key in self._keys_from_state:
value = getattr(state, key, None)
if value is not None:
setattr(self, key, value)
if self.resume_dir is not None:
self.resume = str(self.resume_dir) + "/" + str(self.resume)
if self.resume is not None:
self.load_checkpoint(filename=self.resume, state=state)
def on_epoch_end(self, state: RunnerState):
if state.stage.startswith("infer"):
return
checkpoint = self.pack_checkpoint(
model=state.model,
criterion=state.criterion,
optimizer=state.optimizer,
scheduler=state.scheduler,
epoch_metrics=dict(state.metrics.epoch_values),
valid_metrics=dict(state.metrics.valid_values),
stage=state.stage,
epoch=state.epoch,
checkpoint_data=state.checkpoint_data
)
self.save_checkpoint(
logdir=state.logdir,
checkpoint=checkpoint,
is_best=state.metrics.is_best,
save_n_best=self.save_n_best,
main_metric=state.main_metric,
minimize_metric=state.minimize_metric
)
def on_stage_end(self, state: RunnerState):
print("Top best models:")
top_best_metrics_str = "\n".join(
[
"{filepath}\t{metric:3.4f}".format(
filepath=filepath, metric=metric
) for filepath, metric in self.top_best_metrics
]
)
print(top_best_metrics_str)
class IterationCheckpointCallback(Callback):
"""
Iteration checkpoint callback to save your model/criterion/optimizer
"""
def __init__(
self,
save_n_last: int = 3,
num_iters: int = 100,
stage_restart: bool = True
):
"""
Args:
save_n_last: number of last checkpoints to keep
num_iters: save the checkpoint every `num_iters`
stage_restart: restart counter every stage or not
"""
self.save_n_last = save_n_last
self.num_iters = num_iters
self.stage_restart = stage_restart
self._iteration_counter = 0
self.last_checkpoints = []
def save_checkpoint(
self,
logdir,
checkpoint,
save_n_last
):
suffix = f"{checkpoint['stage']}." \
f"epoch.{checkpoint['epoch']}." \
f"iter.{self._iteration_counter}"
filepath = utils.save_checkpoint(
logdir=f"{logdir}/checkpoints/",
checkpoint=checkpoint,
suffix=suffix,
is_best=False,
is_last=False
)
self.last_checkpoints.append(filepath)
if len(self.last_checkpoints) > save_n_last:
top_filepath = self.last_checkpoints.pop(0)
os.remove(top_filepath)
print(f"\nSaved checkpoint at {filepath}")
def pack_checkpoint(self, **kwargs):
return utils.pack_checkpoint(**kwargs)
def on_stage_start(self, state):
if self.stage_restart:
self._iteration_counter = 0
def on_batch_end(self, state):
self._iteration_counter += 1
if self._iteration_counter % self.num_iters == 0:
checkpoint = self.pack_checkpoint(
model=state.model,
criterion=state.criterion,
optimizer=state.optimizer,
scheduler=state.scheduler,
epoch_metrics=None,
valid_metrics=None,
stage=state.stage,
epoch=state.epoch
)
self.save_checkpoint(
logdir=state.logdir,
checkpoint=checkpoint,
save_n_last=self.save_n_last
)
__all__ = ["CheckpointCallback", "IterationCheckpointCallback"]
| 31.23301 | 79 | 0.580665 | 687 | 6,434 | 5.20524 | 0.181951 | 0.022371 | 0.035235 | 0.035235 | 0.308725 | 0.261745 | 0.22651 | 0.22651 | 0.195749 | 0.195749 | 0 | 0.003707 | 0.329189 | 6,434 | 205 | 80 | 31.385366 | 0.824838 | 0.067143 | 0 | 0.303797 | 0 | 0 | 0.076805 | 0.040612 | 0 | 0 | 0 | 0 | 0 | 1 | 0.075949 | false | 0 | 0.025316 | 0.012658 | 0.132911 | 0.031646 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
490c4e58b184ddb1854dca19a04a43926f64a8a7 | 43,073 | py | Python | python/sppmon.py | IBM/sppmon | 076c68cf37d22fe4d4e4e1b9ff41686315f0daa5 | [
"Apache-2.0"
] | null | null | null | python/sppmon.py | IBM/sppmon | 076c68cf37d22fe4d4e4e1b9ff41686315f0daa5 | [
"Apache-2.0"
] | null | null | null | python/sppmon.py | IBM/sppmon | 076c68cf37d22fe4d4e4e1b9ff41686315f0daa5 | [
"Apache-2.0"
] | null | null | null | """
----------------------------------------------------------------------------------------------
(c) Copyright IBM Corporation 2020, 2021. All Rights Reserved.
IBM Spectrum Protect Family Software
Licensed materials provided under the terms of the IBM International Program
License Agreement. See the Software licensing materials that came with the
IBM Program for terms and conditions.
U.S. Government Users Restricted Rights: Use, duplication or disclosure
restricted by GSA ADP Schedule Contract with IBM Corp.
----------------------------------------------------------------------------------------------
SPDX-License-Identifier: Apache-2.0
Description:
Monitoring and long-term reporting for IBM Spectrum Protect Plus.
Provides a data bridge from SPP to InfluxDB and provides visualization dashboards via Grafana.
This program provides functions to query IBM Spectrum Protect Plus Servers,
VSNAP, VADP and other servers via REST API and ssh. This data is stored in an InfluxDB database.
Repository:
https://github.com/IBM/spectrum-protect-sppmon
Author:
Daniel Wendler
Niels Korschinsky
changelog:
02/06/2020 version 0.2 tested with SPP 10.1.5
02/13/2020 version 0.25 improved debug logging & url encoding
02/20/2020 version 0.31 added function to add / modify REST API url parameters
added jobLogDetails function to capture joblogs and store in Influx DB
03/24/2020 version 0.4 migrated to Influxdb
04/15/2020 version 0.6 reworked all files with exception handling
04/16/2020 version 0.6.1 Hotfixing vmStats table
04/16/2020 version 0.6.2 Split into multiple tables
04/17/2020 version 0.6.3 New Catalog Statistics and Reduced JobLogs store to only Summary.
04/18/2020 version 0.6.4 Parsing two new JobLogID's
04/20/2020 version 0.6.4.1 Minor change to jobLogs
04/22/2020 version 0.6.5 Improved SSH Commands and added new Stats via ssh
04/23/2020 version 0.7 New module structure
04/27/2020 version 0.7.1 Fixes to index errors breaking the execution.
04/27/2020 version 0.7.2 Reintroduced all joblogs and added --minimumLogs
04/30/2020 version 0.8 Reworked Exception system, introduces arg grouping
05/07/2020 version 0.8.1 Part of the documentation and typing system, renamed program args
05/14/2020 version 0.8.2 Cleanup and full typing
05/18/2020 version 0.9 Documentation finished and some bugfixes.
05/19/2020 version 0.9.1 Moved future import into main file.
06/02/2020 version 0.9.2 Fixed df ssh command, introduced CLOUDPROXY and shortened ssh.py file.
06/03/2020 version 0.9.3 Introduces --hourly, grafana changes and small bugfixes
07/16/2020 version 0.9.4 Shift of the --joblogs to --daily as expected
07/16/2020 version 0.9.5 Dynamically shift of the pagesize for any kind of get-API requests
08/02/2020 version 0.10.0 Introducing Retention Policies and Continuous Queries, breaking old tables
08/25/2020 version 0.10.1 Fixes to Transfer Data, Parse Unit and Top-SSH-Command parsing
09/01/2020 version 0.10.2 Parse_Unit fixes (JobLogs) and adjustments on timeout
11/10/2020 version 0.10.3 Introduced --loadedSystem argument and moved --minimumLogs to deprecated
12/07/2020 version 0.10.4 Included SPP 10.1.6 additional job information features and some bugfixes
12/29/2020 version 0.10.5 Replaced ssh 'top' command by 'ps' command to bugfix truncating data
01/22/2021 version 0.10.6 Removed `--processStats`, integrated in `--ssh` plus Server/vSnap `df` root recording
01/22/2021 version 0.10.7 Replaced `transfer_data` by `copy_database` with improvements
01/28/2021 version 0.11 Copy_database now also creates the database with RP's if missing.
01/29/2021 version 0.12 Implemented --test function, also disabling regular setup on certain args
02/09/2021 version 0.12.1 Hotfix job statistic and --test now also checks for all commands individually
02/07/2021 version 0.13 Implemented additional Office365 Joblog parsing
02/10/2021 version 0.13.1 Fixes to partial send(influx), including influxdb version into stats
03/29/2021 version 0.13.2 Fixes to typing, reducing error messages and tracking code for NaN bug
07/06/2021 version 0.13.3 Hotfixing version endpoint for SPP 10.1.8.1
07/09/2021 version 0.13.4 Hotfixing storage exception, changing top-level exception handling to reduce the need for further hotfixes
08/06/2021 version 0.13.5 Fixing PS having unintuitive CPU-recording, reintroducing TOP to collect CPU information only
07/14/2021 version 0.13.6 Optimizing CQ's, reducing batch size and typo fix within cpuram table
07/27/2021 version 0.13.7 Streamlining --test arg and checking for GrafanaReader on InfluxSetup
08/02/2021 version 0.13.8 Enhancement and replacement of the ArgumentParser and clearer config-file error messages
08/10/2021 version 0.13.9 Rework of the JobLogs and fix of Log-Filter.
08/18/2021 version 0.14 Added install script and fixed typo in config file, breaking old config files.
08/22/2021 version 0.15 Added --fullLogs argument and reduced regular/loaded joblog query to SUMMARY-Only
08/25/2021 version 0.15.1 Replaced SLA-Endpoint by so-far unknown endpoint, bringing it in line with other api-requests.
08/27/2021 version 1.0.0 Release of SPPMon
08/27/2021 version 1.0.1 Reverted parts of the SLA-Endpoint change
08/31/2021 version 1.0.2 Changed VADP table definition to prevent drop of false duplicates
09/09/2021 version 1.1.0 Increase logging for REST-API errors, add ssh-client skip option for cfg file.
02/22/2021 version 1.1.1 Only ssh-calls the vSnap-api if it is available
"""
from __future__ import annotations
import functools
import logging
import os
import re
import subprocess
import sys
import time
from argparse import ArgumentError, ArgumentParser
from subprocess import CalledProcessError
from typing import Any, Dict, List, NoReturn, Optional, Union
from influx.influx_client import InfluxClient
from sppConnection.api_queries import ApiQueries
from sppConnection.rest_client import RestClient
from sppmonMethods.jobs import JobMethods
from sppmonMethods.protection import ProtectionMethods
from sppmonMethods.ssh import SshMethods
from sppmonMethods.system import SystemMethods
from sppmonMethods.testing import TestingMethods
from utils.connection_utils import ConnectionUtils
from utils.execption_utils import ExceptionUtils
from utils.methods_utils import MethodUtils
from utils.spp_utils import SppUtils
# Version:
VERSION = "1.1.1 (2021/02/22)"
# ----------------------------------------------------------------------------
# command line parameter parsing
# ----------------------------------------------------------------------------
parser = ArgumentParser(
# exit_on_error=False, TODO: Enable in python version 3.9
description=
"""Monitoring and long-term reporting for IBM Spectrum Protect Plus.
Provides a data bridge from SPP to InfluxDB and provides visualization dashboards via Grafana.
This program provides functions to query IBM Spectrum Protect Plus Servers,
VSNAP, VADP and other servers via REST API and ssh. This data is stored in an InfluxDB database.""",
epilog="For feature-requests or bug-reports please visit https://github.com/IBM/spectrum-protect-sppmon")
parser.add_argument("-v", '--version', action='version', version="Spectrum Protect Plus Monitoring (SPPMon) version " + VERSION)
parser.add_argument("--cfg", required=True, dest="configFile", help="REQUIRED: specify the JSON configuration file")
parser.add_argument("--verbose", dest="verbose", action="store_true", help="print to stdout")
parser.add_argument("--debug", dest="debug", action="store_true", help="save debug messages")
parser.add_argument("--test", dest="test", action="store_true", help="tests connection to all components")
parser.add_argument("--constant", dest="constant", action="store_true",
help="execute recommended constant functions: (ssh, cpu, sppCatalog)")
parser.add_argument("--hourly", dest="hourly", action="store_true",
help="execute recommended hourly functions: (constant + jobs, vadps, storages)")
parser.add_argument("--daily", dest="daily", action="store_true",
help="execute recommended daily functions: (hourly + joblogs, vms, slaStats, vmStats)")
parser.add_argument("--all", dest="all", action="store_true", help="execute all functions: (daily + sites)")
parser.add_argument("--jobs", dest="jobs", action="store_true", help="store job history")
parser.add_argument("--jobLogs", dest="jobLogs", action="store_true",
help="retrieve detailed information per job (job-sessions)")
parser.add_argument("--loadedSystem", dest="loadedSystem", action="store_true",
help="Special settings for loaded systems, increasing API-request timings.")
parser.add_argument("--fullLogs", dest="fullLogs", action="store_true",
help="Requesting any kind of Joblogs instead of the default SUMMARY-Logs.")
parser.add_argument("--ssh", dest="ssh", action="store_true", help="execute monitoring commands via ssh")
parser.add_argument("--vms", dest="vms", action="store_true", help="store vm statistics (hyperV, vmWare)")
parser.add_argument("--vmStats", dest="vmStats", action="store_true", help="calculate vm statistic from catalog data")
parser.add_argument("--slaStats", dest="slaStats", action="store_true", help="calculate vm's and applications per SLA")
parser.add_argument("--vadps", dest="vadps", action="store_true", help="store VADPs statistics")
parser.add_argument("--storages", dest="storages", action="store_true", help="store storages (vsnap) statistics")
parser.add_argument("--sites", dest="sites", action="store_true", help="store site settings")
parser.add_argument("--cpu", dest="cpu", action="store_true", help="capture SPP server CPU and RAM utilization")
parser.add_argument("--sppcatalog", dest="sppcatalog", action="store_true", help="capture Spp-Catalog Storage usage")
parser.add_argument("--copy_database", dest="copy_database",
help="Copy all data from .cfg database into a new database, specified by `copy_database=newName`. Delete old database with caution.")
# DEPRECATED AREA
# TODO: remove in Version 1.1.
parser.add_argument("--minimumLogs", dest="minimumLogs", action="store_true",
help="DEPRECATED, use '--loadedSystem' instead. To be removed in v1.1")
parser.add_argument("--processStats", dest="processStats", action="store_true",
help="DEPRECATED, use '--ssh' instead")
parser.add_argument("--create_dashboard", dest="create_dashboard", action="store_true",
help="DEPRECATED: Just import the regular dashboard instead, choose datasource within Grafana. To be removed in v1.1")
parser.add_argument("--dashboard_folder_path", dest="dashboard_folder_path",
help="DEPRECATED: Just import the regular dashboard instead, choose datasource within Grafana. To be removed in v1.1")
print = functools.partial(print, flush=True)
LOGGER_NAME = 'sppmon'
LOGGER = logging.getLogger(LOGGER_NAME)
ERROR_CODE_START_ERROR: int = 3
ERROR_CODE_CMD_ARGS: int = 2
ERROR_CODE: int = 1
SUCCESS_CODE: int = 0
try:
ARGS = parser.parse_args()
except SystemExit as exit_code:
if(exit_code.code != SUCCESS_CODE):
print("> Error when reading SPPMon arguments.", file=sys.stderr)
print("> Please make sure to specify a config file and check the spelling of your arguments.", file=sys.stderr)
print("> Use --help to display all argument options and requirements", file=sys.stderr)
exit(exit_code)
except ArgumentError as error:
print(error.message)
print("> Error when reading SPPMon arguments.", file=sys.stderr)
print("> Please make sure to specify a config file and check the spelling of your arguments.", file=sys.stderr)
print("> Use --help to display all argument options and requirements", file=sys.stderr)
exit(ERROR_CODE_CMD_ARGS)
class SppMon:
"""Main-File for the sppmon. Only general functions here and calls for sub-modules.
Attributes:
log_path - path to logger, set in set_logger
pid_file_path - path to pid_file, set in check_pid_file
config_file
See below for full list
Methods:
set_logger - Sets global logger for stdout and file logging.
set_critical_configs - Sets up any critical infrastructure.
set_optional_configs - Sets up any optional infrastructure.
store_script_metrics - Stores script metrics into influxdb.
exit - Executes finishing tasks and exits sppmon.
"""
# set class variables
MethodUtils.verbose = ARGS.verbose
SppUtils.verbose = ARGS.verbose
# ###### API-REST page settings ###### #
# ## IMPORTANT NOTES ## #
# please read the documentation before adjusting values.
# if unsure, contact the sppmon development team before adjusting
# ## Recommend changes for loaded systems ##
# Use --loadedSystem if sppmon causes big CPU spikes on your SPP-Server
# CAUTION: using --loadedSystem causes some data to not be recorded.
# all changes adjust settings to avoid double-running mongodb jobs.
# Hint: make sure SPP-mongodb tables are correctly indexed.
# Priority list for manual changes:
# Only if unable to connect at all:
# 1. Increase initial_connection_timeout
# Small/Medium Spikes:
# finetune `default` variables:
# 1. increase timeout while decreasing preferred send time (timeout disable: None)
# 2. increase timeout reduction (0-0.99)
# 3. decrease scaling factor (>1)
# Critical/Big Spikes:
# CAUTION Reduce Recording: causes less Joblogs-Types to be recorded
# 1. Enable `--loadedSystem`
# 2. finetune `loaded`-variables (see medium spikes 1-3)
# 3. Reduce JobLog-Types (min only `SUMMARY`)
# Other finetuning mechanics (no data-loss):
# 1. decrease allowed_send_delta (>=0)
# 2. decrease starting pagesize (>1)
# Pagesize size
starting_page_size: int = 50
"""starting page size for dynamical change within rest_client"""
loaded_starting_page_size: int = 10
"""starting page size for dynamical change within rest_client on loaded systems"""
min_page_size: int = 5
"""minimum size of a rest-api page"""
loaded_min_page_size: int = 1
"""minimum size of a rest-api page on loaded systems"""
# Increase / Decrease of pagesize
max_scaling_factor: float = 3.5
"""max scaling factor of the pagesize increase per request"""
loaded_max_scaling_factor: float = 2.0
"""max scaling factor of the pagesize increase per request for loaded systems"""
timeout_reduction: float = 0.7
"""reduce of the actual pagesize on timeout in percent"""
loaded_timeout_reduction: float = 0.95
"""reduce of the actual pagesize on timeout in percent on loaded systems"""
allowed_send_delta: float = 0.15
"""delta of send allowed before adjustments are made to the pagesize in %"""
loaded_allowed_send_delta: float = 0.15
"""delta of send allowed before adjustments are made to the pagesize in % on loaded systems"""
# Send time and timeouts
pref_send_time: int = 30
"""preferred query send time in seconds"""
loaded_pref_send_time: int = 30
"""desired send time per query in seconds for loaded systems"""
initial_connection_timeout: float = 6.05
"""Time spend waiting for the initial connection, slightly larger than 3 multiple"""
request_timeout: int | None = 60
"""timeout for api-requests, none deactivates timeout"""
loaded_request_timeout: int | None = 180
"""timeout on loaded systems, none deactivates timeout"""
max_send_retries: int = 3
"""Count of retries before failing request. Last one is min size. 0 to disable."""
loaded_max_send_retries: int = 1
"""Count of retries before failing request on loaded systems. Last one is min size. 0 to disable."""
# ## REST-CLIENT-OPTIONS ##
# Never observed debug-type
# possible options: '["INFO","DEBUG","ERROR","SUMMARY","WARN", "DETAIL"]'
joblog_types = ["SUMMARY"]
"""joblog query types on normal running systems"""
full_joblog_types = ["INFO", "DEBUG", "ERROR", "SUMMARY", "WARN", "DETAIL"]
"""jobLog types to be requested on full logs."""
# String, because of days etc.
# ### DATALOSS if turned down ###
job_log_retention_time = "60d"
"""Configured spp log rentation time, logs get deleted after this time."""
# set later in each method, here to avoid missing attribute
influx_client: Optional[InfluxClient] = None
rest_client: Optional[RestClient] = None
api_queries: Optional[ApiQueries] = None
system_methods: Optional[SystemMethods] = None
job_methods: Optional[JobMethods] = None
protection_methods: Optional[ProtectionMethods] = None
ssh_methods: Optional[SshMethods] = None
def __init__(self):
self.log_path: str = ""
"""path to logger, set in set_logger."""
self.pid_file_path: str = ""
"""path to pid_file, set in check_pid_file."""
self.set_logger()
LOGGER.info("Starting SPPMon")
if(not self.check_pid_file()):
ExceptionUtils.error_message("Another instance of sppmon with the same args is running")
self.exit(ERROR_CODE_START_ERROR)
time_stamp_name, time_stamp = SppUtils.get_capture_timestamp_sec()
self.start_counter = time.perf_counter()
LOGGER.debug("\n\n")
LOGGER.debug(f"running script version: {VERSION}")
LOGGER.debug(f"cmdline options: {ARGS}")
LOGGER.debug(f"{time_stamp_name}: {time_stamp}")
LOGGER.debug("")
if(not ARGS.configFile):
ExceptionUtils.error_message("missing config file, aborting")
self.exit(error_code=ERROR_CODE_CMD_ARGS)
try:
self.config_file = SppUtils.read_conf_file(config_file_path=ARGS.configFile)
except ValueError as error:
ExceptionUtils.exception_info(error=error, extra_message="Error when trying to read Config file, unable to read")
self.exit(error_code=ERROR_CODE_START_ERROR)
LOGGER.info("Setting up configurations")
self.setup_args()
self.set_critical_configs(self.config_file)
self.set_optional_configs(self.config_file)
def set_logger(self) -> None:
"""Sets global logger for stdout and file logging.
Changes logger acquired by LOGGER_NAME.
Raises:
ValueError: Unable to open logger
"""
self.log_path = SppUtils.mk_logger_file(ARGS.configFile, ".log")
try:
file_handler = logging.FileHandler(self.log_path)
except Exception as error:
# TODO here: Right exception, how to print this error?
print("unable to open logger", file=sys.stderr)
raise ValueError("Unable to open Logger") from error
file_handler_fmt = logging.Formatter(
'%(asctime)s:[PID %(process)d]:%(levelname)s:%(module)s.%(funcName)s> %(message)s')
file_handler.setFormatter(file_handler_fmt)
if(ARGS.debug):
file_handler.setLevel(logging.DEBUG)
else:
file_handler.setLevel(logging.ERROR)
stream_handler = logging.StreamHandler()
stream_handler.setLevel(logging.INFO)
logger = logging.getLogger(LOGGER_NAME)
logger.setLevel(logging.DEBUG)
logger.addHandler(file_handler)
logger.addHandler(stream_handler)
def check_pid_file(self) -> bool:
if(ARGS.verbose):
LOGGER.info("Checking for other SPPMon instances")
self.pid_file_path = SppUtils.mk_logger_file(ARGS.configFile, ".pid_file")
try:
try:
file = open(self.pid_file_path, "rt")
match_list = re.findall(r"(\d+) " + str(ARGS), file.read())
file.close()
deleted_processes: List[str] = []
for match in match_list:
# add spaces to make clear the whole number is matched
match = f' {match} '
try:
if(os.name == 'nt'):
args = ['ps', '-W']
else:
args = ['ps', '-p', match]
result = subprocess.run(args, check=True, capture_output=True)
if(re.search(match, str(result.stdout))):
return False
# not in there -> delete entry
deleted_processes.append(match)
except CalledProcessError as error:
deleted_processes.append(match)
# delete processes which did get killed, not often called
if(deleted_processes):
file = open(self.pid_file_path, "rt")
file_str = file.read()
file.close()
options = str(ARGS)
for pid in deleted_processes:
file_str = file_str.replace(f"{pid} {options}", "")
# do not delete if empty since we will use it below
file = open(self.pid_file_path, "wt")
file.write(file_str.strip())
file.close()
except FileNotFoundError:
pass # no file created yet
# always write your own pid into it
file = open(self.pid_file_path, "at")
file.write(f"{os.getpid()} {str(ARGS)}")
file.close()
return True
except Exception as error:
ExceptionUtils.exception_info(error)
raise ValueError("Error when checking pid file")
def remove_pid_file(self) -> None:
try:
file = open(self.pid_file_path, "rt")
file_str = file.read()
file.close()
new_file_str = file_str.replace(f"{os.getpid()} {str(ARGS)}", "").strip()
if(not new_file_str.strip()):
os.remove(self.pid_file_path)
else:
file = open(self.pid_file_path, "wt")
file.write(new_file_str)
file.close()
except Exception as error:
ExceptionUtils.exception_info(error, "Error when removing pid_file")
def set_critical_configs(self, config_file: Dict[str, Any]) -> None:
"""Sets up any critical infrastructure, to be called within the init.
Be aware that not everything may be initialized at call time.
Add config here if the system should abort if it is missing.
Arguments:
config_file {Dict[str, Any]} -- Opened Config file
"""
if(not config_file):
ExceptionUtils.error_message("missing or empty config file, aborting")
self.exit(error_code=ERROR_CODE_START_ERROR)
try:
# critical components only
self.influx_client = InfluxClient(config_file)
if(not self.ignore_setup):
# delay the connect into the testing phase
self.influx_client.connect()
except ValueError as err:
ExceptionUtils.exception_info(error=err, extra_message="error while setting up critical config. Aborting")
self.influx_client = None # set none, otherwise the variable is undeclared
self.exit(error_code=ERROR_CODE)
def set_optional_configs(self, config_file: Dict[str, Any]) -> None:
"""Sets up any optional infrastructure, to be called within the init.
Be aware that not everything may be initialized at call time.
Add config here if the system should not abort if it is missing.
Arguments:
config_file {Dict[str, Any]} -- Opened Config file
"""
if(not config_file):
ExceptionUtils.error_message("missing or empty config file, aborting.")
self.exit(error_code=ERROR_CODE_START_ERROR)
if(not self.influx_client):
ExceptionUtils.error_message("Influx client is somehow missing. aborting")
self.exit(error_code=ERROR_CODE)
# ############################ REST-API #####################################
try:
ConnectionUtils.verbose = ARGS.verbose
# ### Loaded Systems part 1/2 ### #
if(ARGS.minimumLogs or ARGS.loadedSystem):
# Setting pagesize scaling settings
ConnectionUtils.timeout_reduction = self.loaded_timeout_reduction
ConnectionUtils.allowed_send_delta = self.loaded_allowed_send_delta
ConnectionUtils.max_scaling_factor = self.loaded_max_scaling_factor
# Setting RestClient request settings.
self.rest_client = RestClient(
config_file=config_file,
initial_connection_timeout=self.initial_connection_timeout,
pref_send_time=self.loaded_pref_send_time,
request_timeout=self.loaded_request_timeout,
max_send_retries=self.loaded_max_send_retries,
starting_page_size=self.loaded_starting_page_size,
min_page_size=self.loaded_min_page_size,
verbose=ARGS.verbose
)
else:
ConnectionUtils.timeout_reduction = self.timeout_reduction
ConnectionUtils.allowed_send_delta = self.allowed_send_delta
ConnectionUtils.max_scaling_factor = self.max_scaling_factor
# Setting RestClient request settings.
self.rest_client = RestClient(
config_file=config_file,
initial_connection_timeout=self.initial_connection_timeout,
pref_send_time=self.pref_send_time,
request_timeout=self.request_timeout,
max_send_retries=self.max_send_retries,
starting_page_size=self.starting_page_size,
min_page_size=self.min_page_size,
verbose=ARGS.verbose
)
self.api_queries = ApiQueries(self.rest_client)
if(not self.ignore_setup):
# delay the connect into the testing phase
self.rest_client.login()
except ValueError as error:
ExceptionUtils.exception_info(error=error, extra_message="REST-API is not available due to a Config error")
# Required to declare variable
self.rest_client = None
self.api_queries = None
# ######################## System, Job and Hypervisor Methods ##################
try:
# set up explicitly ahead due to dependency
self.system_methods = SystemMethods(self.influx_client, self.api_queries, ARGS.verbose)
except ValueError as error:
ExceptionUtils.exception_info(error=error)
# ### Full Logs ### #
if(ARGS.fullLogs):
given_log_types = self.full_joblog_types
else:
given_log_types = self.joblog_types
try:
auth_rest: Dict[str, Any] = SppUtils.get_cfg_params(param_dict=config_file, param_name="sppServer") # type: ignore
# TODO DEPRECATED TO BE REMOVED IN 1.1
self.job_log_retention_time = auth_rest.get("jobLog_rentation", auth_rest.get("jobLog_retention", self.job_log_retention_time))
# TODO New once 1.1 is live
#self.job_log_retention_time = auth_rest.get("jobLog_retention", self.job_log_retention_time)
self.job_methods = JobMethods(
self.influx_client, self.api_queries, self.job_log_retention_time,
given_log_types, ARGS.verbose)
except ValueError as error:
ExceptionUtils.exception_info(error=error)
try:
# dependent on system methods
self.protection_methods = ProtectionMethods(self.system_methods, self.influx_client, self.api_queries,
ARGS.verbose)
except ValueError as error:
ExceptionUtils.exception_info(error=error)
# ############################### SSH #####################################
if(self.ssh and not self.ignore_setup):
try:
# set from None to methods once finished
self.ssh_methods = SshMethods(
influx_client=self.influx_client,
config_file=config_file,
verbose=ARGS.verbose)
except ValueError as error:
ExceptionUtils.exception_info(
error=error,
extra_message="SSH-Commands are not available due Config error")
# Variable needs to be declared
self.ssh_methods = None
else:
# Variable needs to be declared
self.ssh_methods = None
def setup_args(self) -> None:
"""This method set up all required parameters and transforms arg groups into individual args.
"""
# ## call functions based on cmdline parameters
# Temporary features / Deprecated
if(ARGS.minimumLogs):
ExceptionUtils.error_message(
"DEPRECATED: using deprecated argument '--minumumLogs'. Use to '--loadedSystem' instead.")
if(ARGS.processStats):
ExceptionUtils.error_message(
"DEPRECATED: using deprecated argument '--minumumLogs'. Use to '--ssh' instead.")
# ignore setup args
self.ignore_setup: bool = (
ARGS.create_dashboard or bool(ARGS.dashboard_folder_path) or
ARGS.test
)
if(self.ignore_setup):
ExceptionUtils.error_message("> WARNING: An option for a utility operation has been specified. Bypassing normal SPPMON operation.")
if((ARGS.create_dashboard or bool(ARGS.dashboard_folder_path)) and not
(ARGS.create_dashboard and bool(ARGS.dashboard_folder_path))):
ExceptionUtils.error_message("> Using --create_dashboard without associated folder path. Aborting.")
self.exit(ERROR_CODE_CMD_ARGS)
# incremental setup, higher executes all below
all_args: bool = ARGS.all
daily: bool = ARGS.daily or all_args
hourly: bool = ARGS.hourly or daily
constant: bool = ARGS.constant or hourly
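# e.g. `--daily` also enables all hourly and constant methods, so one daily
# run covers jobs/vadps/storages plus ssh/cpu/sppcatalog on top of the daily methods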
# ######## All Methods #################
self.sites: bool = ARGS.sites or all_args
# ######## Daily Methods ###############
self.vms: bool = ARGS.vms or daily
self.job_logs: bool = ARGS.jobLogs or daily
self.sla_stats: bool = ARGS.slaStats or daily
self.vm_stats: bool = ARGS.vmStats or daily
# ######## Hourly Methods ##############
self.jobs: bool = ARGS.jobs or hourly
self.vadps: bool = ARGS.vadps or hourly
self.storages: bool = ARGS.storages or hourly
# ssh vsnap pools ?
# ######## Constant Methods ############
self.ssh: bool = ARGS.ssh or constant
self.cpu: bool = ARGS.cpu or constant
self.spp_catalog: bool = ARGS.sppcatalog or constant
def store_script_metrics(self) -> None:
"""Stores script metrics into influxb. To be called before exit.
Does not raise any exceptions, skips if influxdb is missing.
"""
LOGGER.info("Storing script metrics")
try:
if(not self.influx_client):
raise ValueError("no influxClient set up")
insert_dict: Dict[str, Union[str, int, float, bool]] = {}
# add version nr, api calls are needed
insert_dict["sppmon_version"] = VERSION
insert_dict["influxdb_version"] = self.influx_client.version
if(self.rest_client):
try:
(version_nr, build) = self.rest_client.get_spp_version_build()
insert_dict["spp_version"] = version_nr
insert_dict["spp_build"] = build
except ValueError as error:
ExceptionUtils.exception_info(error=error, extra_message="could not query SPP version and build.")
# end total sppmon runtime
end_counter = time.perf_counter()
insert_dict['duration'] = int((end_counter-self.start_counter)*1000)
# add arguments of sppmon
for (key, value) in vars(ARGS).items():
# Value is either string, true or false/None
if(value):
insert_dict[key] = value
# save occurred errors
error_count = len(ExceptionUtils.stored_errors)
if(error_count > 0):
ExceptionUtils.error_message(f"total of {error_count} exception/s occurred")
insert_dict['errorCount'] = error_count
# save list as str if not empty
if(ExceptionUtils.stored_errors):
insert_dict['errorMessages'] = str(ExceptionUtils.stored_errors)
# get end timestamp
(time_key, time_val) = SppUtils.get_capture_timestamp_sec()
insert_dict[time_key] = time_val
# save the metrics
self.influx_client.insert_dicts_to_buffer(
table_name="sppmon_metrics",
list_with_dicts=[insert_dict]
)
self.influx_client.flush_insert_buffer()
LOGGER.info("Stored script metrics sucessfull")
# + 1 due the "total of x exception/s occured"
if(error_count + 1 < len(ExceptionUtils.stored_errors)):
ExceptionUtils.error_message(
"A non-critical error occured while storing script metrics. \n\
This error can't be saved into the DB, it's only displayed within the logs.")
except ValueError as error:
ExceptionUtils.exception_info(
error=error,
extra_message="Error when storing sppmon-metrics, skipping this step. Possible insert-buffer data loss")
def exit(self, error_code: int = SUCCESS_CODE) -> NoReturn:
"""Executes finishing tasks and exits sppmon. To be called every time.
Executes finishing tasks and displays error messages.
Specify an error code only if something went wrong.
Use Error codes specified at top of module.
Does NOT return.
Keyword Arguments:
error_code {int} -- Error code if an error occurred. (default: {0})
"""
# error with the command line arguments
# don't store runtime here
if(error_code == ERROR_CODE_CMD_ARGS):
parser.print_help()
sys.exit(ERROR_CODE_CMD_ARGS) # unreachable?
if(error_code == ERROR_CODE_START_ERROR):
ExceptionUtils.error_message("Error when starting SPPMon. Please review the errors above")
sys.exit(ERROR_CODE_START_ERROR)
script_end_time = SppUtils.get_actual_time_sec()
LOGGER.debug("Script end time: %d", script_end_time)
try:
if(not self.ignore_setup):
self.store_script_metrics()
if(self.influx_client):
self.influx_client.disconnect()
if(self.rest_client):
self.rest_client.logout()
except ValueError as error:
ExceptionUtils.exception_info(error=error, extra_message="Error occured while exiting sppmon")
error_code = ERROR_CODE
self.remove_pid_file()
# Both error-clauses are actually the same, but kept separate for the possibility of a split between error cases
# always last due to being true for any number != 0
if(error_code == ERROR_CODE or error_code):
ExceptionUtils.error_message("Error occured while executing sppmon")
elif(not self.ignore_setup):
LOGGER.info("\n\n!!! script completed !!!\n")
print(f"check log for details: grep \"PID {os.getpid()}\" {self.log_path} > sppmon.log.{os.getpid()}")
sys.exit(error_code)
def main(self):
LOGGER.info("Starting argument execution")
if(not self.influx_client):
ExceptionUtils.error_message("somehow no influx client is present even after init")
self.exit(ERROR_CODE)
# ##################### SYSTEM METHODS #######################
if(self.sites and self.system_methods):
try:
self.system_methods.sites()
self.influx_client.flush_insert_buffer()
except Exception as error:
ExceptionUtils.exception_info(
error=error,
extra_message="Top-level-error when requesting sites, skipping them all")
if(self.cpu and self.system_methods):
try:
self.system_methods.cpuram()
self.influx_client.flush_insert_buffer()
except Exception as error:
ExceptionUtils.exception_info(
error=error,
extra_message="Top-level-error when collecting cpu stats, skipping them all")
if(self.spp_catalog and self.system_methods):
try:
self.system_methods.sppcatalog()
self.influx_client.flush_insert_buffer()
except Exception as error:
ExceptionUtils.exception_info(
error=error,
extra_message="Top-level-error when collecting file system stats, skipping them all")
# ####################### JOB METHODS ########################
if(self.jobs and self.job_methods):
# store all jobs grouped by jobID
try:
self.job_methods.get_all_jobs()
self.influx_client.flush_insert_buffer()
except Exception as error:
ExceptionUtils.exception_info(
error=error,
extra_message="Top-level-error when requesting jobs, skipping them all")
if(self.job_logs and self.job_methods):
# store all job logs per job session instance
try:
self.job_methods.job_logs()
self.influx_client.flush_insert_buffer()
except Exception as error:
ExceptionUtils.exception_info(
error=error,
extra_message="Top-level-error when requesting job logs, skipping them all")
# ####################### SSH METHODS ########################
if(self.ssh and self.ssh_methods):
# execute ssh statements for, VSNAP, VADP, other ssh hosts
# store all job logs per job session instance
try:
self.ssh_methods.ssh()
self.influx_client.flush_insert_buffer()
except Exception as error:
ExceptionUtils.exception_info(
error=error,
extra_message="Top-level-error when excecuting ssh commands, skipping them all")
# ################### HYPERVISOR METHODS #####################
if(self.vms and self.protection_methods):
try:
self.protection_methods.store_vms()
self.influx_client.flush_insert_buffer()
except Exception as error:
ExceptionUtils.exception_info(
error=error,
extra_message="Top-level-error when requesting all VMs, skipping them all")
if(self.sla_stats and self.protection_methods):
# number of VMs per SLA and sla dumps
try:
self.protection_methods.vms_per_sla()
self.protection_methods.sla_dumps()
self.influx_client.flush_insert_buffer()
except Exception as error:
ExceptionUtils.exception_info(
error=error,
extra_message="Top-level-error when requesting and computing VMs per sla, skipping them all")
if(self.vm_stats and self.protection_methods):
# retrieve and calculate VM inventory summary
try:
self.protection_methods.create_inventory_summary()
self.influx_client.flush_insert_buffer()
except Exception as error:
ExceptionUtils.exception_info(
error=error,
extra_message="Top-level-error when creating inventory summary, skipping them all")
if(self.vadps and self.protection_methods):
try:
self.protection_methods.vadps()
self.influx_client.flush_insert_buffer()
except Exception as error:
ExceptionUtils.exception_info(
error=error,
extra_message="Top-level-error when requesting vadps, skipping them all")
if(self.storages and self.protection_methods):
try:
self.protection_methods.storages()
self.influx_client.flush_insert_buffer()
except Exception as error:
ExceptionUtils.exception_info(
error=error,
extra_message="Top-level-error when collecting storages, skipping them all")
# ###################### OTHER METHODS #######################
if(ARGS.copy_database):
try:
self.influx_client.copy_database(ARGS.copy_database)
except Exception as error:
ExceptionUtils.exception_info(
error=error,
extra_message="Top-level-error when coping database.")
# ################### NON-SETUP-METHODS #######################
if(ARGS.test):
try:
TestingMethods.test_connection(self.config_file, self.influx_client, self.rest_client)
except Exception as error:
ExceptionUtils.exception_info(
error=error,
extra_message="Top-level-error when testing connection.")
# DEPRECATED TODO REMOVE NEXT VERSION
if(ARGS.create_dashboard):
try:
ExceptionUtils.error_message(
"This method is deprecated. You do not need to manually create a dashboard anymore.\n" +
"Please just select the datasource when importing the regular 14-day dashboard in grafana.\n" +
"Devs may adjust their dashboard to be generic with the scripts/generifyDashboard.py script.")
except ValueError as error:
ExceptionUtils.exception_info(
error=error,
extra_message="Top-level-error when creating dashboard")
self.exit()
if __name__ == "__main__":
SppMon().main()
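# Hedged standalone sketch of the liveness probe inside check_pid_file (POSIX):
# a PID recorded in the pid file is treated as stale unless `ps -p <pid>`
# still lists it, e.g.:
#     result = subprocess.run(['ps', '-p', str(os.getpid())], check=True, capture_output=True)
#     assert str(os.getpid()) in result.stdout.decode()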
| 45.773645 | 151 | 0.637638 | 5,285 | 43,073 | 5.061684 | 0.160076 | 0.014056 | 0.013009 | 0.031102 | 0.366379 | 0.319726 | 0.277522 | 0.258084 | 0.228365 | 0.198236 | 0 | 0.022391 | 0.261742 | 43,073 | 940 | 152 | 45.82234 | 0.818862 | 0.258654 | 0 | 0.305085 | 0 | 0.009416 | 0.190801 | 0.005857 | 0 | 0 | 0 | 0.005319 | 0 | 1 | 0.018832 | false | 0.003766 | 0.048964 | 0 | 0.124294 | 0.022599 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
490ca7e94ecfb4b273fd5bfd125b3fd79de65077 | 2,254 | py | Python | plugins/alphabet_index_plugin.py | Sakartu/stringinfo | d1e98587486c7ae27c8e619aeb3e703b2f3af928 | [
"MIT"
] | null | null | null | plugins/alphabet_index_plugin.py | Sakartu/stringinfo | d1e98587486c7ae27c8e619aeb3e703b2f3af928 | [
"MIT"
] | null | null | null | plugins/alphabet_index_plugin.py | Sakartu/stringinfo | d1e98587486c7ae27c8e619aeb3e703b2f3af928 | [
"MIT"
] | null | null | null | import string
import textwrap
import binascii
from veryprettytable import VeryPrettyTable
from plugins import BasePlugin
__author__ = 'peter'
class AlphabetIndexPlugin(BasePlugin):
short_description = 'Use the (hex or digit) input(s) as index in the alphabet'
header = 'As index in the alphabet:'
default = False
description = textwrap.dedent('''\
This plugin only works when (at least one of the) string(s) is either a concatenation of decimals, or a hex string.
This plugin will try to use character pairs from the string as indexes in the alphabet, such that 01 is 'a',
02 is 'b', etc. If the string is a hexstring the pairs will first be converted to integers. The index will be used
modulo the length of the alphabet.''')
key = '--alphabet-index'
def sentinel(self):
for s in self.args['STRING']:
try:
list(map(int, s))  # list() forces evaluation; a bare map() is lazy and would never raise here
return True
except ValueError:
pass
try:
map(int, binascii.unhexlify(s))
return True
except ValueError:
continue
return False
def handle(self):
t = VeryPrettyTable(field_names=('String', 'Method', 'Base', 'Output'))
for s in self.args['STRING']:
alphabet = string.ascii_lowercase
try:
t.add_row((repr(s), 'dec', '0-based', ''.join(alphabet[int(x) % len(alphabet)] for x in self._chunks(s, 2))))
t.add_row((repr(s), 'dec', '1-based', ''.join(alphabet[(int(x) - 1) % len(alphabet)] for x in self._chunks(s, 2))))
except ValueError:
pass
try:
indexes = [int(x) for x in binascii.unhexlify(s)]
t.add_row((repr(s), 'hex', '0-based', ''.join(alphabet[int(x) % len(alphabet)] for x in indexes)))
t.add_row((repr(s), 'hex', '1-based', ''.join(alphabet[(int(x) - 1) % len(alphabet)] for x in indexes)))
except ValueError as e:
if self.args['--verbose']:
print(e)
t.align = 'l'
return t.get_string()
@staticmethod
def _chunks(s, n):
for i in range(0, len(s), n):
yield s[i:i+n] | 35.21875 | 131 | 0.566992 | 300 | 2,254 | 4.21 | 0.373333 | 0.015835 | 0.023753 | 0.034838 | 0.308789 | 0.234363 | 0.144101 | 0.144101 | 0.144101 | 0.125099 | 0 | 0.008409 | 0.314108 | 2,254 | 64 | 132 | 35.21875 | 0.808538 | 0 | 0 | 0.254902 | 0 | 0.058824 | 0.255876 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0.039216 | 0.098039 | 0 | 0.352941 | 0.019608 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
490d6a486ee1c450d2772bba0c198bc52982fa44 | 2,861 | py | Python | dual_pixels/eval/script.py | deepneuralmachine/google-research | d2ce2cf0f5c004f8d78bfeddf6e88e88f4840231 | [
"Apache-2.0"
] | 23,901 | 2018-10-04T19:48:53.000Z | 2022-03-31T21:27:42.000Z | dual_pixels/eval/script.py | deepneuralmachine/google-research | d2ce2cf0f5c004f8d78bfeddf6e88e88f4840231 | [
"Apache-2.0"
] | 891 | 2018-11-10T06:16:13.000Z | 2022-03-31T10:42:34.000Z | dual_pixels/eval/script.py | deepneuralmachine/google-research | d2ce2cf0f5c004f8d78bfeddf6e88e88f4840231 | [
"Apache-2.0"
] | 6,047 | 2018-10-12T06:31:02.000Z | 2022-03-31T13:59:28.000Z | # coding=utf-8
# Copyright 2021 The Google Research Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Lint as: python3
"""Script to evaluate model predictions against the ground truth."""
import glob
import os
from absl import app
from absl import flags
import numpy as np
from PIL import Image
import tensorflow.compat.v2 as tf
from dual_pixels.eval import get_metrics
flags.DEFINE_string('test_dir', 'test/', 'Path to test dataset.')
flags.DEFINE_string('prediction_dir', 'model_prediction/',
'Path to model predictions.')
FLAGS = flags.FLAGS
# Crop over which we do evaluation.
CROP_HEIGHT = 512
CROP_WIDTH = 384
def get_captures():
"""Gets a list of captures."""
depth_dir = os.path.join(FLAGS.test_dir, 'merged_depth')
return [
name for name in os.listdir(depth_dir)
if os.path.isdir(os.path.join(depth_dir, name))
]
def load_capture(capture_name):
"""Loads the ground truth depth, confidence and prediction for a capture."""
# Assume that we are loading the center capture.
# Load GT Depth.
depth_dir = os.path.join(FLAGS.test_dir, 'merged_depth')
gt_depth_path = glob.glob(
os.path.join(depth_dir, capture_name, '*_center.png'))[0]
gt_depth = Image.open(gt_depth_path)
gt_depth = np.asarray(gt_depth, dtype=np.float32) / 255.0
# Load GT Depth confidence.
depth_conf_dir = os.path.join(FLAGS.test_dir, 'merged_conf')
gt_depth_conf_path = glob.glob(
os.path.join(depth_conf_dir, capture_name, '*_center.npy'))[0]
gt_depth_conf = np.load(gt_depth_conf_path)
# Load prediction.
prediction_path = glob.glob(
os.path.join(FLAGS.prediction_dir, capture_name + '.npy'))[0]
prediction = np.load(prediction_path)
return prediction, gt_depth, gt_depth_conf
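# Expected on-disk layout, as inferred from the glob patterns above
# (illustrative, not authoritative):
#   <test_dir>/merged_depth/<capture>/*_center.png   ground-truth depth
#   <test_dir>/merged_conf/<capture>/*_center.npy    depth confidence
#   <prediction_dir>/<capture>.npy                   model prediction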
def main(argv):
del argv # Unused.
tf.enable_v2_behavior()
captures = get_captures()
loss_dict = {'wmae': [], 'wrmse': [], 'spearman': []}
for capture in captures:
print(capture)
pred, depth_gt, conf_gt = load_capture(capture)
losses = get_metrics.metrics(pred, depth_gt, conf_gt, CROP_HEIGHT,
CROP_WIDTH)
for loss_name, loss in loss_dict.items():
loss.append(losses[loss_name].numpy())
for loss_name, loss in loss_dict.items():
loss_dict[loss_name] = np.mean(loss)
print(loss_dict)
if __name__ == '__main__':
app.run(main)
| 32.511364 | 78 | 0.715834 | 435 | 2,861 | 4.521839 | 0.370115 | 0.042705 | 0.035587 | 0.030503 | 0.158617 | 0.130656 | 0.119471 | 0.092018 | 0.076258 | 0.041688 | 0 | 0.011465 | 0.176861 | 2,861 | 87 | 79 | 32.885057 | 0.823779 | 0.315274 | 0 | 0.078431 | 0 | 0 | 0.093084 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.156863 | 0 | 0.254902 | 0.039216 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
490e0687f15f0c40c71f2ec913d5eaffdedf33ee | 1,311 | py | Python | arithmetic_analysis/newton_forward_interpolation.py | NavpreetDevpuri/Python | 7ef5ae66d777e8ed702993c6aa9270e0669cb0c6 | [
"MIT"
] | 145,614 | 2016-07-21T05:40:05.000Z | 2022-03-31T22:17:22.000Z | arithmetic_analysis/newton_forward_interpolation.py | NavpreetDevpuri/Python | 7ef5ae66d777e8ed702993c6aa9270e0669cb0c6 | [
"MIT"
] | 3,987 | 2016-07-28T17:31:25.000Z | 2022-03-30T23:07:46.000Z | arithmetic_analysis/newton_forward_interpolation.py | NavpreetDevpuri/Python | 7ef5ae66d777e8ed702993c6aa9270e0669cb0c6 | [
"MIT"
] | 40,014 | 2016-07-26T15:14:41.000Z | 2022-03-31T22:23:03.000Z | # https://www.geeksforgeeks.org/newton-forward-backward-interpolation/
from __future__ import annotations
import math
# for calculating u value
def ucal(u: float, p: int) -> float:
"""
>>> ucal(1, 2)
0
>>> ucal(1.1, 2)
0.11000000000000011
>>> ucal(1.2, 2)
0.23999999999999994
"""
temp = u
for i in range(1, p):
temp = temp * (u - i)
return temp
def main() -> None:
n = int(input("enter the numbers of values: "))
y: list[list[float]] = []
for i in range(n):
y.append([])
for i in range(n):
for j in range(n):
y[i].append(j)
y[i][j] = 0
print("enter the values of parameters in a list: ")
x = list(map(int, input().split()))
print("enter the values of corresponding parameters: ")
for i in range(n):
y[i][0] = float(input())
value = int(input("enter the value to interpolate: "))
u = (value - x[0]) / (x[1] - x[0])
# for calculating forward difference table
for i in range(1, n):
for j in range(n - i):
y[j][i] = y[j + 1][i - 1] - y[j][i - 1]
summ = y[0][0]
for i in range(1, n):
summ += (ucal(u, i) * y[0][i]) / math.factorial(i)
print(f"the value at {value} is {summ}")
if __name__ == "__main__":
main()
| 22.603448 | 70 | 0.532418 | 204 | 1,311 | 3.362745 | 0.308824 | 0.081633 | 0.052478 | 0.09621 | 0.21137 | 0.112245 | 0 | 0 | 0 | 0 | 0 | 0.063991 | 0.29672 | 1,311 | 57 | 71 | 23 | 0.680043 | 0.171625 | 0 | 0.15625 | 0 | 0 | 0.178435 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.0625 | 0 | 0.15625 | 0.09375 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
490e2bf10efe7d88411e47548245b92cef012ee9 | 4,682 | py | Python | deeplearning_logger/keras/keras_logger.py | hectorLop/DeepLearning_Logger | a8ce450e423a7f26c4212f0e9f0f041cc7f55117 | [
"MIT"
] | 1 | 2021-03-16T11:10:08.000Z | 2021-03-16T11:10:08.000Z | deeplearning_logger/keras/keras_logger.py | hectorLop/DeepLearning_Logger | a8ce450e423a7f26c4212f0e9f0f041cc7f55117 | [
"MIT"
] | 2 | 2021-03-14T19:47:10.000Z | 2021-06-09T18:36:26.000Z | deeplearning_logger/keras/keras_logger.py | hectorLop/DeepLearning_Logger | a8ce450e423a7f26c4212f0e9f0f041cc7f55117 | [
"MIT"
] | null | null | null | import json
from datetime import date, datetime
from typing import Dict, List
from deeplearning_logger.keras.configs import *
from deeplearning_logger.json import ConfigsJSONEncoder
class Experiment():
"""
Deeplearning Experiment
Parameters
----------
experiment_path : str
Path which contains the experiment files.
name : str
Experiment name.
configs : List
Configurations list.
Attributes
----------
_description : str
Experiment description.
    _datetime : datetime
Datetime the experiment was performed.
_name : str
Experiment name.
_experiment_path : str
Path which contains the experiment files.
_configs : List
Configurations list.
"""
_CONFIGS_EQUIVALENCES = {
'metrics': MetricsConfig,
'model': ModelConfig,
'callbacks': CallbackConfig
}
def __init__(self, experiment_path: str, name: str, configs: List[Config],
description: str='', experiment_datetime : date = None) -> None:
self._description = description
if experiment_datetime is None:
self._datetime = datetime.now()
else:
self._datetime = experiment_datetime
self._name = name
self._experiment_path = experiment_path
self._configs = configs
@classmethod
def by_config_files(cls, path: str, config_info_file: str,
config_data_file: str):
print(config_data_file)
print(config_info_file)
experiment_data = cls._parse_config_file(config_data_file)
experiment_config = cls._parse_config_file(config_info_file)
        # avoid shadowing the imported datetime class with the parsed value
        name, description, exp_datetime = cls._parse_experiment_config(
            experiment_config)
        configs = cls._parse_experiment_data(experiment_data)
        return cls(experiment_path=path, name=name,
                   description=description, experiment_datetime=exp_datetime,
                   configs=configs)
@classmethod
def _parse_config_file(cls, config_file):
with open(config_file, 'r') as file:
config = json.load(file)
return config
@classmethod
def _parse_experiment_data(cls, experiment_data: Dict):
configs_list = [(key, value) for key, value in experiment_data.items()]
configs = []
for config in configs_list:
configs.append(cls._CONFIGS_EQUIVALENCES[config[0]](config[1]))
return configs
@classmethod
def _parse_experiment_config(cls, experiment_config: Dict):
new_name = experiment_config['name']
new_description = experiment_config['description']
new_datetime = datetime.strptime(experiment_config['datetime'],
'%Y-%m-%dT%H:%M:%S')
return new_name, new_description, new_datetime
def register_experiment(self) -> None:
"""
Registers the experiment data
"""
# Obtains the experiment configs
experiment_config = self._create_experiment_config()
experiment_data = {}
for config_element in self._configs:
if not isinstance(config_element, Config):
raise ValueError(
f'{config_element.__class__.__name__} is'\
' not a Config object')
name, data = config_element.config
experiment_data[name] = data
# Write the experiments config
self._write_json_file('experiment_config.json', experiment_config)
self._write_json_file('experiment_data.json', experiment_data)
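    # Minimal usage sketch (illustrative; the exact constructor arguments of
    # ModelConfig/MetricsConfig live in deeplearning_logger.keras.configs and
    # are an assumption here):
    #
    #   configs = [ModelConfig(model), MetricsConfig(history.history)]
    #   experiment = Experiment('./runs/', 'baseline', configs,
    #                           description='first run')
    #   experiment.register_experiment()  # writes experiment_config.json
    #                                     # and experiment_data.json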
def _create_experiment_config(self) -> Dict:
"""
Creates the experiment configuration dictionary
Returns
-------
experiment_config : Dict
Dictionary containing the experiment description
"""
datetime_str = self._datetime.strftime('%Y-%m-%dT%H:%M:%S')
experiment_config = {
'name': self._name,
'description': self._description,
'datetime': datetime_str
}
return experiment_config
def _write_json_file(self, filename: str, config_data: Dict) -> None:
"""
Write a given configuration data into a JSON file.
Parameters
----------
filename : str
JSON filename
config_data : Dict
            Data to write into the JSON file
"""
with open(self._experiment_path + filename, 'w') as outfile:
json.dump(config_data, outfile, indent=4, cls=ConfigsJSONEncoder) | 31.85034 | 81 | 0.613627 | 472 | 4,682 | 5.800847 | 0.21822 | 0.093499 | 0.018627 | 0.01534 | 0.084733 | 0.067202 | 0.037984 | 0.037984 | 0.037984 | 0 | 0 | 0.000921 | 0.304144 | 4,682 | 147 | 82 | 31.85034 | 0.839472 | 0.183682 | 0 | 0.052632 | 0 | 0 | 0.056751 | 0.015935 | 0 | 0 | 0 | 0 | 0 | 1 | 0.105263 | false | 0 | 0.065789 | 0 | 0.263158 | 0.026316 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
490e8684c3549af07eba127b1f48a3002016dcb5 | 1,309 | py | Python | S4/S4 Library/simulation/interactions/join_liability.py | NeonOcean/Environment | ca658cf66e8fd6866c22a4a0136d415705b36d26 | [
"CC-BY-4.0"
] | 1 | 2021-05-20T19:33:37.000Z | 2021-05-20T19:33:37.000Z | S4/S4 Library/simulation/interactions/join_liability.py | NeonOcean/Environment | ca658cf66e8fd6866c22a4a0136d415705b36d26 | [
"CC-BY-4.0"
] | null | null | null | S4/S4 Library/simulation/interactions/join_liability.py | NeonOcean/Environment | ca658cf66e8fd6866c22a4a0136d415705b36d26 | [
"CC-BY-4.0"
] | null | null | null | import weakref
from interactions.interaction_finisher import FinishingType
from interactions.liability import Liability
JOIN_INTERACTION_LIABILITY = 'JoinInteractionLiability'
class JoinInteractionLiability(Liability):
def __init__(self, join_interaction, **kwargs):
super().__init__(**kwargs)
self._join_interaction_refs = [weakref.ref(join_interaction)]
self._owning_interaction_ref = None
def merge(self, interaction, key, new_liability):
self._join_interaction_refs.extend(new_liability._join_interaction_refs)
return self
def on_add(self, interaction):
self._owning_interaction_ref = weakref.ref(interaction)
def release(self):
for join_ref in self._join_interaction_refs:
            join_interaction = join_ref()
if join_interaction is not None:
finishing_type = FinishingType.LIABILITY
owning_interaction = self._owning_interaction_ref() if self._owning_interaction_ref is not None else None
if owning_interaction is not None:
finishing_type = owning_interaction.finishing_type
join_interaction.cancel(finishing_type, cancel_reason_msg='Linked join interaction has finished/been cancelled.')
| 45.137931 | 129 | 0.728801 | 149 | 1,309 | 6.033557 | 0.295302 | 0.183537 | 0.084538 | 0.106785 | 0.239155 | 0.122358 | 0.048943 | 0 | 0 | 0 | 0 | 0 | 0.20932 | 1,309 | 28 | 130 | 46.75 | 0.868599 | 0 | 0 | 0 | 0 | 0 | 0.05806 | 0.018335 | 0 | 0 | 0 | 0 | 0 | 1 | 0.173913 | false | 0 | 0.130435 | 0 | 0.391304 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
490ecccd2c0aa304c71ff85fb704053142b2484c | 9,698 | py | Python | main.py | GzuPark/staring-contest-game | bb7c75cd1e645b4905e82ff08f2b230d77699d9c | [
"BSD-2-Clause"
] | null | null | null | main.py | GzuPark/staring-contest-game | bb7c75cd1e645b4905e82ff08f2b230d77699d9c | [
"BSD-2-Clause"
] | 1 | 2021-11-09T08:25:56.000Z | 2021-11-09T08:25:56.000Z | main.py | GzuPark/staring-contest-game | bb7c75cd1e645b4905e82ff08f2b230d77699d9c | [
"BSD-2-Clause"
] | null | null | null | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import argparse
import os
import time
from collections import OrderedDict
import cv2
import dlib
import numpy as np
from scipy.spatial import distance
class ObjectTracker():
def __init__(self, max_disappeared=50):
self.next_obj_id = 0
self.objects = OrderedDict()
self.rects = OrderedDict()
self.disappeared = OrderedDict()
self.max_disappeared = max_disappeared
def register(self, centroid, rect):
self.objects[self.next_obj_id] = centroid
self.rects[self.next_obj_id] = rect
self.disappeared[self.next_obj_id] = 0
self.next_obj_id += 1
def deregister(self, obj_id):
del self.objects[obj_id]
del self.rects[obj_id]
del self.disappeared[obj_id]
def update(self, rects):
if len(rects) == 0:
            for obj_id in list(self.disappeared.keys()):
                # age every tracked object on an empty frame; without this
                # increment, objects would never expire when no faces are seen
                self.disappeared[obj_id] += 1
                if self.disappeared[obj_id] > self.max_disappeared:
                    self.deregister(obj_id)
return self.objects, self.rects
input_centroids = np.zeros((len(rects), 2), dtype="int")
input_rects = np.zeros((len(rects), 4), dtype="int")
for i, (start_x, start_y, end_x, end_y) in enumerate(rects):
centroid_x = int((start_x + end_x) / 2.0)
centroid_y = int((start_y + end_y) / 2.0)
input_centroids[i] = (centroid_x, centroid_y)
input_rects[i] = (start_x, start_y, end_x, end_y)
if len(self.objects) == 0:
for i in range(len(input_centroids)):
self.register(input_centroids[i], input_rects[i])
else:
obj_ids = list(self.objects.keys())
obj_centroids = list(self.objects.values())
dist = distance.cdist(np.array(obj_centroids), input_centroids)
rows = dist.min(axis=1).argsort()
cols = dist.argmin(axis=1)[rows]
used_rows = set()
used_cols = set()
for (row, col) in zip(rows, cols):
if (row in used_rows) or (col in used_cols):
continue
obj_id = obj_ids[row]
self.objects[obj_id] = input_centroids[col]
self.rects[obj_id] = input_rects[col]
self.disappeared[obj_id] = 0
used_rows.add(row)
used_cols.add(col)
unused_rows = set(range(dist.shape[0])).difference(used_rows)
unused_cols = set(range(dist.shape[1])).difference(used_cols)
if dist.shape[0] >= dist.shape[1]:
for row in unused_rows:
obj_id = obj_ids[row]
self.disappeared[obj_id] += 1
if self.disappeared[obj_id] > self.max_disappeared:
self.deregister(obj_id)
else:
for col in unused_cols:
self.register(input_centroids[col], input_rects[col])
return self.objects, self.rects
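def _tracker_demo():
    # Illustrative sketch, not used by the game loop: IDs persist while
    # centroids stay close between frames, and an object missing for more
    # than `max_disappeared` consecutive empty frames is deregistered.
    ot = ObjectTracker(max_disappeared=2)
    ot.update([(0, 0, 10, 10), (50, 50, 60, 60)])  # registers IDs 0 and 1
    objects, rects = ot.update([(1, 1, 11, 11)])   # ID 0 follows the moved box
    print(objects)  # ID 1 is kept but its disappeared counter starts ticking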
def face_detector(model):
realpath = os.path.dirname(os.path.realpath(__file__))
model_path = os.path.join(realpath, "models", model)
faces = dlib.get_frontal_face_detector()
landmarks = dlib.shape_predictor(model_path)
return faces, landmarks
def landmarks_detector(img, face, detector):
rect = dlib.rectangle(
int(face.left()),
int(face.top()),
int(face.right()),
int(face.bottom()),
)
landmarks = detector(img, rect)
return landmarks, [int(face.left()), int(face.top()), int(face.right()), int(face.bottom())]
def eye_aspect_ratio(points):
A = distance.euclidean(points[1], points[5])
B = distance.euclidean(points[2], points[4])
C = distance.euclidean(points[0], points[3])
EAR = (A + B) / (2.0 * C)
return EAR
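# Worked example (illustrative): with an open eye the vertical gaps are
# roughly a quarter of the horizontal span, e.g. A = B = 10 and C = 40 give
# EAR = (10 + 10) / (2 * 40) = 0.25; during a blink A and B collapse toward
# zero, dropping EAR below the --ratio threshold (0.2 by default).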
def play(img, player_face_rect, landmarks, **info_eye):
points, rect = landmarks_detector(img, player_face_rect, landmarks)
ear = check_blink(points, **info_eye)
return ear, rect
def shape_eye(landmarks, position):
result = []
for i in range(len(position)):
x = landmarks.parts()[position[i]].x
y = landmarks.parts()[position[i]].y
result.append((x, y))
return result
def check_blink(landmarks, left_eye, right_eye):
shape_left_eye = shape_eye(landmarks, left_eye)
shape_right_eye = shape_eye(landmarks, right_eye)
left_ear = eye_aspect_ratio(shape_left_eye)
right_ear = eye_aspect_ratio(shape_right_eye)
ear = (left_ear + right_ear) / 2.0
return ear
def run(args):
if args.test is True:
time.sleep(1)
return
else:
cap = cv2.VideoCapture(args.cam_id)
faces, landmarks = face_detector(args.model)
ot = ObjectTracker()
info_eye = {
'left_eye': list(range(36, 42, 1)),
'right_eye': list(range(42, 48, 1)),
}
previous_time = 0
if (cap.isOpened() is False):
print("Error opening video stream.")
else:
print("Press 'q' if you want to quit.")
player1_blink = False
player2_blink = False
winner = ""
game = OrderedDict()
announce = ""
judge_time = 4
while(cap.isOpened()):
ret, img = cap.read()
current_time = time.time()
if ret is True:
flipped_img = cv2.flip(cv2.resize(img, None, fx=0.5, fy=0.5, interpolation=cv2.INTER_AREA), 1)
face_rectangles = faces(flipped_img, 0)
# H, W, C = flipped_img.shape
if len(face_rectangles) == 2:
rects = []
players_ear = []
dummy_rect = []
for i, face in enumerate(face_rectangles):
rect = [int(face.left()), int(face.top()), int(face.right()), int(face.bottom())]
rects.append(rect)
ear, _rect = play(flipped_img, face, landmarks, **info_eye)
players_ear.append(ear)
dummy_rect.append(_rect)
objects, rectangles = ot.update(rects)
for obj_id, rect in rectangles.items():
cv2.rectangle(
flipped_img,
(rect[0], rect[1]),
(rect[2], rect[3]),
(255 * obj_id, 255 * ((obj_id - 1) % 2), 0),
1
)
for ear, _rect in zip(players_ear, dummy_rect):
if (_rect[0] == rect[0]) and (_rect[1] == rect[1]) and (_rect[2] == rect[2]) and (_rect[3] == rect[3]):
text = "Player {}".format(obj_id + 1)
cv2.putText(
flipped_img, text,
(rect[0] - 10, rect[1] - 10),
cv2.FONT_HERSHEY_SIMPLEX,
0.5,
(255 * obj_id, 255 * ((obj_id - 1) % 2), 0),
1
)
game[obj_id] = ear
else:
pass
players = list(game.keys())
if game[players[0]] < args.ratio:
player1_blink = True
else:
player1_blink = False
if game[players[1]] < args.ratio:
player2_blink = True
else:
player2_blink = False
if (player1_blink is False) and (player2_blink is True):
winner = "Winner is the Player1"
judge = "[INFO] " + winner
judge_time = time.time()
if announce != judge:
announce = judge
print(announce)
elif (player1_blink is True) and (player2_blink is False):
winner = "Winner is the Player2"
judge = "[INFO] " + winner
judge_time = time.time()
if announce != judge:
announce = judge
print(announce)
else:
judge = "[Warning] Need only 2 players"
judge_time = time.time() - 4
if announce != judge:
announce = judge
print(announce)
# update time
diff_time = current_time - previous_time
previous_time = current_time
fps = "FPS: {:.1f}".format(1 / diff_time)
cv2.putText(flipped_img, fps, (20, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255))
cv2.putText(flipped_img, winner, (20, 60), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255))
cv2.imshow("Staring Contests", flipped_img)
if time.time() - judge_time > 3:
winner = ""
if cv2.waitKey(1) & 0xFF == ord('q'):
break
else:
break
def get_args():
parser = argparse.ArgumentParser()
parser.add_argument("-m", "--model", type=str, default="shape_predictor_68_face_landmarks.dat", help="dlib detector model")
parser.add_argument("-c", "--cam_id", type=int, default=0, help="webcam ID")
parser.add_argument("-r", "--ratio", type=float, default=0.2, help="EAR threshold")
parser.add_argument("-t", "--test", action="store_true", help="to interrupt for testing")
args = parser.parse_args()
return args
def main():
args = get_args()
run(args)
if __name__ == '__main__':
main()
| 32.543624 | 127 | 0.531244 | 1,147 | 9,698 | 4.292938 | 0.197036 | 0.027417 | 0.01117 | 0.013201 | 0.170796 | 0.135053 | 0.120431 | 0.112104 | 0.112104 | 0.089764 | 0 | 0.025551 | 0.3543 | 9,698 | 297 | 128 | 32.653199 | 0.760779 | 0.004021 | 0 | 0.176211 | 0 | 0 | 0.037697 | 0.003832 | 0 | 0 | 0.000414 | 0 | 0 | 1 | 0.057269 | false | 0.004405 | 0.048458 | 0 | 0.154185 | 0.026432 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
490eda150dbc2f22caa55c27010aa854ffbce9a7 | 2,712 | py | Python | src/problems/min_lateness_same_due_date.py | adrigrillo/ds | 3b1ebc5228f61aed02deb0bb9cd9f9b206e5055d | [
"MIT"
] | null | null | null | src/problems/min_lateness_same_due_date.py | adrigrillo/ds | 3b1ebc5228f61aed02deb0bb9cd9f9b206e5055d | [
"MIT"
] | null | null | null | src/problems/min_lateness_same_due_date.py | adrigrillo/ds | 3b1ebc5228f61aed02deb0bb9cd9f9b206e5055d | [
"MIT"
] | null | null | null | from collections import OrderedDict
from typing import Dict, Tuple, List
def calculate_lateness(time: int, job_process_time: int, due_date_job: int) -> int:
"""
    Method that calculates the lateness of a job. `L_j = C_j - d_j`
:param time: actual time of the execution
:param job_process_time: time required to process the job
:param due_date_job: due date to finish the job
:return: lateness
"""
time_after_job = time + job_process_time
return time_after_job - due_date_job
def calculate_tardiness(time: int, job_process_time: int, due_date_job: int) -> int:
"""
    Method that calculates the tardiness of a job. `T_j = max(L_j, 0) = max(C_j - d_j, 0)`
:param time: actual time of the execution
:param job_process_time: time required to process the job
:param due_date_job: due date to finish the job
:return: tardiness
"""
lateness = calculate_lateness(time, job_process_time, due_date_job)
return max(0, lateness)
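# Worked example (illustrative): starting a 5-unit job at time 0 with due
# date 3 gives lateness 0 + 5 - 3 = 2 and tardiness max(2, 0) = 2; with due
# date 10 the lateness is -5 (the job finishes early) while the tardiness is
# clipped to 0.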
def obtain_optimal_schedule_common_due_date(jobs: Dict[int, int], due_date: int) -> Tuple[List[int], int]:
"""
Method that calculates the optimal execution of jobs with the same due date that
minimizes the total tardiness of the set.
:param jobs: dictionary with the jobs and its processing times
:param due_date: due date for processing the jobs
:return: list with the jobs to execute and total tardiness
"""
sorted_jobs = OrderedDict(sorted(jobs.items(), key=lambda kv: kv[1]))
tardiness = calculate_tardiness_jobs(sorted_jobs, due_date)
return list(sorted_jobs.keys()), tardiness
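# Worked example (illustrative): jobs {1: 4, 2: 2, 3: 6} with a common due
# date of 5 are ordered by processing time (the SPT rule) as [2, 1, 3]; they
# finish at 2, 6 and 12, so the total tardiness is 0 + 1 + 7 = 8.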
def calculate_tardiness_jobs(jobs: Dict[int, int], due_date: int) -> int:
"""
Method that calculates the tardiness of a set of jobs
:param jobs: dictionary with the jobs and its processing times
:param due_date: due date for processing the jobs
:return: total tardiness
"""
time = 0
total_tardiness = 0
for item, processing_time in jobs.items():
lateness = calculate_tardiness(time, processing_time, due_date)
total_tardiness += lateness
time += processing_time
return total_tardiness
def calculate_lateness_jobs(jobs: Dict[int, int], due_date: int) -> int:
"""
Method that calculates the lateness of a set of jobs
:param jobs: dictionary with the jobs and its processing times
:param due_date: due date for processing the jobs
:return: total lateness
"""
time = 0
total_lateness = 0
for item, processing_time in jobs.items():
lateness = calculate_lateness(time, processing_time, due_date)
total_lateness += lateness
time += processing_time
return total_lateness
| 35.684211 | 106 | 0.707227 | 397 | 2,712 | 4.642317 | 0.163728 | 0.083559 | 0.045578 | 0.043407 | 0.610418 | 0.610418 | 0.521975 | 0.508953 | 0.508953 | 0.487249 | 0 | 0.003763 | 0.216077 | 2,712 | 75 | 107 | 36.16 | 0.863123 | 0.429204 | 0 | 0.214286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.178571 | false | 0 | 0.071429 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
49108747a5c5acf9ffb58382f54e77c38cce33bf | 4,902 | py | Python | example/cta/nr.py | vincent87lee/alphahunter | 5f45dbd5f09354dd161606f7e740f8c8d8ae2772 | [
"MIT"
] | 149 | 2019-12-05T05:26:15.000Z | 2022-03-15T03:44:46.000Z | example/cta/nr.py | webclinic017/alphahunter | e3ccc10bb8b641a6a516ec7cd908e5b006343264 | [
"MIT"
] | 4 | 2020-09-12T20:46:06.000Z | 2021-09-01T16:39:14.000Z | example/cta/nr.py | webclinic017/alphahunter | e3ccc10bb8b641a6a516ec7cd908e5b006343264 | [
"MIT"
] | 73 | 2019-11-29T03:13:11.000Z | 2022-03-24T06:06:31.000Z | # -*- coding:utf-8 -*-
"""
Fixed-quantity CTA: Normalized Return Model
Project: alphahunter
Author: HJQuant
Description: Asynchronous driven quantitative trading framework
"""
import numpy as np
import pandas as pd
from quant.market import Kline
from quant.interface.model_api import ModelAPI
from quant.interface.ah_math import AHMath
class NrModel(object):
def __init__(self):
        # this model subscribes to 'BTC/USDT'
self.symbols = ['BTC/USDT']
self.mode_params = {
            'fixed_volume': 0.03,  # trade 0.03 BTC per order
            'warmup_period': 3,  # three-day warmup period
            'signal_period': 60,  # 60-minute signal period
            'open': 0.42,  # go-long signal threshold
            'close': -0.42  # go-short signal threshold
}
self.running_status = 'running'
self.last_kline = None
self.last_kline_end_dt = None
self.last_midnight = None
self.lag_ret_matrix = []
self.factor = np.nan
        self.signal = np.nan  # signal returned by the model, a float between -1.0 and 1.0
self.target_position = {'BTC': 0}
        self.latency = 2 * 60 * 1000  # two minutes, in milliseconds
def on_history_kline(self, kline: Kline):
        ''' Load historical kline data '''
if kline.symbol not in self.symbols:
return
midnight = ModelAPI.find_last_datetime_by_time_str(kline.end_dt, reference_time_str='00:00:00.000')
if self.last_midnight != midnight:
            self.last_midnight = midnight  # a new day has started
            self.lag_ret_matrix.append([])  # create an empty list to hold the new day's data
self.lag_ret_matrix[-1].append(kline.lag_ret_fillna)
        if len(self.lag_ret_matrix) > self.mode_params['warmup_period'] + 1:  # previous 3 days + 'today' = 4 days total
            self.lag_ret_matrix.pop(0)  # only the previous 3 full days are needed, drop the oldest one
def on_time(self):
        ''' Driven every 5 seconds; checks whether the kline feed has stalled '''
        if self.running_status == 'stopping':  # do nothing once stopped
return
now = ModelAPI.current_milli_timestamp()
if self.last_kline_end_dt == None:
self.last_kline_end_dt = now
        if now - self.last_kline_end_dt > self.latency:  # more than 2 minutes without a kline
self.factor = np.nan
self.signal = np.nan
self.target_position['BTC'] = 0.0
self.running_status = 'stopping'
def on_kline_update_callback(self, kline: Kline):
        ''' A new 1-minute kline has arrived; update this model's signal '''
        if self.running_status == 'stopping':  # do nothing once stopped
return
if kline.symbol not in self.symbols:
return
self.last_kline = kline
self.last_kline_end_dt = kline.end_dt
midnight = ModelAPI.find_last_datetime_by_time_str(self.last_kline_end_dt, reference_time_str='00:00:00.000')
if self.last_midnight != midnight:
            self.last_midnight = midnight  # a new day has started
self.lag_ret_matrix.append([])
self.lag_ret_matrix[-1].append(self.last_kline.lag_ret_fillna)
        if len(self.lag_ret_matrix) == self.mode_params['warmup_period'] + 1:  # with 3 days of history, start working from day 4
            self.generate_factor()  # compute the factor
            self.generate_signal()  # derive the signal from the factor
            self.generate_target_position()  # derive the target position from the signal
        elif len(self.lag_ret_matrix) > self.mode_params['warmup_period'] + 1:  # history exceeds 3 days: previous 4 days + 'today' = 5 days
            self.lag_ret_matrix.pop(0)  # drop the oldest day, keep only the previous 3 days
            # once the strategy is live, reset position, signal, etc. to zero
            # at the start of each new day and recompute from scratch
self.factor = np.nan
self.signal = np.nan
self.target_position['BTC'] = 0.0
def generate_factor(self):
all_lag_ret = []
        for each in self.lag_ret_matrix:  # flatten the previous 3 days plus 'today' into one list
all_lag_ret += each
all_lag_ret = pd.Series(all_lag_ret)
x = AHMath.ewma(all_lag_ret, self.mode_params['signal_period'])
y = AHMath.ewma(np.abs(all_lag_ret), self.mode_params['signal_period'])
factors = AHMath.zero_divide(x, y)
count = factors[-self.mode_params['signal_period']-1:].count()
self.factor = factors.iloc[-1] if count == self.mode_params['signal_period']+1 else np.nan
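        # Illustrative reading of the factor above: it is ewma(returns) /
        # ewma(|returns|) over the last `signal_period` minutes, so it is a
        # normalized-return score bounded in [-1, 1]; persistent one-sided
        # moves push it toward +/-1, choppy flat markets pull it toward 0.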
def generate_signal(self):
self.signal = 0.0
if self.factor > self.mode_params['open']:
self.signal = 1.0
elif self.factor < self.mode_params['close']:
self.signal = -1.0
elif np.isnan(self.factor):
self.signal = np.nan
def generate_target_position(self):
if self.target_position['BTC'] == 0 and self.signal == 1:
self.target_position['BTC'] = self.signal * self.mode_params['fixed_volume']
elif self.target_position['BTC'] == 0 and self.signal == -1:
self.target_position['BTC'] = self.signal * self.mode_params['fixed_volume']
elif self.target_position['BTC'] > 0 and self.signal == -1:
self.target_position['BTC'] = 0
elif self.target_position['BTC'] < 0 and self.signal == 1:
self.target_position['BTC'] = 0
elif self.target_position['BTC'] != 0 and np.isnan(self.signal):
self.target_position['BTC'] = 0 | 37.707692 | 117 | 0.626887 | 640 | 4,902 | 4.576563 | 0.251563 | 0.038921 | 0.079891 | 0.093206 | 0.546944 | 0.460567 | 0.407989 | 0.377262 | 0.270741 | 0.270741 | 0 | 0.023033 | 0.256018 | 4,902 | 130 | 118 | 37.707692 | 0.780093 | 0.108935 | 0 | 0.315789 | 0 | 0 | 0.063209 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.073684 | false | 0 | 0.052632 | 0 | 0.178947 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
49147fa4a4d3541dbecba09a6af51c34c4c131b9 | 12,369 | py | Python | jam/third_party/werkzeug/secure_cookie/securecookie.py | pubmania/jam-py | 33c930206702dba951fd42783a93127cd7fdc3ac | [
"BSD-3-Clause"
] | 384 | 2015-01-06T15:09:23.000Z | 2022-02-25T19:56:44.000Z | jam/third_party/werkzeug/secure_cookie/securecookie.py | pubmania/jam-py | 33c930206702dba951fd42783a93127cd7fdc3ac | [
"BSD-3-Clause"
] | 222 | 2015-01-06T19:11:08.000Z | 2022-02-16T06:46:39.000Z | jam/third_party/werkzeug/secure_cookie/securecookie.py | pubmania/jam-py | 33c930206702dba951fd42783a93127cd7fdc3ac | [
"BSD-3-Clause"
] | 86 | 2015-01-16T09:50:31.000Z | 2022-02-25T13:27:14.000Z | """
Secure Cookie
=============
This module implements a cookie that is not alterable from the client
because it adds a checksum the server checks for. You can use it as a
session replacement if all you have is a user id or something to mark a
logged in user.
Keep in mind that the data is still readable from the client as a normal
cookie is. However you don't have to store and flush the sessions you
have at the server.
Example usage:
>>> from secure_cookie.securecookie import SecureCookie
>>> x = SecureCookie({"foo": 42, "baz": (1, 2, 3)}, "deadbeef")
Dumping into a string so that one can store it in a cookie:
>>> value = x.serialize()
Loading from that string again:
>>> x = SecureCookie.unserialize(value, "deadbeef")
>>> x["baz"]
(1, 2, 3)
If someone modifies the cookie and the checksum is wrong the unserialize
method will fail silently and return a new empty :class:`SecureCookie`
object.
Keep in mind that the values will be visible in the cookie so do not
store data in a cookie you don't want the user to see.
Application Integration
-----------------------
If you are using the Werkzeug request object you could integrate the
secure cookie into your application like this::
from werkzeug.utils import cached_property
from werkzeug.wrappers import BaseRequest
from secure_cookie.securecookie import SecureCookie
# Don't use this key but a different one; you could just use
# os.urandom(20) to get something random.
SECRET_KEY = '\xfa\xdd\xb8z\xae\xe0}4\x8b\xea'
class Request(BaseRequest):
@cached_property
def client_session(self):
data = self.cookies.get("session_data")
if not data:
return SecureCookie(secret_key=SECRET_KEY)
return SecureCookie.unserialize(data, SECRET_KEY)
def application(environ, start_response):
request = Request(environ)
# get a response object here
response = ...
if request.client_session.should_save:
session_data = request.client_session.serialize()
response.set_cookie(
'session_data',
session_data,
httponly=True,
)
return response(environ, start_response)
A less verbose integration can be achieved by using shorthand methods::
class Request(BaseRequest):
@cached_property
def client_session(self):
return SecureCookie.load_cookie(
self,
secret_key=COOKIE_SECRET,
)
def application(environ, start_response):
request = Request(environ)
# get a response object here
response = ...
request.client_session.save_cookie(response)
return response(environ, start_response)
"""
import base64
import pickle
import warnings
from hashlib import sha1 as _default_hash
from hmac import new as hmac
from time import time
from werkzeug._compat import iteritems
from werkzeug._compat import text_type
from werkzeug._compat import to_bytes
from werkzeug._compat import to_native
from werkzeug._internal import _date_to_unix
from werkzeug.security import safe_str_cmp
from werkzeug.urls import url_quote_plus
from werkzeug.urls import url_unquote_plus
from .sessions import ModificationTrackingDict
class UnquoteError(Exception):
"""Internal exception used to signal failures on quoting."""
class SecureCookie(ModificationTrackingDict):
"""Represents a secure cookie. You can subclass this class and
    provide an alternative mac method. The important thing is that the mac
    method is a function with an interface similar to hashlib's.
Required methods are :meth:`update` and :meth:`digest`.
Example usage:
>>> x = SecureCookie({"foo": 42, "baz": (1, 2, 3)}, "deadbeef")
>>> x["foo"]
42
>>> x["baz"]
(1, 2, 3)
>>> x["blafasel"] = 23
>>> x.should_save
True
:param data: The initial data. Either a dict, list of tuples, or
``None``.
:param secret_key: The secret key. If not set ``None`` or not
specified it has to be set before :meth:`serialize` is called.
:param new: The initial value of the ``new`` flag.
"""
#: The hash method to use. This has to be a module with a new
#: function or a function that creates a hashlib object, such as
#: func:`hashlib.md5`. Subclasses can override this attribute. The
#: default hash is sha1. Make sure to wrap this in
#: :func:`staticmethod` if you store an arbitrary function there
#: such as :func:`hashlib.sha1` which might be implemented as a
#: function.
hash_method = staticmethod(_default_hash)
#: The module used for serialization. Should have a ``dumps`` and a
#: ``loads`` method that takes bytes. The default is :mod:`pickle`.
#:
#: .. versionchanged:: 0.1
#: The default of ``pickle`` will change to :mod:`json` in 1.0.
serialization_method = pickle
#: If the contents should be base64 quoted. This can be disabled if
#: the serialization process returns cookie safe strings only.
quote_base64 = True
def __init__(self, data=None, secret_key=None, new=True):
ModificationTrackingDict.__init__(self, data or ())
if secret_key is not None:
secret_key = to_bytes(secret_key, "utf-8")
self.secret_key = secret_key
self.new = new
if self.serialization_method is pickle:
warnings.warn(
"The default SecureCookie.serialization_method will"
" change from pickle to json in 1.0. To upgrade"
" existing tokens, override unquote to try pickle if"
" json fails."
)
def __repr__(self):
return "<%s %s%s>" % (
self.__class__.__name__,
dict.__repr__(self),
"*" if self.should_save else "",
)
@property
def should_save(self):
"""True if the session should be saved. By default this is only
true for :attr:`modified` cookies, not :attr:`new`.
"""
return self.modified
@classmethod
def quote(cls, value):
"""Quote the value for the cookie. This can be any object
supported by :attr:`serialization_method`.
:param value: The value to quote.
"""
if cls.serialization_method is not None:
value = cls.serialization_method.dumps(value)
if cls.quote_base64:
value = b"".join(
base64.b64encode(to_bytes(value, "utf8")).splitlines()
).strip()
return value
@classmethod
def unquote(cls, value):
"""Unquote the value for the cookie. If unquoting does not work
a :exc:`UnquoteError` is raised.
:param value: The value to unquote.
"""
try:
if cls.quote_base64:
value = base64.b64decode(value)
if cls.serialization_method is not None:
value = cls.serialization_method.loads(value)
return value
except Exception:
# Unfortunately pickle and other serialization modules can
# cause pretty much every error here. If we get one we catch
# it and convert it into an UnquoteError.
raise UnquoteError()
def serialize(self, expires=None):
"""Serialize the secure cookie into a string.
If expires is provided, the session will be automatically
invalidated after expiration when you unseralize it. This
provides better protection against session cookie theft.
:param expires: An optional expiration date for the cookie (a
:class:`datetime.datetime` object).
"""
if self.secret_key is None:
raise RuntimeError("no secret key defined")
if expires:
self["_expires"] = _date_to_unix(expires)
result = []
mac = hmac(self.secret_key, None, self.hash_method)
for key, value in sorted(self.items()):
result.append(
(
"%s=%s" % (url_quote_plus(key), self.quote(value).decode("ascii"))
).encode("ascii")
)
mac.update(b"|" + result[-1])
return b"?".join([base64.b64encode(mac.digest()).strip(), b"&".join(result)])
@classmethod
def unserialize(cls, string, secret_key):
"""Load the secure cookie from a serialized string.
:param string: The cookie value to unserialize.
:param secret_key: The secret key used to serialize the cookie.
:return: A new :class:`SecureCookie`.
"""
if isinstance(string, text_type):
string = string.encode("utf-8", "replace")
if isinstance(secret_key, text_type):
secret_key = secret_key.encode("utf-8", "replace")
try:
base64_hash, data = string.split(b"?", 1)
except (ValueError, IndexError):
items = ()
else:
items = {}
mac = hmac(secret_key, None, cls.hash_method)
for item in data.split(b"&"):
mac.update(b"|" + item)
if b"=" not in item:
items = None
break
key, value = item.split(b"=", 1)
# try to make the key a string
key = url_unquote_plus(key.decode("ascii"))
try:
key = to_native(key)
except UnicodeError:
pass
items[key] = value
# no parsing error and the mac looks okay, we can now
            # securely unpickle our cookie.
try:
client_hash = base64.b64decode(base64_hash)
except TypeError:
items = client_hash = None
if items is not None and safe_str_cmp(client_hash, mac.digest()):
try:
for key, value in iteritems(items):
items[key] = cls.unquote(value)
except UnquoteError:
items = ()
else:
if "_expires" in items:
if time() > items["_expires"]:
items = ()
else:
del items["_expires"]
else:
items = ()
return cls(items, secret_key, False)
@classmethod
def load_cookie(cls, request, key="session", secret_key=None):
"""Load a :class:`SecureCookie` from a cookie in the request. If
the cookie is not set, a new :class:`SecureCookie` instance is
returned.
:param request: A request object that has a `cookies` attribute
which is a dict of all cookie values.
:param key: The name of the cookie.
:param secret_key: The secret key used to unquote the cookie.
Always provide the value even though it has no default!
"""
data = request.cookies.get(key)
if not data:
return cls(secret_key=secret_key)
return cls.unserialize(data, secret_key)
def save_cookie(
self,
response,
key="session",
expires=None,
session_expires=None,
max_age=None,
path="/",
domain=None,
secure=None,
httponly=False,
force=False,
):
"""Save the data securely in a cookie on response object. All
parameters that are not described here are forwarded directly
to :meth:`~BaseResponse.set_cookie`.
:param response: A response object that has a
:meth:`~BaseResponse.set_cookie` method.
:param key: The name of the cookie.
:param session_expires: The expiration date of the secure cookie
stored information. If this is not provided the cookie
``expires`` date is used instead.
"""
if force or self.should_save:
data = self.serialize(session_expires or expires)
response.set_cookie(
key,
data,
expires=expires,
max_age=max_age,
path=path,
domain=domain,
secure=secure,
httponly=httponly,
)
| 32.635884 | 86 | 0.601989 | 1,518 | 12,369 | 4.808959 | 0.229908 | 0.036986 | 0.00274 | 0.003288 | 0.156986 | 0.096849 | 0.080685 | 0.080685 | 0.06411 | 0.039726 | 0 | 0.008583 | 0.312394 | 12,369 | 378 | 87 | 32.722222 | 0.849735 | 0.503355 | 0 | 0.152318 | 0 | 0 | 0.052373 | 0.0058 | 0 | 0 | 0 | 0 | 0 | 1 | 0.059603 | false | 0.006623 | 0.099338 | 0.006623 | 0.245033 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
49148eb99034dbe01147936a687ac51ebe474302 | 13,839 | py | Python | userinterface/qrollout.py | bhsingleton/dcc | 9ad59f1cb8282df938062e15c020688dd268a722 | [
"MIT"
] | 1 | 2021-08-06T16:04:24.000Z | 2021-08-06T16:04:24.000Z | userinterface/qrollout.py | bhsingleton/dcc | 9ad59f1cb8282df938062e15c020688dd268a722 | [
"MIT"
] | null | null | null | userinterface/qrollout.py | bhsingleton/dcc | 9ad59f1cb8282df938062e15c020688dd268a722 | [
"MIT"
] | 1 | 2021-08-06T16:04:31.000Z | 2021-08-06T16:04:31.000Z | from PySide2 import QtCore, QtWidgets, QtGui
import logging
logging.basicConfig()
log = logging.getLogger(__name__)
log.setLevel(logging.INFO)
QWIDGETSIZE_MAX = (1 << 24) - 1 # https://code.qt.io/cgit/qt/qtbase.git/tree/src/widgets/kernel/qwidget.h#n873
class QRolloutViewport(QtWidgets.QWidget):
"""
Overload of QWidget used to represent the viewport for a rollout widget.
"""
def __init__(self, parent=None, f=QtCore.Qt.WindowFlags()):
"""
Private method called after a new instance has been created.
:type parent: QtWidgets.QWidget
:type f: int
"""
# Call parent method
#
super(QRolloutViewport, self).__init__(parent=parent, f=f)
# Modify widget properties
#
self.setSizePolicy(QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Expanding)
self.setFocusPolicy(QtCore.Qt.ClickFocus)
def paintEvent(self, event):
"""
The event for any paint requests made to this widget.
:type event: QtGui.QPaintEvent
:rtype: None
"""
# Initialize painter
#
painter = QtGui.QPainter(self)
painter.setRenderHint(QtGui.QPainter.Antialiasing, True)
# Get brush from palette
#
brush = QtGui.QBrush(QtGui.QColor(73, 73, 73), style=QtCore.Qt.SolidPattern)
rect = self.rect()
# Paint background
#
path = QtGui.QPainterPath()
path.addRoundedRect(rect, 4, 4)
painter.setPen(QtCore.Qt.NoPen)
painter.fillPath(path, brush)
class QRollout(QtWidgets.QWidget):
"""
Overload of QWidget used to provide a collapsible viewport triggered by a button.
This widget relies on toggling the visibility of the viewport widget to emulate Maya and Max's rollouts.
Any tinkering with the underlying hierarchy will result in unpredictable behaviour!
"""
expandedChanged = QtCore.Signal(bool)
stateChanged = QtCore.Signal(bool)
def __init__(self, title, parent=None, f=QtCore.Qt.WindowFlags()):
"""
Private method called after a new instance has been created.
:type parent: QtWidgets.QWidget
:type f: int
:rtype: None
"""
# Call inherited method
#
super(QRollout, self).__init__(parent=parent, f=f)
# Declare class variables
#
self._title = title
self._expanded = True
self._checked = False
self._checkable = False
self._checkBoxVisible = False
self._gripperVisible = False
self._viewport = QRolloutViewport(parent=self)
self._viewport.installEventFilter(self) # Overload eventFilter to take advantage of this!
# Add widgets to layout
#
self.setLayout(QtWidgets.QVBoxLayout())
self.layout().addWidget(self._viewport)
self.layout().setAlignment(QtCore.Qt.AlignTop)
self.layout().setContentsMargins(0, 20, 0, 0)
# Modify color palette
#
palette = self.palette()
palette.setColor(QtGui.QPalette.Button, QtGui.QColor(93, 93, 93))
palette.setColor(QtGui.QPalette.Highlight, QtGui.QColor(81, 121, 148))
self.setPalette(palette)
# Modify widget properties
#
self.setFocusPolicy(QtCore.Qt.ClickFocus)
self.setMouseTracking(True)
# Connect signal
#
self.expandedChanged.connect(self._viewport.setVisible)
def viewport(self):
"""
Returns the viewport widget for this rollout.
        By default this viewport is created with no layout.
        It's up to the developer to add one themselves.
:rtype: QRolloutViewport
"""
return self._viewport
def title(self):
"""
Returns the header title for this rollout.
:rtype: str
"""
return self._title
def setTitle(self, title):
"""
Updates the header title for this rollout.
:rtype: str
"""
self._title = title
self.repaint()
def checkable(self):
"""
Returns the checkable state for this rollout.
:rtype: bool
"""
return self._checkable
def setCheckable(self, checkable):
"""
Updates the checkable state for this rollout.
:type checkable: bool
:rtype: None
"""
self._checkable = checkable
def checked(self):
"""
Returns the checked state for this rollout.
This rollout must be checkable for this value to have any significance.
:rtype: bool
"""
return self._checked
def setChecked(self, checked):
"""
Updates the checked state for this rollout.
This method will emit the "stateChanged" signal if successful.
:rtype: bool
"""
# Update private value
# Be sure to check for redundancy
#
if checked == self._checked or not self._checkable:
return
self._checked = checked
# Redraw widget and emit signal
#
self.repaint()
self.stateChanged.emit(self._checked)
def showCheckBox(self):
"""
Shows the check box for this rollout.
This widget must be checkable for this to have any effect.
:rtype: None
"""
# Check if widget is checkable
#
if not self._checkable:
return
self._checkBoxVisible = True
def hideCheckBox(self):
"""
Hides the check box for this rollout.
This widget must be checkable for this to have any effect.
:rtype: None
"""
# Check if widget is checkable
#
if not self._checkable:
return
self._checkBoxVisible = False
def showGripper(self):
"""
Show the gripper icon for this rollout.
:rtype: None
"""
self._gripperVisible = True
def hideGripper(self):
"""
Hides the gripper icon for this rollout.
:rtype: None
"""
self._gripperVisible = False
def expanded(self):
"""
Returns the expanded state for this rollout.
:rtype: bool
"""
return self._expanded
def setExpanded(self, expanded):
"""
Updates the expanded state for this rollout.
This will in turn emit the "expandedChanged" signal with the new value.
:type expanded: bool
:rtype: None
"""
# Update private value
# Be sure to check for redundancy
#
if expanded == self._expanded:
return
self._expanded = expanded
# Modify size properties
#
if self._expanded:
self.setFixedSize(QtCore.QSize(QWIDGETSIZE_MAX, QWIDGETSIZE_MAX))
self.setSizePolicy(QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Minimum)
else:
self.setFixedSize(QtCore.QSize(QWIDGETSIZE_MAX, 20))
self.setSizePolicy(QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Fixed)
# Repaint widget and emit signal
#
self.repaint()
self.expandedChanged.emit(self._expanded)
def headerRect(self):
"""
Returns the header bounding box.
:rtype: QtCore.QRectF
"""
return QtCore.QRectF(0, 0, self.rect().width(), 20)
def titleRect(self):
"""
Returns the title bounding box.
:rtype: QtCore.QRectF
"""
return QtCore.QRectF(20, 0, self.rect().width() - 40, 20)
def expanderRect(self):
"""
Returns the expander bounding box.
:rtype: QtCore.QRectF
"""
return QtCore.QRectF(0, 0, 20, 20)
def gripperRect(self):
"""
Returns the gripper bounding box.
:rtype: QtCore.QRectF
"""
return QtCore.QRectF(self.rect().width() - 20, 0, 20, 20)
def expander(self):
"""
Returns the polygonal shape for the expander icon.
:rtype: QtGui.QPolygon
"""
# Shrink bounding box
#
rect = self.expanderRect()
rect.adjust(7, 7, -7, -7)
# Check if expanded
# This will determine the orientation of the arrow
#
polygon = QtGui.QPolygon()
if self._expanded:
polygon.append(QtCore.QPoint(rect.left(), rect.top()))
polygon.append(QtCore.QPoint(rect.right(), rect.top()))
polygon.append(QtCore.QPoint(rect.center().x(), rect.bottom()))
else:
polygon.append(QtCore.QPoint(rect.left(), rect.top()))
polygon.append(QtCore.QPoint(rect.right(), rect.center().y()))
polygon.append(QtCore.QPoint(rect.left(), rect.bottom()))
return polygon
def gripper(self):
"""
Returns the points that make up the gripper.
:rtype: list
"""
# Shrink bounding box
#
rect = self.gripperRect()
rect.adjust(7, 7, -7, -7)
return (
QtCore.QPointF(rect.left(), rect.top()),
QtCore.QPointF(rect.center().x(), rect.top()),
QtCore.QPointF(rect.right(), rect.top()),
QtCore.QPointF(rect.left(), rect.center().y()),
QtCore.QPointF(rect.center().x(), rect.center().y()),
QtCore.QPointF(rect.right(), rect.center().y()),
QtCore.QPointF(rect.left(), rect.bottom()),
QtCore.QPointF(rect.center().x(), rect.bottom()),
QtCore.QPointF(rect.right(), rect.bottom()),
)
def enterEvent(self, event):
"""
The event for whenever the mouse enters this widget.
To get mouse highlighting we need to force a repaint operation here.
:type event: QtGui.QEvent
:rtype: None
"""
# Force re-paint
#
self.repaint()
# Call inherited method
#
super(QRollout, self).enterEvent(event)
def leaveEvent(self, event):
"""
The event for whenever the mouse leaves this widget.
To get mouse highlighting we need to force a repaint operation here.
:type event: QtGui.QEvent
:rtype: None
"""
# Force re-paint
#
self.repaint()
# Call inherited method
#
super(QRollout, self).leaveEvent(event)
def mouseReleaseEvent(self, event):
"""
The event for whenever the mouse button has been released from this widget.
:type event: QtGui.QMouseReleaseEvent
:rtype: None
"""
# Check if expander was pressed
#
if self.expanderRect().contains(event.pos()):
self.toggleExpanded()
elif self.titleRect().contains(event.pos()):
self.toggleChecked()
else:
pass
# Call inherited method
#
super(QRollout, self).mouseReleaseEvent(event)
def paintEvent(self, event):
"""
The event for any paint requests made to this widget.
:type event: QtGui.QPaintEvent
:rtype: None
"""
# Initialize painter
#
painter = QtGui.QPainter(self)
painter.setRenderHint(QtGui.QPainter.Antialiasing, True)
# Get brush from palette
#
palette = self.palette()
pen = QtGui.QPen(palette.highlightedText(), 1) if self.underMouse() else QtGui.QPen(palette.text(), 1)
# Paint background
#
path = QtGui.QPainterPath()
path.addRoundedRect(self.headerRect(), 4, 4)
painter.setPen(QtCore.Qt.NoPen)
painter.fillPath(path, palette.highlight() if self.checked() else palette.button())
# Paint title
#
titleRect = self.titleRect()
if self._checkable and self._checkBoxVisible:
# Define check box options
#
options = QtWidgets.QStyleOptionButton()
options.state |= QtWidgets.QStyle.State_On if self._checked else QtWidgets.QStyle.State_Off
options.state |= QtWidgets.QStyle.State_Enabled
options.text = self._title
options.rect = QtCore.QRect(titleRect.x(), titleRect.y(), titleRect.width(), titleRect.height())
# Draw check box control
#
style = QtWidgets.QApplication.style()
style.drawControl(QtWidgets.QStyle.CE_CheckBox, options, painter)
else:
painter.setPen(pen)
painter.setBrush(QtCore.Qt.NoBrush)
painter.drawText(titleRect, QtCore.Qt.AlignLeft | QtCore.Qt.AlignVCenter, self._title, boundingRect=titleRect)
# Paint expander
#
path = QtGui.QPainterPath()
path.addPolygon(self.expander())
painter.setPen(QtCore.Qt.NoPen)
painter.fillPath(path, palette.highlightedText() if self.underMouse() else palette.text())
# Paint mover
#
if self._gripperVisible:
points = self.gripper()
painter.setPen(pen)
painter.drawPoints(points)
def toggleExpanded(self):
"""
Toggles the expanded state of this rollout.
:rtype: None
"""
self.setExpanded(not self._expanded)
def toggleChecked(self):
"""
The event for whenever the header button is checked.
If checkable is not enabled then the status will not be inversed.
:rtype: None
"""
# Check if widget is checkable
#
if not self._checkable:
return
self.setChecked(not self._checked)
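# Minimal usage sketch (illustrative): child widgets belong on the rollout's
# viewport, not on the rollout itself, since collapsing simply hides the
# viewport widget.
#
#   rollout = QRollout("Settings")
#   rollout.viewport().setLayout(QtWidgets.QVBoxLayout())
#   rollout.viewport().layout().addWidget(QtWidgets.QLabel("content"))
#   rollout.setCheckable(True)
#   rollout.showCheckBox()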
| 25.627778 | 122 | 0.583713 | 1,460 | 13,839 | 5.485616 | 0.210959 | 0.013984 | 0.022724 | 0.014234 | 0.470471 | 0.412536 | 0.355975 | 0.274192 | 0.23074 | 0.223374 | 0 | 0.007754 | 0.319749 | 13,839 | 539 | 123 | 25.675325 | 0.843 | 0.293229 | 0 | 0.270115 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.16092 | false | 0.005747 | 0.011494 | 0 | 0.287356 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4914e25c63e8333fceb820ebf698ef2961a52e6e | 18,446 | py | Python | awokado/resource.py | 5783354/awokado | 9454067f005fd8905409902fb955de664ba3d5b6 | [
"MIT"
] | 6 | 2019-02-08T16:21:24.000Z | 2019-03-13T13:00:05.000Z | awokado/resource.py | 5783354/awokado | 9454067f005fd8905409902fb955de664ba3d5b6 | [
"MIT"
] | 10 | 2019-05-21T10:29:16.000Z | 2021-05-21T12:09:49.000Z | awokado/resource.py | 5783354/awokado | 9454067f005fd8905409902fb955de664ba3d5b6 | [
"MIT"
] | 1 | 2020-02-26T06:47:41.000Z | 2020-02-26T06:47:41.000Z | import json
import sys
from typing import Dict, List, Optional, Tuple, Union, Type
import bulky
import falcon
import sqlalchemy as sa
from cached_property import cached_property
from clavis import Transaction
from marshmallow import utils, Schema, ValidationError
from sqlalchemy.orm import Session
from awokado.consts import (
AUDIT_DEBUG,
BULK_CREATE,
BULK_UPDATE,
CREATE,
DELETE,
OP_IN,
UPDATE,
)
from awokado.custom_fields import ToMany, ToOne
from awokado.db import DATABASE_URL, persistent_engine
from awokado.exceptions import BadRequest, MethodNotAllowed
from awokado.filter_parser import FilterItem
from awokado.meta import ResourceMeta
from awokado.request import ReadContext
from awokado.response import Response
from awokado.utils import (
get_ids_from_payload,
get_read_params,
get_id_field,
M2MMapping,
AuthBundle,
)
class BaseResource(Schema):
RESOURCES: Dict[str, Type["BaseResource"]] = {}
Response = Response
Meta: ResourceMeta
def __new__(cls: Type["BaseResource"]):
if cls.Meta.name not in ("base_resource", "_resource"):
cls.RESOURCES[cls.Meta.name] = cls
return super().__new__(cls)
def __init__(self):
super().__init__()
cls_name = self.__class__.__name__
class_meta = getattr(self, "Meta", None)
if isinstance(class_meta, type):
print(
"resourse.Meta as class will be deprecated soon",
file=sys.stderr,
)
self.Meta = ResourceMeta.from_class(class_meta)
if not isinstance(self.Meta, ResourceMeta):
raise Exception(
f"{cls_name}.Meta must inherit from ResourceMeta class"
)
if not self.Meta.name or self.Meta.name in (
"base_resource",
"_resource",
):
raise Exception(f"{cls_name} must have Meta.name")
resource_id_name = get_id_field(self, name_only=True, skip_exc=True)
if resource_id_name:
resource_id_field = self.fields.get(resource_id_name)
resource_id_field = resource_id_field.metadata.get("model_field")
if not resource_id_field:
raise Exception(
f"Resource's {cls_name} id field {resource_id_name}"
f" must have model_field."
)
###########################################################################
# Marshmallow validation methods
###########################################################################
def validate_create_request(self, req: falcon.Request, is_bulk=False):
methods = self.Meta.methods
payload = json.load(req.bounded_stream)
if isinstance(payload.get(self.Meta.name), list):
request_method = BULK_CREATE
is_bulk = True
else:
request_method = CREATE
if request_method not in methods:
raise MethodNotAllowed()
data = payload.get(self.Meta.name)
if not data:
raise BadRequest(
f"Invalid schema, resource name is missing at the top level. "
f"Your POST request has to look like: "
f'{{"{self.Meta.name}": [{{"field_name": "field_value"}}] '
f'or {{"field_name": "field_value"}} }}'
)
try:
deserialized = self.load(data, many=is_bulk)
except ValidationError as exc:
raise BadRequest(exc.messages)
req.stream = {self.Meta.name: deserialized}
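        # Accepted payload shapes (illustrative), for a resource whose
        # Meta.name is "book":
        #   {"book": {"title": "Dune"}}                            -> CREATE
        #   {"book": [{"title": "Dune"}, {"title": "Hyperion"}]}   -> BULK_CREATE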
def validate_update_request(self, req: falcon.Request):
methods = self.Meta.methods
if UPDATE not in methods and BULK_UPDATE not in methods:
raise MethodNotAllowed()
payload = json.load(req.bounded_stream)
data = payload.get(self.Meta.name)
try:
deserialized = self.load(data, partial=True, many=True)
except ValidationError as exc:
raise BadRequest(exc.messages)
req.stream = {self.Meta.name: deserialized}
###########################################################################
# Falcon methods
###########################################################################
def on_patch(
self, req: falcon.Request, resp: falcon.Response, *args, **kwargs
):
"""
Falcon method. PATCH-request entry point.
        Opens a database transaction.
        Authentication takes place here
        (if an auth class is set in the `resource <#awokado.meta.ResourceMeta>`_).
        Then the update method is run.
"""
with Transaction(DATABASE_URL, engine=persistent_engine) as t:
session = t.session
user_id, _ = self.auth(session, req, resp)
self.validate_update_request(req)
payload = req.stream
data = payload[self.Meta.name]
ids = get_ids_from_payload(self.Meta.model, data)
if self.Meta.auth:
self.Meta.auth.can_update(session, user_id, ids)
self.audit_log(
f"Update: {self.Meta.name}", payload, user_id, AUDIT_DEBUG
)
result = self.update(session, payload, user_id)
resp.body = json.dumps(result, default=str)
def on_post(self, req: falcon.Request, resp: falcon.Response):
"""
Falcon method. POST-request entry point.
        Opens a database transaction.
        Authentication takes place here
        (if an auth class is set in the `resource <#awokado.meta.ResourceMeta>`_).
        Then the create method is run.
"""
with Transaction(DATABASE_URL, engine=persistent_engine) as t:
session = t.session
user_id, token = self.auth(session, req, resp)
self.validate_create_request(req)
payload = req.stream
if self.Meta.auth:
self.Meta.auth.can_create(
session, payload, user_id, skip_exc=False
)
self.audit_log(
f"Create: {self.Meta.name}", payload, user_id, AUDIT_DEBUG
)
result = self.create(session, payload, user_id)
resp.body = json.dumps(result, default=str)
def on_get(
self,
req: falcon.Request,
resp: falcon.Response,
resource_id: int = None,
):
"""
Falcon method. GET-request entry point.
        Opens a database transaction.
        Authentication takes place here
        (if an auth class is set in the `resource <#awokado.meta.ResourceMeta>`_).
        Then the read_handler method is run.
        It is responsible for the whole read workflow.
"""
with Transaction(DATABASE_URL, engine=persistent_engine) as t:
session = t.session
user_id, token = self.auth(session, req, resp)
params = get_read_params(req, self.__class__)
params["resource_id"] = resource_id
result = self.read_handler(session, user_id, **params)
resp.body = json.dumps(result, default=str)
def on_delete(
self,
req: falcon.Request,
resp: falcon.Response,
resource_id: int = None,
):
"""
Falcon method. DELETE-request entry point.
        Opens a database transaction.
        Authentication takes place here
        (if an auth class is set in the `resource <#awokado.meta.ResourceMeta>`_).
        Then the delete method is run.
"""
with Transaction(DATABASE_URL, engine=persistent_engine) as t:
session = t.session
user_id, token = self.auth(session, req, resp)
if DELETE not in self.Meta.methods:
raise MethodNotAllowed()
ids_to_delete = req.get_param_as_list("ids")
data = [ids_to_delete, resource_id]
if not any(data) or all(data):
raise BadRequest(
details=(
"It should be a bulk delete (?ids=1,2,3) or delete"
" of a single resource (v1/resource/1)"
)
)
if not ids_to_delete:
ids_to_delete = [resource_id]
if self.Meta.auth:
self.Meta.auth.can_delete(session, user_id, ids_to_delete)
result = self.delete(session, user_id, ids_to_delete)
resp.body = json.dumps(result, default=str)
def auth(self, *args, **kwargs) -> AuthBundle:
"""This method should return (user_id, token) tuple"""
return AuthBundle(0, "")
def audit_log(self, *args, **kwargs):
return
def _check_model_exists(self):
if not self.Meta.model:
raise Exception(
f"{self.__class__.__name__}.Meta.model field not set"
)
###########################################################################
# Resource methods
###########################################################################
def update(
self, session: Session, payload: dict, user_id: int, *args, **kwargs
) -> dict:
"""
        First the payload is prepared for updating:
        it is deserialized with the Marshmallow load method and then shaped
        into mappings for the SQLAlchemy update query.
        Rows are updated with SQLAlchemy's bulk_update_mappings method, and
        many-to-many relationships are saved.
        Returns the updated resources with the help of the read_handler method.
"""
self._check_model_exists()
data = payload[self.Meta.name]
data_to_update = self._to_update(data)
ids = get_ids_from_payload(self.Meta.model, data_to_update)
session.bulk_update_mappings(self.Meta.model, data_to_update)
self._save_m2m(session, data, update=True)
result = self.read_handler(
session=session,
user_id=user_id,
filters=[FilterItem.create("id", OP_IN, ids)],
)
return result
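    # --- Editor's illustrative sketch (not part of the original source). ---
    # Payload shape that update() expects, assuming a hypothetical resource
    # registered under Meta.name == "book" (the field names and ids are
    # invented for illustration only):
    #
    #   payload = {"book": [{"id": 1, "title": "New title"}]}
    #   result = resource.update(session, payload, user_id=42)
    #
    # The matching rows are bulk-updated and returned re-read through
    # read_handler, filtered by the ids found in the payload.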
def create(self, session: Session, payload: dict, user_id: int) -> dict:
"""
        Create method.
        You can override it to add your own logic.
        First the payload is prepared for creating:
        it is deserialized with the Marshmallow load method and then shaped
        into values for the SQLAlchemy insert query.
        Inserts the data into the database
        (using the bulky library if there is more than one entity to create),
        then saves many-to-many relationships.
        Returns the created resources with the help of the read_handler method.
"""
self._check_model_exists()
# prepare data to insert
data = payload[self.Meta.name]
if isinstance(data, list):
return self.bulk_create(session, user_id, data)
data_to_insert = self._to_create(data)
# insert to DB
resource_id = session.execute(
sa.insert(self.Meta.model)
.values(data_to_insert)
.returning(self.Meta.model.id)
).scalar()
data["id"] = resource_id
self._save_m2m(session, data)
return self.read_handler(
session=session, user_id=user_id, resource_id=resource_id
)
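    # --- Editor's illustrative sketch (not part of the original source). ---
    # Assuming the same hypothetical "book" resource, create() accepts either
    # a single dict or a list of dicts (the latter is dispatched to
    # bulk_create):
    #
    #   resource.create(session, {"book": {"title": "A"}}, user_id=42)
    #   resource.create(session, {"book": [{"title": "A"}, {"title": "B"}]}, user_id=42)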
def bulk_create(self, session: Session, user_id: int, data: list) -> dict:
self._check_model_exists()
data_to_insert = [self._to_create(i) for i in data]
# insert to DB
resource_ids = bulky.insert(
session,
self.Meta.model,
data_to_insert,
returning=[self.Meta.model.id],
)
ids = [r.id for r in resource_ids]
result = self.read_handler(
session=session,
user_id=user_id,
filters=[FilterItem.create("id", OP_IN, ids)],
)
return result
def delete(self, session: Session, user_id: int, obj_ids: list):
"""
        Deletes the objects with the passed identifiers.
"""
self._check_model_exists()
session.execute(
sa.delete(self.Meta.model).where(self.Meta.model.id.in_(obj_ids))
)
return {}
def _to_update(self, data: list) -> list:
"""
Prepare resource data for SQLAlchemy update query
"""
to_update_list = []
for data_line in data:
to_update = {}
for fn, v in data_line.items():
f = self.fields[fn]
if isinstance(f, ToMany):
continue
model_field = f.metadata.get("model_field")
if not model_field:
continue
to_update[model_field.key] = v
to_update_list.append(to_update)
return to_update_list
def _to_create(self, data: dict) -> dict:
"""
Prepare resource data for SQLAlchemy create query
"""
to_create = {}
for fn, v in data.items():
f = self.fields[fn]
if isinstance(f, ToMany):
continue
model_field = f.metadata["model_field"]
to_create[model_field.key] = v
return to_create
def read_handler(
self,
session: Session,
user_id: int,
include: list = None,
filters: Optional[List[FilterItem]] = None,
sort: list = None,
resource_id: int = None,
limit: int = None,
offset: int = None,
) -> dict:
ctx = ReadContext(
session,
self,
user_id,
include,
filters,
sort,
resource_id,
limit,
offset,
)
self.read__query(ctx)
self.read__filtering(ctx)
self.read__sorting(ctx)
self.read__pagination(ctx)
self.read__execute_query(ctx)
if not ctx.obj_ids:
if ctx.is_list:
response = self.Response(self, is_list=ctx.is_list)
return response.serialize()
else:
raise BadRequest("Object Not Found")
self.read__includes(ctx)
return self.read__serializing(ctx)
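    # --- Editor's illustrative sketch (not part of the original source). ---
    # read_handler() runs the read pipeline above step by step; a direct call
    # with a filter might look like this (the ids are invented for
    # illustration, FilterItem and OP_IN come from the imports above):
    #
    #   result = resource.read_handler(
    #       session=session,
    #       user_id=42,
    #       filters=[FilterItem.create("id", OP_IN, [1, 2, 3])],
    #       limit=10,
    #   )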
def read__query(self, ctx: ReadContext):
return ctx.read__query()
def read__filtering(self, ctx: ReadContext):
return ctx.read__filtering()
def read__sorting(self, ctx: ReadContext):
return ctx.read__sorting()
def read__pagination(self, ctx: ReadContext):
return ctx.read__pagination()
def read__execute_query(self, ctx: ReadContext):
return ctx.read__execute_query()
def read__includes(self, ctx: ReadContext):
return ctx.read__includes()
def read__serializing(self, ctx: ReadContext) -> dict:
return ctx.read__serializing()
def get_related_model(self, field: Union[ToOne, ToMany]):
resource_name = field.metadata["resource"]
resource = self.RESOURCES[resource_name]
return resource.Meta.model
def _process_to_many_field(self, field: ToMany) -> M2MMapping:
related_model = self.get_related_model(field)
resource_model = self.Meta.model
model_field = field.metadata["model_field"]
field_obj = M2MMapping(related_model=related_model)
if not isinstance(model_field, sa.Column):
model_field = getattr(
model_field.parent.persist_selectable.c, model_field.key
)
if related_model.__table__ == model_field.table:
for fk in model_field.table.foreign_keys:
if fk.column.table == resource_model.__table__:
field_obj.left_fk_field = fk.parent
break
else:
field_obj.secondary = model_field.table
for fk in model_field.table.foreign_keys:
if fk.column.table == related_model.__table__:
field_obj.right_fk_field = fk.parent
elif fk.column.table == resource_model.__table__:
field_obj.left_fk_field = fk.parent
return field_obj
@cached_property
def _to_many_fields(self) -> List[Tuple[str, M2MMapping]]:
return [
(field_name, self._process_to_many_field(field))
for field_name, field in self.fields.items()
if isinstance(field, ToMany)
]
@staticmethod
def check_exists(
session: Session, table: sa.Table, ids: list, field_name: str
):
result = session.execute(
sa.select([table.c.id]).where(table.c.id.in_(ids))
)
missed = set(ids) - {item.id for item in result}
if missed:
raise BadRequest(
{
field_name: f"objects with id {','.join(map(str, missed))} does not exist"
}
)
@staticmethod
def _get_m2m(field: M2MMapping, field_name: str, data) -> List[dict]:
m2m = []
for obj in data:
rel_ids = obj.get(field_name) or ()
for rel_id in rel_ids:
m2m.append(
{
field.left_fk_field: obj.get("id"),
field.right_fk_field: rel_id,
}
)
return m2m
def _save_m2m(
self, session: Session, data: Union[list, dict], update: bool = False
) -> None:
data = data if utils.is_collection(data) else [data]
for field_name, field in self._to_many_fields:
if field.secondary is not None:
if update:
session.execute(
sa.delete(field.secondary).where(
field.left_fk_field.in_(
[obj.get("id") for obj in data]
)
)
)
many_2_many = self._get_m2m(field, field_name, data)
if many_2_many:
self.check_exists(
session,
field.related_model.__table__,
[obj[field.right_fk_field] for obj in many_2_many],
field_name,
)
session.execute(
sa.insert(field.secondary).values(many_2_many)
)
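# --- Editor's illustrative sketch (not part of the original source). ---
# A minimal resource subclass, assuming a SQLAlchemy model named Book and that
# ResourceMeta accepts name, model and methods keyword arguments (the exact
# ResourceMeta signature is not shown in this file, so treat this as a sketch
# rather than a verified API):
#
#   class BookResource(BaseResource):
#       Meta = ResourceMeta(
#           name="book",
#           model=Book,
#           methods=(CREATE, UPDATE, DELETE),
#       )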
| 31.858377 | 121 | 0.565001 | 2,081 | 18,446 | 4.793849 | 0.135031 | 0.028869 | 0.019547 | 0.012029 | 0.436949 | 0.371592 | 0.305233 | 0.274358 | 0.247193 | 0.231856 | 0 | 0.001852 | 0.326846 | 18,446 | 578 | 122 | 31.913495 | 0.801562 | 0.106744 | 0 | 0.259542 | 0 | 0 | 0.050962 | 0.003654 | 0 | 0 | 0 | 0 | 0 | 1 | 0.07888 | false | 0 | 0.048346 | 0.022901 | 0.195929 | 0.002545 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
492059cea4ee8c812a641b677dcf7e0ac22fddb2 | 1,177 | py | Python | Leetcode/733-flood_fill.py | EdwaRen/Competitve-Programming | e8bffeb457936d28c75ecfefb5a1f316c15a9b6c | [
"MIT"
] | 1 | 2021-05-03T21:48:25.000Z | 2021-05-03T21:48:25.000Z | Leetcode/733-flood_fill.py | EdwaRen/Competitve_Programming | e8bffeb457936d28c75ecfefb5a1f316c15a9b6c | [
"MIT"
] | null | null | null | Leetcode/733-flood_fill.py | EdwaRen/Competitve_Programming | e8bffeb457936d28c75ecfefb5a1f316c15a9b6c | [
"MIT"
] | null | null | null | class Solution(object):
def floodFill(self, image, sr, sc, newColor):
"""
:type image: List[List[int]]
:type sr: int
:type sc: int
:type newColor: int
:rtype: List[List[int]]
"""
# Catch edge case
if not image or len(image) == 0:
return None
# Cache commonly used expressions
N = len(image)
M = len(image[0])
        orig_color = image[sr][sc]
        # Early exit: repainting with the same color would be a no-op
        if orig_color == newColor:
            return image
        seen = set()
        # BFS; a deque gives O(1) pops from the left, unlike list.pop(0)
        from collections import deque
        queue = deque([(sr, sc)])
        while queue:
            row, col = queue.popleft()
            image[row][col] = newColor
            for nr, nc in ((row + 1, col), (row, col + 1), (row - 1, col), (row, col - 1)):
                if nr >= N or nr < 0 or nc >= M or nc < 0:
                    continue
                if image[nr][nc] == orig_color and (nr, nc) not in seen:
                    seen.add((nr, nc))
                    queue.append((nr, nc))
        return image
image = [
[1, 1, 1],
[1, 1, 0],
[1, 0, 1],
]
z = Solution()
res = z.floodFill(image, 1, 1, 1)
for i in res:
print(i)
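# Editor's note: since newColor equals the starting color here (both 1), the
# fill leaves the grid unchanged, so the expected printed rows are:
#   [1, 1, 1]
#   [1, 1, 0]
#   [1, 0, 1]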
| 25.586957 | 103 | 0.447749 | 164 | 1,177 | 3.20122 | 0.323171 | 0.045714 | 0.022857 | 0.038095 | 0.106667 | 0.106667 | 0 | 0 | 0 | 0 | 0 | 0.045833 | 0.388275 | 1,177 | 45 | 104 | 26.155556 | 0.683333 | 0.129992 | 0 | 0 | 0 | 0 | 0.002073 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035714 | false | 0 | 0 | 0 | 0.142857 | 0.035714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4923a226a9e247904045e185504b8855f6356a84 | 4,860 | py | Python | scripts/generate_fe_hpo_results.py | williamy1996/Autoexpression | b470d9ff67074c8b076abbc1dce359db9a36f921 | [
"MIT"
] | null | null | null | scripts/generate_fe_hpo_results.py | williamy1996/Autoexpression | b470d9ff67074c8b076abbc1dce359db9a36f921 | [
"MIT"
] | null | null | null | scripts/generate_fe_hpo_results.py | williamy1996/Autoexpression | b470d9ff67074c8b076abbc1dce359db9a36f921 | [
"MIT"
] | null | null | null | import os
import sys
import pickle as pkl
import argparse
import numpy as np
sys.path.append(os.getcwd())
from ConfigSpace.hyperparameters import UnParametrizedHyperparameter
from solnml.components.fe_optimizers.bo_optimizer import BayesianOptimizationOptimizer
from solnml.components.hpo_optimizer.smac_optimizer import SMACOptimizer
from solnml.components.utils.constants import CLASSIFICATION, REGRESSION
from solnml.datasets.utils import load_train_test_data
from solnml.components.metrics.metric import get_metric
from solnml.components.evaluators.base_evaluator import fetch_predict_estimator
from solnml.components.evaluators.cls_evaluator import ClassificationEvaluator
from solnml.components.evaluators.reg_evaluator import RegressionEvaluator
parser = argparse.ArgumentParser()
parser.add_argument('--datasets', type=str, default='diabetes')
parser.add_argument('--metrics', type=str, default='acc')
parser.add_argument('--task', type=str, choices=['reg', 'cls'], default='cls')
parser.add_argument('--output_dir', type=str, default='./data/fe_hpo_results')
args = parser.parse_args()
dataset_list = args.datasets.split(',')
metric = get_metric(args.metrics)
algorithms = ['lightgbm', 'random_forest',
'libsvm_svc', 'extra_trees',
'liblinear_svc', 'k_nearest_neighbors',
'logistic_regression',
'gradient_boosting', 'adaboost']
task = args.task
if task == 'cls':
from solnml.components.models.classification import _classifiers
_estimators = _classifiers
else:
from solnml.components.models.regression import _regressors
_estimators = _regressors
eval_type = 'holdout'
output_dir = args.output_dir
if not os.path.exists(output_dir):
os.makedirs(output_dir)
for dataset in dataset_list:
train_data, test_data = load_train_test_data(dataset)
for algo in algorithms:
cs = _estimators[algo].get_hyperparameter_search_space()
model = UnParametrizedHyperparameter("estimator", algo)
cs.add_hyperparameter(model)
default_hpo_config = cs.get_default_configuration()
if task == 'cls':
fe_evaluator = ClassificationEvaluator(default_hpo_config, scorer=metric,
name='fe', resampling_strategy=eval_type,
seed=1)
hpo_evaluator = ClassificationEvaluator(default_hpo_config, scorer=metric,
data_node=train_data, name='hpo',
resampling_strategy=eval_type,
seed=1)
else:
fe_evaluator = RegressionEvaluator(default_hpo_config, scorer=metric,
name='fe', resampling_strategy=eval_type,
seed=1)
hpo_evaluator = RegressionEvaluator(default_hpo_config, scorer=metric,
data_node=train_data, name='hpo',
resampling_strategy=eval_type,
seed=1)
fe_optimizer = BayesianOptimizationOptimizer(task_type=CLASSIFICATION if task == 'cls' else REGRESSION,
input_data=train_data,
evaluator=fe_evaluator,
model_id=algo,
time_limit_per_trans=600,
mem_limit_per_trans=5120,
number_of_unit_resource=10,
seed=1)
hpo_optimizer = SMACOptimizer(evaluator=hpo_evaluator,
config_space=cs,
per_run_time_limit=600,
per_run_mem_limit=5120,
output_dir='./logs',
trials_per_iter=100)
fe_optimizer.iterate()
fe_eval_dict = fe_optimizer.eval_dict
fe_dict = {}
for key, value in fe_eval_dict.items():
fe_dict[key[0]] = value
hpo_optimizer.iterate()
hpo_eval_dict = hpo_optimizer.eval_dict
hpo_dict = {}
for key, value in hpo_eval_dict.items():
hpo_dict[key[1]] = value
with open(os.path.join(output_dir, '%s-%s-fe.pkl' % (dataset, algo)), 'wb') as f:
pkl.dump(fe_dict, f)
with open(os.path.join(output_dir, '%s-%s-hpo.pkl' % (dataset, algo)), 'wb') as f:
pkl.dump(hpo_dict, f)
print("Algo %s end" % algo)
| 47.647059 | 111 | 0.577366 | 488 | 4,860 | 5.483607 | 0.297131 | 0.037369 | 0.067265 | 0.032885 | 0.21151 | 0.198804 | 0.198804 | 0.160688 | 0.141256 | 0.119581 | 0 | 0.008122 | 0.341358 | 4,860 | 101 | 112 | 48.118812 | 0.827866 | 0 | 0 | 0.164835 | 0 | 0 | 0.057202 | 0.004321 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.175824 | 0 | 0.175824 | 0.010989 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
492581c7b8d48efd8c0e7f0968812eaf2c9f779d | 2,534 | py | Python | Project/source/tests/UnitConverterTest.py | EricPapagiannis/CSCC01-team10-Project | 773a853f60464c1729c1d9ce4447dfae6fd01dac | [
"MIT"
] | 1 | 2017-01-31T02:27:27.000Z | 2017-01-31T02:27:27.000Z | Project/source/tests/UnitConverterTest.py | EricPapagiannis/CSCC01-team10-Project | 773a853f60464c1729c1d9ce4447dfae6fd01dac | [
"MIT"
] | null | null | null | Project/source/tests/UnitConverterTest.py | EricPapagiannis/CSCC01-team10-Project | 773a853f60464c1729c1d9ce4447dfae6fd01dac | [
"MIT"
] | null | null | null | import sys
sys.path.append('../')
from data_parsing.CSV_data_parser import UnitConverter
import unittest
class UnitConverterTest(unittest.TestCase):
def testDateConvert(self):
inp = '2015-08-21'
resulteu = UnitConverter.convertToOpen('lastupdate', inp, 'eu')
resultnasa = UnitConverter.convertToOpen('lastupdate', inp, 'nasa')
expected = '15/08/21'
self.assertEqual(resulteu, expected)
self.assertEqual(resultnasa, expected)
def testDateConvertFormats(self):
inp = '2015-08-21'
resulteu = UnitConverter.convertToOpen('lastupdate', inp, 'eu')
resultnasa = UnitConverter.convertToOpen('lastupdate', inp, 'nasa')
expected = '15/08/21'
self.assertEqual(resulteu, expected)
self.assertEqual(resultnasa, expected)
def testConvertEURA(self):
inp = '45.7625'
fullRevolutionInp = '405.7625'
resulteu = UnitConverter.convertToOpen('rightascension', inp, 'eu')
resulteuFullRev = UnitConverter.convertToOpen('rightascension',
fullRevolutionInp, 'eu')
expected = '3.00000 3.00000 3.00000'
expected2 = '27.00000 3.00000 3.00000'
self.assertEqual(resulteu, expected)
self.assertEqual(resulteuFullRev, expected2)
def testConvertEURANegativeNumberString(self):
inp = '-314.2375'
resulteu = UnitConverter.convertToOpen('rightascension', inp, 'eu')
expected = '-20.00000 -56.00000 -57.00000'
self.assertEqual(resulteu, expected)
def testConvertEUDEC(self):
inp = '45.7625'
fullRevolutionInp = '405.7625'
resulteu = UnitConverter.convertToOpen('declination', inp, 'eu')
resulteuFullRev = UnitConverter.convertToOpen('rightacension',
fullRevolutionInp, 'eu')
expected = '3.00000 3.00000 3.00000'
self.assertEqual(resulteu, expected)
self.assertEqual(resulteuFullRev, '405.7625')
def testConvertNASARA(self):
inp = '03h03m03s'
resultnasa = UnitConverter.convertToOpen('rightascension', inp, 'nasa')
expected = '03 03 03'
self.assertEqual(resultnasa, expected)
def testConvertNASADEC(self):
inp = '-03h03m03s'
resultnasa = UnitConverter.convertToOpen('declination', inp, 'nasa')
expected = '-03 03 03'
self.assertEqual(resultnasa, expected)
if __name__ == '__main__':
unittest.main(exit=False)
| 37.820896 | 79 | 0.638516 | 229 | 2,534 | 7.017467 | 0.262009 | 0.177971 | 0.04107 | 0.096453 | 0.732421 | 0.654014 | 0.544493 | 0.544493 | 0.544493 | 0.499067 | 0 | 0.089427 | 0.249803 | 2,534 | 66 | 80 | 38.393939 | 0.755918 | 0 | 0 | 0.490909 | 0 | 0 | 0.153907 | 0 | 0 | 0 | 0 | 0 | 0.2 | 1 | 0.127273 | false | 0 | 0.054545 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
492681014be8dbe2aad0d6aea6631c0105158cc6 | 2,763 | py | Python | navio/aws/services/_acm.py | naviotech/navio-aws | 6827e0af8c3e1d2b4216f3b6ae80c2f1851dad0b | [
"Apache-2.0",
"MIT"
] | null | null | null | navio/aws/services/_acm.py | naviotech/navio-aws | 6827e0af8c3e1d2b4216f3b6ae80c2f1851dad0b | [
"Apache-2.0",
"MIT"
] | null | null | null | navio/aws/services/_acm.py | naviotech/navio-aws | 6827e0af8c3e1d2b4216f3b6ae80c2f1851dad0b | [
"Apache-2.0",
"MIT"
] | null | null | null | import boto3
import uuid
from navio.aws._common import dump
from navio.aws.services._session import AWSSession
class AWSACM(AWSSession):
def __init__(self, **kwargs):
        super(AWSACM, self).__init__(
            kwargs['profile_name'], kwargs.get('region_name', None)
        )
def find_cert_arn(self, **kwargs):
cert_arn = None
if 'domain_name' not in kwargs:
raise Exception('Argument missing: domain_name')
client = self.client('acm')
cache_key = 'acm.certificates.{}.{}.{}'.format(
self.region_name,
self.profile_name,
'ISSUED'
)
certificates_list = self.cache(cache_key)
if certificates_list is None:
certificates_list = list()
paginator = client.get_paginator('list_certificates')
page_iterator = paginator.paginate(CertificateStatuses=['ISSUED'])
for page in page_iterator:
if 'CertificateSummaryList' in page:
for cert in page['CertificateSummaryList']:
certificates_list.append(cert)
self.cache(cache_key, certificates_list)
for cert in certificates_list:
cert_details = client.describe_certificate(CertificateArn=cert['CertificateArn'])
for san in cert_details['Certificate']['SubjectAlternativeNames']:
if san == kwargs['domain_name']:
if cert_arn is not None:
raise Exception(
'Multiple certificates with same domain name. ({}, {})'.format(
cert_arn, cert['CertificateArn']))
else:
cert_arn = cert['CertificateArn']
return cert_arn
def request_via_dns(self, **kwargs):
if 'domain_name' not in kwargs:
raise Exception('Argument missing: domain_name')
if 'alternative_names' not in kwargs:
raise Exception('Argument missing: alternative_names')
client = self.client('acm')
resp = client.request_certificate(
DomainName=kwargs.get('domain_name'),
SubjectAlternativeNames=kwargs.get('alternative_names'),
ValidationMethod='DNS',
            IdempotencyToken=uuid.uuid4().hex  # the API expects a string token, not a UUID object
)
return resp
def get_dns_validation_options(self, **kwargs):
if 'certificate_arn' not in kwargs:
raise Exception('Argument missing: certificate_arn')
client = self.client('acm')
resp = client.describe_certificate(
            CertificateArn=kwargs['certificate_arn']
)
return resp['Certificate']['DomainValidationOptions'][0]['ResourceRecord']
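# --- Editor's illustrative sketch (not part of the original source). ---
# Example usage, assuming an AWS profile configured locally (profile, region
# and domain names are placeholders):
#
#   acm = AWSACM(profile_name="my-profile", region_name="us-east-1")
#   cert_arn = acm.find_cert_arn(domain_name="example.com")
#   resp = acm.request_via_dns(
#       domain_name="example.com",
#       alternative_names=["www.example.com"],
#   )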
| 33.289157 | 93 | 0.594282 | 267 | 2,763 | 5.925094 | 0.28839 | 0.044248 | 0.027813 | 0.040455 | 0.165613 | 0.165613 | 0.128951 | 0.078382 | 0.078382 | 0.078382 | 0 | 0.001577 | 0.311618 | 2,763 | 82 | 94 | 33.695122 | 0.830179 | 0 | 0 | 0.112903 | 0 | 0 | 0.191459 | 0.041621 | 0 | 0 | 0 | 0 | 0 | 1 | 0.064516 | false | 0 | 0.064516 | 0 | 0.193548 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
492a5be4a3a0378922ed0d46261ba1b5ccd32114 | 5,637 | py | Python | build_libs_and_wrappers.py | gschramm/parallelproj | 0e6eadb81b8d961b9ed932420d77b56fd87c4bf0 | [
"MIT"
] | 5 | 2021-01-27T15:05:03.000Z | 2022-03-18T08:40:13.000Z | build_libs_and_wrappers.py | gschramm/parallelproj | 0e6eadb81b8d961b9ed932420d77b56fd87c4bf0 | [
"MIT"
] | 13 | 2021-02-10T12:15:29.000Z | 2021-09-23T10:38:53.000Z | build_libs_and_wrappers.py | gschramm/parallelproj | 0e6eadb81b8d961b9ed932420d77b56fd87c4bf0 | [
"MIT"
] | 2 | 2021-02-14T21:26:32.000Z | 2021-09-19T18:43:48.000Z | # small wrapper script for all cmake calls
# to build all C and CUDA libs
# supposed to be OS independent
import argparse
import os
import platform
from tempfile import mkdtemp
from shutil import rmtree
parser = argparse.ArgumentParser(description = 'Build C/CUDA libs with cmake and install '
                                               'them to the correct location for the python package')
parser.add_argument('--build_dir', help = 'temp build directory',
default = None)
parser.add_argument('--source_dir', help = 'cmake source dir',
default = os.path.dirname(os.path.abspath(__file__)))
parser.add_argument('--cmake_install_prefix', help = 'cmake INSTALL_LIB_DIR - default: %(default)s',
default = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'pyparallelproj',f'{platform.system()}_{platform.architecture()[0]}'))
parser.add_argument('--keep_build_dir', help = 'do not remove tempory build dir',
action = 'store_true')
parser.add_argument('--dry', help = 'dry run - only print cmake commands',
action = 'store_true')
parser.add_argument('--cmake_bin', help = 'cmake binary to use', default = 'cmake')
parser.add_argument('--generate_idl_wrappers', action = 'store_true')
parser.add_argument('--keep_idl_wrappers', help = 'do not remove tempory idl wrappers',
action = 'store_true')
args = parser.parse_args()
#---------------------------------------------------------------------------------------------
if args.build_dir is None:
build_dir = mkdtemp(prefix = 'build_', dir = '.')
else:
build_dir = args.build_dir
source_dir = args.source_dir
cmake_install_prefix = args.cmake_install_prefix
remove_build_dir = not args.keep_build_dir
dry = args.dry
cmake_bin = args.cmake_bin
generate_idl_wrappers = args.generate_idl_wrappers
keep_idl_wrappers = args.keep_idl_wrappers
#---------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------
def generate_idl_wrapper(src_file, wrapper_file, add_extern_C = False):
""" Parse a C header file and generate idl wrappers for all included functions
"""
from pyclibrary import CParser
parser = CParser(src_file)
if add_extern_C:
extern_str = 'extern "C" '
else:
extern_str = ''
with open(wrapper_file, 'w') as f:
f.write('// AUTO_GENERATED - DO NOT MODIIFY\n\n')
f.write(f'#include "{os.path.basename(src_file)}"\n\n')
f.write('')
for func_name,val in parser.defs['functions'].items():
# write the function with return type
rtype = val[0][0]
nargs = len(val[1])
f.write(f'{extern_str}{rtype} {func_name}_idl_wrapper(int argc, void *argv[])\n')
f.write('{\n')
f.write(f' {func_name}(\n')
# write the function arguments
            # `arg` avoids shadowing the module-level argparse namespace `args`
            for i, arg in enumerate(val[1]):
                type_qual = ''
                if 'const' in arg[1].type_quals[0]: type_qual = 'const '
                if len(arg[1]) == 2 and arg[1][1] == '*':
                    prefix = f'({type_qual}{arg[1][0]}*)'
                else:
                    prefix = f'*({type_qual}{arg[1][0]}*)'
if i < (nargs - 1):
f.write(f' {prefix} argv[{i}],\n')
else:
f.write(f' {prefix} argv[{i}]);\n')
f.write('}\n')
f.write('\n')
print(f'generated {wrapper_file}')
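# --- Editor's illustrative sketch (not part of the original source). ---
# For a hypothetical header declaring
#     void joseph3d_fwd(const float *img, float *proj, int n);
# the generator above would emit a wrapper roughly like:
#
#   void joseph3d_fwd_idl_wrapper(int argc, void *argv[])
#   {
#     joseph3d_fwd(
#       (const float*) argv[0],
#       (float*) argv[1],
#       *(int*) argv[2]);
#   }
#
# with an `extern "C" ` prefix when add_extern_C is set (as for the CUDA header).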
#---------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------
# generate the IDL wrappers
if generate_idl_wrappers:
c_idl_wrapper_dir = os.path.join('c','wrapper')
os.makedirs(c_idl_wrapper_dir, exist_ok = True)
header_file = os.path.join('c','include','parallelproj_c.h')
wrapper_file = os.path.join('c','wrapper',
f'{os.path.splitext(os.path.basename(header_file))[0]}_idl_wrapper.c')
generate_idl_wrapper(header_file, wrapper_file)
cuda_idl_wrapper_dir = os.path.join('cuda','wrapper')
os.makedirs(cuda_idl_wrapper_dir, exist_ok = True)
header_file_cuda = os.path.join('cuda','include','parallelproj_cuda.h')
wrapper_file_cuda = os.path.join('cuda','wrapper',
f'{os.path.splitext(os.path.basename(header_file_cuda))[0]}_idl_wrapper.cu')
generate_idl_wrapper(header_file_cuda, wrapper_file_cuda, add_extern_C = True)
#---------------------------------------------------------------------------------------------
# on windows DLLs get install in CMAKE_INSTALL_BINDIR
cmake_options = f'-B {build_dir} -DCMAKE_INSTALL_PREFIX={cmake_install_prefix}'
if generate_idl_wrappers:
cmake_options = f'{cmake_options} -DPARALLELPROJ_BUILD_WITH_IDL_WRAPPERS=TRUE'
cmd1 = f'{cmake_bin} {cmake_options} {source_dir}'
cmd2 = f'{cmake_bin} --build {build_dir} --target install --config release'
if dry:
print(cmd1,'\n')
print(cmd2)
else:
os.system(cmd1)
os.system(cmd2)
if remove_build_dir:
rmtree(build_dir)
else:
print(f'Kept build directory {build_dir}')
if generate_idl_wrappers:
if not keep_idl_wrappers:
rmtree(c_idl_wrapper_dir)
rmtree(cuda_idl_wrapper_dir)
else:
print(f'Kept idl c wrapper directory {c_idl_wrapper_dir}')
print(f'Kept idl cuda wrapper directory {cuda_idl_wrapper_dir}')
| 38.875862 | 157 | 0.572113 | 696 | 5,637 | 4.389368 | 0.21408 | 0.031424 | 0.044517 | 0.018331 | 0.237316 | 0.166612 | 0.0982 | 0.052373 | 0.030115 | 0.030115 | 0 | 0.005196 | 0.180593 | 5,637 | 144 | 158 | 39.145833 | 0.656203 | 0.18893 | 0 | 0.131313 | 0 | 0 | 0.298396 | 0.099978 | 0 | 0 | 0 | 0 | 0 | 1 | 0.010101 | false | 0 | 0.060606 | 0 | 0.070707 | 0.070707 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
492be04c0842007b58b8413dbd54319841c5fa0d | 356 | py | Python | secao_2/aula_24_ex_2.py | alfmorais/estrutura_de_dados_em_python | ffa75b3af63e9591b4909b1d0639c45444b433e7 | [
"MIT"
] | 1 | 2021-06-06T22:41:15.000Z | 2021-06-06T22:41:15.000Z | secao_2/aula_24_ex_2.py | alfmorais/estrutura_de_dados_em_python | ffa75b3af63e9591b4909b1d0639c45444b433e7 | [
"MIT"
] | null | null | null | secao_2/aula_24_ex_2.py | alfmorais/estrutura_de_dados_em_python | ffa75b3af63e9591b4909b1d0639c45444b433e7 | [
"MIT"
] | null | null | null | notas = {
"Alfredo": [5.0, 9.5, 8.7, 7.5],
"Joaquim": [9.9, 7.5, 6.8, 7.9],
"Helena": [6.5, 7.9, 8.8, 9.5]
}
def soma_media(lista):
    media = sum(lista) / len(lista)
return media
for k in notas:
nota = notas.get(k)
media = soma_media(nota)
media = round(media, 2)
    print(k + ' finished with an average of ' + str(media))
| 22.25 | 50 | 0.502809 | 62 | 356 | 2.854839 | 0.483871 | 0.022599 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 0.30618 | 356 | 15 | 51 | 23.733333 | 0.611336 | 0 | 0 | 0 | 0 | 0 | 0.11437 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0 | 0 | 0.153846 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
492c9f544acf9e9ec66543a70180dcbc1f956e2f | 4,309 | py | Python | examples/large_five_way_classification.py | xxchenxx/tape | fc79eb3c03c89cb3e4e6a04190ca423d6225cf62 | [
"BSD-3-Clause"
] | null | null | null | examples/large_five_way_classification.py | xxchenxx/tape | fc79eb3c03c89cb3e4e6a04190ca423d6225cf62 | [
"BSD-3-Clause"
] | null | null | null | examples/large_five_way_classification.py | xxchenxx/tape | fc79eb3c03c89cb3e4e6a04190ca423d6225cf62 | [
"BSD-3-Clause"
] | null | null | null |
from typing import Union, List, Tuple, Any, Dict
import torch
from torch.utils.data import Dataset
from pathlib import Path
import numpy as np
import pickle
from tape.datasets import LMDBDataset, pad_sequences
from tape.registry import registry
from tape.tokenizers import TAPETokenizer
from tape import ProteinBertForSequenceClassification
@registry.register_task('large_five_way_classification', num_labels=5)
class LargeFiveWayClassificationDataset(Dataset):
def __init__(self,
data_path: Union[str, Path],
split: str,
tokenizer: Union[str, TAPETokenizer] = 'iupac',
data_label_set=None,
data_fold=None,
in_memory: bool = False):
if split not in ('train', 'valid'):
raise ValueError(f"Unrecognized split: {split}. Must be one of "
f"['train', 'valid")
if isinstance(tokenizer, str):
tokenizer = TAPETokenizer(vocab=tokenizer)
self.tokenizer = tokenizer
data_path = Path(data_path)
self.data_path = data_path
with open(data_path / 'label.txt', 'r') as f:
labels = f.readlines()
self.labels = []
self.labels_int = []
for label in labels:
name, label_int = label.split()
self.labels.append(name)
self.labels_int.append(int(label_int))
#print(self.labels)
self.ptms = {}
np.random.seed(42)
order = np.random.permutation(len(self.labels))
data_fold = int(data_fold)
interval = len(self.labels) // 10
test_sample = order[data_fold * interval:(data_fold + 1) * interval]
from sklearn.model_selection import train_test_split
train_data = []
train_labels = []
test_data = []
test_labels = []
for i in range(len(self.labels)):
if i in test_sample:
test_data.append(self.labels[i])
test_labels.append(self.labels_int[i])
else:
train_data.append(self.labels[i])
train_labels.append(self.labels_int[i])
print(len(train_data))
print(len(train_labels))
print(len(test_data))
print(len(test_labels))
if split == 'train':
self.labels = train_data
self.labels_int = train_labels
else:
self.labels = test_data
self.labels_int = test_labels
def __len__(self) -> int:
return len(self.labels)
def __getitem__(self, index: int):
name = self.labels[index]
labels = self.labels_int[index]
fasta_path = self.data_path / 'seqs' / f"{name}.fasta"
with open(fasta_path, 'r') as f:
fasta = f.readlines()[1:]
fasta = ''.join(fasta).replace("\n", "")
token_ids = self.tokenizer.encode(fasta)
input_mask = np.ones_like(token_ids)
# pad with -1s because of cls/sep tokens
#labels = np.pad(labels, (1, 1), 'constant', constant_values=-1)
return token_ids, input_mask, labels
def collate_fn(self, batch: List[Tuple[Any, ...]]) -> Dict[str, torch.Tensor]:
input_ids, input_mask, ss_label = tuple(zip(*batch))
input_ids = torch.from_numpy(pad_sequences(input_ids, 0, max_length=1280))
input_mask = torch.from_numpy(pad_sequences(input_mask, 0, max_length=1280))
#ss_label = torch.from_numpy(pad_sequences(ss_label, -1))
ss_label = torch.from_numpy(np.asarray(ss_label))
output = {'input_ids': input_ids,
'input_mask': input_mask,
'targets': ss_label}
return output
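    # --- Editor's illustrative note (not part of the original source). ---
    # Assuming tape's pad_sequences pads each sequence out to max_length, the
    # collated batch would contain input_ids and input_mask of shape
    # [batch_size, 1280] and targets of shape [batch_size].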
registry.register_task_model(
'large_five_way_classification', 'transformer', ProteinBertForSequenceClassification, force_reregister=True)
if __name__ == '__main__':
""" To actually run the task, you can do one of two things. You can
simply import the appropriate run function from tape.main. The
possible functions are `run_train`, `run_train_distributed`, and
`run_eval`. Alternatively, you can add this dataset directly to
tape/datasets.py.
"""
from tape.main import run_train
run_train() | 35.03252 | 112 | 0.61035 | 526 | 4,309 | 4.775665 | 0.30038 | 0.075637 | 0.036226 | 0.020303 | 0.083599 | 0.045382 | 0 | 0 | 0 | 0 | 0 | 0.007164 | 0.287306 | 4,309 | 123 | 113 | 35.03252 | 0.810811 | 0.040845 | 0 | 0.022472 | 0 | 0 | 0.055381 | 0.015152 | 0 | 0 | 0 | 0 | 0 | 1 | 0.044944 | false | 0 | 0.134831 | 0.011236 | 0.224719 | 0.044944 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
492ccea7c862f24a39c4ada66af724e1c1a3aa04 | 3,467 | py | Python | satflow/models/layers/GResBlock.py | lewtun/satflow | 6a675e4fa921b4dd023361b55cc2a5fa25b8f8ed | [
"MIT"
] | 32 | 2021-06-16T17:55:16.000Z | 2022-02-23T08:45:27.000Z | satflow/models/layers/GResBlock.py | lewtun/satflow | 6a675e4fa921b4dd023361b55cc2a5fa25b8f8ed | [
"MIT"
] | 75 | 2021-06-03T13:43:49.000Z | 2022-03-11T13:00:52.000Z | satflow/models/layers/GResBlock.py | lewtun/satflow | 6a675e4fa921b4dd023361b55cc2a5fa25b8f8ed | [
"MIT"
] | 5 | 2021-08-06T15:01:14.000Z | 2022-02-25T22:41:25.000Z | import torch
import torch.nn as nn
from torch.nn import functional as F
from satflow.models.layers.Normalization import ConditionalNorm, SpectralNorm
class GResBlock(nn.Module):
def __init__(
self,
in_channel,
out_channel,
kernel_size=None,
padding=1,
stride=1,
n_class=96,
bn=True,
activation=F.relu,
upsample_factor=2,
downsample_factor=1,
):
super().__init__()
        self.upsample_factor = upsample_factor if downsample_factor == 1 else 1
self.downsample_factor = downsample_factor
self.activation = activation
        self.bn = bn if downsample_factor == 1 else False
if kernel_size is None:
kernel_size = [3, 3]
self.conv0 = SpectralNorm(
nn.Conv2d(
                in_channel, out_channel, kernel_size, stride, padding, bias=True
)
)
self.conv1 = SpectralNorm(
nn.Conv2d(
                out_channel, out_channel, kernel_size, stride, padding, bias=True
)
)
self.skip_proj = True
self.conv_sc = SpectralNorm(nn.Conv2d(in_channel, out_channel, 1, 1, 0))
# if in_channel != out_channel or upsample_factor or downsample_factor:
# self.conv_sc = SpectralNorm(nn.Conv2d(in_channel, out_channel, 1, 1, 0))
# self.skip_proj = True
if bn:
self.CBNorm1 = ConditionalNorm(in_channel, n_class) # TODO 2 x noise.size[1]
self.CBNorm2 = ConditionalNorm(out_channel, n_class)
def forward(self, x, condition=None):
# The time dimension is combined with the batch dimension here, so each frame proceeds
# through the blocks independently
BT, C, W, H = x.size()
out = x
if self.bn:
out = self.CBNorm1(out, condition)
out = self.activation(out)
if self.upsample_factor != 1:
out = F.interpolate(out, scale_factor=self.upsample_factor)
out = self.conv0(out)
if self.bn:
out = out.view(BT, -1, W * self.upsample_factor, H * self.upsample_factor)
out = self.CBNorm2(out, condition)
out = self.activation(out)
out = self.conv1(out)
if self.downsample_factor != 1:
out = F.avg_pool2d(out, self.downsample_factor)
if self.skip_proj:
skip = x
if self.upsample_factor != 1:
skip = F.interpolate(skip, scale_factor=self.upsample_factor)
skip = self.conv_sc(skip)
if self.downsample_factor != 1:
skip = F.avg_pool2d(skip, self.downsample_factor)
else:
skip = x
y = out + skip
y = y.view(
BT,
-1,
W * self.upsample_factor // self.downsample_factor,
H * self.upsample_factor // self.downsample_factor,
)
return y
if __name__ == "__main__":
n_class = 96
batch_size = 4
n_frames = 20
gResBlock = GResBlock(3, 100, [3, 3])
x = torch.rand([batch_size * n_frames, 3, 64, 64])
condition = torch.rand([batch_size, n_class])
condition = condition.repeat(n_frames, 1)
y = gResBlock(x, condition)
print(gResBlock)
print(x.size())
print(y.size())
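    # Editor's note: with the defaults above (upsample_factor=2,
    # downsample_factor=1), x has shape [80, 3, 64, 64] and y should print as
    # torch.Size([80, 100, 128, 128]): batch and frames stay fused while the
    # spatial dims double and channels go from 3 to 100 (assuming the custom
    # ConditionalNorm/SpectralNorm layers preserve spatial shape).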
# with SummaryWriter(comment='gResBlock') as w:
# w.add_graph(gResBlock, [x, condition, ])
| 29.134454 | 97 | 0.586386 | 434 | 3,467 | 4.497696 | 0.230415 | 0.086066 | 0.082992 | 0.048668 | 0.371414 | 0.259734 | 0.156762 | 0.118852 | 0.118852 | 0.118852 | 0 | 0.023749 | 0.319873 | 3,467 | 118 | 98 | 29.381356 | 0.804071 | 0.116527 | 0 | 0.141176 | 0 | 0 | 0.00262 | 0 | 0 | 0 | 0 | 0.008475 | 0 | 1 | 0.023529 | false | 0 | 0.047059 | 0 | 0.094118 | 0.035294 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
493275410ca8b65822a0847b5113ded65eef188f | 2,191 | py | Python | henpy/persist/tables.py | 7venheavens/henpy | d5ee06f1c51b095f243ba67a0626fd6d935eadcb | [
"MIT"
] | null | null | null | henpy/persist/tables.py | 7venheavens/henpy | d5ee06f1c51b095f243ba67a0626fd6d935eadcb | [
"MIT"
] | null | null | null | henpy/persist/tables.py | 7venheavens/henpy | d5ee06f1c51b095f243ba67a0626fd6d935eadcb | [
"MIT"
] | null | null | null | from sqlalchemy import Column, ForeignKey, Integer, String, Date, Table
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship
from sqlalchemy import create_engine
Base = declarative_base()
# Association table for the many-to-many video/tag relation
video_tag = Table("video_tag", Base.metadata,
Column("video_id", Integer, ForeignKey("video.id")),
Column("tag_id", Integer, ForeignKey("tag.id")),)
class Tag(Base):
"""
@attrs
id
name
data
"""
__tablename__ = "tag"
id = Column(Integer, primary_key=True)
name = Column(String(255), nullable=False)
data = relationship("TagData")
videos = relationship(
"Video",
secondary=video_tag,
back_populates="tags")
class TagData(Base):
"""Language specific tag data
@attrs
id
tag
language
name
display_name
"""
__tablename__ = "tag_data"
id = Column(Integer, primary_key=True)
tag_id = Column(Integer, ForeignKey("tag.id"))
type = Column(String(255))
language = Column(String(20), nullable=False)
name = Column(String(255), nullable=False, index=True)
display_name = Column(String(255), nullable=False)
class Video(Base):
__tablename__ = "video"
id = Column(Integer, primary_key=True)
code = Column(String(50), nullable=False, index=True)
release_date = Column(Date, index=True)
image_path = Column(String(255), nullable=True) # Images are optional
director = Column(String(100), nullable=False)
maker = Column(String(100), nullable=False)
label = Column(String(100), nullable=False)
tags = relationship(
"Tag",
secondary=video_tag,
back_populates="videos")
def __repr__(self):
return self.__str__()
def __str__(self):
return f"<VideoMetadata:code={self.code}>"
class VideoData(Base):
"""Video data containing language requirements
"""
__tablename__ = "video_data"
id = Column(Integer, primary_key=True)
title = Column(String(100), nullable=False)
language = Column(String(20), nullable=False)
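# --- Editor's illustrative sketch (not part of the original source). ---
# Typical setup with these models (the engine URL and sample values are
# placeholders):
#
#   from sqlalchemy.orm import sessionmaker
#   engine = create_engine("sqlite:///henpy.db")
#   Base.metadata.create_all(engine)
#   session = sessionmaker(bind=engine)()
#
#   tag = Tag(name="example-tag")
#   video = Video(code="ABC-123", director="d", maker="m", label="l")
#   video.tags.append(tag)
#   session.add(video)
#   session.commit()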
| 26.719512 | 74 | 0.65267 | 251 | 2,191 | 5.498008 | 0.278884 | 0.104348 | 0.054348 | 0.063768 | 0.334783 | 0.210145 | 0.047826 | 0 | 0 | 0 | 0 | 0.02019 | 0.231401 | 2,191 | 81 | 75 | 27.049383 | 0.799287 | 0.109995 | 0 | 0.170213 | 0 | 0 | 0.06695 | 0.017003 | 0 | 0 | 0 | 0 | 0 | 1 | 0.042553 | false | 0 | 0.085106 | 0.042553 | 0.787234 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
49328abc38259283b652356fa70316ea4deac8aa | 3,583 | py | Python | codes/data/paired_frame_dataset.py | neonbjb/DL-Art-School | a6f0f854b987ac724e258af8b042ea4459a571bc | [
"Apache-2.0"
] | 12 | 2020-12-13T12:45:03.000Z | 2022-03-29T09:58:15.000Z | codes/data/paired_frame_dataset.py | neonbjb/mmsr | 2706a84f15613e9dcd48e2ba927e7779046cf681 | [
"Apache-2.0"
] | 1 | 2020-12-31T01:12:45.000Z | 2021-03-31T11:43:52.000Z | codes/data/paired_frame_dataset.py | neonbjb/mmsr | 2706a84f15613e9dcd48e2ba927e7779046cf681 | [
"Apache-2.0"
] | 3 | 2020-12-14T06:04:04.000Z | 2020-12-26T19:11:41.000Z | from data.base_unsupervised_image_dataset import BaseUnsupervisedImageDataset
import numpy as np
import torch
from bisect import bisect_left
import os.path as osp
class PairedFrameDataset(BaseUnsupervisedImageDataset):
def __init__(self, opt):
super(PairedFrameDataset, self).__init__(opt)
def get_pair(self, chunk_index, chunk_offset):
imname = osp.basename(self.chunks[chunk_index].path)
if '_left' in imname:
chunks = [chunk_index, chunk_index+1]
else:
chunks = [chunk_index-1, chunk_index]
hqs, refs, masks, centers = [], [], [], []
for i in chunks:
h, r, c, m, p = self.chunks[i][chunk_offset]
hqs.append(h)
refs.append(r)
masks.append(m)
centers.append(c)
path = p
return hqs, refs, masks, centers, path
def __getitem__(self, item):
chunk_ind = bisect_left(self.starting_indices, item)
chunk_ind = chunk_ind if chunk_ind < len(self.starting_indices) and self.starting_indices[chunk_ind] == item else chunk_ind-1
hqs, refs, masks, centers, path = self.get_pair(chunk_ind, item-self.starting_indices[chunk_ind])
hs, hrs, hms, hcs = self.resize_hq(hqs, refs, masks, centers)
ls, lrs, lms, lcs = self.synthesize_lq(hs, hrs, hms, hcs)
# Convert to torch tensor
hq = torch.from_numpy(np.ascontiguousarray(np.transpose(np.stack(hs), (0, 3, 1, 2)))).float()
hq_ref = torch.from_numpy(np.ascontiguousarray(np.transpose(np.stack(hrs), (0, 3, 1, 2)))).float()
hq_mask = torch.from_numpy(np.ascontiguousarray(np.stack(hms))).squeeze().unsqueeze(dim=1)
hq_ref = torch.cat([hq_ref, hq_mask], dim=1)
lq = torch.from_numpy(np.ascontiguousarray(np.transpose(np.stack(ls), (0, 3, 1, 2)))).float()
lq_ref = torch.from_numpy(np.ascontiguousarray(np.transpose(np.stack(lrs), (0, 3, 1, 2)))).float()
lq_mask = torch.from_numpy(np.ascontiguousarray(np.stack(lms))).squeeze().unsqueeze(dim=1)
lq_ref = torch.cat([lq_ref, lq_mask], dim=1)
return {'GT_path': path, 'lq': lq, 'hq': hq, 'gt_fullsize_ref': hq_ref, 'lq_fullsize_ref': lq_ref,
'lq_center': torch.tensor(lcs, dtype=torch.long), 'gt_center': torch.tensor(hcs, dtype=torch.long)}
if __name__ == '__main__':
opt = {
'name': 'amalgam',
'paths': ['F:\\4k6k\\datasets\\ns_images\\vr\\validation'],
'weights': [1],
#'target_size': 128,
'force_multiple': 32,
'scale': 2,
'eval': False,
'fixed_corruptions': ['jpeg-medium'],
'random_corruptions': [],
'num_corrupts_per_image': 0,
'num_frames': 10
}
ds = PairedFrameDataset(opt)
import os
os.makedirs("debug", exist_ok=True)
bs = 0
batch = None
for i in range(len(ds)):
import random
k = 'lq'
        element = ds[random.randint(0, len(ds) - 1)]  # randint is inclusive on both ends
base_file = osp.basename(element["GT_path"])
o = element[k].unsqueeze(0)
if bs < 2:
if batch is None:
batch = o
else:
batch = torch.cat([batch, o], dim=0)
bs += 1
continue
if 'path' not in k and 'center' not in k:
b, fr, f, h, w = batch.shape
for j in range(fr):
import torchvision
base=osp.basename(base_file)
torchvision.utils.save_image(batch[:, j], "debug/%i_%s_%i__%s.png" % (i, k, j, base))
bs = 0
batch = None
| 38.945652 | 133 | 0.591962 | 484 | 3,583 | 4.190083 | 0.305785 | 0.031558 | 0.04142 | 0.047337 | 0.217949 | 0.168639 | 0.146943 | 0.146943 | 0.10355 | 0.053254 | 0 | 0.016024 | 0.26849 | 3,583 | 91 | 134 | 39.373626 | 0.757726 | 0.011722 | 0 | 0.076923 | 0 | 0 | 0.081119 | 0.025155 | 0 | 0 | 0 | 0 | 0 | 1 | 0.038462 | false | 0 | 0.102564 | 0 | 0.179487 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4933af55544473e150b55792a2338ce69937835b | 6,538 | py | Python | Amiga/a500.py | thanasisk/binja-amiga | 9ca2116a7dff614b3800ce7c8ab7df86713c5cad | [
"MIT"
] | 5 | 2021-05-07T16:14:43.000Z | 2021-08-09T16:15:00.000Z | Amiga/a500.py | thanasisk/binja-amiga | 9ca2116a7dff614b3800ce7c8ab7df86713c5cad | [
"MIT"
] | 6 | 2021-05-07T15:50:17.000Z | 2021-05-10T19:54:34.000Z | Amiga/a500.py | thanasisk/binja-amiga | 9ca2116a7dff614b3800ce7c8ab7df86713c5cad | [
"MIT"
] | null | null | null | # coding=utf-8
"""
Copyright (c) 2021 Athanasios Kostopoulos
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
"""
from __future__ import print_function
import struct
from binaryninja.architecture import Architecture
from binaryninja.types import Symbol
from binaryninja.function import InstructionInfo, InstructionTextTokenType, InstructionTextToken
try:
from m68k import M68000, OpImmediate
except ModuleNotFoundError:
import sys
import os
import binaryninja
sys.path.append(os.path.join(binaryninja.user_plugin_path(), "..", "repositories", "community", "plugins"))
from wrigjl_binaryninjam68k import M68000, OpImmediate
COPPER_INSTRUCTIONS = [ 'CMOVE', 'CSKIP', 'CWAIT', 'CEND' ]
CEND = 0xFFFFFFFE
#class A500(M68000):
class A500(Architecture):
name = "A500"
# Sizes
SIZE_BYTE = 0
SIZE_WORD = 1
SIZE_LONG = 2
# BROKEN
def perform_get_instruction_info(self, data, addr):
instr, length, _size, _source, dest, _third = self.decode_instruction(data)
if instr == 'unimplemented':
return None
result = InstructionInfo()
result.length = length
if instr in COPPER_INSTRUCTIONS:
conditional = False
branch_dest = None
return result
else:
return None
def perform_get_instruction_low_level_il(self, data, addr, il):
instr, length, size, source, dest, third = self.decode_instruction(data)
if instr is not None:
if source is not None:
pre_il = source.get_pre_il(il)
if pre_il is not None:
il.append(pre_il)
self.generate_instruction_il(il, instr, length, size, source, dest, third)
if source is not None:
post_il = source.get_post_il(il)
if post_il is not None:
il.append(post_il)
else:
il.append(il.unimplemented())
return length
def generate_instruction_il(self, il, instr, length, size, source, dest, third):
size_bytes = None
if size is not None:
size_bytes = 1 << size
if instr == 'CWAIT':
if source is not None:
il.append(source.get_source_il(il))
elif instr == 'CSKIP':
if source is not None:
il.append(source.get_source_il(il))
elif instr == 'CEND':
if source is not None:
il.append(source.get_source_il(il))
elif instr == 'CMOVE':
if source is not None:
il.append(source.get_source_il(il))
else:
            il.append(il.unimplemented())
# BROKEN
def perform_get_instruction_text(self, data, addr):
instr, length, _size, source, dest, third = self.decode_instruction(data)
#print("perform_get_instruction_text: %s" % instr)
if instr == 'unimplemented':
return None
if instr in COPPER_INSTRUCTIONS:
#if size is not None:
# instr += SizeSuffix[size]
tokens = [InstructionTextToken(InstructionTextTokenType.InstructionToken, "%-10s" % instr)]
if source is not None:
tokens += source.format(addr)
if dest is not None:
if source is not None:
tokens += [InstructionTextToken(InstructionTextTokenType.OperandSeparatorToken, ',')]
tokens += dest.format(addr)
if third is not None:
if source is not None or dest is not None:
tokens += [InstructionTextToken(InstructionTextTokenType.OperandSeparatorToken, ',')]
tokens += third.format(addr)
return tokens, length
else:
return None, None
# Yay, fixed!
def decode_instruction(self, data):
error_value = ('unimplemented', len(data), None, None, None, None)
instr = None
length = None
size = None
source = None
dest = None
third = None
if len(data) < 4:
return error_value
instruction = struct.unpack_from('>L', data)[0]
if instruction == CEND:
instr = 'CEND'
size = 4
length = 4
return instr, length, size, source, dest, third
#msb = instruction >> 8
#opcode = msb >> 4
instr_type = instruction & 0x00010001
if instr_type == 0x00010000:
comment = "CWAIT"
#comment += disassemble_wait(value)
_source = struct.unpack_from(">H", data, 0)[0]
src = OpImmediate(2, _source)
instr = comment
size = 4
length = 4
source = src
elif instr_type == 0x00010001:
comment = "CSKIP"
instr = comment
size = 4
length = 4
#mask = ((1 << 0x10) - 1) << 0x10
#_source = instruction & 0xFFFF0000
_source = struct.unpack_from(">H", data, 0)[0]
src = OpImmediate(2, _source)
source = src
#comment += disassemble_wait(value)
elif instr_type == 0x00000000 or instr_type == 0x00000001:
comment = "CMOVE"
_source = struct.unpack_from(">H", data, 0)[0]
src = OpImmediate(2, _source)
instr = comment
size = 4
length = 4
source = src
else:
print("NOT RECOGNIZED")
return instr, length, size, source, dest, third
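    # --- Editor's illustrative summary (not part of the original source). ---
    # The branch selection above reduces to these bit tests on the 32-bit
    # big-endian word (the constants come from this file, not from real
    # copper lists):
    #
    #   word == 0xFFFFFFFE                              -> CEND
    #   word & 0x00010001 == 0x00010000                 -> CWAIT
    #   word & 0x00010001 == 0x00010001                 -> CSKIP
    #   word & 0x00010001 in (0x00000000, 0x00000001)   -> CMOVE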
| 37.574713 | 111 | 0.605078 | 759 | 6,538 | 5.105402 | 0.277997 | 0.021935 | 0.039484 | 0.030194 | 0.343226 | 0.280258 | 0.258323 | 0.22271 | 0.160516 | 0.160516 | 0 | 0.02658 | 0.315234 | 6,538 | 173 | 112 | 37.791908 | 0.838955 | 0.214133 | 0 | 0.375 | 0 | 0 | 0.031085 | 0 | 0 | 0 | 0.01173 | 0 | 0 | 1 | 0.039063 | false | 0 | 0.078125 | 0 | 0.234375 | 0.015625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
49351ccb9de7ca310fddb2c9dfb5773c9ec68810 | 795 | py | Python | scripts/deploy.py | DennisDv24/polydice-core | 77bc3f0037f9d762a11ab3c67f1411a2cd677e76 | [
"MIT"
] | null | null | null | scripts/deploy.py | DennisDv24/polydice-core | 77bc3f0037f9d762a11ab3c67f1411a2cd677e76 | [
"MIT"
] | null | null | null | scripts/deploy.py | DennisDv24/polydice-core | 77bc3f0037f9d762a11ab3c67f1411a2cd677e76 | [
"MIT"
] | null | null | null | from brownie import GameController
from brownie import config, network, accounts
from scripts.utils import (
get_account,
LOCAL_BLOCKCHAIN_ENVS,
get_contract_from_config,
fund_with_link
)
from web3 import Web3
import pytest
import time
def deploy_game_controller(acc = None, funds = Web3.toWei(10, 'ether')):
if not acc: acc = get_account()
active_net = network.show_active()
game_controller = GameController.deploy(
get_contract_from_config('vrf_coordinator'),
get_contract_from_config('link_token'),
config['networks'][active_net]['fee'],
config['networks'][active_net]['keyhash'],
{'from': acc, 'value': funds}
)
return game_controller
def main():
#test_random_number()
    deploy_game_controller()
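# --- Editor's illustrative usage note (not part of the original source). ---
# Brownie runs main() when the script is invoked through its CLI, e.g.:
#
#   brownie run scripts/deploy.py --network development
#
# (the network name here is an assumption; use whichever network is
# configured in brownie-config.yaml).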
| 22.714286 | 72 | 0.68805 | 96 | 795 | 5.40625 | 0.510417 | 0.063584 | 0.086705 | 0.121387 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007962 | 0.210063 | 795 | 34 | 73 | 23.382353 | 0.818471 | 0.025157 | 0 | 0 | 0 | 0 | 0.084306 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.25 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4936012a4655c06bf28aea538f9377e8d0c03894 | 9,359 | py | Python | araig_test_runners/src/base/base_runner.py | ipa320/araig_test_stack | c704a4a4ac55f4113ff7ccd72aede0695e78a709 | [
"Apache-2.0"
] | null | null | null | araig_test_runners/src/base/base_runner.py | ipa320/araig_test_stack | c704a4a4ac55f4113ff7ccd72aede0695e78a709 | [
"Apache-2.0"
] | 52 | 2021-01-14T11:02:14.000Z | 2022-01-19T17:26:36.000Z | araig_test_runners/src/base/base_runner.py | ipa-hsd/araig_test_stack | edf673dad8385225529f8fdc86a8dae57005ec49 | [
"Apache-2.0"
] | 2 | 2021-03-26T10:01:20.000Z | 2021-08-06T12:59:17.000Z | #!/usr/bin/env python
import rospy
from araig_msgs.msg import BoolStamped
import yaml
import os
import threading
class TestBase(object):
def __init__(self, sub_dict = {} , pub_dict = {}, param_list = {}, rate = 100):
self._rate = rospy.Rate(rate)
self._input_interface = {
"robot_has_stopped" : "/signal/calc/robot_has_stopped",
"start_test" : "/signal/ui/start_test",
"interrupt_test" : "/signal/ui/interrupt_test",
"reset_test" : "/signal/ui/reset_test",
"began_recording" : "/signal/logger/begin_write"
}
        self._input_interface.update(sub_dict or {})
self._output_interface = {
"start_robot" : "/signal/runner/start_robot",
"stop_robot" : "/signal/runner/stop_robot",
"test_completed" : "/signal/runner/test_completed",
"test_failed" : "/signal/runner/test_failed",
"test_succeeded" : "/signal/runner/test_succeeded",
# TODO: The next 3 signals should be removed once UI is fully integrated
"start_test" : "/signal/ui/start_test",
"reset_test" : "/signal/ui/reset_test",
"interrupt_test" : "/signal/ui/interrupt_test"
}
        self._output_interface.update(pub_dict or {})
        self._config_param = []
        self._config_param += (param_list or [])
# TODO: Make this enums or constants or something
self._RETURN_CASE_INTERRUPTED = -1
self._RETURN_CASE_TIMED_OUT = -2
self._locks = {}
self._flag = {}
self._publishers = {}
# get ros param:
self.config_param = {}
self.get_config(self._config_param)
# sub_init
for key in self._input_interface:
rospy.Subscriber(self._input_interface[key], BoolStamped, self.callback_for_all_bool_topics, key)
self._locks[key] = threading.Lock()
self._flag[key] = BoolStamped()
self._flag[key].data = False
# pub_init
for key in self._output_interface:
self._publishers[key] = rospy.Publisher(self._output_interface[key], BoolStamped,queue_size=10, latch=True)
self._publishers[key].publish(self.buildNewBoolStamped(False))
try:
while not rospy.is_shutdown():
self.main()
self._rate.sleep()
except rospy.ROSException:
pass
def setSafeFlag(self, key, value):
        if key not in self._input_interface:
rospy.logerr(rospy.get_name() + ": Retrieving a key that does not exist!: {}".format(key))
return
with self._locks[key]:
self._flag[key] = value
    # If header is False, return the stored message's data; if True, return its header
def getSafeFlag(self, key, header = False):
        if key not in self._input_interface:
rospy.logerr(rospy.get_name() + ": Retrieving a key that does not exist!: {}".format(key))
return
else:
with self._locks[key]:
if not header:
return self._flag[key].data
else:
return self._flag[key].header
def buildNewBoolStamped(self, data = True):
msg = BoolStamped()
msg.header.stamp = rospy.Time.now()
msg.data = data
return msg
def callback_for_all_bool_topics(self, msg, key):
self.setSafeFlag(key,msg)
def timestampToFloat(self, stamp):
timestamp_float = float(stamp.secs + float(stamp.nsecs*(1e-9)))
return timestamp_float
def get_config(self, param_list):
for arg in param_list:
module_name = "/runner/"
ns = module_name
if rospy.has_param(ns + arg):
self.config_param[arg] = rospy.get_param(ns + arg)
else:
rospy.logerr("{}: {} param not set!!".format(ns, arg))
rospy.signal_shutdown("Param not set")
def isInterrupted(self):
if self.getSafeFlag("interrupt_test"):
rospy.logwarn(rospy.get_name() + ": Interrupted! Test failed! Stopping robot!")
self.stopRobot()
self.testFailed()
self.waitForReset()
return True
# Every test needs to override this function with core logic
def main(self):
pass
# The following functions impart a standard interface/structure to every runner
    # but can also be overridden with test-specific logic in subclasses
def startRobot(self):
# Expectation: Velocity interpreter starts sending cmd_vel, Goal interpreter calls move_base action with goal etc..
self._publishers["stop_robot"].publish(self.buildNewBoolStamped(False))
rospy.sleep(0.1)
self._publishers["start_robot"].publish(self.buildNewBoolStamped(True))
def stopRobot(self):
# Expectation: Velocity interpreter stops sending cmd_vel, Goal interpreter calls move_base action cancel etc..
self._publishers["start_robot"].publish(self.buildNewBoolStamped(False))
rospy.sleep(0.1)
self._publishers["stop_robot"].publish(self.buildNewBoolStamped(True))
def standardStartupSequence(self):
# Wait until start signal received
rospy.logwarn(rospy.get_name() + ": Waiting to start...")
self.loopFallbackOnFlags(["start_test"])
# Start received, wait for recorder to boot up. Potentially infinite loop, can be interrupted.
rospy.logwarn(rospy.get_name() + ": Start received, waiting for recorder init")
if self.loopFallbackOnFlags(["began_recording"]) == self._RETURN_CASE_INTERRUPTED:
return False
# Start robot
rospy.logwarn(rospy.get_name() + ": Recorder is active, Starting robot")
self.startRobot()
return True
def testCompleted(self):
self.stopRobot()
self._publishers["test_completed"].publish(self.buildNewBoolStamped(True))
def testSucceeded(self):
self.testCompleted()
self._publishers["test_succeeded"].publish(self.buildNewBoolStamped(True))
self._publishers["test_failed"].publish(self.buildNewBoolStamped(False))
def testFailed(self):
self.testCompleted()
self._publishers["test_failed"].publish(self.buildNewBoolStamped(True))
self._publishers["test_succeeded"].publish(self.buildNewBoolStamped(False))
# TODO: These functions can eventually be used as "nodes" in a Behaviour Tree like structure
def waitForReset(self):
rospy.logwarn("----------------------------------------------------------")
rospy.logwarn(rospy.get_name() + ": Waiting for user to give reset signal")
rospy.logwarn("----------------------------------------------------------")
# TODO: ui should set reset_test to False after set to True, reset signal is an event
while not self.getSafeFlag("reset_test"):
self._rate.sleep()
rospy.logwarn(rospy.get_name() + ": Resetting")
for key in self._output_interface:
self._publishers[key].publish(self.buildNewBoolStamped(False))
def sleepUninterruptedFor(self, duration):
start = rospy.Time.now()
while self.timestampToFloat(rospy.Time.now() - start) <= duration:
self._rate.sleep()
if self.isInterrupted():
return self._RETURN_CASE_INTERRUPTED
# Returns the first flag that is true, or if interrupted
def loopFallbackOnFlags(self, flag_list = []):
while not self.isInterrupted():
self._rate.sleep()
for index, flag in enumerate(flag_list):
if self.getSafeFlag(flag):
return index
return self._RETURN_CASE_INTERRUPTED
# Returns the first flag that is false, or if interrupted
def loopSequenceOnFlags(self, flag_list = []):
while not self.isInterrupted():
self._rate.sleep()
for index, flag in enumerate(flag_list):
if not self.getSafeFlag(flag):
return index
return self._RETURN_CASE_INTERRUPTED
# Returns the first flag that is true, or if interrupted, or if timed out
def timedLoopFallbackOnFlags(self, flag_list, duration):
start = rospy.Time.now()
while not self.isInterrupted():
self._rate.sleep()
if self.timestampToFloat(rospy.Time.now() - start) > duration:
return self._RETURN_CASE_TIMED_OUT
for index, flag in enumerate(flag_list):
if self.getSafeFlag(flag):
return index
return self._RETURN_CASE_INTERRUPTED
# Returns the first flag that is false, or if interrupted, or if timed out
def timedLoopSequenceOnFlags(self, flag_list, duration):
start = rospy.Time.now()
while not self.isInterrupted():
self._rate.sleep()
if self.timestampToFloat(rospy.Time.now() - start) > duration:
return self._RETURN_CASE_TIMED_OUT
for index, flag in enumerate(flag_list):
if not self.getSafeFlag(flag):
return index
return self._RETURN_CASE_INTERRUPTED
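# Illustrative sketch only (not part of the original file): a minimal concrete
# runner built on the interface above. The 20-second timeout is an arbitrary
# example value; a real runner would read such limits from config_param.
class ExampleStopRunner(TestBase):
    def main(self):
        # Wait for the start signal and for the recorder to begin writing.
        if not self.standardStartupSequence():
            return
        # Succeed if the robot stops within 20 s, otherwise fail.
        if self.timedLoopFallbackOnFlags(["robot_has_stopped"], 20.0) == 0:
            self.testSucceeded()
        else:
            self.testFailed()
        self.waitForReset()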
| 41.595556 | 123 | 0.609253 | 1,042 | 9,359 | 5.289827 | 0.212092 | 0.033019 | 0.059869 | 0.031749 | 0.471698 | 0.423258 | 0.388425 | 0.278483 | 0.273041 | 0.240022 | 0 | 0.00208 | 0.280799 | 9,359 | 224 | 124 | 41.78125 | 0.816818 | 0.129715 | 0 | 0.383721 | 0 | 0 | 0.132693 | 0.054284 | 0 | 0 | 0 | 0.004464 | 0 | 1 | 0.122093 | false | 0.011628 | 0.02907 | 0 | 0.273256 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
49364a962f492e4ca24cfa050be3cce42956ca9a | 993 | py | Python | examples/mlp/imdb_review_classification/train_word_embedding_nn.py | vishalbelsare/neupy | 684313cdaddcad326f2169384fb15ec3aa29d991 | [
"MIT"
] | null | null | null | examples/mlp/imdb_review_classification/train_word_embedding_nn.py | vishalbelsare/neupy | 684313cdaddcad326f2169384fb15ec3aa29d991 | [
"MIT"
] | null | null | null | examples/mlp/imdb_review_classification/train_word_embedding_nn.py | vishalbelsare/neupy | 684313cdaddcad326f2169384fb15ec3aa29d991 | [
"MIT"
] | null | null | null | import os
import pandas as pd
from neupy import environment
from src.word_embedding_nn import WordEmbeddingNN
from src.preprocessing import TokenizeText
from src.utils import create_logger, REVIEWS_FILE, WORD_EMBEDDING_NN
logger = create_logger(__name__)
environment.reproducible()
if not os.path.exists(REVIEWS_FILE):
    raise EnvironmentError("Cannot find reviews.csv file. You probably "
                           "haven't run the `loadata.py` script yet.")
data = pd.read_csv(REVIEWS_FILE, sep='\t')
train_data = data[data.type == 'train']
documents = train_data.text.values
logger.info("Tokenizing train data")
text_tokenizer = TokenizeText(ignore_stopwords=False)
word2vec = WordEmbeddingNN(size=100, workers=4, min_count=5, window=10)
text = text_tokenizer.transform(documents)
logger.info("Building vocabulary")
word2vec.build_vocab(text)
word2vec.train(text, n_epochs=10)
logger.info("Saving model into the {} file".format(WORD_EMBEDDING_NN))
word2vec.save(WORD_EMBEDDING_NN)
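# A matching load step might look like this (illustrative only; it assumes
# WordEmbeddingNN exposes gensim-style load()/most_similar() methods):
#   word2vec = WordEmbeddingNN.load(WORD_EMBEDDING_NN)
#   word2vec.most_similar('film')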
| 29.205882 | 72 | 0.776435 | 138 | 993 | 5.398551 | 0.557971 | 0.069799 | 0.080537 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01496 | 0.124874 | 993 | 33 | 73 | 30.090909 | 0.842348 | 0 | 0 | 0 | 0 | 0 | 0.156093 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.26087 | 0 | 0.26087 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4937b9ef2738f7db3d427ae6f3c37f3b143e5a73 | 2,199 | py | Python | channels/serializers/contributors.py | mitodl/open-discussions | ab6e9fac70b8a1222a84e78ba778a7a065c20541 | [
"BSD-3-Clause"
] | 12 | 2017-09-27T21:23:27.000Z | 2020-12-25T04:31:30.000Z | channels/serializers/contributors.py | mitodl/open-discussions | ab6e9fac70b8a1222a84e78ba778a7a065c20541 | [
"BSD-3-Clause"
] | 3,293 | 2017-06-30T18:16:01.000Z | 2022-03-31T18:01:34.000Z | channels/serializers/contributors.py | mitodl/open-discussions | ab6e9fac70b8a1222a84e78ba778a7a065c20541 | [
"BSD-3-Clause"
] | 1 | 2020-04-13T12:19:57.000Z | 2020-04-13T12:19:57.000Z | """Serializers for contributor REST APIs"""
from django.contrib.auth import get_user_model
from rest_framework import serializers
from channels.serializers.validators import validate_email, validate_username
from open_discussions.serializers import WriteableSerializerMethodField
from profiles.models import Profile
User = get_user_model()
class ContributorSerializer(serializers.Serializer):
"""Serializer for contributors. Should be accessible by moderators only"""
contributor_name = WriteableSerializerMethodField()
email = WriteableSerializerMethodField()
full_name = serializers.SerializerMethodField()
def validate_contributor_name(self, value):
"""Validate contributor name"""
return {"contributor_name": validate_username(value)}
def get_contributor_name(self, instance):
"""Returns the name for the contributor"""
return instance.name
def validate_email(self, value):
"""Validate email"""
return {"email": validate_email(value)}
def get_email(self, instance):
"""Get the email from the associated user"""
return (
User.objects.filter(username=instance.name)
.values_list("email", flat=True)
.first()
)
def get_full_name(self, instance):
"""Get the full name of the associated user"""
return (
Profile.objects.filter(user__username=instance.name)
.values_list("name", flat=True)
.first()
)
def create(self, validated_data):
api = self.context["channel_api"]
channel_name = self.context["view"].kwargs["channel_name"]
contributor_name = validated_data.get("contributor_name")
email = validated_data.get("email")
if email and contributor_name:
raise ValueError("Only one of contributor_name, email should be specified")
if contributor_name:
username = contributor_name
elif email:
username = User.objects.get(email__iexact=email).username
else:
raise ValueError("Missing contributor_name or email")
return api.add_contributor(username, channel_name)
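    # Illustrative request payloads (not part of the original file): exactly
    # one of the two identifying fields may be supplied, e.g.
    #   {"contributor_name": "some_user"}  -> add by username
    #   {"email": "user@example.com"}      -> resolve the username via email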
| 34.359375 | 87 | 0.683038 | 236 | 2,199 | 6.186441 | 0.313559 | 0.123288 | 0.016438 | 0.024658 | 0.041096 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.227831 | 2,199 | 63 | 88 | 34.904762 | 0.859835 | 0.120055 | 0 | 0.095238 | 0 | 0 | 0.087414 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.119048 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
493b54cb244a4494caed2559df7cb5689c8fbbc0 | 1,178 | py | Python | libraries/botbuilder-community-middleware-text-recognizer/botbuilder/community/middleware/text/recognizer/email_middleware.py | rvinothrajendran/botbuilder-community-python | b94ebd808ed3ed8217e32cf2a13fa5c681418622 | [
"MIT"
] | 22 | 2018-08-24T19:10:39.000Z | 2021-11-15T23:42:22.000Z | libraries/botbuilder-community-middleware-text-recognizer/botbuilder/community/middleware/text/recognizer/email_middleware.py | rvinothrajendran/botbuilder-community-python | b94ebd808ed3ed8217e32cf2a13fa5c681418622 | [
"MIT"
] | 29 | 2019-10-14T15:45:17.000Z | 2020-07-26T02:54:46.000Z | libraries/botbuilder-community-middleware-text-recognizer/botbuilder/community/middleware/text/recognizer/email_middleware.py | rvinothrajendran/botbuilder-community-python | b94ebd808ed3ed8217e32cf2a13fa5c681418622 | [
"MIT"
] | 8 | 2019-07-31T06:19:29.000Z | 2021-05-13T20:28:20.000Z | from recognizers_sequence import SequenceRecognizer
from recognizers_text import Culture,ModelResult
from botbuilder.core import Middleware,TurnContext
from botbuilder.schema import Activity,ActivityTypes
from typing import Callable,Awaitable
class EmailRecognizerMiddleware(Middleware):
def __init__(self,
default_locale = None):
if default_locale is None:
default_locale = Culture.English
self._default_locale = default_locale
async def on_turn(self,
context:TurnContext,
next:Callable[[TurnContext],Awaitable]):
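        # For every incoming message, run the email sequence recognizer over
        # the activity text and stash any matches in turn_state under
        # "emailentities" before handing control to the next middleware.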
if context.activity.type == ActivityTypes.message:
            email_recognizer = SequenceRecognizer(self._default_locale)
            email_model = email_recognizer.get_email_model()
model_result = email_model.parse(context.activity.text)
if len(model_result) > 0:
email_entities = []
for email in model_result:
value = email.resolution["value"]
email_entities.append(value)
context.turn_state.setdefault("emailentities",email_entities)
        return await next()
| 40.62069 | 77 | 0.679117 | 119 | 1,178 | 6.487395 | 0.462185 | 0.101036 | 0.066062 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00114 | 0.255518 | 1,178 | 29 | 78 | 40.62069 | 0.879133 | 0 | 0 | 0 | 0 | 0 | 0.015267 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04 | false | 0 | 0.2 | 0 | 0.32 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
493b6d6ee60f73ec00fa9dfe87d277a452087b7b | 2,013 | py | Python | bacpypes/comm/pdu.py | cbergmiller/bacpypes | 7b1f2e989787c2c1f807680fee5ee7a71b3689ab | [
"MIT"
] | 1 | 2018-01-11T13:10:15.000Z | 2018-01-11T13:10:15.000Z | bacpypes/comm/pdu.py | cbergmiller/bacpypes | 7b1f2e989787c2c1f807680fee5ee7a71b3689ab | [
"MIT"
] | null | null | null | bacpypes/comm/pdu.py | cbergmiller/bacpypes | 7b1f2e989787c2c1f807680fee5ee7a71b3689ab | [
"MIT"
] | null | null | null |
import logging
from ..debugging import btox
from .pci import PCI
from .pdu_data import PDUData
DEBUG = False
_logger = logging.getLogger(__name__)
__all__ = ['PDU']
class PDU(PCI, PDUData):
"""
A Protocol Data Unit (PDU) is the name for a collection of information that
    is passed between two entities. It is composed of Protocol Control
    Information (PCI) - information about addressing and processing
    instructions - and data. The set of classes in this module is not
    specific to BACnet.
"""
def __init__(self, data=None, **kwargs):
if DEBUG: _logger.debug('__init__ %r %r', data, kwargs)
# pick up some optional kwargs
user_data = kwargs.get('user_data', None)
source = kwargs.get('source', None)
destination = kwargs.get('destination', None)
# carry source and destination from another PDU
# so this can act like a copy constructor
if isinstance(data, PDU):
# allow parameters to override values
user_data = user_data or data.pduUserData
source = source or data.pduSource
destination = destination or data.pduDestination
# now continue on
PCI.__init__(self, user_data=user_data, source=source, destination=destination)
PDUData.__init__(self, data)
def __str__(self):
return f'<{self.__class__.__name__} {self.pduSource} -> {self.pduDestination} : {btox(self.pduData, ".")}>'
def dict_contents(self, use_dict=None, as_class=dict):
"""Return the contents of an object as a dict."""
if DEBUG: _logger.debug('dict_contents use_dict=%r as_class=%r', use_dict, as_class)
# make/extend the dictionary of content
if use_dict is None:
use_dict = as_class()
# call into the two base classes
self.pci_contents(use_dict=use_dict, as_class=as_class)
self.pdudata_contents(use_dict=use_dict, as_class=as_class)
# return what we built/updated
return use_dict
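# Illustrative usage (not part of the original module; the address values are
# placeholders for whatever bacpypes address objects the application uses):
#   pdu = PDU(b'\x01\x02\x03', source=src_addr, destination=dst_addr)
#   clone = PDU(pdu)  # copy-constructor path: carries the PCI fields and data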
| 40.26 | 115 | 0.669647 | 270 | 2,013 | 4.740741 | 0.385185 | 0.054688 | 0.028125 | 0.04375 | 0.05625 | 0.05625 | 0.05625 | 0.05625 | 0.05625 | 0 | 0 | 0 | 0.242424 | 2,013 | 49 | 116 | 41.081633 | 0.839344 | 0.298063 | 0 | 0 | 0 | 0.035714 | 0.129009 | 0.034257 | 0 | 0 | 0 | 0 | 0 | 1 | 0.107143 | false | 0 | 0.142857 | 0.035714 | 0.357143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
493e2e5b41e08d9878a7192db4f6149489ec029d | 8,729 | py | Python | src/ascent/python/ascent_jupyter_bridge/console_client.py | srini009/ascent | 70558059dc3fe514206781af6e48715d8934c37c | [
"BSD-3-Clause"
] | null | null | null | src/ascent/python/ascent_jupyter_bridge/console_client.py | srini009/ascent | 70558059dc3fe514206781af6e48715d8934c37c | [
"BSD-3-Clause"
] | null | null | null | src/ascent/python/ascent_jupyter_bridge/console_client.py | srini009/ascent | 70558059dc3fe514206781af6e48715d8934c37c | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python
###############################################################################
# Copyright (c) Lawrence Livermore National Security, LLC and other Ascent
# Project developers. See top-level LICENSE AND COPYRIGHT files for dates and
# other details. No copyright assignment is required to contribute to Ascent.
###############################################################################
# Copyright (c) Lawrence Livermore National Security, LLC and other
# Bridge Kernel Project Developers. See the top-level LICENSE file for details.
#
# SPDX-License-Identifier: BSD-3-Clause
from __future__ import print_function, unicode_literals
import enum
import os
import readline
from bridge_kernel.client import SocketClient, get_backend_list, load_config
# Python 2/3 compatibility
try:
input = raw_input
except NameError:
pass
info_str = """
console_client commands start with '%' (percent); any command starting with a percent sign is
parsed as a console_client command. Other commands are sent as raw execute requests to the
backend, with the exception of the (whitespace-stripped) 'quit' and 'quit()', which ask that you
quit using the percent command instead. Nothing stops you from stopping the backend with some
expression other than 'quit' or 'quit()', so use a bit of caution.
"""
class Status(enum.Enum):
quit = 0
normal = 1
class ConsoleClientCompletor():
def __init__(self, tcr):
self.tcr = tcr
def complete(self, code, state):
response = None
if state == 0:
self.matches = self.tcr.complete(code)
try:
response = self.matches[state]
except IndexError:
response = None
return response
class ConsoleClient():
def __init__(self, config, klass=SocketClient, readline_delims="{}[]()=+-/*^~;,# \t\n", prompt='> '):
self.klass = klass
self.client = None
self.is_debug = False
self.prompt = prompt
self.config = config
self.completor = ConsoleClientCompletor(self)
readline.parse_and_bind('tab: complete')
readline.set_completer_delims(readline_delims)
readline.set_completer(self.completor.complete)
self._setup_builtins()
self._connect_client()
def __del__(self):
self._disconnect_client()
def _setup_builtins(self):
self._builtins = {
'%help': {
'callback': self._help,
},
'%quit': {
'callback': self._quit,
'info': "disconnect from the remote instance and quit console_client"
},
'%quit_inst': {
'callback': self._shutdown_backend,
'info': "quit the backend"
},
'%disconnect': {
'callback': self._disconnect_client,
'info': "disconnect from the current backend and select a new one"
},
'%debug': {
'callback': self._toggle_client_debug,
'info': "toggles the client's debug prints",
},
'%flush': {
'callback': self._flush_client,
'info': "flush the clients queued messages"
},
'%complete': {
'callback': lambda args: print(self.complete(args)),
'info': "print the raw message from a complete request"
},
'%inspect': {
'callback': self._inspect,
'info': "print the raw message from an inspect request"
}
}
def _process_builtins(self, code):
split = code.split(' ')
cmd = split[0]
if len(split) > 1:
args = ' '.join(split[1:])
else:
args = ''
if cmd == '%help':
self._help()
elif cmd in self._builtins.keys():
return self._builtins[cmd]['callback'](args) or Status.normal
else:
print("error: no builtin %{}".format(cmd))
def _add_builtin(self, name, callback, info):
if name[0] != '%':
name = '%' + name
self._builtins[name] = {
'callback': callback,
'info': info
}
def _help(self, *args):
print(info_str)
print("builtins:")
for k, v in self._builtins.items():
if k == '%help':
continue
if 'info' in v:
info = v['info']
else:
info = ""
print(" {:<15} {}".format(k, info))
def _connect_client(self):
self._disconnect_client()
self.client = self.klass.from_config(self.config)
if not self.client.is_connected:
print("couldn't connect")
return
self.client.set_debug(self.is_debug)
def _disconnect_client(self, *args):
if self.client is not None and self.client.is_connected:
self.client.disconnect()
def _quit(self, *args):
if self.client is not None and self.client.is_connected:
self.client.disconnect()
return Status.quit
def _shutdown_backend(self, *args):
if self.client is not None and self.client.is_connected:
if (input('this will quit the code, are you sure? (Y/y)> ').lower() == 'y'):
self.client.execute("quit()")
else:
print("no client connected")
def _toggle_client_debug(self, *args):
if self.client is None:
return
if self.is_debug:
print("turning off debug prints")
self.is_debug = False
else:
print("turning on debug prints")
self.is_debug = True
self.client.set_debug(self.is_debug)
def _flush_client(self, *args):
if self.client is not None:
self.client.flush()
def _inspect(self, text):
if self.client is not None:
print(self.client.inspect(text))
def complete(self, code):
if len(code) > 0 and code[0] == '%':
return [x for x in self._builtins.keys() if x[0:len(code)] == code]
if self.client is not None and self.client.is_connected:
data = self.client.complete(code, len(code))
if data is not None:
matches = data['matches']
cursor_start = data['cursor_start']
if cursor_start > 0:
matches = ["{}{}".format(code[0:cursor_start], m) for m in matches]
return matches
return []
def repl(self):
print("%help for console_client help")
stat = Status.normal
while self.client is not None and self.client.is_connected:
try:
code = input(self.prompt)
if code.strip() == "quit" or code.strip() == "quit()":
print("please use %quit or %quit_inst")
elif len(code) > 0 and code.strip()[0] == "%":
stat = self._process_builtins(code.strip())
else:
self.client.execute(code)
except KeyboardInterrupt:
print()
return stat
def select_backend():
try:
while True:
backends = get_backend_list()
if backends is None or len(backends) == 0:
print("no backends available")
return None
keys = sorted(list(backends.keys()))
for i, k in enumerate(keys):
print("%d) %s" % (i, k))
req = input("> ")
if req == "%quit":
return None
try:
index = int(req)
if index >= 0 and index < len(keys):
return backends[keys[index]]
except ValueError:
pass
except KeyboardInterrupt:
print()
return None
cls = ConsoleClient
if __name__ == "__main__":
from argparse import ArgumentParser, ArgumentTypeError
def existing_file(path):
path = os.path.abspath(path)
if not os.path.exists(path):
raise ArgumentTypeError("file doesn't exist: %s" % path)
return path
parser = ArgumentParser()
parser.add_argument("-p", "--path", type=existing_file, help="path to config file")
args = parser.parse_args()
if args.path is not None:
config = load_config(args.path)
if config is not None:
cls(config).repl()
else:
while True:
config = select_backend()
if config is None:
break
if cls(config).repl() == Status.quit:
break
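# Example invocations (illustrative): either pass a config file explicitly,
#   python console_client.py --path /path/to/backend_config
# or run with no arguments and pick a backend from the interactive list.
# Inside a session, '%help' lists the builtin commands.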
| 30.953901 | 105 | 0.542445 | 971 | 8,729 | 4.749743 | 0.239959 | 0.054206 | 0.036427 | 0.021249 | 0.145056 | 0.129011 | 0.108413 | 0.108413 | 0.094536 | 0.062229 | 0 | 0.003617 | 0.334861 | 8,729 | 281 | 106 | 31.064057 | 0.790734 | 0.051781 | 0 | 0.197248 | 0 | 0.004587 | 0.162515 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.087156 | false | 0.009174 | 0.027523 | 0 | 0.201835 | 0.091743 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
493f1074976e99b902f269699cccea4c176a2b66 | 2,676 | py | Python | rubiks-cube-ai-master/env/gym_Rubiks_Cube/envs/rubiks_cube_env.py | SirSnuffles/Rubiks-Cube | 1d4b275602fb90d8f768483bc71a8f4fdf0116a4 | [
"Apache-2.0"
] | null | null | null | rubiks-cube-ai-master/env/gym_Rubiks_Cube/envs/rubiks_cube_env.py | SirSnuffles/Rubiks-Cube | 1d4b275602fb90d8f768483bc71a8f4fdf0116a4 | [
"Apache-2.0"
] | null | null | null | rubiks-cube-ai-master/env/gym_Rubiks_Cube/envs/rubiks_cube_env.py | SirSnuffles/Rubiks-Cube | 1d4b275602fb90d8f768483bc71a8f4fdf0116a4 | [
"Apache-2.0"
] | null | null | null | import gym
from gym import spaces
from gym.utils import seeding  # used by _seed()
import numpy as np
import random
from gym_Rubiks_Cube.envs import cube
actionList = [
'f', 'r', 'l', 'u', 'd', 'b',
'.f', '.r', '.l', '.u', '.d', '.b']
tileDict = {
'R': 0,
'O': 1,
'Y': 2,
'G': 3,
'B': 4,
'W': 5,
}
class RubiksCubeEnv(gym.Env):
metadata = {'render.modes': ['human']}
def __init__(self, orderNum=3):
        # the action space is 6 faces x 2 directions = 12 moves
self.action_space = spaces.Discrete(12)
        # the observation is a flattened orderNum x orderNum x 6 array (9x6 = 54 for a 3x3 cube)
self.orderNum = orderNum
low = np.array([0 for i in range(self.orderNum * self.orderNum * 6)])
high = np.array([5 for i in range(self.orderNum * self.orderNum * 6)])
self.observation_space = spaces.Box(low, high, dtype=np.uint8) # flattened
self.step_count = 0
self.scramble_low = 1
self.scramble_high = 10
self.doScamble = True
def _seed(self, seed=None):
self.np_random, seed = seeding.np_random(seed)
return [seed]
def step(self, action):
self.action_log.append(action)
self.ncube.minimalInterpreter(actionList[action])
self.state = self.getstate()
self.step_count = self.step_count + 1
reward = 0.0
done = False
others = {}
if self.ncube.isSolved():
reward = 1.0
done = True
if self.step_count > 40:
done = True
return self.state, reward, done, others
def reset(self):
self.state = {}
self.ncube = cube.Cube(order=self.orderNum)
if self.doScamble:
self.scramble()
self.state = self.getstate()
self.step_count = 0
self.action_log = []
return self.state
def getstate(self):
return np.array([tileDict[i] for i in self.ncube.constructVectorState()])
def render(self, mode='human', close=False):
if close:
return
self.ncube.displayCube(isColor=True)
def setScramble(self, low, high, doScamble=True):
self.scramble_low = low
self.scramble_high = high
self.doScamble = doScamble
def scramble(self):
        # set the number of scramble moves
scramble_num = random.randint(self.scramble_low, self.scramble_high)
# check if scramble
while self.ncube.isSolved():
self.scramble_log = []
for i in range(scramble_num):
action = random.randint(0, 11)
self.scramble_log.append(action)
self.ncube.minimalInterpreter(actionList[action])
def getlog(self):
return self.scramble_log, self.action_log
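# Illustrative rollout (not part of the original file): drive the environment
# with random actions until the episode ends.
#   env = RubiksCubeEnv(orderNum=3)
#   env.setScramble(1, 5)
#   state = env.reset()
#   done = False
#   while not done:
#       state, reward, done, _ = env.step(env.action_space.sample())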
| 27.587629 | 83 | 0.577354 | 337 | 2,676 | 4.495549 | 0.311573 | 0.079208 | 0.042904 | 0.021782 | 0.192079 | 0.176898 | 0.168977 | 0.124092 | 0.047525 | 0 | 0 | 0.019829 | 0.302691 | 2,676 | 96 | 84 | 27.875 | 0.792069 | 0.042975 | 0 | 0.108108 | 0 | 0 | 0.018004 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.121622 | false | 0 | 0.067568 | 0.027027 | 0.297297 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
494391db64a996785ae5cf983cbf1863176ed04a | 1,849 | py | Python | util.py | arka356/TR-K | 0063c89732cc693e221c4ff1b9be20127b8cd692 | [
"MIT"
] | 62 | 2017-03-20T04:19:52.000Z | 2022-03-11T14:26:57.000Z | util.py | arka356/TR-K | 0063c89732cc693e221c4ff1b9be20127b8cd692 | [
"MIT"
] | 7 | 2016-11-17T02:49:01.000Z | 2021-01-15T00:24:28.000Z | util.py | arka356/TR-K | 0063c89732cc693e221c4ff1b9be20127b8cd692 | [
"MIT"
] | 50 | 2017-06-16T22:45:35.000Z | 2022-02-20T17:14:44.000Z | # -*-coding: utf-8 -*-
import inspect
import datetime
import os
import os.path
def save_log(contents, subject="None", folder=""):
current_dir = os.getcwd()
filePath = current_dir + os.sep + folder + os.sep + cur_month() + ".txt"
    # 'a' appends to an existing file and creates it when missing,
    # so a single open mode covers both cases
    openMode = 'a'
line = '[{0:<8}][{1:<10}] {2}\n'.format(cur_date_time(), subject, contents)
with open(filePath, openMode, encoding='utf8') as f:
f.write(line)
def whoami():
return '* ' + cur_time_msec() + ' ' + inspect.stack()[1][3] + ' '
def whosdaddy():
return '*' + cur_time_msec() + ' ' + inspect.stack()[2][3] + ' '
def cur_date_time(time_string = '%y-%m-%d %H:%M:%S'):
cur_time = datetime.datetime.now().strftime(time_string)
return cur_time
def cur_time_msec(time_string ='%H:%M:%S.%f'):
cur_time = datetime.datetime.now().strftime(time_string)
return cur_time
def cur_date(time_string = '%y-%m-%d'):
cur_time = datetime.datetime.now().strftime(time_string)
return cur_time
def cur_month(time_string ='%y-%m'):
cur_time = datetime.datetime.now().strftime(time_string)
return cur_time
def cur_time(time_string ='%H:%M:%S' ):
cur_time = datetime.datetime.now().strftime(time_string)
return cur_time
# business day calculate
def date_by_adding_business_days(from_date, add_days):
business_days_to_add = add_days
current_date = from_date
while business_days_to_add > 0:
current_date += datetime.timedelta(days=1)
weekday = current_date.weekday()
        if weekday >= 5:  # 5 = Saturday, 6 = Sunday
continue
business_days_to_add -= 1
return current_date
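# Example (illustrative): two business days after Friday 2023-03-03 skips the
# weekend and lands on Tuesday 2023-03-07:
#   date_by_adding_business_days(datetime.date(2023, 3, 3), 2)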
if __name__ == "__main__":
print(cur_time())
print(cur_date())
print(cur_date_time() )
    save_log("한글", "한글", "log")
| 28.890625 | 79 | 0.637642 | 261 | 1,849 | 4.245211 | 0.306513 | 0.094765 | 0.08213 | 0.103791 | 0.388087 | 0.343863 | 0.291516 | 0.291516 | 0.291516 | 0.291516 | 0 | 0.011573 | 0.205517 | 1,849 | 64 | 80 | 28.890625 | 0.742682 | 0.029205 | 0 | 0.2 | 0 | 0 | 0.060268 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.18 | false | 0.02 | 0.08 | 0.04 | 0.42 | 0.06 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
49444baa2aa54210d7c50ea35afb06f2e5b37a8c | 15,912 | py | Python | local_epifx/src/epifx/cmd/json.py | ruarai/epifx.covid | be7aecbf9e86c3402f6851ea65f6705cdb59f3cf | [
"BSD-3-Clause"
] | null | null | null | local_epifx/src/epifx/cmd/json.py | ruarai/epifx.covid | be7aecbf9e86c3402f6851ea65f6705cdb59f3cf | [
"BSD-3-Clause"
] | null | null | null | local_epifx/src/epifx/cmd/json.py | ruarai/epifx.covid | be7aecbf9e86c3402f6851ea65f6705cdb59f3cf | [
"BSD-3-Clause"
] | null | null | null | """Convert epidemic forecasts into JSON files for online viewing."""
import datetime
import errno
import h5py
import json
import logging
import numpy as np
import os.path
import pypfilt.summary
from . import settings
fs_fmt = "%Y-%m-%d %H:%M:%S"
d_fmt = "%Y-%m-%d"
def dtime(dstr):
"""Convert datetime strings to datetime instances."""
if isinstance(dstr, bytes):
dstr = dstr.decode()
return datetime.datetime.strptime(dstr, fs_fmt)
def date_str(dstr):
"""Convert datetime strings to date strings."""
if isinstance(dstr, bytes):
dstr = dstr.decode()
dt = datetime.datetime.strptime(dstr, fs_fmt).date()
return dt.strftime(d_fmt)
def update_obs_model(f, hdf5_file, om_dict):
"""
Record the observation model parameters, and check that they're consistent
across all of the input files.
:param f: The file object from which to read the simulation output.
:param hdf5_file: The corresponding file name for ``f``.
:param om_dict: A dictionary of observation model parameter names/values.
:raises ValueError: if more than one observation model is found. Note that
if the parameter **values** differ across the input files, a warning
message will be printed but the files will still be processed.
"""
logger = logging.getLogger(__name__)
# Note: in Python 3, h5py group methods such as keys(), values(),
# and items() return view-like objects that cannot be sliced or
# indexed like lists, but which support iteration.
obs_models = list(f['meta']['param']['obs'].values())
n_obs_models = len(obs_models)
if n_obs_models != 1:
raise ValueError("Found {} observation models".format(n_obs_models))
om = obs_models[0]
if om_dict:
# An observation model has already been recorded, check that the
# observation model in this file is consistent with it.
for key in om.keys():
if key not in om_dict:
# A new observation model parameter has appeared.
logger.warning("New parameter {} in {}".format(
key, os.path.basename(hdf5_file)))
continue
ok = om_dict[key] == om[key][()].item()
            if not ok:
                logger.warning("Param {} differs".format(key))
else:
# Record the observation model parameters.
for key in om.keys():
om_dict[key] = om[key][()].item()
def most_recent_obs_date(f):
"""
Return the time of the most recent observation (a ``datetime`` instance).
:param f: The file object from which to read the simulation output.
:raises ValueError: if more than one observation model is found.
"""
obs_units = list(f['data']['obs'].keys())
if len(obs_units) != 1:
raise ValueError("Found {} observation models".format(
len(obs_units)))
obs = f['data']['obs'][obs_units[0]][()]
return max(dtime(row['date']) for row in obs)
def update_forecast_cis(f, hdf5_file, fs_dict, cis, most_recent):
"""
Record the forecast credible intervals and return the forecasting dates
for which other simulation outputs should be calculated.
:param f: The file object from which to read the simulation output.
:param hdf5_file: The corresponding file name for ``f``.
:param fs_dict: A dictionary of forecast credible intervals.
:param cis: The credible intervals to record.
:param most_recent: Whether to use only the most recent forecast.
"""
logger = logging.getLogger(__name__)
# Extract the forecast credible intervals.
fs = f['data']['forecasts'][()]
conds = tuple(fs['prob'] == p for p in cis)
keep = np.logical_or.reduce(conds)
fs = fs[keep]
# Note that forecast dates are date-time strings (%Y-%m-%d %H:%M:%S).
fs_dates = np.unique(fs['fs_date'])
# Check that this table contains the desired credible intervals.
ci_levels = np.unique(fs['prob'])
if len(ci_levels) < len(cis):
msg = "expected CIs: {}; only found: {}"
expect = ", ".join(str(p) for p in sorted(cis))
found = ", ".join(str(p) for p in sorted(ci_levels))
logger.warning(msg.format(expect, found))
# Ignore the estimation run, if present.
sim_end = max(dtime(dstr) for dstr in fs['date']).strftime(fs_fmt)
if len(fs_dates) == 1 and fs_dates[0] == sim_end:
# If the file only contains the result of an estimation run,
# inform the user and keep these results --- they can result
# from directly using pypfilt.run() to produce forecasts.
last_obs = most_recent_obs_date(f)
logger.warning('Estimation run, set fs_date = {} for {}'.format(
last_obs.strftime(fs_fmt), os.path.basename(hdf5_file)))
# Replace fs_date with the date of the most recent observation.
last_obs = last_obs.strftime(fs_fmt)
fs_dates = [last_obs]
fs['fs_date'] = last_obs
# Discard all rows prior to the (effective) forecasting date.
dates = np.array([dtime(row['date']) for row in fs])
fs = fs[dates >= last_obs]
# Note: these files may contain duplicate rows.
# So identify the first duplicate row (if any) and crop.
for (n, rix) in enumerate(np.where(fs['date'] == last_obs)[0]):
# If the nth row for the date on which the forecast begins isn't
# the nth row of the entire table, it represents the start of the
# duplicate data, so discard all subsequent rows.
if n != rix:
fs = fs[:rix]
break
else:
fs_dates = [d for d in fs_dates if d != sim_end]
if most_recent:
# Only retain the more recent forecast.
fs_dates = [max(dtime(dstr) for dstr in fs_dates)]
fs_dates = [d.strftime(fs_fmt) for d in fs_dates]
# Store the forecast credible intervals.
ci_levels = np.unique(fs['prob'])
for fs_date in fs_dates:
mask = fs['fs_date'] == fs_date
if not isinstance(mask, np.ndarray) or mask.shape[0] != fs.shape[0]:
raise ValueError('Invalid fs_date comparison; {} == {}'.format(
type(fs['fs_date'][0]), type(fs_date)))
fs_rows = fs[mask]
ci_dict = {}
for ci in ci_levels:
ci_rows = fs_rows[fs_rows['prob'] == ci]
ci_dict[str(ci)] = [
{"date": date_str(date),
"ymin": ymin,
"ymax": ymax}
for (_, _, _, date, _, ymin, ymax) in ci_rows]
fs_dict[date_str(fs_date)] = ci_dict
# Return the forecast date(s) that should be considered for this file.
return fs_dates
def update_peak_timing(f, hdf5_file, pkt_dict, cis, fs_dates):
"""
Record the peak timing credible intervals.
:param f: The file object from which to read the simulation output.
:param hdf5_file: The corresponding file name for ``f``.
:param pkt_dict: A dictionary of peak timing credible intervals.
:param cis: The credible intervals to record.
:param fs_dates: The forecasting dates for which the observations should
be recorded.
    :raises ValueError: if the forecasting-date comparison is invalid. If
        fewer credible intervals are found than requested, a warning message
        is printed but the file is still processed.
"""
logger = logging.getLogger(__name__)
# Extract the peak timing credible intervals.
try:
pk = f['data']['peak_cints'][()]
except KeyError:
        # If this table is not present, warn and record nothing for this file.
logger.warning("No 'peak_cints' table: {}".format(
os.path.basename(hdf5_file)))
return
conds = tuple(pk['prob'] == p for p in cis)
keep = np.logical_or.reduce(conds)
pk = pk[keep]
ci_levels = np.unique(pk['prob'])
if len(ci_levels) < len(cis):
msg = "expected CIs: {}; only found: {}"
expect = ", ".join(str(p) for p in sorted(cis))
found = ", ".join(str(p) for p in sorted(ci_levels))
logger.warning(msg.format(expect, found))
for fs_date in fs_dates:
mask = pk['fs_date'] == fs_date
if not isinstance(mask, np.ndarray) or mask.shape[0] != pk.shape[0]:
raise ValueError('Invalid fs_date comparison; {} == {}'.format(
type(pk['fs_date'][0]), type(fs_date)))
pk_rows = pk[mask]
ci_dict = {}
for ci in ci_levels:
ci_rows = pk_rows[pk_rows['prob'] == ci]
ci_dict[str(ci)] = [
{"date": date_str(fs_date),
"ymin": date_str(tmin),
"ymax": date_str(tmax)}
for (_, _, _, _, _smin, _smax, tmin, tmax) in ci_rows]
pkt_dict[date_str(fs_date)] = ci_dict
def update_obs(f, hdf5_file, obs_dict, fs_dates):
"""
Record the observations provided at each of the forecasting dates.
:param f: The file object from which to read the simulation output.
:param hdf5_file: The corresponding file name for ``f``.
:param obs_dict: A dictionary of observations.
:param fs_dates: The forecasting dates for which the observations should
be recorded.
    :raises ValueError: if more than one observation model is found.
"""
obs_units = list(f['data']['obs'].keys())
n_obs_units = len(obs_units)
if n_obs_units != 1:
raise ValueError("Found {} observation models".format(n_obs_units))
obs = f['data']['obs'][obs_units[0]][()]
cols = obs.dtype.names
bs_cols = [c for c in cols if obs.dtype[c].kind == 'S']
for fs_date in fs_dates:
obs_list = [
{c: obs_row[c].item() for c in cols}
for obs_row in obs]
for o in obs_list:
# Convert byte string to Unicode strings.
for c in bs_cols:
if isinstance(o[c], bytes):
o[c] = o[c].decode()
# Ensure the date is stored as 'YYYY-MM-DD'.
o['date'] = date_str(o['date'])
obs_dict[date_str(fs_date)] = obs_list
def convert(files, most_recent, locn_id, out_file, replace, pretty, cis=None):
"""
Convert a set of epidemic forecasts into a JSON file for online viewing.
:param files: A list of forecast files (HDF5).
:param most_recent: Whether to use only the most recent forecast in each
file.
:param locn_id: The forecasting location identifier.
:param out_file: The output file name.
:param replace: Whether to replace (overwrite) an existing JSON file,
rather than updating it with the provided forecasts.
:param pretty: Whether the JSON output should be pretty-printed.
:param cis: The credible intervals to record (default: ``[0, 50, 95]``).
"""
logger = logging.getLogger(__name__)
locn_settings = settings.local(locn_id)
if cis is None:
cis = [0, 50, 95]
# If we're updating an existing file, try to load the current contents.
json_data = None
if (not replace) and os.path.isfile(out_file):
# The output file already exists and we're not replacing it.
try:
with open(out_file, encoding='utf-8') as f:
json_data = json.load(f)
except json.JSONDecodeError:
logger.warning("Could not read file '{}'".format(out_file))
# If we're generating a new file, or the current file could not be loaded,
# start with empty content.
if json_data is None:
json_data = {
'obs': {},
'forecasts': {},
'timing': {},
'obs_model': {},
'location': locn_id,
'location_name': locn_settings['name'],
'obs_axis_lbl': locn_settings['obs_axis_lbl'],
'obs_axis_prec': locn_settings['obs_axis_prec'],
'obs_datum_lbl': locn_settings['obs_datum_lbl'],
'obs_datum_prec': locn_settings['obs_datum_prec'],
}
# Note: files may be in any order, sorting yields deterministic output.
for hdf5_file in sorted(files):
with h5py.File(hdf5_file, 'r') as f:
update_obs_model(f, hdf5_file, json_data['obs_model'])
fs_dates = update_forecast_cis(f, hdf5_file,
json_data['forecasts'],
cis, most_recent)
update_peak_timing(f, hdf5_file, json_data['timing'],
cis, fs_dates)
update_obs(f, hdf5_file, json_data['obs'], fs_dates)
if pretty:
indent = 2
separators = (', ', ': ')
else:
indent = None
separators = (',', ':')
# Create the output directory (and missing parents) as needed.
# The directory will be empty ('') if out_file has no path component.
out_dir = os.path.dirname(out_file)
if out_dir and not os.path.isdir(out_dir):
# Create with mode -rwxr-x---.
try:
logger.info('Creating {}'.format(out_dir))
os.makedirs(out_dir, mode=0o750)
except OSError as e:
# Potential race condition with multiple script instances.
if e.errno != errno.EEXIST:
logger.warning('Could not create {}'.format(out_dir))
raise
logger.debug("Writing {}".format(out_file))
with open(out_file, encoding='utf-8', mode='w') as f:
json.dump(json_data, f, ensure_ascii=False,
sort_keys=True, indent=indent, separators=separators)
def parser():
"""Return the command-line argument parser for ``epifx-json``."""
p = settings.common_parser(locns=False)
ip = p.add_argument_group('Input arguments')
ip.add_argument(
'-i', '--intervals', action='store', metavar='CIs',
help='Credible intervals (default: 0,50,95)')
ip.add_argument(
'-m', '--most-recent', action='store_true',
help='Use only the most recent forecast in each file')
op = p.add_argument_group('Output arguments')
op.add_argument(
'-o', '--output', action='store', type=str, default='output.json',
help='The name of the JSON output file')
op.add_argument(
'-p', '--pretty', action='store_true',
help='Pretty-print the JSON output')
op.add_argument(
'-r', '--replace', action='store_true',
help='Replace the output file (default: update if it exists)')
rp = p.add_argument_group('Required arguments')
rp.add_argument(
'-l', '--location', action='store', type=str, default=None,
help='The location to which the forecasts pertain')
rp.add_argument(
'files', metavar='HDF5_FILE', type=str, nargs='*',
help='Forecast data file(s)')
return p
def main(args=None):
"""The entry point for ``epifx-json``."""
p = parser()
if args is None:
args = vars(p.parse_args())
else:
args = vars(p.parse_args(args))
if args['location'] is None:
p.print_help()
return 2
if not args['files']:
p.print_help()
return 2
if args['intervals'] is not None:
vals = args['intervals'].split(",")
for ix, val in enumerate(vals):
try:
vals[ix] = int(val)
except ValueError:
p.error("Invalid credible interval '{}'".format(val))
args['intervals'] = vals
logging.basicConfig(level=args['loglevel'])
convert(files=args['files'], most_recent=args['most_recent'],
locn_id=args['location'], out_file=args['output'],
replace=args['replace'], pretty=args['pretty'],
cis=args['intervals'])
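# Example invocation (illustrative; LOCATION is a placeholder identifier):
# keep only the most recent forecast in each file and write pretty-printed
# JSON for one location:
#   epifx-json --most-recent --pretty -l LOCATION -o out.json fs1.hdf5 fs2.hdf5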
| 39.386139 | 78 | 0.611425 | 2,198 | 15,912 | 4.292539 | 0.176524 | 0.014626 | 0.007631 | 0.004452 | 0.412613 | 0.343296 | 0.293588 | 0.257128 | 0.249285 | 0.222046 | 0 | 0.005355 | 0.27231 | 15,912 | 403 | 79 | 39.483871 | 0.809483 | 0.323341 | 0 | 0.239044 | 0 | 0 | 0.132919 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.039841 | false | 0.003984 | 0.035857 | 0 | 0.10757 | 0.011952 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
49453094eeccb0098b22342dae39d3919ddc6ead | 616 | py | Python | tests/test_RC4.py | prudywsh/GS15-crypto | d7bbe3fcce9131bf1b1d222843a1a2acf2f7c824 | [
"MIT"
] | null | null | null | tests/test_RC4.py | prudywsh/GS15-crypto | d7bbe3fcce9131bf1b1d222843a1a2acf2f7c824 | [
"MIT"
] | null | null | null | tests/test_RC4.py | prudywsh/GS15-crypto | d7bbe3fcce9131bf1b1d222843a1a2acf2f7c824 | [
"MIT"
] | 1 | 2017-12-29T10:45:57.000Z | 2017-12-29T10:45:57.000Z | import unittest
from src.RC4 import RC4
class TestRC4(unittest.TestCase):
def test_rc4_1(self):
text = "wiki"
key = "pedia"
rc4 = RC4(key)
self.assertEqual(rc4.cipher(text), "ÙÌîÃ")
def test_rc4_2(self):
text = "Plaintext"
key = "Key"
rc4 = RC4(key)
self.assertEqual(rc4.cipher(text), "»ó\x16èÙ@¯\nÓ")
def test_rc4_decrypt(self):
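        # RC4 is symmetric, so enciphering the ciphertext with the same key
        # must restore the original plaintext.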
text = "Hello world 45 ! azeaze è&"
key = "ALittleKey:3"
rc4 = RC4(key)
self.assertEqual(rc4.cipher(rc4.cipher(text)), text)
if __name__ == '__main__':
unittest.main()
| 22.814815 | 60 | 0.577922 | 81 | 616 | 4.246914 | 0.45679 | 0.104651 | 0.087209 | 0.113372 | 0.311047 | 0.311047 | 0.311047 | 0.215116 | 0 | 0 | 0 | 0.049887 | 0.284091 | 616 | 26 | 61 | 23.692308 | 0.725624 | 0 | 0 | 0.15 | 0 | 0 | 0.136364 | 0 | 0 | 0 | 0 | 0 | 0.15 | 1 | 0.15 | false | 0 | 0.1 | 0 | 0.3 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4945d673db93bf4fd5bea0294f067991a9141258 | 935 | py | Python | data_structures/queue/number_of_islands.py | shoniavika/python-ds | b2f2e912ea2cdea36c4f960edcf7149ed7f508a6 | [
"MIT"
] | null | null | null | data_structures/queue/number_of_islands.py | shoniavika/python-ds | b2f2e912ea2cdea36c4f960edcf7149ed7f508a6 | [
"MIT"
] | null | null | null | data_structures/queue/number_of_islands.py | shoniavika/python-ds | b2f2e912ea2cdea36c4f960edcf7149ed7f508a6 | [
"MIT"
] | null | null | null | # Input: binary matrix, 0 means water, 1 means land
# Output: the number of islands
import sys
from queue import Queue
from typing import List
def numIslands(grid: List[List[str]]) -> int:
if not grid:
return 0
vlen = len(grid)
hlen = len(grid[0])
que = Queue()
islandsCnt = 0
for v in range(vlen):
for h in range(hlen):
            if grid[v][h] == '1':
                que.put((v, h))
                grid[v][h] = "0"  # mark the start cell visited so BFS cannot re-enqueue it
                islandsCnt += 1
                while not que.empty():
curV, curH = que.get()
for x, y in (
[curV, curH - 1], [curV, curH + 1],
[curV - 1, curH], [curV + 1, curH]):
if (vlen > x >= 0 and hlen > y >= 0 and
grid[x][y] == "1"):
que.put((x, y))
grid[x][y] = "0"
return islandsCnt
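# Example (illustrative):
#   numIslands([["1", "1", "0"],
#               ["0", "1", "0"],
#               ["0", "0", "1"]])  # -> 2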
| 27.5 | 64 | 0.418182 | 115 | 935 | 3.4 | 0.4 | 0.02046 | 0.035806 | 0.066496 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.029703 | 0.459893 | 935 | 33 | 65 | 28.333333 | 0.744554 | 0.084492 | 0 | 0 | 0 | 0 | 0.003517 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04 | false | 0 | 0.12 | 0 | 0.24 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
494733b58826f7513b749f3a6f7d414ba807d64f | 14,829 | py | Python | msdnet/gpuoperations.py | dani-lbnl/msdnet | 20f503322524ceb340379448f1778a58bb1f9a18 | [
"MIT"
] | 24 | 2019-08-24T06:42:51.000Z | 2021-10-09T14:27:51.000Z | msdnet/gpuoperations.py | dani-lbnl/msdnet | 20f503322524ceb340379448f1778a58bb1f9a18 | [
"MIT"
] | 12 | 2019-07-31T06:56:19.000Z | 2020-12-05T18:08:54.000Z | msdnet/gpuoperations.py | dani-lbnl/msdnet | 20f503322524ceb340379448f1778a58bb1f9a18 | [
"MIT"
] | 11 | 2019-09-17T02:39:24.000Z | 2022-03-30T21:28:35.000Z | #-----------------------------------------------------------------------
#Copyright 2019 Centrum Wiskunde & Informatica, Amsterdam
#
#Author: Daniel M. Pelt
#Contact: D.M.Pelt@cwi.nl
#Website: http://dmpelt.github.io/msdnet/
#License: MIT
#
#This file is part of MSDNet, a Python implementation of the
#Mixed-Scale Dense Convolutional Neural Network.
#-----------------------------------------------------------------------
"""Module implementing network operations on GPU using Numba."""
import numpy as np
from numba import cuda, float32, int32
import math
def get1dgridsize(sz, tpb = 1024):
"""Return CUDA grid size for 1d arrays.
:param sz: input array size
:param tpb: (optional) threads per block
"""
return (sz + (tpb - 1)) // tpb, tpb
def get2dgridsize(sz, tpb = (8, 8)):
"""Return CUDA grid size for 2d arrays.
:param sz: input array size
:param tpb: (optional) threads per block
"""
bpg0 = (sz[0] + (tpb[0] - 1)) // tpb[0]
bpg1 = (sz[1] + (tpb[1] - 1)) // tpb[1]
return (bpg0, bpg1), tpb
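# Example (illustrative): a 512x512 image with the default 8x8 block yields
#   get2dgridsize((512, 512)) == ((64, 64), (8, 8))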
class GPUImageData(object):
"""Object that represents a set of 2D images on GPU.
:param shape: total shape of all images
:param dl: list of dilations in the network
:param nin: number of input images of network
"""
def __init__(self, shape, dl, nin):
self.arr = cuda.device_array(shape, dtype=np.float32)
self.dlg = cuda.to_device(dl.astype(np.uint8))
dlgt = np.zeros((len(dl),nin+len(dl)),dtype=np.uint8)
for i,d in enumerate(dl):
dlgt[i] = d
self.dlgt = cuda.to_device(dlgt)
dlgb = np.zeros((len(dl),len(dl)),dtype=np.uint8)
for i in range(len(dl)):
dlgb[i,:len(dl)-i-1] = dl[i+1:]
self.dlgb = cuda.to_device(dlgb)
self.set_block_size((shape[-2],shape[-1]))
self.shape = shape
self.nin = nin
def set_block_size(self, imshape):
"""Set CUDA grid sizes to be used."""
self.bpg1d, self.tpb1d = get1dgridsize(imshape[0]*imshape[1])
self.bpg2d, self.tpb2d = get2dgridsize(imshape)
def setimages(self, ims):
"""Set data to set of images.
:param ims: set of images
"""
bpg, tpb = get1dgridsize(ims.size)
imsg = cuda.to_device(ims)
setimages_cuda[bpg, tpb](imsg.ravel(), self.arr.ravel())
def setscalars(self, scl, start=0):
"""Set each image to a scalar.
:param scl: scalar values
"""
bpg, tpb = get1dgridsize(self.arr[start:].size)
sclr = cuda.to_device(scl.ravel())
set_scalar_cuda[bpg, tpb](sclr, self.arr[start:].ravel(), self.arr[0].size)
def fill(self, val, start=None, end=None):
"""Set image data to single scalar value.
:param val: scalar value
"""
bpg, tpb = get1dgridsize(self.arr[start:end].size)
fill_cuda[bpg, tpb](np.float32(val), self.arr[start:end].ravel())
def copy(self, start=None, end=None):
"""Return copy of image data."""
return self.arr[start:end].copy_to_host()
def get(self, start=None, end=None):
"""Return image data."""
return self.arr[start:end].copy_to_host()
def add(self, val, i):
"""Add scalar to single image.
:param val: scalar to add
:param i: index of image to add value to
"""
bpg, tpb = get1dgridsize(self.arr[i].size)
add_cuda[bpg, tpb](np.float32(val), self.arr[i].ravel())
def mult(self, val, i):
"""Multiply single image with value.
:param val: value
:param i: index of image to multiply
"""
bpg, tpb = get1dgridsize(self.arr[i].size)
mult_cuda[bpg, tpb](np.float32(val), self.arr[i].ravel())
def prepare_forw_conv(self, f):
"""Prepare for forward convolutions.
:param f: convolution filters
"""
self.forw_idx = np.zeros((len(f),2),dtype=np.uint32)
idx = 0
for i, fi in enumerate(f):
self.forw_idx[i] = idx, idx+fi.size
idx += fi.size
ff = np.zeros(idx,dtype=np.float32)
for i, fi in enumerate(f):
l, r = self.forw_idx[i]
ff[l:r] = fi.ravel()
self.forw_fg = cuda.to_device(ff)
def forw_conv(self, i, outidx, dl):
"""Perform forward convolutions
:param i: image index to compute
:param outidx: image index to write output to
:param dl: dilation list
"""
l, r = self.forw_idx[i]
conv2d[self.bpg2d, self.tpb2d, 0,4*(r-l)](self.arr, 0, outidx, outidx, self.forw_fg, l, r, self.dlgt, i)
def prepare_back_conv(self, f):
"""Prepare for backward convolutions.
:param f: convolution filters
"""
self.back_idx = {}
idx = 0
for key, val in f.items():
self.back_idx[key] = idx, idx + val.size
idx += val.size
ff = np.zeros(idx,dtype=np.float32)
for key, val in f.items():
l, r = self.back_idx[key]
ff[l:r] = val.ravel()
self.back_fg = cuda.to_device(ff)
def back_conv(self, outidx, dl):
"""Perform backward convolutions
:param outidx: image index to write output to
:param dl: dilation list
"""
l, r = self.back_idx[outidx]
conv2d[self.bpg2d, self.tpb2d, 0, 4*(r-l)](self.arr, outidx+1, self.shape[0], outidx, self.back_fg, l, r, self.dlgb, outidx)
def relu(self, i):
"""Apply ReLU to single image."""
relu2d_cuda[self.bpg2d, self.tpb2d](self.arr, i)
def relu2(self, i, dat, j):
"""Apply backpropagation ReLU to single image."""
relu2_2d_cuda[self.bpg2d, self.tpb2d](dat.arr, self.arr, j, i)
def combine_all_all(self, dat, w):
"""Compute linear combinations of images."""
wg = cuda.to_device(w)
comb_all_all_cuda[self.bpg2d, self.tpb2d](dat.arr, self.arr, wg)
def prepare_gradient(self):
"""Prepare for gradient computation."""
inlist = []
dellist = []
for i in range(self.arr.shape[0]):
inlist.extend(range(self.nin+i))
dellist.extend([i]*(self.nin+i))
self.inlist = cuda.to_device(np.array(inlist).astype(np.uint32))
self.dellist = cuda.to_device(np.array(dellist).astype(np.uint32))
self.nf = len(inlist)
self.gr = cuda.to_device(np.zeros(self.nf*9,dtype=np.float32))
def filtergradientfull(self, ims):
"""Compute gradients for filters."""
bpg, tpb = get1dgridsize(9*self.nf)
filtergradientfull[bpg,tpb](ims.arr, self.arr, self.dlg, self.gr, self.inlist, self.dellist)
q = self.gr.copy_to_host()
return q
def weightgradientall(self, delta):
"""Compute gradients for weights."""
tmp = cuda.device_array(24*self.shape[0]*delta.shape[0])
fastmult[24,1024](delta.arr,self.arr,tmp)
return tmp.copy_to_host().reshape((delta.shape[0],self.arr.shape[0],24)).sum(2)
def sumall(self):
"""Compute image sums."""
tmp = cuda.device_array(24*self.shape[0])
fastsumall[24,1024](self.arr,tmp)
return tmp.copy_to_host().reshape((self.arr.shape[0],24)).sum(1)
def softmax(self):
"""Compute softmax."""
softmax[self.bpg2d, self.tpb2d](self.arr)
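# Illustrative construction (not part of the original module; assumes a CUDA
# device is available and uses example sizes): a network with dilations 1..5,
# two input channels and 64x64 images.
#   dl = np.arange(1, 6)
#   gdata = GPUImageData((2 + len(dl), 64, 64), dl, nin=2)
#   gdata.fill(0.0)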
@cuda.jit(fastmath=True)
def setimages_cuda(inp, out):
i = cuda.grid(1)
if i<inp.size:
out[i] = inp[i]
@cuda.jit(fastmath=True)
def add_cuda(val, out):
i = cuda.grid(1)
if i<out.size:
out[i] += val
@cuda.jit(fastmath=True)
def fill_cuda(val, out):
i = cuda.grid(1)
if i<out.size:
out[i] = val
@cuda.jit(fastmath=True)
def set_scalar_cuda(val, out, size):
i = cuda.grid(1)
if i<out.size:
j = i//size
out[i] = val[j]
@cuda.jit(fastmath=True)
def mult_cuda(val, out):
i = cuda.grid(1)
if i<out.size:
out[i] *= val
@cuda.jit(fastmath=True)
def mult_arr_cuda(in1, in2, ii, jj, out):
i, j = cuda.grid(2)
if i<out.shape[0] and j<out.shape[1]:
out[i,j] = in1[ii, i, j]*in2[jj, i, j]
@cuda.jit(fastmath=True)
def comb_cuda(inp, out, w):
i = cuda.grid(1)
if i<out.size:
out[i] += w*inp[i]
@cuda.jit(fastmath=True)
def comb_all_cuda(inp, out, w):
i, j = cuda.grid(2)
if i<out.shape[0] and j<out.shape[1]:
tmp = float32(0)
for k in range(w.shape[0]):
tmp += w[k] * inp[k,i, j]
out[i, j] += tmp
@cuda.jit(fastmath=True)
def comb_all_all_cuda(inp, out, w):
i, j = cuda.grid(2)
if i<out.shape[1] and j<out.shape[2]:
for l in range(out.shape[0]):
tmp = float32(0)
for k in range(inp.shape[0]):
tmp += w[l, k] * inp[k,i, j]
out[l, i, j] += tmp
@cuda.jit(fastmath=True)
def relu_cuda(data):
i = cuda.grid(1)
if i<data.size:
if data[i]<0:
data[i]=0
@cuda.jit(fastmath=True)
def relu2d_cuda(data, k):
i, j = cuda.grid(2)
if i<data.shape[1] and j<data.shape[2]:
if data[k,i,j]<0:
data[k,i,j]=0
@cuda.jit(fastmath=True)
def relu2_cuda(inp, out):
i = cuda.grid(1)
if i<inp.size:
if inp[i]<=0:
out[i]=0
@cuda.jit(fastmath=True)
def relu2_2d_cuda(inp, out, k, l):
i, j = cuda.grid(2)
if i<inp.shape[1] and j<inp.shape[2]:
if inp[k,i,j]<=0:
out[l,i,j]=0
@cuda.jit(fastmath=True)
def conv2d(arr, il, ir, ao, fin, fl, fr, dlin, dli):
inp = arr[il:ir]
out = arr[ao]
f = fin[fl:fr]
dl = dlin[dli]
fshared = cuda.shared.array(shape=0, dtype=float32)
tx = cuda.threadIdx.x
ty = cuda.threadIdx.y
bdx = cuda.blockDim.x
bdy = cuda.blockDim.y
tid = ty*bdx+tx
nth = bdx*bdy
for i in range(tid,f.size,nth):
fshared[i] = f[i]
cuda.syncthreads()
do=-1
xc,yc = cuda.grid(2)
if xc<out.shape[0] and yc<out.shape[1]:
tmp = float32(0)
idx = int32(0)
for j in range(inp.shape[0]):
if do!=dl[j]:
do=dl[j]
d=dl[j]
if xc>=d:
xl = xc-d
else:
xl = d-xc
if xc<out.shape[0]-d:
xr = xc+d
else:
xr = 2*out.shape[0] - (xc+d + 2)
if yc>=d:
yl = yc-d
else:
yl = d-yc
if yc<out.shape[1]-d:
yr = yc+d
else:
yr = 2*out.shape[1] - (yc+d + 2)
tmp = cuda.fma(inp[j,xl,yl],fshared[idx], tmp)
tmp = cuda.fma(inp[j,xl,yc],fshared[idx+1], tmp)
tmp = cuda.fma(inp[j,xl,yr],fshared[idx+2], tmp)
tmp = cuda.fma(inp[j,xc,yl],fshared[idx+3], tmp)
tmp = cuda.fma(inp[j,xc,yc],fshared[idx+4], tmp)
tmp = cuda.fma(inp[j,xc,yr],fshared[idx+5], tmp)
tmp = cuda.fma(inp[j,xr,yl],fshared[idx+6], tmp)
tmp = cuda.fma(inp[j,xr,yc],fshared[idx+7], tmp)
tmp = cuda.fma(inp[j,xr,yr],fshared[idx+8], tmp)
idx+=9
out[xc,yc] += tmp
@cuda.jit(fastmath=True)
def filtergradientfull(inp, delta, dl, gr, inlist, dellist):
    idx = cuda.grid(1)
    f = idx % 9
    idx2 = idx // 9
    if idx2 >= dellist.shape[0]:
        return
    j = dellist[idx2]
    i = inlist[idx2]
    fi = f // 3
    fj = f % 3
    ii = inp[i]
    jj = delta[j]
    d = dl[j]
    l = (fi - 1)*d
    u = (fj - 1)*d
    tmp = float32(0)
    for q in range(inp.shape[1]):
        xc = q + l
        if xc < 0:
            xc = -xc
        if xc >= inp.shape[1]:
            xc = 2*inp.shape[1] - (xc + 2)
        for r in range(inp.shape[2]):
            yc = r + u
            if yc < 0:
                yc = -yc
            if yc >= inp.shape[2]:
                yc = 2*inp.shape[2] - (yc + 2)
            tmp += ii[xc, yc]*jj[q, r]
    gr[idx] = tmp
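
# Note (added commentary): filtergradientfull launches one thread per
# (input/delta pair, filter tap); idx % 9 selects one tap of the 3x3 filter,
# and out-of-range coordinates are mirrored, matching the boundary handling
# in conv2d above.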
def fastmult_impl(a, b, out):
    tx = int32(cuda.threadIdx.x)
    gtx = tx + cuda.blockIdx.x * 1024
    gsize = 1024 * cuda.gridDim.x
    sz2 = a[0].size
    nc = a[0].shape[1]
    fshared = cuda.shared.array(shape=1024, dtype=float32)
    fidx = 0
    for ai in range(a.shape[0]):
        for bi in range(b.shape[0]):
            # grid-strided partial dot product of the (ai, bi) image pair
            sumv = float32(0)
            for i in range(gtx, sz2, gsize):
                sumv += a[ai, i//nc, i%nc]*b[bi, i//nc, i%nc]
            fshared[tx] = sumv
            cuda.syncthreads()
            # block-level tree reduction in shared memory
            sz = int32(512)
            while sz > 0:
                if tx < sz:
                    fshared[tx] += fshared[tx+sz]
                cuda.syncthreads()
                sz //= 2
            if tx == 0:
                out[cuda.blockIdx.x + fidx] = fshared[0]
            fidx += cuda.gridDim.x


def fastsumall_impl(a, out):
    tx = int32(cuda.threadIdx.x)
    gtx = tx + cuda.blockIdx.x * 1024
    gsize = 1024 * cuda.gridDim.x
    sz2 = a[0].size
    nc = a[0].shape[1]
    fshared = cuda.shared.array(shape=1024, dtype=float32)
    fidx = 0
    for ai in range(a.shape[0]):
        sumv = float32(0)
        for i in range(gtx, sz2, gsize):
            sumv += a[ai, i//nc, i%nc]
        fshared[tx] = sumv
        cuda.syncthreads()
        # block-level tree reduction in shared memory
        sz = int32(512)
        while sz > 0:
            if tx < sz:
                fshared[tx] += fshared[tx+sz]
            cuda.syncthreads()
            sz //= 2
        if tx == 0:
            out[cuda.blockIdx.x + fidx] = fshared[0]
        fidx += cuda.gridDim.x
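
# Reference sketch (added; a NumPy equivalent of the two kernels above, useful
# for testing against small arrays once the host-side reshape(...).sum(-1)
# over the 24 block partials is applied):
#     fastsumall(a)   ~  a.reshape(a.shape[0], -1).sum(1)
#     fastmult(a, b)  ~  np.tensordot(a, b, axes=([1, 2], [1, 2]))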
maxregisters = 64
fastsumall = cuda.jit(fastsumall_impl, fastmath=True, max_registers=maxregisters)
fastmult = cuda.jit(fastmult_impl, fastmath=True, max_registers=maxregisters)

while maxregisters > 16:
    tmp = cuda.to_device(np.zeros((1, 1, 1), dtype=np.float32))
    out = cuda.to_device(np.zeros(1024, dtype=np.float32))
    try:
        fastsumall[24, 1024](tmp, out)
        fastmult[24, 1024](tmp, tmp, out)
    except cuda.cudadrv.driver.CudaAPIError:
        maxregisters -= 16
        fastsumall = cuda.jit(fastsumall_impl, fastmath=True, max_registers=maxregisters)
        fastmult = cuda.jit(fastmult_impl, fastmath=True, max_registers=maxregisters)
        print('Lowering maximum number of CUDA registers to ', maxregisters)
        continue
    break
@cuda.jit(fastmath=True)
def softmax(inp):
    x, y = cuda.grid(2)
    if x >= inp.shape[1] or y >= inp.shape[2]:
        return
    nim = inp.shape[0]
    mx = inp[0, x, y]
    for j in range(1, nim):
        if inp[j, x, y] > mx:
            mx = inp[j, x, y]
    sm = 0
    for j in range(nim):
        inp[j, x, y] = math.exp(inp[j, x, y] - mx)
        sm += inp[j, x, y]
    for j in range(nim):
        inp[j, x, y] /= sm
| 31.023013 | 132 | 0.538337 | 2,230 | 14,829 | 3.536771 | 0.13139 | 0.022188 | 0.03043 | 0.038544 | 0.475339 | 0.421453 | 0.343984 | 0.287055 | 0.260048 | 0.228604 | 0 | 0.033626 | 0.302111 | 14,829 | 477 | 133 | 31.08805 | 0.728476 | 0.128465 | 0 | 0.325648 | 0 | 0 | 0.003608 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.118156 | false | 0 | 0.008646 | 0 | 0.15562 | 0.002882 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
49493b6029995cc0d4fb4c1a374a231bfbe90b83 | 3,668 | py | Python | main.py | natekratzer/hist_income_by_quintile | 0347085e362a91f30bff189d5916efd9b0a6e03c | [
"MIT"
] | null | null | null | main.py | natekratzer/hist_income_by_quintile | 0347085e362a91f30bff189d5916efd9b0a6e03c | [
"MIT"
] | null | null | null | main.py | natekratzer/hist_income_by_quintile | 0347085e362a91f30bff189d5916efd9b0a6e03c | [
"MIT"
] | null | null | null | ## Import libraries
import numpy as np
import pandas as pd
import altair as alt
## Read in quintile data
# I made a csv of table F-1: https://www.census.gov/data/tables/time-series/demo/income-poverty/historical-income-families.html
# All values are in 2019 dollars as provided by the Census
hist_income = pd.read_csv("hist_income47.csv")
## Cleaning
# Footnotes for the data are available here: https://www.census.gov/topics/income-poverty/income/guidance/cps-historic-footnotes.html
# Fix Years
# remove the footnotes that are linked in parentheses
# (regex=True keeps this working on pandas >= 2.0, where str.replace defaults to literal matching)
hist_income['Year'] = hist_income['Year'].str.replace(r"\(.*\)", "", regex=True)
#remove all commas
hist_income = hist_income.apply(lambda x: x.str.replace(',', '')) #using a lambda function and applying to all columns
# Dictionary to retype columns
type_dict = {
    "Year": "datetime64[ns]",
    "20": "int32",
    "40": "int32",
    "60": "int32",
    "80": "int32",
    "95": "int32"
}
# Change data types
hist_income = hist_income.astype(type_dict)
# Average of years with multiple values
# See linked footnotes above - two slightly different methods available for each year
hist_income = hist_income.groupby(['Year']).mean() #this also turns year into my index, which is convenient for later applying changes to all columns except year
hist_income = hist_income.pct_change() #percent change
hist_income = hist_income.dropna() #filter out the first year since it's NA for percent change
hist_income = hist_income.apply(lambda x: x * 100) # multiply by 100 for convenience
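# Illustrative sketch (added; the values are hypothetical): pct_change() turns
# income levels into fractional growth rates, e.g.
#     pd.Series([100.0, 110.0, 121.0]).pct_change().dropna() * 100
# yields [10.0, 10.0] -- a constant 10% annual growth.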
## Pull in presidential data
pres = pd.read_csv("pres_year.csv")
pres["Year"] = pd.to_datetime(pres["Year"], format = "%Y") # need to manually convert because going straight from int to date causes issues
# Dictionary to retype columns
type_dict_pres = {
    "Year": "datetime64[ns]",
    "President": "string",
    "Term": "string",
    "Democrat": "bool",
    "Second Term": "bool"
}
# Change data types
pres = pres.astype(type_dict_pres)
pres = pres.set_index("Year")
# Join datasets
df = pres.join(hist_income) #join is for joining on index, merge is for merging on columns
# make a lagged dataframe to give each president a 1 year lag before their economic policy kicks in.
pres_lag = pres.shift(periods = -1)
df_lag = pres_lag.join(hist_income)
# transform data from wide to long
df = pd.melt(df, id_vars = ['Democrat'], value_vars =['20', '40', '60', '80', '95'], ignore_index = False, var_name = "Percentile", value_name = "Growth")
df_lag = pd.melt(df_lag, id_vars = ['Democrat'], value_vars =['20', '40', '60', '80', '95'], ignore_index = False, var_name = "Percentile", value_name = "Growth")
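# Illustrative sketch (added; toy data): melt() stacks the quintile columns
# into rows, so a single year with Democrat=True, '20'=1.2, '40'=2.3 becomes
# two long-format rows, (True, '20', 1.2) and (True, '40', 2.3), both indexed
# by that year.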
# Plot data in altair
df['Party'] = np.where(df['Democrat']== True, 'Dem', 'Rep')
df_lag['Party'] = np.where(df_lag['Democrat']== True, 'Dem', 'Rep')
#set up for colors
domain = ['Dem', 'Rep']
range_ = ['blue', 'red']
#Unlagged chart
chart = alt.Chart(df).mark_bar().encode(
    alt.X('Party', title=""),
    alt.Y('mean(Growth)', title="Percent Growth"),
    color=alt.Color('Party', scale=alt.Scale(domain=domain, range=range_)),
    column='Percentile'
).properties(
    title='Avg Annual Income Growth, 1947-2019'
)
chart.save("pres.html")
#Lagged chart
chart_lag = alt.Chart(df_lag).mark_bar().encode(
    alt.X('Party', title=""),
    alt.Y('mean(Growth)', title="Percent Growth"),
    color=alt.Color('Party', scale=alt.Scale(domain=domain, range=range_)),
    column='Percentile'
).properties(
    title='Avg Annual Income Growth, 1947-2019, 1 year lag'
)
chart_lag.save("pres_lag.html")
# save data as images https://altair-viz.github.io/user_guide/saving_charts.html
| 35.269231 | 162 | 0.694384 | 549 | 3,668 | 4.533698 | 0.391621 | 0.068301 | 0.033748 | 0.048212 | 0.294094 | 0.274809 | 0.229811 | 0.229811 | 0.203295 | 0.203295 | 0 | 0.024619 | 0.158397 | 3,668 | 103 | 163 | 35.61165 | 0.781665 | 0.374864 | 0 | 0.210526 | 0 | 0 | 0.221976 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.052632 | 0 | 0.052632 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
494a340bd73162535cb79812ea6b3f58eb79f3e5 | 794 | py | Python | ch08_dash_standard_components/table_with_csv_fixed.py | Ethan0621/plotly-dash-dev | abe478824db1ee511a2d92f88e5dad49f5d6e27e | [
"MIT"
] | 21 | 2020-10-02T08:17:33.000Z | 2022-03-22T06:10:17.000Z | ch08_dash_standard_components/table_with_csv_fixed.py | Ethan0621/plotly-dash-dev | abe478824db1ee511a2d92f88e5dad49f5d6e27e | [
"MIT"
] | 4 | 2019-07-18T04:43:31.000Z | 2021-10-31T10:30:25.000Z | ch08_dash_standard_components/table_with_csv_fixed.py | Ethan0621/plotly-dash-dev | abe478824db1ee511a2d92f88e5dad49f5d6e27e | [
"MIT"
] | 12 | 2019-07-23T05:36:57.000Z | 2021-07-11T08:57:47.000Z | import dash
import dash_html_components as html
import dash_table
import pandas as pd
df = pd.read_csv("data/kitakyushu_hinanjo.csv", encoding="shift-jis")
app = dash.Dash(__name__)
app.layout = html.Div(
    [
        dash_table.DataTable(
            style_cell={
                "textAlign": "center",
                "maxWidth": "80px",
                "minWidth": "80px",
                "whiteSpace": "normal",
            },
            fixed_rows={"headers": True},  # ➊ freeze the header row while scrolling vertically
            # fixed_columns={"headers": True, "data": 3},  # ➋ freeze the first 3 columns while scrolling horizontally
            style_table={"minWidth": "100%"},  # ➌ workaround for table sizing
            columns=[{"name": col, "id": col} for col in df.columns],
            data=df.to_dict("records"),
        )
    ]
)
app.run_server(debug=True)
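# Note (added, not in the original source): running this script starts Dash's
# development server, which by default serves at http://127.0.0.1:8050/
# (assuming data/kitakyushu_hinanjo.csv exists next to the script).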
| 27.37931 | 79 | 0.551637 | 87 | 794 | 4.850575 | 0.62069 | 0.07109 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021818 | 0.307305 | 794 | 28 | 80 | 28.357143 | 0.745455 | 0.117128 | 0 | 0 | 0 | 0 | 0.176724 | 0.038793 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.173913 | 0 | 0.173913 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
494a4c70038203d34285fe6ecfc5226a07e862db | 2,240 | py | Python | build/lib/forest/mark.py | cemac/forest | f55b5997ac44fb0bf69bb3d026294c2750118b02 | [
"BSD-3-Clause"
] | 1 | 2020-07-14T14:37:21.000Z | 2020-07-14T14:37:21.000Z | build/lib/forest/mark.py | cemac/forest | f55b5997ac44fb0bf69bb3d026294c2750118b02 | [
"BSD-3-Clause"
] | 65 | 2020-03-17T16:26:15.000Z | 2021-03-26T18:46:51.000Z | build/lib/forest/mark.py | cemac/forest | f55b5997ac44fb0bf69bb3d026294c2750118b02 | [
"BSD-3-Clause"
] | null | null | null | """Decorators to mark classes and functions"""
import inspect
from unittest.mock import Mock
from contextlib import contextmanager
from functools import wraps
from forest.observe import Observable
import pandas as pd
import numpy as np
def component(cls):
"""Enforce one-way data-flow"""
if issubclass(cls, Observable) and hasattr(cls, "render"):
cls.render = disable_notify(cls.render)
return cls
def disable_notify(render):
"""Disable self.notify during self.render"""
@wraps(render)
def wrapper(self, *args, **kwargs):
with disable(self, "notify"):
return_value = render(self, *args, **kwargs)
return return_value
return wrapper
@contextmanager
def disable(obj, method_name):
"""Temporarily disable a method inside a code block"""
method = getattr(obj, method_name)
setattr(obj, method_name, Mock())
yield
setattr(obj, method_name, method)
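
# Usage sketch (added commentary; `view` is a hypothetical Observable):
#
#     with disable(view, "notify"):
#         view.update()  # notify() is replaced by a Mock here, so no observers fire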
def sql_sanitize_time(*labels):
"""Decorator to protect SQL statements from unsupported datetime types
>>> @sql_sanitize_time("b", "c")
... def method(self, a, b, c=None, d=False):
... # b and c will be converted to a str compatible with SQL queries
... pass
"""
def outer(f):
parameters = inspect.signature(f).parameters
# Get positional index
index = {}
for i, name in enumerate(parameters):
if name in labels:
index[name] = i
def inner(*args, **kwargs):
args = list(args)
for label in labels:
if label in kwargs:
kwargs[label] = sanitize_time(kwargs[label])
else:
i = index[label]
if i < len(args):
args[i] = sanitize_time(args[i])
return f(*args, **kwargs)
return inner
return outer
def sanitize_time(value):
"""Query-compatible equivalent of value"""
fmt = "%Y-%m-%d %H:%M:%S"
if value is None:
return value
elif isinstance(value, str):
return value
elif isinstance(value, np.datetime64):
return pd.to_datetime(str(value)).strftime(fmt)
else:
return value.strftime(fmt)
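
# Example (added; easy to verify by hand):
#     sanitize_time(np.datetime64('2021-01-02T03:04:05')) == '2021-01-02 03:04:05'
# while str and None inputs pass through unchanged.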
| 28 | 76 | 0.60625 | 276 | 2,240 | 4.862319 | 0.387681 | 0.040984 | 0.038748 | 0.029806 | 0.044709 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001252 | 0.287054 | 2,240 | 79 | 77 | 28.35443 | 0.839073 | 0.199107 | 0 | 0.075472 | 0 | 0 | 0.016657 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.150943 | false | 0 | 0.132075 | 0 | 0.471698 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
494aa5f4983623e3404649d63065b913d3c55c87 | 10,525 | py | Python | real_cnn_model/models/enhanced_classifier.py | 82magnolia/ev_tta | c99ea780491c4a53502afd2cc68c40fbb8183471 | [
"CC-BY-4.0"
] | 1 | 2022-03-24T09:51:23.000Z | 2022-03-24T09:51:23.000Z | real_cnn_model/models/enhanced_classifier.py | 82magnolia/ev_tta | c99ea780491c4a53502afd2cc68c40fbb8183471 | [
"CC-BY-4.0"
] | null | null | null | real_cnn_model/models/enhanced_classifier.py | 82magnolia/ev_tta | c99ea780491c4a53502afd2cc68c40fbb8183471 | [
"CC-BY-4.0"
] | null | null | null | import torch
import torch.nn as nn
import torch.nn.functional as F
try:
    import torchsort
except ModuleNotFoundError:
    # TODO: Install torchsort in other servers
    pass
"""
URIE excerpted from https://github.com/taeyoungson/urie/
"""
class Selector(nn.Module):
    def __init__(self, channel, reduction=16, crp_classify=False):
        super(Selector, self).__init__()
        self.spatial_attention = 4
        self.in_channel = channel * (self.spatial_attention ** 2)
        self.avg_pool = nn.AdaptiveAvgPool2d((self.spatial_attention, self.spatial_attention))
        self.fc = nn.Sequential(
            nn.Linear(self.in_channel, self.in_channel // reduction, bias=False),
            nn.ReLU(inplace=True),
        )
        self.att_conv1 = nn.Linear(self.in_channel // reduction, self.in_channel)
        self.att_conv2 = nn.Linear(self.in_channel // reduction, self.in_channel)

    def forward(self, x):
        b, c, H, W = x.size()
        y = self.avg_pool(x).view(b, -1)
        y = self.fc(y)

        att1 = self.att_conv1(y).view(b, c, self.spatial_attention, self.spatial_attention)
        att2 = self.att_conv2(y).view(b, c, self.spatial_attention, self.spatial_attention)

        attention = torch.stack((att1, att2))
        attention = nn.Softmax(dim=0)(attention)

        att1 = F.interpolate(attention[0], scale_factor=(H / self.spatial_attention, W / self.spatial_attention), mode="nearest")
        att2 = F.interpolate(attention[1], scale_factor=(H / self.spatial_attention, W / self.spatial_attention), mode="nearest")
        return att1, att2


class SelectiveConv(nn.Module):
    def __init__(self, kernel_size, padding, bias, reduction, in_channels, out_channels, first=False):
        super(SelectiveConv, self).__init__()
        self.first = first
        self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=kernel_size, padding=padding, bias=bias)
        self.conv2 = nn.Conv2d(in_channels, out_channels, kernel_size=kernel_size, padding=padding, bias=bias)
        self.selector = Selector(out_channels, reduction=reduction)
        self.IN = nn.InstanceNorm2d(in_channels)
        self.BN = nn.BatchNorm2d(in_channels)
        self.relu = nn.LeakyReLU(inplace=True)

    def forward(self, x):
        if self.first:
            f_input = x
            s_input = x
        else:
            f_input = self.BN(x)
            f_input = self.relu(f_input)
            s_input = self.IN(x)
            s_input = self.relu(s_input)
        out1 = self.conv1(f_input)
        out2 = self.conv2(s_input)
        out = out1 + out2
        att1, att2 = self.selector(out)
        out = torch.mul(out1, att1) + torch.mul(out2, att2)
        return out


class SKDown(nn.Module):
    def __init__(self, kernel_size, padding, bias, reduction, in_channels, out_channels, first=False):
        super(SKDown, self).__init__()
        self.maxpool_conv = nn.Sequential(
            nn.MaxPool2d(2),
            SelectiveConv(kernel_size, padding, bias, reduction, in_channels, out_channels, first=first)
        )

    def forward(self, x):
        return self.maxpool_conv(x)


class SKUp(nn.Module):
    def __init__(self, kernel_size, padding, bias, reduction, in_channels, out_channels, bilinear=True):
        super().__init__()
        # if bilinear, use the normal convolutions to reduce the number of channels
        if bilinear:
            self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
        else:
            self.up = nn.ConvTranspose2d(in_channels // 2, in_channels // 2, kernel_size=2, stride=2)
        self.conv = SelectiveConv(kernel_size, padding, bias, reduction, in_channels, out_channels)

    def forward(self, x1, x2):
        x1 = self.up(x1)
        diffY = torch.tensor([x2.size()[2] - x1.size()[2]])
        diffX = torch.tensor([x2.size()[3] - x1.size()[3]])

        x1 = F.pad(x1, [diffX // 2, diffX - diffX // 2,
                        diffY // 2, diffY - diffY // 2])
        x = torch.cat([x2, x1], dim=1)
        return self.conv(x)


class OutConv(nn.Module):
    # unused stub retained from the original source
    def __init__(self, in_channels, out_channels):
        pass

    def forward(self, x):
        pass


class SKUNet(nn.Module):
    def __init__(self, num_channels, in_kernel_size, mid_kernel_size, bilinear=True):
        super(SKUNet, self).__init__()
        self.bilinear = bilinear

        self.down1 = nn.Conv2d(kernel_size=in_kernel_size, padding=in_kernel_size // 2, in_channels=num_channels, out_channels=32)
        self.down2 = SKDown(mid_kernel_size, mid_kernel_size // 2, False, 16, 32, 64)
        self.down3 = SKDown(mid_kernel_size, mid_kernel_size // 2, False, 16, 64, 64)
        self.up1 = SKUp(mid_kernel_size, mid_kernel_size // 2, False, 16, 128, 32, bilinear)
        self.up2 = SKUp(mid_kernel_size, mid_kernel_size // 2, False, 16, 64, 16, bilinear)
        self.up3 = nn.Conv2d(kernel_size=mid_kernel_size, padding=mid_kernel_size // 2, in_channels=16, out_channels=num_channels)

    def forward(self, x):
        x_origin = x
        x1 = self.down1(x)
        x2 = self.down2(x1)
        x3 = self.down3(x2)
        x = self.up1(x3, x2)
        x = self.up2(x, x1)
        x = self.up3(x)
        return torch.add(x, x_origin)


# DiffDiST and helper classes
class LinearExp(nn.Module):
    def __init__(self, n_in, n_out):
        super(LinearExp, self).__init__()
        self.weight = nn.Parameter(torch.zeros(n_in, n_out))
        self.bias = nn.Parameter(torch.zeros(n_out))

    def forward(self, x):
        # exponentiating the weight constrains the linear map to positive entries
        A = torch.exp(self.weight)
        return x @ A + self.bias


class ChannelModulation(nn.Module):
    def __init__(self, n_in):
        super(ChannelModulation, self).__init__()
        self.weight = nn.Parameter(torch.ones(n_in), requires_grad=True)
        self.bias = nn.Parameter(torch.zeros(n_in), requires_grad=True)

    def forward(self, x):
        return (x.permute(0, 2, 3, 1) * self.weight + self.bias).permute(0, 3, 1, 2)


class TensorMixer(nn.Module):
    def __init__(self, n_in, init_alpha, init_beta):
        super(TensorMixer, self).__init__()
        self.alpha = nn.Parameter(torch.zeros(n_in).fill_(init_alpha), requires_grad=True)
        self.beta = nn.Parameter(torch.zeros(n_in).fill_(init_beta), requires_grad=True)
        self.bias = nn.Parameter(torch.zeros(n_in), requires_grad=True)

    def forward(self, x1, x2):
        # x1, x2 are assumed to have shape (B x C x H x W)
        return (x1.permute(0, 2, 3, 1) * self.alpha + x2.permute(0, 2, 3, 1) * self.beta + self.bias).permute(0, 3, 1, 2)


class TransConvBlock(nn.Module):
    def __init__(self, n_in, n_hid):
        super(TransConvBlock, self).__init__()
        self.conv2d = nn.Conv2d(n_in, n_hid, kernel_size=3, padding=1)
        self.group_norm = nn.GroupNorm(n_hid, n_hid)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.conv2d(x)
        x = self.group_norm(x)
        x = self.relu(x)
        return x


class InputTranformNet(nn.Module):
    def __init__(self, n_in, n_hid=18, n_layers=6):
        super(InputTranformNet, self).__init__()
        self.channel_mod = ChannelModulation(n_in)
        self.tau = nn.Parameter(torch.ones(1), requires_grad=True)

        # Make layers
        layers = []
        for idx in range(n_layers):
            if idx == 0:
                layers.append(TransConvBlock(n_in, n_hid))
            elif idx == n_layers - 1:
                layers.append(TransConvBlock(n_hid, n_in))
            else:
                layers.append(TransConvBlock(n_hid, n_hid))
        self.res_transform = nn.Sequential(*layers)

    def forward(self, x):
        res_x = self.res_transform(x)
        x = self.tau * x + (1 - self.tau) * res_x
        x = self.channel_mod(x)
        return x


class DiffDiST(nn.Module):
    def __init__(self, init_alpha=1.0, init_beta=0.0, init_gamma=1.0, num_groups=4):
        # Formula: D = gamma * (argsort(S)) + (1 - gamma) * (monotone(S)), where S = alpha * T + beta * 1 / C
        super(DiffDiST, self).__init__()
        self.num_groups = num_groups
        self.alpha = nn.Parameter(torch.tensor(init_alpha), requires_grad=True)
        self.beta = nn.Parameter(torch.tensor(init_beta), requires_grad=True)
        self.gamma = nn.Parameter(torch.tensor(init_gamma), requires_grad=True)

    def forward(self, x):
        # debug print retained from the original source
        print(self.alpha.data.item(), self.beta.data.item(), self.gamma.data.item())
        # x is assumed to have shape (B x 4 x H x W)
        inv_count = x[:, [0, 2], ...]  # (B x 2 x H x W)
        time_out = x[:, [1, 3], ...]  # (B x 2 x H x W)
        result = time_out + self.beta * inv_count
        return result


class EnhancedClassifier(nn.Module):
    def __init__(self, classifier: nn.Module, enhancer: nn.Module, return_input=False):
        super(EnhancedClassifier, self).__init__()
        self.classifier = classifier
        self.enhancer = enhancer
        self.return_input = return_input

    def forward(self, x):
        if self.return_input:
            if self.enhancer is None:
                return self.classifier(x), x
            else:
                x = self.enhancer(x)
                return self.classifier(x), x
        else:
            if self.enhancer is None:
                return self.classifier(x)
            else:
                return self.classifier(self.enhancer(x))


class ProjectionClassifier(nn.Module):
    def __init__(self, classifier: nn.Module, projector: nn.Module, return_mode: str):
        super(ProjectionClassifier, self).__init__()
        self.feature_extractor = nn.Sequential(*list(classifier.children())[:-1])
        self.projector = projector
        self.final_classifier = classifier.fc
        self.return_mode = return_mode

    def forward(self, x):
        x = self.feature_extractor(x)
        x = torch.flatten(x, 1)
        proj = x + self.projector(x)
        pred = self.final_classifier(proj)
        if self.return_mode == 'both':
            return pred, proj
        elif self.return_mode == 'pred':
            return pred
        elif self.return_mode == 'proj':
            return proj


class Projector(nn.Module):
    def __init__(self, dim, hid_dim):
        super(Projector, self).__init__()
        self.projector = nn.Sequential(nn.Linear(dim, hid_dim, bias=False),
                                       nn.BatchNorm1d(hid_dim),
                                       nn.ReLU(inplace=True),  # hidden layer
                                       nn.Linear(hid_dim, dim))  # output layer

    def forward(self, x):
        return self.projector(x)
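
# Usage sketch (added; assumes a torchvision ResNet, whose final layer is `.fc`
# with 512 input features for resnet18 -- adjust `dim` for other backbones):
#
#     backbone = torchvision.models.resnet18(num_classes=10)
#     model = ProjectionClassifier(backbone, Projector(dim=512, hid_dim=128),
#                                  return_mode='pred')
#     logits = model(torch.randn(4, 3, 224, 224))  # -> shape (4, 10)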
| 35.677966 | 130 | 0.622328 | 1,446 | 10,525 | 4.313278 | 0.149378 | 0.035915 | 0.026455 | 0.036075 | 0.400192 | 0.348244 | 0.281546 | 0.251403 | 0.2211 | 0.185827 | 0 | 0.023164 | 0.253492 | 10,525 | 294 | 131 | 35.79932 | 0.77065 | 0.038385 | 0 | 0.167442 | 0 | 0 | 0.003385 | 0 | 0 | 0 | 0 | 0.003401 | 0 | 1 | 0.139535 | false | 0.013953 | 0.018605 | 0.018605 | 0.316279 | 0.004651 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
494beadf169f54297678eacf98cc94536e55e0ae | 5,857 | py | Python | platform/safariextz/build.py | FastZyx/Viewhance | 84624a5e93040fe1f7b89d4bb2d9386d12f172f9 | [
"BSD-3-Clause"
] | 95 | 2015-05-07T23:16:06.000Z | 2022-03-16T09:16:57.000Z | platform/safariextz/build.py | FastZyx/Viewhance | 84624a5e93040fe1f7b89d4bb2d9386d12f172f9 | [
"BSD-3-Clause"
] | 78 | 2015-05-29T13:22:28.000Z | 2022-03-26T14:31:20.000Z | platform/safariextz/build.py | FastZyx/Viewhance | 84624a5e93040fe1f7b89d4bb2d9386d12f172f9 | [
"BSD-3-Clause"
] | 24 | 2015-06-11T00:32:19.000Z | 2021-08-15T05:47:56.000Z | import sys
import os
import json
import subprocess
from io import open
from time import time
from shutil import rmtree, copy, which
from collections import OrderedDict
from urllib.request import urlretrieve
from .. import base
class Platform(base.PlatformBase):
    l10n_dir = 'locales'
    requires_all_strings = True
    disabled = True

    def __init__(self, build_dir, *args):
        super().__init__(build_dir, *args)
        self.build_dir = os.path.join(
            build_dir,
            self.config['name'] + '.safariextension'
        )
        self.update_file = 'Update.plist'

    def __del__(self):
        for param in ['description', 'build_number', 'update_file']:
            if param in self.config:
                del self.config[param]

    def write_manifest(self):
        info_plist_path = os.path.join(self.build_dir, 'Info.plist')
        with open(info_plist_path, 'wt', encoding='utf-8', newline='\n') as f:
            def_lang = self.languages[self.config['def_lang']]
            self.config['description'] = def_lang[self.desc_string]
            self.config['build_number'] = int(time())
            self.config['update_file'] = self.update_file
            with open(os.path.join('meta', 'Info.plist'), 'r') as info_plist:
                f.write(info_plist.read().format(**self.config))

    def write_update_file(self):
        if not self.config['update_url']:
            return
        update_file = os.path.join(self.build_dir, '..', self.update_file)
        with open(update_file, 'wt', encoding='utf-8', newline='\n') as f:
            with open(os.path.join('meta', self.update_file), 'r') as tmpl:
                f.write(tmpl.read().format(**self.config))

    def write_locales(self, lng_strings):
        locale_files = {
            'options': 'strings.js'
        }
        for alpha2 in lng_strings:
            locale_dir = os.path.join(self.build_dir, self.l10n_dir, alpha2)
            try:
                os.makedirs(locale_dir)
            except:
                pass
            if not os.path.exists(locale_dir):
                sys.stderr.write(
                    'Failed to create locale directory:\n' + locale_dir + '\n'
                )
                continue
            lang = lng_strings[alpha2]
            for grp in locale_files:
                if grp not in lang:
                    continue
                locale = open(
                    os.path.join(locale_dir, locale_files[grp]),
                    'wt', encoding='utf-8', newline='\n'
                )
                if self.params['-min']:
                    json_args = {'separators': (',', ':')}
                else:
                    json_args = {'separators': (',', ': '), 'indent': '\t'}
                with locale as f:
                    f.write('vAPI.l10nData = ')
                    f.write(
                        json.dumps(
                            lang[grp],
                            **json_args,
                            ensure_ascii=False
                        )
                    )

    def write_files(self):
        copy(self.pjif('meta', 'Settings.plist'), self.build_dir)

    def write_package(self):
        key = self.pjif('secret', 'key.pem')
        certs = self.pjif('secret', 'certs')
        tmp_dir = self.build_dir + '.tmp'
        package = self.package_name + '.' + self.ext
        if not os.path.isfile(key):
            sys.stderr.write(key + ' is missing\n')
            return
        if not os.path.isfile(self.pjif(certs, 'safari_extension.cer')):
            sys.stderr.write(
                self.pjif(certs, 'safari_extension.cer') + ' is missing\n'
            )
            return
        try:
            os.remove(package)
        except:
            pass
        try:
            rmtree(tmp_dir)
        except:
            pass
        try:
            os.makedirs(tmp_dir)
        except:
            pass
        try:
            os.makedirs(certs)
        except:
            pass
        if not os.path.isfile(self.pjif(certs, 'AppleWWDRCA.cer')):
            print('Downloading AppleWWDRCA.cer...')
            urlretrieve(
                'https://developer.apple.com/certificationauthority/AppleWWDRCA.cer',
                self.pjif(certs, 'AppleWWDRCA.cer')
            )
        if not os.path.isfile(self.pjif(certs, 'AppleIncRootCertificate.cer')):
            print('Downloading AppleIncRootCertificate.cer...')
            urlretrieve(
                'https://www.apple.com/appleca/AppleIncRootCertificate.cer',
                self.pjif(certs, 'AppleIncRootCertificate.cer')
            )
        if which('xar') is None:
            sys.stderr.write('xar command is not available\n')
            try:
                rmtree(tmp_dir)
            except:
                pass
            return False
        subprocess.call([
            'xar', '-czf', package,
            '--compression-args=9',
            '--distribution',
            '--directory', os.path.dirname(self.build_dir),
            os.path.basename(self.build_dir)
        ])
        # sign the key file with itself once, purely to learn the signature size
        sig_len = len(subprocess.Popen(
            ['openssl', 'dgst', '-binary', '-sign', key, key],
            stdout=subprocess.PIPE
        ).stdout.read())
        # `pj` was undefined in the original file; os.path.join is the evident intent
        digest_dat = os.path.join(tmp_dir, self.ext + '_digest.dat')
        sig_dat = os.path.join(tmp_dir, self.ext + '_sig.dat')
        subprocess.call([
            'xar', '--sign', '-f', package,
            '--digestinfo-to-sign', digest_dat,
            '--sig-size', str(sig_len),
            '--cert-loc', self.pjif(certs, 'safari_extension.cer'),
            '--cert-loc', self.pjif(certs, 'AppleWWDRCA.cer'),
            '--cert-loc', self.pjif(certs, 'AppleIncRootCertificate.cer')
        ])
        subprocess.call([
            'openssl', 'rsautl', '-sign', '-inkey', key,
            '-in', digest_dat, '-out', sig_dat
        ])
        subprocess.call(['xar', '--inject-sig', sig_dat, '-f', package])
        try:
            rmtree(tmp_dir)
        except:
            pass
| 32.359116 | 85 | 0.528769 | 645 | 5,857 | 4.662016 | 0.255814 | 0.027935 | 0.035916 | 0.018291 | 0.307283 | 0.213834 | 0.083139 | 0.046558 | 0 | 0 | 0 | 0.003347 | 0.336862 | 5,857 | 180 | 86 | 32.538889 | 0.770855 | 0 | 0 | 0.174825 | 0 | 0 | 0.16937 | 0.018952 | 0 | 0 | 0 | 0 | 0 | 1 | 0.048951 | false | 0.048951 | 0.06993 | 0 | 0.174825 | 0.013986 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
494dfb6c99040eeb9a807abe39c6c407694d56cf | 6,562 | py | Python | tensorflow/contrib/linear_optimizer/python/ops/sharded_mutable_dense_hashtable.py | connectthefuture/tensorflow | 93812423fcd5878aa2c1d0b68dc0496980c8519d | [
"Apache-2.0"
] | 23 | 2017-02-09T14:26:32.000Z | 2020-11-09T05:54:36.000Z | tensorflow/contrib/linear_optimizer/python/ops/sharded_mutable_dense_hashtable.py | connectthefuture/tensorflow | 93812423fcd5878aa2c1d0b68dc0496980c8519d | [
"Apache-2.0"
] | 2 | 2017-03-29T10:34:08.000Z | 2017-11-21T02:08:40.000Z | tensorflow/contrib/linear_optimizer/python/ops/sharded_mutable_dense_hashtable.py | connectthefuture/tensorflow | 93812423fcd5878aa2c1d0b68dc0496980c8519d | [
"Apache-2.0"
] | 10 | 2017-03-28T06:16:10.000Z | 2020-08-25T09:03:44.000Z | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Sharded mutable dense hash table."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from six.moves import range
from tensorflow.contrib.lookup import lookup_ops
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import ops
from tensorflow.python.framework import tensor_shape
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import control_flow_ops
from tensorflow.python.ops import data_flow_ops
from tensorflow.python.ops import math_ops
class ShardedMutableDenseHashTable(lookup_ops.LookupInterface):
"""A sharded version of MutableDenseHashTable.
It is designed to be interface compatible with LookupInterface and
MutableDenseHashTable, with the exception of the export method, which is
replaced by an export_sharded method.
The _ShardedMutableDenseHashTable keeps `num_shards` MutableDenseHashTable
internally. The shard is computed via the modulo operation on the key.
"""
# TODO(andreasst): consider moving this to lookup_ops
def __init__(self,
key_dtype,
value_dtype,
default_value,
empty_key,
num_shards=1,
name='ShardedMutableHashTable'):
with ops.name_scope(name, 'sharded_mutable_hash_table') as scope:
super(ShardedMutableDenseHashTable, self).__init__(key_dtype,
value_dtype, scope)
table_shards = []
for i in range(num_shards):
table_shards.append(
lookup_ops.MutableDenseHashTable(
key_dtype=key_dtype,
value_dtype=value_dtype,
default_value=default_value,
empty_key=empty_key,
name='%s-%d-of-%d' % (name, i + 1, num_shards)))
self._table_shards = table_shards
# TODO(andreasst): add a value_shape() method to LookupInterface
# pylint: disable=protected-access
self._value_shape = self._table_shards[0]._value_shape
# pylint: enable=protected-access
@property
def _num_shards(self):
return len(self._table_shards)
@property
def table_shards(self):
return self._table_shards
def size(self, name=None):
with ops.name_scope(name, 'sharded_mutable_hash_table_size'):
sizes = [
self._table_shards[i].size() for i in range(self._num_shards)
]
return math_ops.add_n(sizes)
def _shard_indices(self, keys):
key_shape = keys.get_shape()
if key_shape.ndims > 1:
# If keys are a matrix (i.e. a single key is a vector), we use the first
# element of each key vector to determine the shard.
keys = array_ops.slice(keys, [0, 0], [key_shape[0].value, 1])
keys = array_ops.reshape(keys, [-1])
indices = math_ops.mod(math_ops.abs(keys), self._num_shards)
return math_ops.cast(indices, dtypes.int32)
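
  # Worked example (added commentary): with num_shards=4, keys [3, 7, 10]
  # map to shards [3 % 4, 7 % 4, 10 % 4] == [3, 3, 2]; negative keys are
  # folded in via abs() first.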
  def _check_keys(self, keys):
    if not keys.get_shape().is_fully_defined():
      raise ValueError('Key shape must be fully defined, got %s.' %
                       keys.get_shape())
    if keys.get_shape().ndims != 1 and keys.get_shape().ndims != 2:
      raise ValueError('Expected a vector or matrix for keys, got %s.' %
                       keys.get_shape())

  def lookup(self, keys, name=None):
    if keys.dtype != self._key_dtype:
      raise TypeError('Signature mismatch. Keys must be dtype %s, got %s.' %
                      (self._key_dtype, keys.dtype))
    self._check_keys(keys)
    num_shards = self._num_shards
    if num_shards == 1:
      return self._table_shards[0].lookup(keys, name=name)

    shard_indices = self._shard_indices(keys)
    # TODO(andreasst): support 'keys' that are not vectors
    key_shards = data_flow_ops.dynamic_partition(keys, shard_indices,
                                                 num_shards)
    value_shards = [
        self._table_shards[i].lookup(key_shards[i], name=name)
        for i in range(num_shards)
    ]

    num_keys = keys.get_shape().dims[0]
    original_indices = math_ops.range(num_keys)
    partitioned_indices = data_flow_ops.dynamic_partition(original_indices,
                                                          shard_indices,
                                                          num_shards)
    result = data_flow_ops.dynamic_stitch(partitioned_indices, value_shards)
    result.set_shape(
        tensor_shape.TensorShape([num_keys]).concatenate(self._value_shape))
    return result

  def insert(self, keys, values, name=None):
    self._check_keys(keys)
    num_shards = self._num_shards
    if num_shards == 1:
      return self._table_shards[0].insert(keys, values, name=name)

    shard_indices = self._shard_indices(keys)
    # TODO(andreasst): support 'keys' that are not vectors
    key_shards = data_flow_ops.dynamic_partition(keys, shard_indices,
                                                 num_shards)
    value_shards = data_flow_ops.dynamic_partition(values, shard_indices,
                                                   num_shards)
    return_values = [
        self._table_shards[i].insert(key_shards[i], value_shards[i], name=name)
        for i in range(num_shards)
    ]

    return control_flow_ops.group(*return_values)

  def export_sharded(self, name=None):
    """Returns lists of the keys and values tensors in the sharded table.

    Args:
      name: name of the table.

    Returns:
      A pair of lists with the first list containing the key tensors and the
      second list containing the value tensors from each shard.
    """
    keys_list = []
    values_list = []
    for table_shard in self._table_shards:
      exported_keys, exported_values = table_shard.export(name=name)
      keys_list.append(exported_keys)
      values_list.append(exported_values)
    return keys_list, values_list
| 39.059524 | 80 | 0.663517 | 849 | 6,562 | 4.888104 | 0.244994 | 0.041205 | 0.036145 | 0.027711 | 0.256386 | 0.213735 | 0.163855 | 0.146506 | 0.146506 | 0.125783 | 0 | 0.005245 | 0.24459 | 6,562 | 167 | 81 | 39.293413 | 0.831955 | 0.261963 | 0 | 0.186916 | 0 | 0 | 0.047399 | 0.016779 | 0 | 0 | 0 | 0.005988 | 0 | 1 | 0.084112 | false | 0 | 0.11215 | 0.018692 | 0.28972 | 0.009346 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
494e3288a31783b5de2d6c2b7511844930fd1ad2 | 293 | py | Python | api/parties/admin.py | django-doctor/lite-api | 1ba278ba22ebcbb977dd7c31dd3701151cd036bf | [
"MIT"
] | 3 | 2019-05-15T09:30:39.000Z | 2020-04-22T16:14:23.000Z | api/parties/admin.py | django-doctor/lite-api | 1ba278ba22ebcbb977dd7c31dd3701151cd036bf | [
"MIT"
] | 85 | 2019-04-24T10:39:35.000Z | 2022-03-21T14:52:12.000Z | api/parties/admin.py | django-doctor/lite-api | 1ba278ba22ebcbb977dd7c31dd3701151cd036bf | [
"MIT"
] | 1 | 2021-01-17T11:12:19.000Z | 2021-01-17T11:12:19.000Z | from django.contrib import admin
from api.parties import models
@admin.register(models.Party)
class PartyAdmin(admin.ModelAdmin):
    list_display = (
        "name",
        "address",
        "type",
        "sub_type",
    )
    list_filter = (
        "type",
        "sub_type",
    )
| 16.277778 | 35 | 0.566553 | 30 | 293 | 5.4 | 0.666667 | 0.08642 | 0.135802 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.313993 | 293 | 17 | 36 | 17.235294 | 0.80597 | 0 | 0 | 0.285714 | 0 | 0 | 0.119454 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.357143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
494ecc571cee8cc9b981aab1a73cb60e0b18fc74 | 1,988 | py | Python | fluke8845a.py | molyb/measuringInstrument | a5b07114f00a5b7c287d96857ec427ec1af12f4b | [
"MIT"
] | null | null | null | fluke8845a.py | molyb/measuringInstrument | a5b07114f00a5b7c287d96857ec427ec1af12f4b | [
"MIT"
] | null | null | null | fluke8845a.py | molyb/measuringInstrument | a5b07114f00a5b7c287d96857ec427ec1af12f4b | [
"MIT"
] | null | null | null | import telnetlib
class Fluke8845A:
    def __init__(self, ip, port=3490):
        self.ip = ip
        self.port = port
        self.inst = telnetlib.Telnet(ip, port)
        self.write('SYSTem:REMote')

    def reset(self):
        self.write('*RST')

    def id(self):
        return self.query("*IDN?")

    def dc_voltage(self, digit_filter=True):
        if digit_filter:
            self.write('SENSe:VOLTage:DC:FILTer:DIGItal:STATe on')
        return float(self.query('MEASure:VOLTage:DC?'))

    def ac_voltage(self):
        return float(self.query('MEASure:VOLTage:AC?'))

    def dc_current(self, digit_filter=True):
        if digit_filter:
            self.write('SENSe:CURRent:DC:FILTer:DIGItal:STATe on')
        return float(self.query('MEASure:CURRent:DC?'))

    def ac_current(self):
        return float(self.query('MEASure:CURRent:AC?'))

    def resistance(self, digit_filter=True):
        if digit_filter:
            self.write('SENSe:RESistance:FILTer:DIGItal:STATe on')
        ret = self.query('Measure:RESistance?')
        if ret is None:
            return None
        return float(ret)

    def write(self, cmd: str):
        if cmd[-1] != '\n':
            cmd += '\n'
        self.inst.write(bytearray(cmd, encoding='utf-8'))

    def read(self):
        ret = self.inst.read_until(b'\n', timeout=1.).decode('utf-8')
        if len(ret) < 1:
            return None
        # strip all trailing CR/LF; the original `ret[-1] == ('\n' or '\r')`
        # only ever compared against '\n' and could IndexError on empty strings
        while ret and ret[-1] in ('\n', '\r'):
            ret = ret[:-1]
        return ret

    def query(self, cmd: str):
        self.write(cmd)
        return self.read()
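
# Interactive usage sketch (added; assumes a meter reachable at that address):
#     dmm = Fluke8845A('192.168.1.100')
#     print(dmm.dc_voltage())  # returns the reading as a float, in volts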
def main():
    import sys
    if 1 < len(sys.argv):
        ip = sys.argv[1]
    else:
        ip = input('Please input the ip address (e.g. 192.168.1.100) > ')
    port = 3490
    if 2 < len(sys.argv):
        port = int(sys.argv[2])
    dmm = Fluke8845A(ip, port)
    print(dmm.id())
    print(dmm.dc_voltage())
    print(dmm.dc_current())
    print(dmm.resistance())


if __name__ == '__main__':
    main()
| 25.487179 | 73 | 0.565895 | 267 | 1,988 | 4.11985 | 0.277154 | 0.049091 | 0.072727 | 0.072727 | 0.296364 | 0.296364 | 0.214545 | 0.214545 | 0.214545 | 0.214545 | 0 | 0.025983 | 0.283702 | 1,988 | 77 | 74 | 25.818182 | 0.746489 | 0 | 0 | 0.083333 | 0 | 0 | 0.158954 | 0.055835 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.033333 | 0.05 | 0.416667 | 0.066667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
494f080e8d4ff644e186a089d3a1af1bffa14c5f | 2,401 | py | Python | src/python/noderunner.py | programmfabrik/easydb-library | 5626174af7f3932023aa0504676ecea12d7e79cc | [
"MIT"
] | 1 | 2018-03-20T21:23:29.000Z | 2018-03-20T21:23:29.000Z | src/python/noderunner.py | programmfabrik/easydb-library | 5626174af7f3932023aa0504676ecea12d7e79cc | [
"MIT"
] | 2 | 2018-09-11T20:51:16.000Z | 2019-01-16T10:36:54.000Z | src/python/noderunner.py | programmfabrik/easydb-library | 5626174af7f3932023aa0504676ecea12d7e79cc | [
"MIT"
] | 3 | 2017-08-07T13:47:52.000Z | 2021-09-21T13:28:44.000Z | # coding=utf8
import os
import sys
from subprocess import Popen, PIPE
def call(config, script, parameters='', additional_nodepaths=[], logger=None):
    node_runner_binary, node_runner_app, node_paths = get_paths(config)

    if node_runner_binary is None:
        raise Exception('node_runner_binary_not_found')
    if node_runner_app is None:
        raise Exception('node_runner_app_not_found')

    command = node_runner_binary.split(' ') + [node_runner_app, script, '-']
    node_paths += additional_nodepaths
    node_env = {
        'NODE_PATH': ':'.join([os.path.abspath(n) for n in node_paths])
    }

    if logger is not None:
        logger.debug('noderunner call: %s' % ' '.join(command))
        logger.debug('noderunner stdin: %s' % parameters)
        logger.debug('noderunner environment: %s' % node_env)

    p1 = Popen(
        command,
        shell=False,
        stdin=PIPE,
        stdout=PIPE,
        stderr=PIPE,
        env=node_env
    )
    out, err = p1.communicate(input=parameters)
    exit_code = p1.returncode

    if logger is not None:
        logger.debug('noderunner call: %s bytes from stdout, %s bytes from stderr, exit code: %s ==> %s'
                     % (len(out), len(err), exit_code, 'OK' if exit_code == 0 else 'ERROR'))
        if exit_code != 0:
            logger.error('noderunner call: exit code: %s, error: %s, out: %s' % (exit_code, err, out))

    return unicode(out, encoding='utf-8'), unicode(err, encoding='utf-8'), exit_code
def get_paths(config):
    if 'system' not in config or 'nodejs' not in config['system']:
        return None, None, None

    for k in ['node_runner_binary', 'node_runner_app', 'node_modules']:
        if k not in config['system']['nodejs']:
            return None, None, None

    node_runner_binary = config['system']['nodejs']['node_runner_binary']
    if node_runner_binary is None:
        return None, None, None
    node_runner_binary = os.path.abspath(node_runner_binary)

    node_runner_app = config['system']['nodejs']['node_runner_app']
    if node_runner_app is None:
        return None, None, None
    node_runner_app = os.path.abspath(node_runner_app)

    node_modules = config['system']['nodejs']['node_modules']
    if node_modules is None:
        node_path = set()
    else:
        node_path = set(node_modules.split(':'))

    return node_runner_binary, node_runner_app, list(node_path)
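
# Usage sketch (added; the expected `config` shape is inferred from get_paths,
# and the paths below are hypothetical):
#
#     config = {'system': {'nodejs': {
#         'node_runner_binary': '/usr/bin/node',
#         'node_runner_app': '/opt/runner/app.js',
#         'node_modules': '/opt/node_modules',
#     }}}
#     out, err, code = call(config, 'myscript.js', parameters='{"x": 1}')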
| 31.181818 | 104 | 0.64848 | 328 | 2,401 | 4.52439 | 0.22561 | 0.148248 | 0.118598 | 0.053908 | 0.375337 | 0.299191 | 0.175202 | 0.103774 | 0.057951 | 0.057951 | 0 | 0.00431 | 0.226989 | 2,401 | 76 | 105 | 31.592105 | 0.795259 | 0.004581 | 0 | 0.185185 | 0 | 0.018519 | 0.18258 | 0.022194 | 0 | 0 | 0 | 0 | 0 | 1 | 0.037037 | false | 0 | 0.055556 | 0 | 0.203704 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
494f2a3221a188193617a333bcd09a347c4963aa | 1,446 | py | Python | app/db/repositories/oauth_connections.py | Max-Zhenzhera/my_vocab_backend | f93d0c7c7f4a45fce47eb7ce74cfcda195b13a72 | [
"MIT"
] | 1 | 2021-11-18T16:25:22.000Z | 2021-11-18T16:25:22.000Z | app/db/repositories/oauth_connections.py | Max-Zhenzhera/my_vocab_backend | f93d0c7c7f4a45fce47eb7ce74cfcda195b13a72 | [
"MIT"
] | null | null | null | app/db/repositories/oauth_connections.py | Max-Zhenzhera/my_vocab_backend | f93d0c7c7f4a45fce47eb7ce74cfcda195b13a72 | [
"MIT"
] | null | null | null | from typing import ClassVar
from sqlalchemy.dialects.postgresql import insert as pg_insert
from sqlalchemy.future import select as sa_select
from sqlalchemy.orm import joinedload
from .base import BaseRepository
from .types_ import ModelType
from ..models import OAuthConnection
from ...schemas.authentication.oauth import BaseOAuthConnection
__all__ = ['OAuthConnectionsRepository']
class OAuthConnectionsRepository(BaseRepository):
    model: ClassVar[ModelType] = OAuthConnection

    async def link_connection(self, oauth_connection: BaseOAuthConnection) -> OAuthConnection:
        oauth_connection_on_insert = oauth_connection.dict()
        oauth_connection_on_conflict = {
            key: value for key, value in oauth_connection_on_insert.items() if key != 'user_id'
        }
        insert_stmt = pg_insert(OAuthConnection).values(**oauth_connection_on_insert)
        update_on_conflict_stmt = insert_stmt.on_conflict_do_update(
            index_elements=[OAuthConnection.user_id],
            set_=oauth_connection_on_conflict
        )
        return await self._return_from_statement(update_on_conflict_stmt)

    async def fetch_by_google_id_with_user(self, google_id: str) -> OAuthConnection:
        stmt = (
            sa_select(OAuthConnection)
            .options(joinedload(OAuthConnection.user))
            .where(OAuthConnection.google_id == google_id)
        )
        return await self._fetch_entity(stmt)
| 36.15 | 95 | 0.74343 | 161 | 1,446 | 6.335404 | 0.391304 | 0.102941 | 0.083333 | 0.067647 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.19018 | 1,446 | 39 | 96 | 37.076923 | 0.87105 | 0 | 0 | 0 | 0 | 0 | 0.022822 | 0.017981 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.275862 | 0 | 0.413793 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
494f90dee871caf08107e320ba6140b47bbaabc8 | 15,665 | py | Python | prysm/objects.py | JulesScholler/prysm | 0ae37c16ed5e1bcd71a7e9eda841ef1a12a80e00 | [
"MIT"
] | null | null | null | prysm/objects.py | JulesScholler/prysm | 0ae37c16ed5e1bcd71a7e9eda841ef1a12a80e00 | [
"MIT"
] | 1 | 2019-05-24T13:02:12.000Z | 2019-05-24T13:02:12.000Z | prysm/objects.py | muepf1/prysm_mpf | be4c114de22193b36dbf87375a3ebf1300df21e6 | [
"MIT"
] | null | null | null | """Objects for image simulation with."""
from scipy.signal import chirp
from .conf import config
from .mathops import engine as e, jinc
from .convolution import Convolvable
from .coordinates import cart_to_polar, polar_to_cart
class Slit(Convolvable):
"""Representation of a slit or pair of slits."""
def __init__(self, width, orientation='Vertical', sample_spacing=None, samples=0):
"""Create a new Slit instance.
Parameters
----------
width : `float`
the width of the slit in microns
orientation : `string`, {'Horizontal', 'Vertical', 'Crossed', 'Both'}
the orientation of the slit; Crossed and Both produce the same results
sample_spacing : `float`
spacing of samples in the synthetic image
samples : `int`
number of samples per dimension in the synthetic image
Notes
-----
Default of 0 samples allows quick creation for convolutions without
generating the image; use samples > 0 for an actual image.
"""
w = width / 2
if samples > 0:
ext = samples / 2 * sample_spacing
x = e.arange(-ext, ext, sample_spacing, dtype=config.precision)
y = e.arange(-ext, ext, sample_spacing, dtype=config.precision)
arr = e.zeros((samples, samples))
else:
arr, x, y = None, None, None
# paint in the slit
if orientation.lower() in ('v', 'vert', 'vertical'):
if samples > 0:
arr[:, abs(x) < w] = 1
self.orientation = 'Vertical'
self.width_x = width
self.width_y = 0
elif orientation.lower() in ('h', 'horiz', 'horizontal'):
if samples > 0:
arr[abs(y) < w, :] = 1
self.width_x = 0
self.width_y = width
self.orientation = 'Horizontal'
elif orientation.lower() in ('b', 'both', 'c', 'crossed'):
if samples > 0:
arr[abs(y) < w, :] = 1
arr[:, abs(x) < w] = 1
self.orientation = 'Crossed'
self.width_x, self.width_y = width, width
super().__init__(data=arr, x=x, y=y, has_analytic_ft=True)
def analytic_ft(self, x, y):
"""Analytic fourier transform of a slit.
Parameters
----------
x : `numpy.ndarray`
sample points in x frequency axis
y : `numpy.ndarray`
sample points in y frequency axis
Returns
-------
`numpy.ndarray`
2D numpy array containing the analytic fourier transform
"""
if self.width_x > 0 and self.width_y > 0:
return (e.sinc(x * self.width_x) +
e.sinc(y * self.width_y)).astype(config.precision)
elif self.width_x > 0 and self.width_y == 0:
return e.sinc(x * self.width_x).astype(config.precision)
else:
return e.sinc(y * self.width_y).astype(config.precision)
class Pinhole(Convolvable):
"""Representation of a pinhole."""
def __init__(self, width, sample_spacing=None, samples=0):
"""Create a Pinhole instance.
Parameters
----------
width : `float`
the width of the pinhole
sample_spacing : `float`
spacing of samples in the synthetic image
samples : `int`
number of samples per dimension in the synthetic image
Notes
-----
Default of 0 samples allows quick creation for convolutions without
generating the image; use samples > 0 for an actual image.
"""
self.width = width
# produce coordinate arrays
if samples > 0:
ext = samples / 2 * sample_spacing
x = e.arange(-ext, ext, sample_spacing, dtype=config.precision)
y = e.arange(-ext, ext, sample_spacing, dtype=config.precision)
xv, yv = e.meshgrid(x, y)
w = width / 2
# paint a circle on a black background
arr = e.zeros((samples, samples))
arr[e.sqrt(xv**2 + yv**2) < w] = 1
else:
arr, x, y = None, None, None
super().__init__(data=arr, x=x, y=y, has_analytic_ft=True)
def analytic_ft(self, x, y):
"""Analytic fourier transform of a slit.
Parameters
----------
x : `numpy.ndarray`
sample points in x frequency axis
y : `numpy.ndarray`
sample points in y frequency axis
Returns
-------
`numpy.ndarray`
2D numpy array containing the analytic fourier transform
"""
xq, yq = e.meshgrid(x, y)
# factor of pi corrects for jinc being modulo pi
# factor of 2 converts radius to diameter
rho = e.sqrt(xq**2 + yq**2) * self.width * 2 * e.pi
return jinc(rho).astype(config.precision)
class SiemensStar(Convolvable):
"""Representation of a Siemen's star object."""
def __init__(self, spokes, sinusoidal=True, radius=0.9, background='black', sample_spacing=2, samples=256):
"""Produce a Siemen's Star.
Parameters
----------
spokes : `int`
number of spokes in the star.
sinusoidal : `bool`
if True, generates a sinusoidal Siemen' star, else, generates a bar/block siemen's star
radius : `float`,
radius of the star, relative to the array width (default 90%)
background : 'string', {'black', 'white'}
background color
sample_spacing : `float`
spacing of samples, in microns
samples : `int`
number of samples per dimension in the synthetic image
Raises
------
ValueError
background other than black or white
"""
self.spokes = spokes
self.radius = radius
# generate a coordinate grid
x = e.linspace(-1, 1, samples, dtype=config.precision)
y = e.linspace(-1, 1, samples, dtype=config.precision)
xx, yy = e.meshgrid(x, y)
rv, pv = cart_to_polar(xx, yy)
ext = sample_spacing * (samples / 2)
ux = e.arange(-ext, ext, sample_spacing, dtype=config.precision)
uy = e.arange(-ext, ext, sample_spacing, dtype=config.precision)
# generate the siemen's star as a (rho,phi) polynomial
arr = e.cos(spokes / 2 * pv)
if not sinusoidal: # make binary
arr[arr < 0] = -1
arr[arr > 0] = 1
# scale to (0,1) and clip into a disk
arr = (arr + 1) / 2
if background.lower() in ('b', 'black'):
arr[rv > radius] = 0
elif background.lower() in ('w', 'white'):
arr[rv > radius] = 1
else:
raise ValueError('invalid background color')
super().__init__(data=arr, x=ux, y=uy, has_analytic_ft=False)
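
# Usage sketch (added commentary):
#     star = SiemensStar(spokes=32, sinusoidal=False, background='black')
# produces a 256x256 binary bar-target star filling 90% of the array.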
class TiltedSquare(Convolvable):
    """Represents a tilted square for e.g. slanted-edge MTF calculation."""

    def __init__(self, angle=4, background='white', sample_spacing=2, samples=256, radius=0.3, contrast=0.9):
        """Create a new TiltedSquare instance.

        Parameters
        ----------
        angle : `float`
            angle in degrees to tilt w.r.t. the x axis
        background : `string`
            white or black; the square will be the opposite color of the background
        sample_spacing : `float`
            spacing of samples
        samples : `int`
            number of samples
        radius : `float`
            radius of the square, as a fraction of the array width
        contrast : `float`
            contrast, anywhere from 0 to 1

        """
        if background.lower() == 'white':
            arr = e.ones((samples, samples), dtype=config.precision)
            fill_with = 1 - contrast
        else:
            arr = e.zeros((samples, samples), dtype=config.precision)
            fill_with = 1

        ext = samples / 2 * sample_spacing
        radius = radius * ext * 2
        x = e.arange(-ext, ext, sample_spacing, dtype=config.precision)
        y = e.arange(-ext, ext, sample_spacing, dtype=config.precision)
        xx, yy = e.meshgrid(x, y)

        # TODO: convert inline operation to use of rotation matrix
        angle = e.radians(angle)
        xp = xx * e.cos(angle) - yy * e.sin(angle)
        yp = xx * e.sin(angle) + yy * e.cos(angle)
        mask = (abs(xp) < radius) * (abs(yp) < radius)
        arr[mask] = fill_with
        super().__init__(data=arr, x=x, y=y, has_analytic_ft=False)


class SlantedEdge(Convolvable):
    """Representation of a slanted edge."""

    def __init__(self, angle=4, contrast=0.9, crossed=False, sample_spacing=2, samples=256):
        """Create a new SlantedEdge instance.

        Parameters
        ----------
        angle : `float`
            angle in degrees to tilt w.r.t. the y axis
        contrast : `float`
            difference between minimum and maximum values in the image
        crossed : `bool`, optional
            whether to make a single edge (crossed=False) or pair of crossed edges (crossed=True)
            aka a "BMW target"
        sample_spacing : `float`
            spacing of samples
        samples : `int`
            number of samples

        """
        diff = (1 - contrast) / 2
        arr = e.full((samples, samples), 1 - diff)
        ext = samples / 2 * sample_spacing
        x = e.arange(-ext, ext, sample_spacing, dtype=config.precision)
        y = e.arange(-ext, ext, sample_spacing, dtype=config.precision)
        xx, yy = e.meshgrid(x, y)

        angle = e.radians(angle)
        xp = xx * e.cos(angle) - yy * e.sin(angle)
        # yp = xx * e.sin(angle) + yy * e.cos(angle)  # do not need this
        mask = xp > 0  # single edge
        if crossed:
            mask = xp > 0  # set of 4 edges
            upperright = mask & e.rot90(mask)
            lowerleft = e.rot90(upperright, 2)
            mask = upperright | lowerleft

        arr[mask] = diff
        self.contrast = contrast
        self.black = diff
        self.white = 1 - diff
        super().__init__(data=arr, x=x, y=y, has_analytic_ft=False)


class Grating(Convolvable):
    """A grating with a given ruling."""

    def __init__(self, period, angle=0, sinusoidal=False, sample_spacing=2, samples=256):
        """Create a new Grating object.

        Parameters
        ----------
        period : `float`
            period of the grating in microns
        angle : `float`, optional
            clockwise angle of the grating w.r.t. the X axis, degrees
        sinusoidal : `bool`, optional
            if True, the grating is a sinusoid, else it has square edges
        sample_spacing : `float`, optional
            center-to-center sample spacing in microns
        samples : `int`, optional
            number of samples across the diameter of the array

        """
        self.period = period
        self.sinusoidal = sinusoidal
        ext = samples / 2 * sample_spacing
        x = e.arange(-ext, ext, sample_spacing, dtype=config.precision)
        y = e.arange(-ext, ext, sample_spacing, dtype=config.precision)
        xx, yy = e.meshgrid(x, y)
        if angle != 0:
            rho, phi = cart_to_polar(xx, yy)
            phi += e.radians(angle)
            xx, yy = polar_to_cart(rho, phi)

        data = e.cos(2 * e.pi / period * xx)
        if sinusoidal:
            data += 1
            data /= 2
        else:
            data[data > 0] = 1
            data[data < 0] = 0

        super().__init__(data=data, x=x, y=y, has_analytic_ft=False)


class GratingArray(Convolvable):
    """An array of gratings with given rulings."""

    def __init__(self, periods, angles=None, sinusoidal=False, sample_spacing=2, samples=256):
        # if angles not provided, angles are 0
        if angles is None:
            angles = [0] * len(periods)

        self.periods = periods
        self.angles = angles
        self.sinusoidal = sinusoidal

        # calculate the basic grid things are defined on
        ext = samples / 2 * sample_spacing
        x = e.arange(-ext, ext, sample_spacing, dtype=config.precision)
        y = e.arange(-ext, ext, sample_spacing, dtype=config.precision)
        xx, yy = e.meshgrid(x, y)
        xxx, yyy = xx, yy

        # compute the grid parameters; number of columns, number of samples per column
        squareness = e.sqrt(len(periods))
        ncols = int(e.ceil(squareness))
        samples_per_patch = int(e.floor(samples / ncols))
        low_idx_x = 0
        high_idx_x = samples_per_patch
        low_idx_y = 0
        high_idx_y = samples_per_patch
        curr_row = 0
        out = e.zeros(xx.shape)
        for idx, (period, angle) in enumerate(zip(periods, angles)):
            # if we're off at an off angle, adjust the coordinates
            if angle != 0:
                rho, phi = cart_to_polar(xxx, yyy)
                phi += e.radians(angle)
                xxx, yyy = polar_to_cart(rho, phi)

            # compute the sinusoid
            data = e.cos(2 * e.pi / period * xxx)

            # compute the indices to embed it into the final array;
            # every time the current column advances, advance the X coordinates
            sy = slice(low_idx_y, high_idx_y)
            sx = slice(low_idx_x, high_idx_x)
            out[sy, sx] += data[sy, sx]

            # advance the indices as needed
            if (idx > 0) & ((idx + 1) % ncols == 0):
                offset = samples_per_patch * curr_row
                low_idx_x = 0
                high_idx_x = samples_per_patch
                low_idx_y = samples_per_patch + offset
                high_idx_y = samples_per_patch * 2 + offset
                curr_row += 1
            else:
                low_idx_x += samples_per_patch
                high_idx_x += samples_per_patch

            xxx = xx

        if sinusoidal:
            out += 1
            out /= 2
        else:
            out[out > 0] = 1
            out[out < 0] = 0

        super().__init__(data=out, x=x, y=y, has_analytic_ft=False)


class Chirp(Convolvable):
    """A frequency chirp."""

    def __init__(self, p0, p1, angle=0, method='linear', binary=True, sample_spacing=2, samples=256):
        """Create a new Chirp instance.

        Parameters
        ----------
        p0 : `float`
            first period, units of microns
        p1 : `float`
            second period, units of microns
        angle : `float`
            clockwise angle between the X axis and the chirp, units of degrees
        method : `str`, optional, {'linear', 'quadratic', 'logarithmic', 'hyperbolic'}
            type of chirp, passed directly to scipy.signal.chirp
        binary : `bool`, optional
            if True, the chirp is a square bar target, not a sinusoidal target.
        sample_spacing : `float`, optional
            center-to-center spacing of samples in the array
        samples : `float`, optional
            number of samples

        """
        p0 *= 2
        p1 *= 2
        ext = samples / 2 * sample_spacing
        x = e.arange(-ext, ext, sample_spacing, dtype=config.precision)
        y = e.arange(-ext, ext, sample_spacing, dtype=config.precision)
        xx, yy = e.meshgrid(x, y)
        if angle != 0:
            rho, phi = cart_to_polar(xx, yy)
            phi += e.radians(angle)
            xx, yy = polar_to_cart(rho, phi)

        sig = chirp(xx, 1 / p1, ext, 1 / p0, method=method)
        if binary:
            sig[sig < 0] = 0
            sig[sig > 0] = 1
        else:
            sig = (sig + 1) / 2

        super().__init__(x=x, y=y, data=sig, has_analytic_ft=False)
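
# Usage sketch (added commentary): a bar-target chirp whose period sweeps from
# 10 to 50 microns, tilted 15 degrees clockwise:
#     c = Chirp(p0=10, p1=50, angle=15, binary=True)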
| 34.888641 | 111 | 0.556591 | 1,980 | 15,665 | 4.3 | 0.158586 | 0.061076 | 0.046981 | 0.02443 | 0.48802 | 0.441038 | 0.431055 | 0.389359 | 0.351656 | 0.313719 | 0 | 0.015004 | 0.336291 | 15,665 | 448 | 112 | 34.966518 | 0.803886 | 0.323332 | 0 | 0.385714 | 0 | 0 | 0.013974 | 0 | 0 | 0 | 0 | 0.002232 | 0 | 1 | 0.047619 | false | 0 | 0.02381 | 0 | 0.128571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
495549c13a701a42c3547f10b1f7d6aa73219d40 | 2,906 | py | Python | hh_dmft/dmft.py | AlexLoner/HH_DMFT | 8f2c3cfefa1eaab5b1b2d400d12487d5d1888d1a | [
"MIT"
] | null | null | null | hh_dmft/dmft.py | AlexLoner/HH_DMFT | 8f2c3cfefa1eaab5b1b2d400d12487d5d1888d1a | [
"MIT"
] | null | null | null | hh_dmft/dmft.py | AlexLoner/HH_DMFT | 8f2c3cfefa1eaab5b1b2d400d12487d5d1888d1a | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import print_function, division, absolute_import, unicode_literals
from datetime import datetime
import numpy as np
class DMFT():
def __init__(self, n_loops, gf_initial, solver, alpha, tol, tb_model = None):
self.n_loops = n_loops
self.solver = solver
self.alpha = alpha
self.tol = tol
self.num_iw = gf_initial.shape[0]
self.TB = tb_model
        ##### Create arrays for the Green's functions
self.g0_list = np.zeros((n_loops + 1, self.num_iw), dtype=np.complex64)
self.g0_list[0, :] = gf_initial
self.g_list = np.zeros((n_loops, self.num_iw), dtype=np.complex64)
def _check_consistent_condition(self, m1, m2):
n = self.num_iw // 3
return np.linalg.norm(m1[self.num_iw//2 :self.num_iw//2 + n] - m2[self.num_iw//2:self.num_iw//2 + n]) < self.tol
# def _gf_calc(self):
# G = GfImFreq(mesh = MeshImFreq(self.solver.beta, 'Fermion', self.num_iw//2), indices = [1])
# Sigma0 = GfImFreq(mesh = MeshImFreq(self.solver.beta, 'Fermion', self.num_iw//2), indices = [1]);
# Sigma0.data[:,0,0] = self.sigma
# G << self.HT(Sigma = Sigma0, mu=self.solver.mu)
# g = G.data[:,0,0]
# return g
def body(self):
self.consistent_condition = False
cur_loop = 0
while not self.consistent_condition and cur_loop < self.n_loops:
print("DMFT Loop : {} / {} starts at {}".format(cur_loop, self.n_loops, datetime.now()))
self.solver.g0 = self.g0_list[cur_loop, :]
self.solver.run()
gf_cluster = self.solver.get_gf() # will run the solver to get the cluster (anderson green function)
self.sigma = self.g0_list[cur_loop, :] ** -1 - gf_cluster ** -1 # sigma = sigma_anderson (cluster)
self.TB.HT(self.g_list[cur_loop, :], self.solver.wn, self.solver.mu, self.sigma)
# self.g_list[cur_loop, :] = self._gf_calc()
            self.g0_list[cur_loop + 1, :] = self.alpha * ((self.sigma + self.g_list[cur_loop, :] ** -1) ** -1) + (1.0 - self.alpha) * self.g0_list[cur_loop, :]  # should be able to calculate the new system's gf with the corresponding dispersion
self.consistent_condition = self._check_consistent_condition(self.g0_list[cur_loop, :], self.g0_list[cur_loop + 1, :])
cur_loop += 1
if self.consistent_condition:
print("DMFT Loop : ends with self-consistent_condition at {} in {} step".format(datetime.now(), cur_loop))
self.g0_list = np.delete(self.g0_list, range(cur_loop, self.n_loops + 1), axis=0)
self.g_list = np.delete(self.g_list, range(cur_loop, self.n_loops), axis=0)
else:
print("DMFT Loop : {} / {} ends at {}".format(cur_loop, self.n_loops, datetime.now()))
| 53.814815 | 233 | 0.601858 | 417 | 2,906 | 3.983213 | 0.251799 | 0.071644 | 0.054184 | 0.036123 | 0.369055 | 0.310656 | 0.210716 | 0.149308 | 0.149308 | 0.080674 | 0 | 0.024096 | 0.257398 | 2,906 | 53 | 234 | 54.830189 | 0.745598 | 0.224019 | 0 | 0 | 0 | 0 | 0.056325 | 0.011176 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.083333 | 0 | 0.222222 | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
49557e0a59ee5f94c1863b837faec46541356b36 | 1,212 | py | Python | main.py | BlackFalcons/kryptering | b15580638389199393a7fbb6dd510422c01b9e26 | [
"Apache-2.0"
] | null | null | null | main.py | BlackFalcons/kryptering | b15580638389199393a7fbb6dd510422c01b9e26 | [
"Apache-2.0"
] | null | null | null | main.py | BlackFalcons/kryptering | b15580638389199393a7fbb6dd510422c01b9e26 | [
"Apache-2.0"
] | null | null | null | from Cracker import Cracker
from Encryption import Encryption
if __name__ == '__main__':
# cracker = Cracker("Imzmrh", Encryption.getNorwegianAlphabet())
# cracker.bruteforce()
while True:
prompt_encryption = input("Vil du kryptere eller dekryptere(k eller d): ")
prompt_lowered = prompt_encryption.lower()
if prompt_lowered == "k":
try:
key = int(input("Oppgi nøkkel: "))
except ValueError:
print("You have to provide a integer.")
break
msg = input("Skriv inn meldingen: ")
if isinstance(key, int):
print(Encryption.encrypt(msg, key))
else:
print("Du må oppgi et heltall!")
break
elif prompt_lowered == "d":
try:
key = int(input("Oppgi nøkkel: "))
except ValueError:
print("You have to provide a integer.")
break
msg = input("Skriv inn meldingen: ")
if isinstance(key, int):
print(Encryption.decrypt(msg, key))
else:
print("Du må oppgi et heltall!")
break
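
# Refactor sketch (illustrative, not wired in): the "k" and "d" branches above
# differ only in which Encryption method they call, so a dispatch table removes
# the duplication. Assumes Encryption.encrypt/decrypt signatures as used above.
def _run_once(choice):
    actions = {'k': Encryption.encrypt, 'd': Encryption.decrypt}
    action = actions.get(choice)
    if action is None:
        return
    try:
        key = int(input("Oppgi nøkkel: "))
    except ValueError:
        print("You have to provide an integer.")
        return
    msg = input("Skriv inn meldingen: ")
    print(action(msg, key))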
| 30.3 | 82 | 0.522277 | 120 | 1,212 | 5.166667 | 0.416667 | 0.03871 | 0.029032 | 0.045161 | 0.551613 | 0.551613 | 0.551613 | 0.551613 | 0.551613 | 0.551613 | 0 | 0 | 0.382013 | 1,212 | 39 | 83 | 31.076923 | 0.82777 | 0.068482 | 0 | 0.666667 | 0 | 0 | 0.205151 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.066667 | 0 | 0.066667 | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
495910cf27ebe38bf6e74c44bdda3738c41cdf8c | 23,565 | py | Python | player/models.py | paconte/tournaments | 525162bc9f0de245597f8aa33bf1f4088a087692 | [
"MIT"
] | null | null | null | player/models.py | paconte/tournaments | 525162bc9f0de245597f8aa33bf1f4088a087692 | [
"MIT"
] | null | null | null | player/models.py | paconte/tournaments | 525162bc9f0de245597f8aa33bf1f4088a087692 | [
"MIT"
] | null | null | null | from django.core.exceptions import ValidationError
from django.db import models
from django.core.validators import MinValueValidator, MaxValueValidator
from django.utils.translation import ugettext_lazy as _
from django.utils.encoding import smart_str
DATA_FILES = './data_files/'
MIXED_OPEN = 'Mixed Open'
MEN_OPEN = 'Mens Open'
WOMEN_OPEN = 'Womens Open'
SENIOR_MIX = 'Senior Mix Open'
MEN_30 = 'Mens 30'
MEN_40 = 'Mens 40'
SENIOR_WOMEN = 'Senior Womens Open'
WOMEN_27 = 'Women 27'
MXO = 'MXO'
MO = 'MO'
WO = 'WO'
SMX = 'SMX'
W27 = 'W27'
M30 = 'M30'
M40 = 'M40'
TOUCH_DIVISION_CHOICES = (
(MXO, MIXED_OPEN),
(MO, MEN_OPEN),
(WO, WOMEN_OPEN),
(SMX, SENIOR_MIX),
(M30, MEN_30),
(M40, MEN_40),
(W27, WOMEN_27)
)
def get_player_gender(division):
if division in [WO, W27]:
result = Person.FEMALE
elif division in [MO, M30, M40]:
result = Person.MALE
elif division in [MXO, SMX]:
result = Person.UNKNOWN
else:
raise Exception("Division %s is not supported." % division)
return result
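
# For example: get_player_gender('WO') returns Person.FEMALE, and
# get_player_gender('MXO') returns Person.UNKNOWN (mixed divisions).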
# Create your models here.
class Person(models.Model):
MALE = 'M'
FEMALE = 'F'
UNKNOWN = 'U'
GENDER_CHOICES = (
(MALE, 'Male'),
(FEMALE, 'Female'),
(UNKNOWN, None)
)
first_name = models.CharField(max_length=30)
last_name = models.CharField(max_length=30)
born = models.DateField(null=True, blank=True)
nationality = models.CharField(max_length=30, null=True, blank=True)
gender = models.CharField(max_length=1, choices=GENDER_CHOICES, default=UNKNOWN, null=True)
class Meta:
ordering = ['gender', 'last_name', 'first_name']
def __str__(self):
return '{0} {1} - {2}'.format(smart_str(self.first_name), smart_str(self.last_name), self.gender)
def get_full_name(self):
"""Returns the person's full name."""
return '{0} {1}'.format(smart_str(self.first_name), smart_str(self.last_name))
def get_full_name_reverse(self):
"""Returns the person's full name."""
return '{0}, {1}'.format(smart_str(self.last_name), smart_str(self.first_name))
def compare_name(self, other):
"""Returns True if both persons have the same full name otherwise False."""
return self.get_full_name() == other.get_full_name()
def __lt__(self, other):
if self.gender != other.gender:
if self.gender == self.FEMALE or other.gender == self.UNKNOWN:
return True
else:
return False
else:
            return self.last_name < other.last_name
def get_png_flag(self):
return 'images/flags/16/Germany.png'
class Team(models.Model):
name = models.CharField(max_length=40)
players = models.ManyToManyField(Person, through='Player')
division = models.CharField(max_length=3, choices=TOUCH_DIVISION_CHOICES)
def __str__(self):
return self.division + ' - ' + self.name
class Tournament(models.Model):
TOURNAMENT_CHOICES = (("PADEL", "PADEL"), ("TOUCH", "TOUCH"))
type = models.CharField(max_length=10, choices=TOURNAMENT_CHOICES, default="TOUCH")
name = models.CharField(max_length=50)
country = models.CharField(max_length=30)
city = models.CharField(max_length=30)
address = models.CharField(max_length=100, null=True, blank=True)
date = models.DateField(null=True, blank=True)
teams = models.ManyToManyField(Team)
division = models.CharField(max_length=3, choices=TOUCH_DIVISION_CHOICES)
class Meta:
ordering = ['name']
def __str__(self):
if self.country and self.city:
result = '{0} - {1} ({2}, {3})'.format(
self.division, self.name, smart_str(self.city), smart_str(self.country))
elif self.country:
result = '{0} - {1} ({2})'.format(self.division, self.name, smart_str(self.country))
elif self.city:
result = '{0} - {1} ({2})'.format(self.division, self.name, smart_str(self.city))
else:
result = '{0} - {1}'.format(self.division, self.name)
return result
def get_division_name(self):
for x in TOUCH_DIVISION_CHOICES:
if self.division == x[0]:
if 'MO' == x[0]:
return MEN_OPEN
elif 'WO' == x[0]:
return WOMEN_OPEN
elif 'MXO' == x[0]:
return MIXED_OPEN
elif 'M30' == x[0]:
return MEN_30
elif 'M40' == x[0]:
return MEN_40
elif 'SMX' == x[0]:
return SENIOR_MIX
elif 'W27' == x[0]:
return WOMEN_27
assert "A name for the division: %s could not be found." % self.division
    def __lt__(self, other):
        return self.name < other.name
class Player(models.Model):
person = models.ForeignKey(Person)
team = models.ForeignKey(Team)
number = models.PositiveSmallIntegerField(null=True, blank=True)
tournaments_played = models.ManyToManyField(Tournament, blank=True)
class Meta:
ordering = ["person"]
def __str__(self):
return '{:s}, {:s} {:s}'.format(str(self.team), str(self.number), str(self.person))
class GameRound(models.Model):
FINAL = 'Final'
SEMI = 'Semifinal'
QUARTER = '1/4'
EIGHTH = 'Eighthfinals'
SIXTEENTH = '1/16'
THIRD_POSITION = 'Third position'
FIFTH_POSITION = 'Fifth position'
SIXTH_POSITION = 'Sixth position'
SEVENTH_POSITION = 'Seventh position'
EIGHTH_POSITION = 'Eighth position'
NINTH_POSITION = 'Ninth position'
TENTH_POSITION = 'Tenth position'
ELEVENTH_POSITION = 'Eleventh position'
TWELFTH_POSITION = 'Twelfth position'
THIRTEENTH_POSITION = 'Thirteenth position'
FOURTEENTH_POSITION = 'Fourteenth position'
FIFTEENTH_POSITION = 'Fifteenth position'
SIXTEENTH_POSITION = 'Sixteenth position'
EIGHTEENTH_POSITION = 'Eighteenth position'
TWENTIETH_POSITION = 'Twentieth position'
DIVISION = 'Division'
POOL_A = 'Pool A'
POOL_B = 'Pool B'
POOL_C = 'Pool C'
POOL_D = 'Pool D'
POOL_E = 'Pool E'
POOL_F = 'Pool F'
LIGA = 'Liga'
ordered_rounds = [FINAL, THIRD_POSITION, SEMI, FIFTH_POSITION, QUARTER, SIXTH_POSITION,
SEVENTH_POSITION, EIGHTH_POSITION, EIGHTH, NINTH_POSITION, TENTH_POSITION,
ELEVENTH_POSITION, TWELFTH_POSITION, THIRTEENTH_POSITION, FOURTEENTH_POSITION,
FIFTEENTH_POSITION, SIXTEENTH_POSITION, EIGHTEENTH_POSITION, TWENTIETH_POSITION]
GAME_ROUND_CHOICES = (
(FINAL, FINAL),
(SEMI, SEMI),
(QUARTER, QUARTER),
(EIGHTH, EIGHTH),
(SIXTEENTH, SIXTEENTH),
(THIRD_POSITION, THIRD_POSITION),
(FIFTH_POSITION, FIFTH_POSITION),
(SIXTH_POSITION, SIXTH_POSITION),
(SEVENTH_POSITION, SEVENTH_POSITION),
(EIGHTH_POSITION, EIGHTH_POSITION),
(NINTH_POSITION, NINTH_POSITION),
(TENTH_POSITION, TENTH_POSITION),
(ELEVENTH_POSITION, ELEVENTH_POSITION),
(TWELFTH_POSITION, TWELFTH_POSITION),
(THIRTEENTH_POSITION, THIRTEENTH_POSITION),
(FIFTEENTH_POSITION, FIFTEENTH_POSITION),
(SIXTEENTH_POSITION, SIXTEENTH_POSITION),
(EIGHTEENTH_POSITION, EIGHTEENTH_POSITION),
(TWENTIETH_POSITION, TWENTIETH_POSITION),
(DIVISION, DIVISION),
(POOL_A, POOL_A),
(POOL_B, POOL_B),
(POOL_C, POOL_C),
(POOL_D, POOL_D),
(POOL_E, POOL_E),
(POOL_F, POOL_F),
(LIGA, LIGA),
)
GOLD = 'Gold'
SILVER = 'Silver'
BRONZE = 'Bronze'
WOOD = 'Wood'
CATEGORY_ROUND_CHOICES = (
(GOLD, GOLD),
(SILVER, SILVER),
(BRONZE, BRONZE),
(WOOD, WOOD),
)
round = models.CharField(default=POOL_A, max_length=32, null=False, blank=False, choices=GAME_ROUND_CHOICES)
number_teams = models.PositiveIntegerField(default=2, validators=[MinValueValidator(0), MaxValueValidator(20)])
category = models.CharField(default=GOLD, max_length=6, null=False, blank=False, choices=CATEGORY_ROUND_CHOICES)
def __str__(self):
return '{:s} {:s} {:s}'.format(str(self.round), str(self.number_teams), str(self.category))
    def is_pool(self):
        return self.round in {self.POOL_A, self.POOL_B, self.POOL_C,
                              self.POOL_D, self.POOL_E, self.POOL_F}
def __lt__(self, other):
# print('self = %s, other = %s' %(self, other))
if self.category == other.category:
if self.round == other.round:
result = self.number_teams.__lt__(other.number_teams)
else:
if self.round == self.FINAL:
result = False
elif other.round == self.FINAL:
result = True
elif self.round == self.THIRD_POSITION:
result = False
elif other.round == self.THIRD_POSITION:
result = True
elif self.round == self.SEMI:
result = False
elif other.round == self.SEMI:
result = True
elif self.round == self.FIFTH_POSITION:
result = False
elif other.round == self.FIFTH_POSITION:
result = True
elif self.round == self.SIXTH_POSITION:
result = False
elif other.round == self.SIXTH_POSITION:
result = True
elif self.round == self.SEVENTH_POSITION:
result = False
elif other.round == self.SEVENTH_POSITION:
result = True
elif self.round == self.EIGHTH_POSITION:
result = False
elif other.round == self.EIGHTH_POSITION:
result = True
elif self.round == self.QUARTER:
result = False
elif other.round == self.QUARTER:
result = True
elif self.round == self.NINTH_POSITION:
result = False
elif other.round == self.NINTH_POSITION:
result = True
elif self.round == self.TENTH_POSITION:
result = False
elif other.round == self.TENTH_POSITION:
result = True
elif self.round == self.ELEVENTH_POSITION:
result = False
elif other.round == self.ELEVENTH_POSITION:
result = True
elif self.round == self.TWELFTH_POSITION:
result = False
elif other.round == self.TWELFTH_POSITION:
result = True
elif self.round == self.THIRTEENTH_POSITION:
result = False
elif other.round == self.THIRTEENTH_POSITION:
result = True
elif self.round == self.FIFTEENTH_POSITION:
result = False
elif other.round == self.FIFTEENTH_POSITION:
result = True
elif self.round == self.SIXTEENTH_POSITION:
result = False
elif other.round == self.SIXTEENTH_POSITION:
result = True
elif self.round == self.SIXTEENTH:
result = False
elif other.round == self.SIXTEENTH:
result = True
elif self.round == self.EIGHTEENTH_POSITION:
result = False
elif other.round == self.EIGHTEENTH_POSITION:
result = True
elif self.round == self.TWENTIETH_POSITION:
result = False
elif other.round == self.TWENTIETH_POSITION:
result = True
elif self.round == self.DIVISION:
result = False
elif other.round == self.DIVISION:
result = True
elif self.round in {self.POOL_A, self.POOL_B, self.POOL_C, self.POOL_D, self.POOL_E, self.POOL_F}:
result = False
elif other.round in {self.POOL_A, self.POOL_B, self.POOL_C, self.POOL_D, self.POOL_E, self.POOL_F}:
result = True
else:
raise Exception('Problem comparing values: %s and %s' % (self.round, other.round))
else:
if self.category == self.GOLD:
result = False
elif other.category == self.GOLD:
result = True
elif self.category == self.SILVER:
result = False
elif other.category == self.SILVER:
result = True
elif self.category == self.BRONZE:
result = False
elif other.category == self.BRONZE:
result = True
elif self.category == self.WOOD:
result = False
else:
raise Exception('Problem comparing values: %s and %s' % (self.category, other.category))
return result
def __cmp__(self, other):
# print('self = %s, other = %s' %(self, other))
if self.category == other.category:
if self.round == other.round:
result = self.number_teams.__cmp__(other.number_teams)
else:
if self.round == self.FINAL:
result = 1
elif other.round == self.FINAL:
result = -1
elif self.round == self.THIRD_POSITION:
result = 1
elif other.round == self.THIRD_POSITION:
result = -1
elif self.round == self.SEMI:
result = 1
elif other.round == self.SEMI:
result = -1
elif self.round == self.FIFTH_POSITION:
result = 1
elif other.round == self.FIFTH_POSITION:
result = -1
elif self.round == self.SIXTH_POSITION:
result = 1
elif other.round == self.SIXTH_POSITION:
result = -1
elif self.round == self.SEVENTH_POSITION:
result = 1
elif other.round == self.SEVENTH_POSITION:
result = -1
elif self.round == self.QUARTER:
result = 1
elif other.round == self.QUARTER:
result = -1
elif self.round == self.NINTH_POSITION:
result = 1
elif other.round == self.NINTH_POSITION:
result = -1
elif self.round == self.TENTH_POSITION:
result = 1
elif other.round == self.TENTH_POSITION:
result = -1
elif self.round == self.ELEVENTH_POSITION:
result = 1
elif other.round == self.ELEVENTH_POSITION:
result = -1
elif self.round == self.TWELFTH_POSITION:
result = 1
elif other.round == self.TWELFTH_POSITION:
result = -1
elif self.round == self.THIRTEENTH_POSITION:
result = 1
elif other.round == self.THIRTEENTH_POSITION:
result = -1
elif self.round == self.FIFTEENTH_POSITION:
result = 1
elif other.round == self.FIFTEENTH_POSITION:
result = -1
elif self.round == self.SIXTEENTH_POSITION:
result = 1
elif other.round == self.SIXTEENTH_POSITION:
result = -1
elif self.round == self.SIXTEENTH:
result = 1
elif other.round == self.SIXTEENTH:
result = -1
elif self.round == self.EIGHTEENTH_POSITION:
result = 1
elif other.round == self.EIGHTEENTH_POSITION:
result = -1
elif self.round == self.TWENTIETH_POSITION:
result = 1
elif other.round == self.TWENTIETH_POSITION:
result = -1
else:
raise Exception('Problem comparing values: %s and %s' % (self.round, other.round))
else:
if self.category == self.GOLD:
result = 1
elif other.category == self.GOLD:
result = -1
elif self.category == self.SILVER:
result = 1
elif other.category == self.SILVER:
result = -1
elif self.category == self.BRONZE:
result = 1
elif other.category == self.BRONZE:
result = -1
elif self.category == self.WOOD:
result = 1
else:
raise Exception('Problem comparing values: %s and %s' % (self.category, other.category))
return result
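
# Illustrative sketch (not wired into the model): the long if/elif chains in
# GameRound.__lt__/__cmp__ encode one fixed precedence, which a rank table can
# express compactly. The orderings below only approximate those chains and
# would need auditing before replacing them; these names are hypothetical.
_CATEGORY_RANK = {'Gold': 3, 'Silver': 2, 'Bronze': 1, 'Wood': 0}

def _round_rank(round_name):
    # higher rank sorts later ("greater"); pools, Division and any unknown
    # rounds sort lowest, roughly matching the chains above
    try:
        return len(GameRound.ordered_rounds) - GameRound.ordered_rounds.index(round_name)
    except ValueError:
        return 0

def _compare_game_rounds(a, b):
    key_a = (_CATEGORY_RANK[a.category], _round_rank(a.round), a.number_teams)
    key_b = (_CATEGORY_RANK[b.category], _round_rank(b.round), b.number_teams)
    return (key_a > key_b) - (key_a < key_b)  # -1, 0 or 1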
class GameField(models.Model):
name = models.CharField(max_length=50, null=False, blank=False)
    def __str__(self):
return '{}'.format(self.name)
class PadelResult(models.Model):
local1 = models.SmallIntegerField(null=True, blank=True)
local2 = models.SmallIntegerField(null=True, blank=True)
local3 = models.SmallIntegerField(null=True, blank=True)
local4 = models.SmallIntegerField(null=True, blank=True)
local5 = models.SmallIntegerField(null=True, blank=True)
local6 = models.SmallIntegerField(null=True, blank=True)
local7 = models.SmallIntegerField(null=True, blank=True)
local8 = models.SmallIntegerField(null=True, blank=True)
local9 = models.SmallIntegerField(null=True, blank=True)
local10 = models.SmallIntegerField(null=True, blank=True)
visitor1 = models.SmallIntegerField(null=True, blank=True)
visitor2 = models.SmallIntegerField(null=True, blank=True)
visitor3 = models.SmallIntegerField(null=True, blank=True)
visitor4 = models.SmallIntegerField(null=True, blank=True)
visitor5 = models.SmallIntegerField(null=True, blank=True)
visitor6 = models.SmallIntegerField(null=True, blank=True)
visitor7 = models.SmallIntegerField(null=True, blank=True)
visitor8 = models.SmallIntegerField(null=True, blank=True)
visitor9 = models.SmallIntegerField(null=True, blank=True)
visitor10 = models.SmallIntegerField(null=True, blank=True)
@classmethod
def create(cls, scores):
while scores[len(scores)-1] == '':
del(scores[-1])
result = cls(local1=scores[0], visitor1=scores[1])
try:
result.local2 = scores[2]
result.visitor2 = scores[3]
result.local3 = scores[4]
result.visitor3 = scores[5]
result.local4 = scores[6]
result.visitor4 = scores[7]
result.local5 = scores[8]
result.visitor5 = scores[9]
result.local6 = scores[10]
result.visitor6 = scores[11]
result.local7 = scores[12]
result.visitor7 = scores[13]
result.local8 = scores[14]
result.visitor8 = scores[15]
result.local9 = scores[16]
result.visitor9 = scores[17]
result.local10 = scores[18]
result.visitor10 = scores[19]
except IndexError:
pass
return result
def _get_local_scores(self):
return self._get_scores_lists()[0]
def _get_visitor_scores(self):
return self._get_scores_lists()[1]
def _get_scores_lists(self):
local = list()
visitor = list()
scores = [self.local1, self.visitor1, self.local2, self.visitor2, self.local3, self.visitor3,
self.local4, self.visitor4, self.local5, self.visitor5, self.local6, self.visitor6,
self.local7, self.visitor7, self.local8, self.visitor8, self.local9, self.visitor9,
self.local10, self.visitor10]
for i in range(len(scores)):
if scores[i] is not None:
if i % 2 == 0:
local.append(scores[i])
else:
visitor.append(scores[i])
else:
break
return local, visitor
def get_result_pairs(self):
result = list()
for index in range(len(self.local_scores)):
x = self.local_scores[index]
y = self.visitor_scores[index]
result.append(str(x) + '-' + str(y))
return result
local_scores = property(_get_local_scores)
visitor_scores = property(_get_visitor_scores)
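
# Illustrative helper sketch (not wired in): PadelResult._get_scores_lists()
# walks an interleaved [local, visitor, local, ...] list; slicing expresses the
# same split in two lines once the trailing Nones are trimmed.
def _split_interleaved_scores(interleaved):
    trimmed = []
    for score in interleaved:
        if score is None:
            break
        trimmed.append(score)
    return trimmed[0::2], trimmed[1::2]
# e.g. _split_interleaved_scores([6, 4, 3, 6, None, None]) -> ([6, 3], [4, 6])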
class Game(models.Model):
field = models.ForeignKey(GameField, blank=True, null=True)
time = models.TimeField(blank=True, null=True)
local = models.ForeignKey(Team, related_name="local", null=True, blank=True)
visitor = models.ForeignKey(Team, related_name="visitor", null=True, blank=True)
local_score = models.SmallIntegerField(null=True, blank=True)
visitor_score = models.SmallIntegerField(null=True, blank=True)
tournament = models.ForeignKey(Tournament)
phase = models.ForeignKey(GameRound)
result_padel = models.ForeignKey(PadelResult, null=True, blank=True)
def __str__(self):
return '{} - {} - {} {} - {} {}'.format(
self.tournament, self.phase, self.local, self.local_score, self.visitor_score, self.visitor)
def __lt__(self, other):
return self.phase.__lt__(other.phase)
def __cmp__(self, other):
return self.phase.__cmp__(other.phase)
class PlayerStadistic(models.Model):
player = models.ForeignKey(Player)
points = models.PositiveSmallIntegerField(null=True, blank=True, default=0)
mvp = models.PositiveSmallIntegerField(null=True, blank=True, default=0)
played = models.PositiveSmallIntegerField(null=True, blank=True, default=0)
game = models.ForeignKey(Game, null=True)
tournament = models.ForeignKey(Tournament, null=True)
def clean(self):
        if not self.game and not self.tournament:
raise ValidationError(_('PlayerStatistic must be related either to a game or to a tournament.'))
def is_game_stat(self):
        return bool(self.game)
def is_tournament_stat(self):
return not self.is_game_stat()
def __str__(self):
if self.is_game_stat():
return '{} - {} - touchdowns: {}'.format(self.game, self.player, self.points)
else:
return '{} - {} - touchdowns: {} - played: {} - mvp: {}'.format(
self.tournament, self.player, self.points, self.played, self.mvp)
class Contact(models.Model):
name = models.CharField(max_length=40, null=False, blank=False)
email = models.EmailField(max_length=50, null=False, blank=False)
where = models.CharField(max_length=20, null=True, blank=True)
message = models.TextField(null=False, blank=False)
| 38.567921 | 116 | 0.570422 | 2,561 | 23,565 | 5.099961 | 0.104647 | 0.053748 | 0.041804 | 0.049613 | 0.615267 | 0.55547 | 0.409234 | 0.217135 | 0.178853 | 0.175331 | 0 | 0.017139 | 0.326544 | 23,565 | 610 | 117 | 38.631148 | 0.80586 | 0.012476 | 0 | 0.413408 | 0 | 0 | 0.04748 | 0.001161 | 0 | 0 | 0 | 0 | 0.001862 | 1 | 0.054004 | false | 0.001862 | 0.009311 | 0.026071 | 0.351955 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4959a2d067607ac104beabd6e507a4fbdd7b2580 | 13,503 | py | Python | flask_app.py | sandygoreraza/FlaskTaskManagementSampleApp | 22e82a5013b0b9178e2a3acfe6d51367d9804bfe | [
"MIT"
] | null | null | null | flask_app.py | sandygoreraza/FlaskTaskManagementSampleApp | 22e82a5013b0b9178e2a3acfe6d51367d9804bfe | [
"MIT"
] | null | null | null | flask_app.py | sandygoreraza/FlaskTaskManagementSampleApp | 22e82a5013b0b9178e2a3acfe6d51367d9804bfe | [
"MIT"
] | null | null | null | # A simple Flask task-management app (login, dashboard, tasks and an admin area).
from flask import Flask,render_template,request,session,redirect,url_for,flash
from flask_admin import Admin,BaseView, expose
import models.users
import models.tasks
import models.admin.users
import datetime
app = Flask(__name__)
app.secret_key = 'wrwerwr34534534te4rt454645tretger'
#sess.init_app(app)
@app.route("/home" , methods =['GET'])
def home():
"""Return an HTML-formatted string and an optional response status code"""
return """
<!DOCTYPE html>
<html>
<head><title>My First Flask Application</title></head>
<body><h1>hello world</h1></body>
</html>
""", 200
@app.route("/", methods =['GET','POST'])
def index():
if request.method == 'GET':
return render_template('login.html',title='login'), 200
else:
username = request.form['username']
password = request.form['password']
if models.users.check_user(username,password):
##session manager
session['username'] = username
##userID=models.users.get_userid(username)
session['user_id'] =models.users.get_userid(username)
##session manager
return redirect(url_for('dashboard'))
#return render_template('dashboard.html', message = username, title='dashaboard',tasks= Test), 200
else:
error_message = "Incorrect username or password"
return render_template('login.html', message = error_message,title='login'), 200
@app.route("/dashboard", methods =['GET'])
def dashboard():
##check if session is set else logout - start
if 'username' in session:
userlistreassign = models.users.UserReassignList(session['user_id'])
#currenttask = models.tasks.tasks_diplay()
currenttask = models.tasks.Mytasks_diplay(session['user_id'])
return render_template('dashboard.html',title="Task List - Dashboard",tasks= currenttask,reassignUsers=userlistreassign), 200
else:
return redirect(url_for('logout'))
##check if session is set else logout - end
@app.route("/add_task", methods =['GET'])
def add_task():
##check if session is set else logout - start
if 'username' in session:
return render_template('add_task.html',title='Add Task'), 200
else:
return redirect(url_for('logout'))
##check if session is set else logout - end
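
# Refactor sketch (illustrative, not applied to the routes below): the repeated
# `if 'username' in session` guard can live in one decorator. `login_required`
# is a hypothetical name here, not a Flask built-in.
from functools import wraps

def login_required(view):
    @wraps(view)
    def wrapped(*args, **kwargs):
        if 'username' not in session:
            return redirect(url_for('logout'))
        return view(*args, **kwargs)
    return wrapped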
@app.route("/update_task", methods =['GET'])
def update_task():
task_id = request.args.get('task')
currenttask = models.tasks.task_diplayID(task_id)
##check if session is set else logout - start
if 'username' in session:
return render_template('update_task.html',title='Update Task page', html_task= currenttask), 200
else:
return redirect(url_for('logout'))
##check if session is set else logout - end
@app.route("/taskupdateaction", methods =['POST'])
def taskupdateaction():
Tname = request.form['tname']
Tdescription = request.form['tdescription']
## task_id = request.args.get('task')
models.tasks.updatetask(request.form['task_id'],Tname,Tdescription)
return redirect(url_for('dashboard'))
##return render_template('add_task.html',title=task), 200
@app.route("/taskdelete", methods =['GET'])
def taskdelete():
task_id = request.args.get('task')
models.tasks.delete_task(task_id)
return redirect(url_for('dashboard'))
##return render_template('add_task.html',title=task), 200
@app.route("/taskadd", methods =['POST'])
def taskadd():
Tname = request.form['tname']
Tdescription = request.form['tdescription']
ctime=datetime.datetime.now()
get_user_id=session['user_id']
models.tasks.create(Tname,Tdescription,ctime,get_user_id)
return redirect(url_for('dashboard'))
@app.route("/reassignTask", methods =['POST'])
def reassignTask():
##check if session is set else logout - start
if 'username' in session:
task_id = request.form['task_id']
user_id = request.form['user_id']
userlistreassign = models.users.reassignTask(user_id,task_id)
#currenttask = models.tasks.tasks_diplay()
        flash('Task successfully reassigned')
return redirect(url_for('dashboard'))
else:
return redirect(url_for('logout'))
##check if session is set else logout - end
@app.route("/logout", methods =['GET'])
def logout():
session.clear()
flash('logout successful')
return redirect(url_for('index'))
@app.route("/about", methods =['GET'])
def about():
return render_template('about.html',title='about',about_title ='About Company ABC',Team='Meet the team below'), 200
@app.route("/terms_conditions", methods =['GET'])
def terms_conditions():
    return render_template('terms_conditions.html', message="Terms and Conditions page", title='Terms and Conditions'), 200
@app.route("/privacy", methods =['GET'])
def privacy():
return render_template('Privacy.html' , message ="Privacy Statement page", title='privacy'), 200
@app.route("/registration", methods =['GET','POST'])
def registration():
if request.method == 'GET':
return render_template('registration.html', message ="Registration page" , title='registration',note = "add new user"), 200
else:
fullname = request.form['fullname']
username = request.form['username']
password = request.form['password']
ctime=datetime.datetime.now()
###register new user
notification=models.users.new_user(fullname,username,password,ctime)
return redirect(url_for('index'))
#admin = Admin(app)
# Add administrative views here
@app.route("/admin/login", methods =['GET','POST'])
def adminlogin():
if request.method == 'GET':
return render_template('admin/login.html' , message ="Admin -Login", title='Login Page | Materialize - Material Design'), 200
else:
username = request.form['username']
password = request.form['password']
if models.admin.users.check_user(username,password):
##session manager
##get user email
session['useradmin'] = username
##get user_id
session['user_id'] = models.admin.users.get_user_id(username)
##get user role
session['user_role'] = models.admin.users.get_user_role(username)
##session manager
return redirect(url_for('admin_dashboard'))
#return render_template('dashboard.html', message = username, title='dashaboard',tasks= Test, user_id =), 200
else:
error_message = "Incorrect username or password"
return render_template('admin/login.html' , message = error_message, title='Login Page | Materialize - Material Design'), 200
@app.route("/admin/", methods =['GET'])
def admin_dashboard():
if 'user_id' in session:
ctime=datetime.datetime.now()
Current_username = models.admin.users.get_user_name(session['user_id'])
TotalSignUps =models.users.TotalSignUps()
TodaysTasks = models.tasks.Today_tasks()
SigninUpsDisplay = models.users.SignUpsDisplayLimit5()
TotalSignUpsLast24hrs=models.users.TotalSignUps_last_24hr()
TotalSignUps_last_24hr_display = models.users.TotalSignUps_last_24hr_display()
TodaySignsups=models.users.TodaySignUps()
TasksLast24hrscount = len(models.tasks.tasks_last_24hr())
TasksLast24hrs = models.tasks.tasks_last_24hr()
TotalTasks = models.tasks.num_tasks()
TaskdisplayLimit5 = models.tasks.tasks_diplayLimit5()
TasksLast24hrsAll = models.tasks.tasks_last_24hr()
TasksLastWeekAll = models.tasks.tasks_last_week()
LastWeekSignupsAll = models.users.TotalSignUps_last_Week()
now = datetime.datetime.now()
date_now =now.strftime("%Y-%m-%d")
d = datetime.timedelta(days = 6)
LastWeekdate = now - d
data = [TotalTasks,TotalSignUps,TasksLast24hrscount,TotalSignUpsLast24hrs,TodaysTasks,LastWeekdate,LastWeekSignupsAll,LastWeekdate,TasksLastWeekAll,TasksLast24hrs,TotalSignUps_last_24hr_display,SigninUpsDisplay,TaskdisplayLimit5,Current_username]
return render_template('admin/dashboard.html',html_data = data,title='Admin Dashboard', date_now =date_now ), 200
else:
return redirect(url_for('adminlogin'))
@app.route("/admin/users", methods =['GET'])
def admin_user_signups():
if 'user_id' in session:
Current_username = models.admin.users.get_user_name(session['user_id'])
TotalSignUps =models.users.SignUpsDisplay()
TotalTasks = models.tasks.num_tasks()
TotalSignUpsnum=models.users.TotalSignUps()
data = [Current_username,TotalSignUps,TotalTasks,TotalSignUps,TotalSignUpsnum ]
return render_template('admin/user_signupsList.html',html_data = data,title='Admin User SignUps'), 200
else:
return redirect(url_for('adminlogin'))
@app.route("/admin/tasks", methods =['GET'])
def admin_user_tasks():
if 'user_id' in session:
Current_username = models.admin.users.get_user_name(session['user_id'])
TotalSignUps =len(models.users.SignUpsDisplay())
TotalSignUpsnum = models.users.TotalSignUps()
TotalTasksnum = len(models.tasks.tasks_diplayWithUsers())
TotalTasks = models.tasks.tasks_diplayWithUsers()
data = [Current_username,TotalSignUps,TotalTasksnum,TotalTasks ]
return render_template('admin/user_tasksList.html',html_data = data,title='Admin User Task Lists'), 200
else:
return redirect(url_for('adminlogin'))
@app.route("/admin/contacts", methods =['GET'])
def admin_contacts():
if 'user_id' in session:
Current_username = models.admin.users.get_user_name(session['user_id'])
TotalSignUps =len(models.users.SignUpsDisplay())
TotalSignUpsnum = models.users.TotalSignUps()
TotalTasksnum = len(models.tasks.tasks_diplayWithUsers())
TotalTasks = models.tasks.tasks_diplayWithUsers()
data = [Current_username,TotalSignUps,TotalTasksnum,TotalTasks ]
return render_template('admin/contacts.html',html_data = data,title='Admin User Task Lists'), 200
else:
return redirect(url_for('adminlogin'))
@app.route("/admin/Viewtask", methods =['GET'])
def admin_viewtask():
if 'user_id' in session:
task_id = request.args.get('task')
Get_task_details = models.tasks.task_diplayID(task_id)
Current_username = models.admin.users.get_user_name(session['user_id'])
TotalSignUps =len(models.users.SignUpsDisplay())
TotalSignUpsnum = models.users.TotalSignUps()
TotalTasksnum = len(models.tasks.tasks_diplayWithUsers())
TotalTasks = models.tasks.tasks_diplayWithUsers()
data = [Current_username,TotalSignUps,TotalTasksnum,TotalTasks,Get_task_details,task_id ]
return render_template('admin/viewtask.html',html_data = data,title='Admin User View Task'), 200
else:
return redirect(url_for('adminlogin'))
@app.route("/admin/ViewUser", methods =['GET'])
def admin_viewUser():
if 'user_id' in session:
user_id = request.args.get('user')
Current_username = models.admin.users.get_user_name(session['user_id'])
TotalSignUps =len(models.users.SignUpsDisplay())
TotalTasksnum = len(models.tasks.tasks_diplayWithUsers())
ViewUserDetails = models.users.user_details_diplayID(user_id)
data = [Current_username,TotalSignUps,TotalTasksnum,ViewUserDetails]
return render_template('admin/viewuser.html',html_data = data,title='Admin View User details'), 200
else:
return redirect(url_for('adminlogin'))
@app.route("/admin/deleteUserWithTasks", methods =['GET'])
def admin_delete_user_with_tasks():
if 'user_id' in session:
user_id = request.args.get('id')
#run deletion
models.users.delete_user_with_tasks(user_id)
return redirect(url_for('admin_user_signups'))
else:
return redirect(url_for('adminlogin'))
@app.route("/admin/taskdelete", methods =['GET'])
def taskdelete_admin():
task_id = request.args.get('task')
models.tasks.delete_task(task_id)
return redirect(url_for('admin_user_tasks'))
##return render_template('add_task.html',title=task), 200
@app.route("/admin/logout", methods =['GET'])
def logoutadmin():
session.clear()
return redirect(url_for('adminlogin'))
if __name__ == "__main__":
app.run(host='0.0.0.0', port=8000, debug=True)
| 27.613497 | 259 | 0.636821 | 1,477 | 13,503 | 5.66283 | 0.134733 | 0.021521 | 0.038498 | 0.052606 | 0.576638 | 0.502272 | 0.456958 | 0.382951 | 0.356528 | 0.350909 | 0 | 0.01239 | 0.240909 | 13,503 | 488 | 260 | 27.670082 | 0.80361 | 0.088795 | 0 | 0.425439 | 0 | 0 | 0.16393 | 0.013441 | 0 | 0 | 0 | 0 | 0 | 1 | 0.105263 | false | 0.035088 | 0.026316 | 0.013158 | 0.307018 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
495b261eceb8394f0901a556e7bb015330a24b60 | 2,698 | py | Python | install.py | maserlib/ExPRES | cfb976e6e98fe79fdb6135081c0d52c86d672c5a | [
"MIT"
] | 2 | 2019-05-23T17:09:21.000Z | 2020-07-19T10:51:31.000Z | install.py | maserlib/ExPRES | cfb976e6e98fe79fdb6135081c0d52c86d672c5a | [
"MIT"
] | 30 | 2019-03-28T09:44:43.000Z | 2021-11-29T16:16:47.000Z | install.py | maserlib/ExPRES | cfb976e6e98fe79fdb6135081c0d52c86d672c5a | [
"MIT"
] | 2 | 2021-08-09T10:13:41.000Z | 2021-12-02T16:58:12.000Z | from urllib.request import urlopen, urlretrieve
from lxml import etree
from pathlib import Path
import os
_CUR_DIR = Path(__file__).parent
_URL_MFL_ROOT = 'http://maser.obspm.fr/support/serpe/mfl'
_MFL_NAMES = ['ISaAC', 'JRM09', 'O6', 'VIT4', 'VIP4', 'SPV', 'Z3', 'VIPAL']
_MFL_ROOT_DIR = _CUR_DIR / 'mfl'
def html_ls(url):
url_fixed = url.strip('/')
with urlopen(url_fixed) as ufile:
root = etree.parse(ufile, etree.HTMLParser())
list_ls = []
for tr in root.getroot().xpath('body/table/tr'):
for td in tr.xpath('td'):
for item in td.iter():
if item.tag == 'a':
if item.text != 'Parent Directory':
list_ls.append('{}/{}'.format(url_fixed, item.text))
return list_ls
def html_cp(url, file):
urlretrieve(url, file)
def download_mfl(model_name=None):
model_name_list = _MFL_NAMES
if model_name is not None:
if model_name not in _MFL_NAMES:
raise ValueError('"{}": this model name is not supported'.format(model_name))
else:
model_name_list = [model_name]
mfl_list = html_ls(_URL_MFL_ROOT)
if not _MFL_ROOT_DIR.is_dir():
cur_command = 'mkdir {}'.format(str(_MFL_ROOT_DIR))
print(cur_command)
os.system(cur_command)
for mfl_item_url in mfl_list:
dir_name = mfl_item_url.strip('/').split('/')[-1]
if '_' in dir_name:
if dir_name.split('_')[0] in model_name_list:
cur_dir = _MFL_ROOT_DIR / dir_name
if not cur_dir.is_dir():
cur_command = 'mkdir {}'.format(str(cur_dir))
print(cur_command)
os.system(cur_command)
cur_mfl_subdir_list = html_ls(mfl_item_url)
for mfl_subdir_item_url in cur_mfl_subdir_list:
subdir_name = mfl_subdir_item_url.strip('/').split('/')[-1]
cur_subdir = cur_dir / subdir_name
cur_command = 'mkdir {}'.format(str(cur_subdir))
print(cur_command)
os.system(cur_command)
cur_mfl_file_list = html_ls(mfl_subdir_item_url)
for mfl_file_list_item_url in cur_mfl_file_list:
file_name = mfl_file_list_item_url.split('/')[-1]
cur_file = cur_subdir / file_name
print('Downloading: {}'.format(mfl_file_list_item_url))
print(' into: {}'.format(str(cur_file)))
html_cp(mfl_file_list_item_url, str(cur_file))
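
# Illustrative usage sketch (not part of the original script): mirror a single
# model's magnetic-field-line files, or call download_mfl() for all of them.
if __name__ == '__main__':
    download_mfl('JRM09')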
| 34.151899 | 89 | 0.561898 | 350 | 2,698 | 3.954286 | 0.242857 | 0.050578 | 0.047688 | 0.043353 | 0.24711 | 0.152457 | 0.13078 | 0.13078 | 0.056358 | 0 | 0 | 0.005507 | 0.326909 | 2,698 | 78 | 90 | 34.589744 | 0.756608 | 0 | 0 | 0.105263 | 0 | 0 | 0.077465 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0.070175 | 0 | 0.140351 | 0.087719 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
495c21680536c79bd8813d0943fbabacbd4b7475 | 8,826 | py | Python | stability/projects/project_management.py | zachbateman/stability | 8610ed98a70d0e4136bc30000e9d5186dba33ab6 | [
"MIT"
] | null | null | null | stability/projects/project_management.py | zachbateman/stability | 8610ed98a70d0e4136bc30000e9d5186dba33ab6 | [
"MIT"
] | null | null | null | stability/projects/project_management.py | zachbateman/stability | 8610ed98a70d0e4136bc30000e9d5186dba33ab6 | [
"MIT"
] | null | null | null | '''
Python module containing Project management code.
'''
import os
import datetime
import shutil
import json
import copy
from stability.tools import FileData
from pprint import pprint as pp
DATE_FORMAT = '%b %d %Y %H:%M:%S' # for use with strftime and strptime
class ProjectGroup():
'''
Class for handling all of the user's Projects
'''
def __init__(self, **kwargs) -> None:
self.projects: dict = {}
self.archived_projects: dict = {}
for key in kwargs: # will only be triggered if using self.fromdict()
setattr(self, key, kwargs[key])
def load_existing_projects(self) -> dict:
try:
with open(self._saved_group_filepath()) as json_file:
d = json.load(json_file)
for key in d:
setattr(self, key, d[key])
except FileNotFoundError: # if user had not previously saved info
print('No existing projects found.')
def save_projects(self, starting_path: str='C:/'):
with open(self._saved_group_filepath(), 'w') as json_file:
json.dump(self.asdict(), json_file)
print('Projects saved.')
def _saved_group_filepath(self, starting_path: str='C:/') -> str:
return os.path.join(starting_path, 'stability', 'project_group.json')
def add_new_project(self, project_name: str='', initial_folder: str='', proj_obj=None) -> None:
if proj_obj:
self.projects[proj_obj.name] = proj_obj
print(f'{proj_obj} added to Project Group!')
else:
self.projects[project_name] = Project(project_name=project_name, initial_folder=initial_folder)
print(f'{self.projects[project_name]} created!')
self.save_projects()
def archive_project(self, project_name: str) -> None:
pass
def delete_archived_project(self, project_name: str) -> None:
pass
def __repr__(self) -> str:
return 'Project Group: ' + '\n - '.join(self.projects.keys()) + '\n'
def __eq__(self, other) -> bool:
if (self.projects == other.projects
and self.archived_projects == other.archived_projects):
return True
return False
def asdict(self) -> dict:
'''Convert instance into representative dict'''
d = {}
d['projects'] = {name: proj.asdict() for name, proj in self.projects.items()}
d['archived_projects'] = {name: proj.asdict() for name, proj in self.archived_projects.items()}
return d
@classmethod
def fromdict(cls, d):
'''Create class instance from dict'''
kwargs = copy.deepcopy(d)
kwargs['projects'] = {name: Project.fromdict(d) for name, d in kwargs['projects'].items()}
kwargs['archived_projects'] = {name: Project.fromdict(d) for name, d in kwargs['archived_projects'].items()}
return cls(**kwargs)
class Project():
def __init__(self, project_name: str='', initial_folder: str='', **kwargs) -> None:
self.name = project_name
self.initial_folder = initial_folder
# Now convert datetime.now() to a _ROUNDED_ time via DATE_FORMAT
# Provides same date after using .asdict() and .fromdict() conversion.
self.project_creation_date = datetime.datetime.strptime(datetime.datetime.now().strftime(DATE_FORMAT), DATE_FORMAT)
self.files: dict = {} # dict of File objects (which each contain list of FileData objects)
self.create_project_archive()
for key in kwargs:
setattr(self, key, kwargs[key])
def create_project_archive(self, starting_path: str='C:/'):
self.archive_path = os.path.join(starting_path, 'stability', 'project_archives', self.name.lower())
try:
os.makedirs(self.archive_path)
except OSError: # if folder already exists
pass
def add_file(self, file_path: str='', file_name: str='', file_obj=None):
if file_obj:
self.files[file_obj.file_name] = file_obj
self.copy_file_to_project_archive(file_obj.file_name)
else:
self.files[file_name] = File(file_path, file_name)
self.copy_file_to_project_archive(file_name)
def copy_file_to_project_archive(self, file_name: str, file_version: str='latest'):
'''
file_version arg determines which file gets copied (if multiple files are tracked for a the File)
- 'latest' uses the most recent version
- a specific file path string uses that exact file path
'''
if file_version == 'latest':
file = self.files[file_name].latest_file()
else:
file = file_version
shutil.copy(file, self.archive_path)
def __repr__(self) -> str:
return f'Project: {self.name}'
def __eq__(self, other) -> bool:
if (self.name == other.name
and self.initial_folder == other.initial_folder
and self.project_creation_date == other.project_creation_date):
return True
return False
def asdict(self) -> dict:
'''Convert instance into representative dict'''
d = {}
d['name'] = self.name
d['initial_folder'] = self.initial_folder
d['project_creation_date'] = self.project_creation_date.strftime(DATE_FORMAT)
d['files'] = {file_name: file.asdict() for file_name, file in self.files.items()}
return d
@classmethod
def fromdict(cls, d):
'''Create class instance from dict'''
kwargs = copy.deepcopy(d)
kwargs['project_creation_date'] = datetime.datetime.strptime(d['project_creation_date'], DATE_FORMAT)
kwargs['files'] = {file_name: File.fromdict(file_dict) for file_name, file_dict in d['files'].items()}
return cls(**kwargs)
class File():
def __init__(self, initial_filepath: str='', file_name: str='', **kwargs) -> None:
'''
Create File object using specified initial_filepath and file_name.
Alternatively, (if using File.fromdict()) create instance from representative dict.
'''
self.initial_filepath = initial_filepath
self.file_name = file_name # not necessarily the _actual_ name of the file
# Now convert datetime.now() to a _ROUNDED_ time via DATE_FORMAT
# Provides same date after using .asdict() and .fromdict() conversion.
self.initial_tracking_date = datetime.datetime.strptime(datetime.datetime.now().strftime(DATE_FORMAT), DATE_FORMAT)
self.filedatas: list = [FileData(initial_filepath)] # FileData objects
self.file_add_times: list = [datetime.datetime.strptime(datetime.datetime.now().strftime(DATE_FORMAT), DATE_FORMAT)]
self.extension = self.filedatas[0].extension
for key in kwargs:
setattr(self, key, kwargs[key])
@property
def num_versions(self):
return len(self.filedatas)
def latest_file(self) -> str:
return self.filedatas[-1].filepath
def add_updated_fileversion(self, filepath):
if os.path.splitext(filepath)[-1] == self.extension:
            self.filedatas.append(FileData(filepath))
self.file_add_times.append(datetime.datetime.strptime(datetime.datetime.now().strftime(DATE_FORMAT), DATE_FORMAT))
else:
            print('Error - Updated file version has different extension!')
            print(f'Expected: {self.extension}  Received: ...{filepath[-15:]}')
print('File version update not saved.\n')
def __repr__(self) -> str:
return f'File: {self.file_name}'
def __eq__(self, other) -> bool:
if (self.initial_filepath == other.initial_filepath
and self.file_name == other.file_name
and self.initial_tracking_date == other.initial_tracking_date):
return True
return False
def asdict(self) -> dict:
'''Convert instance into representative dict'''
d = {}
d['initial_filepath'] = self.initial_filepath
d['file_name'] = self.file_name
d['initial_tracking_date'] = self.initial_tracking_date.strftime(DATE_FORMAT)
d['filedatas'] = [fd.filepath for fd in self.filedatas]
d['file_add_times'] = [ftime.strftime(DATE_FORMAT) for ftime in self.file_add_times]
d['extension'] = self.extension
return d
@classmethod
def fromdict(cls, d):
'''Create class instance from dict'''
kwargs = copy.deepcopy(d)
kwargs['initial_tracking_date'] = datetime.datetime.strptime(d['initial_tracking_date'], DATE_FORMAT)
kwargs['filedatas'] = [FileData(fp) for fp in d['filedatas']]
kwargs['file_add_times'] = [datetime.datetime.strptime(ftime, DATE_FORMAT) for ftime in d['file_add_times']]
return cls(**kwargs)
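
# Illustrative round-trip sketch (not part of this module): asdict()/fromdict()
# are paired so that JSON serialization preserves equality, as __eq__ checks.
# The path below is hypothetical and must exist for FileData to read it.
#
#   f = File('C:/temp/report.txt', 'report')
#   restored = File.fromdict(json.loads(json.dumps(f.asdict())))
#   assert restored == f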
| 38.881057 | 126 | 0.641854 | 1,104 | 8,826 | 4.925725 | 0.168478 | 0.030894 | 0.024458 | 0.01324 | 0.384884 | 0.339647 | 0.294226 | 0.251563 | 0.228025 | 0.200074 | 0 | 0.000749 | 0.243145 | 8,826 | 226 | 127 | 39.053097 | 0.813323 | 0.136755 | 0 | 0.309677 | 0 | 0 | 0.100053 | 0.020705 | 0 | 0 | 0 | 0 | 0 | 1 | 0.174194 | false | 0.019355 | 0.045161 | 0.03871 | 0.354839 | 0.051613 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
495d934dc7d8a89668ed8aaf50bc6d42c9f1db23 | 25,754 | py | Python | electrum/plugins/jade/jadepy/jade.py | MatthewWesley/electrum | f923d2ecdb145c5c84767a30f1aadf8a035a1dc6 | [
"MIT"
] | null | null | null | electrum/plugins/jade/jadepy/jade.py | MatthewWesley/electrum | f923d2ecdb145c5c84767a30f1aadf8a035a1dc6 | [
"MIT"
] | null | null | null | electrum/plugins/jade/jadepy/jade.py | MatthewWesley/electrum | f923d2ecdb145c5c84767a30f1aadf8a035a1dc6 | [
"MIT"
] | null | null | null | import cbor
import hashlib
import json
import time
import logging
import collections
import collections.abc
import traceback
import random
import sys
# JadeError
from .jade_error import JadeError
# Low-level comms backends
from .jade_serial import JadeSerialImpl
from .jade_tcp import JadeTCPImpl
# Not used in electrum wallet
# Removed to reduce transitive dependencies
# from .jade_ble import JadeBleImpl
# Default serial connection
DEFAULT_SERIAL_DEVICE = '/dev/ttyUSB0'
DEFAULT_BAUD_RATE = 115200
DEFAULT_SERIAL_TIMEOUT = 120
# Default BLE connection
DEFAULT_BLE_DEVICE_NAME = 'Jade'
DEFAULT_BLE_SERIAL_NUMBER = None
DEFAULT_BLE_SCAN_TIMEOUT = 60
# 'jade' logger
logger = logging.getLogger('jade')
device_logger = logging.getLogger('jade-device')
# Helper to map bytes-like types into hex-strings
# to make for prettier message-logging
def _hexlify(data):
if data is None:
return None
elif isinstance(data, bytes) or isinstance(data, bytearray):
return data.hex()
elif isinstance(data, list):
return [_hexlify(item) for item in data]
elif isinstance(data, dict):
return {k: _hexlify(v) for k, v in data.items()}
else:
return data
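
# For example (illustrative):
#   _hexlify({'sig': b'\x01\x02', 'paths': [b'\xff']})
#   -> {'sig': '0102', 'paths': ['ff']}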
# Simple http request function which can be used when a Jade response
# requires an external http call.
# The default implementation used in JadeAPI._jadeRpc() below.
# NOTE: Only available if the 'requests' dependency is available.
# NOTE: Removed entirely for electrum - so it is not used silently as a fallback.
# (hard error preferred in that case)
# Jade repo api will be improved to make enabling this function more explicit
# try:
# import requests
#
# def _http_request(params):
# logger.debug('_http_request: {}'.format(params))
#
# # Use the first non-onion url
# url = [url for url in params['urls'] if not url.endswith('.onion')][0]
# if params['method'] == 'GET':
# assert 'data' not in params, 'Cannot pass body to requests.get'
# f = requests.get(url)
# elif params['method'] == 'POST':
# data = json.dumps(params['data'])
# f = requests.post(url, data)
#
# logger.debug("http_request received reply: {}".format(f.text))
#
# if f.status_code != 200:
# logger.error("http error {} : {}".format(f.status_code, f.text))
# raise ValueError(f.status_code)
#
# assert params['accept'] == 'json'
# f = f.json()
#
# return {'body': f}
#
# except ImportError as e:
# logger.warn(e)
# logger.warn('Default _http_requests() function will not be available')
#
#
# High-Level Jade Client API
# Builds on a JadeInterface to provide a meaningful API
#
# Either:
# a) use with JadeAPI.create_[serial|ble]() as jade:
# (recommended)
# or:
# b) use JadeAPI.create_[serial|ble], then call connect() before
# using, and disconnect() when finished
# (caveat cranium)
# or:
# c) use ctor to wrap existing JadeInterface instance
# (caveat cranium)
#
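# For example, option (a) above (the device path shown is illustrative):
#
#   with JadeAPI.create_serial('/dev/ttyUSB0') as jade:
#       info = jade.get_version_info()
#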
class JadeAPI:
def __init__(self, jade):
assert jade is not None
self.jade = jade
def __enter__(self):
self.connect()
return self
def __exit__(self, exc_type, exc, tb):
if (exc_type):
logger.error("Exception causing JadeAPI context exit.")
logger.error(exc_type)
logger.error(exc)
traceback.print_tb(tb)
self.disconnect(exc_type is not None)
@staticmethod
def create_serial(device=None, baud=None, timeout=None):
impl = JadeInterface.create_serial(device, baud, timeout)
return JadeAPI(impl)
# @staticmethod
# def create_ble(device_name=None, serial_number=None,
# scan_timeout=None, loop=None):
# impl = JadeInterface.create_ble(device_name, serial_number,
# scan_timeout, loop)
# return JadeAPI(impl)
# Connect underlying interface
def connect(self):
self.jade.connect()
# Disconnect underlying interface
def disconnect(self, drain=False):
self.jade.disconnect(drain)
# Drain all output from the interface
def drain(self):
self.jade.drain()
# Raise any returned error as an exception
@staticmethod
def _get_result_or_raise_error(reply):
if 'error' in reply:
e = reply['error']
raise JadeError(e.get('code'), e.get('message'), e.get('data'))
return reply['result']
# Helper to call wrapper interface rpc invoker
def _jadeRpc(self, method, params=None, inputid=None, http_request_fn=None, long_timeout=False):
newid = inputid if inputid else str(random.randint(100000, 999999))
request = self.jade.build_request(newid, method, params)
reply = self.jade.make_rpc_call(request, long_timeout)
result = self._get_result_or_raise_error(reply)
# The Jade can respond with a request for interaction with a remote
# http server. This is used for interaction with the pinserver but the
# code below acts as a dumb proxy and simply makes the http request and
# forwards the response back to the Jade.
# Note: the function called to make the http-request can be passed in,
# or it can default to the simple _http_request() function above, if available.
if isinstance(result, collections.abc.Mapping) and 'http_request' in result:
this_module = sys.modules[__name__]
make_http_request = http_request_fn or getattr(this_module, '_http_request', None)
assert make_http_request, 'Default _http_request() function not available'
http_request = result['http_request']
http_response = make_http_request(http_request['params'])
return self._jadeRpc(
http_request['on-reply'],
http_response['body'],
http_request_fn=make_http_request,
long_timeout=long_timeout)
return result
# Get version information from the hw
def get_version_info(self):
return self._jadeRpc('get_version_info')
# Add client entropy to the hw rng
def add_entropy(self, entropy):
params = {'entropy': entropy}
return self._jadeRpc('add_entropy', params)
# OTA new firmware
def ota_update(self, fwcmp, fwlen, chunksize, cb):
cmphasher = hashlib.sha256()
cmphasher.update(fwcmp)
cmphash = cmphasher.digest()
cmplen = len(fwcmp)
# Initiate OTA
params = {'fwsize': fwlen,
'cmpsize': cmplen,
'cmphash': cmphash}
result = self._jadeRpc('ota', params)
assert result is True
# Write binary chunks
written = 0
while written < cmplen:
remaining = cmplen - written
length = min(remaining, chunksize)
chunk = bytes(fwcmp[written:written + length])
result = self._jadeRpc('ota_data', chunk)
assert result is True
written += length
if (cb):
cb(written, cmplen)
# All binary data uploaded
return self._jadeRpc('ota_complete')
# Run (debug) healthcheck on the hw
def run_remote_selfcheck(self):
return self._jadeRpc('debug_selfcheck', long_timeout=True)
# Set the (debug) mnemonic
def set_mnemonic(self, mnemonic, passphrase=None, temporary_wallet=False):
params = {'mnemonic': mnemonic, 'passphrase': passphrase,
'temporary_wallet': temporary_wallet}
return self._jadeRpc('debug_set_mnemonic', params)
# Set the (debug) seed
def set_seed(self, seed, temporary_wallet=False):
params = {'seed': seed, 'temporary_wallet': temporary_wallet}
return self._jadeRpc('debug_set_mnemonic', params)
# Override the pinserver details on the hww
def set_pinserver(self, urlA=None, urlB=None, pubkey=None, cert=None):
params = {}
if urlA is not None or urlB is not None:
params['urlA'] = urlA
params['urlB'] = urlB
if pubkey is not None:
params['pubkey'] = pubkey
if cert is not None:
params['certificate'] = cert
return self._jadeRpc('update_pinserver', params)
# Reset the pinserver details on the hww to their defaults
def reset_pinserver(self, reset_details, reset_certificate):
params = {'reset_details': reset_details,
'reset_certificate': reset_certificate}
return self._jadeRpc('update_pinserver', params)
# Trigger user authentication on the hw
# Involves pinserver handshake
def auth_user(self, network, http_request_fn=None):
params = {'network': network}
return self._jadeRpc('auth_user', params,
http_request_fn=http_request_fn,
long_timeout=True)
# Get xpub given a path
def get_xpub(self, network, path):
params = {'network': network, 'path': path}
return self._jadeRpc('get_xpub', params)
# Get registered multisig wallets
def get_registered_multisigs(self):
return self._jadeRpc('get_registered_multisigs')
# Register a multisig wallet
def register_multisig(self, network, multisig_name, variant, sorted_keys, threshold, signers):
params = {'network': network, 'multisig_name': multisig_name,
'descriptor': {'variant': variant, 'sorted': sorted_keys,
'threshold': threshold, 'signers': signers}}
return self._jadeRpc('register_multisig', params)
# Get receive-address for parameters
def get_receive_address(self, *args, recovery_xpub=None, csv_blocks=0,
variant=None, multisig_name=None):
if multisig_name is not None:
assert len(args) == 2
keys = ['network', 'paths', 'multisig_name']
args += (multisig_name,)
elif variant is not None:
assert len(args) == 2
keys = ['network', 'path', 'variant']
args += (variant,)
else:
assert len(args) == 4
keys = ['network', 'subaccount', 'branch', 'pointer', 'recovery_xpub', 'csv_blocks']
args += (recovery_xpub, csv_blocks)
return self._jadeRpc('get_receive_address', dict(zip(keys, args)))
# Sign a message
def sign_message(self, path, message, use_ae_signatures=False,
ae_host_commitment=None, ae_host_entropy=None):
if use_ae_signatures:
# Anti-exfil protocol:
# We send the signing request and receive the signer-commitment in
# reply once the user confirms.
# We can then request the actual signature passing the ae-entropy.
params = {'path': path, 'message': message, 'ae_host_commitment': ae_host_commitment}
signer_commitment = self._jadeRpc('sign_message', params)
params = {'ae_host_entropy': ae_host_entropy}
signature = self._jadeRpc('get_signature', params)
return signer_commitment, signature
else:
# Standard EC signature, simple case
params = {'path': path, 'message': message}
return self._jadeRpc('sign_message', params)
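    # Anti-exfil usage sketch (the commitment and entropy values are
    # placeholders that would come from the host's anti-exfil implementation):
    #
    #   commitment, sig = self.sign_message(path, msg, use_ae_signatures=True,
    #                                       ae_host_commitment=host_commitment,
    #                                       ae_host_entropy=host_entropy)
    #   # the host can then verify that `sig` embeds `host_entropy`
    #   # against the returned signer `commitment`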
# Get a Liquid master blinding key
def get_master_blinding_key(self):
return self._jadeRpc('get_master_blinding_key')
# Get a Liquid public blinding key for a given script
def get_blinding_key(self, script):
params = {'script': script}
return self._jadeRpc('get_blinding_key', params)
# Get the shared secret to unblind a tx, given the receiving script on
# our side and the pubkey of the sender (sometimes called "nonce" in
# Liquid). Optionally fetch our blinding pubkey also.
def get_shared_nonce(self, script, their_pubkey, include_pubkey=False):
params = {'script': script, 'their_pubkey': their_pubkey, 'include_pubkey': include_pubkey}
return self._jadeRpc('get_shared_nonce', params)
# Get a "trusted" blinding factor to blind an output. Normally the blinding
# factors are generated and returned in the `get_commitments` call, but
# for the last output the VBF must be generated on the host side, so this
# call allows the host to get a valid ABF to compute the generator and
# then the "final" VBF. Nonetheless, this call is kept generic, and can
# also generate VBFs, thus the "type" parameter.
    # `hash_prevouts` is computed as specified in BIP143 (double SHA of all
    # the outpoints being spent as inputs). It's not checked right away since
# at this point Jade doesn't know anything about the tx we are referring
# to. It will be checked later during `sign_liquid_tx`.
# `output_index` is the output we are trying to blind.
# `type` can either be "ASSET" or "VALUE" to generate ABFs or VBFs.
def get_blinding_factor(self, hash_prevouts, output_index, type):
params = {'hash_prevouts': hash_prevouts,
'output_index': output_index,
'type': type}
return self._jadeRpc('get_blinding_factor', params)
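    # e.g. fetching an asset blinding factor for the first output
    # (values are placeholders):
    #   abf = self.get_blinding_factor(hash_prevouts, 0, 'ASSET')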
# Generate the blinding factors and commitments for a given output.
    # Can optionally take a "custom" VBF, normally used for the last
    # output, where the VBF is not random but generated according to
    # all the others.
# `hash_prevouts` and `output_index` have the same meaning as in
# the `get_blinding_factor` call.
# NOTE: the `asset_id` should be passed as it is normally displayed, so
# reversed compared to the "consensus" representation.
def get_commitments(self,
asset_id,
value,
hash_prevouts,
output_index,
vbf=None):
params = {'asset_id': asset_id,
'value': value,
'hash_prevouts': hash_prevouts,
'output_index': output_index}
if vbf is not None:
params['vbf'] = vbf
return self._jadeRpc('get_commitments', params)
# Common code for sending btc- and liquid- tx-inputs and receiving the
# signatures. Handles standard EC and AE signing schemes.
def _send_tx_inputs(self, base_id, inputs, use_ae_signatures):
if use_ae_signatures:
# Anti-exfil protocol:
# We send one message per input (which includes host-commitment *but
# not* the host entropy) and receive the signer-commitment in reply.
# Once all n input messages are sent, we can request the actual signatures
# (as the user has a chance to confirm/cancel at this point).
# We request the signatures passing the ae-entropy for each one.
# Send inputs one at a time, receiving 'signer-commitment' in reply
signer_commitments = []
host_ae_entropy_values = []
for txinput in inputs:
# ae-protocol - do not send the host entropy immediately
txinput = txinput.copy() # shallow copy
host_ae_entropy_values.append(txinput.pop('ae_host_entropy', None))
base_id += 1
input_id = str(base_id)
reply = self._jadeRpc('tx_input', txinput, input_id)
signer_commitments.append(reply)
# Request the signatures one at a time, sending the entropy
signatures = []
for (i, host_ae_entropy) in enumerate(host_ae_entropy_values, 1):
base_id += 1
sig_id = str(base_id)
params = {'ae_host_entropy': host_ae_entropy}
reply = self._jadeRpc('get_signature', params, sig_id)
signatures.append(reply)
assert len(signatures) == len(inputs)
return list(zip(signer_commitments, signatures))
else:
# Legacy protocol:
# We send one message per input - without expecting replies.
# Once all n input messages are sent, the hw then sends all n replies
# (as the user has a chance to confirm/cancel at this point).
# Then receive all n replies for the n signatures.
# NOTE: *NOT* a sequence of n blocking rpc calls.
# NOTE: at some point this flow should be removed in favour of the one
# above, albeit without passing anti-exfil entropy or commitment data.
# Send all n inputs
requests = []
for txinput in inputs:
base_id += 1
msg_id = str(base_id)
request = self.jade.build_request(msg_id, 'tx_input', txinput)
self.jade.write_request(request)
requests.append(request)
time.sleep(0.1)
# Receive all n signatures
signatures = []
for request in requests:
reply = self.jade.read_response()
self.jade.validate_reply(request, reply)
signature = self._get_result_or_raise_error(reply)
signatures.append(signature)
assert len(signatures) == len(inputs)
return signatures
# Sign a Liquid txn
def sign_liquid_tx(self, network, txn, inputs, commitments, change, use_ae_signatures=False):
# 1st message contains txn and number of inputs we are going to send.
# Reply ok if that corresponds to the expected number of inputs (n).
base_id = 100 * random.randint(1000, 9999)
params = {'network': network,
'txn': txn,
'num_inputs': len(inputs),
'trusted_commitments': commitments,
'use_ae_signatures': use_ae_signatures,
'change': change}
reply = self._jadeRpc('sign_liquid_tx', params, str(base_id))
assert reply
# Send inputs and receive signatures
return self._send_tx_inputs(base_id, inputs, use_ae_signatures)
# Sign a txn
def sign_tx(self, network, txn, inputs, change, use_ae_signatures=False):
# 1st message contains txn and number of inputs we are going to send.
# Reply ok if that corresponds to the expected number of inputs (n).
base_id = 100 * random.randint(1000, 9999)
params = {'network': network,
'txn': txn,
'num_inputs': len(inputs),
'use_ae_signatures': use_ae_signatures,
'change': change}
reply = self._jadeRpc('sign_tx', params, str(base_id))
assert reply
# Send inputs and receive signatures
return self._send_tx_inputs(base_id, inputs, use_ae_signatures)
#
# Mid-level interface to Jade
# Wraps either a serial or a ble connection
# Calls to send and receive bytes and cbor messages over the interface.
#
# Either:
# a) use wrapped with JadeAPI
# (recommended)
# or:
# b) use with JadeInterface.create_[serial|ble]() as jade:
# ...
# or:
# c) use JadeInterface.create_[serial|ble], then call connect() before
# using, and disconnect() when finished
# (caveat cranium)
# or:
# d) use ctor to wrap existing low-level implementation instance
# (caveat cranium)
#
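# Example of option (b) above, as a sketch (the device path and the RPC method
# name are assumptions for illustration):
#
#   with JadeInterface.create_serial('/dev/ttyUSB0') as jade:
#       request = jade.build_request('1', 'get_version_info')
#       reply = jade.make_rpc_call(request)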
class JadeInterface:
def __init__(self, impl):
assert impl is not None
self.impl = impl
def __enter__(self):
self.connect()
return self
def __exit__(self, exc_type, exc, tb):
        if exc_type:
logger.error("Exception causing JadeInterface context exit.")
logger.error(exc_type)
logger.error(exc)
traceback.print_tb(tb)
self.disconnect(exc_type is not None)
@staticmethod
def create_serial(device=None, baud=None, timeout=None):
if device and JadeTCPImpl.isSupportedDevice(device):
impl = JadeTCPImpl(device)
else:
impl = JadeSerialImpl(device or DEFAULT_SERIAL_DEVICE,
baud or DEFAULT_BAUD_RATE,
timeout or DEFAULT_SERIAL_TIMEOUT)
return JadeInterface(impl)
# @staticmethod
# def create_ble(device_name=None, serial_number=None,
# scan_timeout=None, loop=None):
# impl = JadeBleImpl(device_name or DEFAULT_BLE_DEVICE_NAME,
# serial_number or DEFAULT_BLE_SERIAL_NUMBER,
# scan_timeout or DEFAULT_BLE_SCAN_TIMEOUT,
# loop=loop)
# return JadeInterface(impl)
def connect(self):
self.impl.connect()
def disconnect(self, drain=False):
if drain:
self.drain()
self.impl.disconnect()
def drain(self):
logger.warn("Draining interface...")
drained = bytearray()
finished = False
while not finished:
byte_ = self.impl.read(1)
drained.extend(byte_)
finished = byte_ == b''
if finished or byte_ == b'\n' or len(drained) > 256:
try:
device_logger.warn(drained.decode('utf-8'))
                except Exception:
# Dump the bytes raw and as hex if decoding as utf-8 failed
device_logger.warn("Raw:")
device_logger.warn(drained)
device_logger.warn("----")
device_logger.warn("Hex dump:")
device_logger.warn(drained.hex())
# Clear and loop to continue collecting
drained.clear()
@staticmethod
def build_request(input_id, method, params=None):
request = {"method": method, "id": input_id}
if params is not None:
request["params"] = params
return request
@staticmethod
def serialise_cbor_request(request):
dump = cbor.dumps(request)
len_dump = len(dump)
if 'method' in request and 'ota_data' in request['method']:
msg = 'Sending ota_data message {} as cbor of size {}'.format(request['id'], len_dump)
logger.info(msg)
else:
logger.info('Sending: {} as cbor of size {}'.format(_hexlify(request), len_dump))
return dump
def write(self, bytes_):
logger.debug("Sending: {} bytes".format(len(bytes_)))
wrote = self.impl.write(bytes_)
logger.debug("Sent: {} bytes".format(len(bytes_)))
return wrote
def write_request(self, request):
msg = self.serialise_cbor_request(request)
written = 0
while written < len(msg):
written += self.write(msg[written:])
def read(self, n):
logger.debug("Reading {} bytes...".format(n))
bytes_ = self.impl.read(n)
logger.debug("Received: {} bytes".format(len(bytes_)))
return bytes_
def read_cbor_message(self):
while True:
# 'self' is sufficiently 'file-like' to act as a load source.
# Throws EOFError on end of stream/timeout/lost-connection etc.
message = cbor.load(self)
if isinstance(message, collections.abc.Mapping):
# A message response (to a prior request)
if 'id' in message:
logger.info("Received msg: {}".format(_hexlify(message)))
return message
# A log message - handle as normal
if 'log' in message:
response = message['log']
log_method = device_logger.error
try:
response = message['log'].decode("utf-8")
log_methods = {
'E': device_logger.error,
'W': device_logger.warn,
'I': device_logger.info,
'D': device_logger.debug,
'V': device_logger.debug,
}
if len(response) > 1 and response[1] == ' ':
lvl = response[0]
log_method = log_methods.get(lvl, device_logger.error)
except Exception as e:
logger.error('Error processing log message: {}'.format(e))
log_method('>> {}'.format(response))
continue
# Unknown/unhandled/unexpected message
logger.error("Unhandled message received")
device_logger.error(message)
def read_response(self, long_timeout=False):
while True:
try:
return self.read_cbor_message()
except EOFError as e:
if not long_timeout:
raise
@staticmethod
def validate_reply(request, reply):
assert isinstance(reply, dict) and 'id' in reply
assert ('result' in reply) != ('error' in reply)
assert reply['id'] == request['id'] or \
reply['id'] == '00' and 'error' in reply
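    # For reference, a well-formed reply looks like one of (illustrative values):
    #   {'id': '42', 'result': True}
    #   {'id': '42', 'error': {'code': -32602, 'message': 'bad params'}}
    # and an error reply carrying id '00' is accepted for any request id.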
def make_rpc_call(self, request, long_timeout=False):
# Write outgoing request message
assert isinstance(request, dict)
assert 'id' in request and len(request['id']) > 0
assert 'method' in request and len(request['method']) > 0
assert len(request['id']) < 16 and len(request['method']) < 32
self.write_request(request)
# Read and validate incoming message
reply = self.read_response(long_timeout)
self.validate_reply(request, reply)
return reply
| 38.611694 | 100 | 0.609148 | 3,084 | 25,754 | 4.932231 | 0.177691 | 0.020249 | 0.022352 | 0.011834 | 0.220301 | 0.165735 | 0.149234 | 0.138584 | 0.127276 | 0.117547 | 0 | 0.004891 | 0.301312 | 25,754 | 666 | 101 | 38.66967 | 0.840447 | 0.305661 | 0 | 0.215223 | 0 | 0 | 0.091367 | 0.002657 | 0 | 0 | 0 | 0 | 0.049869 | 1 | 0.125984 | false | 0.005249 | 0.034121 | 0.010499 | 0.278215 | 0.005249 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
495e0594b4144524ac91ec955a17059dec99779b | 6,509 | py | Python | condor/probes/probe_libs/history_watcher.py | DHTC-Tools/logstash-confs | 1382f40aa9c5c837012312d9ab01a314062d7b8b | ["Apache-2.0"] | 8 | 2015-06-23T22:10:22.000Z | 2017-05-15T14:55:46.000Z | condor/probes/probe_libs/history_watcher.py | DHTC-Tools/logstash-confs | 1382f40aa9c5c837012312d9ab01a314062d7b8b | ["Apache-2.0"] | null | null | null | condor/probes/probe_libs/history_watcher.py | DHTC-Tools/logstash-confs | 1382f40aa9c5c837012312d9ab01a314062d7b8b | ["Apache-2.0"] | null | null | null |
# Copyright 2015 University of Chicago
# Available under Apache 2.0 License
__version__ = '0.7.1'
import os
import cStringIO
import re
import time
JOB_STATUS = {'0': 'Unexpanded',
'1': 'Idle',
'2': 'Running',
'3': 'Removed',
'4': 'Completed',
'5': 'Held',
'6': 'Submission Error'}
JOB_UNIVERSE = {'0': 'Min',
'1': 'Standard',
'2': 'Pipe',
'3': 'Linda',
'4': 'PVM',
'5': 'Vanilla',
'6': 'PVMD',
'7': 'Scheduler',
'8': 'MPI',
'9': 'Grid',
'10': 'Java',
'11': 'Parallel',
'12': 'Local',
'13': 'Max'}
class HistoryWatcher:
"""
Watches a history file and gets latest classad from it
"""
def __init__(self, filename=None):
"""
Initializer
:param filename: path to history file that should be watched
"""
self._filename = filename
self._buff = ""
self._current_inode = 0
self._filehandle = None
# used in parse_classad
self._completion_re = re.compile(r'\*\*\*\s+Offset\s+=\s+\d+.*CompletionDate\s+=\s+(\d+)')
@property
def filename(self):
"""
Return the filename that's being watched
:return: filename of watched file
"""
return self._filename
@filename.setter
def filename(self, filename=None):
"""
Set file to watch for the class
:param filename: path to history file that should be watched
"""
self._filename = filename
def next_classad(self):
"""
        Generator that yields the latest classads from the watched file
        :return: a dict of classad attributes (an empty dict when none is available)
"""
if not self._filename:
yield {}
if not os.path.isfile(self._filename):
yield {}
if self._filehandle is None:
try:
self._filehandle = open(self._filename)
except IOError:
yield {}
if self._current_inode == 0:
file_stat = os.stat(self._filename)
self._current_inode = file_stat.st_ino
while True:
where = self._filehandle.tell()
line = self._filehandle.readline()
if not line:
try:
new_stat = os.stat(self._filename)
except OSError:
# file may not be there due to rotation, wait a while and check again
time.sleep(0.5)
continue
if self._current_inode != new_stat.st_ino:
self._filehandle.close()
self._current_inode = new_stat.st_ino
self._filehandle = open(self._filename)
where = self._filehandle.tell()
self._filehandle.seek(where)
# give up control and then pause when starting again
yield {}
time.sleep(30)
else:
self._buff += line
classads, self._buff = self.__parse_classad(self._buff)
if classads:
for classad in classads:
yield classad
else:
yield {}
def __parse_classad(self, classad_string):
"""
Parse a string into a dict with HTCondor classads
:param classad_string: string with classads in it
:return: tuple (classads, string) with a list of classads and remaining
part of the buffer
"""
classad = {}
classads = []
remaining_buffer = ""
temp = cStringIO.StringIO(classad_string)
for line in temp:
if not line:
break
match = self._completion_re.match(line)
if match is not None:
classad['CompletionDate'] = int(match.group(1))
classads.append(classad)
classad = {}
remaining_buffer = ""
else:
fields = line.split('=')
if len(fields) != 2:
continue
key = fields[0].strip()
value = fields[1].strip()
if key in classad:
try:
value = int(value)
if value > classad[key]:
                            classad[key] = value
continue
else:
continue
except ValueError:
classad[key.strip()] = value.strip()
continue
if len(value) >= 2 and value[0] == '"' and value[-1] == '"':
value = value[1:-1]
if key == 'JobStatus':
value = JOB_STATUS[value]
if key == 'JobUniverse':
value = JOB_UNIVERSE[value]
classad[key] = value
remaining_buffer += line
return classads, remaining_buffer
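    # For reference, a history-file record that this parser accepts looks
    # roughly like the following (values are illustrative, not from real data):
    #
    #   JobStatus = 4
    #   JobUniverse = 5
    #   Owner = "alice"
    #   *** Offset = 12345 CompletionDate = 1435100000
    #
    # The '*** Offset ... CompletionDate' banner terminates one classad and
    # resets the buffer for the next record.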
def get_state(self):
"""
Create a record of current state that can be used at a later point
to restore file watcher to the same position if possible
:return: a tuple (filename, inode, position) with state information
"""
return self._filename, self._current_inode, self._filehandle.tell()
def restore_state(self, state_tuple):
"""
Restore state information from a state tuple so that file watching
can be continued, note, this may not succeed if the file is missing
or has been rotated
:param state_tuple: a tuple with filename, inode, and file_position
:return: True if state has been restored, False otherwise
"""
if not os.path.isfile(state_tuple[0]):
self._filename = state_tuple[0]
self._current_inode = 0
return False
self._filename = state_tuple[0]
try:
self._filehandle = open(self._filename)
except IOError:
return False
current_inode = os.stat(self._filename).st_ino
if current_inode != state_tuple[1]:
return False
self._current_inode = state_tuple[1]
self._filehandle.seek(state_tuple[2])
        return True
| 32.545 | 98 | 0.496697 | 668 | 6,509 | 4.691617 | 0.296407 | 0.061264 | 0.040842 | 0.016273 | 0.17358 | 0.100191 | 0.100191 | 0.100191 | 0.070836 | 0.044033 | 0 | 0.014644 | 0.412506 | 6,509 | 200 | 99 | 32.545 | 0.804916 | 0.195883 | 0 | 0.296296 | 0 | 0 | 0.049899 | 0.010707 | 0 | 0 | 0 | 0 | 0 | 1 | 0.051852 | false | 0 | 0.02963 | 0 | 0.140741 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0
4967087107c6a2050f0d841ca5536b87ddab04a6 | 3,919 | py | Python | tools/infer.py | Melika-Ayoughi/Full-Scale-Gambler-for-Object-Detection | 202f3b1e17be40ecc74e4841f9e887c3f093b9db | ["Apache-2.0"] | null | null | null | tools/infer.py | Melika-Ayoughi/Full-Scale-Gambler-for-Object-Detection | 202f3b1e17be40ecc74e4841f9e887c3f093b9db | ["Apache-2.0"] | null | null | null | tools/infer.py | Melika-Ayoughi/Full-Scale-Gambler-for-Object-Detection | 202f3b1e17be40ecc74e4841f9e887c3f093b9db | ["Apache-2.0"] | null | null | null |
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
import argparse
import multiprocessing as mp
from pathlib import Path
import json
import time
import tqdm
import torch
from detectron2.config import get_cfg
from detectron2.data import MetadataCatalog
from detectron2.data.detection_utils import read_image
from detectron2.engine.defaults import DefaultPredictor
from detectron2.utils.logger import setup_logger
from detectron2.evaluation.coco_evaluation import instances_to_coco_json
from detectron2.utils.visualizer import Visualizer
def setup_cfg(args):
# load config from file and command-line arguments
cfg = get_cfg()
cfg.merge_from_file(args.config_file)
cfg.merge_from_list(args.opts)
# Set score_threshold for builtin models
cfg.MODEL.RETINANET.SCORE_THRESH_TEST = args.confidence_threshold
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = args.confidence_threshold
cfg.MODEL.PANOPTIC_FPN.COMBINE.INSTANCES_CONFIDENCE_THRESH = args.confidence_threshold
cfg.freeze()
print(cfg)
return cfg
def get_parser():
    parser = argparse.ArgumentParser(description="Detectron2 inference")
parser.add_argument(
"--config-file",
metavar="FILE",
required=True,
help="path to config file",
)
parser.add_argument(
"--input_file",
required=True,
help="A file with a list of input images path")
parser.add_argument(
"--output",
required=True,
help="A file or directory to save output visualizations. "
)
parser.add_argument(
"--confidence_threshold",
type=float,
default=0.5,
help="Minimum score for instance predictions to be shown",
)
parser.add_argument(
"--plot_output",
action="store_true",
help="Whether or not to plot the predictions",
)
parser.add_argument(
"--opts",
help="Modify config options using the command-line 'KEY VALUE' pairs",
default=[],
nargs=argparse.REMAINDER,
)
return parser
def main(args):
logger = setup_logger()
logger.info("Arguments: " + str(args))
cfg = setup_cfg(args)
predictor = DefaultPredictor(cfg)
cpu_device = torch.device("cpu")
metadata = MetadataCatalog.get(
cfg.DATASETS.TEST[0] if len(cfg.DATASETS.TEST) else "__unused"
)
if args.input_file:
with Path(args.input_file).open() as file:
image_names = [str(Path(session) / "lri_1refl" / "image_COMBINED.png")
for session in map(str.strip, file) if session]
output_folder = Path(args.output)
output_folder.mkdir(exist_ok=True, parents=True)
for path in tqdm.tqdm(image_names, disable=not args.output):
img = read_image(path, format="BGR")
start_time = time.time()
predictions = predictor(img)
num_predictions = len(predictions["instances"])
time_spent = time.time() - start_time
logger.info(f"{path}: detected {num_predictions} instances in {time_spent:.2f}s")
instances = predictions["instances"].to(cpu_device)
out_i_folder = output_folder / Path(path).parents[1].name
out_i_folder.mkdir(exist_ok=True, parents=True)
output_json_file = out_i_folder / "result.json"
results = instances_to_coco_json(instances, -1)
with output_json_file.open("w") as f:
json.dump(results, f)
if args.plot_output:
out_filename = out_i_folder / "predicted.png"
visualizer = Visualizer(img, metadata)
vis_output = visualizer.draw_instance_predictions(predictions=instances)
vis_output.save(str(out_filename))
if __name__ == "__main__":
mp.set_start_method("spawn", force=True)
main(get_parser().parse_args())
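# Example invocation (config path and weights are placeholders, not from this repo):
#   python tools/infer.py --config-file configs/my_config.yaml \
#       --input_file sessions.txt --output out_dir --plot_output \
#       --opts MODEL.WEIGHTS model_final.pth
# where sessions.txt lists one session directory per line; the script reads
# <session>/lri_1refl/image_COMBINED.png for each entry and writes a
# result.json (and optionally predicted.png) per session under out_dir.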
| 33.495726 | 93 | 0.663435 | 482 | 3,919 | 5.195021 | 0.350622 | 0.039137 | 0.040735 | 0.03115 | 0.079872 | 0.063099 | 0.063099 | 0.036741 | 0 | 0 | 0 | 0.005044 | 0.241133 | 3,919 | 116 | 94 | 33.784483 | 0.83692 | 0.039806 | 0 | 0.09375 | 0 | 0 | 0.142857 | 0.005853 | 0 | 0 | 0 | 0 | 0 | 1 | 0.03125 | false | 0 | 0.145833 | 0 | 0.197917 | 0.010417 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
496743d68ae04b2ee36a6253c8a788223b80775f | 14,888 | py | Python | data-processing/common/string.py | hhchi13/scholarphi | 5683e68d5934a2f461aa674acbf4a5e9db3b5dbb | ["Apache-2.0"] | 285 | 2020-09-30T23:52:56.000Z | 2022-03-17T09:01:19.000Z | data-processing/common/string.py | hhchi13/scholarphi | 5683e68d5934a2f461aa674acbf4a5e9db3b5dbb | ["Apache-2.0"] | 116 | 2019-12-02T17:15:01.000Z | 2020-09-03T12:12:23.000Z | data-processing/common/string.py | hhchi13/scholarphi | 5683e68d5934a2f461aa674acbf4a5e9db3b5dbb | ["Apache-2.0"] | 35 | 2020-10-01T09:11:41.000Z | 2022-01-26T15:51:46.000Z |
import dataclasses
import logging
from collections import UserString
from dataclasses import dataclass
from typing import Any, Dict, List, Optional, Tuple, Union
@dataclass
class Segment:
initial: str
current: str
changed: bool
class JournaledString(UserString): # pylint: disable=too-many-ancestors
"""
A string that keeps a record of the edits made to it. It preserves a record
of which spans have been replaced. This allows the mapping of character offsets in a
changed copy of the string to character offsets in the original string. This class was
created to help with finding locations in a string of TeX corresponding to entities that
were found in a transformed version of that TeX.
By subclassing 'UserString', the typical methods of a string (e.g., 'split', 'lower',
equality comparisons, etc.) are all exposed to the client, who can use this as if it was
a typical Python string. When the client want to make changes to this string that
are tracked by the string, they should use the special methods defined on this class
(e.g., "edit"). Like Python strings, this one is also immutable (i.e., the special
methods return copies of the string, not the original string).
"""
def __init__(self, data: Union[str, List[Segment]]) -> None:
if isinstance(data, str):
self.segments = [Segment(data, data, False)]
"""
Segments of the mutable string, each of which includes information about its initial
value, its current value, and marker indicating whether the segment has changed.
"""
elif isinstance(data, list) and all([isinstance(s, Segment) for s in data]):
self.segments = data
else:
logging.warning( # pylint: disable=logging-not-lazy
"Could not create JournaledString from input data %s. "
+ "Check that the input data has one of the supported types.",
data,
)
# Set the initial internal contents of the UserString superclass to empty;
# the string value will be computed dynamically from the segments (see below).
super(JournaledString, JournaledString).__init__(self, "")
@property # type: ignore
def data(self) -> str: # type: ignore
# 'UserString''s underlying representation of the string is in the 'data' attribute.
# To avoid having two sources of truth for the value of the string, the 'data'
# attribute is overwritten, so that the value of 'data' is always dynamically determined
# from the contents of the 'segments'.
return "".join([s.current for s in self.segments])
@data.setter
def data(self, _: Any) -> None:
# This method prevents other methods in UserString from accidentally changing the
# data of the string in an unexpected way.
return
@property
def initial(self) -> str:
" Get the initial value of the string, before it was mutated. "
return "".join([s.initial for s in self.segments])
def edit(self, start: int, end: int, replacement: str) -> "JournaledString":
"""
Replace a substring of the string (from 'start' to 'end') with a new substring.
Return a changed copy (do not modify this object).
"""
        # By making 'middle' a greedy substring and the other two non-greedy,
        # the middle absorbs the contents of 'initial' wherever there would be
        # a conflict, and also absorbs segments where 'initial' was replaced
        # with the empty string.
left = self.substring(
0,
start,
greedy=False,
include_truncated_left=True,
include_truncated_right=False,
)
middle = self.substring(start, end, greedy=True)
right = self.substring(
end,
len(self),
greedy=False,
include_truncated_left=False,
include_truncated_right=True,
)
# If the replacement doesn't change the string, return a clone.
if str(middle) == replacement:
return JournaledString(self.segments)
# Merge the middle segments into a contiguous "changed" segment.
merged_middle = Segment(
initial="".join([s.initial for s in middle.segments]),
current=replacement,
changed=True,
)
# Detect whether the replacement bisects segments on its left or right side.
left_cut = False
right_cut = False
s_start = 0
for s in self.segments:
if s_start < start < s_start + len(s.current):
left_cut = True
if s_start < end < s_start + len(s.current):
right_cut = True
s_start += len(s.current)
# If a segment on the left was bisected, and it has been changed
# in the past, it needs to be merged with the middle as the call to
# 'substring' will have duplicated the 'initial' property in both the middle
# substring and the one on the side, which needs to be deduplicated.
if left_cut and left.segments[-1].changed:
last_left = left.segments[-1]
merged_middle.current = last_left.current + merged_middle.current
del left.segments[-1]
# Do the same check on the right.
if right_cut and right.segments[0].changed:
first_right = right.segments[0]
merged_middle.current = merged_middle.current + first_right.current
del right.segments[0]
# Create a new string by combining all the segments.
new_segments = []
for segment_list in [left.segments, [merged_middle], right.segments]:
for s in segment_list:
if not (s.initial == "" and s.current == ""):
new_segments.append(s)
return JournaledString(new_segments)
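    # A small illustrative sequence (values chosen for clarity):
    #
    #   s = JournaledString("hello world")
    #   s2 = s.edit(0, 5, "goodbye")     # str(s2) == "goodbye world"
    #   s2.initial_offsets(0, 7)         # -> (0, 5), the span of "hello"
    #
    # i.e. the replaced span maps back to the whole original "hello" segment,
    # while unchanged characters map back one-to-one.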
def substring(
self,
start: int,
end: int,
greedy: bool = True,
include_truncated_left: bool = True,
include_truncated_right: bool = True,
) -> "JournaledString":
"""
Get a substring of the journaled string, with pointers back to only the parts of the
initial substring that correspond to the substringed part of the string. 'greedy'
determines whether 'initial' is grown, to include:
* segments on the boundary where 'initial' was replaced with ''
* the contents of 'initial' for bisected segments.
"""
new_segments: List[Segment] = []
s_start = 0
for s in self.segments:
s_end = s_start + len(s.current)
# Simplest case: 'start' and 'end' surround a segment, so that entire segment
# will be included in the new string.
if start <= s_start and end >= s_end:
# Only include replacements of spans with blanks if in 'greedy' mode.
if s_start == start and s_start == s_end and not include_truncated_left:
continue
if s_end == end and s_start == s_end and not include_truncated_right:
continue
new_segments.append(s)
# Trickier cases: look for when 'start' and 'end' appear within a segment. In that
# case, a new segment needs to be added with the initial and current strings truncated.
else:
initial = s.initial
current = s.current
overlaps = False
# Truncate right side if the end is within this segment.
if s_start < end < s_end:
overlaps = True
end_in_s = end - s_start
current = current[:end_in_s]
# Only truncate the initial string if it has not been changed. If it has
# been changed, then it isn't clear which characters in the initial string the
# truncated characters in the updated string correspond to, so conservatively
# assume that the segment maps to the same initial segment.
if not s.changed:
initial = initial[:end_in_s]
elif greedy:
initial = initial
else:
initial = ""
# Truncate left side if the start is within this segment. Note that it might be
                # possible for both the start and end to lie within this segment, hence the shared
# 'current' and 'initial' variables.
if s_start < start < s_end:
overlaps = True
start_in_s = start - s_start
current = current[start_in_s:]
if not s.changed:
initial = initial[start_in_s:]
if overlaps:
new_segments.append(Segment(initial, current, s.changed))
s_start = s_end
return JournaledString(new_segments)
def initial_offsets(
self, start: int, end: int
) -> Tuple[Optional[int], Optional[int]]:
"""
Convert offsets expressed relative to the current value of the string to offsets in the
original string. The offsets will be precise wherever the string hasn't been changed. They
        will be approximate, returning a conservatively large span, in places where the string has
been mutated.
"""
# Search for the start position. Search from the end of the current
# string backwards, to find the last possible segment that could
# map to the initial offsets, to provide a tight mapping.
current_segment_end = sum([len(s.current) for s in self.segments])
initial_segment_end = sum([len(s.initial) for s in self.segments])
start_in_initial: Optional[int] = initial_segment_end
for s in reversed(self.segments):
current_segment_start = current_segment_end - len(s.current)
initial_segment_start = initial_segment_end - len(s.initial)
if current_segment_start <= start <= current_segment_end:
if s.changed:
start_in_initial = (
initial_segment_end
if start == current_segment_end
else initial_segment_start
)
else:
start_in_initial = initial_segment_start + (
start - current_segment_start
)
break
current_segment_end -= len(s.current)
initial_segment_end -= len(s.initial)
# Repeat the search, this time searching forward to find the end offset.
current_segment_start = 0
initial_segment_start = 0
end_in_initial: Optional[int] = initial_segment_start
for s in self.segments:
current_segment_end = current_segment_start + len(s.current)
initial_segment_end = initial_segment_start + len(s.initial)
if current_segment_start <= end <= current_segment_end:
if s.changed and len(s.current) > 0:
end_in_initial = (
initial_segment_start
if end == current_segment_start
else initial_segment_end
)
else:
end_in_initial = initial_segment_start + (
end - current_segment_start
)
break
current_segment_start += len(s.current)
initial_segment_start += len(s.initial)
return (start_in_initial, end_in_initial)
def current_offsets(
self, start: int, end: int
) -> Tuple[Optional[int], Optional[int]]:
"""
Convert offsets expressed relative to the initial value of the string to offsets in the
        updated (current value of the) string. See the note in 'initial_offsets' about the
limitations of precision in this method.
"""
current_segment_start = 0
initial_segment_start = 0
start_in_current: Optional[int] = None
for s in self.segments:
initial_segment_end = initial_segment_start + len(s.initial)
# Search the segment for the start position.
if start_in_current is None:
if initial_segment_start <= start <= initial_segment_end:
start_in_current = current_segment_start
# If the 'start' offset comes at the end of this segment, return the end.
if s.changed and start == initial_segment_end:
start_in_current = current_segment_start + len(s.current)
# If this segment is still from the initial string, the start index
# can be adjusted to a specific offset in the segment.
if not s.changed:
start_in_current += start - initial_segment_start
# Only look for the end once the start has been found.
if start_in_current is not None:
# Search the segment for the end position. This search process is analogous
# to the search for 'start' above; see it for comments.
if initial_segment_start <= end <= initial_segment_end:
if s.changed:
if end == initial_segment_start:
end_in_current = current_segment_start
else:
end_in_current = current_segment_start + len(s.current)
else:
end_in_current = current_segment_start + (
end - initial_segment_start
)
return (start_in_current, end_in_current)
current_segment_start += len(s.current)
initial_segment_start += len(s.initial)
return (None, None)
def to_json(self) -> Dict[str, Any]:
return {
"value": str(self),
"segments": ([dataclasses.asdict(s) for s in self.segments]),
}
@staticmethod
def from_json(json: Dict[str, Any]) -> "JournaledString":
try:
string = JournaledString(
[
Segment(str(s["initial"]), str(s["current"]), bool(s["changed"]))
for s in json["segments"]
]
)
return string
except (KeyError, TypeError) as e:
raise ValueError(
f"Could not instantiate MutableString from JSON {json}. Error: {e}"
)
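    # Round-trip sketch: from_json only reads the 'segments' entry, so
    #   js = JournaledString("abc").edit(0, 1, "X")
    #   js2 = JournaledString.from_json(js.to_json())
    # reconstructs an equal edit history (the 'value' key in the JSON is
    # informational only).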
| 43.028902 | 99 | 0.587722 | 1,803 | 14,888 | 4.723794 | 0.181919 | 0.046495 | 0.037924 | 0.010567 | 0.273923 | 0.198427 | 0.138781 | 0.117882 | 0.074674 | 0.049078 | 0 | 0.00144 | 0.346857 | 14,888 | 345 | 100 | 43.153623 | 0.874434 | 0.32684 | 0 | 0.214953 | 0 | 0 | 0.033545 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.046729 | false | 0 | 0.023364 | 0.014019 | 0.14486 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4967be6985611b644c7e5e28c8cac7ac411557cb | 10,195 | py | Python | dataprofiler/DataProfiler.py | pengfei99/PandasProfilingService | 53634340876a0f03503c627ee75abf3e39fb9754 | ["MIT"] | null | null | null | dataprofiler/DataProfiler.py | pengfei99/PandasProfilingService | 53634340876a0f03503c627ee75abf3e39fb9754 | ["MIT"] | null | null | null | dataprofiler/DataProfiler.py | pengfei99/PandasProfilingService | 53634340876a0f03503c627ee75abf3e39fb9754 | ["MIT"] | null | null | null |
import json
import logging
import os
from typing import Optional
import pandas as pd
import pandas_profiling as pp
import pyarrow.parquet as pq
from pandas_profiling import ProfileReport
from dataprofiler.storage.StorageEngineInterface import StorageEngineInterface
log = logging.getLogger(__name__)
log.setLevel(logging.INFO)
handler = logging.StreamHandler()
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
log.addHandler(handler)
class DataProfiler:
def __init__(self, storage_engine: StorageEngineInterface, tmp_dir: str):
self.storage_engine = storage_engine
self.tmp_dir = tmp_dir
def save_report(self, input_file_path: str, file_format: str, report_format: str, report_destination: str,
separator=',', na_val='', report_title="", report_name="", minimal=True, explorative=False):
"""
        This function takes an input file path, generates a data profiling report, and copies it to the output path
:param report_destination: the path to store output report
:param explorative: remove some stats from column detail page for better speed
:param minimal: remove correlation and duplication row detection for better speed.
:param input_file_path: full input file path. if s3 path, it must be in format s3://{bucket_name}/{bucket_key}
:param file_format: input data file format, only csv, json and parquet are accepted
:param report_format: output report format, only html and json are accepted
:param separator: if input data file format is csv, you can specify a custom csv
:param na_val: if input data file format is csv, you can specify a custom na_val
:param report_title: The title of the report in the generated report file
:param report_name: The name of the generated report file
:return: if success, return nothing, otherwise raise exception
"""
local_path = self.make_report(input_file_path, file_format, report_format, separator, na_val, report_title,
report_name, minimal=minimal, explorative=explorative)
if local_path is not None:
full_report_path = f"{report_destination}/{report_name}.{report_format}"
if self.storage_engine.upload_data(local_path, full_report_path):
log.info(f"Report has been generated and uploaded to {full_report_path}")
else:
log.error(f"Fail to upload report to {full_report_path}")
                raise RuntimeError(f"Fail to upload report to {full_report_path}")
else:
log.error(f"Fail to generate report for {input_file_path}")
            raise RuntimeError(f"Fail to generate report for {input_file_path}")
def make_report(self, input_file_path: str, file_format: str, report_format: str, separator=',', na_val='',
report_title="", report_name="", minimal=True, explorative=False) -> Optional[str]:
"""
        This function takes a data file path and generates a data profiling report. Parameters such as separator and
        na_val are optional and only have an effect if file_format is csv. If report_title and report_name are empty,
        we generate them based on the input file name.
:param explorative: remove some stats from column detail page for better speed
:param minimal: remove correlation and duplication row detection for better speed.
:param input_file_path: full input file path. if s3 path, it must be in format s3://{bucket_name}/{bucket_key}
:param file_format: input data file format, only csv, json and parquet are accepted
:param report_format: output report format, only html and json are accepted
:param separator: if input data file format is csv, you can specify a custom csv
:param na_val: if input data file format is csv, you can specify a custom na_val
:param report_title: The title of the report in the generated report file
:param report_name: The name of the generated report file
:return: if success, return the full path of the generated report, otherwise return None
        Note: the param report_name should not contain the file extension; the final report name is
        built as report_name.report_format
"""
# build local file path for download
source_file_name = self.storage_engine.get_short_file_name(input_file_path)
local_path = f"{self.tmp_dir}/{source_file_name}"
# build report title
if report_title == "":
report_title = f"Profiling report of {source_file_name}"
# build report name, if none is given
if report_name == "":
report_name = f"{source_file_name}_report.{report_format}"
else:
report_name = f"{report_name}.{report_format}"
# if download success, produce report
if self.storage_engine.download_data(input_file_path, local_path):
df = DataProfiler.get_source_df(local_path, file_format, separator, na_val)
# if convert to pandas dataframe is successful, start generate report
if df is not None:
# if generate report is successful, log success
if DataProfiler.generate_report(df, self.tmp_dir, report_name, report_title, minimal=minimal,
explorative=explorative):
report_full_path = f"{self.tmp_dir}/{report_name}"
log.info(f"Successfully generated report for {input_file_path}, report location {report_full_path}")
return report_full_path
else:
log.exception(f"Fail to generate profiling report for {input_file_path}")
return None
else:
log.exception("Fail to convert data to pandas dataframe")
return None
else:
log.exception(f"Fail to download file from {input_file_path}")
return None
@staticmethod
def get_source_df(input_file_path: str, file_format: str, separator=',', na_val='') -> Optional[pd.DataFrame]:
"""
This function read various data source file (e.g. csv, json, parquet), then return a pandas data frame of the
data source file
:param input_file_path: path of the input file
:param file_format: file format, only accept csv, json, and parquet
:param separator: if file is csv, can specify a separator
:param na_val: if file is csv, can specify a null value identifier
:return: a pandas dataframe, or none, if the file format is not supported
"""
if file_format == "csv":
df = pd.read_csv(input_file_path, sep=separator, na_values=[na_val])
elif file_format == "json":
            rows = []
            # Read line-delimited JSON and close the file when done
            with open(input_file_path, 'r') as json_file:
                for line in json_file:
                    rows.append(json.loads(line))
            df = pd.DataFrame(rows)
# I don't use df = pd.read_json(input_file_path), because it can't handle null value in json file correctly.
elif file_format == "parquet":
table = pq.read_table(input_file_path)
df = table.to_pandas()
else:
log.error("The input data format is not supported")
df = None
return df
@staticmethod
def generate_report(df: pd.DataFrame, output_path: str, report_name: str, report_title="Profiling Report",
minimal=True, explorative=False) -> bool:
"""
This function generates a data profiling report without metadata and column description
:param explorative: remove some stats from column detail page for better speed
:param minimal: remove correlation and duplication row detection for better speed.
:param df: input pandas dataframe which we will profile
:param output_path: output report parent directory
:param report_name: the name of the output report
:param report_title: the title of the report, default value is 'Profiling Report'
:return: return true if the report is generated successfully
"""
try:
os.makedirs(output_path, exist_ok=True)
profile = ProfileReport(df, title=report_title, minimal=minimal, explorative=explorative)
report_full_path = f"{output_path}/{report_name}"
profile.to_file(report_full_path)
except Exception as e:
log.exception(f"Fail to generate report. {e}")
return False
else:
log.info(f"Profiling Report is successfully generated at {report_full_path} ")
return True
@staticmethod
def generate_report_with_meta(df: pd.DataFrame, dataset_metadata: dict, columns_description: dict) -> \
Optional[pp.ProfileReport]:
"""
        This function generates a report with metadata and column descriptions. Check the example below for more
        information about the format of the metadata and column descriptions.
:param df: input pandas dataframe which we will profile
:param dataset_metadata:
:param columns_description:
:return: return a ProfileReport that can be written in html or json
Examples:
dataset_metadata = {
"description": "This profiling report was generated by using the census dataset.",
"creator": "toto",
"author": "toto",
"copyright_holder": "toto LLC",
"copyright_year": "2020",
"url": "http://toto.org"}
columns_description = {
"descriptions": {
"column_name": "column_description",
...
"column_name": "column_description", }
}
"""
try:
profile = ProfileReport(df, title="Agriculture Data", dataset=dataset_metadata,
variables=columns_description)
except Exception as e:
log.exception(f"Fail to generate report. {e}")
return None
else:
log.info(f"Profiling Report is successfully generated")
return profile
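    # A minimal usage sketch (the storage engine instance and all paths are
    # assumptions for illustration):
    #
    #   profiler = DataProfiler(my_storage_engine, tmp_dir="/tmp/profiling")
    #   profiler.save_report("s3://my-bucket/data.csv", "csv", "html",
    #                        "s3://my-bucket/reports", report_name="data_profile")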
def main():
pass
if __name__ == "__main__":
main()
| 47.640187 | 123 | 0.65385 | 1,307 | 10,195 | 4.938026 | 0.17521 | 0.030679 | 0.040285 | 0.017663 | 0.411528 | 0.35017 | 0.335916 | 0.320421 | 0.308491 | 0.292997 | 0 | 0.00108 | 0.273664 | 10,195 | 213 | 124 | 47.86385 | 0.870493 | 0.413536 | 0 | 0.226415 | 0 | 0 | 0.172439 | 0.038526 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066038 | false | 0.009434 | 0.084906 | 0 | 0.245283 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
496b4947f6c44a28839de8dc2b5d4c8f214dfc17 | 12,676 | py | Python | hough_visualize.py | Elucidation/ChessboardDetect | a5d2a2c2ab2434e4e041b4f384f3cd7d6884d2c4 | ["MIT"] | 43 | 2016-10-28T02:13:26.000Z | 2022-02-16T14:20:32.000Z | hough_visualize.py | AnkaChan/ChessboardDetect | a5d2a2c2ab2434e4e041b4f384f3cd7d6884d2c4 | ["MIT"] | 3 | 2016-11-15T19:04:46.000Z | 2020-08-26T20:41:29.000Z | hough_visualize.py | AnkaChan/ChessboardDetect | a5d2a2c2ab2434e4e041b4f384f3cd7d6884d2c4 | ["MIT"] | 12 | 2018-08-22T22:33:21.000Z | 2021-08-20T08:40:42.000Z |
from __future__ import print_function
from pylab import *
import numpy as np
import cv2
import sys
from board_detect import *
from contour_detect import *
from rectify_refine import *
def getRhoTheta(line):
x1,y1,x2,y2 = line
theta = np.arctan2(y2-y1,x2-x1)
rho = x1*np.cos(theta) + y1*sin(theta)
return rho, theta
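# e.g. getRhoTheta((0, 0, 1, 1)) returns (0.0, pi/4): theta is the segment's
# direction angle and rho is the projection of (x1, y1) onto that direction,
# i.e. the dot product of (x1, y1) with (cos(theta), sin(theta)).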
def findAndDrawTile(img):
contours, chosen_tile_idx, edges = findPotentialTiles(img)
if not len(contours):
return
drawPotentialTiles(img, contours, chosen_tile_idx)
tile_corners = getChosenTile(contours, chosen_tile_idx)
hough_corners, corner_hough_lines, edges_roi = refineTile(img, edges, contours, chosen_tile_idx)
drawBestHoughLines(img, hough_corners, corner_hough_lines)
# Single tile warp
ideal_tile = np.array([
[1,0],
[1,1],
[0,1],
[0,0],
],dtype=np.float32)
tile_res=32
M = cv2.getPerspectiveTransform(hough_corners,
(tile_res)*(ideal_tile+8+1))
side_len = tile_res*(8 + 1)*2
out_img = cv2.warpPerspective(img, M,
(side_len, side_len))
cv2.imshow('image %dx%d' % (img.shape[1],img.shape[0]),img)
cv2.imshow('warp',out_img)
def findAndDrawHough(img):
img_diag_size = int(np.ceil(np.sqrt(img.shape[0]*img.shape[0] + img.shape[1]*img.shape[1])))
hough_img = np.zeros([2*img_diag_size/4, 180]) # -90 to 90 deg, -rho_max to rho_max
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray,100,650,apertureSize = 3)
min_img_side = min(img.shape[:2])
minLineLength = min_img_side/4
maxLineGap = min_img_side/10
threshold = int(min_img_side/4)
# print(minLineLength, maxLineGap, threshold)
lines = cv2.HoughLinesP(edges,rho=1,theta=np.pi/180,
threshold=threshold, minLineLength=minLineLength, maxLineGap=maxLineGap)
    if lines is not None:
rhothetas = np.zeros([lines.shape[0], 2])
for i, (x1,y1,x2,y2) in enumerate(lines[:,0,:]):
cv2.line(img,(x1,y1),(x2,y2), (0,0,255),2)
rho, theta = getRhoTheta((x1,y1,x2,y2))
rhothetas[i,:] = rho, theta
img_rho, img_theta = (int(theta*180/np.pi + 90), int((rho+img_diag_size)/4))
cv2.circle(hough_img, (img_rho, img_theta), 3, (255,0,0),-1)
# Using opencv
cv2.imshow('image %dx%d' % (img.shape[1],img.shape[0]),img)
cv2.imshow('edges',edges)
cv2.imshow('hough',hough_img)
def findAndDrawMask(img):
# img = scaleImageIfNeeded(img, 600, 480)
# Edges
edges = cv2.Canny(img, 100, 550)
mask, top_two_angles, min_area_rect, median_contour = getEstimatedChessboardMask(img, edges, iters=5)
img_masked_full = cv2.bitwise_and(img,img,mask = (mask > 0.5).astype(np.uint8))
img_masked = cv2.addWeighted(img,0.2,img_masked_full,0.8,0)
# Hough lines overlay
edges_masked = cv2.bitwise_and(edges,edges,mask = (mask > 0.5).astype(np.uint8))
if top_two_angles is not None and len(top_two_angles) == 2:
lines = getHoughLines(edges_masked, min_line_size=0.25*min(min_area_rect[1]))
lines_a, lines_b = parseHoughLines(lines, top_two_angles, angle_threshold_deg=15)
plotHoughLines(img_masked, lines, color=(255,255,255), line_thickness=1)
plotHoughLines(img_masked, lines_a, color=(0,0,255))
plotHoughLines(img_masked, lines_b, color=(0,255,0))
if min_area_rect is not None:
drawMinAreaRect(img_masked, min_area_rect)
# cv2.imshow('Masked',img_masked)
return img_masked
# cv2.imshow('edges %s' % filename, edges_masked)
# cv2.imshow('mask %s' % filename, mask)
def findAndDrawChessboard(img):
img_orig = img.copy()
img_orig2 = img.copy()
# Edges
edges = cv2.Canny(img, 100, 550)
# Get mask for where we think chessboard is
mask, top_two_angles, min_area_rect, median_contour = getEstimatedChessboardMask(img, edges,iters=3) # More iters gives a finer mask
if top_two_angles is None or len(top_two_angles) != 2 or min_area_rect is None:
print('fail', top_two_angles)
return img
if mask.min() != 0:
return img
# Get hough lines of masked edges
edges_masked = cv2.bitwise_and(edges,edges,mask = (mask > 0.5).astype(np.uint8))
img_orig = cv2.bitwise_and(img_orig,img_orig,mask = (mask > 0.5).astype(np.uint8))
lines = getHoughLines(edges_masked, min_line_size=0.25*min(min_area_rect[1]))
lines_a, lines_b = parseHoughLines(lines, top_two_angles, angle_threshold_deg=35)
if len(lines_a) < 2 or len(lines_b) < 2:
print('fail2', lines_a, lines_b)
return img
# plotHoughLines(img, lines, color=(255,255,255), line_thickness=1)
# plotHoughLines(img, lines_a, color=(0,0,255))
# plotHoughLines(img, lines_b, color=(0,255,0))
a = time()
for i2 in range(2):
for i in range(5):
corners = chooseRandomGoodQuad(lines_a, lines_b, median_contour)
# warp_img, M = getTileImage(img_orig, corners.astype(np.float32),tile_buffer=16, tile_res=16)
M = getTileTransform(corners.astype(np.float32),tile_buffer=16, tile_res=16)
# Warp lines and draw them on warped image
all_lines = np.vstack([lines_a[:,:2], lines_a[:,2:], lines_b[:,:2], lines_b[:,2:]]).astype(np.float32)
warp_pts = cv2.perspectiveTransform(all_lines[None,:,:], M)
warp_pts = warp_pts[0,:,:]
warp_lines_a = np.hstack([warp_pts[:len(lines_a),:], warp_pts[len(lines_a):2*len(lines_a),:]])
warp_lines_b = np.hstack([warp_pts[2*len(lines_a):2*len(lines_a)+len(lines_b),:], warp_pts[2*len(lines_a)+len(lines_b):,:]])
# Get thetas of warped lines
thetas_a = np.array([getSegmentTheta(line) for line in warp_lines_a])
thetas_b = np.array([getSegmentTheta(line) for line in warp_lines_b])
median_theta_a = (np.median(thetas_a*180/np.pi))
median_theta_b = (np.median(thetas_b*180/np.pi))
# Gradually relax angle threshold over N iterations
if i < 20:
warp_angle_threshold = 0.03
elif i < 30:
warp_angle_threshold = 0.1
elif i < 50:
warp_angle_threshold = 0.3
elif i < 70:
warp_angle_threshold = 0.5
elif i < 80:
warp_angle_threshold = 1.0
else:
warp_angle_threshold = 2.0
if ((angleCloseDeg(abs(median_theta_a), 0, warp_angle_threshold) and
angleCloseDeg(abs(median_theta_b), 90, warp_angle_threshold)) or
(angleCloseDeg(abs(median_theta_a), 90, warp_angle_threshold) and
angleCloseDeg(abs(median_theta_b), 0, warp_angle_threshold))):
break
# else:
# print('iter %d: %.2f %.2f' % (i, abs(median_theta_a), abs(median_theta_b)))
warp_img, M = getTileImage(img_orig, corners.astype(np.float32),tile_buffer=16, tile_res=16)
lines_x, lines_y, step_x, step_y = getWarpCheckerLines(warp_img)
if len(lines_x) > 0:
break
warp_img, M = getTileImage(img_orig, corners.astype(np.float32),tile_buffer=16, tile_res=16)
for corner in corners:
cv2.circle(img, tuple(map(int,corner)), 5, (255,150,150),-1)
if len(lines_x) > 0:
warp_corners, all_warp_corners = getRectChessCorners(lines_x, lines_y)
tile_centers = all_warp_corners + np.array([step_x/2.0, step_y/2.0]) # Offset from corner to tile centers
M_inv = np.matrix(np.linalg.inv(M))
real_corners, all_real_tile_centers = getOrigChessCorners(warp_corners, tile_centers, M_inv)
tile_res = 64 # Each tile has N pixels per side
tile_buffer = 1
warp_img, better_M = getTileImage(img_orig2, real_corners, tile_buffer=tile_buffer, tile_res=tile_res)
# Further refine rectified image
warp_img, was_rotated, refine_M = reRectifyImages(warp_img)
# combined_M = better_M
combined_M = np.matmul(refine_M,better_M)
M_inv = np.matrix(np.linalg.inv(combined_M))
# Get better_M based corners
hlines = vlines = (np.arange(8)+tile_buffer)*tile_res
hcorner = (np.array([0,8,8,0])+tile_buffer)*tile_res
vcorner = (np.array([0,0,8,8])+tile_buffer)*tile_res
ideal_corners = np.vstack([hcorner,vcorner]).T
ideal_all_corners = np.array(list(itertools.product(hlines, vlines)))
ideal_tile_centers = ideal_all_corners + np.array([tile_res/2.0, tile_res/2.0]) # Offset from corner to tile centers
real_corners, all_real_tile_centers = getOrigChessCorners(ideal_corners, ideal_tile_centers, M_inv)
# Get final refined rectified warped image for saving
warp_img, _ = getTileImage(img_orig2, real_corners, tile_buffer=tile_buffer, tile_res=tile_res)
cv2.polylines(img, [real_corners.astype(np.int32)], True, (150,50,255), thickness=4)
cv2.polylines(img, [all_real_tile_centers.astype(np.int32)], False, (0,50,255), thickness=1)
img_masked_full = cv2.bitwise_and(img,img,mask = (mask > 0.5).astype(np.uint8))
img_masked = cv2.addWeighted(img,0.2,img_masked_full,0.8,0)
drawMinAreaRect(img_masked, min_area_rect)
return img_masked
def processVideo(filename, func=findAndDrawHough, rate=1):
# Define the codec and create VideoWriter object
fourcc = cv2.VideoWriter_fourcc(*'XVID')
cap = cv2.VideoCapture(filename)
img_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
img_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
img_rescale_ratio = 1.0
out_size = (int(img_width*img_rescale_ratio), int(img_height*img_rescale_ratio))
output_filename = 'output3_%s.avi' % (filename[:-4])
print("Writing to %s at scale %s" % (output_filename, out_size))
# out = cv2.VideoWriter(output_filename,fourcc, 20.0, (384,216)) # 0.2
# out = cv2.VideoWriter(output_filename,fourcc, 20.0, (576,324)) # 0.3
out = cv2.VideoWriter(output_filename,fourcc, 20.0, out_size)
i = 0
while(cap.isOpened()):
ret, frame = cap.read()
if not np.any(frame):
break
i+=1
img = cv2.resize(frame,None,fx=img_rescale_ratio, fy=img_rescale_ratio, interpolation = cv2.INTER_AREA)
if (i == 1):
if img.shape[0] != out_size[1] or img.shape[1] != out_size[0]:
print(img.shape, out_size)
img_masked = func(img)
out.write(img_masked)
cv2.imshow('Masked',img_masked)
if cv2.waitKey(rate) & 0xFF == ord('q'):
break
cap.release()
out.release()
cv2.destroyAllWindows()
def main(filenames):
for filename in filenames:
print("Processing %s" % filename)
img = cv2.imread(filename)
img_diag_size = int(np.ceil(np.sqrt(img.shape[0]*img.shape[0] + img.shape[1]*img.shape[1])))
print(img_diag_size)
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray,100,650,apertureSize = 3)
min_img_side = min(img.shape[:2])
minLineLength = min_img_side/8
maxLineGap = min_img_side/10
threshold = int(min_img_side/8)
print(minLineLength, maxLineGap, threshold)
lines = cv2.HoughLinesP(edges,rho=1,theta=np.pi/180,
threshold=threshold, minLineLength=minLineLength, maxLineGap=maxLineGap)
# colors = np.random.random([lines.shape[0],3])*255
colors = [
[255,0,0],
[0,255,0],
[255,255,0],
[0,0,255],
[255,0,255],
[0,255,255],
[255,255,255],
]
hough_img = np.zeros([2*img_diag_size/4, 180]) # -90 to 90 deg, -rho_max to rho_max
        if lines is not None:
rhothetas = np.zeros([lines.shape[0], 2])
for i, (x1,y1,x2,y2) in enumerate(lines[:,0,:]):
color = list(map(int,colors[i%len(colors)])) # dtype needs to be int, not np.int32
cv2.line(img,(x1,y1),(x2,y2), color,2)
rho, theta = getRhoTheta((x1,y1,x2,y2))
rhothetas[i,:] = rho, theta
img_rho, img_theta = (int(theta*180/np.pi + 90), int((rho+img_diag_size)/4))
cv2.circle(hough_img, (img_rho, img_theta), 3, (255,0,0),-1)
plot(rhothetas[:,1]*180/np.pi, rhothetas[:,0], 'o')
print(hough_img.shape)
xlabel('theta (deg)')
ylabel('rho')
# Using matplotlib
# imshow(img)
# show()
# Using opencv
cv2.imshow('image %dx%d' % (img.shape[1],img.shape[0]),img)
cv2.imshow('edges',edges)
cv2.imshow('hough',hough_img)
cv2.moveWindow('hough', 0,0)
# cv2.waitKey(0)
axis('equal')
show()
cv2.destroyAllWindows()
if __name__ == '__main__':
if len(sys.argv) > 1:
filenames = sys.argv[1:]
else:
filenames = ['input2/27.jpg']
# filenames = ['input/2.jpg', 'input/6.jpg', 'input/17.jpg']
# filenames = ['input/1.jpg', 'input/2.jpg', 'input/3.jpg', 'input_fails/37.jpg', 'input_fails/38.jpg']
# filenames = ['input_fails/37.jpg', 'input_fails/38.jpg']
# main(filenames)
# processVideo('chess1.mp4')
# processVideo('chess2.mp4', func=findAndDrawTile)
processVideo('chess1.mp4', func=findAndDrawMask)
# processVideo('chess3.mp4', func=findAndDrawChessboard)
    print('Done.')
| 35.507003 | 134 | 0.671821 | 1,943 | 12,676 | 4.180134 | 0.166752 | 0.0197 | 0.022162 | 0.006895 | 0.499754 | 0.446565 | 0.426988 | 0.387466 | 0.349175 | 0.326274 | 0 | 0.055383 | 0.185232 | 12,676 | 357 | 135 | 35.507003 | 0.731022 | 0.143894 | 0 | 0.284483 | 0 | 0 | 0.017589 | 0 | 0 | 0 | 0.00037 | 0 | 0 | 1 | 0.030172 | false | 0 | 0.034483 | 0 | 0.094828 | 0.043103 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0
496e92d7cad481a4d6a3509fd9afbfb4305f5cd5 | 7,213 | py | Python | main/config.py | chdb/DhammaMap1 | ed31ca8bd1e9dd4bc87a824903ea483979ff9e8d | ["MIT"] | null | null | null | main/config.py | chdb/DhammaMap1 | ed31ca8bd1e9dd4bc87a824903ea483979ff9e8d | ["MIT"] | null | null | null | main/config.py | chdb/DhammaMap1 | ed31ca8bd1e9dd4bc87a824903ea483979ff9e8d | ["MIT"] | null | null | null |
# coding: utf-8
"""
Global config variables. Config variables stored in DB are loaded into CONFIG_DB variable
"""
import os
#pylint: disable=import-error
import util
import logging
#from jinja_boot import set_autoescape
from collections import namedtuple
# DelayCfg = namedtuple('DelayCfg', ['delay' #
# ,'latency' # milliseconds - maximum time for network plus browser response
# ]) # ... after this it will try again. Too small will prevent page access for slow systems. Too big will cause
#Todo: set latency value at runtime from multiple of eg a redirect
RateLimitCfg= namedtuple('RateLimitCfg', [ 'minDelay' #
, 'lockCfg' #
] )
LockCfg = namedtuple('LockCfg' , [ 'delayFn' # lambda(n) - minimum time between requests in milliseconds as function of the retry number.
, 'maxbad' # number consecutive 'bad' requests in 'period' to trigger lockout
, 'period' # seconds - time permitted for < maxbad consecutive 'bad' requests
, 'lockTime' # seconds - duration of lockout
, 'bGoodReset'# boolean - whether reset occurs for good login
] )
cfg_={'DebugMode' : True #if True, uncaught exceptions are raised instead of using HTTPInternalServerError.
# so that you get a stack trace in the log
#otherwise it's just a '500 Internal Server Error'
,'locales' : []
# seconds- 0 means never expire
# 'maxAgeRecentLogin' : 60*10
,'maxAgeSignUpTok' : 60*60*24
,'maxAgePasswordTok' : 60*60
,'maxAgePassword2Tok': 60*60
#,'maxIdleAnon' : 0 #60*60
,'maxIdleAuth' : 15 #60*60
,'MemCacheKeepAlive' : 500 # milliseconds - to refresh MemCache item, so (probably) not be flushed
,'Signin' : RateLimitCfg ( 700
# milliseconds - minimum time between requests.
# delay or lock on repeated requests from the same ipa and the same ema
, {'ema_ipa':LockCfg( lambda n: (n**2)*100 # 100 400 900 1600 ... for n = 1, 2, 3, ...
, 3 #maxbad
, 60*1 #period
, 60*3 #lockTime
, True #bGoodReset
)
# delay or lock on repeated requests from the same ema but different ipa's
,'ema' :LockCfg( lambda n: (n-1)*300 # 0 300 600 900 1200...
, 3 #maxbad
, 60*1 #period
, 60*3 #lockTime
, True #bGoodReset
)
# delay or lock on repeated requests from the same ipa but different ema's
,'ipa' :LockCfg( lambda n: (n-1)*500 # 0 500 1000 1500 2000 ...
, 3 #maxbad
, 60*1 #period
, 60*3 #lockTime
, True #bGoodReset
) } )
,'Forgot':RateLimitCfg( 500 # milliseconds - minimum time between requests.
, {'ema_ipa':LockCfg( lambda n: (n**2)*100 # 100 400 900 1600 ... for n = 1, 2, 3, ...
, 3 #maxbad
, 60*1 #period
, 60*3 #lockTime
, True #bGoodReset
)
,'ema' :LockCfg( lambda n: (n-1)*300 # 0 300 600 900 1200...
, 3 #maxbad
, 60*1 #period
, 60*3 #lockTime
, True #bGoodReset
)
,'ipa' :LockCfg( lambda n: (n-1)*500 # 0 500 1000 1500 2000 ...
, 3 #maxbad
, 60*1 #period
, 60*3 #lockTime
, True #bGoodReset
) } )
# ,'pepper' : None
# ,'recordEmails' : True
# ,'email_developers' : True
# ,'developers' : (('Santa Klauss', 'snowypal@northpole.com'))
#add-to/update the default_config at \webapp2_extras\jinja2.py
# ,'webapp2_extras.jinja2': { 'template_path' : [ 'template' ]
# , 'environment_args': { 'extensions': ['jinja2.ext.i18n'
# ,'jinja2.ext.autoescape'
# ,'jinja2.ext.with_'
# ]
# , 'autoescape': set_autoescape
# }
# }
}
cfg_['Signup'] = cfg_['Forgot']
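# A hedged sketch of how a request handler might consume one of these policies;
# `check_signin_delay` and `retries` are illustrative names, not part of this module.
def check_signin_delay(retries):
    """Return the minimum delay (ms) before the next sign-in attempt is allowed."""
    policy = cfg_['Signin'].lockCfg['ema_ipa']  # per-(email, ip-address) lockout rules
    if retries == 0:
        return cfg_['Signin'].minDelay          # flat floor between requests
    return policy.delayFn(retries)              # escalating back-off: 100, 400, 900, ...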
class TwoWayDict(dict):
    def __setitem__(_s, key1, key2):
        # Both keys must be new (reassignment raises, nothing is replaced),
        # and the short key must follow the trailing-':' convention used below.
        if not key1.endswith(':'):
            raise ValueError
        if key1 in _s:
            raise ValueError
        if key2 in _s:
            raise ValueError
        dict.__setitem__(_s, key1, key2)
        dict.__setitem__(_s, key2, key1)
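# Illustrative example (not in the original module): a TwoWayDict stores both
# directions at once, so after
#     d = TwoWayDict(); d['gg:'] = 'google'
# both d['gg:'] == 'google' and d['google'] == 'gg:' hold, and re-adding
# either key raises ValueError.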
authNames = TwoWayDict()
# follow the convention: all short names end in ':' (and long names don't)
# follow the convention: all builtin names start with '_' (short and long names, both)
authNames ['_e:'] = '_email'
authNames ['_u:'] = '_userName'
## OAuth 1.0a
# authNames ['bb:'] = 'bitbucket'
# authNames ['fk:'] = 'flickr'
# authNames ['pk:'] = 'plurk'
authNames ['tt:'] = 'twitter'
# authNames ['tb:'] = 'tumblr'
## 'ubuntuone', # UbuntuOne service is no longer available
# authNames ['vm:'] = 'vimeo'
# authNames ['xr:'] = 'xero'
# authNames ['xg:'] = 'xing'
# authNames ['yh:'] = 'yahoo'
## OAuth 2.0
# authNames ['am:'] = 'amazon'
## 'behance', # doesn't support third party authorization anymore.
# authNames ['bl:'] = 'bitly'
# authNames ['dv:'] = 'deviantart'
authNames ['fb:'] = 'facebook'
# authNames ['fs:'] = 'foursquare'
authNames ['gg:'] = 'google'
authNames ['gh:'] = 'github'
authNames ['li:'] = 'linkedin'
# authNames ['pp:'] = 'paypal'
# authNames ['rd:'] = 'reddit'
# authNames ['vk:'] = 'vk'
# authNames ['wl:'] = 'windowslive'
# authNames ['ym:'] = 'yammer'
# authNames ['yd:'] = 'yandex'
authNames ['ig:'] = 'instagram' # not yet implemented in authomatic
## OpenID
# authNames ['ol:'] = 'openid_livejournal'
# authNames ['ov:'] = 'openid_verisignlabs'
# authNames ['ow:'] = 'openid_wordpress'
# authNames ['oy:'] = 'openid_yahoo'
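# Hedged usage sketch: the populated table resolves either direction, e.g.
#     authNames['fb:']      -> 'facebook'
#     authNames['facebook'] -> 'fb:'
# so a provider id stored in its short form can be expanded for display and back.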
| 45.942675 | 157 | 0.455012 | 635 | 7,213 | 5.102362 | 0.475591 | 0.015123 | 0.025926 | 0.027778 | 0.202778 | 0.171605 | 0.171605 | 0.171605 | 0.156173 | 0.156173 | 0 | 0.055775 | 0.438237 | 7,213 | 156 | 158 | 46.237179 | 0.74383 | 0.45113 | 0 | 0.468354 | 0 | 0 | 0.077925 | 0 | 0 | 0 | 0 | 0.00641 | 0 | 1 | 0.012658 | false | 0.025316 | 0.050633 | 0 | 0.075949 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |