hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
e19634e1c0e6ad67f639ff7b727b4525f8c022d4 | 1,186 | py | Python | test/arguments/with_range_check_code/python/Bit4RangeCheckTest.py | dkBrazz/zserio | 29dd8145b7d851fac682d3afe991185ea2eac318 | [
"BSD-3-Clause"
] | 86 | 2018-09-06T09:30:53.000Z | 2022-03-27T01:12:36.000Z | test/arguments/with_range_check_code/python/Bit4RangeCheckTest.py | dkBrazz/zserio | 29dd8145b7d851fac682d3afe991185ea2eac318 | [
"BSD-3-Clause"
] | 362 | 2018-09-04T20:21:24.000Z | 2022-03-30T15:14:38.000Z | test/arguments/with_range_check_code/python/Bit4RangeCheckTest.py | dkBrazz/zserio | 29dd8145b7d851fac682d3afe991185ea2eac318 | [
"BSD-3-Clause"
] | 20 | 2018-09-10T15:59:02.000Z | 2021-12-01T15:38:22.000Z | import unittest
import zserio
from testutils import getZserioApi
class Bit4RangeCheckTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.api = getZserioApi(__file__, "with_range_check_code.zs",
                               extraArgs=["-withRangeCheckCode"]).bit4_range_check

    def testBit4LowerBound(self):
        self._checkBit4Value(BIT4_LOWER_BOUND)

    def testBit4UpperBound(self):
        self._checkBit4Value(BIT4_UPPER_BOUND)

    def testBit4BelowLowerBound(self):
        with self.assertRaises(zserio.PythonRuntimeException):
            self._checkBit4Value(BIT4_LOWER_BOUND - 1)

    def testBit4AboveUpperBound(self):
        with self.assertRaises(zserio.PythonRuntimeException):
            self._checkBit4Value(BIT4_UPPER_BOUND + 1)

    def _checkBit4Value(self, value):
        bit4RangeCheckCompound = self.api.Bit4RangeCheckCompound(value_=value)
        bitBuffer = zserio.serialize(bit4RangeCheckCompound)
        readBit4RangeCheckCompound = zserio.deserialize(self.api.Bit4RangeCheckCompound, bitBuffer)
        self.assertEqual(bit4RangeCheckCompound, readBit4RangeCheckCompound)

BIT4_LOWER_BOUND = 0
BIT4_UPPER_BOUND = 15
| 34.882353 | 99 | 0.73946 | 107 | 1,186 | 7.943925 | 0.411215 | 0.084706 | 0.103529 | 0.061176 | 0.272941 | 0.174118 | 0.174118 | 0.174118 | 0.174118 | 0 | 0 | 0.030083 | 0.187184 | 1,186 | 33 | 100 | 35.939394 | 0.85166 | 0 | 0 | 0.08 | 0 | 0 | 0.036256 | 0.020236 | 0 | 0 | 0 | 0 | 0.12 | 1 | 0.24 | false | 0 | 0.12 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e199a6e60b58553046744ab0013002524ed3e824 | 473 | py | Python | topic-db/topicdb/core/models/language.py | anthcp-infocom/Contextualise | 0136660fcb965fd70fb4c7a33de7973a69ee9fec | [
"MIT"
] | 184 | 2019-01-10T03:50:50.000Z | 2022-03-31T19:45:16.000Z | topic-db/topicdb/core/models/language.py | anthcp-infocom/Contextualise | 0136660fcb965fd70fb4c7a33de7973a69ee9fec | [
"MIT"
] | 11 | 2019-04-07T07:39:11.000Z | 2022-02-17T13:29:32.000Z | topic-db/topicdb/core/models/language.py | anthcp-infocom/Contextualise | 0136660fcb965fd70fb4c7a33de7973a69ee9fec | [
"MIT"
] | 9 | 2019-10-26T02:43:59.000Z | 2021-11-03T00:49:10.000Z | """
Language enumeration. Part of the StoryTechnologies project.
June 12, 2016
Brett Alistair Kromkamp (brett.kromkamp@gmail.com)
"""
from enum import Enum
class Language(Enum):
    # https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes
    # https://en.wikipedia.org/wiki/ISO_639-2
    ENG = 1  # English
    SPA = 2  # Spanish
    DEU = 3  # German
    ITA = 4  # Italian
    FRA = 5  # French
    NLD = 6  # Dutch

    def __str__(self):
        return self.name
| 18.92 | 60 | 0.64482 | 67 | 473 | 4.41791 | 0.761194 | 0.047297 | 0.108108 | 0.128378 | 0.155405 | 0 | 0 | 0 | 0 | 0 | 0 | 0.05618 | 0.247357 | 473 | 24 | 61 | 19.708333 | 0.775281 | 0.560254 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.1 | 0.1 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
e1b43999f4dbbac898da1e996502f381b7896fa5 | 72,341 | py | Python | src/tale/syntax/grammar/TaleParser.py | tale-lang/tale | 1779f94aa13545e58a1d5a8819b85ad02ada4144 | [
"MIT"
] | 17 | 2020-02-11T10:38:19.000Z | 2020-09-22T16:36:25.000Z | src/tale/syntax/grammar/TaleParser.py | tale-lang/tale | 1779f94aa13545e58a1d5a8819b85ad02ada4144 | [
"MIT"
] | 18 | 2020-02-14T20:36:25.000Z | 2020-05-26T21:52:46.000Z | src/tale/syntax/grammar/TaleParser.py | tale-lang/tale | 1779f94aa13545e58a1d5a8819b85ad02ada4144 | [
"MIT"
] | 1 | 2020-02-16T12:04:07.000Z | 2020-02-16T12:04:07.000Z | # Generated from tale/syntax/grammar/Tale.g4 by ANTLR 4.8
# encoding: utf-8
from antlr4 import *
from io import StringIO
import sys
if sys.version_info[1] > 5:
    from typing import TextIO
else:
    from typing.io import TextIO
def serializedATN():
with StringIO() as buf:
buf.write("\3\u608b\ua72a\u8133\ub9ed\u417c\u3be7\u7786\u5964\3\20")
buf.write("\u00f1\4\2\t\2\4\3\t\3\4\4\t\4\4\5\t\5\4\6\t\6\4\7\t\7")
buf.write("\4\b\t\b\4\t\t\t\4\n\t\n\4\13\t\13\4\f\t\f\4\r\t\r\4\16")
buf.write("\t\16\4\17\t\17\4\20\t\20\4\21\t\21\4\22\t\22\4\23\t\23")
buf.write("\4\24\t\24\4\25\t\25\4\26\t\26\4\27\t\27\4\30\t\30\4\31")
buf.write("\t\31\4\32\t\32\4\33\t\33\4\34\t\34\4\35\t\35\4\36\t\36")
buf.write("\4\37\t\37\4 \t \4!\t!\3\2\3\2\7\2E\n\2\f\2\16\2H\13\2")
buf.write("\3\2\3\2\3\3\3\3\5\3N\n\3\3\4\3\4\3\4\3\4\3\5\3\5\3\5")
buf.write("\3\5\3\5\5\5Y\n\5\3\6\3\6\3\6\3\7\3\7\3\7\3\b\3\b\3\b")
buf.write("\3\b\3\t\5\tf\n\t\3\t\3\t\3\t\6\tk\n\t\r\t\16\tl\3\n\3")
buf.write("\n\3\13\3\13\5\13s\n\13\3\f\3\f\3\f\6\fx\n\f\r\f\16\f")
buf.write("y\3\r\3\r\5\r~\n\r\3\16\3\16\3\16\3\16\5\16\u0084\n\16")
buf.write("\3\16\3\16\3\17\3\17\5\17\u008a\n\17\3\20\3\20\3\21\3")
buf.write("\21\3\22\3\22\5\22\u0092\n\22\3\23\3\23\3\24\3\24\3\24")
buf.write("\6\24\u0099\n\24\r\24\16\24\u009a\3\24\3\24\3\25\3\25")
buf.write("\3\25\3\25\3\25\5\25\u00a4\n\25\3\26\3\26\3\26\3\26\3")
buf.write("\26\3\26\7\26\u00ac\n\26\f\26\16\26\u00af\13\26\3\27\3")
buf.write("\27\3\27\3\27\3\27\3\27\3\27\5\27\u00b8\n\27\3\30\3\30")
buf.write("\3\30\3\30\3\30\3\30\3\30\3\30\7\30\u00c2\n\30\f\30\16")
buf.write("\30\u00c5\13\30\3\31\3\31\5\31\u00c9\n\31\3\32\5\32\u00cc")
buf.write("\n\32\3\32\3\32\3\32\3\32\6\32\u00d2\n\32\r\32\16\32\u00d3")
buf.write("\3\33\3\33\3\33\5\33\u00d9\n\33\3\34\3\34\3\35\3\35\3")
buf.write("\35\7\35\u00e0\n\35\f\35\16\35\u00e3\13\35\3\36\3\36\5")
buf.write("\36\u00e7\n\36\3\37\3\37\5\37\u00eb\n\37\3 \3 \3!\3!\3")
buf.write("!\3\u00e1\4*.\"\2\4\6\b\n\f\16\20\22\24\26\30\32\34\36")
buf.write(" \"$&(*,.\60\62\64\668:<>@\2\2\2\u00f0\2F\3\2\2\2\4M\3")
buf.write("\2\2\2\6O\3\2\2\2\bX\3\2\2\2\nZ\3\2\2\2\f]\3\2\2\2\16")
buf.write("`\3\2\2\2\20e\3\2\2\2\22n\3\2\2\2\24r\3\2\2\2\26t\3\2")
buf.write("\2\2\30}\3\2\2\2\32\177\3\2\2\2\34\u0089\3\2\2\2\36\u008b")
buf.write("\3\2\2\2 \u008d\3\2\2\2\"\u0091\3\2\2\2$\u0093\3\2\2\2")
buf.write("&\u0095\3\2\2\2(\u00a3\3\2\2\2*\u00a5\3\2\2\2,\u00b7\3")
buf.write("\2\2\2.\u00b9\3\2\2\2\60\u00c8\3\2\2\2\62\u00cb\3\2\2")
buf.write("\2\64\u00d8\3\2\2\2\66\u00da\3\2\2\28\u00dc\3\2\2\2:\u00e6")
buf.write("\3\2\2\2<\u00ea\3\2\2\2>\u00ec\3\2\2\2@\u00ee\3\2\2\2")
buf.write("BE\7\r\2\2CE\5\4\3\2DB\3\2\2\2DC\3\2\2\2EH\3\2\2\2FD\3")
buf.write("\2\2\2FG\3\2\2\2GI\3\2\2\2HF\3\2\2\2IJ\7\2\2\3J\3\3\2")
buf.write("\2\2KN\5\6\4\2LN\5(\25\2MK\3\2\2\2ML\3\2\2\2N\5\3\2\2")
buf.write("\2OP\5\b\5\2PQ\7\3\2\2QR\5\"\22\2R\7\3\2\2\2SY\5\n\6\2")
buf.write("TY\5\f\7\2UY\5\16\b\2VY\5\20\t\2WY\5\22\n\2XS\3\2\2\2")
buf.write("XT\3\2\2\2XU\3\2\2\2XV\3\2\2\2XW\3\2\2\2Y\t\3\2\2\2Z[")
buf.write("\5\24\13\2[\\\7\b\2\2\\\13\3\2\2\2]^\7\n\2\2^_\5\30\r")
buf.write("\2_\r\3\2\2\2`a\5\24\13\2ab\7\n\2\2bc\5\24\13\2c\17\3")
buf.write("\2\2\2df\5\24\13\2ed\3\2\2\2ef\3\2\2\2fj\3\2\2\2gh\7\b")
buf.write("\2\2hi\7\4\2\2ik\5\24\13\2jg\3\2\2\2kl\3\2\2\2lj\3\2\2")
buf.write("\2lm\3\2\2\2m\21\3\2\2\2no\7\b\2\2o\23\3\2\2\2ps\5\30")
buf.write("\r\2qs\5\26\f\2rp\3\2\2\2rq\3\2\2\2s\25\3\2\2\2tw\5\30")
buf.write("\r\2uv\7\5\2\2vx\5\30\r\2wu\3\2\2\2xy\3\2\2\2yw\3\2\2")
buf.write("\2yz\3\2\2\2z\27\3\2\2\2{~\5\32\16\2|~\5\34\17\2}{\3\2")
buf.write("\2\2}|\3\2\2\2~\31\3\2\2\2\177\u0080\7\6\2\2\u0080\u0083")
buf.write("\5\36\20\2\u0081\u0082\7\4\2\2\u0082\u0084\5 \21\2\u0083")
buf.write("\u0081\3\2\2\2\u0083\u0084\3\2\2\2\u0084\u0085\3\2\2\2")
buf.write("\u0085\u0086\7\7\2\2\u0086\33\3\2\2\2\u0087\u008a\7\b")
buf.write("\2\2\u0088\u008a\5<\37\2\u0089\u0087\3\2\2\2\u0089\u0088")
buf.write("\3\2\2\2\u008a\35\3\2\2\2\u008b\u008c\7\b\2\2\u008c\37")
buf.write("\3\2\2\2\u008d\u008e\7\b\2\2\u008e!\3\2\2\2\u008f\u0092")
buf.write("\5$\23\2\u0090\u0092\5&\24\2\u0091\u008f\3\2\2\2\u0091")
buf.write("\u0090\3\2\2\2\u0092#\3\2\2\2\u0093\u0094\5(\25\2\u0094")
buf.write("%\3\2\2\2\u0095\u0098\7\17\2\2\u0096\u0099\7\r\2\2\u0097")
buf.write("\u0099\5\4\3\2\u0098\u0096\3\2\2\2\u0098\u0097\3\2\2\2")
buf.write("\u0099\u009a\3\2\2\2\u009a\u0098\3\2\2\2\u009a\u009b\3")
buf.write("\2\2\2\u009b\u009c\3\2\2\2\u009c\u009d\7\20\2\2\u009d")
buf.write("\'\3\2\2\2\u009e\u00a4\5*\26\2\u009f\u00a4\5,\27\2\u00a0")
buf.write("\u00a4\5.\30\2\u00a1\u00a4\5\62\32\2\u00a2\u00a4\58\35")
buf.write("\2\u00a3\u009e\3\2\2\2\u00a3\u009f\3\2\2\2\u00a3\u00a0")
buf.write("\3\2\2\2\u00a3\u00a1\3\2\2\2\u00a3\u00a2\3\2\2\2\u00a4")
buf.write(")\3\2\2\2\u00a5\u00a6\b\26\1\2\u00a6\u00a7\58\35\2\u00a7")
buf.write("\u00a8\7\b\2\2\u00a8\u00ad\3\2\2\2\u00a9\u00aa\f\4\2\2")
buf.write("\u00aa\u00ac\7\b\2\2\u00ab\u00a9\3\2\2\2\u00ac\u00af\3")
buf.write("\2\2\2\u00ad\u00ab\3\2\2\2\u00ad\u00ae\3\2\2\2\u00ae+")
buf.write("\3\2\2\2\u00af\u00ad\3\2\2\2\u00b0\u00b1\7\n\2\2\u00b1")
buf.write("\u00b8\5:\36\2\u00b2\u00b3\7\n\2\2\u00b3\u00b4\7\6\2\2")
buf.write("\u00b4\u00b5\5(\25\2\u00b5\u00b6\7\7\2\2\u00b6\u00b8\3")
buf.write("\2\2\2\u00b7\u00b0\3\2\2\2\u00b7\u00b2\3\2\2\2\u00b8-")
buf.write("\3\2\2\2\u00b9\u00ba\b\30\1\2\u00ba\u00bb\5\60\31\2\u00bb")
buf.write("\u00bc\7\n\2\2\u00bc\u00bd\5\60\31\2\u00bd\u00c3\3\2\2")
buf.write("\2\u00be\u00bf\f\4\2\2\u00bf\u00c0\7\n\2\2\u00c0\u00c2")
buf.write("\5\60\31\2\u00c1\u00be\3\2\2\2\u00c2\u00c5\3\2\2\2\u00c3")
buf.write("\u00c1\3\2\2\2\u00c3\u00c4\3\2\2\2\u00c4/\3\2\2\2\u00c5")
buf.write("\u00c3\3\2\2\2\u00c6\u00c9\5*\26\2\u00c7\u00c9\58\35\2")
buf.write("\u00c8\u00c6\3\2\2\2\u00c8\u00c7\3\2\2\2\u00c9\61\3\2")
buf.write("\2\2\u00ca\u00cc\5\64\33\2\u00cb\u00ca\3\2\2\2\u00cb\u00cc")
buf.write("\3\2\2\2\u00cc\u00d1\3\2\2\2\u00cd\u00ce\5\66\34\2\u00ce")
buf.write("\u00cf\7\4\2\2\u00cf\u00d0\5\64\33\2\u00d0\u00d2\3\2\2")
buf.write("\2\u00d1\u00cd\3\2\2\2\u00d2\u00d3\3\2\2\2\u00d3\u00d1")
buf.write("\3\2\2\2\u00d3\u00d4\3\2\2\2\u00d4\63\3\2\2\2\u00d5\u00d9")
buf.write("\5*\26\2\u00d6\u00d9\5.\30\2\u00d7\u00d9\58\35\2\u00d8")
buf.write("\u00d5\3\2\2\2\u00d8\u00d6\3\2\2\2\u00d8\u00d7\3\2\2\2")
buf.write("\u00d9\65\3\2\2\2\u00da\u00db\7\b\2\2\u00db\67\3\2\2\2")
buf.write("\u00dc\u00e1\5:\36\2\u00dd\u00de\7\5\2\2\u00de\u00e0\5")
buf.write(":\36\2\u00df\u00dd\3\2\2\2\u00e0\u00e3\3\2\2\2\u00e1\u00e2")
buf.write("\3\2\2\2\u00e1\u00df\3\2\2\2\u00e29\3\2\2\2\u00e3\u00e1")
buf.write("\3\2\2\2\u00e4\u00e7\7\b\2\2\u00e5\u00e7\5<\37\2\u00e6")
buf.write("\u00e4\3\2\2\2\u00e6\u00e5\3\2\2\2\u00e7;\3\2\2\2\u00e8")
buf.write("\u00eb\5> \2\u00e9\u00eb\5@!\2\u00ea\u00e8\3\2\2\2\u00ea")
buf.write("\u00e9\3\2\2\2\u00eb=\3\2\2\2\u00ec\u00ed\7\t\2\2\u00ed")
buf.write("?\3\2\2\2\u00ee\u00ef\7\13\2\2\u00efA\3\2\2\2\33DFMXe")
buf.write("lry}\u0083\u0089\u0091\u0098\u009a\u00a3\u00ad\u00b7\u00c3")
buf.write("\u00c8\u00cb\u00d3\u00d8\u00e1\u00e6\u00ea")
return buf.getvalue()
class TaleParser ( Parser ):
grammarFileName = "Tale.g4"
atn = ATNDeserializer().deserialize(serializedATN())
decisionsToDFA = [ DFA(ds, i) for i, ds in enumerate(atn.decisionToState) ]
sharedContextCache = PredictionContextCache()
literalNames = [ "<INVALID>", "'='", "':'", "','", "'('", "')'" ]
symbolicNames = [ "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
"<INVALID>", "<INVALID>", "IDENTIFIER", "NUMBER",
"OPERATOR", "STRING", "WS", "NEWLINE", "SKIP_", "INDENT",
"DEDENT" ]
RULE_program = 0
RULE_statement = 1
RULE_assignment = 2
RULE_form = 3
RULE_unaryForm = 4
RULE_prefixOperatorForm = 5
RULE_binaryForm = 6
RULE_keywordForm = 7
RULE_primitiveForm = 8
RULE_parameter = 9
RULE_tupleParameter = 10
RULE_singleParameter = 11
RULE_simpleParameter = 12
RULE_patternMatchingParameter = 13
RULE_parameterName = 14
RULE_parameterType = 15
RULE_assignmentBody = 16
RULE_simpleAssignmentBody = 17
RULE_compoundAssignmentBody = 18
RULE_expression = 19
RULE_unary = 20
RULE_prefixOperator = 21
RULE_binary = 22
RULE_binaryOperand = 23
RULE_keyword = 24
RULE_keywordArgument = 25
RULE_keywordName = 26
RULE_primitive = 27
RULE_primitiveItem = 28
RULE_literal = 29
RULE_intLiteral = 30
RULE_stringLiteral = 31
ruleNames = [ "program", "statement", "assignment", "form", "unaryForm",
"prefixOperatorForm", "binaryForm", "keywordForm", "primitiveForm",
"parameter", "tupleParameter", "singleParameter", "simpleParameter",
"patternMatchingParameter", "parameterName", "parameterType",
"assignmentBody", "simpleAssignmentBody", "compoundAssignmentBody",
"expression", "unary", "prefixOperator", "binary", "binaryOperand",
"keyword", "keywordArgument", "keywordName", "primitive",
"primitiveItem", "literal", "intLiteral", "stringLiteral" ]
EOF = Token.EOF
T__0=1
T__1=2
T__2=3
T__3=4
T__4=5
IDENTIFIER=6
NUMBER=7
OPERATOR=8
STRING=9
WS=10
NEWLINE=11
SKIP_=12
INDENT=13
DEDENT=14
def __init__(self, input:TokenStream, output:TextIO = sys.stdout):
super().__init__(input, output)
self.checkVersion("4.8")
self._interp = ParserATNSimulator(self, self.atn, self.decisionsToDFA, self.sharedContextCache)
self._predicates = None
class ProgramContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def EOF(self):
return self.getToken(TaleParser.EOF, 0)
def NEWLINE(self, i:int=None):
if i is None:
return self.getTokens(TaleParser.NEWLINE)
else:
return self.getToken(TaleParser.NEWLINE, i)
def statement(self, i:int=None):
if i is None:
return self.getTypedRuleContexts(TaleParser.StatementContext)
else:
return self.getTypedRuleContext(TaleParser.StatementContext,i)
def getRuleIndex(self):
return TaleParser.RULE_program
def enterRule(self, listener:ParseTreeListener):
if hasattr( listener, "enterProgram" ):
listener.enterProgram(self)
def exitRule(self, listener:ParseTreeListener):
if hasattr( listener, "exitProgram" ):
listener.exitProgram(self)
def program(self):
localctx = TaleParser.ProgramContext(self, self._ctx, self.state)
self.enterRule(localctx, 0, self.RULE_program)
self._la = 0 # Token type
try:
self.enterOuterAlt(localctx, 1)
self.state = 68
self._errHandler.sync(self)
_la = self._input.LA(1)
while (((_la) & ~0x3f) == 0 and ((1 << _la) & ((1 << TaleParser.T__3) | (1 << TaleParser.IDENTIFIER) | (1 << TaleParser.NUMBER) | (1 << TaleParser.OPERATOR) | (1 << TaleParser.STRING) | (1 << TaleParser.NEWLINE))) != 0):
self.state = 66
self._errHandler.sync(self)
token = self._input.LA(1)
if token in [TaleParser.NEWLINE]:
self.state = 64
self.match(TaleParser.NEWLINE)
pass
elif token in [TaleParser.T__3, TaleParser.IDENTIFIER, TaleParser.NUMBER, TaleParser.OPERATOR, TaleParser.STRING]:
self.state = 65
self.statement()
pass
else:
raise NoViableAltException(self)
self.state = 70
self._errHandler.sync(self)
_la = self._input.LA(1)
self.state = 71
self.match(TaleParser.EOF)
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class StatementContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def assignment(self):
return self.getTypedRuleContext(TaleParser.AssignmentContext,0)
def expression(self):
return self.getTypedRuleContext(TaleParser.ExpressionContext,0)
def getRuleIndex(self):
return TaleParser.RULE_statement
def enterRule(self, listener:ParseTreeListener):
if hasattr( listener, "enterStatement" ):
listener.enterStatement(self)
def exitRule(self, listener:ParseTreeListener):
if hasattr( listener, "exitStatement" ):
listener.exitStatement(self)
def statement(self):
localctx = TaleParser.StatementContext(self, self._ctx, self.state)
self.enterRule(localctx, 2, self.RULE_statement)
try:
self.state = 75
self._errHandler.sync(self)
la_ = self._interp.adaptivePredict(self._input,2,self._ctx)
if la_ == 1:
self.enterOuterAlt(localctx, 1)
self.state = 73
self.assignment()
pass
elif la_ == 2:
self.enterOuterAlt(localctx, 2)
self.state = 74
self.expression()
pass
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class AssignmentContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def form(self):
return self.getTypedRuleContext(TaleParser.FormContext,0)
def assignmentBody(self):
return self.getTypedRuleContext(TaleParser.AssignmentBodyContext,0)
def getRuleIndex(self):
return TaleParser.RULE_assignment
def enterRule(self, listener:ParseTreeListener):
if hasattr( listener, "enterAssignment" ):
listener.enterAssignment(self)
def exitRule(self, listener:ParseTreeListener):
if hasattr( listener, "exitAssignment" ):
listener.exitAssignment(self)
def assignment(self):
localctx = TaleParser.AssignmentContext(self, self._ctx, self.state)
self.enterRule(localctx, 4, self.RULE_assignment)
try:
self.enterOuterAlt(localctx, 1)
self.state = 77
self.form()
self.state = 78
self.match(TaleParser.T__0)
self.state = 79
self.assignmentBody()
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class FormContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def unaryForm(self):
return self.getTypedRuleContext(TaleParser.UnaryFormContext,0)
def prefixOperatorForm(self):
return self.getTypedRuleContext(TaleParser.PrefixOperatorFormContext,0)
def binaryForm(self):
return self.getTypedRuleContext(TaleParser.BinaryFormContext,0)
def keywordForm(self):
return self.getTypedRuleContext(TaleParser.KeywordFormContext,0)
def primitiveForm(self):
return self.getTypedRuleContext(TaleParser.PrimitiveFormContext,0)
def getRuleIndex(self):
return TaleParser.RULE_form
def enterRule(self, listener:ParseTreeListener):
if hasattr( listener, "enterForm" ):
listener.enterForm(self)
def exitRule(self, listener:ParseTreeListener):
if hasattr( listener, "exitForm" ):
listener.exitForm(self)
def form(self):
localctx = TaleParser.FormContext(self, self._ctx, self.state)
self.enterRule(localctx, 6, self.RULE_form)
try:
self.state = 86
self._errHandler.sync(self)
la_ = self._interp.adaptivePredict(self._input,3,self._ctx)
if la_ == 1:
self.enterOuterAlt(localctx, 1)
self.state = 81
self.unaryForm()
pass
elif la_ == 2:
self.enterOuterAlt(localctx, 2)
self.state = 82
self.prefixOperatorForm()
pass
elif la_ == 3:
self.enterOuterAlt(localctx, 3)
self.state = 83
self.binaryForm()
pass
elif la_ == 4:
self.enterOuterAlt(localctx, 4)
self.state = 84
self.keywordForm()
pass
elif la_ == 5:
self.enterOuterAlt(localctx, 5)
self.state = 85
self.primitiveForm()
pass
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class UnaryFormContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def parameter(self):
return self.getTypedRuleContext(TaleParser.ParameterContext,0)
def IDENTIFIER(self):
return self.getToken(TaleParser.IDENTIFIER, 0)
def getRuleIndex(self):
return TaleParser.RULE_unaryForm
def enterRule(self, listener:ParseTreeListener):
if hasattr( listener, "enterUnaryForm" ):
listener.enterUnaryForm(self)
def exitRule(self, listener:ParseTreeListener):
if hasattr( listener, "exitUnaryForm" ):
listener.exitUnaryForm(self)
def unaryForm(self):
localctx = TaleParser.UnaryFormContext(self, self._ctx, self.state)
self.enterRule(localctx, 8, self.RULE_unaryForm)
try:
self.enterOuterAlt(localctx, 1)
self.state = 88
self.parameter()
self.state = 89
self.match(TaleParser.IDENTIFIER)
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class PrefixOperatorFormContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def OPERATOR(self):
return self.getToken(TaleParser.OPERATOR, 0)
def singleParameter(self):
return self.getTypedRuleContext(TaleParser.SingleParameterContext,0)
def getRuleIndex(self):
return TaleParser.RULE_prefixOperatorForm
def enterRule(self, listener:ParseTreeListener):
if hasattr( listener, "enterPrefixOperatorForm" ):
listener.enterPrefixOperatorForm(self)
def exitRule(self, listener:ParseTreeListener):
if hasattr( listener, "exitPrefixOperatorForm" ):
listener.exitPrefixOperatorForm(self)
def prefixOperatorForm(self):
localctx = TaleParser.PrefixOperatorFormContext(self, self._ctx, self.state)
self.enterRule(localctx, 10, self.RULE_prefixOperatorForm)
try:
self.enterOuterAlt(localctx, 1)
self.state = 91
self.match(TaleParser.OPERATOR)
self.state = 92
self.singleParameter()
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class BinaryFormContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def parameter(self, i:int=None):
if i is None:
return self.getTypedRuleContexts(TaleParser.ParameterContext)
else:
return self.getTypedRuleContext(TaleParser.ParameterContext,i)
def OPERATOR(self):
return self.getToken(TaleParser.OPERATOR, 0)
def getRuleIndex(self):
return TaleParser.RULE_binaryForm
def enterRule(self, listener:ParseTreeListener):
if hasattr( listener, "enterBinaryForm" ):
listener.enterBinaryForm(self)
def exitRule(self, listener:ParseTreeListener):
if hasattr( listener, "exitBinaryForm" ):
listener.exitBinaryForm(self)
def binaryForm(self):
localctx = TaleParser.BinaryFormContext(self, self._ctx, self.state)
self.enterRule(localctx, 12, self.RULE_binaryForm)
try:
self.enterOuterAlt(localctx, 1)
self.state = 94
self.parameter()
self.state = 95
self.match(TaleParser.OPERATOR)
self.state = 96
self.parameter()
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class KeywordFormContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def parameter(self, i:int=None):
if i is None:
return self.getTypedRuleContexts(TaleParser.ParameterContext)
else:
return self.getTypedRuleContext(TaleParser.ParameterContext,i)
def IDENTIFIER(self, i:int=None):
if i is None:
return self.getTokens(TaleParser.IDENTIFIER)
else:
return self.getToken(TaleParser.IDENTIFIER, i)
def getRuleIndex(self):
return TaleParser.RULE_keywordForm
def enterRule(self, listener:ParseTreeListener):
if hasattr( listener, "enterKeywordForm" ):
listener.enterKeywordForm(self)
def exitRule(self, listener:ParseTreeListener):
if hasattr( listener, "exitKeywordForm" ):
listener.exitKeywordForm(self)
def keywordForm(self):
localctx = TaleParser.KeywordFormContext(self, self._ctx, self.state)
self.enterRule(localctx, 14, self.RULE_keywordForm)
self._la = 0 # Token type
try:
self.enterOuterAlt(localctx, 1)
self.state = 99
self._errHandler.sync(self)
la_ = self._interp.adaptivePredict(self._input,4,self._ctx)
if la_ == 1:
self.state = 98
self.parameter()
self.state = 104
self._errHandler.sync(self)
_la = self._input.LA(1)
while True:
self.state = 101
self.match(TaleParser.IDENTIFIER)
self.state = 102
self.match(TaleParser.T__1)
self.state = 103
self.parameter()
self.state = 106
self._errHandler.sync(self)
_la = self._input.LA(1)
if not (_la==TaleParser.IDENTIFIER):
break
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class PrimitiveFormContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def IDENTIFIER(self):
return self.getToken(TaleParser.IDENTIFIER, 0)
def getRuleIndex(self):
return TaleParser.RULE_primitiveForm
def enterRule(self, listener:ParseTreeListener):
if hasattr( listener, "enterPrimitiveForm" ):
listener.enterPrimitiveForm(self)
def exitRule(self, listener:ParseTreeListener):
if hasattr( listener, "exitPrimitiveForm" ):
listener.exitPrimitiveForm(self)
def primitiveForm(self):
localctx = TaleParser.PrimitiveFormContext(self, self._ctx, self.state)
self.enterRule(localctx, 16, self.RULE_primitiveForm)
try:
self.enterOuterAlt(localctx, 1)
self.state = 108
self.match(TaleParser.IDENTIFIER)
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class ParameterContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def singleParameter(self):
return self.getTypedRuleContext(TaleParser.SingleParameterContext,0)
def tupleParameter(self):
return self.getTypedRuleContext(TaleParser.TupleParameterContext,0)
def getRuleIndex(self):
return TaleParser.RULE_parameter
def enterRule(self, listener:ParseTreeListener):
if hasattr( listener, "enterParameter" ):
listener.enterParameter(self)
def exitRule(self, listener:ParseTreeListener):
if hasattr( listener, "exitParameter" ):
listener.exitParameter(self)
def parameter(self):
localctx = TaleParser.ParameterContext(self, self._ctx, self.state)
self.enterRule(localctx, 18, self.RULE_parameter)
try:
self.state = 112
self._errHandler.sync(self)
la_ = self._interp.adaptivePredict(self._input,6,self._ctx)
if la_ == 1:
self.enterOuterAlt(localctx, 1)
self.state = 110
self.singleParameter()
pass
elif la_ == 2:
self.enterOuterAlt(localctx, 2)
self.state = 111
self.tupleParameter()
pass
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class TupleParameterContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def singleParameter(self, i:int=None):
if i is None:
return self.getTypedRuleContexts(TaleParser.SingleParameterContext)
else:
return self.getTypedRuleContext(TaleParser.SingleParameterContext,i)
def getRuleIndex(self):
return TaleParser.RULE_tupleParameter
def enterRule(self, listener:ParseTreeListener):
if hasattr( listener, "enterTupleParameter" ):
listener.enterTupleParameter(self)
def exitRule(self, listener:ParseTreeListener):
if hasattr( listener, "exitTupleParameter" ):
listener.exitTupleParameter(self)
def tupleParameter(self):
localctx = TaleParser.TupleParameterContext(self, self._ctx, self.state)
self.enterRule(localctx, 20, self.RULE_tupleParameter)
self._la = 0 # Token type
try:
self.enterOuterAlt(localctx, 1)
self.state = 114
self.singleParameter()
self.state = 117
self._errHandler.sync(self)
_la = self._input.LA(1)
while True:
self.state = 115
self.match(TaleParser.T__2)
self.state = 116
self.singleParameter()
self.state = 119
self._errHandler.sync(self)
_la = self._input.LA(1)
if not (_la==TaleParser.T__2):
break
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class SingleParameterContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def simpleParameter(self):
return self.getTypedRuleContext(TaleParser.SimpleParameterContext,0)
def patternMatchingParameter(self):
return self.getTypedRuleContext(TaleParser.PatternMatchingParameterContext,0)
def getRuleIndex(self):
return TaleParser.RULE_singleParameter
def enterRule(self, listener:ParseTreeListener):
if hasattr( listener, "enterSingleParameter" ):
listener.enterSingleParameter(self)
def exitRule(self, listener:ParseTreeListener):
if hasattr( listener, "exitSingleParameter" ):
listener.exitSingleParameter(self)
def singleParameter(self):
localctx = TaleParser.SingleParameterContext(self, self._ctx, self.state)
self.enterRule(localctx, 22, self.RULE_singleParameter)
try:
self.state = 123
self._errHandler.sync(self)
token = self._input.LA(1)
if token in [TaleParser.T__3]:
self.enterOuterAlt(localctx, 1)
self.state = 121
self.simpleParameter()
pass
elif token in [TaleParser.IDENTIFIER, TaleParser.NUMBER, TaleParser.STRING]:
self.enterOuterAlt(localctx, 2)
self.state = 122
self.patternMatchingParameter()
pass
else:
raise NoViableAltException(self)
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class SimpleParameterContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def parameterName(self):
return self.getTypedRuleContext(TaleParser.ParameterNameContext,0)
def parameterType(self):
return self.getTypedRuleContext(TaleParser.ParameterTypeContext,0)
def getRuleIndex(self):
return TaleParser.RULE_simpleParameter
def enterRule(self, listener:ParseTreeListener):
if hasattr( listener, "enterSimpleParameter" ):
listener.enterSimpleParameter(self)
def exitRule(self, listener:ParseTreeListener):
if hasattr( listener, "exitSimpleParameter" ):
listener.exitSimpleParameter(self)
def simpleParameter(self):
localctx = TaleParser.SimpleParameterContext(self, self._ctx, self.state)
self.enterRule(localctx, 24, self.RULE_simpleParameter)
self._la = 0 # Token type
try:
self.enterOuterAlt(localctx, 1)
self.state = 125
self.match(TaleParser.T__3)
self.state = 126
self.parameterName()
self.state = 129
self._errHandler.sync(self)
_la = self._input.LA(1)
if _la==TaleParser.T__1:
self.state = 127
self.match(TaleParser.T__1)
self.state = 128
self.parameterType()
self.state = 131
self.match(TaleParser.T__4)
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class PatternMatchingParameterContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def IDENTIFIER(self):
return self.getToken(TaleParser.IDENTIFIER, 0)
def literal(self):
return self.getTypedRuleContext(TaleParser.LiteralContext,0)
def getRuleIndex(self):
return TaleParser.RULE_patternMatchingParameter
def enterRule(self, listener:ParseTreeListener):
if hasattr( listener, "enterPatternMatchingParameter" ):
listener.enterPatternMatchingParameter(self)
def exitRule(self, listener:ParseTreeListener):
if hasattr( listener, "exitPatternMatchingParameter" ):
listener.exitPatternMatchingParameter(self)
def patternMatchingParameter(self):
localctx = TaleParser.PatternMatchingParameterContext(self, self._ctx, self.state)
self.enterRule(localctx, 26, self.RULE_patternMatchingParameter)
try:
self.state = 135
self._errHandler.sync(self)
token = self._input.LA(1)
if token in [TaleParser.IDENTIFIER]:
self.enterOuterAlt(localctx, 1)
self.state = 133
self.match(TaleParser.IDENTIFIER)
pass
elif token in [TaleParser.NUMBER, TaleParser.STRING]:
self.enterOuterAlt(localctx, 2)
self.state = 134
self.literal()
pass
else:
raise NoViableAltException(self)
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class ParameterNameContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def IDENTIFIER(self):
return self.getToken(TaleParser.IDENTIFIER, 0)
def getRuleIndex(self):
return TaleParser.RULE_parameterName
def enterRule(self, listener:ParseTreeListener):
if hasattr( listener, "enterParameterName" ):
listener.enterParameterName(self)
def exitRule(self, listener:ParseTreeListener):
if hasattr( listener, "exitParameterName" ):
listener.exitParameterName(self)
def parameterName(self):
localctx = TaleParser.ParameterNameContext(self, self._ctx, self.state)
self.enterRule(localctx, 28, self.RULE_parameterName)
try:
self.enterOuterAlt(localctx, 1)
self.state = 137
self.match(TaleParser.IDENTIFIER)
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class ParameterTypeContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def IDENTIFIER(self):
return self.getToken(TaleParser.IDENTIFIER, 0)
def getRuleIndex(self):
return TaleParser.RULE_parameterType
def enterRule(self, listener:ParseTreeListener):
if hasattr( listener, "enterParameterType" ):
listener.enterParameterType(self)
def exitRule(self, listener:ParseTreeListener):
if hasattr( listener, "exitParameterType" ):
listener.exitParameterType(self)
def parameterType(self):
localctx = TaleParser.ParameterTypeContext(self, self._ctx, self.state)
self.enterRule(localctx, 30, self.RULE_parameterType)
try:
self.enterOuterAlt(localctx, 1)
self.state = 139
self.match(TaleParser.IDENTIFIER)
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class AssignmentBodyContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def simpleAssignmentBody(self):
return self.getTypedRuleContext(TaleParser.SimpleAssignmentBodyContext,0)
def compoundAssignmentBody(self):
return self.getTypedRuleContext(TaleParser.CompoundAssignmentBodyContext,0)
def getRuleIndex(self):
return TaleParser.RULE_assignmentBody
def enterRule(self, listener:ParseTreeListener):
if hasattr( listener, "enterAssignmentBody" ):
listener.enterAssignmentBody(self)
def exitRule(self, listener:ParseTreeListener):
if hasattr( listener, "exitAssignmentBody" ):
listener.exitAssignmentBody(self)
def assignmentBody(self):
localctx = TaleParser.AssignmentBodyContext(self, self._ctx, self.state)
self.enterRule(localctx, 32, self.RULE_assignmentBody)
try:
self.state = 143
self._errHandler.sync(self)
token = self._input.LA(1)
if token in [TaleParser.IDENTIFIER, TaleParser.NUMBER, TaleParser.OPERATOR, TaleParser.STRING]:
self.enterOuterAlt(localctx, 1)
self.state = 141
self.simpleAssignmentBody()
pass
elif token in [TaleParser.INDENT]:
self.enterOuterAlt(localctx, 2)
self.state = 142
self.compoundAssignmentBody()
pass
else:
raise NoViableAltException(self)
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class SimpleAssignmentBodyContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def expression(self):
return self.getTypedRuleContext(TaleParser.ExpressionContext,0)
def getRuleIndex(self):
return TaleParser.RULE_simpleAssignmentBody
def enterRule(self, listener:ParseTreeListener):
if hasattr( listener, "enterSimpleAssignmentBody" ):
listener.enterSimpleAssignmentBody(self)
def exitRule(self, listener:ParseTreeListener):
if hasattr( listener, "exitSimpleAssignmentBody" ):
listener.exitSimpleAssignmentBody(self)
def simpleAssignmentBody(self):
localctx = TaleParser.SimpleAssignmentBodyContext(self, self._ctx, self.state)
self.enterRule(localctx, 34, self.RULE_simpleAssignmentBody)
try:
self.enterOuterAlt(localctx, 1)
self.state = 145
self.expression()
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class CompoundAssignmentBodyContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def INDENT(self):
return self.getToken(TaleParser.INDENT, 0)
def DEDENT(self):
return self.getToken(TaleParser.DEDENT, 0)
def NEWLINE(self, i:int=None):
if i is None:
return self.getTokens(TaleParser.NEWLINE)
else:
return self.getToken(TaleParser.NEWLINE, i)
def statement(self, i:int=None):
if i is None:
return self.getTypedRuleContexts(TaleParser.StatementContext)
else:
return self.getTypedRuleContext(TaleParser.StatementContext,i)
def getRuleIndex(self):
return TaleParser.RULE_compoundAssignmentBody
def enterRule(self, listener:ParseTreeListener):
if hasattr( listener, "enterCompoundAssignmentBody" ):
listener.enterCompoundAssignmentBody(self)
def exitRule(self, listener:ParseTreeListener):
if hasattr( listener, "exitCompoundAssignmentBody" ):
listener.exitCompoundAssignmentBody(self)
def compoundAssignmentBody(self):
localctx = TaleParser.CompoundAssignmentBodyContext(self, self._ctx, self.state)
self.enterRule(localctx, 36, self.RULE_compoundAssignmentBody)
self._la = 0 # Token type
try:
self.enterOuterAlt(localctx, 1)
self.state = 147
self.match(TaleParser.INDENT)
self.state = 150
self._errHandler.sync(self)
_la = self._input.LA(1)
while True:
self.state = 150
self._errHandler.sync(self)
token = self._input.LA(1)
if token in [TaleParser.NEWLINE]:
self.state = 148
self.match(TaleParser.NEWLINE)
pass
elif token in [TaleParser.T__3, TaleParser.IDENTIFIER, TaleParser.NUMBER, TaleParser.OPERATOR, TaleParser.STRING]:
self.state = 149
self.statement()
pass
else:
raise NoViableAltException(self)
self.state = 152
self._errHandler.sync(self)
_la = self._input.LA(1)
if not ((((_la) & ~0x3f) == 0 and ((1 << _la) & ((1 << TaleParser.T__3) | (1 << TaleParser.IDENTIFIER) | (1 << TaleParser.NUMBER) | (1 << TaleParser.OPERATOR) | (1 << TaleParser.STRING) | (1 << TaleParser.NEWLINE))) != 0)):
break
self.state = 154
self.match(TaleParser.DEDENT)
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
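The exit condition of the statement loop above packs the follow set into a single 64-bit integer: each token type contributes one bit, and the `(_la & ~0x3f) == 0` guard checks that the token index fits in the mask before shifting. A self-contained sketch of that membership test, using made-up token indices rather than TaleParser's real values:

```python
# Assumed token type indices for illustration (not TaleParser's real values)
T_IDENTIFIER, T_NUMBER, T_STRING = 5, 6, 8

# One bit per token type that may continue the loop
FOLLOW_SET = (1 << T_IDENTIFIER) | (1 << T_NUMBER) | (1 << T_STRING)

def in_follow_set(token_type: int) -> bool:
    # Mirrors ANTLR's "(_la & ~0x3f) == 0" guard: a 64-bit mask can only
    # represent token indices 0..63, so larger indices are never members.
    return (token_type & ~0x3F) == 0 and ((1 << token_type) & FOLLOW_SET) != 0

print(in_follow_set(T_NUMBER))  # True
print(in_follow_set(7))         # False: bit 7 is not in the set
print(in_follow_set(70))        # False: index does not fit a 64-bit mask
```

Grammars with more than 64 token types get several such masks, one per 64-token window, which is why the generated expression carries the `& ~0x3f` range check.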
class ExpressionContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def unary(self):
return self.getTypedRuleContext(TaleParser.UnaryContext,0)
def prefixOperator(self):
return self.getTypedRuleContext(TaleParser.PrefixOperatorContext,0)
def binary(self):
return self.getTypedRuleContext(TaleParser.BinaryContext,0)
def keyword(self):
return self.getTypedRuleContext(TaleParser.KeywordContext,0)
def primitive(self):
return self.getTypedRuleContext(TaleParser.PrimitiveContext,0)
def getRuleIndex(self):
return TaleParser.RULE_expression
def enterRule(self, listener:ParseTreeListener):
if hasattr( listener, "enterExpression" ):
listener.enterExpression(self)
def exitRule(self, listener:ParseTreeListener):
if hasattr( listener, "exitExpression" ):
listener.exitExpression(self)
def expression(self):
localctx = TaleParser.ExpressionContext(self, self._ctx, self.state)
self.enterRule(localctx, 38, self.RULE_expression)
try:
self.state = 161
self._errHandler.sync(self)
la_ = self._interp.adaptivePredict(self._input,14,self._ctx)
if la_ == 1:
self.enterOuterAlt(localctx, 1)
self.state = 156
self.unary(0)
pass
elif la_ == 2:
self.enterOuterAlt(localctx, 2)
self.state = 157
self.prefixOperator()
pass
elif la_ == 3:
self.enterOuterAlt(localctx, 3)
self.state = 158
self.binary(0)
pass
elif la_ == 4:
self.enterOuterAlt(localctx, 4)
self.state = 159
self.keyword()
pass
elif la_ == 5:
self.enterOuterAlt(localctx, 5)
self.state = 160
self.primitive()
pass
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class UnaryContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def primitive(self):
return self.getTypedRuleContext(TaleParser.PrimitiveContext,0)
def IDENTIFIER(self):
return self.getToken(TaleParser.IDENTIFIER, 0)
def unary(self):
return self.getTypedRuleContext(TaleParser.UnaryContext,0)
def getRuleIndex(self):
return TaleParser.RULE_unary
def enterRule(self, listener:ParseTreeListener):
if hasattr( listener, "enterUnary" ):
listener.enterUnary(self)
def exitRule(self, listener:ParseTreeListener):
if hasattr( listener, "exitUnary" ):
listener.exitUnary(self)
def unary(self, _p:int=0):
_parentctx = self._ctx
_parentState = self.state
localctx = TaleParser.UnaryContext(self, self._ctx, _parentState)
_prevctx = localctx
_startState = 40
self.enterRecursionRule(localctx, 40, self.RULE_unary, _p)
try:
self.enterOuterAlt(localctx, 1)
self.state = 164
self.primitive()
self.state = 165
self.match(TaleParser.IDENTIFIER)
self._ctx.stop = self._input.LT(-1)
self.state = 171
self._errHandler.sync(self)
_alt = self._interp.adaptivePredict(self._input,15,self._ctx)
while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
if _alt==1:
if self._parseListeners is not None:
self.triggerExitRuleEvent()
_prevctx = localctx
localctx = TaleParser.UnaryContext(self, _parentctx, _parentState)
self.pushNewRecursionContext(localctx, _startState, self.RULE_unary)
self.state = 167
if not self.precpred(self._ctx, 2):
from antlr4.error.Errors import FailedPredicateException
raise FailedPredicateException(self, "self.precpred(self._ctx, 2)")
self.state = 168
self.match(TaleParser.IDENTIFIER)
self.state = 173
self._errHandler.sync(self)
_alt = self._interp.adaptivePredict(self._input,15,self._ctx)
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.unrollRecursionContexts(_parentctx)
return localctx
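The `unary` rule is left-recursive, and ANTLR has rewritten it into the loop above: match `primitive IDENTIFIER` once, then keep consuming trailing `IDENTIFIER` tokens while the precedence predicate holds, wrapping a new context around the previous one each time. A simplified, runtime-free sketch of that flattening (the token tuples are assumed stand-ins for lexer output):

```python
def parse_unary(tokens):
    """Sketch of ANTLR's left-recursion elimination for
         unary : primitive IDENTIFIER | unary IDENTIFIER ;
       which the generated code flattens to: primitive IDENTIFIER IDENTIFIER*"""
    node = ("primitive", tokens[0][1])
    assert tokens[1][0] == "IDENTIFIER", "rule requires at least one trailing identifier"
    i = 1
    while i < len(tokens) and tokens[i][0] == "IDENTIFIER":
        # Each extra identifier wraps the tree built so far, giving the
        # left-associative shape the original left-recursive rule implies.
        node = ("unary", node, tokens[i][1])
        i += 1
    return node

tree = parse_unary([("NUMBER", "3"), ("IDENTIFIER", "negated"), ("IDENTIFIER", "doubled")])
print(tree)  # ('unary', ('unary', ('primitive', '3'), 'negated'), 'doubled')
```

The `pushNewRecursionContext`/`unrollRecursionContexts` calls in the generated method perform the same re-parenting on real `UnaryContext` objects.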
class PrefixOperatorContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def OPERATOR(self):
return self.getToken(TaleParser.OPERATOR, 0)
def primitiveItem(self):
return self.getTypedRuleContext(TaleParser.PrimitiveItemContext,0)
def expression(self):
return self.getTypedRuleContext(TaleParser.ExpressionContext,0)
def getRuleIndex(self):
return TaleParser.RULE_prefixOperator
def enterRule(self, listener:ParseTreeListener):
if hasattr( listener, "enterPrefixOperator" ):
listener.enterPrefixOperator(self)
def exitRule(self, listener:ParseTreeListener):
if hasattr( listener, "exitPrefixOperator" ):
listener.exitPrefixOperator(self)
def prefixOperator(self):
localctx = TaleParser.PrefixOperatorContext(self, self._ctx, self.state)
self.enterRule(localctx, 42, self.RULE_prefixOperator)
try:
self.state = 181
self._errHandler.sync(self)
la_ = self._interp.adaptivePredict(self._input,16,self._ctx)
if la_ == 1:
self.enterOuterAlt(localctx, 1)
self.state = 174
self.match(TaleParser.OPERATOR)
self.state = 175
self.primitiveItem()
pass
elif la_ == 2:
self.enterOuterAlt(localctx, 2)
self.state = 176
self.match(TaleParser.OPERATOR)
self.state = 177
self.match(TaleParser.T__3)
self.state = 178
self.expression()
self.state = 179
self.match(TaleParser.T__4)
pass
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class BinaryContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def binaryOperand(self, i:int=None):
if i is None:
return self.getTypedRuleContexts(TaleParser.BinaryOperandContext)
else:
return self.getTypedRuleContext(TaleParser.BinaryOperandContext,i)
def OPERATOR(self):
return self.getToken(TaleParser.OPERATOR, 0)
def binary(self):
return self.getTypedRuleContext(TaleParser.BinaryContext,0)
def getRuleIndex(self):
return TaleParser.RULE_binary
def enterRule(self, listener:ParseTreeListener):
if hasattr( listener, "enterBinary" ):
listener.enterBinary(self)
def exitRule(self, listener:ParseTreeListener):
if hasattr( listener, "exitBinary" ):
listener.exitBinary(self)
def binary(self, _p:int=0):
_parentctx = self._ctx
_parentState = self.state
localctx = TaleParser.BinaryContext(self, self._ctx, _parentState)
_prevctx = localctx
_startState = 44
self.enterRecursionRule(localctx, 44, self.RULE_binary, _p)
try:
self.enterOuterAlt(localctx, 1)
self.state = 184
self.binaryOperand()
self.state = 185
self.match(TaleParser.OPERATOR)
self.state = 186
self.binaryOperand()
self._ctx.stop = self._input.LT(-1)
self.state = 193
self._errHandler.sync(self)
_alt = self._interp.adaptivePredict(self._input,17,self._ctx)
while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
if _alt==1:
if self._parseListeners is not None:
self.triggerExitRuleEvent()
_prevctx = localctx
localctx = TaleParser.BinaryContext(self, _parentctx, _parentState)
self.pushNewRecursionContext(localctx, _startState, self.RULE_binary)
self.state = 188
if not self.precpred(self._ctx, 2):
from antlr4.error.Errors import FailedPredicateException
raise FailedPredicateException(self, "self.precpred(self._ctx, 2)")
self.state = 189
self.match(TaleParser.OPERATOR)
self.state = 190
self.binaryOperand()
self.state = 195
self._errHandler.sync(self)
_alt = self._interp.adaptivePredict(self._input,17,self._ctx)
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.unrollRecursionContexts(_parentctx)
return localctx
class BinaryOperandContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def unary(self):
return self.getTypedRuleContext(TaleParser.UnaryContext,0)
def primitive(self):
return self.getTypedRuleContext(TaleParser.PrimitiveContext,0)
def getRuleIndex(self):
return TaleParser.RULE_binaryOperand
def enterRule(self, listener:ParseTreeListener):
if hasattr( listener, "enterBinaryOperand" ):
listener.enterBinaryOperand(self)
def exitRule(self, listener:ParseTreeListener):
if hasattr( listener, "exitBinaryOperand" ):
listener.exitBinaryOperand(self)
def binaryOperand(self):
localctx = TaleParser.BinaryOperandContext(self, self._ctx, self.state)
self.enterRule(localctx, 46, self.RULE_binaryOperand)
try:
self.state = 198
self._errHandler.sync(self)
la_ = self._interp.adaptivePredict(self._input,18,self._ctx)
if la_ == 1:
self.enterOuterAlt(localctx, 1)
self.state = 196
self.unary(0)
pass
elif la_ == 2:
self.enterOuterAlt(localctx, 2)
self.state = 197
self.primitive()
pass
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class KeywordContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def keywordArgument(self, i:int=None):
if i is None:
return self.getTypedRuleContexts(TaleParser.KeywordArgumentContext)
else:
return self.getTypedRuleContext(TaleParser.KeywordArgumentContext,i)
def keywordName(self, i:int=None):
if i is None:
return self.getTypedRuleContexts(TaleParser.KeywordNameContext)
else:
return self.getTypedRuleContext(TaleParser.KeywordNameContext,i)
def getRuleIndex(self):
return TaleParser.RULE_keyword
def enterRule(self, listener:ParseTreeListener):
if hasattr( listener, "enterKeyword" ):
listener.enterKeyword(self)
def exitRule(self, listener:ParseTreeListener):
if hasattr( listener, "exitKeyword" ):
listener.exitKeyword(self)
def keyword(self):
localctx = TaleParser.KeywordContext(self, self._ctx, self.state)
self.enterRule(localctx, 48, self.RULE_keyword)
try:
self.enterOuterAlt(localctx, 1)
self.state = 201
self._errHandler.sync(self)
la_ = self._interp.adaptivePredict(self._input,19,self._ctx)
if la_ == 1:
self.state = 200
self.keywordArgument()
self.state = 207
self._errHandler.sync(self)
_alt = 1
while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
if _alt == 1:
self.state = 203
self.keywordName()
self.state = 204
self.match(TaleParser.T__1)
self.state = 205
self.keywordArgument()
else:
raise NoViableAltException(self)
self.state = 209
self._errHandler.sync(self)
_alt = self._interp.adaptivePredict(self._input,20,self._ctx)
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class KeywordArgumentContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def unary(self):
return self.getTypedRuleContext(TaleParser.UnaryContext,0)
def binary(self):
return self.getTypedRuleContext(TaleParser.BinaryContext,0)
def primitive(self):
return self.getTypedRuleContext(TaleParser.PrimitiveContext,0)
def getRuleIndex(self):
return TaleParser.RULE_keywordArgument
def enterRule(self, listener:ParseTreeListener):
if hasattr( listener, "enterKeywordArgument" ):
listener.enterKeywordArgument(self)
def exitRule(self, listener:ParseTreeListener):
if hasattr( listener, "exitKeywordArgument" ):
listener.exitKeywordArgument(self)
def keywordArgument(self):
localctx = TaleParser.KeywordArgumentContext(self, self._ctx, self.state)
self.enterRule(localctx, 50, self.RULE_keywordArgument)
try:
self.state = 214
self._errHandler.sync(self)
la_ = self._interp.adaptivePredict(self._input,21,self._ctx)
if la_ == 1:
self.enterOuterAlt(localctx, 1)
self.state = 211
self.unary(0)
pass
elif la_ == 2:
self.enterOuterAlt(localctx, 2)
self.state = 212
self.binary(0)
pass
elif la_ == 3:
self.enterOuterAlt(localctx, 3)
self.state = 213
self.primitive()
pass
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class KeywordNameContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def IDENTIFIER(self):
return self.getToken(TaleParser.IDENTIFIER, 0)
def getRuleIndex(self):
return TaleParser.RULE_keywordName
def enterRule(self, listener:ParseTreeListener):
if hasattr( listener, "enterKeywordName" ):
listener.enterKeywordName(self)
def exitRule(self, listener:ParseTreeListener):
if hasattr( listener, "exitKeywordName" ):
listener.exitKeywordName(self)
def keywordName(self):
localctx = TaleParser.KeywordNameContext(self, self._ctx, self.state)
self.enterRule(localctx, 52, self.RULE_keywordName)
try:
self.enterOuterAlt(localctx, 1)
self.state = 216
self.match(TaleParser.IDENTIFIER)
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class PrimitiveContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def primitiveItem(self, i:int=None):
if i is None:
return self.getTypedRuleContexts(TaleParser.PrimitiveItemContext)
else:
return self.getTypedRuleContext(TaleParser.PrimitiveItemContext,i)
def getRuleIndex(self):
return TaleParser.RULE_primitive
def enterRule(self, listener:ParseTreeListener):
if hasattr( listener, "enterPrimitive" ):
listener.enterPrimitive(self)
def exitRule(self, listener:ParseTreeListener):
if hasattr( listener, "exitPrimitive" ):
listener.exitPrimitive(self)
def primitive(self):
localctx = TaleParser.PrimitiveContext(self, self._ctx, self.state)
self.enterRule(localctx, 54, self.RULE_primitive)
try:
self.enterOuterAlt(localctx, 1)
self.state = 218
self.primitiveItem()
self.state = 223
self._errHandler.sync(self)
_alt = self._interp.adaptivePredict(self._input,22,self._ctx)
while _alt!=1 and _alt!=ATN.INVALID_ALT_NUMBER:
if _alt==1+1:
self.state = 219
self.match(TaleParser.T__2)
self.state = 220
self.primitiveItem()
self.state = 225
self._errHandler.sync(self)
_alt = self._interp.adaptivePredict(self._input,22,self._ctx)
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class PrimitiveItemContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def IDENTIFIER(self):
return self.getToken(TaleParser.IDENTIFIER, 0)
def literal(self):
return self.getTypedRuleContext(TaleParser.LiteralContext,0)
def getRuleIndex(self):
return TaleParser.RULE_primitiveItem
def enterRule(self, listener:ParseTreeListener):
if hasattr( listener, "enterPrimitiveItem" ):
listener.enterPrimitiveItem(self)
def exitRule(self, listener:ParseTreeListener):
if hasattr( listener, "exitPrimitiveItem" ):
listener.exitPrimitiveItem(self)
def primitiveItem(self):
localctx = TaleParser.PrimitiveItemContext(self, self._ctx, self.state)
self.enterRule(localctx, 56, self.RULE_primitiveItem)
try:
self.state = 228
self._errHandler.sync(self)
token = self._input.LA(1)
if token in [TaleParser.IDENTIFIER]:
self.enterOuterAlt(localctx, 1)
self.state = 226
self.match(TaleParser.IDENTIFIER)
pass
elif token in [TaleParser.NUMBER, TaleParser.STRING]:
self.enterOuterAlt(localctx, 2)
self.state = 227
self.literal()
pass
else:
raise NoViableAltException(self)
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class LiteralContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def intLiteral(self):
return self.getTypedRuleContext(TaleParser.IntLiteralContext,0)
def stringLiteral(self):
return self.getTypedRuleContext(TaleParser.StringLiteralContext,0)
def getRuleIndex(self):
return TaleParser.RULE_literal
def enterRule(self, listener:ParseTreeListener):
if hasattr( listener, "enterLiteral" ):
listener.enterLiteral(self)
def exitRule(self, listener:ParseTreeListener):
if hasattr( listener, "exitLiteral" ):
listener.exitLiteral(self)
def literal(self):
localctx = TaleParser.LiteralContext(self, self._ctx, self.state)
self.enterRule(localctx, 58, self.RULE_literal)
try:
self.state = 232
self._errHandler.sync(self)
token = self._input.LA(1)
if token in [TaleParser.NUMBER]:
self.enterOuterAlt(localctx, 1)
self.state = 230
self.intLiteral()
pass
elif token in [TaleParser.STRING]:
self.enterOuterAlt(localctx, 2)
self.state = 231
self.stringLiteral()
pass
else:
raise NoViableAltException(self)
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class IntLiteralContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def NUMBER(self):
return self.getToken(TaleParser.NUMBER, 0)
def getRuleIndex(self):
return TaleParser.RULE_intLiteral
def enterRule(self, listener:ParseTreeListener):
if hasattr( listener, "enterIntLiteral" ):
listener.enterIntLiteral(self)
def exitRule(self, listener:ParseTreeListener):
if hasattr( listener, "exitIntLiteral" ):
listener.exitIntLiteral(self)
def intLiteral(self):
localctx = TaleParser.IntLiteralContext(self, self._ctx, self.state)
self.enterRule(localctx, 60, self.RULE_intLiteral)
try:
self.enterOuterAlt(localctx, 1)
self.state = 234
self.match(TaleParser.NUMBER)
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class StringLiteralContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def STRING(self):
return self.getToken(TaleParser.STRING, 0)
def getRuleIndex(self):
return TaleParser.RULE_stringLiteral
def enterRule(self, listener:ParseTreeListener):
if hasattr( listener, "enterStringLiteral" ):
listener.enterStringLiteral(self)
def exitRule(self, listener:ParseTreeListener):
if hasattr( listener, "exitStringLiteral" ):
listener.exitStringLiteral(self)
def stringLiteral(self):
localctx = TaleParser.StringLiteralContext(self, self._ctx, self.state)
self.enterRule(localctx, 62, self.RULE_stringLiteral)
try:
self.enterOuterAlt(localctx, 1)
self.state = 236
self.match(TaleParser.STRING)
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
def sempred(self, localctx:RuleContext, ruleIndex:int, predIndex:int):
    if self._predicates is None:
        self._predicates = dict()
        self._predicates[20] = self.unary_sempred
        self._predicates[22] = self.binary_sempred
    pred = self._predicates.get(ruleIndex, None)
    if pred is None:
        raise Exception("No predicate with index: " + str(ruleIndex))
    else:
        return pred(localctx, predIndex)

def unary_sempred(self, localctx:UnaryContext, predIndex:int):
    if predIndex == 0:
        return self.precpred(self._ctx, 2)

def binary_sempred(self, localctx:BinaryContext, predIndex:int):
    if predIndex == 1:
        return self.precpred(self._ctx, 2)
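Every context class in this parser dispatches listener callbacks through the same `hasattr` guard, which is why a listener only needs to define the methods it cares about. A minimal sketch of that duck-typed dispatch, independent of the ANTLR runtime:

```python
class PrimitiveFormNode:
    """Mimics the hasattr-based enter/exit dispatch of the generated contexts."""
    def enter_rule(self, listener):
        if hasattr(listener, "enterPrimitiveForm"):
            listener.enterPrimitiveForm(self)
    def exit_rule(self, listener):
        if hasattr(listener, "exitPrimitiveForm"):
            listener.exitPrimitiveForm(self)

class PartialListener:
    """Implements only the enter callback; the exit call is silently skipped."""
    def __init__(self):
        self.events = []
    def enterPrimitiveForm(self, ctx):
        self.events.append("enter")

listener = PartialListener()
node = PrimitiveFormNode()
node.enter_rule(listener)
node.exit_rule(listener)   # no exitPrimitiveForm defined, so nothing happens
print(listener.events)     # ['enter']
```

In practice ANTLR also generates a `TaleListener` base class with empty methods, but the `hasattr` check means any object with matching method names works as a listener.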
e1b490b033e953f1585ccd81fdcb489a598e5706 | 353 | py | Python | 004.py | gabrieleliasdev/python-cev | 45390963b5112a982e673f6a6866da422bf9ae6d | [
"MIT"
] | null | null | null | 004.py | gabrieleliasdev/python-cev | 45390963b5112a982e673f6a6866da422bf9ae6d | [
"MIT"
] | null | null | null | 004.py | gabrieleliasdev/python-cev | 45390963b5112a982e673f6a6866da422bf9ae6d | [
"MIT"
] | null | null | null | print('Olá, Mundo!')
print(7+4)
print('7'+'4')
print('Olá', 5)
# Every variable is an object
# An object is more than a variable
nome = 'Gabriel'
idade = 30
peso = 79
print(nome,idade,peso)
nome = input('>>> Nome ')
idade = input('>>> Idade ')
peso = input('>>> Peso ')
print(nome,idade,peso)
print(f'Nome:{nome} ,Idade:{idade} ,Peso:{peso}')
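`input()` always returns `str`, which is why `'7' + '4'` above prints `74` rather than `11`. A small sketch of converting before doing arithmetic (the `parse_person` helper is hypothetical, not part of the original exercise):

```python
# Hypothetical helper: cast the strings that input() returns before
# using them as numbers.
def parse_person(nome: str, idade: str, peso: str):
    return nome, int(idade), float(peso)

nome, idade, peso = parse_person('Gabriel', '30', '79.5')
print(nome, idade + 1, peso)  # idade + 1 now works as integer arithmetic
```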
| 14.12 | 49 | 0.620397 | 56 | 353 | 3.910714 | 0.410714 | 0.164384 | 0.063927 | 0.109589 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030612 | 0.167139 | 353 | 24 | 50 | 14.708333 | 0.714286 | 0.175637 | 0 | 0.153846 | 0 | 0 | 0.3125 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.538462 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
e1b5d39efe358fd9f5a0abeb927321f0eef6f285 | 680 | py | Python | examples/create_mac_table_entry.py | open-switch/opx-docs | f448f3f3dc0de38822bbf16c1e173eb108925a40 | [
"CC-BY-4.0"
] | 122 | 2017-02-10T01:47:04.000Z | 2022-03-23T20:11:11.000Z | examples/create_mac_table_entry.py | open-switch/opx-docs | f448f3f3dc0de38822bbf16c1e173eb108925a40 | [
"CC-BY-4.0"
] | 37 | 2017-03-01T07:07:22.000Z | 2021-11-11T16:47:42.000Z | examples/create_mac_table_entry.py | open-switch/opx-docs | f448f3f3dc0de38822bbf16c1e173eb108925a40 | [
"CC-BY-4.0"
] | 39 | 2017-01-18T16:22:58.000Z | 2020-11-18T13:23:43.000Z | #Python code block to configure MAC address table entry
import cps_utils
#Register the attribute type
cps_utils.add_attr_type('base-mac/table/mac-address', 'mac')
#Define the MAC address, interface index and VLAN attributes
d = {'mac-address': '00:0a:0b:cc:0d:0e', 'ifindex': 18, 'vlan': '100'}
#Create a CPS object
obj = cps_utils.CPSObject('base-mac/table', data=d)
#Associate the operation to the CPS object
tr_obj = ('create', obj.get())
#Create a transaction object
transaction = cps_utils.CPSTransaction([tr_obj])
#Check for failure
ret = transaction.commit()
if not ret:
    raise RuntimeError('Error creating MAC Table Entry')
print('Successfully created')
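The script above follows a create-then-commit transaction pattern: build an object, pair it with an operation, wrap the pair in a transaction, commit, and check the result. A stand-in sketch of that pattern with a hypothetical `Transaction` class (`cps_utils` itself is platform-specific and not modelled here):

```python
# Hypothetical stand-in for a CPS-style transaction: a list of
# (operation, payload) pairs committed as one unit.
class Transaction:
    def __init__(self, ops):
        self.ops = ops

    def commit(self):
        # A real commit would push the operations to the switch; this
        # sketch only checks that every operation is well-formed.
        return all(op and op[0] in ("create", "set", "delete") for op in self.ops)

tr = Transaction([("create", {"mac-address": "00:0a:0b:cc:0d:0e", "vlan": "100"})])
if not tr.commit():
    raise RuntimeError("Error creating MAC Table Entry")
```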
| 27.2 | 70 | 0.738235 | 103 | 680 | 4.796117 | 0.592233 | 0.080972 | 0.048583 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0189 | 0.144118 | 680 | 24 | 71 | 28.333333 | 0.829897 | 0.358824 | 0 | 0 | 0 | 0 | 0.347319 | 0.060606 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.1 | null | null | 0.1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e1b6ebd37b97bc9b109f511037c684ea5fa2de9b | 225 | py | Python | events/defaults.py | bozbalci/cython-experiments | a675571e09297e3cda9154e8b611562bb8b14f7e | [
"Unlicense"
] | 1 | 2018-06-23T17:52:20.000Z | 2018-06-23T17:52:20.000Z | events/defaults.py | bozbalci/cython-experiments | a675571e09297e3cda9154e8b611562bb8b14f7e | [
"Unlicense"
] | null | null | null | events/defaults.py | bozbalci/cython-experiments | a675571e09297e3cda9154e8b611562bb8b14f7e | [
"Unlicense"
] | null | null | null | # defaults.py: contains the built-in variables, events and methods
# used for scripting the C program
import event
events = {}
_event_names = ["on_start", "on_exit"]
for evt in _event_names:
    events[evt] = event.Event()
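The loop above assumes `event.Event` is a registrable event object. A minimal sketch of what that module might provide (an assumption — the real `event` module is not shown in this file), together with the registry pattern in use:

```python
# Assumed shape of event.Event: handlers attach, then fire in order.
class Event:
    def __init__(self):
        self.handlers = []

    def connect(self, handler):
        self.handlers.append(handler)

    def fire(self, *args):
        for handler in self.handlers:
            handler(*args)

events = {name: Event() for name in ["on_start", "on_exit"]}
calls = []
events["on_start"].connect(lambda: calls.append("started"))
events["on_start"].fire()
print(calls)  # → ['started']
```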
| 22.5 | 66 | 0.724444 | 34 | 225 | 4.617647 | 0.647059 | 0.127389 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.168889 | 225 | 9 | 67 | 25 | 0.839572 | 0.431111 | 0 | 0 | 0 | 0 | 0.12 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e1b88db11881c00abc4ca3f31868a0861378a947 | 780 | py | Python | hopsapp/__init__.py | mrahman013/Hope4Hops-web-applcation | d5bde1463c6fbc1ea5424cb656504119393c6ce2 | [
"MIT"
] | null | null | null | hopsapp/__init__.py | mrahman013/Hope4Hops-web-applcation | d5bde1463c6fbc1ea5424cb656504119393c6ce2 | [
"MIT"
] | null | null | null | hopsapp/__init__.py | mrahman013/Hope4Hops-web-applcation | d5bde1463c6fbc1ea5424cb656504119393c6ce2 | [
"MIT"
] | null | null | null | """Implements a basic flask app that provides hashes of text."""
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
import flask_login
#pylint: disable=invalid-name
app = Flask(__name__)
app.config['DEBUG'] = True
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgres://yjjuylsytqewni:d0d63322c6abd33e2dadeafd7ef2501f73af54cf2d39596e464ea2c18b0234a3@ec2-23-23-78-213.compute-1.amazonaws.com:5432/d3gdnt7fkmonn1' #pylint: disable=line-too-long
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = True
app.secret_key = 'HGTYNVK123LOL908973'
db = SQLAlchemy(app)
login_manager = flask_login.LoginManager()
login_manager.init_app(app)
# These imports need to be here, which is why pylint is disabled
#pylint: disable=wrong-import-position
import hopsapp.models
import hopsapp.routes
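The database URI and secret key above are hard-coded into the module. A common alternative, sketched here with hypothetical variable names, is to read them from the environment with development fallbacks:

```python
import os

# Hypothetical sketch: pull secrets from the environment instead of
# committing them, falling back to local-dev values.
db_uri = os.environ.get("DATABASE_URL", "sqlite:///dev.db")
secret_key = os.environ.get("SECRET_KEY", "dev-only-key")
```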
| 35.454545 | 224 | 0.815385 | 102 | 780 | 6.088235 | 0.598039 | 0.062802 | 0.061192 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088359 | 0.085897 | 780 | 21 | 225 | 37.142857 | 0.782609 | 0.267949 | 0 | 0 | 0 | 0.076923 | 0.405694 | 0.362989 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.384615 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
e1b8fdfc631946eef5fedb38c2e25e5e6c2e1add | 800 | py | Python | npytoImage.py | x35yao/camera | 0ee77f5de72d785ba68bef44a557470ec425d702 | [
"MIT"
] | null | null | null | npytoImage.py | x35yao/camera | 0ee77f5de72d785ba68bef44a557470ec425d702 | [
"MIT"
] | null | null | null | npytoImage.py | x35yao/camera | 0ee77f5de72d785ba68bef44a557470ec425d702 | [
"MIT"
] | null | null | null | import numpy as np
import cv2
n = 428671
img_RS_color = np.load('/home/p4bhattachan/gripper/3DCameraServer/testImages/npyFiles/{}_RS_color.npy'.format(n))
cv2.imshow('RS Color Image {}'.format(n), img_RS_color)
#
# # img_RS_depth = np.load('/home/p4bhattachan/gripper/3DCameraServer/testImages/npyFiles/{}_RS_depth.npy'.format(n))
# # cv2.imshow('RS Depth Image {}'.format(n), img_RS_depth)
#
# img_ZED_color = np.load('/home/p4bhattachan/gripper/3DCameraServer/testImages/npyFiles/{}_ZED_color.npy'.format(n))
# cv2.imshow('ZED Color Image {}'.format(n), img_ZED_color)
#
# # img_ZED_depth = np.load('/home/p4bhattachan/gripper/3DCameraServer/testImages/npyFiles/{}_ZED_depth.npy'.format(n))
# # cv2.imshow('ZED Depth Image {}'.format(n), img_ZED_depth)
cv2.waitKey(0)
cv2.destroyAllWindows()
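The `.npy` files loaded above are plain NumPy array dumps. A self-contained round-trip sketch (temporary path and dummy array are illustrative — no OpenCV needed to verify the load step):

```python
import os
import tempfile

import numpy as np

# Save a dummy color image the same way the capture pipeline would,
# then load it back as the script above does.
arr = np.zeros((4, 4, 3), dtype=np.uint8)
path = os.path.join(tempfile.mkdtemp(), "123_RS_color.npy")
np.save(path, arr)
loaded = np.load(path)
```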
| 38.095238 | 119 | 0.7475 | 117 | 800 | 4.905983 | 0.222222 | 0.097561 | 0.069686 | 0.15331 | 0.818815 | 0.662021 | 0.477352 | 0.477352 | 0.477352 | 0 | 0 | 0.02973 | 0.075 | 800 | 20 | 120 | 40 | 0.745946 | 0.65 | 0 | 0 | 0 | 0 | 0.357414 | 0.292776 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.285714 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e1bf68076ea2cc2d9234c0759575b80d167f8b2e | 680 | py | Python | geomat/stein/migrations/0060_remove_mineraltype_mohs_scale.py | mimischi/django-geomat | 8c5bc4c9ba9759b58b52ddf339ccaec40ec5f6ea | [
"BSD-3-Clause"
] | 3 | 2017-01-13T15:53:39.000Z | 2017-05-05T11:57:55.000Z | geomat/stein/migrations/0060_remove_mineraltype_mohs_scale.py | mimischi/django-geomat | 8c5bc4c9ba9759b58b52ddf339ccaec40ec5f6ea | [
"BSD-3-Clause"
] | 233 | 2016-11-05T15:19:48.000Z | 2021-09-07T23:33:47.000Z | geomat/stein/migrations/0060_remove_mineraltype_mohs_scale.py | GeoMatDigital/django-geomat | 8c5bc4c9ba9759b58b52ddf339ccaec40ec5f6ea | [
"BSD-3-Clause"
] | null | null | null | # Generated by Django 2.0.2 on 2018-05-04 07:33
from django.db import migrations
from django.db import models


class Migration(migrations.Migration):

    dependencies = [
        ('stein', '0059_mineraltype_new_mohs_scale'),
    ]

    operations = [
        migrations.AlterField(
            model_name='mineraltype',
            name='mohs_scale',
            field=models.CharField(max_length=20, verbose_name="mohs scale", default="")
        ),
        migrations.RemoveField(
            model_name='mineraltype',
            name='mohs_scale',
        ),
        migrations.RenameField(model_name="mineraltype", old_name="new_mohs_scale", new_name="mohs_scale")
    ]
| 26.153846 | 106 | 0.632353 | 75 | 680 | 5.52 | 0.52 | 0.130435 | 0.125604 | 0.086957 | 0.15942 | 0.15942 | 0 | 0 | 0 | 0 | 0 | 0.04142 | 0.254412 | 680 | 25 | 107 | 27.2 | 0.775148 | 0.066176 | 0 | 0.333333 | 1 | 0 | 0.194313 | 0.048973 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.277778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e1c4600d073fba00b0a31f0113ee9536694f12a6 | 3,364 | py | Python | py_trees_ros/visitors.py | geoc1234/py_trees_ros | 65a055624f9261d67f0168ef419aa650302f96d0 | [
"BSD-3-Clause"
] | 65 | 2019-05-01T08:21:42.000Z | 2022-03-23T15:49:55.000Z | py_trees_ros/visitors.py | geoc1234/py_trees_ros | 65a055624f9261d67f0168ef419aa650302f96d0 | [
"BSD-3-Clause"
] | 62 | 2019-02-27T14:27:42.000Z | 2022-02-08T03:54:30.000Z | py_trees_ros/visitors.py | geoc1234/py_trees_ros | 65a055624f9261d67f0168ef419aa650302f96d0 | [
"BSD-3-Clause"
] | 23 | 2019-03-03T17:09:59.000Z | 2022-01-06T03:07:59.000Z | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
#
# License: BSD
# https://raw.githubusercontent.com/splintered-reality/py_trees_ros/devel/LICENSE
#
##############################################################################
# Documentation
##############################################################################
"""
ROS Visitors are entities that can be passed to a ROS tree implementation
(e.g. :class:`~py_trees_ros.trees.BehaviourTree`) and used to either visit
each and every behaviour in the tree, or visit behaviours as the tree is
traversed in an executing tick. At each behaviour, the visitor
runs its own method on the behaviour to do as it wishes (logging, introspecting).
.. warning:: Visitors should not modify the behaviours they visit.
.. seealso:: The base interface and core visitors in :mod:`py_trees.visitors`
"""
##############################################################################
# Imports
##############################################################################
import py_trees.visitors
import py_trees_ros_interfaces.msg as py_trees_msgs
import rclpy
import time
from . import conversions
##############################################################################
# Visitors
##############################################################################
class SetupLogger(py_trees.visitors.VisitorBase):
    """
    Use as a visitor to :meth:`py_trees_ros.trees.TreeManager.setup`
    to log the name and timings of each behaviour's setup
    to the ROS debug channel.

    Args:
        node: an rclpy node that will provide debug logger
    """
    def __init__(self, node: rclpy.node.Node):
        super().__init__(full=True)
        self.node = node

    def initialise(self):
        """
        Initialise the timestamping chain.
        """
        self.start_time = time.monotonic()
        self.last_time = self.start_time

    def run(self, behaviour):
        current_time = time.monotonic()
        self.node.get_logger().debug(
            "'{}'.setup: {:.4f}s".format(behaviour.name, current_time - self.last_time)
        )
        self.last_time = current_time

    def finalise(self):
        current_time = time.monotonic()
        self.node.get_logger().debug(
            "Total tree setup time: {:.4f}s".format(current_time - self.start_time)
        )


class TreeToMsgVisitor(py_trees.visitors.VisitorBase):
    """
    Visits the entire tree and gathers all behaviours as
    messages for the tree logging publishers.

    Attributes:
        tree (:class:`py_trees_msgs.msg.BehaviourTree`): tree representation in message form
    """
    def __init__(self):
        """
        Well
        """
        super(TreeToMsgVisitor, self).__init__()
        self.full = True  # examine all nodes

    def initialise(self):
        """
        Initialise and stamp a :class:`py_trees_msgs.msg.BehaviourTree`
        instance.
        """
        self.tree = py_trees_msgs.BehaviourTree()
        # TODO: crystal api
        # self.tree.stamp = rclpy.clock.Clock.now().to_msg()

    def run(self, behaviour):
        """
        Convert the behaviour into a message and append to the tree.

        Args:
            behaviour (:class:`~py_trees.behaviour.Behaviour`): behaviour to convert
        """
        self.tree.behaviours.append(conversions.behaviour_to_msg(behaviour))
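`SetupLogger`'s timing chain keeps a `last_time` cursor and reports the delta since the previous visit. A stripped-down sketch of that chain outside the visitor classes (the three-iteration loop stands in for three behaviours being visited):

```python
import time

# Mimic SetupLogger: one monotonic cursor, one delta per visited item.
start = time.monotonic()
last = start
deltas = []
for _ in range(3):
    now = time.monotonic()
    deltas.append(now - last)  # time spent since the previous visit
    last = now
total = last - start
```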
| 31.735849 | 92 | 0.568668 | 371 | 3,364 | 5.010782 | 0.390836 | 0.048951 | 0.021517 | 0.033889 | 0.083916 | 0.083916 | 0.049489 | 0.049489 | 0.049489 | 0 | 0 | 0.001495 | 0.204518 | 3,364 | 105 | 93 | 32.038095 | 0.693199 | 0.426278 | 0 | 0.258065 | 0 | 0 | 0.038613 | 0 | 0 | 0 | 0 | 0.009524 | 0 | 1 | 0.225806 | false | 0 | 0.16129 | 0 | 0.451613 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e1c8c4baec324f5e5f8e13e03541f29a1a32842d | 11,394 | py | Python | Jarvis/features/Friday_Blueprint.py | faizeraza/Jarvis-Virtual-Assistant- | da88fc0124e6020aff1030317dc3dc918f7aa017 | [
"MIT"
] | 1 | 2021-12-14T00:18:10.000Z | 2021-12-14T00:18:10.000Z | Jarvis/features/Friday_Blueprint.py | faizeraza/Jarvis-Virtual-Assistant- | da88fc0124e6020aff1030317dc3dc918f7aa017 | [
"MIT"
] | null | null | null | Jarvis/features/Friday_Blueprint.py | faizeraza/Jarvis-Virtual-Assistant- | da88fc0124e6020aff1030317dc3dc918f7aa017 | [
"MIT"
] | 1 | 2021-12-29T05:01:02.000Z | 2021-12-29T05:01:02.000Z | # -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'Friday_Blueprint.ui'
#
# Created by: PyQt5 UI code generator 5.15.4
#
# WARNING: Any manual changes made to this file will be lost when pyuic5 is
# run again. Do not edit this file unless you know what you are doing.
from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_MainWindow(object):
    def setupUi(self, MainWindow):
        MainWindow.setObjectName("MainWindow")
        MainWindow.resize(420, 650)
        MainWindow.setSizeIncrement(QtCore.QSize(0, 0))
        self.centralwidget = QtWidgets.QWidget(MainWindow)
        self.centralwidget.setObjectName("centralwidget")
        self.label = QtWidgets.QLabel(self.centralwidget)
        self.label.setGeometry(QtCore.QRect(0, 0, 421, 651))
        self.label.setText("")
        self.label.setPixmap(QtGui.QPixmap("D:/jarvis/Jarvis/utils/images/see.jpg"))
        self.label.setScaledContents(True)
        self.label.setObjectName("label")
        self.verticalLayoutWidget = QtWidgets.QWidget(self.centralwidget)
        self.verticalLayoutWidget.setGeometry(QtCore.QRect(0, 0, 71, 651))
        self.verticalLayoutWidget.setObjectName("verticalLayoutWidget")
        self.verticalLayout_5 = QtWidgets.QVBoxLayout(self.verticalLayoutWidget)
        self.verticalLayout_5.setSizeConstraint(QtWidgets.QLayout.SetMaximumSize)
        self.verticalLayout_5.setContentsMargins(0, 0, 0, 0)
        self.verticalLayout_5.setObjectName("verticalLayout_5")
        self.pushButton_9 = QtWidgets.QPushButton(self.verticalLayoutWidget)
        self.pushButton_9.setText("")
        icon = QtGui.QIcon()
        icon.addPixmap(QtGui.QPixmap("D:/jarvis/Jarvis/utils/images/user.png"), QtGui.QIcon.Normal, QtGui.QIcon.Off)
        self.pushButton_9.setIcon(icon)
        self.pushButton_9.setIconSize(QtCore.QSize(30, 30))
        self.pushButton_9.setAutoDefault(True)
        self.pushButton_9.setDefault(True)
        self.pushButton_9.setFlat(True)
        self.pushButton_9.setObjectName("pushButton_9")
        self.verticalLayout_5.addWidget(self.pushButton_9)
        self.pushButton_10 = QtWidgets.QPushButton(self.verticalLayoutWidget)
        self.pushButton_10.setText("")
        icon1 = QtGui.QIcon()
        icon1.addPixmap(QtGui.QPixmap("D:/jarvis/Jarvis/utils/images/data.png"), QtGui.QIcon.Normal, QtGui.QIcon.Off)
        self.pushButton_10.setIcon(icon1)
        self.pushButton_10.setIconSize(QtCore.QSize(30, 30))
        self.pushButton_10.setAutoDefault(True)
        self.pushButton_10.setDefault(True)
        self.pushButton_10.setFlat(True)
        self.pushButton_10.setObjectName("pushButton_10")
        self.verticalLayout_5.addWidget(self.pushButton_10)
        self.pushButton_11 = QtWidgets.QPushButton(self.verticalLayoutWidget)
        self.pushButton_11.setText("")
        icon2 = QtGui.QIcon()
        icon2.addPixmap(QtGui.QPixmap("D:/jarvis/Jarvis/utils/images/bot.jpg"), QtGui.QIcon.Normal, QtGui.QIcon.Off)
        self.pushButton_11.setIcon(icon2)
        self.pushButton_11.setIconSize(QtCore.QSize(49, 30))
        self.pushButton_11.setDefault(True)
        self.pushButton_11.setFlat(True)
        self.pushButton_11.setObjectName("pushButton_11")
        self.verticalLayout_5.addWidget(self.pushButton_11)
        self.pushButton_12 = QtWidgets.QPushButton(self.verticalLayoutWidget)
        self.pushButton_12.setMinimumSize(QtCore.QSize(69, 0))
        self.pushButton_12.setMaximumSize(QtCore.QSize(75, 16777215))
        self.pushButton_12.setText("")
        icon3 = QtGui.QIcon()
        icon3.addPixmap(QtGui.QPixmap("D:/jarvis/Jarvis/utils/images/settings.png"), QtGui.QIcon.Normal, QtGui.QIcon.Off)
        self.pushButton_12.setIcon(icon3)
        self.pushButton_12.setIconSize(QtCore.QSize(30, 30))
        self.pushButton_12.setAutoDefault(True)
        self.pushButton_12.setDefault(True)
        self.pushButton_12.setFlat(True)
        self.pushButton_12.setObjectName("pushButton_12")
        self.verticalLayout_5.addWidget(self.pushButton_12)
        spacerItem = QtWidgets.QSpacerItem(20, 151, QtWidgets.QSizePolicy.Minimum, QtWidgets.QSizePolicy.Fixed)
        self.verticalLayout_5.addItem(spacerItem)
        spacerItem1 = QtWidgets.QSpacerItem(20, 69, QtWidgets.QSizePolicy.Minimum, QtWidgets.QSizePolicy.Fixed)
        self.verticalLayout_5.addItem(spacerItem1)
        spacerItem2 = QtWidgets.QSpacerItem(13, 253, QtWidgets.QSizePolicy.Minimum, QtWidgets.QSizePolicy.Fixed)
        self.verticalLayout_5.addItem(spacerItem2)
        self.pushButton_13 = QtWidgets.QPushButton(self.verticalLayoutWidget)
        self.pushButton_13.setText("")
        icon4 = QtGui.QIcon()
        icon4.addPixmap(QtGui.QPixmap("D:/jarvis/Jarvis/utils/images/feedback.png"), QtGui.QIcon.Normal, QtGui.QIcon.Off)
        self.pushButton_13.setIcon(icon4)
        self.pushButton_13.setIconSize(QtCore.QSize(40, 40))
        self.pushButton_13.setDefault(True)
        self.pushButton_13.setFlat(True)
        self.pushButton_13.setObjectName("pushButton_13")
        self.verticalLayout_5.addWidget(self.pushButton_13)
        self.horizontalLayoutWidget = QtWidgets.QWidget(self.centralwidget)
        self.horizontalLayoutWidget.setGeometry(QtCore.QRect(70, 600, 351, 51))
        self.horizontalLayoutWidget.setObjectName("horizontalLayoutWidget")
        self.horizontalLayout_4 = QtWidgets.QHBoxLayout(self.horizontalLayoutWidget)
        self.horizontalLayout_4.setContentsMargins(0, 0, 0, 0)
        self.horizontalLayout_4.setObjectName("horizontalLayout_4")
        self.pushButton_14 = QtWidgets.QPushButton(self.horizontalLayoutWidget)
        self.pushButton_14.setText("")
        icon5 = QtGui.QIcon()
        icon5.addPixmap(QtGui.QPixmap("D:/jarvis/Jarvis/utils/images/lens.svg"), QtGui.QIcon.Normal, QtGui.QIcon.Off)
        self.pushButton_14.setIcon(icon5)
        self.pushButton_14.setIconSize(QtCore.QSize(40, 40))
        self.pushButton_14.setAutoDefault(True)
        self.pushButton_14.setDefault(True)
        self.pushButton_14.setFlat(True)
        self.pushButton_14.setObjectName("pushButton_14")
        self.horizontalLayout_4.addWidget(self.pushButton_14)
        spacerItem3 = QtWidgets.QSpacerItem(65, 15, QtWidgets.QSizePolicy.Fixed, QtWidgets.QSizePolicy.Minimum)
        self.horizontalLayout_4.addItem(spacerItem3)
        self.label_2 = QtWidgets.QLabel(self.horizontalLayoutWidget)
        # self.label_2.setPixmap(QtGui.QPixmap("D:/jarvis/Jarvis/utils/images/Speak.gif"))
        self.label_2.setText("waiting")
        self.label_2.setScaledContents(True)
        self.label_2.setObjectName("label_2")
        self.horizontalLayout_4.addWidget(self.label_2)
        spacerItem4 = QtWidgets.QSpacerItem(68, 15, QtWidgets.QSizePolicy.Fixed, QtWidgets.QSizePolicy.Minimum)
        self.horizontalLayout_4.addItem(spacerItem4)
        self.pushButton_15 = QtWidgets.QPushButton(self.horizontalLayoutWidget)
        self.pushButton_15.setText("")
        icon6 = QtGui.QIcon()
        icon6.addPixmap(QtGui.QPixmap("D:/jarvis/Jarvis/utils/images/mic.gif"), QtGui.QIcon.Normal, QtGui.QIcon.Off)
        self.pushButton_15.setIcon(icon6)
        self.pushButton_15.setIconSize(QtCore.QSize(40, 40))
        self.pushButton_15.setAutoDefault(True)
        self.pushButton_15.setDefault(True)
        self.pushButton_15.setFlat(True)
        self.pushButton_15.setObjectName("pushButton_15")
        self.horizontalLayout_4.addWidget(self.pushButton_15)
        spacerItem5 = QtWidgets.QSpacerItem(10, 20, QtWidgets.QSizePolicy.Fixed, QtWidgets.QSizePolicy.Minimum)
        self.horizontalLayout_4.addItem(spacerItem5)
        self.horizontalLayoutWidget_2 = QtWidgets.QWidget(self.centralwidget)
        self.horizontalLayoutWidget_2.setGeometry(QtCore.QRect(70, 560, 351, 41))
        self.horizontalLayoutWidget_2.setObjectName("horizontalLayoutWidget_2")
        self.horizontalLayout_5 = QtWidgets.QHBoxLayout(self.horizontalLayoutWidget_2)
        self.horizontalLayout_5.setSizeConstraint(QtWidgets.QLayout.SetNoConstraint)
        self.horizontalLayout_5.setContentsMargins(0, 0, 0, 0)
        self.horizontalLayout_5.setObjectName("horizontalLayout_5")
        self.textEdit_2 = QtWidgets.QTextEdit(self.horizontalLayoutWidget_2)
        self.textEdit_2.setObjectName("textEdit_2")
        self.horizontalLayout_5.addWidget(self.textEdit_2)
        self.pushButton_16 = QtWidgets.QPushButton(self.horizontalLayoutWidget_2)
        self.pushButton_16.setText("")
        icon7 = QtGui.QIcon()
        icon7.addPixmap(QtGui.QPixmap("D:/jarvis/Jarvis/utils/images/send.png"), QtGui.QIcon.Normal, QtGui.QIcon.Off)
        self.pushButton_16.setIcon(icon7)
        self.pushButton_16.setIconSize(QtCore.QSize(40, 40))
        self.pushButton_16.setCheckable(False)
        self.pushButton_16.setAutoRepeatDelay(300)
        self.pushButton_16.setAutoDefault(True)
        self.pushButton_16.setDefault(True)
        self.pushButton_16.setFlat(True)
        self.pushButton_16.setObjectName("pushButton_16")
        self.horizontalLayout_5.addWidget(self.pushButton_16)
        spacerItem6 = QtWidgets.QSpacerItem(10, 10, QtWidgets.QSizePolicy.Fixed, QtWidgets.QSizePolicy.Minimum)
        self.horizontalLayout_5.addItem(spacerItem6)
        self.verticalLayoutWidget_2 = QtWidgets.QWidget(self.centralwidget)
        self.verticalLayoutWidget_2.setGeometry(QtCore.QRect(70, 0, 351, 561))
        self.verticalLayoutWidget_2.setObjectName("verticalLayoutWidget_2")
        self.verticalLayout = QtWidgets.QVBoxLayout(self.verticalLayoutWidget_2)
        self.verticalLayout.setContentsMargins(0, 0, 0, 0)
        self.verticalLayout.setObjectName("verticalLayout")
        self.textEdit = QtWidgets.QTextEdit(self.verticalLayoutWidget_2)
        self.textEdit.setObjectName("textEdit")
        self.verticalLayout.addWidget(self.textEdit)
        self.label_3 = QtWidgets.QLabel(self.centralwidget)
        self.label_3.setGeometry(QtCore.QRect(420, 0, 961, 741))
        self.label_3.setText("")
        self.label_3.setScaledContents(True)
        self.label_3.setObjectName("label_3")
        self.label_5 = QtWidgets.QLabel(self.centralwidget)
        self.label_5.setGeometry(QtCore.QRect(0, 650, 421, 91))
        self.label_5.setText("")
        self.label_5.setPixmap(QtGui.QPixmap("D:/jarvis/Jarvis/utils/images/Recognizer.gif"))
        self.label_5.setScaledContents(True)
        self.label_5.setObjectName("label_5")
        MainWindow.setCentralWidget(self.centralwidget)
        self.movie = QtGui.QMovie("D:/jarvis/Jarvis/utils/images/AIassistant.gif")
        self.label_3.setMovie(self.movie)
        self.movie1 = QtGui.QMovie("D:/jarvis/Jarvis/utils/images/Recognizer.gif")
        self.label_5.setMovie(self.movie1)
        self.startAnimation()
        self.retranslateUi(MainWindow)
        QtCore.QMetaObject.connectSlotsByName(MainWindow)

    def startAnimation(self):
        self.movie.start()
        self.movie1.start()

    def retranslateUi(self, MainWindow):
        _translate = QtCore.QCoreApplication.translate
        MainWindow.setWindowTitle(_translate("MainWindow", "JARVIS"))


if __name__ == "__main__":
    import sys
    app = QtWidgets.QApplication(sys.argv)
    MainWindow = QtWidgets.QMainWindow()
    ui = Ui_MainWindow()
    ui.setupUi(MainWindow)
    MainWindow.show()
    sys.exit(app.exec_())
| 49.755459 | 121 | 0.722924 | 1,260 | 11,394 | 6.405556 | 0.15873 | 0.128361 | 0.049065 | 0.028993 | 0.411225 | 0.375542 | 0.243588 | 0.182629 | 0.099368 | 0.070871 | 0 | 0.045651 | 0.167544 | 11,394 | 228 | 122 | 49.973684 | 0.805271 | 0.031683 | 0 | 0 | 1 | 0 | 0.075744 | 0.04971 | 0 | 0 | 0 | 0 | 0 | 1 | 0.015957 | false | 0 | 0.010638 | 0 | 0.031915 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e1c8d5b0e59bc3cff42a51e6c70986bae9cb73c9 | 3,201 | py | Python | pints/toy/_logistic_model.py | iamleeg/pints | bd1c11472ff3ec0990f3d55f0b2f20d92397926d | [
"BSD-3-Clause"
] | null | null | null | pints/toy/_logistic_model.py | iamleeg/pints | bd1c11472ff3ec0990f3d55f0b2f20d92397926d | [
"BSD-3-Clause"
] | null | null | null | pints/toy/_logistic_model.py | iamleeg/pints | bd1c11472ff3ec0990f3d55f0b2f20d92397926d | [
"BSD-3-Clause"
] | null | null | null | #
# Logistic toy model.
#
# This file is part of PINTS.
# Copyright (c) 2017-2019, University of Oxford.
# For licensing information, see the LICENSE file distributed with the PINTS
# software package.
#
from __future__ import absolute_import, division
from __future__ import print_function, unicode_literals
import numpy as np
import pints
from . import ToyModel
class LogisticModel(pints.ForwardModelS1, ToyModel):
    """
    Logistic model of population growth [1].

    .. math::
        f(t) &= \\frac{k}{1+(k/p_0 - 1)*\exp(-r t)} \\\\
        \\frac{\\partial f(t)}{\\partial r} &=
            \\frac{k t (k / p_0 - 1) \exp(-r t)}
            {((k/p_0-1) \exp(-r t) + 1)^2} \\\\
        \\frac{\\partial f(t)}{ \\partial k} &= \\frac{k \exp(-r t)}
            {p_0 ((k/p_0-1)\exp(-r t) + 1)^2}
            + \\frac{1}{(k/p_0 - 1)\exp(-r t) + 1}

    Has two parameters: A growth rate :math:`r` and a carrying capacity
    :math:`k`. The initial population size :math:`f(0) = p_0` can be set using
    the (optional) named constructor arg ``initial_population_size``

    [1] https://en.wikipedia.org/wiki/Population_growth

    *Extends:* :class:`pints.ForwardModel`, :class:`pints.toy.ToyModel`.
    """
    def __init__(self, initial_population_size=2):
        super(LogisticModel, self).__init__()
        self._p0 = float(initial_population_size)
        if self._p0 < 0:
            raise ValueError('Population size cannot be negative.')

    def n_parameters(self):
        """ See :meth:`pints.ForwardModel.n_parameters()`. """
        return 2

    def simulate(self, parameters, times):
        """ See :meth:`pints.ForwardModel.simulate()`. """
        return self._simulate(parameters, times, False)

    def simulateS1(self, parameters, times):
        """ See :meth:`pints.ForwardModelS1.simulateS1()`. """
        return self._simulate(parameters, times, True)

    def _simulate(self, parameters, times, sensitivities):
        r, k = [float(x) for x in parameters]
        times = np.asarray(times)
        if np.any(times < 0):
            raise ValueError('Negative times are not allowed.')
        if self._p0 == 0 or k < 0:
            if sensitivities:
                return np.zeros(times.shape), \
                    np.zeros((len(times), len(parameters)))
            else:
                return np.zeros(times.shape)
        exp = np.exp(-r * times)
        c = (k / self._p0 - 1)
        values = k / (1 + c * exp)
        if sensitivities:
            dvalues_dp = np.empty((len(times), len(parameters)))
            dvalues_dp[:, 0] = k * times * c * exp / (c * exp + 1)**2
            dvalues_dp[:, 1] = -k * exp / \
                (self._p0 * (c * exp + 1)**2) + 1 / (c * exp + 1)
            return values, dvalues_dp
        else:
            return values

    def suggested_parameters(self):
        """ See :meth:`pints.toy.ToyModel.suggested_parameters()`. """
        return np.array([0.1, 50])

    def suggested_times(self):
        """ See :meth:`pints.toy.ToyModel.suggested_times()`. """
        return np.linspace(0, 100, 100)
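The closed-form logistic used by `_simulate` can be checked numerically on its own. This standalone sketch re-states the formula without the pints machinery (parameter values are arbitrary):

```python
import numpy as np

# f(t) = k / (1 + (k/p0 - 1) * exp(-r t)), as in _simulate above.
def logistic(r, k, p0, t):
    return k / (1 + (k / p0 - 1) * np.exp(-r * t))

# At t = 0 the population equals p0; for large t it approaches k.
print(logistic(0.1, 50, 2, 0.0))  # → 2.0
print(round(float(logistic(0.1, 50, 2, 1000.0)), 3))  # → 50.0
```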
| 34.419355 | 79 | 0.555764 | 403 | 3,201 | 4.287841 | 0.300248 | 0.008102 | 0.017361 | 0.011574 | 0.243056 | 0.112847 | 0.076968 | 0.035301 | 0.017361 | 0.017361 | 0 | 0.02932 | 0.296782 | 3,201 | 92 | 80 | 34.793478 | 0.738339 | 0.391128 | 0 | 0.093023 | 0 | 0 | 0.036066 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.162791 | false | 0 | 0.116279 | 0 | 0.511628 | 0.023256 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
e1ca3f7ea92eaa76db3dc052ef98666164a81b5e | 414 | py | Python | poo/pybank/bank.py | fredsonchaves07/python-fundamentals | 4aee479c48f86319a2041e35ea985f971393c2ce | [
"MIT"
] | null | null | null | poo/pybank/bank.py | fredsonchaves07/python-fundamentals | 4aee479c48f86319a2041e35ea985f971393c2ce | [
"MIT"
] | null | null | null | poo/pybank/bank.py | fredsonchaves07/python-fundamentals | 4aee479c48f86319a2041e35ea985f971393c2ce | [
"MIT"
] | null | null | null | class Bank:
    def __init__(self):
        self.__agencies = [1111, 2222, 3333]
        self.__costumers = []
        self.__accounts = []

    def insert_costumers(self, costumer):
        self.__costumers.append(costumer)

    def insert_accounts(self, account):
        self.__accounts.append(account)

    def authenticate(self, costumer):
        if costumer not in self.__costumers:
            return None
        return costumer
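A usage sketch for `Bank.authenticate`. The class is restated here so the snippet is self-contained, and returning the costumer on success is an assumption about the intended behaviour (the original only handles the not-found case explicitly):

```python
class Bank:
    def __init__(self):
        self.__agencies = [1111, 2222, 3333]
        self.__costumers = []
        self.__accounts = []

    def insert_costumers(self, costumer):
        self.__costumers.append(costumer)

    def authenticate(self, costumer):
        if costumer not in self.__costumers:
            return None
        return costumer  # assumed success value

bank = Bank()
bank.insert_costumers("alice")
print(bank.authenticate("alice"))  # → alice
print(bank.authenticate("bob"))    # → None
```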
| 25.875 | 44 | 0.63285 | 44 | 414 | 5.545455 | 0.477273 | 0.159836 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.039735 | 0.270531 | 414 | 15 | 45 | 27.6 | 0.768212 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
bed8ffa1e73ffa405bfc1005a04f4f722ab41812 | 2,069 | py | Python | api/migrations/0005_auto_20200906_1951.py | sh2MAN/yamdb_final | 17f84bacd832237d88d3389605cf2acdf2a590f5 | [
"BSD-3-Clause"
] | null | null | null | api/migrations/0005_auto_20200906_1951.py | sh2MAN/yamdb_final | 17f84bacd832237d88d3389605cf2acdf2a590f5 | [
"BSD-3-Clause"
] | null | null | null | api/migrations/0005_auto_20200906_1951.py | sh2MAN/yamdb_final | 17f84bacd832237d88d3389605cf2acdf2a590f5 | [
"BSD-3-Clause"
] | 12 | 2021-02-11T16:39:00.000Z | 2022-03-30T19:18:24.000Z | # Generated by Django 3.0.5 on 2020-09-06 19:51
from django.db import migrations, models
import django.db.models.deletion


class Migration(migrations.Migration):

    dependencies = [
        ('api', '0004_auto_20200906_1752'),
    ]

    operations = [
        migrations.AlterModelOptions(
            name='category',
            options={'verbose_name': 'Категория', 'verbose_name_plural': 'Категории'},
        ),
        migrations.AlterModelOptions(
            name='genre',
            options={'verbose_name': 'Жанр', 'verbose_name_plural': 'Жанры'},
        ),
        migrations.AlterModelOptions(
            name='title',
            options={'ordering': ('-id',), 'verbose_name': 'Произведение', 'verbose_name_plural': 'Произведения'},
        ),
        migrations.RemoveConstraint(
            model_name='review',
            name='unique_review',
        ),
        migrations.AlterField(
            model_name='category',
            name='name',
            field=models.CharField(max_length=20, verbose_name='Наименование'),
        ),
        migrations.AlterField(
            model_name='genre',
            name='name',
            field=models.CharField(max_length=20, verbose_name='Наименование'),
        ),
        migrations.AlterField(
            model_name='title',
            name='category',
            field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='categories', to='api.Category', verbose_name='Категория'),
        ),
        migrations.AlterField(
            model_name='title',
            name='description',
            field=models.TextField(blank=True, null=True, verbose_name='Описание'),
        ),
        migrations.AlterField(
            model_name='title',
            name='name',
            field=models.CharField(max_length=100, verbose_name='Название'),
        ),
        migrations.AddConstraint(
            model_name='review',
            constraint=models.UniqueConstraint(fields=('title', 'author'), name='unique_review'),
        ),
    ]
| 34.483333 | 177 | 0.585307 | 190 | 2,069 | 6.205263 | 0.384211 | 0.102629 | 0.106022 | 0.122986 | 0.254453 | 0.254453 | 0.185751 | 0.154368 | 0.154368 | 0.154368 | 0 | 0.025589 | 0.282262 | 2,069 | 59 | 178 | 35.067797 | 0.76835 | 0.02175 | 0 | 0.566038 | 1 | 0 | 0.186944 | 0.011375 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.037736 | 0 | 0.09434 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
bed9a7c33d1cf837bf05eedf9e2389f71612ac64 | 1,104 | py | Python | user_activity/models.py | adithya-bhat-b/user-activity | d2577bbb295ac381e08a31e296e3d681da7ab036 | ["MIT"] | stars: null | issues: 3 (2021-04-08 to 2021-06-09) | forks: null
import pytz
from django.db import models
# Create your models here.
def _get_time_zones():
"""
Function to get all the timezones
"""
timezone_choices = [(tz, tz) for tz in pytz.all_timezones]
return timezone_choices
# Model for user
class User(models.Model):
"""
User model:
attributes:
id - unique id of the user
real_name - user name
time_zone - user timezone
"""
id = models.CharField(primary_key=True, max_length=50)
real_name = models.CharField(max_length=100)
time_zone = models.CharField(max_length=50, choices=_get_time_zones())
class Meta:
# Db table name
db_table = "user"
# Model for user activity
class UserActivity(models.Model):
"""
UserActivity model:
    start_time: start time of a user activity
    end_time: end time of a user activity
"""
user_id = models.ForeignKey(User, on_delete=models.CASCADE)
start_time = models.DateTimeField()
end_time = models.DateTimeField()
class Meta:
# Db table name
        db_table = "user_activity"
bedb1dc2f3fdaeceb37c80ae1a87e69944c3c668 | 1,725 | py | Python | lambda/populateDB/lambda_function.py | aws-samples/amazon-connect-dynamic-ivr-menus | 911f5d04cf78d3097cfe7e169bd0062459d61ec4 | ["MIT-0"] | stars: 4 (2021-06-24 to 2021-12-13) | issues: 1 (2021-12-13) | forks: 2 (2021-06-10 to 2021-12-13)
import json
import boto3
import os
def lambda_handler(event, context):
dynamodb = boto3.resource('dynamodb')
customerTable = os.environ['customerTable']
table1 = dynamodb.Table(customerTable)
policiesTable = os.environ['policiesTable']
table2 = dynamodb.Table(policiesTable)
# Phone numbers should follow international format E.164
table1.put_item(
Item={
'clientID': '+3526919xxxxxx',
'clientName': 'Marius',
'clientPolicies': ['car','house']
}
)
table1.put_item(
Item={
'clientID': '+3526919xxxxxx',
'clientName': 'John',
'clientPolicies': ['boat','pet']
}
)
table2.put_item(
Item={
'policyID': 'car',
'description': 'Your car insurance covers third party damage and theft. Authorized service points are this and that.'
}
)
table2.put_item(
Item={
'policyID': 'house',
            'description': 'Your house insurance covers damage caused by natural disasters, fires and earthquakes. To file a claim, please visit our website.'
}
)
table2.put_item(
Item={
'policyID': 'boat',
            'description': 'Your boat insurance covers damage caused by natural disasters and fires. To file a claim, please visit our website.'
}
)
table2.put_item(
Item={
'policyID': 'pet',
            'description': 'Your pet insurance covers any medical interventions required to keep your pet healthy. For a list of approved vet centers, please visit our website.'
}
)
return 'ok'
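The repeated `put_item` calls above can be driven from a plain dict instead of copy-pasted literals. A minimal sketch, with a hypothetical `FakeTable` standing in for the boto3 DynamoDB table so no AWS resources are touched:

```python
# Sketch: data-driven writes instead of repeated put_item literals.
# FakeTable is a made-up stand-in for a boto3 DynamoDB Table resource.
policies = {
    'car': 'Your car insurance covers third party damage and theft.',
    'house': 'Your house insurance covers damage caused by natural disasters.',
}

class FakeTable:
    def __init__(self):
        self.items = []

    def put_item(self, Item):
        # Records the Item instead of writing to DynamoDB
        self.items.append(Item)

table = FakeTable()
for policy_id, description in policies.items():
    table.put_item(Item={'policyID': policy_id, 'description': description})

print(len(table.items))  # 2
```

With the real boto3 table object in place of `FakeTable`, the same loop replaces all four `table2.put_item(...)` blocks.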
bedeaa04e3aa523fae916c1f3ad83805bf94106f | 2,849 | py | Python | examples/s5b_transfer/s5b_receiver.py | isabella232/slixmpp | e15e6735f1dbfc66a5d43efe9fa9e7f5c9d1610a | ["BSD-3-Clause"] | stars: null | issues: 1 (2021-02-24) | forks: null
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Slixmpp: The Slick XMPP Library
Copyright (C) 2015 Emmanuel Gil Peyrot
This file is part of Slixmpp.
See the file LICENSE for copying permission.
"""
import asyncio
import logging
from getpass import getpass
from argparse import ArgumentParser
import slixmpp
class S5BReceiver(slixmpp.ClientXMPP):
"""
A basic example of creating and using a SOCKS5 bytestream.
"""
def __init__(self, jid, password, filename):
slixmpp.ClientXMPP.__init__(self, jid, password)
self.file = open(filename, 'wb')
self.add_event_handler("socks5_connected", self.stream_opened)
self.add_event_handler("socks5_data", self.stream_data)
self.add_event_handler("socks5_closed", self.stream_closed)
def stream_opened(self, sid):
logging.info('Stream opened. %s', sid)
def stream_data(self, data):
self.file.write(data)
def stream_closed(self, exception):
logging.info('Stream closed. %s', exception)
self.file.close()
self.disconnect()
if __name__ == '__main__':
# Setup the command line arguments.
parser = ArgumentParser()
# Output verbosity options.
parser.add_argument("-q", "--quiet", help="set logging to ERROR",
action="store_const", dest="loglevel",
const=logging.ERROR, default=logging.INFO)
parser.add_argument("-d", "--debug", help="set logging to DEBUG",
action="store_const", dest="loglevel",
const=logging.DEBUG, default=logging.INFO)
# JID and password options.
parser.add_argument("-j", "--jid", dest="jid",
help="JID to use")
parser.add_argument("-p", "--password", dest="password",
help="password to use")
parser.add_argument("-o", "--out", dest="filename",
help="file to save to")
args = parser.parse_args()
# Setup logging.
logging.basicConfig(level=args.loglevel,
format='%(levelname)-8s %(message)s')
if args.jid is None:
args.jid = input("Username: ")
if args.password is None:
args.password = getpass("Password: ")
if args.filename is None:
args.filename = input("File path: ")
# Setup the S5BReceiver and register plugins. Note that while plugins may
# have interdependencies, the order in which you register them does
# not matter.
xmpp = S5BReceiver(args.jid, args.password, args.filename)
xmpp.register_plugin('xep_0030') # Service Discovery
xmpp.register_plugin('xep_0065', {
'auto_accept': True
}) # SOCKS5 Bytestreams
# Connect to the XMPP server and start processing XMPP stanzas.
xmpp.connect()
xmpp.process(forever=False)
bee066a8fc595636f1ed42106327e650d743c5d7 | 1,529 | py | Python | 155.min-stack.py | elfgzp/leetCode | 964c6574d310a9a6c486bf638487fd2f72b83b3f | ["MIT"] | stars: 3 (2019-04-12 to 2019-05-04) | issues: null | forks: null
#
# @lc app=leetcode.cn id=155 lang=python3
#
# [155] Min Stack
#
# https://leetcode-cn.com/problems/min-stack/description/
#
# algorithms
# Easy (47.45%)
# Total Accepted: 19.4K
# Total Submissions: 40.3K
# Testcase Example: '["MinStack","push","push","push","getMin","pop","top","getMin"]\n[[],[-2],[0],[-3],[],[],[],[]]'
#
# Design a stack that supports push, pop and top, and retrieving the minimum element in constant time.
#
#
# push(x) -- Push element x onto the stack.
# pop() -- Remove the element on top of the stack.
# top() -- Get the top element.
# getMin() -- Retrieve the minimum element in the stack.
#
#
# Example:
#
# MinStack minStack = new MinStack();
# minStack.push(-2);
# minStack.push(0);
# minStack.push(-3);
# minStack.getMin(); --> returns -3.
# minStack.pop();
# minStack.top(); --> returns 0.
# minStack.getMin(); --> returns -2.
#
#
#
class MinStack:
def __init__(self):
"""
initialize your data structure here.
"""
self._min = None
self._stack = []
def push(self, x: int) -> None:
if self._min is None:
self._min = x
else:
self._min = min(self._min, x)
self._stack.append(x)
def pop(self) -> None:
self._stack.pop(-1)
if self._stack:
self._min = min(self._stack)
else:
self._min = None
def top(self) -> int:
return self._stack[-1]
def getMin(self) -> int:
return self._min
# Your MinStack object will be instantiated and called as such:
# obj = MinStack()
# obj.push(x)
# obj.pop()
# param_3 = obj.top()
# param_4 = obj.getMin()
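The `pop()` above rescans the remaining stack with `min(self._stack)`, so it is O(n). A common variant keeps a parallel stack of running minima, making every operation O(1). A sketch (the class name `MinStackO1` is made up here):

```python
class MinStackO1:
    # Parallel min-stack variant: push/pop/top/getMin are all O(1)
    def __init__(self):
        self._stack = []
        self._mins = []

    def push(self, x: int) -> None:
        self._stack.append(x)
        # Track the minimum seen so far alongside each pushed element
        self._mins.append(x if not self._mins else min(x, self._mins[-1]))

    def pop(self) -> None:
        self._mins.pop()
        self._stack.pop()

    def top(self) -> int:
        return self._stack[-1]

    def getMin(self) -> int:
        return self._mins[-1]


s = MinStackO1()
for v in (-2, 0, -3):
    s.push(v)
print(s.getMin())          # -3
s.pop()
print(s.top(), s.getMin())  # 0 -2
```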
bee68e7de68c03f76e1ccae51e5aa678663d50fa | 493 | py | Python | ariadne_server/tests/fixtures/fake_context.py | seanaye/FeatherLight-API | 4d42a424762311ee35b3fd4f689883aa4197eb2e | ["MIT"] | stars: 3 (2020-06-28 to 2022-01-25) | issues: null | forks: 1 (2021-02-04)
from secrets import token_hex
import pytest
class Object:
pass
class FakeContext(dict):
def __init__(self):
req_obj = Object()
req_obj.cookies = {}
req_obj.client = Object()
req_obj.client.host = token_hex(5)
req_obj.headers = {
'origin': 'some_origin',
'x-real-ip': 'fake_ip'
}
self['request'] = req_obj
@pytest.fixture(autouse=True, scope='function')
def context():
    return FakeContext()
beea57272100654c7600d64caab6b4c5cdc2179e | 2,484 | py | Python | articlequality/feature_lists/tests/test_enwiki.py | mariushoch/articlequality | 57edf786636548bed466aa4e9d9e213fe8d1093b | ["MIT"] | stars: null | issues: null | forks: null
from revscoring.datasources.revision_oriented import revision
from revscoring.dependencies import solve
from .. import enwiki
revision_text = revision.text
def test_cite_templates():
text = """
This is some text with a citation.<ref>{{cite lol|title=Made up}}</ref>
This is some more text. {{foo}} {{{cite}}}
I am a new paragraph.<ref>{{cite book|title=The stuff}}</ref>
{{Cite hat|ascii=_n_}}
"""
assert solve(enwiki.cite_templates, cache={revision_text: text}) == 3
def test_infobox_templates():
text = """
{{Infobox pants|hats=2|pajams=23}}
This is some text with a citation.<ref>{{cite lol|title=Made up}}</ref>
This is some more text.
I am a new paragraph.<ref>{{cite book|title=The stuff}}</ref>
{{Cite hat|ascii=_n_}}
"""
assert solve(enwiki.infobox_templates, cache={revision_text: text}) == 1
def test_cn_templates():
text = """
{{Infobox pants|hats=2|pajams=23}}
This is some text with a citation.{{cn}}
This is some more text. {{foo}}
I am a new paragraph.{{fact|date=never}}
I am a new paragraph.{{Citation_needed|date=never}}
"""
assert solve(enwiki.cn_templates, cache={revision_text: text}) == 3
def test_who_templates():
text = """
This is some text with a citation.{{cn}}
This is some more text. {{foo}}
I am a new paragraph.{{who}}
I am a new paragraph.{{who|date=today}}
"""
assert solve(enwiki.who_templates, cache={revision_text: text}) == 2
def test_main_article_templates():
text = """
This is some text with a citation.{{cn}}
This is some more text. {{foo}}
== Some section ==
{{Main|section}}
I am a new paragraph.{{who|date=today}}
"""
assert solve(enwiki.main_article_templates,
cache={revision_text: text}) == 1
def test_paragraphs_without_refs_total_length():
text = """
Here is the first paragraph.
It contains some references <ref>first reference</ref>.
Here is second paragraph. One line with reference <ref>reference</ref>.
Here is third paragraph.
It has two lines, but no references.
Here is fourth paragraph.
It has two lines <ref>reference</ref>.
One of which has a reference.
Here is fifth paragraph. One line, no references.
Short line.<ref>last</ref><ref>One more reference</ref>
"""
assert solve(enwiki.paragraphs_without_refs_total_length,
cache={revision_text: text}) == 114
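The `cite_templates` feature exercised above can be approximated with a plain regex over the wikitext; a rough sketch (the real revscoring feature extraction is more involved than this):

```python
import re

# Rough stand-in for cite_templates: count {{cite ...}} / {{Cite ...}} openings
text = """
This is some text with a citation.<ref>{{cite lol|title=Made up}}</ref>
This is some more text. {{foo}} {{{cite}}}
I am a new paragraph.<ref>{{cite book|title=The stuff}}</ref>
{{Cite hat|ascii=_n_}}
"""
cites = re.findall(r'\{\{\s*[Cc]ite[\s_|]', text)
print(len(cites))  # 3
```

Note how `{{foo}}` and the malformed `{{{cite}}}` are not counted, matching the expected value in `test_cite_templates`.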
bef2574ded37985d33b872832104339ea2dcbc78 | 384 | py | Python | project_9/util.py | sople1/project_9 | 7d91d786533d508572feae1ffbd1b4a6a80208ab | ["CC0-1.0"] | stars: null | issues: null | forks: null
"""
utility for project 9
:author: Seongsu Yoon <sople1@snooey.net>
:license: CC0
"""
def clear():
"""
clear cmd/term
:return: void
"""
import os
import sys
if sys.platform == 'win32':
os.system('cls') # on windows
else:
os.system('clear') # on linux / os x
if __name__ == '__main__':
    raise Exception("please run main.py")
bef786a72fbb29131b60f5c806a5c2a1d2c1e463 | 3,135 | py | Python | software/nuke/init.py | kei-iketani/plex | cf09c8ef93984e5a69b23bf56248b87e4cfd98b0 | ["MIT"] | stars: 153 (2018-03-22 to 2022-03-07) | issues: 30, forks: 34 on alexanderrichter/arPipeline@3466f70a79e4d32c0647ba21d9689157a0f7772e (issues 2018-08-16 to 2021-02-24, forks 2018-03-24 to 2022-03-10)
#*********************************************************************
# content = init Nuke
# version = 0.1.0
# date = 2019-12-01
#
# license = MIT <https://github.com/alexanderrichtertd>
# author = Alexander Richter <alexanderrichtertd.com>
#*********************************************************************
import os
import errno
import nuke
import pipefunc
from tank import Tank
#*********************************************************************
# VARIABLE
TITLE = os.path.splitext(os.path.basename(__file__))[0]
LOG = Tank().log.init(script=TITLE)
PROJECT_DATA = Tank().data_project
RESOLUTION = (' ').join([str(PROJECT_DATA['resolution'][0]),
str(PROJECT_DATA['resolution'][1]),
PROJECT_DATA['name'].replace(' ', '')])
#*********************************************************************
# FOLDER CREATION
def create_write_dir():
file_name = nuke.filename(nuke.thisNode())
file_path = os.path.dirname(file_name)
os_path = nuke.callbacks.filenameFilter(file_path)
# cope with the directory existing already by ignoring that exception
    try:
        os.makedirs(os_path)
    except OSError as e:
        if e.errno != errno.EEXIST:
            raise
def add_plugin_paths():
# ADD all IMG paths
for img in os.getenv('IMG_PATH').split(';'):
for img_sub in pipefunc.get_deep_folder_list(path=img, add_path=True):
nuke.pluginAddPath(img_sub)
# ADD sub software paths
for paths in os.getenv('SOFTWARE_SUB_PATH').split(';'):
nuke.pluginAddPath(paths)
#*********************************************************************
# PIPELINE
Tank().init_software()
add_plugin_paths()
try: from scripts import write_node
except: LOG.warning('FAILED loading write_node')
# LOAD paths
try:
for paths in os.getenv('SOFTWARE_SUB_PATH').split(';'):
nuke.pluginAddPath(paths)
except:
LOG.warning('FAILED loading SOFTWARE_SUB_PATH')
print('SETTINGS')
# RESOLUTION *********************************************************************
try:
nuke.addFormat(RESOLUTION)
nuke.knobDefault('Root.format', PROJECT_DATA['name'].replace(' ', ''))
print(' {} ON - {}'.format(chr(254), RESOLUTION))
except:
LOG.error(' OFF - {}'.format(RESOLUTION), exc_info=True)
print(' {} OFF - {}'.format(chr(254), RESOLUTION))
# FPS *********************************************************************
try:
nuke.knobDefault("Root.fps", str(PROJECT_DATA['fps']))
print(' {} ON - {} fps'.format(chr(254), PROJECT_DATA['fps']))
except:
LOG.error(' OFF - {} fps'.format(PROJECT_DATA['fps']), exc_info=True)
print(' {} OFF - {} fps'.format(chr(254), PROJECT_DATA['fps']))
# createFolder *********************************************************************
try:
nuke.addBeforeRender(create_write_dir)
print(' {} ON - create_write_dir (before render)'.format(chr(254)))
except:
LOG.error(' OFF - create_write_dir (before render)'.format(chr(254)), exc_info=True)
print(' {} OFF - create_write_dir (before render)'.format(chr(254)))
print('')
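On Python 3, the `errno.EEXIST` dance inside `create_write_dir` collapses to `os.makedirs(path, exist_ok=True)`. A minimal standalone sketch (the `ensure_dir` helper and the path segments are made up for the demo):

```python
import os
import tempfile

def ensure_dir(path):
    # Python 3 replacement for the try/except errno.EEXIST pattern above
    os.makedirs(path, exist_ok=True)

base = tempfile.mkdtemp()
target = os.path.join(base, 'renders', 'v001')
ensure_dir(target)
ensure_dir(target)  # second call is a no-op instead of raising OSError
print(os.path.isdir(target))  # True
```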
bef9a72ceb82bbb48832da89c306ea29b20a4752 | 863 | py | Python | rnd/HaskellRSLCompiler/test/parse/test.py | syoyo/lucille | ff81b332ae78181dbbdc1ec3c3b0f59992e7c0fa | ["BSD-3-Clause"] | stars: 77 (2015-01-29 to 2022-03-04) | issues: 1 (2018-11-08) | forks: 13 (2015-04-20 to 2020-06-17)
#!/usr/bin/env python
import os, sys
import subprocess
import re
import glob
errlog = []
def run(f):
    cmd = "../../lslc"
    p = subprocess.Popen([cmd, f], stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE, close_fds=True,
                         universal_newlines=True)
    # communicate() drains both pipes without risking a deadlock
    outs, errs = p.communicate()
    errs = errs.splitlines()
    errline = re.compile("TODO")
    failed = False
    for l in errs:
        if errline.search(l):
            failed = True
    if failed:
        print("[FAIL] ", f)
        errlog.append("==== [" + f + "] ====")
        for l in errs:
            errlog.append(l)
        errlog.append("=====================")
        errlog.append("\n")
    else:
        print("[OK ] ", f)

def main():
    for f in glob.glob("*.sl"):
        run(f)
    with open("errlog.log", "w") as f:
        for l in errlog:
            print(l, file=f)

if __name__ == '__main__':
    main()
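With Python 3.5+, the Popen/pipe handling above is usually written with `subprocess.run`. A sketch with a stand-in command in place of the project's `../../lslc` binary (`check_for_todo` is a made-up name):

```python
import re
import subprocess
import sys

def check_for_todo(argv):
    # Run a command and scan its stderr for the failure marker, as run() does
    p = subprocess.run(argv, capture_output=True, text=True)
    return re.search("TODO", p.stderr) is not None

# Stand-in commands instead of the project's ../../lslc compiler
failing = [sys.executable, '-c', "import sys; sys.stderr.write('TODO: parse error')"]
passing = [sys.executable, '-c', "pass"]
print(check_for_todo(failing), check_for_todo(passing))  # True False
```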
befebe8c408a00b9be09490e9fa3fb8d41c06ce6 | 1,081 | py | Python | tests/test_utils.py | tedeler/pyexchange | 58042f473cbd4f00769249ce9ca20c6a376eddb6 | ["Apache-2.0"] | stars: 128 (2015-01-11 to 2021-06-25) | issues: 52 (2015-01-02 to 2020-08-07) | forks: 96 (2015-01-02 to 2021-12-25)
from datetime import datetime
from pytz import timezone, utc
from pytest import mark
from pyexchange.utils import convert_datetime_to_utc
def test_converting_none_returns_none():
assert convert_datetime_to_utc(None) is None
def test_converting_non_tz_aware_date_returns_tz_aware():
utc_time = datetime(year=2014, month=1, day=1, hour=1, minute=1, second=1)
assert utc_time.tzinfo is None
assert convert_datetime_to_utc(utc_time) == datetime(year=2014, month=1, day=1, hour=1, minute=1, second=1, tzinfo=utc)
def test_converting_tz_aware_date_returns_tz_aware_date():
# US/Pacific timezone is UTC-07:00 (In April we are in DST)
# We use localize() because according to the pytz documentation, using the tzinfo
# argument of the standard datetime constructors does not work for timezones with DST.
pacific_time = timezone("US/Pacific").localize(datetime(year=2014, month=4, day=1, hour=1, minute=0, second=0))
utc_time = utc.localize(datetime(year=2014, month=4, day=1, hour=8, minute=0, second=0))
assert convert_datetime_to_utc(pacific_time) == utc_time
| 43.24 | 121 | 0.781684 | 181 | 1,081 | 4.453039 | 0.342541 | 0.043424 | 0.084367 | 0.099256 | 0.400744 | 0.359801 | 0.223325 | 0.223325 | 0.223325 | 0.129032 | 0 | 0.042239 | 0.123959 | 1,081 | 24 | 122 | 45.041667 | 0.80887 | 0.205365 | 0 | 0 | 0 | 0 | 0.011696 | 0 | 0 | 0 | 0 | 0 | 0.285714 | 1 | 0.214286 | false | 0 | 0.285714 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
befed480f20eb883fd15d6235756ef7750bbee56 | 786 | py | Python | vidpub/__main__.py | gary9630/session-video-publisher | 6602f53d722af8e569c82b7de8ef79a63293c766 | ["0BSD"] | stars: null | issues: 5 (2020-11-15 to 2021-12-07) | forks: 4 (2018-06-23 to 2021-04-18)
import argparse
from .upload_video import upload_video
from .generate_playlist import generate_playlist
def parse_args(argv):
parser = argparse.ArgumentParser()
parser.add_argument(
"-u", "--upload", action="store_true", help="Upload videos to YouTube channel"
)
parser.add_argument(
"-p", "--playlist", action="store_true", help="Generate playlist information in json files"
)
parser.add_argument(
"-o", "--output_dir", default="./videos", help="Output path of video information"
)
return parser.parse_args(argv)
def main(argv=None):
options = parse_args(argv)
if options.upload:
upload_video()
if options.playlist:
generate_playlist(options.output_dir)
if __name__ == "__main__":
main()
| 23.818182 | 99 | 0.675573 | 94 | 786 | 5.393617 | 0.43617 | 0.126233 | 0.076923 | 0.074951 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.204835 | 786 | 32 | 100 | 24.5625 | 0.8112 | 0 | 0 | 0.130435 | 1 | 0 | 0.227735 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086957 | false | 0 | 0.130435 | 0 | 0.26087 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
830448984e5a77e90d22cacc683d54197d1adc44 | 130,468 | py | Python | pycity_calc/cities/scripts/city_generator/city_generator.py | RWTH-EBC/pyCity_calc | 99fd0dab7f9a9030fd84ba4715753364662927ec | ["MIT"] | stars: 4 (2020-06-22 to 2021-11-08) | issues: 4 (2019-08-28) | forks: null
# coding=utf-8
"""
Script to generate city object.
"""
from __future__ import division
import os
import numpy as np
import pickle
import warnings
import random
import datetime
import shapely.geometry.point as point
import pycity_base.classes.Weather as weath
import pycity_base.classes.demand.SpaceHeating as SpaceHeating
import pycity_base.classes.demand.ElectricalDemand as ElectricalDemand
import pycity_base.classes.demand.Apartment as Apartment
import pycity_base.classes.demand.DomesticHotWater as DomesticHotWater
import pycity_base.classes.demand.Occupancy as occup
import pycity_calc.environments.timer as time
# import pycity_calc.environments.market as price
import pycity_calc.environments.germanmarket as germanmarket
import pycity_calc.environments.environment as env
import pycity_calc.environments.co2emissions as co2
import pycity_calc.buildings.building as build_ex
import pycity_calc.cities.city as city
import pycity_calc.visualization.city_visual as citvis
import pycity_calc.toolbox.modifiers.slp_th_manipulator as slpman
import pycity_calc.toolbox.teaser_usage.teaser_use as tusage
import pycity_calc.toolbox.mc_helpers.user.user_unc_sampling as usunc
try:
import teaser.logic.simulation.VDI_6007.weather as vdiweather
except: # pragma: no cover
msg = 'Could not import teaser.logic.simulation.VDI_6007.weather. ' \
'If you need to use it, install ' \
'it via pip "pip install TEASER". Alternatively, you might have ' \
'run into trouble with XML bindings in TEASER. This can happen ' \
'if you try to re-import TEASER within an active Python console.' \
'Please close the active Python console and open another one. Then' \
' try again. You might also be on the wrong TEASER branch ' \
'(without VDI 6007 core).'
warnings.warn(msg)
def load_data_file_with_spec_demand_data(filename):
"""
Function loads and returns data from
.../src/data/BaseData/Specific_Demand_Data/filename.
Filename should hold float (or int) values.
Other values (e.g. strings) will be loaded as 'nan'.
    Parameters
    ----------
filename : str
String with name of file, e.g. 'district_data.txt'
Returns
-------
dataset : numpy array
Numpy array with data
"""
    src_path = os.path.dirname(
        os.path.dirname(
            os.path.dirname(
                os.path.dirname(os.path.abspath(__file__)))))
    input_data_path = os.path.join(src_path, 'data', 'BaseData',
                                   'Specific_Demand_Data', filename)
dataset = np.genfromtxt(input_data_path, delimiter='\t', skip_header=1)
return dataset
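As the docstring notes, `np.genfromtxt` turns non-numeric cells into `nan`. A small self-contained check of that behaviour (assumes numpy is installed; an in-memory file stands in for the BaseData path):

```python
import io

import numpy as np

# genfromtxt parses tab-separated numeric columns; non-numeric cells become nan
sample = "id\tdemand\n1\t120.5\n2\tn.a.\n"
data = np.genfromtxt(io.StringIO(sample), delimiter='\t', skip_header=1)
print(data.shape)            # (2, 2)
print(np.isnan(data[1, 1]))  # True
```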
def convert_th_slp_int_and_str(th_slp_int):
"""
Converts thermal slp type integer into string
Parameters
----------
th_slp_int : int
SLP type integer number
Returns
-------
th_slp_tag : str
SLP type string
Annotations
-----------
- `HEF` : Single family household
- `HMF` : Multi family household
- `GBA` : Bakeries
- `GBD` : Other services
    - `GBH` : Accommodations
- `GGA` : Restaurants
- `GGB` : Gardening
- `GHA` : Retailers
- `GHD` : Summed load profile business, trade and services
- `GKO` : Banks, insurances, public institutions
- `GMF` : Household similar businesses
- `GMK` : Automotive
- `GPD` : Paper and printing
- `GWA` : Laundries
"""
if th_slp_int is None:
msg = 'th_slp_int is None. Going to return None.'
warnings.warn(msg)
return None
slp_th_profile_dict_tag = {0: 'HEF',
1: 'HMF',
2: 'GMF',
3: 'GMK',
4: 'GPD',
5: 'GHA',
6: 'GBD',
7: 'GKO',
8: 'GBH',
9: 'GGA',
10: 'GBA',
11: 'GWA',
12: 'GGB',
13: 'GHD'}
th_slp_tag = slp_th_profile_dict_tag[th_slp_int]
return th_slp_tag
def convert_el_slp_int_and_str(el_slp_int):
"""
Converts el slp type integer into string
Parameters
----------
el_slp_int : int
SLP type integer number
Returns
-------
el_slp_tag : str
SLP type string
Annotations
-----------
# 0: H0 : Residential
# 1: G0 : Commercial
# 2: G1 : Commercial Mo-Sa 08:00 to 18:00
# 3: G2 : Commercial, mainly evening hours
# 4: G3 : Commercial 24 hours
# 5: G4 : Shop / hairdresser
    # 6: G5 : Bakery
# 7: G6 : Commercial, weekend
# 8: L0 : Farm
# 9: L1 : Farm, mainly cattle and milk
# 10: L2 : Other farming
"""
if el_slp_int is None:
msg = 'el_slp_int is None. Going to return None.'
warnings.warn(msg)
return None
slp_el_profile_dict_tag = {0: 'H0',
1: 'G0',
2: 'G1',
3: 'G2',
4: 'G3',
5: 'G4',
6: 'G5',
7: 'G6',
8: 'L0',
9: 'L1',
10: 'L2'}
el_slp_tag = slp_el_profile_dict_tag[el_slp_int]
return el_slp_tag
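All of the `convert_*` helpers in this module share the same shape: warn and return `None` on missing input, otherwise a plain dict lookup. A generic sketch of that pattern (`convert_via` and the excerpted mapping are made up for illustration):

```python
import warnings

def convert_via(table, key, name):
    # Shared shape of the convert_* helpers: warn and return None on missing input
    if key is None:
        warnings.warn('{} is None. Going to return None.'.format(name))
        return None
    return table[key]

slp_el_excerpt = {0: 'H0', 1: 'G0', 8: 'L0'}  # excerpt of the full mapping above
print(convert_via(slp_el_excerpt, 8, 'el_slp_int'))     # L0
print(convert_via(slp_el_excerpt, None, 'el_slp_int'))  # None (after a warning)
```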
def convert_method_3_nb_into_str(method_3_nb):
"""
Converts method_3_nb into string
Parameters
----------
method_3_nb : int
Number of method 3
Returns
-------
method_3_str : str
String of method 3
"""
if method_3_nb is None:
msg = 'method_3_nb is None. Going to return None.'
warnings.warn(msg)
return None
dict_method_3 = {0: 'food_pro',
1: 'metal',
2: 'rest',
3: 'sports',
4: 'repair'}
method_3_str = dict_method_3[method_3_nb]
return method_3_str
def convert_method_4_nb_into_str(method_4_nb):
"""
Converts method_4_nb into string
Parameters
----------
method_4_nb : int
Number of method 4
Returns
-------
method_4_str : str
String of method 4
"""
if method_4_nb is None:
msg = 'method_4_nb is None. Going to return None.'
warnings.warn(msg)
return None
dict_method_4 = {0: 'metal_1', 1: 'metal_2', 2: 'warehouse'}
method_4_str = dict_method_4[method_4_nb]
return method_4_str
def conv_build_type_nb_to_name(build_type):
"""
Convert build_type number to name / explanation
Parameters
----------
build_type : int
Building type number, based on Spec_demands_non_res.txt
Returns
-------
build_name : str
Building name / explanation
"""
if build_type is None:
msg = 'build_type is None. Going to return None for build_name.'
warnings.warn(msg)
return None
dict_b_name = {
0: 'Residential',
1: 'Office (simulation)',
2: 'Main construction work',
3: 'Finishing trade construction work',
4: 'Bank and insurance',
5: 'Public institution',
6: 'Non profit organization',
7: 'Small office buildings',
8: 'Other services',
9: 'Metal',
10: 'Automobile',
11: 'Wood and timber',
12: 'Paper',
13: 'Small retailer for food',
14: 'Small retailer for non-food',
15: 'Large retailer for food',
16: 'Large retailer for non-food',
17: 'Primary school',
18: 'School for physically handicapped',
19: 'High school',
20: 'Trade school',
21: 'University',
22: 'Hotel',
23: 'Restaurant',
24: 'Childrens home',
25: 'Bakery',
26: 'Butcher',
27: 'Laundry',
28: 'Farm, primary agriculture',
29: 'Farm with 10 - 49 cattle units',
30: 'Farm with 50 - 100 cattle units',
31: 'Farm with more than 100 cattle units',
32: 'Gardening',
33: 'Hospital',
34: 'Library',
35: 'Prison',
36: 'Cinema',
37: 'Theater',
38: 'Parish hall',
39: 'Sports hall',
40: 'Multi purpose hall',
41: 'Swimming hall',
42: 'Club house',
43: 'Fitness studio',
44: 'Train station smaller 5000m2',
45: 'Train station equal to or larger than 5000m2'
}
return dict_b_name[build_type]
def constrained_sum_sample_pos(n, total):
"""
Return a randomly chosen list of n positive integers summing to total.
Each such list is equally likely to occur.
Parameters
----------
n : int
Number of chosen integers
total : int
Sum of all entries of result list
Returns
-------
results_list : list (of int)
List with result integers, which sum up to value 'total'
"""
dividers = sorted(random.sample(range(1, int(total)), int(n - 1)))
list_occ = [a - b for a, b in zip(dividers + [total], [0] + dividers)]
for i in range(len(list_occ)):
list_occ[i] = int(list_occ[i])
return list_occ
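constrained_sum_sample_pos uses the classic divider ("stars and bars") trick: draw n - 1 distinct cut points in (0, total) and take the gaps between consecutive cuts, which yields n positive integers summing to total. A self-contained sketch of the same idea (`sample_parts` is an illustrative name):

```python
import random

def sample_parts(n, total):
    # n-1 distinct cut points in (0, total); gaps between consecutive
    # cuts form n positive integers that sum to total
    dividers = sorted(random.sample(range(1, int(total)), int(n - 1)))
    return [int(a - b) for a, b in zip(dividers + [total], [0] + dividers)]

random.seed(42)
parts = sample_parts(4, 10)
```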
def redistribute_occ(occ_list):
"""
Redistribute occupants in occ_list, so that each apartment has at
least 1 and at most 5 persons.
Parameters
----------
occ_list : list (of int)
List with number of occupants per apartment
Returns
-------
occ_list_new : list
List holding number of occupants per apartment
"""
occ_list_new = occ_list[:]
if sum(occ_list_new) / len(occ_list_new) > 5: # pragma: no cover
msg = 'Average number of occupants per apartment is higher than 5.' \
' This is not valid for usage of Richardson profile generator.'
raise AssertionError(msg)
# Number of occupants to be redistributed
nb_occ_redist = 0
# Find remaining occupants
# ###############################################################
for i in range(len(occ_list_new)):
if occ_list_new[i] > 5:
# Add remaining occupants to nb_occ_redist
nb_occ_redist += occ_list_new[i] - 5
# Set occ_list_new entry to 5 persons
occ_list_new[i] = 5
if nb_occ_redist == 0:
# Return original list
return occ_list_new
# Identify empty apartments and add single occupant
# ###############################################################
for i in range(len(occ_list_new)):
if occ_list_new[i] == 0:
# Add single occupant
occ_list_new[i] = 1
# Remove occupant from nb_occ_redist
nb_occ_redist -= 1
if nb_occ_redist == 0:
# Return updated list
return occ_list_new
# Redistribute remaining occupants
# ###############################################################
for i in range(len(occ_list_new)):
if occ_list_new[i] < 5:
# Fill occupants up with remaining occupants
for j in range(5 - occ_list_new[i]):
# Add single occupant
occ_list_new[i] += 1
# Remove single occupant from remaining sum
nb_occ_redist -= 1
if nb_occ_redist == 0:
# Return updated list
return occ_list_new
if nb_occ_redist: # pragma: no cover
raise AssertionError('Not all occupants could be distributed. '
'Check inputs and/or redistribute_occ() call.')
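For illustration, the three passes of redistribute_occ (cap at 5, seed empty apartments, top up the rest) can be condensed as below; `redistribute` is a hypothetical compact re-implementation, not the module's function:

```python
def redistribute(occ_list):
    occ = occ_list[:]
    surplus = 0
    # Pass 1: cap every apartment at 5 and collect the surplus
    for i, n in enumerate(occ):
        if n > 5:
            surplus += n - 5
            occ[i] = 5
    # Pass 2: give every empty apartment a single occupant
    for i, n in enumerate(occ):
        if surplus and n == 0:
            occ[i] = 1
            surplus -= 1
    # Pass 3: top remaining apartments up to 5
    for i in range(len(occ)):
        while surplus and occ[i] < 5:
            occ[i] += 1
            surplus -= 1
    return occ

result = redistribute([7, 0, 3])
```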
def generate_environment(timestep=3600,
year_timer=2017,
year_co2=2017,
try_path=None,
location=(51.529086, 6.944689),
altitude=55,
new_try=False):
"""
Returns environment object. Total number of timesteps is automatically
generated for one year.
Parameters
----------
timestep : int
Timestep in seconds
year_timer : int, optional
Chosen year of analysis (default: 2017)
(influences initial day for profile generation)
year_co2 : int, optional
Chosen year for specific emission factors (default: 2017)
try_path : str, optional
Path to TRY weather file (default: None)
If set to None, uses default weather TRY file (2010, region 5)
location : tuple, optional
(latitude, longitude) of the simulated system's position
(default: (51.529086, 6.944689) for Bottrop, Germany)
altitude : float, optional
Altitude of location in m (default: 55 - City of Bottrop)
new_try : bool, optional
Defines, if TRY dataset has been generated after 2017 (default: False)
If False, assumes that TRY dataset has been generated before 2017.
If True, assumes that TRY dataset has been generated after 2017 and
belongs to the new TRY classes. This is important for extracting
the correct values from the TRY dataset!
Returns
-------
environment : object
Environment object
"""
# Create environment
timer = time.TimerExtended(timestep=timestep, year=year_timer)
weather = weath.Weather(timer, useTRY=True, pathTRY=try_path,
location=location, altitude=altitude,
new_try=new_try)
market = germanmarket.GermanMarket()
co2em = co2.Emissions(year=year_co2)
environment = env.EnvironmentExtended(timer=timer,
weather=weather,
prices=market,
location=location,
co2em=co2em)
return environment
def generate_res_building_single_zone(environment, net_floor_area,
spec_th_demand,
th_gen_method,
el_gen_method,
annual_el_demand=None,
el_random=False,
use_dhw=False,
dhw_method=1, number_occupants=None,
build_year=None, mod_year=None,
build_type=None, pv_use_area=None,
height_of_floors=None, nb_of_floors=None,
neighbour_buildings=None,
residential_layout=None, attic=None,
cellar=None, construction_type=None,
dormer=None, dhw_volumen=None,
do_normalization=True,
slp_manipulate=True,
curr_central_ahu=None,
dhw_random=False, prev_heat_dev=True,
season_mod=None):
"""
Function generates and returns extended residential building object
with single zone.
Parameters
----------
environment : object
Environment object
net_floor_area : float
Net floor area of building in m2
spec_th_demand : float
Specific thermal energy demand in kWh/m2*a
th_gen_method : int
Thermal load profile generation method
1 - Use SLP
2 - Load Modelica simulation output profile (only residential)
Method 2 is only used for residential buildings. For non-res.
buildings, SLPs are generated instead
el_gen_method : int, optional
Electrical generation method (default: 1)
1 - Use SLP
2 - Generate stochastic load profile (only valid for residential
building)
annual_el_demand : float, optional
Annual electrical energy demand in kWh/a (default: None)
el_random : bool, optional
Defines, if random value should be chosen from statistics
or if average value should be chosen. el_random == True means,
use random value. (default: False)
use_dhw : bool, optional
Boolean to define, if domestic hot water profile should be generated
(default: False)
True - Generate dhw profile
dhw_method : int, optional
Domestic hot water profile generation method (default: 1)
1 - Use Annex 42 profile
2 - Use stochastic profile
number_occupants : int, optional
Number of occupants (default: None)
build_year : int, optional
Building year of construction (default: None)
mod_year : int, optional
Last year of modernization of building (default: None)
build_type : int, optional
Building type (default: None)
pv_use_area : float, optional
Usable pv area in m2 (default: None)
height_of_floors : float
average height of single floor
nb_of_floors : int
Number of floors above the ground
neighbour_buildings : int
neighbour (default = 0)
0: no neighbour
1: one neighbour
2: two neighbours
residential_layout : int
type of floor plan (default = 0)
0: compact
1: elongated/complex
attic : int
type of attic (default = 0)
0: flat roof
1: non heated attic
2: partly heated attic
3: heated attic
cellar : int
type of cellar (default = 0)
0: no cellar
1: non heated cellar
2: partly heated cellar
3: heated cellar
construction_type : str
construction type (default = "heavy")
heavy: heavy construction
light: light construction
dormer : str
construction type
0: no dormer
1: dormer
dhw_volumen : float, optional
Volume of domestic hot water in liter per capita and day
(default: None).
do_normalization : bool, optional
Defines, if stochastic profile (el_gen_method=2) should be
normalized to given annualDemand value (default: True).
If set to False, annual el. demand depends on stochastic el. load
profile generation. If set to True, does normalization with
annualDemand
slp_manipulate : bool, optional
Defines, if thermal space heating SLP profile should be modified
(default: True). Only used for residential buildings!
Only relevant, if th_gen_method == 1
True - Do manipulation
False - Use original profile
Sets thermal power to zero in time spaces, where average daily outdoor
temperature is equal to or larger than 12 °C. Rescales profile to
original demand value.
curr_central_ahu : bool, optional
Defines, if building has air handling unit (AHU)
(default: None)
dhw_random : bool, optional
Defines, if hot water volume per person and day value should be
randomized by choosing value from gaussian distribution (20 %
standard deviation) (default: False)
If True: Randomize value
If False: Use reference value
prev_heat_dev : bool, optional
Defines, if heating devices should be prevented within chosen
appliances (default: True). If set to True, DESWH, E-INST,
Electric shower, Storage heaters and Other electric space heating
are set to zero. Only relevant for el_gen_method == 2
season_mod : float, optional
Float to define rescaling factor to rescale annual lighting power curve
with cosine wave to increase winter usage and decrease summer usage.
Reference is maximum lighting power (default: None). If set to None,
do NOT perform rescaling with cosine wave
Returns
-------
extended_building : object
BuildingExtended object
"""
assert net_floor_area > 0
assert spec_th_demand >= 0
if annual_el_demand is not None:
assert annual_el_demand >= 0
else:
assert number_occupants is not None
assert number_occupants > 0
# Define SLP profiles for residential building with single zone
th_slp_type = 'HEF'
el_slp_type = 'H0'
if number_occupants is not None:
assert number_occupants > 0
assert number_occupants <= 5 # Max 5 occupants for stochastic profile
if el_gen_method == 2 or (dhw_method == 2 and use_dhw == True):
# Generate occupancy profile (necessary for stochastic, el. or
# dhw profile)
occupancy_object = occup.Occupancy(environment,
number_occupants=number_occupants)
else: # Generate occupancy object without profile generation
# Just used to store information about number of occupants
occupancy_object = occup.Occupancy(environment,
number_occupants=number_occupants,
do_profile=False)
else:
occupancy_object = None # Dummy object to prevent error with
# apartment usage
if el_gen_method == 2:
warnings.warn('Stochastic el. profile cannot be generated ' +
'due to missing number of occupants. ' +
'SLP is used instead.')
# Set el_gen_method to 1 (SLP)
el_gen_method = 1
elif dhw_method == 2:
raise AssertionError('DHW profile cannot be generated ' +
'for residential building without ' +
'occupants (stochastic mode). ' +
'Please check your input file ' +
'(missing number of occupants) ' +
'or disable dhw generation.')
if (number_occupants is None and dhw_method == 1 and use_dhw == True):
# Set number of occupants to 2 to enable dhw usage
number_occupants = 2
# Create space heating demand
if th_gen_method == 1:
# Use SLP
heat_power_curve = SpaceHeating.SpaceHeating(environment,
method=1,
profile_type=th_slp_type,
livingArea=net_floor_area,
specificDemand=spec_th_demand)
if slp_manipulate: # Do SLP manipulation
timestep = environment.timer.timeDiscretization
temp_array = environment.weather.tAmbient
mod_curve = \
slpman.slp_th_manipulator(timestep,
th_slp_curve=heat_power_curve.loadcurve,
temp_array=temp_array)
heat_power_curve.loadcurve = mod_curve
elif th_gen_method == 2:
# Use Modelica result profile
heat_power_curve = SpaceHeating.SpaceHeating(environment,
method=3,
livingArea=net_floor_area,
specificDemand=spec_th_demand)
# Calculate el. energy demand for apartment, if no el. energy
# demand is given for whole building to rescale
if annual_el_demand is None:
# Generate annual_el_demand_ap
annual_el_demand = calc_el_dem_ap(nb_occ=number_occupants,
el_random=el_random,
type='sfh')
print('Annual electrical demand in kWh: ', annual_el_demand)
if number_occupants is not None:
print('El. demand per person in kWh: ')
print(annual_el_demand / number_occupants)
print()
# Create electrical power curve
if el_gen_method == 2:
if season_mod is not None:
season_light_mod = True
else:
season_light_mod = False
el_power_curve = ElectricalDemand.ElectricalDemand(environment,
method=2,
total_nb_occupants=number_occupants,
randomizeAppliances=True,
lightConfiguration=0,
annualDemand=annual_el_demand,
occupancy=occupancy_object.occupancy,
do_normalization=do_normalization,
prev_heat_dev=prev_heat_dev,
season_light_mod=season_light_mod,
light_mod_fac=season_mod)
else: # Use el. SLP
el_power_curve = ElectricalDemand.ElectricalDemand(environment,
method=1,
annualDemand=annual_el_demand,
profileType=el_slp_type)
# Create domestic hot water demand
if use_dhw:
if dhw_volumen is None or dhw_random:
dhw_kwh = calc_dhw_dem_ap(nb_occ=number_occupants,
dhw_random=dhw_random,
type='sfh')
# Reconvert kWh/a to liters per day
# (rho = 955 kg/m3, c_p = 4182 J/(kg K), dT = 35 K, 365 days)
dhw_vol_ap = dhw_kwh * 1000 * 3600 * 1000 / (955 * 4182 * 35 * 365)
# DHW volume per person and day
dhw_volumen = dhw_vol_ap / number_occupants
if dhw_method == 1: # Annex 42
dhw_power_curve = DomesticHotWater.DomesticHotWater(environment,
tFlow=60,
thermal=True,
method=1,
# Annex 42
dailyConsumption=dhw_volumen * number_occupants,
supplyTemperature=25)
else: # Stochastic profile
dhw_power_curve = DomesticHotWater.DomesticHotWater(environment,
tFlow=60,
thermal=True,
method=2,
supplyTemperature=25,
occupancy=occupancy_object.occupancy)
# Rescale to reference dhw volume (liters per person
# and day)
curr_dhw_vol_flow = dhw_power_curve.water
# Water volume flow in Liter/hour
curr_volume_year = sum(curr_dhw_vol_flow) * \
environment.timer.timeDiscretization / \
3600
curr_vol_day = curr_volume_year / 365
curr_vol_day_and_person = curr_vol_day / \
occupancy_object.number_occupants
print('Curr. volume per person and day: ',
curr_vol_day_and_person)
dhw_con_factor = dhw_volumen / curr_vol_day_and_person
print('Conv. factor of hot water: ', dhw_con_factor)
print('New volume per person and day: ',
curr_vol_day_and_person * dhw_con_factor)
# Normalize water flow and power load
dhw_power_curve.water *= dhw_con_factor
dhw_power_curve.loadcurve *= dhw_con_factor
# Create apartment
apartment = Apartment.Apartment(environment, occupancy=occupancy_object,
net_floor_area=net_floor_area)
# Add demands to apartment
if th_gen_method == 1 or th_gen_method == 2:
if use_dhw:
apartment.addMultipleEntities([heat_power_curve, el_power_curve,
dhw_power_curve])
else:
apartment.addMultipleEntities([heat_power_curve, el_power_curve])
else:
if use_dhw:
apartment.addMultipleEntities([el_power_curve,
dhw_power_curve])
else:
apartment.addEntity(el_power_curve)
# Create extended building object
extended_building = \
build_ex.BuildingExtended(environment,
build_year=build_year,
mod_year=mod_year,
build_type=build_type,
roof_usabl_pv_area=pv_use_area,
net_floor_area=net_floor_area,
height_of_floors=height_of_floors,
nb_of_floors=nb_of_floors,
neighbour_buildings=neighbour_buildings,
residential_layout=residential_layout,
attic=attic,
cellar=cellar,
construction_type=construction_type,
dormer=dormer,
with_ahu=
curr_central_ahu)
# Add apartment to extended building
extended_building.addEntity(entity=apartment)
return extended_building
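The constant 955 * 4182 * 35 in the DHW conversion above is rho * c_p * delta_T (water density in kg/m3, specific heat capacity in J/(kg K), temperature lift of 60 °C flow minus 25 °C supply). A standalone sketch of the kWh/a to liters/day conversion (the function name is illustrative):

```python
RHO = 955.0     # water density in kg/m3 (value used in this module)
C_P = 4182.0    # specific heat capacity of water in J/(kg K)
DELTA_T = 35.0  # temperature lift in K (60 deg C flow - 25 deg C supply)

def dhw_kwh_to_liters_per_day(dhw_kwh):
    # kWh/a -> J/a (x 1000 x 3600), J -> m3 via Q = rho * V * c_p * dT,
    # m3 -> liters (x 1000), per year -> per day (/ 365)
    joule_per_year = dhw_kwh * 1000 * 3600
    m3_per_year = joule_per_year / (RHO * C_P * DELTA_T)
    return m3_per_year * 1000 / 365

vol_per_day = dhw_kwh_to_liters_per_day(700)  # ~49.4 liters per day
```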
def generate_res_building_multi_zone(environment,
net_floor_area,
spec_th_demand,
th_gen_method,
el_gen_method,
nb_of_apartments,
annual_el_demand=None,
el_random=False,
use_dhw=False,
dhw_method=1,
total_number_occupants=None,
build_year=None, mod_year=None,
build_type=None, pv_use_area=None,
height_of_floors=None, nb_of_floors=None,
neighbour_buildings=None,
residential_layout=None, attic=None,
cellar=None, construction_type=None,
dormer=None, dhw_volumen=None,
do_normalization=True,
slp_manipulate=True,
curr_central_ahu=False,
dhw_random=False, prev_heat_dev=True,
season_mod=None):
"""
Function generates and returns extended residential building object
with multiple apartments. Occupants are randomly distributed over
number of apartments.
Parameters
----------
environment : object
Environment object
net_floor_area : float
Net floor area of building in m2
spec_th_demand : float
Specific thermal energy demand in kWh/m2*a
annual_el_demand : float, optional
Annual electrical energy demand in kWh/a (default: None)
el_random : bool, optional
Defines, if random value should be chosen from statistics
or if average value should be chosen. el_random == True means,
use random value. (default: False)
th_gen_method : int
Thermal load profile generation method
1 - Use SLP
2 - Load Modelica simulation output profile (only residential)
Method 2 is only used for residential buildings. For non-res.
buildings, SLPs are generated instead
el_gen_method : int, optional
Electrical generation method (default: 1)
1 - Use SLP
2 - Generate stochastic load profile (only valid for residential
building)
nb_of_apartments : int
Number of apartments within building
use_dhw : bool, optional
Boolean to define, if domestic hot water profile should be generated
(default: False)
True - Generate dhw profile
dhw_method : int, optional
Domestic hot water profile generation method (default: 1)
1 - Use Annex 42 profile
2 - Use stochastic profile
total_number_occupants : int, optional
Total number of occupants in all apartments (default: None)
build_year : int, optional
Building year of construction (default: None)
mod_year : int, optional
Last year of modernization of building (default: None)
build_type : int, optional
Building type (default: None)
pv_use_area : float, optional
Usable pv area in m2 (default: None)
height_of_floors : float
average height of the floors
nb_of_floors : int
Number of floors above the ground
neighbour_buildings : int
neighbour (default = 0)
0: no neighbour
1: one neighbour
2: two neighbours
residential_layout : int
type of floor plan (default = 0)
0: compact
1: elongated/complex
attic : int
type of attic (default = 0)
0: flat roof
1: non heated attic
2: partly heated attic
3: heated attic
cellar : int
type of cellar (default = 0)
0: no cellar
1: non heated cellar
2: partly heated cellar
3: heated cellar
construction_type : str
construction type (default = "heavy")
heavy: heavy construction
light: light construction
dormer : str
construction type
0: no dormer
1: dormer
dhw_volumen : float, optional
Volume of domestic hot water in liter per capita and day
(default: None).
do_normalization : bool, optional
Defines, if stochastic profile (el_gen_method=2) should be
normalized to given annualDemand value (default: True).
If set to False, annual el. demand depends on stochastic el. load
profile generation. If set to True, does normalization with
annualDemand
slp_manipulate : bool, optional
Defines, if thermal space heating SLP profile should be modified
(default: True). Only used for residential buildings!
Only relevant, if th_gen_method == 1
True - Do manipulation
False - Use original profile
Sets thermal power to zero in time spaces, where average daily outdoor
temperature is equal to or larger than 12 °C. Rescales profile to
original demand value.
curr_central_ahu : bool, optional
Defines, if building has air handling unit (AHU)
(default: False)
dhw_random : bool, optional
Defines, if hot water volume per person and day value should be
randomized by choosing value from gaussian distribution (20 %
standard deviation) (default: False)
If True: Randomize value
If False: Use reference value
prev_heat_dev : bool, optional
Defines, if heating devices should be prevented within chosen
appliances (default: True). If set to True, DESWH, E-INST,
Electric shower, Storage heaters and Other electric space heating
are set to zero. Only relevant for el_gen_method == 2
season_mod : float, optional
Float to define rescaling factor to rescale annual lighting power curve
with cosine wave to increase winter usage and decrease summer usage.
Reference is maximum lighting power (default: None). If set to None,
do NOT perform rescaling with cosine wave
Returns
-------
extended_building : object
BuildingExtended object
Annotation
----------
Raise assertion error when share of occupants per apartment is higher
than 5 (necessary for stochastic, el. profile generation)
"""
assert net_floor_area > 0
assert spec_th_demand >= 0
if annual_el_demand is not None:
assert annual_el_demand >= 0
if total_number_occupants is not None:
assert total_number_occupants > 0
assert total_number_occupants / nb_of_apartments <= 5, (
'Number of occupants per apartment is ' +
'at least once higher than 5.')
# Distribute occupants to different apartments
occupancy_list = constrained_sum_sample_pos(n=nb_of_apartments,
total=total_number_occupants)
# While not all values are smaller than or equal to 5, re-run sampling
# This while loop might lead to large runtimes for buildings with a
# large number of apartments (not finding a valid solution, see
# issue #147). Thus, we add a counter to exit the loop
count = 0
while not all(i <= 5 for i in occupancy_list):
occupancy_list = constrained_sum_sample_pos(n=nb_of_apartments,
total=total_number_occupants)
if count == 100000:
# Take current occupancy_list and redistribute occupants
# manually until valid distribution is found
occupancy_list = redistribute_occ(occ_list=occupancy_list)
# Exit while loop
break
count += 1
print('Current list of occupants per apartment: ', occupancy_list)
else:
msg = 'Number of occupants is None for current building!'
warnings.warn(msg)
# Define SLP profiles for residential building with multiple zones
th_slp_type = 'HMF'
el_slp_type = 'H0'
# Create extended building object
extended_building = \
build_ex.BuildingExtended(environment,
build_year=build_year,
mod_year=mod_year,
build_type=build_type,
roof_usabl_pv_area=pv_use_area,
net_floor_area=net_floor_area,
height_of_floors=height_of_floors,
nb_of_floors=nb_of_floors,
neighbour_buildings=
neighbour_buildings,
residential_layout=
residential_layout,
attic=attic,
cellar=cellar,
construction_type=
construction_type,
dormer=dormer,
with_ahu=curr_central_ahu)
if annual_el_demand is not None:
# Distribute el. demand equally to apartments
annual_el_demand_ap = annual_el_demand / nb_of_apartments
else:
annual_el_demand_ap = None
# Loop over apartments
# #---------------------------------------------------------------------
for i in range(int(nb_of_apartments)):
# Dummy init of number of occupants
curr_number_occupants = None
# Check number of occupants
if total_number_occupants is not None:
# Get number of occupants
curr_number_occupants = occupancy_list[i]
# Generate occupancy profiles for stochastic el. and/or dhw
if el_gen_method == 2 or (dhw_method == 2 and use_dhw):
# Generate occupancy profile (necessary for stochastic, el. or
# dhw profile)
occupancy_object = occup.Occupancy(environment,
number_occupants=
curr_number_occupants)
else: # Generate occupancy object without profile
occupancy_object = occup.Occupancy(environment,
number_occupants=
curr_number_occupants,
do_profile=False)
else:
if el_gen_method == 2:
warnings.warn('Stochastic el. profile cannot be generated ' +
'due to missing number of occupants. ' +
'SLP is used instead.')
# Set el_gen_method to 1 (SLP)
el_gen_method = 1
elif dhw_method == 2:
raise AssertionError('DHW profile cannot be generated ' +
'for residential building without ' +
'occupants (stochastic mode). ' +
'Please check your input file ' +
'(missing number of occupants) ' +
'or disable dhw generation.')
if (curr_number_occupants is None and dhw_method == 1 and
use_dhw == True):
# If dhw profile should be generated, but current number of
# occupants is None, number of occupants is sampled from the
# occupancy distribution for the apartment
curr_number_occupants = usunc.calc_sampling_occ_per_app(
nb_samples=1)
# Assumes equal area share for all apartments
apartment_area = net_floor_area / nb_of_apartments
# Create space heating demand (for apartment)
if th_gen_method == 1:
# Use SLP
heat_power_curve = \
SpaceHeating.SpaceHeating(environment,
method=1,
profile_type=th_slp_type,
livingArea=apartment_area,
specificDemand=spec_th_demand)
if slp_manipulate: # Do SLP manipulation
timestep = environment.timer.timeDiscretization
temp_array = environment.weather.tAmbient
mod_curve = \
slpman.slp_th_manipulator(timestep,
th_slp_curve=heat_power_curve.loadcurve,
temp_array=temp_array)
heat_power_curve.loadcurve = mod_curve
elif th_gen_method == 2:
# Use Modelica result profile
heat_power_curve = SpaceHeating.SpaceHeating(environment,
method=3,
livingArea=apartment_area,
specificDemand=spec_th_demand)
# Calculate el. energy demand for apartment, if no el. energy
# demand is given for whole building to rescale
if annual_el_demand_ap is None:
# Generate annual_el_demand_ap
annual_el_demand_ap = calc_el_dem_ap(nb_occ=curr_number_occupants,
el_random=el_random,
type='mfh')
print('Annual el. demand (apartment) in kWh: ', annual_el_demand_ap)
if curr_number_occupants is not None:
print('El. demand per person in kWh: ')
print(annual_el_demand_ap / curr_number_occupants)
print()
# Create electrical power curve
if el_gen_method == 2:
if season_mod is not None:
season_light_mod = True
else:
season_light_mod = False
el_power_curve = ElectricalDemand.ElectricalDemand(environment,
method=2,
total_nb_occupants=curr_number_occupants,
randomizeAppliances=True,
lightConfiguration=0,
annualDemand=annual_el_demand_ap,
occupancy=occupancy_object.occupancy,
do_normalization=do_normalization,
prev_heat_dev=prev_heat_dev,
season_light_mod=season_light_mod,
light_mod_fac=season_mod)
else: # Use el. SLP
el_power_curve = ElectricalDemand.ElectricalDemand(environment,
method=1,
annualDemand=annual_el_demand_ap,
profileType=el_slp_type)
# Create domestic hot water demand
if use_dhw:
if dhw_volumen is None or dhw_random:
dhw_kwh = calc_dhw_dem_ap(nb_occ=curr_number_occupants,
dhw_random=dhw_random,
type='mfh')
# Reconvert kWh/a to liters per day
# (rho = 955 kg/m3, c_p = 4182 J/(kg K), dT = 35 K, 365 days)
dhw_vol_ap = dhw_kwh * 1000 * 3600 * 1000 / (
955 * 4182 * 35 * 365)
# DHW volume per person and day
dhw_volumen = dhw_vol_ap / curr_number_occupants
if dhw_method == 1: # Annex 42
dhw_power_curve = DomesticHotWater.DomesticHotWater(
environment,
tFlow=60,
thermal=True,
method=1,
# Annex 42
dailyConsumption=dhw_volumen * curr_number_occupants,
supplyTemperature=25)
else: # Stochastic profile
dhw_power_curve = DomesticHotWater.DomesticHotWater(
environment,
tFlow=60,
thermal=True,
method=2,
supplyTemperature=25,
occupancy=occupancy_object.occupancy)
# Rescale to reference dhw volume (liters per person
# and day)
curr_dhw_vol_flow = dhw_power_curve.water
# Water volume flow in Liter/hour
curr_volume_year = sum(curr_dhw_vol_flow) * \
environment.timer.timeDiscretization / \
3600
curr_vol_day = curr_volume_year / 365
curr_vol_day_and_person = curr_vol_day / \
occupancy_object.number_occupants
print('Curr. volume per person and day: ',
curr_vol_day_and_person)
dhw_con_factor = dhw_volumen / curr_vol_day_and_person
print('Conv. factor of hot water: ', dhw_con_factor)
print('New volume per person and day: ',
curr_vol_day_and_person * dhw_con_factor)
# Normalize water flow and power load
dhw_power_curve.water *= dhw_con_factor
dhw_power_curve.loadcurve *= dhw_con_factor
# Create apartment
apartment = Apartment.Apartment(environment,
occupancy=occupancy_object,
net_floor_area=apartment_area)
# Add demands to apartment
if th_gen_method == 1 or th_gen_method == 2:
if use_dhw:
apartment.addMultipleEntities([heat_power_curve,
el_power_curve,
dhw_power_curve])
else:
apartment.addMultipleEntities([heat_power_curve,
el_power_curve])
else:
if use_dhw:
apartment.addMultipleEntities([el_power_curve,
dhw_power_curve])
else:
apartment.addEntity(el_power_curve)
# Add apartment to extended building
extended_building.addEntity(entity=apartment)
return extended_building
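The occupant-distribution guard above (resample until every apartment holds at most 5 occupants, with an iteration cap as fallback, see issue #147) is bounded rejection sampling. A standalone sketch under those assumptions (function name is illustrative):

```python
import random

def sample_parts(n, total):
    # Divider method: n positive integers summing to total
    dividers = sorted(random.sample(range(1, total), n - 1))
    return [a - b for a, b in zip(dividers + [total], [0] + dividers)]

random.seed(0)
occ = sample_parts(4, 12)
tries = 0
# Reject and resample until every part is <= 5, with a hard cap on
# the number of tries (fallback would redistribute manually instead)
while not all(i <= 5 for i in occ):
    occ = sample_parts(4, 12)
    tries += 1
    if tries == 100000:
        break
```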
def generate_nonres_building_single_zone(environment,
net_floor_area, spec_th_demand,
annual_el_demand, th_slp_type,
el_slp_type=None,
build_year=None, mod_year=None,
build_type=None, pv_use_area=None,
method_3_type=None,
method_4_type=None,
height_of_floors=None,
nb_of_floors=None):
"""
Function generates and returns extended nonresidential building object
with single zone.
Parameters
----------
environment : object
Environment object
net_floor_area : float
Net floor area of building in m2
spec_th_demand : float
Specific thermal energy demand in kWh/m2*a
annual_el_demand : float
Annual electrical energy demand in kWh/a
th_slp_type : str
Thermal SLP type (for non-residential buildings)
- `GBA` : Bakeries
- `GBD` : Other services
- `GBH` : Accommodations
- `GGA` : Restaurants
- `GGB` : Gardening
- `GHA` : Retailers
- `GHD` : Summed load profile business, trade and services
- `GKO` : Banks, insurances, public institutions
- `GMF` : Household similar businesses
- `GMK` : Automotive
- `GPD` : Paper and printing
- `GWA` : Laundries
el_slp_type : str, optional (default: None)
Electrical SLP type
- H0 : Household
- L0 : Farms
- L1 : Farms with breeding / cattle
- L2 : Farms without cattle
- G0 : Business (general)
- G1 : Business (working days 8:00 AM - 6:00 PM)
- G2 : Business with high loads in the evening
- G3 : Business (24 hours)
- G4 : Shops / Barbers
- G5 : Bakery
- G6 : Weekend operation
build_year : int, optional
Building year of construction (default: None)
mod_year : int, optional
Last year of modernization of building (default: None)
build_type : int, optional
Building type (default: None)
pv_use_area : float, optional
Usable pv area in m2 (default: None)
method_3_type : str, optional
Defines type of profile for method=3 (default: None)
Options:
- 'food_pro': Food production
- 'metal': Metal company
- 'rest': Restaurant (with large cooling load)
- 'sports': Sports hall
- 'repair': Repair / metal shop
method_4_type : str, optional
Defines type of profile for method=4 (default: None)
- 'metal_1' : Metal company with smooth profile
- 'metal_2' : Metal company with fluctuation in profile
- 'warehouse' : Warehouse
height_of_floors : float
average height of the floors
nb_of_floors : int
Number of floors above the ground
Returns
-------
extended_building : object
BuildingExtended object
"""
assert net_floor_area > 0
assert spec_th_demand >= 0
assert annual_el_demand >= 0
assert th_slp_type != 'HEF', ('HEF thermal slp profile only valid for ' +
'residential buildings.')
assert th_slp_type != 'HMF', ('HMF thermal slp profile only valid for ' +
'residential buildings.')
assert el_slp_type != 'H0', ('H0 electrical slp profile only valid for ' +
'residential buildings.')
# Create space heating demand
heat_power_curve = SpaceHeating.SpaceHeating(environment,
method=1,
profile_type=th_slp_type,
livingArea=net_floor_area,
specificDemand=spec_th_demand)
if method_3_type is not None:
el_power_curve = \
ElectricalDemand.ElectricalDemand(environment,
method=3,
annualDemand=annual_el_demand,
do_normalization=True,
method_3_type=method_3_type)
elif method_4_type is not None:
el_power_curve = \
ElectricalDemand.ElectricalDemand(environment,
method=4,
annualDemand=annual_el_demand,
do_normalization=True,
method_4_type=method_4_type)
else:
# Use el. SLP for el. power load generation
assert el_slp_type is not None, 'el_slp_type is required!'
el_power_curve = \
ElectricalDemand.ElectricalDemand(environment,
method=1,
annualDemand=annual_el_demand,
profileType=el_slp_type)
# Create apartment
apartment = Apartment.Apartment(environment)
# Add demands to apartment
apartment.addMultipleEntities([heat_power_curve, el_power_curve])
# Create extended building object
extended_building = build_ex.BuildingExtended(environment,
net_floor_area=net_floor_area,
build_year=build_year,
mod_year=mod_year,
build_type=build_type,
roof_usabl_pv_area=pv_use_area,
height_of_floors=height_of_floors,
nb_of_floors=nb_of_floors,
)
# Add apartment to extended building
extended_building.addEntity(entity=apartment)
return extended_building
def get_district_data_from_txt(path, delimiter='\t'):
"""
Load city district data from txt file (see annotations below for further
information on required inputs).
NaN values are replaced with Python None.
Parameters
----------
path : str
Path to txt file
delimiter : str, optional
Defines delimiter for txt file (default: '\t')
Returns
-------
district_data : ndarray
Numpy 2d-array with city district data (each column represents
different parameter, see annotations)
Annotations
-----------
File structure
Columns:
1: id (int)
2: x in m (float)
3: y in m (float)
4: building_type (int, e.g. 0 for residential building)
5: net floor area in m2 (float)
6: Year of construction (int, optional)
7: Year of modernization (int, optional)
8: Annual (final) thermal energy demand in kWh (float, optional)
9: Annual electrical energy demand in kWh (float, optional)
10: Usable pv roof area in m2 (float, optional)
11: Number of apartments (int, optional)
12: Total number of occupants (int, optional)
13: Number of floors above the ground (int, optional)
14: Average Height of floors (float, optional)
15: If building has a central AHU or not (boolean, optional)
16: Residential layout (int, optional, e.g. 0 for compact)
17: Neighbour Buildings (int, optional) (0 - free standing)
(1 - double house) (2 - row house)
18: Type of attic (int, optional, e.g. 0 for flat roof) (1 - regular roof;
unheated) (2 - regular roof; partially heated) (3 - regular roof; fully
heated)
19: Type of cellar (int, optional, e.g. 1 for non heated cellar)
(0 - no basement) (1 - non heated) (2 - partially heated) (3 - fully heated)
20: Dormer (int, optional, 0: no dormer/ 1: dormer)
21: Construction Type(heavy/light, optional) (0 - heavy; 1 - light)
22: Method_3_nb (for usage of measured, weekly non-res. el. profile)
(optional)
23: Method_4_nb (for usage of measured, annual non-res. el. profile)
(optional)
"""
district_data = np.genfromtxt(path, delimiter=delimiter, skip_header=1)
# Replace nan with None values of Python
district_data = np.where(np.isnan(district_data), None, district_data)
return district_data
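# A minimal standalone sketch (with hypothetical inline data) of the loading
# logic above: np.genfromtxt converts empty fields to nan, and np.where with
# None promotes the array to object dtype so missing values become Python None,
# mirroring get_district_data_from_txt().

```python
import io

import numpy as np

# Hypothetical district file content: header row plus two buildings,
# where building 1 is missing its y-coordinate
txt = "id\tx\ty\n1\t10.0\t\n2\t20.0\t5.0\n"

data = np.genfromtxt(io.StringIO(txt), delimiter='\t', skip_header=1)
# Replace nan with Python None (result becomes an object array)
data = np.where(np.isnan(data), None, data)

print(data[0][2])  # missing field is now None
print(data[1][2])
```

# Note that the object-dtype result means downstream code must compare with
# `is None` instead of `np.isnan`.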
def calc_el_dem_ap(nb_occ, el_random, type):
"""
Calculate electric energy demand per apartment per year
in kWh/a (residential buildings, only)
Parameters
----------
nb_occ : int
Number of occupants
el_random : bool
Defines whether a random value (True) or the average value (False)
should be chosen from the statistics.
type : str
Define residential building type (single family or multi-
family)
Options:
- 'sfh' : Single family house
- 'mfh' : Multi family house
Returns
-------
el_dem : float
Electric energy demand per apartment in kWh/a
"""
assert nb_occ > 0
assert nb_occ <= 5, 'Number of occupants cannot exceed 5 per ap.'
assert type in ['sfh', 'mfh']
if el_random:
# Choose first entry of random sample list
el_dem = usunc.calc_sampling_el_demand_per_apartment(
nb_samples=1,
nb_persons=nb_occ,
type=type)[0]
else:
# Choose average value depending on nb_occ
# Class D without hot water (Stromspiegel 2017)
dict_sfh = {1: 2500,
2: 3200,
3: 3900,
4: 4200,
5: 5400}
dict_mfh = {1: 1500,
2: 2200,
3: 2800,
4: 3200,
5: 4000}
if type == 'sfh':
el_dem = dict_sfh[nb_occ]
elif type == 'mfh':
el_dem = dict_mfh[nb_occ]
return el_dem
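# The average-value branch above can be sketched as a standalone lookup
# (hypothetical name avg_el_dem_ap): Stromspiegel 2017 class D values in
# kWh/a, keyed by occupant count (1-5) and building type.

```python
def avg_el_dem_ap(nb_occ, btype):
    """Return average annual el. demand per apartment in kWh/a."""
    # Class D values without hot water (Stromspiegel 2017)
    table = {'sfh': {1: 2500, 2: 3200, 3: 3900, 4: 4200, 5: 5400},
             'mfh': {1: 1500, 2: 2200, 3: 2800, 4: 3200, 5: 4000}}
    assert 1 <= nb_occ <= 5 and btype in table
    return table[btype][nb_occ]


print(avg_el_dem_ap(3, 'mfh'))  # 2800
```

# Nesting the two dictionaries under the building type keeps the lookup to a
# single expression instead of the if/elif branch used above.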
def calc_dhw_dem_ap(nb_occ, dhw_random, type, delta_t=35, c_p_water=4182,
rho_water=995):
"""
Calculate hot water energy demand per apartment per year
in kWh/a (residential buildings, only)
Parameters
----------
nb_occ : int
Number of occupants
dhw_random : bool
Defines whether a random value (True) or the average value (False)
should be chosen from the statistics.
type : str
Define residential building type (single family or multi-
family)
Options:
- 'sfh' : Single family house
- 'mfh' : Multi family house
delta_t : float, optional
Temperature split of heated up water in Kelvin (default: 35)
c_p_water : float, optional
Specific heat capacity of water in J/kgK (default: 4182)
rho_water : float, optional
Density of water in kg/m3 (default: 995)
Returns
-------
dhw_dem : float
Hot water energy demand per apartment in kWh/a
"""
assert nb_occ > 0
assert nb_occ <= 5, 'Number of occupants cannot exceed 5 per ap.'
assert type in ['sfh', 'mfh']
if dhw_random:
# Choose first entry of random sample list
# DHW volume in liters per apartment and day
dhw_volume = usunc.calc_sampling_dhw_per_apartment(
nb_samples=1,
nb_persons=nb_occ,
b_type=type)[0]
dhw_dem = dhw_volume * 365 * rho_water * c_p_water * delta_t / \
(1000 * 3600 * 1000)
else:
# Choose average value depending on nb_occ
# Hot water demand values (Stromspiegel 2017)
dict_sfh = {1: 500,
2: 800,
3: 1000,
4: 1300,
5: 1600}
dict_mfh = {1: 500,
2: 900,
3: 1300,
4: 1400,
5: 2000}
if type == 'sfh':
dhw_dem = dict_sfh[nb_occ]
elif type == 'mfh':
dhw_dem = dict_mfh[nb_occ]
return dhw_dem
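# The liters-per-day to kWh/a conversion used in the random branch above can
# be sketched standalone (hypothetical name dhw_volume_to_kwh):
# E = V * 365 * rho * c_p * delta_T, with liters -> m3 via /1000 and
# J -> kWh via /3.6e6, i.e. a combined divisor of 1000 * 3600 * 1000.

```python
def dhw_volume_to_kwh(litres_per_day, delta_t=35, c_p_water=4182,
                      rho_water=995):
    """Convert daily hot water volume (liters) to annual energy in kWh/a."""
    # liters/1000 -> m3; m3 * kg/m3 * J/(kg*K) * K = J; J / 3.6e6 -> kWh
    return (litres_per_day * 365 * rho_water * c_p_water * delta_t
            / (1000 * 3600 * 1000))


# E.g. 64 liters per day heated by 35 K is roughly 945 kWh/a
print(round(dhw_volume_to_kwh(64), 1))
```

# The same formula with the module's default water properties (c_p = 4182
# J/kgK, rho = 995 kg/m3) reproduces the dhw_dem computation above.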
def run_city_generator(generation_mode, timestep,
year_timer, year_co2,
location,
th_gen_method,
el_gen_method, district_data, use_dhw=False,
dhw_method=1, try_path=None,
pickle_city_filename=None, do_save=True,
path_save_city=None, eff_factor=0.85,
show_city=False, altitude=55, dhw_volumen=None,
do_normalization=True, slp_manipulate=True,
call_teaser=False, teaser_proj_name='pycity',
do_log=True, log_path=None,
project_name='teaser_project',
air_vent_mode=1, vent_factor=0.5,
t_set_heat=20,
t_set_cool=70,
t_night=16,
vdi_sh_manipulate=False, city_osm=None,
el_random=False, dhw_random=False, prev_heat_dev=True,
season_mod=None, merge_windows=False, new_try=False):
"""
Function generates a city district from user-defined input. Generated
buildings consist of only one single zone!
Parameters
----------
generation_mode : int
Integer to define method to generate city district
(so far, only csv/txt file import has been implemented)
generation_mode = 0: Load data from csv/txt file (tab separated)
timestep : int
Timestep in seconds
year_timer : int
Chosen year of analysis
(influences initial day for profile generation)
year_co2 : int, optional
Chosen year for specific emission factors
location : Tuple
(latitude, longitude) of the simulated system's position.
th_gen_method : int
Thermal load profile generation method
1 - Use SLP
2 - Load Modelica simulation output profile (only residential)
Method 2 is only used for residential buildings. For non-res.
buildings, SLPs are generated instead
3 - Use TEASER VDI 6007 core to simulate thermal loads
el_gen_method : int
Electrical generation method
1 - Use SLP
2 - Generate stochastic load profile (only valid for residential
building). Requires number of occupants.
district_data : ndarray
Numpy 2d-array with city district data (each column represents
different parameter, see annotations)
use_dhw : bool, optional
Defines if domestic hot water profiles should be generated.
(default: False)
dhw_method : int, optional
Defines method for dhw profile generation (default: 1)
Only relevant if use_dhw=True. Options:
- 1: Generate profiles via Annex 42
- 2: Generate stochastic dhw profiles
try_path : str, optional
Path to TRY weather file (default: None)
If set to None, uses default weather TRY file (2010, region 5)
pickle_city_filename : str, optional
Name for file, which should be pickled and saved, if no path is
handed over to save the object to (default: None)
do_save : bool, optional
Defines, if city object instance should be saved as pickle file
(default: True)
path_save_city : str, optional
Path to save (pickle and dump) city object instance to (default: None)
If None is used, saves file to .../output/...
eff_factor : float, optional
Efficiency factor of thermal boiler system (default: 0.85)
show_city : bool, optional
Boolean to define if city district should be printed by matplotlib
after generation (default: False)
True: Print results
False: Do not print results
altitude : float, optional
Altitude of location in m (default: 55 - City of Bottrop)
dhw_volumen : float, optional
Volume of domestic hot water in liter per capita and day
(default: None).
do_normalization : bool, optional
Defines, if stochastic profile (el_gen_method=2) should be
normalized to given annualDemand value (default: True).
If set to False, annual el. demand depends on stochastic el. load
profile generation. If set to True, does normalization with
annualDemand
slp_manipulate : bool, optional
Defines, if thermal space heating SLP profile should be modified
(default: True). Only used for residential buildings!
Only relevant, if th_gen_method == 1
True - Do manipulation
False - Use original profile
Sets thermal power to zero in time spaces, where average daily outdoor
temperature is equal to or larger than 12 °C. Rescales profile to
original demand value.
call_teaser : bool, optional
Defines, if teaser should be called to generate typeBuildings
(currently, residential typeBuildings only).
(default: False)
If set to True, generates typeBuildings and add them to building node
as attribute 'type_building'
teaser_proj_name : str, optional
TEASER project name (default: 'pycity'). Only relevant, if call_teaser
is set to True
do_log : bool, optional
Defines, if log file of inputs should be generated (default: True)
log_path : str, optional
Path to log file (default: None). If set to None, saves log to
.../output
air_vent_mode : int
Defines method to generate air exchange rate for VDI 6007 simulation
Options:
0 : Use constant value (vent_factor in 1/h)
1 : Use deterministic, temperature-dependent profile
2 : Use stochastic, user-dependent profile
vent_factor : float, optional
Ventilation rate factor in 1/h (default: 0.5). Only used, if
array_vent_rate is None (otherwise, array_vent_rate array is used)
t_set_heat : float, optional
Heating set temperature in degree Celsius. If temperature drops below
t_set_heat, model is going to be heated up. (default: 20)
(Related to constraints for res. buildings in DIN V 18599)
t_set_cool : float, optional
Cooling set temperature in degree Celsius. If temperature rises above
t_set_cool, model is going to be cooled down. (default: 70)
t_night : float, optional
Night set back temperature in degree Celsius (default: 16)
(Related to constraints for res. buildings in DIN V 18599)
project_name : str, optional
TEASER project name (default: 'teaser_project')
vdi_sh_manipulate : bool, optional
Defines, if VDI 6007 thermal space heating load curve should be
normalized to match given annual space heating demand in kWh
(default: False)
el_random : bool, optional
Defines, if annual electrical demand value for normalization of
el. load profile should randomly diverge from reference value
within specific boundaries (default: False).
If False: Use reference value for normalization
If True: Allow generated value to differ from reference value
dhw_random : bool, optional
Defines, if hot water volume per person and day value should be
randomized by choosing value from gaussian distribution (20 %
standard deviation) (default: False)
If True: Randomize value
If False: Use reference value
prev_heat_dev : bool, optional
Defines, if heating devices should be prevented within chosen
appliances (default: True). If set to True, DESWH, E-INST,
Electric shower, Storage heaters and Other electric space heating
are set to zero. Only relevant for el_gen_method == 2
season_mod : float, optional
Float to define rescaling factor to rescale annual lighting power curve
with cosine wave to increase winter usage and decrease summer usage.
Reference is maximum lighting power (default: None). If set to None,
do NOT perform rescaling with cosine wave
merge_windows : bool, optional
Defines TEASER project setting for merge_windows_calc
(default: False). If set to False, merge_windows_calc is set to False.
If True, Windows are merged into wall resistances.
new_try : bool, optional
Defines, if TRY dataset has been generated after 2017 (default: False)
If False, assumes that TRY dataset has been generated before 2017.
If True, assumes that TRY dataset has been generated after 2017 and
belongs to the new TRY classes. This is important for extracting
the correct values from the TRY dataset!
Returns
-------
city_object : object
City object of pycity_calc
Annotations
-----------
Non-residential building loads are automatically generated via SLP
(even if el_gen_method is set to 2). Furthermore, dhw profile generation
is automatically neglected (only valid for residential buildings)
Electrical load profiles of residential buildings without occupants
are automatically generated via SLP (even if el_gen_method is set to 2)
File structure (district_data np.array)
Columns:
1: id (int)
2: x in m (float)
3: y in m (float)
4: building_type (int, e.g. 0 for residential building)
5: net floor area in m2 (float)
6: Year of construction (int, optional)
7: Year of modernization (int, optional)
8: Annual (final) thermal energy demand in kWh (float, optional)
For residential: space heating, only!
For non-residential: Space heating AND hot water! (SLP usage)
9: Annual electrical energy demand in kWh (float, optional)
10: Usable pv roof area in m2 (float, optional)
11: Number of apartments (int, optional)
12: Total number of occupants (int, optional)
13: Number of floors above the ground (int, optional)
14: Average Height of floors (float, optional)
15: If building has a central AHU or not (boolean, optional)
16: Residential layout (int, optional, e.g. 0 for compact)
17: Neighbour Buildings (int, optional); 0 - free standing; 1 - Double house; 2 - Row house;
18: Type of attic (int, optional, e.g. 0 for flat roof); 1 - Roof, non heated; 2 - Roof, partially heated; 3- Roof, fully heated;
19: Type of basement (int, optional, e.g. 1 for non heated basement 0 - No basement; 1 - basement, non heated; 2 - basement, partially heated; 3- basement, fully heated;
20: Dormer (int, optional, 0: no dormer/ 1: dormer)
21: Construction Type(heavy/light, optional) (0 - heavy; 1 - light)
22: Method_3_nb (for usage of measured, weekly non-res. el. profile)
(optional) (0 to 4)
23: Method_4_nb (for usage of measured, annual non-res. el. profile)
(optional) (0 - 2)
method_3_type : str, optional
Defines type of profile for method=3 (default: None)
Options:
0 - 'food_pro': Food production
1 - 'metal': Metal company
2 - 'rest': Restaurant (with large cooling load)
3 - 'sports': Sports hall
4 - 'repair': Repair / metal shop
method_4_type : str, optional
Defines type of profile for method=4 (default: None)
0 - 'metal_1' : Metal company with smooth profile
1 - 'metal_2' : Metal company with fluctuation in profile
2 - 'warehouse' : Warehouse
"""
assert eff_factor > 0, 'Efficiency factor has to be larger than zero.'
assert eff_factor <= 1, 'Efficiency factor cannot exceed 1.'
if dhw_volumen is not None: # pragma: no cover
assert dhw_volumen >= 0, 'Hot water volume cannot be below zero.'
if generation_mode == 1: # pragma: no cover
assert city_osm is not None, 'Generation mode 1 requires city object!'
if vdi_sh_manipulate is True and th_gen_method == 3: # pragma: no cover
msg = 'Simulated profiles of VDI 6007 call (TEASER --> ' \
'space heating) are going to be normalized with annual thermal' \
' space heating demand values given by user!'
warnings.warn(msg)
if do_log: # pragma: no cover
# Write log file
# ################################################################
# Log file path
if log_path is None:
# If not existing, use default path
this_path = os.path.dirname(os.path.abspath(__file__))
log_path = os.path.join(this_path, 'output', 'city_gen_log.txt')
log_file = open(log_path, mode='w')
log_file.write('PyCity_Calc city_generator.py log file')
log_file.write('\n############## Time and location ##############\n')
log_file.write('Date: ' + str(datetime.datetime.now()) + '\n')
log_file.write('generation_mode: ' + str(generation_mode) + '\n')
log_file.write('timestep in seconds: ' + str(timestep) + '\n')
log_file.write('Year for timer: ' + str(year_timer) + '\n')
log_file.write('Year for CO2 emission factors: '
+ str(year_co2) + '\n')
log_file.write('Location: ' + str(location) + '\n')
log_file.write('altitude: ' + str(altitude) + '\n')
if generation_mode == 0:
log_file.write('Generation mode: csv/txt input, only.\n')
elif generation_mode == 1:
log_file.write('Generation mode: csv/txt plus city osm object.\n')
log_file.write('\n############## Generation methods ##############\n')
log_file.write('th_gen_method: ' + str(th_gen_method) + '\n')
if th_gen_method == 1:
log_file.write('Manipulate SLP: ' + str(slp_manipulate) + '\n')
elif th_gen_method == 3:
log_file.write('t_set_heat: ' + str(t_set_heat) + '\n')
log_file.write('t_set_night: ' + str(t_night) + '\n')
log_file.write('t_set_cool: ' + str(t_set_cool) + '\n')
log_file.write('air_vent_mode: ' + str(air_vent_mode) + '\n')
log_file.write('vent_factor: ' + str(vent_factor) + '\n')
log_file.write('el_gen_method: ' + str(el_gen_method) + '\n')
log_file.write(
'Normalize el. profile: ' + str(do_normalization) + '\n')
log_file.write(
'Do random el. normalization: ' + str(el_random) + '\n')
log_file.write(
'Prevent el. heating devices for el load generation: '
'' + str(prev_heat_dev) + '\n')
log_file.write(
'Rescaling factor lighting power curve to implement seasonal '
'influence: ' + str(season_mod) + '\n')
log_file.write('use_dhw: ' + str(use_dhw) + '\n')
log_file.write('dhw_method: ' + str(dhw_method) + '\n')
log_file.write('dhw_volumen: ' + str(dhw_volumen) + '\n')
log_file.write(
'Do random dhw. normalization: ' + str(dhw_random) + '\n')
log_file.write('\n############## Others ##############\n')
log_file.write('try_path: ' + str(try_path) + '\n')
log_file.write('eff_factor: ' + str(eff_factor) + '\n')
log_file.write('timestep in seconds: ' + str(timestep) + '\n')
log_file.write('call_teaser: ' + str(call_teaser) + '\n')
log_file.write('teaser_proj_name: ' + str(teaser_proj_name) + '\n')
# Log file is closed, after pickle filename has been generated
# (see code below)
if generation_mode == 0 or generation_mode == 1:
# ##################################################################
# Load specific demand files
# Load specific thermal demand input data
spec_th_dem_res_building = load_data_file_with_spec_demand_data(
'RWI_res_building_spec_th_demand.txt')
start_year_column = (spec_th_dem_res_building[:, [0]])
# Reverse
start_year_column = start_year_column[::-1]
"""
Columns:
1. Start year (int)
2. Final year (int)
3. Spec. thermal energy demand in kWh/m2*a (float)
"""
# ##################################################################
# Load specific electrical demand input data
spec_el_dem_res_building = load_data_file_with_spec_demand_data(
'AGEB_res_building_spec_e_demand.txt')
"""
Columns:
1. Start year (int)
2. Final year (int)
3. Spec. electrical energy demand in kWh/m2*a (float)
"""
# ##################################################################
# Load specific electrical demand input data
# (depending on number of occupants)
spec_el_dem_res_building_per_person = \
load_data_file_with_spec_demand_data(
'Stromspiegel2017_spec_el_energy_demand.txt')
"""
Columns:
1. Number of persons (int) ( 1 - 5 SFH and 1 - 5 MFH)
2. Annual electrical demand in kWh/a (float)
3. Specific electrical demand per person in kWh/person*a (float)
"""
# ###################################################################
# Load specific demand data and slp types for
# non residential buildings
spec_dem_and_slp_non_res = load_data_file_with_spec_demand_data(
'Spec_demands_non_res.txt')
"""
Columns:
1. type_id (int)
2. type_name (string) # Currently 'nan', due to expected float
3. Spec. thermal energy demand in kWh/m2*a (float)
4. Spec. electrical energy demand in kWh/m2*a (float)
5. Thermal SLP type (int)
6. Electrical SLP type (int)
"""
# ###################################################################
# Generate city district
# Generate extended environment of pycity_calc
environment = generate_environment(timestep=timestep,
year_timer=year_timer,
year_co2=year_co2,
location=location,
try_path=try_path,
altitude=altitude,
new_try=new_try)
print('Generated environment object.\n')
if generation_mode == 0:
# Generate city object
# ############################################################
city_object = city.City(environment=environment)
print('Generated city object.\n')
else:
# Overwrite city_osm environment
print('Overwrite city_osm.environment with new environment')
city_osm.environment = environment
city_object = city_osm
# Check if district_data only holds one entry for single building
# In this case, has to be processed differently
if district_data.ndim > 1:
multi_data = True
else: # Only one entry (single building)
multi_data = False
# If multi_data is false, loop below is going to be exited with
# a break statement at the end.
# Generate dummy node id and thermal space heating demand dict
dict_id_vdi_sh = {}
# Loop over district_data
# ############################################################
for i in range(len(district_data)):
if multi_data:
# Extract data out of input file
curr_id = int(
district_data[i][0]) # id / primary key of building
curr_x = district_data[i][1] # x-coordinate in m
curr_y = district_data[i][2] # y-coordinate in m
curr_build_type = int(
district_data[i][3]) # building type nb (int)
curr_nfa = district_data[i][4] # Net floor area in m2
curr_build_year = district_data[i][5] # Year of construction
curr_mod_year = district_data[i][
6] # optional (last year of modernization)
curr_th_e_demand = district_data[i][
7] # optional: Final thermal energy demand in kWh
# For residential buildings: Space heating only!
# For non-residential buildings: Space heating AND hot water! (SLP)
curr_el_e_demand = district_data[i][
8] # optional (Annual el. energy demand in kWh)
curr_pv_roof_area = district_data[i][
9] # optional (Usable pv roof area in m2)
curr_nb_of_apartments = district_data[i][
10] # optional (Number of apartments)
curr_nb_of_occupants = district_data[i][
11] # optional (Total number of occupants)
curr_nb_of_floors = district_data[i][
12] # optional (Number of floors above the ground)
curr_avg_height_of_floors = district_data[i][
13] # optional (Average Height of floors)
curr_central_ahu = district_data[i][
14] # optional (If building has a central air handling unit (AHU) or not (boolean))
curr_res_layout = district_data[i][
15] # optional Residential layout (int, optional, e.g. 0 for compact)
curr_nb_of_neighbour_bld = district_data[i][
16] # optional Neighbour Buildings (int, optional)
curr_type_attic = district_data[i][
17] # optional Type of attic (int, optional, e.g. 0 for flat roof);
# 1 - Roof, non heated; 2 - Roof, partially heated; 3- Roof, fully heated;
curr_type_cellar = district_data[i][
18] # optional Type of basement
# (int, optional, e.g. 1 for non heated basement 0 - No basement; 1 - basement, non heated; 2 - basement, partially heated; 3- basement, fully heated;
curr_dormer = district_data[i][
19] # optional Dormer (int, optional, 0: no dormer/ 1: dormer)
curr_construction_type = district_data[i][
20] # optional Construction Type(heavy/light, optional) (0 - heavy; 1 - light)
curr_method_3_nb = district_data[i][
21] # optional Method_3_nb (for usage of measured, weekly non-res. el. profile
curr_method_4_nb = district_data[i][
22] # optional Method_4_nb (for usage of measured, annual non-res. el. profile
else: # Single entry
# Extract data out of input file
curr_id = int(district_data[0]) # id / primary key of building
curr_x = district_data[1] # x-coordinate in m
curr_y = district_data[2] # y-coordinate in m
curr_build_type = int(
district_data[3]) # building type nb (int)
curr_nfa = district_data[4] # Net floor area in m2
curr_build_year = district_data[5] # Year of construction
curr_mod_year = district_data[
6] # optional (last year of modernization)
curr_th_e_demand = district_data[
7] # optional: Final thermal energy demand in kWh
# For residential buildings: Space heating only!
# For non-residential buildings: Space heating AND hot water! (SLP)
curr_el_e_demand = district_data[
8] # optional (Annual el. energy demand in kWh)
curr_pv_roof_area = district_data[
9] # optional (Usable pv roof area in m2)
curr_nb_of_apartments = district_data[
10] # optional (Number of apartments)
curr_nb_of_occupants = district_data[
11] # optional (Total number of occupants)
curr_nb_of_floors = district_data[
12] # optional (Number of floors above the ground)
curr_avg_height_of_floors = district_data[
13] # optional (Average Height of floors)
curr_central_ahu = district_data[
14] # optional (If building has a central air handling unit (AHU) or not (boolean))
curr_res_layout = district_data[
15] # optional Residential layout (int, optional, e.g. 0 for compact)
curr_nb_of_neighbour_bld = district_data[
16] # optional Neighbour Buildings (int, optional)
curr_type_attic = district_data[
17] # optional Type of attic (int, optional, e.g. 0 for flat roof);
# 1 - Roof, non heated; 2 - Roof, partially heated; 3- Roof, fully heated;
curr_type_cellar = district_data[
18] # optional Type of basement
# (int, optional, e.g. 1 for non heated basement 0 - No basement; 1 - basement, non heated; 2 - basement, partially heated; 3- basement, fully heated;
curr_dormer = district_data[
19] # optional Dormer (int, optional, 0: no dormer/ 1: dormer)
curr_construction_type = district_data[
20] # optional Construction Type(heavy/light, optional) (0 - heavy; 1 - light)
curr_method_3_nb = district_data[
21] # optional Method_3_nb (for usage of measured, weekly non-res. el. profile
curr_method_4_nb = district_data[
22] # optional Method_4_nb (for usage of measured, annual non-res. el. profile
print('Process building', curr_id)
print('########################################################')
# Assert functions
# ############################################################
assert curr_build_type >= 0
assert curr_nfa > 0
for m in range(5, 9):
if multi_data:
if district_data[i][m] is not None:
assert district_data[i][m] > 0
else:
if district_data[m] is not None:
assert district_data[m] > 0
if curr_nb_of_apartments is not None:
assert curr_nb_of_apartments > 0
# Convert to int
curr_nb_of_apartments = int(curr_nb_of_apartments)
if curr_nb_of_occupants is not None:
assert curr_nb_of_occupants > 0
# Convert curr_nb_of_occupants from float to int
curr_nb_of_occupants = int(curr_nb_of_occupants)
if (curr_nb_of_occupants is not None
and curr_nb_of_apartments is not None):
assert curr_nb_of_occupants / curr_nb_of_apartments <= 5, (
'Average share of occupants per apartment should ' +
'not exceed 5 persons! (Necessary for stochastic el. ' +
'profile generation.)')
if curr_method_3_nb is not None:
    assert curr_method_3_nb >= 0
if curr_method_4_nb is not None:
    assert curr_method_4_nb >= 0
if curr_build_type == 0 and curr_nb_of_apartments is None: # pragma: no cover
# Define single apartment, if nb of apartments is unknown
msg = 'Building ' + str(curr_id) + ' is residential, but' \
' does not have a number' \
' of apartments. Going' \
' to set nb. to 1.'
warnings.warn(msg)
curr_nb_of_apartments = 1
if (curr_build_type == 0 and curr_nb_of_occupants is None
and use_dhw and dhw_method == 2):
raise AssertionError('DHW profile cannot be generated ' +
'for residential building without ' +
'occupants (stochastic mode). ' +
'Please check your input file ' +
'(missing number of occupants) ' +
'or disable dhw generation.')
# Check if TEASER inputs are defined
if call_teaser or th_gen_method == 3:
if curr_build_type == 0: # Residential
assert curr_nb_of_floors is not None
assert curr_avg_height_of_floors is not None
assert curr_central_ahu is not None
assert curr_res_layout is not None
assert curr_nb_of_neighbour_bld is not None
assert curr_type_attic is not None
assert curr_type_cellar is not None
assert curr_dormer is not None
assert curr_construction_type is not None
if curr_nb_of_floors is not None:
assert curr_nb_of_floors > 0
if curr_avg_height_of_floors is not None:
assert curr_avg_height_of_floors > 0
if curr_central_ahu is not None:
assert 0 <= curr_central_ahu <= 1
if curr_res_layout is not None:
assert 0 <= curr_res_layout <= 1
if curr_nb_of_neighbour_bld is not None:
assert 0 <= curr_nb_of_neighbour_bld <= 2
if curr_type_attic is not None:
assert 0 <= curr_type_attic <= 3
if curr_type_cellar is not None:
assert 0 <= curr_type_cellar <= 3
if curr_dormer is not None:
assert 0 <= curr_dormer <= 1
if curr_construction_type is not None:
assert 0 <= curr_construction_type <= 1
# Check building type (residential or non residential)
# #-------------------------------------------------------------
if curr_build_type == 0: # Is residential
print('Residential building')
# Get spec. net therm. demand value according to last year
# of modernization or build_year
# If year of modernization is defined, use curr_mod_year
if curr_mod_year is not None:
use_year = int(curr_mod_year)
else: # Use year of construction
use_year = int(curr_build_year)
# Get specific, thermal energy demand (based on use_year)
for j in range(len(start_year_column)):
if use_year >= start_year_column[j]:
curr_spec_th_demand = spec_th_dem_res_building[len(
spec_th_dem_res_building) - 1 - j][2]
break
# # Get spec. electr. demand
# if curr_nb_of_occupants is None:
# # USE AGEB values, if no number of occupants is given
# # Set specific demand value in kWh/m2*a
# curr_spec_el_demand = spec_el_dem_res_building[1]
# # Only valid for array like [2012 38.7]
# else:
# # Use Stromspiegel 2017 values
# # Calculate specific electric demand values depending
# # on number of occupants
#
# if curr_nb_of_apartments == 1:
# btype = 'sfh'
# elif curr_nb_of_apartments > 1:
# btype = 'mfh'
#
# # Average occupancy number per apartment
# curr_av_occ_per_app = \
# curr_nb_of_occupants / curr_nb_of_apartments
# print('Average number of occupants per apartment')
# print(round(curr_av_occ_per_app, ndigits=2))
#
# if curr_av_occ_per_app <= 5 and curr_av_occ_per_app > 0:
# Correction factor for non-int. av. number of
# # occupants (#19)
#
# # Divide annual el. energy demand with net floor area
# if btype == 'sfh':
# row_idx_low = math.ceil(curr_av_occ_per_app) - 1
# row_idx_high = math.floor(curr_av_occ_per_app) - 1
# elif btype == 'mfh':
# row_idx_low = math.ceil(curr_av_occ_per_app) - 1 \
# + 5
# row_idx_high = math.floor(curr_av_occ_per_app) - 1 \
# + 5
#
# cur_spec_el_dem_per_occ_high = \
# spec_el_dem_res_building_per_person[row_idx_high][2]
# cur_spec_el_dem_per_occ_low = \
# spec_el_dem_res_building_per_person[row_idx_low][2]
#
# print('Chosen reference spec. el. demands per person '
# 'in kWh/a (high and low value):')
# print(cur_spec_el_dem_per_occ_high)
# print(cur_spec_el_dem_per_occ_low)
#
# delta = round(curr_av_occ_per_app, 0) - \
# curr_av_occ_per_app
#
# if delta < 0:
# curr_spec_el_dem_occ = cur_spec_el_dem_per_occ_high + \
# (cur_spec_el_dem_per_occ_high -
# cur_spec_el_dem_per_occ_low) * delta
# elif delta > 0:
# curr_spec_el_dem_occ = cur_spec_el_dem_per_occ_low + \
# (cur_spec_el_dem_per_occ_high -
# cur_spec_el_dem_per_occ_low) * delta
# else:
# curr_spec_el_dem_occ = cur_spec_el_dem_per_occ_high
#
# # print('Calculated spec. el. demand per person in '
# # 'kWh/a:')
# # print(round(curr_spec_el_dem_occ, ndigits=2))
#
# # Specific el. demand per person (dependent on av.
# # number of occupants in each apartment)
# # --> Multiplied with number of occupants
# # --> Total el. energy demand in kWh
# # --> Divided with net floor area
# # --> Spec. el. energy demand in kWh/a
#
# curr_spec_el_demand = \
# curr_spec_el_dem_occ * curr_nb_of_occupants \
# / curr_nfa
#
# # print('Spec. el. energy demand in kWh/m2:')
# # print(curr_spec_el_demand)
#
# else:
# raise AssertionError('Invalid number of occupants')
# if el_random:
# if curr_nb_of_occupants is None:
# # Randomize curr_spec_el_demand with normal distribution
# # with curr_spec_el_demand as mean and 10 % standard dev.
# curr_spec_el_demand = \
# np.random.normal(loc=curr_spec_el_demand,
# scale=0.10 * curr_spec_el_demand)
# else:
# # Randomize rounding up and down of curr_av_occ_per_ap
# if round(curr_av_occ_per_app) > curr_av_occ_per_app:
# # Round up
# delta = round(curr_av_occ_per_app) - \
# curr_av_occ_per_app
# prob_r_up = 1 - delta
# rnb = random.random()
# if rnb < prob_r_up:
# use_occ = math.ceil(curr_av_occ_per_app)
# else:
# use_occ = math.floor(curr_av_occ_per_app)
#
# else:
# # Round down
# delta = curr_av_occ_per_app - \
# round(curr_av_occ_per_app)
# prob_r_down = 1 - delta
# rnb = random.random()
# if rnb < prob_r_down:
# use_occ = math.floor(curr_av_occ_per_app)
# else:
# use_occ = math.ceil(curr_av_occ_per_app)
#
# sample_el_per_app = \
# usunc.calc_sampling_el_demand_per_apartment(nb_samples=1,
# nb_persons=use_occ,
# type=btype)[0]
#
# # Divide sampled el. demand per apartment through
# # number of persons of apartment (according to
# # Stromspiegel 2017) and multiply this value with
# # actual number of persons in building to get
# # new total el. energy demand. Divide this value with
# # net floor area to get specific el. energy demand
# curr_spec_el_demand = \
# (sample_el_per_app / curr_av_occ_per_app) * \
# curr_nb_of_occupants / curr_nfa
            # Convert construction_type from int to str; unknown values default to 'heavy'
if curr_construction_type == 0:
new_curr_construction_type = 'heavy'
elif curr_construction_type == 1:
new_curr_construction_type = 'light'
else:
new_curr_construction_type = 'heavy'
# #-------------------------------------------------------------
else: # Non-residential
print('Non residential')
# Get spec. demands and slp types according to building_type
curr_spec_th_demand = \
spec_dem_and_slp_non_res[curr_build_type - 2][2]
curr_spec_el_demand = \
spec_dem_and_slp_non_res[curr_build_type - 2][3]
curr_th_slp_type = \
spec_dem_and_slp_non_res[curr_build_type - 2][4]
curr_el_slp_type = \
spec_dem_and_slp_non_res[curr_build_type - 2][5]
# Convert slp type integers into strings
curr_th_slp_type = convert_th_slp_int_and_str(curr_th_slp_type)
curr_el_slp_type = convert_el_slp_int_and_str(curr_el_slp_type)
# If curr_el_e_demand is not known, calculate it via spec.
# demand
if curr_el_e_demand is None:
curr_el_e_demand = curr_spec_el_demand * curr_nfa
# #-------------------------------------------------------------
# If curr_th_e_demand is known, recalc spec e. demand
if curr_th_e_demand is not None:
# Calc. spec. net thermal energy demand with efficiency factor
curr_spec_th_demand = eff_factor * curr_th_e_demand / curr_nfa
else:
# Spec. final energy demand is given, recalculate it to
# net thermal energy demand with efficiency factor
curr_spec_th_demand *= eff_factor
# # If curr_el_e_demand is not known, calculate it via spec. demand
# if curr_el_e_demand is None:
# curr_el_e_demand = curr_spec_el_demand * curr_nfa
if th_gen_method == 1 or th_gen_method == 2 or curr_build_type != 0:
print('Used specific thermal demand value in kWh/m2*a:')
print(curr_spec_th_demand)
# #-------------------------------------------------------------
# Generate BuildingExtended object
if curr_build_type == 0: # Residential
if curr_nb_of_apartments > 1: # Multi-family house
building = generate_res_building_multi_zone(environment,
net_floor_area=curr_nfa,
spec_th_demand=curr_spec_th_demand,
annual_el_demand=curr_el_e_demand,
th_gen_method=th_gen_method,
el_gen_method=el_gen_method,
nb_of_apartments=curr_nb_of_apartments,
use_dhw=use_dhw,
dhw_method=dhw_method,
total_number_occupants=curr_nb_of_occupants,
build_year=curr_build_year,
mod_year=curr_mod_year,
build_type=curr_build_type,
pv_use_area=curr_pv_roof_area,
height_of_floors=curr_avg_height_of_floors,
nb_of_floors=curr_nb_of_floors,
neighbour_buildings=curr_nb_of_neighbour_bld,
residential_layout=curr_res_layout,
attic=curr_type_attic,
cellar=curr_type_cellar,
construction_type=new_curr_construction_type,
dormer=curr_dormer,
dhw_volumen=dhw_volumen,
do_normalization=do_normalization,
slp_manipulate=slp_manipulate,
curr_central_ahu=curr_central_ahu,
dhw_random=dhw_random,
prev_heat_dev=prev_heat_dev,
season_mod=season_mod)
elif curr_nb_of_apartments == 1: # Single-family house
building = generate_res_building_single_zone(environment,
net_floor_area=curr_nfa,
spec_th_demand=curr_spec_th_demand,
annual_el_demand=curr_el_e_demand,
th_gen_method=th_gen_method,
el_gen_method=el_gen_method,
use_dhw=use_dhw,
dhw_method=dhw_method,
number_occupants=curr_nb_of_occupants,
build_year=curr_build_year,
mod_year=curr_mod_year,
build_type=curr_build_type,
pv_use_area=curr_pv_roof_area,
height_of_floors=curr_avg_height_of_floors,
nb_of_floors=curr_nb_of_floors,
neighbour_buildings=curr_nb_of_neighbour_bld,
residential_layout=curr_res_layout,
attic=curr_type_attic,
cellar=curr_type_cellar,
construction_type=new_curr_construction_type,
dormer=curr_dormer,
dhw_volumen=dhw_volumen,
do_normalization=do_normalization,
slp_manipulate=slp_manipulate,
curr_central_ahu=curr_central_ahu,
dhw_random=dhw_random,
prev_heat_dev=prev_heat_dev,
season_mod=season_mod)
else:
raise AssertionError('Wrong number of apartments')
else: # Non-residential
method_3_str = None
method_4_str = None
# Convert curr_method numbers, if not None
if curr_method_3_nb is not None:
method_3_str = \
convert_method_3_nb_into_str(int(curr_method_3_nb))
if curr_method_4_nb is not None:
method_4_str = \
convert_method_4_nb_into_str(int(curr_method_4_nb))
building = generate_nonres_building_single_zone(environment,
th_slp_type=curr_th_slp_type,
net_floor_area=curr_nfa,
spec_th_demand=curr_spec_th_demand,
annual_el_demand=curr_el_e_demand,
el_slp_type=curr_el_slp_type,
build_year=curr_build_year,
mod_year=curr_mod_year,
build_type=curr_build_type,
pv_use_area=curr_pv_roof_area,
method_3_type=method_3_str,
method_4_type=method_4_str,
height_of_floors=curr_avg_height_of_floors,
nb_of_floors=curr_nb_of_floors
)
# Generate position shapely point
position = point.Point(curr_x, curr_y)
if generation_mode == 0:
# Add building to city object
id = city_object.add_extended_building(
extended_building=building,
position=position, name=curr_id)
elif generation_mode == 1:
# Add building as entity to corresponding building node
# Positions should be (nearly) equal
            assert abs(position.x - city_object.nodes[int(curr_id)][
                'position'].x) <= 0.1
            assert abs(position.y - city_object.nodes[int(curr_id)][
                'position'].y) <= 0.1
city_object.nodes[int(curr_id)]['entity'] = building
id = curr_id
# Save annual thermal net heat energy demand for space heating
# to dict (used for normalization with VDI 6007 core)
dict_id_vdi_sh[id] = curr_spec_th_demand * curr_nfa
print('Finished processing of building', curr_id)
print('#######################################################')
print()
# If only single building should be processed, break loop
if multi_data is False:
break
# #-------------------------------------------------------------
print('Added all buildings with data to city object.')
# VDI 6007 simulation to generate space heating load curves
# Overwrites existing heat load curves (and annual heat demands)
if th_gen_method == 3:
print('Perform VDI 6007 space heating load simulation for every'
' building')
if el_gen_method == 1:
            # Skip usage of occupancy and electrical load profiles
# as internal loads within VDI 6007 core
requ_profiles = False
else:
requ_profiles = True
tusage.calc_and_add_vdi_6007_loads_to_city(city=city_object,
air_vent_mode=air_vent_mode,
vent_factor=vent_factor,
t_set_heat=t_set_heat,
t_set_cool=t_set_cool,
t_night=t_night,
alpha_rad=None,
project_name=project_name,
requ_profiles=requ_profiles)
# Set call_teaser to False, as it is already included
# in calc_and_add_vdi_6007_loads_to_city
call_teaser = False
if vdi_sh_manipulate:
# Normalize VDI 6007 load curves to match given annual
# thermal space heating energy demand
for n in city_object.nodes():
if 'node_type' in city_object.nodes[n]:
# If node_type is building
if city_object.nodes[n]['node_type'] == 'building':
# If entity is kind building
if city_object.nodes[n][
'entity']._kind == 'building':
# Given value (user input)
ann_sh = dict_id_vdi_sh[n]
# Building pointer
curr_b = city_object.nodes[n]['entity']
# Current value on object
curr_sh = curr_b.get_annual_space_heat_demand()
norm_factor = ann_sh / curr_sh
# Do normalization
# Loop over apartments
for apart in curr_b.apartments:
# Normalize apartment space heating load
apart.demandSpaceheating.loadcurve \
*= norm_factor
print('Generation results:')
print('###########################################')
for n in city_object.nodes():
if 'node_type' in city_object.nodes[n]:
if city_object.nodes[n]['node_type'] == 'building':
if 'entity' in city_object.nodes[n]:
if city_object.nodes[n]['entity']._kind == 'building':
print('Results of building: ', n)
print('################################')
print()
curr_b = city_object.nodes[n]['entity']
sh_demand = curr_b.get_annual_space_heat_demand()
el_demand = curr_b.get_annual_el_demand()
dhw_demand = curr_b.get_annual_dhw_demand()
nfa = curr_b.net_floor_area
print('Annual space heating demand in kWh:')
print(sh_demand)
if nfa is not None and nfa != 0:
print(
'Specific space heating demand in kWh/m2:')
print(sh_demand / nfa)
print()
print('Annual electric demand in kWh:')
print(el_demand)
if nfa is not None and nfa != 0:
print('Specific electric demand in kWh/m2:')
print(el_demand / nfa)
nb_occ = curr_b.get_number_of_occupants()
if nb_occ is not None and nb_occ != 0:
print('Specific electric demand in kWh'
' per person and year:')
print(el_demand / nb_occ)
print()
print('Annual hot water demand in kWh:')
print(dhw_demand)
if nfa is not None and nfa != 0:
print('Specific hot water demand in kWh/m2:')
print(dhw_demand / nfa)
volume_year = dhw_demand * 1000 * 3600 / (
4200 * 35)
volume_day = volume_year / 365
if nb_occ is not None and nb_occ != 0:
v_person_day = \
volume_day / nb_occ
print('Hot water volume per person and day:')
print(v_person_day)
print()
# Create and add TEASER type_buildings to every building node
if call_teaser:
# Create TEASER project
project = tusage.create_teaser_project(name=teaser_proj_name,
merge_windows=merge_windows)
# Generate typeBuildings and add to city
tusage.create_teaser_typecity(project=project,
city=city_object,
generate_Output=False)
if do_save: # pragma: no cover
if path_save_city is None:
if pickle_city_filename is None:
                msg = 'If path_save_city is None, pickle_city_filename ' \
                      'cannot be None! Instead, a filename has to be ' \
                      'defined to be able to save the city object.'
                raise AssertionError(msg)
this_path = os.path.dirname(os.path.abspath(__file__))
path_save_city = os.path.join(this_path, 'output',
pickle_city_filename)
        try:
            # Pickle and dump city objects
            pickle.dump(city_object, open(path_save_city, 'wb'))
            print('Pickled and dumped city object to: ')
            print(path_save_city)
        except Exception:
            warnings.warn('Could not pickle and save city object')
if do_log: # pragma: no cover
if pickle_city_filename is not None:
log_file.write('pickle_city_filename: ' +
str(pickle_city_filename)
+ '\n')
print('Wrote log file to: ' + str(log_path))
# Close log file
log_file.close()
# Visualize city
if show_city: # pragma: no cover
# Plot city district
        try:
            citvis.plot_city_district(city=city_object,
                                      plot_street=False)
        except Exception:
            warnings.warn('Could not plot city district.')
return city_object
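
# The result printout in run_city_generator above converts the annual hot
# water demand (kWh) into a water volume via Q = m * c_p * delta_T, using
# c_p = 4200 J/(kg K) and a temperature lift of 35 K. A small, hedged helper
# making that unit conversion explicit (illustration only; this function is
# not part of the original module and is not called anywhere):
def _dhw_kwh_to_litres_per_year(dhw_kwh, c_p=4200.0, delta_t=35.0):
    """Convert an annual DHW energy demand in kWh to litres of water per
    year, assuming 1 kg of water is roughly 1 litre."""
    energy_joule = dhw_kwh * 1000.0 * 3600.0  # kWh -> J
    return energy_joule / (c_p * delta_t)
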
if __name__ == '__main__':
this_path = os.path.dirname(os.path.abspath(__file__))
# User inputs #########################################################
# Choose generation mode
# ######################################################
# 0 - Use csv/txt input to generate city district
    # 1 - Use csv/txt input file to enrich existing city object, based on
    #     osm call (city object should hold nodes, but no entities). The city
    #     generator is going to add building, apartment and load entities to
    #     the building nodes.
generation_mode = 0
# Generate environment
# ######################################################
year_timer = 2017
year_co2 = 2017
timestep = 3600 # Timestep in seconds
# location = (51.529086, 6.944689) # (latitude, longitude) of Bottrop
location = (50.775346, 6.083887) # (latitude, longitude) of Aachen
altitude = 266 # Altitude of location in m (Aachen)
# Weather path
try_path = None
    # If None, the default TRY (region 5, 2010) is used
new_try = False
# new_try has to be set to True, if you want to use TRY data of 2017
# or newer! Else: new_try = False
# Space heating load generation
# ######################################################
# Thermal generation method
# 1 - SLP (standardized load profile)
# 2 - Load and rescale Modelica simulation profile
# (generated with TRY region 12, 2010)
# 3 - VDI 6007 calculation (requires el_gen_method = 2)
th_gen_method = 3
# For non-residential buildings, SLPs are generated automatically.
# Manipulate thermal slp to fit to space heating demand?
slp_manipulate = False
# True - Do manipulation
# False - Use original profile
# Only relevant, if th_gen_method == 1
# Sets thermal power to zero in time spaces, where average daily outdoor
# temperature is equal to or larger than 12 °C. Rescales profile to
# original demand value.
# Manipulate vdi space heating load to be normalized to given annual net
# space heating demand in kWh
vdi_sh_manipulate = False
# Electrical load generation
# ######################################################
# Choose electric load profile generation method (1 - SLP; 2 - Stochastic)
# Stochastic profile is only generated for residential buildings,
# which have a defined number of occupants (otherwise, SLP is used)
el_gen_method = 2
    # If user defines method_3_nb or method_4_nb within input file
# (only valid for non-residential buildings), SLP will not be used.
# Instead, corresponding profile will be loaded (based on measurement
# data, see ElectricalDemand.py within pycity)
# Do normalization of el. load profile
# (only relevant for el_gen_method=2).
# Rescales el. load profile to expected annual el. demand value in kWh
do_normalization = True
# Randomize electrical demand value (residential buildings, only)
el_random = True
# Prevent usage of electrical heating and hot water devices in
# electrical load generation (only relevant if el_gen_method == 2)
prev_heat_dev = True
# True: Prevent electrical heating device usage for profile generation
# False: Include electrical heating devices in electrical load generation
# Use cosine function to increase winter lighting usage and reduce
    # summer lighting usage in Richardson el. load profiles
# season_mod is factor, which is used to rescale cosine wave with
# lighting power reference (max. lighting power)
season_mod = 0.3
# If None, do not use cosine wave to estimate seasonal influence
# Else: Define float
# (only relevant if el_gen_method == 2)
# Hot water profile generation
# ######################################################
# Generate DHW profiles? (True/False)
use_dhw = True # Only relevant for residential buildings
# DHW generation method? (1 - Annex 42; 2 - Stochastic profiles)
    # Choice of Annex 42 profiles NOT recommended for multiple buildings,
# as profile stays the same and only changes scaling.
# Stochastic profiles require defined nb of occupants per residential
# building
dhw_method = 2 # Only relevant for residential buildings
# Define dhw volume per person and day (use_dhw=True)
dhw_volumen = None # Only relevant for residential buildings
    # Randomize chosen dhw_volume reference value by selecting new value
dhw_random = True
    # Input file names and paths
# ######################################################
# Define input data filename
filename = 'city_3_buildings.txt'
# filename = 'city_clust_simple.txt'
# filename = 'aachen_forsterlinde_mod_6.txt'
# filename = 'aachen_frankenberg_mod_6.txt'
# filename = 'aachen_huenefeld_mod_6.txt'
# filename = 'aachen_kronenberg_mod_8.txt'
# filename = 'aachen_preusweg_mod_8.txt'
# filename = 'aachen_tuerme_mod_6.txt'
# Output filename
pickle_city_filename = filename[:-4] + '.pkl'
# For generation_mode == 1:
# city_osm_input = None
# city_osm_input = 'aachen_forsterlinde_mod_7.pkl'
city_osm_input = 'aachen_frankenberg_mod_7.pkl'
# city_osm_input = 'aachen_huenefeld_mod_7.pkl'
# city_osm_input = 'aachen_kronenberg_mod_7.pkl'
# city_osm_input = 'aachen_preusweg_mod_7.pkl'
# city_osm_input = 'aachen_tuerme_mod_7.pkl'
# Pickle and dump city object instance?
do_save = True
# Path to save city object instance to
path_save_city = None
# If None, uses .../output/...
# Efficiency factor of thermal energy systems
# Used to convert input values (final energy demand) to net energy demand
eff_factor = 1
# For VDI 6007 simulation (th_gen_method == 3)
# #####################################
t_set_heat = 20 # Heating set temperature in degree Celsius
t_set_night = 16 # Night set back temperature in degree Celsius
t_set_cool = 70 # Cooling set temperature in degree Celsius
# Air exchange rate (required for th_gen_method = 3 (VDI 6007 sim.))
air_vent_mode = 2
# int; Define mode for air ventilation rate generation
# 0 : Use constant value (vent_factor in 1/h)
# 1 : Use deterministic, temperature-dependent profile
# 2 : Use stochastic, user-dependent profile
    vent_factor = 0.3  # Constant ventilation rate in 1/h
# (only used, if air_vent_mode is 0. Otherwise, estimate vent_factor
# based on last year of modernization)
# TEASER typebuilding generation
# ######################################################
# Use TEASER to generate typebuildings?
call_teaser = False
teaser_proj_name = filename[:-4]
# Requires additional attributes (such as nb_of_floors, net_floor_area..)
merge_windows = False
# merge_windows : bool, optional
# Defines TEASER project setting for merge_windows_calc
# (default: False). If set to False, merge_windows_calc is set to False.
# If True, Windows are merged into wall resistances.
txt_path = os.path.join(this_path, 'input', filename)
if generation_mode == 1:
path_city_osm_in = os.path.join(this_path, 'input', city_osm_input)
# Path for log file
    log_file_name = 'log_' + filename
log_f_path = os.path.join(this_path, 'output', log_file_name)
# End of user inputs ################################################
print('Run city generator for ', filename)
assert generation_mode in [0, 1]
if generation_mode == 1:
assert city_osm_input is not None
if air_vent_mode == 1 or air_vent_mode == 2:
assert el_gen_method == 2, 'air_vent_mode 1 and 2 require occupancy' \
' profiles!'
# Load district_data file
district_data = get_district_data_from_txt(txt_path)
if generation_mode == 1:
# Load city input file
city_osm = pickle.load(open(path_city_osm_in, mode='rb'))
else:
# Dummy value
city_osm = None
# Generate city district
city = run_city_generator(generation_mode=generation_mode,
timestep=timestep,
year_timer=year_timer,
year_co2=year_co2,
location=location,
th_gen_method=th_gen_method,
el_gen_method=el_gen_method, use_dhw=use_dhw,
dhw_method=dhw_method,
district_data=district_data,
pickle_city_filename=pickle_city_filename,
eff_factor=eff_factor, show_city=True,
try_path=try_path, altitude=altitude,
dhw_volumen=dhw_volumen,
do_normalization=do_normalization,
slp_manipulate=slp_manipulate,
call_teaser=call_teaser,
teaser_proj_name=teaser_proj_name,
air_vent_mode=air_vent_mode,
vent_factor=vent_factor,
t_set_heat=t_set_heat,
t_set_cool=t_set_cool,
t_night=t_set_night,
vdi_sh_manipulate=vdi_sh_manipulate,
city_osm=city_osm, el_random=el_random,
dhw_random=dhw_random,
prev_heat_dev=prev_heat_dev,
log_path=log_f_path,
season_mod=season_mod,
merge_windows=merge_windows,
new_try=new_try,
path_save_city=path_save_city,
do_save=do_save)
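
# The commented-out el_random branch in run_city_generator above rounds the
# average occupancy per apartment up or down at random, with probabilities
# given by the fractional part of the value. A self-contained, hedged sketch
# of that stochastic rounding idea (illustration only; this helper is not
# called anywhere in the script):
def _stochastic_round(value):
    import math
    import random
    frac = value - math.floor(value)
    # Round up with probability equal to the fractional part
    if random.random() < frac:
        return int(math.ceil(value))
    return int(math.floor(value))
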
# ----------------------------------------------------------------------
# File: templates/php/functionsTest.py
# Repo: anconaesselmann/LiveUnit @ 8edebb49cb02fa898550cbafdf87af7fc22f106b
# License: MIT
# ----------------------------------------------------------------------
import unittest
import os
if __name__ == '__main__' and __package__ is None:
from os import sys, path
sys.path.append(path.abspath(path.join(__file__, "..", "..")))
sys.path.append(path.abspath(path.join(__file__, "..", "..", "..", "classes_and_tests")))
from php.functions import *
from src.mocking.MockFileSystem import MockFileSystem
class PhpFunctionsTest(unittest.TestCase):
def test_get_doc_block_tag(self):
settings = "{\"author\": \"Axel\"}"
args = {"settings" : settings}
expected = "@author Axel"
fc = FunctionCollection()
result = fc.get_doc_block_tag(args)
self.assertEqual(expected, result)
def test_get_doc_block_tag_with_empty_value(self):
settings = "{\"author\": None}"
args = {"settings" : settings}
expected = None
fc = FunctionCollection()
result = fc.get_doc_block_tag(args)
self.assertEqual(expected, result)
def test_get_class_name(self):
args = {"dir" : path.join("Folder1", "Folder2", "FileName.php")}
expected = "FileName"
fc = FunctionCollection()
result = fc.get_class_name(args)
self.assertEqual(expected, result)
def test_get_py_package_name(self):
args = {"dir" : path.join(os.sep, "MyProject", "library", "aae", "mvc", "Controller.php")}
expected = path.join("aae\\mvc")
mockFileSystem = MockFileSystem()
mockFileSystem.createFile(path.join(os.sep, "MyProject", "libraryTest", "SomeFileTest.php"))
fc = FunctionCollection()
fc.fileSystem = mockFileSystem
result = fc.get_php_namespace(args)
self.assertEqual(expected, result)
"""def test_get_relative_autoloader_path(self):
settings = "{\"php_autoloader_dir\": \"relative/path/to/Autoloader.php\"}"
args = {"settings" : settings}
expected = "require_once strstr(__FILE__, 'Test', true).'/relative/path/to/Autoloader.php';"
result = FunctionCollection.get_php_autoloader(args)
self.assertEqual(expected, result)
def test_get_absolute_autoloader_path(self):
settings = "{\"php_autoloader_dir\": \"/absolute/path/to/Autoloader.php\"}"
args = {"settings" : settings}
expected = "require_once \"/absolute/path/to/Autoloader.php\";"
result = FunctionCollection.get_php_autoloader(args)
self.assertEqual(expected, result)
def test_getautoloader_path_with_no_value(self):
settings = "{\"php_autoloader_dir\": None}"
args = {"settings" : settings}
expected = None
result = FunctionCollection.get_php_autoloader(args)
self.assertEqual(expected, result)
def test_get_php_namespace(self):
settings = "{\"base_dir\": \"/MyProject/library\"}"
args = {"settings" : settings, "dir": "/MyProject/library/aae/mvc/Controller.php"}
expected = "aae\\mvc"
result = FunctionCollection.get_php_namespace(args)
self.assertEqual(expected, result)"""
if __name__ == '__main__':
    unittest.main()
# ----------------------------------------------------------------------
# File: www/courses/cs1120/spring2017/code/day10.py
# Repo: ic4f/sergey.cs.uni.edu @ 52bbf121f73603fdb465dae36dbe691fe39e6e47
# License: Unlicense
# ----------------------------------------------------------------------
def makePic():
file = pickAFile()
return makePicture(file)
def decreaseRed(picture):
for pixel in getPixels(picture):
setRed(pixel, getRed(pixel) * 0.2)
repaint(picture)
def decreaseRed2(picture):
pixels = getPixels(picture)
for i in range(len(pixels)):
pixel = pixels[i]
setRed(pixel, getRed(pixel) * 0.2)
repaint(picture)
def decreaseRedHalf(picture):
  pixels = getPixels(picture)
  # range() needs an integer, so cast the float product to int
  for i in range(int((len(pixels) / 2) * 0.9)):
    pixel = pixels[i]
    setRed(pixel, getRed(pixel) * 0.2)
  repaint(picture)
def makeNetherlands(picture):
pixels = getPixels(picture)
color1 = makeColor(174,28,40)
color2 = makeColor(255, 255, 255)
color3 = makeColor(33,70,139)
point1 = len(pixels)/3
point2 = point1 * 2
point3 = len(pixels)
for i in range(0, point1):
pixel = pixels[i]
setColor(pixel, color1)
print i
for i in range(point1, point2):
pixel = pixels[i]
setColor(pixel, color2)
print i
for i in range(point2, point3):
pixel = pixels[i]
setColor(pixel, color3)
print i
repaint(picture)
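
# makeNetherlands above splits the pixel list into three equal bands and
# fills each band with one color. A small pure helper showing the band
# arithmetic for any number of colors (illustration only, not part of the
# original lesson; assumes nPixels >= nColors):
def bandIndex(i, nPixels, nColors):
  band = nPixels // nColors
  idx = i // band
  # clamp the last partial band onto the final color
  if idx > nColors - 1:
    idx = nColors - 1
  return idx
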
# ----------------------------------------------------------------------
# File: tests/test_training.py
# Repo: Hilly12/masters-code @ 60b20a0e5e4c0ab9152b090b679391d8d62ec88a
# License: MIT
# ----------------------------------------------------------------------
import torch
import prifair as pf
N_SAMPLES = 10000
VAL_SAMPLES = 1000
STUDENT_SAMPLES = 5000
INPUTS = 1000
OUTPUTS = 5
BATCH_SIZE = 256
MAX_PHYSICAL_BATCH_SIZE = 128
EPSILON = 2.0
DELTA = 1e-5
MAX_GRAD_NORM = 1.0
N_TEACHERS = 4
N_GROUPS = 10
EPOCHS = 2
class MockModel(torch.nn.Module):
def __init__(self):
super(MockModel, self).__init__()
self.model = torch.nn.Sequential(
torch.nn.Linear(in_features=INPUTS, out_features=OUTPUTS),
torch.nn.LogSoftmax(dim=1),
)
def forward(self, x):
return self.model(x)
X = torch.randn(N_SAMPLES + VAL_SAMPLES, INPUTS)
Y = torch.randint(0, OUTPUTS, (N_SAMPLES + VAL_SAMPLES,))
student = torch.randn(STUDENT_SAMPLES, INPUTS)
groups = torch.randint(0, N_GROUPS, (N_SAMPLES,))
weights = torch.ones(N_SAMPLES) / N_SAMPLES
train_data = torch.utils.data.TensorDataset(X[:N_SAMPLES], Y[:N_SAMPLES])
val_data = torch.utils.data.TensorDataset(X[N_SAMPLES:], Y[N_SAMPLES:])
student_data = torch.utils.data.TensorDataset(student, torch.zeros(STUDENT_SAMPLES))
train_loader = torch.utils.data.DataLoader(train_data, batch_size=BATCH_SIZE)
val_loader = torch.utils.data.DataLoader(val_data, batch_size=BATCH_SIZE)
student_loader = torch.utils.data.DataLoader(student_data, batch_size=BATCH_SIZE)
model_class = MockModel
optim_class = torch.optim.NAdam
criterion = torch.nn.NLLLoss()
def test_vanilla():
model, metrics = pf.training.train_vanilla(
train_loader=train_loader,
val_loader=val_loader,
model_class=model_class,
optim_class=optim_class,
loss_fn=criterion,
epochs=EPOCHS,
)
assert model is not None and metrics is not None
def test_dpsgd():
model, metrics = pf.training.train_dpsgd(
train_loader=train_loader,
val_loader=val_loader,
model_class=model_class,
optim_class=optim_class,
loss_fn=criterion,
target_epsilon=EPSILON,
target_delta=DELTA,
max_grad_norm=MAX_GRAD_NORM,
epochs=EPOCHS,
max_physical_batch_size=MAX_PHYSICAL_BATCH_SIZE,
)
assert model is not None and metrics is not None
def test_dpsgd_weighted():
model, metrics = pf.training.train_dpsgd_weighted(
train_loader=train_loader,
val_loader=val_loader,
model_class=model_class,
optim_class=optim_class,
loss_fn=criterion,
target_epsilon=EPSILON,
target_delta=DELTA,
max_grad_norm=MAX_GRAD_NORM,
epochs=EPOCHS,
max_physical_batch_size=MAX_PHYSICAL_BATCH_SIZE,
weighting="sensitive_attr",
labels=groups.numpy(),
)
assert model is not None and metrics is not None
model, metrics = pf.training.train_dpsgd_weighted(
train_loader=train_loader,
val_loader=val_loader,
model_class=model_class,
optim_class=optim_class,
loss_fn=criterion,
target_epsilon=EPSILON,
target_delta=DELTA,
max_grad_norm=MAX_GRAD_NORM,
epochs=EPOCHS,
max_physical_batch_size=MAX_PHYSICAL_BATCH_SIZE,
weighting="custom",
weights=weights,
)
assert model is not None and metrics is not None
def test_dpsgdf():
model, metrics = pf.training.train_dpsgdf(
train_loader=train_loader,
val_loader=val_loader,
model_class=model_class,
optim_class=optim_class,
loss_fn=criterion,
target_epsilon=EPSILON,
target_delta=DELTA,
base_clipping_threshold=MAX_GRAD_NORM,
epochs=EPOCHS,
group_labels=groups,
max_physical_batch_size=MAX_PHYSICAL_BATCH_SIZE,
)
assert model is not None and metrics is not None
def test_pate():
model, metrics = pf.training.train_pate(
train_loader=train_loader,
val_loader=val_loader,
student_loader=student_loader,
model_class=model_class,
optim_class=optim_class,
loss_fn=criterion,
n_teachers=N_TEACHERS,
target_epsilon=EPSILON,
target_delta=DELTA,
epochs=EPOCHS,
)
assert model is not None and metrics is not None
def test_reweighed_sft_pate():
model, metrics = pf.training.train_reweighed_sftpate(
train_loader=train_loader,
val_loader=val_loader,
student_loader=student_loader,
model_class=model_class,
optim_class=optim_class,
loss_fn=criterion,
n_teachers=N_TEACHERS,
target_epsilon=EPSILON,
target_delta=DELTA,
epochs=EPOCHS,
weights=weights,
)
assert model is not None and metrics is not None
830e39c22c34be264cb1928c1b6da3f32584283d | 177 | py | Python | problem/01000~09999/02164/2164.py3.py | njw1204/BOJ-AC | 1de41685725ae4657a7ff94e413febd97a888567 | [
"MIT"
] | 1 | 2019-04-19T16:37:44.000Z | 2019-04-19T16:37:44.000Z | problem/01000~09999/02164/2164.py3.py | njw1204/BOJ-AC | 1de41685725ae4657a7ff94e413febd97a888567 | [
"MIT"
] | 1 | 2019-04-20T11:42:44.000Z | 2019-04-20T11:42:44.000Z | problem/01000~09999/02164/2164.py3.py | njw1204/BOJ-AC | 1de41685725ae4657a7ff94e413febd97a888567 | [
"MIT"
] | 3 | 2019-04-19T16:37:47.000Z | 2021-10-25T00:45:00.000Z | from collections import deque
n,x=int(input()),deque()
for i in range(1,n+1): x.append(i)
while len(x)>1:
x.popleft()
if len(x)==1: break
x.append(x.popleft())
print(x.pop()) | 22.125 | 34 | 0.661017 | 37 | 177 | 3.162162 | 0.567568 | 0.034188 | 0.08547 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025478 | 0.112994 | 177 | 8 | 35 | 22.125 | 0.719745 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.125 | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
# ----------------------------------------------------------------------
# File: gpvdm_gui/gui/json_fdtd.py
# Repo: roderickmackenzie/gpvdm @ 914fd2ee93e7202339853acaec1d61d59b789987
# License: BSD-3-Clause
# ----------------------------------------------------------------------
#
# General-purpose Photovoltaic Device Model - a drift diffusion base/Shockley-Read-Hall
# model for 1st, 2nd and 3rd generation solar cells.
# Copyright (C) 2008-2022 Roderick C. I. MacKenzie r.c.i.mackenzie at googlemail.com
#
# https://www.gpvdm.com
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License v2.0, as published by
# the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
## @package json_fdtd
#  Store the fdtd domain json data
#
import sys
import os
import shutil
import json
from json_base import json_base
class json_fdtd_simulation(json_base):
def __init__(self):
json_base.__init__(self,"fdtd_segment")
self.var_list=[]
self.var_list.append(["english_name","FDTD (beta)"])
self.var_list.append(["icon","fdtd"])
self.var_list.append(["fdtd_lambda_start",520e-9])
self.var_list.append(["fdtd_lambda_stop",700e-9])
self.var_list.append(["fdtd_lambda_points",1])
self.var_list.append(["use_gpu",False])
self.var_list.append(["max_ittr",100000])
self.var_list.append(["zlen",1])
self.var_list.append(["xlen",60])
self.var_list.append(["ylen",60])
self.var_list.append(["xsize",8.0])
self.var_list.append(["lam_jmax",12])
self.var_list.append(["plot",1])
self.var_list.append(["fdtd_xzy","zy"])
self.var_list.append(["dt",1e-19])
self.var_list.append(["id",self.random_id()])
self.var_list_build()
class json_fdtd(json_base):
def __init__(self):
json_base.__init__(self,"fdtd",segment_class=True,segment_example=json_fdtd_simulation())
| 31.30303 | 91 | 0.727009 | 325 | 2,066 | 4.427692 | 0.489231 | 0.087561 | 0.137596 | 0.18902 | 0.245309 | 0.160528 | 0.102849 | 0.063933 | 0.063933 | 0.063933 | 0 | 0.029495 | 0.14666 | 2,066 | 65 | 92 | 31.784615 | 0.786727 | 0.462246 | 0 | 0.068966 | 0 | 0 | 0.144311 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068966 | false | 0 | 0.172414 | 0 | 0.310345 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
83150604a0fb11e77945d0c0fcad08abbb284ce0 | 342 | py | Python | download_from_link.py | bogdanf555/scripts | 42b7b36c5891da6dcde8f7889bdf0798f91bef12 | [
"MIT"
] | null | null | null | download_from_link.py | bogdanf555/scripts | 42b7b36c5891da6dcde8f7889bdf0798f91bef12 | [
"MIT"
] | null | null | null | download_from_link.py | bogdanf555/scripts | 42b7b36c5891da6dcde8f7889bdf0798f91bef12 | [
"MIT"
] | null | null | null | #!/usr/bin/python3
import requests
import sys
if __name__ == '__main__':
if len(sys.argv) != 3:
print("Error: you should pass 2 arguments: [link_to_download_from] [path_to_save_downloaded_file]")
exit(1)
url = sys.argv[1]
r = requests.get(url, allow_redirects=True)
open(sys.argv[2], 'wb').write(r.content)
| 24.428571 | 107 | 0.660819 | 52 | 342 | 4.038462 | 0.75 | 0.1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021739 | 0.192982 | 342 | 13 | 108 | 26.307692 | 0.73913 | 0.049708 | 0 | 0 | 0 | 0 | 0.308642 | 0.16358 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.111111 | 0.222222 | 0 | 0.222222 | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
831850a395edae115c39b123b0382e44942149bf | 644 | py | Python | profiles/migrations/0002_auto_20211214_0825.py | praekeltfoundation/ge-web | 331d22554dfd6b6f6060b1fd7a110f38dd7ddece | [
"BSD-2-Clause"
] | 1 | 2022-03-09T15:11:52.000Z | 2022-03-09T15:11:52.000Z | profiles/migrations/0002_auto_20211214_0825.py | praekeltfoundation/ge-web | 331d22554dfd6b6f6060b1fd7a110f38dd7ddece | [
"BSD-2-Clause"
] | 14 | 2022-01-03T09:49:41.000Z | 2022-03-31T12:53:31.000Z | profiles/migrations/0002_auto_20211214_0825.py | praekeltfoundation/ge-web | 331d22554dfd6b6f6060b1fd7a110f38dd7ddece | [
"BSD-2-Clause"
] | null | null | null | # Generated by Django 3.1.14 on 2021-12-14 08:25
import django.db.models.deletion
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('wagtailcore', '0066_collection_management_permissions'),
('profiles', '0001_initial'),
]
operations = [
migrations.AlterField(
model_name='profilesettings',
name='terms_and_conditions',
field=models.ForeignKey(blank=True, help_text='Choose a Terms and Conditions page', null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailcore.page'),
),
]
| 30.666667 | 194 | 0.673913 | 74 | 644 | 5.716216 | 0.675676 | 0.056738 | 0.066194 | 0.104019 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.046967 | 0.206522 | 644 | 20 | 195 | 32.2 | 0.780822 | 0.071429 | 0 | 0 | 1 | 0 | 0.260067 | 0.063758 | 0.071429 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.357143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
831cac4a9b399f71b7446e06e08d2d1e23c17328 | 1,335 | py | Python | app/marketing/migrations/0002_membership.py | NDevox/website | 76004e667f2295eddd79d500ba21f02a0480412f | [
"Apache-2.0"
] | null | null | null | app/marketing/migrations/0002_membership.py | NDevox/website | 76004e667f2295eddd79d500ba21f02a0480412f | [
"Apache-2.0"
] | null | null | null | app/marketing/migrations/0002_membership.py | NDevox/website | 76004e667f2295eddd79d500ba21f02a0480412f | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.11.2 on 2017-07-12 04:25
from __future__ import unicode_literals
from django.db import migrations, models
def forward(apps, schema_editor):
db_alias = schema_editor.connection.alias
Cron = apps.get_model('django_celery_beat', 'CrontabSchedule')
cron = Cron.objects.using(db_alias).create(minute='0', hour='0')
Task = apps.get_model('django_celery_beat', 'PeriodicTask')
Task.objects.using(db_alias).create(name='Capture slack membership counts',
task='marketing.tasks.capture_snapshot_of_user_count', # noqa
crontab=cron)
class Migration(migrations.Migration):
initial = True
dependencies = [
('marketing', '0001_initial'),
]
operations = [
migrations.CreateModel(
name='Membership',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('member_count', models.IntegerField()),
('deleted_count', models.IntegerField()),
('bot_count', models.IntegerField()),
('timestamp', models.DateTimeField(auto_now_add=True)),
],
),
migrations.RunPython(forward),
]
| 32.560976 | 114 | 0.605243 | 138 | 1,335 | 5.644928 | 0.608696 | 0.026958 | 0.088575 | 0.046213 | 0.136072 | 0.071887 | 0 | 0 | 0 | 0 | 0 | 0.023614 | 0.270412 | 1,335 | 40 | 115 | 33.375 | 0.776181 | 0.054682 | 0 | 0 | 1 | 0 | 0.174881 | 0.036566 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035714 | false | 0 | 0.071429 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
83207ebe69e3bf9bcd3f660b07c8f5bca9f8663b | 2,038 | py | Python | seeq/addons/clustering/__main__.py | seeq12/seeq-clustering | 220793499d5f9669e7d9dde4820af0eee27f84dc | [
"Apache-2.0"
] | 3 | 2021-10-15T05:32:44.000Z | 2021-12-14T16:33:24.000Z | seeq/addons/clustering/__main__.py | seeq12/seeq-clustering | 220793499d5f9669e7d9dde4820af0eee27f84dc | [
"Apache-2.0"
] | 2 | 2021-11-19T17:46:06.000Z | 2022-01-20T06:54:00.000Z | seeq/addons/clustering/__main__.py | seeq12/seeq-clustering | 220793499d5f9669e7d9dde4820af0eee27f84dc | [
"Apache-2.0"
] | null | null | null | import os
import sys
import argparse
from ._install_addon import install_addon
def cli_interface():
""" Installs Seeq Add-on Tool """
parser = argparse.ArgumentParser(description='Install Clustering as a Seeq Add-on Tool')
parser.add_argument('--username', type=str, default=None,
help='Username or Access Key of Seeq admin user installing the tool(s) ')
parser.add_argument('--seeq_url', type=str,
help="Seeq hostname URL with the format https://my.seeq.com/ or https://my.seeq.com:34216")
parser.add_argument('--app_url', type=str,
help="URL of clustering app notebook with the format e.g. https://my.seeq.com/data-lab/CBA9A827-35A8-4944-8A74-EE7008DC3ED8/notebooks/hb/seeq/addons/clustering/App.ipynb")
    parser.add_argument('--users', type=str, nargs='*', default=[],
                        help="List of the Seeq users that will have access to the Clustering Add-on Tool,"
                             " default: %(default)s")
    parser.add_argument('--groups', type=str, nargs='*', default=['Everyone'],
                        help="List of the Seeq groups that will have access to the Clustering Add-on Tool, "
                             "default: %(default)s")
parser.add_argument('--password', type=str, default=None,
help="Password of Seeq user installing the tool. Must supply a password if not supplying an accesskey for username")
parser.add_argument('--sort_key', type=str, default=None,
help="A string, typically one character letter. The sort_key determines the order in which the Add-on Tools are displayed in the tool panel, "
"default: %(default)s")
return parser.parse_args()
if __name__ == '__main__':
args = cli_interface()
install_addon(
sort_key=args.sort_key,
permissions_group=args.groups,
permissions_users=args.users,
username=args.username,
password=args.password
) | 49.707317 | 195 | 0.632483 | 263 | 2,038 | 4.787072 | 0.376426 | 0.05004 | 0.094519 | 0.042891 | 0.225576 | 0.115965 | 0.115965 | 0.115965 | 0.115965 | 0.115965 | 0 | 0.016513 | 0.257115 | 2,038 | 41 | 196 | 49.707317 | 0.815059 | 0.012267 | 0 | 0.090909 | 0 | 0.090909 | 0.442173 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.030303 | false | 0.090909 | 0.121212 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
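The installer record above builds its command line with `argparse`, mixing plain string flags with list-valued `nargs='*'` flags that fall back to defaults when omitted. A self-contained sketch of that flag pattern (the parser below is illustrative, not the installer's actual CLI) is:

```python
import argparse

def build_parser():
    # Mirrors the installer's flag style: optional strings plus
    # list-valued flags with defaults applied when the flag is absent.
    parser = argparse.ArgumentParser(description="Install demo Add-on Tool")
    parser.add_argument("--username", type=str, default=None)
    parser.add_argument("--users", type=str, nargs="*", default=[])
    parser.add_argument("--groups", type=str, nargs="*", default=["Everyone"])
    return parser
```

For example, `build_parser().parse_args(["--users", "a", "b"]).users` yields `["a", "b"]`, while leaving `--groups` off keeps the `["Everyone"]` default.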
8321d10093f3ed3b6d58be76b8214f867e414822 | 939 | py | Python | utils/customchecks.py | arielbeje/good-bot-name | de1429ea5b653fd8ee88d649452ebef7e7399e5b | [
"MIT"
] | 10 | 2018-04-08T00:02:18.000Z | 2022-01-25T18:34:06.000Z | utils/customchecks.py | arielbeje/good-bot-name | de1429ea5b653fd8ee88d649452ebef7e7399e5b | [
"MIT"
] | 14 | 2018-01-26T16:55:09.000Z | 2021-09-19T11:35:58.000Z | utils/customchecks.py | arielbeje/Good_Bot_Name | de1429ea5b653fd8ee88d649452ebef7e7399e5b | [
"MIT"
] | 14 | 2018-02-14T01:35:08.000Z | 2021-03-30T12:18:03.000Z | """
Code stolen from https://github.com/Rapptz/discord.py
"""
import functools
import discord
from discord.ext import commands
from . import sql
class NotAModError(commands.CheckFailure):
pass
class NoTokenError(Exception):
pass
def is_mod():
async def predicate(ctx):
ch = ctx.channel
permissions = ch.permissions_for(ctx.author)
if permissions.administrator:
return True
msg = ctx.message
if not msg.guild:
raise NotAModError()
return False
getter = functools.partial(discord.utils.get, msg.author.roles)
modroles = [int(result[0]) for result in await sql.fetch("SELECT roleid FROM modroles WHERE serverid=?", str(ctx.message.guild.id))]
if not any(getter(id=role) is not None for role in modroles):
raise NotAModError()
return False
return True
return commands.check(predicate)
| 24.076923 | 140 | 0.652822 | 115 | 939 | 5.313043 | 0.547826 | 0.032733 | 0.075286 | 0.091653 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001437 | 0.258786 | 939 | 38 | 141 | 24.710526 | 0.876437 | 0.056443 | 0 | 0.32 | 0 | 0 | 0.050114 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04 | false | 0.08 | 0.16 | 0 | 0.48 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
832283ba27d3f56129d5cb0cef3c3b8a60934088 | 2,974 | py | Python | tests/test_motif_finder.py | gaybro8777/RStudio-GitHub-Analysis | 014195c90ca49f64d28c9fcd96d128301ff65157 | [
"BSD-2-Clause"
] | 2 | 2020-09-13T11:55:13.000Z | 2021-05-23T01:29:19.000Z | tests/test_motif_finder.py | gaybro8777/RStudio-GitHub-Analysis | 014195c90ca49f64d28c9fcd96d128301ff65157 | [
"BSD-2-Clause"
] | null | null | null | tests/test_motif_finder.py | gaybro8777/RStudio-GitHub-Analysis | 014195c90ca49f64d28c9fcd96d128301ff65157 | [
"BSD-2-Clause"
] | 2 | 2020-10-17T20:18:37.000Z | 2021-05-23T01:29:25.000Z | """
This script tests the classes and functions from motif_finder.py.
Parameters
----------
None
Returns
-------
Assertion errors if tests fail
"""
import sys
import random
import pickle
import networkx as nx
from github_analysis.big_cloud_scratch import git_graph
from github_analysis.data_layer import getCommitsByProjectIds
from github_analysis.cluster import get_embedding_clusters
from github_analysis.motif_finder import *
clusters = get_embedding_clusters(random_state=0)
projects_cluster = getCommitsByProjectIds(clusters[0])
G = git_graph(projects_cluster)
mf = MotifFinder(G)
# Unit tests
def test_main_output_type():
pass
def test_sample_initial_node_output_type():
"""Check that MotifFinder.sample_initial_node outputs an integer."""
assert type(mf.sample_initial_node()) == int
def test_sample_initial_node_output():
"""Check that MotifFinder.sample_initial_node outputs a node in the given graph."""
assert mf.sample_initial_node() in G
def test_get_random_child_output_type():
"""Check that MotifFinder.get_random_child outputs an integer."""
assert type(mf.get_random_child(355738534)) == int
def test_get_random_child_no_children():
"""Check that MotifFinder.get_random_child outputs None if there are no children."""
assert mf.get_random_child(139371373) is None
def test_get_random_child_output():
"""Check that MotifFinder.get_random_child outputs a child of the node its been given."""
initial_node = mf.sample_initial_node()
child = mf.get_random_child(initial_node)
assert child in G.successors(initial_node)
def test_get_sample_motif_bad_input():
"""Check that MotifFinder.get_sample_motif raises an error when not given an integer for the k param."""
try:
mf.get_sample_motif('5')
except TypeError:
return True
raise TypeError
def test_get_sample_motif_output_type():
"""Check that MotifFinder.get_sample_motif outputs a networkx directed graph."""
assert type(mf.get_sample_motif(5)) == nx.classes.digraph.DiGraph
def test_get_sample_motif_output():
"""Check that MotifFinder.get_sample_motif outputs a networkx directed graph that is a subgraph of G."""
subgraph = mf.get_sample_motif(5)
for node in subgraph:
if node in G:
continue
else:
            raise ValueError("Subgraph doesn't contain the same nodes as graph")
def test_get_motif_samples_bad_input():
"""Check that MotifFinder.get_motif_samples raises an error when not given an integer for the k and num_samples
param."""
try:
mf.get_motif_samples('5', '5')
except TypeError:
return True
raise TypeError
def test_get_motif_samples_output_type():
"""Check that MotifFinder.get_sample_motif outputs a dictionary."""
assert type(mf.get_motif_samples(5,5)) == dict
def test_get_motifs_by_cluster_output_type():
assert type(get_motifs_by_cluster(clusters)) == dict
# def test_get_motifs | 28.596154 | 115 | 0.751849 | 429 | 2,974 | 4.920746 | 0.254079 | 0.043108 | 0.094742 | 0.087162 | 0.501658 | 0.421127 | 0.275225 | 0.175272 | 0.175272 | 0.175272 | 0 | 0.010878 | 0.165434 | 2,974 | 104 | 116 | 28.596154 | 0.839645 | 0.331876 | 0 | 0.156863 | 0 | 0 | 0.023896 | 0 | 0 | 0 | 0 | 0 | 0.156863 | 1 | 0.235294 | false | 0.019608 | 0.156863 | 0 | 0.431373 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
83245e358084afd5d7f959c3a7aebfc9ab55bb73 | 1,107 | py | Python | torrent.py | fishy/scripts | 91abd0451cae916d885f4ff0fd2f69d335d37cf3 | [
"BSD-3-Clause"
] | 4 | 2016-05-09T13:42:23.000Z | 2021-11-29T15:16:11.000Z | torrent.py | fishy/scripts | 91abd0451cae916d885f4ff0fd2f69d335d37cf3 | [
"BSD-3-Clause"
] | null | null | null | torrent.py | fishy/scripts | 91abd0451cae916d885f4ff0fd2f69d335d37cf3 | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python
import sys
import os
from types import StringType
# get bencode package from http://github.com/fishy/scripts/downloads
from bencode.bencode import bencode, bdecode, BTFailure
try :
torrent = sys.argv[1]
except IndexError :
print "Usage: \"%s <torrent_file> [tracker_url]\" to show torrent info (without tracker_url), or to add tracker(s)" % sys.argv[0]
sys.exit()
size = os.stat(torrent).st_size
file = open(torrent, "rb")
data = file.read(size)
file.close()
info = bdecode(data)
if len(sys.argv) == 2 :
print info
sys.exit()
if 'announce-list' not in info :
list = [info['announce']]
for i in range(len(sys.argv)-2) :
tracker = sys.argv[i+2]
if tracker not in list :
list.append(tracker)
print list
info['announce-list'] = [list]
else :
list = info['announce-list'][0]
if type(list) == StringType :
list = [list]
for i in range(len(sys.argv)-2) :
tracker = sys.argv[i+2]
if tracker not in list :
list.append(tracker)
print list
info['announce-list'][0] = list
writedata = bencode(info)
file = open(torrent, "wb")
file.write(writedata)
file.close()
| 23.0625 | 130 | 0.68925 | 176 | 1,107 | 4.3125 | 0.369318 | 0.064559 | 0.084321 | 0.043478 | 0.28722 | 0.258235 | 0.258235 | 0.258235 | 0.258235 | 0.258235 | 0 | 0.009677 | 0.159892 | 1,107 | 47 | 131 | 23.553191 | 0.806452 | 0.078591 | 0 | 0.358974 | 0 | 0 | 0.134578 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.102564 | null | null | 0.102564 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
832b2f005e0af85ddb6e44118b2f277f3ecf6b06 | 571 | py | Python | Dataset/Leetcode/valid/78/455.py | kkcookies99/UAST | fff81885aa07901786141a71e5600a08d7cb4868 | [
"MIT"
] | null | null | null | Dataset/Leetcode/valid/78/455.py | kkcookies99/UAST | fff81885aa07901786141a71e5600a08d7cb4868 | [
"MIT"
] | null | null | null | Dataset/Leetcode/valid/78/455.py | kkcookies99/UAST | fff81885aa07901786141a71e5600a08d7cb4868 | [
"MIT"
] | null | null | null | class Solution:
def __init__(self):
self.result = []
def XXX(self, nums):
return self.helper(nums, 0, [])
def helper(self, nums, index, temp):
if index == len(nums):
self.result.append(temp)
return
self.result.append(temp)
for i in range(index, len(nums)):
self.helper(nums, i + 1, temp + [nums[i]])
return self.result
| 27.190476 | 139 | 0.576182 | 70 | 571 | 4.642857 | 0.414286 | 0.123077 | 0.086154 | 0.098462 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007264 | 0.276708 | 571 | 20 | 140 | 28.55 | 0.779661 | 0 | 0 | 0.133333 | 0 | 0 | 0.01406 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
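The record above generates all subsets of a list (LeetCode 78) by recursive backtracking over an index. An equivalent iterative sketch — each new element doubles the collection by extending every subset found so far — is:

```python
def subsets(nums):
    # Start from the empty subset; each element either joins an
    # existing subset or does not, doubling the result each pass.
    result = [[]]
    for x in nums:
        result += [s + [x] for s in result]
    return result
```

For `[1, 2]` this produces `[[], [1], [2], [1, 2]]`, and in general `2**len(nums)` subsets.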
832b736a0869d3dc222dea9d11955ffc80809ec5 | 1,322 | py | Python | IDS/IDS/urls.py | YashwantChauhan/SDL | 0d48dfa129d72316f35967df98ce2f1e6f949fc5 | [
"MIT"
] | 2 | 2020-12-24T15:13:49.000Z | 2021-06-05T15:43:58.000Z | IDS/IDS/urls.py | YashwantChauhan/SDL | 0d48dfa129d72316f35967df98ce2f1e6f949fc5 | [
"MIT"
] | 2 | 2021-12-28T14:06:20.000Z | 2021-12-28T14:25:44.000Z | IDS/IDS/urls.py | YashwantChauhan/SDL | 0d48dfa129d72316f35967df98ce2f1e6f949fc5 | [
"MIT"
] | null | null | null | """IDS URL Configuration
The `urlpatterns` list routes URLs to views. For more information please see:
https://docs.djangoproject.com/en/2.2/topics/http/urls/
Examples:
Function views
1. Add an import: from my_app import views
2. Add a URL to urlpatterns: path('', views.home, name='home')
Class-based views
1. Add an import: from other_app.views import Home
2. Add a URL to urlpatterns: path('', Home.as_view(), name='home')
Including another URLconf
1. Import the include() function: from django.urls import include, path
2. Add a URL to urlpatterns: path('blog/', include('blog.urls'))
"""
from django.contrib import admin
from django.urls import path,include
from Apps.home import views as home_views
from Apps.Signup import views as Signup_views
from Apps.Dashboard import urls as Dash_urls
from django.conf import settings
from django.conf.urls.static import static
urlpatterns = [
path('admin/', admin.site.urls),
path('' , home_views.home , name='home' ),
path('Signin/' , Signup_views.signin , name='Signin' ),
path('Signup/' , Signup_views.signup , name='Signup'),
path('Signout/', Signup_views.logout , name='logout'),
path('Dashboard/', include(Dash_urls.urlpatterns) ),
]
urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
| 38.882353 | 77 | 0.723147 | 194 | 1,322 | 4.85567 | 0.324742 | 0.053079 | 0.015924 | 0.025478 | 0.124204 | 0.124204 | 0.079618 | 0 | 0 | 0 | 0 | 0.007162 | 0.155068 | 1,322 | 33 | 78 | 40.060606 | 0.836168 | 0.46823 | 0 | 0 | 0 | 0 | 0.086207 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4375 | 0 | 0.4375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
832fa03411fdc8cba2cd96e51a219e3ef9e4283a | 975 | py | Python | main.py | BL-Lac149597870/drugVQA | 604703d66457c958ddc9eeb35268391edb6c4996 | [
"MIT"
] | null | null | null | main.py | BL-Lac149597870/drugVQA | 604703d66457c958ddc9eeb35268391edb6c4996 | [
"MIT"
] | null | null | null | main.py | BL-Lac149597870/drugVQA | 604703d66457c958ddc9eeb35268391edb6c4996 | [
"MIT"
] | null | null | null | '''
Author: QHGG
Date: 2021-02-27 13:42:43
LastEditTime: 2021-03-01 23:26:38
LastEditors: QHGG
Description:
FilePath: /drugVQA/main.py
'''
import torch
from sklearn import metrics
import warnings
warnings.filterwarnings("ignore")
torch.cuda.set_device(0)
print('cuda size == 1')
from trainAndTest import *
import time
def timeLable():
return time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
def main():
"""
Parsing command line parameters, reading data, fitting and scoring a SEAL-CI model.
"""
losses,accs,testResults = train(trainArgs)
with open("logs/"+ timeLable() +"losses.txt", "w") as f:
f.writelines([str(log) + '\n' for log in losses])
with open("logs/"+ timeLable() +"accs.txt", "w") as f:
f.writelines([str(log) + '\n' for log in accs])
with open("logs/"+ timeLable() +"testResults.txt", "w") as f:
f.writelines([str(log) + '\n' for log in testResults])
if __name__ == "__main__":
main()
| 28.676471 | 87 | 0.645128 | 139 | 975 | 4.460432 | 0.582734 | 0.03871 | 0.058065 | 0.101613 | 0.159677 | 0.159677 | 0.159677 | 0.159677 | 0.159677 | 0.159677 | 0 | 0.037688 | 0.18359 | 975 | 33 | 88 | 29.545455 | 0.741206 | 0.220513 | 0 | 0 | 0 | 0 | 0.138399 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | true | 0 | 0.25 | 0.05 | 0.4 | 0.05 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8330e631a49e6776f2efa9742d5ed0e6a7e38620 | 6,556 | py | Python | src/utility.py | bbookman/demand | 47101843ab84f4161e618edfa5a8e8fea2e1d955 | [
"MIT"
] | null | null | null | src/utility.py | bbookman/demand | 47101843ab84f4161e618edfa5a8e8fea2e1d955 | [
"MIT"
] | null | null | null | src/utility.py | bbookman/demand | 47101843ab84f4161e618edfa5a8e8fea2e1d955 | [
"MIT"
] | null | null | null | import sys, re, pdb
from bs4 import BeautifulSoup as beautiful
from datetime import datetime
import requests, logging
import timeout_decorator, pandas as pd
import socket, urllib3
def read_input_file():
    # todo - what if argument is not there or invalid?
    print_and_log('Reading input file')
    path = sys.argv[2]
    with open(path) as f:
        results = f.read()
    return results
def get_zip_code():
# First argument ..
# todo what if arg is not there or invalid
print_and_log(f'Got command line zip code {sys.argv[1]} ', 'info')
return sys.argv[1]
def make_date_string():
stamp = datetime.now()
date_string = stamp.strftime('%Y-%d-%m-%H-%M-%S')
return date_string
def make_time_string():
stamp = datetime.now()
time_string = stamp.strftime('%H:%M')
return time_string
def build_site_url(template, title, zipcode='', radius='90', age='60'):
""" Makes an url with each query item inserted into the url template
site_id: type = str, value of site id like 'indeed' or 'careerbuilder'
template: type = str, the url template. example: 'http://indeed.com?{}&afg=&rfr=&title={}'
title: type = str, job title using escape characters that are site dependent. example: 'software+quality+engineer'
zipcode: type = str, ZIP CODE
radius: type = str, represents the radius of the job search. example: '50' (miles)
age: type = str, the number of days the job description has been posted. example: '30' (days)
returns an url string
"""
url = template.format(title = title, zipcode = zipcode, radius = radius, age = age)
print_and_log(f'Built site url: {url}')
return url
def build_job_title(title, title_separator):
""" Takes list of title words and adds site specific separator between words
title: string
separator: type = string
returns string
"""
result =''
words = title.split()
for word in words:
result+= word + title_separator
return result[:-1]
@timeout_decorator.timeout(10)
def get_all_anchors(soup):
print_and_log('Getting All Anchors')
return soup('a')
@timeout_decorator.timeout(10)
def get_anchors_by_selector(title_selector, soup):
print_and_log(f'Getting Anchors by selector: {title_selector}')
return soup('a', title_selector)
def add_site_id(site_id, ref):
print_and_log('Adding site id to href for complete url')
return f'http://{site_id}.com{ref}'
def title_meets_threshold(title, title_word_values, threshold=90):
print('Evaluating job title against threshold')
total = 0
if not title:
return False
    # [A-z] also matches the punctuation between 'Z' and 'a'; the title is
    # already lower-cased, so [a-z] is the intended class.
    t = re.sub(r"(?<=[a-z])\&(?=[a-z])", " ", title.lower())
    t = re.sub(r"(?<=[a-z])\-(?=[a-z])", " ", t)
for word, value in title_word_values.items():
if word.lower() in t:
total+=value
if total >= threshold:
print_and_log(f'Met threshold: {title}')
return True
print_and_log(f'Not met threshold: {title}')
return False
@timeout_decorator.timeout(10)
def get_soup(url, skill_dict):
soup = None
if url == 'http://dice.com/jobs/browsejobs':
print_and_log(make_data_frame(skill_dict))
sys.exit()
elif url in 'http://simplyhired.comhttps://www.simplyhired':
return soup
else:
print_and_log(f'Getting raw html from: {url}' )
user_agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.13; rv:63.0) Gecko/20100101 Firefox/63.0'
session = requests.Session()
session.headers.update({'User-Agent': user_agent})
try:
response = session.get(url)
body = response.text
soup = beautiful(body, 'html.parser')
print_and_log('Got raw html')
except urllib3.exceptions.NewConnectionError as e:
print_and_log(e, 'error')
write_file(skill_dict, title='new_connection_error_encountered_captured_results')
except socket.gaierror as s:
print_and_log(s, 'error')
write_file(skill_dict, title='socket_error_encountered_captured_results')
except socket.error as e:
print_and_log(e, 'error')
write_file(skill_dict, title='socket_error_encountered_captured_results')
except Exception as e:
print_and_log(e, 'error')
write_file(skill_dict, title='exception_encountered_captured_results')
except BaseException as b:
print_and_log(b, 'error')
write_file(skill_dict, title='exception_encountered_captured_results')
return soup
def clean_text(text):
body = re.split(r'\W+', text)
return [word.lower() for word in body]
@timeout_decorator.timeout(10)
def get_title_by_tag(selector, tag, soup):
print_and_log(f'Getting job title by tag: {tag}, selector: {selector}')
data = soup(tag, selector)
text = ''
if data:
text = data[0].text
text = text.strip('\n')
text = text.strip()
text = text.rstrip()
text = text.lstrip()
print_and_log(f'Got title: {text}')
return text
@timeout_decorator.timeout(10)
def filter_links(links, link_selector):
print_and_log(f'Filtering links, selector:{link_selector}')
return [link for link in links if link_selector.lower() in link.lower()]
def like(string):
"""
Return a compiled regular expression that matches the given
string with any prefix and postfix, e.g. if string = "hello",
the returned regex matches r".*hello.*"
"""
string_ = string
if not isinstance(string_, str):
string_ = str(string_)
    match_all = r".*"  # MATCH_ALL was never defined; match any prefix/postfix
    regex = match_all + re.escape(string_) + match_all
return re.compile(regex, flags=re.DOTALL)
def set_log(filename, level): #todo level options
logging.basicConfig(filename=filename, level=level)
def report(e: Exception):
logging.exception(str(e))
def print_and_log(text, level = 'info'):
print(text)
if level == 'debug':
logging.debug(text)
elif level == 'info':
logging.info(text)
elif level == 'warning':
logging.warning(text)
def make_data_frame(skill_dict):
series = pd.Series(skill_dict)
df = series.to_frame('skill_count')
df.sort_values('skill_count', ascending=False, inplace=True)
df['percent'] = df['skill_count'] / df['skill_count'].sum() * 100
df.round(2)
return df
def write_file(skill_dict, zipcode = '99999', title = 'RESULTS', ):
d = make_date_string()
file_name = f'{title}_{zipcode}_{d}results.txt'
with open(file_name, 'w') as file:
file.write(f'[{title}: [{zipcode}: {skill_dict} ]]')
| 33.111111 | 119 | 0.657413 | 915 | 6,556 | 4.542077 | 0.287432 | 0.038499 | 0.052936 | 0.025987 | 0.19538 | 0.154475 | 0.101781 | 0.101781 | 0.08205 | 0.08205 | 0 | 0.011337 | 0.219646 | 6,556 | 197 | 120 | 33.279188 | 0.801016 | 0.151769 | 0 | 0.12766 | 0 | 0.007092 | 0.199232 | 0.055748 | 0 | 0 | 0 | 0.010152 | 0 | 1 | 0.141844 | false | 0 | 0.042553 | 0 | 0.319149 | 0.156028 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8331c341859f7ceb90f3dad9bbc18d41377413e5 | 1,940 | py | Python | section_11_(api)/dicts_and_lists.py | hlcooll/python_lessons | 3790f98cbc5a0721fcfc9e5f52ba79a64878f362 | [
"MIT"
] | 425 | 2015-01-13T03:19:10.000Z | 2022-03-13T00:34:44.000Z | section_11_(api)/dicts_and_lists.py | Supercodero/python-lessons | 38409c318e7a62d30b2ffd68f8a7a5a5ec00778d | [
"MIT"
] | null | null | null | section_11_(api)/dicts_and_lists.py | Supercodero/python-lessons | 38409c318e7a62d30b2ffd68f8a7a5a5ec00778d | [
"MIT"
] | 178 | 2015-01-08T05:01:05.000Z | 2021-12-02T00:56:58.000Z | # Dictionaries and lists, together
# Loading from https://raw.githubusercontent.com/shannonturner/education-compliance-reports/master/investigations.json
investigations = {
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"geometry": {
"type": "Point",
"coordinates": [
-112.073032,
33.453527
]
},
"properties": {
"marker-symbol": "marker",
"marker-color": "#D4500F",
"address": " AZ ",
"name": "Arizona State University"
}
},
{
"type": "Feature",
"geometry": {
"type": "Point",
"coordinates": [
-121.645734,
39.648248
]
},
"properties": {
"marker-symbol": "marker",
"marker-color": "#D4500F",
"address": " CA ",
"name": "Butte-Glen Community College District"
}
},
]
}
# The first level is a dictionary with two keys: type and features
# type's value is a string: FeatureCollection
# features' value is a list of dictionaries
# We're going to focus on the features list.
# Each item in the features list is a dictionary that has three keys: type, geometry, and properties
# If we wanted to access all of the properies for the first map point, here's how:
print investigations['features'][0]['properties']
# list of dictionaries ^ ^ ^
# first map point | | properties
# {
# "marker-symbol": "marker",
# "marker-color": "#D4500F",
# "address": " AZ ",
# "name": "Arizona State University"
# }
# As we see above, properties is itself a dictionary
# To get the name of that map point:
print(investigations['features'][0]['properties']['name'])
# Arizona State University
# Generally speaking, if what's between the square brackets is a number, you're accessing a list.
# If it's a string, you're accessing a dictionary.
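# The same bracket logic scales up to loops. A standalone sketch (using a
# trimmed copy of the data above) that pulls every name out of the features list:

```python
# A trimmed, standalone copy of the GeoJSON-style structure above
investigations = {
    "type": "FeatureCollection",
    "features": [
        {"type": "Feature",
         "properties": {"address": " AZ ", "name": "Arizona State University"}},
        {"type": "Feature",
         "properties": {"address": " CA ", "name": "Butte-Glen Community College District"}},
    ],
}

# Each pass of the loop gives one feature dict; index into it with string keys
names = [feature['properties']['name'] for feature in investigations['features']]
print(names)
```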
# If you get stuck or are getting errors, try printing out the item and the key or index. | 26.216216 | 118 | 0.625258 | 233 | 1,940 | 5.206009 | 0.476395 | 0.012366 | 0.054411 | 0.06925 | 0.301731 | 0.239077 | 0.174773 | 0.174773 | 0.131904 | 0.131904 | 0 | 0.03272 | 0.243814 | 1,940 | 74 | 119 | 26.216216 | 0.794138 | 0.557216 | 0 | 0.358974 | 0 | 0 | 0.399522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.051282 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
83383133f1e2636bee0ef87328b2ad1c26e323fd | 1,288 | py | Python | Desafio horario atual/__init__.py | pinheirogus/Curso-Python-Udemy | d6d52320426172e924081b9df619490baa8c6016 | [
"MIT"
] | 1 | 2021-09-01T01:58:13.000Z | 2021-09-01T01:58:13.000Z | Desafio horario atual/__init__.py | pinheirogus/Curso-Python-Udemy | d6d52320426172e924081b9df619490baa8c6016 | [
"MIT"
] | null | null | null | Desafio horario atual/__init__.py | pinheirogus/Curso-Python-Udemy | d6d52320426172e924081b9df619490baa8c6016 | [
"MIT"
] | null | null | null | # num1 = input("Digite um número inteiro: ")
#
#
# try:
#
# if num1.isnumeric() :
# num1 = int(num1)
# if (num1 % 2) == 0 :
# print("Você digitou um número par.")
# elif (num1 % 2) != 0:
# print("Você digitou um número ímpar.")
# else:
# print("Você não digitou um número válido.")
# else:
# print("Você não digitou um número inteiro.")
# except:
# print("Você não digitou um número.")
###################################################################################################################################
#hora_atual = input("What is the current time? ")
###################################################################################################################################
nome = input("Please enter your first name: ")
try:
    if nome.isnumeric():
        print("You did not enter a valid name.")
    else:
        if len(nome) <= 4:
            print("Your name is short.")
        elif (len(nome) == 5) or (len(nome) == 6):
            print("Your name is normal.")
        elif len(nome) > 6:
            print("Your name is very long.")
        else:
            print("You did not enter a valid name.")
except:
    print("You did not enter a valid name.")
| 30.666667 | 131 | 0.420807 | 132 | 1,288 | 4.098485 | 0.325758 | 0.133087 | 0.133087 | 0.210721 | 0.554529 | 0.554529 | 0.475046 | 0.110906 | 0 | 0 | 0 | 0.015806 | 0.263199 | 1,288 | 41 | 132 | 31.414634 | 0.554268 | 0.375 | 0 | 0.266667 | 0 | 0 | 0.369025 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.4 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
833a1a0c360f3cdcf8d7b6c1f70840aed091b251 | 699 | py | Python | Lista 2/Exercicio 14.py | GiovannaPazello/Projetos-em-Python | 3cf7edbdf2a2350605a775389f7fe2cc7fe8032e | [
"MIT"
] | null | null | null | Lista 2/Exercicio 14.py | GiovannaPazello/Projetos-em-Python | 3cf7edbdf2a2350605a775389f7fe2cc7fe8032e | [
"MIT"
] | null | null | null | Lista 2/Exercicio 14.py | GiovannaPazello/Projetos-em-Python | 3cf7edbdf2a2350605a775389f7fe2cc7fe8032e | [
"MIT"
] | null | null | null | '''Faça um programa que gere números aleatórios entre 0 e 50 até o número 32 ser
gerado. Quando isso ocorrer, informar:
a. A soma de todos os números gerados
b. A quantidade de números gerados que é impar
c. O menor número gerado'''
import random
x = random.randint(0,50)
cont = 32
somaNumeros = 0
qqntImpares = 0
menorNumero = 51
while cont != x:
x = random.randint(0, 50)
somaNumeros = somaNumeros + x
if x%2 != 0:
qqntImpares = qqntImpares + 1
if menorNumero > x:
menorNumero = x
print('The sum of all generated numbers is {}'.format(somaNumeros))
print('The count of odd numbers is {}'.format(qqntImpares))
print('The smallest number is {}'.format(menorNumero))
| 23.3 | 80 | 0.690987 | 107 | 699 | 4.514019 | 0.457944 | 0.043478 | 0.028986 | 0.049689 | 0.15735 | 0.086957 | 0 | 0 | 0 | 0 | 0 | 0.036364 | 0.213162 | 699 | 29 | 81 | 24.103448 | 0.841818 | 0.323319 | 0 | 0.125 | 0 | 0 | 0.186551 | 0 | 0 | 0 | 0 | 0.034483 | 0 | 1 | 0 | false | 0 | 0.0625 | 0 | 0.0625 | 0.1875 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
833a4ecb5ab38b8de2e042cd613f15a274dee6fa | 1,556 | py | Python | mavsim_python/chap4/wind_simulation.py | eyler94/mavsim_template_files | 181a76f15dc454f5a6f58f4596d9039cbe388cd9 | [
"MIT"
] | null | null | null | mavsim_python/chap4/wind_simulation.py | eyler94/mavsim_template_files | 181a76f15dc454f5a6f58f4596d9039cbe388cd9 | [
"MIT"
] | null | null | null | mavsim_python/chap4/wind_simulation.py | eyler94/mavsim_template_files | 181a76f15dc454f5a6f58f4596d9039cbe388cd9 | [
"MIT"
] | 1 | 2021-11-15T09:53:42.000Z | 2021-11-15T09:53:42.000Z | """
Class to determine wind velocity at any given moment,
calculates a steady wind speed and uses a stochastic
process to represent wind gusts. (Follows section 4.4 in uav book)
"""
import sys
sys.path.append('..')
import numpy as np
class wind_simulation:
def __init__(self, Ts):
# steady state wind defined in the inertial frame
self._steady_state = np.array([[0., 0., 0.]]).T
# self.steady_state = np.array([[3., 1., 0.]]).T
# Dryden gust model parameters (pg 56 UAV book)
        # HACK: Setting Va to a constant value is a hack. We set a nominal airspeed for the gust model.
        # Could pass current Va into the gust function and recalculate A and B matrices.
        Va = 17
        # Dryden model in state-space form (5 states: 1 for u, 2 each for v and w).
        # The turbulence parameters below are assumed nominal values for
        # low-altitude, light turbulence (UAV book Table 4.1) -- adjust as needed.
        Lu = Lv = 200.
        Lw = 50.
        sigma_u = sigma_v = 1.06
        sigma_w = 0.7
        au, av, aw = Va/Lu, Va/Lv, Va/Lw
        self._A = np.array([[-au, 0., 0., 0., 0.],
                            [0., 0., 1., 0., 0.],
                            [0., -av**2, -2.*av, 0., 0.],
                            [0., 0., 0., 0., 1.],
                            [0., 0., 0., -aw**2, -2.*aw]])
        self._B = np.array([[1., 0., 1., 0., 1.]]).T
        # Output gains from the Dryden transfer functions
        ku = sigma_u*np.sqrt(2.*Va/(np.pi*Lu))
        kv = sigma_v*np.sqrt(3.*Va/(np.pi*Lv))
        kw = sigma_w*np.sqrt(3.*Va/(np.pi*Lw))
        self._C = np.array([[ku, 0., 0., 0., 0.],
                            [0., kv*av/np.sqrt(3.), kv, 0., 0.],
                            [0., 0., 0., kw*aw/np.sqrt(3.), kw]])
        self._gust_state = np.zeros((5, 1))
        self._Ts = Ts
def update(self):
# returns a six vector.
# The first three elements are the steady state wind in the inertial frame
# The second three elements are the gust in the body frame
return np.concatenate(( self._steady_state, self._gust() ))
def _gust(self):
# calculate wind gust using Dryden model. Gust is defined in the body frame
w = np.random.randn() # zero mean unit variance Gaussian (white noise)
# propagate Dryden model (Euler method): x[k+1] = x[k] + Ts*( A x[k] + B w[k] )
self._gust_state += self._Ts * (self._A @ self._gust_state + self._B * w)
# output the current gust: y[k] = C x[k]
return self._C @ self._gust_state
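# The Euler step used in _gust() can be sanity-checked on its own. A minimal
# sketch with assumed scalar stand-ins for the A, B, C matrices (not the Dryden
# values the template asks for):

```python
import numpy as np

np.random.seed(0)  # reproducible white noise

Ts = 0.01
A, B, C = -1.0, 1.0, 0.5   # assumed scalar stand-ins for the gust matrices
x = 0.0                    # gust state

for _ in range(1000):
    w = np.random.randn()         # zero-mean unit-variance Gaussian
    x += Ts * (A * x + B * w)     # Euler step: x[k+1] = x[k] + Ts*(A x[k] + B w[k])

gust = C * x
print(gust)
```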
| 38.9 | 105 | 0.628535 | 238 | 1,556 | 3.991597 | 0.441176 | 0.057895 | 0.054737 | 0.053684 | 0.110526 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011545 | 0.27635 | 1,556 | 39 | 106 | 39.897436 | 0.832149 | 0.457584 | 0 | 0 | 0 | 0 | 0.003082 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.111111 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
833c0720b2fa02e3aacf53733cbb5dfadce129a9 | 326 | py | Python | project4/network/migrations/0005_remove_post_likers.py | mjs375/cs50_Network | 31a2399f4429931b15721861a2940b57811ae844 | [
"MIT"
] | null | null | null | project4/network/migrations/0005_remove_post_likers.py | mjs375/cs50_Network | 31a2399f4429931b15721861a2940b57811ae844 | [
"MIT"
] | null | null | null | project4/network/migrations/0005_remove_post_likers.py | mjs375/cs50_Network | 31a2399f4429931b15721861a2940b57811ae844 | [
"MIT"
] | null | null | null | # Generated by Django 3.1.3 on 2020-11-15 16:01
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('network', '0004_auto_20201111_2224'),
]
operations = [
migrations.RemoveField(
model_name='post',
name='likers',
),
]
| 18.111111 | 47 | 0.588957 | 35 | 326 | 5.371429 | 0.828571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.135371 | 0.297546 | 326 | 17 | 48 | 19.176471 | 0.68559 | 0.138037 | 0 | 0 | 1 | 0 | 0.143369 | 0.082437 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
83419d745e57d76be4f84f2cf4a69352d320b89f | 738 | py | Python | users/urls.py | mahmutcankurt/DjangoBlogSite | 8597bbe7ed066b50e02367a98f0062deb37d251d | [
"Apache-2.0"
] | 3 | 2021-01-24T13:14:33.000Z | 2022-01-25T22:17:59.000Z | users/urls.py | mahmutcankurt1/staj | 8597bbe7ed066b50e02367a98f0062deb37d251d | [
"Apache-2.0"
] | null | null | null | users/urls.py | mahmutcankurt1/staj | 8597bbe7ed066b50e02367a98f0062deb37d251d | [
"Apache-2.0"
] | null | null | null | from django.conf.urls import url
from .views import signupView, activate, account_activation_sent, user_login, user_logout, user_edit_profile, user_change_password
urlpatterns = [
url(r'^register/$', signupView, name='register'),
url(r'^account_activation_sent/$', account_activation_sent, name='account_activation_sent'),
url(r'^activate/(?P<uidb64>[0-9A-Za-z_\-]+)/(?P<token>[0-9A-Za-z]{1,13}-[0-9A-Za-z]{1,20})/$', activate,
name='activate'),
url(r'^login/$', user_login, name='user_login'),
url(r'^logout/$', user_logout, name='user_logout'),
url(r'^user_edit_profile/$', user_edit_profile, name='user_edit_profile'),
url(r'^change_password/$', user_change_password, name='change_password'),
]
| 43.411765 | 130 | 0.703252 | 107 | 738 | 4.579439 | 0.308411 | 0.057143 | 0.171429 | 0.036735 | 0.028571 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021244 | 0.107046 | 738 | 16 | 131 | 46.125 | 0.722307 | 0 | 0 | 0 | 0 | 0.083333 | 0.36635 | 0.183175 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.166667 | 0.166667 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
8343385a22dd30ea40482bf144f766b74f99b606 | 6,969 | py | Python | tutorials/rhythm/plot_SlidingWindowMatching.py | bcmartinb/neurodsp | 36d8506f3bd916f83b093a62843ffb77647a6e1e | [
"Apache-2.0"
] | 154 | 2019-01-30T04:10:48.000Z | 2022-03-30T12:55:00.000Z | tutorials/rhythm/plot_SlidingWindowMatching.py | bcmartinb/neurodsp | 36d8506f3bd916f83b093a62843ffb77647a6e1e | [
"Apache-2.0"
] | 159 | 2019-01-28T22:49:36.000Z | 2022-03-17T16:42:48.000Z | tutorials/rhythm/plot_SlidingWindowMatching.py | bcmartinb/neurodsp | 36d8506f3bd916f83b093a62843ffb77647a6e1e | [
"Apache-2.0"
] | 42 | 2019-05-31T21:06:44.000Z | 2022-03-25T23:17:57.000Z | """
Sliding Window Matching
=======================
Find recurring patterns in neural signals using Sliding Window Matching.
This tutorial primarily covers the :func:`~.sliding_window_matching` function.
"""
###################################################################################################
# Overview
# --------
#
# Non-periodic or non-sinusoidal properties can be difficult to assess in frequency domain
# methods. To try and address this, the sliding window matching (SWM) algorithm has been
# proposed for detecting and measuring recurring, but unknown, patterns in time series data.
# Patterns of interest may be transient events, and/or the waveform shape of neural oscillations.
#
# In this example, we will explore applying the SWM algorithm to some LFP data.
#
# The SWM approach tries to find recurring patterns (or motifs) in the data, using sliding
# windows. An iterative process samples windows randomly, and compares each to the average
# window. The goal is to find a selection of windows that look maximally like the average
# window, at which point the occurrences of the window have been detected, and the average
# window pattern can be examined.
#
# The sliding window matching algorithm is described in
# `Gips et al, 2017 <https://doi.org/10.1016/j.jneumeth.2016.11.001>`_
#
###################################################################################################
# sphinx_gallery_thumbnail_number = 2
import numpy as np
# Import the sliding window matching function
from neurodsp.rhythm import sliding_window_matching
# Import utilities for loading and plotting data
from neurodsp.utils.download import load_ndsp_data
from neurodsp.plts.rhythm import plot_swm_pattern
from neurodsp.plts.time_series import plot_time_series
from neurodsp.utils import set_random_seed, create_times
from neurodsp.utils.norm import normalize_sig
###################################################################################################
# Set random seed, for reproducibility
set_random_seed(0)
###################################################################################################
# Load neural signal
# ------------------
#
# First, we will load a segment of ECoG data, as an example time series.
#
###################################################################################################
# Download, if needed, and load example data files
sig = load_ndsp_data('sample_data_1.npy', folder='data')
sig = normalize_sig(sig, mean=0, variance=1)
# Set sampling rate, and create a times vector for plotting
fs = 1000
times = create_times(len(sig)/fs, fs)
###################################################################################################
#
# Next, we can visualize this data segment. As we can see, this segment of data has
# some prominent bursts of oscillations, in this case, in the beta frequency range.
#
###################################################################################################
# Plot example signal
plot_time_series(times, sig)
###################################################################################################
# Apply sliding window matching
# -----------------------------
#
# The beta oscillation in our data segment looks like it might have some non-sinusoidal
# properties. We can investigate this with sliding window matching.
#
# Sliding window matching can be applied with the
# :func:`~.sliding_window_matching` function.
#
###################################################################################################
# Data Preprocessing
# ~~~~~~~~~~~~~~~~~~
#
# Typically, the input signal does not have to be filtered into a band of interest to use SWM.
#
# If the goal is to characterize non-sinusoidal rhythms, you typically won't want to
# apply a filter that will smooth out the features of interest.
#
# However, if the goal is to characterize higher frequency activity, it can be useful to
# apply a highpass filter, so that the method does not converge on a lower frequency motif.
#
# In our case, the beta rhythm of interest is the most prominent, low frequency, feature of the
# data, so we won't apply a filter.
#
###################################################################################################
# Algorithm Settings
# ~~~~~~~~~~~~~~~~~~
#
# The SWM algorithm has some algorithm specific settings that need to be applied, including:
#
# - `win_len` : the length of the window, defined in seconds
# - `win_spacing` : the minimum distance between windows, also defined in seconds
#
# The length of the window influences the patterns that are extracted from the data.
# Typically, you want to set the window length to match the expected timescale of the
# patterns under study.
#
# For our purposes, we will define the window length to be about 1 cycle of a beta oscillation,
# which should help the algorithm to find the waveform shape of the neural oscillation.
#
###################################################################################################
# Define window length & minimum window spacing, both in seconds
win_len = .055
win_spacing = .055
###################################################################################################
# Apply the sliding window matching algorithm to the time series
windows, window_starts = sliding_window_matching(sig, fs, win_len, win_spacing, var_thresh=.5)
###################################################################################################
# Examine the Results
# ~~~~~~~~~~~~~~~~~~~
#
# What we got back from the SWM function are the extracted windows, and the list
# of indices in the data at which each window starts.
#
# In order to visualize the resulting pattern, we can use
# :func:`~.plot_swm_pattern`.
#
###################################################################################################
# Compute the average window
avg_window = np.mean(windows, 0)
# Plot the discovered pattern
plot_swm_pattern(avg_window)
###################################################################################################
#
# In the above average pattern, that looks to capture a beta rhythm, we can notice some
# waveform shape of the extracted rhythm.
#
###################################################################################################
# Concluding Notes
# ~~~~~~~~~~~~~~~~
#
# One thing to keep in mind is that the SWM algorithm includes a random element of sampling
# and comparing the windows - meaning it is not deterministic. Because of this, results
# can change with different random seeds.
#
# To explore this, go back and change the random seed, and see how the output changes.
#
# You can also set the number of iterations that the algorithm sweeps through. Increasing
# the number of iterations, and using longer data segments, can help improve the robustness
# of the algorithm results.
#
| 39.822857 | 99 | 0.578275 | 819 | 6,969 | 4.863248 | 0.346764 | 0.04243 | 0.068541 | 0.024102 | 0.057243 | 0.03063 | 0 | 0 | 0 | 0 | 0 | 0.0061 | 0.129574 | 6,969 | 174 | 100 | 40.051724 | 0.650511 | 0.632228 | 0 | 0 | 0 | 0 | 0.024194 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.388889 | 0 | 0.388889 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
8343d14fcff75c3593b87cced0b3013a8661f9e3 | 719 | py | Python | forge/auth/backends.py | django-forge/forge | 6223d2a4e7a570dfba87c3ae2e14927010fe7fd9 | [
"MIT"
] | 3 | 2022-03-30T22:14:35.000Z | 2022-03-31T22:04:42.000Z | forge/auth/backends.py | django-forge/forge | 6223d2a4e7a570dfba87c3ae2e14927010fe7fd9 | [
"MIT"
] | null | null | null | forge/auth/backends.py | django-forge/forge | 6223d2a4e7a570dfba87c3ae2e14927010fe7fd9 | [
"MIT"
] | null | null | null | from django.contrib.auth import get_user_model
from django.contrib.auth.backends import ModelBackend
UserModel = get_user_model()
class EmailModelBackend(ModelBackend):
def authenticate(self, request, username=None, password=None, **kwargs):
if username is None:
email = kwargs.get(UserModel.EMAIL_FIELD)
else:
email = username
email = UserModel._default_manager.normalize_email(email)
try:
user = UserModel.objects.get(email=email)
except UserModel.DoesNotExist:
return None
else:
if user.check_password(password) and self.user_can_authenticate(user):
return user
return None
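To have Django use this backend, it must be listed in the project settings; a sketch, assuming the module's dotted path is `forge.auth.backends` (matching this repo's layout — adjust if the file lives elsewhere):

```python
# settings.py (fragment)
AUTHENTICATION_BACKENDS = [
    "forge.auth.backends.EmailModelBackend",  # assumed dotted path
]
```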
| 28.76 | 82 | 0.66064 | 79 | 719 | 5.873418 | 0.468354 | 0.043103 | 0.073276 | 0.090517 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.267038 | 719 | 24 | 83 | 29.958333 | 0.880455 | 0 | 0 | 0.222222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0.111111 | 0.111111 | 0 | 0.388889 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
83499ec97a8ebaba9f0df370c50f48f1b192aa91 | 719 | py | Python | ved/migrations/0010_auto_20180303_1353.py | mjovanc/tidlundsved | da55a07d02f04bc636299fe4d236aa19188a359b | [
"MIT"
] | 1 | 2019-04-19T20:39:39.000Z | 2019-04-19T20:39:39.000Z | ved/migrations/0010_auto_20180303_1353.py | mjovanc/tidlundsved | da55a07d02f04bc636299fe4d236aa19188a359b | [
"MIT"
] | 3 | 2020-01-15T22:21:14.000Z | 2020-01-15T22:21:15.000Z | ved/migrations/0010_auto_20180303_1353.py | mjovanc/tidlundsved | da55a07d02f04bc636299fe4d236aa19188a359b | [
"MIT"
] | null | null | null | # Generated by Django 2.0.2 on 2018-03-03 13:53
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('ved', '0009_auto_20180302_1839'),
]
operations = [
migrations.AlterField(
model_name='order',
name='firewood_choice',
field=models.CharField(max_length=50, verbose_name='Val'),
),
migrations.AlterField(
model_name='order',
name='order_status',
field=models.CharField(choices=[('Ej påbörjad', 'Ej påbörjad'), ('Påbörjad', 'Påbörjad'), ('Levererad', 'Levererad')], default='Ej påbörjad', max_length=30, verbose_name='Status på order'),
),
]
| 29.958333 | 201 | 0.606398 | 77 | 719 | 5.519481 | 0.584416 | 0.063529 | 0.117647 | 0.136471 | 0.178824 | 0.178824 | 0 | 0 | 0 | 0 | 0 | 0.065177 | 0.253129 | 719 | 23 | 202 | 31.26087 | 0.726257 | 0.062587 | 0 | 0.352941 | 1 | 0 | 0.220238 | 0.034226 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.058824 | 0 | 0.235294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8354f3e967b4c8a5432e55702c43dd8c0b61efde | 415 | py | Python | OrderService/Order/migrations/0003_order_payment_details.py | surajkendhey/Kart | 458bee955d1569372fc8b3facb2602063a6ec6f5 | [
"Apache-2.0"
] | null | null | null | OrderService/Order/migrations/0003_order_payment_details.py | surajkendhey/Kart | 458bee955d1569372fc8b3facb2602063a6ec6f5 | [
"Apache-2.0"
] | null | null | null | OrderService/Order/migrations/0003_order_payment_details.py | surajkendhey/Kart | 458bee955d1569372fc8b3facb2602063a6ec6f5 | [
"Apache-2.0"
] | null | null | null | # Generated by Django 3.1.2 on 2020-10-18 09:41
from django.db import migrations
import jsonfield.fields
class Migration(migrations.Migration):
dependencies = [
('Order', '0002_auto_20201018_1503'),
]
operations = [
migrations.AddField(
model_name='order',
name='payment_details',
field=jsonfield.fields.JSONField(default=dict),
),
]
| 20.75 | 59 | 0.621687 | 45 | 415 | 5.622222 | 0.777778 | 0.118577 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.10231 | 0.26988 | 415 | 19 | 60 | 21.842105 | 0.732673 | 0.108434 | 0 | 0 | 1 | 0 | 0.130435 | 0.0625 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.153846 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
835f4e7f9614427e618dd0d65cdbcc8a97ccc269 | 157 | py | Python | testtarget.py | epopisces/template_api_wrapper | e581eb31f6123ca2d93803453f2a1ab25c3c1981 | [
"MIT"
] | null | null | null | testtarget.py | epopisces/template_api_wrapper | e581eb31f6123ca2d93803453f2a1ab25c3c1981 | [
"MIT"
] | null | null | null | testtarget.py | epopisces/template_api_wrapper | e581eb31f6123ca2d93803453f2a1ab25c3c1981 | [
"MIT"
] | null | null | null | class ToolNameAPI:
thing = 'thing'
toolname_tool = 'example'
tln = ToolNameAPI()
the_repo = "reponame"
author = "authorname"
profile = "authorprofile" | 15.7 | 25 | 0.713376 | 16 | 157 | 6.875 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.165605 | 157 | 10 | 26 | 15.7 | 0.839695 | 0 | 0 | 0 | 0 | 0 | 0.272152 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0.285714 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8362ff8cbd0cfe323812bd28b2652a04191c1026 | 462 | py | Python | getColorFromNumber.py | clean-code-craft-tcq-1/modular-python-preetikadyan | 0775e7e62edbbb0d7c3506b2bd072562a44d7f8b | [
"MIT"
] | null | null | null | getColorFromNumber.py | clean-code-craft-tcq-1/modular-python-preetikadyan | 0775e7e62edbbb0d7c3506b2bd072562a44d7f8b | [
"MIT"
] | null | null | null | getColorFromNumber.py | clean-code-craft-tcq-1/modular-python-preetikadyan | 0775e7e62edbbb0d7c3506b2bd072562a44d7f8b | [
"MIT"
] | null | null | null | from main import *
def get_color_from_pair_number(pair_number):
zero_based_pair_number = pair_number - 1
major_index = zero_based_pair_number // len(MINOR_COLORS)
if major_index >= len(MAJOR_COLORS):
raise Exception('Major index out of range')
minor_index = zero_based_pair_number % len(MINOR_COLORS)
if minor_index >= len(MINOR_COLORS):
raise Exception('Minor index out of range')
return MAJOR_COLORS[major_index], MINOR_COLORS[minor_index]
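A standalone sanity check of the index arithmetic above; the two color lists are assumed stand-ins for whatever `main.py` actually defines (the standard 25-pair scheme uses 5 major and 5 minor colors):

```python
MAJOR_COLORS = ["White", "Red", "Black", "Yellow", "Violet"]   # assumed
MINOR_COLORS = ["Blue", "Orange", "Green", "Brown", "Slate"]   # assumed

def get_color_from_pair_number(pair_number):
    # Same zero-based major/minor arithmetic as above
    zero_based_pair_number = pair_number - 1
    major_index = zero_based_pair_number // len(MINOR_COLORS)
    minor_index = zero_based_pair_number % len(MINOR_COLORS)
    if major_index >= len(MAJOR_COLORS):
        raise Exception('Major index out of range')
    return MAJOR_COLORS[major_index], MINOR_COLORS[minor_index]

print(get_color_from_pair_number(4))   # pair 4 -> first major, fourth minor
```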
| 42 | 61 | 0.779221 | 71 | 462 | 4.690141 | 0.323944 | 0.18018 | 0.117117 | 0.171171 | 0.24024 | 0.24024 | 0.24024 | 0.24024 | 0.24024 | 0 | 0 | 0.002513 | 0.138528 | 462 | 10 | 62 | 46.2 | 0.834171 | 0 | 0 | 0 | 0 | 0 | 0.103896 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.1 | 0 | 0.3 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
836a1f95f9bc7256c74547e4b46165f7f107b034 | 286 | py | Python | test_service.py | jgawrilo/qcr_ci | bd4c192444e03a551e3c5f4f0a275a4c029294de | [
"Apache-2.0"
] | 1 | 2020-03-05T13:27:39.000Z | 2020-03-05T13:27:39.000Z | test_service.py | jgawrilo/qcr_ci | bd4c192444e03a551e3c5f4f0a275a4c029294de | [
"Apache-2.0"
] | null | null | null | test_service.py | jgawrilo/qcr_ci | bd4c192444e03a551e3c5f4f0a275a4c029294de | [
"Apache-2.0"
] | null | null | null | import requests
import json
headers = {'Content-Type': 'application/json'}
data = json.load(open("./test_input2.json"))
url = "http://localhost:5001/api/impact"
response = requests.post(url,data=json.dumps({"data":data}),headers=headers)
print(json.dumps(response.json(), indent=2))
| 22 | 76 | 0.727273 | 40 | 286 | 5.175 | 0.6 | 0.077295 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022901 | 0.083916 | 286 | 12 | 77 | 23.833333 | 0.767176 | 0 | 0 | 0 | 0 | 0 | 0.286713 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.285714 | null | null | 0.142857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
836a92d066a5c850634a4179920f5c67049059c7 | 16,969 | py | Python | google/appengine/ext/datastore_admin/backup_pb2.py | vladushakov987/appengine_python3 | 0dd481c73e2537a50ee10f1b79cd65938087e555 | [
"Apache-2.0"
] | null | null | null | google/appengine/ext/datastore_admin/backup_pb2.py | vladushakov987/appengine_python3 | 0dd481c73e2537a50ee10f1b79cd65938087e555 | [
"Apache-2.0"
] | null | null | null | google/appengine/ext/datastore_admin/backup_pb2.py | vladushakov987/appengine_python3 | 0dd481c73e2537a50ee10f1b79cd65938087e555 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
#
# Copyright 2007 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
import google
from google.net.proto2.python.public import descriptor as _descriptor
from google.net.proto2.python.public import message as _message
from google.net.proto2.python.public import reflection as _reflection
from google.net.proto2.python.public import symbol_database as _symbol_database
from google.net.proto2.proto import descriptor_pb2
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor.FileDescriptor(
name='apphosting/ext/datastore_admin/backup.proto',
package='apphosting.ext.datastore_admin',
serialized_pb=_b('\n+apphosting/ext/datastore_admin/backup.proto\x12\x1e\x61pphosting.ext.datastore_admin\"\x8c\x01\n\x06\x42\x61\x63kup\x12?\n\x0b\x62\x61\x63kup_info\x18\x01 \x01(\x0b\x32*.apphosting.ext.datastore_admin.BackupInfo\x12\x41\n\tkind_info\x18\x02 \x03(\x0b\x32..apphosting.ext.datastore_admin.KindBackupInfo\"Q\n\nBackupInfo\x12\x13\n\x0b\x62\x61\x63kup_name\x18\x01 \x01(\t\x12\x17\n\x0fstart_timestamp\x18\x02 \x01(\x03\x12\x15\n\rend_timestamp\x18\x03 \x01(\x03\"\x8c\x01\n\x0eKindBackupInfo\x12\x0c\n\x04kind\x18\x01 \x02(\t\x12\x0c\n\x04\x66ile\x18\x02 \x03(\t\x12\x43\n\rentity_schema\x18\x03 \x01(\x0b\x32,.apphosting.ext.datastore_admin.EntitySchema\x12\x19\n\nis_partial\x18\x04 \x01(\x08:\x05\x66\x61lse\"\x90\x05\n\x0c\x45ntitySchema\x12\x0c\n\x04kind\x18\x01 \x01(\t\x12\x41\n\x05\x66ield\x18\x02 \x03(\x0b\x32\x32.apphosting.ext.datastore_admin.EntitySchema.Field\x1a\xb2\x01\n\x04Type\x12\x0f\n\x07is_list\x18\x01 \x01(\x08\x12R\n\x0eprimitive_type\x18\x02 \x03(\x0e\x32:.apphosting.ext.datastore_admin.EntitySchema.PrimitiveType\x12\x45\n\x0f\x65mbedded_schema\x18\x03 \x03(\x0b\x32,.apphosting.ext.datastore_admin.EntitySchema\x1aj\n\x05\x46ield\x12\x0c\n\x04name\x18\x01 \x02(\t\x12?\n\x04type\x18\x02 \x03(\x0b\x32\x31.apphosting.ext.datastore_admin.EntitySchema.Type\x12\x12\n\nfield_name\x18\x03 \x01(\t\"\x8d\x02\n\rPrimitiveType\x12\t\n\x05\x46LOAT\x10\x00\x12\x0b\n\x07INTEGER\x10\x01\x12\x0b\n\x07\x42OOLEAN\x10\x02\x12\n\n\x06STRING\x10\x03\x12\r\n\tDATE_TIME\x10\x04\x12\n\n\x06RATING\x10\x05\x12\x08\n\x04LINK\x10\x06\x12\x0c\n\x08\x43\x41TEGORY\x10\x07\x12\x10\n\x0cPHONE_NUMBER\x10\x08\x12\x12\n\x0ePOSTAL_ADDRESS\x10\t\x12\t\n\x05\x45MAIL\x10\n\x12\r\n\tIM_HANDLE\x10\x0b\x12\x0c\n\x08\x42LOB_KEY\x10\x0c\x12\x08\n\x04TEXT\x10\r\x12\x08\n\x04\x42LOB\x10\x0e\x12\x0e\n\nSHORT_BLOB\x10\x0f\x12\x08\n\x04USER\x10\x10\x12\r\n\tGEO_POINT\x10\x11\x12\r\n\tREFERENCE\x10\x12\x42\x14\x10\x02 \x02(\x02\x42\x0c\x42\x61\x63kupProtos')
)
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
_ENTITYSCHEMA_PRIMITIVETYPE = _descriptor.EnumDescriptor(
name='PrimitiveType',
full_name='apphosting.ext.datastore_admin.EntitySchema.PrimitiveType',
filename=None,
file=DESCRIPTOR,
values=[
_descriptor.EnumValueDescriptor(
name='FLOAT', index=0, number=0,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='INTEGER', index=1, number=1,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='BOOLEAN', index=2, number=2,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='STRING', index=3, number=3,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='DATE_TIME', index=4, number=4,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='RATING', index=5, number=5,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='LINK', index=6, number=6,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='CATEGORY', index=7, number=7,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='PHONE_NUMBER', index=8, number=8,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='POSTAL_ADDRESS', index=9, number=9,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='EMAIL', index=10, number=10,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='IM_HANDLE', index=11, number=11,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='BLOB_KEY', index=12, number=12,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='TEXT', index=13, number=13,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='BLOB', index=14, number=14,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='SHORT_BLOB', index=15, number=15,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='USER', index=16, number=16,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='GEO_POINT', index=17, number=17,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='REFERENCE', index=18, number=18,
options=None,
type=None),
],
containing_type=None,
options=None,
serialized_start=836,
serialized_end=1105,
)
_sym_db.RegisterEnumDescriptor(_ENTITYSCHEMA_PRIMITIVETYPE)
_BACKUP = _descriptor.Descriptor(
name='Backup',
full_name='apphosting.ext.datastore_admin.Backup',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='backup_info', full_name='apphosting.ext.datastore_admin.Backup.backup_info', index=0,
number=1, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='kind_info', full_name='apphosting.ext.datastore_admin.Backup.kind_info', index=1,
number=2, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
extension_ranges=[],
oneofs=[
],
serialized_start=80,
serialized_end=220,
)
_BACKUPINFO = _descriptor.Descriptor(
name='BackupInfo',
full_name='apphosting.ext.datastore_admin.BackupInfo',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='backup_name', full_name='apphosting.ext.datastore_admin.BackupInfo.backup_name', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='start_timestamp', full_name='apphosting.ext.datastore_admin.BackupInfo.start_timestamp', index=1,
number=2, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='end_timestamp', full_name='apphosting.ext.datastore_admin.BackupInfo.end_timestamp', index=2,
number=3, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
extension_ranges=[],
oneofs=[
],
serialized_start=222,
serialized_end=303,
)
_KINDBACKUPINFO = _descriptor.Descriptor(
name='KindBackupInfo',
full_name='apphosting.ext.datastore_admin.KindBackupInfo',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='kind', full_name='apphosting.ext.datastore_admin.KindBackupInfo.kind', index=0,
number=1, type=9, cpp_type=9, label=2,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='file', full_name='apphosting.ext.datastore_admin.KindBackupInfo.file', index=1,
number=2, type=9, cpp_type=9, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='entity_schema', full_name='apphosting.ext.datastore_admin.KindBackupInfo.entity_schema', index=2,
number=3, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='is_partial', full_name='apphosting.ext.datastore_admin.KindBackupInfo.is_partial', index=3,
number=4, type=8, cpp_type=7, label=1,
has_default_value=True, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
extension_ranges=[],
oneofs=[
],
serialized_start=306,
serialized_end=446,
)
_ENTITYSCHEMA_TYPE = _descriptor.Descriptor(
name='Type',
full_name='apphosting.ext.datastore_admin.EntitySchema.Type',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='is_list', full_name='apphosting.ext.datastore_admin.EntitySchema.Type.is_list', index=0,
number=1, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='primitive_type', full_name='apphosting.ext.datastore_admin.EntitySchema.Type.primitive_type', index=1,
number=2, type=14, cpp_type=8, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='embedded_schema', full_name='apphosting.ext.datastore_admin.EntitySchema.Type.embedded_schema', index=2,
number=3, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
extension_ranges=[],
oneofs=[
],
serialized_start=547,
serialized_end=725,
)
_ENTITYSCHEMA_FIELD = _descriptor.Descriptor(
name='Field',
full_name='apphosting.ext.datastore_admin.EntitySchema.Field',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='name', full_name='apphosting.ext.datastore_admin.EntitySchema.Field.name', index=0,
number=1, type=9, cpp_type=9, label=2,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='type', full_name='apphosting.ext.datastore_admin.EntitySchema.Field.type', index=1,
number=2, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='field_name', full_name='apphosting.ext.datastore_admin.EntitySchema.Field.field_name', index=2,
number=3, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
extension_ranges=[],
oneofs=[
],
serialized_start=727,
serialized_end=833,
)
_ENTITYSCHEMA = _descriptor.Descriptor(
name='EntitySchema',
full_name='apphosting.ext.datastore_admin.EntitySchema',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='kind', full_name='apphosting.ext.datastore_admin.EntitySchema.kind', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='field', full_name='apphosting.ext.datastore_admin.EntitySchema.field', index=1,
number=2, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[_ENTITYSCHEMA_TYPE, _ENTITYSCHEMA_FIELD, ],
enum_types=[
_ENTITYSCHEMA_PRIMITIVETYPE,
],
options=None,
is_extendable=False,
extension_ranges=[],
oneofs=[
],
serialized_start=449,
serialized_end=1105,
)
_BACKUP.fields_by_name['backup_info'].message_type = _BACKUPINFO
_BACKUP.fields_by_name['kind_info'].message_type = _KINDBACKUPINFO
_KINDBACKUPINFO.fields_by_name['entity_schema'].message_type = _ENTITYSCHEMA
_ENTITYSCHEMA_TYPE.fields_by_name['primitive_type'].enum_type = _ENTITYSCHEMA_PRIMITIVETYPE
_ENTITYSCHEMA_TYPE.fields_by_name['embedded_schema'].message_type = _ENTITYSCHEMA
_ENTITYSCHEMA_TYPE.containing_type = _ENTITYSCHEMA
_ENTITYSCHEMA_FIELD.fields_by_name['type'].message_type = _ENTITYSCHEMA_TYPE
_ENTITYSCHEMA_FIELD.containing_type = _ENTITYSCHEMA
_ENTITYSCHEMA.fields_by_name['field'].message_type = _ENTITYSCHEMA_FIELD
_ENTITYSCHEMA_PRIMITIVETYPE.containing_type = _ENTITYSCHEMA
DESCRIPTOR.message_types_by_name['Backup'] = _BACKUP
DESCRIPTOR.message_types_by_name['BackupInfo'] = _BACKUPINFO
DESCRIPTOR.message_types_by_name['KindBackupInfo'] = _KINDBACKUPINFO
DESCRIPTOR.message_types_by_name['EntitySchema'] = _ENTITYSCHEMA
Backup = _reflection.GeneratedProtocolMessageType('Backup', (_message.Message,), dict(
DESCRIPTOR = _BACKUP,
__module__ = 'google.appengine.ext.datastore_admin.backup_pb2'
))
_sym_db.RegisterMessage(Backup)
BackupInfo = _reflection.GeneratedProtocolMessageType('BackupInfo', (_message.Message,), dict(
DESCRIPTOR = _BACKUPINFO,
__module__ = 'google.appengine.ext.datastore_admin.backup_pb2'
))
_sym_db.RegisterMessage(BackupInfo)
KindBackupInfo = _reflection.GeneratedProtocolMessageType('KindBackupInfo', (_message.Message,), dict(
DESCRIPTOR = _KINDBACKUPINFO,
__module__ = 'google.appengine.ext.datastore_admin.backup_pb2'
))
_sym_db.RegisterMessage(KindBackupInfo)
EntitySchema = _reflection.GeneratedProtocolMessageType('EntitySchema', (_message.Message,), dict(
Type = _reflection.GeneratedProtocolMessageType('Type', (_message.Message,), dict(
DESCRIPTOR = _ENTITYSCHEMA_TYPE,
__module__ = 'google.appengine.ext.datastore_admin.backup_pb2'
))
,
Field = _reflection.GeneratedProtocolMessageType('Field', (_message.Message,), dict(
DESCRIPTOR = _ENTITYSCHEMA_FIELD,
__module__ = 'google.appengine.ext.datastore_admin.backup_pb2'
))
,
DESCRIPTOR = _ENTITYSCHEMA,
__module__ = 'google.appengine.ext.datastore_admin.backup_pb2'
))
_sym_db.RegisterMessage(EntitySchema)
_sym_db.RegisterMessage(EntitySchema.Type)
_sym_db.RegisterMessage(EntitySchema.Field)
DESCRIPTOR.has_options = True
DESCRIPTOR._options = _descriptor._ParseOptions(descriptor_pb2.FileOptions(), _b('\020\002 \002(\002B\014BackupProtos'))
#! /usr/bin/env python
#
# Support module generated by PAGE version 4.10
# In conjunction with Tcl version 8.6
# Jan 12, 2018 04:09:34 PM
import turtle
from turtle import TurtleScreen, RawTurtle, TK
from tkinter.filedialog import askopenfilename
import tkinter as tk
import os.path
import datetime
import csv
import sys
from PSBChart import ManageTrades
try:
from Tkinter import *
except ImportError:
from tkinter import *
try:
import ttk
py3 = 0
except ImportError:
import tkinter.ttk as ttk
py3 = 1
d = list()
dt = list()
o = list()
h = list()
l = list()
c = list()
v = list()
oi = list()
tradeDate = list()
tradeVal1 = list()
tradeType = list()
tradeSize = list()
tradeNtryOrXit = list()
tradePrice = list()
highestHigh = 0
lowestLow = 99999999
root = tk.Tk()
#root.withdraw()
##s = tk.ScrollBar(root)
T = tk.Text(root,height=10,width=50)
##s.pack(side=tk.RIGHT, fill = tk.Y)
T.pack(side=tk.RIGHT, fill = tk.Y)
##s.config(command=T.yview)
##T.config(yscrollcommand.set)
def manageTrades(trades,indicatorList):
if trades.load:
cnt = 0
file = askopenfilename(filetypes=(('CSV files', '*.csv'),
('TXT files', '*.txt'),('POR files', '*.por')),
title='Select Markets or Ports. To Test- CSV format only!')
with open(file) as f:
f_csv = csv.reader(f)
numDecs = 0
for row in f_csv:
numCols = len(row)
cnt += 1
tradeDate.append(int(row[0]))
# dt.append(datetime.datetime.strptime(row[0],'%Y%m%d'))
tradeVal1.append(int(row[1]))
tradeType.append(row[2])
tradeSize.append(int(row[3]))
tradePrice.append(float(row[4]))
print("Trades ",tradeDate[-1]," ",tradePrice[-1])
tradeCnt = cnt
trades.setLoadDraw(False,True)
w.Button5.configure(state = "disable")
loadAndDraw(False,True,indicatorList,trades)
def loadAndDraw(load,draw,indicatorList,trades):
def get_mouse_click_coor(x, y):
print(x, y)
barNumber = round(x/10)
barNumber = max(1,barNumber)
print("Bar Number: ",barNumber," ",d[startPt+barNumber-1]," ",o[startPt+barNumber-1]," ",highestHigh)
# tkMessageBox("Information",str(barNumber)
# # trtl.write('Vivax Solutions', font=("Arial", 20, "bold")) # chosing the font
## trtl.goto(10,highestHigh-.05*(highestHigh - lowestLow))
## trtl.pendown()
indexVal =startPt+barNumber-1
outPutStr = str(d[indexVal]) + " " +str(o[indexVal])+ " " +str(h[indexVal])+ " " +str(l[indexVal])+ " " + str(c[indexVal]) # chosing the font
root.focus_set()
T.focus_set( )
T.insert(tk.END,outPutStr+"\n")
## trtl.goto(20,highestHigh-60)
## trtl.write(str(o[50-(50-barNumber)]), font=("Arial", 8, "bold")) # chosing the font
## trtl.goto(20,highestHigh-80)
## trtl.write(str(h[50-(50-barNumber)]), font=("Arial", 8, "bold")) # chosing the font
## trtl.goto(20,highestHigh-100)
## trtl.write(str(l[50-(50-barNumber)]), font=("Arial", 8, "bold")) # chosing the font
## trtl.goto(20,highestHigh-120)
## trtl.write(str(c[50-(50-barNumber)]), font=("Arial", 8, "bold")) # chosing the font
##
## #root.withdraw()
if load == True:
cnt = 0
file = askopenfilename(filetypes=(('CSV files', '*.csv'),
('TXT files', '*.txt'),('POR files', '*.por')),
title='Select Markets or Ports. To Test- CSV format only!')
with open(file) as f:
f_csv = csv.reader(f)
numDecs = 0
for row in f_csv:
numCols = len(row)
cnt += 1
d.append(int(row[0]))
dt.append(datetime.datetime.strptime(row[0],'%Y%m%d'))
o.append(float(row[1]))
h.append(float(row[2]))
l.append(float(row[3]))
c.append(float(row[4]))
v.append(float(row[5]))
oi.append(float(row[6]))
oString= str(o[-1])
if '.' in oString:
decLoc = oString.index('.')
numDecs = max(numDecs,len(oString) - decLoc - 1)
xDate = list()
yVal = list()
zVal = list()
w.Button5.configure(state = "normal")
w.Entry1.insert(0,str(d[-1]))
if draw == True:
startDrawDateStr = w.Entry1.get()
startDrawDate = int(startDrawDateStr)
cnt = -1
for x in range(0,len(d)):
cnt+=1
if startDrawDate >= d[x]: startPt = x
numBarsPlot = 60
if startPt + numBarsPlot > len(d): startPt = len(d) - (numBarsPlot + 1)
print(startPt," ",len(d)," ",numBarsPlot);
indicCnt = 0
screen = TurtleScreen(w.Canvas1)
trtl = RawTurtle(screen)
screen.tracer(False)
screen.bgcolor('white')
clr=['red','green','blue','yellow','purple']
trtl.pensize(6)
trtl.penup()
trtl.color("black")
highestHigh = 0
lowestLow = 99999999
# scaleMult = 10**numDecs
scaleMult = 1
for days in range(startPt,startPt+numBarsPlot):
if h[days]*scaleMult > highestHigh: highestHigh = h[days]*scaleMult
if l[days]*scaleMult < lowestLow: lowestLow = l[days]*scaleMult
hhllDiffScale= (highestHigh - lowestLow) /1.65
hhllDiff = highestHigh - lowestLow
botOfChart = lowestLow
screen.setworldcoordinates(-10,highestHigh-hhllDiffScale,673,highestHigh)
print(highestHigh," ",lowestLow)
m=0
trtl.setheading(0)
trtl.penup()
for i in range(startPt,startPt+numBarsPlot+1):
m=m+1
trtl.goto(m*10,h[i]*scaleMult)
trtl.pendown()
trtl.goto(m*10,l[i]*scaleMult)
trtl.penup()
trtl.goto(m*10,c[i]*scaleMult)
trtl.pendown()
trtl.goto(m*10+5,c[i]*scaleMult)
trtl.penup()
trtl.goto(m*10,o[i]*scaleMult)
trtl.pendown()
trtl.goto(m*10-5,o[i]*scaleMult)
trtl.penup()
trtl.goto(10,highestHigh)
print("Indicator List: ",indicatorList)
if len(indicatorList)!=0:
movAvgParams = list([])
if "movAvg" in indicatorList:
movAvgVal = 0
movAvgParamIndexVal = indicatorList.index("movAvg")
movAvgParams.append(indicatorList[movAvgParamIndexVal + 1])
movAvgParams.append(indicatorList[movAvgParamIndexVal + 2])
movAvgParams.append(indicatorList[movAvgParamIndexVal + 3])
for j in range(0,3):
n = 0
trtl.penup()
if j == 0 : trtl.color("red")
if j == 1 : trtl.color("green")
if j == 2 : trtl.color("blue")
for i in range(startPt,startPt+numBarsPlot):
n = n + 1
movAvgVal = 0
for k in range(i-movAvgParams[j],i):
movAvgVal = movAvgVal + c[k] * scaleMult
if movAvgParams[j] !=0 :
movAvgVal = movAvgVal/movAvgParams[j]
if i == startPt : trtl.goto(n*10,movAvgVal)
trtl.pendown()
trtl.goto(n*10,movAvgVal)
trtl.penup()
# print("PlotTrades : ",plotTrades)
if trades.draw:
debugTradeDate = tradeDate[0]
debugDate = d[startPt]
n = 0
while debugTradeDate <= debugDate:
n +=1
debugTradeDate = tradeDate[n]
m = 0
for i in range(startPt,startPt+numBarsPlot):
m = m + 1
debugDate = d[i]
if debugDate == debugTradeDate:
trtl.penup()
tradeValue = tradePrice[n]
if tradeType[n] == "buy":
trtl.color("Green")
trtl.goto(m*10-5,tradeValue - hhllDiff *.03)
trtl.pensize(3)
trtl.pendown()
trtl.goto(m*10,tradeValue)
trtl.goto(m*10+5,tradeValue - hhllDiff *.03)
trtl.penup()
if tradeType[n] == "sell":
trtl.color("Red")
trtl.goto(m*10-5,tradeValue + hhllDiff *.03)
trtl.pensize(3)
trtl.pendown()
trtl.goto(m*10,tradeValue)
trtl.goto(m*10+5,tradeValue + hhllDiff *.03)
trtl.penup()
if tradeType[n] == "longLiq":
trtl.color("Blue")
trtl.penup()
trtl.goto(m*10-5, tradeValue)
trtl.pensize(3)
trtl.pendown()
trtl.goto(m*10+5, tradeValue)
trtl.penup()
trtl.pensize(1)
print("Found a trade: ",tradeValue," ",debugTradeDate," m= ",m," ",tradeValue-hhllDiff*.05)
n+=1
if n < len(tradeDate): debugTradeDate = tradeDate[n]
trtl.color("black")
trtl.goto(-10,botOfChart)
trtl.pendown()
trtl.goto(673,botOfChart)
trtl.penup()
trtl.goto(-10,botOfChart)
m = 0
for i in range(startPt,startPt+numBarsPlot):
if i % 10 == 0 :
m = m + 1
trtl.pendown()
trtl.write(str(d[i]), font=("Arial", 8, "bold")) # chosing the font
trtl.penup()
trtl.goto(m*100,botOfChart)
trtl.penup()
trtl.goto(628,highestHigh)
trtl.pendown()
trtl.goto(628,botOfChart)
trtl.penup()
m = 0
vertIncrement = hhllDiff/10
for i in range(0,11):
trtl.goto(630,highestHigh - m*vertIncrement)
trtl.pendown()
trtl.write(str(highestHigh - m * vertIncrement),font=("Arial", 8, "bold"))
trtl.penup()
m +=1
# turtle.done()
screen.onscreenclick(get_mouse_click_coor)
## turtle.mainloop()
def init(top, gui, *args, **kwargs):
global w, top_level, root
w = gui
top_level = top
root = top
def destroy_window():
# Function which closes the window.
global top_level
top_level.destroy()
top_level = None
if __name__ == '__main__':
import PSBChart
PSBChart.vp_start_gui()
#!/usr/bin/env python
from distutils.core import setup
setup(name='libpulseaudio',
version='1.1',
description='simple libpulseaudio bindings',
author='Valodim',
author_email='valodim@mugenguild.com',
license='LGPL',
url='http://github.com/valodim/python-pulseaudio',
packages=['pulseaudio'],
provides=['libpulseaudio'],
download_url='http://datatomb.de/~valodim/libpulseaudio-1.1.tar.gz'
)
def find_space(board):
for i in range(0,9):
for j in range(0,9):
if board[i][j]==0:
return (i,j)
return None
def check(board,num,r,c):
    # row check: num must not already appear elsewhere in row r
    for i in range(0,9):
        if board[r][i]==num and c!=i:
            return False
    # column check: num must not already appear elsewhere in column c
    for i in range(0,9):
        if board[i][c]==num and r!=i:
            return False
    # box check: num must not appear elsewhere in the 3x3 box containing (r,c);
    # box cells sharing row r or column c are already covered by the scans above
    x=r//3
    y=c//3
    for i in range(x*3,x*3+3):
        for j in range(y*3,y*3+3):
            if board[i][j]==num and r!=i and c!=j:
                return False
    return True
def enter_datas(board):
    for i in range(1,10):
        print("Enter the data for row",i)
        # read nine space-separated digits for this row
        x=[int(n) for n in input().split()]
        board.append(x)
def show(board):
for i in range(0,9):
for j in range(0,9):
if j==2 or j==5:
print(board[i][j]," | ",end="")
else:
print(board[i][j],end=" ")
if i==2 or i==5:
print("\n-----------------------\n")
else:
print("\n")
def solve(board):
x=find_space(board)
if not x:
return True
else:
r,c=x
for i in range(1,10):
if check(board,i,r,c):
board[r][c]=i
if solve(board):
return True
board[r][c]=0
return False
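# Optional self-contained sanity sketch (illustrative only, not part of the
# original script): it mirrors the row-scan rule used by check() above to show
# that a digit already present elsewhere in the same row must be rejected.
def _row_ok(board, num, r, c):
    # num may appear in row r only at column c itself
    return all(board[r][i] != num or i == c for i in range(9))

_demo = [[0]*9 for _ in range(9)]
_demo[0][0] = 5
print(_row_ok(_demo, 5, 0, 3))  # -> False: 5 already sits in row 0
print(_row_ok(_demo, 6, 0, 3))  # -> True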
board=[]
enter_datas(board)
show(board)
solve(board)
print("\n\n")
show(board)
'''
Enter the data for row 1
7 8 0 4 0 0 1 2 0
Enter the data for row 2
6 0 0 0 7 5 0 0 9
Enter the data for row 3
0 0 0 6 0 1 0 7 8
Enter the data for row 4
0 0 7 0 4 0 2 6 0
Enter the data for row 5
0 0 1 0 5 0 9 3 0
Enter the data for row 6
9 0 4 0 6 0 0 0 5
Enter the data for row 7
0 7 0 3 0 0 0 1 2
Enter the data for row 8
1 2 0 0 0 7 4 0 0
Enter the data for row 9
0 4 9 2 0 6 0 0 7
'''
from adbc.zql.validator import Validator
class Builder(Validator):
INDENT = 4
IDENTIFIER_SPLIT_CHARACTER = '.'
WHITESPACE_CHARACTER = ' '
WILDCARD_CHARACTER = '*'
QUOTE_CHARACTERS = {'"', "'", '`'}
RAW_QUOTE_CHARACTER = '`'
COMMANDS = {
'select',
'insert',
'update',
'delete',
'truncate',
'create',
'alter',
'drop',
'show',
'explain',
'set'
}
OPERATOR_REWRITES = {}
OPERATORS = {
'not': 1,
'!!': 1,
'is': 2,
'is null': {
'arguments': 1,
'binds': 'right'
},
'is not null': {
'arguments': 1,
'binds': 'right'
},
'!': {
'arguments': 1,
'binds': 'right'
},
'@': 1,
'|/': 1,
'=': 2,
'+': 2,
'*': 2,
'-': 2,
'/': 2,
'%': 2,
'^': 2,
'#': 2,
'~': 1,
'>>': 2,
'&': 2,
'<<': 2,
'|': 2,
'||': 2,
'<': 2,
'<=': 2,
'!=': 2,
'<>': 2,
'like': 2,
'ilike': 2,
'~~': 2,
'!~~': 2,
'>': 2,
'>=': 2,
'and': 2,
'or': 2,
}
# TODO: handle non-functional clause expressions
# like CASE, BETWEEN, etc
CLAUSES = {
'case',
'between'
}
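# A minimal sketch (not part of adbc, for illustration only) of how the
# OPERATORS table above can be consumed: entries are either a plain int arity
# or a dict carrying the arity under the 'arguments' key, so lookups should
# handle both shapes.
def _operator_arity(spec):
    # hypothetical helper: int entries encode the arity directly;
    # dict entries carry it under 'arguments'
    return spec if isinstance(spec, int) else spec['arguments']

print(_operator_arity(2))                                   # -> 2
print(_operator_arity({'arguments': 1, 'binds': 'right'}))  # -> 1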
from setuptools import setup
setup(name='bme280',
version='1.0.0',
packages=['bme280'],
install_requires=['smbus2'],
python_requires='>=2.7',
url='https://dev.mycrobase.de/gitea/cn/python-bme280',
author='Christian Nicolai',
      description='A Python library for accessing the BME280 combined humidity and pressure sensor from Bosch.',
long_description=open('README.md').read())
# A simple list
myList = [10,20,4,5,6,2,9,10,2,3,34,14]
#print the whole list
print("The List is {}".format(myList))
# printing elements of the list one by one
print("printing elements of the list one by one")
for elements in myList:
print(elements)
print("")
#printing elements that are greater than 10 only
print("printing elements that are greater than 10 only")
for elements in myList:
if(elements>10):
print(elements)
#collecting elements that are less than 10 into a new list by appending them
newList = []
for elements in myList:
if(elements <10):
newList.append(elements)
print("")
print("Print the new List \n{}".format(newList))
#print the above list part using a single line
print(" The list is {}".format([item for item in myList if item < 10]))
# here [item {the output expression} for item in myList {the iteration} if item < 10 {the condition}]
#Ask the user for a number and print the elements of the list less than that number
print("Input a number : ")
num = int(input())
print(" The elements of the list less than {} are {}".format(num,[item for item in myList if item < num]))
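# A short self-contained sketch (sample values are illustrative) showing that
# the loop-and-append pattern used above and a list comprehension build the
# same list.
sample = [10, 20, 4, 5, 6, 2, 9]
looped = []
for item in sample:
    if item < 10:
        looped.append(item)
comprehended = [item for item in sample if item < 10]
print(looped == comprehended)  # -> True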
#!/usr/bin/python3
from pmapi.config import Config, get_logger
import os
import logging
import requests
import connexion
from flask import Flask, request
logger = get_logger()
# if not Config.TOKEN:
# data = {
# "hostname": Config.HOSTNAME,
# "ip": Config.IP,
# "state": Config.STATE,
# "url": Config.URL,
# "service_type": Config.SERVICE_TYPE,
# "roles": "'service', 'primemirror'",
# }
#     logging.info("Registering Service: {}".format(data))
# r = requests.post("{}/register/service".format(Config.DEPLOYMENT_API_URI), json=data, verify=False)
# resp = r.json()
# if "TOKEN" in resp:
# update_env("TOKEN", resp["TOKEN"])
flask_app = connexion.FlaskApp(__name__)
flask_app.add_api("openapi.yaml", validate_responses=True, strict_validation=True)
app = flask_app.app
app.config.from_object(Config)
# Generated by Django 3.0.6 on 2020-05-14 18:13
from django.db import migrations, models
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Presente',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('titulo', models.CharField(max_length=100)),
('slug', models.SlugField(blank=True, max_length=100, unique=True)),
('descricao', models.TextField(blank=True, null=True)),
('valor', models.FloatField(blank=True, null=True)),
('imagem', models.ImageField(blank=True, null=True, upload_to='presentes/imagens')),
('thumbnail', models.ImageField(blank=True, null=True, upload_to='presentes/thumbnail')),
],
options={
'ordering': ('titulo',),
},
),
]
import os
import sys

sys.path.append('lib')

from flask import Flask, send_from_directory

import cirrocumulus
from cirrocumulus.cloud_firestore_native import CloudFireStoreNative
from cirrocumulus.api import blueprint
from cirrocumulus.envir import CIRRO_AUTH_CLIENT_ID, CIRRO_AUTH, CIRRO_DATABASE, CIRRO_DATASET_PROVIDERS
from cirrocumulus.google_auth import GoogleAuth
from cirrocumulus.no_auth import NoAuth
from cirrocumulus.util import add_dataset_providers

client_path = os.path.join(cirrocumulus.__path__[0], 'client')

# If `entrypoint` is not defined in app.yaml, App Engine will look for an app
# called `app` in `main.py`.
app = Flask(__name__, static_folder=client_path, static_url_path='')
app.register_blueprint(blueprint, url_prefix='/api')


@app.route('/')
def root():
    return send_from_directory(client_path, "index.html")


if os.environ.get(CIRRO_AUTH_CLIENT_ID) is not None:
    app.config[CIRRO_AUTH] = GoogleAuth(os.environ.get(CIRRO_AUTH_CLIENT_ID))
else:
    app.config[CIRRO_AUTH] = NoAuth()

app.config[CIRRO_DATABASE] = CloudFireStoreNative()
os.environ[CIRRO_DATASET_PROVIDERS] = ','.join(['cirrocumulus.zarr_dataset.ZarrDataset',
                                                'cirrocumulus.parquet_dataset.ParquetDataset'])
add_dataset_providers()

if __name__ == '__main__':
    app.run(host='127.0.0.1', port=5000, debug=True)
| 33.341463 | 104 | 0.766642 | 186 | 1,367 | 5.333333 | 0.419355 | 0.096774 | 0.045363 | 0.051411 | 0.058468 | 0.058468 | 0.058468 | 0 | 0 | 0 | 0 | 0.009251 | 0.130212 | 1,367 | 40 | 105 | 34.175 | 0.825063 | 0.074616 | 0 | 0 | 0 | 0 | 0.096672 | 0.063391 | 0 | 0 | 0 | 0 | 0 | 1 | 0.037037 | false | 0 | 0.37037 | 0.037037 | 0.444444 | 0.074074 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
55dce36c7d1bd205aea80744f2bd0ceb8afc6832 | 1,169 | py | Python | manage/db_logger.py | ReanGD/web-home-manage | bbc5377a1f7fde002442fee7720e4ab9e9ad22b3 | [
"Apache-2.0"
] | null | null | null | manage/db_logger.py | ReanGD/web-home-manage | bbc5377a1f7fde002442fee7720e4ab9e9ad22b3 | [
"Apache-2.0"
] | null | null | null | manage/db_logger.py | ReanGD/web-home-manage | bbc5377a1f7fde002442fee7720e4ab9e9ad22b3 | [
"Apache-2.0"
] | null | null | null | import sys
import traceback

from manage.models import LoadLog


class DbLogger(object):
    def __init__(self, rec_id=None):
        if rec_id is None:
            self.rec = LoadLog.objects.create()
        else:
            self.rec = LoadLog.objects.get(pk=int(rec_id))

    def remove_torrent(self):
        if self.rec.torent_ptr is not None:
            for it in LoadLog.objects.filter(torent_ptr=self.rec.torent_ptr):
                it.torent_ptr = None
                it.save()

    def id(self):
        return self.rec.id

    def json_result(self):
        return {'result': self.rec.result, 'text': self.rec.text}

    def text(self):
        return self.rec.text

    def write(self, msg):
        self.rec.text += ("\n" + msg)
        self.rec.save()

    def set_result(self, result):
        self.rec.result = result
        self.rec.save()

    def set_torrent(self, t):
        # Fix: assign to the record, not the logger, so the save persists it.
        self.rec.torent_ptr = t
        self.rec.save()

    def exception(self):
        e_type, e_value, e_traceback = sys.exc_info()
        s = "\n".join(traceback.format_exception(e_type, e_value, e_traceback))
        self.write(s)
        self.set_result(LoadLog.RES_FAILED)
| 25.977778 | 79 | 0.597092 | 164 | 1,169 | 4.097561 | 0.329268 | 0.145833 | 0.058036 | 0.0625 | 0.113095 | 0.0625 | 0 | 0 | 0 | 0 | 0 | 0 | 0.284859 | 1,169 | 44 | 80 | 26.568182 | 0.803828 | 0 | 0 | 0.088235 | 0 | 0 | 0.011976 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.264706 | false | 0 | 0.088235 | 0.088235 | 0.470588 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
55e48ca73e642e82cfdfccf386ed40c0b2fba12d | 725 | py | Python | app/blogging/routes.py | Sjors/patron | a496097ad0821b677c8e710e8aceb587928be31c | [
"MIT"
] | 114 | 2018-12-30T20:43:37.000Z | 2022-03-21T18:57:47.000Z | app/blogging/routes.py | Sjors/patron | a496097ad0821b677c8e710e8aceb587928be31c | [
"MIT"
] | 17 | 2019-04-25T20:20:57.000Z | 2022-03-29T21:48:35.000Z | app/blogging/routes.py | Sjors/patron | a496097ad0821b677c8e710e8aceb587928be31c | [
"MIT"
] | 17 | 2019-01-02T06:37:11.000Z | 2022-03-29T22:22:40.000Z | from app.blogging import bp
from datetime import datetime
from flask import flash, redirect, url_for
from flask_login import current_user


@bp.before_request
def protect():
    '''
    Registers new function to Flask-Blogging Blueprint that protects
    updates to make them only viewable by paid subscribers.
    '''
    if current_user.is_authenticated:
        if datetime.today() <= current_user.expiration:
            return None
        else:
            flash('You must have a paid-up subscription \
                  to view updates.', 'warning')
            return redirect(url_for('main.support'))
    else:
        flash('Please login to view updates.', 'warning')
        return redirect(url_for('auth.login'))
| 31.521739 | 68 | 0.666207 | 91 | 725 | 5.208791 | 0.582418 | 0.06962 | 0.088608 | 0.084388 | 0.168776 | 0.168776 | 0.168776 | 0.168776 | 0 | 0 | 0 | 0 | 0.252414 | 725 | 22 | 69 | 32.954545 | 0.874539 | 0.165517 | 0 | 0.125 | 0 | 0 | 0.102564 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | true | 0 | 0.25 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
55eab24c8b73ac11d50c210b2451b3c1e941b6bd | 676 | py | Python | src/lib/jianshu_parser/jianshuparser.py | eebook/jianshu2e-book | d638fb8c2f47cf8e91e9f74e2e1e5f61f3c98a48 | [
"MIT"
] | 7 | 2019-01-02T14:52:48.000Z | 2021-11-05T06:11:46.000Z | src/lib/jianshu_parser/jianshuparser.py | knarfeh/jianshu2e-book | d638fb8c2f47cf8e91e9f74e2e1e5f61f3c98a48 | [
"MIT"
] | 2 | 2021-03-22T17:11:32.000Z | 2021-12-13T19:36:17.000Z | src/lib/jianshu_parser/jianshuparser.py | ee-book/jianshu2e-book | d638fb8c2f47cf8e91e9f74e2e1e5f61f3c98a48 | [
"MIT"
] | 2 | 2019-04-18T05:44:24.000Z | 2021-06-10T09:35:44.000Z | # -*- coding: utf-8 -*-
from bs4 import BeautifulSoup

from src.lib.jianshu_parser.base import BaseParser
from src.lib.jianshu_parser.content.JianshuAuthor import JianshuAuthorInfo
from src.lib.jianshu_parser.content.JianshuArticle import JianshuArticle


class JianshuParser(BaseParser):
    u"""
    Obtain the content needed for the jianshu_info table
    """
    def __init__(self, content):
        self.dom = BeautifulSoup(content, 'lxml')
        self.article_parser = JianshuArticle(self.dom)
        return

    def get_jianshu_info_list(self):
        author_parser = JianshuAuthorInfo()  # information for the SinaBlog_Info table
        author_parser.set_dom(self.dom)
        return [author_parser.get_info()]
| 28.166667 | 74 | 0.724852 | 77 | 676 | 6.12987 | 0.467532 | 0.044492 | 0.063559 | 0.108051 | 0.175847 | 0.127119 | 0 | 0 | 0 | 0 | 0 | 0.00363 | 0.184911 | 676 | 23 | 75 | 29.391304 | 0.852995 | 0.093195 | 0 | 0 | 0 | 0 | 0.006711 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.285714 | 0 | 0.642857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
55ee2be125f56e9339bd29f2a5e248d4c0042d7f | 220 | py | Python | Contest/Keyence2021/a/main.py | mpses/AtCoder | 9c101fcc0a1394754fcf2385af54b05c30a5ae2a | [
"CC0-1.0"
] | null | null | null | Contest/Keyence2021/a/main.py | mpses/AtCoder | 9c101fcc0a1394754fcf2385af54b05c30a5ae2a | [
"CC0-1.0"
] | null | null | null | Contest/Keyence2021/a/main.py | mpses/AtCoder | 9c101fcc0a1394754fcf2385af54b05c30a5ae2a | [
"CC0-1.0"
] | null | null | null | #!/usr/bin/env python3
(n,), a, b = [[*map(int, o.split())] for o in open(0)]
from itertools import accumulate

# A[i] = max(a[0..i]); the answer is max over i of A[i] * b[i].
# The original printed the initial value too, producing a spurious first line.
*A, = accumulate(a, max)
ans = a[0] * b[0]
for i in range(1, n):
    ans = max(ans, A[i] * b[i])
print(ans) | 27.5 | 54 | 0.554545 | 44 | 220 | 2.772727 | 0.568182 | 0.131148 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028249 | 0.195455 | 220 | 8 | 55 | 27.5 | 0.661017 | 0.095455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0.285714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
55f657ac810bd7adff3d28ddcf6b426dbce9f289 | 291 | py | Python | dev/user-agent-stacktrace/lib/utils.py | Katharine/apisnoop | 46c0e101c6e1e13a783f5022a6f77787c0824032 | [
"Apache-2.0"
] | null | null | null | dev/user-agent-stacktrace/lib/utils.py | Katharine/apisnoop | 46c0e101c6e1e13a783f5022a6f77787c0824032 | [
"Apache-2.0"
] | 13 | 2018-08-21T04:00:44.000Z | 2019-07-03T22:36:07.000Z | dev/user-agent-stacktrace/lib/utils.py | Katharine/apisnoop | 46c0e101c6e1e13a783f5022a6f77787c0824032 | [
"Apache-2.0"
] | 1 | 2019-05-09T18:47:22.000Z | 2019-05-09T18:47:22.000Z | from collections import defaultdict

def defaultdicttree():
    return defaultdict(defaultdicttree)


def defaultdict_to_dict(d):
    if isinstance(d, defaultdict):
        new_d = {}
        for k, v in d.items():
            new_d[k] = defaultdict_to_dict(v)
        d = new_d
    return d
| 22.384615 | 45 | 0.639175 | 38 | 291 | 4.710526 | 0.473684 | 0.067039 | 0.189944 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.271478 | 291 | 12 | 46 | 24.25 | 0.84434 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.1 | 0.1 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
55f78570dc2c54902bbba417e6ce4621cf9434e6 | 1,819 | py | Python | miniGithub/migrations/0003_auto_20200119_0955.py | stefan096/UKS | aeabe6a9995143c006ad4143e8e876a102e9d69b | [
"MIT"
] | null | null | null | miniGithub/migrations/0003_auto_20200119_0955.py | stefan096/UKS | aeabe6a9995143c006ad4143e8e876a102e9d69b | [
"MIT"
] | 36 | 2020-01-12T17:00:23.000Z | 2020-03-21T13:25:28.000Z | miniGithub/migrations/0003_auto_20200119_0955.py | stefan096/UKS | aeabe6a9995143c006ad4143e8e876a102e9d69b | [
"MIT"
] | null | null | null | # Generated by Django 3.0.2 on 2020-01-19 09:55
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion


class Migration(migrations.Migration):

    dependencies = [
        migrations.swappable_dependency(settings.AUTH_USER_MODEL),
        ('miniGithub', '0002_project_owner'),
    ]

    operations = [
        migrations.CreateModel(
            name='Comment',
            fields=[
                ('custom_event_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='miniGithub.Custom_Event')),
                ('description', models.CharField(max_length=500)),
            ],
            bases=('miniGithub.custom_event',),
        ),
        migrations.AlterField(
            model_name='custom_event',
            name='creator',
            field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL),
        ),
        migrations.CreateModel(
            name='Problem',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('title', models.CharField(max_length=100)),
                ('base_problem', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='problem', to='miniGithub.Problem')),
                ('project', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to='miniGithub.Project')),
            ],
        ),
        migrations.AddField(
            model_name='custom_event',
            name='problem',
            field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to='miniGithub.Problem'),
        ),
    ]
| 41.340909 | 206 | 0.632216 | 194 | 1,819 | 5.773196 | 0.376289 | 0.05 | 0.075 | 0.117857 | 0.386607 | 0.34375 | 0.286607 | 0.286607 | 0.25 | 0.25 | 0 | 0.018064 | 0.239142 | 1,819 | 43 | 207 | 42.302326 | 0.791185 | 0.024739 | 0 | 0.324324 | 1 | 0 | 0.136569 | 0.025959 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.081081 | 0 | 0.162162 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
55fa09f3a8c3fad0ee952c33bd12012b56fb9d68 | 668 | py | Python | AnkiIn/notetypes/ListCloze.py | Clouder0/AnkiIn | ca944bb9f79ce49bc2db62a0bfaeffe7908b48da | [
"MIT"
] | 1 | 2021-07-04T08:10:53.000Z | 2021-07-04T08:10:53.000Z | AnkiIn/notetypes/ListCloze.py | Clouder0/AnkiIn | ca944bb9f79ce49bc2db62a0bfaeffe7908b48da | [
"MIT"
] | 35 | 2021-07-03T10:50:20.000Z | 2022-01-09T09:33:17.000Z | AnkiIn/notetypes/ListCloze.py | Clouder0/AnkiIn | ca944bb9f79ce49bc2db62a0bfaeffe7908b48da | [
"MIT"
] | 2 | 2021-08-21T11:33:00.000Z | 2021-10-15T18:59:33.000Z | from .Cloze import get as cget
from ..config import dict as conf
from ..config import config_updater

notetype_name = "ListCloze"

if notetype_name not in conf["notetype"]:
    conf["notetype"][notetype_name] = {}
settings = conf["notetype"][notetype_name]

priority = None


def update_list_cloze_config():
    global settings, priority
    priority = settings.get("priority", 15)


config_updater.append((update_list_cloze_config, 15))


def check(lines: list, extra_params={}) -> bool:
    return lines[0].startswith("- ") or lines[0].startswith(r"* ")


def get(text: str, deck: str, tags: list, extra_params={}):
    return cget(text=text, deck=deck, tags=tags)
| 23.034483 | 66 | 0.712575 | 93 | 668 | 4.967742 | 0.419355 | 0.103896 | 0.069264 | 0.103896 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010563 | 0.149701 | 668 | 28 | 67 | 23.857143 | 0.802817 | 0 | 0 | 0 | 0 | 0 | 0.067365 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1875 | false | 0 | 0.1875 | 0.125 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
55fadfd4280d478b35858e331edea1ce48c5383a | 9,697 | py | Python | app/routes.py | ptkaczyk/Ithacartists | 0d8effafe64b29ae1756169cac1eb4d6bc980c1d | [
"MIT"
] | null | null | null | app/routes.py | ptkaczyk/Ithacartists | 0d8effafe64b29ae1756169cac1eb4d6bc980c1d | [
"MIT"
] | null | null | null | app/routes.py | ptkaczyk/Ithacartists | 0d8effafe64b29ae1756169cac1eb4d6bc980c1d | [
"MIT"
] | null | null | null | from flask import render_template, Flask, flash, redirect, url_for, abort, request
from flask_login import login_user, logout_user, login_required
from werkzeug.urls import url_parse
from app import app, db
from app.forms import *
from app.models import *


@app.route('/')
@app.route('/landing')
def landing():
    return render_template('Landing.html', title='Landing')


@app.route('/artistlist')
def artistlist():
    artists = Artist.query.all()
    return render_template('Artists.html', artists=artists, title='Artists')
@app.route('/login', methods=['GET', 'POST'])
def login():
    form = loginForm()
    if form.validate_on_submit():
        user = User.query.filter_by(username=form.username.data).first()
        if user is None or not user.check_password(form.password.data):
            flash('Incorrect name or password')
            return redirect(url_for('login'))
        login_user(user)
        return redirect(url_for('landing'))
    return render_template('Login.html', form=form, title='Login')


@app.route('/search', methods=['GET', 'POST'])
def search():
    searched = Product.query.all()
    form = searchForm()
    if form.validate_on_submit():
        searched = Product.query.filter_by(name=form.searchable.data).all()
    return render_template('search.html', searchable=searched, form=form, title='Search')
@app.route('/user/<name>')
def user(name):
    if len(User.query.filter_by(username=name).all()) > 0:
        chosenUser = User.query.filter_by(username=name).first()
        # Filter products by their owner's id (the original filtered on a
        # nonexistent 'Id' field).
        chosenProducts = Product.query.filter_by(userId=chosenUser.id).all()
        return render_template('user.html', title='User', userName=chosenUser.username, chosenUser=chosenUser,
                               productList=chosenProducts)
    else:
        abort(404)


@app.route('/product/<productName>')
def product(productName):
    if len(Product.query.filter_by(name=productName).all()) > 0:
        chosenProduct = Product.query.filter_by(name=productName).first()
        chosenUser = User.query.filter_by(id=chosenProduct.userId).first()
        userName = chosenUser.username
        return render_template('product.html', title='Product', name=productName, userPosting=userName,
                               description=chosenProduct.description, date=chosenProduct.dateHarvested,
                               productPrice=chosenProduct.price, amount=chosenProduct.amount)
    else:
        abort(404)
@app.route('/newProduct', methods=['GET', 'POST'])
def newProduct():
    form = productForm()
    if form.validate_on_submit():
        flash('New product created: {}'.format(form.name.data))
        newP = Product(name=form.name.data, description=form.description.data, price=form.price.data, amount=form.amount.data, dateHarvested=form.date.data, userId=4)
        db.session.add(newP)
        db.session.commit()
        return redirect(url_for('landing'))
    return render_template('newProduct.html', title='New Product', form=form)


@app.route('/newartist', methods=['GET', 'POST'])
@login_required
def newartist():
    form = artistForm()
    if form.validate_on_submit():
        if len(Artist.query.filter_by(firstname=form.artistName.data).all()) > 0:
            flash('That name already exists')
        else:
            flash('New page created: {}'.format(form.artistName.data))
            newA = Artist(firstname=form.artistName.data, lastname='', hometown=form.hometown.data, description=form.description.data)
            db.session.add(newA)
            db.session.commit()
        return redirect(url_for('artistlist'))
    return render_template('NewArtist.html', form=form, title='New Artist')


@app.route('/newvenue', methods=['GET', 'POST'])
def newvenue():
    form = venueForm()
    if form.validate_on_submit():
        if len(Venue.query.filter_by(name=form.name.data).all()) > 0:
            flash('That venue already exists')
        else:
            flash('New venue created: {}'.format(form.name.data))
            newV = Venue(name=form.name.data, description=form.description.data)
            db.session.add(newV)
            db.session.commit()
        return redirect(url_for('artistlist'))
    return render_template('NewVenue.html', title='New Venue', form=form)


@app.route('/newevent', methods=['GET', 'POST'])
def newevent():
    form = eventForm()
    form.venue.choices = [(venue.id, venue.name) for venue in Venue.query.all()]
    form.artists.choices = [(artist.id, artist.firstname) for artist in Artist.query.all()]
    if form.validate_on_submit():
        if len(Event.query.filter_by(name=form.name.data).all()) > 0:
            flash('That event already exists')
        else:
            flash('New event created: {}'.format(form.name.data))
            newE = Event(name=form.name.data, description=form.description.data, time=form.time.data, venueId=form.venue.data)
            db.session.add(newE)
            db.session.commit()
            for a in form.artists.data:
                newX = ArtistToEvent(artistId=Artist.query.filter_by(id=a).first().id, eventId=newE.id)
                db.session.add(newX)
            db.session.commit()
        return redirect(url_for('artistlist'))
    return render_template('NewEvent.html', title='New Event', form=form)
@app.route('/artist/<name>')
# instructor = Instructor.query.filter_by(firstname="Alex").first()
def artist(name):
    if len(Artist.query.filter_by(firstname=name).all()) > 0:
        chosenArtist = Artist.query.filter_by(firstname=name).first()
        chosenJoins = ArtistToEvent.query.filter_by(artistId=chosenArtist.id).all()
        chosenEvents = []
        trackingInt = 0
        for oneEvent in chosenJoins:
            chosenEvents.append(Event.query.filter_by(id=chosenJoins[trackingInt].eventId).first())
            trackingInt = trackingInt + 1
        # chosenEvents = Event.query.filter_by(id=chosenJoin.eventId).all()
        return render_template('Artist.html', title='Artist', artistName=chosenArtist.firstname, hometown=chosenArtist.hometown, description=chosenArtist.description, event_list=chosenEvents)
    else:
        abort(404)


@app.route('/register', methods=['GET', 'POST'])
def register():
    form = registerForm()
    if form.validate_on_submit():
        if len(User.query.filter_by(username=form.username.data).all()) > 0:
            flash('That name already exists')
        else:
            flash('New user created. You can now log in.')
            newU = User(username=form.username.data, password=form.password.data)
            newU.set_password(form.password.data)
            db.session.add(newU)
            db.session.commit()
        return redirect(url_for('landing'))
    return render_template('Register.html', form=form, title='Register')


@app.route('/logout')
def logout():
    logout_user()
    flash("User has been logged out.")
    return redirect(url_for('landing'))
@app.route('/populate_db')
def populate_db():
    a1 = Artist(firstname='Anne', lastname='Apricot', hometown='Ithaca', description='A')
    a2 = Artist(firstname='Ben', lastname='Barrel', hometown='Ithaca', description='B')
    a3 = Artist(firstname='Cathy', lastname='Chowder', hometown='Ithaca', description='C')
    a4 = Artist(firstname='Dan', lastname='Derringer', hometown='Delanson', description='D')
    e1 = Event(name='Augustfest', description='A', venueId='0')
    e2 = Event(name='Burgerfest', description='B', venueId='1')
    e3 = Event(name='Ciderfest', description='C', venueId='2')
    e4 = Event(name='Donutfest', description='D', venueId='1')
    e5 = Event(name='Earwigfest', description='E', venueId='1')
    e6 = Event(name='Falafelfest', description='F', venueId='2')
    ate1 = ArtistToEvent(artistId=1, eventId=1)
    ate2 = ArtistToEvent(artistId=2, eventId=2)
    ate3 = ArtistToEvent(artistId=3, eventId=3)
    ate4 = ArtistToEvent(artistId=4, eventId=4)
    ate5 = ArtistToEvent(artistId=1, eventId=5)
    ate6 = ArtistToEvent(artistId=2, eventId=5)
    ate7 = ArtistToEvent(artistId=3, eventId=6)
    ate8 = ArtistToEvent(artistId=1, eventId=6)
    v1 = Venue(name='Adelide Acres', description='A')
    v2 = Venue(name='Baltimore Barrelers', description='B')
    v3 = Venue(name='Canary Church', description='C')
    u1 = User(username='Peter', password='Tkaczyk')
    u1.set_password('Tkaczyk')
    u2 = User(username='Old Man McFarmer', password='Farmlivin')
    u2.set_password('Farmlivin')
    u3 = User(username='Young Man McFarmer', password='ILovFarm')
    u3.set_password('ILovFarm')
    p1 = Product(name='Eggs', amount=12, dateHarvested='12-12-2020', description='delicious eggs', price='$0.99',
                 userId=1)
    p2 = Product(name='Tomatoes', amount=20, dateHarvested='12-14-2020', description='delicious tomatoes', price='$1.99',
                 userId=2)
    p3 = Product(name='Beets', amount=30, dateHarvested='12-10-2020', description='delicious beets', price='$2.99',
                 userId=3)
    p4 = Product(name='Bacon', amount=10, dateHarvested='11-20-2020', description='delicious bacon', price='$3.99',
                 userId=2)
    p5 = Product(name='Turnips', amount=40, dateHarvested='12-10-2020', description='delicious turnips', price='$4.99',
                 userId=3)
    # Persist everything created above; the original add_all only included the
    # users and products, so artists, venues, events and joins were never saved.
    db.session.add_all([a1, a2, a3, a4, v1, v2, v3, e1, e2, e3, e4, e5, e6,
                        ate1, ate2, ate3, ate4, ate5, ate6, ate7, ate8,
                        u1, u2, u3, p1, p2, p3, p4, p5])
    db.session.commit()
    return "database has been populated."


@app.route('/reset_db')
def reset_db():
    flash("Resetting database: deleting old data and repopulating with dummy data")
    meta = db.metadata
    for table in reversed(meta.sorted_tables):
        print('Clear table {}'.format(table))
        db.session.execute(table.delete())
    db.session.commit()
    populate_db()
    return "Reset and repopulated data."
| 42.530702 | 191 | 0.662267 | 1,194 | 9,697 | 5.314908 | 0.203518 | 0.032934 | 0.038922 | 0.025213 | 0.255121 | 0.194768 | 0.159943 | 0.128585 | 0.078002 | 0.078002 | 0 | 0.019121 | 0.185624 | 9,697 | 227 | 192 | 42.718062 | 0.784475 | 0.0132 | 0 | 0.1875 | 0 | 0 | 0.143828 | 0.0023 | 0 | 0 | 0 | 0 | 0 | 1 | 0.078125 | false | 0.052083 | 0.03125 | 0.005208 | 0.223958 | 0.005208 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
55fb46ee1813e2c980cdc6a6a49ca860bf41a84e | 2,861 | py | Python | src/bloombox/schema/services/devices/v1beta1/DevicesService_Beta1_pb2_grpc.py | Bloombox/Python | 1b125fbdf54efb390afe12aaa966f093218c4387 | [
"Apache-2.0"
] | 4 | 2018-01-23T20:13:11.000Z | 2018-07-28T22:36:09.000Z | src/bloombox/schema/services/devices/v1beta1/DevicesService_Beta1_pb2_grpc.py | Bloombox/Python | 1b125fbdf54efb390afe12aaa966f093218c4387 | [
"Apache-2.0"
] | 159 | 2018-02-02T09:55:52.000Z | 2021-07-21T23:41:59.000Z | src/bloombox/schema/services/devices/v1beta1/DevicesService_Beta1_pb2_grpc.py | Bloombox/Python | 1b125fbdf54efb390afe12aaa966f093218c4387 | [
"Apache-2.0"
] | 3 | 2018-01-23T20:13:15.000Z | 2020-01-17T01:07:53.000Z | # Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
import grpc

from devices.v1beta1 import DevicesService_Beta1_pb2 as devices_dot_v1beta1_dot_DevicesService__Beta1__pb2


class DevicesStub(object):
  """Specifies the devices service, which enables managed devices to check-in, authorize themselves, and discover their
  identity/role.
  """

  def __init__(self, channel):
    """Constructor.

    Args:
      channel: A grpc.Channel.
    """
    self.Ping = channel.unary_unary(
        '/bloombox.schema.services.devices.v1beta1.Devices/Ping',
        request_serializer=devices_dot_v1beta1_dot_DevicesService__Beta1__pb2.Ping.Request.SerializeToString,
        response_deserializer=devices_dot_v1beta1_dot_DevicesService__Beta1__pb2.Ping.Response.FromString,
        )
    self.Activate = channel.unary_unary(
        '/bloombox.schema.services.devices.v1beta1.Devices/Activate',
        request_serializer=devices_dot_v1beta1_dot_DevicesService__Beta1__pb2.Activation.Request.SerializeToString,
        response_deserializer=devices_dot_v1beta1_dot_DevicesService__Beta1__pb2.Activation.Response.FromString,
        )


class DevicesServicer(object):
  """Specifies the devices service, which enables managed devices to check-in, authorize themselves, and discover their
  identity/role.
  """

  def Ping(self, request, context):
    """Ping the device server.
    """
    context.set_code(grpc.StatusCode.UNIMPLEMENTED)
    context.set_details('Method not implemented!')
    raise NotImplementedError('Method not implemented!')

  def Activate(self, request, context):
    """Setup and enable a device for live use. If this is the first time the subject device has activated itself,
    initialize or otherwise provision any requisite objects or resources.
    """
    context.set_code(grpc.StatusCode.UNIMPLEMENTED)
    context.set_details('Method not implemented!')
    raise NotImplementedError('Method not implemented!')


def add_DevicesServicer_to_server(servicer, server):
  rpc_method_handlers = {
      'Ping': grpc.unary_unary_rpc_method_handler(
          servicer.Ping,
          request_deserializer=devices_dot_v1beta1_dot_DevicesService__Beta1__pb2.Ping.Request.FromString,
          response_serializer=devices_dot_v1beta1_dot_DevicesService__Beta1__pb2.Ping.Response.SerializeToString,
      ),
      'Activate': grpc.unary_unary_rpc_method_handler(
          servicer.Activate,
          request_deserializer=devices_dot_v1beta1_dot_DevicesService__Beta1__pb2.Activation.Request.FromString,
          response_serializer=devices_dot_v1beta1_dot_DevicesService__Beta1__pb2.Activation.Response.SerializeToString,
      ),
  }
  generic_handler = grpc.method_handlers_generic_handler(
      'bloombox.schema.services.devices.v1beta1.Devices', rpc_method_handlers)
  server.add_generic_rpc_handlers((generic_handler,))
| 42.701493 | 119 | 0.774205 | 328 | 2,861 | 6.402439 | 0.301829 | 0.090476 | 0.104762 | 0.085714 | 0.675714 | 0.675714 | 0.655238 | 0.599048 | 0.599048 | 0.385714 | 0 | 0.018891 | 0.148899 | 2,861 | 66 | 120 | 43.348485 | 0.843532 | 0.203076 | 0 | 0.205128 | 1 | 0 | 0.118812 | 0.072007 | 0 | 0 | 0 | 0 | 0 | 1 | 0.102564 | false | 0 | 0.051282 | 0 | 0.205128 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
55fe127a3e15c5c409ac7dd672e540ee28e8d786 | 413 | py | Python | oldPython/driving_app.py | Awarua-/Can-I-Have-Your-Attention-COSC475-Research | 71b5140b988aa6512a7cf5b5b6d043e20fd02084 | [
"MIT"
] | null | null | null | oldPython/driving_app.py | Awarua-/Can-I-Have-Your-Attention-COSC475-Research | 71b5140b988aa6512a7cf5b5b6d043e20fd02084 | [
"MIT"
] | null | null | null | oldPython/driving_app.py | Awarua-/Can-I-Have-Your-Attention-COSC475-Research | 71b5140b988aa6512a7cf5b5b6d043e20fd02084 | [
"MIT"
] | null | null | null | from kivy.app import App
from kivy.uix.label import Label
from kivy.core.window import Window


class DrivingApp(App):
    def build(self):
        Window.fullscreen = False
        # Need to set the size, otherwise very pixelated
        # wonders about pixel mapping?
        Window.size = (1920, 1080)  # Window.size is a property, not a callable
        b = Label(text='Launch Child App')
        return b


if __name__ == "__main__":
    DrivingApp().run()  # run() must be called on an App instance
| 21.736842 | 56 | 0.653753 | 55 | 413 | 4.763636 | 0.690909 | 0.091603 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02623 | 0.261501 | 413 | 18 | 57 | 22.944444 | 0.832787 | 0.181598 | 0 | 0 | 0 | 0 | 0.071642 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.272727 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
3603655d64ea26fd4eb5614d884927de08638bdc | 30,296 | py | Python | plugins/modules/oci_sch_service_connector.py | A7rMtWE57x/oci-ansible-collection | 80548243a085cd53fd5dddaa8135b5cb43612c66 | [
"Apache-2.0"
] | null | null | null | plugins/modules/oci_sch_service_connector.py | A7rMtWE57x/oci-ansible-collection | 80548243a085cd53fd5dddaa8135b5cb43612c66 | [
"Apache-2.0"
] | null | null | null | plugins/modules/oci_sch_service_connector.py | A7rMtWE57x/oci-ansible-collection | 80548243a085cd53fd5dddaa8135b5cb43612c66 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/python
# Copyright (c) 2017, 2020 Oracle and/or its affiliates.
# This software is made available to you under the terms of the GPL 3.0 license or the Apache 2.0 license.
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Apache License v2.0
# See LICENSE.TXT for details.
# GENERATED FILE - DO NOT EDIT - MANUAL CHANGES WILL BE OVERWRITTEN
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {
"metadata_version": "1.1",
"status": ["preview"],
"supported_by": "community",
}
DOCUMENTATION = """
---
module: oci_sch_service_connector
short_description: Manage a ServiceConnector resource in Oracle Cloud Infrastructure
description:
- This module allows the user to create, update and delete a ServiceConnector resource in Oracle Cloud Infrastructure
- For I(state=present), creates a new service connector in the specified compartment.
A service connector is a logically defined flow for moving data from
a source service to a destination service in Oracle Cloud Infrastructure.
For general information about service connectors, see
L(Service Connector Hub Overview,https://docs.cloud.oracle.com/iaas/service-connector-hub/using/index.htm).
- For purposes of access control, you must provide the
L(OCID,https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the compartment where
you want the service connector to reside. Notice that the service connector
doesn't have to be in the same compartment as the source or target services.
For information about access control and compartments, see
L(Overview of the IAM Service,https://docs.cloud.oracle.com/iaas/Content/Identity/Concepts/overview.htm).
- After you send your request, the new service connector's state is temporarily
CREATING. When the state changes to ACTIVE, data begins transferring from the
source service to the target service. For instructions on deactivating and
activating service connectors, see
L(To activate or deactivate a service connector,https://docs.cloud.oracle.com/iaas/service-connector-hub/using/index.htm).
- "This resource has the following action operations in the M(oci_service_connector_actions) module: activate, deactivate."
version_added: "2.9"
author: Oracle (@oracle)
options:
display_name:
description:
- A user-friendly name. It does not have to be unique, and it is changeable.
Avoid entering confidential information.
- Required for create using I(state=present).
- Required for update, delete when environment variable C(OCI_USE_NAME_AS_IDENTIFIER) is set.
- This parameter is updatable when C(OCI_USE_NAME_AS_IDENTIFIER) is not set.
type: str
aliases: ["name"]
compartment_id:
description:
- The L(OCID,https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the
compartment to create the service connector in.
- Required for create using I(state=present).
- Required for update when environment variable C(OCI_USE_NAME_AS_IDENTIFIER) is set.
- Required for delete when environment variable C(OCI_USE_NAME_AS_IDENTIFIER) is set.
type: str
description:
description:
- The description of the resource. Avoid entering confidential information.
- This parameter is updatable.
type: str
source:
description:
- ""
- Required for create using I(state=present).
- This parameter is updatable.
type: dict
suboptions:
kind:
description:
- The type discriminator.
type: str
choices:
- "logging"
required: true
log_sources:
description:
- The resources affected by this work request.
type: list
required: true
suboptions:
compartment_id:
description:
- The L(OCID,https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the compartment containing the log
source.
type: str
required: true
log_group_id:
description:
- The L(OCID,https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the log group.
type: str
log_id:
description:
- The L(OCID,https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the log.
type: str
tasks:
description:
- The list of tasks.
- This parameter is updatable.
type: list
suboptions:
kind:
description:
- The type discriminator.
type: str
choices:
- "logRule"
required: true
condition:
description:
- A filter or mask to limit the source used in the flow defined by the service connector.
type: str
required: true
target:
description:
- ""
- Required for create using I(state=present).
- This parameter is updatable.
type: dict
suboptions:
kind:
description:
- The type discriminator.
type: str
choices:
- "notifications"
- "objectStorage"
- "monitoring"
- "functions"
- "streaming"
required: true
topic_id:
description:
- The L(OCID,https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the topic.
- Required when kind is 'notifications'
type: str
namespace:
description:
- The namespace.
- Applicable when kind is 'objectStorage'
type: str
bucket_name:
description:
- The name of the bucket. Avoid entering confidential information.
- Required when kind is 'objectStorage'
type: str
object_name_prefix:
description:
- The prefix of the objects. Avoid entering confidential information.
- Applicable when kind is 'objectStorage'
type: str
compartment_id:
description:
- The L(OCID,https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the compartment containing the metric.
- Required when kind is 'monitoring'
type: str
metric_namespace:
description:
- The namespace of the metric.
- "Example: `oci_computeagent`"
- Required when kind is 'monitoring'
type: str
metric:
description:
- The name of the metric.
- "Example: `CpuUtilization`"
- Required when kind is 'monitoring'
type: str
function_id:
description:
- The L(OCID,https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the function.
- Required when kind is 'functions'
type: str
stream_id:
description:
- The L(OCID,https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the stream.
- Required when kind is 'streaming'
type: str
freeform_tags:
description:
- "Simple key-value pair that is applied without any predefined name, type or scope. Exists for cross-compatibility only.
Example: `{\\"bar-key\\": \\"value\\"}`"
- This parameter is updatable.
type: dict
defined_tags:
description:
- "Defined tags for this resource. Each key is predefined and scoped to a namespace.
Example: `{\\"foo-namespace\\": {\\"bar-key\\": \\"value\\"}}`"
- This parameter is updatable.
type: dict
service_connector_id:
description:
- The L(OCID,https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the service connector.
- Required for update using I(state=present) when environment variable C(OCI_USE_NAME_AS_IDENTIFIER) is not set.
- Required for delete using I(state=absent) when environment variable C(OCI_USE_NAME_AS_IDENTIFIER) is not set.
type: str
aliases: ["id"]
state:
description:
- The state of the ServiceConnector.
- Use I(state=present) to create or update a ServiceConnector.
- Use I(state=absent) to delete a ServiceConnector.
type: str
required: false
default: 'present'
choices: ["present", "absent"]
extends_documentation_fragment: [ oracle.oci.oracle, oracle.oci.oracle_creatable_resource, oracle.oci.oracle_wait_options ]
"""
EXAMPLES = """
- name: Create service_connector
  oci_sch_service_connector:
    display_name: display_name_example
    compartment_id: ocid1.compartment.oc1..xxxxxxEXAMPLExxxxxx
    source:
      kind: logging
      log_sources:
      - compartment_id: ocid1.compartment.oc1..xxxxxxEXAMPLExxxxxx
    target:
      kind: notifications

- name: Update service_connector using name (when environment variable OCI_USE_NAME_AS_IDENTIFIER is set)
  oci_sch_service_connector:
    display_name: display_name_example
    compartment_id: ocid1.compartment.oc1..xxxxxxEXAMPLExxxxxx
    description: description_example
    source:
      kind: logging
      log_sources:
      - compartment_id: ocid1.compartment.oc1..xxxxxxEXAMPLExxxxxx
    tasks:
    - kind: logRule
      condition: condition_example
    target:
      kind: notifications
    freeform_tags: {'Department': 'Finance'}
    defined_tags: {'Operations': {'CostCenter': 'US'}}

- name: Update service_connector
  oci_sch_service_connector:
    display_name: display_name_example
    description: description_example
    service_connector_id: ocid1.serviceconnector.oc1..xxxxxxEXAMPLExxxxxx

- name: Delete service_connector
  oci_sch_service_connector:
    service_connector_id: ocid1.serviceconnector.oc1..xxxxxxEXAMPLExxxxxx
    state: absent

- name: Delete service_connector using name (when environment variable OCI_USE_NAME_AS_IDENTIFIER is set)
  oci_sch_service_connector:
    display_name: display_name_example
    compartment_id: ocid1.compartment.oc1..xxxxxxEXAMPLExxxxxx
    state: absent
"""
RETURN = """
service_connector:
description:
- Details of the ServiceConnector resource acted upon by the current operation
returned: on success
type: complex
contains:
id:
description:
- The L(OCID,https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the service connector.
returned: on success
type: string
sample: ocid1.resource.oc1..xxxxxxEXAMPLExxxxxx
display_name:
description:
- A user-friendly name. It does not have to be unique, and it is changeable.
Avoid entering confidential information.
returned: on success
type: string
sample: display_name_example
description:
description:
- The description of the resource. Avoid entering confidential information.
returned: on success
type: string
sample: description_example
compartment_id:
description:
- The L(OCID,https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the compartment containing the service connector.
returned: on success
type: string
sample: ocid1.compartment.oc1..xxxxxxEXAMPLExxxxxx
time_created:
description:
- "The date and time when the service connector was created.
Format is defined by L(RFC3339,https://tools.ietf.org/html/rfc3339).
Example: `2020-01-25T21:10:29.600Z`"
returned: on success
type: string
sample: 2020-01-25T21:10:29.600Z
time_updated:
description:
- "The date and time when the service connector was updated.
Format is defined by L(RFC3339,https://tools.ietf.org/html/rfc3339).
Example: `2020-01-25T21:10:29.600Z`"
returned: on success
type: string
sample: 2020-01-25T21:10:29.600Z
lifecycle_state:
description:
- The current state of the service connector.
returned: on success
type: string
sample: CREATING
lifecyle_details:
description:
- A message describing the current state in more detail.
For example, the message might provide actionable
information for a resource in a `FAILED` state.
returned: on success
type: string
sample: lifecyle_details_example
source:
description:
- ""
returned: on success
type: complex
contains:
kind:
description:
- The type discriminator.
returned: on success
type: string
sample: logging
log_sources:
description:
- The resources affected by this work request.
returned: on success
type: complex
contains:
compartment_id:
description:
- The L(OCID,https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the compartment containing the log
source.
returned: on success
type: string
sample: ocid1.compartment.oc1..xxxxxxEXAMPLExxxxxx
log_group_id:
description:
- The L(OCID,https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the log group.
returned: on success
type: string
sample: ocid1.loggroup.oc1..xxxxxxEXAMPLExxxxxx
log_id:
description:
- The L(OCID,https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the log.
returned: on success
type: string
sample: ocid1.log.oc1..xxxxxxEXAMPLExxxxxx
tasks:
description:
- The list of tasks.
returned: on success
type: complex
contains:
kind:
description:
- The type discriminator.
returned: on success
type: string
sample: logRule
condition:
description:
- A filter or mask to limit the source used in the flow defined by the service connector.
returned: on success
type: string
sample: condition_example
target:
description:
- ""
returned: on success
type: complex
contains:
kind:
description:
- The type discriminator.
returned: on success
type: string
sample: notifications
topic_id:
description:
- The L(OCID,https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the topic.
returned: on success
type: string
sample: ocid1.topic.oc1..xxxxxxEXAMPLExxxxxx
namespace:
description:
- The namespace.
returned: on success
type: string
sample: namespace_example
bucket_name:
description:
- The name of the bucket. Avoid entering confidential information.
returned: on success
type: string
sample: bucket_name_example
object_name_prefix:
description:
- The prefix of the objects. Avoid entering confidential information.
returned: on success
type: string
sample: object_name_prefix_example
compartment_id:
description:
- The L(OCID,https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the compartment containing the metric.
returned: on success
type: string
sample: ocid1.compartment.oc1..xxxxxxEXAMPLExxxxxx
metric_namespace:
description:
- The namespace of the metric.
- "Example: `oci_computeagent`"
returned: on success
type: string
sample: oci_computeagent
metric:
description:
- The name of the metric.
- "Example: `CpuUtilization`"
returned: on success
type: string
sample: CpuUtilization
function_id:
description:
- The L(OCID,https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the function.
returned: on success
type: string
sample: ocid1.function.oc1..xxxxxxEXAMPLExxxxxx
stream_id:
description:
- The L(OCID,https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the stream.
returned: on success
type: string
sample: ocid1.stream.oc1..xxxxxxEXAMPLExxxxxx
freeform_tags:
description:
- "Simple key-value pair that is applied without any predefined name, type or scope. Exists for cross-compatibility only.
Example: `{\\"bar-key\\": \\"value\\"}`"
returned: on success
type: dict
sample: {'Department': 'Finance'}
defined_tags:
description:
- "Defined tags for this resource. Each key is predefined and scoped to a namespace.
Example: `{\\"foo-namespace\\": {\\"bar-key\\": \\"value\\"}}`"
returned: on success
type: dict
sample: {'Operations': {'CostCenter': 'US'}}
system_tags:
description:
- "The system tags associated with this resource, if any. The system tags are set by Oracle Cloud Infrastructure services. Each key is
predefined and scoped to namespaces.
For more information, see L(Resource Tags,https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm).
Example: `{orcl-cloud: {free-tier-retain: true}}`"
returned: on success
type: dict
sample: {}
sample: {
"id": "ocid1.resource.oc1..xxxxxxEXAMPLExxxxxx",
"display_name": "display_name_example",
"description": "description_example",
"compartment_id": "ocid1.compartment.oc1..xxxxxxEXAMPLExxxxxx",
"time_created": "2020-01-25T21:10:29.600Z",
"time_updated": "2020-01-25T21:10:29.600Z",
"lifecycle_state": "CREATING",
"lifecyle_details": "lifecyle_details_example",
"source": {
"kind": "logging",
"log_sources": [{
"compartment_id": "ocid1.compartment.oc1..xxxxxxEXAMPLExxxxxx",
"log_group_id": "ocid1.loggroup.oc1..xxxxxxEXAMPLExxxxxx",
"log_id": "ocid1.log.oc1..xxxxxxEXAMPLExxxxxx"
}]
},
"tasks": [{
"kind": "logRule",
"condition": "condition_example"
}],
"target": {
"kind": "notifications",
"topic_id": "ocid1.topic.oc1..xxxxxxEXAMPLExxxxxx",
"namespace": "namespace_example",
"bucket_name": "bucket_name_example",
"object_name_prefix": "object_name_prefix_example",
"compartment_id": "ocid1.compartment.oc1..xxxxxxEXAMPLExxxxxx",
"metric_namespace": "oci_computeagent",
"metric": "CpuUtilization",
"function_id": "ocid1.function.oc1..xxxxxxEXAMPLExxxxxx",
"stream_id": "ocid1.stream.oc1..xxxxxxEXAMPLExxxxxx"
},
"freeform_tags": {'Department': 'Finance'},
"defined_tags": {'Operations': {'CostCenter': 'US'}},
"system_tags": {}
}
"""
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.oracle.oci.plugins.module_utils import (
    oci_common_utils,
    oci_wait_utils,
)
from ansible_collections.oracle.oci.plugins.module_utils.oci_resource_utils import (
    OCIResourceHelperBase,
    get_custom_class,
)

try:
    from oci.sch import ServiceConnectorClient
    from oci.sch.models import CreateServiceConnectorDetails
    from oci.sch.models import UpdateServiceConnectorDetails

    HAS_OCI_PY_SDK = True
except ImportError:
    HAS_OCI_PY_SDK = False


class ServiceConnectorHelperGen(OCIResourceHelperBase):
    """Supported operations: create, update, get, list and delete"""

    def get_module_resource_id_param(self):
        return "service_connector_id"

    def get_module_resource_id(self):
        return self.module.params.get("service_connector_id")

    def get_get_fn(self):
        return self.client.get_service_connector

    def get_resource(self):
        return oci_common_utils.call_with_backoff(
            self.client.get_service_connector,
            service_connector_id=self.module.params.get("service_connector_id"),
        )

    def get_required_kwargs_for_list(self):
        required_list_method_params = [
            "compartment_id",
        ]
        return dict(
            (param, self.module.params[param]) for param in required_list_method_params
        )

    def get_optional_kwargs_for_list(self):
        optional_list_method_params = ["display_name"]
        return dict(
            (param, self.module.params[param])
            for param in optional_list_method_params
            if self.module.params.get(param) is not None
            and (
                self._use_name_as_identifier()
                or (
                    not self.module.params.get("key_by")
                    or param in self.module.params.get("key_by")
                )
            )
        )

    def list_resources(self):
        required_kwargs = self.get_required_kwargs_for_list()
        optional_kwargs = self.get_optional_kwargs_for_list()
        kwargs = oci_common_utils.merge_dicts(required_kwargs, optional_kwargs)
        return oci_common_utils.list_all_resources(
            self.client.list_service_connectors, **kwargs
        )

    def get_create_model_class(self):
        return CreateServiceConnectorDetails

    def create_resource(self):
        create_details = self.get_create_model()
        return oci_wait_utils.call_and_wait(
            call_fn=self.client.create_service_connector,
            call_fn_args=(),
            call_fn_kwargs=dict(create_service_connector_details=create_details,),
            waiter_type=oci_wait_utils.WORK_REQUEST_WAITER_KEY,
            operation=oci_common_utils.CREATE_OPERATION_KEY,
            waiter_client=self.get_waiter_client(),
            resource_helper=self,
            wait_for_states=oci_common_utils.get_work_request_completed_states(),
        )

    def get_update_model_class(self):
        return UpdateServiceConnectorDetails

    def update_resource(self):
        update_details = self.get_update_model()
        return oci_wait_utils.call_and_wait(
            call_fn=self.client.update_service_connector,
            call_fn_args=(),
            call_fn_kwargs=dict(
                service_connector_id=self.module.params.get("service_connector_id"),
                update_service_connector_details=update_details,
            ),
            waiter_type=oci_wait_utils.WORK_REQUEST_WAITER_KEY,
            operation=oci_common_utils.UPDATE_OPERATION_KEY,
            waiter_client=self.get_waiter_client(),
            resource_helper=self,
            wait_for_states=oci_common_utils.get_work_request_completed_states(),
        )

    def delete_resource(self):
        return oci_wait_utils.call_and_wait(
            call_fn=self.client.delete_service_connector,
            call_fn_args=(),
            call_fn_kwargs=dict(
                service_connector_id=self.module.params.get("service_connector_id"),
            ),
            waiter_type=oci_wait_utils.WORK_REQUEST_WAITER_KEY,
            operation=oci_common_utils.DELETE_OPERATION_KEY,
            waiter_client=self.get_waiter_client(),
            resource_helper=self,
            wait_for_states=oci_common_utils.get_work_request_completed_states(),
        )


ServiceConnectorHelperCustom = get_custom_class("ServiceConnectorHelperCustom")


class ResourceHelper(ServiceConnectorHelperCustom, ServiceConnectorHelperGen):
    pass


def main():
    module_args = oci_common_utils.get_common_arg_spec(
        supports_create=True, supports_wait=True
    )
    module_args.update(
        dict(
            display_name=dict(aliases=["name"], type="str"),
            compartment_id=dict(type="str"),
            description=dict(type="str"),
            source=dict(
                type="dict",
                options=dict(
                    kind=dict(type="str", required=True, choices=["logging"]),
                    log_sources=dict(
                        type="list",
                        elements="dict",
                        required=True,
                        options=dict(
                            compartment_id=dict(type="str", required=True),
                            log_group_id=dict(type="str"),
                            log_id=dict(type="str"),
                        ),
                    ),
                ),
            ),
            tasks=dict(
                type="list",
                elements="dict",
                options=dict(
                    kind=dict(type="str", required=True, choices=["logRule"]),
                    condition=dict(type="str", required=True),
                ),
            ),
            target=dict(
                type="dict",
                options=dict(
                    kind=dict(
                        type="str",
                        required=True,
                        choices=[
                            "notifications",
                            "objectStorage",
                            "monitoring",
                            "functions",
                            "streaming",
                        ],
                    ),
                    topic_id=dict(type="str"),
                    namespace=dict(type="str"),
                    bucket_name=dict(type="str"),
                    object_name_prefix=dict(type="str"),
                    compartment_id=dict(type="str"),
                    metric_namespace=dict(type="str"),
                    metric=dict(type="str"),
                    function_id=dict(type="str"),
                    stream_id=dict(type="str"),
                ),
            ),
            freeform_tags=dict(type="dict"),
            defined_tags=dict(type="dict"),
            service_connector_id=dict(aliases=["id"], type="str"),
            state=dict(type="str", default="present", choices=["present", "absent"]),
        )
    )

    module = AnsibleModule(argument_spec=module_args, supports_check_mode=True)

    if not HAS_OCI_PY_SDK:
        module.fail_json(msg="oci python sdk required for this module.")

    resource_helper = ResourceHelper(
        module=module,
        resource_type="service_connector",
        service_client_class=ServiceConnectorClient,
        namespace="sch",
    )

    result = dict(changed=False)

    if resource_helper.is_delete_using_name():
        result = resource_helper.delete_using_name()
    elif resource_helper.is_delete():
        result = resource_helper.delete()
    elif resource_helper.is_update_using_name():
        result = resource_helper.update_using_name()
    elif resource_helper.is_update():
        result = resource_helper.update()
    elif resource_helper.is_create():
        result = resource_helper.create()

    module.exit_json(**result)


if __name__ == "__main__":
    main()
# File: Python_Projects/numeric/lossofsignificance.py | repo: arifBurakDemiray/TheCodesThatIWrote | license: MIT
# -*- coding: utf-8 -*-
"""
Created on Mon Apr 13 13:35:33 2020
"""
# Demonstrates loss of significance: the stable form 2x/(1-x**2) is compared
# with the algebraically equal form 1/(1-x) - 1/(1+x), whose subtraction
# suffers catastrophic cancellation as x shrinks.
x = 1e-1
flag = True
a = 0
while flag:
    print((2*x)/(1-(x**2)), "......", (1/(1-x))-(1/(1+x)))
    x = x*(1e-1)
    a = a+1
    if a == 25:
        flag = False
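The effect the loop prints can be pinned down at a single point; a minimal companion sketch (the value of x is chosen for illustration and is not from the script):

```python
# Stable vs. cancellation-prone evaluation of the same quantity.
# At x = 1e-200, 1-x and 1+x both round to exactly 1.0 in double
# precision, so the naive difference collapses to 0.0, while the
# stable form still returns 2e-200 (x**2 merely underflows to 0.0).
x = 1e-200
stable = (2 * x) / (1 - x ** 2)
naive = 1 / (1 - x) - 1 / (1 + x)
print(stable, naive)  # 2e-200 0.0
```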
# File: scrape.py | repo: darenr/contemporary-art--rss-scraper | license: MIT
# -*- coding: utf-8 -*-
import json
import codecs
import traceback
import sys
import requests
import requests_cache
import feedparser
import collections

from bs4 import BeautifulSoup
from urlparse import urlparse, urljoin

one_day = 60 * 60 * 24
requests_cache.install_cache(
    'rss_cache', backend='sqlite', expire_after=one_day)

headers = {
    'User-Agent': 'Mozilla/5.0'
}


def get_entry_formatted(mime_type, value):
    if mime_type.lower() == 'text/html':
        soup = BeautifulSoup(value, 'html5lib')
        return ''.join(line.lstrip() for line in soup.getText().splitlines(True))
    else:
        return value


def parse_content(mime_type, value):
    if mime_type.lower() == 'text/html':
        soup = BeautifulSoup(value, 'html5lib')
        # scoop up all the text
        result = {
            "text": ''.join(line.lstrip() for line in soup.getText().splitlines(True))
        }
        if soup.find('img'):
            result['imgurl'] = soup.find('img')['src']
        return result
    else:
        return value


def get_entry_value(entry, key, feed):
    #
    # deals with differences between feeds
    #
    _key = feed['fields'][key] if 'fields' in feed and key in feed['fields'] else key
    if _key in entry:
        return entry[_key]
    else:
        print ' *', 'No', _key, "field in", entry
        return None


def fetch_page_and_parse(feed, url):
    print ' *', 'parsing page link:', url
    page = requests.get(url, headers=headers)
    result = {}
    if page.status_code == 200:
        soup = BeautifulSoup(page.text, 'html5lib')
        if 'selector' in feed:
            for img in soup.select(feed['selector']):
                src = img['src'] if img.has_attr('src') else None
                if not src:
                    src = img['srcset'] if img.has_attr('srcset') else None
                if src:
                    if src.startswith('/'):
                        result['imgurl'] = urljoin(feed['url'], src)
                    else:
                        result['imgurl'] = src
                    break
        else:
            # look for og_image as the default
            og_image = soup.find('meta', {"property": "og:image"})
            if og_image and og_image.has_attr('content'):
                result['imgurl'] = og_image['content']
    return result


def validate(record):
    mandatory_fields = ['imgurl', 'description', 'title', 'link']
    for field in mandatory_fields:
        if not (field in record and record[field]):
            print ' *', 'Missing field', field
            return False
    return True


def process_feed(feed):
    print ' *', 'processing', feed['url']
    rawxml = requests.get(feed['url'], headers=headers)
    d = feedparser.parse(rawxml.text)
    rows = []
    for entry in d['entries']:
        # standard fields:
        record = {
            "organization": feed['organization'],
            "link": get_entry_value(entry, 'link', feed),
            "title": get_entry_value(entry, 'title', feed),
            "date": get_entry_value(entry, 'published', feed),
            "user_tags": [],
            "description": "",
            "imgurl": ""
        }
        if 'category' in entry and entry['category']:
            record['user_tags'].append(get_entry_formatted("text/html", entry["category"]))
        if 'summary_detail' in entry and entry['summary_detail']:
            m = parse_content(entry["summary_detail"]["type"], entry["summary_detail"]["value"])
            if 'text' in m:
                record["description"] = m['text']
            if 'imgurl' in m:
                record["imgurl"] = m['imgurl']
        if 'media_thumbnail' in entry and entry['media_thumbnail']:
            media_thumbnail = entry['media_thumbnail'][0]
            if 'url' in media_thumbnail:
                record["imgurl"] = media_thumbnail['url']
        if 'tags' in entry and entry['tags']:
            for x in entry['tags']:
                if 'term' in x:
                    record['user_tags'].append(x['term'])
        record['user_tags'] = list(set(record['user_tags']))
        if not record['imgurl']:
            m = fetch_page_and_parse(feed, record['link'])
            for k in m:
                record[k] = m[k]
        if validate(record):
            #
            # any that fail to validate are just ignored
            #
            rows.append(record)
    return rows


if __name__ == "__main__":
    with codecs.open('sources.json', 'rb', 'utf-8') as f:
        sources = json.loads(f.read().encode('utf-8'))
    try:
        ingest_rows = []
        for feed in sources['feeds']:
            ingest_rows += process_feed(feed)
        print ' *', 'scraped %d records' % (len(ingest_rows))
    except Exception, e:
        traceback.print_exc()
        print str(e)
# File: api/indexer/tzprofiles_indexer/models.py | repo: clehner/tzprofiles | license: Apache-2.0
from tortoise import Model, fields
class TZProfile(Model):
    account = fields.CharField(36, pk=True)
    contract = fields.CharField(36)
    valid_claims = fields.JSONField()
    invalid_claims = fields.JSONField()
    errored = fields.BooleanField()

    class Meta:
        table = "tzprofiles"
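For context, a Tortoise model like this only becomes queryable once the ORM has been initialised against a database. A hypothetical configuration sketch follows; the SQLite URL and the app label are illustrative assumptions, not taken from this repository:

```python
# Hypothetical Tortoise ORM config; the db_url and connection name are
# assumptions -- only the module path mirrors the file above.
TORTOISE_ORM = {
    "connections": {"default": "sqlite://db.sqlite3"},
    "apps": {
        "models": {
            "models": ["tzprofiles_indexer.models"],
            "default_connection": "default",
        },
    },
}
```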
# File: Scripts/sims4communitylib/classes/time/common_alarm_handle.py | repo: ColonolNutty/Sims4CommunityLibrary | license: CC-BY-4.0
"""
The Sims 4 Community Library is licensed under the Creative Commons Attribution 4.0 International public license (CC BY 4.0).
https://creativecommons.org/licenses/by/4.0/
https://creativecommons.org/licenses/by/4.0/legalcode
Copyright (c) COLONOLNUTTY
"""
import os
from sims4.commands import Command, CommandType, CheatOutput
from sims4communitylib.utils.common_time_utils import CommonTimeUtils
from typing import Any, Callable

ON_RTD = os.environ.get('READTHEDOCS', None) == 'True'

if not ON_RTD:
    from scheduling import Timeline
    from alarms import AlarmHandle
    from date_and_time import DateAndTime, TimeSpan
else:
    # noinspection PyMissingOrEmptyDocstring
    class AlarmHandle:
        def cancel(self):
            pass

    # noinspection PyMissingOrEmptyDocstring
    class DateAndTime:
        pass

    # noinspection PyMissingOrEmptyDocstring
    class TimeSpan:
        pass

    # noinspection PyMissingOrEmptyDocstring
    class Timeline:
        pass


class CommonAlarmHandle(AlarmHandle):
    """A custom alarm handle that keeps track of when it is slated to trigger for the first time."""
    def __init__(
        self,
        owner: Any,
        on_alarm_triggered_callback: Callable[['CommonAlarmHandle'], None],
        timeline: Timeline,
        when: DateAndTime,
        should_repeat: bool=False,
        time_until_repeat: TimeSpan=None,
        accurate_repeat: bool=True,
        persist_across_zone_loads: bool=False
    ):
        self.started_at_date_and_time = when
        super().__init__(
            owner,
            on_alarm_triggered_callback,
            timeline,
            when,
            repeating=should_repeat,
            repeat_interval=time_until_repeat,
            accurate_repeat=accurate_repeat,
            cross_zone=persist_across_zone_loads
        )


if not ON_RTD:
    @Command('s4clib.print_current_time', command_type=CommandType.Live)
    def _s4clib_print_current_time(_connection: int=None):
        output = CheatOutput(_connection)
        output('Current time')
        output('Hour {} Minute {}'.format(CommonTimeUtils.get_current_date_and_time().hour(), CommonTimeUtils.get_current_date_and_time().minute()))
        output('Abs Hour {} Abs Minute {}'.format(CommonTimeUtils.get_current_date_and_time().absolute_hours(), CommonTimeUtils.get_current_date_and_time().absolute_minutes()))
# File: core/myauthbackend.py | repo: devendraotari/HRMS_project | license: BSD-2-Clause
from django.contrib.auth.backends import BaseBackend
from django.contrib.auth import get_user_model
class EmailPhoneBackend(BaseBackend):
    """
    Authentication backend that accepts either an e-mail address or a phone
    number together with a password.
    """
    def authenticate(self, request, email=None, phone=None, password=None):
        # Check the credentials and return a user on success.
        my_user_model = get_user_model()
        user = None
        try:
            print(f"{request.data.get('phone', None)}")
            if request.data.get('email', None):
                print(f"custom auth call{email}")
                user = my_user_model.objects.get(email=request.data.get('email', None))
            if request.data.get('phone', None):
                print("in auth phone")
                user = my_user_model.objects.get(phone=request.data.get('phone', None))
            print(f"user{user}")
            if user.check_password(password):
                return user  # return user on valid credentials
        except my_user_model.DoesNotExist as mmode:
            print(f"{mmode}")
            return None  # return None if no matching user exists
        except Exception:
            return None  # return None on any other error (e.g. user is None)

    def get_user(self, user_id):
        my_user_model = get_user_model()
        try:
            return my_user_model.objects.get(pk=user_id)
        except my_user_model.DoesNotExist:
            return None
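A custom backend like this only takes effect once it is listed in the Django project settings. A hedged sketch of that fragment follows; the dotted path assumes the file lives at core/myauthbackend.py, as its location suggests:

```python
# settings.py fragment -- the module path is an assumption based on the
# file's location; keeping ModelBackend preserves normal username login.
AUTHENTICATION_BACKENDS = [
    "core.myauthbackend.EmailPhoneBackend",
    "django.contrib.auth.backends.ModelBackend",  # default fallback
]
```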
361ee510413d5ff2e8e4d3a5aa90b44d49e73ac2 | 1,447 | py | Python | program/appID3.py | trungvuong55555/FlaskAPI_ExpertSystem | 6f7a557fefd093e901070fe2ec363e0c2ed8ffa2 | [
"MIT"
] | null | null | null | program/appID3.py | trungvuong55555/FlaskAPI_ExpertSystem | 6f7a557fefd093e901070fe2ec363e0c2ed8ffa2 | [
"MIT"
] | null | null | null | program/appID3.py | trungvuong55555/FlaskAPI_ExpertSystem | 6f7a557fefd093e901070fe2ec363e0c2ed8ffa2 | [
"MIT"
] | null | null | null | from flask import Flask, request, render_template
import pickle
app = Flask(__name__)  # initialise the Flask app
model = pickle.load(open('modelID3.pkl', 'rb'))  # unpickle the trained ID3 model
@app.route('/',methods =["GET", "POST"])
def home():
    if request.method == "POST":
        # read the eleven form fields a0..a10 and cast each value to int
        input_value = [int(request.form.get(f"a{i}")) for i in range(11)]
        # the model expects a 2-D array: one row per sample
        prediction = model.predict([input_value])
        # cast to string so it can be rendered on the page
        prediction = str(prediction)
        return "quality of wine is : " + prediction
return render_template('index.html')
if __name__ == "__main__":
app.run(debug=True)
| 28.94 | 99 | 0.57982 | 199 | 1,447 | 4.135678 | 0.442211 | 0.147023 | 0.18712 | 0.029162 | 0.034022 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012512 | 0.281963 | 1,447 | 49 | 100 | 29.530612 | 0.779596 | 0.119558 | 0 | 0 | 0 | 0 | 0.075099 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027778 | false | 0 | 0.055556 | 0 | 0.138889 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
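The view collects eleven numbered form fields and casts each to `int` before building the feature vector. That pattern works on any form-like mapping; a minimal sketch (`form_to_vector` is a hypothetical helper name):

```python
def form_to_vector(form, n=11):
    """Collect fields a0..a(n-1) from a form-like mapping and cast to int.

    Condenses the per-field request.form.get(...) / int(...) pairs in the
    view above into one step; raises if a field is missing or non-numeric,
    just as the original casts would.
    """
    return [int(form.get(f"a{i}")) for i in range(n)]
```

With Flask, `request.form` supports the same `.get` interface, so the helper could be dropped into the view unchanged.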
362141754e09b014da8e86cb87845189f022576c | 448 | py | Python | home_work/App/views.py | jianghaiming0707/python1806homework | 2509f75794ac0ef8711cb1d1c2c4378408619a75 | [
"Apache-2.0"
] | 1 | 2018-06-28T01:01:35.000Z | 2018-06-28T01:01:35.000Z | home_work/App/views.py | jianghaiming0707/python1806homework | 2509f75794ac0ef8711cb1d1c2c4378408619a75 | [
"Apache-2.0"
] | 6 | 2018-06-25T04:50:23.000Z | 2018-07-03T10:24:08.000Z | home_work/App/views.py | jianghaiming0707/python1806homework | 2509f75794ac0ef8711cb1d1c2c4378408619a75 | [
"Apache-2.0"
] | 42 | 2018-06-19T09:48:04.000Z | 2019-09-15T01:20:06.000Z | from django.shortcuts import render
from django.http import HttpResponse
from App.models import *
# Create your views here.
def search(req):
    myclass = Myclass.objects.all()
    return render(req, 'test.html', context={'myclass': myclass})

def students(req):
    students_id = req.GET.get('classid')
    studentt = Student.objects.filter(cid_id=students_id)
return render(req,'student.html',context={'students':studentt}) | 34.461538 | 67 | 0.747768 | 61 | 448 | 5.442623 | 0.52459 | 0.060241 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.118304 | 448 | 13 | 67 | 34.461538 | 0.840506 | 0.051339 | 0 | 0 | 0 | 0 | 0.101415 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.272727 | 0 | 0.636364 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
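The `students` view filters the queryset by the `classid` query parameter. The same filter, expressed ORM-free over plain dicts (a hypothetical stand-in, not the Django queryset API):

```python
def students_in_class(students, class_id):
    """Plain-Python analogue of Student.objects.filter(cid_id=class_id):
    keep only the rows whose foreign-key column matches the requested class."""
    return [s for s in students if s["cid_id"] == class_id]
```

In the real view the filtering happens lazily in the database; this list comprehension only illustrates the predicate being applied.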
36230cd6aca7407d1176980b4ef533beffe100f8 | 9,756 | py | Python | pysnmp-with-texts/HPN-ICF-VOICE-IF-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 8 | 2019-05-09T17:04:00.000Z | 2021-06-09T06:50:51.000Z | pysnmp-with-texts/HPN-ICF-VOICE-IF-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 4 | 2019-05-31T16:42:59.000Z | 2020-01-31T21:57:17.000Z | pysnmp-with-texts/HPN-ICF-VOICE-IF-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 10 | 2019-04-30T05:51:36.000Z | 2022-02-16T03:33:41.000Z | #
# PySNMP MIB module HPN-ICF-VOICE-IF-MIB (http://snmplabs.com/pysmi)
# ASN.1 source file:///Users/davwang4/Dev/mibs.snmplabs.com/asn1/HPN-ICF-VOICE-IF-MIB
# Produced by pysmi-0.3.4 at Wed May 1 13:41:57 2019
# On host DAVWANG4-M-1475 platform Darwin version 18.5.0 by user davwang4
# Using Python version 3.7.3 (default, Mar 27 2019, 09:23:15)
#
ObjectIdentifier, Integer, OctetString = mibBuilder.importSymbols("ASN1", "ObjectIdentifier", "Integer", "OctetString")
NamedValues, = mibBuilder.importSymbols("ASN1-ENUMERATION", "NamedValues")
SingleValueConstraint, ValueSizeConstraint, ConstraintsUnion, ConstraintsIntersection, ValueRangeConstraint = mibBuilder.importSymbols("ASN1-REFINEMENT", "SingleValueConstraint", "ValueSizeConstraint", "ConstraintsUnion", "ConstraintsIntersection", "ValueRangeConstraint")
hpnicfVoice, = mibBuilder.importSymbols("HPN-ICF-OID-MIB", "hpnicfVoice")
ifIndex, = mibBuilder.importSymbols("IF-MIB", "ifIndex")
NotificationGroup, ModuleCompliance = mibBuilder.importSymbols("SNMPv2-CONF", "NotificationGroup", "ModuleCompliance")
TimeTicks, Unsigned32, Gauge32, NotificationType, MibIdentifier, ModuleIdentity, Counter32, IpAddress, iso, Counter64, ObjectIdentity, Integer32, MibScalar, MibTable, MibTableRow, MibTableColumn, Bits = mibBuilder.importSymbols("SNMPv2-SMI", "TimeTicks", "Unsigned32", "Gauge32", "NotificationType", "MibIdentifier", "ModuleIdentity", "Counter32", "IpAddress", "iso", "Counter64", "ObjectIdentity", "Integer32", "MibScalar", "MibTable", "MibTableRow", "MibTableColumn", "Bits")
DisplayString, TextualConvention = mibBuilder.importSymbols("SNMPv2-TC", "DisplayString", "TextualConvention")
hpnicfVoiceInterface = ModuleIdentity((1, 3, 6, 1, 4, 1, 11, 2, 14, 11, 15, 2, 39, 13))
hpnicfVoiceInterface.setRevisions(('2007-12-10 17:00',))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
if mibBuilder.loadTexts: hpnicfVoiceInterface.setRevisionsDescriptions(('The initial version of this MIB file.',))
if mibBuilder.loadTexts: hpnicfVoiceInterface.setLastUpdated('200712101700Z')
if mibBuilder.loadTexts: hpnicfVoiceInterface.setOrganization('')
if mibBuilder.loadTexts: hpnicfVoiceInterface.setContactInfo('')
if mibBuilder.loadTexts: hpnicfVoiceInterface.setDescription('This MIB file is to provide the definition of the voice interface general configuration.')
hpnicfVoiceIfObjects = MibIdentifier((1, 3, 6, 1, 4, 1, 11, 2, 14, 11, 15, 2, 39, 13, 1))
hpnicfVoiceIfConfigTable = MibTable((1, 3, 6, 1, 4, 1, 11, 2, 14, 11, 15, 2, 39, 13, 1, 1), )
if mibBuilder.loadTexts: hpnicfVoiceIfConfigTable.setStatus('current')
if mibBuilder.loadTexts: hpnicfVoiceIfConfigTable.setDescription('The table contains configurable parameters for both analog voice interface and digital voice interface.')
hpnicfVoiceIfConfigEntry = MibTableRow((1, 3, 6, 1, 4, 1, 11, 2, 14, 11, 15, 2, 39, 13, 1, 1, 1), ).setIndexNames((0, "IF-MIB", "ifIndex"))
if mibBuilder.loadTexts: hpnicfVoiceIfConfigEntry.setStatus('current')
if mibBuilder.loadTexts: hpnicfVoiceIfConfigEntry.setDescription('The entry of voice interface table.')
hpnicfVoiceIfCfgCngOn = MibTableColumn((1, 3, 6, 1, 4, 1, 11, 2, 14, 11, 15, 2, 39, 13, 1, 1, 1, 1), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("enable", 1), ("disable", 2))).clone('enable')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: hpnicfVoiceIfCfgCngOn.setStatus('current')
if mibBuilder.loadTexts: hpnicfVoiceIfCfgCngOn.setDescription('This object indicates whether the silence gaps should be filled with background noise. It is applicable to FXO, FXS, E&M subscriber lines and E1/T1 voice subscriber line.')
hpnicfVoiceIfCfgNonLinearSwitch = MibTableColumn((1, 3, 6, 1, 4, 1, 11, 2, 14, 11, 15, 2, 39, 13, 1, 1, 1, 2), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("enable", 1), ("disable", 2))).clone('enable')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: hpnicfVoiceIfCfgNonLinearSwitch.setStatus('current')
if mibBuilder.loadTexts: hpnicfVoiceIfCfgNonLinearSwitch.setDescription('This object expresses the nonlinear processing is enable or disable for the voice interface. It is applicable to FXO, FXS, E&M subscriber lines and E1/T1 voice subscriber line. Currently, only digital voice subscriber lines can be set disabled.')
hpnicfVoiceIfCfgInputGain = MibTableColumn((1, 3, 6, 1, 4, 1, 11, 2, 14, 11, 15, 2, 39, 13, 1, 1, 1, 3), Integer32().subtype(subtypeSpec=ValueRangeConstraint(-140, 139))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: hpnicfVoiceIfCfgInputGain.setStatus('current')
if mibBuilder.loadTexts: hpnicfVoiceIfCfgInputGain.setDescription('This object indicates the amount of gain added to the receiver side of the voice interface. Unit is 0.1 db. It is applicable to FXO, FXS, E&M subscriber lines and E1/T1 voice subscriber line.')
hpnicfVoiceIfCfgOutputGain = MibTableColumn((1, 3, 6, 1, 4, 1, 11, 2, 14, 11, 15, 2, 39, 13, 1, 1, 1, 4), Integer32().subtype(subtypeSpec=ValueRangeConstraint(-140, 139))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: hpnicfVoiceIfCfgOutputGain.setStatus('current')
if mibBuilder.loadTexts: hpnicfVoiceIfCfgOutputGain.setDescription('This object indicates the amount of gain added to the send side of the voice interface. Unit is 0.1 db. It is applicable to FXO, FXS, E&M subscriber lines and E1/T1 voice subscriber line.')
hpnicfVoiceIfCfgEchoCancelSwitch = MibTableColumn((1, 3, 6, 1, 4, 1, 11, 2, 14, 11, 15, 2, 39, 13, 1, 1, 1, 5), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("enable", 1), ("disable", 2))).clone('enable')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: hpnicfVoiceIfCfgEchoCancelSwitch.setStatus('current')
if mibBuilder.loadTexts: hpnicfVoiceIfCfgEchoCancelSwitch.setDescription('This object indicates whether the echo cancellation is enabled. It is applicable to FXO, FXS, E&M subscriber lines and E1/T1 voice subscriber line.')
hpnicfVoiceIfCfgEchoCancelDelay = MibTableColumn((1, 3, 6, 1, 4, 1, 11, 2, 14, 11, 15, 2, 39, 13, 1, 1, 1, 6), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 64))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: hpnicfVoiceIfCfgEchoCancelDelay.setStatus('current')
if mibBuilder.loadTexts: hpnicfVoiceIfCfgEchoCancelDelay.setDescription("This object indicates the delay of the echo cancellation for the voice interface. This value couldn't be modified unless hpnicfVoiceIfCfgEchoCancelSwitch is enable. Unit is milliseconds. It is applicable to FXO, FXS, E&M subscriber lines and E1/T1 voice subscriber line. The default value of this object is 32.")
hpnicfVoiceIfCfgTimerDialInterval = MibTableColumn((1, 3, 6, 1, 4, 1, 11, 2, 14, 11, 15, 2, 39, 13, 1, 1, 1, 7), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 300))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: hpnicfVoiceIfCfgTimerDialInterval.setStatus('current')
if mibBuilder.loadTexts: hpnicfVoiceIfCfgTimerDialInterval.setDescription('The interval, in seconds, between two dialing numbers. The default value of this object is 10 seconds. It is applicable to FXO, FXS, E&M subscriber lines and E1/T1 with loop-start or ground-start protocol voice subscriber line.')
hpnicfVoiceIfCfgTimerFirstDial = MibTableColumn((1, 3, 6, 1, 4, 1, 11, 2, 14, 11, 15, 2, 39, 13, 1, 1, 1, 8), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 300))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: hpnicfVoiceIfCfgTimerFirstDial.setStatus('current')
if mibBuilder.loadTexts: hpnicfVoiceIfCfgTimerFirstDial.setDescription('The period of time, in seconds, before dialing the first number. The default value of this object is 10 seconds. It is applicable to FXO, FXS subscriber lines and E1/T1 with loop-start or ground-start protocol voice subscriber line.')
hpnicfVoiceIfCfgPrivateline = MibTableColumn((1, 3, 6, 1, 4, 1, 11, 2, 14, 11, 15, 2, 39, 13, 1, 1, 1, 9), DisplayString().subtype(subtypeSpec=ValueSizeConstraint(0, 31))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: hpnicfVoiceIfCfgPrivateline.setStatus('current')
if mibBuilder.loadTexts: hpnicfVoiceIfCfgPrivateline.setDescription('This object indicates the E.164 phone number for plar mode. It is applicable to FXO, FXS, E&M subscriber lines and E1/T1 voice subscriber line.')
hpnicfVoiceIfCfgRegTone = MibTableColumn((1, 3, 6, 1, 4, 1, 11, 2, 14, 11, 15, 2, 39, 13, 1, 1, 1, 10), OctetString().subtype(subtypeSpec=ConstraintsUnion(ValueSizeConstraint(0, 0), ValueSizeConstraint(2, 3), ))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: hpnicfVoiceIfCfgRegTone.setStatus('current')
if mibBuilder.loadTexts: hpnicfVoiceIfCfgRegTone.setDescription('This object uses 2 or 3 letter country code specify voice parameters of different countrys. This value will take effect on all voice interfaces of all cards on the device.')
mibBuilder.exportSymbols("HPN-ICF-VOICE-IF-MIB", hpnicfVoiceInterface=hpnicfVoiceInterface, hpnicfVoiceIfCfgEchoCancelDelay=hpnicfVoiceIfCfgEchoCancelDelay, hpnicfVoiceIfConfigEntry=hpnicfVoiceIfConfigEntry, PYSNMP_MODULE_ID=hpnicfVoiceInterface, hpnicfVoiceIfObjects=hpnicfVoiceIfObjects, hpnicfVoiceIfCfgNonLinearSwitch=hpnicfVoiceIfCfgNonLinearSwitch, hpnicfVoiceIfCfgTimerFirstDial=hpnicfVoiceIfCfgTimerFirstDial, hpnicfVoiceIfCfgPrivateline=hpnicfVoiceIfCfgPrivateline, hpnicfVoiceIfCfgInputGain=hpnicfVoiceIfCfgInputGain, hpnicfVoiceIfCfgRegTone=hpnicfVoiceIfCfgRegTone, hpnicfVoiceIfCfgTimerDialInterval=hpnicfVoiceIfCfgTimerDialInterval, hpnicfVoiceIfCfgCngOn=hpnicfVoiceIfCfgCngOn, hpnicfVoiceIfCfgEchoCancelSwitch=hpnicfVoiceIfCfgEchoCancelSwitch, hpnicfVoiceIfCfgOutputGain=hpnicfVoiceIfCfgOutputGain, hpnicfVoiceIfConfigTable=hpnicfVoiceIfConfigTable)
| 154.857143 | 863 | 0.791513 | 1,166 | 9,756 | 6.620926 | 0.212693 | 0.045078 | 0.078886 | 0.007254 | 0.461529 | 0.346503 | 0.335363 | 0.331606 | 0.331606 | 0.331606 | 0 | 0.058438 | 0.093173 | 9,756 | 62 | 864 | 157.354839 | 0.814174 | 0.034235 | 0 | 0 | 0 | 0.185185 | 0.325507 | 0.008074 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.148148 | 0 | 0.148148 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
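The MIB constrains the input/output gain columns with `ValueRangeConstraint(-140, 139)` in units of 0.1 dB. A small validator sketch (a hypothetical helper, not part of the generated pysmi module) shows how an agent-side check of that constraint might look:

```python
def validate_gain(tenths_of_db):
    """Check a gain against the ValueRangeConstraint(-140, 139) used by
    hpnicfVoiceIfCfgInputGain/OutputGain (unit: 0.1 dB) and return the
    value converted to plain dB."""
    if not -140 <= tenths_of_db <= 139:
        raise ValueError(f"gain {tenths_of_db} outside [-140, 139]")
    return tenths_of_db / 10.0  # value in dB
```

The generated module enforces the same bounds through pysnmp's subtype machinery; the sketch just makes the range and the 0.1 dB scaling explicit.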
3626cc57d851fc7ca881f30af21ead100d822372 | 1,043 | py | Python | pointnet2/tf_ops/sampling/tf_sampling.py | ltriess/pointnet2_keras | 29be56161c8c772442b85b8fda300d10ff7fe7b3 | [
"MIT"
] | 2 | 2022-02-06T23:12:15.000Z | 2022-03-28T06:48:52.000Z | pointnet2/tf_ops/sampling/tf_sampling.py | ltriess/pointnet2_keras | 29be56161c8c772442b85b8fda300d10ff7fe7b3 | [
"MIT"
] | null | null | null | pointnet2/tf_ops/sampling/tf_sampling.py | ltriess/pointnet2_keras | 29be56161c8c772442b85b8fda300d10ff7fe7b3 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
""" Furthest point sampling
Original author: Haoqiang Fan
Modified by Charles R. Qi
All Rights Reserved. 2017.
Modified by Larissa Triess (2020)
"""
import os
import sys
import tensorflow as tf
from tensorflow.python.framework import ops
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
sys.path.append(BASE_DIR)
sampling_module = tf.load_op_library(os.path.join(BASE_DIR, "tf_sampling_so.so"))
def farthest_point_sample(k: int, points: tf.Tensor) -> tf.Tensor:
"""Returns the indices of the k farthest points in points
Arguments:
k : int
The number of points to consider.
    points : tf.Tensor(shape=(batch_size, P1, 3), dtype=tf.float32)
        A batch of point clouds with P1 points each, given as xyz coordinates.
Returns:
indices : tf.Tensor(shape=(batch_size, k), dtype=tf.int32)
The indices of the k farthest points in points.
"""
return sampling_module.farthest_point_sample(points, k)
ops.NoGradient("FarthestPointSample")
| 25.439024 | 81 | 0.701822 | 152 | 1,043 | 4.690789 | 0.526316 | 0.044881 | 0.053296 | 0.042076 | 0.168303 | 0.106592 | 0.106592 | 0.106592 | 0.106592 | 0 | 0 | 0.020262 | 0.19559 | 1,043 | 40 | 82 | 26.075 | 0.829559 | 0.534995 | 0 | 0 | 0 | 0 | 0.083141 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.4 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
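`farthest_point_sample` delegates to the compiled `tf_sampling_so.so` op, so the algorithm itself is hidden. The greedy procedure it implements can be sketched in plain Python (a framework-free illustration, not the CUDA kernel):

```python
def farthest_point_sample(points, k, seed_index=0):
    """Greedy farthest-point sampling over a list of xyz tuples: repeatedly
    add the point whose minimum distance to the already-chosen set is largest.
    Squared distances suffice because sqrt is monotonic."""
    chosen = [seed_index]
    # min squared distance from each point to the chosen set so far
    d2 = [float("inf")] * len(points)
    for _ in range(k - 1):
        last = points[chosen[-1]]
        for i, p in enumerate(points):
            dist = sum((a - b) ** 2 for a, b in zip(p, last))
            if dist < d2[i]:
                d2[i] = dist
        # next sample: the point farthest from everything chosen so far
        chosen.append(max(range(len(points)), key=lambda i: d2[i]))
    return chosen
```

Tracking only the running minimum distance per point keeps each iteration O(N), which is the same trick the GPU op uses to stay fast on large clouds.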
3626d45d010076e81364291684b9ea5d2493fb6c | 561 | py | Python | gql/resolvers/mutations/scope.py | apoveda25/graphql-python-server | eb7b911aa1116327120b857beb17da3e30523e74 | [
"Apache-2.0"
] | 4 | 2020-06-20T11:54:04.000Z | 2021-09-07T11:41:32.000Z | gql/resolvers/mutations/scope.py | apoveda25/graphql-python-server | eb7b911aa1116327120b857beb17da3e30523e74 | [
"Apache-2.0"
] | null | null | null | gql/resolvers/mutations/scope.py | apoveda25/graphql-python-server | eb7b911aa1116327120b857beb17da3e30523e74 | [
"Apache-2.0"
] | null | null | null | from ariadne import MutationType
from models.scope import Scope
from schemas.helpers.normalize import change_keys
from schemas.scope import ScopeCreate
mutations_resolvers = MutationType()
@mutations_resolvers.field("scopeCreate")
async def resolve_scope_create(_, info, scope) -> dict:
store_data = Scope.get_instance()
data = ScopeCreate(**scope, key=f'{scope["collection"]}{scope["action"]}')
normalize = change_keys(data.dict(exclude_none=True), key="_key")
return await store_data.create(normalize)
| 31.166667 | 78 | 0.773619 | 72 | 561 | 5.861111 | 0.513889 | 0.052133 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 561 | 17 | 79 | 33 | 0.855984 | 0 | 0 | 0 | 0 | 0 | 0.094474 | 0.067736 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.416667 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
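The resolver composes a document key from `collection` and `action`, then normalizes the field name to `_key` before storing. A sketch of that key handling (the behaviour of the project's `change_keys` helper is assumed here, and `build_scope` is a hypothetical name):

```python
def build_scope(scope):
    """Sketch of the resolver's key handling: compose the composite key
    f'{collection}{action}' and rename it to the store's "_key" field,
    mimicking what change_keys(..., key="_key") is assumed to do."""
    data = dict(scope, key=f"{scope['collection']}{scope['action']}")
    data["_key"] = data.pop("key")
    return data
```

The `_key` convention matches ArangoDB-style document stores, where the primary key lives in a reserved underscore-prefixed field.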
362e01958a44c444693e75555e77973e632954c9 | 5,926 | py | Python | nevermined_compute_api/workflow_utils.py | nevermined-io/compute-api | c0d3b1875b3b95ffa78374ff89a4fefd0d3af598 | [
"Apache-2.0"
] | null | null | null | nevermined_compute_api/workflow_utils.py | nevermined-io/compute-api | c0d3b1875b3b95ffa78374ff89a4fefd0d3af598 | [
"Apache-2.0"
] | 3 | 2020-11-20T11:57:04.000Z | 2021-04-06T10:56:49.000Z | nevermined_compute_api/workflow_utils.py | nevermined-io/compute-api | c0d3b1875b3b95ffa78374ff89a4fefd0d3af598 | [
"Apache-2.0"
] | null | null | null | import os
from pathlib import Path
import json
from contracts_lib_py.utils import get_account
from common_utils_py.ddo.ddo import DDO
from nevermined_sdk_py import Nevermined, Config
import yaml
from configparser import ConfigParser
config_parser = ConfigParser()
configuration = config_parser.read('config.ini')
GROUP = config_parser.get('resources', 'group') # str | The custom resource's group name
VERSION = config_parser.get('resources', 'version') # str | The custom resource's version
NAMESPACE = config_parser.get('resources', 'namespace') # str | The custom resource's namespace
KEYFILE = json.loads(Path(os.getenv("PROVIDER_KEYFILE")).read_text())
def create_execution(service_agreement_id, workflow):
"""Creates the argo workflow template
Args:
service_agreement_id (str): The id of the service agreement being executed
workflow (dict): The workflow submitted to the compute api
Returns:
dict: The workflow template filled by the compute api with all the parameters
"""
ddo = DDO(dictionary=workflow)
workflow_template = get_workflow_template()
workflow_template['apiVersion'] = GROUP + '/' + VERSION
workflow_template['metadata']['namespace'] = NAMESPACE
workflow_template['spec']['arguments']['parameters'] += create_arguments(ddo)
workflow_template["spec"]["workflowMetadata"]["labels"][
"serviceAgreementId"] = service_agreement_id
if ddo.metadata["main"]["type"] == "fl-coordinator":
workflow_template["spec"]["entrypoint"] = "coordinator-workflow"
else:
workflow_template["spec"]["entrypoint"] = "compute-workflow"
return workflow_template
def create_arguments(ddo):
"""Create the arguments that need to be add to the argo template.
Args:
ddo (:py:class:`common_utils_py.ddo.ddo.DDO`): The workflow DDO.
Returns:
list: The list of arguments to be appended to the argo workflow
"""
args = ''
image = ''
tag = ''
if ddo.metadata["main"]["type"] != "fl-coordinator":
workflow = ddo.metadata["main"]["workflow"]
options = {
"resources": {
"metadata.url": "http://172.17.0.1:5000",
},
"keeper-contracts": {
"keeper.url": "http://172.17.0.1:8545"
}
}
config = Config(options_dict=options)
nevermined = Nevermined(config)
# TODO: Currently this only supports one stage
transformation_did = workflow["stages"][0]["transformation"]["id"]
transformation_ddo = nevermined.assets.resolve(transformation_did)
transformation_metadata = transformation_ddo.get_service("metadata")
# get args and container
args = transformation_metadata.main["algorithm"]["entrypoint"]
image = transformation_metadata.main["algorithm"]["requirements"]["container"]["image"]
tag = transformation_metadata.main["algorithm"]["requirements"]["container"]["tag"]
arguments = [
{
"name": "credentials",
# remove white spaces
"value": json.dumps(KEYFILE, separators=(",", ":"))
},
{
"name": "password",
"value": os.getenv("PROVIDER_PASSWORD")
},
{
"name": "metadata_url",
"value": "http://172.17.0.1:5000"
},
{
"name": "gateway_url",
"value": "http://172.17.0.1:8030"
},
{
"name": "node",
"value": "http://172.17.0.1:8545"
},
{
"name": "secret_store_url",
"value": "http://172.17.0.1:12001"
},
{
"name": "workflow",
"value": f"did:nv:{ddo.asset_id[2:]}"
},
{
"name": "verbose",
"value": "false"
},
{
"name": "transformation_container_image",
"value": f"{image}:{tag}"
},
{
"name": "transformation_arguments",
"value": args
}
]
return arguments
def setup_keeper():
init_account_envvars()
account = get_account(0)
if account is None:
raise AssertionError(f'Nevermined Gateway cannot run without a valid '
f'ethereum account. Account address was not found in the environment'
f'variable `PROVIDER_ADDRESS`. Please set the following evnironment '
f'variables and try again: `PROVIDER_ADDRESS`, `PROVIDER_PASSWORD`, '
f', `PROVIDER_KEYFILE`, `RSA_KEYFILE` and `RSA_PASSWORD`.')
if not account.key_file and not (account.password and account.key_file):
raise AssertionError(f'Nevermined Gateway cannot run without a valid '
f'ethereum account with either a password and '
f'keyfile/encrypted-key-string '
f'or private key. Current account has password {account.password}, '
f'keyfile {account.key_file}, encrypted-key {account._encrypted_key} '
f'and private-key {account._private_key}.')
def init_account_envvars():
os.environ['PARITY_ADDRESS'] = os.getenv('PROVIDER_ADDRESS', '')
os.environ['PARITY_PASSWORD'] = os.getenv('PROVIDER_PASSWORD', '')
os.environ['PARITY_KEYFILE'] = os.getenv('PROVIDER_KEYFILE', '')
os.environ['PSK-RSA_PRIVKEY_FILE'] = os.getenv('RSA_PRIVKEY_FILE', '')
os.environ['PSK-RSA_PUBKEY_FILE'] = os.getenv('RSA_PUBKEY_FILE', '')
def get_workflow_template():
"""Returns a pre configured argo workflow template.
Returns:
dict: argo workflow template
"""
path = Path(__file__).parent / "argo-workflow.yaml"
with path.open() as f:
workflow_template = yaml.safe_load(f)
return workflow_template
| 35.065089 | 99 | 0.602599 | 637 | 5,926 | 5.455259 | 0.287284 | 0.073669 | 0.01554 | 0.017266 | 0.162302 | 0.133237 | 0.083741 | 0.067338 | 0.043165 | 0.043165 | 0 | 0.016207 | 0.271178 | 5,926 | 168 | 100 | 35.27381 | 0.788377 | 0.132636 | 0 | 0.033613 | 0 | 0 | 0.323907 | 0.030453 | 0 | 0 | 0 | 0.005952 | 0.016807 | 1 | 0.042017 | false | 0.067227 | 0.067227 | 0 | 0.134454 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
36323555756558519c34b677df24af6e2865a756 | 2,797 | py | Python | src/cltl/backend/source/pyaudio_source.py | leolani/cltl-backend | 4ecc6227f9d48e40b9f59e6d78e0fcee9cdadbd4 | [
"MIT"
] | null | null | null | src/cltl/backend/source/pyaudio_source.py | leolani/cltl-backend | 4ecc6227f9d48e40b9f59e6d78e0fcee9cdadbd4 | [
"MIT"
] | null | null | null | src/cltl/backend/source/pyaudio_source.py | leolani/cltl-backend | 4ecc6227f9d48e40b9f59e6d78e0fcee9cdadbd4 | [
"MIT"
] | null | null | null | import logging
import uuid
from typing import Iterable
import numpy as np
import pyaudio
from cltl.backend.api.util import raw_frames_to_np
from cltl.backend.spi.audio import AudioSource
logger = logging.getLogger(__name__)
class PyAudioSource(AudioSource):
BUFFER = 8
def __init__(self, rate, channels, frame_size):
self.id = str(uuid.uuid4())[:6]
self._rate = rate
self._channels = channels
self._frame_size = frame_size
self._pyaudio = pyaudio.PyAudio()
self._active = False
self._start_time = None
self._time = None
@property
def audio(self) -> Iterable[np.array]:
return raw_frames_to_np(self, self.frame_size, self.channels, self.depth)
@property
def rate(self) -> int:
return self._rate
@property
def channels(self) -> int:
return self._channels
@property
def frame_size(self) -> int:
return self._frame_size
@property
def depth(self) -> int:
return 2
@property
def active(self):
return self._active
@property
def time(self):
return self._mic_time - self._start_time
@property
def _mic_time(self):
return self._time
@_mic_time.setter
def _mic_time(self, stream_time):
advanced = stream_time - self._time
if advanced > self._stream.get_input_latency():
logger.exception("Latency exceeded buffer (%.4fsec) - dropped frames: %.4fsec",
self._stream.get_input_latency(), advanced)
self._time = stream_time
def stop(self):
self._active = False
logger.debug("Stopped microphone (%s)", self.id)
def __enter__(self):
self._stream = self._pyaudio.open(self._rate, self._channels, pyaudio.paInt16, input=True,
frames_per_buffer=self.BUFFER * self._frame_size)
self._active = True
self._start_time = self._stream.get_time()
self._time = self._start_time
logger.debug("Opened microphone (%s) with rate: %s, channels: %s, frame_size: %s",
self.id, self._rate, self._channels, self._frame_size)
return self
def __exit__(self, exc_type, exc_val, exc_tb):
if self._active:
self._active = False
self._stream.close()
logger.debug("Closed microphone (%s)", self.id)
else:
logger.warning("Ignored close microphone (%s)", self.id)
def __iter__(self):
return self
def __next__(self):
if not self._active:
raise StopIteration()
data = self._stream.read(self._frame_size, exception_on_overflow=False)
self._mic_time = self._stream.get_time()
return data | 27.15534 | 98 | 0.620665 | 341 | 2,797 | 4.774194 | 0.269795 | 0.055283 | 0.047912 | 0.031327 | 0.081081 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00398 | 0.281373 | 2,797 | 103 | 99 | 27.15534 | 0.80597 | 0 | 0 | 0.168831 | 0 | 0 | 0.071122 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.194805 | false | 0 | 0.090909 | 0.116883 | 0.454545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
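The `_mic_time` setter logs dropped frames whenever more stream time elapsed between reads than the input latency can buffer. That check, isolated from PyAudio (a hypothetical helper working in plain seconds):

```python
def dropped_frames(prev_time, stream_time, input_latency):
    """Report how far the reader fell behind, mirroring the _mic_time setter
    in PyAudioSource: if more stream time advanced than the input latency
    can buffer, the excess corresponds to dropped audio. Returns seconds."""
    advanced = stream_time - prev_time
    return advanced - input_latency if advanced > input_latency else 0.0
```

In the real source the same comparison triggers the `logger.exception` call; returning the overshoot instead makes the logic unit-testable.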
36359877c7a4f6573f92718849e22bc0b0b933eb | 624 | py | Python | python2/examples/tutorial_threadednotifier.py | openEuler-BaseService/pyinotify | d6c8b832177945106901fb6c0cd5ae7d54df8247 | [
"MIT"
] | 1,509 | 2015-01-04T01:20:06.000Z | 2022-03-29T08:06:41.000Z | python2/examples/tutorial_threadednotifier.py | openEuler-BaseService/pyinotify | d6c8b832177945106901fb6c0cd5ae7d54df8247 | [
"MIT"
] | 98 | 2015-01-09T20:58:57.000Z | 2022-03-29T11:53:44.000Z | python2/examples/tutorial_threadednotifier.py | openEuler-BaseService/pyinotify | d6c8b832177945106901fb6c0cd5ae7d54df8247 | [
"MIT"
] | 333 | 2015-01-02T09:22:01.000Z | 2022-03-24T01:51:40.000Z | # ThreadedNotifier example from tutorial
#
# See: http://github.com/seb-m/pyinotify/wiki/Tutorial
#
import pyinotify
wm = pyinotify.WatchManager() # Watch Manager
mask = pyinotify.IN_DELETE | pyinotify.IN_CREATE # watched events
class EventHandler(pyinotify.ProcessEvent):
def process_IN_CREATE(self, event):
print "Creating:", event.pathname
def process_IN_DELETE(self, event):
print "Removing:", event.pathname
#log.setLevel(10)
notifier = pyinotify.ThreadedNotifier(wm, EventHandler())
notifier.start()
wdd = wm.add_watch('/tmp', mask, rec=True)
wm.rm_watch(wdd.values())
notifier.stop()
| 24 | 66 | 0.735577 | 78 | 624 | 5.782051 | 0.602564 | 0.04878 | 0.053215 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003717 | 0.137821 | 624 | 25 | 67 | 24.96 | 0.834572 | 0.217949 | 0 | 0 | 0 | 0 | 0.045833 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.076923 | null | null | 0.153846 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
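`EventHandler` works because `pyinotify.ProcessEvent` dispatches each event to a `process_<MASKNAME>` method by name, falling back to a default when none exists. A framework-free sketch of that convention (real pyinotify resolves the method similarly, but this `Handler` class is only an illustration):

```python
class Handler:
    """Dispatch events to process_<MASKNAME> methods, the naming convention
    pyinotify.ProcessEvent uses for the tutorial's EventHandler."""
    def handle(self, maskname, pathname):
        # look up a specific handler, else fall back to the default
        method = getattr(self, "process_" + maskname, self.process_default)
        return method(pathname)

    def process_IN_CREATE(self, pathname):
        return "Creating: " + pathname

    def process_default(self, pathname):
        return "Ignoring: " + pathname
```

Because dispatch happens via `getattr`, adding support for a new event type is just a matter of defining another `process_*` method.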
3636470ba1388bdc81e02a4d210d625e92578097 | 2,063 | py | Python | models/globalsenti.py | movabo/newstsc | dcf0cff31c0e463c9a96cdaa24e9b662ed53f7ed | [
"MIT"
] | 3 | 2021-02-28T19:14:49.000Z | 2022-03-29T12:10:14.000Z | models/globalsenti.py | movabo/newstsc | dcf0cff31c0e463c9a96cdaa24e9b662ed53f7ed | [
"MIT"
] | null | null | null | models/globalsenti.py | movabo/newstsc | dcf0cff31c0e463c9a96cdaa24e9b662ed53f7ed | [
"MIT"
] | 1 | 2021-05-13T10:27:12.000Z | 2021-05-13T10:27:12.000Z | # -*- coding: utf-8 -*-
# file: lcf_bert.py
# author: yangheng <yangheng@m.scnu.edu.cn>
# Copyright (C) 2019. All Rights Reserved.
# The code is based on repository: https://github.com/yangheng95/LCF-ABSA
import torch
import torch.nn as nn
from models.lcf import LCF_BERT
class Global_LCF(nn.Module):
def __init__(self, bert, opt):
super(Global_LCF, self).__init__()
self.max_num_components = 20
self.lcf = LCF_BERT(bert, opt, is_global_configuration=True)
self.linear_merge_remainder_comps = nn.Linear(opt.bert_dim * self.max_num_components, opt.bert_dim)
self.linear_merge_lcf_and_remainder = nn.Linear(opt.bert_dim * 2, opt.polarities_dim)
def _get_inputs_for_component(self, inputs, component_index):
assert component_index < self.max_num_components, "component_index({}) >= max_num_components({})".format(
component_index, self.max_num_components)
return [inputs[component_index * 4], inputs[component_index * 4 + 1], inputs[component_index * 4 + 2], inputs[
component_index * 4 + 3]]
def forward(self, inputs):
# this is the main component, which we want to classify
main_comp_inputs = self._get_inputs_for_component(inputs, 0)
main_lcf_output = self.lcf(main_comp_inputs)
# process remaining document components, which we don't want to classify but use as context
# TODO maybe disable gradient in these components? or at least in BERT in them?
lst_remainder_comp_outputs = []
for i in range(1, self.max_num_components):
cur_comp_inputs = self._get_inputs_for_component(inputs, i)
cur_comp_output = self.lcf(cur_comp_inputs)
lst_remainder_comp_outputs.append(cur_comp_output)
remainder_comp_outputs = torch.cat(lst_remainder_comp_outputs, dim=-1)
remainder_merged = self.linear_merge_remainder_comps(remainder_comp_outputs)
dense_out = self.linear_merge_lcf_and_remainder(main_lcf_output, remainder_merged)
return dense_out
| 38.924528 | 118 | 0.713039 | 292 | 2,063 | 4.702055 | 0.369863 | 0.081573 | 0.06992 | 0.072833 | 0.221413 | 0.15295 | 0.059723 | 0.059723 | 0 | 0 | 0 | 0.012107 | 0.199224 | 2,063 | 52 | 119 | 39.673077 | 0.819007 | 0.201648 | 0 | 0 | 0 | 0 | 0.027473 | 0.013431 | 0 | 0 | 0 | 0.019231 | 0.037037 | 1 | 0.111111 | false | 0 | 0.111111 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
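`_get_inputs_for_component` assumes the flat input list packs four tensors per document component, so component `i` occupies indices `4*i .. 4*i+3`. The same slicing in isolation (a pure-Python sketch over plain lists):

```python
def inputs_for_component(inputs, component_index, stride=4):
    """Pure-Python version of Global_LCF._get_inputs_for_component:
    the flat input list packs `stride` entries per document component,
    so component i owns the contiguous slice [i*stride, (i+1)*stride)."""
    start = component_index * stride
    return inputs[start:start + stride]
```

A slice expresses the four explicit index computations in the original as one range, and generalizes if the per-component input count ever changes.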
36364741a2a1bcdc096a9a1390acb2038c00084b | 10,351 | py | Python | analysis/outflows/__init__.py | lconaboy/seren3 | 5a2ec80adf0d69664d2ee874f5ba12cc02d6c337 | [
"CNRI-Python"
] | 1 | 2017-09-21T14:58:23.000Z | 2017-09-21T14:58:23.000Z | analysis/outflows/__init__.py | lconaboy/seren3 | 5a2ec80adf0d69664d2ee874f5ba12cc02d6c337 | [
"CNRI-Python"
] | 1 | 2020-09-09T08:52:43.000Z | 2020-09-09T08:52:43.000Z | analysis/outflows/__init__.py | lconaboy/seren3 | 5a2ec80adf0d69664d2ee874f5ba12cc02d6c337 | [
"CNRI-Python"
] | 1 | 2019-01-21T10:57:41.000Z | 2019-01-21T10:57:41.000Z | def integrate_surface_flux(flux_map, r):
    '''
    Integrates a healpix surface flux map to compute the total
    net flux out of the sphere.
    r is the radius of the sphere (a SimArray with length units)
    '''
    import numpy as np
    import healpy as hp
    from scipy.integrate import trapz
    from seren3.array import SimArray

    if not (isinstance(flux_map, SimArray) and isinstance(r, SimArray)):
        raise Exception("Must pass SimArrays")

    # Compute theta/phi
    npix = len(flux_map)
    nside = hp.npix2nside(npix)
    theta, phi = hp.pix2ang(nside, range(npix))
    r = r.in_units("kpc")  # put r in the same length units as the flux map (kpc)

    # Compute the integral: trapezoid rule over theta; the phi integral contributes a
    # factor of 2*pi (the mass_flux_radial function already deals with the unit vector)
    ix = theta.argsort()
    integrand = r**2 * np.sin(theta[ix]) * flux_map[ix]
    I = trapz(integrand, theta[ix]) * 2.*np.pi
    return SimArray(I, "Msol yr**-1")
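As an independent sanity check of the trapezoid-in-theta scheme above (pure numpy, no healpy/seren3 dependency; the helper name and the constant flux value are illustrative), a uniform radial flux F over a sphere of radius r should integrate to the sphere area times F, i.e. 4*pi*r**2*F:

```python
import numpy as np

def integrate_constant_flux(flux, r, ntheta=2001):
    # Same scheme as integrate_surface_flux: trapezoid rule over theta,
    # with the phi integral contributing a factor of 2*pi.
    theta = np.linspace(0.0, np.pi, ntheta)
    integrand = r**2 * np.sin(theta) * flux
    return 2.0 * np.pi * np.trapz(integrand, theta)

I = integrate_constant_flux(flux=2.0, r=3.0)
expected = 4.0 * np.pi * 3.0**2 * 2.0  # sphere area times flux
assert abs(I - expected) / expected < 1e-5
```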
def dm_by_dt(subsnap, filt=False, **kwargs):
    '''
    Compute the mass flux at the virial sphere
    '''
    import numpy as np
    from seren3.array import SimArray
    from seren3.analysis.render import render_spherical
    reload(render_spherical)

    rvir = SimArray(subsnap.region.radius, subsnap.info["unit_length"])
    to_distance = rvir/4.
    # to_distance = rvir
    in_units = "kg s**-1 m**-2"
    s = kwargs.pop("s", subsnap.pynbody_snapshot(filt=filt))

    if "nside" not in kwargs:
        kwargs["nside"] = 2**3
    kwargs["radius"] = to_distance
    kwargs["denoise"] = True

    im = render_spherical.render_quantity(subsnap.g, "mass_flux_radial", s=s, in_units=in_units, out_units=in_units, **kwargs)
    im.convert_units("Msol yr**-1 kpc**-2")

    def _compute_flux(im, to_distance, direction=None):
        im_tmp = im.copy()
        if direction == "out":
            # keep only outflow: zero the inflowing (negative) pixels
            im_tmp[np.where(im_tmp < 0)] = 0
        elif direction == "in":
            # keep only inflow: zero the outflowing (positive) pixels
            im_tmp[np.where(im_tmp > 0)] = 0
        return integrate_surface_flux(im_tmp, to_distance)

    F = _compute_flux(im, to_distance)
    F_plus = _compute_flux(im, to_distance, direction="out")
    F_minus = _compute_flux(im, to_distance, direction="in")
    return (F, F_plus, F_minus), im
def integrate_dm_by_dt(I1, I2, lbtime):
    from scipy.integrate import trapz
    return trapz(I1, lbtime) / trapz(I2, lbtime)
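A quick numerical check of this ratio-of-time-integrals, using plain numpy in place of scipy's trapz (the sample arrays are made up): if I1 = 2*I2 over the same lookback-time grid, the ratio is exactly 2.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 101)   # stand-in lookback-time grid
I2 = np.sin(t) + 1.5             # arbitrary positive integrand
I1 = 2.0 * I2
ratio = np.trapz(I1, t) / np.trapz(I2, t)
assert abs(ratio - 2.0) < 1e-12
```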
def mass_flux_hist(halo, back_to_aexp, return_data=True, **kwargs):
    '''
    Compute the history of in/outflows
    '''
    import numpy as np
    from seren3.scripts.mpi import write_mass_flux_hid_dict

    db = kwargs.pop("db", write_mass_flux_hid_dict.load_db(halo.base.path, halo.base.ioutput))
    if int(halo["id"]) in db.keys():
        catalogue = halo.base.halos(finder="ctrees")
        F = []
        age_arr = []
        hids = []
        iouts = []

        def _compute(h, db):
            hid = int(h["id"])
            res = db[hid]
            F.append(res["F"])
            age_arr.append(h.base.age)
            hids.append(hid)
            iouts.append(h.base.ioutput)

        _compute(halo, db)
        for prog in catalogue.iterate_progenitors(halo, back_to_aexp=back_to_aexp):
            prog_db = write_mass_flux_hid_dict.load_db(prog.base.path, prog.base.ioutput)
            if int(prog["id"]) in prog_db.keys():
                _compute(prog, prog_db)
            else:
                break

        F = np.array(F)
        age_arr = np.array(age_arr)
        hids = np.array(hids, dtype=np.int64)
        iouts = np.array(iouts)
        lbtime = halo.base.age - age_arr

        if return_data:
            return F, age_arr, lbtime, hids, iouts
        return F
    else:
        return None
def fesc_tot_outflow(snapshot):
    '''
    Integrate the total mass outflowed and photons escaped for all haloes
    '''
    import numpy as np
    from scipy.integrate import trapz
    from seren3.array import SimArray
    from seren3.scripts.mpi import time_int_fesc_all_halos, history_mass_flux_all_halos

    fesc_db = time_int_fesc_all_halos.load(snapshot)
    mass_flux_db = history_mass_flux_all_halos.load(snapshot)
    mass_flux_hids = np.array([int(res.idx) for res in mass_flux_db])

    def _integrate_halo(fesc_res, mass_flux_res):
        photons_escaped = SimArray(fesc_res["I1"], "s**-1").in_units("yr**-1")
        cum_photons_escaped = trapz(photons_escaped, fesc_res["lbtime"].in_units("yr"))
        F, F_plus, F_minus = mass_flux_res["F"].transpose()
        F_plus = SimArray(F_plus, "Msol yr**-1")
        F_minus = SimArray(F_minus, "Msol yr**-1")
        if len(F_plus) != len(photons_escaped):
            return np.nan, np.nan
        cum_outflowed_mass = trapz(F_plus, mass_flux_res["lbtime"].in_units("yr"))
        cum_inflowed_mass = np.abs(trapz(F_minus, mass_flux_res["lbtime"].in_units("yr")))
        # return cum_photons_escaped, cum_outflowed_mass - cum_inflowed_mass
        return cum_photons_escaped, cum_outflowed_mass

    nphotons_escaped = np.zeros(len(fesc_db))
    tot_mass_outflowed = np.zeros(len(fesc_db))
    mvir = np.zeros(len(fesc_db))
    for i in range(len(fesc_db)):
        hid = int(fesc_db[i].idx)
        fesc_res = fesc_db[i].result
        mass_flux_res_ix = np.abs(mass_flux_hids - hid).argmin()
        mass_flux_res = mass_flux_db[mass_flux_res_ix].result
        nphotons_escaped[i], tot_mass_outflowed[i] = _integrate_halo(fesc_res, mass_flux_res)
        mvir[i] = fesc_res["Mvir"]

    ix = np.where(np.logical_and(~np.isnan(nphotons_escaped), ~np.isnan(tot_mass_outflowed)))
    nphotons_escaped = nphotons_escaped[ix]
    tot_mass_outflowed = tot_mass_outflowed[ix]
    mvir = mvir[ix]
    return nphotons_escaped, tot_mass_outflowed, mvir
def fesc_mean_time_outflow(snapshot):
    '''
    Integrate the total mass outflowed and photons escaped for all haloes
    '''
    import numpy as np
    from scipy.integrate import trapz
    from seren3.array import SimArray
    from seren3.scripts.mpi import time_int_fesc_all_halos, history_mass_flux_all_halos

    fesc_db = time_int_fesc_all_halos.load(snapshot)
    mass_flux_db = history_mass_flux_all_halos.load(snapshot)
    mass_flux_hids = np.array([int(res.idx) for res in mass_flux_db])

    def _integrate_halo(fesc_res, mass_flux_res):
        photons_escaped = SimArray(fesc_res["I1"], "s**-1").in_units("yr**-1")
        # cum_photons_escaped = trapz(photons_escaped, fesc_res["lbtime"].in_units("yr"))
        cum_photons_escaped = fesc_res["tint_fesc_hist"][0]
        F, F_plus, F_minus = mass_flux_res["F"].transpose()
        F_plus = SimArray(F_plus, "Msol yr**-1")
        F_minus = SimArray(F_minus, "Msol yr**-1")
        if len(F_plus) != len(photons_escaped):
            return np.nan, np.nan
        lbtime = mass_flux_res["lbtime"]
        F_net_outflow = F_plus - np.abs(F_minus)
        if len(np.where(np.isnan(F_net_outflow))[0]) > 0:  # was: len(np.where(...)[0] > 0)
            return np.nan, np.nan
        ix = np.where(F_net_outflow < 0.)
        if len(ix[0]) == 0:  # was: len(ix[0] == 0); net outflow at all times
            return cum_photons_escaped, lbtime[-1]
        else:
            time_outflow = [0]
            for i in ix[0]:
                if i == 0:
                    continue
                time_outflow.append(lbtime[i - 1])
            time_spent = np.zeros(len(time_outflow) - 1)
            for i in range(len(time_spent)):
                time_spent[i] = time_outflow[i+1] - time_outflow[i]
            return cum_photons_escaped, time_spent.mean()

    nphotons_escaped = np.zeros(len(fesc_db))
    time_spent_net_outflow = np.zeros(len(fesc_db))
    mvir = np.zeros(len(fesc_db))
    for i in range(len(fesc_db)):
        hid = int(fesc_db[i].idx)
        fesc_res = fesc_db[i].result
        mass_flux_res_ix = np.abs(mass_flux_hids - hid).argmin()
        mass_flux_res = mass_flux_db[mass_flux_res_ix].result
        nphotons_escaped[i], time_spent_net_outflow[i] = _integrate_halo(fesc_res, mass_flux_res)
        mvir[i] = fesc_res["Mvir"]

    ix = np.where(np.logical_and(~np.isnan(nphotons_escaped),\
                  np.logical_and(~np.isnan(time_spent_net_outflow),\
                  time_spent_net_outflow > 0)))
    nphotons_escaped = nphotons_escaped[ix]
    time_spent_net_outflow = time_spent_net_outflow[ix]
    mvir = mvir[ix]
    return nphotons_escaped, SimArray(time_spent_net_outflow, "Gyr"), mvir
def plot(sims, iout, labels, cols, ax=None, **kwargs):
    import numpy as np
    import matplotlib.pylab as plt
    from seren3.analysis import plots

    if ax is None:
        ax = plt.gca()

    ls = ["-", "--"]
    lw = [3., 1.5]
    for sim, label, col, lsi, lwi in zip(sims, labels, cols, ls, lw):
        snap = sim[iout]
        nphotons_escaped, tot_mass_outflowed, mvir = fesc_tot_outflow(snap)
        print "%e" % nphotons_escaped.sum()
        log_mvir = np.log10(mvir)
        x = np.log10(tot_mass_outflowed)
        y = np.log10(nphotons_escaped)
        ix = np.where(np.logical_and(log_mvir >= 7.5, x >= 5.5))
        x = x[ix]
        y = y[ix]
        ix = np.where(np.logical_and(np.isfinite(x), np.isfinite(y)))
        x = x[ix]
        y = y[ix]
        bc, mean, std, sterr = plots.fit_scatter(x, y, ret_sterr=True, **kwargs)
        ax.scatter(x, y, alpha=0.10, s=5, color=col)
        e = ax.errorbar(bc, mean, yerr=std, color=col, label=label,\
                        fmt="o", markerfacecolor=col, mec='k',\
                        capsize=2, capthick=2, elinewidth=2, linewidth=lwi, linestyle=lsi)
        # ax.plot(bc, mean, color=col, label=None, linewidth=3., linestyle="-")
        # ax.fill_between(bc, mean-std, mean+std, facecolor=col, alpha=0.35, interpolate=True, label=label)

    ax.set_xlabel(r"log$_{10}$ $\int_{0}^{t_{\mathrm{H}}}$ $\vec{F}_{+}(t)$ $dt$ [M$_{\odot}$]", fontsize=20)
    ax.set_ylabel(r'log$_{10}$ $\int_{0}^{t_{\mathrm{H}}}$ $\dot{\mathrm{N}}_{\mathrm{ion}}(t)$ f$_{\mathrm{esc}}$($t$) $dt$ [#]', fontsize=20)
    ax.legend(loc='lower right', frameon=False, prop={"size" : 16})
from .component import Component
from cpu.Memory import SCOREBOARD
from isa.Instructions import ALUInstruction as alu


class writeback_unit(Component):
    def add_result(self, result):
        result.finished = True
        self.pipeline_register = self.pipeline_register + [result]
        self.clean()

    def clean(self):
        self.pipeline_register = list(filter(None, self.pipeline_register))

    def run(self, cpu):
        if not self.halt:
            cpu.update_reservation()
            for instruction in self.pipeline_register:
                if cpu.reorder_buffer.is_retirable(cpu, instruction):
                    instruction.writeback(cpu)
                    instruction.reservation_update()
                    # if str(instruction.eo[0]).startswith('r'):
                    #     cpu.update_reservation()
                    cpu.increment_ie()
                    if instruction in self.pipeline_register:
                        index = self.pipeline_register.index(instruction)
                        self.pipeline_register[index] = ""
            self.clean()

    def flush(self, cpu, instruction):
        self.halt = True
        for instruction in self.pipeline_register:
            if instruction not in cpu.reorder_buffer.buffer:
                # free the destination register on the scoreboard for squashed instructions
                if isinstance(instruction, alu) or instruction.opcode in ["LD", "LDC", "MOV"]:
                    SCOREBOARD[instruction.operands[0]] = 1
                index = self.pipeline_register.index(instruction)
                self.pipeline_register[index] = ""
        self.clean()
f = open("OI_school.csv")
op = open("mdt.txt", "w")
for i in f.readlines():
    c = i.split('","')
    # the comprehension variable is renamed so it no longer shadows the loop variable i
    op.write(c[-3] + ',' + c[-2] + ',' + "".join([s + ',' for s in eval(c[1])])[:-1] + '\n')
363ecc9fcc777c09f95b187bd0eb4e97cd4e05fe | 2,068 | py | Python | power_data_to_sat_passes/filtersatpowerfiles.py | abrahamneben/orbcomm_beam_mapping | 71b3e7d6e4214db0a6f4e68ebeeb7d7f846f5004 | [
"MIT"
] | 1 | 2019-04-10T02:50:19.000Z | 2019-04-10T02:50:19.000Z | power_data_to_sat_passes/filtersatpowerfiles.py | abrahamneben/orbcomm_beam_mapping | 71b3e7d6e4214db0a6f4e68ebeeb7d7f846f5004 | [
"MIT"
] | null | null | null | power_data_to_sat_passes/filtersatpowerfiles.py | abrahamneben/orbcomm_beam_mapping | 71b3e7d6e4214db0a6f4e68ebeeb7d7f846f5004 | [
"MIT"
] | null | null | null | #!/users/aneben/python/bin/python
import sys
import commands
import numpy as np
import string

np.set_printoptions(precision=3, linewidth=200)

months = {'Jan':'01','Feb':'02','Mar':'03','Apr':'04','May':'05','Jun':'06',
          'Jul':'07','Aug':'08','Sept':'09','Oct':'10','Nov':'11','Dec':'12'}

def make_datetime_numeric(dt):
    dt_elts = dt.split()
    month = months[dt_elts[2]]
    day = dt_elts[3]
    time = ''.join(dt_elts[4].split(':'))
    year = dt_elts[5]
    return year+month+day+time
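For illustration, `make_datetime_numeric` assumes a whitespace-separated header line with the month name at index 2, the day at index 3, HH:MM:SS at index 4, and the year at index 5. The sample line below is hypothetical, not taken from real instrument output (Python 3 syntax shown):

```python
months = {'Jan': '01', 'Feb': '02', 'Mar': '03', 'Apr': '04', 'May': '05', 'Jun': '06',
          'Jul': '07', 'Aug': '08', 'Sept': '09', 'Oct': '10', 'Nov': '11', 'Dec': '12'}

def make_datetime_numeric(dt):
    dt_elts = dt.split()
    # year + zero-padded month + day + HHMMSS
    return dt_elts[5] + months[dt_elts[2]] + dt_elts[3] + ''.join(dt_elts[4].split(':'))

assert make_datetime_numeric('DATE: Sat Jan 04 12:30:45 2014') == '20140104123045'
```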
def read_next_refew_spectrum(f):
    header = ''
    inheader = True
    while inheader:
        nextline = f.readline()
        if len(nextline) == 0:
            # end of file: no more spectra
            return [[], []]
        elif nextline == ' CH 1 CH 2 CH 3 CH 4\n':
            break
        else:
            header += nextline
    spectrum = np.zeros(512)
    # cols: tileEW=0, refEW=1, tileNS=2, refNS=3
    for i in range(512):
        spectrum[i] = float(f.readline().split()[1])
    return [header, spectrum]
label = sys.argv[1]
satpowerdir = '/media/disk-1/MWA_Tile/newdata/'+label
satpowerfnames = commands.getoutput('ls '+satpowerdir+'/satpower*').split()
outf = open('../phase3/composite_'+label+'/'+label+'_filteredsatpows.txt','w')

satbins = np.array([102, 115, 128, 225, 236, 339, 352, 365, 378, 410])
skip = 4
for fname in satpowerfnames:
    f = open(fname)
    print 'reading '+fname
    acq_num = 0
    [header, spect] = read_next_refew_spectrum(f)
    while len(spect) != 0:
        satstrs = header.split('\n')[3:-2]
        allsats = np.zeros(8, dtype=int)
        sats = [int(satstr[2:4]) for satstr in satstrs]
        allsats[0:len(sats)] = sats
        if acq_num % skip == 0:
            datetime = header.split('\n')[2]
            outf.write('\n'+make_datetime_numeric(datetime))
            for i in range(len(satbins)):
                outf.write(",%1.3f" % (20*np.log10(spect[satbins[i]])))
            outf.write(',')
            outf.write(','.join(map(str, allsats)))
        acq_num += 1
        if acq_num % 5000 == 0:
            print acq_num/50000.
        [header, spect] = read_next_refew_spectrum(f)
    f.close()
outf.close()
363f6b85601d80ec792d9609a878c76ff8a2a456 | 14,280 | py | Python | burst_paper/all_ds/plot_allband_ds.py | jackievilladsen/dynspec | 87101b188d7891644d848e781bca00f044fe3f0b | [
"MIT"
] | 2 | 2019-05-01T00:34:28.000Z | 2021-02-10T09:18:10.000Z | burst_paper/all_ds/plot_allband_ds.py | jackievilladsen/dynspec | 87101b188d7891644d848e781bca00f044fe3f0b | [
"MIT"
] | null | null | null | burst_paper/all_ds/plot_allband_ds.py | jackievilladsen/dynspec | 87101b188d7891644d848e781bca00f044fe3f0b | [
"MIT"
] | null | null | null | '''
plot_allband_ds.py - Load P,L,S band dynamic spectrum for a given epoch, bin to specified resolution, and plot to file
'''

import dynspec.plot
reload(dynspec.plot)
from dynspec import load_dict
from dynspec.plot import *
from pylab import *
import os, subprocess
import matplotlib.gridspec as gridspec

'''
def get_obsname(obsfile):
    # take a file directory such as '/data/jrv/15A-416/YZCMi/1' and
    # convert to obs name such as '15A-416_YZCMi_1' and srcname 'YZCMi'
    names = obsfile.split('/')
    srcname = names[4]
    obsname = names[3]+'_'+names[4]+'_'+names[5]
    return obsname,srcname
'''

def get_obsfile(obsname):
    # take an obs name such as '15A-416_YZCMi_1' and return srcname ('YZCMi')
    # and file directory ('/data/jrv/15A-416/YZCMi/1')
    names = obsname.split('_')
    srcname = names[1]
    obsfile = '/data/jrv/'+names[0]+'/'+names[1]+'/'+names[2]
    return obsfile, srcname
params = {'legend.fontsize': 'small',
          'axes.titlesize': 'small',
          'axes.labelsize': 'small',
          'xtick.labelsize': 'x-small',
          'ytick.labelsize': 'x-small',
          'image.interpolation': 'nearest'}
rcParams.update(params)
loadfile = '/data/jrv/burst_paper/all_burst_epoch_dynspec_LSband.npy'
ds_list = load_dict(loadfile)
loadfileP = '/data/jrv/burst_paper/all_burst_epoch_dynspec_Pband.npy'
dsP_list = load_dict(loadfileP)

ds_dir = '/data/jrv/burst_paper/ds/all_burst_dynspec/'  # where to save ds plots
if not os.path.exists(ds_dir):
    os.system('mkdir '+ds_dir)
close('all')

# note: throughout, "LS" can also include C band; I initially wrote this code for 2015 data (which only has LS band)
# but it works for the 2013 data with LSC band

# params that can be changed are listed in default_fig_params
default_fig_params = {
    'tint_P': 300,
    'tint_LS': 60,
    'df_MHz_P': 16,
    'df_MHz_LS': 16,
    'smax_P': None,
    'smax_LS': None,
    'pixflag_sigfacP': 7.,
    'pixflag_sigfacLS': 10.,
    'chanflag_sigfacP': 3.,
    'chanflag_sigfacLS': 7.,
    'colorscale_P': 'linear',
    'colorscale_LS': 'linear',
    'maskpartial_P': 0.5,
    'maskpartial_LS': 0.5,
    'linthresh_P': None,
    'linthresh_LS': None}

fig_params_dict = {
    '13A-423_UVCet_1': {'tint_LS':60,'df_MHz_LS':32,'smax_LS':None,'colorscale_LS':'symlog','pixflag_sigfacLS':100,'maskpartial_LS':1.0},
    '13A-423_UVCet_2': {'tint_LS':60,'df_MHz_LS':32,'smax_LS':0.015,'maskpartial_LS':0.55},
    '13A-423_UVCet_2_b': {'tint_LS':300,'df_MHz_LS':64,'smax_LS':0.008,'linthresh_LS':0.002,'maskpartial_LS':0.55,'colorscale_LS':'symlog'},
    '15A-416_ADLeo_3': {'smax_LS':0.03,'smax_P':0.02},
    '15A-416_ADLeo_4': {'smax_LS':0.045,'smax_P':0.02,'pixflag_sigfacLS':50.},
    '15A-416_ADLeo_5': {'tint_LS':120,'df_MHz_LS':32,'tint_P':150,'df_MHz_P':8},
    '15A-416_EQPeg_2': {'tint_LS':120,'df_MHz_LS':32,'tint_P':180,'df_MHz_P':8,'chanflag_sigfacP':2.5,'maskpartial_P':0.9,'pixflag_sigfacP':5.,'smax_P':0.1,'maskpartial_LS':0.7},
    '15A-416_UVCet_1': {'df_MHz_LS':32},
    '15A-416_UVCet_2': {'tint_P':150,'smax_P':0.05},
    '15A-416_UVCet_3': {'tint_P':180,'df_MHz_P':16,'smax_P':0.05},
    '15A-416_UVCet_4': {'colorscale_LS':'symlog','smax_LS':0.1,'df_MHz_LS':16,'maskpartial_LS':0.9,'linthresh_LS':0.012,'tint_P':180,'smax_P':0.05},
    '15A-416_UVCet_5': {'smax_P':0.04,'maskpartial_P':0.7,'maskpartial_LS':0.9},
    '15A-416_YZCMi_1': {'smax_P':0.05,'maskpartial_P':0.7,'maskpartial_LS':0.8,'tint_LS':150,'df_MHz_LS':32,'colorscale_LS':'symlog','smax_LS':0.05,'linthresh_LS':0.0075,'chanflag_sigfacLS':4.},
    '15A-416_YZCMi_2': {'smax_P':0.05,'tint_LS':120,'df_MHz_LS':32,'smax_LS':0.015}
}
### PLOT INDIVIDUAL OBSERVATIONS ###

obs_list = fig_params_dict.keys()
#obs_list = ['15A-416_EQPeg_2'] # so I can work on just this event

fig_max_width = 6.5
fig_max_height = 8.25
for obsname in obs_list:
    for func in [real, imag]:
        # load dynamic spectra for this observation
        print '\n-----', obsname, '-----'
        obsfile, srcname = get_obsfile(obsname)
        ds = ds_list[obsfile]
        dsP = dsP_list.get(obsfile, None)

        # load custom parameters for plotting this epoch (binning, RFI flagging, color scale)
        fig_params = deepcopy(default_fig_params)
        fp_dict_temp = fig_params_dict.get(obsname, {})
        for k in fp_dict_temp:
            fig_params[k] = fp_dict_temp[k]

        # Duration of observation relative to 3h40m (max duration of any) - scale x-axis by this
        # so they are all on the same time scale
        duration = ds.get_tlist()[-1]*ds.dt()
        print 'Duration:', duration, 'sec'
        frac_duration = duration/(3*3600+40*60)
        print 'Fractional duration compared to 3h40m:', frac_duration

        # Bandwidth of >1 GHz data relative to 3 GHz (default for 2015) - scale y-axis of >1 GHz dynspec by this
        BW_LSC = max(ds.f)-min(ds.f)
        frac_BW = BW_LSC/3.e9
        print 'Fractional bandwidth of >1 GHz data compared to 3 GHz:', frac_BW

        # bin LS band dynamic spectrum to desired resolution
        # mask RFI pix and chans before binning, pix after binning
        ds.mask_RFI_pixels(rmsfac=fig_params['pixflag_sigfacLS'], func=imag)
        ds.mask_RFI(rmsfac=fig_params['chanflag_sigfacLS'])
        nt = int(round(fig_params['tint_LS']/ds.dt()))          # number of integrations to bin together
        nf = int(round(fig_params['df_MHz_LS']/(ds.df()/1e6)))  # number of channels to bin together
        ds = ds.bin_dynspec(nt=nt, nf=nf, mask_partial=fig_params['maskpartial_LS'])
        ds.mask_RFI_pixels(rmsfac=fig_params['pixflag_sigfacLS'], func=imag)

        if dsP:
            dsP.mask_RFI_pixels(rmsfac=fig_params['pixflag_sigfacP'])
            dsP.mask_RFI(rmsfac=fig_params['chanflag_sigfacP'])
            # bin P band dynamic spectrum to desired resolution
            nt = int(round(fig_params['tint_P']/dsP.dt()))          # number of integrations to bin together
            nf = int(round(fig_params['df_MHz_P']/(dsP.df()/1e6)))  # number of channels to bin together
            dsP = dsP.bin_dynspec(nt=nt, nf=nf, mask_partial=fig_params['maskpartial_P'])
            dsP.mask_RFI(rmsfac=fig_params['chanflag_sigfacP'])
        # calculate horizontal positions of subplots in units from 0 to 1
        # (0 is left edge)
        dsplot_w = 3.2 * frac_duration  # width of dynamic spectrum in inches
        gap_l = 0.55     # width of x-axis blank space (left) in inches
        gap_c = 0.15     # width of x-axis blank space (center) in inches
        gap_cbar = 0.45  # width of blank space between V plot & cbar in inches
        gap_r = 0.57     # width of x-axis blank space (right) in inches
        cbar_w = 0.13    # width of colorbar in inches
        tot_w = 2*dsplot_w + cbar_w + gap_l + gap_c + gap_cbar + gap_r  # total width in inches
        #if obs == '13A-423_UVCet_2':
        #    tot_w += gap_c + dsplot_w + gap_cbar + gap_r
        print 'Total width of figure in inches:', tot_w, '(goal: <=8.25)'
        x1 = gap_l/tot_w          # left edge of Stokes I dynspec
        x2 = x1 + dsplot_w/tot_w  # right edge of Stokes I dynspec
        x3 = x2 + gap_c/tot_w     # left edge of Stokes V dynspec
        x4 = x3 + dsplot_w/tot_w  # right edge of Stokes V dynspec
        x5 = x4 + gap_cbar/tot_w  # left edge of colorbar
        x6 = x5 + cbar_w/tot_w    # right edge of colorbar
        #if obs == '13A-423_UVCet_2':
        #    x7 = x6 + (gap_r+gap_c)/tot_w  # left edge of second Stokes V dynspec
        #    x8 = x

        # calculate vertical positions of subplots in units from 0 to 1
        # (0 is bottom edge)
        dsLS_h = 3.2 * frac_BW  # height of LS band dynspec in inches
        dsP_h = 0.9     # height of P band dynspec in inches
        gap_t = 0.43    # height of y-axis blank space at top (includes titles) in inches
        gap_rows = 0.5  # height of each gap between rows of dynspecs in inches
        gap_b = 0.36    # height of y-axis blank space at bottom in inches
        if dsP:
            tot_h = dsLS_h + 2*dsP_h + gap_t + 2*gap_rows + gap_b  # total height in inches
        else:
            tot_h = gap_t + dsLS_h + gap_b  # total height in inches if no P band data
        print 'Total height of figure in inches:', tot_h, '(goal: <=6.8)'
        y1 = 1-(gap_t/tot_h)      # top edge of LS band dynspec
        y2 = y1 - dsLS_h/tot_h    # bottom edge of LS band dynspec
        y3 = y2 - gap_rows/tot_h  # top edge of P band I,V dynspecs
        y4 = y3 - dsP_h/tot_h     # bottom edge of P band I,V dynspecs
        y5 = y4 - gap_rows/tot_h  # top edge of P band U dynspec
        y6 = y5 - dsP_h/tot_h     # bottom edge of P band U dynspec
        cbarP_h = (2*dsP_h + gap_rows)/tot_h
        # create figure
        close('all')
        figname = ds_dir+obsname+'.pdf'
        if func == imag:
            figname = ds_dir+obsname+'_imag.pdf'
        fig = figure(figsize=(tot_w, tot_h))

        # First row of plots: Stokes I LS, Stokes V LS, colorbar LS
        # Format for axes command is axes([x_left, y_bottom, width, height])
        # First row: y_bottom is y2, x_left is x1, x3, x5

        # set flux limits for LS band
        smax = fig_params['smax_LS']
        if smax is None:
            smax = max(percentile(real(ds.spec['i']), 99)*1.1, median(real(ds.spec['i']))*2)
        smin = -smax  # make colorbar symmetric about zero

        # set axis ratio to 'auto' in order to fill specified subplot areas
        # IMPORTANT: must not include 'cbar' and 'cbar_label' in axis_labels
        ar0 = 'auto'

        # plot Stokes I real, LS band
        ax = axes([x1, y2, dsplot_w/tot_w, dsLS_h/tot_h])
        #ax.set_autoscale_on(False)
        pp = {'pol':'i','smin':smin,'smax':smax,'trim_mask':False,'axis_labels':[],'ar0':ar0,'dy':0.5,'scale':fig_params['colorscale_LS'],'func':func}
        if fig_params['linthresh_LS']:
            pp['linthresh'] = fig_params['linthresh_LS']
        plt, cbar_ticks, cbar_ticklbls = ds.plot_dynspec(plot_params=pp)
        #gca().xaxis.set_visible(False)
        #gca().yaxis.set_label_coords(-0.2,0)
        if dsP:
            title('Stokes I, 1-4 GHz')
        else:
            title('Stokes I')
        fig.text(0.01, 0.5, 'Frequency (GHz)', va='center', rotation='vertical', fontsize='small')

        # plot Stokes V real, LS band
        ax = axes([x3, y2, dsplot_w/tot_w, dsLS_h/tot_h])
        pp = {'pol':'v','smin':smin,'smax':smax,'trim_mask':False,'axis_labels':['xlabel'],'ar0':ar0,'dy':0.5,'scale':fig_params['colorscale_LS'],'func':func}
        if fig_params['linthresh_LS']:
            pp['linthresh'] = fig_params['linthresh_LS']
        ds.plot_dynspec(plot_params=pp)
        gca().yaxis.tick_right()
        xlabel_text = ax.xaxis.get_label_text()
        ax.set_xlabel('')
        #gca().xaxis.set_visible(False)
        if dsP:
            title('Stokes V, 1-4 GHz')
        else:
            title('Stokes V')

        # plot LS band colorbar
        ax = axes([x5, y2, cbar_w/tot_w, dsLS_h/tot_h])
        cbar = colorbar(plt, cax=ax)
        cbar.set_ticks(cbar_ticks)
        cbar.set_ticklabels(cbar_ticklbls)
        ax = cbar.ax
        if dsP:
            cbar_label = '1-4 GHz Flux Density (mJy)'
            ycbar = 0.75
        else:
            cbar_label = 'Flux Density (mJy)'
            ycbar = 0.65
        if obsname == '15A-416_UVCet_1':
            ycbar = 0.98
        ax.text(4.2, ycbar, cbar_label, rotation=90, fontsize='small')
        if dsP:
            # Second row of plots: Stokes I P, apparent Stokes V P
            # Format for axes command is axes([x_left, y_bottom, width, height])
            # Second row: y_bottom is y4, x_left is x1, x3

            # set flux limits for P band
            smaxP = fig_params['smax_P']
            if smaxP is None:
                smaxP = dsP.get_rms('v')*6.
            sminP = -smaxP

            # plot Stokes I real, P band
            ax = axes([x1, y4, dsplot_w/tot_w, dsP_h/tot_h])
            pp = {'pol':'i','smin':sminP,'smax':smaxP,'trim_mask':False,'axis_labels':[],'dy':0.05,'ar0':ar0,'scale':fig_params['colorscale_P'],'func':func}
            if fig_params['linthresh_P']:
                pp['linthresh'] = fig_params['linthresh_P']
            dsP.plot_dynspec(plot_params=pp)
            title('Stokes I, 0.2-0.5 GHz')

            # plot Stokes V real, P band
            ax = axes([x3, y4, dsplot_w/tot_w, dsP_h/tot_h])
            pp = {'pol':'v','smin':sminP,'smax':smaxP,'trim_mask':False,'axis_labels':[],'dy':0.05,'ar0':ar0,'scale':fig_params['colorscale_P'],'func':func}
            if fig_params['linthresh_P']:
                pp['linthresh'] = fig_params['linthresh_P']
            plt, cbar_ticks, cbar_ticklbls = dsP.plot_dynspec(plot_params=pp)
            gca().yaxis.tick_right()
            title('Stokes V\', 0.2-0.5 GHz')

            # Third row of plots: [empty], apparent Stokes U P, P band colorbar (extra height)
            # Format for axes command is axes([x_left, y_bottom, width, height])
            # Third row: y_bottom is y6
            #   x_left is x3 (Stokes U), x5 (colorbar)
            #   height is dsP_h (Stokes U), 2*dsP_h+gap_rows (colorbar)

            # plot Stokes U real, P band
            ax = axes([x3, y6, dsplot_w/tot_w, dsP_h/tot_h])
            pp = {'pol':'u','smin':sminP,'smax':smaxP,'trim_mask':False,'axis_labels':[],'dy':0.05,'ar0':ar0,'scale':fig_params['colorscale_P'],'func':func}
            if fig_params['linthresh_P']:
                pp['linthresh'] = fig_params['linthresh_P']
            dsP.plot_dynspec(plot_params=pp)
            gca().yaxis.tick_right()
            title('Stokes U\', 0.2-0.5 GHz')

            # plot P band colorbar
            ax = axes([x5, y6, cbar_w/tot_w, cbarP_h])
            cbar = colorbar(plt, cax=ax)
            cbar.set_ticks(cbar_ticks)
            cbar.set_ticklabels(cbar_ticklbls)
            ax = cbar.ax
            ax.text(4.2, 0.9, '0.2-0.5 GHz Flux Density (mJy)', rotation=90, fontsize='small')

        fig.text(0.5, 0.01, xlabel_text, ha='center', fontsize='small')
        date = ds.t0().split()[0]
        fig_title = srcname[0:2]+' '+srcname[2:5]+' - '+date
        if func == imag:
            fig_title += ' - Imag(vis)'
        suptitle(fig_title, y=0.99, fontsize='medium')
        savefig(figname)
#!/usr/bin/env python3
# --- Copyright Disclaimer ---
#
# In order to support PyInstaller with numpy<1.20.0 this file will be duplicated for a short period inside
# PyInstaller's repository [1]. However this file is the intellectual property of the NumPy team and is
# under the terms and conditions outlined their repository [2].
#
# .. refs:
#
# [1] PyInstaller: https://github.com/pyinstaller/pyinstaller/
# [2] NumPy's license: https://github.com/numpy/numpy/blob/master/LICENSE.txt
#
"""
This hook should collect all binary files and any hidden modules that numpy needs.
Our (some-what inadequate) docs for writing PyInstaller hooks are kept here:
https://pyinstaller.readthedocs.io/en/stable/hooks.html
PyInstaller has a lot of NumPy users so we consider maintaining this hook a high priority.
Feel free to @mention either bwoodsend or Legorooj on Github for help keeping it working.
"""
from PyInstaller.compat import is_conda, is_pure_conda
from PyInstaller.utils.hooks import collect_dynamic_libs
# Collect all DLLs inside numpy's installation folder, dump them into built app's root.
binaries = collect_dynamic_libs("numpy", ".")
# If using Conda without any non-conda virtual environment manager:
if is_pure_conda:
    # Assume running the NumPy from Conda-forge and collect its DLLs from the communal Conda bin directory. DLLs from
    # NumPy's dependencies must also be collected to capture MKL, OpenBlas, OpenMP, etc.
    from PyInstaller.utils.hooks import conda_support
    datas = conda_support.collect_dynamic_libs("numpy", dependencies=True)
# Submodules PyInstaller cannot detect (probably because they are only imported by extension modules, which PyInstaller
# cannot read).
hiddenimports = ['numpy.core._dtype_ctypes']
if is_conda:
    hiddenimports.append("six")
# Remove testing and building code and packages that are referenced throughout NumPy but are not really dependencies.
excludedimports = [
    "scipy",
    "pytest",
    "nose",
    "distutils",
    "f2py",
    "setuptools",
    "numpy.f2py",
    "numpy.distutils",
]
| 38.296296 | 119 | 0.758704 | 294 | 2,068 | 5.282313 | 0.588435 | 0.01159 | 0.034771 | 0.032196 | 0.039923 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006311 | 0.157157 | 2,068 | 53 | 120 | 39.018868 | 0.884682 | 0.70793 | 0 | 0 | 0 | 0 | 0.17474 | 0.041522 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.315789 | 0 | 0.315789 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
# File: ixnetwork_restpy/testplatform/sessions/ixnetwork/traffic/trafficitem/configelement/stack/macInMACv42_template.py
# Repo: OpenIxia/ixnetwork_restpy (MIT license)
from ixnetwork_restpy.base import Base
from ixnetwork_restpy.files import Files
class MacInMACv42(Base):
    __slots__ = ()
    _SDM_NAME = 'macInMACv42'
    _SDM_ATT_MAP = {
        'HeaderBDstAddress': 'macInMACv42.header.bDstAddress-1',
        'HeaderBSrcAddress': 'macInMACv42.header.bSrcAddress-2',
        'BTAGEthertypeEthertypeValue': 'macInMACv42.header.bTAGEthertype.ethertypeValue-3',
        'BTagPcp': 'macInMACv42.header.bTAGEthertype.bTag.pcp-4',
        'BTagDei': 'macInMACv42.header.bTAGEthertype.bTag.dei-5',
        'BTagVlanID': 'macInMACv42.header.bTAGEthertype.bTag.vlanID-6',
        'ITAGEthertypeEthertypeValue': 'macInMACv42.header.iTAGEthertype.ethertypeValue-7',
        'ITAGPcp': 'macInMACv42.header.iTAGEthertype.iTAG.pcp-8',
        'ITAGDrop': 'macInMACv42.header.iTAGEthertype.iTAG.drop-9',
        'ITAGFmt': 'macInMACv42.header.iTAGEthertype.iTAG.fmt-10',
        'ITAGReserved': 'macInMACv42.header.iTAGEthertype.iTAG.reserved-11',
        'ITAGISID': 'macInMACv42.header.iTAGEthertype.iTAG.iSID-12',
        'HeaderCDstAddress': 'macInMACv42.header.cDstAddress-13',
        'HeaderCSrcAddress': 'macInMACv42.header.cSrcAddress-14',
        'STAGSTAGEthertype': 'macInMACv42.header.sTAGCTAG.tag.sTAG.sTAGEthertype-15',
        'STAGPcp': 'macInMACv42.header.sTAGCTAG.tag.sTAG.sTAG.pcp-16',
        'STAGDei': 'macInMACv42.header.sTAGCTAG.tag.sTAG.sTAG.dei-17',
        'STAGVlanID': 'macInMACv42.header.sTAGCTAG.tag.sTAG.sTAG.vlanID-18',
        'CTAGCTAGEthertype': 'macInMACv42.header.sTAGCTAG.tag.cTAG.cTAGEthertype-19',
        'CTAGUserPriority': 'macInMACv42.header.sTAGCTAG.tag.cTAG.cTAG.userPriority-20',
        'CTAGCfi': 'macInMACv42.header.sTAGCTAG.tag.cTAG.cTAG.cfi-21',
        'CTAGVlanId': 'macInMACv42.header.sTAGCTAG.tag.cTAG.cTAG.vlanId-22',
        'BothSTAGCTAGSTAGEthertype': 'macInMACv42.header.sTAGCTAG.tag.bothSTAGCTAG.sTAGEthertype-23',
        'BothstagctagSTAGPcp': 'macInMACv42.header.sTAGCTAG.tag.bothSTAGCTAG.sTAG.pcp-24',
        'BothstagctagSTAGDei': 'macInMACv42.header.sTAGCTAG.tag.bothSTAGCTAG.sTAG.dei-25',
        'BothstagctagSTAGVlanID': 'macInMACv42.header.sTAGCTAG.tag.bothSTAGCTAG.sTAG.vlanID-26',
        'BothSTAGCTAGCTAGEthertype': 'macInMACv42.header.sTAGCTAG.tag.bothSTAGCTAG.cTAGEthertype-27',
        'BothstagctagCTAGUserPriority': 'macInMACv42.header.sTAGCTAG.tag.bothSTAGCTAG.cTAG.userPriority-28',
        'BothstagctagCTAGCfi': 'macInMACv42.header.sTAGCTAG.tag.bothSTAGCTAG.cTAG.cfi-29',
        'BothstagctagCTAGVlanId': 'macInMACv42.header.sTAGCTAG.tag.bothSTAGCTAG.cTAG.vlanId-30',
        'TagNoSTAGCTAG': 'macInMACv42.header.sTAGCTAG.tag.noSTAGCTAG-31',
    }

    def __init__(self, parent, list_op=False):
        super(MacInMACv42, self).__init__(parent, list_op)

    @property
    def HeaderBDstAddress(self):
        """
        Display Name: B-Destination Address (Ethernet)
        Default Value: 00:00:00:00:00:00
        Value Format: mAC
        """
        from ixnetwork_restpy.multivalue import Multivalue
        return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['HeaderBDstAddress']))

    @property
    def HeaderBSrcAddress(self):
        """
        Display Name: B-Source Address (Ethernet)
        Default Value: 00:00:00:00:00:00
        Value Format: mAC
        """
        from ixnetwork_restpy.multivalue import Multivalue
        return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['HeaderBSrcAddress']))

    @property
    def BTAGEthertypeEthertypeValue(self):
        """
        Display Name: Ethertype value
        Default Value: 0x88A8
        Value Format: hex
        """
        from ixnetwork_restpy.multivalue import Multivalue
        return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['BTAGEthertypeEthertypeValue']))

    @property
    def BTagPcp(self):
        """
        Display Name: B-TAG PCP
        Default Value: 0
        Value Format: decimal
        """
        from ixnetwork_restpy.multivalue import Multivalue
        return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['BTagPcp']))

    @property
    def BTagDei(self):
        """
        Display Name: B-TAG DEI
        Default Value: 0
        Value Format: decimal
        """
        from ixnetwork_restpy.multivalue import Multivalue
        return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['BTagDei']))

    @property
    def BTagVlanID(self):
        """
        Display Name: B-TAG VLAN ID
        Default Value: 2
        Value Format: decimal
        """
        from ixnetwork_restpy.multivalue import Multivalue
        return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['BTagVlanID']))

    @property
    def ITAGEthertypeEthertypeValue(self):
        """
        Display Name: Ethertype value
        Default Value: 0x88E7
        Value Format: hex
        """
        from ixnetwork_restpy.multivalue import Multivalue
        return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['ITAGEthertypeEthertypeValue']))

    @property
    def ITAGPcp(self):
        """
        Display Name: I-TAG PCP
        Default Value: 0
        Value Format: decimal
        """
        from ixnetwork_restpy.multivalue import Multivalue
        return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['ITAGPcp']))

    @property
    def ITAGDrop(self):
        """
        Display Name: I-TAG DEI
        Default Value: 0
        Value Format: decimal
        """
        from ixnetwork_restpy.multivalue import Multivalue
        return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['ITAGDrop']))

    @property
    def ITAGFmt(self):
        """
        Display Name: FMT
        Default Value: 0
        Value Format: decimal
        Available enum values: Payload Encapsulated Wi Fcs, 0, Payload Encapsulated Wo Fcs, 1, No Encapsulation, 2, Reserved, 3
        """
        from ixnetwork_restpy.multivalue import Multivalue
        return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['ITAGFmt']))

    @property
    def ITAGReserved(self):
        """
        Display Name: Reserved
        Default Value: 0x0
        Value Format: decimal
        """
        from ixnetwork_restpy.multivalue import Multivalue
        return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['ITAGReserved']))

    @property
    def ITAGISID(self):
        """
        Display Name: I-SID
        Default Value: 256
        Value Format: decimal
        """
        from ixnetwork_restpy.multivalue import Multivalue
        return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['ITAGISID']))

    @property
    def HeaderCDstAddress(self):
        """
        Display Name: C-Destination Address (Ethernet)
        Default Value: 00:00:00:00:00:00
        Value Format: mAC
        """
        from ixnetwork_restpy.multivalue import Multivalue
        return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['HeaderCDstAddress']))

    @property
    def HeaderCSrcAddress(self):
        """
        Display Name: C-Source Address (Ethernet)
        Default Value: 00:00:00:00:00:00
        Value Format: mAC
        """
        from ixnetwork_restpy.multivalue import Multivalue
        return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['HeaderCSrcAddress']))

    @property
    def STAGSTAGEthertype(self):
        """
        Display Name: S-TAG Ethertype
        Default Value: 0x88A8
        Value Format: hex
        """
        from ixnetwork_restpy.multivalue import Multivalue
        return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['STAGSTAGEthertype']))

    @property
    def STAGPcp(self):
        """
        Display Name: S-TAG PCP
        Default Value: 0
        Value Format: decimal
        """
        from ixnetwork_restpy.multivalue import Multivalue
        return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['STAGPcp']))

    @property
    def STAGDei(self):
        """
        Display Name: S-TAG DEI
        Default Value: 0
        Value Format: decimal
        """
        from ixnetwork_restpy.multivalue import Multivalue
        return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['STAGDei']))

    @property
    def STAGVlanID(self):
        """
        Display Name: S-TAG VLAN ID
        Default Value: 2
        Value Format: decimal
        """
        from ixnetwork_restpy.multivalue import Multivalue
        return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['STAGVlanID']))

    @property
    def CTAGCTAGEthertype(self):
        """
        Display Name: C-TAG Ethertype
        Default Value: 0x8100
        Value Format: hex
        """
        from ixnetwork_restpy.multivalue import Multivalue
        return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['CTAGCTAGEthertype']))

    @property
    def CTAGUserPriority(self):
        """
        Display Name: C-TAG User Priority
        Default Value: 0
        Value Format: decimal
        """
        from ixnetwork_restpy.multivalue import Multivalue
        return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['CTAGUserPriority']))

    @property
    def CTAGCfi(self):
        """
        Display Name: C-TAG CFI
        Default Value: 0
        Value Format: decimal
        """
        from ixnetwork_restpy.multivalue import Multivalue
        return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['CTAGCfi']))

    @property
    def CTAGVlanId(self):
        """
        Display Name: C-TAG VLAN ID
        Default Value: 2
        Value Format: decimal
        """
        from ixnetwork_restpy.multivalue import Multivalue
        return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['CTAGVlanId']))

    @property
    def BothSTAGCTAGSTAGEthertype(self):
        """
        Display Name: S-TAG Ethertype
        Default Value: 0x88A8
        Value Format: hex
        """
        from ixnetwork_restpy.multivalue import Multivalue
        return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['BothSTAGCTAGSTAGEthertype']))

    @property
    def BothstagctagSTAGPcp(self):
        """
        Display Name: S-TAG PCP
        Default Value: 0
        Value Format: decimal
        """
        from ixnetwork_restpy.multivalue import Multivalue
        return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['BothstagctagSTAGPcp']))

    @property
    def BothstagctagSTAGDei(self):
        """
        Display Name: S-TAG DEI
        Default Value: 0
        Value Format: decimal
        """
        from ixnetwork_restpy.multivalue import Multivalue
        return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['BothstagctagSTAGDei']))

    @property
    def BothstagctagSTAGVlanID(self):
        """
        Display Name: S-TAG VLAN ID
        Default Value: 2
        Value Format: decimal
        """
        from ixnetwork_restpy.multivalue import Multivalue
        return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['BothstagctagSTAGVlanID']))

    @property
    def BothSTAGCTAGCTAGEthertype(self):
        """
        Display Name: C-TAG Ethertype
        Default Value: 0x8100
        Value Format: hex
        """
        from ixnetwork_restpy.multivalue import Multivalue
        return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['BothSTAGCTAGCTAGEthertype']))

    @property
    def BothstagctagCTAGUserPriority(self):
        """
        Display Name: C-TAG User Priority
        Default Value: 0
        Value Format: decimal
        """
        from ixnetwork_restpy.multivalue import Multivalue
        return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['BothstagctagCTAGUserPriority']))

    @property
    def BothstagctagCTAGCfi(self):
        """
        Display Name: C-TAG CFI
        Default Value: 0
        Value Format: decimal
        """
        from ixnetwork_restpy.multivalue import Multivalue
        return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['BothstagctagCTAGCfi']))

    @property
    def BothstagctagCTAGVlanId(self):
        """
        Display Name: C-TAG VLAN ID
        Default Value: 2
        Value Format: decimal
        """
        from ixnetwork_restpy.multivalue import Multivalue
        return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['BothstagctagCTAGVlanId']))

    @property
    def TagNoSTAGCTAG(self):
        """
        Display Name: No S-TAG/C-TAG
        Default Value: 0
        Value Format: decimal
        """
        from ixnetwork_restpy.multivalue import Multivalue
        return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['TagNoSTAGCTAG']))

    def add(self):
        return self._create(self._map_locals(self._SDM_ATT_MAP, locals()))
# File: src/alphazero/data.py
# Repo: Whillikers/seldon (MIT license)
"""
] | null | null | null | """
Code for working with data.
In-memory format (as a list):
- board: Tensor (8, 8, 2) [bool; one-hot]
- move: Tensor (64,) [bool; one-hot]
- value: Tensor () [float32]
On-disk format (to save space and quicken loading):
- board: int64
- move: int64
- value: float32
"""
from typing import Dict, Tuple
import tensorflow as tf # type: ignore
from board import BOARD_SHAPE, BOARD_SQUARES, Board, Loc
EXAMPLE_SPEC = {
    "board": tf.io.FixedLenFeature([2], tf.int64),
    "move": tf.io.FixedLenFeature([], tf.int64),
    "value": tf.io.FixedLenFeature([], tf.float32),
}
# Hack to allow storing bitboards efficiently as tf.Int64.
# Necessary because boards are all valid uint64 but not necessarily valid int64.
# Taken from: https://stackoverflow.com/questions/20766813/how-to-convert-signed-to-
# unsigned-integer-in-python
def _signed_representation(unsigned: int) -> int:
    """Convert an "unsigned" int to its equivalent C "signed" representation."""
    return (unsigned & ((1 << 63) - 1)) - (unsigned & (1 << 63))


def _unsigned_representation(signed: int) -> int:
    """Convert a "signed" int to its equivalent C "unsigned" representation."""
    return signed & 0xFFFFFFFFFFFFFFFF
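
# Round-trip illustration (not used by the module): int64 storage re-weights the
# top bit from +2**63 to -2**63, so the all-ones bitboard becomes -1 and maps
# back to the same 64-bit pattern, losing no information.
_demo_unsigned = 0xFFFFFFFFFFFFFFFF
_demo_signed = (_demo_unsigned & ((1 << 63) - 1)) - (_demo_unsigned & (1 << 63))
assert _demo_signed == -1
assert (_demo_signed & 0xFFFFFFFFFFFFFFFF) == _demo_unsigned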
# See: https://stackoverflow.com/questions/48333210/tensorflow-how-to-convert-an-
# integer-tensor-to-the-corresponding-binary-tensor
def decode_bitboard(encoded: tf.Tensor) -> tf.Tensor:
    """
    Convert from uint64 board representation to a tf.Tensor board.
    """
    flat = tf.math.mod(
        tf.bitwise.right_shift(encoded, tf.range(BOARD_SQUARES, dtype=tf.int64)), 2
    )
    board = tf.reshape(flat, BOARD_SHAPE)
    # Hack to allow using rot90 on a 2D tensor
    return tf.image.rot90(tf.expand_dims(board, axis=-1), k=2)[:, :, 0]
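
# Pure-Python sketch (illustration only, assuming BOARD_SQUARES == 64) of the
# unpacking decode_bitboard performs: bit i of the encoded integer becomes
# entry i of the flattened board.
_demo_bitboard = 0b1011  # squares 0, 1 and 3 occupied
_demo_flat = [(_demo_bitboard >> i) % 2 for i in range(64)]
assert _demo_flat[:4] == [1, 1, 0, 1]
assert sum(_demo_flat) == 3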
def serialize_example(board: Board, move: Loc, value: float) -> str:
    """
    Serialize a single training example into a string.
    """
    black = _signed_representation(int(board.black))
    white = _signed_representation(int(board.white))
    features = {
        "board": tf.train.Feature(int64_list=tf.train.Int64List(value=[black, white])),
        "move": tf.train.Feature(int64_list=tf.train.Int64List(value=[move.as_int])),
        "value": tf.train.Feature(float_list=tf.train.FloatList(value=[value])),
    }
    ex = tf.train.Example(features=tf.train.Features(feature=features))
    return ex.SerializeToString()
def preprocess_example(
    serialized: str
) -> Tuple[Dict[str, tf.Tensor], Dict[str, tf.Tensor]]:
    """
    Turn a serialized example into the training-ready format.
    """
    example = tf.io.parse_single_example(serialized, EXAMPLE_SPEC)
    bitboards = example["board"]
    black_bb = bitboards[0]
    white_bb = bitboards[1]
    black = decode_bitboard(black_bb)
    white = decode_bitboard(white_bb)
    board = tf.stack([black, white], axis=-1)
    move = tf.one_hot(example["move"], BOARD_SQUARES)
    # TODO: better solution to multi-input Keras model training
    return (
        {"board": board},
        {"policy_softmax": move, "tf_op_layer_Tanh": example["value"]},
    )
# File: server/miscellaneous.py
# Repo: dewancse/SMT-PMR (MIT license)
import requests
from libcellml import *
import lxml.etree as ET
# pre-generated model recipe in JSON format
model_recipe = [
{
"med_fma": "http://purl.obolibrary.org/obo/FMA_84666",
"med_pr": "http://purl.obolibrary.org/obo/PR_P26433",
"med_pr_text": "sodium/hydrogen exchanger 3 (rat)",
"med_pr_text_syn": "NHE3",
"model_entity": "weinstein_1995.cellml#NHE3.J_NHE3_Na",
"model_entity2": "",
"model_entity3": "",
"protein_name": "http://purl.obolibrary.org/obo/PR_P26433",
"sink_fma": "http://purl.obolibrary.org/obo/FMA_66836",
"sink_fma2": "",
"sink_fma3": "",
"solute_chebi": "http://purl.obolibrary.org/obo/CHEBI_29101",
"solute_chebi2": "",
"solute_chebi3": "",
"solute_text": "Na+",
"solute_text2": "",
"solute_text3": "",
"source_fma": "http://purl.obolibrary.org/obo/FMA_74550",
"source_fma2": "",
"source_fma3": "",
"variable_text": "J_NHE3_Na",
"variable_text2": "flux",
"variable_text3": "flux"
},
{
"med_fma": "http://purl.obolibrary.org/obo/FMA_84669",
"med_pr": "http://purl.obolibrary.org/obo/PR_Q9ET37",
"med_pr_text": "low affinity sodium-glucose cotransporter (mouse)",
"med_pr_text_syn": "Q9ET37",
"model_entity": "mackenzie_1996-mouse-baso.cellml#NBC_current.J_Na",
"model_entity2": "mackenzie_1996-mouse-baso.cellml#NBC_current.J_Na",
"model_entity3": "",
"protein_name": "http://purl.obolibrary.org/obo/PR_Q9ET37",
"sink_fma": "http://purl.obolibrary.org/obo/FMA_9673",
"sink_fma2": "http://purl.obolibrary.org/obo/FMA_66836",
"sink_fma3": "",
"solute_chebi": "http://purl.obolibrary.org/obo/CHEBI_29101",
"solute_chebi2": "http://purl.obolibrary.org/obo/CHEBI_29101",
"solute_chebi3": "",
"solute_text": "Na+",
"solute_text2": "Na+",
"solute_text3": "",
"source_fma": "http://purl.obolibrary.org/obo/FMA_66836",
"source_fma2": "http://purl.obolibrary.org/obo/FMA_9673",
"source_fma3": "",
"variable_text": "J_Na",
"variable_text2": "J_Na",
"variable_text3": ""
},
{
"med_fma": "http://purl.obolibrary.org/obo/FMA_84666",
"med_pr": "http://purl.obolibrary.org/obo/PR_P55018",
"med_pr_text": "solute carrier family 12 member 3 (rat)",
"med_pr_text_syn": "TSC",
"model_entity": "chang_fujita_b_1999.cellml#total_transepithelial_sodium_flux.J_mc_Na",
"model_entity2": "chang_fujita_b_1999.cellml#solute_concentrations.J_mc_Cl",
"model_entity3": "",
"protein_name": "http://purl.obolibrary.org/obo/CL_0000066",
"sink_fma": "http://purl.obolibrary.org/obo/FMA_66836",
"sink_fma2": "http://purl.obolibrary.org/obo/FMA_66836",
"sink_fma3": "",
"solute_chebi": "http://purl.obolibrary.org/obo/CHEBI_29101",
"solute_chebi2": "http://purl.obolibrary.org/obo/CHEBI_17996",
"solute_chebi3": "",
"solute_text": "Na+",
"solute_text2": "Cl-",
"solute_text3": "",
"source_fma": "http://purl.obolibrary.org/obo/FMA_74550",
"source_fma2": "http://purl.obolibrary.org/obo/FMA_74550",
"source_fma3": "",
"variable_text": "J_mc_Na",
"variable_text2": "J_mc_Cl",
"variable_text3": ""
},
{
"med_fma": "http://purl.obolibrary.org/obo/FMA_84666",
"med_pr": "http://purl.obolibrary.org/obo/PR_Q63633",
"med_pr_text": "solute carrier family 12 member 5 (rat)",
"med_pr_text_syn": "Q63633",
"model_entity": "chang_fujita_b_1999.cellml#solute_concentrations.J_mc_Cl",
"model_entity2": "chang_fujita_b_1999.cellml#total_transepithelial_potassium_flux.J_mc_K",
"model_entity3": "",
"protein_name": "http://purl.obolibrary.org/obo/CL_0000066",
"sink_fma": "http://purl.obolibrary.org/obo/FMA_66836",
"sink_fma2": "http://purl.obolibrary.org/obo/FMA_66836",
"sink_fma3": "",
"solute_chebi": "http://purl.obolibrary.org/obo/CHEBI_17996",
"solute_chebi2": "http://purl.obolibrary.org/obo/CHEBI_29103",
"solute_chebi3": "",
"solute_text": "Cl-",
"solute_text2": "K+",
"solute_text3": "",
"source_fma": "http://purl.obolibrary.org/obo/FMA_74550",
"source_fma2": "http://purl.obolibrary.org/obo/FMA_74550",
"source_fma3": "",
"variable_text": "J_mc_Cl",
"variable_text2": "J_mc_K",
"variable_text3": ""
},
{
"med_fma": "http://purl.obolibrary.org/obo/FMA_84666",
"med_pr": "http://purl.obolibrary.org/obo/PR_P37089",
"med_pr_text": "amiloride-sensitive sodium channel subunit alpha (rat)",
"med_pr_text_syn": "RENAC",
"model_entity": "chang_fujita_b_1999.cellml#mc_sodium_flux.G_mc_Na",
"model_entity2": "",
"model_entity3": "",
"protein_name": "http://purl.obolibrary.org/obo/CL_0000066",
"sink_fma": "http://purl.obolibrary.org/obo/FMA_66836",
"sink_fma2": "channel",
"sink_fma3": "channel",
"solute_chebi": "http://purl.obolibrary.org/obo/CHEBI_29101",
"solute_chebi2": "channel",
"solute_chebi3": "channel",
"solute_text": "Na+",
"solute_text2": "channel",
"solute_text3": "channel",
"source_fma": "http://purl.obolibrary.org/obo/FMA_74550",
"source_fma2": "channel",
"source_fma3": "channel",
"variable_text": "G_mc_Na",
"variable_text2": "channel",
"variable_text3": "channel"
},
{
"med_fma": "http://purl.obolibrary.org/obo/FMA_84666",
"med_pr": "http://purl.obolibrary.org/obo/PR_Q06393",
"med_pr_text": "chloride channel protein ClC-Ka (rat)",
"med_pr_text_syn": "CLCNK1",
"model_entity": "chang_fujita_b_1999.cellml#mc_chloride_flux.G_mc_Cl",
"model_entity2": "",
"model_entity3": "",
"protein_name": "http://purl.obolibrary.org/obo/CL_0000066",
"sink_fma": "http://purl.obolibrary.org/obo/FMA_66836",
"sink_fma2": "channel",
"sink_fma3": "channel",
"solute_chebi": "http://purl.obolibrary.org/obo/CHEBI_17996",
"solute_chebi2": "channel",
"solute_chebi3": "channel",
"solute_text": "Cl-",
"solute_text2": "channel",
"solute_text3": "channel",
"source_fma": "http://purl.obolibrary.org/obo/FMA_74550",
"source_fma2": "channel",
"source_fma3": "channel",
"variable_text": "G_mc_Cl",
"variable_text2": "channel",
"variable_text3": "channel"
},
{
"med_fma": "http://purl.obolibrary.org/obo/FMA_84666",
"med_pr": "http://purl.obolibrary.org/obo/PR_P15387",
"med_pr_text": "potassium voltage-gated channel subfamily B member 1 (rat)",
"med_pr_text_syn": "P15387",
"model_entity": "chang_fujita_b_1999.cellml#mc_potassium_flux.G_mc_K",
"model_entity2": "",
"model_entity3": "",
"protein_name": "http://purl.obolibrary.org/obo/CL_0000066",
"sink_fma": "http://purl.obolibrary.org/obo/FMA_66836",
"sink_fma2": "channel",
"sink_fma3": "channel",
"solute_chebi": "http://purl.obolibrary.org/obo/CHEBI_29103",
"solute_chebi2": "channel",
"solute_chebi3": "channel",
"solute_text": "K+",
"solute_text2": "channel",
"solute_text3": "channel",
"source_fma": "http://purl.obolibrary.org/obo/FMA_74550",
"source_fma2": "channel",
"source_fma3": "channel",
"variable_text": "G_mc_K",
"variable_text2": "channel",
"variable_text3": "channel"
},
{
"med_fma": "http://purl.obolibrary.org/obo/FMA_84669",
"med_pr": "http://purl.obolibrary.org/obo/PR_P06685",
"med_pr_text": "sodium/potassium-transporting ATPase subunit alpha-1 (rat)",
"med_pr_text_syn": "P06685",
"model_entity": "chang_fujita_b_1999.cellml#solute_concentrations.J_sc_Na",
"model_entity2": "chang_fujita_b_1999.cellml#sc_potassium_flux.J_sc_K",
"model_entity3": "",
"protein_name": "http://purl.obolibrary.org/obo/CL_0000066",
"sink_fma": "http://purl.obolibrary.org/obo/FMA_9673",
"sink_fma2": "http://purl.obolibrary.org/obo/FMA_66836",
"sink_fma3": "",
"solute_chebi": "http://purl.obolibrary.org/obo/CHEBI_29101",
"solute_chebi2": "http://purl.obolibrary.org/obo/CHEBI_29103",
"solute_chebi3": "",
"solute_text": "Na+",
"solute_text2": "K+",
"solute_text3": "",
"source_fma": "http://purl.obolibrary.org/obo/FMA_66836",
"source_fma2": "http://purl.obolibrary.org/obo/FMA_9673",
"source_fma3": "",
"variable_text": "J_sc_Na",
"variable_text2": "J_sc_K",
"variable_text3": ""
},
{
"med_fma": "http://purl.obolibrary.org/obo/FMA_84669",
"med_pr": "http://purl.obolibrary.org/obo/PR_Q06393",
"med_pr_text": "chloride channel protein ClC-Ka (rat)",
"med_pr_text_syn": "CLCNK1",
"model_entity": "chang_fujita_b_1999.cellml#sc_chloride_flux.G_sc_Cl",
"model_entity2": "",
"model_entity3": "",
"protein_name": "http://purl.obolibrary.org/obo/CL_0000066",
"sink_fma": "http://purl.obolibrary.org/obo/FMA_66836",
"sink_fma2": "channel",
"sink_fma3": "channel",
"solute_chebi": "http://purl.obolibrary.org/obo/CHEBI_17996",
"solute_chebi2": "channel",
"solute_chebi3": "channel",
"solute_text": "Cl-",
"solute_text2": "channel",
"solute_text3": "channel",
"source_fma": "http://purl.obolibrary.org/obo/FMA_9673",
"source_fma2": "channel",
"source_fma3": "channel",
"variable_text": "G_sc_Cl",
"variable_text2": "channel",
"variable_text3": "channel"
},
{
"med_fma": "http://purl.obolibrary.org/obo/FMA_84669",
"med_pr": "http://purl.obolibrary.org/obo/PR_P15387",
"med_pr_text": "potassium voltage-gated channel subfamily B member 1 (rat)",
"med_pr_text_syn": "P15387",
"model_entity": "chang_fujita_b_1999.cellml#sc_potassium_flux.G_sc_K",
"model_entity2": "",
"model_entity3": "",
"protein_name": "http://purl.obolibrary.org/obo/CL_0000066",
"sink_fma": "http://purl.obolibrary.org/obo/FMA_66836",
"sink_fma2": "channel",
"sink_fma3": "channel",
"solute_chebi": "http://purl.obolibrary.org/obo/CHEBI_29103",
"solute_chebi2": "channel",
"solute_chebi3": "channel",
"solute_text": "K+",
"solute_text2": "channel",
"solute_text3": "channel",
"source_fma": "http://purl.obolibrary.org/obo/FMA_9673",
"source_fma2": "channel",
"source_fma3": "channel",
"variable_text": "G_sc_K",
"variable_text2": "channel",
"variable_text3": "channel"
},
{
"med_fma": "http://purl.obolibrary.org/obo/FMA_67394",
"med_pr": "http://purl.obolibrary.org/obo/PR_Q9Z0S6",
"med_pr_text": "claudin-10 (mouse)",
"med_pr_text_syn": "CLDN10A",
"model_entity": "chang_fujita_b_1999.cellml#ms_sodium_flux.G_ms_Na",
"model_entity2": "",
"model_entity3": "",
"protein_name": "http://purl.obolibrary.org/obo/CL_0000066",
"sink_fma": "http://purl.obolibrary.org/obo/FMA_9673",
"sink_fma2": "diffusiveflux",
"sink_fma3": "diffusiveflux",
"solute_chebi": "http://purl.obolibrary.org/obo/CHEBI_29101",
"solute_chebi2": "diffusiveflux",
"solute_chebi3": "diffusiveflux",
"solute_text": "Na+",
"solute_text2": "diffusiveflux",
"solute_text3": "diffusiveflux",
"source_fma": "http://purl.obolibrary.org/obo/FMA_74550",
"source_fma2": "diffusiveflux",
"source_fma3": "diffusiveflux",
"variable_text": "G_ms_Na",
"variable_text2": "diffusiveflux",
"variable_text3": "diffusiveflux"
},
{
"med_fma": "http://purl.obolibrary.org/obo/FMA_67394",
"med_pr": "http://purl.obolibrary.org/obo/PR_O35054",
"med_pr_text": "claudin-4 (mouse)",
"med_pr_text_syn": "CPETR1",
"model_entity": "chang_fujita_b_1999.cellml#ms_chloride_flux.G_ms_Cl",
"model_entity2": "",
"model_entity3": "",
"protein_name": "http://purl.obolibrary.org/obo/CL_0000066",
"sink_fma": "http://purl.obolibrary.org/obo/FMA_9673",
"sink_fma2": "diffusiveflux",
"sink_fma3": "diffusiveflux",
"solute_chebi": "http://purl.obolibrary.org/obo/CHEBI_17996",
"solute_chebi2": "diffusiveflux",
"solute_chebi3": "diffusiveflux",
"solute_text": "Cl-",
"solute_text2": "diffusiveflux",
"solute_text3": "diffusiveflux",
"source_fma": "http://purl.obolibrary.org/obo/FMA_74550",
"source_fma2": "diffusiveflux",
"source_fma3": "diffusiveflux",
"variable_text": "G_ms_Cl",
"variable_text2": "diffusiveflux",
"variable_text3": "diffusiveflux"
},
{
"med_fma": "http://purl.obolibrary.org/obo/FMA_67394",
"med_pr": "http://purl.obolibrary.org/obo/PR_F1LZ52",
"med_pr_text": "kelch-like protein 3 (rat)",
"med_pr_text_syn": "F1LZ52",
"model_entity": "chang_fujita_b_1999.cellml#ms_potassium_flux.G_ms_K",
"model_entity2": "",
"model_entity3": "",
"protein_name": "http://purl.obolibrary.org/obo/CL_0000066",
"sink_fma": "http://purl.obolibrary.org/obo/FMA_9673",
"sink_fma2": "diffusiveflux",
"sink_fma3": "diffusiveflux",
"solute_chebi": "http://purl.obolibrary.org/obo/CHEBI_29103",
"solute_chebi2": "diffusiveflux",
"solute_chebi3": "diffusiveflux",
"solute_text": "K+",
"solute_text2": "diffusiveflux",
"solute_text3": "diffusiveflux",
"source_fma": "http://purl.obolibrary.org/obo/FMA_74550",
"source_fma2": "diffusiveflux",
"source_fma3": "diffusiveflux",
"variable_text": "G_ms_K",
"variable_text2": "diffusiveflux",
"variable_text3": "diffusiveflux"
}
]
# sparql endpoint in PMR
sparqlendpoint = "https://models.physiomeproject.org/pmr2_virtuoso_search"
# workspace url where we have all models
workspaceURL = "https://models.physiomeproject.org/workspace/267/rawfile/HEAD/"
# reference URIs of anatomical locations
lumen_fma = "http://purl.obolibrary.org/obo/FMA_74550"
cytosol_fma = "http://purl.obolibrary.org/obo/FMA_66836"
interstitialfluid_fma = "http://purl.obolibrary.org/obo/FMA_9673"
# solutes dictionary to map URI to name
dict_solutes = [
    {
        "http://purl.obolibrary.org/obo/CHEBI_29101": "Na",
        "http://purl.obolibrary.org/obo/CHEBI_17996": "Cl",
        "http://purl.obolibrary.org/obo/CHEBI_29103": "K"
    }
]
# get channels and diffusive fluxes equations from the source model
def getChannelsEquation(str_channel, v, compartment, importedModel, m, epithelial):
    # string indexes of "id=" and "</math>" inside the MathML
    str_index = []
    # save here the variables required to make channels and diffusive fluxes equations,
    # e.g. ['C_c_Na', 'RT', 'psi_c', 'P_mc_Na', 'F', 'psi_m']
    list_of_variables = []
    # remove C_c_Na from ['C_c_Na', 'RT', 'psi_c', 'P_mc_Na', 'F', 'psi_m'] and save it in this variable
    list_of_variables_2 = []

    for i in range(len(str_channel)):
        if "id=" in str_channel[i]:
            str_index.append(i)  # insert variables equation
        elif "</math>" in str_channel[i]:
            str_index.append(i)  # insert math index to note end of math

    for i in range(len(str_index)):
        flag = False
        if i + 1 == len(str_index):
            break
        else:
            my_str = str_channel[str_index[i]:str_index[i + 1] - 1]
            for i in range(len(my_str)):
                if "<eq/>" in my_str[i] and "<ci>" + v + "</ci>" in my_str[i + 1]:
                    channel_str = ""
                    for s in my_str:
                        channel_str += s
                    channel_str = "<math xmlns=\"http://www.w3.org/1998/Math/MathML\">\n" + channel_str + "</apply>\n</math>\n"
                    # check whether this channel already exists in this component;
                    # we do this because G_mc_Na, etc. comes twice in the epithelial component!
                    mth = compartment.math()
                    if channel_str not in mth:
                        compartment.appendMath(channel_str)
                    # extract variables from this math string
                    for i in range(len(my_str)):
                        if "<ci>" in my_str[i]:
                            start_index = my_str[i].find("<ci>")
                            end_index = my_str[i].find("</ci>")
                            if my_str[i][start_index + 4:end_index] != v:
                                list_of_variables.append(my_str[i][start_index + 4:end_index])
                    flag = True
                    break
        if flag:
            break

    # remove variables that already exist in the component
    for i in range(compartment.variableCount()):
        var = compartment.variable(i)
        # we will remove C_c_Na from the list below after constructing the lumen, cytosol
        # and interstitial fluid components, e.g. ['C_c_Na', 'RT', 'psi_c', 'P_mc_Na', 'F', 'psi_m']
        if var.name() in list_of_variables:
            list_of_variables.remove(var.name())

    # unique elements in the list
    list_of_variables = list(set(list_of_variables))

    # save all components, including a parent component, into the mycomponent variable;
    # for now, we have considered 3 encapsulation stages: grandparent -> parent -> children
    mycomponent = Component()
    for i in range(importedModel.componentCount()):
        c = importedModel.component(i)
        mycomponent.addComponent(c)
        for j in range(c.componentCount()):
            c2 = c.component(j)
            mycomponent.addComponent(c2)
            for k in range(c2.componentCount()):
                c3 = c2.component(k)
                mycomponent.addComponent(c3)

    for item in list_of_variables:
        # iterate over components
        for i in range(mycomponent.componentCount()):
            c = mycomponent.component(i)
            # variables within a component
            for j in range(c.variableCount()):
                v = c.variable(j)
                if v.name() == item and v.initialValue() != "":
                    # add units
                    addUnitsModel(v.units(), importedModel, m)
                    if epithelial.variable(v.name()) is None:
                        v_epithelial = Variable()
                        # insert this variable in the epithelial component
                        createComponent(v_epithelial, v.name(), v.units(), "public_and_private",
                                        v.initialValue(), epithelial, v)
                    if compartment.variable(v.name()) is None:
                        v_compartment = Variable()
                        # insert this variable in the lumen/cytosol/interstitial fluid component
                        createComponent(v_compartment, v.name(), v.units(), "public", None, compartment, v)
# user-defined function to append a substring of ODE-based equations
def subMath(sign, vFlux):
    return "        <apply>\n" \
           "            <" + sign + "/>\n" + \
           "            <ci>" + vFlux + "</ci>\n" + \
           "        </apply>"

# user-defined function to define ODE-based equations
def fullMath(vConcentration, subMath):
    return "<math xmlns=\"http://www.w3.org/1998/Math/MathML\">\n" \
           "    <apply id=" + '"' + vConcentration + "_diff_eq" + '"' + ">\n" + \
           "        <eq/>\n" \
           "        <apply>\n" \
           "            <diff/>\n" \
           "            <bvar>\n" \
           "                <ci>time</ci>\n" \
           "            </bvar>\n" \
           "            <ci>" + vConcentration + "</ci>\n" + \
           "        </apply>\n" \
           "        <apply>\n" \
           "            <plus/>\n" \
           "" + subMath + "\n" + \
           "        </apply>\n" \
           "    </apply>\n" \
           "</math>\n"
# insert ODE equations for the lumen, cytosol and interstitial fluid components
def insertODEMathEquation(math_dict, compartment, v_cons, v_flux, sign):
    name = compartment.name()
    if name not in ("lumen", "cytosol", "interstitialfluid"):
        return
    term = subMath(sign, v_flux.name())
    if v_cons.name() not in math_dict[0][name]:
        math_dict[0][name][v_cons.name()] = term
    else:
        math_dict[0][name][v_cons.name()] += "\n" + term
# math for total fluxes in the lumen, cytosol and interstitial fluid components
def fullMathTotalFlux(vTotalFlux, sMath):
    return "<math xmlns=\"http://www.w3.org/1998/Math/MathML\">\n" \
           "    <apply id=" + '"' + vTotalFlux + "_calculation" + '"' + ">\n" + \
           "        <eq/>\n" \
           "        <ci>" + vTotalFlux + "</ci>\n" + \
           "        <apply>\n" \
           "            <plus/>\n" \
           "" + sMath + "\n" + \
           "        </apply>\n" \
           "    </apply>\n" \
           "</math>\n"
# user-defined function to append a substring of total flux and channel equations
def subMathTotalFluxAndChannel(sign, vFlux):
    return "        <apply>\n" \
           "            <" + sign + "/>\n" + \
           "            <ci>" + vFlux + "</ci>\n" + \
           "        </apply>"
# insert equations for total fluxes
def insertMathsForTotalFluxes(compartment, math_dict_Total_Flux, dict_solutes, chebi, sign, v_flux):
    name = compartment.name()
    if name not in ("lumen", "cytosol", "interstitialfluid"):
        return
    total_flux = "J_" + dict_solutes[0][chebi] + "_" + name
    term = subMathTotalFluxAndChannel(sign, v_flux.name())
    if total_flux not in math_dict_Total_Flux[0][name]:
        math_dict_Total_Flux[0][name][total_flux] = term
    else:
        math_dict_Total_Flux[0][name][total_flux] += "\n" + term
# insert equations for channels and diffusive fluxes
def insertMathsForTotalChannels(compartment, math_dict_Total_Flux, dict_solutes, chebi, sign, flux_name):
    name = compartment.name()
    if name not in ("lumen", "cytosol", "interstitialfluid"):
        return
    total_flux = "J_" + dict_solutes[0][chebi] + "_" + name
    term = subMathTotalFluxAndChannel(sign, flux_name)
    if total_flux not in math_dict_Total_Flux[0][name]:
        math_dict_Total_Flux[0][name][total_flux] = term
    else:
        math_dict_Total_Flux[0][name][total_flux] += "\n" + term
# assign a plus or minus sign in the equations: a flux whose source is this
# compartment is subtracted from its ODE, a flux whose sink is this compartment is added
def odeSignNotation(compartment, source_fma, sink_fma):
    fma_by_name = {
        "lumen": lumen_fma,
        "cytosol": cytosol_fma,
        "interstitialfluid": interstitialfluid_fma,
    }
    this_fma = fma_by_name[compartment.name()]
    if source_fma == this_fma:
        return "minus"
    if sink_fma == this_fma:
        return "plus"
    # neither endpoint matches this compartment; the original code left sign
    # unassigned here (UnboundLocalError), so fail explicitly instead
    raise ValueError("flux does not involve compartment " + compartment.name())
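# The sign convention can be exercised in isolation. The identifiers below are
# placeholders (the real code compares FMA IRIs resolved elsewhere in the script);
# only the source-vs-sink rule is taken from the function above.

```python
# Placeholder FMA identifiers for two compartments.
lumen_fma, cytosol_fma = "FMA:lumen", "FMA:cytosol"

this_fma = lumen_fma                              # ODE being built for the lumen
source_fma, sink_fma = lumen_fma, cytosol_fma     # flux from lumen into cytosol

# A flux leaving this compartment is subtracted; one entering it is added.
sign = "minus" if source_fma == this_fma else "plus"
print(sign)
```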
# user-defined function to instantiate a variable and its attributes in a component
# if v2 is None then the variable comes from this component, e.g. environment.time
# else the variable comes from another component, e.g. lumen.P_mc_Na where
# P_mc_Na comes from a source model
def createComponent(v, name, unit, interface, initialvalue, component, v2):
    v.setName(name)
    v.setUnits(unit)
    v.setInterfaceType(interface)
    if initialvalue is not None:
        v.setInitialValue(initialvalue)
    if v2 is None:
        v.setId(component.name() + "." + v.name())
    else:
        v.setId(component.name() + "." + v2.name())
    component.addVariable(v)
# concentration sparql query to get a list of concentrations of solutes (chebi)
# in the (fma) compartment; fma and chebi are the two input values to this function
def concentrationSparql(fma, chebi):
    return "PREFIX semsim: <http://www.bhi.washington.edu/SemSim#>" \
           "PREFIX ro: <http://www.obofoundry.org/ro/ro.owl#>" \
           "PREFIX dcterms: <http://purl.org/dc/terms/>" \
           "SELECT ?modelEntity " \
           "WHERE { " \
           "?modelEntity semsim:isComputationalComponentFor ?model_prop. " \
           "?model_prop semsim:hasPhysicalDefinition <http://identifiers.org/opb/OPB_00340>. " \
           "?model_prop semsim:physicalPropertyOf ?source_entity. " \
           "?source_entity ro:part_of ?source_part_of_entity. " \
           "?source_part_of_entity semsim:hasPhysicalDefinition <" + fma + ">. " + \
           "?source_entity semsim:hasPhysicalDefinition <" + chebi + ">. " + \
           "}"
# add required units from the imported models
def addUnitsModel(unit_name, importedModel, m):
    i = 0
    while importedModel.units(i) is not None:
        u = importedModel.units(i)
        # u.getUnitAttributes(reference, prefix, exponent, multiplier, id)
        if u.name() == unit_name:
            # if this unit does not exist yet, add it to the model
            if m.units(unit_name) is None:
                m.addUnits(u)
            break
        i += 1
# instantiate the source url and create an imported component in the import section of the new model
def instantiateImportedComponent(sourceurl, component, epithelial, m):
    imp = ImportSource()
    imp.setUrl(sourceurl)
    importedComponent = Component()
    importedComponent.setName(component)
    importedComponent.setSourceComponent(imp, component)
    if m.component(importedComponent.name()) is None:
        m.addComponent(importedComponent)
    # making an http request to the source model
    r = requests.get(sourceurl)
    # parse the string representation of the model for access by libcellml
    p = Parser()
    impModel = p.parseModel(r.text)
    # check for a valid model
    if p.errorCount() > 0:
        for i in range(p.errorCount()):
            desc = p.error(i).description()
            cellmlNullNamespace = "Model element is in invalid namespace 'null'"
            cellml10Namespace = "Model element is in invalid namespace 'http://www.cellml.org/cellml/1.0#'"
            cellml11Namespace = "Model element is in invalid namespace 'http://www.cellml.org/cellml/1.1#'"
            if desc.find(cellmlNullNamespace) != -1:
                print("Error in miscellaneous.py: ", p.error(i).description())
                exit()
            elif desc.find(cellml10Namespace) != -1 or desc.find(cellml11Namespace) != -1:
                print("Msg in miscellaneous.py: ", p.error(i).description())
                # translating cellml 1.0 or 1.1 to 2.0 with an XSLT stylesheet
                dom = ET.fromstring(r.text.encode("utf-8"))
                xslt = ET.parse("cellml1to2.xsl")
                transform = ET.XSLT(xslt)
                newdom = transform(dom)
                mstr = ET.tostring(newdom, pretty_print=True)
                mstr = mstr.decode("utf-8")
                # parse the string representation of the model for access by libcellml
                impModel = p.parseModel(mstr)
            else:
                print("Error in miscellaneous.py: ", p.error(i).description())
                exit()
    impComponent = impModel.component(importedComponent.name())
    # in order to later define the connections we need, we must make sure all the
    # variables from the source model are present in the imported component; we only
    # need the name, so just grab that from the source
    for i in range(impComponent.variableCount()):
        impVariable = impComponent.variable(i)
        v = Variable()
        v.setName(impVariable.name())
        importedComponent.addVariable(v)
# process model entities and source models' urls
def processModelEntity(modelentity, epithelial, m):
    cellml_model_name = modelentity[0:modelentity.find('#')]
    component_variable = modelentity[modelentity.find('#') + 1:len(modelentity)]
    component = component_variable[:component_variable.find('.')]
    sourceurl = workspaceURL + cellml_model_name
    instantiateImportedComponent(sourceurl, component, epithelial, m)
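# A standalone sketch of the string slicing in processModelEntity, using a made-up
# "model.cellml#component.variable" identifier (the model and component names below
# are hypothetical, not taken from this script).

```python
# Hypothetical model entity: model file, then '#', then component.variable.
modelentity = "weinstein_1995.cellml#nhe3.J_NHE3_Na"

cellml_model_name = modelentity[:modelentity.find('#')]          # before '#'
component_variable = modelentity[modelentity.find('#') + 1:]     # after '#'
component = component_variable[:component_variable.find('.')]    # before first '.'
print(cellml_model_name, component)
```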

# --- epg_grabber/sites/beinsports_id.py, from akmalharith/epg-grabber (MIT license) ---
from typing import List
import requests
from pathlib import Path
from datetime import date, datetime
from bs4 import BeautifulSoup
from helper.classes import Channel, Program
from helper.utils import get_channel_by_name, get_epg_datetime
TIMEZONE_OFFSET = "+0800"
PROGRAM_URL = "https://epg.beinsports.com/utctime_id.php?cdate={date}&offset=+8&mins=00&category=sports&id=123"
def get_all_channels():
    return [
        Channel("channels_1", "beInSPORTS1.Id", "beIN SPORTS 1", "", True),
        Channel("channels_2", "beInSPORTS2.Id", "beIN SPORTS 2", "", True),
    ]
def get_programs_by_channel(channel_name: str, *args) -> List[Program]:
    # TODO: Accept days as input and increment date_input in an outer for loop
    date_input = date.today()
    datetime_today = datetime.now().replace(
        hour=0, minute=0, second=0, microsecond=0)
    url = PROGRAM_URL.format(date=date_input)
    channel = get_channel_by_name(channel_name, Path(__file__).stem)
    try:
        r = requests.get(url)
    except requests.exceptions.RequestException as e:
        raise SystemExit(e)
    if r.status_code != 200:
        raise Exception(r.raise_for_status())
    soup = BeautifulSoup(r.text, features="html.parser")
    divs = soup.find_all("div", {"id": channel.id})
    programs = []
    for div in divs:
        line = div.find_all("li", {"parent": "slider_1"})
        for value in line:
            time_period = str(value.find("p", {"class": "time"}).string)
            time_start, time_end = time_period.split("-")
            start_hour, start_minute = time_start.split(":")
            start_time = datetime_today.replace(
                hour=int(start_hour), minute=int(start_minute))
            end_hour, end_minute = time_end.split(":")
            end_time = datetime_today.replace(
                hour=int(end_hour), minute=int(end_minute))
            obj = Program(
                channel_name=channel.tvg_id,
                title=value.find("p", {"class": "title"}).string,
                description=value.find("p", {"class": "format"}).string,
                start=get_epg_datetime(start_time, TIMEZONE_OFFSET),
                stop=get_epg_datetime(end_time, TIMEZONE_OFFSET),
            )
            programs.append(obj)
    return programs
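# The schedule-text parsing above can be exercised in isolation; a sketch with a
# sample slot string (the date and times are made up). Note that int() tolerates
# the surrounding spaces left behind by split("-").

```python
from datetime import datetime

datetime_today = datetime(2022, 3, 16)      # sample date at midnight
time_period = "18:30 - 20:30"               # sample slot text from the EPG page

time_start, time_end = time_period.split("-")
start_hour, start_minute = time_start.split(":")
end_hour, end_minute = time_end.split(":")

# int() strips the leading/trailing whitespace around the split pieces.
start_time = datetime_today.replace(hour=int(start_hour), minute=int(start_minute))
end_time = datetime_today.replace(hour=int(end_hour), minute=int(end_minute))
print(start_time, "->", end_time)
```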