# weighmail/observers/__init__.py (gremmie/weighmail, BSD-3-Clause)
"""Base observer class for weighmail operations."""
class BaseObserver(object):
    """Base observer class; does nothing."""

    def searching(self, label):
        """Called when the search process has started for a label"""
        pass

    def labeling(self, label, count):
        """Called when the labelling process has started for a given label

        label - the label we are working on
        count - number of messages to label
        """
        pass

    def done_labeling(self, label, count):
        """Called when finished labelling for a given label

        label - the label we were working on
        count - number of messages that were labelled
        """
        pass

    def done(self):
        """Called when completely finished"""
        pass
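A minimal concrete observer can be sketched by overriding the hooks above. `PrintObserver` below is a hypothetical example, not part of weighmail; `BaseObserver` is repeated so the sketch is self-contained.

```python
class BaseObserver(object):
    """Base observer class; does nothing."""
    def searching(self, label):
        pass
    def labeling(self, label, count):
        pass
    def done_labeling(self, label, count):
        pass
    def done(self):
        pass


class PrintObserver(BaseObserver):
    """Hypothetical observer that records progress messages."""
    def __init__(self):
        self.messages = []

    def searching(self, label):
        self.messages.append('searching %s' % label)

    def done_labeling(self, label, count):
        self.messages.append('labelled %d messages as %s' % (count, label))


# drive the observer the way weighmail would call its hooks
obs = PrintObserver()
obs.searching('big')
obs.done_labeling('big', 3)
```

Unoverridden hooks (`labeling`, `done`) silently fall through to the base class, so an observer only implements the events it cares about.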
# pz/installer.py (pyramidzero/pzinstaller, MIT)
from subprocess import run
# configuration defaults: tool commands to run
update = ['brew', 'update']
install = ['brew', 'install', 'git']


class Installer:
    def func_update(self):
        run(update)

    def func_install(self):
        run(install)
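The hard-coded brew commands make the class awkward to exercise on machines without Homebrew. A sketch of the same run-a-command-list pattern with the commands injected (`ConfigurableInstaller` is hypothetical, driven here by a harmless Python no-op instead of brew):

```python
import subprocess
import sys


class ConfigurableInstaller:
    """Same pattern as Installer, but the command lists are injected."""
    def __init__(self, update_cmd, install_cmd):
        self.update_cmd = update_cmd
        self.install_cmd = install_cmd

    def func_update(self):
        # check=True raises CalledProcessError on a non-zero exit code
        return subprocess.run(self.update_cmd, check=True)

    def func_install(self):
        return subprocess.run(self.install_cmd, check=True)


# exercised with a no-op command so the sketch runs without Homebrew
noop = [sys.executable, '-c', 'pass']
inst = ConfigurableInstaller(noop, noop)
result = inst.func_update()
```

Returning the `CompletedProcess` also lets callers inspect the exit code instead of discarding it.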
# pysnmp/FUNI-MIB.py (agustinhenze/mibs.snmplabs.com, Apache-2.0)
#
# PySNMP MIB module FUNI-MIB (http://snmplabs.com/pysmi)
# ASN.1 source file:///Users/davwang4/Dev/mibs.snmplabs.com/asn1/FUNI-MIB
# Produced by pysmi-0.3.4 at Mon Apr 29 19:03:03 2019
# On host DAVWANG4-M-1475 platform Darwin version 18.5.0 by user davwang4
# Using Python version 3.7.3 (default, Mar 27 2019, 09:23:15)
#
ObjectIdentifier, OctetString, Integer = mibBuilder.importSymbols("ASN1", "ObjectIdentifier", "OctetString", "Integer")
NamedValues, = mibBuilder.importSymbols("ASN1-ENUMERATION", "NamedValues")
ConstraintsUnion, SingleValueConstraint, ValueRangeConstraint, ConstraintsIntersection, ValueSizeConstraint = mibBuilder.importSymbols("ASN1-REFINEMENT", "ConstraintsUnion", "SingleValueConstraint", "ValueRangeConstraint", "ConstraintsIntersection", "ValueSizeConstraint")
ifIndex, = mibBuilder.importSymbols("IF-MIB", "ifIndex")
NotificationGroup, ModuleCompliance, ObjectGroup = mibBuilder.importSymbols("SNMPv2-CONF", "NotificationGroup", "ModuleCompliance", "ObjectGroup")
TimeTicks, enterprises, MibIdentifier, Counter32, MibScalar, MibTable, MibTableRow, MibTableColumn, iso, Gauge32, ObjectIdentity, NotificationType, ModuleIdentity, Bits, Integer32, Unsigned32, Counter64, IpAddress = mibBuilder.importSymbols("SNMPv2-SMI", "TimeTicks", "enterprises", "MibIdentifier", "Counter32", "MibScalar", "MibTable", "MibTableRow", "MibTableColumn", "iso", "Gauge32", "ObjectIdentity", "NotificationType", "ModuleIdentity", "Bits", "Integer32", "Unsigned32", "Counter64", "IpAddress")
TextualConvention, DisplayString = mibBuilder.importSymbols("SNMPv2-TC", "TextualConvention", "DisplayString")
atmfFuniMIB = ModuleIdentity((1, 3, 6, 1, 4, 1, 353, 5, 6, 1))
if mibBuilder.loadTexts: atmfFuniMIB.setLastUpdated('9705080000Z')
if mibBuilder.loadTexts: atmfFuniMIB.setOrganization('The ATM Forum')
atmForum = MibIdentifier((1, 3, 6, 1, 4, 1, 353))
atmForumNetworkManagement = MibIdentifier((1, 3, 6, 1, 4, 1, 353, 5))
atmfFuni = MibIdentifier((1, 3, 6, 1, 4, 1, 353, 5, 6))
funiMIBObjects = MibIdentifier((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1))
class FuniValidVpi(TextualConvention, Integer32):
    status = 'current'
    subtypeSpec = Integer32.subtypeSpec + ValueRangeConstraint(0, 255)

class FuniValidVci(TextualConvention, Integer32):
    status = 'current'
    subtypeSpec = Integer32.subtypeSpec + ValueRangeConstraint(0, 65535)
funiIfConfTable = MibTable((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1, 1), )
if mibBuilder.loadTexts: funiIfConfTable.setStatus('current')
funiIfConfEntry = MibTableRow((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1, 1, 1), ).setIndexNames((0, "IF-MIB", "ifIndex"))
if mibBuilder.loadTexts: funiIfConfEntry.setStatus('current')
funiIfConfMode = MibTableColumn((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1, 1, 1, 1), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4))).clone(namedValues=NamedValues(("mode1a", 1), ("mode1b", 2), ("mode3", 3), ("mode4", 4))).clone('mode1a')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: funiIfConfMode.setStatus('current')
funiIfConfFcsBits = MibTableColumn((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1, 1, 1, 2), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("fcsBits16", 1), ("fcsBits32", 2))).clone('fcsBits16')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: funiIfConfFcsBits.setStatus('current')
funiIfConfSigSupport = MibTableColumn((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1, 1, 1, 3), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("enabled", 1), ("disabled", 2))).clone('disabled')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: funiIfConfSigSupport.setStatus('current')
funiIfConfSigVpi = MibTableColumn((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1, 1, 1, 4), FuniValidVpi()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: funiIfConfSigVpi.setStatus('current')
funiIfConfSigVci = MibTableColumn((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1, 1, 1, 5), FuniValidVci().clone(5)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: funiIfConfSigVci.setStatus('current')
funiIfConfIlmiSupport = MibTableColumn((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1, 1, 1, 6), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("enabled", 1), ("disabled", 2))).clone('disabled')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: funiIfConfIlmiSupport.setStatus('current')
funiIfConfIlmiVpi = MibTableColumn((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1, 1, 1, 7), FuniValidVpi()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: funiIfConfIlmiVpi.setStatus('current')
funiIfConfIlmiVci = MibTableColumn((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1, 1, 1, 8), FuniValidVci().clone(16)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: funiIfConfIlmiVci.setStatus('current')
funiIfConfOamSupport = MibTableColumn((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1, 1, 1, 9), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("enabled", 1), ("disabled", 2))).clone('disabled')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: funiIfConfOamSupport.setStatus('current')
funiIfStatsTable = MibTable((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1, 2), )
if mibBuilder.loadTexts: funiIfStatsTable.setStatus('current')
funiIfStatsEntry = MibTableRow((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1, 2, 1), ).setIndexNames((0, "IF-MIB", "ifIndex"))
if mibBuilder.loadTexts: funiIfStatsEntry.setStatus('current')
funiIfEstablishedPvccs = MibTableColumn((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1, 2, 1, 1), Gauge32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: funiIfEstablishedPvccs.setStatus('current')
funiIfEstablishedSvccs = MibTableColumn((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1, 2, 1, 2), Gauge32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: funiIfEstablishedSvccs.setStatus('current')
funiIfRxAbortedFrames = MibTableColumn((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1, 2, 1, 3), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: funiIfRxAbortedFrames.setStatus('current')
funiIfRxTooShortFrames = MibTableColumn((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1, 2, 1, 4), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: funiIfRxTooShortFrames.setStatus('current')
funiIfRxTooLongFrames = MibTableColumn((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1, 2, 1, 5), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: funiIfRxTooLongFrames.setStatus('current')
funiIfRxFcsErrFrames = MibTableColumn((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1, 2, 1, 6), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: funiIfRxFcsErrFrames.setStatus('current')
funiIfRxUnknownFaFrames = MibTableColumn((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1, 2, 1, 7), Counter32()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: funiIfRxUnknownFaFrames.setStatus('current')
funiIfRxDiscardedFrames = MibTableColumn((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1, 2, 1, 8), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: funiIfRxDiscardedFrames.setStatus('current')
funiIfTxTooLongFrames = MibTableColumn((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1, 2, 1, 9), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: funiIfTxTooLongFrames.setStatus('current')
funiIfTxLenErrFrames = MibTableColumn((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1, 2, 1, 10), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: funiIfTxLenErrFrames.setStatus('current')
funiIfTxCrcErrFrames = MibTableColumn((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1, 2, 1, 11), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: funiIfTxCrcErrFrames.setStatus('current')
funiIfTxPartialFrames = MibTableColumn((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1, 2, 1, 12), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: funiIfTxPartialFrames.setStatus('current')
funiIfTxTimeOutFrames = MibTableColumn((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1, 2, 1, 13), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: funiIfTxTimeOutFrames.setStatus('current')
funiIfTxDiscardedFrames = MibTableColumn((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1, 2, 1, 14), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: funiIfTxDiscardedFrames.setStatus('current')
funiVclStatsTable = MibTable((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1, 3), )
if mibBuilder.loadTexts: funiVclStatsTable.setStatus('current')
funiVclStatsEntry = MibTableRow((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1, 3, 1), ).setIndexNames((0, "IF-MIB", "ifIndex"), (0, "FUNI-MIB", "funiVclFaVpi"), (0, "FUNI-MIB", "funiVclFaVci"))
if mibBuilder.loadTexts: funiVclStatsEntry.setStatus('current')
funiVclFaVpi = MibTableColumn((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1, 3, 1, 1), FuniValidVpi())
if mibBuilder.loadTexts: funiVclFaVpi.setStatus('current')
funiVclFaVci = MibTableColumn((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1, 3, 1, 2), FuniValidVci())
if mibBuilder.loadTexts: funiVclFaVci.setStatus('current')
funiVclRxClp0Frames = MibTableColumn((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1, 3, 1, 3), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: funiVclRxClp0Frames.setStatus('current')
funiVclRxTotalFrames = MibTableColumn((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1, 3, 1, 4), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: funiVclRxTotalFrames.setStatus('current')
funiVclTxClp0Frames = MibTableColumn((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1, 3, 1, 5), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: funiVclTxClp0Frames.setStatus('current')
funiVclTxTotalFrames = MibTableColumn((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1, 3, 1, 6), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: funiVclTxTotalFrames.setStatus('current')
funiVclRxClp0Octets = MibTableColumn((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1, 3, 1, 7), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: funiVclRxClp0Octets.setStatus('current')
funiVclRxTotalOctets = MibTableColumn((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1, 3, 1, 8), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: funiVclRxTotalOctets.setStatus('current')
funiVclTxClp0Octets = MibTableColumn((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1, 3, 1, 9), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: funiVclTxClp0Octets.setStatus('current')
funiVclTxTotalOctets = MibTableColumn((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1, 3, 1, 10), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: funiVclTxTotalOctets.setStatus('current')
funiVclRxErrors = MibTableColumn((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1, 3, 1, 11), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: funiVclRxErrors.setStatus('current')
funiVclTxErrors = MibTableColumn((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1, 3, 1, 12), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: funiVclTxErrors.setStatus('current')
funiVclRxOamFrames = MibTableColumn((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1, 3, 1, 13), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: funiVclRxOamFrames.setStatus('current')
funiVclTxOamFrames = MibTableColumn((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1, 3, 1, 14), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: funiVclTxOamFrames.setStatus('current')
funiMIBConformance = MibIdentifier((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 2))
funiMIBCompliances = MibIdentifier((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 2, 1))
funiMIBGroups = MibIdentifier((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 2, 2))
funiMIBCompliance = ModuleCompliance((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 2, 1, 1)).setObjects(("FUNI-MIB", "funiIfConfMinGroup"), ("FUNI-MIB", "funiIfStatsMinGroup"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    funiMIBCompliance = funiMIBCompliance.setStatus('current')
funiIfConfMinGroup = ObjectGroup((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 2, 2, 1)).setObjects(("FUNI-MIB", "funiIfConfMode"), ("FUNI-MIB", "funiIfConfFcsBits"), ("FUNI-MIB", "funiIfConfSigSupport"), ("FUNI-MIB", "funiIfConfSigVpi"), ("FUNI-MIB", "funiIfConfSigVci"), ("FUNI-MIB", "funiIfConfIlmiSupport"), ("FUNI-MIB", "funiIfConfIlmiVpi"), ("FUNI-MIB", "funiIfConfIlmiVci"), ("FUNI-MIB", "funiIfConfOamSupport"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    funiIfConfMinGroup = funiIfConfMinGroup.setStatus('current')
funiIfStatsMinGroup = ObjectGroup((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 2, 2, 2)).setObjects(("FUNI-MIB", "funiIfEstablishedPvccs"), ("FUNI-MIB", "funiIfEstablishedSvccs"), ("FUNI-MIB", "funiIfRxAbortedFrames"), ("FUNI-MIB", "funiIfRxTooShortFrames"), ("FUNI-MIB", "funiIfRxTooLongFrames"), ("FUNI-MIB", "funiIfRxFcsErrFrames"), ("FUNI-MIB", "funiIfRxUnknownFaFrames"), ("FUNI-MIB", "funiIfRxDiscardedFrames"), ("FUNI-MIB", "funiIfTxTooLongFrames"), ("FUNI-MIB", "funiIfTxLenErrFrames"), ("FUNI-MIB", "funiIfTxCrcErrFrames"), ("FUNI-MIB", "funiIfTxPartialFrames"), ("FUNI-MIB", "funiIfTxTimeOutFrames"), ("FUNI-MIB", "funiIfTxDiscardedFrames"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    funiIfStatsMinGroup = funiIfStatsMinGroup.setStatus('current')
funiVclStatsOptionalGroup = ObjectGroup((1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 2, 2, 3)).setObjects(("FUNI-MIB", "funiVclRxClp0Frames"), ("FUNI-MIB", "funiVclRxTotalFrames"), ("FUNI-MIB", "funiVclTxClp0Frames"), ("FUNI-MIB", "funiVclTxTotalFrames"), ("FUNI-MIB", "funiVclRxClp0Octets"), ("FUNI-MIB", "funiVclRxTotalOctets"), ("FUNI-MIB", "funiVclTxClp0Octets"), ("FUNI-MIB", "funiVclTxTotalOctets"), ("FUNI-MIB", "funiVclRxErrors"), ("FUNI-MIB", "funiVclTxErrors"), ("FUNI-MIB", "funiVclRxOamFrames"), ("FUNI-MIB", "funiVclTxOamFrames"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    funiVclStatsOptionalGroup = funiVclStatsOptionalGroup.setStatus('current')
mibBuilder.exportSymbols("FUNI-MIB", funiVclRxErrors=funiVclRxErrors, funiVclTxClp0Octets=funiVclTxClp0Octets, funiVclTxClp0Frames=funiVclTxClp0Frames, funiIfConfIlmiSupport=funiIfConfIlmiSupport, funiIfConfMode=funiIfConfMode, FuniValidVpi=FuniValidVpi, funiIfEstablishedPvccs=funiIfEstablishedPvccs, funiIfTxCrcErrFrames=funiIfTxCrcErrFrames, funiIfTxTimeOutFrames=funiIfTxTimeOutFrames, funiIfTxTooLongFrames=funiIfTxTooLongFrames, funiVclRxTotalOctets=funiVclRxTotalOctets, funiIfStatsMinGroup=funiIfStatsMinGroup, funiIfConfTable=funiIfConfTable, funiIfStatsEntry=funiIfStatsEntry, funiVclFaVpi=funiVclFaVpi, funiIfConfSigVpi=funiIfConfSigVpi, funiIfConfFcsBits=funiIfConfFcsBits, funiIfRxTooLongFrames=funiIfRxTooLongFrames, funiIfRxDiscardedFrames=funiIfRxDiscardedFrames, atmfFuniMIB=atmfFuniMIB, funiVclTxErrors=funiVclTxErrors, atmfFuni=atmfFuni, funiIfRxUnknownFaFrames=funiIfRxUnknownFaFrames, funiIfTxPartialFrames=funiIfTxPartialFrames, funiIfConfIlmiVci=funiIfConfIlmiVci, funiIfTxLenErrFrames=funiIfTxLenErrFrames, funiVclRxTotalFrames=funiVclRxTotalFrames, funiIfConfMinGroup=funiIfConfMinGroup, funiVclStatsTable=funiVclStatsTable, FuniValidVci=FuniValidVci, funiVclRxOamFrames=funiVclRxOamFrames, funiIfConfIlmiVpi=funiIfConfIlmiVpi, funiVclStatsEntry=funiVclStatsEntry, funiIfConfSigSupport=funiIfConfSigSupport, funiIfRxFcsErrFrames=funiIfRxFcsErrFrames, funiVclTxTotalOctets=funiVclTxTotalOctets, funiIfStatsTable=funiIfStatsTable, funiVclStatsOptionalGroup=funiVclStatsOptionalGroup, funiVclRxClp0Frames=funiVclRxClp0Frames, funiVclTxOamFrames=funiVclTxOamFrames, funiMIBGroups=funiMIBGroups, atmForum=atmForum, funiMIBCompliance=funiMIBCompliance, funiIfConfSigVci=funiIfConfSigVci, PYSNMP_MODULE_ID=atmfFuniMIB, funiIfConfEntry=funiIfConfEntry, funiIfRxTooShortFrames=funiIfRxTooShortFrames, funiIfEstablishedSvccs=funiIfEstablishedSvccs, funiMIBCompliances=funiMIBCompliances, atmForumNetworkManagement=atmForumNetworkManagement, funiVclTxTotalFrames=funiVclTxTotalFrames, 
funiIfTxDiscardedFrames=funiIfTxDiscardedFrames, funiVclFaVci=funiVclFaVci, funiMIBConformance=funiMIBConformance, funiIfConfOamSupport=funiIfConfOamSupport, funiVclRxClp0Octets=funiVclRxClp0Octets, funiIfRxAbortedFrames=funiIfRxAbortedFrames, funiMIBObjects=funiMIBObjects)
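Every object in this module is registered under a tuple OID rooted at the ATM Forum enterprise arc, 1.3.6.1.4.1.353. A small helper (hypothetical, not part of pysnmp) makes the tuples above easier to read as dotted strings:

```python
# OID tuples copied from the module above
ATMF_FUNI_MIB = (1, 3, 6, 1, 4, 1, 353, 5, 6, 1)                   # atmfFuniMIB
FUNI_IF_CONF_MODE = (1, 3, 6, 1, 4, 1, 353, 5, 6, 1, 1, 1, 1, 1)   # funiIfConfMode


def oid_to_str(oid):
    """Render a pysnmp-style OID tuple as a dotted string."""
    return '.'.join(str(arc) for arc in oid)


dotted = oid_to_str(ATMF_FUNI_MIB)
```

Columns such as `funiIfConfMode` simply extend the module OID, which is why every registration above shares the `(1, 3, 6, 1, 4, 1, 353, 5, 6, 1, ...)` prefix.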
# tests/r/test_us_pop.py (hajime9652/observations, Apache-2.0)
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import shutil
import sys
import tempfile

from observations.r.us_pop import us_pop


def test_us_pop():
    """Test module us_pop.py by downloading
    us_pop.csv and testing that the shape of the
    extracted data has 22 rows and 2 columns.
    """
    test_path = tempfile.mkdtemp()
    x_train, metadata = us_pop(test_path)
    try:
        assert x_train.shape == (22, 2)
    except Exception:
        shutil.rmtree(test_path)
        raise
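The try/except shape above (keep the temp directory on success, remove it and re-raise on failure) can be factored into a reusable helper. `run_with_temp_dir` is a hypothetical sketch of that refactor, not part of the observations package:

```python
import shutil
import tempfile


def run_with_temp_dir(fn):
    """Call fn(path) with a fresh temp dir; clean up only if fn raises."""
    path = tempfile.mkdtemp()
    try:
        return fn(path)
    except Exception:
        # failed runs leave nothing behind; successful runs keep the data
        shutil.rmtree(path)
        raise


result = run_with_temp_dir(lambda path: 'ok')
```

The asymmetric cleanup is deliberate: a passing test keeps the downloaded CSV available for inspection, while a failing one removes the partial download.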
# src/automotive/application/actions/serial_actions.py (philosophy912/automotive, Apache-2.0)
# -*- coding:utf-8 -*-
# --------------------------------------------------------
# Copyright (C), 2016-2020, lizhe, All rights reserved
# --------------------------------------------------------
# @Name: serial_actions.py
# @Author: lizhe
# @Created: 2021/5/2 - 0:02
# --------------------------------------------------------
from typing import List

from automotive.utils.serial_utils import SerialUtils
from automotive.logger.logger import logger
from automotive.utils.common.enums import SystemTypeEnum
from ..common.interfaces import BaseDevice


class SerialActions(BaseDevice):
    """
    Serial port operations class.
    """

    def __init__(self, port: str, baud_rate: int):
        super().__init__()
        self.__serial = SerialUtils()
        self.__port = port.upper()
        self.__baud_rate = baud_rate

    @property
    def serial_utils(self):
        return self.__serial

    def open(self):
        """
        Open the serial port.
        """
        logger.info("Initializing serial port")
        logger.info("Opening serial port")
        buffer = 32768
        self.__serial.connect(port=self.__port, baud_rate=self.__baud_rate)
        logger.info("************* Serial port initialized successfully *************")
        self.__serial.serial_port.set_buffer(buffer, buffer)
        logger.info(f"Serial buffer size is [{buffer}]")

    def close(self):
        """
        Close the serial port.
        """
        logger.info("Closing serial port")
        self.__serial.disconnect()

    def write(self, command: str):
        """
        Write data to the serial port.
        :param command: the command to write
        """
        self.__serial.write(command)

    def read(self) -> str:
        """
        Read data from the serial port.
        :return: the data read
        """
        return self.__serial.read()

    def read_lines(self) -> List[str]:
        """
        Read data from the serial port, line by line.
        :return: the lines read
        """
        return self.__serial.read_lines()

    def clear_buffer(self):
        """
        Clear the buffered serial data.
        """
        self.read()

    def file_exist(self, file: str, check_times: int = None, interval: float = 0.5, timeout: int = 10) -> bool:
        """
        Check whether a file exists.
        :param file: file name (absolute path)
        :param check_times: number of checks
        :param interval: interval between checks
        :param timeout: timeout in seconds
        :return: whether the file exists
        """
        logger.info(f"Checking whether file {file} exists")
        return self.__serial.file_exist(file, check_times, interval, timeout)

    def login(self, username: str, password: str, double_check: bool = False, login_locator: str = "login"):
        """
        Log in to the system.
        :param username: user name
        :param password: password
        :param double_check: re-check after login
        :param login_locator: login prompt locator
        """
        logger.info(f"Logging in to the system with username {username}, password {password}")
        self.__serial.login(username, password, double_check, login_locator)

    def copy_file(self, remote_folder: str, target_folder: str, system_type: SystemTypeEnum, timeout: float = 300):
        """
        Copy files.
        :param remote_folder: source folder
        :param target_folder: target folder
        :param system_type: system type; currently QNX and Linux are supported
        :param timeout: timeout in seconds
        """
        logger.info(f"Copying all files under {remote_folder} to {target_folder}")
        self.__serial.copy_file(remote_folder, target_folder, system_type, timeout)

    def check_text(self, contents: str) -> bool:
        """
        Check whether the device rebooted.
        :param contents: the marker content that indicates a reboot
        :return:
            True: the matching content was found in the serial output
            False: the matching content was not found in the serial output
        """
        logger.warning("Call clear_buffer before using this method to clear the cached data")
        data = self.read()
        result = True
        for content in contents:
            logger.debug(f"Checking whether {content} is present in the serial data")
            result = result and content in data
        return result
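`check_text` folds `content in data` over all markers, and that core logic can be exercised without hardware. Note that `contents` is annotated as `str` but iterated like a collection of markers, so the sketch below (a hypothetical standalone helper, assuming a list of markers is intended) takes a sequence explicitly:

```python
def all_markers_present(data, markers):
    """Mirror of check_text's accumulation: every marker must appear in the data."""
    result = True
    for marker in markers:
        result = result and marker in data
    return result


found = all_markers_present('login: root\nsystem booted OK', ['login', 'OK'])
missing = all_markers_present('login: root', ['login', 'OK'])
```

A single absent marker makes the whole result False, which matches the AND-accumulation in the method above.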
# run.py (radish2012/flask-restful-example, MIT)
from app.factory import create_app, celery_app
app = create_app(config_name="DEVELOPMENT")
app.app_context().push()
if __name__ == "__main__":
    app.run()
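The application-factory pattern used by run.py can be sketched without Flask installed. `App` and `create_app` below are simplified, hypothetical stand-ins for the real factory in `app.factory`:

```python
class App:
    """Minimal stand-in for a Flask application object (hypothetical)."""
    def __init__(self, config_name):
        self.config = {'CONFIG_NAME': config_name}


def create_app(config_name='DEFAULT'):
    app = App(config_name)
    # a real factory would register extensions and blueprints here
    return app


app = create_app(config_name='DEVELOPMENT')
```

Deferring construction to a factory lets tests and the Celery worker each build an app with their own configuration, instead of sharing one module-level instance.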
# {{cookiecutter.project_name}}/src/{{cookiecutter.package_name}}/__init__.py (cav71/cav71-python-package-cookiecutter, BSD-3-Clause)
__version__ = "0.0.0"
__hash__ = "<invalid-hash>"
| 16.666667 | 27 | 0.66 | 7 | 50 | 3.571429 | 0.571429 | 0.16 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068182 | 0.12 | 50 | 2 | 28 | 25 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0.38 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
10e78d87cc82a459f5caa7a1a6341f84faedc2e2 | 7,358 | py | Python | tests/test_setuptools_build_subpackage.py | ashb/setuptools-build-subpackage | 6169baaea0020aaecf71e0441e1c44120c88b4ff | [
"Apache-2.0"
] | 2 | 2020-11-30T12:41:13.000Z | 2021-07-14T14:43:42.000Z | tests/test_setuptools_build_subpackage.py | ashb/setuptools-build-subpackage | 6169baaea0020aaecf71e0441e1c44120c88b4ff | [
"Apache-2.0"
] | null | null | null | tests/test_setuptools_build_subpackage.py | ashb/setuptools-build-subpackage | 6169baaea0020aaecf71e0441e1c44120c88b4ff | [
"Apache-2.0"
] | null | null | null | import os
import tarfile
import textwrap
from pathlib import Path
import setuptools
from wheel.wheelfile import WheelFile
from setuptools_build_subpackage import Distribution
ROOT = Path(__file__).parent.parent
def build_dist(folder, command, output, *args):
args = [
'--subpackage-folder',
folder,
'clean',
'--all',
command,
'--dist-dir',
output,
*args,
]
cur = os.getcwd()
os.chdir('example')
try:
setuptools.setup(
distclass=Distribution,
script_args=args,
)
finally:
os.chdir(cur)
def test_bdist_wheel(tmp_path):
build_dist('example/sub_module_a', 'bdist_wheel', tmp_path)
build_dist('example/sub_module_b', 'bdist_wheel', tmp_path)
wheel_a_path = tmp_path / 'example_sub_moudle_a-0.0.0-py2.py3-none-any.whl'
wheel_b_path = tmp_path / 'example_sub_moudle_b-0.0.0-py2.py3-none-any.whl'
assert wheel_a_path.exists(), "sub_module_a wheel file exists"
assert wheel_b_path.exists(), "sub_module_b wheel file exists"
with WheelFile(wheel_a_path) as wheel_a:
assert set(wheel_a.namelist()) == {
'example/sub_module_a/__init__.py',
'example/sub_module_a/where.py',
'example_sub_moudle_a-0.0.0.dist-info/AUTHORS.rst',
'example_sub_moudle_a-0.0.0.dist-info/LICENSE',
'example_sub_moudle_a-0.0.0.dist-info/METADATA',
'example_sub_moudle_a-0.0.0.dist-info/WHEEL',
'example_sub_moudle_a-0.0.0.dist-info/top_level.txt',
'example_sub_moudle_a-0.0.0.dist-info/RECORD',
}
where = wheel_a.open('example/sub_module_a/where.py').read()
assert where == b'a = "module_a"\n'
with WheelFile(wheel_b_path) as wheel_b:
assert set(wheel_b.namelist()) == {
'example/sub_module_b/__init__.py',
'example/sub_module_b/where.py',
'example_sub_moudle_b-0.0.0.dist-info/AUTHORS.rst',
'example_sub_moudle_b-0.0.0.dist-info/LICENSE',
'example_sub_moudle_b-0.0.0.dist-info/METADATA',
'example_sub_moudle_b-0.0.0.dist-info/WHEEL',
'example_sub_moudle_b-0.0.0.dist-info/top_level.txt',
'example_sub_moudle_b-0.0.0.dist-info/RECORD',
}
where = wheel_b.open('example/sub_module_b/where.py').read()
assert where == b'a = "module_b"\n'
def test_sdist(tmp_path):
# Build both dists in the same test, so we can check there is no cross-pollution
build_dist('example/sub_module_a', 'sdist', tmp_path)
build_dist('example/sub_module_b', 'sdist', tmp_path)
sdist_a_path = tmp_path / 'example_sub_moudle_a-0.0.0.tar.gz'
sdist_b_path = tmp_path / 'example_sub_moudle_b-0.0.0.tar.gz'
assert sdist_a_path.exists(), "sub_module_a sdist file exists"
assert sdist_b_path.exists(), "sub_module_b sdist file exists"
with tarfile.open(sdist_a_path) as sdist_a:
assert set(sdist_a.getnames()) == {
'example_sub_moudle_a-0.0.0',
'example_sub_moudle_a-0.0.0/AUTHORS.rst',
'example_sub_moudle_a-0.0.0/LICENSE',
'example_sub_moudle_a-0.0.0/PKG-INFO',
'example_sub_moudle_a-0.0.0/example',
'example_sub_moudle_a-0.0.0/example/sub_module_a',
'example_sub_moudle_a-0.0.0/example/sub_module_a/__init__.py',
'example_sub_moudle_a-0.0.0/example/sub_module_a/where.py',
'example_sub_moudle_a-0.0.0/example_sub_moudle_a.egg-info',
'example_sub_moudle_a-0.0.0/example_sub_moudle_a.egg-info/PKG-INFO',
'example_sub_moudle_a-0.0.0/example_sub_moudle_a.egg-info/SOURCES.txt',
'example_sub_moudle_a-0.0.0/example_sub_moudle_a.egg-info/dependency_links.txt',
'example_sub_moudle_a-0.0.0/example_sub_moudle_a.egg-info/not-zip-safe',
'example_sub_moudle_a-0.0.0/example_sub_moudle_a.egg-info/top_level.txt',
'example_sub_moudle_a-0.0.0/setup.cfg',
'example_sub_moudle_a-0.0.0/setup.py',
}
where = sdist_a.extractfile('example_sub_moudle_a-0.0.0/example/sub_module_a/where.py').read()
assert where == b'a = "module_a"\n'
setup_cfg = sdist_a.extractfile('example_sub_moudle_a-0.0.0/setup.cfg').read().decode('ascii')
assert setup_cfg == (ROOT / 'example' / 'example' / 'sub_module_a' / 'setup.cfg').open(encoding='ascii').read()
with tarfile.open(sdist_b_path) as sdist_b:
assert set(sdist_b.getnames()) == {
'example_sub_moudle_b-0.0.0',
'example_sub_moudle_b-0.0.0/AUTHORS.rst',
'example_sub_moudle_b-0.0.0/LICENSE',
'example_sub_moudle_b-0.0.0/PKG-INFO',
'example_sub_moudle_b-0.0.0/example',
'example_sub_moudle_b-0.0.0/example/sub_module_b',
'example_sub_moudle_b-0.0.0/example/sub_module_b/__init__.py',
'example_sub_moudle_b-0.0.0/example/sub_module_b/where.py',
'example_sub_moudle_b-0.0.0/example_sub_moudle_b.egg-info',
'example_sub_moudle_b-0.0.0/example_sub_moudle_b.egg-info/PKG-INFO',
'example_sub_moudle_b-0.0.0/example_sub_moudle_b.egg-info/SOURCES.txt',
'example_sub_moudle_b-0.0.0/example_sub_moudle_b.egg-info/dependency_links.txt',
'example_sub_moudle_b-0.0.0/example_sub_moudle_b.egg-info/not-zip-safe',
'example_sub_moudle_b-0.0.0/example_sub_moudle_b.egg-info/top_level.txt',
'example_sub_moudle_b-0.0.0/setup.cfg',
'example_sub_moudle_b-0.0.0/setup.py',
}
where = sdist_b.extractfile('example_sub_moudle_b-0.0.0/example/sub_module_b/where.py').read()
assert where == b'a = "module_b"\n'
setup_cfg = sdist_b.extractfile('example_sub_moudle_b-0.0.0/setup.cfg').read().decode('ascii')
assert setup_cfg == (ROOT / 'example' / 'example' / 'sub_module_b' / 'setup.cfg').open(encoding='ascii').read()
def test_license_template(tmp_path):
build_dist('example/sub_module_a', 'sdist', tmp_path, '--license-template', ROOT / 'LICENSE')
sdist_a_path = tmp_path / 'example_sub_moudle_a-0.0.0.tar.gz'
assert sdist_a_path.exists(), "sub_module_a sdist file exists"
with tarfile.open(sdist_a_path) as sdist_a:
setup_py = sdist_a.extractfile('example_sub_moudle_a-0.0.0/setup.py').read().decode('ascii')
assert setup_py == textwrap.dedent(
"""\
# Apache Software License 2.0
#
# Copyright (c) 2020, Ash Berlin-Taylor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
__import__("setuptools").setup()
"""
)
| 41.806818 | 119 | 0.647323 | 1,138 | 7,358 | 3.864675 | 0.137083 | 0.049113 | 0.240109 | 0.131423 | 0.69668 | 0.690769 | 0.663256 | 0.657344 | 0.59186 | 0.480218 | 0 | 0.030802 | 0.22343 | 7,358 | 175 | 120 | 42.045714 | 0.738887 | 0.010465 | 0 | 0.081967 | 0 | 0 | 0.508493 | 0.427926 | 0 | 0 | 0 | 0 | 0.131148 | 1 | 0.032787 | false | 0 | 0.057377 | 0 | 0.090164 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
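The `build_dist` helper in the test module above saves the working directory with `os.getcwd()` and restores it in a `finally` block. The same save/restore pattern is often packaged as a context manager; this is a sketch of that alternative, not code from the repository (Python 3.11 ships an equivalent as `contextlib.chdir`):

```python
import os
from contextlib import contextmanager


@contextmanager
def chdir(path):
    """Temporarily switch the working directory, restoring it on exit."""
    prev = os.getcwd()
    os.chdir(path)
    try:
        yield
    finally:
        # Runs even if the body raises, mirroring the try/finally in build_dist.
        os.chdir(prev)
```

With this helper, `build_dist` could wrap the `setuptools.setup(...)` call in `with chdir('example'):` instead of managing the directory by hand.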
10fb0f98c0db7ba3d5ed61bdb4bc78ad51efafdc | 13,380 | py | Python | azext_csvmware/_help.py | ctaggart/az-csvmware-cli | 6f6f7cd5cb9ae0e34e4d81b499337c3a5ca9fc74 | [
"MIT"
] | 2 | 2020-05-20T13:33:33.000Z | 2020-09-12T03:48:15.000Z | azext_csvmware/_help.py | ctaggart/az-csvmware-cli | 6f6f7cd5cb9ae0e34e4d81b499337c3a5ca9fc74 | [
"MIT"
] | null | null | null | azext_csvmware/_help.py | ctaggart/az-csvmware-cli | 6f6f7cd5cb9ae0e34e4d81b499337c3a5ca9fc74 | [
"MIT"
] | 2 | 2020-05-11T17:10:27.000Z | 2021-01-02T16:15:35.000Z | # coding=utf-8
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
"""
This file contains the help strings (summaries and examples) for all commands and command groups.
"""
from knack.help_files import helps # pylint: disable=unused-import
helps['csvmware'] = """
type: group
short-summary: Manage Azure VMware Solution by CloudSimple.
"""
helps['csvmware vm'] = """
type: group
short-summary: Manage VMware virtual machines.
"""
helps['csvmware vm create'] = """
type: command
short-summary: Create a VMware virtual machine.
parameters:
- name: --nic
short-summary: Add or modify NICs.
long-summary: |
By default, the nics will be added according to the vSphere VM template.
You can add more nics, or modify some properties of a nic specified in the VM template.
Multiple nics can be specified by using more than one `--nic` argument.
If a nic name already exists in the VM template, that nic would be modified according to the user input.
If a nic name does not exist in the VM template, a new nic would be created and a new name will be assigned to it.
Usage: --nic name=MyNicName virtual-network=MyNetwork adapter=MyAdapter power-on-boot=True/False
- name: --disk
short-summary: Add or modify disks.
long-summary: |
By default, the disks will be added according to the vSphere VM template.
You can add more disks, or modify some properties of a disk specified in the VM template.
Multiple disks can be specified by using more than one `--disk` argument.
If a disk name already exists in the VM template, that disk would be modified according to the user input.
If a disk name does not exist in the VM template, a new disk would be created and a new name will be assigned to it.
Usage: --disk name=MyDiskName controller=SCSIControllerID mode=IndependenceMode size=DiskSizeInKB
examples:
- name: Creating a VM with default parameters from the vm template.
text: >
az csvmware vm create -n MyVm -g MyResourceGroup -p MyPrivateCloud -r MyResourcePool --template MyVmTemplate
- name: Creating a VM and adding an extra nic to the VM with virtual network MyVirtualNetwork, adapter VMXNET3, that powers on at boot.
The name entered for the nic is used only for identification: if a nic with that name exists in the vm template it is modified; otherwise a new nic is created and assigned a new name.
Let's say the vm template contains a nic with name "Network adapter 1".
text: >
az csvmware vm create -n MyVm -g MyResourceGroup -p MyPrivateCloud -r MyResourcePool --template MyVmTemplate --nic name=NicNameWouldBeAssigned virtual-network=MyVirtualNetwork adapter=VMXNET3 power-on-boot=True
- name: Customizing specific properties of a VM. Changing the number of cores to 2 and the adapter of the "Network adapter 1" nic to E1000E, from those specified in the template. All other properties default to the template's values.
text: >
az csvmware vm create -n MyVm -g MyResourceGroup -p MyPrivateCloud -r MyResourcePool --template MyVmTemplate --cores 2 --nic name="Network adapter 1" adapter=E1000E
- name: Customizing specific properties of a VM. Changing the adapter of the "Network adapter 1" nic to E1000E, from that specified in the template, and also adding another nic with virtual network MyVirtualNetwork, adapter VMXNET3, that powers on at boot.
text: >
az csvmware vm create -n MyVm -g MyResourceGroup -p MyPrivateCloud -r MyResourcePool --template MyVmTemplate --nic name="Network adapter 1" adapter=E1000E --nic name=NicNameWouldBeAssigned virtual-network=MyVirtualNetwork adapter=VMXNET3 power-on-boot=True
- name: Creating a VM and adding an extra disk to the VM with SCSI controller 0, persistent mode, and 41943040 KB size.
The name entered for the disk is used only for identification: if a disk with that name exists in the vm template it is modified; otherwise a new disk is created and assigned a new name.
Let's say the vm template contains a disk with name "Hard disk 1".
text: >
az csvmware vm create -n MyVm -g MyResourceGroup -p MyPrivateCloud -r MyResourcePool --template MyVmTemplate --disk name=DiskNameWouldBeAssigned controller=1000 mode=persistent size=41943040
- name: Customizing specific properties of a VM. Changing the size of the "Hard disk 1" disk to 21943040 KB, from that specified in the template, and also adding another disk with SCSI controller 0, persistent mode, and 41943040 KB size.
text: >
az csvmware vm create -n MyVm -g MyResourceGroup -p MyPrivateCloud -r MyResourcePool --template MyVmTemplate --disk name="Hard disk 1" size=21943040 --disk name=DiskNameWouldBeAssigned controller=1000 mode=persistent size=41943040
"""
helps['csvmware vm list'] = """
type: command
short-summary: List details of VMware virtual machines in the current subscription. If a resource group is specified, only the virtual machines in that resource group are listed.
examples:
- name: List details of VMware VMs in the current subscription.
text: >
az csvmware vm list
- name: List details of VMware VMs in a particular resource group.
text: >
az csvmware vm list -g MyResourceGroup
"""
helps['csvmware vm delete'] = """
type: command
short-summary: Delete a VMware virtual machine.
examples:
- name: Delete a VMware VM.
text: >
az csvmware vm delete -n MyVm -g MyResourceGroup
"""
helps['csvmware vm show'] = """
type: command
short-summary: Get the details of a VMware virtual machine.
examples:
- name: Get the details of a VMware VM.
text: >
az csvmware vm show -n MyVm -g MyResourceGroup
"""
helps['csvmware vm start'] = """
type: command
short-summary: Start a VMware virtual machine.
examples:
- name: Start a VMware VM.
text: >
az csvmware vm start -n MyVm -g MyResourceGroup
"""
helps['csvmware vm stop'] = """
type: command
short-summary: Stop/Reboot/Suspend a VMware virtual machine.
examples:
- name: Power off a VMware VM.
text: >
az csvmware vm stop -n MyVm -g MyResourceGroup --mode poweroff
- name: Restart a VMware VM.
text: >
az csvmware vm stop -n MyVm -g MyResourceGroup --mode reboot
"""
helps['csvmware vm update'] = """
type: command
short-summary: Update the tags field of a VMware virtual machine.
examples:
- name: Add or update a tag.
text: >
az csvmware vm update -n MyVm -g MyResourceGroup --set tags.tagName=tagValue
- name: Remove a tag.
text: >
az csvmware vm update -n MyVm -g MyResourceGroup --remove tags.tagName
"""
helps['csvmware vm nic'] = """
type: group
short-summary: Manage VMware virtual machine's Network Interface Cards.
"""
helps['csvmware vm nic add'] = """
type: command
short-summary: Add NIC to a VMware virtual machine.
examples:
- name: Add a NIC with default parameters in a VM.
text: >
az csvmware vm nic add --vm-name MyVm -g MyResourceGroup --virtual-network MyVirtualNetwork
- name: Add a NIC with an E1000E adapter that powers on at boot in a VM.
text: >
az csvmware vm nic add --vm-name MyVm -g MyResourceGroup --virtual-network MyVirtualNetwork --adapter E1000E --power-on-boot true
"""
helps['csvmware vm nic list'] = """
type: command
short-summary: List details of NICs available on a VMware virtual machine.
examples:
- name: List details of NICs in a VM.
text: >
az csvmware vm nic list --vm-name MyVm -g MyResourceGroup
"""
helps['csvmware vm nic show'] = """
type: command
short-summary: Get the details of a VMware virtual machine's NIC.
examples:
- name: Get the details of a NIC in a VM.
text: >
az csvmware vm nic show --vm-name MyVm -g MyResourceGroup -n "My NIC Name"
"""
helps['csvmware vm nic delete'] = """
type: command
short-summary: Delete NICs from a VM.
examples:
- name: Delete two NICs from a VM.
text: >
az csvmware vm nic delete --vm-name MyVm -g MyResourceGroup --nics "My NIC Name 1" "My NIC Name 2"
"""
helps['csvmware vm disk'] = """
type: group
short-summary: Manage VMware virtual machine's disks.
"""
helps['csvmware vm disk add'] = """
type: command
short-summary: Add disk to a VMware virtual machine.
examples:
- name: Add a disk with default parameters in a VM.
text: >
az csvmware vm disk add --vm-name MyVm -g MyResourceGroup
- name: Add a disk with SATA controller 0 and a size of 64 GB in a VM.
text: >
az csvmware vm disk add --vm-name MyVm -g MyResourceGroup --controller 15000 --size 67108864
"""
helps['csvmware vm disk list'] = """
type: command
short-summary: List details of disks available on a VMware virtual machine.
examples:
- name: List details of disks in a VM.
text: >
az csvmware vm disk list --vm-name MyVm -g MyResourceGroup
"""
helps['csvmware vm disk show'] = """
type: command
short-summary: Get the details of a VMware virtual machine's disk.
examples:
- name: Get the details of a disk in a VM.
text: >
az csvmware vm disk show --vm-name MyVm -g MyResourceGroup -n "My Disk Name"
"""
helps['csvmware vm disk delete'] = """
type: command
short-summary: Delete disks from a VM.
examples:
- name: Delete two disks from a VM.
text: >
az csvmware vm disk delete --vm-name MyVm -g MyResourceGroup --disks "My Disk Name 1" "My Disk Name 2"
"""
helps['csvmware vm-template'] = """
type: group
short-summary: Manage VMware virtual machine templates.
"""
helps['csvmware vm-template list'] = """
type: command
short-summary: List details of VMware virtual machine templates in a private cloud.
examples:
- name: List details of VM templates.
text: >
az csvmware vm-template list -p MyPrivateCloud -r MyResourcePool --location eastus
"""
helps['csvmware vm-template show'] = """
type: command
short-summary: Get the details of a VMware virtual machine template in a private cloud.
examples:
- name: Get the details of a VM template.
text: >
az csvmware vm-template show -n MyVmTemplate -p MyPrivateCloud --location eastus
"""
helps['csvmware virtual-network'] = """
type: group
short-summary: Manage virtual networks.
"""
helps['csvmware virtual-network list'] = """
type: command
short-summary: List details of available virtual networks in a private cloud.
examples:
- name: List details of virtual networks.
text: >
az csvmware virtual-network list -p MyPrivateCloud -r MyResourcePool --location eastus
"""
helps['csvmware virtual-network show'] = """
type: command
short-summary: Get the details of a virtual network in a private cloud.
examples:
- name: Get the details of a virtual network.
text: >
az csvmware virtual-network show -n MyVirtualNetwork -p MyPrivateCloud --location eastus
"""
helps['csvmware private-cloud'] = """
type: group
short-summary: Manage VMware private clouds.
"""
helps['csvmware private-cloud list'] = """
type: command
short-summary: List details of private clouds in a region.
examples:
- name: List details of private clouds in East US.
text: >
az csvmware private-cloud list --location eastus
"""
helps['csvmware private-cloud show'] = """
type: command
short-summary: Get the details of a private cloud in a region.
examples:
- name: Get the details of a private cloud which is in East US.
text: >
az csvmware private-cloud show -n MyPrivateCloud --location eastus
"""
helps['csvmware resource-pool'] = """
type: group
short-summary: Manage VMware resource pools.
"""
helps['csvmware resource-pool list'] = """
type: command
short-summary: List details of resource pools in a private cloud.
examples:
- name: List details of resource pools.
text: >
az csvmware resource-pool list -p MyPrivateCloud --location eastus
"""
helps['csvmware resource-pool show'] = """
type: command
short-summary: Get the details of a resource pool in a private cloud.
examples:
- name: Get the details of a resource pool.
text: >
az csvmware resource-pool show -n MyResourcePool -p MyPrivateCloud --location eastus
"""
| 41.169231 | 268 | 0.652242 | 1,779 | 13,380 | 4.905003 | 0.125351 | 0.055008 | 0.052945 | 0.049507 | 0.757277 | 0.695966 | 0.611277 | 0.521659 | 0.437658 | 0.347926 | 0 | 0.011778 | 0.257549 | 13,380 | 324 | 269 | 41.296296 | 0.86662 | 0.035725 | 0 | 0.444444 | 0 | 0.077778 | 0.948483 | 0.022655 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.003704 | 0 | 0.003704 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
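The `helps` dictionary in the `_help.py` module above keys help text by the full command or group path (for example `csvmware vm nic add`). A natural way for a CLI framework to resolve help for a command is to fall back to the nearest registered ancestor group; the sketch below illustrates that lookup and is an assumption for illustration, not knack's actual implementation:

```python
def find_help(helps, command):
    """Return the key of the most specific help entry registered for
    `command`, falling back to parent group paths when needed."""
    parts = command.split()
    # Try the full path first, then progressively shorter prefixes.
    for i in range(len(parts), 0, -1):
        key = " ".join(parts[:i])
        if key in helps:
            return key
    return None
```

With the entries above registered, `find_help(helps, "csvmware vm nic add")` would match exactly, while an unregistered subcommand would resolve to its parent group's help.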
800e44a4c6050f23f945f1f76634a4002f79fc45 | 1,157 | py | Python | gen_art/graphics/Context.py | shnupta/SeeMyFeels | 0a37acc3e628d69f96197907db1c2ebd30b78469 | [
"MIT"
] | 3 | 2021-04-01T21:16:35.000Z | 2022-03-12T21:17:51.000Z | gen_art/graphics/Context.py | shnupta/SeeMyFeels | 0a37acc3e628d69f96197907db1c2ebd30b78469 | [
"MIT"
] | null | null | null | gen_art/graphics/Context.py | shnupta/SeeMyFeels | 0a37acc3e628d69f96197907db1c2ebd30b78469 | [
"MIT"
] | null | null | null | import cairo
from uuid import uuid4
from gen_art.graphics.Helpers import does_path_exist, open_file
from os import path
from datetime import datetime
class DrawContext:
def __init__(self, width, height, output_path, open_bool):
self.open_bool = open_bool
self.width = width
self.height = height
self.output_path = output_path
self.init()
def init(self):
self.cairo_context = self.setup_png()
def setup_png(self):
self.surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, self.width, self.height)
return cairo.Context(self.surface)
def export_png(self):
self.surface.write_to_png(self.output_path)
print("INFO: Saving file to {}".format(self.output_path))
if self.open_bool:
print("INFO: Opening file {}".format(self.output_path))
open_file(self.output_path)
def export(self):
self.export_png()
@property
def context(self):
return self.cairo_context
@context.setter
def context(self, context):
self.cairo_context = context  # assign the backing attribute; self.context would recurse into this setter
def get_output_path(self):
return self.output_path
| 26.906977 | 87 | 0.666379 | 153 | 1,157 | 4.830065 | 0.287582 | 0.121786 | 0.113667 | 0.048714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003425 | 0.242869 | 1,157 | 42 | 88 | 27.547619 | 0.840183 | 0 | 0 | 0 | 0 | 0 | 0.038029 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.242424 | false | 0 | 0.151515 | 0.060606 | 0.515152 | 0.060606 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
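A property setter like `context` in `DrawContext` above must write to the backing attribute (`cairo_context`), not to the property name itself: assigning to `self.context` inside the setter re-invokes the same setter and recurses until the interpreter raises `RecursionError`. A minimal sketch of the safe pattern, using a hypothetical `Box` class:

```python
class Box:
    def __init__(self, value):
        self._value = value  # backing attribute, distinct from the property name

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new):
        # Writing to self.value here would call this setter again and recurse;
        # write to the backing attribute instead.
        self._value = new
```

Keeping the backing attribute's name distinct from the property's name is what breaks the cycle.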
8022f771c37a2c17506b1b5ad623309f807eb9bd | 1,552 | py | Python | setup.py | FoxNerdSaysMoo/HomeAssistantAPI | 69b175141fa4aaed3a0c0d33a8bc9e8cc56caf6a | [
"MIT"
] | null | null | null | setup.py | FoxNerdSaysMoo/HomeAssistantAPI | 69b175141fa4aaed3a0c0d33a8bc9e8cc56caf6a | [
"MIT"
] | null | null | null | setup.py | FoxNerdSaysMoo/HomeAssistantAPI | 69b175141fa4aaed3a0c0d33a8bc9e8cc56caf6a | [
"MIT"
] | null | null | null | from setuptools import setup
from homeassistant_api import __version__
with open("README.md", "r") as f:
read = f.read()
setup(
name="HomeAssistant API",
url="https://github.com/GrandMoff100/HomeassistantAPI",
description="Python Wrapper for Homeassistant's REST API",
version=__version__,
keywords=['homeassistant', 'api', 'wrapper', 'client'],
author="GrandMoff100",
author_email="nlarsen23.student@gmail.com",
packages=[
"homeassistant_api",
"homeassistant_api.models",
"homeassistant_api._async",
"homeassistant_api._async.models"
],
long_description=read,
long_description_content_type="text/markdown",
install_requires=["requests", "simplejson"],
extras_require={
"async": ["aiohttp"]
},
python_requires=">=3.6",
provides=["homeassistant_api"],
classifiers=[
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 2.7",
"Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Software Development :: Version Control :: Git"
]
)
| 34.488889 | 71 | 0.631443 | 152 | 1,552 | 6.289474 | 0.532895 | 0.133891 | 0.183054 | 0.16318 | 0.056485 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021685 | 0.227448 | 1,552 | 44 | 72 | 35.272727 | 0.775646 | 0 | 0 | 0 | 0 | 0 | 0.552191 | 0.068299 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.047619 | 0 | 0.047619 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
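The setup.py above imports `__version__` from `homeassistant_api`, which requires the package (and anything its `__init__.py` imports) to be importable at build time. A common alternative, shown here as a sketch rather than code from this repository, is to parse the version string out of the file without importing it:

```python
import re
from pathlib import Path


def read_version(init_path):
    """Extract __version__ from a module file without importing it."""
    text = Path(init_path).read_text()
    match = re.search(r'__version__\s*=\s*["\']([^"\']+)["\']', text)
    if match is None:
        raise RuntimeError(f"no __version__ found in {init_path}")
    return match.group(1)
```

In the setup call, `version=read_version("homeassistant_api/__init__.py")` would then replace the import, so build-time dependencies such as `requests` need not be installed just to read the version.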
8025e5f72fc9d4b3c01001445187f2773b458389 | 15,270 | py | Python | pysnmp-with-texts/CISCOSB-RMON.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 8 | 2019-05-09T17:04:00.000Z | 2021-06-09T06:50:51.000Z | pysnmp-with-texts/CISCOSB-RMON.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 4 | 2019-05-31T16:42:59.000Z | 2020-01-31T21:57:17.000Z | pysnmp-with-texts/CISCOSB-RMON.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 10 | 2019-04-30T05:51:36.000Z | 2022-02-16T03:33:41.000Z | #
# PySNMP MIB module CISCOSB-RMON (http://snmplabs.com/pysmi)
# ASN.1 source file:///Users/davwang4/Dev/mibs.snmplabs.com/asn1/CISCOSB-RMON
# Produced by pysmi-0.3.4 at Wed May 1 12:23:18 2019
# On host DAVWANG4-M-1475 platform Darwin version 18.5.0 by user davwang4
# Using Python version 3.7.3 (default, Mar 27 2019, 09:23:15)
#
Integer, OctetString, ObjectIdentifier = mibBuilder.importSymbols("ASN1", "Integer", "OctetString", "ObjectIdentifier")
NamedValues, = mibBuilder.importSymbols("ASN1-ENUMERATION", "NamedValues")
ConstraintsIntersection, ConstraintsUnion, SingleValueConstraint, ValueSizeConstraint, ValueRangeConstraint = mibBuilder.importSymbols("ASN1-REFINEMENT", "ConstraintsIntersection", "ConstraintsUnion", "SingleValueConstraint", "ValueSizeConstraint", "ValueRangeConstraint")
switch001, = mibBuilder.importSymbols("CISCOSB-MIB", "switch001")
EntryStatus, OwnerString = mibBuilder.importSymbols("RMON-MIB", "EntryStatus", "OwnerString")
ModuleCompliance, NotificationGroup = mibBuilder.importSymbols("SNMPv2-CONF", "ModuleCompliance", "NotificationGroup")
ObjectIdentity, iso, Gauge32, TimeTicks, Counter64, Counter32, Bits, NotificationType, Integer32, MibIdentifier, MibScalar, MibTable, MibTableRow, MibTableColumn, ModuleIdentity, Unsigned32, IpAddress = mibBuilder.importSymbols("SNMPv2-SMI", "ObjectIdentity", "iso", "Gauge32", "TimeTicks", "Counter64", "Counter32", "Bits", "NotificationType", "Integer32", "MibIdentifier", "MibScalar", "MibTable", "MibTableRow", "MibTableColumn", "ModuleIdentity", "Unsigned32", "IpAddress")
TruthValue, TextualConvention, RowStatus, DisplayString = mibBuilder.importSymbols("SNMPv2-TC", "TruthValue", "TextualConvention", "RowStatus", "DisplayString")
rlRmonControl = ModuleIdentity((1, 3, 6, 1, 4, 1, 9, 6, 1, 101, 49))
rlRmonControl.setRevisions(('2004-06-01 00:00',))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
if mibBuilder.loadTexts: rlRmonControl.setRevisionsDescriptions(('Initial version of this MIB.',))
if mibBuilder.loadTexts: rlRmonControl.setLastUpdated('200406010000Z')
if mibBuilder.loadTexts: rlRmonControl.setOrganization('Cisco Small Business')
if mibBuilder.loadTexts: rlRmonControl.setContactInfo('Postal: 170 West Tasman Drive San Jose , CA 95134-1706 USA Website: Cisco Small Business Home http://www.cisco.com/smb>;, Cisco Small Business Support Community <http://www.cisco.com/go/smallbizsupport>')
if mibBuilder.loadTexts: rlRmonControl.setDescription('The private MIB module definition for switch001 RMON MIB.')
rlRmonControlMibVersion = MibScalar((1, 3, 6, 1, 4, 1, 9, 6, 1, 101, 49, 1), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: rlRmonControlMibVersion.setStatus('current')
if mibBuilder.loadTexts: rlRmonControlMibVersion.setDescription("The MIB's version. The current version is 1")
rlRmonControlHistoryControlQuotaBucket = MibScalar((1, 3, 6, 1, 4, 1, 9, 6, 1, 101, 49, 2), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 65535)).clone(8)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: rlRmonControlHistoryControlQuotaBucket.setStatus('current')
if mibBuilder.loadTexts: rlRmonControlHistoryControlQuotaBucket.setDescription('Maximum number of buckets to be used by each History Control group entry. changed to read only, value is derived from rsMaxRmonEtherHistoryEntrie')
rlRmonControlHistoryControlMaxGlobalBuckets = MibScalar((1, 3, 6, 1, 4, 1, 9, 6, 1, 101, 49, 3), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 65535)).clone(300)).setMaxAccess("readonly")
if mibBuilder.loadTexts: rlRmonControlHistoryControlMaxGlobalBuckets.setStatus('current')
if mibBuilder.loadTexts: rlRmonControlHistoryControlMaxGlobalBuckets.setDescription('Maximum number of buckets to be used by all History Control group entries together.')
rlHistoryControlTable = MibTable((1, 3, 6, 1, 4, 1, 9, 6, 1, 101, 49, 4), )
if mibBuilder.loadTexts: rlHistoryControlTable.setStatus('current')
if mibBuilder.loadTexts: rlHistoryControlTable.setDescription('A list of rlHistory control entries. This table is exactly like the corresponding RMON I History control group table, but is used to sample statistics of counters not specified by the RMON I statistics group.')
rlHistoryControlEntry = MibTableRow((1, 3, 6, 1, 4, 1, 9, 6, 1, 101, 49, 4, 1), ).setIndexNames((0, "CISCOSB-RMON", "rlHistoryControlIndex"))
if mibBuilder.loadTexts: rlHistoryControlEntry.setStatus('current')
if mibBuilder.loadTexts: rlHistoryControlEntry.setDescription('A list of parameters that set up a periodic sampling of statistics. As an example, an instance of the rlHistoryControlInterval object might be named rlHistoryControlInterval.2')
rlHistoryControlIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 9, 6, 1, 101, 49, 4, 1, 1), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 65535))).setMaxAccess("readonly")
if mibBuilder.loadTexts: rlHistoryControlIndex.setStatus('current')
if mibBuilder.loadTexts: rlHistoryControlIndex.setDescription('An index that uniquely identifies an entry in the rlHistoryControl table. Each such entry defines a set of samples at a particular interval for a sampled counter.')
rlHistoryControlDataSource = MibTableColumn((1, 3, 6, 1, 4, 1, 9, 6, 1, 101, 49, 4, 1, 2), ObjectIdentifier()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: rlHistoryControlDataSource.setStatus('current')
if mibBuilder.loadTexts: rlHistoryControlDataSource.setDescription('This object identifies the source of the data for which historical data was collected and placed in the rlHistory table. This object may not be modified if the associated rlHistoryControlStatus object is equal to valid(1).')
rlHistoryControlBucketsRequested = MibTableColumn((1, 3, 6, 1, 4, 1, 9, 6, 1, 101, 49, 4, 1, 3), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 65535)).clone(50)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: rlHistoryControlBucketsRequested.setStatus('current')
if mibBuilder.loadTexts: rlHistoryControlBucketsRequested.setDescription('The requested number of discrete time intervals over which data is to be saved in the part of the rlHistory table associated with this rlHistoryControlEntry. When this object is created or modified, the probe should set rlHistoryControlBucketsGranted as closely to this object as is possible for the particular probe implementation and available resources.')
rlHistoryControlBucketsGranted = MibTableColumn((1, 3, 6, 1, 4, 1, 9, 6, 1, 101, 49, 4, 1, 4), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 65535))).setMaxAccess("readonly")
if mibBuilder.loadTexts: rlHistoryControlBucketsGranted.setStatus('current')
if mibBuilder.loadTexts: rlHistoryControlBucketsGranted.setDescription('The number of discrete sampling intervals over which data shall be saved in the part of the rlHistory table associated with this rlHistoryControlEntry. When the associated rlHistoryControlBucketsRequested object is created or modified, the probe should set this object as closely to the requested value as is possible for the particular probe implementation and available resources. The probe must not lower this value except as a result of a modification to the associated rlHistoryControlBucketsRequested object. There will be times when the actual number of buckets associated with this entry is less than the value of this object. In this case, at the end of each sampling interval, a new bucket will be added to the rlHistory table. When the number of buckets reaches the value of this object and a new bucket is to be added to the media-specific table, the oldest bucket associated with this rlHistoryControlEntry shall be deleted by the agent so that the new bucket can be added. When the value of this object changes to a value less than the current value, entries are deleted from the rlHistory table. Enough of the oldest of these entries shall be deleted by the agent so that their number remains less than or equal to the new value of this object. When the value of this object changes to a value greater than the current value, the number of associated rlHistory table entries may be allowed to grow.')
rlHistoryControlInterval = MibTableColumn((1, 3, 6, 1, 4, 1, 9, 6, 1, 101, 49, 4, 1, 5), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 3600)).clone(1800)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: rlHistoryControlInterval.setStatus('current')
if mibBuilder.loadTexts: rlHistoryControlInterval.setDescription('The interval in seconds over which the data is sampled for each bucket in the part of the rlHistory table associated with this rlHistoryControlEntry. This interval can be set to any number of seconds between 1 and 3600 (1 hour). Because the counters in a bucket may overflow at their maximum value with no indication, a prudent manager will take into account the possibility of overflow in any of the associated counters. It is important to consider the minimum time in which any counter could overflow and set the rlHistoryControlInterval object to a value less than this interval. This object may not be modified if the associated rlHistoryControlStatus object is equal to valid(1).')
rlHistoryControlOwner = MibTableColumn((1, 3, 6, 1, 4, 1, 9, 6, 1, 101, 49, 4, 1, 6), OwnerString()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: rlHistoryControlOwner.setStatus('current')
if mibBuilder.loadTexts: rlHistoryControlOwner.setDescription('The entity that configured this entry and is therefore using the resources assigned to it.')
rlHistoryControlStatus = MibTableColumn((1, 3, 6, 1, 4, 1, 9, 6, 1, 101, 49, 4, 1, 7), EntryStatus()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: rlHistoryControlStatus.setStatus('current')
if mibBuilder.loadTexts: rlHistoryControlStatus.setDescription('The status of this rlHistoryControl entry. Each instance of the rlHistory table associated with this rlHistoryControlEntry will be deleted by the agent if this rlHistoryControlEntry is not equal to valid(1).')
rlHistoryTable = MibTable((1, 3, 6, 1, 4, 1, 9, 6, 1, 101, 49, 5), )
if mibBuilder.loadTexts: rlHistoryTable.setStatus('current')
if mibBuilder.loadTexts: rlHistoryTable.setDescription('A list of history entries.')
rlHistoryEntry = MibTableRow((1, 3, 6, 1, 4, 1, 9, 6, 1, 101, 49, 5, 1), ).setIndexNames((0, "CISCOSB-RMON", "rlHistoryIndex"), (0, "CISCOSB-RMON", "rlHistorySampleIndex"))
if mibBuilder.loadTexts: rlHistoryEntry.setStatus('current')
if mibBuilder.loadTexts: rlHistoryEntry.setDescription('An historical statistics sample of a counter specified by the corresponding history control entry. This sample is associated with the rlHistoryControlEntry which set up the parameters for a regular collection of these samples. As an example, an instance of the rlHistoryPkts object might be named rlHistoryPkts.2.89')
rlHistoryIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 9, 6, 1, 101, 49, 5, 1, 1), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 65535))).setMaxAccess("readonly")
if mibBuilder.loadTexts: rlHistoryIndex.setStatus('current')
if mibBuilder.loadTexts: rlHistoryIndex.setDescription('The history of which this entry is a part. The history identified by a particular value of this index is the same history as identified by the same value of rlHistoryControlIndex.')
rlHistorySampleIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 9, 6, 1, 101, 49, 5, 1, 2), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 2147483647))).setMaxAccess("readonly")
if mibBuilder.loadTexts: rlHistorySampleIndex.setStatus('current')
if mibBuilder.loadTexts: rlHistorySampleIndex.setDescription('An index that uniquely identifies the particular sample this entry represents among all samples associated with the same rlHistoryControlEntry. This index starts at 1 and increases by one as each new sample is taken.')
rlHistoryIntervalStart = MibTableColumn((1, 3, 6, 1, 4, 1, 9, 6, 1, 101, 49, 5, 1, 3), TimeTicks()).setMaxAccess("readonly")
if mibBuilder.loadTexts: rlHistoryIntervalStart.setStatus('current')
if mibBuilder.loadTexts: rlHistoryIntervalStart.setDescription('The value of sysUpTime at the start of the interval over which this sample was measured. If the probe keeps track of the time of day, it should start the first sample of the history at a time such that when the next hour of the day begins, a sample is started at that instant. Note that following this rule may require the probe to delay collecting the first sample of the history, as each sample must be of the same interval. Also note that the sample which is currently being collected is not accessible in this table until the end of its interval.')
rlHistoryValue = MibTableColumn((1, 3, 6, 1, 4, 1, 9, 6, 1, 101, 49, 5, 1, 4), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: rlHistoryValue.setStatus('current')
if mibBuilder.loadTexts: rlHistoryValue.setDescription('The value of the sampled counter at the time of this sampling.')
rlControlHistoryControlQuotaBucket = MibScalar((1, 3, 6, 1, 4, 1, 9, 6, 1, 101, 49, 6), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 65535)).clone(8)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: rlControlHistoryControlQuotaBucket.setStatus('current')
if mibBuilder.loadTexts: rlControlHistoryControlQuotaBucket.setDescription('Maximum number of buckets to be used by each rlHistoryControlTable entry.')
rlControlHistoryControlMaxGlobalBuckets = MibScalar((1, 3, 6, 1, 4, 1, 9, 6, 1, 101, 49, 7), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 65535)).clone(300)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: rlControlHistoryControlMaxGlobalBuckets.setStatus('current')
if mibBuilder.loadTexts: rlControlHistoryControlMaxGlobalBuckets.setDescription('Maximum number of buckets to be used by all rlHistoryControlTable entries together.')
rlControlHistoryMaxEntries = MibScalar((1, 3, 6, 1, 4, 1, 9, 6, 1, 101, 49, 8), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 65535)).clone(300)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: rlControlHistoryMaxEntries.setStatus('current')
if mibBuilder.loadTexts: rlControlHistoryMaxEntries.setDescription('Maximum number of rlHistoryTable entries.')
mibBuilder.exportSymbols("CISCOSB-RMON", rlHistoryControlIndex=rlHistoryControlIndex, rlHistoryTable=rlHistoryTable, rlHistoryControlOwner=rlHistoryControlOwner, rlControlHistoryMaxEntries=rlControlHistoryMaxEntries, rlRmonControl=rlRmonControl, rlHistoryControlBucketsRequested=rlHistoryControlBucketsRequested, rlHistoryValue=rlHistoryValue, rlHistoryControlDataSource=rlHistoryControlDataSource, PYSNMP_MODULE_ID=rlRmonControl, rlControlHistoryControlQuotaBucket=rlControlHistoryControlQuotaBucket, rlHistoryControlEntry=rlHistoryControlEntry, rlRmonControlHistoryControlQuotaBucket=rlRmonControlHistoryControlQuotaBucket, rlHistoryIntervalStart=rlHistoryIntervalStart, rlHistoryEntry=rlHistoryEntry, rlHistoryIndex=rlHistoryIndex, rlHistorySampleIndex=rlHistorySampleIndex, rlHistoryControlBucketsGranted=rlHistoryControlBucketsGranted, rlHistoryControlTable=rlHistoryControlTable, rlControlHistoryControlMaxGlobalBuckets=rlControlHistoryControlMaxGlobalBuckets, rlRmonControlHistoryControlMaxGlobalBuckets=rlRmonControlHistoryControlMaxGlobalBuckets, rlRmonControlMibVersion=rlRmonControlMibVersion, rlHistoryControlStatus=rlHistoryControlStatus, rlHistoryControlInterval=rlHistoryControlInterval)
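The bucket rotation that the rlHistoryControlBucketsGranted description above spells out (once the stored sample count reaches the granted bucket count, the agent evicts the oldest sample to make room for the new one) can be sketched as a bounded queue. This is a hypothetical illustration, not part of the generated MIB module:

```python
from collections import deque

# Sketch of the rlHistory bucket rotation: a deque with maxlen=N evicts
# from the left (the oldest sample) when a new sample is appended at the
# right, mirroring the agent behaviour described for granted buckets.
granted_buckets = 3
history = deque(maxlen=granted_buckets)
for sample_index, value in enumerate([10, 20, 30, 40], start=1):
    history.append((sample_index, value))

print(list(history))  # [(2, 20), (3, 30), (4, 40)]
```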
# File: adet/modeling/DTInst/DTE/__init__.py (repo: shuaiqi361/AdelaiDet, license: BSD-2-Clause)
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
from .MaskLoader import MaskLoader
from .utils import IOUMetric, fast_ista, prepare_distance_transform_from_mask, \
prepare_overlay_DTMs_from_mask, prepare_extended_DTMs_from_mask, prepare_augmented_distance_transform_from_mask, \
prepare_distance_transform_from_mask_with_weights, tensor_to_dtm, prepare_complement_distance_transform_from_mask_with_weights
__all__ = ["MaskLoader", "IOUMetric",
"prepare_distance_transform_from_mask", "fast_ista", "tensor_to_dtm",
'prepare_overlay_DTMs_from_mask', 'prepare_extended_DTMs_from_mask',
'prepare_augmented_distance_transform_from_mask', 'prepare_distance_transform_from_mask_with_weights',
'prepare_complement_distance_transform_from_mask_with_weights']
# File: tests/bugs/core_3365_test.py (repo: reevespaul/firebird-qa, license: MIT)
#coding:utf-8
#
# id: bugs.core_3365
# title: Extend syntax for ALTER USER CURRENT_USER
# description:
# Replaced old code: removed EDS from here as it is not needed at all:
# we can use here trivial "connect '$(DSN)' ..." instead.
# Non-privileged user is created in this test and then we check that
# he is able to change his personal data: password, firstname and any of
#                  TAGS key-value pair (available in Srp only).
#
# Checked on 4.0.0.1635: OK, 3.773s; 3.0.5.33180: OK, 2.898s.
#
# tracker_id: CORE-3365
# min_versions: ['3.0']
# versions: 3.0
# qmid: None
import pytest
from firebird.qa import db_factory, isql_act, Action
# version: 3.0
# resources: None
substitutions_1 = [('[ \t]+', ' '), ('=', '')]
init_script_1 = """"""
db_1 = db_factory(sql_dialect=3, init=init_script_1)
test_script_1 = """
set bail on;
set count on;
    -- Drop any old account with name = 'TMP$C3365' if it remains from previous run:
set term ^;
execute block as
begin
begin
execute statement 'drop user tmp$c3365 using plugin Srp' with autonomous transaction;
when any do begin end
end
begin
execute statement 'drop user tmp$c3365 using plugin Legacy_UserManager' with autonomous transaction;
when any do begin end
end
end^
set term ;^
commit;
set width usrname 10;
set width firstname 10;
set width sec_plugin 20;
set width sec_attr_key 20;
set width sec_attr_val 20;
set width sec_plugin 20;
recreate view v_usr_info as
select
su.sec$user_name as usrname
,su.sec$first_name as firstname
,su.sec$plugin as sec_plugin
,sa.sec$key as sec_attr_key
,sa.sec$value as sec_attr_val
from sec$users su left
join sec$user_attributes sa using(sec$user_name, sec$plugin)
where su.sec$user_name = upper('tmp$c3365');
commit;
grant select on v_usr_info to public;
commit;
create user tmp$c3365 password 'Ir0nM@n' firstname 'John'
using plugin Srp
tags (initname='Ozzy', surname='Osbourne', groupname='Black Sabbath', birthday = '03.12.1948')
;
commit;
select 'before altering' as msg, v.* from v_usr_info v;
commit;
connect '$(DSN)' user tmp$c3365 password 'Ir0nM@n';
alter current user
set password 'H1ghWaySt@r' firstname 'Ian'
using plugin Srp
tags (initname='Ian', surname='Gillan', groupname='Deep Purple', drop birthday)
;
commit;
connect '$(DSN)' user tmp$c3365 password 'H1ghWaySt@r';
commit;
select 'after altering' as msg, v.* from v_usr_info v;
commit;
connect '$(DSN)' user SYSDBA password 'masterkey';
drop user tmp$c3365 using plugin Srp;
commit;
"""
act_1 = isql_act('db_1', test_script_1, substitutions=substitutions_1)
expected_stdout_1 = """
MSG USRNAME FIRSTNAME SEC_PLUGIN SEC_ATTR_KEY SEC_ATTR_VAL
=============== ========== ========== ==================== ==================== ====================
before altering TMP$C3365 John Srp BIRTHDAY 03.12.1948
before altering TMP$C3365 John Srp GROUPNAME Black Sabbath
before altering TMP$C3365 John Srp INITNAME Ozzy
before altering TMP$C3365 John Srp SURNAME Osbourne
Records affected: 4
MSG USRNAME FIRSTNAME SEC_PLUGIN SEC_ATTR_KEY SEC_ATTR_VAL
============== ========== ========== ==================== ==================== ====================
after altering TMP$C3365 Ian Srp GROUPNAME Deep Purple
after altering TMP$C3365 Ian Srp INITNAME Ian
after altering TMP$C3365 Ian Srp SURNAME Gillan
Records affected: 3
"""
@pytest.mark.version('>=3.0')
def test_1(act_1: Action):
act_1.expected_stdout = expected_stdout_1
act_1.execute()
assert act_1.clean_expected_stdout == act_1.clean_stdout
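The `substitutions_1` pairs at the top of this test are regex replacements applied to both actual and expected output before comparison: runs of spaces/tabs collapse to one space, and the `=` characters in column-separator rows are dropped. A minimal standalone sketch of that normalization (helper name is illustrative, not the firebird-qa API):

```python
import re

# Illustrative re-implementation of output substitution: each (pattern,
# replacement) pair is applied in order with re.sub before comparing
# actual output against expected_stdout.
substitutions = [('[ \t]+', ' '), ('=', '')]

def normalize(text):
    for pattern, replacement in substitutions:
        text = re.sub(pattern, replacement, text)
    return text

print(normalize('MSG             USRNAME'))  # 'MSG USRNAME'
```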
# File: User/User.py (repo: howiemac/evoke4, license: BSD-3-Clause)
""" evoke base User object
IHM 2/2/2006 and thereafter
CJH 2012 and therafter
gives session-based user validation
The database users table must have an entry with uid==1 and id==guest.
This is used to indicate no valid login.
The database users table must have an entry with uid==2 . This is the
sys admin user.
Registration is verifed via email.
Where a user has a stage of "" (the default), this indicates that they
have not yet had their registration verified, and they will be unable
to login.
"""
import time
import re
import inspect
import crypt
import uuid
import hashlib
from base64 import urlsafe_b64encode as encode, urlsafe_b64decode as decode
from base import lib
from base.render import html
class User:
def permitted(self,user):
"permitted if own record or got edit permit"
return self.uid==user.uid or user.can('edit user')
@classmethod
def hashed(self, pw, salt=None):
"return a hashed password prepended with a salt, generated if not specified"
salt = salt or uuid.uuid4().hex
return hashlib.sha512(salt.encode()+pw.encode()).hexdigest()+':'+salt
def check_password(self, pw):
"fetch pw from database, split into salt and hash then compare against the pw supplied"
hashed = self.pw or self.hashed("")
#salt, hash = hashed[:19], hashed[19:]
hash,salt = hashed.split(':')
return self.hashed(pw, salt) == hashed
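A standalone Python 3 sketch of the salt-and-hash scheme implemented by `hashed()` and `check_password()` above: the stored value is `"<sha512 hexdigest>:<salt>"`, and verification recovers the salt and re-hashes. The function names mirror the methods, but this is an illustration, not the evoke API itself:

```python
import hashlib
import uuid

def hashed(pw, salt=None):
    # sha512 over salt + password, stored as "<hexdigest>:<salt>".
    salt = salt or uuid.uuid4().hex
    return hashlib.sha512(salt.encode() + pw.encode()).hexdigest() + ':' + salt

def check_password(stored, pw):
    # Recover the salt from the stored value, re-hash, and compare.
    _, salt = stored.split(':')
    return hashed(pw, salt) == stored

stored = hashed('s3cret')
print(check_password(stored, 's3cret'))  # True
print(check_password(stored, 'wrong'))   # False
```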
@classmethod
def fetch_user(cls,id):
"return User object for given id, or return None if not found"
users=cls.list(id=id)
return users and users[0] or None
@classmethod
def fetch_user_by_email(cls,email):
"return User object for given email, or return None if not found"
users=cls.list(email=email)
return users and users[0] or None
@classmethod
def fetch_if_valid(cls,id, pw):
"authenticate password and id - return validated user instance"
if id:
user=cls.fetch_user(id)
# print "VERIFIED",user.id,user.pw,id,pw, " mode:",getattr(user,'mode','NO MODE')
if user and user.check_password(pw) and (user.stage=='verified'):
return user #valid
return None #invalid
@classmethod
def create(cls,req):
"create a new user, using data from req"
self=cls.new()
self.store(req)#update and flush
return self
def store(self,req):
"update a user, using data from req"
self.update(req)
self.flush()
return self
def remove(self,req):
"delete an unverified user - called from Page_registrations.evo"
if self.stage!='verified':
self.delete()
req.message='"%s" has been deleted' % self.id
return self.view(req)
remove.permit='edit user'
def send_email(self,subject,body):
""
print "email: ", self.Config.mailfrom,self.email
lib.email(self.Config.mailfrom,self.email,subject,body)
###### permits ########################
def is_guest(self):
""
return self.uid==1
as_guest=is_guest # this can be overridden elsewhere, to allow an "as_guest" mode, for non-guest users
def is_admin(self):
"system admin?"
return self.uid==2
def can(self,what):
"""
permit checker - replacement for ob.allowed() which is no more (RIP...)
- `what` can be a permit, in the form "task group"
- `what` can be a method, in which case the permit of that method is checked, and the permitted() method of its class.
- old form method permits (ie "group.task") are also supported
- a user can have a master group, which gives unlimited access
DO NOT CALL THIS METHOD FROM WITHIN A CLASS permitted METHOD or RECURSION WILL BE INFINITE!
"""
if "master" in self.get_permits():
return 1
if inspect.ismethod(what):
permit = getattr(what.im_func, 'permit', None)
if permit=='guest':
return 1 # ok regardless, if explicit guest permit
if type(what).__name__=='instancemethod':
if not (inspect.isclass(what.im_self) or what.im_self.permitted(self)):
# print ">>>>>>>>>>>>> method",what,'NOT PERMITTED'
return 0
if not permit:
return 1 #ok if permitted and no permit set
else:
permit=what
if permit.find('.')>-1: #retro compatibility
group,task = permit.split(".",1)
else:
task,group = permit.split(" ",1)
# print ">>>>>>>>>>>>> string",what,task,group,task in self.get_permits().get(group,[]),self.get_permits().get(group,[])
return task in self.get_permits().get(group,[])
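The two permit-string forms that `can()` accepts, the legacy `"group.task"` and the current `"task group"`, both resolve to the same (group, task) pair. A standalone sketch of that parsing (helper name is illustrative, not part of the class):

```python
def split_permit(permit):
    # Legacy form: "group.task"; current form: "task group".
    if '.' in permit:
        group, task = permit.split('.', 1)
    else:
        task, group = permit.split(' ', 1)
    return group, task

print(split_permit('user.edit'))  # ('user', 'edit')
print(split_permit('edit user'))  # ('user', 'edit')
```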
def get_permits(self):
"returns the permits for a user, as a dictionary of {group:[tasks]}"
if not hasattr(self,"permits"):
self.permits={}
for k,v in ((i['group'],i['task']) for i in self.Permit.list(asObjects=False, user=self.uid)):
if k in self.permits:
self.permits[k].append(v)
else:
self.permits[k]=[v]
return self.permits
def store_permits(self):
"stores the permit dictionary (group:[tasks]}"
# clear out existing permits for this user (only those in Config.permits, as other permits may be there also, and these should be retained)
for group,tasks in self.Config.permits.items():
self.list(asObjects=False,sql='delete from %s where user="%s" and `group`="%s" and task in %s' % (self.Permit.table,self.uid,group,lib.sql_list(tasks)))
# store the new permits
for group,tasks in self.permits.items():
for task in tasks: # store the permit
permit = self.Permit.new()
permit.user = self.uid
permit.group = group
permit.task = task
permit.flush()
def sorted_permit_items(self):
"sorts Config.permits.items() so that master comes first"
return sorted(self.Config.permits.items(),(lambda x,y:(x[0]=='master' or x<y) and -1 or 0))
def create_permits(self):
"creates permits"
self.stage='verified'
self.flush()
self.permits=self.Config.default_permits #set opening permits
self.store_permits()
###################### user validation ######################
def hook(self,req,ob,method,url):
"""req hook - to allow apps to add attributes to req
This is called by dispatch.py, for req.user, immediately after calling req.user.refresh() - so
    req.user can also be modified reliably via this hook.
"""
pass
refresh=hook # backwards compatibility (IHM 2014), in case refresh has been overridden by an app
@classmethod
def validate_user(cls,req):
"hook method to allow <app>.User subclass to override the default validation and permit setting"
req.user=cls.validated_user(req)
req.user.get_permits()
# print "req.user set to: ",req.user
@classmethod
def validated_user(cls, req):
"""login validation is now handled by Twisted.cred. If we have got this far
then the password has been successfully checked and the users id is
available as req.request.avatarId
"""
user= cls.Session.fetch_user(req)
# print "VALIDATED USER:",user.id
# play around with cookies
if user.uid>1 and req.get("evokeLogin"):
#found a valid user in the request, so set the cookies
forever = 10*365*24*3600 # 10 years on
# req.set_cookie('evokeID',user.cookie_data(),expires=req.get("keepLogin") and forever or None)
if req.get('evokePersist'): #user wants name remembered
# print "REMEMBER ME"
req.set_cookie('evokePersist',user.id,expires=forever)
elif req.cookies.get('evokePersist')==user.id: #user no longer wants name remembered
req.clear_cookie('evokePersist')
return user
def login_failure(self,req):
"checks login form entries for validity - this is called only for guest user, sometime after validate_user().."
if '__user__' in req: #we must have logged in and failed login validation to get here
user=self.fetch_user(req.__user__)
if user and not user.stage:
req.error='registration for "%s" has not yet been verified' % req.__user__
else: # CJH: not good practice to distinguish which of username and password is valid, so....
req.error="username or password is invalid - please try again - have you registered?"
return 1
return 0 #we have a guest and not a login failure
######################## form handlers #######################
def login(self,req):
""
return self.login_form(req)
login.permit="guest"
def logout(self, req):
"expire the user and password cookie"
req.clear_cookie('evokeID')
req.request.getSession().expire()
if req.return_to:
return req.redirect(req.return_to)
req.message='%s has been logged out' % req.user.id
return req.redirect(self.fetch_user('guest').url('login')) #use redirect to force clean new login
def register(self,req):
"create new user record"
if self.Config.registration_method=='admin': # registration by admin only
if not req.user.can('edit user'):
return self.error(req,'access denied - registration must be done by admin')
if 'pass2' in req: #form must have been submitted, so process it
uob=self.fetch_user(req.username)
eob=self.fetch_user_by_email(req.email)
retry=(req.redo==req.username) and uob and (not uob.stage)
if not req.username:
req.error='please enter a username'
elif uob and not retry:
req.error='username "%s" is taken, please try another' % req.username
elif not re.match('.*@.*' ,req.email):
req.error='please enter a valid email address'
elif eob and ((not retry) or (eob.uid!=uob.uid)):
req.error='you already have a login for this email address'
elif not req.pass1:
req.error='please enter a password'
elif req.pass2!=req.pass1:
req.error='passwords do not match - please re-enter'
else: #must be fine
uob=uob or self.new()
uob.id=req.username
uob.pw=self.hashed(req.pass1) # hash the password
uob.email=req.email
uob.when=lib.DATE()
uob.flush() #store the new user
key=uob.verification_key()
site=self.get_sitename(req)
if self.Config.registration_method=='admin':
# registration by admin only
return uob.verify_manually(req)
elif self.Config.registration_method=='approve':
# registration with admin approval
# (O/S : this should maybe give email confirmation to the new user when admin verifies them?)
admin=self.get(2) #O/S we should allow a nominated other with 'user edit' permit to act as admin for this purpose....
text="""
Hi %s
%s wants to register with us at %s, and gives the following introduction:
-----------------------
%s
-----------------------
To approve their registration, simply click the link below:
-----------------------
http://%s%s
-----------------------
""" % (admin.id,req.username,site,req.story,req.get_host(),(self.class_url('verify?key=%s') % key))
lib.email(self.Config.mailfrom,admin.email,subject="%s registration verification" % site,text=text)#send the email
return self.get(1).registration_requested(req)
################################################
#else we assume that registration_method is 'self' (the default)
# registration with self confirmation via email
text="""
Hi %s
Thanks for registering with us at %s. We look forward to seeing you around the site.
To complete your registration, you need to confirm that you got this email. To do so, simply click the link below:
-----------------------
http://%s%s
-----------------------
If clicking the link doesn't work, just copy and paste the entire address into your browser. If you're still having problems, simply forward this email to %s and we'll do our best to help you.
Welcome to %s.
""" % (req.username,site,req.get_host(),(self.class_url('verify?key=%s') % key),self.Config.mailto,site)
print "!!!!!!!! REGISTRATION !!!!!!!!:%s:%s" % (req.username,key)
lib.email(self.Config.mailfrom,req.email,subject="%s registration verification" % site,text=text)#send the email
req.message='registration of "%s" accepted' % req.username
return self.get(1).registered_form(req)
return self.register_form(req)
register.permit="guest" #dodge the login validation
def verify(cls,req):
"called from registration email to complete the registration process"
try:
#check key
# prepare key - need to strip whitespace and make sure the length
# is a multiple of 4
key = req.key.strip()
if len(key) % 4:
key = key + ('=' * (4 - len(key)%4))
req.key = key
try:
uid,id,pw=decode(req.key).split(',')
except:
uid,id,pw=decode(req.key+'=').split(',') # bodge it... some browsers dont return a trailing '='
# print '>>>>>',uid,id,pw
self=cls.get(int(uid))
if (self.id==id) and (self.pw==pw):
if not self.stage: # not already verified, so ..
req.__user__=id
req.__pass__=pw
self.create_permits()
if self.Config.registration_method=='self':
self.validate_user(req) #create the login cookie
return req.redirect(self.url("view?message=%s" % lib.url_safe('your registration has been verified'))) #use redirect to force clean new login
else:
return req.redirect(self.url("view?message=%s" % lib.url_safe('registration of "%s" has been verified' % id)))
except:
raise
        return self.error(req, 'verification failure')
verify.permit='guest'
verify=classmethod(verify)
def verify_manually(self,req):
"manually verify a registration"
if not self.stage:
self.create_permits()
req.message='registration for "%s" has been verified' % self.id
return self.view(req)
verify_manually.permit='edit user'
def verification_key(self):
""
return encode("%s,%s,%s" % (self.uid,self.id,self.pw))
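A Python 3 rendering of the verification-key round trip: `verification_key()` urlsafe-base64-encodes `"uid,id,pw"`, and `verify()` above restores any trailing `=` padding that mail clients strip before decoding. Helper names here are illustrative:

```python
from base64 import urlsafe_b64encode, urlsafe_b64decode

def make_key(uid, user_id, pw_hash):
    # Pack "uid,id,pw" and urlsafe-base64 it, as verification_key() does.
    return urlsafe_b64encode(('%s,%s,%s' % (uid, user_id, pw_hash)).encode()).decode()

def parse_key(key):
    # Restore '=' padding (length must be a multiple of 4), then decode.
    key = key.strip()
    if len(key) % 4:
        key += '=' * (4 - len(key) % 4)
    return urlsafe_b64decode(key.encode()).decode().split(',')

key = make_key(7, 'alice', 'abc123')
print(parse_key(key.rstrip('=')))  # ['7', 'alice', 'abc123']
```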
# TODO - password reset mechanism
def reminder(self,req):
"send password reminder email"
return ''
#self.logout(req)
# print "User.reminder"
if 'id' in req or 'email' in req: #form must have been submitted, so process it
# User.reminder req has id or email
if not (req.id or req.email):
req.error='please enter a registered username or email address'
else:
user=self.fetch_user(req.id) or self.fetch_user_by_email(req.email)
# print "User.reminder user=", user, user.uid, user.email
if not user:
req.error='%s is not registered' % (req.id and "username" or "email address",)
else: #must be fine!
user.send_email('%s password reminder' % user.id,'your password for %s is: %s' % (req.get_host(),user.pw))
req.message='your password has been emailed to you'
return req.redirect(self.Page.get(1).url('view?message=%s' % lib.url_safe(req.message))) # redirect to check permissions
return self.reminder_form(req)
reminder.permit="guest" #dodge the login validation
###### user admin ######################
def edit(self, req):
"edit user details, including permits"
if 'pass2' in req: #form must have been submitted, so process it
if self.uid==req.user.uid:#ie if editing your own permissions
req['user.edit']=1 #for safety - dont allow you to lose your own security access
if 'pw' in req and not req.pw: #no password entered, so don't change it
del req["pw"]
if self.Config.user_email_required and not re.match('.*@.*' ,req.email):
req.error='please enter a valid email address'
elif self.Config.user_email_required and (self.email!=req.email) and self.fetch_user_by_email(req.email):
req.error='you already have a login for this email address'
elif req.pass2!=req.pass1:
req.error='passwords do not match - please re-enter'
else: #must be fine!
if (self.uid>2) and req.user.can('edit user'): # if not admin user, and can edit users, then update permits
self.permits={}
for group,tasks in self.Config.permits.items():
for task in tasks:
if req.get(group+'.'+task):
if group in self.permits:
self.permits[group].append(task)
else:
self.permits[group]=[task]
self.store_permits()
if req.pass1:
self.pw=self.hashed(req.pass1)
self.store(req)
req.message='details updated for "%s"' % self.id
#following not needed for session-based login
## if self.uid==req.user.uid:
# if self.pw!=req.user.pw:#user is altering own details, so fix the login
# req.__user__=self.id
# req.__pass__=self.pw
# self.validate_user(req) #create the login cookie
return self.finish_edit(req) #redirects appropriately
return self.edit_form(req)
edit.permit='edit user'
def finish_edit(self,req):
"returns to user menu (if allowed)"
if req.user.can('edit user'):
return self.redirect(req,'registrations')
return self.redirect(req)
########## utilities ########
def get_HTML_title(self,ob,req):
"HTML title - used by wrappers - uses req.title if it exists, otherwise ob.get_title() if it exists"
return "%s %s" % (self.get_sitename(req),req.title or (hasattr(ob,"get_title") and ob.get_title()) or "",)
def get_sitename(self,req):
"used in emails, HTML title etc."
return self.Config.sitename or req.get_host()
########## landing places ##################
@classmethod
def welcome(self,req):
"the welcome page, when no object/instance is specified in the URL"
if req.return_to:
return req.redirect(req.return_to)
return req.redirect(self.Page.get(self.Config.default_page).url())
# or use this if Page is not installed or in use:
# return self.get(1).view(req)
def view(self,req):
""
if self.uid==1:
return self.registrations(req)
return self.edit_form(req)
home=view
################# errors and messages ################
@classmethod
def error(self,req,errormsg=''):
""
req.error=errormsg or req.error or 'undefined error'
try:
return req.user.error_form(req)
except:
return req.error
@classmethod
def ok(self,req,msg=''):
""
req.message=msg or req.message or ''
return req.user.error_form(req)
######################## forms #######################
@html
def error_form(self,req):
pass
@html
def login_form(self,req):
req.title='login'
@html
def register_form(self,req):
pass
@html
def registered_form(self,req):
pass
@html
def registration_requested(self,req):
pass
@html
def registrations(self,req):
"listing of user registrations, allowing verification"
req.items=self.list(orderby='uid desc')
registrations.permit='edit user'
@html
def reminder_form(self,req):
pass
@html
def edit_form(self,req):
pass
| 35.718574 | 193 | 0.643082 | 2,751 | 19,038 | 4.395856 | 0.169756 | 0.020673 | 0.007525 | 0.008683 | 0.238568 | 0.202431 | 0.149012 | 0.119325 | 0.10841 | 0.095841 | 0 | 0.005533 | 0.221609 | 19,038 | 532 | 194 | 35.785714 | 0.810514 | 0.168348 | 0 | 0.298913 | 0 | 0.008152 | 0.263015 | 0.012997 | 0 | 0 | 0 | 0.00188 | 0 | 0 | null | null | 0.076087 | 0.024457 | null | null | 0.005435 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
# --- src/thenewboston/factories/network_validator.py (achalpatel/thenewboston-python, MIT) ---
from factory import Faker
from .network_node import NetworkNodeFactory
from ..constants.network import ACCOUNT_FILE_HASH_LENGTH, BLOCK_IDENTIFIER_LENGTH, MAX_POINT_VALUE, MIN_POINT_VALUE
from ..models.network_validator import NetworkValidator
class NetworkValidatorFactory(NetworkNodeFactory):
    daily_confirmation_rate = Faker('pyint', max_value=MAX_POINT_VALUE, min_value=MIN_POINT_VALUE)
    root_account_file = Faker('url')
    root_account_file_hash = Faker('text', max_nb_chars=ACCOUNT_FILE_HASH_LENGTH)
    seed_block_identifier = Faker('text', max_nb_chars=BLOCK_IDENTIFIER_LENGTH)

    class Meta:
        model = NetworkValidator
        abstract = True
# --- contentcuration/contentcuration/test_settings.py (DXCanas/content-curation, MIT) ---
from .not_production_settings import *  # noqa
DEBUG = True
WEBPACK_LOADER["DEFAULT"][  # noqa
    "LOADER_CLASS"
] = "contentcuration.tests.webpack_loader.TestWebpackLoader"
TEST_ENV = True
# --- tests/test.py (ZhenningLang/py-proj-init, MIT) ---
import os
import sys
CURRENT_PATH = os.path.split(os.path.realpath(__file__))[0]
sys.path.append(os.path.join(CURRENT_PATH, '..'))
from py_proj_init.__main__ import main # noqa
main()
# --- lightconfig/lightconfig.py (daassh/LightConfig, MIT) ---
#!/usr/bin/env python
# coding=utf-8
# get an easy way to edit config files
"""
>>> from lightconfig import LightConfig
>>> cfg = LightConfig("config.ini")
>>> cfg.section1.option1 = "value1"
>>> print(cfg.section1.option1)
value1
>>> "section1" in cfg
True
>>> "option1" in cfg.section1
True
"""
import os
import codecs
import locale

try:
    from ConfigParser import RawConfigParser as ConfigParser
except ImportError:
    from configparser import RawConfigParser as ConfigParser


class ConfigParserOptionCaseSensitive(ConfigParser):
    """
    add case sensitivity to ConfigParser
    """
    def __init__(self, defaults=None):
        ConfigParser.__init__(self, defaults)

    def optionxform(self, option_str):
        # identity transform keeps option names exactly as written
        return option_str


class LightConfig(object):
    def __init__(self, config_path,
                 try_encoding={'utf-8', 'utf-8-sig', locale.getpreferredencoding().lower()},
                 try_convert_digit=False):
        self.__dict__['_config_path'] = config_path
        self.__dict__['_try_encoding'] = try_encoding if isinstance(try_encoding, (list, tuple, set)) else [try_encoding]
        self.__dict__['_try_convert_digit'] = try_convert_digit
        self.__dict__['_config'] = ConfigParserOptionCaseSensitive()
        if not os.path.exists(config_path):
            dir_path = os.path.dirname(os.path.abspath(config_path))
            if not os.path.exists(dir_path):
                os.makedirs(dir_path)
            open(config_path, 'a').close()
        LightConfig._read(self)
        self.__dict__['_cached_stamp'] = LightConfig._stamp(self)

    def __str__(self):
        return str(LightConfig._as_dict(self))

    def __repr__(self):
        return repr(LightConfig._as_dict(self))

    def __iter__(self):
        return iter(LightConfig._as_dict(self).items())

    def __getattribute__(self, item):
        if item in ('keys', '__dict__'):
            return super(LightConfig, self).__getattribute__(item)
        else:
            return LightConfig.__getattr__(self, item)

    def __getattr__(self, item):
        return LightConfig.Section(self, item, self.__dict__['_try_convert_digit'])
    __getitem__ = __getattr__

    def __setattr__(self, name, value):
        try:
            value = dict(value)
        except Exception:
            raise ValueError('"{}" is not dictable'.format(value))
        else:
            LightConfig.__dict__['__delattr__'](self, name)
            section = LightConfig.Section(self, name, self.__dict__['_try_convert_digit'])
            for k, v in value.items():
                LightConfig.Section.__setattr__(section, k, v)
    __setitem__ = __setattr__

    def __delattr__(self, item):
        if item in self:
            self.__dict__['_config'].remove_section(item)
            LightConfig._save(self)
    __delitem__ = __delattr__

    def __contains__(self, item):
        return self.__dict__['_config'].has_section(item)

    def _as_dict(self):
        res = {}
        for section in self.keys():
            res[section] = self[section]
        return res

    def keys(self):
        return self.__dict__['_config'].sections()

    def _read(self):
        for encoding in self.__dict__['_try_encoding']:
            fp = codecs.open(self.__dict__['_config_path'], encoding=encoding)
            try:
                if 'read_file' in dir(self.__dict__['_config']):
                    self.__dict__['_config'].read_file(fp)
                else:
                    self.__dict__['_config'].readfp(fp)
            except Exception:
                err = True
            else:
                err = False
                self.__dict__['_encoding'] = encoding
                break
        if err:
            raise UnicodeError("\"{}\" codec can't decode this config file".format(', '.join(self.__dict__['_try_encoding'])))

    def _save(self):
        self.__dict__['_config'].write(codecs.open(self.__dict__['_config_path'], "w", encoding=self.__dict__['_encoding']))
        self.__dict__['_cached_stamp'] = LightConfig._stamp(self)

    def _stamp(self):
        return os.stat(self.__dict__['_config_path']).st_mtime

    class Section(object):
        def __init__(self, conf, section, try_convert_digit):
            self.__dict__['_section'] = section
            self.__dict__['_conf'] = conf
            self.__dict__['_try_convert_digit'] = try_convert_digit

        def __str__(self):
            return str(LightConfig.Section._as_dict(self))

        def __repr__(self):
            return repr(LightConfig.Section._as_dict(self))

        def __iter__(self):
            return iter(LightConfig.Section._as_dict(self).items())

        def __getattribute__(self, item):
            if item in ('keys', '__dict__'):
                return super(LightConfig.Section, self).__getattribute__(item)
            else:
                return LightConfig.Section.__getattr__(self, item)

        def __getattr__(self, option):
            # re-read the file if it changed on disk since the last read
            current_stamp = LightConfig._stamp(self.__dict__['_conf'])
            if current_stamp != self.__dict__['_conf'].__dict__['_cached_stamp']:
                LightConfig._read(self.__dict__['_conf'])
                self.__dict__['_conf'].__dict__['_cached_stamp'] = current_stamp
            value = self.__dict__['_conf'].__dict__['_config'].get(self.__dict__['_section'], option)
            if self.__dict__['_try_convert_digit']:
                # NB: eval() will execute arbitrary expressions found in the
                # config file; only enable try_convert_digit on trusted input
                try:
                    value = eval(value)
                except Exception:
                    pass
            return value
        __getitem__ = __getattr__

        def __setattr__(self, key, value):
            if not self.__dict__['_section'] in self.__dict__['_conf']:
                self.__dict__['_conf'].__dict__['_config'].add_section(self.__dict__['_section'])
            self.__dict__['_conf'].__dict__['_config'].set(self.__dict__['_section'], key, str(value))
            LightConfig._save(self.__dict__['_conf'])
        __setitem__ = __setattr__

        def __delattr__(self, item):
            if item in self:
                self.__dict__['_conf'].__dict__['_config'].remove_option(self.__dict__['_section'], item)
                LightConfig._save(self.__dict__['_conf'])
        __delitem__ = __delattr__

        def __contains__(self, item):
            return self.__dict__['_conf'].__dict__['_config'].has_option(self.__dict__['_section'], item)

        def _as_dict(self):
            return dict(self.__dict__['_conf'].__dict__['_config'].items(self.__dict__['_section']))

        def keys(self):
            return self.__dict__['_conf'].__dict__['_config'].options(self.__dict__['_section'])
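The class above keeps its real state in `self.__dict__` directly so that `__getattr__`/`__setattr__` can treat every other attribute access as config data. A minimal standalone sketch of that interception trick (a hypothetical `AttrDict` class, not part of LightConfig):

```python
class AttrDict(object):
    """Route attribute access through an internal dict, LightConfig-style."""

    def __init__(self):
        # bypass our own __setattr__ by writing to __dict__ directly
        self.__dict__['_data'] = {}

    def __getattr__(self, name):
        # only called when normal attribute lookup fails
        try:
            return self.__dict__['_data'][name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        # every plain assignment lands in the internal dict
        self.__dict__['_data'][name] = value


d = AttrDict()
d.host = 'localhost'
d.port = 8080
```

LightConfig layers file persistence on top of exactly this mechanism: its `Section.__setattr__` writes through to the ConfigParser and saves, instead of writing to a plain dict.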
# --- kaggle_tutorial_mod.py (DistrictDataLabs/02-seefish, Apache-2.0) ---
# -*- coding: utf-8 -*-
"""
Created on Fri Jan 16 17:31:42 2015
This is adapted from the kaggle tutorial for the National Data Science Bowl at
https://www.kaggle.com/c/datasciencebowl/details/tutorial
Any code section lifted from the tutorial will start with # In tutorial [n].
My adaption will start with # Adapted
2/21/2015: I skipped over all the buildup stuff and just used the functions
they summarized it in. Works fine.
DONE: 1. Create a file with all that other stuff removed.
DONE: 2. Then adapt the file references back to what Chris simplified it to and
make sure it runs.
3. Write/adapt the code to build a submission
4. Figure out how this fits with CNNs.
@author: kperez-lopez
"""
# In tutorial [1]:
from skimage.io import imread
from skimage.transform import resize
from sklearn.ensemble import RandomForestClassifier as RF
import glob
import os
from sklearn import cross_validation
from sklearn.cross_validation import StratifiedKFold as KFold
from sklearn.metrics import classification_report
from matplotlib import pyplot as plt
from matplotlib import colors
from pylab import cm
from skimage import segmentation
from skimage.morphology import watershed
from skimage import measure
from skimage import morphology
import numpy as np
import pandas as pd
from scipy import ndimage
from skimage.feature import peak_local_max
# make graphics inline
# Editor says this is an error.
# TODO: figure out why.
# %matplotlib inline -I've got IPython console set to "automatic," which yields
# a separate window for graphics
# In tutorial [2]: ( don't know why they include this)
import warnings
warnings.filterwarnings("ignore")
path = "./"
# Using .differences(...) removes any files in the dir train that have,
# extensions, i.e., are not subdirs, e.g., list.txt
train_dir_names = \
    list(set(glob.glob(os.path.join(path, "train", "*"))).
         difference(set(glob.glob(os.path.join(path, "train", "*.*")))))
train_dir_names.sort()
def getLargestRegion(props, labelmap, im_thresh):
    regionmaxprop = None
    for regionprop in props:
        # check to see if the region is at least 50% nonzero
        if sum(im_thresh[labelmap == regionprop.label])*1.0/regionprop.area < \
                0.50:
            continue
        if regionmaxprop is None:
            regionmaxprop = regionprop
        if regionmaxprop.filled_area < regionprop.filled_area:
            regionmaxprop = regionprop
    return regionmaxprop
# In Tutorial [9]:
"""
Now, we collect the previous steps together in a function to make it easily
repeatable.
"""
# Adapted from Tutorial [9]:
def getMinorMajorRatio(image):
    image = image.copy()
    # Create the thresholded image to eliminate some of the background
    im_thresh = np.where(image > np.mean(image), 0., 1.0)

    # Dilate the image
    im_dilated = morphology.dilation(im_thresh, np.ones((4, 4)))

    # Create the label list
    label_list = measure.label(im_dilated)
    label_list = im_thresh*label_list
    label_list = label_list.astype(int)

    regionprops_list = measure.regionprops(label_list)
    maxregion = getLargestRegion(regionprops_list, label_list, im_thresh)

    # guard against cases where the segmentation fails by providing zeros
    ratio = 0.0
    if maxregion is not None and maxregion.major_axis_length != 0.0:
        ratio = maxregion.minor_axis_length*1.0 / maxregion.major_axis_length
    return ratio
"""
Preparing Training Data
With our code for the ratio of minor to major axis, let's add the raw pixel
values to the list of features for our dataset. In order to use the pixel
values in a model for our classifier, we need a fixed length feature vector, so
we will rescale the images to be constant size and add the fixed number of
pixels to the feature vector.
To create the feature vectors, we will loop through each of the directories in
our training data set and then loop over each image within that class. For each
image, we will rescale it to 25 x 25 pixels and then add the rescaled pixel
values to a feature vector, X. The last feature we include will be our
width-to-length ratio. We will also create the class label in the vector y,
which will have the true class label for each row of the feature vector, X.
"""
# Adapted from Tutorial [10]
# Rescale the images and create the combined metrics and training labels
# get the total training images
numberofImages = 0
for folder in train_dir_names:
    for fileNameDir in os.walk(folder):
        # fileNameDir will be a 3-tuple, (dirpath, dirnames, filenames)
        # so we look at the last element, a list of the filenames
        # print fileNameDir
        for fileName in fileNameDir[2]:
            # Only read in the images
            if fileName[-4:] != ".jpg":
                continue
            numberofImages += 1
# We'll rescale the images to be 25x25=625
# Why 25? Why not 2**5 = 32?
maxPixel = 25
imageSize = maxPixel * maxPixel
num_rows = numberofImages # one row for each image in the training dataset
num_features = imageSize + 1 # for our ratio
# X is the ARRAY of feature vectors with one row of features per image
# consisting of the pixel values and our metric
X = np.zeros((num_rows, num_features), dtype=float)
# y is the numeric class label
# TODO why the double parens?
y = np.zeros((num_rows))
files = []
# Generate training data
i = 0
label = 0
# List of string of class names
namesClasses = list()
print "Reading images"
# Navigate through the list of directories
for folder in train_dir_names:
    # Append the string class name for each class
    # (note: os.sep, not os.pathsep, splits a filesystem path into components)
    currentClass = folder.split(os.sep)[-1]
    print(currentClass)
    namesClasses.append(currentClass)
    for fileNameDir in os.walk(folder):
        for fileName in fileNameDir[2]:
            # Only read in the images
            if fileName[-4:] != ".jpg":
                continue

            # Read in the images and create the features
            nameFileImage = \
                "{0}{1}{2}".format(fileNameDir[0], os.sep, fileName)
            image = imread(nameFileImage, as_grey=True)
            files.append(nameFileImage)
            axisratio = getMinorMajorRatio(image)
            # TODO: check out exactly how skimage resizes
            image = resize(image, (maxPixel, maxPixel))

            # Store the rescaled image pixels and the axis ratio
            X[i, 0:imageSize] = np.reshape(image, (1, imageSize))
            X[i, imageSize] = axisratio

            # Store the class label
            y[i] = label
            i += 1
            # report progress for each 5% done
            report = [int((j+1)*num_rows/20.) for j in range(20)]
            if i in report:
                print(np.ceil(i * 100.0 / num_rows), "% done")
    label += 1
"""
Width-to-Length Ratio Class Separation
Now that we have calculated the width-to-length ratio metric for all the
images, we can look at the class separation to see how well our feature
performs. We'll compare pairs of the classes' distributions by plotting each
pair of classes. While this will not cover the whole space of hundreds of
possible combinations, it will give us a feel for how similar or dissimilar
different classes are in this feature, and the class distributions should be
comparable across subplots.
"""
# From Tutorial [12]
# Loop through the classes two at a time and compare their distributions of
# the Width/Length Ratio
# Create a DataFrame object to make subsetting the data on the class
df = pd.DataFrame({"class": y[:], "ratio": X[:, num_features-1]})
f = plt.figure(figsize=(30, 20))
# Suppress zeros and choose a few large classes to better highlight the
# distributions.
# Here "large" means images that have a large ratio of minor to major axis.
df = df.loc[df["ratio"] > 0]
minimumSize = 20
counts = df["class"].value_counts()
largeclasses = [int(x) for x in list(counts.loc[counts > minimumSize].index)]
# Loop through 40 of the classes
for j in range(0, 40, 2):
    # integer division keeps the subplot index an int under Python 3
    subfig = plt.subplot(4, 5, j // 2 + 1)
    # Plot the normalized histograms for two classes
    classind1 = largeclasses[j]
    classind2 = largeclasses[j+1]
    n, bins, p = plt.hist(df.loc[df["class"] == classind1]["ratio"].values,
                          alpha=0.5, bins=[x*0.01 for x in range(100)],
                          label=namesClasses[classind1].split(os.sep)[-1],
                          normed=1)
    n2, bins, p = plt.hist(df.loc[df["class"] == classind2]["ratio"].values,
                           alpha=0.5, bins=bins,
                           label=namesClasses[classind2].split(os.sep)[-1],
                           normed=1)
    subfig.set_ylim([0., 10.])
    plt.legend(loc='upper right')
    plt.xlabel("Width/Length Ratio")
# results = six histograms in 2x3 display
# TODO: this doesn't make sense, printing out 20 graphs on top of each other.
# Figure out how to display this reasonably.
"""
From the (truncated) figure above, you will see some cases where the classes
are well separated and others were they are not.
NB:
It is typical that one single
feature will not allow you to completely separate more than thirty distinct
classes. You will need to be creative in coming up with additional metrics to
discriminate between all the classes.
TODO: Figure out how CNN fits into this task.
"""
"""
TODO: Understand this thoroughly.
Random Forest Classification
We choose a random forest model to classify the images. Random forests perform
well in many classification tasks and have robust default settings. We will
give a brief description of a random forest model so that you can understand
its two main free parameters: n_estimators and max_features.
A random forest model is an ensemble model of n_estimators number of decision
trees. During the training process, each decision tree is grown automatically
by making a series of conditional splits on the data. At each split in the
decision tree, a random sample of max_features number of features is chosen and
used to make a conditional decision on which of the two nodes that the data
will be grouped in. The best condition for the split is determined by the split
that maximizes the class purity of the nodes directly below. The tree continues
to grow by making additional splits until the leaves are pure or the leaves
have less than the minimum number of samples for a split (in sklearn default
for min_samples_split is two data points). The final majority class purity of
the terminal nodes of the decision tree are used for making predictions on what
class a new data point will belong. Then, the aggregate vote across the forest
determines the class prediction for new samples.
With our training data consisting of the feature vector X and the class label
vector y, we will now calculate some class metrics for the performance of our
model, by class and overall. First, we train the random forest on all the
available data and let it perform the 5-fold cross validation. Then we perform
the cross validation using the KFold method, which splits the data into train
and test sets, and a classification report. The classification report provides
a useful list of performance metrics for your classifier vs. the internal
metrics of the random forest module.
"""
# From Tutorial [19]
print "Training"
# n_estimators is the number of decision trees
# max_features also known as m_try is set to the default value of the square
# root of the number of features
clf = RF(n_estimators=100, n_jobs=3);
scores = cross_validation.cross_val_score(clf, X, y, cv=5, n_jobs=1);
print "Accuracy of all classes"
print np.mean(scores)
"""
Tutorial Results:
Training
Accuracy of all classes
0.446073202468
# 2/?/2015
I got *very* close:
Accuracy of all classes
0.466980629201
# 2/21/2015 6:50pm Also very close
Training
Accuracy of all classes
0.466064989056
# 2/22/2015
Training
Accuracy of all classes
0.465496298508
"""
# From Tutorial [14]:
# TODO: Understand completely:
# sklearn.cross_validation import StratifiedKFold as KFold, including results
kf = KFold(y, n_folds=5)
y_pred = y * 0
for train, test in kf:
    X_train, X_test, y_train, y_test = X[train, :], X[test, :], y[train], y[test]
    clf = RF(n_estimators=100, n_jobs=3)
    clf.fit(X_train, y_train)
    y_pred[test] = clf.predict(X_test)
print(classification_report(y, y_pred, target_names=namesClasses))
"""
The current model, while somewhat accurate overall, doesn't do well for all
classes, including the shrimp caridean, stomatopod, or hydromedusae tentacles
classes. For others it does quite well, getting many of the correct
classifications for trichodesmium_puff and copepod_oithona_eggs classes. The
metrics shown above for measuring model performance include precision, recall,
and f1-score.
The precision metric gives probability that a chosen class is correct,
(true positives / (true positive + false positives)),
while recall measures the ability of the model to correctly classify examples
of a given class,
(true positives / (false negatives + true positives)).
The F1 score is the harmonic mean of the precision and recall,
2 * (precision * recall) / (precision + recall).
The competition scoring uses a multiclass log-loss metric to compute your
overall score. In the next steps, we define the multiclass log-loss function
and compute your estimated score on the training dataset.
"""
# From tutorial [16]:
def multiclass_log_loss(y_true, y_pred, eps=1e-15):
    """Multi class version of Logarithmic Loss metric.
    https://www.kaggle.com/wiki/MultiClassLogLoss

    Parameters
    ----------
    y_true : array, shape = [n_samples]
        true class, integers in [0, n_classes - 1)
    y_pred : array, shape = [n_samples, n_classes]

    Returns
    -------
    loss : float
    """
    predictions = np.clip(y_pred, eps, 1 - eps)

    # normalize row sums to 1
    predictions /= predictions.sum(axis=1)[:, np.newaxis]

    actual = np.zeros(y_pred.shape)
    n_samples = actual.shape[0]
    actual[np.arange(n_samples), y_true.astype(int)] = 1
    vectsum = np.sum(actual * np.log(predictions))
    loss = -1.0 / n_samples * vectsum
    return loss
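A quick pure-Python sanity check of the same formula (toy probabilities that already sum to 1, so the clipping and row normalization above are no-ops): the loss is just the average negative log-probability assigned to each sample's true class.

```python
import math

y_true = [0, 1]                 # true classes for 2 samples
y_pred = [[0.9, 0.1],           # predicted class probabilities per sample
          [0.2, 0.8]]

loss = -sum(math.log(y_pred[i][y_true[i]])
            for i in range(len(y_true))) / len(y_true)
# -(ln 0.9 + ln 0.8) / 2 ≈ 0.1643
```

A confident wrong prediction (say 0.01 on the true class) contributes ln(100) ≈ 4.6 on its own, which is why this metric punishes overconfidence so hard.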
# From tutorial [17]:
# Get the probability predictions for computing the log-loss function
kf = KFold(y, n_folds=5)

# prediction probabilities: number of samples by number of classes
y_pred = np.zeros((len(y), len(set(y))))
for train, test in kf:
    X_train, X_test, y_train, y_test = X[train, :], X[test, :], y[train], y[test]
    clf = RF(n_estimators=100, n_jobs=3)
    clf.fit(X_train, y_train)
    y_pred[test] = clf.predict_proba(X_test)
# From tutorial [18]:
multiclass_log_loss(y, y_pred)
"""
Tutorial Results: 3.7390475458333374
My results - very close:
2/?/2015 3.7285067867109327
2/22/2015 3.7570415769375152
"""
""""
The multiclass log loss function is a classification error metric that heavily
penalizes you for being both confident (either predicting very high or very low
class probability) and wrong. Throughout the competition you will want to check
that your model improvements are driving this loss metric lower.
"""
"""
Where to Go From Here
Now that you've made a simple metric, created a model, and examined the model's
performance on the training data, the next step is to make improvements to your
model to make it more competitive. The random forest model we created does not
perform evenly across all classes and in some cases fails completely. By
creating new features and looking at some of your distributions for the problem
classes directly, you can identify features that specifically help separate
those classes from the others. You can add new metrics by considering other
image properties, stratified sampling, transformations, or other models for the
classification.
"""
# --- src/uwds3_core/estimation/dense_optical_flow_estimator.py (underworlds-robot/uwds3_core, MIT) ---
import cv2
class DenseOpticalFlowEstimator(object):
    def __init__(self):
        self.previous_frame = None

    def estimate(self, frame):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if self.previous_frame is None:
            # first call: nothing to compare against yet, so remember this
            # frame and wait for the next one (the original checked an
            # undefined `first_frame` and never initialized the state)
            self.previous_frame = gray
            return None
        flow = cv2.calcOpticalFlowFarneback(self.previous_frame, gray, None,
                                            0.5, 1, 20, 1, 5, 1.2, 0)
        self.previous_frame = gray
        return flow
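The same keep-the-previous-frame pattern, sketched without OpenCV (a hypothetical `FrameDiffer` class operating on plain lists instead of images, standing in for the Farneback flow computation):

```python
class FrameDiffer(object):
    """Return the elementwise change since the previous 'frame'."""

    def __init__(self):
        self.previous = None

    def estimate(self, frame):
        if self.previous is None:
            # first frame: remember it, nothing to diff against yet
            self.previous = frame
            return None
        diff = [b - a for a, b in zip(self.previous, frame)]
        self.previous = frame
        return diff


d = FrameDiffer()
first = d.estimate([1, 2, 3])    # None: no previous frame yet
second = d.estimate([2, 4, 6])
```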
# --- Code/hypers.py (taoqi98/KIM, MIT) ---
MAX_SENTENCE = 30
MAX_ALL = 50
MAX_SENT_LENGTH = MAX_SENTENCE
MAX_SENTS = MAX_ALL

max_entity_num = 10

num = 100
num1 = 200
num2 = 100
npratio = 4
# --- coalaip/plugin.py (bigchaindb/pycoalaip, Apache-2.0) ---
from abc import ABC, abstractmethod, abstractproperty
class AbstractPlugin(ABC):
"""Abstract interface for all persistence layer plugins.
Expects the following to be defined by the subclass:
- :attr:`type` (as a read-only property)
- :func:`generate_user`
- :func:`get_status`
- :func:`save`
- :func:`transfer`
"""
@abstractproperty
def type(self):
"""A string denoting the type of plugin (e.g. BigchainDB)."""
@abstractmethod
def generate_user(self, *args, **kwargs):
"""Generate a new user on the persistence layer.
Args:
*args: argument list, as necessary
**kwargs: keyword arguments, as necessary
Returns:
A representation of a user (e.g. a tuple with the user's
public and private keypair) on the persistence layer
Raises:
:exc:`~.PersistenceError`: If any other unhandled error
in the plugin occurred
"""
@abstractmethod
def is_same_user(self, user_a, user_b):
"""Compare the given user representations to see if they mean
the same user on the persistence layer.
Args:
user_a (any): User representation
user_b (any): User representation
Returns:
bool: Whether the given user representations are the same
user.
"""
@abstractmethod
def get_history(self, persist_id):
"""Get the ownership history of an entity on the persistence
layer.
Args:
persist_id (str): Id of the entity on the persistence layer
Returns:
list of dict: The ownership history of the entity, sorted
starting from the beginning of the entity's history
(i.e. creation). Each dict is of the form::
{
'user': A representation of a user as specified by the
persistence layer (may omit secret details, e.g. private keys),
'event_id': A reference id for the ownership event (e.g. transfer id)
}
Raises:
:exc:`~.EntityNotFoundError`: If the entity could not be
found on the persistence layer
:exc:`~.PersistenceError`: If any other unhandled error
in the plugin occurred
"""
@abstractmethod
def get_status(self, persist_id):
"""Get the status of an entity on the persistence layer.
Args:
persist_id (str): Id of the entity on the persistence layer
Returns:
Status of the entity, in any format.
Raises:
:exc:`~.EntityNotFoundError`: If the entity could not be
found on the persistence layer
:exc:`~.PersistenceError`: If any other unhandled error
in the plugin occurred
"""
@abstractmethod
def save(self, entity_data, *, user):
"""Create the entity on the persistence layer.
Args:
entity_data (dict): The entity's data
user (any, keyword): The user the entity should be assigned
to after creation. The user must be represented in the
same format as :meth:`generate_user`'s output.
Returns:
str: Id of the created entity on the persistence layer
Raises:
:exc:`~..EntityCreationError`: If the entity failed to be
created
:exc:`~.PersistenceError`: If any other unhandled error
in the plugin occurred
"""
@abstractmethod
def load(self, persist_id):
"""Load the entity from the persistence layer.
Args:
persist_id (str): Id of the entity on the persistence layer
Returns:
dict: The persisted data of the entity
Raises:
:exc:`~.EntityNotFoundError`: If the entity could not be
found on the persistence layer
:exc:`~.PersistenceError`: If any other unhandled error
in the plugin occurred
"""
@abstractmethod
def transfer(self, persist_id, transfer_payload, *, from_user, to_user):
"""Transfer the entity whose id matches :attr:`persist_id` on
the persistence layer from the current user to a new owner.
Args:
persist_id (str): Id of the entity on the persistence layer
transfer_payload (dict): The transfer's payload
from_user (any, keyword): The current owner, represented in the
same format as :meth:`generate_user`'s output
to_user (any, keyword): The new owner, represented in the same
format as :meth:`generate_user`'s output.
If the specified user format includes private
information (e.g. a private key) but is not required by
the persistence layer to identify a transfer recipient,
then this information may be omitted in this argument.
Returns:
str: Id of the transfer action on the persistence layer
Raises:
:exc:`~.EntityNotFoundError`: If the entity could not be
found on the persistence layer
            :exc:`~.EntityTransferError`: If the entity failed to be
transferred
:exc:`~.PersistenceError`: If any other unhandled error
in the plugin occurred
"""
| 34.72956 | 91 | 0.588555 | 652 | 5,522 | 4.935583 | 0.222393 | 0.104413 | 0.118086 | 0.110938 | 0.484462 | 0.446551 | 0.380671 | 0.380671 | 0.380671 | 0.380671 | 0 | 0 | 0.343716 | 5,522 | 158 | 92 | 34.949367 | 0.887969 | 0.732343 | 0 | 0.388889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.444444 | false | 0 | 0.055556 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
33d9116df66190b4bcd4b6335837886228590452 | 466 | py | Python | Lib/site-packages/py2exe/samples/pywin32/com_typelib/pre_gen/wscript/show_info.py | Aakash10399/simple-health-glucheck | 1f7c4ff7778a44f09b1c8cb0089fef51dc26cea2 | [
"bzip2-1.0.6"
] | 35 | 2015-08-15T14:32:38.000Z | 2021-12-09T16:21:26.000Z | Lib/site-packages/py2exe/samples/pywin32/com_typelib/pre_gen/wscript/show_info.py | Aakash10399/simple-health-glucheck | 1f7c4ff7778a44f09b1c8cb0089fef51dc26cea2 | [
"bzip2-1.0.6"
] | 4 | 2015-09-12T10:42:57.000Z | 2017-02-27T04:05:51.000Z | Lib/site-packages/py2exe/samples/pywin32/com_typelib/pre_gen/wscript/show_info.py | Aakash10399/simple-health-glucheck | 1f7c4ff7778a44f09b1c8cb0089fef51dc26cea2 | [
"bzip2-1.0.6"
] | 15 | 2015-07-10T23:58:07.000Z | 2022-01-23T22:16:33.000Z | # Print some simple information using the WScript.Network object.
import sys
from win32com.client.gencache import EnsureDispatch
ob = EnsureDispatch('WScript.Network')
# For the sake of ensuring the correct module is used...
mod = sys.modules[ob.__module__]
print "The module hosting the object is", mod
# Now use the object.
print "About this computer:"
print "Domain =", ob.UserDomain
print "Computer Name =", ob.ComputerName
print "User Name =", ob.UserName
| 25.888889 | 65 | 0.761803 | 66 | 466 | 5.318182 | 0.590909 | 0.079772 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005025 | 0.145923 | 466 | 17 | 66 | 27.411765 | 0.876884 | 0.296137 | 0 | 0 | 0 | 0 | 0.313665 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.222222 | null | null | 0.555556 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
33f87e0f533a0c640931bd3fd8c3d5fa7efb74b8 | 1,291 | py | Python | devconf/ast/mixins/expression.py | everclear72216/ucapi | 7f5afbee6b3b772086d33c2ee37e85e65af61697 | [
"MIT"
] | null | null | null | devconf/ast/mixins/expression.py | everclear72216/ucapi | 7f5afbee6b3b772086d33c2ee37e85e65af61697 | [
"MIT"
] | 5 | 2019-03-04T16:17:30.000Z | 2019-05-04T08:34:19.000Z | devconf/ast/mixins/expression.py | everclear72216/ucapi | 7f5afbee6b3b772086d33c2ee37e85e65af61697 | [
"MIT"
] | null | null | null | import ast.value
import ast.qualifier
import ast.mixins.node
import ast.mixins.typed
import ast.mixins.qualified
class LValueExpression(ast.mixins.node.Node, ast.mixins.typed.Typed, ast.mixins.qualified.Qualified):
def __init__(self):
super().__init__()
self.__value: ast.value.Value or None = None
def get_value(self) -> ast.value.Value:
assert isinstance(self.__value, ast.value.Value) or self.has_default()
if self.__value is None:
assert self.has_default()
value = self.get_default()
assert isinstance(value, ast.value.Value)
return value
else:
assert isinstance(self.__value, ast.value.Value)
return self.__value
def set_value(self, value: ast.value.Value) -> None:
assert isinstance(value, ast.value.Value)
if hasattr(super(), 'set_value'):
super().set_value(value)
self.__value = value
def has_default(self) -> bool:
return False
def get_default(self) -> ast.value.Value or None:
return None
def evaluate(self):
pass
class RValueExpression(LValueExpression):
def __init__(self):
super().__init__()
self.add_qualifier(ast.qualifier.ConstQualifier())
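The value/default fallback in `get_value` can be exercised without the `ast` package; the `Expression` class below is a standalone stand-in for the mixin hierarchy, not the real code:

```python
class Expression:
    """Standalone sketch of LValueExpression's value/default fallback."""

    def __init__(self, default=None):
        self._value = None
        self._default = default

    def has_default(self):
        return self._default is not None

    def get_default(self):
        return self._default

    def set_value(self, value):
        self._value = value

    def get_value(self):
        if self._value is None:
            # No explicit value has been set yet: fall back to the default.
            assert self.has_default()
            return self.get_default()
        return self._value
```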
| 23.907407 | 101 | 0.642912 | 158 | 1,291 | 5.012658 | 0.21519 | 0.126263 | 0.131313 | 0.136364 | 0.342172 | 0.270202 | 0.09596 | 0 | 0 | 0 | 0 | 0 | 0.252517 | 1,291 | 53 | 102 | 24.358491 | 0.820725 | 0 | 0 | 0.176471 | 0 | 0 | 0.006971 | 0 | 0 | 0 | 0 | 0 | 0.147059 | 1 | 0.205882 | false | 0.029412 | 0.147059 | 0.058824 | 0.529412 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
33feaaef9c20723000c009c977b27fc9c05c9b4d | 4,478 | py | Python | projects/imsend/imsend.py | ZJM6658/PythonProject | 8ca51a1551b20ccd696358941727188838e0e236 | [
"MIT"
] | 1 | 2021-06-09T02:06:17.000Z | 2021-06-09T02:06:17.000Z | projects/imsend/imsend.py | ZJM6658/PythonProject | 8ca51a1551b20ccd696358941727188838e0e236 | [
"MIT"
] | null | null | null | projects/imsend/imsend.py | ZJM6658/PythonProject | 8ca51a1551b20ccd696358941727188838e0e236 | [
"MIT"
] | null | null | null | #!usr/bin/python
# -*- coding: utf-8 -*-
' an im_send project '
# __author__ 'ZHU JIAMIN'
import sys
import mysql.connector  # not supported on Python 3
import requests
import json
from os import path, access, R_OK  # W_OK for write permission.
# Python 2 defaults to ASCII; switch the default encoding to UTF-8
reload(sys)
sys.setdefaultencoding('utf8')
# Workflow
# 1. Check that the supplied arguments meet the program's needs (only mobile is required; everything else has defaults)
# 2. Look for an accessToken file in the same directory; read the token from it if present, otherwise fetch one from the Easemob server and write it to that file
# 3. With the access token available, query the database using the supplied parameters (mobile, limit, offset) to get the set of senders and the receiver's user info
# 4. Send the messages in a loop
# API base URL
BASE_URL = 'https://a1.easemob.com/xxxx/xxxxxx'
# Used to fetch and store the Easemob access token
CLIENT_ID = 'xxxxxx'
CLIENT_SECRET = 'xxxxxxx'
ACCESS_TOKEN = ''  # read from the cache file
TOKEN_PATH = './accessToken.txt'
# Holds the parsed command-line parameters
INPUT_PARAMS = {'offset': 0, 'limit': 1, 'isGroup': 0, 'text': 'test message'}
# TODO
# Support sending a message to every group a user has joined
# Support having every member of a group message the group at the same time
def main():
args = sys.argv
if len(args) == 1:
        print('Required parameters:\
            \n-mobile  receiver phone number (required)\
            \n-text    message body (default: test message)\
            \n-offset  starting cursor (default: 0)\
            \n-limit   number of senders to use (default: 1)')
        # \n-isGroup group-chat flag (default 0; set to 1 if needed)
return
global INPUT_PARAMS
argsLen = len(args)
for i in range(argsLen):
arg = args[i]
        # Skip the first argument (the script itself)
if i == 0: continue
if i%2 == 1:
            # Strip the '-' from the key argument
arg = arg.replace('-', '')
if i < argsLen - 1:
INPUT_PARAMS[arg] = args[i+1]
pass
    # Make sure the required mobile parameter was supplied
    if not('mobile' in INPUT_PARAMS) or len(INPUT_PARAMS['mobile']) == 0:
        print('Please supply the required -mobile parameter')
return
# print INPUT_PARAMS
limit = int(INPUT_PARAMS['limit'])
offset = int(INPUT_PARAMS['offset'])
if limit < 0 or offset < 0 or limit > 2000 or offset > 2000 or (offset + limit) > 2000:
        print('limit and offset must be >= 0 and < 2000, and limit + offset must be < 2000')
return
checkAccessToken()
pass
# Check for an access token; fetch one if missing
def checkAccessToken():
global ACCESS_TOKEN
if path.exists(TOKEN_PATH) and path.isfile(TOKEN_PATH) and access(TOKEN_PATH, R_OK):
# print("token文件存在且可读")
f = open(TOKEN_PATH, 'r')
ACCESS_TOKEN = f.read()
f.close()
if not(ACCESS_TOKEN):
getIMAccessToken()
else:
# print("token文件不存在或不可读")
getIMAccessToken()
prepareSend()
pass
# Prepare to send: gather the data needed for sending messages
def prepareSend():
userSQL = 'select * from y_user where mobile_phone=%s && isdel=0' %(INPUT_PARAMS['mobile'], )
result = getDataFromDataBase(userSQL)
    if len(result) == 0:
        print "No user found with phone number %s; please check the number" % (INPUT_PARAMS['mobile'])
        return
accepterInfo = result[0]
sendersSQL = 'select * from y_user where mobile_phone like "1300000%%" && isdel=0 limit %s offset %s' %(INPUT_PARAMS['limit'], INPUT_PARAMS['offset'])
result = getDataFromDataBase(sendersSQL)
    if len(result) == 0:
        print 'No sender list found'
        return
    # The imid field is at index 14; with no ORM in use, each row comes back as a tuple
toImId = accepterInfo[14]
for user in result:
sendMessage(user, toImId)
pass
pass
# Send one message
def sendMessage(fromUser, toImId):
fromImId = fromUser[14]
if len(ACCESS_TOKEN) == 0: return
sendBody = {
"target_type": "users",
"target": [
toImId
],
"msg": {
"type": "txt",
"msg": INPUT_PARAMS['text']
},
"from": fromImId,
"ext": {
"attr1": "v1"
}
}
url = BASE_URL + '/messages'
headers = {
'Content-Type': 'application/json;charset=utf-8',
'Authorization': ACCESS_TOKEN
}
r = requests.post(url, headers = headers, data = json.dumps(sendBody))
# print fromUser
    logInfo = 'username: %s, phone: %s, ' % (fromUser[7], fromUser[8])
    if r.status_code == 200:
        print logInfo + 'send succeeded'
    else:
        print logInfo + 'send failed'
# Run the given SQL statement against the database
def getDataFromDataBase(execute):
conn = mysql.connector.connect(host = 'mysql.xxxx.net',user = 'root',
password = 'xxxx',database = 'xxxx',port = 3306,
charset = 'utf8')
cursor = conn.cursor()
cursor.execute(execute)
result = cursor.fetchall()
cursor.close()
conn.close()
return result
# Fetch the Easemob access token used by subsequent calls
def getIMAccessToken():
global ACCESS_TOKEN
url = BASE_URL+'/token'
headers = {'Content-Type': 'application/json;charset=utf-8'}
payload = {
'grant_type': 'client_credentials',
'client_id': CLIENT_ID,
'client_secret': CLIENT_SECRET
}
r = requests.post(url,headers = headers,data = json.dumps(payload))
if r.status_code == 200:
data = json.loads(r.text)
        # print(data)
        print('Fetched access_token successfully')
        # r.text is unicode here, so the decoded dict must be indexed with a unicode-encoded key
        ukey = 'access_token'.encode('unicode_escape')
        ACCESS_TOKEN = 'Bearer ' + data[ukey]
        # Write the token to the cache file; 'w' mode overwrites
        fp = open(TOKEN_PATH, 'w')
fp.write(ACCESS_TOKEN)
fp.close()
else:
        print('Failed to fetch access_token')
pass
if __name__ == '__main__':
main()
| 23.202073 | 151 | 0.684457 | 575 | 4,478 | 5.210435 | 0.398261 | 0.04773 | 0.017023 | 0.010013 | 0.100801 | 0.078772 | 0.078772 | 0.058077 | 0.028705 | 0 | 0 | 0.02227 | 0.167709 | 4,478 | 192 | 152 | 23.322917 | 0.781594 | 0.163243 | 0 | 0.165414 | 0 | 0 | 0.199892 | 0.022899 | 0 | 0 | 0 | 0.005208 | 0 | 0 | null | null | 0.052632 | 0.037594 | null | null | 0.067669 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
1d3ea69b3e4a276b5c416f98c802081047762711 | 2,826 | py | Python | src/api2db/install/make_lab.py | TristenHarr/api2db | 8c8b14280441f5153ff146c23359a0eb91022ddb | [
"MIT"
] | 45 | 2021-05-05T01:34:20.000Z | 2021-11-02T08:41:34.000Z | src/api2db/install/make_lab.py | TristenHarr/api2db | 8c8b14280441f5153ff146c23359a0eb91022ddb | [
"MIT"
] | 1 | 2021-06-02T11:43:33.000Z | 2021-06-02T20:32:29.000Z | src/api2db/install/make_lab.py | TristenHarr/api2db | 8c8b14280441f5153ff146c23359a0eb91022ddb | [
"MIT"
] | 3 | 2021-05-08T21:49:24.000Z | 2021-05-13T23:14:09.000Z | import os
_lab_components = """from api2db.ingest import *
CACHE=True # Caches API data so that only a single API call is made if True
def import_target():
return None
def pre_process():
return None
def data_features():
return None
def post_process():
return None
if __name__ == "__main__":
api_form = ApiForm(name="lab",
pre_process=pre_process(),
data_features=data_features(),
post_process=post_process()
)
api_form.experiment(CACHE, import_target)
"""
def mlab():
"""
This shell command is used for creation of a lab. Labs offer an easier way to design an ApiForm.
Given a project directory
::
project_dir-----/
|
apis-----/
| |- __init__.py
| |- FooCollector.py
| |- BarCollector.py
|
AUTH-----/
| |- bigquery_auth_template.json
| |- omnisci_auth_template.json
| |- sql_auth_template.json
|
CACHE/
|
STORE/
|
helpers.py
|
main.py
**Shell Command:** ``path/to/project_dir> mlab``
::
project_dir-----/
|
apis-------/
| |- __init__.py
| |- FooCollector.py
| |- BarCollector.py
|
AUTH-------/
| |- bigquery_auth_template.json
| |- omnisci_auth_template.json
| |- sql_auth_template.json
|
CACHE/
|
STORE/
|
laboratory-/
| |- lab.py EDIT THIS FILE!
|
helpers.py
|
main.py
Returns:
None
"""
lab_dir_path = os.path.join(os.getcwd(), "laboratory")
if not os.path.isdir(lab_dir_path):
os.makedirs(lab_dir_path)
        with open(os.path.join(lab_dir_path, "lab.py"), "w") as f:
            f.write(_lab_components)
print("Lab has been created. Edit the file found in laboratory/lab.py")
else:
print("Lab already exists!")
| 28.836735 | 100 | 0.381812 | 227 | 2,826 | 4.506608 | 0.409692 | 0.070381 | 0.093842 | 0.035191 | 0.250244 | 0.250244 | 0.250244 | 0.250244 | 0.250244 | 0.250244 | 0 | 0.000756 | 0.531847 | 2,826 | 97 | 101 | 29.134021 | 0.772487 | 0.572895 | 0 | 0.137931 | 0 | 0 | 0.644711 | 0.108782 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034483 | false | 0 | 0.137931 | 0 | 0.310345 | 0.068966 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
1d4187041cb8f8754084b1a0b8f675142f96aee6 | 2,348 | py | Python | docs/examples/use_cases/video_superres/common/loss_scaler.py | cyyever/DALI | e2b2d5a061da605e3e9e681017a7b2d53fe41a62 | [
"ECL-2.0",
"Apache-2.0"
] | 3,967 | 2018-06-19T04:39:09.000Z | 2022-03-31T10:57:53.000Z | docs/examples/use_cases/video_superres/common/loss_scaler.py | cyyever/DALI | e2b2d5a061da605e3e9e681017a7b2d53fe41a62 | [
"ECL-2.0",
"Apache-2.0"
] | 3,494 | 2018-06-21T07:09:58.000Z | 2022-03-31T19:44:51.000Z | docs/examples/use_cases/video_superres/common/loss_scaler.py | cyyever/DALI | e2b2d5a061da605e3e9e681017a7b2d53fe41a62 | [
"ECL-2.0",
"Apache-2.0"
] | 531 | 2018-06-19T23:53:10.000Z | 2022-03-30T08:35:59.000Z | import torch
class LossScaler:
def __init__(self, scale=1):
self.cur_scale = scale
# `params` is a list / generator of torch.Variable
def has_overflow(self, params):
return False
# `x` is a torch.Tensor
    @staticmethod
    def _has_inf_or_nan(x):
        return False
# `overflow` is boolean indicating whether we overflowed in gradient
def update_scale(self, overflow):
pass
@property
def loss_scale(self):
return self.cur_scale
def scale_gradient(self, module, grad_in, grad_out):
return tuple(self.loss_scale * g for g in grad_in)
def backward(self, loss):
scaled_loss = loss*self.loss_scale
scaled_loss.backward()
class DynamicLossScaler:
def __init__(self,
init_scale=2**32,
scale_factor=2.,
scale_window=1000):
self.cur_scale = init_scale
self.cur_iter = 0
self.last_overflow_iter = -1
self.scale_factor = scale_factor
self.scale_window = scale_window
# `params` is a list / generator of torch.Variable
def has_overflow(self, params):
# return False
for p in params:
if p.grad is not None and DynamicLossScaler._has_inf_or_nan(p.grad.data):
return True
return False
# `x` is a torch.Tensor
    @staticmethod
    def _has_inf_or_nan(x):
inf_count = torch.sum(x.abs() == float('inf'))
if inf_count > 0:
return True
nan_count = torch.sum(x != x)
return nan_count > 0
# `overflow` is boolean indicating whether we overflowed in gradient
def update_scale(self, overflow):
if overflow:
#self.cur_scale /= self.scale_factor
self.cur_scale = max(self.cur_scale/self.scale_factor, 1)
self.last_overflow_iter = self.cur_iter
else:
if (self.cur_iter - self.last_overflow_iter) % self.scale_window == 0:
self.cur_scale *= self.scale_factor
# self.cur_scale = 1
self.cur_iter += 1
@property
def loss_scale(self):
return self.cur_scale
def scale_gradient(self, module, grad_in, grad_out):
return tuple(self.loss_scale * g for g in grad_in)
def backward(self, loss):
scaled_loss = loss*self.loss_scale
scaled_loss.backward()
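The scale-adjustment policy in `DynamicLossScaler.update_scale` can be exercised without torch; below is a framework-free sketch of just that logic (halve on overflow, double after a full overflow-free window):

```python
class ScaleTracker:
    """Framework-free sketch of the update_scale policy above (no torch)."""

    def __init__(self, init_scale=2.0 ** 16, scale_factor=2.0, scale_window=1000):
        self.cur_scale = init_scale
        self.cur_iter = 0
        self.last_overflow_iter = -1
        self.scale_factor = scale_factor
        self.scale_window = scale_window

    def update_scale(self, overflow):
        if overflow:
            # Overflow: back the scale off (never below 1) and note the iteration.
            self.cur_scale = max(self.cur_scale / self.scale_factor, 1)
            self.last_overflow_iter = self.cur_iter
        elif (self.cur_iter - self.last_overflow_iter) % self.scale_window == 0:
            # A full overflow-free window has passed: probe a larger scale again.
            self.cur_scale *= self.scale_factor
        self.cur_iter += 1
```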
| 28.634146 | 85 | 0.617121 | 319 | 2,348 | 4.30094 | 0.210031 | 0.066327 | 0.078717 | 0.024052 | 0.653061 | 0.618076 | 0.598397 | 0.598397 | 0.598397 | 0.541545 | 0 | 0.010328 | 0.298978 | 2,348 | 81 | 86 | 28.987654 | 0.823208 | 0.151618 | 0 | 0.490909 | 0 | 0 | 0.001514 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.254545 | false | 0.018182 | 0.018182 | 0.109091 | 0.490909 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
1d41b1db26751bf84729eea34f1bc555d8b62d08 | 591 | py | Python | backend/tests/access/test_access_user_remove.py | fjacob21/mididecweb | b65f28eb6fdeafa265796b6190a4264a5eac54ce | [
"MIT"
] | null | null | null | backend/tests/access/test_access_user_remove.py | fjacob21/mididecweb | b65f28eb6fdeafa265796b6190a4264a5eac54ce | [
"MIT"
] | 88 | 2016-11-12T14:54:38.000Z | 2018-08-02T00:25:07.000Z | backend/tests/access/test_access_user_remove.py | mididecouverte/mididecweb | b65f28eb6fdeafa265796b6190a4264a5eac54ce | [
"MIT"
] | null | null | null | from src.access import UserRemoveAccess
from generate_access_data import generate_access_data
def test_remove_user_access():
sessions = generate_access_data()
user = sessions['user'].users.get('user')
useraccess = UserRemoveAccess(sessions['user'], user)
manageraccess = UserRemoveAccess(sessions['manager'], user)
superaccess = UserRemoveAccess(sessions['super'], user)
noneaccess = UserRemoveAccess(sessions['none'], user)
assert useraccess.granted()
assert not manageraccess.granted()
assert superaccess.granted()
assert not noneaccess.granted()
| 36.9375 | 63 | 0.752961 | 62 | 591 | 7.032258 | 0.387097 | 0.220183 | 0.123853 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142132 | 591 | 15 | 64 | 39.4 | 0.859961 | 0 | 0 | 0 | 1 | 0 | 0.047377 | 0 | 0 | 0 | 0 | 0 | 0.307692 | 1 | 0.076923 | false | 0 | 0.153846 | 0 | 0.230769 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
1d479001ca8c194710be9daccfb27ed5e279b01d | 532 | py | Python | b0012_integer_to_roman.py | savarin/algorithms | 4d1f8f2361de12a02f376883f648697562d177ae | [
"MIT"
] | 1 | 2020-06-16T23:22:54.000Z | 2020-06-16T23:22:54.000Z | b0012_integer_to_roman.py | savarin/algorithms | 4d1f8f2361de12a02f376883f648697562d177ae | [
"MIT"
] | null | null | null | b0012_integer_to_roman.py | savarin/algorithms | 4d1f8f2361de12a02f376883f648697562d177ae | [
"MIT"
] | null | null | null |
lookup = [
(10, "x"),
(9, "ix"),
(5, "v"),
(4, "iv"),
(1, "i"),
]
def to_roman(integer):
    """Convert a positive integer to its lowercase Roman numeral string."""
for decimal, roman in lookup:
if decimal <= integer:
return roman + to_roman(integer - decimal)
return ""
def main():
print(to_roman(1))
print(to_roman(2))
print(to_roman(4))
print(to_roman(5))
print(to_roman(6))
print(to_roman(9))
print(to_roman(10))
print(to_roman(11))
print(to_roman(36))
if __name__ == "__main__":
main()
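The same greedy recursion handles arbitrary values once the table covers the full numeral alphabet; `FULL_LOOKUP` below is an extension for illustration, not part of the original file:

```python
FULL_LOOKUP = [
    (1000, "m"), (900, "cm"), (500, "d"), (400, "cd"),
    (100, "c"), (90, "xc"), (50, "l"), (40, "xl"),
    (10, "x"), (9, "ix"), (5, "v"), (4, "iv"), (1, "i"),
]


def to_roman_full(integer):
    """Greedy conversion using the complete Roman numeral table."""
    for decimal, roman in FULL_LOOKUP:
        if decimal <= integer:
            # Emit the largest symbol that fits, then recurse on the rest.
            return roman + to_roman_full(integer - decimal)
    return ""
```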
| 15.2 | 54 | 0.513158 | 72 | 532 | 3.527778 | 0.375 | 0.30315 | 0.425197 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.048257 | 0.298872 | 532 | 34 | 55 | 15.647059 | 0.632708 | 0 | 0 | 0 | 0 | 0 | 0.028902 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0 | 0 | 0.166667 | 0.375 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
1d4e1390d738eb0ddc1e3c14bffd7c96ac769e6a | 1,011 | py | Python | P20-Stack Abstract Data Type/Stack - Base Converter.py | necrospiritus/Python-Working-Examples | 075d410673e470fc7c4ffc262e92109a3032132f | [
"MIT"
] | null | null | null | P20-Stack Abstract Data Type/Stack - Base Converter.py | necrospiritus/Python-Working-Examples | 075d410673e470fc7c4ffc262e92109a3032132f | [
"MIT"
] | null | null | null | P20-Stack Abstract Data Type/Stack - Base Converter.py | necrospiritus/Python-Working-Examples | 075d410673e470fc7c4ffc262e92109a3032132f | [
"MIT"
] | null | null | null | class Stack:
def __init__(self):
self.items = []
def is_empty(self): # test to see whether the stack is empty.
return self.items == []
def push(self, item): # adds a new item to the top of the stack.
self.items.append(item)
def pop(self): # removes the top item from the stack.
return self.items.pop()
def peek(self): # return the top item from the stack.
return self.items[len(self.items) - 1]
def size(self): # returns the number of items on the stack.
return len(self.items)
def base_converter(dec_number, base):
digits = "0123456789ABCDEF"
rem_stack = Stack()
while dec_number > 0:
rem = dec_number % base
rem_stack.push(rem)
dec_number = dec_number // base
new_string = ""
while not rem_stack.is_empty():
new_string = new_string + digits[rem_stack.pop()]
return new_string
print(base_converter(196, 2))
print(base_converter(25, 8))
print(base_converter(26, 16)) | 25.923077 | 69 | 0.635015 | 148 | 1,011 | 4.182432 | 0.331081 | 0.101777 | 0.058158 | 0.045234 | 0.119548 | 0.119548 | 0.119548 | 0.119548 | 0.119548 | 0 | 0 | 0.030749 | 0.260138 | 1,011 | 39 | 70 | 25.923077 | 0.796791 | 0.192878 | 0 | 0 | 0 | 0 | 0.019729 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.259259 | false | 0 | 0 | 0.148148 | 0.481481 | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
1d6e65b0e5d6c4ee6ad11a44b07d0b7c7fe3d49f | 360 | py | Python | SimpleSign.py | wanzhiguo/mininero | 7dd71b02a4613478b59b2670ccf7c74a22cc2ffd | [
"BSD-3-Clause"
] | 182 | 2016-02-05T18:33:09.000Z | 2022-03-23T12:31:54.000Z | SimpleSign.py | wanzhiguo/mininero | 7dd71b02a4613478b59b2670ccf7c74a22cc2ffd | [
"BSD-3-Clause"
] | 81 | 2016-09-04T14:00:24.000Z | 2022-03-28T17:22:52.000Z | SimpleSign.py | wanzhiguo/mininero | 7dd71b02a4613478b59b2670ccf7c74a22cc2ffd | [
"BSD-3-Clause"
] | 63 | 2016-02-05T19:38:06.000Z | 2022-03-07T06:07:46.000Z | import MiniNero
import ed25519
import binascii
import PaperWallet
import cherrypy
import os
import time
import bitmonerod
import SimpleXMR2
import SimpleServer
message = "send0d000114545737471em2WCg9QKxRxbo6S3xKF2K4UDvdu6hMc"
message = "send0d0114545747771em2WCg9QKxRxbo6S3xKF2K4UDvdu6hMc"
sec = raw_input("sec?")
print(SimpleServer.Signature(message, sec))
| 21.176471 | 65 | 0.858333 | 33 | 360 | 9.333333 | 0.575758 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.152905 | 0.091667 | 360 | 16 | 66 | 22.5 | 0.788991 | 0 | 0 | 0 | 0 | 0 | 0.300836 | 0.289694 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.714286 | 0 | 0.714286 | 0.071429 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
1d8a630a7af7286dcfe25ff650a3b9fddf8961f0 | 1,294 | py | Python | tools/python/boutiques/tests/test_bids.py | shots47s/boutiques | 831f937a6b1491af63a800786967e4d9bca1e262 | [
"MIT"
] | 54 | 2016-07-21T19:14:13.000Z | 2021-11-16T11:49:15.000Z | tools/python/boutiques/tests/test_bids.py | shots47s/boutiques | 831f937a6b1491af63a800786967e4d9bca1e262 | [
"MIT"
] | 539 | 2016-07-20T20:09:38.000Z | 2022-03-17T00:45:26.000Z | tools/python/boutiques/tests/test_bids.py | shots47s/boutiques | 831f937a6b1491af63a800786967e4d9bca1e262 | [
"MIT"
] | 52 | 2016-07-22T18:09:59.000Z | 2021-02-03T15:22:55.000Z | #!/usr/bin/env python
from unittest import TestCase
from boutiques.bosh import bosh
from boutiques.bids import validate_bids
from boutiques import __file__ as bofile
from jsonschema.exceptions import ValidationError
from boutiques.validator import DescriptorValidationError
import os.path as op
import simplejson as json
import os
class TestBIDS(TestCase):
def test_bids_good(self):
fil = op.join(op.split(bofile)[0], 'schema/examples/bids_good.json')
self.assertFalse(bosh(["validate", fil, '-b']))
def test_bids_bad1(self):
fil = op.join(op.split(bofile)[0], 'schema/examples/bids_bad1.json')
self.assertRaises(DescriptorValidationError, bosh, ["validate",
fil, '-b'])
def test_bids_bad2(self):
fil = op.join(op.split(bofile)[0], 'schema/examples/bids_bad2.json')
self.assertRaises(DescriptorValidationError, bosh, ["validate",
fil, '-b'])
def test_bids_invalid(self):
fil = op.join(op.split(bofile)[0], 'schema/examples/bids_bad2.json')
descriptor = json.load(open(fil))
self.assertRaises(DescriptorValidationError, validate_bids,
descriptor, False)
| 36.971429 | 76 | 0.641422 | 149 | 1,294 | 5.449664 | 0.315436 | 0.064039 | 0.054187 | 0.064039 | 0.447044 | 0.447044 | 0.447044 | 0.413793 | 0.413793 | 0.413793 | 0 | 0.009288 | 0.251159 | 1,294 | 34 | 77 | 38.058824 | 0.828689 | 0.015456 | 0 | 0.230769 | 0 | 0 | 0.117832 | 0.094266 | 0 | 0 | 0 | 0 | 0.153846 | 1 | 0.153846 | false | 0 | 0.346154 | 0 | 0.538462 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
1d950e325ef85dc98b69ae74e351c6705f81fa42 | 20,121 | py | Python | web/openerp/addons/base/tests/test_ir_actions.py | diogocs1/comps | 63df07f6cf21c41e4527c06e2d0499f23f4322e7 | [
"Apache-2.0"
] | 1 | 2019-12-29T11:53:56.000Z | 2019-12-29T11:53:56.000Z | odoo/openerp/addons/base/tests/test_ir_actions.py | tuanquanghpvn/odoo8-tutorial | 52d25f1ca5f233c431cb9d3b24b79c3b4fb5127e | [
"MIT"
] | null | null | null | odoo/openerp/addons/base/tests/test_ir_actions.py | tuanquanghpvn/odoo8-tutorial | 52d25f1ca5f233c431cb9d3b24b79c3b4fb5127e | [
"MIT"
] | 3 | 2020-10-08T14:42:10.000Z | 2022-01-28T14:12:29.000Z | import unittest2
from openerp.osv.orm import except_orm
import openerp.tests.common as common
from openerp.tools import mute_logger
class TestServerActionsBase(common.TransactionCase):
def setUp(self):
super(TestServerActionsBase, self).setUp()
cr, uid = self.cr, self.uid
# Models
self.ir_actions_server = self.registry('ir.actions.server')
self.ir_actions_client = self.registry('ir.actions.client')
self.ir_values = self.registry('ir.values')
self.ir_model = self.registry('ir.model')
self.ir_model_fields = self.registry('ir.model.fields')
self.res_partner = self.registry('res.partner')
self.res_country = self.registry('res.country')
# Data on which we will run the server action
self.test_country_id = self.res_country.create(cr, uid, {
'name': 'TestingCountry',
'code': 'TY',
'address_format': 'SuperFormat',
})
self.test_country = self.res_country.browse(cr, uid, self.test_country_id)
self.test_partner_id = self.res_partner.create(cr, uid, {
'name': 'TestingPartner',
'city': 'OrigCity',
'country_id': self.test_country_id,
})
self.test_partner = self.res_partner.browse(cr, uid, self.test_partner_id)
self.context = {
'active_id': self.test_partner_id,
'active_model': 'res.partner',
}
# Model data
self.res_partner_model_id = self.ir_model.search(cr, uid, [('model', '=', 'res.partner')])[0]
self.res_partner_name_field_id = self.ir_model_fields.search(cr, uid, [('model', '=', 'res.partner'), ('name', '=', 'name')])[0]
self.res_partner_city_field_id = self.ir_model_fields.search(cr, uid, [('model', '=', 'res.partner'), ('name', '=', 'city')])[0]
self.res_partner_country_field_id = self.ir_model_fields.search(cr, uid, [('model', '=', 'res.partner'), ('name', '=', 'country_id')])[0]
self.res_partner_parent_field_id = self.ir_model_fields.search(cr, uid, [('model', '=', 'res.partner'), ('name', '=', 'parent_id')])[0]
self.res_country_model_id = self.ir_model.search(cr, uid, [('model', '=', 'res.country')])[0]
self.res_country_name_field_id = self.ir_model_fields.search(cr, uid, [('model', '=', 'res.country'), ('name', '=', 'name')])[0]
self.res_country_code_field_id = self.ir_model_fields.search(cr, uid, [('model', '=', 'res.country'), ('name', '=', 'code')])[0]
# create server action to
self.act_id = self.ir_actions_server.create(cr, uid, {
'name': 'TestAction',
'condition': 'True',
'model_id': self.res_partner_model_id,
'state': 'code',
'code': 'obj.write({"comment": "MyComment"})',
})
class TestServerActions(TestServerActionsBase):
def test_00_action(self):
cr, uid = self.cr, self.uid
# Do: eval 'True' condition
self.ir_actions_server.run(cr, uid, [self.act_id], self.context)
self.test_partner.refresh()
self.assertEqual(self.test_partner.comment, 'MyComment', 'ir_actions_server: invalid condition check')
self.test_partner.write({'comment': False})
# Do: eval False condition, that should be considered as True (void = True)
self.ir_actions_server.write(cr, uid, [self.act_id], {'condition': False})
self.ir_actions_server.run(cr, uid, [self.act_id], self.context)
self.test_partner.refresh()
self.assertEqual(self.test_partner.comment, 'MyComment', 'ir_actions_server: invalid condition check')
# Do: create contextual action
self.ir_actions_server.create_action(cr, uid, [self.act_id])
# Test: ir_values created
ir_values_ids = self.ir_values.search(cr, uid, [('name', '=', 'Run TestAction')])
self.assertEqual(len(ir_values_ids), 1, 'ir_actions_server: create_action should have created an entry in ir_values')
ir_value = self.ir_values.browse(cr, uid, ir_values_ids[0])
self.assertEqual(ir_value.value, 'ir.actions.server,%s' % self.act_id, 'ir_actions_server: created ir_values should reference the server action')
self.assertEqual(ir_value.model, 'res.partner', 'ir_actions_server: created ir_values should be linked to the action base model')
# Do: remove contextual action
self.ir_actions_server.unlink_action(cr, uid, [self.act_id])
# Test: ir_values removed
ir_values_ids = self.ir_values.search(cr, uid, [('name', '=', 'Run TestAction')])
self.assertEqual(len(ir_values_ids), 0, 'ir_actions_server: unlink_action should remove the ir_values record')
def test_10_code(self):
cr, uid = self.cr, self.uid
self.ir_actions_server.write(cr, uid, self.act_id, {
'state': 'code',
'code': """partner_name = obj.name + '_code'
self.pool["res.partner"].create(cr, uid, {"name": partner_name}, context=context)
workflow"""
})
run_res = self.ir_actions_server.run(cr, uid, [self.act_id], context=self.context)
self.assertFalse(run_res, 'ir_actions_server: code server action correctly finished should return False')
pids = self.res_partner.search(cr, uid, [('name', 'ilike', 'TestingPartner_code')])
self.assertEqual(len(pids), 1, 'ir_actions_server: 1 new partner should have been created')
def test_20_trigger(self):
cr, uid = self.cr, self.uid
# Data: code server action (at this point code-based actions should work)
act_id2 = self.ir_actions_server.create(cr, uid, {
'name': 'TestAction2',
'type': 'ir.actions.server',
'condition': 'True',
'model_id': self.res_partner_model_id,
'state': 'code',
'code': 'obj.write({"comment": "MyComment"})',
})
act_id3 = self.ir_actions_server.create(cr, uid, {
'name': 'TestAction3',
'type': 'ir.actions.server',
'condition': 'True',
'model_id': self.res_country_model_id,
'state': 'code',
'code': 'obj.write({"code": "ZZ"})',
})
# Data: create workflows
partner_wf_id = self.registry('workflow').create(cr, uid, {
'name': 'TestWorkflow',
'osv': 'res.partner',
'on_create': True,
})
partner_act1_id = self.registry('workflow.activity').create(cr, uid, {
'name': 'PartnerStart',
'wkf_id': partner_wf_id,
'flow_start': True
})
partner_act2_id = self.registry('workflow.activity').create(cr, uid, {
'name': 'PartnerTwo',
'wkf_id': partner_wf_id,
'kind': 'function',
'action': 'True',
'action_id': act_id2,
})
partner_trs1_id = self.registry('workflow.transition').create(cr, uid, {
'signal': 'partner_trans',
'act_from': partner_act1_id,
'act_to': partner_act2_id
})
country_wf_id = self.registry('workflow').create(cr, uid, {
'name': 'TestWorkflow',
'osv': 'res.country',
'on_create': True,
})
country_act1_id = self.registry('workflow.activity').create(cr, uid, {
'name': 'CountryStart',
'wkf_id': country_wf_id,
'flow_start': True
})
country_act2_id = self.registry('workflow.activity').create(cr, uid, {
'name': 'CountryTwo',
'wkf_id': country_wf_id,
'kind': 'function',
'action': 'True',
'action_id': act_id3,
})
country_trs1_id = self.registry('workflow.transition').create(cr, uid, {
'signal': 'country_trans',
'act_from': country_act1_id,
'act_to': country_act2_id
})
# Data: re-create country and partner to benefit from the workflows
self.test_country_id = self.res_country.create(cr, uid, {
'name': 'TestingCountry2',
'code': 'T2',
})
self.test_country = self.res_country.browse(cr, uid, self.test_country_id)
self.test_partner_id = self.res_partner.create(cr, uid, {
'name': 'TestingPartner2',
'country_id': self.test_country_id,
})
self.test_partner = self.res_partner.browse(cr, uid, self.test_partner_id)
self.context = {
'active_id': self.test_partner_id,
'active_model': 'res.partner',
}
# Run the action on partner object itself ('base')
self.ir_actions_server.write(cr, uid, [self.act_id], {
'state': 'trigger',
'use_relational_model': 'base',
'wkf_model_id': self.res_partner_model_id,
'wkf_model_name': 'res.partner',
'wkf_transition_id': partner_trs1_id,
})
self.ir_actions_server.run(cr, uid, [self.act_id], self.context)
self.test_partner.refresh()
self.assertEqual(self.test_partner.comment, 'MyComment', 'ir_actions_server: incorrect signal trigger')
# Run the action on related country object ('relational')
self.ir_actions_server.write(cr, uid, [self.act_id], {
'use_relational_model': 'relational',
'wkf_model_id': self.res_country_model_id,
'wkf_model_name': 'res.country',
'wkf_field_id': self.res_partner_country_field_id,
'wkf_transition_id': country_trs1_id,
})
self.ir_actions_server.run(cr, uid, [self.act_id], self.context)
self.test_country.refresh()
self.assertEqual(self.test_country.code, 'ZZ', 'ir_actions_server: incorrect signal trigger')
        # Clear the workflow cache, otherwise openerp will try to instantiate workflows even after they have been deleted
from openerp.workflow import clear_cache
clear_cache(cr, uid)
def test_30_client(self):
cr, uid = self.cr, self.uid
client_action_id = self.registry('ir.actions.client').create(cr, uid, {
'name': 'TestAction2',
'tag': 'Test',
})
self.ir_actions_server.write(cr, uid, [self.act_id], {
'state': 'client_action',
'action_id': client_action_id,
})
res = self.ir_actions_server.run(cr, uid, [self.act_id], context=self.context)
self.assertEqual(res['name'], 'TestAction2', 'ir_actions_server: incorrect return result for a client action')
def test_40_crud_create(self):
cr, uid = self.cr, self.uid
_city = 'TestCity'
_name = 'TestNew'
# Do: create a new record in the same model and link it
self.ir_actions_server.write(cr, uid, [self.act_id], {
'state': 'object_create',
'use_create': 'new',
'link_new_record': True,
'link_field_id': self.res_partner_parent_field_id,
'fields_lines': [(0, 0, {'col1': self.res_partner_name_field_id, 'value': _name}),
(0, 0, {'col1': self.res_partner_city_field_id, 'value': _city})],
})
run_res = self.ir_actions_server.run(cr, uid, [self.act_id], context=self.context)
        self.assertFalse(run_res, 'ir_actions_server: a successfully finished create-record action should return False')
# Test: new partner created
        pids = self.res_partner.search(cr, uid, [('name', 'ilike', _name)])
        self.assertEqual(len(pids), 1, 'ir_actions_server: exactly one new partner should have been created')
        partner = self.res_partner.browse(cr, uid, pids[0])
        self.assertEqual(partner.city, _city, 'ir_actions_server: the new partner should have the city set by fields_lines')
        # Test: new partner linked
        self.test_partner.refresh()
        self.assertEqual(self.test_partner.parent_id.id, pids[0], 'ir_actions_server: the new partner should be linked through link_field_id')
# Do: copy current record
self.ir_actions_server.write(cr, uid, [self.act_id], {'fields_lines': [[5]]})
self.ir_actions_server.write(cr, uid, [self.act_id], {
'state': 'object_create',
'use_create': 'copy_current',
'link_new_record': False,
'fields_lines': [(0, 0, {'col1': self.res_partner_name_field_id, 'value': 'TestCopyCurrent'}),
(0, 0, {'col1': self.res_partner_city_field_id, 'value': 'TestCity'})],
})
run_res = self.ir_actions_server.run(cr, uid, [self.act_id], context=self.context)
        self.assertFalse(run_res, 'ir_actions_server: a successfully finished create-record action should return False')
# Test: new partner created
        pids = self.res_partner.search(cr, uid, [('name', 'ilike', 'TestingPartner (copy)')])  # res_partner currently overrides default['name'] regardless of the value passed
        self.assertEqual(len(pids), 1, 'ir_actions_server: exactly one copied partner should have been created')
        partner = self.res_partner.browse(cr, uid, pids[0])
        self.assertEqual(partner.city, 'TestCity', 'ir_actions_server: the copied partner should have the city set by fields_lines')
        self.assertEqual(partner.country_id.id, self.test_partner.country_id.id, 'ir_actions_server: the copy should keep the original country')
# Do: create a new record in another model
self.ir_actions_server.write(cr, uid, [self.act_id], {'fields_lines': [[5]]})
self.ir_actions_server.write(cr, uid, [self.act_id], {
'state': 'object_create',
'use_create': 'new_other',
'crud_model_id': self.res_country_model_id,
'link_new_record': False,
'fields_lines': [(0, 0, {'col1': self.res_country_name_field_id, 'value': 'obj.name', 'type': 'equation'}),
(0, 0, {'col1': self.res_country_code_field_id, 'value': 'obj.name[0:2]', 'type': 'equation'})],
})
run_res = self.ir_actions_server.run(cr, uid, [self.act_id], context=self.context)
        self.assertFalse(run_res, 'ir_actions_server: a successfully finished create-record action should return False')
# Test: new country created
        cids = self.res_country.search(cr, uid, [('name', 'ilike', 'TestingPartner')])
        self.assertEqual(len(cids), 1, 'ir_actions_server: exactly one new country should have been created')
        country = self.res_country.browse(cr, uid, cids[0])
        self.assertEqual(country.code, 'TE', 'ir_actions_server: the country code should come from the obj.name[0:2] equation')
# Do: copy a record in another model
self.ir_actions_server.write(cr, uid, [self.act_id], {'fields_lines': [[5]]})
self.ir_actions_server.write(cr, uid, [self.act_id], {
'state': 'object_create',
'use_create': 'copy_other',
'crud_model_id': self.res_country_model_id,
'link_new_record': False,
'ref_object': 'res.country,%s' % self.test_country_id,
'fields_lines': [(0, 0, {'col1': self.res_country_name_field_id, 'value': 'NewCountry', 'type': 'value'}),
(0, 0, {'col1': self.res_country_code_field_id, 'value': 'NY', 'type': 'value'})],
})
run_res = self.ir_actions_server.run(cr, uid, [self.act_id], context=self.context)
        self.assertFalse(run_res, 'ir_actions_server: a successfully finished create-record action should return False')
# Test: new country created
        cids = self.res_country.search(cr, uid, [('name', 'ilike', 'NewCountry')])
        self.assertEqual(len(cids), 1, 'ir_actions_server: exactly one copied country should have been created')
        country = self.res_country.browse(cr, uid, cids[0])
        self.assertEqual(country.code, 'NY', 'ir_actions_server: the copied country should have the code set by fields_lines')
        self.assertEqual(country.address_format, 'SuperFormat', 'ir_actions_server: the copy should keep the source address_format')
def test_50_crud_write(self):
cr, uid = self.cr, self.uid
_name = 'TestNew'
        # Do: update the current record ('current')
self.ir_actions_server.write(cr, uid, [self.act_id], {
'state': 'object_write',
'use_write': 'current',
'fields_lines': [(0, 0, {'col1': self.res_partner_name_field_id, 'value': _name})],
})
run_res = self.ir_actions_server.run(cr, uid, [self.act_id], context=self.context)
        self.assertFalse(run_res, 'ir_actions_server: a successfully finished write-record action should return False')
        # Test: current partner updated
        pids = self.res_partner.search(cr, uid, [('name', 'ilike', _name)])
        self.assertEqual(len(pids), 1, 'ir_actions_server: exactly one partner should bear the new name')
        partner = self.res_partner.browse(cr, uid, pids[0])
        self.assertEqual(partner.city, 'OrigCity', 'ir_actions_server: writing the name should leave the city untouched')
        # Do: update a specific record of another model ('other')
self.ir_actions_server.write(cr, uid, [self.act_id], {'fields_lines': [[5]]})
self.ir_actions_server.write(cr, uid, [self.act_id], {
'use_write': 'other',
'crud_model_id': self.res_country_model_id,
'ref_object': 'res.country,%s' % self.test_country_id,
'fields_lines': [(0, 0, {'col1': self.res_country_name_field_id, 'value': 'obj.name', 'type': 'equation'})],
})
run_res = self.ir_actions_server.run(cr, uid, [self.act_id], context=self.context)
        self.assertFalse(run_res, 'ir_actions_server: a successfully finished write-record action should return False')
        # Test: referenced country renamed
        cids = self.res_country.search(cr, uid, [('name', 'ilike', 'TestNew')])
        self.assertEqual(len(cids), 1, 'ir_actions_server: the referenced country should have been renamed')
        # Do: update a record resolved through an expression ('expression')
self.ir_actions_server.write(cr, uid, [self.act_id], {'fields_lines': [[5]]})
self.ir_actions_server.write(cr, uid, [self.act_id], {
'use_write': 'expression',
'crud_model_id': self.res_country_model_id,
'write_expression': 'object.country_id',
'fields_lines': [(0, 0, {'col1': self.res_country_name_field_id, 'value': 'NewCountry', 'type': 'value'})],
})
run_res = self.ir_actions_server.run(cr, uid, [self.act_id], context=self.context)
        self.assertFalse(run_res, 'ir_actions_server: a successfully finished write-record action should return False')
        # Test: country resolved by the expression renamed
        cids = self.res_country.search(cr, uid, [('name', 'ilike', 'NewCountry')])
        self.assertEqual(len(cids), 1, 'ir_actions_server: the country resolved by the expression should have been renamed')
@mute_logger('openerp.addons.base.ir.ir_model', 'openerp.models')
def test_60_multi(self):
cr, uid = self.cr, self.uid
# Data: 2 server actions that will be nested
act1_id = self.ir_actions_server.create(cr, uid, {
'name': 'Subaction1',
'sequence': 1,
'model_id': self.res_partner_model_id,
'state': 'code',
'code': 'action = {"type": "ir.actions.act_window"}',
})
act2_id = self.ir_actions_server.create(cr, uid, {
'name': 'Subaction2',
'sequence': 2,
'model_id': self.res_partner_model_id,
'state': 'object_create',
'use_create': 'copy_current',
})
act3_id = self.ir_actions_server.create(cr, uid, {
'name': 'Subaction3',
'sequence': 3,
'model_id': self.res_partner_model_id,
'state': 'code',
'code': 'action = {"type": "ir.actions.act_url"}',
})
self.ir_actions_server.write(cr, uid, [self.act_id], {
'state': 'multi',
'child_ids': [(6, 0, [act1_id, act2_id, act3_id])],
})
# Do: run the action
res = self.ir_actions_server.run(cr, uid, [self.act_id], context=self.context)
# Test: new partner created
        pids = self.res_partner.search(cr, uid, [('name', 'ilike', 'TestingPartner (copy)')])  # res_partner currently overrides default['name'] regardless of the value passed
        self.assertEqual(len(pids), 1, 'ir_actions_server: exactly one copied partner should have been created by the multi action')
# Test: action returned
self.assertEqual(res.get('type'), 'ir.actions.act_url')
# Test loops
with self.assertRaises(except_orm):
self.ir_actions_server.write(cr, uid, [self.act_id], {
'child_ids': [(6, 0, [self.act_id])]
})
if __name__ == '__main__':
unittest2.main()
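The `fields_lines` and `child_ids` values throughout the tests above use the old-API x2many command tuples: `(0, 0, vals)` creates a linked record, `(6, 0, ids)` replaces the whole set, and `[5]` clears it. A minimal sketch of how such commands resolve against a plain list of ids (illustrative only, not Odoo's actual implementation):

```python
def apply_x2many_commands(ids, commands):
    """Apply a subset of Odoo-style x2many commands to a list of ids.

    Supported (illustrative subset):
      (4, id)      link an existing record
      (3, id)      remove a record from the set (without deleting it)
      (5,) / [5]   clear the whole set
      (6, 0, ids)  replace the set with `ids`
    """
    result = list(ids)
    for cmd in commands:
        code = cmd[0]
        if code == 4:
            if cmd[1] not in result:
                result.append(cmd[1])
        elif code == 3:
            result = [i for i in result if i != cmd[1]]
        elif code == 5:
            result = []
        elif code == 6:
            result = list(cmd[2])
    return result


print(apply_x2many_commands([1, 2], [(6, 0, [7, 8])]))  # [7, 8]
print(apply_x2many_commands([1, 2, 3], [(3, 2)]))       # [1, 3]
```

The `(0, 0, vals)` create command is omitted here because it needs an ORM to allocate an id; the point is only the set semantics the tests rely on.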
# Source: memory.py from Bl41r/gb-emulator-python (MIT license)
"""Gameboy memory.
Cartridge
---------
[0000-3FFF] Cartridge ROM, bank 0: The first 16,384 bytes of the cartridge program are always available at this point in the memory map. Special circumstances apply:
[0000-00FF] BIOS: When the CPU starts up, PC starts at 0000h, which is the start of the 256-byte GameBoy BIOS code. Once the BIOS has run, it is removed from the memory map, and this area of the cartridge rom becomes addressable.
[0100-014F] Cartridge header: This section of the cartridge contains data about its name and manufacturer, and must be written in a specific format.
[4000-7FFF] Cartridge ROM, other banks: Any subsequent 16k "banks" of the cartridge program can be made available to the CPU here, one by one; a chip on the cartridge is generally used to switch between banks, and make a particular area accessible. The smallest programs are 32k, which means that no bank-selection chip is required.
System Mem
----------
[8000-9FFF] Graphics RAM: Data required for the backgrounds and sprites used by the graphics subsystem is held here, and can be changed by the cartridge program. This region will be examined in further detail in part 3 of this series.
[A000-BFFF] Cartridge (External) RAM: There is a small amount of writeable memory available in the GameBoy; if a game is produced that requires more RAM than is available in the hardware, additional 8k chunks of RAM can be made addressable here.
[C000-DFFF] Working RAM: The GameBoy's internal 8k of RAM, which can be read from or written to by the CPU.
[E000-FDFF] Working RAM (shadow): Due to the wiring of the GameBoy hardware, an exact copy of the working RAM is available 8k higher in the memory map. This copy is available up until the last 512 bytes of the map, where other areas are brought into access.
[FE00-FE9F] Graphics: sprite information: Data about the sprites rendered
by the graphics chip are held here, including the sprites' positions and
attributes.
[FF00-FF7F] Memory-mapped I/O: Each of the GameBoy's subsystems (graphics, sound, etc.) has control values, to allow programs to create effects and use the hardware. These values are available to the CPU directly on the address bus, in this area.
[FF80-FFFF] Zero-page RAM: A high-speed area of 128 bytes of RAM is available at the top of memory. Oddly, though this is "page" 255 of the memory, it is referred to as page zero, since most of the interaction between the program and the GameBoy hardware occurs through use of this page of memory.
"""
import array
class GbMemory(object):
"""Memory of the LC-3 VM."""
def __init__(self):
"""Init."""
self.mem_size = 2**16
self.memory = array.array('B', [0 for i in range(self.mem_size)])
self.cartridge_type = 0
def write_byte(self, address, value):
"""Write a byte to an address."""
self.memory[address] = value
# self._show_mem_around_addr(address)
def read_byte(self, address):
"""Return a byte from memory at an address."""
return self.memory[address]
def read_word(self, address):
"""Read a word from memoery @ address."""
return self.read_byte(address) + (self.read_byte(address + 1) << 8)
def write_word(self, address, value):
"""Write a word in mem @ address."""
self.write_byte(address, value & 255)
self.write_byte(address + 1, value >> 8)
def reset_memory(self):
"""Reset all memory slots to 0."""
for i in range(self.mem_size):
self.memory[i] = 0
self.cartridge_type = 0
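The word helpers above store values little-endian (low byte first), matching the Game Boy's CPU. A standalone sketch of the same byte layout, mirroring the class methods without depending on them (names and addresses are illustrative):

```python
import array

# 64 KiB of byte-addressable memory, as in GbMemory.__init__.
mem = array.array('B', [0] * 0x10000)

def write_word(address, value):
    # Little-endian: low byte at `address`, high byte at `address + 1`.
    mem[address] = value & 255
    mem[address + 1] = value >> 8

def read_word(address):
    return mem[address] + (mem[address + 1] << 8)

write_word(0xC000, 0xBEEF)       # 0xC000 is in working RAM
print(hex(mem[0xC000]))          # 0xef (low byte first)
print(hex(mem[0xC001]))          # 0xbe
print(hex(read_word(0xC000)))    # 0xbeef
```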
# Source: diofant/tests/utilities/test_misc.py from rajkk1/diofant (BSD-3-Clause license)
from diofant.utilities.decorator import no_attrs_in_subclass
__all__ = ()
def test_no_attrs_in_subclass():
class A:
x = 'test'
A.x = no_attrs_in_subclass(A, A.x)
class B(A):
pass
assert hasattr(A, 'x') is True
assert hasattr(B, 'x') is False
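`no_attrs_in_subclass` behaves like a descriptor that exposes an attribute on the defining class but hides it from subclasses. A minimal sketch of that pattern (illustrative only, not Diofant's implementation):

```python
class OwnerOnly:
    """Descriptor visible on `owner` but raising AttributeError elsewhere."""

    def __init__(self, owner, value):
        self.owner = owner
        self.value = value

    def __get__(self, obj, objtype=None):
        # `objtype` is the class through which the lookup happens,
        # so subclass access can be distinguished from owner access.
        if objtype is not self.owner:
            raise AttributeError('attribute hidden in subclasses')
        return self.value


class A:
    pass

A.x = OwnerOnly(A, 'test')


class B(A):
    pass

print(hasattr(A, 'x'))  # True
print(hasattr(B, 'x'))  # False: lookup raises AttributeError
```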
# Source: phantomapp/migrations/0009_order_orderproduct.py from t7hm1/My-django-project (MIT license)
# Generated by Django 2.1 on 2018-09-06 02:03
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('phantomapp', '0008_auto_20180904_2102'),
]
operations = [
migrations.CreateModel(
name='Order',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('username', models.CharField(blank=True, max_length=255)),
('email', models.CharField(max_length=255)),
('first_name', models.CharField(max_length=255)),
('last_name', models.CharField(max_length=255)),
('company', models.CharField(max_length=255)),
('country', models.CharField(max_length=255)),
('state', models.CharField(max_length=255)),
('address', models.CharField(max_length=255)),
('telephone', models.CharField(max_length=255)),
('created', models.DateTimeField(auto_now=True)),
('updated', models.DateTimeField(auto_now=True)),
('paid', models.BooleanField(default=False)),
],
options={
'ordering': ('-created',),
},
),
migrations.CreateModel(
name='OrderProduct',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('price', models.IntegerField()),
('product', models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, related_name='products', to='phantomapp.ShopProduct')),
('purchase', models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, related_name='products', to='phantomapp.Order')),
],
),
]
# Source: lib/mcapi.py from RalphORama/MCPerms (MIT license)
from requests import get
from json import loads
from time import time
from uuid import UUID
def username_to_uuid(username, when=None):
    """Look up the Minecraft UUID a username mapped to at time `when`."""
    # Compute the default inside the body: `when=int(time())` in the
    # signature would be evaluated once at import time, not per call.
    if when is None:
        when = int(time())
    url = 'https://api.mojang.com/users/profiles/minecraft/{}?at={}'
    r = get(url.format(username, when))
    if r.status_code == 200:
        data = loads(r.text)
        uuid = UUID(data['id'])
        return str(uuid)
    return None
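The Mojang endpoint returns the `id` as 32 undashed hex characters; passing it through `uuid.UUID` is what produces the canonical dashed form. A standalone sketch of just that conversion, with no network call (the sample hex value is illustrative):

```python
from uuid import UUID

raw_id = '069a79f444e94726a5befca90e38aaf5'  # 32-char hex id, as the API returns it
dashed = str(UUID(raw_id))                   # UUID accepts undashed hex input
print(dashed)  # 069a79f4-44e9-4726-a5be-fca90e38aaf5
```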
# Source: backend/mp/apps/questionnarie/migrations/0002_auto_20200327_1422.py from shidashui/mymp (MIT license)
# Generated by Django 3.0.4 on 2020-03-27 14:22
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('questionnarie', '0001_initial'),
]
operations = [
migrations.RemoveField(
model_name='questionnaire',
name='email',
),
migrations.AddField(
model_name='questionnaire',
name='user_id',
field=models.IntegerField(default=0, verbose_name='用户id'),
),
]
# Source: newscout_web/news_site/migrations/0002_auto_20190819_1230.py from rsqwerty/newscout_web (Apache-2.0 license)
# -*- coding: utf-8 -*-
# Generated by Django 1.11.4 on 2019-08-19 12:30
from __future__ import unicode_literals
from django.conf import settings
import django.contrib.auth.validators
import django.core.validators
from django.db import migrations, models
import django.db.models.deletion
import django.utils.timezone
class Migration(migrations.Migration):
dependencies = [
('news_site', '0001_initial'),
]
operations = [
migrations.CreateModel(
name='AdGroup',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created_at', models.DateTimeField(auto_now_add=True, verbose_name='Created At')),
('modified_at', models.DateTimeField(auto_now=True, verbose_name='Last Modified At')),
('is_active', models.BooleanField(default=True)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='AdType',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created_at', models.DateTimeField(auto_now_add=True, verbose_name='Created At')),
('modified_at', models.DateTimeField(auto_now=True, verbose_name='Last Modified At')),
('type', models.CharField(max_length=100)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Advertisement',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created_at', models.DateTimeField(auto_now_add=True, verbose_name='Created At')),
('modified_at', models.DateTimeField(auto_now=True, verbose_name='Last Modified At')),
('ad_text', models.CharField(max_length=160)),
('ad_url', models.URLField()),
('media', models.ImageField(blank=True, null=True, upload_to='')),
('is_active', models.BooleanField(default=True)),
('impsn_limit', models.IntegerField(default=0)),
('delivered', models.IntegerField(default=0)),
('click_count', models.IntegerField(default=0)),
('ad_type', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='news_site.AdType')),
('adgroup', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='news_site.AdGroup')),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Campaign',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created_at', models.DateTimeField(auto_now_add=True, verbose_name='Created At')),
('modified_at', models.DateTimeField(auto_now=True, verbose_name='Last Modified At')),
('name', models.CharField(max_length=160)),
('is_active', models.BooleanField(default=True)),
('daily_budget', models.DecimalField(blank=True, decimal_places=2, max_digits=8, null=True)),
('max_bid', models.DecimalField(blank=True, decimal_places=2, max_digits=8, null=True)),
('start_date', models.DateTimeField()),
('end_date', models.DateTimeField()),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='CategoryAssociation',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('child_cat', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='child_category', to='news_site.Category')),
('parent_cat', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='parent_category', to='news_site.Category')),
],
),
migrations.CreateModel(
name='CategoryDefaultImage',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('default_image_url', models.URLField()),
('category', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='news_site.Category')),
],
),
migrations.CreateModel(
name='DailyDigest',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created_at', models.DateTimeField(auto_now_add=True, verbose_name='Created At')),
('modified_at', models.DateTimeField(auto_now=True, verbose_name='Last Modified At')),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Devices',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('device_name', models.CharField(blank=True, max_length=255, null=True)),
('device_id', models.CharField(blank=True, max_length=255, null=True)),
],
),
migrations.CreateModel(
name='Menu',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='news_site.Category')),
],
),
migrations.CreateModel(
name='Notification',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('breaking_news', models.BooleanField(default=False)),
('daily_edition', models.BooleanField(default=False)),
('personalized', models.BooleanField(default=False)),
('device', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='news_site.Devices')),
],
),
migrations.CreateModel(
name='ScoutedItem',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('title', models.CharField(max_length=255)),
('url', models.URLField(default='http://nowhe.re')),
('category', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='news_site.Category')),
],
),
migrations.CreateModel(
name='ScoutFrontier',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('url', models.URLField(default='http://nowhe.re')),
('category', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='news_site.Category')),
],
),
migrations.CreateModel(
name='SocialAccount',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('provider', models.CharField(max_length=200)),
('social_account_id', models.CharField(max_length=200)),
('image_url', models.CharField(blank=True, max_length=250, null=True)),
],
options={
'verbose_name_plural': 'Social Accounts',
},
),
migrations.CreateModel(
name='SubMenu',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('hash_tags', models.ManyToManyField(to='news_site.HashTag')),
('name', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='news_site.Category')),
],
),
migrations.CreateModel(
name='TrendingArticle',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created_at', models.DateTimeField(auto_now_add=True, verbose_name='Created At')),
('modified_at', models.DateTimeField(auto_now=True, verbose_name='Last Modified At')),
('active', models.BooleanField(default=True)),
('score', models.FloatField(default=0.0)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='TrendingHashTag',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=255)),
],
),
migrations.RemoveField(
model_name='subcategory',
name='category',
),
migrations.RemoveField(
model_name='article',
name='industry',
),
migrations.RemoveField(
model_name='article',
name='sub_category',
),
migrations.AddField(
model_name='article',
name='edited_by',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL),
),
migrations.AddField(
model_name='article',
name='edited_on',
field=models.DateTimeField(auto_now=True),
),
migrations.AddField(
model_name='article',
name='indexed_on',
field=models.DateTimeField(default=django.utils.timezone.now),
),
migrations.AddField(
model_name='article',
name='manually_edit',
field=models.BooleanField(default=False),
),
migrations.AddField(
model_name='article',
name='spam',
field=models.BooleanField(default=False),
),
        migrations.AddField(
            model_name='articlemedia',
            name='video_url',
            field=models.TextField(blank=True, null=True, validators=[django.core.validators.URLValidator()]),
        ),
        migrations.AlterField(
            model_name='article',
            name='category',
            field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='news_site.Category'),
        ),
        migrations.AlterField(
            model_name='article',
            name='cover_image',
            field=models.TextField(validators=[django.core.validators.URLValidator()]),
        ),
        migrations.AlterField(
            model_name='article',
            name='source_url',
            field=models.TextField(validators=[django.core.validators.URLValidator()]),
        ),
        migrations.AlterField(
            model_name='articlemedia',
            name='url',
            field=models.TextField(blank=True, null=True, validators=[django.core.validators.URLValidator()]),
        ),
        migrations.AlterField(
            model_name='userprofile',
            name='passion',
            field=models.ManyToManyField(blank=True, to='news_site.HashTag'),
        ),
        migrations.AlterField(
            model_name='userprofile',
            name='username',
            field=models.CharField(error_messages={'unique': 'A user with that username already exists.'}, help_text='Required. 150 characters or fewer. Letters, digits and @/./+/-/_ only.', max_length=150, unique=True, validators=[django.contrib.auth.validators.UnicodeUsernameValidator()], verbose_name='username'),
        ),
        migrations.DeleteModel(
            name='Industry',
        ),
        migrations.DeleteModel(
            name='SubCategory',
        ),
        migrations.AddField(
            model_name='trendingarticle',
            name='articles',
            field=models.ManyToManyField(to='news_site.Article'),
        ),
        migrations.AddField(
            model_name='socialaccount',
            name='user',
            field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL),
        ),
        migrations.AddField(
            model_name='menu',
            name='submenu',
            field=models.ManyToManyField(to='news_site.SubMenu'),
        ),
        migrations.AddField(
            model_name='devices',
            name='user',
            field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL),
        ),
        migrations.AddField(
            model_name='dailydigest',
            name='articles',
            field=models.ManyToManyField(to='news_site.Article'),
        ),
        migrations.AddField(
            model_name='dailydigest',
            name='device',
            field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='news_site.Devices'),
        ),
        migrations.AddField(
            model_name='adgroup',
            name='campaign',
            field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='news_site.Campaign'),
        ),
        migrations.AddField(
            model_name='adgroup',
            name='category',
            field=models.ManyToManyField(to='news_site.Category'),
        ),
    ]
# class1/class_1_yaml.py (daveg999/Automation_class, Apache-2.0)
import yaml
import json
# In Python 3, range() returns an immutable range object, so convert it to a
# list before appending.
yaml_list = list(range(5))
yaml_list.append('string1')
yaml_list.append('string2')
yaml_list.append({})
yaml_list[-1]['critter1'] = 'hedgehog'
yaml_list[-1]['critter2'] = 'bunny'
yaml_list[-1]['dungeon_levels'] = list(range(5))  # JSON cannot serialize a range
yaml_list.append('list_end')

with open("class1_list.yml", "w") as f:
    f.write(yaml.dump(yaml_list, default_flow_style=False))

with open("class1_list.json", "w") as f:
    json.dump(yaml_list, f)
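A quick stdlib-only sanity check that a mixed list like the one above survives a JSON round trip once every `range` has been converted to a list (the literal below mirrors the structure built in the script):

```python
import json

# Mirror of the structure built above, with ranges already converted to lists.
data = list(range(5)) + [
    'string1',
    'string2',
    {'critter1': 'hedgehog', 'critter2': 'bunny', 'dungeon_levels': list(range(5))},
    'list_end',
]

# Round-trip through JSON: every element is a plain int/str/list/dict.
assert json.loads(json.dumps(data)) == data
print(len(data))  # 9
```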
# pyCLI/main.py (tillstud/spotirss, MIT)
from pyCLI.config import Config
from pyCLI.logging import logger
def main(config: Config):
    print(config.pycli_message)
    logger.debug("If you enter 'pyCLI -v', you will see this message!")
# basecam/__init__.py (sdss/basecam, BSD-3-Clause)
# encoding: utf-8
# flake8: noqa
from sdsstools import get_package_version
NAME = "sdss-basecam"
__version__ = get_package_version(__file__, "sdss-basecam") or "dev"
from .camera import *
from .events import *
from .exceptions import *
from .exposure import *
from .notifier import *
| 17.058824 | 68 | 0.748276 | 38 | 290 | 5.394737 | 0.578947 | 0.195122 | 0.165854 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008163 | 0.155172 | 290 | 16 | 69 | 18.125 | 0.828571 | 0.096552 | 0 | 0 | 0 | 0 | 0.104247 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.75 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
d588539983a2e180d430faf9156fe49f7c14386f | 372 | py | Python | server/tests/test_env.py | amiralis1365/cards-game | db44eaefd0c10c7876be52e97534f7e4201c9581 | [
"CNRI-Python"
] | null | null | null | server/tests/test_env.py | amiralis1365/cards-game | db44eaefd0c10c7876be52e97534f7e4201c9581 | [
"CNRI-Python"
] | null | null | null | server/tests/test_env.py | amiralis1365/cards-game | db44eaefd0c10c7876be52e97534f7e4201c9581 | [
"CNRI-Python"
] | null | null | null | """Sanity test environment setup."""
import os.path
from django.conf import settings
from django.test import TestCase
class EnvTestCase(TestCase):
    """Environment test cases."""

    def test_env_file_exists(self):
        """Test environment file exists."""
        env_file = os.path.join(settings.DEFAULT_ENV_PATH, ".env")
        assert os.path.exists(env_file)
# multiuploader/management/commands/clean_uploads.py (SharmaVinayKumar/django-multiuploader, MIT)
from __future__ import print_function, unicode_literals
import os
from datetime import timedelta
import multiuploader.default_settings as DEFAULTS
from django.conf import settings
from django.core.management.base import BaseCommand
from django.utils.timezone import now
from multiuploader.models import MultiuploaderFile
class Command(BaseCommand):
    help = 'Clean all temporary attachments loaded to MultiuploaderFile model'

    def handle(self, *args, **options):
        expiration_time = getattr(settings, "MULTIUPLOADER_FILE_EXPIRATION_TIME",
                                  DEFAULTS.MULTIUPLOADER_FILE_EXPIRATION_TIME)
        time_threshold = now() - timedelta(seconds=expiration_time)
        for attach in MultiuploaderFile.objects.filter(upload_date__lt=time_threshold):
            try:
                os.remove(attach.file.path)
            except Exception as ex:
                print(ex)
        MultiuploaderFile.objects.filter(upload_date__lt=time_threshold).delete()
        print("Cleaning temporary upload files complete")
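The expiry cutoff computed in `handle()` can be sketched with the stdlib alone; the one-hour default below is an assumption standing in for the `MULTIUPLOADER_FILE_EXPIRATION_TIME` setting, and the timezone-aware `datetime.now` stands in for `django.utils.timezone.now()`:

```python
from datetime import datetime, timedelta, timezone

expiration_time = 60 * 60  # assumed default: one hour, in seconds
now_utc = datetime.now(timezone.utc)  # stands in for django.utils.timezone.now()
time_threshold = now_utc - timedelta(seconds=expiration_time)

# A file uploaded two hours ago falls before the threshold and would be deleted.
upload_date = now_utc - timedelta(hours=2)
print(upload_date < time_threshold)  # True
```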
# tests/test_source.py (Koech-code/News-App, MIT)
import unittest
from app.models import Source
class TestSource(unittest.TestCase):
    """
    Test class to verify the behavior of the Source class.
    """

    def setUp(self):
        """
        Method that runs before each test.
        """
        self.new_source = Source('abc-news', 'ABC news',
                                 'Your trusted source for breaking news',
                                 "https://abcnews.go.com", "general", "en", "us")

    def test_instance(self):
        self.assertTrue(isinstance(self.new_source, Source))


if __name__ == "__main__":
    unittest.main()
# python_backend/custom_types/forvo_api_types.py (BenLeong0/japanese_vocab_fetcher, MIT)
from typing import Literal, TypedDict
class ForvoAPIItem(TypedDict):
    id: int
    word: str
    original: str
    addtime: str
    hits: int
    username: str
    sex: str
    country: str
    code: str
    langname: str
    pathmp3: str
    pathogg: str
    rate: int
    num_votes: int
    num_positive_votes: int


class ForvoAPIResponse(TypedDict):
    attributes: dict[Literal["total"], int]
    items: list[ForvoAPIItem]
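A minimal sketch of how a response shaped like these TypedDicts would be consumed. The field values are invented for illustration, and the narrowed classes below are stand-ins; TypedDict annotations are erased at runtime, so the value is purely for the type checker:

```python
from typing import TypedDict


class Item(TypedDict, total=False):
    word: str
    pathmp3: str


class Response(TypedDict):
    attributes: dict
    items: list


# Hypothetical payload mirroring the ForvoAPIResponse shape above.
resp: Response = {
    "attributes": {"total": 1},
    "items": [{"word": "hello", "pathmp3": "https://example.com/a.mp3"}],
}
print(resp["attributes"]["total"])  # 1
```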
# roles/lib_openshift/src/lib/import.py (ramkrsna/openshift-ansible, Apache-2.0)
# pylint: skip-file
# flake8: noqa
'''
OpenShiftCLI class that wraps the oc commands in a subprocess
'''
# pylint: disable=too-many-lines
from __future__ import print_function
import atexit
import json
import os
import re
import shutil
import subprocess
import tempfile
# pylint: disable=import-error
import ruamel.yaml as yaml
from ansible.module_utils.basic import AnsibleModule
# formain.py (GenBill/Maple_2K, MIT)
from fim_mission import *
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.nn.functional as F
import torch.backends.cudnn as cudnn
import torch.optim as optim
from torch.optim import lr_scheduler
import torchvision.transforms as transforms
from torchvision import datasets, models
import matplotlib.pyplot as plt
from tensorboardX import SummaryWriter, writer
import os
import argparse
import random
import numpy as np
import warnings
from PIL import Image
plt.ion() # interactive mode
warnings.filterwarnings('ignore')
os.environ['CUDA_VISIBLE_DEVICES'] = '1' # opt.cuda
device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # "cpu" #
datawriter = SummaryWriter()
data_root = './Dataset' # '../Dataset/Kaggle265'
target_root = './Targetset' # '../Dataset/Kaggle265'
num_workers = 0
loader = aim_loader(data_root, target_root, num_workers)
model_ft = models.resnet50(pretrained=True).to(device)
print(model_ft)
'''
# criterion = torch.nn.MSELoss()
criterion = torch.nn.L1Loss()
step = 16

model_ft.eval()
with torch.no_grad():
    for num_iter, (data, target, data_size, target_size) in enumerate(tqdm(loader)):
        target = target.to(device)
        data = data.to(device)
        data_size = data_size.item()
        target_size = target_size.item()

        min_loss = 1024.
        min_i, min_j = -1, -1
        for i in range(0, data_size-target_size, step):
            for j in range(0, data_size-target_size, step):
                trans_T = model_ft(target)
                trans_D = model_ft(data[:,:,i:i+target_size,j:j+target_size])
                loss = criterion(trans_T, trans_D).item()
                if min_loss>loss:
                    min_i, min_j = i, j
                    min_loss = loss

        head_i = max(0, min_i-step)
        head_j = max(0, min_j-step)
        tail_i = min(min_i+step, data_size)
        tail_j = min(min_j+step, data_size)
        for i in range(head_i, tail_i):
            for j in range(head_j, tail_j):
                trans_T = model_ft(target)
                trans_D = model_ft(data[:,:,i:i+target_size,j:j+target_size])
                loss = criterion(trans_T, trans_D).item()
                if min_loss>loss:
                    min_i, min_j = i, j
                    min_loss = loss

        data[0,:,min_i:min_i+target_size,min_j:min_j+target_size] = target[0,:,:,:]
        datawriter.add_image('new_img', data[0,:,:,:], num_iter)
        datawriter.add_scalar('img_loss', min_loss, num_iter)

        x, y = get_position(min_i, min_j, data_size, target_size)
        print('Iter : {}'.format(num_iter))
        print('Pos = ({}, {})'.format(x, y))
        print('Loss = {}'.format(min_loss))

datawriter.close()
'''
# Sprachanalyse/versuch.py (DemonicStorm/LitBlogRepo, MIT)
import spacy
from spacy_langdetect import LanguageDetector
import en_core_web_sm
from glob import glob
nlp = en_core_web_sm.load()
#nlp = spacy.load('en')
nlp.add_pipe(LanguageDetector(), name='language_detector', last=True)
print(LanguageDetector)

# app/routes.py (valtemirprocopio/forms, MIT)
from app import app
from flask import render_template, flash, redirect, url_for
from app.forms import LoginForm
@app.route('/')
@app.route('/index')
def index():
    return render_template('index.html')


@app.route('/contato', methods=['GET', 'POST'])
def contato():
    form = LoginForm()
    if form.validate_on_submit():
        # flash() returns None, so there is nothing useful to assign.
        flash('A mensagem foi enviada com sucesso.')
        return redirect(url_for('index'))
    return render_template('contato.html', form=form)


@app.route('/features')
def features():
    return render_template('features.html')
# 000000stepikProgBasKirFed/Stepik000000ProgBasKirFedсh01p01st07TASK07_20210205_print.py (SafonovMikhail/python_000577, Apache-2.0)
'''
Write a program that declares a variable "name" and assigns it the value "Python".
The program must print on a single line, separated by spaces:
the string "name",
the value of the variable "name",
the number 3,
the number 8.5.

Sample Input:

Sample Output:
name Python 3 8.5
'''
name = 'Python'
print('name', name, 3, 8.5)
#!/usr/bin/python
# chrome/tools/extract_actions.py (zachlatta/chromium, BSD-3-Clause)
# Copyright 2007 Google Inc. All rights reserved.
"""Extract UserMetrics "actions" strings from the Chrome source.

This program generates the list of known actions we expect to see in the
user behavior logs. It walks the Chrome source, looking for calls to
UserMetrics functions, extracting actions and warning on improper calls,
as well as generating the lists of possible actions in situations where
there are many possible actions.

See also:
  chrome/browser/user_metrics.h
  http://wiki.corp.google.com/twiki/bin/view/Main/ChromeUserExperienceMetrics

Run it from the chrome/browser directory like:
  extract_actions.py > actions_list
"""

__author__ = 'evanm (Evan Martin)'

import os
import re
import sys

from google import path_utils

# Files that are known to use UserMetrics::RecordComputedAction(), which means
# they require special handling code in this script.
# To add a new file, add it to this list and add the appropriate logic to
# generate the known actions to AddComputedActions() below.
KNOWN_COMPUTED_USERS = [
  'back_forward_menu_model.cc',
  'options_page_view.cc',
  'render_view_host.cc',  # called using webkit identifiers
  'user_metrics.cc',      # method definition
  'new_tab_ui.cc',        # most visited clicks 1-9
]

def AddComputedActions(actions):
  """Add computed actions to the actions list.

  Arguments:
    actions: set of actions to add to.
  """
  # Actions for back_forward_menu_model.cc.
  for dir in ['BackMenu_', 'ForwardMenu_']:
    actions.add(dir + 'ShowFullHistory')
    actions.add(dir + 'Popup')
    for i in range(1, 20):
      actions.add(dir + 'HistoryClick' + str(i))
      actions.add(dir + 'ChapterClick' + str(i))

  # Actions for new_tab_ui.cc.
  for i in range(1, 10):
    actions.add('MostVisited%d' % i)

def AddWebKitEditorActions(actions):
  """Add editor actions from editor_client_impl.cc.

  Arguments:
    actions: set of actions to add to.
  """
  action_re = re.compile(r'''\{ [\w']+, +\w+, +"(.*)" +\},''')

  editor_file = os.path.join(path_utils.ScriptDir(), '..', '..', 'webkit',
                             'glue', 'editor_client_impl.cc')
  for line in open(editor_file):
    match = action_re.search(line)
    if match:  # Plain call to RecordAction
      actions.add(match.group(1))

def GrepForActions(path, actions):
  """Grep a source file for calls to UserMetrics functions.

  Arguments:
    path: path to the file
    actions: set of actions to add to
  """
  action_re = re.compile(r'[> ]UserMetrics:?:?RecordAction\(L"(.*)"')
  other_action_re = re.compile(r'[> ]UserMetrics:?:?RecordAction\(')
  computed_action_re = re.compile(r'UserMetrics::RecordComputedAction')
  for line in open(path):
    match = action_re.search(line)
    if match:  # Plain call to RecordAction
      actions.add(match.group(1))
    elif other_action_re.search(line):
      # Warn if this file shouldn't be mentioning RecordAction.
      if os.path.basename(path) != 'user_metrics.cc':
        print >>sys.stderr, 'WARNING: %s has funny RecordAction' % path
    elif computed_action_re.search(line):
      # Warn if this file shouldn't be calling RecordComputedAction.
      if os.path.basename(path) not in KNOWN_COMPUTED_USERS:
        print >>sys.stderr, 'WARNING: %s has RecordComputedAction' % path

def WalkDirectory(root_path, actions):
  for path, dirs, files in os.walk(root_path):
    if '.svn' in dirs:
      dirs.remove('.svn')
    for file in files:
      ext = os.path.splitext(file)[1]
      if ext == '.cc':
        GrepForActions(os.path.join(path, file), actions)

def main(argv):
  actions = set()
  AddComputedActions(actions)
  AddWebKitEditorActions(actions)

  # Walk the source tree to process all .cc files.
  chrome_root = os.path.join(path_utils.ScriptDir(), '..')
  WalkDirectory(chrome_root, actions)
  webkit_root = os.path.join(path_utils.ScriptDir(), '..', '..', 'webkit')
  WalkDirectory(os.path.join(webkit_root, 'glue'), actions)
  WalkDirectory(os.path.join(webkit_root, 'port'), actions)

  # Print out the actions as a sorted list.
  for action in sorted(actions):
    print action

if '__main__' == __name__:
  main(sys.argv)
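The `RecordAction` regex above can be exercised on a fabricated source line (the line content is invented for illustration; this standalone check uses Python 3's `print` function, unlike the Python 2 script itself):

```python
import re

# Same pattern as action_re in the script above.
action_re = re.compile(r'[> ]UserMetrics:?:?RecordAction\(L"(.*)"')

# Fabricated C++ source line containing a plain RecordAction call.
line = '  UserMetrics::RecordAction(L"NewTab_Open");'
match = action_re.search(line)
print(match.group(1))  # NewTab_Open
```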
# src/nerb/named_entities.py (johnnygreco/nerb, MIT)
from __future__ import annotations
# Standard library
import re
from copy import deepcopy
from dataclasses import dataclass
from typing import Callable, Optional
__all__ = ['NamedEntity', 'NamedEntityList']
@dataclass(frozen=True)
class NamedEntity:
    name: str
    entity: str
    string: str
    span: tuple[int, int]


class NamedEntityList:
    """Named entity list class."""

    def __init__(self, init_list: Optional[list] = None):
        init_list = [] if init_list is None else init_list
        self._list = init_list

    def append(self, entity: NamedEntity):
        """Append entity to this list, where the element must be of type NamedEntity."""
        if not isinstance(entity, NamedEntity):
            raise TypeError(
                f'{self.__class__.__name__} holds {NamedEntity} objects. You gave {type(entity)}.')
        self._list.append(entity)

    def copy(self):
        return deepcopy(self)

    def extend(self, entity_list: NamedEntityList | list[NamedEntity]):
        """Extend list. Similar to the standard python list object, extend takes an iterable as an argument."""
        if not isinstance(entity_list, (NamedEntityList, list)):
            raise TypeError(
                f'Expected object of type {self.__class__.__name__} or list. You gave {type(entity_list)}.'
            )
        for elem in entity_list:
            self.append(elem)

    def get_unique_names(self) -> set[str]:
        """Return set of the unique names in this NamedEntityList."""
        return set([entity.name for entity in self])

    def sort(self, key: Callable, *, reverse: bool = False) -> None:
        """
        Sort the list according to the given key. The sort is executed in-place.

        Parameters
        ----------
        key : callable (e.g., a lambda function)
            Function that defines how the list should be sorted.
        reverse : bool, optional
            If True, sort in descending order.
        """
        self._list.sort(key=key, reverse=reverse)

    def __add__(self, other: NamedEntityList):
        """Define what it means to add two list objects together."""
        concatenated_list = list(self) + list(other)
        return self.__class__(concatenated_list)

    def __getitem__(self, item):
        if isinstance(item, list):
            return self.__class__([self._list[i] for i in item])
        elif isinstance(item, slice):
            return self.__class__(self._list[item])
        else:
            return self._list[item]

    def __iter__(self):
        return iter(self._list)

    def __len__(self):
        return len(self._list)

    def __repr__(self):
        repr = '\n'.join([f'[{i}] {p.__repr__()}' for i, p in enumerate(self)])
        repr = re.sub(r'^', ' ' * 4, repr, flags=re.M)
        repr = f'(\n{repr}\n)' if len(self) > 0 else f'([])'
        return f'{self.__class__.__name__}{repr}'
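A hypothetical usage sketch of the sort API above; `NamedEntity` is re-declared minimally (with an untyped `span`) so the snippet runs on its own:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class NamedEntity:
    name: str
    entity: str
    string: str
    span: tuple


entities = [
    NamedEntity('Babbage', 'PERSON', 'Charles Babbage', (20, 35)),
    NamedEntity('Ada', 'PERSON', 'Ada Lovelace', (0, 12)),
]
# Sort by span start, as NamedEntityList.sort(key=lambda e: e.span[0]) would.
entities.sort(key=lambda e: e.span[0])
print([e.name for e in entities])  # ['Ada', 'Babbage']
```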
# cd2h_repo_project/modules/doi/schemas.py (galterlibrary/InvenioRDM-at-NU, MIT)
"""JSON Schemas."""
import csv
from collections import defaultdict
from datetime import date
from os.path import dirname, join, realpath
from flask import current_app
from marshmallow import Schema, fields
from cd2h_repo_project.modules.records.resource_type import ResourceType
class DataCiteResourceTypeMap(object):
    """DataCite Resource Type Mapping.

    TODO: If we extract this module out, make this class a configuration
    setting.
    """

    def __init__(self):
        """Constructor."""
        self.filename = join(
            dirname(dirname(realpath(__file__))),
            'records', 'data', 'resource_type_mapping.csv'
        )
        with open(self.filename) as f:
            reader = csv.DictReader(f)
            self.map = {
                (row['Group'].lower(), row['Name'].lower()):
                    row['DataCite'].strip()
                for row in reader
            }

    def get(self, key, default=None):
        """Return the mapped value.

        `key` is (<general resource type>, <specific resource type>).
        """
        return self.map.get(key, default)


class DataCiteResourceTypeSchemaV4(Schema):
    """ResourceType schema."""

    resourceTypeGeneral = fields.Method('get_general_resource_type')
    resourceType = fields.Method('get_specific_resource_type')

    def get_general_resource_type(self, resource_type):
        """Return DataCite's controlled vocabulary General Resource Type."""
        resource_type_obj = ResourceType.get(
            resource_type['general'], resource_type['specific']
        )
        return resource_type_obj.map(DataCiteResourceTypeMap())

    def get_specific_resource_type(self, resource_type):
        """Return title-ized Specific Resource Type."""
        return resource_type['specific'].title()


class DataCiteTitleSchemaV4(Schema):
    """Title schema."""

    title = fields.Str()


class DataCiteCreatorSchemaV4(Schema):
    """Creator schema.

    Each of these fields are inside the `creator` node.
    """

    creatorName = fields.Str(attribute='full_name')
    # TODO (optional): sub creatorName: nameType
    givenName = fields.Str(attribute='first_name')
    familyName = fields.Str(attribute='last_name')
    # TODO (optional):
    #     nameIdentifier
    #         nameIdentifierScheme
    #         schemeURI
    #     affiliation


class DataCiteSchemaV4(Schema):
    """Schema for DataCite Metadata.

    For now, only the minimum required fields are implemented. In the future,
    we may want to include optional fields as well.

    Fields and subfields are based on
    schema.datacite.org/meta/kernel-4.1/doc/DataCite-MetadataKernel_v4.1.pdf
    """

    identifier = fields.Method('get_identifier', dump_only=True)
    # NOTE: This auto-magically serializes the `creators` and `creator` nodes.
    creators = fields.List(
        fields.Nested(DataCiteCreatorSchemaV4),
        attribute='metadata.authors',
        dump_only=True)
    titles = fields.List(
        fields.Nested(DataCiteTitleSchemaV4),
        attribute='metadata',
        dump_only=True)
    publisher = fields.Method('get_publisher', dump_only=True)
    publicationYear = fields.Method('get_year', dump_only=True)
    resourceType = fields.Nested(
        DataCiteResourceTypeSchemaV4,
        attribute='metadata.resource_type',
        dump_only=True)

    def get_identifier(self, data):
        """Get record main identifier."""
        return {
            # If no DOI, 'DUMMY' value is used and will be ignored by DataCite
            'identifier': data.get('metadata', {}).get('doi') or 'DUMMY',
            'identifierType': 'DOI'
        }

    def get_publisher(self, data):
        """Extract publisher."""
        return current_app.config['DOI_PUBLISHER']

    def get_year(self, data):
        """Extract year.

        Current year for now.
        TODO: Revisit when dealing with embargo.
        """
        return date.today().year
639138968935973bda9f7100f85f9fc9166454f1 | 399 | py | Python | zip/unzip_print.py | juarezhenriquelisboa/Python | 5c5498b33e7cba4e3bfa322a6a76bed74b68e6bf | [
"MIT"
] | 1 | 2021-01-01T14:46:28.000Z | 2021-01-01T14:46:28.000Z | zip/unzip_print.py | juarezhenriquelisboa/Python | 5c5498b33e7cba4e3bfa322a6a76bed74b68e6bf | [
"MIT"
] | null | null | null | zip/unzip_print.py | juarezhenriquelisboa/Python | 5c5498b33e7cba4e3bfa322a6a76bed74b68e6bf | [
"MIT"
] | null | null | null | import zipfile
import sys

# Usage: python unzip_print.py PASSWORD [PASSWORD ...]
# Tries each password until "protegido.zip" extracts successfully, then
# prints the name and contents of every extracted file.
for arg in sys.argv[1:]:
    senha = arg.encode('utf-8')  # ZipFile passwords must be bytes in Python 3
    with zipfile.ZipFile("protegido.zip") as z:
        files = z.namelist()
        try:
            z.extractall(pwd=senha)
        except RuntimeError:
            continue  # wrong password, try the next argument

    for extracted_file in files:
        print("File name: " + extracted_file + "\n\nContents: ")
        with open(extracted_file) as f:
            print(f.read())
        print('\n\n')
    break  # extraction succeeded, stop trying passwords

# ----- awards/forms.py (JKimani77/awards, MIT) -----
from django import forms
from django.contrib.auth.forms import UserCreationForm,AuthenticationForm
from django.contrib.auth.models import User
from .models import Profile,Project,Review
class RegForm(UserCreationForm):
    email = forms.EmailField()

    class Meta:
        model = User
        fields = ('username', 'email', 'password1', 'password2')


class LoginForm(AuthenticationForm):
    username = forms.CharField(label='Username', max_length=254)
    password = forms.CharField(label='Password', widget=forms.PasswordInput)


class ProfileForm(forms.ModelForm):
    class Meta:
        model = Profile
        fields = ('profile_pic', 'bio')


class ProjectForm(forms.ModelForm):
    class Meta:
        model = Project
        fields = ('title', 'description', 'project_pic', 'project_link')


class RatingForm(forms.ModelForm):
    class Meta:
        model = Review
        fields = ('design', 'usability', 'content')

# ----- al_helper/__init__.py (Taehun/al_helper, Apache-2.0) -----
"""Let's score the unlabeled data for the active learning"""
from al_helper.apis import build
from al_helper.helpers import ALHelper, ALHelperFactory, ALHelperObjectDetection
__version__ = "0.1.0"
__all__ = ["build", "ALHelper", "ALHelperFactory", "ALHelperObjectDetection"]

# ----- flocka/extensions.py (sleekslush/flocka, BSD-2-Clause) -----
from flask_cache import Cache
from flask_debugtoolbar import DebugToolbarExtension
from flask_login import LoginManager
from flask_assets import Environment
from flask_migrate import Migrate
from flocka.models import User
# Setup flask cache
cache = Cache()
# Init flask assets
assets_env = Environment()
# Debug Toolbar
debug_toolbar = DebugToolbarExtension()
# Alembic
migrate = Migrate()
# Flask Login
login_manager = LoginManager()
login_manager.login_view = "main.login"
@login_manager.user_loader
def load_user(userid):
    return User.query.get(userid)

# ----- 1122.py (wilbertgeng/LeetCode_exercise, MIT) -----
"""1122. Relative Sort Array"""
class Solution(object):
    def relativeSortArray(self, arr1, arr2):
        """
        :type arr1: List[int]
        :type arr2: List[int]
        :rtype: List[int]
        """
        # Rank every value by its position in arr2; values absent from arr2
        # sort after all ranked values, in ascending order (values are <= 1000).
        pos = {num: i for i, num in enumerate(arr2)}
        return sorted(arr1, key=lambda x: pos.get(x, 1000 + x))
        # Alternative one-liner (equivalent, but rebuilds the index per key):
        # return sorted(arr1, key=(arr2 + sorted(arr1)).index)
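# A quick self-contained check of the position-map approach, re-declaring the
# class so the snippet runs on its own; the input is the published LeetCode
# example for problem 1122.

```python
class Solution(object):
    def relativeSortArray(self, arr1, arr2):
        # Values in arr2 sort by their arr2 position; all others sort after,
        # ascending (problem guarantees values <= 1000).
        pos = {num: i for i, num in enumerate(arr2)}
        return sorted(arr1, key=lambda x: pos.get(x, 1000 + x))

result = Solution().relativeSortArray(
    [2, 3, 1, 3, 2, 4, 6, 7, 9, 2, 19], [2, 1, 4, 3, 9, 6])
print(result)  # [2, 2, 2, 1, 4, 3, 3, 9, 6, 7, 19]
```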

# ----- beartype/_decor/conf.py (posita/beartype, MIT) -----
#!/usr/bin/env python3
# --------------------( LICENSE )--------------------
# Copyright (c) 2014-2021 Beartype authors.
# See "LICENSE" for further details.
'''
**Beartype decorator configuration API** (i.e., enumerations, classes,
singletons, and other attributes enabling external callers to selectively
configure the :func:`beartype` decorator on a fine-grained per-decoration call
basis).
Most of the public attributes defined by this private submodule are explicitly
exported to external callers in our top-level :mod:`beartype.__init__`
submodule. This private submodule is *not* intended for direct importation by
downstream callers.
'''
# ....................{ IMPORTS }....................
from enum import (
    Enum,
    auto as next_enum_member_value,
    unique as die_unless_enum_member_values_unique,
)
# ....................{ ENUMERATIONS }....................
#FIXME: Unit test us up, please.
@die_unless_enum_member_values_unique
class BeartypeStrategy(Enum):
    '''
    Enumeration of all kinds of **container type-checking strategies** (i.e.,
    competing procedures for type-checking items of containers passed to or
    returned from :func:`beartype.beartype`-decorated callables, each with
    concomitant benefits and disadvantages with respect to runtime complexity
    and quality assurance).

    Strategies are intentionally named according to `conventional Big O
    notation <Big O_>`__ (e.g., :attr:`BeartypeStrategy.On` enables the
    ``O(n)`` strategy). Strategies are established per-decoration at the
    fine-grained level of callables decorated by the :func:`beartype.beartype`
    decorator by either:

    * Calling a high-level convenience decorator establishing that strategy
      (e.g., :func:`beartype.conf.beartype_On`, enabling the ``O(n)`` strategy
      for all callables decorated by that decorator).
    * Setting the :attr:`BeartypeConfiguration.strategy` variable of the
      :attr:`BeartypeConfiguration` object passed as the optional ``conf``
      parameter to the lower-level core :func:`beartype.beartype` decorator.

    Strategies enforce and guarantee their corresponding runtime complexities
    (e.g., ``O(n)``) across all type checks performed for all callables
    enabling those strategies. For example, a callable decorated with the
    :attr:`BeartypeStrategy.On` strategy will exhibit linear runtime complexity
    as its type-checking overhead.

    .. _Big O:
       https://en.wikipedia.org/wiki/Big_O_notation

    Attributes
    ----------
    O0 : EnumMemberType
        **No-time strategy** (i.e., disabling type-checking for a callable by
        reducing :func:`beartype.beartype` to the identity decorator for that
        callable). Although currently useless, this strategy will usefully
        allow end users to selectively prevent callables from being
        type-checked by our as-yet-unimplemented import hook. When implemented,
        that hook will type-check *all* callables in a given package by
        default. Some means is needed to prevent that from happening for select
        callables. This is that means.
    O1 : EnumMemberType
        **Constant-time strategy** (i.e., our default ``O(1)`` strategy,
        type-checking a single randomly selected item of a container; this is
        the strategy you currently enjoy). Since this is the default, this
        strategy need *not* be explicitly configured.
    Ologn : EnumMemberType
        **Logarithmic-time strategy** (i.e., an ``O(log n)`` strategy
        type-checking a randomly selected number of items ``j`` of a container
        ``obj`` such that ``j = log(len(obj))``). This strategy is **currently
        unimplemented.** (*To be implemented by a future beartype release.*)
    On : EnumMemberType
        **Linear-time strategy** (i.e., an ``O(n)`` strategy type-checking
        *all* items of a container). This strategy is **currently
        unimplemented.** (*To be implemented by a future beartype release.*)
    '''

    O0 = next_enum_member_value()
    O1 = next_enum_member_value()
    Ologn = next_enum_member_value()
    On = next_enum_member_value()
# ....................{ CLASSES }....................
#FIXME: *INSUFFICIENT.* Critically, we also *MUST* declare a __new__() method
#to enforce memoization. A new "BeartypeConfiguration" instance is instantiated
#*ONLY* if no existing instance with the same settings has been previously
#instantiated; else, an existing cached instance is reused. This is essential,
#as the @beartype decorator itself memoizes on the basis of this instance. See
#the following StackOverflow post for the standard design pattern:
# https://stackoverflow.com/a/13054570/2809027
#
#Note, however, that there's an intriguing gotcha:
# "When you define __new__, you usually do all the initialization work in
# __new__; just don't define __init__ at all."
#
#Why? Because if you define both __new__() and __init__() then Python
#implicitly invokes *BOTH*, even if the object returned by __new__() has
#already been previously initialized with __init__(). This is a facepalm
#moment, although the rationale does indeed make sense. Ergo, we *ONLY* want to
#define __new__(); the existing __init__() should simply be renamed __new__()
#and generalized from there to support caching.
#FIXME: Unit test us up, please.
#FIXME: Document us up, please.
class BeartypeConfiguration(object):
    '''
    * An ``is_debug`` boolean instance variable. When enabled, ``@beartype``
      emits debugging information for the decorated callable, including the
      code for the wrapper function dynamically generated by ``@beartype``
      that type-checks that callable.
    * A ``strategy`` instance variable whose value must be a
      :class:`BeartypeStrategy` enumeration member. This is how you notify
      ``@beartype`` of which strategy to apply to each callable.
    '''

    is_debug: bool
    strategy: BeartypeStrategy

    def __init__(
        self,
        is_debug: bool = False,
        strategy: BeartypeStrategy = BeartypeStrategy.O1,
    ) -> None:
        #FIXME: Implement actual validation, please.
        if not isinstance(is_debug, bool):
            raise ValueError(f'{is_debug!r} not a boolean.')
        if not isinstance(strategy, BeartypeStrategy):
            raise ValueError(f'{strategy!r} not a BeartypeStrategy.')

        self.is_debug = is_debug
        self.strategy = strategy
# ....................{ SINGLETONS }....................
#FIXME: Unit test us up, please.
#FIXME: Document us up, please. Note this attribute is intentionally *NOT*
#exported from "beartype.__init__".
BEAR_CONF_DEFAULT = BeartypeConfiguration()

# ----- examples/blogprj/urls.py (pimentech/django-mongoforms, BSD-3-Clause) -----
from django.conf.urls.defaults import *
urlpatterns = patterns('',
    (r'^', include('apps.blog.urls')),
)

# ----- misago/misago/core/tests/test_frontendcontext_middleware.py (vascoalramos/misago-deployment, MIT) -----
from django.test import TestCase
from ..middleware import FrontendContextMiddleware
class MockRequest:
    pass


class FrontendContextMiddlewareTests(TestCase):
    def test_middleware_frontend_context_dict(self):
        """Middleware sets frontend_context dict on request"""
        request = MockRequest()

        FrontendContextMiddleware().process_request(request)

        self.assertEqual(request.frontend_context, {})

# ----- src/hyperloop/geometry/inlet.py (uwhl/Hyperloop, Apache-2.0) -----
from math import pi, sqrt
from openmdao.core.component import Component
class InletGeom(Component):
    '''Calculates the dimensions for the inlet and compressor entrance.'''

    def __init__(self):
        super(InletGeom, self).__init__()
        self.add_param('wall_thickness', 0.05, desc='thickness of inlet wall', units='m')
        # self.add_param('area_in', 0.0, desc='flow area required at front of inlet', units='m**2')
        self.add_param('area_out', 0.0, desc='flow area required at back of inlet', units='m**2')
        self.add_param('hub_to_tip', 0.4, desc='hub to tip ratio for compressor')
        self.add_param('cross_section', 1.4, desc='cross sectional area of passenger capsule', units='m**2')
        self.add_param('tube_area', 2.33, desc='cross sectional area inside of tube', units='m**2')

        self.add_output('r_back_inner', 0.0, desc='inner radius of back of inlet', units='m')
        self.add_output('r_back_outer', 0.0, desc='outer radius of back of inlet', units='m')
        self.add_output('bypass_area', 0.0, desc='available flow area around capsule', units='m**2')
        self.add_output('area_frontal', 0.0, desc='total capsule frontal area', units='m**2')

    def solve_nonlinear(self, params, unknowns, resids):
        unknowns['r_back_inner'] = sqrt(params['area_out'] / pi / (1.0 - params['hub_to_tip'] ** 2))
        unknowns['r_back_outer'] = unknowns['r_back_inner'] + params['wall_thickness']
        unknowns['bypass_area'] = params['tube_area'] - params['cross_section']
        unknowns['area_frontal'] = pi * (unknowns['r_back_outer']) ** 2


if __name__ == '__main__':
    from openmdao.core.problem import Problem
    from openmdao.core.group import Group

    p = Problem(root=Group())
    p.root.add('comp', InletGeom())
    p.setup()
    p.run()

    for var_name, units in (('r_back_inner', 'm'), ('r_back_outer', 'm'), ('bypass_area', 'm**2'), ('area_frontal', 'm**2')):
        print('%s (%s): %f' % (var_name, units, p.root.comp.unknowns[var_name]))
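# The annulus geometry used by solve_nonlinear can be sanity-checked without
# OpenMDAO: r_back_inner is chosen so the annulus between hub and tip exactly
# recovers the requested flow area. The numeric values below are illustrative.

```python
from math import pi, sqrt

area_out, hub_to_tip, wall_thickness = 0.5, 0.4, 0.05

r_inner = sqrt(area_out / pi / (1.0 - hub_to_tip ** 2))
r_outer = r_inner + wall_thickness

# The annular flow area (disk minus hub) should equal the requested area_out.
annulus = pi * r_inner ** 2 * (1.0 - hub_to_tip ** 2)
assert abs(annulus - area_out) < 1e-9
assert r_outer > r_inner
```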

# ----- convert.py (9Knight9n/crawler-legislation-uk, MIT) -----
# from os import listdir
# from os.path import isfile, join
#
# from save import xht_files_dir, txt_files_dir_converted
# from utils import convert_xht_to_txt,convert_xht_to_txt_2
# only_files = [f for f in listdir(xht_files_dir) if isfile(join(xht_files_dir, f))]
# for index,file_ in enumerate(only_files):
# print(f'doc {index+1}.{file_} converted.')
# f = open(xht_files_dir+"/"+file_, "r")
# text = f.read()
# text = convert_xht_to_txt_2(text)
# f = open(txt_files_dir_converted+"/"+file_[:-3]+"txt", "w")
# for line in text:
# f.write(line)
# f.close()
# break
# f = open(xht_files_dir+"/"+"The Air Navigation (Restriction of Flying) (Abingdon Air and Country Show) Regulations 2021.xht", "r")
# text = f.read()
# text = convert_xht_to_txt_2(text)
# if len(text) == 0:
# print("hello")
# f = open(txt_files_dir_converted+"/"+"The Air Navigation (Restriction of Flying) (Abingdon Air and Country Show) Regulations 2021."+"txt", "w")
# for line in text:
# f.write(line)
# f.close()
# import re
#
# regexes = [
# r'“.*”',
#
# ]
#
# pair = re.compile(regexes[0])
# print(pair.search('““dalam means”dwdw'))

# ----- clld/lib/wordpress.py (Woseseltops/clld, MIT) -----
"""
Client for the xmlrpc API of a wordpress blog.
.. note::

    We ignore ``blog_id`` altogether (see
    http://joseph.randomnetworks.com/archives/2008/06/10/blog-id-in-wordpress-and-xml-rpc-blog-apis/)
    and thus rely on identifying the appropriate blog by its xmlrpc endpoint.
"""
import re
import xmlrpclib
import requests
XMLRPC_PATH = 'xmlrpc.php'
def sluggify(phrase):
    """
    >>> assert sluggify('a and B') == 'a-and-b'
    """
    phrase = phrase.lower().strip()
    phrase = re.sub(r'\s+', '-', phrase)
    return phrase
class Client(object):
    """client to a wpmu blog

    provides a unified interface to functionality called over xmlrpc or plain http

    >>> c = Client('blog.example.org', 'user', 'password')
    >>> assert c.service_url == 'http://blog.example.org/xmlrpc.php'
    """
    def __init__(self, url, user, password):
        self.user = user
        self.password = password
        if not url.startswith('http://') and not url.startswith('https://'):
            url = 'http://' + url
        if not url.endswith(XMLRPC_PATH):
            if not url.endswith('/'):
                url += '/'
            url += XMLRPC_PATH
        self.service_url = url
        self.server = xmlrpclib.Server(self.service_url)
        self.base_url = self.service_url.replace(XMLRPC_PATH, '')
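# A standalone sketch (illustrative only) of the URL normalization performed by
# Client.__init__ above, factored into a hypothetical helper so the behavior
# can be exercised without a live blog:

```python
XMLRPC_PATH = 'xmlrpc.php'

def normalize_service_url(url):
    # Default to http:// when no scheme is given, then ensure the URL ends
    # with the xmlrpc endpoint path.
    if not url.startswith('http://') and not url.startswith('https://'):
        url = 'http://' + url
    if not url.endswith(XMLRPC_PATH):
        if not url.endswith('/'):
            url += '/'
        url += XMLRPC_PATH
    return url

assert normalize_service_url('blog.example.org') == 'http://blog.example.org/xmlrpc.php'
```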
    def get_post(self, id):  # pragma: no cover
        return self.server.metaWeblog.getPost(id, self.user, self.password)

    def get_authors(self):  # pragma: no cover
        return self.server.wp.getAuthors(0, self.user, self.password)

    def get_recent_posts(self, number_of_posts):  # pragma: no cover
        return self.server.metaWeblog.getRecentPosts(
            0, self.user, self.password, number_of_posts)
    def create_post(self,
                    title,
                    content,
                    categories=None,
                    published=False,
                    date=None,
                    tags='',
                    custom_fields=None,
                    **kwargs):
        published = [xmlrpclib.False, xmlrpclib.True][int(published)]
        struct = dict(title=title, description=content)
        if date:
            struct['date_created_gmt'] = date
            struct['dateCreated'] = date
        if tags:
            if isinstance(tags, (list, tuple)):
                tags = ','.join(tags)
            struct['mt_keywords'] = tags
        if custom_fields is not None:
            struct['custom_fields'] = [
                dict(key=key, value=value) for key, value in custom_fields.items()]
        struct.update(kwargs)
        post_id = self.server.metaWeblog.newPost(
            '', self.user, self.password, struct, published)
        if categories:
            self.set_categories(categories, post_id)
        return post_id
    def get_categories(self, name=None):
        res = []
        for c in self.server.wp.getCategories('', self.user, self.password):
            if name:
                if c['categoryName'] == name:
                    res.append(c)
            else:
                res.append(c)
        for c in res:
            c['name'] = c['categoryName']
            c['id'] = c['categoryId']
        return res
    def set_categories(self, categories, post_id=None):
        existing_categories = dict(
            [(c['categoryName'], c) for c in self.get_categories()])
        cat_map = {}
        for cat in categories:
            if cat['name'] not in existing_categories:
                struct = dict(name=cat['name'])
                for attr in ['parent_id', 'description', 'slug']:
                    if attr in cat:
                        struct[attr] = cat[attr]
                cat_map[cat['name']] = int(
                    self.server.wp.newCategory('', self.user, self.password, struct))
            else:
                cat_map[cat['name']] = int(existing_categories[cat['name']]['id'])
        if post_id:
            self.server.mt.setPostCategories(
                post_id,
                self.user,
                self.password,
                [dict(categoryId=cat_map[name]) for name in cat_map])
        return cat_map
    def get_post_id_from_path(self, path):
        """
        pretty hacky way to determine whether some post exists
        """
        if not path.startswith(self.base_url):
            path = self.base_url + path
        res = requests.get(path)
        if res.status_code != 200:
            return None
        m = re.search(
            r'\<input type\="hidden" name\="comment_post_ID" value\="(?P<id>[0-9]+)" \/\>',
            res.text)
        if m:
            return int(m.group('id'))
        else:
            p = r'\<div\s+class\=\"post\"\s+id\=\"post\-(?P<id>[0-9]+)\"\>'
            if len(re.findall(p, res.text)) == 1:
                m = re.search(p, res.text)
                return int(m.group('id'))

# ----- models/env.py (claranet/cloud-deploy, Apache-2.0) -----
env = ['prod', 'preprod', 'dev', 'staging', 'test', 'demo', 'int', 'uat', 'oat']

# ----- nlpr/utils/nL_nr_assay_check.py (jgrembi/nL-qPCR_PathogenChip, CC0-1.0) -----
# The purpose of this script is to select assays from a list of primer
# combinations, such that no more than two of the assays target approximately
# the same positions.
import sys

fn = sys.argv[1]
fh = open(fn, 'r')


def plus_or_minus(x, h):
    L = []
    for i in range(h):
        L.append(int(x - i))
        L.append(int(x + i))
    return list(set(L))


def lists_overlap3(a, b):
    return bool(set(a) & set(b))


forbidden_range_F = []
forbidden_range_R = []
forbidden_range_F2 = []
forbidden_range_R2 = []
forbidden_range_F3 = []
forbidden_range_R3 = []

# Take the best hit
line = fh.readline()
line = line.strip()
print(line)

for line in fh:
    forbidden_range_F3 = list(forbidden_range_F2)
    forbidden_range_R3 = list(forbidden_range_R2)
    forbidden_range_F2 = list(forbidden_range_F)
    forbidden_range_R2 = list(forbidden_range_R)
    # print("#####")
    # print(forbidden_range_F2)
    # print(forbidden_range_F3)
    # print("#####")
    line = line.strip()
    start = int(line.split()[6])
    end = int(line.split()[7])
    forbidden_range_F.append(start)
    forbidden_range_R.append(end)
    test_F = plus_or_minus(start, 4)
    test_R = plus_or_minus(end, 4)
    if lists_overlap3(test_F, forbidden_range_F2) and lists_overlap3(test_R, forbidden_range_R2):
        pass
    else:
        print(line)
fh.close()
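# A small behavior demo for the two helpers in this script (reproduced here so
# the snippet is self-contained): plus_or_minus() builds a window of positions
# around x, and lists_overlap3() tests whether two such windows share any
# position. Input values are illustrative.

```python
def plus_or_minus(x, h):
    L = []
    for i in range(h):
        L.append(int(x - i))
        L.append(int(x + i))
    return list(set(L))

def lists_overlap3(a, b):
    return bool(set(a) & set(b))

# A window of half-width 3 around 100: positions 97..103.
assert sorted(plus_or_minus(100, 4)) == [97, 98, 99, 100, 101, 102, 103]
# Windows around 100 and 105 share positions 102 and 103...
assert lists_overlap3(plus_or_minus(100, 4), plus_or_minus(105, 4))
# ...but windows around 100 and 110 do not overlap.
assert not lists_overlap3(plus_or_minus(100, 4), plus_or_minus(110, 4))
```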

# ----- csharp/private/sdk.bzl (j3parker/rules_csharp, Apache-2.0) -----
"""
Declarations for the .NET SDK Downloads URLs and version
These are the URLs to download the .NET SDKs for each of the supported operating systems. These URLs are accessible from: https://dotnet.microsoft.com/download/dotnet-core.
"""
DOTNET_SDK_VERSION = "3.1.100"
DOTNET_SDK = {
    "windows": {
        "url": "https://download.visualstudio.microsoft.com/download/pr/28a2c4ff-6154-473b-bd51-c62c76171551/ea47eab2219f323596c039b3b679c3d6/dotnet-sdk-3.1.100-win-x64.zip",
        "hash": "abcd034b230365d9454459e271e118a851969d82516b1529ee0bfea07f7aae52",
    },
    "linux": {
        "url": "https://download.visualstudio.microsoft.com/download/pr/d731f991-8e68-4c7c-8ea0-fad5605b077a/49497b5420eecbd905158d86d738af64/dotnet-sdk-3.1.100-linux-x64.tar.gz",
        "hash": "3687b2a150cd5fef6d60a4693b4166994f32499c507cd04f346b6dda38ecdc46",
    },
    "osx": {
        "url": "https://download.visualstudio.microsoft.com/download/pr/bea99127-a762-4f9e-aac8-542ad8aa9a94/afb5af074b879303b19c6069e9e8d75f/dotnet-sdk-3.1.100-osx-x64.tar.gz",
        "hash": "b38e6f8935d4b82b283d85c6b83cd24b5253730bab97e0e5e6f4c43e2b741aab",
    },
}
RUNTIME_TFM = "netcoreapp3.1"
RUNTIME_FRAMEWORK_VERSION = "3.1.0"
| 50.208333 | 179 | 0.751867 | 132 | 1,205 | 6.818182 | 0.507576 | 0.05 | 0.088889 | 0.093333 | 0.213333 | 0.166667 | 0.166667 | 0.166667 | 0 | 0 | 0 | 0.261278 | 0.117012 | 1,205 | 23 | 180 | 52.391304 | 0.584586 | 0.190871 | 0 | 0 | 0 | 0.176471 | 0.753878 | 0.198552 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
987b85fcc75895ef8fd5e121355ef5a571c2a852 | 586 | py | Python | gpytorch/priors/__init__.py | bdecost/gpytorch | a5f1ad3e47daf3f8db04b605fb13ff3f9f871e3a | [
"MIT"
] | null | null | null | gpytorch/priors/__init__.py | bdecost/gpytorch | a5f1ad3e47daf3f8db04b605fb13ff3f9f871e3a | [
"MIT"
] | null | null | null | gpytorch/priors/__init__.py | bdecost/gpytorch | a5f1ad3e47daf3f8db04b605fb13ff3f9f871e3a | [
"MIT"
] | 1 | 2018-11-15T10:03:40.000Z | 2018-11-15T10:03:40.000Z | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
from .gamma_prior import GammaPrior
from .multivariate_normal_prior import MultivariateNormalPrior
from .normal_prior import NormalPrior
from .smoothed_box_prior import SmoothedBoxPrior
from .wishart_prior import InverseWishartPrior, WishartPrior
from .lkj_prior import LKJCovariancePrior
__all__ = ["GammaPrior", "InverseWishartPrior", "MultivariateNormalPrior", "NormalPrior",
           "SmoothedBoxPrior", "WishartPrior", "LKJCovariancePrior"]
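`from package import *` resolves each entry of `__all__` by name, so the entries must be strings rather than the class objects themselves. A self-contained sketch using a throwaway stand-in module (hypothetical, not gpytorch itself):

```python
import types

# A throwaway module standing in for the priors package (hypothetical names).
priors = types.ModuleType("priors")
priors.GammaPrior = type("GammaPrior", (), {})
priors.NormalPrior = type("NormalPrior", (), {})
priors.__all__ = ["GammaPrior", "NormalPrior"]  # strings, looked up by name

# `from priors import *` effectively performs this lookup for each listed name:
exported = {name: getattr(priors, name) for name in priors.__all__}
print(sorted(exported))  # ['GammaPrior', 'NormalPrior']
```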
| 36.625 | 81 | 0.863481 | 60 | 586 | 7.916667 | 0.416667 | 0.138947 | 0.134737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.109215 | 586 | 15 | 82 | 39.066667 | 0.909962 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.833333 | 0 | 0.833333 | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
98821ed78144f8e01e3dd237b4ed382e65a64488 | 797 | py | Python | setup.py | poliquin/pyfixwidth | a41f25b788f5bd78465f51b42258922709dce9bc | [
"MIT"
] | 6 | 2020-02-13T22:20:15.000Z | 2021-10-12T02:30:51.000Z | setup.py | poliquin/pyfixwidth | a41f25b788f5bd78465f51b42258922709dce9bc | [
"MIT"
] | null | null | null | setup.py | poliquin/pyfixwidth | a41f25b788f5bd78465f51b42258922709dce9bc | [
"MIT"
] | 1 | 2021-06-16T21:21:38.000Z | 2021-06-16T21:21:38.000Z | # -*- coding: utf8 -*-
from distutils.core import setup
setup(
name='pyfixwidth',
packages=['fixwidth'],
version='0.1.1',
description="Read fixed width data files",
author='Chris Poliquin',
author_email='chrispoliquin@gmail.com',
url='https://github.com/poliquin/pyfixwidth',
keywords=['data', 'fixed width', 'parse', 'parser'],
classifiers=[
'Programming Language :: Python :: 3',
'Operating System :: OS Independent',
'Development Status :: 4 - Beta',
'Intended Audience :: Developers',
'Topic :: Utilities'
],
long_description="""\
Read fixed width data files
---------------------------
Python 3 module for reading fixed width data files and converting the field
contents to appropriate Python types.
"""
)
| 28.464286 | 75 | 0.624843 | 85 | 797 | 5.835294 | 0.741176 | 0.080645 | 0.084677 | 0.114919 | 0.137097 | 0.137097 | 0 | 0 | 0 | 0 | 0 | 0.011094 | 0.208281 | 797 | 27 | 76 | 29.518519 | 0.77496 | 0.025094 | 0 | 0 | 0 | 0 | 0.607742 | 0.064516 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.041667 | 0 | 0.041667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
98907675d26bfe65790edfc2bde7b8179aee4ad8 | 5,793 | py | Python | tests/test_losses/test_mesh_losses.py | nightfuryyy/mmpose | 910d9e31dd9d46e3329be1b7567e6309d70ab64c | [
"Apache-2.0"
] | 1,775 | 2020-07-10T01:20:01.000Z | 2022-03-31T16:31:50.000Z | tests/test_losses/test_mesh_losses.py | KHB1698/mmpose | 93c3a742c540dfb4ca515ad545cef705a07d90b4 | [
"Apache-2.0"
] | 1,021 | 2020-07-11T11:40:24.000Z | 2022-03-31T14:32:26.000Z | tests/test_losses/test_mesh_losses.py | KHB1698/mmpose | 93c3a742c540dfb4ca515ad545cef705a07d90b4 | [
"Apache-2.0"
] | 477 | 2020-07-11T11:27:51.000Z | 2022-03-31T09:42:25.000Z | # Copyright (c) OpenMMLab. All rights reserved.
import pytest
import torch
from numpy.testing import assert_almost_equal
from mmpose.models import build_loss
from mmpose.models.utils.geometry import batch_rodrigues
def test_mesh_loss():
"""test mesh loss."""
loss_cfg = dict(
type='MeshLoss',
joints_2d_loss_weight=1,
joints_3d_loss_weight=1,
vertex_loss_weight=1,
smpl_pose_loss_weight=1,
smpl_beta_loss_weight=1,
img_res=256,
focal_length=5000)
loss = build_loss(loss_cfg)
smpl_pose = torch.zeros([1, 72], dtype=torch.float32)
smpl_rotmat = batch_rodrigues(smpl_pose.view(-1, 3)).view(-1, 24, 3, 3)
smpl_beta = torch.zeros([1, 10], dtype=torch.float32)
camera = torch.tensor([[1, 0, 0]], dtype=torch.float32)
vertices = torch.rand([1, 6890, 3], dtype=torch.float32)
joints_3d = torch.ones([1, 24, 3], dtype=torch.float32)
joints_2d = loss.project_points(joints_3d, camera) + (256 - 1) / 2
fake_pred = {}
fake_pred['pose'] = smpl_rotmat
fake_pred['beta'] = smpl_beta
fake_pred['camera'] = camera
fake_pred['vertices'] = vertices
fake_pred['joints_3d'] = joints_3d
fake_gt = {}
fake_gt['pose'] = smpl_pose
fake_gt['beta'] = smpl_beta
fake_gt['vertices'] = vertices
fake_gt['has_smpl'] = torch.ones(1, dtype=torch.float32)
fake_gt['joints_3d'] = joints_3d
fake_gt['joints_3d_visible'] = torch.ones([1, 24, 1], dtype=torch.float32)
fake_gt['joints_2d'] = joints_2d
fake_gt['joints_2d_visible'] = torch.ones([1, 24, 1], dtype=torch.float32)
losses = loss(fake_pred, fake_gt)
assert torch.allclose(losses['vertex_loss'], torch.tensor(0.))
assert torch.allclose(losses['smpl_pose_loss'], torch.tensor(0.))
assert torch.allclose(losses['smpl_beta_loss'], torch.tensor(0.))
assert torch.allclose(losses['joints_3d_loss'], torch.tensor(0.))
assert torch.allclose(losses['joints_2d_loss'], torch.tensor(0.))
fake_pred = {}
fake_pred['pose'] = smpl_rotmat + 1
fake_pred['beta'] = smpl_beta + 1
fake_pred['camera'] = camera
fake_pred['vertices'] = vertices + 1
fake_pred['joints_3d'] = joints_3d.clone()
joints_3d_t = joints_3d.clone()
joints_3d_t[:, 0] = joints_3d_t[:, 0] + 1
fake_gt = {}
fake_gt['pose'] = smpl_pose
fake_gt['beta'] = smpl_beta
fake_gt['vertices'] = vertices
fake_gt['has_smpl'] = torch.ones(1, dtype=torch.float32)
fake_gt['joints_3d'] = joints_3d_t
fake_gt['joints_3d_visible'] = torch.ones([1, 24, 1], dtype=torch.float32)
fake_gt['joints_2d'] = joints_2d + (256 - 1) / 2
fake_gt['joints_2d_visible'] = torch.ones([1, 24, 1], dtype=torch.float32)
losses = loss(fake_pred, fake_gt)
assert torch.allclose(losses['vertex_loss'], torch.tensor(1.))
assert torch.allclose(losses['smpl_pose_loss'], torch.tensor(1.))
assert torch.allclose(losses['smpl_beta_loss'], torch.tensor(1.))
assert torch.allclose(losses['joints_3d_loss'], torch.tensor(0.5 / 24))
assert torch.allclose(losses['joints_2d_loss'], torch.tensor(0.5))
def test_gan_loss():
"""test gan loss."""
with pytest.raises(NotImplementedError):
loss_cfg = dict(
type='GANLoss',
gan_type='test',
real_label_val=1.0,
fake_label_val=0.0,
loss_weight=1)
_ = build_loss(loss_cfg)
input_1 = torch.ones(1, 1)
input_2 = torch.ones(1, 3, 6, 6) * 2
# vanilla
loss_cfg = dict(
type='GANLoss',
gan_type='vanilla',
real_label_val=1.0,
fake_label_val=0.0,
loss_weight=2.0)
gan_loss = build_loss(loss_cfg)
loss = gan_loss(input_1, True, is_disc=False)
assert_almost_equal(loss.item(), 0.6265233)
loss = gan_loss(input_1, False, is_disc=False)
assert_almost_equal(loss.item(), 2.6265232)
loss = gan_loss(input_1, True, is_disc=True)
assert_almost_equal(loss.item(), 0.3132616)
loss = gan_loss(input_1, False, is_disc=True)
assert_almost_equal(loss.item(), 1.3132616)
# lsgan
loss_cfg = dict(
type='GANLoss',
gan_type='lsgan',
real_label_val=1.0,
fake_label_val=0.0,
loss_weight=2.0)
gan_loss = build_loss(loss_cfg)
loss = gan_loss(input_2, True, is_disc=False)
assert_almost_equal(loss.item(), 2.0)
loss = gan_loss(input_2, False, is_disc=False)
assert_almost_equal(loss.item(), 8.0)
loss = gan_loss(input_2, True, is_disc=True)
assert_almost_equal(loss.item(), 1.0)
loss = gan_loss(input_2, False, is_disc=True)
assert_almost_equal(loss.item(), 4.0)
# wgan
loss_cfg = dict(
type='GANLoss',
gan_type='wgan',
real_label_val=1.0,
fake_label_val=0.0,
loss_weight=2.0)
gan_loss = build_loss(loss_cfg)
loss = gan_loss(input_2, True, is_disc=False)
assert_almost_equal(loss.item(), -4.0)
loss = gan_loss(input_2, False, is_disc=False)
assert_almost_equal(loss.item(), 4)
loss = gan_loss(input_2, True, is_disc=True)
assert_almost_equal(loss.item(), -2.0)
loss = gan_loss(input_2, False, is_disc=True)
assert_almost_equal(loss.item(), 2.0)
# hinge
loss_cfg = dict(
type='GANLoss',
gan_type='hinge',
real_label_val=1.0,
fake_label_val=0.0,
loss_weight=2.0)
gan_loss = build_loss(loss_cfg)
loss = gan_loss(input_2, True, is_disc=False)
assert_almost_equal(loss.item(), -4.0)
loss = gan_loss(input_2, False, is_disc=False)
assert_almost_equal(loss.item(), -4.0)
loss = gan_loss(input_2, True, is_disc=True)
assert_almost_equal(loss.item(), 0.0)
loss = gan_loss(input_2, False, is_disc=True)
assert_almost_equal(loss.item(), 3.0)
| 35.323171 | 78 | 0.655619 | 887 | 5,793 | 3.990981 | 0.110485 | 0.043503 | 0.081638 | 0.072316 | 0.775989 | 0.744915 | 0.720904 | 0.662429 | 0.622599 | 0.531921 | 0 | 0.053247 | 0.202486 | 5,793 | 163 | 79 | 35.539877 | 0.712987 | 0.017607 | 0 | 0.478261 | 0 | 0 | 0.073291 | 0 | 0 | 0 | 0 | 0 | 0.195652 | 1 | 0.014493 | false | 0 | 0.036232 | 0 | 0.050725 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
989d14ba8bad9846c10db51fb0c7bf4b880dcf12 | 1,422 | py | Python | test/test_add_contact_to_group.py | havrylyshyn/python_training | 2b1e1a3dd3a2b86ce1068fe52e233dee42b07580 | [
"Apache-2.0"
] | null | null | null | test/test_add_contact_to_group.py | havrylyshyn/python_training | 2b1e1a3dd3a2b86ce1068fe52e233dee42b07580 | [
"Apache-2.0"
] | null | null | null | test/test_add_contact_to_group.py | havrylyshyn/python_training | 2b1e1a3dd3a2b86ce1068fe52e233dee42b07580 | [
"Apache-2.0"
] | null | null | null | from model.contact import Contact
from model.group import Group
import random
def test_add_contact_to_group(app, db):
if len(db.get_contact_list()) == 0:
app.contact.create(Contact(firstname="contact", lastname="forGroup", address="UA, Kyiv, KPI", homephone="0123456789", email="test@mail.com"))
if len(db.get_group_list()) == 0:
app.group.create(Group(name="groupForContact", header="header", footer="footer"))
contact = random.choice(db.get_contact_list())
group = random.choice(db.get_group_list())
app.contact.add_contact_to_group(contact.id, group.id)
assert object_in_list(contact, db.get_contacts_from_group(group))
# assert db.get_contacts_from_group(group).__contains__(contact)
def test_add_contact_to_group_2(app, db, orm):
if len(db.get_contact_list()) == 0:
app.contact.create(Contact(firstname="contact", lastname="forGroup", address="UA, Kyiv, KPI", homephone="0123456789", email="test@mail.com"))
if len(db.get_group_list()) == 0:
app.group.create(Group(name="groupForContact", header="header", footer="footer"))
contact = random.choice(db.get_contact_list())
group = random.choice(db.get_group_list())
app.contact.add_contact_to_group(contact.id, group.id)
assert contact in orm.get_contacts_in_group(group)
def object_in_list(obj, items):
    # avoid shadowing the built-in names `object` and `list`
    return obj in items
| 41.823529 | 149 | 0.716596 | 206 | 1,422 | 4.713592 | 0.228155 | 0.051493 | 0.049434 | 0.070031 | 0.774459 | 0.774459 | 0.669413 | 0.669413 | 0.669413 | 0.669413 | 0 | 0.020525 | 0.14346 | 1,422 | 33 | 150 | 43.090909 | 0.776683 | 0.043601 | 0 | 0.538462 | 0 | 0 | 0.115129 | 0 | 0 | 0 | 0 | 0 | 0.076923 | 1 | 0.115385 | false | 0 | 0.115385 | 0 | 0.307692 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
98a2ca2296e875523ce2f68c78e6507e53f436a6 | 721 | py | Python | Chapter10/fabfile_operations.py | frankethp/Hands-On-Enterprise-Automation-with-Python | 4d20dc5fda2265a2c3666770b8ad53e63c7ae07c | [
"MIT"
] | 51 | 2018-07-02T04:03:07.000Z | 2022-03-08T07:20:29.000Z | Chapter10/fabfile_operations.py | MindaugasVaitkus2/Hands-On-Enterprise-Automation-with-Python | 39471804525701e634bd35046d8db3c0bca51dd6 | [
"MIT"
] | 1 | 2018-08-06T10:13:15.000Z | 2020-10-08T12:27:17.000Z | Chapter10/fabfile_operations.py | MindaugasVaitkus2/Hands-On-Enterprise-Automation-with-Python | 39471804525701e634bd35046d8db3c0bca51dd6 | [
"MIT"
] | 43 | 2018-07-24T08:50:41.000Z | 2022-03-18T21:45:40.000Z | #!/usr/bin/python
__author__ = "Bassim Aly"
__EMAIL__ = "basim.alyy@gmail.com"
from fabric.api import *
env.hosts = [
'10.10.10.140', # ubuntu machine
'10.10.10.193', # CentOS machine
]
env.user = "root"
env.password = "access123"
def run_ops():
    output = run("hostname")
    return output
def get_ops():
    try:
        get("/var/log/messages", "/root/")
    except Exception:  # download failures are deliberately ignored
        pass
def put_ops():
    try:
        put("/root/VeryImportantFile.txt", "/root/")
    except Exception:  # upload failures are deliberately ignored
        pass
def sudo_ops():
sudo("whoami") # it should print the root even if you use another account
def prompt_ops():
prompt("please supply release name", default="7.4.1708")
def reboot_ops():
reboot(wait=60, use_sudo=True)
| 16.386364 | 78 | 0.614424 | 100 | 721 | 4.28 | 0.67 | 0.037383 | 0.028037 | 0.079439 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.052158 | 0.228849 | 721 | 43 | 79 | 16.767442 | 0.717626 | 0.142857 | 0 | 0.222222 | 0 | 0 | 0.278502 | 0.043974 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0.111111 | 0.074074 | 0 | 0.296296 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
7f2f313dd457f2204002811c97661a2ad5ef1b2a | 633 | py | Python | Project Pattern/pattern_26.py | AMARTYA2020/nppy | 7f750534bb5faa4e661447ca132077de0ce0a0ed | [
"MIT"
] | 4 | 2020-12-07T10:15:08.000Z | 2021-11-17T11:21:07.000Z | Project Pattern/pattern_26.py | AMARTYA2020/nppy | 7f750534bb5faa4e661447ca132077de0ce0a0ed | [
"MIT"
] | null | null | null | Project Pattern/pattern_26.py | AMARTYA2020/nppy | 7f750534bb5faa4e661447ca132077de0ce0a0ed | [
"MIT"
] | 1 | 2021-02-17T07:53:13.000Z | 2021-02-17T07:53:13.000Z | class Pattern_Twenty_Six:
'''Pattern twenty_six
***
* *
*
* ***
* *
* *
***
'''
def __init__(self, strings='*'):
if not isinstance(strings, str):
strings = str(strings)
for i in range(7):
if i in [0, 6]:
print(f' {strings * 3}')
elif i in [1, 4, 5]:
print(f'{strings} {strings}')
elif i == 3:
print(f'{strings} {strings * 3}')
else:
print(strings)
if __name__ == '__main__':
Pattern_Twenty_Six()
| 19.181818 | 49 | 0.388626 | 60 | 633 | 3.816667 | 0.483333 | 0.170306 | 0.209607 | 0.174672 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026946 | 0.472354 | 633 | 32 | 50 | 19.78125 | 0.658683 | 0.086888 | 0 | 0 | 0 | 0 | 0.132937 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0 | 0 | 0.133333 | 0.266667 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
7f2fcabb83cbf0cb2f3a253ecff31e49e9b71e6b | 1,069 | py | Python | SVD.py | divi9626/RANSAC | c109e41bc9a476b64572d82c92a8aa20504df41a | [
"MIT"
] | null | null | null | SVD.py | divi9626/RANSAC | c109e41bc9a476b64572d82c92a8aa20504df41a | [
"MIT"
] | null | null | null | SVD.py | divi9626/RANSAC | c109e41bc9a476b64572d82c92a8aa20504df41a | [
"MIT"
] | null | null | null | import numpy as np
A = np.asarray([[-5, -5, -1, 0, 0, 0, 500, 500, 100],
[0, 0, 0, -5, -5, -1, 500, 500, 100],
[-150, -5, -1, 0, 0, 0, 30000, 1000, 200],
[0, 0, 0, -150, -5, -1, 12000, 400, 80],
[-150, -150, -1, 0, 0, 0, 33000, 33000, 220],
[0, 0, 0, -150, -150, -1, 12000, 12000, 80],
[-5, -150, -1, 0, 0, 0, 500, 15000, 100],
[0, 0, 0, -5, -150, -1, 1000, 30000, 200]])
# A = U*sig*Vt
U_A = A.dot(A.T)
V_A = A.T.dot(A)
# U holds the eigenvectors of A.dot(A.T); V holds the eigenvectors of A.T.dot(A).
# Caveat: the two eigendecompositions are computed independently, so the column
# ordering and signs of U and V need not be mutually consistent, and
# U.dot(S).dot(V.T) may not reproduce A exactly.
### Calculating SVD ######
U = np.linalg.eig(U_A)[1]
print('U Matrix is: ')
print(U)
V = np.linalg.eig(V_A)[1]
print('V Matrix is: ')
print(V)
sigma = np.sqrt(np.absolute(np.linalg.eig(V_A)[0]))
S = np.diag(sigma)
S = S[0:8, :]
print('Sigma Matrix is:')
print(S)
###### Homography #######
H = V[:, 8]
H = np.reshape(H,(3,3))
print('H matrix is: ')
print(H) | 26.725 | 70 | 0.424696 | 182 | 1,069 | 2.467033 | 0.258242 | 0.071269 | 0.053452 | 0.035635 | 0.256125 | 0 | 0 | 0 | 0 | 0 | 0 | 0.230216 | 0.34986 | 1,069 | 40 | 71 | 26.725 | 0.415827 | 0.097287 | 0 | 0 | 0 | 0 | 0.061111 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.038462 | 0 | 0.038462 | 0.307692 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
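Building U and V from two separate eigendecompositions, as above, leaves the column signs and ordering unmatched; `np.linalg.svd` returns a mutually consistent factorization with singular values already sorted. A sketch on a small example matrix (not the homography system above):

```python
import numpy as np

# Small example matrix, chosen only to illustrate the factorization.
M = np.array([[3.0, 1.0], [1.0, 3.0], [0.0, 2.0]])

U, s, Vt = np.linalg.svd(M, full_matrices=False)

# Singular values come back sorted in descending order...
assert s[0] >= s[1] >= 0

# ...and the three factors are mutually consistent, so the product
# reconstructs M up to floating-point error.
reconstructed = U @ np.diag(s) @ Vt
print(np.allclose(reconstructed, M))  # True
```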
7f2fea8781d091be4e8f7a425ea94d6e86e56885 | 911 | py | Python | apps/university/api/serializers.py | ilyukevich/university-schedule | 305e568b00a847a8d2d10217568e7f87833fb5b3 | [
"MIT"
] | null | null | null | apps/university/api/serializers.py | ilyukevich/university-schedule | 305e568b00a847a8d2d10217568e7f87833fb5b3 | [
"MIT"
] | null | null | null | apps/university/api/serializers.py | ilyukevich/university-schedule | 305e568b00a847a8d2d10217568e7f87833fb5b3 | [
"MIT"
] | null | null | null | from rest_framework import serializers
from ..models import Faculties, Departaments, StudyGroups, Auditories, Disciplines
class FacultiesSerializers(serializers.ModelSerializer):
"""Faculties API"""
class Meta:
fields = '__all__'
model = Faculties
class DepartamentsSerializers(serializers.ModelSerializer):
"""Departaments API"""
class Meta:
fields = '__all__'
model = Departaments
class StudyGroupsSerializers(serializers.ModelSerializer):
"""StudyGroups API"""
class Meta:
fields = '__all__'
model = StudyGroups
class AuditoriesSerializers(serializers.ModelSerializer):
"""Auditories API"""
class Meta:
fields = '__all__'
model = Auditories
class DisciplinesSerializers(serializers.ModelSerializer):
"""Disciplines API"""
class Meta:
fields = '__all__'
model = Disciplines
| 21.186047 | 82 | 0.683864 | 73 | 911 | 8.246575 | 0.315068 | 0.215947 | 0.099668 | 0.149502 | 0.215947 | 0.215947 | 0 | 0 | 0 | 0 | 0 | 0 | 0.227223 | 911 | 42 | 83 | 21.690476 | 0.855114 | 0.084523 | 0 | 0.454545 | 0 | 0 | 0.043317 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
7f3f0bcb734c5067a788e80bda49721133fbbbe5 | 4,247 | py | Python | oo/carro.py | RafaelLJC/pythonbirds | 43d401c6ef2b539ec45e19a218d0032de0435162 | [
"MIT"
] | null | null | null | oo/carro.py | RafaelLJC/pythonbirds | 43d401c6ef2b539ec45e19a218d0032de0435162 | [
"MIT"
] | null | null | null | oo/carro.py | RafaelLJC/pythonbirds | 43d401c6ef2b539ec45e19a218d0032de0435162 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Tue Jul 21 16:41:09 2020
@author: rafae
Exercício
Você deve criar uma classe carro que vai possuir dois atributos
compostos por outras duas classes:
1) motor;
2) Direção.
O motor terá a responsabilidade de controlar a velocidade.
Ele oferece os seguintes atributos:
1) Atributo de dado velocidade;
2) Método acelerar, que deverá incrementar a velocidade de uma unidade;
3) Método frenar, que deverá decrementar a velocidade em duas unidades.
A direção terá a responsabilidade de controlar a direção. Ela oferece os
seguintes atributos:
1) Valor de direção com valores possóveis: Norte, Sul, Leste e Oeste;
2) Método girar a direita
2) Método girar a esquerda
N
O L
S
Exemplo:
#testando motor
>>> motor = Motor()
>>> motor.velocidade
0
>>> motor.acelerar
>>> motor.velocidade
1
>>> motor.acelerar
>>> motor.velocidade
2
>>> motor.acelerar
>>> motor.velocidade
3
>>> motor.frear
>>> motor.velocidade
1
>>> motor.frear
>>> motor.velocidade
0
#testando direção
>>> direcao = Direcao()
>>> direcao.valor
'Norte'
>>> direcao.girar_a_direita()
>>> direcao.valor
'Leste'
>>> direcao.girar_a_direita()
>>> direcao.valor
'Sul'
>>> direcao.girar_a_direita()
>>> direcao.valor
'Oeste'
>>> direcao.girar_a_direita()
>>> direcao.valor
'Norte'
>>> direcao.girar_a_esquerda()
>>> direcao.valor
'Oeste'
>>> direcao.girar_a_esquerda()
>>> direcao.valor
'Sul'
>>> direcao.girar_a_esquerda()
>>> direcao.valor
'Leste'
>>> direcao.girar_a_esquerda()
>>> direcao.valor
'Norte'
>>> carro = Carro(direcao, motor)
>>> carro.caluclar_velocidade()
0
>>> carro.acelerar
>>> carro.caluclar_velocidade()
1
>>> carro.acelerar
>>> carro.caluclar_velocidade()
2
>>> carro.frear
>>> carro.caluclar_velocidade()
0
>>> carro.caluclar_direcao()
'Norte'
>>> carro.girar_a_direita()
>>> carro.caluclar_direcao()
'Leste'
>>> carro.girar_a_esquerda()
>>> carro.caluclar_direcao()
'Norte'
>>> carro.girar_a_esquerda()
>>> carro.caluclar_direcao()
'Oeste'
"""
class Carro:
def __init__(self, direcao, motor):
self.direcao = direcao
self.motor = motor
def calcular_velocidade(self):
return self.motor.velocidade
def acelerar(self):
        self.motor.acelerar()
def frear(self):
self.motor.frear()
def calcular_direcao(self):
return self.direcao.valor
def girar_a_direita(self):
self.direcao.girar_a_direita()
def girar_a_esquerda(self):
self.direcao.girar_a_esquerda()
class Motor:
def __init__(self):
self.velocidade = 0
def acelerar(self):
self.velocidade += 1
def frear(self):
self.velocidade -= 2
self.velocidade = max(0, self.velocidade)
motor = Motor()
motor.acelerar()
motor.acelerar()
motor.frear()
motor.frear()
motor.frear()
motor.acelerar()
motor.frear()
motor.acelerar()
motor.frear()
NORTE = 'Norte'
SUL = 'Sul'
LESTE = 'Leste'
OESTE = 'Oeste'
class Direcao:
def __init__(self):
self.valor = NORTE
def girar_a_direita(self):
self.valor = rotacao_a_direita_dct[self.valor]
def girar_a_esquerda(self):
self.valor = rotacao_a_esquerda_dct[self.valor]
rotacao_a_direita_dct = {NORTE: LESTE, LESTE: SUL, SUL: OESTE, OESTE: NORTE}
rotacao_a_esquerda_dct = {NORTE: OESTE, OESTE: SUL, SUL: LESTE, LESTE: NORTE}
direcao = Direcao()
direcao.girar_a_direita()
direcao.girar_a_esquerda()
direcao.girar_a_esquerda()
carro = Carro(direcao, motor)
#print(direcao.valor)
#print(motor.velocidade)
print(carro.calcular_direcao())
print(carro.calcular_velocidade())
| 23.859551 | 77 | 0.586296 | 471 | 4,247 | 5.125265 | 0.191083 | 0.054681 | 0.070008 | 0.060895 | 0.41135 | 0.288732 | 0.049296 | 0 | 0 | 0 | 0 | 0.011765 | 0.299506 | 4,247 | 177 | 78 | 23.99435 | 0.799664 | 0.622086 | 0 | 0.388889 | 0 | 0 | 0.011285 | 0 | 0 | 0 | 0 | 0.022599 | 0 | 1 | 0.240741 | false | 0 | 0 | 0.037037 | 0.333333 | 0.037037 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
7f60669d9c8f83bdc7550f5d0401483d476b3958 | 1,565 | py | Python | setup.py | charlesthomas/testrail_reporter | bc522ed3a66aee38c21cc45e2d3b9a786df45e02 | [
"MIT"
] | null | null | null | setup.py | charlesthomas/testrail_reporter | bc522ed3a66aee38c21cc45e2d3b9a786df45e02 | [
"MIT"
] | null | null | null | setup.py | charlesthomas/testrail_reporter | bc522ed3a66aee38c21cc45e2d3b9a786df45e02 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
from setuptools import setup
NAME = 'testrail_reporter'
DESCRIPTION = 'Nosetests Plugin to Report Test Results to TestRail.'
VERSION = open('VERSION').read().strip()
LONG_DESC = open('README.rst').read()
LICENSE = open('LICENSE').read()
setup(
name=NAME,
version=VERSION,
author='Charles Thomas',
author_email='ch@rlesthom.as',
packages=['testrail_reporter'],
url='https://github.com/charlesthomas/%s' % NAME,
license=LICENSE,
description=DESCRIPTION,
long_description=LONG_DESC,
# test_suite='tests',
entry_points = {'nose.plugins.0.10':
['testrail_reporter = testrail_reporter.testrail_reporter:TestRailReporter']},
install_requires=['nose >= 1.3.7',
'testrail >= 0.3.6',],
classifiers=['Development Status :: 4 - Beta',
'Intended Audience :: Developers',
'Natural Language :: English',
'License :: OSI Approved :: MIT License',
'Operating System :: OS Independent',
'Topic :: Software Development :: Libraries :: Python Modules',
'Topic :: Internet :: WWW/HTTP',
'Topic :: Software Development :: Quality Assurance',
'Topic :: Software Development :: Testing',
'Programming Language :: Python',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',],
)
| 36.395349 | 98 | 0.645367 | 167 | 1,565 | 5.976048 | 0.520958 | 0.152305 | 0.200401 | 0.104208 | 0.054108 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017572 | 0.2 | 1,565 | 42 | 99 | 37.261905 | 0.779553 | 0.025559 | 0 | 0 | 0 | 0 | 0.601445 | 0.034143 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.026316 | 0 | 0.026316 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
7f6a2ed067f756a98844a26933435661e374cbc1 | 991 | py | Python | mushroom_rl/utils/eligibility_trace.py | PuzeLiu/mushroom-rl | 99942b425e66b4ddcc26009d7105dde23841e95d | [
"MIT"
] | 344 | 2020-01-10T09:45:02.000Z | 2022-03-30T09:48:28.000Z | mushroom_rl/utils/eligibility_trace.py | AmmarFahmy/mushroom-rl | 2625ee7f64d5613b3b9fba00f0b7a39fece88ca5 | [
"MIT"
] | 44 | 2020-01-23T03:00:56.000Z | 2022-03-25T17:14:22.000Z | mushroom_rl/utils/eligibility_trace.py | AmmarFahmy/mushroom-rl | 2625ee7f64d5613b3b9fba00f0b7a39fece88ca5 | [
"MIT"
] | 93 | 2020-01-10T21:17:58.000Z | 2022-03-31T17:58:52.000Z | from mushroom_rl.utils.table import Table
def EligibilityTrace(shape, name='replacing'):
"""
Factory method to create an eligibility trace of the provided type.
Args:
shape (list): shape of the eligibility trace table;
name (str, 'replacing'): type of the eligibility trace.
Returns:
The eligibility trace table of the provided shape and type.
"""
if name == 'replacing':
return ReplacingTrace(shape)
elif name == 'accumulating':
return AccumulatingTrace(shape)
else:
raise ValueError('Unknown type of trace.')
class ReplacingTrace(Table):
"""
Replacing trace.
"""
def reset(self):
self.table[:] = 0.
def update(self, state, action):
self.table[state, action] = 1.
class AccumulatingTrace(Table):
"""
Accumulating trace.
"""
def reset(self):
self.table[:] = 0.
def update(self, state, action):
self.table[state, action] += 1.
| 21.543478 | 71 | 0.616549 | 112 | 991 | 5.446429 | 0.401786 | 0.104918 | 0.093443 | 0.068852 | 0.236066 | 0.236066 | 0.236066 | 0.236066 | 0.236066 | 0.236066 | 0 | 0.00554 | 0.271443 | 991 | 45 | 72 | 22.022222 | 0.839335 | 0.303734 | 0 | 0.333333 | 0 | 0 | 0.083736 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.277778 | false | 0 | 0.055556 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
7f71ee99f2565a596f85f7576a116d55d2f91fcd | 6,927 | py | Python | pyoperant/reinf.py | arouse01/pyoperant | e61de84862096720cca7dbecf517ee11c5d504d4 | [
"BSD-3-Clause"
] | 1 | 2019-01-26T17:19:47.000Z | 2019-01-26T17:19:47.000Z | pyoperant/reinf.py | arouse01/pyoperant | e61de84862096720cca7dbecf517ee11c5d504d4 | [
"BSD-3-Clause"
] | null | null | null | pyoperant/reinf.py | arouse01/pyoperant | e61de84862096720cca7dbecf517ee11c5d504d4 | [
"BSD-3-Clause"
] | null | null | null | from numpy import random
class BaseSchedule(object):
"""Maintains logic for deciding whether to consequate trials.
    This base class provides the most basic reinforcement schedule: every
response is consequated.
Methods:
consequate(trial) -- returns a boolean value based on whether the trial
should be consequated. Always returns True.
"""
def __init__(self):
super(BaseSchedule, self).__init__()
    def consequate(self, trial):
        assert hasattr(trial, 'correct') and isinstance(trial.correct, bool)
        return True
class ContinuousReinforcement(BaseSchedule):
"""Maintains logic for deciding whether to consequate trials.
    This schedule implements continuous reinforcement: every
    response is consequated.
Methods:
consequate(trial) -- returns a boolean value based on whether the trial
should be consequated. Always returns True.
"""
def __init__(self):
super(ContinuousReinforcement, self).__init__()
    def consequate(self, trial):
        assert hasattr(trial, 'correct') and isinstance(trial.correct, bool)
        return True
class FixedRatioSchedule(BaseSchedule):
"""Maintains logic for deciding whether to consequate trials.
This class implements a fixed ratio schedule, where a reward reinforcement
is provided after every nth correct response, where 'n' is the 'ratio'.
Incorrect trials are always reinforced.
Methods:
consequate(trial) -- returns a boolean value based on whether the trial
should be consequated.
"""
def __init__(self, ratio=1):
super(FixedRatioSchedule, self).__init__()
self.ratio = max(ratio, 1)
self._update()
def _update(self):
self.cumulative_correct = 0
self.threshold = self.ratio
    def consequate(self, trial):
        assert hasattr(trial, 'correct') and isinstance(trial.correct, bool)
        if trial.correct:
            self.cumulative_correct += 1
            if self.cumulative_correct >= self.threshold:
                self._update()
                return True
            else:
                return False
        else:
            # incorrect trials reset the streak and are always consequated
            self.cumulative_correct = 0
            return True
def __unicode__(self):
return "FR%i" % self.ratio
class GoInterruptSchedule(BaseSchedule):
"""Maintains logic for deciding whether to consequate trials.
This class implements a conditional continuous schedule, where reinforcement
is provided after certain correct and incorrect responses
Added 6/27/18 by AR for zebra finch isochronicity discrimination experiment.
Correct Response (Resp switch to S+) = True
False Alarm (Resp switch to S-) = True
Miss (NR or Trial switch to S+) = False
Correct Reject (Trial switch to S-) = False
Probe trials are always rewarded (but handled in behavior file instead of here)
Methods:
consequate(trial) -- returns a boolean value based on whether the trial
should be consequated.
"""
def __init__(self):
super(GoInterruptSchedule, self).__init__()
def consequate(self, trial):
assert hasattr(trial, 'correct') and isinstance(trial.correct, bool)
if trial.correct:
if trial.response == 'sPlus': # Hit
return True
else:
return False # Correct reject
elif not trial.correct:
if trial.response == 'sPlus': # False alarm
return True
else:
return False # Miss
else:
return False
class GoInterruptPercentSchedule(BaseSchedule):
"""Maintains logic for deciding whether to consequate trials.
This class implements a conditional percent reinforcement schedule, where reinforcement
is provided randomly after certain correct and incorrect responses
Added 7/9/18 by AR for zebra finch isochronicity discrimination experiment.
Correct Response (Resp switch to S+) = True by probability
False Alarm (Resp switch to S-) = True always
Miss (NR or Trial switch to S+) = False
Correct Reject (Trial switch to S-) = False
Probe trials are always rewarded (but handled in behavior file instead of here)
Methods:
consequate(trial) -- returns a boolean value based on whether the trial
should be consequated.
"""
def __init__(self, prob=1):
super(GoInterruptPercentSchedule, self).__init__()
self.prob = prob
def consequate(self, trial):
if trial.responseType == "correct_response":
return random.random() < self.prob
elif trial.responseType == "false_alarm":
return True
else:
return False
class VariableRatioSchedule(FixedRatioSchedule):
"""Maintains logic for deciding whether to consequate trials.
This class implements a variable ratio schedule, where a reward
reinforcement is provided after every a number of consecutive correct
responses. On average, the number of consecutive responses necessary is the
'ratio'. After a reinforcement is provided, the number of consecutive
correct trials needed for the next reinforcement is selected by sampling
randomly from the interval [1,2*ratio-1]. e.g. a ratio of '3' will require
consecutive correct trials of 1, 2, 3, 4, & 5, randomly.
Incorrect trials are always reinforced.
Methods:
consequate(trial) -- returns a boolean value based on whether the trial
should be consequated.
"""
def __init__(self, ratio=1):
super(VariableRatioSchedule, self).__init__(ratio=ratio)
def _update(self):
""" update min correct by randomly sampling from interval [1:2*ratio)"""
self.cumulative_correct = 0
self.threshold = random.randint(1, 2 * self.ratio)
def __unicode__(self):
return "VR%i" % self.ratio
class PercentReinforcement(BaseSchedule):
"""Maintains logic for deciding whether to consequate trials.
    This class implements probabilistic reinforcement, where a reward is
    provided with probability `prob` on correct trials.
Incorrect trials are always reinforced.
Methods:
consequate(trial) -- returns a boolean value based on whether the trial
should be consequated.
"""
def __init__(self, prob=1):
super(PercentReinforcement, self).__init__()
self.prob = prob
def consequate(self, trial):
assert hasattr(trial, 'correct') and isinstance(trial.correct, bool)
if trial.correct:
return random.random() < self.prob
else:
return True
    def __unicode__(self):
        return "PR%i" % int(self.prob * 100)
| 31.06278 | 91 | 0.664501 | 814 | 6,927 | 5.55774 | 0.184275 | 0.045093 | 0.015915 | 0.038683 | 0.727675 | 0.673077 | 0.638373 | 0.59107 | 0.59107 | 0.577365 | 0 | 0.006099 | 0.266205 | 6,927 | 222 | 92 | 31.202703 | 0.883927 | 0.483326 | 0 | 0.725275 | 0 | 0 | 0.02568 | 0 | 0 | 0 | 0 | 0 | 0.054945 | 1 | 0.197802 | false | 0 | 0.010989 | 0.032967 | 0.516484 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
7f794d62fc3c675ead0a023096a5c67f9a7ede0f | 195 | py | Python | 1-iniciante/1038.py | marcobarone-dev/uri | 82bf0b244d3966673b10a42948dcdeabcde07e76 | [
"MIT"
] | 1 | 2018-07-04T02:42:29.000Z | 2018-07-04T02:42:29.000Z | 1-iniciante/1038.py | marcobarone-dev/uri-python | 82bf0b244d3966673b10a42948dcdeabcde07e76 | [
"MIT"
] | null | null | null | 1-iniciante/1038.py | marcobarone-dev/uri-python | 82bf0b244d3966673b10a42948dcdeabcde07e76 | [
"MIT"
] | null | null | null | produtos = {1: 4.0, 2: 4.5, 3: 5.0, 4: 2.0, 5: 1.5}
produto, quantidade = [int(num) for num in input().split()]
total = produtos[produto] * quantidade
print('Total: R$ {:.2f}'.format(total))
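A quick check of the price table with a hard-coded order instead of stdin (`total` here is a helper added for illustration, not part of the original submission):

```python
produtos = {1: 4.0, 2: 4.5, 3: 5.0, 4: 2.0, 5: 1.5}

def total(produto, quantidade):
    # Same lookup-and-multiply as the stdin version above
    return produtos[produto] * quantidade

print('Total: R$ {:.2f}'.format(total(3, 2)))   # Total: R$ 10.00
```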
| 39 | 60 | 0.605128 | 35 | 195 | 3.371429 | 0.571429 | 0.288136 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09816 | 0.164103 | 195 | 4 | 61 | 48.75 | 0.625767 | 0 | 0 | 0 | 0 | 0 | 0.08377 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.25 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
7f94ad38bbb98e281664244307b46754d21960c0 | 2,859 | py | Python | tests/unit/test_handler_marker.py | Rogdham/bigxml | ab983f50c49bf861c3b61e3e636db90f9ff19ed1 | [
"MIT"
] | 4 | 2020-08-24T13:31:46.000Z | 2022-01-25T08:03:19.000Z | tests/unit/test_handler_marker.py | Rogdham/bigxml | ab983f50c49bf861c3b61e3e636db90f9ff19ed1 | [
"MIT"
] | null | null | null | tests/unit/test_handler_marker.py | Rogdham/bigxml | ab983f50c49bf861c3b61e3e636db90f9ff19ed1 | [
"MIT"
] | null | null | null | import pytest
from bigxml.handler_marker import _ATTR_MARKER, xml_handle_element, xml_handle_text
from bigxml.nodes import XMLText
def test_one_maker_element():
@xml_handle_element("abc", "def")
def fct(arg):
return arg * 6
assert getattr(fct, _ATTR_MARKER, None) == (("abc", "def"),)
assert fct(7) == 42
def test_one_maker_element_on_method():
class Klass:
def __init__(self, multiplier):
self.multiplier = multiplier
@xml_handle_element("abc", "def")
def method(self, arg):
return arg * self.multiplier
instance = Klass(6)
assert getattr(instance.method, _ATTR_MARKER, None) == (("abc", "def"),)
assert instance.method(7) == 42
def test_one_maker_element_on_static_method():
class Klass:
@xml_handle_element("abc", "def")
@staticmethod
def method(arg):
return arg * 6
assert getattr(Klass.method, _ATTR_MARKER, None) == (("abc", "def"),)
assert Klass.method(7) == 42
def test_one_maker_element_on_method_before_staticmethod():
class Klass:
@staticmethod
@xml_handle_element("abc", "def")
def method(arg):
return arg * 6
assert getattr(Klass.method, _ATTR_MARKER, None) == (("abc", "def"),)
assert Klass.method(7) == 42
def test_several_maker_element():
@xml_handle_element("abc", "def")
@xml_handle_element("ghi")
@xml_handle_element("klm", "opq", "rst")
def fct(arg):
return arg * 6
assert getattr(fct, _ATTR_MARKER, None) == (
("klm", "opq", "rst"),
("ghi",),
("abc", "def"),
)
assert fct(7) == 42
def test_one_maker_element_no_args():
with pytest.raises(TypeError):
@xml_handle_element()
def fct(arg): # pylint: disable=unused-variable
return arg * 6
def test_one_marker_text_no_call():
@xml_handle_text
def fct(arg):
return arg * 6
assert getattr(fct, _ATTR_MARKER, None) == ((XMLText.name,),)
assert fct(7) == 42
def test_one_marker_text_no_args():
@xml_handle_text()
def fct(arg):
return arg * 6
assert getattr(fct, _ATTR_MARKER, None) == ((XMLText.name,),)
assert fct(7) == 42
def test_one_marker_text_args():
@xml_handle_text("abc", "def")
def fct(arg):
return arg * 6
assert getattr(fct, _ATTR_MARKER, None) == (
(
"abc",
"def",
XMLText.name,
),
)
assert fct(7) == 42
def test_mixed_markers():
@xml_handle_element("abc", "def")
@xml_handle_text("ghi")
@xml_handle_element("klm", "opq", "rst")
def fct(arg):
return arg * 6
assert getattr(fct, _ATTR_MARKER, None) == (
("klm", "opq", "rst"),
("ghi", XMLText.name),
("abc", "def"),
)
assert fct(7) == 42
| 23.628099 | 83 | 0.593214 | 363 | 2,859 | 4.391185 | 0.15427 | 0.090339 | 0.110414 | 0.065245 | 0.705772 | 0.685696 | 0.659348 | 0.542033 | 0.523212 | 0.483061 | 0 | 0.017561 | 0.263029 | 2,859 | 120 | 84 | 23.825 | 0.738965 | 0.010843 | 0 | 0.568182 | 0 | 0 | 0.046709 | 0 | 0 | 0 | 0 | 0 | 0.204545 | 1 | 0.238636 | false | 0 | 0.034091 | 0.113636 | 0.420455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
7f971a249f5ab865830322c937e8dd39d7142ad5 | 149 | py | Python | Codeforces Problems/Soldier and bananas/Soldier and Bananas.py | Social-CodePlat/Comptt-Coding-Solutions | 240732e6c1a69e1124064bff4a27a5785a14b021 | [
"MIT"
] | null | null | null | Codeforces Problems/Soldier and bananas/Soldier and Bananas.py | Social-CodePlat/Comptt-Coding-Solutions | 240732e6c1a69e1124064bff4a27a5785a14b021 | [
"MIT"
] | 1 | 2020-10-13T20:57:34.000Z | 2020-10-13T20:57:34.000Z | Codeforces Problems/Soldier and bananas/Soldier and Bananas.py | Social-CodePlat/Comptt-Coding-Solutions | 240732e6c1a69e1124064bff4a27a5785a14b021 | [
"MIT"
] | null | null | null | arr=[int(x) for x in input().split()]
sum=0
for i in range(1,(arr[2]+1)):
sum+=arr[0]*i
if sum<=arr[1]:
print(0)
else:
print(sum-arr[1])
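The loop is an arithmetic series, so the same answer has a closed form. A sketch with hard-coded inputs in place of stdin (`borrow` is an illustrative name):

```python
def borrow(k, n, w):
    # Total cost is k*(1 + 2 + ... + w) = k * w * (w + 1) // 2
    cost = k * w * (w + 1) // 2
    return max(0, cost - n)

print(borrow(3, 17, 4))    # cost 30, has 17 -> borrow 13
```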
| 16.555556 | 37 | 0.557047 | 33 | 149 | 2.515152 | 0.484848 | 0.216867 | 0.168675 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066116 | 0.187919 | 149 | 8 | 38 | 18.625 | 0.619835 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.25 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
7fb391590f55d21149c4442ca20b07a2041ace3f | 6,835 | py | Python | experiments/2_Maze_Neutrality/experiment/stimuli_creation/stim_csv_iterator.py | BranPap/gender_ideology | 2b2b87e13cb7a8abd0403828fbc235768a774aaa | [
"MIT"
] | 1 | 2021-03-30T03:12:05.000Z | 2021-03-30T03:12:05.000Z | experiments/2_Maze_Neutrality/experiment/stimuli_creation/stim_csv_iterator.py | BranPap/gender_ideology | 2b2b87e13cb7a8abd0403828fbc235768a774aaa | [
"MIT"
] | null | null | null | experiments/2_Maze_Neutrality/experiment/stimuli_creation/stim_csv_iterator.py | BranPap/gender_ideology | 2b2b87e13cb7a8abd0403828fbc235768a774aaa | [
"MIT"
] | 1 | 2021-03-30T03:41:32.000Z | 2021-03-30T03:41:32.000Z | import json
import pandas as pd
import random
df = pd.read_csv("experiment/stimuli_creation/maze_lexemes.csv")
states = ["California","Alabama","Alaska","Arizona","Arkansas","Connecticut","Colorado","Delaware","Florida","Georgia","Hawaii","Idaho","Illinois","Indiana","Iowa","Kansas","Kentucky","Louisiana","Maine","Maryland","Massachusetts","Michigan","Minnesota","Mississippi","Missouri","Montana","Nebraska","Nevada","New Hampshire","New Jersey","New Mexico","New York","North Carolina","North Dakota","Ohio","Oklahoma","Oregon","Pennsylvania","Rhode Island","South Carolina","South Dakota","Tennessee","Texas","Utah","Vermont","Virginia","Washington","West Virginia","Wisconsin","Wyoming"]
random.shuffle(states)
activities = ["swimming","writing","singing","dancing","hiking","running","reading","drawing","painting","cooking","cycling","walking","studying","surfing","camping"]
random.shuffle(activities)
stim_list = []
coin = [0,1]
entry = []
status = 1
with open("experiment\stimuli_creation\maze_stims.csv", 'w') as stim_input:
stim_input.write("name,be,det,target,prep,state,pro,like,activity,question1,answer1,question2,answer2,gender,lexeme,orthog,condition,id")
stim_input.write("\n")
for index,row in df.iterrows():
status +=1
state = states.pop()
activity = random.choice(activities)
antistate = random.choice(states)
activity_2 = random.choice(activities)
stim_input.write("NAME,is,")
stim_input.write(row["det"])
stim_input.write("," + row["neutral"])
stim_input.write(",from,")
stim_input.write(state+".")
stim_input.write(",She,")
stim_input.write("likes,")
stim_input.write(activity+".")
entry.append(str(('female;'+str(status)+';Jane is '+row["det"]+" "+row["neutral"]+" from "+state+". She likes "+activity+".")))
activity_chance = random.choice(coin)
if activity_chance == 0:
stim_input.write(",Does NAME like "+activity_2+"?")
if activity == activity_2:
stim_input.write(",Yes")
else:
stim_input.write(",No")
else:
stim_input.write(",Does NAME like "+activity+"?")
stim_input.write(",Yes")
chance = random.choice(coin)
if chance == 0:
stim_input.write(",Is NAME from "+antistate+"?")
stim_input.write(",No,")
else:
stim_input.write(",Is NAME from "+state+"?")
stim_input.write(",Yes,")
stim_input.write("female,"+row['lexeme']+','+row["female"]+',')
stim_input.write("neutral_female"+',')
stim_input.write(row['lexeme'])
stim_input.write("_neutral_female")
stim_input.write('\n')
stim_input.write("NAME,is,")
stim_input.write(row["det"])
stim_input.write("," + row["female"])
stim_input.write(",from,")
stim_input.write(state+".")
stim_input.write(",She,")
stim_input.write("likes,")
stim_input.write(activity+".")
entry.append(str(('female;'+str(status)+';Jane is '+row["det"]+" "+row["female"]+" from "+state+". She likes "+activity+".")))
if activity_chance == 0:
stim_input.write(",Does NAME like "+activity_2+"?")
if activity == activity_2:
stim_input.write(",Yes")
else:
stim_input.write(",No")
else:
stim_input.write(",Does NAME like "+activity+"?")
stim_input.write(",Yes")
if chance == 0:
stim_input.write(",Is NAME from "+antistate+"?")
stim_input.write(",No,")
else:
stim_input.write(",Is NAME from "+state+"?")
stim_input.write(",Yes,")
stim_input.write("female,"+row['lexeme']+','+row["female"]+',')
stim_input.write("congruent_female"+',')
stim_input.write(row['lexeme'])
stim_input.write("_congruent_female")
stim_input.write('\n')
stim_input.write("NAME,is,")
stim_input.write(row["det"])
stim_input.write("," + row["neutral"])
stim_input.write(",from,")
stim_input.write(state+".")
stim_input.write(",He,")
stim_input.write("likes,")
stim_input.write(activity+".")
entry.append(str(('male;'+str(status)+';John is '+row["det"]+" "+row["neutral"]+" from "+state+". He likes "+activity+".")))
if activity_chance == 0:
stim_input.write(",Does NAME like "+activity_2+"?")
if activity == activity_2:
stim_input.write(",Yes")
else:
stim_input.write(",No")
else:
stim_input.write(",Does NAME like "+activity+"?")
stim_input.write(",Yes")
if chance == 0:
stim_input.write(",Is NAME from "+antistate+"?")
stim_input.write(",No,")
else:
stim_input.write(",Is NAME from "+state+"?")
stim_input.write(",Yes,")
stim_input.write("male,"+row['lexeme']+','+row["male"]+',')
stim_input.write("neutral_male"+',')
stim_input.write(row["lexeme"]+"_neutral_male")
stim_input.write('\n')
stim_input.write("NAME,is,")
stim_input.write(row["det"])
stim_input.write("," + row["male"])
stim_input.write(",from,")
stim_input.write(state+".")
stim_input.write(",He,")
stim_input.write("likes,")
stim_input.write(activity+".")
entry.append(str(('male;'+str(status)+';John is '+row["det"]+" "+row["male"]+" from "+state+". He likes "+activity+".")))
if activity_chance == 0:
stim_input.write(",Does NAME like "+activity_2+"?")
if activity == activity_2:
stim_input.write(",Yes")
else:
stim_input.write(",No")
else:
stim_input.write(",Does NAME like "+activity+"?")
stim_input.write(",Yes")
if chance == 0:
stim_input.write(",Is NAME from "+antistate+"?")
stim_input.write(",No,")
else:
stim_input.write(",Is NAME from "+state+"?")
stim_input.write(",Yes,")
stim_input.write("male,"+row['lexeme']+','+row["male"]+',')
stim_input.write("congruent_male"+',')
stim_input.write(row['lexeme'])
stim_input.write("_congruent_male")
stim_input.write('\n')
stim_list.append(row['lexeme'])
stim_list.append(row['neutral'])
stim_list.append(row['male'])
stim_list.append(row['female'])
with open('list_file.txt', 'w') as stim_checker:
stim_checker.write(str(stim_list))
with open('to-be-matched.txt', 'w') as match_list:
for sentence in entry:
match_list.write(str(sentence)+"\n")
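The four near-identical write blocks above differ only in pronoun, gender, and which orthographic form fills the `target` column. A hypothetical helper (`make_row` is not part of the original script) shows how one sentence row could be parameterized:

```python
def make_row(det, target, state, pronoun, activity):
    # First nine CSV columns of one stimulus row (questions/answers omitted)
    return ",".join(["NAME", "is", det, target, "from", state + ".",
                     pronoun, "likes", activity + "."])

print(make_row("a", "firefighter", "Ohio", "She", "hiking"))
# NAME,is,a,firefighter,from,Ohio.,She,likes,hiking.
```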
| 42.71875 | 582 | 0.574689 | 799 | 6,835 | 4.750939 | 0.188986 | 0.213383 | 0.32824 | 0.053741 | 0.670179 | 0.644889 | 0.635406 | 0.629347 | 0.615121 | 0.577977 | 0 | 0.004764 | 0.232187 | 6,835 | 159 | 583 | 42.987421 | 0.718559 | 0 | 0 | 0.687075 | 0 | 0.006803 | 0.245208 | 0.0297 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.020408 | 0 | 0.020408 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
7fb79206ce5753b6ec8f399db4b0f99941b1fce3 | 1,243 | py | Python | Hash PWs for Cisco/setupPW.py | NetworkNick-US/PythonScripts | b8441e4d433be59f4b3c4bd5c61543b2ef66ae4b | [
"MIT"
] | null | null | null | Hash PWs for Cisco/setupPW.py | NetworkNick-US/PythonScripts | b8441e4d433be59f4b3c4bd5c61543b2ef66ae4b | [
"MIT"
] | null | null | null | Hash PWs for Cisco/setupPW.py | NetworkNick-US/PythonScripts | b8441e4d433be59f4b3c4bd5c61543b2ef66ae4b | [
"MIT"
] | null | null | null | import getpass
import os
import platform
import subprocess
class Style:
BLACK = '\033[30m'
RED = '\033[31m'
GREEN = '\033[32m'
YELLOW = '\033[33m'
BLUE = '\033[34m'
MAGENTA = '\033[35m'
CYAN = '\033[36m'
WHITE = '\033[37m'
UNDERLINE = '\033[4m'
RESET = '\033[0m'
BLUEBACKGROUND = '\x1b[1;37;46m'
def clearConsole():
clear_con = 'cls' if platform.system().lower() == "windows" else 'clear'
os.system(clear_con)
def hashPass(salted, pwd):
return subprocess.getoutput("openssl passwd -salt " + salted + " -1 " + pwd)
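`subprocess.getoutput` runs its argument through a shell, so a salt or password containing spaces or shell metacharacters would break the command. A sketch of the same call with `shlex.quote` applied to both arguments (the `openssl` invocation itself is unchanged):

```python
import shlex
import subprocess

def hash_pass_quoted(salted, pwd):
    # Quote both user-supplied values so the shell sees them literally
    cmd = "openssl passwd -salt {} -1 {}".format(shlex.quote(salted), shlex.quote(pwd))
    return subprocess.getoutput(cmd)

print(shlex.quote("p@ss word!"))   # 'p@ss word!'
```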
def main():
os.system("")
print("This script will help you hash a password for use with your Ansible playbooks for IOS and IOS XE devices.\n",
Style.RED, "PLEASE NOTE: CURRENTLY NXOS_USER REQUIRES CLEAR-TEXT PASSWORDS", Style.RESET)
salt = getpass.getpass(prompt="Please enter a random string as your salt: ", stream=None)
userpasswd = getpass.getpass(prompt="Password: ", stream=None)
print("The value you should be using for your variable 'fallbackAdminPW' is: " + hashPass(salt, userpasswd))
print(Style.BLUE + "\nVisit NetworkNick.us for more Ansible and Python tools!\n" + Style.RESET)
if __name__ == '__main__':
main()
| 29.595238 | 120 | 0.65889 | 168 | 1,243 | 4.809524 | 0.613095 | 0.019802 | 0.049505 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.055724 | 0.205953 | 1,243 | 41 | 121 | 30.317073 | 0.762918 | 0 | 0 | 0 | 0 | 0.032258 | 0.394208 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.096774 | false | 0.258065 | 0.129032 | 0.032258 | 0.645161 | 0.096774 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 2 |
7fbb3e0a262f49733f933fde19767b6609edf780 | 16,513 | py | Python | learning_experiments/src3/eval_trained_model.py | TommasoBendinelli/spatial_relations_experiments | cd165437835a37c947ccf13a77531a5a42d4c925 | [
"MIT"
] | null | null | null | learning_experiments/src3/eval_trained_model.py | TommasoBendinelli/spatial_relations_experiments | cd165437835a37c947ccf13a77531a5a42d4c925 | [
"MIT"
] | null | null | null | learning_experiments/src3/eval_trained_model.py | TommasoBendinelli/spatial_relations_experiments | cd165437835a37c947ccf13a77531a5a42d4c925 | [
"MIT"
] | null | null | null | import argparse
import os
import os.path as osp
import cv2
import numpy as np
from scipy.stats import multivariate_normal
from scipy.stats import norm
import matplotlib
# matplotlib.use('agg')
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import subprocess
import shutil
import chainer
from chainer import training
from chainer.training import extensions
from chainer.dataset import concat_examples
from chainer.backends.cuda import to_cpu
import chainer.functions as F
from chainer import serializers
import net_200x200 as net
import data_generator
from config_parser import ConfigParser
from utils import *
def save_reconstruction_arrays(data, model, folder_name="."):
print("Clear Images from Last Reconstructions\n")
all_files = list([filename for filename in os.listdir(folder_name) if '.' in filename])
list(map(lambda x : os.remove(folder_name + x), all_files))
print("Saving Array RECONSTRUCTIONS\n")
(train_b0, train_b1) = data
no_images = 10
train_ind = np.linspace(0, len(train_b0) - 1, no_images, dtype=int)
result = model(train_b0[train_ind], train_b1[train_ind])
gt_b0 = np.swapaxes(train_b0[train_ind], 1, 3)
gt_b1 = np.swapaxes(train_b1[train_ind], 1, 3)
rec_b0 = np.swapaxes(result[0].data, 1, 3)
rec_b1 = np.swapaxes(result[1].data, 1, 3)
output = {"gt_b0": gt_b0, "gt_b1": gt_b1, 'rec_b0': rec_b0, 'rec_b1': rec_b1}
np.savez(os.path.join("result", "reconstruction_arrays/train" + ".npz"), **output)
def eval_seen_data(data, model, groups, labels=[], folder_name=".", pairs=None):
    print("Clear Images from Last Seen Scatter\n")
    all_files = list([filename for filename in os.listdir(folder_name) if '.' in filename])
    list(map(lambda x: os.remove(folder_name + x), all_files))
    print("Evaluating on SEEN data\n")
    (data_b0, data_b1) = data
    n = 100
    every_nth = len(data_b0) // n
    if every_nth == 0:
        every_nth = 1
    axis_ranges = [-5, 5]
    for group_key in groups:
        for label in groups[group_key]:
            print(("Visualising label:\t{0}, Group:\t{1}".format(label, group_key)))
            indices = [i for i, x in enumerate(labels) if x == label]
            filtered_data_b0 = data_b0.take(indices, axis=0)[::every_nth]
            filtered_data_b1 = data_b1.take(indices, axis=0)[::every_nth]
            latent_mu = model.get_latent(filtered_data_b0, filtered_data_b1).data
pairs = [(0,1), (0,2), (1,2)]
for pair in pairs:
plt.scatter(latent_mu[:, pair[0]], latent_mu[:, pair[1]], c='red', label=label, alpha=0.75)
plt.grid()
# major axes
plt.plot([axis_ranges[0], axis_ranges[1]], [0,0], 'k')
plt.plot([0,0], [axis_ranges[0], axis_ranges[1]], 'k')
plt.xlim(axis_ranges[0], axis_ranges[1])
plt.ylim(axis_ranges[0], axis_ranges[1])
plt.xlabel("Z_" + str(pair[0]))
plt.ylabel("Z_" + str(pair[1]))
plt.legend(loc='upper left', bbox_to_anchor=(1, 1), fontsize=14)
plt.savefig(osp.join(folder_name, "group_" + str(group_key) + "_" + label + "_Z_" + str(pair[0]) + "_Z_" + str(pair[1])), bbox_inches="tight")
plt.close()
def eval_seen_data_single(data, model, labels=[], folder_name=".", pairs=None):
print("Clear Images from Last Seen Scatter Single\n")
all_files = list([filename for filename in os.listdir(folder_name) if '.' in filename])
list(map(lambda x : os.remove(folder_name + x), all_files))
print("Evaluating on SEEN SINGLE data\n")
(data_b0, data_b1) = data
axis_ranges = [-15, 15]
# pairs = [(0,1)]
n = 100
    every_nth = len(data_b0) // n
    if every_nth == 0:
        every_nth = 1
filtered_data_b0 = data_b0.take(list(range(len(data_b0))), axis=0)[::every_nth]
filtered_data_b1 = data_b1.take(list(range(len(data_b1))), axis=0)[::every_nth]
labels = labels[::every_nth]
latent = np.array(model.get_latent(filtered_data_b0, filtered_data_b1))
filtered_data_b0 = np.swapaxes(filtered_data_b0, 1, 3)
filtered_data_b1 = np.swapaxes(filtered_data_b1, 1, 3)
for i in range(0, len(latent[0]), 33):
fig = plt.figure()
fig.canvas.set_window_title(labels[i])
ax = fig.add_subplot(1, len(pairs) + 1, 1, projection='3d')
        points = filtered_data_b0[i].reshape(200 * 200, 3)
        filtered_points = points[np.any(points != 0, axis=1)]  # drop all-zero rows
        xs_0 = filtered_points[..., 0][::3]
        ys_0 = filtered_points[..., 1][::3]
        zs_0 = filtered_points[..., 2][::3]
        ax.scatter(xs_0, ys_0, zs_0, c='r', alpha=0.5)
        points = filtered_data_b1[i].reshape(200 * 200, 3)
        filtered_points = points[np.any(points != 0, axis=1)]  # drop all-zero rows
        xs_1 = filtered_points[..., 0][::3]
        ys_1 = filtered_points[..., 1][::3]
        zs_1 = filtered_points[..., 2][::3]
        ax.scatter(xs_1, ys_1, zs_1, c='c', alpha=0.5)
ax.set_xlabel('X', fontweight="bold")
ax.set_ylabel('Y', fontweight="bold")
ax.set_zlabel('Z', fontweight="bold")
for j, pair in enumerate(pairs):
ax = fig.add_subplot(1, len(pairs) + 1, j + 2)
ax.scatter(latent[pair[0], i], latent[pair[1], i], c='red', label="unseen", alpha=0.75)
ax.grid()
# major axes
ax.plot([axis_ranges[0], axis_ranges[1]], [0,0], 'k')
ax.plot([0,0], [axis_ranges[0], axis_ranges[1]], 'k')
ax.set_xlim(axis_ranges[0], axis_ranges[1])
ax.set_ylim(axis_ranges[0], axis_ranges[1])
ax.set_xlabel("Z_" + str(pair[0]))
ax.set_ylabel("Z_" + str(pair[1]))
# ax.legend(loc='upper left', bbox_to_anchor=(1, 1), fontsize=14)
# plt.savefig(osp.join(folder_name, str(i) + "_Z_" + str(pair[0]) + "_Z_" + str(pair[1])), bbox_inches="tight")
# plt.close()
plt.show()
def eval_unseen_data(data, model, folder_name=".", pairs=None):
print("Clear Images from Last Unseen Scatter\n")
all_files = list([filename for filename in os.listdir(folder_name) if '.' in filename])
list(map(lambda x : os.remove(folder_name + x), all_files))
print("Evaluating on UNSEEN data\n")
(data_b0, data_b1) = data
axis_ranges = [-5, 5]
# pairs = [(0,1), (0,2), (1,2)]
# pairs = [(0,1)]
# n = 100
# every_nth = len(data_b0) / n
# if every_nth == 0:
# every_nth = 1
every_nth = 2
filtered_data_b0 = data_b0.take(list(range(len(data_b0))), axis=0)[::every_nth]
filtered_data_b1 = data_b1.take(list(range(len(data_b1))), axis=0)[::every_nth]
latent = np.array(model.get_latent(filtered_data_b0, filtered_data_b1))
latent_flipped = np.array(model.get_latent(filtered_data_b1, filtered_data_b0))
filtered_data_b0 = np.swapaxes(filtered_data_b0, 1, 3)
filtered_data_b1 = np.swapaxes(filtered_data_b1, 1, 3)
for i in range(len(filtered_data_b0)):
print(("{0}/{1}".format(i, len(latent[0]))))
fig = plt.figure()
ax = fig.add_subplot(2, 4, 1, projection='3d')
        points = filtered_data_b0[i].reshape(200 * 200, 3)
        filtered_points = points[np.any(points != 0, axis=1)]  # drop all-zero rows
        xs_0 = filtered_points[..., 0][::3]
        ys_0 = filtered_points[..., 1][::3]
        zs_0 = filtered_points[..., 2][::3]
        ax.scatter(xs_0, ys_0, zs_0, c='r', alpha=0.5)
        points = filtered_data_b1[i].reshape(200 * 200, 3)
        filtered_points = points[np.any(points != 0, axis=1)]  # drop all-zero rows
        xs_1 = filtered_points[..., 0][::3]
        ys_1 = filtered_points[..., 1][::3]
        zs_1 = filtered_points[..., 2][::3]
        ax.scatter(xs_1, ys_1, zs_1, c='c', alpha=0.5)
ax.set_xlabel('X', fontweight="bold")
ax.set_ylabel('Y', fontweight="bold")
ax.set_zlabel('Z', fontweight="bold")
for j, pair in enumerate(pairs):
ax = fig.add_subplot(2, 4, j + 2)
ax.scatter(latent[pair[0], i], latent[pair[1], i], c='red', label="unseen", alpha=0.75)
ax.grid()
# major axes
ax.plot([axis_ranges[0], axis_ranges[1]], [0,0], 'k')
ax.plot([0,0], [axis_ranges[0], axis_ranges[1]], 'k')
# ax.set_xlim(axis_ranges[0], axis_ranges[1])
# ax.set_ylim(axis_ranges[0], axis_ranges[1])
ax.set_xlabel("Z_" + str(pair[0]))
ax.set_ylabel("Z_" + str(pair[1]))
# ax.legend(loc='upper left', bbox_to_anchor=(1, 1), fontsize=14)
ax = fig.add_subplot(2, 4, 5, projection='3d')
ax.scatter(xs_1, ys_1, zs_1, c='r', alpha=0.5)
ax.scatter(xs_0, ys_0, zs_0, c='c', alpha=0.5)
ax.set_xlabel('X', fontweight="bold")
ax.set_ylabel('Y', fontweight="bold")
ax.set_zlabel('Z', fontweight="bold")
for j, pair in enumerate(pairs):
ax = fig.add_subplot(2, 4, j + 6)
ax.scatter(latent_flipped[pair[0], i], latent_flipped[pair[1], i], c='red', label="unseen", alpha=0.75)
ax.grid()
# major axes
ax.plot([axis_ranges[0], axis_ranges[1]], [0,0], 'k')
ax.plot([0,0], [axis_ranges[0], axis_ranges[1]], 'k')
# ax.set_xlim(axis_ranges[0], axis_ranges[1])
# ax.set_ylim(axis_ranges[0], axis_ranges[1])
ax.set_xlabel("Z_" + str(pair[0]))
ax.set_ylabel("Z_" + str(pair[1]))
# ax.legend(loc='upper left', bbox_to_anchor=(1, 1), fontsize=14)
# plt.savefig(osp.join(folder_name, str(i) + "_Z_" + str(pair[0]) + "_Z_" + str(pair[1])), bbox_inches="tight")
# plt.close()
plt.show()
def eval_unseen_time(data, model, folder_name=".", pairs=None):
print("Clear Images from Last Unseen Scatter\n")
all_files = list([filename for filename in os.listdir(folder_name) if '.' in filename])
list(map(lambda x : os.remove(folder_name + x), all_files))
print("Evaluating on UNSEEN data through time\n")
cmap = plt.cm.get_cmap('cool')
(data_b0, data_b1) = data
axis_ranges = [-20, 20]
# pairs = [(0,1), (0,2), (1,2)]
pairs = [(0,1), (2,3)]
npz_size = 50
npz_files = 4
for k in range(npz_files):
filtered_data_b0 = data_b0.take(list(range(len(data_b0))), axis=0)[k * npz_size : (k+1) * npz_size - 1]
filtered_data_b1 = data_b1.take(list(range(len(data_b1))), axis=0)[k * npz_size : (k+1) * npz_size - 1]
latent = np.array(model.get_latent(filtered_data_b0, filtered_data_b1))
latent_flipped = np.array(model.get_latent(filtered_data_b1, filtered_data_b0))
filtered_data_b0 = np.swapaxes(filtered_data_b0, 1, 3)
filtered_data_b1 = np.swapaxes(filtered_data_b1, 1, 3)
print(("{0}/{1}".format(k, npz_files)))
fig = plt.figure()
###################
#### FIRST ROW ####
###################
ax = fig.add_subplot(2, len(pairs) + 2, 1, projection='3d')
        points = filtered_data_b0[1].reshape(200 * 200, 3)
        filtered_points = points[np.any(points != 0, axis=1)]  # drop all-zero rows
        xs_0_first = filtered_points[..., 0][::3]
        ys_0_first = filtered_points[..., 1][::3]
        zs_0_first = filtered_points[..., 2][::3]
        ax.scatter(xs_0_first, ys_0_first, zs_0_first, c='r', alpha=0.5)
        points = filtered_data_b1[1].reshape(200 * 200, 3)
        filtered_points = points[np.any(points != 0, axis=1)]  # drop all-zero rows
        xs_1_first = filtered_points[..., 0][::3]
        ys_1_first = filtered_points[..., 1][::3]
        zs_1_first = filtered_points[..., 2][::3]
        ax.scatter(xs_1_first, ys_1_first, zs_1_first, c='c', alpha=0.5)
ax.set_xlabel('X', fontweight="bold")
ax.set_ylabel('Y', fontweight="bold")
ax.set_zlabel('Z', fontweight="bold")
ax = fig.add_subplot(2, len(pairs) + 2, 2, projection='3d')
        points = filtered_data_b0[-1].reshape(200 * 200, 3)
        filtered_points = points[np.any(points != 0, axis=1)]  # drop all-zero rows
        xs_0_last = filtered_points[..., 0][::3]
        ys_0_last = filtered_points[..., 1][::3]
        zs_0_last = filtered_points[..., 2][::3]
        ax.scatter(xs_0_last, ys_0_last, zs_0_last, c='r', alpha=0.5)
        points = filtered_data_b1[-1].reshape(200 * 200, 3)
        filtered_points = points[np.any(points != 0, axis=1)]  # drop all-zero rows
        xs_1_last = filtered_points[..., 0][::3]
        ys_1_last = filtered_points[..., 1][::3]
        zs_1_last = filtered_points[..., 2][::3]
        ax.scatter(xs_1_last, ys_1_last, zs_1_last, c='c', alpha=0.5)
ax.set_xlabel('X', fontweight="bold")
ax.set_ylabel('Y', fontweight="bold")
ax.set_zlabel('Z', fontweight="bold")
for j, pair in enumerate(pairs):
ax = fig.add_subplot(2, len(pairs) + 2, j + 3)
for i in range(len(latent[0])):
x = (latent[pair[0], i], latent[pair[1], i])
rgba = cmap(i/float(npz_size))
ax.scatter(x[0], x[1], c=[rgba[:3]], label="unseen", s=30, alpha=0.75)
ax.grid()
# major axes
ax.plot([axis_ranges[0], axis_ranges[1]], [0,0], 'k')
ax.plot([0,0], [axis_ranges[0], axis_ranges[1]], 'k')
ax.set_xlabel("Z_" + str(pair[0]))
ax.set_ylabel("Z_" + str(pair[1]))
ax.set_xlim(axis_ranges[0], axis_ranges[1])
ax.set_ylim(axis_ranges[0], axis_ranges[1])
##################
### SECOND ROW ###
##################
ax = fig.add_subplot(2, len(pairs) + 2, len(pairs) + 3, projection='3d')
ax.scatter(xs_1_first, ys_1_first, zs_1_first, c='r', alpha=0.5)
ax.scatter(xs_0_first, ys_0_first, zs_0_first, c='c', alpha=0.5)
ax.set_xlabel('X', fontweight="bold")
ax.set_ylabel('Y', fontweight="bold")
ax.set_zlabel('Z', fontweight="bold")
ax = fig.add_subplot(2, len(pairs) + 2, len(pairs) + 4, projection='3d')
ax.scatter(xs_1_last, ys_1_last, zs_1_last, c='r', alpha=0.5)
ax.scatter(xs_0_last, ys_0_last, zs_0_last, c='c', alpha=0.5)
ax.set_xlabel('X', fontweight="bold")
ax.set_ylabel('Y', fontweight="bold")
ax.set_zlabel('Z', fontweight="bold")
for j, pair in enumerate(pairs):
ax = fig.add_subplot(2, len(pairs) + 2, j + len(pairs) + 5)
for i in range(len(latent_flipped[0])):
x = (latent_flipped[pair[0], i], latent_flipped[pair[1], i])
rgba = cmap(i/float(npz_size))
ax.scatter(x[0], x[1], c=[rgba[:3]], label="unseen", s=30, alpha=0.75)
ax.grid()
# major axes
ax.plot([axis_ranges[0], axis_ranges[1]], [0,0], 'k')
ax.plot([0,0], [axis_ranges[0], axis_ranges[1]], 'k')
ax.set_xlabel("Z_" + str(pair[0]))
ax.set_ylabel("Z_" + str(pair[1]))
ax.set_xlim(axis_ranges[0], axis_ranges[1])
ax.set_ylim(axis_ranges[0], axis_ranges[1])
# plt.savefig(osp.join(folder_name, "npz_" + str(k) + "_Z_" + str(pair[0]) + "_Z_" + str(pair[1])), bbox_inches="tight")
# plt.close()
plt.show()
if __name__ == "__main__":
ignore = ["unlabelled", "train"]
generator = data_generator.DataGenerator()
train_b0, train_b1, train_labels, train_concat, train_vectors, test_b0, test_b1, test_labels, test_concat, test_vectors, unseen_b0, unseen_b1,\
unseen_labels, groups = generator.generate_dataset(ignore=ignore, args=None)
print('\n###############################################')
print("DATA_LOADED")
print(("# Training Branch 0: \t\t{0}".format(train_b0.shape)))
print(("# Training Branch 1: \t\t{0}".format(train_b1.shape)))
print(("# Training labels: \t{0}".format(set(train_labels))))
print(("# Training labels: \t{0}".format(train_labels.shape)))
print(("# Training concat: \t{0}".format(len(train_concat))))
print(("# Training vectors: \t{0}".format(train_vectors.shape)))
print(("# Testing Branch 0: \t\t{0}".format(test_b0.shape)))
print(("# Testing Branch 1: \t\t{0}".format(test_b1.shape)))
print(("# Testing labels: \t{0}".format(set(test_labels))))
print(("# Testing concat: \t{0}".format(len(test_concat))))
print(("# Testing labels: \t{0}".format(test_labels.shape)))
print(("# Testing vectors: \t{0}".format(test_vectors.shape)))
print(("# Unseen Branch 0: \t\t{0}".format(unseen_b0.shape)))
print(("# Unseen Branch 1: \t\t{0}".format(unseen_b1.shape)))
print(("# Unseen labels: \t{0}".format(set(unseen_labels))))
print(("\n# Groups: \t{0}".format(groups)))
print('###############################################\n')
model = net.Conv_Siam_VAE(train_b0.shape[1], train_b1.shape[1], n_latent=8, groups=groups, alpha=1, beta=1, gamma=1)
serializers.load_npz("result/models/final.model", model)
model.to_cpu()
pairs = list(itertools.combinations(list(range(len(groups))), 2))
# save the pointcloud reconstructions
# save_reconstruction_arrays((train_b0, train_b0), model, folder_name="result/reconstruction_arrays/")
# evaluate on the data that was seen during trainig
    # eval_seen_data((train_b0, train_b1), model, groups, labels=train_labels, folder_name="eval/scatter/seen/", pairs=pairs)
# evaluate on the data that was seen during trainig one by one + 3D
# eval_seen_data_single((test_b0, test_b1), model, labels=test_labels, folder_name="eval/scatter/seen_single/", pairs=pairs)
# evaluate on the data that was NOT seen during trainig
# eval_unseen_data((unseen_b0, unseen_b1), model, folder_name="eval/scatter/unseen/", pairs=pairs)
# evaluate the unseen data through time
eval_unseen_time((unseen_b0, unseen_b1), model, folder_name="eval/scatter/unseen_time/", pairs=pairs)
# File: Most Asked DSA By Companies/Meta/3-973.py (repo: neelaadityakumar/leetcode, MIT)
# https://leetcode.com/problems/k-closest-points-to-origin/
# 973. K Closest Points to Origin
# Medium
# Given an array of points where points[i] = [xi, yi] represents a point on the X-Y plane and an integer k, return the k closest points to the origin (0, 0).
# The distance between two points on the X-Y plane is the Euclidean distance (i.e., √((x1 - x2)² + (y1 - y2)²)).
# You may return the answer in any order. The answer is guaranteed to be unique (except for the order that it is in).
# Example 1:
# Input: points = [[1,3],[-2,2]], k = 1
# Output: [[-2,2]]
# Explanation:
# The distance between (1, 3) and the origin is sqrt(10).
# The distance between (-2, 2) and the origin is sqrt(8).
# Since sqrt(8) < sqrt(10), (-2, 2) is closer to the origin.
# We only want the closest k = 1 points from the origin, so the answer is just [[-2,2]].
# Example 2:
# Input: points = [[3,3],[5,-1],[-2,4]], k = 2
# Output: [[3,3],[-2,4]]
# Explanation: The answer [[-2,4],[3,3]] would also be accepted.
# Constraints:
# 1 <= k <= points.length <= 10^4
# -10^4 < xi, yi < 10^4
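# The file stops at the problem statement with no solution code. One common approach keeps a
# max-heap of size k over squared distances (sqrt is monotonic, so ordering by squared distance
# is equivalent); the function name `k_closest` below is my own — a sketch, not the repo's
# actual submission:

```python
import heapq

def k_closest(points, k):
    # Max-heap of size k keyed on negative squared distance to the origin;
    # sqrt is unnecessary because it preserves ordering.
    heap = []
    for x, y in points:
        dist = x * x + y * y
        if len(heap) < k:
            heapq.heappush(heap, (-dist, x, y))
        elif dist < -heap[0][0]:
            # Closer than the farthest kept point: swap it in.
            heapq.heapreplace(heap, (-dist, x, y))
    return [[x, y] for _, x, y in heap]
```

# This runs in O(n log k) time and O(k) space; sorting all points by distance is simpler
# but costs O(n log n).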
# File: src/huaytools/pytorch/modules/loss/cosine_similarity.py (repo: imhuay/studies-gitbook, MIT)
#!/usr/bin/env python
# -*- coding:utf-8 -*-
"""
Time: 2021-10-13 8:30 PM
Author: huayang
Subject:
"""
import doctest

from torch.nn import functional as F  # noqa

from huaytools.pytorch.modules.loss.mean_squared_error import mean_squared_error_loss
def cosine_similarity_loss(x1, x2, labels):
""" Cosine similarity loss
Examples:
# >>> logits = torch.randn(5, 5).clamp(min=_EPSILON) # negative log-likelihood inputs must be > 0
# >>> labels = torch.arange(5)
# >>> onehot_labels = F.one_hot(labels)
#
# # Compare against the official implementation
# >>> my_ret = negative_log_likelihood_loss(logits, onehot_labels)
# >>> official_ret = F.nll_loss(torch.log(logits + _EPSILON), labels, reduction='none')
# >>> assert torch.allclose(my_ret, official_ret, atol=1e-5)
Args:
x1: [B, N]
x2: same shape as x1
labels: [B] or scalar
Returns:
[B] vector or scalar
"""
cosine_scores = F.cosine_similarity(x1, x2, dim=-1) # [B]
return mean_squared_error_loss(cosine_scores, labels) # [B]
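# Numerically, cosine_similarity_loss is the squared gap between the cosine score of the two
# vectors and the target label (assuming the imported mean_squared_error_loss reduces to a plain
# per-pair squared error). A dependency-free scalar sketch — function names are mine, for
# illustration only:

```python
import math

def cosine_score(a, b):
    # dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def cosine_similarity_loss_scalar(a, b, label):
    # Squared error between the cosine score and the target similarity label.
    return (cosine_score(a, b) - label) ** 2
```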
def _test():
""""""
doctest.testmod()
if __name__ == '__main__':
""""""
_test()
# File: Gather_Data.py (repo: batumoglu/Home_Credit, Apache-2.0)
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Mon May 28 19:51:12 2018
@author: ozkan
"""
import pandas as pd
import numpy as np
from sklearn.preprocessing import LabelEncoder  # used by AllData() below
from scipy import stats
import gc
import GatherTables
def one_hot_encoder(df):
original_columns = list(df.columns)
categorical_columns = [col for col in df.columns if df[col].dtype == 'object']
df = pd.get_dummies(df, columns= categorical_columns, dummy_na= True)
new_columns = [c for c in df.columns if c not in original_columns]
return df, new_columns
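# one_hot_encoder delegates to pd.get_dummies with dummy_na=True; the expansion it performs on a
# single categorical column can be sketched without pandas as follows (names are mine, with None
# standing in for NaN):

```python
def one_hot_column(values):
    # One 0/1 indicator column per distinct category, plus a missing-value
    # indicator mirroring dummy_na=True.
    categories = sorted({v for v in values if v is not None})
    encoded = {c: [1 if v == c else 0 for v in values] for c in categories}
    encoded["nan"] = [1 if v is None else 0 for v in values]
    return encoded
```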
def checkTrainTestConsistency(train, test):
return (train,test)
def AllData_v2(reduce_mem=True):
app_data, len_train = GatherTables.getAppData()
app_data = GatherTables.generateAppFeatures(app_data)
merged_df = GatherTables.handlePrev(app_data)
merged_df = GatherTables.handleCreditCard(merged_df)
merged_df = GatherTables.handleBuro(merged_df)
merged_df = GatherTables.handleBuroBalance(merged_df)
merged_df = GatherTables.handlePosCash(merged_df)
merged_df = GatherTables.handleInstallments(merged_df)
categorical_feats = [f for f in merged_df.columns if merged_df[f].dtype == 'object']
for f_ in categorical_feats:
merged_df[f_], indexer = pd.factorize(merged_df[f_])
merged_df.drop('SK_ID_CURR', axis=1, inplace=True)
data = merged_df[:len_train]
test = merged_df[len_train:]
y = data.pop('TARGET')
test.drop(['TARGET'], axis=1, inplace=True)
return(data, test, y)
def AllData_v3(reduce_mem=True):
app_data, len_train = GatherTables.getAppData()
app_data = GatherTables.generateAppFeatures(app_data)
merged_df = GatherTables.handlePrev_v2(app_data)
merged_df = GatherTables.handleCreditCard_v2(merged_df)
merged_df = GatherTables.handleBuro_v2(merged_df)
merged_df = GatherTables.handleBuroBalance_v2(merged_df)
merged_df = GatherTables.handlePosCash_v2(merged_df)
merged_df = GatherTables.handleInstallments_v2(merged_df)
categorical_feats = [f for f in merged_df.columns if merged_df[f].dtype == 'object']
for f_ in categorical_feats:
merged_df[f_], indexer = pd.factorize(merged_df[f_])
merged_df.drop('SK_ID_CURR', axis=1, inplace=True)
data = merged_df[:len_train]
test = merged_df[len_train:]
y = data.pop('TARGET')
test.drop(['TARGET'], axis=1, inplace=True)
return(data, test, y)
def AllData_v4(reduce_mem=True):
app_data, len_train = GatherTables.getAppData()
app_data = GatherTables.generateAppFeatures_v4(app_data)
merged_df = GatherTables.handlePrev_v4(app_data)
merged_df = GatherTables.handleCreditCard_v4(merged_df)
merged_df = GatherTables.handleBuro_v4(merged_df)
merged_df = GatherTables.handleBuroBalance_v2(merged_df)
merged_df = GatherTables.handlePosCash_v2(merged_df)
merged_df = GatherTables.handleInstallments_v2(merged_df)
merged_df,cat_cols = one_hot_encoder(merged_df)
merged_df.drop('SK_ID_CURR', axis=1, inplace=True)
data = merged_df[:len_train]
test = merged_df[len_train:]
y = data.pop('TARGET')
test.drop(['TARGET'], axis=1, inplace=True)
return(data, test, y)
def ApplicationBuroBalance(reduce_mem=True):
data = pd.read_csv('../input/application_train.csv')
test = pd.read_csv('../input/application_test.csv')
buro = pd.read_csv('../input/bureau.csv')
buro_balance = pd.read_csv('../input/bureau_balance.csv')
# Handle Buro Balance
buro_balance.loc[buro_balance['STATUS']=='C', 'STATUS'] = '0'
buro_balance.loc[buro_balance['STATUS']=='X', 'STATUS'] = '0'
buro_balance['STATUS'] = buro_balance['STATUS'].astype('int64')
buro_balance_group = buro_balance.groupby('SK_ID_BUREAU').agg({'STATUS':['max','mean'], 'MONTHS_BALANCE':'max'})
buro_balance_group.columns = [' '.join(col).strip() for col in buro_balance_group.columns.values]
idx = buro_balance.groupby('SK_ID_BUREAU')['MONTHS_BALANCE'].transform(max) == buro_balance['MONTHS_BALANCE']
Buro_Balance_Last = buro_balance[idx][['SK_ID_BUREAU','STATUS']]
Buro_Balance_Last.rename(columns={'STATUS': 'Buro_Balance_Last_Value'}, inplace=True)
Buro_Balance_Last['Buro_Balance_Max'] = Buro_Balance_Last['SK_ID_BUREAU'].map(buro_balance_group['STATUS max'])
Buro_Balance_Last['Buro_Balance_Mean'] = Buro_Balance_Last['SK_ID_BUREAU'].map(buro_balance_group['STATUS mean'])
Buro_Balance_Last['Buro_Balance_Last_Month'] = Buro_Balance_Last['SK_ID_BUREAU'].map(buro_balance_group['MONTHS_BALANCE max'])
# Handle Buro Data
def nonUnique(x):
return x.nunique()
def modeValue(x):
return stats.mode(x)[0][0]
def totalBadCredit(x):
badCredit = 0
for value in x:
if(value==2 or value==3):
badCredit+=1
return badCredit
def creditOverdue(x):
overdue=0
for value in x:
if(value>0):
overdue+=1
return overdue
categorical_feats = [f for f in buro.columns if buro[f].dtype == 'object']
for f_ in categorical_feats:
buro[f_], indexer = pd.factorize(buro[f_])
categorical_feats = [f for f in data.columns if data[f].dtype == 'object']
for f_ in categorical_feats:
data[f_], indexer = pd.factorize(data[f_])
test[f_] = indexer.get_indexer(test[f_])
# Aggregate Values on All Credits
buro_group = buro.groupby('SK_ID_CURR').agg({'SK_ID_BUREAU':'count',
'AMT_CREDIT_SUM':'sum',
'AMT_CREDIT_SUM_DEBT':'sum',
'CREDIT_CURRENCY': [nonUnique, modeValue],
'CREDIT_TYPE': [nonUnique, modeValue],
'CNT_CREDIT_PROLONG': 'sum',
'CREDIT_ACTIVE': totalBadCredit,
'CREDIT_DAY_OVERDUE': creditOverdue
})
buro_group.columns = [' '.join(col).strip() for col in buro_group.columns.values]
# Aggregate Values on Active Credits
buro_active = buro.loc[buro['CREDIT_ACTIVE']==1]
buro_group_active = buro_active.groupby('SK_ID_CURR').agg({'AMT_CREDIT_SUM': ['sum', 'count'],
'AMT_CREDIT_SUM_DEBT': 'sum',
'AMT_CREDIT_SUM_LIMIT': 'sum'
})
buro_group_active.columns = [' '.join(col).strip() for col in buro_group_active.columns.values]
# Getting last credit for each user
idx = buro.groupby('SK_ID_CURR')['SK_ID_BUREAU'].transform(max) == buro['SK_ID_BUREAU']
Buro_Last = buro[idx][['SK_ID_CURR','CREDIT_TYPE','DAYS_CREDIT_UPDATE','DAYS_CREDIT',
'DAYS_CREDIT_ENDDATE','DAYS_ENDDATE_FACT', 'SK_ID_BUREAU']]
Buro_Last['Credit_Count'] = Buro_Last['SK_ID_CURR'].map(buro_group['SK_ID_BUREAU count'])
Buro_Last['Total_Credit_Amount'] = Buro_Last['SK_ID_CURR'].map(buro_group['AMT_CREDIT_SUM sum'])
Buro_Last['Total_Debt_Amount'] = Buro_Last['SK_ID_CURR'].map(buro_group['AMT_CREDIT_SUM_DEBT sum'])
Buro_Last['NumberOfCreditCurrency'] = Buro_Last['SK_ID_CURR'].map(buro_group['CREDIT_CURRENCY nonUnique'])
Buro_Last['MostCommonCreditCurrency'] = Buro_Last['SK_ID_CURR'].map(buro_group['CREDIT_CURRENCY modeValue'])
Buro_Last['NumberOfCreditType'] = Buro_Last['SK_ID_CURR'].map(buro_group['CREDIT_TYPE nonUnique'])
Buro_Last['MostCommonCreditType'] = Buro_Last['SK_ID_CURR'].map(buro_group['CREDIT_TYPE modeValue'])
Buro_Last['NumberOfCreditProlong'] = Buro_Last['SK_ID_CURR'].map(buro_group['CNT_CREDIT_PROLONG sum'])
Buro_Last['NumberOfBadCredit'] = Buro_Last['SK_ID_CURR'].map(buro_group['CREDIT_ACTIVE totalBadCredit'])
Buro_Last['NumberOfDelayedCredit'] = Buro_Last['SK_ID_CURR'].map(buro_group['CREDIT_DAY_OVERDUE creditOverdue'])
Buro_Last['Active_Credit_Amount'] = Buro_Last['SK_ID_CURR'].map(buro_group_active['AMT_CREDIT_SUM sum'])
Buro_Last['Active_Credit_Count'] = Buro_Last['SK_ID_CURR'].map(buro_group_active['AMT_CREDIT_SUM count'])
Buro_Last['Active_Debt_Amount'] = Buro_Last['SK_ID_CURR'].map(buro_group_active['AMT_CREDIT_SUM_DEBT sum'])
Buro_Last['Active_Credit_Card_Limit'] = Buro_Last['SK_ID_CURR'].map(buro_group_active['AMT_CREDIT_SUM_LIMIT sum'])
Buro_Last['BalanceOnCreditBuro'] = Buro_Last['Active_Debt_Amount'] / Buro_Last['Active_Credit_Amount']
# Merge buro with Buro Balance
buro_merged = pd.merge(buro, Buro_Balance_Last, how='left', on='SK_ID_BUREAU')
buro_merged = buro_merged[['SK_ID_CURR','SK_ID_BUREAU','Buro_Balance_Last_Value','Buro_Balance_Max',
'Buro_Balance_Mean','Buro_Balance_Last_Month']]
buro_merged_group = buro_merged.groupby('SK_ID_CURR').agg(np.mean)
buro_merged_group.reset_index(inplace=True)
buro_merged_group.drop('SK_ID_BUREAU', axis=1, inplace=True)
# Add Tables to main Data
data = data.merge(right=Buro_Last.reset_index(), how='left', on='SK_ID_CURR')
test = test.merge(right=Buro_Last.reset_index(), how='left', on='SK_ID_CURR')
data = data.merge(right=buro_merged_group.reset_index(), how='left', on='SK_ID_CURR')
test = test.merge(right=buro_merged_group.reset_index(), how='left', on='SK_ID_CURR')
y = data['TARGET']
data.drop(['SK_ID_CURR','TARGET'], axis=1, inplace=True)
test.drop(['SK_ID_CURR'], axis=1, inplace=True)
if(reduce_mem==True):
data = reduce_mem_usage(data)
test = reduce_mem_usage(test)
return(data, test, y)
def ApplicationBuro(reduce_mem=True):
data = pd.read_csv('../input/application_train.csv')
test = pd.read_csv('../input/application_test.csv')
buro = pd.read_csv('../input/bureau.csv')
def nonUnique(x):
return x.nunique()
def modeValue(x):
return stats.mode(x)[0][0]
def totalBadCredit(x):
badCredit = 0
for value in x:
if(value==2 or value==3):
badCredit+=1
return badCredit
def creditOverdue(x):
overdue=0
for value in x:
if(value>0):
overdue+=1
return overdue
categorical_feats = [f for f in buro.columns if buro[f].dtype == 'object']
for f_ in categorical_feats:
buro[f_], indexer = pd.factorize(buro[f_])
categorical_feats = [f for f in data.columns if data[f].dtype == 'object']
for f_ in categorical_feats:
data[f_], indexer = pd.factorize(data[f_])
test[f_] = indexer.get_indexer(test[f_])
# Aggregate Values on All Credits
buro_group = buro.groupby('SK_ID_CURR').agg({'SK_ID_BUREAU':'count',
'AMT_CREDIT_SUM':'sum',
'AMT_CREDIT_SUM_DEBT':'sum',
'CREDIT_CURRENCY': [nonUnique, modeValue],
'CREDIT_TYPE': [nonUnique, modeValue],
'CNT_CREDIT_PROLONG': 'sum',
'CREDIT_ACTIVE': totalBadCredit,
'CREDIT_DAY_OVERDUE': creditOverdue
})
buro_group.columns = [' '.join(col).strip() for col in buro_group.columns.values]
# Aggregate Values on Active Credits
buro_active = buro.loc[buro['CREDIT_ACTIVE']==1]
buro_group_active = buro_active.groupby('SK_ID_CURR').agg({'AMT_CREDIT_SUM': ['sum', 'count'],
'AMT_CREDIT_SUM_DEBT': 'sum',
'AMT_CREDIT_SUM_LIMIT': 'sum'
})
buro_group_active.columns = [' '.join(col).strip() for col in buro_group_active.columns.values]
# Getting last credit for each user
idx = buro.groupby('SK_ID_CURR')['SK_ID_BUREAU'].transform(max) == buro['SK_ID_BUREAU']
Buro_Last = buro[idx][['SK_ID_CURR','CREDIT_TYPE','DAYS_CREDIT_UPDATE','DAYS_CREDIT',
'DAYS_CREDIT_ENDDATE','DAYS_ENDDATE_FACT']]
Buro_Last['Credit_Count'] = Buro_Last['SK_ID_CURR'].map(buro_group['SK_ID_BUREAU count'])
Buro_Last['Total_Credit_Amount'] = Buro_Last['SK_ID_CURR'].map(buro_group['AMT_CREDIT_SUM sum'])
Buro_Last['Total_Debt_Amount'] = Buro_Last['SK_ID_CURR'].map(buro_group['AMT_CREDIT_SUM_DEBT sum'])
Buro_Last['NumberOfCreditCurrency'] = Buro_Last['SK_ID_CURR'].map(buro_group['CREDIT_CURRENCY nonUnique'])
Buro_Last['MostCommonCreditCurrency'] = Buro_Last['SK_ID_CURR'].map(buro_group['CREDIT_CURRENCY modeValue'])
Buro_Last['NumberOfCreditType'] = Buro_Last['SK_ID_CURR'].map(buro_group['CREDIT_TYPE nonUnique'])
Buro_Last['MostCommonCreditType'] = Buro_Last['SK_ID_CURR'].map(buro_group['CREDIT_TYPE modeValue'])
Buro_Last['NumberOfCreditProlong'] = Buro_Last['SK_ID_CURR'].map(buro_group['CNT_CREDIT_PROLONG sum'])
Buro_Last['NumberOfBadCredit'] = Buro_Last['SK_ID_CURR'].map(buro_group['CREDIT_ACTIVE totalBadCredit'])
Buro_Last['NumberOfDelayedCredit'] = Buro_Last['SK_ID_CURR'].map(buro_group['CREDIT_DAY_OVERDUE creditOverdue'])
Buro_Last['Active_Credit_Amount'] = Buro_Last['SK_ID_CURR'].map(buro_group_active['AMT_CREDIT_SUM sum'])
Buro_Last['Active_Credit_Count'] = Buro_Last['SK_ID_CURR'].map(buro_group_active['AMT_CREDIT_SUM count'])
Buro_Last['Active_Debt_Amount'] = Buro_Last['SK_ID_CURR'].map(buro_group_active['AMT_CREDIT_SUM_DEBT sum'])
Buro_Last['Active_Credit_Card_Limit'] = Buro_Last['SK_ID_CURR'].map(buro_group_active['AMT_CREDIT_SUM_LIMIT sum'])
Buro_Last['BalanceOnCreditBuro'] = Buro_Last['Active_Debt_Amount'] / Buro_Last['Active_Credit_Amount']
data = data.merge(right=Buro_Last.reset_index(), how='left', on='SK_ID_CURR')
test = test.merge(right=Buro_Last.reset_index(), how='left', on='SK_ID_CURR')
y = data['TARGET']
data.drop(['SK_ID_CURR','TARGET'], axis=1, inplace=True)
test.drop(['SK_ID_CURR'], axis=1, inplace=True)
if(reduce_mem==True):
data = reduce_mem_usage(data)
test = reduce_mem_usage(test)
return(data, test, y)
def ApplicationOnly(reduce_mem=True):
data = pd.read_csv('../input/application_train.csv')
test = pd.read_csv('../input/application_test.csv')
categorical_feats = [f for f in data.columns if data[f].dtype == 'object']
for f_ in categorical_feats:
data[f_], indexer = pd.factorize(data[f_])
test[f_] = indexer.get_indexer(test[f_])
y = data['TARGET']
data.drop(['SK_ID_CURR','TARGET'], axis=1, inplace=True)
test.drop(['SK_ID_CURR'], axis=1, inplace=True)
if(reduce_mem==True):
data = reduce_mem_usage(data)
test = reduce_mem_usage(test)
return(data, test, y)
def ApplicationBuroAndPrev(reduce_mem=True):
data = pd.read_csv('../input/application_train.csv')
test = pd.read_csv('../input/application_test.csv')
prev = pd.read_csv('../input/previous_application.csv')
buro = pd.read_csv('../input/bureau.csv')
categorical_feats = [f for f in data.columns if data[f].dtype == 'object']
for f_ in categorical_feats:
data[f_], indexer = pd.factorize(data[f_])
test[f_] = indexer.get_indexer(test[f_])
prev_cat_features = [f_ for f_ in prev.columns if prev[f_].dtype == 'object']
for f_ in prev_cat_features:
prev = pd.concat([prev, pd.get_dummies(prev[f_], prefix=f_)], axis=1)
cnt_prev = prev[['SK_ID_CURR', 'SK_ID_PREV']].groupby('SK_ID_CURR').count()
prev['SK_ID_PREV'] = prev['SK_ID_CURR'].map(cnt_prev['SK_ID_PREV'])
avg_prev = prev.groupby('SK_ID_CURR').mean()
avg_prev.columns = ['prev_app_' + f_ for f_ in avg_prev.columns]
buro_cat_features = [f_ for f_ in buro.columns if buro[f_].dtype == 'object']
for f_ in buro_cat_features:
buro = pd.concat([buro, pd.get_dummies(buro[f_], prefix=f_)], axis=1)
avg_buro = buro.groupby('SK_ID_CURR').mean()
avg_buro['buro_count'] = buro[['SK_ID_BUREAU','SK_ID_CURR']].groupby('SK_ID_CURR').count()['SK_ID_BUREAU']
avg_buro.columns = ['bureau_' + f_ for f_ in avg_buro.columns]
data = data.merge(right=avg_prev.reset_index(), how='left', on='SK_ID_CURR')
data = data.merge(right=avg_buro.reset_index(), how='left', on='SK_ID_CURR')
test = test.merge(right=avg_prev.reset_index(), how='left', on='SK_ID_CURR')
test = test.merge(right=avg_buro.reset_index(), how='left', on='SK_ID_CURR')
y = data['TARGET']
data.drop(['SK_ID_CURR','TARGET'], axis=1, inplace=True)
test.drop(['SK_ID_CURR'], axis=1, inplace=True)
if(reduce_mem==True):
data = reduce_mem_usage(data)
test = reduce_mem_usage(test)
return(data, test, y)
def AllData(reduce_mem=True):
data = pd.read_csv('../input/application_train.csv')
test = pd.read_csv('../input/application_test.csv')
prev = pd.read_csv('../input/previous_application.csv')
buro = pd.read_csv('../input/bureau.csv')
buro_balance = pd.read_csv('../input/bureau_balance.csv')
credit_card = pd.read_csv('../input/credit_card_balance.csv')
POS_CASH = pd.read_csv('../input/POS_CASH_balance.csv')
payments = pd.read_csv('../input/installments_payments.csv')
categorical_feats = [f for f in data.columns if data[f].dtype == 'object']
for f_ in categorical_feats:
data[f_], indexer = pd.factorize(data[f_])
test[f_] = indexer.get_indexer(test[f_])
y = data['TARGET']
del data['TARGET']
#Pre-processing buro_balance
print('Pre-processing buro_balance...')
buro_grouped_size = buro_balance.groupby('SK_ID_BUREAU')['MONTHS_BALANCE'].size()
buro_grouped_max = buro_balance.groupby('SK_ID_BUREAU')['MONTHS_BALANCE'].max()
buro_grouped_min = buro_balance.groupby('SK_ID_BUREAU')['MONTHS_BALANCE'].min()
buro_counts = buro_balance.groupby('SK_ID_BUREAU')['STATUS'].value_counts(normalize = False)
buro_counts_unstacked = buro_counts.unstack('STATUS')
buro_counts_unstacked.columns = ['STATUS_0', 'STATUS_1','STATUS_2','STATUS_3','STATUS_4','STATUS_5','STATUS_C','STATUS_X',]
buro_counts_unstacked['MONTHS_COUNT'] = buro_grouped_size
buro_counts_unstacked['MONTHS_MIN'] = buro_grouped_min
buro_counts_unstacked['MONTHS_MAX'] = buro_grouped_max
buro = buro.join(buro_counts_unstacked, how='left', on='SK_ID_BUREAU')
#Pre-processing previous_application
print('Pre-processing previous_application...')
#One-hot encoding of categorical features in previous application data set
prev_cat_features = [pcol for pcol in prev.columns if prev[pcol].dtype == 'object']
prev = pd.get_dummies(prev, columns=prev_cat_features)
avg_prev = prev.groupby('SK_ID_CURR').mean()
cnt_prev = prev[['SK_ID_CURR', 'SK_ID_PREV']].groupby('SK_ID_CURR').count()
avg_prev['nb_app'] = cnt_prev['SK_ID_PREV']
del avg_prev['SK_ID_PREV']
#Pre-processing buro
print('Pre-processing buro...')
#One-hot encoding of categorical features in buro data set
buro_cat_features = [bcol for bcol in buro.columns if buro[bcol].dtype == 'object']
buro = pd.get_dummies(buro, columns=buro_cat_features)
avg_buro = buro.groupby('SK_ID_CURR').mean()
avg_buro['buro_count'] = buro[['SK_ID_BUREAU', 'SK_ID_CURR']].groupby('SK_ID_CURR').count()['SK_ID_BUREAU']
del avg_buro['SK_ID_BUREAU']
#Pre-processing POS_CASH
print('Pre-processing POS_CASH...')
le = LabelEncoder()
POS_CASH['NAME_CONTRACT_STATUS'] = le.fit_transform(POS_CASH['NAME_CONTRACT_STATUS'].astype(str))
nunique_status = POS_CASH[['SK_ID_CURR', 'NAME_CONTRACT_STATUS']].groupby('SK_ID_CURR').nunique()
nunique_status2 = POS_CASH[['SK_ID_CURR', 'NAME_CONTRACT_STATUS']].groupby('SK_ID_CURR').max()
POS_CASH['NUNIQUE_STATUS'] = nunique_status['NAME_CONTRACT_STATUS']
POS_CASH['NUNIQUE_STATUS2'] = nunique_status2['NAME_CONTRACT_STATUS']
POS_CASH.drop(['SK_ID_PREV', 'NAME_CONTRACT_STATUS'], axis=1, inplace=True)
#Pre-processing credit_card
print('Pre-processing credit_card...')
credit_card['NAME_CONTRACT_STATUS'] = le.fit_transform(credit_card['NAME_CONTRACT_STATUS'].astype(str))
nunique_status = credit_card[['SK_ID_CURR', 'NAME_CONTRACT_STATUS']].groupby('SK_ID_CURR').nunique()
nunique_status2 = credit_card[['SK_ID_CURR', 'NAME_CONTRACT_STATUS']].groupby('SK_ID_CURR').max()
credit_card['NUNIQUE_STATUS'] = nunique_status['NAME_CONTRACT_STATUS']
credit_card['NUNIQUE_STATUS2'] = nunique_status2['NAME_CONTRACT_STATUS']
credit_card.drop(['SK_ID_PREV', 'NAME_CONTRACT_STATUS'], axis=1, inplace=True)
#Pre-processing payments
print('Pre-processing payments...')
avg_payments = payments.groupby('SK_ID_CURR').mean()
avg_payments2 = payments.groupby('SK_ID_CURR').max()
avg_payments3 = payments.groupby('SK_ID_CURR').min()
del avg_payments['SK_ID_PREV']
#Join databases
print('Joining databases...')
data = data.merge(right=avg_prev.reset_index(), how='left', on='SK_ID_CURR')
test = test.merge(right=avg_prev.reset_index(), how='left', on='SK_ID_CURR')
data = data.merge(right=avg_buro.reset_index(), how='left', on='SK_ID_CURR')
test = test.merge(right=avg_buro.reset_index(), how='left', on='SK_ID_CURR')
data = data.merge(POS_CASH.groupby('SK_ID_CURR').mean().reset_index(), how='left', on='SK_ID_CURR')
test = test.merge(POS_CASH.groupby('SK_ID_CURR').mean().reset_index(), how='left', on='SK_ID_CURR')
data = data.merge(credit_card.groupby('SK_ID_CURR').mean().reset_index(), how='left', on='SK_ID_CURR')
test = test.merge(credit_card.groupby('SK_ID_CURR').mean().reset_index(), how='left', on='SK_ID_CURR')
data = data.merge(right=avg_payments.reset_index(), how='left', on='SK_ID_CURR')
test = test.merge(right=avg_payments.reset_index(), how='left', on='SK_ID_CURR')
data = data.merge(right=avg_payments2.reset_index(), how='left', on='SK_ID_CURR')
test = test.merge(right=avg_payments2.reset_index(), how='left', on='SK_ID_CURR')
data = data.merge(right=avg_payments3.reset_index(), how='left', on='SK_ID_CURR')
test = test.merge(right=avg_payments3.reset_index(), how='left', on='SK_ID_CURR')
if(reduce_mem==True):
data = reduce_mem_usage(data)
test = reduce_mem_usage(test)
return(data, test, y)
def reduce_mem_usage(df):
start_mem = df.memory_usage().sum() / 1024**2
print('Memory usage of dataframe is {:.2f} MB'.format(start_mem))
for col in df.columns:
col_type = df[col].dtype
if col_type != object:
c_min = df[col].min()
c_max = df[col].max()
if str(col_type)[:3] == 'int':
if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
df[col] = df[col].astype(np.int8)
elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
df[col] = df[col].astype(np.int16)
elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
df[col] = df[col].astype(np.int32)
elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
df[col] = df[col].astype(np.int64)
else:
if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:
df[col] = df[col].astype(np.float16)
elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
df[col] = df[col].astype(np.float32)
else:
df[col] = df[col].astype(np.float64)
else:
df[col] = df[col].astype('category')
end_mem = df.memory_usage().sum() / 1024**2
print('Memory usage after optimization is: {:.2f} MB'.format(end_mem))
print('Decreased by {:.1f}%'.format(100 * (start_mem - end_mem) / start_mem))
return df
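# The integer branch of reduce_mem_usage picks the first dtype whose bounds contain the column's
# min and max; the selection rule in isolation, as a pure-Python sketch mirroring the np.iinfo
# comparisons above (names are mine):

```python
INT_BOUNDS = [
    ("int8", -2**7, 2**7 - 1),
    ("int16", -2**15, 2**15 - 1),
    ("int32", -2**31, 2**31 - 1),
    ("int64", -2**63, 2**63 - 1),
]

def smallest_int_dtype(c_min, c_max):
    # Same strict comparisons as reduce_mem_usage: the data range must fit
    # strictly inside the dtype's bounds.
    for name, lo, hi in INT_BOUNDS:
        if c_min > lo and c_max < hi:
            return name
    return "int64"
```

# One caveat of the strict `<`: a column whose max equals np.iinfo(np.int8).max (127) is promoted
# to int16 — harmless, just slightly conservative.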
def AllData_v5(reduce_mem=True):
df = GatherTables.application_train_test()
with GatherTables.timer("Process bureau and bureau_balance"):
bureau = GatherTables.bureau_and_balance()
print("Bureau df shape:", bureau.shape)
df = df.join(bureau, how='left', on='SK_ID_CURR')
print("Current data shape:", df.shape)
del bureau
gc.collect()
with GatherTables.timer("Process previous_applications"):
prev = GatherTables.previous_applications()
print("Previous applications df shape:", prev.shape)
df = df.join(prev, how='left', on='SK_ID_CURR')
print("Current data shape:", df.shape)
del prev
gc.collect()
with GatherTables.timer("Process POS-CASH balance"):
pos = GatherTables.pos_cash()
print("Pos-cash balance df shape:", pos.shape)
df = df.join(pos, how='left', on='SK_ID_CURR')
print("Current data shape:", df.shape)
del pos
gc.collect()
with GatherTables.timer("Process installments payments"):
ins = GatherTables.installments_payments()
print("Installments payments df shape:", ins.shape)
df = df.join(ins, how='left', on='SK_ID_CURR')
print("Current data shape:", df.shape)
del ins
gc.collect()
with GatherTables.timer("Process credit card balance"):
cc = GatherTables.credit_card_balance()
print("Credit card balance df shape:", cc.shape)
df = df.join(cc, how='left', on='SK_ID_CURR')
print("Current data shape:", df.shape)
del cc
gc.collect()
df, new_columns = one_hot_encoder(df)
df.drop('SK_ID_CURR', axis=1, inplace=True)
data = df[df['TARGET'].notnull()]
test = df[df['TARGET'].isnull()]
y = data.pop('TARGET')
test.drop(['TARGET'], axis=1, inplace=True)
return(data, test, y)
# File: ois/data_request.py (repo: pandincus/ois-service, MIT)
from .data_request_type import DataRequestType
class DataRequest():
def __init__(self, fieldName, requestType):
self.fieldName = fieldName
self.requestType = requestType
self.value = 0
# File: lawerWeb/settings.py (repo: xia-deng/lawerWeb, Apache-2.0)
"""
Django settings for lawerWeb project.
Generated by 'django-admin startproject' using Django 2.1.3.
For more information on this file, see
https://docs.djangoproject.com/en/2.1/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/2.1/ref/settings/
"""
import os
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/2.1/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'h4ep$74a1pw@9)kgv2%#!ohfe_1a6!v_17x^((h3g*^3**lqco'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'blog',
    'froala_editor',
    'xadmin',
    'crispy_forms',
    'reversion',
]

MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
]

ROOT_URLCONF = 'lawerWeb.urls'

TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
        },
    },
]

WSGI_APPLICATION = 'lawerWeb.wsgi.application'

# Database
# https://docs.djangoproject.com/en/2.1/ref/settings/#databases
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
    }
}

# Password validation
# https://docs.djangoproject.com/en/2.1/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
    {
        'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
    },
]
# Internationalization
# https://docs.djangoproject.com/en/2.1/topics/i18n/
# Chinese-locale configuration
LANGUAGE_CODE = 'zh-Hans'
TIME_ZONE = 'Asia/Shanghai'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/2.1/howto/static-files/
# Shared static files (e.g. jquery.js) can live here; these folders must not include STATIC_ROOT
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'static')
STATICFILES_DIRS = (
    "common_static",
)
IS_POLL_NUM_EDIT = True
IS_COMMENT_NUM_EDIT = True
PER_PAGE_SHOW = 10
FROALA_EDITOR_PLUGINS = ('align', 'char_counter', 'code_beautifier', 'code_view', 'colors', 'draggable', 'emoticons',
                         'entities', 'file', 'font_family', 'font_size', 'image_manager', 'image',
                         'line_breaker', 'link', 'lists', 'paragraph_format', 'paragraph_style', 'quick_insert',
                         'quote', 'save', 'table', 'url', 'video')
USE_FROALA_EDITOR = True
#FROALA_UPLOAD_PATH = os.path.join(BASE_DIR, 'media')
# upload folder
# MEDIA_URL: URL path under which uploads are served
MEDIA_URL = '/media/'
# MEDIA_ROOT: directory where uploads are stored; must be an absolute local path
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
SELECT_INPUT_COLUMN_NUMBER = 10
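The `BASE_DIR` idiom at the top of this settings file climbs two directory levels from `settings.py` to the project root; a minimal standalone sketch (the `/srv/lawerWeb` path is hypothetical, not from this repo):

```python
import os

# Mimic Django's BASE_DIR idiom: two dirname() calls climb from
# <project>/lawerWeb/settings.py up to the project root <project>/.
settings_file = "/srv/lawerWeb/lawerWeb/settings.py"  # hypothetical path
base_dir = os.path.dirname(os.path.dirname(os.path.abspath(settings_file)))

print(base_dir)                                # /srv/lawerWeb (on POSIX)
print(os.path.join(base_dir, "db.sqlite3"))    # where DATABASES['default'] points
```

Settings such as `STATIC_ROOT` and `MEDIA_ROOT` above are built with the same `os.path.join(BASE_DIR, ...)` call, so they stay correct wherever the project is checked out.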
construction/markdown.py | rik/mesconseilscovid | MIT

import re
from textwrap import indent

import mistune
from jinja2 import Template

from .directives.injection import InjectionDirective
from .directives.renvoi import RenvoiDirective
from .directives.section import SectionDirective
from .directives.question import QuestionDirective
from .directives.toc import DirectiveToc
from .typographie import typographie


class FrenchTypographyMixin:
    def text(self, text_):
        return typographie(super().text(text_))

    def block_html(self, html):
        return typographie(super().block_html(html))


class ClassMixin:
    """Allows adding a CSS class to a paragraph or a list item.

    For example:

    * {.maClasse} a regular markdown list item
    """

    RE_CLASS = re.compile(
        r"""^
        (?P<before>.*?)
        (?:\s*\{\.(?P<class>[\w\- ]+?)\}\s*)
        (?P<after>.*)
        $
        """,
        re.MULTILINE | re.VERBOSE,
    )

    def paragraph(self, text):
        return self._element_with_classes("p", text) or super().paragraph(text)

    def list_item(self, text, level):
        return self._element_with_classes("li", text) or super().list_item(text, level)

    def _element_with_classes(self, name, text):
        mo = self.RE_CLASS.match(text)
        if mo is not None:
            class_ = mo.group("class")
            content = " ".join(filter(None, [mo.group("before"), mo.group("after")]))
            return f'<{name} class="{class_}">{content}</{name}>\n'


class CustomHTMLRenderer(FrenchTypographyMixin, ClassMixin, mistune.HTMLRenderer):
    pass


def create_markdown_parser(questions_index=None):
    plugins = [
        SectionDirective(),
        QuestionDirective(),
        DirectiveToc(),
    ]
    if questions_index is not None:
        plugins.append(RenvoiDirective(questions_index=questions_index))
        plugins.append(InjectionDirective(questions_index=questions_index))
    return mistune.create_markdown(
        renderer=CustomHTMLRenderer(escape=False),
        plugins=plugins,
    )


class MarkdownContent:
    """Block content."""

    def __init__(self, text, markdown):
        self.text = text
        self.markdown = markdown

    def __str__(self):
        return self.render_block()

    def render_block(self):
        return self.markdown(self.text)

    def split(self, separator="\n---\n"):
        return [
            self.__class__(text.strip(), self.markdown)
            for text in self.text.split(separator)
        ]

    def render_me(self, tag="div"):
        return f'<{tag} class="me visible">{str(self).strip()}</{tag}>'

    def render_them(self, tag="div"):
        return f'<{tag} class="them" hidden>{str(self).strip()}</{tag}>'


class MarkdownInlineContent(MarkdownContent):
    """Inline content."""

    def __str__(self):
        return self.render_inline()

    def render_inline(self):
        return self.markdown.inline(self.text, {}).strip()

    def render_me(self):
        return super().render_me(tag="span")

    def render_them(self):
        return super().render_them(tag="span")


def render_markdown_file(file_path, markdown_parser):
    source = file_path.read_text()
    templated_source = Template(source).render(formulaire=render_formulaire)
    return MarkdownContent(templated_source, markdown_parser)


def render_formulaire(nom_formulaire, prefixe=""):
    from .thematiques import THEMATIQUES_DIR

    path = THEMATIQUES_DIR / "formulaires" / f"{nom_formulaire}.md"
    with path.open() as f:
        template = Template(f.read())
    if prefixe:
        prefixe = nom_formulaire + "-" + prefixe
    else:
        prefixe = nom_formulaire
    markdown = (
        f'<div class="formulaire" data-nom="{nom_formulaire}" data-prefixe="{prefixe}">\n\n'
        + template.render(prefixe=prefixe)
        + "\n\n</div>"
    )
    return indent(markdown, " ").lstrip()
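The `ClassMixin` regex and recombination logic above can be exercised on their own, outside mistune; a small sketch (the sample string is illustrative):

```python
import re

# Same pattern as ClassMixin.RE_CLASS above. Note that in VERBOSE mode
# whitespace inside the character class [\w\- ] is still significant.
RE_CLASS = re.compile(
    r"""^
    (?P<before>.*?)
    (?:\s*\{\.(?P<class>[\w\- ]+?)\}\s*)
    (?P<after>.*)
    $
    """,
    re.MULTILINE | re.VERBOSE,
)

mo = RE_CLASS.match("{.maClasse} item classique de la liste")
class_ = mo.group("class")
# Same recombination step as _element_with_classes().
content = " ".join(filter(None, [mo.group("before"), mo.group("after")]))
print(f'<li class="{class_}">{content}</li>')
# <li class="maClasse">item classique de la liste</li>
```

The lazy `before` group lets the `{.class}` marker appear mid-line as well, with the text on both sides rejoined by a single space.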
HMS/Hospital/models.py | Arshad360/Hospital-Management-System-Cse327-Projectr | MIT

from django.db import models
from django.contrib.auth.models import User

# Create your models here

# All the departments
departments = [('Cardiologist', 'Cardiologist'),
               ('Dermatologists', 'Dermatologists'),
               ('Emergency Medicine Specialists', 'Emergency Medicine Specialists'),
               ('Allergists/Immunologists', 'Allergists/Immunologists'),
               ('Anesthesiologists', 'Anesthesiologists'),
               ('Colon and Rectal Surgeons', 'Colon and Rectal Surgeons')
               ]


# Appointment model
class Appointment(models.Model):
    # Patient's id
    patientId = models.PositiveIntegerField(null=True)
    # Doctor's id
    doctorId = models.PositiveIntegerField(null=True)
    # Patient's name
    patientName = models.CharField(max_length=40, null=True)
    # Doctor's name
    doctorName = models.CharField(max_length=40, null=True)
    # Appointment date
    appointmentDate = models.DateField(auto_now=True)
    # Free-text description
    description = models.TextField(max_length=500)
    status = models.BooleanField(default=False)


# Ambulance model
class Ambulance(models.Model):
    title = models.CharField(max_length=40)
    pub_date = models.DateTimeField()
    body = models.TextField()


# Emergency model
class Emergency(models.Model):
    title = models.CharField(max_length=40)
    pub_date = models.DateTimeField()
    body = models.TextField()


# Each content model below follows the same shape:
#   title: CharField (max_length=40)
#   pub_date: DateTimeField
#   body: TextField
class Pharmacy(models.Model):
    title = models.CharField(max_length=40, null=True)
    pub_date = models.DateTimeField()
    body = models.TextField()


class availablebloodGroup(models.Model):
    title = models.CharField(max_length=40, null=True)
    pub_date = models.DateTimeField()
    body = models.TextField()


class bloodBank(models.Model):
    title = models.CharField(max_length=40, null=True)
    pub_date = models.DateTimeField()
    body = models.TextField()


class coronaUpdate(models.Model):
    title = models.CharField(max_length=40)
    pub_date = models.DateTimeField()
    body = models.TextField()


class donateBlood(models.Model):
    title = models.CharField(max_length=40)
    pub_date = models.DateTimeField()
    body = models.TextField()


class footer(models.Model):
    title = models.CharField(max_length=40)
    pub_date = models.DateTimeField()
    body = models.TextField()


class home(models.Model):
    title = models.CharField(max_length=40)
    pub_date = models.DateTimeField()
    body = models.TextField()


class homeSlider(models.Model):
    title = models.CharField(max_length=40)
    pub_date = models.DateTimeField()
    body = models.TextField()


class homeBase(models.Model):
    title = models.CharField(max_length=40)
    pub_date = models.DateTimeField()
    body = models.TextField()


class login(models.Model):
    # Was PositiveIntegerField(max_length=40); max_length is not a valid
    # option there, and a title is text, so CharField matches the siblings.
    title = models.CharField(max_length=40)
    pub_date = models.DateTimeField()
    body = models.TextField()


class navBar(models.Model):
    title = models.CharField(max_length=40)
    pub_date = models.DateTimeField()
    body = models.TextField()


class notice(models.Model):
    title = models.CharField(max_length=40)
    pub_date = models.DateTimeField()
    body = models.TextField()


class saveLife(models.Model):
    title = models.CharField(max_length=40)
    pub_date = models.DateTimeField()
    body = models.TextField()


class specialCare(models.Model):
    title = models.CharField(max_length=40, null=True)
    pub_date = models.DateTimeField()
    body = models.TextField()


class Doctor(models.Model):
    user = models.OneToOneField(User, on_delete=models.CASCADE)
    profile_pic = models.ImageField(upload_to='profile_pic/DoctorProfilePic/', null=True, blank=True)
    address = models.CharField(max_length=40)
    mobile = models.CharField(max_length=20, null=True)
    department = models.CharField(max_length=50, choices=departments, default='Cardiologist')
    status = models.BooleanField(default=False)

    @property
    def get_name(self):
        return self.user.first_name + " " + self.user.last_name

    @property
    def get_id(self):
        return self.user.id

    def __str__(self):
        return "{} ({})".format(self.user.first_name, self.department)


# Corona center model
class coronacenter(models.Model):
    title = models.CharField(max_length=40)
    pub_date = models.DateTimeField()
    body = models.TextField()


# Diabetes center model
class diabetescenter(models.Model):
    title = models.CharField(max_length=40)
    pub_date = models.DateTimeField()
    body = models.TextField()
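The `Doctor.get_name` / `__str__` pattern above can be illustrated without Django; a plain-Python sketch (the `User` stand-in and the sample names are illustrative, not the real `django.contrib.auth` model):

```python
# Plain-Python sketch of the Doctor property pattern: expose a computed,
# read-only attribute built from a related object's fields.
class User:
    def __init__(self, first_name, last_name):
        self.first_name = first_name
        self.last_name = last_name


class Doctor:
    def __init__(self, user, department):
        self.user = user
        self.department = department

    @property
    def get_name(self):
        # Accessed as doctor.get_name, no parentheses, like the model above.
        return self.user.first_name + " " + self.user.last_name

    def __str__(self):
        return "{} ({})".format(self.user.first_name, self.department)


d = Doctor(User("Ada", "Lovelace"), "Cardiologist")
print(d.get_name)  # Ada Lovelace
print(str(d))      # Ada (Cardiologist)
```

In the Django model the same property lets templates write `{{ doctor.get_name }}` without calling a method.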
tests/test.py | J35P312/vcf2cytosure | MIT

import pytest
from unittest.mock import patch

import vcf2cytosure


def test_version_argument():
    with patch('sys.argv', ['vcf2cytosure.py', '--version']):
        with pytest.raises(SystemExit) as excinfo:
            vcf2cytosure.main()
        assert excinfo.value.code == 0
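The same patch-argv-and-catch-SystemExit pattern works with the standard library alone; a sketch using a hypothetical argparse `main` as a stand-in (vcf2cytosure's real CLI may differ):

```python
import argparse
from unittest.mock import patch


def main():
    # Hypothetical stand-in for vcf2cytosure.main(). argparse's built-in
    # "version" action prints the version string and raises SystemExit(0).
    parser = argparse.ArgumentParser(prog="tool")
    parser.add_argument("--version", action="version", version="tool 1.0")
    parser.parse_args()


exit_code = None
with patch("sys.argv", ["tool", "--version"]):
    try:
        main()
    except SystemExit as exc:
        exit_code = exc.code

print(exit_code)  # 0
```

`pytest.raises(SystemExit)` in the test above captures exactly this exception, and `excinfo.value.code` is the same `code` attribute read here.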
playback/db.py | Nierot/Spotify | MIT

from . import models
def new_user(name, date, token):
    # Refuse to create a second user with the same token. (The original
    # ran the same filter twice and compared len() > 0; .exists() does
    # the check with a single, cheaper query.)
    if models.User.objects.filter(token=token).exists():
        return False
    user = models.User(name=name, created_at=date, token=token)
    user.save()
    return True


def new_genre(name):
    genre = models.Genre(name=name)
    genre.save()
    return genre


def new_artist(name, genres):
    artist = models.Artist(name=name)
    artist.save()
    for genre in genres:
        artist.genres.add(genre)
    return artist


def new_track(name, artist):
    track = models.Track(name=name, artist=artist)
    track.save()
    return track


def add_liked_track(user, track, term):
    """
    Term is an integer, 1 for short_term, 2 for medium_term, and 3 for long_term
    """
    liked_track = models.Liked_track(term=term, track=track, user=user)
    liked_track.save()
    return liked_track


def add_liked_artist(user, artist, term):
    """
    Term is an integer, 1 for short_term, 2 for medium_term, and 3 for long_term
    """
    liked_artist = models.Liked_artist(term=term, artist=artist, user=user)
    liked_artist.save()
    return liked_artist


def add_liked_genre(user, genre, term):
    """
    Term is an integer, 1 for short_term, 2 for medium_term, and 3 for long_term
    """
    liked_genre = models.Liked_genre(term=term, genre=genre, user=user)
    liked_genre.save()
    return liked_genre


def get_user(token):
    return models.User.objects.get(token=token)


def get_genre(name):
    return models.Genre.objects.get(name=name)


def get_artist(name):
    return models.Artist.objects.get(name=name)


def get_track(name):
    return models.Track.objects.get(name=name)


def add_genre_to_artist(genre, artist):
    artist.genres.add(genre)
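The 1/2/3 term convention repeated in the docstrings above can be made self-describing with an `IntEnum`; a sketch (the `Term` enum is an illustration, not part of this codebase):

```python
from enum import IntEnum


class Term(IntEnum):
    # Matches the docstring convention: 1 short_term, 2 medium_term, 3 long_term.
    SHORT = 1
    MEDIUM = 2
    LONG = 3


# A caller could then write add_liked_track(user, track, Term.SHORT);
# because IntEnum members compare equal to plain ints, the database
# still stores and filters on the integer 1.
print(int(Term.SHORT), Term.LONG.name)  # 1 LONG
```

This keeps the stored values unchanged while removing magic numbers from call sites.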
tests/test_csvtodb.py | rv816/csvtodb | 0BSD

#!/usr/bin/env python
# -*- coding: utf-8 -*-

"""
test_csvtodb
----------------------------------

Tests for `csvtodb` module.
"""

import unittest

import dataset

from csvtodb.csvtodb import *


class TestCsvtodb(unittest.TestCase):

    def setUp(self):
        pass

    def tearDown(self):
        pass

    def test_000_something(self):
        pass


testfix = [['foo', 'bar', 'yellow'], ['thing1', 'thing2', 3], ['green', 'purple', 10]]


def test_upload_to_db():
    db_url = 'sqlite://'
    db = dataset.connect(db_url)
    tablename = 'qrs_valueset_to_codes'
    testtable = upload_to_db(testfix, tablename, db_url)
    assert list(testtable.all())[1]['foo'] == 'green'
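`upload_to_db` itself is not shown in this file; an equivalent stdlib-only sketch of the insert-then-read-back pattern it is being tested for, using `sqlite3` in place of the `dataset` library (table and column names are illustrative):

```python
import sqlite3

# Fixture rows, mirroring the testfix list above (minus the header row).
rows = [("thing1", "thing2"), ("green", "purple")]

conn = sqlite3.connect(":memory:")  # like dataset's 'sqlite://' URL
conn.execute("CREATE TABLE fixtures (foo TEXT, bar TEXT)")
conn.executemany("INSERT INTO fixtures VALUES (?, ?)", rows)

# Read back and check the second row, as the assert in the test does.
fetched = conn.execute("SELECT foo, bar FROM fixtures").fetchall()
print(fetched[1][0])  # green
conn.close()
```

The `dataset` library layers dict-shaped rows over exactly this kind of SQL, which is why the test can index the result as `[1]['foo']`.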
src/main/python/proc/expression/let.py | cjblink1/lang | MIT

from proc.expression.expression import Expression
from proc.environment import Environment


class LetExpression(Expression):

    def __init__(self, variable: str, bound_expression: Expression, body: Expression):
        self.variable = variable
        self.bound_expression = bound_expression
        self.body = body

    def string_representation(self):
        return "variable = {0}, bound-expression = {1}, body = {2}".format(
            self.variable, self.bound_expression, self.body)

    def evaluate(self, environment: Environment):
        bound_value = self.bound_expression.evaluate(environment)
        return self.body.evaluate(environment.extend(self.variable, bound_value))
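A self-contained sketch of this evaluation flow, with minimal stand-ins for `Environment` and for literal/variable expressions (the names `Const`, `Var`, `lookup` are illustrative; the real classes live elsewhere in this repo):

```python
class Environment:
    # Minimal persistent environment: extend() returns a NEW environment,
    # leaving the original bindings untouched.
    def __init__(self, bindings=None):
        self.bindings = dict(bindings or {})

    def extend(self, variable, value):
        new = dict(self.bindings)
        new[variable] = value
        return Environment(new)

    def lookup(self, variable):
        return self.bindings[variable]


class Const:
    def __init__(self, value):
        self.value = value

    def evaluate(self, env):
        return self.value


class Var:
    def __init__(self, name):
        self.name = name

    def evaluate(self, env):
        return env.lookup(self.name)


class Let:
    # Same shape as LetExpression.evaluate above: evaluate the bound
    # expression first, then the body in an extended environment.
    def __init__(self, variable, bound_expression, body):
        self.variable = variable
        self.bound_expression = bound_expression
        self.body = body

    def evaluate(self, env):
        bound_value = self.bound_expression.evaluate(env)
        return self.body.evaluate(env.extend(self.variable, bound_value))


# let x = 5 in x  ->  5
result = Let("x", Const(5), Var("x")).evaluate(Environment())
print(result)  # 5
```

Because `extend` returns a fresh environment, the binding is scoped to the `let` body: once `evaluate` returns, the outer environment is unchanged.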
examples/python/numpy_functions.py | benedicteb/FYS2140-Resources | MIT

#!/usr/bin/env python
"""
Created on Mon 2 Dec 2013
Script viser import av funksjoner fra numpy og bruk av noen.
@author Benedicte Emilie Braekken
"""
from numpy import *
print 'e^1 =', exp( 1 ) # Eksponentialfunksjonen
print 'cos(pi) =', cos( pi ) # Cosinus
print 'sqrt(4) =', sqrt( 4 ) # Kvadratrot
print 'range(5) =', range(5) # Rekke opp til 4
print 'zeros(5) =', zeros(5) # Tom array med 5 elementer
print 'linspace(0,5,5) =', linspace(0,5,5) # Rekke som ikke oeker med 1
"""
bruker @ unix $ python numpy_functions.py
e^1 = 2.71828182846
cos(pi) = -1.0
sqrt(4) = 2.0
range(5) = [0, 1, 2, 3, 4]
zeros(5) = [ 0. 0. 0. 0. 0.]
linspace(0,5,5) = [ 0. 1.25 2.5 3.75 5. ]
"""
Desafios/desafio053.py | josivantarcio/Desafios-em-Python | MIT

frase = str(input('Digite a frase: ')).strip().upper()
palavras = frase.split()
juntarPalavras = ''.join(palavras)
trocar = juntarPalavras[::-1]
print(trocar)
# desafio053 asks whether the phrase is a palindrome, so compare the
# joined phrase with its reverse.
if trocar == juntarPalavras:
    print('A frase é um PALÍNDROMO!')
else:
    print('A frase NÃO é um palíndromo!')
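The split/join/slice steps above can be checked with a fixed phrase instead of `input()`; a short sketch (the sample is a classic Portuguese palindrome, written here without accents or punctuation):

```python
# Same pipeline as the exercise: drop spaces, then compare with the
# [::-1] reversed copy.
frase = 'SOCORRAM ME SUBI NO ONIBUS EM MARROCOS'
juntas = ''.join(frase.split())
invertida = juntas[::-1]
print(invertida == juntas)  # True
```

The `[::-1]` slice reads the string with a step of -1, producing the reversed copy in one expression.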