hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
92799feccf4a1361a1ddbaeca85667a04610ecb8 | 1,007 | py | Python | anecAPI/anecAPI.py | suborofu/anecAPI | ddf54e63254fc4b00748e673d8b48aa59abd6526 | [
"MIT"
] | null | null | null | anecAPI/anecAPI.py | suborofu/anecAPI | ddf54e63254fc4b00748e673d8b48aa59abd6526 | [
"MIT"
] | null | null | null | anecAPI/anecAPI.py | suborofu/anecAPI | ddf54e63254fc4b00748e673d8b48aa59abd6526 | [
"MIT"
] | null | null | null | from modern_jokes import modern_jokes
from soviet_jokes import soviet_jokes
import random
import argparse


def soviet_joke():
    return random.choice(soviet_jokes)


def modern_joke():
    return random.choice(modern_jokes)


def random_joke():
    return random.choice(soviet_jokes + modern_jokes)


# TODO: Auto-generated jokes

if __name__ == "__main__":
    parser = argparse.ArgumentParser(add_help=True,
                                     description='Displays a funny (or not) USSR/Russian joke (also called anecdote).')
    parser.add_argument("-m", "--modern", action="store_true", help="display a modern Russian joke")
    parser.add_argument("-s", "--soviet", action="store_true", help="display an old USSR joke")
    parser.add_argument("-a", "--any", action="store_true", help="display a USSR/Russian joke (default)")
    args = parser.parse_args()
    if args.modern:
        print(modern_joke())
    elif args.soviet:
        print(soviet_joke())
    else:
        print(random_joke())
| 27.972222 | 119 | 0.681231 | 130 | 1,007 | 5.046154 | 0.369231 | 0.067073 | 0.073171 | 0.10061 | 0.222561 | 0.182927 | 0 | 0 | 0 | 0 | 0 | 0 | 0.196624 | 1,007 | 35 | 120 | 28.771429 | 0.810878 | 0.025819 | 0 | 0 | 0 | 0 | 0.226762 | 0 | 0 | 0 | 0 | 0.028571 | 0 | 1 | 0.130435 | false | 0 | 0.173913 | 0.130435 | 0.434783 | 0.130435 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
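A side note on the CLI in the row above: the three `store_true` flags are mutually exclusive in spirit, and `argparse` can enforce that directly with `add_mutually_exclusive_group()`. A minimal, hypothetical sketch (not part of the repo; `parse_args` is given a literal list so it runs without a real command line):

```python
import argparse

# Same three flags as anecAPI.py, but registered on a mutually exclusive
# group so that, e.g., "-m -s" is rejected by argparse itself.
parser = argparse.ArgumentParser(
    description='Displays a funny (or not) USSR/Russian joke (also called anecdote).')
group = parser.add_mutually_exclusive_group()
group.add_argument("-m", "--modern", action="store_true", help="display a modern Russian joke")
group.add_argument("-s", "--soviet", action="store_true", help="display an old USSR joke")
group.add_argument("-a", "--any", action="store_true", help="display a USSR/Russian joke (default)")

# Feed the parser an explicit argv list instead of reading sys.argv.
args = parser.parse_args(["--modern"])
print(args.modern, args.soviet)  # True False
```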
929548cb4fdb28eda560c96f789724f2f74e54ef | 4,090 | py | Python | Smaug/models/users.py | luviiv/Smaug | d26d837bb97ef583e05c43d16aed265356b7cf74 | [
"Apache-2.0"
] | null | null | null | Smaug/models/users.py | luviiv/Smaug | d26d837bb97ef583e05c43d16aed265356b7cf74 | [
"Apache-2.0"
] | null | null | null | Smaug/models/users.py | luviiv/Smaug | d26d837bb97ef583e05c43d16aed265356b7cf74 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
"""
    users.py
    ~~~~~~~~~~~

    user manage

    :copyright: (c) 2015 by Lu Tianchao.
    :license: Apache, see LICENSE for more details.
"""
import hashlib
from datetime import datetime

from werkzeug import generate_password_hash, check_password_hash, \
    cached_property

from flask.ext.sqlalchemy import BaseQuery
from flask.ext.principal import RoleNeed, UserNeed, Permission

from Smaug.extensions import db
from Smaug.permissions import null


class UserQuery(BaseQuery):

    def from_identity(self, identity):
        """
        Loads user from flask.ext.principal.Identity instance and
        assigns permissions from user.

        A "user" instance is monkeypatched to the identity instance.

        If no user found then None is returned.
        """
        try:
            user = self.get(int(identity.id))
        except ValueError:
            user = None
        if user:
            identity.provides.update(user.provides)
        identity.user = user
        return user

    def authenticate(self, login, password):
        user = self.filter(db.or_(User.username==login,
                                  User.email==login)).first()
        if user:
            authenticate = user.check_password(password)
        else:
            authenticate = False
        return user, authenticate

    def authenticate_openid(self, email, openid):
        user = self.filter(User.email==email).first()
        if user:
            authenticate = user.check_openid(openid)
        else:
            authenticate = False
        return user, authenticate


class User(db.Model):
    __tablename__ = "users"
    query_class = UserQuery

    # user roles
    MEMBER = 100
    MODERATOR = 200
    ADMIN = 300

    id = db.Column(db.Integer, primary_key = True)
    username = db.Column(db.Unicode(60), unique=True, nullable=False)
    email = db.Column(db.String(150), unique=True, nullable=False)
    role = db.Column(db.Integer, default = MEMBER)
    _password = db.Column("password", db.String(80))
    _openid = db.Column("openid", db.String(80), unique=True)

    class Permissions(object):

        def __init__(self, obj):
            self.obj = obj

        @cached_property
        def send_message(self):
            if not self.obj.receive_email:
                return null
            needs = [UserNeed(user_id) for user_id in self.obj.friends]
            if not needs:
                return null
            return Permission(*needs)

    def __str__(self):
        return self.username

    def __repr__(self):
        return "<%s>" % self

    @cached_property
    def permissions(self):
        return self.Permissions(self)

    def _get_password(self):
        return self._password

    def _set_password(self, password):
        self._password = generate_password_hash(password)

    password = db.synonym("_password",
                          descriptor=property(_get_password,
                                              _set_password))

    def check_password(self, password):
        if self.password is None:
            return False
        return check_password_hash(self.password, password)

    def _get_openid(self):
        return self._openid

    def _set_openid(self, openid):
        self._openid = generate_password_hash(openid)

    openid = db.synonym("_openid",
                        descriptor=property(_get_openid,
                                            _set_openid))

    def check_openid(self, openid):
        if self.openid is None:
            return False
        return check_password_hash(self.openid, openid)

    @cached_property
    def provides(self):
        needs = [RoleNeed('authenticated'),
                 UserNeed(self.id)]
        if self.is_moderator:
            needs.append(RoleNeed('moderator'))
        if self.is_admin:
            needs.append(RoleNeed('admin'))
        return needs

    @property
    def is_moderator(self):
        return self.role >= self.MODERATOR

    @property
    def is_admin(self):
        return self.role >= self.ADMIN | 25.72327 | 71 | 0.601467 | 453 | 4,090 | 5.269316 | 0.273731 | 0.029326 | 0.035191 | 0.017595 | 0.11814 | 0.099707 | 0.036866 | 0.036866 | 0.036866 | 0 | 0 | 0.008124 | 0.307824 | 4,090 | 159 | 72 | 25.72327 | 0.835041 | 0.083374 | 0 | 0.18 | 0 | 0 | 0.017969 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.17 | false | 0.16 | 0.07 | 0.07 | 0.58 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 2 |
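The `User` model above never stores a plaintext password: `_set_password` runs it through Werkzeug's `generate_password_hash`, and `check_password` verifies with `check_password_hash`. The underlying idea (a per-call random salt plus a slow key-derivation function, compared in constant time) can be sketched with only the standard library. This is an illustration of the concept, not Werkzeug's actual hash format:

```python
import hashlib
import hmac
import os

def generate_password_hash(password, iterations=260000):
    # Fresh random salt per call, so equal passwords hash differently.
    salt = os.urandom(16)
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt.hex() + "$" + dk.hex()

def check_password_hash(stored, password, iterations=260000):
    salt_hex, dk_hex = stored.split("$")
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(),
                             bytes.fromhex(salt_hex), iterations)
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(dk.hex(), dk_hex)

h = generate_password_hash("s3cret")
print(check_password_hash(h, "s3cret"))  # True
print(check_password_hash(h, "wrong"))   # False
```

Because of the per-call salt, two hashes of the same password never compare equal directly; only `check_password_hash` can verify them.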
92a63e6d9950d24608b3bbb002e036b82919a7bf | 2,541 | py | Python | pyalgs/data_structures/commons/queue.py | vertexproject/pyalgs | 11b9ea37afc9e9f9e38ffacc42b53f9cd96f5f83 | [
"BSD-3-Clause"
] | 12 | 2017-05-01T10:31:42.000Z | 2021-06-23T14:03:28.000Z | pyalgs/data_structures/commons/queue.py | vertexproject/pyalgs | 11b9ea37afc9e9f9e38ffacc42b53f9cd96f5f83 | [
"BSD-3-Clause"
] | 2 | 2018-08-01T10:09:09.000Z | 2020-07-16T11:41:46.000Z | pyalgs/data_structures/commons/queue.py | vertexproject/pyalgs | 11b9ea37afc9e9f9e38ffacc42b53f9cd96f5f83 | [
"BSD-3-Clause"
] | 6 | 2017-06-04T01:41:14.000Z | 2021-01-19T05:05:44.000Z | from abc import abstractmethod, ABCMeta


class Queue(object):
    """ Queue interface
    """

    __metaclass__ = ABCMeta

    @abstractmethod
    def enqueue(self, item):
        pass

    @abstractmethod
    def dequeue(self):
        pass

    @abstractmethod
    def is_empty(self):
        pass

    @abstractmethod
    def size(self):
        pass

    @staticmethod
    def create():
        return LinkedListQueue()

    @abstractmethod
    def iterate(self):
        pass


class Node(object):
    value = None
    nextNode = None

    def __init__(self, value):
        self.value = value


class LinkedListQueue(Queue):
    first = None
    last = None
    N = 0

    def size(self):
        return self.N

    def iterate(self):
        x = self.first
        while x is not None:
            value = x.value
            x = x.nextNode
            yield value

    def enqueue(self, item):
        old_last = self.last
        self.last = Node(item)
        if old_last is not None:
            old_last.nextNode = self.last
        if self.first is None:
            self.first = self.last
        self.N += 1

    def is_empty(self):
        return self.N == 0

    def dequeue(self):
        if self.is_empty():
            return None
        old_first = self.first
        self.first = old_first.nextNode
        if old_first == self.last:
            self.last = None
        self.N -= 1
        return old_first.value


class ArrayQueue(Queue):
    head = 0
    tail = 0
    s = []

    def __init__(self, capacity=None):
        if capacity is None:
            capacity = 10
        self.s = [0] * capacity

    def iterate(self):
        if self.is_empty():
            return
        for i in range(self.head, self.tail):
            yield self.s[i]

    def enqueue(self, item):
        self.s[self.tail] = item
        self.tail += 1
        if self.tail == len(self.s):
            self.resize(len(self.s) * 2)

    def resize(self, new_size):
        tmp = [0] * new_size
        for i in range(self.head, self.tail):
            tmp[i-self.head] = self.s[i]
        self.s = tmp
        self.tail = self.tail - self.head
        self.head = 0

    def size(self):
        return self.tail - self.head

    def is_empty(self):
        return self.size() == 0

    def dequeue(self):
        if self.is_empty():
            return None
        deleted = self.s[self.head]
        self.head += 1
        if self.size() == len(self.s) // 4:
            self.resize(len(self.s) // 2)
        return deleted
| 19.546154 | 45 | 0.528926 | 319 | 2,541 | 4.128527 | 0.178683 | 0.037965 | 0.045558 | 0.041002 | 0.214882 | 0.214882 | 0.098709 | 0.098709 | 0.057707 | 0.057707 | 0 | 0.010599 | 0.368752 | 2,541 | 129 | 46 | 19.697674 | 0.810474 | 0.005903 | 0 | 0.340426 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.202128 | false | 0.053191 | 0.010638 | 0.053191 | 0.457447 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
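The `ArrayQueue` in the row above grows its backing list when it fills and halves it when only a quarter is in use, which keeps enqueue/dequeue amortized O(1). A self-contained sketch of the same resizing discipline, runnable on its own (`RingQueue` is an illustrative name, not part of pyalgs):

```python
class RingQueue:
    """Array-backed FIFO queue mirroring ArrayQueue's resize policy:
    double when full, halve when only a quarter of the slots are used."""

    def __init__(self, capacity=10):
        self.s = [None] * capacity
        self.head = 0
        self.tail = 0

    def size(self):
        return self.tail - self.head

    def is_empty(self):
        return self.size() == 0

    def _resize(self, new_size):
        # Compact live items to the front of a fresh backing list.
        tmp = [None] * new_size
        for i in range(self.head, self.tail):
            tmp[i - self.head] = self.s[i]
        self.s = tmp
        self.tail -= self.head
        self.head = 0

    def enqueue(self, item):
        self.s[self.tail] = item
        self.tail += 1
        if self.tail == len(self.s):
            self._resize(len(self.s) * 2)

    def dequeue(self):
        if self.is_empty():
            return None
        item = self.s[self.head]
        self.head += 1
        if self.size() == len(self.s) // 4:
            self._resize(max(len(self.s) // 2, 1))
        return item


q = RingQueue(capacity=4)
for n in range(10):
    q.enqueue(n)          # triggers two doublings along the way
drained = [q.dequeue() for _ in range(q.size())]
print(drained)            # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

The quarter-full shrink threshold (rather than half-full) prevents thrashing when the size oscillates around a resize boundary.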
92acbaaddd86f818bb28122d75c4dba5ebed8fa5 | 366 | py | Python | tests/test_core.py | blacktanktop/vivid | e85837bcd86575f8a275517250dd026aac3e451f | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | tests/test_core.py | blacktanktop/vivid | e85837bcd86575f8a275517250dd026aac3e451f | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | tests/test_core.py | blacktanktop/vivid | e85837bcd86575f8a275517250dd026aac3e451f | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | from vivid.core import BaseBlock, network_hash


def test_network_hash():
    a = BaseBlock('a')
    b = BaseBlock('b')
    assert network_hash(a) != network_hash(b)
    assert network_hash(a) == network_hash(a)

    c = BaseBlock('c', parent=[a, b])
    hash1 = network_hash(c)
    a._parent = [BaseBlock('z')]
    hash2 = network_hash(c)
    assert hash1 != hash2
| 24.4 | 46 | 0.642077 | 52 | 366 | 4.326923 | 0.326923 | 0.391111 | 0.213333 | 0.16 | 0.266667 | 0.266667 | 0.266667 | 0 | 0 | 0 | 0 | 0.013889 | 0.213115 | 366 | 14 | 47 | 26.142857 | 0.767361 | 0 | 0 | 0 | 0 | 0 | 0.010929 | 0 | 0 | 0 | 0 | 0 | 0.272727 | 1 | 0.090909 | false | 0 | 0.090909 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
2b8f14ec9702e12886e533477032b0cf4a2e6d8f | 1,918 | py | Python | tests/test_analysis_status_response.py | s0b0lev/mythx-models | 0fc14fef9e41a68a7d97e0bb170fd0eca5693d9a | [
"MIT"
] | null | null | null | tests/test_analysis_status_response.py | s0b0lev/mythx-models | 0fc14fef9e41a68a7d97e0bb170fd0eca5693d9a | [
"MIT"
] | null | null | null | tests/test_analysis_status_response.py | s0b0lev/mythx-models | 0fc14fef9e41a68a7d97e0bb170fd0eca5693d9a | [
"MIT"
] | null | null | null | import json

import pytest

from mythx_models.exceptions import ValidationError
from mythx_models.response import Analysis, AnalysisStatusResponse
from mythx_models.util import serialize_api_timestamp

from . import common as testdata


def assert_analysis_data(expected, analysis: Analysis):
    assert expected["apiVersion"] == analysis.api_version
    assert expected["maruVersion"] == analysis.maru_version
    assert expected["mythrilVersion"] == analysis.mythril_version
    assert expected["harveyVersion"] == analysis.harvey_version
    assert expected["queueTime"] == analysis.queue_time
    assert expected["runTime"] == analysis.run_time
    assert expected["status"] == analysis.status
    assert expected["submittedAt"] == serialize_api_timestamp(analysis.submitted_at)
    assert expected["submittedBy"] == analysis.submitted_by
    assert expected["uuid"] == analysis.uuid


def test_analysis_list_from_valid_json():
    resp = AnalysisStatusResponse.from_json(
        json.dumps(testdata.ANALYSIS_STATUS_RESPONSE_DICT)
    )
    assert_analysis_data(testdata.ANALYSIS_STATUS_RESPONSE_DICT, resp.analysis)


def test_analysis_list_from_empty_json():
    with pytest.raises(ValidationError):
        AnalysisStatusResponse.from_json("{}")


def test_analysis_list_from_valid_dict():
    resp = AnalysisStatusResponse.from_dict(testdata.ANALYSIS_STATUS_RESPONSE_DICT)
    assert_analysis_data(testdata.ANALYSIS_STATUS_RESPONSE_DICT, resp.analysis)


def test_analysis_list_from_empty_dict():
    with pytest.raises(ValidationError):
        AnalysisStatusResponse.from_dict({})


def test_analysis_list_to_dict():
    d = testdata.ANALYSIS_STATUS_RESPONSE_OBJECT.to_dict()
    assert d == testdata.ANALYSIS_STATUS_RESPONSE_DICT


def test_analysis_list_to_json():
    json_str = testdata.ANALYSIS_STATUS_RESPONSE_OBJECT.to_json()
    assert json.loads(json_str) == testdata.ANALYSIS_STATUS_RESPONSE_DICT
| 34.872727 | 84 | 0.792492 | 226 | 1,918 | 6.367257 | 0.247788 | 0.09729 | 0.122307 | 0.166782 | 0.436414 | 0.411397 | 0.175122 | 0.175122 | 0.175122 | 0.175122 | 0 | 0 | 0.123045 | 1,918 | 54 | 85 | 35.518519 | 0.855529 | 0 | 0 | 0.108108 | 0 | 0 | 0.051095 | 0 | 0 | 0 | 0 | 0 | 0.405405 | 1 | 0.189189 | false | 0 | 0.162162 | 0 | 0.351351 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
2b92b611127a05b1d87d98fdcdf787a237a82dc6 | 129 | py | Python | faced/const.py | hseguro/faced | 4ad42c54fb8e8679fb5feda30af5db3ac74ccc8c | [
"MIT"
] | 575 | 2018-08-27T18:30:53.000Z | 2022-03-31T03:25:36.000Z | faced/const.py | 18718615232/faced | 31ef0d30e1567a06113f49ff4a1202760d952df2 | [
"MIT"
] | 35 | 2018-09-04T08:16:59.000Z | 2022-02-03T18:28:29.000Z | faced/const.py | 18718615232/faced | 31ef0d30e1567a06113f49ff4a1202760d952df2 | [
"MIT"
] | 172 | 2018-08-31T16:55:50.000Z | 2022-02-28T12:03:58.000Z | import os
MODELS_PATH = os.path.join(os.path.dirname(__file__), "models")
YOLO_SIZE = 288
YOLO_TARGET = 9
CORRECTOR_SIZE = 50
| 14.333333 | 63 | 0.744186 | 21 | 129 | 4.190476 | 0.666667 | 0.136364 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054054 | 0.139535 | 129 | 8 | 64 | 16.125 | 0.738739 | 0 | 0 | 0 | 0 | 0 | 0.046512 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
2b97f27689c29817d3fc17a2b5b71b9419f2063e | 6,023 | py | Python | indexerNew.py | philophilo/searchingReddit | 5c10dc70ecf6a614863403f28a91efe73c11fede | [
"MIT"
] | null | null | null | indexerNew.py | philophilo/searchingReddit | 5c10dc70ecf6a614863403f28a91efe73c11fede | [
"MIT"
] | null | null | null | indexerNew.py | philophilo/searchingReddit | 5c10dc70ecf6a614863403f28a91efe73c11fede | [
"MIT"
] | null | null | null | #!/home/master00/anaconda23/bin/python
from util import *
import argparse
import base64
import os
import json
from collections import defaultdict

# Two main types of indexes
# -- Forward index
# -- Inverted index

# Forward index
# doc1 -> [learning, python, how, to]
# doc2 -> [learning, c++]
# ...
# doc3 -> [python, c++]

# Inverted index
# learning -> [doc1, doc2]
# python -> [doc1, doc3]
# how -> [doc1]
# to -> [doc1]
# c++ -> [doc2, doc3]


# TODO: improve this
# Indexer assumes that collection fits in ram
class Indexer(object):
    def __init__(self):
        self.inverted_index = dict()
        self.forward_index = dict()
        self.url_to_id = dict()
        self.doc_count = 0

    # TODO: remove these assumptions
    # assumes that add_document() is never called twice for a document
    # assumes that a document has a unique url
    # parsed_text is a list of Terms
    def add_document(self, url, parsed_text):
        self.doc_count += 1
        assert url not in self.url_to_id
        current_id = self.doc_count
        self.url_to_id[url] = current_id
        print self.url_to_id[url], "\t", current_id
        self.forward_index[current_id] = parsed_text
        for position, term in enumerate(parsed_text):
            # TODO: use a defaultdict
            if term not in self.inverted_index:
                # if the word doesn't exist in the index yet, create a
                # list on which to append (position, doc id) pairs
                self.inverted_index[term] = []
            self.inverted_index[term].append((position, current_id))

    def save_on_disk(self, index_dir):

        def dump_json_to_file(source, file_name):
            file_path = os.path.join(index_dir, file_name)
            json.dump(source, open(file_path, "w"), indent=4)

        dump_json_to_file(self.inverted_index, "inverted_index")
        dump_json_to_file(self.forward_index, "forward_index")
        dump_json_to_file(self.url_to_id, "url_to_id")


class Searcher(object):
    def __init__(self, index_dir):
        self.inverted_index = dict()
        self.forward_index = dict()
        self.url_to_id = dict()
        self.doc_count = dict()

        def load_json_from_file(file_name):
            file_path = os.path.join(index_dir, file_name)
            dst = json.load(open(file_path))
            return dst

        self.inverted_index = load_json_from_file("inverted_index")
        self.forward_index = load_json_from_file("forward_index")
        self.url_to_id = load_json_from_file("url_to_id")
        self.id_to_url = {v: k for k, v in self.url_to_id.iteritems()}

    """
    # query [word1, word2] -> returns all documents that contain one of these words
    # sort of OR
    def find_documents(self, query_terms):
        return sum([self.inverted_index[word] for word in query_terms], [])
    """

    def generate_snippet(self, query_terms, doc_id):
        query_terms_in_window = []
        best_window_len = 100500  # TODO: inf would be better ;)
        words_in_best_window = 0
        best_window = []
        for pos, word in enumerate(self.forward_index[unicode(doc_id)]):
            if word in query_terms:
                query_terms_in_window.append((word, pos))
                # compare the stored word (first element of the tuple), not
                # the tuple itself, when trimming the left edge of the window
                if len(query_terms_in_window) > 1 and query_terms_in_window[0][0] == word:
                    query_terms_in_window.pop(0)
                current_window_len = pos - query_terms_in_window[0][1] + 1
                wiw = len(set(map(lambda x: x[0], query_terms_in_window)))
                if wiw > words_in_best_window or (wiw == words_in_best_window and current_window_len < best_window_len):
                    words_in_best_window = wiw
                    best_window = query_terms_in_window[:]
                    best_window_len = current_window_len

        doc_len = len(self.forward_index[unicode(doc_id)])
        # TODO: 15 should be a named constant
        snippet_start = max(best_window[0][1] - 15, 0)
        snippet_end = min(doc_len, best_window[len(best_window) - 1][1] + 1 + 15)
        return [(term, term in query_terms) for term in self.forward_index[unicode(doc_id)][snippet_start:snippet_end]]

    def find_documents_AND(self, query_terms):
        # docid -> set of query terms found in that doc
        query_word_count = defaultdict(set)
        for query_word in query_terms:
            for (pos, docid) in self.inverted_index.get(query_word, []):
                try:
                    query_word_count[docid].add(query_word)
                except Exception as e:
                    print "Error:", e, self.inverted_index.get(query_word, [])
        return [docid for docid, unique_hits in query_word_count.iteritems() if len(unique_hits) == len(query_terms)]

    # sort of OR
    def find_documents_OR(self, query_terms):
        docids = set()
        for query_word in query_terms:
            for (pos, docid) in self.inverted_index.get(query_word, []):
                docids.add(docid)
        return docids

    def get_document_text(self, doc_id):
        return self.forward_index[unicode(doc_id)]

    def get_url(self, doc_id):
        return self.id_to_url[doc_id]


def create_index_from_dir(stored_documents_dir, index_dir):
    indexer = Indexer()
    for filename in os.listdir(stored_documents_dir):
        opened_file = open(os.path.join(stored_documents_dir, filename))
        # TODO: words are separated not just by spaces, but also commas, semicolons, etc.
        parsed_doc = parseRedditPost(opened_file.read()).split(" ")
        indexer.add_document(base64.b16decode(filename), parsed_doc)
    indexer.save_on_disk(index_dir)


def main():
    parser = argparse.ArgumentParser(description='Index /r/learnprogramming')
    parser.add_argument("--stored_documents_dir", dest="stored_documents_dir", required=True)
    parser.add_argument("--index_dir", dest="index_dir", required=True)
    args = parser.parse_args()
    create_index_from_dir(args.stored_documents_dir, args.index_dir)


if __name__ == "__main__":
    main()
| 37.64375 | 120 | 0.65034 | 844 | 6,023 | 4.344787 | 0.221564 | 0.049086 | 0.050995 | 0.023998 | 0.238069 | 0.16553 | 0.114535 | 0.097627 | 0.097627 | 0.097627 | 0 | 0.011487 | 0.248381 | 6,023 | 159 | 121 | 37.880503 | 0.798542 | 0.141624 | 0 | 0.141414 | 0 | 0 | 0.035946 | 0.004468 | 0 | 0 | 0 | 0.012579 | 0.010101 | 0 | null | null | 0 | 0.060606 | null | null | 0.020202 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
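The forward/inverted index layout described in the comments of the row above can be demonstrated end to end in a few lines. This sketch uses Python 3 and illustrative names (`build_inverted_index`, `find_documents_and` are not from the repo); it mirrors the `(position, doc_id)` posting lists and the AND query of the original:

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each term to a posting list of (position, doc_id) pairs."""
    inverted = defaultdict(list)
    for doc_id, text in enumerate(docs, start=1):
        for position, term in enumerate(text.split()):
            inverted[term].append((position, doc_id))
    return inverted

def find_documents_and(inverted, query_terms):
    """Return ids of documents containing *every* query term (AND semantics)."""
    hits = defaultdict(set)  # doc_id -> set of matched query terms
    for term in query_terms:
        for _pos, doc_id in inverted.get(term, []):
            hits[doc_id].add(term)
    return sorted(d for d, terms in hits.items() if len(terms) == len(query_terms))

docs = ["learning python how to", "learning c++", "python c++"]
index = build_inverted_index(docs)
print(index["python"])                                    # [(1, 1), (0, 3)]
print(find_documents_and(index, ["learning", "python"]))  # [1]
```

Storing positions in the postings is what later lets `generate_snippet` find a compact window containing all query terms.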
2b9d72d72fd5c63abb08e7957792d60430f4fc1f | 490 | py | Python | metadeploy/api/migrations/0045_product_license_requirements.py | sfdc-qbranch/MetaDeploy | d22547b3814dbec6aefa4d86b9f81c6f175c1b67 | [
"BSD-3-Clause"
] | 33 | 2019-03-20T15:34:39.000Z | 2022-03-30T15:59:40.000Z | metadeploy/api/migrations/0045_product_license_requirements.py | sfdc-qbranch/MetaDeploy | d22547b3814dbec6aefa4d86b9f81c6f175c1b67 | [
"BSD-3-Clause"
] | 2,718 | 2019-02-27T19:46:07.000Z | 2022-03-11T23:18:09.000Z | metadeploy/api/migrations/0045_product_license_requirements.py | sfdc-qbranch/MetaDeploy | d22547b3814dbec6aefa4d86b9f81c6f175c1b67 | [
"BSD-3-Clause"
] | 28 | 2019-03-28T04:57:16.000Z | 2022-02-04T16:49:25.000Z | # Generated by Django 2.1.5 on 2019-01-28 21:04
import sfdo_template_helpers.fields
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [("api", "0044_merge_20190125_1502")]
operations = [
migrations.AddField(
model_name="product",
name="license_requirements",
field=sfdo_template_helpers.fields.MarkdownField(
blank=True, property_suffix="_markdown"
),
)
]
| 24.5 | 61 | 0.640816 | 52 | 490 | 5.826923 | 0.807692 | 0.079208 | 0.125413 | 0.165017 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085873 | 0.263265 | 490 | 19 | 62 | 25.789474 | 0.753463 | 0.091837 | 0 | 0 | 1 | 0 | 0.142212 | 0.054176 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.153846 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
2ba7860171b35ef1fd22e6ca196a4bf0cdd6ce68 | 841 | py | Python | main.py | fmiju/fssg3 | ffda53e9e65a9ecd22c853e80a0af0226a4b22e4 | [
"BSD-3-Clause"
] | null | null | null | main.py | fmiju/fssg3 | ffda53e9e65a9ecd22c853e80a0af0226a4b22e4 | [
"BSD-3-Clause"
] | null | null | null | main.py | fmiju/fssg3 | ffda53e9e65a9ecd22c853e80a0af0226a4b22e4 | [
"BSD-3-Clause"
] | null | null | null | import pygame
from pygame.locals import *

gameState = 0
pygame.init()

while 1:
    # print "state: ", gameState
    # idle
    if gameState == 0:
        # grab the event queue once: calling pygame.event.get() twice in a
        # row would consume the events before the for loop could see them
        events = pygame.event.get()
        print events
        for event in events:
            if (event.type == KEYDOWN and event.key == K_SPACE):
                gameState = 1
    # build ship
    elif gameState == 1:
        for event in pygame.event.get():
            if (event.type == KEYDOWN and event.key == K_1):
                gameState = 2
    # shoot
    elif gameState == 2:
        for event in pygame.event.get():
            if (event.type == KEYDOWN and event.key == K_2):
                gameState = 3
    # dead
    elif gameState == 3:
        for event in pygame.event.get():
            if (event.type == KEYDOWN and event.key == K_r):
                gameState = 0
            elif (event.type == KEYDOWN and event.key == K_q):
                gameState = 4
    # quit
    elif gameState == 4:
        print "quitting"
    else:
        print 'meow'
| 21.025 | 55 | 0.630202 | 124 | 841 | 4.233871 | 0.298387 | 0.104762 | 0.133333 | 0.180952 | 0.464762 | 0.464762 | 0.464762 | 0.411429 | 0.411429 | 0.411429 | 0 | 0.021841 | 0.237812 | 841 | 39 | 56 | 21.564103 | 0.797192 | 0.068966 | 0 | 0.214286 | 0 | 0 | 0.015464 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.071429 | null | null | 0.107143 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
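The `if`/`elif` ladder above is a finite state machine over `gameState`; the transitions can be made declarative with a dictionary keyed by `(state, key)`. A hedged sketch, with plain strings standing in for pygame's key constants so it runs without pygame:

```python
# Hypothetical transition table mirroring the game states above:
# 0 idle, 1 build ship, 2 shoot, 3 dead, 4 quit.
TRANSITIONS = {
    (0, "SPACE"): 1,  # idle -> build ship
    (1, "1"): 2,      # build ship -> shoot
    (2, "2"): 3,      # shoot -> dead
    (3, "r"): 0,      # dead -> restart (idle)
    (3, "q"): 4,      # dead -> quit
}

def step(state, key):
    # Unknown (state, key) pairs leave the state unchanged.
    return TRANSITIONS.get((state, key), state)

state = 0
for key in ["SPACE", "1", "2", "q"]:
    state = step(state, key)
print(state)  # 4: idle -> build -> shoot -> dead -> quit
```

In the real loop, `step` would be called once per `KEYDOWN` event, replacing the nested conditionals.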
2bb394837f98ad170af7ac4403fbc8f5bff79198 | 272 | py | Python | newbitcoin/newbitcoin/code-ch03/helper.py | tys-hiroshi/test_programmingbitcoin | 6eb6fb1c087f6dd2cb2b01f527488a904065efa6 | [
"MIT"
] | null | null | null | newbitcoin/newbitcoin/code-ch03/helper.py | tys-hiroshi/test_programmingbitcoin | 6eb6fb1c087f6dd2cb2b01f527488a904065efa6 | [
"MIT"
] | null | null | null | newbitcoin/newbitcoin/code-ch03/helper.py | tys-hiroshi/test_programmingbitcoin | 6eb6fb1c087f6dd2cb2b01f527488a904065efa6 | [
"MIT"
] | null | null | null | from unittest import TestSuite, TextTestRunner

import hashlib


def run(test):
    suite = TestSuite()
    suite.addTest(test)
    TextTestRunner().run(suite)


def hash256(s):
    '''two rounds of sha256'''
    return hashlib.sha256(hashlib.sha256(s).digest()).digest()
| 18.133333 | 62 | 0.694853 | 33 | 272 | 5.727273 | 0.575758 | 0.137566 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.053333 | 0.172794 | 272 | 14 | 63 | 19.428571 | 0.786667 | 0.073529 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0 | 0.625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
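`hash256` in the row above is the double SHA-256 used throughout Programming Bitcoin (e.g. for block and transaction ids). A few of its properties, runnable standalone (the function is reproduced here so the snippet is self-contained):

```python
import hashlib

def hash256(s):
    '''two rounds of sha256'''
    return hashlib.sha256(hashlib.sha256(s).digest()).digest()

d = hash256(b"hello")
print(len(d))                                  # 32: a SHA-256 digest is 32 bytes
print(d == hash256(b"hello"))                  # True: deterministic
print(d == hashlib.sha256(b"hello").digest())  # False: double round differs from single
```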
2bb98fcaa1938a0b80ec364136a3876078651d5b | 2,083 | py | Python | dialogs.py | 6dba/async-telegram-bot | 3950f5bc51be9f2f93924442948a98db52332d8e | [
"MIT"
] | null | null | null | dialogs.py | 6dba/async-telegram-bot | 3950f5bc51be9f2f93924442948a98db52332d8e | [
"MIT"
] | null | null | null | dialogs.py | 6dba/async-telegram-bot | 3950f5bc51be9f2f93924442948a98db52332d8e | [
"MIT"
] | null | null | null | from dataclasses import dataclass
from aiogram.types import ReplyKeyboardRemove, \
    ReplyKeyboardMarkup, KeyboardButton, \
    InlineKeyboardMarkup, InlineKeyboardButton

"""Bot dialogs"""


@dataclass(frozen=True)
class Messages:
    SMILE = ['🤷‍♀️','🧐','🤷‍♂️','🤔','😐','🤨','🤯','🥱','👀','👋','💊','🙅‍♀️','🎇','🗿']
    start_html: str = "Добро пожаловать, <b>{first_name}!</b> 🕺\nЯ бот, созданный для того, чтобы напоминать тебе - пора пить таблеточки!"
    help: str = """<b>Я бот - помощник!</b> 🗿\nУмею напоминать о том, что пора принимать лекарства! 🧬\n\nВ главном меню ты сможешь создать, просмотреть или редактировать напоминание 😯\n\nНажав на название препарата, можно перейти в магазин, чтобы изучить подробнее или даже купить! 💸\n\nДавай начнём, пиши: /menu 📌"""
    mainMenu: str = "🔮 Выбери действие:"
    edit: str = "🔮 Выбери действие:"
    start_addplan: str = "Я задам несколько вопросов, прошу отвечать корректно 🙆"
    drug_name_addplan: str = "Как называется препарат? 🤨"
    allday_message: str = "Напоминания будут приходить в течение дня: \n\nУтром - 08:00\nДнём - 12:00\nВечером - 18:00\n"
    idontknow: str = "<b>Я не знаю, что с этим делать</b> 🤷‍♀️\nНапомню, есть команда: /help"
    None_pills: str = "<b>Пусто..</b>\nКажется, у тебя ещё не добавлено никаких напоминаний 🧐 \nСамое время начать!🗿"
    pills_limit: str = "Хэй-хэй, слишком много таблеточек добавляешь! \nПопробуй удалить уже оконченные курсы..🗿"
    drug_exist: str = "Думаю, такой уже есть.. "
    input_error: str = "Попробуй ввести ещё раз.. "
    input_error_time: str = "Напиши что-то в формате: <u><i>19:00</i></u> или <u><i>09:25</i></u> "
    input_unreal: str = "Хм.. Не верю.. Попробуй ввести ещё раз "
    input_more_than_need: str = "Хм.. Не думаю, что стоит пить больше, чем нужно.. Попробуй ввести ещё раз "
    input_error_dose: str = "Напиши что-то в формате: <u><i>1</i></u> или <u><i>0.25</i></u> "
    exist_drug_and_dose: str = "Думаю, препарат с данной дозировкой уже есть.. "


message = Messages()
# File: reflexy/base/tests/test_reflex.py (repo: eso/reflexy, license: BSD-3-Clause)
import unittest
from reflexy.base import reflex
class TestReflexModule(unittest.TestCase):
sof = 'datasetname|file1.fits;PRO_CATG1;PURPOSE1:PURPOSE2,file2;' \
'PRO_CAT2;PURPOSE1'
sopexp = [('long_param1', '3'), ('param2', '3'), ('param3', 'ser'),
('param_not_shown', 'none')]
sop = 'recipe_name:long_param1=3,recipe_name:param2=3,' \
'recipe_name:param3=ser,recipe_name:param_not_shown=none'
def test_parseSof(self):
r = reflex.parseSof(self.sof)
self.assertEqual(len(r), 2)
self.assertEqual(r.datasetName, 'datasetname')
f1, f2 = r.files
self.assertEqual(f1.name, 'file1.fits')
self.assertEqual(f1.category, 'PRO_CATG1')
self.assertEqual(len(f1.purposes), 2)
self.assertIn('PURPOSE1', f1.purposes)
self.assertIn('PURPOSE2', f1.purposes)
self.assertEqual(f2.name, 'file2')
self.assertEqual(f2.category, 'PRO_CAT2')
self.assertEqual(len(f2.purposes), 1)
self.assertEqual(f2.purposes[0], 'PURPOSE1')
def test_parseRoundTripJson(self):
r = reflex.parseSof(self.sof)
j = r.toJSON()
r2 = reflex.parseSofJson(j)
self.assertEqual(r, r2)
def test_parseSop(self):
r = reflex.parseSop(self.sop)
self.assertEqual(len(r), len(self.sopexp))
for p, ep in zip(r, self.sopexp):
self.assertEqual(p.recipe, 'recipe_name')
self.assertEqual(p.displayName, ep[0])
self.assertEqual(p.value, ep[1])
if __name__ == "__main__":
unittest.main()
# File: accounts/serializers.py (repo: aniruddha2000/foodfeeda, license: Apache-2.0)
from django.contrib.auth.password_validation import validate_password
from django.contrib.auth.tokens import PasswordResetTokenGenerator
from django.utils.encoding import force_str
from django.utils.http import urlsafe_base64_decode
from rest_framework.exceptions import AuthenticationFailed
from rest_framework.serializers import (
CharField, EmailField, ModelSerializer, Serializer, ValidationError)
from rest_framework.validators import UniqueValidator
from rest_framework_simplejwt.serializers import TokenObtainPairSerializer
from accounts.models import NGO, CustomUser, Donner
class MyTokenObtainPairSerializer(TokenObtainPairSerializer):
@classmethod
def get_token(cls, user):
token = super(MyTokenObtainPairSerializer, cls).get_token(user)
# Add custom claims
token["email"] = user.email
token["type"] = user.type
return token
class DonnerDetailSerializer(ModelSerializer):
class Meta:
model = Donner
exclude = (
"is_superuser",
"is_staff",
"last_login",
"password",
"country",
"user_permissions",
"groups",
)
class NGODetailSerializer(ModelSerializer):
class Meta:
model = NGO
exclude = (
"is_superuser",
"is_staff",
"last_login",
"password",
"country",
"user_permissions",
)
class DonnerRegisterSerializer(ModelSerializer):
email = EmailField(
required=True, validators=[UniqueValidator(queryset=Donner.objects.all())]
)
password = CharField(write_only=True, required=True,
validators=[validate_password])
password2 = CharField(write_only=True, required=True)
class Meta:
model = Donner
fields = (
"id",
"type",
"email",
"password",
"password2",
"first_name",
"last_name",
"phone_number",
"country",
"state",
"city",
"pin",
"DOB",
"profile_photo",
)
extra_kwargs = {
"first_name": {"required": True},
"last_name": {"required": True},
"password": {"write_only": True},
"password2": {"write_only": True},
}
def validate(self, attrs):
if attrs["password"] != attrs["password2"]:
raise ValidationError(
{"password": "Password fields didn't match."})
return attrs
def create(self, validated_data):
user = Donner.objects.create(
email=validated_data["email"],
first_name=validated_data["first_name"],
last_name=validated_data["last_name"],
type=validated_data["type"],
country=validated_data["country"],
phone_number=validated_data["phone_number"],
state=validated_data["state"],
city=validated_data["city"],
pin=validated_data["pin"],
DOB=validated_data["DOB"],
profile_photo=validated_data["profile_photo"],
)
user.set_password(validated_data["password"])
user.save()
return user
class NGORegisterSerializer(ModelSerializer):
email = EmailField(
required=True, validators=[UniqueValidator(queryset=Donner.objects.all())]
)
password = CharField(write_only=True, required=True,
validators=[validate_password])
password2 = CharField(write_only=True, required=True)
class Meta:
model = NGO
fields = (
"id",
"email",
"password",
"password2",
"name",
"phone_number",
"type",
"country",
"state",
"city",
"pin",
"ngo_approval_cert",
)
extra_kwargs = {
"name": {"required": True},
"password": {"write_only": True},
"password2": {"write_only": True},
}
def validate(self, attrs):
if attrs["password"] != attrs["password2"]:
raise ValidationError(
{"password": "Password fields didn't match."})
return attrs
def create(self, validated_data):
user = NGO.objects.create(
email=validated_data["email"],
name=validated_data["name"],
type=validated_data["type"],
phone_number=validated_data["phone_number"],
country=validated_data["country"],
state=validated_data["state"],
city=validated_data["city"],
pin=validated_data["pin"],
ngo_approval_cert=validated_data["ngo_approval_cert"],
)
user.set_password(validated_data["password"])
user.save()
return user
class DonnerChangePasswordSerializer(ModelSerializer):
password = CharField(
write_only=True, required=True, validators=[validate_password])
password2 = CharField(write_only=True, required=True)
old_password = CharField(write_only=True, required=True)
class Meta:
model = Donner
fields = ('old_password', 'password', 'password2')
def validate(self, attrs):
if attrs['password'] != attrs['password2']:
raise ValidationError(
{"password": "Password fields didn't match."})
return attrs
def validate_old_password(self, value):
user = self.context['request'].user
if not user.check_password(value):
raise ValidationError(
{"old_password": "Old password is not correct"})
return value
def update(self, instance, validated_data):
instance.set_password(validated_data['password'])
instance.save()
return instance
class NGOChangePasswordSerializer(ModelSerializer):
password = CharField(
write_only=True, required=True, validators=[validate_password])
password2 = CharField(write_only=True, required=True)
old_password = CharField(write_only=True, required=True)
class Meta:
model = NGO
fields = ('old_password', 'password', 'password2')
def validate(self, attrs):
if attrs['password'] != attrs['password2']:
raise ValidationError(
{"password": "Password fields didn't match."})
return attrs
def validate_old_password(self, value):
user = self.context['request'].user
if not user.check_password(value):
raise ValidationError(
{"old_password": "Old password is not correct"})
return value
def update(self, instance, validated_data):
instance.set_password(validated_data['password'])
instance.save()
return instance
class DonnerUpdateUserSerializer(ModelSerializer):
class Meta:
model = Donner
fields = (
"first_name",
"last_name",
"phone_number",
"country",
"state",
"city",
"pin",
"DOB",
"profile_photo",
)
def validate_email(self, value):
user = self.context['request'].user
if Donner.objects.exclude(pk=user.pk).filter(email=value).exists():
raise ValidationError({"email": "This email is already in use."})
return value
def update(self, instance, validated_data):
instance.first_name = validated_data['first_name']
instance.last_name = validated_data['last_name']
instance.phone_number = validated_data['phone_number']
instance.country = validated_data['country']
instance.state = validated_data['state']
instance.city = validated_data['city']
instance.pin = validated_data['pin']
instance.DOB = validated_data['DOB']
instance.profile_photo = validated_data['profile_photo']
instance.save()
return instance
class NGOUpdateUserSerializer(ModelSerializer):
class Meta:
model = NGO
fields = (
"name",
"phone_number",
"country",
"state",
"city",
"pin",
"ngo_approval_cert",
)
def validate_email(self, value):
user = self.context['request'].user
if NGO.objects.exclude(pk=user.pk).filter(email=value).exists():
raise ValidationError({"email": "This email is already in use."})
return value
def update(self, instance, validated_data):
instance.name = validated_data['name']
instance.phone_number = validated_data['phone_number']
instance.country = validated_data['country']
instance.state = validated_data['state']
instance.city = validated_data['city']
instance.pin = validated_data['pin']
instance.ngo_approval_cert = validated_data['ngo_approval_cert']
instance.save()
return instance
class EmailResetPasswordSerializer(Serializer):
email = EmailField(min_length=5)
class Meta:
fields = ['email']
class SetNewPasswordSerializer(Serializer):
password = CharField(min_length=6, max_length=100, write_only=True)
uidb64 = CharField(min_length=1, write_only=True)
token = CharField(min_length=1, write_only=True)
class Meta:
fields = ["password", "uidb64", "token"]
def validate(self, attrs):
try:
password = attrs.get("password")
uidb64 = attrs.get("uidb64")
token = attrs.get("token")
id = force_str(urlsafe_base64_decode(uidb64))
user = CustomUser.objects.get(id=id)
if not PasswordResetTokenGenerator().check_token(user, token):
raise AuthenticationFailed("The reset link is invalid", 401)
user.set_password(password)
user.save()
            return user
except Exception:
raise AuthenticationFailed("The reset link is invalid", 401)
# File: v002/__init__.py (repo: cgarcia-UCO/AgentSurvival, license: MIT)
try:
from IPython import get_ipython
if get_ipython().__class__.__name__ not in ['NoneType']:
from IPython import display
i_am_in_interatcive = True
import pylab as pl
pl.rcParams['figure.figsize'] = [13, 13]
# print("INTERACTIVE")
else:
import matplotlib.pyplot as pl
i_am_in_interatcive = False
# print("NOT INTERACTIVE")
except Exception:
import matplotlib.pyplot as pl
i_am_in_interatcive = False
# print("__INIT__ EXECUTED")
from .Agent import Agent
from .Enviroment_with_agents import Enviroment_with_agents
from .Enviroment import Enviroment
from .InOut_Simple_Laberinth import InOut_Simple_Laberinth, No_Walls_Laberinth
import numpy as np
# File: main.py (repo: maxBombrun/lipidDroplets, license: BSD-3-Clause)
import os
import csv
import multiprocessing
import settings
import segmentNucAndGFP
import cellProfilerGetRelation
import measureGFPSize
import plotFeatures
import fusionCSV
import clusterDroplets
import computeZprime
settings.init()
CPPath = settings.pathList[0]
inputDataPath = settings.pathList[1]
resultPath = settings.pathList[2]
outputDetPath = settings.pathList[3]
inputCellProfilerPath = settings.pathList[4]
outputCellProfilerPath = settings.pathList[5]
nProc = multiprocessing.cpu_count()
listPlates= [x for x in os.listdir(inputDataPath) if os.path.isdir(inputDataPath+x) and x.startswith('plate')]
csv.register_dialect('unixpwd', delimiter=',', quotechar='"', doublequote=True, skipinitialspace=False, lineterminator='\n', quoting=csv.QUOTE_NONE)
print(listPlates)
## Segmentation of the nuclei and the lipid droplets
segmentNucAndGFP.segmentFatDroplet(listPlates)
## Cells approximation based on the previous segmentation
## and features extraction through CellProfiler
cellProfilerGetRelation.runCellProfilerRelationship()
## Individual lipid droplet measurements
## Creation of size distribution vectors
measureGFPSize.measureGFP()
## Classification of the cells based on the vectors
clusterDroplets.getClusterOfDroplets(nbClass=2)
## Creation of CSV output, summarizing the measurements per-cell and per-well
fusionCSV.getPerImageMeasurements()
## Plotting of the features
plotFeatures.plotFeat()
## Validation of the features through the computation of the Zprime factor
computeZprime.getZprime()
# File: test/integration/test_load_shapefile_networkx_native.py (repo: JoachimC/magicbox_distance, license: BSD-3-Clause)
import unittest
import networkx as nx
class TestLoadColumbiaRoadsNetworkXNative(unittest.TestCase):
def test_load(self):
# https://data.humdata.org/dataset/d8f6feda-6755-4e84-bd14-5c719bc5f37a (hotosm_col_roads_lines_shp.zip)
roads_file = "/Users/joachim/Downloads/hotosm_col_roads_lines_shp/hotosm_col_roads_lines.shp"
# todo : ImportError: read_shp requires OGR: http://www.gdal.org/
G = nx.read_shp(roads_file)
if __name__ == '__main__':
unittest.main()
# File: Exercises/Exercise 15 - Hard.py (repo: MikelShifrin/Python1, license: MIT)
#Assignment 12
#create 2 files on Desktop:
#input.txt
#output.txt
#inside input.txt write the following lines:
#apple
#orange
#banana
#cucumber
#Your program will add an s to each line
#and write it to output.txt
#Hint: name = 'hello\n'
# name.rstrip('\n')
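# One possible solution, following the hint above. This is a sketch: the
# file names come from the exercise, but the paths are assumed to be
# relative to the current working directory rather than the Desktop.

```python
def pluralize_file(in_path="input.txt", out_path="output.txt"):
    # Read every line from in_path, append an "s", and write the
    # result to out_path. rstrip("\n") removes the trailing newline
    # before the "s" is added, as the hint suggests.
    with open(in_path) as src, open(out_path, "w") as dst:
        for line in src:
            word = line.rstrip("\n")
            if word:  # skip blank lines
                dst.write(word + "s\n")
```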
# File: compare_two_values.py (repo: jenildesai25/Visa_interview, license: MIT)
# VISA full time master's MCQ.
def func(a, b):
    # Repeatedly subtracts the smaller value from the larger one,
    # i.e. Euclid's subtraction-based gcd algorithm.
x = a
y = b
while x != y:
if x > y:
x = x - y
if x < y:
y = y - x
return x or y
print(func(2437, 875))
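# A brief aside (not part of the original exercise): the while-loop above
# is Euclid's subtraction-based gcd, so it should agree with math.gcd for
# positive integers. A self-contained cross-check:

```python
import math

def slow_gcd(a, b):
    # Same subtraction loop as func() above, repeated here so the
    # snippet is self-contained; assumes a and b are positive ints.
    x, y = a, b
    while x != y:
        if x > y:
            x = x - y
        if x < y:
            y = y - x
    return x

# The subtraction loop agrees with the standard library for positive inputs.
for a, b in [(2437, 875), (48, 36), (7, 7), (100, 75)]:
    assert slow_gcd(a, b) == math.gcd(a, b)
```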
# File: consts.py (repo: honey96dev/python-coinbase-tradingbot, license: MIT)
months_json = {
"1": "January",
"2": "February",
"3": "March",
"4": "April",
"5": "May",
"6": "June",
"7": "July",
"8": "August",
"9": "September",
"01": "January",
"02": "February",
"03": "March",
"04": "April",
"05": "May",
"06": "June",
"07": "July",
"08": "August",
"09": "September",
"10": "October",
"11": "November",
"12": "December",
}
month_days = {
"1": 31,
"2": 28,
"3": 31,
"4": 30,
"5": 31,
"6": 30,
"7": 31,
"8": 31,
"9": 30,
"01": 31,
"02": 28,
"03": 31,
"04": 30,
"05": 31,
"06": 30,
"07": 31,
"08": 31,
"09": 30,
"10": 31,
"11": 30,
"12": 31,
}
# File: datumaro/datumaro/util/test_utils.py (repo: godlikejay/cvat, license: MIT)
# Copyright (C) 2019 Intel Corporation
#
# SPDX-License-Identifier: MIT
import inspect
import os
import os.path as osp
import shutil
def current_function_name(depth=1):
return inspect.getouterframes(inspect.currentframe())[depth].function
class FileRemover:
def __init__(self, path, is_dir=False, ignore_errors=False):
self.path = path
self.is_dir = is_dir
self.ignore_errors = ignore_errors
def __enter__(self):
return self
# pylint: disable=redefined-builtin
def __exit__(self, type=None, value=None, traceback=None):
if self.is_dir:
shutil.rmtree(self.path, ignore_errors=self.ignore_errors)
else:
os.remove(self.path)
# pylint: enable=redefined-builtin
class TestDir(FileRemover):
def __init__(self, path=None, ignore_errors=False):
if path is None:
path = osp.abspath('temp_%s' % current_function_name(2))
os.makedirs(path, exist_ok=ignore_errors)
        super().__init__(path, is_dir=True, ignore_errors=ignore_errors)
# File: oncopolicy/models/deterministic_progression.py (repo: yala/Tempo, license: MIT)
import torch
import torch.nn as nn
from oncopolicy.models.factory import RegisterModel
import pdb
class AbstractDeterministicGuideline(nn.Module):
def __init__(self, args):
super(AbstractDeterministicGuideline, self).__init__()
self.args = args
self.max_steps = args.max_steps
def get_logprob(self, z):
z = z.unsqueeze(1)
return torch.log(torch.cat([1-z, z], dim =1))
def get_prob(self, z):
return z
@RegisterModel("last_observed_risk")
class LastObservedRisk(AbstractDeterministicGuideline):
    '''
    Deterministic risk progression model. Predicts that the
    observed risk does not change after the last observation.
    '''
def __init__(self, args):
super(LastObservedRisk, self).__init__(args)
self.max_pool = nn.MaxPool1d(kernel_size=self.max_steps, stride=1)
def forward(self, x, batch):
'''
Forward func used in training/eval risk progression model.
args:
- x: tensor of shape [B, self.max_steps, args.risk_dimension], with 0s for unobserved
        - batch: full batch obj, contains the 'observed' tensor
returns:
- z: tensor of shape [B, self.max_steps, args.risk_dimension], with last observed risk for each dim
'''
B, _, D = x.size()
        observed_key = 'observed' if 'observed' in batch else 'progression_observed'
        obs = batch[observed_key]  # shape [B, self.max_steps]
indicies = torch.arange(start=0, end=self.max_steps).unsqueeze(0).expand([B,self.max_steps]).to(self.args.device)
obs_indicies = (obs.float() * indicies.float()).unsqueeze(1)
obs_indicies_w_pad = torch.cat([torch.zeros([B, 1, self.max_steps]).to(self.args.device), obs_indicies[:,:,:-1]], dim=-1)
indices_of_most_recent = self.max_pool(obs_indicies_w_pad).long().transpose(1,2).expand(B, self.max_steps, D)
z = torch.gather(x, dim=1, index=indices_of_most_recent)
return z, None
@RegisterModel("static_risk")
class StaticRisk(AbstractDeterministicGuideline):
    '''
    Deterministic risk progression model. Predicts that the observed
    risk does not change from the first observation, i.e. it is static.
    '''
def __init__(self, args):
super(StaticRisk, self).__init__(args)
def forward(self, x, batch):
'''
Forward func used in training/eval risk progression model.
args:
- x: tensor of shape [B, self.max_steps, args.risk_dimension], with 0s for unobserved
        - batch: full batch obj, contains the 'observed' tensor
returns:
        - z: tensor of shape [B, self.max_steps, args.risk_dimension], with the first observed risk for each dim
'''
z = x[:,0,:].unsqueeze(1).expand_as(x).contiguous()
return z, None
@RegisterModel("random")
class Random(AbstractDeterministicGuideline):
    '''
    Predicts random risk at each timestep.
    '''
def __init__(self, args):
super(Random, self).__init__(args)
def forward(self, x, batch):
'''
Forward func used in training/eval risk progression model.
args:
- x: tensor of shape [B, self.max_steps, args.risk_dimension], with 0s for unobserved
        - batch: full batch obj, contains the 'observed' tensor
returns:
        - z: tensor of shape [B, self.max_steps, args.risk_dimension], with random risk for each dim
'''
z = torch.sigmoid( torch.randn_like(x).to(x.device))
return z, None
# File: swagger_ui/__init__.py (repo: dirkgomez/voice-skill-sdk, license: MIT)
#
# voice-skill-sdk
#
# (C) 2020, Deutsche Telekom AG
#
# Deutsche Telekom AG and all other contributors /
# copyright owners license this file to you under the MIT
# License (the "License"); you may not use this file
# except in compliance with the License.
# You may obtain a copy of the License at
#
# https://opensource.org/licenses/MIT
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
from pathlib import Path
from bottle import get, static_file, redirect
here: Path = Path(__file__).absolute().parent
UI_ROOT = here / 'node_modules/swagger-ui-dist'
@get('/')
def root():
return redirect('/swagger-ui/')
@get('/swagger-ui/')
@get('/swagger-ui/<filename:path>')
def send_static(filename=None):
return static_file(filename or 'index.html', root=UI_ROOT)
# File: src/mainmodulename/plugins/type_one_plugin/__init__.py (repo: portikCoder/basic_python_plugin_project, license: MIT)
# Copyright (c) 2021 portikCoder. All rights reserved.
# See the license text under the root package.
from typing import Type
from mainmodulename.common.plugin_template import Plugin
from mainmodulename.plugins.type_one_plugin.type_one_plugin import TypeOnePlugin
PLUGIN_CLASS: Type[Plugin] = TypeOnePlugin
ALIASES = ['ctr', 'ctr_plugin', 'CtrFileNamePlugin']
def get_plugin_class() -> Type[Plugin]:
"""
    If any logic is needed before returning the class itself, put it here.
:return:
"""
return PLUGIN_CLASS
# File: 22. Generate Parentheses/solution1.py (repo: sunshot/LeetCode, license: MIT)
from typing import List
class Solution:
def generateParenthesis(self, n: int) -> List[str]:
if n == 0:
return ['']
if n == 1:
return ['()']
if n == 2:
result = []
result.append('()()')
result.append('(())')
return result
ans = []
for i in range(n):
for left in self.generateParenthesis(i):
for right in self.generateParenthesis(n-1-i):
ans.append('({}){}'.format(left, right))
return ans
if __name__ == '__main__':
solution = Solution()
n = 3
ans = solution.generateParenthesis(n)
print(ans)
n = 4
ans = solution.generateParenthesis(n)
    print(ans)
a63c831ec9710b64344fb7dfa36c7cec29b0cae2 | 358 | py | Python | Intern/variables2.py | AalsiCodeMan/Notebook-Ex | fd1cf8beddf26f6dd94f476f4c308b9057cc5ac7 | [
"Unlicense"
] | 1 | 2020-06-28T12:35:55.000Z | 2020-06-28T12:35:55.000Z | Intern/variables2.py | AalsiCodeMan/Notebook-Ex | fd1cf8beddf26f6dd94f476f4c308b9057cc5ac7 | [
"Unlicense"
] | 1 | 2021-10-02T05:33:51.000Z | 2021-10-02T05:34:02.000Z | Intern/variables2.py | AalsiCodeMan/Notebook-Ex | fd1cf8beddf26f6dd94f476f4c308b9057cc5ac7 | [
"Unlicense"
] | 3 | 2020-10-17T08:19:02.000Z | 2021-10-11T12:33:18.000Z | ## Data Categorisation
'''
1) Whole Numbers (ints) - 100, 1000, -450, 999
2) Real Numbers (floats) - 33.33, 44.01, -1000.033
3) Strings - "Bangalore", "India", "Raj", "abc123"
4) Booleans - True, False
Variables in Python are dynamically typed
'''
a = 10
print(a)
print(type(a))
a = 10.33
print(a)
print(type(a))
a = 'New Jersey'
print(a)
print(type(a))
| 14.916667 | 50 | 0.636872 | 60 | 358 | 3.8 | 0.666667 | 0.078947 | 0.144737 | 0.197368 | 0.219298 | 0.149123 | 0 | 0 | 0 | 0 | 0 | 0.139456 | 0.178771 | 358 | 23 | 51 | 15.565217 | 0.636054 | 0.659218 | 0 | 0.666667 | 0 | 0 | 0.089286 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.666667 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
a63d1ac18dbb8521b652e406baf4c3ebd14d1bd4 | 1,281 | py | Python | PycharmProjects/PythonExercicios/ex045.py | RodrigoMASRamos/Projects.py | ed15981b320914c9667305dcd5fb5b7906fd9b00 | [
"MIT"
] | null | null | null | PycharmProjects/PythonExercicios/ex045.py | RodrigoMASRamos/Projects.py | ed15981b320914c9667305dcd5fb5b7906fd9b00 | [
"MIT"
] | null | null | null | PycharmProjects/PythonExercicios/ex045.py | RodrigoMASRamos/Projects.py | ed15981b320914c9667305dcd5fb5b7906fd9b00 | [
"MIT"
] | null | null | null | # Exercício Python #045 - GAME: Pedra Papel e Tesoura
#
# Crie um programa que faça o computador jogar JOKENPÔ com você.
# Aprenda a arrumar as cores nas respostas!
from random import choice
from random import randint # Maneira utilizada na resolução deste exercício
from time import sleep
print('\033[1;31mATENÇÃO! ESTE É UM JOGO ALTAMENTE PERIGOSO ONDE NÃO HÁ CHANCES DE VITÓRIA PARA VOCÊ!\033[m')
Uc = input('\033[0;30mMe diga, \033[1;34mó grande jogador, \033[0;30mvocê escolhe \033[1;35mPEDRA, \033[1;31mPAPEL, '
'\033[0;30mou \033[1;36mTESOURA? ').strip().upper()
# Keep the computer's choice as a plain string so the comparisons below work;
# the ANSI colour codes are applied only when printing.
PC = ['PEDRA', 'PAPEL', 'TESOURA']
PC_COLORS = {'PEDRA': '\033[1;35m', 'PAPEL': '\033[1;31m', 'TESOURA': '\033[1;36m'}
PCc = choice(PC)
PCc_colored = '{}{}\033[m'.format(PC_COLORS[PCc], PCc)
sleep(0.5)
print('JO')
sleep(1)
print('KEN')
sleep(1)
print('PO!')
if PCc == 'PEDRA' and Uc == 'TESOURA' or PCc == 'TESOURA' and Uc == 'PAPEL' or PCc == 'PAPEL' and Uc == 'PEDRA':
    print(f'\033[1;31mHAHAHA! Eu venci! \033[0;30mEu escolhi \033[m{PCc_colored} \033[0;30me você \033[m{Uc}\033[0;30m!')
elif PCc == Uc:
    print(f'\033[1;33mEMPATE! Vamos jogar novamente! Eu escolhi \033[m{PCc_colored} \033[0;30me você \033[m{Uc}')
else:
    print(f'\033[0;34mT-T Infelizmente,\033[1;32mvocê venceu... \033[0;30mEu escolhi \033[m{PCc_colored}, \033[0;30me você '
          f'escolheu \033[m{Uc}\033[0;30m...\033[m')
a644ae4c04f7d11053b8e2b79110bf1aa7213f69 | 1,882 | py | Python | tests/script/test_p2pk.py | meherett/btmhdw | 6929750edb7747a9937806272127c98db86e4c98 | [
"MIT"
] | 3 | 2019-06-02T06:31:06.000Z | 2019-06-16T20:46:38.000Z | tests/script/test_p2pk.py | meherett/btmhdw | 6929750edb7747a9937806272127c98db86e4c98 | [
"MIT"
] | 3 | 2020-09-10T04:40:58.000Z | 2021-06-25T15:38:35.000Z | tests/script/test_p2pk.py | meherett/btmhdw | 6929750edb7747a9937806272127c98db86e4c98 | [
"MIT"
] | 1 | 2020-08-11T07:48:19.000Z | 2020-08-11T07:48:19.000Z | #!/usr/bin/env python3
import json
import os
from pybytom.script import (
get_public_key_hash, get_p2pkh_program, get_p2wpkh_program, get_p2wpkh_address
)
# Test Values
base_path = os.path.dirname(__file__)
file_path = os.path.abspath(os.path.join(base_path, "..", "values.json"))
with open(file_path, "r") as values:
    _ = json.loads(values.read())
def test_p2pk():
assert get_public_key_hash(
public_key=_["script"]["p2pk"]["public_key"]
) == _["script"]["p2pk"]["public_key_hash"]
assert get_p2pkh_program(
public_key_hash=_["script"]["p2pk"]["public_key_hash"]
) == _["script"]["p2pk"]["program"]["p2pkh"]
assert get_p2wpkh_program(
public_key_hash=_["script"]["p2pk"]["public_key_hash"]
) == _["script"]["p2pk"]["program"]["p2wpkh"]
assert get_p2wpkh_address(
public_key_hash=_["script"]["p2pk"]["public_key_hash"], network="mainnet", vapor=False
) == _["script"]["p2pk"]["address"]["mainnet"]
assert get_p2wpkh_address(
public_key_hash=_["script"]["p2pk"]["public_key_hash"], network="solonet", vapor=False
) == _["script"]["p2pk"]["address"]["solonet"]
assert get_p2wpkh_address(
public_key_hash=_["script"]["p2pk"]["public_key_hash"], network="testnet", vapor=False
) == _["script"]["p2pk"]["address"]["testnet"]
assert get_p2wpkh_address(
public_key_hash=_["script"]["p2pk"]["public_key_hash"], network="mainnet", vapor=True
) == _["script"]["p2pk"]["vapor_address"]["mainnet"]
assert get_p2wpkh_address(
public_key_hash=_["script"]["p2pk"]["public_key_hash"], network="solonet", vapor=True
) == _["script"]["p2pk"]["vapor_address"]["solonet"]
assert get_p2wpkh_address(
public_key_hash=_["script"]["p2pk"]["public_key_hash"], network="testnet", vapor=True
) == _["script"]["p2pk"]["vapor_address"]["testnet"]
| 37.64 | 94 | 0.658874 | 230 | 1,882 | 4.978261 | 0.178261 | 0.165066 | 0.215721 | 0.165939 | 0.703057 | 0.654148 | 0.557205 | 0.557205 | 0.557205 | 0.557205 | 0 | 0.020333 | 0.13762 | 1,882 | 49 | 95 | 38.408163 | 0.685151 | 0.017535 | 0 | 0.210526 | 0 | 0 | 0.275041 | 0 | 0 | 0 | 0 | 0 | 0.236842 | 1 | 0.026316 | false | 0 | 0.078947 | 0 | 0.105263 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
a64954da5abc2b618672c110a9d724b19813f598 | 9,145 | py | Python | benchmarks/compare_with_others.py | ProLoD/icontract-hypothesis | fe6c12a7395807d78880c2bbead48580fe8a1cff | [
"MIT"
] | 57 | 2021-01-14T12:01:19.000Z | 2022-03-02T10:54:43.000Z | benchmarks/compare_with_others.py | ProLoD/icontract-hypothesis | fe6c12a7395807d78880c2bbead48580fe8a1cff | [
"MIT"
] | 7 | 2021-02-15T16:28:55.000Z | 2021-07-23T10:58:21.000Z | benchmarks/compare_with_others.py | ProLoD/icontract-hypothesis | fe6c12a7395807d78880c2bbead48580fe8a1cff | [
"MIT"
] | 2 | 2021-01-21T05:35:58.000Z | 2021-04-02T08:28:06.000Z | #!/usr/bin/env python3
"""Benchmark icontract against deal when used together with hypothesis."""
import os
import sys
import timeit
from typing import List
import deal
import dpcontracts
import hypothesis
import hypothesis.extra.dpcontracts
import hypothesis.strategies
import icontract
import tabulate
import icontract_hypothesis
def benchmark_icontract_assume_preconditions(arg_count: int = 1) -> None:
"""Benchmark the Hypothesis testing with icontract and rejection sampling."""
count = 0
if arg_count == 1:
@icontract.require(lambda a: a > 0)
def some_func(a: int) -> None:
nonlocal count
count += 1
pass
assume_preconditions = icontract_hypothesis.make_assume_preconditions(some_func)
@hypothesis.settings(
suppress_health_check=(hypothesis.HealthCheck.filter_too_much,)
)
@hypothesis.given(a=hypothesis.strategies.integers())
def execute(a: int) -> None:
assume_preconditions(a)
some_func(a)
elif arg_count == 2:
@icontract.require(lambda a: a > 0)
@icontract.require(lambda b: b > 0)
def some_func(a: int, b: int) -> None:
nonlocal count
count += 1
pass
assume_preconditions = icontract_hypothesis.make_assume_preconditions(some_func)
@hypothesis.settings(
suppress_health_check=(hypothesis.HealthCheck.filter_too_much,)
)
@hypothesis.given(
a=hypothesis.strategies.integers(), b=hypothesis.strategies.integers()
)
def execute(a: int, b: int) -> None:
assume_preconditions(a=a, b=b)
some_func(a, b)
elif arg_count == 3:
@icontract.require(lambda a: a > 0)
@icontract.require(lambda b: b > 0)
@icontract.require(lambda c: c > 0)
def some_func(a: int, b: int, c: int) -> None:
nonlocal count
count += 1
pass
assume_preconditions = icontract_hypothesis.make_assume_preconditions(some_func)
@hypothesis.settings(
suppress_health_check=(hypothesis.HealthCheck.filter_too_much,)
)
@hypothesis.given(
a=hypothesis.strategies.integers(),
b=hypothesis.strategies.integers(),
c=hypothesis.strategies.integers(),
)
def execute(a: int, b: int, c: int) -> None:
assume_preconditions(a=a, b=b, c=c)
some_func(a, b, c)
else:
raise NotImplementedError("arg_count {}".format(arg_count))
execute()
# Assert the count of function executions for fair tests
assert count == 100
def benchmark_icontract_inferred_strategy(arg_count: int = 1) -> None:
"""Benchmark the Hypothesis testing with icontract and inferred search strategies."""
count = 0
if arg_count == 1:
@icontract.require(lambda a: a > 0)
def some_func(a: int) -> None:
nonlocal count
count += 1
pass
elif arg_count == 2:
@icontract.require(lambda a: a > 0)
@icontract.require(lambda b: b > 0)
def some_func(a: int, b: int) -> None:
nonlocal count
count += 1
pass
elif arg_count == 3:
@icontract.require(lambda a: a > 0)
@icontract.require(lambda b: b > 0)
@icontract.require(lambda c: c > 0)
def some_func(a: int, b: int, c: int) -> None:
nonlocal count
count += 1
pass
else:
raise NotImplementedError("arg_count {}".format(arg_count))
icontract_hypothesis.test_with_inferred_strategy(some_func)
# Assert the count of function executions for fair tests
assert count == 100
def benchmark_dpcontracts(arg_count: int = 1) -> None:
"""Benchmark the Hypothesis testing with dpcontracts."""
count = 0
if arg_count == 1:
@dpcontracts.require("some dummy contract", lambda args: args.a > 0)
def some_func(a: int) -> None:
nonlocal count
count += 1
pass
@hypothesis.settings(
suppress_health_check=(hypothesis.HealthCheck.filter_too_much,)
)
@hypothesis.given(a=hypothesis.strategies.integers())
def execute(a: int) -> None:
hypothesis.extra.dpcontracts.fulfill(some_func)(a)
elif arg_count == 2:
@dpcontracts.require("some dummy contract", lambda args: args.a > 0)
@dpcontracts.require("some dummy contract", lambda args: args.b > 0)
def some_func(a: int, b: int) -> None:
nonlocal count
count += 1
pass
@hypothesis.settings(
suppress_health_check=(hypothesis.HealthCheck.filter_too_much,)
)
@hypothesis.given(
a=hypothesis.strategies.integers(), b=hypothesis.strategies.integers()
)
def execute(a: int, b: int) -> None:
hypothesis.extra.dpcontracts.fulfill(some_func)(a, b)
elif arg_count == 3:
@dpcontracts.require("some dummy contract", lambda args: args.a > 0)
@dpcontracts.require("some dummy contract", lambda args: args.b > 0)
@dpcontracts.require("some dummy contract", lambda args: args.c > 0)
def some_func(a: int, b: int, c: int) -> None:
nonlocal count
count += 1
pass
@hypothesis.settings(
suppress_health_check=(hypothesis.HealthCheck.filter_too_much,)
)
@hypothesis.given(
a=hypothesis.strategies.integers(),
b=hypothesis.strategies.integers(),
c=hypothesis.strategies.integers(),
)
def execute(a: int, b: int, c: int) -> None:
hypothesis.extra.dpcontracts.fulfill(some_func)(a, b, c)
else:
raise NotImplementedError("arg_count {}".format(arg_count))
execute()
# Assert the count of function executions for fair tests
assert count == 100
def benchmark_deal(arg_count: int = 1) -> None:
"""Benchmark the Hypothesis testing with deal."""
count = 0
if arg_count == 1:
@deal.pre(lambda _: _.a > 0)
def some_func(a: int) -> None:
nonlocal count
count += 1
pass
for case in deal.cases(some_func, count=100):
case()
elif arg_count == 2:
@deal.pre(lambda _: _.a > 0)
@deal.pre(lambda _: _.b > 0)
def some_func(a: int, b: int) -> None:
nonlocal count
count += 1
pass
for case in deal.cases(some_func, count=100):
case()
elif arg_count == 3:
@deal.pre(lambda _: _.a > 0)
@deal.pre(lambda _: _.b > 0)
@deal.pre(lambda _: _.c > 0)
def some_func(a: int, b: int, c: int) -> None:
nonlocal count
count += 1
pass
for case in deal.cases(some_func, count=100):
case()
else:
raise NotImplementedError("arg_count {}".format(arg_count))
assert count == 100
def writeln_utf8(text: str = "") -> None:
"""
Write the text to STDOUT using UTF-8 encoding followed by a new-line character.
We can not use ``print()`` as we can not rely on the correct encoding in Windows.
See: https://stackoverflow.com/questions/31469707/changing-the-locale-preferred-encoding-in-python-3-in-windows
"""
sys.stdout.buffer.write(text.encode("utf-8"))
sys.stdout.buffer.write(os.linesep.encode("utf-8"))
def measure_functions() -> None:
# yapf: disable
funcs = [
'benchmark_icontract_inferred_strategy',
'benchmark_icontract_assume_preconditions',
'benchmark_dpcontracts',
'benchmark_deal',
]
# yapf: enable
durations = [0.0] * len(funcs)
number = 10
for arg_count in [1, 2, 3]:
for i, func in enumerate(funcs):
duration = timeit.timeit(
"{}(arg_count={})".format(func, arg_count),
setup="from __main__ import {}".format(func),
number=number,
)
durations[i] = duration
table = [] # type: List[List[str]]
for func, duration in zip(funcs, durations):
# yapf: disable
table.append([
'`{}`'.format(func),
'{:.2f} s'.format(duration),
'{:.2f} ms'.format(duration * 1000 / number),
'{:.0f}%'.format(duration * 100 / durations[0])
])
# yapf: enable
# yapf: disable
table_str = tabulate.tabulate(
table,
headers=['Case', 'Total time', 'Time per run', 'Relative time per run'],
colalign=('left', 'right', 'right', 'right'),
tablefmt='rst')
# yapf: enable
writeln_utf8()
writeln_utf8("Argument count: {}".format(arg_count))
writeln_utf8()
writeln_utf8(table_str)
if __name__ == "__main__":
writeln_utf8("Benchmarking Hypothesis testing:")
writeln_utf8("")
measure_functions()
| 29.310897 | 115 | 0.584254 | 1,058 | 9,145 | 4.910208 | 0.155009 | 0.043118 | 0.031184 | 0.027719 | 0.692397 | 0.686044 | 0.6795 | 0.675265 | 0.642733 | 0.625217 | 0 | 0.018053 | 0.303445 | 9,145 | 311 | 116 | 29.405145 | 0.797488 | 0.096227 | 0 | 0.65 | 0 | 0 | 0.05825 | 0.011942 | 0 | 0 | 0 | 0 | 0.018182 | 1 | 0.109091 | false | 0.054545 | 0.059091 | 0 | 0.168182 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
a6502bc8d3efa4a62ed9b5abd448b7a3fdc28b1d | 12,184 | py | Python | models/edhoc/draftedhoc-20200301/oracle.py | hoheinzollern/EDHOC-Verification | b62bb5192021b9cee52845943ba0c1999cb84119 | [
"MIT"
] | null | null | null | models/edhoc/draftedhoc-20200301/oracle.py | hoheinzollern/EDHOC-Verification | b62bb5192021b9cee52845943ba0c1999cb84119 | [
"MIT"
] | null | null | null | models/edhoc/draftedhoc-20200301/oracle.py | hoheinzollern/EDHOC-Verification | b62bb5192021b9cee52845943ba0c1999cb84119 | [
"MIT"
] | null | null | null | #!/usr/bin/python3
import sys, re
from functools import reduce
DEBUG = False
#DEBUG = True
# Put prios between 0 and 100. Above 100 is for default strategy
MAXNPRIO = 200 # max number of prios, 0 is lowest prio
FALLBACKPRIO = MAXNPRIO  # unmatched goals fall back to the highest prio level
prios = [(i, []) for i in range(MAXNPRIO + 1)]
def outputPrios(goalLines, lemma):
rankedGoals = [str(goal) + "\n" for prioList in prios for goal in prioList[1]]
print("".join(rankedGoals))
def dumpPrios(goalLines, lemma):
print("Prios:")
for pl in prios:
for p in pl[1]:
print(" > level:{}, goalNo:{}".format(pl[0], p))
def prioritize(goalNumber, prio, goalLine):
prios[prio][1].insert(0, goalNumber)
if DEBUG:
goal = re.sub("\s+", " ", goalLine)
print("goalNo:{} prio:{} goal:{}".format(goalNumber, prio, goal))
def genPrios(goalLines, lemma):
# Prioritize splitEqs over new instances
# splitEqs = False
# splitEqsLine = -1
# for i in range(len(goalLines)):
# if re.match(".*splitEqs.*", goalLines[i]):
# splitEqs = True
# splitEqsLine = i
for line in goalLines:
goal = line.split(':')[0]
if "sanity" in lemma:
if DEBUG:
print("MATCHING Sanity LEMMA: {}".format(lemma))
if re.match(".*SKRev.*", line) or\
re.match(".*Completed.*", line):
prioritize(goal, 90, line)
elif re.match(".*StR.*", line) or\
re.match(".*StI.*", line):
prioritize(goal, 80, line)
elif re.match(".*KU\( 'g'\^~xx \).*", line) or\
re.match(".*KU\( 'g'\^~yy \).*", line) or\
re.match(".*KU\( ~xx.*", line) or\
re.match(".*KU\( ~yy.*", line) or\
re.match(".*~~>.*", line) or\
re.match(".*=.*=.*", line):
prioritize(goal, 70, line)
elif re.match(".*LTK_.*", line):
prioritize(goal, 67, line)
elif re.match(".*aead.*", line):
prioritize(goal, 65, line)
elif re.match(".*KU\( sign.*", line) or\
re.match(".*KU\( extr.*", line) or\
re.match(".*KU\( expa.*", line):
prioritize(goal, 60, line)
elif re.match(".*KU\( h\(.*", line):
prioritize(goal, 55, line)
elif re.match(".*KU\( h\(.*", line):
prioritize(goal, 55, line)
else:
prioritize(goal, 50, line)
elif "authImplicit" in lemma: #"authGIYImplicitAuthGuarantee" in lemma: # Special for imp agree
if DEBUG:
print("MATCHING Auth LEMMA: {}".format(lemma))
if re.match(".*: !KU\( ~xx \).*", line) or\
re.match(".*: !KU\( ~yy \).*", line) or\
re.match(".*: !KU\( ~xx\.. \).*", line) or\
re.match(".*: !KU\( ~yy\.. \).*", line) or\
re.match(".*~~>.*", line) or\
re.match(".*: !KU\( 'g'\^~xx \).*", line) or\
re.match(".*: !KU\( 'g'\^~xx\.. \).*", line) or\
re.match(".*: !KU\( 'g'\^~yy \).*", line) or\
re.match(".*: !KU\( 'g'\^~yy\.. \).*", line) or\
re.match(".*: !KU\( ~ltk \).*", line) or\
re.match(".*: !KU\( ~ltk\.. \).*", line) or\
re.match(".*: !KU\( pk\(~ltk\) \).*", line) or\
re.match(".*: !KU\( 'g'\^~ltk \).*", line) or\
re.match(".*: !KU\( 'g'\^~ltk\.. \).*", line) or\
re.match(".*: !LTK_SIG\(.*", line) or\
re.match(".*: !LTK_STAT\(.*", line) or\
re.match(".*: !PK_SIG\(.*", line) or\
re.match(".*: !PK_STAT\(.*", line) or\
re.match(".*: StI._.*", line) or\
re.match(".*: StR._.*", line) or\
re.match(".*ExpRunning.*", line):
prioritize(goal, 97, line)
elif re.match(".*KU\( 'g'\^\(~yy.*\*~ltk.*", line) or\
re.match(".*KU\( 'g'\^\(~xx.*\*~ltk.*", line):
prioritize(goal, 93, line)
elif re.match(".*KU\( 'g'\^\(~xx.*\*~yy.*", line) or\
re.match(".*KU\( 'g'\^\(~yy.*\*~xx.*", line):
prioritize(goal, 90, line)
elif re.match(".*KU\( extr.*", line) or\
re.match(".*KU\( expa.*", line):
prioritize(goal, 80, line)
elif re.match(".*LTKRev.*", line) or\
re.match(".*sign.*", line) or\
re.match(".*aead.*", line):
prioritize(goal, 70, line)
elif re.match(".*KU\( h\(.*", line):
prioritize(goal, 60, line)
elif re.match(".*KU\( \(.V⊕.*", line):
prioritize(goal, 40, line)
else:
prioritize(goal, 50, line)
elif "auth" in lemma:
if DEBUG:
print("MATCHING Auth LEMMA: {}".format(lemma))
if re.match(".*KU\( ~ltk.*", line) or\
re.match(".*KU\( ~xx.*", line) or\
re.match(".*KU\( ~yy.*", line):
prioritize(goal, 98, line)
elif re.match(".*: !KU\( ~xx \).*", line) or\
re.match(".*: !KU\( ~yy \).*", line) or\
re.match(".*: !KU\( ~xx\.. \).*", line) or\
re.match(".*: !KU\( ~yy\.. \).*", line) or\
re.match(".*~~>.*", line) or\
re.match(".*: !KU\( 'g'\^~xx \).*", line) or\
re.match(".*: !KU\( 'g'\^~xx\.. \).*", line) or\
re.match(".*: !KU\( 'g'\^~yy \).*", line) or\
re.match(".*: !KU\( 'g'\^~yy\.. \).*", line) or\
re.match(".*: !KU\( ~ltk \).*", line) or\
re.match(".*: !KU\( ~ltk\.. \).*", line) or\
re.match(".*: !KU\( pk\(~ltk\) \).*", line) or\
re.match(".*: !KU\( 'g'\^~ltk \).*", line) or\
re.match(".*: !KU\( 'g'\^~ltk\.. \).*", line) or\
re.match(".*: !LTK_SIG\(.*", line) or\
re.match(".*: !LTK_STAT\(.*", line) or\
re.match(".*: !PK_SIG\(.*", line) or\
re.match(".*: !PK_STAT\(.*", line) or\
re.match(".*: StI._.*", line) or\
re.match(".*: StR._.*", line) or\
re.match(".*ExpRunning.*", line):
prioritize(goal, 90, line)
elif \
re.match(".*aead.*", line) or\
re.match(".*KU\( expa.*", line) or\
re.match(".*KU\( extr.*", line):
prioritize(goal, 87, line)
elif re.match(".*KU\( 'g'\^~ltk.*\).*", line) or\
re.match(".*KU\( 'g'\^\(~ltk.*\).*", line) or\
re.match(".*Helper.*", line) or\
re.match(".*~~>.*", line) or\
re.match(".*=.*=.*", line):
prioritize(goal, 85, line)
elif re.match(".*KU\( 'g'\^\(~yy.*\*~ltk.*", line) or\
re.match(".*KU\( 'g'\^\(~xx.*\*~ltk.*", line):
prioritize(goal, 80, line)
elif re.match(".*KU\( 'g'\^\(~xx.*\*~yy.*", line) or\
re.match(".*KU\( 'g'\^\(~yy.*\*~xx.*", line):
prioritize(goal, 75, line)
elif re.match(".*LTKRev.*", line) or\
re.match(".*sign.*", line) or\
re.match(".*splitEqs.*", line) or\
re.match(".*StI.*", line) or\
re.match(".*StR.*", line):
prioritize(goal, 70, line)
elif re.match(".*KU\( h\(.*", line):
prioritize(goal, 60, line)
elif re.match(".*KU\( \(.V⊕.*", line):
prioritize(goal, 40, line)
else:
prioritize(goal, 50, line)
elif "AEAD" in lemma:
if DEBUG:
print("MATCHING AEAD LEMMA: {}".format(lemma))
if re.match(".*: !KU\( ~xx \).*", line) or\
re.match(".*: !KU\( ~yy \).*", line) or\
re.match(".*: !KU\( ~xx\.. \).*", line) or\
re.match(".*: !KU\( ~yy\.. \).*", line) or\
re.match(".*: !KU\( ~ltk \).*", line) or\
re.match(".*: !KU\( ~ltk\.. \).*", line) or\
re.match(".*: !KU\( ~AD_3.*", line):
prioritize(goal, 90, line)
elif \
re.match(".*KU\( 'g'\^\(~xx\*~yy\).*", line) or\
re.match(".*KU\( 'g'\^\(~ltk\*~yy\).*", line) or\
re.match(".*KU\( 'g'\^\(~ltk\*~xx\).*", line):
prioritize(goal, 80, line)
            elif \
                re.match(".*: !KU\( *aead\(.*", line) or\
                re.match(".*: !KU\( *expa.*", line) or\
                re.match(".*: !KU\( *extr.*", line):
                prioritize(goal, 70, line)
elif \
re.match(".*last.*", line) or\
re.match(".*: !KU\( pk\(~ltk\) \).*", line) or\
re.match(".*: !KU\( 'g'\^~ltk \).*", line) or\
re.match(".*: !KU\( 'g'\^~ltk\.. \).*", line) or\
re.match(".*: !LTK_SIG\(.*", line) or\
re.match(".*: !LTK_STAT\(.*", line) or\
re.match(".*: !PK_SIG\(.*", line) or\
re.match(".*: !PK_STAT\(.*", line) or\
re.match(".*: St.*", line):
prioritize(goal, 60, line)
else:
prioritize(goal, 50, line)
elif "secrecy" in lemma:
if DEBUG:
print("MATCHING Secrecy LEMMA: {}".format(lemma))
if re.match(".*KU\( ~ltk.*", line) or\
re.match(".*KU\( 'g'\^\(~ltk\*.*\).*", line):
prioritize(goal, 97, line)
elif re.match(".*KU\( ~xx.*", line) or\
re.match(".*KU\( ~yy.*", line) or\
re.match(".*Helper.*", line) or\
re.match(".*~~>.*", line) or\
re.match(".*=.*=.*", line):
prioritize(goal, 95, line)
elif re.match(".*KU\( 'g'\^\(~xx\*~yy\).*", line) or\
re.match(".*KU\( 'g'\^\(~yy\*~xx\).*", line):
prioritize(goal, 90, line)
elif \
re.match(".*KU\( expa.*", line) or\
re.match(".*KU\( extr.*", line):
prioritize(goal, 80, line)
elif re.match(".*LTKRev.*", line) or\
re.match(".*sign.*", line) or\
re.match(".*StI.*", line) or\
re.match(".*StR.*", line) or\
re.match(".*aead.*", line):
prioritize(goal, 70, line)
elif re.match(".*KU\( h\(.*", line):
prioritize(goal, 60, line)
elif re.match(".*KU\( \(.V⊕.*", line):
prioritize(goal, 40, line)
else:
prioritize(goal, 50, line)
else:
if DEBUG:
print("NO MATCH FOR LEMMA: {}".format(lemma))
exit(0)
def echoOracle(goalLines, lemma):
for line in goalLines:
goal = line.split(':')[0]
prioritize(goal, 0, line)
def testMatch(pattern, tamarinString):
if re.match(pattern, tamarinString):
print("Matches!")
else:
print("Don't match!")
if __name__ == "__main__":
if sys.argv[1] == "testMatch":
if len(sys.argv) != 4:
print("usage: oracle.py testMatch pattern tamarinString")
sys.exit(1)
testMatch(sys.argv[2], sys.argv[3])
sys.exit(0)
goalLines = sys.stdin.readlines()
lemma = sys.argv[1]
genPrios(goalLines, lemma)
#echoOracle(goalLines, lemma)
# We want 0 to be lowest prio, so reverse all level-lists and the list itself
prios = [(p[0], p[1][::-1]) for p in prios][::-1]
outputPrios(goalLines, lemma)
#dumpPrios(goalLines, lemma)
| 43.514286 | 105 | 0.403234 | 1,313 | 12,184 | 3.724296 | 0.115765 | 0.19182 | 0.157055 | 0.255215 | 0.716155 | 0.707771 | 0.670961 | 0.647853 | 0.596319 | 0.578937 | 0 | 0.015187 | 0.367695 | 12,184 | 279 | 106 | 43.670251 | 0.619159 | 0.046454 | 0 | 0.643443 | 0 | 0 | 0.213571 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.02459 | false | 0 | 0.008197 | 0 | 0.032787 | 0.053279 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
a6645500bd9415fcb00c5c1063997449655e7a5b | 464 | py | Python | supriya/ugens/LFNoise1.py | deeuu/supriya | 14fcb5316eccb4dafbe498932ceff56e1abb9d27 | [
"MIT"
] | null | null | null | supriya/ugens/LFNoise1.py | deeuu/supriya | 14fcb5316eccb4dafbe498932ceff56e1abb9d27 | [
"MIT"
] | null | null | null | supriya/ugens/LFNoise1.py | deeuu/supriya | 14fcb5316eccb4dafbe498932ceff56e1abb9d27 | [
"MIT"
] | null | null | null | import collections
from supriya import CalculationRate
from supriya.synthdefs import UGen
class LFNoise1(UGen):
"""
A ramp noise generator.
::
>>> supriya.ugens.LFNoise1.ar()
LFNoise1.ar()
"""
### CLASS VARIABLES ###
__documentation_section__ = "Noise UGens"
_ordered_input_names = collections.OrderedDict([("frequency", 500.0)])
_valid_calculation_rates = (CalculationRate.AUDIO, CalculationRate.CONTROL)
| 18.56 | 79 | 0.68319 | 45 | 464 | 6.8 | 0.666667 | 0.071895 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019022 | 0.206897 | 464 | 24 | 80 | 19.333333 | 0.8125 | 0.213362 | 0 | 0 | 0 | 0 | 0.060606 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.428571 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
a66f853bf9f33f87146ba3858b82466747a4ba7f | 133 | py | Python | nicos_mlz/refsans/setups/elements/alphai.py | jkrueger1/nicos | 5f4ce66c312dedd78995f9d91e8a6e3c891b262b | [
"CC-BY-3.0",
"Apache-2.0",
"CC-BY-4.0"
] | 12 | 2019-11-06T15:40:36.000Z | 2022-01-01T16:23:00.000Z | nicos_mlz/refsans/setups/elements/alphai.py | jkrueger1/nicos | 5f4ce66c312dedd78995f9d91e8a6e3c891b262b | [
"CC-BY-3.0",
"Apache-2.0",
"CC-BY-4.0"
] | 91 | 2020-08-18T09:20:26.000Z | 2022-02-01T11:07:14.000Z | nicos_mlz/refsans/setups/elements/alphai.py | jkrueger1/nicos | 5f4ce66c312dedd78995f9d91e8a6e3c891b262b | [
"CC-BY-3.0",
"Apache-2.0",
"CC-BY-4.0"
] | 6 | 2020-01-11T10:52:30.000Z | 2022-02-25T12:35:23.000Z | description = 'Alphai alias device'
group = 'lowlevel'
devices = dict(
alphai = device('nicos.devices.generic.DeviceAlias'),
)
| 16.625 | 57 | 0.706767 | 14 | 133 | 6.714286 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157895 | 133 | 7 | 58 | 19 | 0.839286 | 0 | 0 | 0 | 0 | 0 | 0.451128 | 0.24812 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
a66fb86f66ae53aa1b1feda298a87c0ab06d20d0 | 268 | py | Python | src/core/migrations/0055_merge_20190305_1616.py | metabolism-of-cities/ARCHIVED-metabolism-of-cities-platform-v3 | c754d3b1b401906a21640b8eacb6b724a448b31c | [
"MIT"
] | null | null | null | src/core/migrations/0055_merge_20190305_1616.py | metabolism-of-cities/ARCHIVED-metabolism-of-cities-platform-v3 | c754d3b1b401906a21640b8eacb6b724a448b31c | [
"MIT"
] | null | null | null | src/core/migrations/0055_merge_20190305_1616.py | metabolism-of-cities/ARCHIVED-metabolism-of-cities-platform-v3 | c754d3b1b401906a21640b8eacb6b724a448b31c | [
"MIT"
] | null | null | null | # Generated by Django 2.1.2 on 2019-03-05 16:16
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('core', '0054_auto_20190305_1613'),
('core', '0054_merge_20190304_0758'),
]
operations = [
]
| 17.866667 | 47 | 0.641791 | 33 | 268 | 5.030303 | 0.787879 | 0.096386 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.229268 | 0.235075 | 268 | 14 | 48 | 19.142857 | 0.580488 | 0.16791 | 0 | 0 | 1 | 0 | 0.248869 | 0.21267 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
a6844d436f3ac8eae3409198b262402806d0a4c7 | 1,022 | py | Python | class3/exercises/exercise3/exercise3.py | EndlessDynamics/Fork_nornir_course | 04bf7e3819659f481a4e04059152877b795177b2 | [
"Apache-2.0"
] | null | null | null | class3/exercises/exercise3/exercise3.py | EndlessDynamics/Fork_nornir_course | 04bf7e3819659f481a4e04059152877b795177b2 | [
"Apache-2.0"
] | null | null | null | class3/exercises/exercise3/exercise3.py | EndlessDynamics/Fork_nornir_course | 04bf7e3819659f481a4e04059152877b795177b2 | [
"Apache-2.0"
] | null | null | null | from nornir import InitNornir
from nornir.core.filter import F
def main():
nr = InitNornir()
print("\nExercise 3a (role AGG)")
print("-" * 20)
agg_devs = nr.filter(F(role__contains="AGG"))
print(agg_devs.inventory.hosts)
print("-" * 20)
print("\nExercise 3b (sea or sfo group)")
print("-" * 20)
union = nr.filter(F(groups__contains="sea") | F(groups__contains="sfo"))
print(union.inventory.hosts)
print("-" * 20)
print("\nExercise 3c (WAN-role and WIFI password 'racecar')")
print("-" * 20)
racecar = nr.filter(
F(site_details__wifi_password__contains="racecar") & F(role="WAN")
)
print(racecar.inventory.hosts)
print("-" * 20)
print("\nExercise 3d (WAN-role and not WIFI password 'racecar')")
print("-" * 20)
not_racecar = nr.filter(
~F(site_details__wifi_password__contains="racecar") & F(role="WAN")
)
print(not_racecar.inventory.hosts)
print("-" * 20)
print()
if __name__ == "__main__":
main()
| 25.55 | 76 | 0.620352 | 130 | 1,022 | 4.646154 | 0.3 | 0.092715 | 0.059603 | 0.139073 | 0.548013 | 0.461921 | 0.221854 | 0.221854 | 0.221854 | 0.221854 | 0 | 0.024876 | 0.213307 | 1,022 | 39 | 77 | 26.205128 | 0.726368 | 0 | 0 | 0.258065 | 0 | 0 | 0.204501 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.032258 | false | 0.129032 | 0.064516 | 0 | 0.096774 | 0.548387 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 2 |
a68c2fae55a4172bd6d6344ce187492035975eee | 1,263 | py | Python | app/analyzers/indicators/sar.py | r15ch13/crypto-signal | d423681223124278a3942cf2e930aafe5b84a855 | [
"MIT"
] | 4 | 2021-03-03T16:39:59.000Z | 2021-08-28T21:05:34.000Z | app/analyzers/indicators/sar.py | r15ch13/crypto-signal | d423681223124278a3942cf2e930aafe5b84a855 | [
"MIT"
] | 1 | 2021-05-10T16:11:48.000Z | 2021-05-10T16:11:48.000Z | app/analyzers/indicators/sar.py | r15ch13/crypto-signal | d423681223124278a3942cf2e930aafe5b84a855 | [
"MIT"
] | 6 | 2019-03-07T10:58:45.000Z | 2021-05-08T22:18:01.000Z | """ MACD Indicator
"""
import pandas
from talib import abstract
from analyzers.utils import IndicatorUtils
class SAR(IndicatorUtils):
def analyze(self, historical_data, signal=['sar'], hot_thresh=None, cold_thresh=None):
"""Performs a macd analysis on the historical data
Args:
historical_data (list): A matrix of historical OHCLV data.
            signal (list, optional): Defaults to sar. The indicator line to check hot/cold
against.
hot_thresh (float, optional): Defaults to None. The threshold at which this might be
good to purchase.
cold_thresh (float, optional): Defaults to None. The threshold at which this might be
good to sell.
Returns:
pandas.DataFrame: A dataframe containing the indicators and hot/cold values.
"""
dataframe = self.convert_to_dataframe(historical_data)
sar_values = abstract.SAR(dataframe).iloc[:]
sar_values.dropna(how='all', inplace=True)
if sar_values[signal[0]].shape[0]:
sar_values['is_hot'] = sar_values[signal[0]] > hot_thresh
sar_values['is_cold'] = sar_values[signal[0]] < cold_thresh
return sar_values
| 33.236842 | 97 | 0.648456 | 159 | 1,263 | 5.018868 | 0.421384 | 0.090226 | 0.067669 | 0.06015 | 0.172932 | 0.172932 | 0.172932 | 0.172932 | 0.172932 | 0.172932 | 0 | 0.00432 | 0.266825 | 1,263 | 37 | 98 | 34.135135 | 0.857451 | 0.436263 | 0 | 0 | 0 | 0 | 0.031148 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.307692 | 0 | 0.538462 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
a68d575600de2c24556e7348001ad2a913abac90 | 1,178 | py | Python | tools/csv2kml.py | platypii/BASElineFlightComputer | 7881febb680b32c88c7331563e6b4ed15db9a433 | [
"MIT"
] | 22 | 2016-06-28T15:18:25.000Z | 2022-03-20T01:44:13.000Z | tools/csv2kml.py | Probot9/BASElineFlightComputer | cd449d32887e82d8a187586894cbd1a44c5b668a | [
"MIT"
] | 5 | 2018-04-30T00:54:03.000Z | 2020-01-30T18:11:45.000Z | tools/csv2kml.py | Probot9/BASElineFlightComputer | cd449d32887e82d8a187586894cbd1a44c5b668a | [
"MIT"
] | 5 | 2015-05-27T03:26:49.000Z | 2021-12-11T05:49:33.000Z | #!/usr/bin/python
import sys
if len(sys.argv) != 2:
    print 'Usage: csv2kml input.csv'
    exit()
kml_header = """<?xml version=\"1.0\" encoding=\"UTF-8\"?>
<kml xmlns=\"http://www.opengis.net/kml/2.2\" xmlns:gx=\"http://www.google.com/kml/ext/2.2\" xmlns:kml=\"http://www.opengis.net/kml/2.2\" xmlns:atom=\"http://www.w3.org/2005/Atom\">
<Document>
<Style id=\"flight\">
<LineStyle>
<color>ffff5500</color>
<width>5</width>
</LineStyle>
</Style>
<Folder>
<name>Jump</name>
<open>1</open>
<Placemark>
<name>Track</name>
<styleUrl>#flight</styleUrl>
<LineString>
<tessellate>1</tessellate>
<altitudeMode>absolute</altitudeMode>
<coordinates>"""
kml_footer = """
</coordinates>
</LineString>
</Placemark>
</Folder>
</Document>
</kml>
"""
# Write KML file
with open(sys.argv[1]) as f:
    # write kml header
    print kml_header
    # write point data
    for line in f:
        cols = line.split(',')
        if cols[1] == 'gps':
            lat = cols[5]
            lon = cols[6]
            alt = cols[7]
            print lon + ',' + lat + ',' + alt
# write kml footer
print kml_footer
| 21.418182 | 181 | 0.565365 | 153 | 1,178 | 4.326797 | 0.503268 | 0.042296 | 0.031722 | 0.05136 | 0.081571 | 0.081571 | 0.081571 | 0.081571 | 0 | 0 | 0 | 0.03118 | 0.237691 | 1,178 | 54 | 182 | 21.814815 | 0.706013 | 0.06961 | 0 | 0 | 0 | 0.02439 | 0.708257 | 0.104587 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.02439 | null | null | 0.097561 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
a69449e89ea900074212b71558e97b166df355bd | 1,388 | py | Python | skassist-docs/python/doc_definitions.py | radmerti/scikit-assist | 6ecd58608632c2ef8b52f6f71fd3695db522e22e | [
"BSD-2-Clause"
] | null | null | null | skassist-docs/python/doc_definitions.py | radmerti/scikit-assist | 6ecd58608632c2ef8b52f6f71fd3695db522e22e | [
"BSD-2-Clause"
] | null | null | null | skassist-docs/python/doc_definitions.py | radmerti/scikit-assist | 6ecd58608632c2ef8b52f6f71fd3695db522e22e | [
"BSD-2-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
# ______________________________________________________________________________
def boolean_func(experiment):
    """Function that returns True when an experiment matches and False otherwise.

    Args:
        experiment (:class:`~skassist.Experiment`): Experiment that is to be tested.
    """
# ______________________________________________________________________________
def scoring_function(self, model, y_true, y_predicted_probability):
    """The scoring function takes a model, the true labels and the prediction
    and calculates one or more scores. These are returned in a dictionary which
    :func:`~skassist.Model.calc_results` uses to commit them to permanent storage.

    Args:
        scoring_function (:func:`function`):
            A python function for calculating the results given the true labels
            and the predictions. See :func:`~skassist.Model.scoring_function`.

        skf (:obj:`numpy.ndarray`):
            An array containing arrays of splits. E.g. an array with 10 arrays,
            each containing 3 splits for a 10-fold cross-validation with
            training, test and validation set.

        df (:obj:`pandas.DataFrame`):
            The DataFrame on which to evaluate the model. Must contain all
            feature, "extra" feature and target columns that the model
            requires.
""" | 42.060606 | 84 | 0.701009 | 159 | 1,388 | 5.08805 | 0.584906 | 0.074166 | 0.032138 | 0.039555 | 0.046972 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005592 | 0.226945 | 1,388 | 33 | 85 | 42.060606 | 0.748369 | 0.852305 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | false | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
a696ee2566ea61b7ea08a6739c150d86fe1efe72 | 1,526 | py | Python | bostaSDK/pickup/create/CreatePickupResponse.py | bostaapp/bosta-python | df3f48dafac49b2577669fd4d74a5e5e9d28f2c1 | [
"MIT"
] | null | null | null | bostaSDK/pickup/create/CreatePickupResponse.py | bostaapp/bosta-python | df3f48dafac49b2577669fd4d74a5e5e9d28f2c1 | [
"MIT"
] | 1 | 2020-11-18T11:01:32.000Z | 2020-11-18T11:10:52.000Z | bostaSDK/pickup/create/CreatePickupResponse.py | bostaapp/bosta-python | df3f48dafac49b2577669fd4d74a5e5e9d28f2c1 | [
"MIT"
] | null | null | null |
class CreatePickupResponse:

    def __init__(self, res):
        """
        Initialize new instance from CreatePickupResponse class

        Parameters:
            res (dict, str): JSON response object or response text message

        Returns:
            instance from CreatePickupResponse
        """
        self.fromResponseObj(res)

    def fromResponseObj(self, res):
        """
        Extract _id, puid, business, businessLocationId,
        scheduledDate, scheduledTimeSlot, contactPerson,
        createdAt and updatedAt fields from json response object

        Parameters:
            res (dict, str): JSON response object or response text message
        """
        if type(res) is dict and res.get('data') is not None:
            self.message = res.get("message")
            newPickup = res["data"]
            self._id = newPickup["_id"]
            self.puid = newPickup["puid"]
            self.business = newPickup["business"]
            self.businessLocationId = newPickup["businessLocationId"]
            self.scheduledDate = newPickup["scheduledDate"]
            self.scheduledTimeSlot = newPickup["scheduledTimeSlot"]
            self.contactPerson = newPickup["contactPerson"]
            self.createdAt = newPickup["createdAt"]
            self.updatedAt = newPickup["updatedAt"]
        else:
            self.message = str(res)

    def __str__(self):
        return self.message

    def get_pickupId(self):
        return self._id

    def get_message(self):
        return self.message
| 29.921569 | 70 | 0.609436 | 142 | 1,526 | 6.450704 | 0.309859 | 0.048035 | 0.058952 | 0.043668 | 0.128821 | 0.128821 | 0.128821 | 0.128821 | 0.128821 | 0.128821 | 0 | 0 | 0.303408 | 1,526 | 50 | 71 | 30.52 | 0.861712 | 0.26671 | 0 | 0.083333 | 0 | 0 | 0.108458 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.208333 | false | 0 | 0 | 0.125 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
a6983bd2dae45fdbaac009e69a080431f0e315a5 | 522 | py | Python | code/bot/mapping-bringup/launch/slam_launch.py | jacobwaller/beer-bot | 60d89bee2029d87a870940081bd6ab93d05c4eca | [
"MIT"
] | null | null | null | code/bot/mapping-bringup/launch/slam_launch.py | jacobwaller/beer-bot | 60d89bee2029d87a870940081bd6ab93d05c4eca | [
"MIT"
] | null | null | null | code/bot/mapping-bringup/launch/slam_launch.py | jacobwaller/beer-bot | 60d89bee2029d87a870940081bd6ab93d05c4eca | [
"MIT"
] | null | null | null | from launch import LaunchDescription
import launch_ros.actions
from ament_index_python.packages import get_package_share_directory
def generate_launch_description():
    return LaunchDescription([
        launch_ros.actions.Node(
            parameters=[
                get_package_share_directory("slam_toolbox") + '/config/mapper_params_lifelong.yaml'
            ],
            package='slam_toolbox',
            executable='lifelong_slam_toolbox_node',
            name='slam_toolbox',
            output='screen'
        )
]) | 30.705882 | 95 | 0.683908 | 53 | 522 | 6.358491 | 0.584906 | 0.130564 | 0.094955 | 0.142433 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.235632 | 522 | 17 | 96 | 30.705882 | 0.844612 | 0 | 0 | 0 | 1 | 0 | 0.196941 | 0.116635 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | true | 0 | 0.2 | 0.066667 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
a69c6da5a24875fb6294bf1da51edf46e3db4865 | 475 | py | Python | hw/hw09/tests/q1_3.py | ds-modules/Colab-demo | cccaff13633f8a5ec697cd4aeca9087f2feec2e4 | [
"BSD-3-Clause"
] | null | null | null | hw/hw09/tests/q1_3.py | ds-modules/Colab-demo | cccaff13633f8a5ec697cd4aeca9087f2feec2e4 | [
"BSD-3-Clause"
] | null | null | null | hw/hw09/tests/q1_3.py | ds-modules/Colab-demo | cccaff13633f8a5ec697cd4aeca9087f2feec2e4 | [
"BSD-3-Clause"
] | null | null | null | test = { 'name': 'q1_3',
         'points': 1,
         'suites': [ { 'cases': [ {'code': '>>> type(max_estimate) in set([int, np.int32, np.int64])\nTrue', 'hidden': False, 'locked': False},
                                  {'code': '>>> max_estimate in observations.column(0)\nTrue', 'hidden': False, 'locked': False}],
                       'scored': True,
                       'setup': '',
                       'teardown': '',
                       'type': 'doctest'}]}
| 52.777778 | 144 | 0.414737 | 42 | 475 | 4.619048 | 0.714286 | 0.113402 | 0.134021 | 0.226804 | 0.278351 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027027 | 0.376842 | 475 | 8 | 145 | 59.375 | 0.628378 | 0 | 0 | 0 | 0 | 0 | 0.414737 | 0.058947 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
a6a06d990fb107f6d173990c09e667ce4615e46e | 283 | py | Python | backend0bit/serializers.py | piotrb5e3/0bit-backend | 1df105ab57d0ddde5868ae4b03b359e0c3f00b13 | [
"Apache-2.0"
] | null | null | null | backend0bit/serializers.py | piotrb5e3/0bit-backend | 1df105ab57d0ddde5868ae4b03b359e0c3f00b13 | [
"Apache-2.0"
] | 1 | 2020-06-05T19:21:04.000Z | 2020-06-05T19:21:04.000Z | backend0bit/serializers.py | piotrb5e3/0bit-backend | 1df105ab57d0ddde5868ae4b03b359e0c3f00b13 | [
"Apache-2.0"
] | null | null | null | from rest_framework import serializers
from backend0bit.models import StaticPage
class StaticPageSerializer(serializers.ModelSerializer):
    class Meta:
        model = StaticPage
        fields = ('id', 'title', 'url', 'contents', 'order')
        read_only_fields = ('order',)
| 25.727273 | 60 | 0.696113 | 28 | 283 | 6.928571 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004405 | 0.19788 | 283 | 10 | 61 | 28.3 | 0.85022 | 0 | 0 | 0 | 0 | 0 | 0.09894 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.285714 | 0 | 0.571429 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
a6a1b6d7205087acf58bd99258d54c81c8cb5272 | 6,538 | py | Python | test_routes.py | Varini/CHALLENGE_SFNewsAPI | 439bf7dfc10f83466786f6bd054afc8ab07b1b58 | [
"MIT"
] | null | null | null | test_routes.py | Varini/CHALLENGE_SFNewsAPI | 439bf7dfc10f83466786f6bd054afc8ab07b1b58 | [
"MIT"
] | null | null | null | test_routes.py | Varini/CHALLENGE_SFNewsAPI | 439bf7dfc10f83466786f6bd054afc8ab07b1b58 | [
"MIT"
] | null | null | null | from operator import index
from fastapi.testclient import TestClient
from index import app
client = TestClient(app)
def test_get_item():
    response = client.get("/articles/10000")
    assert response.status_code == 200
    assert response.json() == {
        "id": 10000,
        "title": "NASA TV to Air Launch of Space Station Module, Departure of Another",
        "url": "http://www.nasa.gov/press-release/nasa-tv-to-air-launch-of-space-station-module-departure-of-another",
        "imageUrl": "https://www.nasa.gov/sites/default/files/thumbnails/image/mlm_at_baikonur.jpg?itok=SrfC6Yzm",
        "newsSite": "NASA",
        "summary": "NASA will provide live coverage of a new Russian science module’s launch and automated docking to the International Space Station, and the undocking of another module that has been part of the orbital outpost for the past 20 years.",
        "publishedAt": "2021-07-13T20:22:00.000Z",
        "updatedAt": "2021-07-13T20:22:06.617Z",
        "featured": False,
        "launches": [{"id": "27fd5d5a-6935-4697-98b4-b409f029e2f0", "provider": "Launch Library 2"}],
        "events": [{"id": 268, "provider": "Launch Library 2"}],
    }


def test_get_invalid_id():
    id = 9999999999
    response = client.get(f"/articles/{id}")
    assert response.status_code == 404
    assert response.json() == {"detail": f"Article ID: {id} not found"}


def test_get_non_integer_id():
    response = client.get("/articles/abc")
    assert response.status_code == 422
    assert response.json()["detail"][0]["msg"] == "value is not a valid integer"


def test_create_item():
    response = client.post(
        "/articles/",
        json={
            "title": "No commercial crew test flights expected this year",
            "url": "https://spaceflightnow.com/2018/10/06/no-commercial-crew-test-flights-expected-this-year/",
            "imageUrl": "https://mk0spaceflightnoa02a.kinstacdn.com/wp-content/uploads/2018/10/ccp-countdown-header-326x245.jpg",
            "newsSite": "Spaceflight Now",
            "summary": "",
            "publishedAt": "2018-10-05T22:00:00.000Z",
            "updatedAt": "2021-05-18T13:43:19.589Z",
            "featured": False,
            "launches": [],
            "events": [],
        },
    )
    request = client.get("/articles/?page_size=1")
    id = request.json()[0]["id"]
    assert response.status_code == 201
    assert response.json() == {
        "id": id,
        "title": "No commercial crew test flights expected this year",
        "url": "https://spaceflightnow.com/2018/10/06/no-commercial-crew-test-flights-expected-this-year/",
        "imageUrl": "https://mk0spaceflightnoa02a.kinstacdn.com/wp-content/uploads/2018/10/ccp-countdown-header-326x245.jpg",
        "newsSite": "Spaceflight Now",
        "summary": "",
        "publishedAt": "2018-10-05T22:00:00.000Z",
        "updatedAt": "2021-05-18T13:43:19.589Z",
        "featured": False,
        "launches": [],
        "events": [],
    }


def test_update_item():
    request = client.get("/articles/?page_size=1")
    id = request.json()[0]["id"]
    response = client.put(
        f"/articles/{id}",
        json={
            "title": "Altered Title",
            "url": "www.domain.com",
            "imageUrl": "IMAGE.img",
            "newsSite": "",
            "summary": "",
            "publishedAt": "2018-10-05T22:00:00.000Z",
            "updatedAt": "2021-05-18T13:43:19.589Z",
            "featured": True,
            "launches": [{"id": "Altered", "provider": "Altered Launch"}],
            "events": [{"id": 1037, "provider": "Altered Provider"}],
        },
    )
    assert response.status_code == 200
    assert response.json() == {
        "id": id,
        "title": "Altered Title",
        "url": "www.domain.com",
        "imageUrl": "IMAGE.img",
        "newsSite": "",
        "summary": "",
        "publishedAt": "2018-10-05T22:00:00.000Z",
        "updatedAt": "2021-05-18T13:43:19.589Z",
        "featured": True,
        "launches": [{"id": "Altered", "provider": "Altered Launch"}],
        "events": [{"id": 1037, "provider": "Altered Provider"}],
    }


def test_update_invalid_id():
    id = 9999999999
    response = client.put(f"/articles/{id}", json={"title": "Altered Title"})
    assert response.status_code == 404
    assert response.json() == {"detail": f"Article ID: {id} not found"}


def test_update_non_integer_id():
    response = client.put("/articles/abc")
    assert response.status_code == 422
    assert response.json()["detail"][0]["msg"] == "value is not a valid integer"


def test_delete_item():
    request = client.get("/articles/?page_size=1")
    id = request.json()[0]["id"]
    response = client.delete(f"/articles/{id}")
    assert response.status_code == 200
    assert response.json() == f"Article ID: {id} was deleted."


def test_delete_invalid_id():
    id = 9999999999
    response = client.delete(f"/articles/{id}")
    assert response.status_code == 404
    assert response.json() == {"detail": f"Article ID: {id} not found"}


def test_delete_non_integer_id():
    response = client.delete("/articles/abc")
    assert response.status_code == 422
    assert response.json()["detail"][0]["msg"] == "value is not a valid integer"
| 40.608696 | 277 | 0.487611 | 634 | 6,538 | 4.958991 | 0.258675 | 0.089059 | 0.063613 | 0.076336 | 0.748728 | 0.716285 | 0.68416 | 0.68416 | 0.677481 | 0.647583 | 0 | 0.081903 | 0.376262 | 6,538 | 160 | 278 | 40.8625 | 0.689063 | 0 | 0 | 0.640288 | 0 | 0.05036 | 0.359437 | 0.05231 | 0 | 0 | 0 | 0 | 0.143885 | 1 | 0.071942 | false | 0 | 0.021583 | 0 | 0.093525 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
a6a650997a0e40d341164565e71707c592c13050 | 1,372 | py | Python | src/producto18.py | alonsosilvaallende/Datos-COVID19 | a52b586ce0c9eb41a3f7443a164402124ffef504 | [
"MIT"
] | null | null | null | src/producto18.py | alonsosilvaallende/Datos-COVID19 | a52b586ce0c9eb41a3f7443a164402124ffef504 | [
"MIT"
] | null | null | null | src/producto18.py | alonsosilvaallende/Datos-COVID19 | a52b586ce0c9eb41a3f7443a164402124ffef504 | [
"MIT"
] | null | null | null | '''
MIT License
Copyright (c) 2020 Sebastian Cornejo
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
'''
import csv
import pandas as pd
from os import listdir
# producto 18: tasa de incidencia total e histórica
# (producto 18: total and historical incidence rate)
df = pd.read_csv('../input/Tasadeincidencia.csv')
df.dropna(how='all', inplace=True)
df.to_csv('../output/producto18/TasadeIncidencia.csv') | 41.575758 | 78 | 0.78207 | 211 | 1,372 | 5.037915 | 0.587678 | 0.082785 | 0.024459 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00695 | 0.161079 | 1,372 | 33 | 79 | 41.575758 | 0.916594 | 0.819242 | 0 | 0 | 0 | 0 | 0.3361 | 0.290456 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.428571 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
a6a93fa998370452499035e80f03dcb57488747c | 516 | py | Python | listwords.py | corbinmcneill/bonkbot | 5d355d9b8d2377176fc8ec317a7cba03ff3bb37a | [
"MIT"
] | 1 | 2020-12-07T06:58:38.000Z | 2020-12-07T06:58:38.000Z | listwords.py | corbinmcneill/bonkbot | 5d355d9b8d2377176fc8ec317a7cba03ff3bb37a | [
"MIT"
] | 1 | 2021-01-06T06:36:11.000Z | 2021-01-06T09:06:15.000Z | listwords.py | corbinmcneill/bonkbot | 5d355d9b8d2377176fc8ec317a7cba03ff3bb37a | [
"MIT"
] | 2 | 2021-01-06T06:34:42.000Z | 2021-01-28T08:41:40.000Z | #!/usr/bin/env python3
import discord
import config
import util
from functools import reduce
from handler import Handler
class ListWordsHandler(Handler):
    name = "listwords"

    async def message_handler(self, message, jail, bonkbot):
        print("Starting listwords handler")
        if self.cf.get("list_words_trigger_phrase") in message.content.lower() and util.is_mentioned(message, bonkbot):
            await message.channel.send(util.list_trigger_words())
            return True
        return False
| 28.666667 | 119 | 0.718992 | 65 | 516 | 5.6 | 0.661538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002415 | 0.197674 | 516 | 17 | 120 | 30.352941 | 0.876812 | 0.040698 | 0 | 0 | 0 | 0 | 0.121704 | 0.05071 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.384615 | 0 | 0.692308 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
a6ca5dc72a2a53aa7dccb88c27d8a3fa44186150 | 1,175 | py | Python | tests/conftest.py | dclimber/python-kzt-exchangerates | 60eca52b776f889848d631be43c843bd9bd50d06 | [
"MIT"
] | 1 | 2021-05-15T15:19:00.000Z | 2021-05-15T15:19:00.000Z | tests/conftest.py | dclimber/python-kzt-exchangerates | 60eca52b776f889848d631be43c843bd9bd50d06 | [
"MIT"
] | null | null | null | tests/conftest.py | dclimber/python-kzt-exchangerates | 60eca52b776f889848d631be43c843bd9bd50d06 | [
"MIT"
] | null | null | null | import pytest
from pathlib import Path
import xml.etree.ElementTree as ET
@pytest.fixture
def date_for_tests():
    return '24.04.2013'


@pytest.fixture
def result_date():
    return '2013-04-24'


@pytest.fixture
def latest_url():
    return 'https://nationalbank.kz/rss/rates_all.xml'


@pytest.fixture
def dated_url(date_for_tests):
    return 'https://nationalbank.kz/rss/get_rates.cfm?fdate={}'.format(
        date_for_tests)


@pytest.fixture
def sample_rss():
    # rss file for 2013-04-24 (date for tests)
    file = Path("sample_rss.xml")
    text = file.read_text()
    rss = ET.fromstring(text)
    return rss


@pytest.fixture
def supported_currencies():
    # currencies from sample_rss.xml
    return ['AUD', 'GBP', 'BYR', 'BRL', 'HUF', 'HKD', 'DKK', 'AED', 'USD',
            'EUR', 'CAD', 'CNY', 'KWD', 'KGS', 'LVL', 'LTL', 'MYR', 'MDL',
            'NOK', 'PLN', 'SAR', 'RUB', 'XDR', 'SGD', 'TJS', 'TRY', 'UZS',
            'UAH', 'CZK', 'SEK', 'CHF', 'ZAR', 'KRW', 'JPY', 'KZT']


@pytest.fixture
def target_currencies():
    return ['AUD', 'GBP', 'DKK', 'AED', 'USD', 'EUR', 'CAD', 'CNY', 'KWD']
| 24.479167 | 79 | 0.577872 | 155 | 1,175 | 4.270968 | 0.490323 | 0.137462 | 0.169184 | 0.054381 | 0.148036 | 0.063444 | 0.063444 | 0 | 0 | 0 | 0 | 0.026549 | 0.230638 | 1,175 | 47 | 80 | 25 | 0.705752 | 0.060426 | 0 | 0.225806 | 0 | 0 | 0.233424 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.225806 | false | 0 | 0.096774 | 0.193548 | 0.548387 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
a6df1b58a3503a041b065c001a210524d030e7a9 | 493 | py | Python | src/apps/account/test_utils.py | plitzenberger/graphene-auth-examples | 93694f10977feb35f73ffe1f84dea631fd6d17dc | [
"MIT"
] | 71 | 2017-06-09T13:02:15.000Z | 2021-06-15T20:00:38.000Z | src/apps/account/test_utils.py | plitzenberger/graphene-auth-examples | 93694f10977feb35f73ffe1f84dea631fd6d17dc | [
"MIT"
] | 13 | 2017-07-11T16:08:40.000Z | 2019-07-01T04:33:17.000Z | src/apps/account/test_utils.py | plitzenberger/graphene-auth-examples | 93694f10977feb35f73ffe1f84dea631fd6d17dc | [
"MIT"
] | 14 | 2017-05-18T16:27:30.000Z | 2019-09-20T12:57:17.000Z | import pytest
from django.core import mail
from test_fixtures.users import user
from .utils import send_activation_email, send_password_reset_email
@pytest.mark.django_db
def test_send_activation_email(user, rf):
    request = rf.request()
    send_activation_email(user, request)
    assert len(mail.outbox) == 1


@pytest.mark.django_db
def test_send_password_reset_email(user, rf):
    request = rf.request()
    send_password_reset_email(user, request)
    assert len(mail.outbox) == 1
| 23.47619 | 67 | 0.770791 | 73 | 493 | 4.931507 | 0.356164 | 0.1 | 0.141667 | 0.183333 | 0.605556 | 0.533333 | 0.533333 | 0.2 | 0 | 0 | 0 | 0.004739 | 0.144016 | 493 | 20 | 68 | 24.65 | 0.848341 | 0 | 0 | 0.428571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 1 | 0.142857 | false | 0.214286 | 0.285714 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
472afc21ea7eeaee85fc2d73a6095f40e07d89e4 | 22,482 | py | Python | oompa/tracking/github/GitHubNeo.py | sjtsp2008/oompa | 2dfdecba192c408c0463da27e0b5859ef9ce3db6 | [
"Apache-2.0"
] | 2 | 2016-02-23T00:58:11.000Z | 2017-06-14T15:39:22.000Z | oompa/tracking/github/GitHubNeo.py | sjtsp2008/oompa | 2dfdecba192c408c0463da27e0b5859ef9ce3db6 | [
"Apache-2.0"
] | 3 | 2015-06-21T11:13:50.000Z | 2015-06-21T13:27:03.000Z | oompa/tracking/github/GitHubNeo.py | sjtsp2008/oompa | 2dfdecba192c408c0463da27e0b5859ef9ce3db6 | [
"Apache-2.0"
] | null | null | null | #
# GitHubNeo.py
#
# note: i tried using bulbs, which would be easier to
# migrate to other tinkerpop graph engines, but had
# trouble authenticating
#
#
"""
package oompa.tracking.github
experiments with working on github graphs in neo
uses py2neo
TODO: i think bulb seems to have better object modeling (but doesn't work for me)
"""
from datetime import timedelta
from datetime import datetime
import py2neo
from oompa.tracking.github import github_utils
Node = py2neo.Node
Relationship = py2neo.Relationship
"""
misc neo notes:
visit: http://localhost:7474/
default user - neo4j
part of neo walkthrough
CREATE (ee:Person { name: "Emil", from: "Sweden", klout: 99 })
() means "node"
{} surround attrs
Person is the label
MATCH (ee:Person) WHERE ee.name = "Emil" RETURN ee;
complex creation:
MATCH (ee:Person) WHERE ee.name = "Emil"
CREATE (js:Person { name: "Johan", from: "Sweden", learn: "surfing" }),
(ir:Person { name: "Ian", from: "England", title: "author" }),
(rvb:Person { name: "Rik", from: "Belgium", pet: "Orval" }),
(ally:Person { name: "Allison", from: "California", hobby: "surfing" }),
(ee)-[:KNOWS {since: 2001}]->(js),(ee)-[:KNOWS {rating: 5}]->(ir),
(js)-[:KNOWS]->(ir),(js)-[:KNOWS]->(rvb),
(ir)-[:KNOWS]->(js),(ir)-[:KNOWS]->(ally),
(rvb)-[:KNOWS]->(ally)
pattern matching:
MATCH (ee:Person)-[:KNOWS]-(friends)
WHERE ee.name = "Emil" RETURN ee, friends
Pattern matching can be used to make recommendations. Johan is
learning to surf, so he may want to find a new friend who already
does:
MATCH (js:Person)-[:KNOWS]-()-[:KNOWS]-(surfer)
WHERE js.name = "Johan" AND surfer.hobby = "surfing"
RETURN DISTINCT surfer
() empty parenthesis to ignore these nodes
DISTINCT because more than one path will match the pattern
surfer will contain Allison, a friend of a friend who surfs
"""
def parseFreshness(freshness):
    """
    TODO: not fully general/bulletproof yet
    """
    pieces = freshness.split()

    days = 0
    hours = 0
    minutes = 0

    for piece in pieces:
        unit = piece[-1]
        value = int(piece[:-1])
        if unit == "d":
            days = value
        elif unit == "h":
            hours = value
        elif unit == "m":
            minutes = value
        else:
            raise Exception("unknown time unit", unit, freshness)
        pass

    freshness_delta = timedelta(days = days, hours = hours, minutes = minutes)

    return freshness_delta
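The "Nd Nh Nm" grammar handled by parseFreshness behaves as sketched below; this is a self-contained restatement of the same logic (the snake_case name is local to the example, not part of this module):

```python
from datetime import timedelta

def parse_freshness(freshness):
    """Same parsing rule as parseFreshness above: e.g. "3d 4h" -> timedelta."""
    days = hours = minutes = 0
    for piece in freshness.split():
        unit, value = piece[-1], int(piece[:-1])
        if unit == "d":
            days = value
        elif unit == "h":
            hours = value
        elif unit == "m":
            minutes = value
        else:
            raise ValueError("unknown time unit: %r" % unit)
    return timedelta(days=days, hours=hours, minutes=minutes)

print(parse_freshness("3d 4h"))  # 3 days, 4:00:00
```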
class GitHubNeo:
"""
interface for lazy github graph in neo4j
- update
- list
- track
- discover
"""
# ISO format
_dtFormat = "%Y-%m-%dT%H:%M:%S"
    def __init__(self, config, githubHelper):

        # XXX get from config
        neo_url = config.get("neo.github.url")
        neo_user = config.get("neo.github.user")
        neo_passwd = config.get("neo.github.passwd")

        # TODO: derive from the url
        neo_host = "localhost:7474"

        # TODO: if freshness, parse it to a real latency (e.g., "4d" -> seconds)
        self.freshness = config.get("neo.github.freshness")
        if self.freshness:
            self.freshness = parseFreshness(self.freshness)
            pass

        py2neo.authenticate(neo_host, neo_user, neo_passwd)
        self.graph = py2neo.Graph(neo_url)

        self.githubHelper = githubHelper

        self._establishNeoSchema()

        return
    def _establishNeoSchema(self):
        """
        set up constraints on relationships and nodes in neo graph

        note: i believe that schema constraints are volatile, per-session.
              if i don't apply these constraints, on a graph that had them
              in previous sessions, i can violate the previous constraints
        """
        schema = self.graph.schema

        try:
            schema.create_uniqueness_constraint("User", "name")
            schema.create_uniqueness_constraint("Organization", "name")
            schema.create_uniqueness_constraint("Repository", "name")
        # except py2neo.error.ConstraintViolationException:
        except:
            # already established
            return

        # TODO: User
        # TODO: Organization
        # TODO: relationships

        return
    def query(self, query):
        """
        submit arbitrary cypher-syntax query to graph

        query is a string
        """
        for record in self.graph.cypher.execute(query):
            yield record

        return
    def getNodeType(self, node):
        """
        return the type of the given node
        """
        # XXX still figuring out LabelSet - don't know how to get values as list
        return node.labels.copy().pop()
    def getNode(self, nodeName, nodeType = None):
        """
        returns a neo Node

        TODO: if nodeType specified, use it (esp User vs Organization)
        """
        # print("NT: %s" % nodeType)

        typeSpec = ""

        # XXX figure out best way to support this
        # if nodeType is not None:
        #     typeSpec = ":%s" % nodeType
        #     pass

        # XXX this does not feel like "The Best Way" to simply get a node
        query = 'MATCH (node {name:"%s"}%s) RETURN node' % ( nodeName, typeSpec )
        # print("  Q: %r" % query)

        records = list(self.query(query))

        if not records:
            return None

        if len(records) > 1:
            print("XXX getNode() plural records: %s %s (%r)" % ( nodeType, nodeType, typeSpec ))
            for record in records:
                print("  R: %r" % ( record, ))
                pass
            xxx
            pass

        record = records[0]

        return self.graph.hydrate(record[0])
    def _now(self):
        return datetime.utcnow().replace(microsecond = 0)

    def _parseISODTStr(self, dtStr):
        return datetime.strptime(dtStr, self._dtFormat)
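The _dtFormat round-trip used by _now/_parseISODTStr can be checked in isolation; the snippet below uses the same format string (variable names are local to the example):

```python
from datetime import datetime

DT_FORMAT = "%Y-%m-%dT%H:%M:%S"   # same value as GitHubNeo._dtFormat

stamp = datetime(2015, 6, 21, 11, 13, 50)
iso = stamp.isoformat()           # no microseconds, so isoformat() matches DT_FORMAT
print(iso)                        # 2015-06-21T11:13:50
assert datetime.strptime(iso, DT_FORMAT) == stamp
```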
    def createNode(self, name, nodeType):

        # we don't want the microsecond junk in time string
        now = self._now()

        node = Node(nodeType, name = name, createdDT = now.isoformat())
        self.graph.create(node)

        return node
    def getOrAddNode(self, name, nodeType = None):

        node = self.getNode(name, nodeType)

        if node is not None:
            # print("# node already in graph")
            return node

        # print("# node not in graph - have to create")

        if nodeType is None:
            for nodeType, name, obj in self.githubHelper.getKindNameAndObject([ name, ]):
                # only one
                break

            if nodeType is None:
                print("  could not determine nodeType for name: %s" % name)
                xxx
                pass
            pass

        return self.createNode(name, nodeType)
    #
    # neo edge relationships for lists "away from" certain types of github objects
    #
    # some list names have an alias, because the list name is confusing
    #
    # TODO: test if simple rule of removing the final "s" works.  that would be simpler
    #       - there are a couple of exceptions
    #
    _relationships = {
        "Repository" : [
            ( "stargazers",           "starred",     "from", ),
            ( "subscribers",          "subscriber",  "from", ),
            ( "contributors",         "contributor", "from", ),
            ( "forks",                "forkOf",      "from", ),
            # ...
        ],
        "User" : [
            ( "followers",            "follows",     "from", ),
            ( "following",            "follows",     "to", ),
            ( "starred_repositories", "starred",     "to", ),
            ( "subscriptions",        "subscriber",  "to", ),
            ( "organizations",        "memberOf",    "to", ),
        ],
    }
    def updateRelationships(self, obj, slot, relationshipLabel, direction, destNode):
        """
        obj is a github3 object (repository, user, organization)
        direction is "from" or "to"
        TODO: support attribute decorators
        Generates a stream of entitySpec tuples.
        """
        graph = self.graph
        print("updateRelationships: %-25s - %-25s %4s - %s" % (
            slot, relationshipLabel, direction, destNode.properties["name"]))
        # XXX need otherNodeLabelGetter - .name, .login, ...
        # determine the neighbor nodeType by slot name
        # TODO: use a dictionary - simpler
        # XXX could also just figure this out from what we get back
        if slot in ["stargazers", "contributors"]:
            neighborNodeType = "User"
        elif slot in ["followers", "following"]:
            neighborNodeType = "User"
        elif slot == "organizations":
            neighborNodeType = "Organization"
        elif slot in ["starred_repositories", "subscriptions", "forks"]:
            neighborNodeType = "Repository"
        elif slot == "subscribers":
            # things can probably subscribe to users or orgs, too;
            # this currently covers just Users subscribed to a Repository
            neighborNodeType = "User"
        else:
            raise ValueError("slot not handled in switch yet - %r" % slot)
        if neighborNodeType == "User":
            nodeNameAttr = "login"
        elif neighborNodeType == "Organization":
            nodeNameAttr = "name"
        elif neighborNodeType == "Repository":
            nodeNameAttr = "full_name"
        else:
            raise ValueError("unknown neighborNodeType - %r" % neighborNodeType)
        # TODO: get all of them, and batch-update
        neighbors = []
        for value in getattr(obj, slot)():
            # value is another github object (User, ...)
            nodeName = getattr(value, nodeNameAttr)
            neighbors.append((neighborNodeType, nodeName))
        # TODO: batch-update
        neighbors = sorted(neighbors, key=lambda _tuple: _tuple[1])
        for neighborNodeType, nodeName in neighbors:
            srcNode = Node(neighborNodeType, name=nodeName)
            if direction == "from":
                relationship = Relationship(srcNode, relationshipLabel, destNode)
            else:
                relationship = Relationship(destNode, relationshipLabel, srcNode)
            # XXX try/except is sloppy - merge vs create is not sorted out yet
            # (graph.merge_one(...) might be the right call here)
            try:
                graph.create(relationship)
            except Exception:
                # already exists
                pass
            yield (neighborNodeType, nodeName)
        # need to flush anything?
        return
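The two if/elif ladders above could be collapsed into one lookup table, as the TODOs suggest. A minimal sketch, with a hypothetical `SLOT_INFO` table mapping each github slot name to its neighbor node type and name attribute:

```python
# Hypothetical lookup table replacing the slot -> nodeType / nameAttr ladders.
SLOT_INFO = {
    "stargazers":           ("User", "login"),
    "contributors":         ("User", "login"),
    "followers":            ("User", "login"),
    "following":            ("User", "login"),
    "subscribers":          ("User", "login"),
    "organizations":        ("Organization", "name"),
    "starred_repositories": ("Repository", "full_name"),
    "subscriptions":        ("Repository", "full_name"),
    "forks":                ("Repository", "full_name"),
}

def lookup_slot(slot):
    # one dictionary lookup instead of two elif chains
    try:
        return SLOT_INFO[slot]
    except KeyError:
        raise ValueError("slot not handled yet - %r" % slot)

print(lookup_slot("forks"))  # ('Repository', 'full_name')
```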
    def _getRelationshipTuples(self, nodeType, relationships=None):
        """
        TODO: memoize
        """
        # XXX still working out the best way to normalize github
        # relationships to neo relationships
        for relationshipTuple in self._relationships[nodeType]:
            listName, relationshipLabel, direction = relationshipTuple
            if relationships and relationshipLabel not in relationships:
                continue
            yield relationshipTuple
        return
    def updateGithubObj(self, githubObj, node, relationships=None):
        """
        githubObj is a github3 Repository, User, or Organization
        node is a py2neo Node
        """
        # starting to generalize
        nodeType = self.getNodeType(node)
        # note that full_name is something that i attach
        if nodeType == "Repository":
            name = githubObj.full_name
        else:
            name = githubObj.name
        # TODO: want to report different things for different objects -
        # a user needs login and name
        print("GitHubNeo.updateGithubObj(): %s" % name.encode("utf8"))
        relationshipTuples = list(self._getRelationshipTuples(nodeType, relationships))
        # TODO: *local* LRU cache for user and repo - may also help for
        # contributes, subscribes. make sure we only pay github points once
        for listName, relationshipLabel, direction in relationshipTuples:
            for entitySpec in self.updateRelationships(githubObj,
                                                       listName,
                                                       relationshipLabel,
                                                       direction,
                                                       node):
                yield entitySpec
        node.properties["updatedDT"] = datetime.utcnow().replace(microsecond=0)
        node.push()
        # what else should go in the graph?
        #   .parent, .source, .description, .homepage, .language,
        #   .last_modified, .updated_at, .default_branch, .owner (a User
        #   object), .open_issues_count ???, .size (what are the units?)
        #   branches(), code_frequency(), comments(), commit_activity(),
        #   commits(), contributor_statistics(), events(), issues(),
        #   milestones(), notifications(), pull_requests(), releases(),
        #   tags(), teams()
        # XXX collaborators() is blocked - requires authentication
        # deployments(), hooks(), keys(), refs(), statuses() ???
        # labels() ??? i think these are the tags used in issues/planning
        # github_utils.dumpList(obj, ...) works for "contributors", "forks",
        # "languages", "subscribers"
        # i think that tree is some sort of file tree (i was hoping it was
        # fork ancestry): tree = obj.tree()
        # weekly_commit_count() returns something like:
        #   { "Last-Modified": "",
        #     "all": [0, 0, 1, 1, ..., (52 weeks?)],
        #     "owner": [0, 0, 0, 0, ...] }
        return
    def updateUser(self, githubObj, node):
        """
        githubObj is a github3 User
        TODO: refactor - merge with updateRepository - this is just the
        generic flow specialized for users
        """
        graph = self.graph
        print("GitHubNeo.updateUser(): %s - %r" % (githubObj.login, githubObj.name))
        nodeType = "User"
        node = Node(nodeType, name=githubObj.login)
        # use merge_one to create it if it does not already exist
        # XXX merge_one does not persist the node?
        # graph.merge_one(node)
        graph.create(node)
        for listName, relationshipLabel, direction in self._relationships[nodeType]:
            for entitySpec in self.updateRelationships(githubObj, listName,
                                                       relationshipLabel,
                                                       direction, node):
                yield entitySpec
        return
    def updateOrganization(self, name, org):
        """
        org is a github3 Organization
        XXX not implemented yet - just dumps a few fields for exploration
        """
        print("GitHubNeo.updateOrg(): %s" % name)
        print("  bio: %s" % org.bio)
        print("  company: %s" % org.company)
        print("  location: %s" % org.location)
        github_utils.dumpList(org, "public_members")
        github_utils.dumpList(org, "repositories")
        raise NotImplementedError("updateOrganization is not finished")
    def _nodeFreshEnough(self, node):
        updatedDTStr = node.properties.get("updatedDT")
        if updatedDTStr:
            age = self._now() - self._parseISODTStr(updatedDTStr)
            if age <= self.freshness:
                return True
        return False
    def _getCachedNeighbors(self, node, relationships=None):
        nodeType = self.getNodeType(node)
        if relationships is None:
            # self._relationships[nodeType] is a list of
            # (githubSlot, neoRelationLabel, direction) tuples
            neoRelationships = []
            for relationshipInfo in self._relationships[nodeType]:
                if isinstance(relationshipInfo, tuple):
                    neoRelationships.append(relationshipInfo[1])
                else:
                    neoRelationships.append(relationshipInfo)
        else:
            # map the requested relationships onto neo relationships
            neoRelationships = relationships
        for neoRelationship in neoRelationships:
            neighbors = []
            for rel in node.match(neoRelationship):
                if node == rel.start_node:
                    neighborNode = rel.end_node
                else:
                    neighborNode = rel.start_node
                # XXX expensive - we already know this from the neoRelationship
                neighborNodeType = self.getNodeType(neighborNode)
                neighbors.append((neighborNodeType,
                                  neighborNode.properties["name"], 1))
            if neighbors:
                print("  %5d %s" % (len(neighbors), neoRelationship))
            # XXX optional - the user may want to sort by something else
            # (added date - but that's not supported yet)
            neighbors = sorted(neighbors, key=lambda _tuple: _tuple[1])
            for neighborTuple in neighbors:
                yield neighborTuple
        return
    def update(self, entitySpecs, numHops=None, relationships=None):
        """update the edges/relationships around the specified node names,
        creating the nodes if they don't already exist.
        entitySpecs is a list of github names - Repository, User,
        Organization. they can be type-hinted - org:..., user:...,
        repo:... - or else we guess, using the helper.
        if a freshness constraint is specified, what's in the cache is
        used when it is new enough (to save github points).
        TODO: maybe allow specifying that only certain relationship types
        should be updated
        """
        if numHops is None:
            numHops = 1
        hop = 1
        # list of entities left to check, and their hop
        # (a seed is at hop 1, versus 0)
        boundary = []
        for entitySpec in entitySpecs:
            boundary.append((entitySpec, hop))
        helper = self.githubHelper
        freshness = self.freshness
        while boundary:
            entitySpec, _hop = boundary[0]
            boundary = boundary[1:]
            print("GitHubNeo.update: %s %5d %5d %s" %
                  (_hop, len(boundary), helper.checkRatePointsLeft(), entitySpec))
            nodeType = None
            extra = None
            if isinstance(entitySpec, tuple):
                nodeType = entitySpec[0]
                name = entitySpec[1]
                if len(entitySpec) > 2:
                    extra = entitySpec[2]
            else:
                name = entitySpec
            node = self.getOrAddNode(name, nodeType)
            nodeType = self.getNodeType(node)
            if freshness is not None and self._nodeFreshEnough(node):
                neighbors = self._getCachedNeighbors(node, relationships=relationships)
            else:
                githubObj = helper.getGithubObject(name, nodeType)
                neighbors = self.updateGithubObj(githubObj, node,
                                                 relationships=relationships)
            # need to drain the stream, even if we don't add the results
            # to the boundary
            neighbors = list(neighbors)
            if _hop < numHops:
                for _entitySpec in neighbors:
                    boundary.append((_entitySpec, _hop + 1))
        return
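The `update()` method above is a hop-limited breadth-first crawl: a FIFO boundary of `(entity, hop)` pairs, expanded only while the hop count is below `numHops`. A minimal standalone sketch of that traversal, with neighbor lookup faked by a plain dictionary:

```python
# Hop-limited BFS over a toy adjacency dict; mirrors the boundary handling
# in GitHubNeo.update() (seeds start at hop 1, not 0).
def crawl(seeds, neighbors_of, num_hops):
    boundary = [(seed, 1) for seed in seeds]
    visited_order = []
    while boundary:
        entity, hop = boundary.pop(0)
        visited_order.append(entity)
        if hop < num_hops:
            for neighbor in neighbors_of.get(entity, []):
                boundary.append((neighbor, hop + 1))
    return visited_order

graph = {"repo": ["alice", "bob"], "alice": ["org"]}
print(crawl(["repo"], graph, 2))  # ['repo', 'alice', 'bob']
```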
# --- webServ/app.py (repo: harryparkdotio/dabdabrevolution, license: MIT) ---
from flask import Flask, Response, send_from_directory
import random, time

app = Flask(__name__, static_folder='www')


@app.route('/')
def index():
    return ''


@app.route('/stream')
def stream():
    def event():
        while True:
            # server-sent events: each message is "data: ..." followed by
            # a blank line (the backslashes were lost in the original dump)
            yield "data: " + random.choice(['a', 'b', 'c', 'd']) + "\n\n"
            with app.app_context():
                time.sleep(1)
    return Response(event(), mimetype="text/event-stream")


@app.route('/static/<path:path>')
def static_f(path):
    return app.send_static_file(path)


if __name__ == '__main__':
    app.run(debug=True)
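The `/stream` handler above emits server-sent events, where each event is a `data:` line terminated by a blank line. A tiny formatter makes that framing explicit (a sketch, not part of the original app):

```python
# SSE framing: "data: <payload>\n\n" per the text/event-stream format.
def sse_frame(payload):
    return "data: " + payload + "\n\n"

print(sse_frame("ping"))
```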
# --- SRC/Chapter_03-When-Object-Are-Alike/supplier.py (repo: eminemence/python3_OOPs, license: MIT) ---
import contact

class Supplier(contact.Contact):
    def order(self, order):
        print(
            "If this were a real system we would send "
            "{} order to {}".format(order, self.name)
        )
# --- main.py (repo: NChechulin/telegram-renderer-bot, license: MIT) ---
"""Main file which starts the bot and sets all of the parameters."""
from bot import Bot

if __name__ == '__main__':
    bot = Bot('token.txt')
    bot.start()
# --- Week 3/Python Track/Permituation.py (repo: Dawit-Getachew/A2SV_Practice, license: MIT) ---
# Enter your code here. Read input from STDIN. Print output to STDOUT
from itertools import permutations

s1 = input().split()
S2 = sorted(tuple(s1[0]))
out = tuple(permutations(S2, int(s1[1])))
for i in out:
    print("".join(i))
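For a fixed input instead of stdin, the program above behaves like this; sorting the characters first makes the permutations come out in lexicographic order:

```python
from itertools import permutations

# same logic as above, with a hard-coded input ("bac 2")
word, k = "bac", 2
results = ["".join(p) for p in permutations(sorted(word), k)]
print(results)  # ['ab', 'ac', 'ba', 'bc', 'ca', 'cb']
```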
# --- Software 1/Practical/Week 09/Practical 14/vector.py (repo: KristoffLiu/YorkCSSolution, license: Apache-2.0) ---
class Vector:
    # exercise 01
    def __init__(self, inputlist):
        # keep our own copy of the components (the original assigned to a
        # local _vector, leaving self._vector empty)
        self._vector = list(inputlist)

    # exercise 02
    def __str__(self):
        return "<" + str(self._vector).strip("[]") + ">"

    # exercise 03
    def dim(self):
        return len(self._vector)

    # exercise 04
    def get(self, index):
        return self._vector[index]

    def set(self, index, value):
        self._vector[index] = value

    def scalar_product(self, scalar):
        return [scalar * x for x in self._vector]

    # exercise 05
    def add(self, other_vector):
        if not isinstance(other_vector, Vector):
            raise TypeError
        elif self.dim() != other_vector.dim():
            raise ValueError
        else:
            # elementwise sum (the original mistakenly reused scalar_product)
            return [x + y for x, y in zip(self._vector, other_vector._vector)]

    # exercise 06
    def equals(self, other_vector):
        if self.dim() != other_vector.dim():
            return False
        for i in range(self.dim()):
            if self._vector[i] != other_vector._vector[i]:
                return False
        return True
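The elementwise operations these Vector exercises implement can be sketched directly on plain lists with `zip` and comprehensions, which is what the methods above boil down to:

```python
# Plain-list versions of scalar product and vector addition.
v1, v2 = [1, 2, 3], [4, 5, 6]
scaled = [2 * x for x in v1]              # scalar product by 2
summed = [x + y for x, y in zip(v1, v2)]  # elementwise vector addition
print(scaled, summed)  # [2, 4, 6] [5, 7, 9]
```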
# --- Stomp/Utils/util.py (repo: phan91/STOMP_agilis, license: MIT) ---
import re

def replace_all(repls, s):
    """
    Applies the replacements described in the repls dictionary to the
    input string.
    :param repls: dictionary mapping substrings to their replacements
    :param s: the string to be changed (renamed from `str` to avoid
        shadowing the builtin)
    :return: the changed string
    """
    # build one alternation pattern from all (escaped) keys, then look up
    # each match in the dictionary
    return re.sub('|'.join(re.escape(key) for key in repls.keys()),
                  lambda m: repls[m.group(0)], s)
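A quick usage check of the pattern-joining trick above (the function is re-stated so this snippet runs on its own):

```python
import re

def replace_all(repls, s):
    # single pass: every dictionary key becomes one alternative in the regex
    pattern = '|'.join(re.escape(key) for key in repls)
    return re.sub(pattern, lambda m: repls[m.group(0)], s)

print(replace_all({"cat": "dog", "blue": "red"}, "the blue cat"))  # the red dog
```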
# --- Configs/UNet_Configs.py (repo: zeeshanalipnhwr/Semantic-Segmentation-Keras, license: MIT) ---
DEPTH = 16  # the number of filters in the first conv layer of the UNet encoder
# Training hyperparameters
BATCHSIZE = 16
EPOCHS = 100
OPTIMIZER = "adam"
# --- Python_codes/palindrome_string/palindrome.py (repo: latedeveloper08/hacktober2021, license: Unlicense) ---
string = input("Enter a string:")
length = len(string)
mid = length // 2
rev = -1
for a in range(mid):
    if string[a] == string[rev]:
        # this pair matches; move inward from the right end
        rev -= 1
    else:
        print(string, "is not a palindrome")
        break
else:
    # the loop ran to completion, so every pair matched
    print(string, "is a palindrome")
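The same check can be written in one line with slicing, which is the idiomatic shortcut the two-pointer loop above emulates:

```python
def is_palindrome(s):
    # a string is a palindrome exactly when it equals its own reverse
    return s == s[::-1]

print(is_palindrome("level"))   # True
print(is_palindrome("python"))  # False
```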
# --- src/authentication/migrations/0003_user_additional_fiels_nullable.py
# (repo: Alirezaja1384/MajazAmooz, license: MIT) ---
# Generated by Django 3.1.7 on 2021-03-27 10:53

from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ("authentication", "0002_auto_20210326_1814"),
    ]

    operations = [
        migrations.AlterField(
            model_name="user",
            name="avatar",
            field=models.ImageField(
                blank=True,
                upload_to="images/avatars",
                verbose_name="تصویر پروفایل",
            ),
        ),
        migrations.AlterField(
            model_name="user",
            name="coins",
            field=models.PositiveIntegerField(
                blank=True, null=True, verbose_name="سکه ها"
            ),
        ),
        migrations.AlterField(
            model_name="user",
            name="diamonds",
            field=models.PositiveIntegerField(
                blank=True, null=True, verbose_name="الماس ها"
            ),
        ),
        migrations.AlterField(
            model_name="user",
            name="scores",
            field=models.PositiveIntegerField(
                blank=True, null=True, verbose_name="امتیاز"
            ),
        ),
    ]
# --- Chapter 3/Q19_Match_output.py (repo: inshaal/CBSE_NCERT_SOLUTIONS, license: Unlicense) ---
''' Q19 Predict the output'''

class Match:
    '''Runs and Wickets'''
    runs = 281
    wickets = 5

    def __init__(self, runs, wickets):
        self.runs = runs
        self.wickets = wickets
        print "Runs scored are : ", runs
        print "Wickets taken are : ", wickets

# instantiate so the __init__ prints in the expected output actually appear
m = Match(281, 5)
print "Match.__doc__ : ", Match.__doc__
print "Match.__name__ : ", Match.__name__
print "Match.__module__ : ", Match.__module__
print "Match.__bases__ : ", Match.__bases__
print "Match.__dict__ : ", Match.__dict__

'''
SOLUTION : This is the output (Python 2) -

Runs scored are :  281
Wickets taken are :  5
Match.__doc__ :  Runs and Wickets
Match.__name__ :  Match
Match.__module__ :  __main__
Match.__bases__ :  ()
Match.__dict__ :  {'__module__': '__main__', 'runs': 281, '__doc__': 'Runs and Wickets', '__init__': <function __init__ at 0x0398BA70>, 'wickets': 5}
'''
# --- TAR/dataset.py (repo: jiyanggao/CTAP, license: MIT) ---

import numpy as np
from math import sqrt
import os
import random
import pickle


def calculate_IoU(i0, i1):
    # i0 and i1 are (start, end) intervals on the frame axis
    union = (min(i0[0], i1[0]), max(i0[1], i1[1]))
    inter = (max(i0[0], i1[0]), min(i0[1], i1[1]))
    iou = 1.0 * (inter[1] - inter[0]) / (union[1] - union[0])
    return iou
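A quick worked check of the temporal IoU formula (re-stated here so the snippet runs on its own): intervals [0, 10] and [5, 15] intersect over 5 frames and span 15 in union, so the IoU is 1/3.

```python
# Standalone copy of the temporal IoU above, for a worked example.
def calculate_IoU(i0, i1):
    union = (min(i0[0], i1[0]), max(i0[1], i1[1]))
    inter = (max(i0[0], i1[0]), min(i0[1], i1[1]))
    return 1.0 * (inter[1] - inter[0]) / (union[1] - union[0])

print(calculate_IoU((0, 10), (5, 15)))  # 5 / 15 -> 0.333...
```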
'''
A class that handles the training set
'''
class TrainingDataSet(object):
    def __init__(self, flow_feat_dir, appr_feat_dir, clip_gt_path,
                 background_path, batch_size, movie_length_info, ctx_num,
                 central_num, unit_feature_size, unit_size,
                 pos_neg_ratio=10.0):
        self.ctx_num = ctx_num
        self.unit_feature_size = unit_feature_size
        self.unit_size = unit_size
        self.batch_size = batch_size
        self.movie_length_info = movie_length_info
        self.visual_feature_dim = self.unit_feature_size
        self.flow_feat_dir = flow_feat_dir
        self.appr_feat_dir = appr_feat_dir
        self.training_samples = []
        self.central_num = central_num
        print "Reading training data list from "+clip_gt_path+" and "+background_path
        db_size = 0
        with open(clip_gt_path) as f:
            db_size += len(f.readlines())
        with open(background_path) as f:
            db_size += len(f.readlines())
        with open(clip_gt_path) as f:
            for l in f:
                movie_name = l.rstrip().split(" ")[0]
                clip_start = float(l.rstrip().split(" ")[1])
                clip_end = float(l.rstrip().split(" ")[2])
                gt_start = float(l.rstrip().split(" ")[3])
                gt_end = float(l.rstrip().split(" ")[4])
                # snap the ground truth boundaries onto the unit grid
                round_gt_start = np.round(gt_start/unit_size)*self.unit_size+1
                round_gt_end = np.round(gt_end/unit_size)*self.unit_size+1
                self.training_samples.append((movie_name, clip_start, clip_end,
                                              gt_start, gt_end, round_gt_start,
                                              round_gt_end, 1))
        print str(len(self.training_samples))+" positive training samples are read"
        positive_num = len(self.training_samples)*1.0
        with open(background_path) as f:
            for l in f:
                # control the ratio between background samples and
                # positive samples to be pos_neg_ratio : 1
                if random.random() > pos_neg_ratio*positive_num/db_size: continue
                movie_name = l.rstrip().split(" ")[0]
                clip_start = float(l.rstrip().split(" ")[1])
                clip_end = float(l.rstrip().split(" ")[2])
                self.training_samples.append((movie_name, clip_start, clip_end,
                                              0, 0, 0, 0, 0))
        self.num_samples = len(self.training_samples)
        print str(len(self.training_samples))+" training samples are read"
    def calculate_regoffset(self, clip_start, clip_end, round_gt_start, round_gt_end):
        start_offset = (round_gt_start-clip_start)/self.unit_size
        end_offset = (round_gt_end-clip_end)/self.unit_size
        return start_offset, end_offset
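The regression offsets above are measured in multiples of `unit_size`. A standalone illustration, with an assumed `unit_size` of 16 frames and made-up clip/ground-truth boundaries:

```python
# Hypothetical numbers: a 16-frame unit, a proposal [129, 193], and a
# unit-aligned ground truth [145, 177].
unit_size = 16.0
clip_start, clip_end = 129.0, 193.0
round_gt_start, round_gt_end = 145.0, 177.0

start_offset = (round_gt_start - clip_start) / unit_size
end_offset = (round_gt_end - clip_end) / unit_size
print(start_offset, end_offset)  # 1.0 -1.0 (shift start right, end left)
```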
    # Get the central features
    def get_pooling_feature(self, flow_feat_dir, appr_feat_dir, movie_name, start, end):
        swin_step = self.unit_size
        all_feat = np.zeros([0, self.unit_feature_size], dtype=np.float32)
        current_pos = start
        while current_pos < end:
            swin_start = current_pos
            swin_end = swin_start+swin_step
            flow_feat = np.load(flow_feat_dir+movie_name+".mp4"+"_"+str(swin_start)+"_"+str(swin_end)+".npy")
            appr_feat = np.load(appr_feat_dir+movie_name+".mp4"+"_"+str(swin_start)+"_"+str(swin_end)+".npy")
            # L2-normalize each modality before concatenating
            flow_feat = flow_feat/np.linalg.norm(flow_feat)
            appr_feat = appr_feat/np.linalg.norm(appr_feat)
            feat = np.hstack((flow_feat, appr_feat))
            all_feat = np.vstack((all_feat, feat))
            current_pos += swin_step
        pool_feat = all_feat
        return pool_feat
    # Get the past (on the left of the central unit) context features
    def get_left_context_feature(self, flow_feat_dir, appr_feat_dir, movie_name, start, end):
        swin_step = self.unit_size
        all_feat = np.zeros([0, self.unit_feature_size], dtype=np.float32)
        count = 0
        current_pos = start
        context_ext = False
        # walk backwards from the clip start
        while count < self.ctx_num/2:
            swin_start = current_pos-swin_step
            swin_end = current_pos
            if os.path.exists(flow_feat_dir+movie_name+".mp4"+"_"+str(swin_start)+"_"+str(swin_end)+".npy"):
                flow_feat = np.load(flow_feat_dir+movie_name+".mp4"+"_"+str(swin_start)+"_"+str(swin_end)+".npy")
                appr_feat = np.load(appr_feat_dir+movie_name+".mp4"+"_"+str(swin_start)+"_"+str(swin_end)+".npy")
                flow_feat = flow_feat/np.linalg.norm(flow_feat)
                appr_feat = appr_feat/np.linalg.norm(appr_feat)
                feat = np.hstack((flow_feat, appr_feat))
                all_feat = np.vstack((all_feat, feat))
                context_ext = True
            current_pos -= swin_step
            count += 1
        count = 0
        current_pos = start
        # then walk forwards, into the clip itself
        while count < self.ctx_num/2:
            swin_start = current_pos
            swin_end = current_pos+swin_step
            if os.path.exists(flow_feat_dir+movie_name+".mp4"+"_"+str(swin_start)+"_"+str(swin_end)+".npy"):
                flow_feat = np.load(flow_feat_dir+movie_name+".mp4"+"_"+str(swin_start)+"_"+str(swin_end)+".npy")
                appr_feat = np.load(appr_feat_dir+movie_name+".mp4"+"_"+str(swin_start)+"_"+str(swin_end)+".npy")
                flow_feat = flow_feat/np.linalg.norm(flow_feat)
                appr_feat = appr_feat/np.linalg.norm(appr_feat)
                feat = np.hstack((flow_feat, appr_feat))
                all_feat = np.vstack((all_feat, feat))
                context_ext = True
            current_pos += swin_step
            count += 1
        if context_ext:
            pool_feat = all_feat
        else:
            # no left context was found at all
            pool_feat = np.zeros([0, self.unit_feature_size], dtype=np.float32)
        return pool_feat
    # Get the future (on the right of the central unit) context features
    def get_right_context_feature(self, flow_feat_dir, appr_feat_dir, movie_name, start, end):
        swin_step = self.unit_size
        all_feat = np.zeros([0, self.unit_feature_size], dtype=np.float32)
        count = 0
        current_pos = end
        context_ext = False
        # walk forwards from the clip end
        while count < self.ctx_num/2:
            swin_start = current_pos
            swin_end = current_pos+swin_step
            if os.path.exists(flow_feat_dir+movie_name+".mp4"+"_"+str(swin_start)+"_"+str(swin_end)+".npy"):
                flow_feat = np.load(flow_feat_dir+movie_name+".mp4"+"_"+str(swin_start)+"_"+str(swin_end)+".npy")
                appr_feat = np.load(appr_feat_dir+movie_name+".mp4"+"_"+str(swin_start)+"_"+str(swin_end)+".npy")
                flow_feat = flow_feat/np.linalg.norm(flow_feat)
                appr_feat = appr_feat/np.linalg.norm(appr_feat)
                feat = np.hstack((flow_feat, appr_feat))
                all_feat = np.vstack((all_feat, feat))
                context_ext = True
            current_pos += swin_step
            count += 1
        count = 0
        current_pos = end
        # then walk backwards, into the clip itself
        while count < self.ctx_num/2:
            swin_start = current_pos-swin_step
            swin_end = current_pos
            if os.path.exists(flow_feat_dir+movie_name+".mp4"+"_"+str(swin_start)+"_"+str(swin_end)+".npy"):
                flow_feat = np.load(flow_feat_dir+movie_name+".mp4"+"_"+str(swin_start)+"_"+str(swin_end)+".npy")
                appr_feat = np.load(appr_feat_dir+movie_name+".mp4"+"_"+str(swin_start)+"_"+str(swin_end)+".npy")
                flow_feat = flow_feat/np.linalg.norm(flow_feat)
                appr_feat = appr_feat/np.linalg.norm(appr_feat)
                feat = np.hstack((flow_feat, appr_feat))
                all_feat = np.vstack((all_feat, feat))
                context_ext = True
            current_pos -= swin_step
            count += 1
        if context_ext:
            pool_feat = all_feat
        else:
            # no right context was found at all
            pool_feat = np.zeros([0, self.unit_feature_size], dtype=np.float32)
        return pool_feat
    def sample_to_number(self, all_feats, num):
        sampled_feats = np.zeros([num, all_feats.shape[1]], dtype=np.float32)
        if all_feats.shape[0] == 0: return sampled_feats
        if all_feats.shape[0] == num: return all_feats
        for k in range(num):
            sampled_feats[k] = all_feats[all_feats.shape[0]/num*k, :]
        return sampled_feats
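The row selection in `sample_to_number` picks `num` rows at evenly spaced, floor-divided indices. A pure-Python sketch of just that index rule:

```python
# Mirrors the index expression all_feats.shape[0] // num * k above.
def sample_indices(n_rows, num):
    return [n_rows // num * k for k in range(num)]

print(sample_indices(8, 4))   # [0, 2, 4, 6]
print(sample_indices(10, 4))  # [0, 2, 4, 6] (trailing rows are dropped)
```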
    def next_batch(self):
        random_batch_index = random.sample(range(self.num_samples), self.batch_size)
        central_batch = np.zeros([self.batch_size, self.central_num, self.visual_feature_dim])
        left_batch = np.zeros([self.batch_size, self.ctx_num, self.visual_feature_dim])
        right_batch = np.zeros([self.batch_size, self.ctx_num, self.visual_feature_dim])
        label_batch = np.zeros([self.batch_size], dtype=np.int32)
        offset_batch = np.zeros([self.batch_size, 2], dtype=np.float32)
        index = 0
        while index < self.batch_size:
            k = random_batch_index[index]
            movie_name = self.training_samples[k][0]
            if self.training_samples[k][7] == 1:
                clip_start = self.training_samples[k][1]
                clip_end = self.training_samples[k][2]
                round_gt_start = self.training_samples[k][5]
                round_gt_end = self.training_samples[k][6]
                start_offset, end_offset = self.calculate_regoffset(clip_start, clip_end, round_gt_start, round_gt_end)
                featmap = self.get_pooling_feature(self.flow_feat_dir, self.appr_feat_dir, movie_name, clip_start, clip_end)
                left_feat = self.get_left_context_feature(self.flow_feat_dir, self.appr_feat_dir, movie_name, clip_start, clip_end)
                right_feat = self.get_right_context_feature(self.flow_feat_dir, self.appr_feat_dir, movie_name, clip_start, clip_end)
                featmap = self.sample_to_number(featmap, self.central_num)
                right_feat = self.sample_to_number(right_feat, self.ctx_num)
                left_feat = self.sample_to_number(left_feat, self.ctx_num)
                central_batch[index, :, :] = featmap
                left_batch[index, :, :] = left_feat
                right_batch[index, :, :] = right_feat
                label_batch[index] = 1
                offset_batch[index, 0] = start_offset
                offset_batch[index, 1] = end_offset
                # print str(clip_start)+" "+str(clip_end)+" "+str(round_gt_start)+" "+str(round_gt_end)+" "+str(start_offset)+" "+str(end_offset)
                index += 1
            else:
                clip_start = self.training_samples[k][1]
                clip_end = self.training_samples[k][2]
                left_feat = self.get_left_context_feature(self.flow_feat_dir, self.appr_feat_dir, movie_name, clip_start, clip_end)
                right_feat = self.get_right_context_feature(self.flow_feat_dir, self.appr_feat_dir, movie_name, clip_start, clip_end)
                featmap = self.get_pooling_feature(self.flow_feat_dir, self.appr_feat_dir, movie_name, clip_start, clip_end)
                featmap = self.sample_to_number(featmap, self.central_num)
                right_feat = self.sample_to_number(right_feat, self.ctx_num)
                left_feat = self.sample_to_number(left_feat, self.ctx_num)
                central_batch[index, :, :] = featmap
                left_batch[index, :, :] = left_feat
                right_batch[index, :, :] = right_feat
                label_batch[index] = 0
                offset_batch[index, 0] = 0
                offset_batch[index, 1] = 0
                index += 1
        return central_batch, left_batch, right_batch, label_batch, offset_batch
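`calculate_regoffset` is defined outside this excerpt; in CTRL/TALL-style localization models the regression targets are conventionally the gaps between the sampled clip's boundaries and the ground-truth segment's boundaries. A hypothetical sketch of that convention (the name `calc_reg_offset` and the numbers are made up):

```python
# Hypothetical sketch of the usual clip-regression targets: how far the
# ground-truth start/end sit from the sampled clip's start/end.
def calc_reg_offset(clip_start, clip_end, gt_start, gt_end):
    return gt_start - clip_start, gt_end - clip_end

# A clip [16, 32] against a ground-truth segment [18, 30].
start_offset, end_offset = calc_reg_offset(16.0, 32.0, 18.0, 30.0)
```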
'''
A class that handles the test set
'''


class TestingDataSet(object):
    def __init__(self, flow_feat_dir, appr_feat_dir, test_clip_path, batch_size, ctx_num):
        self.ctx_num = ctx_num
        # il_path: image_label_file path
        self.batch_size = batch_size
        self.flow_feat_dir = flow_feat_dir
        self.appr_feat_dir = appr_feat_dir
        print "Reading testing data list from " + test_clip_path
        self.test_samples = []
        with open(test_clip_path) as f:
            for l in f:
                movie_name = l.rstrip().split(" ")[0]
                clip_start = float(l.rstrip().split(" ")[1])
                clip_end = float(l.rstrip().split(" ")[2])
                self.test_samples.append((movie_name, clip_start, clip_end))
        self.num_samples = len(self.test_samples)
        print "test clips number: " + str(len(self.test_samples))

# main.py (alextremblay962/Hydropinic_System)

import serial
import json
import io
import time
ser = serial.Serial("COM24", 9600, timeout=2)
topic = "hydro/light1"
payload = 1
#data = json.dumps({"topic":topic,"payload":payload})
data = "{\"topic\":\"hydro/light1\",\"payload\":1}"
data = data.encode()
print(data)
ser.write(b'A')
hello = ser.readline()#.decode("ascii")
print(hello)
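The hand-escaped string above can also be produced from a dict with `json.dumps`, as the commented-out line suggests; compact separators reproduce the hand-written payload byte for byte:

```python
import json

topic = "hydro/light1"
payload = 1

# Compact separators drop the spaces json.dumps inserts by default, matching
# the hand-written string exactly (Python 3.7+ dicts keep insertion order,
# so "topic" stays ahead of "payload").
data = json.dumps({"topic": topic, "payload": payload},
                  separators=(",", ":")).encode()
```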

# profileparser.py (JimKnowler/profile-visualiser)

class ProfileParser:
    def __init__(self, consumer):
        self._consumer = consumer

    def load_file(self, filename):
        with open(filename, "r") as file:
            for line_number, line in enumerate(file):
                try:
                    line = line.rstrip()
                    self.parse(line)
                except Exception as e:
                    print "exception while parsing line ", line_number
                    print ">> line: [", line, "]"
                    print ">>", e
                    raise e

    def parse(self, line):
        if line.startswith('#'):
            # ignore comment lines
            return
        split_line = line.split(' ', 1)
        line_type = split_line[0]
        if line_type == 'T':
            split_line = line.split(' ', 2)
            thread_id = int(split_line[1])
            thread_label = split_line[2]
            self._consumer.on_thread(thread_id, thread_label)
        elif line_type == 'F':
            split_line = line.split(' ', 3)
            thread_id = int(split_line[1])
            function_id = int(split_line[2])
            function_label = split_line[3]
            self._consumer.on_function(thread_id, function_id, function_label)
        elif line_type == 'S':
            split_line = line.split(' ', 3)
            thread_id = int(split_line[1])
            function_id = int(split_line[2])
            time = int(split_line[3])
            self._consumer.on_sample_start(thread_id, function_id, time)
        elif line_type == 'E':
            split_line = line.split(' ', 3)
            thread_id = int(split_line[1])
            function_id = int(split_line[2])
            time = int(split_line[3])
            self._consumer.on_sample_finish(thread_id, function_id, time)
        elif line_type == 'V':
            split_line = line.split(' ', 3)
            thread_id = int(split_line[1])
            event_id = int(split_line[2])
            event_label = split_line[3]
            self._consumer.on_event(thread_id, event_id, event_label)
        elif line_type == 'Y':
            split_line = line.split(' ', 3)
            thread_id = int(split_line[1])
            event_id = int(split_line[2])
            time = int(split_line[3])
            self._consumer.on_event_emit(thread_id, event_id, time)
        elif line_type == 'C':
            split_line = line.split(' ', 2)
            counter_id = int(split_line[1])
            counter_label = split_line[2]
            self._consumer.on_counter(counter_id, counter_label)
        elif line_type == 'D':
            split_line = line.split(' ', 3)
            counter_id = int(split_line[1])
            time = int(split_line[2])
            counter_value = int(split_line[3])
            self._consumer.on_counter_value(counter_id, time, counter_value)
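The wire format `parse` consumes is line-oriented: a one-letter record type, space-separated integer fields, and possibly a trailing free-text label. A standalone Python 3 sketch of the `'F'` (function) record, mirroring the `split(' ', 3)` call above (the example line is made up):

```python
# Mirror of the 'F' branch above: "F <thread_id> <function_id> <label>".
# maxsplit=3 lets the label itself contain spaces.
def parse_function_record(line):
    line_type, thread_id, function_id, label = line.split(' ', 3)
    assert line_type == 'F'
    return int(thread_id), int(function_id), label

record = parse_function_record("F 2 17 MyClass::render loop")
```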

# orquesta/tests/unit/graphing/native/test_routes_split.py (batk0/orquesta)

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from orquesta.tests.unit.composition.native import base


class SplitWorkflowRoutesTest(base.OrchestraWorkflowComposerTest):

    def test_split(self):
        wf_name = 'split'

        expected_routes = [
            {
                'tasks': [
                    'task1',
                    'task2',
                    'task4__1',
                    'task5__1',
                    'task6__1',
                    'task7__1',
                ],
                'path': [
                    ('task1', 'task2', 0),
                    ('task2', 'task4__1', 0),
                    ('task4__1', 'task5__1', 0),
                    ('task4__1', 'task6__1', 0),
                    ('task5__1', 'task7__1', 0),
                    ('task6__1', 'task7__1', 0),
                ]
            },
            {
                'tasks': [
                    'task1',
                    'task3',
                    'task4__2',
                    'task5__2',
                    'task6__2',
                    'task7__2'
                ],
                'path': [
                    ('task1', 'task3', 0),
                    ('task3', 'task4__2', 0),
                    ('task4__2', 'task5__2', 0),
                    ('task4__2', 'task6__2', 0),
                    ('task5__2', 'task7__2', 0),
                    ('task6__2', 'task7__2', 0)
                ]
            }
        ]

        self.assert_wf_ex_routes(wf_name, expected_routes)
    def test_splits(self):
        wf_name = 'splits'

        expected_routes = [
            {
                'tasks': [
                    'task1',
                    'task8__1'
                ],
                'path': [
                    ('task1', 'task8__1', 0)
                ]
            },
            {
                'tasks': [
                    'task1',
                    'task2',
                    'task4__1',
                    'task5__1',
                    'task6__1',
                    'task7__1',
                    'task8__2'
                ],
                'path': [
                    ('task1', 'task2', 0),
                    ('task2', 'task4__1', 0),
                    ('task4__1', 'task5__1', 0),
                    ('task4__1', 'task6__1', 0),
                    ('task5__1', 'task7__1', 0),
                    ('task6__1', 'task7__1', 0),
                    ('task7__1', 'task8__2', 0)
                ]
            },
            {
                'tasks': [
                    'task1',
                    'task3',
                    'task4__2',
                    'task5__2',
                    'task6__2',
                    'task7__2',
                    'task8__3'
                ],
                'path': [
                    ('task1', 'task3', 0),
                    ('task3', 'task4__2', 0),
                    ('task4__2', 'task5__2', 0),
                    ('task4__2', 'task6__2', 0),
                    ('task5__2', 'task7__2', 0),
                    ('task6__2', 'task7__2', 0),
                    ('task7__2', 'task8__3', 0)
                ]
            }
        ]

        self.assert_wf_ex_routes(wf_name, expected_routes)
    def test_nested_splits(self):
        wf_name = 'splits-nested'

        expected_routes = [
            {
                'tasks': [
                    'task1',
                    'task10__1',
                    'task2',
                    'task4__1',
                    'task5__1',
                    'task7__1',
                    'task8__1',
                    'task9__1'
                ],
                'path': [
                    ('task1', 'task2', 0),
                    ('task2', 'task4__1', 0),
                    ('task4__1', 'task5__1', 0),
                    ('task5__1', 'task7__1', 0),
                    ('task7__1', 'task8__1', 0),
                    ('task7__1', 'task9__1', 0),
                    ('task8__1', 'task10__1', 0),
                    ('task9__1', 'task10__1', 0)
                ]
            },
            {
                'tasks': [
                    'task1',
                    'task10__2',
                    'task2',
                    'task4__1',
                    'task6__1',
                    'task7__2',
                    'task8__2',
                    'task9__2'
                ],
                'path': [
                    ('task1', 'task2', 0),
                    ('task2', 'task4__1', 0),
                    ('task4__1', 'task6__1', 0),
                    ('task6__1', 'task7__2', 0),
                    ('task7__2', 'task8__2', 0),
                    ('task7__2', 'task9__2', 0),
                    ('task8__2', 'task10__2', 0),
                    ('task9__2', 'task10__2', 0)
                ]
            },
            {
                'tasks': [
                    'task1',
                    'task10__3',
                    'task3',
                    'task4__2',
                    'task5__2',
                    'task7__3',
                    'task8__3',
                    'task9__3'
                ],
                'path': [
                    ('task1', 'task3', 0),
                    ('task3', 'task4__2', 0),
                    ('task4__2', 'task5__2', 0),
                    ('task5__2', 'task7__3', 0),
                    ('task7__3', 'task8__3', 0),
                    ('task7__3', 'task9__3', 0),
                    ('task8__3', 'task10__3', 0),
                    ('task9__3', 'task10__3', 0)
                ]
            },
            {
                'tasks': [
                    'task1',
                    'task10__4',
                    'task3',
                    'task4__2',
                    'task6__2',
                    'task7__4',
                    'task8__4',
                    'task9__4'
                ],
                'path': [
                    ('task1', 'task3', 0),
                    ('task3', 'task4__2', 0),
                    ('task4__2', 'task6__2', 0),
                    ('task6__2', 'task7__4', 0),
                    ('task7__4', 'task8__4', 0),
                    ('task7__4', 'task9__4', 0),
                    ('task8__4', 'task10__4', 0),
                    ('task9__4', 'task10__4', 0)
                ]
            }
        ]

        self.assert_wf_ex_routes(wf_name, expected_routes)
    def test_splits_extra_join(self):
        wf_name = 'splits-join'

        expected_routes = [
            {
                'tasks': [
                    'task1',
                    'task2',
                    'task4__1',
                    'task5__1',
                    'task6__1',
                    'task7__1',
                    'task8__1'
                ],
                'path': [
                    ('task1', 'task2', 0),
                    ('task1', 'task8__1', 0),
                    ('task2', 'task4__1', 0),
                    ('task4__1', 'task5__1', 0),
                    ('task4__1', 'task6__1', 0),
                    ('task5__1', 'task7__1', 0),
                    ('task6__1', 'task7__1', 0),
                    ('task7__1', 'task8__1', 0)
                ]
            },
            {
                'tasks': [
                    'task1',
                    'task3',
                    'task4__2',
                    'task5__2',
                    'task6__2',
                    'task7__2',
                    'task8__2'
                ],
                'path': [
                    ('task1', 'task3', 0),
                    ('task1', 'task8__2', 0),
                    ('task3', 'task4__2', 0),
                    ('task4__2', 'task5__2', 0),
                    ('task4__2', 'task6__2', 0),
                    ('task5__2', 'task7__2', 0),
                    ('task6__2', 'task7__2', 0),
                    ('task7__2', 'task8__2', 0)
                ]
            }
        ]

        self.assert_wf_ex_routes(wf_name, expected_routes)
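The intuition behind these fixtures is that each split multiplies routes: every branch out of the split becomes its own route through the graph. A hypothetical sketch (not orquesta's API) that enumerates source-to-sink paths in a tiny task graph shows why one two-way split yields exactly two routes:

```python
# Made-up adjacency list: task1 splits into task2/task3, which rejoin at task4.
graph = {
    "task1": ["task2", "task3"],  # the split point
    "task2": ["task4"],
    "task3": ["task4"],
    "task4": [],                  # sink
}

# Depth-first enumeration of all paths from `node` to any sink.
def routes(graph, node, path=None):
    path = (path or []) + [node]
    if not graph[node]:
        return [path]
    result = []
    for nxt in graph[node]:
        result.extend(routes(graph, nxt, path))
    return result

all_routes = routes(graph, "task1")  # one route per branch of the split
```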

# PSA/modules/BroEventDispatcher.py (SECURED-FP7/secured-psa-nsm)

# -*- Mode:Python;indent-tabs-mode:nil; -*-
#
# BroEventDispatcher.py
#
# A simple event dispatcher.
#
# Author: jju / VTT Technical Research Centre of Finland Ltd., 2016
#
import logging

callbacks = {}


def init():
    pass


def register(key, obj):
    """
    Register a callback for key 'key'
    """
    global callbacks
    callbacks[key] = obj


def unregister(key):
    """
    Unregisters callback for key 'key'
    """
    global callbacks
    del callbacks[key]


def dispatch(key, data):
    """
    Dispatch event 'data' to the callback registered for key 'key'
    """
    global callbacks
    try:
        cb = callbacks[key]
        if cb is not None:
            cb.onEvent(data)
    except Exception as e:
        logging.warning('No dispatcher for key: ' + key + ': ' + str(e))
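A usage sketch of this dispatcher: any object exposing the `onEvent` hook can be registered under a key. The module's three functions are restated inline here (the dispatcher is just a dict of callbacks) so the example runs standalone; the key and payload are made up.

```python
import logging

# Restated inline from the module above so this sketch is self-contained.
callbacks = {}

def register(key, obj):
    callbacks[key] = obj

def unregister(key):
    del callbacks[key]

def dispatch(key, data):
    try:
        cb = callbacks[key]
        if cb is not None:
            cb.onEvent(data)
    except Exception as e:
        logging.warning('No dispatcher for key: ' + key + ': ' + str(e))

received = []

class Consumer:
    # The only contract dispatch() relies on: an onEvent(data) method.
    def onEvent(self, data):
        received.append(data)

register("alerts", Consumer())
dispatch("alerts", {"msg": "ping"})
unregister("alerts")
```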

# blueprint/backend/files.py (jonathanmusila/StackOverflow-Lite)

"""
Search for configuration files to include in the blueprint.
"""
import base64
from collections import defaultdict
import errno
import glob
import grp
import hashlib
import logging
import os.path
import pwd
import re
import stat
import subprocess
from blueprint import util
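A minimal sketch of the lookup the `MD5SUMS` table below supports: hash a file's bytes with `hashlib` and check the hex digest against the known pristine sums (entries that are pathnames rather than digests are resolved by hashing that file's contents instead).

```python
import hashlib

# Return True if `content` hashes to any of the known pristine MD5 sums.
def matches_known_sum(content, known_sums):
    digest = hashlib.md5(content).hexdigest()
    return digest in known_sums

# The MD5 of an empty file is the well-known d41d8cd98f00b204e9800998ecf8427e,
# the sum recorded in MD5SUMS for /etc/security/opasswd.
empty_matches = matches_known_sum(b"", ["d41d8cd98f00b204e9800998ecf8427e"])
```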
# An extra list of pathnames and MD5 sums that will be checked after no
# match is found in `dpkg`(1)'s list. If a pathname is given as the value
# then that file's contents will be hashed.
#
# Many of these files are distributed with packages and copied from
# `/usr/share` in the `postinst` program.
#
# XXX Update `blueprintignore`(5) if you make changes here.
MD5SUMS = {'/etc/adduser.conf': ['/usr/share/adduser/adduser.conf'],
'/etc/apparmor.d/tunables/home.d/ubuntu':
['2a88811f7b763daa96c20b20269294a4'],
'/etc/apt/apt.conf.d/00CDMountPoint':
['cb46a4e03f8c592ee9f56c948c14ea4e'],
'/etc/apt/apt.conf.d/00trustcdrom':
['a8df82e6e6774f817b500ee10202a968'],
'/etc/chatscripts/provider': ['/usr/share/ppp/provider.chatscript'],
'/etc/default/console-setup':
['0fb6cec686d0410993bdf17192bee7d6',
'b684fd43b74ac60c6bdafafda8236ed3',
'/usr/share/console-setup/console-setup'],
'/etc/default/grub': ['ee9df6805efb2a7d1ba3f8016754a119',
'ad9283019e54cedfc1f58bcc5e615dce'],
'/etc/default/irqbalance': ['7e10d364b9f72b11d7bf7bd1cfaeb0ff'],
'/etc/default/keyboard': ['06d66484edaa2fbf89aa0c1ec4989857'],
'/etc/default/locale': ['164aba1ef1298affaa58761647f2ceba',
'7c32189e775ac93487aa4a01dffbbf76'],
'/etc/default/rcS': ['/usr/share/initscripts/default.rcS'],
'/etc/environment': ['44ad415fac749e0c39d6302a751db3f2'],
'/etc/hosts.allow': ['8c44735847c4f69fb9e1f0d7a32e94c1'],
'/etc/hosts.deny': ['92a0a19db9dc99488f00ac9e7b28eb3d'],
'/etc/initramfs-tools/modules':
['/usr/share/initramfs-tools/modules'],
'/etc/inputrc': ['/usr/share/readline/inputrc'],
'/etc/iscsi/iscsid.conf': ['6c6fd718faae84a4ab1b276e78fea471'],
'/etc/kernel-img.conf': ['f1ed9c3e91816337aa7351bdf558a442'],
'/etc/ld.so.conf': ['4317c6de8564b68d628c21efa96b37e4'],
'/etc/ld.so.conf.d/nosegneg.conf':
['3c6eccf8f1c6c90eaf3eb486cc8af8a3'],
'/etc/networks': ['/usr/share/base-files/networks'],
'/etc/nsswitch.conf': ['/usr/share/base-files/nsswitch.conf'],
'/etc/pam.d/common-account': ['9d50c7dda6ba8b6a8422fd4453722324'],
'/etc/pam.d/common-auth': ['a326c972f4f3d20e5f9e1b06eef4d620'],
'/etc/pam.d/common-password': ['9f2fbf01b1a36a017b16ea62c7ff4c22'],
'/etc/pam.d/common-session': ['e2b72dd3efb2d6b29698f944d8723ab1'],
'/etc/pam.d/common-session-noninteractive':
['508d44b6daafbc3d6bd587e357a6ff5b'],
'/etc/pam.d/fingerprint-auth-ac':
['d851f318a16c32ed12f5b1cd55e99281'],
'/etc/pam.d/fingerprint-auth': ['d851f318a16c32ed12f5b1cd55e99281'],
'/etc/pam.d/password-auth-ac': ['e8aee610b8f5de9b6a6cdba8a33a4833'],
'/etc/pam.d/password-auth': ['e8aee610b8f5de9b6a6cdba8a33a4833'],
'/etc/pam.d/smartcard-auth-ac':
['dfa6696dc19391b065c45b9525d3ae55'],
'/etc/pam.d/smartcard-auth': ['dfa6696dc19391b065c45b9525d3ae55'],
'/etc/pam.d/system-auth-ac': ['e8aee610b8f5de9b6a6cdba8a33a4833'],
'/etc/pam.d/system-auth': ['e8aee610b8f5de9b6a6cdba8a33a4833'],
'/etc/ppp/chap-secrets': ['faac59e116399eadbb37644de6494cc4'],
'/etc/ppp/pap-secrets': ['698c4d412deedc43dde8641f84e8b2fd'],
'/etc/ppp/peers/provider': ['/usr/share/ppp/provider.peer'],
'/etc/profile': ['/usr/share/base-files/profile'],
'/etc/python/debian_config': ['7f4739eb8858d231601a5ed144099ac8'],
'/etc/rc.local': ['10fd9f051accb6fd1f753f2d48371890'],
'/etc/rsyslog.d/50-default.conf':
['/usr/share/rsyslog/50-default.conf'],
'/etc/security/opasswd': ['d41d8cd98f00b204e9800998ecf8427e'],
'/etc/selinux/restorecond.conf':
['b5b371cb8c7b33e17bdd0d327fa69b60'],
'/etc/selinux/targeted/modules/semanage.trans.LOCK':
['d41d8cd98f00b204e9800998ecf8427e'],
'/etc/selinux/targeted/modules/active/file_contexts.template':
['bfa4d9e76d88c7dc49ee34ac6f4c3925'],
'/etc/selinux/targeted/modules/active/file_contexts':
['1622b57a3b85db3112c5f71238c68d3e'],
'/etc/selinux/targeted/modules/active/users_extra':
['daab665152753da1bf92ca0b2af82999'],
'/etc/selinux/targeted/modules/active/base.pp':
['6540e8e1a9566721e70953a3cb946de4'],
'/etc/selinux/targeted/modules/active/modules/fetchmail.pp':
['0b0c7845f10170a76b9bd4213634cb43'],
'/etc/selinux/targeted/modules/active/modules/usbmuxd.pp':
['72a039c5108de78060651833a073dcd1'],
'/etc/selinux/targeted/modules/active/modules/pulseaudio.pp':
['d9c4f1abf8397d7967bb3014391f7b61'],
'/etc/selinux/targeted/modules/active/modules/screen.pp':
['c343b6c4df512b3ef435f06ed6cfd8b4'],
'/etc/selinux/targeted/modules/active/modules/cipe.pp':
['4ea2d39babaab8e83e29d13d7a83e8da'],
'/etc/selinux/targeted/modules/active/modules/rpcbind.pp':
['48cdaa5a31d75f95690106eeaaf855e3'],
'/etc/selinux/targeted/modules/active/modules/nut.pp':
['d8c81e82747c85d6788acc9d91178772'],
'/etc/selinux/targeted/modules/active/modules/mozilla.pp':
['405329d98580ef56f9e525a66adf7dc5'],
'/etc/selinux/targeted/modules/active/modules/openvpn.pp':
['110fe4c59b7d7124a7d33fda1f31428a'],
'/etc/selinux/targeted/modules/active/modules/denyhosts.pp':
['d12dba0c7eea142c16abd1e0424dfda4'],
'/etc/selinux/targeted/modules/active/modules/rhcs.pp':
['e7a6bf514011f39f277d401cd3d3186a'],
'/etc/selinux/targeted/modules/active/modules/radius.pp':
['a7380d93d0ac922364bc1eda85af80bf'],
'/etc/selinux/targeted/modules/active/modules/policykit.pp':
['1828a7a89c5c7a9cd0bd1b04b379e2c0'],
'/etc/selinux/targeted/modules/active/modules/varnishd.pp':
['260ef0797e6178de4edeeeca741e2374'],
'/etc/selinux/targeted/modules/active/modules/bugzilla.pp':
['c70402a459add46214ee370039398931'],
'/etc/selinux/targeted/modules/active/modules/java.pp':
['ac691d90e755a9a929c1c8095d721899'],
'/etc/selinux/targeted/modules/active/modules/courier.pp':
['d6eb2ef77d755fd49d61e48383867ccb'],
'/etc/selinux/targeted/modules/active/modules/userhelper.pp':
['787e5ca0ee1c9e744e9116837d73c2b9'],
'/etc/selinux/targeted/modules/active/modules/sssd.pp':
['aeb11626d9f34af08e9cd50b1b5751c7'],
'/etc/selinux/targeted/modules/active/modules/munin.pp':
['db2927d889a3dfbe439eb67dfdcba61d'],
'/etc/selinux/targeted/modules/active/modules/ppp.pp':
['7c6f91f4aae1c13a3d2a159a4c9b8553'],
'/etc/selinux/targeted/modules/active/modules/xfs.pp':
['6b3be69f181f28e89bfcffa032097dcb'],
'/etc/selinux/targeted/modules/active/modules/consolekit.pp':
['ef682e07a732448a12f2e93da946d655'],
'/etc/selinux/targeted/modules/active/modules/telnet.pp':
['43fd78d022e499bcb6392da33ed6e28d'],
'/etc/selinux/targeted/modules/active/modules/nagios.pp':
['9c9e482867dce0aa325884a50a023a83'],
'/etc/selinux/targeted/modules/active/modules/sysstat.pp':
['0fc4e6b3472ce5e8cfd0f3e785809552'],
'/etc/selinux/targeted/modules/active/modules/tor.pp':
['2c926e3c5b79879ed992b72406544394'],
'/etc/selinux/targeted/modules/active/modules/qpidd.pp':
['959d4763313e80d8a75bc009094ea085'],
'/etc/selinux/targeted/modules/active/modules/radvd.pp':
['a7636d3df0f431ad421170150e8a9d2e'],
'/etc/selinux/targeted/modules/active/modules/aiccu.pp':
['c0eafc1357cd0c07be4034c1a27ada98'],
'/etc/selinux/targeted/modules/active/modules/tgtd.pp':
['55da30386834e60a10b4bab582a1b689'],
'/etc/selinux/targeted/modules/active/modules/sectoolm.pp':
['6f8fba8d448da09f85a03622de295ba9'],
'/etc/selinux/targeted/modules/active/modules/unconfineduser.pp':
['0bc2f6faf3b38a657c4928ec7b611d7a'],
'/etc/selinux/targeted/modules/active/modules/sambagui.pp':
['31a5121c80a6114b25db4984bdf8d999'],
'/etc/selinux/targeted/modules/active/modules/mpd.pp':
['cdabce7844a227a81c2334dec0c49e9b'],
'/etc/selinux/targeted/modules/active/modules/hddtemp.pp':
['76d85610a7e198c82406d850ccd935e1'],
'/etc/selinux/targeted/modules/active/modules/clamav.pp':
['f8f5b60e3f5b176810ea0666b989f63d'],
'/etc/selinux/targeted/modules/active/modules/tvtime.pp':
['886dc0a6e9ebcbb6787909851e7c209f'],
'/etc/selinux/targeted/modules/active/modules/cgroup.pp':
['9e1cd610b6fde0e9b42cabd7f994db46'],
'/etc/selinux/targeted/modules/active/modules/rshd.pp':
['e39cec5e9ade8a619ecb91b85a351408'],
'/etc/selinux/targeted/modules/active/modules/roundup.pp':
['133b9b3b2f70422953851e18d6c24276'],
'/etc/selinux/targeted/modules/active/modules/virt.pp':
['9ae34fca60c651c10298797c1260ced0'],
'/etc/selinux/targeted/modules/active/modules/asterisk.pp':
['f823fdcb2c6df4ddde374c9edb11ef26'],
'/etc/selinux/targeted/modules/active/modules/livecd.pp':
['8972e6ef04f490b8915e7983392b96ce'],
'/etc/selinux/targeted/modules/active/modules/netlabel.pp':
['91fc83e5798bd271742823cbb78c17ff'],
'/etc/selinux/targeted/modules/active/modules/qemu.pp':
['e561673d5f9e5c19bcae84c1641fa4a7'],
'/etc/selinux/targeted/modules/active/modules/unconfined.pp':
['3acd5dceb6b7a71c32919c29ef920785'],
'/etc/selinux/targeted/modules/active/modules/postgresql.pp':
['3ecc9f2c7b911fa37d8ab6cc1c6b0ea7'],
'/etc/selinux/targeted/modules/active/modules/apache.pp':
['c0089e4472399e9bc5237b1e0485ac39'],
'/etc/selinux/targeted/modules/active/modules/abrt.pp':
['09e212789d19f41595d7952499236a0c'],
'/etc/selinux/targeted/modules/active/modules/rsync.pp':
['e2567e8716c116ea6324c77652c97137'],
'/etc/selinux/targeted/modules/active/modules/git.pp':
['7904fd9fbae924be5377ccd51036248e'],
'/etc/selinux/targeted/modules/active/modules/amanda.pp':
['594eddbbe3b4530e79702fc6a882010e'],
'/etc/selinux/targeted/modules/active/modules/cvs.pp':
['62cf7b7d58f507cc9f507a6c303c8020'],
'/etc/selinux/targeted/modules/active/modules/chronyd.pp':
['a4ff3e36070d461771230c4019b23440'],
'/etc/selinux/targeted/modules/active/modules/gpm.pp':
['ed3f26e774be81c2cbaaa87dcfe7ae2d'],
'/etc/selinux/targeted/modules/active/modules/modemmanager.pp':
['840d4da9f32a264436f1b22d4d4a0b2a'],
'/etc/selinux/targeted/modules/active/modules/podsleuth.pp':
['67e659e9554bc35631ee829b5dc71647'],
'/etc/selinux/targeted/modules/active/modules/publicfile.pp':
['0f092d92c326444dc9cee78472c56655'],
'/etc/selinux/targeted/modules/active/modules/postfix.pp':
['a00647ad811c22810c76c1162a97e74b'],
'/etc/selinux/targeted/modules/active/modules/exim.pp':
['8c3cd1fbd8f68e80ac7707f243ac1911'],
'/etc/selinux/targeted/modules/active/modules/telepathy.pp':
['9b32f699beb6f9c563f06f6b6d76732c'],
'/etc/selinux/targeted/modules/active/modules/amtu.pp':
['1b87c9fef219244f80b1f8f57a2ce7ea'],
'/etc/selinux/targeted/modules/active/modules/bitlbee.pp':
['cf0973c8fff61577cf330bb74ef75eed'],
'/etc/selinux/targeted/modules/active/modules/memcached.pp':
['0146491b4ab9fbd2854a7e7fb2092168'],
'/etc/selinux/targeted/modules/active/modules/sandbox.pp':
['82502d6d11b83370d1a77343f20d669f'],
'/etc/selinux/targeted/modules/active/modules/dictd.pp':
['6119d37987ea968e90a39d96866e5805'],
'/etc/selinux/targeted/modules/active/modules/pingd.pp':
['16c40af7785c8fa9d40789284ce8fbb9'],
'/etc/selinux/targeted/modules/active/modules/milter.pp':
['acaec7d2ee341e97ac5e345b55f6c7ae'],
'/etc/selinux/targeted/modules/active/modules/snort.pp':
['25f360aa5dec254a8fc18262bbe40510'],
'/etc/selinux/targeted/modules/active/modules/cups.pp':
['5323d417895d5ab508048e2bc45367bf'],
'/etc/selinux/targeted/modules/active/modules/rdisc.pp':
['5bed79cb1f4d5a2b822d6f8dbf53fe97'],
'/etc/selinux/targeted/modules/active/modules/rlogin.pp':
['6f88cc86985b4bc79d4b1afbffb1a732'],
'/etc/selinux/targeted/modules/active/modules/openct.pp':
['884f078f5d12f7b1c75cf011a94746e1'],
'/etc/selinux/targeted/modules/active/modules/dbskk.pp':
['caa93f24bfeede892fd97c59ee8b61da'],
'/etc/selinux/targeted/modules/active/modules/bluetooth.pp':
['ce4f1b34168c537b611783033316760e'],
'/etc/selinux/targeted/modules/active/modules/gpsd.pp':
['dd15485b8c6e5aeac018ddbe0948464c'],
'/etc/selinux/targeted/modules/active/modules/tuned.pp':
['5fc9de20402245e4a1a19c5b31101d06'],
'/etc/selinux/targeted/modules/active/modules/piranha.pp':
['fcedf8588c027633bedb76b598b7586f'],
'/etc/selinux/targeted/modules/active/modules/vhostmd.pp':
['0ca7152ed8a0ae393051876fe89ed657'],
'/etc/selinux/targeted/modules/active/modules/corosync.pp':
['20518dface3d23d7408dd56a51c8e6e1'],
'/etc/selinux/targeted/modules/active/modules/clogd.pp':
['533994a32ecf847a3162675e171c847c'],
'/etc/selinux/targeted/modules/active/modules/samba.pp':
['c7cd9b91a5ba4f0744e3f55a800f2831'],
'/etc/selinux/targeted/modules/active/modules/howl.pp':
['fef7dd76a97921c3e5e0e66fbac15091'],
'/etc/selinux/targeted/modules/active/modules/shutdown.pp':
['55f36d9820dcd19c66729d446d3ce6b2'],
'/etc/selinux/targeted/modules/active/modules/oddjob.pp':
['54d59b40e7bc0dc0dee3882e6c0ce9f3'],
'/etc/selinux/targeted/modules/active/modules/pcscd.pp':
['e728f332850dfcb5637c4e8f220af2fc'],
'/etc/selinux/targeted/modules/active/modules/canna.pp':
['de4f1a3ada6f9813da36febc31d2a282'],
'/etc/selinux/targeted/modules/active/modules/arpwatch.pp':
['0ddc328fa054f363a035ba44ec116514'],
'/etc/selinux/targeted/modules/active/modules/seunshare.pp':
['64844bbf79ee23e087a5741918f3a7ad'],
'/etc/selinux/targeted/modules/active/modules/rhgb.pp':
['c9630cc5830fcb4b775985c5740f5a71'],
'/etc/selinux/targeted/modules/active/modules/prelude.pp':
['2b85511c571c19751bb79b288267661c'],
'/etc/selinux/targeted/modules/active/modules/portmap.pp':
['231abe579c0370f49cac533c6057792b'],
'/etc/selinux/targeted/modules/active/modules/logadm.pp':
['980b1345ef8944a90b6efdff0c8b3278'],
'/etc/selinux/targeted/modules/active/modules/ptchown.pp':
['987fc8a6ff50ef7eed0edc79f91b1ec5'],
'/etc/selinux/targeted/modules/active/modules/vmware.pp':
['8cf31ec8abd75f2a6c56857146caf5a1'],
'/etc/selinux/targeted/modules/active/modules/portreserve.pp':
['0354f017b429dead8de0d143f7950fcc'],
'/etc/selinux/targeted/modules/active/modules/awstats.pp':
['c081d3168b28765182bb4ec937b4c0b1'],
'/etc/selinux/targeted/modules/active/modules/tmpreaper.pp':
['ac0173dd09a54a87fdcb42d3a5e29442'],
'/etc/selinux/targeted/modules/active/modules/postgrey.pp':
['68013352c07570ac38587df9fb7e88ee'],
'/etc/selinux/targeted/modules/active/modules/tftp.pp':
['a47fb7872bfb06d80c8eef969d91e6f9'],
'/etc/selinux/targeted/modules/active/modules/rgmanager.pp':
['1cee78e1ff3f64c4d013ce7b820e534b'],
'/etc/selinux/targeted/modules/active/modules/aisexec.pp':
['95e70fd35e9cb8284488d6bf970815b7'],
'/etc/selinux/targeted/modules/active/modules/xguest.pp':
['d8df4b61df93008cd594f98c852d4cba'],
'/etc/selinux/targeted/modules/active/modules/cobbler.pp':
['6978d8b37b1da384130db5c5c2144175'],
'/etc/selinux/targeted/modules/active/modules/mysql.pp':
['d147af479531042f13e70d72bd58a0e9'],
'/etc/selinux/targeted/modules/active/modules/amavis.pp':
['7fc17b2f47c1d8226a9003df1ef67bb5'],
'/etc/selinux/targeted/modules/active/modules/fprintd.pp':
['d58f18b496f69a74ece1f1b1b9432405'],
'/etc/selinux/targeted/modules/active/modules/nis.pp':
['d696b167de5817226298306c79761fa2'],
'/etc/selinux/targeted/modules/active/modules/squid.pp':
['3f9e075e79ec5aa59609a7ccebce0afe'],
'/etc/selinux/targeted/modules/active/modules/smokeping.pp':
['98b83cac4488d7dd18c479b62dd3cf15'],
'/etc/selinux/targeted/modules/active/modules/ktalk.pp':
['afe14e94861782679305c91da05e7d5e'],
'/etc/selinux/targeted/modules/active/modules/certwatch.pp':
['bf13c9a642ded8354ba26d5462ddd60c'],
'/etc/selinux/targeted/modules/active/modules/games.pp':
['3bcd17c07699d58bd436896e75a24520'],
'/etc/selinux/targeted/modules/active/modules/zabbix.pp':
['5445ccfec7040ff1ccf3abf4de2e9a3c'],
'/etc/selinux/targeted/modules/active/modules/rwho.pp':
['710e29c8e621de6af9ca74869624b9f0'],
'/etc/selinux/targeted/modules/active/modules/w3c.pp':
['aea6b9518cb3fa904cc7ee82239b07c2'],
'/etc/selinux/targeted/modules/active/modules/cyphesis.pp':
['dccb3f009cd56c5f8856861047d7f2ff'],
'/etc/selinux/targeted/modules/active/modules/kismet.pp':
['f2d984e007275d35dd03a2d59ade507e'],
'/etc/selinux/targeted/modules/active/modules/zosremote.pp':
['77a2681c4b1c3c001faeca9874b58ecf'],
'/etc/selinux/targeted/modules/active/modules/pads.pp':
['76b7413009a202e228ee08c5511f3f42'],
'/etc/selinux/targeted/modules/active/modules/avahi.pp':
['b59670ba623aba37ab8f0f1f1127893a'],
'/etc/selinux/targeted/modules/active/modules/apcupsd.pp':
['81fae28232730a49b7660797ef4354c3'],
'/etc/selinux/targeted/modules/active/modules/usernetctl.pp':
['22850457002a48041d885c0d74fbd934'],
'/etc/selinux/targeted/modules/active/modules/finger.pp':
['5dd6b44358bbfabfdc4f546e1ed34370'],
'/etc/selinux/targeted/modules/active/modules/dhcp.pp':
['7e63b07b64848a017eec5d5f6b88f22e'],
'/etc/selinux/targeted/modules/active/modules/xen.pp':
['67086e8e94bdaab8247ac4d2e23162d1'],
'/etc/selinux/targeted/modules/active/modules/plymouthd.pp':
['1916027e7c9f28430fa2ac30334e8964'],
'/etc/selinux/targeted/modules/active/modules/uucp.pp':
['5bec7a345a314a37b4a2227bdfa926f1'],
'/etc/selinux/targeted/modules/active/modules/daemontools.pp':
['aad7633adfc8b04e863b481deebaf14a'],
'/etc/selinux/targeted/modules/active/modules/kdumpgui.pp':
['66e08b4187623fa1c535972a35ec058c'],
'/etc/selinux/targeted/modules/active/modules/privoxy.pp':
['f13c986051659fa900786ea54a59ceae'],
'/etc/selinux/targeted/modules/active/modules/unprivuser.pp':
['a0d128b495a6ea5da72c849ac63c5848'],
'/etc/selinux/targeted/modules/active/modules/ada.pp':
['a75fd52c873e2c9326ad87f7515a664f'],
'/etc/selinux/targeted/modules/active/modules/lircd.pp':
['3cc5cc5b24d40416f9d630a80005d33b'],
'/etc/selinux/targeted/modules/active/modules/openoffice.pp':
['522c3ee13bc37cbe9903d00f0cbccd1d'],
'/etc/selinux/targeted/modules/active/modules/puppet.pp':
['9da4c553f40f3dea876171e672168044'],
'/etc/selinux/targeted/modules/active/modules/wine.pp':
['31c470eabd98c5a5dbc66ba52ad64de0'],
'/etc/selinux/targeted/modules/active/modules/ulogd.pp':
['065551ea63de34a7257ecec152f61552'],
'/etc/selinux/targeted/modules/active/modules/mplayer.pp':
['f889dbfa3d9ef071d8e569def835a2f3'],
'/etc/selinux/targeted/modules/active/modules/ftp.pp':
['75a9f3563903eb8126ffbcc9277e1d8c'],
'/etc/selinux/targeted/modules/active/modules/gnome.pp':
['b859e2d45123f60ff27a90cdb0f40e1b'],
'/etc/selinux/targeted/modules/active/modules/ethereal.pp':
['8963c6b80025b27850f0cdf565e5bd54'],
'/etc/selinux/targeted/modules/active/modules/iscsi.pp':
['7786cb4a84889010751b4d89c72a2956'],
'/etc/selinux/targeted/modules/active/modules/chrome.pp':
['cb44c1c7b13cc04c07c4e787a259b63f'],
'/etc/selinux/targeted/modules/active/modules/guest.pp':
['308d614589af73e39a22e5c741e9eecb'],
'/etc/selinux/targeted/modules/active/modules/inn.pp':
['8d60592dcd3bf4d2fa97f0fefa9374ca'],
'/etc/selinux/targeted/modules/active/modules/gitosis.pp':
['21c79a711157224bebba0a2cccbe8881'],
'/etc/selinux/targeted/modules/active/modules/ksmtuned.pp':
['8f985e777c206d2bde3fc2ac6a28cd24'],
'/etc/selinux/targeted/modules/active/modules/sosreport.pp':
['9b4780d27555e94335f80a0bb2ab4f14'],
'/etc/selinux/targeted/modules/active/modules/ipsec.pp':
['68cacb8c78796957fb4a181390033b16'],
'/etc/selinux/targeted/modules/active/modules/comsat.pp':
['1cecb3f5cbe24251017908e14838ee2a'],
'/etc/selinux/targeted/modules/active/modules/gpg.pp':
['75358ddabb045e91010d80f1ab68307a'],
'/etc/selinux/targeted/modules/active/modules/gnomeclock.pp':
['a4e74df48faab3af8f4df0fa16c65c7e'],
'/etc/selinux/targeted/modules/active/modules/sasl.pp':
['5ba9be813a7dd4236fc2d37bc17c5052'],
'/etc/selinux/targeted/modules/active/modules/vpn.pp':
['32ae00c287432ae5ad4f8affbc9e44fe'],
'/etc/selinux/targeted/modules/active/modules/accountsd.pp':
['308057b48c6d70a45e5a603dbe625c2d'],
'/etc/selinux/targeted/modules/active/modules/devicekit.pp':
['1f5a8f12ebeebfed2cfeb3ee4648dd13'],
'/etc/selinux/targeted/modules/active/modules/psad.pp':
['b02f11705249c93735f019f5b97fdf7b'],
'/etc/selinux/targeted/modules/active/modules/mono.pp':
['8bba1cc6826e8300c140f9c393ad07e9'],
'/etc/selinux/targeted/modules/active/modules/cachefilesd.pp':
['82b93ba87b5920ecc8a7388f4cf8ea43'],
'/etc/selinux/targeted/modules/active/modules/usbmodules.pp':
['20c3a57da3c1311a75a63f1c6ae91bf3'],
'/etc/selinux/targeted/modules/active/modules/certmonger.pp':
['b9fe8ba6abc5204cd8eec546f5614ff5'],
'/etc/selinux/targeted/modules/active/modules/pegasus.pp':
['bb0ec4379c28b196d1794d7310111d98'],
'/etc/selinux/targeted/modules/active/modules/ntop.pp':
['99b46fe44ccf3c4e045dbc73d2a88f59'],
'/etc/selinux/targeted/modules/active/modules/zebra.pp':
['12adcaae458d18f650578ce25e10521a'],
'/etc/selinux/targeted/modules/active/modules/soundserver.pp':
['583abd9ccef70279bff856516974d471'],
'/etc/selinux/targeted/modules/active/modules/stunnel.pp':
['2693ac1bf08287565c3b4e58d0f9ea55'],
'/etc/selinux/targeted/modules/active/modules/ldap.pp':
['039baf0976f316c3f209a5661174a72e'],
'/etc/selinux/targeted/modules/active/modules/fail2ban.pp':
['ce13513c427ff140bf988b01bd52e886'],
'/etc/selinux/targeted/modules/active/modules/spamassassin.pp':
['e02232992676b0e1279c54bfeea290e3'],
'/etc/selinux/targeted/modules/active/modules/procmail.pp':
['d5c58e90fac452a1a6d68cc496e7f1ae'],
'/etc/selinux/targeted/modules/active/modules/afs.pp':
['6e7a4bf08dc7fa5a0f97577b913267ad'],
'/etc/selinux/targeted/modules/active/modules/ricci.pp':
['8b1d44245be204907c82c3580a43901d'],
'/etc/selinux/targeted/modules/active/modules/qmail.pp':
['ea08eb2172c275598d4f85c9b78182cd'],
'/etc/selinux/targeted/modules/active/modules/ccs.pp':
['cad223d57f431e2f88a1d1542c2ac504'],
'/etc/selinux/targeted/modules/active/modules/audioentropy.pp':
['19f6fd5e3ee2a3726a952631e993a133'],
'/etc/selinux/targeted/modules/active/modules/ncftool.pp':
['c15f4833a21e9c8cd1237ee568aadcf3'],
'/etc/selinux/targeted/modules/active/modules/nx.pp':
['3677983206101cfcd2182e180ef3876b'],
'/etc/selinux/targeted/modules/active/modules/rtkit.pp':
['0eaae15f4c12522270b26769487a06e0'],
'/etc/selinux/targeted/modules/active/modules/ntp.pp':
['141339ee3372e07d32575c6777c8e466'],
'/etc/selinux/targeted/modules/active/modules/likewise.pp':
['b5f0d18f8b601e102fd9728fbb309692'],
'/etc/selinux/targeted/modules/active/modules/aide.pp':
['69600bc8a529f8128666a563c7409929'],
'/etc/selinux/targeted/modules/active/modules/nslcd.pp':
['5c87b1c80bdd8bbf60c33ef51a765a93'],
'/etc/selinux/targeted/modules/active/modules/slocate.pp':
['fdea88c374382f3d652a1ac529fbd189'],
'/etc/selinux/targeted/modules/active/modules/execmem.pp':
['44cc2d117e3bf1a33d4e3516aaa7339d'],
'/etc/selinux/targeted/modules/active/modules/cpufreqselector.pp':
['7da9c9690dc4f076148ef35c3644af13'],
'/etc/selinux/targeted/modules/active/modules/cmirrord.pp':
['084b532fa5ccd6775c483d757bcd0920'],
'/etc/selinux/targeted/modules/active/modules/bind.pp':
['5560f5706c8c8e83d8a2ac03a85b93fb'],
'/etc/selinux/targeted/modules/active/modules/uml.pp':
['a0841bc9ffca619fe5d44c557b70d258'],
'/etc/selinux/targeted/modules/active/modules/staff.pp':
['bdf16ee0fa0721770aa31c52e45227c3'],
'/etc/selinux/targeted/modules/active/modules/certmaster.pp':
['bc589a4f0dd49a05d52b9ffda7bdd149'],
'/etc/selinux/targeted/modules/active/modules/webalizer.pp':
['c99ccad469be3c901ede9da9a87e44b2'],
'/etc/selinux/targeted/modules/active/modules/hal.pp':
['c75783ec2dd49d437a242e0c69c31c96'],
'/etc/selinux/targeted/modules/active/modules/kdump.pp':
['d731820c7b5bb711566ea23970106b7a'],
'/etc/selinux/targeted/modules/active/modules/firewallgui.pp':
['ee3522a0072989ed08f70b03f7fd69d9'],
'/etc/selinux/targeted/modules/active/modules/tcpd.pp':
['b1f7db819812da14c4e836a9d9e79980'],
'/etc/selinux/targeted/modules/active/modules/mailman.pp':
['4116cbe11d943a076dd06cea91993745'],
'/etc/selinux/targeted/modules/active/modules/smartmon.pp':
['45d6440b436d8ac3f042e80c392dd672'],
'/etc/selinux/targeted/modules/active/modules/smoltclient.pp':
['dcfd6ecd62ee7191abda39315ec6ef1b'],
'/etc/selinux/targeted/modules/active/modules/kerberos.pp':
['936533081cfbe28eb9145fde86edb4f8'],
'/etc/selinux/targeted/modules/active/modules/lockdev.pp':
['e2da620d3272f296dd90bff8b921d203'],
'/etc/selinux/targeted/modules/active/modules/automount.pp':
['a06d3d617c6d8c29e29ce3fb0db48c9c'],
'/etc/selinux/targeted/modules/active/modules/webadm.pp':
['4ac9b2f95f8d8218ec93f001995fd8ba'],
'/etc/selinux/targeted/modules/active/modules/pyzor.pp':
['c2b00c08d77d7d5a8588dd82c489e354'],
'/etc/selinux/targeted/modules/active/modules/rssh.pp':
['aacef6c826e9d699e84a1dd564b68105'],
'/etc/selinux/targeted/modules/active/modules/nsplugin.pp':
['0c90d308f5e956900150eb6ed84b0b54'],
'/etc/selinux/targeted/modules/active/modules/lpd.pp':
['5bf17a46aa2d3e2ecc0daffcf092054e'],
'/etc/selinux/targeted/modules/active/modules/dcc.pp':
['84749af337d72ba6bbbe54b013c6c62c'],
'/etc/selinux/targeted/modules/active/modules/irc.pp':
['42897f214251c7ca9bc04379c4abff5e'],
'/etc/selinux/targeted/modules/active/modules/icecast.pp':
['962c81fc8ef5fd49c925a2249d229d1d'],
'/etc/selinux/targeted/modules/active/modules/dnsmasq.pp':
['ec4a8a50eb5806e450d97a77cbe8a8b4'],
'/etc/selinux/targeted/modules/active/modules/jabber.pp':
['5a528d52f7337d44bfc867333f2b1921'],
'/etc/selinux/targeted/modules/active/modules/remotelogin.pp':
['68c22a0bc6e4d5031153cf10d75ba76a'],
'/etc/selinux/targeted/modules/active/modules/boinc.pp':
['a70386e9ffdaccd04cbb565e6fe5c822'],
'/etc/selinux/targeted/modules/active/modules/mrtg.pp':
['7e6f395e72768d350d259c15d22a1cbb'],
'/etc/selinux/targeted/modules/active/modules/snmp.pp':
['fc5166e3066504601037054874fe0487'],
'/etc/selinux/targeted/modules/active/modules/cyrus.pp':
['d2e792bf111ce4a6ffdb87fe11d89d16'],
'/etc/selinux/targeted/modules/active/modules/dovecot.pp':
['b716de8b77f0dfeb9212d5cf36bddfa1'],
'/etc/selinux/targeted/modules/active/modules/cdrecord.pp':
['24c0325480e2f1d6cf1ce31c25d5f10a'],
'/etc/selinux/targeted/modules/active/modules/calamaris.pp':
['c7ec43f01369524db32249fb755f4e7f'],
'/etc/selinux/targeted/modules/active/modules/kerneloops.pp':
['2493d3308dfcd34e94308af9d5c888c3'],
'/etc/selinux/targeted/modules/active/modules/razor.pp':
['06425e50a31f14cec090c30e05fb9827'],
'/etc/selinux/targeted/modules/active/netfilter_contexts':
['d41d8cd98f00b204e9800998ecf8427e'],
'/etc/selinux/targeted/modules/active/seusers.final':
['fdf1cdf1d373e4583ca759617a1d2af3'],
'/etc/selinux/targeted/modules/active/file_contexts.homedirs':
['d7c4747704e9021ec2e16c7139fedfd9'],
'/etc/selinux/targeted/modules/active/commit_num':
['c08cc266624f6409b01432dac9576ab0'],
'/etc/selinux/targeted/modules/active/policy.kern':
['5398a60f820803049b5bb7d90dd6196b'],
'/etc/selinux/targeted/modules/active/homedir_template':
['682a31c8036aaf9cf969093d7162960a'],
'/etc/selinux/targeted/modules/semanage.read.LOCK':
['d41d8cd98f00b204e9800998ecf8427e'],
'/etc/selinux/targeted/contexts/failsafe_context':
['940b12538b676287b3c33e68426898ac'],
'/etc/selinux/targeted/contexts/virtual_domain_context':
['1e28f1b8e58e56a64c852bd77f57d121'],
'/etc/selinux/targeted/contexts/removable_context':
['e56a6b14d2bed27405d2066af463df9f'],
'/etc/selinux/targeted/contexts/netfilter_contexts':
['d41d8cd98f00b204e9800998ecf8427e'],
'/etc/selinux/targeted/contexts/userhelper_context':
['53441d64f9bc6337e3aac33f05d0954c'],
'/etc/selinux/targeted/contexts/virtual_image_context':
['b21a69d3423d2e085d5195e25922eaa1'],
'/etc/selinux/targeted/contexts/securetty_types':
['ee2445f940ed1b33e778a921cde8ad9e'],
'/etc/selinux/targeted/contexts/default_type':
['d0f63fea19ee82e5f65bdbb1de899c5d'],
'/etc/selinux/targeted/contexts/dbus_contexts':
['b1c42884fa5bdbde53d64cff469374fd'],
'/etc/selinux/targeted/contexts/files/file_contexts':
['1622b57a3b85db3112c5f71238c68d3e'],
'/etc/selinux/targeted/contexts/files/file_contexts.homedirs':
['d7c4747704e9021ec2e16c7139fedfd9'],
'/etc/selinux/targeted/contexts/files/media':
['3c867677892c0a15dc0b9e9811cc2c49'],
'/etc/selinux/targeted/contexts/initrc_context':
['99866a62735a38b2bf839233c1a1689d'],
'/etc/selinux/targeted/contexts/x_contexts':
['9dde3f5e3ddac42b9e99a4613c972b97'],
'/etc/selinux/targeted/contexts/customizable_types':
['68be87281cf3d40cb2c4606cd2b1ea2b'],
'/etc/selinux/targeted/contexts/users/xguest_u':
['e26010a418df86902332c57434370246'],
'/etc/selinux/targeted/contexts/users/unconfined_u':
['ee88bed48d9601ff2b11f68f97d361ac'],
'/etc/selinux/targeted/contexts/users/staff_u':
['f3412f7cbf441078a9de40fcaab93254'],
'/etc/selinux/targeted/contexts/users/root':
['328e08341d1ff9296573dd43c355e283'],
'/etc/selinux/targeted/contexts/users/user_u':
['2fe911f440282fda0590cd99540da579'],
'/etc/selinux/targeted/contexts/users/guest_u':
['61e7e7e7403b2eac30e312342e66e4cd'],
'/etc/selinux/targeted/contexts/default_contexts':
['0888c75fc814058bb3c01ef58f7a1f47'],
'/etc/selinux/targeted/policy/policy.24':
['5398a60f820803049b5bb7d90dd6196b'],
'/etc/selinux/targeted/setrans.conf':
['ae70362b6fa2af117bd6e293ce232069'],
'/etc/selinux/targeted/seusers':
['fdf1cdf1d373e4583ca759617a1d2af3'],
'/etc/selinux/config': ['91081ef6d958e79795d0255d7c374a56'],
'/etc/selinux/restorecond_user.conf':
['4e1b5b5e38c660f87d5a4f7d3a998c29'],
'/etc/selinux/semanage.conf': ['f33b524aef1a4df2a3d0eecdda041a5c'],
'/etc/sgml/xml-core.cat': ['bcd454c9bf55a3816a134f9766f5928f'],
'/etc/shells': ['0e85c87e09d716ecb03624ccff511760'],
'/etc/ssh/sshd_config': ['e24f749808133a27d94fda84a89bb27b',
'8caefdd9e251b7cc1baa37874149a870',
'874fafed9e745b14e5fa8ae71b82427d'],
'/etc/sudoers': ['02f74ccbec48997f402a063a172abb48'],
'/etc/ufw/after.rules': ['/usr/share/ufw/after.rules'],
'/etc/ufw/after6.rules': ['/usr/share/ufw/after6.rules'],
'/etc/ufw/before.rules': ['/usr/share/ufw/before.rules'],
'/etc/ufw/before6.rules': ['/usr/share/ufw/before6.rules'],
'/etc/ufw/ufw.conf': ['/usr/share/ufw/ufw.conf']}
for pathname, overrides in MD5SUMS.iteritems():
for i in range(len(overrides)):
if '/' != overrides[i][0]:
continue
try:
overrides[i] = hashlib.md5(open(overrides[i]).read()).hexdigest()
except IOError:
pass
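The loop above normalizes MD5SUMS in place: any override entry that is itself an absolute pathname (the `/etc/ufw` entries, for example) is replaced by the MD5 of that file's current contents. A minimal self-contained sketch of the same normalization, using a hypothetical in-memory table and a stubbed reader instead of real files:

```python
import hashlib

# Hypothetical table: values are literal MD5 digests or pathnames to hash.
sums = {'/etc/example.conf': ['d41d8cd98f00b204e9800998ecf8427e',
                              '/usr/share/example/example.conf']}

def normalize(table, read=lambda pathname: b''):
    """Replace pathname entries with the MD5 of the referenced content."""
    for overrides in table.values():
        for i, value in enumerate(overrides):
            if value.startswith('/'):
                overrides[i] = hashlib.md5(read(value)).hexdigest()

normalize(sums)  # both entries now equal the MD5 of the empty file
```

The stubbed `read` always returns empty bytes here; the real loop reads the target file and silently skips it on `IOError`.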
def files(b, r):
logging.info('searching for configuration files')
# Visit every file in `/etc` except those on the exclusion list above.
for dirpath, dirnames, filenames in os.walk('/etc'):
# Determine if this entire directory should be ignored by default.
ignored = r.ignore_file(dirpath)
# Collect up the full pathname to each file, `lstat` them all, and
# note which ones will probably be ignored.
files = []
for filename in filenames:
pathname = os.path.join(dirpath, filename)
try:
files.append((pathname,
os.lstat(pathname),
r.ignore_file(pathname, ignored)))
except OSError as e:
logging.warning('{0} caused {1} - try running as root'.
format(pathname, errno.errorcode[e.errno]))
# Track the ctime of each file in this directory. Weed out false
# positives by ignoring files with common ctimes.
ctimes = defaultdict(lambda: 0)
# Map the ctimes of each directory entry that isn't being ignored.
for pathname, s, ignored in files:
if not ignored:
ctimes[s.st_ctime] += 1
for dirname in dirnames:
try:
ctimes[os.lstat(os.path.join(dirpath, dirname)).st_ctime] += 1
except OSError:
pass
for pathname, s, ignored in files:
# Always ignore block special files, character special files,
# pipes, and sockets. They end up looking like deadlocks.
if stat.S_ISBLK(s.st_mode) \
or stat.S_ISCHR(s.st_mode) \
or stat.S_ISFIFO(s.st_mode) \
or stat.S_ISSOCK(s.st_mode):
continue
# Make sure this pathname will actually be able to be included
# in the blueprint. This is a bit of a cop-out since the file
# could be important but at least it's not a crashing bug.
try:
pathname = unicode(pathname)
except UnicodeDecodeError:
logging.warning('{0} not UTF-8 - skipping it'.
format(repr(pathname)[1:-1]))
continue
# Ignore ignored files and files that share their ctime with other
# files in the directory. This is a very strong indication that
# the file is original to the system and should be ignored.
if ignored \
or 1 < ctimes[s.st_ctime] and r.ignore_file(pathname, True):
continue
# Check for a Mustache template and an optional shell script
# that templatize this file.
try:
template = open(
'{0}.blueprint-template.mustache'.format(pathname)).read()
except IOError:
template = None
try:
data = open(
'{0}.blueprint-template.sh'.format(pathname)).read()
except IOError:
data = None
# The content is used even for symbolic links to determine whether
# it has changed from the packaged version.
try:
content = open(pathname).read()
except IOError:
#logging.warning('{0} not readable'.format(pathname))
continue
# Ignore files that are unchanged from their packaged version.
if _unchanged(pathname, content, r):
continue
# Resolve the rest of the file's metadata from the
# `/etc/passwd` and `/etc/group` databases.
try:
pw = pwd.getpwuid(s.st_uid)
owner = pw.pw_name
except KeyError:
owner = s.st_uid
try:
gr = grp.getgrgid(s.st_gid)
group = gr.gr_name
except KeyError:
group = s.st_gid
mode = '{0:o}'.format(s.st_mode)
# A symbolic link's content is the link target.
if stat.S_ISLNK(s.st_mode):
content = os.readlink(pathname)
# Ignore symbolic links providing backwards compatibility
# between SystemV init and Upstart.
if '/lib/init/upstart-job' == content:
continue
# Ignore symbolic links into the Debian alternatives system.
# These are almost certainly managed by packages.
if content.startswith('/etc/alternatives/'):
continue
b.add_file(pathname,
content=content,
encoding='plain',
group=group,
mode=mode,
owner=owner)
# A regular file is stored as plain text only if it is valid
# UTF-8, which is required for JSON serialization.
else:
kwargs = dict(group=group,
mode=mode,
owner=owner)
try:
if template:
if data:
kwargs['data'] = data.decode('utf_8')
kwargs['template'] = template.decode('utf_8')
else:
kwargs['content'] = content.decode('utf_8')
kwargs['encoding'] = 'plain'
except UnicodeDecodeError:
if template:
if data:
kwargs['data'] = base64.b64encode(data)
kwargs['template'] = base64.b64encode(template)
else:
kwargs['content'] = base64.b64encode(content)
kwargs['encoding'] = 'base64'
b.add_file(pathname, **kwargs)
            # If this file is a service init script or config, create a
# service resource.
try:
manager, service = util.parse_service(pathname)
if not r.ignore_service(manager, service):
b.add_service(manager, service)
b.add_service_package(manager,
service,
'apt',
*_dpkg_query_S(pathname))
b.add_service_package(manager,
service,
'yum',
*_rpm_qf(pathname))
except ValueError:
pass
def _dpkg_query_S(pathname):
"""
Return a list of package names that contain `pathname` or `[]`. This
really can be a list thanks to `dpkg-divert`(1).
"""
# Cache the pathname-to-package mapping.
if not hasattr(_dpkg_query_S, '_cache'):
_dpkg_query_S._cache = defaultdict(set)
cache_ref = _dpkg_query_S._cache
for listname in glob.iglob('/var/lib/dpkg/info/*.list'):
package = os.path.splitext(os.path.basename(listname))[0]
for line in open(listname):
cache_ref[line.rstrip()].add(package)
# Return the list of packages that contain this file, if any.
if pathname in _dpkg_query_S._cache:
return list(_dpkg_query_S._cache[pathname])
# If `pathname` isn't in a package but is a symbolic link, see if the
# symbolic link is in a package. `postinst` programs commonly display
# this pattern.
try:
return _dpkg_query_S(os.readlink(pathname))
except OSError:
pass
return []
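`_dpkg_query_S` answers "which package installed this file?" by inverting every dpkg `.list` manifest into a pathname-to-packages map. The inversion in isolation, over hypothetical in-memory manifests instead of `/var/lib/dpkg/info`:

```python
from collections import defaultdict

# Hypothetical dpkg .list manifests: one installed pathname per line.
manifests = {'openssh-server': '/etc\n/etc/ssh\n/etc/ssh/sshd_config\n',
             'openssh-sftp-server': '/etc\n'}

# Invert package -> pathnames into pathname -> {packages}, like the cache.
cache = defaultdict(set)
for package, body in manifests.items():
    for line in body.splitlines():
        cache[line.rstrip()].add(package)

print(sorted(cache['/etc']))                  # shared by both packages
print(sorted(cache['/etc/ssh/sshd_config']))  # owned by one package
```

A set (rather than a single name) per pathname is what lets `dpkg-divert`(1)-style multiple ownership fall out naturally.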
def _dpkg_md5sum(package, pathname):
"""
Find the MD5 sum of the packaged version of pathname or `None` if the
`pathname` does not come from a Debian package.
"""
# Cache any MD5 sums stored in the status file. These are typically
# conffiles and the like.
if not hasattr(_dpkg_md5sum, '_status_cache'):
_dpkg_md5sum._status_cache = {}
cache_ref = _dpkg_md5sum._status_cache
try:
pattern = re.compile(r'^ (\S+) ([0-9a-f]{32})')
for line in open('/var/lib/dpkg/status'):
match = pattern.match(line)
if not match:
continue
cache_ref[match.group(1)] = match.group(2)
except IOError:
pass
# Return this file's MD5 sum, if it can be found.
try:
return _dpkg_md5sum._status_cache[pathname]
except KeyError:
pass
# Cache the MD5 sums for files in this package.
if not hasattr(_dpkg_md5sum, '_cache'):
_dpkg_md5sum._cache = defaultdict(dict)
if package not in _dpkg_md5sum._cache:
cache_ref = _dpkg_md5sum._cache[package]
try:
for line in open('/var/lib/dpkg/info/{0}.md5sums'.format(package)):
md5sum, rel_pathname = line.split(None, 1)
cache_ref['/{0}'.format(rel_pathname.rstrip())] = md5sum
except IOError:
pass
# Return this file's MD5 sum, if it can be found.
try:
return _dpkg_md5sum._cache[package][pathname]
except KeyError:
pass
return None
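The per-package cache above is built from dpkg's `.md5sums` manifests, which pair a digest with a pathname relative to `/`. Parsing that format, with a hypothetical payload (digests borrowed from the MD5SUMS table earlier in this module):

```python
# Hypothetical dpkg .md5sums payload: "<md5>  <pathname relative to />".
md5sums_body = (
    'e24f749808133a27d94fda84a89bb27b  etc/ssh/sshd_config\n'
    '0e85c87e09d716ecb03624ccff511760  etc/shells\n')

cache = {}
for line in md5sums_body.splitlines():
    md5sum, rel_pathname = line.split(None, 1)
    cache['/{0}'.format(rel_pathname.rstrip())] = md5sum

print(cache['/etc/ssh/sshd_config'])
```

`split(None, 1)` tolerates the variable run of whitespace between the two columns, and the leading `/` restores an absolute pathname for lookups.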
def _rpm_qf(pathname):
"""
Return a list of package names that contain `pathname` or `[]`. RPM
might not actually support a single pathname being claimed by more
than one package but `dpkg` does so the interface is maintained.
"""
try:
p = subprocess.Popen(['rpm', '--qf=%{NAME}', '-qf', pathname],
close_fds=True,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
except OSError:
return []
stdout, stderr = p.communicate()
if 0 != p.returncode:
return []
return [stdout]
def _rpm_md5sum(pathname):
"""
Find the MD5 sum of the packaged version of pathname or `None` if the
`pathname` does not come from an RPM.
"""
if not hasattr(_rpm_md5sum, '_cache'):
_rpm_md5sum._cache = {}
symlinks = []
try:
p = subprocess.Popen(['rpm', '-qa', '--dump'],
close_fds=True,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
pattern = re.compile(r'^(/etc/\S+) \d+ \d+ ([0-9a-f]+) ' # No ,
'(0\d+) \S+ \S+ \d \d \d (\S+)$')
for line in p.stdout:
match = pattern.match(line)
if match is None:
continue
if '0120777' == match.group(3):
symlinks.append((match.group(1), match.group(4)))
else:
_rpm_md5sum._cache[match.group(1)] = match.group(2)
# Find the MD5 sum of the targets of any symbolic links, even
# if the target is outside of /etc.
pattern = re.compile(r'^(/\S+) \d+ \d+ ([0-9a-f]+) ' # No ,
'(0\d+) \S+ \S+ \d \d \d (\S+)$')
for pathname, target in symlinks:
if '/' != target[0]:
target = os.path.normpath(os.path.join(
os.path.dirname(pathname), target))
if target in _rpm_md5sum._cache:
_rpm_md5sum._cache[pathname] = _rpm_md5sum._cache[target]
else:
p = subprocess.Popen(['rpm', '-qf', '--dump', target],
close_fds=True,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
for line in p.stdout:
match = pattern.match(line)
if match is not None and target == match.group(1):
_rpm_md5sum._cache[pathname] = match.group(2)
except OSError:
pass
return _rpm_md5sum._cache.get(pathname, None)
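Each `rpm --dump` record packs path, size, mtime, digest, mode, owner, group, three flags, and the symlink target into one whitespace-separated line; the expressions above pull out the path, digest, mode (`0120777` flags a symlink), and link target. Applied to a single hypothetical record:

```python
import re

# One hypothetical line in the dump format the code above assumes:
# path size mtime digest mode owner group isconfig isdoc rdev link-target
line = ('/etc/ssh/sshd_config 3316 1600000000 '
        'e24f749808133a27d94fda84a89bb27b 0100600 root root 1 0 0 X')

pattern = re.compile(r'^(/etc/\S+) \d+ \d+ ([0-9a-f]+) '
                     r'(0\d+) \S+ \S+ \d \d \d (\S+)$')
match = pattern.match(line)
print(match.group(1), match.group(2), match.group(3), match.group(4))
```

`X` is the placeholder rpm uses for a non-symlink's target, so group 4 only carries information when the mode group says symlink.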
def _unchanged(pathname, content, r):
"""
Return `True` if a file is unchanged from its packaged version.
"""
# Ignore files that are from the `base-files` package (which
# doesn't include MD5 sums for every file for some reason).
apt_packages = _dpkg_query_S(pathname)
if 'base-files' in apt_packages:
return True
# Ignore files that are unchanged from their packaged version,
# or match in MD5SUMS.
    # Copy before extending so the shared MD5SUMS lists aren't mutated
    # on every call.
    md5sums = list(MD5SUMS.get(pathname, []))
    md5sums.extend([_dpkg_md5sum(package, pathname)
                    for package in apt_packages])
md5sum = _rpm_md5sum(pathname)
if md5sum is not None:
md5sums.append(md5sum)
if (hashlib.md5(content).hexdigest() in md5sums \
or 64 in [len(md5sum or '') for md5sum in md5sums] \
and hashlib.sha256(content).hexdigest() in md5sums) \
and r.ignore_file(pathname, True):
return True
return False
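`_unchanged` reduces to one question: does the file's current digest appear among the candidate checksums — MD5 always, SHA-256 as well when a 64-character candidate is present? That comparison as a standalone sketch (the `r.ignore_file` gating is omitted):

```python
import hashlib

def is_unchanged(content, candidate_sums):
    """True if `content` hashes to any candidate checksum."""
    digests = {hashlib.md5(content).hexdigest()}
    if any(len(s or '') == 64 for s in candidate_sums):
        # A 64-character candidate is hex SHA-256, so hash that way too.
        digests.add(hashlib.sha256(content).hexdigest())
    return any(s in digests for s in candidate_sums)

packaged = ['d41d8cd98f00b204e9800998ecf8427e']  # MD5 of the empty file
print(is_unchanged(b'', packaged))
print(is_unchanged(b'modified\n', packaged))
```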
5bc9e6c226f00a75b8cdcda4f0faf8b1dc74f397 | 429 | py | Python | quickforex/providers/__init__.py | jean-edouard-boulanger/python-quickforex | be95a795719f8bb77eceeb543d2ab963182b3f56 | ["MIT"]
from quickforex.providers.base import ProviderBase
from quickforex.providers.provider_metadata import (
ProviderMetadata,
SettingFieldDescription,
)
from quickforex.providers.exchangerate_host import ExchangeRateHostProvider
from quickforex.providers.dummy import DummyProvider
__all__ = [
"ProviderBase",
"ExchangeRateHostProvider",
"DummyProvider",
"ProviderMetadata",
"SettingFieldDescription",
]
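`__all__` here pins down exactly which names `from quickforex.providers import *` re-exports. A quick throwaway-module demonstration of that behavior (the `demo_pkg` module and its attributes are invented for the demo):

```python
import sys
import types

# Build a throwaway module with a public name, a private name, and __all__.
mod = types.ModuleType("demo_pkg")
mod.public = 1
mod._internal = 2
mod.__all__ = ["public"]
sys.modules["demo_pkg"] = mod

ns = {}
exec("from demo_pkg import *", ns)
print("public" in ns, "_internal" in ns)  # only __all__ names are starred in
```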
5bd0173040b2f2d05f3a1e8d1fe723aa238abeb6 | 421 | py | Python | Windows10_structed_data/get_report_data.py | naporium/tools_for_windows-main | 4be11a033b5b5ee39c1d4af2e2013e7c5b144772 | ["bzip2-1.0.6"]
from json import load, dumps
REPORT_FILE = "report.txt"
with open(REPORT_FILE, "r") as file:
data = load(file)
# EXAMPLE OF ACCESSING THE JSON DATA REPORT
print(dumps(data["INTERFACES_INFO"]["Ethernet adapter Ethernet"], indent=4))
print(dumps(data["INTERFACES_INFO"]["Ethernet adapter Ethernet"]["IPv4 Address"]))
print(dumps(data["INTERFACES_INFO"]["Ethernet adapter Ethernet"]["Physical Address"]))
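The chained `data[...][...]` lookups above raise `KeyError` if `report.txt` lacks any of the keys; `dict.get` with a default degrades gracefully. A self-contained sketch with a hypothetical report structure mirroring the keys used here:

```python
import json

# Hypothetical report mirroring the structure the script expects.
report = {"INTERFACES_INFO": {"Ethernet adapter Ethernet": {
    "IPv4 Address": "192.168.1.10", "Physical Address": "00-11-22-33-44-55"}}}

adapter = report.get("INTERFACES_INFO", {}).get("Ethernet adapter Ethernet", {})
print(json.dumps(adapter, indent=4))
print(adapter.get("IPv4 Address", "unknown"))
print(adapter.get("IPv6 Address", "unknown"))  # missing key -> default
```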
5bd463296b1effec64cbf25d42163fc58e8ce63e | 259 | py | Python | setup.py | kevinmooreiii/moldriver | 65a0a9cef8971737076f81720a61aa5b607333d2 | ["Apache-2.0"]
""" Install moldriver
"""
from distutils.core import setup
setup(name="moldr",
version="0.1.1",
packages=["moldr",
"autofile",
"autofile.info",
"autofile.file",
"autofile.system"])
5bed867c2db74e773ad7534adb7486f3bfecb2cd | 707 | py | Python | calculator/calculator.py | swilltec/calculator | 3c5e076a996e53117ba19ef438369cfe8753a3cc | ["MIT"]
"""Main module."""
from functools import reduce
class Calc:
def add(self, *args):
return sum(args)
def subtract(self, a, b):
return a - b
def multiply(self, *args):
if not all(args):
raise ValueError
return reduce(lambda x, y: x*y, args)
def divide(self, a, b):
try:
return a / b
        except ZeroDivisionError:
return "inf"
def avg(self, args, ut=None, lt=None):
_args = args[:]
if ut:
_args = [x for x in _args if x <= ut]
if lt:
_args = [x for x in _args if x >= lt]
if not len(_args):
return 0
return sum(_args)/len(_args)
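`multiply` folds its arguments with `functools.reduce`, and `avg` filters by the optional upper/lower thresholds before averaging. The same two steps in isolation, with assumed sample values:

```python
from functools import reduce

# Folding a product, as multiply() does.
values = (2, 3, 4)
product = reduce(lambda x, y: x * y, values)
print(product)  # 24

# Threshold filtering before averaging, as avg() does with ut/lt.
args, upper = [1, 2, 3, 10], 5
kept = [x for x in args if x <= upper]
print(sum(kept) / len(kept))  # 2.0
```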
5bf631e50626c44c993a9984bac16fda807b9125 | 596 | py | Python | fullcontact/schema/company_schema.py | michaelcredera/fullcontact-python-client | 482970b00b134409e6c9f303e7c2a7a6fc4a4685 | ["Apache-2.0"] | 8 | 2020-04-13T15:53:43.000Z | 2022-02-04T07:37:17.000Z
# -*- coding: utf-8 -*-
"""
This module serves the class for validating
FullContact Company Enrich and Search API requests.
"""
from .base.schema_base import BaseRequestSchema
class CompanyEnrichRequestSchema(BaseRequestSchema):
schema_name = "Company Enrich"
domain: str
webhookUrl: str
required_fields = ("domain",)
class CompanySearchRequestSchema(BaseRequestSchema):
schema_name = "Company Search"
companyName: str
sort: str
location: str
locality: str
region: str
country: str
webhookUrl: str
required_fields = ("companyName",)
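`BaseRequestSchema` lives in `schema_base` and is not shown in this file, so the stand-in below is only a sketch of the pattern the two subclasses rely on: declared fields plus a `required_fields` tuple checked against an incoming query.

```python
class BaseRequestSchema:
    """Hypothetical stand-in for the real base class in schema_base."""
    required_fields = ()

    @classmethod
    def missing_fields(cls, query):
        # Report every required field absent from the query dict.
        return [field for field in cls.required_fields if field not in query]


class CompanyEnrichRequestSchema(BaseRequestSchema):
    required_fields = ("domain",)


print(CompanyEnrichRequestSchema.missing_fields({"domain": "fullcontact.com"}))
print(CompanyEnrichRequestSchema.missing_fields({"webhookUrl": "https://example.com"}))
```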
75079e3853a4f1c859f848cdaeaff5686ff6bb5e | 934 | py | Python | setup.py | sfable/malias | 6a5f6dc17eb42b6579e7981696427cf9deae56fb | ["MIT"]
from setuptools import setup, find_packages
from malias import __version__
setup(
name = "malias",
version = __version__,
author = 'Stuart Fable',
author_email = 'stuart.fable@gmail.com',
license = 'MIT',
keywords = 'malias system alias',
description = '',
url = 'https://github.com/sfable/malias',
download_url = 'https://github.com/sfable/malias/archive/master.zip',
packages = find_packages(),
install_requires = ['docopt'],
zip_safe = True,
entry_points = {
'console_scripts': ['malias=malias.cli:main']
},
classifiers = [
'Development Status :: 3 - Alpha',
'Intended Audience :: Developers',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4'
]
)
750918e8872a1b7a4b9376e1b5490cc0edf34be0 | 210 | py | Python | hjhj.py | jatinchaudhary/python_dump | 5f7d63237fcb96a66bc3c4151599c7849f0f2735 | ["bzip2-1.0.6"]
a=int(input())
b=int(input())
c=int(input())
if(a>=b and a>=c):
print("number 1 is greatest")
elif(b>=a and b>=c):
print("number 2 is greatest")
else:
print("number 3 is greatest")
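The same three-way choice can be written with the built-in `max()`, which also behaves sensibly when two inputs are equal — a sketch with a hypothetical `greatest` helper, not part of the script:

```python
def greatest(a, b, c):
    # max() handles any number of arguments, ties included.
    return max(a, b, c)

print(greatest(4, 9, 2))  # 9
print(greatest(7, 7, 3))  # 7
```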
750a3aa09148f31ca78defa66da3c665dedc7552 | 695 | py | Python | Python with functions/W11-12/week 12/Activities/fruit.py | marcosamos/Python-tasks-and-proyects | 00426323647639016a407c40af1fd00f35ea2229 | ["MIT"]
def main():
    # Create and print a list named fruit_list.
fruit_list = ["pear", "banana", "apple", "mango"]
print(f"original: {fruit_list}")
fruit_list.reverse()
print(f"Reverse {fruit_list}")
fruit_list.append("Orange")
print(f"Append Orange {fruit_list}")
pos = fruit_list.index("apple")
fruit_list.insert(pos, "cherry")
print(f"insert cherry: {fruit_list}")
fruit_list.remove("banana")
print(f"Remove Banana: {fruit_list}")
last = fruit_list.pop()
print(f"pop {last}: {fruit_list}")
fruit_list.sort()
print(f"Sorted: {fruit_list}")
fruit_list.clear()
print(f"cleared: {fruit_list}")
if __name__ == "__main__":
main() | 24.821429 | 53 | 0.635971 | 94 | 695 | 4.43617 | 0.329787 | 0.366906 | 0.167866 | 0.215827 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.195683 | 695 | 28 | 54 | 24.821429 | 0.745975 | 0.051799 | 0 | 0 | 0 | 0 | 0.361702 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0 | 0 | 0.05 | 0.4 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
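One subtlety worth noting from the script above: `list.insert(pos, x)` places `x` before the element currently at `pos`, so cherry ends up ahead of apple. A minimal standalone check (hypothetical list):

```python
fruits = ["pear", "banana", "apple", "mango"]
pos = fruits.index("apple")       # index of "apple" is 2
fruits.insert(pos, "cherry")      # "cherry" is inserted before "apple"
```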
75226d063cbc9f7c130c7e84b92141262adc03ae | 293 | py | Python | prepoint/__init__.py | edose/prepoint | 1239ea3962cc33c4d640bb4ba8b1b00e032b74d4 | [
"MIT"
] | null | null | null | prepoint/__init__.py | edose/prepoint | 1239ea3962cc33c4d640bb4ba8b1b00e032b74d4 | [
"MIT"
] | null | null | null | prepoint/__init__.py | edose/prepoint | 1239ea3962cc33c4d640bb4ba8b1b00e032b74d4 | [
"MIT"
] | null | null | null | __author__ = "Eric Dose :: New Mexico Mira Project, Albuquerque"
# PyInstaller (__init__.py file should be in place as peer to .py file to run):
# in Windows Command Prompt (E. Dose dev PC):
# cd c:\Dev\prepoint\prepoint
# C:\Programs\Miniconda\Scripts\pyinstaller app.spec
| 36.625 | 79 | 0.699659 | 43 | 293 | 4.581395 | 0.790698 | 0.060914 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.201365 | 293 | 7 | 80 | 41.857143 | 0.84188 | 0.740614 | 0 | 0 | 0 | 0 | 0.7 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
7523e5848177e8777c98952e7fd199b4781b9cbb | 1,329 | py | Python | setup.py | merwok-forks/shortuuid | 4da632a986c3a43f75c7df64f27a90bbf7ff8039 | [
"BSD-3-Clause"
] | null | null | null | setup.py | merwok-forks/shortuuid | 4da632a986c3a43f75c7df64f27a90bbf7ff8039 | [
"BSD-3-Clause"
] | null | null | null | setup.py | merwok-forks/shortuuid | 4da632a986c3a43f75c7df64f27a90bbf7ff8039 | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python
import sys
from setuptools import setup
from shortuuid import __version__
assert sys.version_info >= (2, 5), "Requires Python v2.5 or above."
classifiers = [
    "License :: OSI Approved :: BSD License",
    "Programming Language :: Python",
    "Programming Language :: Python :: 2.5",
    "Programming Language :: Python :: 2.6",
    "Programming Language :: Python :: 2.7",
    "Programming Language :: Python :: 3.3",
    "Programming Language :: Python :: 3.4",
    "Programming Language :: Python :: 3.5",
    "Programming Language :: Python :: 3.6",
    "Programming Language :: Python :: 3.7",
    "Topic :: Software Development :: Libraries :: Python Modules",
]

setup(
    name="shortuuid",
    version=__version__,
    author="Stochastic Technologies",
    author_email="info@stochastictechnologies.com",
    url="https://github.com/stochastic-technologies/shortuuid/",
    description="A generator library for concise, unambiguous and URL-safe UUIDs.",
    long_description="A library that generates short, pretty, "
    "unambiguous unique IDs "
    "by using an extensive, case-sensitive alphabet and omitting "
    "similar-looking letters and numbers.",
    license="BSD",
    classifiers=classifiers,
    packages=["shortuuid"],
    test_suite="shortuuid.tests",
    tests_require=[],
)
| 31.642857 | 86 | 0.676448 | 148 | 1,329 | 5.993243 | 0.533784 | 0.192785 | 0.253664 | 0.146561 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018536 | 0.188111 | 1,329 | 41 | 87 | 32.414634 | 0.803522 | 0.015049 | 0 | 0 | 0 | 0 | 0.629205 | 0.0237 | 0 | 0 | 0 | 0 | 0.029412 | 1 | 0 | false | 0 | 0.088235 | 0 | 0.088235 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
7537d9ff6f54603cb0471062e3b58878836382b5 | 4,362 | py | Python | moe/optimal_learning/python/interfaces/domain_interface.py | misokg/Cornell-MOE | 1547d6b168b7fc70857d522baa0d5d45c41d3cdf | [
"Apache-2.0"
] | 218 | 2017-10-14T03:54:00.000Z | 2022-03-25T14:48:38.000Z | moe/optimal_learning/python/interfaces/domain_interface.py | Tracy3370/Cornell-MOE | df299d1be882d2af9796d7a68b3f9505cac7a53e | [
"Apache-2.0"
] | 45 | 2017-09-27T14:33:31.000Z | 2020-12-16T09:32:50.000Z | moe/optimal_learning/python/interfaces/domain_interface.py | Tracy3370/Cornell-MOE | df299d1be882d2af9796d7a68b3f9505cac7a53e | [
"Apache-2.0"
] | 63 | 2017-09-25T14:23:57.000Z | 2022-03-17T01:41:42.000Z | # -*- coding: utf-8 -*-
"""Interface for a domain: in/out test, random point generation, and update limiting (for constrained optimization)."""
from builtins import object
from abc import ABCMeta, abstractmethod, abstractproperty
from future.utils import with_metaclass
class DomainInterface(with_metaclass(ABCMeta, object)):
    """Interface for a domain: in/out test, random point generation, and update limiting (for constrained optimization)."""

    @abstractproperty
    def dim(self):
        """Return the number of spatial dimensions."""
        pass

    @abstractmethod
    def check_point_inside(self, point):
        r"""Check if a point is inside the domain/on its boundary or outside.

        :param point: point to check
        :type point: array of float64 with shape (dim)
        :return: true if point is inside the domain
        :rtype: bool

        """
        pass

    @abstractmethod
    def get_bounding_box(self):
        """Return a list of ClosedIntervals representing a bounding box for this domain."""
        pass

    @abstractmethod
    def get_constraint_list(self):
        """Return a list of lambda functions expressing the domain bounds as linear constraints. Used by COBYLA.

        :return: a list of lambda functions corresponding to constraints
        :rtype: array of lambda functions with shape (dim * 2)

        """
        pass

    @abstractmethod
    def generate_random_point_in_domain(self, random_source=None):
        """Generate ``point`` uniformly at random such that ``self.check_point_inside(point)`` is True.

        .. Note:: if you need multiple points, use generate_uniform_random_points_in_domain instead;
          depending on implementation, it may yield better distributions over many points. For example,
          tensor product type domains use latin hypercube sampling instead of repeated random draws
          which guarantees that no non-uniform clusters may arise (in subspaces) versus this method
          which treats all draws independently.

        :return: point in domain
        :rtype: array of float64 with shape (dim)

        """
        pass

    @abstractmethod
    def generate_uniform_random_points_in_domain(self, num_points, random_source):
        r"""Generate AT MOST ``num_points`` uniformly distributed points from the domain.

        .. NOTE::
            The number of points returned may be LESS THAN ``num_points``!

        Implementations may use rejection sampling. In such cases, generating the requested
        number of points may be unreasonably slow, so implementers are allowed to generate
        fewer than ``num_points`` results.

        :param num_points: max number of points to generate
        :type num_points: int >= 0
        :param random_source:
        :type random_source: callable yielding uniform random numbers in [0,1]
        :return: uniform random sampling of points from the domain; may be fewer than ``num_points``!
        :rtype: array of float64 with shape (num_points_generated, dim)

        """
        pass

    @abstractmethod
    def compute_update_restricted_to_domain(self, max_relative_change, current_point, update_vector):
        r"""Compute a new update so that CheckPointInside(``current_point`` + ``new_update``) is true.

        Changes new_update_vector so that:
          ``point_new = point + new_update_vector``
        has coordinates such that ``CheckPointInside(point_new)`` returns true.

        ``new_update_vector`` is a function of ``update_vector``.
        ``new_update_vector`` is just a copy of ``update_vector`` if ``current_point`` is already inside the domain.

        .. NOTE::
            We modify update_vector (instead of returning point_new) so that further update
            limiting/testing may be performed.

        :param max_relative_change: max change allowed per update (as a relative fraction of current distance to boundary)
        :type max_relative_change: float64 in (0, 1]
        :param current_point: starting point
        :type current_point: array of float64 with shape (dim)
        :param update_vector: proposed update
        :type update_vector: array of float64 with shape (dim)
        :return: new update so that the final point remains inside the domain
        :rtype: array of float64 with shape (dim)

        """
        pass
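The contracts above can be illustrated with a minimal axis-aligned box domain (a sketch under assumed semantics; `BoxDomain` does not exist in this package and implements only two of the interface methods):

```python
import numpy as np

class BoxDomain:
    """Illustrative box domain: in/out test plus uniform sampling."""

    def __init__(self, bounds):
        # bounds: sequence of (min, max) pairs, one per dimension
        self.bounds = np.asarray(bounds, dtype=np.float64)

    @property
    def dim(self):
        return self.bounds.shape[0]

    def check_point_inside(self, point):
        # inside means within [min, max] along every axis
        return bool(np.all((point >= self.bounds[:, 0]) & (point <= self.bounds[:, 1])))

    def generate_uniform_random_points_in_domain(self, num_points, random_source=None):
        # rescale uniform draws in [0, 1) into each axis interval
        rng = np.random.default_rng(0) if random_source is None else random_source
        u = rng.random((num_points, self.dim))
        return self.bounds[:, 0] + u * (self.bounds[:, 1] - self.bounds[:, 0])

domain = BoxDomain([(0.0, 1.0), (-2.0, 2.0)])
points = domain.generate_uniform_random_points_in_domain(5)
```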
| 40.388889 | 123 | 0.684319 | 559 | 4,362 | 5.214669 | 0.32737 | 0.041166 | 0.043225 | 0.03705 | 0.200686 | 0.178388 | 0.125557 | 0.091252 | 0.091252 | 0.063122 | 0 | 0.006379 | 0.2453 | 4,362 | 107 | 124 | 40.766355 | 0.879101 | 0.703576 | 0 | 0.464286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0.25 | 0.107143 | 0 | 0.392857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
7540ff40645b92068ec7fcaab55701834909aa30 | 672 | py | Python | scripts/make_agg_intervals.py | mitchute/Open-GHX | 7a88872c36200c620cfd07994119cfb243a998c9 | [
"MIT"
] | 4 | 2017-10-09T21:08:44.000Z | 2020-11-18T11:09:56.000Z | scripts/make_agg_intervals.py | mitchute/Open-GHX | 7a88872c36200c620cfd07994119cfb243a998c9 | [
"MIT"
] | 1 | 2017-08-18T01:44:13.000Z | 2017-08-18T02:23:21.000Z | scripts/make_agg_intervals.py | mitchute/Open-GHX | 7a88872c36200c620cfd07994119cfb243a998c9 | [
"MIT"
] | 3 | 2016-09-08T14:57:21.000Z | 2021-06-29T08:42:08.000Z | def make_interval(depth, depth_integer_multiplier, num, step_num, start_val):
    all_groups_str = "[\n"
    for n in range(num):
        all_groups_str += "\t["
        for d in range(depth):
            val = str(start_val * pow(depth_integer_multiplier, d))
            if d == depth - 1:
                if n == num - 1:
                    all_groups_str += "%s]\n" % val
                else:
                    all_groups_str += "%s],\n" % val
            else:
                all_groups_str += "%s, " % val
        start_val += step_num
    all_groups_str += "]\n"
    return all_groups_str
print(make_interval(12, 2, 10, 10, 10))
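Each inner row produced above is a geometric progression in the column index; a one-line sketch of a single row (`doubling_row` is a hypothetical helper name, not part of the original file):

```python
def doubling_row(depth, multiplier, start):
    # interval sizes grow geometrically across one row: start, start*m, start*m^2, ...
    return [start * multiplier ** d for d in range(depth)]
```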
| 26.88 | 78 | 0.486607 | 86 | 672 | 3.511628 | 0.337209 | 0.208609 | 0.278146 | 0.129139 | 0.182119 | 0.182119 | 0.182119 | 0.182119 | 0.182119 | 0.182119 | 0 | 0.026961 | 0.392857 | 672 | 24 | 79 | 28 | 0.713235 | 0 | 0 | 0.117647 | 0 | 0 | 0.037037 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0 | 0 | 0.117647 | 0.058824 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
754420bf10ce818718b6b91f9f308e99e3df1a1e | 423 | py | Python | RecoPPS/Local/python/ctppsDiamondLocalReconstruction_cff.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 852 | 2015-01-11T21:03:51.000Z | 2022-03-25T21:14:00.000Z | RecoPPS/Local/python/ctppsDiamondLocalReconstruction_cff.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 30,371 | 2015-01-02T00:14:40.000Z | 2022-03-31T23:26:05.000Z | RecoPPS/Local/python/ctppsDiamondLocalReconstruction_cff.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 3,240 | 2015-01-02T05:53:18.000Z | 2022-03-31T17:24:21.000Z | import FWCore.ParameterSet.Config as cms
# reco hit production
from RecoPPS.Local.ctppsDiamondRecHits_cfi import ctppsDiamondRecHits
# local track fitting
from RecoPPS.Local.ctppsDiamondLocalTracks_cfi import ctppsDiamondLocalTracks
ctppsDiamondLocalReconstructionTask = cms.Task(
    ctppsDiamondRecHits,
    ctppsDiamondLocalTracks
)
ctppsDiamondLocalReconstruction = cms.Sequence(ctppsDiamondLocalReconstructionTask)
| 30.214286 | 83 | 0.865248 | 35 | 423 | 10.4 | 0.6 | 0.06044 | 0.087912 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.094563 | 423 | 13 | 84 | 32.538462 | 0.950392 | 0.092199 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.375 | 0 | 0.375 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
f32c53cc845f82e4bd29dd3cc7b2266b7a8d4e51 | 73 | py | Python | GoogleColab/InteligenciaAnalitica/Ex5/trainbase.py | AlexandroLuis/ComputerScience | 273c3f6797737d98ffdf00ae870ceafa3aba59d1 | [
"MIT"
] | 2 | 2021-02-06T21:48:24.000Z | 2022-03-21T00:16:17.000Z | GoogleColab/InteligenciaAnalitica/Ex5/trainbase.py | AlexandroLuis/ComputerScience | 273c3f6797737d98ffdf00ae870ceafa3aba59d1 | [
"MIT"
] | null | null | null | GoogleColab/InteligenciaAnalitica/Ex5/trainbase.py | AlexandroLuis/ComputerScience | 273c3f6797737d98ffdf00ae870ceafa3aba59d1 | [
"MIT"
] | 1 | 2022-02-10T20:59:23.000Z | 2022-02-10T20:59:23.000Z | import pandas as pd

dataset = pd.read_csv('Corona_NLP_train.csv', encoding='iso8859')
dataset
| 24.333333 | 64 | 0.794521 | 11 | 73 | 5 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.057971 | 0.054795 | 73 | 2 | 65 | 36.5 | 0.73913 | 0 | 0 | 0 | 0 | 0 | 0.369863 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
f344faaed6efdb55380a728e18b9884b9e5e2388 | 639 | py | Python | cisco_umbrella_enforcement/setup.py | xhennessy-r7/insightconnect-plugins | 59268051313d67735b5dd3a30222eccb92aca8e9 | [
"MIT"
] | null | null | null | cisco_umbrella_enforcement/setup.py | xhennessy-r7/insightconnect-plugins | 59268051313d67735b5dd3a30222eccb92aca8e9 | [
"MIT"
] | null | null | null | cisco_umbrella_enforcement/setup.py | xhennessy-r7/insightconnect-plugins | 59268051313d67735b5dd3a30222eccb92aca8e9 | [
"MIT"
] | null | null | null | # GENERATED BY KOMAND SDK - DO NOT EDIT
from setuptools import setup, find_packages
setup(name='cisco_umbrella_enforcement-rapid7-plugin',
version='1.0.0',
      description='Cisco Umbrella Enforcement gives technology partners the ability to send security events from their platform/service/appliance within a customer environment to the Cisco security cloud for enforcement',
author='rapid7',
author_email='',
url='',
packages=find_packages(),
install_requires=['komand'], # Add third-party dependencies to requirements.txt, not here!
scripts=['bin/komand_cisco_umbrella_enforcement']
)
| 42.6 | 220 | 0.733959 | 79 | 639 | 5.822785 | 0.708861 | 0.084783 | 0.156522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009524 | 0.178404 | 639 | 14 | 221 | 45.642857 | 0.866667 | 0.1518 | 0 | 0 | 1 | 0.090909 | 0.543599 | 0.191095 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.090909 | 0 | 0.090909 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
f34959f0169dcb486cb4bef10086593c4a815fed | 628 | py | Python | auth0login/migrations/0001_initial.py | nkawa/vimeo-coursetool | 729215fe23b1bf05918a38d21e585a7b8862e75a | [
"Apache-2.0"
] | null | null | null | auth0login/migrations/0001_initial.py | nkawa/vimeo-coursetool | 729215fe23b1bf05918a38d21e585a7b8862e75a | [
"Apache-2.0"
] | null | null | null | auth0login/migrations/0001_initial.py | nkawa/vimeo-coursetool | 729215fe23b1bf05918a38d21e585a7b8862e75a | [
"Apache-2.0"
] | null | null | null | # Generated by Django 3.2.8 on 2021-10-10 14:32
from django.db import migrations, models
class Migration(migrations.Migration):
    initial = True

    dependencies = [
    ]

    operations = [
        migrations.CreateModel(
            name='Ticket',
            fields=[
                ('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('ticketName', models.CharField(max_length=40)),
                ('ticketGroup', models.CharField(max_length=20)),
                ('ticketKeyword', models.CharField(max_length=20)),
            ],
        ),
    ]
| 26.166667 | 117 | 0.58121 | 63 | 628 | 5.698413 | 0.68254 | 0.125348 | 0.150418 | 0.200557 | 0.144847 | 0 | 0 | 0 | 0 | 0 | 0 | 0.047297 | 0.292994 | 628 | 23 | 118 | 27.304348 | 0.761261 | 0.071656 | 0 | 0 | 1 | 0 | 0.075732 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.0625 | 0 | 0.3125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
f351c677fbd55b35040364f1c638b44dd7ef0b32 | 1,291 | py | Python | watchmate_v2.0.1/app/models.py | rroy11705/Rest_API_With_Django | 6a75db2e2c3913ec9afc1cbfef67a5c9fd655e60 | [
"CNRI-Python"
] | null | null | null | watchmate_v2.0.1/app/models.py | rroy11705/Rest_API_With_Django | 6a75db2e2c3913ec9afc1cbfef67a5c9fd655e60 | [
"CNRI-Python"
] | null | null | null | watchmate_v2.0.1/app/models.py | rroy11705/Rest_API_With_Django | 6a75db2e2c3913ec9afc1cbfef67a5c9fd655e60 | [
"CNRI-Python"
] | null | null | null | from django.db import models
from django.core.validators import MinValueValidator, MaxValueValidator
from django.contrib.auth.models import User
class StreamPlatform(models.Model):
    name = models.CharField(max_length=64)
    about = models.TextField(max_length=512)
    website = models.URLField(max_length=128)

    def __str__(self):
        return self.name


class WatchList(models.Model):
    title = models.CharField(max_length=256)
    storyline = models.TextField(max_length=2048)
    platforms = models.ManyToManyField(StreamPlatform, related_name="watchlist")
    active = models.BooleanField(default=True)
    created = models.DateTimeField(auto_now_add=True)

    def __str__(self):
        return self.title


class Review(models.Model):
    reviewer = models.ForeignKey(User, on_delete=models.CASCADE)
    rating = models.PositiveIntegerField(
        validators=[MinValueValidator(1), MaxValueValidator(5)])
    description = models.CharField(max_length=256, null=True)
    watchlist = models.ForeignKey(WatchList, on_delete=models.CASCADE, related_name="reviews")
    active = models.BooleanField(default=True)
    created = models.DateTimeField(auto_now_add=True)
    update = models.DateTimeField(auto_now=True)

    def __str__(self):
        return str(self.rating)
| 33.973684 | 94 | 0.749032 | 152 | 1,291 | 6.184211 | 0.401316 | 0.057447 | 0.057447 | 0.076596 | 0.280851 | 0.159574 | 0.159574 | 0.159574 | 0.159574 | 0.159574 | 0 | 0.018265 | 0.15182 | 1,291 | 37 | 95 | 34.891892 | 0.840183 | 0 | 0 | 0.25 | 0 | 0 | 0.012393 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.107143 | false | 0 | 0.107143 | 0.107143 | 0.964286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 2 |
f3539262cba46ff60c76205e56634f13ca25de23 | 190 | py | Python | Python 201/enumeration.py | PacktPublishing/The-Complete-Python-Course-including-Django-Web-Framework | 402b35d4739ed91e50d6c3380cab6f085a46c52b | [
"MIT"
] | 3 | 2021-07-09T01:24:20.000Z | 2022-03-24T06:30:19.000Z | Python 201/enumeration.py | PacktPublishing/The-Complete-Python-Course-including-Django-Web-Framework | 402b35d4739ed91e50d6c3380cab6f085a46c52b | [
"MIT"
] | null | null | null | Python 201/enumeration.py | PacktPublishing/The-Complete-Python-Course-including-Django-Web-Framework | 402b35d4739ed91e50d6c3380cab6f085a46c52b | [
"MIT"
] | 3 | 2021-07-01T21:52:53.000Z | 2021-09-02T08:54:23.000Z | animals = ["Gully", "Rhubarb", "Zephyr", "Henry"]
for index, animal in enumerate(animals):
    # if index % 2 == 0:
    #     continue
    # print(animal)
    print(f"{index+1}.\t{animal}")
| 27.142857 | 49 | 0.573684 | 24 | 190 | 4.541667 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020408 | 0.226316 | 190 | 6 | 50 | 31.666667 | 0.721088 | 0.236842 | 0 | 0 | 0 | 0 | 0.304965 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
f363d12a0b564c0f2c4645099bd660201af5ca16 | 4,428 | py | Python | clac_line_index.py | shichenhui/Data-mining-techniques-on-astronomical-spectra-data.-I-Clustering-analysis | fd6a7c27cfe2110ee1a2ffc31ddca26340d2cabc | [
"Apache-2.0"
] | null | null | null | clac_line_index.py | shichenhui/Data-mining-techniques-on-astronomical-spectra-data.-I-Clustering-analysis | fd6a7c27cfe2110ee1a2ffc31ddca26340d2cabc | [
"Apache-2.0"
] | null | null | null | clac_line_index.py | shichenhui/Data-mining-techniques-on-astronomical-spectra-data.-I-Clustering-analysis | fd6a7c27cfe2110ee1a2ffc31ddca26340d2cabc | [
"Apache-2.0"
] | null | null | null | import numpy as np
import matplotlib.pyplot as plt
class LineIndex:
    def __init__(self):
        self.elements = [(4143.375, 4178.375, 4081.375, 4118.875, 4245.375, 4285.375),
                         (4143.375, 4178.375, 4085.125, 4097.625, 4245.375, 4285.375),
                         (4223.500, 4236.000, 4212.250, 4221.000, 4242.250, 4252.250),
                         (4282.625, 4317.625, 4267.625, 4283.875, 4320.125, 4333.375),
                         (4370.375, 4421.625, 4360.375, 4371.625, 4444.125, 4456.625),
                         (4453.375, 4475.875, 4447.125, 4455.875, 4478.375, 4493.375),
                         (4515.500, 4560.500, 4505.500, 4515.500, 4561.750, 4580.500),
                         (4635.250, 4721.500, 4612.750, 4631.500, 4744.000, 4757.750),
                         (4848.875, 4877.625, 4828.875, 4848.875, 4877.625, 4892.625),
                         (4979.000, 5055.250, 4947.750, 4979.000, 5055.250, 5066.500),
                         (5070.375, 5135.375, 4896.375, 4958.875, 5302.375, 5367.375),
                         (5155.375, 5197.875, 4896.375, 4958.875, 5302.375, 5367.375),
                         (5161.375, 5193.875, 5143.875, 5162.625, 5192.625, 5207.625),
                         (5247.375, 5287.375, 5234.875, 5249.875, 5287.375, 5319.875),
                         (5314.125, 5354.125, 5306.625, 5317.875, 5355.375, 5365.375),
                         (5390.250, 5417.750, 5379.000, 5390.250, 5417.750, 5427.750),
                         (5698.375, 5722.125, 5674.625, 5698.375, 5724.625, 5738.375),
                         (5778.375, 5798.375, 5767.125, 5777.125, 5799.625, 5813.375),
                         (5878.625, 5911.125, 5862.375, 5877.375, 5923.875, 5949.875),
                         (5938.875, 5995.875, 5818.375, 5850.875, 6040.375, 6105.375),
                         (6191.375, 6273.875, 6068.375, 6143.375, 6374.375, 6416.875), ]

    def calc(self, flux, wave):
        """
        Compute the line indices of one spectrum.
        :param flux: flux vector of the spectrum
        :param wave: wavelength vector of the spectrum
        :return: the line indices
        """
        line_index = []
        for num, i in enumerate(self.elements):
            # compute the line index for each element:
            # extract wavelength and flux of the central band, blue end and red end
            center_band, center_flux = wave[(wave >= i[0]) & (wave <= i[1])], flux[(wave >= i[0]) & (wave <= i[1])]
            left_band, left_flux = wave[(wave >= i[2]) & (wave <= i[3])], flux[(wave >= i[2]) & (wave <= i[3])]
            right_band, right_flux = wave[(wave >= i[4]) & (wave <= i[5])], flux[(wave >= i[4]) & (wave <= i[5])]
            # fit the continuum as a straight line through the two side-band points
            y_left = np.trapz(left_flux, left_band)
            y_right = np.trapz(right_flux, right_band)
            x_left = np.mean(left_band)
            x_right = np.mean(right_band)
            # y = kx + b; intercept from the red-side point
            k = (y_right - y_left) / (x_right - x_left)
            b = y_right - k * x_right
            if num in (0, 1, 10, 11, 19, 20):
                # for these elements, use the Mag magnitude as the line-index value
                # (magnitudes are defined with log10 over the full central band)
                Fc = k * center_band + b  # continuum flux
                Mag = -2.5 * np.log10((1 / (center_band[-1] - center_band[0])) * np.trapz(center_flux / Fc, center_band))
                line_index.append(Mag)
            else:
                # for the rest, use the equivalent width (EW) as the line-index value
                Fc = k * center_band + b  # continuum flux
                EW = np.trapz((1 - center_flux / Fc), center_band)
                line_index.append(EW)
        # convert to np.array and zero out NaN and infinite values
        line_index = np.array(line_index)
        line_index[np.isnan(line_index)] = 0
        line_index[np.isinf(line_index)] = 0
        return line_index

    def calc_and_plot(self, flux, wave):
        # compute the line indices and plot them; same inputs/outputs as self.calc()
        line_index = self.calc(flux, wave)
        center_wave = []
        for i in self.elements:
            center_wave.append((i[0] + i[1]) / 2)
        plt.plot(wave, flux)
        plt.scatter(center_wave, line_index)
        plt.show()
        return line_index


if __name__ == '__main__':
    from astropy.io import fits
    data = fits.open(r'C:\Users\panda\Desktop\spec-56591-EG012606S021203F01_sp08-138.fits')
    a = data[0]
    wave = a.data[2]  # row 3 is the wavelength
    flux = a.data[0]  # row 1 is the flux
    model = LineIndex()
    line_index = model.calc_and_plot(flux, wave) | 45.649485 | 116 | 0.497967 | 583 | 4,428 | 3.672384 | 0.380789 | 0.058851 | 0.016815 | 0.018216 | 0.117702 | 0.117702 | 0.084073 | 0.060719 | 0 | 0 | 0 | 0.332981 | 0.359079 | 4,428 | 97 | 117 | 45.649485 | 0.421424 | 0.059846 | 0 | 0.059701 | 0 | 0 | 0.018458 | 0.016463 | 0 | 0 | 0 | 0 | 0 | 1 | 0.044776 | false | 0 | 0.044776 | 0 | 0.134328 | 0.014925 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
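The equivalent-width step above can be sanity-checked on synthetic data (illustrative values): when the flux equals the continuum everywhere, the integrand `1 - flux/Fc` vanishes and EW is zero.

```python
import numpy as np

band = np.linspace(5000.0, 5040.0, 41)       # central band wavelengths
continuum = np.full_like(band, 2.0)          # assumed continuum level
flux = continuum.copy()                      # absorption-free spectrum
ew = np.trapz(1.0 - flux / continuum, band)  # equivalent width
```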
f366bab7949be48bac13c8ce7427325ee3cbe177 | 896 | py | Python | tests/clvm/benchmark_costs.py | Flax-Network/flax-light-wallet | 1745850a28a47bbbc4b5f3d460f35b34b4ed4f25 | [
"Apache-2.0"
] | 1 | 2021-12-02T14:38:11.000Z | 2021-12-02T14:38:11.000Z | tests/clvm/benchmark_costs.py | Flax-Network/flax-light-wallet | 1745850a28a47bbbc4b5f3d460f35b34b4ed4f25 | [
"Apache-2.0"
] | null | null | null | tests/clvm/benchmark_costs.py | Flax-Network/flax-light-wallet | 1745850a28a47bbbc4b5f3d460f35b34b4ed4f25 | [
"Apache-2.0"
] | 6 | 2021-11-21T00:38:27.000Z | 2021-12-03T01:25:19.000Z | from flaxlight.types.blockchain_format.program import INFINITE_COST
from flaxlight.types.spend_bundle import SpendBundle
from flaxlight.types.generator_types import BlockGenerator
from flaxlight.consensus.cost_calculator import calculate_cost_of_program, NPCResult
from flaxlight.consensus.default_constants import DEFAULT_CONSTANTS
from flaxlight.full_node.bundle_tools import simple_solution_generator
from flaxlight.full_node.mempool_check_conditions import get_name_puzzle_conditions
def cost_of_spend_bundle(spend_bundle: SpendBundle) -> int:
    program: BlockGenerator = simple_solution_generator(spend_bundle)
    npc_result: NPCResult = get_name_puzzle_conditions(
        program, INFINITE_COST, cost_per_byte=DEFAULT_CONSTANTS.COST_PER_BYTE, safe_mode=True
    )
    cost: int = calculate_cost_of_program(program.program, npc_result, DEFAULT_CONSTANTS.COST_PER_BYTE)
    return cost
| 52.705882 | 103 | 0.856027 | 118 | 896 | 6.118644 | 0.355932 | 0.126039 | 0.074792 | 0.060942 | 0.074792 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.094866 | 896 | 16 | 104 | 56 | 0.890259 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.5 | 0 | 0.642857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
f3884809bd62f0f776b8c21dd23802afacf22c1c | 349 | py | Python | EASTAR/main/templatetags/extra.py | DightMerc/EASTAR | 04a3578932f8b4b842e0898513ef279c2f750f48 | [
"Apache-2.0"
] | 1 | 2020-09-21T16:46:19.000Z | 2020-09-21T16:46:19.000Z | EASTAR/main/templatetags/extra.py | DightMerc/EASTAR | 04a3578932f8b4b842e0898513ef279c2f750f48 | [
"Apache-2.0"
] | null | null | null | EASTAR/main/templatetags/extra.py | DightMerc/EASTAR | 04a3578932f8b4b842e0898513ef279c2f750f48 | [
"Apache-2.0"
] | null | null | null | from django import template
from django.template.defaultfilters import stringfilter
register = template.Library()
@register.filter
@stringfilter
def FindPhoto(value):
    if "%photo%" in str(value):
        return True
    else:
        return False
@register.filter
@stringfilter
def ReplacePhoto(value):
    return value.replace("%photo%", "")
| 18.368421 | 55 | 0.719198 | 39 | 349 | 6.435897 | 0.564103 | 0.079681 | 0.207171 | 0.231076 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.17765 | 349 | 18 | 56 | 19.388889 | 0.874564 | 0 | 0 | 0.285714 | 0 | 0 | 0.040115 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.142857 | 0.071429 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
f3896211c27faf122c9bf819667c176a83dab4ba | 1,058 | py | Python | influxable/db/function/transformations.py | AndyBryson/influxable | 8a2f798cc5ae12b04f803afc84d7e064a3afd250 | [
"MIT"
] | 30 | 2019-07-28T12:57:21.000Z | 2022-03-30T05:02:57.000Z | influxable/db/function/transformations.py | AndyBryson/influxable | 8a2f798cc5ae12b04f803afc84d7e064a3afd250 | [
"MIT"
] | 9 | 2020-04-23T11:29:29.000Z | 2022-02-04T09:15:16.000Z | influxable/db/function/transformations.py | AndyBryson/influxable | 8a2f798cc5ae12b04f803afc84d7e064a3afd250 | [
"MIT"
] | 5 | 2021-03-23T04:05:42.000Z | 2022-01-28T12:04:37.000Z | from . import _generate_function
Abs = _generate_function('ABS')
ACos = _generate_function('ACOS')
ASin = _generate_function('ASIN')
ATan = _generate_function('ATAN')
ATan2 = _generate_function('ATAN2')
Ceil = _generate_function('CEIL')
Cos = _generate_function('COS')
CumulativeSum = _generate_function('CUMULATIVE_SUM')
Derivative = _generate_function('DERIVATIVE')
Difference = _generate_function('DIFFERENCE')
Elapsed = _generate_function('ELAPSED')
Exp = _generate_function('EXP')
Floor = _generate_function('FLOOR')
Histogram = _generate_function('HISTOGRAM')
Ln = _generate_function('LN')
Log = _generate_function('LOG')
Log2 = _generate_function('LOG2')
Log10 = _generate_function('LOG10')
MovingAverage = _generate_function('MOVING_AVERAGE')
NonNegativeDerivative = _generate_function('NON_NEGATIVE_DERIVATIVE')
NonNegativeDifference = _generate_function('NON_NEGATIVE_DIFFERENCE')
Pow = _generate_function('POW')
Round = _generate_function('ROUND')
Sin = _generate_function('SIN')
Sqrt = _generate_function('SQRT')
Tan = _generate_function('TAN')
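The module above is a family of thin wrappers around one factory. A sketch of how such a factory might work (assumed semantics; the real `_generate_function` likely returns a query-building class rather than a plain string formatter):

```python
def generate_function(name):
    """Illustrative factory: returns a callable rendering NAME(field)."""
    def render(field):
        return f"{name}({field})"
    return render

Abs = generate_function("ABS")
MovingAverage = generate_function("MOVING_AVERAGE")
```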
| 36.482759 | 69 | 0.797732 | 114 | 1,058 | 6.877193 | 0.315789 | 0.55102 | 0.048469 | 0.068878 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008205 | 0.07845 | 1,058 | 28 | 70 | 37.785714 | 0.795897 | 0 | 0 | 0 | 1 | 0 | 0.167297 | 0.043478 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.037037 | 0 | 0.037037 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
f396ce948df906683a7c38c39ca570318694f325 | 120 | py | Python | config_web/uberon.py | NikkiBytes/pending.api | 3c83bb8e413c3032a3a4539d19a779b5f0b67650 | [
"Apache-2.0"
] | 3 | 2019-02-17T23:36:35.000Z | 2022-03-01T16:43:06.000Z | config_web/uberon.py | NikkiBytes/pending.api | 3c83bb8e413c3032a3a4539d19a779b5f0b67650 | [
"Apache-2.0"
] | 56 | 2019-01-26T16:34:12.000Z | 2022-03-23T06:57:03.000Z | config_web/uberon.py | NikkiBytes/pending.api | 3c83bb8e413c3032a3a4539d19a779b5f0b67650 | [
"Apache-2.0"
] | 6 | 2020-10-22T17:37:54.000Z | 2022-03-01T16:56:55.000Z |
ES_HOST = 'localhost:9200'
ES_INDEX = 'pending-uberon'
ES_DOC_TYPE = 'anatomy'
API_PREFIX = 'uberon'
API_VERSION = ''
| 15 | 27 | 0.716667 | 17 | 120 | 4.705882 | 0.764706 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.038835 | 0.141667 | 120 | 7 | 28 | 17.142857 | 0.737864 | 0 | 0 | 0 | 0 | 0 | 0.344538 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
f3a26881a4ca1cefb8ab84deb4c848ac0e3245ce | 285 | py | Python | zip_submission.py | RapidsAtHKUST/TriangleCounting | b63541f8f9f32d3cb2f52b9b8f07e5974238b6e1 | [
"MIT"
] | null | null | null | zip_submission.py | RapidsAtHKUST/TriangleCounting | b63541f8f9f32d3cb2f52b9b8f07e5974238b6e1 | [
"MIT"
] | null | null | null | zip_submission.py | RapidsAtHKUST/TriangleCounting | b63541f8f9f32d3cb2f52b9b8f07e5974238b6e1 | [
"MIT"
] | null | null | null | import datetime
import os
if __name__ == '__main__':
    date_str = datetime.datetime.now().strftime("%Y-%m-%d-%H-%M")
    print(date_str)
    os.system('zip -r tc-rapids-{}.zip triangle-counting technical_report.pdf -x *cmake-build-debug/* -x */CMake* -x *.idea/*'.format(date_str))
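The timestamp format used in the archive name can be checked in isolation (illustrative, fixed datetime instead of `now()`):

```python
import datetime

stamp = datetime.datetime(2021, 6, 1, 13, 5).strftime("%Y-%m-%d-%H-%M")
archive = 'tc-rapids-{}.zip'.format(stamp)
```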
| 35.625 | 144 | 0.673684 | 44 | 285 | 4.090909 | 0.704545 | 0.116667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126316 | 285 | 7 | 145 | 40.714286 | 0.722892 | 0 | 0 | 0 | 0 | 0.166667 | 0.463158 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0.166667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
f3a2c44f9a438f736d2343eb401aac7338e90dca | 382 | py | Python | binarysearch/loneInteger.py | Ry4nW/python-wars | 76e3fb24b7ae2abf35db592f1ad59cf8d5f9e508 | [
"MIT"
] | 1 | 2021-06-06T19:55:22.000Z | 2021-06-06T19:55:22.000Z | binarysearch/loneInteger.py | Ry4nW/python-wars | 76e3fb24b7ae2abf35db592f1ad59cf8d5f9e508 | [
"MIT"
] | 1 | 2022-01-20T19:20:33.000Z | 2022-01-20T23:51:46.000Z | binarysearch/loneInteger.py | Ry4nW/python-wars | 76e3fb24b7ae2abf35db592f1ad59cf8d5f9e508 | [
"MIT"
] | null | null | null | class Solution:
def solve(self, nums):
integersDict = {}
for i in range(len(nums)):
            try:
                integersDict[nums[i]] += 1
            except KeyError:
                integersDict[nums[i]] = 1
for integer in integersDict:
if integersDict[integer] != 3:
return integer
return nums[0]
| 19.1 | 42 | 0.455497 | 36 | 382 | 4.833333 | 0.555556 | 0.183908 | 0.195402 | 0.206897 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019231 | 0.455497 | 382 | 20 | 43 | 19.1 | 0.817308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
f3a82067b35984c55df054378d7da2b20c33f677 | 524 | py | Python | projects/admin.py | BridgesLab/Lab-Website | d6f6c9c068bbf668c253e5943d9514947023e66d | [
"CC0-1.0",
"MIT"
] | 6 | 2015-08-31T16:55:16.000Z | 2022-02-10T08:23:07.000Z | projects/admin.py | BridgesLab/Lab-Website | d6f6c9c068bbf668c253e5943d9514947023e66d | [
"CC0-1.0",
"MIT"
] | 30 | 2015-03-22T15:49:31.000Z | 2020-05-25T23:59:37.000Z | projects/admin.py | BridgesLab/Lab-Website | d6f6c9c068bbf668c253e5943d9514947023e66d | [
"CC0-1.0",
"MIT"
] | 6 | 2016-09-07T08:25:21.000Z | 2020-03-27T10:24:57.000Z | '''This package sets up the admin interface for the :mod:`papers` app.'''
from django.contrib import admin
from projects.models import Funding, FundingAgency
class FundingAdmin(admin.ModelAdmin):
'''The :class:`~projects.models.Funding` model admin is the default.'''
pass
admin.site.register(Funding, FundingAdmin)
class FundingAgencyAdmin(admin.ModelAdmin):
'''The :class:`~projects.models.FundingAgency` model admin is the default.'''
pass
admin.site.register(FundingAgency, FundingAgencyAdmin)
| 37.428571 | 81 | 0.751908 | 63 | 524 | 6.253968 | 0.460317 | 0.106599 | 0.091371 | 0.116751 | 0.406091 | 0.406091 | 0.218274 | 0.218274 | 0.218274 | 0 | 0 | 0 | 0.133588 | 524 | 13 | 82 | 40.307692 | 0.867841 | 0.391221 | 0 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.25 | 0.25 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
f3aa39200a91e8ee89c1f49153a1bf66342a6c27 | 1,666 | py | Python | tests/functional/parser/schemas.py | n2N8Z/aws-lambda-powertools-python | 0cb5d506f534ac76b42f2d5959d93c7b2bb4d8e9 | [
"MIT-0"
] | null | null | null | tests/functional/parser/schemas.py | n2N8Z/aws-lambda-powertools-python | 0cb5d506f534ac76b42f2d5959d93c7b2bb4d8e9 | [
"MIT-0"
] | null | null | null | tests/functional/parser/schemas.py | n2N8Z/aws-lambda-powertools-python | 0cb5d506f534ac76b42f2d5959d93c7b2bb4d8e9 | [
"MIT-0"
] | null | null | null | from typing import Dict, List, Optional
from pydantic import BaseModel
from typing_extensions import Literal
from aws_lambda_powertools.utilities.parser.models import (
DynamoDBStreamChangedRecordModel,
DynamoDBStreamModel,
DynamoDBStreamRecordModel,
EventBridgeModel,
SnsModel,
SnsNotificationModel,
SnsRecordModel,
SqsModel,
SqsRecordModel,
)
class MyDynamoBusiness(BaseModel):
Message: Dict[Literal["S"], str]
Id: Dict[Literal["N"], int]
class MyDynamoScheme(DynamoDBStreamChangedRecordModel):
NewImage: Optional[MyDynamoBusiness]
OldImage: Optional[MyDynamoBusiness]
class MyDynamoDBStreamRecordModel(DynamoDBStreamRecordModel):
dynamodb: MyDynamoScheme
class MyAdvancedDynamoBusiness(DynamoDBStreamModel):
Records: List[MyDynamoDBStreamRecordModel]
class MyEventbridgeBusiness(BaseModel):
instance_id: str
state: str
class MyAdvancedEventbridgeBusiness(EventBridgeModel):
detail: MyEventbridgeBusiness
class MySqsBusiness(BaseModel):
message: str
username: str
class MyAdvancedSqsRecordModel(SqsRecordModel):
body: str
class MyAdvancedSqsBusiness(SqsModel):
Records: List[MyAdvancedSqsRecordModel]
class MySnsBusiness(BaseModel):
message: str
username: str
class MySnsNotificationModel(SnsNotificationModel):
Message: str
class MyAdvancedSnsRecordModel(SnsRecordModel):
Sns: MySnsNotificationModel
class MyAdvancedSnsBusiness(SnsModel):
Records: List[MyAdvancedSnsRecordModel]
class MyKinesisBusiness(BaseModel):
message: str
username: str
class MyCloudWatchBusiness(BaseModel):
my_message: str
user: str
| 19.833333 | 61 | 0.779712 | 134 | 1,666 | 9.656716 | 0.440299 | 0.037094 | 0.044049 | 0.062597 | 0.081144 | 0.081144 | 0 | 0 | 0 | 0 | 0 | 0 | 0.156062 | 1,666 | 83 | 62 | 20.072289 | 0.920341 | 0 | 0 | 0.117647 | 0 | 0 | 0.0012 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.078431 | 0 | 0.803922 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
f3aec13ab37c42864f83f8353ef76bbf6e8e00c7 | 420 | py | Python | pokemon_combat/body_part.py | ryndovaira/telebot_fight_game | 660473fea0de9635935495b0b2fd827bb51c47c9 | [
"MIT"
] | null | null | null | pokemon_combat/body_part.py | ryndovaira/telebot_fight_game | 660473fea0de9635935495b0b2fd827bb51c47c9 | [
"MIT"
] | null | null | null | pokemon_combat/body_part.py | ryndovaira/telebot_fight_game | 660473fea0de9635935495b0b2fd827bb51c47c9 | [
"MIT"
] | null | null | null | from enum import Enum, auto
class BodyPart(Enum):
    NOTHING = auto()  # nothing (initial state)
    HEAD = auto()  # head
    BELLY = auto()  # belly
    LEGS = auto()  # legs
@classmethod
def min_index(cls):
return cls.HEAD.value
@classmethod
def max_index(cls):
return cls.LEGS.value
@classmethod
def has_item(cls, name):
return name in cls._member_names_
| 20 | 52 | 0.619048 | 52 | 420 | 4.884615 | 0.576923 | 0.165354 | 0.110236 | 0.133858 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.285714 | 420 | 20 | 53 | 21 | 0.846667 | 0.109524 | 0 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.066667 | 0.2 | 0.8 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 2 |
f3b81f0da5b4d78d7f7a29ad65dbfe51dd9c3d39 | 2,411 | py | Python | modules/dialogue_importer.py | KAIST-AILab/PyOpenDial | c9bca653c18ccc082dc8b86b4a8feee9ed00a75b | [
"MIT"
] | 9 | 2019-09-23T01:56:43.000Z | 2022-03-13T17:58:40.000Z | modules/dialogue_importer.py | KAIST-AILab/PyOpenDial | c9bca653c18ccc082dc8b86b4a8feee9ed00a75b | [
"MIT"
] | 2 | 2019-11-18T17:02:30.000Z | 2021-07-14T15:47:08.000Z | modules/dialogue_importer.py | KAIST-AILab/PyOpenDial | c9bca653c18ccc082dc8b86b4a8feee9ed00a75b | [
"MIT"
] | 1 | 2022-02-08T06:41:19.000Z | 2022-02-08T06:41:19.000Z | import logging
from threading import Thread
from time import sleep
from multipledispatch import dispatch
from dialogue_state import DialogueState
from modules.dialogue_recorder import DialogueRecorder
from modules.forward_planner import ForwardPlanner
class DialogueImporter(Thread):
"""
Functionality to import a previously recorded dialogue in the dialogue system. The
import essentially "replays" the previous interaction, including all state update
operations.
"""
# logger
log = logging.getLogger('PyOpenDial')
def __init__(self, system, turns):
"""
Creates a new dialogue importer attached to a particular dialogue system, and
with an ordered list of turns (encoded by their dialogue state).
:param system: the dialogue system
:param turns: the sequence of turns
"""
self.system = system
self.turns = turns
self.wizard_of_mode = False
@dispatch(bool)
def set_wizard_of_oz_mode(self, is_wizard_of_oz):
"""
Sets whether the import should consider the system actions as "expert"
Wizard-of-Oz actions to imitate.
:param is_wizard_of_oz: whether the system actions are wizard-of-Oz examples
"""
self.wizard_of_mode = is_wizard_of_oz
@dispatch()
def run(self):
if self.wizard_of_mode:
# TODO: WizardLearner
# self.system.attach_module(WizardLearner)
# for turn in self.turns:
# self.add_turn(turn)
pass
else:
self.system.detach_module(ForwardPlanner)
for turn in self.turns:
self.add_turn(turn)
self.system.get_state().remove_nodes(self.system.get_state().get_action_node_ids())
self.system.get_state().remove_nodes(self.system.get_state().get_utility_node_ids())
self.system.attach_module(ForwardPlanner)
@dispatch(DialogueState)
def add_turn(self, turn):
try:
            while self.system.is_paused() or not self.system.get_module(DialogueRecorder).is_running():
try:
# TODO: Thread
sleep(100)
except:
pass
self.system.add_content(turn.copy())
except Exception as e:
self.log.warning("could not add content: %s" % e)
| 33.486111 | 103 | 0.635421 | 286 | 2,411 | 5.188811 | 0.388112 | 0.080863 | 0.040431 | 0.048518 | 0.11186 | 0.11186 | 0.11186 | 0.11186 | 0.11186 | 0.067385 | 0 | 0.001753 | 0.290336 | 2,411 | 71 | 104 | 33.957746 | 0.865576 | 0.29158 | 0 | 0.105263 | 0 | 0 | 0.022082 | 0 | 0 | 0 | 0 | 0.014085 | 0 | 1 | 0.105263 | false | 0.052632 | 0.210526 | 0 | 0.368421 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
f3c435d0d39f9e66d72d19cb75fb37c344115fe5 | 364 | py | Python | Q6.2_brain_teaser.py | latika18/learning | a57c9aacc0157bf7c318f46c1e7c4971d1d55aea | [
"Unlicense"
] | null | null | null | Q6.2_brain_teaser.py | latika18/learning | a57c9aacc0157bf7c318f46c1e7c4971d1d55aea | [
"Unlicense"
] | null | null | null | Q6.2_brain_teaser.py | latika18/learning | a57c9aacc0157bf7c318f46c1e7c4971d1d55aea | [
"Unlicense"
] | null | null | null | There is an 8x8 chess board in which two diagonally opposite corners have been cut off.
You are given 31 dominos, and a single domino can cover exactly two squares.
Can you use the 31 dominos to cover the entire board? Prove your answer (by providing an example, or showing why it’s impossible).
_
________________________________________________________________
| 60.666667 | 130 | 0.821429 | 54 | 364 | 4.333333 | 0.833333 | 0.076923 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019481 | 0.153846 | 364 | 5 | 131 | 72.8 | 0.74026 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
f3c7491e9e71d4e9502531e23a18eb825a935574 | 1,906 | py | Python | online.py | abraker95/auto-loved-bot | 07e050a8804cea7614af917476c193797a453cb1 | [
"MIT"
] | null | null | null | online.py | abraker95/auto-loved-bot | 07e050a8804cea7614af917476c193797a453cb1 | [
"MIT"
] | null | null | null | online.py | abraker95/auto-loved-bot | 07e050a8804cea7614af917476c193797a453cb1 | [
"MIT"
] | null | null | null | import time
import requests
import json
class Online():
session = requests.session()
REQUEST_OK = 0 # Data can be handled
REQUEST_RETRY = 1 # Try getting data again
REQUEST_BAD = 2 # No point in trying, skip and go to next one
@staticmethod
def fetch_web_data(url):
response = Online.session.get(url, timeout=60*5)
# Common response if match is yet to exist
if 'Page Missing' in response.text:
return Online.REQUEST_RETRY, {}
# What to do with the data
status = Online.validate_response(response)
try: data = json.loads(response.text)
except: return status, {}
return status, data
@staticmethod
def validate_response(response):
if response.status_code == 200: return Online.REQUEST_OK # Ok
if response.status_code == 400: return Online.REQUEST_BAD # Unable to process request
if response.status_code == 401: return Online.REQUEST_BAD # Need to log in
if response.status_code == 403: return Online.REQUEST_BAD # Forbidden
if response.status_code == 404: return Online.REQUEST_BAD # Resource not found
if response.status_code == 405: return Online.REQUEST_BAD # Method not allowed
if response.status_code == 407: return Online.REQUEST_BAD # Proxy authentication required
if response.status_code == 408: return Online.REQUEST_RETRY # Request timeout
if response.status_code == 429: return Online.REQUEST_RETRY # Too many requests
if response.status_code == 500: return Online.REQUEST_RETRY # Internal server error
if response.status_code == 502: return Online.REQUEST_RETRY # Bad Gateway
if response.status_code == 503: return Online.REQUEST_RETRY # Service unavailable
if response.status_code == 504: return Online.REQUEST_RETRY # Gateway timeout
| 39.708333 | 100 | 0.681532 | 248 | 1,906 | 5.100806 | 0.379032 | 0.132806 | 0.210277 | 0.205534 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.031425 | 0.248688 | 1,906 | 47 | 101 | 40.553191 | 0.851955 | 0.198321 | 0 | 0.0625 | 0 | 0 | 0.007963 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.09375 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
45ee805f01f9aaeb166bd38905c0d475f3c26174 | 538 | py | Python | ibsng/handler/invoice/get_invoice_by_i_d.py | ParspooyeshFanavar/pyibsng | d48bcf4f25e3f23461528bf0ff8870cc3d537444 | [
"MIT"
] | 6 | 2018-03-06T10:16:36.000Z | 2021-12-05T12:43:10.000Z | ibsng/handler/invoice/get_invoice_by_i_d.py | ParspooyeshFanavar/pyibsng | d48bcf4f25e3f23461528bf0ff8870cc3d537444 | [
"MIT"
] | 3 | 2018-03-06T10:27:08.000Z | 2022-01-02T15:21:27.000Z | ibsng/handler/invoice/get_invoice_by_i_d.py | ParspooyeshFanavar/pyibsng | d48bcf4f25e3f23461528bf0ff8870cc3d537444 | [
"MIT"
] | 3 | 2018-01-06T16:28:31.000Z | 2018-09-17T19:47:19.000Z | """Get invoice by id API method."""
from ibsng.handler.handler import Handler
class getInvoiceByID(Handler):
"""Get invoice by id method class."""
def control(self):
"""Validate inputs after setup method.
:return: None
:rtype: None
"""
self.is_valid(self.invoice_id, int)
def setup(self, invoice_id):
"""Setup required parameters.
:param int invoice_id: ibsng invoice id
:return: None
:rtype: None
"""
self.invoice_id = invoice_id
| 21.52 | 47 | 0.598513 | 64 | 538 | 4.9375 | 0.4375 | 0.170886 | 0.123418 | 0.088608 | 0.14557 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.295539 | 538 | 24 | 48 | 22.416667 | 0.833773 | 0.410781 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.166667 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
3403f5c550e586734e611cd55d8e1765defe86ed | 297 | py | Python | aiida/cmdline/groups/__init__.py | aiidateam/aiida_core | 46d244e32ac5eca2e22a3d088314591ce064be57 | [
"PSF-2.0",
"MIT"
] | 153 | 2016-12-23T20:59:03.000Z | 2019-07-02T06:47:52.000Z | aiida/cmdline/groups/__init__.py | aiidateam/aiida_core | 46d244e32ac5eca2e22a3d088314591ce064be57 | [
"PSF-2.0",
"MIT"
] | 2,466 | 2016-12-24T01:03:52.000Z | 2019-07-04T13:41:08.000Z | aiida/cmdline/groups/__init__.py | aiidateam/aiida_core | 46d244e32ac5eca2e22a3d088314591ce064be57 | [
"PSF-2.0",
"MIT"
] | 88 | 2016-12-23T16:28:00.000Z | 2019-07-01T15:55:20.000Z | # -*- coding: utf-8 -*-
"""Module with custom implementations of :class:`click.Group`."""
# AUTO-GENERATED
# yapf: disable
# pylint: disable=wildcard-import
from .dynamic import *
from .verdi import *
__all__ = (
'DynamicEntryPointCommandGroup',
'VerdiCommandGroup',
)
# yapf: enable
| 16.5 | 65 | 0.686869 | 30 | 297 | 6.666667 | 0.833333 | 0.1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004032 | 0.164983 | 297 | 17 | 66 | 17.470588 | 0.802419 | 0.525253 | 0 | 0 | 1 | 0 | 0.351145 | 0.221374 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
340b8f990c1b32dcdd389654c6b09c11a8ed3506 | 2,418 | py | Python | incubator/kafka-connect/kafka-connect.py | CiscoM31/functions | 96e34dfc815d92563ce2421b41c9c18906c8278b | [
"Apache-2.0"
] | 74 | 2017-07-26T17:02:39.000Z | 2021-07-27T22:27:57.000Z | incubator/kafka-connect/kafka-connect.py | CiscoM31/functions | 96e34dfc815d92563ce2421b41c9c18906c8278b | [
"Apache-2.0"
] | 15 | 2017-06-23T23:48:39.000Z | 2019-10-01T09:49:32.000Z | incubator/kafka-connect/kafka-connect.py | CiscoM31/functions | 96e34dfc815d92563ce2421b41c9c18906c8278b | [
"Apache-2.0"
] | 35 | 2017-06-20T14:44:21.000Z | 2021-11-04T12:28:31.000Z | import json
import base64
from kubernetes import client, config
config.load_incluster_config()
v1=client.CoreV1Api()
#Get slack secret
for secrets in v1.list_secret_for_all_namespaces().items:
if secrets.metadata.name == 'slack':
token = base64.b64decode(secrets.data['token'])
print "==> Function ready to listen events..."
def handler(event, context):
util_data = False
try:
if 'op' in event['data']['payload']:
util_data = True;
except:
util_data = False
if util_data == True:
# CREATE operation
if event['data']['payload']['op'] == "c":
first_name = event['data']['payload']['after']['first_name'];
last_name = event['data']['payload']['after']['last_name'];
email = event['data']['payload']['after']['email'];
msg = "Create operation: Added user %s %s with email %s" % (first_name, last_name, email)
print msg;
# DELETE operation
elif event['data']['payload']['op'] == "d":
first_name = event['data']['payload']['before']['first_name'];
last_name = event['data']['payload']['before']['last_name'];
email = event['data']['payload']['before']['email'];
msg = "Delete operation: Deleted user %s %s with email %s" % (first_name, last_name, email)
print msg;
# UPDATE operation
elif event['data']['payload']['op'] == "u":
row_id = event['data']['payload']['before']['id'];
first_name_before = event['data']['payload']['before']['first_name'];
last_name_before = event['data']['payload']['before']['last_name'];
email_before = event['data']['payload']['before']['email'];
first_name_after = event['data']['payload']['after']['first_name'];
last_name_after = event['data']['payload']['after']['last_name'];
email_after = event['data']['payload']['after']['email'];
msg = "Update operation in row with id %s: \n Old value: Name: %s %s and Email: %s \n New value: Name: %s %s and Email %s" % (row_id, first_name_before, last_name_before, email_before, first_name_after, last_name_after, email_after)
print msg;
else:
msg = "Unrecognized operation"
print msg;
else:
print "Payload is empty. Useless event..."
return "Function executed"
| 42.421053 | 244 | 0.58354 | 292 | 2,418 | 4.667808 | 0.263699 | 0.112252 | 0.19956 | 0.112986 | 0.482025 | 0.432869 | 0.311079 | 0.180484 | 0.067498 | 0.067498 | 0 | 0.004921 | 0.24359 | 2,418 | 56 | 245 | 43.178571 | 0.740295 | 0.027709 | 0 | 0.177778 | 0 | 0.022222 | 0.298679 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.066667 | null | null | 0.133333 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
3414b567571ba9fec00a916eb601f4d6648a9a43 | 483 | py | Python | src/Models/loss.py | HomerW/CSGNet | 4ecc7f3e836867118dba3d5f220ed5e74a536b93 | [
"MIT"
] | null | null | null | src/Models/loss.py | HomerW/CSGNet | 4ecc7f3e836867118dba3d5f220ed5e74a536b93 | [
"MIT"
] | null | null | null | src/Models/loss.py | HomerW/CSGNet | 4ecc7f3e836867118dba3d5f220ed5e74a536b93 | [
"MIT"
] | null | null | null | import torch
import torch.nn as nn
from torch.autograd import Variable
nllloss = nn.NLLLoss()
def losses_joint(out, labels, time_steps: int):
"""
Defines loss
:param out: output from the network
:param labels: Ground truth labels
:param time_steps: Length of the program
:return loss: Sum of categorical losses
"""
loss = Variable(torch.zeros(1)).cuda()
for i in range(time_steps):
loss += nllloss(out[i], labels[:, i])
return loss
| 24.15 | 47 | 0.670807 | 69 | 483 | 4.637681 | 0.536232 | 0.084375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002681 | 0.227743 | 483 | 19 | 48 | 25.421053 | 0.855228 | 0.339545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.333333 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
3430dc2aab0b9a66724eea76329a3c8960caccb6 | 1,640 | py | Python | day06/ftp_server.py | zhangyage/Python-oldboy | a95c1b465929e2be641e425fcb5e15b366800831 | [
"Apache-2.0"
] | 1 | 2020-06-04T08:44:09.000Z | 2020-06-04T08:44:09.000Z | day06/ftp_server.py | zhangyage/Python-oldboy | a95c1b465929e2be641e425fcb5e15b366800831 | [
"Apache-2.0"
] | null | null | null | day06/ftp_server.py | zhangyage/Python-oldboy | a95c1b465929e2be641e425fcb5e15b366800831 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# -*- coding:utf-8 -*-
'''
The G:/temp directory must be created in advance; it is used as the FTP root directory.
'''
import SocketServer
import os
class MyServer(SocketServer.BaseRequestHandler):
def handle(self):
base_path = 'G:/temp'
conn = self.request
print 'connected...'
while True:
pre_data = conn.recv(1024)
            # get the request command, file name, and file size
cmd,file_name,file_size=pre_data.split('|')
            # number of bytes received so far
recv_size = 0
            # build the path for the uploaded file
file_dir = os.path.join(base_path,file_name)
f = file(file_dir,'wb')
Flag = True
while Flag:
                # upload not finished yet
if int(file_size) > recv_size:
                    # receive at most 1024 bytes; may receive fewer
data = conn.recv(1024)
recv_size+=len(data)
                # upload complete, exit the loop
else:
recv_size = 0
Flag = False
continue
f.write(data)
            print 'upload succeeded.'
f.close()
instance = SocketServer.ThreadingTCPServer(('127.0.0.1',9000),MyServer)
instance.serve_forever()
| 22.777778 | 71 | 0.35122 | 117 | 1,640 | 4.794872 | 0.589744 | 0.057041 | 0.042781 | 0.057041 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.041667 | 0.57561 | 1,640 | 72 | 72 | 22.777778 | 0.764368 | 0.065244 | 0 | 0.074074 | 0 | 0 | 0.032236 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.074074 | null | null | 0.074074 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
3442e1c11024fc9efc57990cff859cf751f64fe7 | 396 | py | Python | rest_framework_push_notifications/serializers.py | incuna/rest-framework-push-notifications | e57f06db45754af9e0b50159b8f12104222b2f68 | [
"BSD-2-Clause"
] | 1 | 2021-09-18T08:13:11.000Z | 2021-09-18T08:13:11.000Z | rest_framework_push_notifications/serializers.py | incuna/rest-framework-push-notifications | e57f06db45754af9e0b50159b8f12104222b2f68 | [
"BSD-2-Clause"
] | 7 | 2015-06-22T11:39:45.000Z | 2021-06-10T17:46:53.000Z | rest_framework_push_notifications/serializers.py | incuna/rest-framework-push-notifications | e57f06db45754af9e0b50159b8f12104222b2f68 | [
"BSD-2-Clause"
] | null | null | null | from push_notifications import models
from rest_framework.serializers import HyperlinkedModelSerializer
class APNSDevice(HyperlinkedModelSerializer):
class Meta:
fields = ('url', 'registration_id', 'name', 'device_id', 'active')
model = models.APNSDevice
class APNSDeviceUpdate(APNSDevice):
class Meta(APNSDevice.Meta):
read_only_fields = ('registration_id',)
| 28.285714 | 74 | 0.742424 | 39 | 396 | 7.358974 | 0.589744 | 0.216028 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.164141 | 396 | 13 | 75 | 30.461538 | 0.867069 | 0 | 0 | 0 | 0 | 0 | 0.131313 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.222222 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
34495a06737335e1258b7f38ad90e67ba9e2259e | 2,532 | py | Python | enaml/mpl_canvas.py | viz4biz/PyDataNYC2015 | 066154ea9f1837c355e6108a28b85889f3020da3 | [
"Apache-2.0"
] | 11 | 2015-11-11T13:57:21.000Z | 2019-08-14T15:53:43.000Z | enaml/mpl_canvas.py | viz4biz/PyDataNYC2015 | 066154ea9f1837c355e6108a28b85889f3020da3 | [
"Apache-2.0"
] | null | null | null | enaml/mpl_canvas.py | viz4biz/PyDataNYC2015 | 066154ea9f1837c355e6108a28b85889f3020da3 | [
"Apache-2.0"
] | 6 | 2015-11-11T13:57:25.000Z | 2018-09-12T07:53:03.000Z | #------------------------------------------------------------------------------
# Copyright (c) 2013, Nucleic Development Team.
#
# Distributed under the terms of the Modified BSD License.
#
# The full license is in the file COPYING.txt, distributed with this software.
#------------------------------------------------------------------------------
from atom.api import Typed, ForwardTyped, Bool, observe, set_default, Value, List, Enum
from enaml.core.declarative import d_
from .control import Control, ProxyControl
#: Delay the import of matplotlib until needed. This removes the hard
#: dependency on matplotlib for the rest of the Enaml code base.
def Figure():
from matplotlib.figure import Figure
return Figure
class ProxyMPLCanvas(ProxyControl):
""" The abstract definition of a proxy MPLCanvas object.
"""
#: A reference to the MPLCanvas declaration.
declaration = ForwardTyped(lambda: MPLCanvas)
def set_figure(self, figure):
raise NotImplementedError
def set_toolbar_visible(self, visible):
raise NotImplementedError
def set_toolbar_location(self, location):
raise NotImplementedError
def set_event_actions(self, actions):
raise NotImplementedError
def draw(self):
raise NotImplementedError
class MPLCanvas(Control):
    """ A control which can be used to embed a matplotlib figure.
"""
#: The matplotlib figure to display in the widget.
figure = d_(ForwardTyped(Figure))
#: Whether or not the matplotlib figure toolbar is visible.
toolbar_visible = d_(Bool(False))
toolbar_location = d_(Enum('top', 'bottom'))
event_actions = d_(List(Value()))
#: Matplotlib figures expand freely in height and width by default.
hug_width = set_default('ignore')
hug_height = set_default('ignore')
#: A reference to the ProxyMPLCanvas object.
proxy = Typed(ProxyMPLCanvas)
def draw(self):
""" Request draw on the Figure """
if self.proxy_is_active:
self.proxy.draw()
#--------------------------------------------------------------------------
# Observers
#--------------------------------------------------------------------------
@observe('figure', 'toolbar_visible', 'toolbar_location', 'event_actions')
def _update_proxy(self, change):
""" An observer which sends state change to the proxy.
"""
# The superclass handler implementation is sufficient.
super(MPLCanvas, self)._update_proxy(change)
| 31.65 | 87 | 0.612164 | 271 | 2,532 | 5.612546 | 0.424354 | 0.078895 | 0.071006 | 0.059172 | 0.048652 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001961 | 0.194313 | 2,532 | 79 | 88 | 32.050633 | 0.743627 | 0.453791 | 0 | 0.21875 | 0 | 0 | 0.052985 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.125 | 0 | 0.71875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |