hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
f095d3380b5ebd2361d49e633805f48a1b42caba | 2,866 | py | Python | tests/knowledge/rules/aws/context_aware/test_ec2_role_share_rule.py | my-devops-info/cloudrail-knowledge | b7c1bbd6fe1faeb79c105a01c0debbe24d031a0e | [
"MIT"
] | null | null | null | tests/knowledge/rules/aws/context_aware/test_ec2_role_share_rule.py | my-devops-info/cloudrail-knowledge | b7c1bbd6fe1faeb79c105a01c0debbe24d031a0e | [
"MIT"
] | null | null | null | tests/knowledge/rules/aws/context_aware/test_ec2_role_share_rule.py | my-devops-info/cloudrail-knowledge | b7c1bbd6fe1faeb79c105a01c0debbe24d031a0e | [
"MIT"
] | null | null | null | import unittest
from cloudrail.knowledge.context.aws.aws_connection import PublicConnectionDetail, PolicyConnectionProperty, ConnectionDirectionType
from cloudrail.knowledge.context.aws.ec2.ec2_instance import Ec2Instance
from cloudrail.knowledge.context.aws.ec2.network_interface import NetworkInterface
from cloudrail.knowledge.context.aws.iam.role import Role
from cloudrail.knowledge.context.aws.aws_environment_context import AwsEnvironmentContext
from cloudrail.knowledge.rules.aws.context_aware.ec2_role_share_rule import Ec2RoleShareRule
from cloudrail.knowledge.rules.base_rule import RuleResultType
from cloudrail.dev_tools.rule_test_utils import create_empty_entity
class TestEc2RoleShareRule(unittest.TestCase):

    def setUp(self):
        self.rule = Ec2RoleShareRule()

    def test_ec2_role_share_rule_fail(self):
        # Arrange
        connection_detail = PublicConnectionDetail(PolicyConnectionProperty([]), ConnectionDirectionType.INBOUND)
        network_interface: NetworkInterface = create_empty_entity(NetworkInterface)
        network_interface.inbound_connections.add(connection_detail)
        private_ec2: Ec2Instance = create_empty_entity(Ec2Instance)
        private_ec2.iam_profile_id = 'iam_profile_id'
        role: Role = create_empty_entity(Role)
        role.role_name = 'iam_role'
        private_ec2.iam_role = role
        public_ec2: Ec2Instance = create_empty_entity(Ec2Instance)
        public_ec2.iam_profile_id = 'iam_profile_id'
        public_ec2.network_resource.network_interfaces.append(network_interface)
        context = AwsEnvironmentContext(ec2s=[private_ec2, public_ec2])
        # Act
        result = self.rule.run(context, {})
        # Assert
        self.assertEqual(RuleResultType.FAILED, result.status)
        self.assertEqual(1, len(result.issues))

    def test_ec2_role_share_rule_pass(self):
        # Arrange
        connection_detail = PublicConnectionDetail(PolicyConnectionProperty([]), ConnectionDirectionType.INBOUND)
        network_interface: NetworkInterface = create_empty_entity(NetworkInterface)
        network_interface.inbound_connections.add(connection_detail)
        private_ec2: Ec2Instance = create_empty_entity(Ec2Instance)
        private_ec2.iam_profile_id = 'iam_profile_id1'
        role: Role = create_empty_entity(Role)
        role.role_name = 'iam_role'
        private_ec2.iam_role = role
        public_ec2: Ec2Instance = create_empty_entity(Ec2Instance)
        public_ec2.iam_profile_id = 'iam_profile_id2'
        public_ec2.network_resource.network_interfaces.append(network_interface)
        context = AwsEnvironmentContext(ec2s=[private_ec2, public_ec2])
        # Act
        result = self.rule.run(context, {})
        # Assert
        self.assertEqual(RuleResultType.SUCCESS, result.status)
        self.assertEqual(0, len(result.issues))
| 40.942857 | 132 | 0.759944 | 314 | 2,866 | 6.633758 | 0.226115 | 0.047528 | 0.073452 | 0.069611 | 0.707633 | 0.692271 | 0.602976 | 0.601056 | 0.601056 | 0.601056 | 0 | 0.01675 | 0.166783 | 2,866 | 69 | 133 | 41.536232 | 0.855528 | 0.01291 | 0 | 0.5 | 0 | 0 | 0.026223 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 1 | 0.068182 | false | 0.022727 | 0.204545 | 0 | 0.295455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f09b0ac82d16a4658253e26c01ef5ddc01555542 | 703 | py | Python | vaksinasi/migrations/0003_auto_20211029_0123.py | harmonica-pacil/invid19 | b4986f3375721deb02c9b9b8c982cd4e426c423c | [
"Unlicense"
] | 1 | 2021-12-27T12:50:05.000Z | 2021-12-27T12:50:05.000Z | vaksinasi/migrations/0003_auto_20211029_0123.py | harmonica-pacil/invid19 | b4986f3375721deb02c9b9b8c982cd4e426c423c | [
"Unlicense"
] | null | null | null | vaksinasi/migrations/0003_auto_20211029_0123.py | harmonica-pacil/invid19 | b4986f3375721deb02c9b9b8c982cd4e426c423c | [
"Unlicense"
] | null | null | null | # Generated by Django 3.2.8 on 2021-10-28 18:23
from django.db import migrations, models
class Migration(migrations.Migration):

    dependencies = [
        ('vaksinasi', '0002_alter_pendaftar_nik'),
    ]

    operations = [
        migrations.AlterField(
            model_name='pendaftar',
            name='NIK',
            field=models.CharField(error_messages={'unique': 'This email has already been registered'}, max_length=16, unique=True),
        ),
        migrations.AlterField(
            model_name='vaksin',
            name='kode',
            field=models.CharField(error_messages={'unique': 'This email has already been registered'}, max_length=3, unique=True),
        ),
    ]
| 29.291667 | 132 | 0.618777 | 77 | 703 | 5.532468 | 0.597403 | 0.093897 | 0.117371 | 0.13615 | 0.380282 | 0.380282 | 0.380282 | 0.380282 | 0.380282 | 0.380282 | 0 | 0.042389 | 0.261735 | 703 | 23 | 133 | 30.565217 | 0.77842 | 0.064011 | 0 | 0.235294 | 1 | 0 | 0.217988 | 0.036585 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.058824 | 0 | 0.235294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f09c5c08205d285d8d7e7dce3b7328bf5dab9194 | 1,991 | py | Python | classes/stack.py | EashanKaushik/Data-Structures | e5bd391e029cb47e650d5665647ff57590b9b343 | [
"MIT"
] | null | null | null | classes/stack.py | EashanKaushik/Data-Structures | e5bd391e029cb47e650d5665647ff57590b9b343 | [
"MIT"
] | null | null | null | classes/stack.py | EashanKaushik/Data-Structures | e5bd391e029cb47e650d5665647ff57590b9b343 | [
"MIT"
] | null | null | null | # node class for developing a linked list
class Node:
    def __init__(self, data=None, pointer=None):
        self.data = data
        self.pointer = pointer

    def set_data(self, data):
        self.data = data

    def get_data(self):
        return self.data

    def set_pointer(self, pointer):
        self.pointer = pointer

    def get_pointer(self):
        return self.pointer

    def __str__(self):
        return f'(data: {self.data} & pointer: {self.pointer})'


class Stack:
    def __init__(self, buttom=None, top=None):
        self.buttom = buttom
        self.top = top

    # push operation
    def push(self, data):
        if self.buttom == None:
            self.buttom = self.top = Node(data)
        else:
            new_node = Node(data, self.top)
            self.top = new_node
        return self

    # pop operation
    def pop(self):
        if self.top == None:
            return None
        data = self.top.get_data()
        self.top = self.top.get_pointer()
        return data

    # peek operation
    def peek(self):
        return self.top.get_data()

    # returns stack as list
    def as_list(self):
        curr_node = self.top
        stack_list = list()
        while curr_node.get_pointer() != None:
            stack_list.append(curr_node.get_data())
            curr_node = curr_node.get_pointer()
        stack_list.append(curr_node.get_data())
        return stack_list

    # returns True if stack empty and False if its not
    def is_empty(self):
        if self.top:
            return False
        else:
            return True

    def __str__(self):
        return f'top: {self.top} & buttom: {self.buttom}'


if __name__ == '__main__':
    stack = Stack()
    stack.push('Google')
    stack.push('Udemy')
    stack.push('Facebook')
    # print(stack.peek())
    stack.pop()
    print(stack.peek())
    print(stack.as_list())
| 21.641304 | 64 | 0.547464 | 243 | 1,991 | 4.279835 | 0.185185 | 0.080769 | 0.042308 | 0.040385 | 0.125 | 0.057692 | 0.057692 | 0 | 0 | 0 | 0 | 0 | 0.349573 | 1,991 | 91 | 65 | 21.879121 | 0.803089 | 0.085886 | 0 | 0.178571 | 0 | 0 | 0.064497 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.232143 | false | 0 | 0 | 0.089286 | 0.464286 | 0.035714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f09e721d7a2aea6a6da9968525dbcc1b80b3e3d5 | 549 | py | Python | tests/test_cosine_estimator.py | lefnire/lefnire_ml_utils | 65b84bb59faa41268aa14405daa58f6ba0b2509b | [
"MIT"
] | 3 | 2020-10-27T04:03:16.000Z | 2021-03-06T01:26:06.000Z | tests/test_cosine_estimator.py | lefnire/lefnire_ml_utils | 65b84bb59faa41268aa14405daa58f6ba0b2509b | [
"MIT"
] | 3 | 2020-10-08T22:47:55.000Z | 2020-10-29T18:43:36.000Z | tests/test_cosine_estimator.py | lefnire/lefnire_ml_utils | 65b84bb59faa41268aa14405daa58f6ba0b2509b | [
"MIT"
] | null | null | null | from ml_tools import CosineEstimator, Similars
from ml_tools.fixtures import articles
import numpy as np
corpus = articles()
split_ = len(corpus)//3
x, y = corpus[split_:], corpus[:split_] # note reversal; x should be smaller
x, y = Similars(x, y).embed().value()
def test_cosine_estimator():
    dnn = CosineEstimator(y)
    dnn.fit_cosine()
    adjustments = np.zeros((y.shape[0],))
    dnn.fit_adjustments(x, adjustments)
    preds = dnn.predict(x)
    print(preds)
    # TODO test outcome. Will need a larger corpus with dissimilar articles
| 28.894737 | 77 | 0.712204 | 79 | 549 | 4.835443 | 0.56962 | 0.015707 | 0.057592 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004405 | 0.173042 | 549 | 18 | 78 | 30.5 | 0.837004 | 0.189435 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.055556 | 0 | 1 | 0.071429 | false | 0 | 0.214286 | 0 | 0.285714 | 0.071429 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f0a3f15b303ced78faaebde15a58c43f37452c94 | 5,686 | py | Python | src/web/modules/post/controllers/post/create.py | unkyulee/elastic-cms | 3ccf4476c3523d4fefc0d8d9dee0196815b81489 | [
"MIT"
] | 2 | 2017-04-30T07:29:23.000Z | 2017-04-30T07:36:27.000Z | src/web/modules/post/controllers/post/create.py | unkyulee/elastic-cms | 3ccf4476c3523d4fefc0d8d9dee0196815b81489 | [
"MIT"
] | null | null | null | src/web/modules/post/controllers/post/create.py | unkyulee/elastic-cms | 3ccf4476c3523d4fefc0d8d9dee0196815b81489 | [
"MIT"
] | null | null | null | from flask import request, render_template
import json
import traceback
import lib.es as es
import web.util.tools as tools
import web.modules.post.services.workflow as workflow
import web.modules.post.services.upload as upload
import web.util.jinja as jinja
import web.modules.admin.services.notification as notification
def get(p):
host = p['c']['host']; index = p['c']['index'];
# send out empty post to be compatible with edit form
p['post'] = {}
# init workflow
wf = tools.get("wf", 'create')
p['workflow'] = workflow.init(wf, host, index)
# field map
fields = es.list(host, index, 'field')
p['field_map'] = {}
for field in fields:
p['field_map'][field['id']] = field
######################################################
# check condition
if p['workflow'] and p['workflow'].get('condition'):
try:
exec (p['workflow']['condition'], globals())
ret = condition(p)
if ret != True and ret: return ret
except SystemExit: pass
except Exception, e:
raise
######################################################
if request.method == "POST":
return post(p)
# get list of field
if p['workflow'] and p['workflow'].get('screen'):
p['field_list'] = []
for field in jinja.getlist(p['workflow'].get('screen')):
query = "name:{}".format(field)
ret = es.list(host, index, 'field', field, query)
if len(ret): p['field_list'].append(ret[0])
else:
query = "visible:create"
option = "size=10000&sort=order_key:asc"
p['field_list'] = es.list(host, index, 'field', query, option)
return render_template("post/post/create.html", p=p)
def post(p):
host = p['c']['host']; index = p['c']['index'];
# get all submitted fields
p['post'] = {}
p['original'] = {}
for field in request.form:
field_info = p['field_map'][field]
value = tools.get(field)
# if object then convert to json object
if field_info['handler'] == "object":
if value:
p["post"][field_info['id']] = json.loads(value)
elif value:
p["post"][field_info['id']] = value
######################################################
# validate
if p['workflow'] and p['workflow'].get('validation'):
try:
exec (p['workflow']['validation'], globals())
ret = validation(p)
if ret != True and ret: return ret
except SystemExit: pass
except Exception, e:
raise
######################################################
# create post
p['post']['created'] = es.now()
p['post']['created_by'] = p['login']
response = es.create(host, index, 'post', p['post'].get('id'), p["post"])
# get created id
p["post"]["id"] = response["_id"]
# handle attachment
#try:
for f in request.files:
if request.files[f]:
p["post"][f] = \
upload.save(request.files[f], p['c']['allowed_exts'],
p["post"]["id"], p['c']['upload_dir'])
#except Exception, e:
# es.delete(host, index, 'post', p['post'].get('id'))
# return tools.alert(str(e))
es.update(host, index, 'post', p["post"]["id"], p["post"])
es.flush(host, index)
######################################################
# Record History
if p['c']['keep_history'] == "Yes":
for k, v in p['post'].items():
if k in ["updated", "viewed"]: continue
if p['original'].get(k) != p['post'].get(k):
# write history
doc = {
"id": p["post"]["id"],
"field": k,
"previous": unicode(p['original'].get(k)),
"current": unicode(p['post'].get(k)),
"login": p['login'],
"created": es.now()
}
es.create(host, index, 'log', '', doc)
######################################################
# Post action
p['post'] = es.get(host, index, 'post', p["post"]["id"])
if p['workflow'] and p['workflow'].get('postaction'):
try:
exec (p['workflow']['postaction'], globals())
ret = postaction(p)
if ret != True and ret: return ret
except SystemExit: pass
except Exception, e:
raise
######################################################
######################################################
# notification
if p['workflow']:
notifications = es.list(host, index, 'notification', 'workflow:{}'.format(p['workflow'].get('name')))
for p['notification'] in notifications:
p['notification']['recipients'] = jinja.getlist(p['notification'].get('recipients'))
if p['notification'] and p['notification'].get('condition'):
try:
exec (p['notification'].get('condition'), globals())
ret = condition(p)
if ret != True and ret: return ret
except SystemExit: pass
except Exception, e:
raise
# send notification
notification.send(p,
p['notification'].get('header'),
p['notification'].get('message'),
p['notification'].get('recipients')
)
######################################################
# redirect to view
return tools.redirect("{}/post/view/{}".format(p['url'], p["post"]["id"]))
| 34.047904 | 109 | 0.470805 | 614 | 5,686 | 4.330619 | 0.226384 | 0.039489 | 0.027078 | 0.022565 | 0.282437 | 0.224897 | 0.194058 | 0.137646 | 0.137646 | 0.120346 | 0 | 0.001487 | 0.290362 | 5,686 | 166 | 110 | 34.253012 | 0.657497 | 0.074569 | 0 | 0.236364 | 0 | 0 | 0.178797 | 0.010517 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.036364 | 0.081818 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f0b3c832cf8e1ee83bee7a2645bdcc49172d3621 | 344 | py | Python | program_train/hachina1.py | zhujisheng/HAComponent | 29c44a5ea3090d748738830a50e2d25c07e1bac8 | [
"Apache-2.0"
] | 39 | 2017-12-17T13:51:13.000Z | 2022-02-25T02:57:39.000Z | program_train/hachina1.py | vitc-123/HAComponent | 29c44a5ea3090d748738830a50e2d25c07e1bac8 | [
"Apache-2.0"
] | 5 | 2019-05-30T07:09:49.000Z | 2021-07-15T02:53:49.000Z | program_train/hachina1.py | vitc-123/HAComponent | 29c44a5ea3090d748738830a50e2d25c07e1bac8 | [
"Apache-2.0"
] | 28 | 2018-01-05T10:48:28.000Z | 2021-12-07T13:59:22.000Z | """
File name: hachina.py.
Demo program: create a new device with three lines of code.
'''


def setup(hass, config):
    """After Home Assistant finds configuration for the hachina domain in the configuration file, it automatically calls the setup function in hachina.py."""
    # Set the state of the entity hachina.Hello_World.
    # Note 1: the entity does not need to be created first; once its state is set, the entity exists.
    # Note 2: the entity's state can be any string.
    hass.states.set("hachina.hello_world", "太棒了!")
    # Returning True indicates that initialization succeeded.
    return True
| 20.235294 | 72 | 0.671512 | 31 | 344 | 7.387097 | 0.903226 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007246 | 0.197674 | 344 | 16 | 73 | 21.5 | 0.822464 | 0.555233 | 0 | 0 | 0 | 0 | 0.190083 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f0b7d59823b393f5c64dfa9a82c8f3f19e0a14bc | 5,627 | py | Python | pydaemon/tx_sms_service_.py | tmkasun/pysmsgate | 42c06f1e8e3598697844fd9b098f314a24709777 | [
"Apache-2.0"
] | null | null | null | pydaemon/tx_sms_service_.py | tmkasun/pysmsgate | 42c06f1e8e3598697844fd9b098f314a24709777 | [
"Apache-2.0"
] | null | null | null | pydaemon/tx_sms_service_.py | tmkasun/pysmsgate | 42c06f1e8e3598697844fd9b098f314a24709777 | [
"Apache-2.0"
] | null | null | null | import json
import sys
import time
from twisted.internet.task import deferLater
from twisted.web import http
from twisted.web.resource import Resource
from twisted.web.server import Site
from twisted.internet import reactor
from gsmmodem.modem import GsmModem, SentSms
from gsmmodem.exceptions import TimeoutException, PinRequiredError, IncorrectPinError
from config import config
from libs.services import modem_service
from libs import modem_manager
class Sms(Resource):
isLeaf = True
def __init__(self, serviceType):
Resource.__init__(self)
self.serviceType = serviceType
def render_POST(self, request):
if self.serviceType == 'send':
print "DEBUG: Got POST a request from {}".format(request.getClientIP())
# global debugObject
# reactor.callLater(2,reactor.stop)
# debugObject = request
print "DEBUG: ",
print(request.args)
# TODO: Return JSON with status and ACK of sending message
# TODO: Use inline call back ratherthan blocking call
d = deferLater(reactor, 0, lambda: request)
d.addCallback(self._delayedRender)
request.responseHeaders.addRawHeader(b"content-type", b"application/json")
timestamp = int(time.time())
return_value = {
u'result': u'true',
u'timestamp': timestamp,
u'status': u'sent',
u'refid': u'N/A',
}
return json.dumps(return_value)
def _delayedRender(self, request):
mobile_number = request.args['mobile_number'][0]
if not (self.isMobile(mobile_number)):
return "Invalid mobile number: {}\nerror code:-1".format(mobile_number)
message = request.args['message'][0]
#TODO: find why this class var is not resolved not Service.debug_mode:
if True:
print("DEBUG: Running delayed job")
sendSms(mobile_number, message)
else:
print("[DEBUG_MODE]: Message = {} , \nmobile number = {}".format(mobile_number, message))
def isMobile(self, number):
try:
int(number)
if (len(number) != 10):
return False
return True
except ValueError:
return False
def sendSms(destination, message, deliver=False):
if deliver:
print ('\nSending SMS and waiting for delivery report...')
else:
print('\nSending SMS \nmessage ({}) \nto ({})...'.format(message, destination))
try:
modem = modem_manager.modems.get_random_modem()
sms = modem.sendSms(destination, message, waitForDeliveryReport=deliver)
except TimeoutException:
print('Failed to send message: the send operation timed out')
else:
if sms.report:
print('Message sent{0}'.format(
' and delivered OK.' if sms.status == SentSms.DELIVERED else ', but delivery failed.'))
else:
print('Message sent.')
class UnknownService(Resource):
isLeaf = True
def render(self, request):
return self.error_info(request)
def error_info(self, request):
request.responseHeaders.addRawHeader(b"content-type", b"application/json")
request.setResponseCode(http.NOT_FOUND)
return_value = {
u'result': u'false',
u'reason': u'Unknown Service',
u'request': {
u'args': request.args,
u'client': {
u'host': request.client.host,
u'port': request.client.port,
u'type': request.client.type,
},
u'code': request.code,
u'method': request.method,
u'path': request.path,
}
}
return json.dumps(return_value)
class Service(Resource):
# isLeaf = True
debugMode = False
def __init__(self, debugMode):
Resource.__init__(self)
Service.debugMode = debugMode
def getChild(self, path, request):
if path == "sms":
return Sms(request.postpath[0]) # Get the next URL component
elif path == "modem":
return modem_service.ModemService(request.postpath[0])
elif path == "ping":
return Ping()
else:
return UnknownService()
def render_GET(self, request):
request.responseHeaders.addRawHeader(b"content-type", b"application/json")
return_value = {u'result': u'ok'}
return json.dumps(return_value)
def restart(self):
pass
class Ping(Resource):
isLeaf = True
def render_GET(self, request):
request.responseHeaders.addRawHeader(b"content-type", b"application/json")
timestamp = int(time.time())
return_value = {
u'result': u'true',
u'timestamp': timestamp,
u'status': u'pong',
}
return json.dumps(return_value)
def main():
port = config.api['port']
service_name = config.api['service_name']
debug_mode = config.api['debug']
resource = Service(debug_mode)
root_web = Site(resource)
resource.putChild(service_name, Service(debug_mode))
if not debug_mode:
modem_manager.init()
print("Connected to modem")
else:
print("DEBUG_MODE enabled no message will be sent out from the dongle")
reactor.listenTCP(port, root_web)
print "Server running on {} url: localhost:{}/{}".format(port, port, service_name)
reactor.run()
if __name__ == '__main__':
main() | 30.917582 | 103 | 0.602808 | 623 | 5,627 | 5.338684 | 0.280899 | 0.026458 | 0.021648 | 0.042093 | 0.192724 | 0.159952 | 0.133794 | 0.133794 | 0.133794 | 0.133794 | 0 | 0.002508 | 0.291274 | 5,627 | 182 | 104 | 30.917582 | 0.831494 | 0.05207 | 0 | 0.244604 | 0 | 0 | 0.152272 | 0 | 0 | 0 | 0 | 0.005495 | 0 | 0 | null | null | 0.007194 | 0.093525 | null | null | 0.093525 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f0c4871928dffeb7e7e0aad03825633ae820c35a | 2,076 | py | Python | hello/migrations/0002_auto_20201116_1409.py | chenyuan99/OwlSavesCats | d8135848db5e6092467ee0d31aa46c36599cace1 | [
"MIT"
] | null | null | null | hello/migrations/0002_auto_20201116_1409.py | chenyuan99/OwlSavesCats | d8135848db5e6092467ee0d31aa46c36599cace1 | [
"MIT"
] | null | null | null | hello/migrations/0002_auto_20201116_1409.py | chenyuan99/OwlSavesCats | d8135848db5e6092467ee0d31aa46c36599cace1 | [
"MIT"
] | null | null | null | # Generated by Django 3.0.8 on 2020-11-16 19:09
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
    dependencies = [
        ('hello', '0001_initial'),
    ]

    operations = [
        migrations.CreateModel(
            name='Author',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('realname', models.CharField(max_length=64)),
                ('phone', models.CharField(max_length=16)),
                ('email', models.EmailField(max_length=254)),
                ('sign', models.BooleanField()),
                ('create_time', models.DateTimeField(auto_now=True)),
            ],
            options={
                'ordering': ['-id'],
            },
        ),
        migrations.CreateModel(
            name='paperclip',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('title', models.CharField(max_length=200)),
                ('abstract', models.CharField(max_length=200)),
                ('publish_time', models.DateTimeField()),
                ('create_time', models.DateTimeField(auto_now=True)),
                ('pid', models.CharField(max_length=16)),
            ],
        ),
        migrations.AlterUniqueTogether(
            name='guest',
            unique_together=None,
        ),
        migrations.RemoveField(
            model_name='guest',
            name='event',
        ),
        migrations.DeleteModel(
            name='Event',
        ),
        migrations.DeleteModel(
            name='Guest',
        ),
        migrations.AddField(
            model_name='author',
            name='paperclip',
            field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='hello.paperclip'),
        ),
        migrations.AlterUniqueTogether(
            name='author',
            unique_together={('phone', 'paperclip')},
        ),
    ]
| 32.952381 | 114 | 0.53131 | 180 | 2,076 | 6 | 0.427778 | 0.05 | 0.083333 | 0.111111 | 0.377778 | 0.22037 | 0.22037 | 0.146296 | 0.146296 | 0.146296 | 0 | 0.024513 | 0.331888 | 2,076 | 62 | 115 | 33.483871 | 0.754146 | 0.021676 | 0 | 0.428571 | 1 | 0 | 0.097585 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.035714 | 0 | 0.089286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f0cd195aebcc37de4124760ae9d1f43db09b1611 | 585 | py | Python | requests_test.py | AnakinJiang/PythonDemo | 4e8e8296b098ce541c588fafdf07cf2d6d955d38 | [
"MIT"
] | null | null | null | requests_test.py | AnakinJiang/PythonDemo | 4e8e8296b098ce541c588fafdf07cf2d6d955d38 | [
"MIT"
] | null | null | null | requests_test.py | AnakinJiang/PythonDemo | 4e8e8296b098ce541c588fafdf07cf2d6d955d38 | [
"MIT"
] | null | null | null | '''
@Author: AnakinJiang
@Email: jiangjinpeng319 AT gmail.com
@Description: requests test demo
@Date: 2019-08-27 15:37:14
@LastEditors: AnakinJiang
@LastEditTime: 2019-08-27 16:55:06
'''
import requests
def get_test():
    url1 = 'https://www.douban.com/'
    r1 = requests.get(url1)
    print(r1.status_code)
    print(r1.text)
    print(r1.content)
    url2 = 'https://www.douban.com/search'
    params = {'q': 'python', 'cat': '1001'}
    r2 = requests.get(url2, params=params)
    print(r2.url)
    print(r2.encoding)
    print(type(r2.content))
    print(r2.headers)
get_test()
| 19.5 | 43 | 0.659829 | 81 | 585 | 4.728395 | 0.592593 | 0.05483 | 0.041775 | 0.088773 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.099379 | 0.174359 | 585 | 29 | 44 | 20.172414 | 0.693582 | 0.299145 | 0 | 0 | 0 | 0 | 0.164589 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.066667 | 0 | 0.133333 | 0.466667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
f0d0afc35b9e84aba33193038e8752efcd1d32f7 | 1,695 | py | Python | tests/test_errors.py | althonos/pyhmmer | eb6fe7c0e74557e0ae9d647693711583d2d86b68 | [
"MIT"
] | 26 | 2020-11-10T22:57:49.000Z | 2022-03-24T16:58:55.000Z | tests/test_errors.py | althonos/pyhmmer | eb6fe7c0e74557e0ae9d647693711583d2d86b68 | [
"MIT"
] | 13 | 2020-11-12T11:41:08.000Z | 2022-03-09T18:17:48.000Z | tests/test_errors.py | althonos/pyhmmer | eb6fe7c0e74557e0ae9d647693711583d2d86b68 | [
"MIT"
] | 2 | 2021-04-04T05:13:07.000Z | 2021-11-30T09:11:23.000Z | import unittest
from pyhmmer.easel import Alphabet
from pyhmmer.errors import UnexpectedError, AllocationError, EaselError, AlphabetMismatch
class TestErrors(unittest.TestCase):
    def test_unexpected_error(self):
        err = UnexpectedError(1, "p7_ReconfigLength")
        self.assertEqual(repr(err), "UnexpectedError(1, 'p7_ReconfigLength')")
        self.assertEqual(str(err), "Unexpected error occurred in 'p7_ReconfigLength': eslFAIL (status code 1)")

    def test_allocation_error(self):
        err = AllocationError("ESL_SQ", 16)
        self.assertEqual(repr(err), "AllocationError('ESL_SQ', 16)")
        self.assertEqual(str(err), "Could not allocate 16 bytes for type ESL_SQ")
        err2 = AllocationError("float", 4, 32)
        self.assertEqual(repr(err2), "AllocationError('float', 4, 32)")
        self.assertEqual(str(err2), "Could not allocate 128 bytes for an array of 32 float")

    def test_easel_error(self):
        err = EaselError(1, "failure")
        self.assertEqual(repr(err), "EaselError(1, 'failure')")
        self.assertEqual(str(err), "Error raised from C code: failure, eslFAIL (status code 1)")

    def test_alphabet_mismatch(self):
        err = AlphabetMismatch(Alphabet.dna(), Alphabet.rna())
        self.assertEqual(repr(err), "AlphabetMismatch(Alphabet.dna(), Alphabet.rna())")
        self.assertEqual(str(err), "Expected Alphabet.dna(), found Alphabet.rna()")
        self.assertNotEqual(err, 1)
        err2 = AlphabetMismatch(Alphabet.dna(), Alphabet.rna())
        self.assertEqual(err, err)
        self.assertEqual(err, err2)
        err3 = AlphabetMismatch(Alphabet.dna(), Alphabet.amino())
        self.assertNotEqual(err, err3)
| 42.375 | 111 | 0.684366 | 198 | 1,695 | 5.787879 | 0.30303 | 0.157068 | 0.082897 | 0.076789 | 0.480803 | 0.480803 | 0.374346 | 0.097731 | 0 | 0 | 0 | 0.024673 | 0.187021 | 1,695 | 39 | 112 | 43.461538 | 0.806967 | 0 | 0 | 0 | 0 | 0 | 0.282006 | 0.047788 | 0 | 0 | 0 | 0 | 0.482759 | 1 | 0.137931 | false | 0 | 0.103448 | 0 | 0.275862 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f0d0b9a716fb2240576fa6c1da54c8656e26b395 | 1,196 | py | Python | electrum_gui/common/provider/chains/cfx/sdk/cfx_address/base32.py | BixinKey/electrum | f5de4e74e313b9b569f13ba6ab9142a38bf095f2 | [
"MIT"
] | 12 | 2020-11-12T08:53:05.000Z | 2021-07-06T17:30:39.000Z | electrum_gui/common/provider/chains/cfx/sdk/cfx_address/base32.py | liyanhrxy/electrum | 107608ef201ff1d20d2f6091c257b1ceff9b7362 | [
"MIT"
] | 209 | 2020-09-23T06:58:18.000Z | 2021-11-18T11:25:41.000Z | electrum_gui/common/provider/chains/cfx/sdk/cfx_address/base32.py | liyanhrxy/electrum | 107608ef201ff1d20d2f6091c257b1ceff9b7362 | [
"MIT"
] | 19 | 2020-10-13T11:42:26.000Z | 2022-02-06T01:26:34.000Z | import base64
STANDARD_ALPHABET = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ234567'
CUSTOM_ALPHABET = 'abcdefghjkmnprstuvwxyz0123456789'
ENCODE_TRANS = str.maketrans(STANDARD_ALPHABET, CUSTOM_ALPHABET)
DECODE_TRANS = str.maketrans(CUSTOM_ALPHABET, STANDARD_ALPHABET)
PADDING_LETTER = '='
def encode(buffer):
    assert type(buffer) == bytes or type(buffer) == bytearray, "please pass an bytes"
    b32encoded = base64.b32encode(buffer) # encode bytes
    b32str = b32encoded.decode().replace(PADDING_LETTER, "") # translate chars
    return b32str.translate(ENCODE_TRANS) # remove padding char


def decode(b32str):
    assert type(b32str) == str, "please pass an str"
    # pad to 8's multiple with '='
    b32len = len(b32str)
    if b32len % 8 > 0:
        padded_len = b32len + (8 - b32len % 8)
        b32str = b32str.ljust(padded_len, PADDING_LETTER)
    # translate and decode
    return base64.b32decode(b32str.translate(DECODE_TRANS))


def decode_to_words(b32str):
    result = bytearray()
    for c in b32str:
        result.append(CUSTOM_ALPHABET.index(c))
    return result


def encode_words(words):
    result = ""
    for v in words:
        result += CUSTOM_ALPHABET[v]
    return result
| 29.9 | 85 | 0.70903 | 144 | 1,196 | 5.75 | 0.381944 | 0.084541 | 0.041063 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.065285 | 0.193144 | 1,196 | 39 | 86 | 30.666667 | 0.792746 | 0.08194 | 0 | 0.071429 | 0 | 0 | 0.094322 | 0.058608 | 0 | 0 | 0 | 0 | 0.071429 | 1 | 0.142857 | false | 0.071429 | 0.035714 | 0 | 0.321429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
f0d71db6f553884c2bb1448ebcb992750a840180 | 2,175 | py | Python | reamber/algorithms/generate/sv/generators/svFuncSequencer.py | Bestfast/reamberPy | 91b76ca6adf11fbe8b7cee7c186481776a4d7aaa | [
"MIT"
] | null | null | null | reamber/algorithms/generate/sv/generators/svFuncSequencer.py | Bestfast/reamberPy | 91b76ca6adf11fbe8b7cee7c186481776a4d7aaa | [
"MIT"
] | null | null | null | reamber/algorithms/generate/sv/generators/svFuncSequencer.py | Bestfast/reamberPy | 91b76ca6adf11fbe8b7cee7c186481776a4d7aaa | [
"MIT"
] | null | null | null | from typing import Callable, List, Union
from numpy import arange
from reamber.algorithms.generate.sv.SvPkg import SvPkg
from reamber.algorithms.generate.sv.SvSequence import SvSequence
def svFuncSequencer(funcs: List[Union[float, Callable[[float], float], None]],
                    offsets: Union[List[float], float, None] = None,
                    repeats: int = 1,
                    repeatGap: float = 0,
                    startX: float = 0,
                    endX: float = 1
                    ):
    """ Sets up a sequence using functions.

    :param funcs: Funcs to generate values. \
        If List, values will be used directly. \
        If Callable, values will be called with the X. \
        If None, this will leave a gap in the sequence.
    :param offsets: Offsets to use on functions. \
        If List, offsets will be used to map the funcs. \
        If Float, all funcs are assumed to be separated by {float} ms. Starting from 0. \
        If None, all funcs are assumed to be separated by 1 ms. Starting from 0.
    :param repeats: The amount of repeats. This affects the increment of the X argument passed to the Callables. \
        If 0, only endX will be used.
    :param repeatGap: The gap between the repeats.
    :param startX: The starting X.
    :param endX: The ending X.
    """

    length = len(funcs)
    if offsets is None: offsets = list(range(0, length))
    # We use [:length] because sometimes arange will create too many for some reason (?)
    elif isinstance(offsets, (float, int)): offsets = list(arange(0, length * offsets, offsets))[:length]

    assert length == len(offsets)

    seq = SvSequence()
    for i, (offset, func) in enumerate(zip(offsets, funcs)):
        if isinstance(func, Callable): seq.appendInit([(offset, 0)])
        elif isinstance(func, (float, int)): seq.appendInit([(offset, func)])
        elif func is None: pass

    pkg = SvPkg.repeat(seq=seq, times=repeats, gap=repeatGap)

    nones = 0
    for funcI, func in enumerate(funcs):
        if func is None: nones += 1
        if isinstance(func, Callable):
            pkg.applyNth(func, funcI - nones, startX, endX)

    return pkg
| 38.157895 | 114 | 0.630805 | 293 | 2,175 | 4.682594 | 0.341297 | 0.017493 | 0.021866 | 0.042274 | 0.093294 | 0.048105 | 0.048105 | 0.048105 | 0 | 0 | 0 | 0.008244 | 0.274943 | 2,175 | 56 | 115 | 38.839286 | 0.861763 | 0.386207 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.037037 | 1 | 0.037037 | false | 0.037037 | 0.148148 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f0db6dccf419614f773337c9ea484f5c0bdce823 | 452 | py | Python | itmo/2015-16/final/weight.py | dluschan/olymp | dfbf4352dbc7f6fd7563e7bd19aff6fd67fb50b7 | [
"MIT"
] | null | null | null | itmo/2015-16/final/weight.py | dluschan/olymp | dfbf4352dbc7f6fd7563e7bd19aff6fd67fb50b7 | [
"MIT"
] | null | null | null | itmo/2015-16/final/weight.py | dluschan/olymp | dfbf4352dbc7f6fd7563e7bd19aff6fd67fb50b7 | [
"MIT"
] | 1 | 2018-09-14T18:50:48.000Z | 2018-09-14T18:50:48.000Z | weight = []
diff = 0
n = int(input())
for i in range(n):
    q, c = map(int, input().split())
    if c == 2:
        q *= -1
    weight.append(q)
    diff += q

min_diff = abs(diff)
for i in range(n):
    if abs(diff - 2*weight[i]) < min_diff:
        min_diff = abs(diff - 2*weight[i])
    for j in range(i+1, n):
        if abs(diff - 2*(weight[i] + weight[j])) < min_diff:
            min_diff = abs(diff - 2*(weight[i] + weight[j]))
print(min_diff)
| 25.111111 | 60 | 0.526549 | 80 | 452 | 2.9 | 0.275 | 0.181034 | 0.137931 | 0.241379 | 0.564655 | 0.465517 | 0.465517 | 0.25 | 0.25 | 0 | 0 | 0.024691 | 0.283186 | 452 | 17 | 61 | 26.588235 | 0.691358 | 0 | 0 | 0.117647 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.058824 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f0dd9ec3c005973e7ab2799270be09efd623558e | 9,700 | py | Python | test/test_digitalbitbox.py | matejcik/HWI | f0021502d470660b9d8a5e79fcc440ac809a610f | [
"MIT"
] | 54 | 2018-03-16T14:50:19.000Z | 2019-01-29T19:19:17.000Z | test/test_digitalbitbox.py | matejcik/HWI | f0021502d470660b9d8a5e79fcc440ac809a610f | [
"MIT"
] | 86 | 2018-01-30T18:40:44.000Z | 2019-01-30T20:48:24.000Z | test/test_digitalbitbox.py | matejcik/HWI | f0021502d470660b9d8a5e79fcc440ac809a610f | [
"MIT"
] | 11 | 2017-09-05T17:19:53.000Z | 2019-01-10T19:11:04.000Z | #! /usr/bin/env python3
import argparse
import atexit
import json
import os
import subprocess
import sys
import time
import unittest
from test_device import (
Bitcoind,
DeviceEmulator,
DeviceTestCase,
TestDeviceConnect,
TestGetKeypool,
TestGetDescriptors,
TestSignTx,
)
from hwilib.devices.digitalbitbox import BitboxSimulator, send_plain, send_encrypt
class BitBox01Emulator(DeviceEmulator):
def __init__(self, simulator):
try:
os.unlink('bitbox-emulator.stderr')
except FileNotFoundError:
pass
self.simulator = simulator
self.bitbox_log = None
self.simulator_proc = None
self.type = 'digitalbitbox'
self.path = 'udp:127.0.0.1:35345'
self.fingerprint = 'a31b978a'
self.master_xpub = "tpubDCjZ76WbqdyGWi7NaFLuhWL8GX7NK5gCGB7ApynxUHGkgvBVCtpXX1i6Uj88rL9WKM7vimN8QZRjowSX4g2uPxjnuie1Kg7XK8pvNGZznQi"
self.password = "0000"
self.supports_ms_display = False
self.supports_xpub_ms_display = False
self.supports_unsorted_ms = False
self.supports_taproot = False
self.strict_bip48 = False
self.include_xpubs = False
self.supports_device_multiple_multisig = True
def start(self):
super().start()
self.bitbox_log = open('bitbox-emulator.stderr', 'a')
# Start the Digital bitbox simulator
self.simulator_proc = subprocess.Popen(
[
'./' + os.path.basename(self.simulator),
'../../tests/sd_files/'
],
cwd=os.path.dirname(self.simulator),
stderr=self.bitbox_log
)
# Wait for simulator to be up
while True:
try:
self.dev = BitboxSimulator('127.0.0.1', 35345)
reply = send_plain(b'{"password":"0000"}', self.dev)
if 'error' not in reply:
break
except Exception:
pass
time.sleep(0.5)
# Set password and load from backup
send_encrypt(json.dumps({"seed": {"source": "backup", "filename": "test_backup.pdf", "key": "key"}}), '0000', self.dev)
atexit.register(self.stop)
def stop(self):
super().stop()
self.simulator_proc.terminate()
self.simulator_proc.wait()
self.bitbox_log.close()
atexit.unregister(self.stop)
# DigitalBitbox specific management command tests
class TestDBBManCommands(DeviceTestCase):
def test_restore(self):
result = self.do_command(self.dev_args + ['-i', 'restore'])
self.assertIn('error', result)
self.assertIn('code', result)
self.assertEqual(result['error'], 'The Digital Bitbox does not support restoring via software')
self.assertEqual(result['code'], -9)
def test_pin(self):
result = self.do_command(self.dev_args + ['promptpin'])
self.assertIn('error', result)
self.assertIn('code', result)
self.assertEqual(result['error'], 'The Digital Bitbox does not need a PIN sent from the host')
self.assertEqual(result['code'], -9)
result = self.do_command(self.dev_args + ['sendpin', '1234'])
self.assertIn('error', result)
self.assertIn('code', result)
self.assertEqual(result['error'], 'The Digital Bitbox does not need a PIN sent from the host')
self.assertEqual(result['code'], -9)
def test_display(self):
result = self.do_command(self.dev_args + ['displayaddress', '--path', 'm/0h'])
self.assertIn('error', result)
self.assertIn('code', result)
self.assertEqual(result['error'], 'The Digital Bitbox does not have a screen to display addresses on')
self.assertEqual(result['code'], -9)
def test_setup_wipe(self):
# Device is init, setup should fail
result = self.do_command(self.dev_args + ['-i', 'setup', '--label', 'setup_test', '--backup_passphrase', 'testpass'])
self.assertEquals(result['code'], -10)
self.assertEquals(result['error'], 'Device is already initialized. Use wipe first and try again')
# Wipe
result = self.do_command(self.dev_args + ['wipe'])
self.assertTrue(result['success'])
# Check arguments
result = self.do_command(self.dev_args + ['-i', 'setup', '--label', 'setup_test'])
self.assertEquals(result['code'], -7)
self.assertEquals(result['error'], 'The label and backup passphrase for a new Digital Bitbox wallet must be specified and cannot be empty')
result = self.do_command(self.dev_args + ['-i', 'setup', '--backup_passphrase', 'testpass'])
self.assertEquals(result['code'], -7)
self.assertEquals(result['error'], 'The label and backup passphrase for a new Digital Bitbox wallet must be specified and cannot be empty')
# Setup
result = self.do_command(self.dev_args + ['-i', 'setup', '--label', 'setup_test', '--backup_passphrase', 'testpass'])
self.assertTrue(result['success'])
# Reset back to original
result = self.do_command(self.dev_args + ['wipe'])
self.assertTrue(result['success'])
send_plain(b'{"password":"0000"}', self.emulator.dev)
send_encrypt(json.dumps({"seed": {"source": "backup", "filename": "test_backup.pdf", "key": "key"}}), '0000', self.emulator.dev)
# Make sure device is init, setup should fail
result = self.do_command(self.dev_args + ['-i', 'setup', '--label', 'setup_test', '--backup_passphrase', 'testpass'])
self.assertEquals(result['code'], -10)
self.assertEquals(result['error'], 'Device is already initialized. Use wipe first and try again')
def test_backup(self):
# Check arguments
result = self.do_command(self.dev_args + ['backup', '--label', 'backup_test'])
self.assertEquals(result['code'], -7)
self.assertEquals(result['error'], 'The label and backup passphrase for a Digital Bitbox backup must be specified and cannot be empty')
result = self.do_command(self.dev_args + ['backup', '--backup_passphrase', 'key'])
self.assertEquals(result['code'], -7)
self.assertEquals(result['error'], 'The label and backup passphrase for a Digital Bitbox backup must be specified and cannot be empty')
# Wipe
result = self.do_command(self.dev_args + ['wipe'])
self.assertTrue(result['success'])
# Setup
result = self.do_command(self.dev_args + ['-i', 'setup', '--label', 'backup_test', '--backup_passphrase', 'testpass'])
self.assertTrue(result['success'])
# make the backup
result = self.do_command(self.dev_args + ['backup', '--label', 'backup_test_backup', '--backup_passphrase', 'testpass'])
self.assertTrue(result['success'])
class TestBitboxGetXpub(DeviceTestCase):
def test_getxpub(self):
self.dev_args.remove('--chain')
self.dev_args.remove('test')
result = self.do_command(self.dev_args + ['--expert', 'getxpub', 'm/44h/0h/0h/3'])
self.assertEqual(result['xpub'], 'xpub6Du9e5Cz1NZWz3dvsvM21tsj4xEdbAb7AcbysFL42Y3yr8PLMnsaxhetHxurTpX5Rp5RbnFFwP1wct8K3gErCUSwcxFhxThsMBSxdmkhTNf')
self.assertFalse(result['testnet'])
self.assertFalse(result['private'])
self.assertEqual(result['depth'], 4)
self.assertEqual(result['parent_fingerprint'], '31d5e5ea')
self.assertEqual(result['child_num'], 3)
self.assertEqual(result['chaincode'], '7062818c752f878bf96ca668f77630452c3fa033b7415eed3ff568e04ada8104')
self.assertEqual(result['pubkey'], '029078c9ad8421afd958d7bc054a0952874923e2586fc9375604f0479a354ea193')
def digitalbitbox_test_suite(simulator, bitcoind, interface):
dev_emulator = BitBox01Emulator(simulator)
signtx_cases = [
(["legacy"], ["legacy"], True, True),
(["segwit"], ["segwit"], True, True),
(["legacy", "segwit"], ["legacy", "segwit"], True, True),
]
# Generic Device tests
suite = unittest.TestSuite()
suite.addTest(DeviceTestCase.parameterize(TestDBBManCommands, bitcoind, emulator=dev_emulator, interface=interface))
suite.addTest(DeviceTestCase.parameterize(TestBitboxGetXpub, bitcoind, emulator=dev_emulator, interface=interface))
suite.addTest(DeviceTestCase.parameterize(TestDeviceConnect, bitcoind, emulator=dev_emulator, interface=interface, detect_type="digitalbitbox"))
suite.addTest(DeviceTestCase.parameterize(TestDeviceConnect, bitcoind, emulator=dev_emulator, interface=interface, detect_type="digitalbitbox_01_simulator"))
suite.addTest(DeviceTestCase.parameterize(TestGetDescriptors, bitcoind, emulator=dev_emulator, interface=interface))
suite.addTest(DeviceTestCase.parameterize(TestGetKeypool, bitcoind, emulator=dev_emulator, interface=interface))
suite.addTest(DeviceTestCase.parameterize(TestSignTx, bitcoind, emulator=dev_emulator, interface=interface, signtx_cases=signtx_cases))
result = unittest.TextTestRunner(stream=sys.stdout, verbosity=2).run(suite)
return result.wasSuccessful()
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='Test Digital Bitbox implementation')
parser.add_argument('simulator', help='Path to simulator binary')
parser.add_argument('bitcoind', help='Path to bitcoind binary')
parser.add_argument('--interface', help='Which interface to send commands over', choices=['library', 'cli', 'bindist'], default='library')
args = parser.parse_args()
# Start bitcoind
bitcoind = Bitcoind.create(args.bitcoind)
sys.exit(not digitalbitbox_test_suite(args.simulator, bitcoind, args.interface))
| 45.971564 | 161 | 0.668454 | 1,069 | 9,700 | 5.946679 | 0.228251 | 0.039327 | 0.032877 | 0.05081 | 0.510304 | 0.498663 | 0.483404 | 0.450684 | 0.421268 | 0.415919 | 0 | 0.028007 | 0.201237 | 9,700 | 210 | 162 | 46.190476 | 0.792463 | 0.038557 | 0 | 0.257669 | 0 | 0 | 0.252229 | 0.047588 | 0 | 0 | 0 | 0 | 0.257669 | 1 | 0.06135 | false | 0.09816 | 0.06135 | 0 | 0.147239 | 0.01227 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
f0df7eb72b3c3a8b625596cf0e32b466f1260149 | 456 | py | Python | ml-env/examples/train.py | RafalSkolasinski/dockerfiles | 8c8609fda42ad2b55960b64aef12774e00e2a4d4 | [
"MIT"
] | null | null | null | ml-env/examples/train.py | RafalSkolasinski/dockerfiles | 8c8609fda42ad2b55960b64aef12774e00e2a4d4 | [
"MIT"
] | 134 | 2021-02-10T14:32:47.000Z | 2022-03-31T02:16:17.000Z | ml-env/examples/train.py | RafalSkolasinski/dockerfiles | 8c8609fda42ad2b55960b64aef12774e00e2a4d4 | [
"MIT"
] | null | null | null | import numpy as np
import joblib
from sklearn import datasets
from sklearn import svm
from sklearn.model_selection import train_test_split
filename_p = "model.joblib"
if __name__ == "__main__":
digits = datasets.load_digits()
X_train, X_test, y_train, y_test = train_test_split(
digits.data, digits.target, random_state=0
)
clf = svm.SVC(gamma=0.001, C=100.0)
clf.fit(X_train, y_train)
joblib.dump(clf, filename_p)
| 19 | 56 | 0.717105 | 71 | 456 | 4.28169 | 0.507042 | 0.108553 | 0.111842 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02439 | 0.190789 | 456 | 23 | 57 | 19.826087 | 0.799458 | 0 | 0 | 0 | 0 | 0 | 0.04386 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.357143 | 0 | 0.357143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
f0e245d926233f4c636eb409d21b940db19ff2c8 | 1,982 | py | Python | activitfinal.py | AjayBadrinath/Python-Stuff | 34bb0339968b943cc63c5dc31721e3504faea157 | [
"MIT"
] | null | null | null | activitfinal.py | AjayBadrinath/Python-Stuff | 34bb0339968b943cc63c5dc31721e3504faea157 | [
"MIT"
] | null | null | null | activitfinal.py | AjayBadrinath/Python-Stuff | 34bb0339968b943cc63c5dc31721e3504faea157 | [
"MIT"
] | null | null | null | import sys
#"splchar":["!","@","#","$",".",",",":","%","^","*"]
splchar=[chr(i) for i in range(33,48)]#ASCII spl charecter range from 33-48 and58-65#and i here is the mapping expression ie the thing thet is executed evry iteration
splchar1=[chr(i) for i in range(58,65)]#Instead of explicit declaration of for loop this is better for assigning number and converting to charecter then make it a list
splchar+=splchar1#making all spl char in one list
letter={"cap":"ABCDEFGHIJKLMNOPQRSTUVWXYZ","alpha":"abcdefghijklmnopqrstuvwxyz","digit":"0123456789","white_space":[" ","\t","\n"],
"splchar":splchar}#Dictionary to be used
print("Enter your string with multiple lines\n")
print("Ctrl+D to Terminate input\n")
a=sys.stdin.read()#ctrl+d to finish input#this is for multiline input
x=list(letter.items())#return Datatype dict_item so convert to list which return tuples in list
lineno=[j for j in a if j=="\n"]#list of \n
cap,small,digit,whitespace,spl=0,0,0,0,0#Assign all values to be 0
'''Implementation of switch case(sort of)'''
d1={"cap":"if i in x[j][-1] and x[j][0]=='cap':cap+=1",
"alpha":"if i in x[j][-1] and x[j][0]=='alpha':small+=1",
"digit":"if i in x[j][-1] and x[j][0]=='digit':digit+=1",
"white_space":"if i in x[j][-1] and x[j][0]=='white_space':whitespace+=1",
"splchar":"if i in x[j][-1] and x[j][0]=='splchar':spl+=1"
}
#here to be specific the switch case here uses executable command with shared key with dictionary letter
for i in a:#Check the each charecter of input a
for j in range(0,len(x)):#Accessing the list of key value pair from letter
exec(str(d1.get(x[j][0])))#exec execute string and this access the switch statement
print("\nThe total number of caps",cap,"\nSmall:",small,"\ndigit:",digit,"\nwhitespaces",whitespace,"\nSpecialCharecters",spl,"\nTotal Alphabets",cap+small,
"\nTotal lines", len(lineno))
| 48.341463 | 168 | 0.652371 | 329 | 1,982 | 3.917933 | 0.392097 | 0.017067 | 0.013964 | 0.023274 | 0.07758 | 0.07758 | 0.054306 | 0.054306 | 0.054306 | 0.054306 | 0 | 0.032475 | 0.176589 | 1,982 | 40 | 169 | 49.55 | 0.757353 | 0.380424 | 0 | 0 | 0 | 0.217391 | 0.478648 | 0.163701 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.043478 | 0 | 0.043478 | 0.130435 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f0e44d50b65aca97969715923cd5504c6c2fa654 | 28,838 | py | Python | pytests/backup/ibr.py | ramalingam-cb/testrunner | 81cea7a5a493cf0c67fca7f97c667cd3c6ad2142 | [
"Apache-2.0"
] | null | null | null | pytests/backup/ibr.py | ramalingam-cb/testrunner | 81cea7a5a493cf0c67fca7f97c667cd3c6ad2142 | [
"Apache-2.0"
] | null | null | null | pytests/backup/ibr.py | ramalingam-cb/testrunner | 81cea7a5a493cf0c67fca7f97c667cd3c6ad2142 | [
"Apache-2.0"
] | null | null | null | __author__ = 'ashvinder'
import re
import os
import gc
import logger
import time
from TestInput import TestInputSingleton
from backup.backup_base import BackupBaseTest
from remote.remote_util import RemoteMachineShellConnection
from couchbase_helper.documentgenerator import BlobGenerator
from couchbase_helper.documentgenerator import DocumentGenerator
from memcached.helper.kvstore import KVStore
from membase.api.rest_client import RestConnection, Bucket
from couchbase_helper.data_analysis_helper import *
from memcached.helper.data_helper import VBucketAwareMemcached
from view.spatialquerytests import SimpleDataSet
from view.spatialquerytests import SpatialQueryTests
from membase.helper.spatial_helper import SpatialHelper
from couchbase_helper.cluster import Cluster
from membase.helper.bucket_helper import BucketOperationHelper
from couchbase_helper.document import DesignDocument, View
import copy
class IBRTests(BackupBaseTest):
def setUp(self):
super(IBRTests, self).setUp()
self.num_mutate_items = self.input.param("mutate_items", 1000)
gen_load = BlobGenerator('testdata', 'testdata-', self.value_size, end=self.num_items)
self._load_all_buckets(self.master, gen_load, "create", 0, 1, self.item_flag, True, batch_size=20000,
pause_secs=5, timeout_secs=180)
self._wait_for_stats_all_buckets(self.servers[:self.num_servers])
#Take a full backup
if not self.command_options:
self.command_options = []
options = self.command_options + [' -m full']
self.total_backups = 1
self.shell.execute_cluster_backup(self.couchbase_login_info, self.backup_location, options)
def tearDown(self):
super(IBRTests, self).tearDown()
def restoreAndVerify(self, bucket_names, kvs_before, expected_error=None):
for bucket in self.buckets:
bucket.kvs[1] = kvs_before[bucket.name]
del kvs_before
gc.collect()
errors, outputs = self.shell.restore_backupFile(self.couchbase_login_info, self.backup_location, bucket_names)
errors.extend(outputs)
error_found = False
if expected_error:
for line in errors:
if line.find(expected_error) != -1:
error_found = True
break
self.assertTrue(error_found, "Expected error not found: %s" % expected_error)
self._wait_for_stats_all_buckets(self.servers[:self.num_servers])
if expected_error:
for bucket in self.buckets:
bucket.kvs[1] = KVStore()
self.verify_results(self.master)
self._verify_stats_all_buckets(self.servers[:self.num_servers])
def verify_dir_structure(self, total_backups, buckets, nodes):
cmd = 'find ' + self.backup_location + ' -type f'
if self.shell.info.type.lower() == 'windows':
cmd = 'cmd.exe /C "dir /s /b C:\\tmp\\backup"'
output, error = self.shell.execute_command(cmd)
self.log.info("output = {0} error = {1}".format(output,error))
if error:
raise Exception('Got error {0}',format(error))
expected_design_json = total_backups * buckets
expected_data_cbb = total_backups * buckets * nodes
expected_meta_json = total_backups * buckets * nodes
expected_failover_json = total_backups * buckets * nodes
timestamp = '\d{4}\-\d{2}\-\d{2}T\d+Z'
pattern_mode = '(full|accu|diff)'
timestamp_backup = timestamp + '\-' + pattern_mode
pattern_bucket = 'bucket-\w+'
pattern_node = 'node\-\d{1,3}\.\d{1,3}\.\d{1,3}.\d{1,3}.+'
pattern_design_json = timestamp + '/|\\\\' + timestamp_backup + \
'/|\\\\' + pattern_bucket
pattern_backup_files = pattern_design_json + '/|\\\\' + pattern_node
data_cbb = 0
failover = 0
meta_json = 0
design_json = 0
for line in output:
if 'data-0000.cbb' in line:
if re.search(pattern_backup_files, line):
data_cbb += 1
if 'failover.json' in line:
if re.search(pattern_backup_files, line):
failover += 1
if self.cb_version[:5] != "4.5.1" and 'meta.json' in line:
if re.search(pattern_backup_files, line):
meta_json += 1
if 'design.json' in line:
if re.search(pattern_design_json, line):
design_json += 1
self.log.info("expected_data_cbb {0} data_cbb {1}"
.format(expected_data_cbb, data_cbb))
self.log.info("expected_failover_json {0} failover {1}"
.format(expected_failover_json, failover))
if self.cb_version[:5] != "4.5.1":
self.log.info("expected_meta_json {0} meta_json {1}"
.format(expected_meta_json, meta_json))
""" add json support later in this test
self.log.info("expected_design_json {0} design_json {1}"
.format(expected_design_json, design_json)) """
if self.cb_version[:5] != "4.5.1":
if data_cbb == expected_data_cbb and failover == expected_failover_json and \
meta_json == expected_meta_json:
# add support later: and design_json == expected_design_json
return True
else:
if data_cbb == expected_data_cbb and failover == expected_failover_json:
return True
return False
def testFullBackupDirStructure(self):
if not self.verify_dir_structure(self.total_backups, len(self.buckets), len(self.servers)):
raise Exception('Backup Directory Verification Failed for Full Backup')
def testMultipleFullBackupDirStructure(self):
for count in range(10):
# Update data
gen_update = BlobGenerator('testdata', 'testdata-', self.value_size, end=self.num_items)
self._load_all_buckets(self.master, gen_update, "update", 0, 1, self.item_flag, True, batch_size=20000,
pause_secs=5, timeout_secs=180)
self._wait_for_stats_all_buckets(self.servers[:self.num_servers])
#Take a full backup
options = self.command_options + [' -m full']
self.shell.execute_cluster_backup(self.couchbase_login_info, self.backup_location, options)
self.total_backups += 1
self.sleep(120)
if not self.verify_dir_structure(self.total_backups, len(self.buckets), len(self.servers)):
raise Exception('Backup Directory Verification Failed for Full Backup')
def testIncrBackupDirStructure(self):
# Update data
gen_update = BlobGenerator('testdata', 'testdata-', self.value_size, end=self.num_items)
self._load_all_buckets(self.master, gen_update, "update", 0, 1, self.item_flag, True, batch_size=20000,
pause_secs=5, timeout_secs=180)
self._wait_for_stats_all_buckets(self.servers[:self.num_servers])
#Take an incremental backup
options = self.command_options + [' -m accu']
self.shell.execute_cluster_backup(self.couchbase_login_info, self.backup_location, options)
self.total_backups += 1
if not self.verify_dir_structure(self.total_backups, len(self.buckets), len(self.servers)):
raise Exception('Backup Directory Verification Failed for Incremental Backup')
def testMultipleIncrBackupDirStructure(self):
for count in range(10):
# Update data
gen_update = BlobGenerator('testdata', 'testdata-', self.value_size, end=self.num_items)
self._load_all_buckets(self.master, gen_update, "update", 0, 1, self.item_flag, True, batch_size=20000,
pause_secs=5, timeout_secs=180)
self._wait_for_stats_all_buckets(self.servers[:self.num_servers])
#Take an incremental backup
options = self.command_options + [' -m accu']
self.shell.execute_cluster_backup(self.couchbase_login_info, self.backup_location, options)
self.total_backups += 1
self.log.info("sleeping for 30 secs")
self.sleep(30)
if not self.verify_dir_structure(self.total_backups, len(self.buckets), len(self.servers)):
raise Exception('Backup Directory Verification Failed for Incremental Backup')
def testMultipleDiffBackupDirStructure(self):
for count in range(10):
# Update data
gen_update = BlobGenerator('testdata', 'testdata-', self.value_size, end=self.num_items)
self._load_all_buckets(self.master, gen_update, "update", 0, 1, self.item_flag, True, batch_size=20000,
pause_secs=5, timeout_secs=180)
self._wait_for_stats_all_buckets(self.servers[:self.num_servers])
#Take a differential backup
options = self.command_options + [' -m diff']
self.shell.execute_cluster_backup(self.couchbase_login_info, self.backup_location, options)
self.total_backups += 1
self.sleep(60)
if not self.verify_dir_structure(self.total_backups, len(self.buckets), len(self.servers)):
raise Exception('Backup Directory Verification Failed for Differential Backup')
def testMultipleIncrDiffBackupDirStructure(self):
for count in range(10):
# Update data
gen_update = BlobGenerator('testdata', 'testdata-', self.value_size, end=self.num_items)
self._load_all_buckets(self.master, gen_update, "update", 0, 1, self.item_flag, True, batch_size=20000,
pause_secs=5, timeout_secs=180)
self._wait_for_stats_all_buckets(self.servers[:self.num_servers])
#Take an incremental backup
options = self.command_options + [' -m accu']
self.shell.execute_cluster_backup(self.couchbase_login_info, self.backup_location, options)
self.total_backups += 1
self.sleep(60)
# Update data
gen_update = BlobGenerator('testdata', 'testdata-', self.value_size, end=self.num_items)
self._load_all_buckets(self.master, gen_update, "update", 0, 1, self.item_flag, True, batch_size=20000,
pause_secs=5, timeout_secs=180)
self._wait_for_stats_all_buckets(self.servers[:self.num_servers])
#Take a diff backup
options = self.command_options + [' -m diff']
self.shell.execute_cluster_backup(self.couchbase_login_info, self.backup_location, options)
self.total_backups += 1
self.sleep(60)
if not self.verify_dir_structure(self.total_backups, len(self.buckets), len(self.servers)):
raise Exception('Backup Directory Verification Failed for Combo Incr and Diff Backup')
def testMultipleFullIncrDiffBackupDirStructure(self):
for count in range(10):
# Update data
gen_update = BlobGenerator('testdata', 'testdata-', self.value_size, end=self.num_items)
self._load_all_buckets(self.master, gen_update, "update", 0, 1, self.item_flag, True, batch_size=20000,
pause_secs=5, timeout_secs=180)
self._wait_for_stats_all_buckets(self.servers[:self.num_servers])
#Take an incremental backup
options = self.command_options + [' -m accu']
self.shell.execute_cluster_backup(self.couchbase_login_info, self.backup_location, options)
self.total_backups += 1
self.sleep(60)
# Update data
gen_update = BlobGenerator('testdata', 'testdata-', self.value_size, end=self.num_items)
self._load_all_buckets(self.master, gen_update, "update", 0, 1, self.item_flag, True, batch_size=20000,
pause_secs=5, timeout_secs=180)
self._wait_for_stats_all_buckets(self.servers[:self.num_servers])
#Take a diff backup
options = self.command_options + [' -m diff']
self.shell.execute_cluster_backup(self.couchbase_login_info, self.backup_location, options)
self.total_backups += 1
self.sleep(60)
# Update data
gen_update = BlobGenerator('testdata', 'testdata-', self.value_size, end=self.num_items)
self._load_all_buckets(self.master, gen_update, "update", 0, 1, self.item_flag, True, batch_size=20000,
pause_secs=5, timeout_secs=180)
self._wait_for_stats_all_buckets(self.servers[:self.num_servers])
#Take a full backup
options = self.command_options + [' -m full']
self.shell.execute_cluster_backup(self.couchbase_login_info, self.backup_location, options, delete_backup=False)
self.total_backups += 1
self.sleep(60)
if not self.verify_dir_structure(self.total_backups, len(self.buckets), len(self.servers)):
raise Exception('Backup Directory Verification Failed for Combo Full,Incr and Diff Backups')
def testDiffBackupDirStructure(self):
# Update data
gen_update = BlobGenerator('testdata', 'testdata-', self.value_size, end=5)
self._load_all_buckets(self.master, gen_update, "update", 0, 1, self.item_flag, True, batch_size=20000,
pause_secs=5, timeout_secs=180)
self._wait_for_stats_all_buckets(self.servers[:self.num_servers])
#Take a diff backup
options = self.command_options + [' -m diff']
self.shell.execute_cluster_backup(self.couchbase_login_info, self.backup_location, options)
self.total_backups += 1
if not self.verify_dir_structure(self.total_backups, len(self.buckets), len(self.servers)):
raise Exception('Backup Directory Verification Failed for Differential Backup')
def testIncrementalBackup(self):
gen_extra = BlobGenerator('zoom', 'zoom-', self.value_size, end=self.num_items)
self.log.info("Starting Incremental backup")
extra_items_deleted_flag = 0
if(self.doc_ops is not None):
self._load_all_buckets(self.master, gen_extra, "create", 0, 1, self.item_flag, True, batch_size=20000, pause_secs=5, timeout_secs=180)
if("update" in self.doc_ops):
self._load_all_buckets(self.master, gen_extra, "update", 0, 1, self.item_flag, True, batch_size=20000, pause_secs=5, timeout_secs=180)
if("delete" in self.doc_ops):
self._load_all_buckets(self.master, gen_extra, "delete", 0, 1, self.item_flag, True, batch_size=20000, pause_secs=5, timeout_secs=180)
extra_items_deleted_flag = 1
if("expire" in self.doc_ops):
if extra_items_deleted_flag == 1:
self._load_all_buckets(self.master, gen_extra, "create", 0, 1, self.item_flag, True, batch_size=20000, pause_secs=5, timeout_secs=180)
self._load_all_buckets(self.master, gen_extra, "update", self.expire_time, 1, self.item_flag, True, batch_size=20000, pause_secs=5, timeout_secs=180)
#Take an incremental backup
options = self.command_options + [' -m accu']
self.shell.execute_cluster_backup(self.couchbase_login_info, self.backup_location, options)
# Save copy of data
kvs_before = {}
for bucket in self.buckets:
kvs_before[bucket.name] = bucket.kvs[1]
bucket_names = [bucket.name for bucket in self.buckets]
# Delete all buckets
self._all_buckets_delete(self.master)
gc.collect()
self._bucket_creation()
self.sleep(20)
self.restoreAndVerify(bucket_names, kvs_before)
def testDifferentialBackup(self):
gen_extra = BlobGenerator('zoom', 'zoom-', self.value_size, end=self.num_items)
self.log.info("Starting Differential backup")
extra_items_deleted_flag = 0
if(self.doc_ops is not None):
self._load_all_buckets(self.master, gen_extra, "create", 0, 1, self.item_flag, True, batch_size=20000, pause_secs=5, timeout_secs=180)
if("update" in self.doc_ops):
self._load_all_buckets(self.master, gen_extra, "update", 0, 1, self.item_flag, True, batch_size=20000, pause_secs=5, timeout_secs=180)
if("delete" in self.doc_ops):
self._load_all_buckets(self.master, gen_extra, "delete", 0, 1, self.item_flag, True, batch_size=20000, pause_secs=5, timeout_secs=180)
extra_items_deleted_flag = 1
if("expire" in self.doc_ops):
if extra_items_deleted_flag == 1:
self._load_all_buckets(self.master, gen_extra, "create", 0, 1, self.item_flag, True, batch_size=20000, pause_secs=5, timeout_secs=180)
self._load_all_buckets(self.master, gen_extra, "update", self.expire_time, 1, self.item_flag, True, batch_size=20000, pause_secs=5, timeout_secs=180)
self._wait_for_stats_all_buckets(self.servers[:self.num_servers])
#Take a diff backup
options = self.command_options + [' -m diff']
self.shell.execute_cluster_backup(self.couchbase_login_info, self.backup_location, options)
# Save copy of data
kvs_before = {}
for bucket in self.buckets:
kvs_before[bucket.name] = bucket.kvs[1]
bucket_names = [bucket.name for bucket in self.buckets]
# Delete all buckets
self._all_buckets_delete(self.master)
gc.collect()
self._bucket_creation()
self.sleep(20)
self.restoreAndVerify(bucket_names, kvs_before)
def testFullBackup(self):
# Save copy of data
kvs_before = {}
for bucket in self.buckets:
kvs_before[bucket.name] = bucket.kvs[1]
bucket_names = [bucket.name for bucket in self.buckets]
# Delete all buckets
self._all_buckets_delete(self.master)
gc.collect()
self._bucket_creation()
self.sleep(20)
self.restoreAndVerify(bucket_names, kvs_before)
def testIncrementalBackupConflict(self):
gen_extra = BlobGenerator('zoom', 'zoom-', self.value_size, end=self.num_items)
self.log.info("Starting Incremental backup")
extra_items_deleted_flag = 0
if(self.doc_ops is not None):
self._load_all_buckets(self.master, gen_extra, "create", 0, 1, self.item_flag, True, batch_size=20000, pause_secs=5, timeout_secs=180)
if("update" in self.doc_ops):
self._load_all_buckets(self.master, gen_extra, "update", 0, 1, self.item_flag, True, batch_size=20000, pause_secs=5, timeout_secs=180)
if("delete" in self.doc_ops):
self._load_all_buckets(self.master, gen_extra, "delete", 0, 1, self.item_flag, True, batch_size=20000, pause_secs=5, timeout_secs=180)
extra_items_deleted_flag = 1
if("expire" in self.doc_ops):
if extra_items_deleted_flag == 1:
self._load_all_buckets(self.master, gen_extra, "create", 0, 1, self.item_flag, True, batch_size=20000, pause_secs=5, timeout_secs=180)
self._load_all_buckets(self.master, gen_extra, "update", self.expire_time, 1, self.item_flag, True, batch_size=20000, pause_secs=5, timeout_secs=180)
#Take an incremental backup
options = self.command_options + [' -m accu']
self.shell.execute_cluster_backup(self.couchbase_login_info, self.backup_location, options)
# Save copy of data
kvs_before = {}
for bucket in self.buckets:
kvs_before[bucket.name] = bucket.kvs[1]
bucket_names = [bucket.name for bucket in self.buckets]
# Delete all buckets
self._all_buckets_delete(self.master)
gc.collect()
self.lww = self.num_mutate_items = self.input.param("lww_new", False)
self._bucket_creation()
self.sleep(20)
expected_error = self.input.param("expected_error", None)
self.restoreAndVerify(bucket_names, kvs_before, expected_error)
class IBRJsonTests(BackupBaseTest):
def setUp(self):
super(IBRJsonTests, self).setUp()
self.num_mutate_items = self.input.param("mutate_items", 1000)
template = '{{ "mutated" : 0, "age": {0}, "first_name": "{1}" }}'
gen_load = DocumentGenerator('load_by_id_test', template, range(5),\
['james', 'john'], start=0, end=self.num_items)
self._load_all_buckets(self.master, gen_load, "create", 0, 1,\
self.item_flag, True, batch_size=20000,\
pause_secs=5, timeout_secs=180)
self._wait_for_stats_all_buckets(self.servers[:self.num_servers])
if self.test_with_view:
view_list = []
bucket = "default"
if self.dev_view:
prefix_ddoc="dev_ddoc"
else:
prefix_ddoc="ddoc"
ddoc_view_map = self.bucket_ddoc_map.pop(bucket, {})
for ddoc_count in xrange(self.num_ddocs):
design_doc_name = prefix_ddoc + str(ddoc_count)
view_list = self.make_default_views("views", self.num_views_per_ddoc)
self.create_views(self.master, design_doc_name, view_list,\
bucket, self.wait_timeout * 2)
ddoc_view_map[design_doc_name] = view_list
self.bucket_ddoc_map[bucket] = ddoc_view_map
#Take a full backup
if not self.command_options:
self.command_options = []
options = self.command_options + [' -m full']
self.total_backups = 1
self.shell.execute_cluster_backup(self.couchbase_login_info,\
self.backup_location, options)
self.sleep(2)
def testFullBackup(self):
# Save copy of data
kvs_before = {}
for bucket in self.buckets:
kvs_before[bucket.name] = bucket.kvs[1]
bucket_names = [bucket.name for bucket in self.buckets]
# Delete all buckets
self._all_buckets_delete(self.master)
gc.collect()
self._bucket_creation()
self.sleep(20)
self.restoreAndVerify(bucket_names, kvs_before)
def restoreAndVerify(self,bucket_names,kvs_before):
for bucket in self.buckets:
bucket.kvs[1] = kvs_before[bucket.name]
del kvs_before
gc.collect()
self.shell.restore_backupFile(self.couchbase_login_info,\
self.backup_location, bucket_names)
self.sleep(10)
self._wait_for_stats_all_buckets(self.servers[:self.num_servers])
self.verify_results(self.master)
self._verify_stats_all_buckets(self.servers[:self.num_servers])
""" add design doc and view """
if self.test_with_view:
result = False
query = {"stale" : "false", "full_set" : "true", \
"connection_timeout" : 60000}
for bucket, ddoc_view_map in self.bucket_ddoc_map.items():
for ddoc_name, view_list in ddoc_view_map.items():
for view in view_list:
try:
result = self.cluster.query_view(self.master,\
ddoc_name, view.name, query,\
self.num_items, timeout=10)
except Exception:
pass
if not result:
self.fail("There is no: View: {0} in Design Doc:"\
" {1} in bucket: {2}"\
.format(view.name, ddoc_name, bucket))
self.log.info("DDoc Data Validation Successful")
def tearDown(self):
super(IBRJsonTests, self).tearDown()
def testMultipleBackups(self):
if not self.command_options:
self.command_options = []
options = self.command_options
if self.backup_type is not None:
if "accu" in self.backup_type:
options = self.command_options + [' -m accu']
if "diff" in self.backup_type:
options = self.command_options + [' -m diff']
diff_backup = [" -m diff"]
accu_backup = [" -m accu"]
current_backup = [" -m diff"]
for count in range(self.number_of_backups):
if "mix" in self.backup_type:
if current_backup == diff_backup:
current_backup = accu_backup
options = self.command_options + accu_backup
elif current_backup == accu_backup:
current_backup = diff_backup
options = self.command_options + diff_backup
# Update data
template = '{{ "mutated" : {0}, "age": {0}, "first_name": "{1}" }}'
gen_update = DocumentGenerator('load_by_id_test', template, range(5),\
['james', 'john'], start=0, end=self.num_items)
self._load_all_buckets(self.master, gen_update, "update", 0, 1,\
self.item_flag, True, batch_size=20000,\
pause_secs=5, timeout_secs=180)
self._wait_for_stats_all_buckets(self.servers[:self.num_servers])
#Take a backup
self.shell.execute_cluster_backup(self.couchbase_login_info,\
self.backup_location, options)
# Save copy of data
kvs_before = {}
for bucket in self.buckets:
kvs_before[bucket.name] = bucket.kvs[1]
bucket_names = [bucket.name for bucket in self.buckets]
# Delete all buckets
self._all_buckets_delete(self.master)
gc.collect()
self._bucket_creation()
self.sleep(20)
self.restoreAndVerify(bucket_names, kvs_before)
class IBRSpatialTests(SpatialQueryTests):
def setUp(self):
self.input = TestInputSingleton.input
self.servers = self.input.servers
self.master = self.servers[0]
self.log = logger.Logger.get_logger()
self.helper = SpatialHelper(self, "default")
self.helper.setup_cluster()
self.cluster = Cluster()
self.default_bucket = self.input.param("default_bucket", True)
self.sasl_buckets = self.input.param("sasl_buckets", 0)
self.standard_buckets = self.input.param("standard_buckets", 0)
self.memcached_buckets = self.input.param("memcached_buckets", 0)
self.servers = self.helper.servers
self.shell = RemoteMachineShellConnection(self.master)
info = self.shell.extract_remote_info()
self.os = info.type.lower()
self.couchbase_login_info = "%s:%s" % (self.input.membase_settings.rest_username,
self.input.membase_settings.rest_password)
self.backup_location = self.input.param("backup_location", "/tmp/backup")
self.command_options = self.input.param("command_options", '')
def tearDown(self):
self.helper.cleanup_cluster()
def test_backup_with_spatial_data(self):
num_docs = self.helper.input.param("num-docs", 5000)
self.log.info("description : Make limit queries on a simple "
"dataset with {0} docs".format(num_docs))
data_set = SimpleDataSet(self.helper, num_docs)
data_set.add_limit_queries()
self._query_test_init(data_set)
if not self.command_options:
self.command_options = []
options = self.command_options + [' -m full']
self.total_backups = 1
self.shell.execute_cluster_backup(self.couchbase_login_info, self.backup_location, options)
time.sleep(2)
self.buckets = RestConnection(self.master).get_buckets()
bucket_names = [bucket.name for bucket in self.buckets]
BucketOperationHelper.delete_all_buckets_or_assert(self.servers, self)
gc.collect()
self.helper._create_default_bucket()
self.shell.restore_backupFile(self.couchbase_login_info, self.backup_location, bucket_names)
SimpleDataSet(self.helper, num_docs)._create_views()
self._query_test_init(data_set)
| 45.485804 | 165 | 0.627055 | 3,525 | 28,838 | 4.871206 | 0.078582 | 0.03436 | 0.042397 | 0.029352 | 0.728263 | 0.698911 | 0.686099 | 0.677829 | 0.672646 | 0.658494 | 0 | 0.022296 | 0.273701 | 28,838 | 633 | 166 | 45.557662 | 0.797517 | 0.026181 | 0 | 0.577586 | 0 | 0.00431 | 0.077183 | 0.003126 | 0 | 0 | 0 | 0 | 0.00431 | 1 | 0.051724 | false | 0.00431 | 0.045259 | 0 | 0.109914 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f0e89ae2be157a7563b7db3f026034c5b7dec360 | 482 | py | Python | cardiffshop/cardiffshop/urls.py | yigitguler/admin-example | 48a0eacea1d03123fef5e86165749e9c0213e0d8 | [
"MIT"
] | null | null | null | cardiffshop/cardiffshop/urls.py | yigitguler/admin-example | 48a0eacea1d03123fef5e86165749e9c0213e0d8 | [
"MIT"
] | null | null | null | cardiffshop/cardiffshop/urls.py | yigitguler/admin-example | 48a0eacea1d03123fef5e86165749e9c0213e0d8 | [
"MIT"
] | null | null | null | from products.views import ProductDetail
from django.conf import settings
from django.conf.urls.static import static
from django.conf.urls import include, url
from django.contrib import admin
urlpatterns = [
url(r'^admin/', include(admin.site.urls)),
url(r'^products/(?P<slug>[-\w]+)/', ProductDetail.as_view(), name="product-detail"),
] + static(settings.STATIC_URL, document_root=settings.STATIC_ROOT) \
+ static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
| 37.076923 | 88 | 0.761411 | 67 | 482 | 5.373134 | 0.41791 | 0.111111 | 0.116667 | 0.1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.10166 | 482 | 12 | 89 | 40.166667 | 0.831409 | 0 | 0 | 0 | 0 | 0 | 0.099585 | 0.056017 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
f0f5e1e9645a14c12d5c5d93830c5d22ebcb474f | 823 | py | Python | Module/engine.py | NoahSchiro/physics-engine | c96d96eeaf823583ac1035d58fb69d7d47019f87 | [
"MIT"
] | null | null | null | Module/engine.py | NoahSchiro/physics-engine | c96d96eeaf823583ac1035d58fb69d7d47019f87 | [
"MIT"
] | null | null | null | Module/engine.py | NoahSchiro/physics-engine | c96d96eeaf823583ac1035d58fb69d7d47019f87 | [
"MIT"
] | null | null | null | from free_bodies import *
# This is the actual engine which is apply physics to objects
class physics_engine:
FB = [] # Array holds all of the objects in our simulation. FB = free bodies
# Add free bodies to the system
def add_fb(self, fb):
self.FB.append(fb)
# Remove a free body from the system
def remove_fb(self, fb):
if fb in self.FB:
self.FB.remove(fb)
# Apply force to an object in the system
def apply_force(self, fb, newtons):
# Make sure our object is in our system, and
# see that we are not trying to apply a negative force
if (fb in self.FB and newtons >= 0):
# Find where the object is in our engine
for bodies in self.FB:
if bodies == fb:
| 28.37931 | 84 | 0.584447 | 125 | 823 | 3.808 | 0.408 | 0.10084 | 0.067227 | 0.05042 | 0.05042 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001883 | 0.3548 | 823 | 28 | 85 | 29.392857 | 0.894539 | 0.443499 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.083333 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0b066a126354f8b346cb7cd4adb064a1438b7f2c | 1,190 | py | Python | datadog/api/screenboards.py | gust/datadogpy | 94f7dfeed87849a615916a5f171b25400b7c6cc5 | [
"BSD-3-Clause"
] | null | null | null | datadog/api/screenboards.py | gust/datadogpy | 94f7dfeed87849a615916a5f171b25400b7c6cc5 | [
"BSD-3-Clause"
] | null | null | null | datadog/api/screenboards.py | gust/datadogpy | 94f7dfeed87849a615916a5f171b25400b7c6cc5 | [
"BSD-3-Clause"
] | null | null | null | from datadog.api.resources import GetableAPIResource, CreateableAPIResource, \
UpdatableAPIResource, DeletableAPIResource, ActionAPIResource, ListableAPIResource
class Screenboard(GetableAPIResource, CreateableAPIResource,
UpdatableAPIResource, DeletableAPIResource,
ActionAPIResource, ListableAPIResource):
"""
A wrapper around Screenboard HTTP API.
"""
_class_name = 'screen'
_class_url = '/screen'
_json_name = 'board'
@classmethod
def share(cls, board_id):
"""
Share the screenboard with given id
:param board_id: screenboard to share
:type board_id: id
:returns: Dictionary representing the API's JSON response
"""
return super(Screenboard, cls)._trigger_action('POST', 'screen/share', board_id)
@classmethod
def revoke(cls, board_id):
"""
Revoke a shared screenboard with given id
:param board_id: screenboard to revoke
:type board_id: id
:returns: Dictionary representing the API's JSON response
"""
return super(Screenboard, cls)._trigger_action('DELETE', 'screen/share', board_id)
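# A minimal usage sketch, assuming valid Datadog credentials have been supplied to
# datadog.initialize(); the board id 1234 is a placeholder, not a real screenboard.
if __name__ == "__main__":
    from datadog import initialize
    initialize(api_key="YOUR_API_KEY", app_key="YOUR_APP_KEY")
    print(Screenboard.share(1234))   # JSON response from sharing the screenboard
    print(Screenboard.revoke(1234))  # JSON response from revoking the share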
| 31.315789 | 90 | 0.668908 | 117 | 1,190 | 6.649573 | 0.393162 | 0.071979 | 0.151671 | 0.203085 | 0.670951 | 0.670951 | 0.375321 | 0.375321 | 0.375321 | 0.254499 | 0 | 0 | 0.251261 | 1,190 | 37 | 91 | 32.162162 | 0.873176 | 0.294958 | 0 | 0.142857 | 0 | 0 | 0.071331 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.071429 | 0 | 0.642857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0b0b1a7b4c5a81ec7957c24cfd67e9ce4f84e8eb | 7,311 | py | Python | SCALA_library/Lower_level_tools/pyCorner260.py | snfactory/scala | f046d0e458bc4146546e80df16e15259f6019f93 | [
"Apache-2.0"
] | null | null | null | SCALA_library/Lower_level_tools/pyCorner260.py | snfactory/scala | f046d0e458bc4146546e80df16e15259f6019f93 | [
"Apache-2.0"
] | null | null | null | SCALA_library/Lower_level_tools/pyCorner260.py | snfactory/scala | f046d0e458bc4146546e80df16e15259f6019f93 | [
"Apache-2.0"
] | null | null | null | ###########################################################
# LOWER LEVEL LIBRARY THAT TALKS TO THE MONOCHROMATON #
###########################################################
import subprocess
import time
# -- See pyserial
import serial
class CornerStone260:
"""This class controlls the Cornerstone 260 monochromator"""
def __init__(self, port='/dev/ttyUSB1'):
self.serialport = port #windows: 'COMx', Linux: '/dev/<your_device_file>'
self.baud = 9600
self.sendtermchar = "\r\n"
self.rectermchar = "\r\n"
self.timeout = 10
def SerialCommand(self,command):
#setup - if a Serial object can't be created, a SerialException will be raised.
while True:
try:
ser = serial.Serial(self.serialport, self.baud, timeout=self.timeout)
#break out of while loop when connection is made
break
except serial.SerialException:
print 'waiting for device ' + self.serialport + ' to be available'
self.CS_Sleep(3)
ser.flushInput()
ser.write(command + self.sendtermchar)
answer = ser.readline()
ser.close()
# The Cornerstone echoes an accepted command back; strip the trailing terminator and compare to the sent command to confirm success.
return answer.upper()[:-2] == command.upper()
def SerialQuery(self,command):
#setup - if a Serial object can't be created, a SerialException will be raised.
while True:
try:
ser = serial.Serial(self.serialport, self.baud)
#break out of while loop when connection is made
break
except serial.SerialException:
print 'waiting for device ' + self.serialport + ' to be available'
self.CS_Sleep(3)
ser.flushInput()
ser.write(command + self.sendtermchar)
answer1 = ser.readline()
answer2 = ser.readline()
ser.close()
return answer2[:-2]
def CS_Sleep(self,timesleep_in_second):
"""
"""
time.sleep(timesleep_in_second)
def CS_Units_NM(self):
"""Specifies the operational units: nanometer"""
return self.SerialCommand('UNITS NM')
def CS_Units_UM(self):
"""Specifies the operational units: micrometer"""
return self.SerialCommand('UNITS UM')
def CS_Units_WN(self):
"""Specifies the operational units: wavenumbers (1/cm)"""
return self.SerialCommand('UNITS WN')
def CS_GetUnits(self):
"""Returns the operational units: NM, UM, WN"""
return self.SerialQuery('UNITS?')[0:2]
def CS_GoWave(self, position):
"""Moves the wavelength drive to the specified position (see units!)"""
return self.SerialCommand('GOWAVE %f' % (position))
def CS_GetWave(self):
"""Returns the wavelength drive position (see units!)"""
return self.SerialQuery('WAVE?')
def CS_Calibrate(self, cal):
"""Define the current position as the wavelength specified in the numeric parameter"""
return self.SerialCommand('CALIBRATE %f' % (cal))
def CS_Abort(self):
"""Stops any wavelength motion immediately"""
return self.SerialCommand('ABORT')
def CS_Step(self, n):
"""Moves the wavelength drive by the integer number of n"""
return self.SerialCommand('STEP %d' % (n))
def CS_GetStep(self):
"""Returns the wavelength drive position in steps"""
return self.SerialQuery('STEP?')
def CS_Grat(self,n):
"""Selects the grating Nr. 'n' """
return self.SerialCommand('GRAT %d' % (n))
def CS_GetGrat(self):
"""Returns the grating parameters"""
return self.SerialQuery('GRAT?')
def CS_GratLabel(self,n,label=' '):
"""Defines the label of the grating Nr. 'n' """
return self.SerialCommand('GRAT%dLABEL %s' % (n, label[:8]))
def CS_GetLabel(self,n):
"""Returns the label of the grating"""
return self.SerialQuery('GRAT%dLABEL?' % (n))
def CS_GratZero(self,n,zero):
"""Defines the zero of the grating Nr. 'n' """
return self.SerialCommand('GRAT%dZERO %f' % (n, zero))
def CS_GetZero(self,n):
"""Returns the zero of the grating"""
return self.SerialQuery('GRAT%dZERO?' % (n))
def CS_GratLines(self,n,lines):
"""Defines the lines of the grating Nr. 'n' """
return self.SerialCommand('GRAT%dLINES %d' % (n, lines))
def CS_GetLines(self,n):
"""Returns the label of the grating"""
return self.SerialQuery('GRAT%dLINES?' % (n))
def CS_GratFactor(self,n,factor):
"""Sets the calibration factor of the grating Nr. 'n' """
return self.SerialCommand('GRAT%dFACTOR %f' % (n, factor))
def CS_GetFactor(self,n):
"""Returns the calibration factor of the grating"""
return self.SerialQuery('GRAT%dFACTOR?' % (n))
def CS_GratOffset(self,n,offset):
"""Sets the calibration offset of the grating Nr. 'n' """
return self.SerialCommand('GRAT%dOFFSET %f' % (n, offset))
def CS_GetOffset(self,n):
"""Returns the calibration offset of the grating"""
return self.SerialQuery('GRAT%dOFFSET?' % (n))
def CS_ShutterOpen(self):
"""Opens the shutter"""
return self.SerialCommand('SHUTTER O')
def CS_ShutterClose(self):
"""Closess the shutter"""
return self.SerialCommand('SHUTTER C')
def CS_GetShutter(self):
"""Returns the shutter state"""
return self.SerialQuery('SHUTTER?')
def CS_Filter(self,n):
"""Moves the filter wheel to the position specified in 'n' """
return self.SerialCommand('FILTER %d' % (n))
def CS_GetFilter(self):
"""Returns the current filter position"""
return self.SerialQuery('FILTER?')
def CS_OutPort(self,n):
"""Selects the output port"""
return self.SerialCommand('OUTPORT %d' % (n))
def CS_GetOutPort(self):
"""Returns the current out port"""
return self.SerialQuery('OUTPORT?')
def CS_FilterLabel(self,n,label):
"""Defines the label of the filter Nr. 'n' """
return self.SerialCommand('FILTER%dLABEL %s' % (n, label[:8]))
def CS_GetFilterLabel(self,n):
"""Returns the label of the filter"""
return self.SerialQuery('FILTER%dLABEL?' % (n))
def CS_GetInfo(self):
"""Returns the system info"""
return self.SerialQuery('INFO?')
def CS_GetStatus(self):
"""Returns the status byte"""
return self.SerialQuery('STB?')
def CS_GetError(self):
"""Returns the error code"""
return self.SerialQuery('ERROR?')
def CS_GetSerialPort(self):
return self.serialport
def CS_SetSerialPort(self, port):
self.serialport = port
if __name__ == '__main__':
cs = CornerStone260( port = '/dev/ttyUSB1')
print cs.CS_GetInfo()
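# A minimal scan sketch (assumptions: nanometer units and the default light path suit the
# attached setup; the wavelengths below are placeholders, not calibrated values).
cs.CS_Units_NM()
cs.CS_ShutterOpen()
for wavelength in (400.0, 500.0, 600.0):
    cs.CS_GoWave(wavelength)
    cs.CS_Sleep(1)
    print cs.CS_GetWave()
cs.CS_ShutterClose()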
| 36.193069 | 95 | 0.562987 | 814 | 7,311 | 4.982801 | 0.237101 | 0.045611 | 0.102071 | 0.053254 | 0.42431 | 0.349852 | 0.29216 | 0.24926 | 0.225838 | 0.174063 | 0 | 0.005699 | 0.303926 | 7,311 | 201 | 96 | 36.373134 | 0.791315 | 0.050609 | 0 | 0.175439 | 0 | 0 | 0.086358 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.026316 | null | null | 0.026316 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0b0d5b4aaea543cf63af89cd9a03e2708a9fbab7 | 4,845 | py | Python | resources/mgltools_x86_64Linux2_1.5.6/MGLToolsPckgs/AutoDockTools/autoflex4Commands.py | J-E-J-S/aaRS-Pipeline | 43f59f28ab06e4b16328c3bc405cdddc6e69ac44 | [
"MIT"
] | 8 | 2021-12-14T21:30:01.000Z | 2022-02-14T11:30:03.000Z | resources/mgltools_x86_64Linux2_1.5.6/MGLToolsPckgs/AutoDockTools/autoflex4Commands.py | J-E-J-S/aaRS-Pipeline | 43f59f28ab06e4b16328c3bc405cdddc6e69ac44 | [
"MIT"
] | null | null | null | resources/mgltools_x86_64Linux2_1.5.6/MGLToolsPckgs/AutoDockTools/autoflex4Commands.py | J-E-J-S/aaRS-Pipeline | 43f59f28ab06e4b16328c3bc405cdddc6e69ac44 | [
"MIT"
] | null | null | null | #############################################################################
#
# Author: Ruth HUEY, Michel F. SANNER
#
# Copyright: M. Sanner TSRI 2008
#
#############################################################################
# $Header: /opt/cvs/python/packages/share1.5/AutoDockTools/autoflex4Commands.py,v 1.1 2008/06/04 15:37:20 rhuey Exp $
#
# $Id: autoflex4Commands.py,v 1.1 2008/06/04 15:37:20 rhuey Exp $
#
#
#
#
#
"""
This Module facilitates producing a formatted flexible residue file for AutoDock. The steps in this process are:
* Set the macromolecule:
o Read a PDBQT Macromolecule
o Choose Macromol...
* Select which residues are to be flexible in macromolecule using Pmv selection tools:
o ICOM Select
o SelectFromString
o Select Spherical Region
* Set which torsions in the sidechains of those residues are to be flexible interactively
* The results of the previous steps are written to two files:
o one containing the sidechains of the flexible residues with special keywords
o a second containing the rigid portion of the macromolecule
"""
from ViewerFramework.VFCommand import CommandGUI
from AutoDockTools.autoflexCommands import AF_MacroReader,\
AF_MacroChooser, AF_SelectResidues, AF_ProcessResidues,\
AF_ProcessHingeResidues, AF_EditHinge, AF_SetHinge,\
AF_SetBondRotatableFlag, AF_StepBack, AF_FlexFileWriter,\
AF_RigidFileWriter, AF_LigandDirectoryWriter, menuText
AF_MacroReaderGUI=CommandGUI()
AF_MacroReaderGUI.addMenuCommand('AutoTools4Bar', menuText['AutoFlexMB'], \
menuText['Read Macro'], cascadeName = menuText['InputMB'])
AF_MacroChooserGUI=CommandGUI()
AF_MacroChooserGUI.addMenuCommand('AutoTools4Bar', menuText['AutoFlexMB'],
menuText['Choose Macro'], cascadeName = menuText['InputMB'])
AF_SelectResiduesGUI = CommandGUI()
AF_SelectResiduesGUI.addMenuCommand('AutoTools4Bar', menuText['AutoFlexMB'],menuText['Set Residues'])
AF_ProcessResiduesGUI = CommandGUI()
AF_ProcessHingeResiduesGUI = CommandGUI()
AF_EditHingeGUI = CommandGUI()
AF_EditHingeGUI.addMenuCommand('AutoTools4Bar', menuText['AutoFlexMB'],\
menuText['Edit Hinge'])
AF_SetHingeGUI = CommandGUI()
AF_SetHingeGUI.addMenuCommand('AutoTools4Bar', menuText['AutoFlexMB'],\
menuText['Set Hinge'])
AF_StepBackGUI = CommandGUI()
AF_StepBackGUI.addMenuCommand('AutoTools4Bar', menuText['AutoFlexMB'], menuText['Step Back'])
AF_FlexFileWriterGUI = CommandGUI()
AF_FlexFileWriterGUI.addMenuCommand('AutoTools4Bar', menuText['AutoFlexMB'], \
menuText['writeFlexible'], cascadeName = menuText['WriteMB'])
AF_RigidFileWriterGUI = CommandGUI()
AF_RigidFileWriterGUI.addMenuCommand('AutoTools4Bar', menuText['AutoFlexMB'], \
menuText['writeRigid'], cascadeName = menuText['WriteMB'])
AF_LigandDirectoryWriterGUI = CommandGUI()
AF_LigandDirectoryWriterGUI.addMenuCommand('AutoTools4Bar', menuText['AutoFlexMB'], \
menuText['writeDir'], cascadeName = menuText['WriteMB'])
commandList = [
{'name':'AD4flex_readMacro','cmd':AF_MacroReader(),'gui':AF_MacroReaderGUI},
{'name':'AD4flex_chooseMacro','cmd':AF_MacroChooser(),'gui':AF_MacroChooserGUI},
{'name':'AD4flex_setResidues','cmd':AF_SelectResidues(),'gui':AF_SelectResiduesGUI},
#{'name':'AD4flex_processResidues','cmd':AF_ProcessResidues(),'gui':None},
#{'name':'AD4flex_processHingeResidues','cmd':AF_ProcessHingeResidues(),'gui':None},
#{'name':'AD4flex_setBondRotatableFlag','cmd':AF_SetBondRotatableFlag(),'gui':None},
#{'name':'AD4flex_setHinge','cmd':AF_SetHinge(),'gui':AF_SetHingeGUI},
#{'name':'AD4flex_editHinge','cmd':AF_EditHinge(),'gui':None},
{'name':'AD4flex_stepBack','cmd':AF_StepBack(),'gui':AF_StepBackGUI},
{'name':'AD4flex_writeFlexFile','cmd':AF_FlexFileWriter(),'gui':AF_FlexFileWriterGUI},
{'name':'AD4flex_writeRigidFile','cmd':AF_RigidFileWriter(),'gui':AF_RigidFileWriterGUI},
#{'name':'AD4flex_writeFlexDir','cmd':AF_LigandDirectoryWriter(),'gui':AF_LigandDirectoryWriterGUI}
]
def initModule(vf):
for dict in commandList:
vf.addCommand(dict['cmd'], dict['name'], dict['gui'])
if not hasattr(vf, 'ADflex_processResidues'):
vf.addCommand(AF_ProcessResidues(), 'ADflex_processResidues', None)
if not hasattr(vf, 'ADflex_setBondRotatableFlag'):
vf.addCommand(AF_SetBondRotatableFlag(), 'ADflex_setBondRotatableFlag', None)
vf.ADflex_setResidues = vf.AD4flex_setResidues
if vf.hasGui:
vf.GUI.menuBars['AutoTools4Bar'].menubuttons[menuText['AutoFlexMB']].config(bg='tan',underline='-1')
if not hasattr(vf.GUI, 'adtBar'):
vf.GUI.adtBar = vf.GUI.menuBars['AutoTools4Bar']
vf.GUI.adtFrame = vf.GUI.adtBar.menubuttons.values()[0].master
| 36.984733 | 118 | 0.707327 | 493 | 4,845 | 6.805274 | 0.3286 | 0.039344 | 0.09389 | 0.120715 | 0.223547 | 0.059613 | 0.02623 | 0.02623 | 0.02623 | 0.02623 | 0 | 0.015718 | 0.133333 | 4,845 | 130 | 119 | 37.269231 | 0.783282 | 0.295975 | 0 | 0 | 0 | 0 | 0.2057 | 0.04368 | 0 | 0 | 0 | 0.015385 | 0 | 1 | 0.018519 | false | 0 | 0.037037 | 0 | 0.055556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0b0f2186008ea5990d722ee460df30182fea5006 | 1,088 | py | Python | ipsc/2016/f.py | csferng/competitive-programming | f7ae710392c0fd606f735df6efcfca5e8fbd3501 | [
"MIT"
] | null | null | null | ipsc/2016/f.py | csferng/competitive-programming | f7ae710392c0fd606f735df6efcfca5e8fbd3501 | [
"MIT"
] | null | null | null | ipsc/2016/f.py | csferng/competitive-programming | f7ae710392c0fd606f735df6efcfca5e8fbd3501 | [
"MIT"
] | null | null | null | import collections
import itertools
import re
import sys
read_str = lambda : sys.stdin.readline().strip()
read_str_list = lambda : sys.stdin.readline().strip().split()
read_int = lambda : int(read_str())
read_int_list = lambda : map(int, read_str_list())
read_float = lambda : float(read_str())
read_float_list = lambda : map(float, read_str_list())
def solve_one(N2, fr, to):
# print >> sys.stderr, 'Q', N2, fr, to
bg, ed = fr, fr
l = 0
# print >> sys.stderr, (bg, ed)
while (to-bg)%N2 > ed-bg:
bg *= 2
ed = ed*2 + 1
if bg > N2:
bg -= N2
ed -= N2
l += 1
# print >> sys.stderr, (bg, ed)
d = (to-bg) % N2
c = ""
for b in xrange(l-1, -1, -1):
b1 = ((d>>b) & 1)
b2 = 1 if fr >= N2/2 else 0
c += 'L' if (b1^b2)==1 else 'R'
fr = (fr*2+b1) % N2
return c
def solve(N, A):
return ''.join( solve_one(N*2, A[i-1]-1, A[i]-1) for i in xrange(1,len(A)))
def main():
T = read_int()
for _ in xrange(T):
read_str()
N, X, K = read_int_list()
A = read_int_list()
ans = solve(N, [X] + A)
print '%d:%s' % (len(ans), ans)
if __name__ == "__main__":
main()
| 21.76 | 76 | 0.581801 | 204 | 1,088 | 2.946078 | 0.27451 | 0.081531 | 0.054908 | 0.073211 | 0.14975 | 0 | 0 | 0 | 0 | 0 | 0 | 0.038824 | 0.21875 | 1,088 | 49 | 77 | 22.204082 | 0.668235 | 0.089154 | 0 | 0 | 0 | 0 | 0.015198 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.1 | null | null | 0.025 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0b112c171eb02964d852f653ff119690933ba7a7 | 314 | py | Python | setup.py | elcolumbio/goofy | f922db2b9662fcc5e26529a2d428feaabede6b42 | [
"Apache-2.0"
] | 1 | 2018-06-29T06:03:17.000Z | 2018-06-29T06:03:17.000Z | setup.py | elcolumbio/goofy | f922db2b9662fcc5e26529a2d428feaabede6b42 | [
"Apache-2.0"
] | null | null | null | setup.py | elcolumbio/goofy | f922db2b9662fcc5e26529a2d428feaabede6b42 | [
"Apache-2.0"
] | null | null | null | from setuptools import setup
setup(name='goofy',
version='0.1',
description='A goofy ebay bot.',
url='github.com/elcolumbio/goofy',
author='Florian Benkö',
author_email='f.benkoe@innotrade24.de',
license='Apache License, Version 2.0 (the "License")',
packages=['goofy'])
| 28.545455 | 60 | 0.640127 | 39 | 314 | 5.128205 | 0.769231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.024 | 0.203822 | 314 | 10 | 61 | 31.4 | 0.776 | 0 | 0 | 0 | 0 | 0 | 0.433121 | 0.159236 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.111111 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0b114a97cf42ee5a0905829c9063503a14dd4b05 | 1,820 | py | Python | nxt/main.py | TeamGalileoRobotics/qLearning | 70333f45373abc72a489e1ac4089f8e3db34c6a8 | [
"MIT"
] | 4 | 2017-09-27T16:16:58.000Z | 2019-11-27T09:26:54.000Z | nxt/main.py | TeamGalileoRobotics/qLearning | 70333f45373abc72a489e1ac4089f8e3db34c6a8 | [
"MIT"
] | null | null | null | nxt/main.py | TeamGalileoRobotics/qLearning | 70333f45373abc72a489e1ac4089f8e3db34c6a8 | [
"MIT"
] | null | null | null | import random
import os
import traceback
import json
from environment import Environment
from environment import Action
# 0 <= GAMMA < 1
# GAMMA = 0 -> only consider immediate rewards
# GAMMA = 1 -> consider future rewards
GAMMA = 0.8
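# Worked example of the update rule applied below: with GAMMA = 0.8, an immediate reward of 10
# and a best next-state estimate of 50, the stored value becomes 10 + 0.8 * 50 = 50.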
confidence = 0
# initialize "brain"
brain = open("brainFile", "r")
if(os.stat("./brainFile").st_size == 0):
q = {}
else:
q = json.load(brain)
brain.close()
# normalize so highest/lowest value is 100/-100
def normalize(matrix):
flat = []
for li in matrix:
for i in li:
flat.append(abs(i))
max_val = max(flat)
if max_val == 0:
return matrix
for x, li in enumerate(matrix):
for y, val in enumerate(li):
matrix[x][y] = (val / max_val) * 100
return matrix
# initialize a q value array
def initialize_q_value(key):
if not key in q:
q[key] = [0 for _ in range(Environment.NUM_ACTIONS)]
env = Environment()
while env.running:
old_state = env.state
# set q value to empty array if not already existing
initialize_q_value(old_state)
# pick only best actions (this way of picking might leave actions unexplored)
max_action = [action for action, q_value in enumerate(q[env.state]) if q_value == max(q[env.state])]
action = 0
if (confidence > random.random()):
action = random.choice(max_action)  # choose randomly among the best-valued actions
else:
action = random.randint(0, env.NUM_ACTIONS - 1)
confidence += (1 - confidence) / 10
try:
reward = env.move(action)
except Exception:
print traceback.format_exc()
break
initialize_q_value(env.state)
# q-learning
q[old_state][action] = reward + GAMMA * max(q[env.state])
# normalize values
#q = normalize(q)
# write brain to file
brain = open("brainFile", "w")
json.dump(q,brain)
brain.close() | 21.162791 | 104 | 0.638462 | 260 | 1,820 | 4.388462 | 0.369231 | 0.03681 | 0.042068 | 0.021034 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018328 | 0.250549 | 1,820 | 86 | 105 | 21.162791 | 0.818182 | 0.208242 | 0 | 0.12 | 0 | 0 | 0.021693 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.12 | null | null | 0.02 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0b19361cb7bec29a8393dd2d4d6342be97e9c458 | 1,664 | py | Python | src/polls/polls/project/migrations/0002_shoppingitem_shoppinglist.py | Valeriu92/Shopping_list | 3614dfc6691c28cf88db8af77ba246a9c4943794 | [
"MIT"
] | null | null | null | src/polls/polls/project/migrations/0002_shoppingitem_shoppinglist.py | Valeriu92/Shopping_list | 3614dfc6691c28cf88db8af77ba246a9c4943794 | [
"MIT"
] | 6 | 2021-03-19T08:45:06.000Z | 2021-09-22T19:19:21.000Z | src/polls/polls/project/migrations/0002_shoppingitem_shoppinglist.py | Valeriu92/Shopping_list | 3614dfc6691c28cf88db8af77ba246a9c4943794 | [
"MIT"
] | null | null | null | # Generated by Django 3.0.5 on 2020-06-21 14:52
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('project', '0001_initial'),
]
operations = [
migrations.CreateModel(
name='ShoppingList',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=50)),
('date_created', models.DateTimeField(auto_now_add=True)),
('date_updated', models.DateTimeField(auto_now=True)),
('owner', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='shopping_lists', to='project.Users')),
],
),
migrations.CreateModel(
name='ShoppingItem',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=50)),
('quantity', models.IntegerField(default=0)),
('gramaj', models.CharField(max_length=20, null=True)),
('marca', models.CharField(max_length=50)),
('magazin', models.CharField(max_length=50)),
('date_created', models.DateTimeField(auto_now_add=True)),
('date_updated', models.DateTimeField(auto_now=True)),
('list', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='shopping_item', to='project.ShoppingList')),
],
),
]
| 42.666667 | 146 | 0.597356 | 171 | 1,664 | 5.649123 | 0.397661 | 0.07764 | 0.093168 | 0.124224 | 0.604555 | 0.57764 | 0.57764 | 0.57764 | 0.57764 | 0.57764 | 0 | 0.02437 | 0.260216 | 1,664 | 38 | 147 | 43.789474 | 0.760357 | 0.027043 | 0 | 0.5 | 1 | 0 | 0.124923 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.0625 | 0 | 0.15625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0b1af5ee826d4553c7214ee6c50908fbb0aba15f | 1,291 | py | Python | rurina5/test2.py | TeaCondemns/rurina | 43725ebea5872953125271a9abb300a4e3a80a64 | [
"MIT"
] | null | null | null | rurina5/test2.py | TeaCondemns/rurina | 43725ebea5872953125271a9abb300a4e3a80a64 | [
"MIT"
] | null | null | null | rurina5/test2.py | TeaCondemns/rurina | 43725ebea5872953125271a9abb300a4e3a80a64 | [
"MIT"
] | null | null | null | from input import flip
from utilities.surface import *
import utilities.time as time
from nodes import Control, init
from shape import draw
import pygame
import pygame.key as key
from event import get, typename2
from input import map
screen = pygame.display.set_mode((800, 800), pygame.RESIZABLE)
_mask = AlphaSurface((100, 100))
_gradient = gradient((0, 0, 100, 100), [(200, 20, 20), (20, 200, 20), (20, 20, 200)], surface=pygame.Surface((100, 100)), offset=1)
pygame.draw.circle(_mask, (40, 240, 40), (50, 50), 50)
_gradient = mask(_gradient, _mask)
init()
control = Control()
control.focused_cursor = 2
print(pygame.BUTTON_LEFT, pygame.BUTTON_MIDDLE, pygame.BUTTON_RIGHT)
while True:
screen.fill((255, 255, 255))
flip()
time.flip()
pygame.display.get_surface().blit(_gradient, (0, 0))
draw(control.rect)
control.input(get(False))
# for e in get(False):
# print(typename2(e))
# if control.pressed:
# print('actions:', map._actions)
# print('events:', get(False))
#
# for e in get(False):
# if e.type == pygame.MOUSEBUTTONUP:
# if e.button == pygame.BUTTON_LEFT:
# exit()
pygame.event.get()
pygame.display.flip()
# print(time.dt)
# print(time.fps)
| 25.82 | 131 | 0.640589 | 177 | 1,291 | 4.587571 | 0.367232 | 0.019704 | 0.036946 | 0.022167 | 0.080049 | 0.054187 | 0.054187 | 0 | 0 | 0 | 0 | 0.074038 | 0.215337 | 1,291 | 49 | 132 | 26.346939 | 0.727542 | 0.233927 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0.037037 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
9bc5c0aaba0c3abcdcd825f0ec975266b287ad70 | 4,270 | py | Python | recupero/migrations/0001_initial.py | cluster311/ggg | 262173c66fe40ada30083d439a79f16f841f5772 | [
"BSD-3-Clause"
] | 6 | 2020-03-16T02:51:16.000Z | 2020-11-10T00:58:01.000Z | recupero/migrations/0001_initial.py | cluster311/ggg | 262173c66fe40ada30083d439a79f16f841f5772 | [
"BSD-3-Clause"
] | 204 | 2019-09-19T02:00:57.000Z | 2022-02-10T10:48:52.000Z | recupero/migrations/0001_initial.py | cluster311/ggg | 262173c66fe40ada30083d439a79f16f841f5772 | [
"BSD-3-Clause"
] | 3 | 2019-09-16T22:59:24.000Z | 2022-03-21T22:52:44.000Z | # Generated by Django 2.2.4 on 2019-10-22 00:36
from django.db import migrations, models
import django.db.models.deletion
import django.utils.timezone
class Migration(migrations.Migration):
initial = True
dependencies = [
('cie10_django', '0001_initial'),
('nhpgd_django', '0001_initial'),
('profesionales', '0006_profesional_dni'),
('centros_de_salud', '0005_auto_20191014_0001'),
('obras_sociales', '0001_initial'),
]
operations = [
migrations.CreateModel(
name='DocumentoAnexo',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('documento_adjunto', models.FileField(upload_to='documentos_anexos')),
],
),
migrations.CreateModel(
name='Factura',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('fecha', models.DateField(default=django.utils.timezone.now)),
('especialidad', models.CharField(max_length=50)),
('centro_de_salud', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='facturas', to='centros_de_salud.CentroDeSalud')),
('cies_extras', models.ManyToManyField(blank=True, related_name='extras_facturas', to='cie10_django.CIE10')),
('codigo_cie_principal', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='facturas', to='cie10_django.CIE10')),
('obra_social', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='facturas', to='obras_sociales.ObraSocial')),
('profesional', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='facturas', to='profesionales.Profesional')),
],
),
migrations.CreateModel(
name='TipoDocumentoAnexo',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('nombre', models.CharField(help_text='Tipo de documento', max_length=30)),
],
),
migrations.CreateModel(
name='TipoPrestacion',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('nombre', models.CharField(help_text='Tipo de atención', max_length=30)),
('tipo', models.PositiveIntegerField(choices=[(100, 'Consulta'), (200, 'Práctica'), (300, 'Internación')], default=100)),
('documentos_requeridos', models.ManyToManyField(blank=True, related_name='requerido_en_tipos', to='recupero.TipoDocumentoAnexo')),
('documentos_sugeridos', models.ManyToManyField(blank=True, related_name='sugerido_en_tipos', to='recupero.TipoDocumentoAnexo')),
],
),
migrations.CreateModel(
name='Prestacion',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('fecha', models.DateField(default=django.utils.timezone.now, help_text='Fecha de la prestación')),
('cantidad', models.IntegerField(default=1)),
('documentos_adjuntados', models.ManyToManyField(blank=True, to='recupero.DocumentoAnexo')),
('factura', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='prestaciones', to='recupero.Factura')),
('nomenclador', models.ForeignKey(help_text='Servicio realizado o entregado', on_delete=django.db.models.deletion.CASCADE, related_name='prestaciones', to='nhpgd_django.NomencladorHPGD')),
('tipo', models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, to='recupero.TipoPrestacion')),
],
),
migrations.AddField(
model_name='documentoanexo',
name='tipo',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='recupero.TipoDocumentoAnexo'),
),
]
| 56.184211 | 204 | 0.638642 | 433 | 4,270 | 6.120092 | 0.297921 | 0.030189 | 0.047547 | 0.074717 | 0.509057 | 0.463019 | 0.416604 | 0.416604 | 0.399245 | 0.379245 | 0 | 0.022823 | 0.220141 | 4,270 | 75 | 205 | 56.933333 | 0.772973 | 0.010539 | 0 | 0.382353 | 1 | 0 | 0.23301 | 0.07104 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.044118 | 0 | 0.102941 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9bca087d33ec78af86014248cb1d2a83ed4c9f78 | 423 | py | Python | GetDataFrame.py | Adhmir/mcdm | c1d8bec4f3628f5d95ee7cd3bfdfb9ff54783dce | [
"MIT"
] | null | null | null | GetDataFrame.py | Adhmir/mcdm | c1d8bec4f3628f5d95ee7cd3bfdfb9ff54783dce | [
"MIT"
] | null | null | null | GetDataFrame.py | Adhmir/mcdm | c1d8bec4f3628f5d95ee7cd3bfdfb9ff54783dce | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Tue Dec 28 17:41:18 2021
@author: Adhmir Renan Voltoni Gomes
"""
import pandas as pd
import MCDM_V_001 as mcdm
def pegar_dados():
file_path = mcdm.label_file["text"]
excel_filename = r"{}".format(file_path)
if excel_filename[-4:] == ".csv":
df = pd.read_csv(excel_filename)
else:
df = pd.read_excel(excel_filename)
return df
| 23.5 | 45 | 0.617021 | 62 | 423 | 4.016129 | 0.677419 | 0.208835 | 0.064257 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.053797 | 0.252955 | 423 | 17 | 46 | 24.882353 | 0.734177 | 0.224586 | 0 | 0 | 0 | 0 | 0.033003 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.2 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9bcb27a7d1f03b098af4f51756f0b10851a103ba | 2,561 | py | Python | cmsplugin_remote_form/migrations/0005_auto_20200429_1442.py | georgmzimmer/cmsplugin-remote-form | 804e67ccc18964223247b39cc6e359f977a16556 | [
"BSD-3-Clause"
] | null | null | null | cmsplugin_remote_form/migrations/0005_auto_20200429_1442.py | georgmzimmer/cmsplugin-remote-form | 804e67ccc18964223247b39cc6e359f977a16556 | [
"BSD-3-Clause"
] | 1 | 2020-02-13T17:30:13.000Z | 2020-02-13T17:30:13.000Z | cmsplugin_remote_form/migrations/0005_auto_20200429_1442.py | georgmzimmer/cmsplugin-remote-form | 804e67ccc18964223247b39cc6e359f977a16556 | [
"BSD-3-Clause"
] | 4 | 2020-01-16T03:52:18.000Z | 2020-04-29T19:35:16.000Z | # Generated by Django 2.2.12 on 2020-04-29 18:42
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('cmsplugin_remote_form', '0004_remoteform_notification_emails'),
]
operations = [
migrations.AlterField(
model_name='extrafield',
name='fieldType',
field=models.CharField(choices=[('CharField', 'CharField'), ('BooleanField', 'BooleanField'), ('EmailField', 'EmailField'), ('DecimalField', 'DecimalField'), ('FloatField', 'FloatField'), ('IntegerField', 'IntegerField'), ('FileField', 'FileField'), ('ImageField', 'ImageField'), ('USStateSelect', 'US State Selector'), ('IPAddressField', 'IPAddressField'), ('MathCaptcha', 'Math Captcha'), ('auto_Textarea', 'CharField as Textarea'), ('auto_hidden_input', 'CharField as HiddenInput'), ('auto_referral_page', 'Referral page as HiddenInput'), ('auto_GET_parameter', 'GET parameter as HiddenInput'), ('CharFieldWithValidator', 'CharFieldWithValidator'), ('ChoiceField', 'ChoiceField'), ('ReCaptcha', 'reCAPTCHA')], max_length=100),
),
migrations.AlterField(
model_name='extrafield',
name='initial',
field=models.CharField(blank=True, max_length=4096, null=True),
),
migrations.AlterField(
model_name='extrafield',
name='name',
field=models.CharField(default='', max_length=100, verbose_name='Name'),
),
migrations.AlterField(
model_name='remoteform',
name='error_notification_emails',
field=models.CharField(blank=True, help_text='multiple emails separated by commas', max_length=250, null=True, verbose_name='Email Errors To:'),
),
migrations.AlterField(
model_name='remoteform',
name='on_submit',
field=models.CharField(blank=True, help_text='Google Analytics Code', max_length=400, null=True),
),
migrations.AlterField(
model_name='remoteform',
name='post_url',
field=models.CharField(default='#remoteURL', max_length=200, null=True, verbose_name='Remote URL'),
),
migrations.AlterField(
model_name='remoteform',
name='template',
field=models.CharField(choices=[('cmsplugin_remote_form_templates/default.html', 'Default'), ('cmsplugin_remote_form_templates/vertical_onecol.html', 'Vertical - One Col')], default='cmsplugin_remote_form/default.html', max_length=255),
),
]
| 52.265306 | 741 | 0.646232 | 248 | 2,561 | 6.495968 | 0.403226 | 0.086903 | 0.108628 | 0.126009 | 0.260708 | 0.242706 | 0.045934 | 0 | 0 | 0 | 0 | 0.02071 | 0.208122 | 2,561 | 48 | 742 | 53.354167 | 0.773669 | 0.017962 | 0 | 0.5 | 1 | 0 | 0.37684 | 0.101472 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.02381 | 0 | 0.095238 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9bcf66aa506518f09109e3c3f1caf8194c4a6b3a | 1,418 | py | Python | tools/debug_discovery.py | s1rd4v3/homebridge-tuya-web-es6-js | 4d02d05117f88e4a4251158716ef0677a2af92db | [
"MIT"
] | 172 | 2020-05-17T10:51:17.000Z | 2022-03-21T08:54:00.000Z | tools/debug_discovery.py | s1rd4v3/homebridge-tuya-web-es6-js | 4d02d05117f88e4a4251158716ef0677a2af92db | [
"MIT"
] | 303 | 2020-05-17T20:42:57.000Z | 2022-03-30T07:37:32.000Z | tools/debug_discovery.py | s1rd4v3/homebridge-tuya-web-es6-js | 4d02d05117f88e4a4251158716ef0677a2af92db | [
"MIT"
] | 85 | 2020-05-02T13:24:22.000Z | 2022-03-23T15:48:04.000Z | # The script is intended to get a list of all devices available via Tuya Home Assistant API endpoint.
import requests
import pprint
# CHANGE THIS - BEGINNING
USERNAME = ""
PASSWORD = ""
REGION = "eu" # cn, eu, us
COUNTRY_CODE = "1" # Your account country code, e.g., 1 for USA or 86 for China
BIZ_TYPE = "smart_life" # tuya, smart_life, jinvoo_smart
FROM = "tuya" # you likely don't need to touch this
# CHANGE THIS - END
# NO NEED TO CHANGE ANYTHING BELOW
TUYACLOUDURL = "https://px1.tuya{}.com"
pp = pprint.PrettyPrinter(indent=4)
print("Getting credentials")
auth_response = requests.post(
(TUYACLOUDURL + "/homeassistant/auth.do").format(REGION),
data={
"userName": USERNAME,
"password": PASSWORD,
"countryCode": COUNTRY_CODE,
"bizType": BIZ_TYPE,
"from": FROM,
},
)
print("Got credentials")
auth_response = auth_response.json()
pp.pprint(auth_response)
header = {"name": "Discovery", "namespace": "discovery", "payloadVersion": 1}
payload = {"accessToken": auth_response["access_token"]}
data = {"header": header, "payload": payload}
print("Getting devices")
discovery_response = requests.post(
(TUYACLOUDURL + "/homeassistant/skill").format(REGION), json=data
)
print("Got devices")
discovery_response = discovery_response.json()
pp.pprint(discovery_response)
print("!!! NOW REMOVE THIS FILE, SO YOUR CREDENTIALS (username, password) WON'T LEAK !!!")
| 31.511111 | 101 | 0.703808 | 181 | 1,418 | 5.41989 | 0.530387 | 0.061162 | 0.046891 | 0.06524 | 0.091743 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005882 | 0.16079 | 1,418 | 44 | 102 | 32.227273 | 0.818487 | 0.219323 | 0 | 0 | 0 | 0 | 0.310565 | 0.020036 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.085714 | 0.057143 | 0 | 0.057143 | 0.257143 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
9bd1df98ec0a7f71b2cc5017976bf6fbcc6f9846 | 3,562 | py | Python | yamtbx/dataproc/adxv.py | 7l2icj/kamo_clone | 5f4a5eed3cd9d91a021d805e46125c19cc2ed1b6 | [
"BSD-3-Clause"
] | 16 | 2016-05-20T11:19:40.000Z | 2021-01-01T19:44:23.000Z | yamtbx/dataproc/adxv.py | 7l2icj/kamo_clone | 5f4a5eed3cd9d91a021d805e46125c19cc2ed1b6 | [
"BSD-3-Clause"
] | 4 | 2017-03-10T00:51:11.000Z | 2021-02-07T17:18:46.000Z | yamtbx/dataproc/adxv.py | 7l2icj/kamo_clone | 5f4a5eed3cd9d91a021d805e46125c19cc2ed1b6 | [
"BSD-3-Clause"
] | 9 | 2016-12-15T16:00:06.000Z | 2021-09-10T08:34:14.000Z | """
(c) RIKEN 2015. All rights reserved.
Author: Keitaro Yamashita
This software is released under the new BSD License; see LICENSE.
"""
import socket
import subprocess
import time
import os
import getpass
import tempfile
class Adxv:
def __init__(self, adxv_bin=None, no_adxv_beam_center=True):
self.adxv_bin = adxv_bin
self.no_adxv_beam_center = no_adxv_beam_center
if self.adxv_bin is None: self.adxv_bin = "adxv"
self.adxv_proc = None # subprocess object
self.adxv_port = 8100 # adxv's default port. overridden later.
self.sock = None
self.spot_type_counter = -1
# __init__()
def start(self, cwd=None):
adxv_comm = self.adxv_bin + " -socket %d"
if self.no_adxv_beam_center: adxv_comm += " -no_adxv_beam_center"
if not self.is_alive():
# find available port number
sock_test = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock_test.bind(("localhost", 0))
self.adxv_port = sock_test.getsockname()[1]
sock_test.close()
# start adxv
self.adxv_proc = subprocess.Popen(adxv_comm%self.adxv_port, shell=True, cwd=cwd)
for i in xrange(10): # try for 5 seconds.
try:
self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) # On OSX(?), need to re-create object when failed
self.sock.connect(("localhost", self.adxv_port))
break
except socket.error:
time.sleep(.5)
continue
# start()
def is_alive(self):
return self.adxv_proc is not None and self.adxv_proc.poll() is None # None means still running.
# is_alive()
def open_image(self, imgfile, raise_window=True):
self.start(cwd=os.path.dirname(imgfile))
sent = self.sock.send("load_image %s\n"%imgfile)
if sent == 0:
raise RuntimeError("adxv load failed! Close adxv and double-click again.")
if raise_window:
sent = self.sock.send("raise_window Control\n") # raise_window is available from adxv 1.9.9
sent = self.sock.send("raise_window Image\n")
#sock.close()
# open_image()
def open_hdf5(self, h5file, frameno_or_path, tmpdir=None, raise_window=True, binning=1):
from yamtbx.dataproc import eiger
if tmpdir is None:
tmpdir = "/dev/shm" if os.path.isdir("/dev/shm") else tempfile.gettempdir()
imgfile = os.path.join(tmpdir, "adxvtmp-%s-%s.cbf"%(getpass.getuser(), os.getpid()))
eiger.extract_to_minicbf(h5file, frameno_or_path, imgfile, binning=binning)
self.open_image(imgfile, raise_window=raise_window)
# open_hdf5()
def define_spot(self, color, radius=0, box=0):
self.spot_type_counter += 1
sent = self.sock.send("box %d %d\n" % (box,box)) # seems ignored?
sent = self.sock.send("define_type %d color %s radius %d\n"%(self.spot_type_counter, color, radius))
print sent
if sent == 0:
print "define_spot failed!"
sent = self.sock.send("box 20 20\n")
return self.spot_type_counter
# define_spot()
def load_spots(self, spots):
if len(spots) == 0:
return
sent = self.sock.send("load_spots %d\n" % len(spots))
for x, y, t in spots:
sent = self.sock.send("%.2f %.2f %d\n" % (x, y, t))
sent = self.sock.send("end_of_pack\n")
# load_spots()
# class Adxv
| 33.603774 | 131 | 0.609208 | 494 | 3,562 | 4.214575 | 0.313765 | 0.049952 | 0.051873 | 0.069164 | 0.157541 | 0.064361 | 0.038425 | 0.038425 | 0 | 0 | 0 | 0.013592 | 0.277092 | 3,562 | 105 | 132 | 33.92381 | 0.794951 | 0.097979 | 0 | 0.030769 | 0 | 0 | 0.102749 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.030769 | 0.107692 | null | null | 0.030769 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
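The Adxv wrapper above drives an external adxv process over its socket protocol (load_image, define_type, load_spots). A hedged usage sketch, assuming adxv is installed and on PATH, and noting that the class itself is written for Python 2 (xrange, print statements):

adxv = Adxv()                                    # launches "adxv -socket <port>" on first use
adxv.open_image("/data/images/sample_0001.img")  # hypothetical image path
spot_type = adxv.define_spot("red", radius=5)
adxv.load_spots([(1024.0, 1024.0, spot_type)])   # (x, y, spot-type) triples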
9bd2ef61ca9a089dc9825ad4627fb4388dc0d5d8 | 571 | py | Python | api/accounts/urls.py | paurushofficial/InvitationApi | 35a33fab52bf9d2269b804d8b5e9958b41f7506c | [
"Apache-2.0"
] | null | null | null | api/accounts/urls.py | paurushofficial/InvitationApi | 35a33fab52bf9d2269b804d8b5e9958b41f7506c | [
"Apache-2.0"
] | null | null | null | api/accounts/urls.py | paurushofficial/InvitationApi | 35a33fab52bf9d2269b804d8b5e9958b41f7506c | [
"Apache-2.0"
] | null | null | null | from django.urls import path
from rest_framework_simplejwt.views import (
    TokenObtainPairView,
    TokenRefreshView,
    TokenVerifyView,
)
from .views import *

urlpatterns = [
    path('register/', UserRegisterView.as_view()),
    path('logout/', LogoutView.as_view()),
    path('token/', TokenObtainPairView.as_view(), name='token_obtain_pair'),
    path('token/refresh/', TokenRefreshView.as_view(), name='token_refresh'),
    path('token/verify/', TokenVerifyView.as_view(), name='token_verify'),
    path('change_password/', ChangePasswordView.as_view()),
]
| 30.052632 | 77 | 0.718039 | 61 | 571 | 6.508197 | 0.459016 | 0.09068 | 0.075567 | 0.11335 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131349 | 571 | 18 | 78 | 31.722222 | 0.800403 | 0 | 0 | 0 | 0 | 0 | 0.187391 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.066667 | 0.2 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
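These routes expose the standard simplejwt token endpoints. A hedged client-side sketch of obtaining and refreshing a token — the /api/accounts/ prefix is an assumption (it depends on where this urls module is include()d), and the credential field names depend on the project's user model:

import requests

BASE = "http://localhost:8000/api/accounts"  # assumed mount point
pair = requests.post(f"{BASE}/token/", data={"username": "alice", "password": "secret"}).json()
access, refresh = pair["access"], pair["refresh"]
# TokenRefreshView takes the refresh token and returns a fresh access token
access = requests.post(f"{BASE}/token/refresh/", data={"refresh": refresh}).json()["access"]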
9bd32e9affe9d4798bdc0f0e2c528ef20a36adfd | 3,450 | py | Python | PhyTestOnline/PhyTestOnline.py | JerryLife/PhyTestOnline | c3ca3ec396195e587b7409b492e3848daffca6fd | [
"MIT"
] | 1 | 2016-12-12T06:09:29.000Z | 2016-12-12T06:09:29.000Z | PhyTestOnline/PhyTestOnline.py | JerryLife/PhyTestOnline | c3ca3ec396195e587b7409b492e3848daffca6fd | [
"MIT"
] | null | null | null | PhyTestOnline/PhyTestOnline.py | JerryLife/PhyTestOnline | c3ca3ec396195e587b7409b492e3848daffca6fd | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Copyright (C) 2016 Jerry Life
import urllib
import urllib2
import re
import xlwt
START_PAGE = 0 # start page
ALL_PAGE = 82 # number of all pages
class PhyTestOnline(object):
"""
This class is specially for a HUST Physics Test Online providing a crawler to download
the Keys and to save them as Excel(.xls). For convenience, you can just use PhyTestOnline.main()
to finish the whole procedure.
Attention: This is a simple practice in crawler, which is only for study and communication.
It should never be used for illegal or improper ways like cheating. If so, the one who did it is
responsible for his own behavior instead of the author.
"""
def __init__(self, baseURL="http://115.156.215.236/admin/menu/query/queryandchoose/xianshi?paperNum=0"):
self.baseURL = baseURL
def getFirstPage(self, url=None):
if not url:
url = self.baseURL
try:
user_agent = 'Mozilla/5.0 (Windows NT 10.0; WOW64) ' \
'AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36'
headers = {'User-Agent': user_agent}
data = urllib.urlencode({})
request = urllib2.Request(url, data, headers)
response = urllib2.urlopen(request, timeout=10)
content = response.read()
return content.decode('GBK')
except urllib2.URLError, e:
if hasattr(e, "reason"):
print "Fail to connect to PhysicsTestOnline:", e.reason
return None
def getText(self, url=None):
textModel = re.compile('<td width="146">([0-9]+\.jpg)')
ansModel = re.compile('<td width="41">(.*)</td>')
html = self.getFirstPage(url)
if not html:
return None
text = re.findall(textModel, html)
ans = re.findall(ansModel, html)
if len(text) == len(ans):
print "%d Got" % len(ans)
return zip(text, ans)
else:
print "Answer or picture lost!"
return None
def getAll(self, allPage=ALL_PAGE):
startPage = self.baseURL[0:-1]
ansList = []
for i in range(START_PAGE, allPage+1):
url = startPage + str(i)
ans = self.getText(url)
if not ans:
pass
else:
print "Page%d finished.%d%%" % (i, i*100/allPage)
ansList += ans
print "Program complete."
return ansList
def saveAns(self, ansList, fileName='D:\TestAnswer.xls'):
ans = xlwt.Workbook()
sheet = ans.add_sheet('Sheet1')
numOfProblems = len(ansList)
for i in range(numOfProblems):
sheet.write(i, 0, ansList[i][0])
sheet.write(i, 1, ansList[i][1])
print "Line %d saved.%d%% Finished" % (i+1, (i+1)*100/numOfProblems)
# need protect?
ans.protect = True
ans.wnd_protect = True
ans.obj_protect = True
ans.save(fileName)
print "All saved."
return None
def main(self):
ansList = self.getAll()
iSave = raw_input('Save now? y/n: ')
if iSave == 'y':
self.saveAns(ansList)
else:
return None
return True
ans = PhyTestOnline()
ans.main() | 34.5 | 109 | 0.553623 | 422 | 3,450 | 4.492891 | 0.473934 | 0.026371 | 0.02057 | 0.016878 | 0.018987 | 0 | 0 | 0 | 0 | 0 | 0 | 0.033304 | 0.338551 | 3,450 | 100 | 110 | 34.5 | 0.797546 | 0.027826 | 0 | 0.105263 | 0 | 0.026316 | 0.157627 | 0.009101 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.013158 | 0.052632 | null | null | 0.092105 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9bd475cdfb843175aafda09a546901e32a2f92de | 1,518 | py | Python | funcs.py | amsynist/gmail_counter_deleter | 44c7ff780a13d7aae7a1ccee96abfc3b63dcc77a | [
"MIT"
] | null | null | null | funcs.py | amsynist/gmail_counter_deleter | 44c7ff780a13d7aae7a1ccee96abfc3b63dcc77a | [
"MIT"
] | null | null | null | funcs.py | amsynist/gmail_counter_deleter | 44c7ff780a13d7aae7a1ccee96abfc3b63dcc77a | [
"MIT"
] | null | null | null | import re
import imaplib
def checkinbox(user, passw, imapserver):
    imap = imaplib.IMAP4_SSL(imapserver)
    try:
        imap.login(user, passw)
        print("Connecting and fetching required info, please wait..")
        select_folder = input("Enter the folder : ")
        imap.select(select_folder)
        status, messages = imap.select(select_folder)
        messages = int(messages[0])
        print(f"Total number of mails in {select_folder} : {messages}")
    except Exception as err:
        print(' -- !! AUTHENTICATION-FAILED !! --')
        print("It seems the password was incorrect.")


def checkemail(user, passw, imapserver):
    # Raw string so the regex escapes are not interpreted as string escapes
    regex = r'^(\w|\.|\_|\-)+[@](\w|\_|\-|\.)+[.]\w{2,3}$'
    if re.search(regex, user):
        print("Valid email entered")
    else:
        print("!! Enter a valid email and try again !!")


def deleteEmail(user, passw, imapserver):
    checkemail(user, passw, imapserver)
    mail = imaplib.IMAP4_SSL(imapserver)
    mail.login(user, passw)
    try:
        print("Connecting and fetching required info, please wait..")
        mail.select("inbox")
        print("Deleting all emails from the inbox")
        typ, data = mail.search(None, 'ALL')
        for num in data[0].split():
            mail.store(num, '+FLAGS', r'(\Deleted)')
        print("Deleting....")
        mail.expunge()
        print("Emails deleted")
        mail.close()
        mail.logout()
    except Exception as err:
        print(' -- !! AUTHENTICATION-FAILED !! --')
        print("It seems the password was incorrect.")
| 34.5 | 70 | 0.614625 | 181 | 1,518 | 5.110497 | 0.447514 | 0.058378 | 0.082162 | 0.054054 | 0.278919 | 0.278919 | 0.278919 | 0.278919 | 0.175135 | 0.175135 | 0 | 0.005146 | 0.231884 | 1,518 | 43 | 71 | 35.302326 | 0.788165 | 0 | 0 | 0.25 | 0 | 0 | 0.330698 | 0.055995 | 0 | 0 | 0 | 0 | 0 | 1 | 0.075 | false | 0.2 | 0.05 | 0 | 0.125 | 0.3 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
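A short hedged sketch of calling the helpers above against Gmail's IMAP server (credentials are placeholders; Gmail additionally requires an app password or OAuth, which these functions do not handle):

IMAP_SERVER = "imap.gmail.com"
checkemail("user@example.com", "app-password", IMAP_SERVER)     # format check only
checkinbox("user@example.com", "app-password", IMAP_SERVER)     # prints the message count of a chosen folder
# deleteEmail("user@example.com", "app-password", IMAP_SERVER)  # destructive: flags and expunges the whole inbox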
9bd59710cf5b8329b4da0192b5e1698bf07e999d | 13,736 | py | Python | plugin-python/proto/pyvcloudprovider_pb2_grpc.py | srinarayanant/terraform-provider-vcloud-director-1 | 1e805550e69fb5284d4a746a2f86326ec72c565f | [
"BSD-2-Clause"
] | null | null | null | plugin-python/proto/pyvcloudprovider_pb2_grpc.py | srinarayanant/terraform-provider-vcloud-director-1 | 1e805550e69fb5284d4a746a2f86326ec72c565f | [
"BSD-2-Clause"
] | null | null | null | plugin-python/proto/pyvcloudprovider_pb2_grpc.py | srinarayanant/terraform-provider-vcloud-director-1 | 1e805550e69fb5284d4a746a2f86326ec72c565f | [
"BSD-2-Clause"
] | null | null | null | # Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
import grpc
from proto import catalog_item_pb2 as proto_dot_catalog__item__pb2
from proto import pyvcloudprovider_pb2 as proto_dot_pyvcloudprovider__pb2
from proto import vapp_pb2 as proto_dot_vapp__pb2
class PyVcloudProviderStub(object):
"""Interface exported by the server.
"""
def __init__(self, channel):
"""Constructor.
Args:
channel: A grpc.Channel.
"""
self.Login = channel.unary_unary(
'/proto.PyVcloudProvider/Login',
request_serializer=proto_dot_pyvcloudprovider__pb2.
LoginCredentials.SerializeToString,
response_deserializer=proto_dot_pyvcloudprovider__pb2.LoginResult.
FromString,
)
self.ReadCatalog = channel.unary_unary(
'/proto.PyVcloudProvider/ReadCatalog',
request_serializer=proto_dot_pyvcloudprovider__pb2.Catalog.
SerializeToString,
response_deserializer=proto_dot_pyvcloudprovider__pb2.
ReadCatalogResult.FromString,
)
self.CreateCatalog = channel.unary_unary(
'/proto.PyVcloudProvider/CreateCatalog',
request_serializer=proto_dot_pyvcloudprovider__pb2.Catalog.
SerializeToString,
response_deserializer=proto_dot_pyvcloudprovider__pb2.
CreateCatalogResult.FromString,
)
self.DeleteCatalog = channel.unary_unary(
'/proto.PyVcloudProvider/DeleteCatalog',
request_serializer=proto_dot_pyvcloudprovider__pb2.Catalog.
SerializeToString,
response_deserializer=proto_dot_pyvcloudprovider__pb2.
DeleteCatalogResult.FromString,
)
self.CatalogUploadMedia = channel.unary_unary(
'/proto.PyVcloudProvider/CatalogUploadMedia',
request_serializer=proto_dot_catalog__item__pb2.
CatalogUploadMediaInfo.SerializeToString,
response_deserializer=proto_dot_catalog__item__pb2.
CatalogUploadMediaResult.FromString,
)
self.CatalogUploadOva = channel.unary_unary(
'/proto.PyVcloudProvider/CatalogUploadOva',
request_serializer=proto_dot_catalog__item__pb2.
CatalogUploadOvaInfo.SerializeToString,
response_deserializer=proto_dot_catalog__item__pb2.
CatalogUploadOvaResult.FromString,
)
self.OvaCheckResolved = channel.unary_unary(
'/proto.PyVcloudProvider/OvaCheckResolved',
request_serializer=proto_dot_catalog__item__pb2.
CatalogCheckResolvedInfo.SerializeToString,
response_deserializer=proto_dot_pyvcloudprovider__pb2.
CheckResolvedResult.FromString,
)
self.DeleteCatalogItem = channel.unary_unary(
'/proto.PyVcloudProvider/DeleteCatalogItem',
request_serializer=proto_dot_catalog__item__pb2.
DeleteCatalogItemInfo.SerializeToString,
response_deserializer=proto_dot_catalog__item__pb2.
DeleteCatalogItemResult.FromString,
)
self.isPresentCatalogItem = channel.unary_unary(
'/proto.PyVcloudProvider/isPresentCatalogItem',
request_serializer=proto_dot_catalog__item__pb2.
IsPresentCatalogItemInfo.SerializeToString,
response_deserializer=proto_dot_catalog__item__pb2.
IsPresentCatalogItemResult.FromString,
)
self.CaptureVapp = channel.unary_unary(
'/proto.PyVcloudProvider/CaptureVapp',
request_serializer=proto_dot_catalog__item__pb2.CaptureVAppInfo.
SerializeToString,
response_deserializer=proto_dot_catalog__item__pb2.
CaptureVAppResult.FromString,
)
self.CreateVApp = channel.unary_unary(
'/proto.PyVcloudProvider/CreateVApp',
request_serializer=proto_dot_vapp__pb2.CreateVAppInfo.
SerializeToString,
response_deserializer=proto_dot_vapp__pb2.CreateVAppResult.
FromString,
)
self.DeleteVApp = channel.unary_unary(
'/proto.PyVcloudProvider/DeleteVApp',
request_serializer=proto_dot_vapp__pb2.DeleteVAppInfo.
SerializeToString,
response_deserializer=proto_dot_vapp__pb2.DeleteVAppResult.
FromString,
)
self.ReadVApp = channel.unary_unary(
'/proto.PyVcloudProvider/ReadVApp',
request_serializer=proto_dot_vapp__pb2.ReadVAppInfo.
SerializeToString,
response_deserializer=proto_dot_vapp__pb2.ReadVAppResult.
FromString,
)
self.StopPlugin = channel.unary_unary(
'/proto.PyVcloudProvider/StopPlugin',
request_serializer=proto_dot_pyvcloudprovider__pb2.StopInfo.
SerializeToString,
response_deserializer=proto_dot_pyvcloudprovider__pb2.StopResult.
FromString,
)
class PyVcloudProviderServicer(object):
"""Interface exported by the server.
"""
def Login(self, request, context):
"""Tenant Loging to VCD
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def ReadCatalog(self, request, context):
"""check if catalog is preset and return true and the catalog details
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def CreateCatalog(self, request, context):
"""create a new catalog
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def DeleteCatalog(self, request, context):
"""delete a catalog
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def CatalogUploadMedia(self, request, context):
"""catalog upload Media - anything other than ova
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def CatalogUploadOva(self, request, context):
"""catalog upload ova
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def OvaCheckResolved(self, request, context):
"""check resolved after upload
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def DeleteCatalogItem(self, request, context):
"""catalog item delete
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def isPresentCatalogItem(self, request, context):
"""check if catalog item is preset and return true
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def CaptureVapp(self, request, context):
# missing associated documentation comment in .proto file
pass
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def CreateVApp(self, request, context):
"""create vApp
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def DeleteVApp(self, request, context):
"""delete VApp
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def ReadVApp(self, request, context):
"""Read VApp
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def StopPlugin(self, request, context):
"""remote stop interface for the plugin
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def add_PyVcloudProviderServicer_to_server(servicer, server):
rpc_method_handlers = {
'Login':
grpc.unary_unary_rpc_method_handler(
servicer.Login,
request_deserializer=proto_dot_pyvcloudprovider__pb2.
LoginCredentials.FromString,
response_serializer=proto_dot_pyvcloudprovider__pb2.LoginResult.
SerializeToString,
),
'ReadCatalog':
grpc.unary_unary_rpc_method_handler(
servicer.ReadCatalog,
request_deserializer=proto_dot_pyvcloudprovider__pb2.Catalog.
FromString,
response_serializer=proto_dot_pyvcloudprovider__pb2.
ReadCatalogResult.SerializeToString,
),
'CreateCatalog':
grpc.unary_unary_rpc_method_handler(
servicer.CreateCatalog,
request_deserializer=proto_dot_pyvcloudprovider__pb2.Catalog.
FromString,
response_serializer=proto_dot_pyvcloudprovider__pb2.
CreateCatalogResult.SerializeToString,
),
'DeleteCatalog':
grpc.unary_unary_rpc_method_handler(
servicer.DeleteCatalog,
request_deserializer=proto_dot_pyvcloudprovider__pb2.Catalog.
FromString,
response_serializer=proto_dot_pyvcloudprovider__pb2.
DeleteCatalogResult.SerializeToString,
),
'CatalogUploadMedia':
grpc.unary_unary_rpc_method_handler(
servicer.CatalogUploadMedia,
request_deserializer=proto_dot_catalog__item__pb2.
CatalogUploadMediaInfo.FromString,
response_serializer=proto_dot_catalog__item__pb2.
CatalogUploadMediaResult.SerializeToString,
),
'CatalogUploadOva':
grpc.unary_unary_rpc_method_handler(
servicer.CatalogUploadOva,
request_deserializer=proto_dot_catalog__item__pb2.
CatalogUploadOvaInfo.FromString,
response_serializer=proto_dot_catalog__item__pb2.
CatalogUploadOvaResult.SerializeToString,
),
'OvaCheckResolved':
grpc.unary_unary_rpc_method_handler(
servicer.OvaCheckResolved,
request_deserializer=proto_dot_catalog__item__pb2.
CatalogCheckResolvedInfo.FromString,
response_serializer=proto_dot_pyvcloudprovider__pb2.
CheckResolvedResult.SerializeToString,
),
'DeleteCatalogItem':
grpc.unary_unary_rpc_method_handler(
servicer.DeleteCatalogItem,
request_deserializer=proto_dot_catalog__item__pb2.
DeleteCatalogItemInfo.FromString,
response_serializer=proto_dot_catalog__item__pb2.
DeleteCatalogItemResult.SerializeToString,
),
'isPresentCatalogItem':
grpc.unary_unary_rpc_method_handler(
servicer.isPresentCatalogItem,
request_deserializer=proto_dot_catalog__item__pb2.
IsPresentCatalogItemInfo.FromString,
response_serializer=proto_dot_catalog__item__pb2.
IsPresentCatalogItemResult.SerializeToString,
),
'CaptureVapp':
grpc.unary_unary_rpc_method_handler(
servicer.CaptureVapp,
request_deserializer=proto_dot_catalog__item__pb2.CaptureVAppInfo.
FromString,
response_serializer=proto_dot_catalog__item__pb2.CaptureVAppResult.
SerializeToString,
),
'CreateVApp':
grpc.unary_unary_rpc_method_handler(
servicer.CreateVApp,
request_deserializer=proto_dot_vapp__pb2.CreateVAppInfo.FromString,
response_serializer=proto_dot_vapp__pb2.CreateVAppResult.
SerializeToString,
),
'DeleteVApp':
grpc.unary_unary_rpc_method_handler(
servicer.DeleteVApp,
request_deserializer=proto_dot_vapp__pb2.DeleteVAppInfo.FromString,
response_serializer=proto_dot_vapp__pb2.DeleteVAppResult.
SerializeToString,
),
'ReadVApp':
grpc.unary_unary_rpc_method_handler(
servicer.ReadVApp,
request_deserializer=proto_dot_vapp__pb2.ReadVAppInfo.FromString,
response_serializer=proto_dot_vapp__pb2.ReadVAppResult.
SerializeToString,
),
'StopPlugin':
grpc.unary_unary_rpc_method_handler(
servicer.StopPlugin,
request_deserializer=proto_dot_pyvcloudprovider__pb2.StopInfo.
FromString,
response_serializer=proto_dot_pyvcloudprovider__pb2.StopResult.
SerializeToString,
),
}
generic_handler = grpc.method_handlers_generic_handler(
'proto.PyVcloudProvider', rpc_method_handlers)
server.add_generic_rpc_handlers((generic_handler, ))
| 40.759644 | 79 | 0.683241 | 1,185 | 13,736 | 7.532489 | 0.109705 | 0.052879 | 0.056464 | 0.048958 | 0.751737 | 0.632086 | 0.52924 | 0.335649 | 0.274591 | 0.274591 | 0 | 0.006006 | 0.248471 | 13,736 | 336 | 80 | 40.880952 | 0.858665 | 0.04885 | 0 | 0.458484 | 1 | 0 | 0.104703 | 0.041326 | 0 | 0 | 0 | 0 | 0 | 1 | 0.057762 | false | 0.00361 | 0.01444 | 0 | 0.079422 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9bd599d6657b8c94098c98e04397485d867aa089 | 1,820 | py | Python | bot/triggers/commands/math.py | elihschiff/Rubber-Duck-Python | 24dea3b64a8a46368cd8dd995c800375f355b55e | [
"MIT"
] | 7 | 2020-07-07T20:58:14.000Z | 2021-12-23T02:51:20.000Z | bot/triggers/commands/math.py | elihschiff/Rubber-Duck-Python | 24dea3b64a8a46368cd8dd995c800375f355b55e | [
"MIT"
] | null | null | null | bot/triggers/commands/math.py | elihschiff/Rubber-Duck-Python | 24dea3b64a8a46368cd8dd995c800375f355b55e | [
"MIT"
] | 1 | 2020-03-29T13:36:43.000Z | 2020-03-29T13:36:43.000Z | from . import Command
from .. import utils
import wolframalpha
class Math(Command):
    names = ["math", "calc", "calculate", "solve"]
    description = "Solves a math problem"
    usage = "!math [expression]"
    examples = "!math d/dx sin(x)^2"
    show_in_help = True

    async def execute_command(self, client, msg, content, **kwargs):
        if not content:
            await utils.delay_send(msg.channel, f"Usage: {self.usage}", reply_to=msg)
            return

        wolfram = wolframalpha.Client(client.config["wolfram_id"])
        query_res = wolfram.query(content)
        try:
            for pod in query_res.pods:
                if (
                    pod.title.startswith("Result")
                    or pod.title.startswith("Exact result")
                    or pod.title.startswith("Power of 10 representation")
                    or pod.title.startswith("Decimal approximation")
                ):
                    await msg.channel.send(
                        f"The answer for `{content}` is: `{utils.sanitized(pod.text)}`",
                        reference=msg,
                        mention_author=True,
                    )
                    return
                if pod.title.startswith("Plot"):
                    await msg.channel.send(
                        # TODO: this should be more robust
                        f"The answer for `{content}` is: {list(list(pod.subpod)[0].img)[0]['@src']}",
                        reference=msg,
                        mention_author=True,
                    )
                    return
        except (KeyError, AttributeError):
            pass
        await msg.channel.send(
            f"I could not find an answer for `{utils.sanitized(content)}`",
            reference=msg,
            mention_author=True,
        )
| 37.142857 | 101 | 0.503297 | 184 | 1,820 | 4.918478 | 0.505435 | 0.044199 | 0.099448 | 0.066298 | 0.258564 | 0.125967 | 0 | 0 | 0 | 0 | 0 | 0.004513 | 0.391209 | 1,820 | 48 | 102 | 37.916667 | 0.812274 | 0.017582 | 0 | 0.27907 | 0 | 0.023256 | 0.207167 | 0.055431 | 0 | 0 | 0 | 0.020833 | 0 | 1 | 0 | false | 0.023256 | 0.069767 | 0 | 0.27907 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9bdb227b8986247b3e187af1afb2dd0572b47533 | 1,281 | py | Python | scripts/parse_coredump_bin.py | lucasdietrich/AVRTOS | a8a3b7890ed54fbb1bd718fd13b0eb620ecf9b13 | [
"Apache-2.0"
] | 3 | 2021-12-10T21:16:03.000Z | 2022-03-20T08:09:53.000Z | scripts/parse_coredump_bin.py | Adecy/atmega328p-multithreading | b0f2da9f31fba7bcccf5f06f827af87494a0bcb3 | [
"Apache-2.0"
] | null | null | null | scripts/parse_coredump_bin.py | Adecy/atmega328p-multithreading | b0f2da9f31fba7bcccf5f06f827af87494a0bcb3 | [
"Apache-2.0"
] | null | null | null | import re
import struct
from typing import List
class Core:
    def __init__(self):
        self.registers = []

    def SP(self) -> int:
        return self.registers[0]

    def PC(self) -> int:
        return self.registers[1]

    def SREG(self) -> int:
        return self.registers[-1]

    def r(self, i: int) -> int:
        return self.registers[i + 2]

    def __repr__(self):
        return f"Core\n" \
               f" SP={self.SP():0{4}X} PC={self.PC():0{4}X} SREG={self.SREG():0{2}X} " + \
               (" ".join(f"r{i}={self.r(i)}" for i in range(32)))


re_coredump = re.compile(b"<<<<.{37}>>>>")


def parse_core(filename: str) -> List[Core]:
    with open(filename, "br") as fp:
        content = fp.read()
    match = re_coredump.findall(content)
    cores = []
    print(match)
    for parsed in match:
        data = parsed[4:41]
        # https://docs.python.org/3/library/struct.html
        core = Core()
        core.registers = struct.unpack("HHB" + "B" * 32, data)
        cores.append(core)
        print(core.registers, core)
    return cores


def decode(core: bytes):
    return


if __name__ == "__main__":
    import sys
    if len(sys.argv) == 2:
        parse_core(sys.argv[1])
    else:
raise Exception("argument problem") | 19.707692 | 91 | 0.552693 | 178 | 1,281 | 3.865169 | 0.426966 | 0.094477 | 0.075581 | 0.127907 | 0.125 | 0.087209 | 0.087209 | 0 | 0 | 0 | 0 | 0.024044 | 0.285714 | 1,281 | 65 | 92 | 19.707692 | 0.727869 | 0.035129 | 0 | 0 | 0 | 0.025 | 0.109312 | 0.019433 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.1 | 0.15 | 0.5 | 0.05 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
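Each dump is framed as <<<< plus a 37-byte payload plus >>>>, unpacked as SP (uint16), PC (uint16), then 33 register/status bytes. A hedged sketch for generating a synthetic frame to exercise parse_core — the values are arbitrary and chosen to avoid 0x0A, which the `.` in the regex above would not match:

import struct

regs = [0x20 + i for i in range(33)]  # 32 general-purpose registers + SREG, no 0x0A bytes
payload = struct.pack("HHB" + "B" * 32, 0x08FF, 0x1234, *regs)  # SP, PC, registers: 37 bytes
with open("fake_dump.bin", "wb") as fp:  # hypothetical file name
    fp.write(b"<<<<" + payload + b">>>>")
print(parse_core("fake_dump.bin"))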
9bdd52b91453871245afa987abaad8fffe050c55 | 484 | py | Python | scripts/publish_source.py | apguerrera/DreamFrames | ad6c7c081378f02010583dbdcb33e8ff112dd94b | [
"MIT"
] | 2 | 2020-06-09T02:12:21.000Z | 2021-02-06T07:33:31.000Z | scripts/publish_source.py | apguerrera/DreamFrames | ad6c7c081378f02010583dbdcb33e8ff112dd94b | [
"MIT"
] | null | null | null | scripts/publish_source.py | apguerrera/DreamFrames | ad6c7c081378f02010583dbdcb33e8ff112dd94b | [
"MIT"
] | 2 | 2019-05-01T01:53:42.000Z | 2020-05-11T14:11:56.000Z | from brownie import *
from .contract_addresses import *
import time
def publish():
    if network.show_active() == "development":
        return False
    else:
        return True


def main():
    publish_Goober_nft()


def publish_Goober_nft():
    dream_frames_NFT_address = CONTRACTS[network.show_active()]["dream_frames_NFT"]
    if dream_frames_NFT_address != '':
        # Use the address resolved from CONTRACTS above to locate the deployed contract
        goober_nft = DreamFramesNFT.at(dream_frames_NFT_address)
        DreamFramesNFT.publish_source(goober_nft)
| 24.2 | 83 | 0.706612 | 58 | 484 | 5.551724 | 0.465517 | 0.139752 | 0.130435 | 0.130435 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.200413 | 484 | 19 | 84 | 25.473684 | 0.832041 | 0 | 0 | 0 | 0 | 0 | 0.055785 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0 | 0.533333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
9be072e436a510815bc7fcb98815452de49a5068 | 491 | py | Python | datahub/omis/invoice/migrations/0006_invoice_contact_email.py | Staberinde/data-hub-api | 3d0467dbceaf62a47158eea412a3dba827073300 | [
"MIT"
] | 6 | 2019-12-02T16:11:24.000Z | 2022-03-18T10:02:02.000Z | datahub/omis/invoice/migrations/0006_invoice_contact_email.py | Staberinde/data-hub-api | 3d0467dbceaf62a47158eea412a3dba827073300 | [
"MIT"
] | 1,696 | 2019-10-31T14:08:37.000Z | 2022-03-29T12:35:57.000Z | datahub/omis/invoice/migrations/0006_invoice_contact_email.py | Staberinde/data-hub-api | 3d0467dbceaf62a47158eea412a3dba827073300 | [
"MIT"
] | 9 | 2019-11-22T12:42:03.000Z | 2021-09-03T14:25:05.000Z | # Generated by Django 2.0.1 on 2018-01-03 15:50
from django.db import migrations, models
class Migration(migrations.Migration):
    dependencies = [
        ('omis_invoice', '0005_populate_billing_fields'),
    ]

    operations = [
        migrations.AddField(
            model_name='invoice',
            name='contact_email',
            field=models.EmailField(blank=True, help_text='Email address of the contact at the time of invoice creation.', max_length=255),
        ),
    ]
| 25.842105 | 139 | 0.645621 | 59 | 491 | 5.237288 | 0.79661 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.059783 | 0.250509 | 491 | 18 | 140 | 27.277778 | 0.779891 | 0.09165 | 0 | 0 | 1 | 0 | 0.272523 | 0.063063 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9be66a9b0c5bd06f8460d9a3b8c3707b90aaf766 | 1,513 | py | Python | bin/system-setup.py | filipecosta90/readies | ffd48b39cd7c124e08eb3d770a0e0581445ffa37 | [
"BSD-3-Clause"
] | 9 | 2019-12-17T17:57:54.000Z | 2022-02-04T15:43:35.000Z | bin/system-setup.py | filipecosta90/readies | ffd48b39cd7c124e08eb3d770a0e0581445ffa37 | [
"BSD-3-Clause"
] | 87 | 2021-01-06T08:59:37.000Z | 2022-02-16T05:10:03.000Z | bin/system-setup.py | filipecosta90/readies | ffd48b39cd7c124e08eb3d770a0e0581445ffa37 | [
"BSD-3-Clause"
] | 11 | 2019-12-11T13:30:23.000Z | 2022-01-06T12:21:26.000Z | #!/bin/sh
''''[ ! -z $VIRTUAL_ENV ] && exec python -u -- "$0" ${1+"$@"}; command -v python3 > /dev/null && exec python3 -u -- "$0" ${1+"$@"}; exec python2 -u -- "$0" ${1+"$@"} # '''
import sys
import os
import argparse
HERE = os.path.dirname(__file__)
ROOT = os.path.abspath(os.path.join(HERE, ".."))
sys.path.insert(0, ROOT)
import paella
#----------------------------------------------------------------------------------------------
class SystemSetup(paella.Setup):
    def __init__(self, nop=False):
        paella.Setup.__init__(self, nop)

    def common_first(self):
        # self.install("")
        # self.group_install("")
        # self.setup_pip()
        # self.pip_install("")
        print("common_first")

    def debian_compat(self):
        print("debian_compat")

    def redhat_compat(self):
        print("redhat_compat")

    def fedora(self):
        print("fedora")

    def macos(self):
        print("macos")

    def common_last(self):
        print("common_last")
#----------------------------------------------------------------------------------------------
parser = argparse.ArgumentParser(description='Set up system for build.')
parser.add_argument('-n', '--nop', action="store_true", help='no operation')
# parser.add_argument('--bool', action="store_true", help="flag")
# parser.add_argument('--int', type=int, default=1, help='number')
# parser.add_argument('--str', type=str, default='str', help='string')
args = parser.parse_args()
SystemSetup(nop = args.nop).setup()
| 29.666667 | 171 | 0.540648 | 172 | 1,513 | 4.575581 | 0.44186 | 0.057179 | 0.086404 | 0.048285 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008682 | 0.162591 | 1,513 | 50 | 172 | 30.26 | 0.61247 | 0.421679 | 0 | 0 | 0 | 0 | 0.133721 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.269231 | false | 0 | 0.153846 | 0 | 0.461538 | 0.230769 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9bf2b2dc723f4b7d00b4319b8189354d02aff030 | 4,318 | py | Python | 3/konks_lab_3.py | exucutional/study_comp_math | c73b6f00e86b2d19c4cb81c377ca4706d70b2831 | [
"MIT"
] | null | null | null | 3/konks_lab_3.py | exucutional/study_comp_math | c73b6f00e86b2d19c4cb81c377ca4706d70b2831 | [
"MIT"
] | null | null | null | 3/konks_lab_3.py | exucutional/study_comp_math | c73b6f00e86b2d19c4cb81c377ca4706d70b2831 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# coding: utf-8
# # Lab 02
#
# ## Solving a system of nonlinear equations
#
# ### Konks Eric, Б01-818
#
# IV.12.7.д
# $$\begin{cases} x^7 - 5x^2y^4 + 1510 = 0 \\ y^3 - 3x^4y - 105 = 0 \end{cases}$$
# $$\begin{cases} x_{n+1} = \sqrt{\frac{x_n^7 + 1510}{5y_n^4}} \\ y_{n+1} = \sqrt[3]{3x_{n}^4y_{n}+105} \end{cases}$$
# $$J=\begin{pmatrix}7x^6-10xy^4 & -20x^2y^3\\-12x^3y & 3y^2-3x^4\end{pmatrix}$$
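# The Newton class below applies the update with this Jacobian inverted analytically (the four J*_inv entries):
# $$\begin{pmatrix}x_{n+1}\\y_{n+1}\end{pmatrix}=\begin{pmatrix}x_{n}\\y_{n}\end{pmatrix}-J^{-1}(x_{n},y_{n})\begin{pmatrix}x_{n}^{7}-5x_{n}^{2}y_{n}^{4}+1510\\y_{n}^{3}-3x_{n}^{4}y_{n}-105\end{pmatrix}$$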
# In[1]:
import unittest
import logging
import numpy as np
import pandas as pd
# In[2]:
#logging.basicConfig(level=logging.DEBUG)
# In[3]:
class FPI:
    def __init__(self, f_vec):
        self.__f_vec = f_vec
        self.iter = 0
        self.log = logging.getLogger("FPI")

    def __is_stop(self, next_x, cur_x, q, delta):
        if next_x == cur_x:
            return False
        if sum(np.abs((next_x[i] - cur_x[i])) for i in range(len(cur_x))) <= delta * (1 - q):
            return True
        return False

    def solve(self, init_x, q, delta):
        cur_x = init_x
        next_x = init_x
        while not self.__is_stop(next_x, cur_x, q, delta):
            cur_x = next_x
            next_x = cur_x[:]
            for i in range(len(self.__f_vec)):
                next_x[i] = self.__f_vec[i](cur_x)
            self.log.debug(f"Iter[{self.iter}]: Init: {cur_x} Next: {next_x}")
            self.iter = self.iter + 1
        return next_x
# In[4]:
class Newton:
    def __init__(self, f_vec, J):
        self.__f_vec = f_vec
        self.__J = J
        self.iter = 0
        self.log = logging.getLogger("Newton")

    def __J_mul_f(self, x, i):
        return sum(self.__f_vec[j](x) * self.__J[i][j](x) for j in range(len(self.__f_vec)))

    def __is_stop(self, next_x, cur_x, M2, m1, delta):
        if next_x == cur_x:
            return False
        if sum(np.abs(next_x[i] - cur_x[i]) for i in range(len(cur_x))) < np.sqrt(2*delta*m1/M2):
            return True
        return False

    def solve(self, init_x, M2, m1, delta):
        self.iter = 0
        cur_x = init_x
        next_x = init_x
        while not self.__is_stop(next_x, cur_x, M2, m1, delta):
            cur_x = next_x
            next_x = cur_x[:]
            for i in range(len(self.__f_vec)):
                next_x[i] = cur_x[i] - self.__J_mul_f(cur_x, i)
            self.log.debug(f"Iter[{self.iter}]: Init: {cur_x} Next: {next_x}")
            self.iter = self.iter + 1
        return next_x
# In[5]:
def fpi_f1(x):
    return np.sqrt((x[0]**7 + 1510)/(5 * (x[1]**4)))

def fpi_f2(x):
    return np.cbrt(3*(x[0]**4)*x[1] + 105)

fpi = FPI([fpi_f1, fpi_f2])
# In[6]:
def newton_f1(x):
    return x[0]**7-5*(x[0]**2)*(x[1]**4)+1510

def newton_f2(x):
    return x[1]**3-3*(x[0]**4)*x[1]-105

def J00(x):
    return 7*(x[0]**6)-10*x[0]*(x[1]**4)

def J01(x):
    return -20*(x[0]**2)*(x[1]**3)

def J10(x):
    return -12*(x[0]**3)*x[1]

def J11(x):
    return 3*(x[1]**2) - 3*(x[0]**4)

def J(x):
    return [[J00(x), J01(x)], [J10(x), J11(x)]]

def J00_inv(x):
    return J11(x)/(J00(x)*J11(x)-J10(x)*J01(x))

def J01_inv(x):
    return - J01(x)/(J00(x)*J11(x)-J10(x)*J01(x))

def J10_inv(x):
    return - J10(x)/(J00(x)*J11(x)-J10(x)*J01(x))

def J11_inv(x):
    return J00(x)/(J00(x)*J11(x)-J10(x)*J01(x))

J_inv = [[J00_inv, J01_inv], [J10_inv, J11_inv]]
newton = Newton([newton_f1, newton_f2], J_inv)
# In[7]:
log = logging.getLogger()
x_init_vec_fpi = [[1,5], [3, -4], [-1, 5]]
x_init_vec_newton = [[1,5], [3, -4], [-1, 5], [-4, 0], [-2, -2]]
delta = 10**-5
q = 0.5
m1 = 1
M2 = 1
fpi_results = []
fpi_iterations = []
newton_results = []
newton_iterations = []
for x in x_init_vec_fpi:
    fpi_results.append(fpi.solve(x, q, delta))
    fpi_iterations.append(fpi.iter)

for x in x_init_vec_newton:
    newton_results.append(newton.solve(x, M2, m1, delta))
    newton_iterations.append(newton.iter)
# In[8]:
fpi_dt = pd.DataFrame({"Начальное приближение": x_init_vec_fpi, "Результат": fpi_results, "Итераций": fpi_iterations})
newton_dt = pd.DataFrame({"Начальное приближение": x_init_vec_newton, "Результат": newton_results, "Итераций": newton_iterations})
print("Метод простых итераций")
print(fpi_dt)
print("\nМетод Ньютона")
print(newton_dt)
| 22.968085 | 130 | 0.556739 | 761 | 4,318 | 2.946124 | 0.159001 | 0.037467 | 0.032114 | 0.032114 | 0.436218 | 0.4157 | 0.374219 | 0.334077 | 0.277877 | 0.236396 | 0 | 0.077305 | 0.254053 | 4,318 | 187 | 131 | 23.090909 | 0.618752 | 0.113015 | 0 | 0.278846 | 0 | 0 | 0.056812 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.192308 | false | 0 | 0.038462 | 0.134615 | 0.461538 | 0.038462 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
9bfd1efc408b89ae999c37acf631f9234b2c8e00 | 1,529 | py | Python | Libraries/Simplynet.py | frknayk/WeightVis | 255114ed0981467e29c5a28d0130af24373db1f1 | [
"MIT"
] | 2 | 2021-02-24T06:00:27.000Z | 2022-02-16T11:14:25.000Z | Libraries/Simplynet.py | frknayk/WeightVis | 255114ed0981467e29c5a28d0130af24373db1f1 | [
"MIT"
] | 1 | 2020-07-28T09:14:53.000Z | 2020-07-28T09:17:47.000Z | Libraries/Simplynet.py | frknayk/WeightVis | 255114ed0981467e29c5a28d0130af24373db1f1 | [
"MIT"
] | null | null | null | import numpy as np
import pickle
from Libraries.Enums import NNLibs
from Libraries.Reader import Reader
#TODO Complete read() function for SimplyNet
#TODO SimplyNet init(self,path) - path should go to read's parameter(like torch)
class SimplyNet(Reader):
    def __init__(self, path):
        self.weights_list = []
        self.biases_list = []
        self.load_weights(path)

    def read(self):
        pass

    def get_weights(self, path):
        """Load weights and biases """
        file_name = path + ".pickle"
        with open(file_name, 'rb') as handle:
            b = pickle.load(handle)
        return b

    def load_weights(self, path):
        """Get weights from list of layers """
        weights_list = self.get_weights(path)
        for idx in range(len(weights_list)):
            # Layers are stored as dicts where the layer name is the key in SimplyNet
            layer_dict = weights_list[idx]
            # Get the key (there is a single key: the layer name)
            key_list = [thing for thing in layer_dict.keys()]
            # Get the weights dictionary
            weigts_dict = layer_dict[key_list[0]]
            # Read weights and biases
            W = weigts_dict['W'].T
            b = weigts_dict['b'].T
            # Bias must be in the shape of (dim_bias,)
            b = b.reshape(b.shape[1])
            self.weights_list.append(W)
            self.biases_list.append(b)

    def get_lib(self):
        """Get enumeration of lib"""
        return NNLibs.SimplyNet
| 30.58 | 80 | 0.589274 | 202 | 1,529 | 4.326733 | 0.376238 | 0.062929 | 0.02746 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001919 | 0.318509 | 1,529 | 49 | 81 | 31.204082 | 0.836852 | 0.262917 | 0 | 0 | 0 | 0 | 0.009955 | 0 | 0 | 0 | 0 | 0.020408 | 0 | 1 | 0.172414 | false | 0.034483 | 0.137931 | 0 | 0.413793 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
50038d02e89c3e78533be863fdaf0e1d485f0ce8 | 1,083 | py | Python | setup.py | marianaalbano/python_mail | 4f222894a2faa0714b2211ee9210af8f5cb5f1ed | [
"MIT"
] | null | null | null | setup.py | marianaalbano/python_mail | 4f222894a2faa0714b2211ee9210af8f5cb5f1ed | [
"MIT"
] | null | null | null | setup.py | marianaalbano/python_mail | 4f222894a2faa0714b2211ee9210af8f5cb5f1ed | [
"MIT"
] | null | null | null | #!/usr/bin/python3
#-*- coding: utf-8 -*-
import setuptools
longdesc = """
This module was created to make searching for messages in an e-mail inbox
simple and intuitive, using the imaplib module for the connection.
It can be installed with:
``pip install git+https://github.com/marianaalbano/python_mail.git``.
"""
setuptools.setup(
    name="python_mail",
    version="1.0.2",
    author="Mariana Albano",
    author_email="mariana.albano@outlook.com",
    description="Management email module",
    license='MIT License',
    long_description="This module was created to make searching for messages in an e-mail inbox simple and intuitive, using the imaplib module for the connection.",
    long_description_content_type="text/markdown",
    url="https://github.com/marianaalbano/python_mail.git",
    packages=setuptools.find_packages(),
    classifiers=(
        "Programming Language :: Python :: 3.5",
        "License :: OSI Approved :: MIT License",
        "Operating System :: OS Independent",
    ),
) | 34.935484 | 189 | 0.715605 | 143 | 1,083 | 5.356643 | 0.545455 | 0.039164 | 0.033943 | 0.049608 | 0.45953 | 0.45953 | 0.45953 | 0.355091 | 0.355091 | 0.355091 | 0 | 0.007821 | 0.173592 | 1,083 | 31 | 190 | 34.935484 | 0.848045 | 0.035088 | 0 | 0 | 0 | 0.083333 | 0.675287 | 0.024904 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.041667 | 0 | 0.041667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5008bf50f1c889a0cb8f758f821ceff1b2688937 | 1,933 | py | Python | swmmio/version_control/tests/compare_inp.py | jennwuu/swmmio | 6918ecfb69c10333cbc65ce0ab6554f8a04ef8f9 | [
"MIT"
] | 76 | 2016-04-26T14:04:02.000Z | 2022-03-24T10:10:29.000Z | swmmio/version_control/tests/compare_inp.py | Kimi-Monica/swmmio | 54dd6c1f7a3e47db5702b1f703beca0a8945a250 | [
"MIT"
] | 94 | 2016-05-06T15:32:51.000Z | 2022-02-10T08:03:30.000Z | swmmio/version_control/tests/compare_inp.py | Kimi-Monica/swmmio | 54dd6c1f7a3e47db5702b1f703beca0a8945a250 | [
"MIT"
] | 26 | 2016-09-01T22:51:47.000Z | 2022-02-09T09:13:23.000Z | import os
def remove_comments_and_crlf(inp_path, comment_string=';', overwrite=False):
    tmpfilename = os.path.splitext(os.path.basename(inp_path))[0] + '_mod.inp'
    tmpfilepath = os.path.join(os.path.dirname(inp_path), tmpfilename)
    with open(inp_path) as oldf:
        with open(tmpfilepath, 'w') as newf:
            for line in oldf:
                if ';' in line:
                    # remove the comments
                    if line.strip()[0] == comment_string:
                        # skip the whole line
                        pass
                    else:
                        # write the text to the left of the comment
                        non_comment_line = line.split(';')[0]
                        newf.write(non_comment_line + '\n')
                elif line == '\n':
                    pass
                else:
                    newf.write(line)
    if overwrite:
        os.remove(inp_path)
        os.rename(tmpfilepath, inp_path)


def line_by_line(path1, path2, outfile):
    """
    Given paths to two INP files, write a text file showing where differences
    occur, in line-by-line fashion. If the order of elements does not match,
    that is recorded as a difference.

    Spaces are ignored, so lines whose non-whitespace content is identical are
    considered equal.
    """
    # outfile = r"P:\06_Tools\v_control\Testing\cleaned\linebyline.txt"
    with open(outfile, 'w') as diff_file:
        with open(path1) as f1:
            with open(path2) as f2:
                # next(f, '') returns '' at end of file so the loop ends cleanly
                # instead of raising StopIteration
                line1 = next(f1, '')
                line2 = next(f2, '')
                while line1 and line2:
                    # strip all spaces so only actual content is compared
                    if line1.replace(" ", "") != line2.replace(" ", ""):
                        diff_file.write(line1)
                    line1 = next(f1, '')
                    line2 = next(f2, '')
| 33.327586 | 78 | 0.53492 | 236 | 1,933 | 4.288136 | 0.474576 | 0.041502 | 0.027668 | 0.031621 | 0.043478 | 0.043478 | 0 | 0 | 0 | 0 | 0 | 0.019884 | 0.375582 | 1,933 | 57 | 79 | 33.912281 | 0.818558 | 0.264873 | 0 | 0.258065 | 0 | 0 | 0.013728 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.064516 | false | 0.064516 | 0.032258 | 0 | 0.096774 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
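A hedged usage sketch for the two helpers above (the .inp paths are placeholders):

line_by_line("model_a.inp", "model_b.inp", "inp_diffs.txt")  # writes each differing line of model_a to inp_diffs.txt
remove_comments_and_crlf("model_a.inp")                      # writes model_a_mod.inp next to the original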
500cdcbd8b9588b7badc6c30cf7210ed1d19b9b3 | 639 | py | Python | webapp/app/encryptioncontext/migrations/0001_initial.py | aws-samples/aws-secrets-manager-credential-rotation-without-container-restart | 11ad22e8f1d55bf48af219fecdd4ba208c88dff4 | [
"MIT-0"
] | 3 | 2021-08-10T21:05:32.000Z | 2021-11-08T10:25:57.000Z | webapp/app/encryptioncontext/migrations/0001_initial.py | aws-samples/aws-secrets-manager-credential-rotation-without-container-restart | 11ad22e8f1d55bf48af219fecdd4ba208c88dff4 | [
"MIT-0"
] | null | null | null | webapp/app/encryptioncontext/migrations/0001_initial.py | aws-samples/aws-secrets-manager-credential-rotation-without-container-restart | 11ad22e8f1d55bf48af219fecdd4ba208c88dff4 | [
"MIT-0"
] | 1 | 2021-08-10T21:05:33.000Z | 2021-08-10T21:05:33.000Z | # Generated by Django 3.0.1 on 2020-01-02 22:21
from django.db import migrations, models
class Migration(migrations.Migration):
    initial = True

    dependencies = [
    ]

    operations = [
        migrations.CreateModel(
            name='CustomerProfile',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('account_number', models.CharField(max_length=8)),
                ('userid', models.CharField(max_length=6)),
                ('account_encrypted', models.BinaryField(max_length=4096)),
            ],
        ),
    ]
| 26.625 | 114 | 0.588419 | 65 | 639 | 5.661538 | 0.723077 | 0.07337 | 0.097826 | 0.130435 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.046154 | 0.28795 | 639 | 23 | 115 | 27.782609 | 0.762637 | 0.070423 | 0 | 0 | 1 | 0 | 0.094595 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.0625 | 0 | 0.3125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
500ef1d473debd5ed56d175f4b097ca7c05550eb | 6,390 | py | Python | person/person_processor.py | RuizheYang/deeplearning-proj | f2b0a11cb622a43f55038da577661ba9656e0573 | [
"Apache-2.0"
] | null | null | null | person/person_processor.py | RuizheYang/deeplearning-proj | f2b0a11cb622a43f55038da577661ba9656e0573 | [
"Apache-2.0"
] | null | null | null | person/person_processor.py | RuizheYang/deeplearning-proj | f2b0a11cb622a43f55038da577661ba9656e0573 | [
"Apache-2.0"
] | null | null | null | """
Model Description
@author: Xiayu Li
@contact: xiayu_li@shannonai.com
@version: 0.1
@license: Apache Licence
@file: person_processor.py
@time: 2019/11/14 3:08 PM
"""
import json
import os
from typing import List, Tuple, Dict
from allennlp.predictors.predictor import Predictor
from allennlp.modules.token_embedders.bert_token_embedder import PretrainedBertModel
from extractor.common.base_processor import BaseProcessor, EntityProcessor
from extractor.common.collector_input_output_record import PersonInputRecord
from extractor.person.ner_predictor import PersonNERPredictor
from extractor.person.identity_predictor import PersonIdentityPredcitor
from extractor.person.person_collector import PersonCollector
from extractor.utils.text_process import find_sentence
from extractor.common import CONFIG_PATH
class PersonProcessor(EntityProcessor):
def __init__(self, config_path, cuda_device=0, batch_size=64):
super().__init__(config_path, cuda_device=cuda_device, batch_size=batch_size)
self.ner_engine = PersonNERPredictor(model_path=getattr(self.config.model_path, "person_ner"),
cuda_device=cuda_device,
batch_size=batch_size,
bert_path=self.config.chinese_bert_path)
PretrainedBertModel._cache.clear()
self.identify_engine = PersonIdentityPredcitor(model_path=getattr(self.config.model_path, "person_classifier"),
cuda_device=cuda_device,
batch_size=batch_size)
self.collector = PersonCollector(config_path)
def process_batch_title_content(self, title_content_aritcle_type: List[Tuple[str, str]], meta_data: List[str]=None):
merged_results = []
for i, title_content_single in enumerate(title_content_aritcle_type):
title, content = title_content_single
article_type = meta_data[i]
title_result, content_result = self.process_title_content(title, content, article_type)
title_content_results = []
for result in title_result:
title_content_results.append(PersonInputRecord(PER_NAME=result['name'],
HINT_SENT=result['hint_sent'],
PER_ATTRIBUTE=result['attribute'],
SOURCE_FIELD=result['source_field'],
ARTICLE_TYPE=result['article_type'],
TITLE=title,
CONFIDENCE=result['confidence']))
for result in content_result:
title_content_results.append(PersonInputRecord(PER_NAME=result['name'],
HINT_SENT=result['hint_sent'],
PER_ATTRIBUTE=result['attribute'],
SOURCE_FIELD=result['source_field'],
ARTICLE_TYPE=result['article_type'],
TITLE=title,
CONFIDENCE=result['confidence']))
title_content_results = self.collector.result_collect(title_content_results)
merged_results.append(title_content_results)
return merged_results
if __name__ == "__main__":
person_processor = PersonProcessor(CONFIG_PATH, cuda_device=1, batch_size=64)
ans = person_processor.process_content(content='习近平 在 参加 党 的 十九大 贵州省 代表团 讨论 时 强调 万众一心 开拓进取 把 新时代 中国 特色 社会主义 推向 前进 | 习近平在参加党的十九大贵州省代表团讨论时强调万众一心开拓进取把新时代中国特色社会主义推向前进10月19日,习近平同志参加党的十九大贵州省代表团讨论。新华社记者李涛摄习近平同志19日上午在参加党的十九大贵州省代表团讨论时强调,党的十九大报告进一步指明了党和国家事业的前进方向,是我们党团结带领全国各族人民在新时代坚持和发展中国特色社会主义的政治宣言和行动纲领。要深刻学习领会中国特色社会主义进入新时代的新论断,深刻学习领会我国社会主要矛盾发生变化的新特点,深刻学习领会分两步走全面建设社会主义现代化国家的新目标,深刻学习领会党的建设的新要求,激励全党全国各族人民万众一心,开拓进取,把新时代中国特色社会主义推向前进。贵州省代表团讨论气氛热烈。孙志刚、谌贻琴、余留芬、潘克刚、周建琨、钟晶、杨波、张蜀新、黄俊琼等9位代表分别结合实际,对报告发表了意见,畅谈了认识体会。大家认为,党的十九大报告是一个实事求是、与时俱进,凝心聚力、催人奋进的报告,是一个动员和激励全党为决胜全面建成小康社会,夺取新时代中国特色社会主义伟大胜利,实现中华民族伟大复兴的中国梦不懈奋斗的报告,一致表示拥护这个报告。习近平边听边记,同代表们深入讨论。六盘水市盘州市淤泥乡岩博村党委书记余留芬发言时说,广大农民对党的十九大报告提出土地承包到期后再延长30年的政策十分满意,习近平听了十分高兴,说这是要给广大农民吃个“定心丸”。遵义市播州区枫香镇花茂村党总支书记潘克刚讲到乡村农家乐旅游成为乡亲致富新路,习近平说既要鼓励发展乡村农家乐,也要对乡村旅游作分析和预测,提前制定措施,确保乡村旅游可持续发展。毕节市委书记周建琨讲到把支部建在生产小组上、发展脱贫攻坚讲习所,习近平强调,新时代的农民讲习所是一个创新,党的根基在基层,一定要抓好基层党建,在农村始终坚持党的领导。黔西南州贞丰县龙场镇龙河村卫生室医生钟晶讲到农村医疗保障问题,习近平详细询问现在农民一年交多少医疗保险费、贫困乡村老百姓生产生活条件有没有改善。贵州六盘水市钟山区大湾镇海嘎村党支部第一书记杨波谈了自己连续8年坚持当驻村第一书记、带领乡亲脱贫致富的体会,习近平表示,对在脱贫攻坚一线的基层干部要关心爱护,各方面素质好、条件具备的要提拔使用,同时要鼓励年轻干部到脱贫攻坚一线去历练。习近平还对黔东南州镇远县江古镇中心小学教师黄俊琼说,老少边穷地区的教育培训工作要加大力度,让更多乡村和基层教师受到专业培训。在认真听取代表发言后,习近平表示,很高兴作为贵州省代表团的代表参加讨论。习近平向在座各位代表和贵州全省各族干部群众致以诚挚的问候。习近平指出,5年来,贵州认真贯彻落实党中央决策部署,各方面工作不断有新进展。综合实力显著提升,脱贫攻坚成效显著,生态环境持续改善,改革开放取得重大进展,人民群众获得感不断增强,政治生态持续向好。贵州取得的成绩,是党的十八大以来党和国家事业大踏步前进的一个缩影。这从一个角度说明了党的十八大以来党中央确定的大政方针和工作部署是完全正确的。习近平希望贵州的同志全面贯彻落实党的十九大精神,大力培育和弘扬团结奋进、拼搏创新、苦干实干、后发赶超的精神,守好发展和生态两条底线,创新发展思路,发挥后发优势,决战脱贫攻坚,决胜同步小康,续写新时代贵州发展新篇章,开创百姓富、生态美的多彩贵州新未来。习近平指出,中国特色社会主义进入了新时代,这是我国发展新的历史方位。作出这个重大政治判断,是一项关系全局的战略考量,我们必须按照新时代的要求,完善发展战略和各项政策,推进和落实各项工作。我国社会主要矛盾的变化是关系全局的历史性变化,对党和国家工作提出了许多新要求,我们要深入贯彻新发展理念,着力解决好发展不平衡不充分问题,更好满足人民多方面日益增长的需要,更好推动人的全面发展、全体人民共同富裕。我们要紧密结合党的十九大对我国未来发展作出的战略安排,推进党和国家各项工作,特别是要保持各项战略、工作、政策、措施的连续性和前瞻性,一步接一步,连续不断朝着我们确定的目标前进。习近平强调,办好中国的事情,关键在党。全面从严治党不仅是党长期执政的根本要求,也是实现中华民族伟大复兴的根本保证。我们党要团结带领人民进行伟大斗争、推进伟大事业、实现伟大梦想,必须毫不动摇把党建设得更加坚强有力。全面从严治党永远在路上。在全面从严治党这个问题上,我们不能有差不多了,该松口气、歇歇脚的想法,不能有打好一仗就一劳永逸的想法,不能有初见成效就见好就收的想法。必须持之以恒、善作善成,把管党治党的螺丝拧得更紧,把全面从严治党的思路举措搞得更加科学、更加严密、更加有效,推动全面从严治党向纵深发展。各级党组织和全体党员、各级领导干部必须坚决维护党中央权威,坚决服从党中央集中统一领导,把“四个意识”落实在岗位上、落实在行动上,不折不扣执行党中央决策部署,始终在思想上政治上行动上同党中央保持高度一致。习近平指出,大会之后,要认真组织好党的十九大精神宣传教育工作和学习培训工作,注重宣传各地区各部门学习贯彻的具体举措和实际行动,注重反映基层干部群众学习贯彻的典型事迹和良好风貌。要充分利用各种宣传形式和手段,采取人民群众喜闻乐见的形式,推动党的十九大精神进企业、进农村、进机关、进校园、进社区、进军营,让干部鼓足干劲。要组织好集中宣讲活动,把党的十九大精神讲清楚、讲明白,让老百姓听得懂、能领会、可落实。',
article_type='新闻报道_讲话稿')
print(ans)
| 86.351351 | 2,387 | 0.684507 | 556 | 6,390 | 7.645683 | 0.539568 | 0.03952 | 0.026817 | 0.014114 | 0.144907 | 0.144907 | 0.144907 | 0.144907 | 0.116678 | 0.096918 | 0 | 0.006234 | 0.246948 | 6,390 | 73 | 2,388 | 87.534247 | 0.877182 | 0.025196 | 0 | 0.290909 | 0 | 0.018182 | 0.399936 | 0.356374 | 0 | 0 | 0 | 0 | 0 | 1 | 0.036364 | false | 0 | 0.218182 | 0 | 0.290909 | 0.018182 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
500f0349827ded93b1215069df6a905dce9513ac | 1,130 | py | Python | application/__init__.py | ppawlo97/si-summer-2020 | ddbb4e29ba9da9af9aaf658df07f891e36737d10 | [
"MIT"
] | null | null | null | application/__init__.py | ppawlo97/si-summer-2020 | ddbb4e29ba9da9af9aaf658df07f891e36737d10 | [
"MIT"
] | 3 | 2021-05-21T16:19:13.000Z | 2022-02-10T00:50:32.000Z | application/__init__.py | ppawlo97/si-summer-2020 | ddbb4e29ba9da9af9aaf658df07f891e36737d10 | [
"MIT"
] | null | null | null | import logging
logging.basicConfig(level=logging.INFO)
from flask import Flask
from application.config import Config
app = Flask(__name__)
app.config.from_object(Config)
from application.models.classifiers.CNNClassifier import CNNClassifier
from application.models.classifiers.MLPClassifier import MLPClassifier
from application.models.classifiers.NaiveBayesClassifier import NaiveBayesClassifier
from application.models.classifiers.SVMClassifier import SVMClassifier
from application.models.detectors.CasClasDetector import CasClasDetector
from application.models.detectors.MTCNNDetector import MTCNNDetector
from application.utils import get_urls_list
logging.info("Loading models...")
MODELS = {"mtcnn": MTCNNDetector(),
"casclas": CasClasDetector(app.config["PRETRAINED_CASCLAS"]),
"mlp": MLPClassifier(app.config["MLP_WEIGHTS"]),
"svm": SVMClassifier(app.config["SVM"]),
"cnn": CNNClassifier(app.config["CNN_WEIGHTS"]),
"nb": NaiveBayesClassifier(app.config["CATEGORICAL_NB"])}
IMG_URLS = get_urls_list(app.config["OFFLINE_IMG_URLS"])
from application import routes
| 35.3125 | 84 | 0.79115 | 123 | 1,130 | 7.138211 | 0.308943 | 0.153759 | 0.143508 | 0.145786 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.110619 | 1,130 | 31 | 85 | 36.451613 | 0.873632 | 0 | 0 | 0 | 0 | 0 | 0.1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
501249edf828430cf33dfa53f6c8ad7a2568f14d | 500 | py | Python | valtypes/__init__.py | vanburgerberg/constypes | e1a527d21a30782e1e5bfc1005f18791e29fdeb2 | [
"MIT"
] | 2 | 2021-08-30T20:08:49.000Z | 2021-11-28T13:16:54.000Z | valtypes/__init__.py | vanburgerberg/valtypes | e1a527d21a30782e1e5bfc1005f18791e29fdeb2 | [
"MIT"
] | null | null | null | valtypes/__init__.py | vanburgerberg/valtypes | e1a527d21a30782e1e5bfc1005f18791e29fdeb2 | [
"MIT"
] | null | null | null | from .validator import (
And,
Attr,
Chain,
Const,
Contains,
ExcMax,
ExcMin,
Float,
Macro,
Max,
Min,
MultipleOf,
Not,
Or,
Pattern,
Proto,
Type,
Validator,
Xor,
)
__all__ = [
"And",
"Attr",
"Chain",
"Const",
"Validator",
"Contains",
"ExcMax",
"ExcMin",
"Float",
"Macro",
"Max",
"Min",
"MultipleOf",
"Not",
"Or",
"Pattern",
"Proto",
"Type",
"Xor",
]
| 11.363636 | 24 | 0.436 | 42 | 500 | 5.095238 | 0.52381 | 0.065421 | 0.11215 | 0.158879 | 0.626168 | 0.626168 | 0.626168 | 0.626168 | 0.626168 | 0.626168 | 0 | 0 | 0.4 | 500 | 43 | 25 | 11.627907 | 0.713333 | 0 | 0 | 0 | 0 | 0 | 0.192 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.02381 | 0 | 0.02381 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
501312c476ded79ec6cb9b0965b516133cb44842 | 430 | bzl | Python | deps/deno/repository.bzl | y0psolo/YAD | 0f1f9c5140687345dee591667793d6f8ed6e29e5 | [
"Apache-2.0"
] | 1 | 2021-11-05T09:13:57.000Z | 2021-11-05T09:13:57.000Z | deps/deno/repository.bzl | y0psolo/YAD | 0f1f9c5140687345dee591667793d6f8ed6e29e5 | [
"Apache-2.0"
] | 9 | 2021-12-02T13:25:52.000Z | 2022-01-26T14:24:05.000Z | deps/deno/repository.bzl | y0psolo/YAD | 0f1f9c5140687345dee591667793d6f8ed6e29e5 | [
"Apache-2.0"
] | null | null | null | load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
def deno_repository():
# Get Deno archive
http_archive(
name = "deno-amd64",
build_file = "//ext/deno:BUILD",
sha256 = "7b883e3c638d21dd1875f0108819f2f13647b866ff24965135e679c743013f46",
type = "zip",
urls = ["https://github.com/denoland/deno/releases/download/v1.17.3/deno-x86_64-unknown-linux-gnu.zip"],
)
| 35.833333 | 112 | 0.669767 | 49 | 430 | 5.734694 | 0.734694 | 0.078292 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.176136 | 0.181395 | 430 | 11 | 113 | 39.090909 | 0.622159 | 0.037209 | 0 | 0 | 0 | 0.111111 | 0.584951 | 0.262136 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | true | 0 | 0 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
501610d9ff0b5e179cb5002235857a130ed0cb4e | 8,184 | py | Python | src/oci/ai_vision/models/detected_language.py | LaudateCorpus1/oci-python-sdk | b0d3ce629d5113df4d8b83b7a6502b2c5bfa3015 | [
"Apache-2.0",
"BSD-3-Clause"
] | null | null | null | src/oci/ai_vision/models/detected_language.py | LaudateCorpus1/oci-python-sdk | b0d3ce629d5113df4d8b83b7a6502b2c5bfa3015 | [
"Apache-2.0",
"BSD-3-Clause"
] | null | null | null | src/oci/ai_vision/models/detected_language.py | LaudateCorpus1/oci-python-sdk | b0d3ce629d5113df4d8b83b7a6502b2c5bfa3015 | [
"Apache-2.0",
"BSD-3-Clause"
] | null | null | null | # coding: utf-8
# Copyright (c) 2016, 2022, Oracle and/or its affiliates. All rights reserved.
# This software is dual-licensed to you under the Universal Permissive License (UPL) 1.0 as shown at https://oss.oracle.com/licenses/upl or Apache License 2.0 as shown at http://www.apache.org/licenses/LICENSE-2.0. You may choose either license.
from oci.util import formatted_flat_dict, NONE_SENTINEL, value_allowed_none_or_none_sentinel # noqa: F401
from oci.decorators import init_model_state_from_kwargs
@init_model_state_from_kwargs
class DetectedLanguage(object):
"""
Language detected in a document.
"""
#: A constant which can be used with the language_code property of a DetectedLanguage.
#: This constant has a value of "ENG"
LANGUAGE_CODE_ENG = "ENG"
#: A constant which can be used with the language_code property of a DetectedLanguage.
#: This constant has a value of "CES"
LANGUAGE_CODE_CES = "CES"
#: A constant which can be used with the language_code property of a DetectedLanguage.
#: This constant has a value of "DAN"
LANGUAGE_CODE_DAN = "DAN"
#: A constant which can be used with the language_code property of a DetectedLanguage.
#: This constant has a value of "NLD"
LANGUAGE_CODE_NLD = "NLD"
#: A constant which can be used with the language_code property of a DetectedLanguage.
#: This constant has a value of "FIN"
LANGUAGE_CODE_FIN = "FIN"
#: A constant which can be used with the language_code property of a DetectedLanguage.
#: This constant has a value of "FRA"
LANGUAGE_CODE_FRA = "FRA"
#: A constant which can be used with the language_code property of a DetectedLanguage.
#: This constant has a value of "DEU"
LANGUAGE_CODE_DEU = "DEU"
#: A constant which can be used with the language_code property of a DetectedLanguage.
#: This constant has a value of "ELL"
LANGUAGE_CODE_ELL = "ELL"
#: A constant which can be used with the language_code property of a DetectedLanguage.
#: This constant has a value of "HUN"
LANGUAGE_CODE_HUN = "HUN"
#: A constant which can be used with the language_code property of a DetectedLanguage.
#: This constant has a value of "ITA"
LANGUAGE_CODE_ITA = "ITA"
#: A constant which can be used with the language_code property of a DetectedLanguage.
#: This constant has a value of "NOR"
LANGUAGE_CODE_NOR = "NOR"
#: A constant which can be used with the language_code property of a DetectedLanguage.
#: This constant has a value of "POL"
LANGUAGE_CODE_POL = "POL"
#: A constant which can be used with the language_code property of a DetectedLanguage.
#: This constant has a value of "POR"
LANGUAGE_CODE_POR = "POR"
#: A constant which can be used with the language_code property of a DetectedLanguage.
#: This constant has a value of "RON"
LANGUAGE_CODE_RON = "RON"
#: A constant which can be used with the language_code property of a DetectedLanguage.
#: This constant has a value of "RUS"
LANGUAGE_CODE_RUS = "RUS"
#: A constant which can be used with the language_code property of a DetectedLanguage.
#: This constant has a value of "SLK"
LANGUAGE_CODE_SLK = "SLK"
#: A constant which can be used with the language_code property of a DetectedLanguage.
#: This constant has a value of "SPA"
LANGUAGE_CODE_SPA = "SPA"
#: A constant which can be used with the language_code property of a DetectedLanguage.
#: This constant has a value of "SWE"
LANGUAGE_CODE_SWE = "SWE"
#: A constant which can be used with the language_code property of a DetectedLanguage.
#: This constant has a value of "TUR"
LANGUAGE_CODE_TUR = "TUR"
#: A constant which can be used with the language_code property of a DetectedLanguage.
#: This constant has a value of "ARA"
LANGUAGE_CODE_ARA = "ARA"
#: A constant which can be used with the language_code property of a DetectedLanguage.
#: This constant has a value of "CHI_SIM"
LANGUAGE_CODE_CHI_SIM = "CHI_SIM"
#: A constant which can be used with the language_code property of a DetectedLanguage.
#: This constant has a value of "HIN"
LANGUAGE_CODE_HIN = "HIN"
#: A constant which can be used with the language_code property of a DetectedLanguage.
#: This constant has a value of "JPN"
LANGUAGE_CODE_JPN = "JPN"
#: A constant which can be used with the language_code property of a DetectedLanguage.
#: This constant has a value of "KOR"
LANGUAGE_CODE_KOR = "KOR"
#: A constant which can be used with the language_code property of a DetectedLanguage.
#: This constant has a value of "OTHERS"
LANGUAGE_CODE_OTHERS = "OTHERS"
def __init__(self, **kwargs):
"""
Initializes a new DetectedLanguage object with values from keyword arguments.
The following keyword arguments are supported (corresponding to the getters/setters of this class):
:param language_code:
The value to assign to the language_code property of this DetectedLanguage.
Allowed values for this property are: "ENG", "CES", "DAN", "NLD", "FIN", "FRA", "DEU", "ELL", "HUN", "ITA", "NOR", "POL", "POR", "RON", "RUS", "SLK", "SPA", "SWE", "TUR", "ARA", "CHI_SIM", "HIN", "JPN", "KOR", "OTHERS", 'UNKNOWN_ENUM_VALUE'.
Any unrecognized values returned by a service will be mapped to 'UNKNOWN_ENUM_VALUE'.
:type language_code: str
:param confidence:
The value to assign to the confidence property of this DetectedLanguage.
:type confidence: float
"""
self.swagger_types = {
'language_code': 'str',
'confidence': 'float'
}
self.attribute_map = {
'language_code': 'languageCode',
'confidence': 'confidence'
}
self._language_code = None
self._confidence = None
@property
def language_code(self):
"""
**[Required]** Gets the language_code of this DetectedLanguage.
Language of the document, abbreviated according to ISO 639-2.
Allowed values for this property are: "ENG", "CES", "DAN", "NLD", "FIN", "FRA", "DEU", "ELL", "HUN", "ITA", "NOR", "POL", "POR", "RON", "RUS", "SLK", "SPA", "SWE", "TUR", "ARA", "CHI_SIM", "HIN", "JPN", "KOR", "OTHERS", 'UNKNOWN_ENUM_VALUE'.
Any unrecognized values returned by a service will be mapped to 'UNKNOWN_ENUM_VALUE'.
:return: The language_code of this DetectedLanguage.
:rtype: str
"""
return self._language_code
@language_code.setter
def language_code(self, language_code):
"""
Sets the language_code of this DetectedLanguage.
Language of the document, abbreviated according to ISO 639-2.
:param language_code: The language_code of this DetectedLanguage.
:type: str
"""
allowed_values = ["ENG", "CES", "DAN", "NLD", "FIN", "FRA", "DEU", "ELL", "HUN", "ITA", "NOR", "POL", "POR", "RON", "RUS", "SLK", "SPA", "SWE", "TUR", "ARA", "CHI_SIM", "HIN", "JPN", "KOR", "OTHERS"]
if not value_allowed_none_or_none_sentinel(language_code, allowed_values):
language_code = 'UNKNOWN_ENUM_VALUE'
self._language_code = language_code
@property
def confidence(self):
"""
**[Required]** Gets the confidence of this DetectedLanguage.
Confidence score between 0 to 1.
:return: The confidence of this DetectedLanguage.
:rtype: float
"""
return self._confidence
@confidence.setter
def confidence(self, confidence):
"""
Sets the confidence of this DetectedLanguage.
Confidence score between 0 to 1.
:param confidence: The confidence of this DetectedLanguage.
:type: float
"""
self._confidence = confidence
def __repr__(self):
return formatted_flat_dict(self)
def __eq__(self, other):
if other is None:
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
return not self == other
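# --- Editor's usage sketch (not part of the OCI SDK source) ---
# DetectedLanguage is a plain kwargs-initialized model: the __init__ docstring
# lists language_code and confidence as the supported keyword arguments, and
# the @init_model_state_from_kwargs decorator presumably routes them through
# the property setters above. A minimal, hedged illustration:
#
#   detected = DetectedLanguage(language_code="ENG", confidence=0.97)
#   detected.language_code            # "ENG"
#   detected.language_code = "XYZ"    # unrecognized values become 'UNKNOWN_ENUM_VALUE'
#   print(detected)                   # formatted via formatted_flat_dict(self)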
| 38.971429 | 253 | 0.669599 | 1,123 | 8,184 | 4.730187 | 0.142476 | 0.158133 | 0.084714 | 0.112575 | 0.648155 | 0.611258 | 0.578125 | 0.578125 | 0.578125 | 0.578125 | 0 | 0.004835 | 0.241813 | 8,184 | 209 | 254 | 39.157895 | 0.851249 | 0.635508 | 0 | 0.031746 | 0 | 0 | 0.098699 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.126984 | false | 0 | 0.031746 | 0.031746 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
501c4eef87e41e664070910134642c391a29e880 | 832 | py | Python | uso_red/migrations/0001_initial.py | joelsegoviacrespo/control_aforo_migrado | be90d1d45a20f735e7ef20449c4ab91ca05b5d85 | [
"MIT"
] | null | null | null | uso_red/migrations/0001_initial.py | joelsegoviacrespo/control_aforo_migrado | be90d1d45a20f735e7ef20449c4ab91ca05b5d85 | [
"MIT"
] | null | null | null | uso_red/migrations/0001_initial.py | joelsegoviacrespo/control_aforo_migrado | be90d1d45a20f735e7ef20449c4ab91ca05b5d85 | [
"MIT"
] | null | null | null | # Generated by Django 2.2.13 on 2020-11-26 03:26
from django.db import migrations, models
import djongo.models.fields
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='UsoRed',
fields=[
('_id', djongo.models.fields.ObjectIdField(auto_created=True, primary_key=True, serialize=False)),
('fecha', models.DateTimeField(blank=True, null=True)),
('enviadosGB', models.FloatField(blank=True, default=0.0)),
('recibidosGB', models.FloatField(blank=True, default=0.0)),
('tipo_red', models.CharField(default='', max_length=255)),
],
options={
'db_table': 'uso_red',
},
),
]
| 28.689655 | 114 | 0.564904 | 84 | 832 | 5.511905 | 0.619048 | 0.058315 | 0.077754 | 0.107991 | 0.146868 | 0.146868 | 0.146868 | 0 | 0 | 0 | 0 | 0.039587 | 0.301683 | 832 | 28 | 115 | 29.714286 | 0.757315 | 0.055288 | 0 | 0 | 1 | 0 | 0.07398 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.095238 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
501dcb0c9d559197f9d0137f85f4646aa314aebc | 1,782 | py | Python | import_em.py | kaija/taiwan_stockloader | 637244c3b0bc96093cc5a7b3df093a829f9e3c2d | [
"MIT"
] | 2 | 2015-06-13T09:17:46.000Z | 2015-10-25T15:31:33.000Z | import_em.py | kaija/taiwan_stockloader | 637244c3b0bc96093cc5a7b3df093a829f9e3c2d | [
"MIT"
] | null | null | null | import_em.py | kaija/taiwan_stockloader | 637244c3b0bc96093cc5a7b3df093a829f9e3c2d | [
"MIT"
] | 3 | 2016-02-01T07:36:55.000Z | 2018-08-03T12:22:20.000Z | #!/usr/bin/python
import datetime
import httplib
import urllib
import redis
import json
from datetime import timedelta
#now = datetime.datetime.now();
#today = now.strftime('%Y-%m-%d')
#print today
rdb = redis.Redis('localhost')
def rv(value):
out = ""
for num in value.strip().split(","):
out+=num
return out
def isfloat(value):
try:
float(value)
return True
except ValueError:
return False
def convfloat(value):
try:
return float(value)
except ValueError:
return -1
def convint(value):
try:
return int(value)
except ValueError:
return 0
def dump(key, value):
print key
print json.dumps(value)
def save2redis(key, value):
old = rdb.get("TWE" + key)
if old is None:
val = []
val.append(value)
rdb.set("TWE"+key ,json.dumps(val))
else:
l = json.loads(old)
l.append(value)
rdb.set("TWE"+key ,json.dumps(l))
today = datetime.date.today()
one_day = timedelta(days=1);
start_day = datetime.date(2007, 7, 1);
#start_day = datetime.date(2015, 5, 14);
print "Import from " + start_day.strftime("%Y-%m-%d") + " to " + today.strftime("%Y-%m-%d")
dl_date = start_day
stocks = {}
dl_date = start_day
print "Start merge history"
while dl_date < today:
file_name = "emerging/" + dl_date.strftime("%Y%m%d") + ".csv"
f = open(file_name, 'r')
print "open " + file_name
lines = f.readlines()
for line in lines:
r = line.split('","')
if len(r) == 17:
head = r[0].split("\"")
sid = head[1].strip(" ")
obj = {"volume": convint(rv(r[8])), "open": convfloat(r[4]), "high": convfloat(r[5]), "low": convfloat(r[6]), "val": convfloat(r[2]), "date": dl_date.strftime("%Y-%m-%d"), "avg": convfloat(r[7]), "buyPrice": convfloat(r[11]), "salePrice": convfloat(r[12])}
#dump(sid, obj)
save2redis(sid, obj)
dl_date += one_day
| 20.25 | 259 | 0.64422 | 277 | 1,782 | 4.086643 | 0.364621 | 0.061837 | 0.04417 | 0.048587 | 0.123675 | 0.086572 | 0.056537 | 0.056537 | 0 | 0 | 0 | 0.02152 | 0.165544 | 1,782 | 87 | 260 | 20.482759 | 0.739744 | 0.079686 | 0 | 0.129032 | 0 | 0 | 0.09308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.112903 | null | null | 0.080645 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
501e238ecce4d14dde53565c91651d0fe9f9fbc1 | 5,453 | py | Python | LaserController/InterfaceGUI.py | ColdMatter/PhotonBEC | c6bcf9bdefd267c8adde0d299cf5920b010c5022 | [
"MIT"
] | null | null | null | LaserController/InterfaceGUI.py | ColdMatter/PhotonBEC | c6bcf9bdefd267c8adde0d299cf5920b010c5022 | [
"MIT"
] | null | null | null | LaserController/InterfaceGUI.py | ColdMatter/PhotonBEC | c6bcf9bdefd267c8adde0d299cf5920b010c5022 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'InterfaceGUI.ui'
#
# Created: Mon Mar 11 15:56:47 2013
# by: PyQt4 UI code generator 4.9.1
#
# WARNING! All changes made in this file will be lost!
from PyQt5 import QtCore, QtWidgets
try:
from PyQt5 import QtCore, QtGui
except:
from PySide import QtCore, QtGui
try:
_fromUtf8 = QtCore.QString.fromUtf8
except AttributeError:
_fromUtf8 = lambda s: s
class Ui_MainWindow(object):
def setupUi(self, MainWindow):
MainWindow.setObjectName(_fromUtf8("MainWindow"))
MainWindow.resize(240, 200)
self.centralwidget = QtWidgets.QWidget(MainWindow)
self.centralwidget.setObjectName(_fromUtf8("centralwidget"))
self.groupBox = QtWidgets.QGroupBox(self.centralwidget)
self.groupBox.setGeometry(QtCore.QRect(0, 0, 241, 381))
self.groupBox.setObjectName(_fromUtf8("groupBox"))
self.powerSlider = QtWidgets.QSlider(self.groupBox)
self.powerSlider.setGeometry(QtCore.QRect(0, 150, 141, 29))
self.powerSlider.setMaximum(2400)
self.powerSlider.setOrientation(QtCore.Qt.Horizontal)
self.powerSlider.setObjectName(_fromUtf8("powerSlider"))
self.enable_checkBox = QtWidgets.QCheckBox(self.groupBox)
self.enable_checkBox.setGeometry(QtCore.QRect(50, 0, 101, 22))
self.enable_checkBox.setObjectName(_fromUtf8("enable_checkBox"))
self.power_timer_checkBox = QtWidgets.QCheckBox(self.groupBox)
self.power_timer_checkBox.setGeometry(QtCore.QRect(155, 0, 101, 22))
self.power_timer_checkBox.setObjectName(_fromUtf8("timer_checkBox"))
self.label_2 = QtWidgets.QLabel(self.groupBox)
self.label_2.setGeometry(QtCore.QRect(0, 50, 101, 17))
self.label_2.setObjectName(_fromUtf8("label_2"))
self.label_3 = QtWidgets.QLabel(self.groupBox)
self.label_3.setGeometry(QtCore.QRect(189, 50, 51, 20))
self.label_3.setObjectName(_fromUtf8("label_3"))
self.label_4 = QtWidgets.QLabel(self.groupBox)
self.label_4.setGeometry(QtCore.QRect(190, 100, 31, 20))
self.label_4.setObjectName(_fromUtf8("label_4"))
self.label_6 = QtWidgets.QLabel(self.groupBox)
self.label_6.setGeometry(QtCore.QRect(0, 130, 101, 17))
self.label_6.setObjectName(_fromUtf8("label_6"))
self.currentLCD = QtWidgets.QLCDNumber(self.groupBox)
self.currentLCD.setGeometry(QtCore.QRect(0, 70, 181, 51))
self.currentLCD.setObjectName(_fromUtf8("currentLCD"))
self.setLCD = QtWidgets.QLCDNumber(self.groupBox)
self.setLCD.setGeometry(QtCore.QRect(190, 70, 51, 23))
self.setLCD.setNumDigits(4)
self.setLCD.setSegmentStyle(QtWidgets.QLCDNumber.Flat)
self.setLCD.setObjectName(_fromUtf8("setLCD"))
self.setText = QtWidgets.QLineEdit(self.groupBox)
self.setText.setGeometry(QtCore.QRect(160, 150, 51, 27))
self.setText.setObjectName(_fromUtf8("setText"))
self.setTextButton = QtWidgets.QPushButton(self.groupBox)
self.setTextButton.setGeometry(QtCore.QRect(210, 150, 31, 27))
self.setTextButton.setObjectName(_fromUtf8("setTextButton"))
self.getPowerPushButton = QtWidgets.QPushButton(self.groupBox)
self.getPowerPushButton.setGeometry(QtCore.QRect(100, 40, 61, 27))
self.getPowerPushButton.setObjectName(_fromUtf8("getPowerPushButton"))
MainWindow.setCentralWidget(self.centralwidget)
self.menubar = QtWidgets.QMenuBar(MainWindow)
self.menubar.setGeometry(QtCore.QRect(0, 0, 240, 23))
self.menubar.setObjectName(_fromUtf8("menubar"))
MainWindow.setMenuBar(self.menubar)
self.statusbar = QtWidgets.QStatusBar(MainWindow)
self.statusbar.setObjectName(_fromUtf8("statusbar"))
MainWindow.setStatusBar(self.statusbar)
screen = QtWidgets.QDesktopWidget().screenGeometry()
mysize = MainWindow.geometry()
MainWindow.move(screen.width() - mysize.width() - 15, 0)
self.retranslateUi(MainWindow)
QtCore.QMetaObject.connectSlotsByName(MainWindow)
def retranslateUi(self, MainWindow):
MainWindow.setWindowTitle(QtWidgets.QApplication.translate("MainWindow", "LaserController", None))
self.groupBox.setTitle(QtWidgets.QApplication.translate("MainWindow", "Power", None))
self.enable_checkBox.setText(QtWidgets.QApplication.translate("MainWindow", "Enable Output", None))
self.label_2.setText(QtWidgets.QApplication.translate("MainWindow", "Current Value", None))
self.label_3.setText(QtWidgets.QApplication.translate("MainWindow", "Set Value", None))
self.label_4.setText(QtWidgets.QApplication.translate("MainWindow", "mW", None))
self.label_6.setText(QtWidgets.QApplication.translate("MainWindow", "Controls", None))
self.setTextButton.setText(QtWidgets.QApplication.translate("MainWindow", "Set", None))
self.getPowerPushButton.setText(QtWidgets.QApplication.translate("MainWindow", "GetPower", None))
self.power_timer_checkBox.setText(QtWidgets.QApplication.translate("MainWindow", "Poll Power", None))
if __name__ == "__main__":
import sys
app = QtWidgets.QApplication(sys.argv)
MainWindow = QtWidgets.QMainWindow()
ui = Ui_MainWindow()
ui.setupUi(MainWindow)
MainWindow.show()
sys.exit(app.exec_())
| 47.417391 | 109 | 0.708601 | 591 | 5,453 | 6.42978 | 0.269036 | 0.093947 | 0.081053 | 0.105263 | 0.214737 | 0.115263 | 0 | 0 | 0 | 0 | 0 | 0.043411 | 0.172015 | 5,453 | 114 | 110 | 47.833333 | 0.798228 | 0.039611 | 0 | 0.022222 | 1 | 0 | 0.069434 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.022222 | false | 0 | 0.044444 | 0 | 0.077778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
501f4dae72eb2773eebea6e00006189d15c72360 | 302 | py | Python | src/sportsdata/nba/teams/washington_wizards.py | OrangeCardinal/sportsdata | e6e182e89c8f8a12ffe18b218a37b8bdb8971e03 | [
"Apache-2.0"
] | null | null | null | src/sportsdata/nba/teams/washington_wizards.py | OrangeCardinal/sportsdata | e6e182e89c8f8a12ffe18b218a37b8bdb8971e03 | [
"Apache-2.0"
] | null | null | null | src/sportsdata/nba/teams/washington_wizards.py | OrangeCardinal/sportsdata | e6e182e89c8f8a12ffe18b218a37b8bdb8971e03 | [
"Apache-2.0"
] | null | null | null | from sports.nba.nba_team import NBA_Team
class WashingtonWizards(NBA_Team):
"""
NBA's Washington Wizards Static Information
"""
full_name = "Washington Wizards"
name = "Wizards"
team_id = 1610612764
def __init__(self):
"""
"""
super().__init__()
| 17.764706 | 47 | 0.612583 | 32 | 302 | 5.375 | 0.625 | 0.122093 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.045872 | 0.278146 | 302 | 16 | 48 | 18.875 | 0.743119 | 0.142384 | 0 | 0 | 0 | 0 | 0.110132 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.142857 | 0 | 0.857143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
501ffd3a5b4084f880ca38c9bb54d4225d927e52 | 1,305 | py | Python | modules/text/sentiment_analysis/senta_cnn/net.py | Steffy-zxf/HubModule | 40b0563f86634714033ab7712a08a58eba81bad1 | [
"Apache-2.0"
] | null | null | null | modules/text/sentiment_analysis/senta_cnn/net.py | Steffy-zxf/HubModule | 40b0563f86634714033ab7712a08a58eba81bad1 | [
"Apache-2.0"
] | null | null | null | modules/text/sentiment_analysis/senta_cnn/net.py | Steffy-zxf/HubModule | 40b0563f86634714033ab7712a08a58eba81bad1 | [
"Apache-2.0"
] | null | null | null | # -*- coding:utf-8 -*-
import paddle.fluid as fluid
def cnn_net(data,
dict_dim,
emb_dim=128,
hid_dim=128,
hid_dim2=96,
class_dim=2,
win_size=3):
"""
Conv net
"""
# embedding layer
emb = fluid.layers.embedding(
input=data,
size=[dict_dim, emb_dim],
param_attr=fluid.ParamAttr(name="@HUB_senta_cnn@embedding_0.w_0"))
# convolution layer
conv_3 = fluid.nets.sequence_conv_pool(
input=emb,
num_filters=hid_dim,
filter_size=win_size,
act="tanh",
pool_type="max",
param_attr=fluid.ParamAttr(name="@HUB_senta_cnn@sequence_conv_0.w_0"),
bias_attr=fluid.ParamAttr(name="@HUB_senta_cnn@sequence_conv_0.b_0"))
# full connect layer
fc_1 = fluid.layers.fc(
input=[conv_3],
size=hid_dim2,
param_attr=fluid.ParamAttr(name="@HUB_senta_cnn@fc_0.w_0"),
bias_attr=fluid.ParamAttr(name="@HUB_senta_cnn@fc_0.b_0"))
# softmax layer
prediction = fluid.layers.fc(
input=[fc_1],
size=class_dim,
act="softmax",
param_attr=fluid.ParamAttr(name="@HUB_senta_cnn@fc_1.w_0"),
bias_attr=fluid.ParamAttr(name="@HUB_senta_cnn@fc_1.b_0"))
return prediction, fc_1
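# --- Editor's usage sketch (not part of the original module) ---
# cnn_net wires embedding -> sequence_conv_pool -> fc -> softmax over a word-id
# input. A hedged illustration under the old static fluid API (the input name
# "words" and the dict_dim value are made up for the example):
#
#   words = fluid.layers.data(name="words", shape=[1], dtype="int64", lod_level=1)
#   prediction, fc_feat = cnn_net(words, dict_dim=10000)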
| 28.369565 | 78 | 0.609195 | 186 | 1,305 | 3.951613 | 0.301075 | 0.085714 | 0.171429 | 0.209524 | 0.417687 | 0.417687 | 0.417687 | 0.417687 | 0.359184 | 0.246259 | 0 | 0.033333 | 0.264368 | 1,305 | 45 | 79 | 29 | 0.732292 | 0.07433 | 0 | 0 | 0 | 0 | 0.171717 | 0.159933 | 0 | 0 | 0 | 0 | 0 | 1 | 0.03125 | false | 0 | 0.03125 | 0 | 0.09375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5029b3ee53519f158d3ce03f346eb8d5c279dd5f | 1,596 | py | Python | scripts/cerealfiller_entity_and_test_pipeline.py | sedgewickmm18/functions | 69d04a67b122601c4f207ded8e872d31b2ddafc8 | [
"Apache-2.0"
] | null | null | null | scripts/cerealfiller_entity_and_test_pipeline.py | sedgewickmm18/functions | 69d04a67b122601c4f207ded8e872d31b2ddafc8 | [
"Apache-2.0"
] | null | null | null | scripts/cerealfiller_entity_and_test_pipeline.py | sedgewickmm18/functions | 69d04a67b122601c4f207ded8e872d31b2ddafc8 | [
"Apache-2.0"
] | null | null | null | import datetime as dt
import json
import os
import pandas as pd
from sqlalchemy import Column, Integer, String, Float, DateTime, Boolean, func
from iotfunctions.preprocessor import BaseTransformer
from iotfunctions.bif import IoTExpression
from iotfunctions.metadata import EntityType, make_sample_entity
from iotfunctions.db import Database
from iotfunctions.estimator import SimpleAnomaly
#replace with a credentials dictionary or provide a credentials file
with open('credentials.json', encoding='utf-8') as F:
credentials = json.loads(F.read())
#create a sample entity to work with
db_schema = None #set if you are not using the default
db = Database(credentials=credentials)
numeric_columns = ['fill_time','temp','humidity','wait_time','size_sd']
table_name = 'as_sample_cereal'
entity = make_sample_entity(db=db, schema = db_schema,
float_cols = numeric_columns,
name = table_name,
register = True)
entity.name
#examine the sample entity
df = db.read_table(entity.name,schema=db_schema)
df.head(1).transpose()
#configure an expression function
expression = '510 + 15*df["temp"] + 5*df["humidity"]'
mass_fn = IoTExpression(expression=expression, output_name='fill_mass')
df = entity.exec_pipeline(mass_fn)
df.head(1).transpose()
#build an anomaly model
features = ['temp', 'humidity', 'wait_time']
targets = ['fill_mass']
anomaly_fn = SimpleAnomaly(features=['temp','humidity','fill_time'],targets=['fill_mass'],threshold=0.01)
df = entity.exec_pipeline(mass_fn,anomaly_fn)
df.head(1).transpose()
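#editor's sketch (not in the original script): exec_pipeline accepts any number
#of functions, so a further derived column could be chained in the same way;
#the fill_rate expression below is hypothetical.
#rate_fn = IoTExpression(expression='df["fill_mass"]/df["fill_time"]', output_name='fill_rate')
#df = entity.exec_pipeline(mass_fn, anomaly_fn, rate_fn)
#df.head(1).transpose()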
| 36.272727 | 105 | 0.740602 | 217 | 1,596 | 5.304147 | 0.451613 | 0.069505 | 0.018245 | 0.041703 | 0.074718 | 0.045178 | 0 | 0 | 0 | 0 | 0 | 0.009615 | 0.152882 | 1,596 | 43 | 106 | 37.116279 | 0.841716 | 0.135965 | 0 | 0.09375 | 0 | 0 | 0.131924 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.3125 | 0 | 0.3125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
503e2e5fa5da5155bca173a661a3c68a6802e6ed | 2,917 | py | Python | src/pyuwds3/types/vector/scalar_stable.py | LAAS-HRI/uwds3 | 42390f62ed5701a32710341b01faa10efc448078 | [
"MIT"
] | 2 | 2020-08-19T06:15:14.000Z | 2021-05-23T09:55:18.000Z | src/pyuwds3/types/vector/scalar_stable.py | LAAS-HRI/uwds3 | 42390f62ed5701a32710341b01faa10efc448078 | [
"MIT"
] | 5 | 2021-01-06T09:00:35.000Z | 2021-01-20T13:22:19.000Z | src/pyuwds3/types/vector/scalar_stable.py | LAAS-HRI/uwds3 | 42390f62ed5701a32710341b01faa10efc448078 | [
"MIT"
] | 2 | 2020-11-18T17:34:43.000Z | 2021-05-23T16:14:17.000Z | import rospy
import numpy as np
import cv2
class ScalarStable(object):
"""Represents a stabilized scalar"""
def __init__(self,
x=.0,
vx=.0,
p_cov=.03, m_cov=.01,
time=None):
"""ScalarStabilized constructor"""
self.x = x
self.vx = vx
self.p_cov = p_cov
self.m_cov = m_cov
self.filter = cv2.KalmanFilter(2, 1)
self.filter.statePost = self.to_array()
self.filter.measurementMatrix = np.array([[1, 1]], np.float32)
self.__update_noise_cov(p_cov, m_cov)
if time is None:
self.last_update = rospy.Time().now()
else:
self.last_update = time
def from_array(self, array):
"""Updates the scalar stabilized state from array"""
assert array.shape == (2, 1)
self.x = array[0]
self.vx = array[1]
self.filter.statePre = self.filter.statePost
def to_array(self):
"""Returns the scalar stabilizer state array representation"""
return np.array([[self.x], [self.vx]], np.float32)
def position(self):
"""Returns the scalar's position"""
return self.x
def velocity(self):
"""Returns the scalar's velocity"""
return self.vx
def update(self, x, time=None, m_cov=None):
"""Updates/Filter the scalar"""
if m_cov is not None:
self.__update_noise_cov(self.p_cov, m_cov)
self.__update_time(time=time)
self.filter.predict()
measurement = np.array([[np.float32(x)]])
assert measurement.shape == (1, 1)
self.filter.correct(measurement)
self.from_array(self.filter.statePost)
def predict(self, time=None):
"""Predicts the scalar state"""
self.__update_time(time=time)
self.filter.predict()
self.from_array(self.filter.statePost)
def __update_noise_cov(self, p_cov, m_cov):
"""Updates the process and measurement covariances"""
self.filter.processNoiseCov = np.array([[1, 0],
[0, 1]], np.float32) * p_cov
self.filter.measurementNoiseCov = np.array([[1]], np.float32) * m_cov
def __update_transition(self, dt):
self.filter.transitionMatrix = np.array([[1, dt],
[0, 1]], np.float32)
def __update_time(self, time=None):
if time is None:
now = rospy.Time().now()
else:
now = time
elapsed_time = now - self.last_update
self.last_update = now
self.__update_transition(elapsed_time.to_sec())
def __len__(self):
return 1
def __add__(self, scalar):
return self.x + scalar.x
def __sub__(self, scalar):
return self.x - scalar.x
def __str__(self):
return("{}".format(self.to_array()))
| 31.031915 | 77 | 0.567707 | 364 | 2,917 | 4.340659 | 0.214286 | 0.082278 | 0.017722 | 0.01519 | 0.192405 | 0.165823 | 0.165823 | 0.121519 | 0 | 0 | 0 | 0.019393 | 0.310593 | 2,917 | 93 | 78 | 31.365591 | 0.766285 | 0.11073 | 0 | 0.147059 | 0 | 0 | 0.000785 | 0 | 0 | 0 | 0 | 0 | 0.029412 | 1 | 0.205882 | false | 0 | 0.044118 | 0.058824 | 0.352941 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
504ae56ee7d72c3c28a102b8af0d776ce3b43014 | 59,138 | py | Python | aliyun/log/logclient.py | SeraphLiu/aliyun-log-sdk-python | 35f608bd6de9f5ed7a89c40288c550cfc3bea8ba | [
"BSD-3-Clause"
] | null | null | null | aliyun/log/logclient.py | SeraphLiu/aliyun-log-sdk-python | 35f608bd6de9f5ed7a89c40288c550cfc3bea8ba | [
"BSD-3-Clause"
] | null | null | null | aliyun/log/logclient.py | SeraphLiu/aliyun-log-sdk-python | 35f608bd6de9f5ed7a89c40288c550cfc3bea8ba | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python
# encoding: utf-8
# Copyright (C) Alibaba Cloud Computing
# All rights reserved.
import sys
import requests
try:
import json
except ImportError:
import simplejson as json
try :
import logservice_lz4
except ImportError:
pass
from datetime import datetime
from log_logs_pb2 import LogGroup
from aliyun.log.util import Util
from aliyun.log.logexception import LogException
from aliyun.log.getlogsresponse import GetLogsResponse
from aliyun.log.putlogsresponse import PutLogsResponse
from aliyun.log.listtopicsresponse import ListTopicsResponse
from aliyun.log.listlogstoresresponse import ListLogstoresResponse
from aliyun.log.gethistogramsresponse import GetHistogramsResponse
from aliyun.log.logstore_config_response import CreateLogStoreResponse
from aliyun.log.logstore_config_response import DeleteLogStoreResponse
from aliyun.log.logstore_config_response import GetLogStoreResponse
from aliyun.log.logstore_config_response import UpdateLogStoreResponse
from aliyun.log.logstore_config_response import ListLogStoreResponse
from aliyun.log.pulllog_response import PullLogResponse
from aliyun.log.cursor_response import GetCursorResponse
from aliyun.log.cursor_time_response import GetCursorTimeResponse
from aliyun.log.index_config_response import CreateIndexResponse
from aliyun.log.index_config_response import UpdateIndexResponse
from aliyun.log.index_config_response import DeleteIndexResponse
from aliyun.log.index_config_response import GetIndexResponse
from aliyun.log.logtail_config_response import CreateLogtailConfigResponse
from aliyun.log.logtail_config_response import UpdateLogtailConfigResponse
from aliyun.log.logtail_config_response import DeleteLogtailConfigResponse
from aliyun.log.logtail_config_response import GetLogtailConfigResponse
from aliyun.log.logtail_config_response import ListLogtailConfigResponse
from aliyun.log.machinegroup_response import CreateMachineGroupResponse
from aliyun.log.machinegroup_response import UpdateMachineGroupResponse
from aliyun.log.machinegroup_response import DeleteMachineGroupResponse
from aliyun.log.machinegroup_response import GetMachineGroupResponse
from aliyun.log.machinegroup_response import ListMachineGroupResponse
from aliyun.log.machinegroup_response import ListMachinesResponse
from aliyun.log.machinegroup_response import ApplyConfigToMachineGroupResponse
from aliyun.log.machinegroup_response import RemoveConfigToMachineGroupResponse
from aliyun.log.machinegroup_response import GetMachineGroupAppliedConfigResponse
from aliyun.log.machinegroup_response import GetConfigAppliedMachineGroupsResponse
from aliyun.log.acl_response import UpdateAclResponse
from aliyun.log.acl_response import ListAclResponse
from aliyun.log.shard_response import ListShardResponse
from aliyun.log.shard_response import DeleteShardResponse
from aliyun.log.shipper_response import CreateShipperResponse
from aliyun.log.shipper_response import UpdateShipperResponse
from aliyun.log.shipper_response import DeleteShipperResponse
from aliyun.log.shipper_response import GetShipperConfigResponse
from aliyun.log.shipper_response import ListShipperResponse
from aliyun.log.shipper_response import GetShipperTasksResponse
from aliyun.log.shipper_response import RetryShipperTasksResponse
from aliyun.log.project_response import CreateProjectResponse
from aliyun.log.project_response import DeleteProjectResponse
from aliyun.log.project_response import GetProjectResponse
CONNECTION_TIME_OUT = 20
API_VERSION = '0.6.0'
USER_AGENT = 'log-python-sdk-v-0.6.1'
"""
The LogClient class is the main class in the SDK. It can be used to communicate with
the log service server to put/get data.
:Author: log_dev
"""
class LogClient(object):
""" Construct the LogClient with endpoint, accessKeyId, accessKey.
:type endpoint: string
:param endpoint: log service host name, for example, http://ch-hangzhou.sls.aliyuncs.com
:type accessKeyId: string
:param accessKeyId: aliyun accessKeyId
:type accessKey: string
:param accessKey: aliyun accessKey
"""
__version__ = API_VERSION
Version = __version__
def __init__(self, endpoint, accessKeyId, accessKey,securityToken = None):
if isinstance(endpoint, unicode): # ensure is ascii str
endpoint = endpoint.encode('ascii')
if isinstance(accessKeyId, unicode):
accessKeyId = accessKeyId.encode('ascii')
if isinstance(accessKey, unicode):
accessKey = accessKey.encode('ascii')
self._isRowIp = True
self._port = 80
self._setendpoint(endpoint)
self._accessKeyId = accessKeyId
self._accessKey = accessKey
self._timeout = CONNECTION_TIME_OUT
self._source = Util.get_host_ip(self._logHost)
self._securityToken = securityToken;
def _setendpoint(self, endpoint):
pos = endpoint.find('://')
if pos != -1:
endpoint = endpoint[pos + 3:] # strip http://
pos = endpoint.find('/')
if pos != -1:
endpoint = endpoint[:pos]
pos = endpoint.find(':')
if pos != -1:
self._port = int(endpoint[pos + 1:])
endpoint = endpoint[:pos]
self._isRowIp = Util.is_row_ip(endpoint)
self._logHost = endpoint
self._endpoint = endpoint + ':' + str(self._port)
def _getGMT(self):
return datetime.utcnow().strftime('%a, %d %b %Y %H:%M:%S GMT')
def _loadJson(self, respText, requestId):
if not respText:
return None
try:
return json.loads(respText)
except:
raise LogException('BadResponse',
'Bad json format:\n%s' % respText,
requestId)
def _getHttpResponse(self, method, url, params, body, headers): # ensure method, url, body is str
try :
headers['User-Agent'] = USER_AGENT
r = None
if method.lower() == 'get' :
r = requests.get(url, params = params, data = body, headers = headers, timeout = self._timeout)
elif method.lower() == 'post':
r = requests.post(url, params = params, data = body, headers = headers, timeout = self._timeout)
elif method.lower() == 'put':
r = requests.put(url, params = params, data = body, headers = headers, timeout = self._timeout)
elif method.lower() == 'delete':
r = requests.delete(url, params = params, data = body, headers = headers, timeout = self._timeout)
return (r.status_code, r.content, r.headers)
except Exception, ex:
raise LogException('LogRequestError', str(ex))
def _sendRequest(self, method, url, params, body, headers, respons_body_type = 'json'):
(status, respText, respHeader) = self._getHttpResponse(method, url, params, body, headers)
header = {}
for key, value in respHeader.items():
header[key] = value
requestId = header['x-log-requestid'] if 'x-log-requestid' in header else ''
exJson = None
header = Util.convert_unicode_to_str(header)
if status == 200 :
if respons_body_type == 'json' :
exJson = self._loadJson(respText, requestId)
#exJson = Util.convert_unicode_to_str(exJson)
return (exJson, header)
else :
return (respText, header)
exJson = self._loadJson(respText.encode('utf-8'), requestId)
exJson = Util.convert_unicode_to_str(exJson)
if 'errorCode' in exJson and 'errorMessage' in exJson:
raise LogException(exJson['errorCode'], exJson['errorMessage'], requestId)
else:
exJson = '. Return json is '+str(exJson) if exJson else '.'
raise LogException('LogRequestError',
'Request is failed. Http code is '+str(status)+exJson, requestId)
def _send(self, method, project, body, resource, params, headers, respons_body_type ='json'):
if body:
headers['Content-Length'] = str(len(body))
headers['Content-MD5'] = Util.cal_md5(body)
else:
headers['Content-Length'] = '0'
headers["x-log-bodyrawsize"] = '0'
headers['x-log-apiversion'] = API_VERSION
headers['x-log-signaturemethod'] = 'hmac-sha1'
url = ''
if self._isRowIp:
url = "http://" + self._endpoint
else:
url = "http://" + project + "." + self._endpoint
headers['Host'] = project + "." + self._logHost
headers['Date'] = self._getGMT()
if self._securityToken != None and self._securityToken != "" :
headers["x-acs-security-token"] = self._securityToken
signature = Util.get_request_authorization(method, resource,
self._accessKey, params, headers)
headers['Authorization'] = "LOG " + self._accessKeyId + ':' + signature
url = url + resource
return self._sendRequest(method, url, params, body, headers, respons_body_type)
def get_unicode(self, key):
if isinstance(key, str):
key = unicode(key, 'utf-8')
return key
def put_logs(self, request):
""" Put logs to log service.
Unsuccessful operation will cause a LogException.
:type request: PutLogsRequest
:param request: the PutLogs request parameters class
:return: PutLogsResponse
:raise: LogException
"""
if len(request.get_log_items()) > 4096:
raise LogException('InvalidLogSize',
"logItems' length exceeds maximum limitation: 4096 lines.")
logGroup = LogGroup()
logGroup.Topic = request.get_topic()
if request.get_source():
logGroup.Source = request.get_source()
else:
if self._source=='127.0.0.1':
self._source = Util.get_host_ip(request.get_project() + '.' + self._logHost)
logGroup.Source = self._source
for logItem in request.get_log_items():
log = logGroup.Logs.add()
log.Time = logItem.get_time()
contents = logItem.get_contents()
for key, value in contents:
content = log.Contents.add()
content.Key = self.get_unicode(key)
content.Value = self.get_unicode(value)
body = logGroup.SerializeToString()
if len(body) > 3 * 1024 * 1024: # 3 MB
raise LogException('InvalidLogSize',
"logItems' size exceeds maximum limitation: 3 MB.")
headers = {}
headers['x-log-bodyrawsize'] = str(len(body))
headers['Content-Type'] = 'application/x-protobuf'
is_compress = request.get_compress()
compress_data = None
if is_compress :
headers['x-log-compresstype'] = 'lz4'
compress_data = logservice_lz4.compress(body)
params = {}
logstore = request.get_logstore()
project = request.get_project()
resource = '/logstores/' + logstore
if request.get_hash_key() is not None:
resource = '/logstores/' + logstore+"/shards/route"
params["key"] = request.get_hash_key()
else:
resource = '/logstores/' + logstore+"/shards/lb"
respHeaders = None
if is_compress :
respHeaders = self._send('POST', project, compress_data, resource, params, headers)
else :
respHeaders = self._send('POST', project, body, resource, params, headers)
return PutLogsResponse(respHeaders[1])
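# --- Editor's usage sketch (not part of the original source) ---
# put_logs() only reads the request through its getters (get_project,
# get_logstore, get_topic, get_source, get_log_items, get_compress,
# get_hash_key). A hedged sketch, assuming the PutLogsRequest and LogItem
# constructors in putlogsrequest.py / logitem.py accept roughly
# (project, logstore, topic, source, logitems) and (timestamp, contents);
# verify against those modules before relying on this:
#
#   item = LogItem(int(time.time()), [('level', 'info'), ('msg', 'hello')])  # needs `import time`
#   req = PutLogsRequest('your-project', 'your-logstore', 'your-topic', '', [item])
#   resp = client.put_logs(req)   # client: a LogClient instance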
def list_logstores(self, request):
""" List all logstores of requested project.
Unsuccessful operation will cause a LogException.
:type request: ListLogstoresRequest
:param request: the ListLogstores request parameters class.
:return: ListLogStoresResponse
:raise: LogException
"""
headers = {}
params = {}
resource = '/logstores'
project = request.get_project()
(resp, header) = self._send("GET", project, None, resource, params, headers)
return ListLogstoresResponse(resp, header)
def list_topics(self, request):
""" List all topics in a logstore.
Unsuccessful operation will cause a LogException.
:type request: ListTopicsRequest
:param request: the ListTopics request parameters class.
:return: ListTopicsResponse
:raise: LogException
"""
headers = {}
params = {}
if request.get_token()!=None:
params['token'] = request.get_token()
if request.get_line()!=None:
params['line'] = request.get_line()
params['type'] = 'topic'
logstore = request.get_logstore()
project = request.get_project()
resource = "/logstores/" + logstore
(resp, header) = self._send("GET", project, None, resource, params, headers)
return ListTopicsResponse(resp, header)
def get_histograms(self, request):
""" Get histograms of requested query from log service.
Unsuccessful operation will cause a LogException.
:type request: GetHistogramsRequest
:param request: the GetHistograms request parameters class.
:return: GetHistogramsResponse
:raise: LogException
"""
headers = {}
params = {}
if request.get_topic()!=None:
params['topic'] = request.get_topic()
if request.get_from()!=None:
params['from'] = request.get_from()
if request.get_to()!=None:
params['to'] = request.get_to()
if request.get_query()!=None:
params['query'] = request.get_query()
params['type'] = 'histogram'
logstore = request.get_logstore()
project = request.get_project()
resource = "/logstores/" + logstore
(resp, header) = self._send("GET", project, None, resource, params, headers)
return GetHistogramsResponse(resp, header)
def get_logs(self, request):
""" Get logs from log service.
Unsuccessful operation will cause a LogException.
:type request: GetLogsRequest
:param request: the GetLogs request parameters class.
:return: GetLogsResponse
:raise: LogException
"""
headers = {}
params = {}
if request.get_topic()!=None:
params['topic'] = request.get_topic()
if request.get_from()!=None:
params['from'] = request.get_from()
if request.get_to()!=None:
params['to'] = request.get_to()
if request.get_query()!=None:
params['query'] = request.get_query()
params['type'] = 'log'
if request.get_line()!=None:
params['line'] = request.get_line()
if request.get_offset()!=None:
params['offset'] = request.get_offset()
if request.get_reverse()!=None:
params['reverse'] = 'true' if request.get_reverse() else 'false'
logstore = request.get_logstore()
project = request.get_project()
resource = "/logstores/" + logstore
(resp, header) = self._send("GET", project, None, resource, params, headers)
return GetLogsResponse(resp, header)
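# --- Editor's usage sketch (not part of the original source) ---
# get_logs() reads the request through getters only (get_topic, get_from,
# get_to, get_query, get_line, get_offset, get_reverse). A hedged sketch,
# assuming GetLogsRequest in getlogsrequest.py takes roughly
# (project, logstore, fromTime, toTime, topic, query, line, offset, reverse);
# check that module for the exact signature:
#
#   req = GetLogsRequest('your-project', 'your-logstore',
#                        1441093445, 1441096445, '', 'error', 100, 0, False)
#   resp = client.get_logs(req)   # client: a LogClient instance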
def get_cursor(self, project_name, logstore_name, shard_id, start_time) :
""" Get cursor from log service for batch pull logs
Unsuccessful operation will cause a LogException.
:type project_name: string
:param project_name: the Project name
:type logstore_name: string
:param logstore_name: the logstore name
:type shard_id: int
:param shard_id: the shard id
:type start_time: int
:param start_time: the start time of cursor, e.g 1441093445
:return: GetCursorResponse
:raise: LogException
"""
headers = {}
headers['Content-Type'] = 'application/json'
params = {}
resource = "/logstores/" + logstore_name + "/shards/" + str(shard_id)
params['type'] = 'cursor'
params['from'] = str(start_time)
(resp, header) = self._send("GET", project_name, None, resource, params, headers)
return GetCursorResponse(resp, header)
def get_cursor_time(self, project_name, logstore_name, shard_id, cursor) :
""" Get cursor time from log service
Unsuccessful operation will cause a LogException.
:type project_name: string
:param project_name: the Project name
:type logstore_name: string
:param logstore_name: the logstore name
:type shard_id: int
:param shard_id: the shard id
:type cursor: string
:param cursor: the cursor to get its service receive time
:return: GetCursorTimeResponse
:raise: LogException
"""
headers = {}
headers['Content-Type'] = 'application/json'
params = {}
resource = "/logstores/" + logstore_name + "/shards/" + str(shard_id)
params['type'] = 'cursor_time'
params['cursor'] = cursor
(resp, header) = self._send("GET", project_name, None, resource, params, headers)
return GetCursorTimeResponse(resp, header)
def get_begin_cursor(self, project_name, logstore_name, shard_id) :
""" Get begin cursor from log service for batch pull logs
Unsuccessful operation will cause a LogException.
:type project_name: string
:param project_name: the Project name
:type logstore_name: string
:param logstore_name: the logstore name
:type shard_id: int
:param shard_id: the shard id
:return: GetLogsResponse
:raise: LogException
"""
return self.get_cursor(project_name, logstore_name, shard_id, "begin")
def get_end_cursor(self, project_name, logstore_name, shard_id) :
""" Get end cursor from log service for batch pull logs
Unsuccessful operation will cause a LogException.
:type project_name: string
:param project_name: the Project name
:type logstore_name: string
:param logstore_name: the logstore name
:type shard_id: int
:param shard_id: the shard id
:return: GetLogsResponse
:raise: LogException
"""
return self.get_cursor(project_name, logstore_name, shard_id, "end")
def pull_logs(self, project_name, logstore_name, shard_id, cursor, count = 1000, end_cursor = None, compress=False):
""" batch pull log data from log service
Unsuccessful operation will cause a LogException.
:type project_name: string
:param project_name: the Project name
:type logstore_name: string
:param logstore_name: the logstore name
:type shard_id: int
:param shard_id: the shard id
:type cursor: string
:param cursor: the start to cursor to get data
:type count: int
:param count: the required pull log package count, default 1000 packages
:type end_cursor : string
:param end_cursor: the end cursor position to get data
:type compress: boolean
:param compress: whether to use lz4 compression when transferring data
:return: PullLogResponse
:raise: LogException
"""
headers = {}
if compress :
headers['Accept-Encoding'] = 'lz4'
else :
headers['Accept-Encoding'] = ''
headers['Accept'] = 'application/x-protobuf'
params = {}
resource = "/logstores/" + logstore_name + "/shards/" + str(shard_id)
params['type'] = 'log'
params['cursor'] = cursor
params['count'] = str(count)
if end_cursor != None and len(end_cursor) > 0 :
params['end_cursor'] = end_cursor
(resp, header) = self._send("GET", project_name, None, resource, params, headers, "binary")
if compress :
raw_size = int(header['x-log-bodyrawsize'])
raw_data = logservice_lz4.uncompress(raw_size, resp)
return PullLogResponse(raw_data, header)
else :
return PullLogResponse(resp, header)
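# --- Editor's usage sketch (not part of the original source) ---
# Batch consumption combines the cursor APIs above with pull_logs(): fetch a
# start cursor for a shard, then pull packages forward from it. A hedged
# sketch; it assumes GetCursorResponse.get_cursor() and
# PullLogResponse.get_next_cursor() accessors exist (verify in
# cursor_response.py / pulllog_response.py):
#
#   cursor = client.get_begin_cursor('your-project', 'your-logstore', 0).get_cursor()
#   while True:
#       resp = client.pull_logs('your-project', 'your-logstore', 0, cursor, count=1000)
#       next_cursor = resp.get_next_cursor()
#       if next_cursor == cursor:   # no new data yet: stop, or sleep and retry
#           break
#       cursor = next_cursor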
def create_logstore(self, project_name, logstore_name, ttl, shard_count):
""" create log store
Unsuccessful operation will cause a LogException.
:type project_name: string
:param project_name: the Project name
:type logstore_name: string
:param logstore_name: the logstore name
:type ttl: int
:param ttl: the life cycle of log in the logstore in days
:type shard_count: int
:param shard_count: the shard count of the logstore to create
:return: CreateLogStoreResponse
:raise: LogException
"""
headers = {}
params = {}
headers["x-log-bodyrawsize"] = '0'
headers["Content-Type"] = "application/json"
resource = "/logstores"
body = {}
body["logstoreName"] = logstore_name.encode("utf-8");
body["ttl"] = (int)(ttl);
body["shardCount"] = (int)(shard_count);
body_str = json.dumps(body);
(resp, header) = self._send("POST", project_name, body_str, resource, params, headers)
return CreateLogStoreResponse(header)
def delete_logstore(self, project_name, logstore_name):
""" delete log store
Unsuccessful operation will cause a LogException.
:type project_name: string
:param project_name: the Project name
:type logstore_name: string
:param logstore_name: the logstore name
:return: DeleteLogStoreResponse
:raise: LogException
"""
headers = {}
params = {}
resource = "/logstores/" + logstore_name
(resp, header) = self._send("DELETE", project_name, None, resource, params, headers)
return DeleteLogStoreResponse(header)
def get_logstore(self, project_name, logstore_name):
""" get the logstore meta info
Unsuccessful operation will cause a LogException.
:type project_name: string
:param project_name: the Project name
:type logstore_name: string
:param logstore_name: the logstore name
:return: GetLogStoreResponse
:raise: LogException
"""
headers = {}
params = {}
resource = "/logstores/" + logstore_name
(resp, header) = self._send("GET", project_name, None, resource, params, headers)
return GetLogStoreResponse(resp, header)
def update_logstore(self, project_name, logstore_name, ttl, shard_count):
"""
update the logstore meta info
Unsuccessful operation will cause a LogException.
:type project_name: string
:param project_name: the Project name
:type logstore_name: string
:param logstore_name: the logstore name
:type ttl: int
:param ttl: the life cycle of log in the logstore in days
:type shard_count: int
:param shard_count: the shard count of the logstore to create
:return: UpdateLogStoreResponse
:raise: LogException
"""
headers = {}
headers["x-log-bodyrawsize"] = '0'
headers["Content-Type"] = "application/json"
params = {}
resource = "/logstores/" + logstore_name
body = {}
body["logstoreName"] = logstore_name
body["ttl"] = (int)(ttl);
body["shardCount"] = (int)(shard_count);
body_str = json.dumps(body);
(resp, header) = self._send("PUT", project_name, body_str, resource, params, headers)
return UpdateLogStoreResponse(header)
def list_logstore(self, project_name, logstore_name_pattern = None, offset = 0, size = 100) :
""" list the logstore in a project
Unsuccessful operation will cause a LogException.
:type project_name: string
:param project_name: the Project name
:type logstore_name_pattern: string
:param logstore_name_pattern: a sub name of the logstore; the server returns logstore names that contain this sub name
:type offset: int
:param offset: the offset of all the matched names
:type size: int
:param size: the max return names count
:return: ListLogStoreResponse
:raise: LogException
"""
headers = {}
params = {}
resource = "/logstores"
if logstore_name_pattern != None :
params['logstorename'] = logstore_name_pattern
params['offset'] = str(offset)
params['size'] = str(size)
(resp, header) = self._send("GET", project_name, None, resource, params, headers)
return ListLogStoreResponse(resp, header)
def list_shards(self, project_name, logstore_name) :
""" list the shard meta of a logstore
Unsuccessful operation will cause a LogException.
:type project_name: string
:param project_name: the Project name
:type logstore_name: string
:param logstore_name: the logstore name
:return: ListShardResponse
:raise: LogException
"""
headers = {}
params = {}
resource = "/logstores/" + logstore_name + "/shards"
(resp, header) = self._send("GET", project_name, None, resource, params, headers)
return ListShardResponse(resp, header)
def split_shard(self,project_name,logstore_name,shardId,split_hash):
""" split a readwrite shard into two shards
Unsuccessful operation will cause a LogException.
:type project_name: string
:param project_name: the Project name
:type logstore_name: string
:param logstore_name: the logstore name
:type shardId: int
:param shardId : the shard id
:type split_hash: string
:param split_hash: the internal hash between the shard begin and end hash
:return: ListShardResponse
:raise: LogException
"""
headers = {}
params = {"action":"split","key":split_hash}
resource = "/logstores/"+logstore_name+"/shards/"+str(shardId);
(resp,header) = self._send("POST",project_name,None,resource,params,headers);
return ListShardResponse(resp,header);
def merge_shard(self,project_name,logstore_name,shardId):
""" split two adjacent readwrite hards into one shards
Unsuccessful operation will cause a LogException.
:type project_name: string
:param project_name: the Project name
:type logstore_name: string
:param logstore_name: the logstore name
:type shardId: int
:param shardId : the shard id of the left shard, server will determine the right adjacent shardId
:return: ListShardResponse
:raise: LogException
"""
headers = {}
params = {"action":"merge"}
resource = "/logstores/"+logstore_name+"/shards/"+str(shardId);
(resp,header) = self._send("POST",project_name,None,resource,params,headers);
return ListShardResponse(resp,header);
def delete_shard(self,project_name,logstore_name,shardId):
""" delete a readonly shard
Unsuccessful operation will cause a LogException.
:type project_name: string
:param project_name: the Project name
:type logstore_name: string
:param logstore_name: the logstore name
:type shardId: int
:param shardId : the read only shard id
:return: ListShardResponse
:raise: LogException
"""
headers = {}
params = {}
resource = "/logstores/"+logstore_name+"/shards/"+str(shardId);
(resp,header) = self._send("DELETE",project_name,None,resource,params,headers);
return DeleteShardResponse(header);
def create_index(self, project_name, logstore_name, index_detail) :
""" create index for a logstore
Unsuccessful operation will cause a LogException.
:type project_name: string
:param project_name: the Project name
:type logstore_name: string
:param logstore_name: the logstore name
:type index_detail: index_config.IndexConfig
:param index_detail: the index config detail used to create index
:return: CreateIndexResponse
:raise: LogException
"""
headers = {}
params = {}
resource = "/logstores/" + logstore_name + "/index"
headers['Content-Type'] = 'application/json'
body = json.dumps(index_detail.to_json())
headers['x-log-bodyrawsize'] = str(len(body))
(resp, header) = self._send("POST", project_name, body, resource, params, headers)
return CreateIndexResponse(header)
def update_index(self, project_name, logstore_name, index_detail) :
""" update index for a logstore
Unsuccessful operation will cause a LogException.
:type project_name: string
:param project_name: the Project name
:type logstore_name: string
:param logstore_name: the logstore name
:type index_detail: index_config.IndexConfig
:param index_detail: the index config detail used to update index
:return: UpdateIndexResponse
:raise: LogException
"""
headers = {}
params = {}
resource = "/logstores/" + logstore_name + "/index"
headers['Content-Type'] = 'application/json'
body = json.dumps(index_detail.to_json())
headers['x-log-bodyrawsize'] = str(len(body))
(resp, header) = self._send("PUT", project_name, body, resource, params, headers)
return UpdateIndexResponse(header)
def delete_index(self, project_name, logstore_name) :
""" delete index of a logstore
Unsuccessful operation will cause a LogException.
:type project_name: string
:param project_name: the Project name
:type logstore_name: string
:param logstore_name: the logstore name
:return: DeleteIndexResponse
:raise: LogException
"""
headers = {}
params = {}
resource = "/logstores/" + logstore_name + "/index"
(resp, header) = self._send("DELETE", project_name, None, resource, params, headers)
return DeleteIndexResponse(header)
def get_index_config(self, project_name , logstore_name) :
""" get index config detail of a logstore
Unsuccessful operation will cause a LogException.
:type project_name: string
:param project_name: the Project name
:type logstore_name: string
:param logstore_name: the logstore name
:return: GetIndexResponse
:raise: LogException
"""
headers = {}
params = {}
resource = "/logstores/" + logstore_name + "/index"
(resp, header) = self._send("GET", project_name, None, resource, params, headers)
return GetIndexResponse(resp, header)
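    # Index workflow sketch ("client" is an instance of this class; names are illustrative,
    # index_detail is an index_config.IndexConfig):
    #   client.create_index("my-project", "my-logstore", index_detail)
    #   client.update_index("my-project", "my-logstore", index_detail)
    #   client.get_index_config("my-project", "my-logstore")   # -> GetIndexResponse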
def create_logtail_config(self, project_name, config_detail) :
""" create logtail config in a project
        Unsuccessful operation will cause a LogException.
        :type project_name: string
        :param project_name: the Project name
        :type config_detail: logtail_config_detail.CommonRegLogConfigDetail or logtail_config_detail.ApsaraLogConfigDetail
        :param config_detail: the logtail config detail info; CommonRegLogConfigDetail is used to create common regex logs, ApsaraLogConfigDetail is used to create apsara logs
:return: CreateLogtailConfigResponse
:raise: LogException
"""
headers = {}
params = {}
resource = "/configs"
headers['Content-Type'] = 'application/json'
body = json.dumps(config_detail.to_json())
headers['x-log-bodyrawsize'] = str(len(body))
(resp, headers) = self._send("POST", project_name, body, resource, params, headers)
return CreateLogtailConfigResponse(headers)
def update_logtail_config(self, project_name, config_detail) :
""" update logtail config in a project
        Unsuccessful operation will cause a LogException.
        :type project_name: string
        :param project_name: the Project name
        :type config_detail: logtail_config_detail.CommonRegLogConfigDetail or logtail_config_detail.ApsaraLogConfigDetail
        :param config_detail: the logtail config detail info; CommonRegLogConfigDetail is used to create common regex logs, ApsaraLogConfigDetail is used to create apsara logs
:return: UpdateLogtailConfigResponse
:raise: LogException
"""
headers = {}
params = {}
resource = "/configs/" + config_detail.config_name
headers['Content-Type'] = 'application/json'
body = json.dumps(config_detail.to_json())
headers['x-log-bodyrawsize'] = str(len(body))
(resp, headers) = self._send("PUT", project_name, body, resource, params, headers)
return UpdateLogtailConfigResponse(headers)
def delete_logtail_config(self, project_name, config_name):
""" delete logtail config in a project
        Unsuccessful operation will cause a LogException.
:type project_name: string
:param project_name: the Project name
:type config_name: string
:param config_name: the logtail config name
:return: DeleteLogtailConfigResponse
:raise: LogException
"""
headers = {}
params = {}
resource = "/configs/" + config_name
(resp, headers) = self._send("DELETE", project_name, None, resource, params, headers)
return DeleteLogtailConfigResponse(headers)
def get_logtail_config(self, project_name, config_name) :
""" get logtail config in a project
        Unsuccessful operation will cause a LogException.
:type project_name: string
:param project_name: the Project name
:type config_name: string
:param config_name: the logtail config name
:return: GetLogtailConfigResponse
:raise: LogException
"""
headers = {}
params = {}
resource = "/configs/" + config_name
(resp, headers) = self._send("GET", project_name, None, resource, params, headers)
return GetLogtailConfigResponse(resp, headers)
def list_logtail_config(self, project_name, offset = 0, size = 100) :
""" list logtail config name in a project
        Unsuccessful operation will cause a LogException.
:type project_name: string
:param project_name: the Project name
:type offset: int
:param offset: the offset of all config names
:type size: int
:param size: the max return names count
:return: ListLogtailConfigResponse
:raise: LogException
"""
headers = {}
params = {}
resource = "/configs"
params['offset'] = str(offset)
params['size'] = str(size)
(resp, header) = self._send("GET", project_name, None, resource, params, headers)
return ListLogtailConfigResponse(resp, header)
def create_machine_group(self, project_name, group_detail) :
""" create machine group in a project
        Unsuccessful operation will cause a LogException.
:type project_name: string
:param project_name: the Project name
:type group_detail: machine_group_detail.MachineGroupDetail
:param group_detail: the machine group detail config
:return: CreateMachineGroupResponse
:raise: LogException
"""
headers = {}
params = {}
resource = "/machinegroups"
headers['Content-Type'] = 'application/json'
body = json.dumps(group_detail.to_json())
headers['x-log-bodyrawsize'] = str(len(body))
(resp, headers) = self._send("POST", project_name, body, resource, params, headers)
return CreateMachineGroupResponse(headers)
def delete_machine_group(self, project_name, group_name):
""" delete machine group in a project
        Unsuccessful operation will cause a LogException.
:type project_name: string
:param project_name: the Project name
:type group_name: string
:param group_name: the group name
:return: DeleteMachineGroupResponse
:raise: LogException
"""
headers = {}
params = {}
resource = "/machinegroups/" + group_name
(resp, headers) = self._send("DELETE", project_name, None, resource, params, headers)
return DeleteMachineGroupResponse(headers)
def update_machine_group(self, project_name, group_detail) :
""" update machine group in a project
        Unsuccessful operation will cause a LogException.
:type project_name: string
:param project_name: the Project name
:type group_detail: machine_group_detail.MachineGroupDetail
:param group_detail: the machine group detail config
:return: UpdateMachineGroupResponse
:raise: LogException
"""
headers = {}
params = {}
resource = "/machinegroups/" + group_detail.group_name
headers['Content-Type'] = 'application/json'
body = json.dumps(group_detail.to_json())
headers['x-log-bodyrawsize'] = str(len(body))
(resp, headers) = self._send("PUT", project_name, body, resource, params, headers)
return UpdateMachineGroupResponse(headers)
def get_machine_group(self, project_name, group_name) :
""" get machine group in a project
        Unsuccessful operation will cause a LogException.
:type project_name: string
:param project_name: the Project name
:type group_name: string
:param group_name: the group name to get
:return: GetMachineGroupResponse
:raise: LogException
"""
headers = {}
params = {}
resource = "/machinegroups/" + group_name
(resp, headers) = self._send("GET", project_name, None, resource, params, headers)
return GetMachineGroupResponse(resp, headers)
def list_machine_group(self, project_name, offset = 0, size = 100) :
""" list machine group names in a project
        Unsuccessful operation will cause a LogException.
        :type project_name: string
        :param project_name: the Project name
        :type offset: int
        :param offset: the offset of all group names
:type size: int
:param size: the max return names count
:return: ListMachineGroupResponse
:raise: LogException
"""
headers = {}
params = {}
resource = "/machinegroups"
params['offset'] = str(offset)
params['size'] = str(size)
(resp, header) = self._send("GET", project_name, None, resource, params, headers)
return ListMachineGroupResponse(resp, header)
def list_machines(self, project_name, group_name, offset = 0, size = 100) :
""" list machines in a machine group
        Unsuccessful operation will cause a LogException.
        :type project_name: string
        :param project_name: the Project name
        :type group_name: string
        :param group_name: the group name to list
        :type offset: int
        :param offset: the offset of all group names
:type size: int
:param size: the max return names count
:return: ListMachinesResponse
:raise: LogException
"""
headers = {}
params = {}
resource = "/machinegroups/" + group_name + "/machines"
params['offset'] = str(offset)
params['size'] = str(size)
(resp, header) = self._send("GET", project_name, None, resource, params, headers)
return ListMachinesResponse(resp, header)
def apply_config_to_machine_group(self, project_name, config_name, group_name) :
""" apply a logtail config to a machine group
        Unsuccessful operation will cause a LogException.
:type project_name: string
:param project_name: the Project name
:type config_name: string
:param config_name: the logtail config name to apply
:type group_name: string
:param group_name: the machine group name
:return: ApplyConfigToMachineGroupResponse
:raise: LogException
"""
headers = {}
params = {}
resource = "/machinegroups/" + group_name + "/configs/" + config_name
(resp, header) = self._send("PUT", project_name, None, resource, params, headers)
return ApplyConfigToMachineGroupResponse(header)
def remove_config_to_machine_group(self, project_name, config_name, group_name) :
""" remove a logtail config to a machine group
Unsuccessful opertaion will cause an LogException.
:type project_name: string
:param project_name: the Project name
:type config_name: string
:param config_name: the logtail config name to apply
:type group_name: string
:param group_name: the machine group name
:return: RemoveConfigToMachineGroupResponse
:raise: LogException
"""
headers = {}
params = {}
resource = "/machinegroups/" + group_name + "/configs/" + config_name
(resp, header) = self._send("DELETE", project_name, None, resource, params, headers)
return RemoveConfigToMachineGroupResponse(header)
def get_machine_group_applied_configs(self, project_name, group_name):
""" get the logtail config names applied in a machine group
        Unsuccessful operation will cause a LogException.
        :type project_name: string
        :param project_name: the Project name
        :type group_name: string
        :param group_name: the group name whose applied configs are listed
:return: GetMachineGroupAppliedConfigResponse
:raise: LogException
"""
headers = {}
params = {}
resource = "/machinegroups/" + group_name + "/configs"
(resp, header) = self._send("GET", project_name, None, resource, params, headers)
return GetMachineGroupAppliedConfigResponse(resp, header)
def get_config_applied_machine_groups(self, project_name, config_name):
""" get machine group names where the logtail config applies to
Unsuccessful opertaion will cause an LogException.
:type project_name: string
:param project_name: the Project name
:type config_name: string
:param config_name: the logtail config name used to apply
:return: GetConfigAppliedMachineGroupsResponse
:raise: LogException
"""
headers = {}
params = {}
resource = "/configs/" + config_name + "/machinegroups"
(resp, header) = self._send("GET", project_name, None, resource, params, headers)
return GetConfigAppliedMachineGroupsResponse(resp, header)
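    # Logtail wiring sketch ("client" is an instance of this class; all names are illustrative):
    #   client.create_logtail_config("my-project", config_detail)
    #   client.create_machine_group("my-project", group_detail)
    #   client.apply_config_to_machine_group("my-project", "my-config", "my-group")
    #   client.get_machine_group_applied_configs("my-project", "my-group")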
def _update_acl(self, project_name, logstore_name, acl_action, acl_config) :
headers = {}
params = {}
params['type'] = 'acl'
resource = "/"
        if logstore_name is not None and len(logstore_name) > 0:
resource = "/logstores/" + logstore_name
body = acl_config.to_json()
body['action'] = acl_action
body = json.dumps(body)
headers['Content-Type'] = 'application/json'
headers['x-log-bodyrawsize'] = str(len(body))
(resp, headers) = self._send("PUT", project_name, body, resource, params, headers)
return UpdateAclResponse(headers)
def update_project_acl(self, project_name, acl_action, acl_config):
""" update acl of a project
        Unsuccessful operation will cause a LogException.
:type project_name: string
:param project_name: the Project name
:type acl_action: string
:param acl_action: "grant" or "revoke", grant or revoke the acl_config to/from a project
:type acl_config: acl_config.AclConfig
:param acl_config: the detail acl config info
:return: UpdateAclResponse
:raise: LogException
"""
return self._update_acl(project_name, None, acl_action, acl_config)
def update_logstore_acl(self, project_name, logstore_name, acl_action, acl_config):
""" update acl of a logstore
        Unsuccessful operation will cause a LogException.
:type project_name: string
:param project_name: the Project name
:type logstore_name: string
:param logstore_name: the logstore name
:type acl_action: string
:param acl_action: "grant" or "revoke", grant or revoke the acl_config to/from a logstore
:type acl_config: acl_config.AclConfig
:param acl_config: the detail acl config info
:return: UpdateAclResponse
:raise: LogException
"""
return self._update_acl(project_name, logstore_name, acl_action, acl_config)
def _list_acl(self, project_name, logstore_name, offset = 0 , size = 100) :
headers = {}
params = {}
params['type'] = 'acl'
params['offset'] = str(offset)
params['size'] = str(size)
resource = "/"
        if logstore_name is not None and len(logstore_name) > 0:
resource = "/logstores/" + logstore_name
(resp, headers) = self._send("GET", project_name, None, resource, params, headers)
return ListAclResponse(resp, headers)
def list_project_acl(self, project_name, offset = 0 , size = 100) :
""" list acl of a project
        Unsuccessful operation will cause a LogException.
        :type project_name: string
        :param project_name: the Project name
:type offset: int
:param offset: the offset of all acl
:type size: int
:param size: the max return acl count
:return: ListAclResponse
:raise: LogException
"""
return self._list_acl(project_name, None, offset, size)
def list_logstore_acl(self, project_name, logstore_name, offset = 0 ,size = 100) :
""" list acl of a logstore
        Unsuccessful operation will cause a LogException.
:type project_name: string
:param project_name: the Project name
:type logstore_name: string
:param logstore_name: the logstore name
:type offset: int
:param offset: the offset of all acl
:type size: int
:param size: the max return acl count
:return: ListAclResponse
:raise: LogException
"""
return self._list_acl(project_name, logstore_name, offset, size)
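    # ACL sketch ("client" is an instance of this class; acl_cfg is an acl_config.AclConfig,
    # values are illustrative):
    #   client.update_project_acl("my-project", "grant", acl_cfg)
    #   client.list_logstore_acl("my-project", "my-logstore")   # -> ListAclResponse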
def create_shipper(self, project_name, logstore_name, shipper_name, shipper_type, shipper_config) :
""" create odps/oss shipper
        for each type, only one shipper is allowed
        Unsuccessful operation will cause a LogException.
:type project_name: string
:param project_name: the Project name
:type logstore_name: string
:param logstore_name: the logstore name
:type shipper_name: string
:param shipper_name: the shipper name
:type shipper_type: string
:param shipper_type: only support "odps" or "oss"
:type shipper_config : OssShipperConfig or OdpsShipperConfig
:param shipper_config : the detail shipper config, must be OssShipperConfig or OdpsShipperConfig type
:return: CreateShipperResponse
:raise: LogException
"""
headers = {}
params = {}
resource = "/logstores/" + logstore_name + "/shipper"
body = {}
body["shipperName"] = shipper_name
body["targetType"] = shipper_type
body["targetConfiguration"] = shipper_config.to_json()
body = json.dumps(body)
headers['Content-Type'] = 'application/json'
headers['x-log-bodyrawsize'] = str(len(body))
(resp, headers) = self._send("POST", project_name, body, resource, params, headers)
return CreateShipperResponse(headers)
def update_shipper(self, project_name, logstore_name, shipper_name, shipper_type, shipper_config) :
""" update odps/oss shipper
        for each type, only one shipper is allowed
        Unsuccessful operation will cause a LogException.
:type project_name: string
:param project_name: the Project name
:type logstore_name: string
:param logstore_name: the logstore name
:type shipper_name: string
:param shipper_name: the shipper name
:type shipper_type: string
        :param shipper_type: only "odps" or "oss" are supported; the type must be the same as the original shipper
:type shipper_config : OssShipperConfig or OdpsShipperConfig
:param shipper_config : the detail shipper config, must be OssShipperConfig or OdpsShipperConfig type
:return: UpdateShipperResponse
:raise: LogException
"""
headers = {}
params = {}
resource = "/logstores/" + logstore_name + "/shipper/" + shipper_name
body = {}
body["shipperName"] = shipper_name
body["targetType"] = shipper_type
body["targetConfiguration"] = shipper_config.to_json()
body = json.dumps(body)
headers['Content-Type'] = 'application/json'
headers['x-log-bodyrawsize'] = str(len(body))
(resp, headers) = self._send("PUT", project_name, body, resource, params, headers)
return UpdateShipperResponse(headers)
def delete_shipper(self, project_name, logstore_name, shipper_name) :
""" delete odps/oss shipper
        Unsuccessful operation will cause a LogException.
:type project_name: string
:param project_name: the Project name
:type logstore_name: string
:param logstore_name: the logstore name
:type shipper_name: string
:param shipper_name: the shipper name
:return: DeleteShipperResponse
:raise: LogException
"""
headers = {}
params = {}
resource = "/logstores/" + logstore_name + "/shipper/" + shipper_name
(resp, header) = self._send("DELETE", project_name, None, resource, params, headers)
return DeleteShipperResponse(header)
def get_shipper_config(self, project_name, logstore_name, shipper_name) :
""" get odps/oss shipper
        Unsuccessful operation will cause a LogException.
:type project_name: string
:param project_name: the Project name
:type logstore_name: string
:param logstore_name: the logstore name
:type shipper_name: string
:param shipper_name: the shipper name
:return: GetShipperConfigResponse
:raise: LogException
"""
headers = {}
params = {}
resource = "/logstores/" + logstore_name + "/shipper/" + shipper_name
(resp, header) = self._send("GET", project_name, None, resource, params, headers)
return GetShipperConfigResponse(resp, header)
def list_shipper(self, project_name, logstore_name) :
""" list odps/oss shipper
        Unsuccessful operation will cause a LogException.
:type project_name: string
:param project_name: the Project name
:type logstore_name: string
:param logstore_name: the logstore name
:return: ListShipperResponse
:raise: LogException
"""
headers = {}
params = {}
resource = "/logstores/" + logstore_name + "/shipper"
(resp, header) = self._send("GET", project_name, None, resource, params, headers)
return ListShipperResponse(resp, header)
def get_shipper_tasks(self, project_name, logstore_name, shipper_name, start_time, end_time, status_type = '', offset = 0, size = 100):
""" get odps/oss shipper tasks in a certain time range
        Unsuccessful operation will cause a LogException.
:type project_name: string
:param project_name: the Project name
:type logstore_name: string
:param logstore_name: the logstore name
:type shipper_name: string
:param shipper_name: the shipper name
:type start_time: int
:param start_time: the start timestamp
:type end_time: int
:param end_time: the end timestamp
:type status_type : string
        :param status_type: one of ['', 'fail', 'success', 'running']; if status_type is '', tasks of every status are returned
:type offset : int
:param offset : the begin task offset
:type size : int
:param size : the needed tasks count
        :return: GetShipperTasksResponse
:raise: LogException
"""
headers = {}
params = {}
params["from"] = str(int(start_time))
params["to"] = str(int(end_time))
params["status"] = status_type
params["offset"] = str(int(offset))
params["size"] = str(int(size))
resource = "/logstores/" + logstore_name + "/shipper/" + shipper_name + "/tasks"
(resp, header) = self._send("GET", project_name, None, resource, params, headers)
return GetShipperTasksResponse(resp, header)
def retry_shipper_tasks(self, project_name, logstore_name, shipper_name, task_list) :
""" retry failed tasks , only the failed task can be retried
Unsuccessful opertaion will cause an LogException.
:type project_name: string
:param project_name: the Project name
:type logstore_name: string
:param logstore_name: the logstore name
:type shipper_name: string
:param shipper_name: the shipper name
:type task_list: string array
        :param task_list: the failed task_id list, e.g. ['failed_task_id_1', 'failed_task_id_2', ...]; currently at most 10 tasks can be retried per call
:return: RetryShipperTasksResponse
:raise: LogException
"""
headers = {}
params = {}
body = json.dumps(task_list)
headers['Content-Type'] = 'application/json'
headers['x-log-bodyrawsize'] = str(len(body))
resource = "/logstores/" + logstore_name + "/shipper/" + shipper_name + "/tasks"
(resp, header) = self._send("PUT", project_name, body, resource, params, headers)
return RetryShipperTasksResponse(header)
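    # Shipper task sketch ("client" is an instance of this class; values are illustrative):
    #   tasks = client.get_shipper_tasks("my-project", "my-logstore", "my-shipper",
    #                                    start_time, end_time, status_type='fail')
    #   # collect the failed task ids from the GetShipperTasksResponse, then retry them:
    #   client.retry_shipper_tasks("my-project", "my-logstore", "my-shipper", failed_task_ids)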
def create_project(self, project_name, project_des) :
""" Create a project
        Unsuccessful operation will cause a LogException.
:type project_name: string
:param project_name: the Project name
:type project_des: string
:param project_des: the description of a project
:return: CreateProjectResponse
:raise: LogException
"""
headers = {}
params = {}
body = {}
body["projectName"] = project_name
body["description"] = project_des
body = json.dumps(body)
headers['Content-Type'] = 'application/json'
headers['x-log-bodyrawsize'] = str(len(body))
resource = "/"
(resp, header) = self._send("POST", project_name, body, resource, params, headers)
return CreateProjectResponse(header)
def get_project(self, project_name) :
""" get project
        Unsuccessful operation will cause a LogException.
:type project_name: string
:param project_name: the Project name
:return: GetProjectResponse
:raise: LogException
"""
headers = {}
params = {}
body = {}
resource = "/"
(resp, header) = self._send("GET", project_name, None, resource, params, headers)
return GetProjectResponse(resp, header)
def delete_project(self, project_name):
""" delete project
        Unsuccessful operation will cause a LogException.
:type project_name: string
:param project_name: the Project name
:return: DeleteProjectResponse
:raise: LogException
"""
headers = {}
params = {}
body = {}
resource = "/"
(resp, header) = self._send("DELETE", project_name, None, resource, params, headers)
return DeleteProjectResponse(header)
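    # Project lifecycle sketch ("client" is an instance of this class; names are illustrative):
    #   client.create_project("my-project", "demo project")
    #   client.get_project("my-project")      # -> GetProjectResponse
    #   client.delete_project("my-project")   # -> DeleteProjectResponse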
| 36.325553 | 183 | 0.615543 | 6,125 | 59,138 | 5.793469 | 0.068245 | 0.074398 | 0.038467 | 0.043962 | 0.687614 | 0.669381 | 0.620318 | 0.582838 | 0.557531 | 0.53524 | 0 | 0.002865 | 0.297694 | 59,138 | 1,627 | 184 | 36.34788 | 0.851516 | 0.003551 | 0 | 0.477795 | 0 | 0 | 0.085141 | 0.002635 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.001531 | 0.087289 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
504b5c677cc47a60e5d73108185305ec18d1d8d6 | 489 | py | Python | CodeWars/Python/AlternateCase.py | BobbyRobillard/CodingChallenges | 71d5ca0b7f7c470c547d858dde7a799ce7d0d1a0 | [
"MIT"
] | null | null | null | CodeWars/Python/AlternateCase.py | BobbyRobillard/CodingChallenges | 71d5ca0b7f7c470c547d858dde7a799ce7d0d1a0 | [
"MIT"
] | null | null | null | CodeWars/Python/AlternateCase.py | BobbyRobillard/CodingChallenges | 71d5ca0b7f7c470c547d858dde7a799ce7d0d1a0 | [
"MIT"
] | null | null | null | def alternate_case(s):
# Like a Giga Chad
return "".join([char.lower() if char.isupper() else char.upper() for char in s])
# Like a Beta Male
# return s.swapcase()
# EXAMPLE AND TESTING #
input = ["Hello World", "cODEwARS"]
for item in input:
print("\nInput: {0}\nAlternate Case: {1}".format(item, alternate_case(item)))
assert alternate_case("Hello World") == "hELLO wORLD" # Simple Unit Tests
assert alternate_case("cODEwARS") == "CodeWars" # Simple Unit Tests
| 30.5625 | 84 | 0.672802 | 69 | 489 | 4.710145 | 0.565217 | 0.16 | 0.036923 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004988 | 0.179959 | 489 | 15 | 85 | 32.6 | 0.805486 | 0.224949 | 0 | 0 | 0 | 0 | 0.242588 | 0 | 0 | 0 | 0 | 0 | 0.285714 | 1 | 0.142857 | false | 0 | 0 | 0.142857 | 0.285714 | 0.142857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
505c47079444405aa3ca837c5da8f814a93cab97 | 1,745 | py | Python | examples/GANs/3DGAN/eval.py | Tarkiyah/kaotlin | 97374f648a53f6532f2348ca3f9ace943c4e2a4c | [
"ECL-2.0",
"Apache-2.0"
] | 2 | 2019-11-18T05:22:15.000Z | 2020-02-12T15:23:14.000Z | examples/GANs/3DGAN/eval.py | AOE-khkhan/kaolin | ed132736421ee723d14d59eaeb0286a8916a159d | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | examples/GANs/3DGAN/eval.py | AOE-khkhan/kaolin | ed132736421ee723d14d59eaeb0286a8916a159d | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2019-11-18T13:03:53.000Z | 2019-11-18T13:03:53.000Z | # Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import json
import numpy as np
import os
import torch
from torch.autograd import Variable
import torch.optim as optim
from torch.utils.data import DataLoader
import sys
from tqdm import tqdm
from architectures import Generator
import kaolin as kal
"""
Commandline arguments
"""
parser = argparse.ArgumentParser()
parser.add_argument('-expid', type=str, default='GAN', help='Unique experiment identifier.')
parser.add_argument('--device', type=str, default='cuda', help='Device to use')
parser.add_argument('-batchsize', type=int, default=50, help='Batch size.')
args = parser.parse_args()
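# Example invocation from a shell (argument values are illustrative):
#   python eval.py -expid GAN --device cuda -batchsize 50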
# Build the generator, load the trained weights and switch to eval mode
gen = Generator().to(args.device)
gen.load_state_dict(torch.load('log/{0}/gen.pth'.format(args.expid)))
gen.eval()
# Sample latent vectors z ~ N(0, 0.33^2) and generate a batch of fake voxel grids
z = torch.normal(torch.zeros(args.batchsize, 200), torch.ones(args.batchsize, 200) * .33).to(args.device)
fake_voxels = gen(z)[:, 0]
for i, model in enumerate(fake_voxels):
    # Trim the padded border and keep the largest connected voxel component above threshold
    model = model[:-2, :-2, :-2]
    model = kal.rep.voxel.max_connected(model, .5)
    # Convert the voxel grid to a quad mesh, smooth it and display it
    verts, faces = kal.conversion.voxel.to_mesh_quad(model)
    mesh = kal.rep.QuadMesh.from_tensors(verts, faces)
    mesh.laplacian_smoothing(iterations=3)
mesh.show() | 32.314815 | 103 | 0.755874 | 266 | 1,745 | 4.909774 | 0.552632 | 0.045942 | 0.039051 | 0.024502 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016404 | 0.126648 | 1,745 | 54 | 104 | 32.314815 | 0.840551 | 0.333524 | 0 | 0 | 0 | 0 | 0.088078 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.413793 | 0 | 0.413793 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
50632d26f7af85cd942cc29b1b5c01dae2c5c47e | 2,978 | py | Python | swim_backend/events.py | eurocontrol-swim/swim-backend | bdeba82d43b833f2fc9ef81e806d8ce0aafdb5b9 | [
"BSD-3-Clause"
] | null | null | null | swim_backend/events.py | eurocontrol-swim/swim-backend | bdeba82d43b833f2fc9ef81e806d8ce0aafdb5b9 | [
"BSD-3-Clause"
] | null | null | null | swim_backend/events.py | eurocontrol-swim/swim-backend | bdeba82d43b833f2fc9ef81e806d8ce0aafdb5b9 | [
"BSD-3-Clause"
] | null | null | null | """
Copyright 2019 EUROCONTROL
==========================================
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the
following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following
disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
==========================================
Editorial note: this license is an instance of the BSD license template as provided by the Open Source Initiative:
http://opensource.org/licenses/BSD-3-Clause
Details on EUROCONTROL: http://www.eurocontrol.int
"""
import abc
__author__ = "EUROCONTROL (SWIM)"
class Event(list):
"""
Simplistic implementation of event handling.
A list of callable objects. Calling an instance of this will cause a
call to each item in the list in ascending order by index.
"""
_type = 'Generic'
def __call__(self, *args, **kwargs):
for handler in self:
handler(*args, **kwargs)
def __repr__(self):
return f"{self._type} Event({list.__repr__(self)})"
class EventSafe(list):
    """
    Transactional variant of Event: each list item is a handler class that is
    instantiated with the call arguments, executed in order via do(), and
    rolled back in reverse order via undo() if any handler fails.
    """
    def __call__(self, *args, **kwargs):
        handlers = [handler_class(*args, **kwargs) for handler_class in self]
        processed_handlers = []
        for handler in handlers:
            try:
                handler.do()
                processed_handlers.append(handler)
            except Exception:
                # undo the failing handler, then roll back the already processed ones
                handler.undo()
                processed_handlers.reverse()
                for processed_handler in processed_handlers:
                    processed_handler.undo()
                raise
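# Minimal usage sketch for EventSafe (MyHandler is a hypothetical subclass of the
# EventHandler ABC defined below):
#   pipeline = EventSafe([MyHandler])
#   pipeline(job_id=42)   # do() runs each handler; undo() rolls back on failure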
class EventHandler(abc.ABC):
@abc.abstractmethod
def do(self, *args, **kwargs):
pass
@abc.abstractmethod
def undo(self, *args, **kwargs):
pass
| 36.765432 | 121 | 0.693083 | 383 | 2,978 | 5.310705 | 0.469974 | 0.029499 | 0.027532 | 0.022616 | 0.111111 | 0.066863 | 0.066863 | 0.066863 | 0.066863 | 0.066863 | 0 | 0.003442 | 0.21961 | 2,978 | 80 | 122 | 37.225 | 0.871773 | 0.653459 | 0 | 0.2 | 0 | 0 | 0.065934 | 0.027972 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0.066667 | 0.033333 | 0.033333 | 0.366667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
ac9387393ec1679069c88ceafbe610de436150e4 | 20,187 | py | Python | aiokraken/utils/tests/test_timeindexeddataframe.py | asmodehn/aiokraken | b260bd41d5aa091e6a4f1818328426fbe6f625c0 | [
"MIT"
] | null | null | null | aiokraken/utils/tests/test_timeindexeddataframe.py | asmodehn/aiokraken | b260bd41d5aa091e6a4f1818328426fbe6f625c0 | [
"MIT"
] | 82 | 2019-08-30T09:37:49.000Z | 2022-03-29T14:53:22.000Z | aiokraken/utils/tests/test_timeindexeddataframe.py | asmodehn/aiokraken | b260bd41d5aa091e6a4f1818328426fbe6f625c0 | [
"MIT"
] | null | null | null | import unittest
from datetime import datetime, timezone
from pandas import DatetimeTZDtype
from parameterized import parameterized
import pandas as pd
from aiokraken.utils.timeindexeddataframe import TimeindexedDataframe
"""
Test module.
This is intended for extensive testing, using parameterized, hypothesis or similar generation methods.
For simple use-case examples, we should rely on doctests.
"""
class TestTimeindexedDataframe(unittest.TestCase):
@parameterized.expand(
[
[
pd.DataFrame( # One with "datetime" column (like internal model)
# TODO: proper currencies...
[
[
datetime.fromtimestamp(1567039620, tz=timezone.utc),
8746.4,
8751.5,
8745.7,
8745.7,
8749.3,
0.09663298,
8,
],
[
datetime.fromtimestamp(1567039680, tz=timezone.utc),
8745.7,
8747.3,
8745.7,
8747.3,
8747.3,
0.00929540,
1,
],
],
# grab that from kraken documentation
columns=[
"datetime",
"open",
"high",
"low",
"close",
"vwap",
"volume",
"count",
],
) # there is no datetime index a priori.
], [
pd.DataFrame( # One with "datetime" column (like internal model)
# TODO: proper currencies...
[
[
datetime.fromtimestamp(1567039620, tz=timezone.utc),
8746.4,
8751.5,
8745.7,
8745.7,
8749.3,
0.09663298,
8,
],
[
datetime.fromtimestamp(1567039680, tz=timezone.utc),
8745.7,
8747.3,
8745.7,
8747.3,
8747.3,
0.00929540,
1,
],
],
# grab that from kraken documentation
columns=[
"datetime",
"open",
"high",
"low",
"close",
"vwap",
"volume",
"count",
],
).set_index("datetime") # we already have an index
],
]
)
def test_load_ok(self, df):
""" Verifying that expected data parses properly """
tidf = TimeindexedDataframe(data=df, index="datetime")
import pandas.api.types as ptypes
num_cols = ["open", "high", "low", "close", "vwap", "volume", "count"]
assert all(ptypes.is_numeric_dtype(tidf.dataframe[col]) for col in num_cols)
assert tidf.dataframe.index.name == "datetime"
# Verify we have a timezone aware, ns precision datetime.
assert ptypes.is_datetime64tz_dtype(tidf.dataframe.index.dtype)
assert ptypes.is_datetime64_ns_dtype(tidf.dataframe.index.dtype)
# TODO : property test instead (move this example test to doc...)
@parameterized.expand(
[
[
pd.DataFrame(
# TODO: proper Time, proper currencies...
[
[
datetime.fromtimestamp(1567039620, tz=timezone.utc),
8746.4,
8751.5,
8745.7,
8745.7,
8749.3,
0.09663298,
8,
],
[
datetime.fromtimestamp(1567039680, tz=timezone.utc),
8745.7,
8747.3,
8745.7,
8747.3,
8747.3,
0.00929540,
1,
],
],
# grab that from kraken documentation
columns=[
"datetime",
"open",
"high",
"low",
"close",
"vwap",
"volume",
"count",
],
),
pd.DataFrame(
# TODO: proper Time, proper currencies...
[
[
datetime.fromtimestamp(1567039680, tz=timezone.utc),
8745.8,
8747.3,
8745.7,
8747.3,
8747.3,
0.00929540,
1,
], # Not the value is a bit modified to trigger stitching...
[
datetime.fromtimestamp(1567039720, tz=timezone.utc),
8746.6,
8751.4,
8745.3,
8745.4,
8748.1,
0.09663297,
3,
],
],
# grab that from kraken documentation
columns=[
"datetime",
"open",
"high",
"low",
"close",
"vwap",
"volume",
"count",
],
),
],
]
)
def test_stitch_ok(
self, df1, df2
): # TODO : there are MANY cases to test for stitch
""" Verifying that expected data parses properly """
tidf1 = TimeindexedDataframe(data=df1)
tidf2 = TimeindexedDataframe(data=df2)
stitched1 = tidf1.merge(tidf2)
import pandas.api.types as ptypes
num_cols = ["open", "high", "low", "close", "vwap", "volume", "count"]
assert all(
ptypes.is_numeric_dtype(stitched1.dataframe[col]) for col in num_cols
)
assert stitched1.dataframe.index.name == "datetime"
# Verify we have a timezone aware, ns precision datetime.
assert ptypes.is_datetime64tz_dtype(stitched1.dataframe.index.dtype)
assert ptypes.is_datetime64_ns_dtype(stitched1.dataframe.index.dtype)
# verifying stitches
assert (stitched1.dataframe.iloc[0] == tidf1.dataframe.iloc[0]).all()
assert (stitched1.dataframe.iloc[-1] == tidf2.dataframe.iloc[-1]).all()
assert len(stitched1) == 3
# Note : careful with default merging strategy, ORDER MATTERS !
# To make it not matter, we need mode semantics...
@parameterized.expand(
[
[
pd.DataFrame( # One with "datetime" column (like internal model)
# TODO: proper Time, proper currencies...
[
[
datetime.fromtimestamp(1567039620, tz=timezone.utc),
8746.4,
8751.5,
8745.7,
8745.7,
8749.3,
0.09663298,
8,
],
[
datetime.fromtimestamp(1567039680, tz=timezone.utc),
8745.7,
8747.3,
8745.7,
8747.3,
8747.3,
0.00929540,
1,
],
],
# grab that from kraken documentation
columns=[
"datetime",
"open",
"high",
"low",
"close",
"vwap",
"volume",
"count",
],
).set_index("datetime")
],
]
)
def test_getitem_ok(self, df):
""" Verifying that expected data parses properly """
tidf = TimeindexedDataframe(data=df)
import pandas.api.types as ptypes
num_cols = ["open", "high", "low", "close", "vwap", "volume", "count"]
assert all(ptypes.is_numeric_dtype(tidf.dataframe[col]) for col in num_cols)
assert ptypes.is_datetime64_any_dtype(tidf.dataframe.index)
assert tidf.dataframe.index.name == "datetime"
assert tidf.dataframe.index.dtype == DatetimeTZDtype(tz=timezone.utc)
# verifying all ways to access data
# get the first element
assert isinstance(tidf.iloc[0], pd.Series)
assert tidf.iloc[0]["open"] == 8746.4
assert tidf.iloc[0]["high"] == 8751.5
assert tidf.iloc[0]["low"] == 8745.7
assert tidf.iloc[0]["close"] == 8745.7
assert tidf.iloc[0]["vwap"] == 8749.3
assert tidf.iloc[0]["volume"] == 0.09663298
assert tidf.iloc[0]["count"] == 8
# NOT WORKING
# get based on timeindex
# assert isinstance(tidf.tloc[1567039620], pd.Series)
# assert tidf.tloc[1567039620]["open"] == 8746.4
# assert tidf.tloc[1567039620]["high"] == 8751.5
# assert tidf.tloc[1567039620]["low"] == 8745.7
# assert tidf.tloc[1567039620]["close"] == 8745.7
# assert tidf.tloc[1567039620]["vwap"] == 8749.3
# assert tidf.tloc[1567039620]["volume"] == 0.09663298
# assert tidf.tloc[1567039620]["count"] == 8
# get from datetime
firstdatetime = datetime(
year=2019, month=8, day=29, hour=0, minute=47, second=0, tzinfo=timezone.utc
)
assert isinstance(tidf[firstdatetime], pd.Series)
assert tidf[firstdatetime]["open"] == 8746.4
assert tidf[firstdatetime]["high"] == 8751.5
assert tidf[firstdatetime]["low"] == 8745.7
assert tidf[firstdatetime]["close"] == 8745.7
assert tidf[firstdatetime]["vwap"] == 8749.3
assert tidf[firstdatetime]["volume"] == 0.09663298
assert tidf[firstdatetime]["count"] == 8
scnddatetime = datetime(
year=2019, month=8, day=29, hour=0, minute=48, second=0, tzinfo=timezone.utc
)
# get slice and verify equality
assert isinstance(tidf[firstdatetime:scnddatetime], TimeindexedDataframe)
assert tidf[firstdatetime:scnddatetime] == tidf
# get list of columns only
assert isinstance(tidf[["open", "high", "low", "close"]], TimeindexedDataframe)
assert tidf[["open", "high", "low", "close"]][firstdatetime]["open"] == tidf[firstdatetime]["open"]
assert tidf[["open", "high", "low", "close"]][firstdatetime]["high"] == tidf[firstdatetime]["high"]
assert tidf[["open", "high", "low", "close"]][firstdatetime]["low"] == tidf[firstdatetime]["low"]
assert tidf[["open", "high", "low", "close"]][firstdatetime]["close"] == tidf[firstdatetime]["close"]
@parameterized.expand(
[
[
pd.DataFrame( # One with "datetime" column (like internal model)
# TODO: proper Time, proper currencies...
[
[
datetime.fromtimestamp(1567039620, tz=timezone.utc),
8746.4,
8751.5,
8745.7,
8745.7,
8749.3,
0.09663298,
8,
],
[
datetime.fromtimestamp(1567039680, tz=timezone.utc),
8745.7,
8747.3,
8745.7,
8747.3,
8747.3,
0.00929540,
1,
],
],
# grab that from kraken documentation
columns=[
"datetime",
"open",
"high",
"low",
"close",
"vwap",
"volume",
"count",
],
).set_index("datetime")
],
]
)
def test_iter_ok(self, df):
""" Verifying that expected data iterates properly """
tidf = TimeindexedDataframe(data=df)
import pandas.api.types as ptypes
num_cols = ["open", "high", "low", "close", "vwap", "volume", "count"]
assert all(ptypes.is_numeric_dtype(tidf.dataframe[col]) for col in num_cols)
assert ptypes.is_datetime64_any_dtype(tidf.dataframe.index)
assert tidf.dataframe.index.name == "datetime"
assert tidf.dataframe.index.dtype == DatetimeTZDtype(tz=timezone.utc)
it = iter(tidf)
ts, s = next(it)
assert ts == datetime(
year=2019, month=8, day=29, hour=0, minute=48, second=0, tzinfo=timezone.utc
)
assert (s == pd.Series(data={
"open":8745.7,
"high":8747.3,
"low":8745.7,
"close":8747.3,
"vwap":8747.3,
"volume":0.00929540,
"count":1,
})).all()
ts2, s2 = next(it)
assert ts2 == datetime(
year=2019, month=8, day=29, hour=0, minute=47, second=0, tzinfo=timezone.utc
)
assert (s2 == pd.Series(data={
"open": 8746.4,
"high": 8751.5,
"low": 8745.7,
"close": 8745.7,
"vwap": 8749.3,
"volume": 0.09663298,
"count": 8,
})).all()
# @parameterized.expand(
# [
# [
# pd.DataFrame( # One with "datetime" column (like internal model)
# # TODO: proper Time, proper currencies...
# [
# [
# datetime.fromtimestamp(1567039620, tz=timezone.utc),
# 8746.4,
# 8751.5,
# 8745.7,
# 8745.7,
# 8749.3,
# 0.09663298,
# 8,
# ],
# [
# datetime.fromtimestamp(1567039680, tz=timezone.utc),
# 8745.7,
# 8747.3,
# 8745.7,
# 8747.3,
# 8747.3,
# 0.00929540,
# 1,
# ],
# ],
# # grab that from kraken documentation
# columns=[
# "datetime",
# "open",
# "high",
# "low",
# "close",
# "vwap",
# "volume",
# "count",
# ],
# ).set_index("datetime")
# ],
# ]
# )
# def test_aiter_ok(self, df):
# import asyncio
#
# clock = [1567039690,1567039750,1567039810,1567039870]
# countcall = iter(clock)
# def timer():
# return datetime.fromtimestamp(next(countcall), tz=timezone.utc)
#
# slept = 0
# async def sleeper(secs):
# slept = secs
#
# """ Verifying that expected data iterates properly asynchronously """
# tidf = TimeindexedDataframe(data=df, timer=timer, sleeper=sleeper)
#
# import pandas.api.types as ptypes
#
# num_cols = ["open", "high", "low", "close", "vwap", "volume", "count"]
# assert all(ptypes.is_numeric_dtype(tidf.dataframe[col]) for col in num_cols)
#
# assert ptypes.is_datetime64_any_dtype(tidf.dataframe.index)
# assert tidf.dataframe.index.name == "datetime"
# assert tidf.dataframe.index.dtype == DatetimeTZDtype(tz=timezone.utc)
#
# sync=asyncio.Lock()
#
# async def testrunner():
# idx = 0
#
# asyncio.get_running_loop().create_task(provider())
#
# async for m in tidf:
# async with sync:
# if idx == 0:
# assert m[0] == datetime.fromtimestamp(1567039740, tz=timezone.utc)
# assert slept == 50
# elif idx == 1:
# assert m[0]== datetime.fromtimestamp(1567039800, tz=timezone.utc)
# assert slept == 50
# elif idx == 2:
# assert m[0]== datetime.fromtimestamp(1567039860, tz=timezone.utc)
# assert slept == 50
# idx += 1
# if idx >= 3:
# break
#
# async def provider():
# idx=len(df)
# async with sync:
# # TODO : better way to append data (using __call__ ??)
# tidf.dataframe[idx] = [
# datetime.fromtimestamp(1567039740, tz=timezone.utc),
# 8745.7,
# 8747.2,
# 8745.8,
# 8747.3,
# 8747.3,
# 0.00929540,
# 1,
# ]
#
# idx = idx + 1
# async with sync:
# tidf.dataframe[idx] = [
# datetime.fromtimestamp(1567039800, tz=timezone.utc),
# 8745.7,
# 8747.3,
# 8745.7,
# 8747.3,
# 8747.3,
# 0.00929540,
# 1,
# ]
#
# idx = idx + 1
# async with sync:
# tidf.dataframe[idx] = [
# datetime.fromtimestamp(1567039860, tz=timezone.utc),
# 8745.7,
# 8747.3,
# 8745.7,
# 8747.3,
# 8747.3,
# 0.00929540,
# 1,
# ]
#
# # Note : even if we use asyncio here for apparent "parallelism" of control flow,
# # the timer and sleeper are test stubs to control syncronicity...
# asyncio.run(testrunner())
if __name__ == "__main__":
unittest.main()
| 36.904936 | 109 | 0.395849 | 1,594 | 20,187 | 4.970514 | 0.156211 | 0.025243 | 0.039379 | 0.021457 | 0.673608 | 0.616812 | 0.56784 | 0.540578 | 0.528083 | 0.506248 | 0 | 0.119729 | 0.503096 | 20,187 | 546 | 110 | 36.972527 | 0.670123 | 0.294447 | 0 | 0.632836 | 0 | 0 | 0.048048 | 0 | 0 | 0 | 0 | 0.001832 | 0.137313 | 1 | 0.01194 | false | 0 | 0.029851 | 0 | 0.044776 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ac9affe885fc3efd2cfe8bac23b5b5b2bbdd9cfc | 13,297 | py | Python | vtpl_api/models/engine_task.py | vtpl1/vtpl_api | d289c92254deb040de925205c583de69802a1c6b | [
"MIT"
] | null | null | null | vtpl_api/models/engine_task.py | vtpl1/vtpl_api | d289c92254deb040de925205c583de69802a1c6b | [
"MIT"
] | null | null | null | vtpl_api/models/engine_task.py | vtpl1/vtpl_api | d289c92254deb040de925205c583de69802a1c6b | [
"MIT"
] | null | null | null | # coding: utf-8
"""
Engine api
Engine APIs # noqa: E501
The version of the OpenAPI document: 1.0.4
Generated by: https://openapi-generator.tech
"""
import pprint
import re # noqa: F401
import six
class EngineTask(object):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
"""
Attributes:
openapi_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
openapi_types = {
'id': 'str',
'capbilities_type': 'Capability',
'event_type': 'EventType',
'engine_machine_id': 'str',
'is_expired': 'bool',
'time_to_live': 'int',
'source': 'SourceEndPoint',
'destination': 'DestinationEndPoint',
'zone_setting': 'EngineTaskZoneSetting',
'line_setting': 'EngineTaskLineSetting',
'config': 'list[Config]',
'updated': 'datetime',
'created': 'datetime',
'etag': 'str',
'links': 'Links'
}
attribute_map = {
'id': '_id',
'capbilities_type': 'capbilitiesType',
'event_type': 'eventType',
'engine_machine_id': 'engineMachineId',
'is_expired': 'isExpired',
'time_to_live': 'timeToLive',
'source': 'source',
'destination': 'destination',
'zone_setting': 'zoneSetting',
'line_setting': 'lineSetting',
'config': 'config',
'updated': 'updated',
'created': 'created',
'etag': 'etag',
'links': 'links'
}
def __init__(self, id=None, capbilities_type=None, event_type=None, engine_machine_id=None, is_expired=False, time_to_live=-1, source=None, destination=None, zone_setting=None, line_setting=None, config=None, updated=None, created=None, etag=None, links=None): # noqa: E501
"""EngineTask - a model defined in OpenAPI""" # noqa: E501
self._id = None
self._capbilities_type = None
self._event_type = None
self._engine_machine_id = None
self._is_expired = None
self._time_to_live = None
self._source = None
self._destination = None
self._zone_setting = None
self._line_setting = None
self._config = None
self._updated = None
self._created = None
self._etag = None
self._links = None
self.discriminator = None
if id is not None:
self.id = id
if capbilities_type is not None:
self.capbilities_type = capbilities_type
if event_type is not None:
self.event_type = event_type
if engine_machine_id is not None:
self.engine_machine_id = engine_machine_id
if is_expired is not None:
self.is_expired = is_expired
if time_to_live is not None:
self.time_to_live = time_to_live
if source is not None:
self.source = source
if destination is not None:
self.destination = destination
if zone_setting is not None:
self.zone_setting = zone_setting
if line_setting is not None:
self.line_setting = line_setting
if config is not None:
self.config = config
if updated is not None:
self.updated = updated
if created is not None:
self.created = created
if etag is not None:
self.etag = etag
if links is not None:
self.links = links
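        # Construction sketch (field values below are illustrative only):
        #   task = EngineTask(engine_machine_id="machine-01", is_expired=False, time_to_live=-1)
        #   payload = task.to_dict()   # serialized via the openapi_types / attribute_map mappings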
@property
def id(self):
"""Gets the id of this EngineTask. # noqa: E501
:return: The id of this EngineTask. # noqa: E501
:rtype: str
"""
return self._id
@id.setter
def id(self, id):
"""Sets the id of this EngineTask.
:param id: The id of this EngineTask. # noqa: E501
:type: str
"""
self._id = id
@property
def capbilities_type(self):
"""Gets the capbilities_type of this EngineTask. # noqa: E501
:return: The capbilities_type of this EngineTask. # noqa: E501
:rtype: Capability
"""
return self._capbilities_type
@capbilities_type.setter
def capbilities_type(self, capbilities_type):
"""Sets the capbilities_type of this EngineTask.
:param capbilities_type: The capbilities_type of this EngineTask. # noqa: E501
:type: Capability
"""
self._capbilities_type = capbilities_type
@property
def event_type(self):
"""Gets the event_type of this EngineTask. # noqa: E501
:return: The event_type of this EngineTask. # noqa: E501
:rtype: EventType
"""
return self._event_type
@event_type.setter
def event_type(self, event_type):
"""Sets the event_type of this EngineTask.
:param event_type: The event_type of this EngineTask. # noqa: E501
:type: EventType
"""
self._event_type = event_type
@property
def engine_machine_id(self):
"""Gets the engine_machine_id of this EngineTask. # noqa: E501
:return: The engine_machine_id of this EngineTask. # noqa: E501
:rtype: str
"""
return self._engine_machine_id
@engine_machine_id.setter
def engine_machine_id(self, engine_machine_id):
"""Sets the engine_machine_id of this EngineTask.
:param engine_machine_id: The engine_machine_id of this EngineTask. # noqa: E501
:type: str
"""
self._engine_machine_id = engine_machine_id
@property
def is_expired(self):
"""Gets the is_expired of this EngineTask. # noqa: E501
Explanations: * true = Engines will NEVER execute this task * false = Engines will execute this task # noqa: E501
:return: The is_expired of this EngineTask. # noqa: E501
:rtype: bool
"""
return self._is_expired
@is_expired.setter
def is_expired(self, is_expired):
"""Sets the is_expired of this EngineTask.
Explanations: * true = Engines will NEVER execute this task * false = Engines will execute this task # noqa: E501
:param is_expired: The is_expired of this EngineTask. # noqa: E501
:type: bool
"""
self._is_expired = is_expired
@property
def time_to_live(self):
"""Gets the time_to_live of this EngineTask. # noqa: E501
Time in milliseconds of expiry or the task. Engines will not execute an expired task. Explanations: * -1 = Never expires * -2 = Expired * 0 = Will expire in 0 milliseconds * >0 = milliseconds till expiry # noqa: E501
:return: The time_to_live of this EngineTask. # noqa: E501
:rtype: int
"""
return self._time_to_live
@time_to_live.setter
def time_to_live(self, time_to_live):
"""Sets the time_to_live of this EngineTask.
Time in milliseconds of expiry or the task. Engines will not execute an expired task. Explanations: * -1 = Never expires * -2 = Expired * 0 = Will expire in 0 milliseconds * >0 = milliseconds till expiry # noqa: E501
:param time_to_live: The time_to_live of this EngineTask. # noqa: E501
:type: int
"""
self._time_to_live = time_to_live
@property
def source(self):
"""Gets the source of this EngineTask. # noqa: E501
:return: The source of this EngineTask. # noqa: E501
:rtype: SourceEndPoint
"""
return self._source
@source.setter
def source(self, source):
"""Sets the source of this EngineTask.
:param source: The source of this EngineTask. # noqa: E501
:type: SourceEndPoint
"""
self._source = source
@property
def destination(self):
"""Gets the destination of this EngineTask. # noqa: E501
:return: The destination of this EngineTask. # noqa: E501
:rtype: DestinationEndPoint
"""
return self._destination
@destination.setter
def destination(self, destination):
"""Sets the destination of this EngineTask.
:param destination: The destination of this EngineTask. # noqa: E501
:type: DestinationEndPoint
"""
self._destination = destination
@property
def zone_setting(self):
"""Gets the zone_setting of this EngineTask. # noqa: E501
:return: The zone_setting of this EngineTask. # noqa: E501
:rtype: EngineTaskZoneSetting
"""
return self._zone_setting
@zone_setting.setter
def zone_setting(self, zone_setting):
"""Sets the zone_setting of this EngineTask.
:param zone_setting: The zone_setting of this EngineTask. # noqa: E501
:type: EngineTaskZoneSetting
"""
self._zone_setting = zone_setting
@property
def line_setting(self):
"""Gets the line_setting of this EngineTask. # noqa: E501
:return: The line_setting of this EngineTask. # noqa: E501
:rtype: EngineTaskLineSetting
"""
return self._line_setting
@line_setting.setter
def line_setting(self, line_setting):
"""Sets the line_setting of this EngineTask.
:param line_setting: The line_setting of this EngineTask. # noqa: E501
:type: EngineTaskLineSetting
"""
self._line_setting = line_setting
@property
def config(self):
"""Gets the config of this EngineTask. # noqa: E501
:return: The config of this EngineTask. # noqa: E501
:rtype: list[Config]
"""
return self._config
@config.setter
def config(self, config):
"""Sets the config of this EngineTask.
:param config: The config of this EngineTask. # noqa: E501
:type: list[Config]
"""
self._config = config
@property
def updated(self):
"""Gets the updated of this EngineTask. # noqa: E501
:return: The updated of this EngineTask. # noqa: E501
:rtype: datetime
"""
return self._updated
@updated.setter
def updated(self, updated):
"""Sets the updated of this EngineTask.
:param updated: The updated of this EngineTask. # noqa: E501
:type: datetime
"""
self._updated = updated
@property
def created(self):
"""Gets the created of this EngineTask. # noqa: E501
:return: The created of this EngineTask. # noqa: E501
:rtype: datetime
"""
return self._created
@created.setter
def created(self, created):
"""Sets the created of this EngineTask.
:param created: The created of this EngineTask. # noqa: E501
:type: datetime
"""
self._created = created
@property
def etag(self):
"""Gets the etag of this EngineTask. # noqa: E501
:return: The etag of this EngineTask. # noqa: E501
:rtype: str
"""
return self._etag
@etag.setter
def etag(self, etag):
"""Sets the etag of this EngineTask.
:param etag: The etag of this EngineTask. # noqa: E501
:type: str
"""
self._etag = etag
@property
def links(self):
"""Gets the links of this EngineTask. # noqa: E501
:return: The links of this EngineTask. # noqa: E501
:rtype: Links
"""
return self._links
@links.setter
def links(self, links):
"""Sets the links of this EngineTask.
:param links: The links of this EngineTask. # noqa: E501
:type: Links
"""
self._links = links
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.openapi_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, EngineTask):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
"""Returns true if both objects are not equal"""
return not self == other
| 27.644491 | 278 | 0.590885 | 1,552 | 13,297 | 4.896907 | 0.094072 | 0.047368 | 0.126316 | 0.118421 | 0.505526 | 0.396711 | 0.361184 | 0.246184 | 0.138026 | 0.086053 | 0 | 0.019519 | 0.318042 | 13,297 | 480 | 279 | 27.702083 | 0.818593 | 0.378807 | 0 | 0.082927 | 1 | 0 | 0.081139 | 0.00601 | 0 | 0 | 0 | 0 | 0 | 1 | 0.17561 | false | 0 | 0.014634 | 0 | 0.307317 | 0.009756 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
aca6a6e0486d3884a6f02c4b628910863d9b5d95 | 834 | py | Python | venv/Lib/site-packages/plotnine/geoms/geom_col.py | EkremBayar/bayar | aad1a32044da671d0b4f11908416044753360b39 | [
"MIT"
] | null | null | null | venv/Lib/site-packages/plotnine/geoms/geom_col.py | EkremBayar/bayar | aad1a32044da671d0b4f11908416044753360b39 | [
"MIT"
] | 1 | 2020-10-02T21:43:06.000Z | 2020-10-15T22:52:39.000Z | venv/Lib/site-packages/plotnine/geoms/geom_col.py | EkremBayar/bayar | aad1a32044da671d0b4f11908416044753360b39 | [
"MIT"
] | null | null | null | from ..doctools import document
from .geom_bar import geom_bar
@document
class geom_col(geom_bar):
"""
Bar plot with base on the x-axis
This is an alternate version of :class:`geom_bar` that maps
the height of bars to an existing variable in your data. If
you want the height of the bar to represent a count of cases,
use :class:`geom_bar`.
{usage}
Parameters
----------
{common_parameters}
width : float, (default: None)
Bar width. If :py:`None`, the width is set to
`90%` of the resolution of the data.
See Also
--------
plotnine.geoms.geom_bar
"""
REQUIRED_AES = {'x', 'y'}
NON_MISSING_AES = {'xmin', 'xmax', 'ymin', 'ymax'}
DEFAULT_PARAMS = {'stat': 'identity', 'position': 'stack',
'na_rm': False, 'width': None}
| 26.0625 | 65 | 0.605516 | 117 | 834 | 4.205128 | 0.606838 | 0.085366 | 0.04878 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003263 | 0.264988 | 834 | 31 | 66 | 26.903226 | 0.799347 | 0.545564 | 0 | 0 | 0 | 0 | 0.172638 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
aca90fb089b9626c7c5db9aee7d6d8d8f14631cd | 1,155 | py | Python | utils.py | SeoulTech-HCIRLab/ChannelAug | 2701c86836150f86dfdf3ab4f57485f262c88b8f | [
"Apache-2.0"
] | 3 | 2020-06-30T06:29:35.000Z | 2021-03-02T14:18:55.000Z | utils.py | titania7777/ChannelAug | 03dcd4aa6bdb8b2a38d5057b55672d8a862e4e11 | [
"Apache-2.0"
] | null | null | null | utils.py | titania7777/ChannelAug | 03dcd4aa6bdb8b2a38d5057b55672d8a862e4e11 | [
"Apache-2.0"
] | null | null | null | # Max-Heinrich Laves
# Institute of Mechatronic Systems
# Leibniz Universität Hannover, Germany
# 2019
# Code From https://github.com/mlaves/bayesian-temperature-scaling
import torch
__all__ = ['accuracy', 'kl_loss', 'nentr', 'xavier_normal_init']
def accuracy(input, target):
_, max_indices = torch.max(input.data, 1)
acc = (max_indices == target).sum().float() / max_indices.size(0)
return acc.item()
def kl_loss(logits):
return -torch.nn.functional.log_softmax(logits, dim=1).mean()
def nentr(p, base=None):
"""
Calculates entropy of p to the base b. If base is None, the natural logarithm is used.
:param p: batches of class label probability distributions (softmax output)
:param base: base b
:return:
"""
eps = torch.tensor([1e-16], device=p.device)
if base:
base = torch.tensor([base], device=p.device, dtype=torch.float32)
return (p.mul(p.add(eps).log().div(base.log()))).sum(dim=1).abs()
else:
return (p.mul(p.add(eps).log())).sum(dim=1).abs()
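# Usage sketch (tensor shapes are illustrative): per-sample entropy of softmax outputs
#   probs = torch.softmax(logits, dim=1)   # logits: (batch, num_classes)
#   h = nentr(probs, base=2)               # (batch,) entropy in bits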
def xavier_normal_init(m):
if isinstance(m, torch.nn.Conv2d):
torch.nn.init.xavier_normal_(m.weight.data) | 30.394737 | 90 | 0.670996 | 171 | 1,155 | 4.432749 | 0.508772 | 0.047493 | 0.042216 | 0.029024 | 0.083113 | 0.05277 | 0.05277 | 0 | 0 | 0 | 0 | 0.015773 | 0.176623 | 1,155 | 38 | 91 | 30.394737 | 0.781283 | 0.304762 | 0 | 0 | 0 | 0 | 0.049223 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0.055556 | 0.055556 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
acaa7e107370a525398d8d212fa14ab906e106e9 | 1,892 | py | Python | tests/testing_drive.py | alexzanderr/_core-dev | 831f69dad524e450c4243b1dd88f26de80e1d444 | [
"MIT"
] | null | null | null | tests/testing_drive.py | alexzanderr/_core-dev | 831f69dad524e450c4243b1dd88f26de80e1d444 | [
"MIT"
] | null | null | null | tests/testing_drive.py | alexzanderr/_core-dev | 831f69dad524e450c4243b1dd88f26de80e1d444 | [
"MIT"
] | null | null | null |
import unittest
from core.drive import copy
from core.aesthetics import *
class TestingDrivepy(unittest.TestCase):
def test_copy_function(self):
source_tests = [
r"D:\Alexzander__\programming\python\Python2Executable",
r"D:\Alexzander__\programming\python\byzantion",
r"D:\Alexzander__\programming\python\BizidayNews",
r"D:\Alexzander__\programming\python\bitcoin",
r"D:\Alexzander__\programming\python\core",
r"",
r"",
]
destination_tests = [
r"D:\Alexzander__\programming\python\testing_copy_func",
r"D:\Alexzander__\programming\python\testing_copy_func",
r"D:\Alexzander__\programming\python\testing_copy_func",
r"D:\Alexzander__\programming\python\testing_copy_func",
r"D:\Alexzander__\programming\python\testing_copy_func",
r"",
r"",
]
for index, (source, destination) in enumerate(
zip(source_tests, destination_tests),
start=1
):
if source != r"" and destination != r"":
try:
result = self.assertEqual(
copy(
source,
destination,
open_destination_when_done=False,
__print=False),
True
)
if result is None:
print(f"Test #{index} {green_bold('passed')}.")
except BaseException as exception:
print(red_bold(type(exception)))
print(red_bold(exception))
print(f"Test #{index} DIDNT pass!")
if __name__ == '__main__':
unittest.main()
| 35.037037 | 72 | 0.509514 | 167 | 1,892 | 5.461078 | 0.371257 | 0.02193 | 0.131579 | 0.252193 | 0.412281 | 0.285088 | 0.242325 | 0.242325 | 0.242325 | 0.242325 | 0 | 0.001742 | 0.393235 | 1,892 | 53 | 73 | 35.698113 | 0.792683 | 0 | 0 | 0.2 | 0 | 0 | 0.300871 | 0.275299 | 0 | 0 | 0 | 0 | 0.022222 | 1 | 0.022222 | false | 0.044444 | 0.066667 | 0 | 0.111111 | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
acb38fbd951d6721bb277eafb67e6e86f5c11fc0 | 534 | py | Python | Python/1 pengenalan python/2 komentar dan operasi matematika/6 kesimpulan.py | ekovegeance-com/tree | 7a429d0f35c5a71769820177f60d22e7231b4e40 | [
"Apache-2.0"
] | 3 | 2020-12-21T13:01:35.000Z | 2020-12-27T08:25:57.000Z | Python/1 pengenalan python/2 komentar dan operasi matematika/6 kesimpulan.py | ekovegeance-com/tree | 7a429d0f35c5a71769820177f60d22e7231b4e40 | [
"Apache-2.0"
] | 2 | 2020-12-05T23:26:16.000Z | 2020-12-27T10:21:47.000Z | Python/1 pengenalan python/2 komentar dan operasi matematika/6 kesimpulan.py | faizH3/faiz | c6a38717b91db8f76a0c4c4fd3168eb3ce8123ef | [
"Apache-2.0"
] | 3 | 2021-07-27T19:05:40.000Z | 2021-11-08T09:03:23.000Z | # Instructions:
# Write a comment on the first line,
# Create a variable named jumlah_pacar containing a whole number (not a decimal),
# Create a variable named lagi_galau containing a boolean,
# Create a variable with a name of your choice and use one of the math operators we have learned.
# variables: for storing data
# data types: boolean and numbers
# whitespace: why whitespace matters in python
# comments: for explaining code
# math operations: from addition through modulus
jumlah_pacar = 1
lagi_galau = False
umur = 20 | 35.6 | 115 | 0.803371 | 74 | 534 | 5.743243 | 0.702703 | 0.084706 | 0.089412 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006637 | 0.153558 | 534 | 15 | 116 | 35.6 | 0.933628 | 0.872659 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
acbbc4f8e25d9363036a7e6c45e12fa4a283bea3 | 983 | py | Python | args.py | fang1fan/m5-python-starter | 434bcd701c04707e5a5c3ed07ee51d0a66687dfc | [
"Apache-2.0"
] | 1 | 2021-01-15T01:45:58.000Z | 2021-01-15T01:45:58.000Z | args.py | fang1fan/m5-python-starter | 434bcd701c04707e5a5c3ed07ee51d0a66687dfc | [
"Apache-2.0"
] | null | null | null | args.py | fang1fan/m5-python-starter | 434bcd701c04707e5a5c3ed07ee51d0a66687dfc | [
"Apache-2.0"
] | null | null | null | import argparse
parser = argparse.ArgumentParser(
formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument('--d_model', type=int, default=0, help='d_model')
parser.add_argument('--d_head', type=int, default=2, help='head')
parser.add_argument('--d_inner', type=bool, default=True, help='inner layers')
parser.add_argument('--n_token', type=str, default='roberta-base', help='number of tokens')
parser.add_argument('--n_layer', type=str, default='gru', help='number of hidden layers')
parser.add_argument('--n_head', type=int, default=2, help='num attention heads')
parser.add_argument('--dropout', type=int, default=1024, help='dropout')
parser.add_argument('--dropatt', type=int, default=0.5, help='dropatt')
parser.add_argument('--attention_dropout_prob', type=int, default=1024, help='attention_dropout_prob')
parser.add_argument('--output_dropout_prob', type=int, default=0.5, help='output_dropout_prob')
args = parser.parse_args()
args = vars(args)
| 44.681818 | 102 | 0.755849 | 142 | 983 | 5.042254 | 0.316901 | 0.125698 | 0.23743 | 0.075419 | 0.27933 | 0.120112 | 0 | 0 | 0 | 0 | 0 | 0.016429 | 0.071211 | 983 | 21 | 103 | 46.809524 | 0.767798 | 0 | 0 | 0 | 0 | 0 | 0.2706 | 0.068159 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.066667 | 0 | 0.066667 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
acc1831c2a1513db9dd276b866be14f74d8ab6db | 341 | py | Python | Chapter04/CNN_1.py | PacktPublishing/Practical-Convolutional-Neural-Networks | fabffa7f8afa8986a3f2e0756ec8bc4c12836eb9 | [
"MIT"
] | 23 | 2018-03-22T21:30:32.000Z | 2022-01-02T13:26:34.000Z | Chapter04/CNN_1.py | huanghanchi/Practical-Convolutional-Neural-Networks | 365aa803d38316ed9749e4c8c0f3ae2667788781 | [
"MIT"
] | 2 | 2018-05-21T04:53:34.000Z | 2019-03-05T13:04:34.000Z | Chapter04/CNN_1.py | huanghanchi/Practical-Convolutional-Neural-Networks | 365aa803d38316ed9749e4c8c0f3ae2667788781 | [
"MIT"
] | 17 | 2018-03-12T12:00:19.000Z | 2022-02-22T16:36:36.000Z | #Refer AlexNet implementation code, returns last fully connected layer
fc7 = AlexNet(resized, feature_extract=True)
shape = (fc7.get_shape().as_list()[-1], 43)
fc8_weight = tf.Variable(tf.truncated_normal(shape, stddev=1e-2))
fc8_b = tf.Variable(tf.zeros(43))
logits = tf.nn.xw_plus_b(fc7, fc8_weight, fc8_b)
probs = tf.nn.softmax(logits)
| 34.1 | 70 | 0.756598 | 57 | 341 | 4.350877 | 0.649123 | 0.072581 | 0.096774 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.045603 | 0.099707 | 341 | 9 | 71 | 37.888889 | 0.762215 | 0.202346 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
acc30f8a7181a1c7790a5aea356612213e20f556 | 375 | py | Python | octoprint_marlin_flasher/validation/validators/arduino.py | thinkyhead/OctoPrint-Marlin-Flasher | c43110226d1b9d4aa0df2fdfb8cffab47d687957 | [
"MIT"
] | 1 | 2021-09-20T22:17:22.000Z | 2021-09-20T22:17:22.000Z | octoprint_marlin_flasher/validation/validators/arduino.py | thinkyhead/OctoPrint-Marlin-Flasher | c43110226d1b9d4aa0df2fdfb8cffab47d687957 | [
"MIT"
] | null | null | null | octoprint_marlin_flasher/validation/validators/arduino.py | thinkyhead/OctoPrint-Marlin-Flasher | c43110226d1b9d4aa0df2fdfb8cffab47d687957 | [
"MIT"
] | 1 | 2021-12-10T03:37:29.000Z | 2021-12-10T03:37:29.000Z | from flask_babel import gettext
from marshmallow import ValidationError
import intelhex
import zipfile
def is_correct_file_type(filename):
try:
with zipfile.ZipFile(filename, "r") as _:
pass
except zipfile.BadZipfile:
try:
ih = intelhex.IntelHex()
ih.loadhex(filename)
except intelhex.IntelHexError:
raise ValidationError(gettext("Invalid file type."))
| 22.058824 | 55 | 0.770667 | 46 | 375 | 6.173913 | 0.586957 | 0.056338 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.149333 | 375 | 16 | 56 | 23.4375 | 0.890282 | 0 | 0 | 0.142857 | 0 | 0 | 0.050667 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0.071429 | 0.285714 | 0 | 0.357143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
acc468eb15f6052159fe5c3f86610a87918f8f91 | 291 | py | Python | modules/accounts/__init__.py | vladpi/zenmoney-bot | 280723a49979632811f585fb8dced3c396fe563a | [
"Apache-2.0"
] | null | null | null | modules/accounts/__init__.py | vladpi/zenmoney-bot | 280723a49979632811f585fb8dced3c396fe563a | [
"Apache-2.0"
] | 1 | 2022-02-16T22:29:36.000Z | 2022-02-16T22:29:54.000Z | modules/accounts/__init__.py | vladpi/zenmoney-bot | 280723a49979632811f585fb8dced3c396fe563a | [
"Apache-2.0"
] | null | null | null | from .exports import ( # noqa
create_account_from_zenmoney_account,
delete_account,
get_accounts_by_user,
get_user_account_by_id,
get_user_account_by_title,
update_account_transactions_count,
)
from .schemas import AccountModel # noqa
from .tables import * # noqa
| 26.454545 | 41 | 0.769759 | 38 | 291 | 5.394737 | 0.526316 | 0.097561 | 0.136585 | 0.156098 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.178694 | 291 | 10 | 42 | 29.1 | 0.857741 | 0.04811 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.3 | 0 | 0.3 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
acc64880afae366e820de6d62f2717aea69a4649 | 2,360 | py | Python | tests/project_creator.py | AlexandrovRoman/Flask-DJ | ecf6cb05a8115641cab6634e8d004801a96c314c | [
"MIT"
] | 7 | 2020-03-12T03:09:12.000Z | 2021-05-01T08:11:33.000Z | tests/project_creator.py | AlexandrovRoman/Flask-DJ | ecf6cb05a8115641cab6634e8d004801a96c314c | [
"MIT"
] | null | null | null | tests/project_creator.py | AlexandrovRoman/Flask-DJ | ecf6cb05a8115641cab6634e8d004801a96c314c | [
"MIT"
] | 1 | 2020-12-17T07:24:55.000Z | 2020-12-17T07:24:55.000Z | from os.path import exists, join
import pytest
from flask_dj import startproject
from tests.basic_project_creator import ProjectCreate
class TestBaseSettingProjectConstructor(ProjectCreate):
def setup(self, need_static=False, need_templates=False):
super().setup()
def test_project_folder_exist(self):
assert exists(self.project_path)
def test_main_folder_exist(self):
assert exists(join(self.project_path, self.project_name))
def test_main_init(self):
self.main_file_test('__init__', ['from flask import Flask', 'from flask_login import LoginManager'])
def test_main_config(self):
self.main_file_test('config', ["HOST = '127.0.0.1'", "PORT = 5000", "Config = DevelopConfig"])
def main_file_test(self, filename, test_contents=['']):
self._file_test(self.project_name, filename, test_contents)
def test_main_urls(self):
assert exists(join(join(self.project_path, self.project_name), 'urls.py'))
def test_main_manage(self):
self._file_test(self.project_path, 'manage', [f'from {self.project_name} import app, config'])
def test_utils_folder_exist(self):
assert exists(join(self.project_path, 'utils'))
def test_utils_urls(self):
self._file_test('utils', 'urls', [f'from {self.project_name} import app'])
def test_templates_folder(self):
assert exists(join(self.project_path, 'templates')) or not self.need_templates
def test_static_folder(self):
assert exists(join(self.project_path, 'static')) or not self.need_static
class TestAdvancedProjectConfig(TestBaseSettingProjectConstructor):
def setup(self):
super().setup(need_templates=True, need_static=True)
class UncorrectProjectName(ProjectCreate):
def setup(self, project_name):
super().setup(project_name=project_name, fast_start=False)
def test_create(self):
with pytest.raises(ValueError):
startproject(self.project_name)
def test_main_folder_not_exist(self):
assert not exists(join(self.project_path, self.project_name))
def teardown(self):
pass
class TestNumUncorrectProjectName(UncorrectProjectName):
def setup(self):
super().setup("123project")
class TestDashInProjectName(UncorrectProjectName):
def setup(self):
super().setup("pro-ject")
| 32.328767 | 108 | 0.717373 | 299 | 2,360 | 5.411371 | 0.230769 | 0.108776 | 0.074166 | 0.070457 | 0.358467 | 0.275031 | 0.202101 | 0.145241 | 0.094561 | 0 | 0 | 0.006653 | 0.172034 | 2,360 | 72 | 109 | 32.777778 | 0.821392 | 0 | 0 | 0.0625 | 0 | 0 | 0.111017 | 0 | 0 | 0 | 0 | 0 | 0.145833 | 1 | 0.395833 | false | 0.020833 | 0.145833 | 0 | 0.645833 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
acc6a2044f16b81953b9b0b10f9e3d38dd9da534 | 1,822 | py | Python | server/apps/authentication/tests.py | krishnasagar14/street_parking | 5b5e13d94358c0c610b9c2188abb62e52598a3bb | [
"MIT"
] | null | null | null | server/apps/authentication/tests.py | krishnasagar14/street_parking | 5b5e13d94358c0c610b9c2188abb62e52598a3bb | [
"MIT"
] | 3 | 2020-02-11T23:45:37.000Z | 2021-06-10T21:13:14.000Z | server/apps/authentication/tests.py | krishnasagar14/street_parking | 5b5e13d94358c0c610b9c2188abb62e52598a3bb | [
"MIT"
] | null | null | null | from django.test import TestCase
from rest_framework.test import APIRequestFactory
from .views import LoginView, SignupView
from common.tests import USER_DATA, prepare_dummy_user_data
# Create your tests here.
class ApiViewTests(TestCase):
factory = APIRequestFactory()
def test_signup(self):
user_data = USER_DATA
view = SignupView.as_view()
req = self.factory.post('/register/', user_data, format='json')
resp = view(req)
st_code = resp.status_code
self.assertEqual(st_code, 400)
user_data['password'] = 'test01'
req = self.factory.post('/register/', user_data, format='json')
resp = view(req)
st_code = resp.status_code
rdata = resp.data.get('data')
self.assertEqual(st_code, 201)
self.assertEqual(rdata['message'], 'USER_REGISTER_SUCCESS')
print("User register API test success")
def test_login(self):
prepare_dummy_user_data()
view = LoginView.as_view()
user_data = {}
req = self.factory.post('/login/', user_data, format='json')
resp = view(req)
st_code = resp.status_code
self.assertEqual(st_code, 400)
user_data = {
'email': USER_DATA['email'],
'password': '',
}
req = self.factory.post('/login/', user_data, format='json')
resp = view(req)
st_code = resp.status_code
self.assertEqual(st_code, 400)
user_data['password'] = 'test123'
req = self.factory.post('/login/', user_data, format='json')
resp = view(req)
st_code = resp.status_code
rdata = resp.data.get('data')
self.assertEqual(st_code, 200)
self.assertEqual(len(rdata['token'].split('.')), 3)
print("User login API test success") | 31.964912 | 71 | 0.616905 | 222 | 1,822 | 4.878378 | 0.252252 | 0.110803 | 0.064635 | 0.083102 | 0.517082 | 0.517082 | 0.517082 | 0.517082 | 0.517082 | 0.517082 | 0 | 0.015533 | 0.257958 | 1,822 | 57 | 72 | 31.964912 | 0.785503 | 0.012623 | 0 | 0.444444 | 0 | 0 | 0.115128 | 0.01168 | 0 | 0 | 0 | 0 | 0.155556 | 1 | 0.044444 | false | 0.066667 | 0.088889 | 0 | 0.177778 | 0.044444 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
acc6cf450962693d28429a214f8508d7ed98f2f8 | 5,205 | py | Python | build/android/play_services/utils.py | TwistedCore/external_v8 | c6725dab9be251fbfc6fd7d53c3513a23e78c36c | [
"BSD-3-Clause"
] | 2 | 2019-01-28T08:09:58.000Z | 2021-11-15T15:32:10.000Z | build/android/play_services/utils.py | TwistedCore/external_v8 | c6725dab9be251fbfc6fd7d53c3513a23e78c36c | [
"BSD-3-Clause"
] | null | null | null | build/android/play_services/utils.py | TwistedCore/external_v8 | c6725dab9be251fbfc6fd7d53c3513a23e78c36c | [
"BSD-3-Clause"
] | 6 | 2020-09-23T08:56:12.000Z | 2021-11-18T03:40:49.000Z | # Copyright 2015 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
'''
Utility functions for all things related to manipulating google play services
related files.
'''
import argparse
import filecmp
import json
import logging
import os
import re
import sys
sys.path.append(os.path.join(os.path.dirname(__file__), os.pardir))
from devil.utils import cmd_helper
_XML_VERSION_NUMBER_PATTERN = re.compile(
r'<integer name="google_play_services_version">(\d+)<\/integer>')
class DefaultsRawHelpFormatter(argparse.ArgumentDefaultsHelpFormatter,
argparse.RawDescriptionHelpFormatter):
'''
Combines the features of RawDescriptionHelpFormatter and
ArgumentDefaultsHelpFormatter, providing defaults for the arguments and raw
text for the description.
'''
pass
class ConfigParser(object):
'''Reads and writes the configuration files for play services related scripts
The configuration files are JSON files. Here is the data they are expected
to contain:
- version_number
Number. Mirrors @integer/google_play_services_version from the library.
Example: 815000
- sdk_version
Version of the Play Services SDK to retrieve, when preprocessing the
library from a maven/gradle repository.
Example: "8.1.0"
- clients
List of strings. Name of the clients (or play services modules) to
include when preprocessing the library.
Example: ["play-services-base", "play-services-cast"]
- version_xml_path
String. Path to the version.xml string describing the current version.
Should be relative to the library base directory
Example: "res/values/version.xml"
- locale_whitelist
List of strings. List of locales to keep from the resources. Can be
obtained by generating an android build and looking at the content of
`out/Debug/gen/chrome/java/res`; or looking at the android section in
`//chrome/app/generated_resources.grd`
Example: ["am", "ar", "bg", "ca", "cs"]
- resource_whitelist
List of strings. List of resource files to explicitely keep in the final
output. Use it to keep drawables for example, as we currently remove them
all.
Example: ["play-services-base/res/drawables/foobar.xml"]
'''
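  # Example config file combining the documented fields (values are taken from
  # the examples above and are illustrative only):
  #
  #   {
  #     "version_number": 815000,
  #     "sdk_version": "8.1.0",
  #     "clients": ["play-services-base", "play-services-cast"],
  #     "version_xml_path": "res/values/version.xml",
  #     "locale_whitelist": ["am", "ar", "bg", "ca", "cs"],
  #     "resource_whitelist": ["play-services-base/res/drawables/foobar.xml"]
  #   }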
_VERSION_NUMBER_KEY = 'version_number'
def __init__(self, path):
self.path = path
self._data = {}
with open(path, 'r') as stream:
self._data = json.load(stream)
@property
def version_number(self):
return self._data.get(self._VERSION_NUMBER_KEY)
@property
def sdk_version(self):
return self._data.get('sdk_version')
@property
def clients(self):
return self._data.get('clients') or []
@property
def version_xml_path(self):
return self._data.get('version_xml_path')
@property
def locale_whitelist(self):
return self._data.get('locale_whitelist') or []
@property
def resource_whitelist(self):
return self._data.get('resource_whitelist') or []
def UpdateVersionNumber(self, new_version_number):
'''Updates the version number and saves it in the configuration file. '''
with open(self.path, 'w') as stream:
self._data[self._VERSION_NUMBER_KEY] = new_version_number
stream.write(DumpTrimmedJson(self._data))
def DumpTrimmedJson(json_data):
'''
Default formatting when dumping json to string has trailing spaces and lacks
a new line at the end. This function fixes that.
'''
out = json.dumps(json_data, sort_keys=True, indent=2)
out = out.replace(' ' + os.linesep, os.linesep)
return out + os.linesep
def FileEquals(expected_file, actual_file):
'''
Returns whether the two files are equal. Returns False if any of the files
doesn't exist.
'''
if not os.path.isfile(actual_file) or not os.path.isfile(expected_file):
return False
return filecmp.cmp(expected_file, actual_file)
def IsRepoDirty(repo_root):
'''Returns True if there are no staged or modified files, False otherwise.'''
# diff-index returns 1 if there are staged changes or modified files,
# 0 otherwise
cmd = ['git', 'diff-index', '--quiet', 'HEAD']
return cmd_helper.Call(cmd, cwd=repo_root) == 1
def GetVersionNumberFromLibraryResources(version_xml):
'''
Extracts a Google Play services version number from its version.xml file.
'''
with open(version_xml, 'r') as version_file:
version_file_content = version_file.read()
match = _XML_VERSION_NUMBER_PATTERN.search(version_file_content)
if not match:
raise AttributeError('A value for google_play_services_version was not '
'found in ' + version_xml)
return int(match.group(1))
def MakeLocalCommit(repo_root, files_to_commit, message):
'''Makes a local git commit.'''
logging.debug('Staging files (%s) for commit.', files_to_commit)
if cmd_helper.Call(['git', 'add'] + files_to_commit, cwd=repo_root) != 0:
raise Exception('The local commit failed.')
logging.debug('Committing.')
if cmd_helper.Call(['git', 'commit', '-m', message], cwd=repo_root) != 0:
raise Exception('The local commit failed.')
| 30.438596 | 79 | 0.716427 | 716 | 5,205 | 5.068436 | 0.350559 | 0.042987 | 0.023147 | 0.02976 | 0.090383 | 0.057316 | 0.025351 | 0.025351 | 0.025351 | 0.025351 | 0 | 0.004738 | 0.189049 | 5,205 | 170 | 80 | 30.617647 | 0.855011 | 0.446686 | 0 | 0.115942 | 0 | 0 | 0.122397 | 0.029229 | 0 | 0 | 0 | 0 | 0 | 1 | 0.188406 | false | 0.014493 | 0.115942 | 0.086957 | 0.507246 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
acd3f396f98f04bb660c487afcf7cf999a2fe26c | 1,219 | py | Python | src_py/testudpvilistus.py | paulharter/biofeed | fa83e5dcec568d1cd7350b2047c9b91891d1623e | [
"MIT"
] | null | null | null | src_py/testudpvilistus.py | paulharter/biofeed | fa83e5dcec568d1cd7350b2047c9b91891d1623e | [
"MIT"
] | null | null | null | src_py/testudpvilistus.py | paulharter/biofeed | fa83e5dcec568d1cd7350b2047c9b91891d1623e | [
"MIT"
] | null | null | null | import socket,time
tcpSock=socket.socket(socket.AF_INET,socket.SOCK_STREAM)
print "try connect",tcpSock
tcpSock.connect(('169.254.1.1',2000))
print "connected - wait for response"
print tcpSock.recv(8)
time.sleep(0.1)
tcpSock.send("$$$")
time.sleep(0.5)
print tcpSock.recv(1024)
print "Param set"
tcpSock.send("set com time 0\r") # set maximum wait between buffers
tcpSock.send("set com size 140\r") # set buffer size
tcpSock.send("set ip flags 3\r") # turn off TCP retries
tcpSock.send("set ip host 169.254.1.10\r") # turn off TCP retries
tcpSock.send("set ip remote 49990\r") # turn off TCP retries
tcpSock.send("set sys autosleep 0\r") # turn off TCP retries
tcpSock.send("set ip proto 3\r") # turn off TCP retries
tcpSock.send("exit\r")
tcpSock.send("RING\n")
tcpSock.send("RING\n")
tcpSock.close()
HOST = '' # Symbolic name meaning all available interfaces
PORT = 49990 # Arbitrary non-privileged port
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind((HOST,PORT))
s.settimeout(500)
count=0
print 'UDP Listening on port ',PORT
while 1:
count=count+1
data,address=s.recvfrom(1024)
print address,len(data),count
print "end"
| 32.078947 | 75 | 0.694011 | 197 | 1,219 | 4.274112 | 0.401015 | 0.143705 | 0.11639 | 0.065321 | 0.32304 | 0.276722 | 0.276722 | 0.195962 | 0.12114 | 0 | 0 | 0.056492 | 0.172272 | 1,219 | 37 | 76 | 32.945946 | 0.777998 | 0.188679 | 0 | 0.058824 | 0 | 0 | 0.255048 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.029412 | null | null | 0.235294 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
acd7b158a80820d009b00a99ec78dd68591f2b96 | 458 | py | Python | Programs/Evernote/PackMemo/PackMemo.py | Psiphonc/EasierLife | ad9143a6362d70489ef4b36651ce58a3cc1d0fa3 | [
"MIT"
] | 203 | 2016-04-02T07:43:47.000Z | 2022-01-05T11:41:03.000Z | Programs/Evernote/PackMemo/PackMemo.py | Psiphonc/EasierLife | ad9143a6362d70489ef4b36651ce58a3cc1d0fa3 | [
"MIT"
] | 4 | 2016-05-13T11:20:09.000Z | 2018-09-23T01:12:07.000Z | Programs/Evernote/PackMemo/PackMemo.py | Psiphonc/EasierLife | ad9143a6362d70489ef4b36651ce58a3cc1d0fa3 | [
"MIT"
] | 169 | 2016-04-26T03:20:04.000Z | 2022-03-09T18:36:19.000Z | from EvernoteController import EvernoteController
from Memo import Memo
MEMO_NAME = 'Memo'
MEMO_DIR = 'Memo'
MEMO_STORAGE_DIR = 'S-Memo'
def f(fn, *args, **kwargs):
try:
fn(*args, **kwargs)
except:
pass
m = Memo()
e = EvernoteController()
f(e.create_notebook, MEMO_DIR)
f(e.create_notebook, MEMO_STORAGE_DIR)
f(e.move_note, MEMO_DIR+'/'+MEMO_NAME, MEMO_STORAGE_DIR)
e.create_note('Memo', m.raw_memo(), MEMO_DIR)
| 22.9 | 57 | 0.676856 | 67 | 458 | 4.373134 | 0.343284 | 0.109215 | 0.143345 | 0.109215 | 0.136519 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.19214 | 458 | 19 | 58 | 24.105263 | 0.791892 | 0 | 0 | 0 | 0 | 0 | 0.04328 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0.0625 | 0.125 | 0 | 0.1875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
acde47f7cdaa571a2abde3c7a22b637d977809a6 | 1,445 | py | Python | apps/credit_card/models.py | code-yeongyu/backend | cafad5a1cae47ab86ca71028379b72837ea4543d | [
"MIT"
] | 1 | 2021-07-09T01:27:16.000Z | 2021-07-09T01:27:16.000Z | apps/credit_card/models.py | code-yeongyu/backend | cafad5a1cae47ab86ca71028379b72837ea4543d | [
"MIT"
] | 10 | 2021-07-08T04:26:55.000Z | 2021-07-20T14:01:58.000Z | apps/credit_card/models.py | code-yeongyu/pangpang-eats-backend | cafad5a1cae47ab86ca71028379b72837ea4543d | [
"MIT"
] | 3 | 2021-07-08T04:06:59.000Z | 2021-10-02T04:32:16.000Z | from django.db import models
from django.core.validators import MinLengthValidator
from apps.user.models import User
from pangpangeats.settings import AUTH_USER_MODEL
from apps.common.models import BaseModel
from apps.common.validators import numeric_validator
class CreditCard(BaseModel):
owner: User = models.ForeignKey(AUTH_USER_MODEL,
on_delete=models.CASCADE,
null=False)
owner_first_name = models.CharField(max_length=5, null=False, blank=False)
owner_last_name = models.CharField(max_length=5, null=False, blank=False)
alias = models.CharField(max_length=100, null=True, blank=True)
card_number = models.CharField(
validators=(MinLengthValidator(16), ),
max_length=16,
null=False,
blank=False,
)
cvc = models.CharField(
validators=(
MinLengthValidator(3),
numeric_validator,
),
max_length=3,
null=False,
blank=False,
)
    # both should be in the future; this is validated in the serializer rather than on the model
expiry_year = models.PositiveSmallIntegerField(null=False)
expiry_month = models.PositiveSmallIntegerField(null=False)
def __str__(self): # pragma: no cover
CARD_NUMBER = self.card_number[:4] + "-****" * 3
return f"{self.owner_last_name}{self.owner_first_name} {CARD_NUMBER}" | 39.054054 | 111 | 0.673356 | 173 | 1,445 | 5.450867 | 0.404624 | 0.066808 | 0.059385 | 0.080594 | 0.101803 | 0.101803 | 0.101803 | 0.101803 | 0.101803 | 0.101803 | 0 | 0.011818 | 0.238754 | 1,445 | 37 | 112 | 39.054054 | 0.845455 | 0.084429 | 0 | 0.121212 | 0 | 0 | 0.048448 | 0.034065 | 0 | 0 | 0 | 0 | 0 | 1 | 0.030303 | false | 0 | 0.181818 | 0 | 0.515152 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
ace31bc75f6c304a3efff0e6911ae27ee2b4ecee | 1,064 | py | Python | DataProcessor/dev_set_partition.py | cherry979988/feedforward-RE | 546a608a8cb5b35c475e577995df70a89affa15e | [
"MIT"
] | 1 | 2019-08-25T00:44:27.000Z | 2019-08-25T00:44:27.000Z | DataProcessor/dev_set_partition.py | cherry979988/feedforward-RE | 546a608a8cb5b35c475e577995df70a89affa15e | [
"MIT"
] | null | null | null | DataProcessor/dev_set_partition.py | cherry979988/feedforward-RE | 546a608a8cb5b35c475e577995df70a89affa15e | [
"MIT"
] | null | null | null | __author__ = 'QinyuanYe'
import sys
import random
from shutil import copyfile
# split the original train set into
# 90% train-set (train_split.json) and 10% dev-set (dev.json)
if __name__ == "__main__":
random.seed(1234)
if len(sys.argv) != 3:
        print 'Usage: dev_set_partition.py -DATA -ratio'
exit(1)
dataset = sys.argv[1]
ratio = float(sys.argv[2])
dir = 'data/source/%s' % sys.argv[1]
original_train_json = dir + '/train.json'
train_json = dir + '/train_split.json'
dev_json = dir + '/dev.json'
if 'TACRED' in dataset or 'Sub' in dataset:
print '%s has a provided dev set, skip splitting' % dataset
copyfile(original_train_json, train_json)
exit(0)
fin = open(original_train_json, 'r')
lines = fin.readlines()
dev_size = int(ratio * len(lines))
random.shuffle(lines)
dev = lines[:dev_size]
train_split = lines[dev_size:]
fout1 = open(dev_json, 'w')
fout1.writelines(dev)
fout2 = open(train_json, 'w')
fout2.writelines(train_split)
| 24.744186 | 67 | 0.640038 | 151 | 1,064 | 4.304636 | 0.423841 | 0.096923 | 0.078462 | 0.052308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022059 | 0.233083 | 1,064 | 42 | 68 | 25.333333 | 0.77451 | 0.087406 | 0 | 0 | 0 | 0 | 0.166322 | 0.027893 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.103448 | null | null | 0.068966 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ace6e11dd2c37cb4d2255b5a7148639ad40c246e | 717 | py | Python | NetCatKS/Logger/api/implementers/__init__.py | dimddev/NetCatKS-CP | 2d9e72b2422e344569fd4eb154866b98e9707561 | [
"BSD-2-Clause"
] | null | null | null | NetCatKS/Logger/api/implementers/__init__.py | dimddev/NetCatKS-CP | 2d9e72b2422e344569fd4eb154866b98e9707561 | [
"BSD-2-Clause"
] | null | null | null | NetCatKS/Logger/api/implementers/__init__.py | dimddev/NetCatKS-CP | 2d9e72b2422e344569fd4eb154866b98e9707561 | [
"BSD-2-Clause"
] | null | null | null | __author__ = 'dimd'
from twisted.python import log
from zope.interface import implementer
from NetCatKS.Logger.api.interfaces import ILogger
GLOBAL_DEBUG = True
@implementer(ILogger)
class Logger(object):
def __init__(self):
pass
def debug(self, msg):
if GLOBAL_DEBUG is True:
log.msg('[ ====== DEBUG ]: {}'.format(msg))
def info(self, msg):
log.msg('[ ++++++ INFO ]: {}'.format(msg))
def warning(self, msg):
log.msg('[ !!!!!! WARNING ]: {}'.format(msg))
def error(self, msg):
log.msg('[ ------ ERROR ]: {}'.format(msg))
def critical(self, msg):
log.msg('[ @@@@@@ CRITICAL ]: {}'.format(msg))
__all__ = [
'Logger'
] | 19.378378 | 55 | 0.563459 | 83 | 717 | 4.698795 | 0.39759 | 0.089744 | 0.123077 | 0.133333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.242678 | 717 | 37 | 56 | 19.378378 | 0.718232 | 0 | 0 | 0 | 0 | 0 | 0.158774 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.26087 | false | 0.043478 | 0.130435 | 0 | 0.434783 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ace927c4fc1e25bbda7ec5e5f7a33fa84304d5ec | 9,388 | py | Python | videoServer.py | Hugoargui/eyeDetector | 8c0361f90dacc2e5d8262cca40b34165fdda841a | [
"MIT"
] | null | null | null | videoServer.py | Hugoargui/eyeDetector | 8c0361f90dacc2e5d8262cca40b34165fdda841a | [
"MIT"
] | null | null | null | videoServer.py | Hugoargui/eyeDetector | 8c0361f90dacc2e5d8262cca40b34165fdda841a | [
"MIT"
] | 3 | 2015-04-11T15:23:22.000Z | 2021-02-09T07:19:07.000Z | ## MIT LICENSE
#Copyright (c) 2014 Hugo Arguinariz.
#http://www.hugoargui.com
#
#Permission is hereby granted, free of charge, to any person
#obtaining a copy of this software and associated documentation
#files (the "Software"), to deal in the Software without
#restriction, including without limitation the rights to use,
#copy, modify, merge, publish, distribute, sublicense, and/or sell
#copies of the Software, and to permit persons to whom the
#Software is furnished to do so, subject to the following
#conditions:
#The above copyright notice and this permission notice shall be
#included in all copies or substantial portions of the Software.
#THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
#EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
#OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
#NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
#HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
#WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
#FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
#OTHER DEALINGS IN THE SOFTWARE.
## This module requires the SimpleWebSocketServer module by Opiate
## http://opiate.github.io/SimpleWebSocketServer/
## That software is also distributed under MIT license
## I am in not the author of SimpleWebSocketServer.py
####################################################################################################!/
## videoServer.py
## Inputs: NONE
## Outputs: NONE
## Non standard modules: eyeDetector, SimpleWebSocketServer
####################################################################################################!/usr/bin/env python
## This module runs on the server side
## It is is expected to continuously run on the background
## This is not a web server, a web app will need a real web server (Apache?) running in parallel
## On the client side (website) the browser is expected to open a WebSocket to this server
## The browser can capture webcam images from the user using Javascript + WebRTC
## The browser then sends several video frames per second to this server via the WebRTC socket
## For each video frame, this server uses the eyeDetector module to detect the eyes on the image
## This is done in 3 steps:
## A) The received image is decoded (it had been encoded by the client javascript before sending it over websocket
## B) The eyes are detected on the image.
## ## This returns: Eye coordinates (int X, int Y)
## Image modified to include green rectangles around the person eyes
## C) The new image is encoded to a format suitable to be sent back to the client via websockets
## Once the video frames have been processed, the data can be sent back to the browser via the same websocket connection
## In addition to the eye coordinates (X, Y)
## The image from step C can be sent too.
## This last step is optional, it may be enough to send only the eye coordinate variables (X, Y)
## This coordinates could be used on the client side to draw the exact same rectangles
## If the image is not going to be sent, step C should be removed in order to improve performace.
####################################################################################################
####################################################################################################
import signal, sys, ssl, logging
import time
from SimpleWebSocketServer import WebSocket, SimpleWebSocketServer, SimpleSSLWebSocketServer
from optparse import OptionParser
import cv2
import numpy as np
import base64
## Import custom packages
import eyeDetector
import clientAnimation
try:
import simplejson as json
except:
import json
logging.basicConfig(format='%(asctime)s %(message)s', level=logging.DEBUG)
##################################################################################################
class VideoServer(WebSocket):
##############################################################################################
def handleMessage(self):
## STEP A
# Handle incoming video frame
if self.data is None:
self.data = ''
decImg = None ## Image after being decoded
procImg = None ## Image with rectangles around the eyes
encImg = None ## Image encoded in a format suitable to be sent over websocket
# #################################################
# Try processing the frame
try:
#########################################
# Decode image
# The image should have been received from the client in binary form
img = str(self.data)
img = np.fromstring(img, dtype=np.uint8)
decImg = eyeDetector.decodeImage(img)
if (decImg == None):
print self.address, 'ERROR: Could not decode image. System time: '+ str(time.clock())
if ( decImg != None):
## STEP B
## Nothing wrong, detect eyes in the image
procImg, eyesX, eyesY = eyeDetector.detectEyes(decImg)
else:
# Neither None nor !None... no image in the first place!
print self.address, 'ERROR: Could not find an image to process! '+ str(time.clock())
#########################################
# Encode image to send it back
if (procImg != None):
## STEP C
retval, encImg = eyeDetector.encodeImage(procImg)
if False == retval:
print self.address, ('ERROR: Could not encode image!'+ str(time.clock()))
else:
encImg = base64.b64encode(encImg)
else:
print self.address, 'ERROR: Could not find an image to encode!'
except Exception as n:
print 'OpenCV catch fail' + str(n)
# #################################################
# Try sending the frame back to the client
try:
if (encImg != None):
# eyesX and eyesY are of numpy.int type, which is not json serializable
# We get them back to normal python int
eyesX = np.asscalar(np.int16(eyesX))
eyesY = np.asscalar(np.int16(eyesY))
#jsonize all data to send
## If we don't wish to send encImage it should be removed from here
out = {'frame': encImg, 'eyesX': eyesX, 'eyesY': eyesY}
jsonMessage = json.dumps(out, default=lambda obj: obj.__dict__)
message = encImg
else:
print self.address, 'ERROR: Something went wrong, NOT sending any image. '+ str(time.clock())
self.sendMessage( jsonMessage )
except Exception as n:
print n
##############################################################################################
def handleConnected(self):
## Incoming websocket connection from a browser
## Several connections can be handled at the same time from different browsers
print self.address, 'Video Server: Connection received from client at system time: '+ str(time.clock())
##############################################################################################
def handleClose(self):
## The client closed the connection with the server
print self.address, 'Video Server: Connection closed at system time: '+ str(time.clock())
##################################################################################################
if __name__ == "__main__":
print ' '
print 'Video server waiting for requests. System time: '+ str(time.clock())
print '*****************************************************************'
## When launched from command line we parse OPTIONAL input arguments
## The defaults will work just fine most times
## The http port used by websocket connections is set by --port
parser = OptionParser(usage="usage: %prog [options]", version="%prog 1.0")
parser.add_option("--host", default='', type='string', action="store", dest="host", help="hostname (localhost)")
parser.add_option("--port", default=8090, type='int', action="store", dest="port", help="port (8000)")
parser.add_option("--example", default='VideoServer', type='string', action="store", dest="example", help="VideoServer, others")
parser.add_option("--ssl", default=0, type='int', action="store", dest="ssl", help="ssl (1: on, 0: off (default))")
parser.add_option("--cert", default='./cert.pem', type='string', action="store", dest="cert", help="cert (./cert.pem)")
parser.add_option("--ver", default=ssl.PROTOCOL_TLSv1, type=int, action="store", dest="ver", help="ssl version")
(options, args) = parser.parse_args()
cls = VideoServer
## If we wish to encode the websocket data stream
if options.ssl == 1:
server = SimpleSSLWebSocketServer(options.host, options.port, cls, options.cert, options.cert, version=options.ver)
else:
server = SimpleWebSocketServer(options.host, options.port, cls)
## Handle when shooting this server down
def close_sig_handler(signal, frame):
server.close()
sys.exit()
## START the server
signal.signal(signal.SIGINT, close_sig_handler)
server.serveforever()
| 46.246305 | 132 | 0.586387 | 1,110 | 9,388 | 4.937838 | 0.354955 | 0.016055 | 0.020434 | 0.019157 | 0.120416 | 0.068783 | 0.015326 | 0.015326 | 0.015326 | 0.015326 | 0 | 0.004355 | 0.217299 | 9,388 | 202 | 133 | 46.475248 | 0.741562 | 0.418513 | 0 | 0.126582 | 0 | 0 | 0.181137 | 0.015212 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.139241 | null | null | 0.151899 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
acf0e5f93f43919ca8a537e46d570aa00d8144da | 1,639 | py | Python | backend/serv/online_data.py | Alliance-Of-Independent-Programmers/acc-book | 3a0f9fa1092d7eee54102e787e2233607c6922cf | [
"MIT"
] | null | null | null | backend/serv/online_data.py | Alliance-Of-Independent-Programmers/acc-book | 3a0f9fa1092d7eee54102e787e2233607c6922cf | [
"MIT"
] | 1 | 2021-11-02T22:22:57.000Z | 2021-11-02T22:22:57.000Z | backend/serv/online_data.py | Alliance-Of-Independent-Programmers/acc-book | 3a0f9fa1092d7eee54102e787e2233607c6922cf | [
"MIT"
] | null | null | null | import base64
import os.path
path=os.path.dirname(__file__)
misha = base64.b64encode(open(os.path.join(path, "../Pics/Miahs.jpg"), "rb").read()).decode("UTF-8")
yaroslav = base64.b64encode(open(os.path.join(path, "../Pics/Yaroslav.jpg"), "rb").read()).decode("UTF-8")
goblin = base64.b64encode(open(os.path.join(path, "../Pics/Goblin.jpg"), "rb").read()).decode("UTF-8")
sanya = base64.b64encode(open(os.path.join(path, "../Pics/Sanya.jpg"), "rb").read()).decode("UTF-8")
artem = base64.b64encode(open(os.path.join(path, "../Pics/Artem.jpg"), "rb").read()).decode("UTF-8")
slava = base64.b64encode(open(os.path.join(path, "../Pics/Slava.jpg"), "rb").read()).decode("UTF-8")
andrew = base64.b64encode(open(os.path.join(path, "../Pics/Andrew.jpg"), "rb").read()).decode("UTF-8")
killreal = base64.b64encode(open(os.path.join(path, "../Pics/KillReal.jpg"), "rb").read()).decode("UTF-8")
mauri = base64.b64encode(open(os.path.join(path, "../Pics/Maury.jpg"), "rb").read()).decode("UTF-8")
online1 = {
"login": "Artem",
"img": artem,
}
online2 = {
"login": "Slava",
"img": slava,
}
online3 = {
"login": "Misha",
"img": misha,
}
online4 = {
"login": "Andrew",
"img": andrew,
}
online5 = {
"login": "Goblin",
"img": goblin,
}
online6 = {
"login": "KillReal",
"img": killreal,
}
online7 = {
"login": "Mauri",
"img": mauri,
}
online8 = {
"login": "Sany0K",
"img": sanya,
}
online9 = {
"login": "Yaroslave",
"img": yaroslav,
}
all_online = [
online1,
online2,
online3,
online4,
online5,
online6,
online7,
online8,
online9,
]
| 21.565789 | 106 | 0.594875 | 210 | 1,639 | 4.619048 | 0.2 | 0.068041 | 0.176289 | 0.194845 | 0.519588 | 0.519588 | 0.343299 | 0.343299 | 0 | 0 | 0 | 0.047826 | 0.158023 | 1,639 | 75 | 107 | 21.853333 | 0.655072 | 0 | 0 | 0 | 0 | 0 | 0.214548 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.033898 | 0 | 0.033898 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
acf75d195a7f9454dff3256ac3c4f362cd91d9cd | 529 | py | Python | core/network/Swin_T/__init__.py | ViTAE-Transformer/ViTAE-Transformer-Matting | 5cd1574cd46009a4e9660cabdc008718e20bc381 | [
"MIT"
] | 8 | 2022-03-31T05:58:45.000Z | 2022-03-31T13:24:18.000Z | core/network/Swin_T/__init__.py | ViTAE-Transformer/ViTAE-Transformer-Matting | 5cd1574cd46009a4e9660cabdc008718e20bc381 | [
"MIT"
] | null | null | null | core/network/Swin_T/__init__.py | ViTAE-Transformer/ViTAE-Transformer-Matting | 5cd1574cd46009a4e9660cabdc008718e20bc381 | [
"MIT"
] | null | null | null | from .swin_stem_pooling5_transformer import swin_stem_pooling5_encoder
from .swin_stem_pooling5_transformer import SwinStemPooling5TransformerMatting
from .decoder import SwinStemPooling5TransformerDecoderV1
__all__ = ['p3mnet_swin_t']
def p3mnet_swin_t(pretrained=True, img_size=512, **kwargs):
encoder = swin_stem_pooling5_encoder(pretrained=pretrained, img_size=img_size, **kwargs)
decoder = SwinStemPooling5TransformerDecoderV1()
model = SwinStemPooling5TransformerMatting(encoder, decoder)
return model
| 35.266667 | 92 | 0.835539 | 55 | 529 | 7.618182 | 0.4 | 0.076372 | 0.152745 | 0.095465 | 0.176611 | 0.176611 | 0 | 0 | 0 | 0 | 0 | 0.031579 | 0.102079 | 529 | 14 | 93 | 37.785714 | 0.850526 | 0 | 0 | 0 | 0 | 0 | 0.024575 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.333333 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
acfb03705c27649ad1f5865c957917038f62a92e | 2,872 | py | Python | setup.py | C0DK/lightbus | be5cc2771b1058f7c927cca870ed75d4cbbe61a3 | [
"Apache-2.0"
] | null | null | null | setup.py | C0DK/lightbus | be5cc2771b1058f7c927cca870ed75d4cbbe61a3 | [
"Apache-2.0"
] | null | null | null | setup.py | C0DK/lightbus | be5cc2771b1058f7c927cca870ed75d4cbbe61a3 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# DO NOT EDIT THIS FILE!
# This file has been autogenerated by dephell <3
# https://github.com/dephell/dephell
try:
from setuptools import setup
except ImportError:
from distutils.core import setup
import os.path
readme = ""
here = os.path.abspath(os.path.dirname(__file__))
readme_path = os.path.join(here, "README.rst")
if os.path.exists(readme_path):
with open(readme_path, "rb") as stream:
readme = stream.read().decode("utf8")
setup(
long_description=readme,
name="lightbus",
version="1.1.0",
description="RPC & event framework for Python 3",
python_requires=">=3.7",
project_urls={
"documentation": "https://lightbus.org",
"homepage": "https://lightbus.org",
"repository": "https://github.com/adamcharnock/lightbus/",
},
author="Adam Charnock",
author_email="adam@adamcharnock.com",
keywords="python messaging redis bus queue",
classifiers=[
"Development Status :: 5 - Production/Stable",
"Framework :: AsyncIO",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Natural Language :: English",
"Operating System :: MacOS :: MacOS X",
"Operating System :: POSIX",
"Programming Language :: Python :: 3",
"Topic :: System :: Networking",
"Topic :: Communications",
],
entry_points={
"console_scripts": ["lightbus = lightbus.commands:lightbus_entry_point"],
"lightbus_event_transports": [
"debug = lightbus:DebugEventTransport",
"redis = lightbus:RedisEventTransport",
],
"lightbus_plugins": [
"internal_metrics = lightbus.plugins.metrics:MetricsPlugin",
"internal_state = lightbus.plugins.state:StatePlugin",
],
"lightbus_result_transports": [
"debug = lightbus:DebugResultTransport",
"redis = lightbus:RedisResultTransport",
],
"lightbus_rpc_transports": [
"debug = lightbus:DebugRpcTransport",
"redis = lightbus:RedisRpcTransport",
],
"lightbus_schema_transports": [
"debug = lightbus:DebugSchemaTransport",
"redis = lightbus:RedisSchemaTransport",
],
},
packages=[
"lightbus",
"lightbus.client",
"lightbus.client.docks",
"lightbus.client.internal_messaging",
"lightbus.client.subclients",
"lightbus.commands",
"lightbus.config",
"lightbus.plugins",
"lightbus.schema",
"lightbus.serializers",
"lightbus.transports",
"lightbus.transports.redis",
"lightbus.utilities",
],
package_dir={"": "."},
package_data={},
install_requires=["aioredis>=1.2.0", "jsonschema>=3.2", "pyyaml>=3.12"],
)
| 31.56044 | 81 | 0.607591 | 266 | 2,872 | 6.447368 | 0.530075 | 0.017493 | 0.053644 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008858 | 0.253134 | 2,872 | 90 | 82 | 31.911111 | 0.790676 | 0.043872 | 0 | 0.088608 | 1 | 0 | 0.522802 | 0.20467 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.050633 | 0 | 0.050633 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
acfb6bea34e4f21d414dc262f6f49c3c957210d9 | 5,840 | py | Python | src/skill_algorithms/trueskill_data_processing.py | EllAchE/nba_tipoff | f3820e391d4a6ddb611efeb6c709f16876771684 | [
"MIT"
] | null | null | null | src/skill_algorithms/trueskill_data_processing.py | EllAchE/nba_tipoff | f3820e391d4a6ddb611efeb6c709f16876771684 | [
"MIT"
] | null | null | null | src/skill_algorithms/trueskill_data_processing.py | EllAchE/nba_tipoff | f3820e391d4a6ddb611efeb6c709f16876771684 | [
"MIT"
] | null | null | null | import ENVIRONMENT
from src.database.database_creation import createPlayerTrueSkillDictionary
from src.skill_algorithms.algorithms import trueSkillMatchWithRawNums, trueSkillTipWinProb
from src.skill_algorithms.common_data_processing import beforeMatchPredictions, runAlgoForSeason, runAlgoForAllSeasons
# backlogtodo optimize trueskill, glicko etc. for rapid iteration
# backlogtodo refactor equations here to be generic
def runTrueSkillForSeason(seasonCsv: str, winningBetThreshold: float= ENVIRONMENT.GLICKO_TIPOFF_ODDS_THRESHOLD, startFromBeginning=False):
runAlgoForSeason(seasonCsv, ENVIRONMENT.PLAYER_TRUESKILL_DICT_PATH, ENVIRONMENT.TS_PREDICTION_SUMMARIES_PATH,
trueSkillBeforeMatchPredictions, trueSkillUpdateDataSingleTipoff, winningBetThreshold,
columnAdds=['Home TS Mu', 'Away TS Mu', 'Home TS Sigma', 'Away TS Sigma', 'Home Lifetime Appearances',
'Away Lifetime Appearances', 'Home Tipper Wins', 'Away Tipper Wins', 'Home Tipper Losses', 'Away Tipper Losses'], startFromBeginning=startFromBeginning)
# backlogtodo setup odds prediction to use Ev or win prob rather than bet threshold
def trueSkillBeforeMatchPredictions(psd, homePlayerCode, awayPlayerCode, homeTeam, awayTeam, tipWinnerCode, scoringTeam, predictionArray=None, actualArray=None, histogramPredictionsDict=None,
winningBetThreshold=ENVIRONMENT.TS_TIPOFF_ODDS_THRESHOLD):
return beforeMatchPredictions(psd, homePlayerCode, awayPlayerCode, homeTeam, awayTeam, tipWinnerCode, scoringTeam, predictionArray=predictionArray, actualArray=actualArray, histogramPredictionsDict=histogramPredictionsDict, predictionSummaryPath=ENVIRONMENT.TS_PREDICTION_SUMMARIES_PATH,
minimumTipWinPercentage=winningBetThreshold, predictionFunction=trueSkillTipWinProb, minimumAppearances=ENVIRONMENT.MIN_TS_APPEARANCES)
def runTSForAllSeasons(seasons, winningBetThreshold=ENVIRONMENT.TS_TIPOFF_ODDS_THRESHOLD):
runAlgoForAllSeasons(seasons, ENVIRONMENT.PLAYER_TRUESKILL_DICT_PATH, ENVIRONMENT.TS_PREDICTION_SUMMARIES_PATH, trueSkillBeforeMatchPredictions, trueSkillUpdateDataSingleTipoff,
winningBetThreshold, columnAdds=['Home TS Mu', 'Away TS Mu', 'Home TS Sigma', 'Away TS Sigma', 'Home Lifetime Appearances',
'Away Lifetime Appearances', 'Home Tipper Wins', 'Away Tipper Wins', 'Home Tipper Losses', 'Away Tipper Losses'])
def trueSkillUpdateDataSingleTipoff(psd, winnerCode, loserCode, homePlayerCode, game_code=None):
if game_code:
print(game_code)
winnerCode = winnerCode[11:]
loserCode = loserCode[11:]
winnerOgMu = psd[winnerCode]["mu"]
winnerOgSigma = psd[winnerCode]["sigma"]
loserOgMu = psd[loserCode]["mu"]
loserOgSigma = psd[loserCode]["sigma"]
winnerMu, winnerSigma, loserMu, loserSigma = trueSkillMatchWithRawNums(psd[winnerCode]["mu"], psd[winnerCode]["sigma"], psd[loserCode]['mu'], psd[loserCode]["sigma"])
winnerWinCount = psd[winnerCode]["wins"] + 1
winnerAppearances = psd[winnerCode]["appearances"] + 1
loserLosses = psd[loserCode]["losses"] + 1
loserAppearances = psd[loserCode]["appearances"] + 1
psd[winnerCode]["wins"] = winnerWinCount
psd[winnerCode]["appearances"] = winnerAppearances
psd[loserCode]["losses"] = loserLosses
psd[loserCode]["appearances"] = loserAppearances
psd[winnerCode]["mu"] = winnerMu
psd[winnerCode]["sigma"] = winnerSigma
psd[loserCode]["mu"] = loserMu
psd[loserCode]["sigma"] = loserSigma
print('Winner:', winnerCode, 'trueskill increased', winnerMu - winnerOgMu, 'to', winnerMu, '. Sigma is now', winnerSigma, '. W:', winnerWinCount, 'L', winnerAppearances - winnerWinCount)
print('Loser:', loserCode, 'trueskill decreased', loserMu - loserOgMu, 'to', loserMu, '. Sigma is now', loserSigma, '. W:', loserAppearances - loserLosses, 'L', loserLosses)
# backlogtodo refactor repeated code out of algo methods
if homePlayerCode == winnerCode:
homeMu = winnerOgMu
homeSigma = winnerOgSigma
awayMu = loserOgMu
awaySigma = loserOgSigma
homeAppearances = winnerAppearances - 1
awayAppearances = loserAppearances - 1
homeWins = winnerWinCount - 1
homeLosses = psd[winnerCode]["losses"]
awayWins = psd[loserCode]["wins"]
awayLosses = loserLosses
elif homePlayerCode == loserCode:
homeMu = loserOgMu
homeSigma = loserOgSigma
awayMu = winnerOgMu
awaySigma = winnerOgSigma
awayAppearances = winnerAppearances
homeAppearances = loserAppearances
awayWins = winnerWinCount - 1
awayLosses = psd[winnerCode]["losses"]
homeWins = psd[loserCode]["wins"]
homeLosses = loserLosses
else:
raise ValueError('neither code matches')
return {"Home TS Mu": homeMu, "Home TS Sigma": homeSigma, "Away TS Mu": awayMu, "Away TS Sigma": awaySigma, "Home Lifetime Appearances": homeAppearances, "Away Lifetime Appearances": awayAppearances,
"Home Tipper Wins": homeWins, "Home Tipper Losses": homeLosses, "Away Tipper Wins": awayWins, "Away Tipper Losses": awayLosses}
def calculateTrueSkillDictionaryFromZero():
createPlayerTrueSkillDictionary() # clears the stored values,
runTSForAllSeasons(ENVIRONMENT.ALL_SEASONS_LIST, winningBetThreshold=ENVIRONMENT.TS_TIPOFF_ODDS_THRESHOLD)
print("\n", "trueskill dictionary updated for seasons", ENVIRONMENT.ALL_SEASONS_LIST, "\n")
def updateTrueSkillDictionaryFromLastGame():
runTrueSkillForSeason(ENVIRONMENT.CURRENT_SEASON_CSV, winningBetThreshold=ENVIRONMENT.TS_TIPOFF_ODDS_THRESHOLD, startFromBeginning=False)
print("\n", "trueskill dictionary updated from last game", "\n")
| 62.12766 | 291 | 0.738356 | 533 | 5,840 | 8.003752 | 0.285178 | 0.039616 | 0.022269 | 0.035631 | 0.269808 | 0.23113 | 0.18331 | 0.18331 | 0.142991 | 0.142991 | 0 | 0.002474 | 0.169349 | 5,840 | 93 | 292 | 62.795699 | 0.876933 | 0.04726 | 0 | 0 | 0 | 0 | 0.147922 | 0 | 0 | 0 | 0 | 0.010753 | 0 | 1 | 0.081081 | false | 0 | 0.054054 | 0.013514 | 0.162162 | 0.067568 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4a0f002191d74bb50bea698bd5937724a5cff101 | 1,648 | py | Python | gamer_registration_system/con/tests/test_models.py | splummer/gamer_reg | 7cccbbf8e6e52e46594c8128a7e7a523b8202f03 | [
"MIT"
] | null | null | null | gamer_registration_system/con/tests/test_models.py | splummer/gamer_reg | 7cccbbf8e6e52e46594c8128a7e7a523b8202f03 | [
"MIT"
] | null | null | null | gamer_registration_system/con/tests/test_models.py | splummer/gamer_reg | 7cccbbf8e6e52e46594c8128a7e7a523b8202f03 | [
"MIT"
] | null | null | null | import pytest
import datetime
from django.test import TestCase
from django.utils import timezone
from gamer_registration_system.con.models import Convention, Event, EventSchedule
# Create your tests here.
class EventScheduleModelTests(TestCase):
new_con = Convention(convention_name='Test Future Con')
new_event = Event(convention=new_con, title='Test Future Event')
def test_recent_event_with_future_start(self, new_con=new_con, new_event=new_event):
"""
recent_event() returns False for events whose start_date
is in the future.
"""
time = timezone.now() + datetime.timedelta(days=30)
future_eventsched = EventSchedule(convention=new_con, event=new_event, start_date=time)
self.assertIs(future_eventsched.recent_event(), False)
def test_recent_event_with_old_event(self, new_con=new_con, new_event=new_event):
"""
recent_event() returns False for events whose start_date is older than 1 day
"""
time = timezone.now() - datetime.timedelta(days=1, seconds=1)
old_event = EventSchedule(convention=new_con, event=new_event, start_date=time)
self.assertIs(old_event.recent_event(), False)
def test_recent_event_with_recent_question(self, new_con=new_con, new_event=new_event):
"""
recent_event() returns True for events whose start_date is within the last day
"""
time = timezone.now() - datetime.timedelta(hours=23, minutes=59, seconds=59)
recent_event = EventSchedule(convention=new_con, event=new_event, start_date=time)
self.assertIs(recent_event.recent_event(), True)
| 43.368421 | 95 | 0.723908 | 222 | 1,648 | 5.117117 | 0.279279 | 0.058099 | 0.047535 | 0.047535 | 0.575704 | 0.556338 | 0.4375 | 0.4375 | 0.370599 | 0.370599 | 0 | 0.008197 | 0.18568 | 1,648 | 37 | 96 | 44.540541 | 0.838301 | 0.154733 | 0 | 0 | 0 | 0 | 0.024335 | 0 | 0 | 0 | 0 | 0 | 0.15 | 1 | 0.15 | false | 0 | 0.25 | 0 | 0.55 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
4a12505b82a3a30031591a8cfae55ddea0b1e877 | 978 | py | Python | mul_func.py | motokimura/shake_shake_chainer | 3b87193dbfcf58723586dfc34c9bc21a900da327 | [
"MIT"
] | 2 | 2018-11-26T13:51:56.000Z | 2019-08-12T00:22:20.000Z | mul_func.py | motokimura/shake_shake_chainer | 3b87193dbfcf58723586dfc34c9bc21a900da327 | [
"MIT"
] | null | null | null | mul_func.py | motokimura/shake_shake_chainer | 3b87193dbfcf58723586dfc34c9bc21a900da327 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import chainer
from chainer import cuda
from chainer import configuration
class Mul(chainer.function.Function):
def __init__(self):
return
def forward(self, inputs):
x1, x2 = inputs
        xp = cuda.get_array_module(x1)  # get the matching array module: xp is numpy (CPU) or cupy (GPU)
alpha = xp.ones(x1.shape, dtype=x1.dtype) * 0.5
if configuration.config.train:
for i in range(len(alpha)):
alpha[i] = xp.random.rand()
return x1 * alpha + x2 * (xp.ones(x1.shape, dtype=x1.dtype) - alpha),
def backward(self, inputs, grad_outputs):
gx, = grad_outputs
xp = cuda.get_array_module(gx)
beta = xp.empty(gx.shape, dtype=gx.dtype)
for i in range(len(beta)):
beta[i] = xp.random.rand()
return gx * beta, gx * (xp.ones(gx.shape, dtype=gx.dtype) - beta)
def mul(x1, x2):
return Mul()(x1, x2)
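# Minimal usage sketch (not part of the original file; shapes are illustrative).
# chainer.config.train defaults to True, so each row gets its own random
# mixing coefficient alpha in the forward pass above.
if __name__ == '__main__':
    import numpy as np
    x1 = chainer.Variable(np.ones((4, 3), dtype=np.float32))
    x2 = chainer.Variable(np.zeros((4, 3), dtype=np.float32))
    y = mul(x1, x2)  # y = alpha * x1 + (1 - alpha) * x2, alpha drawn per row
    print(y.data)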
| 26.432432 | 81 | 0.578732 | 140 | 978 | 3.971429 | 0.392857 | 0.071942 | 0.061151 | 0.05036 | 0.348921 | 0.089928 | 0.089928 | 0 | 0 | 0 | 0 | 0.022956 | 0.287321 | 978 | 36 | 82 | 27.166667 | 0.774749 | 0.084867 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.173913 | false | 0 | 0.130435 | 0.086957 | 0.521739 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
4a18f414dde63eff7e1a06931ef3b1725eecda3f | 539 | py | Python | tests/import/module_getattr.py | sebi5361/micropython | 6c054cd124bc6229bee127128264dc0829dea53c | [
"MIT"
] | 198 | 2017-03-24T23:23:54.000Z | 2022-01-07T07:14:00.000Z | tests/import/module_getattr.py | sebi5361/micropython | 6c054cd124bc6229bee127128264dc0829dea53c | [
"MIT"
] | 509 | 2017-03-28T19:37:18.000Z | 2022-03-31T20:31:43.000Z | tests/import/module_getattr.py | sebi5361/micropython | 6c054cd124bc6229bee127128264dc0829dea53c | [
"MIT"
] | 187 | 2017-03-24T23:23:58.000Z | 2022-02-25T01:48:45.000Z | # test __getattr__ on module
# ensure that does_not_exist doesn't exist to start with
this = __import__(__name__)
try:
this.does_not_exist
assert False
except AttributeError:
pass
# define __getattr__
def __getattr__(attr):
if attr == 'does_not_exist':
return False
raise AttributeError
# do feature test (will also test functionality if the feature exists)
if not hasattr(this, 'does_not_exist'):
print('SKIP')
raise SystemExit
# check that __getattr__ works as expected
print(this.does_not_exist)
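# Background note: module-level __getattr__ is the PEP 562 feature (CPython
# 3.7+); the hasattr() probe above skips this test on ports that do not
# implement it.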
| 22.458333 | 70 | 0.742115 | 75 | 539 | 4.88 | 0.586667 | 0.095628 | 0.163934 | 0.131148 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.19295 | 539 | 23 | 71 | 23.434783 | 0.841379 | 0.38961 | 0 | 0 | 0 | 0 | 0.099071 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 1 | 0.071429 | false | 0.071429 | 0.071429 | 0 | 0.214286 | 0.142857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
4a1b274e1ae2fba791921111ffefa8e3eb237de0 | 826 | py | Python | Own/Python/Tutorials/Lists.py | cychitivav/programming_exercises | e8e7ddb4ec4eea52ee0d3826a144c7dc97195e78 | [
"MIT"
] | null | null | null | Own/Python/Tutorials/Lists.py | cychitivav/programming_exercises | e8e7ddb4ec4eea52ee0d3826a144c7dc97195e78 | [
"MIT"
] | null | null | null | Own/Python/Tutorials/Lists.py | cychitivav/programming_exercises | e8e7ddb4ec4eea52ee0d3826a144c7dc97195e78 | [
"MIT"
] | null | null | null | #Cristian Chitiva
#cychitivav@unal.edu.co
#12/Sept/2018
myList = ['Hi', 5, 6 , 3.4, "i"] #Create the list
myList.append([4, 5]) #Add sublist [4, 5] to myList
myList.insert(2,"f") #Add "f" in the position 2
print(myList)
myList = [1, 3, 4, 5, 23, 4, 3, 222, 454, 6445, 6, 4654, 455]
myList.sort() #Sort the list from lowest to highest
print(myList)
myList.sort(reverse = True) #Sort the list from highest to lowest
print(myList)
myList.extend([5, 77]) #Add 5 and 77 to myList
print(myList)
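# Side note (illustrative addition): sorted() returns a new sorted list and
# leaves myList itself unchanged, unlike the in-place list.sort() used above.
print(sorted(myList, reverse=True))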
#List comprehensions: first the explicit append loop, then equivalent one-line comprehensions
myList = []
for value in range(0, 50):
myList.append(value)
print(myList)
myList = ["f" for value in range(0,20)]
print(myList)
myList = [value for value in range(0,20)]
print(myList)
myList = [value for value in range(0,60,3) if value % 2 == 0]
print(myList) | 25.030303 | 68 | 0.641646 | 140 | 826 | 3.785714 | 0.385714 | 0.166038 | 0.192453 | 0.113208 | 0.211321 | 0.181132 | 0.181132 | 0.181132 | 0.181132 | 0.181132 | 0 | 0.096626 | 0.210654 | 826 | 33 | 69 | 25.030303 | 0.716258 | 0.27845 | 0 | 0.380952 | 0 | 0 | 0.009009 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.380952 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4a1b69377cbe114d0ff86a93ac41f2645662116e | 7,955 | py | Python | src/scripts/retrive_sfd_data.py | seattlepublicrecords/revealseattle | 727d6fcaed3abd3170f4941d249067c52e86bf96 | [
"Apache-2.0"
] | null | null | null | src/scripts/retrive_sfd_data.py | seattlepublicrecords/revealseattle | 727d6fcaed3abd3170f4941d249067c52e86bf96 | [
"Apache-2.0"
] | 5 | 2016-10-12T05:21:56.000Z | 2016-10-12T10:26:10.000Z | src/scripts/retrive_sfd_data.py | seattlepublicrecords/revealseattle | 727d6fcaed3abd3170f4941d249067c52e86bf96 | [
"Apache-2.0"
] | null | null | null | import sys, traceback
from bs4 import BeautifulSoup
import rethinkdb as r
import requests
import time
import geocoder
import datetime
from dateutil.parser import parse as dtparse
from pytz import timezone
r.connect( "localhost", 28015).repl()
la = timezone('America/Los_Angeles')
table = r.db("revealseattle").table("dispatch_log")
dbtable = r.db("revealseattle").table("dispatch_log")
dbtable.delete().run()
def get_todays_dispatches():
already_geocoded = []
existing_data = dict([(row['id'], row) for row in dbtable.run()])
addresses_and_coordinates = dbtable.pluck('address', 'coordinates').run()
addresses_to_coordinates = dict([(item['address'], item['coordinates']) for item in addresses_and_coordinates if item.get('coordinates')])
addresses_and_place_names = dbtable.pluck('address', 'place_name').run()
addresses_to_place_names = dict([(item['address'], item['place_name']) for item in addresses_and_place_names if item.get('place_name', '').strip()])
addresses_and_assessor_ids = dbtable.pluck('address', 'assessor_id').run()
addresses_to_assessor_ids = dict([(item['address'], item['assessor_id']) for item in addresses_and_place_names if item.get('assessor_id', '').strip()])
html = requests.get('http://www2.seattle.gov/fire/realtime911/getRecsForDatePub.asp?action=Today&incDate=&rad1=des').text
soup = BeautifulSoup(html, 'lxml')
data = []
table = soup.findAll('tr')[3].find('table').find('table')
rows = table.find_all('tr')
# http://www2.seattle.gov/fire/realtime911/getRecsForDatePub.asp?incDate=09%2F24%2F16&rad1=des
previous_day = datetime.date.today()-datetime.timedelta(1)
previous_day = previous_day.strftime('%m%%2F%d%%2F%y')
html = requests.get('http://www2.seattle.gov/fire/realtime911/getRecsForDatePub.asp?incDate=%s&rad1=des' % (previous_day)).text
soup = BeautifulSoup(html, 'lxml')
data = []
table = soup.findAll('tr')[3].find('table').find('table')
rows.extend(table.find_all('tr'))
for row in rows:
cols = list(row.findAll('td'))
incident_id = cols[1].getText()
db_id = 'SFD_'+incident_id
existing_data_for_row = existing_data.get(db_id, {})
is_active = 'class="active"' in str(cols[0])
if is_active:
org_address = cols[4].getText()
address = org_address + ', Seattle'
address = address.replace('/', '&')
incident = {'id': db_id, 'agency': 'SFD', 'incident_id': incident_id, 'address': address, 'is_active': is_active, 'unit_timestamps': get_unit_dispatches_for_incident(incident_id)}
incident["number_of_units_dispatched"] = len(set([row['unit'] for row in incident["unit_timestamps"]]))
incident["number_of_units_in_service"] = len([row['in_service'] for row in incident["unit_timestamps"] if row['in_service']])
incident["org_address"] = org_address
incident["datetime"] = la.localize(dtparse(cols[0].getText()))
incident["type"] = cols[5].getText()
incident["streetview_url"] = 'https://maps.googleapis.com/maps/api/streetview?size=100x100&key=AIzaSyB59q3rCxkjqo3K2utcIh0_ju_-URL-L6g&location='+incident['address']
coordinates = addresses_to_coordinates.get(address)
if coordinates:
incident["coordinates"] = coordinates
else:
coordinates = geocoder.google(address, key='AIzaSyBE-WvY5WPBccBxW-97ZSBCBYEF80NBe7U').latlng
print coordinates
incident["coordinates"] = coordinates
place_name = addresses_to_place_names.get(address)
if place_name:
incident["place_name"] = place_name
else:
url = 'https://maps.googleapis.com/maps/api/place/nearbysearch/json?location=%s,%s&radius=30.48&key=AIzaSyBE-WvY5WPBccBxW-97ZSBCBYEF80NBe7U' % (incident["coordinates"][0], incident["coordinates"][1])
print url
place_name = '; '.join([row.get('name', ' ') for row in requests.get(url).json()['results'][1:]])
incident["place_name"] = place_name
assessor_id = addresses_to_assessor_ids.get(address)
if assessor_id:
incident["assessor_id"] = assessor_id
incident["assessor_image_url"] = existing_data_for_row.get('assessor_image_url')
else:
url = 'http://gismaps.kingcounty.gov/parcelviewer2/addSearchHandler.ashx?add='+address
items = requests.get(url).json()['items']
incident["assessor_id"] = items[0].get('PIN', None) if items else None
url_beginning = 'http://blue.kingcounty.com/Assessor/eRealProperty/Dashboard.aspx?ParcelNbr='
if incident["assessor_id"]:
url = '%s%s' % (url_beginning, incident["assessor_id"])
print 'ASSESSOR url', url
assessor_html = requests.get(url).text
#print assessor_html
html_id = 'kingcounty_gov_cphContent_FormViewPictCurr_CurrentImage'
image_url_beginning = 'http://blue.kingcounty.com/Assessor/eRealProperty/'
assessor_soup = BeautifulSoup(assessor_html, 'lxml')
image_url_end = assessor_soup.find(id=html_id)['src']
image_url = '%s%s' % (image_url_beginning, image_url_end)
else:
image_url = ''
incident["assessor_image_url"] = image_url
address_history = existing_data_for_row.get('address_history')
if address_history:
incident["address_history"] = address_history
else:
url = 'https://data.seattle.gov/resource/grwu-wqtk.json?$order=datetime DESC&address='+org_address
print url
incident["address_history"] = requests.get(url, verify=False).json()
data.append(incident)
else:
# was it previously active in last loop?
try:
if dbtable.get('SFD_'+incident_id).run()['is_active']:
dbtable.get('SFD_'+incident_id).update({"is_active": False, "unit_timestamps": get_unit_dispatches_for_incident(incident_id)}).run()
except:
exc_type, exc_value, exc_traceback = sys.exc_info()
print "*** print_tb:"
traceback.print_tb(exc_traceback, limit=1, file=sys.stdout)
print "*** print_exception:"
traceback.print_exception(exc_type, exc_value, exc_traceback,
limit=2, file=sys.stdout)
return data
def get_unit_dispatches_for_incident(incident_id):
incident_html = requests.get('http://www2.seattle.gov/fire/IncidentSearch/incidentDetail.asp?ID='+incident_id).text
incident_soup = BeautifulSoup(incident_html, 'lxml')
table = incident_soup.findAll('tr')[3].find('table').find('table')
rows = table.find_all('tr')
data = []
for row in rows[1:]:
cols = list(row.findAll('td'))
dispatched = cols[1].getText().strip()
arrived = cols[2].getText().strip()
in_service = cols[3].getText().strip()
data.append({'unit': cols[0].getText().strip().strip('*'), 'dispatched': dispatched, 'arrived': arrived, 'in_service': in_service})
return data
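# Illustrative shape of the list returned above (unit name and times are
# hypothetical values):
#   [{'unit': 'E16', 'dispatched': '10:01:02', 'arrived': '10:05:40',
#     'in_service': '10:31:12'}, ...]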
while True:
print '*'
try:
todays_data = get_todays_dispatches()
#print todays_data
print table.insert(todays_data).run(conflict='update')
except:
exc_type, exc_value, exc_traceback = sys.exc_info()
print "*** print_tb:"
traceback.print_tb(exc_traceback, limit=1, file=sys.stdout)
print "*** print_exception:"
traceback.print_exception(exc_type, exc_value, exc_traceback,
limit=2, file=sys.stdout)
time.sleep(5) | 56.021127 | 215 | 0.63193 | 935 | 7,955 | 5.159358 | 0.225668 | 0.022803 | 0.00995 | 0.014925 | 0.327736 | 0.28607 | 0.273632 | 0.244196 | 0.202944 | 0.167496 | 0 | 0.0139 | 0.231301 | 7,955 | 142 | 216 | 56.021127 | 0.77498 | 0.020993 | 0 | 0.30597 | 0 | 0.037313 | 0.228417 | 0.018756 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.067164 | null | null | 0.104478 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4a243a51f1c3704674b477f2c65030fc5b33c2c9 | 563 | py | Python | setup.py | gsgoncalves/K-NRM | b7bc8c44ddf6c8d0bc14a399beb05c9c1956fe2f | [
"BSD-3-Clause"
] | 198 | 2017-11-02T18:11:44.000Z | 2022-03-26T00:03:03.000Z | setup.py | gsgoncalves/K-NRM | b7bc8c44ddf6c8d0bc14a399beb05c9c1956fe2f | [
"BSD-3-Clause"
] | 18 | 2017-11-14T08:04:21.000Z | 2022-01-14T09:09:45.000Z | setup.py | gsgoncalves/K-NRM | b7bc8c44ddf6c8d0bc14a399beb05c9c1956fe2f | [
"BSD-3-Clause"
] | 43 | 2017-11-02T16:43:35.000Z | 2021-06-13T09:22:13.000Z | # Copyright (c) 2017, Carnegie Mellon University. All rights reserved.
#
# Use of the K-NRM package is subject to the terms of the software license set
# forth in the LICENSE file included with this software, and also available at
# https://github.com/AdeDZY/K-NRM/blob/master/LICENSE
from setuptools import setup
from setuptools import find_packages
setup(name='knrm',
version='0',
description='knrm',
author='Zhuyun Dai and Chenyan Xiong',
install_requires=['numpy', 'traitlets', 'tensorflow'],
packages=find_packages()
)
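# Typical usage (an assumption, not documented in this file): from the
# repository root run `pip install .` (or `python setup.py install`), which
# also pulls in the declared dependencies numpy, traitlets and tensorflow.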
| 31.277778 | 78 | 0.719361 | 77 | 563 | 5.220779 | 0.766234 | 0.024876 | 0.099502 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01087 | 0.182948 | 563 | 17 | 79 | 33.117647 | 0.863043 | 0.486679 | 0 | 0 | 0 | 0 | 0.215548 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.222222 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c57e5f3b0b518154b490e3d006aecf49846e001b | 5,271 | py | Python | source_synphot/source.py | gnarayan/source_synphot | 3bc3d48217ad7ea5630131e68fd3c544d14d10f6 | [
"MIT"
] | null | null | null | source_synphot/source.py | gnarayan/source_synphot | 3bc3d48217ad7ea5630131e68fd3c544d14d10f6 | [
"MIT"
] | null | null | null | source_synphot/source.py | gnarayan/source_synphot | 3bc3d48217ad7ea5630131e68fd3c544d14d10f6 | [
"MIT"
] | 1 | 2020-05-05T04:38:41.000Z | 2020-05-05T04:38:41.000Z | # -*- coding: UTF-8 -*-
"""
Source processing routines
"""
from __future__ import absolute_import
from __future__ import unicode_literals
import warnings
from collections import OrderedDict
from astropy.cosmology import default_cosmology
import numpy as np
import os
import pysynphot as S
import astropy.table as at
from . import io
from . import passband
def load_source(sourcenames):
"""
Loads sources
Parameters
----------
sourcenames : array-like
The source names. Passed to :py:func:`source_synphot.io.read_source`
Returns
-------
sources : dict
The dictionary of source spectra
See Also
--------
:py:func:`source_synphot.io.read_source`
"""
sources = OrderedDict()
if np.isscalar(sourcenames):
sourcenames = np.array(sourcenames, ndmin=1)
else:
sourcenames = np.array(sourcenames).flatten()
nsource = len(sourcenames)
for source in sourcenames:
try:
thissource = io.read_source(source)
except Exception as e:
message = 'Source {} not loaded'.format(source)
warnings.warn(message, RuntimeWarning)
continue
sources[source] = thissource
return sources
def pre_process_source(source, sourcemag, sourcepb, sourcez, smooth=True):
"""
Pre-process a source at some redshift ``sourcez`` back to the rest-frame
and normalize it to have magnitude ``sourcemag`` in passband ``sourcepb``
Parameters
----------
    source : str
The source spectrum filename
sourcemag : float
The magnitude of the source spectrum in passband ``sourcepb``
sourcepb : :py:class:`pysynphot.spectrum.ArraySpectralElement`
The passband in which `source` has magnitude ``sourcemag``
sourcez : float
The redshift of `source`
smooth : bool, optional
Smooth the spectrum (default: True)
Returns
-------
source : :py:class:`pysynphot.ArraySpectrum`
The de-redshifted, normalized and optionally smoothed spectrum
See Also
--------
:py:func:`astropy.table.Table.read`
"""
inspec = None
inspecz = np.nan
inspecmag = np.nan
inspecpb = None
source_table_file = os.path.join('sources', 'sourcetable.txt')
source_table_file = io.get_pkgfile(source_table_file)
source_table = at.Table.read(source_table_file, format='ascii')
ind = (source_table['specname'] == source)
nmatch = len(source_table['specname'][ind])
if nmatch == 1:
# load the file and the info
inspec = source_table['specname'][ind][0]
inspecz = source_table['redshift'][ind][0]
inspecmag = source_table['g'][ind][0] # for now, just normalize the g-band mag
elif nmatch == 0:
message = 'Spectrum {} not listed in lookup table'.format(source)
pass
else:
message = 'Spectrum {} not uniquely listed in lookup table'.format(source)
pass
if inspec is None:
warnings.warn(message, RuntimeWarning)
inspec = source
inspecz = sourcez
inspecmag = sourcemag
inspecpb = sourcepb
if not os.path.exists(inspec):
message = 'Spectrum {} could not be found'.format(inspec)
raise ValueError(message)
try:
spec = at.Table.read(inspec, names=('wave','flux'), format='ascii')
except Exception as e:
message = 'Could not read file {}'.format(source)
raise ValueError(message)
if hasattr(inspecpb,'wave') and hasattr(inspecpb, 'throughput'):
pass
else:
pbs = passband.load_pbs([inspecpb], 0.)
try:
inspecpb = pbs[inspecpb][0]
except KeyError as e:
message = 'Could not load passband {}'.format(inspecpb)
raise RuntimeError(message)
try:
inspecmag = float(inspecmag)
except (TypeError, ValueError) as e:
message = 'Source magnitude {} could not be interpreted as a float'.format(inspecmag)
raise ValueError(message)
try:
inspecz = float(inspecz)
except (TypeError, ValueError) as e:
message = 'Source redshift {} could not be interpreted as a float'.format(inspecz)
raise ValueError(message)
if inspecz < 0 :
message = 'Source must have positive definite cosmological redshift'
raise ValueError(message)
inspec = S.ArraySpectrum(spec['wave'], spec['flux'], fluxunits='flam')
try:
inspec = inspec.renorm(sourcemag, 'ABmag', inspecpb)
inspec.convert('flam')
except Exception as e:
message = 'Could not renormalize spectrum {}'.format(inspec)
raise RuntimeError(message)
if inspecz > 0:
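        # De-redshift: redshift() with z' = 1/(1+z) - 1 maps the observed-frame
        # wavelengths back to the rest frame, and the 10**(0.4*mu) factor then
        # undoes the distance dimming using the distance modulus mu.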
zblue = 1./(1+inspecz) - 1.
inspec_rest = inspec.redshift(zblue)
inspec_rest.convert('flam')
c = default_cosmology.get()
mu = c.distmod(inspecz)
out = inspec_rest*(10.**(0.4*mu.value))
else:
out = inspec
# TODO renorm is basic and just calculates dmag = RNval - what the original spectrum's mag is
# and renormalizes - there's some sanity checking for overlaps
# we can do this without using it and relying on the .passband routines
return out
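# Hypothetical usage sketch (the filename, magnitude, passband name and
# redshift below are illustrative only):
#   spec = pre_process_source('myspec.flm', sourcemag=19.3,
#                             sourcepb='sdss_g', sourcez=0.05)
# returns the spectrum blueshifted to the rest frame and renormalized to
# 19.3 ABmag in the given passband.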
| 30.824561 | 97 | 0.634415 | 613 | 5,271 | 5.391517 | 0.314845 | 0.033283 | 0.018154 | 0.016339 | 0.118911 | 0.1059 | 0.1059 | 0.02118 | 0 | 0 | 0 | 0.004631 | 0.262569 | 5,271 | 170 | 98 | 31.005882 | 0.845639 | 0.25517 | 0 | 0.27 | 0 | 0 | 0.131453 | 0 | 0 | 0 | 0 | 0.005882 | 0 | 1 | 0.02 | false | 0.06 | 0.11 | 0 | 0.15 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
c57ebe95c16965a0958db44e83cf0116a400cb09 | 415 | py | Python | objects.py | umesh0689/Mandorian | bb5bd963686c020482082dbb3e52218520410276 | [
"MIT"
] | null | null | null | objects.py | umesh0689/Mandorian | bb5bd963686c020482082dbb3e52218520410276 | [
"MIT"
] | null | null | null | objects.py | umesh0689/Mandorian | bb5bd963686c020482082dbb3e52218520410276 | [
"MIT"
] | null | null | null | from random import randint
class Objects:
def __init__(self):
self._type=' '
self._arr=[]
def creating_objects(self):
times=randint(10,20)
for i in range(times):
temp=[]
temp.append(randint(1,4))#1=slant 2=horizontal 3=vertical 4=powerup
temp.append(randint(1,27))
temp.append(randint(0,330))
self._arr.append(temp) | 31.923077 | 79 | 0.573494 | 54 | 415 | 4.259259 | 0.592593 | 0.130435 | 0.221739 | 0.156522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.058621 | 0.301205 | 415 | 13 | 80 | 31.923077 | 0.734483 | 0.098795 | 0 | 0 | 0 | 0 | 0.002674 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0 | 0.076923 | 0 | 0.307692 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c58427c4a5c249ad637ef0452752594d7390392b | 40,912 | py | Python | zziplib-0.13.62/docs/make-doc.py | guangbin79/Lua_5.1.5-Android | d85e3f54169a36c9281f7da9ad2b4c3c34027e4b | [
"MIT"
] | 28 | 2017-04-20T06:21:26.000Z | 2021-12-10T15:22:51.000Z | zziplib-0.13.62/docs/make-doc.py | guangbin79/Lua_5.1.5-Android | d85e3f54169a36c9281f7da9ad2b4c3c34027e4b | [
"MIT"
] | 3 | 2017-04-05T00:41:45.000Z | 2020-04-04T00:44:24.000Z | zziplib-0.13.62/docs/make-doc.py | guangbin79/Lua_5.1.5-Android | d85e3f54169a36c9281f7da9ad2b4c3c34027e4b | [
"MIT"
] | 15 | 2015-02-23T16:35:28.000Z | 2022-03-25T13:40:33.000Z | #! /usr/bin/python
# -*- coding: UTF-8 -*-
import sys
import re
import string
import commands
import warnings
errors = 0
def warn(msg, error=None):
global errors
errors += 1
if error is None:
warnings.warn("-- "+str(errors)+" --\n "+msg, RuntimeWarning, 2)
else:
warnings.warn("-- "+str(errors)+" --\n "+msg+
"\n error was "+str(error), RuntimeWarning, 2)
#fu
# beware, stupid python interprets backslashes in repl only partially!
def s(string, pattern, repl, count=0):
return re.sub(pattern, repl, string, count)
def m(string, pattern):
return re.match(pattern, string)
def sorted_keys(dict):
keys = dict.keys()
keys.sort()
return keys
# we make up a few formatter routines to help in the processing:
def html2docbook(text):
""" the C comment may contain html markup - simulate with docbook tags """
return (
s(s(s(s(s(s(s(s(s(s(s(text,
r"<br\s*/?>",""),
r"(</?)em>",r"\1emphasis>"),
r"<code>","<userinput>"),
r"</code>","</userinput>"),
r"<link>","<function>"),
r"</link>","</function>"),
r"(?s)\s*</screen>","</screen>"),
# r"<ul>","</para><itemizedlist>"),
# r"</ul>","</itemizedlist><para>"),
# r"<li>","<listitem><para>"),
# r"</li>","</para></listitem>\n"),
r"<ul>","</para><programlisting>\n"),
r"</ul>","</programlisting><para>"),
r"<li>",""),
r"</li>",""))
def paramdef2html(text):
return s(s(s(s(s(text,
r"\s+<paramdef>", r"\n<nobr>"),
r"<paramdef>",r"<nobr>"),
r"</paramdef>",r"</nobr>"),
r"<parameters>",r"\n <code>"),
r"</parameters>",r"</code>\n")
def section2html(text):
mapping = { "<screen>" : "<pre>", "</screen>" : "</pre>",
"<para>" : "<p>", "</para>" : "</p>" ,
"<function>" : "<link>", "</function>" : "</link>" }
for str in mapping:
text = string.replace(text, str, mapping[str])
return text
def html(text):
return section2html(paramdef2html(text))
def cdata1(text):
return string.replace(text, "&", "&")
def cdata31(text):
return string.replace(string.replace(text, "<","<"), ">",">")
def cdata3(text):
return cdata31(cdata1(text))
def cdata43(text):
return string.replace(text,"\"", """)
def cdata41(text):
return cdata43(cdata31(text))
def cdata4(text):
return cdata43(cdata3(text))
def markup_as_screen41 (text):
""" used for non-star lines in comment blocks """
return " <screen> " + s(cdata41(text), r"(?m)^", r" ") +" </screen> "
def file_comment2section(text):
""" convert a C comment into a series of <para> and <screen> parts """
return ("<para>\n"+
s(s(s(s(s(s(s(text,
r"(?s){<([\w\.\-]+\@[\w\.\-]+\w\w)>",
r"<\1>"),
r"(?mx) ^\s?\s?\s? ([^\*\s]+ .*) $",
lambda x : markup_as_screen41 (x.group(1))),
r"(?mx) ^\s*[*]\s* $", r" \n</para><para>\n"),
r"(?mx) ^\s?\s?\s?\* (.*) $", r" \1 "),
r"(?sx) </screen>(\s*)<screen> ", r"\1"),
r"(?sx) <([^<>\;]+\@[^<>\;]+)> ", r"<email>\1</email>"),
r"(?sx) \<\;([^<>\&\;]+\@[^<>\&\;]+)\>\; ",
r"<email>\1</email>") + "\n</para>")
def func_comment2section(text):
""" convert a C comment into a series of <para> and <screen> parts
and sanitize a few markups already present in the comment text
"""
return ("<para>\n"+
s(s(s(s(s(s(s(s(s(s(s(text,
r"<c>",r"<code>"), r"</c>", r"</code>"),
r"(?mx) ^\s?\s?\s? ([^\*\s]+.*)",
lambda x: markup_as_screen41 (x.group(1))),
r"(?mx) ^\s?\s?\s?\* (.*) $", r" <br /> \1"),
r"(?mx) ^\s*<br\s*\/>\s* $", r"\n</para><para>\n"),
r"<<",r"<"), r">>",r">"),
r"(?sx) (</?para>\s*)<br\s*\/?>",r"\1"),
r"(?sx) (</?para>\s*)<br\s*\/?>",r"\1"),
r"(?sx) (<br\s*\/?>\s*)<br\s*\/?>",r"\1"),
r"(?sx) <\/screen>(\s*)<screen>",r"\1") + "\n</para>")
def markup_link_syntax(text):
""" markup the link-syntax ` => somewhere ` in the text block """
return (
s(s(s(s(text,
r"(?mx) (^|\s)\=\>\"([^\"]*)\"", r"\1<link>\2</link>"),
r"(?mx) (^|\s)\=\>\'([^\"]*)\'", r"\1<link>\2</link>"),
r"(?mx) (^|\s)\=\>\s(\w[\w.]*\w)\b", r"\1<link>\2</link>"),
r"(?mx) (^|\s)\=\>\s([^\s\,\.\!\?\:\;\<\>\&\'\=\-]+)",
r"\1<link>\2</link>"))
def this_function_link(text, name):
return s(text, r"(?sx) (T|t)his \s (function|procedure) ", lambda x
: "<function>"+x.group(1)+"he "+name+" "+x.group(2)+"</function>")
# -----------------------------------------------------------------------
class Options:
var = {}
def __getattr__(self, name):
if not self.var.has_key(name): return None
return self.var[name]
def __setattr__(self, name, value):
self.var[name] = value
#end
o = Options()
o.verbose = 0
o.version = s( commands.getoutput(
""" grep -i "^version *:" *.spec 2>/dev/null |
sed -e "s/[Vv]ersion *: *//" """), r"\s*",r"")
o.package = s(commands.getoutput(
""" grep -i "^name *:" *.spec 2>/dev/null |
sed -e "s/[Nn]ame *: *//" """), r"\s*",r"")
if not len(o.version):
o.version = commands.getoutput(""" date +%Y.%m.%d """)
if not len(o.package):
o.package = "_project"
o.suffix = "-doc3"
o.mainheader = o.package+".h"
class File:
def __init__(self, filename):
self.name = filename
self.mainheader = o.mainheader
self.authors = ""
self.copyright = ""
def __getattr__(self, name):
""" defend against program to break on uninited members """
if self.__dict__.has_key(name): return self.__dict__[name]
warn("no such member: "+name); return None
def set_author(self, text):
if self.authors:
self.authors += "\n"
self.authors += text
return text
def set_copyright(self, text):
self.copyright = text
return text
class InputFiles:
""" for each set of input files we can create an object
it does correspond with a single html-output page and
a single docbook <reference> master page to be output
"""
def __init__(self):
# the id will tell us in which order
# we did meet each function definition
self.id = 1000
self.files = [] # file_list
self.funcs = [] # func_list: of hidden class FuncDeclaration
self.file = None # current file
def new_File(self, name):
self.file = File(name)
self.files.append(self.file)
return self.file
def next_id(self):
id = self.id ; self.id += 1
return id
def add_function_declaration(self, comment, prototype):
class FuncDeclaration: # note that both decl.comment and
pass # decl.prototype are in cdata1 format
func = FuncDeclaration()
func.file = self.file
func.comment = s(comment, # need to take out email-style markups
r"<([\w\.\-]+\@[\w\.\-]+\w\w)>", r"<\1>")
func.prototype = prototype
func.id = all.next_id()
self.funcs.append(func)
# print id
return prototype
def scan_options (options, list):
def encode(text):
return s(s(text, r"¬", r"&#AC;"), r"\*/",r"¬")
def decode(text):
return s(text, r"¬", r"*/")
for name in options:
found = m(name, r"^(\w+)=(.*)")
if found:
o.var[found.group(1)] = found.group(2)
continue
#else
try:
input = open(name, "r")
except IOError, error:
warn(#...... (scan_options) ...............
"can not open input file: "+name, error)
continue
text = input.read() ; input.close()
text = encode (cdata1 (text))
file = list.new_File(name)
# cut per-function comment block
text = s(text, r"(?x) [/][*][*](?=\s) ([^¬]+) ¬ ([^\{\}\;\#]+) [\{\;]",
lambda x : list.add_function_declaration(
decode(x.group(1)), decode(x.group(2))))
# cut per-file comment block
found = m(text, r"(?sx) [/][*]+(?=\s) ([^¬]+) ¬ "
r"(?:\s*\#define\s*\S+)*"
r"(\s*\#include\s*<[^<>]*>(?:\s*//[^\n]*)?)")
if found:
file.comment = decode(found.group(1))
file.include = cdata31(found.group(2))
else:
file.comment = None
file.include = None
found = m(text, r"(?sx) ^ [/][*]+(?=\s) ([^¬]+) ¬ ")
if found:
file.comment = decode(found.group(1))
#fi
# throw away the rest - further processing on memorized strings only
return None
all = InputFiles()
scan_options (sys.argv[1:], all)
if not o.docbookfile:
o.docbookfile = o.package+o.suffix+".docbook"
if not o.libhtmlfile:
o.libhtmlfile = o.package+o.suffix+".html"
if not o.dumpdocfile:
o.dumpdocfile = o.package+o.suffix+".dxml"
# ...........................................................................
# check out information in the file.comment section
def all_files_comment2section(list):
for file in list:
if file.comment is None: continue
file.section = file_comment2section(file.comment)
file.section = s(
file.section, r"(?sx) \b[Aa]uthor\s*:(.*</email>) ", lambda x
: "<author>" + file.set_author(x.group(1)) + "</author>")
file.section = s(
file.section, r"(?sx) \b[Cc]opyright\s*:([^<>]*)</para> ",lambda x
: "<copyright>" + file.set_copyright(x.group(1)) + "</copyright>")
# if "file" in file.name: print >> sys.stderr, file.comment # 2.3
#od
all_files_comment2section(all.files)
# -----------------------------------------------------------------------
class Function:
" <prespec>void* </><namespec>hello</><namespec> (int) const</callspec> "
def __init__(self):
self.prespec = ""
self.namespec = ""
self.callspec = ""
self.name = ""
# def set(self, **defines):
# name = defines.keys()[0]
# self.__dict__[name] = defines[name]
# return defines[name]
# def cut(self, **defines):
# name = defines.keys()[0]
# self.__dict__[name] += defines[name]
# return ""
def __getattr__(self, name):
""" defend against program exit on members being not inited """
if self.__dict__.has_key(name): return self.__dict__[name]
warn("no such member: "+name); return None
def dict(self):
return self.__dict__
def dict_sorted_keys(self):
keys = self.__dict__.keys()
keys.sort()
return keys
def parse(self, prototype):
found = m(prototype, r"(?sx) ^(.*[^.]) \b(\w[\w.]*\w)\b (\s*\(.*) $ ")
if found:
self.prespec = found.group(1).lstrip()
self.namespec = found.group(2)
self.callspec = found.group(3).lstrip()
self.name = self.namespec.strip()
return self.name
return None
# pass 1 of per-func strings ...............................................
# (a) cut prototype into prespec/namespec/callspec
# (b) cut out first line of comment as headline information
# (c) sanitize rest of comment block into proper docbook formatted .body
#
# do this while copying strings from all.funcs to function_list
# and remember the original order in name_list
def markup_callspec(text):
return (
s(s(s(s(s(text,
r"(?sx) ^([^\(\)]*)\(", r"\1<parameters>(<paramdef>",1),
r"(?sx) \)([^\(\)]*)$", r"</paramdef>)</parameters>\1",1),
r"(?sx) , ", r"</paramdef>,<paramdef>"),
r"(?sx) <paramdef>(\s+) ", r"\1<paramdef>"),
r"(?sx) (\s+)</paramdef>", r"</paramdef>\1"))
def parse_all_functions(func_list): # list of FunctionDeclarations
""" parse all FunctionDeclarations and create a list of Functions """
list = []
for func in all.funcs:
function = Function()
if not function.parse (func.prototype): continue
list.append(function)
function.body = markup_link_syntax(func.comment)
if "\n" not in function.body: # single-line comment is the head
function.head = function.body
function.body = ""
else: # cut comment in first-line and only keep the rest as descr body
function.head = s(function.body, r"(?sx) ^([^\n]*\n).*",r"\1",1)
function.body = s(function.body, r"(?sx) ^[^\n]*\n", r"", 1)
#fi
if m(function.head, r"(?sx) ^\s*$ "): # empty head line, autofill here
function.head = s("("+func.file.name+")", r"[.][.][/]", r"")
function.body = func_comment2section(function.body)
function.src = func # keep a back reference
# add extra docbook markups to callspec in $fn-hash
function.callspec = markup_callspec (function.callspec)
#od
return list
function_list = parse_all_functions(all.funcs)
def examine_head_anchors(func_list):
""" .into tells later steps which func-name is the leader of a man
page and that this func should add its descriptions over there. """
for function in func_list:
function.into = None
function.seealso = None
found = m(function.head, r"(?sx) ^ \s* <link>(\w[\w.]*\w)<\/link>")
# if found and found.group(1) in func_list.names:
if found and found.group(1):
function.into = found.group(1)
def set_seealso(f, value):
f.seealso = value
return value
function.head = s(function.head, r"(.*)also:(.*)", lambda x
: set_seealso(function, x.group(2)) and x.group(1))
if function.seealso and None:
print "function[",function.name,"].seealso=",function.seealso
examine_head_anchors(function_list)
# =============================================================== HTML =====
def find_by_name(func_list, name):
for func in func_list:
if func.name == name:
return func
#od
return None
#fu
class HtmlFunction:
def __init__(self, func):
self.src = func.src
self.into = func.into
self.name = func.name
self.toc_line = paramdef2html(
" <td valign=\"top\"><code>"+func.prespec+"</code></td>\n"+
" <td valign=\"top\"> </td>\n"+
" <td valign=\"top\"><a href=\"#"+func.name+"\">\n"+
" <code>"+func.namespec+"</code>"+
" </a></td>\n"+
" <td valign=\"top\"> </td>\n"+
" <td valign=\"top\">"+func.callspec+"</td>\n")
self.synopsis = paramdef2html(
" <code>"+func.prespec+"</code>\n"+
" <br /><b><code>"+func.namespec+"</code></b>\n"+
" <code>"+func.callspec+"</code>\n")
self.anchor = "<a name=\""+func.name+"\" />"
self.section = "<para><em> "+func.head+"\n"+ \
"\n</em></para>"+section2html(func.body)
#class
class HtmlFunctionFamily(HtmlFunction):
def __init__(page, func):
HtmlFunction.__init__(page, func)
page.toc_line_list = [ page.toc_line ]
# page.html_txt = page.synopsis
page.synopsis_list = [ page.synopsis ]
page.anchor_list = [ page.anchor ]
page.section_list = [ this_function_link(page.section, func.name) ]
def ensure_name(text, name):
adds = "<small><code>"+name+"</code></small> -"
match = r"(?sx) .*>[^<>]*\b" + name + r"\b[^<>]*<.*"
found = m(text, match)
if found: return text
found = m(text, r".*<p(ara)?>.*")
if found: return s(text, r"(<p(ara)?>)", r"\1"+adds, 1)
return adds+text
def combined_html_pages(func_list):
""" and now add descriptions of non-leader entries (html-mode) """
combined = {}
for func in func_list: # assemble leader pages
if func.into is not None: continue
combined[func.name] = HtmlFunctionFamily(func)
for func in func_list:
if func.into is None: continue
if func.into not in combined :
warn(#......... (combine_html_pages) ..............
"function '"+func.name+"'s into => '"+func.into+
"\n: no such target function: "+func.into)
combined[func.name] = HtmlFunctionFamily(func)
continue
#fi
page = HtmlFunction(func)
into = combined[func.into]
into.toc_line_list.append( page.toc_line )
into.anchor_list.append( page.anchor )
into.synopsis_list.append( page.synopsis )
into.section_list.append(
s(ensure_name(this_function_link(section2html( func.body ),
func.name), func.name),
r"(?sx) (</?para>\s*) <br\s*\/>", r"\1"))
return combined.values()
html_pages = combined_html_pages(function_list)
def html_resolve_links_on_page(text, list):
""" link ref-names of a page with its endpoint on the same html page"""
def html_link (name , extra):
""" make <link>s to <href> of correct target or make it <code> """
if find_by_name(list, name) is None:
return "<code>"+name+extra+"</code>"
else:
return "<a href=\"#"+name+"\"><code>"+name+extra+"</code></a>"
#fu html_link
return s(s(text, r"(?sx) <link>(\w+)([^<>]*)<\/link> ",
lambda x : html_link(x.group(1),x.group(2))),
r"(?sx) \-\> ", r"<small>-></small>") # just sanitize..
#fu html_resolve_links
class HtmlPage:
def __init__(self):
self.toc = ""
self.txt = ""
self.package = o.package
self.version = o.version
def page_text(self):
""" render .toc and .txt parts into proper <html> page """
T = ""
T += "<html><head>"
T += "<title>"+self.package+"autodoc documentation </title>"
T += "</head>\n<body>\n"
T += "\n<h1>"+self.package+" <small><small><i>- "+self.version
T += "</i></small></small></h1>"
T += "\n<table border=0 cellspacing=2 cellpadding=0>"
T += self.toc
T += "\n</table>"
T += "\n<h3>Documentation</h3>\n\n<dl>"
T += html_resolve_links_on_page(self.txt, function_list)
T += "\n</dl>\n</body></html>\n"
return T
def add_page_map(self, list):
""" generate the index-block at the start of the onepage-html file """
keys = list.keys()
keys.sort()
for name in keys:
self.toc += "<tr valign=\"top\">\n"+ \
"\n</tr><tr valign=\"top\">\n".join(
list[name].toc_line_list)+"</tr>\n"
self.txt += "\n<dt>"+" ".join(list[name].anchor_list)
self.txt += "\n"+"\n<br />".join(list[name].synopsis_list)+"<dt>"
self.txt += "\n<dd>\n"+"\n".join(list[name].section_list)
self.txt += ("\n<p align=\"right\">"+
"<small>("+list[name].src.file.name+")</small>"+
"</p></dd>")
def add_page_list(self, functions):
""" generate the index-block at the start of the onepage-html file """
mapp = {}
for func in functions:
mapp[func.name] = func
#od
self.add_page_map(mapp)
#end
html = HtmlPage()
# html.add_function_dict(Fn)
# html.add_function_list(Fn.sort.values())
html.add_page_list(html_pages)
# and finally print the html-formatted output
try:
F = open(o.libhtmlfile, "w")
except IOError, error:
warn(# ............. open(o.libhtmlfile, "w") ..............
"can not open html output file: "+o.libhtmlfile, error)
else:
print >> F, html.page_text()
F.close()
#fi
# ========================================================== DOCBOOK =====
# let's go for the pure docbook, a reference type master for all man pages
class RefPage:
def __init__(self, func):
""" initialize the fields needed for a man page entry - the fields are
            named after the docbook-markup that encloses (!!) the text we store;
            an entry like X.refhint = "hello" will therefore be printed as
            <refhint>hello</refhint>. Names with underscores are only used as
temporaries but they are memorized, perhaps for later usage. """
self.refhint = "\n<!--========= "+func.name+" (3) ===========-->\n"
self.refentry = None
self.refentry_date = o.version.strip() # //refentryinfo/date
self.refentry_productname = o.package.strip() # //refentryinfo/prod*
self.refentry_title = None # //refentryinfo/title
self.refentryinfo = None # override
self.manvolnum = "3" # //refmeta/manvolnum
self.refentrytitle = None # //refmeta/refentrytitle
self.refmeta = None # override
self.refpurpose = None # //refnamediv/refpurpose
self.refname = None # //refnamediv/refname
self.refname_list = []
self.refnamediv = None # override
self.mainheader = func.src.file.mainheader
self.includes = func.src.file.include
self.funcsynopsisinfo = "" # //funcsynopsisdiv/funcsynopsisinfo
self.funcsynopsis = None # //funcsynopsisdiv/funcsynopsis
self.funcsynopsis_list = []
self.description = None
self.description_list = []
# optional sections
self.authors_list = [] # //sect1[authors]/listitem
self.authors = None # override
self.copyright = None
self.copyright_list = []
self.seealso = None
self.seealso_list = []
if func.seealso:
self.seealso_list.append(func.seealso)
# func.func references
self.func = func
self.file_authors = None
if func.src.file.authors:
self.file_authors = func.src.file.authors
self.file_copyright = None
if func.src.file.copyright:
self.file_copyright = func.src.file.copyright
#fu
def refentryinfo_text(page):
""" the manvol formatter wants to render a footer line and header line
on each manpage and such info is set in <refentryinfo> """
if page.refentryinfo:
return page.refentryinfo
if page.refentry_date and \
page.refentry_productname and \
page.refentry_title: return (
"\n <date>"+page.refentry_date+"</date>"+
"\n <productname>"+page.refentry_productname+"</productname>"+
"\n <title>"+page.refentry_title+"</title>")
if page.refentry_date and \
page.refentry_productname: return (
"\n <date>"+page.refentry_date+"</date>"+
"\n <productname>"+page.refentry_productname+"</productname>")
return ""
def refmeta_text(page):
""" the manvol formatter needs to know the filename of the manpage to
be made up and these parts are set in <refmeta> actually """
if page.refmeta:
return page.refmeta
if page.manvolnum and page.refentrytitle:
return (
"\n <refentrytitle>"+page.refentrytitle+"</refentrytitle>"+
"\n <manvolnum>"+page.manvolnum+"</manvolnum>")
if page.manvolnum and page.func.name:
return (
"\n <refentrytitle>"+page.func.name+"</refentrytitle>"+
"\n <manvolnum>"+page.manvolnum+"</manvolnum>")
return ""
def refnamediv_text(page):
""" the manvol formatter prints a header line with a <refpurpose> line
and <refname>'d functions that are described later. For each of
            the <refname>s listed here, a manpage is generated, and for each
            <refname> that differs from the <refentrytitle> a symlink is created """
if page.refnamediv:
return page.refnamediv
if page.refpurpose and page.refname:
return ("\n <refname>"+page.refname+'</refname>'+
"\n <refpurpose>"+page.refpurpose+" </refpurpose>")
if page.refpurpose and page.refname_list:
T = ""
for refname in page.refname_list:
T += "\n <refname>"+refname+'</refname>'
T += "\n <refpurpose>"+page.refpurpose+" </refpurpose>"
return T
return ""
def funcsynopsisdiv_text(page):
""" refsynopsisdiv shall be between the manvol mangemaent information
and the reference page description blocks """
T=""
if page.funcsynopsis:
T += "\n<funcsynopsis>"
if page.funcsynopsisinfo:
T += "\n<funcsynopsisinfo>"+ page.funcsynopsisinfo + \
"\n</funcsynopsisinfo>\n"
T += page.funcsynopsis + \
"\n</funcsynopsis>\n"
if page.funcsynopsis_list:
T += "\n<funcsynopsis>"
if page.funcsynopsisinfo:
T += "\n<funcsynopsisinfo>"+ page.funcsynopsisinfo + \
"\n</funcsynopsisinfo>\n"
for funcsynopsis in page.funcsynopsis_list:
T += funcsynopsis
T += "\n</funcsynopsis>\n"
#fi
return T
def description_text(page):
""" the description section on a manpage is the main part. Here
it is generated from the per-function comment area. """
if page.description:
return page.description
if page.description_list:
T = ""
for description in page.description_list:
if not description: continue
T += description
if T: return T
return ""
def authors_text(page):
""" part of the footer sections on a manpage and a description of
            original authors. We prefer an itemizedlist to let the manvol
            show a nice vertical alignment of authors of this ref item """
if page.authors:
return page.authors
if page.authors_list:
T = "<itemizedlist>"
previous=""
for authors in page.authors_list:
if not authors: continue
if previous == authors: continue
T += "\n <listitem><para>"+authors+"</para></listitem>"
previous = authors
T += "</itemizedlist>"
return T
if page.authors:
return page.authors
return ""
def copyright_text(page):
""" the copyright section is almost last on a manpage and purely
optional. We list the part of the per-file copyright info """
if page.copyright:
return page.copyright
""" we only return the first valid instead of merging them """
if page.copyright_list:
T = ""
for copyright in page.copyright_list:
if not copyright: continue
return copyright # !!!
return ""
def seealso_text(page):
""" the last section on a manpage is called 'SEE ALSO' usally and
contains a comma-separated list of references. Some manpage
viewers can parse these and convert them into hyperlinks """
if page.seealso:
return page.seealso
if page.seealso_list:
T = ""
for seealso in page.seealso_list:
if not seealso: continue
if T: T += ", "
T += seealso
if T: return T
return ""
def refentry_text(page, id=None):
""" combine fields into a proper docbook refentry """
if id is None:
id = page.refentry
if id:
T = '<refentry id="'+id+'">'
else:
T = '<refentry>' # this is an error
if page.refentryinfo_text():
T += "\n<refentryinfo>"+ page.refentryinfo_text()+ \
"\n</refentryinfo>\n"
if page.refmeta_text():
T += "\n<refmeta>"+ page.refmeta_text() + \
"\n</refmeta>\n"
if page.refnamediv_text():
T += "\n<refnamediv>"+ page.refnamediv_text() + \
"\n</refnamediv>\n"
if page.funcsynopsisdiv_text():
T += "\n<refsynopsisdiv>\n"+ page.funcsynopsisdiv_text()+ \
"\n</refsynopsisdiv>\n"
if page.description_text():
T += "\n<refsect1><title>Description</title> " + \
page.description_text() + "\n</refsect1>"
if page.authors_text():
T += "\n<refsect1><title>Author</title> " + \
page.authors_text() + "\n</refsect1>"
if page.copyright_text():
T += "\n<refsect1><title>Copyright</title> " + \
page.copyright_text() + "\n</refsect1>\n"
if page.seealso_text():
T += "\n<refsect1><title>See Also</title><para> " + \
page.seealso_text() + "\n</para></refsect1>\n"
T += "\n</refentry>\n"
return T
#fu
#end
# -----------------------------------------------------------------------
class FunctionRefPage(RefPage):
def reinit(page):
""" here we parse the input function for its values """
if page.func.into:
page.refhint = "\n <!-- see "+page.func.into+" -->\n"
#fi
page.refentry = page.func.name # //refentry@id
page.refentry_title = page.func.name.strip() # //refentryinfo/title
page.refentrytitle = page.func.name # //refmeta/refentrytitle
if page.includes:
page.funcsynopsisinfo += "\n"+page.includes
if not page.funcsynopsisinfo:
page.funcsynopsisinfo="\n"+' #include <'+page.mainheader+'>'
page.refpurpose = page.func.head
page.refname = page.func.name
def funcsynopsis_of(func):
return (
"\n <funcprototype>\n <funcdef>"+func.prespec+
" <function>"+func.name+"</function></funcdef>"+
"\n"+s(s(s(func.callspec,
r"<parameters>\s*\(",r" "),
r"\)\s*</parameters>",r" "),
r"</paramdef>\s*,\s*",r"</paramdef>\n ")+
" </funcprototype>")
page.funcsynopsis = funcsynopsis_of(page.func)
page.description = (
html2docbook(this_function_link(page.func.body, page.func.name)))
if page.file_authors:
def add_authors(page, ename, email):
page.authors_list.append( ename+' '+email )
return ename+email
s(page.file_authors,
r"(?sx) \s* ([^<>]*) (<email>[^<>]*</email>) ", lambda x
: add_authors(page, x.group(1), x.group(2)))
#fi
if page.file_copyright:
page.copyright = "<screen>\n"+page.file_copyright+"</screen>\n"
#fi
return page
def __init__(page,func):
RefPage.__init__(page, func)
FunctionRefPage.reinit(page)
def refpage_list_from_function_list(funclist):
list = []
mapp = {}
for func in funclist:
mapp[func.name] = func
#od
for func in funclist:
page = FunctionRefPage(func)
if func.into and func.into not in mapp:
warn (# ............ (refpage_list_from_function_list) .......
"page '"+page.func.name+"' has no target => "+
"'"+page.func.into+"'"
"\n: going to reset .into of Function '"+page.func.name+"'")
func.into = None
#fi
list.append(FunctionRefPage(func))
return list
#fu
# ordered list of pages
refpage_list = refpage_list_from_function_list(function_list)
class FunctionFamilyRefPage(RefPage):
def __init__(self, page):
RefPage.__init__(self, page.func)
self.seealso_list = [] # reset
self.refhint_list = []
def refhint_list_text(page):
T = ""
for hint in page.refhint_list:
T += hint
return T
def refentry_text(page):
return page.refhint_list_text() + "\n" + \
RefPage.refentry_text(page)
pass
def docbook_pages_recombine(pagelist):
""" take a list of RefPages and create a new list where sections are
recombined in a way that their description is listed on the same
page and the manvol formatter creates symlinks to the combined
function description page - use the attribute 'into' to guide the
processing here as each of these will be removed from the output
list. If no into-pages are there then the returned list should
render to the very same output text like the input list would do """
list = []
combined = {}
for orig in pagelist:
if orig.func.into: continue
page = FunctionFamilyRefPage(orig)
combined[orig.func.name] = page ; list.append(page)
page.refentry = orig.refentry # //refentry@id
page.refentry_title = orig.refentrytitle # //refentryinfo/title
page.refentrytitle = orig.refentrytitle # //refmeta/refentrytitle
page.includes = orig.includes
page.funcsynopsisinfo = orig.funcsynopsisinfo
page.refpurpose = orig.refpurpose
if orig.refhint:
page.refhint_list.append( orig.refhint )
if orig.refname:
page.refname_list.append( orig.refname )
elif orig.refname_list:
page.refname_list.extend( orig.refname_list )
if orig.funcsynopsis:
page.funcsynopsis_list.append( orig.funcsynopsis )
elif orig.refname_list:
page.funcsynopsis_list.extend( orig.funcsynopsis_list )
if orig.description:
page.description_list.append( orig.description )
elif orig.refname_list:
page.description_list.extend( orig.description_list )
if orig.seealso:
page.seealso_list.append( orig.seealso )
elif orig.seealso_list:
page.seealso_list.extend( orig.seealso_list )
if orig.authors:
page.authors_list.append( orig.authors )
elif orig.authors_list:
page.authors_list.extend( orig.authors_list )
if orig.copyright:
page.copyright_list.append( orig.copyright )
elif orig.refname_list:
page.copyright_list.extend( orig.copyright_list )
#od
for orig in pagelist:
if not orig.func.into: continue
if orig.func.into not in combined:
warn("page for '"+orig.func.name+
"' has no target => '"+orig.func.into+"'")
page = FunctionFamilyRefPage(orig)
else:
page = combined[orig.func.into]
if orig.refname:
page.refname_list.append( orig.refname )
elif orig.refname_list:
page.refname_list.extend( orig.refname_list )
if orig.funcsynopsis:
page.funcsynopsis_list.append( orig.funcsynopsis )
elif orig.refname_list:
page.funcsynopsis_list.extend( orig.funcsynopsis_list )
if orig.description:
page.description_list.append( orig.description )
elif orig.refname_list:
page.description_list.extend( orig.description_list )
if orig.seealso:
page.seealso_list.append( orig.seealso )
elif orig.seealso_list:
page.seealso_list.extend( orig.seealso_list )
if orig.authors:
page.authors_list.append( orig.authors )
elif orig.authors_list:
page.authors_list.extend( orig.authors_list )
if orig.copyright:
page.copyright_list.append( orig.copyright )
elif orig.refname_list:
page.copyright_list.extend( orig.copyright_list )
#od
return list
#fu
combined_pages = docbook_pages_recombine(pagelist = refpage_list)
# -----------------------------------------------------------------------
class HeaderRefPage(RefPage):
pass
def docbook_refpages_perheader(page_list): # headerlist
" creating the per-header manpage - a combination of function man pages "
header = {}
for page in page_list:
assert not page.func.into
file = page.func.src.file.mainheader # short for the mainheader index
if file not in header:
header[file] = HeaderRefPage(page.func)
header[file].id = s(file, r"[^\w\.]","-")
header[file].refentry = header[file].id
header[file].refentryinfo = None
header[file].refentry_date = page.refentry_date
header[file].refentry_productname = (
"the library "+page.refentry_productname)
header[file].manvolnum = page.manvolnum
header[file].refentrytitle = file
header[file].funcsynopsis = ""
if 1: # or += or if not header[file].refnamediv:
header[file].refpurpose = " library "
header[file].refname = header[file].id
if not header[file].funcsynopsisinfo and page.funcsynopsisinfo:
header[file].funcsynopsisinfo = page.funcsynopsisinfo
if page.funcsynopsis:
header[file].funcsynopsis += "\n"+page.funcsynopsis
if not header[file].copyright and page.copyright:
header[file].copyright = page.copyright
if not header[file].authors and page.authors:
header[file].authors = page.authors
if not header[file].authors and page.authors_list:
header[file].authors_list = page.authors_list
if not header[file].description:
found = m(commands.getoutput("cat "+o.package+".spec"),
r"(?s)\%description\b([^\%]*)\%")
if found:
header[file].description = found.group(1)
elif not header[file].description:
header[file].description = "<para>" + (
page.refentry_productname + " library") + "</para>";
#fi
#fi
#od
return header#list
#fu
def leaders(pagelist):
list = []
for page in pagelist:
if page.func.into : continue
list.append(page)
return list
header_refpages = docbook_refpages_perheader(leaders(refpage_list))
# -----------------------------------------------------------------------
# printing the docbook file is a two-phase process - we spit out the
# leader pages first - later we add more pages with _refstart pointing
# to the leader page, so that xmlto will add the functions there. Only the
# leader page contains some extra info needed for troff page processing.
doctype = '<!DOCTYPE reference PUBLIC "-//OASIS//DTD DocBook XML V4.1.2//EN"'
doctype += "\n "
doctype += '"http://www.oasis-open.org/docbook/xml/4.1.2/docbookx.dtd">'+"\n"
try:
F = open(o.docbookfile,"w")
except IOError, error:
warn("can not open docbook output file: "+o.docbookfile, error)
else:
print >> F, doctype, '<reference><title>Manual Pages</title>'
for page in combined_pages:
print >> F, page.refentry_text()
#od
for page in header_refpages.values():
if not page.refentry: continue
print >> F, "\n<!-- _______ "+page.id+" _______ -->",
print >> F, page.refentry_text()
#od
print >> F, "\n",'</reference>',"\n"
F.close()
#fi
# _____________________________________________________________________
try:
F = open( o.dumpdocfile, "w")
except IOError, error:
warn ("can not open"+o.dumpdocfile,error)
else:
for func in function_list:
name = func.name
print >> F, "<fn id=\""+name+"\">"+"<!-- FOR \""+name+"\" -->\n"
for H in sorted_keys(func.dict()):
print >> F, "<"+H+" name=\""+name+"\">",
print >> F, str(func.dict()[H]),
print >> F, "</"+H+">"
#od
print >> F, "</fn><!-- END \""+name+"\" -->\n\n";
#od
F.close();
#fi
if errors: sys.exit(errors)
| 39.758989 | 79 | 0.535784 | 4,792 | 40,912 | 4.474124 | 0.116653 | 0.00569 | 0.005457 | 0.005037 | 0.228685 | 0.178871 | 0.147994 | 0.140252 | 0.11959 | 0.114832 | 0 | 0.005241 | 0.295757 | 40,912 | 1,028 | 80 | 39.797665 | 0.738581 | 0.092785 | 0 | 0.25 | 0 | 0.0075 | 0.151634 | 0.031217 | 0.00625 | 0 | 0 | 0 | 0.00125 | 0 | null | null | 0.00375 | 0.00625 | null | null | 0.015 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c5844e0aad5d3be4e5309b96d1cf831066661e3d | 2,618 | py | Python | cpm/cli.py | tzabal/cpm | b8736ec5a72e2e9f22e73b237a814516545db3e0 | [
"Apache-2.0"
] | 14 | 2017-03-09T20:55:06.000Z | 2021-11-12T11:43:51.000Z | cpm/cli.py | tzabal/cpm | b8736ec5a72e2e9f22e73b237a814516545db3e0 | [
"Apache-2.0"
] | null | null | null | cpm/cli.py | tzabal/cpm | b8736ec5a72e2e9f22e73b237a814516545db3e0 | [
"Apache-2.0"
] | 6 | 2017-03-28T01:54:46.000Z | 2021-02-23T20:42:49.000Z | # Copyright 2015 Tzanetos Balitsaris
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import os.path
import sys
import prettytable
import cpm
def process_arguments():
description = ('A program that implements the Critical Path Method '
'algorithm in order to schedule a set of project activities '
'at the minimum total cost with the optimum duration.')
parser = argparse.ArgumentParser(description=description)
arg_help = 'a file that describes the project in JSON format'
parser.add_argument('project_file', help=arg_help)
arg_help = 'a directory that the generated images will be placed in'
parser.add_argument('-o', '--images-dir', help=arg_help)
arguments = parser.parse_args()
if not os.path.isfile(arguments.project_file):
sys.stderr.write('The project_file is not an existing regular file.\n')
sys.exit(1)
if arguments.images_dir and not os.path.isdir(arguments.images_dir):
sys.stderr.write('The images_dir is not an existing directory.\n')
sys.exit(1)
return arguments
def main():
arguments = process_arguments()
try:
project = cpm.validate(arguments.project_file)
except cpm.ProjectValidationException as exc:
sys.stderr.write(str(exc) + '\n')
sys.exit(1)
images_dir = arguments.images_dir + '/' if arguments.images_dir else ''
cpmnet = cpm.CriticalPathMethod(project)
cpmnet.run_cpm()
results, images, optimum_solution = cpmnet.get_results(images_dir)
results_table = prettytable.PrettyTable([
"Project Duration", "Critical Path(s)", "Direct Cost", "Indirect Cost", "Total Cost"
])
for result in results:
results_table.add_row([
result['project_duration'], result['critical_paths'],
result['direct_cost'], result['indirect_cost'], result['total_cost']
])
print results_table
print 'The optimum solution is {} for total cost and {} for project duration.'\
.format(optimum_solution[0], optimum_solution[1])
if __name__ == '__main__':
main()
| 35.378378 | 92 | 0.699771 | 350 | 2,618 | 5.114286 | 0.405714 | 0.040223 | 0.040223 | 0.015084 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006256 | 0.206264 | 2,618 | 73 | 93 | 35.863014 | 0.855149 | 0.211612 | 0 | 0.108696 | 0 | 0 | 0.292195 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.108696 | null | null | 0.043478 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
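As a usage sketch for the CLI above (project.json and the images directory are placeholder paths, and the JSON schema expected in the project file is defined inside the cpm package rather than shown in this excerpt):

import sys
from cpm import cli

# Placeholder arguments: an existing project description in JSON and an
# existing output directory for the generated images.
sys.argv = ["cpm", "project.json", "--images-dir", "images"]
cli.main()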
c584ef51737ff6abecfeaa9cb9bcdc82e84361a4 | 369 | py | Python | yoyoman01_traject/scripts/set_state_model.py | Gepetto/yoyoman01_robot | cd2fccb9378a1208605de88c88ec7b1482eaa271 | [
"BSD-2-Clause"
] | null | null | null | yoyoman01_traject/scripts/set_state_model.py | Gepetto/yoyoman01_robot | cd2fccb9378a1208605de88c88ec7b1482eaa271 | [
"BSD-2-Clause"
] | 1 | 2018-09-28T14:13:45.000Z | 2018-10-01T12:42:07.000Z | yoyoman01_traject/scripts/set_state_model.py | Gepetto/yoyoman01_robot | cd2fccb9378a1208605de88c88ec7b1482eaa271 | [
"BSD-2-Clause"
] | 2 | 2018-04-13T07:29:15.000Z | 2018-05-24T14:19:07.000Z | #!/usr/bin/env python
import rospy
import roslaunch
if __name__ == "__main__":
rospy.sleep(6.5)
uuid = roslaunch.rlutil.get_or_generate_uuid(None, False)
roslaunch.configure_logging(uuid)
launch = roslaunch.parent.ROSLaunchParent(uuid,["/home/ntestar/catkin_ws/src/yoyoman01_robot/yoyoman01_traject/launch/set_state.launch"])
launch.start()
rospy.spin()
| 30.75 | 139 | 0.772358 | 50 | 369 | 5.38 | 0.74 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018018 | 0.097561 | 369 | 11 | 140 | 33.545455 | 0.78979 | 0.054201 | 0 | 0 | 1 | 0 | 0.267241 | 0.244253 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.222222 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c58b105434ef0739e6e7fc842f2dba276d5bf04c | 8,974 | py | Python | SaIL/network/supervised_regression_network.py | yonetaniryo/SaIL | c7404024c7787184c3638e9730bd185373ed0bf6 | [
"BSD-3-Clause"
] | 12 | 2018-05-18T19:29:09.000Z | 2020-05-15T13:47:12.000Z | SaIL/network/supervised_regression_network.py | yonetaniryo/SaIL | c7404024c7787184c3638e9730bd185373ed0bf6 | [
"BSD-3-Clause"
] | 1 | 2018-05-18T19:36:42.000Z | 2018-07-20T03:03:13.000Z | SaIL/network/supervised_regression_network.py | yonetaniryo/SaIL | c7404024c7787184c3638e9730bd185373ed0bf6 | [
"BSD-3-Clause"
] | 10 | 2018-01-11T21:23:40.000Z | 2021-11-10T04:38:07.000Z | #!/usr/bin/env python
"""Generic network class for supervised regression
Created on: March 25, 2017
Author: Mohak Bhardwaj"""
from collections import defaultdict
from planning_python.heuristic_functions.heuristic_function import EuclideanHeuristicNoAng, ManhattanHeuristicNoAng, DubinsHeuristic
import numpy as np
import os
import random
class SupervisedRegressionNetwork():
def __init__(self ,\
output_size ,\
input_size ,\
learning_rate = 0.001 ,\
batch_size = 32 ,\
training_epochs = 15 ,\
summary_file = "learner_1" ,\
mode = "cpu" ,\
seed_val = 1234 ,\
graph_type = "XY"):
self.output_size = output_size
self.input_size = input_size
self.learning_rate = learning_rate
self.batch_size = batch_size
self.training_epochs = training_epochs
self.display_step = 1
self.summary_dir_train = os.path.join(os.path.abspath('saved_data/summaries'), summary_file+'_train')
self.summary_dir_test = os.path.join(os.path.abspath('saved_data/summaries'), summary_file+'_test')
print self.summary_dir_test
print self.summary_dir_train
self.seed_val = seed_val
self.graph_type = graph_type
self.input_shape = [self.input_size]
if mode == "gpu":
self.device = '/gpu:0'
else:
self.device = '/cpu:0'
global tf
global tflearn
import tensorflow as tf
import tflearn
# config = tf.ConfigProto()
# # config.device_count = {'GPU': 0}
# # config.log_device_placement=True
# config.allow_soft_placement=True
# config.gpu_options.allow_growth = True
self.sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))#, log_device_placement=True))
#Heuristic Functions that will be helpful in calculating features
self.hs = [EuclideanHeuristicNoAng(), ManhattanHeuristicNoAng()]
#Some values that will help with normalization of features
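# NOTE: lattice_params, self.lattice, self.start_n and self.goal_n are used in the
# normalization block below but are not defined in this constructor as shown; they
# are assumed to be supplied elsewhere (e.g. set by the caller) for this code to run.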
if self.graph_type == "XY":
self.dist_norm = (lattice_params['x_lims'][1]-lattice_params['x_lims'][0])*(lattice_params['y_lims'][1]-lattice_params['y_lims'][0])#Max possible distance between two nodes
self.max_children = self.lattice.children.shape[0]
self.coord_norm = np.array([self.lattice.num_cells[0], self.lattice.num_cells[1]], dtype=np.float)
elif self.graph_type == "XYH":
self.dist_norm = None #Max possible dubin's distance between two nodes
self.max_children = len(self.lattice.children[0])
self.coord_norm = np.array([self.lattice.num_cells[0], self.lattice.num_cells[1], self.lattice.num_headings], dtype=np.float)
self.hs.append(DubinsHeuristic(lattice_params['radius']))
self.norm_start_n = np.array(self.start_n,dtype=np.float)/self.coord_norm
self.norm_goal_n = np.array(self.goal_n,dtype=np.float)/self.coord_norm
#Dictionaries that keep track of important values for feature calculation
self.cost_so_far = defaultdict(lambda: np.inf) #For each node, this is the cost of the shortest path to the start
self.num_invalid_predecessors = defaultdict(lambda: -1) #For each node, this is the number of invalid predecessor edges (including siblings of parent)
self.num_invalid_children = defaultdict(lambda: -1) #For each node, this is the number of invalid children edges
self.num_invalid_siblings = defaultdict(lambda: -1) #For each node, this is the number of invalid siblings edges (from best parent so far)
self.num_invalid_grand_children = defaultdict(lambda: -1) #For each node, this is the number of invalid grandchildren edges (seen so far)
self.depth_so_far = defaultdict(lambda: -1) #For each node, this is the depth of the node along the tree(along shortest path)
with tf.device(self.device):
self.graph_ops = self.init_graph()
self.init_op = tf.global_variables_initializer()
# _, self.episode_stats_vars = self.build_summaries()
# self.summary_ops = tf.summary.merge_all()
# print('Here')
# self.test_writer = tf.summary.FileWriter(self.summary_dir_test, self.sess.graph)
# print('Here2')
# self.train_writer = tf.summary.FileWriter(self.summary_dir_train, self.sess.graph)
# print('Her3')
self.sess.run(self.init_op)
print('network created and initialized')
def create_network(self):
"""Constructs and initializes core network architecture"""
state_input = tf.placeholder(tf.float32, [None] + self.input_shape)
net = tf.py_func(self.get_feature_vec, [state_input], tf.float32, stateful=True, name="feature_calc")
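# NOTE: the py_func feature layer above is not consumed by the layers below;
# the fully connected stack reads from state_input directly.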
net = tflearn.fully_connected(state_input, 100, activation='relu')
net = tflearn.fully_connected(net, 50, activation ='relu')
output = tflearn.fully_connected(net, self.output_size, activation = 'linear')
return state_input, output
def init_graph(self):
"""Overall architecture including target network,
gradient ops etc"""
state_input, output = self.create_network()
network_params = tf.trainable_variables()
target = tf.placeholder(tf.float32, [None] + [self.output_size])
cost = tf.reduce_sum(tf.pow(output - target, 2))/(2*self.batch_size)
optimizer = tf.train.RMSPropOptimizer(learning_rate = self.learning_rate)
train_net = optimizer.minimize(cost, var_list = network_params)
saver = tf.train.Saver()
graph_operations = {"s": state_input,\
"output": output,\
"target": target,\
"cost": cost,\
"train_net": train_net,\
"network_params": network_params,\
"saver": saver}
return graph_operations
def get_feature_vec(self, node):
"""Given a node, calculate the features for that node"""
feature_vec = [self.norm_start_n, self.norm_goal_n]
feature_vec.append(node/self.coord_norm)
for h in self.hs: feature_vec.append(h(node, self.goal_n)/self.dist_norm) #Normalize the distances
feature_vec.append(self.num_invalid_predecessors[node]/(self.depth_so_far[node]*self.max_children)) #Normalized invalid predecessors
feature_vec.append(self.num_invalid_siblings[node]/self.max_children)
feature_vec.append(self.num_invalid_children[node]/self.max_children)
feature_vec.append(self.num_invalid_grand_children[node]/(2*self.max_children))
return feature_vec
def train(self, database):
#Shuffle the database
random.shuffle(database)
for epoch in xrange(self.training_epochs):
# random.shuffle(database)
avg_cost = 0.
total_batch = int(len(database)/self.batch_size)
for i in xrange(total_batch):
batch_x, batch_y = self.get_next_batch(database, i)
#Run optimization op(backprop) and cost op(to get loss value)
_, c = self.sess.run([self.graph_ops['train_net'], self.graph_ops['cost']],\
feed_dict = {self.graph_ops['s']:batch_x,\
self.graph_ops['target']:batch_y})
#Compute Average Loss
avg_cost+= c/total_batch
#Display logs per epoch
if epoch%self.display_step == 0:
print "epoch:", '%04d' % (epoch+1), "cost=", \
"{:.9f}".format(np.sqrt(avg_cost))
print('optimization finished!')
def get_best_node(self, obs):
"""takes as input an open list and returns the best node to be expanded"""
return None
def get_q_value(self, obs):
obs = obs.reshape(self.input_shape)
output = self.graph_ops['output'].eval(session=self.sess, feed_dict={self.graph_ops['s']:[obs]})
return output
def save_params(self, file_name):
#file_path = os.path.join(os.path.abspath('saved_data/saved_models'), file_name +'.ckpt')
print(file_name)
save_path = self.graph_ops['saver'].save(self.sess, file_name)
print("Model saved in file: %s" % file_name)
return
def load_params(self, file_name):
#file_path = os.path.join(os.path.abspath('saved_data/saved_models'), file_name +'.ckpt')
self.graph_ops['saver'].restore(self.sess, file_name)
print('Weights loaded from file %s'%file_name)
def get_params(self):
return self.graph_ops['network_params']
def set_params(self, input_params):
[self.graph_ops['network_params'][i].assign(input_params[i]) for i in range(len(input_params))]
def get_next_batch(self, database, i):
batch = database[i*self.batch_size: (i+1)*self.batch_size]
batch_x = np.array([_[0] for _ in batch])
batch_y = np.array([_[1] for _ in batch])
new_shape_ip = [self.batch_size] + self.input_shape
new_shape_op = [self.batch_size] + [self.output_size]
batch_x = batch_x.reshape(new_shape_ip)
batch_y = batch_y.reshape(new_shape_op)
return batch_x, batch_y
def reset(self):
self.sess.run(self.init_op)
| 45.323232 | 178 | 0.676621 | 1,225 | 8,974 | 4.734694 | 0.234286 | 0.020172 | 0.022759 | 0.015517 | 0.234828 | 0.202414 | 0.163793 | 0.128276 | 0.128276 | 0.106207 | 0 | 0.008893 | 0.210608 | 8,974 | 197 | 179 | 45.553299 | 0.809853 | 0.182639 | 0 | 0.014815 | 0 | 0 | 0.054074 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.044444 | null | null | 0.059259 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
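As a self-contained illustration of the mini-batch slicing convention used by get_next_batch above (the data, shapes and sizes here are synthetic and chosen only for the example):

import numpy as np

batch_size = 4
input_shape = [6]
output_size = 1

# A toy "database" of (feature_vector, target) pairs, as train() expects.
database = [(np.random.rand(*input_shape), np.random.rand(output_size)) for _ in range(10)]

def get_next_batch(database, i):
    # Take the i-th contiguous slice of batch_size examples and stack them
    # into arrays shaped for the network's input and output placeholders.
    batch = database[i * batch_size:(i + 1) * batch_size]
    batch_x = np.array([pair[0] for pair in batch]).reshape([batch_size] + input_shape)
    batch_y = np.array([pair[1] for pair in batch]).reshape([batch_size] + [output_size])
    return batch_x, batch_y

x, y = get_next_batch(database, 0)
print(x.shape, y.shape)  # (4, 6) (4, 1)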
c59dc3c8ab8e831eb7caf7fcbfb73b6aca3f9f0b | 1,219 | py | Python | src/tests/aguirregabiria_simple_tests.py | cdagnino/LearningModels | b31d4e1dd5381ba06fc5b1d2b0e2eb1515f2d15f | [
"Apache-2.0"
] | null | null | null | src/tests/aguirregabiria_simple_tests.py | cdagnino/LearningModels | b31d4e1dd5381ba06fc5b1d2b0e2eb1515f2d15f | [
"Apache-2.0"
] | null | null | null | src/tests/aguirregabiria_simple_tests.py | cdagnino/LearningModels | b31d4e1dd5381ba06fc5b1d2b0e2eb1515f2d15f | [
"Apache-2.0"
] | null | null | null | import numpy as np
from src import const
#TODO: should be imported from aguirregabiria_simple.py
def period_profit(p: np.ndarray, lambdas: np.ndarray, betas_transition=const.betas_transition):
"""
Correct expected period return profit. See ReadMe for derivation
"""
constant_part = (p-const.c) * np.e ** const.α * np.e ** ((const.σ_ɛ ** 2) / 2)
summation = np.dot(np.e**(betas_transition*np.log(p[:, np.newaxis])), lambdas)
return constant_part*summation
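# Reading the implementation above, the value returned is
#
#     E[profit(p)] = (p - c) * exp(alpha) * exp(sigma_eps**2 / 2) * sum_j lambda_j * p**beta_j
#
# where the sum runs over the belief weights lambdas and the slope grid
# betas_transition; see the package ReadMe for the derivation.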
def test_period_profit():
p = np.array([1.4, 1.2])
lambdas = np.array([0.5, 0.4, 0.1])
beta_p_part = np.array([[np.e ** (-3. * 0.33647224), np.e ** (-2.5 * 0.33647224), np.e ** (-2 * 0.33647224)],
[np.e ** (-3. * 0.18232156), np.e ** (-2.5 * 0.18232156), np.e ** (-2 * 0.18232156)]])
summation_part = np.array([0.36443148 * lambdas[0] + 0.43120115 * lambdas[1] + 0.51020408 * lambdas[2],
0.5787037 * lambdas[0] + 0.63393814 * lambdas[1] + 0.69444444 * lambdas[2]])
expected = (p - const.c) * np.e ** const.α * np.e ** ((const.σ_ɛ ** 2) / 2) * summation_part
computed = period_profit(p, lambdas)
assert np.allclose(expected, computed, rtol=0.05) | 42.034483 | 114 | 0.599672 | 187 | 1,219 | 3.823529 | 0.331551 | 0.046154 | 0.044755 | 0.05035 | 0.179021 | 0.103497 | 0.103497 | 0.103497 | 0.103497 | 0.103497 | 0 | 0.144806 | 0.218212 | 1,219 | 29 | 115 | 42.034483 | 0.605456 | 0.097621 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034483 | 0.0625 | 1 | 0.125 | false | 0 | 0.125 | 0 | 0.3125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c5a2b2cc9bb64eb2dd4546a33e1ab663ecfa9c59 | 591 | py | Python | example/example_list.py | vtheno/Vtools | 5a03891d0c2365602aaae68541f31ba89a6476e5 | [
"MIT"
] | 2 | 2018-04-25T08:59:55.000Z | 2018-04-25T09:53:35.000Z | example/example_list.py | vtheno/Vtools | 5a03891d0c2365602aaae68541f31ba89a6476e5 | [
"MIT"
] | null | null | null | example/example_list.py | vtheno/Vtools | 5a03891d0c2365602aaae68541f31ba89a6476e5 | [
"MIT"
] | null | null | null | from util.List import *
print( dir() )
lst = list(range(9999))
hd,*tl = lst
print( hd )
print( tl )
del hd,tl
Lst = toList(lst)
try:
print( hd,tl )
except NameError as e:
with Lst as (hd,tl):
print( hd )
print( tl )
print( type(hd),type(tl) )
print( Lst.hd is hd )
print( Lst.tl is tl )
del hd,tl
del lst,Lst
lst = Cons(1,Cons(2,Cons(3,Cons(4,nil))))
print( lst,type(lst) )
Lst = toPylist(lst)
print( Lst,type(Lst) )
with lst as (a,(b,c)):
print (a,b,c)
print (a is lst.hd)
print (b is lst.tl.hd)
print (c is lst.tl.tl)
| 18.46875 | 41 | 0.563452 | 109 | 591 | 3.055046 | 0.284404 | 0.06006 | 0.042042 | 0.084084 | 0.051051 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018561 | 0.270728 | 591 | 31 | 42 | 19.064516 | 0.75406 | 0 | 0 | 0.214286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.035714 | 0 | 0.035714 | 0.535714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
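The util.List module exercised above is not part of this excerpt. The following is only a minimal sketch, under that caveat, of how a cons cell could support the with lst as (hd, tl) destructuring and the toList/toPylist conversions seen in the example; it is not the actual implementation.

class _Nil(object):
    """Sentinel for the empty list."""
    def __repr__(self):
        return "nil"

nil = _Nil()

class Cons(object):
    """A cons cell: hd is the first element, tl is the rest of the list (or nil)."""
    def __init__(self, hd, tl):
        self.hd = hd
        self.tl = tl
    def __iter__(self):
        # Supports tuple unpacking, including nested patterns like (a, (b, c)).
        yield self.hd
        yield self.tl
    def __enter__(self):
        # with cell as (hd, tl): destructures the cell via the iterator above.
        return self
    def __exit__(self, exc_type, exc_value, traceback):
        return False
    def __repr__(self):
        return "Cons(%r, %r)" % (self.hd, self.tl)

def toList(xs):
    """Build a Cons list from a Python sequence."""
    result = nil
    for x in reversed(list(xs)):
        result = Cons(x, result)
    return result

def toPylist(cell):
    """Flatten a Cons list back into a Python list."""
    out = []
    while cell is not nil:
        out.append(cell.hd)
        cell = cell.tl
    return out

lst = Cons(1, Cons(2, Cons(3, Cons(4, nil))))
with lst as (a, (b, c)):
    print(a, b, c)      # 1 2 Cons(3, Cons(4, nil))
print(toPylist(lst))    # [1, 2, 3, 4]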
c5b6c0de5cd1879d83d673da44359c98a06dd5a1 | 306 | py | Python | auth/api/urls.py | gabrielangelo/revelo-wallet | 3e91117b673e5aaf50773aa180af4117235965c9 | [
"BSD-3-Clause"
] | null | null | null | auth/api/urls.py | gabrielangelo/revelo-wallet | 3e91117b673e5aaf50773aa180af4117235965c9 | [
"BSD-3-Clause"
] | 8 | 2020-02-11T23:50:12.000Z | 2022-03-14T22:51:54.000Z | auth/api/urls.py | gabrielangelo/revelo-wallet | 3e91117b673e5aaf50773aa180af4117235965c9 | [
"BSD-3-Clause"
] | null | null | null | from django.conf.urls import url
from rest_framework_jwt.views import (
obtain_jwt_token,
refresh_jwt_token,
verify_jwt_token
)
urlpatterns = [
url(r'^obtain-token', obtain_jwt_token),
url(r'^token-refresh/', refresh_jwt_token),
url(r'^api-token-verify/', verify_jwt_token)
]
| 20.4 | 48 | 0.712418 | 44 | 306 | 4.636364 | 0.386364 | 0.235294 | 0.137255 | 0.117647 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.169935 | 306 | 14 | 49 | 21.857143 | 0.80315 | 0 | 0 | 0 | 0 | 0 | 0.15082 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.181818 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
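As a client-side sketch of how these JWT routes are typically exercised (the base URL and credentials below are placeholders, and the URL prefix depends on how this module is included by the project's root urls.py; by default rest_framework_jwt's obtain view accepts a username/password pair and returns a token field):

import requests

BASE = "http://localhost:8000/auth/api/"  # assumed mount point for the urlpatterns above

# Obtain a token, then verify and refresh it through the other two routes.
resp = requests.post(BASE + "obtain-token", json={"username": "alice", "password": "secret"})
token = resp.json()["token"]
requests.post(BASE + "api-token-verify/", json={"token": token})
requests.post(BASE + "token-refresh/", json={"token": token})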